Comments by Section

There are 110 comments in this document
"Technological changes have taken place so quickly that many in library positions today began their careers long before the World Wide Web was a reality, and these workers may not fully understand the import of these changes." => what an extremely bold statement.
"cooperative agreements " between whom?
Please elaborate on the "particularly libraries" statement. Also "have greatly hindered libraries ability to create competitive information services." => add a reference
The heading seems to assume that linked data necessarily means open data. This isn't the case: you can publish data as RDF without an open license, or without any license at all (as several organisations do), and you can even do linked data in an intranet. Also, you can publish linkable data, let it be linked to, and then establish a paywall around the data. In general, the report lacks a clarification regarding the terms "Linked Data" vs. "Open Data". I suggest adding a paragraph or section to the report which clarifies the two terms: "open data" is about open access, open standards and open licenses in the first place, while "linked data" is about a specific set of standards and best practices for publishing data on the web recommended by the W3C. An important aspect of open data is legal compatibility of data, while linked data deals with technical compatibility of data.
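For illustration, a minimal sketch of that distinction, assuming the rdflib Python library and placeholder URIs: the same triples can be published with an explicit open license statement, a restrictive one, or none at all, so being RDF does not by itself make the data open.

    # Minimal sketch (rdflib; all URIs are placeholders): RDF publication and
    # licensing are orthogonal concerns.
    from rdflib import Graph, URIRef, Literal
    from rdflib.namespace import DCTERMS

    g = Graph()
    record = URIRef("http://example.org/record/1")
    g.add((record, DCTERMS.title, Literal("An example record")))

    # The triple above is linked (or at least linkable) data once published and
    # dereferenceable on the web, but it says nothing about reuse rights.
    # Open data additionally requires an explicit open license, e.g.:
    dataset = URIRef("http://example.org/dataset/records")
    g.add((dataset, DCTERMS.license,
           URIRef("http://creativecommons.org/publicdomain/zero/1.0/")))

    print(g.serialize(format="turtle"))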
I think this paragraph has to be fundamentally changed or even omitted. It implicitly argues that individual records are copyrighted. Much speaks for the view that individual records aren't copyrighted at all and that, thus, nobody owns any rights in them. At least in Europe you only have the related database right on collections of records. I believe the legal status of records is quite clear (not copyrighted); at most this is a grey area. The report shouldn't speak in favour of the view that individual records are copyrightable.
"the need to think of broader bibliographic data exchange (e.g. with publishers) is new and not universally accepted" I suggest adding scholars to the brackets as an example of communities with which data exchange and interlinking would be very fruitful for academic libraries.
Jennifer, can you give examples? I'm not sure what you're referring to.
For non-profits and other service organizations, ROI includes intangible benefits like "making society better." The non-profit management literature addresses this. So we should assume ROI to include those "less tangibles."
Jennifer, I would gladly add XC but (and I just checked) there is no documentation that demonstrates that it produces LD. The only documentation that I can find talks about MARC and FRBR, but there is nothing on the record format or serialization. That information has to be public and open before a service can be included in the report. Your ontology needs to be open access on the Web in RDF format. If it is, please give a pointer.
“While the Web values global interchange between all parties, library cataloguing standards in the past have aimed to address only the exchange of data within the library community where the need to think of broader bibliographic data exchange (e.g. with publishers) is new and not universally accepted.” This is not a new issue. Libraries and publishers have different business models, which are reflected in their development of different standards for exchange. Publishers think of publications as products; libraries are concerned with inventory of their collections and the content of publications. The granularity of open linked data may provide an opportunity for a fresh look at what could be shared for mutual benefit. However publishers, as well as librarians, may regard metadata as a commodity to be restricted.
This whole section has a rather negative tone. Libraries are aware of the need for change. Linked data is one of the directions that change might take, if the benefits can be demonstrated, but as the section makes clear the challenges are considerable.
Discussed where below? This paragraph is very intriguing and deserves more attention. It seems related to the migration strategies in the Recommendations. Coming up with a plan for making these two paradigms coexist will be extremely important for the success of LLD.
Some work has been done to try to change this situation, and some progress has been made (e.g. the MARC 21 subfield $0). It seems misleading to me not to include some mention of efforts to get around these limitations.
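For readers outside the MARC world, a tiny illustration of what that subfield does (Python; the field is sketched as a plain dict, and the identifier shown is invented): subfield $0 lets a heading carry the identifier or URI of the authority record it was drawn from.

    # Illustrative only: a MARC 21 subject field sketched as a Python dict,
    # with subfield $0 carrying a (placeholder) authority record URI.
    field_650 = {
        "tag": "650",
        "indicators": (" ", "0"),
        "subfields": [
            ("a", "Whaling"),
            ("0", "http://id.loc.gov/authorities/subjects/sh00000000"),  # placeholder id
        ],
    }
    print("650  0", " ".join(f"${code} {value}" for code, value in field_650["subfields"]))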
It seems to me that more detail is needed here about the issues with data sharing and the history of cooperative cataloging using centralized databases. This is so brief that it seems to be skirting around the issue. Perhaps just an additional sentence or two.
On the one hand this can be seen as a barrier for libraries to participate in linked data. On the other hand it represents an area where linked data could be a huge improvement for libraries in terms of managing such changes using a different infrastructure (registries, etc.)
But there are ways to address this issue: by providing tools that enable a smooth migration process for libraries to begin using linked data while continuing to use these niche systems. What is needed are cost-effective strategies for moving libraries forward.
I would like to see an acknowledgment of other measures of success for linked data other than those that can be calculated, in particular the ability of libraries to meet the needs of their users. The success of this can best be studied using other methods, such as participatory design, as described in the recent book, "Scholarly Practice, Participatory Design and the eXtensible Catalog" http://www.alastore.ala.org/detail.aspx?ID=3408.
This paragraph could use some clarification. Who are the "few" in the last sentence? People within the library community? I assume the bibliographic data that needs "smarting up" is meant to be data from outside the library community that would be enriched with data from the library community? This is not clear.
The statement that there are "no tools that specifically address library data" is a bit strong. I suggest at least a mention of emerging tools, such as the eXtensible Catalog, which will help to make library data "linked data ready", if not (currently) to create true linked data yet.
If the statement about the library community only engaging with established technologies is allowed to stand, then there needs to be some explanation of WHY that is the case.
This section deals with libraries being understaffed technologically, so the section on library leaders should probably address the problems that library leaders have in employing technology staff. The points that are made here about libraries taking leadership in LLD should perhaps go in a separate section. I suggest that this also include discussion of how some library organizations are now exploring what actions to take regarding LLD (ALA and the Program for Cooperative Cataloging are two examples) and that what is needed is advice and leadership from outside the library community, to enable library leaders to know what specific steps need to be taken and to make informed decisions. That process is already beginning.
More should be made here (or somewhere in the report) about how the strong cooperative culture that is now present in the library community can be an asset for implementing linked data: use of common vocabularies and standards, consistency of metadata, structures in place to mobilize community action toward a shared goal...
The sentences on library workers do not follow logically one from another. I would like to see this paragraph suggest possible ways to change the way that library workers are educated, or provide continuing education in linked data. Much has been happening in that arena over the past year or so.
Are there any signs that this is beginning to change, where libraries are beginning to interact with these other communities? Cite some examples here? How about the mere existence of this incubator group?
One of the conclusions that I would make from this is that libraries can derive great benefit from linked data to try to address this situation of decreased budgets and inability to extend their missions to include digital information. Libraries have a great NEED for linked data, and this paragraph explains why.
My general impression of this section is that it fairly accurately describes the difficulties that libraries may face in adopting linked data (although some statements may be a bit too sweeping at times - I agree with some of the other comments below). However, my concern is that these challenges and barriers are presented without any attempt to suggest possible solutions to them. Since there are indeed many challenges to the library community, the recommendations presented later in the report do not seem to be adequately justified by the content of the report. In other words, the way the report reads right now, the challenges may outweigh the benefits. I do not believe that is what the report intends to convey, and that is not what it SHOULD convey. I recommend that some of the sections below include at least a brief discussion of possible steps to mitigate these challenges and barriers; otherwise the whole situation just begins to seem pretty hopeless. Making the benefits section at the beginning more compelling will also help considerably. I will add other suggestions below where additions could be made.
An important point. We have discovered this first-hand working on the eXtensible Catalog - programmer salaries in libraries cannot compete within the marketplace. This deserves mention in the report.
This is costly when metadata needs to be changed in many local records across thousands of libraries; if metadata were in a centralized database and linked to by library records, however, the vocabulary changes would only need to occur in one place, thus saving costs in the long run.
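As a concrete (and hedged) sketch of that indirection, assuming the rdflib Python library and placeholder URIs: records reference a concept by URI, the label is asserted once at the vocabulary, and a terminology change touches the vocabulary, not the records.

    # Minimal sketch (rdflib; URIs are placeholders; in practice the concept
    # would be an id.loc.gov or similar vocabulary URI).
    from rdflib import Graph, URIRef, Literal
    from rdflib.namespace import DCTERMS, SKOS

    concept = URIRef("http://example.org/vocab/sh0000001")
    g = Graph()

    # Any number of records link to the concept by URI only:
    g.add((URIRef("http://example.org/record/1"), DCTERMS.subject, concept))
    g.add((URIRef("http://example.org/record/2"), DCTERMS.subject, concept))

    # The preferred label lives in one place, at the vocabulary:
    g.add((concept, SKOS.prefLabel, Literal("Old heading", lang="en")))

    # A heading change is a single update to the vocabulary; no record changes:
    g.remove((concept, SKOS.prefLabel, None))
    g.add((concept, SKOS.prefLabel, Literal("Revised heading", lang="en")))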
Perhaps if this was broadened to talk about the fact that we share metadata within our supply chain, for lack of a better phrase (publishers, indexing and abstracting services, etc), but not frequently with organizations outside of the traditional information world.
I don't think this heading is entirely clear. Meaning that standards should also be considered as something that should last a long time? That standards take a long time to be developed? That we need to start thinking about how to preserve digital objects and web-based objects with new standards?
I think linked data solves a fairly large need which tends to be overlooked: Making library metadata interoperable with the rest of the web, and with other networked information. I don't necessarily think this is a situation where an application is going to pop up that makes people see its usefulness, but one where these ideas need to be taken into consideration when we think about how to reconstruct bibliographic metadata. How do we implement these ideas as we re-work (or get rid of) MARC?
Difficult, but perhaps not impossible. A test implementation could be used as a basis for feedback and user testing. Measurement should be made of experienced researchers as well as undergraduate students, in comparison with the same site unmodified. If indeed we can measure improved research capabilities, speed, and discovery, we will have built a case for expanding this effort.
There appears to be a cultural prejudice against software developers in at least some segments of the library culture. Such work is seen as a task for underlings, and hence the pay scale for programmers in libraries cannot compete with the commercial market. Those in higher ranks in the library are often expected to set aside programming work and not "get their hands dirty." Programmers may not get respect from librarians, particularly if the programmers do not also have librarian degrees. Continued cutbacks to library funding also reduce the ability to hire decent programmers. These issues combine to keep many libraries in the backwaters of technological development.
That libraries "do not adapt well to technological change" is debatable, and largely orthogonal in any event to whether libraries are using linked data. The crux of the problem to me is the old "chicken and egg" problem. Libraries won't use linked data until/unless it solves a need. Right now it doesn't, or at least we lack the tools to make linked data effective in a library environment. Frankly, I don't see any killer apps out there in any industry, which inhibits adoption in any industry, and even more so in libraries which are organizations of limited resources.
I agree with Laura that, if we investigate fully, we may find that we have more in common than it appears on the surface. An advantage to that investigation would be that it would require us to clarify our data goals in new terms; we might learn something from the exercise.
It may be useful, however, to say that there is some valuable information to be gleaned from these private areas, like overall circulation statistics for individual titles. Scrubbing the data of any personally identifiable information adds cost to these projects. Privacy is essential and should not be compromised, but it has an impact on projects.
I'm not sure this is true as stated. I think that the issue is that local development efforts mainly spread through vendor adoption, and that means that a local development must have wide-spread utility to be adopted. The issue isn't so much bottom-up but developments that only are of interest to a niche market that vendors cannot economically support.
I think it would be worthwhile to talk separately about the issue of having iterative standards with proof-of-concept development, and the issue of the time lag imposed by the meeting cycles. I also think somewhere we should mention the variety of standards fora -- IFLA, national fora, NISO, and the recent awareness of non-library fora, like W3C.
The paragraph doesn't really match the heading. The paragraph talks about the lack of library-related LD tools. Maybe that is an issue on its own?
This has been criticized as being not only negative but not really true. Is there another way to say this? I think it is mainly about libraries having trouble being on the leading edge and generally having a hard time changing.
I think the web community does have a concept which is equivalent (or at least quasi-equivalent) to libraries' headings or authority control: the concept of "unique identifiers." It is true that the communities don't share a common language or vocabulary, but I don't think it's true that they don't have concepts in common.
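To make the parallel concrete, a small sketch (rdflib; the identifiers below are invented placeholders): each community mints a URI for the entity behind a heading, and links between those URIs play the role that cross-references and authority files play for libraries.

    # Small sketch (rdflib; identifiers are placeholders): an authority heading
    # seen as a labelled unique identifier, linked to another community's
    # identifier for the same person.
    from rdflib import Graph, URIRef, Literal
    from rdflib.namespace import OWL, RDFS

    lcnaf_uri = URIRef("http://id.loc.gov/authorities/names/n00000000")  # placeholder
    viaf_uri = URIRef("http://viaf.org/viaf/00000000")                   # placeholder

    g = Graph()
    g.add((lcnaf_uri, RDFS.label, Literal("Melville, Herman, 1819-1891")))
    g.add((lcnaf_uri, OWL.sameAs, viaf_uri))  # "same entity, different identifier"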
"the library metadata record, being designed primarily as a communication format, requires a full record replace for updates to any of its fields." Not true. It's possible to overlay specific fields while doing global updates in an ILS. It is true that it is costly, however. The process of propagating the need to do updates is what's expensive. LC changes a subject term or the NAF changes an authority heading, they have to spread the news that the change is made, and then local databases have to do the global updates. It suffers a time delay in addition to the monetary cost. This process can be automated and/or out-sourced but it still has its price.
Catherine is correct in that it's true in other disciplines. I question, however, to what extent this is true of library systems. Database work is database work no matter how the element sets are structured. I think most commercial systems probably use ER modeling/diagramming when creating their systems; those systems are often built on a commercial DB (e.g. an Innovative ILS implementation can be Oracle-based) or an open-source DB (MySQL); and who really knows what types of programming tools and paradigms are being used behind the proprietary wall. I think vendors like VTLS and Ex Libris are probably using agile development techniques.
I actually think that the picture is brighter than this. Although libraries haven't been technology leaders, they have embraced new technologies to the benefit of their communities, providing free Internet access, lending ebooks (even before they were popular). This is separate from the struggle to manage the flood of digital content. There is another issue, which is that managing digital content might be better done on a scale that is larger than any one library, while managing physical items is suitable to local institutions. There are tens or hundreds of thousands of libraries, many very small. Digital materials need to be managed globally, not locally, and there is no global library organization to do this.
This is true, but one could say this of most disciplines; for example, scientific instruments producing data in a certain format need specialist, niche systems solutions. What is the special issue about library systems in particular?
While I agree that cataloguing standards were designed to exchange data between libraries, I'm not sure that I would agree that bib. exchange with publishers is new and not accepted. While libraries may not use individual publishers, it is common practice to get bib records from your book supplier. This also doesn't address journals: our institutional repository uses Cross-Ref to look up the DOIs of journal articles to enhance the information recorded about the publication in the IR.
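For context, that kind of enrichment is a small lookup against the CrossRef REST API; a minimal sketch in Python (the endpoint pattern is real, the DOI shown is a placeholder, and the exact fields used would depend on the IR):

    # Minimal sketch (Python 'requests'; DOI is a placeholder) of a CrossRef
    # lookup of the kind described above.
    import requests

    def crossref_metadata(doi):
        """Fetch work metadata for a DOI from the CrossRef REST API."""
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        resp.raise_for_status()
        return resp.json()["message"]

    # Usage (placeholder DOI):
    # meta = crossref_metadata("10.1000/xyz123")
    # print(meta.get("title"), meta.get("container-title"))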
While I appreciate the fact that this section is in a larger one about Barriers to adoption, I do feel that the heading is overly critical. I think it would be fairer to say that Libraries are no longer early adopters of new technology for the parts of their service which they consider to be business critical – partially because of the issues of retro-conversion of the collections that they already hold and partially because they are service providers who need to ensure that the service continues to run.