This post has permanently moved to http://archiveshub.ac.uk/locah/2010/12/01/assessing-linked-data/
Please update any links and bookmarks.
We apologise for any inconvenience.
This entry was posted
on Wednesday, December 1st, 2010 at 2:31 pm and is filed under Linked Data.
Hi Jane – lots in this post!
I think the point that we are on the cutting edge with this stuff is definitely true. As I look at models for bibliographic data I find the ground shifting: the British Library, for example, have done three iterations of their RDF representations of bibliographic data in the last three months. Not only that, others have taken what the BL have published, then changed and republished it, proposing different ways of representing the same data.
Some of this will be resolved in time – to the extent that we can expect ‘community norms’ to be adopted for representations of this type of data – and I guess for archives etc. too (although we have to recognise that we may see more communities external to the MLA sector publishing similar data but adopting different practices).
In terms of the skills and knowledge required, some of the really hard stuff may go away – creating models for data from scratch, for example: we can expect that, as agreement is reached in the community, standard models will be incorporated into the tools we use. However, I do think that as ‘information professionals’ we need to put more effort into understanding how data is modelled and what that means for our ability to manipulate that data.
I think this is a shift for both information professionals and the technical staff they work with. Previously we have dealt with MARC/AACR2/ISAD(G)/EAD etc., which are perhaps somewhere between formats and models – but certainly for MARC/AACR2 I’d argue the modelling isn’t rigorous. Behind the scenes, programmers have had to deal with this, and no doubt have created their own modelling within software. With Linked Data this more rigorous modelling is more exposed and up-front – and I think engagement with it from both sides will give us benefits.
I wrote a blog post a few months ago which laid out my thoughts about the challenges of linked data http://www.meanboyfriend.com/overdue_ideas/2010/04/whats-so-hard-about-linked-data/. My conclusion at the time (and I think I’d still stand by this) was:
“I used to think the technical aspects of Linked Data were the hard bits – RDF, SPARQL, and a whole load of stuff I haven’t mentioned. While there is no doubt that these things are complicated, and complex, I now believe the really difficult bits are the modelling and reuse aspects. I also think that there is an overlap here with the areas where domain experts need to have an understanding of ‘computing’ concepts, and computing experts need to understand the domain – and this kind of crossover is always difficult.”
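To make the "modelling is the hard bit" point concrete, here is a minimal sketch in plain Python (no RDF library assumed; all the URIs and values are hypothetical, purely for illustration). It represents an archival record as RDF-style subject–predicate–object triples and matches a SPARQL-like basic graph pattern against them:

```python
# RDF-style statements: each one is a (subject, predicate, object) tuple.
# The identifiers below are hypothetical, for illustration only.
triples = [
    ("ex:fonds/gb123", "rdf:type", "ex:ArchivalResource"),
    ("ex:fonds/gb123", "dc:title", "Papers of Jane Example"),
    ("ex:fonds/gb123", "dc:creator", "ex:person/jane-example"),
    ("ex:person/jane-example", "foaf:name", "Jane Example"),
]

def match(pattern, data):
    """Match one SPARQL-like triple pattern; '?x' terms are variables.
    Returns a list of variable bindings, one dict per matching triple."""
    results = []
    for triple in data:
        binding = {}
        for want, got in zip(pattern, triple):
            if want.startswith("?"):
                binding[want] = got
            elif want != got:
                break  # a fixed term failed to match this triple
        else:
            results.append(binding)
    return results

# "Which resources have a title, and what is it?"
for b in match(("?s", "dc:title", "?title"), triples):
    print(b["?s"], "->", b["?title"])
```

The point of the sketch is that the modelling decisions (which properties, which identifiers, what counts as a resource) sit right on the surface of the data, before any querying machinery comes into play.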
Sounds to me like we’re definitely in accord on these issues. Yes, I think the modelling is more exposed and up-front, and I do think that this can be seen as an opportunity for us (info profs), even though it might be a little off-putting to begin with. I do wonder if ISAD(G) – and maybe other domain standards – are not really up to the task and need updating themselves in order to reflect the Web-driven information environment that we are now living in. I know that there are plans afoot to do this, and I hope that developments such as Linked Data are taken into account.
You say “I also think that there is an overlap here with the areas where domain experts need to have an understanding of ‘computing’ concepts, and computing experts need to understand the domain – and this kind of crossover is always difficult.”
Yes, absolutely. I spoke about this at a Society of Archivists’ conference a few years ago and wrote a chapter in ‘What Are Archives’ that expanded on this theme. Part of my aim with Locah is to try to clarify what I think domain experts need to know – where the crossover lies – and it would be interesting to talk to you about this.
“it is reasonable to acknowledge that Linked Data does involve programming skills, and therefore it is not so dissimilar from structuring and outputting your data through a traditional relational database, for example, where you would expect that specialist skills are needed.”
Data modelling is not programming, though there may be some overlap in skills.
No, data modelling is not programming. I was talking about the process of creating Linked Data – so this would be getting into the actual output of RDF once you have your data model.
The relationship between data modelling and programming is a little more complex, since there is also the concept of declarative programming in computer science: there you just model the data and its relationships, and that declarative statement also determines how conclusions can be drawn automatically. Pretty similar to RDF.
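To make that comparison concrete, here is a minimal sketch (plain Python, with a hypothetical SKOS-style vocabulary) of the declarative idea: you state facts and a single rule, and new conclusions are derived automatically, much as an RDFS/SKOS reasoner infers triples you never wrote by hand:

```python
# Hypothetical facts: skos:broader links between subject headings.
facts = {
    ("ex:letters", "skos:broader", "ex:correspondence"),
    ("ex:correspondence", "skos:broader", "ex:records"),
}

def close_transitive(triples, predicate):
    """Forward-chain one declarative rule for the given predicate:
    (a p b) and (b p c)  =>  (a p c)."""
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        for (a, p1, b) in list(inferred):
            for (b2, p2, c) in list(inferred):
                if p1 == p2 == predicate and b == b2:
                    if (a, predicate, c) not in inferred:
                        inferred.add((a, predicate, c))
                        changed = True
    return inferred

inferred = close_transitive(facts, "skos:broader")
# The conclusion below was derived, not stated:
print(("ex:letters", "skos:broader", "ex:records") in inferred)  # True
```

The modeller only declares the two `broader` relationships; the machinery does the rest – which is exactly the division of labour the comment describes.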
However, working with data will require more elaborate programming skills in the future, especially when you want to model, extract or visualize the information within it.
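Even a very simple "extract" step of that kind is a programming task. A small sketch (again with hypothetical triples): tally which properties the data actually uses – the sort of summary you would then feed into a chart or a data-quality report:

```python
from collections import Counter

# Hypothetical triples describing two archival records.
triples = [
    ("ex:rec/1", "dc:title", "Minute book"),
    ("ex:rec/1", "dc:creator", "ex:person/a"),
    ("ex:rec/2", "dc:title", "Letter book"),
    ("ex:rec/2", "dc:date", "1887"),
]

# Extract: count how often each property appears across the data set.
usage = Counter(predicate for _, predicate, _ in triples)
print(usage.most_common())
```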