JISC Beginner's Guide to Digital Preservation

…creating a pragmatic guide to digital preservation for those working on JISC projects

Archive for the 'Events' Category

Events attended by the project team

Digital Preservation Benefits Toolset Workshop

Posted by Marieke Guy on 10th June 2011

UKOLN have announced that registration is now open for a workshop to disseminate the Digital Preservation Benefits Toolset, along with accompanying materials such as user guides and factsheets, to the research community.

Workshop Details

Tuesday, 12 July 2011: 12.30 -16.00
London South Bank University
Main Conference Room
The Keyworth Centre
Keyworth Street
London
SE1 6NG

Workshop registration is free, but please note that places are limited and early registration is advised. At least 24 hours' notice of cancellation is required; otherwise a fee of £50 will be charged to recover costs.

The Digital Preservation Benefit Analysis Tools Project is funded by the Joint Information Systems Committee (JISC) and runs from 1 February to 31 July 2011.

The project has tested and reviewed the combined use of the Keeping Research Data Safe (KRDS) Benefits Framework and the Value Chain and Impact Analysis tool, which were first applied in the I2S2 Project for assessing the benefits and impact of digital preservation of research data. We have extended their utility to, and adoption within, the JISC community by providing user review and guidance for the tools and by creating an integrated toolset. The project consortium consists of a mix of user institutions, projects, and disciplinary data services committed to the testing and exploitation of these tools and the lead partners in their original creation.

A project Web site and the project plan are available and further outputs will be available from the Web site during the summer. The project partners are UKOLN and the Digital Curation Centre at the University of Bath, Centre for Health Informatics and Multi-professional Education (CHIME) at University College London, UK Data Archive (University of Essex), Archaeology Data Service (University of York), OCLC Research, and Charles Beagrie Limited.

Details concerning the Workshop programme, venue and registration are all available from the UKOLN Web site.

Posted in Events, Workshops | Comments Off

The Future of the Past Report

Posted by Marieke Guy on 28th May 2011

On 4-5 May 2011, the Cultural Heritage and Technology Enhanced Learning unit hosted a workshop for invited experts in the field of digital preservation. It was attended by around 60 representatives from universities and research centres, memory institutions, industry and other organisations such as foundations dedicated to digital preservation.

The event started with a stock-take of achievements and ongoing activities funded under the ICT programme, presenting the portfolio of digital preservation projects and the research roadmaps proposed by the community so far. This presentation was based on a report commissioned for the event, which is available to download.

The main part of the workshop consisted of group discussions providing input to the digital preservation research agenda within the next EU framework programme for research and innovation (Common Strategic Framework, 2013-2020). A number of reports from the workshop are now available.

Posted in Events, Reports | 1 Comment »

Free AQuA Events – QA for Digital Preservation

Posted by Marieke Guy on 28th March 2011

The JISC Automating Quality Assurance Project (AQuA) is running a series of free events in April and June for coders, technical experts, collection curators and digital preservation practitioners.

The events will be helping attendees explore a number of questions including:

  • Do you have large amounts of digital content to look after?
  • How well do you know your digital content?
  • Is your file what it says it is?
  • Do your users do your QA for you?
  • Are you intimidated by digital preservation tools?

The AQuA events will be held 11-13 April 2011 and 13-15 June 2011 and will bring together digital preservation practitioners, collection curators and technical experts to automate quality assurance of our digital collections.

Preservation or quality issues can occur in our digital content from many sources:

  • When we create the content via digitisation (e.g. missing pages, duplicate pages, poor focus/contrast)
  • When the collection is stored (e.g. bit rot)
  • When the collection is processed or moved from store to store (e.g. when processes run out of memory or disk space)
  • When technology changes (e.g. when our standards and file formats become obsolete)

Manually checking material for these kinds of problems is laborious, challenging and, most critically, expensive. Checking samples of material reduces the cost, but can let through problematic quality issues. Automated tools that can check every digital item in a precise way should allow us to reduce our costs and increase the overall quality of our digital collections.
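To make this concrete, below is a minimal sketch (my illustration, not an AQuA output) of the simplest kind of automated check: verifying every file in a collection against a manifest of known-good SHA-256 checksums, so that missing files and bit rot are flagged without manual inspection. The manifest name, its "<digest>  <relative path>" layout and the collection directory are assumptions made for the example.

```python
# Illustrative sketch only: verify a collection against a checksum manifest.
# Assumes manifest lines of the form "<sha256-hex>  <relative path>", as in
# the common sha256sum convention; both paths below are invented examples.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in 1 MB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_collection(collection_dir: Path, manifest: Path) -> list[str]:
    """Return a list of problems: files that are missing or have changed."""
    problems = []
    for line in manifest.read_text().splitlines():
        if not line.strip():
            continue
        expected, _, rel_path = line.partition("  ")
        target = collection_dir / rel_path
        if not target.exists():
            problems.append(f"MISSING   {rel_path}")
        elif sha256_of(target) != expected:
            problems.append(f"CORRUPTED {rel_path}")
    return problems

if __name__ == "__main__":
    for problem in verify_collection(Path("collection"), Path("manifest.txt")):
        print(problem)
```

Run regularly, a check like this turns "how well do you know your digital content?" into a per-file yes or no.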

The AQuA events will provide the opportunity to get hands on experience of developing and applying digital preservation techniques and technology to digital collections.

  • University of Leeds, 11th – 13th April 2011: Join the team for the first Mashup retreat at the beautiful Weetwood Hall Conference Centre and Hotel
  • British Library, London, 13th – 15th June 2011: Get involved in the second AQuA Mashup in the heart of London at the UK's National Library

Inspiring locations, cross-discipline collaboration, challenges and prizes, and evening social events. Plus it's FREE! Accommodation and refreshments are paid for.

More info at http://wiki.opf-labs.org/display/AQuA/Home

Register at http://aquamashup.eventbrite.com

AQuA is a JISC funded collaborative project between the University of Leeds, the University of York, the British Library and Open Planets Foundation.

Questions – by email to digital@leeds.ac.uk

Posted in Events | 1 Comment »

Getting Started in Digital Preservation

Posted by Marieke Guy on 21st February 2011

The Digital Preservation Coalition are running a Getting Started in Digital Preservation workshop on Wednesday 21st March 2011 at Glamorgan Archives, Cardiff. Places are limited and cost £25.00, but are free to DPC members.

The workshop follows on from the Decoding the Digital conference and is being organised in conjunction with the British Library Preservation Advisory Centre, whose Approaches to Digitisation workshop I presented at a few weeks back.

The event provides an introduction to digital preservation, builds an understanding of the risks to digital materials, includes practical sessions to help you apply digital preservation planning and tools, and features speakers sharing their own experience of putting digital preservation into practice.

The sessions are aimed at librarians, archivists and collection managers in all sectors and in all sizes of institution who want to find out more about digital preservation and the implications for their organisation of having to retain, manage and provide ongoing access to large quantities of digital material.

Have a look at the Digital Preservation Coalition Web site for more details.

Posted in Events | Comments Off

The Preservation and Curation of Software

Posted by Marieke Guy on 14th February 2011

A while back I mentioned that the Software Preservation Study team were running a free workshop for digital curators and repository managers to understand and discuss the particular challenges of software preservation.

Although I was unable to attend the day, two of my colleagues, Alex Ball and Manjula Patel, were there representing UKOLN. Manjula has written a blog post with her thoughts on the day. Alex has kindly allowed me to publish his trip report below:

****

On Monday 7th February I attended a workshop on the preservation and curation of software, put on by Curtis+Cartwright Consulting and the Software Sustainability Institute, the team behind the JISC study ‘Clarifying the Purpose and Benefits of Preserving Software’. It was only a brief event, but it still managed to cover ample ground.

The day started with three mini-keynotes. Kevin Ashley (Edinburgh/DCC) set the scene, illustrating why we should care about software preservation with anecdotes and examples from computing history. Neil Chue Hong (Edinburgh/SSI) reviewed the seven possible approaches to software preservation, and discussed the occasions when one might have to choose one, and what factors influence this decision. Finally, Brian Matthews (STFC) talked about some previous work done on the subject, namely the SigSoft (Significant Properties of Software) and SoftPres (Tools and guidelines for the preservation of software as research outputs) projects.

Following this, we were split into groups of about 5-8 people for two group exercises.

In the first exercise, we each considered who was responsible for software preservation in our organization, who else should be involved, and what practical steps could be taken to improve the status quo. Now, normally when one is asked ‘who is responsible’ in a workshop like this, the correct answer is usually ‘I am’, accompanied by inward groaning on the part of the delegates and waves of smugness coming off the leader who has the thing in question in their job title. There was none of that here, thankfully. There was thoughtful consideration of the parts played by developers, procurers, users, senior managers, funding bodies, IT departments and The Community (i.e. the other users and potential developers). One interesting suggestion was that the DCC should set up a preservation certification scheme, so that procurers and users could know which software they could trust to be preserved (or, at least, preservable).

The second group exercise was to go through a preservation planning exercise for a particular piece of software. The process was, in summary: to establish why the software needed to be preserved, for whose benefit, when and where; to work out the requirements for the preserved software (e.g. is the process important or only the outputs?); to determine the most suitable preservation approaches for the short term and the long term; to enumerate the skills and resources needed, and where the money will come from; and to work out what needs to be done to get the ball rolling. Our group deliberated over a fictional piece of software loosely based on the DANS Typological Database System, but one of the other groups considered the collection of console games at the Science Museum and did the exercise for real.

The workshop ended with Neil Grindley going through JISC activity in the area of software, mentioning among other things the Preservation of Complex Objects Symposia, DevCSI, the OpenPlanets Foundation and various rapid innovation projects. Areas of particular interest will be preserving software with a web services/cloud computing component, economic sustainability, and how funders can help ensure the software they fund can be preserved.

The workshop was followed by a surgery session where people could seek expert advice from the project team, but unfortunately I had to dash off to catch my train.

All in all, I found the workshop to be a particularly enjoyable example of the learning-by-doing genre of event. The pacing was good and the main points were repeated enough to be memorable but not enough to be annoying. Looking back I see I’ve learnt more than I thought I had at the time.

Project blog: http://softwarepreservation.jiscinvolve.org/wp

SSI: http://www.software.ac.uk/

Posted in Events | 1 Comment »

Approaches to Digitisation

Posted by Marieke Guy on 11th February 2011

Digital preservation is about preserving digital objects, and those objects have to be created somehow. Earlier this week (Wednesday 9th February) I was invited to talk at an Approaches to Digitisation course facilitated by Research Libraries UK and the British Library. It was held at the British Library Centre for Conservation, a swanky new building in the British Library grounds. It was the first time the course had run, though they are planning another in the autumn. The course was aimed at those from cultural heritage institutions who are embarking on digitisation projects, and sought to provide an overview of how to plan for and undertake digitisation of library and archive material.

British Library by Phil Of Photos

I was really pleased that the event spent a considerable amount of time on the broader issues. Digitisation itself, although not necessarily easy, is just one piece of the jigsaw, and there has been a tendency in the past for institutions to carry out mass digitisation without considering the bigger picture. During the day several speakers advocated a lifecycle approach, and planning, selection and sustainability were highlighted as key areas for consideration. If digitisation managers take this on board, the end result will hopefully be well-rounded, well-maintained, well-used digitised collections with a preservation strategy in place.

The course followed a fairly traditional format with presentations, networking time and a printed delegate pack. Unfortunately there was no wireless, but this left us concentrating completely on the presenters and what they had to say, and it was all very useful stuff.

Benefits of Digitising Material – Richard Davies, British Library

Richard Davies started the day with an introduction to the benefits of digitisation (a good article entitled The Case for Digitisation is available on p.16 of the most recent JISC Inform). Rather than just giving a straightforward overview of the different benefits, he used a number of case studies to illustrate the added value that digitisation can provide, for example by opening up access and allowing digital scholarship and collaboration.

The British Library has now digitised approximately 4 million newspapers. Opening up access has meant that people can use the papers in completely different ways, for example through full-text searching and different views on resources. Projects like the British Library's Beowulf project allow extensive cross-searching, and the Codex Sinaiticus project has taken a Bible physically held in four locations and allowed it to be accessed as one for the first time. The Google Art Project allows users to navigate galleries in a similar way to Google Street View, and the high resolution of the digital objects is impressive.

Google Art Project

Digitisation also presents opportunities for taking the next step in digital scholarship. In the past, carrying out the level of research that is now possible with digital resources would have taken a very long time indeed. The Old Bailey project has digitised 198,000 records, and users can now carry out extensive analysis of the content in a matter of minutes.

Davies also illustrated how digitisation can allow you to expand your collection by bringing in resources from the general public and by crowdsourcing. The Trove project has crowdsourced the correction of its optical character recognition (OCR) results, offering prizes for the people who corrected the most text, and has made many details of how it went about the project available online. The Transcribe Bentham project has likewise documented how it carried out the work on its blog.

Davies suggested that digitisation managers need to think about the model they will be using. Will content be freely available, or will there be a business model behind it? One option is to allow users to dig to a certain level and then ask them to pay if they wish to access any further resources.

Davies concluded that to have a successful digitisation project you need to spend time on the other stuff – metadata, OCRing the text, making resources available in innovative ways. Digitisation is only one element of a digitisation project.

Planning for Digitisation – Richard Davies, British Library

Richard Davies continued presenting, this time looking more at his day job – planning for digitisation projects. He offered up a list of areas for consideration (though stated that this was a far from exhaustive list).

He suggested that a digitisation strategy helps you prioritise and can be a way of narrowing down the field. Such a strategy should fit within a broader context, in the British Library it is part of their 2020 vision. Policy and strategy should consider questions like: Who are we? Where could we go? Where should we go? How do we get there? It should also bear in mind funding and staffing levels.

Davies also spent a lot of time talking about the operational and strategic elements of embarking on a project. It is very much a case of preparation being key: he suggested digitisation managers do as much preparation up front as possible without holding up the project. For example, when considering selection, ask: what's unique? What's needed? What's possible? (bearing in mind cost, copyright and conservation). He also emphasised the importance of lessons-learnt reports.

Davies concluded by talking about some current challenges to digitisation programmes. The primary one is economic, as funding calls become rarer and rarer. It can be useful to have a funding bid expert on board. He also explained that you can make the most of the bidding process by using it as an opportunity to answer difficult questions about what you want to do. There is currently a lot of competition for funding: the last JISC call (the Rapid Digitisation call) offered £400,000 of funding; 45 bids were received and 7 projects were funded.

Davies also highlighted that digital preservation and storage are increasingly becoming problems. Sustainability need not mean forever, but you should at least have a 3–5 year plan in place.

I was also pleased to hear Davies highlight a project I am now working on: the IMPACT project, funded by the European Commission. It aims to significantly improve access to historical text and to take away the barriers that stand in the way of the mass digitisation of European cultural heritage.

Use of Digital Materials – Aquiles Alencar-Brayner, British Library

After a coffee break Aquiles Alencar-Brayner considered how users are currently using digital materials. He mentioned research by OCLC showing that students consistently use search engines to find resources rather than starting at library Web sites, and that they are using the library less, even though books are still the brand associated with libraries.

Alencar-Brayner ran through the 10 ‘ins’ that users want: integrity, integration, interoperability, instant access, interaction, information, ingest of content, interpretation, innovation and indefinite access.

He showed us some examples of how the British Library is facilitating access to digital materials, for example through the Turning the Pages project, which allows you to actually turn the pages, magnify the text, see the text within context and listen to audio.

Codex Sinaiticus

Where to Begin? Selecting Resources for Digitisation – Maureen Pennock, British Library

Maureen Pennock introduced us to selection. She explained that selection usually builds on previously selected resources, and that the reason most commonly given for selecting material is to improve access. Sometimes, however, the reason is conservation of the original, and occasionally it is to enable non-standard uses of a resource.

Pennock explained that selection is often based on the appraisal made for archival purposes, known as assessment. Areas for consideration include suitability and desirability, and whether items are what users need.

Selection is an iterative process and is revisited several times after you've defined your final goals and objectives. It is important to identify internal and external stakeholders, such as curators and collection managers, and include them in the process.

Once you've set a scope you will need to pre-select items, but there is no one-size-fits-all approach. Practical and strategic issues come into play, and items will need to be assessed and prioritised.

Pennock explained that assessing suitability will involve considering areas like intellectual justification, demand, relevance, links to organisational digitisation policy, sensitivity and potential for adding value (e.g. commercial exploitation of resources).

Alongside suitability there will need to be item assessment, looking at the quality of the original, the feasibility of image capture, the integrity and condition of resources, complex layouts for different material types, historical and unusual fonts, and the size of artefacts. Legal issues such as copyright, data protection, licences and IPR also have a role to play.

Pennock concluded that not all issues are relevant to everyone, and some will have more weighting than others. Practitioners will need to decide on their level of assessment and define their shortlist. It is important that you can justify your selection process in case issues arise later down the line.

Metadata Creation – Chris Clark, British Library

To whet our appetites for lunch, Chris Clark took us on a whirlwind tour of digitisation metadata and its value. He explained that metadata adds value but is unfortunately often left until the end, with tragic consequences. He also warned us that there is still no commonly agreed framework and that this is still an immature area. Quite often metadata's real value is realised in situations where it isn't expected; Clark recommended Here Comes Everybody by Clay Shirky as a text that illustrates this. He also suggested delegates look at One to Many; Many to One: The Resource Discovery Taskforce Vision.

Metadata is a big topic and Clark was only able to touch the surface. He advised us to think of metadata as a lubricant or adhesive that holds together users, digital objects, systems and services. We could also see metadata as a savings account: the more you put in, the more you get out.

Clark then offered us a quick introduction to XML and some background to the types of metadata most relevant to digitisation (descriptive, administrative and structural). He explained that Roy Tennant of OCLC had characterised three essential metadata requirements: liquidity (written once, used many times, exposed), granularity, and extensibility (accommodating all subjects).
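By way of illustration of the descriptive category (my example: the talk did not prescribe a schema, and Dublin Core is used here simply because it is widely known), a minimal XML record can be built with Python's standard library. The title and identifier values are invented.

```python
# Illustrative only: a minimal descriptive metadata record using Dublin Core
# elements. The item described is a made-up digitised newspaper page.
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC)

record = ET.Element("record")
for element, value in [
    ("title", "Leeds Mercury, 3 January 1874"),   # hypothetical item
    ("creator", "Leeds Mercury"),
    ("date", "1874-01-03"),
    ("format", "image/tiff"),
    ("identifier", "lm-1874-01-03-p001"),         # hypothetical identifier
]:
    ET.SubElement(record, f"{{{DC}}}{element}").text = value

print(ET.tostring(record, encoding="unicode"))
```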

Clark concluded with an example of a high-level case study he had worked on: Archival Sound Recordings at the British Library. On that project they had passed some of the load to the public, crowdsourcing assessments of recording quality and asking people to add tags and comments.

Preparing Handling Guidelines for Digitisation Projects – Jane Pimlott, British Library

After a very enjoyable lunch, Jane Pimlott provided a real-world case study, looking at a recent project in which the British Library created training and handling guidelines for a 2-year project to scan 19th-century regional newspapers. It was an externally funded project, but the work was carried out on premises at Colindale. The team had six weeks in which to deliver the training, though a service plan was already in place and contractors were used.

Pimlott explained that damage can occur even if items are handled carefully, and that material in poor condition can still be digitised but takes longer. She explained the need to understand the processes and equipment used, e.g. large-scale scanners. Much of the digitisation team's work had been making judgement calls when assessing the suitability of items for scanning for the newspaper project. Their view was that scanning should not come at the expense of the item; it should not be seen as 'last chance' scanning. Pimlott concluded that different projects present different risks and may require different approaches to handling and training.

Preservation Issues – Neil Grindley, JISC

Finally the day moved into the realm of digital preservation. Neil Grindley from JISC explained how he had come from a paper, scissors, glue and pictures world (like many others there) but that the changing landscape required changing methods.

He began by trying to find out whether people considered digital preservation to be their responsibility. Unsurprisingly, few did. He explained that digital preservation involves a great deal of discussion and a lot of overlapping territory, so it is best undertaken collaboratively. Career paths are only just beginning to emerge, and the benefits are hard to explain and quantify. He revealed that a recent Gartner report stated that 15% of organisations are going to be hiring digital preservation professionals in the future, so it is a timely area in which to work. Despite this, it is still tricky to make a business case to your organisation for why you should be doing it.

Grindley explained that there is no shortage of examples of digital preservation out there; recent ones include Becta and Flickr.

Grindley then went on to make the distinction between bit preservation and logical preservation. Bit preservation is keeping the integrity of the files that you need. He asked: is bit preservation just the IT department's backup, or is it more? He saw the preservation specialist as sitting between the IT specialist and the content specialist, almost as a go-between.

He used Heydegger's examples of pixel corruption to show that corruption is both easy to introduce and potentially dangerous – especially in scientific research areas.

Grindley took us on a tour of some of the most pertinent areas of digital preservation, such as checksums. These are very important for bit preservation: when you go back to something you can check that the files have not been corrupted or changed, so it is very easy to see if a file has been tampered with over time. He suggested a number of tools for generating and verifying checksums.
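To illustrate why checksums make corruption so easy to spot (my sketch, not an example from the talk): flipping a single bit of content yields a completely different SHA-256 digest, so comparing a stored digest with a freshly computed one reveals even the subtlest change.

```python
# Illustrative only: one flipped bit produces an entirely different digest.
import hashlib

original = b"Some archived content."
corrupted = bytearray(original)
corrupted[0] ^= 0b00000001        # flip the lowest bit of the first byte

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(bytes(corrupted)).hexdigest())
# The two digests bear no resemblance to each other, which is why comparing
# stored and recomputed checksums catches even single-bit "bit rot".
```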

Grindley then considered some of the main digital preservation strategies – technology preservation, emulation and migration – which led him on to the subject of logical digital preservation: not just focusing on keeping the bits, but looking at what the material is and keeping its value.

To conclude, Grindley looked at some useful tools out there, including DROID (digital record object identification), the Curator's Workbench (a useful tool from the University of North Carolina that creates a MODS description) and Archivematica (a comprehensive preservation system). He also touched on new JISC work in this area: five new preservation projects starting between February and July 2011.

Other Sources of Information – Marieke Guy, UKOLN

I concluded the day by giving a presentation on other sources of information on digitisation and digital preservation. My slides are available on Slideshare.



I think by now the delegates had had their fill of information but hopefully some will go back and look at the resources I’ve linked to.

To conclude: I really enjoyed the workshop and found it extremely useful. If I have one criticism it’s that the day was a little heavy on the content side and might have benefited from a few break-out sessions – just to lighten it up and get people talking a little more. Maybe something for them to bear in mind for next time?

Posted in Conference, Events | 2 Comments »

Software Preservation Workshop for Digital Curators

Posted by Marieke Guy on 10th January 2011

The Software Preservation Study team are running a free workshop for digital curators and repository managers to understand and discuss the particular challenges of software preservation. It’s on Monday 7 February 2011 and will be held in London.

There is increasingly a need to preserve software: for example, software is sometimes needed to unlock accompanying research data and make it (re)usable, and software is often a research output in its own right. The workshop's premise is that curators and software developers will need to collaborate to preserve software: the curator needing the technical knowledge of the developer, and the developer needing the preservation expertise and mandate of the curator. This workshop is intended to be the first 'bridging' event between these two previously separate communities – so ground-breaking in its own small way.

Friendly technical expertise will be provided by the Software Sustainability Institute and the Science & Technologies Facilities Council (STFC). It’s a workshop for curation practitioners where real examples can be discussed and useful advice exchanged.

The team have created a briefing paper entitled Digital Preservation and Curation: The Danger of Overlooking Software which gives potential attendees a taster of what will be explored in more detail at the event.

Registration will open at 10:30am, with the workshop starting promptly at 11am on Monday 7 February at Brettenham House. Lunch will be provided. The team are aiming to finish the event at 3pm but will be holding a surgery-style session for additional queries, and walk-throughs of the methodologies until 4pm.

The workshop is free to attend but places are strictly limited. Further details and the full agenda will be provided to registrants.

For more details see the Software Preservation blog.

Posted in Events | 2 Comments »

Addressing the Research Data Challenge

Posted by Marieke Guy on 8th November 2010


Last week the Digital Curation Centre (DCC) ran a series of inter-linked workshops aimed at supporting institutional data management, planning and training. The roadshow will travel round the UK, but the first one was held in central Bath. The event ran over three days and provided institutions with advice and guidance tailored to a range of different roles and responsibilities.

Day one (Tuesday 2nd November) looked at the research data landscape and offered a selection of case studies highlighting different models, approaches and working practices. Day two (Wednesday 3rd November) considered the research data challenge and how we can develop an institutional response. Day three (Thursday 4th November) comprised two half-day training workshops: Train the Trainer and Digital Curation 101.

Unfortunately, due to other commitments I could only make the second day of the roadshow, but I found it really useful and would thoroughly recommend that anyone interested in institutional curation of research data attend the next workshop (to be held in Sheffield early next year – watch this space!).

The Research Data Challenge: Developing an Institutional Response

Liz Lyon Presenting

Day two of the roadshow was aimed at high-level managers and researchers, with the intention of getting them to work together to identify first steps in developing an institutional strategic plan for research data management support and service delivery. Although there was a huge amount of useful information to take in (if only I'd come across more of it when writing the Beginner's Guide! I'm currently waiting for the go-ahead for its release), it was very much a 'working day'. We were to get our hands dirty looking at real research curation and preservation situations in our own institutions.

After coffee, and some of the biggest biscuits I've seen, we were introduced to the DCC and given a quick overview by Kevin Ashley, Director of the DCC, University of Edinburgh. The majority of the day was facilitated by Dr Liz Lyon, Associate Director of the DCC and Director of UKOLN, University of Bath. Liz reiterated the research data challenge we face, but pointed out that there are both excellent case studies and excellent tools now available for our use. Two worth highlighting here are DMP Online (the DCC's data management planning tool) and the University of Southampton's IDMB: Institutional Data Management Blueprint. The slides Liz used during the day were excellent; they are available from the DCC Web site in PPT format and can be downloaded as a PDF from here.

During the day we worked in groups on a number of exercises. The idea was that we would start fairly high level and then drill down into more specific actions. In the first exercise my group took a look at motivations and benefits for research data management and the barriers that are currently in place. Naturally the economic climate was mentioned a fair amount during the day, but some of the long-standing issues still remain: where responsibility lies, lack of skills, lack of a coherent framework, taking data out of context, storage issues and so on. After our feedback Liz gave another plenary on Reviewing Data Support Services: Analysis, Assessment, Priorities. The key DCC tool in this area is the Data Asset Framework (formerly the Data Audit Framework), which provides organisations with the means to identify, locate, describe and assess how they are managing their research data assets – very useful for prioritising work. Useful reports include those from the Supporting Data Management Infrastructure for the Humanities (Sudamih) project. There was a feeling that looking into this area is becoming easier; people tend to be more open than they were a few years back, and there is definitely a groundswell.

Group Exercises

In exercise 2 we carried out a SWOT analysis of current research data. In the feedback there were a few mentions of the excellent Review of the State of the Art of the Digital Curation of Research Data by Alex Ball. Liz also provided us with a useful resources list (in her slides).

After an excellent lunch and a very brief break (no time to rest when sorting out HE's data problems!) we returned for another plenary by Liz on Building Capacity and Capability in your Institution: Skills, Roles, Resources, which laid the groundwork for exercise 3 – a skills and services audit. This exercise required us to think about the various skills needed for data curation and align them with people in our institutions. There was a recognition that librarians do 'a lot' and are more than likely to become the hub for activity in the future. There was also a realisation that there are a fair number of gaps (for example around provenance) and that there can be a bit of a hole between the creation of data by researchers and the passing on of curated data to librarians – another reason why we need to create more links with our researchers. Again there were lots of excellent resources that I hope to return to, including Appraise & Select Research Data for Curation by Angus Whyte, Digital Curation Centre, and Andrew Wilson, Australian National Data Service.

Liz then gave her final plenary on Developing a Strategic Plan for Research Data Management: Position, Policy, Structure and Service Delivery. The suggestions on optimising organisational support and looking at quick wins put us in the right frame of mind for the final exercise – Planning Actions and Timeframe. We were required to lay down our 'real' and aspirational actions for the short term (0-12 months), medium term (1-36 months) and long term (over 3 years). A seriously tricky task! The feedback reflected on the economic situation we are currently in and how it offers us as many opportunities as challenges. Now is a better time than ever for reform and for information services to take on a leadership role. Kevin Ashley concluded the day with some thoughts on the big who, how and why issues. He stressed that training is very important at the moment: many skills are in short supply, and since employing new staff is not an option, reskilling your existing staff is essential.

Flickr photos from the day (including photos of the flip chart pages created) are available from the UKOLN Flickr page, and brief feedback videos are available from the UKOLN Vimeo page. There is also a Lanyrd entry for the roadshow. The event tag was #dccsw10.

Posted in Conference, Events | 1 Comment »

Creating Open Training Materials

Posted by Marieke Guy on 24th June 2010

Yesterday I attended the Open University annual Learning and Technology conference: Learning in an open world.

I’ve talked more about my general feelings on the day in another blog post (Learning at an Online Conference) but here I want to focus in on the content of one particular session: Creating Open Courses, presented by Tony Hirst of the Open University (you can watch a playback of the session in Elluminate). During my time working on the JISC Beginner’s Guide to Digital Preservation I’ve been thinking quite a bit about what it means to create open training/learning materials and Tony’s approach struck a chord with me.

Tony's slides are available on Slideshare.



Tony's talk focused on his creation of the T151 course, an OU module on Digital Worlds, part of the Relevant Knowledge programme.

Tony talked about how the OU is making its print content open through services like OpenLearn and its AV material through YouTube and iTunesU. However, the mode of production is not necessarily open: he explained that producing a course can take several years, with 5 to 10 academics spending 18 months writing one.

Tony wanted to move away from this approach and write the T151 course in public and virtually in real time – 10 weeks of content in 20 weeks. He did so by writing blog posts. The course actually took about 15 weeks to write.

Tony made the choice to use WordPress primarily because of the restrictions on what you can embed; in this way it is similar to Moodle (the open source VLE that the OU and others are familiar with). This is an interesting approach. I am leaning heavily towards WordPress for the final delivery of the Beginner's Guide (primarily due to time constraints and the fact that I already have experience using WordPress). The restrictions are sometimes a hindrance to me rather than a benefit! Another reason he chose WordPress was that it "gives you RSS with everything" – agreed, this can be a real bonus.

Tony then wrote blog posts on a series of topics according to a curriculum developed with other academics. He used a FreeMind mind map to get his ideas down, and each blog post was 500-1000 words and took 1-4 hours to write. The end result would take students 10 minutes to 1 hour to work through. Within his posts Tony embedded YouTube movies and other external services. The end result was not a single fixed linear narrative but an emergent narrative. He used GraphViz visualisation to show reverse trackbacks, where posts reference previous posts.
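The reverse-trackback visualisation is straightforward to reproduce: given a mapping of which posts link back to which, a few lines can emit GraphViz DOT text for rendering with the dot tool. This is a sketch with invented post names, not Tony's actual code.

```python
# Illustrative sketch: emit GraphViz DOT for posts referencing earlier posts.
# The post names are invented for the example.
links = {
    "week2-game-loops": ["week1-what-is-a-game"],
    "week3-graphics": ["week1-what-is-a-game", "week2-game-loops"],
}

lines = ["digraph trackbacks {"]
for post, references in links.items():
    for earlier in references:
        lines.append(f'  "{post}" -> "{earlier}";')
lines.append("}")
print("\n".join(lines))   # pipe the output to "dot -Tpng" to render
```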

The blog also contained questions, readings and links to other relevant content; the idea was that each area could be populated from a live feed maintained by someone else. Tony felt that the important thing was to allow students to explore and do (e.g. use GameMaker to build a game and submit it), share (using Moodle forums) and demonstrate.

Tony wanted to get away from the idea that there's a single route through the course and that the educator is expressing the one true answer. The students were also provided with a Skunkworks area in a wiki and a FreeMind mind map of all the resources in the course. Assessment was through short questions and one larger question: students had to write a game design document for a game. He was looking for students to have opportunities to surprise.

In the Q&A Tony talked about how he had written the course while trying to do 101 other things at the same time and how a lot of the course chunks he would write for multiple reasons – this seems to be the approach I’m currently taking. Tony concluded by saying that creating the course was a travelogue in part and was his journey through that material.

How good to hear the approach that I’m currently taking (or trying to take) being endorsed!

Tony has written more about his approach on http://blog.ouseful.info and is very vocal on Twitter as @psychemedia.

Posted in Events, trainingmaterials | Comments Off

Blue Ribbon Task Force Symposium

Posted by Marieke Guy on 14th May 2010

Last week on election day (May 6th 2010) I attended the Blue Ribbon Task Force Symposium on Sustainable Digital Preservation and Access.

The symposium provided an opportunity for stakeholders to respond to the recent Blue Ribbon Task Force report entitled Sustainable economics for a digital planet: Ensuring long term access to digital information. The report is available to download in PDF format.

Panel Session: Clifford Lynch, Adam Farquhar, Matthew Woollard, Graham Higley, John Zubrzycki

Introduction to the Report

Neil Grindley, JISC Digital Preservation Programme Manager, opened the symposium by introducing the two UK members of the Blue Ribbon Task Force: Paul Ayris, Director of Library Services, University College London, and the recently retired Chris Rusbridge, an independent consultant.

After Paul Ayris' introduction, which explained that the task force had been set up to answer three key questions – 1) what shall we preserve? 2) who will preserve it? and 3) who will pay for it? – Chris Rusbridge followed with a summary of Blue Ribbon activity and recommendations. He explained that, despite what some might think, sustainability of resources is not just about finding money; it is about incentivising. Yet current access to digital information is not a clear case: those who pay for it, those who provide it and those who benefit from it are not necessarily the same. With this in mind, the Blue Ribbon Task Force report has been written around an economic framework. Rusbridge also explained that within the report they had set down that the case for preservation is the case for use. People don't want digital preservation, they want access to resources: digital preservation is effectively a derived demand. The report conclusions offered an agenda for further action, including looking at economies of scale and scope, chains of stewardship and investigation of public partnerships. It laid down the foundations for a further report taking the next steps.

Brian Lavoie, Research Scientist at OCLC and fellow task force member, then talked a little about the US launch; the products of the launch are available online. Lavoie explained that clarity of licensing and devices like Creative Commons have been valuable in making resources preservable: they encourage third-party curation by enshrining the right to preserve.

A panel session on what the task force had actually achieved followed. The initial questions were posed by Paul Ayris and centred around the fact that while open access is now so high on everyone’s agenda, digital preservation remains low, almost invisible. It is very much a case of open access being today’s problem and digital preservation being tomorrow’s.

Different Perspectives

After a much needed coffee break the symposium moved on to session two, chaired by Clifford Lynch of the Coalition for Networked Information, considering different sector perspectives. The view from the heritage sector was offered by Graham Higley, Head of Library and Information Services at the Natural History Museum. Higley introduced the Biodiversity Heritage Library (BHL) at the Natural History Museum, which holds about 1 million books. Many of the resources are very old, with more than half of all named species documented in literature from before 1900. Preservation is considered a core part of BHL work, and their long-term access approach is LOCKSS, based on international partnership guarantees and entirely on open source software.

John Zubrzycki, Principal Technologist and Archives Research Section Leader at BBC Research, followed with a view from public broadcasting. The BBC has 650k hours of video, 350k hours of audio, 2 million stills, 3 million items of sheet music, 400k "pronunciations", 1.5 million titles in the "grams library" and 100 km of shelves – that's a lot of stuff, and it will take up to 16 years to digitise all 65 petabytes of existing content. The BBC Charter places obligations on the BBC to preserve its output, and the BBC is aiming to provide public web access to all its archived content by 2020.

Lunch was really good and gave us a chance to network and put faces to Twitter IDs. We then all proceeded back to the lecture theatre. The data manager's perspective was given by Matthew Woollard, Director-Designate of the UK Data Archive. The UK Data Archive is a department at the University of Essex and provides infrastructure and shared services for various data archives. Woollard argued that it is a fallacy that researchers want to keep everything, and that priorities for selection, curation and retention are key. In reality it costs the UKDA more to restrict access than to open it. Woollard is currently involved in the formulation of the ESRC research data policy, which will hopefully be influenced by the Blue Ribbon Task Force report. He ended with the suggestion that data archives should use the arguments in the report to leverage not necessarily more money, but more sustainable money.

The final perspective was that of the national library, given by Adam Farquhar, Head of Digital Library Technology at the British Library, where "preservation is their day job". They have to ask for permission to archive Web sites; of the 13,000 people asked, only 100 have said 'no', but then only 4,000 have responded at all. It is this copyright investigation that costs time and money, so establishing the right legislative foundation is a priority. Farquhar talked about their use of DataCite and Dryad to support researchers by providing methods for them to locate, identify and cite research datasets with confidence. The British Library also has an interest in Planets and the Open Planets Foundation.

There followed a discussion on free riders (those who use content but do not contribute to its upkeep), who exactly they are and whether they are a problem. Brian Lavoie explained that taxes pay for public bodies to perform preservation, and therefore free use of these services is not really 'free'. The report itself is fairly critical of free riders, though those of us working in academia might believe that any use of resources should be encouraged. Matthew Woollard pointed out that the costs of excluding free riders can be greater than the costs of letting them in.

Higher Level Views

The final talks gave two higher level views: that of the European Commission and the JISC. Pat Manson, Acting Director of Digital Content and Cognitive Systems at the European Commission, talked about policy initiatives at European level and how they are tackling the sustainability challenge.

The JISC vision for digital preservation was provided by Sarah Porter, Head of Innovation at the JISC. The JISC is keen to ensure that organisations are prepared to undertake preservation and to embed preservation practice. Currently the JISC has taken no formal position in this area, but one possibility is that they, as funders, create an explicit mandate for projects to follow. They are also considering whether funders in different countries could work together on further actions, and whether they should create financial incentives for private entities to preserve in the public interest.

Chris Rusbridge sums up

The chair for the session, Brian Lavoie, then facilitated a discussion on 'where do we go from here?'. One suggestion made was engaging beyond academia and the cultural sector at a high political and governmental level, and promoting this as one of the 'big society' challenges – how apt on the day of the election. Chris Rusbridge closed with the thought that the report offered something for us to build on, but that the scale of the challenge required us to move on quickly.

After the symposium there was a drinks reception for those who didn't need to rush back to cast their vote. I had an interesting chat with the BBC team; most of our talk focussed on where a possible new government would leave not just digital preservation but the public sector as a whole.

A longer version of this trip report will appear in Ariadne Web magazine.

More photos are available from the UKOLN Flickr site.

Posted in Events | 2 Comments »