JISC Beginner's Guide to Digital Preservation

…creating a pragmatic guide to digital preservation for those working on JISC projects

The Preservation and Curation of Software

Posted by Marieke Guy on February 14th, 2011

A while back I mentioned that the Software Preservation Study team were running a free workshop for digital curators and repository managers to understand and discuss the particular challenges of software preservation.

Although I was unable to attend the day, two of my colleagues, Alex Ball and Manjula Patel, were there representing UKOLN. Manjula has written a blog post with her thoughts on the day. Alex has kindly allowed me to publish his trip report below:

****

On Monday 7th February I attended a workshop on the preservation and curation of software, put on by Curtis+Cartwright Consulting and the Software Sustainability Institute, the team behind the JISC study ‘Clarifying the Purpose and Benefits of Preserving Software’. It was only a brief event, but it still managed to cover ample ground.

The day started with three mini-keynotes. Kevin Ashley (Edinburgh/DCC) set the scene, illustrating why we should care about software preservation with anecdotes and examples from computing history. Neil Chue Hong (Edinburgh/SSI) reviewed the seven possible approaches to software preservation, and discussed the occasions when one might have to choose one, and what factors influence this decision. Finally, Brian Matthews (STFC) talked about some previous work done on the subject, namely the SigSoft (Significant Properties of Software) and SoftPres (Tools and guidelines for the preservation of software as research outputs) projects.

Following this, we were split into groups of about 5-8 people for two group exercises.

In the first exercise, we each considered who was responsible for software preservation in our organization, who else should be involved, and what practical steps could be taken to improve the status quo. Now, normally when one is asked ‘who is responsible’ in a workshop like this, the correct answer is usually ‘I am’, accompanied by inward groaning on the part of the delegates and waves of smugness coming off the leader who has the thing in question in their job title. There was none of that here, thankfully. There was thoughtful consideration of the parts played by developers, procurers, users, senior managers, funding bodies, IT departments and The Community (i.e. the other users and potential developers). One interesting suggestion was that the DCC should set up a preservation certification scheme, so that procurers and users could know which software they could trust to be preserved (or, at least, preservable).

The second group exercise was to go through a preservation planning exercise for a particular piece of software. The process was, in summary: to establish why the software needed to be preserved, for whose benefit, when and where; to work out the requirements for the preserved software (e.g. is the process important or only the outputs?); to determine the most suitable preservation approaches for the short term and the long term; to enumerate the skills and resources needed, and where the money will come from; and to work out what needs to be done to get the ball rolling. Our group deliberated over a fictional piece of software loosely based on the DANS Typological Database System, but one of the other groups considered the collection of console games at the Science Museum and did the exercise for real.

The workshop ended with Neil Grindley going through JISC activity in the area of software, mentioning among other things the Preservation of Complex Objects Symposia, DevCSI, the OpenPlanets Foundation and various rapid innovation projects. Areas of particular interest will be preserving software with a web services/cloud computing component, economic sustainability, and how funders can help ensure the software they fund can be preserved.

The workshop was followed by a surgery session where people could seek expert advice from the project team, but unfortunately I had to dash off to catch my train.

All in all, I found the workshop to be a particularly enjoyable example of the learning-by-doing genre of event. The pacing was good and the main points were repeated enough to be memorable but not enough to be annoying. Looking back I see I’ve learnt more than I thought I had at the time.

Project blog: http://softwarepreservation.jiscinvolve.org/wp

SSI: http://www.software.ac.uk/

Posted in Events | 1 Comment »

Approaches to Digitisation

Posted by Marieke Guy on February 11th, 2011

Digital preservation is about preserving digital objects. These objects have to come into being somehow, and earlier this week (Wednesday 9th February) I was invited to talk at an Approaches to Digitisation course facilitated by Research Libraries UK and the British Library. It was held at the British Library Centre for Conservation, a swanky new building in the British Library grounds. It was the first time the course had run, though they are planning another in the autumn. The course was aimed at those from cultural heritage institutions who are embarking on digitisation projects and sought to provide an overview of how to plan for and undertake digitisation of library and archive material.

British Library by Phil Of Photos

I was really pleased that the event spent a considerable amount of time on the broader issues. Digitisation itself, although not necessarily easy, is just one piece of the jigsaw, and there has been a tendency in the past for institutions to carry out mass digitisation without considering the bigger picture. During the day several speakers advocated a lifecycle approach, and planning, selection and sustainability were highlighted as key areas for consideration. If digitisation managers take this on board, the end result will hopefully be well-rounded, well-maintained, well-used digitised collections with a preservation strategy in place.

The course followed a fairly traditional format with presentations, networking time and a printed delegate pack. Unfortunately there was no wireless, but this left us concentrating completely on the presenters and what they had to say, and it was all very useful stuff.

Benefits of Digitising Material – Richard Davies, British Library

Richard Davies started the day with an introduction to the benefits of digitisation (a good article entitled The Case for Digitisation is available on P16 of the most recent JISC inform). Rather than just giving a straightforward overview of the different benefits he used a number of case studies to illustrate the added value that digitisation can provide, for example by opening up access, allowing digital scholarship and collaboration.

The British Library has now digitised approximately 4 million newspapers. Opening up access has meant that people can use the papers in completely different ways, for example through full-text searching and different views on resources. Projects like the British Library Beowulf project allow extensive cross-searching, and the Codex Sinaiticus project has taken a bible physically held in four locations and allowed it to be accessed as one for the first time. The Google Art Project allows users to navigate galleries in a similar way to Google Street View, and the high resolution of the digital objects is impressive.

Google Art Project

Digitisation also presents opportunities for taking the next step in digital scholarship. In the past, carrying out the level of research that is now possible with digital resources would have taken a very long time indeed. The Old Bailey project has digitised 198,000 records and users can now carry out extensive analysis of the content in a matter of minutes.

Davies also illustrated how digitisation can allow you to expand your collection by bringing in resources from the general public and by crowdsourcing. The Trove project has crowdsourced corrections to its optical character recognition (OCR) results: prizes were offered for the people who corrected the most text, and many details of how the project was run have been made available online. The Transcribe Bentham project has also made many details about how it carried out its work available on its blog.

Davies suggested that digitisation managers need to think about the model they will be using. Will content be freely available, or will there be a business model behind it? One option is to allow users to dig to a certain level and then ask them to pay if they wish to access any further resources.

Davies concluded that to have a successful digitisation project you need to spend time on the other stuff – metadata, OCRing the text, making resources available in innovative ways. Digitisation is only one element of a digitisation project.

Planning for Digitisation – Richard Davies, British Library

Richard Davies continued presenting, this time looking more at his day job – planning for digitisation projects. He offered up a list of areas for consideration (though stated that this was a far from exhaustive list).

He suggested that a digitisation strategy helps you prioritise and can be a way of narrowing down the field. Such a strategy should fit within a broader context; at the British Library it is part of their 2020 vision. Policy and strategy should consider questions like: Who are we? Where could we go? Where should we go? How do we get there? It should also bear in mind funding and staffing levels.

Davies also spent a lot of time talking about the operational and strategic elements of embarking on a project. It is very much a case of preparation being key: he suggested digitisation managers do as much preparation up front as possible without holding up the project. For example, when considering selection, ask: What's unique? What's needed? What's possible? (bearing in mind cost, copyright and conservation). He also emphasised the importance of lessons-learnt reports.

Davies concluded by talking about some current challenges to digitisation programmes. The primary one is economic, as funding calls are becoming rarer and rarer. It can be useful to have a funding bid expert on board. He also explained that you can make the most of the bidding process by using it as an opportunity to answer difficult questions about what you want to do. There is currently a lot of competition for funding: the last JISC call (the Rapid Digitisation call) offered £400,000 of funding; 45 bids were received and 7 projects were funded.

Davies also highlighted that digital preservation and storage are increasingly becoming problems. Sustainability need not mean forever, but you should at least have a 3–5 year plan in place.

I was also pleased to hear Davies highlight a project I am now working on: the IMPACT project, funded by the European Commission. It aims to significantly improve access to historical text and to take away the barriers that stand in the way of the mass digitisation of the European cultural heritage.

Use of Digital Materials – Aquilies Alencar-Brayner, British Library

After a coffee break Aquilies Alencar-Brayner considered how users are currently using digital materials. He mentioned OCLC research showing that students consistently use search engines to find resources rather than starting at library websites. They are also using the library less, even though books remain the brand most associated with libraries.

Alencar-Brayner ran through the 10 ‘ins’ that users want: integrity, integration, interoperability, instant access, interaction, information, ingest of content, interpretation, innovation and indefinite access.

He showed us some examples of how the British Library is facilitating access to digital materials, for example through the Turning the Pages project, which allows you to actually turn the pages, magnify the text, see the text in context and listen to audio.

Codex Sinaiticus

Where to begin? Selecting resources for digitisation – Maureen Pennock, British Library

Maureen Pennock introduced us to selection. She explained that selection is usually based on previously selected resources, and that the reason most commonly given for selection is improving access. Sometimes, however, the reason is conservation of the original, and occasionally it is to enable non-standard uses of a resource.

Pennock explained that selection is often based on the appraisal made for archival purposes, known as assessment; areas for consideration include suitability, desirability and whether the resources are what users need.

Selection is an iterative process and is revisited several times after you've defined your final goals and objectives. It is important to identify internal and external stakeholders, such as curators and collection managers, and include them in the process.

Once you've set a scope you will need to pre-select items, but there is no one-size-fits-all approach. Practical and strategic issues come into play, and items will need to be assessed and prioritised.

Pennock explained that assessing suitability means considering areas like intellectual justification, demand, relevance, links to organisational digitisation policy, sensitivity and potential for adding value (e.g. commercial exploitation of resources).

Alongside suitability there will need to be item assessment, looking at the quality of the original, the feasibility of image capture, the integrity and condition of resources, complex layouts for different material types, historical and unusual fonts, and the size of artefacts. Legal issues such as copyright, data protection, licences and IPR also have a role to play.

Pennock concluded that not all issues are relevant to everyone, and some will have more weighting than others. Practitioners will need to decide on their level of assessment and define their shortlist. It is important that you can justify your selection process in case issues arise later down the line.

Metadata Creation – Chris Clark, British Library

To whet our appetites for lunch, Chris Clark took us on a whirlwind tour of digitisation metadata and its value. He explained that metadata adds value but is unfortunately often left to the end, with tragic consequences. He also warned us that there is still no commonly agreed framework and that this is still an immature area. Quite often metadata's real value is most realised in situations where it isn't expected; Clark recommended Here Comes Everybody by Clay Shirky as a text that illustrates this. He also suggested delegates look at One to Many; Many to One: The Resource Discovery Taskforce Vision.

Metadata is a big topic and Clark was only able to scratch the surface. He advised us to think of metadata as a lubricant or adhesive that holds together users, digital objects, systems and services. We could also see metadata as a savings account: the more you put in, the more you get out.

Clark then offered us a quick introduction to XML and some background to the types of metadata most relevant to digitisation (descriptive, administrative and structural). He explained that Roy Tennant of OCLC had characterised three essential metadata requirements: liquidity (written once, used many times), granularity and extensibility (accommodating all subjects).
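To make the descriptive kind of metadata concrete, here is a rough sketch of a minimal descriptive record built in XML. The element set (Dublin Core) and the sample values are my own illustrative assumptions, not an example from the course:

```python
import xml.etree.ElementTree as ET

# Dublin Core is one widely used descriptive metadata vocabulary;
# choosing it here is illustrative, not a course recommendation.
DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC)

record = ET.Element("record")
for name, value in [
    ("title", "Regional newspaper, 14 February 1891"),  # hypothetical item
    ("type", "Text"),
    ("format", "image/tiff"),
    ("identifier", "example-0001"),  # hypothetical identifier
]:
    element = ET.SubElement(record, "{%s}%s" % (DC, name))
    element.text = value

print(ET.tostring(record, encoding="unicode"))
```

The "write once, use many times" liquidity Tennant describes follows naturally from a machine-readable record like this: the same XML can feed a catalogue, an OAI-PMH feed or a search index without re-keying.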

Clark concluded with an example of a high-level case study he had worked on: Archival Sound Recordings at the British Library. On the project they had passed some of the load to the public by crowdsourcing assessments of recording quality and asking people to add tags and comments.

Preparing Handling Guidelines for Digitisation Projects – Jane Pimlott, British Library

After a very enjoyable lunch, Jane Pimlott provided a real-world case study, looking at a recent two-year project to scan 19th-century regional newspapers, for which the British Library had created training and handling guidelines. It was an externally funded project, but the work was carried out on the Library's premises at Colindale. The team had six weeks in which to deliver a training programme, though a service plan was already in place and contractors were used.

Pimlott explained that damage can occur even if items are handled carefully, and that material in poor condition can still be digitised but may take longer. She explained the need to understand the processes and equipment used, e.g. large-scale scanners. Much of the digitisation team's work on the newspaper project had been making judgement calls on the suitability of items for scanning. Their view was that scanning should not be at the expense of the item; it should not be seen as last-chance scanning. Pimlott concluded that different projects present different risks and may require different approaches to handling and training.

Preservation Issues – Neil Grindley, JISC

Finally the day moved in to the realm of digital preservation. Neil Grindley from JISC explained how he had come from a paper, scissors, glue and pictures world (like many others there) but that the changing landscape required changing methods.

He began by trying to find out whether people considered digital preservation to be their responsibility. Unsurprisingly, few did. He explained that digital preservation involves a great deal of discussion and there is a lot of overlapping territory; it is best undertaken collaboratively. Career paths are only just beginning to emerge and the benefits are hard to explain and quantify. He revealed that a recent Gartner report stated that 15% of organisations will be hiring digital preservation professionals in the future, so it is a timely area in which to work. Despite this, it is still tricky to make a business case to your organisation for why you should be doing it.

Grindley explained that there is no shortage of examples of digital preservation out there; recent ones include Becta and Flickr.

Grindley then went on to make the distinction between bit preservation and logical preservation. Bit preservation is keeping the integrity of the files that you need. He asked: is bit preservation just the IT department's backup, or is it more? He saw the preservation specialist as sitting between the IT specialist and the content specialist, almost as a go-between.

He used the example of Heydegger's images of pixel corruption to show that corruption is both easy to introduce and potentially dangerous, especially in scientific research areas.

Grindley took us on a tour of some of the most pertinent areas of digital preservation, such as checksums. These are very important for bit preservation: they ensure that when you go back to a file you can check that it has not been corrupted or changed, making it very easy to see if a file has been tampered with over time. He suggested a number of tools for this.

Grindley then considered some of the main digital preservation strategies – technology preservation, emulation and migration – which led him on to the subject of logical digital preservation: not just focusing on keeping the bits, but looking at what the material is and keeping its value.

To conclude, Grindley looked at some useful tools out there, including DROID (digital record object identification), Curator's Workbench (a useful tool from the University of North Carolina that creates a MODS description) and Archivematica (a comprehensive preservation system). He also touched on new JISC work in this area.
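Format identification tools like DROID work by matching the first bytes of a file against a registry of known format signatures (in DROID's case, The National Archives' PRONOM registry). A toy sketch of the underlying idea – the signature table below is a tiny, simplified subset of my own choosing, not DROID's actual registry:

```python
# Toy illustration of signature-based format identification.
# Real registries such as PRONOM hold far more formats, versions
# and internal signatures than this abridged table.
SIGNATURES = {
    b"%PDF-": "PDF document",
    b"\x89PNG\r\n\x1a\n": "PNG image",
    b"GIF89a": "GIF image",
    b"PK\x03\x04": "ZIP container (also DOCX, EPUB, ...)",
}

def identify(data: bytes) -> str:
    """Return a format name for the leading bytes of a file, or 'unknown'."""
    for magic, name in SIGNATURES.items():
        if data.startswith(magic):
            return name
    return "unknown"

print(identify(b"%PDF-1.7 ..."))  # PDF document
```

Knowing exactly what format a file is in is the first step of logical preservation: only then can you decide whether migration or emulation is needed.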

Five new preservation projects are starting between February and July 2011.

Other Sources of Information, Marieke Guy, UKOLN

I concluded the day by giving a presentation on other sources of information on digitisation and digital preservation. My slides are available on Slideshare and embedded below.



I think by now the delegates had had their fill of information but hopefully some will go back and look at the resources I’ve linked to.

To conclude: I really enjoyed the workshop and found it extremely useful. If I have one criticism it’s that the day was a little heavy on the content side and might have benefited from a few break-out sessions – just to lighten it up and get people talking a little more. Maybe something for them to bear in mind for next time?

Posted in Conference, Events | 2 Comments »

Your Digital Legacy

Posted by Marieke Guy on January 31st, 2011

Personal Legacy

Last week Law.Com published an interesting article entitled What Happens to Your Digital Life When You Die?.

The article, written by Ken Strutlin, starts by explaining that dealing with our digital legacy is something the legal world has yet to get to grips with.

Still, one of the neglected ensigns of internet citizenship is advanced planning. When people die, there are virtual secrets that follow them to the grave — the last refuge of privacy in a transparent society. Courts and legislatures have only begun to reckon with the disposition of digital assets when no one is left with the knowledge or authority to conclude the business of the cyber-afterlife.

It is an immensely complicated area and “the most important long-term consideration is who can access a person’s online life after they have gone or become incapacitated?“. Many people leave behind a huge amount of digital data. Much of this, for example images and documents, may no longer sit on a local hard drive but may be out there on cloud services such as Flickr and Facebook. It is likely that loved ones will be keen to access and collate this data.

Information on both legal rights and what physically needs to be done is becoming increasingly important.

A few years ago a colleague of mine passed away and after some time I took it upon myself to notify Facebook. Relatives had initially posted some information (such as funeral details) up on my colleague’s wall but no other action had been taken. The profile had remained as one of a living user. After I contacted them Facebook acted quickly and effectively and memorialized the account. It is quite clear that they have a well thought out set of procedures in place.

Work Legacy

At UKOLN where I work we have touched on this subject when considering how you deal with the digital legacy of staff who move on. Although former members of staff are not ‘dead’ the problems that their leaving causes can be similar to those when someone dies – unknown passwords and use of unlisted services, to name two. In the past this type of information has been described as corporate or organisational memory and has often been subdivided into explicit and tacit knowledge. Recording corporate knowledge, especially the tacit type, has always caused problems, but the digital nature of resources now adds another level.

Strutlin recounts the tale of the Rosetta Stone, whose meaning was lost until a Napoleonic soldier found the stone, inscribed with the same text in three scripts, in the Egyptian town of Rosetta, providing the key to the hieroglyphics. Strutlin’s response is that “We need more than serendipity to preserve the data of our lives beyond our lifetimes“.

Over time it is likely that laws will emerge and processes and procedures will evolve but we need to be proactive about instigating them.

The principal concern today is the passing on of passwords, divvying up social media contents, and protecting virtual assets. But five minutes from now, those social media sites will include life logged metrics with excruciating details about our health, activities, and collective experiences. They will be more intimate and vivid than any handwritten personal journal or photo album. And they will demand clear and comprehensive rules to oversee their final disposition.

Posted in Web | Comments Off

Supporting long-term access to digital content

Posted by Marieke Guy on January 28th, 2011

The MLA has recently released a principles paper on Supporting long term access to Digital Material.

“At a time when digital formats are increasingly important, it is vital to ensure they are sustainable and accessible over the long term. This is equally the case for materials that originate in digital format, and for those that originate in different forms which are then digitally reformatted.”

The paper, commissioned by the MLA, was produced on its behalf by Collections Trust, in collaboration with the following organisations:

The National Archives; Heritage Lottery Fund; Archaeology Data Service; The British Library; Collections Trust; Digital Preservation Coalition; Museums Galleries Scotland; Joint Information Systems Committee; UKOLN.

The principles will be supplemented by guidance and other tools to support long term access to digital material across the sector and promoted using a joint advocacy and marketing plan.

Posted in Project news | Comments Off

Approaches to Digitisation Training Day

Posted by Marieke Guy on January 14th, 2011

Next month I will be presenting at the Approaches to Digitisation Training Day.

The day is being organised by the British Library Preservation Advisory Centre and Research Libraries UK and will be held on Wednesday 9 February 2011 at the British Library Centre for Conservation, London. It aims to give an overview of how to plan for and undertake digitisation of library and archive material; and to preserve the digital objects that are produced.

By the end of the day participants will be able to:

  • Explain the benefits of digitising material
  • Give examples of the types of materials that are suitable for digitisation
  • Identify the issues to be considered when planning for digitisation
  • Define what is meant by digital preservation
  • Describe the risks to digital objects and explain how digital preservation can address these risks.

I will be presenting on Other sources of information and next steps. My talk will look primarily at digital preservation and explore some of the resources linked to from the JISC Beginner’s Guide to Digital Preservation.

Posted in Project news | Comments Off

Software Preservation Workshop for Digital Curators

Posted by Marieke Guy on January 10th, 2011

The Software Preservation Study team are running a free workshop for digital curators and repository managers to understand and discuss the particular challenges of software preservation. It’s on Monday 7 February 2011 and will be held in London.

There is increasingly a need to preserve software: for example software is sometimes needed to unlock accompanying research data and make it (re)usable, or software is often a research output in its own right. The workshop’s premise is that curators and software developers will need to collaborate to preserve software: the curator needing the technical knowledge of the developer, and the developer needing the preservation expertise and mandate of the curator. This workshop is intended to be the first ‘bridging’ event between these two previously separate communities – so ground-breaking in its own small way.

Friendly technical expertise will be provided by the Software Sustainability Institute and the Science & Technologies Facilities Council (STFC). It’s a workshop for curation practitioners where real examples can be discussed and useful advice exchanged.

The team have created a briefing paper entitled Digital Preservation and Curation: The Danger of Overlooking Software which gives potential attendees a taster of what will be explored in more detail at the event.

Registration will open at 10:30am, with the workshop starting promptly at 11am on Monday 7 February at Brettenham House. Lunch will be provided. The team are aiming to finish the event at 3pm but will be holding a surgery-style session for additional queries, and walk-throughs of the methodologies until 4pm.

The workshop is free to attend but places are strictly limited. Further details and the full agenda will be provided to registrants.

For more details see the Software Preservation blog.

Posted in Events | 2 Comments »

The End of Delicious?

Posted by Marieke Guy on December 17th, 2010

Oh dear, another ‘end of’ post. Are we going to see a lot more termination of services as the economic situation really starts to hit the Web?

Yesterday, news that Yahoo plans to kill off a handful of services (including Yahoo Buzz, Altavista and the bookmarking service Delicious) made it into the mainstream. The source was an internal Yahoo slide showing future plans, leaked by a Yahoo employee, Eric Marcoullier. Yahoo has recently had to implement cuts and lay off staff. In response to the leak a company spokesperson explained:

Part of our organizational streamlining involves cutting our investment in underperforming or off-strategy products to put better focus on our core strengths and fund new innovation.

Delicious is a very well used service. I personally have been a member for several years and currently have 1295 links bookmarked. The service is embedded in many of my Web pages and on my blogs. For my remote worker blog I have even set up a Google custom search allowing searching of the 300+ remote working URLs I have collected. The JISC Beginner’s Guide to Digital Preservation also has 300+ URLs associated with it.

We are all aware that Web 2.0 services come and go and that this has many implications for digital preservation. The JISC PoWR project took a look at related issues and offered a set of pragmatic guidelines on the approaches we can use to safeguard our data. Here is a chance to put theory into practice…

On hearing the news about Delicious my initial reaction was one of panic: all my URLs would be lost! This isn’t actually the case. The termination of the service has yet to be confirmed, and already several campaigns have sprung up (act.ly, save delicious, …) petitioning to save the service. One would also like to believe that if the service is to be terminated, users would be given advance warning of a switch-off date, giving them the opportunity to get their data out. Whatever happens, it makes sense to take action to protect any investment you have in Delicious.

Exporting Data

Delicious has an Export / Download Your Delicious Bookmarks feature, available from the Settings tab under the Bookmarks subheading. This allows you to save the generated page (as HTML) and import it into your browser, or anything else that accepts bookmarks in a standard format. Save the Delicious HTML file somewhere safe.

Although this now means that you have a copy of your urls (which is a step in the right direction) you really need to import them into another bookmarking service to make use of tags, bundles and other functionality.
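You can also inspect the exported copy yourself before committing to a new service. The saved page follows the common Netscape bookmarks format; a rough sketch of pulling out URLs and tags with Python's standard html.parser (I am assuming tags live in a TAGS attribute on each link, as they did in my export – treat the exact attribute names as an assumption):

```python
from html.parser import HTMLParser

class BookmarkParser(HTMLParser):
    """Collect (url, tags) pairs from a Netscape-format bookmarks file."""

    def __init__(self):
        super().__init__()
        self.bookmarks = []

    def handle_starttag(self, tag, attrs):
        # Each bookmark is an <A HREF="..." TAGS="tag1,tag2"> element;
        # html.parser lowercases tag and attribute names for us.
        if tag == "a":
            attrs = dict(attrs)
            if "href" in attrs:
                tags = [t for t in attrs.get("tags", "").split(",") if t]
                self.bookmarks.append((attrs["href"], tags))

parser = BookmarkParser()
parser.feed('<DT><A HREF="http://example.org" TAGS="preservation,web">Example</A>')
print(parser.bookmarks)  # [('http://example.org', ['preservation', 'web'])]
```

In practice you would `parser.feed()` the whole saved file; even a simple link-and-tag listing like this gives you a service-independent record of your collection.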

Lots of people are turning to Diigo (there is a page on how to import bookmarks from Delicious); other options include Connotea, Citeulike, Trunk.ly and Stumbleupon – a more comprehensive list is available from Wikipedia. Search Engine Land have also compiled a list of their 10 best alternatives to Delicious.

Web pages that use Delicious

Some thought also needs to be given to the other ways you use Delicious – in Web pages, on blogs and so on. Until the closure of Delicious is confirmed it seems a little early to act here. Diigo can do most of the things Delicious does, so it will be a case of using it from now on and at some point changing all the embeds. What I personally will be doing is compiling a list of the places in which I currently use Delicious. All very time consuming, and maybe something I should have already been doing?

Digital preservation of Web 2.0 services is an important area but not something people have given much consideration to in the past.

It seems that there may suddenly be a lot more case studies for us to consider…

Posted in Case studies | 3 Comments »

The End of NeSCForge: Preserving Software

Posted by Marieke Guy on December 10th, 2010

On 20 December 2010 the NeSCForge service, a collaborative software development tool for the UK e-Science community, will be turned off. The main reason for this is that there isn’t any money to keep the service running. The official message on the site is as follows:

Posted By: David McNicol
Date: 2010-09-27 13:48
Summary: NeSCForge closure 20/12/2010

Dear NeSCForge community,

Because of various grants finishing, we will be losing the IT staff and skills required to keep the NeSCForge service running properly. Rather than leaving it running until something goes wrong with no clear idea of ownership and responsibility for the service, we have taken the difficult decision to shut it down on Monday 20th December 2010.

We would encourage you to review your projects and take copies of any code or documentation you wish to keep before that date. Unfortunately, the software that NeSCForge runs is bespoke and fairly obfuscated so we cannot offer a method of extracting bug reports, forum posts and so on.

If you have any questions, please email them to:

nescforge-support@nesc.ac.uk

Thankyou,
David McNicol

A shame that some of the issues, such as the bespoke nature of the software, were not addressed earlier in the service’s life!

The closure leaves many organisations, including the National Grid Service (NGS), without a software repository. The UK National Grid Infrastructure has now moved its data with the help of the Software Sustainability Institute, which offers a collection of guides including one on Retrieving project resources from NeSCForge. The NeSCForge portfolio of projects includes DIALOGUE, ComparaGRID, BRIDGES and Triana.

The only option left to many is SourceForge, a resource for open source software development and distribution.
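For anyone who still needs to "take copies of any code" before a forge closes, the safest copy is one that keeps the full version-control history, not just a snapshot of the latest files. Assuming the project lives in a Subversion repository (CVS-hosted projects would use `cvs checkout` or an rsync of the repository instead), the standard `svnsync` workflow can mirror it locally. The URL below is hypothetical; this is a sketch, not a turnkey tool:

```python
import subprocess

def svn_mirror_commands(src_url, dest_path):
    """Build the standard svnsync command sequence for taking a full-history
    mirror of a Subversion repository into a local repository at dest_path.
    (svnsync also requires a pre-revprop-change hook on the destination,
    omitted here for brevity.)"""
    dest_url = f"file://{dest_path}"
    return [
        ["svnadmin", "create", dest_path],          # make an empty local repo
        ["svnsync", "init", dest_url, src_url],     # point it at the source
        ["svnsync", "sync", dest_url],              # pull every revision
    ]

def mirror(src_url, dest_path, dry_run=True):
    for cmd in svn_mirror_commands(src_url, dest_path):
        if dry_run:
            print(" ".join(cmd))  # show what would run
        else:
            subprocess.run(cmd, check=True)

# Hypothetical repository location -- substitute your project's own URL.
mirror("https://forge.example.org/svn/myproject", "/tmp/myproject-mirror")
```

A mirror made this way can later be pushed to SourceForge or any other host without losing the commit history, which is precisely what a plain tarball of the working copy would throw away.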

The JISC Beginner’s Guide to Digital Preservation has a section on how to preserve Software.

Is your software held in NeSCForge service? How sustainable are other services like Sourceforge? How do you archive and preserve your software?

Tags:
Posted in Archiving | Comments Off

Launch of the JISC Beginner’s Guide to Digital Preservation

Posted by Marieke Guy on November 19th, 2010

We have now been given the go-ahead for a soft launch of the JISC Beginner’s Guide to Digital Preservation.

Just to reiterate, this is the guide that the writing of this blog has documented and contributed to.

It has been written for those working on JISC projects who would like help with preserving their outputs. It is aimed at those who are new to digital preservation but can also serve as a resource for those who have specific requirements or wish to find further resources in certain areas.

The Guide is available at: http://blogs.ukoln.ac.uk/jisc-beg-dig-pres/

The site can be navigated in a number of ways.

You can comment on any page on the site, so please do let us know what you think and if there are any resources we’ve missed.

We will be promoting the guide over the forthcoming months.

Posted in Project news, Training materials | Comments Off

Wikipedia Terminal Event Management policy

Posted by Marieke Guy on November 15th, 2010

A representation of the primer section of the Wikipedia message

Have you ever taken a look at Wikipedia’s Terminal Event Management policy? It details the “procedures to be followed to safeguard the content of the encyclopedia in the event of a non-localized event that would render the continuation of Wikipedia in its current form untenable“.

The policy is designed to facilitate the preservation of the encyclopedia by a transition to non-electronic media in an orderly, time-sensitive manner or, if events dictate otherwise, the preservation of the encyclopedia by other means.

It starts off by saying when the policy will be implemented: imminent societal collapse (e.g. limited nuclear exchange, pandemic, hypercane, supervolcano, the rapid onset of a climatic change or other global ecological disaster), or an imminent extinction-level event (e.g. global thermonuclear war, asteroid impact, global revenant epidemic, stellar gamma-ray burst, etc.).

OK, so now you are starting to wonder if this is a serious thing… It sounds like the plot of a blockbuster movie!

So it’s a bit of a joke and filed under Wikipedia humor. Nevertheless the data preservation techniques and procedures are definitely of interest.

It is suggested that editors print as many articles as possible, with due regard to any personal safety concerns that may be faced in these extraordinary events. However laborious this approach may seem, editors are asked to bear in mind that transfer to electronic media, such as CD, DVD or memory stick, while quicker, would defeat the purpose of this policy.

Once again we are back to the more secure preservation format – paper!

The policy goes on to discuss the type of articles to save given that there are currently 3 million in Wikipedia:

While articles that would be of immediate utility in the changed world circumstances, such as animal husbandry and carpentry, should be amongst those articles that every editor should have in their archives, consideration should be given to the preservation of articles of high cultural significance or of a more esoteric nature.

The proposed plan is that editors and archivists all print off and store as many random articles as they can, and then later pool their resources in an attempt to recreate Wikipedia.

What a relief to hear that an “alternative strategy will be undertaken at the Wikimedia server facility. On the implementation of the TEMP protocol, a laser etched version of Wikipedia will be created using plates of a resilient alloy to store miniaturized versions of every page“.

The policy goes into more detail over the preservation approaches that can be taken if extinction is nigh – “data shall be transmitted from the world’s radio telescopes to the 300 nearest stars and to the centre of the galaxy for as long as possible”.

Jimmy Wales, founder of Wikipedia, leaves us with a final thought.

While the light of humanity may flicker and die, we go gently into this dark night, comforted in the knowledge that someday Wikipedia shall take its rightful place as part of a consensus-built Galactic Encyclopedia, editable by all sentient beings.

Tags:
Posted in General | 2 Comments »