Cultural Heritage

A UKOLN Blog for the Cultural Heritage sector (now archived)

Archive for the 'Technical' Category

Decoding Art

Posted by Brian Kelly on 10th January 2011

About this Guest Post

Martin Grimes is the Web Manager for Manchester City Galleries. He can be contacted at m.grimes@manchester.gov.uk


Decoding Art: Delivering interpretation about public artworks to mobiles

What’s that weird blocky thing?

A little over two years ago independent consultant Julian Tomlin worked with Manchester Art Gallery to trial the use of QR codes to deliver interpretive content about six objects in the gallery’s Revealing Histories: Remembering Slavery trail.

[Image: QR label]

Large QR code labels were placed beside the object labels, and each of these linked to a specially created web page with further text information and, in some cases, an audio clip about the object. A guide leaflet was produced and Visitor Service staff were briefed about the pilot – mainly so they could answer the frequent question, ‘What’s that weird blocky thing?’

There’s little doubt that this pilot was ahead of the curve in terms of public recognition of QR codes in the UK, and it’s difficult to say for sure how many of the visits to the web pages were made by gallery visitors and how many came via links on the technology sites that reviewed the pilot.

Fast-forward two years and the landscape has changed significantly: QR codes are becoming almost mainstream in the UK. With this awareness in mind, at the beginning of this year we revisited the use of QR codes as a means of delivering interpretive content to mobile phones, but this time out in the public spaces of the city. Building on the work done by gallery placement student Marek Pilny, which used Google Maps to mark the geographical location of most of the public artworks in Manchester Art Gallery’s care (http://www.manchestergalleries.org/the-collections/public-art/), we again worked with Julian Tomlin to investigate how we might use QR codes or other location-based technologies to deliver interpretative material to people’s mobile phones as they came across artworks in the city.
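As an aside, the post doesn’t say how Marek Pilny’s map was put together; purely as a rough sketch of one way to mark artwork locations programmatically, a short script could write a KML file of placemarks that Google Maps or Google Earth can import. The artwork name and coordinates below are invented examples, not the gallery’s data.

```python
# A rough sketch of generating a KML placemark file for public artworks,
# suitable for import into Google Maps/Earth. The artwork and coordinates
# are invented examples, not Manchester Art Gallery's actual data.
ARTWORKS = [
    ("Example statue", -2.2446, 53.4794),  # (name, longitude, latitude) - hypothetical values
]

placemarks = "\n".join(
    f"    <Placemark><name>{name}</name>"
    f"<Point><coordinates>{lon},{lat},0</coordinates></Point></Placemark>"
    for name, lon, lat in ARTWORKS
)

kml = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
    "  <Document>\n"
    f"{placemarks}\n"
    "  </Document>\n"
    "</kml>\n"
)

with open("public-art.kml", "w", encoding="utf-8") as fh:
    fh.write(kml)
```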

Decoding Art

We embarked on a pilot that aimed to discover:

  • Whether QR codes are a viable method to do this
  • What the practical and technical issues might be
  • How existing online content might need to be adapted or developed
  • Whether new forms of content – audio for instance – are feasible
  • What the take-up will be – are QR codes recognised by a wider public, and what content types are most effective?
  • How we can enable users to give feedback on or contribute to the content
[Image: Using a smart phone to get information about an item]

Julian conducted research that looked at QR code origination methods, symbol versions, optimum label size, performance of the labels at different locations on the works and in different light levels, and label fabrication options. We also did some limited testing with a number of mobile phones with different screen sizes and different operating systems.
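For readers who want to experiment with the same trade-offs (symbol version, error correction level, module size), here is a minimal sketch using the open-source Python qrcode library; the destination URL is a made-up example rather than one of the pilot’s actual pages.

```python
# A minimal sketch of QR code generation with the open-source Python
# "qrcode" library (pip install qrcode[pil]); the URL is hypothetical.
import qrcode

qr = qrcode.QRCode(
    version=2,                                          # symbol version: grid size and data capacity
    error_correction=qrcode.constants.ERROR_CORRECT_Q,  # higher correction tolerates scuffed labels
    box_size=20,                                        # pixels per module, which governs printed size
    border=4,                                           # quiet zone around the symbol
)
qr.add_data("http://example.org/decodingart/queen-victoria")  # hypothetical work page
qr.make(fit=True)
qr.make_image().save("qr-label.png")
```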

Testing also included looking closely at two methods of mounting the labels: adhesion and physical fixing. Each work in the pilot had a unique base and different types of inscription or information panels, so finding an approach that would work across them all has been perhaps the most difficult and time-consuming aspect of the project, involving extensive testing by a conservator and significant consultation with city planning officers.

In some situations it has not been possible to find a suitable mounting point on the work itself so other nearby surfaces have been used. Though we don’t have enough data yet, it seems very likely that people will not immediately see the connection between work and label and this may impact on visits.

Research into suitable materials from which to fabricate the QR labels had to take into account that this project was a pilot, so along with aesthetic and effectiveness considerations, cost and permanence were key issues. After considering many options, including laser-cut or etched and coloured stainless steel, we settled on Traffolyte, a multi-layered phenolic plastic which is used to make name badges, signs and labels. The QR code, gallery logo and project title have been laser-etched into the top layer and, as objects in themselves, they are quite beautiful.

[Image: QR label for Queen Victoria statue]

Whilst the research and testing were under way, Beth Courtney, a conservator at the gallery, took the rather dry documentation content that we already had and re-scripted it to suit a mobile-using audience. Instead of listing basic facts and details about the work, Beth divided the content into a series of slightly offbeat and quirky questions or facts and presented just a sentence or two of further detail beneath:

Why does she look so grumpy?

I think the sculptor was probably aiming for stately, but she does look a bit grumpy. For much of her reign Victoria was rather a sad figure because she never recovered from the sudden death of Prince Albert when she was in her early forties. She wore black for the rest of her long life as a sign of mourning for him.

Manchester historian, writer, broadcaster and Blue Badge Guide Jonathan Schofield also recorded two-minute reflections on 12 of the works. His approach was similarly quirky: informed but thoroughly engaging, and not a little opinionated.

Following further research and costed options from developers, we decided to build and host the content ourselves on a WordPress site. We used the Manifest 1.01 theme, as it was unfussy, clean and streamlined, and the WordPress Mobile Pack plugin (http://wordpress.org/extend/plugins/wordpress-mobile-pack/) to help us deliver readable content to the widest range of mobile phones.

Sticky backed plastic

Ongoing issues around the fixing of the QR labels to the works – especially to those with listed building status – eventually led to a decision to proceed with temporary vinyl labels. The labels were trialled in June and July and we informally launched the pilot at the beginning of August. As well as the QR code, the labels included short code URLs for those users who didn’t have a QR reader installed.
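The post doesn’t describe how the short code URLs were resolved. Purely as an illustration of the idea, a tiny redirect service could map each printed code to its work page; the codes, target URLs and the use of Flask below are assumptions for the sketch, not the project’s actual implementation.

```python
# A minimal sketch of a short-code redirect service using Flask.
# The codes and target URLs are hypothetical, not the pilot's real ones.
from flask import Flask, abort, redirect

app = Flask(__name__)

SHORT_CODES = {
    "qv": "http://www.manchestergalleries.org/decodingart/queen-victoria",  # hypothetical page
    "ex": "http://www.manchestergalleries.org/decodingart/example-work",    # hypothetical page
}

@app.route("/<code>")
def follow(code):
    target = SHORT_CODES.get(code.lower())
    if target is None:
        abort(404)           # unknown code: nothing to redirect to
    return redirect(target)  # send the phone's browser to the full interpretive page

if __name__ == "__main__":
    app.run()
```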

The project had received some advance publicity from Visit Manchester and at the point of launch was promoted through Twitter, Facebook, our email newsletters and a Manchester City Council email newsletter. As expected, the visit figures increased a little following each promotion, though often this was to the desktop version of the site. A mobile analytics package from Percent Mobile enables us to differentiate between desktop and in-the-street mobile use.
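Percent Mobile handled that split for the pilot. As a rough sketch of the underlying idea only (not how Percent Mobile actually works), mobile and desktop hits can be separated by inspecting the User-Agent string of each request:

```python
# A rough sketch of splitting hits into mobile vs desktop by User-Agent
# keywords; the keyword list is illustrative rather than exhaustive.
MOBILE_HINTS = ("iphone", "ipod", "android", "blackberry", "symbian", "windows phone", "opera mini")

def is_mobile(user_agent: str) -> bool:
    ua = user_agent.lower()
    return any(hint in ua for hint in MOBILE_HINTS)

# Example user-agent strings (shortened and hypothetical)
hits = [
    "Mozilla/5.0 (iPhone; U; CPU iPhone OS 4_0 like Mac OS X) AppleWebKit/532.9",
    "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/534.24 Chrome/11.0",
]
mobile = sum(is_mobile(ua) for ua in hits)
print(f"{mobile} mobile / {len(hits) - mobile} desktop")
```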

Have we learned anything yet?

We’ve learned that more people than we imagined do know what QR codes are and how to use them. The maximum number of visits in one day so far is 32, with a daily average of 4.3. We’ve learned that visits go up at weekends and that they go down when people peel off the labels. Currently we have to re-label works in some high traffic areas every two weeks.

Works that are clearly labelled at a reasonable height off the ground and which face high traffic walkways also get more visits. The Christmas Markets, which surround six works in the pilot, have also blocked access to the codes, and this has impacted upon visit numbers.

In terms of devices, the iPhone heads the pack followed by the Blackberry 8520, HTC Desire and HTC Nexus One. In detail, we’ve seen:

  • 39 devices
  • 98.7% WiFi capable
  • 77.5% touchscreen
  • 23.5% full keyboard

It’s all about the content

We’ve had some very positive feedback about the interpretive content via Twitter, and other equally positive anecdotal feedback. Each work description has a comment option but we’ve not had any responses through these yet. Formal online and offline evaluation will take place early in the new year with the aim of reviewing the technologies and the content. From the feedback so far we think we’ve judged the content well, but we do need qualitative evaluation to confirm this. We are also aware that, despite its unfussy and quirky tone, it is still the museum offering interpretation: one or two voices, uni-directional, still didactic. Nancy Proctor, in issue 5 of Museum Identity [1], discusses the idea of the distributed network as a “[...] metaphor to describe new ways of authoring and supporting museum experiences that are:

  • conversational rather than unilateral
  • engaging rather than simply didactic
  • generative of content and open-ended rather than finite and closed”

Decoding Art does, we think, engage with the first two of these points, but it is the third that we’d like to explore further and there are already ideas in place about how we might do this.

The desktop version of Decoding Art can be found here: http://www.manchestergalleries.org/decodingart/

If you’re in the city with your mobile phone, see if you can spot any of the works included in the pilot and let us know what you think.

Reference

  1. Nancy Proctor (2010), ‘The Museum As Distributed Network’, Museum Identity, Issue 5, p. 48.

Posted in Evaluation, Museums, QR-codes, Technical, Web 2.0 | 2 Comments »

The Library Technology Market: a case for an ‘open’ conversation

Posted by guestblogger on 20th September 2010

About this Guest Post

Ken Chad is CEO of Ken Chad Consulting which has the mission of helping to ‘make libraries more effective’ through better and more imaginative use of technology. His consulting work has been wide ranging. He has worked with academic and public libraries and with various government and sector organisations in the UK and internationally. His published articles and conference contributions have focused on the strategic impact on libraries of technology driven change. Ken can be contacted at Ken@kenchadconsulting.com.


The library technology market: a case for an open ‘conversation’

Over the years a number of resources, including books, articles and websites, have been available to help libraries get the best from the opportunities offered by technology. For example, back in the 1980s Juliet Leeves published ‘Library Systems: a buyer’s guide’. Each April, in Library Journal, Marshall Breeding publishes a review of the library automation marketplace. His ‘Library Technology Guides’ website is also an invaluable resource despite its US bias. In the UK the ‘eGovernment Register’, maintained by the London Borough of Brent, published a listing of local authority systems (including some library related ones) on their (now defunct) website. UCISA does a similar job for Higher Education (HE) through its ‘Corporate Information System’ (CIS) annual survey.

However, all these resources are ‘closed’ to some degree. They are also very incomplete as far as library technology is concerned. The eGovernment Register ceased in June this year and passed the baton to the SOCITM application software index. However, this is currently even more closed, with very restricted access and editing rights. Marshall Breeding says that he is ‘solely responsible for all content’ on the Library Technology Guides website ‘and for any errors it may contain’.

It seemed to me that it would be possible to create something more comprehensive, accurate and useful by taking a very open and inclusive approach: something that harnessed the capabilities and goodwill of the library community. I had read David Weinberger’s marketing book ‘The Cluetrain Manifesto‘ some years ago and I think his notion back in 1999 that ‘markets are conversations’ rings true more than a decade later: ‘Through the Internet, people are discovering and inventing new ways to share relevant knowledge with blinding speed. As a direct result, markets are getting smarter’. Perhaps then we could enhance the quality of the technology ‘conversation’ in the library domain. Maybe being ‘smarter’ could take at least some of the cost and ‘friction’ out of the market and make it easier for everyone. Moreover it seemed to me everyone could benefit from this open and inclusive approach, not least in having the content freely available for anyone to re-use.

I started with simple lists of who had which Library Management System (LMS – or Integrated Library System (ILS) in American parlance). The truth was that, having worked in the library software business for over 20 years, I actually knew most of it by heart! My job was made easier, for HE at least, because I had been closely involved in the much-cited JISC/SCONUL ‘LMS study’, which is a great source of data and analysis. During the work on the study vendors were very open and helpful about giving me their customer lists and information about their business and strategies. SCONUL were enthusiastic about getting more value out of the study by putting it online in a more interactive format than a PDF. I persuaded them that a wiki was a simple, inexpensive and effective tool to help in that goal. It would also allow the community itself to keep the information and analysis current. A further possibility was to expand on the original study’s coverage, which was very focussed on the LMS. The Higher Education Library Technology wiki was born.

The underlying wiki technology (Wikispaces) is very easy and inexpensive to set up and maintain, and we soon had a good part of the SCONUL LMS study uploaded. We chose Wikispaces too because, after some serious evaluation, we judged it easier to maintain and edit than alternatives such as MediaWiki (the platform for Wikipedia). We knew the proportion of active contributors would be small. That is a fact of ‘Web 2.0’ life. I knew about Jakob Nielsen’s ‘90-9-1 Rule’ for large-scale online communities and social networks. He argues that 90% of users are ‘lurkers’, 9% of users contribute intermittently and only 1% of users are heavy contributors. With this in mind we didn’t want to make the task of contributors harder than absolutely necessary. It was uncertain whether our small-scale community would fare worse in terms of contributors. In fact it’s been about the same, but with a higher proportion of ‘intermittent’ contributors. I also had in mind a comment, I believe attributed to one of the founders of Flickr, to the effect that an important factor in building critical mass and success was putting tremendous effort early on into encouraging and supporting contributors. We believe that’s important, and our role in Ken Chad Consulting as ‘wikimaster’ is all about enabling things and keeping up the momentum. It’s most certainly not about control. We haven’t had a single case of spamming or abuse (though of course we have tools to deal with them). We also know that sometimes it takes time for resources to get embedded in the community’s consciousness. The wikimaster has an important sustaining role.

As well as a Library Technology wiki for HE, we’ve created one for local government public libraries. Clearly there is overlap, but there are significant differences too. For example, HELibTech has much more emphasis on the management of e-resources. We felt that the audiences would differ significantly and this has been the case. This leads me on to another point: we have an inclusive view of our audience. We welcome contributions from librarians and vendors – and indeed anyone with an interest. Just sign up and get started.

[Image: Local Government Library Technology wiki]

Finally, how valuable are these wikis to the communities they are designed to serve? Feedback so far has been good. For example, when SCONUL held a ‘community event’ about its recent study into the feasibility and business case for shared services, they created an entry on HELibTech. We saw a significant rise in traffic, some of which has been sustained. Clearly, though, with communities based around markets of roughly 180-200 institutions each in UK HE and public libraries, we are not expecting a huge audience. Both wikis have a small but growing number of ‘members’ and, as the community of ‘lurkers’ grows, so does the number of contributors. An important factor in determining value is to realise that it is an equation: using modern tools we can deliver valuable services effectively and cheaply to relatively small communities. Web 2.0 tools are getting better all the time and (mostly) less expensive. Costs are often less a factor of the purchase price than of the cost of maintaining the service. Enabling the community to keep the content up to date is much less expensive than a printed annual guide, survey or ‘closed’ website that incurs heavy editorial and production costs. We think it’s more accurate too. Feel free to join in the ‘conversation’…

Posted in Guest-blog, Libraries, Technical, wikis | 1 Comment »

Liver and Mash: Mashed Library in Liverpool

Posted by Marieke Guy on 14th April 2010

The Mashed Library series was mentioned to delegates at the UKOLN/MLA Web 2.0 workshops.

The event is aimed at those who work in libraries and are interested in how they can use technology to deliver their services. The official byline is “bringing together interested people and doing interesting stuff with libraries and technology“. Although the event is looking at ‘mashing up’ services and using data sets, you don’t have to be a ‘techie’ to attend, and those who aren’t developers but have fair technical skills will still enjoy the event. The series is organised by people in the Higher Education sector but it has a lot to offer those in the cultural heritage sector too.

The first Mashed Library event (Mashed Libraries UK 2008) was held on 27th November 2008 at Birkbeck, University of London. Since then there have been events at the University of Huddersfield (Mash Oop North, 7 July 2009) and Birmingham City University (Middlemash, 30 November 2009).

This year the event is taking place in Liverpool on Friday 14th May and registration has just opened. Places are limited so sign up as soon as possible.

For more information on the series of events keep an eye on the wiki or the Ning group.

Posted in Libraries, Technical | Comments Off

Google Wave: What’s all the Fuss About?

Posted by Marieke Guy on 15th June 2009

Recently there has been a lot of commotion over Google’s new offering: Google Wave.

Where can you see it in action?

The full developer preview (80 minutes long) given at the Google I/O Symposium is available to watch. If you haven’t got time to spare to view the full demo video (though it is a great show!) then the highlights are also available. The beta version is currently undergoing extensive testing and the final version is expected to be released later in 2009.

What is it?

Google Wave already has an extensive Wikipedia entry. It is described as:

A web based service and computing platform designed to merge e-mail, instant messaging, wiki, and social networking. It has a strong collaborative and real-time focus supported by robust spelling/grammar checking, automated translation between 40 languages, and numerous other extensions.

Users create a ‘wave’, which is very much like a conversation on a particular topic (or an email or message board thread). To this wave they can add users, documents and ideas. The users can then collaboratively edit the resources and create spin-off waves. All activity is ‘recorded’ and you can choose to play back a wave to see how it was created. The aim is a more free-flowing, informal and linked form of communication.

A useful guide to the key features is given on Pocket-lint.

Some of the important factors that will shape its delivery are:

  • Its aim is first and foremost to rethink the way we all communicate with each other online
  • It is an open source product and platform – which means that there are going to be plenty of plugins and add-ons for it. Google have also agreed to allow organisations to create their own internal versions of Wave.
  • The text typed appears in real time – which makes it unlike other messaging software we are familiar with.

So what is its relevance to the Cultural Heritage Sector?

If Google Wave delivers what it has promised then it will have an effect on all online activity, and quite possibly all communication activity. The specific implications it has for the cultural heritage sector are still a little hazy, but things to consider are:

  • If Waves are a new form of communication then they will need managing and preserving. This has implications for those involved in records management and archival activities.
  • Google Wave involves a further merging of spoken and written conversations; however, as activities take place in one particular place (rather than all over the Web, as happens now), there may be opportunities for better organisation of communication.
  • Google Wave could potentially have an effect on how libraries provide their enquiry and advisory services.
  • Google Wave may well have a big effect on other smaller communication activities such as Twitter and on services like Microsoft SharePoint.
  • Google Wave is likely to include a Google Book Search facility. Although some have reported that this may be a negative for libraries, it is quite likely that it won’t take long before library developers offer their own plugins. The open API will easily allow this.

There will also be significant implications for those involved in learning and teaching, e-learning, research, remote working and remote learning.

For many, Google Wave is just the next step when it comes to the Internet. For those of us who have been working with the Web for some time, change has become so inevitable that a period of calm almost seems strange. Those working in Cultural Heritage will find it helps to stay aware of the direction in which communication and the Web are moving (have a look at this useful explanation of the evolution of the Web). As they say, well informed means well armed!

Posted in Technical | 2 Comments »

Clouds, Libraries and Museums

Posted by Brian Kelly on 28th April 2009

‘Clouds’ Workshop Session at the MW2009 Conference

Back in January Paul Walk and I submitted a proposal for a paper on APIs and the Cloud to the Museums and the Web 2009 (MW2009) conference, as we both felt that this was an area of increasing importance to the museums sector. The proposal was accepted, but in addition to the paper (which Paul Walk wrote) the conference organisers asked us to run the session as an interactive workshop, rather than as a formal presentation.

Unfortunately Paul was not able to attend the conference itself, so I facilitated the workshop by myself. The workshop, entitled “SaaSy APIs (Openness in the Cloud)“, followed on from a workshop on “What is your museum good at and how do you build an API for it?” during which Richard Morgan, the Web Technical Manager at the Victoria and Albert Museum, described the APIs which have been provided at the V&A in order to open up access to the museum’s collections. Since Richard had addressed the issues associated with the provision of APIs from within an organisation, I decided (following discussions I’d had with Richard prior to the conference) to focus my session on the use of cloud services by museums. Note, incidentally, that Frankie Roberto has included a review of Richard’s session in his Museums and the Web 2009 roundup, as has Sebastian Chan in his post on the Fresh and New(er) Powerhouse Museum blog on MW2009 Clouds, Switches, APIs, Geolocation and Galleries – a shoddy summary.

Paul’s paper “Software as a Service and Open APIs” provided a valuable primer, for policy makers and those new to this area, on what SaaS (and related terms such as IaaS, PaaS and EaaS) means and what the Cloud is. The wider issues, such as clarifying the specific benefits which Cloud services can provide and the associated risks, formed the main points of discussion at the session, and it was pleasing that the discussions appeared to be of interest to both the policy makers and managers and the developers in the session.

Clouds and Museums

The workshop session, which explored the policy issues and risks associated with the use of Cloud services, seemed to have been very timely. I attended the Technology Strategies session at the conference and was particularly interested in the talk on Museums and Cloud Computing: Ready for Primetime, or Just Vaporware? (and note that the paper and the accompanying slides are available on the MW 2009 Web site).

This presentation described how developers at the Indianapolis Museum of Art have been making use of Amazon S3 and EC2 cloud services in order to provide the ArtBabble video service. I have to admit that I have previously encountered developers (although perhaps in the HE rather than the museums sector) who seem to insist that their IT infrastructure needs to be located locally (possibly under their desk). It was good to see developers who seemed to be comfortable with the notion of their storage and their computational cycles being provided by a commercial company. It was also reassuring to see a speaker who acknowledged that the cost of providing production services is a real issue today, and to hear that the costs of disk storage, video processing and delivery of video content (at about $350/month) were felt to be very reasonable.
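The MW2009 paper describes the Indianapolis Museum of Art’s own setup in detail; purely as a sketch of the general pattern rather than their implementation, pushing a processed video into S3 might look like this with the boto3 library and a hypothetical bucket:

```python
# A minimal sketch of publishing a transcoded video to Amazon S3 with boto3.
# The bucket name and file paths are hypothetical; this is not the
# ArtBabble team's actual pipeline.
import boto3

s3 = boto3.client("s3")  # credentials are read from the environment or AWS config
s3.upload_file(
    Filename="output/curator-interview.mp4",   # local, already-transcoded file (hypothetical)
    Bucket="example-video-bucket",             # hypothetical bucket name
    Key="videos/curator-interview.mp4",        # object key used when serving the file
    ExtraArgs={"ContentType": "video/mp4"},
)
print("Uploaded; the video can now be served from S3 (or via a CDN in front of it).")
```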

Clouds and Libraries

OCLC have recently announced that they are entering the library system marketplace with a Web-based suite of library system modules. The press release describes:

OCLC’s vision [a]s similar to Software as a Service (SaaS) but … distinguished by the cooperative “network effect” of all libraries using the same, shared hardware, services and data, rather than the alternative model of hosting hardware and software on behalf of individual libraries. Libraries would subscribe to Web-scale management services that include modular management functionality.

And it should be noted that an article in the Library Journal described this move as “a bold move that could reshape the library automation landscape“.

Where To From Here?

It struck me that cloud computing and the use of APIs were the major technical talking points at the Museums and the Web conference this year (and although it could be argued that this was only because I attended sessions on these topics, it is also true that there were several informal sessions in which museum developers discussed these topics in more detail).

But we should also know that there is no silver bullet, and that if organisations leap into Cloud computing without carefully considering the reasons why, the areas in which Cloud computing is best applied and the non-technical aspects, there will be an inevitable backlash as Cloud computing continues its current rise up the Gartner hype curve, reaches the peak of over-inflated expectations and then descends into the trough of despair.

To help avoid such dangers I feel we need to encourage open debate on this issue and to share experiences, not only of the successes but also of any difficulties experienced – and perhaps even the failures. Would anyone like to start the ball rolling by describing plans to move services to the Cloud, or perhaps summarise services which have already moved there? Is this new to the UK’s cultural heritage sector (perhaps we are concerned that data protection legislation prohibits us from making use of services outside the UK)? Or perhaps it is taking off in particular sectors – the smaller organisations who do not have significant levels of technical resources in-house? What are your views on Cloud services in the cultural heritage sector?

Posted in Technical | 1 Comment »

What Can OPML Offer?

Posted by Brian Kelly on 17th April 2009

The importance of RSS as a format for allowing content to be syndicated, embedded on other Web sites and easily viewed on mobile devices has been emphasised at UKOLN workshops for the cultural heritage sector. But what if you make use of an RSS reader (such as, say, Netvibes or Pageflakes) and wish to move to an alternative RSS tool (such as, say, Feedreader or NetNewsWire)? You may wish to do this because of performance problems with your preferred RSS reader, because you’d prefer to make use of a desktop RSS reader rather than a Web-based tool, or because you wish to read RSS feeds on an iPhone or iPod Touch device and wish to integrate this with a desktop client.

OPML (Outline Processor Markup Language) provides an import and export format for RSS readers, allowing groups of RSS feeds to be moved between RSS tools without the need for the time-consuming process of manually adding feeds if you wish to use (or perhaps just evaluate) a new tool.
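For the curious, an OPML subscription list is just a small XML file of nested outline elements. As a rough sketch (the feed titles and URLs are invented examples, not a real export), it can be read with nothing more than Python’s standard library:

```python
# A rough sketch of reading an OPML export with Python's standard library.
# The embedded OPML and its feeds are invented examples, not a real export.
import xml.etree.ElementTree as ET

OPML = """<?xml version="1.0" encoding="UTF-8"?>
<opml version="1.0">
  <head><title>My feeds</title></head>
  <body>
    <outline text="Cultural heritage">
      <outline text="Cultural Heritage blog" type="rss"
               xmlUrl="http://example.org/cultural-heritage/feed"/>
      <outline text="Museum news" type="rss"
               xmlUrl="http://example.org/museum-news/rss"/>
    </outline>
  </body>
</opml>"""

root = ET.fromstring(OPML.encode("utf-8"))
for outline in root.iter("outline"):
    url = outline.get("xmlUrl")
    if url:  # folder outlines have no xmlUrl attribute
        print(outline.get("text"), "->", url)
```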

An example of how this can be achieved is illustrated in the two accompanying images.

The first image shows the File menu in the Feedreader desktop RSS reader. The menu contains items for both importing and exporting an OPML file.

If Feedreader is your current RSS reader you can export the RSS feeds (and corresponding structure of the folders used to manage such feeds) to an OPML file.

The second image shows how an OPML file can be imported into a different RSS reader. In this case the import and export functions of the Web-based Google Reader are shown.

Easy, isn’t it?

Posted in Technical | Comments Off