Learn about this new standard for digitised images and other artifacts, which is transforming the cultural heritage sector.
An Introduction to the International Image Interoperability Framework
The International Image Interoperability Framework (IIIF, pronounced “triple-eye-eff”) is a set of application programming interfaces (APIs) based on open web standards and defined in specifications derived from shared real-world use cases. It is also a community that implements those specifications in software, both server and client. This article is an extract from a new paper delivered by our Technical Director, Tom Crane. The paper provides a brief and fairly non-technical overview of the standards and the benefits they bring to image delivery and content sharing.
Two decades of delivering collections on the web has taken a lot of effort. Institutions have been building and buying image servers, image viewers, page turners, discovery applications, learning environments and annotation tools to make their content accessible to the world, and to let the world interact with it. Many wonderful software tools have been made, and many beautiful websites enjoyed by scholars and the interested public.
But the fruits of these labours have been difficult to share and reuse. One institution’s image delivery is not compatible with another’s. The same problems are solved over and over again in different silos of non-interoperable content. If the British Library develops a viewer for their digitised books, I can’t use it to view the Bodleian’s manuscripts. Their content is part of a different technology silo. If Stanford develops a fantastic image comparison and annotation tool, it might be perfect for one of my collections too, and I’d like to use it straight away. But my vendor’s asset management system comes with a different viewer, or we’re stuck with the delivery and discovery platform we built a few years ago. We can see the kinds of things that others are doing with image-based content, and we’ve got lots of ideas of our own, but our content is stuck in our silo and it’s expensive to jump to another silo. We can’t interoperate.
We put a lot of effort into describing the rich structure of a complex digitised work, so that we can drive viewing tools in our discovery or learning environments. We use this structure to enable users to visualise and navigate the work, understand its physical nature and layout, and view transcriptions, commentary and other annotations that we might have available for it, in the right place and with an intuitive user interface. But that great effort of description is useless beyond our walls. We do have Descriptive Metadata, and we’re getting good at sharing that, allowing interoperability with aggregators and the web of linked data using common vocabularies and data models. But that’s a different problem, a different kind of metadata. It has nothing to say about the complex structure of a digitised work, or how a viewer application gets pixels on screen for that work and lets a user read the pages, look at the brush strokes, see the film grain, read the transcriptions, sift through the commentary. The Descriptive Metadata doesn’t let us refer to parts of the work, down to the tiniest detail (interesting marginalia, a single word on a page), and make statements about those parts in the web of linked data.
IIIF provides a family of standards to address these problems: a model for describing digital representations of objects, and a format that software (viewing tools, annotation clients, websites) can consume to render those objects and the statements made about them.
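To give a flavour of how simple the interoperable surface is, the IIIF Image API expresses every image request as a predictable URL template: `{base}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}`. The sketch below builds such URLs in Python; the base URL and identifier are hypothetical, and real servers publish their own endpoints, but any IIIF-compliant image server would answer requests shaped like these.

```python
def iiif_image_url(base, identifier, region="full", size="max",
                   rotation="0", quality="default", fmt="jpg"):
    """Build a IIIF Image API request URL from its path components.

    Defaults ask for the whole image ("full") at maximum size ("max"),
    unrotated, in the server's default quality, as a JPEG.
    """
    return f"{base}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"


# The base URL and identifier here are made up for illustration.
# The full image at maximum size:
print(iiif_image_url("https://example.org/iiif", "page-001"))
# -> https://example.org/iiif/page-001/full/max/0/default.jpg

# A cropped region (x,y,w,h in pixels) scaled to 500 pixels wide --
# the kind of request a deep-zoom viewer issues for each tile:
print(iiif_image_url("https://example.org/iiif", "page-001",
                     region="100,200,800,600", size="500,"))
# -> https://example.org/iiif/page-001/100,200,800,600/500,/0/default.jpg
```

Because the URL syntax is fixed by the specification rather than by any one vendor, a viewer written against one institution’s image server works unchanged against another’s.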