'I don't see what you mean': Broadening participation through co-created inclusive digital museum audio interpretation

About the project

This research will examine how museums can transform the way they think about and create digital audio interpretation for their collections, to enhance inclusion and access for all audiences. Audio description (AD) is traditionally defined as a verbal narrative that conveys, for people who are blind or partially blind (BPB), information otherwise available only through vision. In both the UK and US, museums are legally obliged to ensure equitable access to their collections. AD is a key tool for achieving this for BPB audiences, but museums need to dramatically improve AD provision: the charity VocalEyes found that only 5% of UK museums mentioned AD provision on their websites.

Museums could transform accessibility through apps such as Smartify, which currently gives over three million users worldwide online access to more than two million works of art, either at home or via a QR code in the museum building. Of these two million works, only a handful are offered with AD, and all come from just two institutions: the Smithsonian National Portrait Gallery (US) and the Royal Holloway Picture Gallery (UK).

The need goes beyond inclusive digital access for BPB people: this project is about enhancing the museum experience for everyone. The pandemic has spotlighted both the scope of, and desire for, digital participation, and the massive opportunity for museums to grow their audiences. Our previous research has shown that AD benefits not only people who are BPB, and that high-quality online access offers an inclusive way to reach audiences globally.

The UK-US research team brings together experts in psychology, aesthetics and design, critical disability studies, cultural diversity and translation studies, and includes members who are partially blind and non-blind, neurotypical and with learning differences, and from different ethnic and cultural backgrounds. With our sector-leading digital heritage partners (Smithsonian National Portrait Gallery, Royal Holloway Picture Gallery, Smartify and VocalEyes), this research will challenge current AD practice, in which sighted curators/describers produce AD for BPB audiences.

The project will develop and extend AD as a tool for all visitors, whether blind, partially blind or sighted. It will do this by creating and evaluating the Workshop for Inclusive Co-created Audio Description (W-ICAD) model, in which AD creation is led by partially blind co-creators collaborating with blind and sighted co-creators. The W-ICAD model will give museums a streamlined way to create new AD, extending their digital provision and boosting inclusion. The research will also compare how audiences in the UK and US experience AD, so that AD creation can take account of varying cultural needs and expectations.

Funder

Arts and Humanities Research Council

Investigators

Dr Alison Eardley, Principal Investigator (University of Westminster)
Dr Lindsay Bywood, Postdoctoral Research Fellow (University of Westminster)
Professor Hannah Thompson, Co-Investigator (Royal Holloway, University of London)
Dr Deborah Husbands, Co-Investigator (University of Westminster)

Project partners

Smithsonian National Portrait Gallery (US)
Watts Gallery and Artists Village (UK)
VocalEyes (UK)
Access Smithsonian (US)