Using AI tools in a virtual desktop for transcription and translation.

Ms Yvette Wharton1, Mr James Love1, Mr Noel Zeng1, Dr Nidhi Gowdra1, Mr Sean Matheny1

1The University of Auckland, New Zealand

Biography:

Yvette Wharton works at the Centre for eResearch, Waipapa Taumata Rau, University of Auckland. She is the Research Data Management Programme Lead, working on the Secure Research Environment and machine-actionable Data Management Planning initiatives. http://orcid.org/0000-0002-6689-8840

Abstract:

Transcribing audio into text is part of the research process for many researchers. Manual transcription is time-consuming, so an increasing number of researchers use software to transcribe and translate audio recordings such as interviews or focus groups. Recognising this need, we aim to enable researchers to use AI responsibly while remaining mindful of data sovereignty issues and responsive to the needs of a multilingual society.

In collaboration with the core Nectar team, the Centre for eResearch (CeR) has piloted a simple, easy-to-use, locally hosted, ephemeral digital transcription desktop. Running on existing CeR-managed local UoA Nectar infrastructure, the desktop enables researchers to transcribe digital audio content safely and efficiently using AI-powered tools. In this talk, we will share how we delivered this service, including bidding for funding, technical and ethical considerations, desktop environment development, support material creation, and outcomes.


Categories