Ms Dianne Brown¹, Ms Komathy Padmanabhan
¹Monash University, Australia
Biography:
Dianne is a research data governance consultant working with sensitive-data research, such as clinical quality registries (CQRs), and with the broader research community at Monash University. Dianne began her career working globally for a strategy consulting firm and joined Monash in 2015, where she developed a Health Research Data Governance Framework that took a novel value chain approach to how all data activities should be governed. She coordinates communities of practice (COPs) in data linkage, runs health research data dictionary training at the University, and redesigned Monash's DMP+ and research data website.
Abstract:
Situation
Artificial Intelligence (AI) presents unparalleled opportunities to accelerate knowledge creation or lower its cost. There are, however, risks associated with AI development and use. The difficulty in fully identifying, describing and assessing the magnitude of these risks stems from the comprehensive impact AI may have on society and from the novelty of the harms that may be faced. Possible harms may also have a temporal nature, situated upstream, midstream or downstream from people’s immediate actions.
AI risk has been framed broadly in a number of ways by governments, academia and the corporate sector. But for research universities, with distributed decision-making and other hallmarks of complex systems, it is difficult to frame these risks and then assess them against the opportunity in each research situation.
Task/ Action
To develop a tool that overcomes this complexity in identifying and managing AI risk in research by focussing initially on where risk is highest: the data risk.
Result
The first steps, as part of research data governance, ensure that the correct controls are in place when the risk of AI use is high, either because:
sensitive research data is being used; or
any research data is being used where the AI being developed or deployed is considered high risk (e.g. systems affecting public health and safety, critical infrastructure, law enforcement, etc.).
This approach to assessing AI research data risks incorporates the principles of research ethics, integrity, cybersecurity, privacy and IP, aligning AI safety with research culture.