Responsible Generative AI in Australian Government Agencies: Insights from Risk Management in the Copilot Deployment
Funding
Socrates: Software Security with a focus on critical technologies. | Funder: Cyber Security Research Centre Limited | Grant ID: C21-00267
Pagination
1-48
Language
eng
Research statement
Background
The work has been supported by the Cyber Security Research Centre Limited, whose activities are partially funded by the Australian Government's Cooperative Research Centres Programme.
Contribution
The report examines genAI risks related to misinformation, social inequality, impacts on team dynamics, information oversharing within organisations, and malicious use, along with the mitigation measures commonly proposed to address them. Interviews with public officers involved in risk management revealed that mitigating risks related to misinformation and social inequality largely depended on end users taking responsibility for human review and fact-checking. Risks associated with information oversharing and cyber security, by contrast, were addressed through readiness testing and contractual assurances from vendors. Beyond these risks, respondents also expressed awareness of longer-term implications, such as the effects of reliance on genAI tools on human cognitive abilities.
Significance
The report recommends strengthening fact-checking and human review to counteract the increasing capability of genAI tools to mislead or influence users, or to foster over-reliance, particularly among users who lack the time, knowledge or experience to verify outputs. It also recommends measures to address impacts on team dynamics and the long-term implications of genAI use for human cognitive abilities.