Deakin University
File(s) under embargo

What Can Artificial Intelligence Do for Refugee Status Determination? A Proposal for Removing Subjective Fear

journal contribution
posted on 2024-06-02, 22:54, authored by N Kinchin and Davoud Mougouei
Abstract

The drive for innovation, efficiency, and cost-effectiveness has seen governments increasingly turn to artificial intelligence (AI) to enhance their operations. The significant growth in the use of AI mechanisms in migration and border control makes its application to the process of refugee status determination (RSD), which is burdened by delay and heavy caseloads, a very real possibility. AI may have a role to play in supporting decision makers to assess the credibility of asylum seekers, as long as it is understood as a component of the humanitarian context. This article argues that AI will only benefit refugees if it does not replicate the problems of the current system. Credibility assessments, a central element of RSD, are flawed because the bipartite standard of a ‘well-founded fear of being persecuted’ involves consideration of a claimant’s subjective fearfulness and the objective validation of that fear. Subjective fear imposes an additional burden on the refugee, and the ‘objective’ language of credibility indicators does not eliminate the challenges decision makers face in assessing the credibility of other humans when external, but largely unseen, factors such as memory, trauma, and bias are present. Viewing the use of AI in RSD as part of the digital transformation of the refugee regime forces us to consider how it may affect decision-making efficiencies, as well as its impact(s) on refugees. Assessments of harm and benefit cannot be disentangled from the challenges AI is being tasked to address. Through an analysis of algorithmic decision making, predictive analysis, biometrics, automated credibility assessments, and digital forensics, this article reveals the risks and opportunities involved in the application of AI in RSD.

On the one hand, AI’s potential to produce greater standardization, to mine and parse large amounts of data, and to address bias holds significant promise for increased consistency, improved fact-finding, and corroboration. On the other hand, machines may end up replicating and manifesting the unconscious biases and assumptions of their human developers, and AI has a limited ability to read emotions and process impacts on memory. The prospective nature of a well-founded fear is counter-intuitive if algorithms learn from training data that is historical, and an increased ability to corroborate facts may shift the burden of proof to the asylum seeker. Breaches of data protection regulations and human rights loom large. The potential application of AI to RSD reveals flaws in refugee credibility assessments that stem from the need to assess subjective fear. If the use of AI in RSD is to become an effective and ethical form of humanitarian tech, the ‘well-founded fear of being persecuted’ standard should be based on objective risk only.

History

Journal

International Journal of Refugee Law

Volume

34

Season

October/December 2022

Pagination

373-397

Location

Oxford, Eng.

ISSN

0953-8186

eISSN

1464-3715

Language

eng

Publication classification

C1.1 Refereed article in a scholarly journal

Issue

3-4

Publisher

Oxford University Press (OUP)
