This chapter examines the right to an effective remedy under international human rights law and what obligations this may place on states in using AI, particularly in relation to vulnerable persons.
International human rights law requires states to put in place a framework providing an effective remedy where a human rights violation has occurred. Specifically, Article 2(3) of the ICCPR provides that:
'… all State parties commit themselves to ensuring that any person whose rights or freedoms are violated "shall have an effective remedy" and that any claims to such a remedy shall be determined by "competent judicial, administrative or legislative authorities".'
United Nations human rights bodies have emphasised that administrative remedies, not only judicial remedies, are an important means of providing ‘effective remedies’ to people whose rights are breached, because such remedies are accessible, affordable and timely.
Similar principles are set out domestically in some jurisdictions. For instance, the Australian Department of Industry’s Ethical Principles for AI include a reference to effective remedies:
'When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or output of the AI system. … This principle aims to ensure the provision of efficient, accessible mechanisms that allow people to challenge the use or output of an AI system, when that AI system significantly impacts a person, community, group or environment. … Particular attention should be paid to vulnerable persons or groups.'
This raises the question of what types of review mechanisms are best suited to providing recourse for those affected by AI. In Australia, for instance, considerable reliance has been placed on class actions and judicial review in response to automated decision-making. However, there are questions as to whether these are appropriate primary mechanisms for redress. Thought therefore needs to be given to the operation of low-cost, accessible mechanisms such as merits review, anti-discrimination commissions (which typically utilise alternative dispute resolution) and ombudsman investigations. It is important not only that individuals are heard in challenges concerning AI, but that they feel they have been heard, and that the review systems are fair and transparent.
As part of considering appropriate review mechanisms for AI, thought must also be given to vulnerable groups. This is a complex question: individuals can be vulnerable permanently, for long periods, or may slip ‘in and out’ of vulnerability and need assistance only briefly. Contrast, for instance, the situation of a person with a permanent illness or disability who is unable to undertake full-time employment, and may therefore require ongoing social support, with that of a person who loses their job through a company restructure or closure and may need support only until another position becomes available. An individual may also be subject to inter-related vulnerabilities comprising economic vulnerability, personal vulnerability and contextual vulnerability. For example, an individual unable to work for a period may also have physical or psychological vulnerabilities, such as a mental illness. Contextual factors such as institutional responses, the design of AI systems and digital access issues may then further affect their ability to apply for review of adverse decisions.
This chapter examines these issues holistically and draws some conclusions as to what the right to an effective remedy under international human rights law means for the implementation of AI.