Jonathan McCully

Spotting There Is A Problem


In order to take a human rights case to court, you have to first identify the human rights problem that you are asking the court to (re)solve.


In human rights cases, this problem will usually be in the form of a harm or threat of harm to one or more of the legally binding rights set out under human rights law.


The rights that an individual can legally enforce will depend on the law that is applicable to a specific context (which is explained in more detail here).


However, in order to aid the process of identifying a human rights problem, this article will examine the international human rights principles that are most commonly engaged by machine learning and similar algorithmic technologies.


It is important to note that the assessment of whether a person has a strong human rights case is a two-step process. Broadly speaking, it requires an examination of the following questions:

  • Has a human right been engaged? This involves a determination of whether a human right has been restricted, denied, threatened or otherwise interfered with.

  • Can this be justified by the other party? The law permits certain restrictions on human rights. This means an assessment has to be made of whether the party responsible for engaging the right(s) was lawfully justified in doing so.

Assessing the strength of a human rights claim in a particular scenario can be a complex question, so it is always good to consult a lawyer to help do this.



Has a human right been engaged?


There are a range of human rights protected by international human rights law. It may be that one or more are engaged through the process of developing, designing, training, testing, using or maintaining a system that relies on machine learning or algorithmic processes.


Identifying whether a right has been engaged involves looking at all the acts, decisions, behaviours or omissions that take place around these systems.


For instance, human rights can be engaged through the collection or use of data to build or run a system.


Alternatively, human rights might be engaged by an action, omission or measure taken (sometimes solely) on the basis of the output of a system. This might, for example, be a decision to sentence someone to prison or to determine what social security payments they are entitled to.


They can also be engaged by the more general impact their deployment or use has on society. For instance, human rights law has recognised (mostly in relation to free speech) a concept known as the “chilling effect.” This is where certain circumstances have the result of hindering or deterring people from exercising their rights. For example, having machine learning technologies around the home might lead to the feeling or fear of constantly being watched, and could prevent people from engaging in certain activities or behaviours protected by human rights.


This article will briefly summarise some of the human rights that are particularly likely to be engaged through the adoption of machine learning technologies.


The Right to Equality and Non-Discrimination


It is a central tenet of human rights law that all people should be able to enjoy their human rights and freedoms on an equal basis. It is a cross-cutting human rights obligation.


This means that countries are not permitted to act in a discriminatory manner or pass discriminatory laws when it comes to individual rights. It also means that countries must put in place laws that protect individuals from discrimination.


There are a number of ways in which unlawful discrimination can manifest itself:

  • Direct Discrimination: this is the act of treating people in analogous or similar situations differently on the basis of an identifiable characteristic or status (e.g. age, disability, gender, race, religion, sex, sexual orientation). This difference in treatment might be in the form of distinguishing, excluding, restricting or preferring a person on the basis of this characteristic or status. For example, using an automated system against one group and not another can amount to direct discrimination, as can an automated system using a protected characteristic or status to treat someone differently from others in similar circumstances. In this case study, for example, a computer system directly discriminated by refusing or deliberately omitting applications to medical school on the basis of race and gender. Direct discrimination can also occur where a person is treated differently because they are perceived, by an algorithm or otherwise, to have a certain characteristic or status.

  • Indirect Discrimination: this is where a general measure that appears neutral in fact has a particularly prejudicial and discriminatory effect or impact on a particular group. For example, some “neutral” factors weighed by an algorithm may in practice serve as a proxy for a protected characteristic or status (see the illustrative sketch after this list). In this case study, for example, a gig economy algorithm that was seemingly neutral had a disproportionate and discriminatory impact on trade union workers’ ability to access work.

  • Discrimination by Association: this is where a person is discriminated against on the basis of the characteristics or status of a different individual somehow connected to that person. For example, an algorithm that flags individuals for extra scrutiny by police because of the religion of their parents will amount to discrimination by association. It has been suggested that association could include having been assumed (or profiled) to have a certain characteristic or status.

  • Proactive Action: this is where there has been a failure to act in situations where different treatment is necessary in order to correct for inequality. For example, a government might need to ensure that its system of welfare payments sufficiently takes into account disability in light of the inequalities of need brought about by this protected status.
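
To make the proxy problem concrete, the following is a minimal, hypothetical sketch (in Python, using numpy and scikit-learn, with invented data, feature names and probabilities) of how a model that is never shown a protected characteristic can still reproduce a discriminatory pattern, because a seemingly “neutral” feature carries that characteristic into the model:

```python
# Hypothetical sketch of indirect (proxy) discrimination. All data and
# variable names are invented for illustration; this is not drawn from
# any real system or case.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected characteristic (never shown to the model).
group = rng.integers(0, 2, size=n)

# "Neutral" feature that is strongly correlated with the protected group,
# e.g. a postcode shaped by historical housing segregation.
postcode = np.where(rng.random(n) < 0.8, group, 1 - group)

# Historical decisions were biased against group 1.
past_approved = (rng.random(n) < np.where(group == 0, 0.7, 0.3)).astype(int)

# Train only on the "neutral" feature: the protected attribute is excluded.
model = LogisticRegression().fit(postcode.reshape(-1, 1), past_approved)
predicted = model.predict(postcode.reshape(-1, 1))

for g in (0, 1):
    rate = predicted[group == g].mean()
    print(f"approval rate for group {g}: {rate:.0%}")
# The output shows a large gap between the groups even though the model
# never "saw" the protected characteristic: the postcode carried it in.
```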

In recent years, there have been ground-breaking studies that have exposed the bias of machine learning and automated systems by the likes of Timnit Gebru, Joy Buolamwini, Rediet Abebe, Inioluwa Deborah Raji, Safiya Noble, Ruha Benjamin, Simone Browne, Meredith Broussard, Caroline Criado Perez, and Cathy O’Neil.


These researchers have shown how machine learning and automated systems can increase and perpetuate inequality in ways that violate the right to non-discrimination.


In their work, they have highlighted that discrimination can happen at both a substantive and a structural level:

  • Substantive: the systems themselves, through the decisions they make and the inferences they draw, can discriminate. They can treat individuals in similar situations differently, including on the basis of race, gender, political opinion, nationality, religion and other protected characteristics. They can also (without necessarily being programmed to) have a disproportionate and discriminatory impact or effect on protected groups and communities. Moreover, machine learning technologies have consistently been found to struggle with understanding the full context around them, hindering their potential to take relevant differences into account to correct for inequality in a particular context. This spans the whole spectrum of unlawful discrimination outlined above.

  • Structural: the systems are also being used within processes and structures that are already discriminatory. This means they are contributing to, supporting, and further optimising or entrenching discriminatory practices. When systems are trained to “learn” patterns and rules by analysing existing data, they will replicate the discriminatory decision-making or practices represented by that data, thus further perpetuating existing discriminatory structures of oppression.

When machine learning and other technologies are developed and used, it is crucial that their human rights implications be assessed not only as to whether they engage the discrete rights set out below, but also as to whether their engagement with these rights has occurred in a discriminatory way. Emerging technologies have been shown to be capable of engaging a variety of rights in ways that sustain racial and ethnic exclusion in systemic or structural terms.



The Right to a Remedy


Another cross-cutting principle of international human rights law is the right to an effective remedy.


This right means that governments must ensure that individuals are actually able to bring claims to courts and other official bodies that are capable of resolving or redressing violations to their rights.


In other words, there must be real (rather than illusory) accountability for violations.


Where machine learning technologies are implicated in an action, omission or behaviour that may have violated human rights, such as the right to privacy or non-discrimination, their “black box” nature can also engage the right to a remedy.


This is because their opacity and inscrutability can preclude individuals from challenging such measures before a court or other independent body. In other words, if people are unable to understand the factors, weighting, and reasons behind why something that engaged their rights was done, they will be unable to effectively challenge it.
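
As a purely illustrative sketch (the feature names, weights and threshold below are invented, and most deployed systems expose far less than this), the following shows the kind of decision-level information – the factors, their weights and how they combined – that a person would need in order to mount an effective challenge:

```python
# Hypothetical sketch: a simple linear scoring rule that can report the
# contribution of each factor to a specific decision. Everything here is
# invented for illustration.
weights = {"months_unemployed": -1.5, "previous_claims": -0.8, "income_band": 2.0}
threshold = 0.0

def score_with_reasons(applicant: dict) -> tuple[bool, list[str]]:
    """Return the decision plus the per-factor contributions behind it."""
    contributions = {k: weights[k] * applicant[k] for k in weights}
    total = sum(contributions.values())
    reasons = [f"{k}={applicant[k]} contributed {v:+.1f}"
               for k, v in sorted(contributions.items(), key=lambda kv: kv[1])]
    return total >= threshold, reasons

approved, reasons = score_with_reasons(
    {"months_unemployed": 4, "previous_claims": 2, "income_band": 1})
print("approved" if approved else "refused")
for line in reasons:
    print(" -", line)  # the factors and weightings behind this one decision
```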


Sometimes it is not just the difficulty of understanding how a system operates that can engage this right, but also the fact that such systems are often protected as “trade secrets” or by confidentiality laws, preventing those impacted by the systems (or even those using them) from scrutinising them.


It has been recognised that a precondition to this right is “ensuring that individuals know that they have been subject to an algorithmic decision (including one that is suggested by an artificial intelligence system and approved by a human interlocutor) and are equipped with information about the logic behind that decision.”


Therefore, where a decision engaging human rights has been taken using an algorithmic system, and the individual affected either (i) does not know they have been subjected to such a system or (ii) does not know how it went about reaching that decision, then this right will likely be engaged.



The Right to Privacy


It is a recognised human right that everyone has the right to privacy. This means freedom from intrusion upon your private or family life. This aspect of the right will be engaged where, for example, a decision is made (whether with the aid of machine learning or not) to break up the family unit.


The right also includes the right to informational self-determination, which means that people must be able to decide what information about themselves is used by others and under what circumstances.


This right to self-determination applies to situations where a person’s data is collected, stored or used. Personal data is any data relating to a person who could be identified either from the data itself or from other information.


This means the very act of collecting or receiving data can engage this right, as will the use and storage of such data. As set out in our Illustrated Guide to the Machine Learning Process, the process of working with machine learning involves working with data. Furthermore, when machine learning tools are run they can produce more data. So wherever and however these technologies are being used, they run the real risk of engaging this right.


The right includes a number of core principles aimed at ensuring data is handled in a fair, lawful and transparent way, including:

  • Purpose limitation: the specific purpose for which data is going to be used needs to be made clear at the outset to the person whose data it is, and any further use of such data needs to conform to this purpose. This means that re-purposing of data can engage the right to privacy. For example, data that has been collected in a medical context to treat a patient should not be repurposed in other contexts, e.g. to serve purposes of employers, businesses or social security services.

  • Collection limitation or data minimisation: data cannot be inadequate, irrelevant or more than what is necessary for the purpose it is used for. For example, machine learning technologies that are designed and used in the medical context to diagnose illness should not collect or require demographic data where that is not necessary for diagnosis.

  • Consent: central to the right is the principle that people should retain a level of control over their data. Therefore, in many contexts, data cannot be used without the free, specific, informed consent of the person whose data it is. This means that if the training, testing or use of machine learning technologies involves the collection or use of the data of people who were unaware of, did not agree to, changed their mind about or otherwise objected to such use, the right to privacy will likely be engaged. Furthermore, it has become a recognised aspect of the right to privacy that people should not be subject to a decision affecting their lives that has been taken by a machine without first having their views taken into account.

  • Data quality: steps need to be taken to make sure that data that is collected or stored is accurate and up to date. People should also be able to have their data rectified or erased. This aspect of the right to privacy is particularly crucial in machine learning contexts where the reliability of such systems depends on accurate data.

  • Design: privacy should be considered at the very beginning of any project that involves working with data. This means before a machine learning system is developed, and before any data is collected or used in the machine learning process. Privacy should lie at the heart of machine learning, and not just performance and functionality. This means that machine learning technologies should be assessed for their likely human rights impacts, to ensure their risks are understood and that steps can be taken to safeguard against those risks.

  • Transparency: data should be used in a transparent manner. This means people need to know how data is gathered and processed, by whom, for what purpose, for how long and with whom it is shared. When it comes to machine learning, people must also be able to know the logic and assumptions of such systems, as well as the significance of their envisioned consequences.

  • Sensitive data: some data is particularly sensitive, such as genetic, biometric and health data, as well as data on a person’s sexual orientation or sex life. Machine learning tools might even generate sensitive data about people from non-sensitive data, for example by categorising someone as “gay” on the basis of their social media “likes.” This kind of data can only be used under strictly limited circumstances that are explicitly set out in a country’s laws.

  • Storage and security: data should not be kept longer than is needed, and it should be handled and used in a secure environment. This means protecting the data against loss, destruction or damage. Also, privacy should be the default, so if anonymous or pseudonymous data can be used, it should be (a minimal sketch of data minimisation and pseudonymisation follows this list).
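
The following is a minimal, hypothetical sketch of what data minimisation and pseudonymisation might look like at the point where records enter a machine learning pipeline. The field names, the stated purpose and the hashing scheme are invented for illustration; what actually satisfies these principles depends on the applicable law and context.

```python
# Hypothetical sketch of data minimisation and pseudonymisation applied
# before records enter a machine learning pipeline. All field names and
# the purpose definition are invented for illustration.
import hashlib
from typing import Iterable

# Only the fields needed for the stated purpose (here: diagnosing illness)
# are retained; demographic fields not needed for diagnosis are dropped.
FIELDS_NEEDED_FOR_PURPOSE = {"symptoms", "test_results", "age_band"}

def pseudonymise(patient_id: str, salt: str) -> str:
    """Replace the direct identifier with a salted one-way hash."""
    return hashlib.sha256((salt + patient_id).encode()).hexdigest()[:16]

def minimise(records: Iterable[dict], salt: str) -> list[dict]:
    """Keep only necessary fields and swap identifiers for pseudonyms."""
    cleaned = []
    for record in records:
        cleaned.append({
            "pseudonym": pseudonymise(record["patient_id"], salt),
            **{k: v for k, v in record.items() if k in FIELDS_NEEDED_FOR_PURPOSE},
        })
    return cleaned

raw = [{"patient_id": "A123", "name": "Jo Bloggs", "postcode": "SW1A 1AA",
        "symptoms": "cough", "test_results": "negative", "age_band": "30-39"}]
print(minimise(raw, salt="keep-this-secret-and-rotate-it"))
# The name and postcode never enter the pipeline, and the record carries a
# pseudonym rather than the raw identifier.
```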

Machine learning techniques pose risks to many of these principles. Data is the fuel of such systems. So, where they are being developed, trained, tested or deployed, there is likely to be an engagement of the right to privacy.



Right to Liberty and a Fair Trial


The right to liberty and security of the person protects people from having their physical freedom restrained (i.e. the act of being detained) unlawfully or arbitrarily.


It is also recognised that all people are equal before the courts and that they should have a fair and public hearing. Furthermore, they have the right to be presumed innocent.


These principles are capable of being undermined by machine learning and similar technologies.


Risk scoring and other probabilistic models are increasingly being adopted in the criminal justice system. This includes contexts where decisions are made that result in the detention (or deprivation of liberty) of the individual (e.g. sentencing and bail decisions). These decisions are capable of engaging the right to liberty and a fair trial, and can jeopardise the fairness of a hearing (e.g. if the individual is unable to assess the accuracy and completeness of the evidence upon which the decision was based).


The presumption of innocence is also undermined by the “allure of objectivity” presented by machine learning and other algorithmic systems. When an individual is flagged as a “risk,” it can be difficult for a decision-maker to freely disregard this output, even when it might be based on inaccurate, discriminatory or flawed factors.


Although not likely to happen soon, if machine learning technologies completely took over the role of judges in court, this could engage the right in other ways. Judges must be impartial. Not only does this mean they cannot allow their judgment to be influenced by personal bias or prejudice, they must also “appear to a reasonable observer to be impartial.” It may be difficult for machine learning technologies to be trusted in this way.


Other threats to the right to liberty and security come from the use of machine learning technologies as an alternative to physical or chemical restraint, for example by using geolocation technologies to set boundaries on where people can move freely. Concerns around such technology have been raised in the writing of law professor Chaz Arnett, who describes the move from mass incarceration to mass e-carceration in the US through the use of ankle bracelets, and the impact this has on rights such as the right to liberty.



The Right to Life

International human rights law recognises that every human being has the right to life. This means that nobody is permitted to end the life of another person.


Governments are also expected to introduce laws that protect life and, in some circumstances, take steps to protect against risk to life.


This means that where machine learning systems factor into life and death decisions, this right can be engaged.


For example, technological developments in machine learning capabilities have introduced new threats to this right, with the possibility for weapons to select and engage targets without intervention by a human operator. There is an international campaign to bring about an international ban on such weapons because of their threat to this and other rights.



Freedom of Opinion, Expression, Religion and Protest


International human rights law recognises a number of basic freedoms. It recognises the right of all people to form their own opinions, speak freely, look for and receive information, form peaceful protests, and have and practice religion. These rights and freedoms can be engaged by the context in which machine learning and similar technologies are used.


The right to freedom of expression recognises the right of all people to seek, receive and impart information and ideas of all kinds. Machine learning and other technologies dominate the online environment, including by removing, blocking, filtering, ranking, and personalising content. They are also used to aid the decisions of human content moderators. These practices have the potential to engage the right to freedom of expression by undermining the rights of individuals to disseminate and access information without restriction or censorship.


The right to form an opinion and to develop this by way of reasoning might also be engaged by this use of algorithmically driven tools. It is possible that content curation and “micro-targeting,” through the use of machine learning and other technologies, can affect a person’s capacity to form an opinion.


Another aspect of the right to freedom of expression is the right to access information held by public bodies. This right is exercised most frequently through the filing of freedom of information requests with such bodies, which we discuss in more detail here. Such a right might be engaged if, for example, a machine learning or algorithmic system blocks the processing of such a request or rejects the request for illegitimate reasons. Where public authorities are using machine learning and similar technologies, it could be engaged by their refusal to disclose details of those technologies (such as the source code or algorithms).


The rights to speak freely, form peaceful protests, and have and practice religion can also be engaged by the profiling that can take place through machine learning and similar technologies. If an individual is flagged or treated differently in some way because they are perceived to have said something, attended a protest or practiced a certain religion, this might create a climate of fear or inhibition. Even flagging or treating differently those who are perceived to have relationships with people exercising these rights can add to this climate. In this way, machine learning and other technologies are capable of having a “chilling effect” on the rights to freedom of expression, peaceful protest and religion.


Although there may be occasions when machine learning algorithms are limiting speech, there are also occasions where their failure to do so can raise human rights issues. It is an established principle of international human rights law that propaganda for war and advocacy of national, racial or religious hatred that incites discrimination, hostility or violence should be prohibited. Machine learning systems can be programmed or, through learning patterns in datasets, trained to produce this type of prohibited speech. If a government fails to sanction, or take appropriate measures to prevent, such technologies being used in this way, it will be failing in this obligation.



Rights of Persons with Disabilities and Older Persons


Under international human rights law, countries must protect and promote the rights of persons with disabilities and older persons by eliminating discrimination, enabling them to live independently in the community, and ensuring they are protected from all forms of exploitation, violence and abuse.


The emergence of technologies that rely on machine learning and other capabilities has been heralded as potentially promoting the security, inclusion and independence of people with disabilities and older persons.


However, the use of such capabilities in tools like mobility aids, as well as assistive technologies and devices, also gives rise to real threats to a number of rights:

  • Autonomy: an interference with human rights will occur where the use of the technology interferes with a person’s ability to make decisions for themselves and their ability to maintain choice and control of their living arrangements, such as their daily schedule, routine and way of life. In care contexts, automated technologies should not be used to override or substitute the decisions of a person receiving support. Furthermore, individuals must be able to give free and informed consent to assistive technologies being used or withdrawn from use. They must also be able to change their mind on the use of such technologies. For consent to be free and informed, an individual must be provided with clear and accurate information in order to assess the implications of the technology before giving consent.

  • Dignity: overreliance on machine learning and similar capabilities, like “care robots,” poses a particular threat to the dignity of those most in need of care or support. It can lead to segregation, neglect and abandonment by dehumanising care practices and encouraging collective disengagement in providing human care and support. International human rights law is therefore clear: these technologies cannot be a substitute for human care. There is also a risk that, through poor training or malfunction, technologies could contribute towards abuse, neglect or maltreatment.

  • Equality of access: access to assistive technologies or support cannot be to the exclusion of others who have an equal right and comparable need to access support. This means that countries should be prioritising the research and development of affordable technologies so a greater proportion of society can benefit from access.

  • Indirect impact (“chilling effect”): it has also been noted by a human rights expert that technology involving increased monitoring of caregivers in care contexts might lead to their withdrawal of care (or might change the way they provide care) to the detriment of the needs of people with disabilities and older people.

Where automated systems are used in a way that gives rise to any of these threats, there may be potential to raise a human rights claim.



Right to an Adequate Standard of Living


The right to an adequate standard of living requires that everyone shall enjoy the rights to adequate food and nutrition, clothing, housing, social security and other basic needs.


This right will be interfered with where access to basic needs is impeded or denied, including through the application of automated decision-making systems.


In the social security context, for example, such systems have been used to verify identity, assess eligibility for welfare, determine need, calculate payments, detect and prevent fraud, and communicate with welfare beneficiaries (e.g. through chatbots). The increased and thoughtless adoption of these systems has been referred to as “stumbling zombie-like into a digital welfare dystopia.” Similar systems can also be found in the provision of other basic needs, such as housing and care, and they have been criticised for profiling, policing and punishing the poor.


Machine learning and other technologies can malfunction, rely on inaccurate data and aid or make discriminatory decisions in a way that can effectively deny people access to basic sustenance and resources they need to survive. As a case in India has shown, the technical malfunctioning of a system used in food provision has led to starvation and even death.


Where machine learning technologies are being used in a way that denies or otherwise inhibits a person’s ability to attain an adequate standard of living, there will be an interference with that person’s human rights.



Right to Work


International human rights law recognises the right of everyone to the opportunity to gain their living by work which they freely choose or accept. It also recognises the right to just and favourable work conditions, as well as the right to form trade unions and to strike.


Automated systems are transforming the world of work in ways that can interfere with these rights.

Where machine learning or similar technologies are used in these ways, the right to work may be engaged.



Right to Freedom of Movement


It is a recognised principle of international human rights law that everyone has the right to move freely within their country and freely choose their residence. It is also recognised that people should be free to leave any country and not be arbitrarily prevented from entering their own country.


Machine learning and similar technologies are increasingly being used in immigration and refugee contexts in ways that are experimental, dangerous and discriminatory. These technologies interfere with the rights of, among others, refugees, migrants and stateless persons.


They are integrated into ID documents, facial and gait recognition systems, retinal and fingerprint scans, ground sensors, aerial video surveillance drones, biometric databases, asylum decision-making processes and other aspects of border enforcement. In this way, they have contributed to the rise of what some have referred to as “digital borders.”


Machine learning and automated technologies have bolstered immigration surveillance by, among other things, giving a detailed picture of individuals’ movements and predicting their future locations. This can then be used to interfere with the right to move freely.


Moreover, through faulty and inaccurate risk scoring and predictive modelling, these technologies have been used to forecast whether individuals are likely to be “troublemakers” in ways that have perpetuated racial inequality and discrimination and contributed to the targeting and over-policing of non-citizens.


All these uses of technology will engage a range of rights, including the right to freedom of movement.



Right to Participation in Social and Public Life


A core human rights principle is the right of all citizens to have the opportunity to take part in the conduct of social and public life.


Participation is a core human rights principle. This is the right of any individual to be involved in decision-making that affects their interests. This means that those whose rights may be engaged by the adoption of technologies should be able to participate actively in the formulation of the laws, policies, and other measures around the technology.


Where products are being made for certain groups or communities, such as assistive products for older persons or persons with disabilities, these groups or communities should have direct involvement in the design and development of these technologies, from planning and delivery to their evaluation. Special effort also needs to be made to include the participation of any under-represented groups, including indigenous people, migrants and refugees, ethnic, cultural or linguistic minorities and those with complex support needs. Failure to do this can interfere with this human right.


This right also extends to the right of all individuals to participate equally in elections, including the right to vote. Machine learning technologies present distinct threats to this aspect of the right, which has been recognised as protecting against election districts being drawn in ways that create substantial differences in the number of inhabitants per elected representative or otherwise distort equal representation (i.e. gerrymandering).


As was noted recently by a US Supreme Court judge, “[g]errymanders will only get worse (or depending on your perspective, better) as time goes on — as data becomes ever more fine-grained and data analysis techniques continue to improve […] What was possible with paper and pen — or even with Windows 95 — doesn't hold a candle (or an LED bulb?) to what will become possible with developments like machine learning. And someplace along this road, 'we the people' become sovereign no longer."


Furthermore, machine learning technologies that influence the online environment, including by optimising content for engagement and virality, and by profiling voters, can undermine an individual’s ability to find certain kinds of content and could threaten the right to participation in political life.


Where technologies are deployed in contexts that engage any of these issues or threats, there could be an interference with the right to participation in social and public life.



The Rights of Children


If machine learning and other technologies are built or used in ways that harm, or fail to take account of, a child, this could give rise to an interference with the rights of the child.


These are a set of rights recognising children’s rights to, among other things, privacy, education, freedom of expression and non-discrimination.


An interference with these rights in the context of machine learning or automated decision-making systems might occur in the following contexts:

  • Human-centred: machine learning and similar systems should not act as a substitute for direct, responsive interactions amongst children themselves or between children and their parents/caregivers. In other words, machines that might be capable of talking, reading and playing with children should supplement and not replace human interactions.

  • Participation: designers and providers of machine learning and similar technologies should actively engage children and give their views due consideration when developing their technologies. Children should also be provided with an opportunity to participate in the development of laws and regulation of such technologies that might affect them.

  • Best interests of the child: where machine learning and other systems can impact on the rights of children, the best interests of the child must be the primary consideration when providing, regulating, designing or managing such systems.

  • Evolving capacities: children gradually acquire greater competencies, understanding and agency over time. Providers of new technologies should ensure that their products or services are appropriate for children’s evolving capacities.

  • Content moderation: tools used to filter or moderate online content may interfere with children’s rights, for example where algorithms target children in a way that anticipates and guides them towards more extreme or harmful content. Machine learning systems designed to target children for commercial purposes amount to a particularly disproportionate interference with their rights. Furthermore, content moderation and automated search and recommendation systems that prevent children from easily finding diverse and good quality information online will amount to an interference. Automated filters should only prevent the flow of harmful material to children. Moreover, automated decision-making systems should not supplant, manipulate or interfere with children’s ability to form and express their opinions.

  • Education: international human rights law recognises the right of everyone to education. It may be engaged, for example, where children are denied the opportunity to learn and increase their digital literacy on the uses of automated processing and personal data and surveillance. It can also be engaged where automated decision-making systems deny individuals access to educational opportunities. There has been a long history of automated systems being used to support decision-making on the allocation of places at educational establishments (see the case studies here and here).

The rights of children may be enhanced and enabled through digital technologies, but if the factors above arise there may be an interference with a child’s right.



Can this be justified by the other party?


When a right is engaged by the context in which an automated system is used, the next step is to determine whether the restriction, denial, interference or threat to the right can be justified.


But note that there are some rights that can never be justifiably interfered with; these are called absolute rights (e.g. the right to freedom from torture).


The extent to which a right can justifiably be interfered with will depend on the right that has been engaged. Nonetheless, generally speaking, to determine whether an interference is justified will require an examination of the following:

  • Was it done in accordance with a law? This question looks at whether a law was in place (for more on laws, read here) that provided for the relevant interference. It is not enough that the other party can point to the existence of a law: that law must also have been accessible, its consequences must have been foreseeable, and it must contain safeguards against abuse.

  • Was it done in pursuance of some public interest goal? It will then be necessary to look at the purpose behind the interference. For example, why was the machine learning technology used? What objective was it serving? In this context, it may be open to the other party to claim that the interference with the right was for the purpose of protecting national security, public order, public health or the human rights of others.

  • Was it necessary and proportionate? Answering this question will require a balancing between the public interest goal being pursued and the extent to which the relevant right has been interfered with. In particular, it can involve consideration of whether less severe measures could have been taken, e.g. by using a tool that did not use as much personal data.

If the other party is unable to satisfactorily answer any one of these three questions, then there is likely to be a strong human rights claim.


Once a right has been identified as having been violated in an unjustifiable way, it will be necessary to gather evidence to prove this to a court. Some key aspects of evidence gathering are covered in the explainer on Building An Evidence Base.

