There’s No ‘We’ in Technology: Protecting Against the Invisible Erosion of Human Rights

Take a moment to consider: what do all 8 billion humans on this planet share? It’s actually quite simple, yet often overlooked: our human rights. Very few things are shared by everyone in the world, so shouldn’t we protect the one thing we all have?

Whilst we are in the middle of redefining what it means to be human, and as the rapid development of AI and digital technologies continuously reshapes our collective and individual lives, we need to recognise the importance of upholding fundamental human rights moving forward. You are not immune to the impacts of developing technologies.

It’s not hard to figure out why our rights are overlooked in the development of AI. The tech giants and corporations that lead AI development are focused on fostering innovation, improving efficiency, and increasing profits. Human rights law, on the other hand, argues for things like the right to privacy and data protection, transparent and accountable systems, and non-discrimination. The tension between the two impacts you more than you think. If we continue to let business priorities prevail, your rights will keep quietly eroding, and you won’t even realise it.

/Understanding Your Rights is the First Step to Protecting Them

  1. Right to Privacy

    One of the most invasive threats AI poses to human rights is the threat to privacy. Article 17 of the International Covenant on Civil and Political Rights outlines a person’s right to protection from arbitrary or unlawful interference with their privacy. Increasingly, new technologies which collect and store data are creating significant risks for human dignity, autonomy and privacy. AI isn’t just collecting your data; it’s being used for widespread monitoring of societies, pervasive collection of personal information, and even predictive algorithms which rely on intimate, individualised data (some examples follow below).

  2. Right to Non-Discrimination

    Discrimination is a more nuanced and complicated human rights infringement to establish, especially when regarding the lack of transparency of AI systems and algorithms. Article 26 of the International Covenant on Civil and Political Rights explains that all persons are equal before the law and that the law shall provide effective protection against discrimination on any grounds such as race, colour, sex and language, among others.

    Discrimination caused by AI systems has been seen across a wide range of sectors, from healthcare to the criminal justice system. For example, in the US, a healthcare algorithm favoured providing extra medical treatment to White Americans over Black Americans, and the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) algorithm, used in the US court system to predict the likelihood that a defendant will become a recidivist, produced almost double the rate of false positives for recidivism for Black offenders compared to White offenders.

  3. Right to Work

    Following privacy and discrimination, AI has the potential to impact the right to work across multiple dimensions, both in how work is performed and in how it is evaluated. Articles 6 and 7 of the International Covenant on Economic, Social and Cultural Rights cover the right to work: specifically, the right of everyone to the opportunity to gain their living by work, and the right of everyone to the enjoyment of just and favourable conditions of work.

    In an already incredibly competitive job market, AI systems are transforming how companies hire. They are being used to scan resumes, conduct initial interviews and make preliminary hiring decisions. These systems can screen out qualified candidates simply because their resumes don’t contain the keywords the algorithm is programmed to recognise. Amazon, for instance, used a hiring tool which was found to discriminate against women because it had been trained on data collected mostly from men.

/Ethics vs. Human Rights

While conversations and frameworks around the ethics of AI are increasingly common, ethics-based guidance for AI regulation is often vague and leaves room for varying interpretations. Human rights law and international legal frameworks, however, offer internationally agreed and recognised rules which would create far more cohesion amongst international bodies.

/Rights Aren’t Optional

The advantage of human rights law is that it not only creates a legal duty which governments must uphold, but also places responsibilities on companies and organisations to comply. Human rights is not the sole answer. But if more people understood their rights, how those rights are interconnected, and how they are impacted by the actions of governments and corporations, we as a society could take more informed action.

In my short time since joining the AI space, I have observed minimal reference to human rights in AI frameworks, legislation or conversation. Hype around ethics and responsible tech is welcome, but it comes with a sliding scale of expectations and responsibilities. Embedding human rights at the core of AI development would bring consistency, accountability and reliability to the creation of safe and trustworthy AI systems. The biggest challenge ahead is how we reconcile these different priorities and ensure that people’s rights are understood and prioritised, not pushed aside for unrestricted growth and profit.

Jasmine Hasmatali

Jasmine Hasmatali is a former founder and recent Master’s graduate. She holds an LLM in Human Rights from the University of Edinburgh and is passionate about using her socio-legal background to advocate for inclusive AI futures which benefit everyone. Her research is oriented towards bridging the gap between rights, technology and society.
