Algorithms, biases, and discrimination in their use: About recent judicial rulings on the subject

Today we live in an increasingly automated world in which algorithms[1], often embedded in artificial intelligence (AI) systems, are used for decision-making.

These algorithms often introduce bias in their application, as evidenced by two important judgments on the subject relating to social security, taxes, and labor law.

These judgments against the use of predictive algorithms are extremely relevant and constitute a major advance in the fight against discrimination produced by automated decisions and by the use of risk profiles.

Of course, it is the task of governments to ensure that these systems do not produce bias or discrimination.

Therefore, the objective of this document is, on the one hand, to review these judgments and, on the other, to highlight the importance of the correct design, development, application, and verification of algorithms, in accordance with the law and in a socially fair and responsible manner, so as to avoid discrimination.

1 – COURT RULINGS

1.1 – SYRI – THE HAGUE DISTRICT COURT, FEBRUARY 5, 2020

The District Court of The Hague rendered a landmark ruling[2] halting the collection of data and the risk profiling of Dutch citizens to detect social security fraud under SyRI (an acronym for System Risk Indication).

SyRI is a legal instrument that the Dutch government uses to combat fraud in areas such as Social Security and taxes.

The Ministry of Social Affairs and Employment has been using it since 2014, studying taxpayers’ data on income, pensions, insurance, housing, taxes, fines, integration, education, debts, and unemployment benefits, and then calculating, by means of algorithms, who is most likely to defraud the administration.

To do this, it collects personal data and, through algorithms, creates risk profiles of citizens. In other words, the system seeks to determine which citizens are most likely to commit fraud.

According to the legislators who created SyRI, the data can be linked and analyzed anonymously in a secure environment so that risk reports can be generated. Citizens’ personal data are exchanged within the system.

The Hague court held that the SyRI system does not meet the requirements of Article 8, paragraph 2, of the ECHR (the right to respect for private life) needed to justify this mutual exchange of personal data.

It should be clarified that Article 8 of the ECHR enshrines the right to respect for private and family life in the following terms:

  • Everyone has the right to respect for his private and family life, his home and his correspondence.
  • There shall be no interference by a public authority with the exercise of this right except such as is in accordance with the law and is necessary in a democratic society in the interests of national security, public safety or the economic well-being of the country, for the prevention of disorder or crime, for the protection of health or morals, or for the protection of the rights and freedoms of others.

The Court recognized “the legitimate objective, of great social relevance, of preventing crime”. However, it said that “the risk model developed so far by SyRI may have unwanted effects, such as stigmatizing and discriminating against citizens, due to the huge amount of information it collects.”

In its defense, the ministry told the court that “the system only links data that the State already holds and builds a decision tree (a prediction model) from them, so neither AI nor self-learning algorithms are used”.

The ruling counters that this assertion “cannot be verified, because the legislation that allows the use of SyRI is not transparent enough”. The judges indicated that they had not received enough information about the system, “since the State keeps secret how the likelihood of fraud is calculated.”

1.2 – DELIVEROO – RULING OF THE COURT OF BOLOGNA, ITALY

In this case, a Bologna court ruled that the algorithm used by the European food-delivery platform Deliveroo to rank workers (riders) and offer them shifts was discriminatory.

The particular algorithm examined by the court was allegedly used to determine a rider’s “reliability” for the company.

If a worker does not cancel a previously booked shift through the app at least 24 hours before it starts, his or her “reliability rating” is negatively affected.

Since the workers the algorithm considered most reliable were the first to be offered shifts in the busiest time blocks, workers who could not fulfill their shifts, even because of an emergency or serious illness, effectively had fewer work opportunities in the future.

According to the court, the fact that the algorithm does not consider the reasons behind a cancellation amounts to discrimination and unfairly penalizes workers with legally legitimate reasons for not working.

The court determined that even if an algorithm unintentionally discriminates against a protected group, a company can still be held liable and obliged to pay damages.

Deliveroo was ordered to pay €50,000 (approximately US$61,400) to the plaintiffs.

The General Confederation of Labor of Italy (CGIL) described the judgment of the Bologna court as “a momentous turning point in the conquest of trade union rights and freedoms in the digital world [3].”

2 – IMPORTANCE OF THE TOPIC AND FINAL COMMENTS

The two decisions discussed are highly relevant and show that algorithms can produce indirect discrimination.

They address a current and recurring question in the debate: whether it is lawful to place our trust in algorithms that, because of their design or training, may harbor hidden biases.

For this reason, I understand it is vital that, in the design, development, application, and evaluation of algorithms, the fundamental rights of citizens always be respected, thus avoiding any bias or discrimination that their use may produce.

I share Idoia Salazar’s[4] view that governments, for example in the EU, are trying to start legislating on specific use cases for AI, for example, to put limits on facial recognition or on the level of autonomy left to AI algorithms. But law-making is slow and technological evolution today is very fast, which is why it is important to promote ethics and human responsibility, not that of machines, in order to prevent disasters or negative effects.

As Cathy O’Neil[5] says, “Big Data processes codify the past. They do not invent the future. We have to explicitly embed better values into our algorithms, creating Big Data models that follow our ethical lead. Sometimes that will mean putting fairness ahead of profit.”

For his part, Galdon Clavell[6] says that most organizations that implement algorithms have very little awareness or understanding of how to address the challenges of bias, even when they recognize it as a problem in the first place. Algorithms are just mathematical functions: what they do is encode complex social realities to see whether we can make good guesses about what might happen in the future.

Algorithms based on AI machine-learning models require periodic evaluation to ensure that new and unexpected biases are not introduced into their decision-making as they continue to learn.

I want to highlight that Eticas Consulting[7] has prepared the first Algorithmic Audit Guide, aimed at companies, public entities, and citizens, which offers a general and replicable methodology for auditing products and services built on AI systems whose algorithms collect or process personal data.

In this way, it will be possible to verify that such systems are designed, developed, and used in accordance with the law, in a socially fair and responsible way that avoids discrimination.

I share what Antonio Anson[8] said: unfortunately, more AI does not guarantee more freedom or equality. No one wants a future with supremacist, racist, or sexist robots that amplify the worst in our societies. Public administration will play a relevant role in ensuring that individual freedom and equality are not harmed by the advance of automation.

In the tax field, it is fundamental that tax administrations (TAs) that use algorithms and AI respect the rights and guarantees of taxpayers.

It is important to note that algorithms are always subject to judicial review, but many countries still lack sufficiently advanced legislation to adequately protect the rights and guarantees of taxpayers, as I have previously stated[9].

I am a fervent defender of digitalization, and I believe it should always be at the service of citizens, improving their quality of life and allowing a better future for everyone; for that, it is essential that the unwanted effects of its application, as in the cases discussed here, be promptly resolved.

Finally, as always, the main objective is to open the issue for debate, and I await your valuable comments.

[1] An algorithm is a set of defined, unambiguous, ordered, and finite instructions or rules that typically allows solving a problem, performing a computation, processing data, or carrying out other tasks or activities. Diccionario de la lengua española, Real Academia Española.
[2] https://uitspraken.rechtspraak.nl/inziendocument?id=ECLI:NL:RBDHA:2020:865
[3] On this issue of guaranteeing the social protection of platform workers, I wrote at https://www.mercojuris.com/35634/trabajadores-de-plataformas-digitales-y-la-imperiosa-necesidad-de-su-proteccion-social
[4] https://www.noentiendonada.es/idoia-salazar-la-ia-puede-hacer-mucho-mal/
[5] Author of Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. https://weaponsofmathdestructionbook.com/
[6] https://www.esdelatino.com/auditoria-por-discriminacion-algoritmica/
[7] https://www.diariojuridico.com/guia-de-auditoria-algoritmica-para-que-la-ia-cumpla-con-la-legalidad/
[8] https://trabajandomasporunpocomenos.wordpress.com/2018/02/07/algoritmos-y-administracion-publica-no-a-los-robots-supremacistas/
[9] Digitalization of Tax Administrations and Taxpayers’ Rights; see also https://contadores-aic.org/digitalizacion-de-las-administraciones-tributaria-fiscalizacion-y-derechos-de-los-contribuyentes/

Disclaimer. Readers are informed that the views, thoughts, and opinions expressed in the text belong solely to the author, and not necessarily to the author's employer, organization, committee or other group the author might be associated with, nor to the Executive Secretariat of CIAT. The author is also responsible for the precision and accuracy of data and sources.
