Could Artificial Intelligence help resolve disputes with Tax Administrations?

Introduction

The objective of this contribution is to review some experiences with Artificial Intelligence (AI) that may be applicable to the resolution of disputes with tax administrations (TAs), that is to say, to the handling and deciding of the appeals filed by taxpayers against various administrative acts, including before the courts.

First, we emphasize that there is no single definition of AI. Broadly, it refers to the ability of a digital system to emulate cognitive functions previously attributed only to humans, such as learning, interacting, creating, and replicating.

According to a recent publication[1], the AI that exists today is known as soft AI, capable of performing a specific, relatively simple or routine task with better performance than a human, as opposed to hard AI, which would be able to reason like the human intellect in more complex tasks. The latter remains a matter of considerable disagreement among experts.

The application of AI has optimized data analysis, providing inputs that make processes more efficient and enabling evidence-based decision-making. One of AI's main advantages is its ability to process large volumes of data, significantly shortening time frames.

However, we caution from the outset that AI must be used very carefully, so as not to introduce bias or unintended consequences.

There are risks of misuse, which require an assessment, including from an ethical perspective, and the adoption of a set of principles to govern its use.

A no less important issue, related to the incorporation of automation and robotics mechanisms, including AI, is the possible destruction of employment that may result from the increasing use of these techniques.

Some experiences of using artificial intelligence in Justice

Turning to a specific field, it is worth reviewing the application of AI in justice. In Argentina, in 2017, the Public Prosecutor's Office of the City of Buenos Aires developed PROMETEA, a system that applies AI to automatically prepare court rulings.

It is a software system whose main task is to automate repetitive work and to draft legal rulings automatically, drawing on similar cases for which repeated judicial precedents already exist.

PROMETEA has enabled the Public Prosecutor's Office to significantly increase the efficiency of its proceedings.[2]

PROMETEA's most innovative component is that, for each case and on the basis of previous judgments in similar cases, it uses predictive inference to draft the recommendation that the prosecutor must submit to the judge, and it prepares and proposes a model ruling to the prosecutor. It does this by statistically correlating the keywords associated with each case with patterns found in previous rulings.
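Although the details of PROMETEA's implementation have not been published, the idea of correlating a new case's keywords with previous rulings can be illustrated with a minimal sketch. The example below assumes a small, invented corpus of prior rulings and uses TF-IDF weighting with cosine similarity to retrieve the closest precedent, which would then serve only as a draft for the prosecutor to review.

```python
# Minimal illustrative sketch (not PROMETEA's actual code): retrieve the most
# similar prior ruling for a new case using keyword (TF-IDF) similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus of prior rulings; a real system would hold thousands.
prior_rulings = [
    "Fine annulled: the taxpayer filed the return within the extended deadline.",
    "Fine upheld: repeated failure to file VAT returns despite formal notice.",
    "Appeal granted: the assessment ignored documented deductible expenses.",
]

new_case = "The taxpayer claims the return was filed before the extended deadline expired."

# Represent the rulings and the new case as TF-IDF keyword vectors.
vectorizer = TfidfVectorizer(stop_words="english")
ruling_vectors = vectorizer.fit_transform(prior_rulings)
case_vector = vectorizer.transform([new_case])

# Rank prior rulings by similarity to the new case and propose the closest one.
scores = cosine_similarity(case_vector, ruling_vectors)[0]
best = scores.argmax()
print(f"Suggested ruling template: #{best} (similarity {scores[best]:.2f})")
```

In a real deployment the retrieved ruling would only be a proposed draft; the prosecutor and the judge remain responsible for the final text.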

In addition to its use in the prosecutor’s office, PROMETEA was trained to improve processes in other agencies.

Moreover, in the United States there is the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) program[3], which is used in several states. It is software that has been used since 1998 to analyze, based on a defendant's criminal record, the likelihood of recidivism.

The program presents a questionnaire to the defendant. Once the defendant has answered, the system calculates the risk of recidivism, which the judge then uses to decide, for example, whether or not to grant parole.
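Since COMPAS's actual questions and weights are proprietary (a point that became central in the Loomis case discussed below), the general mechanism of a questionnaire-based risk score can only be illustrated with a purely hypothetical sketch, in which invented items and weights are combined into a score and mapped to a risk band:

```python
# Purely hypothetical sketch of a questionnaire-based risk score; the real
# COMPAS questions and weights are proprietary and have never been published.

# Invented weights for invented questionnaire items.
HYPOTHETICAL_WEIGHTS = {
    "prior_convictions": 2.0,      # each prior conviction raises the score
    "failed_appearances": 1.5,     # each missed court appearance raises it
    "age_at_first_offense": -0.1,  # an older first offense lowers it slightly
}

def risk_score(answers: dict) -> float:
    """Weighted sum of questionnaire answers (illustrative only)."""
    return sum(HYPOTHETICAL_WEIGHTS[item] * value for item, value in answers.items())

def risk_band(score: float) -> str:
    """Map the numeric score to a coarse band a judge might be shown."""
    if score < 2.0:
        return "low"
    if score < 6.0:
        return "medium"
    return "high"

answers = {"prior_convictions": 3, "failed_appearances": 1, "age_at_first_offense": 19}
score = risk_score(answers)
print(f"score={score:.1f}, band={risk_band(score)}")
```

The opacity problem is visible even in this toy version: unless the weights and the band thresholds are disclosed, the defendant cannot meaningfully challenge the score.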

COMPAS rose to fame with the Loomis case, in which the defendant fled from the police in a vehicle used without its owner's authorization and was sentenced to six years' imprisonment and five years' probation because the system estimated a high risk of recidivism. Loomis appealed the decision, arguing that his defense could not challenge COMPAS's methods because the algorithm was not public. The Supreme Court of the State of Wisconsin dismissed the appeal.

Years later, in 2018, it became known that the system analyzes 137 aspects of each defendant. However, when the accuracy of the system's predictions was compared with the judgments of legal experts, it was found that the AI was no more accurate, and in some cases made serious errors.

In Beijing, China, the Internet Court, defined as an "online litigation center", was presented in October 2019. It is a platform where the parties upload the data of the dispute to be resolved and the AI does the rest: it searches case law, analyzes the subject matter, weighs the evidence, and issues a judgment.

The system is not very different, technically, from that of Estonia[4], where there is also a strong commitment to automating justice without human intervention throughout the process. Broadly speaking, the parties submit their claims and evidence digitally; the AI judge, still under development, analyzes the documentation and issues a judgment. This would speed up dozens of overdue disputes. If either party disagrees with the outcome, they can always appeal to a human judge.

Recently, in Colombia, the Constitutional Court announced the adoption of an AI program called PRETORIA, a predictive system for the intelligent detection of rulings and information that facilitates the work of judges.[5]

Possible application in the resolution of tax disputes

At this point, the question arises as to whether these innovations, already applied in the courts, could be used in the resolution of tax disputes, both within TAs and in tax justice (tax courts), that is, in handling the claims or defense mechanisms available to taxpayers to challenge the acts of the tax administration, such as certain assessments, sanctions imposed, and various administrative acts.

As an antecedent, it should be mentioned that, in recent years, TAs have begun to use automatic mechanisms, which could be described as soft AI, to resolve certain procedures and even to deal with disagreements raised by taxpayers, seeking as far as possible to avoid the intervention of human staff and the physical presence of taxpayers at the agencies.

This background leads us to consider the possibility of applying AI to the resolution of tax disputes.

At first glance, we do not foresee major problems in using AI to assist in the resolution of simple and routine tax litigation, such as the application of certain fines, or in judicial matters such as motions to litigate without costs, the regulation of professional fees, and so on.

Even tax disputes involving the application of taxes in simple scenarios could eventually be handled with AI.

In any case, what the AI would do is provide the judge or adjudicator with a reference or support for the decision, which undoubtedly must be made by the person exercising that role.

The possible application of AI should not ignore the premise that a dispute resolution system must be fair, rapid, and effective, as an important safeguard for taxpayers, whose various rights and guarantees, including due process, must be respected.

Taxpayers must be assured that the handling of their information is secure, and comprehensive laws are required to protect confidentiality.

These laws should specify responsibility for system failures, for example, when leaks occur due to the actions of cybercriminals, or due to the TAs’ own human resources.

The timeless principles of taxpayer protection and the rights frameworks in force in each country should be adapted to the digital disruption and, where appropriate, include the use of AI to resolve disputes. Otherwise, the use of AI may conflict with the rights and guarantees of taxpayers.

It seems to us that the taxpayer should have the right to know the procedure by which the AI reaches its conclusion on the appeal, and certainly to be able to challenge it.

We should also recall that it is vital that the TAs' decisions on appeals be duly substantiated and actually resolve the dispute, ruling on all the questions raised by the appellant and those arising from the file (as raised in the appeal), as well as on the evidence produced and the elements considered.

It should also be mentioned that the OECD[6] recommends certain complementary values-based principles for the responsible stewardship of trustworthy AI.

Let us keep in mind that experience has shown that, if AI systems are not properly designed, they can have discriminatory, unfair, and undesirable social effects, as happened in the case of COMPAS in the US.

The adoption of AI requires careful data management and governance. AI uses data to make the algorithm "learn" how to decide; but if those data reflect biases already present in the real world, the algorithm will learn to make decisions that reproduce those biases.
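A minimal sketch with synthetic data, using an assumed logistic-regression model, shows how this happens: if the historical decisions used as training labels systematically penalized one group, a model trained on them assigns a clearly non-zero weight to that attribute, even though it should be irrelevant to the decision.

```python
# Minimal sketch with synthetic data: a model trained on biased historical
# decisions learns to reproduce the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

merit = rng.normal(size=n)          # the factor that should drive the decision
group = rng.integers(0, 2, size=n)  # an attribute that should be irrelevant

# Biased historical decisions: group 1 was systematically penalized.
past_decision = ((merit - 0.8 * group + rng.normal(scale=0.3, size=n)) > 0).astype(int)

features = np.column_stack([merit, group])
model = LogisticRegression().fit(features, past_decision)

# The learned weight on `group` is far from zero: the bias has been absorbed.
print("weight on merit:", round(float(model.coef_[0][0]), 2))
print("weight on group:", round(float(model.coef_[0][1]), 2))
```

Auditing the learned weights, as in the last two lines, is one simple form of the examination and monitoring discussed next.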

It is important to be able to examine and monitor the code of these algorithms, since there is a risk of incorporating, intentionally or not, biases, prejudices, or other elements into the programming, somehow "contaminating" the response.

AI ethics is therefore fundamental as a first step towards setting desirable limits on the use of modern technology in TAs. Certain principles of AI ethics must be respected, such as privacy, accountability, security, transparency and explainability, justice and non-discrimination, human control and supervision of technology, and the promotion of human values within the framework of international human rights law.

Applying AI to tax procedures, applications, disagreements, and disputes, in addition to reducing resolution times, can help streamline decisions, that is, standardize criteria, so that the same litigation issue does not receive different answers.

Throughout the process of adopting ICTs in TAs, and in tax courts, it is key to strategically define the new role of human staff, since new digital skills are required.

In summary

We believe that AI can be applied to the resolution of tax disputes, but only as support for the ruling that the judge must make, and not to resolve them by itself on the basis of an algorithm.

We clearly rule out the idea that judges can be replaced by robots acting through AI algorithms that completely displace the human process, although the debate remains open.


[1] Constanza Gómez Mont, Claudia May Del Pozo, and Ana Victoria Martin del Campo, Data Economy and Artificial Intelligence in Latin America: Opportunities and Risks for Responsible Use. Report developed by C Minds and commissioned by the Center for Studies in Technology and Society (CETyS) of the University of San Andres, Argentina.
[2] Elsa Estévez, Sebastián Linares Lejarraga, and Pablo Fillottrani, PROMETEA: Transforming the Administration of Justice with Artificial Intelligence Tools, IDB, 2020.
[3] https://retina.elpais.com/retina/2020/03/03/innovacion/1583236735_793682.html
[4] https://confilegal.com/20191013-china-y-estonia-desarrollan-jueces-virtuales-basados-en-inteligencia-artificial-para-resolver-demandas-de-cantidad/
[5] https://www.metrolatam.com/hub/tecnologia/2020/08/07/la-inteligencia-artificial-ahora-podria-juzgarte.html
[6] https://www.oecd.org/going-digital/ai/principles/


Disclaimer. Readers are informed that the views, thoughts, and opinions expressed in the text belong solely to the author, and not necessarily to the author's employer, organization, committee or other group the author might be associated with, nor to the Executive Secretariat of CIAT. The author is also responsible for the precision and accuracy of data and sources.

