
More than 50 algorithms of the tax authorities are illegal, says Data Protection Authority

Many checks carried out by the tax authorities are unlawful. More than 50 algorithms used, among other things, to select returns for audit are discriminatory and therefore in violation of the law, according to a letter from the Data Protection Authority and the supervisor's explanation of it. This may mean that millions of decisions, such as tax assessments, are also unlawful. 'The AP has serious concerns about this.'

This piece in 1 minute

What's the news?

  • The tax authorities have more than 100 algorithms for selecting returns and applications for audit. Half of them are discriminatory and therefore unlawful, according to the Data Protection Authority (AP).
  • Millions of decisions made based on these algorithms may be illegal, says legal expert Fatma Çapkurt.
  • Some algorithms decide without any human intervention, possibly in violation of privacy law.
  • The AP is not taking enforcement action, but is asking for an "action plan".

Why is this important?

  • After the benefits scandal the promise was "never again"; yet discriminatory algorithms appear to be a structural problem.
  • Large numbers of citizens may have wrongly come under the tax authorities' scrutiny without knowing it.

How has this been studied?

  • Analysis of a letter from the AP to the Ministry of Finance, further explanation from the supervisor to FTM, documents released under the Open Government Act (Woo) and internal memos about changes to risk models, supplemented with expert interpretation and relevant case law.

This story is part of an ongoing research file: The rule of law

Whether you file an income tax or motor vehicle tax return, or apply for a VAT number, you will deal with them either way: the algorithms of the tax authorities. Every year, these automated systems churn out millions of risk scores: indications of the chance that something is wrong with your application or return. The aim is to detect errors and fraud.

The tax authorities say they can no longer do without algorithms. It is impossible to have millions of returns, involving billions of euros, all pass before human eyes. That is why only the returns and applications with the highest risk scores are automatically "ejected" for manual review, or for stopping a refund.

The tax authorities have at least a hundred of these "selection tools" in their arsenal. Sometimes they are complex statistical models; sometimes they are barely more sophisticated than an abacus. In many cases they amount to profiling: an assessment of your risk based on personal characteristics such as your age, place of residence and tax history.
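To make the mechanism concrete: a simple rule-based selection tool can be sketched in a few lines. This is purely illustrative; the criteria, point values and threshold below are invented and say nothing about the tax authorities' actual systems.

```python
# Hypothetical sketch of a rule-based "selection tool": each rule adds risk
# points, and only returns above a threshold are "ejected" for manual review.
# All criteria and weights are invented for illustration.

def risk_score(ret):
    """Sum hand-written risk points for one tax return (a dict)."""
    score = 0
    if ret["deductions"] > 0.5 * ret["income"]:
        score += 3  # unusually high deductions relative to income
    if ret["prior_corrections"] > 0:
        score += 2  # tax history: earlier returns were corrected
    if ret["age"] < 25:
        score += 1  # a personal characteristic -> this makes it profiling
    return score

def select_for_audit(returns, threshold=3):
    """Only returns at or above the threshold go to a human auditor."""
    return [r for r in returns if risk_score(r) >= threshold]

returns = [
    {"id": 1, "income": 40000, "deductions": 25000, "prior_corrections": 1, "age": 30},
    {"id": 2, "income": 40000, "deductions": 2000, "prior_corrections": 0, "age": 45},
]
print([r["id"] for r in select_for_audit(returns)])  # -> [1]
```

Even a toy like this shows the legal crux: the moment a rule keys on a personal characteristic such as age, the system distinguishes between groups of people, and that distinction needs objective substantiation.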

Selection criteria should be based on “relevant statistical and validated research”

The operation of these systems is one of the tax authorities' best-kept secrets. You are not told whether and why the tax authorities singled you out, so you cannot contest it if the algorithms produce incorrect results, or discriminate. That lack of verifiability violates privacy law and erodes legal protection. By definition, an algorithm with selection criteria distinguishes between groups of people. That is why the tax authorities may only treat you differently from your neighbour if there are very good reasons to do so. For this reason, selection criteria must be based on "relevant statistical and validated research," the Data Protection Authority (AP) said in an advisory opinion last year.

'Gut feelings'

But that research is often missing, according to a letter from the AP to the Ministry of Finance last month that has so far gone unnoticed. For "more than half of the selection tools", more than 50, the substantiation of the selection criteria rests on "general experience", or the criteria are not "documented" at all. General experience is no more than the judgment of tax officials; at worst, it is gut feeling. In any case, according to the supervisor, such experience does not count as statistical and validated research. "You must always substantiate a distinction between groups of people very well and precisely," says an AP spokesperson in response to questions from Follow the Money. "At the tax authorities, about half of the automated selection tools lack such substantiation and an objective justification for the selection criteria. Where these are absent, the processing is discriminatory. Discriminatory processing is unlawful. The AP has serious concerns about this."

This means that, according to the supervisor, half of the algorithms violate the law. It was precisely for this lack of substantiation that the AP imposed a penalty on the Education Executive Agency (DUO) for discriminating against students with a migration background. "The risk factors used in the algorithm were based on experience and common sense," the AP wrote. "There was no objective justification for these risk factors." The processing of personal data in that algorithm therefore violated the General Data Protection Regulation (GDPR). Moreover, the AP notes that only a fraction of the tax authorities' selection tools have been tested for "bias". In other words: the tax authorities do not periodically check whether prejudices that produce discriminatory results have crept into the algorithms, even though the supervisor does set that requirement for such systems. On this point, too, the algorithms fall short. When asked, the service says: "The tax authorities are constantly looking critically at their own use of selection tools."
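What could such a periodic bias test look like in its simplest form? One common approach, sketched here as an assumption and not as the AP's or the tax authorities' actual method, is to compare how often each group of citizens is selected and flag large disparities for further scrutiny.

```python
# Illustrative bias check: compare selection rates across groups. The group
# labels and numbers are invented; real audits would also test statistical
# significance and whether a disparity has an objective justification.
from collections import defaultdict

def selection_rates(cases):
    """cases: list of (group, was_selected) pairs -> selection rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in cases:
        totals[group] += 1
        selected[group] += was_selected
    return {g: selected[g] / totals[g] for g in totals}

def disparity_ratio(cases):
    """Lowest group rate divided by highest. Values far below 1.0 mean the
    selection tool treats groups very differently and needs scrutiny."""
    rates = selection_rates(cases)
    return min(rates.values()) / max(rates.values())

cases = [("A", 1), ("A", 0), ("A", 0), ("A", 0),  # group A: 25% selected
         ("B", 1), ("B", 1), ("B", 1), ("B", 0)]  # group B: 75% selected
print(round(disparity_ratio(cases), 2))  # -> 0.33
```

A disparity by itself does not prove unlawful discrimination; the legal question is whether the difference in treatment has an objective justification. But without even running a check like this, no one knows the disparity exists.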

Enhanced supervision

The tax authorities do not deny that half of the algorithms lack substantiation. The fact that there is a risk of discrimination is a "serious observation", but this goes "less far" than the conclusion that the tax authorities' conduct is unlawful. The service does acknowledge a "challenge", but says the problem is recent: "The yardstick and social expectations for the selection tools change over time. This also poses challenges for instruments that were designed in a different zeitgeist." (The full response is at the bottom of this article.)

But the principle of equal treatment was enshrined in the Constitution as early as 1798, and the GDPR has been in force since 2018. Since last year, the tax authorities have been placed under enhanced supervision by the AP for a period of five years. The supervisor is now "insistently" asking for an "action plan" before the end of the year, in which the tax authorities must still substantiate the chosen criteria.

Nothing happened after the benefits scandal

Five years ago, things went completely wrong with the checks on childcare allowance. For years, the tax authorities' allowance algorithm had discriminated unnoticed, including on the basis of nationality, a mortal sin. Through another algorithm, more than a hundred thousand often innocent people ended up, without their knowledge, on the infamous Fraud Signaling Facility blacklist. It was later found to be in violation of the law and earned the tax authorities a fine of 3.7 million euros. In January 2021, when the cabinet resigned over the benefits scandal, then-prime minister Mark Rutte said that the government would no longer go wrong with algorithms: "We are all going to tackle that. Because it must never be the case that people are discriminated against in any way. That is absolutely unacceptable under the rule of law."

You would expect that a cultural change would have occurred after the benefits scandal.

Fatma Çapkurt, university lecturer in administrative law

The fact that, years after the benefits scandal, it is still unclear whether checks are discriminatory shows, according to Amnesty International director Dagmar Oudshoorn, that the tax authorities do not give this priority. "Amnesty calls for checks to be made random immediately, unless the tax authorities can guarantee that people will not be discriminated against." University lecturer in administrative law Dr. Fatma Çapkurt (Leiden University), who wrote her PhD thesis on legal protection against profiling, read the AP's letter with surprise: "You would expect that a culture change would have occurred after the benefits scandal and that all risk selection systems would have been checked for lawfulness without delay. That this has not happened shows that the rule-of-law culture is in poor shape."

Far-reaching consequences

The conclusion that the tax authorities discriminate algorithmically on a large scale has potentially far-reaching consequences. According to rulings by the highest administrative courts, the government is obliged to explain why someone was selected for an audit. This is to rule out discrimination and to enable citizens to defend themselves; after all, without information about the how and why of a selection, you cannot contest it. If these rule-of-law safeguards are violated, the government may not use the evidence gathered during the audit in its decision-making. If it does so anyway, the resulting administrative decision, such as the imposition of a tax assessment, may also be unlawful, because it rests on illegally obtained evidence (see box below).

University lecturer Çapkurt therefore finds the AP's findings very worrying. "If the start of an investigation is unlawful, that also affects the later decision. These systems have been running for many years. Millions of decisions may have been taken unlawfully." The tax authorities say it is not established that there are unlawful algorithms; the suggestion that all kinds of decisions may be void is therefore "wrong and an oversimplification".

Unlawful decisions based on an algorithm

The Council of State ruled in 2017 that the so-called Aerius algorithm suffered from a "lack of insight into the choices made and the data and assumptions used". In decision-making based on such a "black box", citizens cannot verify how a decision was reached, and that comes at the expense of legal protection. The minister therefore had to disclose all relevant data completely, in good time and of his own accord, so that citizens could defend themselves. In line with this ruling, the Central Appeals Tribunal ruled in 2020 that the municipality of Eindhoven could not clarify why a couple on social assistance had been selected for a check. As a result, it was impossible to assess whether the municipality had discriminated in using the risk profiles. The home visits were therefore unlawful, as was the evidence obtained; the repayment decision based on that evidence was overturned. In a tax case in 2021, the Supreme Court ruled that if a risk selection for an audit violates the prohibition of discrimination, the evidence that came to light during that audit is inadmissible. In that case the decision, such as the correction of a return, is also called into question.

Decisions without human control

Another remarkable finding is that not all decisions the tax authorities take (after risk selection) involve "human intervention". This means the final decision is made not by a flesh-and-blood official but by the algorithm itself. Automated decisions (based on profiling) are prohibited under Article 22 of the GDPR. Human intervention must ensure that decisions are taken carefully; it is one of the safeguards to prevent people from (unintentionally) being excluded or discriminated against by an algorithm. To take such automated decisions nonetheless, a legal basis is needed that offers citizens "appropriate guarantees" and the ability to object to profiling. However, no specific law with such guarantees exists for the tax authorities' algorithms. This too indicates that the tax authorities' systems conflict with the law.
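The "human intervention" safeguard can be sketched as a routing rule in the decision pipeline: the algorithm may flag, but it must not itself take the adverse decision. The sketch below is a hypothetical illustration of that design principle; the names, threshold, and the assumption that low-risk cases may be approved automatically are invented, not taken from the tax authorities' systems.

```python
# Hedged sketch of a human-in-the-loop decision pipeline: high-risk cases are
# queued for a human caseworker instead of being decided by the algorithm.
from dataclasses import dataclass

@dataclass
class Decision:
    case_id: int
    outcome: str     # "approve" or "pending"
    decided_by: str  # "algorithm" or "human"

def decide(case_id, risk_score, human_review_queue, threshold=3):
    if risk_score < threshold:
        # Illustration assumes low-risk cases may be approved automatically.
        return Decision(case_id, "approve", "algorithm")
    # High risk: the algorithm only flags; a refusal can only come from the
    # human who picks the case up from the queue.
    human_review_queue.append(case_id)
    return Decision(case_id, "pending", "human")

queue = []
print(decide(1, 1, queue))  # auto-approved by the algorithm
print(decide(2, 5, queue))  # routed to a human caseworker
print(queue)                # -> [2]
```

The design point is that "human intervention" must be more than a rubber stamp: the human must be able to reach a different outcome than the flag suggests, otherwise the decision is still effectively automated.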

Internal alarm

Internally at the tax authorities, there have long been considerable concerns about the use of algorithms and their legal tenability. In early 2021, the Small and Medium-sized Enterprises directorate was advised internally to shut down three algorithms for conflicting with privacy law. The advice also warned of a high risk of discrimination. After a brief stop, management decided to switch the systems back on: the "social and organizational interest" outweighed complying with the law and avoiding the risk of fundamental rights violations, Follow the Money revealed at the end of 2023.

Documents released after a new Woo request by Follow the Money show that similar urgent advice was given at the end of 2022. Shortly thereafter, all kinds of risky selection criteria were hastily removed from some systems. Of an algorithm called Issuing a VAT Number (ABN), almost nothing remained. According to internal documents, the tax authorities removed "all risky rules", so that the algorithm could only forward requests to the right department and no longer make risk assessments. Remarkably, despite identifying these "unacceptable" elements in the algorithms, the tax authorities did not investigate whether discrimination had occurred, or whether citizens had been harmed by it.

Stronger measures

University lecturer Çapkurt does not understand why the AP, in light of the shortcomings found, asks only for a plan of action. "If the tax authorities break the law, there are no consequences. The AP stands by and watches. Why the AP does not enforce with stronger measures is a mystery to me." The lawyer believes the August letter obscures the seriousness of the problem. "The AP does not firmly conclude that half of the algorithms are unlawful, even though its investigation clearly shows that. A supervisor is obliged to communicate clearly about serious findings that may affect millions of citizens. Can we still trust the AP if it packages such findings in vague official language?" Amnesty director Oudshoorn is also critical. "The government's obsession with efficiency seems to know no limits. Supervisors do not weigh it critically against the inherent and enormous dangers of discrimination in risk profiling. The question is not if, but when, the next scandal will occur."

Response from the tax authorities

"The urgent request that the AP made to the State Secretary stems from the, in the AP's view, insufficient information available at that time to substantiate the lawfulness of selection instruments. The AP therefore states in its letter that this 'raises the question whether the risk of discriminatory processing is being demonstrably investigated and addressed in all cases'. This is a serious observation. However, it does not go as far as the conclusion drawn from that question, namely that these selection tools are discriminatory and therefore unlawful. In addition, following this observation, the AP states about its broader request: 'The AP therefore considers it necessary for the tax authorities to i) test and assess whether the use of selection tools meets the legal frameworks and requirements (in other words: is actually in line with the AP's Article 22 GDPR advice) and, where necessary, take adequate measures, and ii) test and assess whether a specific legal provision for automated risk selection is still necessary.' Subject to the above, your questions thus anticipate the situation and go beyond the letter received from the AP.

The term discriminatory carries a legal load and is often confused with prohibited direct or indirect discrimination. Making distinctions is allowed in taxation, as long as it is well motivated and in line with the legal frameworks. The term is therefore broader than intended here, especially in relation to selection. When a selection tool is disputed in court, the judge assesses, based on the circumstances of the specific case, whether an unlawful selection tool has been used, by examining whether the criteria applied have led to a violation of a citizen's fundamental right in that situation. The picture that emerges from the case law is therefore more nuanced than is suggested.

The starting point is that the lawfulness of the assessment is in principle not affected by unlawful selection. Only if the use of a selection tool was unlawful, the results of that selection were used for an assessment, and a fundamental right was violated can it follow that those results may not be used for the assessment. This makes the assumption that millions of decisions may be unlawful wrong and an oversimplification.

The use of selection tools requires careful and continuous weighing of various interests. The plan of action mentioned in the letter to Parliament of September 10 ensures that this weighing of interests comes together transparently. Something that should have happened earlier, that requires a great deal of time and research, and that we consider very important. The plan of action will soon provide insight into how we arrive at a correct and lawful basis for the use of these selection tools. This can, as far as possible, prevent selection tools from being used unlawfully. Where necessary, action can be taken."

Date: 08 November 2025