The Dutch Tax Scandal

The Dutch scandal serves as a warning to Europe about the risks of using algorithms

The Dutch tax authorities ruined thousands of lives by using an algorithm to detect suspected benefits fraud — and critics say there is little to prevent it from happening again.



As the world turns to AI to automate its systems, the Dutch scandal shows how devastating automated systems can be | Dean Mouhtaropoulos/Getty Images

MARCH 29, 2022 6:14PM EST

BY MELISSA HEIKKILÄ

Chermaine Leysner's life changed in 2012, when she received a letter from the Dutch tax authorities demanding that she repay her 2008 childcare allowance. Leysner, then a social work student with three children under the age of six, faced a bill of more than €100,000.

“I thought, 'Don't worry, this is a big mistake.' But it wasn't a mistake. It was the start of something big,” she says.

The ordeal consumed nine years of Leysner's life. The stress caused by the tax bill and her mother's cancer diagnosis drove Leysner into depression and burnout. She eventually divorced the father of her children. “I worked like crazy so I could still do something for my kids, like give them something tasty to eat or buy candy. But there were times when my son had to go to school with a hole in his shoe,” says Leysner.

Leysner is one of the tens of thousands of victims of what has become known in the Netherlands as the “allowance affair.” In 2019, it emerged that the Dutch tax authorities had used a self-learning algorithm to create risk profiles in an attempt to detect childcare benefits fraud. The authorities penalized families on mere suspicion of fraud, based on the system's risk indicators alone. Tens of thousands of families — often with lower incomes or belonging to ethnic minorities — were driven into poverty by exorbitant debts to the tax authorities. Some of the victims committed suicide. More than a thousand children were taken into foster care.

The Dutch tax authorities have now been fined €3.7 million by the Dutch privacy regulator. In a statement released on April 12, the agency outlined several violations of the EU's data protection rulebook, the General Data Protection Regulation, including the absence of a legal basis for processing people's data and holding the information for too long. Aleid Wolfsen, the head of the Dutch privacy authority, called the violations unprecedented.
“For more than six years, people were often wrongly labeled as fraudsters, with all the consequences that entails. Some were denied a payment arrangement or were ineligible for debt restructuring. The tax authorities have turned lives upside down,” he said in the statement.

As governments around the world turn to algorithms and AI to automate their systems, the Dutch scandal shows how devastating automated systems can be without proper safeguards. The European Union, which positions itself as the world's leading technology regulator, is working on a bill that aims to curb algorithmic harm. But critics say the bill misses the mark and will fail to protect citizens from incidents like the one in the Netherlands.

No checks and balances

The Dutch system — launched in 2013 — was intended to weed out benefits fraud at an early stage. The criteria for the risk profiles were developed by the tax authorities, the Dutch newspaper Trouw reported. Having dual nationality was marked as a major risk indicator, as was a low income.

Why Leysner ended up in the situation is unclear. One reason could be that she had twins, so she needed more support from the government. Born in the Netherlands, Leysner also has Surinamese roots.

In 2020, Trouw and another Dutch news outlet, RTL Nieuws, revealed that the tax authorities had also kept secret blacklists of people for two decades, recording both credible and unsubstantiated “signals” of possible fraud. Citizens had no way of finding out why they were on the list, nor of defending themselves. An investigation found that the tax authorities focused on people with “a non-Western appearance,” with Turkish or Moroccan nationality a particular focus. Being on a blacklist also led to a higher risk score in the childcare allowance system.

A parliamentary report into the childcare allowance scandal revealed several serious failings, including institutional biases and authorities who concealed information or misled parliament about the facts. When the full extent of the scandal came to light, the government of Prime Minister Mark Rutte resigned, only to regroup 225 days later.

In addition to the fine announced on April 12, the Dutch Data Protection Authority had already imposed a penalty of €2.75 million on the tax authorities in December 2021 for the “unlawful, discriminatory and therefore improper” way in which they processed data on the dual nationality of benefits applicants.
“There was a total lack of checks and balances in every organization to ensure that people realized what was going on,” said Pieter Omtzigt, an independent member of the Dutch parliament who played a crucial role in exposing the scandal and who himself came under fire from the tax authorities.

“What really concerns me is that I'm not sure we've taken anywhere near enough preventive measures to strengthen our institutions to cope with the next derailment,” he continued.

The new Rutte administration has promised to create a new algorithm watchdog under the country's data protection authority. Dutch Minister of Digital Affairs Alexandra van Huffelen — who was previously a finance minister responsible for the tax authorities — told POLITICO that the data authority's role will be “overseeing the creation of algorithms and AI, but also how it plays out once it's there, how it's handled, making sure that people come first and that it complies with all the regulations that are in place.” The regulator will examine algorithms in both the public and private sectors.

Van Huffelen stressed the need to keep people in the loop. “What I find very important is to ensure that decisions, government decisions based on AI, are always reviewed by a person afterwards,” she said.

A warning for the rest of Europe

Europe's top digital official, European Commission Executive Vice-President Margrethe Vestager, said the Dutch scandal is exactly what every government should fear. “We have huge public sectors in Europe. There are so many different services where decision-making supported by AI could be very useful, if you have confidence in it,” Vestager told the European Parliament in March. The EU's new AI law aims to create that trust, she argued, “so that this large public-sector market will also be open to artificial intelligence.”

The Commission's proposal for the AI Act restricts the use of so-called high-risk AI systems and bans certain “unacceptable” applications. Companies that provide high-risk AI systems must meet certain EU requirements. The AI Act would also create an EU-wide public register of such systems in an attempt to improve transparency and help with enforcement.

That is not good enough, argues Renske Leijten, a socialist member of the Dutch parliament and another key politician who helped uncover the true extent of the scandal. Leijten says the AI Act should also apply to those who use risky AI systems, in both the private and public sectors. In the AI Act, “we see that there are more guarantees for your rights when companies and private parties work with AI. But the most important thing we need to learn from the childcare allowance scandal is that this wasn't a company or the private sector... This was the government,” she said.

As it stands, the AI Act will not protect citizens from similar harms, said Dutch Green MEP Kim van Sparrentak, a member of the European Parliament's AI Act negotiating team in the internal market committee. Van Sparrentak wants the AI Act to include fundamental rights impact assessments that would also be published in the EU AI register. Parliament is also proposing to add obligations for users of high-risk AI systems, including in the public sector.
“Fraud prediction and predictive policing based on profiling should simply be banned. Because we've only seen very bad results, and no person can be pinned down on the basis of a few data points,” says Van Sparrentak.

In a report detailing how the Dutch government used ethnic profiling in the childcare allowance scandal, Amnesty International calls on governments to “prohibit the use of nationality and ethnicity data when determining risks for law enforcement purposes when looking for potential crime or fraud suspects.”

The Netherlands is still reckoning with the aftermath of the scandal. The government has promised to pay the victims €30,000 each in compensation. But for people like Leysner, that doesn't begin to cover the years she lost, and justice still seems a long way off. “When you experience this kind of thing, you also lose faith in the government. So it's very difficult to trust what [the authorities] are saying now,” Leysner said.

Clothilde Goujard and Vincent Manancourt contributed reporting. This article has been updated with the results of the investigation into the Dutch Tax Administration released in April.

