Allowances scandal is human rights violation, says Amnesty International
The benefits scandal in the Netherlands is a violation of human rights, with a high risk of recurrence, in the Netherlands and in other countries.
This is stated in the report released today, 'Xenophobic machines: Discrimination through unregulated use of algorithms in the Dutch childcare benefits scandal'. It is the first time a human rights analysis has been made of the algorithms behind the benefits scandal. The report focuses on a small part of the scandal. The investigation concludes that the algorithms used by the tax authorities to check childcare benefit applications for errors and fraud led to ethnic profiling. Ethnic profiling is a form of discrimination and therefore a violation of human rights. There was also discrimination based on social class, as people from lower social classes were hit especially hard. The conclusions of the Amnesty International investigation deepen earlier investigations, such as that of the Dutch Data Protection Authority (AP). Victims have been saying for years that the discrimination in the benefits scandal is broader than the government acknowledges.
Ethnicity discrimination
Organizations involved in social services are increasingly automating their work processes, partly with the aim of detecting fraud. The Netherlands is one of the world leaders in this area. When checking childcare benefit applications for accuracy and fraud, the tax authorities used a risk model. This model included several risk factors, among them not having Dutch nationality. As a result, people without Dutch nationality were more likely to be (unfairly) labeled as fraudsters. The assumption that certain population groups are more likely to commit fraud points to existing prejudices and unequal treatment by the tax authorities.
A recurrence is looming
The Dutch government states that it has taken measures to prevent a recurrence of the benefits scandal. Amnesty has analyzed these measures and concludes that they are inadequate on all fronts: for example, officials are not required to identify human rights risks in advance, the supervision of algorithms is inadequate, and government institutions are allowed to keep the use and operation of algorithms confidential.
Regulate algorithms
Amnesty is not against the use of algorithms, but human rights must be better protected when they are used. That is why Amnesty will continue to investigate and campaign in the coming years to end human rights violations related to algorithmic systems. Amnesty International calls on governments around the world to establish a framework that prevents human rights violations in the use of algorithmic decision-making systems and that establishes oversight and monitoring mechanisms. Those responsible for violations must also be held accountable, and victims must be compensated. In the Netherlands, this means that victims of discrimination by the risk model must also be compensated.