Algorithms, Big Data and the Government

The government uses algorithms and Big Data to assess risks. In the benefits scandal, this led to discrimination and privacy violations.
Government agencies increasingly use algorithms (series of instructions that lead to a certain result) and Big Data to assess people. For example, the government processes data to calculate the chance that you will commit fraud or another offence. The benefits scandal at the tax authorities showed how this can go wrong: thousands of parents were wrongly accused of fraud and forced to repay benefits. As a result, people were unjustly plunged into financial hardship; some lost their homes and jobs. The tax authorities had used data and algorithms to calculate the risk of fraud, and a person's origin and whether or not they held Dutch nationality were among the risk factors. This is discrimination and is therefore prohibited.
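To make this concrete, the sketch below is a deliberately simplified, entirely hypothetical example (the feature names and weights are invented for illustration and are not taken from the tax authorities' actual system) of how a risk score that uses nationality as an input flags people on that ground alone:

```python
# Hypothetical illustration only -- NOT the tax authorities' actual model.
# Feature names and weights are invented to show how a risk score that
# includes nationality as an input becomes discriminatory.

def fraud_risk_score(application: dict) -> float:
    """Toy risk score that sums weighted features of a benefits application."""
    score = 0.0
    if application["income_changed"]:         # plausible signal: income changed mid-year
        score += 0.2
    if application["late_paperwork"]:         # plausible signal: forms filed late
        score += 0.1
    if not application["dutch_nationality"]:  # discriminatory: nationality is not
        score += 0.5                          # a condition for childcare allowance
    return score

# Two applicants who are identical in every relevant respect:
applicant_a = {"income_changed": False, "late_paperwork": False, "dutch_nationality": True}
applicant_b = {"income_changed": False, "late_paperwork": False, "dutch_nationality": False}

print(fraud_risk_score(applicant_a))  # 0.0 -> not flagged
print(fraud_risk_score(applicant_b))  # 0.5 -> flagged purely because of nationality
```

Two applicants who are identical in every relevant respect receive different risk scores purely because of nationality; that is the mechanism at the heart of the scandal.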
Discriminatory algorithms in the benefits scandal
On 25 October 2021, Amnesty's Dutch researchers released the report Xenophobic machines: Discrimination through unregulated use of algorithms in the Dutch benefits scandal. The report shows that the tax authorities engaged in ethnic profiling and discrimination based on social class, partly as a result of the algorithms used. The tax authorities used data and algorithms to screen applications for childcare allowance for potential inaccuracies and thus possible fraud. Among other things, whether or not an applicant held Dutch nationality was used to assess that risk. However, having Dutch nationality is not a prerequisite for receiving childcare allowance; taking nationality into account is discrimination and is therefore prohibited. After analyzing the measures the government says it is taking, we have to conclude that they are still inadequate. For example, officials are not obliged to identify human rights risks in advance, and there is insufficient supervision of algorithms. In addition, government institutions are allowed to keep the use and operation of algorithms confidential. Amnesty International is not against the use of algorithms, but the protection of human rights when they are used is inadequate.
Good rules needed
There are hardly any good rules in place to prevent a second benefits scandal in the Netherlands. In an analysis of the algorithm policy at the end of 2023, Amnesty notes that the measures the government has taken and promised still do not sufficiently protect people against discriminatory algorithms. People's data must be handled carefully, and there must be clear and binding rules on the use of algorithms. Today, systems are often not transparent, so people do not know what happens to their data. That makes it difficult for them to defend themselves against decisions made using algorithms or Big Data analyses. How do you prove that you are entitled to an allowance if you do not know why the algorithm says no? How do you show that you are being discriminated against if the government will not reveal whether it treats your nationality as a risk? In this video we explain simply what algorithms are and how they can discriminate.
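That lack of transparency can also be sketched in code. The hypothetical fragment below (function names and logic are invented for illustration; this is not any real government system) contrasts a decision that gives the affected person nothing to contest with one that at least exposes the factors behind it:

```python
# Hypothetical contrast between an opaque and a transparent automated decision.
# Function names and logic are invented; this is not any real government system.

def opaque_decision(application: dict) -> str:
    """All the affected person sees is a verdict, with nothing to contest."""
    return "rejected"  # which data? which rules? impossible to tell from outside

def transparent_decision(application: dict) -> dict:
    """A contestable decision: the verdict plus the factors that produced it."""
    factors = {"late_paperwork": 0.1 if application["late_paperwork"] else 0.0}
    score = sum(factors.values())
    return {
        "verdict": "rejected" if score > 0.05 else "approved",
        "factors": factors,   # which inputs counted, and by how much
        "threshold": 0.05,    # the cut-off the decision was measured against
    }

application = {"late_paperwork": True}
print(opaque_decision(application))       # "rejected", with no reasons to appeal
print(transparent_decision(application))  # the verdict plus the reasons behind it
```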

Concerns about the bill
The government wants more options for processing and sharing data. To achieve this, a new law has been proposed: the Data Processing by Partnerships Act (WGS). If this law is passed, various branches of government will be able to exchange data with each other and with companies, and that data can then be analyzed automatically with algorithmic systems. Amnesty is deeply concerned about this law, and we are not alone: the Council of State, the Data Protection Authority and other civil society organizations also consider it a dangerous law because human rights, such as the right to privacy, are at risk. It creates risks of arbitrariness and abuse of power. The benefits scandal at the tax authorities makes clear that the use of data and its automated analysis can cause great harm to people who have nothing to do with criminal offences or fraud. Suffering like that inflicted in the benefits scandal should be prevented by setting clear rules in advance that rule out discrimination and privacy violations. The bill is now before the Senate. Amnesty is calling on the new cabinet to withdraw the proposal and asking the Senate to take the criticism of the WGS seriously and vote against the law.
Amnesty's appeal
To protect human rights, Amnesty urges the following measures:
- An immediate and explicit ban on the (indirect) use of ethnicity and nationality when using (automated) risk profiles in law enforcement. Read more about our call on the cabinet to combat discriminatory risk profiles.
- The introduction of a binding human rights test prior to the design and during the use of algorithmic systems and automated decision-making.
- The establishment of an independent and well-resourced algorithm supervisor that can issue binding advice and supervise from the perspective of all human rights.
- A ban on autonomous and machine-learning algorithms in the performance of government tasks when they significantly affect individuals or have a major impact on society.
Amnesty International has received funding from the National Postcode Lottery to research governments' use of AI and Big Data. We also do this in the Netherlands.
More information
- Amnesty report 'Ethnic profiling is a government-wide problem'
- News item about report 'Ethnic profiling is a government-wide problem'
- Opinion piece in the NRC 'House of Representatives, prevent a new benefits affair'
- Letter to the House of Representatives about the WGS
- Letter to the House of Representatives about the report “Unprecedented Injustice”
- EU bill too weak to protect us against dangerous AI systems
- Amnesty's position on the “Big Data Quality Framework” by the Public Prosecutor and the Dutch Police
- Discriminatory risk profiles
- Amnesty article in the Dutch Legal Journal about discriminatory risk profiles
- Amnesty's algorithm policy analysis (November 2023)