Automated racism: How tech can entrench bias

Dutch benefits scandal highlights need for EU scrutiny.

Nani Jansen Reventlow is founding director of the Digital Freedom Fund, which supports strategic litigation on digital rights in Europe. She is adjunct professor at Oxford University and Columbia Law School and an adviser to Harvard Law School’s Cyberlaw Clinic.

In the run-up to parliamentary elections in the Netherlands this month, center-right and extreme right parties are outdoing one another in calling for a surveillance state that will come down on marginalized and minority groups in all its might.

This should send alarm bells ringing in Brussels and beyond.

The party of Prime Minister Mark Rutte, projected to emerge as the election winner, doesn’t appear to have learned any lessons from last year’s benefits scandal, which a parliamentary report called “unprecedented injustice” and a violation of “fundamental principles of the rule of law.”

Over the course of two decades, as many as 26,000 parents were wrongly accused of having fraudulently claimed child care allowances. As a result, some 10,000 families were forced to repay tens of thousands of euros, which led to financial hardship, unemployment, bankruptcies, divorces and people losing their homes. One parent committed suicide.

An investigation by the Dutch Data Protection Authority last summer made it unequivocally clear that the methods used by the tax authority to “detect” these alleged cases of fraud were outright discriminatory: Parents were singled out for special scrutiny because of their ethnic origin or dual nationality.

Earlier in the year, the tax authority — following three years of relentless enquiry by members of parliament — had disclosed that no fewer than 180,000 citizens had been wrongfully included on secret blacklists like this. The press uncovered internal correspondence in which tax authority staff referred to parents in a derogatory and racist manner: “It looks like a nest of Antilleans,” one said.

The scandal — and the Netherlands’ failure to formally reckon with its underlying causes — is an uncomfortable reminder for Europe that institutional racism is very much the lived experience of millions of people of color living across the Continent.

This problem will not go away if left unaddressed. In fact, the technology on which governments are increasingly relying to automate crucial decisions about our lives has the potential to not only reproduce but amplify structural inequalities in our societies.

The European Union has a crucial role to play here — not only by addressing inequalities within its own institutions and considering new legislation, as outlined in its recent anti-racism action plan, but also by holding member countries to account for their compliance with existing regulations such as the Racial Equality Directive.

Earlier this month, the European Parliament’s Anti-Racism and Diversity Intergroup called on the EU executive to “take a strong and decisive stand against institutional racism” in the Netherlands in light of the benefits scandal. It also urged the Commission to ensure it implements the promises it made last year in the wake of international Black Lives Matter protests.

While the Dutch government has apologized and promised compensation to the affected families, no concrete steps have been taken to address the institutional racism that informed these policies over the past two decades — and to prevent something like this from happening again. A parliamentary enquiry scheduled for the summer of 2022 may prove to be too late to prevent further damage being done in the meantime.

Despite revelations that government bodies, including customs, are using at least 211 other blacklists, the main political parties in January rejected a parliamentary motion to forbid the use of nationality or ethnic origin in risk profiling.

In fact, the prime minister’s party is campaigning with a promise to create “an exception to privacy legislation to make it possible to create blacklists of fraudulent individuals and share that information between governmental and private institutions.”

These blacklists pose an undeniable danger to racialized groups in the Netherlands — one that could soon become more acute as a result of new technology: In December, a legislative proposal was adopted that would give the government even greater algorithmic profiling powers.

The EU can’t afford to ignore what is happening in the Netherlands. The country’s failure to formally reckon with the institutional racism brought to light by the recent scandal, its lack of transparency regarding what other “blacklists” are being used by the government, and its proposal to expand such automated, potentially discriminatory processes at scale are deeply worrying.

When the EU’s anti-racism action plan was launched last September, the Commission’s vice president for values and transparency, Věra Jourová, said “change must happen now.” The Commission should make good on its word and take concrete steps to use the tools at its disposal to thoroughly investigate cases like those in the Netherlands.

The risks posed by this kind of automation and its potential to reinforce institutional racism are not limited to the Netherlands. Similar systems are being deployed across Europe and around the world.