In Ukraine, Identifying the Dead Comes at a Human Rights Cost


Five days after Russia launched its full-scale invasion of Ukraine, a year ago this week, US-based facial recognition company Clearview AI offered the Ukrainian government free access to its technology, suggesting that it could be used to reunite families, identify Russian operatives, and fight misinformation. Soon afterwards, the Ukrainian government revealed that it was using the technology to scan the faces of dead Russian soldiers to identify their bodies and notify their families. By December 2022, Mykhailo Fedorov, Ukraine’s vice prime minister and minister of digital transformation, was tweeting a picture of himself with Clearview AI’s CEO, Hoan Ton-That, thanking the company for its support.

Accounting for the dead and letting families know the fate of their relatives is a human rights imperative written into international treaties, protocols, and laws like the Geneva Conventions and the International Committee of the Red Cross’ (ICRC) Guiding Principles for Dignified Management of the Dead. It is also tied to much deeper obligations. Caring for the dead is among the most ancient human practices, one that makes us human as much as language and the capacity for self-reflection do. Historian Thomas Laqueur, in his epic meditation The Work of the Dead, writes that “as far back as people have discussed the subject, care of the dead has been regarded as foundational—of religion, of the polity, of the clan, of the tribe, of the capacity to mourn, of an understanding of the finitude of life, of civilization itself.” But identifying the dead with facial recognition borrows the moral weight of this care to legitimize a technology that raises serious human rights concerns.

In Ukraine, the site of the bloodiest war in Europe since World War II, facial recognition may seem to be just another tool brought to the grim task of identifying the fallen, along with digitized morgue records, mobile DNA labs, and the exhumation of mass graves.

But does it work? Ton-That says his company’s technology “works effectively regardless of facial damage that may have occurred to a deceased person.” There is little research to support this assertion, but authors of one small study found results “promising” even for faces in states of decomposition. However, forensic anthropologist Luis Fondebrider, former head of forensic services for the ICRC, who has worked in conflict zones around the world, casts doubt on these claims. “This technology lacks scientific credibility,” he says. “It is absolutely not widely accepted by the forensic community.” (DNA identification remains the gold standard.) The field of forensics “understands technology and the importance of new developments” but the rush to use facial recognition is “a combination of politics and business with very little science,” in Fondebrider’s view. “There are no magic solutions for identification,” he says.

Using an unproven technology to identify fallen soldiers could lead to mistakes and traumatize families. But even if the forensic use of facial recognition technology were backed up by scientific evidence, it should not be used to name the dead. It is too dangerous for the living.

Organizations including Amnesty International, the Electronic Frontier Foundation, the Surveillance Technology Oversight Project, and the Immigrant Defense Project have declared facial recognition technology a form of mass surveillance that threatens privacy, amplifies racist policing, threatens the right to protest, and can lead to wrongful arrest. Damini Satija, head of Amnesty International’s Algorithmic Accountability Lab and deputy director of Amnesty Tech, says that facial recognition technology undermines human rights by “reproducing structural discrimination at scale and automating and entrenching existing societal inequities.” In Russia, facial recognition technology is being used to quash political dissent. It fails to meet legal and ethical standards when used in law enforcement in the UK and US, and is weaponized against marginalized communities around the world.

Clearview AI, which primarily sells its wares to police, has one of the largest known databases of facial photos, at 20 billion images, with plans to collect an additional 100 billion images—equivalent to 14 photos for every person on the planet. The company has promised investors that soon “almost everyone in the world will be identifiable.” Regulators in Italy, Australia, the UK, and France have declared Clearview’s database illegal and ordered the company to delete their citizens’ photos. In the EU, Reclaim Your Face, a coalition of more than 40 civil society organizations, has called for a complete ban on facial recognition technology.

AI ethics researcher Stephanie Hare says Ukraine is “using a tool, and promoting a company and CEO, who have not only behaved unethically but illegally.” She conjectures that it’s a case of “the end justifies the means,” but asks, “Why is it so important that Ukraine is able to identify dead Russian soldiers using Clearview AI? How is this essential to defending Ukraine or winning the war?”




