How to Stop Robots From Becoming Racist


In 2017, Holliday contributed to a RAND report warning that resolving bias in machine learning requires hiring diverse teams and cannot be fixed through technical means alone. In 2020, he helped found the nonprofit Black in Robotics, which works to widen the presence of Black people and other minorities in the industry. He thinks two principles from an algorithmic bill of rights he proposed at the time could reduce the risk of deploying biased robots. One is requiring disclosures that inform people when an algorithm is going to make a high-stakes decision affecting them; the other is giving people the right to review or dispute such decisions. The White House Office of Science and Technology Policy is currently developing an AI Bill of Rights.

Some Black roboticists say their worries about racism becoming baked into automated machines come from a mix of engineering expertise and personal experience.

Terrence Southern grew up in Detroit and now lives in Dallas, maintaining robots for trailer manufacturer ATW. He recalls facing barriers to entering the robotics industry, or even to being aware of it. “Both my parents worked for General Motors, and I couldn’t have told you outside of The Jetsons and Star Wars what a robot could do,” Southern says. When he graduated from college, he didn’t see anybody who looked like him at robotics companies, and believes little has changed since—one reason he mentors young people interested in pursuing jobs in the field.

Southern believes it’s too late to fully prevent the deployment of racist robots, but thinks the harm could be reduced in scale by assembling high-quality datasets and by subjecting the claims companies make about their AI systems to independent, third-party evaluation.
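Southern doesn’t point to specific tooling, but one concrete place a dataset-quality effort can start is simply measuring who is represented in the training data. The sketch below is a hypothetical illustration, not anything ATW or Black in Robotics has published; the `demographic_group` metadata field is an assumption, since many real datasets lack such labels entirely, which an audit like this would also surface.

```python
from collections import Counter

def representation_report(examples, group_key="demographic_group"):
    """Print how often each demographic group appears in a dataset.

    `examples` is a list of dicts; `group_key` names a hypothetical
    metadata field attached during data collection. Records missing
    the field are reported as "unlabeled" rather than silently dropped.
    """
    counts = Counter(ex.get(group_key, "unlabeled") for ex in examples)
    total = sum(counts.values())
    for group, n in counts.most_common():
        print(f"{group:>12}: {n:5d} examples ({n / total:6.1%})")

# Toy records standing in for a real image dataset:
dataset = [
    {"image": "img_001.jpg", "demographic_group": "group_a"},
    {"image": "img_002.jpg", "demographic_group": "group_a"},
    {"image": "img_003.jpg", "demographic_group": "group_b"},
    {"image": "img_004.jpg"},  # missing label: flagged as "unlabeled"
]
representation_report(dataset)
```

A skewed or mostly “unlabeled” report doesn’t prove a model is biased, but it is exactly the kind of evidence an independent evaluator could demand before accepting a company’s fairness claims.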

Andra Keay, managing director of industry group Silicon Valley Robotics and president of Women in Robotics, which has more than 1,700 members around the world, also considers the racist robot experiment’s findings unsurprising. The combination of systems necessary for a robot to navigate the world, she says, amounts to “a big salad of everything that could possibly go wrong.”

Keay was already planning to push standards-setting bodies like the Institute of Electrical and Electronics Engineers (IEEE) to adopt rules requiring that robots have no apparent gender and are neutral in ethnicity. With robot adoption rates on the rise as a result of the Covid-19 pandemic, Keay says, she also supports the idea of the federal government maintaining a robot register to monitor the deployment of machines by industry.

Late in 2021, partly in response to concerns raised by the AI and robotics community, the IEEE approved a new transparency standard for autonomous systems that could help nudge companies to ensure robots treat all people fairly. It requires autonomous systems to honestly convey the causes of their actions or decisions to users. However, standards-setting professional groups have their limits: in 2020, a tech policy committee at the Association for Computing Machinery urged businesses and governments to stop using face recognition, a call that largely fell on deaf ears.
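The standard concerns process as much as code, but the engineering footprint of “honestly convey the causes of actions” looks something like an audit trail. The sketch below is a minimal, hypothetical illustration of that idea, not an implementation of the IEEE standard; the `DecisionLog` class and its file format are assumptions made for the example.

```python
import json
import time

class DecisionLog:
    """Minimal audit trail for an autonomous system's decisions.

    Every consequential action is stored alongside the inputs it was
    based on and a plain-language reason, so it can later be disclosed
    to, or disputed by, the person it affected.
    """

    def __init__(self, path="decisions.jsonl"):
        self.path = path  # one JSON record per line

    def record(self, action, inputs, reason):
        entry = {
            "timestamp": time.time(),
            "action": action,   # what the system did
            "inputs": inputs,   # data the decision was based on
            "reason": reason,   # human-readable cause
        }
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")
        return entry

# Example usage with a made-up robot decision:
log = DecisionLog()
log.record(
    action="flagged_for_manual_review",
    inputs={"detector_score": 0.91, "camera_id": "dock_3"},
    reason="Classifier confidence exceeded the 0.9 review threshold",
)
```

Keeping the reason as text a layperson can read, rather than raw model internals, is what connects a log like this to the disclosure and dispute rights described earlier in the piece.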

When Carlotta Berry, a national director for Black in Robotics, heard that a chess robot broke a child’s finger last month, her first thought was, “Who thought this robot was ready for prime time when it couldn’t recognize the difference between a chess piece and a child’s finger?” She is codirector of a robotics program at the Rose-Hulman Institute of Technology in Indiana and editor of a forthcoming textbook about mitigating bias in machine learning. She believes that part of the solution to preventing the deployment of sexist and racist machines is a common set of evaluation methods that new systems must pass before they are made available to the public.
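Berry’s textbook isn’t out yet, so her proposed methods aren’t public; as a hypothetical illustration of the kind of check a common evaluation suite might include, the sketch below compares a model’s error rate across demographic groups on annotated test data. A large gap between groups is a signal the system isn’t ready for release. The function name and toy data are assumptions made for the example.

```python
def error_rate_by_group(predictions, labels, groups):
    """Compare a model's error rate across demographic groups.

    All three arguments are parallel lists; `groups` holds per-example
    demographic labels assumed to come from annotated evaluation data.
    Returns a mapping from group to fraction of wrong predictions.
    """
    totals, errors = {}, {}
    for pred, label, group in zip(predictions, labels, groups):
        totals[group] = totals.get(group, 0) + 1
        if pred != label:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / totals[g] for g in totals}

# Toy check: the model is perfect on g1 and wrong twice on g2.
rates = error_rate_by_group(
    predictions=["a", "b", "a", "a", "b", "b"],
    labels=["a", "b", "a", "b", "a", "b"],
    groups=["g1", "g1", "g1", "g2", "g2", "g2"],
)
print(rates)  # {'g1': 0.0, 'g2': 0.666...}: a disparity that should block release
```

Error rate is only one fairness metric among several, but the larger point stands: a shared, mandatory battery of tests like this would catch problems before a robot reaches the public rather than after.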

In the current age of AI, as engineers and researchers compete to rush out new work, Berry is skeptical that robot builders can be relied on to self-regulate or add safety features. She believes a greater emphasis should be placed on user testing.

“I just don’t think researchers in the lab can always see the forest for the trees, and will not recognize when there’s a problem,” Berry says. Is the computational power available to the designers of AI systems running ahead of their ability to thoughtfully consider what they should or should not build with it? “It’s a hard question,” Berry says, “but one that needs to be answered, because the cost is too high for not doing it.”


