To receive The Algorithm newsletter in your inbox every Monday, sign up here.
Welcome to The Algorithm!
It’s never been more important for companies to ensure that their AI systems function safely, especially as new laws to hold them accountable kick in. The responsible AI teams they set up to do that are supposed to be a priority, but investment in those teams is still lagging behind.
People working in the field suffer as a result, as I found in my latest piece. Organizations place huge pressure on individuals to fix big, systemic problems without proper support, while those individuals often face a near-constant barrage of aggressive criticism online.
The problem also feels very personal—AI systems often reflect and exacerbate the worst aspects of our societies, such as racism and sexism. The problematic technologies range from facial recognition systems that classify Black people as gorillas to deepfake software used to make porn videos of women who have not consented. Dealing with these issues can be especially taxing for women, people of color, and other marginalized groups, who tend to gravitate toward AI ethics jobs.
I spoke with a bunch of ethical-AI practitioners about the challenges they face in their work, and one thing was clear: burnout is real, and it’s harming the entire field. Read my story here.
Two of the people I spoke to in the story are pioneers of applied AI ethics: Margaret Mitchell and Rumman Chowdhury, who now work at Hugging Face and Twitter, respectively. Here are their top tips for surviving in the industry.
1. Be your own advocate. Despite growing mainstream awareness about the risks AI poses, ethicists still find themselves fighting to be recognized by colleagues. Machine-learning culture has historically not been great at acknowledging the needs of people. “No matter how confident or loud the people in the meeting are [who are] talking or speaking against what you’re doing—that doesn’t mean they’re right,” says Mitchell. “You have to be prepared to be your own advocate for your own work.”
2. Slow and steady wins the race. In the story, Chowdhury talks about how exhausting it is to follow every single debate on social media about the possible harmful side effects of new AI technologies. Her advice: It’s okay not to engage in every debate. “I’ve been in this for long enough to see the same narrative cycle over and over,” Chowdhury says. “You’re better off focusing on your work, and coming up with something solid even if you’re missing two or three cycles of information hype.”
3. Don’t be a martyr. (It’s not worth it.) AI ethicists have a lot in common with activists: their work is fueled by passion, idealism, and a desire to make the world a better place. But there’s nothing noble about taking a job in a company that goes against your own values. “However famous the company is, it’s not worth being in a work situation where you don’t feel like your entire company, or at least a significant part of your company, is trying to do this with you,” says Chowdhury. “Your job is not to be paid lots of money to point out problems. Your job is to help them make their product better. And if you don’t believe in the product, then don’t work there.”
Machine learning could vastly speed up the search for new metals
Machine learning could help scientists develop new types of metals with useful properties, such as resistance to extreme temperatures and rust, according to new research. This could be useful in a range of sectors—for example, metals that perform well at lower temperatures could improve spacecraft, while metals that resist corrosion could be used for boats and submarines.
Why this matters: The findings could help pave the way for greater use of machine learning in materials science, a field that still relies heavily on laboratory experimentation. Also, the technique could be adapted for discovery in other fields, such as chemistry and physics. Read more from Tammy Xu here.
Even Deeper Learning
The evolution of AI
On Thursday, November 3, MIT Technology Review’s senior editor for AI, William Heaven, will quiz AI luminaries such as Yann LeCun, chief AI scientist at Meta; Raia Hadsell, senior director of research and robotics at DeepMind; and Ashley Llorens, hip-hop artist and distinguished scientist at Microsoft Research, on stage at our flagship event, EmTech.
On the agenda: They will discuss the path forward for AI research, the ethics of responsible AI use and development, the impact of open collaboration, and the most realistic end goal for artificial general intelligence. Register here.
LeCun is often called one of the “godfathers of deep learning.” Will and I spoke with LeCun earlier this year when he unveiled his bold proposal about how AI can achieve human-level intelligence. LeCun’s vision includes pulling together old ideas, such as cognitive architectures inspired by the brain, and combining them with deep-learning technologies.
Bits and Bytes
Shutterstock will start selling AI-generated imagery
The stock image company is teaming up with OpenAI, the company that created DALL-E. Shutterstock is also launching a fund to reimburse artists whose works are used to train AI models. (The Verge)
The UK’s information commissioner says emotion recognition is BS
In a first from a regulator, the UK’s information commissioner said companies should avoid the “pseudoscientific” AI technology, which claims to be able to detect people’s emotions, or risk fines. (The Guardian)
Alex Hanna left Google to try to save AI’s future
MIT Technology Review profiled Alex Hanna, who left Google’s Ethical AI team earlier this year to join the Distributed AI Research Institute (DAIR), which aims to challenge the existing understanding of AI through a community-focused, bottom-up approach to research. The institute is the brainchild of Hanna’s old boss, Timnit Gebru, who was fired by Google in late 2020. (MIT Technology Review)
Thanks for reading!