Approximately two seconds after Microsoft let people poke around with its new ChatGPT-powered Bing search engine, people started finding that it responded to some questions with incorrect or nonsensical answers, such as conspiracy theories. Google had an embarrassing moment when scientists spotted a factual error in the company’s own advertisement for its chatbot Bard, an error that subsequently wiped $100 billion off its market value.
What makes this all the more shocking is that it came as a surprise to precisely no one who has been paying attention to AI language models.
Here’s the problem: the technology is simply not ready to be used like this at this scale. AI language models are notorious bullshitters, often presenting falsehoods as facts. They are excellent at predicting the next word in a sentence, but they have no knowledge of what the sentence actually means. That makes it incredibly dangerous to combine them with search, where it’s crucial to get the facts straight.
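To see why next-word prediction carries no understanding, consider a deliberately crude sketch: a toy bigram model that simply counts which word most often follows another in a scrap of text. (This is an illustration of the statistical principle, not how modern language models are actually built; the corpus and function names are invented for the example.)

```python
from collections import Counter, defaultdict

# A made-up toy corpus -- any text would do.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word -- no meaning involved."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("on"))  # "the" -- purely because it followed "on" most often
```

The model will happily continue any sentence, plausible or not, because it is optimizing for likelihood, not truth. Real language models are vastly more sophisticated, but the objective is the same, which is why fluency is no guarantee of accuracy.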
OpenAI, the creator of the hit AI chatbot ChatGPT, has always emphasized that it is still just a research project, and that it is constantly improving as it receives people’s feedback. That hasn’t stopped Microsoft from integrating it into a new version of Bing, albeit with caveats that the search results might not be reliable.
Google has been using natural-language processing for years to help people search the internet using whole sentences instead of keywords. However, until now the company has been reluctant to integrate its own AI chatbot technology into its signature search engine, says Chirag Shah, a professor at the University of Washington who specializes in online search. Google’s leadership has been worried about the “reputational risk” of rushing out a ChatGPT-like tool. The irony!
The recent blunders from Big Tech don’t mean that AI-powered search is a lost cause. One way Google and Microsoft have tried to make their AI-generated search summaries more accurate is by offering citations. Linking to sources allows users to better understand where the search engine is getting its information, says Margaret Mitchell, a researcher and ethicist at the AI startup Hugging Face, who used to lead Google’s AI ethics team.
This might even help give people a more diverse take on things, she says, by nudging them to consider more sources than they might have done otherwise.
But that does nothing to address the fundamental problem that these AI models make up information and confidently present falsehoods as facts. And when AI-generated text looks authoritative and cites sources, that could ironically make users even less likely to double-check the information they’re seeing.