The Generative AI Race Has a Dirty Secret

In early February, first Google, then Microsoft, announced major overhauls to their search engines. Both tech giants have spent big on building or buying generative AI tools, which use large language models to understand and respond to complex questions. Now they are trying to integrate them into search, hoping they’ll give users a richer, more accurate experience. The Chinese search company Baidu has announced it will follow suit.

But the excitement over these new tools could be concealing a dirty secret. The race to build high-performance, AI-powered search engines is likely to require a dramatic rise in computing power, and with it a massive increase in the amount of energy that tech companies require and the amount of carbon they emit.

“There are already huge resources involved in indexing and searching internet content, but the incorporation of AI requires a different kind of firepower,” says Alan Woodward, professor of cybersecurity at the University of Surrey in the UK. “It requires processing power as well as storage and efficient search. Every time we see a step change in online processing, we see significant increases in the power and cooling resources required by large processing centers. I think this could be such a step.”

Training large language models (LLMs), such as those that underpin OpenAI’s ChatGPT, which will power Microsoft’s souped-up Bing search engine, and Google’s equivalent, Bard, means parsing and computing linkages within massive volumes of data, which is why they have tended to be developed by companies with sizable resources.

“Training these models takes a huge amount of computational power,” says Carlos Gómez-Rodríguez, a computer scientist at the University of Coruña in Spain. “Right now, only the Big Tech companies can train them.”

While neither OpenAI nor Google has said what the computing cost of their products is, third-party analysis by researchers estimates that the training of GPT-3, which ChatGPT is partially based on, consumed 1,287 MWh and led to emissions of more than 550 tons of carbon dioxide equivalent, the same amount as a single person taking 550 roundtrips between New York and San Francisco.
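As a sanity check, a minimal back-of-envelope sketch in Python using only the two figures cited above; the derived carbon intensity and the per-flight figure are implications of the article’s comparison, not independently sourced values:

```python
# Back-of-envelope check on the GPT-3 training figures cited above.
training_energy_mwh = 1_287        # estimated training energy for GPT-3
training_emissions_tons = 550      # estimated emissions, tons of CO2e

# Implied carbon intensity of the electricity used (kg CO2e per kWh):
# 550,000 kg / 1,287,000 kWh is roughly 0.43 kg CO2e per kWh.
intensity_kg_per_kwh = (training_emissions_tons * 1_000) / (training_energy_mwh * 1_000)
print(f"Implied carbon intensity: {intensity_kg_per_kwh:.2f} kg CO2e/kWh")

# The flight comparison implies roughly 1 ton of CO2e per person
# per New York-San Francisco roundtrip.
tons_per_roundtrip = training_emissions_tons / 550
print(f"Implied emissions per roundtrip: {tons_per_roundtrip:.1f} tons CO2e")
```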

“It’s not that bad, but then you have to take into account [the fact that] not only do you have to train it, but you have to execute it and serve millions of users,” Gómez-Rodríguez says.

There’s also a big difference between using ChatGPT—which investment bank UBS estimates has 13 million users a day—as a standalone product and integrating it into Bing, which handles half a billion searches every day.

Martin Bouchard, cofounder of Canadian data center company QScale, believes that, based on his reading of Microsoft and Google’s plans for search, adding generative AI to the process will require “at least four or five times more computing per search.” He points out that ChatGPT’s knowledge of the world currently ends in late 2021, partly as an attempt to cut down on computing requirements.

In order to meet the requirements of search engine users, that will have to change. “If they’re going to retrain the model often and add more parameters and stuff, it’s a totally different scale of things,” he says.
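Combining Bouchard’s per-search multiplier with the usage figures above gives a crude sense of the scale jump; the one-query-per-user assumption for ChatGPT is an illustrative simplification, not a reported statistic:

```python
# Crude scale comparison: standalone ChatGPT vs. generative AI inside Bing,
# using the figures cited above.
chatgpt_daily_users = 13_000_000    # UBS estimate of daily ChatGPT users
bing_daily_searches = 500_000_000   # Bing searches handled per day

# Illustrative assumption: one ChatGPT query per daily user.
volume_ratio = bing_daily_searches / chatgpt_daily_users
print(f"Bing handles roughly {volume_ratio:.0f}x more queries per day")

# Bouchard's estimate: four to five times more computing per search.
for multiplier in (4, 5):
    total = volume_ratio * multiplier
    print(f"At {multiplier}x compute per search: ~{total:.0f}x the daily "
          "compute of standalone ChatGPT")
```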
