Google Search Is Quietly Damaging Democracy


Google’s aesthetic has always been rooted in a clean appearance—a homepage free of advertising and pop-up clutter, adorned only with a signature “doodle” decorating its name. Part of why many users love Google is its sleek design and its ability to return remarkably accurate results. Yet the static simplicity of Google’s homepage is deceptive. Over time, the way the corporation returns information has shifted ever so slightly. These incremental changes go largely unnoticed by the millions of users who rely on the search engine daily, but they have fundamentally changed the information-seeking process—and not necessarily for the better.

When Google first launched, queries returned a simple list of hyperlinked websites. Slowly, that format changed. First, Google launched AdWords, allowing businesses to buy space at the top of the results page and tailoring returns to maximize product placement. By the early 2000s it was correcting spelling, providing summaries of the news under the headlines, and anticipating our queries with autocomplete. In 2007 it introduced Universal Search, bringing together relevant information across formats (news, images, video). And in 2012 it added the Knowledge Graph, providing a snapshot of information that sits apart from the returns, a source of knowledge that many of us have come to rely on exclusively for quick searches.

As research has shown, many of these design changes now link back to Google properties, placing the company’s own products above those of its competitors. Instead of showing only a series of blue links, Google’s goal, according to official SEC documents filed by Alphabet, is increasingly to “provide direct answers.” By adding all of these features, Google—as well as competitors such as DuckDuckGo and Bing, which also summarize content—has effectively changed the experience from an explorative search environment to a platform designed around verification, replacing a process that enables learning and investigation with one that is more like a fact-checking service.

Google’s drive to answer our questions for us, rather than requiring us to click on the returns and find the answers ourselves, is not particularly problematic if what we’re seeking is a straightforward fact, such as how many ounces are in a gallon. The problem is that many people rely on search engines to seek out information about more convoluted topics. And, as my research reveals, this shift can lead to incorrect returns that often disrupt democratic participation, confirm unsubstantiated claims, and are easily manipulated by people looking to spread falsehoods.

For example, if one queried “When is the North Dakota caucus” during the 2020 presidential election, Google highlighted the wrong information, stating that it was on Saturday, March 28, 2020. In fact, the firehouse caucus took place on March 10, 2020—it was the Republican convention that took place on the 28th. Worse yet, when errors like this happen, there is no mechanism for users who notice discrepancies to flag them for informational review.

Google summaries can also mislead the public on issues of grave importance to sustaining our democracy. When Trump supporters stormed the Capitol on January 6, 2021, conservative politicians and pundits quickly tried to frame the rioters as “anti-Trumpers,” spreading lies that antifa (a loose affiliation of people who believe in active, aggressive opposition to far-right movements) was to blame for the violence. On the day of the attack, The Washington Times ran an article titled “Facial Recognition Identifies Extremists Storming the Capitol” supporting the claim, and the story was repeated on the House floor and on Twitter by elected officials.

Yet even though the FBI has found no evidence to back these claims, and The Washington Times ultimately issued a correction to the article, the disinformation is still widely accessible with a simple Google search. If one were to look up “Washington Times Antifa Evidence,” the top return (as of this writing) is the original article with the headline “Facial Recognition Identifies Extremists Storming the Capitol.” Underneath, Google summarizes the inaccurate argument, highlighting the false claim that those identified as extremists were antifa. Perpetuating these falsehoods has long-lasting effects, especially since participants in my study described Google as a neutral purveyor of news and information. According to an April 2021 poll, more than 20 percent of Republican voters still blame antifa for the violence that transpired that day.

The problem is that many users still rely on Google to fact-check information, and doing so might strengthen their belief in false claims. This is not only because Google sometimes delivers misleading or incorrect information but also because the people I spoke with for my research believed that Google’s top search returns were “more important,” “more relevant,” and “more accurate,” and they trusted Google more than the news—they considered it a more objective source. Many said the Knowledge Graph might be the only source they consult, but few realized how much Google has changed—that it is not the search engine it once was. In an effort to “do their own research,” people tend to search for something they saw on Facebook or other social media platforms, but because of the way that content has been tagged and categorized, they are actually falling into an information trap.

This leads to what I refer to in my book, The Propagandists’ Playbook, as the “IKEA effect of misinformation.” Business scholars have found that when consumers build their own merchandise, they value the product more than an already assembled item of similar quality—they feel more competent and therefore happier with their purchase. Conspiracy theorists and propagandists draw on the same strategy, lending a tangible, do-it-yourself quality to the information they provide. Independently conducting a search on a given topic makes audiences feel they are engaging in an act of self-discovery when they are actually participating in a scavenger hunt engineered by those spreading the lies.

To combat this, users must recalibrate their thinking about what Google is and how information is returned to them, particularly as a heated midterm season approaches. Rather than assume that returns validate truth, we must apply the same scrutiny to search results that we have learned to apply to information on social media. Googling the exact same phrase you see on Twitter will likely return the same information you saw on Twitter; just because it comes from a search engine doesn’t make it more reliable. We should be mindful of the keywords we start with, and we should take a bit more time to explore the information returned to us. Rather than rely on quick answers to tough questions, take the time to click on the links, do a bit of digging into who is doing the reporting, and read information from a variety of sources. Then start the search again from a different perspective, to see how slight shifts in syntax change your results.

After all, something we might not even think to consider could be just a click away.


