A Closer Look at Google's Fight Against Disinformation

Amanda Zantal-Wiener

Last weekend, Google published an extensive white paper detailing how it fights the spread of misinformation.


Disinformation, Google noted in the white paper, can be defined as coordinated, "deliberate efforts to deceive and mislead using the speed, scale, and technologies of the open web." The open web, in this case, includes such Google products as Search, News, Ads, and video platform YouTube -- the last of which Google acquired in 2006.

But to some, the system Google has implemented to combat the spread of disinformation is not without flaws. Potentially problematic content still circumvents the rules, and in some reported cases, seemingly benign publishers have been rejected as sources of news with little to no explanation.

Below is a closer look at these various pieces of the misinformation puzzle, from Google's "three foundational pillars" for fighting it to the gaps in the system.

How Google Says It Fights Misinformation

As noted above, Google says in its white paper that its fight against the coordinated spread of misinformation focuses on "three foundational pillars":

  1. Make Quality Count
  2. Counteract Malicious Actors
  3. Give Users More Context

Make Quality Count

As Google has always said about its Search product, the way it ranks results depends first and foremost on the quality of the content it crawls. 

In Google's words, that means sifting through what can only be described as a massive amount of information that exists online, and surfacing only the most relevant and authoritative content in search results.

That content, if Google's products and algorithms work correctly, "best meets ... users’ needs [by] delivering quality information."

Yielding high-quality search results is crucial -- but becomes especially important, Google says, when it comes to promotional content (e.g., ads) and current events. These areas are more likely than others to be leveraged for the spread of misinformation -- often in the form of false advertising (which long predates the web) and the phenomenon that's come to be known as fake news.

The latter issue became particularly salient after it was revealed that social media network Facebook was weaponized by foreign actors to influence the 2016 U.S. presidential election, and has continued to be used for coordinated misinformation campaigns in the time since.

Image: Returning the best results (Source: Google)

Combating lower-quality content at one time -- in the early 2000s, for instance -- meant that Google had to downgrade search results that were potentially spam or otherwise trying to "game" the Search algorithm.

But as forms of misinformation have since become more complicated, higher-risk, and more sophisticated, Google's fight against them has become more multi-layered -- for instance, its News product must balance timeliness, relevance, regional policies, and accuracy.

The way Google approaches those complex layers in fighting misinformation, it says, essentially boils down to making each product as objectively helpful to users as possible -- "as measured by user testing, not fostering the ideological viewpoints of the individuals that build or audit them."

Counteract Malicious Actors

A subsequent step in prioritizing quality search results is countering the actors who work to spread misinformation -- the "malicious actors" behind coordinated efforts to promote inauthentic content.

But as advanced as Google's algorithms have become, the company says that they still aren't intelligent enough to automatically tell if a news article, for instance, is accurate or not, and whether or not the publisher behind it had malicious intent.

By pairing different signals, however, Google says it has improved its ability to detect malicious activity. For example, if a news article claims it was written in France when, in fact, the account activity behind it places its author in New Jersey, that mismatch sends a signal that the information on the page may be less authentic -- or less transparent to users.
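To make that idea concrete, here's a minimal sketch of what pairing two such signals might look like. This is purely illustrative -- the field names and data are hypothetical, not Google's actual detection system, and a real system would weigh many signals together.

```python
# A toy illustration of "pairing signals" -- not Google's actual system.
# All field names (declared_location, login_geo) are hypothetical.

def location_mismatch_signal(article_metadata: dict, account_activity: dict) -> bool:
    """Flag a possible authenticity issue when an article's declared
    origin doesn't match where the publisher's account is actually used."""
    declared = article_metadata.get("declared_location")   # e.g., "France"
    observed = account_activity.get("login_geo")           # e.g., "New Jersey, US"
    return bool(declared and observed and declared != observed)

# One weak signal alone proves nothing; combined with others (and human
# review), it can contribute to a decision to scrutinize a source further.
article = {"headline": "Local report from Paris", "declared_location": "France"}
activity = {"login_geo": "New Jersey, US"}

if location_mismatch_signal(article, activity):
    print("Signal: declared origin and account activity disagree -- review further.")
```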

Image: Search Quality Tests (Source: Google)

Combined with human content review, Google says such signals have contributed to an overarching system that "can reduce ‘spammy’ behaviors at scale." 

Google, for its part, has policies in place to discourage malicious behavior, and those policies illustrate the broad scope of deliberately spreading misinformation -- activity that includes lying about one's "purpose" as an advertiser or publisher on Google News, or improperly replicating content from other creators or channels on YouTube.

But YouTube recently modified some of its own content policies to make them, in the eyes of some, slightly more lenient. Whereas there was previously a strict "three strikes" policy for channels that violated the site's terms, users who go against these rules will now receive an initial warning -- which will be followed by three strikes in the event of repeated offenses.

Give Users More Context

Finally, Google says the third core pillar in its fight against misinformation is to incorporate methods of letting users know exactly what kind of content they're seeing and why they're seeing it -- be it a search result, an ad, or a news story -- and to implement opportunities to provide feedback on what appears.

In its white paper, Google outlines the key tools it's implemented to boost user context, which largely consist of labels that indicate, for instance, if a "specific piece of content is a fact-checking article," why a user is seeing a certain ad, or more information about news sources and topics that the company says are more prone to misinformation -- e.g., history and science.
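For the fact-checking label specifically, search engines rely on schema.org's ClaimReview structured data, which publishers embed in their pages to identify a fact-check article and its verdict. Below is a minimal, hypothetical sketch of that markup built in Python; the claim, URL, organization name, and rating values are invented for illustration.

```python
# A minimal sketch of the kind of markup behind "fact-check" labels.
# Fact-check labels draw on schema.org ClaimReview structured data;
# the claim, URL, and rating values below are invented for illustration.
import json

claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example.com/fact-checks/moon-landing",  # hypothetical URL
    "claimReviewed": "NASA fabricated the Apollo moon landing.",
    "author": {"@type": "Organization", "name": "Example Fact Check"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,   # 1 = "False" on this invented 1-5 scale
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "False",
    },
}

# Publishers typically embed this as a JSON-LD <script> tag in a page's HTML.
print(json.dumps(claim_review, indent=2))
```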


Where the System Potentially Falls Short


While Google's fight against misinformation has made some notable progress, it still falls short in certain areas -- namely, YouTube.

Consider the aforementioned changes to YouTube's "three strikes" policy -- changes that, some argue, assign punishment disproportionately. According to The Verge, for instance, the new community violation policies give the same penalty to someone who's "impersonating" another channel "as someone who posts violent hate speech or threats."

YouTube's seemingly more lenient approach to guideline violations doesn't instill a high degree of confidence, therefore, that the platform is growing more capable of combating the viral spread of conspiracy theory videos.

This particular issue is the phenomenon of popular YouTube creators using the platform, as Kevin Roose of the New York Times describes, to promote "far-fetched hypotheses" that are not based in fact -- such as one YouTube star's suggestion that the National Aeronautics and Space Administration (NASA) fabricated the first Apollo moon landing.

That YouTube -- and by proxy, its parent company Google -- has faced significant challenges in curbing this particular breed of inauthentic information speaks to the larger issue of tackling content moderation. Determining what content is right, wrong, rule-breaking, or otherwise offensive is far from an objective task, and one that numerous tech companies have struggled to define. 

Facebook, for instance, was recently in the spotlight once again for the extremely ambiguous, often conflicting nature of its content moderation "rulebooks," which places the company -- and those tasked with monitoring what's shared on its network -- in an awkward, sometimes dangerous intersection of legal authority and social media.

No one entity, especially not a for-profit venture like Facebook, should have that kind of power to influence public debate and policy.

- Jasmin Mujanovic, political scientist and author, as told to the New York Times

But for its part, Google has acknowledged that it still has progress to make -- noting that some of its programs, such as those designed to fight election-related misinformation, are still "rolling out" -- and that it consistently works to improve its Search product, for instance, to both yield the most authoritative results and build user trust.

"Misinformation is a problem with three fronts: timeliness, a strong algorithm that can fight coordinated campaigns to spread divisive information, and sometimes, government intervention," says HubSpot Principal SEO Marketing Manager Victor Pan. "Google has matured as a search company, but striking the balance between what accurate information can be displayed to users, versus what cannot due to legal and other concerns, means that any innovation in this area will be slow and deliberate."


Featured image credit: Google
