Can AI stamp out fake news?

During the recent U.S. presidential election, it became clear that both the left and the right were using the Internet and social media to disseminate false information through a new form of insidious propaganda: “fake news.” In near real time and at little cost to the campaigns, organizations and individuals were able to post fake news stories on news sites, social media, and blogs that looked and felt legitimate. Millions of people saw these stories and may have been influenced by what they read. As President Barack Obama stated in a joint press conference with German Chancellor Angela Merkel:

“If we are not serious about facts and what’s true and what’s not — and particularly in an age of social media where so many people are getting their information in sound bites and snippets off their phones — if we can’t discriminate between serious arguments and propaganda, then we have problems.”

It’s still up for debate whether fake news changed the outcome of the recent presidential election, but most would agree the country would be better off if we were all reading accurate stories.

AI to the rescue

While there’s no quick and easy way to fix fake news, one technology could help improve the quality of public discourse: artificial intelligence. Facebook and Google are already using AI to identify content that appears specious, and soon we’ll see media companies, government and non-partisan groups, and other concerned organizations deploy similar tools.

AI-powered software can already analyze the structure of an article to assess its logical soundness. Analyzing video content is more challenging, but that capability is not far off.

Imagine a world where every article could be assessed based on its level of sound discourse. Several years ago, I read with interest an article by Paul Graham, famed investor and founder of Y Combinator, entitled “How to Disagree.” In it, Graham lays out the stages of argumentation, spanning from the least sophisticated strategies of name-calling and ad hominem attacks all the way up to direct refutation of the central argument. Natural language understanding and machine learning tools could be designed to parse articles and uncover their underlying rhetorical architecture: the main point of the piece, the statements supporting the central thesis, whether the author directly assails the central argument or simply tries to discredit the source, and more.
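To make that concrete, a crude first pass at such a parser could map a passage onto a few levels of Graham’s hierarchy using surface cues. The cue phrases and compressed levels below are my own loose approximations, not anything from Graham or a tested model; consider it a minimal sketch in Python:

    import re

    # A few levels of Paul Graham's "How to Disagree" hierarchy, from
    # weakest (0) to strongest (3). The cue phrases are illustrative
    # guesses, not a validated model.
    HIERARCHY = [
        (0, "name-calling",      r"\b(idiot|moron|crooked|loser)\b"),
        (1, "ad hominem",        r"\b(you would say that|of course (he|she|they) would)\b"),
        (2, "counterargument",   r"\b(however|on the contrary|but in fact)\b"),
        (3, "direct refutation", r"\b(the central claim|this argument fails because)\b"),
    ]

    def argument_level(text: str) -> tuple[int, str]:
        """Return the strongest disagreement level whose cues appear in the text."""
        best = (-1, "no clear argument detected")
        for level, label, pattern in HIERARCHY:
            if re.search(pattern, text, re.IGNORECASE):
                best = max(best, (level, label))
        return best

    print(argument_level("However, the central claim is wrong: the data show otherwise."))
    # expected: (3, 'direct refutation')

A production system would replace these regexes with trained language models, but the output, a rank on the disagreement hierarchy, is exactly the kind of signal a reader could use to size up an article at a glance.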

Think of this as a higher-level analogue of software development environments that automatically format code according to the programming language and context: indenting, closing brackets, color-coding functions, and so on. These capabilities make it simpler to understand the context and functionality of the underlying code and how individual snippets relate and contribute to the larger code base. Additionally, AI software could automatically insert links to source material for both supporting and disconfirming information, as sketched below. While such tools won’t eliminate fake news, they’ll allow people to put stories into context, assess their overall logical structure, and make more informed decisions regarding their credibility.
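A bare-bones version of that linking step could flag claim-like sentences and attach a search link the reader can follow to verify them. The claim heuristic here is my own assumption, and a real system would query a fact-checking index rather than build a generic search URL; this is only a sketch:

    import re
    from urllib.parse import quote_plus

    def claim_sentences(text: str) -> list[str]:
        """Naive claim detector: keep sentences that cite numbers or sources."""
        sentences = re.split(r"(?<=[.!?])\s+", text)
        return [s for s in sentences
                if re.search(r"\d|according to", s, re.IGNORECASE)]

    def source_link(claim: str) -> str:
        """Build a search URL a reader could follow to check the claim."""
        return "https://www.google.com/search?q=" + quote_plus(claim)

    article = "Unemployment fell 2 percent last year. Everyone knows the system is rigged!"
    for claim in claim_sentences(article):
        print(claim, "->", source_link(claim))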

Demand will drive innovation

Unfortunately, that reality is still theoretical. Using AI to stem the flow of fake news is possible but expensive — and widespread deployment of fake news monitoring won’t happen until there’s a real business need for it. Media companies will increasingly be interested in protecting their reputations and could invest in AI content monitoring systems to prove the credibility of their content versus fake news. Market research firms and brands will also be interested in using AI to get a deeper understanding of how media — fake and real — influences consumers to make certain buying decisions.

But for now, we haven’t seen any real business case for a fake news monitoring system. In fact, there are powerful economic incentives that favor total views and engagement over underlying quality, hence the power and widespread practice of clickbait.

That doesn’t mean innovation isn’t already happening. In fact, it seems there are more initiatives to combat fake news being announced every day. Recently, Google’s Jigsaw division announced a software tool to help publishers identify toxic comments. Facebook and Google are investing in AI to filter out fake news and incivility because their reputations depend on it. Companies such as Narrative Science, Automated Insights, and Smart Logic are already using AI to automatically author news stories, so as soon as a business case presents itself, these companies could apply their technology toward fake news filtering. In the meantime, computer science researchers, developers, and hackers are already beginning to tackle the problem. A 19-year-old Stanford student took it upon himself to build an AI-driven fake news filter.

But a wide-scale, non-partisan news filtering system that can scan and assess the credibility of all news content on the web is still a pipe dream. Such a system would be massive in scale and require continual updates to stay ahead of hackers.

Should the government fund and build an AI-driven fake news monitoring system? Should the media industry be responsible for creating and maintaining a third-party peer-monitoring system? Should fake news monitoring be a crowd-sourced system, created and monitored by thousands of independent developers? And regardless of who runs such a system, how could we ensure that a broad group of developers with an array of viewpoints creates a monitoring system free of bias? These questions don’t have easy answers.

What we do know is that AI is already capable of playing the role of media cop. It can analyze an article to identify the provenance of information and create links to sources. It can assign credibility scores to articles based on the quality of sources and the strength of discourse. Of course, AI isn’t a perfect solution for identifying fake news. While it could provide the tools and guidelines to help people better decide for themselves whether an article is credible, it could never provide 100 percent accurate assessments.
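As a toy illustration of that scoring idea, the sketch below blends a source-reputation rating with the discourse level from the earlier hierarchy example into a single credibility score. The weights, domain ratings, and cutoffs are invented placeholders, not a real methodology:

    # Hypothetical credibility scorer: combines source quality with
    # discourse strength. All numbers here are assumptions for illustration.
    SOURCE_QUALITY = {"apnews.com": 0.9, "example-blog.net": 0.3}  # placeholder ratings

    def credibility_score(source_domain: str, discourse_level: int) -> float:
        """Blend source reputation (0-1) with argument strength (0-3 scale)."""
        source = SOURCE_QUALITY.get(source_domain, 0.5)  # unknown sources get a neutral prior
        discourse = discourse_level / 3.0                # normalize the 0-3 level to 0-1
        return round(0.6 * source + 0.4 * discourse, 2)  # weighted blend (weights assumed)

    print(credibility_score("apnews.com", 3))        # 0.94: likely credible
    print(credibility_score("example-blog.net", 0))  # 0.18: flag for review

Even a score this crude gives readers something the raw feed does not: a consistent, explainable basis for comparing articles.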

As an investor in AI companies, I’m always on the lookout for startups attempting to tackle the world’s biggest problems. It’s only a matter of time before an entrepreneur identifies a business model for fighting fake news. Until then, the spread of false content, propaganda, and biased clickbait will unfortunately continue unabated.

David Famolari is a Director at Verizon Ventures. He serves on the Corporate Venture Capital Advisory Board for the NVCA and the Advisory Board for Grand Central Tech. Prior to Verizon, he held senior roles in investment banking and advanced R&D. He has authored more than 50 research publications and is an inventor on over 70 granted and pending patents.
