Could a fake news article start World War III? And could technology stop it?

“Fake news” isn’t a new phenomenon; even the Romans spread lies and rumors to gain the upper hand. But in today’s world, where technology lets news be pumped out and spread at breakneck speed, its potency has reached dangerous levels.

Photo: Quentin Hardy speaks at the Dawn or Doom 2016 conference. Hardy, editorial head of cloud computing at Google and formerly the deputy technology editor at The New York Times, took part in the 2016 Dawn or Doom writers’ panel on ethical issues in technology journalism. He will speak about “fake news” at Dawn or Doom ’17.

If a video surfaced of the United States president declaring war on a foreign country, how quickly could it be verified as real or fake? Would a country as easily provoked as, for instance, North Korea even bother to check? Balancing responsible sourcing against the urge to report first, or to be the most-clicked link, is perilous.

There is no real punishment for publishing fake news. In fact, some average Americans have taken up the job of publishing fake news to generate online advertising revenue. When the stories confirm a reader’s biases, all the better.

Hardy says the quandary of fake news can’t be solved by further technological advances alone, but technology could be part of a suite of solutions.

Hardy and Dan Goldwasser, professor of computer science, will discuss dealing with fake news and information during Dawn or Doom ’17, a conference on the risks and rewards of emerging technologies at Purdue. Dawn or Doom will be held Tuesday and Wednesday, Sept. 26 and 27, on the Purdue West Lafayette campus and is free and open to the public.

Dawn or Doom, which features a track called Designing Information, also will include a featured talk by Nicholas Thompson, editor-in-chief of WIRED magazine, focusing on the “dawn” aspect of science and technology’s influence over journalism. Other tracks at the conference include Designing Humans, Designing Cities, Designing Food and Designing the Workforce. Visit the Dawn or Doom website for more information.

“At its root, you have to encourage people that it’s OK to be proven wrong,” Hardy says. “You do that through the education system. But technically speaking, I would like to see people build attribution bots and scour the web to expose the roots of these stories.”

For example, Hardy says, if a story is spread by thousands of Twitter bots and fake Facebook profiles, readers and the social media networks should be able to identify that it came from automated accounts and is from a questionable source.
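The kind of attribution check Hardy describes could, in a very simplified form, look at how a story’s sharers behave. The sketch below is purely illustrative; the account features, thresholds and function names are assumptions for the example, not any network’s real detection logic. It flags a story as suspicious when most of the accounts spreading it post at machine-like rates from very new accounts.

```python
# Illustrative sketch only: a toy heuristic for spotting bot-amplified
# stories. Feature names and thresholds are invented for this example;
# real platforms use far richer signals.

def looks_automated(account):
    """Flag an account that posts at a machine-like rate or is brand new."""
    posts_per_day = account["posts"] / max(account["age_days"], 1)
    return posts_per_day > 50 or account["age_days"] < 7

def suspicious_story(sharers, threshold=0.6):
    """Call a story suspect if most of its sharers look automated."""
    automated = sum(looks_automated(a) for a in sharers)
    return automated / len(sharers) >= threshold

sharers = [
    {"posts": 9000, "age_days": 3},    # new, hyperactive -> automated
    {"posts": 12000, "age_days": 5},   # new, hyperactive -> automated
    {"posts": 400, "age_days": 900},   # long-lived, moderate -> human-like
]
print(suspicious_story(sharers))  # 2 of 3 sharers look automated -> True
```

An attribution bot in Hardy’s sense would pair signals like these with provenance data, tracing a story back to the accounts that first pushed it.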

Fake news also presents a psychological problem. Humans don’t like to be wrong, nor do they enjoy experiencing cognitive dissonance. Even when a news item is found to be fake, the lie lives on in many people’s minds, regardless of how ridiculous it may seem.

Goldwasser hopes his algorithms that teach computers to understand natural language will help humans understand other humans and break biases.

His most recent project, with computer science graduate student Kristen Johnson, analyzed U.S. politicians’ tweets about health care. Republicans framed the issue around cost, while Democrats framed it around care and empathy.

There were, however, some Republicans who defected from their party’s line. Goldwasser’s algorithm successfully identified which of those politicians would vote with the Democrats based solely on how they framed the issue in public, in this case on Twitter.
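As a toy illustration of framing-based analysis (not Goldwasser and Johnson’s actual model, which learns from real tweets), one can score a statement by which frame’s vocabulary dominates it. The word lists and labels below are assumptions made up for the sketch.

```python
# Toy sketch of framing analysis: classify a statement by which issue
# frame its vocabulary leans on. The word lists are invented for
# illustration; the real research uses models learned from actual tweets.

COST_FRAME = {"cost", "taxes", "spending", "deficit", "burden"}
CARE_FRAME = {"care", "coverage", "families", "patients", "empathy"}

def dominant_frame(text):
    """Return which frame's vocabulary appears more often in the text."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    cost = sum(w in COST_FRAME for w in words)
    care = sum(w in CARE_FRAME for w in words)
    if cost == care:
        return "unclear"
    return "cost" if cost > care else "care"

print(dominant_frame("Repeal the spending and the tax burden now"))   # "cost"
print(dominant_frame("Protect coverage for patients and families"))   # "care"
```

The research version predicts downstream behavior, such as a vote, from these framing signals rather than from stated positions.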

If computers can, in essence, read between the lines, it would be much harder for politicians to talk around an issue. Conversely, it would be harder for political activists to send coded messages in the guise of fake news.

Goldwasser remains cautiously optimistic about the solutions new technologies will offer and their impact on how people approach information.

The age-old problem of disinformation is not going away, nor is the modern technology that enables its rapid spread. Other alternatives for dealing with the issue – laws limiting who can use the technology and what they can say, for example – are not particularly attractive.

“At the end of the day, it has to do with if there is regulation and who has access to the information and technology,” Goldwasser says.

Writer: Kirsten Gibson, technology writer, Information Technology at Purdue (ITaP), 765-494-8190,

Last updated: September 8, 2017
