By Chantal Braganza, Associate Editor
From September to the end of November 2014, Regret the Error columnist, Poynter Institute adjunct faculty and entrepreneurial journalist Craig Silverman tracked and analyzed news briefs on rumours as part of a larger report on how mainstream media cover unverified claims and often spread misinformation in the process.
The database he used, Emergent.info, is an algorithm-driven app that regularly trawls rumour-based stories to see if a piece’s text has changed or a correction has been made. Still online and operational, it tracks stories in culture, business and world news in real time and provides updates on whether a story has been verified, remains unverified or has been proven false.
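Emergent’s internals aren’t public, but the change detection it describes — polling a story and flagging when its text differs from the last visit — can be sketched in a few lines. The class and method names below are hypothetical, not Emergent’s actual code:

```python
import hashlib


def text_fingerprint(article_text: str) -> str:
    """Hash the whitespace-normalized article text so edits can be detected cheaply."""
    normalized = " ".join(article_text.split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()


class StoryTracker:
    """Track stories by URL and report whether their text has changed
    since the last poll (illustrative sketch, not Emergent's real design)."""

    def __init__(self):
        self.fingerprints = {}  # url -> last seen fingerprint

    def poll(self, url: str, current_text: str) -> bool:
        """Return True if the story's text changed since the previous poll."""
        fp = text_fingerprint(current_text)
        changed = url in self.fingerprints and self.fingerprints[url] != fp
        self.fingerprints[url] = fp
        return changed
```

Hashing rather than storing full article text keeps the tracker cheap to run at the scale of hundreds of stories polled repeatedly.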
“Lies, Damn Lies and Viral Content,” Silverman’s report for the Tow Center for Digital Journalism at Columbia University, was published just last week. Here are five things we learned about what journalists often do incorrectly when chasing unverified information that’s going viral online.
Silverman’s report quotes Nieman Journalism Lab editor Joshua Benton to describe the auto-pilot-type process that often takes over when news outlets scramble to cover rumours and viral content: “This is journalism as an act of pointing—‘Look over here, this is interesting.’”
An outlet will jump on a viral Facebook post about, say, a possibly fake letter from a grandfather disowning his daughter for disowning her gay son, and simply point to the post’s existence and popularity, often without providing additional reporting to try to confirm if the content is true. Also not uncommon is the practice of pointing to other outlets that have reported on the rumour as a watered-down form of verification: “This is the terrible power of the cascading Internet, with many ‘fingers’ pointing back and forth.”
The report also highlights the practice of publishing online posts about early rumours with declarative headlines that don’t match the status of the report’s veracity. From the Emergent database:
“A Daily Mail story ran with the headline: ‘Is Kim Jong ill? North Korean dictator in poor health as his weight has ballooned thanks to an obsession with cheese.’
“The headline begins with a question, but the second part of it is a declarative statement. The story’s lead then walks it back: ‘Kim Jong-un is putting his health at serious risk due to his dangerously high consumption of Emmental cheese, it has been claimed.’”
The problem with this, writes Silverman, is that the dissonance often leaves readers with the impression that the information is true.
We hedge poorly
Any newsroom is familiar with the practice of using words such as “reportedly” or “allegedly” when reporting on unverified claims. A couple of problems arise, though, when these words are used in reporting on viral content: we don’t know if hedging language registers with readers or if they simply assume what they’re reading is true, and there’s little standardization in how this language is employed.
For example, when Silverman and his research team ran the headlines of the database’s articles through a search platform, they found that using the word “report” and putting quotation marks around a claim were the two most common ways news outlets tried to convey that a story’s claims were unverified.
“It’s also notable that stronger hedging words are nowhere to be found in our top five roundup. ‘Allege’ and ‘rumor’ were both used in fewer than 5 percent of headlines,” he writes.
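The headline analysis the team ran — counting how often hedging terms appear across a set of headlines — can be approximated with a short script. The word list and sample headlines below are illustrative, not the report’s actual search terms or data:

```python
import re
from collections import Counter

# Illustrative hedge terms; the report's actual search terms may differ.
HEDGE_WORDS = ["report", "allege", "rumor", "claim"]


def count_hedges(headlines):
    """Count how many headlines contain each hedge word (stem match, case-insensitive)."""
    counts = Counter()
    for headline in headlines:
        lowered = headline.lower()
        for word in HEDGE_WORDS:
            # \b anchors the match at a word boundary, so "report" also
            # catches "reports" and "reported" as stems.
            if re.search(r"\b" + word, lowered):
                counts[word] += 1
    return counts


headlines = [
    "Report: dictator in poor health",
    "'Cheese obsession' behind leader's weight gain, sources claim",
    "Officials allege mass grave found",
]
```

Dividing each count by the total number of headlines would give the percentage figures the report cites, such as “allege” appearing in fewer than 5 percent of headlines.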
… and we often don’t follow up.
News outlets are quick to jump on potentially viral stories or quick rumours, but, Silverman finds, they’re also awfully slow to follow up on or correct their original reports—and in some cases don’t bother at all.
One of the stories that Silverman’s team tracked through the Emergent database involved the search for the 43 teacher trainees who went missing in Iguala, Mexico, in September 2014. The rumour: that a mass grave had been found, containing the remains of the students, who had previously been involved in a demonstration against local government corruption.
“We logged articles from 14 news organizations in the early days of the claim,” Silverman writes. “Weeks later, DNA testing proved the bodies did not belong to the students. Of the 14 news outlets that covered the initial speculation, just over 35 percent (five) followed up with an article noting the rumor was proven false.”
So, how do we improve?
Silverman points to one case that highlights what can go right when reporting on a rumour-based story: when the debunking of the rumour itself is what gets shared online.
In September 2014, when a Photoshopped image suggesting the existence of a pumpkin-spice–flavoured condom by Durex started to gain traction on Twitter, a Quartz reporter contacted the company to verify whether such a product was actually in development. The ensuing story, which specifically reported that the company wouldn’t confirm or deny the existence of the pumpkin spice condom, is what became the viral content. Within hours, Durex issued a statement confirming the image was false.
“Contrary to the sharing trends we identified with other false claims,” writes Silverman, “the pumpkin spice condom story saw the number of shares rise after it was confirmed false.”
Towards the end of the report, Silverman offers a list of recommendations for best newsroom practices in covering viral rumours.
You can also watch Silverman’s lecture from the launch of the report here:
Photo by Nic McPhee, via Flickr.