April 14, 2018 | Accountability, Artificial Intelligence, Citizen Journalism, Database, Ethics, Fact Checking, Trust
ISOJ panelists expose ‘fake news,’ highlight tools and practices to improve flows of accurate information
Watch video of the discussion panel on tools to improve the flow of accurate information, from ISOJ 2018.
Jennifer Preston, former social media editor at the New York Times and vice president of journalism at the Knight Foundation, introduced the ISOJ panel on trust by underscoring the foundation’s commitment to finding ways to battle misinformation in journalism.
Preston noted April 13 that a recent Gallup survey demonstrates just how deep the partisan divide is when it comes to trust in journalism.
One of the Knight Foundation’s recent efforts to improve the flow of accurate information was the Prototype Fund Challenge. More than 800 people responded with ideas to address serious concerns about trust in journalism, and on Friday afternoon, five of the Prototype Challenge winners presented their ideas.
Before the winners spoke, however, Preston introduced Joan Donovan, lead researcher at Data & Society. Donovan laid out how media often get manipulated but also how people are using media to manipulate individuals and groups.
Especially during breaking news and crises, Donovan said, reliable information is scarce. Journalists and individuals, as well as the police, scramble to find evidence of what actually happened. In that window of low information and high public interest, within four to eight hours of the event, successful trolls can get someone to report false information.
“How, if we were to try and fix this, what are the challenges?” Donovan asked.
She then outlined tactics that trolls use to spread misinformation. Source hacking is when groups coordinate to feed false information to journalists in times of crisis. For example, a forged document circulated on Twitter claimed that California Rep. Maxine Waters had called on a bank to give her a $1 million campaign donation in exchange for her promise to bring in refugees, and thus new mortgages, for the bank. There have been efforts to take down the forged document and debunk it, but it still lives on.
The same tactic was used during the 2017 French presidential election, when documents were released suggesting that Emmanuel Macron, then a candidate, had a hidden bank account in the Caribbean.
Another tactic is keyword squatting, where trolls lie in wait on social media, building their misinformation around a keyword. The Facebook page “Blacktivist” and the website “Black Matters” are two examples of this method of spreading misinformation.
The challenge, Donovan said, is that “there’s no real way to verify who’s running these pages. Source material is much harder to vet because trolls are repeating the same tactics. They treat it as a game.”
And when a journalist covers the misinformation, the trolls take those stories, compile them and treat them as trophies. The trolls are “trying to destabilize the entire institution of journalism.”
What is needed now, Donovan said, is a media movement in which journalists are better equipped to confront the trolls and can lean on one another to keep misinformed stories from being published.
The winners of the Prototype Fund Challenge have designed ways to provide more accurate information in journalism. Frédéric Filloux, creator of Deepnews.ai, spoke about a machine-learning algorithm he designed to sift out fake news stories and identify quality journalism. Filloux explained that the Internet is essentially a field of trash: about 100 million links are created per day, half of them in English.
“If we manually check the stories, it’s like trying to purify water of the Ganges River one glass at a time,” Filloux explained.
Instead, Deepnews.ai presents stories to people for scoring, then uses those scores to rate the quality of the news in those stories. This human scoring interface asks a reader questions like: What type of story is this? How thorough, balanced and fair is it? What is its lifespan or relevance?
“An important part of the process is the human factor,” Filloux said.
Deepnews.ai also tries to find quality stories through a deep-learning model, which detects and rates hidden patterns of editorial quality across 10 million articles from all kinds of news sources. So far, Deepnews.ai has reached about 90 percent agreement between the quality ratings it produces algorithmically and those produced by humans.
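Neither the panel nor this article details Deepnews.ai’s architecture, so the following Python sketch is only a generic illustration of the underlying idea: training a text classifier on articles paired with human quality labels, then using it to score new stories. The sample articles, labels and model choice are all placeholders, not the project’s actual pipeline.

```python
# Minimal sketch of scoring editorial quality with a text classifier.
# The sample data, labels and model choice are illustrative placeholders;
# Deepnews.ai's actual architecture and training corpus are not public.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training set: article texts paired with human quality labels
# (1 = quality journalism, 0 = junk), as produced by a scoring interface.
articles = [
    "A detailed, sourced investigation into city budget shortfalls...",
    "You won't BELIEVE what this politician did next!!!",
]
labels = [1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(articles, labels)

# Score a new story: probability that it resembles quality journalism.
new_story = ["An analysis of hospital funding, with records and interviews."]
print(model.predict_proba(new_story)[0][1])
```

In a real system along these lines, the labels would come from a human scoring interface like the one Filloux described, and the model would be far larger than this toy pipeline.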
Lisa Fazio spoke next about the value of social science research in rooting out misinformation in journalism. She helped design CrossCheck, an online collaboration around the 2017 French presidential election in which various organizations produced fact checks, created debunks of misinformation and posted them online.
Researchers then studied how readers in both the United States and France responded to false stories. Readers were presented with ten rumors and first rated how accurate they felt each statement was. They then read one of the debunks, re-rated the rumor and answered memory questions about the story.
The research found that “everyone starts in the middle,” Fazio explained, on a zero-to-10 scale of how accurate a story seems. Rated accuracy decreased after readers read the debunk, with ratings lower among French readers than American ones. Additionally, researchers found no evidence of the “backfire effect,” the phenomenon in which a debunk leaves people more entrenched in their original views. A week after the initial ratings, readers in the United States still remembered that the rumors were false.
However, readers’ memory of specific details was weaker: they answered about half to two-thirds of the memory questions correctly, perhaps signaling a less careful reading of the story.
Fazio went on to say that neither phrasing a headline as a question nor the number of logos displayed near the online story made a significant difference in readers’ credibility ratings.
Darryl Holliday talked about his efforts as co-founder of City Bureau in Chicago. He presented various ways the organization is trying to improve trust in journalism, but highlighted the Documenters Program, where City Bureau is paying and training citizens to document public meetings throughout the city.
Currently, 330 documenters from all over the city, spanning a range of ages and races, seek to address larger civic questions, Holliday said: “Who makes decisions for Chicago and how do we know where and when those decisions are being made?”
Holliday explained that public meetings are important spaces for democracy, where any resident can participate and hold city leaders accountable. But with those meetings listed across more than 20 different websites, it is hard for a resident to get basic information about where and when they are being held.
Holliday continued that City Bureau sends the documenters to these meetings and uses web “scrapers” to pull the listings onto a single calendar. City Bureau is still figuring out which form the documentation of public meetings will ultimately take, such as live tweeting, audio recordings or meeting notes. But, he explained, “We want a documenter at every single meeting in Chicago.”
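The panel did not describe City Bureau’s scrapers in technical detail; as a rough illustration of the aggregation step only, here is a minimal Python sketch that parses meeting listings from hypothetical agency pages and merges them into one sorted calendar. The HTML structure, field names and sample data are invented for the example.

```python
# Illustrative sketch only: merge public-meeting listings scraped from several
# agency pages into one calendar. The HTML layout and sample data are
# hypothetical; City Bureau's real scrapers are not described in the panel.
from bs4 import BeautifulSoup

# Stand-ins for pages fetched from 20+ agency sites (e.g., with requests).
AGENCY_PAGES = {
    "boards": """<ul>
        <li class="meeting"><time datetime="2018-05-02T18:00">May 2</time>
            <span class="place">City Hall, Room 201</span></li>
    </ul>""",
    "zoning": """<ul>
        <li class="meeting"><time datetime="2018-04-25T10:00">Apr 25</time>
            <span class="place">Zoning Board Chambers</span></li>
    </ul>""",
}

def scrape_meetings(source, html):
    """Parse one listing page, assuming each meeting sits in <li class='meeting'>."""
    soup = BeautifulSoup(html, "html.parser")
    for item in soup.select("li.meeting"):
        yield {
            "when": item.find("time")["datetime"],
            "where": item.select_one("span.place").get_text(strip=True),
            "source": source,
        }

# One chronologically sorted calendar across every agency.
calendar = sorted(
    (m for src, page in AGENCY_PAGES.items() for m in scrape_meetings(src, page)),
    key=lambda m: m["when"],
)
for meeting in calendar:
    print(meeting["when"], meeting["where"], f"({meeting['source']})")
```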
In an era when local newsrooms are being gutted by staff cuts, coverage of local meetings is often the first thing to go. Holliday ended by saying the project has started on Chicago’s South and West sides, that there are plans to expand to Detroit and North Carolina, and that City Bureau wants to add more cities.
Finally, Cameron Hickey, producer at PBS NewsHour, explained his development of Newstracker.org. Hickey said that after the 2016 U.S. presidential election, he began investigating misinformation on social media, taking a more data-driven approach.
“The challenge is like a game of Whac-A-Mole,” Hickey said. “Whoever’s creating misinformation, they’re all trying to avoid detection.”
Hickey also argued that we need a better term than “fake news” to describe this misinformation. He uses “junk news,” which covers clickbait and anything fake, hyper-partisan, misleading, plagiarized or, potentially, satirical.
The goals of Newstracker are to identify news sources creating misinformation and to track and collect that content, both on news sites and as it is shared on social media. His organization does this algorithmically, and so far Newstracker has found that new junk news domains are created constantly, more than 80 a month; it has collected 4,000 to date. Meme images can also be a source of misinformation; Newstracker has identified more than 90,000 since the beginning of this year.
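Newstracker’s detection algorithm was not described in detail, so the Python sketch below illustrates just the simplest piece of the idea: extracting domains from a stream of shared links and flagging ones not seen before as candidates for review. The seed list and sample links are invented for the example.

```python
# Simplified illustration of spotting newly seen domains in shared links.
# The sample links and known-domain list are placeholders; Newstracker's
# actual detection pipeline is not described in the panel.
from urllib.parse import urlparse

known_domains = {"nytimes.com", "washingtonpost.com"}  # hypothetical seed list

shared_links = [  # e.g., links collected from social media posts
    "https://www.nytimes.com/2018/04/13/some-story.html",
    "http://totally-real-news.example/shock-headline",
]

def domain_of(url):
    """Normalize a URL to its bare host (drops a leading 'www.')."""
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

new_domains = set()
for link in shared_links:
    dom = domain_of(link)
    if dom not in known_domains:
        new_domains.add(dom)   # candidate junk news domain for human review
        known_domains.add(dom)

print(new_domains)  # {'totally-real-news.example'}
```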
The panel ended with several questions from attendees, including one about the potential of algorithms to censor groups that are already marginalized. The panel’s moderator, Jennifer Preston, explained that there really isn’t enough information out there for journalists about artificial intelligence, and that the Knight Foundation is also funding efforts to address questions about the technology.
ISOJ is being live-streamed on YouTube and Isoj.org, including a channel with simultaneous translation to Spanish. For updates during the conference, follow ISOJ on Twitter at @ISOJ2018 or with #ISOJ2018. News will also be available on Facebook and Snapchat.