Social media plays a much bigger role in bringing people to fake news sites than in bringing them to real news sites. More than 40 percent of visits to 65 fake news sites came from social media, compared with around 10 percent of visits to 690 top US news sites, according to a 2017 study by researchers from NYU and Stanford, NPR reports:
And another study suggests Facebook was a major conduit for this news: the more people used Facebook, the more fake news they consumed, Princeton’s Andrew Guess, Dartmouth College’s Brendan Nyhan, and the University of Exeter’s Jason Reifler found. The same study found that Facebook was “among the three previous sites visited by respondents in the prior 30 seconds for 22.1 percent of the articles from fake news websites we observe in our web data.” By contrast, Facebook was among the sites visited immediately beforehand for only around 6 percent of real news articles.
Russia started it, but politicians will proliferate it. Online trolling could become a service for campaigns everywhere, says Clint Watts, a Non-Resident Fellow at the Alliance for Securing Democracy.
“Cambridge Analytica’s approach appears to have advanced the Kremlin’s playbook by applying more science to their art, aggregating voluminous personal data and then employing greater computational power to more rapidly and effectively influence audiences on scale,” he writes for the Daily Beast.
“Cambridge Analytica is really just the beta version of trolling-as-a-service,” adds Watts, the author of the forthcoming Messing with the Enemy: Surviving in a Social Media World of Hackers, Terrorists, Russians and Fake News.
Americans go to social media to find happiness and organize their world, only to come away miserable. The Russians knew that older Americans are more likely to vote but less likely to be sophisticated enough to spot a scam on Facebook, Watts told Princeton’s April 7 forum on “Defending Democracy: Civil and Military Responses to Weaponized Information.”
“Propaganda has a bad reputation, but it can be benign,” he said. Yet despite the valiant efforts of the State Department’s Global Engagement Center, the United States “has no common message and so no consistent narrative,” Watts added. “We don’t like to admit that our number one enemy is us.”
The Russians were able to take advantage of the febrile, polarized state of American politics to spread disinformation and disorder, the forum heard.
“We should be worried less about the match than the powder keg,” said one participant.
Panelists on the topic of deterrence said it was important for a nation to identify where its “red lines” are, and publicly state under which scenarios it would respond against foreign adversaries, writes Aaron Nathans of Princeton’s Office of Engineering Communications:
“You need some doctrine,” said Jacob Shapiro, a professor of politics and international affairs at Princeton’s Woodrow Wilson School. In a Princeton interview, he said platform operators need to be able to cooperate to identify hostile foreign organizations, and share information with each other to ban them from posting content. “You’re not restricting speech, but what you’re saying is that on certain platforms, there are certain entities that we’re just not going to allow to operate there,” said Shapiro, who was interviewed on Facebook Live the day of the event.
It has always been possible to create “fake news,” said Pulitzer Prize-winning author and National Endowment for Democracy board member Anne Applebaum, who pointed to a notorious Soviet disinformation campaign in the 1980s claiming that the CIA had invented AIDS.
“It’s not that different from today except it happens much faster,” she told Stanford University’s Cardinal Conversations initiative, noting how technology has disrupted the news cycle. “You’ve always had fakes, you’ve always had mistakes, you’ve always had people pretending to be who they weren’t. But the number, quantity and speed have made them into something different, which is that you can now live online in an alternate reality,” said Applebaum, currently a visiting professor at the London School of Economics where she runs Arena, a program dedicated to defeating disinformation.
While tech companies can be part of a solution to disinformation, forcing them to ban disinformation or to offer greater transparency is not enough. Ultimately, people can decide what they read, what they believe and what they share, say RAND analysts Jennifer Kavanagh and Stijn Hoorens:
- Firstly, the rise of the Internet and social media platforms has drastically increased the volume of information available and the speed with which it can be disseminated. After investigating 126,000 rumours and false news stories on Twitter, researchers at the Massachusetts Institute of Technology concluded that these travelled faster and reached more people than the truth.
- Secondly, tech firms have enabled the creation of online “echo chambers” — where social media users are more likely to engage with people and sources that share their beliefs and worldviews — which has helped to reinforce people’s cognitive biases. These biases are further reinforced by the algorithms of social media platforms and search engines, which feed users stories tailored to their profiles and based on past behaviour (a simplified illustration of this feedback loop follows the quote below). Research from the think-tank Demos (PDF) appears to confirm the existence of an echo chamber effect in European politics. The report suggests a strong connection between a user’s ideology and the other users and news sources they interact with online.
“Disinformation and ‘fake news’ had a big impact on society long before Google, Facebook and Twitter existed. It is as much of an offline problem as an online problem,” they add.
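To make that feedback loop concrete, here is a minimal, purely illustrative Python sketch. It does not reflect any platform’s actual ranking code: the story list, the “slant” scores, and the `recommend` function are all hypothetical. It simply shows how ranking content by similarity to a user’s past engagement narrows the range of viewpoints that user is shown over time.

```python
# Hypothetical sketch of a personalisation feedback loop (not real platform code).
# Each story has an ideological "slant" on a -1.0 .. +1.0 scale; the toy feed
# ranks stories by how closely their slant matches what the user engaged with before.

import random

STORIES = [{"id": i, "slant": random.uniform(-1, 1)} for i in range(200)]

def recommend(history, stories, k=5):
    """Return the k stories closest to the average slant of the user's history."""
    if not history:
        return random.sample(stories, k)   # cold start: serve a random feed
    taste = sum(s["slant"] for s in history) / len(history)
    return sorted(stories, key=lambda s: abs(s["slant"] - taste))[:k]

history = []
for round_no in range(10):
    feed = recommend(history, STORIES)
    history.extend(feed)                   # the user engages with everything served
    spread = max(s["slant"] for s in feed) - min(s["slant"] for s in feed)
    print(f"round {round_no}: viewpoint spread in feed = {spread:.2f}")
```

Running the sketch prints the viewpoint “spread” of each successive feed, which collapses after the first random round: past behaviour drives the recommendations, which in turn narrows future behaviour, a toy counterpart of the echo chamber effect the RAND analysts describe.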
In his new book – Cyber Mercenaries: The State, Hackers, and Power – Tim Maurer, co-director of the Cyber Policy Initiative and current fellow at the Carnegie Endowment for International Peace, discusses the secretive relationship between states and hackers, and what this relationship means for civil society and human rights.
“As cyberspace has emerged as the new frontier for geopolitics, states have become entrepreneurial in their sponsorship, deployment, and exploitation of hackers as proxies to project power,” he writes:
Such modern-day mercenaries and privateers can impose significant harm undermining global security, stability, and human rights. These state-hacker relationships therefore raise important questions about the control, authority, and use of offensive cyber capabilities. While different countries pursue different models for their proxy relationships, they face the common challenge of balancing the benefits of these relationships with their costs and the potential risks of escalation.
The book details case studies in the United States, Iran, Syria, Russia, and China “for the purpose of establishing a framework to better understand and manage the impact and risks of cyber proxies on global politics,” he adds.
NewsWhip is launching a hub, the NewsWhip Research Center, that will serve as a repository for its research into how social media affects the ways people engage with stories, Nieman Lab reports. It’s overseen by Gabriele Boland, NewsWhip’s manager of content strategy and communications.
“We’re seeing a lot of opportunity in helping brands maneuver around junk news and demystify how things are spreading on social media,” Boland said. “We want to go deeper here, whether it’s why fake news is still spreading a year and a half after the election, or why a particular video is going viral — it can range from very serious to very topical.” The hub is here.
Voice of Canada (above) highlights a report on disinformation, fake news, and how to address the problem.