A British TV station on Monday broadcast video apparently showing the head of the data analysis firm Cambridge Analytica talking about using bribes, entrapment involving sex workers and other unethical tactics to swing elections around the world, the Washington Post reports:
The broadcast sparked a fresh round of questions about a company already embroiled in controversy over its use of personal information from tens of millions of Facebook users — the vast majority of whom had no idea their names, likes and work histories had been collected for political purposes. The report relied on surreptitious video recordings of Alexander Nix, the chief executive of Cambridge Analytica, claiming to have used “a web of shadowy front companies” in pursuit of winning elections.
It does not help Facebook that, just as it grapples with how it has been used to spread disinformation for political purposes, the social network’s chief information security officer (CISO), Alex Stamos, is leaving the company, one observer notes.
By allegedly accessing the profiles of 50 million Facebook users, the data mining company could infer the political preferences of voters and help target personalized messages at them, the FT’s John Thornhill notes.
“You are whispering into the ear of each and every voter,” said Christopher Wylie, a data scientist who blew the whistle on Cambridge Analytica’s operations. That had enabled political actors to whisper different messages into different ears. “If we don’t have a shared understanding, how can we be a functioning society?” he asked.
What Cambridge Analytica does is “worse than bullying,” according to Wylie, the architect of its principal psychological warfare tool.
“Because people don’t necessarily know it’s being done to them. At least bullying respects the agency of people because they know,” he told the Guardian. “So it’s worse, because if you do not respect the agency of people, anything that you’re doing after that point is not conducive to a democracy. And fundamentally, information warfare is not conducive to democracy.”
A recent case shows how hacked emails and texts can be used to create a false mosaic. John Scott-Railton, a senior researcher at the Citizen Lab at the Munk School of Global Affairs at the University of Toronto, calls this new kind of information operation “leak-flavored product”: using cherry-picked items from a hack to create a false narrative, Bloomberg’s Eli Lake reports:
A Citizen Lab report from last year covered one variation on this phenomenon in what it called “tainted leaks,” in which mass disclosures of hacked emails were altered to impugn targets. The paper focused on the phishing attack on historian and journalist David Satter. … Citizen Lab could not conclusively prove that the Russian state was behind the hack of Satter’s emails. But it did show that the real target of the alterations was not Satter, but prominent Russian opposition figures. For example, a report found in Satter’s email from the National Endowment for Democracy was altered to make it appear that Russian anti-corruption activist Alexei Navalny was receiving U.S. government funding. The modified document was then posted on the blog of Russian “hacktivist” CyberBerkut. Suddenly the internet contained a lie woven into authentic stolen information.
Democracies around the world are grappling with new threats in the digital age, from foreign actors tampering with voting systems to the viral spread of disinformation through social media, CBC News adds:
Alicia Wanless, director of strategic communications at The SecDev Foundation and a researcher in information warfare and propaganda, said disinformation that affects the choices voters make likely poses a bigger threat to a fair electoral process than electronic hacks from outside Canada. Fake news thrives in polarized domestic political environments, which can then be exploited by foreign actors, she said.
“I think we’ve had a problem in the public realm being able to discuss certain problems that we have as a society, and with those things not being addressed, they’re ripe for exploitation. Not just by foreign actors but domestic political actors, too,” she said.
A recent report from the Canadian Security Intelligence Service (CSIS) warned that western democracies face a “torrent” of lies and distortions by agents of disinformation, flagging Russia as “the most skilled national purveyor of falsehoods.”
The Kremlin’s foreign policy tool kit includes a technique called “reflexive control,” designed to induce a stronger adversary to voluntarily choose the actions Russia prefers by shaping that adversary’s perceptions of the situation, according to Columbia University’s Maria Snegovaya, an adjunct fellow at the Center for European Policy Analysis.
“Such techniques are also intended to test a target country’s response, to assess what might be expected after future provocations,” she writes for the Post’s Monkey Cage blog. “Russia may be trying to promote an image of itself as a state willing to violate international norms and rules if its interests are not met.”
Bret Schafer with the German Marshall Fund’s Alliance for Securing Democracy, which operates the Hamilton 68 dashboard monitoring more than 600 pro-Kremlin Twitter accounts, said that counter-messaging by official U.S. channels is “pretty mellow” by comparison.
“They don’t aggressively go after targets or weigh in on snarky commentary on things, and they certainly don’t engage in more aggressive propaganda,” he told PBS. “We’re fighting on a different playing field.”
Russian efforts to sow discord in democratic societies have leveraged traditional covert and clandestine tools, military confrontation, and new technologies such as social media platforms to shape political events beyond Russia’s borders, according to the Center for Strategic & International Studies (see below). This approach is low cost, seemingly low risk, and has been more successful than Russian leaders could have imagined. Changing Russian behavior will require a deliberate response to these tactics.