- The social media giant evades responsibility by techno-gaslighting, hiding behind phrases designed to placate and confuse simultaneously.
The last few weeks saw an intense drama play out between Meta, the social media giant, and The Wire, a small but strong independent news website in India. The Wire published a pair of stories; Meta denied them. The Wire dug in its heels, putting out evidence to support its reporting: leaked emails, screenshots, and links. Meta still denied it. Technology experts and credible tech reporters weighed in. The leaked email from Meta’s communications person Andy Stone seemed too good to be true, and the evidence provided by The Wire didn’t seem to stand up to scrutiny.
In response, The Wire issued an uncomplicated, straightforward statement that it would review all the evidence. Finally, it took the respectable step of permanently withdrawing the stories. An anti-climax, sure, but morally and ethically the right step.
Stuff happens. It happened with The New York Times. As the Times’ top editors put it simply, “We got it wrong.” They did the honorable thing: returned the Peabody Award and withdrew the work’s citation as a finalist for the Pulitzer Prize. It wasn’t even the first time it had happened to them.
Andy Stone’s leaked email seemed suspicious for one reason: it was a surprise to read decipherable English from anything that came out of the Meta-verse.
Consider this actual response from Facebook/Meta, issued after former Facebook engineer and whistleblower Frances Haugen revealed that Myanmar’s military had used Facebook in 2018 to promote ethnic cleansing:
“We’ve invested heavily in people and technology to keep our platform safe, and have made fighting misinformation and providing authoritative information a priority. If any research had identified an exact solution to these complex challenges, the tech industry, governments, and society would have solved them a long time ago. We have a strong track record of using our research—as well as external research and close collaboration with experts and organizations—to inform changes to our apps.”
Such obfuscation by Meta in the face of any controversy is staggering. We came up with a term to make sense of it: techno-gaslighting.
The above quote is an excellent example of techno-gaslighting: ostensibly clear and concise diction that somehow remains inscrutable, purposely worded to convince readers that they may not be savvy or smart enough to comprehend the statement. Purposefully voluble and circuitous. Rather than push back or ask for clarification and risk appearing unintelligent or uninformed, the reader demurs from following up. The conversation ends.
Meta-speak was on full display in July this year at the East-West Center’s International Media Conference 2022 in Hawaii. A roomful of journalists, mostly from Southeast Asia and deeply invested in Meta’s security measures, was waiting to hear Nathaniel Gleicher, the company’s head of security policy. He was to be interviewed by Dilrukshi Handunetti, a senior investigative reporter from Sri Lanka who had received an award for her brave journalism the night before.
After reading out Gleicher’s long and lofty introduction, she began her interview with a very simple, predictable question: What is your role at Meta?
“I coordinate our work across the company to track and counter determined adversaries.
The nature of the world we face today, the march and the rise of illiberal voices, the rise of autocracies around the world. Many of these trends predate the internet. …. But one of the things the Internet does and social media does is it connects people. And by connecting people, it accelerates all of these trends. It accelerates positive trends. Right. There are a lot of really good things that have come out of public debate that has changed from the connectivity of the Internet and social media, whether that is far more diverse public discussions today than ever before. …
And the goal of my team and the mission of my team is that for a lot of content moderation…. But for a certain subset of problems, there is an adversary on the other side who isn’t just making a mistake, they are repeatedly, intentionally, and systematically trying to abuse your systems, trying to cause harm across the Internet, trying to mislead or target our users.
One of the first things you see threat actors do is they abuse that system you created to try to actually compromise the accounts of people you’re trying to defend. And so whenever you’re dealing with determined adversaries, you have to think about this as a two player game. …
So, one of the key things that my team does and that many of you may have seen is we track and counter threat actors. We began with a focus on what we call CIB: coordinated, inauthentic behavior. Sort of a mouthful. …”
So it went, for over nine minutes. The audience strained to follow along and take notes.
“Sorry. That was a long-winded answer,” said Handunetti.
“I do that sometimes,” responded Gleicher, with his warm, disarming smile.
Several times during his 40-minute discussion, Gleicher mentioned that his team compiles an “Adversarial Threat Report.” The purpose of these reports, he explained, is to: (1) share information about threats so people know what’s out there, (2) share information with other defender teams to empower them, and (3) deter the bad guys (italics ours) and impose a cost on them.
The audience was not provided with a cheat sheet defining Meta’s internal lingo: adversaries, adversarial threat reports, defender teams, bad actors. How do they “impose a cost”? What does any of this mean? What constitutes a “bad guy”? Gleicher is the external-facing voice of Facebook’s policy, and he was consistently inscrutable even to journalists.
Studies reveal that corporate jargon, euphemisms, and buzzwords are used within an organization by low-status employees in order to feel they belong.
But when corporate-speak is used by top executives, it is done to deliberately obfuscate, confuse, and deflect. The officious-sounding word salad is used especially to mislead. This techno-gaslighting is commonplace, but it seemed particularly macabre when deployed on a conference room full of hundreds of journalists, many of whom had directly suffered physical or reputational threats at the hands of Meta’s platforms.
When Frances Haugen revealed that Facebook had scrapped an important program needed to keep its platform from being dangerous, Facebook responded:
“The goal of the Meaningful Social Interactions ranking change is in the name: improve people’s experience by prioritizing posts that inspire interactions, particularly conversations, between family and friends—which research shows is better for people’s well-being—and deprioritizing public content. Research also shows that polarization has been growing in the United States for decades, long before platforms like Facebook existed, and that it is decreasing in other countries where Internet and Facebook use has increased. We have our role to play and will continue to make changes consistent with the goal of making people’s experience more meaningful, but blaming Facebook ignores the deeper causes of these issues —and the research.”
Note the capitalization of “Meaningful Social Interactions” (MSI). It assumes foreknowledge, putting the burden of research on those adversely affected by Meta’s algorithms. How its reference here makes sense is anyone’s guess.
Meta has blood on its hands. It has kowtowed to the demands of fascist and authoritarian governments around the world for profit: in India, Sri Lanka, Myanmar, Ethiopia, Turkey, Russia, and elsewhere. It evades responsibility through techno-gaslighting, hiding behind phrases designed to placate and confuse simultaneously. Meta maintains it is merely a platform for user-generated content over which it has little to no control. What it refuses to acknowledge is that it profited to the tune of $118 billion in revenue in 2021 alone from the very content it distances itself from.
Haugen revealed that Facebook has knowingly prioritized profits over the safety of people in the real world. “Facebook has realized that if they change the algorithm to be safer people will spend less time on the site, they’ll click on fewer ads, they’ll make less money,” said Haugen.
To this, Facebook said:
“The growth of people or advertisers using Facebook means nothing if our services aren’t being used in ways that bring people closer together—that’s why we are investing so much in security that it impacts our bottom line. Protecting our community is more important than maximizing our profits. To say we turn a blind eye to feedback ignores these investments, including the 40,000 people working on safety and security at Facebook and our investment of $13 billion since 2016.”
Back in the conference room on the East-West Center’s Hawaii campus, one journalist asked Gleicher a straightforward question: “We appreciate that you want to stop the bad actors. Some would say Facebook is the bad actor. We know from the documents revealed by the whistleblowers that Facebook repeatedly ignored its own team’s findings that the algorithm was exaggerating all the bad behavior and sending people down rabbit holes. So, there’s this basic contradiction that Facebook makes money by being a bad actor. What can you do about that?”
Gleicher laughed, settled in his chair and leaned forward. His response:
“It will surprise no one that this is something I hear quite a bit.”
Laughter in the hall.
“The fundamental truth, though, is that’s just not how the algorithm works. And there’s a bunch of really obvious reasons for that. I could say that the reasons are because of, sort of how problematic it is. But truthfully, it’s because if you want to build a functioning environment and business that people want to be part of, you can’t fill it with outrage. People don’t want that. And in fact, users have said that over and over again and have said that to us. Advertisers don’t want their products advertised next to disinformation, next to divisive narratives and users. If you want to build a platform that people will go to spend ten minutes on and then leave and never come back to, you can fill it with that type of stuff. But if you want to build a space that people want to engage with, want to spend time on, it can’t work like that. Now, the problem and the challenge, of course, is that when people gather together, there will be threat actors to try to exploit the platform.”
And then he went back to explaining what bad actors do, how the internet is essentially a hard place to be, and how hard Facebook’s “Trust and Safety” team works. He emphasized he is at Meta because he believes this is where he can do the most good in tackling “bad actors” and “coordinated actions.”
An Indian journalist from an independent news website asked even more specifically, “We are aware of the coordinated network of actors on Facebook and its other platforms which are spreading communal disinformation and calls for communal violence in India. Facebook’s former India policy head had told employees at Facebook that penalizing violations by politicians from Mr. Modi’s party would hurt the company’s business prospects. Is Meta doing anything differently now?”
Gleicher said, “So this is one of the reasons why we have those adversarial threat reports: when we investigate and take down one of these networks, we share information about it publicly.” He went on about the quarterly threat reports and about coordinated and uncoordinated threat actors. “There’s a reason we publish these reports, because many people share the concerns that you just described.”
Meta released its first human rights report on July 14, 2022, after much pressure from civil society groups. The report was found abysmally lacking, but Meta refused to expand on what read like a preview of a report.
A Kashmiri woman journalist was even more specific in an attempt to pin Gleicher down. She named specific groups and pages on Facebook, with hundreds of thousands of members and followers, that abuse, degrade, troll, and harass women critical of Modi’s BJP. This one was easy for Gleicher to answer: “I don’t know about these two specific pages.”
The refrain of “I never read the article, so I cannot comment...” was a recurring theme at this session. When an audience member (one of the authors of this story, Allison Puccioni) asked Gleicher about the Bloomberg News article addressing the defunding of Facebook’s CrowdTangle tool – a tool popular with journalists that increased the ability to spot false narratives and fostered overall transparency in content – he offered a pat response about Meta’s commitment to identifying threats across all platforms and how Meta would actually increase CrowdTangle resources. The questioner interrupted Gleicher, asking if he had read the Bloomberg article. “I haven’t seen a specific article, so I won’t comment on what they said exactly.” CrowdTangle is still reportedly on track for “deprecation,” an increasingly used technical term meaning “rendered obsolete” or “mothballed.”
Mark Zuckerberg said in his Congressional testimony two years ago that Facebook is “an idealistic and optimistic company” that helps “people everywhere … stay connected to the people they love, make their voices heard, and build communities and businesses.” Zuckerberg’s public statements are full of industry buzzwords.
The Wire reported a very specific story, providing emails and screenshots to corroborate its reporting. The core of The Wire’s story was that Meta takes down posts on Facebook and Instagram if the posts make India’s ruling party, the BJP, uncomfortable. This part has been demonstrated several times, including in the revelations made by whistleblowers Frances Haugen and Sophie Zhang.
Meta’s denial and The Wire’s retraction have centered on the authenticity of the screenshots and emails. The burning question of whether Meta takes down posts to please autocrats has been successfully suppressed, while the homes of three top editors of The Wire are being searched by the Delhi Police.
“These stories are fabrications, the cross-check program is built to protect against over-enforcement and has nothing to do with reporting posts, the accusations are outlandish,” said Meta. Andy Stone retweeted everyone who agreed with Meta’s chief information security officer, even those far-right voices that are clearly aligned with the fascist ideology of India’s ruling party and benefit directly from silencing anti-government voices while promoting Islamophobia and casteism in India and the United States.
“Hate speech against marginalized groups, including Muslims, is on the rise in India and globally,” Stone said. “So, we are improving enforcement and are committed to updating our policies as hate speech evolves online.”
Allison Puccioni is a Center for International Security and Cooperation (CISAC)-affiliated imagery analyst and founder of the imagery consultancy Armillary Services, LLC. She has been an imagery analyst for 30 years, working within the military, tech, media, and academic communities.
Sarita Pandey has been a media professional for 20 years, specializing in digital media strategy and communications. She is also an artist and volunteers for human rights advocacy groups. She lives in Washington, D.C.