BROKEN CODE: Inside Facebook and the Fight to Expose Its Harmful Secrets
Jeff Horwitz
New York: Doubleday, July 2023
Rating: 4.5 High
ISBN-13 978-0-385-54918-9 | ISBN-10 0-385-54918-0 | 330pp. | HC | $32.50
Few businesses have influenced worldwide politics as thoroughly — for good and ill — as the social networking service Facebook. Originally created in 2004 by Mark Zuckerberg and four of his Harvard University classmates as a way of rating the appearance of coeds1 at the University, Facebook has now grown to claim a membership of more than 3 billion users around the world and to operate in 112 languages.
Zuckerberg dropped out of Harvard in his sophomore year to turn the service into a business. He was outstandingly successful. Moving to California, he set up shop in Palo Alto with seed money from Peter Thiel before relocating to the company's current headquarters in Menlo Park. His stated goal was idealistic: he wanted to connect the people of the world. The subsequent history of the corporation, however — as documented by multiple sources — belies that goal, revealing a primary focus on growing the number of users and the time they stayed connected. It turned out, to no one's surprise, that the best way of reaching those goals was to promote controversy and conflict.
Unless Zuckerberg moderated his quest for growth at all costs, his strategy had an unavoidable downside: the promotion of controversy and conflict could and did spill over into the real world, and the lack of attention to limiting how much the system amplified undesirable messages opened the door to bad actors of all stripes. The basics of the story are now well known: how young Macedonian hackers, by aggregating inflammatory news stories and reposting them on Facebook, grew their audience essentially without limit, and how rabid partisans of every persuasion, along with political operatives like Russia's IRA (Internet Research Agency), exploited Facebook's unguarded system to spread propaganda to millions of users and stoke divisions among them — divisions that often exploded into physical violence.
The scope of the problem, however, is less well known than it should be. This book tells that story in exhaustive detail, presenting some of the most egregious cases: the Burmese junta's genocide against the country's Muslim Rohingya people, the suppression of India's Muslims under the ruling BJP, and, of course, the IRA's and others' manipulation of the 2016 US presidential campaign, which contributed to the election of Donald Trump.
It was not that Zuckerberg and his top managers were unaware of these problems; the company's research teams repeatedly showed them data documenting the damage and recommended countermeasures. The response, however, was always that putting those countermeasures in place would slow the growth of the user base, and that was verboten. Facebook did have some rules governing prohibited content, of course. But moderating content with AI was a hit-or-miss proposition, and the company never hired enough human moderators to handle the billions of comments. Even when systems existed to handle parts of the problem, they were often dismantled when the people responsible for them left the company or were reassigned. The example of Arturo Bejar is illustrative.
Early in his tenure, Sheryl Sandberg, Facebook's chief operating officer, asked Bejar to get to the bottom of skyrocketing user reports of nudity. His team sampled the reports and saw they were overwhelmingly false. In reality, users were encountering unflattering photos of themselves, posted by friends, and attempting to get them taken down by reporting them as porn. Simply telling users to cut it out didn't help. What did was giving users the option to report not liking a photo of themselves, describing how it made them feel, and then prompting them to share that sentiment privately with their friend. Nudity reports dropped by roughly half, Bejar recalled. A few such successes led Bejar to create a team called Protect and Care. A testing ground for efforts to head off bad online experiences, promote civil interactions, and help users at risk of suicide, the work felt both groundbreaking and important. The only reason Bejar left the company in 2015 was that he was in the middle of a divorce and wanted to spend more time with his kids. – Page 2
During his tenure, Bejar had absorbed a company motto meant to guide interactions between staff — "assume good intent." So when his daughter Joanna told him she had gotten an offensive comment on her Instagram account, he dismissed it as a fluke. She had reported the comment, but Instagram told her it did not violate standards. Offensive comments and pictures kept coming, however, so Bejar contacted former colleagues. He was invited back, and, after a time, called up his old team's system on the company's internal network, called Workplace.
The carefully tested prompts that he and his colleagues had composed—asking users to share their concerns, understand Facebook's rules, and constructively work out disagreements—were gone. Instead, Facebook now demanded that people allege a precise violation of the platform's rules by clicking through a gauntlet of pop-ups. Users determined enough to complete the process arrived at a final screen requiring them to reaffirm their desire to submit a report. If they simply clicked a button saying "done," rendered as the default in bright Facebook blue, the system archived their complaint without submitting it for moderator review. What Bejar didn't know then was that, six months prior, a team had redesigned Facebook's reporting system with the specific goal of reducing the number of completed user reports so that Facebook wouldn't have to bother with them, freeing up resources that could otherwise be invested in training its artificial intelligence-driven content moderation systems. In a memo about efforts to keep the costs of hate speech moderation under control, a manager acknowledged that Facebook might have overdone its efforts to stanch the flow of user reports: "we may have moved the needle too far," he wrote, suggesting that perhaps the company might not want to suppress them so thoroughly. – Pages 4-5
It's arguable that this was a reasonable cost-cutting measure. But it's also typical of Facebook's response to undesirable user behavior: downplay its importance and do the absolute minimum to contain it. That strategy persisted as bad behavior escalated and internal teams grew more and more concerned. Notable examples were the IRA's trolls setting up dummy accounts and buying ads to influence the 2016 election, and the viral proliferation of "Stop the Steal" groups that erupted after the 2020 election. Company researchers were well aware of these problems, but got no support in addressing them — until they became public knowledge.
The situation was even worse overseas. Facebook's operations in places like Burma, Ethiopia, India, and Saudi Arabia had few or no staffers who knew the local languages, making the detection of offensive content all but impossible. The problem was compounded in some cases because the heads of those regional operations favored one side of a local conflict, as was true in India for a time.
But, as this account vividly demonstrates, Facebook had a real knack for letting problems bloom out of control even when it had been warned of them, in the US and abroad alike. It ignored misinformation like the "Stop the Steal" delusion and the anti-vaccine propaganda film Plandemic, and even reports of governments using Facebook to identify dissidents.2 It exempted politicians and other prominent figures from rules that applied to everyone else. Then, when such problems blew up in its face, management would scramble to do damage control.3
The closing chapters of the book were the most interesting to me. They concern the whistleblower Frances Haugen's efforts to collect evidence of Facebook's irresponsibility, and her interactions with the author. Before that point, I must admit, the story was disappointing in its repetitive recounting of how Facebook management again and again dismissed warnings about problems they should have known would come back to bite them. Not until near the end of the book are we told about research showing that the countermeasures Facebook's Integrity teams requested slowed growth only briefly, then boosted it by creating a saner and more civil environment. Don't get me wrong; this is a good read throughout. It's not the author's fault that Facebook so persistently shot itself in the foot. However, I think he could have condensed the tale somewhat. For that defect, and for the lack of an index, I mark the book down one notch.
Which brings us to the question of where Facebook, and the company now named Meta Platforms, will go from here. Judging from what I've read, from the Zuckerberg testimony I've seen, and from my own experience on Facebook, it won't be toward a system that respects factual accuracy and user privacy. Horwitz writes that in 2022, after his stories ran in The Wall Street Journal, Zuckerberg refused to apologize and, much like Trump, blamed the media for getting the facts wrong. He asked for a surge in hiring, and when his staff came back with a recommendation to hire 40,000 new employees, he penciled in a requirement for 8,000 more. On page 310, Horwitz notes that "Few of the new staffers would be slated to go into integrity work." Evidence suggests that Zuckerberg will put existing products on the back burner and go all in on the metaverse.5 In his final comments, Horwitz observes:
It's too early to write off the metaverse, of course. A decade could prove Zuckerberg a visionary. But enthusiasm for that vision has waned, even inside Meta. Employee surveys consistently find low confidence in the company's leadership. In one memo that leaked, Metaverse VP Vishal Shah scolded employees for failing to spend time in the metaverse themselves. "The simple truth is, if we don't love it, how can we expect our users to love it?" the executive asked. A later Workplace post seen by the Journal invited employees to come learn about the company's HR benefits in the metaverse, an event that would count toward "weekly required headset time." – Pages 313-314
Presumably, "required headset time" will continue until enjoyment improves. I don't think Facebook management has figured it out even now.