Reprinted here from the WSJ.
Yes, there will be many alternatives to mass social media that make far more sense to users. The best regulation is competition, so oversight boards should simply ensure transparency and open competition among platforms, because every platform will try to build moats around its user data networks. Web 2.0 social media dominance ends when users get paid for the data contributions that enhance the value of the network. In a word: blockchain.
There’s No Quick Fix for Social Media
January 20, 2023
By Suzanne Nossel
Social media is in crisis. Elon Musk’s Twitter is still afloat, but the engine is sputtering as users and advertisers consider jumping ship. The reputationally challenged Meta, owner of Facebook and Instagram, is cutting costs to manage a stagnating core business and to fund a bet-the-farm investment in an unproven metaverse. Chinese-owned TikTok faces intensifying scrutiny on national security grounds.
Many view this moment of reckoning over social media with grim satisfaction. Advocates, politicians and activists have railed for years against the dark sides of online platforms—hate speech, vicious trolling, disinformation, bias against certain views and the incubation of extremist ideas. Social media has been blamed for everything from the demise of local news to the rise of autocracy worldwide.
But the challenge of reining in what’s bad about social media has everything to do with what’s good about it. The platforms are an undeniable boon for free expression, public discourse, information sharing and human connection. Nearly three billion people use Meta’s platforms daily. The overwhelming majority log in to look at pictures and reels, to discover a news item that is generating buzz or to stay connected to more friends than they could possibly talk to regularly in real life. Human rights activists, journalists, dissidents and engaged citizens have found social media indispensable for documenting and exposing world-shaking events, mobilizing movements and holding governments accountable.
Social media reflects and intensifies human nature, for good and for ill. If you are intrigued by football, knitting or jazz, it feeds you streams of video, images and text that encourage those passions. If you are fascinated by authoritarianism or unproven medical procedures, the very same algorithms—at least when left to their own devices—steer that material your way. If people you know are rallying around a cause, you will be sent messages meant to bring you along, whether the aim is championing civil rights or overthrowing the government.
Studies show that incendiary content travels faster and farther online than more benign material and that we engage longer and harder with it. The bare algorithms thus favor vitriol and conspiracy theories over family photos and puppy videos. For the social-media companies, the clear financial incentive is to keep the ugly stuff alive.
The great aim of reformers and regulators has been to figure out a way to separate what is enriching about social media from what is dangerous or destructive. Despite years of bill-drafting by legislatures, code-writing in Silicon Valley and hand-wringing at tech conferences, no one has figured out quite how to do it.
Part of the problem is that many people come to this debate brandishing simple solutions: “free speech absolutism,” “zero tolerance,” “kill the algorithms” or “repeal Section 230” (Section 230 of the 1996 Communications Decency Act, which shields platforms from liability for users’ speech). None of these slogans offers a true or comprehensive fix, however. Mr. Musk thought he could protect free speech by dismantling content guardrails, but the resulting spike in hate and disinformation on Twitter alienated many users and advertisers. In response he has improvised frantically, resurrecting old rules and making up new ones that seem to satisfy no one.
The notion that there is a single solution to all or most of what ails social media is a fantasy. The only viable approach, though painstaking and unsatisfying, is to mitigate the harms of social media through trial and error, involving tech companies and Congress but also state governments, courts, researchers, civil-society organizations and even multilateral bodies. Experimentation is the only tenable strategy.
Well before Mr. Musk began his haphazard effort to reform Twitter, other major platforms set out to limit the harms associated with their services. In 2021, I was approached to join Meta’s Oversight Board, a now 22-person expert body charged with reviewing content decisions for Facebook and Instagram. As the CEO of PEN America, I had helped to document how unchecked disinformation and online harassment curtail free speech on social media. I knew that more had to be done to curb what some refer to as “lawful but awful” online content.
The Oversight Board is composed of scholars, journalists, activists and civic leaders and insulated from Meta’s corporate management by various financial and administrative safeguards. Critics worried that the board was just a publicity stunt, meant to shield Meta from well-warranted criticism and regulation. I was skeptical too. But I didn’t think profit-making companies could be trusted to oversee the sprawling digital town square and also knew that calling on governments around the world to micromanage online content would be treated by some as an invitation to silence dissent. I concluded that, while no panacea, the Oversight Board was worth a try.
The board remains an experiment. Its most valuable contribution has been to create a first-of-its-kind body of content-moderation jurisprudence. The board has opined on a range of hard questions. It found, for instance, that certain Facebook posts about Ethiopia’s civil war could be prohibited as incitements to violence but also decided that a Ukrainian’s post likening Russian soldiers to Nazis didn’t constitute hate speech. In each of the 37 decisions released so far, the board has issued a thorough opinion, citing international human-rights law and Meta’s own professed values. Those opinions are themselves an important step toward a comprehensible, reviewable scheme for moderating content with the public interest as a guide.
Ultimately, the value of Meta’s Oversight Board and similar mechanisms will hinge on getting social media platforms to expose their innermost workings to scrutiny and implement recommendations for lasting change. Meta deserves credit for allowing leery independent experts to nose into its business and offer critiques. Still, the board has sometimes had trouble getting its questions answered. In a step backward, the company has quietly gutted CrowdTangle, an analytics tool that the board and outside researchers have used to examine trends on the platform. The fact that Meta can close valuable windows into its operations at will underscores the need for regulation to guarantee transparency and public access to data.
Such openness is at the heart of two major pieces of legislation introduced in Congress last year. The Platform Accountability and Transparency Act would force platforms to disclose data to researchers, while the Digital Services Oversight and Safety Act would create a bureau at the Federal Trade Commission with broad authority and resources. But progress on the legislation has been slow, and President Biden has rightly called on Congress to get moving and begin to hold social media companies accountable.
The bills aim to address two prerequisites for promoting regulatory trial and error: ensuring access to data and building oversight capability. Only by prying open the black box of how social media operates—the workings of the algorithms and the paths and pace by which problematic content travels—can regulators do their job. As with many forms of business regulation—including for financial markets and pharmaceuticals—regulators can be effective only to the extent that they’re able to see what’s happening and get their questions answered.
Regulatory bodies also need to build muscle memory in dealing with companies. Though we have moved beyond the spectacle of the 2018 Senate hearing in which lawmakers asked Mark Zuckerberg how he made money running a free service, only trained, specialized and dedicated regulators—most of whom should be recruited from Silicon Valley—will be able to keep pace with the world’s most sophisticated engineers and coders. By starting with these structural approaches, federal lawmakers can delay entering the fraught terrain of content-specific measures that will raise First Amendment concerns.
The inherent difficulty of content-specific regulations is already apparent in laws emerging from the states. Those who believe conservative voices are unfairly suppressed by tech companies notched a victory when a split panel of the U.S. Fifth Circuit Court of Appeals upheld a Texas law barring companies from excising posts based on political ideology. The Eleventh Circuit went the opposite way, upholding a challenge to a similar law championed by Florida Gov. Ron DeSantis. The platforms have vociferously opposed both measures as infringing on their own expressive rights and leeway to run their businesses.
This term the Supreme Court is expected to adjudicate the conflicting rulings on the Texas and Florida cases. Both laws have been temporarily stayed; their effects are untested in the marketplace. Depending on what the Supreme Court decides, we may learn whether companies are willing to operate with sharply limited discretion to remove posts flagged as disinformation, hate speech or trolling. The prospect looms of a complex geographic patchwork where posts could disappear as users cross state lines or where popular platforms are off-limits in certain states because they can’t or won’t comply with local rules. If Elon Musk’s efforts at Twitter are any indication, recasting content moderation with the aim of eliminating anticonservative bias is a messy proposition.
Regulatory experiments affecting content moderation should be adopted modestly, with the aim of evaluating and tweaking them before they’re reauthorized. There have been numerous calls, for example, to repeal Section 230 and make social-media companies legally liable for their users’ posts. Doing so would force the platforms to conduct legal review of posts before they go live, a change that would eliminate the immediacy and free-flowing nature of social media as we know it.
But that doesn’t mean Section 230 should be sacrosanct. Proposals to pare back or augment Section 230 to encourage platforms to moderate harmful content deserve careful consideration. Such changes need to be assessed for their impact not just on online abuses but also on free speech. The Supreme Court will hear two cases this term on the liability of platforms for amplifying terrorist content, and the decisions could open the door to overhauling Section 230.
While the U.S. lags behind, legislative experimentation with social media regulation is proceeding apace elsewhere. A 2018 German law imposes multimillion-euro fines on large platforms that fail to swiftly remove “clearly illegal” content, including hate speech. Advocates for human rights and free expression have roundly criticized the law for its chilling effect on speech.
Far more ambitious regulations will soon make themselves felt across Europe. The EU’s Digital Services Act codifies “conditional liability” for platforms hosting content that they know is unlawful. The law, which targets Silicon Valley behemoths like Meta and Google, goes into full effect in spring 2024. It will force large providers to make data accessible for researchers and to disclose how their algorithms work, how content moderation decisions are made and how advertisers target users. The law relies heavily on “co-regulation,” with authorities overseeing new mechanisms involving academics, advocacy groups, smaller tech firms and other digital stakeholders. The setup acknowledges that regulators lack the necessary technical savvy and expertise to act alone.
A complementary EU law, the Digital Markets Act, goes into effect this May and targets so-called “gatekeepers”—the largest online services. The measure will impose new privacy restrictions, antimonopoly and product-bundling constraints, and obligations that make it easier for users to switch apps and services, bringing their accounts and data with them.
Taken together, these measures will fundamentally reshape how major platforms operate in Europe, with powerful ripple effects globally. Critics charge that the Digital Services Act will stifle free expression, forcing platforms to act as an arm of government censorship. Detractors maintain that the Digital Markets Act will slow down product development and hamper competition.
The new EU laws illustrate the Goldilocks dilemma of social media regulation—that virtually any proposed measure can be criticized for either muzzling too much valuable speech or not curbing enough malign content; for giving social-media companies carte blanche or stifling innovation; for regulating too much or not enough. Getting the balance right will require time, detailed and credible study of the actual effects of the measures, and a readiness to adjust, amend and reconsider.
Whatever the regulatory framework, content-moderation strategies need to be alive to fast-evolving threats and to avoid refighting the last war. In 2020, efforts to combat election disinformation focused heavily on the foreign interference that had plagued the political process four years earlier. Mostly overlooked was the power of domestic digital movements to mobilize insurrection and seek to overturn election results.
There will be no silver bullet to render the digital arena safe, hospitable and edifying. We must commit instead to fine-tuning systems as they evolve. The end result will look like any complex regulatory regime—some self-regulation, some co-regulation and some top-down regulation, with variations across jurisdictions. This approach might seem unsatisfying in the face of urgent threats to democracy, public health and our collective sanity, but it would finally put us on a path to living responsibly with social media, pitfalls and all.
Ms. Nossel is the CEO of PEN America and the author of “Dare to Speak: Defending Free Speech for All.”