How To Fix Social Media (Facebook!)

The Wall Street Journal has published a series of articles it calls The Facebook Files. One recent article asked a dozen “experts” for their ideas on how to fix social media. We at tuka have been focused on this challenge from the beginning, several years ago, and these commentators reveal a converging consensus: the problems with social media are scale and the emotional triggering driven by the speed of information flow. As Winston Churchill is said to have quipped, “A lie gets halfway around the world before the truth has a chance to get its pants on.” Social media connections have only made this pathology worse.

Our take has always been summed up in one line: a global gossip network makes no sense. Large, centralized networks of three billion users (i.e., Facebook) make no sense. Social networking is person-to-person sharing based on mutual trust, so smaller networks focused on shared interests do make sense. This is what tuka is. Technology can then be harnessed to coordinate these networks in ways that reduce the siloing effect, so we all end up sharing more information through our trusted networks. What is needed are policies that break down the network effects of scale and open the social media space to thousands of competitors, each focused on different community interests. Interactions across networks can then help bring us together willingly.

The other problem we at tuka have cited is the centralization and control of information networks and the immense value they create. Data is gold, and we cannot have a handful of private companies own and control the personal data users create. This is akin to giving your labor away for free. The required change is to decentralize the network using blockchain technologies so that the value created can be measured and distributed to the users who create it.

Several of the experts have accurately recognized the problem and what to do about it. The better ideas are excerpted below.

….

Clay Shirky: Slow It Down and Make It Smaller

We know how to fix social media. We’ve always known. We were complaining about it when it got worse, so we remember what it was like when it was better. We need to make it smaller and slow it down.

The spread of social media vastly increased how many people any of us can reach with a single photo, video or bit of writing. When we look at who people connect to on social networks—mostly friends, unsurprisingly—the scale of immediate connections seems manageable. But the imperative to turn individual offerings, mostly shared with friends, into viral sensations creates an incentive for social media platforms, and especially Facebook, to amplify bits of content well beyond any friend group.

We’re all potential celebrities now, where anything we say could spread well beyond the group we said it to, an effect that the social media scholar Danah Boyd has called “context collapse.” And once we’re all potential celebrities, some people will respond to the incentives to reach that audience—hot takes, dangerous stunts, fake news, miracle cures, the whole panoply of lies and grift we now behold.

The inhuman scale at which the internet assembles audiences for casually produced material is made worse by the rising speed of viral content. As the behavioral economist Daniel Kahneman observed, human thinking comes in two flavors: fast and slow. Emotions are fast, and deliberation is slow.

The obvious corollary is that the faster content moves, the likelier it is to be borne on the winds of emotional reaction, with any deliberation coming after it has spread, if at all. The spread of smartphones and push notifications has created a whole ecosystem of URGENT! messages, things we are exhorted to amplify by passing them along: Like if you agree, share if you very much agree.

Social media is better, for individuals and for the social fabric, if the groups it assembles are smaller, and if the speed at which content moves through it is slower. Some of this is already happening, as people vote with their feet (well, fingers) to join various group chats, whether via SMS, Slack or Discord.

We know that scale and speed make people crazy. We’ve known this since before the web was invented. Users are increasingly aware that our largest social media platforms are harmful and that their addictive nature makes some sort of coordinated action imperative.

It’s just not clear where that action might come from. Self-regulation is ineffective, and the political arena is too polarized to agree on any such restrictions. There are only two remaining scenarios: regulation from the executive branch or a continuation of the status quo, with only minor changes. Neither of those responses is ideal, but given that even a global pandemic does not seem to have galvanized bipartisanship, it’s hard to see any other set of practical options.

Mr. Shirky is Vice Provost for Educational Technologies at New York University and the author of “Cognitive Surplus: Creativity and Generosity in a Connected Age.”

….

Jaron Lanier: Topple the New Gods of Data

When we speak of social media, what are we talking about? Is it the broad idea of people connecting over the internet, keeping track of old friends, or sharing funny videos? Or is it the business model that has come to dominate those activities, as implemented by Facebook and a few other companies?

Tech companies have dominated the definition because of the phenomenon known as network effects: The more connected a system is, the more likely it is to produce winner-take-all outcomes. Facebook took all.
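One common formalization of network effects, not named in the article but consistent with its point, is Metcalfe's law: the number of possible pairwise connections in a network grows roughly with the square of its users. A minimal sketch of the arithmetic (the 3-billion figure comes from the tuka commentary above; the thousand-network split is a hypothetical for illustration):

```python
def possible_connections(users: int) -> int:
    """Number of distinct user-to-user links in a network of `users` people."""
    return users * (users - 1) // 2

# One consolidated 3-billion-user network...
one_big = possible_connections(3_000_000_000)

# ...versus the same users split into a thousand 3-million-user networks.
many_small = 1000 * possible_connections(3_000_000)

# The consolidated network offers roughly 1000x the possible links,
# which is the quadratic advantage behind "Facebook took all."
print(one_big // many_small)  # → 1000
```

Quadratic growth in possible links is why one consolidated network tends to dominate; splitting the same users into many interest-based networks gives up most of those links, which is precisely the trade-off the smaller-networks argument accepts.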

The domination is so great that we forget alternatives are possible. There is a wonderful new generation of researchers and critics concerned with problems like damage to teen girls and incitement of racist violence, and their work is indispensable. If all we had to talk about was the more general idea of possible forms of social media, then their work would be what’s needed to improve things.

Unfortunately, what we need to talk about is the dominant business model. This model spews out horrible incentives to make people meaner and crazier. Incentives run the world more than laws, regulations, critiques, or the ideas of researchers.

The current incentives are to “engage” people as much as possible, which means triggering the “lizard brain” and fight-or-flight responses. People have always been a little paranoid, xenophobic, racist, neurotically vain, irritable, selfish, and afraid. And yet putting people under the influence of engagement algorithms has managed to bring out even more of the worst of us.

Can we survive being under the ambient influence of behavior modification algorithms that make us stupider?

The business model that makes life worse is based on a particular ideology. This ideology holds that humans as we know ourselves are being replaced by something better that will be brought about by tech companies. Either we’ll become part of a giant collective organism run through algorithms, or artificial intelligence will soon be able to do most jobs, including running society, better than people. The overwhelming imperative is to create something like a universally Facebook-connected society or a giant artificial intelligence.

These “new gods” run on data, so as much data as possible must be gathered, and getting in the middle of human interactions is how you gather that data. If the process makes people crazy, that’s an acceptable price to pay.

The business model, not the algorithms, is also why people have to fear being put out of work by technology. If people were paid fairly for their contributions to algorithms and robots, then more tech would mean more jobs, but the ideology demands that people accept a creeping feeling of human obsolescence. After all, if data coming from people were valued, then it might seem like the big computation gods, like AI, were really just collaborations of people instead of new life forms. That would be a devastating blow to the tech ideology.

Facebook now proposes to change its name and to primarily pursue the “metaverse” instead of “social media,” but the only changes that fundamentally matter are in the business model, ideology, and resulting incentives.

Mr. Lanier is a computer scientist and the author, most recently, of “Ten Arguments for Deleting Your Social Media Accounts Right Now.”

….

Clive Thompson: Online Communities That Actually Work

Are there any digital communities that aren’t plagued by trolling, posturing and terrible behavior? Sure there are. In fact, there are quite a lot of online hubs where strangers talk all day long in a very civil fashion. But these aren’t the sites that we typically think of as social media, like Twitter, Facebook or YouTube. No, I’m thinking of the countless discussion boards and Discord servers devoted to hobbies or passions like fly fishing, cuisine, art, long-distance cycling or niche videogames.

I visit places like this pretty often in reporting on how people use digital tools, and whenever I check one out, I’m often struck by how un-toxic they are. These days, we wonder a lot about why social networks go bad. But it’s equally illuminating to ask about the ones that work well. These communities share one characteristic: They’re small. Generally they have only a few hundred members, or maybe a couple thousand if they’re really popular.

And smallness makes all the difference. First, these groups have a sense of cohesion. The members have joined specifically to talk to people with whom they share an enthusiasm. That creates a type of social glue, a context and a mutual respect that can’t exist on a highly public site like Twitter, where anyone can crash any public conversation.

Even more important, small groups typically have people who work to keep interactions civil. Sometimes this will be the forum organizer or an active, long-term participant. They’ll greet newcomers to make them feel welcome, draw out quiet people and defuse conflict when they see it emerge. Sometimes they’ll ban serious trolls. But what’s crucial is that these key members model good behavior, illustrating by example the community’s best standards. The internet thinkers Heather Gold, Kevin Marks and Deb Schultz put a name to this: “tummeling,” after the Yiddish “tummeler,” who keeps a party going.

None of these positive elements can exist in a massive, public social network, where millions of people can barge into each other’s spaces—as they do on Twitter, Facebook, and YouTube. The single biggest problem facing social media is that our dominant networks are obsessed with scale. They want to utterly dominate their fields, so they can kill or absorb rivals and have the ad dollars to themselves. But scale breaks social relations.

Is there any way to mitigate this problem? I’ve never heard of any simple solution. Strong antitrust enforcement for the big networks would be useful, to encourage a greater array of rivals that truly compete with one another. But this likely wouldn’t fully solve the problem of scale, since many users crave scale too. Lusting after massive, global audiences, they will flock to whichever site offers the hugest. Many of the proposed remedies for social media, like increased moderation or modifications to legal liability, might help, but all leave intact the biggest problem of all: Bigness itself.

Mr. Thompson is a journalist who covers science and technology. He is the author, most recently, of “Coders: The Making of a New Tribe and the Remaking of the World.”