AI and Music

This is a recent post from Ted Gioia on the invasion of AI-generated music. My comment follows below.

The Number of Songs in the World Doubled Yesterday.

I guess I’m disappointed at how this AI revolution is playing out. We were promised a new AI-constructed Bach with better music. But all we’re getting is more music.

The dam broke on the levee, and the AI songs are pouring out in torrents, threatening to wash away everything else.

Now, I can’t claim to have listened to all 100 million of these songs. But I’ve heard enough AI music to realize how lousy it is.

My Comment:

By the not-so-iron law of supply and demand, when supply doubles the price falls by roughly half. But price has a zero bound, and the price of creative digital content is approaching it fast. This means we have to stop focusing on the transaction value of creative content. Instead, the value is realized in monetizing audience reach – or peer networks, in tech speak.
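As a rough illustration of that arithmetic (assuming unit-elastic demand, where total spending P × Q stays constant – an assumption made for this sketch, not a claim from the post):

```python
# Illustration: under unit-elastic demand, total spending P * Q is constant,
# so doubling the quantity supplied halves the market-clearing price.
# Unit elasticity and the numbers below are assumptions for illustration only.

def clearing_price(total_spend: float, quantity: float) -> float:
    """Price implied by a constant-spend (unit-elastic) demand curve."""
    return total_spend / quantity

total_spend = 100_000.0       # total audience spending, held fixed
q_before = 1_000_000          # songs before the AI flood
q_after = 2 * q_before        # supply doubles overnight

p_before = clearing_price(total_spend, q_before)
p_after = clearing_price(total_spend, q_after)

print(p_before)  # 0.1
print(p_after)   # 0.05 -- half the price, sliding toward the zero bound
```

Under any demand curve the direction is the same: more supply, lower per-unit price, with zero as the floor.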

This post is mostly about the quality of AI-generated art. No one is going to listen to a million new AI songs to find that subjective artistic quality – so how is the AI industry going to find its audience? Through AI-generated recommendation engines. But just as AI can’t create music, it also can’t judge its quality. This will always be the “human” advantage in the digital content world.

At our digital content platform start-up, tukaGlobal, we have long recognized this reality. All this machine learning and AI cannot deliver to us the music we like. Humans, through direct human interaction, do that. That’s how we all got introduced to new music in high school and college. In the digital world these connections can happen through social media dynamics. P2P networks represent the leveraging power of the human. Web3.0 will eventually deliver that world to us in a decentralized manner with blockchain. Harnessing social media dynamics will solve the search, discovery, and promotional functions of the digitized art world. Think of it this way – the most powerful algorithm for discriminating and promoting creativity is the human network. These networks are also the solution to monetization.


Get Ready to Pay More for Music

You can’t give people all the music they want for $9.99 per month.

That doesn’t generate enough cash to keep all the stakeholders happy. Somebody must get squeezed.

This is reposted from Ted Gioia’s Substack channel. He confirms the future trend we have been predicting here, especially about the streaming content business model. The bottom line is that streaming content is more expensive than buying content if artists are able to cut out the middleman. Digital technology with blockchain can do that.

Get Ready to Pay More for Music

I keep seeing cheery news stories about growth in the music business. But the numbers are misleading.

Highly misleading.

The growth is happening in all the worst places. Money is shifting from creative artists to technocrats, lawyers, and pencil pushers of all sorts.

If you play a guitar your career prospects suck. But if you’re a fund manager who owns song publishing rights, you can earn huge paychecks.

If you write songs, you may struggle to pay the rent. But if you’re a lawyer who specializes in song copyright lawsuits, you’re living in high style.

If you make music, you keep asking why it’s so tough. But if you’re a technocrat who streams music, you’re sitting pretty.

Is the music business really growing, if all the excess cash flow goes to lawyers, fund managers, Silicon Valley overlords, and high-powered techies?

That’s like saying you’re healthy because the parasite sucking your lifeblood gets larger every day.

If you actually make the music, the industry isn’t healthy. Even if you’re just a music lover, things are looking ugly.

And it’s about to get a whole lot worse.

I’ve been warning for five years that the new economic model in the music business is broken. You can’t give people all the music they want for $9.99 per month.

That doesn’t generate enough cash to keep all the stakeholders happy. Somebody must get squeezed.

If you don’t grasp how broken the streaming model is—not just for music but in Hollywood and other creative fields too—you won’t even understand the daily media news.

The broken economic model explains why writers in Hollywood are striking right now.
The broken economic model explains why writers are so fearful of AI—even demanding that the new contract prohibits AI-generated screenplays—and why musicians should be just as worried.
The broken economic model explains why Netflix has raised subscription prices sharply—even while reducing the range of film and TV options on its platform.
The broken economic model explains why so many hot new artists are coming out of TikTok and social media, not traditional record labels.
The broken economic model explains why songwriters are selling their catalogs to investment groups.
The broken economic model explains why old music is killing new music in the marketplace.
I’ve been saying this for so long that I sound like a broken record (remember those?).

And how have things worked out for Spotify during those five years?

Check out all the red ink on the chart below. The company has never shown a profit, and results have worsened since listing on the stock exchange.

And now you’re going to pay a price for this. . . .

Physical vs. Digital Gold

I reprint the free portion of this Substack post by Ted Gioia. Emphasis in bold and my comments in red. The reliance on physical products reflects the market economics of controlling supply, and thus price and profit margins. This is what the branding industry has flourished on. Digital content (music, blogs, books, videos) is given away virtually free in order to build an audience (peer network) that one can then sell higher-margin goods and services to.

This makes perfect sense, given the economics of the digital world and how it augments the physical world. This is what Google and Facebook and LinkedIn and Apple and Amazon all do. The question then is how does the individual creator build, own, control, service, and monetize their peer networks? It’s kind of like a very valuable Rolodex file.

tuka is designed exactly for this need for users to create and monetize digital data value. It’s all about the data networks and the value they represent. The online world is slowly moving in tuka’s direction, but decentralizing value creation will still require a blockchain platform.



Half the people buying vinyl albums don’t own record players. They treat their albums like holy relics—too precious to use and merely for display among other true believers. [Yes, but that’s collecting, not listening.]

Readers were shocked when I recently reported that statistic. I was a little stunned myself. But those are the facts.

Of course, there’s a lot about the vinyl revival that defies logic. What other business relies on a 60-year-old storage technology? That would be like running my writing career with a teletype unit and mimeograph machine.

And it’s not just vinyl. Cassette tapes—a cursed format that always unraveled at the worst possible time—are hot again. Even 8-track tapes, a longtime target of ridicule and abuse, are selling for thousands of dollars.

Why are people buying this stuff?

A new research report from Andrew Thompson at Components, released earlier today, helps us understand the bigger picture. Thompson analyzed 47,703,022 Bandcamp sales—involving almost five million items. And what he learned was startling.

Success in the music business is all about selling physical objects.

This is an unexpected situation—and runs counter to everything we’ve been told.

The Internet supposedly killed physical music media more than two decades ago. After iTunes was launched in 2001, there was no looking back. At first the music industry pivoted to digital downloads, and then everybody in the business jumped on the streaming bandwagon.

But it’s now 2023, and streaming platforms still aren’t profitable. [They never will be unless they can find a way to monetize those user networks.] However, Bandcamp is—and now we know why.

It’s all about tangible items.

Consider this chart—which looks at the correlation between revenues on Bandcamp and an artist’s reliance on physical merchandise.

Source: Components

Vinyl helps drive this. But it is only part of a larger story. Artists can sell everything from clothing to compact discs on Bandcamp. And, of course, they can sell digital tracks too.

But the numbers make clear that physical merchandise is the smart business model.

According to Andrew Thompson:

Why is Bandcamp profitable and Spotify not? The answer we arrived at was that Bandcamp provides a simple platform for complex transactions, while Spotify is a technically complicated platform for facilitating a single transaction in the form of the one-size-fits-all subscription.

Fix Social Media?

Reprinted here from the WSJ.

Yes, there will be many alternatives to mass social media that make far more sense to users. The best regulation is competition, so oversight boards can simply ensure transparency and open competition among platforms – because every platform will attempt to build moats to protect its user data networks. The end of Web2.0 social media dominance comes when users get paid for data contributions that enhance the value of the network. In a word: blockchain.

There’s No Quick Fix for Social Media

January 20, 2023
By Suzanne Nossel

Social media is in crisis. Elon Musk’s Twitter is still afloat, but the engine is sputtering as users and advertisers consider jumping ship. The reputationally challenged Meta, owner of Facebook and Instagram, is cutting costs to manage a stagnating core business and to fund a bet-the-farm investment in an unproven metaverse. Chinese-owned TikTok faces intensifying scrutiny on national security grounds.

Many view this moment of reckoning over social media with grim satisfaction. Advocates, politicians and activists have railed for years against the dark sides of online platforms—hate speech, vicious trolling, disinformation, bias against certain views and the incubation of extremist ideas. Social media has been blamed for everything from the demise of local news to the rise of autocracy worldwide.

Social media reflects and intensifies human nature, for good and for ill.

But the challenge of reining in what’s bad about social media has everything to do with what’s good about it. The platforms are an undeniable boon for free expression, public discourse, information sharing and human connection. Nearly three billion people use Meta’s platforms daily. The overwhelming majority log in to look at pictures and reels, to discover a news item that is generating buzz or to stay connected to more friends than they could possibly talk to regularly in real life. Human rights activists, journalists, dissidents and engaged citizens have found social media indispensable for documenting and exposing world-shaking events, mobilizing movements and holding governments accountable.

Social media reflects and intensifies human nature, for good and for ill. If you are intrigued by football, knitting or jazz, it feeds you streams of video, images and text that encourage those passions. If you are fascinated by authoritarianism or unproven medical procedures, the very same algorithms—at least when left to their own devices—steer that material your way. If people you know are rallying around a cause, you will be sent messages meant to bring you along, whether the aim is championing civil rights or overthrowing the government.

Studies show that incendiary content travels faster and farther online than more benign material and that we engage longer and harder with it. The bare algorithms thus favor vitriol and conspiracy theories over family photos and puppy videos. For the social-media companies, the clear financial incentive is to keep the ugly stuff alive.

The only viable approach, though painstaking and unsatisfying, is to mitigate harms through trial and error.

The great aim of reformers and regulators has been to figure out a way to separate what is enriching about social media from what is dangerous or destructive. [Most social media sites should enforce non-anonymity to reduce destructive behavior. At the same time they can create incentives to reward productive behaviors.] Despite years of bill-drafting by legislatures, code-writing in Silicon Valley and hand-wringing at tech conferences, no one has figured out quite how to do it.

Part of the problem is that many people come to this debate brandishing simple solutions: “free speech absolutism,” “zero tolerance,” “kill the algorithms” or “repeal Section 230” (Section 230 of the 1996 Communications Decency Act, which shields platforms from liability for users’ speech). None of these slogans offers a true or comprehensive fix, however. Mr. Musk thought he could protect free speech by dismantling content guardrails, but the resulting spike in hate and disinformation on Twitter alienated many users and advertisers. In response he has improvised frantically, resurrecting old rules and making up new ones that seem to satisfy no one.

The notion that there is a single solution to all or most of what ails social media is a fantasy. The only viable approach, though painstaking and unsatisfying, is to mitigate the harms of social media through trial and error, involving tech companies and Congress but also state governments, courts, researchers, civil-society organizations and even multilateral bodies. Experimentation is the only tenable strategy.

Well before Mr. Musk began his haphazard effort to reform Twitter, other major platforms set out to limit the harms associated with their services. In 2021, I was approached to join Meta’s Oversight Board, a now 22-person expert body charged with reviewing content decisions for Facebook and Instagram. As the CEO of PEN America, I had helped to document how unchecked disinformation and online harassment curtail free speech on social media. I knew that more had to be done to curb what some refer to as “lawful but awful” online content.

The Oversight Board is composed of scholars, journalists, activists and civic leaders and insulated from Meta’s corporate management by various financial and administrative safeguards. Critics worried that the board was just a publicity stunt, meant to shield Meta from well-warranted criticism and regulation. I was skeptical too. But I didn’t think profit-making companies could be trusted to oversee the sprawling digital town square and also knew that calling on governments around the world to micromanage online content would be treated by some as an invitation to silence dissent. I concluded that, while no panacea, the Oversight Board was worth a try.

The board remains an experiment. Its most valuable contribution has been to create a first-of-its-kind body of content-moderation jurisprudence. The board has opined on a range of hard questions. It found, for instance, that certain Facebook posts about Ethiopia’s civil war could be prohibited as incitements to violence but also decided that a Ukrainian’s post likening Russian soldiers to Nazis didn’t constitute hate speech. In each of the 37 decisions released so far, the board has issued a thorough opinion, citing international human-rights law and Meta’s own professed values. Those opinions are themselves an important step toward a comprehensible, reviewable scheme for moderating content with the public interest as a guide.

Ultimately, the value of Meta’s Oversight Board and similar mechanisms will hinge on getting social media platforms to expose their innermost workings to scrutiny and implement recommendations for lasting change. Meta deserves credit for allowing leery independent experts to nose into its business and offer critiques. Still, the board has sometimes had trouble getting its questions answered. In a step backward, the company has quietly gutted CrowdTangle, an analytics tool that the board and outside researchers have used to examine trends on the platform. The fact that Meta can close valuable windows into its operations at will underscores the need for regulation to guarantee transparency and public access to data.

Starting with a structural approach allows federal lawmakers to delay taking measures that raise First Amendment concerns.

Such openness is at the heart of two major pieces of legislation introduced in Congress last year. The Platform Accountability and Transparency Act would force platforms to disclose data to researchers, while the Digital Services Oversight and Safety Act would create a bureau at the Federal Trade Commission with broad authority and resources. But progress on the legislation has been slow, and President Biden has rightly called on Congress to get moving and begin to hold social media companies accountable.

The bills aim to address two prerequisites for promoting regulatory trial and error: ensuring access to data and building oversight capability. Only by prying open the black box of how social media operates—the workings of the algorithms and the paths and pace by which problematic content travels—can regulators do their job. As with many forms of business regulation—including for financial markets and pharmaceuticals—regulators can be effective only to the extent that they’re able to see what’s happening and get their questions answered.

Regulatory bodies also need to build muscle memory in dealing with companies. Though we have moved beyond the spectacle of the 2018 Senate hearing in which lawmakers asked Mark Zuckerberg how he made money running a free service, only trained, specialized and dedicated regulators—most of whom should be recruited from Silicon Valley—will be able to keep pace with the world’s most sophisticated engineers and coders. By starting with these structural approaches, federal lawmakers can delay entering the fraught terrain of content-specific measures that will raise First Amendment concerns.

The inherent difficulty of content-specific regulations is already apparent in laws emerging from the states. Those who believe conservative voices are unfairly suppressed by tech companies notched a victory when a split panel of the U.S. Fifth Circuit Court of Appeals upheld a Texas law barring companies from excising posts based on political ideology. The Eleventh Circuit went the opposite way, upholding a challenge to a similar law championed by Florida Gov. Ron DeSantis. The platforms have vociferously opposed both measures as infringing on their own expressive rights and leeway to run their businesses.

This term the Supreme Court is expected to adjudicate the conflicting rulings on the Texas and Florida cases. Both laws have been temporarily stayed; their effects are untested in the marketplace. Depending on what the Supreme Court decides, we may learn whether companies are willing to operate with sharply limited discretion to remove posts flagged as disinformation, hate speech or trolling. The prospect looms of a complex geographic patchwork where posts could disappear as users cross state lines or where popular platforms are off-limits in certain states because they can’t or won’t comply with local rules. If Elon Musk’s efforts at Twitter are any indication, recasting content moderation with the aim of eliminating anticonservative bias is a messy proposition.

Regulatory experiments affecting content moderation should be adopted modestly, with the aim of evaluating and tweaking them before they’re reauthorized. There have been numerous calls, for example, to repeal Section 230 and make social-media companies legally liable for their users’ posts. Doing so would force the platforms to conduct legal review of posts before they go live, a change that would eliminate the immediacy and free-flowing nature of social media as we know it.

Virtually any proposed measure can be criticized for either muzzling too much valuable speech or not curbing enough malign content.

But that doesn’t mean Section 230 should be sacrosanct. Proposals to pare back or augment Section 230 to encourage platforms to moderate harmful content deserve careful consideration. Such changes need to be assessed for their impact not just on online abuses but also on free speech. The Supreme Court will hear two cases this term on the liability of platforms for amplifying terrorist content, and the decisions could open the door to overhauling Section 230.

While the U.S. lags behind, legislative experimentation with social media regulation is proceeding apace elsewhere. A 2018 German law imposes multimillion euro fines on large platforms that fail to swiftly remove “clearly illegal” content, including hate speech. Advocates for human rights and free expression have roundly criticized the law for its chilling effect on speech.

Far more ambitious regulations will soon make themselves felt across Europe. The EU’s Digital Services Act codifies “conditional liability” for platforms hosting content that they know is unlawful. The law, which targets Silicon Valley behemoths like Meta and Google, goes into full effect in spring 2024. It will force large providers to make data accessible for researchers and to disclose how their algorithms work, how content moderation decisions are made and how advertisers target users. [Transparency] The law relies heavily on “co-regulation,” with authorities overseeing new mechanisms involving academics, advocacy groups, smaller tech firms and other digital stakeholders. The setup acknowledges that regulators lack the necessary technical savvy and expertise to act alone.

A complementary EU law, the Digital Markets Act, goes into effect this May and targets so-called “gatekeepers”—the largest online services. The measure will impose new privacy restrictions, antimonopoly and product-bundling constraints, and obligations that make it easier for users to switch apps and services, bringing their accounts and data with them.

Taken together, these measures will fundamentally reshape how major platforms operate in Europe, with powerful ripple effects globally. Critics charge that the Digital Services Act will stifle free expression, forcing platforms to act as an arm of government censorship. Detractors maintain that the Digital Markets Act will slow down product development and hamper competition.

The new EU laws illustrate the Goldilocks dilemma of social media regulation—that virtually any proposed measure can be criticized for either muzzling too much valuable speech or not curbing enough malign content; for giving social-media companies carte blanche or stifling innovation; for regulating too much or not enough. Getting the balance right will require time, detailed and credible study of the actual effects of the measures, and a readiness to adjust, amend and reconsider.

Whatever the regulatory framework, content-moderation strategies need to be alive to fast-evolving threats and to avoid refighting the last war. In 2020, efforts to combat election disinformation focused heavily on the foreign interferences that had plagued the political process four years earlier. Mostly overlooked was the power of domestic digital movements to mobilize insurrection and seek to overturn election results.

There will be no silver bullet to render the digital arena safe, hospitable and edifying. We must commit instead to fine-tuning systems as they evolve. The end result will look like any complex regulatory regime—some self-regulation, some co-regulation and some top-down regulation, with variations across jurisdictions. This approach might seem unsatisfying in the face of urgent threats to democracy, public health and our collective sanity, but it would finally put us on a path to living responsibly with social media, pitfalls and all.

Ms. Nossel is the CEO of PEN America and the author of “Dare to Speak: Defending Free Speech for All.”

Life in the Metaverse

This article on the efforts of Big Tech, specifically Meta, to corner the market in VR exposes some of the myths of Web3.0. Zuckerberg’s concept of VR is Web2.0 on steroids, as Jaron Lanier makes clear.

From Tablet:
Life in the Metaverse


“What’s going on today with a company like Google or Meta is that they’re getting all this data for free from people who don’t understand the meaning or the value of the data and then turning it into these algorithms that are mostly used to manipulate the same people,” Lanier told Kara Swisher on “The Metaverse: Expectations vs. Reality,” a New York Times podcast. For Lanier, the ultimate point of the Quest Pro is not to provide users with a valuable experience but to popularize a tool that can collect their data at the source.



NFTs (non-fungible tokens) are the newest rage in blockchain development. These tokens are issued by the creators of digital assets and registered on the Ethereum blockchain. The smart contracts associated with the tokens can award ownership rights to the asset, along with rights to royalty payments from the marketing or sale of the asset. NFTs seem particularly suited to collectors’ items and can also be applied to physical assets. The sale of the tokens prepays the creator for the value of the work. Preselling NFTs for songs could deliver a revenue flow to musicians before the music becomes a commercial hit. Think of holding a digital ownership right to Happy Birthday, or Yesterday, or the Mona Lisa. The problem, of course, is that predicting the future value of a creative work is almost impossible. Remember, Van Gogh couldn’t sell any of his paintings!

This makes NFTs a very speculative asset play.
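For concreteness, here is a minimal Python sketch of the royalty logic such a smart contract encodes – ownership transfer plus an automatic royalty share paid to the creator on every sale. This is an illustrative model, not actual Ethereum/Solidity code; the class name and the 10% royalty rate are assumptions.

```python
# Toy model of NFT royalty mechanics: the creator mints a token, and every
# sale automatically routes a royalty share back to the creator.
# The names and the 10% royalty rate are illustrative assumptions.

class NFT:
    def __init__(self, creator: str, royalty_rate: float = 0.10):
        self.creator = creator
        self.owner = creator          # creator owns the token at mint
        self.royalty_rate = royalty_rate
        self.creator_earnings = 0.0   # cumulative royalties collected

    def sell(self, buyer: str, price: float) -> float:
        """Transfer ownership; return the royalty paid to the creator."""
        royalty = price * self.royalty_rate
        # The seller keeps price - royalty; the creator is paid the rest.
        self.creator_earnings += royalty
        self.owner = buyer
        return royalty

token = NFT(creator="artist")
token.sell(buyer="fan", price=100.0)        # primary sale: royalty 10.0
token.sell(buyer="collector", price=500.0)  # resale: royalty 50.0
print(token.owner)             # collector
print(token.creator_earnings)  # 60.0
```

The point of the sketch is the resale line: the creator keeps earning as the work appreciates, which is exactly the prepaid-value bet a speculative NFT buyer is making.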

Click below to view the idea of an NFT art gallery:

The Gallery NFT

How To Fix Social Media (Facebook!)

How to Fix Social Media.

The Wall Street Journal has published a series of articles they call The Facebook Files. One article recently queried a dozen “experts” to discuss their ideas of how to fix social media. We at tuka have been focused on this challenge from the beginning, several years ago, and what these commentators reveal is a converging consensus that the problem with social media is scale and emotional triggering based on the speed of information flow. As Winston Churchill was said to quip, “A lie gets halfway around the world before the truth has a chance to get its pants on.” Social media connections have just made this pathology worse.

Our take has always been summed up in one line: a global gossip network makes no sense. Large, centralized networks of 3 billion users make no sense (i.e., FB). Social networking is person-to-person sharing based on mutual trust, so smaller networks focused on shared interests make sense. This is what tuka is. Technology can then be harnessed to coordinate these networks in ways that reduce the siloing effect, so we all end up sharing more information through our trusted networks. What is needed are policies that break down the network effects of scale and open the social media space to thousands of competitors, each focused on different community interests. Then interactions across networks help bring us together willingly.

The other problem we at tuka have cited is the centralization and control of information networks and the immense value they create. Data is gold, and we cannot have a handful of private companies own and control the personal data users create. This is akin to giving your labor away, or slavery. The required change is to decentralize the network using blockchain technologies so that the value created can be measured and distributed to users accordingly.

Several of the experts have accurately recognized the problem and what to do about it. The better ideas have been cut and pasted below.


Clay Shirky: Slow It Down and Make It Smaller

We know how to fix social media. We’ve always known. We were complaining about it when it got worse, so we remember what it was like when it was better. We need to make it smaller and slow it down.

The spread of social media vastly increased how many people any of us can reach with a single photo, video or bit of writing. When we look at who people connect to on social networks—mostly friends, unsurprisingly—the scale of immediate connections seems manageable. But the imperative to turn individual offerings, mostly shared with friends, into viral sensations creates an incentive for social media platforms, and especially Facebook, to amplify bits of content well beyond any friend group.

We’re all potential celebrities now, where anything we say could spread well beyond the group we said it to, an effect that the social media scholar Danah Boyd has called “context collapse.” And once we’re all potential celebrities, some people will respond to the incentives to reach that audience—hot takes, dangerous stunts, fake news, miracle cures, the whole panoply of lies and grift we now behold.

The faster content moves, the likelier it is to be borne on the winds of emotional reaction.

The inhuman scale at which the internet assembles audiences for casually produced material is made worse by the rising speed of viral content. As the psychologist Daniel Kahneman observed, human thinking comes in two flavors: fast and slow. Emotions are fast, and deliberation is slow.

The obvious corollary is that the faster content moves, the likelier it is to be borne on the winds of emotional reaction, with any deliberation coming after it has spread, if at all. The spread of smartphones and push notifications has created a whole ecosystem of URGENT! messages, things we are exhorted to amplify by passing them along: Like if you agree, share if you very much agree.

Social media is better, for individuals and for the social fabric, if the groups it assembles are smaller, and if the speed at which content moves through it is slower. Some of this is already happening, as people vote with their feet (well, fingers) to join various group chats, whether via SMS, Slack or Discord.

We know that scale and speed make people crazy. We’ve known this since before the web was invented. Users are increasingly aware that our largest social media platforms are harmful and that their addictive nature makes some sort of coordinated action imperative.

It’s just not clear where that action might come from. Self-regulation is ineffective, and the political arena is too polarized to agree on any such restrictions. There are only two remaining scenarios: regulation from the executive branch or a continuation of the status quo, with only minor changes. Neither of those responses is ideal, but given that even a global pandemic does not seem to have galvanized bipartisanship, it’s hard to see any other set of practical options.

Mr. Shirky is Vice Provost for Educational Technologies at New York University and the author of “Cognitive Surplus: Creativity and Generosity in a Connected Age.”


Jaron Lanier: Topple the New Gods of Data

When we speak of social media, what are we talking about? Is it the broad idea of people connecting over the internet, keeping track of old friends, or sharing funny videos? Or is it the business model that has come to dominate those activities, as implemented by Facebook and a few other companies?

Tech companies have dominated the definition because of the phenomenon known as network effects: The more connected a system is, the more likely it is to produce winner-take-all outcomes. Facebook took all.
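One way to make the network-effects point concrete: the number of possible user-to-user connections grows roughly with the square of the user count (Metcalfe's-law-style reasoning – an illustrative assumption, not a figure from the essay), so a network twice the size of its rival is roughly four times as connected, and the gap compounds:

```python
# Illustration of network effects: possible pairwise links grow
# quadratically with user count, so the largest network pulls far
# ahead of a competitor half its size -- a winner-take-all dynamic.

def possible_connections(users: int) -> int:
    """Number of distinct user-to-user links in a network of n users."""
    return users * (users - 1) // 2

small = possible_connections(1_000)   # 499,500 links
large = possible_connections(2_000)   # 1,999,000 links

print(round(large / small, 3))  # 4.002 -- double the users, ~4x the links
```

Whether raw link count is the right proxy for value is debated, but the superlinear growth is why "Facebook took all."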

The domination is so great that we forget alternatives are possible. There is a wonderful new generation of researchers and critics concerned with problems like damage to teen girls and incitement of racist violence, and their work is indispensable. If all we had to talk about was the more general idea of possible forms of social media, then their work would be what’s needed to improve things.

Unfortunately, what we need to talk about is the dominant business model. This model spews out horrible incentives to make people meaner and crazier. Incentives run the world more than laws, regulations, critiques, or the ideas of researchers.

The current incentives are to “engage” people as much as possible, which means triggering the “lizard brain” and fight-or-flight responses. People have always been a little paranoid, xenophobic, racist, neurotically vain, irritable, selfish, and afraid. And yet putting people under the influence of engagement algorithms has managed to bring out even more of the worst of us.

Can we survive being under the ambient influence of behavior modification algorithms that make us stupider?

The business model that makes life worse is based on a particular ideology. This ideology holds that humans as we know ourselves are being replaced by something better that will be brought about by tech companies. Either we’ll become part of a giant collective organism run through algorithms, or artificial intelligence will soon be able to do most jobs, including running society, better than people. The overwhelming imperative is to create something like a universally Facebook-connected society or a giant artificial intelligence.

These “new gods” run on data, so as much data as possible must be gathered, and getting in the middle of human interactions is how you gather that data. If the process makes people crazy, that’s an acceptable price to pay.

The business model, not the algorithms, is also why people have to fear being put out of work by technology. If people were paid fairly for their contributions to algorithms and robots, then more tech would mean more jobs, but the ideology demands that people accept a creeping feeling of human obsolescence. After all, if data coming from people were valued, then it might seem like the big computation gods, like AI, were really just collaborations of people instead of new life forms. That would be a devastating blow to the tech ideology.

Facebook now proposes to change its name and to primarily pursue the “metaverse” instead of “social media,” but the only changes that fundamentally matter are in the business model, ideology, and resulting incentives.

Mr. Lanier is a computer scientist and the author, most recently, of “Ten Arguments for Deleting Your Social Media Accounts Right Now.”

Clive Thompson: Online Communities That Actually Work

Are there any digital communities that aren’t plagued by trolling, posturing and terrible behavior? Sure there are. In fact, there are quite a lot of online hubs where strangers talk all day long in a very civil fashion. But these aren’t the sites that we typically think of as social media, like Twitter, Facebook or YouTube. No, I’m thinking of the countless discussion boards and Discord servers devoted to hobbies or passions like fly fishing, cuisine, art, long-distance cycling or niche videogames.

I visit places like this pretty often in reporting on how people use digital tools, and whenever I check one out, I’m struck by how un-toxic they are. These days, we wonder a lot about why social networks go bad. But it’s equally illuminating to ask about the ones that work well. These communities share one characteristic: They’re small. Generally they have only a few hundred members, or maybe a couple thousand if they’re really popular.

And smallness makes all the difference. First, these groups have a sense of cohesion. The members have joined specifically to talk to people with whom they share an enthusiasm. That creates a type of social glue, a context and a mutual respect that can’t exist on a highly public site like Twitter, where anyone can crash any public conversation.

Even more important, small groups typically have people who work to keep interactions civil. Sometimes this will be the forum organizer or an active, long-term participant. They’ll greet newcomers to make them feel welcome, draw out quiet people and defuse conflict when they see it emerge. Sometimes they’ll ban serious trolls. But what’s crucial is that these key members model good behavior, illustrating by example the community’s best standards. The internet thinkers Heather Gold, Kevin Marks and Deb Schultz put a name to this: “tummeling,” after the Yiddish “tummeler,” who keeps a party going.

None of these positive elements can exist in a massive, public social network, where millions of people can barge into each other’s spaces—as they do on Twitter, Facebook, and YouTube. The single biggest problem facing social media is that our dominant networks are obsessed with scale. They want to utterly dominate their fields, so they can kill or absorb rivals and have the ad dollars to themselves. But scale breaks social relations.

Is there any way to mitigate this problem? I’ve never heard of any simple solution. Strong antitrust enforcement for the big networks would be useful, to encourage a greater array of rivals that truly compete with one another. But this likely wouldn’t fully solve the problem of scale, since many users crave scale too. Lusting after massive, global audiences, they will flock to whichever site offers the hugest. Many of the proposed remedies for social media, like increased moderation or modifications to legal liability, might help, but all leave intact the biggest problem of all: Bigness itself.

Mr. Thompson is a journalist who covers science and technology. He is the author, most recently, of “Coders: The Making of a New Tribe and the Remaking of the World.”

Does Facebook Really Make Sense?

An essay on social networks and regulation, published by Project Syndicate.

A Facelift for Facebook

One can agree with the explanation of the problem – centralized control of an increasingly valuable asset: information data. I’m not sure a real solution has been proposed yet. When I explain how platforms exploit the collection of users’ personal data for great material gain, I usually get a shoulder shrug. People still fail to appreciate that their data is as valuable as their labor, if not more so. They would not give away their labor for free, so why give away their valuable data in exchange for a free user page and some simple tools?

At a macro level, a social network like FB makes almost no sense, except as a data-mining tool. FB thrives on little more than gossip networks, and if one looks at the function of social gossip across time and cultures, one realizes that a global gossip network makes no sense from a human point of view: all of the negative effects and few of the positive ones. As Hill points out, Online Social Networks make sense for small assemblies of friends, families, and interest groups. This is the goal for OSNs of the future. FB “Groups” might resemble the type of content-focused social network that is valuable to users, but, of course, on FB these are exploited exclusively for the benefit of FB or the group’s administrator. The users get free participation, but little of the value created. A small club of influencers is not the future of OSNs.

A digital license could impose some necessary public-goods features, but for regulating audience size, the potential for abuse and centralized control probably outweighs the benefits. Better for the market to solve this problem with competition, though that will probably require some government policy that reduces or eliminates the existing natural monopolies created by network effects. (“Search” needs to be a publicly regulated utility, while Amazon needs to spin off its vertical integration. I’m still of the opinion that large social networks like FB will be competed away.)
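The claim that network effects create natural monopolies can be made concrete with a toy simulation (my own sketch with made-up parameters, not anything from the essays above): if each new user joins an existing network with probability proportional to the *square* of its size – the superlinear pull of connections that Metcalfe’s law suggests – one network ends up with nearly everyone.

```python
import random

# Toy winner-take-all simulation (illustrative only).
# Five hypothetical networks start with one user each; every new user
# joins a network with probability proportional to size squared.
def simulate(networks=5, new_users=100_000, seed=42):
    rng = random.Random(seed)
    sizes = [1] * networks              # every network starts with one user
    for _ in range(new_users):
        weights = [s * s for s in sizes]  # superlinear attractiveness
        r = rng.uniform(0, sum(weights))
        cum = 0.0
        for i, w in enumerate(weights):
            cum += w
            if r <= cum:
                sizes[i] += 1           # this network wins the new user
                break
    return sorted(sizes, reverse=True)

sizes = simulate()
print("final sizes:", sizes)
print(f"leader's share: {sizes[0] / sum(sizes):.0%}")
```

With linear rather than squared weights, the shares tend to stay far more evenly split, which is one way of seeing that it is the superlinear value of connections, not growth as such, that drives the winner-take-all outcome.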

I imagine the future will look like many social networks that are decentralized but coordinated in some way, so that users don’t get siloed. And personal data must remain under the control of users, with the value the network creates shared among them. For that one probably needs some kind of decentralized blockchain solution. Most large, complex systems become manageable only through coordinated decentralization of control: think representative democracy and Federalism.