Reining in the Oligarchies

From Hillsdale College’s Imprimis:

Who Is in Control? The Need to Rein in Big Tech

Allum Bokhari

The following is adapted from a speech delivered at Hillsdale College on November 8, 2020, during a Center for Constructive Alternatives conference on Big Tech.

In January, when every major Silicon Valley tech company permanently banned the President of the United States from its platform, there was a backlash around the world. One after another, government and party leaders—many of them ideologically opposed to the policies of President Trump—raised their voices against the power and arrogance of the American tech giants. These included the President of Mexico, the Chancellor of Germany, the government of Poland, ministers in the French and Australian governments, the neoliberal center-right bloc in the European Parliament, the national populist bloc in the European Parliament, the leader of the Russian opposition (who recently survived an assassination attempt), and the Russian government (which may well have been behind that attempt).

Common threats create strange bedfellows. Socialists, conservatives, nationalists, neoliberals, autocrats, and anti-autocrats may not agree on much, but they all recognize that the tech giants have accumulated far too much power. None like the idea that a pack of American hipsters in Silicon Valley can, at any moment, cut off their digital lines of communication.

I published a book on this topic prior to the November election, and many who called me alarmist then are not so sure of that now. I built the book on interviews with Silicon Valley insiders and five years of reporting as a Breitbart News tech correspondent. Breitbart created a dedicated tech reporting team in 2015—a time when few recognized the danger that the rising tide of left-wing hostility to free speech would pose to the vision of the World Wide Web as a free and open platform for all viewpoints.

This inversion of that early libertarian ideal—the movement from the freedom of information to the control of information on the Web—has been the story of the past five years.

***

When the Web was created in the 1990s, the goal was that everyone who wanted a voice could have one. All a person had to do to access the global marketplace of ideas was to go online and set up a website. Once created, the website belonged to that person. Especially if the person owned his own server, no one could deplatform him. That was by design, because the Web, when it was invented, was competing with other types of online services that were not so free and open.

It is important to remember that the Web as we know it today—a network of websites accessed through browsers—was not the first online service ever created. In the early 1990s, Sir Timothy Berners-Lee invented the technology that underpins websites and web browsers. But there were other online services, some of which predated Berners-Lee’s invention. Corporations like CompuServe and Prodigy ran their own online networks in the 1990s—networks that were separate from the Web and had access points other than web browsers. These privately owned networks were open to the public, but CompuServe and Prodigy owned every bit of information on them and could kick people off for any reason.

In these ways the Web was different. No one owned it, owned the information on it, or could kick anyone off. That was the idea, at least, before the Web was captured by a handful of corporations.

We all know their names: Google, Facebook, Twitter, YouTube, Amazon. Like Prodigy and CompuServe back in the ’90s, they own everything on their platforms, and they have the police power over what can be said and who can participate. But it matters a lot more today than it did in the ’90s. Back then, very few people used online services. Today everyone uses them—it is practically impossible not to use them. Businesses depend on them. News publishers depend on them. Politicians and political activists depend on them. And crucially, citizens depend on them for information.

Today, Big Tech doesn’t just mean control over online information. It means control over news. It means control over commerce. It means control over politics. And how are the corporate tech giants using their control? Judging by the three biggest moves they have made since I wrote my book—the censoring of the New York Post in October when it published its blockbuster stories on Biden family corruption, the censorship and eventual banning of President Trump from the Web, and the coordinated takedown of the upstart social media site Parler—it is obvious that Big Tech’s priority today is to support the political Left and the Washington establishment.

Big Tech has become the most powerful election-influencing machine in American history. It is not an exaggeration to say that if the technologies of Silicon Valley are allowed to develop to their fullest extent, without any oversight or checks and balances, then we will never have another free and fair election. But the power of Big Tech goes beyond the manipulation of political behavior. As one of my Facebook sources told me in an interview for my book: “We have thousands of people on the platform who have gone from far right to center in the past year, so we can build a model from those people and try to make everyone else on the right follow the same path.” Let that sink in. They don’t just want to control information or even voting behavior—they want to manipulate people’s worldview.

Is it too much to say that Big Tech has prioritized this kind of manipulation? Consider that Twitter is currently facing a lawsuit from a victim of child sexual abuse who says that the company repeatedly failed to take down a video depicting his assault, and that it eventually agreed to do so only after the intervention of an agent from the Department of Homeland Security. So Twitter will take it upon itself to ban the President of the United States, but is alleged to have taken down child pornography only after being prodded by federal law enforcement.

***

How does Big Tech go about manipulating our thoughts and behavior? It begins with the fact that these tech companies strive to know everything about us—our likes and dislikes, the issues we’re interested in, the websites we visit, the videos we watch, who we voted for, and our party affiliation. If you search for a Hanukkah recipe, they’ll know you’re likely Jewish. If you’re running down the Yankees, they’ll figure out if you’re a Red Sox fan. Even if your smartphone is turned off, they’ll track your location. They know who you work for, who your friends are, when you’re walking your dog, whether you go to church, when you’re standing in line to vote, and on and on.

As I already mentioned, Big Tech also monitors how our beliefs and behaviors change over time. They identify the types of content that can change our beliefs and behavior, and they put that knowledge to use. They’ve done this openly for a long time to manipulate consumer behavior—to get us to click on certain ads or buy certain products. Anyone who has used these platforms for an extended period of time has no doubt encountered the creepy phenomenon where you’re searching for information about a product or a service—say, a microwave—and then minutes later advertisements for microwaves start appearing on your screen. These same techniques can be used to manipulate political opinions.

I mentioned that Big Tech has recently demonstrated ideological bias. But it is equally true that these companies have huge economic interests at stake in politics. The party that holds power will determine whether they are going to get government contracts, whether they’re going to get tax breaks, and whether and how their industry will be regulated. Clearly, they have a commercial interest in political control—and currently no one is preventing them from exerting it.

To understand how effective Big Tech’s manipulation could become, consider the feedback loop.

As Big Tech constantly collects data about us, they run tests to see what information has an impact on us. Let’s say they put a negative news story about someone or something in front of us, and we don’t click on it or read it. They keep at it until they find content that has the desired effect. The feedback loop constantly improves, and it does so in a way that’s undetectable.
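To make the shape of such a feedback loop concrete, here is a minimal sketch in the style of an epsilon-greedy bandit: it keeps showing content variants, measures clicks, and gravitates toward whichever variant produces the response being optimized for. The variant names, click rates, and parameters are invented for illustration; nothing here is attributed to any actual platform.

    # Illustrative epsilon-greedy sketch of a content feedback loop.
    # All names and numbers are hypothetical.
    import random

    def pick_variant(clicks, shows, epsilon=0.1):
        """Mostly exploit the best-performing variant; occasionally explore others."""
        if random.random() < epsilon:
            return random.choice(list(shows))
        return max(shows, key=lambda v: clicks[v] / max(shows[v], 1))

    variants = ["story_a", "story_b", "story_c"]
    clicks = dict.fromkeys(variants, 0)
    shows = dict.fromkeys(variants, 0)

    for _ in range(1000):
        v = pick_variant(clicks, shows)
        shows[v] += 1
        # Simulated user: story_b happens to resonate most with this person.
        if random.random() < {"story_a": 0.02, "story_b": 0.10, "story_c": 0.05}[v]:
            clicks[v] += 1

    print(shows)  # over time the loop concentrates impressions on story_b

The point of the sketch is that none of this optimization is visible to the person on the receiving end; only the operator sees the counters.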

What determines what appears at the top of a person’s Facebook feed, Twitter feed, or Google search results? Does it appear there because it’s popular or because it’s gone viral? Is it there because it’s what you’re interested in? Or is there another reason Big Tech wants it to be there? Is it there because Big Tech has gathered data that suggests it’s likely to nudge your thinking or your behavior in a certain direction? How can we know?

What we do know is that Big Tech openly manipulates the content people see. We know, for example, that Google reduced the visibility of Breitbart News links in search results by 99 percent in 2020 compared to the same period in 2016. We know that after Google introduced an update last summer, clicks on Breitbart News stories from Google searches for “Joe Biden” went to zero and stayed at zero through the election. This didn’t happen gradually, but in one fell swoop—as if Google flipped a switch. And this was discoverable through the use of Google’s own traffic analysis tools, so it isn’t as if Google cared that we knew about it.

Speaking of flipping switches, I have noted that President Trump was collectively banned by Twitter, Facebook, Twitch, YouTube, TikTok, Snapchat, and every other social media platform you can think of. But even before that, there was manipulation going on. Twitter, for instance, reduced engagement on the President’s tweets by over eighty percent. Facebook deleted posts by the President for spreading so-called disinformation.

But even more troubling, I think, are the invisible things these companies do. Consider “quality ratings.” Every Big Tech platform has some version of this, though some of them use different names. The quality rating is what determines what appears at the top of your search results, or your Twitter or Facebook feed, etc. It’s a numerical value that Big Tech’s algorithms assign according to their criteria for “quality.” In the past, this score was determined by criteria that were somewhat objective: if a website or post contained viruses, malware, spam, or copyrighted material, that would negatively impact its quality score. If a video or post was gaining in popularity, the quality score would increase. Fair enough.

Over the past several years, however—and one can trace the beginning of the change to Donald Trump’s victory in 2016—Big Tech has introduced all sorts of new criteria into the mix of criteria that determine quality scores. Today, the algorithms on Google and Facebook have been trained to detect “hate speech,” “misinformation,” and “authoritative” (as opposed to “non-authoritative”) sources. Algorithms analyze a user’s network, so that what a user follows on social media—e.g., “non-authoritative” news outlets—affects that user’s quality score. Algorithms also detect the use of language frowned on by Big Tech—e.g., “illegal immigrant” (bad) in place of “undocumented immigrant” (good)—and adjust quality scores accordingly. And so on.
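To make the mechanism concrete, here is a minimal sketch of what a signal-weighted score of this kind could look like. The signal names and weights are invented purely for illustration; no platform publishes its actual formula.

    # Hypothetical quality-score sketch; signal names and weights are invented.
    OLD_SIGNALS = {"has_malware": -5.0, "is_spam": -3.0, "is_trending": 2.0}
    NEW_SIGNALS = {
        "flagged_hate_speech": -4.0,
        "flagged_misinformation": -4.0,
        "authoritative_source": 3.0,
        "follows_nonauthoritative_outlets": -1.5,
        "uses_disfavored_terms": -1.0,
    }

    def quality_score(item_signals):
        """Sum the weights of whichever signals the item triggers."""
        weights = {**OLD_SIGNALS, **NEW_SIGNALS}
        return sum(weights[name] for name, present in item_signals.items() if present)

    post = {"is_trending": True, "uses_disfavored_terms": True,
            "follows_nonauthoritative_outlets": True}
    print(quality_score(post))  # 2.0 - 1.0 - 1.5 = -0.5, so the post is down-ranked

The older, objective signals and the newer, ideological ones end up folded into the same number, which is exactly why the shift is hard to see from the outside.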

This is not to say that you are informed of this or that you can look up your quality score. All of this happens invisibly. It is Silicon Valley’s version of the social credit system overseen by the Chinese Communist Party. As in China, if you defy the values of the ruling elite or challenge narratives that the elite labels “authoritative,” your score will be reduced and your voice suppressed. And it will happen silently, without your knowledge.

This technology is even scarier when combined with Big Tech’s ability to detect and monitor entire networks of people. A field of computer science called “network analysis” is dedicated to identifying groups of people with shared interests, who read similar websites, who talk about similar things, who have similar habits, who follow similar people on social media, and who share similar political viewpoints. Big Tech companies are able to detect when particular information is flowing through a particular network—if there’s a news story or a post or a video, for instance, that’s going viral among conservatives or among voters as a whole. This gives them the ability to shut down a story they don’t like before it gets out of hand. And these systems are growing more sophisticated all the time.
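As a rough sketch of what this kind of network analysis looks like in practice (the follow graph and thresholds below are made up, and real platforms run proprietary systems at vastly larger scale), an off-the-shelf graph library can already group accounts into communities and flag which community a story is spreading through:

    # Community detection on a toy follow graph using networkx.
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    follows = [
        ("alice", "bob"), ("bob", "carol"), ("alice", "carol"),  # one cluster
        ("dave", "erin"), ("erin", "frank"), ("dave", "frank"),  # another cluster
        ("carol", "dave"),                                       # weak bridge
    ]
    G = nx.Graph(follows)

    # Partition accounts into densely connected communities.
    communities = list(greedy_modularity_communities(G))

    # If a story is being shared mostly within one community, it is "going viral"
    # inside that network and can be throttled there specifically.
    shares = {"alice", "bob", "carol"}
    for i, community in enumerate(communities):
        overlap = len(shares & community) / len(community)
        print(f"community {i}: {sorted(community)}  share rate {overlap:.0%}")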

***

If Big Tech’s capabilities are allowed to develop unchecked and unregulated, these companies will eventually have the power not only to suppress existing political movements, but to anticipate and prevent the emergence of new ones. This would mean the end of democracy as we know it, because it would place us forever under the thumb of an unaccountable oligarchy.

The good news is, there is a way to rein in the tyrannical tech giants. And the way is simple: take away their power to filter information and filter data on our behalf.

All of Big Tech’s power comes from their content filters—the filters on “hate speech,” the filters on “misinformation,” the filters that distinguish “authoritative” from “non-authoritative” sources, etc. Right now these filters are switched on by default. We as individuals can’t turn them off. But it doesn’t have to be that way.

The most important demand we can make of lawmakers and regulators is that Big Tech be forbidden from activating these filters without our knowledge and consent. They should be prohibited from doing this—and even from nudging us to turn on a filter—under penalty of losing their Section 230 immunity for third-party content. This policy should be strictly enforced, and it should extend even to seemingly non-political filters like relevance and popularity. Anything less opens the door to manipulation.

Our ultimate goal should be a marketplace in which third party companies would be free to design filters that could be plugged into services like Twitter, Facebook, Google, and YouTube. In other words, we would have two separate categories of companies: those that host content and those that create filters to sort through that content. In a marketplace like that, users would have the maximum level of choice in determining their online experiences. At the same time, Big Tech would lose its power to manipulate our thoughts and behavior and to ban legal content—which is just a more extreme form of filtering—from the Web.
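As a sketch of what separating hosts from filters could look like in code (the interface and class names here are hypothetical, one possible design rather than anything proposed in the speech), the host would serve raw content and apply only the third-party filter the user explicitly chose:

    # Hypothetical host/filter separation: the platform serves raw posts and
    # applies only the filter the user has plugged in.
    from typing import Protocol

    class Filter(Protocol):
        def rank(self, posts: list) -> list: ...

    class ChronologicalFilter:
        """A third-party filter that simply shows newest posts first."""
        def rank(self, posts):
            return sorted(posts, key=lambda p: p["timestamp"], reverse=True)

    class PopularityFilter:
        """Another vendor's filter: rank by likes, with no hidden criteria."""
        def rank(self, posts):
            return sorted(posts, key=lambda p: p["likes"], reverse=True)

    def build_feed(raw_posts, user_filter):
        # The host never filters by default; the user's chosen filter does the sorting.
        return user_filter.rank(raw_posts)

    posts = [{"id": 1, "timestamp": 100, "likes": 5},
             {"id": 2, "timestamp": 200, "likes": 1}]
    print([p["id"] for p in build_feed(posts, ChronologicalFilter())])  # [2, 1]
    print([p["id"] for p in build_feed(posts, PopularityFilter())])     # [1, 2]

In a design like this, switching filters is the user’s decision, and a filter vendor that manipulates results can simply be unplugged.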

This should be the standard we demand, and it should be industry-wide. The alternative is a kind of digital serfdom. We don’t allow old-fashioned serfdom anymore—individuals and businesses have due process and can’t be evicted because their landlord doesn’t like their politics. Why shouldn’t we also have these rights if our business or livelihood depends on a Facebook page or a Twitter or YouTube account?

This is an issue that goes beyond partisanship. What the tech giants are doing is so transparently unjust that all Americans should start caring about it—because under the current arrangement, we are all at their mercy. The World Wide Web was meant to liberate us. It is now doing the opposite. Big Tech is increasingly in control. The most pressing question today is: how are we going to take control back?

 

Collective Memory and Culture

This is an interesting explanation of how digital networks affect culture.  Comments in RED.

How We’ll Forget John Lennon

Kevin Berger

A few years ago a student walked into the office of Cesar A. Hidalgo, director of the Collective Learning group at the MIT Media Lab. Hidalgo was listening to music and asked the student if she recognized the song. She wasn’t sure. “Is it Coldplay?” she asked. It was “Imagine” by John Lennon. Hidalgo took it in stride that his student didn’t recognize the song. As he explains in our interview below, he realized the song wasn’t from her generation. What struck Hidalgo, though, was that the incident echoed a question that had long intrigued him, which was how music and movies and all the other things that once shone in popular culture faded like evening from public memory.

Hidalgo is among the premier data miners of the world’s collective history. With his MIT colleagues, he developed Pantheon, a dataset that ranks historical figures by popularity from 4000 B.C. to 2010. Aristotle and Plato snag the top spots. Jesus is third. It’s a highly addictive platform that allows you to search people, places, and occupations with a variety of parameters. Most famous tennis player of all time? That’s right, Frenchman Rene Lacoste, born in 1904. (Roger Federer places 20th.) Rankings are drawn from, essentially, Wikipedia biographies, notably ones in more than 25 different languages, and Wikipedia page views.

Medium Is the Message: “As a new medium takes over, the type of information being produced changes dramatically,” says Cesar Hidalgo. “Printing was not good for actors but good for playwrights. TV was not good for playwrights but very good for sports.” 

 In December 2018, Hidalgo and colleagues published a Nature paper that put his crafty data-mining talents to work on another question: How do people and products drift out of the cultural picture? They traced the fade-out of songs, movies, sports stars, patents, and scientific publications. They drew on data from sources such as Billboard, Spotify, IMDB, Wikipedia, the U.S. Patent and Trademark Office, and the American Physical Society, which has gathered information on physics articles from 1896 to 2016. Hidalgo’s team then designed mathematical models to calculate the rate of decline of the songs, people, and scientific papers.

The report, “The universal decay of collective memory and attention,” concludes that people and things are kept alive through “oral communication” from about five to 30 years. They then pass into written and online records, where they experience a slower, longer decline. The paper argues that people and things that make the rounds at the water cooler have a higher probability of settling into physical records. “Changes in communication technologies, such as the rise of the printing press, radio and television,” it says, affect our degree of attention, and all of our cultural products, from songs to scientific papers, “follow a universal decay function.”
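A rough way to picture a two-regime decay of this kind (the functional form and symbols below are an illustrative assumption, not the paper’s exact equations) is the sum of a fast communicative-memory term and a slow cultural-memory term:

    S(t) = N [ a e^{-t/\tau_c} + (1 - a) e^{-t/\tau_u} ],  with  \tau_c \ll \tau_u

where N is the initial attention, \tau_c is the short timescale over which something is actively talked about (years), and \tau_u is the much longer timescale of written and online records (decades). Early on the first term dominates and attention falls quickly; later only the slow second term remains.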

Last week I caught up with Hidalgo to talk about his Nature paper. But I also wanted to push him to talk about what he saw between the mathematical lines, to wear the social scientist’s hat and reflect on the consequences of decay in collective memory.

How do you define “collective memory?”

The easiest definition would be those pieces of knowledge or information that are shared by a large number of people.

Why does collective memory decay matter?

If you think about it, culture and memory are the only things we have. We treasure cultural memory because we use that knowledge to build and produce everything we have around us. That knowledge is going to help us build the future and solve the problems we have yet to solve. If aliens come here and wave a magic wand and make everyone forget everything—our cars, buildings, bridges, airplanes, our power systems, and so forth—we would collapse as a society immediately.

The relative power of scientists has diminished as we exited the printing era and went into this more performance-based era.

In your mind, what is a classic example of collective memory decay?

I thought everybody knew “Imagine” by John Lennon. I’m almost 40 and my student was probably 20. But I realized “Imagine” is not as popular in her generation as it was in mine, and it was probably less popular in my generation than in the generation before. People have a finite capacity to remember things. There’s great competition for the content out there, and the number of people who know or remember something decays over time. There’s another example, of Elvis Presley memorabilia. People had bought Elvis memorabilia for years and it was commanding huge prices. Then all of a sudden the prices started to collapse. What happened is that the people who collected Elvis memorabilia started to die. Their families were stuck with all of this Elvis stuff and trying to sell it. But all of the people who were buyers were also dying.

You write collective memory also reflects changes in communication technologies, such as the rise of the printing press, radio, and TV. How so?

Take print. Changing the world from an oral tradition to a written tradition provided a much better medium for data. A lot of people have linked the revolution in the sciences and astronomy to the rise of printing because astronomical tables, for instance, could be copied in a reliable way. Before printing, astronomical tables were hand-copied, which introduced errors that diminished the quality of the data. With printing, people had more reliable forms of data. We see very clearly from our data that with the rise of printing you get the rise of astronomers, mathematicians, and scientists. You also see a rise in composers because printing helps the transmission of sheet music. So when you look at people we remember most from the time when print first arose, you see ones from the arts and sciences.

What did the mediums that came next mean for science?

The new mediums of radio and TV were much more adaptive for entertainment than science, that’s for sure. The fraction of famous people who belong to the sciences diminished enormously during the 20th century. The new mediums were not good for the nuances that science demands. For good reason, scientists need to qualify their statements narrowly and be careful when they talk about causality. They need to be specific about the methods they use and the data they collect. All of those extensive nuances are hard to communicate in mediums that are good for entertainment and good for performance. So the relative power of scientists, or their position in society, has diminished as we exited the printing era and went into this more performance-based era.

At the same time, scientists and the general scientific community have not been great at adapting their ideas to new mediums. Scientists are the first ones to bring down another scientist who tries to popularize content in a way that would not be traditional. So scientists are their own worst enemies in this battle. They have lagged behind in their ability to learn how to use these mediums. Sometimes they focus too much on the content without paying attention to how to adapt it to the medium that will best help it get out.

What does your analysis tell us we didn’t know before about the decay of collective memory?

We began by looking at how popular something is today based on how long ago it became popular in the first place. The expectation is that collective memory decays over time in a smooth pattern: the more time goes by, the more things become forgotten. But what we found when we looked at cultural products—movies, songs, sports figures, patents, and science papers—was that decay is not smooth but has two defined regimes. There’s a first regime in which attention is high but drops off quickly; that’s the period when something is being actively talked about. Then there’s the second regime, in which it has a much longer tail, when the decay is smoother and the attention is less. [This implies that artistic innovation, which departs from the popular, will take longer to catch on and will need a more focused, or niche, audience; its durability and network dissemination determine how successful it is.]

I’m surprised how the U.S., a country with people doing so many things, can become so monothematic on such a vast scale.

When we started to think about decay, we realized we could take two concepts from anthropology—“communicative memory” and “cultural memory.” Communicative memory arises from talking about things. Donald Trump is very much in our communicative memory now. You walk down the street and find people talking about Trump—Trump and tariffs, Trump and the trade war. But there’s going to be a point, 20 years in the future, in which he’s not going to be talked about every day. He’s going to exit from communicative memory and be part of cultural memory. And that’s the memory we sustain through records. Although the average amount of years that something remains in communicative memory varies—athletes last longer than songs, movies, and science papers, sometimes for a couple of decades—we found this same overall decay pattern in multiple cultural domains.

In your forthcoming paper, “How the medium shapes the message,” you refer to the late cultural critic Neil Postman, who argued that the popular rise of TV led to a new reign of entertainment, which dumbed us down, because entertainment was best suited for TV. Is that what you found?

We found evidence in favor of that, yes. The fraction of people who belong to the sciences, as a fraction of all of the people who become famous, diminishes enormously during the 20th century. That agrees completely with Postman’s observation.

Do you agree with Postman that we’re all “amusing ourselves to death?”

I don’t think we’re amusing ourselves to death. I’m not that much of a pessimist. I do think life is also about enjoying the ride, not just about doing important things. And new mediums like TikTok, a kind of Twitter for videos, are great for creative expression. People are doing amazing little performance skits on TikTok. The entertainment and artistic components of every new medium are not bad per se, but every medium can be hijacked by extreme people who know how to craft entertaining messages, especially when they want to advance a certain agenda.

What type of information is best suited for the Internet?

It’s hard to think of the Internet as a medium. It’s more of a platform in which Facebook, Twitter, email, and TikTok are different mediums. They each send their own type of message. A picture that does well on Instagram doesn’t necessarily shine on Twitter, where people are expecting something else. The behavior and the engagements are different. Twitter, for example, is about being controversial. You know, one way to get chewed up on Twitter is to try to be in the center! I use Twitter a little, but not that much. I find that it’s a little bit hostile. I’m a family type of guy, so I use Facebook. In Facebook, at least in my circle, you put more detail into comments and are a little bit more thoughtful.

Now people like Elon Musk are in the center of culture. Young people now look up to entrepreneurs the way we used to look up to musicians.

Is collective memory decaying more rapidly because communication technologies are so much faster?

I would love to know that but I can’t. Some people would say collective memory decays based not on calendar time but on the speed at which new content is being produced. We forget Elvis because the Beatles came up, and we forget the Beatles because Led Zeppelin came up, and we forget Led Zeppelin because Metallica came up, and so forth. But things become very dear to a generation and people will not forget about them just because new content came in. So decay would be something characteristic of humans, not the volume of content. To separate those two things, we would need to look at content from very different time frames. At the moment, we don’t have the richness of data that we would need to answer that question.

Still, don’t you think the speed at which online information is tearing through our brains has got to be leaving some path of destruction in collective memory?

I don’t know. I grew up in Chile, which of course is small compared to the United States. I came to the U.S. for the first time in 1996. And one of the things that still surprises me is how monothematic American culture can be. In 1996 it was all about O.J. Simpson. Everybody talked about O.J. Simpson. He was everywhere on TV. Just like Trump today, he consumed the entire bandwidth. I’m surprised how a country with so many people, and with people doing so many different things, can nevertheless become so monothematic on such a vast scale. Today we have so much more content than in 1996 because of the rise of the Internet and the ability of people to create content. But look at the percentage of all conversations and online communications that are consumed by Trump. So in that context, I don’t think content is being replaced so easily. I don’t see that much of a rise in diversity. [This indicates the winner-take-all nature of network and popularity metrics. Content creators become famous for being famous.] 

That’s really interesting. Because one of the common criticisms of the current information glut is we have no shared cultural center. Everybody has their own narrow interest and we have no shared cultural bond, no John Lennon.

Is that a collective memory phenomenon or is it because nowadays the guys in the middle of the culture are different guys? Different people come into the center of culture because of the type of mediums that are available. There have been musicians for thousands of years, and for most of that history, musicians have not been wealthy. It was only when there was a medium that allowed them to sell their music—vinyl, magnetic tapes, and discs—that they were able to make money. I think that generated a golden era of pop music in the ’60s, ’70s, and ’80s. And that’s associated with the communication technology that was dominant at that time. Radio and discs were a way to distribute those popular idols’ musical performances. When that technology was replaced by simple forms of copying, like the ability to copy files on the Internet, all that went away. [This explains why the music industry of physical media, with its high-profit margins, is not coming back.] Now people like Elon Musk are in the center of culture. He’s not John Lennon. It’s a very different type of leadership, a different type of model for young people. But Musk’s first job was an online payment start-up. And I think a lot of young people now look up to entrepreneurs the way we used to look up to musicians.

Did you come away from your study with insights into what may or may not cause something to stick in collective memory?

I read a very good book recently called The Formula by Albert-László Barabási. He says you can equate quality and popularity in situations in which performance is clearly measurable. But in cases in which performance is not clearly measurable, you cannot equate popularity with quality. If you look at tennis players, you find that the players who win tournaments and difficult matches are more popular. So quality and fame are closely correlated in a field in which performance is measured as tightly as it is in professional tennis. As you move to things that are less quantifiable in terms of performance, like modern art [or music or books], your networks are going to be more important in determining popularity. [This is why we need a human social network that curates and filters subjective content.]

How should we think about quality in media content?

Well, I would say that collective memory decay is an important way to measure and think about quality. If you publish some clickbait that is popular in the beginning, that gets a lot of views in the first couple of days, but a year later nobody looks at it, you have a good metric. The same is true if you publish a more thoughtful piece that might not be as popular in the beginning because it didn’t work as clickbait—it required more engagement from the readers—but keeps on building readers over time. So the differences in longevity are important metrics for quality. [So unless we have a dynamic social network that can curate subjective content and filter it into its proper consumer niche, quality becomes an ignored stepchild to the popularity of art.]

That goes back to a paper I did when I was an undergrad about the decay functions of movie attendance. There were some movies that had a lot of box office revenue in the first week but then decayed really fast. And there were other movies that decayed more slowly. We created a model in which people would talk to each other [this is what happens with an OSN] and communicate information about the quality of the movie. And that model only had one parameter, which was how good the movie was. So the quality of the movie would increase or decrease the probability that people would go watch it. We could then look at the curves and infer how good the movie was, based not on how widely it was shown or on the total revenue, but on the shape of the curve. That was interesting because there were movies that were really bad, like Tomb Raider, which at first was a box office success. But if you put it into our model, you would see that it was just hype: people watched it, hated the movie, and the curve decayed really fast.
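Here is a minimal word-of-mouth sketch inspired by that description. It is not the actual model from Hidalgo’s paper; the functional form and numbers are assumptions chosen only to show how the shape of the attendance curve, rather than its opening peak, can reveal quality.

    # Toy word-of-mouth model: initial hype decays each week, while viewers who
    # liked the film (probability = quality) recruit new viewers.
    def weekly_attendance(quality, opening=1_000_000, weeks=10, hype_carryover=0.5):
        attendance = []
        audience = opening
        for _ in range(weeks):
            attendance.append(round(audience))
            # Next week: residual hype plus word-of-mouth recruitment.
            audience = audience * hype_carryover + audience * quality * 0.8
        return attendance

    print(weekly_attendance(0.6))  # well-liked film: the curve decays slowly
    print(weekly_attendance(0.1))  # hyped but disliked film: collapses fast

With quality = 0.6 the weekly multiplier is about 0.98, so attendance barely declines; with quality = 0.1 it is 0.58, so the opening-week crowd evaporates, which is the Tomb Raider pattern described above.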

Cultural innovation and quality depend on human curation of content and word of mouth through a social network.

Antitrust and Vampire Squids

The US government wants to break up Facebook. Good – it’s long overdue

The Guardian, December 11, 2020

This week the government filed a ground-breaking antitrust suit against Facebook, seeking to break up the corporation for monopolistic practices. The suit comes on the heels of a similar case against Google, as well as an aggressive Democrat-authored congressional report recommending taking apart not just Google and Facebook, but Apple and Amazon as well.

The evidence against Facebook seems overwhelming, with enforcers pointing to internal email conversations in which the CEO, Mark Zuckerberg, and his colleagues allegedly conspired to monopolize the social media space by buying rivals and stifling competitors. Proof of intent to violate antitrust law appears to be ample. Yet news articles covering the case describe it as “far from a slam dunk”, and competition law experts predict that enforcers will “face an uphill battle” in proving their claims.

Embedded in these muted words about the legal viability of the case is a political battle about the nature of economic power. Both antitrust suits are the result of a new movement of anti-monopoly scholars and advocates pushing to reform a heavily concentrated and misshapen American economy. Yet within the cocooned world of orthodox antitrust experts, there’s a suspicious lack of enthusiasm for breaking up Facebook, or any of the tech goliaths. Fiona Scott Morton, for instance, a former Obama enforcer and opinion leader at Yale, wrote last year that “break-ups are not a good solution to the economic harms created by large firms in this sector.” And last year the leading antitrust scholar Herb Hovenkamp argued that “breakup remedies are radical and they frequently have unintended consequences,” and warned that “Judges aren’t good at breaking up companies.”

In this formulation, break-ups are a legally difficult and flawed remedy, akin to amputating the leg of someone in need of a pedicure. Some politicians are still listening to these experts; Republican politicians have expressed skepticism at break-ups, but even the 2020 Democratic platform says that regulators should only consider breaking up corporations “as a last resort”. More than politicians, judges listen to these arguments, and rewrite antitrust law from the bench to make bringing monopolization cases and winning them – even when the evidence is overwhelming – far too expensive and difficult.

Such a situation is historically unusual. As the historian Richard John notes, America has a long history of breaking up big companies. Some of those broken-up entities include logging companies in Maine in the 1840s, Standard Oil in the 1910s, and AT&T in the 1980s. In fact, in 1961 the supreme court pronounced that breaking up companies has “been called the most important of antitrust remedies. It is simple, relatively easy to administer, and sure.”

So what explains this modern reluctance?

The standard account is that a group of libertarian law and economics scholars in and around the University of Chicago recentered antitrust in the 1970s. These men, led by Milton Friedman, Robert Bork and George Stigler, sought to attack the New Deal regulatory state, and free concentrated capital. Bork led the legal crusade against what he called the “militant ideology” of aggressive antitrust enforcers. His goal was to pull control of this area of law out of the hands of liberal legislative bodies and place it in the hands of highly technical conservative economists and lifetime-appointed judges who would listen to them. When Ronald Reagan became president, he radically narrowed antitrust, amounting to what Bork called a “revolution in a major American policy”.

But this is only part of the story. It fails to explain how, in 2004, Antonin Scalia convinced his fellow supreme court justices, including Stephen Breyer and Ruth Bader Ginsburg, to join him in a unanimous supreme court decision which undermined the ability to bring monopolization cases by holding that the “charging of monopoly prices is not only not unlawful, it is an important element of the free-market system.”

The liberal justices were swayed by a different set of scholars, less-well known in the revolution that has produced today’s monopoly-heavy economy. These scholars challenged Bork-influenced libertarians over certain methodological questions but accepted the ideological contention that antitrust should be a technical area without broader democratic goals.

This group is led by Hovenkamp, an academic centrist technocrat, who is the most important antitrust thinker alive today, nicknamed the “dean of the antitrust bar”. His partnership with Lyndon Johnson’s antitrust chief Don Turner and Harvard scholar Phil Areeda on a key antitrust treatise set the stage for his intellectual dominance in the 1980s. Stephen Breyer, a liberal justice and an adherent of Hovenkamp, once noted that advocates would rather have “two paragraphs of [the] treatise on their side than three courts of appeals or four supreme court justices.” Breyer wasn’t understating the point; to date, Hovenkamp has been cited by our highest court in 38 different cases, far more often than Bork.

Hovenkamp is an intellectual historian by training, and his views on antitrust policy are situated in a misleading narrative. His research radically downplays the historical importance of legislative and social movements focused on the democratic need to control big business, and instead emphasizes the role economists and technocrats began to play in shaping the law during the Gilded Age. As part of this narrative, he peddles an incomplete account of the origin of the Sherman Antitrust Act of 1890, the most important piece of anti-monopoly legislation ever enacted by Congress. Hovenkamp argues that there is no evidence that the framers of the Sherman Act sought to curtail monopolies brought about as a result of “superior skill or industry”. According to Hovenkamp, US Congress – and by extension Americans in general – never had a problem with big corporations, or even monopolies; we just didn’t like it when those monopolies became predatory.

This elitist and technocratic framework glosses over our rich anti-monopoly tradition. Thomas Jefferson, James Madison and Frederick Douglass all opposed monopolies on political grounds, and state legislatures in the 19th century began breaking up companies almost as soon as they started issuing corporate charters. Senator Sherman himself explained that the purpose of the federal antitrust act was “to put an end to great aggregations of capital because of the helplessness of the individual before them.”

Judge Learned Hand, whose decisions in contract and corporate law are still read with reverence, laid out the basic federal antitrust framework which was endorsed by the supreme court in 1946 and 1968 and governed our economy for most of the 20th century. In mandating the breakup of the aluminum monopoly of Alcoa in 1945, Hand concluded that monopoly power, in and of itself, was illegal. He explained that the Sherman Act is a law prohibiting monopolies, full stop, no matter whether they are predatory. He pointed out that Congress updated the antitrust laws four times in the 20th century to hit back at courts who attempted to narrow them.

Antitrust theory is dominated by reactionary yet often wildly inconsistent thinkers. Hovenkamp, who for decades resisted any action to rein in large technology firms, argued a year ago that breaking up these giants would send the economy back to “the Stone Age”. This week, reversing his position, Hovenkamp conceded that breaking up Facebook is now warranted – revealing his entire school of thought as largely a reactionary force torn between bending to concentrated financial power and scandalous headlines of abusive market power.

It is encouraging that the government is seeking to break up Google and Facebook, and that policymakers are rejecting flawed legal theorizing. But the resistance to restoring our anti-monopoly tradition runs much deeper than Robert Bork and his rightwing legacy. As we’ve seen, it’s just as entrenched within the centrist academic and judicial citadels of well-meaning technocrats who carry a deeply ingrained fear of too much democratic influence over the economy.

Policymakers and judges are going to have to shake off the misleading narrative spun by the current antitrust establishment. Doing so is essential not only for supporting fair markets, but for preserving democracy itself.


Competition is essential, which requires open access to markets and transparency.

Tech Anti-Trust Legislation

A banking law from 1956 offers a realistic model for regulating dominant internet platforms

Excerpt:

The Pattern
The antitrust assault on Big Tech is taking shape.

    • Bipartisan recommendations for antitrust action against the tech giants could come as soon as September, Rep. David Cicilline, D-RI, told Bloomberg on Wednesday. Cicilline chairs the House antitrust subcommittee that has conducted a year-long antitrust investigation, including last month’s high-profile antitrust hearing. That hearing was the sixth in the ongoing probe, which is expected to culminate in a report to Congress.
    • In an interview with Bloomberg TV’s Emily Chang, Cicilline got specific for the first time about what that report might say. The subcommittee is developing a “menu of options,” he said, that include updating old antitrust statutes aimed at oil and railroad monopolies; reforming federal antitrust agencies and making sure they have the resources to prosecute companies; and revitalizing private-sector enforcement.
    • Most interestingly, he hinted at two potential pieces of legislation that would target the tech sector in particular. One would seek to enforce principles of portability and interoperability. The second, he said, would be more ambitious in scope: “a sort of Glass-Steagall of the internet, saying you can either be a platform or you can be a producer of goods and services. You cannot do both, because they’d be in conflict.”
    • The Glass-Steagall Act, passed in 1933, required the separation of commercial banking from investment banking. It was crafted to address the conflicts of interest that arose when banks invested consumers’ assets in securities. The legislation divided financial institutions into investment banks such as Goldman Sachs and commercial banks such as Bank of America. Its 1999 repeal was cited by some economists as a precipitating factor in the 2008 financial crisis. A “Glass-Steagall of the internet,” in Cicilline’s analogy, would address the conflicts of interest that arise when companies that own dominant tech platforms also compete with the third parties who use those platforms.
    • For instance, Apple presumably would no longer be allowed to both control iOS and offer services such as Apple Music that go head-to-head on iOS with rivals such as Spotify. Amazon might no longer be allowed to produce its own lines of clothing and household goods to rival those of third-party sellers on its site. (Its cloud division, Amazon Web Services, has similar issues.) Google, perhaps, would have to give up on services such as Google Shopping, which allegedly benefits from high placement in its own search results. It’s less clear to me which of Facebook’s existing products would run afoul of it, if any. Likely, Facebook’s social networking dominance would be targeted through some of the other mechanisms Cicilline mentioned; he specifically called its acquisition of Instagram “illegal.”
    • At the risk of getting wonky, an even better analogy than Glass-Steagall might be the Bank Holding Company Act of 1956, which banned banks from holding ownership stakes in non-banking industries. The concern was that bank holding companies could boost their own non-banking businesses over those of rivals with favorable loan terms, or nudge their loan clients to patronize their other businesses. That sounds a lot like how Apple, Amazon, and in some cases Google allegedly tilt their platforms to favor their own services.
    • If this sort of legislation came to pass, the result would be a form of “breaking up Big Tech,” as some of the giants would likely be required to sell off or shutter some of their business lines. It echoes at least one part of Warren’s plan, which called for “large tech platforms to be designated as ‘Platform Utilities’ and broken apart from any participant on that platform.” Yet it would likely leave intact the core of each business, and would not necessarily require the tortuous untangling of, say, Apple’s hardware products from iOS, or Amazon.com from Amazon Web Services, which seems to be what some opponents of breakups have in mind. No doubt the details would still be tricky and heavily litigated. But they’d be unlikely to cripple the tech giants in the ways that would leave them unable to compete globally with Chinese rivals, which is a fear that the U.S. tech companies have been busy stoking.
    • There are some persuasive arguments for going much farther than a Glass-Steagall or Bank Holding Company Act to rein in the internet’s behemoths. Longtime digital rights activist and blogger Cory Doctorow made the case for robust antitrust action in a new book published on OneZero this week, called How to Destroy Surveillance Capitalism. The book is especially worth reading for anyone familiar with Shoshana Zuboff’s influential 2019 book The Age of Surveillance Capitalism, which Doctorow builds on and critiques. Zephyr Teachout’s book Break ’Em Up and Tim Wu’s book The Curse of Bigness are two other recent works that view size itself as the crux of the antitrust problem.
    • But Cicilline’s comments to Bloomberg suggest that a full dismantling of Silicon Valley’s dominance is unlikely to be an outcome of the current investigation. That may disappoint critics such as Doctorow, Teachout, and Wu. At the same time, it should puncture the notion that breaking up Big Tech is something to be feared — at least, by anyone other than the tech giants themselves.