Reining in the Oligarchies

From Hillsdale College’s Imprimis:

Who Is in Control? The Need to Rein in Big Tech

Allum Bokhari

The following is adapted from a speech delivered at Hillsdale College on November 8, 2020, during a Center for Constructive Alternatives conference on Big Tech.

In January, when every major Silicon Valley tech company permanently banned the President of the United States from its platform, there was a backlash around the world. One after another, government and party leaders—many of them ideologically opposed to the policies of President Trump—raised their voices against the power and arrogance of the American tech giants. These included the President of Mexico, the Chancellor of Germany, the government of Poland, ministers in the French and Australian governments, the neoliberal center-right bloc in the European Parliament, the national populist bloc in the European Parliament, the leader of the Russian opposition (who recently survived an assassination attempt), and the Russian government (which may well have been behind that attempt).

Common threats create strange bedfellows. Socialists, conservatives, nationalists, neoliberals, autocrats, and anti-autocrats may not agree on much, but they all recognize that the tech giants have accumulated far too much power. None like the idea that a pack of American hipsters in Silicon Valley can, at any moment, cut off their digital lines of communication.

I published a book on this topic prior to the November election, and many who called me alarmist then are not so sure of that now. I built the book on interviews with Silicon Valley insiders and five years of reporting as a Breitbart News tech correspondent. Breitbart created a dedicated tech reporting team in 2015—a time when few recognized the danger that the rising tide of left-wing hostility to free speech would pose to the vision of the World Wide Web as a free and open platform for all viewpoints.

This inversion of that early libertarian ideal—the movement from the freedom of information to the control of information on the Web—has been the story of the past five years.

***

When the Web was created in the 1990s, the goal was that everyone who wanted a voice could have one. All a person had to do to access the global marketplace of ideas was to go online and set up a website. Once created, the website belonged to that person. Especially if the person owned his own server, no one could deplatform him. That was by design, because the Web, when it was invented, was competing with other types of online services that were not so free and open.

It is important to remember that the Web, as we know it today—a network of websites accessed through browsers—was not the first online service ever created. In the 1990s, Sir Timothy Berners-Lee invented the technology that underpins websites and web browsers, creating the Web as we know it today. But there were other online services, some of which predated Berners-Lee’s invention. Corporations like CompuServe and Prodigy ran their own online networks in the 1990s—networks that were separate from the Web and had access points that were different from web browsers. These privately-owned networks were open to the public, but CompuServe and Prodigy owned every bit of information on them and could kick people off their networks for any reason.

In these ways the Web was different. No one owned it, owned the information on it, or could kick anyone off. That was the idea, at least, before the Web was captured by a handful of corporations.

We all know their names: Google, Facebook, Twitter, YouTube, Amazon. Like Prodigy and CompuServe back in the ’90s, they own everything on their platforms, and they have the police power over what can be said and who can participate. But it matters a lot more today than it did in the ’90s. Back then, very few people used online services. Today everyone uses them—it is practically impossible not to use them. Businesses depend on them. News publishers depend on them. Politicians and political activists depend on them. And crucially, citizens depend on them for information.

Today, Big Tech doesn’t just mean control over online information. It means control over news. It means control over commerce. It means control over politics. And how are the corporate tech giants using their control? Judging by the three biggest moves they have made since I wrote my book—the censoring of the New York Post in October when it published its blockbuster stories on Biden family corruption, the censorship and eventual banning from the Web of President Trump, and the coordinated takedown of the upstart social media site Parler—it is obvious that Big Tech’s priority today is to support the political Left and the Washington establishment.

Big Tech has become the most powerful election-influencing machine in American history. It is not an exaggeration to say that if the technologies of Silicon Valley are allowed to develop to their fullest extent, without any oversight or checks and balances, then we will never have another free and fair election. But the power of Big Tech goes beyond the manipulation of political behavior. As one of my Facebook sources told me in an interview for my book: “We have thousands of people on the platform who have gone from far right to center in the past year, so we can build a model from those people and try to make everyone else on the right follow the same path.” Let that sink in. They don’t just want to control information or even voting behavior—they want to manipulate people’s worldview.

Is it too much to say that Big Tech has prioritized this kind of manipulation? Consider that Twitter is currently facing a lawsuit from a victim of child sexual abuse who says that the company repeatedly failed to take down a video depicting his assault, and that it eventually agreed to do so only after the intervention of an agent from the Department of Homeland Security. So Twitter will take it upon itself to ban the President of the United States, but is alleged to have taken down child pornography only after being prodded by federal law enforcement.

***

How does Big Tech go about manipulating our thoughts and behavior? It begins with the fact that these tech companies strive to know everything about us—our likes and dislikes, the issues we’re interested in, the websites we visit, the videos we watch, who we voted for, and our party affiliation. If you search for a Hanukkah recipe, they’ll know you’re likely Jewish. If you’re running down the Yankees, they’ll figure out if you’re a Red Sox fan. Even if your smartphone is turned off, they’ll track your location. They know who you work for, who your friends are, when you’re walking your dog, whether you go to church, when you’re standing in line to vote, and on and on.

As I already mentioned, Big Tech also monitors how our beliefs and behaviors change over time. They identify the types of content that can change our beliefs and behavior, and they put that knowledge to use. They’ve done this openly for a long time to manipulate consumer behavior—to get us to click on certain ads or buy certain products. Anyone who has used these platforms for an extended period of time has no doubt encountered the creepy phenomenon where you’re searching for information about a product or a service—say, a microwave—and then minutes later advertisements for microwaves start appearing on your screen. These same techniques can be used to manipulate political opinions.

I mentioned that Big Tech has recently demonstrated ideological bias. But it is equally true that these companies have huge economic interests at stake in politics. The party that holds power will determine whether they are going to get government contracts, whether they’re going to get tax breaks, and whether and how their industry will be regulated. Clearly, they have a commercial interest in political control—and currently no one is preventing them from exerting it.

To understand how effective Big Tech’s manipulation could become, consider the feedback loop.

As Big Tech constantly collects data about us, they run tests to see what information has an impact on us. Let’s say they put a negative news story about someone or something in front of us, and we don’t click on it or read it. They keep at it until they find content that has the desired effect. The feedback loop constantly improves, and it does so in a way that’s undetectable.
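The loop described above can be sketched as a toy experiment. Everything here is invented for illustration—the variant names, the click model, and the epsilon-greedy rule are assumptions, not any platform’s actual system—but it shows how repeated testing converges on whatever content “works”:

```python
import random

def simulated_click(variant, user_bias):
    """Pretend user: clicks more often when the variant matches their bias."""
    return random.random() < user_bias.get(variant, 0.1)

def feedback_loop(variants, user_bias, rounds=1000):
    """Epsilon-greedy loop: mostly show the best-performing variant so far,
    occasionally explore the others, and keep whatever gets the most clicks."""
    clicks = {v: 0 for v in variants}
    shows = {v: 0 for v in variants}
    for _ in range(rounds):
        if random.random() < 0.1:
            v = random.choice(variants)  # explore
        else:
            v = max(variants, key=lambda x: clicks[x] / shows[x] if shows[x] else 0)
        shows[v] += 1
        if simulated_click(v, user_bias):
            clicks[v] += 1
    return max(variants, key=lambda v: clicks[v] / shows[v] if shows[v] else 0)

random.seed(0)
best = feedback_loop(["neutral", "negative", "outrage"],
                     {"neutral": 0.05, "negative": 0.15, "outrage": 0.40})
print(best)
```

The point of the sketch is that nothing in the loop requires knowing *why* a piece of content works; the system simply keeps whatever moves the metric.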

What determines what appears at the top of a person’s Facebook feed, Twitter feed, or Google search results? Does it appear there because it’s popular or because it’s gone viral? Is it there because it’s what you’re interested in? Or is there another reason Big Tech wants it to be there? Is it there because Big Tech has gathered data that suggests it’s likely to nudge your thinking or your behavior in a certain direction? How can we know?

What we do know is that Big Tech openly manipulates the content people see. We know, for example, that Google reduced the visibility of Breitbart News links in search results by 99 percent in 2020 compared to the same period in 2016. We know that after Google introduced an update last summer, clicks on Breitbart News stories from Google searches for “Joe Biden” went to zero and stayed at zero through the election. This didn’t happen gradually, but in one fell swoop—as if Google flipped a switch. And this was discoverable through the use of Google’s own traffic analysis tools, so it isn’t as if Google cared that we knew about it.

Speaking of flipping switches, I have noted that President Trump was collectively banned by Twitter, Facebook, Twitch, YouTube, TikTok, Snapchat, and every other social media platform you can think of. But even before that, there was manipulation going on. Twitter, for instance, reduced engagement on the President’s tweets by over eighty percent. Facebook deleted posts by the President for spreading so-called disinformation.

But even more troubling, I think, are the invisible things these companies do. Consider “quality ratings.” Every Big Tech platform has some version of this, though some of them use different names. The quality rating is what determines what appears at the top of your search results, or your Twitter or Facebook feed, etc. It’s a numerical value that Big Tech’s algorithms assign as a measure of “quality.” In the past, this score was determined by criteria that were somewhat objective: if a website or post contained viruses, malware, spam, or copyrighted material, that would negatively impact its quality score. If a video or post was gaining in popularity, the quality score would increase. Fair enough.

Over the past several years, however—and one can trace the beginning of the change to Donald Trump’s victory in 2016—Big Tech has introduced all sorts of new criteria into the mix that determines quality scores. Today, the algorithms on Google and Facebook have been trained to detect “hate speech,” “misinformation,” and “authoritative” (as opposed to “non-authoritative”) sources. Algorithms analyze a user’s network, so that whatever users follow on social media—e.g., “non-authoritative” news outlets—affects the user’s quality score. Algorithms also detect the use of language frowned on by Big Tech—e.g., “illegal immigrant” (bad) in place of “undocumented immigrant” (good)—and adjust quality scores accordingly. And so on.
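To make the shift concrete, here is a hypothetical quality-score function. The weights, signal names, phrase list, and source list are all invented for the sketch—no platform publishes its formula—but it illustrates how editorial criteria can be folded into a single score alongside the older objective ones:

```python
# All signals and weights below are assumptions for illustration only.
DISFAVORED_PHRASES = {"illegal immigrant"}          # hypothetical example
NON_AUTHORITATIVE_SOURCES = {"example-blog.net"}    # hypothetical example

def quality_score(post):
    score = 1.0
    # Objective criteria of the older kind:
    if post.get("contains_malware") or post.get("is_spam"):
        score -= 0.9
    score += 0.1 * min(post.get("engagement", 0) / 1000, 1.0)
    # Newer, editorial criteria:
    if post.get("source") in NON_AUTHORITATIVE_SOURCES:
        score -= 0.5
    text = post.get("text", "").lower()
    if any(phrase in text for phrase in DISFAVORED_PHRASES):
        score -= 0.3
    return max(score, 0.0)

clean = {"text": "A report on border policy", "source": "example-news.com",
         "engagement": 500}
flagged = {"text": "An illegal immigrant was detained",
           "source": "example-blog.net", "engagement": 500}
print(quality_score(clean), quality_score(flagged))
```

Two posts with identical engagement end up with very different scores, purely because of source and word choice—which is the change the author is describing.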

This is not to say that you are informed of this or that you can look up your quality score. All of this happens invisibly. It is Silicon Valley’s version of the social credit system overseen by the Chinese Communist Party. As in China, if you defy the values of the ruling elite or challenge narratives that the elite labels “authoritative,” your score will be reduced and your voice suppressed. And it will happen silently, without your knowledge.

This technology is even scarier when combined with Big Tech’s ability to detect and monitor entire networks of people. A field of computer science called “network analysis” is dedicated to identifying groups of people with shared interests, who read similar websites, who talk about similar things, who have similar habits, who follow similar people on social media, and who share similar political viewpoints. Big Tech companies are able to detect when particular information is flowing through a particular network—if there’s a news story or a post or a video, for instance, that’s going viral among conservatives or among voters as a whole. This gives them the ability to shut down a story they don’t like before it gets out of hand. And these systems are growing more sophisticated all the time.
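One basic network-analysis primitive can be sketched in a few lines: given a follower graph, trace every account a story could reach from its first poster. The graph and account names are made up, and real systems are vastly more sophisticated, but the reach computation is the same idea:

```python
from collections import deque

FOLLOWERS = {  # key: account; value: accounts that follow it and see its posts
    "outlet": ["alice", "bob"],
    "alice":  ["carol", "dave"],
    "bob":    ["dave"],
    "carol":  [],
    "dave":   ["erin"],
    "erin":   [],
}

def potential_reach(seed):
    """Breadth-first spread: everyone downstream of `seed` who could see the story."""
    seen, queue = {seed}, deque([seed])
    while queue:
        account = queue.popleft()
        for follower in FOLLOWERS.get(account, []):
            if follower not in seen:
                seen.add(follower)
                queue.append(follower)
    seen.discard(seed)
    return seen

print(sorted(potential_reach("outlet")))
```

A platform that can compute reach in real time can also, in principle, intervene anywhere along that path before a story saturates the network.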

***

If Big Tech’s capabilities are allowed to develop unchecked and unregulated, these companies will eventually have the power not only to suppress existing political movements, but to anticipate and prevent the emergence of new ones. This would mean the end of democracy as we know it, because it would place us forever under the thumb of an unaccountable oligarchy.

The good news is, there is a way to rein in the tyrannical tech giants. And the way is simple: take away their power to filter information and data on our behalf.

All of Big Tech’s power comes from their content filters—the filters on “hate speech,” the filters on “misinformation,” the filters that distinguish “authoritative” from “non-authoritative” sources, etc. Right now these filters are switched on by default. We as individuals can’t turn them off. But it doesn’t have to be that way.

The most important demand we can make of lawmakers and regulators is that Big Tech be forbidden from activating these filters without our knowledge and consent. They should be prohibited from doing this—and even from nudging us to turn on a filter—under penalty of losing their Section 230 immunity for third-party content. This policy should be strictly enforced, and it should extend even to seemingly non-political filters like relevance and popularity. Anything less opens the door to manipulation.

Our ultimate goal should be a marketplace in which third party companies would be free to design filters that could be plugged into services like Twitter, Facebook, Google, and YouTube. In other words, we would have two separate categories of companies: those that host content and those that create filters to sort through that content. In a marketplace like that, users would have the maximum level of choice in determining their online experiences. At the same time, Big Tech would lose its power to manipulate our thoughts and behavior and to ban legal content—which is just a more extreme form of filtering—from the Web.
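The host-versus-filter split can be sketched as a simple interface. The class and function names below are invented to illustrate the architecture, not any real API: the host merely stores posts and delegates all ranking to whichever filter the user plugs in:

```python
from typing import Callable, Dict, List

Post = Dict[str, object]
Filter = Callable[[List[Post]], List[Post]]

# Two interchangeable third-party filters:
def chronological(posts: List[Post]) -> List[Post]:
    return sorted(posts, key=lambda p: p["timestamp"], reverse=True)

def by_popularity(posts: List[Post]) -> List[Post]:
    return sorted(posts, key=lambda p: p["likes"], reverse=True)

class HostPlatform:
    """Stores content; delegates all ranking to a user-chosen filter."""
    def __init__(self, posts: List[Post]):
        self.posts = posts
    def feed(self, chosen_filter: Filter) -> List[Post]:
        return chosen_filter(list(self.posts))

posts = [
    {"id": 1, "timestamp": 100, "likes": 5},
    {"id": 2, "timestamp": 200, "likes": 1},
    {"id": 3, "timestamp": 150, "likes": 9},
]
host = HostPlatform(posts)
print([p["id"] for p in host.feed(chronological)])  # newest first
print([p["id"] for p in host.feed(by_popularity)])  # most-liked first
```

The design point is that the host never ranks anything itself; swapping one filter for another changes the user’s feed without the host’s involvement.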

This should be the standard we demand, and it should be industry-wide. The alternative is a kind of digital serfdom. We don’t allow old-fashioned serfdom anymore—individuals and businesses have due process and can’t be evicted because their landlord doesn’t like their politics. Why shouldn’t we also have these rights if our business or livelihood depends on a Facebook page or a Twitter or YouTube account?

This is an issue that goes beyond partisanship. What the tech giants are doing is so transparently unjust that all Americans should start caring about it—because under the current arrangement, we are all at their mercy. The World Wide Web was meant to liberate us. It is now doing the opposite. Big Tech is increasingly in control. The most pressing question today is: how are we going to take control back?

 

Madmen and the Godless Algorithm

This article is from The New Yorker.

It is a good overview of the history of the advertising model that has dominated commerce for decades, a model that has now gone on digital steroids. The disruption of ad technology has interesting implications.

How the Math Men Overthrew the Mad Men

By Ken Auletta

Once, Mad Men ruled advertising. They’ve now been eclipsed by Math Men—the engineers and data scientists whose province is machines, algorithms, pureed data, and artificial intelligence. Yet Math Men are beleaguered, as Mark Zuckerberg demonstrated when he humbled himself before Congress, in April. Math Men’s adoration of data—coupled with their truculence and an arrogant conviction that their “science” is nearly flawless—has aroused government anger, much as Microsoft did two decades ago.

The power of Math Men is awesome. Google and Facebook each has a market value exceeding the combined value of the six largest advertising and marketing holding companies. Together, they claim six out of every ten dollars spent on digital advertising, and nine out of ten new digital ad dollars. They have become more dominant in what is estimated to be an up to two-trillion-dollar annual global advertising and marketing business. Facebook alone generates more ad dollars than all of America’s newspapers, and Google has twice the ad revenues of Facebook.

In the advertising world, Big Data is the Holy Grail, because it enables marketers to target messages to individuals rather than general groups, creating what’s called addressable advertising. And only the digital giants possess state-of-the-art Big Data. “The game is no longer about sending you a mail order catalogue or even about targeting online advertising,” Shoshana Zuboff, a professor of business administration at the Harvard Business School, wrote on faz.net, in 2016. “The game is selling access to the real-time flow of your daily life—your reality—in order to directly influence and modify your behavior for profit.” Success at this “game” flows to those with the “ability to predict the future—specifically the future of behavior,” Zuboff writes. She dubs this “surveillance capitalism.” [I question whether this will really work as anticipated once everybody is hip to the game.]

However, to thrash just Facebook and Google is to miss the larger truth: everyone in advertising strives to eliminate risk by perfecting targeting data. [This is the essence of what we’re doing here – reducing the risk of uncertainty.] Protecting privacy is not foremost among the concerns of marketers; protecting and expanding their business is. The business model adopted by ad agencies and their clients parallels Facebook and Google’s. Each aims to massage data to better identify potential customers. Each aims to influence consumer behavior. To appreciate how alike their aims are, sit in an agency or client marketing meeting and you will hear wails about Facebook and Google’s “walled garden,” their unwillingness to share data on their users. When Facebook or Google counter that they must protect “the privacy” of their users, advertisers cry foul: You’re using the data to target ads we paid for—why won’t you share it, so that we can use it in other ad campaigns? [But who really owns your data? Even if you choose to give it away?]

This preoccupation with Big Data is also revealed by the trend in the advertising-agency business to have the media agency, not the creative Mad Men team, occupy the prime seat in pitches to clients, because it’s the media agency that harvests the data to help advertising clients better aim at potential consumers. Agencies compete to proclaim their own Big Data hoard. W.P.P.’s GroupM, the largest media agency, has quietly assembled what it calls its “secret sauce,” a collection of forty thousand personally identifiable attributes it plans to retain on two hundred million adult Americans. Unlike Facebook or Google, GroupM can’t track most of what we do online. To parade their sensitivity to privacy, agencies reassuringly boast that they don’t know the names of people in their data bank. But they do have your I.P. address, which yields abundant information, including where you live. For marketers, the advantage of being able to track online behavior, the former senior GroupM executive Brian Lesser said—a bit hyperbolically, one hopes—is that “we know what you want even before you know you want it.” [That sounds like adman hubris rather than reality.]

Worried that Brian Lesser’s dream will become a nightmare, ProPublica has ferociously chewed on the Big Data privacy menace like a dog with a bone: in its series “Breaking the Black Box,” it wrote, “Facebook has a particularly comprehensive set of dossiers on its more than two billion members. Every time a Facebook member likes a post, tags a photo, updates their favorite movies in their profile, posts a comment about a politician, or changes their relationship status, Facebook logs it . . . When they use Instagram or WhatsApp on their phone, which are both owned by Facebook, they contribute more data to Facebook’s dossier.” Facebook offers advertisers more than thirteen hundred categories for ad targeting, according to ProPublica.

Google, for its part, has merged all the data it collects from its search, YouTube, and other services, and has introduced an About Me page, which includes your date of birth, phone number, where you work, mailing address, education, where you’ve travelled, your nickname, photo, and e-mail address. Amazon knows even more about you. Since it is the world’s largest store and sees what you’ve actually purchased, its data are unrivalled. Amazon reaches beyond what interests you (revealed by a Google search) or what your friends are saying (on Facebook) to what you actually purchase. With Amazon’s Alexa, it has an agent in your home that not only knows what you bought but when you wake up, what you watch, read, listen to, ask for, and eat. And Amazon is aggressively building up its meager ad sales, which gives it an incentive to exploit its data.

Data excite advertisers. Prowling his London office in jeans, Keith Weed, who oversees marketing and communications for Unilever, one of the world’s largest advertisers, described how mobile phones have elevated data as a marketing tool. “When I started in marketing, we were using secondhand data which was three months old,” he said. “Now with the good old mobile, I have individualized data on people. You don’t need to know their names . . . You know their telephone number. You know where they live, because it’s the same location as their PC.” Weed knows what times of the day you usually browse, watch videos, answer e-mail, travel to the office—and what travel routes you take. “From your mobile, I know whether you stay in four-star or two-star hotels, whether you go to train stations or airports. I use these insights along with what you’re browsing on your PC. I know whether you’re interested in horses or holidays in the Caribbean.” By using programmatic computers to buy ads targeting these individuals, he says, Unilever can “create a hundred thousand permutations of the same ad,” as they recently did with a thirty-second TV ad for Axe toiletries aimed at young men in Brazil. The more Keith Weed knows about a consumer, the better he can aim to target a sale.
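The arithmetic behind “a hundred thousand permutations of the same ad” is just a cross product of interchangeable components. With invented component lists (the examples below are assumptions, not Unilever’s), even a handful of parts multiplies out quickly:

```python
from itertools import product

# Hypothetical ad components for illustration:
headlines = ["Stay fresh", "Own the day", "Level up"]
images = ["beach", "gym", "city"]
calls_to_action = ["Shop now", "Learn more"]

variants = [
    {"headline": h, "image": i, "cta": c}
    for h, i, c in product(headlines, images, calls_to_action)
]
print(len(variants))  # 3 * 3 * 2 = 18 variants from 8 components
```

Scale the component lists up modestly—say, fifty headlines, fifty images, and forty calls to action—and the same cross product yields a hundred thousand targeted variants.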

Engineers and data scientists vacuum data. They see data as virtuous, yielding clues to the mysteries of human behavior, suggesting efficiencies (including eliminating costly middlemen, like agency Mad Men), offering answers that they believe will better serve consumers, because the marketing message is individualized. The more cool things offered, the more clicks, the more page views, the more user engagement. Data yield facts and advance a quest to be more scientific—free of guesses. As Eric Schmidt, then the executive chairman of Google’s parent company, Alphabet, said at the company’s 2017 shareholder meeting, “We start from the principles of science at Google and Alphabet.”

They believe there is nobility in their quest. By offering individualized marketing messages, they are trading something of value in exchange for a consumer’s attention. They also start from the principle, as the TV networks did, that advertising allows their product to be “free.” But, of course, as their audience swells, so does their data. Sandy Parakilas, who was Facebook’s operations manager on its platform team from 2011 to 2012, put it this way in a scathing Op-Ed for the Times, last November: “The more data it has on offer, the more value it creates for advertisers. That means it has no incentive to police the collection or use of that data—except when negative press or regulators are involved.” For the engineers, the privacy issue—like “fake news” and even fraud—was relegated to the nosebleed bleachers. [This fact should be obvious to all of us.]

With a chorus of marketers and citizens and governments now roaring their concern, the limitations of Math Men loom large. Suddenly, governments in the U.S. are almost as alive to privacy dangers as those in Western Europe, confronting Facebook by asking how the political-data company Cambridge Analytica, employed by Donald Trump’s Presidential campaign, was able to snatch personal data from eighty-seven million individual Facebook profiles. Was Facebook blind—or deliberately mute? Why, they are really asking, should we believe in the infallibility of your machines and your willingness to protect our privacy?

Ad agencies and advertisers have long been uneasy not just with the “walled gardens” of Facebook and Google but with their unwillingness to allow an independent company to monitor their results, as Nielsen does for TV and comScore does online. This mistrust escalated in 2016, when it emerged that Facebook and Google charged advertisers for ads that tricked other machines to believe an ad message was seen by humans when it was not. Advertiser confidence in Facebook was further jolted later in 2016, when it was revealed that the Math Men at Facebook overestimated the average time viewers spent watching video by up to eighty per cent. And in 2017, Math Men took another beating when news broke that Google’s YouTube and Facebook’s machines were inserting friendly ads on unfriendly platforms, including racist sites and porn sites. These were ads targeted by keywords, like “Confederacy” or “race”; placing an ad on a history site might locate it on a Nazi-history site.

The credibility of these digital giants was further subverted when Russian trolls proved how easy it was to disseminate “fake news” on social networks. When told that Facebook’s mechanized defenses had failed to screen out disinformation planted on the social network to sabotage Hillary Clinton’s Presidential campaign, Mark Zuckerberg publicly dismissed the assertion as “pretty crazy,” a position he later conceded was wrong.

By the spring of 2018, Facebook had lost control of its narrative. Their original declared mission—to “connect people” and “build a global community”—had been replaced by an implicit new narrative: we connect advertisers to people. [Indeed, connecting people on a global basis for human interaction really doesn’t make a lot of sense. A global gossip network? Unless, of course, you’re trying to monetize it.] It took Facebook and Google about five years before they figured out how to generate revenue, and today roughly ninety-five percent of Facebook’s dollars and almost ninety percent of Google’s come from advertising. They enjoy abundant riches because they tantalize advertisers with the promise that they can corral potential customers. This is how Facebook lured developers and app makers by offering them a permissive Graph A.P.I., granting them access to the daily habits and the interchange with friends of its users. This Graph A.P.I. is how Cambridge Analytica got its paws on the data of eighty-seven million Americans.

The humiliating furor this news provoked has not subverted the faith among Math Men that their “science” will prevail. They believe advertising will be further transformed by new scientific advances like artificial intelligence that will allow machines to customize ads, marginalizing human creativity. With algorithms creating profiles of individuals, Airbnb’s then chief marketing officer, Jonathan Mildenhall, told me, “brands can engineer without the need for human creativity.” Machines will craft ads, just as machines will drive cars. But the ad community is increasingly mistrustful of the machines, and of Facebook and Google. [As they should be – the value has been over-hyped.] During a presentation at Advertising Week in New York this past September, Keith Weed offered a report card to Facebook and Google. He gave them a mere “C” for policing ad fraud, and a harsher “F” for cross-platform transparency, insisting, “We’ve got to see over the walled gardens.”

That mistrust has gone viral. A powerful case for more government regulation of the digital giants was made by The Economist, a classically conservative publication that also endorsed the government’s antitrust prosecution of Microsoft, in 1999. The magazine editorialized, in May, 2017, that governments must better police the five digital giants—Facebook, Google, Amazon, Apple, and Microsoft—because data were “the oil of the digital era”: “Old ways of thinking about competition, devised in the era of oil, look outdated in what has come to be called the ‘data economy.’ ” Inevitably, an abundance of data alters the nature of competition, allowing companies to benefit from network effects, with users multiplying and companies amassing wealth to swallow potential competitors.

The politics of Silicon Valley is left of center, but its disdain for government regulation has been right of center. This is changing. A Who’s Who of Silicon notables—Tim Berners-Lee, Tim Cook, Ev Williams, Sean Parker, and Tony Fadell, among others—have harshly criticized the social harm imposed by the digital giants. Marc Benioff, the C.E.O. of Salesforce.com—echoing similar sentiments expressed by Berners-Lee—has said, “The government is going to have to be involved. You do it exactly the same way you regulated the cigarette industry.”

Cries for regulating the digital giants are almost as loud today as they were to break up Microsoft in the late nineties. Congress insisted that Facebook’s Zuckerberg, not his minions, testify. The Federal Trade Commission is investigating Facebook’s manipulation of user data. Thirty-seven state attorneys general have joined a demand to learn how Facebook safeguards privacy. The European Union has imposed huge fines on Google and wants to inspect Google’s crown jewels—its search algorithms—claiming that Google’s search results are skewed to favor their own sites. The E.U.’s twenty-eight countries this May imposed a General Data Protection Regulation to protect the privacy of users, requiring that citizens must choose to opt in before companies can hoard their data.

Here’s where advertisers and the digital giants lock arms: they speak with one voice in opposing opt-in legislation, which would deny access to data without the permission of users. If consumers wish to deny advertisers access to their cookies—their data—they agree: the consumer must voluntarily opt out, meaning they must endure a cumbersome and confusing series of online steps. Amid the furor about Facebook and Google, remember these twinned and rarely acknowledged truisms: more data probably equals less privacy, while more privacy equals less advertising revenue. Thus, those who rely on advertising have business reasons to remain tone-deaf to privacy concerns.

Those reliant on advertising know: the disruption that earlier slammed the music, newspaper, magazine, taxi, and retail industries now upends advertising. Agencies are being challenged by a host of competitive frenemies: by consulting and public-relations companies that have jumped into their business; by platform customers like Google and Facebook but also the Times, NBC, and Buzzfeed, that now double as ad agencies and talk directly to their clients; by clients that increasingly perform advertising functions in-house.

But the foremost frenemy is the public, which poses an existential threat not just to agencies but to Facebook and the ad revenues on which most media rely. Citizens protest annoying, interruptive advertising, particularly on their mobile phones—a device as personal as a purse or wallet. An estimated twenty per cent of Americans, and one-third of Western Europeans, employ ad-blocker software. More than half of those who record programs on their DVRs choose to skip the ads. Netflix and Amazon, among others, have accustomed viewers to watch what they want when they want, without commercial interruption.

Understandably, those dependent on ad dollars quake. The advertising and marketing world scrambles for new ways to reach consumers. Big Data, they believe, promises ways they might better communicate with annoyed consumers—maybe unlock ways that ads can be embraced as a useful individual service rather than as an interruption. If Big Data’s use is circumscribed to protect privacy, the advertising business will suffer. In this core conviction, at least, Mad Men and Math Men are alike.

This piece is partially adapted from Auletta’s forthcoming book, “Frenemies: The Epic Disruption of the Ad Business (and Everything Else).”

 

I would guess that the ad business will be disrupted further as we find new ways to connect consumers with what they want. This will reduce the power of the Math Men at centralized network servers.

I also suspect search will become a regulated public utility. A free society cannot tolerate one or two private corporations controlling all the information that flows through its networks.

 

Don’t Be Evil?

The alarm bells keep ringing on the tech quasi-monopolies that rule the Internet. There are two main issues to address: first, the ownership and control of personal data – this data rightly belongs to consumers, not to network servers – and second, the positive network effects that drive these companies to dominance.

How we analyze these tech titans differs along these two issues. Amazon, Apple, and Microsoft sell products, and product markets are not easily protected from competition. These firms are middlemen between producers/suppliers and consumers. I expect we will discover new competitive models to deliver goods and services, which will eat into these companies’ dominance. The promise of blockchain technology is precisely to eliminate the middleman.

Google and Facebook are different animals. Search is starting to resemble a public good, like public libraries. With the positive externalities of network effects, it also resembles a natural monopoly – the more people use a search engine, the better the information obtained becomes, so the search engine grows ever more valuable. We probably don’t want to destroy this value. To me, this suggests that Google’s search engine eventually will become a publicly regulated utility – because the politics will demand it. We already see this outside the U.S.
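The natural-monopoly claim can be made concrete with a toy model: if result quality grows with the pool of query and click data, value per user rises with scale, and the incumbent’s lead compounds. The curve below is purely illustrative; `half_sat` is an assumed parameter, not an empirical figure.

```python
# Toy model of search network effects: quality saturates toward 1.0
# as the user base (and hence the click-data pool) grows.

def relative_quality(users, half_sat=1e8):
    """Saturating quality curve; half_sat is the user count at quality 0.5."""
    return users / (users + half_sat)

incumbent, challenger = 1e9, 1e7
print(round(relative_quality(incumbent), 2))   # 0.91
print(round(relative_quality(challenger), 2))  # 0.09
```

On this sketch, a challenger with one percent of the users delivers a tenth of the quality, which is why users don’t switch and why regulation rather than competition becomes the likely check.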

Facebook, the ultimate social network, is going through some ups and downs over how it collects and uses personal information. My impression is that a single social network for all socializing needs is probably not the ideal solution. If that is correct, competition will eat into Facebook, which will start to break up into different targeted functions, reducing its value as a one-stop online social network.

We shall see.

How Silicon Valley went from ‘don’t be evil’ to doing evil

March 4, 2018


Meet the new boss. Same as the old boss.

– The Who, “Won’t Get Fooled Again,” 1971

Once seen as the savior of America’s economy, Silicon Valley is turning into something more like an emerging axis of evil. “Brain-hacking” tech companies such as Apple, Google, Facebook, Microsoft, and Amazon, as one prominent tech investor puts it, have become so intrusive as to alarm critics on both right and left.

Firms like Google, which once advertised themselves as committed to not being “evil,” are now increasingly seen as epitomizing Hades’ legions. The tech giants now constitute the world’s five largest companies by market capitalization. Rather than idealistic newcomers, they increasingly reflect the worst of American capitalism — squashing competitors, using indentured servants, attempting to fix wages, depressing incomes, and creating ever more social anomie and alienation.

At the same time these firms are fostering what British academic David Lyon has called a “surveillance society” both here and abroad. Companies like Facebook and Google thrive by mining personal data, and their only way to grow, as Wired recently suggested, was, creepily, to “know you better.”

The techie vision of the future is one in which the middle class all but disappears, with those not sufficiently merged with machine intelligence relegated to rent-paying serfs living on “income maintenance.” Theirs is a world where long-standing local affinities are supplanted by Facebook’s concept of digitally created “meaningful communities.”

The progressive rebellion

Back during the Obama years, the tech oligarchy was widely admired throughout progressive circles. Companies like Google gained massive access to the administration’s inner circles, with many top aides eventually entering a “revolving door” for jobs with firms like Google, Facebook, Uber, Lyft, and Airbnb.

Although the vast majority of political contributions from these firms, not surprisingly, go to the Democrats, many progressives — at least those not on their payroll — are expressing alarm about the oligarchs’ move to gain control of whole industries, such as education, finance, groceries, space, print media, and entertainment. Left-leaning luminaries like Franklin Foer, former editor of the New Republic, rail against technology firms as a threat to basic liberties and a coarsening influence on culture.

Progressives are increasingly calling for the ever-growing tech monoliths to be “broken up,” and for new regulation to limit their size and scope. Many have embraced European proposals to restrain tech monopolies that now resemble “predatory capitalism” at its worst.

The right also rises

Traditionally, conservatives celebrated entrepreneurial success and opposed governmental intervention in the economy. Yet increasingly even libertarians, like Instapundit’s Glenn Reynolds, have suggested that some form of antitrust action may be necessary to curb oligarchic power. National Review recently even suggested that these firms be treated as utilities, that is, regulated by government.

Conservatives are also concerned about pervasive political bias in the industry. The Bay Area, the heartland of the industry, has evolved, as early Facebook investor Peter Thiel notes, into a “one party state.” Ideological homogeneity discourages debate and dissent, both inside these companies and in the broader industry.

More importantly, conservatives seek to curb these companies’ ability — increasingly evident as traditional media declines — to control content on the Internet. As the techies expand their domain, America’s media, entertainment, and cultural industries would seem destined to become ever less heterogeneous in politics and cultural worldview.

A clear and present danger

Whether one sits on the progressive left or the political right, this growing hegemony presents a clear and present danger. It is increasingly clear that the oligarchs have forgotten that Americans are more than a collection of databases to be exploited. People, whatever their ideology, generally want to maintain a modicum of privacy and to choose their way of life.

The perfect world of the oligarchs can be seen in the Bay Area, where, despite the massive explosion in employment, even tech workers do worse than their counterparts elsewhere because of high costs. Meanwhile San Francisco, among the most unequal places in the country, has evolved into a walking advertisement for a post-modern dystopia: an ultra-expensive city filled with homeless people and streets littered with excrement and needles. It is also increasingly exporting people elsewhere, including many earning high salaries.

Of course, technology is critical to a brighter future, but it need not be the province of a handful of companies or concentrated in one or two regions. The great progress of the 1980s and 1990s took place in a highly competitive and dispersed environment, not one dominated by firms that control 80 or 90 percent of key markets. Not surprisingly, the rise of the oligarchs coincides with a general decline in business startups, including in tech.

We have traveled far from the heroic era of spunky start-ups nurtured in suburban garages. But a future of ever greater robotic dependence — a kind of high-tech feudalism — is not inevitable. Setting aside their many differences, conservatives and progressives need to agree on strategies to limit the oligarchs’ stranglehold on our future.

Joel Kotkin is the R.C. Hobbs Presidential Fellow in Urban Futures at Chapman University in Orange and executive director of the Houston-based Center for Opportunity Urbanism (www.opportunityurbanism.org).

Tech Dystopia?

Below are excerpts from a fascinating series of articles by The Guardian (with links). The articles address many of the ways that Internet 2.0 network media models such as Google, Facebook, YouTube, and Instagram are transforming, and in many cases undermining, the foundations of a democratic humanistic society. These issues motivate us at tuka to design solutions to the great question of life’s meaning.

Personally, I don’t believe this dystopia will come to pass because humans are quite resilient as a species and eventually our humanist qualities will dominate our biological urges and economic imperatives. We have free will and ultimately, we choose correctly.

Perhaps that is an overly optimistic opinion, but Internet (or Web) 3.0 technology is rewriting the script with applications that reassert human control over the data universe. We will build more humanistic social communities that employ technology, with the emphasis always on the human. We see this now with the growing refusal to surrender to Web 2.0 by tech insiders.

See excerpts and comments below.

“If politics is an expression of our human will, on individual and collective levels, then the attention economy is directly undermining the assumptions that democracy rests on.” If Apple, Facebook, Google, Twitter, Instagram, and Snapchat are gradually chipping away at our ability to control our own minds, could there come a point, I ask, at which democracy no longer functions?

‘Fiction is outperforming reality’: how YouTube’s algorithm distorts truth

Paul Lewis February 2, 2018

There are 1.5 billion YouTube users in the world, which is more than the number of households that own televisions. What they watch is shaped by this algorithm, which skims and ranks billions of videos to identify 20 “up next” clips that are both relevant to a previous video and most likely, statistically speaking, to keep a person hooked on their screen.

Company insiders tell me the algorithm is the single most important engine of YouTube’s growth. In one of the few public explanations of how the formula works – an academic paper that sketches the algorithm’s deep neural networks, crunching a vast pool of data about videos and the people who watch them – YouTube engineers describe it as one of the “largest scale and most sophisticated industrial recommendation systems in existence”.
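The paper the engineers cite describes a two-stage design: a candidate-generation network narrows billions of videos to a few hundred, and a ranking network orders those by predicted watch time. A heavily simplified sketch of that shape (the embeddings, scorer, and names here are illustrative, not YouTube’s):

```python
# Two-stage recommender sketch: candidate generation, then ranking.
# In production each stage is a deep neural network; here both stages
# reuse a simple dot-product similarity for illustration.

def recommend(watch_history, catalog, embed, top_k=20):
    """Return up to top_k 'up next' candidates for a user."""
    # User taste vector: average of the embeddings of watched videos.
    user_vec = [sum(col) / len(watch_history)
                for col in zip(*(embed[v] for v in watch_history))]

    def score(video):
        return sum(u * e for u, e in zip(user_vec, embed[video]))

    # Stage 1: candidate generation (shortlist by similarity).
    candidates = sorted(catalog, key=score, reverse=True)[:top_k * 5]
    # Stage 2: ranking (production models predict watch time instead).
    return sorted(candidates, key=score, reverse=True)[:top_k]

embed = {"a": [1.0, 0.0], "b": [0.9, 0.1], "c": [0.0, 1.0]}
print(recommend(["a"], ["a", "b", "c"], embed, top_k=2))  # ['a', 'b']
```

Because the objective at both stages is engagement, whatever keeps people watching rises to the top, which is precisely the mechanism the article worries about.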

We see here the power of AI data algorithms to filter content. Google’s response has been to “expand the army of human moderators.” That’s a necessary method of reasserting human judgment over the network.

The primary focus of the article then turns to politics and the electoral influences of disinformation:

Much has been written about Facebook and Twitter’s impact on politics, but in recent months academics have speculated that YouTube’s algorithms may have been instrumental in fuelling disinformation during the 2016 presidential election. “YouTube is the most overlooked story of 2016,” Zeynep Tufekci, a widely respected sociologist and technology critic, tweeted back in October. “Its search and recommender algorithms are misinformation engines.”

Apparently, the sensationalism surrounding the Trump campaign caused YouTube’s AI algorithms to push more video feeds favorable to Trump and damaging to Hillary Clinton. One doesn’t need to be a partisan to recognize this was probably true for this particular media channel and its business model, which values views more than anything else.

However, this reality can also be distorted to present a particular conspiracy narrative of its own:

Trump won the electoral college as a result of 80,000 votes spread across three swing states. There were more than 150 million YouTube users in the US. The videos contained in Chaslot’s database of YouTube-recommended election videos were watched, in total, more than 3bn times before the vote in November 2016.

This, unfortunately, is cherry-picking statistical inferences concerning the margin of voting support. What was significant in determining the 2016 election outcome was not 80,000 votes across three states, but a run of popular vote wins in 2,623 of 3,112 counties across the U.S. This 85% share could not be an accident, nor could it be due to the single influence of disinformation, Russian or otherwise. The true difference in the election was not revealed by the popular vote total or the Electoral College vote, but by the geographical distribution of support. One can argue about which is more critical to democratic governance, but this post is about electronic media content, not political analysis.

The next article further addresses how technology is influencing our individual behaviors.

‘Our minds can be hijacked’: the tech insiders who fear a smartphone dystopia

Paul Lewis October 6, 2017

Justin Rosenstein had tweaked his laptop’s operating system to block Reddit, banned himself from Snapchat, which he compares to heroin, and imposed limits on his use of Facebook. But even that wasn’t enough. In August, the 34-year-old tech executive took a more radical step to restrict his use of social media and other addictive technologies.

A decade after he stayed up all night coding a prototype of what was then called an “awesome” button, Rosenstein belongs to a small but growing band of Silicon Valley heretics who complain about the rise of the so-called “attention economy”: an internet shaped around the demands of an advertising economy.

The extent of this addiction is suggested by research showing that people touch, swipe, or tap their phones 2,617 times a day!

There is growing concern that as well as addicting users, technology is contributing toward so-called “continuous partial attention”, severely limiting people’s ability to focus, and possibly lowering IQ. One recent study showed that the mere presence of smartphones damages cognitive capacity – even when the device is turned off. “Everyone is distracted,” Rosenstein says. “All of the time.”

“The technologies we use have turned into compulsions, if not full-fledged addictions,” Eyal writes. “It’s the impulse to check a message notification. It’s the pull to visit YouTube, Facebook, or Twitter for just a few minutes, only to find yourself still tapping and scrolling an hour later.” None of this is an accident, he writes. It is all “just as their designers intended”.

Tristan Harris, a former Google employee turned vocal critic of the tech industry, points out that… “All of us are jacked into this system. All of our minds can be hijacked. Our choices are not as free as we think they are.”

“I don’t know a more urgent problem than this,” Harris says. “It’s changing our democracy, and it’s changing our ability to have the conversations and relationships that we want with each other.” 

Harris believes that tech companies never deliberately set out to make their products addictive. They were responding to the incentives of an advertising economy, experimenting with techniques that might capture people’s attention, even stumbling across highly effective design by accident.

“Smartphones are useful tools,” says Loren Brichter, a product designer. “But they’re addictive. Pull-to-refresh is addictive. Twitter is addictive. These are not good things. When I was working on them, it was not something I was mature enough to think about.”

The two inventors listed on Apple’s patent for “managing notification connections and displaying icon badges” are Justin Santamaria and Chris Marcellino. A few years ago Marcellino, 33, left the Bay Area and is now in the final stages of retraining to be a neurosurgeon. He stresses he is no expert on addiction but says he has picked up enough in his medical training to know that technologies can affect the same neurological pathways as gambling and drug use. “These are the same circuits that make people seek out food, comfort, heat, sex,” he says.

“The people who run Facebook and Google are good people, whose well-intentioned strategies have led to horrific unintended consequences,” he says. “The problem is that there is nothing the companies can do to address the harm unless they abandon their current advertising models.

But how can Google and Facebook be forced to abandon the business models that have transformed them into two of the most profitable companies on the planet?

This is exactly the problem – they really can’t. Newer technology, such as distributed social networks tracked by blockchain technology, must be deployed to disrupt the dysfunctional existing technology. New business models will be designed to support this disruption. Human behavioral instincts are crucial to successful new designs that make us more human, rather than less.

James Williams does not believe talk of dystopia is far-fetched. …He says his epiphany came a few years ago when he noticed he was surrounded by technology that was inhibiting him from concentrating on the things he wanted to focus on. “It was that kind of individual, existential realization: what’s going on?” he says. “Isn’t technology supposed to be doing the complete opposite of this?”

The question we ask at tuka is: “What do people really want from technology and social interaction? Distraction or meaning? And how do they find meaning?” Our answer is self-expression through creativity, sharing it, and connecting with communities.

Williams and Harris left Google around the same time and co-founded an advocacy group, Time Well Spent, that seeks to build public momentum for a change in the way big tech companies think about design. 

“Eighty-seven percent of people wake up and go to sleep with their smartphones,” he says. The entire world now has a new prism through which to understand politics, and Williams worries the consequences are profound.

The same forces that led tech firms to hook users with design tricks, he says, also encourage those companies to depict the world in a way that makes for compulsive, irresistible viewing. “The attention economy incentivizes the design of technologies that grab our attention,” he says. “In so doing, it privileges our impulses over our intentions.”

That means privileging what is sensational over what is nuanced, appealing to emotion, anger, and outrage. The news media is increasingly working in service to tech companies, Williams adds, and must play by the rules of the attention economy to “sensationalize, bait and entertain in order to survive”.

In the wake of Donald Trump’s stunning electoral victory, many were quick to question the role of so-called “fake news” on Facebook, Russian-created Twitter bots or the data-centric targeting efforts that companies such as Cambridge Analytica used to sway voters. But Williams sees those factors as symptoms of a deeper problem.

It is not just shady or bad actors who were exploiting the internet to change public opinion. The attention economy itself is set up to promote a phenomenon like Trump, who is masterly at grabbing and retaining the attention of supporters and critics alike, often by exploiting or creating outrage.

Orwellian-style coercion is less of a threat to democracy than the more subtle power of psychological manipulation, and “man’s almost infinite appetite for distractions”.

“The dynamics of the attention economy are structurally set up to undermine the human will,” Williams says. “If politics is an expression of our human will, on individual and collective levels, then the attention economy is directly undermining the assumptions that democracy rests on.” If Apple, Facebook, Google, Twitter, Instagram, and Snapchat are gradually chipping away at our ability to control our own minds, could there come a point, I ask, at which democracy no longer functions?

Our politics will survive, and democracy is only one form of governance. The bigger question is how human civilization survives if our behavior becomes self-destructive and meaningless.

Vampire Squids?

 


I would say this essay by Franklin Foer is a bit alarmist, though his book is worth reading and taking to heart. We are gradually becoming aware of the value of our personal data and I expect consumers will soon figure out how to demand a fair share of that value, else they will withdraw.

Technology is most often disrupted by newer technology that better serves the needs of users. For Web 2.0 business models, our free data is their lifeblood and soon we may be able to cut them off. Many hope that’s where Web 3.0 is going.

tuka is a technology model that seeks to do exactly that for creative content providers, their audiences, and promoter/fans.

How Silicon Valley is erasing your individuality

Washington Post, September 8, 2017

 

Franklin Foer is author of “World Without Mind: The Existential Threat of Big Tech,” from which this essay is adapted.

Until recently, it was easy to define our most widely known corporations. Any third-grader could describe their essence. Exxon sells gas; McDonald’s makes hamburgers; Walmart is a place to buy stuff. This is no longer so. Today’s ascendant monopolies aspire to encompass all of existence. Google derives from googol, a number (1 followed by 100 zeros) that mathematicians use as shorthand for unimaginably large quantities. Larry Page and Sergey Brin founded Google with the mission of organizing all knowledge, but that proved too narrow. They now aim to build driverless cars, manufacture phones and conquer death. Amazon, which once called itself “the everything store,” now produces television shows, owns Whole Foods and powers the cloud. The architect of this firm, Jeff Bezos, even owns this newspaper.

Along with Facebook, Microsoft and Apple, these companies are in a race to become our “personal assistant.” They want to wake us in the morning, have their artificial intelligence software guide us through our days and never quite leave our sides. They aspire to become the repository for precious and private items, our calendars and contacts, our photos and documents. They intend for us to turn unthinkingly to them for information and entertainment while they catalogue our intentions and aversions. Google Glass and the Apple Watch prefigure the day when these companies implant their artificial intelligence in our bodies. Brin has mused, “Perhaps in the future, we can attach a little version of Google that you just plug into your brain.”

More than any previous coterie of corporations, the tech monopolies aspire to mold humanity into their desired image of it. They think they have the opportunity to complete the long merger between man and machine — to redirect the trajectory of human evolution. How do I know this? In annual addresses and town hall meetings, the founding fathers of these companies often make big, bold pronouncements about human nature — a view that they intend for the rest of us to adhere to. Page thinks the human body amounts to a basic piece of code: “Your program algorithms aren’t that complicated,” he says. And if humans function like computers, why not hasten the day we become fully cyborg?

To take another grand theory, Facebook chief Mark Zuckerberg has exclaimed his desire to liberate humanity from phoniness, to end the dishonesty of secrets. “The days of you having a different image for your work friends or co-workers and for the other people you know are probably coming to an end pretty quickly,” he has said. “Having two identities for yourself is an example of a lack of integrity.” Of course, that’s both an expression of idealism and an elaborate justification for Facebook’s business model.

There’s an oft-used shorthand for the technologist’s view of the world. It is assumed that libertarianism dominates Silicon Valley, and that isn’t wholly wrong. High-profile devotees of Ayn Rand can be found there. But if you listen hard to the titans of tech, it’s clear that their worldview is something much closer to the opposite of a libertarian’s veneration of the heroic, solitary individual. The big tech companies think we’re fundamentally social beings, born to collective existence. They invest their faith in the network, the wisdom of crowds, collaboration. They harbor a deep desire for the atomistic world to be made whole. (“Facebook stands for bringing us closer together and building a global community,” Zuckerberg wrote in one of his many manifestos.) By stitching the world together, they can cure its ills.

Rhetorically, the tech companies gesture toward individuality — to the empowerment of the “user” — but their worldview rolls over it. Even the ubiquitous invocation of users is telling: a passive, bureaucratic description of us. The big tech companies (the Europeans have lumped them together as GAFA: Google, Apple, Facebook, Amazon) are shredding the principles that protect individuality. Their devices and sites have collapsed privacy; they disrespect the value of authorship, with their hostility toward intellectual property. In the realm of economics, they justify monopoly by suggesting that competition merely distracts from the important problems like erasing language barriers and building artificial brains. Companies should “transcend the daily brute struggle for survival,” as Facebook investor Peter Thiel has put it.

When it comes to the most central tenet of individualism — free will — the tech companies have a different way. They hope to automate the choices, both large and small, we make as we float through the day. It’s their algorithms that suggest the news we read, the goods we buy, the paths we travel, the friends we invite into our circles. [Blogger Note: As computers can’t write music like humans, algorithms cannot really define tastes. Our sensibilities are excited by serendipity, innovation, and surprise.]

It’s hard not to marvel at these companies and their inventions, which often make life infinitely easier. But we’ve spent too long marveling. The time has arrived to consider the consequences of these monopolies, to reassert our role in determining the human path. Once we cross certain thresholds — once we remake institutions such as media and publishing, once we abandon privacy — there’s no turning back, no restoring our lost individuality.

***

Over the generations, we’ve been through revolutions like this before. Many years ago, we delighted in the wonders of TV dinners and the other newfangled foods that suddenly filled our kitchens: slices of cheese encased in plastic, oozing pizzas that emerged from a crust of ice, bags of crunchy tater tots. In the history of man, these seemed like breakthrough innovations. Time-consuming tasks — shopping for ingredients, tediously preparing a recipe and tackling a trail of pots and pans — were suddenly and miraculously consigned to history.

The revolution in cuisine wasn’t just enthralling. It was transformational. New products embedded themselves deeply in everyday life, so much so that it took decades for us to understand the price we paid for their convenience, efficiency and abundance. Processed foods were feats of engineering, all right — but they were engineered to make us fat. Their delectable taste required massive quantities of sodium and sizable stockpiles of sugar, which happened to reset our palates and made it harder to sate hunger. It took vast quantities of meat and corn to fabricate these dishes, and a spike in demand remade American agriculture at a terrible environmental cost. A whole new system of industrial farming emerged, with penny-conscious conglomerates cramming chickens into feces-covered pens and stuffing them full of antibiotics. By the time we came to understand the consequences of our revised patterns of consumption, the damage had been done to our waistlines, longevity, souls and planet.

Something like the midcentury food revolution is now reordering the production and consumption of knowledge. Our intellectual habits are being scrambled by the dominant firms. Giant tech companies have become the most powerful gatekeepers the world has ever known. Google helps us sort the Internet, by providing a sense of hierarchy to information; Facebook uses its algorithms and its intricate understanding of our social circles to filter the news we encounter; Amazon bestrides book publishing with its overwhelming hold on that market.

Such dominance endows these companies with the ability to remake the markets they control. As with the food giants, the big tech companies have given rise to a new science that aims to construct products that pander to their consumers. Unlike the market research and television ratings of the past, the tech companies have a bottomless collection of data, acquired as they track our travels across the Web, storing every shard about our habits in the hope that they may prove useful. They have compiled an intimate portrait of the psyche of each user — a portrait that they hope to exploit to seduce us into a compulsive spree of binge clicking and watching. And it works: On average, each Facebook user spends one-sixteenth of their day on the site.

In the realm of knowledge, monopoly and conformism are inseparable perils. The danger is that these firms will inadvertently use their dominance to squash diversity of opinion and taste. Concentration is followed by homogenization. As news media outlets have come to depend heavily on Facebook and Google for traffic — and therefore revenue — they have rushed to produce articles that will flourish on those platforms. This leads to a duplication of the news like never before, with scores of sites across the Internet piling onto the same daily outrage. It’s why a picture of a mysteriously colored dress generated endless articles, why seemingly every site recaps “Game of Thrones.” Each contribution to the genre adds little, except clicks. Old media had a pack mentality, too, but the Internet promised something much different. And the prevalence of so much data makes the temptation to pander even greater.

This is true of politics. Our era is defined by polarization, warring ideological gangs that yield no ground. Division, however, isn’t the root cause of our unworkable system. There are many causes, but a primary problem is conformism. Facebook has nurtured two hive minds, each residing in an informational ecosystem that yields head-nodding agreement and penalizes dissenting views. This is the phenomenon that the entrepreneur and author Eli Pariser famously termed the “Filter Bubble” — how Facebook mines our data to keep giving us the news and information we crave, creating a feedback loop that pushes us deeper and deeper into our own amen corners.
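The feedback loop Pariser describes can be sketched as a Pólya-urn-style simulation: serve content in proportion to past clicks, record each view as a click, and watch the mix drift away from an even split. This is entirely illustrative; the topics and weights are made up.

```python
import random

# Toy filter-bubble loop: the feed serves whichever topic the user has
# engaged with most, engagement accumulates, and the mix narrows.

def simulate_feed(rounds=1000, topics=("left", "right"), seed=0):
    rng = random.Random(seed)
    clicks = {t: 1 for t in topics}  # tiny uniform prior
    for _ in range(rounds):
        total = sum(clicks.values())
        shown = rng.choices(topics,
                            weights=[clicks[t] / total for t in topics])[0]
        clicks[shown] += 1           # the user engages with what is shown
    return clicks

# Early random luck gets amplified; the final split is rarely 50/50.
print(simulate_feed())
```

The point of the sketch is that no editorial bias is needed: reinforcing whatever was clicked before is enough to push a feed toward one amen corner.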

As the 2016 presidential election so graphically illustrated, a hive mind is an intellectually incapacitated one, with diminishing ability to tell fact from fiction, with an unshakable bias toward party line. The Russians understood this, which is why they invested so successfully in spreading dubious agitprop via Facebook. And it’s why a raft of companies sprouted — Occupy Democrats, the Angry Patriot, Being Liberal — to get rich off the Filter Bubble and to exploit our susceptibility to the lowest-quality news, if you can call it that.

Facebook represents a dangerous deviation in media history. Once upon a time, elites proudly viewed themselves as gatekeepers. They could be sycophantic to power and snobbish, but they also felt duty-bound to elevate the standards of society and readers. Executives of Silicon Valley regard gatekeeping as the stodgy enemy of innovation — they see themselves as more neutral, scientific and responsive to the market than the elites they replaced — a perspective that obscures their own power and responsibilities. So instead of shaping public opinion, they exploit the public’s worst tendencies, its tribalism and paranoia.

***

During this century, we largely have treated Silicon Valley as a force beyond our control. A broad consensus held that lead-footed government could never keep pace with the dynamism of technology. By the time government acted against a tech monopoly, a kid in a garage would have already concocted some innovation to upend the market. Or, as Google’s Eric Schmidt put it, “Competition is one click away” — a nostrum suggesting that the very structure of the Internet defied our historic concern about monopoly.

As individuals, we have similarly accepted the omnipresence of the big tech companies as a fait accompli. We’ve enjoyed their free products and next-day delivery with only a nagging sense that we may be surrendering something important. Such blitheness can no longer be sustained. Privacy won’t survive the present trajectory of technology — and with the sense of being perpetually watched, humans will behave more cautiously, less subversively. Our ideas about the competitive marketplace are at risk. With a decreasing prospect of toppling the giants, entrepreneurs won’t bother to risk starting new firms, a primary source of jobs and innovation. And the proliferation of falsehoods and conspiracies through social media, the dissipation of our common basis for fact, is creating conditions ripe for authoritarianism. Over time, the long merger of man and machine has worked out pretty well for man. But we’re drifting into a new era, when that merger threatens the individual. We’re drifting toward monopoly, conformism, their machines. Perhaps it’s time we steer our course.

Google This.


Another argument that moves toward treating these companies as public utilities (Google more than Facebook). From USA Today:

I invested early in Google and Facebook and regret it. I helped create a monster.

‘Brain hacking’ Internet monopolies menace public health, democracy, writes Roger McNamee.

I invested in Google and Facebook years before their first revenue and profited enormously. I was an early adviser to Facebook’s team, but I am terrified by the damage being done by these Internet monopolies.

Technology has transformed our lives in countless ways, mostly for the better. Thanks to the now ubiquitous smartphone, tech touches us from the moment we wake up until we go to sleep. While the convenience of smartphones has many benefits, the unintended consequences of well-intentioned product choices have become a menace to public health and to democracy.

Facebook and Google get their revenue from advertising, the effectiveness of which depends on gaining and maintaining consumer attention. Borrowing techniques from the gambling industry, Facebook, Google and others exploit human nature, creating addictive behaviors that compel consumers to check for new messages, respond to notifications, and seek validation from technologies whose only goal is to generate profits for their owners.

The people at Facebook and Google believe that giving consumers more of what they want and like is worthy of praise, not criticism. What they fail to recognize is that their products are not making consumers happier or more successful.

Like gambling, nicotine, alcohol or heroin, Facebook and Google — the latter most importantly through its YouTube subsidiary — produce short-term happiness with serious negative consequences in the long term.

Users fail to recognize the warning signs of addiction until it is too late. There are only 24 hours in a day, and technology companies are making a play for all of them. The CEO of Netflix recently noted that his company’s primary competitor is sleep.

How does this work? A 2013 study found that average consumers check their smartphones 150 times a day. And that number has probably grown. People spend 50 minutes a day on Facebook. Other social apps such as Snapchat, Instagram and Twitter combine to take up still more time. Those companies maintain a profile on every user, which grows every time you like, share, search, shop or post a photo. Google also is analyzing credit card records of millions of people.

As a result, the big Internet companies know more about you than you know about yourself, which gives them huge power to influence you, to persuade you to do things that serve their economic interests. Facebook, Google and others compete for each consumer’s attention, reinforcing biases and reducing the diversity of ideas to which each is exposed. The degree of harm grows over time.

Consider a recent story from Australia, where someone at Facebook told advertisers that they had the ability to target teens who were sad or depressed, which made them more susceptible to advertising. In the United States, Facebook once demonstrated its ability to make users happier or sadder by manipulating their news feed. While it did not turn either capability into a product, the fact remains that Facebook influences the emotional state of users every moment of every day. Former Google design ethicist Tristan Harris calls this “brain hacking.”

The fault here is not with search and social networking, per se. Those services have enormous value. The fault lies with advertising business models that drive companies to maximize attention at all costs, leading to ever more aggressive brain hacking.

The Facebook application has 2 billion active users around the world. Google’s YouTube has 1.5 billion. These numbers are comparable to Christianity and Islam, respectively, giving Facebook and Google influence greater than most First World countries. They are too big and too global to be held accountable. Other attention-based apps — including Instagram, WhatsApp, WeChat, Snapchat and Twitter — also have user bases between 100 million and 1.3 billion. Not all their users have had their brains hacked, but all are on that path. And there are no watchdogs.

Anyone who wants to pay for access to addicted users can work with Facebook and YouTube. Lots of bad people have done it. One firm was caught using Facebook tools to spy on law-abiding citizens. A federal agency confronted Facebook about the use of its tools by financial firms to discriminate based on race in the housing market. America’s intelligence agencies have concluded that Russia interfered in our election and that Facebook was a key platform for spreading misinformation. For the price of a few fighter aircraft, Russia won an information war against us.

Incentives being what they are, we cannot expect Internet monopolies to police themselves. There is little government regulation and no appetite to change that. If we want to stop brain hacking, consumers will have to force changes at Facebook and Google.

Roger McNamee is the managing director and a co-founder of Elevation Partners, an investment partnership focused on media/entertainment content and consumer technology.

FAANGs = Public Utilities?

Could it be that these companies — and Google in particular — have become natural monopolies by supplying an entire market’s demand for a service, at a price lower than what would be offered by two competing firms? And if so, is it time to regulate them like public utilities?

Consider a historical analogy: the early days of telecommunications.

In 1895 a photograph of the business district of a large city might have shown 20 phone wires attached to most buildings. Each wire was owned by a different phone company, and none of them worked with the others. Without network effects, the networks themselves were almost useless.

The solution was for a single company, American Telephone and Telegraph, to consolidate the industry by buying up all the small operators and creating a single network — a natural monopoly. The government permitted it, but then regulated this monopoly through the Federal Communications Commission.

AT&T (also known as the Bell System) had its rates regulated, and was required to spend a fixed percentage of its profits on research and development. In 1925 AT&T set up Bell Labs as a separate subsidiary with the mandate to develop the next generation of communications technology, but also to do basic research in physics and other sciences. Over the next 50 years, the basics of the digital age — the transistor, the microchip, the solar cell, the microwave, the laser, cellular telephony — all came out of Bell Labs, along with eight Nobel Prizes.

In a 1956 consent decree in which the Justice Department allowed AT&T to maintain its phone monopoly, the government extracted a huge concession: All past patents were licensed (to any American company) royalty-free, and all future patents were to be licensed for a small fee. These licenses led to the creation of Texas Instruments, Motorola, Fairchild Semiconductor and many other start-ups.

True, the internet never had the same problems of interoperability. And Google’s route to dominance is different from the Bell System’s. Nevertheless, Google still has all of the characteristics of a public utility.

We are going to have to decide fairly soon whether Google, Facebook and Amazon are the kinds of natural monopolies that need to be regulated, or whether we allow the status quo to continue, pretending that unfettered monoliths don’t inflict damage on our privacy and democracy.

It is impossible to deny that Facebook, Google and Amazon have stymied innovation on a broad scale. To begin with, the platforms of Google and Facebook are the point of access to all media for the majority of Americans. While profits at Google, Facebook and Amazon have soared, revenues in media businesses like newspaper publishing or the music business have, since 2001, fallen by 70 percent.

According to the Bureau of Labor Statistics, newspaper publishers lost over half their employees between 2001 and 2016. Billions of dollars have been reallocated from creators of content to owners of monopoly platforms. All content creators dependent on advertising must negotiate with Google or Facebook as aggregator, the sole lifeline between themselves and the vast internet cloud.

It’s not just newspapers that are hurting. In 2015 two Obama economic advisers, Peter Orszag and Jason Furman, published a paper arguing that the rise in “supernormal returns on capital” at firms with limited competition is leading to a rise in economic inequality. The M.I.T. economists Scott Stern and Jorge Guzman explained that in the presence of these giant firms, “it has become increasingly advantageous to be an incumbent, and less advantageous to be a new entrant.”

There are a few obvious regulations to start with. Monopoly is made by acquisition — Google buying AdMob and DoubleClick, Facebook buying Instagram and WhatsApp, Amazon buying, to name just a few, Audible, Twitch, Zappos and Alexa. At a minimum, these companies should not be allowed to acquire other major firms, like Spotify or Snapchat.

The second alternative is to regulate a company like Google as a public utility, requiring it to license out patents, for a nominal fee, for its search algorithms, advertising exchanges and other key innovations.

The third alternative is to remove the “safe harbor” clause in the 1998 Digital Millennium Copyright Act, which allows companies like Facebook and Google’s YouTube to free ride on the content produced by others. The reason there are 40,000 Islamic State videos on YouTube, many with ads that yield revenue for those who posted them, is that YouTube does not have to take responsibility for the content on its network. Facebook, Google and Twitter claim that policing their networks would be too onerous. But that’s preposterous: They already police their networks for pornography, and quite well.

Removing the safe harbor provision would also force social networks to pay for the content posted on their sites. A simple example: One million downloads of a song on iTunes would yield the performer and his record label about $900,000. One million streams of that same song on YouTube would earn them about $900.
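The gap in those figures is easier to grasp as per-unit rates. A minimal sketch of the arithmetic, using only the totals quoted above (the per-download and per-stream rates are derived here, not taken from the article):

```python
# Derive per-unit payout rates from the article's round-number totals.
DOWNLOADS = 1_000_000
ITUNES_TOTAL = 900_000   # dollars for 1M iTunes downloads, per the article
STREAMS = 1_000_000
YOUTUBE_TOTAL = 900      # dollars for 1M YouTube streams, per the article

itunes_per_download = ITUNES_TOTAL / DOWNLOADS  # $0.90 per download
youtube_per_stream = YOUTUBE_TOTAL / STREAMS    # $0.0009 per stream
ratio = itunes_per_download / youtube_per_stream

print(f"${itunes_per_download:.2f} per download vs "
      f"${youtube_per_stream:.4f} per stream "
      f"(roughly {ratio:.0f}x difference)")
```

In other words, by these figures a download pays the performer and label about a thousand times what a stream does.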

I’m under no delusion that, with libertarian tech moguls like Peter Thiel in President Trump’s inner circle, antitrust regulation of the internet monopolies will be a priority. Ultimately we may have to wait four years, at which time the monopolies will be so dominant that the only remedy will be to break them up. Force Google to sell DoubleClick. Force Facebook to sell WhatsApp and Instagram.

Woodrow Wilson was right when he said in 1913, “If monopoly persists, monopoly will always sit at the helm of the government.” We ignore his words at our peril.