Anonymity Online: A Two-Edged Sword


This essay, published in UnHerd, examines the downsides of anonymity, the pathologies of social media, and the dangers of censorship. This is why tuka has no anonymity: so reputational capital can be valued and rewarded. Some excerpts and comments follow.

Why Twitter is So Awful

“…the term “attention economy” was coined… by Nobel Prize-winning economist Herbert Simon, in a 1971 article. Simon explored how to build organizations in a world saturated by information, arguing that attention is a key bottleneck in human culture. That is, the more abundant information is, the scarcer attention becomes as a resource.”

Yes, the time and energy that attention consumes are the resource constraints we face when information is overabundant.

In the resulting bare-knuckle war for attention, it’s not reason that wins. Nor is it the case that the best, sanest, or most constructive ideas will prevail. Rather, it’s the most lurid (or aggressively state-sponsored) ideas that make it to the surface…

Yes, but nature has empowered us to adapt to a changing environment, and that applies to technological change that shapes our social interactions. We are learning how to cope with global social media that has distorted local human interaction. One might have made a similar case against the introduction of the automobile: motor vehicles made life more treacherous, and pedestrians were suddenly subject to new risks. But we learned to regulate car traffic on roads and to provide guidelines and behaviors for pedestrians. We learned, when walking through town, to look both ways. We need to learn similar survival skills for navigating the online, virtual world. One must take control of one’s social engagement; many have chosen to unplug completely.

The real risk is that we go on getting lost in stupid arguments, over shiny but trivial talking-points, and never get the hang of parsing what actually matters in the torrent of information overload. 

Yes, so one must exert judgment over how to spend one’s precious time. Not everyone will get it, but on a societal level it becomes a reflection of cultural values. Cultural values swing and tend toward the mean of humanity. When engaging online makes us feel less human, we will disengage in favor of human interaction. This is why so many people choose not to engage on Twitter and Facebook. And this is how we control the inhuman evolution of technology. Teach your children well.

Data Land Grab


‘Good for the world’? Facebook emails reveal what really drives the site

As this article and Facebook’s internal management debates reveal, Web 2.0 (of which the GAFA companies are the archetypes) is built on a data land grab. It is much like the literal land grabs the European powers fought over in the New World and, later, in the colonization of Africa and Asia.

Data is now a valuable resource, priced alongside land and capital. Naturally, the tech oligopolies and their startup wannabes all want to grab as much of it as possible. And who are they grabbing it from? The network users, of course.

Web 3.0 is all about democratizing the value and monetization of personal networked data. It’s about decentralized ownership and control, much like the desire to own and control the fruits of one’s labor that ended slavery. Web 3.0 is the future, because Web 2.0 is unsustainable.

 

Madmen and the Godless Algorithm


This article from The New Yorker gives a good overview of the history of the advertising model that has dominated our commercial culture for decades. It has now gone on digital steroids, and the disruption of ad technology has interesting implications.

How the Math Men Overthrew the Mad Men

By Ken Auletta

Once, Mad Men ruled advertising. They’ve now been eclipsed by Math Men—the engineers and data scientists whose province is machines, algorithms, pureed data, and artificial intelligence. Yet Math Men are beleaguered, as Mark Zuckerberg demonstrated when he humbled himself before Congress, in April. Math Men’s adoration of data—coupled with their truculence and an arrogant conviction that their “science” is nearly flawless—has aroused government anger, much as Microsoft did two decades ago.

The power of Math Men is awesome. Google and Facebook each has a market value exceeding the combined value of the six largest advertising and marketing holding companies. Together, they claim six out of every ten dollars spent on digital advertising, and nine out of ten new digital ad dollars. They have become more dominant in what is estimated to be an up to two-trillion-dollar annual global advertising and marketing business. Facebook alone generates more ad dollars than all of America’s newspapers, and Google has twice the ad revenues of Facebook.

In the advertising world, Big Data is the Holy Grail, because it enables marketers to target messages to individuals rather than general groups, creating what’s called addressable advertising. And only the digital giants possess state-of-the-art Big Data. “The game is no longer about sending you a mail order catalogue or even about targeting online advertising,” Shoshana Zuboff, a professor of business administration at the Harvard Business School, wrote on faz.net, in 2016. “The game is selling access to the real-time flow of your daily life—your reality—in order to directly influence and modify your behavior for profit.” Success at this “game” flows to those with the “ability to predict the future—specifically the future of behavior,” Zuboff writes. She dubs this “surveillance capitalism.” [I question whether this will really work as anticipated once everybody is hip to the game.]

However, to thrash just Facebook and Google is to miss the larger truth: everyone in advertising strives to eliminate risk by perfecting targeting data. [This is the essence of what we’re doing here – reducing the risk of uncertainty.] Protecting privacy is not foremost among the concerns of marketers; protecting and expanding their business is. The business model adopted by ad agencies and their clients parallels Facebook and Google’s. Each aims to massage data to better identify potential customers. Each aims to influence consumer behavior. To appreciate how alike their aims are, sit in an agency or client marketing meeting and you will hear wails about Facebook and Google’s “walled garden,” their unwillingness to share data on their users. When Facebook or Google counter that they must protect “the privacy” of their users, advertisers cry foul: You’re using the data to target ads we paid for—why won’t you share it, so that we can use it in other ad campaigns? [But who really owns your data? Even if you choose to give it away?]

This preoccupation with Big Data is also revealed by the trend in the advertising-agency business to have the media agency, not the creative Mad Men team, occupy the prime seat in pitches to clients, because it’s the media agency that harvests the data to help advertising clients better aim at potential consumers. Agencies compete to proclaim their own Big Data hoard. W.P.P.’s GroupM, the largest media agency, has quietly assembled what it calls its “secret sauce,” a collection of forty thousand personally identifiable attributes it plans to retain on two hundred million adult Americans. Unlike Facebook or Google, GroupM can’t track most of what we do online. To parade their sensitivity to privacy, agencies reassuringly boast that they don’t know the names of people in their data bank. But they do have your I.P. address, which yields abundant information, including where you live. For marketers, the advantage of being able to track online behavior, the former senior GroupM executive Brian Lesser said—a bit hyperbolically, one hopes—is that “we know what you want even before you know you want it.” [That sounds like adman hubris rather than reality.]

Worried that Brian Lesser’s dream will become a nightmare, ProPublica has ferociously chewed on the Big Data privacy menace like a dog with a bone: in its series “Breaking the Black Box,” it wrote, “Facebook has a particularly comprehensive set of dossiers on its more than two billion members. Every time a Facebook member likes a post, tags a photo, updates their favorite movies in their profile, posts a comment about a politician, or changes their relationship status, Facebook logs it . . . When they use Instagram or WhatsApp on their phone, which are both owned by Facebook, they contribute more data to Facebook’s dossier.” Facebook offers advertisers more than thirteen hundred categories for ad targeting, according to ProPublica.

Google, for its part, has merged all the data it collects from its search, YouTube, and other services, and has introduced an About Me page, which includes your date of birth, phone number, where you work, mailing address, education, where you’ve travelled, your nickname, photo, and e-mail address. Amazon knows even more about you. Since it is the world’s largest store and sees what you’ve actually purchased, its data are unrivalled. Amazon reaches beyond what interests you (revealed by a Google search) or what your friends are saying (on Facebook) to what you actually purchase. With Amazon’s Alexa, it has an agent in your home that not only knows what you bought but when you wake up, what you watch, read, listen to, ask for, and eat. And Amazon is aggressively building up its meager ad sales, which gives it an incentive to exploit its data.

Data excite advertisers. Prowling his London office in jeans, Keith Weed, who oversees marketing and communications for Unilever, one of the world’s largest advertisers, described how mobile phones have elevated data as a marketing tool. “When I started in marketing, we were using secondhand data which was three months old,” he said. “Now with the good old mobile, I have individualized data on people. You don’t need to know their names . . . You know their telephone number. You know where they live, because it’s the same location as their PC.” Weed knows what times of the day you usually browse, watch videos, answer e-mail, travel to the office—and what travel routes you take. “From your mobile, I know whether you stay in four-star or two-star hotels, whether you go to train stations or airports. I use these insights along with what you’re browsing on your PC. I know whether you’re interested in horses or holidays in the Caribbean.” By using programmatic computers to buy ads targeting these individuals, he says, Unilever can “create a hundred thousand permutations of the same ad,” as they recently did with a thirty-second TV ad for Axe toiletries aimed at young men in Brazil. The more Keith Weed knows about a consumer, the better he can aim to target a sale.

Engineers and data scientists vacuum data. They see data as virtuous, yielding clues to the mysteries of human behavior, suggesting efficiencies (including eliminating costly middlemen, like agency Mad Men), offering answers that they believe will better serve consumers, because the marketing message is individualized. The more cool things offered, the more clicks, the more page views, the more user engagement. Data yield facts and advance a quest to be more scientific—free of guesses. As Eric Schmidt, then the executive chairman of Google’s parent company, Alphabet, said at the company’s 2017 shareholder meeting, “We start from the principles of science at Google and Alphabet.”

They believe there is nobility in their quest. By offering individualized marketing messages, they are trading something of value in exchange for a consumer’s attention. They also start from the principle, as the TV networks did, that advertising allows their product to be “free.” But, of course, as their audience swells, so does their data. Sandy Parakilas, who was Facebook’s operations manager on its platform team from 2011 to 2012, put it this way in a scathing Op-Ed for the Times, last November: “The more data it has on offer, the more value it creates for advertisers. That means it has no incentive to police the collection or use of that data—except when negative press or regulators are involved.” For the engineers, the privacy issue—like “fake news” and even fraud—was relegated to the nosebleed bleachers. [This fact should be obvious to all of us.]

With a chorus of marketers and citizens and governments now roaring their concern, the limitations of Math Men loom large. Suddenly, governments in the U.S. are almost as alive to privacy dangers as those in Western Europe, confronting Facebook by asking how the political-data company Cambridge Analytica, employed by Donald Trump’s Presidential campaign, was able to snatch personal data from eighty-seven million individual Facebook profiles. Was Facebook blind—or deliberately mute? Why, they are really asking, should we believe in the infallibility of your machines and your willingness to protect our privacy?

Ad agencies and advertisers have long been uneasy not just with the “walled gardens” of Facebook and Google but with their unwillingness to allow an independent company to monitor their results, as Nielsen does for TV and comScore does online. This mistrust escalated in 2016, when it emerged that Facebook and Google charged advertisers for ads that tricked other machines to believe an ad message was seen by humans when it was not. Advertiser confidence in Facebook was further jolted later in 2016, when it was revealed that the Math Men at Facebook overestimated the average time viewers spent watching video by up to eighty per cent. And in 2017, Math Men took another beating when news broke that Google’s YouTube and Facebook’s machines were inserting friendly ads on unfriendly platforms, including racist sites and porn sites. These were ads targeted by keywords, like “Confederacy” or “race”; placing an ad on a history site might locate it on a Nazi-history site.

The credibility of these digital giants was further subverted when Russian trolls proved how easy it was to disseminate “fake news” on social networks. When told that Facebook’s mechanized defenses had failed to screen out disinformation planted on the social network to sabotage Hillary Clinton’s Presidential campaign, Mark Zuckerberg publicly dismissed the assertion as “pretty crazy,” a position he later conceded was wrong.

By the spring of 2018, Facebook had lost control of its narrative. Their original declared mission—to “connect people” and “build a global community”—had been replaced by an implicit new narrative: we connect advertisers to people. [Indeed, connecting people on a global basis for human interaction really doesn’t make a lot of sense. A global gossip network? Unless, of course, you’re trying to monetize it.] It took Facebook and Google about five years before they figured out how to generate revenue, and today roughly ninety-five percent of Facebook’s dollars and almost ninety percent of Google’s comes from advertising. They enjoy abundant riches because they tantalize advertisers with the promise that they can corral potential customers. This is how Facebook lured developers and app makers by offering them a permissive Graph A.P.I., granting them access to the daily habits and the interchange with friends of its users. This Graph A.P.I. is how Cambridge Analytica got its paws on the data of eighty-seven million Americans.

The humiliating furor this news provoked has not subverted the faith among Math Men that their “science” will prevail. They believe advertising will be further transformed by new scientific advances like artificial intelligence that will allow machines to customize ads, marginalizing human creativity. With algorithms creating profiles of individuals, Airbnb’s then chief marketing officer, Jonathan Mildenhall, told me, “brands can engineer without the need for human creativity.” Machines will craft ads, just as machines will drive cars. But the ad community is increasingly mistrustful of the machines, and of Facebook and Google. [As they should be – the value has been over-hyped.] During a presentation at Advertising Week in New York this past September, Keith Weed offered a report card to Facebook and Google. He gave them a mere “C” for policing ad fraud, and a harsher “F” for cross-platform transparency, insisting, “We’ve got to see over the walled gardens.”

That mistrust has gone viral. A powerful case for more government regulation of the digital giants was made by The Economist, a classically conservative publication that also endorsed the government’s antitrust prosecution of Microsoft, in 1999. The magazine editorialized, in May, 2017, that governments must better police the five digital giants—Facebook, Google, Amazon, Apple, and Microsoft—because data were “the oil of the digital era”: “Old ways of thinking about competition, devised in the era of oil, look outdated in what has come to be called the ‘data economy.’ ” Inevitably, an abundance of data alters the nature of competition, allowing companies to benefit from network effects, with users multiplying and companies amassing wealth to swallow potential competitors.

The politics of Silicon Valley is left of center, but its disdain for government regulation has been right of center. This is changing. A Who’s Who of Silicon notables—Tim Berners-Lee, Tim Cook, Ev Williams, Sean Parker, and Tony Fadell, among others—have harshly criticized the social harm imposed by the digital giants. Marc Benioff, the C.E.O. of Salesforce.com—echoing similar sentiments expressed by Berners-Lee—has said, “The government is going to have to be involved. You do it exactly the same way you regulated the cigarette industry.”

Cries for regulating the digital giants are almost as loud today as they were to break up Microsoft in the late nineties. Congress insisted that Facebook’s Zuckerberg, not his minions, testify. The Federal Trade Commission is investigating Facebook’s manipulation of user data. Thirty-seven state attorneys general have joined a demand to learn how Facebook safeguards privacy. The European Union has imposed huge fines on Google and wants to inspect Google’s crown jewels—its search algorithms—claiming that Google’s search results are skewed to favor their own sites. The E.U.’s twenty-eight countries this May imposed a General Data Protection Regulation to protect the privacy of users, requiring that citizens must choose to opt in before companies can hoard their data.

Here’s where advertisers and the digital giants lock arms: they speak with one voice in opposing opt-in legislation, which would deny access to data without the permission of users. If consumers wish to deny advertisers access to their cookies—their data—they agree: the consumer must voluntarily opt out, meaning they must endure a cumbersome and confusing series of online steps. Amid the furor about Facebook and Google, remember these twinned and rarely acknowledged truisms: more data probably equals less privacy, while more privacy equals less advertising revenue. Thus, those who rely on advertising have business reasons to remain tone-deaf to privacy concerns.

Those reliant on advertising know: the disruption that earlier slammed the music, newspaper, magazine, taxi, and retail industries now upends advertising. Agencies are being challenged by a host of competitive frenemies: by consulting and public-relations companies that have jumped into their business; by platform customers like Google and Facebook but also the Times, NBC, and Buzzfeed, that now double as ad agencies and talk directly to their clients; by clients that increasingly perform advertising functions in-house.

But the foremost frenemy is the public, which poses an existential threat not just to agencies but to Facebook and the ad revenues on which most media rely. Citizens protest annoying, interruptive advertising, particularly on their mobile phones—a device as personal as a purse or wallet. An estimated twenty per cent of Americans, and one-third of Western Europeans, employ ad-blocker software. More than half of those who record programs on their DVRs choose to skip the ads. Netflix and Amazon, among others, have accustomed viewers to watch what they want when they want, without commercial interruption.

Understandably, those dependent on ad dollars quake. The advertising and marketing world scrambles for new ways to reach consumers. Big Data, they believe, promises ways they might better communicate with annoyed consumers—maybe unlock ways that ads can be embraced as a useful individual service rather than as an interruption. If Big Data’s use is circumscribed to protect privacy, the advertising business will suffer. In this core conviction, at least, Mad Men and Math Men are alike.

This piece is partially adapted from Auletta’s forthcoming book, “Frenemies: The Epic Disruption of the Ad Business (and Everything Else).”

 

I would guess that the ad business will be disrupted further as we find new ways to connect consumers with what they want. This will reduce the power of the Math Men who control the centralized network servers.

I also suspect search will become a regulated public utility. A free society cannot tolerate one or two private corporations controlling all the information that flows through its networks.

 

What’s Going On?

This is an excellent interview with technology culture guru Jaron Lanier, author of some very insightful books on the clashes between technology and humanism. See comments and highlights below…

And then when you move out of the tech world, everybody’s struggling…

It’s not so much that they’re doing badly, but they have only labor and no capital. Or the way I used to put it is, they have to sing for their supper, for every single meal. It’s making everyone else take on all the risk. It’s like we’re the people running the casino and everybody else takes the risks and we don’t. That’s how it feels to me. It’s not so much that everyone else is doing badly as that they’ve lost economic capital and standing, and momentum and plannability. It’s a subtle difference.

‘One Has This Feeling of Having Contributed to Something That’s Gone Very Wrong’

By Noah Kulwin, April 17, 2018, New York Magazine

Over the last few months, Select All has interviewed more than a dozen prominent technology figures about what has gone wrong with the contemporary internet for a project called “The Internet Apologizes.” We’re now publishing lengthier transcripts of each individual interview. This interview features Jaron Lanier, a pioneer in the field of virtual reality and the founder of the first company to sell VR goggles. Lanier currently works at Microsoft Research as an interdisciplinary scientist. He is the author of the forthcoming book Ten Arguments for Deleting Your Social Media Accounts Right Now.

You can find other interviews from this series here.

Jaron Lanier: Can I just say one thing now, just to be very clear? Professionally, I’m at Microsoft, but when I speak to you, I’m not representing Microsoft at all. There’s not even the slightest hint that this represents any official Microsoft thing. I have an agreement within which I’m able to be an independent public intellectual, even if it means criticizing them. I just want to be very clear that this isn’t a Microsoft position.

Noah Kulwin: Understood.
Yeah, sorry. I really just wanted to get that down. So now please go ahead, I’m so sorry to interrupt you.

In November, you told Maureen Dowd that it’s scary and awful how out of touch Silicon Valley people have become. It’s a pretty forward remark. I’m kind of curious what you mean by that.
To me, one of the patterns we see that makes the world go wrong is when somebody acts as if they aren’t powerful when they actually are powerful. So if you’re still reacting against whatever you used to struggle for, but actually you’re in control, then you end up creating great damage in the world. Like, oh, I don’t know, I could give you many examples. But let’s say like Russia’s still acting as if it’s being destroyed when it isn’t, and it’s creating great damage in the world. And Silicon Valley’s kind of like that.

We used to be kind of rebels, like, if you go back to the origins of Silicon Valley culture, there were these big traditional companies like IBM that seemed to be impenetrable fortresses. And we had to create our own world. To us, we were the underdogs and we had to struggle. And we’ve won. I mean, we have just totally won. We run everything. We are the conduit of everything else happening in the world. We’ve disrupted absolutely everything. Politics, finance, education, media, relationships — family relationships, romantic relationships — we’ve put ourselves in the middle of everything, we’ve absolutely won. But we don’t act like it.

We have no sense of balance or modesty or graciousness having won. We’re still acting as if we’re in trouble and we have to defend ourselves, which is preposterous. And so in doing that we really kind of turn into assholes, you know?

How do you think that siege mentality has fed into the ongoing crisis with the tech backlash?

One of the problems is that we’ve isolated ourselves through extreme wealth and success. Before, we might’ve been isolated because we were nerdy insurgents. But now we’ve found a new method to isolate ourselves, where we’re just so successful and so different from so many other people that our circumstances are different. And we have less in common with all the people whose lives we’ve disrupted. I’m just really struck by that. I’m struck with just how much better off we are financially, and I don’t like the feeling of it.

Personally, I would give up a lot of the wealth and elite status that we have in order to just live in a friendly, more connected world where it would be easier to move about and not feel like everything else is insecure and falling apart. People in the tech world, they’re all doing great, they all feel secure. I mean they might worry about a nuclear attack or something, but their personal lives are really secure.

And then when you move out of the tech world, everybody’s struggling. It’s a very strange thing. The numbers show an economy that’s doing well, but the reality is that the way it’s doing well doesn’t give many people a feeling of security or confidence in their futures. It’s like everybody’s working for Uber in one way or another. Everything’s become the gig economy. And we routed it that way, that’s our doing. There’s this strange feeling when you just look outside of the tight circle of Silicon Valley, almost like entering another country, where people are less secure. It’s not a good feeling. I don’t think it’s worth it, I think we’re wrong to want that feeling.

It’s not so much that they’re doing badly, but they have only labor and no capital. Or the way I used to put it is, they have to sing for their supper, for every single meal. It’s making everyone else take on all the risk. It’s like we’re the people running the casino and everybody else takes the risks and we don’t. That’s how it feels to me. It’s not so much that everyone else is doing badly as that they’ve lost economic capital and standing, and momentum and plannability. It’s a subtle difference.

There’s still this rhetoric of being the underdog in the tech industry. The attitude within the Valley is “Are you kidding? You think we’re resting on our laurels? No! We have to fight for every yard.”

There’s this question of whether what you’re fighting for is something that’s really new and a benefit for humanity, or if you’re only engaged in a sort of contest with other people that’s fundamentally not meaningful to anyone else. The theory of markets and capitalism is that when we compete, what we’re competing for is to get better at something that’s actually a benefit to people, so that everybody wins. So if you’re building a better mousetrap, or a better machine-learning algorithm, then that competition should generate improvement for everybody.

But if it’s a purely abstract competition set up between insiders to the exclusion of outsiders, it might feel like a competition, it might feel very challenging and stressful and hard to the people doing it, but it doesn’t actually do anything for anybody else. It’s no longer genuinely productive for anybody, it’s a fake. And I’m a little concerned that a lot of what we’ve been doing in Silicon Valley has started to take on that quality. I think that’s been a problem in Wall Street for a while, but the way it’s been a problem in Wall Street has been aided by Silicon Valley. Everything becomes a little more abstract and a little more computer-based. You have this very complex style of competition that might not actually have much substance to it.

You look at the big platforms, and it’s not like there’s this bountiful ecosystem of start-ups. The rate of small-business creation is at its lowest in decades, and instead you have a certain number of start-ups competing to be acquired by a handful of companies. There are not that many varying powers, there’s just a few.
That’s something I’ve been complaining about and I’ve written about for a while, that Silicon Valley used to be this place where people could do a start-up and the start-up might become a big company on its own, or it might be acquired, or it might merge into things. But lately it kind of feels like both at the start and at the end of the life of a start-up, things are a little bit more constrained. It used to be that you didn’t have to know the right people, but now you do. You have to get in with the right angel investors or incubator or whatever at the start. And they’re just a small number, it’s like a social order, you have to get into them. And then the output on the other side is usually being acquired by one of a very small number of top companies.

There are a few exceptions, you can see Dropbox’s IPO. But they’re rarer and rarer. And I suspect Dropbox in the future might very well be acquired by one of the giants. It’s not clear that it’ll survive as its own thing in the long term. I mean, we don’t know. I have no inside information about that, I’m just saying that the much more typical scenario now, as you described, is that the companies go to one of the biggies.

I’m kind of curious what you think needs to happen to prevent future platforms, like VR, from going the way of social media and reaching this really profitable crisis state.

A lot of the rhetoric of Silicon Valley that has the utopian ring about creating meaningful communities where everybody’s creative and people collaborate and all this stuff — I don’t wanna make too much of my own contribution, but I was kind of the first author of some of that rhetoric a long time ago. So it kind of stings for me to see it misused. Like, I used to talk about how virtual reality could be a tool for empathy, and then I see Mark Zuckerberg talking about how VR could be a tool for empathy while being profoundly nonempathic, using VR to tour Puerto Rico after the storm, after Maria. One has this feeling of having contributed to something that’s gone very wrong.

So I guess the overall way I think of it is, first, we might remember ourselves as having been lucky that some of these problems started to come to a head during the social-media era, before tools like virtual reality become more prominent, because the technology is still not as intense as it probably will be in the future. So as bad as it’s been, as bad as the election interference and the fomenting of ethnic warfare, and the empowering of neo-Nazis, and the bullying — as bad as all of that has been, we might remember ourselves as having been fortunate that it happened when the technology was really just little slabs we carried around in our pockets that we could look at and that could talk to us, or little speakers we could talk to. It wasn’t yet a whole simulated reality that we could inhabit.

Because that will be so much more intense, and that has so much more potential for behavior modification, and fooling people, and controlling people. So things potentially could get a lot worse, and hopefully they’ll get better as a result of our experiences during this era.

As far as what to do differently, I’ve had a particular take on this for a long time that not everybody agrees with. I think the fundamental mistake we made is that we set up the wrong financial incentives, and that’s caused us to turn into jerks and screw around with people too much. Way back in the ’80s, we wanted everything to be free because we were hippie socialists. But we also loved entrepreneurs because we loved Steve Jobs. So you wanna be both a socialist and a libertarian at the same time, and it’s absurd. But that’s the kind of absurdity that Silicon Valley culture has to grapple with.

And there’s only one way to merge the two things, which is what we call the advertising model, where everything’s free but you pay for it by selling ads. But then because the technology gets better and better, the computers get bigger and cheaper, there’s more and more data — what started out as advertising morphed into continuous behavior modification on a mass basis, with everyone under surveillance by their devices and receiving calculated stimulus to modify them. So you end up with this mass behavior-modification empire, which is straight out of Philip K. Dick, or from earlier generations, from 1984.

It’s this thing that we were warned about. It’s this thing that we knew could happen. Norbert Wiener, who coined the term cybernetics, warned about it as a possibility. And despite all the warnings, and despite all of the cautions, we just walked right into it, and we created mass behavior-modification regimes out of our digital networks. We did it out of this desire to be both cool socialists and cool libertarians at the same time.

This dovetails with something you’ve said in the past that’s stuck with me, which is your phrase Digital Maoism. Do you think that the Digital Maoism that you described years ago — are those the people who run Silicon Valley today?

I was talking about a few different things at the time I wrote “Digital Maoism.” One of them was the way that we were centralizing culture, even though the rhetoric was that we were distributing it. Before Wikipedia, I think it would have been viewed as being this horrible thing to say that there could only be one encyclopedia, and that there would be one dominant entry for a given topic. Instead, there were different encyclopedias. There would be variations not so much in what facts were presented, but in the way they were presented. That voice was a real thing.

And then we moved to this idea that we have a single dominant encyclopedia that was supposed to be the truth for the global AI or something like that. But there’s something deeply pernicious about that. So we’re saying anybody can write for Wikipedia, so it’s, like, purely democratic and it’s this wonderful open thing, and yet the bizarreness is that that open democratic process is on the surface of something that struck me as being Maoist, which is that there’s this one point of view that’s then gonna be the official one.

And then I also noticed that that process of people being put into a global system in which they’re supposed to work together toward some sort of dominating megabrain that’s the one truth didn’t seem to bring out the best in people, that people turned aggressive and mean-spirited when they interacted in that context. I had worked on some content for Britannica years and years ago, and I never experienced the kind of just petty meanness that’s just commonplace in everything about the internet. Among many other places, on Wikipedia.

On the one hand, you have this very open collective process actually in the service of this very domineering global brain, destroyer of local interpretation, destroyer of individual voice process. And then you also have this thing that seems to bring out this meanness in people, where people get into this kind of mob mentality and they become unkind to each other. And those two things have happened all over the internet; they’re both very present in Facebook, everywhere. And it’s a bit of a subtle debate, and it takes a while to work through it with somebody who doesn’t see what I’m talking about. That was what I was talking about.

But then there’s this other thing about the centralization of economic power. What happened with Maoists and with communists in general, and neo-Marxists and all kinds of similar movements, is that on the surface, you say everybody shares, everybody’s equal, we’re not gonna have this capitalist concentration. But then there’s some other entity that might not look like traditional capitalism, but is effectively some kind of robber baron that actually owns everything, some kind of Communist Party actually controls everything, and you have just a very small number of individuals who become hyperempowered and everybody else loses power.

And exactly the same thing has happened with the supposed openness of the internet, where you say, “Isn’t it wonderful, with Facebook and Twitter anybody can express themselves. Everybody’s an equal, everybody’s empowered.” But in fact, we’re in a period of time of extreme concentration of wealth and power, and it’s precisely around those who run the biggest computers. So the truth and the effect is just the opposite of what the rhetoric is and the immediate experience.

A lot of people were furious with me over Digital Maoism and felt that I had betrayed our cause or something, and I lost some friends over it. And some of it was actually hard. But I fail to see how it was anything but accurate. I don’t wanna brag, but I think I was just right. I think that that’s what was going on and that’s what’s happening in China. But what’s worse is that it’s happening elsewhere.

The thing is, I’m not sure that what’s going on in the U.S. is that distinct from what’s going on in China. I think there are some differences, but they’re in degree; they’re not stark. The Chinese are saying if you have a low social rating you can’t get on the subway, but on the other hand, we’re doing algorithmic profiling that’s sending people to jail, and we know that the algorithms are racist. Are we really that much better?

I’m not really sure. I think it would be hard to determine it. But I think we’re doing many of the same things; it’s just that we package them in a slightly different way when we tell stories to ourselves.

This is something I write about, you know I have another book coming out shortly?

Yeah, that was gonna be where I took this next.

One of the things that I’ve been concerned about is this illusion where you think that you’re in this super-democratic open thing, but actually it’s exactly the opposite; it’s actually creating a super concentration of wealth and power, and disempowering you. This has been particularly cruel politically. Every time there’s some movement, like the Black Lives Matter movement, or maybe now the March for Our Lives movement, or #MeToo, or very classically the Arab Spring, you have this initial period where people feel like they’re on this magic-carpet ride and that social media is letting them broadcast their opinions for very low cost, and that they’re able to reach people and organize faster than ever before. And they’re thinking, Wow, Facebook and Twitter are these wonderful tools of democracy.

But then the algorithms have to maximize value from all the data that’s coming in. So they test how to use that data. And it just turns out as a matter of course, that the same data that is a positive, constructive process for the people who generated it — Black Lives Matter, or the Arab Spring — can be used to irritate other groups. And unfortunately there’s this asymmetry in human emotions where the negative emotions of fear and hatred and paranoia and resentment come up faster, more cheaply, and they’re harder to dispel than the positive emotions. So what happens is, every time there’s some positive motion in these networks, the negative reaction is actually more powerful. So when you have a Black Lives Matter, the result of that is the empowerment of the worst racists and neo-Nazis in a way that hasn’t been seen in generations. When you have an Arab Spring, the result ultimately is the network empowerment of ISIS and other extremists — bloodthirsty, horrible things, the likes of which haven’t been seen in the Arab world or in Islam for years, if ever.

Black Lives Matter has incredible visibility, but the reality is that even though it has had an enormous effect on the discursive level, and at making the country fixated on this conversation, that’s distinct from political force necessary to effect that change. What do you think about the sort of gap between what Silicon Valley platforms have promised in that respect and then the material reality?

That observation — that social-media politics is all talk and no action or something, or that it’s empty — is compatible with, but a little bit different from, what I was saying. I’m saying that it empowers its opposite more than the original good intention. And those two things can both be true at once, but I just wanna point out that they’re two different explanations for why nothing decent seems to come out in the end.

I want to be wrong. I especially wanna be wrong about the March for Our Lives kids. I really wanna be wrong about them. I want them to not fall into this because they’re our hope, they’re the future of our country, so I very deeply, profoundly wanna be wrong. I don’t want their social-media data to empower the opposite movement that ends up being more powerful because negative emotions are more powerful. I just wanna be wrong. I so wanna be telling you bullshit right now.

So far it’s been right, but that doesn’t mean it will continue to be. So please let me be wrong.

Platforms seem trapped in this fundamental tension, and I’m just not sure how they break out of that.

My feeling is that if the theory is correct that we got into this by trying to be socialist and libertarian at the same time, and getting the worst of both worlds, then we have to choose. You either have to say, “Okay, Facebook is not going to be a business anymore. We said we wanted to create this thing to connect people, but we’re actually making the world worse, so we’re not gonna allow people to advertise on it; we’re not gonna allow anybody to have any influence on your feed but you. This is all about you. We’re gonna turn it into a nonprofit; we’re gonna give it to each country; it’ll be nationalized. We’ll do some final stock things so all the people who contributed to it will be rich beyond their dreams. But then after that it’s done; it’s not a business. We’ll buy back everybody’s stock and it’s done. It’s over. That’s it.”

[Blogger note: this choice between socialism and libertarianism is a highly interesting and crucial question, but I don’t think there’s one answer. Facebook strikes me as a dysfunctional idea from the beginning. Social interaction doesn’t scale; data networks scale. A global gossip network like Facebook makes almost no sense. I suspect FB will be competed down to many different functional social media models rather than one concentrated behemoth. Something like search or Wikipedia seems rather different in nature. Google Search looks more and more like a public good, which means it is likely to become a regulated public utility. It’s not exactly clear how search works as a public utility, but I think the political imperative is there.]

That’s one option. So it just turns into a socialist enterprise; we let it be nationalized and it’s gone. The other option is to monetize it. And that’s the one that I’m personally more interested in. And what that would look like is, we’d ask those who can afford to — which would be a lot of people in the world, certainly most people in the West — to start paying for it. And then we’d also pay people who provide data into it that’s exceptionally valuable to the network, and it would become a source of economic growth. And we would outlaw advertising on it. There would no longer be third parties paying to influence you.

Because as long as you have advertising, you have this perverse incentive to make it manipulative. You can’t have a behavior-modification machine with advertisers and have anything ethical; it’s not possible. You could get away with it barely with television because television wasn’t as effective at modifying people. But this, there’s no ethical way to have advertising.

So you’d ban advertising, and you’d start paying people, a subset of people; a minority of people would start earning their living because they just do stuff that other people love to look at over Facebook or the other social networks, or YouTube for that matter. And then most people would pay into it in the same way that we pay into something like Netflix or HBO Now.

And one of the things I wanna point out is that back at the time when Facebook was founded, the belief was that in the future there wouldn’t be paid people making movies and television because armies of unpaid volunteers organized through our network schemes would make superior content, just like what happened with Wikipedia. But what actually happened is, when people started paying for Netflix, we got what we call Peak TV — things got much better as a result of it being monetized.

So I think if we had a situation where people were paying for something like Facebook, and being paid for it, and advertising was absolutely outlawed, the only customer would be the user, there would be no other customer. If we got into that situation, I think we have at least a chance of achieving Peak Social Media, just like we achieved Peak TV. We might actually see things improve a great deal.

So that’s the solution that I think is better. But we can’t do this combination of libertarian and communist ideology. It just doesn’t work. You have to choose one.

You’ve written this book, Ten Arguments for Deleting Your Social Media Accounts. I don’t want to make you summarize the whole book, but I want to ask what you thought was the most urgent argument, and to explain why.
Okay. By the way, it’s … For Deleting Your Social Media Accounts Right Now.

Right now! So the whole thing is already urgent, so which of these urgent pleas do you believe to be the most pressing?

There’s one that’s a little complicated, which is the last one. Because I have the one about politics, and I have the one about economics. That it’s ruining politics, it’s empowering the most obnoxious people to be the most powerful inherently, and that’s destroying the world. I have the one about economics, how it’s centralizing wealth even while it seems to be democratizing it. I have the one about how it makes you feel sad; I have all these different ones.

But at the end, I have one that’s a spiritual one. The argument is that social media hates your soul. And it suggests that there’s a whole spiritual, religious belief system along with social media like Facebook that I think people don’t like. And it’s also fucking phony and false. It suggests that life is some kind of optimization, like you’re supposed to be struggling to get more followers and friends. Zuckerberg even talked about how the new goal of Facebook would be to give everybody a meaningful life, as if something about Facebook is where the meaning of life is.

It suggests that you’re just a cog in a giant global brain or something like that. The rhetoric from the companies is often about AI, that what they’re really doing — like YouTube’s parent company, Google, says what they really are is building the giant global brain that’ll inherit the earth and they’ll upload you to that brain and then you won’t have to die. It’s very, very religious in the rhetoric. And so it’s turning into this new religion, and it’s a religion that doesn’t care about you. It’s a religion that’s completely lacking in empathy or any kind of personal acknowledgment. And it’s a bad religion. It’s a nerdy, empty, sterile, ugly, useless religion that’s based on false ideas. And I think that of all of the things, that’s the worst thing about it.

I mean, it’s sort of like a cult of personality. It’s like in North Korea or some regime where the religion is your purpose to serve this one guy. And your purpose is to serve this one system, which happens to be controlled by one guy, in the case of Facebook.

It’s not as blunt and out there, but that is the underlying message of it and it’s ugly and bad. I loathe it, and I think a lot of people have that feeling, but they might not have articulated it or gotten it to the surface because it’s just such a weird and new situation.

On the other hand, there’s a rising backlash that may end the platforms before they have the opportunity to take root and produce yet another vicious problem.

I’m in my late 50s now. I have an 11-year-old daughter, and the thing that bothers me so much is that we’re giving them a world that isn’t as good as the world we received. We’re giving them a world in which their hopes for being able to create a decent, happy, reasonably low-stress life, where they can have their own kids, it’s just not as good as what we were given. We have not done well by them.

And then to say that observing our own mistakes means that you’re old and don’t get it is profoundly counterproductive. It’s really just a way of evading our own responsibility. The truth is that we totally have screwed over younger generations. And that’s a bigger story than just the social-media and tech thing, but the social-media and tech thing is a big part of it. We’ve created a scammy society where we concentrate wealth in ways that are petty and not helpful, and we’ve given them a world of far fewer options than we had. There’s nothing I want more than for the younger people to create successful lives and create a world that they love. I mean, that’s what it’s all about. But to say that the path to that is for them to agree with the thing we made for them is just so self-serving and so obnoxiously narcissistic that it makes me wanna throw up.

This interview has been edited and condensed for length and clarity.

Can’t We All Just Be Friends?

This is the nefarious side of social media: a New York Times article on how Russian hackers used Facebook to try to influence the U.S. and French elections.

Will Mark Zuckerberg ‘Like’ This Column?

tuka offers an alternative to uncontrolled social media. First, there is no anonymity. Second, the user feed is content only: no status updates, opinions, or other distractions like cat and food photos. This means we know who is posting, and what they post is tangible creative content. These two criteria ensure that the connections people make on tuka are meaningful and real.

The final caveat here is to remind ourselves not to be so easily manipulated by emotional triggers.

Vampire Squids?

 


I would say this essay by Franklin Foer is a bit alarmist, though his book is worth reading and taking to heart. We are gradually becoming aware of the value of our personal data and I expect consumers will soon figure out how to demand a fair share of that value, else they will withdraw.

Technology is most often disrupted by newer technology that better serves the needs of users. For Web 2.0 business models, our free data is their lifeblood and soon we may be able to cut them off. Many hope that’s where Web 3.0 is going.

tuka is a technology model that seeks to do exactly that for creative content providers, their audiences, and promoter/fans.

How Silicon Valley is erasing your individuality

Washington Post, September 8, 2017

 

Franklin Foer is author of “World Without Mind: The Existential Threat of Big Tech,” from which this essay is adapted.

Until recently, it was easy to define our most widely known corporations. Any third-grader could describe their essence. Exxon sells gas; McDonald’s makes hamburgers; Walmart is a place to buy stuff. This is no longer so. Today’s ascendant monopolies aspire to encompass all of existence. Google derives from googol, a number (1 followed by 100 zeros) that mathematicians use as shorthand for unimaginably large quantities. Larry Page and Sergey Brin founded Google with the mission of organizing all knowledge, but that proved too narrow. They now aim to build driverless cars, manufacture phones and conquer death. Amazon, which once called itself “the everything store,” now produces television shows, owns Whole Foods and powers the cloud. The architect of this firm, Jeff Bezos, even owns this newspaper.

Along with Facebook, Microsoft and Apple, these companies are in a race to become our “personal assistant.” They want to wake us in the morning, have their artificial intelligence software guide us through our days and never quite leave our sides. They aspire to become the repository for precious and private items, our calendars and contacts, our photos and documents. They intend for us to turn unthinkingly to them for information and entertainment while they catalogue our intentions and aversions. Google Glass and the Apple Watch prefigure the day when these companies implant their artificial intelligence in our bodies. Brin has mused, “Perhaps in the future, we can attach a little version of Google that you just plug into your brain.”

More than any previous coterie of corporations, the tech monopolies aspire to mold humanity into their desired image of it. They think they have the opportunity to complete the long merger between man and machine — to redirect the trajectory of human evolution. How do I know this? In annual addresses and town hall meetings, the founding fathers of these companies often make big, bold pronouncements about human nature — a view that they intend for the rest of us to adhere to. Page thinks the human body amounts to a basic piece of code: “Your program algorithms aren’t that complicated,” he says. And if humans function like computers, why not hasten the day we become fully cyborg?

To take another grand theory, Facebook chief Mark Zuckerberg has exclaimed his desire to liberate humanity from phoniness, to end the dishonesty of secrets. “The days of you having a different image for your work friends or co-workers and for the other people you know are probably coming to an end pretty quickly,” he has said. “Having two identities for yourself is an example of a lack of integrity.” Of course, that’s both an expression of idealism and an elaborate justification for Facebook’s business model.

There’s an oft-used shorthand for the technologist’s view of the world. It is assumed that libertarianism dominates Silicon Valley, and that isn’t wholly wrong. High-profile devotees of Ayn Rand can be found there. But if you listen hard to the titans of tech, it’s clear that their worldview is something much closer to the opposite of a libertarian’s veneration of the heroic, solitary individual. The big tech companies think we’re fundamentally social beings, born to collective existence. They invest their faith in the network, the wisdom of crowds, collaboration. They harbor a deep desire for the atomistic world to be made whole. (“Facebook stands for bringing us closer together and building a global community,” Zuckerberg wrote in one of his many manifestos.) By stitching the world together, they can cure its ills.

Rhetorically, the tech companies gesture toward individuality — to the empowerment of the “user” — but their worldview rolls over it. Even the ubiquitous invocation of users is telling: a passive, bureaucratic description of us. The big tech companies (the Europeans have lumped them together as GAFA: Google, Apple, Facebook, Amazon) are shredding the principles that protect individuality. Their devices and sites have collapsed privacy; they disrespect the value of authorship, with their hostility toward intellectual property. In the realm of economics, they justify monopoly by suggesting that competition merely distracts from the important problems like erasing language barriers and building artificial brains. Companies should “transcend the daily brute struggle for survival,” as Facebook investor Peter Thiel has put it.

When it comes to the most central tenet of individualism — free will — the tech companies have a different way. They hope to automate the choices, both large and small, we make as we float through the day. It’s their algorithms that suggest the news we read, the goods we buy, the paths we travel, the friends we invite into our circles. [Blogger Note: Just as computers can’t write music the way humans do, algorithms cannot really define our tastes. Our sensibilities are excited by serendipity, innovation, and surprise.]

It’s hard not to marvel at these companies and their inventions, which often make life infinitely easier. But we’ve spent too long marveling. The time has arrived to consider the consequences of these monopolies, to reassert our role in determining the human path. Once we cross certain thresholds — once we remake institutions such as media and publishing, once we abandon privacy — there’s no turning back, no restoring our lost individuality.

***

Over the generations, we’ve been through revolutions like this before. Many years ago, we delighted in the wonders of TV dinners and the other newfangled foods that suddenly filled our kitchens: slices of cheese encased in plastic, oozing pizzas that emerged from a crust of ice, bags of crunchy tater tots. In the history of man, these seemed like breakthrough innovations. Time-consuming tasks — shopping for ingredients, tediously preparing a recipe and tackling a trail of pots and pans — were suddenly and miraculously consigned to history.

The revolution in cuisine wasn’t just enthralling. It was transformational. New products embedded themselves deeply in everyday life, so much so that it took decades for us to understand the price we paid for their convenience, efficiency and abundance. Processed foods were feats of engineering, all right — but they were engineered to make us fat. Their delectable taste required massive quantities of sodium and sizable stockpiles of sugar, which happened to reset our palates and made it harder to sate hunger. It took vast quantities of meat and corn to fabricate these dishes, and a spike in demand remade American agriculture at a terrible environmental cost. A whole new system of industrial farming emerged, with penny-conscious conglomerates cramming chickens into feces-covered pens and stuffing them full of antibiotics. By the time we came to understand the consequences of our revised patterns of consumption, the damage had been done to our waistlines, longevity, souls and planet.

Something like the midcentury food revolution is now reordering the production and consumption of knowledge. Our intellectual habits are being scrambled by the dominant firms. Giant tech companies have become the most powerful gatekeepers the world has ever known. Google helps us sort the Internet, by providing a sense of hierarchy to information; Facebook uses its algorithms and its intricate understanding of our social circles to filter the news we encounter; Amazon bestrides book publishing with its overwhelming hold on that market.

Such dominance endows these companies with the ability to remake the markets they control. As with the food giants, the big tech companies have given rise to a new science that aims to construct products that pander to their consumers. Unlike the market research and television ratings of the past, the tech companies have a bottomless collection of data, acquired as they track our travels across the Web, storing every shard of data about our habits in the hope that it may prove useful. They have compiled an intimate portrait of the psyche of each user — a portrait that they hope to exploit to seduce us into a compulsive spree of binge clicking and watching. And it works: On average, each Facebook user spends one-sixteenth of their day on the site.

In the realm of knowledge, monopoly and conformism are inseparable perils. The danger is that these firms will inadvertently use their dominance to squash diversity of opinion and taste. Concentration is followed by homogenization. As news media outlets have come to depend heavily on Facebook and Google for traffic — and therefore revenue — they have rushed to produce articles that will flourish on those platforms. This leads to a duplication of the news like never before, with scores of sites across the Internet piling onto the same daily outrage. It’s why a picture of a mysteriously colored dress generated endless articles, why seemingly every site recaps “Game of Thrones.” Each contribution to the genre adds little, except clicks. Old media had a pack mentality, too, but the Internet promised something much different. And the prevalence of so much data makes the temptation to pander even greater.

This is true of politics. Our era is defined by polarization, warring ideological gangs that yield no ground. Division, however, isn’t the root cause of our unworkable system. There are many causes, but a primary problem is conformism. Facebook has nurtured two hive minds, each residing in an informational ecosystem that yields head-nodding agreement and penalizes dissenting views. This is the phenomenon that the entrepreneur and author Eli Pariser famously termed the “Filter Bubble” — how Facebook mines our data to keep giving us the news and information we crave, creating a feedback loop that pushes us deeper and deeper into our own amen corners.

As the 2016 presidential election so graphically illustrated, a hive mind is an intellectually incapacitated one, with diminishing ability to tell fact from fiction, with an unshakable bias toward the party line. The Russians understood this, which is why they invested so successfully in spreading dubious agitprop via Facebook. And it’s why a raft of companies sprouted — Occupy Democrats, the Angry Patriot, Being Liberal — to get rich off the Filter Bubble and to exploit our susceptibility to the lowest-quality news, if you can call it that.

Facebook represents a dangerous deviation in media history. Once upon a time, elites proudly viewed themselves as gatekeepers. They could be sycophantic to power and snobbish, but they also felt duty-bound to elevate the standards of society and readers. Executives of Silicon Valley regard gatekeeping as the stodgy enemy of innovation — they see themselves as more neutral, scientific and responsive to the market than the elites they replaced — a perspective that obscures their own power and responsibilities. So instead of shaping public opinion, they exploit the public’s worst tendencies, its tribalism and paranoia.

***

During this century, we largely have treated Silicon Valley as a force beyond our control. A broad consensus held that lead-footed government could never keep pace with the dynamism of technology. By the time government acted against a tech monopoly, a kid in a garage would have already concocted some innovation to upend the market. Or, as Google’s Eric Schmidt put it, “Competition is one click away,” a nostrum that suggested the very structure of the Internet defied our historic concern for monopoly.

As individuals, we have similarly accepted the omnipresence of the big tech companies as a fait accompli. We’ve enjoyed their free products and next-day delivery with only a nagging sense that we may be surrendering something important. Such blitheness can no longer be sustained. Privacy won’t survive the present trajectory of technology — and with the sense of being perpetually watched, humans will behave more cautiously, less subversively. Our ideas about the competitive marketplace are at risk. With a decreasing prospect of toppling the giants, entrepreneurs won’t bother to risk starting new firms, a primary source of jobs and innovation. And the proliferation of falsehoods and conspiracies through social media, the dissipation of our common basis for fact, is creating conditions ripe for authoritarianism. Over time, the long merger of man and machine has worked out pretty well for man. But we’re drifting into a new era, when that merger threatens the individual. We’re drifting toward monopoly, conformism, their machines. Perhaps it’s time we steer our course.

Digital Monopoly

Digital platforms force a rethink in competition theory


Economists need to provide regulators with tools to deal with market concentration

by: Diane Coyle, FT.com

Anxiety about the health of competition in the US economy — and elsewhere — is growing. The concern may be well founded but taking forceful action will require economists to provide some practical ways of proving and measuring the harm caused by increasing market power in the digital economy.

The forces driving concentration do not affect the US alone. In all digital markets, the cost structure of high upfront costs and low additional or marginal costs means there are large economies of scale. The broad impact of digital technology has been to increase the scope of the markets many businesses can hope to reach.
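
To make that cost structure concrete, here is a minimal sketch with purely hypothetical numbers (not figures from the article): once a large fixed cost has been paid, each additional user costs almost nothing to serve, so average cost per user keeps falling as the platform grows.

```python
# Illustrative sketch only: hypothetical numbers, not data about any real platform.
# High upfront (fixed) costs plus near-zero marginal costs mean the average cost
# per user falls continuously as the user base grows, i.e. economies of scale.

FIXED_COST = 100_000_000   # hypothetical upfront cost to build the platform
MARGINAL_COST = 0.05       # hypothetical cost of serving one additional user

def average_cost(users: int) -> float:
    """Fixed cost spread over all users, plus the per-user marginal cost."""
    return FIXED_COST / users + MARGINAL_COST

for users in (1_000_000, 10_000_000, 100_000_000, 1_000_000_000):
    print(f"{users:>13,} users -> average cost per user: ${average_cost(users):.2f}")
```

In this toy example the average cost is about $100 per user at one million users and roughly 15 cents at a billion, which is why the largest incumbent can always undercut a smaller entrant.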

How will investment in physical networks or content get funded if an incumbent using the network and content captures all the profit downstream?

In pre-digital days, the question an economist would ask is whether the efficiencies gained by big or merging companies would be passed on to consumers in the form of lower prices. Another key question was whether it would still be possible for new entrants to break into the market.

Digital platforms make these questions harder to answer.

Read more… (Paywall – see comment below)

Google This.

(Photo: Mark Lennihan, AP)

Another argument for treating these companies as public utilities (Google more than Facebook). From USA Today:

I invested early in Google and Facebook and regret it. I helped create a monster.

‘Brain hacking’ Internet monopolies menace public health, democracy, writes Roger McNamee.

I invested in Google and Facebook years before their first revenue and profited enormously. I was an early adviser to Facebook’s team, but I am terrified by the damage being done by these Internet monopolies.

Technology has transformed our lives in countless ways, mostly for the better. Thanks to the now ubiquitous smartphone, tech touches us from the moment we wake up until we go to sleep. While the convenience of smartphones has many benefits, the unintended consequences of well-intentioned product choices have become a menace to public health and to democracy.

Facebook and Google get their revenue from advertising, the effectiveness of which depends on gaining and maintaining consumer attention. Borrowing techniques from the gambling industry, Facebook, Google and others exploit human nature, creating addictive behaviors that compel consumers to check for new messages, respond to notifications, and seek validation from technologies whose only goal is to generate profits for their owners.

The people at Facebook and Google believe that giving consumers more of what they want and like is worthy of praise, not criticism. What they fail to recognize is that their products are not making consumers happier or more successful.

Like gambling, nicotine, alcohol or heroin, Facebook and Google — the latter most importantly through its YouTube subsidiary — produce short-term happiness with serious negative consequences in the long term.

Users fail to recognize the warning signs of addiction until it is too late. There are only 24 hours in a day, and technology companies are making a play for all of them. The CEO of Netflix recently noted that his company’s primary competitor is sleep.

How does this work? A 2013 study found that average consumers check their smartphones 150 times a day. And that number has probably grown. People spend 50 minutes a day on Facebook. Other social apps such as Snapchat, Instagram and Twitter combine to take up still more time. Those companies maintain a profile on every user, which grows every time you like, share, search, shop or post a photo. Google also is analyzing credit card records of millions of people.

As a result, the big Internet companies know more about you than you know about yourself, which gives them huge power to influence you, to persuade you to do things that serve their economic interests. Facebook, Google and others compete for each consumer’s attention, reinforcing biases and reducing the diversity of ideas to which each is exposed. The degree of harm grows over time.

Consider a recent story from Australia, where someone at Facebook told advertisers that they had the ability to target teens who were sad or depressed, which made them more susceptible to advertising. In the United States, Facebook once demonstrated its ability to make users happier or sadder by manipulating their news feed. While it did not turn either capability into a product, the fact remains that Facebook influences the emotional state of users every moment of every day. Former Google design ethicist Tristan Harris calls this “brain hacking.”

The fault here is not with search and social networking, per se. Those services have enormous value. The fault lies with advertising business models that drive companies to maximize attention at all costs, leading to ever more aggressive brain hacking.

The Facebook application has 2 billion active users around the world. Google’s YouTube has 1.5 billion. These numbers are comparable to Christianity and Islam, respectively, giving Facebook and Google influence greater than that of most First World countries. They are too big and too global to be held accountable. Other attention-based apps — including Instagram, WhatsApp, WeChat, Snapchat and Twitter — also have user bases between 100 million and 1.3 billion. Not all their users have had their brains hacked, but all are on that path. And there are no watchdogs.

Anyone who wants to pay for access to addicted users can work with Facebook and YouTube. Lots of bad people have done it. One firm was caught using Facebook tools to spy on law-abiding citizens. A federal agency confronted Facebook about the use of its tools by financial firms to discriminate based on race in the housing market. America’s intelligence agencies have concluded that Russia interfered in our election and that Facebook was a key platform for spreading misinformation. For the price of a few fighter aircraft, Russia won an information war against us.

Incentives being what they are, we cannot expect Internet monopolies to police themselves. There is little government regulation and no appetite to change that. If we want to stop brain hacking, consumers will have to force changes at Facebook and Google.

Roger McNamee is the managing director and a co-founder of Elevation Partners, an investment partnership focused on media/entertainment content and consumer technology.

FAANGs = Public Utilities?

Could it be that these companies — and Google in particular — have become natural monopolies by supplying an entire market’s demand for a service, at a price lower than what would be offered by two competing firms? And if so, is it time to regulate them like public utilities?
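
One way to see the natural-monopoly condition is to compare total costs under the same stylized cost structure as above; the figures below are hypothetical and purely illustrative.

```python
# Hypothetical, illustrative figures only. A market is a natural monopoly when one
# firm can serve total demand more cheaply than two competing firms could, which
# happens when each entrant must duplicate the same large fixed cost.

FIXED_COST = 100_000_000    # hypothetical cost each firm pays to build its network
MARGINAL_COST = 0.05        # hypothetical cost per user served
TOTAL_DEMAND = 500_000_000  # hypothetical number of users in the whole market

def total_cost(users: int) -> float:
    return FIXED_COST + MARGINAL_COST * users

one_firm = total_cost(TOTAL_DEMAND)            # a single firm serves everyone
two_firms = 2 * total_cost(TOTAL_DEMAND // 2)  # two firms split the market evenly

print(f"One firm serving the whole market: ${one_firm:,.0f}")
print(f"Two firms splitting the market:    ${two_firms:,.0f}")
print(f"Cost of duplication:               ${two_firms - one_firm:,.0f}")
```

In this toy setup the only difference is the second fixed cost, so the single provider is cheaper by exactly that amount; whether the real platforms meet this test is the regulatory question the rest of this section debates.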

Consider a historical analogy: the early days of telecommunications.

In 1895 a photograph of the business district of a large city might have shown 20 phone wires attached to most buildings. Each wire was owned by a different phone company, and none of them worked with the others. Because the competing networks did not interconnect, none captured the network effects that make a phone system valuable, and each was almost useless.

The solution was for a single company, American Telephone and Telegraph, to consolidate the industry by buying up all the small operators and creating a single network — a natural monopoly. The government permitted it, but then regulated this monopoly through the Federal Communications Commission.

AT&T (also known as the Bell System) had its rates regulated, and was required to spend a fixed percentage of its profits on research and development. In 1925 AT&T set up Bell Labs as a separate subsidiary with the mandate to develop the next generation of communications technology, but also to do basic research in physics and other sciences. Over the next 50 years, the basics of the digital age — the transistor, the microchip, the solar cell, the microwave, the laser, cellular telephony — all came out of Bell Labs, along with eight Nobel Prizes.

In a 1956 consent decree in which the Justice Department allowed AT&T to maintain its phone monopoly, the government extracted a huge concession: All past patents were licensed (to any American company) royalty-free, and all future patents were to be licensed for a small fee. These licenses led to the creation of Texas Instruments, Motorola, Fairchild Semiconductor and many other start-ups.

True, the internet never had the same problems of interoperability. And Google’s route to dominance is different from the Bell System’s. Nevertheless it still has all of the characteristics of a public utility.

We are going to have to decide fairly soon whether Google, Facebook and Amazon are the kinds of natural monopolies that need to be regulated, or whether we allow the status quo to continue, pretending that unfettered monoliths don’t inflict damage on our privacy and democracy.

It is impossible to deny that Facebook, Google and Amazon have stymied innovation on a broad scale. To begin with, the platforms of Google and Facebook are the point of access to all media for the majority of Americans. While profits at Google, Facebook and Amazon have soared, revenues in media businesses like newspaper publishing or the music business have, since 2001, fallen by 70 percent.

According to the Bureau of Labor Statistics, newspaper publishers lost over half their employees between 2001 and 2016. Billions of dollars have been reallocated from creators of content to owners of monopoly platforms. All content creators dependent on advertising must negotiate with Google or Facebook as aggregator, the sole lifeline between themselves and the vast internet cloud.

It’s not just newspapers that are hurting. In 2015 two Obama economic advisers, Peter Orszag and Jason Furman, published a paper arguing that the rise in “supernormal returns on capital” at firms with limited competition is leading to a rise in economic inequality. The M.I.T. economists Scott Stern and Jorge Guzman explained that in the presence of these giant firms, “it has become increasingly advantageous to be an incumbent, and less advantageous to be a new entrant.”

There are a few obvious regulations to start with. Monopoly is made by acquisition — Google buying AdMob and DoubleClick, Facebook buying Instagram and WhatsApp, Amazon buying, to name just a few, Audible, Twitch, Zappos and Alexa. At a minimum, these companies should not be allowed to acquire other major firms, like Spotify or Snapchat.

The second alternative is to regulate a company like Google as a public utility, requiring it to license out patents, for a nominal fee, for its search algorithms, advertising exchanges and other key innovations.

The third alternative is to remove the “safe harbor” clause in the 1998 Digital Millennium Copyright Act, which allows companies like Facebook and Google’s YouTube to free ride on the content produced by others. The reason there are 40,000 Islamic State videos on YouTube, many with ads that yield revenue for those who posted them, is that YouTube does not have to take responsibility for the content on its network. Facebook, Google and Twitter claim that policing their networks would be too onerous. But that’s preposterous: They already police their networks for pornography, and quite well.

Removing the safe harbor provision would also force social networks to pay for the content posted on their sites. A simple example: One million downloads of a song on iTunes would yield the performer and his record label about $900,000. One million streams of that same song on YouTube would earn them about $900.
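
A quick back-of-the-envelope check of that comparison, using the article's round numbers rather than any official rate card:

```python
# Sanity-check of the payout comparison quoted above. The two totals are the
# article's round numbers, not official iTunes or YouTube rates.

DOWNLOADS = 1_000_000
STREAMS = 1_000_000
ITUNES_TOTAL = 900_000  # dollars for one million downloads (as quoted)
YOUTUBE_TOTAL = 900     # dollars for one million streams (as quoted)

per_download = ITUNES_TOTAL / DOWNLOADS  # about $0.90 per download
per_stream = YOUTUBE_TOTAL / STREAMS     # about $0.0009 per stream

print(f"Implied payout per download: ${per_download:.4f}")
print(f"Implied payout per stream:   ${per_stream:.4f}")
print(f"Downloads pay roughly {per_download / per_stream:,.0f}x more per unit")
```

On these figures a download pays about a thousand times more per unit than a stream, which is the gap the safe-harbor argument turns on.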

I’m under no delusion that, with libertarian tech moguls like Peter Thiel in President Trump’s inner circle, antitrust regulation of the internet monopolies will be a priority. Ultimately we may have to wait four years, at which time the monopolies will be so dominant that the only remedy will be to break them up. Force Google to sell DoubleClick. Force Facebook to sell WhatsApp and Instagram.

Woodrow Wilson was right when he said in 1913, “If monopoly persists, monopoly will always sit at the helm of the government.” We ignore his words at our peril.