Why We’re Jaded with Facebook


Facebook has been under constant fire for more than a year now and seems unable to answer its critics. Under that criticism, the company’s executive team has repeatedly promised to make user privacy its primary concern, until the next revelation exposes its duplicity. Now, it seems, every other week another article demands that Facebook be broken up or placed under government oversight.

We might wonder: what exactly is wrong with Facebook, and why can’t the company fix it?

The answers lie in the faulty logic of Facebook as a social network that connects the world, and in the financial business model required to fund that mission. Both efforts fight a natural contradiction with the real reasons people use Facebook.

Let’s address the social aspect first. Facebook started as an on-campus online gossip network at Harvard University. This is the secret of its appeal – people like to gossip about others within their network of peers. The behavior went viral, expanding from Harvard to Yale, Princeton, and the other Ivies, then to universities across the country. Nobody is as concerned with social status as young people between the ages of 13 and 21.

But then Facebook decided its gossip model should go public, and it proudly touted the network’s rapid growth across the globe – to the tune of more than 2 billion users. We even got a movie out of it. But let’s consider the logic of such a global gossip network because, frankly, it makes no sense.

Gossip serves a very useful social and evolutionary purpose, despite being popularly dismissed as “small talk,” “idle talk,” or even as malicious or “nosy.” Robin Dunbar (he of Dunbar’s Number = 150) explains how gossip helps us maintain social relationships in groups and also helps community members sanction free riders and those who break established social norms (“Gossip in Evolutionary Perspective”). In this way, gossip provides a means of gaining information about individuals, cementing social bonds, and engaging in indirect aggression, helping people learn how to live in their cultural society. Gossip anecdotes communicate rules in narrative form, such as by describing how someone else came to grief by violating social norms.

Certainly there appears to be something about gossip that is innate: our entertainment world is pretty much driven commercially by celebrity gossip. But we don’t know these people!

Dunbar actually extended his research to online social networks, specifically using Facebook as a test case of whether network technology relaxes the constraints that limit the size of offline social relationships (link). What he found was that the 150 number still holds for meaningful online social networks. In other words, the human brain can maintain roughly 150 social connections; beyond that, connections fall to the level of casual acquaintances. According to surveys, this is the experience of most Facebook users. Facebook “friends” are not really friends in the vernacular meaning of the word.

So, a network that connects us to roughly 2 billion users across the globe doesn’t make a whole lot of sense for the benefits of gossip. Rather, a gossip network that extends to people we have no personal relationship with tends to reinforce the negative aspects of gossip, i.e., meanness and rudeness. We can observe that celebrity gossip tends to focus on caricatures that emphasize the extremes of hero worship and cruel pettiness. In similar fashion, Facebook is most useful for small friendship networks that cohere around common interests or personal relationships, and the limits on those networks tend to approximate 150 people.

The second reason Facebook is failing as a social network relates to its ad-driven revenue model. If I am using Facebook as a way to connect with my friends, I certainly resent a third-party advertiser trying to insert itself into the middle of that communication channel (just imagine advertisers interrupting in the middle of your phone call!). How many of us were turned off Facebook a couple of years ago, when our feeds were suddenly flooded with advertisements for things we had no interest in? The network data Facebook is selling to advertisers is weak, not robust. We know what our friends like, if they are really friends, and Facebook’s algorithms do a poor job of approximating that. “Like” clicks are not really likes, and digital advertisers know it.

The problem here is that Facebook ad rates are a function of the number of users FB claims to reach and the flow of network information across those user nodes, even if it’s Candy Crush games or humorous cat tricks. Facebook cannot really evaluate the subjective value of the information flow, so it merely sells it all in targeted user bundles. This does not serve end users (or advertisers) very well and the attrition rate is evidence of general user dissatisfaction. I would guess that most users stick with Facebook for the positive value they receive from far-flung friend networks and the lack of a viable alternative. But then we end up ignoring most of the white noise on our feeds, threatening the financial viability of FB’s revenue model.

So where does this lead?

Frankly, I would argue Facebook’s longevity under its current business model is doubtful. Gossip makes sense and can be tolerated in small community groups, while wider social networks make sense if they are limited to common interests. Facebook “Groups” seem to exhibit some of these qualities, so perhaps that is a direction FB can move in. But the problem then is that it is a much less valuable Facebook under its ad-revenue model. Market competition and alternative OSNs may eat into FB’s global network, forcing FB to adapt to a smaller footprint. That is likely to be a difficult financial adjustment for a company of FB’s size and reach. But technology cuts both ways, and today’s Facebook may just be tomorrow’s obsolescence. Personally, I would prefer a social network that delivers more meaningful connections to other people and allows me to filter out a lot of the white noise. That can’t happen as long as the network servers make money off white noise.

 

Madmen and the Godless Algorithm


This article from The New Yorker is a good overview of the advertising model that has dominated our commercialism for decades and has now gone on digital steroids. The disruption of ad technology has interesting implications.

How the Math Men Overthrew the Mad Men

By Ken Auletta

Once, Mad Men ruled advertising. They’ve now been eclipsed by Math Men—the engineers and data scientists whose province is machines, algorithms, pureed data, and artificial intelligence. Yet Math Men are beleaguered, as Mark Zuckerberg demonstrated when he humbled himself before Congress, in April. Math Men’s adoration of data—coupled with their truculence and an arrogant conviction that their “science” is nearly flawless—has aroused government anger, much as Microsoft did two decades ago.

The power of Math Men is awesome. Google and Facebook each has a market value exceeding the combined value of the six largest advertising and marketing holding companies. Together, they claim six out of every ten dollars spent on digital advertising, and nine out of ten new digital ad dollars. They have become more dominant in what is estimated to be an up to two-trillion-dollar annual global advertising and marketing business. Facebook alone generates more ad dollars than all of America’s newspapers, and Google has twice the ad revenues of Facebook.

In the advertising world, Big Data is the Holy Grail, because it enables marketers to target messages to individuals rather than general groups, creating what’s called addressable advertising. And only the digital giants possess state-of-the-art Big Data. “The game is no longer about sending you a mail order catalogue or even about targeting online advertising,” Shoshana Zuboff, a professor of business administration at the Harvard Business School, wrote on faz.net, in 2016. “The game is selling access to the real-time flow of your daily life—your reality—in order to directly influence and modify your behavior for profit.” Success at this “game” flows to those with the “ability to predict the future—specifically the future of behavior,” Zuboff writes. She dubs this “surveillance capitalism.” [I question whether this will really work as anticipated once everybody is hip to the game.]

However, to thrash just Facebook and Google is to miss the larger truth: everyone in advertising strives to eliminate risk by perfecting targeting data.[This is the essence of what we’re doing here – reducing the risk of uncertainty.] Protecting privacy is not foremost among the concerns of marketers; protecting and expanding their business is. The business model adopted by ad agencies and their clients parallels Facebook and Google’s. Each aims to massage data to better identify potential customers. Each aims to influence consumer behavior. To appreciate how alike their aims are, sit in an agency or client marketing meeting and you will hear wails about Facebook and Google’s “walled garden,” their unwillingness to share data on their users. When Facebook or Google counter that they must protect “the privacy” of their users, advertisers cry foul: You’re using the data to target ads we paid for—why won’t you share it, so that we can use it in other ad campaigns? [But who really owns your data? Even if you choose to give it away?]

This preoccupation with Big Data is also revealed by the trend in the advertising-agency business to have the media agency, not the creative Mad Men team, occupy the prime seat in pitches to clients, because it’s the media agency that harvests the data to help advertising clients better aim at potential consumers. Agencies compete to proclaim their own Big Data horde. W.P.P.’s GroupM, the largest media agency, has quietly assembled what it calls its “secret sauce,” a collection of forty thousand personally identifiable attributes it plans to retain on two hundred million adult Americans. Unlike Facebook or Google, GroupM can’t track most of what we do online. To parade their sensitivity to privacy, agencies reassuringly boast that they don’t know the names of people in their data bank. But they do have your I.P. address, which yields abundant information, including where you live. For marketers, the advantage of being able to track online behavior, the former senior GroupM executive Brian Lesser said—a bit hyperbolically, one hopes—is that “we know what you want even before you know you want it.”[That sounds like adman hubris rather than reality.]

Worried that Brian Lesser’s dream will become a nightmare, ProPublica has ferociously chewed on the Big Data privacy menace like a dog with a bone: in its series “Breaking the Black Box,” it wrote, “Facebook has a particularly comprehensive set of dossiers on its more than two billion members. Every time a Facebook member likes a post, tags a photo, updates their favorite movies in their profile, posts a comment about a politician, or changes their relationship status, Facebook logs it . . . When they use Instagram or WhatsApp on their phone, which are both owned by Facebook, they contribute more data to Facebook’s dossier.” Facebook offers advertisers more than thirteen hundred categories for ad targeting, according to ProPublica.

Google, for its part, has merged all the data it collects from its search, YouTube, and other services, and has introduced an About Me page, which includes your date of birth, phone number, where you work, mailing address, education, where you’ve travelled, your nickname, photo, and e-mail address. Amazon knows even more about you. Since it is the world’s largest store and sees what you’ve actually purchased, its data are unrivalled. Amazon reaches beyond what interests you (revealed by a Google search) or what your friends are saying (on Facebook) to what you actually purchase. With Amazon’s Alexa, it has an agent in your home that not only knows what you bought but when you wake up, what you watch, read, listen to, ask for, and eat. And Amazon is aggressively building up its meager ad sales, which gives it an incentive to exploit its data.

Data excite advertisers. Prowling his London office in jeans, Keith Weed, who oversees marketing and communications for Unilever, one of the world’s largest advertisers, described how mobile phones have elevated data as a marketing tool. “When I started in marketing, we were using secondhand data which was three months old,” he said. “Now with the good old mobile, I have individualized data on people. You don’t need to know their names . . . You know their telephone number. You know where they live, because it’s the same location as their PC.” Weed knows what times of the day you usually browse, watch videos, answer e-mail, travel to the office—and what travel routes you take. “From your mobile, I know whether you stay in four-star or two-star hotels, whether you go to train stations or airports. I use these insights along with what you’re browsing on your PC. I know whether you’re interested in horses or holidays in the Caribbean.” By using programmatic computers to buy ads targeting these individuals, he says, Unilever can “create a hundred thousand permutations of the same ad,” as they recently did with a thirty-second TV ad for Axe toiletries aimed at young men in Brazil. The more Keith Weed knows about a consumer, the better he can aim to target a sale.

Engineers and data scientists vacuum data. They see data as virtuous, yielding clues to the mysteries of human behavior, suggesting efficiencies (including eliminating costly middlemen, like agency Mad Men), offering answers that they believe will better serve consumers, because the marketing message is individualized. The more cool things offered, the more clicks, the more page views, the more user engagement. Data yield facts and advance a quest to be more scientific—free of guesses. As Eric Schmidt, then the executive chairman of Google’s parent company, Alphabet, said at the company’s 2017 shareholder meeting, “We start from the principles of science at Google and Alphabet.”

They believe there is nobility in their quest. By offering individualized marketing messages, they are trading something of value in exchange for a consumer’s attention. They also start from the principle, as the TV networks did, that advertising allows their product to be “free.” But, of course, as their audience swells, so does their data. Sandy Parakilas, who was Facebook’s operations manager on its platform team from 2011 to 2012, put it this way in a scathing Op-Ed for the Times, last November: “The more data it has on offer, the more value it creates for advertisers. That means it has no incentive to police the collection or use of that data—except when negative press or regulators are involved.” For the engineers, the privacy issue—like “fake news” and even fraud—was relegated to the nosebleed bleachers. [This fact should be obvious to all of us.]

With a chorus of marketers and citizens and governments now roaring their concern, the limitations of Math Men loom large. Suddenly, governments in the U.S. are almost as alive to privacy dangers as those in Western Europe, confronting Facebook by asking how the political-data company Cambridge Analytica, employed by Donald Trump’s Presidential campaign, was able to snatch personal data from eighty-seven million individual Facebook profiles. Was Facebook blind—or deliberately mute? Why, they are really asking, should we believe in the infallibility of your machines and your willingness to protect our privacy?

Ad agencies and advertisers have long been uneasy not just with the “walled gardens” of Facebook and Google but with their unwillingness to allow an independent company to monitor their results, as Nielsen does for TV and comScore does online. This mistrust escalated in 2016, when it emerged that Facebook and Google charged advertisers for ads that tricked other machines to believe an ad message was seen by humans when it was not. Advertiser confidence in Facebook was further jolted later in 2016, when it was revealed that the Math Men at Facebook overestimated the average time viewers spent watching video by up to eighty per cent. And in 2017, Math Men took another beating when news broke that Google’s YouTube and Facebook’s machines were inserting friendly ads on unfriendly platforms, including racist sites and porn sites. These were ads targeted by keywords, like “Confederacy” or “race”; placing an ad on a history site might locate it on a Nazi-history site.

The credibility of these digital giants was further subverted when Russian trolls proved how easy it was to disseminate “fake news” on social networks. When told that Facebook’s mechanized defenses had failed to screen out disinformation planted on the social network to sabotage Hillary Clinton’s Presidential campaign, Mark Zuckerberg publicly dismissed the assertion as “pretty crazy,” a position he later conceded was wrong.

By the spring of 2018, Facebook had lost control of its narrative. Their original declared mission—to “connect people” and “build a global community”—had been replaced by an implicit new narrative: we connect advertisers to people.[Indeed, connecting people on a global basis for human interaction really doesn’t make a lot of sense. A global gossip network? Unless, of course, you’re trying to monetize it.] It took Facebook and Google about five years before they figured out how to generate revenue, and today roughly ninety-five percent of Facebook’s dollars and almost ninety percent of Google’s comes from advertising. They enjoy abundant riches because they tantalize advertisers with the promise that they can corral potential customers. This is how Facebook lured developers and app makers by offering them a permissive Graph A.P.I., granting them access to the daily habits and the interchange with friends of its users. This Graph A.P.I. is how Cambridge Analytica got its paws on the data of eighty-seven million Americans.

The humiliating furor this news provoked has not subverted the faith among Math Men that their “science” will prevail. They believe advertising will be further transformed by new scientific advances like artificial intelligence that will allow machines to customize ads, marginalizing human creativity. With algorithms creating profiles of individuals, Airbnb’s then chief marketing officer, Jonathan Mildenhall, told me, “brands can engineer without the need for human creativity.” Machines will craft ads, just as machines will drive cars. But the ad community is increasingly mistrustful of the machines, and of Facebook and Google.[As they should be – the value has been over-hyped.] During a presentation at Advertising Week in New York this past September, Keith Weed offered a report to Facebook and Google. He gave them a mere “C” for policing ad fraud, and a harsher “F” for cross-platform transparency, insisting, “We’ve got to see over the walled gardens.”

That mistrust has gone viral. A powerful case for more government regulation of the digital giants was made by The Economist, a classically conservative publication that also endorsed the government’s antitrust prosecution of Microsoft, in 1999. The magazine editorialized, in May, 2017, that governments must better police the five digital giants—Facebook, Google, Amazon, Apple, and Microsoft—because data were “the oil of the digital era”: “Old ways of thinking about competition, devised in the era of oil, look outdated in what has come to be called the ‘data economy.’ ” Inevitably, an abundance of data alters the nature of competition, allowing companies to benefit from network effects, with users multiplying and companies amassing wealth to swallow potential competitors.

The politics of Silicon Valley is left of center, but its disdain for government regulation has been right of center. This is changing. A Who’s Who of Silicon notables—Tim Berners-Lee, Tim Cook, Ev Williams, Sean Parker, and Tony Fadell, among others—have harshly criticized the social harm imposed by the digital giants. Marc Benioff, the C.E.O. of Salesforce.com—echoing similar sentiments expressed by Berners-Lee—has said, “The government is going to have to be involved. You do it exactly the same way you regulated the cigarette industry.”

Cries for regulating the digital giants are almost as loud today as they were to break up Microsoft in the late nineties. Congress insisted that Facebook’s Zuckerberg, not his minions, testify. The Federal Trade Commission is investigating Facebook’s manipulation of user data. Thirty-seven state attorneys general have joined a demand to learn how Facebook safeguards privacy. The European Union has imposed huge fines on Google and wants to inspect Google’s crown jewels—its search algorithms—claiming that Google’s search results are skewed to favor their own sites. The E.U.’s twenty-eight countries this May imposed a General Data Protection Regulation to protect the privacy of users, requiring that citizens must choose to opt in before companies can hoard their data.

Here’s where advertisers and the digital giants lock arms: they speak with one voice in opposing opt-in legislation, which would deny access to data without the permission of users. If consumers wish to deny advertisers access to their cookies—their data—they agree: the consumer must voluntarily opt out, meaning they must endure a cumbersome and confusing series of online steps. Amid the furor about Facebook and Google, remember these twinned and rarely acknowledged truisms: more data probably equals less privacy, while more privacy equals less advertising revenue. Thus, those who rely on advertising have business reasons to remain tone-deaf to privacy concerns.

Those reliant on advertising know: the disruption that earlier slammed the music, newspaper, magazine, taxi, and retail industries now upends advertising. Agencies are being challenged by a host of competitive frenemies: by consulting and public-relations companies that have jumped into their business; by platform customers like Google and Facebook but also the Times, NBC, and Buzzfeed, that now double as ad agencies and talk directly to their clients; by clients that increasingly perform advertising functions in-house.

But the foremost frenemy is the public, which poses an existential threat not just to agencies but to Facebook and the ad revenues on which most media rely. Citizens protest annoying, interruptive advertising, particularly on their mobile phones—a device as personal as a purse or wallet. An estimated twenty per cent of Americans, and one-third of Western Europeans, employ ad-blocker software. More than half of those who record programs on their DVRs choose to skip the ads. Netflix and Amazon, among others, have accustomed viewers to watch what they want when they want, without commercial interruption.

Understandably, those dependent on ad dollars quake. The advertising and marketing world scrambles for new ways to reach consumers. Big Data, they believe, promises ways they might better communicate with annoyed consumers—maybe unlock ways that ads can be embraced as a useful individual service rather than as an interruption. If Big Data’s use is circumscribed to protect privacy, the advertising business will suffer. In this core conviction, at least, Mad Men and Math Men are alike.

This piece is partially adapted from Auletta’s forthcoming book, “Frenemies: The Epic Disruption of the Ad Business (and Everything Else).”

 

I would guess that the ad business will be disrupted further as we find new ways to connect consumers with what they want. This will reduce the power of the Math Men at centralized network servers.

I also suspect search will become a regulated public utility. A free society cannot tolerate one or two private corporations controlling all the information that flows through its networks.

 

Tech Dystopia?

Below are excerpts from a fascinating series of articles by The Guardian (with links). The articles address many of the ways that Internet 2.0 network media models such as Google, Facebook, YouTube, Instagram, etc. are transforming, and in many cases undermining, the foundations of a democratic humanistic society. These issues motivate us at tuka to design solutions to the great question of life’s meaning.

Personally, I don’t believe this dystopia will come to pass, because humans are quite resilient as a species and eventually our humanist qualities will dominate our biological urges and economic imperatives. We have free will, and ultimately we choose correctly.

Perhaps that is an overly optimistic opinion, but Internet (or Web) 3.0 technology is rewriting the script with applications that reassert human control over the data universe. We will build more humanistic social communities that employ technology, with the emphasis always on the human. We see this now with the growing refusal to surrender to Web 2.0 by tech insiders.

See excerpts and comments below.

“If politics is an expression of our human will, on individual and collective levels, then the attention economy is directly undermining the assumptions that democracy rests on.” If Apple, Facebook, Google, Twitter, Instagram, and Snapchat are gradually chipping away at our ability to control our own minds, could there come a point, I ask, at which democracy no longer functions?

‘Fiction is outperforming reality’: how YouTube’s algorithm distorts truth

Paul Lewis February 2, 2018

There are 1.5 billion YouTube users in the world, which is more than the number of households that own televisions. What they watch is shaped by this algorithm, which skims and ranks billions of videos to identify 20 “up next” clips that are both relevant to a previous video and most likely, statistically speaking, to keep a person hooked on their screen.

Company insiders tell me the algorithm is the single most important engine of YouTube’s growth. In one of the few public explanations of how the formula works – an academic paper that sketches the algorithm’s deep neural networks, crunching a vast pool of data about videos and the people who watch them – YouTube engineers describe it as one of the “largest scale and most sophisticated industrial recommendation systems in existence”.

We see here the power of AI data algorithms to filter content. The Google response has been to “expand the army of human moderators.” That is a necessary method of reasserting human judgment over the network.
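To make the mechanism concrete, here is a minimal sketch of how an engagement-driven “up next” ranker works. Everything in it is an illustrative assumption, not YouTube’s actual system: the real algorithm is a large-scale neural recommender, while this toy version simply blends relevance to the previous video with predicted watch time and keeps the top-scoring candidates.

```python
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    relevance: float        # similarity to the video just watched (0..1)
    predicted_watch: float  # predicted minutes the viewer will keep watching

def rank_up_next(candidates, k=20, watch_weight=0.7):
    """Score each candidate by a blend of relevance and predicted
    engagement, then keep the top-k as the 'up next' list. The fixed
    weighting stands in for what is really a learned scoring model."""
    def score(v):
        # Engagement dominates: the objective is time on screen.
        return watch_weight * v.predicted_watch + (1 - watch_weight) * v.relevance
    return sorted(candidates, key=score, reverse=True)[:k]

candidates = [
    Video("calm explainer", relevance=0.9, predicted_watch=2.0),
    Video("sensational clip", relevance=0.4, predicted_watch=9.0),
    Video("related lecture", relevance=0.8, predicted_watch=3.0),
]

up_next = rank_up_next(candidates, k=2)
# The sensational clip wins despite being the least relevant,
# because it is predicted to hold the viewer the longest.
```

Even in this crude form, the ranking shows the dynamic the article describes: a highly relevant video loses to a less relevant but more gripping one whenever the system optimizes for watch time rather than for what the viewer set out to find.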

The primary focus of the article then turns to politics and the electoral influences of disinformation:

Much has been written about Facebook and Twitter’s impact on politics, but in recent months academics have speculated that YouTube’s algorithms may have been instrumental in fuelling disinformation during the 2016 presidential election. “YouTube is the most overlooked story of 2016,” Zeynep Tufekci, a widely respected sociologist and technology critic, tweeted back in October. “Its search and recommender algorithms are misinformation engines.”

Apparently, the sensationalism surrounding the Trump campaign caused YT’s AI algorithms to push more video feeds favorable to Trump and damaging to Hillary Clinton. One doesn’t need to be a partisan to recognize this was probably true for this particular media channel and its business model, which values views more than anything else.

However, this reality can also be distorted to present a particular conspiracy narrative of its own:

Trump won the electoral college as a result of 80,000 votes spread across three swing states. There were more than 150 million YouTube users in the US. The videos contained in Chaslot’s database of YouTube-recommended election videos were watched, in total, more than 3bn times before the vote in November 2016.

This, unfortunately, is cherry-picking statistical inferences concerning the margin of voting support. What was significant in determining the 2016 election outcome was not 80,000 votes across three states, but a run of popular vote wins in 2,623 of 3,112 counties across the U.S. This 85% share could not be an accident, nor could it be due to the single influence of disinformation, Russian or otherwise. The true difference in the election was not revealed by the popular vote total or the Electoral College vote, but by the geographical distribution of support. One can argue about which is more critical to democratic governance, but this post is about electronic media content, not political analysis.

The next article further addresses how technology is influencing our individual behaviors.

‘Our minds can be hijacked’: the tech insiders who fear a smartphone dystopia

Paul Lewis October 6, 2017

Justin Rosenstein had tweaked his laptop’s operating system to block Reddit, banned himself from Snapchat, which he compares to heroin, and imposed limits on his use of Facebook. But even that wasn’t enough. In August, the 34-year-old tech executive took a more radical step to restrict his use of social media and other addictive technologies.

A decade after he stayed up all night coding a prototype of what was then called an “awesome” button, Rosenstein belongs to a small but growing band of Silicon Valley heretics who complain about the rise of the so-called “attention economy”: an internet shaped around the demands of an advertising economy.

The extent of this addiction is suggested by research showing that people touch, swipe, or tap their phones 2,617 times a day!

There is growing concern that as well as addicting users, technology is contributing toward so-called “continuous partial attention”, severely limiting people’s ability to focus, and possibly lowering IQ. One recent study showed that the mere presence of smartphones damages cognitive capacity – even when the device is turned off. “Everyone is distracted,” Rosenstein says. “All of the time.”

“The technologies we use have turned into compulsions, if not full-fledged addictions,” Eyal writes. “It’s the impulse to check a message notification. It’s the pull to visit YouTube, Facebook, or Twitter for just a few minutes, only to find yourself still tapping and scrolling an hour later.” None of this is an accident, he writes. It is all “just as their designers intended”.

Tristan Harris, a former Google employee turned vocal critic of the tech industry, points out that… “All of us are jacked into this system. All of our minds can be hijacked. Our choices are not as free as we think they are.”

“I don’t know a more urgent problem than this,” Harris says. “It’s changing our democracy, and it’s changing our ability to have the conversations and relationships that we want with each other.” 

Harris believes that tech companies never deliberately set out to make their products addictive. They were responding to the incentives of an advertising economy, experimenting with techniques that might capture people’s attention, even stumbling across highly effective design by accident.

“Smartphones are useful tools,” says Loren Brichter, a product designer. “But they’re addictive. Pull-to-refresh is addictive. Twitter is addictive. These are not good things. When I was working on them, it was not something I was mature enough to think about.”

The two inventors listed on Apple’s patent for “managing notification connections and displaying icon badges” are Justin Santamaria and Chris Marcellino. A few years ago Marcellino, 33, left the Bay Area and is now in the final stages of retraining to be a neurosurgeon. He stresses he is no expert on addiction but says he has picked up enough in his medical training to know that technologies can affect the same neurological pathways as gambling and drug use. “These are the same circuits that make people seek out food, comfort, heat, sex,” he says.

“The people who run Facebook and Google are good people, whose well-intentioned strategies have led to horrific unintended consequences,” says Roger McNamee, an early investor in both companies. “The problem is that there is nothing the companies can do to address the harm unless they abandon their current advertising models.”

But how can Google and Facebook be forced to abandon the business models that have transformed them into two of the most profitable companies on the planet?

This is exactly the problem – they really can’t. Newer technology, such as distributed social networks built on blockchain, must be deployed to disrupt the dysfunctional incumbents. New business models will be designed to support this disruption. Human behavioral instincts are crucial to successful new designs that make us more human, rather than less.

James Williams does not believe talk of dystopia is far-fetched. He says his epiphany came a few years ago, when he noticed he was surrounded by technology that was inhibiting him from concentrating on the things he wanted to focus on. “It was that kind of individual, existential realization: what’s going on?” he says. “Isn’t technology supposed to be doing the complete opposite of this?”

The question we ask at tuka is: “What do people really want from technology and social interaction? Distraction or meaning? And how do they find meaning?” Our answer is self-expression through creativity, sharing it, and connecting with communities.

Williams and Harris left Google around the same time and co-founded an advocacy group, Time Well Spent, which seeks to build public momentum for a change in the way big tech companies think about design.

“Eighty-seven percent of people wake up and go to sleep with their smartphones,” Williams says. The entire world now has a new prism through which to understand politics, and he worries the consequences are profound.

The same forces that led tech firms to hook users with design tricks, he says, also encourage those companies to depict the world in a way that makes for compulsive, irresistible viewing. “The attention economy incentivizes the design of technologies that grab our attention,” he says. “In so doing, it privileges our impulses over our intentions.”

That means privileging what is sensational over what is nuanced, appealing to emotion, anger, and outrage. The news media is increasingly working in service to tech companies, Williams adds, and must play by the rules of the attention economy to “sensationalize, bait and entertain in order to survive”.

In the wake of Donald Trump’s stunning electoral victory, many were quick to question the role of so-called “fake news” on Facebook, Russian-created Twitter bots or the data-centric targeting efforts that companies such as Cambridge Analytica used to sway voters. But Williams sees those factors as symptoms of a deeper problem.

It is not just shady or bad actors who were exploiting the internet to change public opinion. The attention economy itself is set up to promote a phenomenon like Trump, who is masterly at grabbing and retaining the attention of supporters and critics alike, often by exploiting or creating outrage.

Orwellian-style coercion is less of a threat to democracy than the more subtle power of psychological manipulation and what Aldous Huxley called “man’s almost infinite appetite for distractions”.

“The dynamics of the attention economy are structurally set up to undermine the human will,” Williams says. “If politics is an expression of our human will, on individual and collective levels, then the attention economy is directly undermining the assumptions that democracy rests on.” If Apple, Facebook, Google, Twitter, Instagram, and Snapchat are gradually chipping away at our ability to control our own minds, could there come a point, I ask, at which democracy no longer functions?

Our politics will survive, and democracy is only one form of governance. The bigger question is how human civilization survives if our behavior becomes self-destructive and meaningless.

Vampire Squids?

 


I would say this essay by Franklin Foer is a bit alarmist, though his book is worth reading and taking to heart. We are gradually becoming aware of the value of our personal data, and I expect consumers will soon figure out how to demand a fair share of that value – or else withdraw it.

Technology is most often disrupted by newer technology that better serves the needs of users. Free user data is the lifeblood of Web 2.0 business models, and soon we may be able to cut it off. Many hope that’s where Web 3.0 is headed.

tuka is a technology model that seeks to do exactly that for creative content providers, their audiences, and promoter/fans.

How Silicon Valley is erasing your individuality

Washington Post, September 8, 2017

 

Franklin Foer is author of “World Without Mind: The Existential Threat of Big Tech,” from which this essay is adapted.

Until recently, it was easy to define our most widely known corporations. Any third-grader could describe their essence. Exxon sells gas; McDonald’s makes hamburgers; Walmart is a place to buy stuff. This is no longer so. Today’s ascendant monopolies aspire to encompass all of existence. Google derives from googol, a number (1 followed by 100 zeros) that mathematicians use as shorthand for unimaginably large quantities. Larry Page and Sergey Brin founded Google with the mission of organizing all knowledge, but that proved too narrow. They now aim to build driverless cars, manufacture phones and conquer death. Amazon, which once called itself “the everything store,” now produces television shows, owns Whole Foods and powers the cloud. The architect of this firm, Jeff Bezos, even owns this newspaper.

Along with Facebook, Microsoft and Apple, these companies are in a race to become our “personal assistant.” They want to wake us in the morning, have their artificial intelligence software guide us through our days and never quite leave our sides. They aspire to become the repository for precious and private items, our calendars and contacts, our photos and documents. They intend for us to turn unthinkingly to them for information and entertainment while they catalogue our intentions and aversions. Google Glass and the Apple Watch prefigure the day when these companies implant their artificial intelligence in our bodies. Brin has mused, “Perhaps in the future, we can attach a little version of Google that you just plug into your brain.”

More than any previous coterie of corporations, the tech monopolies aspire to mold humanity into their desired image of it. They think they have the opportunity to complete the long merger between man and machine — to redirect the trajectory of human evolution. How do I know this? In annual addresses and town hall meetings, the founding fathers of these companies often make big, bold pronouncements about human nature — a view that they intend for the rest of us to adhere to. Page thinks the human body amounts to a basic piece of code: “Your program algorithms aren’t that complicated,” he says. And if humans function like computers, why not hasten the day we become fully cyborg?

To take another grand theory, Facebook chief Mark Zuckerberg has exclaimed his desire to liberate humanity from phoniness, to end the dishonesty of secrets. “The days of you having a different image for your work friends or co-workers and for the other people you know are probably coming to an end pretty quickly,” he has said. “Having two identities for yourself is an example of a lack of integrity.” Of course, that’s both an expression of idealism and an elaborate justification for Facebook’s business model.

There’s an oft-used shorthand for the technologist’s view of the world. It is assumed that libertarianism dominates Silicon Valley, and that isn’t wholly wrong. High-profile devotees of Ayn Rand can be found there. But if you listen hard to the titans of tech, it’s clear that their worldview is something much closer to the opposite of a libertarian’s veneration of the heroic, solitary individual. The big tech companies think we’re fundamentally social beings, born to collective existence. They invest their faith in the network, the wisdom of crowds, collaboration. They harbor a deep desire for the atomistic world to be made whole. (“Facebook stands for bringing us closer together and building a global community,” Zuckerberg wrote in one of his many manifestos.) By stitching the world together, they can cure its ills.

Rhetorically, the tech companies gesture toward individuality — to the empowerment of the “user” — but their worldview rolls over it. Even the ubiquitous invocation of users is telling: a passive, bureaucratic description of us. The big tech companies (the Europeans have lumped them together as GAFA: Google, Apple, Facebook, Amazon) are shredding the principles that protect individuality. Their devices and sites have collapsed privacy; they disrespect the value of authorship, with their hostility toward intellectual property. In the realm of economics, they justify monopoly by suggesting that competition merely distracts from the important problems like erasing language barriers and building artificial brains. Companies should “transcend the daily brute struggle for survival,” as Facebook investor Peter Thiel has put it.

When it comes to the most central tenet of individualism — free will — the tech companies have a different way. They hope to automate the choices, both large and small, we make as we float through the day. It’s their algorithms that suggest the news we read, the goods we buy, the paths we travel, the friends we invite into our circles. [Blogger Note: Just as computers can’t write music the way humans do, algorithms can’t really define our tastes. Our sensibilities are excited by serendipity, innovation, and surprise.]

It’s hard not to marvel at these companies and their inventions, which often make life infinitely easier. But we’ve spent too long marveling. The time has arrived to consider the consequences of these monopolies, to reassert our role in determining the human path. Once we cross certain thresholds — once we remake institutions such as media and publishing, once we abandon privacy — there’s no turning back, no restoring our lost individuality.

***

Over the generations, we’ve been through revolutions like this before. Many years ago, we delighted in the wonders of TV dinners and the other newfangled foods that suddenly filled our kitchens: slices of cheese encased in plastic, oozing pizzas that emerged from a crust of ice, bags of crunchy tater tots. In the history of man, these seemed like breakthrough innovations. Time-consuming tasks — shopping for ingredients, tediously preparing a recipe and tackling a trail of pots and pans — were suddenly and miraculously consigned to history.

The revolution in cuisine wasn’t just enthralling. It was transformational. New products embedded themselves deeply in everyday life, so much so that it took decades for us to understand the price we paid for their convenience, efficiency and abundance. Processed foods were feats of engineering, all right — but they were engineered to make us fat. Their delectable taste required massive quantities of sodium and sizable stockpiles of sugar, which happened to reset our palates and made it harder to sate hunger. It took vast quantities of meat and corn to fabricate these dishes, and a spike in demand remade American agriculture at a terrible environmental cost. A whole new system of industrial farming emerged, with penny-conscious conglomerates cramming chickens into feces-covered pens and stuffing them full of antibiotics. By the time we came to understand the consequences of our revised patterns of consumption, the damage had been done to our waistlines, longevity, souls and planet.

Something like the midcentury food revolution is now reordering the production and consumption of knowledge. Our intellectual habits are being scrambled by the dominant firms. Giant tech companies have become the most powerful gatekeepers the world has ever known. Google helps us sort the Internet, by providing a sense of hierarchy to information; Facebook uses its algorithms and its intricate understanding of our social circles to filter the news we encounter; Amazon bestrides book publishing with its overwhelming hold on that market.

Such dominance endows these companies with the ability to remake the markets they control. As with the food giants, the big tech companies have given rise to a new science that aims to construct products that pander to their consumers. Unlike the market research and television ratings of the past, the tech companies have a bottomless collection of data, acquired as they track our travels across the Web, storing every shard about our habits in the hope that they may prove useful. They have compiled an intimate portrait of the psyche of each user — a portrait that they hope to exploit to seduce us into a compulsive spree of binge clicking and watching. And it works: On average, each Facebook user spends one-sixteenth of their day on the site.

In the realm of knowledge, monopoly and conformism are inseparable perils. The danger is that these firms will inadvertently use their dominance to squash diversity of opinion and taste. Concentration is followed by homogenization. As news media outlets have come to depend heavily on Facebook and Google for traffic — and therefore revenue — they have rushed to produce articles that will flourish on those platforms. This leads to a duplication of the news like never before, with scores of sites across the Internet piling onto the same daily outrage. It’s why a picture of a mysteriously colored dress generated endless articles, why seemingly every site recaps “Game of Thrones.” Each contribution to the genre adds little, except clicks. Old media had a pack mentality, too, but the Internet promised something much different. And the prevalence of so much data makes the temptation to pander even greater.

This is true of politics. Our era is defined by polarization, warring ideological gangs that yield no ground. Division, however, isn’t the root cause of our unworkable system. There are many causes, but a primary problem is conformism. Facebook has nurtured two hive minds, each residing in an informational ecosystem that yields head-nodding agreement and penalizes dissenting views. This is the phenomenon that the entrepreneur and author Eli Pariser famously termed the “Filter Bubble” — how Facebook mines our data to keep giving us the news and information we crave, creating a feedback loop that pushes us deeper and deeper into our own amen corners.

As the 2016 presidential election so graphically illustrated, a hive mind is an intellectually incapacitated one, with diminishing ability to tell fact from fiction, with an unshakable bias toward party line. The Russians understood this, which is why they invested so successfully in spreading dubious agitprop via Facebook. And it’s why a raft of companies sprouted — Occupy Democrats, the Angry Patriot, Being Liberal — to get rich off the Filter Bubble and to exploit our susceptibility to the lowest-quality news, if you can call it that.

Facebook represents a dangerous deviation in media history. Once upon a time, elites proudly viewed themselves as gatekeepers. They could be sycophantic to power and snobbish, but they also felt duty-bound to elevate the standards of society and readers. Executives of Silicon Valley regard gatekeeping as the stodgy enemy of innovation — they see themselves as more neutral, scientific and responsive to the market than the elites they replaced — a perspective that obscures their own power and responsibilities. So instead of shaping public opinion, they exploit the public’s worst tendencies, its tribalism and paranoia.

***

During this century, we have largely treated Silicon Valley as a force beyond our control. A broad consensus held that lead-footed government could never keep pace with the dynamism of technology. By the time government acted against a tech monopoly, a kid in a garage would have already concocted some innovation to upend the market. Or, as Google’s Eric Schmidt put it, “Competition is one click away,” a nostrum suggesting that the very structure of the Internet defied our historic concern for monopoly.

As individuals, we have similarly accepted the omnipresence of the big tech companies as a fait accompli. We’ve enjoyed their free products and next-day delivery with only a nagging sense that we may be surrendering something important. Such blitheness can no longer be sustained. Privacy won’t survive the present trajectory of technology — and with the sense of being perpetually watched, humans will behave more cautiously, less subversively. Our ideas about the competitive marketplace are at risk. With a decreasing prospect of toppling the giants, entrepreneurs won’t bother to risk starting new firms, a primary source of jobs and innovation. And the proliferation of falsehoods and conspiracies through social media, the dissipation of our common basis for fact, is creating conditions ripe for authoritarianism. Over time, the long merger of man and machine has worked out pretty well for man. But we’re drifting into a new era, when that merger threatens the individual. We’re drifting toward monopoly, conformism, and their machines. Perhaps it’s time we steer our course.

Google This.

(Photo: Mark Lennihan, AP)

Here is another argument that points toward making these companies public utilities (Google more than Facebook). From USA Today:

I invested early in Google and Facebook and regret it. I helped create a monster.

‘Brain hacking’ Internet monopolies menace public health, democracy, writes Roger McNamee.

I invested in Google and Facebook years before their first revenue and profited enormously. I was an early adviser to Facebook’s team, but I am terrified by the damage being done by these Internet monopolies.

Technology has transformed our lives in countless ways, mostly for the better. Thanks to the now ubiquitous smartphone, tech touches us from the moment we wake up until we go to sleep. While the convenience of smartphones has many benefits, the unintended consequences of well-intentioned product choices have become a menace to public health and to democracy.

Facebook and Google get their revenue from advertising, the effectiveness of which depends on gaining and maintaining consumer attention. Borrowing techniques from the gambling industry, Facebook, Google and others exploit human nature, creating addictive behaviors that compel consumers to check for new messages, respond to notifications, and seek validation from technologies whose only goal is to generate profits for their owners.

The people at Facebook and Google believe that giving consumers more of what they want and like is worthy of praise, not criticism. What they fail to recognize is that their products are not making consumers happier or more successful.

Like gambling, nicotine, alcohol or heroin, Facebook and Google — the latter most importantly through its YouTube subsidiary — produce short-term happiness with serious negative consequences in the long term.

Users fail to recognize the warning signs of addiction until it is too late. There are only 24 hours in a day, and technology companies are making a play for all of them. The CEO of Netflix recently noted that his company’s primary competitor is sleep.

How does this work? A 2013 study found that average consumers check their smartphones 150 times a day. And that number has probably grown. People spend 50 minutes a day on Facebook. Other social apps such as Snapchat, Instagram and Twitter combine to take up still more time. Those companies maintain a profile on every user, which grows every time you like, share, search, shop or post a photo. Google also is analyzing credit card records of millions of people.

As a result, the big Internet companies know more about you than you know about yourself, which gives them huge power to influence you, to persuade you to do things that serve their economic interests. Facebook, Google and others compete for each consumer’s attention, reinforcing biases and reducing the diversity of ideas to which each is exposed. The degree of harm grows over time.

Consider a recent story from Australia, where someone at Facebook told advertisers that they had the ability to target teens who were sad or depressed, which made them more susceptible to advertising. In the United States, Facebook once demonstrated its ability to make users happier or sadder by manipulating their news feed. While it did not turn either capability into a product, the fact remains that Facebook influences the emotional state of users every moment of every day. Former Google design ethicist Tristan Harris calls this “brain hacking.”

The fault here is not with search and social networking, per se. Those services have enormous value. The fault lies with advertising business models that drive companies to maximize attention at all costs, leading to ever more aggressive brain hacking.

The Facebook application has 2 billion active users around the world. Google’s YouTube has 1.5 billion. These numbers are comparable to Christianity and Islam, respectively, giving Facebook and Google influence greater than most First World countries. They are too big and too global to be held accountable. Other attention-based apps — including Instagram, WhatsApp, WeChat, SnapChat and Twitter — also have user bases between 100 million and 1.3 billion. Not all their users have had their brains hacked, but all are on that path. And there are no watchdogs.

Anyone who wants to pay for access to addicted users can work with Facebook and YouTube. Lots of bad people have done it. One firm was caught using Facebook tools to spy on law-abiding citizens. A federal agency confronted Facebook about the use of its tools by financial firms to discriminate based on race in the housing market. America’s intelligence agencies have concluded that Russia interfered in our election and that Facebook was a key platform for spreading misinformation. For the price of a few fighter aircraft, Russia won an information war against us.

Incentives being what they are, we cannot expect Internet monopolies to police themselves. There is little government regulation and no appetite to change that. If we want to stop brain hacking, consumers will have to force changes at Facebook and Google.

Roger McNamee is the managing director and a co-founder of Elevation Partners, an investment partnership focused on media/entertainment content and consumer technology.