Facebook in da Nile.

One should see this as the general problem with Facebook (and Twitter) as an information/news medium – it’s generated by subjective opinions rather than verifiable facts with verifiable sources. Verifiable identity (non-anonymity) has the virtue of incentivizing reputational capital and building trust across information exchanges. But Facebook then has to make a Sophie’s Choice: lose the traffic that raw emotionalism feeds or lose the trust of its user base. Facebook’s day of reckoning is coming.

From The Wall Street Journal:

Facebook Is Still In Denial About Its Biggest Problem
In a world where social media is the pre-eminent news conduit, ‘If it’s outrageous, it’s contagious’ is the new ‘If it bleeds, it leads’
It’s a good time to re-examine our relationship with Facebook Inc.
In the past month, it has been revealed that Facebook hosted a Russian influence operation which may have reached between 3 million and 20 million people on the social network, and that Facebook could be used to micro-target users with hate speech. It took the company more than two weeks to agree to share what it knows with Congress.
Increased scrutiny of Facebook is healthy. What went mainstream as a friendly place for loved ones to swap baby pictures and cat videos has morphed into an opaque and poorly understood metropolis rife with influence peddlers determined to manipulate what we know and how we think. We have barely begun to understand how the massive social network shapes our world.
Unfortunately, Facebook itself seems just as mystified, providing a response to all of this that has left many unsatisfied.
What the company’s leaders seem unable to reckon with is that its troubles are inherent in the design of its flagship social network, which prioritizes thrilling posts and ads over dull ones, and rewards cunning provocateurs over hapless users. No tweak to algorithms or processes can hope to fix a problem that seems enmeshed in the very fabric of Facebook.
Outrageous ads
On a network where article and video posts can be sponsored and distributed like ads, and ads themselves can go as viral as a wedding-fail video, there is hardly a difference between the two. And we now know that if an ad from one of Facebook’s more than five million advertisers goes viral—by making us feel something, not just joy but also fear or outrage—it will cost less per impression to spread across Facebook.
In one example, described in a recent Wall Street Journal article, a “controversial” ad went viral, leading to a 30% drop in the cost to reach each user. Joe Yakuel, founder and chief executive of Agency Within, which manages $100 million in digital ad purchases, told our reporter, “Even inadvertent controversy can cause a lot of engagement.”
Keeping people sharing and clicking is essential to Facebook’s all-important metric, engagement, which is closely linked to how many ads the network can show us and how many of them we will interact with. Left unchecked, algorithms like Facebook’s News Feed tend toward content that is intended to arouse our passions, regardless of source—or even veracity.
An old newspaper catchphrase was, “If it bleeds, it leads”—that is, if someone got hurt or killed, that’s the top story. In the age when Facebook supplies us with a disproportionate amount of our daily news, a more-appropriate catchphrase would be, “If it’s outrageous, it’s contagious.”
Will Facebook solve this problem on its own? The company has no immediate economic incentive to do so, says Yochai Benkler, a professor at Harvard Law School and co-director of the Berkman Klein Center for Internet and Society.
“Facebook has become so central to how people communicate, and it has so much market power, that it’s essentially immune to market signals,” Dr. Benkler says. The only thing that will force the company to change, he adds, is the brewing threat to its reputation.
Facebook’s next steps
Facebook Chief Executive Mark Zuckerberg recently said his company will do more to combat illegal and abusive misuse of the Facebook platform. The primary mechanism for vetting political and other ads will be “an even higher standard of transparency,” he said, achieved by, among other things, making all ads on the site viewable by everyone, where in the past they could be seen only by their target audience.
“Beyond pushing back against threats, we will also create more services to protect our community while engaging in political discourse,” Mr. Zuckerberg wrote.
This move is a good start, but it excuses Facebook from its responsibility to be the primary reviewer of all advertising it is paid to run. Why are we, the users, responsible for vetting ads on Facebook?
By default, most media firms vet the ads they run and refuse ones that might be offensive or illegal, says Scott Galloway, entrepreneur, professor of marketing at NYU Stern School of Business and author of “The Four,” a book criticizing the outsize growth and influence of Amazon, Apple, Facebook and Google.
Mr. Zuckerberg acknowledged in a recent Facebook post that the majority of advertising purchased on Facebook will continue to be bought “without the advertiser ever speaking to anyone at Facebook.” His argument for this policy: “We don’t check what people say before they say it, and frankly, I don’t think our society should want us to.”
This is false equivalence. Society may not want Facebook to read over everything typed by our friends and family before they share it. But many people would feel it’s reasonable for Facebook to review all of the content it gets paid (tens of billions of dollars) to publish and promote.
“Facebook has embraced the healthy gross margins and influence of a media firm but is allergic to the responsibilities of a media firm,” Mr. Galloway says.
More is needed
Mr. Zuckerberg has said Facebook will hire 250 more humans to review ads and content posted to Facebook. For Facebook, a company with more than $14 billion in free cash flow in the past year, to say it is adding 250 people to its safety and security efforts is “pissing in the ocean,” Mr. Galloway says. “They could add 25,000 people, spend $1 billion on AI technologies to help those 25,000 employees sort, filter and ID questionable content and advertisers, and their cash flow would decline 10% to 20%.”
Of course, mobilizing a massive team of ad monitors could subject Facebook to exponentially more accusations of bias from all sides. For every blatant instance of abuse, there are hundreds of cases that fall into gray areas.
The whole situation has Facebook between a rock and a hard place. But it needs to do more, or else risk further damaging its brand and reputation, two things of paramount importance to a service that depends on the trust of its users.

David Byrne on Technology, Creativity…

…and what it means to be human.

Eliminating the Human

A View from David Byrne

We are beset by—and immersed in—apps and devices that are quietly reducing the amount of meaningful interaction we have with each other.

August 15, 2017


I have a theory that much recent tech development and innovation over the last decade or so has an unspoken overarching agenda. It has been about creating the possibility of a world with less human interaction. This tendency is, I suspect, not a bug—it’s a feature. We might think Amazon was about making books available to us that we couldn’t find locally—and it was, and what a brilliant idea—but maybe it was also just as much about eliminating human contact.

The consumer technology I am talking about doesn’t claim or acknowledge that eliminating the need to deal with humans directly is its primary goal, but it is the outcome in a surprising number of cases. I’m sort of thinking maybe it is the primary goal, even if it was not aimed at consciously. Judging by the evidence, that conclusion seems inescapable.

This, then, is the new norm. Most of the tech news we get barraged with is about algorithms, AI, robots, and self-driving cars, all of which fit this pattern. I am not saying that such developments are not efficient and convenient; this is not a judgment. I am simply noticing a pattern and wondering if, in recognizing that pattern, we might realize that it is only one trajectory of many. There are other possible roads we could be going down, and the one we’re on is not inevitable or the only one; it has been (possibly unconsciously) chosen.

I realize I’m making some wild and crazy assumptions and generalizations with this proposal—but I can claim to be, or to have been, in the camp that would identify with the unacknowledged desire to limit human interaction. I grew up happy but also found many social interactions extremely uncomfortable. I often asked myself if there were rules somewhere that I hadn’t been told, rules that would explain it all to me. I still sometimes have social niceties “explained” to me. I’m often happy going to a restaurant alone and reading. I wouldn’t want to have to do that all the time, but I have no problem with it—though I am sometimes aware of looks that say “Poor man, he has no friends.” So I believe I can claim some insight into where this unspoken urge might come from.

Human interaction is often perceived, from an engineer’s mind-set, as complicated, inefficient, noisy, and slow. Part of making something “frictionless” is getting the human part out of the way. The point is not that making a world to accommodate this mind-set is bad, but that when one has as much power over the rest of the world as the tech sector does over folks who might not share that worldview, there is the risk of a strange imbalance. The tech world is predominantly male—very much so. Testosterone combined with a drive to eliminate as much interaction with real humans as possible for the sake of “simplicity and efficiency”—do the math, and there’s the future.

The evidence

Here are some examples of fairly ubiquitous consumer technologies that allow for less human interaction.

Online ordering and home delivery: Online ordering is hugely convenient. Amazon, FreshDirect, Instacart, etc. have not just cut out interactions at bookstores and checkout lines; they have eliminated all human interaction from these transactions, barring the (often paid) online recommendations.

Digital music: Downloads and streaming—there is no physical store, of course, so there are no snobby, know-it-all clerks to deal with. Whew, you might say. Some services offer algorithmic recommendations, so you don’t even have to discuss music with your friends to know what they like. The service knows what they like, and you can know, too, without actually talking to them. Is the function of music as a kind of social glue and lubricant also being eliminated? [Blog note: Creativity and innovation require serendipity, which tech algorithms often eliminate in the interest of efficiency.]

Ride-hailing apps: There is minimal interaction—one doesn’t have to tell the driver the address or the preferred route, or interact at all if one doesn’t want to.

Driverless cars: In one sense, if you’re out with your friends, not having one of you drive means more time to chat. Or drink. Very nice. But driverless tech is also very much aimed at eliminating taxi drivers, truck drivers, delivery drivers, and many others. There are huge advantages to eliminating humans here—theoretically, machines should drive more safely than humans, so there might be fewer accidents and fatalities. The disadvantages include massive job loss. But that’s another subject. What I’m seeing here is the consistent “eliminating the human” pattern.

Automated checkout: Eatsa is a new version of the Automat, a once-popular “restaurant” with no visible staff. My local CVS has been training staff to help us learn to use the checkout machines that will replace them. At the same time, they are training their customers to do the work of the cashiers.

Amazon has been testing stores—even grocery stores!—with automated shopping. They’re called Amazon Go. The idea is that sensors will know what you’ve picked up. You can simply walk out with purchases that will be charged to your account, without any human contact.

AI: AI is often (though not always) better at decision-making than humans. In some areas, we might expect this. For example, AI will suggest the fastest route on a map, accounting for traffic and distance, while we as humans would be prone to taking our tried-and-true route. But some less-expected areas where AI is better than humans are also opening up. It is getting better at spotting melanomas than many doctors, for example. Much routine legal work will soon be done by computer programs, and financial assessments are now being done by machines.

Robot workforce: Factories increasingly have fewer and fewer human workers, which means no personalities to deal with, no agitating for overtime, and no illnesses. Using robots avoids an employer’s need to think about workers’ comp, health care, Social Security, Medicare taxes, and unemployment benefits.

Personal assistants: With improved speech recognition, one can increasingly talk to a machine like Google Home or Amazon Echo rather than a person. Amusing stories abound as the bugs get worked out. A child says, “Alexa, I want a dollhouse” … and lo and behold, the parents find one in their cart.

Big data: Improvements and innovations in crunching massive amounts of data mean that patterns can be recognized in our behavior where they weren’t seen previously. Data seems objective, so we tend to trust it, and we may very well come to trust the gleanings from data crunching more than we do ourselves and our human colleagues and friends.

Video games (and virtual reality): Yes, some online games are interactive. But most are played in a room by one person jacked into the game. The interaction is virtual.

Automated high-speed stock buying and selling: A machine crunching huge amounts of data can spot trends and patterns quickly and act on them faster than a person can.

MOOCs: Online education with no direct teacher interaction.

“Social” media: This is social interaction that isn’t really social. While Facebook and others frequently claim to offer connection, and do offer the appearance of it, the fact is a lot of social media is a simulation of real connection.

What are the effects of less interaction?

Minimizing interaction has some knock-on effects—some of them good, some not. The externalities of efficiency, one might say.

For us as a society, less contact and interaction—real interaction—would seem to lead to less tolerance and understanding of difference, as well as more envy and antagonism. As has been in evidence recently, social media actually increases divisions by amplifying echo effects and allowing us to live in cognitive bubbles. We are fed what we already like or what our similarly inclined friends like (or, more likely now, what someone has paid for us to see in an ad that mimics content). In this way, we actually become less connected—except to those in our group.

Social networks are also a source of unhappiness. A study earlier this year by two social scientists, Holly Shakya at UC San Diego and Nicholas Christakis at Yale, showed that the more people use Facebook, the worse they feel about their lives. While these technologies claim to connect us, then, the surely unintended effect is that they also drive us apart and make us sad and envious.

I’m not saying that many of these tools, apps, and other technologies are not hugely convenient, clever, and efficient. I use many of them myself. But in a sense, they run counter to who we are as human beings.

We have evolved as social creatures, and our ability to cooperate is one of the big factors in our success. I would argue that social interaction and cooperation, the kind that makes us who we are, is something our tools can augment but not replace.

When interaction becomes a strange and unfamiliar thing, then we will have changed who and what we are as a species. Often our rational thinking convinces us that much of our interaction can be reduced to a series of logical decisions—but we are not even aware of many of the layers and subtleties of those interactions. As behavioral economists will tell us, we don’t behave rationally, even though we think we do. And Bayesians will tell us that interaction is how we revise our picture of what is going on and what will happen next.

I’d argue there is a danger to democracy as well. Less interaction, even casual interaction, means one can live in a tribal bubble—and we know where that leads.

Is it possible that less human interaction might save us?

Humans are capricious, erratic, emotional, irrational, and biased in what sometimes seem like counterproductive ways. It often seems that our quick-thinking and selfish nature will be our downfall. There are, it would seem, lots of reasons why getting humans out of the equation in many aspects of life might be a good thing.

But I’d argue that while our various irrational tendencies might seem like liabilities, many of those attributes actually work in our favor. Many of our emotional responses have evolved over millennia, and they are based on the probability that they will, more likely than not, offer the best way to deal with a situation.

What are we?

Antonio Damasio, a neuroscientist at USC, wrote about a patient he called Elliot, who had damage to his frontal lobe that made him unemotional. In all other respects he was fine—intelligent, healthy—but emotionally he was Spock. Elliot couldn’t make decisions. He’d waffle endlessly over details. Damasio concluded that although we think decision-making is rational and machinelike, it’s our emotions that enable us to actually decide.

With humans being somewhat unpredictable (well, until an algorithm completely removes that illusion), we get the benefit of surprises, happy accidents, and unexpected connections and intuitions. Interaction, cooperation, and collaboration with others multiplies those opportunities. [Yes!]

We’re a social species—we benefit from passing discoveries on, and we benefit from our tendency to cooperate to achieve what we cannot alone. In his book Sapiens, Yuval Harari claims this is what allowed us to be so successful. He also claims that this cooperation was often facilitated by an ability to believe in “fictions” such as nations, money, religions, and legal institutions. Machines don’t believe in fictions—or not yet, anyway. That’s not to say they won’t surpass us, but if machines are designed to be mainly self-interested, they may hit a roadblock. And in the meantime, if less human interaction enables us to forget how to cooperate, then we lose our advantage.

Our random accidents and odd behaviors are fun—they make life enjoyable. I’m wondering what we’re left with when there are fewer and fewer human interactions. Remove humans from the equation, and we are less complete as people and as a society.

“We” do not exist as isolated individuals. We, as individuals, are inhabitants of networks; we are relationships. That is how we prosper and thrive. [Lest we overemphasize the cooperative aspect, E.O. Wilson argues that human society needs a balance between competition and cooperation acting through group vs. individual selection. Too much competition leads to Mad Max survival, while too much group cooperation leads to homogeneity resembling ant colonies. Wilson argues this tension and balance is the catalyst for human culture. See his book, The Social Conquest of Earth.]

David Byrne is a musician and artist who lives in New York City. His most recent book is called How Music Works. A version of this piece originally appeared on his website, davidbyrne.com.

Digital Futures

Here are four NYT opinion articles written by or about Jaron Lanier, who has been at the forefront of digital culture for at least the past 25 years. He presents many of the challenges and failures of technology when it butts up against humanism. The last two are reviews of his book, Who Owns the Future? Definitely worth a read.

Fixing the Digital Economy

Digital Passivity

Will Digital Networks Ruin Us?

Fighting Words Against Big Data

Google This.

(Photo: Mark Lennihan, AP)

Another argument that moves toward making these companies public utilities. (Google more than Facebook.) From USA Today:

I invested early in Google and Facebook and regret it. I helped create a monster.

‘Brain hacking’ Internet monopolies menace public health, democracy, writes Roger McNamee.

I invested in Google and Facebook years before their first revenue and profited enormously. I was an early adviser to Facebook’s team, but I am terrified by the damage being done by these Internet monopolies.

Technology has transformed our lives in countless ways, mostly for the better. Thanks to the now ubiquitous smartphone, tech touches us from the moment we wake up until we go to sleep. While the convenience of smartphones has many benefits, the unintended consequences of well-intentioned product choices have become a menace to public health and to democracy.

Facebook and Google get their revenue from advertising, the effectiveness of which depends on gaining and maintaining consumer attention. Borrowing techniques from the gambling industry, Facebook, Google and others exploit human nature, creating addictive behaviors that compel consumers to check for new messages, respond to notifications, and seek validation from technologies whose only goal is to generate profits for their owners.

The people at Facebook and Google believe that giving consumers more of what they want and like is worthy of praise, not criticism. What they fail to recognize is that their products are not making consumers happier or more successful.

Like gambling, nicotine, alcohol or heroin, Facebook and Google — the latter most importantly through its YouTube subsidiary — produce short-term happiness with serious negative consequences in the long term.

Users fail to recognize the warning signs of addiction until it is too late. There are only 24 hours in a day, and technology companies are making a play for all of them. The CEO of Netflix recently noted that his company’s primary competitor is sleep.

How does this work? A 2013 study found that average consumers check their smartphones 150 times a day. And that number has probably grown. People spend 50 minutes a day on Facebook. Other social apps such as Snapchat, Instagram and Twitter combine to take up still more time. Those companies maintain a profile on every user, which grows every time you like, share, search, shop or post a photo. Google also is analyzing credit card records of millions of people.

As a result, the big Internet companies know more about you than you know about yourself, which gives them huge power to influence you, to persuade you to do things that serve their economic interests. Facebook, Google and others compete for each consumer’s attention, reinforcing biases and reducing the diversity of ideas to which each is exposed. The degree of harm grows over time.

Consider a recent story from Australia, where someone at Facebook told advertisers that they had the ability to target teens who were sad or depressed, which made them more susceptible to advertising. In the United States, Facebook once demonstrated its ability to make users happier or sadder by manipulating their news feed. While it did not turn either capability into a product, the fact remains that Facebook influences the emotional state of users every moment of every day. Former Google design ethicist Tristan Harris calls this “brain hacking.”

The fault here is not with search and social networking, per se. Those services have enormous value. The fault lies with advertising business models that drive companies to maximize attention at all costs, leading to ever more aggressive brain hacking.

The Facebook application has 2 billion active users around the world. Google’s YouTube has 1.5 billion. These numbers are comparable to Christianity and Islam, respectively, giving Facebook and Google influence greater than most First World countries. They are too big and too global to be held accountable. Other attention-based apps — including Instagram, WhatsApp, WeChat, Snapchat and Twitter — also have user bases between 100 million and 1.3 billion. Not all their users have had their brains hacked, but all are on that path. And there are no watchdogs.

Anyone who wants to pay for access to addicted users can work with Facebook and YouTube. Lots of bad people have done it. One firm was caught using Facebook tools to spy on law-abiding citizens. A federal agency confronted Facebook about the use of its tools by financial firms to discriminate based on race in the housing market. America’s intelligence agencies have concluded that Russia interfered in our election and that Facebook was a key platform for spreading misinformation. For the price of a few fighter aircraft, Russia won an information war against us.

Incentives being what they are, we cannot expect Internet monopolies to police themselves. There is little government regulation and no appetite to change that. If we want to stop brain hacking, consumers will have to force changes at Facebook and Google.

Roger McNamee is the managing director and a co-founder of Elevation Partners, an investment partnership focused on media/entertainment content and consumer technology.

Digital PacMan

Apple Music’s Jimmy Iovine has been making waves again, with an interview for Beats 1 in which he criticises labels for their handling of YouTube and safe harbour, suggesting that they are partly responsible for a decline in the quality of some albums, as artists try to squeeze recording in between their more-lucrative commitments.

YouTube first. “The labels haven’t done anything about YouTube. So now you’ve got YouTube out there with 500 million people, where you can get your music very elegantly for free, and getting better and better and better and better,” said Iovine, before claiming that Billboard’s chart is “counting YouTube’s plays the same as Spotify’s paid plays and Apple Music paid plays” – thus encouraging artists to support it.

“So where does the artist go? ‘Oh, there’s 500 million people on YouTube, so I’m going to go promote my record there. Even though I get paid here, but I want a number one record here!’ That’s called fake news!” said Iovine. “Netflix doesn’t have a free tier: you can’t find House of Cards on YouTube.”

[Update: as has been pointed out, Billboard does not include YouTube streams in its main albums chart, but does include them in its Hot 100 singles chart.]

While admitting that safe-harbour legislation has been a challenge for labels trying to rein YouTube in, Iovine was firm in his belief: “You could figure out a way to deal with it, and so far the record industry has handled that wrong.”

His argument segued into the knock-on effects on recorded music. “How many times do you hear that today? ‘We don’t make any money on records, let’s make a record and go out and tour it and sell perfume’. Or whatever. So you start out with that premise, where artists believe that now,” he said.

“So you’ve got artists promoting free tiers. So what happens now? Your manager calls you, and you’re in the studio trying to make your album, and says ‘We have a gig for you in Dubai where you’re getting $750k’. Stevie Wonder didn’t leave the studio to go play a gig in Cleveland! He stayed with his art…”

Iovine’s view: “Everybody I know [now] is making their record on the road! Adele didn’t. Ed Sheeran didn’t. But you can tell. So the combination of all those things lead to music that someone could say some of it is not as good as it needs to be. No one’s looking at it holistically. From my perspective at 64 years old, if I was in the record business, that’s what I would be looking at. The actual art itself is being affected, and things that you’re doing is why it’s being affected… The record industry as it is right now has to come to grips with it and become part of the solution.”

Earlier in the interview, Iovine also suggested that labels need to do more to get to grips with changing dynamics in the streaming world – particularly as artists forge relationships with services like Apple Music.

“You can’t hire just a few people that own a computer and say ‘So you’re in charge of digital: good luck!’. That’s not what it’s going to take,” he said. “They need to get real technology people in there, or merge with tech companies. They have to do something.”

Link here.