Data Land Grab


‘Good for the world’? Facebook emails reveal what really drives the site

As we can read from this article and Facebook’s internal management debates, Web 2.0 (of which the GAFA companies are the archetypes) is built on a data land grab. It’s rather similar to the literal land grabs the European powers battled over in the New World and, later, in the colonization of Africa and Asia.

Data is now a valuable resource, priced right up there with land and capital. Naturally, the tech oligopolies and their startup wannabes all want to grab as much of it as possible. And who are they grabbing it from? The network users, of course.

Web 3.0 is all about democratizing the value and monetization of personal networked data. It’s about decentralized ownership and control, much like the desire to own and control the fruits of one’s labor that ended slavery. Web 3.0 is the future, because Web 2.0 is unsustainable.

 

Data Privacy


Good, thorough, and l-o-o-o-o-ng article on data privacy issues, legislation, and network value. From the NYT Magazine. Read about what’s being done to you behind closed doors…

The Unlikely Activists Who Took On Silicon Valley — and Won

Some excerpts:

Almost by accident, though, Mactaggart had thrust himself into the greatest resource grab of the 21st century. To Silicon Valley, personal information had become a kind of limitless natural deposit, formed in the digital ether by ordinary people as they browsed, used apps and messaged their friends. Like the oil barons before them, they had collected and refined that resource to build some of the most valuable companies in the world, including Facebook and Google, an emerging duopoly that today controls more than half of the worldwide market in online advertising. But the entire business model — what the philosopher and business theorist Shoshana Zuboff calls “surveillance capitalism” — rests on untrammeled access to your personal data. The tech industry didn’t want to give up its powers of surveillance. It wanted to entrench them. And as Mactaggart would soon learn, Silicon Valley almost always got what it wanted.

Through the Obama years, the tech industry enjoyed extraordinary cachet in Washington, not only among Republicans but also among Democrats. Partnering with Silicon Valley allowed Democrats to position themselves as pro-business and forward-thinking. The tech industry was both an American economic success story and a political ally to Democrats on issues like immigration. Google enjoyed particularly close ties to the Obama administration: Dozens of Google alumni would serve in the White House or elsewhere in the administration, and by one estimate Google representatives visited the White House an average of about once a week.

Mactaggart … faced an American political establishment that saw the key to its future in companies like Google and Facebook — not because of whom they supported but because of what they did. The surveillance capitalists didn’t just sell more deodorant; they had built one of the most powerful tools ever invented for winning elections. Roughly the same suite of technologies helped elect Obama, a pragmatic liberal who promised racial progress and a benevolent globalism, and Trump, a strident nationalist who adeptly employs social media to stoke racial panic and has set out to demolish the American-led world order.

In the end, not a single lawmaker in either chamber voted against the compromise.

Political power is a malleable thing, … an elaborate calculation of artifice and argument, votes and money. People and institutions — in politics, in Silicon Valley — can seem all-powerful right up to the moment they are not. And sometimes, … a thing that can’t possibly happen suddenly becomes a thing that cannot be stopped.

The promise of blockchain is to disrupt this Monopoly game.

The Creators’ Case for Blockchain


Nice article on Medium:

A Poet’s Case for Blockchain

I would add that the major problems for artists in the digital age stem from the explosion in the supply of new content. This drives prices down and the search costs of discovery up. The resulting failure is that artists can’t find their audiences and consumers can’t find the content they desire. For poets, the goal is not necessarily to sell poetry; more important is to find readers and appreciators of their work.

Large centralized networks driven by algorithms can’t solve this problem without commoditizing content and delivering the most popular, but often mundane, content those metrics churn out.

We need to empower the human by connecting the creative.


 

The Big Tech Oligopoly


Is It Time to Break-Up Big Tech?

6:05 AM, Oct 28, 2017 | By Irwin M. Stelzer, The Weekly Standard

Uber comes along and ends the rainy days and nights of waving fruitlessly at cabs with flashing “off duty” signs, and governments respond to pressures from threatened incumbents by making life difficult or impossible for the welfare-enhancing newcomer.

Amazon spares consumers the chore of driving to malls, picking through racks, and perhaps finding a tolerable substitute for what they really, really want, and the tax man and the regulator come sniffing around.

Google puts the world’s intellectual output at everyone’s disposal, Apple puts enormous communicating power in citizens’ pockets, and Facebook links far-flung people with similar interests, only to find that the power their successes convey prompts governments to search for new constraints. And a cut of the revenues.

Unfortunately for the tax collectors, they are forced to play whack-a-mole with the Internet giants, who can always find another way of moving profits away from the greediest of their pursuers. Until the tax men accept the fact that they will always be one step behind the lawyers and accountants who shield their clients’ well-gotten gains from the pursuers, the profit-mole will never get whacked.

The obvious solution is to tax revenues in the country in which they are earned—it is a lot easier to total up sales receipts, and tax them, than to try to estimate the reasonableness of fees an international company charges itself for use of its own intellectual property, lodged in the sunny Caribbean or rainy Ireland.

But the taxation problem is the least of the worries of what we might call this era’s Fab Four—Amazon, Apple, Google, and Facebook. (The New York Times’ Farhad Manjoo includes Microsoft in what he calls The Frightful Five.) They are now seen by critics as simply too big and, with the exception of Apple (which faces stiff competition), possessing market power that exceeds that of companies that competition authorities in days past dismembered—Rockefeller’s Standard Oil being the most notable of the old giants cut down to size by regulators.

The solution being mooted in academic seminars and the halls of Congress (when its members are not busy dodging presidential tweets) is utility-style regulation of the prices and the soaring profits of these companies. That solution is still a gleam in the eye of some politicians, right and left: they are reluctant to take steps that might curb the activities of businesses enormously popular with the public. But it is now a policy goal of companies that compete with the Internet giants on what they deem an unlevel playing field. At minimum, they are turning to the courts for relief: Yelp, the site on which you can express your pleasure with, or hurl brickbats at, businesses you patronize, has filed an antitrust action against Google, making much the same claims as those that produced the search-engine creator’s whopping fine in Europe.

Regulation is a tempting goal for policy makers here and in the European Union who feel it essential that they gain control over how people will use the internet to shop, travel, date, learn, and interact in the future. This is especially compelling for E.U. regulators, who feel that unless they somehow control the business practices of leading Internet companies the (ugh) Americans will have too much power in Europe.

I have been involved in regulation for enough decades to know the slowing-to-deadening effect regulation can have on innovation and customer service—not as bad as unregulated monopoly, but nowhere near as good for consumers as competition. Try hard to remember when the pre-break-up AT&T would not allow you even to attach a shoulder cradle to your phone—it being classified as a “foreign attachment”—and compare that with the range of communications devices available since the monopoly’s break-up. Or consider the quality of service you get from your quasi-monopoly cable company, at bundled prices bordering on the absurd, which is why, given the chance, millions are cutting the cord as competition rears its lovely head in the entertainment business.

Better to nourish competition in these new markets than to call in the regulators. Which is what the European Commission says it is trying to do. It has decided that Google has a dominant position in search—a finding with which the company, which I once served as a consultant, disagrees—and fined it $2.7 billion for favoring its own services when consumers search for maps, or shopping sites.

That’s a rather standard application of competition policy to the activities of companies found to be using their power in one market to disadvantage competitors in others. The EC decision raised hackles at Google’s Mountain View, California headquarters, and raised eyebrows in America because of the size of the fine and because once again the Brussels crowd has targeted an American company. Whether the newly installed Trump antitrust team will agree with the EC that Google possesses sufficient market power to warrant similar action is uncertain.

Amazon presents a different problem. It is big and getting bigger. The new $5 billion ancillary headquarters for which it is site-searching will employ a staff of 50,000 workers, supplemented at Christmas by some of the 120,000 Amazon recruits to cope with the Christmas rush. Some 238 cities and regions are offering gifts of taxpayer cash in the hope of persuading Jeff Bezos that his company will live happily ever after in their domain. Amazon now controls half of all internet retail business. But only 8 percent of all retail sales are transacted over the Internet, meaning that Bezos’ half represents only 4 percent of all retail sales. As Manjoo puts it, “Amazon . . . is still a minor satellite compared with Walmart’s sun.” So what’s the problem?

For one thing, that 4 percent share of the total retail market can mask Amazon’s devastating effect on specific market segments: think what has happened to bookstores faced with Amazon’s huge inventory and low prices. More worrying, Amazon has the power to strangle potential rivals in their cradles. Shortly after Blue Apron took its food-kit service public, Amazon filed to trademark a copycat service. Blue Apron shares immediately plunged 12 percent on investor fears that Amazon would eat Blue Apron’s lunch. A possible remedy would be to consider an established antitrust interdiction that, in certain circumstances, prohibits pre-announcements that contribute to the maintenance of market power.

Better that than having a utility-style regulatory commission deciding whether Prime service is fairly priced, or whether plans to provide customers with a special lock and Amazon employees with a camera-monitored key so they can deliver packages when the customer is not home are a good idea, rather than leaving it to customers to decide.

For now, I leave it to the political class to cope with the power these companies seem to have over the nature and reliability of the news they purvey. And to the sociologists to take on Facebook, and the cultural effects of all those friendships formed without even so much as a face-to-face hello, air kiss, or man hug.

The Logic of tuka

Many people, on being introduced to the concept developed here under the tuka ecosystem model, have asked, “So, what makes tuka different?”

Cribbing from Socrates, we would answer that question by posing two of our own:

1. Why do people use the Internet?

I believe we can come up with a lot of different reasons people have embraced the Internet, despite the fact that sitting in front of a computer screen or thumbing a smartphone for hours is bad for the eyes, the hands, the posture, and the back! But one gains access to the world’s information at one’s fingertips. We can be much more efficient and productive by having access to more information at much lower cost than in the past.

But those reasons just raise the further question of why having access to more and more information is important. Well, yes, information can be valuable, especially if it makes us more productive with our scarce resources of time and energy. In other words, information is valuable because informed knowledge empowers us to do more of the things we want to do, at less cost.

So, what is it we want to do? [That’s really a rhetorical question because there are literally billions of valid answers.]

Let’s move on to question #2 and maybe it will become clear what we’re getting at.

2. Why do so many people use Facebook?

I think the answers to this question are quite different from the answers to the previous question. I doubt Facebook makes anybody more productive, unless one is running viral ad campaigns. Facebook and its fellow social media platforms actually seem to be the biggest time-wasters since the advent of the Internet! But apparently users are not only attracted to social media, they become addicted to it. How and why?

I believe the answer is found in psychology and the human need to be connected to others. As Aristotle wrote, “Man is a social animal” (and woman perhaps even more so?). And we connect by sharing information with each other that ranges from silly gossip to important and relevant knowledge (like how to survive a natural disaster).

And the connection is as important as, if not more important than, the information being shared. This is a key insight into information technology because it tells us what’s really going on behind all this BIG DATA.

If we work backward from this goal of connection, we see that connection is driven by the human instinct to connect by sharing, which is rooted in some initial act of creating whatever it is we wish to share: Connection <- Sharing <- Creating. Here we have reverse-engineered the internal logic of the tuka creative social media ecosystem: Create -> Share -> Connect, which repeats in an endless feedback loop.
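To make that loop concrete, here is a minimal sketch in Python, offered purely as an illustration of the Create -> Share -> Connect cycle described above. Every name in it (CreativeWork, Network, share, connect) is hypothetical; it does not describe tuka’s actual design or code.

```python
# A purely illustrative sketch of the Create -> Share -> Connect loop.
# All names are hypothetical; this is not tuka's actual implementation.

from dataclasses import dataclass, field


@dataclass
class CreativeWork:
    creator: str
    content: str


@dataclass
class Network:
    # Each connection is a (creator, appreciator) pair formed by sharing a work.
    connections: set = field(default_factory=set)

    def share(self, work: CreativeWork, audience: list) -> None:
        # Sharing is the step that turns a creation into connections.
        for reader in audience:
            self.connect(work.creator, reader)

    def connect(self, creator: str, reader: str) -> None:
        self.connections.add((creator, reader))


# One pass around the loop: a work is created, sharing it forms connections,
# and those connections invite the next act of creation.
network = Network()
poem = CreativeWork(creator="poet", content="a short poem")   # Create
network.share(poem, audience=["reader1", "reader2"])          # Share
print(network.connections)                                    # Connect
```

The point of the sketch is simply that the act of sharing, not the content alone, is what generates connections, and connections are what the loop exists to produce.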

So, the point of this post is to show that tuka is not really about what a new technology tool can do for us, but about what we can do with technology to be more like who we really are.

Facebook in da Nile.

One should see this as the general problem with Facebook (and Twitter) as an information/news medium – it’s generated by subjective opinions rather than verifiable facts with verifiable sources. Verifiable identity (non-anonymity) has the virtue of incentivizing reputational capital and building trust across information exchanges. But Facebook then has to make a Sophie’s Choice: lose the traffic that raw emotionalism feeds or lose the trust of its user base. Facebook’s day of reckoning is coming.

From The Wall Street Journal:

Facebook Is Still In Denial About Its Biggest Problem
In a world where social media is the pre-eminent news conduit, ‘If it’s outrageous, it’s contagious’ is the new ‘If it bleeds, it leads’
It’s a good time to re-examine our relationship with Facebook Inc.
In the past month, it has been revealed that Facebook hosted a Russian influence operation which may have reached between 3 million and 20 million people on the social network, and that Facebook could be used to micro-target users with hate speech. It took the company more than two weeks to agree to share what it knows with Congress.
Increased scrutiny of Facebook is healthy. What went mainstream as a friendly place for loved ones to swap baby pictures and cat videos has morphed into an opaque and poorly understood metropolis rife with influence peddlers determined to manipulate what we know and how we think. We have barely begun to understand how the massive social network shapes our world.
Unfortunately, Facebook itself seems just as mystified, providing a response to all of this that has left many unsatisfied.
What the company’s leaders seem unable to reckon with is that its troubles are inherent in the design of its flagship social network, which prioritizes thrilling posts and ads over dull ones, and rewards cunning provocateurs over hapless users. No tweak to algorithms or processes can hope to fix a problem that seems enmeshed in the very fabric of Facebook.
Outrageous ads
On a network where article and video posts can be sponsored and distributed like ads, and ads themselves can go as viral as a wedding-fail video, there is hardly a difference between the two. And we now know that if an ad from one of Facebook’s more than five million advertisers goes viral—by making us feel something, not just joy but also fear or outrage—it will cost less per impression to spread across Facebook.
In one example, described in a recent Wall Street Journal article, a “controversial” ad went viral, leading to a 30% drop in the cost to reach each user. Joe Yakuel, founder and chief executive of Agency Within, which manages $100 million in digital ad purchases, told our reporter, “Even inadvertent controversy can cause a lot of engagement.”
Keeping people sharing and clicking is essential to Facebook’s all-important metric, engagement, which is closely linked to how many ads the network can show us and how many of them we will interact with. Left unchecked, algorithms like Facebook’s News Feed tend toward content that is intended to arouse our passions, regardless of source—or even veracity.
An old newspaper catchphrase was, “If it bleeds, it leads”—that is, if someone got hurt or killed, that’s the top story. In the age when Facebook supplies us with a disproportionate amount of our daily news, a more-appropriate catchphrase would be, “If it’s outrageous, it’s contagious.”
Will Facebook solve this problem on its own? The company has no immediate economic incentive to do so, says Yochai Benkler, a professor at Harvard Law School and co-director of the Berkman Klein Center for Internet and Society.
“Facebook has become so central to how people communicate, and it has so much market power, that it’s essentially immune to market signals,” Dr. Benkler says. The only thing that will force the company to change, he adds, is the brewing threat to its reputation.
Facebook’s next steps
Facebook Chief Executive Mark Zuckerberg recently said his company will do more to combat illegal and abusive misuse of the Facebook platform. The primary mechanism for vetting political and other ads will be “an even higher standard of transparency,” he said, achieved by, among other things, making all ads on the site viewable by everyone, where in the past they could be seen only by their target audience.
“Beyond pushing back against threats, we will also create more services to protect our community while engaging in political discourse,” Mr. Zuckerberg wrote.
This move is a good start, but it excuses Facebook from its responsibility to be the primary reviewer of all advertising it is paid to run. Why are we, the users, responsible for vetting ads on Facebook?
By default, most media firms vet the ads they run and refuse ones that might be offensive or illegal, says Scott Galloway, entrepreneur, professor of marketing at NYU Stern School of Business and author of “The Four,” a book criticizing the outsize growth and influence of Amazon, Apple, Facebook and Google.
Mr. Zuckerberg acknowledged in a recent Facebook post that the majority of advertising purchased on Facebook will continue to be bought “without the advertiser ever speaking to anyone at Facebook.” His argument for this policy: “We don’t check what people say before they say it, and frankly, I don’t think our society should want us to.”
This is false equivalence. Society may not want Facebook to read over everything typed by our friends and family before they share it. But many people would feel it’s reasonable for Facebook to review all of the content it gets paid (tens of billions of dollars) to publish and promote.
“Facebook has embraced the healthy gross margins and influence of a media firm but is allergic to the responsibilities of a media firm,” Mr. Galloway says.
More is needed
Mr. Zuckerberg has said the company will hire 250 more humans to review ads and content posted to Facebook. For Facebook, a company with more than $14 billion in free cash flow in the past year, to say it is adding 250 people to its safety and security efforts is “pissing in the ocean,” Mr. Galloway says. “They could add 25,000 people, spend $1 billion on AI technologies to help those 25,000 employees sort, filter and ID questionable content and advertisers, and their cash flow would decline 10% to 20%.”
Of course, mobilizing a massive team of ad monitors could subject Facebook to exponentially more accusations of bias from all sides. For every blatant instance of abuse, there are hundreds of cases that fall into gray areas.
The whole situation has Facebook between a rock and a hard place. But it needs to do more, or else risk further damaging its brand and reputation, two things of paramount importance to a service that depends on the trust of its users.

David Byrne on Technology, Creativity…

…and what it means to be human.

Eliminating the Human

A View from David Byrne

We are beset by—and immersed in—apps and devices that are quietly reducing the amount of meaningful interaction we have with each other.

August 15, 2017


I have a theory that much recent tech development and innovation over the last decade or so has an unspoken overarching agenda. It has been about creating the possibility of a world with less human interaction. This tendency is, I suspect, not a bug—it’s a feature. We might think Amazon was about making books available to us that we couldn’t find locally—and it was, and what a brilliant idea—but maybe it was also just as much about eliminating human contact.

The consumer technology I am talking about doesn’t claim or acknowledge that eliminating the need to deal with humans directly is its primary goal, but it is the outcome in a surprising number of cases. I’m sort of thinking maybe it is the primary goal, even if it was not aimed at consciously. Judging by the evidence, that conclusion seems inescapable.

This, then, is the new norm. Most of the tech news we get barraged with is about algorithms, AI, robots, and self-driving cars, all of which fit this pattern. I am not saying that such developments are not efficient and convenient; this is not a judgment. I am simply noticing a pattern and wondering if, in recognizing that pattern, we might realize that it is only one trajectory of many. There are other possible roads we could be going down, and the one we’re on is not inevitable or the only one; it has been (possibly unconsciously) chosen.

I realize I’m making some wild and crazy assumptions and generalizations with this proposal—but I can claim to be, or to have been, in the camp that would identify with the unacknowledged desire to limit human interaction. I grew up happy but also found many social interactions extremely uncomfortable. I often asked myself if there were rules somewhere that I hadn’t been told, rules that would explain it all to me. I still sometimes have social niceties “explained” to me. I’m often happy going to a restaurant alone and reading. I wouldn’t want to have to do that all the time, but I have no problem with it—though I am sometimes aware of looks that say “Poor man, he has no friends.” So I believe I can claim some insight into where this unspoken urge might come from.

Human interaction is often perceived, from an engineer’s mind-set, as complicated, inefficient, noisy, and slow. Part of making something “frictionless” is getting the human part out of the way. The point is not that making a world to accommodate this mind-set is bad, but that when one has as much power over the rest of the world as the tech sector does over folks who might not share that worldview, there is the risk of a strange imbalance. The tech world is predominantly male—very much so. Testosterone combined with a drive to eliminate as much interaction with real humans as possible for the sake of “simplicity and efficiency”—do the math, and there’s the future.

The evidence

Here are some examples of fairly ubiquitous consumer technologies that allow for less human interaction.

Online ordering and home delivery: Online ordering is hugely convenient. Amazon, FreshDirect, Instacart, etc. have not just cut out interactions at bookstores and checkout lines; they have eliminated all human interaction from these transactions, barring the (often paid) online recommendations.

Digital music: Downloads and streaming—there is no physical store, of course, so there are no snobby, know-it-all clerks to deal with. Whew, you might say. Some services offer algorithmic recommendations, so you don’t even have to discuss music with your friends to know what they like. The service knows what they like, and you can know, too, without actually talking to them. Is the function of music as a kind of social glue and lubricant also being eliminated? [Blog note: Creativity and innovation require serendipity, which tech algorithms often eliminate in the interest of efficiency.]

Ride-hailing apps: There is minimal interaction—one doesn’t have to tell the driver the address or the preferred route, or interact at all if one doesn’t want to.

Driverless cars: In one sense, if you’re out with your friends, not having one of you drive means more time to chat. Or drink. Very nice. But driverless tech is also very much aimed at eliminating taxi drivers, truck drivers, delivery drivers, and many others. There are huge advantages to eliminating humans here—theoretically, machines should drive more safely than humans, so there might be fewer accidents and fatalities. The disadvantages include massive job loss. But that’s another subject. What I’m seeing here is the consistent “eliminating the human” pattern.

Automated checkout: Eatsa is a new version of the Automat, a once-popular “restaurant” with no visible staff. My local CVS has been training staff to help us learn to use the checkout machines that will replace them. At the same time, they are training their customers to do the work of the cashiers.

Amazon has been testing stores—even grocery stores!—with automated shopping. They’re called Amazon Go. The idea is that sensors will know what you’ve picked up. You can simply walk out with purchases that will be charged to your account, without any human contact.

AI: AI is often (though not always) better at decision-making than humans. In some areas, we might expect this. For example, AI will suggest the fastest route on a map, accounting for traffic and distance, while we as humans would be prone to taking our tried-and-true route. But some less-expected areas where AI is better than humans are also opening up. It is getting better at spotting melanomas than many doctors, for example. Much routine legal work will soon be done by computer programs, and financial assessments are now being done by machines.

Robot workforce: Factories increasingly have fewer and fewer human workers, which means no personalities to deal with, no agitating for overtime, and no illnesses. Using robots avoids an employer’s need to think about worker’s comp, health care, Social Security, Medicare taxes, and unemployment benefits.

Personal assistants: With improved speech recognition, one can increasingly talk to a machine like Google Home or Amazon Echo rather than a person. Amusing stories abound as the bugs get worked out. A child says, “Alexa, I want a dollhouse” … and lo and behold, the parents find one in their cart.

Big data: Improvements and innovations in crunching massive amounts of data mean that patterns can be recognized in our behavior where they weren’t seen previously. Data seems objective, so we tend to trust it, and we may very well come to trust the gleanings from data crunching more than we do ourselves and our human colleagues and friends.

Video games (and virtual reality): Yes, some online games are interactive. But most are played in a room by one person jacked into the game. The interaction is virtual.

Automated high-speed stock buying and selling: A machine crunching huge amounts of data can spot trends and patterns quickly and act on them faster than a person can.

MOOCs: Online education with no direct teacher interaction.

“Social” media: This is social interaction that isn’t really social. While Facebook and others frequently claim to offer connection, and do offer the appearance of it, the fact is a lot of social media is a simulation of real connection.

What are the effects of less interaction?

Minimizing interaction has some knock-on effects—some of them good, some not. The externalities of efficiency, one might say.

For us as a society, less contact and interaction—real interaction—would seem to lead to less tolerance and understanding of difference, as well as more envy and antagonism. As has been in evidence recently, social media actually increases divisions by amplifying echo effects and allowing us to live in cognitive bubbles. We are fed what we already like or what our similarly inclined friends like (or, more likely now, what someone has paid for us to see in an ad that mimics content). In this way, we actually become less connected—except to those in our group.

Social networks are also a source of unhappiness. A study earlier this year by two social scientists, Holly Shakya at UC San Diego and Nicholas Christakis at Yale, showed that the more people use Facebook, the worse they feel about their lives. While these technologies claim to connect us, then, the surely unintended effect is that they also drive us apart and make us sad and envious.

I’m not saying that many of these tools, apps, and other technologies are not hugely convenient, clever, and efficient. I use many of them myself. But in a sense, they run counter to who we are as human beings.

We have evolved as social creatures, and our ability to cooperate is one of the big factors in our success. I would argue that social interaction and cooperation, the kind that makes us who we are, is something our tools can augment but not replace.

When interaction becomes a strange and unfamiliar thing, then we will have changed who and what we are as a species. Often our rational thinking convinces us that much of our interaction can be reduced to a series of logical decisions—but we are not even aware of many of the layers and subtleties of those interactions. As behavioral economists will tell us, we don’t behave rationally, even though we think we do. And Bayesians will tell us that interaction is how we revise our picture of what is going on and what will happen next.

I’d argue there is a danger to democracy as well. Less interaction, even casual interaction, means one can live in a tribal bubble—and we know where that leads.

Is it possible that less human interaction might save us?

Humans are capricious, erratic, emotional, irrational, and biased in what sometimes seem like counterproductive ways. It often seems that our quick-thinking and selfish nature will be our downfall. There are, it would seem, lots of reasons why getting humans out of the equation in many aspects of life might be a good thing.

But I’d argue that while our various irrational tendencies might seem like liabilities, many of those attributes actually work in our favor. Many of our emotional responses have evolved over millennia, and they are based on the probability that they will, more likely than not, offer the best way to deal with a situation.

What are we?

Antonio Damasio, a neuroscientist at USC, wrote about a patient he called Elliot, who had damage to his frontal lobe that made him unemotional. In all other respects he was fine—intelligent, healthy—but emotionally he was Spock. Elliot couldn’t make decisions. He’d waffle endlessly over details. Damasio concluded that although we think decision-making is rational and machinelike, it’s our emotions that enable us to actually decide.

With humans being somewhat unpredictable (well, until an algorithm completely removes that illusion), we get the benefit of surprises, happy accidents, and unexpected connections and intuitions. Interaction, cooperation, and collaboration with others multiplies those opportunities. [Yes!]

We’re a social species—we benefit from passing discoveries on, and we benefit from our tendency to cooperate to achieve what we cannot alone. In his book Sapiens, Yuval Harari claims this is what allowed us to be so successful. He also claims that this cooperation was often facilitated by an ability to believe in “fictions” such as nations, money, religions, and legal institutions. Machines don’t believe in fictions—or not yet, anyway. That’s not to say they won’t surpass us, but if machines are designed to be mainly self-interested, they may hit a roadblock. And in the meantime, if less human interaction enables us to forget how to cooperate, then we lose our advantage.

Our random accidents and odd behaviors are fun—they make life enjoyable. I’m wondering what we’re left with when there are fewer and fewer human interactions. Remove humans from the equation, and we are less complete as people and as a society.

“We” do not exist as isolated individuals. We, as individuals, are inhabitants of networks; we are relationships. That is how we prosper and thrive. [Lest we overemphasize the cooperative aspect, E.O. Wilson argues that human society needs a balance between competition and cooperation acting through group vs. individual selection. Too much competition leads to Mad Max survival, while too much group cooperation leads to homogeneity resembling ant colonies. Wilson argues this tension and balance is the catalyst for human culture. See his book, The Social Conquest of Earth.]

David Byrne is a musician and artist who lives in New York City. His most recent book is called How Music Works. A version of this piece originally appeared on his website, davidbyrne.com.