Tech Dystopia?

Below are excerpts from a fascinating series of articles by The Guardian (with links). The articles address many of the ways that Web 2.0 media platforms such as Google, Facebook, YouTube, and Instagram are transforming, and in many cases undermining, the foundations of a democratic, humanistic society. These issues motivate us at tuka to design solutions to the great question of life’s meaning.

Personally, I don’t believe this dystopia will come to pass, because humans are quite resilient as a species and eventually our humanist qualities will dominate our biological urges and economic imperatives. We have free will, and ultimately we will choose correctly.

Perhaps that is an overly optimistic opinion, but Internet (or Web) 3.0 technology is rewriting the script with applications that reassert human control over the data universe. We will build more humanistic social communities that employ technology, with the emphasis always on the human. We see this now in the growing refusal by tech insiders to surrender to Web 2.0.

See excerpts and comments below.

“If politics is an expression of our human will, on individual and collective levels, then the attention economy is directly undermining the assumptions that democracy rests on.” If Apple, Facebook, Google, Twitter, Instagram, and Snapchat are gradually chipping away at our ability to control our own minds, could there come a point, I ask, at which democracy no longer functions?

‘Fiction is outperforming reality’: how YouTube’s algorithm distorts truth

Paul Lewis February 2, 2018

There are 1.5 billion YouTube users in the world, which is more than the number of households that own televisions. What they watch is shaped by this algorithm, which skims and ranks billions of videos to identify 20 “up next” clips that are both relevant to a previous video and most likely, statistically speaking, to keep a person hooked on their screen.

Company insiders tell me the algorithm is the single most important engine of YouTube’s growth. In one of the few public explanations of how the formula works – an academic paper that sketches the algorithm’s deep neural networks, crunching a vast pool of data about videos and the people who watch them – YouTube engineers describe it as one of the “largest scale and most sophisticated industrial recommendation systems in existence”.
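To make the engineers’ two-stage description a bit more concrete, here is a minimal, purely illustrative sketch of a “candidate generation, then ranking” recommender in Python. The video titles, topics, scores, and function names are all invented for illustration; this is a toy model of the pattern the paper describes, not YouTube’s actual system.

```python
# Toy two-stage recommender: a cheap candidate-generation pass followed by a
# ranking pass that keeps the top "up next" slots. Purely illustrative.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    topic: str
    predicted_watch_minutes: float  # stand-in for a learned engagement prediction

def candidate_generation(catalog: list[Video], current: Video) -> list[Video]:
    """Stage 1: narrow a huge catalog down to videos related to the one just watched."""
    return [v for v in catalog if v.topic == current.topic and v is not current]

def rank(candidates: list[Video], k: int = 20) -> list[Video]:
    """Stage 2: order candidates by predicted engagement and keep the top k."""
    return sorted(candidates, key=lambda v: v.predicted_watch_minutes, reverse=True)[:k]

catalog = [
    Video("Moon landing documentary", "space", 9.5),
    Video("The moon landing was faked!?", "space", 27.0),  # sensational clips often predict longer watch time
    Video("Rocket engines explained", "space", 12.0),
    Video("How to cook pasta", "food", 6.0),
]
watching = catalog[0]
for v in rank(candidate_generation(catalog, watching)):
    print("Up next:", v.title)
```

Note that nothing in the ranking step asks whether a clip is true; it only asks what is likely to keep the viewer watching, which is the structural point the article goes on to make.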

We see here the power of AI algorithms to filter content. Google’s response has been to “expand the army of human moderators.” That is a necessary method of reasserting human judgment over the network.

The primary focus of the article then turns to politics and the electoral influences of disinformation:

Much has been written about Facebook and Twitter’s impact on politics, but in recent months academics have speculated that YouTube’s algorithms may have been instrumental in fuelling disinformation during the 2016 presidential election. “YouTube is the most overlooked story of 2016,” Zeynep Tufekci, a widely respected sociologist and technology critic, tweeted back in October. “Its search and recommender algorithms are misinformation engines.”

Apparently, the sensationalism surrounding the Trump campaign caused YouTube’s algorithms to push more video feeds favorable to Trump and damaging to Hillary Clinton. One doesn’t need to be a partisan to recognize that this was probably true for this particular media channel and its business model, which values views above all else.

However, this reality can also be distorted to present a particular conspiracy narrative of its own:

Trump won the electoral college as a result of 80,000 votes spread across three swing states. There were more than 150 million YouTube users in the US. The videos contained in Chaslot’s database of YouTube-recommended election videos were watched, in total, more than 3bn times before the vote in November 2016.

This, unfortunately, is cherry-picking statistical inferences about the margin of voting support. What was significant in determining the 2016 election outcome was not 80,000 votes across three states, but a run of popular-vote wins in 2,623 of 3,112 counties across the U.S. That roughly 84% share could not be an accident, nor could it be due to the single influence of disinformation, Russian or otherwise. The true difference in the election was revealed not by the popular-vote total or the Electoral College count, but by the geographical distribution of support. One can argue about which is more critical to democratic governance, but this post is about electronic media content, not political analysis.

The next article further addresses how technology is influencing our individual behaviors.

‘Our minds can be hijacked’: the tech insiders who fear a smartphone dystopia

Paul Lewis October 6, 2017

Justin Rosenstein had tweaked his laptop’s operating system to block Reddit, banned himself from Snapchat, which he compares to heroin, and imposed limits on his use of Facebook. But even that wasn’t enough. In August, the 34-year-old tech executive took a more radical step to restrict his use of social media and other addictive technologies.

A decade after he stayed up all night coding a prototype of what was then called an “awesome” button, Rosenstein belongs to a small but growing band of Silicon Valley heretics who complain about the rise of the so-called “attention economy”: an internet shaped around the demands of an advertising economy.

The extent of this addiction is illustrated by research showing that people touch, swipe, or tap their phones 2,617 times a day!

There is growing concern that as well as addicting users, technology is contributing toward so-called “continuous partial attention”, severely limiting people’s ability to focus, and possibly lowering IQ. One recent study showed that the mere presence of smartphones damages cognitive capacity – even when the device is turned off. “Everyone is distracted,” Rosenstein says. “All of the time.”

“The technologies we use have turned into compulsions, if not full-fledged addictions,” writes Nir Eyal, author of Hooked. “It’s the impulse to check a message notification. It’s the pull to visit YouTube, Facebook, or Twitter for just a few minutes, only to find yourself still tapping and scrolling an hour later.” None of this is an accident, he writes. It is all “just as their designers intended”.

Tristan Harris, a former Google employee turned vocal critic of the tech industry, points out that… “All of us are jacked into this system. All of our minds can be hijacked. Our choices are not as free as we think they are.”

“I don’t know a more urgent problem than this,” Harris says. “It’s changing our democracy, and it’s changing our ability to have the conversations and relationships that we want with each other.” 

Harris believes that tech companies never deliberately set out to make their products addictive. They were responding to the incentives of an advertising economy, experimenting with techniques that might capture people’s attention, even stumbling across highly effective design by accident.

“Smartphones are useful tools,” says Loren Brichter, a product designer. “But they’re addictive. Pull-to-refresh is addictive. Twitter is addictive. These are not good things. When I was working on them, it was not something I was mature enough to think about.”

The two inventors listed on Apple’s patent for “managing notification connections and displaying icon badges” are Justin Santamaria and Chris Marcellino. A few years ago Marcellino, 33, left the Bay Area and is now in the final stages of retraining to be a neurosurgeon. He stresses he is no expert on addiction but says he has picked up enough in his medical training to know that technologies can affect the same neurological pathways as gambling and drug use. “These are the same circuits that make people seek out food, comfort, heat, sex,” he says.

“The people who run Facebook and Google are good people, whose well-intentioned strategies have led to horrific unintended consequences,” he says. “The problem is that there is nothing the companies can do to address the harm unless they abandon their current advertising models.”

But how can Google and Facebook be forced to abandon the business models that have transformed them into two of the most profitable companies on the planet?

This is exactly the problem – they really can’t. Newer technology, such as distributed social networks secured by blockchains, must be deployed to disrupt the dysfunctional incumbent platforms. New business models will be designed to support this disruption. Human behavioral instincts are crucial to successful new designs that make us more human, rather than less.

James Williams does not believe talk of dystopia is far-fetched. …He says his epiphany came a few years ago when he noticed he was surrounded by technology that was inhibiting him from concentrating on the things he wanted to focus on. “It was that kind of individual, existential realization: what’s going on?” he says. “Isn’t technology supposed to be doing the complete opposite of this?”

The question we ask at tuka is: “What do people really want from technology and social interaction? Distraction or meaning? And how do they find meaning?” Our answer is self-expression through creativity, sharing that creative work, and connecting with communities.

Williams and Harris left Google around the same time and co-founded an advocacy group, Time Well Spent, that seeks to build public momentum for a change in the way big tech companies think about design. 

“Eighty-seven percent of people wake up and go to sleep with their smartphones,” Williams says. The entire world now has a new prism through which to understand politics, and he worries the consequences are profound.

The same forces that led tech firms to hook users with design tricks, he says, also encourage those companies to depict the world in a way that makes for compulsive, irresistible viewing. “The attention economy incentivizes the design of technologies that grab our attention,” he says. “In so doing, it privileges our impulses over our intentions.”

That means privileging what is sensational over what is nuanced, appealing to emotion, anger, and outrage. The news media is increasingly working in service to tech companies, Williams adds, and must play by the rules of the attention economy to “sensationalize, bait and entertain in order to survive”.

In the wake of Donald Trump’s stunning electoral victory, many were quick to question the role of so-called “fake news” on Facebook, Russian-created Twitter bots or the data-centric targeting efforts that companies such as Cambridge Analytica used to sway voters. But Williams sees those factors as symptoms of a deeper problem.

It is not just shady or bad actors who were exploiting the internet to change public opinion. The attention economy itself is set up to promote a phenomenon like Trump, who is masterly at grabbing and retaining the attention of supporters and critics alike, often by exploiting or creating outrage.

Orwellian-style coercion is less of a threat to democracy than the more subtle power of psychological manipulation, and “man’s almost infinite appetite for distractions”.

“The dynamics of the attention economy are structurally set up to undermine the human will,” Williams says. “If politics is an expression of our human will, on individual and collective levels, then the attention economy is directly undermining the assumptions that democracy rests on.” If Apple, Facebook, Google, Twitter, Instagram, and Snapchat are gradually chipping away at our ability to control our own minds, could there come a point, I ask, at which democracy no longer functions?

Our politics will survive, and democracy is only one form of governance. The bigger question is how human civilization survives if our behavior becomes self-destructive and meaningless.

The Genius of Silicon Valley?


Here’s a review of Franklin Foer’s new book, World Without Mind: The Existential Threat of Big Tech. What we’re seeing here is the slow breaking of the next wave of tech, from Web 2.0 to Web 3.0, where users take back control and are treated as more than mindless sources of data. It took a long time to transition the world out of feudalism, and we’re still in the very early stages of throwing off the yoke of data exploitation and tyranny. Algorithms cannot guide humanity.

The genius and stupidity of Silicon Valley

Knowledge is a tricky thing.

Acquiring and deploying it to change the world through technological innovations can inspire great confidence and self-certainty in the person who possesses the knowledge. And yet that confidence and self-certainty are nearly always misplaced — a product of the knower presuming that his expert knowledge of one aspect of reality applies equally to others. That’s one powerful reason why myths about the place of knowledge in human life so often teach lessons about hubris and its dire social, cultural, and political consequences.

Franklin Foer’s important new book, World Without Mind: The Existential Threat of Big Tech, is best seen as a modern-day journalistic retelling of one of those old cautionary tales about human folly. Though he doesn’t describe his aim in quite this way, Foer sets out to expose the foolishness and arrogance that permeates the culture of Silicon Valley and that through its wondrous technological innovations threatens unintentionally to wreak civilizational havoc on us all.

It’s undeniable that Silicon Valley’s greatest innovators know an awful lot. Google is an incredibly powerful tool for organizing information — one to which no previous generation of human beings could have imagined having easy and free access, let alone devising from scratch, as Larry Page and Sergey Brin managed to do. The same goes for Facebook, which Mark Zuckerberg famously created in his Harvard dorm room and which has become a global powerhouse in a little more than a decade, turning him into one of the world’s richest men and revolutionizing the way some two billion people around the world consume information and interact with each other.

That’s power. That’s knowledge.

But knowledge of what?

Mostly of how to program computers and deploy algorithms to sort through, organize, cluster, rank, and order vast quantities of data. In the case of Facebook, Zuckerberg obviously also understood something simple but important about how human beings might enjoy interacting online. That’s not nothing. Actually, it’s a lot. An enormous amount. But it’s not everything — or anything remotely close to what Silicon Valley’s greatest innovators think it is.

When it comes to human beings — what motivates them, how they interact socially, to what end they organize politically — figures like Page and Zuckerberg know very little. Almost nothing, in fact. And that ignorance has enormous consequences for us all.

You can see the terrible problems of this hubris in the enormously sweeping ambitions of the titans of technology. Page, for instance, seeks to achieve immortality.

Foer explains how Page absorbed ideas from countercultural guru Stewart Brand, futurist Ray Kurzweil, and others to devise a quasi-eschatological vision for Google as a laboratory for artificial intelligence that might one day make it possible for humanity to transcend human limitations altogether, eliminating scarcity, merging with machines, and finally triumphing over mortality itself. Foer traces the roots of this utopianism back to Descartes’ model of human subjectivity, which pictures a spiritual mind encased within and controlling an (in principle, separable) mechanical body. If this is an accurate representation of the mind’s relation to its bodily host, then why not seek to develop technology that would make it possible to deposit this mind, like so much software, into a much more durable and infinitely repairable and improvable computer? In the process, these devices would be transformed into what Kurzweil has dubbed “spiritual machines” that could, in principle, enable individuals to live on and preserve their identities forever.

The problem with such utopian visions and extravagant hopes is not that they will outstrip our technological prowess. For all I know, the company that almost instantly gathers and ranks information from billions of websites for roughly 40,000 searches every second will some day, perhaps soon, develop the technical capacity to transfer the content of a human mind into a computer network.

The problem with such a goal is that in succeeding it will inevitably fail. As anyone who reflects on the issue with any care, depth, and rigor comes to understand, the Cartesian vision of the mind is a fiction, a fairy tale. Our experience of being alive, of being-in-the-world, is thoroughly permeated and shaped by the sensations, needs, desires, and fears that come to us by way of our bodies, just as our opinions of right and wrong, better and worse, noble and base, and just and unjust are formed by rudimentary reflections on our own good, which is always wrapped up with our perception of the good of our physical bodies.

Even if it were possible to transfer our minds — our memories, the content of our thoughts — into a machine, the indelible texture of conscious human experience would be flattened beyond recognition. Without a body and its needs, desires, vulnerabilities, and fear of injury and death, we would no longer experience a world of meaning, gravity, concern, and care — for ourselves or others. Which also means that Page’s own relentless drive to innovate technologically — which may well be the single attribute that most distinguishes him as an individual — would vanish without a trace the moment he realized his goal of using technological innovations to achieve immortality.

An immortal Larry Page would no longer be Larry Page.

Zuckerberg’s very different effort to overcome human limits displays a similar obliviousness to the character of human experience, in this case political life — and it ends with a similar paradox.

Rather than simply providing Facebook’s users with a platform for socializing and sharing photos, Zuckerberg’s company has developed intricate algorithms for distributing information in each user’s “news feed,” turning it into a “personalized newspaper,” with the content (including advertisements) precisely calibrated to his or her particular interests, tastes, opinions, and commitments. The idea was to build community and bring people together through the sharing and dissemination of information. The result has been close to the opposite.

As Facebook’s algorithms have become more sophisticated, they have gotten better and better at giving users information that resembles information they have previously liked or shared with their friends. That has produced an astonishing degree of reinforcement of pre-existing habits and opinions. If you’re a liberal, you’re now likely only to see liberal opinions on Facebook. If you’re conservative, you’ll only see conservative opinions. And if you’re inclined to give credence to conspiracy theories, you’ll see plenty of those.

And maybe not just if you favor conspiracy theories. As we’ve learned since the 2016 election, it’s possible for outside actors (like foreign intelligence services, for example) to game the system by promoting or sponsoring fake or inflammatory stories that get disseminated and promoted among like-minded or sympathetic segments of the electorate.

Facebook may be the most effective echo chamber ever devised, precisely because there’s potentially a personalized chamber for every single person on the planet.
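Our aside: to make the reinforcement dynamic the reviewer describes more concrete, here is a minimal, hypothetical sketch of such a feedback loop in Python. The story labels, weights, and update rule are invented for illustration; this is a toy model of preference reinforcement, not Facebook’s actual feed algorithm.

```python
# Toy model of an echo-chamber feedback loop: the feed shows what resembles past
# engagement, and that engagement feeds back into the ranking. Illustrative only.

STORIES = ["liberal op-ed", "conservative op-ed", "conspiracy post", "local news"]

def rank_feed(preferences: dict[str, float]) -> list[str]:
    """Order the feed so stories resembling past likes appear first."""
    return sorted(STORIES, key=lambda s: preferences[s], reverse=True)

def simulate(days: int = 30) -> dict[str, float]:
    # Start with a nearly neutral user who leans only slightly one way.
    prefs = {story: 1.0 for story in STORIES}
    prefs["liberal op-ed"] = 1.05
    for _ in range(days):
        top_story = rank_feed(prefs)[0]   # the feed privileges the current favorite...
        prefs[top_story] += 0.1           # ...and engagement feeds back into the ranking
    return prefs

print(simulate())
# After a month, the slight initial lean has snowballed into a heavily one-sided feed.
```

The point of the toy model is that no malicious intent is required: a simple similarity-plus-engagement loop is enough to narrow a feed over time.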

What began with a hope of bringing the country and the world together has in a little over a decade become one of the most potent sources of division in a deeply divided time.

And on it goes, with each company and technology platform producing its own graveyards full of unintended consequences. Facebook disseminates journalism widely but ends up promoting vacuous and sometimes politically pernicious clickbait. Google works to make information (including the content of books) freely available to all but in the process dismantles the infrastructure that was constructed to make it possible for people to write for a living. Twitter gives a megaphone to everyone who opens an account but ends up amplifying the voice of a demagogue-charlatan above everyone else, helping to propel him all the way to the White House.

Foer ends his book on an optimistic note, offering practical suggestions for pushing back against the ideological and technological influence of Silicon Valley on our lives. Most of them are worthwhile. But the lesson I took from the book is that the challenge we face may defy any simple solution. It’s a product, after all, of the age-old human temptation toward arrogance or pride — only now inflated by the magnitude of our undeniable technological achievements. How difficult it must be for our techno-visionaries to accept that they know far less than they’d like to believe.

Creativity

Ten Habits of Highly Creative People

Scott Barry Kaufman and Carolyn Gregoire explore how to develop creativity as a habit and a style of engaging with the world.

What exactly is creativity? So many of us assume that creativity is something we had as a child but lost, or something reserved for rarefied individuals whom we can only admire from afar.

But science has shown that, in many ways, we are all wired to create. The key is recognizing that creativity is multifaceted—on the level of the brain, personality, and the creative process—and can be displayed in many different ways, from the deeply personal experience of uncovering a new idea or experience to expressing ourselves through words, photos, fashion, and other everyday creations, to the work of renowned artists that transcends the ages.

Read more

 

Music and Creativity

How Music Helps Us Be More Creative

A new study suggests that listening to happy music promotes more divergent thinking—a key element of creativity.

In today’s world, creative thinking is needed more than ever. Not only do many businesses seek creative minds to fill their ranks, but the kinds of complex social problems we face could also use a good dose of creativity.

Luckily, creativity is not reserved for artists and geniuses alone. Modern science suggests that we all have the cognitive capacity to come up with original ideas—something researchers call “divergent thinking.” And we can all select from a series of ideas the one most likely to be successful, which researchers call “convergent thinking.”

Read more…