No, Nobody Is Saving the Music Industry. Or the Culture Industry.

No, Streaming Services Are Not ‘Saving The Music Industry’

Recent reports that streaming is now the ‘biggest money-maker’ for the music biz have prompted hyperbolic claims that Spotify and co have ‘saved the music industry’. In reality, this could not be further from the truth.

Excerpt:

Progressive music that goes against the aesthetics of whatever the mainstream might be at any given point by its very nature does not cater to the whims of a Spotify algorithm. Now that streaming is the industry’s biggest money-maker, it has become the overriding force in music consumption. This dominance will only increase as time goes on, and as a result artists must conform or die if they are to gain anything. There are exceptions, most notably in zeitgeist-seizing movements like grime that are both artistically essential and buoyed by the kind of mass appeal that in effect bypasses the need for a leg-up from the algorithms, but such a lethal combination is rare indeed. Not everything that is great is popular.

Yes, not everything that is great is popular, and not everything that is popular is great! We need human subjectivity, and that is something no algorithm, however complex, can supply.

If streaming platforms keep gaining influence over how music is curated and marketed, while the revenue for those not mundane enough to fit their algorithms remains so pitifully minute, it is not hard to envisage the blandest landscape the industry has ever seen. Great music will continue being made, of course, but getting that music out to people outside the algorithms will be much harder. “I hope I am wrong,” says Reeder. “I hope the revenue from streaming does improve, because if it doesn’t, well, who knows how positive the future will be for the majority of music makers and labels out there?”

This is true not only of music, but also of literature, poetry, video, and photography.

Don’t Worry? …Be Happy?

Americans are depressed and suicidal because something is wrong with our culture

Excerpt from an article examining the recent celebrity suicides of Anthony Bourdain and Kate Spade. It gets to the heart of what makes us fulfilled as human beings: not fame and fortune, but ascending Maslow’s hierarchy in our own ways:

…why are so many more Americans getting to this level of emotional despair than in the past? As journalist Johann Hari wrote in his best-selling book Lost Connections: Uncovering the Real Causes of Depression — and the Unexpected Solutions, the epidemic of depression and despair in the Western World isn’t always caused by our brains. It’s largely caused by key problems in the way we live.

We exist largely disconnected from our extended families, friends and communities — except in the shallow interactions of social media — because we are too busy trying to “make it” without realizing that once we reach that goal, it won’t be enough.

In an interview this year, the comedian and actor Jim Carrey talked about “getting to the place where you have everything everybody has ever desired and realizing you are still unhappy. And that you can still be unhappy is a shock when you have accomplished everything you ever dreamt of and more….”

If only we get that big raise, or a new house, or have children, we will finally be happy. But we won’t. In fact, as Carrey points out, in many ways achieving all your goals provides the opposite of fulfillment: it lays bare the truth that there is nothing you can purchase, possess or achieve that will make you feel fulfilled over the long term.

Rather than pathologizing the despair and emotional suffering that is a rational response to a culture that values people based on ever escalating financial and personal achievements, we should acknowledge that something is very wrong. We should stop telling people who yearn for a deeper meaning in life that they have an illness or need therapy. Instead, we need to help people craft lives that are more meaningful and built on a firmer foundation than personal success.

Full article here.

…to find happiness:

Create – Share – Connect

Art and Algorithms


Excerpt from a review of books in The New Yorker: “Are Liberals on the Wrong Side of History?”

The algorithms of human existence are not like the predictable, repeatable algorithms of a computer, or people would not have a history, and Donald Trump would not be President. In order to erase humanity as a special category—different from animals, on the one hand, and robots, on the other—Harari [author of Sapiens] points to the power of artificial intelligence, and the prospect that it will learn to do everything we can do, but better. Now, that might happen, but it has been predicted for a long time [there’s more to the human than we assume] and the arrival date keeps getting postponed.

The A.I. that Harari fears and admires doesn’t, on inspection, seem quite so smart. He mentions computer-generated haiku, as though they were on a par with those generated by Japanese poets. Even if such poems exist, they can seem plausible only because the computer is programmed to imitate stylistic tics that we have already been instructed to appreciate, something akin to the way the ocean can “create” a Brancusi—making smooth, oblong stones that our previous experience of art has helped us to see as beautiful—rather than to how artists make new styles, which involves breaking the algorithm, not following it.

This argument is relevant not only to the creation of art but also to its appreciation. And the appreciation of art is expressed in reviews, aesthetic judgments, and responses to novelty, as well as in popularity. Algorithms, by contrast, key mostly on proxies for popularity, such as “likes” or sales. We need human aesthetic judgments to support true artistic creation.
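To make that contrast concrete, below is a minimal sketch, in Python, of what ranking purely on popularity proxies looks like. Everything in it (the track data, the signal names, the weights) is a hypothetical illustration rather than any real platform’s formula; the point is only that every input is an engagement count, so aesthetic judgment has no term in the objective.

```python
# A toy ranker that scores tracks purely on popularity proxies.
# All data, field names, and weights are hypothetical illustrations.

tracks = [
    {"title": "Chart Single",   "likes": 90_000, "sales": 40_000, "skips": 5_000},
    {"title": "Grime Cut",      "likes": 30_000, "sales": 12_000, "skips": 1_000},
    {"title": "Progressive LP", "likes": 1_200,  "sales": 800,    "skips": 300},
]

def popularity_score(track):
    # Every term is an engagement count. Reviews, novelty, and
    # aesthetic merit simply have no variable in this objective.
    return 1.0 * track["likes"] + 2.5 * track["sales"] - 4.0 * track["skips"]

# Rank the catalog the way a proxy-driven feed would.
for track in sorted(tracks, key=popularity_score, reverse=True):
    print(f"{track['title']}: {popularity_score(track):,.0f}")
```

However the weights are tuned, a great-but-unpopular record can never win this ranking, because nothing it excels at is measured.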

Who Are We and What Matters?

Google? Facebook? These two firms alone control roughly two-thirds of digital media advertising revenue.

That’s power. That’s knowledge.

But knowledge of what?

Mostly of how to program computers and deploy algorithms to sort through, organize, cluster, rank, and order vast quantities of data. In the case of Facebook, Zuckerberg obviously also understood something simple but important about how human beings might enjoy interacting online. That’s not nothing. Actually, it’s a lot. An enormous amount. But it’s not everything — or anything remotely close to what Silicon Valley’s greatest innovators think it is.

When it comes to human beings — what motivates them, how they interact socially, to what end they organize politically — figures like Page and Zuckerberg know very little. Almost nothing, in fact. And that ignorance has enormous consequences for us all.

From Disruption to Dystopia

Very interesting article by Joel Kotkin, who researches the economics and politics of cities. It portrays a future that resembles feudalism more than free-market democratic capitalism. I’d optimistically venture that there will eventually be a humanist backlash against the dominance of technology.

From Disruption to Dystopia: Silicon Valley Envisions the City of the Future

The unaffordable Bay Area, Google’s new neighborhood ‘built from the internet up,’ and China’s police state each offer glimpses of what the tech giants plan to sell the rest of us.

by Joel Kotkin

The tech oligarchs who already dominate our culture and commerce, manipulate our moods, and shape the behaviors of our children while accumulating capital at a rate unprecedented in at least a century want to fashion our urban future in a way that dramatically extends the reach of the surveillance state already evident in airports and on our phones.

The drive to redesign our cities, however, is not really the end of the agenda of those whom Aldous Huxley described as the top of the “scientific caste system.” The oligarchy has also worked to make our homes, our personal space, “connected” to their monitoring and money machines. This may be a multibillion-dollar market soon, but many who have employed such devices at home—appliances that track our activities and speak to us like loyal servants—find them “creepy,” as they should, given that their daily activities are fed back to enrich the high-tech hive mind. Both the city and the house of the future may owe more to Brave New World than to Better Homes and Gardens.

This is a vision of the urban future in which the tech companies’ own workers and whatever other people with skills the machines haven’t yet replaced are a new class of urban serfs living in small apartments, along with a much larger class of dependent persons living on “income maintenance” and housing or housing subsidies provided by the state. “Bees exist on Earth to pollinate flowers, and maybe humans are here to build the machines,” observes professor Andrew Hudson-Smith, from University College London’s Centre for Advanced Spatial Analysis. “The city will be one big joined-up urban machine, and humans’ role on Earth will be done.”

Read more

Tech Dystopia?

Below are excerpts from a fascinating series of articles in The Guardian (with links). The articles address many of the ways that Internet 2.0 network media models such as Google, Facebook, YouTube, and Instagram are transforming, and in many cases undermining, the foundations of a democratic, humanistic society. These issues motivate us at tuka to design solutions to the great question of life’s meaning.

Personally, I don’t believe this dystopia will come to pass, because humans are quite resilient as a species and eventually our humanist qualities will dominate our biological urges and economic imperatives. We have free will, and ultimately we choose correctly.

Perhaps that is an overly optimistic opinion, but Internet (or Web) 3.0 technology is rewriting the script with applications that reassert human control over the data universe. We will build more humanistic social communities that employ technology, with the emphasis always on the human. We see this now in the growing refusal by tech insiders to surrender to Web 2.0.

See excerpts and comments below.

“If politics is an expression of our human will, on individual and collective levels, then the attention economy is directly undermining the assumptions that democracy rests on.” If Apple, Facebook, Google, Twitter, Instagram, and Snapchat are gradually chipping away at our ability to control our own minds, could there come a point, I ask, at which democracy no longer functions?

‘Fiction is outperforming reality’: how YouTube’s algorithm distorts truth

Paul Lewis, February 2, 2018

There are 1.5 billion YouTube users in the world, which is more than the number of households that own televisions. What they watch is shaped by this algorithm, which skims and ranks billions of videos to identify 20 “up next” clips that are both relevant to a previous video and most likely, statistically speaking, to keep a person hooked on their screen.

Company insiders tell me the algorithm is the single most important engine of YouTube’s growth. In one of the few public explanations of how the formula works – an academic paper that sketches the algorithm’s deep neural networks, crunching a vast pool of data about videos and the people who watch them – YouTube engineers describe it as one of the “largest scale and most sophisticated industrial recommendation systems in existence”.

We see here the power of AI data algorithms to filter content. The Google response has been to “expand the army of human moderators.” That is a necessary method of reasserting human judgment over the network.
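For readers wondering what “skims and ranks billions of videos” looks like structurally, here is a heavily simplified sketch of the two-stage pattern described in the paper cited above: a cheap candidate-generation step narrows the catalog, then a ranking model orders the survivors by predicted watch time. All names, features, and numbers below are hypothetical illustrations, not YouTube’s actual system.

```python
# Toy two-stage recommender in the spirit of the pattern YouTube's
# engineers describe. All names, features, and numbers are hypothetical.

from dataclasses import dataclass

@dataclass
class Video:
    vid: str
    topic: str
    predicted_watch_seconds: float  # stand-in for a learned model's output

CATALOG = [
    Video("v1", "politics", 310.0),
    Video("v2", "politics", 540.0),  # sensational clip: long predicted watch
    Video("v3", "music",    120.0),
    Video("v4", "politics",  95.0),
]

def generate_candidates(current: Video, catalog: list[Video], k: int = 100) -> list[Video]:
    # Stage 1: a cheap filter. Crude topic matching stands in for an
    # approximate nearest-neighbor lookup over learned embeddings.
    return [v for v in catalog if v.topic == current.topic and v.vid != current.vid][:k]

def rank(candidates: list[Video], n: int = 20) -> list[Video]:
    # Stage 2: order candidates by expected watch time. Note the objective:
    # keeping the viewer watching, not informing them.
    return sorted(candidates, key=lambda v: v.predicted_watch_seconds, reverse=True)[:n]

up_next = rank(generate_candidates(CATALOG[0], CATALOG))
print([v.vid for v in up_next])  # ['v2', 'v4'] -- the grabbiest clip wins
```

The structural point survives the simplification: when the final sort key is expected watch time, the most sensational clip wins by construction.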

The article then turns to politics and the electoral influence of disinformation:

Much has been written about Facebook and Twitter’s impact on politics, but in recent months academics have speculated that YouTube’s algorithms may have been instrumental in fuelling disinformation during the 2016 presidential election. “YouTube is the most overlooked story of 2016,” Zeynep Tufekci, a widely respected sociologist and technology critic, tweeted back in October. “Its search and recommender algorithms are misinformation engines.”

Apparently, the sensationalism surrounding the Trump campaign caused YouTube’s algorithms to push more video feeds favorable to Trump and damaging to Hillary Clinton. One doesn’t need to be a partisan to recognize this was probably true for this particular media channel, given a business model that values views above all else.

However, this reality can also be distorted to present a particular conspiracy narrative of its own:

Trump won the electoral college as a result of 80,000 votes spread across three swing states. There were more than 150 million YouTube users in the US. The videos contained in Chaslot’s database of YouTube-recommended election videos were watched, in total, more than 3bn times before the vote in November 2016.

This, unfortunately, cherry-picks statistics about the margin of voting support. What was significant in determining the 2016 election outcome was not 80,000 votes across three states but popular-vote wins in 2,623 of 3,112 counties across the U.S. This 84% share could not be an accident, nor could it be attributed solely to disinformation, Russian or otherwise. The true difference in the election was revealed not by the popular-vote total or the Electoral College count but by the geographical distribution of support. One can argue about which is more critical to democratic governance, but this post is about electronic media content, not political analysis.

The next article further addresses how technology is influencing our individual behaviors.

‘Our minds can be hijacked’: the tech insiders who fear a smartphone dystopia

Paul Lewis, October 6, 2017

Justin Rosenstein had tweaked his laptop’s operating system to block Reddit, banned himself from Snapchat, which he compares to heroin, and imposed limits on his use of Facebook. But even that wasn’t enough. In August, the 34-year-old tech executive took a more radical step to restrict his use of social media and other addictive technologies.

A decade after he stayed up all night coding a prototype of what was then called an “awesome” button, Rosenstein belongs to a small but growing band of Silicon Valley heretics who complain about the rise of the so-called “attention economy”: an internet shaped around the demands of an advertising economy.

The extent of this addiction shows up in research finding that people touch, swipe, or tap their phones 2,617 times a day!

There is growing concern that as well as addicting users, technology is contributing toward so-called “continuous partial attention”, severely limiting people’s ability to focus, and possibly lowering IQ. One recent study showed that the mere presence of smartphones damages cognitive capacity – even when the device is turned off. “Everyone is distracted,” Rosenstein says. “All of the time.”

“The technologies we use have turned into compulsions, if not full-fledged addictions,” Eyal [Nir Eyal, author of Hooked] writes. “It’s the impulse to check a message notification. It’s the pull to visit YouTube, Facebook, or Twitter for just a few minutes, only to find yourself still tapping and scrolling an hour later.” None of this is an accident, he writes. It is all “just as their designers intended”.

Tristan Harris, a former Google employee turned vocal critic of the tech industry, points out: “All of us are jacked into this system. All of our minds can be hijacked. Our choices are not as free as we think they are.”

“I don’t know a more urgent problem than this,” Harris says. “It’s changing our democracy, and it’s changing our ability to have the conversations and relationships that we want with each other.” 

Harris believes that tech companies never deliberately set out to make their products addictive. They were responding to the incentives of an advertising economy, experimenting with techniques that might capture people’s attention, even stumbling across highly effective design by accident.

“Smartphones are useful tools,” says Loren Brichter, a product designer. “But they’re addictive. Pull-to-refresh is addictive. Twitter is addictive. These are not good things. When I was working on them, it was not something I was mature enough to think about.”

The two inventors listed on Apple’s patent for “managing notification connections and displaying icon badges” are Justin Santamaria and Chris Marcellino. A few years ago Marcellino, 33, left the Bay Area and is now in the final stages of retraining to be a neurosurgeon. He stresses he is no expert on addiction but says he has picked up enough in his medical training to know that technologies can affect the same neurological pathways as gambling and drug use. “These are the same circuits that make people seek out food, comfort, heat, sex,” he says.

“The people who run Facebook and Google are good people, whose well-intentioned strategies have led to horrific unintended consequences,” he says. “The problem is that there is nothing the companies can do to address the harm unless they abandon their current advertising models.”

But how can Google and Facebook be forced to abandon the business models that have transformed them into two of the most profitable companies on the planet?

This is exactly the problem: they really can’t. Newer technology, such as distributed social networks secured by blockchains, must be deployed to disrupt the dysfunctional incumbents. New business models will be designed to support this disruption. Human behavioral instincts are crucial to successful new designs that make us more human, rather than less.
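As a gesture at what a blockchain-secured, user-controlled network could mean at the smallest scale, here is a sketch of a tamper-evident, user-owned content log in Python. It is purely illustrative, a toy under stated assumptions rather than tuka’s design or any real protocol; it demonstrates only that hash-chaining lets the user, not the platform, verify the integrity of their own record.

```python
# Minimal sketch of a user-owned, tamper-evident content log.
# Purely illustrative; not tuka's design or any real network's protocol.

import hashlib
import json
import time

def record_hash(record: dict) -> str:
    # Canonical JSON so the same record always hashes identically.
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class UserLog:
    """An append-only chain of posts owned by a single user. Each entry
    commits to the previous entry's hash, so later tampering is detectable."""

    def __init__(self, user_id: str):
        self.user_id = user_id
        self.entries: list[dict] = []

    def append(self, content: str) -> dict:
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        record = {"user": self.user_id, "content": content,
                  "ts": time.time(), "prev": prev}
        record["hash"] = record_hash(record)  # no "hash" key present yet
        self.entries.append(record)
        return record

    def verify(self) -> bool:
        # Recompute every hash and re-check every link to its predecessor.
        prev = "genesis"
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if entry["prev"] != prev or record_hash(body) != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

log = UserLog("alice")
log.append("first post")
log.append("second post")
print(log.verify())  # True; altering any stored byte flips this to False
```

A real network would add signatures, replication, and consensus on top; the sketch shows only why an append-only, hash-linked record removes the need to trust a central platform with one’s history.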

James Williams does not believe talk of dystopia is far-fetched. …He says his epiphany came a few years ago when he noticed he was surrounded by technology that was inhibiting him from concentrating on the things he wanted to focus on. “It was that kind of individual, existential realization: what’s going on?” he says. “Isn’t technology supposed to be doing the complete opposite of this?”

The question we ask at tuka is: “What do people really want from technology and social interaction? Distraction or meaning? And how do they find meaning?” Our answer is self-expression through creativity, sharing that work, and connecting with communities.

Williams and Harris left Google around the same time and co-founded an advocacy group, Time Well Spent, that seeks to build public momentum for a change in the way big tech companies think about design. 

“Eighty-seven percent of people wake up and go to sleep with their smartphones,” Williams says. The entire world now has a new prism through which to understand politics, and he worries the consequences are profound.

The same forces that led tech firms to hook users with design tricks, he says, also encourage those companies to depict the world in a way that makes for compulsive, irresistible viewing. “The attention economy incentivizes the design of technologies that grab our attention,” he says. “In so doing, it privileges our impulses over our intentions.”

That means privileging what is sensational over what is nuanced, appealing to emotion, anger, and outrage. The news media is increasingly working in service to tech companies, Williams adds, and must play by the rules of the attention economy to “sensationalize, bait and entertain in order to survive”.

In the wake of Donald Trump’s stunning electoral victory, many were quick to question the role of so-called “fake news” on Facebook, Russian-created Twitter bots or the data-centric targeting efforts that companies such as Cambridge Analytica used to sway voters. But Williams sees those factors as symptoms of a deeper problem.

It is not just shady or bad actors who were exploiting the internet to change public opinion. The attention economy itself is set up to promote a phenomenon like Trump, who is masterly at grabbing and retaining the attention of supporters and critics alike, often by exploiting or creating outrage.

Orwellian-style coercion is less of a threat to democracy than the more subtle power of psychological manipulation, and “man’s almost infinite appetite for distractions”.

“The dynamics of the attention economy are structurally set up to undermine the human will,” Williams says. “If politics is an expression of our human will, on individual and collective levels, then the attention economy is directly undermining the assumptions that democracy rests on.” If Apple, Facebook, Google, Twitter, Instagram, and Snapchat are gradually chipping away at our ability to control our own minds, could there come a point, I ask, at which democracy no longer functions?

Our politics will survive, and democracy is only one form of governance. The bigger question is how human civilization survives if our behavior becomes self-destructive and meaningless.

The Genius? of Silicon Valley?


Here’s a review of Franklin Foer’s new book, World Without Mind: The Existential Threat of Big Tech. What we’re seeing here is the slow breaking of the next wave of tech, from Web 2.0 to Web 3.0, where the users take back control and are treated as more than mindless sources of data. It took a long time to transition the world out of feudalism, and we’re still in the very early stages of throwing off the yoke of data exploitation and tyranny. Algorithms cannot guide humanity.

The genius and stupidity of Silicon Valley

Knowledge is a tricky thing.

Acquiring and deploying it to change the world through technological innovations can inspire great confidence and self-certainty in the person who possesses the knowledge. And yet, that confidence and self-certainty are nearly always misplaced — a product of the knower presuming that his expert knowledge of one aspect of reality applies equally to others. That’s one powerful reason why myths about the place of knowledge in human life so often teach lessons about hubris and its dire social, cultural, and political consequences.

Franklin Foer’s important new book, World Without Mind: The Existential Threat of Big Tech, is best seen as a modern-day journalistic retelling of one of those old cautionary tales about human folly. Though he doesn’t describe his aim in quite this way, Foer sets out to expose the foolishness and arrogance that permeate the culture of Silicon Valley and that, through its wondrous technological innovations, threaten unintentionally to wreak civilizational havoc on us all.

It’s undeniable that Silicon Valley’s greatest innovators know an awful lot. Google is an incredibly powerful tool for organizing information — one to which no previous generation of human beings could have imagined having easy and free access, let alone devising from scratch, as Larry Page and Sergey Brin managed to do. The same goes for Facebook, which Mark Zuckerberg famously created in his Harvard dorm room and which has become a global powerhouse in a little more than a decade, turning him into one of the world’s richest men and revolutionizing the way some two billion people around the world consume information and interact with each other.

That’s power. That’s knowledge.

But knowledge of what?

Mostly of how to program computers and deploy algorithms to sort through, organize, cluster, rank, and order vast quantities of data. In the case of Facebook, Zuckerberg obviously also understood something simple but important about how human beings might enjoy interacting online. That’s not nothing. Actually, it’s a lot. An enormous amount. But it’s not everything — or anything remotely close to what Silicon Valley’s greatest innovators think it is.

When it comes to human beings — what motivates them, how they interact socially, to what end they organize politically — figures like Page and Zuckerberg know very little. Almost nothing, in fact. And that ignorance has enormous consequences for us all.

You can see the terrible problems of this hubris in the enormously sweeping ambitions of the titans of technology. Page, for instance, seeks to achieve immortality.

Foer explains how Page absorbed ideas from countercultural guru Stewart Brand, futurist Ray Kurzweil, and others to devise a quasi-eschatological vision for Google as a laboratory for artificial intelligence that might one day make it possible for humanity to transcend human limitations altogether, eliminating scarcity, merging with machines, and finally triumphing over mortality itself. Foer traces the roots of this utopianism back to Descartes’ model of human subjectivity, which pictures a spiritual mind encased within and controlling an (in principle, separable) mechanical body. If this is an accurate representation of the mind’s relation to its bodily host, then why not seek to develop technology that would make it possible to deposit this mind, like so much software, into a much more durable and infinitely repairable and improvable computer? In the process, these devices would be transformed into what Kurzweil has dubbed “spiritual machines” that could, in principle, enable individuals to live on and preserve their identities forever.

The problem with such utopian visions and extravagant hopes is not that they will outstrip our technological prowess. For all I know, the company that almost instantly gathers and ranks information from billions of websites for roughly 40,000 searches every second will some day, perhaps soon, develop the technical capacity to transfer the content of a human mind into a computer network.

The problem with such a goal is that in succeeding it will inevitably fail. As anyone who reflects on the issue with any care, depth, and rigor comes to understand, the Cartesian vision of the mind is a fiction, a fairy tale. Our experience of being alive, of being-in-the-world, is thoroughly permeated and shaped by the sensations, needs, desires, and fears that come to us by way of our bodies, just as our opinions of right and wrong, better and worse, noble and base, and just and unjust are formed by rudimentary reflections on our own good, which is always wrapped up with our perception of the good of our physical bodies.

Even if it were possible to transfer our minds — our memories, the content of our thoughts — into a machine, the indelible texture of conscious human experience would be flattened beyond recognition. Without a body and its needs, desires, vulnerabilities, and fear of injury and death, we would no longer experience a world of meaning, gravity, concern, and care — for ourselves or others. Which also means that Page’s own relentless drive to innovate technologically — which may well be the single attribute that most distinguishes him as an individual — would vanish without a trace the moment he realized his goal of using technological innovations to achieve immortality.

An immortal Larry Page would no longer be Larry Page.

Zuckerberg’s very different effort to overcome human limits displays a similar obliviousness to the character of human experience, in this case political life — and it ends with a similar paradox.

Rather than simply providing Facebook’s users with a platform for socializing and sharing photos, Zuckerberg’s company has developed intricate algorithms for distributing information in each user’s “news feed,” turning it into a “personalized newspaper,” with the content (including advertisements) precisely calibrated to his or her particular interests, tastes, opinions, and commitments. The idea was to build community and bring people together through the sharing and dissemination of information. The result has been close to the opposite.

As Facebook’s algorithms have become more sophisticated, they have gotten better and better at giving users information that resembles information they have previously liked or shared with their friends. That has produced an astonishing degree of reinforcement of pre-existing habits and opinions. If you’re a liberal, you’re now likely only to see liberal opinions on Facebook. If you’re conservative, you’ll only see conservative opinions. And if you’re inclined to give credence to conspiracy theories, you’ll see plenty of those.

And maybe not just if you favor conspiracy theories. As we’ve learned since the 2016 election, it’s possible for outside actors (like foreign intelligence services, for example) to game the system by promoting or sponsoring fake or inflammatory stories that get disseminated and promoted among like-minded or sympathetic segments of the electorate.

Facebook may be the most effective echo chamber ever devised, precisely because there’s potentially a personalized chamber for every single person on the planet.

What began with a hope of bringing the country and the world together has in a little over a decade become one of the most potent sources of division in a deeply divided time.

And on it goes, with each company and technology platform producing its own graveyards full of unintended consequences. Facebook disseminates journalism widely but ends up promoting vacuous and sometimes politically pernicious clickbait. Google works to make information (including the content of books) freely available to all but in the process dismantles the infrastructure that was constructed to make it possible for people to write for a living. Twitter gives a megaphone to everyone who opens an account but ends up amplifying the voice of a demagogue-charlatan above everyone else, helping to propel him all the way to the White House.

Foer ends his book on an optimistic note, offering practical suggestions for pushing back against the ideological and technological influence of Silicon Valley on our lives. Most of them are worthwhile. But the lesson I took from the book is that the challenge we face may defy any simple solution. It’s a product, after all, of the age-old human temptation toward arrogance or pride — only now inflated by the magnitude of our undeniable technological achievements. How difficult it must be for our techno-visionaries to accept that they know far less than they’d like to believe.

Ten Habits of Highly Creative People


Scott Barry Kaufman and Carolyn Gregoire explore how to develop creativity as a habit and a style of engaging with the world.

What exactly is creativity? So many of us assume that creativity is something we had as a child but lost, or something reserved for rarefied individuals whom we can only admire from afar.

But science has shown that, in many ways, we are all wired to create. The key is recognizing that creativity is multifaceted—on the level of the brain, personality, and the creative process—and can be displayed in many different ways, from the deeply personal experience of uncovering a new idea or experience to expressing ourselves through words, photos, fashion, and other everyday creations, to the work of renowned artists that transcends the ages.

Read more