A video to take to heart:
The digital virus has no vaccine, but hopefully a cure…
Literacy altered the human brain, making it “refit some of its existing neuronal groups” and “form newly recycled circuits.” The brain had to change because the innate brain can’t read. It responds to what it is exposed to if exposure happens often, for a long period. Literacy develops through practice—through labor that compels the development of revised brain functions. The more you read, the more your brain adapts. It is a “plastic” organ.
The Information Age is different in many ways, above all in its economics: digital goods can be reproduced and distributed at near-zero marginal cost. This is why digital formats have roiled physical formats across all the creative industries. We need to think outside the box.
Interesting article from an academic expert (my annotations in brackets):
a digital platform is either large or dead.
Why economics must go digital
One of the biggest concerns about today’s tech giants is their market power. At least outside China, Google, Facebook, and Amazon dominate online search, social media, and online retail, respectively. And yet economists have largely failed to address these concerns in a coherent way. To help governments and regulators as they struggle to address this market concentration, we must make economics itself more relevant to the digital age.
Digital markets often become highly concentrated, with one dominant firm, because larger players enjoy significant returns to scale. For example, digital platforms incur large upfront development costs but benefit from low marginal costs once the software is written. They gain from network effects, whereby the more users a platform has, the more all users benefit. And data generation plays a self-reinforcing role: more data improves the service, which brings in more users, which generates more data. To put it bluntly, a digital platform is either large or dead.
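The "large or dead" dynamic can be made concrete with a toy simulation of the feedback loop just described: more users generate more data, which improves the service, which attracts more users. Everything here is an invented illustration (the `simulate` function, the parameter values, and the use of market share as a proxy for data-driven quality are all my assumptions, not anything from the article):

```python
def simulate(share_a=0.55, share_b=0.45, steps=50, strength=0.1):
    """Two platforms split a market. Each period, users migrate toward
    the platform whose data-driven quality is higher; quality is proxied
    crudely by current market share. Returns the final shares."""
    for _ in range(steps):
        # migration is proportional to the quality (share) gap
        gap = share_a - share_b
        share_a = min(1.0, max(0.0, share_a + strength * gap))
        share_b = 1.0 - share_a
    return share_a, share_b

# Even a modest initial lead compounds into total dominance:
print(simulate(0.55, 0.45))  # → (1.0, 0.0)
```

Under these assumptions any initial advantage snowballs: the deviation from a 50/50 split grows geometrically each period, so the smaller platform is driven to zero — large or dead.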
As several recent reports (including one to which I contributed) have pointed out, the digital economy poses a problem for competition policy. Competition is vital for boosting productivity and long-term growth because it drives out inefficient producers and stimulates innovation. Yet how can this happen when there are such dominant players?
Today’s digital behemoths provide services that people want: one recent study estimated that consumers value online search alone at a level equivalent to about half of US median income. Economists, therefore, need to update their toolkit. Rather than assessing likely short-term trends in specific digital markets, they need to be able to estimate the potential long-term costs implied by the inability of a new rival with better technology or service to unseat the incumbent platform.
This is no easy task because there is no standard methodology for estimating uncertain, non-linear futures. Economists even disagree on how to measure static consumer valuations of free digital goods such as online search and social media. And although the idea that competition operates dynamically through firms entering and exiting the market dates back at least to Joseph Schumpeter, the standard approach is still to look at competition among similar companies producing similar goods at a point in time.
The characteristics of digital technology pose a fundamental challenge to the entire discipline. As I pointed out more than 20 years ago, the digital economy is “weightless.” Moreover, many digital goods are non-rival “public goods”: you can use software code without stopping others from doing so, whereas only one person can wear the same pair of shoes. And they require a substantial degree of trust to have any value: we need to experience them to know whether they work, and social influence is often crucial to their diffusion.
Yet standard economics generally assumes none of these things. Economists will bridle at this statement, rightly pointing to models that accommodate some features of the digital economy. But economists’ benchmark mental world – particularly their instinctive framework for thinking about public policy questions – is one where competition is static, preferences are fixed and individual, rival goods are the norm, and so on.
Starting from there leads inexorably to presuming the “free market” paradigm. As any applied economist knows, this paradigm is named for a mythical entity. But this knowledge somehow does not give rise to an alternative presumption, say, that governments should supply certain products.
This instinct may be changing. One straw in the wind is the call by Jim O’Neill, a former Goldman Sachs economist who now heads the Royal Institute of International Affairs (Chatham House), for public research and production of new antibiotics. Having led a review of the spread of anti-microbial resistance – which will kill millions of people if new drugs are not discovered – O’Neill is dismayed by the lack of progress made by private pharmaceutical companies.
Drug discovery is an information industry, and information is a non-rival public good which the private sector, not surprisingly, is under-supplying. [Yes – this is what intellectual property rights/copyrights/patents are all about. The problem is attributing the value created by the sharing of information. We may be able to solve that with blockchain ledgers.] That conclusion is not remotely outlandish in terms of economic analysis. And yet the idea of nationalizing part of the pharmaceutical industry is outlandish from the perspective of the prevailing economic-policy paradigm.
Or consider the issue of data, which has lately greatly exercised policymakers. Should data collection by digital firms be further regulated? Should individuals be paid for providing personal data? [Yes, they should. Personal data is as proprietary as personal labor and personal ideas. Making sure users get paid for their data changes the business models of these natural monopolies.] And if a sensor in a smart-city environment records that I walk past it, is that my data, too? The standard economic framework of individual choices made independently of one another, with no externalities, and monetary exchange for the transfer of private property, offers no help in answering these questions. [Yes, because we don’t yet assign value to shared information. We rely on the property rights of tangible assets.]
Economic researchers are not blameless when it comes to inadequate policy decisions. We teach economics to people who go out into the world of policy and business, and our research shapes the broader intellectual climate. The onus now is on academics to establish a benchmark approach to the digital economy and to create a set of applied methods and tools that legislators, competition authorities, and other regulators can use.
Mainstream economics has largely failed to keep up with the rapid pace of digital transformation, and it is struggling to find practical ways to address the growing power of dominant tech companies. If the discipline wants to remain relevant, then it must rethink some of its basic assumptions.
The following article caught my attention. It seems to suggest that millennials and Gen Zers have just become more willing than earlier generations to seek out therapy for mental health issues. But 50-75% quit their jobs due to mental health? That seems more like an epidemic than social enlightenment and the fact that it strikes deepest among certain age cohorts is a red flag. I would suggest that other research studies into happiness, fulfillment, and health have come up with different factors. (My book The Ultimate Killer App: How Technology Succeeds presents most of this research with citation references.)
First, most normal (i.e., non-medical) psychological depressive states can be traced to a disconnect between expectations and reality. We all experience this as disappointment, but an over-emphasis on relative status among our peers can be a strong catalyst for psychological distress. Certainly social media has made this relative status more salient in our daily lives: “Gee, our friends on Facebook seem to be living much more exciting and rewarding lives!!!” We’ve even coined a term for this distress: FOMO (fear of missing out). In pre-mass-media days, few people knew they were missing out on the Vanderbilt or Rockefeller lifestyle; these were more like Hollywood fantasies. But social media has brought such status comparisons into our daily lives with real people we know. How many young workers flipping burgers as a stepping stone toward a career can last more than a month of this humiliation while their friends are playing ping-pong at Google? Apparently not many.
An older generation would reply that work is not supposed to be fun; that’s why it’s called work. But I think the future offers us better solutions if we are cognizant enough of the problem to seek them out. First, healthy humans at some point realize that relative status matters not a whit; nobody really cares about how great your life is, except you and perhaps your mom.
The second point that psychological research reveals is that money and wealth offer a very poor representation of true status and self-actualization. Instead we should be looking to our creative and social instincts. This is the major thesis of The Ultimate Killer App: what we truly desire to be happy and fulfilled is to explore our creativity and share it with like-minded others to establish robust social connections. Having a child and a family helps satisfy these most primal needs. It really comes down to Maslow’s Hierarchy of Needs and this is the foundational idea of tuka, a creativity sharing social network platform.
Young people are quitting their jobs in droves. Here’s why.
by Megan Henney
Young people are spearheading mental health awareness at the workplace.
About half of millennials and 75 percent of Gen Zers have quit their jobs for mental health reasons, according to a new study conducted by Mind Share Partners, SAP, and Qualtrics and published in the Harvard Business Review.
That’s compared to just 20 percent of respondents overall who said they’ve voluntarily left a job in order to prioritize their mental health — emblematic of a “shift in generational awareness,” the authors of the report, Kelly Greenwood, Vivek Bapat and Mike Maughan, wrote. For baby boomers, the number was the lowest, with less than 10 percent quitting a job for mental-health purposes.
It should come as no surprise that younger generations are paving the way for the de-stigmatization of mental health. A Wall Street Journal article published in March labeled millennials the “therapy generation,” as today’s 20- and 30-somethings are more likely to turn to therapy, and with fewer reservations, than young people in previous eras did.
A 2017 report from the Center for Collegiate Mental Health at Penn State University found that, based on data from 147 colleges and universities, the number of students seeking mental-health help increased at five times the rate of new students starting college from 2011 to 2016. And a Blue Cross Blue Shield study published in 2018 revealed that major depression diagnoses surged by 44 percent among millennials from 2013 to 2016.
Increasingly, employees (about 86 percent) want their company to prioritize mental health. Despite that — and the fact that mental health conditions result in a $16.8 billion loss in employee productivity — the report found that companies are still not doing enough to break down the stigma, resulting in a lack of identification in workers who may have a mental health condition. Up to 80 percent of individuals will manage a mental health condition at one point in their lifetime, according to the study.
Of course, sometimes employees are unaware of the different resources offered at their organizations, or are afraid of retribution if they elect to use them. In the study, millennials, ages 23 to 38, were 63 percent more likely than baby boomers, 55 to 73, to know the proper procedure for seeking mental health support from the company.
The study was based on responses collected from 1,500 U.S. adults.
They don’t. They just don’t hear it enough in the right context!
This is the problem tuka is trying to solve: to recreate that music sharing network you had when you were in high school and college!
Why do old people hate new music?
by Frank T. McAndrew
When I was a teenager, my dad wasn’t terribly interested in the music I liked. To him, it just sounded like “a lot of noise,” while he regularly referred to the music he listened to as “beautiful.”
This attitude persisted throughout his life. Even when he was in his 80s, he once turned to me during a TV commercial featuring a 50-year-old Beatles tune and said, “You know, I just don’t like today’s music.”
It turns out that my father isn’t alone.
As I’ve grown older, I often hear people my age say things like “they just don’t make good music like they used to.”
Why does this happen?
Luckily, my background as a psychologist has given me some insights into this puzzle.
We know that musical tastes begin to crystallize as early as age 13 or 14. By the time we’re in our early 20s, these tastes get locked into place pretty firmly.
In fact, studies have found that by the time we turn 33, most of us have stopped listening to new music. Meanwhile, popular songs released when you’re in your early teens are likely to remain quite popular among your age group for the rest of your life.
There could be a biological explanation for this. There’s evidence that the brain’s ability to make subtle distinctions between different chords, rhythms and melodies gets worse with age. So to older people, newer, less familiar songs might all “sound the same.”
But I believe there are some simpler reasons for older people’s aversion to newer music. One of the most researched laws of social psychology is something called the “mere exposure effect.” In a nutshell, it means that the more we’re exposed to something, the more we tend to like it.
This happens with people we know, the advertisements we see and, yes, the songs we listen to.
When you’re in your early teens, you probably spend a fair amount of time listening to music or watching music videos. Your favorite songs and artists become familiar, comforting parts of your routine.
For many people over 30, job and family obligations increase, so there’s less time to spend discovering new music. Instead, many will simply listen to old, familiar favorites from that period of their lives when they had more free time.
Of course, those teen years weren’t necessarily carefree. They’re famously confusing, which is why so many TV shows and movies – from “Glee” to “Love, Simon” to “Eighth Grade” – revolve around high school turmoil.
Psychology research has shown that the emotions we experience as teens seem more intense than those that come later. We also know that intense emotions are associated with stronger memories and preferences. All of this might explain why the songs we listen to during this period become so memorable and beloved.
So there’s nothing wrong with your parents because they don’t like your music. In a way, it’s all part of the natural order of things.
At the same time, I can say from personal experience that I developed a fondness for the music I heard my own children play when they were teenagers. So it’s certainly not impossible to get your parents on board with Billie Eilish and Lil Nas X.
I reprint this article in full because this is exactly what we’ve been arguing at tuka all along. Serious, dedicated writers can no longer afford to write, and so it goes with other creative professions such as music, photography, and video. Collapsing creative incomes mean the dominance of mediocrity and of purely commercial values in artistic expression. A good example is the proliferation of reality TV. It all becomes rather tired and boring, and audiences turn to better options. It’s a vicious downward spiral.
Crashing author earnings ‘threaten future of American literature’
January 8, 2019
A major survey of American authors has uncovered a crash in author earnings described as “a crisis of epic proportions” – particularly for full-time literary writers, who are “on the verge of extinction”.
Surveying its membership and that of 14 other writers’ organisations in what it said was the largest survey of US authors’ earnings ever conducted, the Authors Guild reported that the median income from writing-related work fell to a historic low in 2017 at $6,080 (£4,760), down 42% from 2009. [Yeah, who survives on $6K a year?]
Writers of literary fiction are particularly affected, said the Authors Guild, with those authors experiencing the biggest recent decline in writing-related earnings – down 43% since 2013. Isolating book-related income, the decline was even steeper, down to $3,100 in 2017 – more than 50% down on 2009’s median of $6,250. In total, 5,027 authors provided detailed responses to the survey.
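The quoted declines can be checked with back-of-the-envelope arithmetic (a sketch; the variable names are mine, and the implied 2009 writing-related median is an inference from the stated 42% drop, not a figure from the survey):

```python
# Book-related median income figures quoted above (Authors Guild survey)
median_2009 = 6250  # 2009 book-related median, in dollars
median_2017 = 3100  # 2017 book-related median, in dollars

decline = (median_2009 - median_2017) / median_2009
print(f"decline in book income: {decline:.1%}")
# → decline in book income: 50.4%  (consistent with "more than 50% down")

# The $6,080 writing-related median for 2017 is described as "down 42%
# from 2009", which implies a 2009 writing-related median of roughly:
implied_2009 = 6080 / (1 - 0.42)
print(f"implied 2009 writing-related median: ${implied_2009:,.0f}")
# → implied 2009 writing-related median: $10,483
```

So even before the crash, the typical surveyed author was earning only about $10,500 a year from writing-related work.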
The Authors Guild said the reduction in earnings for literary writers “raises serious concerns about the future of American literature – books that not only teach, inspire and elicit empathy in readers, but help define who Americans are and how the US is perceived by the world”.
“When you impoverish a nation’s authors, you impoverish its readers,” said Authors Guild president James Gleick. Vice-president Richard Russo added that “there was a time in America, not so very long ago, that dedicated, talented fiction and non-fiction writers who put in the time and learned the craft could make a living doing what they did best, while contributing enormously to American knowledge, culture and the arts. That is no longer the case for most authors, especially those trying to start careers.”
Author TJ Stiles said: “Poverty is a form of censorship. That’s because creation costs. Writing requires resources, and it imposes opportunity costs. Limiting writing to the financially independent … punishes authors based on their lack of wealth and income.”
The survey found that even those who consider themselves to be full-time writers are forced to hold down multiple jobs to earn enough money to survive. Just 21% of full-time published authors derived 100% of their income from their books in 2017, with the need to focus on other avenues for income meaning that literary authors are writing and publishing books less often.
“It takes writers longer to research and write books, since they have to do it between other money-earning ventures,” said the Guild, describing the situation as “a crisis of epic proportions”.
“The quality of books written by authors holding down other jobs may be affected since their attention is divided and writing is often pushed to what spare free time is left,” it added.
The one bright point in the survey was for self-published writers, who were the only group to experience a significant increase in earnings – up 95% in book-related income between 2013 and 2017, with the number of authors self-publishing up by 72% since 2013. But the Guild pointed out that self-published authors still earned 58% less than traditionally published peers in 2017. [Self-publishing is the best financial strategy, but how do we separate hobbyists from serious writers in a sea of content?]
The Authors Guild blamed the crisis on the “growing dominance of Amazon”, which it said forced publishers to accept narrower margins and then pass their losses on to authors, on publishers’ focus on blockbuster writers at the expense of lesser-known names, and on a 25% royalty rate for ebooks.
“Amazon, but also Google, Facebook and every other company getting into the content business, devalue what we produce to lower their costs for content distribution, and then take an unfair share of the profits from what remains for delivering that reduced product,” said Russo. “We get that they like to move fast and break things, but it’s no longer in their own interest to break us. If even the most talented of authors can no longer afford to write, to create, who’s going to provide the content?” [What we have discovered is that the big digital distributors like Amazon and Apple don’t really care about the intrinsic value of the content they distribute. More is better and cheaper.]
The survey follows similar research conducted in the UK in 2018, which found that median earnings for professional writers had fallen by 42% since 2005 to under £10,500 – well below the minimum wage of £15,269.
As we’ve argued previously, here, here, and here, algorithms are no magic wand for sorting subjective content, artistic or otherwise. Here the top brass at Apple admits to the fact in a criticism of one of its competitors, Spotify.
Two takeaways from Mr. Cook’s argument. One, he claims that Apple uses the human creativity of its users to create playlists, but tuka uses its entire network of users to reward human curation of all content. Yeah, web 3.0 is about the human, not the machine.
Second, the race among the streamers (Apple Music, Amazon Music, Google Play, Spotify, Pandora, etc.) is another loss-leading attempt to build out a user base in order to monetize the data flow. In other words, streaming doesn’t pay unless you can monetize the network in some other way. As I suspected, the pricing model of streaming is likely financially unsustainable in the long run, as the true cost of streaming content is several times what consumers are now paying. Direct ownership of content may be far more economical than renting it without ad support.
Apple CEO Tim Cook recently revealed that the company’s streaming music service, Apple Music, has surpassed Spotify in subscribers in several markets, including the US and Canada.
Instead of gloating, he humbly downplayed the achievement.
“The key thing in music is not the competition between the companies that are providing music, the real challenge is to grow the market. If we put our emphasis on growing the market, which we’re doing, we’ll be the beneficiaries of that, as will others.”
In a recently published article with monthly business magazine Fast Company, Cook revealed his true feelings about his streaming music rivals. Taking a clear swipe at Spotify, he said,
“We worry about the humanity being drained out of music, about it becoming a bits-and-bytes kind of world instead of the art and craft.”
Nice article on Medium:
I would add that the major problems for artists in the digital age stem from the explosion of new content supply. This drives prices down and the search costs of discovery up. The market failure is that artists can’t find their audiences and consumers can’t find the content they desire. For poets, this means finding an audience, not necessarily to sell poetry; what matters more is finding readers and appreciators of their work.
Large centralized network servers based on algorithms can’t solve this problem without commoditizing content and delivering the most popular but mundane content churned out by those metrics.
We need to empower the human by connecting the creative.
Recent reports that streaming is now the ‘biggest money-maker’ for the music biz have prompted hyperbolic claims that Spotify and co have ‘saved the music industry’. In reality, this could not be further from the truth.
Progressive music that goes against the aesthetics of whatever the mainstream might be at any given point by its very nature does not cater to the whims of a Spotify algorithm. Now that streaming is the industry’s biggest money-maker it has become the overriding force in music consumption. This dominance will only increase as time goes on, and for artists to gain anything as a result requires them to conform or die. There are exceptions, most notably in zeitgeist-seizing movements like grime that are both artistically essential and buoyed by the kind of mass appeal that in effect bypasses the need for a leg-up from the algorithms, but such a lethal combination is rare indeed. Not everything that is great is popular.
Yes, not everything that is great is popular, and not everything that is popular is great! We need human subjectivity, and that’s more complicated than a complex algorithm.
If streaming platforms keep gaining more and more influence over how music is curated and marketed by those in charge, while the revenue for those not mundane enough to fit their algorithms remains so pitifully minute, it is not impossible to envisage the blandest landscape the industry has ever seen. Great music will continue being made, of course, but getting that music out to people outside of the algorithms will be so much harder. “I hope I am wrong,” says Reeder. “I hope the revenue from streaming does improve, because if it doesn’t, well, who knows how positive the future will be for the majority of music makers and labels out there?”
This is not only the case for music, but for literature, poetry, video, and photography.
Some more bad news for Facebook. The following two news bites explain why. At a deeper level, we must ask why Facebook users are disengaging. I suspect it’s due to two factors: one, FB’s expressed desire to please users by emphasizing friends and family in timeline feeds; and two, the fact that most of FB’s timeline content outside of friends and family is fairly tedious and distracting. Both factors greatly undercut the ad-pushing revenue model that FB depends on. One could feel this a year or more ago, when ads grew like kudzu on our timeline feeds.
The solution is to de-emphasize the monetization of meaningless connections and focus on meaningful connections. That can only come through peer-to-peer sharing of meaningful content. Stories is one way to engage our creativity. Many others exist. See tuka.
Facebook had a terrible, horrible, no good, very bad day.
After the company’s quarterly earnings call with investors, FB’s stock price dropped ~20% in after-hours trading. Over $100B in value disappeared in an instant after FB announced disappointing revenue numbers and user growth.
Some context: that’s comparable to the entirety of General Motors, Ford and Target… combined.
Why did the stock tank? A perfect storm hit one of Facebook’s core features, the News Feed:
- Less “viral” clickbait in the feed. Facebook has committed to optimizing for “time well-spent” in the app, not overall engagement. While this shift made for a better experience for users, FB can’t show users as many ads as before.
- Less feed personalization. In the wake of the Cambridge Analytica scandal and recent GDPR regulations, Facebook revamped its data usage policies and privacy controls. These changes hurt FB’s ability to charge companies big bucks to specifically target ads.
But the most interesting change to the News Feed: the rollout of Stories, a well-received but not-well-monetized feature that could change how most people use Facebook and its products altogether.
Stories might actually break Facebook
Stories – tappable, full-screen photos and videos – are replacing news feeds everywhere. The format was originally pioneered by Snapchat. In fact, right after Snapchat launched stories in 2013, Facebook tried to acquire them for $3B.
Facebook worked around the failed acquisition by copying Stories inside Instagram (and then in Messenger… and then in Facebook itself).
Instagram stories took off, with 250M+ users engaging with the feature less than a year after its launch. That’s over 50% of Instagram users… and close to double the number of Snapchat daily active users.
Unfortunately, the success of Stories might shoot Facebook’s ad revenues in the foot. Companies are still figuring out how to build the ad units – engaging, vertical video ads that users won’t immediately tap past. Rest assured that #content creators everywhere are working to make Story ads profitable. [Blogger note: interesting that people still think content is only worth what it can shake from the money tree (i.e., commercial advertising).]