Q: What do musicians do?
A: Create and share their art!
Q: How does a musician make money?
Hint: It’s all about the data.
The argument here is really about product flow vs. art. According to Spotify, what matters to listeners is not the quality of artistic expression but the product flow. Yet an oversupply of crap is still, well, crap. Spotify needs data, not music: it wants to use musicians to mine network data whose value justifies its share price.
Streaming services as distribution networks are not friends to artists.
A video to take to heart:
Seven of the top 10 most valuable companies in the world are tech companies that either profit directly from data or are powered by data at their core. Multiple surveys show that the vast majority of business decision-makers regard data as an essential asset for success. And we have all experienced how data is driving a major paradigm shift in our personal, economic and political lives. Whoever owns the data owns the future.
But who’s producing the data? I assume everyone in this room has a smartphone, several social media accounts and has done a Google search or two in the past week. We are all producing the data.
The digital virus has no vaccine, but hopefully a cure…
Literacy altered the human brain, making it “refit some of its existing neuronal groups” and “form newly recycled circuits.” The brain had to change because the innate brain can’t read. It responds to what it is exposed to if exposure happens often, for a long period. Literacy develops through practice—through labor that compels the development of revised brain functions. The more you read, the more your brain adapts. It is a “plastic” organ.
The coronavirus pandemic has taken away so much in the blink of an eye: lives, jobs, income, wealth, vacations, travel, live entertainment, eating out, socializing, family gatherings, birthday parties, graduations, sports, libraries, universities, among so many things we never knew we would (not) miss.
But the pandemic has also given us a moment to reflect; to reflect on life’s true meaning and value.
Plagues, like war, strip us down and lay us bare, which is why they are constantly explored in our arts and literature.
So, as we #stayathome, quarantined away from the frantic life of just a few months ago, we have been given this unique opportunity of time to consider the important things in life. Not yesterday, not tomorrow. Today.
You can be sure binge-watching Netflix and Showtime is not one of them. Meaningless passive entertainment is a time killer, so why squander the time this crisis has given us?
Will 2020 be like the proverbial hole in the resumé of our lives?
No. Idle time, a thing so rare these days, will lead us back to our hearts and souls because the alternative is maddening. The constant barrage of the Information Age has robbed us of being alone with ourselves, to discover what makes us tick. It’s not just TikTok.
Instead, our creative spirits sing the language of our souls as we reach out with our hearts to communicate and share that song. Heart and soul. People are playing music, singing, cooking, baking, drawing, painting, writing, planting gardens – all dancing to the rhythm of life, no longer marching to the drumbeat of time and money. Technology, that two-edged sword, helps us make the connections that fill our need for social engagement. But one can only stare at a Zoom party screen for so long. We must create meaning to share meaning. It is that love of the creator in all of us.
And this, thanks to a global pandemic, is actually happening. And not a moment too soon.
The Information Age is different in many ways. This is why digital formats have roiled physical formats across all the creative industries. We need to think outside the box.
Interesting article from an academic expert (my annotations in brackets):
a digital platform is either large or dead.
One of the biggest concerns about today’s tech giants is their market power. At least outside China, Google, Facebook, and Amazon dominate online search, social media, and online retail, respectively. And yet economists have largely failed to address these concerns in a coherent way. To help governments and regulators as they struggle to address this market concentration, we must make economics itself more relevant to the digital age.
Digital markets often become highly concentrated, with one dominant firm, because larger players enjoy significant returns to scale. For example, digital platforms incur large upfront development costs but benefit from low marginal costs once the software is written. They gain from network effects, whereby the more users a platform has, the more all users benefit. And data generation plays a self-reinforcing role: more data improves the service, which brings in more users, which generates more data. To put it bluntly, a digital platform is either large or dead.
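The self-reinforcing dynamic described above (scale begets users, users beget data, data begets scale) can be illustrated with a toy simulation. This is a minimal sketch, not anything from the article: the model, the `alpha` parameter, and all numbers are illustrative assumptions. Two platforms compete for users, and each new user joins a platform with probability proportional to its current user count raised to a power greater than one, a crude stand-in for network effects plus the data feedback loop. The typical outcome is winner-take-all: one platform ends up large, the other effectively dead.

```python
import random

def simulate(steps=10000, alpha=0.5, seed=42):
    """Toy model of platform competition under network effects.

    Two platforms start with one user each. Each arriving user joins
    a platform with probability proportional to its user count raised
    to (1 + alpha). With alpha > 0, a platform's attractiveness grows
    superlinearly in its size, so an early lead compounds.
    All parameters here are illustrative, not empirical.
    """
    rng = random.Random(seed)
    users = [1, 1]  # users on platform 0 and platform 1
    for _ in range(steps):
        # Attractiveness grows superlinearly with user count.
        w0 = users[0] ** (1 + alpha)
        w1 = users[1] ** (1 + alpha)
        if rng.random() < w0 / (w0 + w1):
            users[0] += 1
        else:
            users[1] += 1
    return users

users = simulate()
leader_share = max(users) / sum(users)
print(f"final user counts: {users} (leader holds {leader_share:.0%})")
```

Running this with almost any seed shows the leading platform absorbing the overwhelming majority of users, a rough illustration of why "a digital platform is either large or dead."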
As several recent reports (including one to which I contributed) have pointed out, the digital economy poses a problem for competition policy. Competition is vital for boosting productivity and long-term growth because it drives out inefficient producers and stimulates innovation. Yet how can this happen when there are such dominant players?
Today’s digital behemoths provide services that people want: one recent study estimated that consumers value online search alone at a level equivalent to about half of US median income. Economists, therefore, need to update their toolkit. Rather than assessing likely short-term trends in specific digital markets, they need to be able to estimate the potential long-term costs implied by the inability of a new rival with better technology or service to unseat the incumbent platform.
This is no easy task because there is no standard methodology for estimating uncertain, non-linear futures. Economists even disagree on how to measure static consumer valuations of free digital goods such as online search and social media. And although the idea that competition operates dynamically through firms entering and exiting the market dates back at least to Joseph Schumpeter, the standard approach is still to look at competition among similar companies producing similar goods at a point in time.
The characteristics of digital technology pose a fundamental challenge to the entire discipline. As I pointed out more than 20 years ago, the digital economy is “weightless.” Moreover, many digital goods are non-rival “public goods”: you can use software code without stopping others from doing so, whereas only one person can wear the same pair of shoes. And they require a substantial degree of trust to have any value: we need to experience them to know whether they work, and social influence is often crucial to their diffusion.
Yet standard economics generally assumes none of these things. Economists will bridle at this statement, rightly pointing to models that accommodate some features of the digital economy. But economists’ benchmark mental world – particularly their instinctive framework for thinking about public policy questions – is one where competition is static, preferences are fixed and individual, rival goods are the norm, and so on.
Starting from there leads inexorably to presuming the “free market” paradigm. As any applied economist knows, this paradigm is named for a mythical entity. But this knowledge somehow does not give rise to an alternative presumption, say, that governments should supply certain products.
This instinct may be changing. One straw in the wind is the call by Jim O’Neill, a former Goldman Sachs economist who now heads the Royal Institute of International Affairs (Chatham House), for public research and production of new antibiotics. Having led a review of the spread of anti-microbial resistance – which will kill millions of people if new drugs are not discovered – O’Neill is dismayed by the lack of progress made by private pharmaceutical companies.
Drug discovery is an information industry, and information is a non-rival public good which the private sector, not surprisingly, is under-supplying. [Yes – this is what intellectual property rights/copyrights/patents are all about. The problem is attributing the value created by the sharing of information. We may be able to solve that with blockchain ledgers.] That conclusion is not remotely outlandish in terms of economic analysis. And yet the idea of nationalizing part of the pharmaceutical industry is outlandish from the perspective of the prevailing economic-policy paradigm.
Or consider the issue of data, which has lately greatly exercised policymakers. Should data collection by digital firms be further regulated? Should individuals be paid for providing personal data? [Yes, they should. Personal data is as proprietary as personal labor and personal ideas. Making sure users get paid for their data changes the business models of these natural monopolies.] And if a sensor in a smart-city environment records that I walk past it, is that my data, too? The standard economic framework of individual choices made independently of one another, with no externalities, and monetary exchange for the transfer of private property, offers no help in answering these questions. [Yes, because we don’t yet assign value to shared information. We rely on the property rights of tangible assets.]
Economic researchers are not blameless when it comes to inadequate policy decisions. We teach economics to people who go out into the world of policy and business, and our research shapes the broader intellectual climate. The onus now is on academics to establish a benchmark approach to the digital economy and to create a set of applied methods and tools that legislators, competition authorities, and other regulators can use.
Mainstream economics has largely failed to keep up with the rapid pace of digital transformation, and it is struggling to find practical ways to address the growing power of dominant tech companies. If the discipline wants to remain relevant, then it must rethink some of its basic assumptions.
The following article caught my attention. It seems to suggest that millennials and Gen Zers have just become more willing than earlier generations to seek out therapy for mental health issues. But 50-75% quit their jobs due to mental health? That seems more like an epidemic than social enlightenment and the fact that it strikes deepest among certain age cohorts is a red flag. I would suggest that other research studies into happiness, fulfillment, and health have come up with different factors. (My book The Ultimate Killer App: How Technology Succeeds presents most of this research with citation references.)
First, most normal (i.e., non-medical) psychological depressive states can be traced to a disconnect between expectations and reality. We all experience this as disappointment, but an over-emphasis on relative status with our peers can be a strong catalyst for psychological distress. Certainly social media has made this relative status more salient in our daily lives: “Gee, our friends on Facebook seem to be living much more exciting and rewarding lives!!!” We’ve even coined a term for this distress: FOMO (fear of missing out). In pre-mass-media days few people knew they were missing out on the Vanderbilt or Rockefeller lifestyle; these were more like Hollywood fantasies. But social media has brought such status comparisons into our daily lives with real people we know. How many young workers flipping burgers as a career stepping stone can last more than a month of this humiliation while their friends play ping-pong at Google? Apparently not many.
An older generation would reply that work is not supposed to be fun, that’s why it’s called work. But I think the future offers us better solutions if we are cognizant enough of the problem to seek them out. First, healthy humans at some point realize that relative status matters not a whit; nobody really cares about how great your life is, except you and perhaps your mom.
The second point that psychological research reveals is that money and wealth offer a very poor representation of true status and self-actualization. Instead we should be looking to our creative and social instincts. This is the major thesis of The Ultimate Killer App: what we truly desire to be happy and fulfilled is to explore our creativity and share it with like-minded others to establish robust social connections. Having a child and a family helps satisfy these most primal needs. It really comes down to Maslow’s Hierarchy of Needs and this is the foundational idea of tuka, a creativity sharing social network platform.
by Megan Henney
Young people are spearheading mental health awareness at the workplace.
About half of millennials and 75 percent of Gen Zers have quit their jobs for mental health reasons, according to a new study conducted by Mind Share Partners, SAP and Qualtrics. It was published in Harvard Business Review.
That’s compared to just 20 percent of respondents overall who said they’ve voluntarily left a job in order to prioritize their mental health — emblematic of a “shift in generational awareness,” the authors of the report, Kelly Greenwood, Vivek Bapat and Mike Maughan, wrote. For baby boomers, the number was the lowest, with less than 10 percent quitting a job for mental-health purposes.
It should come as no surprise that younger generations are paving the way for the de-stigmatization of mental health. A Wall Street Journal article published in March labeled millennials the “therapy generation,” as today’s 20- and 30-somethings are more likely to turn to therapy, and with fewer reservations, than young people in previous eras did.
A 2017 report from the Center for Collegiate Mental Health at Penn State University found that, based on data from 147 colleges and universities, the number of students seeking mental-health help increased at five times the rate of new students starting college from 2011 to 2016. And a Blue Cross Blue Shield study published in 2018 revealed that major depression diagnoses surged by 44 percent among millennials from 2013 to 2016.
Increasingly, employees (about 86 percent) want their company to prioritize mental health. Despite that — and the fact that mental health conditions result in a $16.8 billion loss in employee productivity — the report found that companies are still not doing enough to break down the stigma, resulting in a failure to identify workers who may have a mental health condition. Up to 80 percent of individuals will manage a mental health condition at one point in their lifetime, according to the study.
Of course, sometimes employees are unaware of the different resources offered at their organizations, or are afraid of retribution if they elect to use them. In the study, millennials, ages 23 to 38, were 63 percent more likely than baby boomers, 55 to 73, to know the proper procedure for seeking mental health support from the company.
The study was based on responses collected from 1,500 U.S. adults.
by Matt Stoller
The Guardian, Mon 9 Sep 2019
Google and Facebook are the subject of large antitrust investigations. This is good news for our democracy and a free press
The most important input for an advertiser is knowing who is watching the ad. If you know who is seeing an ad slot, you can charge a lot of money to tailor it for that person’s specific interest. If you don’t know who is seeing an ad slot, you can’t charge very much at all. Google and Facebook know who is looking at ad slots everywhere and what they are interested in, so they can sell anything any marketer needs.