Anti-Culture by Algorithm

As we’ve mentioned elsewhere, rule by algorithm can be just as stringent as rule by any dictator, perhaps even more so, since it is vague, faceless, and hard to define. And human actors will always respond to perverse incentives by gaming the algorithm. This article explains how Amazon’s Kindle algorithms produce perverse incentives that determine market outcomes for books, and thus the products we get to see and consume.
Let’s just say, it ain’t literature! 

BAD ROMANCE

To cash in on Kindle Unlimited, a cabal of authors gamed Amazon’s algorithm

A genre that mostly features shiny, shirtless men on its covers and sells ebooks for 99 cents a pop might seem unserious. But at stake are revenues sometimes amounting to a million dollars a year, with some authors easily netting six figures a month. The top authors can drop $50,000 on a single ad campaign that will keep them in the charts — and see a worthwhile return on that investment.

In other words, self-published romance is no joke.

Book stuffing is a term that encompasses a wide range of methods for taking advantage of the Kindle Unlimited revenue structure. In Kindle Unlimited, readers pay $9.99 a month to read as many books as they want that are available through the KU program. This includes both popular mainstream titles like the Harry Potter series and self-published romances put out by authors like Crescent and Hopkins. Authors are paid according to pages read, creating incentives to produce massively inflated and strangely structured books. The more pages Amazon thinks have been read, the more money an author receives.
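To make that incentive concrete, here is a back-of-envelope sketch of the payout mechanics. The per-page rate below is an assumption for illustration only; Amazon pays authors from a shared monthly fund, and the effective rate varies.

```python
# Hypothetical Kindle Unlimited payout math. The rate is an assumed
# figure for illustration; Amazon's actual per-page payout varies monthly.
PAYOUT_PER_PAGE = 0.0045  # dollars per page read (assumption)

def monthly_payout(pages_read: int, rate: float = PAYOUT_PER_PAGE) -> float:
    """Author revenue scales linearly with pages Amazon counts as read."""
    return pages_read * rate

# A 300-page novel read in full by 1,000 subscribers...
print(f"${monthly_payout(300 * 1_000):,.2f}")    # $1,350.00
# ...versus a 3,000-page 'stuffed' volume whose front-matter link
# jumps readers to the back, registering every page as read.
print(f"${monthly_payout(3_000 * 1_000):,.2f}")  # $13,500.00
```

Tenfold the “pages read,” tenfold the payout: hence the massively inflated, strangely structured books.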

“But if they’re only relying on algorithms, they’re going to find — like YouTube and like Facebook have been finding — they need human curators as well.”

Exactly.

Read more…


Networks and Hierarchies

This is a review of British historian Niall Ferguson’s new book, The Square and the Tower: Networks, Hierarchies and the Struggle for Global Power. In this day and age of global communication networks, which might seem to herald the permanent dominance of networks over hierarchies, it’s worth taking the long arc of history into account. That history cautions us otherwise.

Ferguson identifies two predominant ages of networks. The first began with the advent of the printing press in 1452 and produced an explosion of networks across the world that lasted until around 1800; this was the Enlightenment era that helped transform economics, politics, and social relations.

Today the second age of networks consumes us. It began around 1970 with microchip technology and continues to the present: the age of telecommunications, digital technology, and global networks. Ours is an age where it seems “everything is connected.”

Ferguson notes that, beginning with the invention of written language, new technologies have simply facilitated our innate, ancient urge to network – in other words, to connect. This affirms Aristotle’s observation that “man is a social animal,” as well as a large library of behavioral psychology studies from the past century. He also notes that most networks may follow a power-law distribution and be scale-free: large networks grow larger and become more valuable as they do, meaning the rich get richer and most social networks are profoundly inegalitarian. This implies that the Google-Amazon-Facebook-Apple (GAFA) oligarchy may be taking over the world, leaving the rest of us as powerless as feudal serfs.
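A minimal simulation makes the “rich get richer” mechanics visible. The sketch below uses preferential attachment (the standard textbook model of scale-free networks, not anything specific from Ferguson’s book): each new node links to existing nodes with probability proportional to their current degree.

```python
import random
from collections import Counter

def preferential_attachment(n_nodes: int, m: int = 2, seed: int = 42) -> Counter:
    """Grow a network where each new node attaches to m existing nodes,
    chosen with probability proportional to current degree."""
    rng = random.Random(seed)
    degree = Counter({0: 2, 1: 2, 2: 2})  # seed triangle
    # 'stubs' lists each node once per unit of degree, so a uniform
    # pick from it is a degree-proportional pick.
    stubs = [0, 0, 1, 1, 2, 2]
    for new in range(3, n_nodes):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(stubs))
        for old in chosen:
            degree[new] += 1
            degree[old] += 1
            stubs.extend([new, old])
    return degree

degree = preferential_attachment(10_000)
print("Top hubs:", degree.most_common(3))                           # a few huge hubs
print("Median degree:", sorted(degree.values())[len(degree) // 2])  # most nodes stay small
```

Run it and a few early nodes accumulate hundreds of links while the median node has only a handful: exactly the profoundly inegalitarian shape described above.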

But there is a fatal weakness inherent in this futuristic scenario: complex networks create interdependent relationships that can lead to catastrophic cascades, such as the global financial crisis of 2008 or the explosion of “fake news” and misinformation spewed out by global gossip networks.

We are also seeing a gradual deconstruction of networks that compete with the power of nation-state sovereignty. This is reflected in the rise of nationalistic politics in democracies and authoritarian monopoly control over information in autocracies.

However, on the subject of hierarchical control, Ferguson notes that the failure of democratic governance through the administrative state “represents the last iteration of political hierarchy: a system that spews out rules, generates complexity, and undermines both prosperity and stability.”

These historical paths imply that the conflict between distributed networks and concentrated hierarchies is likely a natural tension in search of an uneasy equilibrium.

Ferguson notes “if Facebook initially satisfied the human need to gossip, it was Twitter – founded in March 2006 – that satisfied the more specific need to exchange news, often (though not always) political.” But when I read Twitter feeds, I suspect Twitter may be more a tool for disruption than for constructive dialogue. In other words, we can use these networking technologies to tear things down, but not so much to build them back up again.

As a Twitter co-founder confesses:

‘I thought once everybody could speak freely and exchange information and ideas, the world is automatically going to be a better place,’ said Evan Williams, one of the co-founders of Twitter, in May 2017. ‘I was wrong about that.’

Rather, as Ferguson asserts, “The lesson of history is that trusting in networks to run the world is a recipe for anarchy: at best, power ends up in the hands of the Illuminati, but more likely it ends up in the hands of the Jacobins.”

Ferguson is quite pessimistic about today’s dominance of networks, with one slim ray of hope. As he writes,

“…how can an urbanized, technologically advanced society avoid disaster when its social consequences are profoundly inegalitarian?

“To put the question more simply: can a networked world have order? As we have seen, some say that it can. In the light of historical experience, I very much doubt it.”

That slim ray of hope? Blockchain technology!

A thought-provoking book.


Algorithmic Culture

Some excerpts from this article point out the effect machine algorithms have in shaping our information, our entertainment, and our culture.

The Creepy and Creeping Power of Social Media

By Ned Ryun | June 8th, 2018

While algorithms are necessary to serve up the content people want, social media companies failing to be transparent on this front are dangerous…Algorithm tweaking isn’t neutral, and it has a massive “follow on” effect in the digital industry and political world, changing the kind of content that people see every day. So if the algorithm starts filtering [say, content], it puts a thumb on the scale, favoring one side over the other. With a small handful of controllers over the algorithms, it’s appropriate to ask: who controls the controllers?

….

We should acknowledge that rule by algorithm can be just as stringent as any rule by a dictator, perhaps even more so as it is vague, faceless, and hard to define. These algorithms decide what you see and don’t see in your timeline, subtly determining for you what is “worthy” of your attention. Facebook treats this algorithm like a black box: we’re never allowed to look inside and see what’s going on; we only ever see the results on our news feeds. A world ruled by algorithms—just like the one it replaced, controlled by network executives—closes off views, closes off debates, and further Balkanizes people. So how can these social media and tech giants save democracy when in fact they’re becoming less democratic?

This is the same dynamic that filters and feeds our artistic content through the world-wide web: we can only consume the content we can find, and this is how it’s being found.
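As a toy illustration of that thumb on the scale, here is a minimal feed ranker. Everything in it is invented for the example; no platform publishes its real scoring weights.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    topic: str
    engagement: float  # normalized likes/shares/comments

# Invented weights -- a stand-in for the black box.
weights = {"engagement": 1.0, "topic_boost": {"politics": 0.0}}

def rank(posts, weights):
    """Return the feed ordered by a simple weighted score."""
    def score(p):
        return (weights["engagement"] * p.engagement
                + weights["topic_boost"].get(p.topic, 0.0))
    return sorted(posts, key=score, reverse=True)

posts = [
    Post("alice", "politics", 0.60),
    Post("bob", "sports", 0.62),
    Post("carol", "politics", 0.58),
]

print([p.author for p in rank(posts, weights)])  # ['bob', 'alice', 'carol']

# One small, invisible tweak favoring a topic...
weights["topic_boost"]["politics"] = 0.05
print([p.author for p in rank(posts, weights)])  # ['alice', 'carol', 'bob']
```

The posts themselves never change; a five-percent nudge buried in the weights reorders what everyone sees.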

Put The Damn Phone Down and Do Something

This is a good interview with Ben Silbermann, co-founder of Pinterest, published on Medium.

Some excerpts:

We’re social creatures. We need to connect with other people.

Pinterest is actually…it’s really about you. It’s about your tastes, your aspirations, your plans. There are other people there. Our recommendations are all curated by other users. The objective is not to do that [seek Likes]. That’s why it’s different than social networks.

Sure, it’s fun to look at millions of ideas, but eventually, the real satisfaction and joy comes from giving it a shot. It might turn out great. It might turn out poorly. All of that is fine. We want to be the company that motivates you to put your phone down and to go try those things.

So, Pinterest is doing the right things to encourage engagement within the community. The next step of Web 3.0 is to distribute the network value they create back to the community of users. tuka will do that.

Surviving the Digital Economy

This will be a crucial issue for a free society going forward…giving away your data is like giving away your labor.

Want Our Personal Data? Pay for It

The posting, tagging and uploading that we do online may be fun, but it’s labor too, and we should be compensated for it

By Eric A. Posner and Glen Weyl

WSJ, April 20, 2018 11:19 a.m. ET

Congress has stepped up talk of new privacy regulations in the wake of the scandal involving Cambridge Analytica, which improperly gained access to the data of as many as 87 million Facebook users. Even Facebook chief executive Mark Zuckerberg testified that he thought new federal rules were “inevitable.” But to understand what regulation is appropriate, we need to understand the source of the problem: the absence of a real market in data, with true property rights for data creators. Once that market is in place, implementing privacy protections will be easy.

We often think of ourselves as consumers of Facebook, Google, Instagram and other internet services. In reality, we are also their suppliers—or more accurately, their workers. When we post and label photos on Facebook or Instagram, use Google Maps while driving, chat in multiple languages on Skype or upload videos to YouTube, we are generating data about human behavior that the companies then feed into machine-learning programs.

These programs use our personal data to learn patterns that allow them to imitate human behavior and understanding. With that information, computers can recognize images, translate languages, help viewers choose among shows and offer the speediest route to the mall. Companies such as Facebook, Google, and Microsoft (where one of us works) sell these tools to other companies. They also use our data to match advertisers with consumers.

Defenders of the current system often say that we don’t give away our personal data for free. Rather, we’re paid in the form of the services that we receive. But this exchange is bad for users, bad for society and probably not ideal even for the tech companies. In a real market, consumers would have far more power over the exchange: Here’s my data. What are you willing to pay for it?

An internet user today probably would earn only a few hundred dollars a year if companies paid for data. But that amount could grow substantially in the coming years. If the economic reach of AI systems continues to expand—into drafting legal contracts, diagnosing diseases, performing surgery, making investments, driving trucks, managing businesses—they will need vast amounts of data to function.

And if these systems displace human jobs, people will have plenty of time to supply that data. Tech executives fearful that AI will cause mass unemployment have advocated a universal basic income funded by increased taxes. But the pressure for such policies would abate if users were simply compensated for their data.

The data currently compiled by Facebook and other companies is of pretty low quality. That’s why Facebook has an additional army of paid workers who are given dedicated tasks, such as labeling photos, to fill in the gaps left by users. If Facebook paid users for their work, it could offer pay tied to the value of the user’s contribution—offering more, for example, for useful translations of the latest Chinese slang into English than for yet another video labeled “cat.”

So why doesn’t Facebook already offer wages to users? For one, obviously, it would cost a lot to pay users for the data that the company currently gets for free. And then Google and others might start paying as well. Competition for users would improve the quality of data but eat away at the tech companies’ bottom line.

It’s also true that users simply aren’t thinking this way. But that can change. The basic idea is straightforward enough: When we supply our personal data to Facebook, Google or other companies, it is a form of labor, and we should be compensated for it. It may be enjoyable work, but it’s work just the same.

If companies reject this model of “data as labor,” market pressure could be used to persuade them. Rather than sign up directly with, say, Facebook, people would sign up with a data agent. (Such services, sometimes referred to as personal data exchanges or vaults, are already in development, with more than a dozen startups vying to fill this role.) The data agent would then offer Facebook access to its members and negotiate wages and terms of use on their behalf. Users would get to Facebook through the agent’s platform. If at any time Facebook refused reasonable wages, the data agent could coordinate a strike or a boycott. Unlike individual users, the data agent could employ lawyers to review terms and conditions and ensure that those terms are being upheld.
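Here is a minimal sketch of that data-agent idea. The names, rates, and interface are all hypothetical; the article describes a concept, not an existing API.

```python
from dataclasses import dataclass, field

@dataclass
class DataAgent:
    """Hypothetical intermediary that bargains with platforms over
    the wages its members earn for supplying their data."""
    name: str
    members: set = field(default_factory=set)
    wage_floor_per_mb: float = 0.05  # illustrative figure, not a real rate

    def negotiate(self, platform: str, offer_per_mb: float) -> str:
        # If the offer clears the members' floor, data flows; otherwise
        # the agent coordinates the strike the authors describe.
        if offer_per_mb >= self.wage_floor_per_mb:
            return f"accept: members supply data to {platform} at ${offer_per_mb}/MB"
        return f"strike: members withhold data from {platform} until the offer improves"

agent = DataAgent("ExampleDataUnion", members={"alice", "bob"})
print(agent.negotiate("Facebook", 0.06))  # accept: ...
print(agent.negotiate("Facebook", 0.01))  # strike: ...
```

The point of the design is bargaining leverage: an individual user can’t credibly withhold data, but a union of millions can.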

With multiple data agents competing for users’ business, no one could become an abusive monopolist. The agent’s sole purpose would be managing workers’ data in their interests—and if there were a problem, users could move their data to another service without having to give up on their social network.

Companies such as Apple and Amazon also could get into the act. Currently, their business models are very different from those of Facebook and Google. For the most part, their focus is on selling products and services, rather than offering them without charge. If Facebook and Google refuse to pay users for their data, these other companies are big and sophisticated enough to pay for data instead.

Would the “data as labor” model put the tech giants out of business? Hardly. Their vast profits already reflect their monopoly power. Their margins would certainly be tighter under this new regime, but the wider economy would likely grow through greater productivity and a fairer distribution of income. The big companies would take a smaller share of a larger pie, but their business model would be far more sustainable, politically and socially. More important, they would have to focus on the value that their core services bring to consumers, rather than on exploiting their monopoly in user data.

As for Congress, it could help by making it simpler for individuals to have clear property rights in their own data, rights that can’t be permanently signed away by accepting a company’s confusing terms and conditions. The European Union has already taken steps in this direction, and its new regulations—which require data to be easily portable—are a leading stimulus for the rise of data agent startups. Government can also help by updating labor law to be more consistent with modern data work while protecting data workers from exploitation.

Most of us already take great satisfaction in using social media to connect with our friends and family. Imagine how much happier and prouder we would be if we received fair pay for the valuable work we perform in doing that.

Prof. Posner teaches at the University of Chicago Law School. Dr. Weyl teaches at Yale University and is a principal researcher at Microsoft (whose views he in no way represents here). Their new book is “Radical Markets: Uprooting Capitalism and Democracy for a Just Society,” which will be published on May 8 by Princeton University Press.

https://www.wsj.com/articles/want-our-personal-data-pay-for-it-1524237577 (Paywall)

The Real Scandal?


The Real Scandal Isn’t What Cambridge Analytica Did

It’s what Facebook made possible.

A couple of excerpts:

Sinister as it sounds, “psychographic” targeting—advertising to people based on information about their attitudes, interests, and personality traits—is an imprecise science at best and “snake oil” at worst.

….

If you think of that data, and the ads, as a relatively small price to pay for the privilege of seamless connection to everyone you know and care about, then Facebook looks like the wildly successful, path-breaking company that made it all possible. But if you start to think of the bargain as Faustian—with hidden long-term costs that overshadow the obvious benefits—then that would make Facebook the devil.

What this scandal did, then, was make the grand bargain of the social web look a little more Faustian than it did before.

From that perspective, the real scandal is that this wasn’t a data breach or some egregious isolated error on Facebook’s part. What Cambridge Analytica did was, in many ways, what Facebook was optimized for—collating personal information about vast numbers of people in handy packets that could then be used to try to sell them something.

Yes, the real scandal is that most of us are giving away, for free, the very value we need to survive in the digital economy.

Who’s Watching You?

Personal data is the new goldmine. Here’s who’s mining your gold.

What You Need to Know About Tech

Good article.

12 Things Everyone Should Understand About Tech

A couple of good excerpts:

9. Most big tech companies make money in just one of three ways.

It’s important to understand how tech companies make money if you want to understand why tech works the way that it does.

  • Advertising: Google and Facebook make nearly all of their money from selling information about you to advertisers. Almost every product they create is designed to extract as much information from you as possible, so that it can be used to create a more detailed profile of your behaviors and preferences, and the search results and social feeds made by advertising companies are strongly incentivized to push you toward sites or apps that show you more ads from these platforms. It’s a business model built around surveillance, which is particularly striking since it’s the one that most consumer internet businesses rely upon.
  • Big Business: Some of the larger (generally more boring) tech companies like Microsoft and Oracle and Salesforce exist to get money from other big companies that need business software but will pay a premium if it’s easy to manage and easy to lock down the ways that employees use it. Very little of this technology is a delight to use, especially because the customers for it are obsessed with controlling and monitoring their workers, but these are some of the most profitable companies in tech.
  • Individuals: Companies like Apple and Amazon want you to pay them directly for their products, or for the products that others sell in their store. (Although Amazon’s Web Services exist to serve that Big Business market, above.) This is one of the most straightforward business models—you know exactly what you’re getting when you buy an iPhone or a Kindle, or when you subscribe to Spotify, and because it doesn’t rely on advertising or cede purchasing control to your employer, companies with this model tend to be the ones where individual people have the most power.

That’s it. Pretty much every company in tech is trying to do one of those three things, and you can understand why they make their choices by seeing how it connects to these three business models.


10. The economic model of big companies skews all of tech.

Today’s biggest tech companies follow a simple formula:

  1. Make an interesting or useful product that transforms a big market
  2. Get lots of money from venture capital investors
  3. Try to quickly grow a huge audience of users even if that means losing a lot of money for a while
  4. Figure out how to turn that huge audience into a business worth enough to give investors an enormous return
  5. Start ferociously fighting (or buying off) other competitive companies in the market

This model looks very different than how we think of traditional growth companies, which start off as small businesses and primarily grow through attracting customers who directly pay for goods or services. Companies that follow this new model can grow much larger, much more quickly, than older companies that had to rely on revenue growth from paying customers. But these new companies also have much lower accountability to the markets they’re entering because they’re serving their investors’ short-term interests ahead of their users’ or community’s long-term interests.

The pervasiveness of this kind of business plan can make competition almost impossible for companies without venture capital investment. Regular companies that grow based on earning money from customers can’t afford to lose that much money for that long a time. It’s not a level playing field, which often means that companies are stuck being either little indie efforts or giant monstrous behemoths, with very little in between. The end result looks a lot like the movie industry, where there are tiny indie arthouse films and big superhero blockbusters, and not very much else. [This is amplifying the winner-take-all dynamics of technology.]

And the biggest cost for these big new tech companies? Hiring coders. They pump the vast majority of their investment money into hiring and retaining the programmers who’ll build their new tech platforms. Precious little of these enormous piles of money is put into things that will serve a community or build equity for anyone other than the founders or investors in the company. There is no aspiration that making a hugely valuable company should also imply creating lots of jobs for lots of different kinds of people. [Note: I would add to this last point that the aspiration should be to distribute value more widely across the community, not necessarily create more jobs.]