The Real Scandal?


The Real Scandal Isn’t What Cambridge Analytica Did

It’s what Facebook made possible.

A couple of excerpts:

Sinister as it sounds, “psychographic” targeting—advertising to people based on information about their attitudes, interests, and personality traits—is an imprecise science at best and “snake oil” at worst.

…

If you think of that data, and the ads, as a relatively small price to pay for the privilege of seamless connection to everyone you know and care about, then Facebook looks like the wildly successful, path-breaking company that made it all possible. But if you start to think of the bargain as Faustian—with hidden long-term costs that overshadow the obvious benefits—then that would make Facebook the devil.

What this scandal did, then, was make the grand bargain of the social web look a little more Faustian than it did before.

From that perspective, the real scandal is that this wasn’t a data breach or some egregious isolated error on Facebook’s part. What Cambridge Analytica did was, in many ways, what Facebook was optimized for—collating personal information about vast numbers of people in handy packets that could then be used to try to sell them something.

Yes, the real scandal is that most of us are giving away real value that we need to survive in the digital economy.

Who’s Watching You?

Personal data is the new goldmine. Here’s who’s mining your gold:

What You Need to Know About Tech

Good article.

12 Things Everyone Should Understand About Tech

A couple of good excerpts:

9. Most big tech companies make money in just one of three ways.

It’s important to understand how tech companies make money if you want to understand why tech works the way that it does.

  • Advertising: Google and Facebook make nearly all of their money from selling information about you to advertisers. Almost every product they create is designed to extract as much information from you as possible, so that it can be used to create a more detailed profile of your behaviors and preferences, and the search results and social feeds made by advertising companies are strongly incentivized to push you toward sites or apps that show you more ads from these platforms. It’s a business model built around surveillance, which is particularly striking since it’s the one that most consumer internet businesses rely upon.
  • Big Business: Some of the larger (generally more boring) tech companies like Microsoft and Oracle and Salesforce exist to get money from other big companies that need business software but will pay a premium if it’s easy to manage and easy to lock down the ways that employees use it. Very little of this technology is a delight to use, especially because the customers for it are obsessed with controlling and monitoring their workers, but these are some of the most profitable companies in tech.
  • Individuals: Companies like Apple and Amazon want you to pay them directly for their products, or for the products that others sell in their store. (Although Amazon’s Web Services exist to serve that Big Business market, above.) This is one of the most straightforward business models—you know exactly what you’re getting when you buy an iPhone or a Kindle, or when you subscribe to Spotify, and because it doesn’t rely on advertising or cede purchasing control to your employer, companies with this model tend to be the ones where individual people have the most power.

That’s it. Pretty much every company in tech is trying to do one of those three things, and you can understand why they make their choices by seeing how it connects to these three business models.


10. The economic model of big companies skews all of tech.

Today’s biggest tech companies follow a simple formula:

  1. Make an interesting or useful product that transforms a big market
  2. Get lots of money from venture capital investors
  3. Try to quickly grow a huge audience of users even if that means losing a lot of money for a while
  4. Figure out how to turn that huge audience into a business worth enough to give investors an enormous return
  5. Start ferociously fighting (or buying off) other competitive companies in the market

This model looks very different than how we think of traditional growth companies, which start off as small businesses and primarily grow through attracting customers who directly pay for goods or services. Companies that follow this new model can grow much larger, much more quickly, than older companies that had to rely on revenue growth from paying customers. But these new companies also have much lower accountability to the markets they’re entering because they’re serving their investors’ short-term interests ahead of their users’ or community’s long-term interests.

The pervasiveness of this kind of business plan can make competition almost impossible for companies without venture capital investment. Regular companies that grow based on earning money from customers can’t afford to lose that much money for that long a time. It’s not a level playing field, which often means that companies are stuck being either little indie efforts or giant monstrous behemoths, with very little in between. The end result looks a lot like the movie industry, where there are tiny indie arthouse films and big superhero blockbusters, and not very much else. [This is amplifying the winner-take-all dynamics of technology.]

And the biggest cost for these big new tech companies? Hiring coders. They pump the vast majority of their investment money into hiring and retaining the programmers who’ll build their new tech platforms. Precious little of these enormous piles of money is put into things that will serve a community or build equity for anyone other than the founders or investors in the company. There is no aspiration that making a hugely valuable company should also imply creating lots of jobs for lots of different kinds of people. [Note: I would add to this last point that the aspiration should be to distribute value more widely across the community, not necessarily create more jobs.]

What are the DApps of the Future?

But how?

DApps, or Distributed Applications, are the force multipliers for blockchain technologies, just like email, Amazon, eBay, Google, and social networks are the applications that have propelled the Internet. The race is on for the development of these DApps to transform industries and the future of the Internet itself.

In Search of Blockchain’s Killer-Apps

By Irving Wladawsky-Berger, WSJ, Mar 9, 2018

Blockchain has been in the news lately, but beyond knowing that it has something to do with payments and digital currencies, most people don’t know what blockchain is or why they should care. A major part of the reason is that we still don’t have the kind of easy-to-explain blockchain killer-apps that propelled the internet forward.

Blockchain has yet to cross the chasm from technology enthusiasts and visionaries to the wider marketplace that’s more interested in business value and applications. There’s considerable research on blockchain technologies, platforms and applications as well as market experimentation in a number of industries, but blockchain today is roughly where the internet was in the mid-late 1980s: full of promise but still confined to a niche audience.

In addition, outside of digital currencies, blockchain applications are primarily aimed at institutions. And, given that blockchain is all about the creation, exchange and management of valuable assets, its applications are significantly more complex to understand and explain than internet applications.

The management of information is quite different from the management of transactions. The latter, especially for transactions dealing with valuable or sensitive assets, requires deep contractual negotiations among companies and jurisdictional negotiations among governments. Moreover, since blockchain is inherently multi-institutional in nature, its applications involve close collaboration among companies, governments and other entities.

In my opinion, there will likely be two major kinds of blockchain killer-apps: those primarily aimed at reducing the friction and overheads in complex transactions involving multiple institutions; and those primarily aimed at strengthening the security and privacy of the internet through identity management and data sharing. Let me discuss each in turn.

Complex transactions among institutions. “Contracts, transactions, and the records of them are among the defining structures in our economic, legal, and political systems,” wrote Harvard professors Marco Iansiti and Karim Lakhani in a 2017 HBR article.

With blockchain, “every agreement, every process, every task, and every payment would have a digital record and signature that could be identified, validated, stored, and shared… Individuals, organizations, machines, and algorithms would freely transact and interact with one another with little friction.”

Blockchain holds the promise to transform the finance industry and other aspects of the digital economy by bringing one of the most important and oldest concepts, the ledger, to the internet age. Ledgers constitute a permanent record of all the economic transactions an institution handles, whether it’s a bank managing deposits, loans and payments; a brokerage house keeping track of stocks and bonds; or a government office recording the ownership and sale of land and houses.

Over the years, institutions have automated their original paper-based ledgers with sophisticated IT applications and databases. But while most ledgers are now digital, their underlying structure has not changed. Each institution continues to own and manage its own ledger, synchronizing its records with those of other institutions as appropriate – a cumbersome process that often takes days. While these legacy systems operate with a high degree of robustness, they’re rather inflexible and inefficient.
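The property that distinguishes a blockchain ledger from those legacy digital ledgers is that it is tamper-evident by construction. A toy sketch shows the idea (this is a simplification for intuition, not how production platforms like Hyperledger or Ethereum are actually implemented):

```python
import hashlib
import json

def block_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous block's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

class Ledger:
    """A toy append-only, hash-chained ledger (illustrative only)."""
    def __init__(self):
        self.blocks = []  # each entry: (record, hash)

    def append(self, record: dict):
        prev = self.blocks[-1][1] if self.blocks else "genesis"
        self.blocks.append((record, block_hash(record, prev)))

    def verify(self) -> bool:
        """Recompute every hash; any tampered record breaks the chain."""
        prev = "genesis"
        for record, h in self.blocks:
            if block_hash(record, prev) != h:
                return False
            prev = h
        return True

ledger = Ledger()
ledger.append({"from": "alice", "to": "bob", "amount": 10})
ledger.append({"from": "bob", "to": "carol", "amount": 4})
assert ledger.verify()

# Altering an earlier record invalidates the chain from that point on:
ledger.blocks[0] = ({"from": "alice", "to": "bob", "amount": 1000},
                    ledger.blocks[0][1])
assert not ledger.verify()
```

Because each block’s hash covers the previous block’s hash, no participant can quietly rewrite history, which is what lets independent institutions share one ledger without days of reconciliation.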

In August of 2016, the WEF published a very good report on how blockchain can help reshape the financial services industry. The report concluded that blockchain technologies have great potential to drive simplicity and efficiency through the establishment of new financial services infrastructure, processes and business models.

However, transforming the highly complex global financial ecosystem will take considerable investment and time. It requires the close collaboration of its various stakeholders, including existing financial institutions, fintech startups, merchants of all sizes, government regulators in just about every country, and huge numbers of individuals around the world. Getting them to work together and pull in the same direction is a major undertaking, given their diverging, competing interests. Overcoming these challenges will likely delay large-scale, multi-party blockchain implementations.

Supply chain applications will likely be among the earliest blockchain killer-apps, increasing the speed, security and accuracy of financial and commercial settlements; tracking the supply chain lifecycle of any component or product; and securely protecting all the transactions and data moving through the supply chain. The infrastructures and processes of supply chains are significantly less complex than those in financial services, healthcare, and other industries and there are already a number of experimental applications under way.

A recent WSJ CIO Journal article noted that blockchain seems poised to change how supply chains work. The article cites examples of projects with Walmart and British Airways where blockchain is used to maintain the integrity of the data being shared across the various institutions participating in their respective ecosystems. Earlier this year IBM and Maersk announced a joint venture to streamline operations for the entire global shipping ecosystem. Their joint venture aims to apply blockchain technologies to the current stack of paperwork needed to process and track the shipping of goods. Maersk estimates that the costs to process and administer the required documentation can be as high as 20 percent of the actual physical transportation costs.

Identity management and data sharing. The other major kind of blockchain killer-apps will likely deal with identity management and data security.

As we move from a world of physical interactions and paper documents, to a world primarily governed by digital data and transactions, our existing methods for protecting identities and data are proving inadequate. Internet threats have been growing. Large-scale fraud, data breaches, and identity thefts are becoming more common. Companies are finding that cyberattacks are costly to prevent and recover from. The transition to a digital economy requires radically different identity systems.

A major reason for the internet’s ability to keep growing and adapting to widely different applications is that it’s stuck to its basic data-transport mission.  Consequently, there’s no one overall owner responsible for security, let alone identity management, over the internet. These important responsibilities are divided among several actors, making them significantly harder to achieve.

Blockchain technologies should help us enhance the security of digital transactions and data, by developing the required common services for secure communication, storage and data access, along with open source software implementations of these standard services, supported by all major blockchain platforms, such as Hyperledger and Ethereum.

Identity is the key that determines the particular transactions in which individuals, institutions, and the exploding number of IoT devices, can rightfully participate, as well as the data they’re entitled to access. But, our existing methods for managing digital identities are far from adequate.

To reach a higher level of privacy and security we need to establish a trusted data ecosystem, which requires the interoperability and sharing of data across the various institutions involved. The more data sources a trusted ecosystem has access to, the higher the probability of detecting fraud and identity theft. However, it’s not only highly unsafe, but also totally infeasible to gather all the needed attributes in a central data warehouse. Few institutions will let their critical data out of their premises.

MIT Connection Science, a research initiative led by MIT professor Sandy Pentland, has been developing a new identity framework that would enable the safe sharing of data across institutions. Instead of copying or moving the data across, the agreed upon queries are sent to the institution owning the data, executed behind the firewalls of the data owners, and only the encrypted results are shared. MIT Connection Science is implementing such an identity framework in its OPAL initiative, which makes extensive use of cryptographic and blockchain technologies. A number of pilots are underway around the world.
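The query pattern described above — send the question to the data rather than the data to the question — can be sketched in a few lines. The class and method names here are illustrative, not the actual OPAL API, and real deployments add vetting of queries, encryption of results, and audit logging:

```python
class DataOwner:
    """Holds sensitive records; raw rows never leave its 'firewall'."""

    def __init__(self, records):
        self._records = records  # private: the data stays on-premises

    def run_query(self, predicate):
        # Execute the agreed-upon query locally and return only an
        # aggregate result, never the underlying records.
        return sum(1 for r in self._records if predicate(r))

bank = DataOwner([
    {"id": "a1", "flagged": True},
    {"id": "b2", "flagged": False},
    {"id": "c3", "flagged": True},
])

# The outside analyst receives only the count, not the rows themselves.
flagged_count = bank.run_query(lambda r: r["flagged"])
print(flagged_count)  # 2
```

The design choice matters: because each institution answers queries behind its own firewall, a trusted ecosystem can draw on many data sources without any of them surrendering custody of their critical data.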

Irving Wladawsky-Berger worked at IBM for 37 years and has been a strategic advisor to Citigroup and to HBO. He is affiliated with MIT, NYU and Imperial College, and is a regular contributor to CIO Journal.

Who Are We and What Matters?

Google? Facebook? These two firms alone control roughly two-thirds of digital media advertising revenues.

That’s power. That’s knowledge.

But knowledge of what?

Mostly of how to program computers and deploy algorithms to sort through, organize, cluster, rank, and order vast quantities of data. In the case of Facebook, Zuckerberg obviously also understood something simple but important about how human beings might enjoy interacting online. That’s not nothing. Actually, it’s a lot. An enormous amount. But it’s not everything — or anything remotely close to what Silicon Valley’s greatest innovators think it is.

When it comes to human beings — what motivates them, how they interact socially, to what end they organize politically — figures like Page and Zuckerberg know very little. Almost nothing, in fact. And that ignorance has enormous consequences for us all.

Tech Dystopia?

Below are excerpts from a fascinating series of articles by The Guardian (with links). The articles address many of the ways that Internet 2.0 network media models such as Google, Facebook, YouTube, Instagram, etc. are transforming, and in many cases undermining, the foundations of a democratic humanistic society. These issues motivate us at tuka to design solutions to the great question of life’s meaning.

Personally, I don’t believe this dystopia will come to pass because humans are quite resilient as a species and eventually our humanist qualities will dominate our biological urges and economic imperatives. We have free will and ultimately, we choose correctly.

Perhaps that is an overly optimistic opinion, but Internet (or Web) 3.0 technology is rewriting the script with applications that reassert human control over the data universe. We will build more humanistic social communities that employ technology, with the emphasis always on the human. We see this now with the growing refusal to surrender to Web 2.0 by tech insiders.

See excerpts and comments below.

“If politics is an expression of our human will, on individual and collective levels, then the attention economy is directly undermining the assumptions that democracy rests on.” If Apple, Facebook, Google, Twitter, Instagram, and Snapchat are gradually chipping away at our ability to control our own minds, could there come a point, I ask, at which democracy no longer functions?

‘Fiction is outperforming reality’: how YouTube’s algorithm distorts truth

Paul Lewis February 2, 2018

There are 1.5 billion YouTube users in the world, which is more than the number of households that own televisions. What they watch is shaped by this algorithm, which skims and ranks billions of videos to identify 20 “up next” clips that are both relevant to a previous video and most likely, statistically speaking, to keep a person hooked on their screen.

Company insiders tell me the algorithm is the single most important engine of YouTube’s growth. In one of the few public explanations of how the formula works – an academic paper that sketches the algorithm’s deep neural networks, crunching a vast pool of data about videos and the people who watch them – YouTube engineers describe it as one of the “largest scale and most sophisticated industrial recommendation systems in existence”.
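The shape of such a ranking system can be sketched with a toy example. The real algorithm uses deep neural networks trained on vast behavioral data; the features, weights, and sample videos below are invented purely for illustration:

```python
# Toy "up next" ranker: score candidates against the video just watched,
# then return the top-k. Weights and features are made up.

def score(video, watched):
    """Combine relevance to the last video with a watch-time proxy."""
    relevance = len(set(video["tags"]) & set(watched["tags"]))
    return relevance * 2.0 + video["avg_watch_minutes"]

def up_next(candidates, watched, k=20):
    """Rank all candidates and return the top-k recommendations."""
    return sorted(candidates, key=lambda v: score(v, watched),
                  reverse=True)[:k]

watched = {"title": "moon landing", "tags": ["space", "history"]}
candidates = [
    {"title": "flat earth?", "tags": ["space", "conspiracy"],
     "avg_watch_minutes": 9.0},
    {"title": "apollo 11 docs", "tags": ["space", "history"],
     "avg_watch_minutes": 4.0},
    {"title": "cooking pasta", "tags": ["food"],
     "avg_watch_minutes": 3.0},
]
for v in up_next(candidates, watched, k=2):
    print(v["title"])  # "flat earth?" then "apollo 11 docs"
```

Note how the watch-time term dominates: the more gripping clip outranks the more relevant one, which is precisely the “fiction outperforming reality” dynamic the article describes.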

We see here the power of AI data algorithms to filter content. The Google response has been to “expand the army of human moderators.” That’s a necessary method of reasserting human judgment over the network.

The primary focus of the article then turns to politics and the electoral influences of disinformation:

Much has been written about Facebook and Twitter’s impact on politics, but in recent months academics have speculated that YouTube’s algorithms may have been instrumental in fuelling disinformation during the 2016 presidential election. “YouTube is the most overlooked story of 2016,” Zeynep Tufekci, a widely respected sociologist and technology critic, tweeted back in October. “Its search and recommender algorithms are misinformation engines.”

Apparently, the sensationalism surrounding the Trump campaign caused YouTube’s AI algorithms to push more video feeds favorable to Trump and damaging to Hillary Clinton. One doesn’t need to be a partisan to recognize this was probably true for this particular media channel and its business model, which values views above all else.

However, this reality can also be distorted to present a particular conspiracy narrative of its own:

Trump won the electoral college as a result of 80,000 votes spread across three swing states. There were more than 150 million YouTube users in the US. The videos contained in Chaslot’s database of YouTube-recommended election videos were watched, in total, more than 3bn times before the vote in November 2016.

This, unfortunately, is cherry-picking statistical inferences concerning the margin of voting support. What was significant in determining the 2016 election outcome was not 80,000 votes across three states, but a run of popular vote wins in 2,623 of 3,112 counties across the U.S. This 85% share could not be an accident, nor could it be due to the single influence of disinformation, Russian or otherwise. The true difference in the election was not revealed by the popular vote total or the Electoral College vote, but by the geographical distribution of support. One can argue about which is more critical to democratic governance, but this post is about electronic media content, not political analysis.

The next article further addresses how technology is influencing our individual behaviors.

‘Our minds can be hijacked’: the tech insiders who fear a smartphone dystopia

Paul Lewis October 6, 2017

Justin Rosenstein had tweaked his laptop’s operating system to block Reddit, banned himself from Snapchat, which he compares to heroin, and imposed limits on his use of Facebook. But even that wasn’t enough. In August, the 34-year-old tech executive took a more radical step to restrict his use of social media and other addictive technologies.

A decade after he stayed up all night coding a prototype of what was then called an “awesome” button, Rosenstein belongs to a small but growing band of Silicon Valley heretics who complain about the rise of the so-called “attention economy”: an internet shaped around the demands of an advertising economy.

The extent of this addiction is cited by research that shows people touch, swipe or tap their phone 2,617 times a day!

There is growing concern that as well as addicting users, technology is contributing toward so-called “continuous partial attention”, severely limiting people’s ability to focus, and possibly lowering IQ. One recent study showed that the mere presence of smartphones damages cognitive capacity – even when the device is turned off. “Everyone is distracted,” Rosenstein says. “All of the time.”

“The technologies we use have turned into compulsions, if not full-fledged addictions,” Eyal writes. “It’s the impulse to check a message notification. It’s the pull to visit YouTube, Facebook, or Twitter for just a few minutes, only to find yourself still tapping and scrolling an hour later.” None of this is an accident, he writes. It is all “just as their designers intended”.

Tristan Harris, a former Google employee turned vocal critic of the tech industry points out that… “All of us are jacked into this system. All of our minds can be hijacked. Our choices are not as free as we think they are.”

“I don’t know a more urgent problem than this,” Harris says. “It’s changing our democracy, and it’s changing our ability to have the conversations and relationships that we want with each other.” 

Harris believes that tech companies never deliberately set out to make their products addictive. They were responding to the incentives of an advertising economy, experimenting with techniques that might capture people’s attention, even stumbling across highly effective design by accident.

“Smartphones are useful tools,” says Loren Brichter, a product designer. “But they’re addictive. Pull-to-refresh is addictive. Twitter is addictive. These are not good things. When I was working on them, it was not something I was mature enough to think about.”

The two inventors listed on Apple’s patent for “managing notification connections and displaying icon badges” are Justin Santamaria and Chris Marcellino. A few years ago Marcellino, 33, left the Bay Area and is now in the final stages of retraining to be a neurosurgeon. He stresses he is no expert on addiction but says he has picked up enough in his medical training to know that technologies can affect the same neurological pathways as gambling and drug use. “These are the same circuits that make people seek out food, comfort, heat, sex,” he says.

“The people who run Facebook and Google are good people, whose well-intentioned strategies have led to horrific unintended consequences,” he says. “The problem is that there is nothing the companies can do to address the harm unless they abandon their current advertising models.

But how can Google and Facebook be forced to abandon the business models that have transformed them into two of the most profitable companies on the planet?

This is exactly the problem – they really can’t. Newer technology, such as distributed social networks tracked by blockchain technology, must be deployed to disrupt the dysfunctional existing technology. New business models will be designed to support this disruption. Human behavioral instincts are crucial to successful new designs that make us more human, rather than less.

James Williams does not believe talk of dystopia is far-fetched. …He says his epiphany came a few years ago when he noticed he was surrounded by technology that was inhibiting him from concentrating on the things he wanted to focus on. “It was that kind of individual, existential realization: what’s going on?” he says. “Isn’t technology supposed to be doing the complete opposite of this?”

The question we ask at tuka is: “What do people really want from technology and social interaction? Distraction or meaning? And how do they find meaning?” Our answer is self-expression through creativity, sharing it, and connecting with communities.

Williams and Harris left Google around the same time and co-founded an advocacy group, Time Well Spent, that seeks to build public momentum for a change in the way big tech companies think about design. 

“Eighty-seven percent of people wake up and go to sleep with their smartphones,” he says. The entire world now has a new prism through which to understand politics, and Williams worries the consequences are profound.

The same forces that led tech firms to hook users with design tricks, he says, also encourage those companies to depict the world in a way that makes for compulsive, irresistible viewing. “The attention economy incentivizes the design of technologies that grab our attention,” he says. “In so doing, it privileges our impulses over our intentions.”

That means privileging what is sensational over what is nuanced, appealing to emotion, anger, and outrage. The news media is increasingly working in service to tech companies, Williams adds, and must play by the rules of the attention economy to “sensationalize, bait and entertain in order to survive”.

In the wake of Donald Trump’s stunning electoral victory, many were quick to question the role of so-called “fake news” on Facebook, Russian-created Twitter bots or the data-centric targeting efforts that companies such as Cambridge Analytica used to sway voters. But Williams sees those factors as symptoms of a deeper problem.

It is not just shady or bad actors who were exploiting the internet to change public opinion. The attention economy itself is set up to promote a phenomenon like Trump, who is masterly at grabbing and retaining the attention of supporters and critics alike, often by exploiting or creating outrage.

Orwellian-style coercion is less of a threat to democracy than the more subtle power of psychological manipulation, and “man’s almost infinite appetite for distractions”.

“The dynamics of the attention economy are structurally set up to undermine the human will,” Williams says. “If politics is an expression of our human will, on individual and collective levels, then the attention economy is directly undermining the assumptions that democracy rests on.” If Apple, Facebook, Google, Twitter, Instagram, and Snapchat are gradually chipping away at our ability to control our own minds, could there come a point, I ask, at which democracy no longer functions?

Our politics will survive and democracy is only one form of governance. The bigger question is how does human civilization survive if our behavior becomes self-destructive and meaningless?

Blockchain Future?

This is a very long, comprehensive essay on bitcoin, blockchain, and distributed network economics. Published in the NY Times.

Full article

An excerpt I find particularly relevant to the tuka model:

The token architecture would give a blockchain-based identity standard an additional edge over closed standards like Facebook’s. As many critics have observed, ordinary users on social-media platforms create almost all the content without compensation, while the companies capture all the economic value from that content through advertising sales. A token-based social network would at least give early adopters a piece of the action, rewarding them for their labors in making the new platform appealing. “If someone can really figure out a version of Facebook that lets users own a piece of the network and get paid,” Dixon says, “that could be pretty compelling.”

That would be tuka.
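The token-reward idea in the excerpt can be sketched in miniature. The reward rule and numbers here are hypothetical, not a description of any actual platform (tuka included):

```python
from collections import defaultdict

# Hypothetical rule: each contribution mints a fixed number of tokens
# for its author, so early contributors accumulate a stake.
TOKENS_PER_POST = 10

balances = defaultdict(int)

def record_contribution(user: str, n_posts: int = 1):
    """Credit a user with tokens for the content they create."""
    balances[user] += n_posts * TOKENS_PER_POST

record_contribution("early_adopter", 5)
record_contribution("newcomer", 1)

# A user's stake in the network is their share of all issued tokens.
total = sum(balances.values())
stake = {user: tokens / total for user, tokens in balances.items()}
```

Under this (deliberately crude) rule, the early adopter who made the platform appealing ends up owning a piece of the network, rather than handing all of the economic value to advertisers.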


The Genius? of Silicon Valley?


Here’s a review of Franklin Foer’s new book, World Without Mind: The Existential Threat of Big Tech. What we’re seeing here is the slow breaking of the next wave of tech, from Web 2.0 to Web 3.0, where the users take back control and are treated as more than mindless sources of data. It took a long time to transition the world out of feudalism and we’re still in the very early stages of throwing off the yoke of data exploitation and tyranny. Algorithms cannot guide humanity.

The genius and stupidity of Silicon Valley

Knowledge is a tricky thing.

Acquiring and deploying it to change the world through technological innovations can inspire great confidence and self-certainty in the person who possesses the knowledge. And yet, that confidence and self-certainty are nearly always misplaced — a product of the knower presuming that his expert knowledge of one aspect of reality applies equally to others. That’s one powerful reason why myths about the place of knowledge in human life so often teach lessons about hubris and its dire social, cultural, and political consequences.

Franklin Foer’s important new book, World Without Mind: The Existential Threat of Big Tech, is best seen as a modern-day journalistic retelling of one of those old cautionary tales about human folly. Though he doesn’t describe his aim in quite this way, Foer sets out to expose the foolishness and arrogance that permeates the culture of Silicon Valley and that through its wondrous technological innovations threatens unintentionally to wreak civilizational havoc on us all.

It’s undeniable that Silicon Valley’s greatest innovators know an awful lot. Google is an incredibly powerful tool for organizing information — one to which no previous generation of human beings could have imagined having easy and free access, let alone devising from scratch, as Larry Page and Sergey Brin managed to do. The same goes for Facebook, which Mark Zuckerberg famously created in his Harvard dorm room and has become a global powerhouse in a little more than a decade, turning him into one of the world’s richest men and revolutionizing the way some two billion people around the world consume information and interact with each other.

That’s power. That’s knowledge.

But knowledge of what?

Mostly of how to program computers and deploy algorithms to sort through, organize, cluster, rank, and order vast quantities of data. In the case of Facebook, Zuckerberg obviously also understood something simple but important about how human beings might enjoy interacting online. That’s not nothing. Actually, it’s a lot. An enormous amount. But it’s not everything — or anything remotely close to what Silicon Valley’s greatest innovators think it is.

When it comes to human beings — what motivates them, how they interact socially, to what end they organize politically — figures like Page and Zuckerberg know very little. Almost nothing, in fact. And that ignorance has enormous consequences for us all.

You can see the terrible problems of this hubris in the enormously sweeping ambitions of the titans of technology. Page, for instance, seeks to achieve immortality.

Foer explains how Page absorbed ideas from countercultural guru Stewart Brand, futurist Ray Kurzweil, and others to devise a quasi-eschatological vision for Google as a laboratory for artificial intelligence that might one day make it possible for humanity to transcend human limitations altogether, eliminating scarcity, merging with machines, and finally triumphing over mortality itself. Foer traces the roots of this utopianism back to Descartes’ model of human subjectivity, which pictures a spiritual mind encased within and controlling an (in principle, separable) mechanical body. If this is an accurate representation of the mind’s relation to its bodily host, then why not seek to develop technology that would make it possible to deposit this mind, like so much software, into a much more durable and infinitely repairable and improvable computer? In the process, these devices would be transformed into what Kurzweil has dubbed “spiritual machines” that could, in principle, enable individuals to live on and preserve their identities forever.

The problem with such utopian visions and extravagant hopes is not that they will outstrip our technological prowess. For all I know, the company that almost instantly gathers and ranks information from billions of websites for roughly 40,000 searches every second will some day, perhaps soon, develop the technical capacity to transfer the content of a human mind into a computer network.

The problem with such a goal is that in succeeding it will inevitably fail. As anyone who reflects on the issue with any care, depth, and rigor comes to understand, the Cartesian vision of the mind is a fiction, a fairy tale. Our experience of being alive, of being-in-the-world, is thoroughly permeated and shaped by the sensations, needs, desires, and fears that come to us by way of our bodies, just as our opinions of right and wrong, better and worse, noble and base, and just and unjust are formed by rudimentary reflections on our own good, which is always bound up with our perception of the good of our physical bodies.

Even if it were possible to transfer our minds — our memories, the content of our thoughts — into a machine, the indelible texture of conscious human experience would be flattened beyond recognition. Without a body and its needs, desires, vulnerabilities, and fear of injury and death, we would no longer experience a world of meaning, gravity, concern, and care — for ourselves or others. Which also means that Page’s own relentless drive to innovate technologically — which may well be the single attribute that most distinguishes him as an individual — would vanish without a trace the moment he realized his goal of using technological innovations to achieve immortality.

An immortal Larry Page would no longer be Larry Page.

Zuckerberg’s very different effort to overcome human limits displays a similar obliviousness to the character of human experience, in this case political life — and it ends with a similar paradox.

Rather than simply providing Facebook’s users with a platform for socializing and sharing photos, Zuckerberg’s company has developed intricate algorithms for distributing information in each user’s “news feed,” turning it into a “personalized newspaper,” with the content (including advertisements) precisely calibrated to his or her particular interests, tastes, opinions, and commitments. The idea was to build community and bring people together through the sharing and dissemination of information. The result has been close to the opposite.

As Facebook’s algorithms have become more sophisticated, they have gotten better and better at giving users information that resembles information they have previously liked or shared with their friends. That has produced an astonishing degree of reinforcement of pre-existing habits and opinions. If you’re a liberal, you’re now likely only to see liberal opinions on Facebook. If you’re conservative, you’ll only see conservative opinions. And if you’re inclined to give credence to conspiracy theories, you’ll see plenty of those.
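The reinforcement dynamic described above can be made concrete with a toy sketch. This is not Facebook's actual algorithm (which is proprietary and far more complex); it is a minimal illustration, with invented post IDs and topic labels, of how ranking by similarity to past engagement feeds back on itself until one viewpoint dominates the feed.

```python
from collections import Counter

def rank_feed(candidates, liked_topics):
    """Rank candidate posts by overlap with topics the user already liked.

    candidates: list of (post_id, set_of_topics)
    liked_topics: Counter mapping topic -> number of past likes
    """
    def score(post):
        _, topics = post
        return sum(liked_topics[t] for t in topics)
    return sorted(candidates, key=score, reverse=True)

# Hypothetical starting profile: a user with a mild liberal lean.
liked = Counter({"liberal": 3, "sports": 1})
candidates = [
    ("a", {"liberal", "politics"}),
    ("b", {"conservative", "politics"}),
    ("c", {"sports"}),
]

# Simulate the feedback loop: the user "likes" whatever ranks first,
# which strengthens those topics for the next round of ranking.
for _ in range(3):
    top_id, top_topics = rank_feed(candidates, liked)[0]
    liked.update(top_topics)  # engagement reinforces the profile

print(liked.most_common(2))  # the "liberal" topic pulls further ahead each round
```

After just three rounds, the mildly preferred topic has been amplified well past its starting weight, while the opposing viewpoint never surfaces at the top of the feed at all — the echo-chamber effect in miniature.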

And maybe not just if you favor conspiracy theories. As we’ve learned since the 2016 election, it’s possible for outside actors (like foreign intelligence services, for example) to game the system by promoting or sponsoring fake or inflammatory stories that get disseminated and promoted among like-minded or sympathetic segments of the electorate.

Facebook may be the most effective echo chamber ever devised, precisely because there’s potentially a personalized chamber for every single person on the planet.

What began with a hope of bringing the country and the world together has in a little over a decade become one of the most potent sources of division in a deeply divided time.

And on it goes, with each company and technology platform producing its own graveyards full of unintended consequences. Facebook disseminates journalism widely but ends up promoting vacuous and sometimes politically pernicious clickbait. Google works to make information (including the content of books) freely available to all but in the process dismantles the infrastructure that was constructed to make it possible for people to write for a living. Twitter gives a megaphone to everyone who opens an account but ends up amplifying the voice of a demagogue-charlatan above everyone else, helping to propel him all the way to the White House.

Foer ends his book on an optimistic note, offering practical suggestions for pushing back against the ideological and technological influence of Silicon Valley on our lives. Most of them are worthwhile. But the lesson I took from the book is that the challenge we face may defy any simple solution. It’s a product, after all, of the age-old human temptation toward arrogance or pride — only now inflated by the magnitude of our undeniable technological achievements. How difficult it must be for our techno-visionaries to accept that they know far less than they’d like to believe.

The Death of Culture?

Designing a Sustainable Creative Ecosystem

Too Much Information = The Death of Culture?

The major creative industries of music, photography, print, and video have all been disrupted by digital technology. We know this. As Chris Anderson has argued in his book Free, the cost of digital content has been driven towards zero. How could this be a bad thing? Well, TMI (Too Much Information — in this case, Too Much Content) is the curse of the Digital Age. It means creators make no money and audiences can’t find quality content amidst all the noise.

The end result will be a staleness of content and stagnant creative markets, i.e., the slow death of culture. So, how did this happen and what do we do about it?

View the rest of the story on Medium.