TikTok and digital feudalism
TikTok, the best-known app from the Chinese unicorn ByteDance, has risen to become one of the most successful Chinese products with global reach. It is also the subject of great controversy right now, particularly after its recent ban in India for posing a threat to the country’s sovereignty and integrity, and it faces the threat of a ban in the U.S. as well.
TikTok is not your regular social network: it was created AI-first. Its success is attributed to proprietary AI and ML algorithms that deliver customised content feeds to a highly engaged, young audience who are themselves content creators. Zhang Yiming, founder and CEO of ByteDance, speaks of the power of AI as a force for good:
AI technology touches more aspects of our lives than we are even aware of, from the networked systems that control the flow of traffic through our cities and the algorithms that create our music playlists, to the systems in healthcare and education that run automated diagnostics and recommend online courses to expand our knowledge base. As AI becomes an increasingly integral part of our society, ByteDance believes that we – and our industry peers – have a duty to ensure that we understand and can anticipate the social impact of these new technologies, and manage this impact responsibly.
The “digital feudalism” part of the title is a nod to Jaron Lanier, who made a bold statement in Who Owns the Future:
The information economy that we are currently building doesn’t really embrace capitalism, but rather a new form of feudalism.
We aren’t creating enough opportunity for enough people online. The proof is simple. The wide adoption of transformative connecting technology should create a middle-class wealth boom, as happened when the Interstate Highway System gave rise to a world of new jobs in transportation and tourism, for instance, and generally widened commercial prospects. Instead we’ve seen recession, unemployment, and austerity.
Lanier, a VR pioneer, scientist, entrepreneur, writer and musician, is famous for drawing our attention to the idea that digital information is really people in disguise. Another variant of this idea is treating our data as labour.
Whether you’re a user of digital services, an entrepreneur or a policy-maker, it pays to understand what this means and why it matters.
How we got to where we are
We got here unintentionally, not as a result of some evil scheme designed by evil people. But the consequences for everyone are serious nonetheless.
The Internet as we know it has evolved through an oscillation between open/public systems and closed/proprietary ones. The first large-scale private commercial networks (e.g., airline, banking and EDI networks) established their own permanent infrastructure for exclusive use. Public information services, available to any organisation or member of the public, could not physically establish a link to each potential customer, so they had to rely on data communications supplied by the telephone and telegraph monopolies. The Internet, a collection of various protocols and technologies, emerged as the “dominant design” (among competing technologies such as OSI), fuelled by the PC revolution of the 1980s.
Early Internet pioneers of the 1980s (mostly researchers in universities and research centres) were idealist cyberpunks, who saw the Internet as a place to share free information. Jaron Lanier recalls:
We thought the world would be a better place if everyone shared as much information as possible, free from the constraints of the commercial order. It was an utterly reasonable idea.
How, then, did this reasonable idea get twisted so badly? Several pieces need to be put together.
Centralised digital platforms have captured most of the value created on the Internet from the mid-1990s (when it became available for commercial use) to now. Platforms are systems with multi-sided network effects: they connect several groups of users and greatly reduce the costs for those users to find and transact with each other. These platforms follow a predictable life cycle. When they roll out, they court users – individuals, developers, businesses and other complementors – to join. The more users and complementors come on board, the more valuable the platform becomes. As platforms move up the adoption S-curve, their power over users and third parties steadily grows.
Once platforms reach the top of the S-curve, growth becomes a zero-sum game. The easiest way for them to keep growing is to extract data from users and to compete with their complementors for audiences and profits. Historical examples include Microsoft vs. Netscape and Google vs. Yelp; even today, the iOS and Android app stores take a 30% cut and can reject apps for seemingly arbitrary reasons. Digital platforms end up as winners-take-all (or most) in their respective industries.
Source: Chris Dixon (2018) https://onezero.medium.com/why-decentralization-matters-5e3f79f7638e
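To make the life-cycle argument concrete, here is a minimal numerical sketch (my illustration with assumed parameters, not Dixon’s actual model): adoption follows a logistic S-curve, and once adoption passes an assumed saturation threshold, the cheapest way to keep growing flips from attracting users and complementors to extracting from them.

```python
import math

def adoption(t, k=0.6, t_mid=10):
    """Logistic S-curve: fraction of the addressable market on the platform at time t."""
    return 1 / (1 + math.exp(-k * (t - t_mid)))

SATURATION = 0.8  # assumed point where adding new users stops being the easiest growth lever

for t in range(0, 21, 2):
    a = adoption(t)
    phase = ("attract users and complementors" if a < SATURATION
             else "extract data and compete with complementors")
    print(f"t={t:2d}  adoption={a:4.0%}  dominant strategy: {phase}")
```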
Advertising, funded by privacy transfer and surveillance (rather than user fees or subscriptions), has become the business model of last resort for many digital giants. In that model, to quote Vitalik Buterin and Jaron Lanier:
People experience a vaguely socialist online world in which they share freely and are offered experiences, connections, and services for free, but they are living an illusion.
In the aftermath of the late-1990s boom-bust cycle, when all that seemed to matter to dot-com startups and their investors was the number of eyeballs (i.e., the size of the user base), emerging giants such as Google eventually decided to adopt the advertising model. By that time, an entire social movement had developed around the idea that online services should be free. In addition, many online services were small and niche, and hence could not justify the costs of infrastructure development. Google’s insight was that online advertising could be far more personalised than is possible in traditional advertising media such as TV: by uncovering users’ values and preferences from their search histories, Google could minimise advertising waste. Other digital giants followed Google’s model.
Another piece of the puzzle is the sheer power and influence of large computational networks. Lanier uses “Siren Servers” as a metaphor for their seductiveness – the Sirens, of course, allude to the temptations Ulysses faced on his arduous voyage. Besides the search, short-video and social-networking companies built on the advertising model (e.g., FAGA and ByteDance), Siren Servers include high-tech finance schemes such as high-frequency trading and derivatives funds, modern insurance, intelligence agencies and online stores. These networks gather data from users, often for free. The data are analysed using the most powerful available computers, run by the very best available technical people. The most precious and protected data, according to Lanier, are the statistical correlations that algorithms use but that people rarely see or understand.
Hidden in plain sight
What most of us don’t realise is that you and I are vital cogs in the digital economy. Each of us is both a producer and a supplier of data. Our labour is what powers the digital economy, and yet our role as data producers is not properly remunerated. In a high-profile class action against The Huffington Post on behalf of its uncompensated bloggers, media activist Jonathan Tasini described the plaintiffs as “modern day slaves on Arianna Huffington’s plantation”.
The data we leave behind in our online interactions are the source of the record profits of the digital giants, who exploit our lack of understanding of how data, ML and AI work. In these interactions, our labour, leisure, consumption, production and play are all intertwined.
Eric Posner and Glen Weyl explain:
AIs run on ML systems that analyze piles of human-produced data. “Programmers” do not write ingeniously self-determining algorithms. Instead, they design the interaction between workers (meaning us, the users who produce data) and machines (computational power) to produce specific information or production services. Most of the difficult work is not deriving profound algorithmic designs. Instead, it involves tweaking existing models to fit the relevant data and deliver the desired service. Programmers of ML systems are like modern factory floor managers, directing data workers to their most productive outlets.
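A toy sketch of the point (my example, not the authors’): most of the work is fitting an off-the-shelf model to data that users have produced, rather than inventing a self-determining algorithm.

```python
# Toy illustration: the "intelligence" comes from fitting a standard model to
# user-produced interaction data (hypothetical features and labels below).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend each row is one user interaction (e.g. watch time, likes, shares, ...)
# and the label is whether the user engaged with the recommended item.
X = rng.normal(size=(10_000, 5))
y = (X @ np.array([1.5, -0.7, 0.3, 0.0, 0.9]) + rng.normal(size=10_000) > 0).astype(int)

# The "programming" is mostly routine: pick an existing model and fit it to the users' data.
model = LogisticRegression().fit(X, y)
print("engagement-prediction accuracy:", round(model.score(X, y), 3))
```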
And so in addition to “standard” network effects, digital giants come to enjoy an even more powerful competitive moat, or what Matt Turck, a VC, calls “data network effects”:
Data network effects occur when your product, generally powered by machine learning, becomes smarter as it gets more data from your users. In other words: the more users use your product, the more data they contribute; the more data they contribute, the smarter your product becomes (which can mean anything from core performance improvements to predictions, recommendations, personalization, etc.); the smarter your product is, the better it serves your users and the more likely they are to come back often and contribute more data – and so on and so forth. Over time, your business becomes deeply and increasingly entrenched, as nobody can serve users as well.
TikTok is a classic example of data network effects at play. It uses computer vision and natural language processing algorithms to improve its product and user experience. The more users click, view and comment, the better TikTok becomes at recommending precisely the content they want to see. Personalised recommendations increase the time users spend on the app and hence its appeal to advertisers. It’s a positive feedback loop that has created one of the most addictive content platforms on the Internet.
Data that feeds ML actually exhibits increasing returns, as more data makes it possible to solve more complicated problems.
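A stylised simulation of such a loop, with purely illustrative numbers (this is not TikTok’s actual system), shows how the flywheel compounds: more interaction data makes recommendations better, better recommendations raise engagement, and higher engagement produces more data.

```python
import math

# Stylised data network effect: data -> quality -> engagement -> more data.
data = 1_000      # assumed initial number of logged interactions
users = 10_000    # assumed daily active users

for day in range(1, 11):
    quality = math.log10(data) / 10                        # crude proxy: quality grows with the log of data volume
    engagement_rate = min(0.9, 0.05 + quality)             # better recommendations -> more engagement
    new_interactions = int(users * engagement_rate * 20)   # ~20 potential interactions per user per day
    data += new_interactions
    print(f"day {day:2d}: quality={quality:.2f}  engagement={engagement_rate:.0%}  data={data:,}")
```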
The consequences
Let’s put this together. We, the users of large computational networks, produce and supply free content, personal information and data. These inputs are not valued fairly. The networks gather this data from us for no pay and leverage their standard and data network effects, resulting in a concentration of information and market power. In exchange for our data and privacy, we get free trinkets, free music, cheap mortgages, cheap prices and a creepy new kind of product – “computed influence” over us, brought about by the advertising business model. “Free” and cheap essentially mean that someone else decides how we live.
The distribution of power, wealth and profit in the digital economy increasingly looks like a power-law distribution (the Pareto principle on steroids) and less like a bell curve.
We are all part of this broad problem. Capitalism only works if there are enough successful people to be customers. This is why we may end up with digital feudalism: most of us sit in the long tail of the power-law distribution.
Source: https://jitha.me/power-law-working-hard-enough/
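The difference between the two shapes is easy to see with a quick sketch (illustrative parameters of my choosing): under a bell curve the top 1% hold only slightly more than 1% of the total, while under a Pareto (power-law) distribution they hold a very large share of it.

```python
import numpy as np

# Illustrative comparison: share of the total captured by the top 1% of a sample.
rng = np.random.default_rng(42)
n = 1_000_000

bell = np.abs(rng.normal(loc=100, scale=15, size=n))    # bell-curve "incomes"
power_law = (rng.pareto(a=1.16, size=n) + 1) * 10       # Pareto with a classic 80/20-style tail index

for name, sample in [("bell curve", bell), ("power law", power_law)]:
    top_1_percent_share = np.sort(sample)[-n // 100:].sum() / sample.sum()
    print(f"{name:10s}: top 1% hold {top_1_percent_share:.0%} of the total")
```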
Alternatives
What are the alternatives to digital feudalism? New social and economic models, if only we’re prepared to part with our dogmatic thinking.
Yuval Noah Harari explores two options. One is UBI (universal basic income): taxing the billionaires and corporations that control AI and robots, and using the money to provide every individual with a benefit covering basic needs. The other is universal basic services – that is, governments providing free education, healthcare and transportation. It’s open to debate whether UBI (the capitalist paradise) or universal basic services (the communist paradise) is better, and how one defines “basic” and “universal” is particularly problematic.
Another alternative is to recognise data as labour, and to treat humans as the data suppliers our society needs:
Even if AI never lives up to its hype, data as labor may offer important supplemental earning opportunities and sense of social contribution to citizens affected by rising inequality. Yet none of this will happen unless people change their attitudes toward data.
Our data thus becomes a source of future economic value. Since tech giants are unlikely to implement the change themselves, Glen Weyl and Jaron Lanier recommend an additional layer of intermediate-sized organisations to bridge the gap between digital platforms and individual users. MIDs (mediators of individual data) are community organisations that negotiate royalties and wages, promote standards, perform routine administrative and accounting duties, and so on. The idea is not terribly revolutionary: similar intermediaries – farmers’ cooperatives, mutual funds, guilds, partnerships and professional societies – have always been critical to a well-functioning society. Their calculations show that even if AI ends up as only 10% of the economy, the data supply that feeds it could deliver around $20,000 a year in income for an average American family of four.
Yet another solution is to embrace decentralised cryptonetworks. Many reasonable people believe they offer advantages (particularly incentive advantages) to content creators, users, entrepreneurs and developers. If cryptonetworks can win the hearts and minds of entrepreneurs and developers, they may be able to build better alternatives to the products currently supplied by the centralised digital giants. Many such initiatives are already underway.
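The order of magnitude is easy to check with a back-of-envelope calculation. The inputs below (GDP figure, labour share, family-of-four equivalents) are my own illustrative assumptions, not necessarily Weyl and Lanier’s exact ones, but they land in the same ballpark as the figure cited above.

```python
# Back-of-envelope check (illustrative assumptions, not Weyl and Lanier's exact inputs).
us_gdp = 21e12        # assumed US GDP, roughly $21 trillion
ai_share = 0.10       # the scenario above: AI ends up as 10% of the economy
labour_share = 2 / 3  # assumed share of AI value paid out to the people supplying the data
population = 330e6    # assumed US population
family_size = 4

families = population / family_size
income_per_family = us_gdp * ai_share * labour_share / families
print(f"≈ ${income_per_family:,.0f} per family of four per year")  # roughly $17,000 – same ballpark
```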