Art: Argument over a Card Game, Jan Steen
What do you see when you power on your doom brick for the first time in the morning?
For me, it’s first and foremost Twitter, starting with my @ replies. Then, I read my feed. Then, I scroll over to the “For You” section, which offers me a curated list of trending news based on, I’m assuming, some combination of tweets I’ve interacted with, plus my Twitter demographic profile, plus overall trending news, plus some other variables thrown in. (Actually, this would make a great interview question for Twitter if it isn’t already one: what kind of recommendation algorithm would you use to power this section?)
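I obviously have no idea what Twitter’s ranking actually looks like, but a minimal sketch of the kind of weighted blend I’d imagine answering with, where every signal name and weight is invented purely for illustration, might be something like:

```python
# Hypothetical, highly simplified "For You" scorer: blend a few personalization
# signals into one ranking score per story. All names and weights are made up.

from dataclasses import dataclass

@dataclass
class Story:
    id: str
    engagement_similarity: float  # similarity to tweets I've interacted with, 0-1
    demographic_affinity: float   # match to my demographic cluster, 0-1
    trending_score: float         # overall popularity right now, 0-1

def score(story: Story, weights=(0.5, 0.2, 0.3)) -> float:
    """Weighted blend of personalized and global signals."""
    w_sim, w_demo, w_trend = weights
    return (w_sim * story.engagement_similarity
            + w_demo * story.demographic_affinity
            + w_trend * story.trending_score)

def for_you(candidates: list[Story], k: int = 10) -> list[Story]:
    """Return the top-k candidate stories by blended score."""
    return sorted(candidates, key=score, reverse=True)[:k]
```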
Then, I read my email - lots of personal newsletters, newsletters from newspapers - then do a quick scroll through Slack work notifications, Telegram/iMessage, and, finally, Hacker News.
What’s the common thread behind all of these sources of content and information? They are all, except for Hacker News, powered by extreme personal selection, either by me opting into them based on my interests (newsletters, Telegram channels, etc.), or by algorithms (the “For You” section).
I never get exposed to any content that the social media giants, or I personally, think I am not interested in. This is the web built on the data-science, recommendation-platform notion of relevance.
I’m not really talking about anything new here. We’ve been wringing our hands over filter bubbles for years at this point. They came to prominence right around the 2016 election, but Eli Pariser, who helped popularize the term, had been discussing them for years before that. If you haven’t yet, watch his TED Talk (I know, I know) on the topic, which he gave in 2011. It’s very, very good and very scary.
But what’s really reinforced this for me is seeing the reactions to the election that just happened (fortunately, I can use the past tense now, although for me personally last week seemed to last fifteen years).
Based on my personal feed, it was clear to me that there was one specific set of facts about the election. There was obviously no way it could go any other way. But for friends and family who had picked different news sources, and as a result been opted into different personalization algorithms on their doom bricks, the set of facts they argued from was completely different from mine. To them, any news story I cited about it being over was suspect, and to me, any YouTube link they sent was completely out there. It seemed impossible for me to believe any set of ideas other than the one I understood, and vice versa.
But the even scarier part was that, of all the links and arguments they’d sent me, I’d never seen any of them come across my feed. So I was not prepared for a discussion about them. They were coming from a completely different personalized universe into mine, and there was absolutely no intersection. We could not have an intelligent conversation about it and come to a disagreement based on a shared narrative.
I’m using the election as an example, but this issue is much larger and only continues to grow. This is about every single piece of news we consume, and every piece of information. This is about all the decisions we make every day based on our mental model of the world. And the danger here is not that people consume different pieces of news, which they’ve done throughout history, but that, by virtue of both A/B testing and personalized recommendations, we have been siloed so far away from people who are intellectually different from us that it’s impossible to have a reasonable debate, because we simply don’t know each other’s sides even exist.
In the past, media sources varied, but there were only a handful of them. Everyone could read a single newspaper, be it the New York Times, The Philadelphia Inquirer, or the New York Post, and argue over the merits of the facts presented there. “This article is wrong, this writer is lousy.” “Well, I think the article is great, the writer is great, and the facts are true.”
Now, we are all getting served two different sets of facts. Or rather, variations of millions of different sets of facts. There is no single “source of truth”, no foundational set of facts or observed news from which everyone can start a conversation. Trying to argue for or against something is like trying to herd a million small fish into a single school, or to grasp at a million grains of sand scattered across a table. (Ask me, a mom dumb enough to buy her kids kinetic sand, how I know.)
We have completely lost our shared sense of what reality is.
Consider the Google homepage. Sometime before 2011, it was the same for everyone. Then Google pioneered online A/B tests, an extension of the randomized controlled trials previously used in the physical and social sciences.
Over the past decade, the power of A/B testing has become an open secret of high-stakes web development. It's now the standard (but seldom advertised) means through which Silicon Valley improves its online products. Using A/B, new ideas can be essentially focus-group tested in real time: Without being told, a fraction of users are diverted to a slightly different version of a given web page and their behavior compared against the mass of users on the standard site. If the new version proves superior—gaining more clicks, longer visits, more purchases—it will displace the original; if the new version is inferior, it's quietly phased out without most users ever seeing it. A/B allows seemingly subjective questions of design—color, layout, image selection, text—to become incontrovertible matters of data-driven social science.
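The mechanics are simple enough to sketch. Here is a toy version, assuming hash-based bucketing and a naive metric comparison; it is purely illustrative and not any real company’s experimentation stack:

```python
# Minimal A/B assignment sketch: deterministically bucket each user into
# "control" or "variant" by hashing their id, then compare a metric across arms.
# Illustrative only; real systems add logging, guardrails, and proper statistics.

import hashlib
from statistics import mean

def assign(user_id: str, experiment: str, variant_share: float = 0.05) -> str:
    """Stable assignment: the same user always lands in the same arm."""
    h = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(h, 16) % 10_000 / 10_000  # uniform-ish value in [0, 1)
    return "variant" if bucket < variant_share else "control"

def compare(clicks_by_arm: dict[str, list[int]]) -> float:
    """Naive comparison: difference in mean clicks between the two arms."""
    return mean(clicks_by_arm["variant"]) - mean(clicks_by_arm["control"])
```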
What happened is that, as Pariser says in his video, there is no single “page of truth” for Google.
Today, A/B is ubiquitous, and one of the strange consequences of that ubiquity is that the way we think about the web has become increasingly outdated. We talk about the Google homepage or the Amazon checkout screen, but it's now more accurate to say that you visited a Google homepage, an Amazon checkout screen. What percentage of Google users are getting some kind of "experimental" page or results when they initiate a search?
On top of A/B testing, Google also personalizes your search results based on your search history and geolocation. Which means that if you search for “dogs” and you’ve previously done a lot of searching and browsing for huskies, you’ll get a husky page as the first relevant result, whereas someone else might get a chihuahua page, someone else a collie page, and so on.
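At its core, the personalization layer I’m describing is a re-rank of candidate results by your own history. A toy sketch, with entirely made-up data and a made-up affinity function:

```python
# Toy personalization re-ranker: boost results that match topics the user has
# searched for before. Purely illustrative; real search ranking is far more complex.

def rerank(results: list[str], user_history: dict[str, int]) -> list[str]:
    """Re-order base results so the user's most-searched topics float to the top."""
    def affinity(result: str) -> int:
        return sum(count for topic, count in user_history.items() if topic in result.lower())
    return sorted(results, key=affinity, reverse=True)

base_results = ["Collie breed guide", "Husky training tips", "Chihuahua care 101"]
husky_fan = {"husky": 42, "sled dog": 7}
print(rerank(base_results, husky_fan))  # the husky page comes out on top
```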
You can see how this is fantastic for you, personally, but horrible for us, collectively, as a society. Because we all think that different dogs are popular. How can we start a conversation if one person is coming from “collie” and another is coming from “husky”? (Everyone should come from collie, by the way, because they are the best dogs, thanks.)
And what’s making this worse is that, now, on top of recommendations and A/B testing, there is another layer. The social media companies, under immense pressure from public opinion and, more likely, regulation, are trying to control the kind of content that’s out there. This was especially prevalent in the run-up to the election, when they had the unenviable task of deciding whether a given piece of content was destabilizing the election and whether or not to block it. But the thing with blocking content is that, as soon as you do it once, you are no longer a pipe, an ecosystem, or a platform. You’re a media company, and every single thing you block or don’t block will be under suspicion by different subsets of your users.
And what’s happening with media companies? They’re becoming tech companies. Consider this recent article about the New York Times. There is a lot of he said/she said in it, but what struck me was this:
> What the audience wants most of all, apparently, is “Opinion.” On a relative basis, the section is the paper’s most widely read: “Opinion” produces roughly 10 percent of the Times’ output while bringing in 20 percent of its page views, according to a person familiar with the numbers. (The Times turned off programmatic advertising on the Cotton op-ed after some employees objected to the paper profiting off the provocation.) Now that the paper has switched from an advertising to a subscription-focused model, employees on both the editorial and business sides of the Times said that the company’s “secret sauce,” as one of them put it, was the back-end system in place for getting casual readers to subscribe. In 2018, a group of data scientists at the Times unveiled Project Feels, a set of algorithms that could determine what emotions a given article might induce. “Hate” was associated with stories that used the words tax, corrupt, or Mr. — the initial study took place in the wake of the Me Too movement — while stories that included the words first, met, and York generally produced “happiness.” But the “Modern Love” column was only so appealing. “Hate drives readership more than any of us care to admit,” one employee on the business side told me.
By the way, if you’re concerned by/interested in Project Feels, the team working on it wrote up a pretty interesting technical explanation of how it works that goes further than the article above.
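Their write-up describes a real model trained on reader responses; the word-to-emotion idea in the quote above could be caricatured as a toy bag-of-words classifier, which is nothing like their actual system:

```python
# Drastically simplified sketch of the "words -> predicted reader emotion" idea.
# The training set below is invented; Project Feels itself is a real model trained
# on readers' self-reported reactions to articles.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

articles = [
    "Senator accused of corrupt tax dealings",
    "Mr. Smith denies corrupt payments",
    "Couple first met in York and married last spring",
    "The first snowfall in New York delighted children",
]
emotions = ["hate", "hate", "happiness", "happiness"]

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(articles, emotions)

print(model.predict(["New tax bill called corrupt by critics"]))  # likely ['hate']
```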
So news stories are now (and have been for some time) getting written based on what sells (which, no surprise, are strong emotions, usually negative ones) rather than what can be investigated and debated carefully over some amount of time.
So, where are we all on our doom bricks today? We are more connected than ever, our phones are shinier, faster, and we can do amazing things. And at the same time, we are immensely locked into our own tiny universes, and no longer coming at public discussion from a central place.
I am, personally, very hopeful that we’re coming out of the basement McDonald’s from the perspective of the country and COVID. (Things are very bleak right now but the vaccine is on the way.)
But the way I’ve seen our bifurcated information economy work in the last election, and without any abatement into this one, makes me very scared. And, in an ironic bit of self-reinforcement that makes it hard for me personally to see an end to this, it makes me scroll my doom brick even harder.
What I’m reading lately:
The Newsletter:
This newsletter’s M.O. is takes on tech news that are rooted in humanism, nuance, context, rationality, and a little fun. It goes out once or twice a week. If you like it, forward it to friends and tell them to subscribe!
The Author:
I’m a machine learning engineer. Most of my free time is spent wrangling a kindergartner and a toddler, reading, and writing bad tweets. Find out more here or follow me on Twitter.
I love the term "Doom Bricks". I'm going to start calling my mom a "Doom Bricker" instead of a boomer. She runs a business on Facebook so it's all in jest, but I always warn her to keep some distance between her and those recommendation algorithms. Come up for air once in a while, I say!
>So news stories are now (and have been for some time) getting written based on what sells (which, no surprise, are strong emotions, usually negative ones) rather than what can be investigated and debated carefully over some amount of time.
there's another aspect of this. activating emotion with polarizing coverage is not only a strategy to boost short term engagement for ad dollars, but is also a political calculation for electorates. if stories motivated by activism mobilize targeted electorates moreso than their opposition through negative polarization, you can affect democratic outcomes in your favor
this is particularly relevant for an institution like the NYT and their new millennial cohort, who do not merely have pecuniary incentives but are seeking to be associated with and ultimately to conquest an elite cultural brand for their own purposes. because they pay less than say, a FAANG, the motivated people who stay there are quite happy to be compensated with social capital and the capability to swing elections and public opinion