What Was Twitter, Anyway?
The trouble began, as it usually does, when I saw something funny on my computer. It was the middle of the morning on a Wednesday, a few years back, and I came across news that Le Creuset, the French cookware brand, had made a line of “Star Wars”-themed pots and pans. There was a roaster made to look like Han Solo frozen in carbonite ($450) and a Dutch oven with Tatooine’s twin suns on it (“Our Dutch oven promises an end result that’s anything but dry — unlike the sun-scorched lands of Tatooine”; $900). A set of mini cocottes had been decorated to resemble the lovable droid characters C-3PO, R2-D2 and BB-8.
I was also looking at Twitter that day, something that I can say for sure not only because of what happened next, but also because I look at Twitter just about every day. (This is not terribly unusual in my profession — I am an editor at The New York Times Magazine — but I think it should be stated clearly upfront that I have something of an acute problem with it.) I took a screenshot of the cocottes and uploaded it to the site. I wrote, as an accompanying caption, “The Star Wars/Le Creuset pots imply the existence of a Type of Guy I find genuinely unimaginable…” — just like that, ellipsis and all. I hit send. I guess I went back to work after that. My email records show that I sent a big edit memo to a writer. Then, around lunchtime, things started happening.
If you don’t use Twitter — which is perfectly normal; about three-quarters of Americans don’t — you should know that the platform has a function called quote-tweeting, which was introduced in 2015. It allows users to show a tweet they’ve encountered to their own followers, while adding their own text or image to comment on it. You often see people use this function to respond to some contrived prompt that crosses their feed (“What’s a great song that features an impressive horn section?”). Less often, though often enough that the practice has its own name, quote-tweets are used to roast and clown on people — to trot them out in front of a new audience, drop their pants and spank them. This is referred to as “dunking.”
At some point in the early afternoon, someone dunked on me by quote-tweeting my observation and adding, in The Onion’s headline style: “Area Man Has Never Heard of Women.” My post was now in front of a new audience, and that audience was now reading it framed by what I would consider an uncharitable interpretation of my point.
New quote-tweets started to pour in, each one putting me in front of another audience of followers, some minuscule and others quite large. “I enjoyed that this tweet manages to be sexist on multiple levels”; “#newsflash WOMEN cook and like Star Wars”; “Imagine a woman”; “Hi, have you met women?”; “Women like Star Wars. Men cook.”; “My husband is a huge Star Wars fan and is the cook in the house. He bakes too. Sorry to blow your mind.”; “i luv a good dose of homophobia and toxic masculinity in the year of our lord 2019 🙄.” My notifications flooded for the next 24 hours as the tweet continued to find its way into new corners of the site. Some people replied directly: “… are you aware that girls can like star wars too”; “Willy, get a better imagination, and cut it out with the gatekeeping”; “Men cook. Women like Star Wars. If you can’t imagine those things, that’s about you, not other people.”; “Showed my son, he’s trying to find them to order them now. Btw, he’s a Marine.” Other replies can’t be printed here.
None of these people were wrong, exactly. It was true that in the split second between learning of the pots and posting about them, I had imagined a stereotypically geeky and slovenly guy as the customer, and Le Creuset as the kind of thing you put on your wedding registry — that is indeed why I thought the products were funny. It’s not as if this was a terribly original thought; I didn’t wake up and introduce to our culture, on a random Wednesday, the idea that male nerds like to buy “Star Wars” memorabilia. Nor had these broader gender corollaries — that men don’t cook, that women don’t like “Star Wars” — so much as crossed my mind. In any event, I no longer have any trouble imagining what “Star Wars”-Le Creuset customers are like.
I was wrong on another level, too: The pots and pans were, as many Twitter users would find time to inform me, wildly popular, and are now available only on the secondary market, in some cases for multiple times their retail value. Still, wrong as I may have been, the responses I managed to provoke were stunning to me — for their volume, their woundedness, their consistency and the way the “Star Wars”-liking issue was so salient that I was called sexist for not associating cookware with women. Luckily, the sheer inanity of the topic offered a measure of safety you don’t typically get when you bring negative attention to yourself on Twitter. I could afford to take the anthropological view. I felt like Bill Paxton at the end of “Twister” — strapped in and able to see down the barrel of this thing and admire its beautiful, treacherous contours.
Twitter is both short-form and fast-moving, which together make it feel conversational. Like all conversations, it’s highly context-dependent, and like all good conversations, it’s guided by the pleasure principle. That’s what makes it fun: Who doesn’t want to be the person who can make everyone laugh at a dinner party? But Twitter also puts your dinner-party remarks in front of people who were not invited to the dinner party, showing them exactly how little you considered them before chiming in. And, of course, no one involved is having fun at a dinner party at any point in this process; everyone is, like you, probably alone, on the computer, experiencing the feeling we used to know as boredom.
Though it didn’t feel this way at the time, as I look back now, it’s clear that no one was actually upset about the “Star Wars” thing, not in any meaningful sense. A couple of people tried to draw a connection between my retrograde outlook on novelty Dutch ovens and my employer — always an alarming development — but mostly it was low-effort clowning that felt charged only because it was traveling along such high-energy vectors (sexism, homophobia, “Star Wars” fandom). The platform can coax this exact sort of response out of its users with an incredibly small amount of effort. It’s only on the receiving end, where all these messages collect in one place, that it feels oppressive.
This sort of thing is happening to dozens of people at any moment on Twitter, routinely enough that it’s more than some unfortunate externality, though not so often that you’d say it’s the point of the platform. (It, too, has a name: “getting ratioed.”) You have a few options when this happens. In theory, you can just log out and wait for it to end, but no one does that, because who knows what might happen when you’re not watching. You can go private, which basically ends it, though in a way that looks like admitting defeat. (I did this, briefly, so I could go to sleep that night.) You can delete the tweet, or even delete your whole account. But you can also do what I chose to do the next morning, which is to continue posting about it, because it’s fun, and because it really doesn’t take much effort at all. That’s basically the whole problem right there.
This all happened on and around Dec. 4, 2019. Though none of us knew it at the time, a mysterious new respiratory disease had just begun circulating in central China. This would set in motion a spectacular series of events that would make Twitter the focal point of pitched battles about freedom of speech, community health, racial justice and American democracy. At the same time, the pandemic and the federal response to it would create bizarre macroeconomic dynamics that would help one man grow his net worth tenfold in two years, transforming him from a high-profile but middle-of-the-pack billionaire into the wealthiest man in human history. For a time, anyway. It appears that Elon Musk was troubled enough by Twitter’s role in the discourse battles that he felt he should control it himself, and $44 billion later — nearly double his entire net worth at the outset of the pandemic — he has his wish.
Musk has done many things to Twitter, both the app and the business, during his six months as chief executive and owner. He has laid off more than half the staff, changed the interface and functionality of the product and aggressively pushed users to sign up for a paid subscription version of the service. He says that usage has gone up, but because he has taken the company private, we only have his word on that. According to most estimates, ad spending has plummeted. Musk himself has reportedly estimated that the company is now worth about $20 billion, a negative 55 percent return. He has, meanwhile, enlisted a small group of journalists — many of whom have taken a political journey similar to Musk’s in recent years — to sift through company emails and Slacks in an effort to reveal overreach on the part of the old regime in its management of the global conversation. They published reams of lightly redacted emails, showing regular correspondence between Twitter’s trust-and-safety team and the F.B.I. and other organs of the state, which apparently spend a considerable amount of time scrutinizing individual Twitter accounts.
Musk’s takeover of the platform has not only strained the dinner-party metaphor (a new host comes in and dominates the conversation, demanding money from you and accusing the hosts from before of being F.B.I. stooges?); it has also strained the sense of conviviality that made Twitter feel like a party in the first place. The site feels a little emptier, though certainly not dead. More like the part of the dinner party when only the serious drinkers remain. Whiskey is being poured into wineglasses, and the cheese plate has become an ashtray. It’s still a great time — indeed, it’s a little looser — but it also feels as if many of us are just avoiding the inevitable. Eventually, we’ll scrape the plates, load the dishwasher and leave the pans to soak (“Hey, cool Dutch oven — are those the twin suns of Tatooine?”). It’s possible the party will stretch on until sunrise, when the more sensible guests will return. But for now, someone just turned up the lights, and it’s probably time to ask ourselves: What exactly have we been doing here for the last decade and a half?
A number of narratives have developed over the years to explain what Twitter has been doing to us. There was, in the wake of Trump’s election, the focus on Russian “bots” and “trolls” — two words often used interchangeably, though they mean totally different things — sowing discord and amplifying divisive rhetoric. As the Trump years progressed, this evolved into a broader concern about “disinformation,” “misinformation” and whether and how Twitter should seek to stop them. And behind all this lurked vague concerns about “the algorithm,” the exotic mathematical force accused of steering hypnotized users into right-wing extremism, or imprisoning people in a cocoon of smug liberalism, or somehow both.
Those narratives all express fears about what happens when people consume information online, but they have little useful to say about how or why all that information is produced in the first place. After all, everything you read on Twitter, whether it comes from the president of the United States or your local dogcatcher, is a result of the process known as posting. And only a small proportion of users post. There is a lot of research on this topic, and it can be bracing reading for the Twitter addict. In 2021, the Pew Research Center took a close look at about 1,000 U.S.-based accounts, plucked out of a bigger survey of the site. This sample was split into two — the “most active users,” who made up just 25 percent of the group, and the rest. Statistically speaking, no one in the bottom 75 percent even posted at all: They produced a median of zero posts a month. They also checked the site far less frequently and were more likely to find it uncivil.
There’s also some data about the heavy users, and though Pew would not approve, let’s pretend, for our purposes, that it can be used to make a composite sketch of one. We’ll call him Joe Sixpost. Joe produces about 65 tweets a month, an average of two a day. Only 14 percent of his output is his own material, original stand-alone tweets posted to the timeline; half of his posts are retweets of stuff other people posted, and the remainder are quote-tweets or replies to other tweets. None of this stuff travels far. Joe has a median of 230 followers, and on average his efforts earn him 37 likes and one retweet a month. Nevertheless, it is heavy users like this — just the top quartile — who produced 97 percent of the larger group’s posts.
Let me be frank: These are pathetic numbers. Over the last 48 hours, I have made 14 posts. Five were “original” posts to the timeline. I also retweeted a writer I work with, my twin brother and Grover Norquist, and replied to tweets replying to my own. Thus, in two days, I put myself on track to make 210 posts a month. (I won’t mention the like and retweet numbers, but suffice it to say I had individual posts that absolutely rinsed Joe Sixpost’s monthly counts.) And this was a period during which I took care of my young child, did garbage duty in my building, tried to go grocery shopping but discovered I had a flat tire, walked to a different store, cooked dinner (that’s right), read, watched “Party Down,” slept, got my kid to day care, changed the flat tire and worked on this article. I didn’t even think I was on Twitter very much. But because my posts go out to so many more accounts than even an “active user” like Joe Sixpost’s do — by a factor of 100 — I’d still do more to shape reality on the platform even if I posted less frequently than he did. Which, as we’ve established, I don’t.
People afflicted with this unyielding desire to post are rare enough that we probably aren’t easily captured in studies like Pew’s. If you pick a thousand people at random, you might not find many of us, and if you do, our derangement will be smoothed out into averages and obscured by medians, blinding you to the fact that the bulk of your Twitter reading comes from a tiny minority of the population that shares this peculiar deficiency with me. When we talk about the problems created by Twitter, we focus on what happens when people read the wrong sort of post, like disinformation from a malign actor. If we consider the posting side of things at all, it is to lament the excesses of cancel culture — typically from the receiving end. But if we really want to understand what Twitter has done to us, surely it would make more sense to account for the millions and millions of more ordinary posts the platform generates by design. Why has a small sliver of humanity taken it upon themselves to heap their thoughts into this hopper every day?
Part of answering this question involves realizing that a tweet isn’t just a matter of one person speaking and others listening. Kevin Munger, an assistant professor of political science and social data analytics at Penn State — he also happens to be an acquaintance of mine — thinks of this confusion as the overhang of the “broadcast paradigm” in an era when it is no longer relevant. Many people conceive of tweets as analogous to TV or newspaper or radio — that “there are people who tweet, there are people who read the tweets,” as Munger puts it. “And the tweet is just text, right, and it’s static.”
But there is no such separation between creator and consumer, and that’s not what a tweet is. “If you look at a tweet, it’s always already encoding audience feedback,” Munger points out. Right beneath the text of the tweet is information about what the network thinks of it: the numbers of replies, retweets and likes. “You can’t actually conceive of a tweet except as a synthetic object, which contains both the original message and the audience feedback,” he explains. In fact, a tweet contains layers of information beyond that: not just how many people liked it or replied, but who, and what they said, and how they present themselves, and whom they follow, and who follows them, and so on. Every post contains within it a unique core sample of the network and its makeup. And whether they admit it or not, Munger says, all of this helps users build mental models of the platform.
Munger is highly pessimistic about our ability to use Twitter to debate or deliberate anything of importance. Instead, he suggests, we use the site as a “vibes-detection machine” — a means of discovering subtle shifts in sentiment within our local orbits; a way to suss out, in an almost postrational way, which ideas, symbols and beliefs pair with one another. (If this sounds fanciful to you, ask a heavy Twitter user what set of political commitments is signified by using a Greek statue as an avatar.) But it’s hard to detect vibes unless you put a signal out there first; there’s no way to grasp the thing from outside looking in. “In order to understand how it works,” Munger says, “you have to act on it and allow it to act on you.” You have to post.
Photo illustration by Jamie Chung. Concept by Pablo Delcan.
Nick Bilton’s 2013 book, “Hatching Twitter,” was disorienting reading for me, because it took me back to a place I thought I knew well: San Francisco, 2006. I was in college at the time, but I grew up in the city and went back for all my breaks. The summer he founded Twitter, Jack Dorsey was hanging out in the Mission and working South of Market. So was I. We both had recently learned how to send text messages and enjoyed visiting Dolores Park. The difference between us was that Dorsey was about to take a central role in the industry that would remake our city and convulse the entire planet in the bargain, and I was mostly just hanging out with my friends.
Back then, the social internet was a more naïve and hopeful place. Just look at Dorsey, whose Flickr account from the era is still up and public. You can see all sorts of relics from Dorsey’s prebillionaire social life in and around the city: trips to Coachella and Point Reyes, arty photographs of street signs. And in the mix, you can find screenshots of early Twttr, as it was known. The logo is green, bubbly and sweaty; it looks like a new flavor of SoBe. The very first layout looks nearly identical to Craigslist. “What’s your status?” it asks at the top, and below you can see Dorsey’s colleagues responding. “Preparing a pizza,” writes Florian Weber, one of the project’s first engineers. “having some coffee,” offers Biz Stone, another founder. “so excited about new odeo ideas,” writes Evan Williams, whose start-up Odeo employed Dorsey and was helping develop this new concept that would swallow it whole.
Dorsey had nurtured the basic idea of Twitter for years — a site that would be like AOL Instant Messenger’s “away message” for anywhere, or “a more ‘live’ LiveJournal,” as he put it in a post on Flickr. He wanted to call it Status, and it was important to him that the service be principally social. In his book, Bilton recounts how Dorsey initially considered and dismissed using audio as a medium because it would be impossible to use at a nightclub. That was, in Dorsey’s mind, a key use case. But Williams, who created Blogger and sold it to Google for millions, came to see something else in Twitter: To him, its potential lay in its ability to create a running record of what was going on in the outside world. The book recounts a somewhat absurd, but revealing, philosophical argument between the two founders. If one of them were to see a fire on Market and Third, in downtown San Francisco, and tweet about it, would he be tweeting that there was a fire on Market and Third? Or would he be tweeting that he was witnessing a fire on Market and Third? Dorsey was insistent that it was the latter: “You’re talking about your status as you look at the fire.”
To Dorsey, the fact that Twitter creates a record of the world would be an incidental byproduct of all this status-sharing. But as time went on, and more people joined, the Williams view came to look prophetic. It would be vindicated on a January afternoon in 2009, when an Airbus A320 taking off from LaGuardia collided with a flock of geese over the Bronx, losing power in both engines and forcing the quick-thinking pilot to ditch the plane in the Hudson. A businessman named Janis Krums was on a ferry to New Jersey when the boat’s captain announced that a plane was down in the water, and they were going to see if they could help. Krums figured it was a small single-engine craft, and was stunned when they pulled up to a commercial airliner. He had an iPhone, and he took a picture of the plane in the icy water, with passengers crowding onto life rafts. He posted it to Twitter with a brief caption. Krums handed the phone to one of the rescued passengers, who wanted to call his loved ones, and forgot about it amid the rescue efforts. By the time he and his phone were reunited, about 30 minutes later, it had exploded with messages and missed calls from news agencies. “The tweet had gone around the world,” he told me. “And I had no idea.” The biggest story of the day had been broken by some random guy with a smartphone. Reporters called it so many times that they drained Krums’s battery within an hour. He was finally able to make it back to Jersey by nightfall, at which point he was being interviewed on morning radio in Australia.
Later that year, Williams, having ousted Dorsey to become Twitter’s chief executive, would change the site’s prompt from “What are you doing?” to “What’s happening?” as it remains to this day. But if that seems like a clean victory for Williams, it wasn’t quite. Because what Krums wrote was exactly what Dorsey had imagined; it was about not just the plane but also the fact that he, Krums, was looking at it. “There’s a plane in the Hudson,” he wrote. “I’m on the ferry going to pick up the people. Crazy.”
Twitter could never be just about the outside world or about our internal ones; it would always have to be both. Dorsey and Williams were correct to identify this as a conflict, even if they could not design or engineer it away. These two repellent magnets were fused together and left under the platform’s floorboards. More and more people joined, hoping to learn what was happening in the world and to share what was happening in theirs. Eventually, the situation that obtained was altogether stranger than Williams or Dorsey could have imagined.
Twitter took off first with geeks in San Francisco, and then with people in the tech-media-music orbit at South by Southwest in 2007. From there, it continued to annex populations prone to graphomania (reporters, rappers, academics) and those that just had more things to say than opportunities to say them (comedians, editors, TV writers, lawyers). Twitter quickly figured out that its value lay in its ability to surface conversations: What was the world talking about? In 2008, it began plumbing its depths to identify trends. These were the early days of the Big Data era, and the idea was that within all the chatter could be found some hidden rhythm, a form of crowd wisdom. It wasn’t long before people got the idea that they could harness Twitter’s firehose of information to do things like trade stocks — one hedge fund, started in 2011, promised 15 to 20 percent returns based on its algorithmic ability to divine market movements. It shuttered after a month.
Twitter’s takeover of the media class was rapid. In April 2009, Maureen Dowd interviewed Williams and Stone, telling them that she “would rather be tied up to stakes in the Kalahari Desert, have honey poured over me and red ants eat out my eyes than open a Twitter account”; she signed up three months later to promote her column. Later that spring, a Time cover story noted that Twitter users had begun using the site as a “pointing device” and sharing longer-form content. (“It’s just as easy to use Twitter to spread the word about a brilliant 10,000-word New Yorker article as it is to spread the word about your Lucky Charms habit.”) This would make it an incredible way to keep up on the news — and absolutely irresistible to journalists. By the next year, the Times media reporter David Carr was writing an ode to the site, correctly predicting it was more than a fad and lauding it for both its relative civility and its “obvious utility” for information-gathering. “If all kinds of people are pointing at the same thing at the same instant,” he wrote, “it must be a pretty big deal.”
I am told by my superiors here at The Times that there was a time when journalists would talk about what they’d been reading at the bar, or at cocktail parties. One of these people told me, and I don’t think he was kidding, that an article of his went viral by fax machine. I’ll have to take his word for it, because I’ve never known a life in journalism free from the gravitational pull of Twitter. In fact, I probably owe my career to it. In 2011, I wrote an essay for a website called The Awl, and the very thing that Carr described happened: The article, which was about the McRib, went viral on Twitter, putting my work in front of editors at places like The Times. A few months earlier, I was at the precipice of giving up on writing; within about a year, I would be regularly freelancing. After a while, I had a full-time job as an editor.
There was, around this time, an enormous expansion in web media, with BuzzFeed, Vice and others pouring truckloads of venture capital into the field. And though Twitter never drove much traffic, it was nevertheless important for journalists to be there, because everyone else was there; this was where your articles would be read and digested by your peers and betters (as well as, theoretically, the reading public). It was doubly important because of how precarious these new jobs were. Your Twitter profile was also your calling card, potentially a life raft to a new job. The platform was an extremely fraught sort of LinkedIn, one you would use to publicly waste company time.
Looking back, it’s hard not to see this as a tragic bargain. Twitter took the wild world of blogging and corralled the whole thing, offering writers a deal they couldn’t refuse: Instant, constant access to an enormous audience, without necessarily needing to write more than 140 characters. But they would never again be as alone with their thoughts, even when they were off the platform. Twitter follows you, mentally, and besides, anything can be brought back there for judgment. Perhaps worst of all, they would be gently cowed into talking about whatever it was everyone else was talking about, or risk being ignored, and replaced by someone who would.
But this journalistic swarming instinct made Twitter an ideal place for activists to get a message out. If there is one good thing that can be said about Twitter, it’s that it really was democratizing: It allowed the previously voiceless to walk right up to the powerful and put stuff right in front of their faces, at any time of day. The Green Revolution in Iran, the Tahrir Square protests and Occupy Wall Street — all of these made use of Twitter in creative ways. Two of the biggest social movements of the last decade are often rendered as one word with a hashtag attached to it. The real action of Black Lives Matter may have taken place in the streets, and the long-delayed consequences of Me Too delivered in boardrooms or courtrooms, but these movements could not have grown as they did without the ability to corral and excite latent political energies via social platforms.
Really, Twitter was good for getting any sort of message out there. Governors and senators, Shaquille O’Neal and Sears; Mahmoud Ahmadinejad, the American Enterprise Institute and Chrissy Teigen; the Dalai Lama, Rachel Maddow and the guy who does “Dilbert” — all could use the same exact tools to be heard, and to hear, at all hours of the day. For some, it was their job to get a message out; for others, an ancillary goal; for others still, a reluctant undertaking done in the name of relevance. In any event, the barrier between work and goofing around grew dangerously thin, especially as more influential people and entities arrived.
Because as soon as Twitter began bringing all these people together, it became an irresistible target. Twitter was an exceptional tool, above all else, for making jokes. Some groups elevated it to an art, profoundly transforming the folkways and language of the platform — “Black Twitter” chief among them. There was also “Weird Twitter,” an unfortunate label that refers as much to a specific group of people as to the sensibility they shared. What Weird Twitter posters had in common, beyond being (mostly) funny, was a special brain damage that granted them access to the hidden frequencies of the internet.
In 2010, a young Canadian named Stefan Heck joined Twitter in search of Vancouver Canucks news but soon fell in with what would become the Weird Twitter crowd. Lots of corporations had come to Twitter to offer quick customer service, and Heck and his friends enjoyed messing with them. (Like tweeting at PetSmart: “if my turtle stops moving after i smoke it out its just sleeping right?”) One hashtag that often trended in those days was #tcot, the “Top Conservatives on Twitter,” and Heck and his friends often found their way there in search of a good time. Heck recalls it being full of “you know, 70-year-old guys, like, retired boat salesmen and dentists.” He can’t remember for sure, but he believes this is where they eventually found the 1980s TV star Scott Baio, who was and remains a conservative culture warrior.
Unlike other celebrities on the platform, Baio would actually respond to people. “He felt like a real guy who posted,” Heck says. “He was in it for the love of the game.” In 2011, when Heck and friends started asking him if he was an adult-diaper fetishist, Baio snapped, blocking everyone who asked him about diapers and tweeting to complain about it. Heck and others started posting “#RIPScottBaio,” and apparently did so with enough volume that it became a trending topic, persuading some untold number of people that the actor had died. Someone reportedly edited Wikipedia to certify his death from “diaper-related illness.” By the next day, NBC’s “Today” show was debunking the claim on its website.
To Heck, the Baio episode showed how small and wide-open the site was — how it could be gamed. (The incident was brought to my attention when I asked Mike Caulfield, a research scientist at the University of Washington’s Center for an Informed Public, if he could think of any watershed moments in Twitter history; he thought it was interesting for more or less the same reasons.) A small conspiracy could capture the platform’s homuncular version of reality and tickle it until it shouted nonsense. Indeed, Twitter’s own insistence that it could connect the whole world and surface the most engaging conversations amounted to an enormous “KICK ME” sign on its back. It had grown from a place where people shared what they were having for lunch to one that was either changing the world or purely self-contained, a pearl of heightened reactions accreting around a tiny grain of provocation. No one was ever really sure which.
But if you were good at the game, it could be good for you, both on Twitter and off. People got commissions and book deals — not many, but enough. Some people lost their jobs — not many, but enough. A couple of people got TV shows out of it. Once, someone told a story so wild it was turned into a feature film. Hell, one guy even went and got himself elected president.
The election of Donald Trump made Twitter an extremely fraught environment. Did you hate the way the media reported on him? They were all there to tweet at about it. Did you blame everything that was happening on people slightly to your left? Slightly to your right? A random podcaster? Someone you didn’t know existed until five seconds ago? They were there, too. And, of course, so was the president. Some of his opponents suspected his election might be the fault of the platform itself. This idea gave us a solid six years of discourse on Russian bots and trolls and disinformation, though none of this, according to a recent study in Nature, had any meaningful effect on voters’ 2016 decision-making. In all the bickering, it was easy to lose track of what was keeping us on Twitter in the first place.
One compelling theory comes from Chris Bail, a sociology professor at Duke, who began studying Twitter in the years when these debates were raging. Bail was especially curious about the “filter bubble,” the idea that social media platforms encircle users with opinions they share, causing them to be less amenable to arguments from the other side. Bail had read research showing that social media has actually given people a more diverse information diet. “Even convincing people that that’s true is really hard,” he told me, because there is an enormous apparatus of talking heads telling them otherwise.
So Bail and his colleagues designed an experiment to test the filter bubble: They exposed partisan Twitter users to a bot that would retweet counterpartisan speech 24 times a day, for a month, and interviewed participants before and after. In the end, they showed that the reality was stranger than the theory: The more attention respondents paid to the bots, the more entrenched they became in their beliefs. This effect was especially pronounced among conservatives. Bail even saw some participants yelling at the experiment’s bots. “This happened so often that three of the most extreme conservatives in our study began following each other,” Bail writes in his book “Breaking the Social Media Prism.” “The trio teamed up to attack many of the messages our liberal bot retweeted for an entire week, often pushing each other to make increasingly extreme criticism as time passed.”
Bail argues that Twitter is a “prism” that bends both the depiction of reality you see through it and your own efforts to show who you are to the world. The platform, Bail writes, taps into the human desire to “present different versions of ourselves, observe what other people think of them and revise our identities accordingly.” People like to think of social media as a mirror, he told me: “I can see what’s going on, and I can see my place in what’s going on.” But Twitter is not a random sampling of reality. Almost all the feedback you receive on the site comes from its most active users. “And the most active social media users,” Bail says, “are a weird group of people.” Somehow this fact doesn’t override our desire to fit in, which is then pointed in strange directions: “We see this distorted reality,” Bail says, “we understand it as reality, and we react accordingly.” As we all do this, together, we create feedback loops that further warp the projection of reality. (You could see this dynamic especially clearly at the height of the pandemic, when Twitter’s feed was some people’s primary porthole to the outside world.)
One thing Kevin Munger pointed out to me is that Twitter users are running Bail’s experiment on one another constantly. Pervasive quote-tweet dunking, for example, is often used to highlight the most galling ideas coming from one’s political foes, feeding users outrageous caricatures of the other side. There are also numerous accounts — Libs of TikTok most notorious among them — that exist for this sole purpose: to drag speech out of its intended context and into another gamified discourse, across the partisan divide, to make people mad. Bail ran his experiment for only a month; imagine doing this for about a decade.
Bail told me that before he settled on the prism, he considered sonar as his central metaphor, because of the way Twitter allows users to send out a message and see what bounces back. This is a helpful way of thinking about Trump, whose Twitter habit was largely seen as a sideshow, a means of circumventing the press or just evidence of his terrible impulse control. It was all those things, of course. But this is also the man who discovered, lurking within the rot of the two-party system, a strange new shape in the electorate. Should we regard it as pure coincidence that he spent all those years on Twitter, with an enormous following and the sonar capabilities of an Ohio-class submarine? Even Trump’s campaign rallies and governing style had this highly provisional, posting-like rhythm to them: He tried things out, saw what worked and pocketed those moves. Is it so hard to believe that the image-obsessed salesman, up in his gilded cockpit in the vibes-detection machine, was learning something about what people wanted to hear?
We could ask similar questions about Musk, whose increased exposure to the site has coincided with his transformation from beloved entrepreneur to substantially less beloved culture warrior. One of Bail’s chief observations about Twitter is that its prismatic qualities generate a strong effect on users: Its feedback makes very clear who your friends and enemies are. This can act as a sort of centrifugal force, pushing people deeper into the belief structures of their “team,” and pushing moderates out of the conversation entirely. We can’t know exactly why Musk seems to have become so engaged with culture-war topics, but Bail’s ideas suggest one explanation: Through the prism, he saw the most disingenuous arguments from both sides over the most contentious issues of the day, his own behavior very much included. And one side welcomed him while the other rejected him.
Now that Musk owns the site, he has repeatedly stated that his goal is to bring back “free speech,” and he has tweeted several times about the “woke mind virus” that he believes threatens civilization. It seems he thinks it might live within his new plaything, and can be dislodged if he turns it upside-down and shakes it just right. But it’s not clear he knows where it is: Was it in the staff? He has laid off most of them now; many others have left of their own volition. Was it in their content-moderation team? He has treated Twitter’s San Francisco offices like Stasi HQ, revealing the inner workings of the previous regime. Is it in the algorithm or the UX? He has changed all that too, and continues to tinker with them, seemingly based on passing whims and grudges — or sometimes inscrutable urges. He added more metrics to every tweet, briefly changed the site’s logo to a shiba inu and obscured the “W” on the sign that hangs from the company’s Market Street headquarters. (Musk did not respond to a request for comment; Twitter’s press email autoreplied, as it apparently does to all incoming messages, “💩.”)
The net effect of all of this has been a buggy site — and one that feels less alive. Not just because so many influential people have departed but also because Musk broke the spell. You can no longer believe that this platform offers an unobstructed view to the outside world, if you ever did, now that his hands have so thoroughly smudged up the glass.
It’s hard to look back on nearly a decade and a half of posting without feeling something like regret. Not regret that I’ve harmed my reputation with countless people who don’t know me, and some who do — though there is that. Not regret that I’ve experienced all the psychic damage described herein — though there is that too. And not even regret that I could have been doing something more productive with my time — of course there’s that, but whatever. What’s disconcerting is how easy it was to pass all the hours this way. The world just sort of falls away when you’re looking at the feed. For all the time I spent, I didn’t even really put that much into it.
There is a famous thought experiment in thermodynamics called Maxwell’s Demon, named for the Scottish physicist James Clerk Maxwell. Musk certainly knows it; he’s a big admirer of Maxwell’s. (He once tweeted “Maxwell was incredible,” but that was right around the time a cricketer named Glenn Maxwell did something impressive in an Indian Premier League match, so he just ended up confusing much of South Asia.) Maxwell proposed a means of circumventing the second law of thermodynamics, which basically states that in a closed system, disorder will increase naturally unless energy is used to stop it; heat will always dissipate into cold. What if, Maxwell asked, you had a box split in two by a wall, and a tiny being sitting atop the wall, operating a little door, and this being was clever enough to track individual molecules and know how fast they were moving? If he let only faster-moving molecules go from Chamber A to Chamber B, and only slower-moving molecules pass the other way, then, without any new energy being introduced, Chamber B would become very hot.
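For anyone who prefers to see the gears turn, here is a minimal sketch of the demon at work: a toy simulation with invented speeds and an arbitrary cutoff, nothing Maxwell himself ever specified, just the sorting logic of the thought experiment. Let the fast molecules through in one direction and the slow ones in the other, and one chamber warms without a single unit of new energy being added.

```python
# A toy simulation of Maxwell's Demon (illustrative only; the molecule
# speeds, counts and cutoff below are invented for this sketch).
import random

random.seed(0)

# Two chambers start out identical: same number of molecules, same
# average speed, and therefore the same "temperature."
chamber_a = [abs(random.gauss(1.0, 0.3)) for _ in range(1000)]
chamber_b = [abs(random.gauss(1.0, 0.3)) for _ in range(1000)]

CUTOFF = 1.0  # the demon's dividing line between "fast" and "slow"

def temperature(chamber):
    # Average kinetic energy (speed squared) as a stand-in for temperature.
    return sum(v * v for v in chamber) / len(chamber)

print("before:", round(temperature(chamber_a), 2), round(temperature(chamber_b), 2))

# The demon watches the door and sorts: fast molecules may pass A -> B,
# slow molecules may pass B -> A. No energy is added at any point.
for _ in range(5000):
    fast = [v for v in chamber_a if v > CUTOFF]
    if fast:
        v = random.choice(fast)
        chamber_a.remove(v)
        chamber_b.append(v)
    slow = [v for v in chamber_b if v <= CUTOFF]
    if slow:
        v = random.choice(slow)
        chamber_b.remove(v)
        chamber_a.append(v)

# Chamber B is now noticeably hotter than Chamber A.
print("after: ", round(temperature(chamber_a), 2), round(temperature(chamber_b), 2))
```

Run it and the two numbers drift apart. The point, for our purposes, is only that sorting alone, letting some things through and not others, produces a dramatically lopsided result without anyone expending much effort at all.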
This is basically a thought experiment about information overcoming the limits of the physical world, so it naturally found fans in the world of computing. The “mailer-daemon” that returns bounced emails to your inbox, for example, is one of many background processes that takes its name from Maxwell’s concept. Dorsey was enamored of the idea; he had a tattoo that read “0daemon!?” and once wrote a poem about a “jak daemon,” a cyberpunk hacker type who manipulates “the background process in small ways to drive various aspects of the world.”
I thought about Maxwell’s Demon as I reconsidered the “Star Wars”-Le Creuset thing, and how clear it was that no one involved had even been especially angry. It’s in episodes like this that Twitter manages to violate the discursive law that, until quite recently, prevented random Australians from yelling at you when you’re trying to go to bed. In the real world, you can go 30-some years without ever encountering the sensitivities of the “Star Wars” cookware community. But Twitter can, if you tell it just the right thing, shoot every last one of them at you through a little door, creating a pocket of extreme heat without anyone having meant to do much at all. This is perhaps the central paradox of Twitter: It can produce enormous outcomes without meaningful inputs.
I happen to know about Maxwell’s Demon only because it makes an appearance in Thomas Pynchon’s “The Crying of Lot 49,” a 1966 novella centered on a clandestine communications network that is used by a baffling array of people (anarcho-syndicalists, tech geeks, assorted perverts and cranks) and seems particularly popular in San Francisco. Instead of mailboxes, it operates through a system of containers disguised to look like trash cans; the only one of these the protagonist finds is somewhere South of Market, just blocks from where Twitter would be born. It’s a book I read 20 years ago. If I’d come to it more recently, I doubt the mention of Maxwell would have stuck in my mind, thanks to either normal aging or some irreversible damage I’ve done to my brain by staring at Twitter.
But I’m glad I remembered it, because what I read when I pulled my copy down off the shelf was the best way of thinking about Twitter I’ve encountered. In the novella, an East Bay inventor named John Nefastis has designed a box, complete with two pistons attached to a crankshaft and a flywheel, that he claims contains the molecule-sorting demon. It can be used to provide unlimited free energy, but it doesn’t work unless there is someone sitting outside, looking at it. There was, Nefastis believed, a certain type of person, a “sensitive,” capable of communicating with the demon within as it gathered its data on the billions of particles inside the box — positions, vectors, levels of excitement. The sensitive could process all that information, telling the demon which piston to fire. Together, the demon and the sensitive would move the molecules to and fro, creating a perpetual-motion machine. The box was a closed system, separate from the outside world, but it could nevertheless do work on anything it was connected to.
Pynchon’s protagonist tries, and fails, to operate the Nefastis Machine. But when I open Twitter, I see a lot of people who can talk to that demon; who can process, intuitively, the positions and attitudes of unimaginable numbers of others; who know just what to tell the demon to make things move; who are happy, or close enough, spending hours sitting with the box, watching the pistons pump. Activists, politicians, journalists, comedians, snack-food brands and Stephen King — all have taken their turn at the box. Union organizers, venture capitalists, grad students and amateur historians — they could make the flywheel turn. No one even has to do much of anything to make it move. But none of us have the power to stop it, either. And at some point — back before we really knew what we were doing — we hooked those pistons up all over the place.
And though it seems unlikely that Twitter itself will disappear, the powerful mechanism it became over the years — the one that made an often unprofitable company so valuable in the first place; the one that allowed a collectively conjured illusion to transform the real world — seems to be sputtering and squealing, and all the noise is making it hard to communicate with the demon within. The platform could continue to operate in some form, even as the mechanism slowly rusts or eventually grinds to a halt. If that happens, the world would feel exactly the same — and utterly transformed. And I, and others, and maybe you, too, would have to contend with what we’d really been doing the whole time: staring into a box, hoping to see it move.
Prop stylist: Ariana Salvato.
Willy Staley is a story editor for the magazine. He has written about the effort to count the country’s billionaires, the TV show “The Sopranos,” the writer and director Mike Judge and the professional skateboarder Tyshawn Jones. Jamie Chung is a photographer who has worked on nearly a dozen covers for the magazine. He won awards this year from American Photography and the Society of Publication Designers. Pablo Delcan is a designer and art director from Spain who is now based in Callicoon, N.Y. His work blends traditional and modern techniques across mediums like illustration, print design and animation.