With Tech Addiction, We Are Often Our Own Pushermen

Reading Time: 7 minutes

There is something different about social media–something that distinguishes it from the habit-forming products that came before it. It’s social.

(big surprise, I know)

When we reach for that cigarette, or that pint of Ben & Jerry’s, the urge that motivates us–along with its (at least temporary) satisfaction–is centered within the individual. But the urge we feel to refresh our news feed and see what our friends are doing without us on a Friday night is all about our need to feel desired, accepted, and loved by others.

I don’t mean to oversimplify here; the desire for Ben & Jerry’s can be rooted in a void someone feels due to a lack of acceptance or love from others. My point is that we’ve never before seen a compulsive product that so directly engages with our relational insecurities. It’s often easy for society to write off those who suffer from addiction as a sub-population with poor self-control, or with an inherent predisposition toward dependency. But when it comes to our needs as social animals, I think we’re all quite susceptible to temptation.

Because our social needs are so core to our nature, we’re better off just admitting to ourselves: we love this new technology. We love the ability to see what our friends are doing without us. We love to read the digital tea leaves left behind by our crushes. We love to keep track of our social status in relation to others–because that’s what we were doing long before social media came along. The only difference is that now we have a bright neon electronic scoreboard to conveniently do what we were doing before in our heads.

I’m not saying any of this is healthy, but I think this is an apple that can’t be unbitten. And I would argue few of us would prefer to go back to the way things were before.

The problem with the current atmosphere of tech backlash is that it obscures the degree to which we ourselves are responsible for our undesirable present-day equilibrium. In turn, this fosters unrealistic expectations about the extent to which top-down solutions can address the problem. In the end, I think a more honest reckoning with our own role would better focus efforts to improve the status quo.

The Race to the Bottom of the Brainstem

“The race to the bottom of the brainstem” is a phrase coined by Tristan Harris, former Google Design Ethicist and co-founder of the Center for Humane Technology. It describes the incentive of tech companies to exploit our paleolithically wired brains to get us to use their platforms as much as possible. I’m actually a big fan of the CHT’s work; before Harris began ringing the alarm, the balance of the attention economy was way out of whack–tilted too far towards monetization and maximization of screen time. I laud Harris’s work to raise awareness and create pressure on tech companies to reform their ways.

Harris believes that all the problems stemming from technology–from vanity-fication to political polarization and election manipulation–are rooted in human weaknesses. While I agree with him on that point, I think it’s important to recognize that not all weaknesses are created equal. It’s one thing to tweak the parameters of a recommendation algorithm so that it’s not feeding me more and more extreme content all in the name of maximizing engagement. But it’s another thing to stop me from giving in to FOMO and seeking out pictures from the party I wasn’t invited to last weekend. One is about what the platform pushes upon us, while the other is about what we pull out for ourselves.
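To make the “push” side concrete, here is a toy sketch of what tweaking those parameters could look like: a ranking function that scores candidate posts by engagement, with an adjustable penalty on a hypothetical extremeness score. Everything here–the field names, the scores, the penalty–is my own illustration, not any platform’s actual ranking code.

```python
def rank_score(engagement: float, extremeness: float,
               penalty: float = 0.0) -> float:
    """Score a candidate post for the feed.

    With penalty=0 the feed is pure engagement-maximization;
    raising the penalty is the kind of parameter tweak that keeps
    the algorithm from drifting toward ever-more extreme content.
    """
    return engagement - penalty * extremeness

# Two hypothetical candidate posts for one slot in the feed.
posts = [
    {"id": "mild",    "engagement": 0.60, "extremeness": 0.10},
    {"id": "extreme", "engagement": 0.75, "extremeness": 0.90},
]

for penalty in (0.0, 0.5):
    ranked = sorted(posts,
                    key=lambda p: rank_score(p["engagement"],
                                             p["extremeness"],
                                             penalty),
                    reverse=True)
    print(penalty, [p["id"] for p in ranked])
# penalty=0.0 puts "extreme" first; penalty=0.5 demotes it.
```

The point of the sketch is that the push problem lives in a dial the platform controls. There is no equivalent dial for the pull of my own FOMO.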

I understand why Harris wants to lump all of these together; it creates a common language and an easier call to action. But it also creates an atmosphere of unrealistic expectations about what kinds of solutions are even feasible for tackling the problem. As a result, we end up venturing down some unproductive rabbit holes…

The Example of Bullying

In the spring of 2019, Instagram invited a bunch of journalists to their headquarters to showcase their anti-bullying efforts. The program was so nascent that it puzzled me why they would even want to trot it out so publicly before journalists. Perhaps they suffered from the endemic Silicon Valley illness of believing there’s a tech fix for everything–or maybe they were so terrified of regulators descending upon their industry that they were trying to get ahead of it with some PR full of lip service. Or maybe a bit of both.

Either way, I don’t think online bullying is going anywhere. Bullying is like a constantly mutating social virus. New forms of it will always be emerging, so even if an artificial intelligence algorithm gets good at stamping out bullying today, that doesn’t mean it will still be good at it a year from now. In practice, anti-bullying is a content moderation problem: AI algorithms flag questionable content for human review, and these reviewers make the final call about what stays and what goes. Yet the very examples the executives brought up in that spring presentation make it clear: this is a Sisyphean task.

Bullies might take a shot at their victim’s weight by tagging them in a picture of a whale. So does an algorithm flag all pictures of whales (or cows or pigs or what have you) and send them along to the human moderators for review? That idea becomes even more ridiculous when you consider the conditions that prevail at some of the content moderation contractors employed by Facebook.
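For the sake of illustration, here is a minimal sketch of that flag-then-review loop, with a deliberately naive classifier standing in for the real model; none of this is Instagram’s actual system. The toy keyword rule is precisely the kind of signal a bully mutates past, which is the Sisyphean part.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ReviewQueue:
    """Holds items awaiting a human moderator's final call."""
    items: list = field(default_factory=list)

    def submit(self, post_id: str, score: float) -> None:
        self.items.append((post_id, score))

def moderate(post_id: str, text: str,
             classifier: Callable[[str], float],
             queue: ReviewQueue,
             threshold: float = 0.8) -> str:
    """Route a post: auto-approve, or flag it for human review.

    The classifier returns a probability that the content is
    bullying; anything above the threshold goes to a human, who
    makes the final keep-or-remove decision.
    """
    score = classifier(text)
    if score >= threshold:
        queue.submit(post_id, score)
        return "flagged_for_review"
    return "approved"

# Toy classifier: a real system would use a trained model, and
# bullies would adapt to whatever signals it learns.
def toy_classifier(text: str) -> float:
    return 0.9 if "whale" in text.lower() else 0.1

queue = ReviewQueue()
print(moderate("post-1", "Tagged you in this whale pic", toy_classifier, queue))
print(moderate("post-2", "Happy birthday!", toy_classifier, queue))
```

Even in this toy version, every flag still lands in a human queue–and every new slang term, meme, or animal picture means retraining and re-reviewing, forever.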

These are the technical difficulties involved in attempting to address just the “push” problem of bullying. But what about the “pull” problem, i.e. the many ways in which we bully ourselves? The New York Times’ coverage of that presentation also mentions Instagram’s pilot program to hide like counts from public view (while still keeping a user’s likes visible to them privately). Will that stop users from keeping track of their own social status? I don’t think so. If you can’t compare yourself to others, you’ll just compare yourself to yourself–your past self, that is. I find myself doing this, since I don’t even bother contending with people punching in a higher Instagram weight class. Instead I get that precious IV drip of social feedback by comparing the performance of my different posts over time.

Get Smarter than SMART

The only way to eliminate these problems is to throw out the platforms entirely, but nobody’s advocating for that (and neither am I–at least not with any kind of top-down solution). Instead, the policy options actually on the table are sending us down yet more unproductive rabbit holes.

I’m talking specifically about the Social Media Addiction Reduction Technology (SMART) Act–introduced by Senator Josh Hawley earlier this year–which includes prohibitions on features like autoplay, infinite scroll, and engagement badges. What is the point of wading so deep into the weeds of an app when it won’t be long before technology leaps forward to create new, ever-more engaging features? By the time augmented reality arrives in non-Google Glass form and replaces our phones entirely, I’m sure tech companies will have found new ways to engage us that will be outside the scope of the law (and probably even more addictive). It’s futile to try to keep up with technology through this kind of legislation–to say nothing of its deep-seated paternalism. But even more fundamentally, the law does nothing to address the “pull” problem of these platforms because that problem is located in us–not the technology.

I’m not saying we should throw out all top-down solutions. I’m just saying we need to be smarter about them. For example, Russian meddling in U.S. elections is a serious, tech-related problem–but it is one where the platforms can make a meaningful difference. Unlike in the case of bullying, we can actually measure the thing we want to minimize. The fact that fake news spreads faster than real news can be used against the people trying to spread disinformation. I am happy to see that there are smart people working on this. Even if it won’t entirely eliminate the problem, it stands a chance against Russia’s rapidly evolving tactics ahead of the 2020 election.
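As a sketch of what “measuring the thing we want to minimize” might look like, consider a tripwire that flags posts whose share velocity far outpaces a baseline. The numbers, names, and threshold here are my own hypothetical illustration, not any platform’s actual detector.

```python
from dataclasses import dataclass

@dataclass
class ShareStats:
    shares: int        # total shares observed so far
    age_hours: float   # time since the post went live

def spread_velocity(stats: ShareStats) -> float:
    """Shares per hour: a crude proxy for how fast content spreads."""
    return stats.shares / max(stats.age_hours, 0.1)

def flag_for_review(stats: ShareStats,
                    baseline_velocity: float,
                    multiplier: float = 5.0) -> bool:
    """Flag posts spreading several times faster than typical content.

    Since false news has been found to travel faster than true news,
    unusual velocity is one measurable signal worth a human look;
    it is a tripwire, not a verdict.
    """
    return spread_velocity(stats) > multiplier * baseline_velocity

# Example: a post with 4,000 shares in 2 hours, against a baseline
# of 50 shares/hour for comparable accounts.
print(flag_for_review(ShareStats(shares=4000, age_hours=2.0),
                      baseline_velocity=50.0))  # True -> send to review
```

A velocity tripwire like this doesn’t decide what’s true; it just narrows where the fact-checkers look. That’s exactly the kind of measurable lever the bullying problem lacks.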

The problem with lumping all tech-related problems together is that we tend to grow frustrated when tech companies fail to solve problems they have no realistic hope of solving. Holding them responsible for solving everything only makes us angrier when they fail, hardening us to the idea that there are cases where we should really be looking at ourselves. I don’t say this out of pity for a poor, defenseless tech industry (they’re anything but). I say it in the hope of focusing on solutions that work.

“We’re not freebasing Facebook. We’re not injecting Instagram”

I think there’s a better approach, but its most vocal champion seems to find himself the object of scorn these days. I’m talking about Nir Eyal, the technologist best known for writing 2013’s Hooked, the book widely credited as the Silicon Valley bible for creating habit-forming apps. This year he released Indistractable: How to Control Your Attention and Choose Your Life. With this latest turn, many are accusing Eyal of being a hypocrite.

In the NYT review of Indistractable, child psychologist Richard Freed is quoted as saying, “Nir Eyal’s trying to flip. These people who’ve done this are all trying to come back selling the cure. But they’re the ones who’ve been selling the drugs in the first place.”

At first glance it might look like there’s a contradiction. Eyal wants to have his cake and eat it too when it comes to human nature. While Hooked embodies a model of the human mind that can be acted upon and influenced as an object, Indistractable argues that we users are subjects with rational agency–not “puppets on a string.” So which is it? The question is, of course, an overly simplistic false dichotomy. The answer is that we’re both; it all depends on whether we’re thinking fast or slow, in the Kahneman-esque sense.

I don’t agree with Nir Eyal on everything. For example, I think Ezra Klein, on his podcast, successfully pushes back on Eyal’s aversion to the word “addiction.” For some, using the language and framework of addiction can help them get a grip on their tech usage by admitting they have a problem that’s not entirely under their control. I haven’t read Indistractable (full disclosure), but in principle I’m behind Eyal’s instinct to start looking for solutions at the individual level. “The technology is the proximate cause, not the root cause,” he argues. Kids, for example, might be “overusing technology as an escape. But we don’t ask ourselves, what are they escaping from?”

It says something about the current atmosphere of tech backlash that the New York Times brushes the ideas in Eyal’s book aside as “hard and sort of annoying advice.” Maybe he’s not the person people want to hear from at the moment, but that doesn’t mean what he has to say should be dismissed out of hand.

(photo credit: Marc Schaefer)

Published on November 4, 2019