Kicking Facebook

It’s time to break the habit, reclaim control, and rebuild the social fabric — here’s why and (a tiny bit about) how.

It’s New Year’s resolution season, and here’s mine: this year I’m leaving Facebook, and I want to take as many of you with me as possible.

To be honest, kicking Facebook is long overdue for me. It stopped being fun years ago. It became a habit, and one I struggle to control. I imagine it’s about the same for you. Everyone knows that Facebook is addictive and that nearly two billion of us are hooked.

Like companies that sell tobacco and gambling, Facebook recognizes that a lot of the people who use it do so compulsively, and it publicly acknowledges that this may be a problem. In general, companies that sell addictive products try to put the onus on consumers to control their own impulses, telling us to gamble, drink, and smoke responsibly.

But, as with gambling, tobacco, and other addictive products and experiences, we are not the problem here. Facebook isn’t addictive because we’re weak-willed — it’s addictive because Facebook is carefully designed to exploit our natural human tendencies and to override our self-control, so that we will spend more time using its apps and websites. Facebook’s founding president, Sean Parker, has told us this, publicly, in recent months.

As a company, Facebook has made a long series of decisions about what features and functions to build, and a lot of those have served to make the site and apps more “sticky.” With every decision to increase “user engagement,” Facebook has taken away some of our autonomy and reduced our freedom to choose when and how much to use the platform. And with every moment of our time it captures and every bit of our self-control it overpowers, Facebook gains more money, power, and influence.

Facebook treats us a lot like tobacco companies have treated smokers — with exploitation, manipulation, disrespect, and disregard for our well-being. It’s an abusive relationship that has to end.

It wasn’t always this way…

I joined Facebook nearly 11 years ago, in January 2007. The fact that people I knew were on Facebook was probably what initially drew me to the platform. This was the era of Friendster (where I had a profile) and MySpace (where I did not). Connecting and interacting online was getting easier and more common.

In those heady days of Web 2.0, it was totally natural to add a bunch of personal trivia to my profile — bands I liked, movies I’d watched, sports I loved, you name it. Why wouldn’t I do that? It was like having a personal website, a blog, a messaging platform, and a photo sharing service all wrapped into one simple package. It was free, and I didn’t need to write a line of code to do it. Facebook made it easy to build a digital identity for my friends to see, and I had no idea how the company might use that information beyond displaying it to the people I chose to connect with.

This was the key to Facebook going viral — it was a service that made it absurdly easy to get online and see what other people were doing (online). In a way, we’ve all got a bit of voyeur-exhibitionist in us, and Facebook lowered the barriers to entry to almost nothing — all the way down to 1) access to an internet-connected device, 2) the age of 13, and 3) a valid email address.

Facebook also made it easy to find people I knew, by mining my email contact lists, and to meet new friends, by recommending friends of friends that I should connect with whether I knew them offline or not. I remember some of the original profile questions being about my relationship status (“it’s complicated”, anyone?), and what kind of relationships I was looking for. Looking back now, FB was one pivot away from becoming an online dating service.

I was a bit naive about security back then — I’m probably still woefully underinformed — and my profile was fully public at first. My one privacy measure was to make my profile name “Scott M” so that it would be harder for my students to find and connect with me. (I said I was naive, didn’t I?) Probably pretty early on, I changed my privacy settings to “friends of friends” and — I think — then down to “friends-only” for practically everything. I still managed to accumulate 400+ friends, and some of them are actual friends and family members.

I still love seeing people’s photos, especially now that old friends are reproducing and creating adorable little versions of themselves. But over time, my FB timeline changed to be more about news and politics. That suits me; I’m, let’s say, politically involved.

Most of the news and politics in my Facebook feed are the digital equivalent of preaching to the choir. Of course they are — we’re more likely to spend time engaging with things we agree with, except when we spend time arguing ourselves blue in the face (or the fingers) with people we disagree with.

The fact that most of what I see on Facebook comes from inside my own filter bubble probably says a lot about me — it might tell me that I spend the most time interacting with things I agree with. Ironically, in my case, the things I agree with are sometimes news articles railing against all the things I intensely disagree with in the world.

Facebook doesn’t care if you love or hate something, as long as you spend time engaging with it. Whether you’re rage-ranting with some [insert politically-opposite position here] idiot or loving it up with your besties, if you’re spending time, you’re going to be given more of it to spend time with. That’s by design.

(In light of an announcement today that Facebook is changing its newsfeed to prioritise things like “meaningful interactions between people,” I think this argument still holds. Facebook is essentially content agnostic — it does not really care what you’re interacting with others about, as long as you’re interacting on Facebook, and they will drive those behaviours with design and technology as much as possible. This change is similar to tobacco companies adding filters or selling “light” or menthol cigarettes. It’s nice marketing, and seems to be for the good of the users, but just as tobacco companies still bank on nicotine addiction, Facebook will continue to use variable reward and other techniques to exploit human psychology for its own benefit.)

How did we get here? — a speculative history of the moral corruption of Facebook

When I first joined Facebook 11 years ago, it was fun. It provided a whole bunch of useful services for free, and it was ridiculously easy to use. In terms of technology, that’s kind of the Holy Grail for the people who use it.

But free for users does not mean free for Facebook to provide — building software, storing data, and hosting services all cost money. Sooner or later, Facebook as a company had to make choices about how to make money. The choices the company made thoroughly corrupted it and turned the platform into something quite different from what it was.

Early on, FB attracted plenty of venture capital to keep the lights on and the perks flowing. Of course, investors don’t put their money in for shits and giggles, or out of the goodness of their hearts (even if they’re sometimes called “angels”). Eventually, investors want a return, either as a share of profit or by selling their stake (or a portion of it) for real money, preferably (a lot) more than they put in.

That sort of arrangement drives a company to seek growth, so that its shares will be more valuable in the future. If it doesn’t grow, there’s a good chance it will be sued by investors, and the company’s founders or executives blackballed and banished from startup-land.

(It apparently took Facebook about 5 years to become profitable, and about 8 years to make its IPO and become a publicly-traded company.)

For a company like Facebook, which provides a platform for people to interact with each other online, there are a couple of options to make the business profitable.

One option is to make (some or all) users pay for the service. The obvious downside here is that fewer people will use a paid service than a free service, either because they don’t have the money to pay for it or because there are free alternatives available. That limits growth pretty quickly, unless the company can find a way to become impossible to live without.

The other option is to find another way to generate revenue, so that users don’t have to pay directly with their own money. The obvious choice here is digital advertising.

Digital advertising has infested the internet since its early days, and a lot of companies — especially media publishers like the news company I work for — have chosen to use advertising revenue to support their operations. Those early decisions to tap into advertising money have had a huge impact on the internet and on a lot of the businesses that use it.

It’s not a huge stretch to say that reliance on digital advertising has been one of the main causes of distress in the news industry over the past few years, and it’s a problem a lot of media companies are desperately trying to fix. Print media, for example, continues to struggle with how (and why) to measure its audiences as a result of the advertising industry’s conventions and demands… But I digress.

Facebook’s decision to choose advertiser money over user money has probably been the single most corrupting factor in its history. It has had huge implications for how the company has tried (and succeeded, massively) to grow, and it is likely the source of a lot of the unethical, exploitative crap that the company seems to do.

It’s no secret that even though there are nearly 2 billion active users on Facebook, the real customers are companies and advertising agencies who want to get their ads in front of us. The service Facebook provides is a platform for connecting individuals with one another, and connecting advertisers to those billions of individuals.

The product Facebook sells is human attention. The main way for Facebook to grow as a business is to capture more human attention, and to make that attention worth more so it can be sold for higher prices.

Capturing more human attention on a social media platform requires at least one of two things: getting more people to join and use the platform, and/or getting users to spend more and more of their time on the platform.

With absolutely zero insider information, I can confidently assume that every decision Facebook makes about what new features to include — from push notifications and email alerts, to the replacement of user walls with a scrolling news feed, to Facebook memories, to games, to groups and buy/sell marketplaces, to check-ins and recommended things to do and see — is measured in terms of how much additional attention it captures for the company to sell on to paying customers.

Whatever benefits we get from Facebook as users — social interaction with people we know, like, and love (or hate), pictures of far away family and friends, news and information — the real purpose of Facebook is not to provide those benefits to us, but to use those benefits to steal more of our attention and sell it on to the highest bidder. Every bit of Facebook that any of us use is tainted by the company’s hunger for more attention.

How does Facebook drive engagement, capture attention, and collect data?

Facebook — and, to be fair, lots of other companies — uses a number of techniques to trick you into behaviours that are good for them, and neutral or negative for you. (If you’re looking for an excellent, in-depth article about this, featuring former tech insiders who have become “Silicon Valley heretics” in refusing to even use the products they helped to build, this piece by Paul Lewis is solid gold.)

Variable reward — why the news feed is addictive

One of the most common techniques relies on a principle called “variable reward.” It’s the cornerstone of principles taught by Nir Eyal, a prominent “behavioural designer,” entrepreneur, and angel investor. Here’s how he explains the value of using variable reward in digital design:

“As B.F. Skinner discovered over 50 years ago, variable rewards are a powerful inducement to creating compulsions. Today, technology companies are creating new habits by continuously cycling users through the Hook Model — and variable rewards fuel the chain reaction. Understanding what moves us to action allows us to build products that are aligned with users’ interests and gain greater control of our own technology-induced behaviors.”

Variable Rewards: Want to Hook Your Users? Drive Them Crazy, by Nir Eyal

In Facebook, the most obvious example of this is in the news feed on your phone, which is designed much like a slot machine (also designed to be addictive). You can’t see what’s coming next, but you think it might be something really good, because you’ve seen something really good in the past. So you pull down to refresh or flick your thumb to scroll and wait (sometimes for an eternity of seconds) for the next dopamine hit.
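
To make the slot-machine comparison concrete, here is a minimal sketch in Python of a variable-reward schedule. It is an illustration of the principle, with a made-up reward probability, not Facebook’s actual code:

```python
import random

# A toy model of a variable-reward feed. This illustrates the principle,
# not Facebook's actual ranking code.
REWARD_PROBABILITY = 0.3  # made-up chance that a refresh surfaces something great

def refresh_feed() -> str:
    """Simulate one pull-to-refresh with an unpredictable payoff."""
    if random.random() < REWARD_PROBABILITY:
        return "something great (dopamine hit)"
    return "nothing much (maybe pull again?)"

# A fixed schedule would be easy to walk away from; a variable one is not,
# because the next pull might always be the good one.
for pull in range(10):
    print(f"refresh {pull + 1}: {refresh_feed()}")
```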

Notifications work in a similar way. Every time you open that app, you can bet there’s a little red badge with a number in it, telling you there’s something new to look at. You keep coming back, looking for some new stuff. You keep scrolling, and scrolling, and scrolling, and scrolling until you’ve missed your bus stop and have to backtrack a few kilometers to get home. (Full disclosure, this happened to me a few weeks ago!)

This is not your choice. You are being manipulated, by design, to keep you scrolling through the newsfeed, whether it’s good for you or not.

Why? The simple answer is that the more you scroll, the more ads you will be shown. I did a quick experiment just now: I counted the first 100 things I saw in the newsfeed to see how many were posts from real people or organisations I follow, and how many were ads. The result: 86 real posts, 14 ads. That means roughly 1 in 7 things in my feed was paid for by someone trying to reach me — or people like me. Imagine if every seventh interaction you had with someone in real life was interrupted by an ad.

Social validation — why sharing and reacting are addictive

Facebook has also honed its ability to prey on our narcissism and need for social validation. As a social species, we have an inherent need to be liked and accepted as members of a group. This works through a feedback mechanism — we do or say something, other people react to it, we interpret the reactions to learn if what we’ve said is acceptable and makes us look good in the eyes of others. It generally happens at a subconscious level, and it leads to all kinds of things like compliance, conformity, and internalization.

What Facebook has done is make it incredibly easy to communicate behaviours and responses across time and distance. In the process, Facebook has made itself the broker of social validation, which draws more people into sharing more of their lives on the platform. The mechanism here is the like button and reactions, and it adds social validation to the principle of variable reward.

Think of a time — any time — you’ve posted something on Facebook. Maybe it was a photo of your new haircut, maybe it was an article you read and agreed with. You’ll probably recall the slight, subtle feeling of anxiety, the rush of anticipation, and the wave of relief as the first reactions come in. Maybe that was followed by a hint of disappointment when a few minutes or hours go by and not as many people reacted as you expected (and hoped).

When you post something on Facebook you don’t know whether or how many responses you’re going to get — variable reward is in play here too. But you do know, even if it’s subconscious, that people’s reactions (or lack of reactions) will tell you whether what you’re doing is worthwhile or whether you’re wasting your time in life, whether people like you as a human being or whether no one cares about you so you might as well just… It works in the same way for Instagram, as Adam Alter explains in this video.

The uncertainty and social validation keep us coming back to Facebook to post and react to more and more content. It feels good, we crave it, it’s instantaneous and easy. But it’s junk. We get the dopamine hit, but we don’t get the satisfaction signals that help us with impulse control and allow us to moderate our own behaviour. Designing digital solutions for that problem is not in the interests of Facebook, because it would limit your engagement and their supply of attention to sell.

Encouraged/forced connection — building networks to collect your data

In addition to manipulative design that uses psychological vulnerabilities to keep us hooked, Facebook uses the power of technology to pull us deeper into its ever-growing web. Over the years, the company has either built new services or bought other companies that have built them, to get more complete access to our digital lives.

One example of this is sharing and uploading contacts between digital accounts. In the early days, you used your email address to create and verify a Facebook account, and it asked you to share your contact list. The helpful service would then find people you knew on Facebook and suggest you connect with them or send them invitations to join if they weren’t already there.

The Facebook app seamlessly integrates with the contacts app on your iPhone, if you’ve given it permission. If you haven’t given it permission, every time you add a new contact to your phone you’ll be asked to change your settings and allow contact sharing.

The design trick here is to make integration the default setting, and to pester users to restore the function if they turn it off. It’s a piece of software; it will never get tired of making the same request — but you might wear down and give in.

For Facebook users, contact integration is a handy feature — it’s really easy to find people you know on Facebook if Facebook already knows who you know in real life, and obviously you’ll want to be Facebook friends with anyone in your phone contacts list, because, well, it’s Facebook and practically everyone in the world is there.

The benefit here for Facebook is that it grows its user base and likely increases the number of connections in its network. The more people Facebook can connect you with, the more opportunities they’ll have to keep you interested and coming back to Facebook. It also gives Facebook ways to integrate with other datasets out there in the world, in order to identify and track you around the web and in the real world.

Massive databases of people’s identities and contact information are just one of the benefits of Facebook building other apps like Messenger, buying Instagram and WhatsApp, and integrating with Twitter. It also works through “social logins” — using your Facebook account to create and log in to other accounts around the internet. When you integrate apps and share contacts, you are implicitly giving Facebook (and other companies) permission to watch you do the things you do online. This is ethically dubious at best, and a massive violation of your privacy and autonomy at worst.
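
For a concrete sense of what a social login hands over, here is a sketch of the authorization request a third-party site builds when you click “Log in with Facebook.” The values are placeholders and the details vary by site, but the general OAuth 2.0 shape is this:

```python
from urllib.parse import urlencode

# A sketch of the OAuth 2.0 handshake behind "Log in with Facebook".
# All values here are placeholders; the real endpoint is versioned and
# the scopes a site requests will vary.
params = {
    "client_id": "THIRD_PARTY_APP_ID",           # hypothetical app identifier
    "redirect_uri": "https://example.com/auth",  # where you land after approval
    "response_type": "code",
    "scope": "public_profile,email",             # the profile data being requested
}

login_url = "https://www.facebook.com/dialog/oauth?" + urlencode(params)
print(login_url)
# Approving this grant also tells Facebook which site you logged into, and when.
```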

Where in the world is… location tracking

One of the wonders of modern technology is that with a simple, ubiquitous handheld device, we can navigate the world and broadcast our location to anyone. Facebook uses location technology to make suggestions about what to do where you are, and it lets us “check in” to show off the great things we’re doing with our time. You get the reward of some social validation when people react with approval and envy, and maybe even a discount on your drinks or meal from the owner of the business where you’ve checked in.

On a recent trip to Tasmania, I noticed that Facebook helpfully offered to show me where my friends had checked in on their trips to Tasmania. All I had done was be tagged by my partner in her check-in at a cafe in Hobart, and Facebook knew where I was and what information to show me. While this is fun and maybe even useful, it’s also completely creepy, not least because of what else Facebook is doing with all the data they take from us.

What is Facebook doing with all the data and technology?

Facebook has been incredibly successful at hooking a huge proportion of the human population and capturing unprecedented amounts of our attention. This has made it one of the biggest, most successful companies of all time, with absurd revenues derived almost completely from advertising sales (nearly $30 billion in 2016 and growing).

Looking at recent numbers, in the USA and Canada, Facebook brings in nearly $20 per user per quarter by selling advertisers our attention and using our personal data to do it.

The value Facebook provides to advertisers is not just the number of people it can reach. The real value is in the ability to use the incredible amount of data the company collects on every one of its users to carefully target ads to the most receptive audiences.

This has some benefits for everyone — it’s basically a waste of everyone’s time and money to show ads for pregnancy tests to people who can’t get pregnant, so it works for everyone if Facebook can make sure that only women likely to be considering or trying to conceive are shown ads for home pregnancy tests. If you’re a woman of a certain age, you may have noticed pregnancy test ads following you around the internet, in part because Facebook (and Google) tracks your behaviours and serves you ads all over the place.

This power can also be used for real harm. As recently as November 2017, Facebook continued to allow advertisers to purchase targeted ads that exclude ethnic or racial groups. ProPublica ran an experiment and investigation in which they were able to break US federal anti-discrimination laws by targeting housing ads to exclude “groups like African-Americans, Jews, and Spanish speakers.” Not only is this illegal, but Facebook was made aware of the problem and hadn’t done anything to change it after a full year.

Similarly, “bad actors” can pay to use Facebook’s data and targeting services to carefully distribute practically any kind of information they want to pretty much any group of Facebook users they want.

This is the power and the technology behind the “election meddling” that likely contributed to Donald Trump becoming President of the United States, to British voters choosing to leave the European Union, and which is likely to be used again in the future to disrupt and influence democratic processes wherever it is convenient for the people who want to do it. This is all laid out in an excellent, very long story by Roger McNamee, an early Facebook investor and former mentor to Mark Zuckerberg.

Facebook offers more than just targeted advertising. Combining big data, analytics, and location services, it also offers advertisers the ability to measure how many people visited their stores or bought something from them after seeing an ad for their company’s products. This works both online, by tracking people’s behaviour in and outside of Facebook, and in real life.

Facebook offers advertisers tools to tailor advertising to people with specific characteristics and to direct them to the closest brick-and-mortar store to buy the product shown in the ad, which they hope will increase the number of people who buy something after being prompted by an ad.

If that’s not enough, Facebook also uses location services to measure how many people actually visit a store after seeing an ad, to prove that the ads are effective and make them worth more money to advertisers.

Not enough for you yet? How about the “offline conversions API” that “allows businesses to match transaction data from their customer database or point-of-sale system to adverts reporting, helping them better understand the effectiveness of their adverts in real time.”

Yes, that means that Facebook will use the data it has about you, including your physical location at specific times, and match that with a company’s records of what was bought, when, by whom, to tell them how much return on their advertising they have made.
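
Conceptually, that matching can work without either side handing over raw customer lists: both parties hash a shared identifier and join on the hashes. Here is a minimal sketch of the general technique, with hypothetical data rather than Facebook’s actual API:

```python
import hashlib

# A conceptual sketch of offline conversion matching, using hypothetical data.
# This shows the general technique, not Facebook's actual API: both sides hash
# a shared identifier (such as an email address) so records can be joined
# without passing it around in the clear.

def normalise_and_hash(email: str) -> str:
    """Normalise an identifier and hash it with SHA-256."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# A merchant's point-of-sale records (hypothetical)...
transactions = [
    {"email": "alice@example.com", "amount": 42.00},
    {"email": "bob@example.com", "amount": 13.50},
]

# ...and the hashed identifiers of users who were shown a particular ad.
users_shown_ad = {normalise_and_hash("alice@example.com")}

# Join the two: which purchases came from people who saw the ad?
attributed = [t for t in transactions
              if normalise_and_hash(t["email"]) in users_shown_ad]
print(f"Attributed revenue: ${sum(t['amount'] for t in attributed):.2f}")
```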

And the future looks to be getting creepier. Late last year, reports surfaced that Facebook had filed a patent for facial recognition technology that may allow in-store salespeople to access data about the individuals walking through their doors, and “gauge customer’s emotions and brand choice by leveraging their Facebook profiles through crowd-scanning technology.” All this, ostensibly, in order to provide a hyper-personalised experience.

We all have different tolerances for creepiness, and some people may be more comfortable than I am with advertising, privacy violations, personal security risks, and the prospect of fictitiously familiar sales-people knowing things about me before I’ve opened my mouth. But this is a dark future we’re heading towards.

This is dystopia. Facebook’s dopamine engine is pleasuring us into total surveillance and shattering our autonomy and self-control.

I’m not suggesting that we’re heading for a world where Facebook is the Matrix or Skynet (well, not yet). But we are already living in a world where Facebook and other massive tech companies have insinuated themselves into the fabric of our society in countless ways. Not only do they manage the flows of information between people in minute detail, they are extracting the data from those interactions to insert and target advertising throughout our lives, and to shape our behaviours without our consent or knowledge.

Imagine a world where your telephone conversation about what movie to see with your friends is interrupted by telemarketers with a sales pitch for the latest summer blockbuster. Or a world where your dinner is paused, fork between plate and mouth, to serve you an ad for a dessert you don’t have in the house at the moment but can buy at the supermarket down the road. That’s where we’re heading, because Facebook is well on its way to becoming the internet itself, capturing as many moments of our attention as possible to turn into revenue from advertising sales.

The more time you spend and the more data you let Facebook take from you, the less control you will have over your decisions about what to buy, what to do, who to spend your time with, and who to vote for. It needs to stop.

What can we do?

With all the toxic stuff Facebook is doing to us as individuals and communities, it’s important to ask, well, what can we do about it? We need to start by getting off Facebook — as individuals and as communities.

Ultimately, we can reclaim our autonomy by quitting Facebook. It works best if we all do it, but even leaving one at a time is a start, if we help others out too. Last one out can shut off the lights.

Getting off Facebook is not as difficult as they make it seem. Facebook will use persuasion to try to get you to stay, but there’s no need to listen. You may want to go cold turkey, or you may want to wean yourself off it a little at a time.

The Time Well Spent (now the Center for Humane Technology) movement provides some general tips for taking back control of your life from invasive digital technology. (Nir Eyal, addiction consultant to the tech scene, has also published a book, Indistractable, about how to regain control of your attention.) Applying those tips to unfriending Facebook, here are some actions to take.

First things first: immediately turn off any notifications or alerts you get from Facebook. Don’t let them badger you into spending your time there. Shut off the emails they send, turn off as many of the in-app notifications as possible so you don’t see as many when you do use Facebook, and absolutely cut out push notifications altogether.

Even without notifications, the pull of the app and site will still be strong, so delete the app from your devices, and make sure to log out of the website. Do not save the password in your browser or password manager. Make it as slow as possible to get into Facebook, so that you will have time to pause and think about whether you really want to spend your time on Facebook right now.
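
If you want more than willpower, one optional, technically-minded trick (my own suggestion, not one of the Time Well Spent tips) is to add friction at the operating-system level. A minimal sketch, assuming a Unix-like system:

```python
# An optional extra: block Facebook's main domains in the hosts file so that
# reaching the site requires a deliberate step to undo. This is my own
# suggestion, sketched for a Unix-like system; it must be run with root
# privileges, and you reverse it by deleting the lines it adds.

HOSTS_FILE = "/etc/hosts"
BLOCKED_DOMAINS = ["facebook.com", "www.facebook.com", "m.facebook.com"]

with open(HOSTS_FILE, "a") as hosts:
    hosts.write("\n# Added to make Facebook slower to reach\n")
    for domain in BLOCKED_DOMAINS:
        # 0.0.0.0 points the domain at nowhere, so the page simply won't load
        hosts.write(f"0.0.0.0 {domain}\n")

print(f"Done. Edit {HOSTS_FILE} to undo.")
```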

When you’re ready, delete your account, and try to delete as much of the data they hold as possible. If you’re worried you’ll miss looking back at the things you’ve done, you can always download a partial record of your Facebook existence.

Change all the social logins you have and create direct accounts with any online services you use — you’ll have to do this anyway once you’ve deleted your account, so you might as well get started now.

And then what?

Once you’ve successfully disconnected yourself from Facebook, spend some time reconnecting with the people close to you. Not just with family and close friends, but also with the people in your neighbourhood and at work. Get to know them. Build relationships, build trust, build interconnection. Rebuild the social fabric. Do things together. Make use of public space. All this and more is critical for solving the countless social, political, and ecological problems we face today.

Editorial Note

I wrote this extremely long... essay... after working in a legacy media company for a couple of years, and before some of the most egregious failures of Facebook (and other social media and new-tech companies) were confirmed. I was influenced somewhat by the Time Well Spent (now the Center for Humane Technology) movement of repentant techbros. Excellent investigative journalism - for example by Carole Cadwalladr on Cambridge Analytica and Facebook's role in the Brexit referendum - and continued abuses and denials of responsibility from FB itself have shown that this company was (and still is, as of late 2020) even worse than I thought.

If you made it this far down the page, thanks for reading.

As with every post on this blog, consider this an invitation to join in thinking together, and maybe doing together. Why not get in touch with me on the Fediverse to share your thoughts?

Creative Commons

All posts on this blog are by Scott Matter and licensed under CC-BY-NC-SA 4.0