
I’m a former CIA cyber-operations officer who studies bot traffic. Here’s why it’s plausible that more than 80% of Twitter’s accounts are actually fake—and Twitter is not alone.

Dan Woods
Published July 14, 2022

At this point you’ve probably heard about the sputtering acquisition and emerging legal drama between Twitter, a company that did not seek to be purchased, and Elon Musk, who has since rescinded his offer to buy it.

At the center of this conflict is the subject of bot traffic, which is something I know a fair bit about. For the past six years, my job has been to lead a team of data scientists who analyze web interactions to identify bots, the applications bots are targeting, and their objectives.

On average, about 2 billion transactions flow through F5’s bot defense infrastructure every day, and we have briefed hundreds of companies in virtually every industry about their bot traffic.

Based on this experience, I believe Twitter’s bot traffic is almost certainly far greater than the company has acknowledged publicly, and even greater than it believes internally. In fairness, the latter is likely true of all organizations that are targeted by malicious or unwanted bots but don’t use best-in-class technology to eliminate them.

Here’s some of what we’ve learned about bots over the past few years and why it was so easy to come to that conclusion.

Bots always try to accomplish something.

An organization that enables customers to log into online accounts will see automation against the login application attempting some type of fraud. An organization that offers special prices online will see automation used to scrape the prices, fares, and rates for resale. There are dozens of examples like this.

In Twitter’s case, a key incentive is gaining followers. There is a perception that the more followers someone has, the more interesting their tweets must be, and indeed, accounts with more followers tend to be more influential.

The objective of amplifying influence is where this model becomes concerning. Imagine the influence you could have with automated control over millions of Twitter accounts interacting with the real accounts of public figures and private citizens. This is likely to attract highly motivated nation-state actors with virtually unlimited resources.

If there’s an incentive and the means, there will be more bots.

Not only is there a huge incentive on Twitter, but there is also a means. There are countless services on the Internet (including dark/deep web marketplaces) offering Twitter accounts, followers, likes, and retweets for a fee.

For research purposes, I tried these services on a Twitter account I created. As I continued testing, I spent less than $1,000, and the account now has nearly 100,000 followers. I once tweeted complete gibberish and paid followers to retweet it. They did. These accounts have names like TY19038461038, and they follow a lot of other accounts, too.
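Purchased followers like these are often easy to spot from the username alone. As an illustration, here is a toy heuristic of my own (not F5’s detection logic): a handle made of a short letter prefix followed by a long run of digits is a strong hint of a machine-generated account.

```python
import re

# Toy heuristic: machine-generated handles often pair a short
# alphabetic prefix with a long run of digits (e.g. TY19038461038).
# This is an illustrative check only, not a production bot detector.
GENERATED_NAME = re.compile(r"^[A-Za-z]{1,4}\d{8,}$")

def looks_generated(handle: str) -> bool:
    """Return True if the handle matches the prefix-plus-digits pattern."""
    return bool(GENERATED_NAME.match(handle))

handles = ["TY19038461038", "dan_woods", "QX20091187453", "jane.doe42"]
suspicious = [h for h in handles if looks_generated(h)]
print(suspicious)  # ['TY19038461038', 'QX20091187453']
```

A real detection system combines many such signals (account age, follower ratios, posting cadence); no single pattern is conclusive on its own.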

I began to wonder how easy it would be to create a Twitter account using automation. I am not a programmer, but I researched automation frameworks on YouTube and Stack Overflow. Turns out, it’s easy.

Taking my testing to the next level, over a weekend I wrote a script that automatically creates Twitter accounts. My rather unsophisticated script was not blocked by any countermeasures. I didn’t try to change my IP address or user agent or do anything to conceal my activities.
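To give a sense of how little code such a script needs, here is a minimal sketch of the data-generation half of the job. This is not my actual script: the handles, the placeholder email domain, and the field names are all fabricated for illustration, and the final step of driving the signup form would use a browser-automation framework such as Selenium.

```python
import random
import string

def random_handle() -> str:
    """Fabricate a handle: a short letter prefix plus a long digit run."""
    prefix = "".join(random.choices(string.ascii_uppercase, k=2))
    digits = "".join(random.choices(string.digits, k=11))
    return prefix + digits

def fake_signup_data() -> dict:
    """Assemble the fields a signup form typically asks for.

    'example.test' is a placeholder domain, not a real mail service.
    """
    handle = random_handle()
    return {
        "username": handle,
        "email": f"{handle.lower()}@example.test",
        "password": "".join(
            random.choices(string.ascii_letters + string.digits, k=16)
        ),
    }

# A real script would now feed this dict into a browser-automation
# framework to fill in and submit the signup form.
account = fake_signup_data()
print(account["username"])
```

The point is not the specifics but the scale of effort: a weekend, a few dozen lines, and no countermeasures encountered.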

If it’s that easy for a person with limited skills, imagine how easy it is for an organization of highly skilled, motivated individuals.

Enterprises frequently underestimate the size of their bot problem.

A few years ago, a U.S. social networking site deployed F5’s bot defense and discovered that 99% of their login traffic was automated. Yes, you read that right—99%.

In fact, we find 80–99% of traffic is automated on many applications. These findings are not a corner case—they’re common across many organizations (retailers, financial institutions, telcos, and quick-service restaurants, to name a few).

The 99% figure was, of course, devastating news to the company. They knew they had a bot problem but never imagined it was that bad. The implications quickly sank in: only a tiny fraction of their customer accounts belonged to real human customers. The rest were bots.

For social networking companies, the number of Daily Active Users (DAU), which is a subset of all accounts, plays a big role in valuation. Disclosing that their DAU was a fraction of what they thought it was caused their value to drop significantly.

Enterprises that benefit from bots don’t always want to know.

One could argue it would have been better for that company’s shareholders if the organization had never learned the truth and instead simply asserted that their bot problem was less than 5%.

This pressure doesn’t just apply to social networking sites whose valuation is determined by the number of DAU. It’s also true for companies that sell high-demand products with limited inventory, such as concert tickets, sneakers, designer purses, or the next iPhone.

When these products sell out in minutes to bots, only to be resold on the secondary market for highly inflated prices, it annoys customers. But the enterprise still sells out their entire inventory quickly. 

In these cases, a company may want to appear as if they’re doing everything they can to stop bots while privately doing very little.

It isn’t just Twitter—the bot problem is everyone’s problem.

When I consider the volume and velocity of automation we’re seeing today, the sophistication of bots that a given set of incentives is likely to attract, and the relative lack of countermeasures I saw in my own research, I can only come to one conclusion: In all likelihood, more than 80% of Twitter accounts are actually bots. This, of course, is my opinion.

I’m sure Twitter is trying to prevent unwanted automation on its platform, similar to every company. But they are likely dealing with highly sophisticated automation from extremely motivated actors. In those circumstances, bot remediation is not a DIY project. It requires equally sophisticated tools.  

However, there is something much more important at stake here. The problem of bots is bigger than any advertising revenue or stock price or company valuation. Allowing this problem to persist threatens the entire foundation of our digital world.

Allowing bots to proliferate leads to massive fraud that costs billions of dollars and ruins people’s lives. It hands nation-states and nefarious organizations the tools to spread misinformation, create conflict, and even influence political processes. The result is more fraud, more misinformation, and more conflict, undermining our ability to communicate and relate to each other worldwide.

If we as a society want to have all of the conveniences, knowledge, entertainment, and other benefits of the Internet and our mobile, connected world, we must do something about automated traffic online. The only way to fight bots is with highly sophisticated automation of our own.

By Dan Woods, Global Head of Intelligence at F5