
Can You Spot Online Bots Before They Overrun Social Media?

June 7, 2023 • Zachary Amos


Are bots on social media harmless automation technology or a tool for spreading dangerous content? The answer is both. Social media bots are extremely common today and have become a spotlight issue on leading platforms like Twitter and Facebook.

Some bots are harmless and used for legitimate purposes. Others are programmed to spread hate, misinformation, propaganda and other negative content. How can users tell the difference and what should they know about bots?

What Exactly Is a “Bot”?

A bot is any program used to automate a digital task, whether for sending out marketing emails, automating office projects, or any number of possible applications. One of the most common uses for bots today is on social media. A bot account is a fake social media account that is fully or partially automated. Users program bots to send out certain types of content. 

There are different types of bots on social media, varying in legitimacy and transparency. Many bots are labeled outright as bots. These bot accounts are intended to automate harmless content creation, such as sharing cute photos of dogs. Other bot accounts may share automated posts to rapidly update followers on a certain topic. 

Unfortunately, bots can also be used to cause harm and dissent on social media. Unethical bot accounts are not labeled as bots and may be used to spread hate speech or misinformation. Bots can also be used to impersonate celebrities or groups or pose as real everyday people. 

Some bots are more common on certain social media platforms. For example, impersonation accounts are among the most common types of bots found on LinkedIn, while misinformation and harmless novelty bots are the most common types on Twitter. Each platform’s post formats and algorithms significantly shape how successful particular kinds of bots can be there.

Legitimate Uses for Bots on Social Media

Conversations about bots usually focus on their negative aspects. The harmful impact of bots is certainly serious, but it’s also important to understand how bots can be used fairly and ethically.

Numerous official government and local organizations around the world have bot accounts to automate important news and safety notifications. For example, the National Weather Service has a bot account for posting storm warnings and public safety alerts. Automating Twitter posts for this topic is harmless and could actually save lives. 

Users can also make bot accounts just for fun, with no ulterior motive in mind. Legitimate users will usually label these accounts as bots on Twitter and use them to post cat memes, animal pictures and other harmless content for purely recreational purposes. The main differentiating feature is that the content these bots share is unrelated to controversial topics, misinformation, propaganda, hate speech or other sensitive subjects.

Large businesses also frequently have bot accounts, although they may not advertise them as bots. It is very common for businesses to automate their social media accounts by writing posts in advance and scheduling them to publish automatically on a certain date. Businesses might also fully automate the creation of their accounts’ original posts and then have a real person reply to users’ comments on those posts.
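The scheduling pattern described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration — real businesses typically use dedicated tools or each platform’s own API — showing posts being queued in advance and released when their publish time arrives:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ScheduledPost:
    text: str
    publish_at: datetime

class PostScheduler:
    """Minimal queue of pre-written posts released at their scheduled times.
    A sketch for illustration only; class and method names are invented."""

    def __init__(self):
        self.queue = []

    def schedule(self, text, publish_at):
        self.queue.append(ScheduledPost(text, publish_at))

    def due_posts(self, now):
        """Return and remove every post whose publish time has arrived."""
        due = [p for p in self.queue if p.publish_at <= now]
        self.queue = [p for p in self.queue if p.publish_at > now]
        return due
```

A polling loop (or a cron job) would call `due_posts` periodically and hand each due post to the platform’s posting API.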

Harmful Uses for Social Media Bots

While there are innocent uses for bots on social media, there are unfortunately also dangerous applications for this technology. Bot accounts are often associated with cyberbullying, misinformation, impersonation and even cybercrime. 

Spreading Misinformation

The most well-known use for bots on social media today is automating the spread of misinformation. Bot accounts programmed to do this take advantage of the way social media algorithms work in order to artificially create trends, rapidly spreading false information. 

The people who use bots for this purpose often create entire “botnets” — large networks of hundreds or thousands of accounts all automated to spread the same type of content. For example, researchers identified a botnet of roughly 13,000 accounts that tweeted about Brexit in the run-up to the June 2016 referendum and then disappeared. 

Bad actors often leverage misinformation botnets to make unpopular opinions appear popular. For instance, in the above example, one person uses a botnet to make their opinion appear popular among thousands. This can make it easy for bot creators to build an appearance of legitimacy around misinformation. After all, if thousands of people appear to be reinforcing a supposed report or fact, doesn’t that mean it must be based on some concrete information? 

This kind of implicit trust is part of the social engineering behind misinformation bots. They use a high volume of posts to blast false narratives to as many real users as possible. The automated nature of bots and botnets is part of why some misinformation strains appear to pop into existence and take off so quickly. 
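One simple way researchers and platforms catch this kind of coordinated amplification is to look for many distinct accounts posting identical content inside a short time window. The sketch below is an illustrative heuristic, not any platform’s actual detection method, and the thresholds are assumptions:

```python
from collections import defaultdict

def flag_coordinated_bursts(posts, min_accounts=50, window_minutes=60):
    """Flag post texts shared by many distinct accounts within a short window.

    posts: list of (account_id, text, timestamp_in_minutes) tuples.
    Thresholds are illustrative assumptions; real systems also compare
    near-duplicate text, account creation dates and posting cadence.
    """
    by_text = defaultdict(list)
    for account, text, ts in posts:
        by_text[text].append((account, ts))

    flagged = []
    for text, entries in by_text.items():
        entries.sort(key=lambda e: e[1])
        # Slide a window over the timestamps, counting distinct accounts.
        for _, start in entries:
            accounts = {a for a, ts in entries if start <= ts <= start + window_minutes}
            if len(accounts) >= min_accounts:
                flagged.append(text)
                break
    return flagged
```

Identical text from dozens of accounts in the same hour is exactly the signature a botnet leaves when it tries to manufacture a trend, which is why volume-over-time is a standard first-pass signal.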

Trolling and Hate Speech

In addition to spreading false information, users can also program bots to spread hateful opinions or spam hurtful content about a specific user. For example, a troll bot might automatically reply to every one of a certain user’s posts with hateful messages or threats. Public figures, government officials and journalists are among the most common targets for troll bots. 

Bad actors design hate speech bots on social media to create an illusion of mass support for a certain controversial view. Sometimes the goal is simply to cause hurt or rally more hate. This type of bot might automatically post messages containing racial slurs, insults, hate rhetoric, threats and other negative content. Usually, these bots target a specific person or group. 

Bad actors often leverage hate speech bots toward a certain political view or issue. For example, studies found that bad actors used hate speech bots in 2020 to stir discord around the origins of the COVID-19 virus through racist comments and posts. Many hate speech bots also sparked anger and polarization surrounding the 2020 U.S. presidential election. 

The idea in cases like these is to use hate speech to rally real people around a certain viewpoint or political ideology. The hateful content the bots post is meant to turn people against one another and push people toward extreme views through fear and emotional manipulation. This use of bots on social media hurts everyone, regardless of one’s political or religious beliefs. 

Moderation has a significant impact on the prevalence and success of hate speech bots. If a social media platform has a good content moderation program, it can catch and shut down hate speech bot accounts. 

Impersonation, Cybercrime and Fake Followers

Harmful bots on social media usually aren’t labeled as bots, but some take things a step further by impersonating public figures or celebrities. Impersonation bots are frequently part of larger cybercrime campaigns, usually some type of scam. 

This is unfortunately a common issue on YouTube, where bad actors set up bots to spam-comment on popular YouTubers’ videos. The spam comments often encourage viewers to click a link or message an account because they have won a prize or giveaway. In reality, these are scams designed to steal money from viewers. Luckily, YouTube is taking steps to stop these bot accounts and spam comment campaigns. 

Scammers can also use impersonation bots to generate fake traffic for a certain account through automated clicks or views. In fact, scammers can use whole botnets of fake accounts to generate hundreds or thousands of fake followers for other accounts. Fake follower bots are usually part of paid services that make users look like they have more followers than they actually do, which may get social media algorithms to promote their content more. 

Tips for Spotting Bots on Social Media

The many harmful uses for bots on social media are definitely concerning and something all Internet users should be aware of. How can people identify bot accounts, though? It may be more difficult than it sounds, especially since bot creators now have access to AI models like ChatGPT, which make their content more convincing. 

Studies have found that AI is making it increasingly difficult to tell which social media accounts are bots and which aren’t. For instance, one research study included a bot account that only 10% of participants correctly identified. Meanwhile, over 41% of participants misidentified a real account as a bot. 

Generative AI makes it possible for scammers to create very convincing fake profiles for their bot accounts. Now they can use an AI image generator to create an original, realistic profile photo in mere minutes. Bots can use language AI models to post convincingly in any language, expanding their international reach. 

Bot-Spotting Tools

Luckily, technology is also making it possible for users to stay ahead of these sophisticated bot attacks. For example, the Indiana University Observatory on Social Media developed a tool called Botometer that anyone can use for free to see if an account might be a bot. A similar free platform, Bot Sentinel, features a public list of reported bot accounts and a tool for checking whether a social media account is fake. 

These tools can help users spot suspicious accounts. However, it is important to remember that they’re not foolproof. Twitter released guidelines in 2020 clarifying that some of the metrics bot-spotting tools look for are not always suspicious. For example, many legitimate users may choose not to create a bio on Twitter out of concern for their privacy online. So, bot-spotting tools can definitely be helpful but may give false positives. 

Additionally, users can follow a few best practices to stay alert for bot activity and avoid falling for bot scams. For example, hackers, scammers and other bad actors can use bots on social media to spread misinformation, so only trust posts about current events that come from verified, legitimate news outlets or publications. Common signs that an account might be fake include a strange or purely numeric username, lack of bio details, lack of a profile picture, large numbers of posts per day, and frequent spelling or grammatical errors. 
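The warning signs listed above can be combined into a rough checklist score. The sketch below is purely illustrative — the field names and the posts-per-day threshold are assumptions, and a high score is a reason for skepticism, not proof that an account is a bot:

```python
def bot_signal_score(profile):
    """Count common warning signs of a fake account (0 to 4).

    profile: dict with 'username', 'bio', 'has_profile_photo',
    'posts_per_day'. A heuristic checklist, not a definitive test.
    """
    score = 0
    name = profile.get("username", "")
    if name.isdigit() or sum(ch.isdigit() for ch in name) > len(name) // 2:
        score += 1  # strange or mostly numeric username
    if not profile.get("bio"):
        score += 1  # no bio details
    if not profile.get("has_profile_photo", False):
        score += 1  # missing profile picture
    if profile.get("posts_per_day", 0) > 50:
        score += 1  # implausibly high posting volume (threshold is an assumption)
    return score
```

Note that any single signal can be innocent — as mentioned above, many real users skip the bio for privacy reasons — so it is the combination of signals that matters.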

Using and Avoiding Social Media Bots

Bots are common on all social media platforms today, although the way they are used varies greatly. Some bots are harmless and may even be providing a helpful service, such as severe weather alerts. However, other types of bots on social media are programmed for harm, spreading misinformation and hate speech. All social media users should be aware of the signs and risks of bot accounts in order to consume content online in a safe, informed manner.
