
Platform Manipulation Part I: Astroturfing
Social media platforms like Twitter are increasingly being manipulated by fake accounts and bots. In this article series, we will explore the various forms of platform manipulation, beginning with astroturfing, and examine some notable examples of astroturfing scandals.
Before we get into the specifics, it’s important to clarify what Twitter’s Platform Manipulation and Spam Policy says:
Platform manipulation encompasses various forms, and the rules aim to address a broad spectrum of prohibited behavior, including:
- Commercially-motivated spam, which typically seeks to divert attention or traffic from Twitter conversations to accounts, websites, products, services, or initiatives;
- Inauthentic engagements, which attempt to make accounts or content seem more popular or active than they genuinely are;
- Coordinated activity, which tries to artificially influence conversations through the use of multiple accounts, fake accounts, automation, and/or scripting; and
- Coordinated harmful activity that encourages or promotes behavior violating Twitter Rules.
So what exactly is astroturfing?
There have been numerous instances of social platform manipulation on Twitter involving bots. Bots are automated accounts that can execute various tasks, such as posting tweets, retweeting, liking, and following other accounts. They can manipulate social media conversations by amplifying specific messages and disseminating misinformation or disinformation, and they can create the illusion of widespread support for, or opposition to, a particular issue or viewpoint. In some cases, bots are used for “astroturfing”: creating the appearance of a grassroots movement or sentiment that is in fact orchestrated by a small group of individuals or organizations.
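To make the amplification pattern a bit more concrete, here is a minimal sketch of how a burst of coordinated retweets or copy-paste posts might be flagged. It assumes you have already collected (account, text, timestamp) records; the window and account thresholds are arbitrary illustrations, not values used by Twitter or any specific detection tool.

```python
from collections import defaultdict
from datetime import timedelta

def flag_coordinated_amplification(posts, window_minutes=10, min_accounts=20):
    """Flag message texts that many distinct accounts push within a short window.

    posts: iterable of (account_id, text, timestamp) tuples, where timestamp
    is a datetime. The thresholds are illustrative, not tuned values.
    """
    by_text = defaultdict(list)
    for account_id, text, ts in posts:
        # Light normalization so trivial edits don't split one campaign apart.
        by_text[text.strip().lower()].append((account_id, ts))

    window = timedelta(minutes=window_minutes)
    flagged = []
    for text, events in by_text.items():
        events.sort(key=lambda e: e[1])
        for i, (_, start) in enumerate(events):
            # Count distinct accounts posting this text within the window.
            accounts = {acc for acc, ts in events[i:] if ts - start <= window}
            if len(accounts) >= min_accounts:
                flagged.append(text)
                break
    return flagged
```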
Astroturfing scandals have rocked various industries in recent history, highlighting the power and potential dangers of this deceptive marketing technique. From fake online reviews to political propaganda, astroturfing is used to manipulate public perception and sow misinformation on a massive scale.
Twitter has taken multiple steps to combat social platform manipulation and bot activity. Even so, platform manipulation remains a significant challenge that requires continuous effort to detect and prevent. Users can stay vigilant and critical of the information they encounter, but verifying the credibility of sources is hard to do without the help of software.
What can we do against astroturfing and Twitter bots?
There are software tools available online that are designed to detect the behavioral patterns left behind by bots and fake accounts.
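As a rough illustration of the behavioral signals such tools tend to look at, the sketch below scores a single account on a few common heuristics (posting rate, account age, follower/following ratio, missing profile details). The field names, weights, and thresholds are assumptions made for this example, not the scoring used by any particular product.

```python
from datetime import datetime, timezone

def bot_likelihood_score(account):
    """Return a 0..1 heuristic score; higher means more bot-like.

    `account` is assumed to be a dict with basic profile fields
    (created_at as a timezone-aware datetime, tweet_count,
    follower_count, following_count, ...). Cut-offs are illustrative.
    """
    score = 0.0
    age_days = max((datetime.now(timezone.utc) - account["created_at"]).days, 1)

    # Sustained very high posting rates are a classic automation signal.
    if account["tweet_count"] / age_days > 50:
        score += 0.35
    # Following far more accounts than follow back suggests follow-spam.
    if account["following_count"] > 5 * max(account["follower_count"], 1):
        score += 0.25
    # Brand-new accounts that are already very active are suspicious.
    if age_days < 30:
        score += 0.20
    # A default avatar and an empty bio are weak but common signals.
    if account.get("default_profile_image", False):
        score += 0.10
    if not account.get("bio"):
        score += 0.10
    return min(score, 1.0)
```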
Twitteraudit.com is a website that can help verify sources suspected of astroturfing by analyzing a Twitter account’s followers. It provides an estimate of the percentage of genuine followers versus fake followers. With TwitterAudit, you can determine whether an account has a high percentage of fake followers, which may indicate an astroturfing campaign, or spot accounts with a large follower count but a low share of genuine followers, suggesting the owner has bought fake followers to artificially inflate their popularity.
To avoid the flood of misinformation, start by verifying the community with twitteraudit.com’s unlimited followers audit. Engagement analytics is another feature that can reveal unrealistic engagement numbers.
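To give a sense of what a follower audit and an engagement check boil down to, here is a minimal sketch. It assumes you already have a bot-likelihood score for each follower (for example from a heuristic like the one above) and some recent tweet statistics; it is not TwitterAudit’s actual algorithm.

```python
def estimated_fake_follower_share(follower_scores, bot_threshold=0.5):
    """Estimate the share of followers that look fake, given per-follower
    bot-likelihood scores in the 0..1 range. The threshold is illustrative."""
    if not follower_scores:
        return 0.0
    fake = sum(1 for s in follower_scores if s >= bot_threshold)
    return fake / len(follower_scores)


def engagement_looks_unrealistic(follower_count, recent_tweets,
                                 max_plausible_rate=0.3):
    """Flag accounts whose average likes + retweets per tweet are implausibly
    high for their follower count, a pattern consistent with purchased
    engagement. The 30% cut-off is an arbitrary example value."""
    if not recent_tweets or follower_count == 0:
        return False
    avg = sum(t["likes"] + t["retweets"] for t in recent_tweets) / len(recent_tweets)
    return avg / follower_count > max_plausible_rate
```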
A couple of well-known astroturfing scandals from the recent past
Now that we know what astroturfing is and how we can protect ourselves against it, let’s look at some well-known scandals that made headlines in recent years:
- A high-profile astroturfing scandal on Twitter involved the Internet Research Agency (IRA). The Russian troll farm created thousands of fake Twitter accounts to spread disinformation during the 2016 US presidential election. The IRA’s activities on Twitter included promoting divisive political content, creating fake grassroots movements, and attacking political opponents. This scandal led to increased scrutiny of social media platforms and their role in shaping public opinion.
- In 2021, a PR firm hired by ExxonMobil reportedly created fake Twitter accounts to promote pro-fossil fuel messages and attack renewable energy advocates. The fake accounts were part of a larger astroturfing campaign involving paid actors and other deceptive tactics.
As we can see, companies and political organizations continue to seek ways to manipulate public opinion. It is therefore important for individuals to seek out credible sources of information and to use third-party tools for audits.
Conclusion
Social platform manipulation on Twitter and other social media platforms is a growing concern, with fake accounts and bots being used to amplify specific messages, spread misinformation, and create the illusion of widespread support or opposition. Astroturfing is a common form of manipulation that involves creating the semblance of grassroots movements or sentiments.
TwitterAudit is a useful tool for spotting astroturfing scams. Whenever Twitter users need to verify the credibility of a source, they can rely on TwitterAudit’s unlimited audit feature by visiting twitteraudit.com.
If you want to know more about Twitter scams, check out the next article in the series!