Earlier this week, British Prime Minister Theresa May said in a speech that social media was inundated with Russian attempts to distort news and elections, and accused Moscow of “planting fake stories” online as part of a “campaign of cyber-espionage and disruption”.
Her remarks followed findings by both British and US intelligence services that exposed several prominent news services as Russian bot operations.
Earlier in November, the US Senate Select Committee on Intelligence provided a list of 2,753 suspended, Russian-linked Twitter accounts.
Many of these fake Russian bot accounts have been traced directly to the Internet Research Agency in St Petersburg, a Russian government-backed organization that reportedly runs social media accounts disseminating “disinformation”.
This “troll farm” was exposed by a whistleblower in the New York Times in 2015. A BBC investigation has also uncovered a pro-Russia “troll factory” in Ukraine.
The Oxford Internet Institute – part of the University of Oxford and recipient of the Democracy award for its analysis of propaganda – says bots “significantly impact [on] public life during important policy debates, elections, and political crises” and “flourished during the 2016 US presidential election”.
Bot-spotting tips
The Atlantic Council’s Digital Forensic Research Lab (DFRL) offers social-media users tips for spotting a bot (a rough sketch of these heuristics in code follows the list):
- Frequency: Bots are prolific posters. The more frequently they post, the more caution should be shown. The DFRL classifies 72 posts a day as suspicious, and more than 144 per day as highly suspicious.
- Anonymity: Bots often lack any personal information. The accounts often have generic profile pictures and political slogans as “bios”.
- Amplification: A bot’s timeline will often consist of re-tweets and verbatim quotes, with few posts containing original wording.
- Common content: Networks of bots can be identified if multiple profiles tweet the same content almost simultaneously.
The Digital Forensic Research Lab’s full list of tips can be found here.
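To make the DFRL criteria concrete, here is a minimal sketch of how the frequency, anonymity, and amplification heuristics could be scored in code. The `Account` structure, field names, and 0–4 scoring scale are illustrative assumptions for this article, not an official DFRL tool or any Twitter API.

```python
# Illustrative sketch of the DFRL-style bot heuristics described above.
# The Account structure and thresholds are assumptions, not an official tool.
from dataclasses import dataclass


@dataclass
class Account:
    posts_per_day: float      # average posting frequency
    has_profile_photo: bool   # missing/generic avatar suggests anonymity
    bio: str                  # profile description (slogan-like bios are a flag)
    retweet_ratio: float      # share of timeline that is retweets/quotes, 0.0-1.0


def bot_score(acct: Account) -> int:
    """Return a rough 0-4 score; higher means more bot-like."""
    score = 0

    # Frequency: DFRL treats 72+ posts a day as suspicious, 144+ as highly suspicious.
    if acct.posts_per_day >= 144:
        score += 2
    elif acct.posts_per_day >= 72:
        score += 1

    # Anonymity: no profile photo plus an empty or slogan-length bio.
    if not acct.has_profile_photo and len(acct.bio.strip()) < 20:
        score += 1

    # Amplification: timeline is almost entirely retweets/verbatim quotes.
    if acct.retweet_ratio >= 0.9:
        score += 1

    return score


if __name__ == "__main__":
    suspect = Account(posts_per_day=150, has_profile_photo=False,
                      bio="MAGA", retweet_ratio=0.95)
    print(bot_score(suspect))  # prints 4: highly bot-like under these heuristics
```

The fourth DFRL tip, common content across a network of profiles, would require comparing many accounts at once (for example, flagging identical text posted within seconds of each other), so it is omitted from this single-account sketch.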
There has been a move away from fully automated bots to semi-automated accounts.
Such accounts are harder to identify, as they intersperse their activity with references to popular culture and personalized replies.
To read the entire article from The BBC, click here:
Photo courtesy Wikipedia