Trolls and Bots
Trolls
A troll is a person who acts online without revealing a real identity in order to disrupt healthy discussion among users of websites, forums and social networks. Trolls write aggressive or off-topic comments in social network groups (for example, on Facebook, VKontakte, Odnoklassniki), blogs and the comment sections of various web portals, with the intent of attracting attention and emotionally provoking users.
One of the defining characteristics of a classic Internet troll is that he or she always hides a real identity, using a fake social media account and providing either false personal information or none at all. This includes photos, contact information, and data about education, workplace, hobbies and interests.
Hybrid trolls. Unlike a classic troll, who acts at his or her own discretion to incite disagreement and conflict on the Internet, a hybrid troll is hired as an information warfare tool. Most frequently, hybrid trolls are subordinate to governments or state institutions and are paid for this work.
Cyber troops are government or political party teams committed to manipulating public opinion. According to research by the Oxford Internet Institute, most cyber troops use online commentators and fake social media accounts to spread pro-government or pro-party messages to populations both at home and abroad.
Cyber troops actively use computational propaganda on social media platforms. Computational propaganda is the use of automation, algorithms and big-data analytics to manipulate public life (Howard & Woolley, 2016). It involves:
- Spread of fake news and misinformation on social media platforms;
- Illegal data harvesting and micro-profiling;
- Exploitation of social media platforms for foreign influence operations;
- Amplification of hate speech or harmful content through fake accounts, political bots and clickbait content.
Bots
The Oxford Internet Institute distinguishes a further type of bot, political bots: pieces of software or code designed to mimic human behavior online. They can be used for various manipulative techniques, including spreading junk news and propaganda during elections and referenda, or manufacturing a false sense of popularity or support by liking and sharing stories, ultimately drowning out authentic online conversations about politics.
Bots can significantly influence online debates and discussions, especially when operating as networks. Bots are used when a client wants:
- To create a trending hashtag or phrase;
- To disseminate and amplify a particular message;
- To intimidate and discredit other users;
- To artificially inflate “likes” and follower counts.
Why are trolls and bots used?
- To convince users of the credibility of an issue;
- To mobilize people around a certain topic or story;
- To win broad public support for a particular ideology;
- To mislead users;
- To exert psychological pressure on them.
To achieve these goals, fake profiles and automated accounts spread relevant content and interact with real users, trying to change the values, perceptions, emotions, motivation and discourse of the target audience.
How to identify bots on Twitter?
It is difficult to identify bots, i.e. automated accounts, by means of a single criterion. The Atlantic Council’s Digital Forensic Research Lab (DFRLab) offers social media users some tips for spotting a bot on Twitter.
Frequency of posts, activity: Bots are prolific posters. The more frequently an account posts, the more caution is warranted. The Oxford Internet Institute’s Computational Propaganda team treats an average of more than 50 posts a day as suspicious; DFRLab classifies 72 posts a day as suspicious, a widely recognized and applied benchmark, though it may be on the low side.
Source: DFRLab; Photo: Twitter
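The posting-frequency heuristic above can be sketched as a simple calculation. A minimal sketch, assuming the account data (post count, creation date) is available; the function names are illustrative, and the 72-posts-per-day threshold is the DFRLab benchmark cited in the text:

```python
from datetime import date

# Thresholds from the text: the Oxford Internet Institute treats >50 posts/day
# as suspicious; DFRLab uses 72 posts/day.
SUSPICIOUS_POSTS_PER_DAY = 72

def average_posts_per_day(total_posts, created_on, today):
    """Average daily posting rate over the account's lifetime."""
    days_active = max((today - created_on).days, 1)  # avoid division by zero
    return total_posts / days_active

def is_suspiciously_prolific(total_posts, created_on, today):
    """Flag accounts posting above the DFRLab benchmark."""
    return average_posts_per_day(total_posts, created_on, today) > SUSPICIOUS_POSTS_PER_DAY

# Example: 30,000 tweets over 100 days is 300 posts a day,
# well above either benchmark.
rate = average_posts_per_day(30_000, date(2020, 1, 1), date(2020, 4, 10))
```

A rate this far above the benchmark is a strong signal, but as the text notes, no single criterion is conclusive on its own.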
Anonymity: Bots often lack any personal information.
Amplification: One main role of bots is to boost the signal of other users, so a typical bot’s timeline consists of a procession of re-tweets and word-for-word quotes of news headlines, with few or no posts containing original wording.
Source: DFRLab; Photo: Twitter
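The amplification signal can likewise be expressed as a simple ratio. A hypothetical sketch, assuming each timeline entry has already been labeled as a re-tweet, a quote, or an original post:

```python
def amplification_share(timeline):
    """Fraction of a timeline made up of re-tweets and verbatim quotes.

    `timeline` is an illustrative list of dicts with a 'kind' field:
    'retweet', 'quote', or 'original'.
    """
    if not timeline:
        return 0.0
    amplified = sum(1 for post in timeline if post["kind"] in ("retweet", "quote"))
    return amplified / len(timeline)

# A bot-like timeline: almost pure amplification, one original post out of 50.
timeline = [{"kind": "retweet"}] * 47 + [{"kind": "quote"}] * 2 + [{"kind": "original"}]
share = amplification_share(timeline)  # 0.98
```

A share close to 1.0 matches the pattern DFRLab describes; a genuine user's timeline usually mixes in far more original wording.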
Common content: Networks of bots can be identified easily if we notice that multiple profiles tweet the same content almost simultaneously.
Photo: Stopfake.org
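The common-content signal can be sketched as grouping posts by identical text and checking whether several accounts published it within a short window. The function, data and 60-second window are illustrative assumptions, not an established detection rule:

```python
from collections import defaultdict

def find_coordinated_posts(posts, window_seconds=60):
    """Find texts posted by multiple accounts within a short time window.

    `posts` is a list of (account, text, unix_timestamp) tuples.
    Returns a dict mapping each suspicious text to the accounts that posted it.
    """
    by_text = defaultdict(list)
    for account, text, ts in posts:
        by_text[text].append((account, ts))

    suspicious = {}
    for text, entries in by_text.items():
        accounts = {account for account, _ in entries}
        timestamps = [ts for _, ts in entries]
        # More than one account, all posting within the window: likely coordinated.
        if len(accounts) > 1 and max(timestamps) - min(timestamps) <= window_seconds:
            suspicious[text] = sorted(accounts)
    return suspicious

posts = [
    ("@acct_a", "Breaking: share this now!", 1000),
    ("@acct_b", "Breaking: share this now!", 1012),
    ("@acct_c", "Breaking: share this now!", 1045),
    ("@acct_a", "Good morning everyone", 5000),
]
hits = find_coordinated_posts(posts)
```

Here the identical message posted by three accounts within 45 seconds is flagged, while the ordinary single-account post is not. Real coordinated networks may vary wording slightly, so exact-match grouping is only a first approximation.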
Botometer is a useful resource to identify bots on Twitter. It is a joint project of the Network Science Institute (IUNI) and the Center for Complex Networks and Systems Research (CNetS) at Indiana University. Botometer checks the activity of a Twitter account and gives it a score based on how likely the account is to be a bot. Higher scores are more bot-like.
Using the service requires authorizing it via your own Twitter account. Enter a Twitter username in the search box; Botometer can then check the account itself, as well as its followers and friends.
How to identify trolls?
Trolls’ online activity has several characteristics that help us identify them:
Step No. 1: Look at the comments written by trolls, which are:
- Completely out of context: a discussion may be about an entirely different topic, but a troll posts on issues irrelevant to it.
- Either long, or consisting of just a few words, slogans and links to unreliable sources: on the one hand, a troll may post long comments on a particular topic, overloading the discussion with unnecessary information in order to influence real users; on the other hand, a troll may post short slogans accompanied by links to unreliable, tabloid or unknown media outlets.
- Aggressive: to defend or promote their preferred positions, trolls use aggressive language toward real users. They stick to a single, radical position that never changes. No matter how correct and well-justified your arguments in a dispute with a troll, the troll’s main function is to express his own position and convince as many users as possible of its legitimacy.
- Full of linguistic and grammatical errors: a troll living in another country and speaking another language frequently makes grammatical errors or posts in that other language.
Step No. 2: Analyze the troll’s profile.
- Examine the “About me” section, which as a rule provides scarce information about the profile owner. Real users use social networks to interact and establish contacts with other users, and to that end they provide personal information such as date of birth, family members, workplace and education. Internet trolls provide either no information or false information; for example, trolls frequently indicate the United States as their birthplace, and Facebook or other famous brands and companies as their workplace.
- View photos: personal photos are rarely used as a troll’s profile photo or cover photo, or in the photo gallery. In most cases the number of photos is very small, or there are none at all. An experienced troll may upload another user’s personal photos to the profile; to reveal this, use Google reverse image search: upload the troll’s profile image and find out whom it actually depicts.
- View videos: many social network users upload videos that help identify where they live, work and study, as well as their friends and the users themselves. Most trolls have no such videos on their profiles.
- Analyze friends: trolls generally do not have many friends, although some have thousands. If foreign nationals dominate a troll’s friend list while locals are few or absent, we can conclude that it is a fake profile.
- Pay attention to the news feed: look at shared content, photos and videos. Trolls mainly upload posts promoting a particular ideology and set of messages, reflecting their sentiments and positions. Their news feeds frequently contain links to tabloid and unknown media outlets that spread disinformation.
Step No. 3. Mark a fake profile to help others identify it.
Frequently, a fake profile has already been identified as a troll by another user, which helps us identify it as well. To contribute to protecting the Internet against trolls, leave a comment after rechecking, so that other users can identify the troll too.
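The three steps above can be summarized as a simple checklist score. The signal names and the equal weighting are illustrative assumptions for the sketch, not an established scoring system:

```python
# Illustrative troll-profile checklist derived from Steps 1-3 above.
# Each signal is a boolean observation; names and equal weights are assumptions.
TROLL_SIGNALS = [
    "off_topic_or_aggressive_comments",  # Step 1: comment behavior
    "links_to_unreliable_sources",       # Step 1: tabloid/unknown outlets
    "scarce_or_false_about_info",        # Step 2: "About me" section
    "few_or_stolen_photos",              # Step 2: photo gallery
    "no_personal_videos",                # Step 2: videos
    "mostly_foreign_friends",            # Step 2: friend list
    "one_sided_ideological_feed",        # Step 2: news feed
    "already_flagged_by_other_users",    # Step 3: community marking
]

def troll_score(profile):
    """Count how many checklist signals a profile exhibits (0 to 8)."""
    return sum(1 for signal in TROLL_SIGNALS if profile.get(signal, False))

profile = {
    "scarce_or_false_about_info": True,
    "few_or_stolen_photos": True,
    "mostly_foreign_friends": True,
}
score = troll_score(profile)  # 3 of 8 signals present
```

The more signals present, the stronger the case for treating the profile as fake; no single signal is conclusive on its own.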
Facebook Profile Analysis: Search Engines
When analyzing a user profile to identify Internet trolls, it is important to gather comprehensive information about the user. To make this process more efficient and save time, we can use open online resources that enable an in-depth analysis of any user profile in a short time.
Facebook Graph Search (Graph.tips) is a useful search engine that provides public information about users in accordance with Facebook’s privacy policy. Moreover, Facebook Graph Search enables us to enter the data of two or more users simultaneously and then identify links between them.
Facebook Graph Search offers a list from which we can select the necessary criteria:
- Photos, videos and posts liked or posted by a user;
- Comments left by a user on various posts, videos and photos;
- Content in which a user has been tagged by other users;
- Liked pages;
- Applications used;
- Places where a user has checked in;
- Events attended by a user;
- Relatives and friends.
If two or more users are indicated, the search engine enables us to search for:
- Photos and Facebook pages liked by them;
- Places where the users checked in;
- Posts and photos on which the users left comments;
- Photos in which other users tagged them;
- Common Facebook groups;
- Events attended by the users.
Such a search enables us to establish links between users that would be difficult to find by analyzing their profiles separately.