
This issue of Robotrolling examines users suspended by Twitter. Contrary to expectation, most of the suspended accounts were human-controlled rather than bots. Since 2017, the speed at which Twitter suspends misbehaving users has, by two measures, almost doubled. However, removals of Russian-language accounts have been considerably slower than removals of English-language ones. The speed of removal can be critical, for instance in the context of an election. The Latvian elections, conducted on 6 October 2018, passed with remarkably little Russian-language activity about the NATO presence in the country. Our analyses show a movement in the past year away from automated manipulation towards humans operating fake or disposable identities online. The figures published in this issue reflect the good work done to tackle bots, but show that much work remains to tackle manipulation through fake human-controlled accounts. Bots created 46% of Russian-language messaging about the NATO presence in the Baltics and Poland. More than 50% of Russian-language messaging about Estonia this quarter came from automated accounts. Anonymous human-operated accounts posted 46% of all English-language messages about Poland, compared to 29% for the Baltic States. This discrepancy is both anomalous and persistent. Some of the messaging is probably artificial. We continue to publish measures of fake social activity in the hope that quantifying the problem will focus minds on solving it.
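As a rough illustration of the "speed of removal" measure discussed above, the sketch below computes the median number of days between an account's first observed post and its suspension, broken down by language. It is a minimal sketch only, not the Robotrolling methodology; the column names (account, language, first_seen, suspended_at) and the toy data are assumptions made for illustration.

    import pandas as pd

    def median_days_to_suspension(df: pd.DataFrame) -> pd.Series:
        """Median days between an account's first observed post and its suspension,
        grouped by message language. Column names are assumed for this sketch."""
        suspended = df.dropna(subset=["suspended_at"]).copy()
        suspended["days_to_suspension"] = (
            suspended["suspended_at"] - suspended["first_seen"]
        ).dt.days
        return suspended.groupby("language")["days_to_suspension"].median()

    # Toy data only; a real analysis would use the collected Twitter dataset.
    accounts = pd.DataFrame({
        "account": ["a1", "a2", "a3"],
        "language": ["ru", "en", "en"],
        "first_seen": pd.to_datetime(["2017-05-01", "2018-02-10", "2018-03-01"]),
        "suspended_at": pd.to_datetime(["2018-01-15", "2018-03-01", None]),
    })
    print(median_days_to_suspension(accounts))  # ru: 259 days, en: 19 days (toy values)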
On the popular Russian-language social network VK, material about the NATO presence in the Baltics and Poland was viewed no fewer than 11 million times this quarter (February–April 2019). 93% of these views were of material from community spaces. On VK, community spaces are increasingly important, both as sources of content and as places for discussion. The move to groups has implications beyond the Russian-language space. Facebook has recently launched a push to promote community spaces. These spaces, normally closed to researchers, offer huge potential for misuse and manipulation. Our investigation of VK community spaces reveals that the vast majority of groups in which NATO is discussed are communities with radical pro-Kremlin or nationalist tendencies, or communities dedicated to the conflict in Ukraine. These communities generate more posts and attract even more views than communities created by Russian state media outlets. On Twitter, bots tweeting in Russian remain a bigger problem than bots tweeting in English. In Russian, they account for 43% of all messages—a significant increase in recent months. In English, bots posted 17% of messages. English-language bots this quarter overwhelmingly amplified news content from RT (formerly Russia Today) and other pro-Kremlin news outlets. On all platforms, discussion of NATO troops in Poland attracted the largest number of posts this quarter. Finally, in this issue we publish our first case study of manipulation on Facebook. It looks at the degree to which bots and trolls targeted posts promoted by Latvian political parties contesting the European Elections in late May 2019.
Anonymous users stole the show this quarter. Never before have we observed such high levels of activity from anonymous accounts. At the same time, bot activity in Russian-language conversations about NATO in the Baltics and Poland has emerged from its winter slumber. In the wake of the Skripal poisonings in the UK in March, Russian-language bot and anonymous activity about NATO more than doubled. Mentions of NATO on VK, in contrast, remained stable or declining throughout the period. Social media companies are working to end platform misuse, but malicious activity is evolving. Today, anonymous accounts dominate the conversation. These accounts are either operated manually, or they have become advanced enough to fool human observers. The responses from open and free societies to the problem of online malicious activity have been neither strong nor consistent enough. Figures presented in this issue reveal a disparity in conversation quality between the English- and Russian-language spaces. Currently, the Russian-language conversation about NATO in the Baltics and Poland contains six times the proportion of content from bot and anonymous accounts that the English-language conversation does. As Twitter has taken steps to remove bots, the disparity has only widened. We assess that 93% of Russian-language accounts in our dataset are operated anonymously or automatically. In no way does this conversation mirror the opinions of citizens. Journalists, policy makers, and advertisers take note!
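The "six times the proportion" comparison above is simple arithmetic over labelled messages. The sketch below shows one way to compute it, assuming each collected message carries a language tag and an account-category label (bot, anonymous, human); the schema and the toy numbers are invented and are not the report's data.

    import pandas as pd

    # One row per collected message; labels are illustrative.
    msgs = pd.DataFrame({
        "language": ["ru", "ru", "ru", "ru", "en", "en", "en", "en"],
        "category": ["bot", "anonymous", "bot", "human", "human", "human", "bot", "human"],
    })

    inauthentic = msgs["category"].isin(["bot", "anonymous"])
    share = inauthentic.groupby(msgs["language"]).mean()  # fraction of inauthentic messages per language
    print(share)                       # en 0.25, ru 0.75 with this toy data
    print(share["ru"] / share["en"])   # disparity ratio: 3.0 here, roughly 6 in the report's data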
This report presents top-level findings from the first research project to systematically track and measure the scale of inauthentic activity on the Russian social network VK. On VK, a vocal core consisting of loyal news media, pro-Kremlin groups, and bots and trolls dominates the conversation about NATO. The volume of material from this core group is such that genuine users account for only 14% of the total number of messages about NATO in the Baltic States and Poland. The spread of demonstrably fake content can offer a starting point for measuring how social media manipulation impacts genuine conversations. In the case of one story about a fictitious Finnish blogger, our algorithm estimates that at least 80% of users who shared the fake story were authentic. This quarter, messages appeared in more than 2,000 different group pages on VK. Setting aside messages from group pages, 37% of VK posts came from ‘bot’ accounts—software that mimics human behavior online. This level of activity is comparable to what we have seen on Russian-language Twitter. Unlike on Twitter, where the vast majority of human-controlled accounts are operated anonymously, on VK most accounts are likely to be authentic. Western social media companies have belatedly taken an active role in reducing the reach of the Kremlin’s social media manipulation efforts. However, it remains hard for researchers to evaluate the effectiveness of these measures on platforms such as Facebook and Instagram. In this context, VK offers a cautionary view of a network with minimal privacy, regulation, and moderation.
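For readers unfamiliar with what a "bot" label means in practice, the sketch below shows a toy heuristic scorer of the general kind used to flag automated accounts. It is not the algorithm used in the report; all features, thresholds, and weights are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Account:
        posts_per_day: float          # posting rate over the observation window
        identical_post_ratio: float   # share of posts duplicating other accounts' text
        has_profile_photo: bool
        followers: int
        following: int

    def bot_score(acc: Account) -> float:
        """Return a 0..1 heuristic score; higher means more bot-like. Thresholds are invented."""
        score = 0.0
        if acc.posts_per_day > 50:            # implausibly high, sustained output
            score += 0.4
        if acc.identical_post_ratio > 0.8:    # mostly copies of the same text
            score += 0.3
        if not acc.has_profile_photo:
            score += 0.1
        if acc.following > 0 and acc.followers / acc.following < 0.05:
            score += 0.2                      # follows many, followed by almost nobody
        return min(score, 1.0)

    print(bot_score(Account(120, 0.95, False, 12, 900)))  # 1.0 -> flagged as likely automated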
President Trump’s whirlwind tour of Europe in July provoked ferocious discussion about NATO on social media. Anonymous human-controlled English-language accounts, expressing positions in support of or in opposition to the US President, dominated online conversations. Compared to the levels observed in the Spring issue of Robotrolling, the volume of English-language messages has more than doubled. The increasing proportion of anonymous accounts active during key political moments indicates that anonymity is being abused to cloak manipulation on social networks. We call on social media companies to keep investing in countering platform misuse. The social media companies Reddit and Twitter have released lists of accounts identified as originating from the notorious St Petersburg ‘troll factory’—the Internet Research Agency (IRA). In this issue, we present the first quantitative analysis comparing English- and Russian-language posts from these accounts. The IRA bombarded citizens in Russia and its neighboring states with pro-Kremlin propaganda. In English, fake accounts posed as Trump supporters and argued both sides of the Black Lives Matter controversy. Russian-language material closely echoed and amplified the narratives popularized by Russian state media. Amongst the accounts identified by Twitter, 26 also posted about NATO in the Baltics and Poland. Our algorithm correctly identified 24 of these as bot accounts. The other two accounts were anonymous human-controlled (troll) accounts.
The paper considers some criminal law aspects of “hate speech” in electronic media and on the internet. After defining the term “hate speech” for the purposes of the paper and reviewing the obligations of member states to prohibit and criminalize hate speech under certain international legal instruments, the paper reviews the manner and scope of the criminalization of hate speech in the criminal codes in force in BiH. In addition to analyzing the principal legal elements of the related criminal offenses, the paper pays special attention to the preconditions that must be satisfied so that a conviction for the criminal offense of hate speech does not violate the right to free expression, one of the fundamental human rights. The paper also addresses the responsibility of journalists and the media for hate speech and its spreading, as well as the responsibility of intermediaries (service providers) for the hate speech of others.
After analyzing the prohibition of hate speech in international law and practice, the author analyzes the normative and institutional framework for the prohibition of hate speech in Serbia. The analysis covers the legal provisions in force concerning this prohibition and the rules of the regulatory bodies responsible for combating hate speech in print and electronic media and on the internet, and then provides an overview of the institutional mechanisms of protection against hate speech. The author then gives a critical overview of how this normative and institutional framework functions and points to the key problems: above all, the problem of media freedom, the widespread failure of representatives of the public authorities to recognize the essence of discrimination and hate speech, and problems concerning the functioning and independence of the regulatory bodies and of the judicial system in general. Political discourse in Serbia is burdened with a spirit of intolerance and rhetoric that often contains hate speech. The fact that pro-government media often act as carriers of such rhetoric is worrying, as it points to the public authorities' lack of readiness to promote the critical dialogue that is the premise of a healthy democratic society.
In the period May–July 2019, bots accounted for 55% of all Russian-language messages on Twitter. This big increase in automated activity was largely driven by news bots contributing to information effects around stories published by the Kremlin’s propaganda outlet Sputnik. On VK, the bot presence also increased and currently accounts for one quarter of all users. Bots posted 17% of English-language messages. Three military exercises were of particular interest to Russian-language bots on Twitter and VK: Spring Storm, Baltic Operations (BALTOPS), and Dragon-19. The level of Twitter activity during the month of July was less than half that observed for the period May–June. Having studied robotic activity for almost three years, we see a clear pattern: whenever a military exercise takes place, coverage by hostile pro-Kremlin media is systematically amplified by inauthentic accounts. In this issue of Robotrolling we take a closer look at how manipulation changed during the period 2017–2019 in response to measures implemented by Twitter. Since 2017, bot activity has changed. Spam bots have given way to news bots—accounts promoting fringe or fake news outlets—and mention-trolls, which systematically direct messaging in support of pro-Kremlin voices and in opposition to the Kremlin’s critics. We present an innovative case study measuring the impact political social media manipulation has on online conversations. Analysis of Russian Internet Research Agency posts to the platform Reddit shows that manipulation caused a short-term increase in the number of identity attacks by other users, as well as a longer-term increase in the toxicity of conversations.
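The Reddit case study mentioned above rests on a before/after comparison of conversation toxicity around manipulation events. The sketch below illustrates only the basic arithmetic under strong simplifying assumptions: per-comment toxicity scores are taken as given, and the window lengths and values are invented.

    from statistics import mean

    def toxicity_shift(before: list[float], after: list[float]) -> float:
        """Difference in mean per-comment toxicity (0..1) after vs. before the manipulation."""
        return mean(after) - mean(before)

    # Toy per-comment toxicity scores for one thread, e.g. from an off-the-shelf toxicity model.
    scores_before = [0.10, 0.12, 0.08, 0.14]
    scores_after = [0.22, 0.30, 0.19, 0.25]
    print(f"toxicity shift: {toxicity_shift(scores_before, scores_after):+.2f}")  # +0.13 here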
If the seriousness of a given “emerging security threat” is measured by the number of recent analyses devoted to it or the proliferation of experts studying it, then cyberthreats must now surpass the dangers of offline terrorism and energy security. Yet while everything “cyber” attracts a high level of policymaker attention, one such threat seems to have been forgotten and marginalised: cyberterrorism. To an extent, the evolution of cyberterrorism mirrors that of “regular” terrorism, which erupted as the “weapon of the weak” and, after a state-sponsored phase, seems to be returning to its sub-state or even “lone wolf” roots. Cyberthreats, on the other hand, originally of a sub-state nature, are now mostly the domain of state entities, which have not yet made the decision to launch state-sponsored cyberterrorism.
China’s authorities are expanding the promotion of Chinese politics abroad through social media such as Twitter and Facebook, which are popular in Europe and the U.S. but blocked in China. In recent months, many Chinese diplomatic missions and ambassadors have set up accounts on these platforms, and prominent state-media employees had already established Twitter profiles. Through their interactions with users and positive presentations of Chinese policy, they seek to change attitudes towards China in European and American societies and to sway experts and political leaders in these countries. Although these actions are at a preliminary stage, the Chinese authorities will try to expand their influence on decision-making processes in the EU and the U.S.
The PRC authorities are expanding the promotion of Chinese policy abroad through social media such as Twitter and Facebook, which are popular in Europe and the U.S. but blocked in the PRC. In recent months, many Chinese diplomatic missions and ambassadors have set up accounts on these platforms; prominent state-media employees already had Twitter profiles. By interacting with users and presenting PRC policy in a positive light, they seek to change the attitude of European and American societies, including the apparatus of power and experts, towards China. Although these actions are at an early stage, the PRC authorities will develop them, attempting to influence decision-making processes in the EU and the U.S.
The Russian authorities actively use the digital space to achieve foreign policy goals. On Russia’s initiative, the UN General Assembly adopted two new resolutions in the field of cybersecurity: the first concerns the rules of conduct of states on the internet, and the second deals with cybercrime. They were adopted despite opposition from Western countries. Russia is also trying to shield the Russian internet from external influence. The aim of these activities is to ensure the stability of the country's internal political regime.
The Russian authorities actively use the digital space to pursue foreign policy goals. On Russia's initiative, the UN General Assembly adopted two new resolutions in the field of cybersecurity: the first concerns the rules of conduct of states on the internet, the second the fight against cybercrime. They were passed over the objections of Western states. Russia is trying to protect the Russian internet network against external influence. The aim of these activities is the stability of the internal political regime.
Estonia's commitment to strengthening cybersecurity is a result of its experience as a victim of hacking. At the same time, the Estonian authorities are consistently raising the country's level of digitisation and lobbying for an effective digital single market in the EU. Expertise in cybersecurity has become Estonia's unique brand, and its application of comprehensive and effective solutions means that it can be a model partner for Poland in digitisation and in combating cyberthreats.
Estonia's commitment to strengthening cybersecurity is the result of the hacking attacks it has experienced. At the same time, the Estonian authorities are consistently raising the country's level of digitisation and lobbying for an effective digital single market in the EU. Estonia's drive to increase cybersecurity has become virtually its trademark. Its application of comprehensive and effective solutions means that it can be a model partner for Poland in digitisation and in combating cyberthreats.
This quarter, the disputed presidential election result and nationwide protests in Belarus were the main targets of inauthentic Russian-language accounts, resulting in a cluster of spikes in fake activity in August. Pro-Lukashenka users concocted an external threat from NATO by pushing false claims of a NATO buildup along the Belarusian border and sharing rumours of impending intervention. Automated users asserted that NATO posed an internal threat in Belarus as well, alleging that the demonstrations were “puppeteered” by the West. The situation in Belarus coincided with the most pronounced uptick in attention from identifiably human Russian-language accounts. Compared to the previous report, the proportion of messages attributed to identifiable humans increased from 14% to 18% on Twitter and from 26% to nearly 30% on VK. This increase in legitimate engagement in NATO-related discussions of Belarus drove the share of bot users down to the lowest figures we have observed: 15% on Russian-language Twitter and 19% on VK. English-language activity focused on Polish affairs, both independently and in relation to the ongoing protests in Belarus. Inauthentic English-language discussion peaked with announcements of the relocation of US troops from Germany to Poland. In September, former US Vice President Joe Biden made critical comments about Hungary and Poland, triggering the highest volume of automated retweets from English-language bots this quarter. Finally, in this instalment of Robotrolling we take a look at the supply side of fake social media accounts. The second iteration of the COE’s social media manipulation experiment tracks variation in the responses of Facebook, Twitter, YouTube, Instagram, and TikTok to inauthentic engagement. Strikingly, the report found that Instagram is ten times cheaper to manipulate than Facebook, that TikTok has virtually no self-regulatory defences, and that it remains easy to manipulate US senators’ accounts, even during an election period.
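The "ten times cheaper" finding quoted above is, at bottom, a cost-per-engagement comparison. The toy calculation below shows that arithmetic only; the prices and delivery figures are invented placeholders, not the experiment's actual results.

    # Invented prices and delivery counts; the experiment's real figures are in the COE report.
    purchases = {
        "facebook":  {"cost_eur": 10.0, "engagements_delivered": 400},
        "instagram": {"cost_eur": 10.0, "engagements_delivered": 4000},
    }
    for platform, p in purchases.items():
        per_1000 = 1000 * p["cost_eur"] / p["engagements_delivered"]
        print(f"{platform}: {per_1000:.2f} EUR per 1,000 fake engagements")
    # With these invented figures Instagram comes out ten times cheaper, matching the
    # relative finding quoted above.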
This quarter, we observed a significant drop in both authentic and inauthentic engagement with the topic of NATO in Poland and the Baltics. The number of bots and the volume of messages they disseminated decreased in both the English- and the Russian-language communities on Twitter and VK. Spikes in bot activity this winter coincided with NATO military exercises. Inauthentic accounts placed particular emphasis on unfounded claims of disorderly conduct among NATO soldiers and on the alleged effects that military exercises have on local civilian populations. Throughout this period, inauthentic accounts also amplified claims of turmoil within the alliance and fears of a military buildup along the Kaliningrad border. While Russian-language bot activity focused primarily on military affairs, English-language bot activity centered on US affairs in the wake of the 2020 presidential election, particularly debates over how the Biden administration would affect US-Polish relations and transatlantic security more generally. In this issue of Robotrolling, we also discuss the steps Twitter has taken to protect its platform from attempts to incite violence, organise attacks, and share misinformation following the riot at the US Capitol on 6 January 2021. This regulatory enforcement resulted in the removal of tens of thousands of accounts connected to QAnon conspiracy theorists. Our analysis is accompanied by a visualisation of the English-language accounts mentioning the NATO presence in Poland and the Baltics, demonstrating the impact these account removals will have on the information space.
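The visualisation referred to above is, in essence, a mention network from which suspended accounts are removed. The sketch below shows a minimal version of that operation using networkx; the account names, edges, and the "suspended" set are invented for illustration and are not the report's data.

    import networkx as nx

    # (source, mentioned) account pairs; names and the 'suspended' set are invented.
    mentions = [("acct_a", "acct_b"), ("acct_a", "acct_c"),
                ("acct_d", "acct_b"), ("acct_e", "acct_a")]
    suspended = {"acct_a"}  # e.g. accounts removed in the January 2021 takedowns

    g = nx.DiGraph()
    g.add_edges_from(mentions)
    remaining = g.subgraph(n for n in g if n not in suspended)

    print(f"accounts: {g.number_of_nodes()} -> {remaining.number_of_nodes()}")        # 5 -> 4
    print(f"mention edges: {g.number_of_edges()} -> {remaining.number_of_edges()}")   # 4 -> 1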
Following global trends, social media has become an integral part of daily life for most Albanians. Even before the COVID-19 pandemic, it was a primary means of social communication and of receiving information on current affairs. Since the outbreak of the pandemic, social media has cemented its position as an irreplaceable part of people’s personal and professional lives. While it is valuable that the internet allows education, business, and daily life to continue, such dependency on social media brings challenges. For example, disinformation spreads six times faster through social media channels than real news, and social media platforms create opportunities for unsupervised communication, which can lead to harassment, abuse, and blackmail.