[Air-L] Digital Methods Summer School 2021 - Amsterdam
Richard Rogers
rogers at govcom.org
Tue Mar 23 02:43:11 PDT 2021
Digital Methods Summer School 2021
5 - 16 July 2021
Online via Zoom or in-person (as circumstances allow)
New Media & Digital Culture
University of Amsterdam
Turfdraagsterpad 9
1012 XT Amsterdam
the Netherlands
Call for participation. For application information see here <https://wiki.digitalmethods.net/Dmi/SummerSchool2021>.
Fake everything: Social media’s struggle with inauthentic activities
This year’s Summer School takes as its theme the so-called ‘faking’ and detecting of inauthentic users, metrics and content on social media. The uptick in attention to the study of the fake online can be attributed in the first instance to the ‘fake news crisis <https://www.buzzfeednews.com/article/craigsilverman/viral-fake-election-news-outperformed-real-news-on-facebook>’ of 2016, when it was found that so-called fake news outperformed mainstream news on Facebook in the run-up to that year’s U.S. presidential elections. That finding also set in motion the subsequent struggle over the occupation of the term, as it shifted from denoting a type of news originating from imposter media organisations or other dubious sources to a ‘populist’ charge against mainstream and elite media <https://journals.sagepub.com/doi/full/10.1177/0163443720906992> that seeks to delegitimate sources found publishing inconvenient or displeasing stories.
In its study there have been calls to cease using the term ‘fake news’ <https://www.tandfonline.com/doi/full/10.1080/0020174X.2019.1685231?casa_token=7HcMp9Zj538AAAAA%3AKJrchPsaCqun6pktp6TzSQEIIczgeKZLnp0-wi9mVIpJrvHDFnMM2A8EeuoUtag8_d_-K8X04HT8>, as well as a variety of classification strategies. Both the expansion and the contraction of the term may be seen in its reconceptualisation by scholars as well as by the platforms themselves. The definitional evolution <https://hal.archives-ouvertes.fr/hal-02003893/> is embodied in such phrasings as ‘junk news <http://comprop.oii.ox.ac.uk/wp-content/uploads/sites/89/2017/03/What-Were-Michigan-Voters-Sharing-Over-Twitter-v2.pdf>’ and ‘problematic information <https://datasociety.net/wp-content/uploads/2017/08/DataAndSociety_LexiconofLies.pdf>’, which are broader in their classification <https://misinforeview.hks.harvard.edu/article/research-note-the-scale-of-facebooks-problem-depends-upon-how-fake-news-is-classified/>, whilst the platforms appear to prefer the terms ‘false’ (Facebook) or ‘misleading’ (Twitter), which are narrower.
On the back-end the platform companies also develop responses to these activities. They would like to automate as well as outsource their detection and policing, be it through low-wage content moderators, (volunteer) fact-checking outfits or user-centred collaborative filtering such as Twitter’s ‘birdwatchers <https://blog.twitter.com/en_us/topics/product/2021/introducing-birdwatch-a-community-based-approach-to-misinformation.html>’, an initiative that Twitter says was born of a societal distaste for a central decision-making authority, a finding drawn from qualitative interviews. The platforms also take major decisions to label content by world leaders (and indeed have world leader content policies <https://blog.twitter.com/en_us/topics/company/2019/worldleaders2019.html>), which subsequently land platform governance and decision-making in the spotlight.
More broadly there has been a rise in the study of ‘computational propaganda <https://ijoc.org/index.php/ijoc/article/view/6298>’ and ‘artificial amplification <https://library.oapen.org/handle/20.500.12657/42884>’, which the platforms refer to as ‘inauthentic behaviour’. These may take the form of bots or trolls; they may be ‘coordinated’ by ‘troll armies’, as outlined in Facebook’s regular ‘coordinated inauthentic behaviour reports’. As its head of security policy puts it <https://about.fb.com/news/tag/coordinated-inauthentic-behavior/>, Facebook defines it (in a roomy, plainspoken manner) as ‘people or pages working together to mislead others about who they are or what they are doing’. Occasionally data sets become available (from Twitter or other researchers) that purport to be collections of tweets <https://fivethirtyeight.com/features/why-were-sharing-3-million-russian-troll-tweets/> by these inauthentic, coordinated campaigners, whereupon scholars (among other efforts) seek to make sense of which signals can be employed to detect them.
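By way of illustration only (and not a method drawn from the reports or data sets named above), one frequently discussed signal is many distinct accounts posting the same text verbatim. A minimal Python sketch, assuming tweets arrive as (account_id, text) pairs and using an arbitrary threshold of ten accounts:

from collections import defaultdict

def flag_copypasta(tweets, min_accounts=10):
    """tweets: iterable of (account_id, text) pairs.
    Returns {text: set of account_ids} for texts posted verbatim
    by at least `min_accounts` distinct accounts."""
    accounts_by_text = defaultdict(set)
    for account_id, text in tweets:
        accounts_by_text[text.strip().lower()].add(account_id)
    return {text: accounts for text, accounts in accounts_by_text.items()
            if len(accounts) >= min_accounts}

# Example: ten accounts posting the same slogan would be flagged.
sample = [(f"user{i}", "Vote early, vote often!") for i in range(10)]
sample += [("user42", "Off to the polls this morning.")]
print(flag_copypasta(sample))

Such a heuristic is of course crude; in practice researchers combine several signals (posting times, account creation dates, shared links) rather than relying on any single one.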
Other types of individuals online have also caught the attention of the platforms as ‘dangerous’ (Facebook), and have been deplatformed <https://journals.sagepub.com/doi/full/10.1177/0267323120922066>, a somewhat drastic step that follows (repeated) violations of platform rules and, presumably, temporary suspensions. ‘Demonetisation’ is also among the platforms’ repertoire of actions, should these individuals, such as extreme internet celebrities, be turning vitriol into revenue, though there is also the question of which advertisers attach themselves (knowingly or not) to such content. Moreover, there are questions about why certain channels <https://psycnet.apa.org/record/2019-06906-001> have been demonetised for being ‘extremist’. Others ask whether ‘counter-speech’ is an alternative to counter-action.
On the interface, where the metrics are concerned, there may be follower factories behind high follower and like counts. The marketing industry dedicated to social listening, as well as computational researchers, have arrived at a series of rules of thumb and signal-processing techniques that aid in flagging or detecting the inauthentic. Just as sudden rises in follower counts might indicate bought followers, a sudden decline suggests a platform ‘purge’ of them. Perhaps more expensive followers gradually populate an account, making it appear natural. Indeed, there is the question of which kinds of (purchased) followers are ‘good enough <https://wiki.digitalmethods.net/Dmi/SummerSchool2020GoodEnoughPublics>’ to count and be counted. What is the minimum amount of grooming? Can it be automated, or is there always some human touch? Finally, there is a hierarchy in the industry, where Instagram followers are the most sought after, but ‘influencers’ (who market wares there) are often contractually bound to promise that they have not ‘participated in comment pods (group “liking” pacts), botting (automated interactions), or purchasing fake followers <https://www.wired.com/story/instagram-fake-followers/>’.
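A minimal sketch of that rule of thumb, assuming daily follower counts are at hand and using an arbitrary 20% day-over-day threshold (an assumption for illustration, not an industry figure):

def flag_follower_anomalies(daily_counts, threshold=0.20):
    """Return (day_index, relative_change) pairs where the day-over-day
    change in follower count exceeds the threshold in either direction."""
    anomalies = []
    for day, (prev, curr) in enumerate(zip(daily_counts, daily_counts[1:]), start=1):
        if prev == 0:
            continue  # skip brand-new accounts to avoid division by zero
        change = (curr - prev) / prev
        if abs(change) >= threshold:
            anomalies.append((day, change))
    return anomalies

# Example: a steady account that suddenly gains, then loses, followers.
counts = [1000, 1010, 1025, 5000, 5050, 3000, 3010]
for day, change in flag_follower_anomalies(counts):
    label = "spike (possible purchase)" if change > 0 else "drop (possible purge)"
    print(f"day {day}: {change:+.0%} -> {label}")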
Organisers: Richard Rogers, Guillen Torres and Esther Weltevrede, Media Studies, University of Amsterdam. Application information at https://www.digitalmethods.net
Prof. Richard Rogers
Media Studies
University of Amsterdam
Out now:
R. Rogers (2019), Doing Digital Methods, Los Angeles: Sage.
Just out:
R. Rogers and S. Niederer (eds.) (2020), The Politics of Social Media Manipulation, Amsterdam: Amsterdam University Press.