The COVID-19 pandemic has had significant political, social, and economic effects across the globe, including in Denmark. In response to the pandemic, Denmark became one of the first countries in Europe, and indeed the world, to close its borders, lock down public institutions, and ban gatherings of more than ten people, among other “social distancing” measures. Like elsewhere in the world (Singh et al. 2020), Twitter has emerged as an important space for public discussion and debate around COVID-19 in Denmark. How are we to understand what, how, and why we post in relation to this pressing matter of global concern?
Social media data as so-called “big data” often lends itself to computational analysis through methods that include hashtag trends analysis and sentiment analysis, among many others (e.g. Bruns & Stieglitz 2013). At the same time, anthropologists and other social scientists have pursued various methods, including “digital ethnography” or “virtual ethnography,” “netnography,” “hashtag ethnography,” and so on, as ways of exploring the motivations and meanings associated with social media and other technologies (e.g. Kozinets 2019; Pink et al. 2016). While computational and ethnographic methods are often used independently, we explore here the pitfalls and possibilities of a computational anthropology, which combines both computational analyses and insights from a more ethnographic approach, building on research from anthropology, science & technology studies (STS), and sociology. We use this combination to understand public discourse and affect in the Danish Twitter landscape in relation to COVID-19, building on work performed together in the interdisciplinary Copenhagen Center for Social Data Science (SODAS).
Attention, Place, and Engagement on (Danish) Twitter
In thinking about how and why people post on Twitter, one key aspect to consider is its design as a digital platform and how it captures and channels user attention in particular ways (Siles 2013). Natasha Schüll (2014) has explored how “addiction by design” is promoted by digital gambling machines as an interaction between humans and machines (and environment), where features such as the speed and reliability of play capture the attention of players in ways that may come close to being addictive. Comparatively, features of Twitter’s design call on people to attend to the platform in particular ways. Twitter presents a “liveness,” both for us as researchers and for users, but also structures the kinds of interactions that can take place through this particular medium (Marres and Weltevrede 2013; cf. McLuhan 1964). As such, it comes to co-configure a range of networked publics based on particular forms of affective expression and senses of togetherness (boyd 2010; Papacharissi 2016).
One very concrete way in which Twitter configures its users is through the dual constraint that all posting is public while restricted to only 280 characters. The main feed is also organized temporally so that new tweets are continually updated and added at the top of one’s feed. Indeed, the combination of short posts and continual updates is often seen to move at a faster pace than other social media like Facebook. Yet, tweets are also connected via particular hashtags, which draw people and posts together topically (or at least indexically) and thereby produce a sense of shared temporality (Bonilla and Rosa 2015) or what might even be called an imagined digital community. The most popular hashtags can make it into algorithmically determined “trends,” highlighting happenings in action such as #covid19dk, as illustrated in Figure 1. At the same time, users sometimes purposefully seek to game the algorithm and make topics trend, simultaneously highlighting issues like racism and the significance of “trends” in Twitter’s design and use (Sharma 2013). Taken together, the Twitter feed, aggregation by hashtags, and the promotion of trending topics amount to a powerful “attention by design” in Schüll’s sense.
However, even if they are the result of design, the kinds of connections and bonds formed to and by Twitter can still be very affective and social (Papacharissi 2016). As examples of gaming Twitter’s trends show (Bradshaw 2019), users shape the kinds of interactions that take place on the site and use it in ways both in line with and in opposition to its design (Miller 2016). A strong attachment and attention to social media sites like Twitter may also in part be explained by the role such a site can serve as a “meta-friend” – a reliable place to engage with when bored or lonely (Miller 2013). Research on online spaces, broadly speaking, shows how they can become places where people live (Ginsburg 2012). While this has been explored particularly in terms of virtual worlds like Second Life, social media sites may not only offer a connection between people and places, but also become places constructed and inhabited in and of themselves (Miller 2012, 2013).
Compared to sites like Myspace where users can decorate their home pages or Facebook where profile pages may represent private spaces (Miller 2012), however, Twitter does not provide many options for personalization beyond a couple of photos and a brief self-description for a profile. Nonetheless, more or less ephemeral or permanent groups form around hashtags and hash-tagging practices, such as the production and propagation of Black Twitter through “Blacktags” (Sharma 2013). Thus, in practice, Twitter cannot be seen as a single monolithic platform; rather, there are multiple Twitters with different affective and social norms. Also, rather than constituting one undifferentiated “public sphere,” hash-tagging serves to demarcate a range of more particular publics, or “issue publics” (Marres & Weltevrede 2013), who tend to share topical concerns and, sometimes, affective registers – such as we explore below for the coronavirus pandemic on Danish Twitter.
Danish Twitter overall is often described as a mostly elite community of politicians, journalists, interest groups, and citizens who are more interested in politics than the average person in Denmark. While figures vary across surveys, official statistics show that as of 2019 only approximately 10% of Danish citizens were active on Twitter, even as research finds that upwards of 65% of all candidates for the 2015 general elections deployed accounts and messages on the platform (Blach-Ørsten et al. 2017). Under regular (non-pandemic) circumstances, and when queried on their Twitter use, Danish politicians state that they use the platform chiefly to attract the attention of journalists or sometimes to communicate directly to their constituencies, without journalistic filtering (Blach-Ørsten et al. 2017: 336). Moreover, the same study indicates that active use of the platform amongst political stakeholders skews geographically towards the greater Copenhagen area, as the country’s capital and center of political power.
Like elsewhere, Twitter thus forms part of an increasingly multifaceted media environment for the circulation of commercial and political discourses in Denmark. This media environment includes traditional news media and other social media, not least Facebook, with upwards of 60% of the Danish population active on the latter platform. At the same time, Denmark has also recently experienced a number of more concerted Twitter- and hashtag-based campaigns, including the local version of the global #metoo movement (Hosterman et al. 2018), as well as, for instance, significant labor movement mobilization as part of collective bargaining conflicts in 2018. Likewise, Denmark has been at the receiving end of Europe-wide Twitter-based shaming campaigns, directed at the country’s handling of the refugee crisis in 2015 (Gualda & Rebello 2016: 205). Dynamics of affective issue public formation via Twitter are thus very much part of public-political life in Denmark, even as the platform itself attains its full importance mostly in being linked into a broader media ecology.
Studying the Dynamics and Spaces of Political Attention
Continuing on the theme of attention, and specifically political attention, a growing body of work in media studies, political sociology, and computational social science has explored the role of digital platforms in the modulation of political attention (see e.g. Bosch 2017; Bonilla and Rosa 2015). Since the 1960s, political sociologists have been interested in understanding the rise and fall of public attention with respect to political issues and societal problems, known as “issue-attention” (Blumer 1971; Downs 1972). Hilgartner and Bosk (1988) developed what they called a “public arena model” for studying how several issues compete for attention within public contexts with limited informational capacity, like newspapers and television.
With the growing role of social media and other digital platforms for political communication, this question of how issues “compete” is as relevant as ever. Still, there is a crucial difference in terms of what the “competition” is about. For if the now foregone “information age” was dominated by large news corporations, whose attention all political stakeholders competed over (“the prime time slot”), then, in the new “age of attention” (Williams 2018), the primary “scarce resource” is the attention of individual digital consumers. Indeed, some of the most influential and also most interesting work by digital anthropologists can be said to have been concerned with precisely this question. For instance, Miller (2000) argued early on that websites are systematically designed to seduce and entrap the attention of users (cf. Gell 1998). In continuation of this prescient work, Horst and Miller (2012: 27) discuss the “attention-seeking mechanisms” of digital technologies against a background of increasing competition over attention.
From a more quantitative perspective, the development of new computational methods for automated text analysis in the fertile intersection between the field of natural language processing (NLP) and computational social science (see Evans & Aceves 2016) has allowed researchers to empirically examine how and why politicians and other stakeholders direct flows of attention through digital media (Quinn et al. 2010; Grimmer 2013; Bail 2016). The fact that politicians and parties can now easily monitor responses from supporters and adversaries via social media metrics has consequences not just for their attempts to steer the attention of social media users toward certain issues as opposed to others. It also spills over into and thereby affects the form and content of their offline political practice (Blach-Ørsten et al. 2017).
It was against this backdrop that our interdisciplinary SODAS group decided to formulate a research project in the context of the COVID-19 pandemic. Essentially, our ambition was to use state-of-the-art social data science methods to empirically map out how political attention pertaining to COVID-19 flows between Danish public authorities, news media, and ordinary citizens. Inspired by the aforementioned work on Twitter’s attention-by-design, affective publics, and the economies of political attention in the era of social media, our overarching assumption was that the tweets posted by both the Danish government and state authorities and the response by the public in the context of COVID-19 greatly influence and regulate the flow of public attention, and thereby, we argued, shape and channel the content and form of issue politics revolving around the government-imposed lockdown. In addition to providing an overview of what sub-issues were prominent on Twitter and how these developed in time, we also envisioned this mapping would contribute to an understanding of the effectiveness of public authorities in setting agendas on news media and on social media platforms, and how the reach, effect, and perception of these varies across groups in society.
Accordingly, we were interested in exploring several questions: How does public attention pertaining to the COVID-19 outbreak and its overlapping sub-issues flow between state authorities, news media and citizens in Denmark and beyond? What key public issues are articulated in relation to the outbreak (e.g. social distancing, border closing, intensive care unit (ICU) capacity, state economic support)? And how does COVID-19 affect existing key issues, like climate change? We report here in particular on the second of these questions.
What and How We Post: Mapping the Danish Twitter Landscape during COVID-19
To analyze Danish tweets related to COVID-19, we collected ~770,000 tweets over the period February 24 to April 28, 2020, each containing at least one of 147 search terms, which we collectively identified as being used in relation to COVID-19 on Twitter in Denmark. From this dataset, we applied a language filter to remove tweets not written in Danish.[i] In addition, we narrowed down the tweets we analyzed to those that contained at least one of five more explicit COVID-19 search terms, amounting to a final dataset of ~140,000 original tweets in total.[ii]
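The two-stage filtering described above can be sketched in a few lines. The term lists and the language check below are toy stand-ins (the actual project used 147 search terms and a proper language classifier, not a handful of Danish function words), so this is illustrative only:

```python
# Illustrative sketch of the two-stage filter: keep Danish-language tweets
# that contain at least one explicit COVID-19 search term. Both word lists
# are invented placeholders, not the project's actual resources.
COVID_TERMS = {"covid19dk", "corona", "covid-19", "coronavirus", "covid19"}
DANISH_MARKERS = {"og", "ikke", "det", "jeg", "er"}  # crude stand-in filter

def looks_danish(text):
    """Toy language filter: flags any common Danish function word."""
    return bool(set(text.lower().split()) & DANISH_MARKERS)

def mentions_covid(text):
    """True if the tweet contains an explicit COVID-19 search term."""
    words = set(text.lower().replace("#", "").split())
    return bool(words & COVID_TERMS)

def filter_tweets(tweets):
    """Keep only Danish-language tweets with an explicit COVID term."""
    return [t for t in tweets if looks_danish(t) and mentions_covid(t)]

tweets = [
    "Det er ikke sjovt #covid19dk",   # Danish + covid term -> kept
    "Stay home everyone #covid19dk",  # covid term, but not Danish -> dropped
    "Jeg er hjemme i dag",            # Danish, but no covid term -> dropped
]
print(filter_tweets(tweets))  # -> ['Det er ikke sjovt #covid19dk']
```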
The idea behind our first step, the deployment of co-hashtag analysis, was to map out the emerging thematic clusters and sub-issues for the Danish coronavirus Twitter attention space as a whole, as one version of semantic network analysis sensitive to change over time (Rule et al. 2015). As argued by scholars in STS, in particular (e.g. Marres & Gerlitz 2016), rather than rely solely on popularity and trend measures per se, Twitter data are suitable for co-word analysis, including co-hashtag analysis, since tweets provide a workable unit within which to detect co-occurrence relations as signs of thematic overlaps. The resultant network (Figure 2) indicates such overlaps in the shape of clustered hashtag relations, or sub-issues, spatialized according to their relative distances. As a technique, in other words, co-hashtag analysis amounts to a quantitative semantic mapping of certain Twitter-based dynamics, as one version of a wider surge in interest in digital methods for cultural-political map-making.
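The core of co-hashtag analysis is a weighted co-occurrence network: every pair of hashtags appearing in the same tweet adds one to that pair's edge weight. A minimal sketch, with invented example tweets (the actual study additionally clustered and spatialized the resulting network, as in Figure 2):

```python
# Minimal co-hashtag analysis: count how often pairs of hashtags
# co-occur within the same tweet. Example tweets are invented.
from collections import Counter
from itertools import combinations
import re

def extract_hashtags(text):
    """Return the unique hashtags in a tweet, lowercased and sorted."""
    return sorted(set(re.findall(r"#(\w+)", text.lower())))

def cooccurrence_edges(tweets):
    """Weighted edge list: (hashtag_a, hashtag_b) -> co-occurrence count."""
    edges = Counter()
    for t in tweets:
        for a, b in combinations(extract_hashtags(t), 2):
            edges[(a, b)] += 1
    return edges

tweets = [
    "Ny plan #covid19dk #sundpol",
    "Hjælpepakker #covid19dk #dkbiz",
    "#covid19dk #sundpol #blivhjemme",
]
edges = cooccurrence_edges(tweets)
print(edges[("covid19dk", "sundpol")])  # co-occur in two tweets -> 2
```

In the study, such edge weights were then fed into standard network layout and community-detection routines to reveal the clusters discussed below.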
Figure 2 shows the overarching or total hashtag-based coronavirus attention network for the entire duration under study (February 24 to April 28, 2020), with three core thematic sub-clusters or sub-issues standing out rather clearly, even from a mere visual (as opposed to mathematical) analysis. The three sub-issues in question include health policy in a broad sense, marked e.g. by the “sundpol” (abbreviated Danish for “health politics”) hashtag; economic policy, also broadly (“dkbiz” standing for “Danish business”); and a slightly smaller civic morality-like sub-issue organized around the call to “stay home” (“blivhjemme”). For brevity, we dub these the SUND (health), the ECOPOL, and the REAC (for “reaction”) cluster, respectively. To the upper-left, one finds a smaller cluster on school policy. The stand-alone cluster visible to the right of the graph pertains, in turn, to concerns with the lockdown of the Danish soccer league and, interspersed, with the postponement of Denmark’s involvement in what was to have been the 2020 European Soccer Championships. This simple map-wise juxtaposition, we might say, thus suggests some of the affective and social divergences in play in the Twitter coronavirus space.
Realizing that the Twitter attention space would likely reflect the rather dramatic events unfolding in Danish politics during the period in question, and not least the government-enforced lockdown of public institutions on March 11, we decided in turn to split our dataset into three roughly equal-length sub-periods: the run-up to, the implementation phase of, and the aftermath following the lockdown. Doing so revealed a rather clear temporal pattern: in addition to an overall increase in tweets around the lockdown, while the space was initially dominated solely by the SUND and ECOPOL clusters, the REAC cluster gradually emerged during the lockdown implementation phase and gained full prominence in the aftermath, at one point becoming in fact the most prevalent cluster overall. Given the backdrop previously laid out in terms of a platform skewed towards political elite stakeholders, this very outpouring of civic morality on Danish Twitter, with attendant loaded calls to “keep distance”, “wash hands”, (avoid) “hoarding” (of toilet paper etc.), and to generally show “society-mindedness” (“samfundssind”), is quite striking. Indeed, the map here attains what we might call its own seductive qualities, lending itself easily to slick, overarching interpretations of the “civic-minded” Danish “welfare society” in action.
In addition to mapping what topics people tweet about and how these topics relate, we were also interested in how people tweet about COVID-19. Based on an open call for inputs from our group, we settled on a common (if not uncontroversial; cf. Puschmann & Powell 2018) technique for quantitative text analysis, namely so-called sentiment analysis or opinion mining and affect analysis (Mohammad & Kiritchenko 2015; Ignatow & Mihalcea 2017). Specifically, we conducted (1) an analysis of the overall mood of the tweets (positive, neutral, or negative), and (2) an analysis of the use of language associated with six affects (anger, joy, disgust, sadness, fear, surprise) and two attitudes (trust, expectation), which we collectively refer to as tones. We borrowed these eight categories from the literature on sentiment analysis, where they are typically referred to as “basic emotions”, deriving as they do from mainstream psychological research (e.g. Ekman 1992; Plutchik 1994). However, we do not understand these eight categories as referring to inner mental states assumed to unfold within insular individual human subjects, but as modes of speech pertaining to a specific genre of socially constituted discourse between subjects; it was precisely to convey this that we decided to call our second method of inquiry an “affect analysis” (contra “sentiment analysis”).
Our initial mood analysis was based on a text mining technique which, to put it mildly, is not very fine-grained. Essentially, a dictionary-based sentiment analysis works by first associating a binary valence (“positive” or “negative”) with each word in the dictionary, and then by assigning a gradation of the assumed degree of this so-called polarity on a scale from 1 to 5. For example, in the dictionary called AFINN[iii] that we used, the word “eminent” is scored +4, while the word “violent” is scored -4. The so-called sentiment of a tweet is then calculated as the sum of the sentiment of each of its words. A quick application of this model to our text data showed a clear shift in mood around the Danish government’s announcement of the lockdown. While Danish tweets about coronavirus were on average “negative” before the announcement (around -0.75), they seemingly became more “positive” after (around -0.30). To give an illustration of what this might mean, it amounts to the word “thanks” being used more, or the word “destroyed” being used less, in every fourth to fifth tweet. But, of course, to claim that this is what our data “show” or “mean” amounts to a very narrow idea of what a corpus comprised of thousands of tweets potentially “shows” and “means”.
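The scoring scheme just described is simple enough to sketch directly: sum the signed lexicon scores of a tweet's words, treating unknown words as zero. The scores for “eminent” and “violent” mirror the examples in the text; the remaining entries, and the example sentences, are invented for illustration (a real analysis would use the full Danish AFINN word list):

```python
# Dictionary-based mood scoring in the AFINN style: each lexicon word
# carries a signed score, and a tweet's "sentiment" is the sum of the
# scores of its words. Only "eminent" (+4) and "violent" (-4) come from
# the text; the other entries are illustrative stand-ins.
AFINN_SAMPLE = {"eminent": 4, "violent": -4, "thanks": 2, "destroyed": -3}

def tweet_sentiment(text, lexicon=AFINN_SAMPLE):
    """Sum the lexicon scores of a tweet's words (unknown words count 0)."""
    return sum(lexicon.get(w, 0) for w in text.lower().split())

print(tweet_sentiment("An eminent but violent debate"))  # 4 - 4 = 0
print(tweet_sentiment("Thanks everyone"))                # 2
```

As the example shows, opposing words simply cancel out, which is one concrete reason the technique is, as noted, not very fine-grained.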
Next, in our attempt to quantify the changes in tone in our corpus, we again used a dictionary-based technique, although this time we also deployed a more qualitative validation. More precisely, we used a Google-Translate-based translation into Danish of the Canadian NRC Emotion Lexicon[vii], which we subsequently qualified by cleaning up the specific word lists according to standard inter-coder reliability norms within content analysis (Weber 1990). To detect the affect “sadness”, for example, this method works by first associating a list of words with this label, such as “worried” and “loss”. To quantify the affect “sadness” across a corpus of tweets, the initial model then simply proceeds by counting the number of tweets mentioning at least one word from the corresponding “sadness” entry in the dictionary. Note that, like most quantitative text mining techniques, an automated affect analysis of this sort is unable to detect negations (“not sad”), let alone more complex semantic forms such as humor and irony (on which more below). Hence, for example, the phrase “I have no trust in the government” would, according to the model, fall under the label “trust”. In part for these reasons, and in order to evaluate and validate the results generated by this purely automatic affect mining more generally, four persons manually and qualitatively assessed which of the 100 most frequent words from the NRC Lexicon to actually include in our analysis. Keeping only those words that at least three of the four assessors agreed to include drastically reduced the number of words deemed relevant for each tone (the six affects and two attitudes), dropping in each case to below 25 – a testament to the importance and added benefits of “human-augmented text mining”, to coin a term.
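The two steps described above can be sketched as follows: first keep only the dictionary words that at least three of the four coders voted to retain, then compute the share of tweets mentioning at least one retained word for a given affect. The word lists, coder votes, and example tweets are invented for illustration:

```python
# Sketch of human-augmented affect counting: (1) coder-validated word
# lists, (2) share of tweets mentioning at least one list word.
# All word lists, votes, and tweets below are invented examples.

def validate_words(votes, threshold=3):
    """Keep words that at least `threshold` of the coders voted to retain.
    votes: {word: number of coders (0-4) voting to keep it}."""
    return {w for w, n in votes.items() if n >= threshold}

def affect_share(tweets, affect_words):
    """Share of tweets mentioning at least one word from the affect list."""
    hits = sum(1 for t in tweets if set(t.lower().split()) & affect_words)
    return hits / len(tweets)

sadness_votes = {"worried": 4, "loss": 3, "dark": 1}  # "dark" is dropped
sadness = validate_words(sadness_votes)

tweets = ["I am worried today", "Great news", "Such a loss", "All fine"]
print(affect_share(tweets, sadness))  # 2 of 4 tweets -> 0.5
```

Note that, exactly as discussed in the text, a tweet like “I am not worried” would still count as a “sadness” hit, since the method cannot see negation.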
The plot in Figure 3 traces how the share of tweets containing words associated with each of the eight tones in the cleaned-up dictionary developed relative to the first week analyzed.[v]
Overall, tweets containing words from the “trust” (“tillid”) entry in the cleaned-up dictionary rose to about twice the share of the first week, while tweets containing “fear” (“frygt”) words in the similarly adjusted dictionary fell to about half. Between these two extremes, five of the remaining six attitudes and affects were more prevalent at the end of the period covered, albeit with significant fluctuations along the way. “Disgust” (“afsky”) was the only affect except “fear” that was less prevalent at the end of the period.
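The normalization behind Figure 3 is simple: each week's share of, say, “trust” tweets is divided by the first week's share. A sketch with invented weekly values chosen to match the roughly twofold rise reported above:

```python
# Normalizing weekly tone shares by the first week, as in Figure 3.
# The weekly shares below are invented for illustration only.
weekly_trust_share = [0.05, 0.06, 0.08, 0.10]  # share of "trust" tweets/week
relative_to_week1 = [s / weekly_trust_share[0] for s in weekly_trust_share]
print(relative_to_week1[-1])  # roughly doubled relative to week 1
```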
What to make of this? In an attempt to search further for semantic context, we decided to mobilize ethnographically trained student assistants to further manually filter and code a sample of approximately 500 “trust” tweets. Initially, this was simply to see whether such tweets in fact express trust, as usually perceived, or rather express mistrust toward, say, some branch of government. Somewhat strikingly, yet in some sense consistent with our move towards affect (rather than sentiment) analysis, more than half of this sample turned out to express mistrust rather than trust, to the extent we deemed it possible to fully judge and code. Overall, then, it seems that the COVID-19 situation and not least the government-led partial lockdown has indeed exerted a clear, if also rather ambivalent, affective push towards expressions of (mis)trust on Danish Twitter, shaping its public tone.
Pros and Cons of Using Text Mining Methods for Analyzing Twitter Data
Quantitative text mining techniques have the obvious advantage that they are fast to deploy and scalable, so that they can be used to analyze datasets of an enormous size – but does that come at the price of data points that are more “noise” than “signal”, to use Nate Silver’s phrase (2012)? From an anthropological point of view, one problematic feature is obviously the way in which isolated or near-isolated words are taken to continually signify their original context of use, even though they have been more or less completely ripped apart from it. At the semantic level, only a small fragment of the context is preserved. Hence, for the analysis of sentiment and affect, the preserved context amounted to the fact that all dictionary words and hashtags co-occurred with at least one of our five coronavirus search terms, leaving the Twitter coronavirus attention space as the vague meta-context; the mapping of topics, meanwhile, relied on hashtag co-occurrences, thereby attempting to re-signify contours of a semantic landscape initially stripped from view by the singular reliance on hashtags.
Still, even had we leveraged so-called vector-space models or other state-of-the-art NLP techniques for quantitative analyses of text, which are known to be more sensitive to semantic context (Kozlowski et al. 2019), the variety and specificity of social, cultural, and political-economic settings in which these tweets were produced would still have been more or less entirely inaccessible to us. Note here that it may be useful to distinguish between context that is internal to the platform and context external to it (see Marres & Weltevrede 2016). Aspects of the former were accessible to us, such as the thread in which a tweet might be placed, which URLs it might link to, and other platform-specific features and constraints, as discussed earlier in this text. Indeed, it was precisely because we had access to this “inner” layer of context with respect to our Twitter data that we were able to mobilize ethnographically trained members of our team to carry out a manual, qualitative coding of a sample of tweets related to the attitude “trust”.
Having said that, these text mining methods nonetheless also allow us to do things with our data that a purely qualitative approach would not have allowed. For example, the capacity of these techniques to “see” and render visible patterns and dynamics within and across the total discursive vista defined by a given large-scale text corpus (Nelson 2017) may help point our analysis in new directions, which may (or may not) be worthwhile to explore further – patterns, that is to say, which it would have been very time-consuming or perhaps downright impossible to identify by means of conventional human reading and interpretation. To be sure, based on the corpus of tweets that we have sampled, interesting questions abound. Where to start? Even clear theoretically driven hypotheses are bound to leave several possible points of analytical as well as empirical departure unexplored. While analyses like the above in no way answer this question, they can help narrow down where to continue qualitatively.
These themes of dynamic pattern detection across large-scale textual data are highly pertinent to our deployment of co-hashtag analysis in particular, as well as more generally to the broader families of semantic and other social network analysis methods to which this technique belongs (Rule et al. 2015; Marres & Weltevrede 2016). Indeed, when considered within a broader view of method repertoires and their various alignments and conflicts in the history of the social sciences writ large, network analysis has arguably long straddled the qualitative-quantitative divide in interesting, yet also often non-coherent ways, including within anthropology (Knox et al. 2006). In recent digital method developments, broadly construed, this has now turned into a distinct resurgence, as noted, in techniques and metaphors of “mapping”, a notion replete with landscape-like connotations of using large-scale network data to better describe the semantic and social relations, clustering effects, and relative distances enacted within entire public arenas and attention spaces like that of COVID-19 on Twitter.
Still, for all the interest expressed by STS and other scholars (ourselves included; Blok & Pedersen 2014; Blok et al. 2017) towards the “quali-quantitative” (Latour et al. 2012) potentials of such digital method techniques, there is no denying the fact that co-hashtag analysis, as we deploy it here, still leans emphatically toward the “quantitative” pole of that continuum (or, better, that fractal distinction; see Abbott 2001). To stay in the lingo, the semantic and thematic landscape that we enact by its deployment remains rather coarse in its contours, with little opportunity to re-scale, jump, or re-focus onto lesser-trodden filaments of the network, let alone to “zoom in” on details of how clusters (like ECOPOL, say) are themselves internally composed in affective and other tensions. And again, unlike what the zoom metaphor might convey, this is in fact more than a “merely” technical issue: not only do we not, of course, gain access here to the wider political and other context external to the platform (the wider landscape, as it were); we also do not attain much qualitative grip on the “inner” layers of semantic struggle, ambiguity, irony, and exasperation presumably traced, at other levels, around the hashtags per se.
Turning now more specifically to a discussion of the potentials and pitfalls of sentiment analysis and its variations, it may be useful to begin by stressing that this kind of approach is in many ways fundamentally at odds with the predominant methodological, epistemological, as well as – some would probably add – political and ethical commitments of anthropology and qualitative sociology. As has been demonstrated and argued in a range of recent publications within media studies and digital sociology (e.g. Marres & Weltevrede 2016; Puschmann & Powell 2018), there are at least three problems with sentiment analyses from a broadly interpretative social-science methods perspective. First, as the technique has travelled over the years from psychology to computational linguistics via marketing and into the broader computational social science domain (including Twitter analysis), its built-in assumptions (such as the easily measurable polarity between “positive” and “negative” sentiments) have increasingly gone out of sync with its use scenarios. Second, even assuming an appropriate use scenario, the technique often proves quite unreliable as a way of classifying words-in-text, indeed seemingly markedly more so than other, comparable text mining techniques, based e.g. on supervised machine learning (Ceron et al. 2014). Third, as we discussed more generally for automated text mining techniques, the abstraction of words from their semantic and pragmatic contexts renders any interpretation of sentiment analyses subject to built-in uncertainties.
Homing in now more specifically on the sentiment analyses of text data mined from Twitter, of which our affect analysis of Danish tweets during the COVID-19 lockdown may be seen as an (albeit specific and somewhat tweaked) example, there are, as noted, profound limitations to the inferences and claims about “affect” such an approach allows for. We can separate out two dimensions to this problem. On the one hand, it would be naïve and reductionist (as well as speculative) to claim that the “affects” and “attitudes” identified in our data by the (more-or-less cleaned-up) automated model in any direct way correspond to whatever emotions or opinions are entertained by the users posting these tweets. There may be such a correspondence, or at least some degree of it, or there may not – we simply cannot tell or know based on this kind of approach. To actually have a chance of finding out would require attending in a deeper and more detailed manner to the meaning and the context of a sample of individual tweets, which would call for the use of very different and more qualitative methods, including what sociologists have been calling “content analysis” at least since the 1950s (Weber 1990).
But there is also a second and, in some sense, deeper problem. We are referring here to the fact that, even in the (to our knowledge) few cases where a systematic content analysis of tweets has been conducted (Gaspar et al. 2016; Meadows et al. 2019), we are still left with the moot question as to whether, and if so how, the meanings and sentiments inferred from individual tweets and their (platform-specific as well as wider) contexts correspond to people’s motivations, emotions, and mental states more generally. After all, this is really just another riff on the time-honored anthropological and sociological dictum that we need to pay very close attention to the potential differences between what people say (they do) and what they (actually, sometimes) do.
To illustrate some of these problems and some possible attempts to solve or at least mitigate them, consider Gaspar et al.’s (2016) recent paper “Beyond Positive or Negative: Qualitative Sentiment Analysis of Social Media Reactions to Unexpected Stressful Events”, whose title points to its relevance to our empirical setting of trying to make sense of “why” and “how” people tweet about COVID-19 as a health pandemic. Now, we are inclined to agree with Gaspar et al. that, when it comes to an analysis of the “mood” expressed in Twitter data, what is needed are “theory-driven explorations of [Twitter] users’ expressions”, something which, as they note, is “scarce in social media research in general” (2016: 181). We also find quite convincing the claim that “Twitter in particular can be considered a good source of affective expressions due to the quick, spontaneous and affective reactions found there” (2016: 181), even if we would like to learn more about the specific psychological work and the meta-psychological assumptions upon which this claim rests. And finally, we are sympathetic to the idea that a proper “qualitative analysis [of such tweets should] not [be] carried out at the level of words but rather [… of …] the whole tweet content, within its 140-character limit, as the unit of analysis…, [thereby] allowing for assessing the context in which specific keywords were produced”. We are less inclined, however, as per the “deeper” aspect of the problem of inferring meaning (and especially in the form of intentions) from language use, to follow Gaspar et al. in the thinking behind the claim that this gives “access to… coping resources and indicators of the social grounding of individual perceptions”, as these pertain to a crisis situation not unlike our present COVID-19 world (2016: 183).
Nowhere is the limitation of this sort of theory-driven content analysis clearer than in Gaspar et al.’s discussion of what they, in a telling off-hand way, conjoin as “humor, irony” (2016: 181). What is made clear here, it seems to us, is the oversimplified way in which not just these but many other social media researchers conceive of the nature and role of so-called “humor” and “irony” in the context of these platforms. Take, for example, the following claim: “seemingly positive expressions like joking can indicate a lack of resources for more active forms of coping” (2016: 188). While it is surely correct to highlight the fact that a lot of the expressions and words used on Twitter should not be taken at face value, the authors seem here to be tapping into a psychologizing (Freudian) and over-simplified theory of humor, which assumes that its “function” is to facilitate an “accommodation of the stressful event(s), based on a cognitive restructuring of the situation”, akin basically to a “denial of the perceived threat” (2016: 181). This, then, is another sense in which our approach fundamentally differs from that of Gaspar et al.: whereas they seem to find the adoption of the terminology of sentiment analysis unproblematic, including the more or less outspoken assumption that these “sentiments” can somehow be said to exist in individual mind-brains, we are working under the basic premise that our object of analysis consists of affects that come into being and flow between subjects via context-dependent genres of language and discourse more generally.
The problem with this line of thinking, apart from the fact that it rests on an outdated and what we consider a vulgar-functionalist model of humor, is that it is incapable of taking into account the extremely multifaceted, subtle, and often irreducibly ambivalent communication involved in people’s uses of ironic language and other semi-propositional attitudes. In order to begin overcoming this problem, which seems to be endemic to both quantitative and qualitative analyses of digital text data, it seems to us that, in addition to what Gaspar et al. say is the necessary focus on both the specific “form” and “function” of individual tweets, future studies of a given corpus of Twitter (or for that matter Facebook) text data need to take into account what some linguistic theorists call propositional attitudes (see, for example, Wilson & Sperber 2012), or which we might here simply call the mode of tweets. In future work, we hope to include this dimension to our computational anthropological analysis of Twitter data.
Meanwhile, it may be useful to remind ourselves that, despite the many limitations we have highlighted here, computationally driven and semi-automated analyses of sentiments, moods, and affects as expressed on Twitter and other social media do allow anthropologists and other traditionally qualitatively-minded disciplines to gain access to data sets of an unprecedented scale, and on this basis to foray into new and unknown analytical territories. Also, as our modest example of “human augmented text mining” indicates, and as indexed by the notion of “the quali-quantitative”, there are in general plenty of possibilities for making ethnographic and computational approaches writ large enhance each other, as part of validation and of sustained exploration. Indeed, as already noted above, it is this untapped explorative potential of these techniques and methods that makes the promise of a computational anthropology so exciting to us and our colleagues.
Why We Post: Theorizing Twitter Data Anthropologically
Given these potentials and pitfalls of particular computational social scientific and social data science approaches to the mining and analysis of text, complementing them with a more ethnographic approach building on anthropology and STS brings us closer to answering the question of “why we post” in relation to COVID-19 on Danish Twitter. In particular, such a combination helps us explore the interactions of meaning, sociocultural practice, and technology, all of which are fundamental components of posting on Twitter. That is to say, the question of “why we post” necessarily entails asking, “What kind of technocultural assemblage is put into motion when we express ourselves online?” (Langlois 2011: 3). In the case of Danish Twitter and COVID-19, the meanings seen in our co-hashtag and affect analyses above are produced through intimate relations of power with the design of Twitter as part of a broader landscape of social media platforms and other technologies, and through the dynamics of “issue attention” alongside other affective and sociocultural practices. With this approach and perspective, we see in participation in COVID-19 topics both negotiations over and efforts to constitute norms, values, and emotions relating to COVID-19, while Twitter forms an affective place of engagement in itself.
The landscape of hashtags shows this interrelation as people are drawn to talk about topics around the pandemic – seen particularly in the dramatic increase in tweets containing our COVID-19 related keywords around the announcement of the lockdown in March – and to focus on institutionally connected topics, such as health and the economy, along with reactions to the lockdown. As discussed above, the centrality of some of these topics is likely promoted through Twitter “trends.” As we have also discussed, posting on Twitter is bound up with producing political and media attention and can be significant in shaping the agenda in terms of media discourse and highlighting marginalized issues (Bosch 2017; Bonilla and Rosa 2015). Patricia Lange (2014), moreover, explores how YouTube rants and comments can operate as an “emotional public” for discussing the norms and bounds of appropriate online behavior. The jump in hashtags relating to hoarding (“hamstring”) immediately following the lockdown suggests how participation on Twitter might likewise encompass discussing the norms and bounds of appropriate offline behavior.
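The mechanics behind such a landscape of hashtags can be sketched briefly: a co-hashtag analysis starts by counting how often pairs of hashtags appear in the same tweet, yielding the weighted edges of a co-occurrence network whose clusters can then be inspected. The tweets below are invented stand-ins for illustration, not items from our data:

```python
# A sketch of how a co-hashtag analysis is bootstrapped: each unordered
# pair of hashtags co-occurring in a tweet adds weight to one network edge.
# The example tweets are invented, not drawn from our data set.
from collections import Counter
from itertools import combinations
import re

tweets = [
    "Stay home everyone! #covid19dk #sammenhverforsig",
    "Empty shelves again #hamstring #covid19dk",
    "#covid19dk #hamstring why do people do this?",
]

edges = Counter()
for tweet in tweets:
    tags = sorted(set(re.findall(r"#\w+", tweet.lower())))
    edges.update(combinations(tags, 2))  # every unordered pair co-occurs once

for (a, b), weight in edges.most_common():
    print(a, b, weight)  # e.g. the hoarding tag pairs twice with #covid19dk
```

Edge weights like these are what network layout and clustering algorithms subsequently operate on; the interpretive work of naming a cluster (“reactions to the lockdown”, “health”, “economy”) remains a qualitative judgment.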
One classic anthropological interpretation of the dramatic increase in the cluster focusing on reactions to the shutdown, in particular, would be to see tweeting as a magical practice. Tweeting here has an observable social effect, which may be conceived of as a form of imitative magic, where tweeting about an issue seeks to have an effect on that issue in practice (Taussig 1992). Tweeting can thus operate performatively, working to conjure particular (moral and normative) worlds into being. Another hashtag, meaning essentially “being together individually” and part of the reaction cluster, for example, operates as both a topic of discussion and a means of producing a particular type of community and practice that is indeed together individually. Moreover, language and technology conjoined in the process of tweeting have an amplified effect in and on the world as they inter/intra-act with people, technologies, media, political processes, and so on (Barad 2007), because of the features of bits, namely their persistence, replicability, scalability, and searchability (boyd 2010).
This production of realities has both discursive and affective dimensions. Along with Lange’s “emotional public”, Zizi Papacharissi has proposed the term “affective public” for a kind of networked public (boyd 2010) “mobilized and connected, identified, and potentially disconnected through expressions of sentiment” (Papacharissi 2016: 311). The use of “sentiment” here is meaningful, where Papacharissi (2016: 311) treats affect as a “form of pre-emotive intensity subjectively experienced”. On the face of it, this would seem to differ markedly from our approach, which, as discussed above, takes affect to be a socially produced mode of expression. On a deeper theoretical level, however, these two approaches to the study of emotive and affective forms are in fact closely aligned, in that both take emotive and affective phenomena and processes to inhere outside the individual subject. Indeed, many of the characteristics of “affective publics”, such as their digital presence, their connectivity, and their ties to affective statements of opinion, are useful for our quali-quantitative study of Danish Twitter during COVID-19. The change in tone seen in the affect analysis, namely the increase in (mis)trust and decrease in fear, can represent negotiation over and performance of appropriate emotional responses to the COVID-19 pandemic. Again, one way of theorizing these changes is to think of them as “magical” and performative, where the decrease in the affect “fear”, for example, may (or may not) represent a conjuring of a less fearful world alongside, and intertwined with, the normalization of life under lockdown. Similarly, the increase in the expression of the attitude of (mis)trust after the lockdown may be interpreted as a negotiation over possible political and, indeed, affective futures.
Scholarly attention to affect in social media has often pointed towards emotional contagion, seen most controversially in the experiment conducted by Facebook (Kramer, Guillory and Hancock 2014). In addition to the problems of measuring sentiment discussed above, there are conceptual problems with common understandings of virality and contagion online, often associated with Richard Dawkins’s (1989) concept of memetics, not least of which is the application of genetic processes to social practices (a longstanding problematic in anthropology!) (Sharma 2013). Nonetheless, the substance of the network itself produces connections in ways that may promote magical interaction based on contagion. That is to say, tweets with tones of (mis)trust summon similar tonal responses through the structure of the network and of Twitter’s design centering on hashtags, threads, and retweets, where subjects are called upon to respond in particular ways (Sharma 2013; Foucault 1997). Moreover, such participation may be an end goal in itself, where engagement in and on Twitter relating to COVID-19 constitutes a (discursive and affective) place to inhabit and become part of (Miller 2012, 2013). “Twitterhjerne” (“Twitter brain”), also part of the reaction cluster along with “being together individually”, points to this mutual production of network and place as a means of both extending one’s self (and brain) and becoming part of more than one’s self.
Anthropological theories of magic, performativity, publics, and place, then, may help shed new light on how and why people tweet about COVID-19 in Denmark, and on how anthropology may in this way contribute to ongoing social scientific discussions about the nature and dynamics of political attention on this and other digital platforms. In our future research, we plan to complement our predominantly computational analysis of Danish Twitter posts in the form of co-hashtag and affect analyses with more traditional ethnographic research on people’s daily experiences and social media use before and after the COVID-19 lockdown.
Samantha Breslin, Thyge Ryom Enggaard, Anders Blok, Tobias Gårdhus and Morten Axel Pedersen are part of a group of interdisciplinary researchers from the Copenhagen Center for Social Data Science (SODAS) who have come together to investigate how public attention is developing during the coronavirus crisis.
Thanks to Hjalmar Bang Carlsen, Emilie Munch Gregersen, Sofie Læbo Astrupgaard, Kristoffer Pade Glavind, and Friedolin Merhout for their insights on many of these ideas. This post draws on their work developing the co-hashtag analysis and affect analysis, found here and here, in Danish. These studies were made possible by an immediate grant from the Faculty of Social Sciences of the University of Copenhagen, and through funding and support from the SODAS and DISTRACT research project funded by the European Research Council (ERC).
[i] We used a package named “langdetect” in Python to determine the language of each tweet. This will have likely included some non-Danish tweets, and excluded some Danish tweets.
[ii] The translated search terms were “corona”, “covid”, “epidemic”, “pandemic”, and “virus”. By original, we mean tweets that were not retweets.
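A minimal sketch of this tweet selection might look as follows, keeping tweets that contain any of the (Danish-language) search terms and dropping retweets. The field names are hypothetical stand-ins for whatever the Twitter data export provides, and the example tweets are invented:

```python
# A sketch of the keyword selection: keep original (non-retweet) tweets
# containing any search term. The Danish terms here back-translate the
# list given above; field names are hypothetical, not the actual schema.
KEYWORDS = ("corona", "covid", "epidemi", "pandemi", "virus")

def is_original_covid_tweet(tweet):
    text = tweet["text"].lower()
    return (not tweet.get("is_retweet", False)
            and any(kw in text for kw in KEYWORDS))

tweets = [
    {"text": "Corona lukker Danmark ned", "is_retweet": False},
    {"text": "RT: Corona lukker Danmark ned", "is_retweet": True},
    {"text": "Dejligt vejr i dag", "is_retweet": False},
]
selected = [t for t in tweets if is_original_covid_tweet(t)]
# only the first tweet survives: original, and contains "corona"
```

Substring matching of this kind also catches derived words (e.g. “coronavirus”, “covid19dk”), which is usually the intended behavior for a keyword sweep of this sort.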
[iv] More info on the NRC dictionary here: https://nrc.canada.ca/en/research-development/products-services/technical-advisory-services/sentiment-emotion-lexicons
Abbott, Andrew. 2001. Chaos of Disciplines. Chicago: University of Chicago Press.
Bail, Christopher A. 2016. “Cultural carrying capacity: Organ donation advocacy, discursive framing, and social media engagement.” Social Science & Medicine 165: 280-288.
Barad, Karen. 2007. Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning. Durham, NC: Duke University Press.
Blach-Ørsten, Mark, Mads Kæmsgaard Eberholst, and Rasmus Burkal. 2017. “From hybrid media system to hybrid-media politicians: Danish politicians and their cross-media presence in the 2015 national election campaign.” Journal of Information Technology & Politics 14 (4): 334-347. https://doi.org/10.1080/19331681.2017.1369917.
Blok, Anders, and Pedersen, Morten Axel. 2014. “Complementary social science? Quali-quantitative experiments in a Big Data world.” Big Data & Society. https://doi.org/10.1177/2053951714543908.
Blok, Anders, Hjalmar B. Carlsen, Tobias B. Jørgensen, Mette M. Madsen, Snorre Ralund, and Morten A. Pedersen. 2017. “Stitching together the heterogeneous party: A complementary social data science experiment.” Big Data & Society. https://journals.sagepub.com/doi/pdf/10.1177/2053951717736337.
Blumer, Herbert. 1971. “Social problems as collective behavior.” Social Problems 18(3): 298-306.
Bonilla, Yarimar, and Jonathan Rosa. 2015. “#Ferguson: Digital Protest, Hashtag Ethnography, and the Racial Politics of Social Media in the United States: #Ferguson.” American Ethnologist 42 (1): 4–17. https://doi.org/10.1111/amet.12112.
Bosch, Tanja. 2017. “Twitter Activism and Youth in South Africa: The Case of #RhodesMustFall.” Information Communication and Society 20 (2): 221–32. https://doi.org/10.1080/1369118X.2016.1162829.
boyd, danah. 2010. “Social Network Sites as Networked Publics: Affordances, Dynamics, and Implications.” In Networked Self: Identity, Community, and Culture on Social Network Sites, 39–58. New York, NY: Routledge. http://www.danah.org/papers/2010/SNSasNetworkedPublics.pdf.
Bradshaw, Samantha. 2019. “Disinformation optimised: gaming search engine algorithms to amplify junk news.” Internet Policy Review 8 (4): 1-24. http://dx.doi.org/10.14763/2019.4.1442.
Bruns, Axel, and Stefan Stieglitz. 2013. “Towards more systematic Twitter analysis: metrics for tweeting activities.” International Journal of Social Research Methodology 16 (2): 91-108. https://doi.org/10.1080/13645579.2012.756095.
Ceron, Andrea, Luigi Curini, and Stefano M. Lacus. 2014. “Using sentiment analysis to monitor electoral campaigns: Method matters – evidence from the United States and Italy.” Social Science Computer Review 33 (1): 3-20. https://doi.org/10.1177/0894439314521983.
Dawkins, Richard. 1989. The Selfish Gene. Oxford, UK: Oxford University Press.
Downs, Anthony. 1972. “Up and down with ecology: The ‘issue-attention cycle.’” Public Interest 28: 38-50.
Ekman, Paul. 1992. “An argument for basic emotions.” Cognition and Emotion 6(3-4): 169–200.
Evans, James A. and Pedro Aceves. 2016. “Machine Translation: Mining Text for Social Theory.” Annual Review of Sociology 42: 21–50
Foucault, Michel. 1997. Ethics: Subjectivity and Truth. Edited by Paul Rabinow. Translated by Robert Hurley and Others. Essential Works of Foucault, 1954-1984. New York, NY: The New Press.
Gaspar, Rui, Cláudia Pedro, Panos Panagiotopoulos, and Beate Seibt. 2016. “Beyond positive or negative: Qualitative sentiment analysis of social media reactions to unexpected stressful events.” Computers in Human Behavior 56: 179-191.
Gell, Alfred. 1998. Art and Agency: An Anthropological Theory. Oxford: Oxford University Press.
Ginsburg, Faye. 2012. “Disability in the Digital Age.” In Digital Anthropology, edited by Heather A. Horst and Daniel Miller, 101 – 126. London and New York: Berg.
Grimmer, Justin. 2013. “Appropriators not position takers: The distorting effects of electoral incentives on congressional representation.” American Journal of Political Science 57(3): 624-642.
Gualda, Estrella, and Carolina Rebollo. 2016. “The refugee crisis on Twitter: A diversity of discourses at a European crossroads.” Journal of Spatial and Organizational Dynamics 4 (3): 199-212. http://hdl.handle.net/10272/13624.
Hilgartner, Stephen and Charles L. Bosk. 1988. “The rise and fall of social problems: A public arenas model.” American journal of Sociology 94(1): 53-78.
Horst, Heather A. and Daniel Miller. 2012. Digital Anthropology. London & New York: Bloomsbury.
Hosterman, Alec R., Naomi R. Johnson, Ryan Stouffer, and Steven Herring. 2018. “Twitter, social support messages and the #MeToo movement.” The Journal of Social Media in Society 7 (2): 69-91. https://www.thejsms.org/tsmri/index.php/TSMRI/article/view/475.
Ignatow, Gabe and Rada Mihalcea. 2017. An Introduction to Text Mining. London: Sage Publications
Knox, Hannah, Mike Savage, and Penny Harvey. 2006. “Social networks and the study of relations: networks as method, metaphor and form.” Economy and Society 35 (1): 113-140. https://doi.org/10.1080/03085140500465899.
Kozlowski, Austin C., Matt Taddy and James. A. Evans. 2019. “The Geometry of Culture: Analyzing the Meanings of Class through Word Embeddings”. American Sociological Review 84(5) 905– 949
Kozinets, Robert V. 2019. Netnography: The Essential Guide to Qualitative Social Media Research. London: Sage.
Kramer, Adam D I, Jamie E Guillory, and Jeffrey T Hancock. 2014. “Experimental Evidence of Massive-Scale Emotional Contagion through Social Networks.” PNAS 111 (24): 8788-8790. https://doi.org/10.1073/pnas.1320040111.
Lange, Patricia G. 2014. “Commenting on YouTube Rants: Perceptions of Inappropriateness or Civic Engagement?” Journal of Pragmatics 73: 53–65. https://doi.org/10.1016/j.pragma.2014.07.004.
Langlois, Ganaele. 2011. “Meaning, Semiotechnologies and Participatory Media.” Culture Machine 12: 1–27. https://doi.org/10.1177/106480469700500305.
Latour, Bruno, Pablo Jensen, Tomasso Venturini, Sébastian Grauwin, Dominique Boullier. 2012. “‘The whole is always smaller than its parts’ – A digital test of Gabriel Tarde’s monads.” British Journal of Sociology 63(4): 590–615.
Marres, Noortje, and Esther Weltevrede. 2013. “Scraping the Social?: Issues in Live Social Research.” Journal of Cultural Economy 6 (3): 313–35. https://doi.org/10.1080/17530350.2013.772070.
Marres, Noortje, and Carolin Gerlitz. 2016. “Interface methods: Renegotiating relations between digital social research, STS and sociology.” The Sociological Review 64 (1): 21-46. https://doi.org/10.1111/1467-954X.12314.
Meadows, Charles W., Cui Zhang Meadows, Lu Tang and Wenlin Liu. 2019. “Unraveling Public Health Crises Across Stages: Understanding Twitter Emotions and Message Types During the California Measles Outbreak”. Communication Studies 70 (4): 453-469.
Miller, Daniel. 2000. “The Fame of Trinis: Websites as Traps.” Journal of Material Culture 5 (1).
Miller, Daniel. 2012. “Social Networking Sites.” In Digital Anthropology, edited by Heather A. Horst and Daniel Miller, 146 – 164. London and New York: Berg.
Miller, Daniel. 2013. Tales from Facebook. UK: Polity Press.
Miller, Daniel. 2016. Social Media in an English Village: Or How to Keep People at Just the Right Distance. London, UK: UCL Press.
Mohammad, Saif M., and Svetlana Kiritchenko. 2015. “Using Hashtags to Capture Fine Emotion Categories from Tweets.” Computational Intelligence 31 (2): 301-326.
Nelson, Laura K. 2017. “Computational Grounded Theory: A Methodological Framework.” Sociological Methods & Research. https://doi.org/10.1177/0049124117729703.
Papacharissi, Zizi. 2016. “Affective publics and structures of storytelling: sentiments, events and mediality.” Information, Communication & Society 19 (3): 307-324. http://dx.doi.org/10.1080/1369118X.2015.1109697.
Pink, Sarah, Heather Horst, John Postill, Larissa Hjorth, Tania Lewis, and Jo Tacchi. 2016. Digital Ethnography: Principles and Practice. London, UK: Sage.
Plutchik, Robert. 1994. The psychology and biology of emotion. New York: Harper Collins.
Puschmann, Cornelius, and Alison Powell. 2018. “Turning words into consumer preferences: How sentiment analysis is framed in research and the news media.” Social Media + Society July-September: 1-12. https://doi.org/10.1177/2056305118797724.
Quinn, Kevin M., Burt L. Monroe, Michael Colaresi, et al. 2010. “How to analyze political attention with minimal assumptions and costs.” American Journal of Political Science 54(1): 209-228.
Rule, Alix, Jean-Philippe Cointet, and Peter S. Bearman. 2015. “Lexical shifts, substantive changes, and continuity in State of the Union discourse, 1790–2014.” Proceedings of the National Academy of Sciences September 1, 112 (35). https://doi.org/10.1073/pnas.1512221112.
Schüll, Natasha Dow. 2014. Addiction by Design: Machine Gambling in Las Vegas. Princeton, NJ: Princeton University Press.
Sharma, Sanjay. 2013. “Black Twitter? Racial Hashtags, Networks and Contagion.” New Formations 78 (78): 46–64. https://doi.org/10.3898/newf.78.02.2013.
Siles, Ignacio. 2013. “Inventing Twitter: An Iterative Approach to New Media Development.” International Journal of Communication 7: 2105-2127. https://ijoc.org/index.php/ijoc/article/view/1995.
Singh, Lisa, Shweta Bansal, Leticia Bode, Ceren Budak, Guangqing Chi, Kornraphop Kawintiranon, Colton Padden, Rebecca Vanarsdall, Emily Vraga, and Yanchen Wang. 2020. “A First Look at COVID-19 Information and Misinformation Sharing on Twitter.” http://arxiv.org/abs/2003.13907.
Silver, Nate. 2012. The Signal and the Noise: Why So Many Predictions Fail–but Some Don’t. New York: Penguin.
Taussig, Michael. 1992. Mimesis and Alterity: A Particular History of the Senses. London: Routledge.
Weber, Robert P. 1990. Basic Content Analysis. Newbury Park: Sage.
Williams, James. 2018. Stand out of our Light. Freedom and Resistance in the Attention Economy. Cambridge: Cambridge University Press.
Wilson, Deirdre & Dan Sperber. 2012. Explaining irony. In Deirdre Wilson & Dan Sperber, Meaning and Relevance, 123-145. Cambridge: Cambridge University Press.