
Mapping Algorithmic Assumptions: Reflections from a Society for Psychological Anthropology roundtable

Image: Fritz Kahn, Der Mensch als Industriepalast, 1926 (detail). Public domain.

Introduction: surveil, classify and predict
by Alexa Hagerty and Livia Garofalo

The essays distilled here by their authors, and the discussion offered by Professor Emily Martin, were originally part of a roundtable at the Society for Psychological Anthropology's 2021 biennial meeting. They seek to map the algorithmic assumptions encoded in digital technologies, revealing specific, embodied, and lively intellectual histories, conceptual structures, material infrastructures, and everyday practices.

Digital technologies are increasingly used in psychiatry and mental health care, but they are also deployed in other sensitive contexts with implications for people's mental health and wellbeing, such as employment, education, and the criminal justice system. Underwriting these systems, artificial intelligence/machine learning technologies surveil, classify, and predict, claiming to assess personality, character, and affective states. Encoded in these systems are assumptions about minds, behaviors, and social relationships that work to shape subjectivities, material realities, and imaginable futures. From rational actors to addicted users, these Eurocentric theories of the human are largely drawn from experimental psychology and freely mixed with Silicon Valley notions of human optimization and venture capital logics.

Anthropological perspectives can center pluralism, subjectivity, and the social in ways that are distinct and generative, providing productive friction against these individualistic and universalist models. This approach can reveal, as Kate Crawford has argued, that 'artificial intelligence' is neither artificial nor intelligent, but rather grounded in specific material and social realities.

Our roundtable discussion, “The algorithmic mind? Data-driven technology, Experimental Psychology, and the Generative Friction of Anthropology,” examined notions of productivity, labor and materiality, the boundaries of the ‘normal’ and the ‘pathological’, authenticity, addiction, and the digital subject. Taken together, these contributions help us more closely trace the assumptions about human minds and societies baked into digital technologies.

Alexandrine Royer examines the assumptions of emotion underwriting affective computing and the biopolitics of these technologies.  Suzana Jovicic interrogates how narratives of technology “addiction” act to obscure micropolitics of everyday life on the margins in Vienna, Austria. Aaron Neiman analyzes e-mental health technologies in Australia, highlighting the complexity of the therapeutic alliance and the preconditions of “authentic” interaction. Johannes Bruder explores the links between psychological models and technological design in research on rest and sleep in machine learning and contemporary neuroscience. Gabriele de Seta parses the multiple, unstable, and shifting ways “artificial intelligence” has been conceptualized, imagined and interpreted in China.

Weaving together these works, our discussant Professor Emily Martin explores themes of time, energy, and abstraction, revealing the role of anthropology in illuminating the social and cultural work of automated systems. Professor Martin’s discussion builds on her foundational scholarship in the anthropology of science, which explores the assumptions governing the scientific method, meticulously unpacking taken-for-granted concepts like experiment, objectivity, and training. Through this work she interrogates and illuminates the intellectual histories, conceptual structures, and material infrastructures that inform how we imagine, study, and “hold stable in time and place” the human — allowing us to think about how this “held in place” human is encoded into machines and algorithmically automated, and leading us toward an ethnography of artificial intelligence. The pandemic has entrenched technosolutionist approaches and accelerated the pace of tech deployment. Anthropological intervention is urgent and critical: we must forge new paths of inquiry, critique, and action to engage with these ubiquitous and powerful systems.

Alexa Hagerty is an anthropologist and co-founder of Dovetail Labs, a research fellow at the University of Cambridge Leverhulme Centre for the Future of Intelligence, and a working group co-convener in the Ada Lovelace Institute’s JUST AI network. 

Livia Garofalo is a cultural and medical anthropologist and a researcher on the Health & Data team at Data & Society. 

References:

Crawford, Kate. The Atlas of AI. Yale University Press, 2021.

Martin, Emily. “Ethnography, History and Philosophy of Experimental Psychology.” In Finite but Unbounded: New Approaches in Philosophical Anthropology, edited by Kevin M. Cahill, Martin Gustafsson, and Thomas Schwarz Wentzer, 97-118. Berlin and Boston: De Gruyter, 2017. https://doi.org/10.1515/9783110523812-006

_____  “Toward an Ethnography of Experimental Psychology.” In Plasticity and Pathology, 1-19. Fordham University Press, 2016.

_____  “The Potentiality of Ethnography and the Limits of Affect Theory.” Current Anthropology 54, no. S7 (2013): S149-S158.

_____  “Anthropology and the Cultural Study of Science.” Science, Technology, & Human Values 23, no. 1 (January 1998): 24–44. https://doi.org/10.1177/016224399802300102.


Making Humans into Machines: The Use of Emotion-Recognition Systems in Training Human Behaviour
by Alexandrine Royer

Image: Unsplash

AI technologies that predict and model human emotions to guide interpersonal interactions are increasingly used by social media platforms, customer service providers, hiring companies, national security agencies, child education, clinical therapies, and more. Since the 2000s, rising rates of autism diagnoses have led to swelling interest and public health investment in therapies and technologies geared towards autistic patients.

During one of my interviews, held in December, computer scientist Professor Aylett bluntly stated, “our current ability to detect emotion is totally rudimentary or totally useless”. Aylett, whose research involves developing robots that support individuals on the autism spectrum in detecting facial expressions and other social cues, spoke candidly about the shortcomings of affective scientists in the emotion AI industry.

When I asked why Aylett chose to focus on robot-assisted therapy for Autistic adults, part of her answer was “it’s what gets funding.”

Part of this enthusiasm for the potential of affective technologies to aid the emotional and behavioural development of autistic children and adults can be traced back to Rosalind Picard, who heads the Affective Computing Research Group at MIT. In a paper originally published in 2006, co-authored with Rana el Kaliouby and Simon Baron-Cohen, Picard first presented the idea of an ideal pairing between affective computing and individuals with autism.

Throughout the text, Picard, el Kaliouby, and Baron-Cohen readily outline the apparent likeness between people with autism and technological devices, affirming that “computers, like people with autism, do not naturally have the ability to interpret socioaffective cues, such as tone of voice or facial expression”, and, “similarly, do not naturally have common sense about people and the way they operate”. People with autism are described as “extreme systemizers” who have proven abilities in topics like “prime numbers, calendrical calculation, or classification of artifacts or natural kinds.”

Explicit in Picard, el Kaliouby, and Baron-Cohen’s writing is a biopsychic imaginary of autistic people as characterized by a purely mechanical and machine-like mind. Within this imaginary, people with autism, like computers, can be the site and subject of experimental engineering. Absent from their discourse are the physicality of the human body, the subjective experiences, and the affective and relational desires of individuals with autism. In line with this machine-like imaginary, social robots, rather than fellow humans, are presented as ideal companions for autistic children.

Today, the Autism Spectrum Condition is understood by psychologists to encompass a broad range of behavioral, cognitive, and neurological atypicalities, with each manifestation of traits being entirely singular (Belek 2019, 231). Coleman and Gillberg (2012) argue that the term spectrum is inherently misleading given the heterogeneity of the condition, as it implies that a person can be placed on a continuum of being either ‘more’ or ‘less’ autistic; a more appropriate term, they suggest, is ‘the autisms’. Milton, who was diagnosed with Asperger’s, has argued that autism “is never fully given as a set of a priori circumstances, but is actively constructed by social agents engaged in material and mental production” (2012, 884).

The models of emotion used in affective computing run counter to decades of ethnographic research demonstrating the cultural specificity and diversity of human emotions. As Catherine Lutz observed, “talk about emotions is simultaneously talk about society” (1988, 6). When it comes to the emotion technology created by affective scientists, the anthropological question is therefore not whether such systems can actually ‘read’ emotions. Instead, we should ask who gets to decide what constitutes legitimate and legible emotions, and who is then seen as failing to meet these socially constructed standards.

Alexandrine Royer is a student fellow at the Leverhulme Centre for the Future of Intelligence and an incoming Ph.D. candidate in Social Anthropology at Cambridge. She specializes in affective computing, augmented reality, and digital labour, with her Ph.D. project centring on the digital economy in Africa.

Author’s note: In this piece, Autism is capitalized. For further discussion of the diversity of semantic perspectives, see the Autistic Self Advocacy Network (ASAN): https://autisticadvocacy.org/about-asan/identity-first-language/

References:

Belek, Ben. (2019). An Anthropological Perspective on Autism. Philosophy, Psychiatry & Psychology, 26(3), 231-241.

Crawford, Kate. (2021). Time to regulate AI that interprets human emotions. Nature, 592, 167.

Coleman, Mary, & Gillberg, Christopher. (2012). The Autisms. New York: Oxford University Press.

Kaliouby, Rana el, Picard, Rosalind, & Baron-Cohen, Simon. (2006). Affective Computing and Autism. Annals of the New York Academy of Sciences, 1093(1), 228-248.

Lutz, Catherine. (1988). Unnatural Emotions: Everyday Sentiments on a Micronesian Atoll and Their Challenge to Western Theory. Chicago: University of Chicago Press.

Milton, Damian E. (2012). On the ontological status of autism: The ‘double empathy problem’. Disability & Society, 27(6), 883-887.


Scrolling and the In-Between Spaces of Boredom: Marginalized Youths on the Periphery of Vienna
by Suzana Jovicic


Image: Suzana Jovicic

“Here, you can scroll.” Toni unlocks his smartphone and begins scrolling through his Instagram feed. Toni, a 17-year-old apprentice plumber whose parents emigrated from Albania, regularly visits the youth club on the outskirts of Vienna after work. Sitting on the sofa in a dark basement room, he rhythmically scrolls through the images, his face illuminated by the smartphone.

Infinite scrolling through social media feeds may appear trivial, even an addictive waste of time. Silicon Valley professionals themselves appear wary of the “addictive” and “toxic” features of the devices they helped to develop (Lewis 2017). However, by merely psychologizing digital phenomena deemed addictive, we miss the opportunity to understand what such mundane digital practices can reveal about the micropolitics of life on the margins.

My arguments stem primarily from ethnographic fieldwork in two youth clubs in Vienna, Austria, in 2018 and 2019. Youth clubs are non-commercial spaces open to youths aged 5-21, organized in the context of youth work and usually located near social housing projects. Hanging out in the youth clubs, I frequently saw my interlocutors fiddling with their smartphones in moments of disengagement and waiting, and the most common reply to the question “when are you using your phone the most” was “when I am bored”.

The ease with which smartphones can fill moments of mundane boredom is obvious. However, a different, existential quality of boredom started to manifest as the months passed. I met many young people – men in particular – who were frustrated by waiting, sometimes for several years, for an apprenticeship or a job. Many spoke of long periods of “doing nothing” and of ritualized visits to the unemployment agency, which included talks with psychologists on a mission to “fix” them for the job market. When hanging out turns into chronic boredom, “doing nothing” morphs into a form of “social suffering” (van den Berg and O’Neill 2017, 7).

Paradoxically, woven into the heaviness of “doing nothing” as a form of social suffering is the effortlessness of infinite scrolling. Despite the seemingly bored casualness that accompanies scrolling, neither the movement itself, nor the content users follow, nor the adverts that Instagram curates from the traces they leave online are accidental. From a design perspective, erasing effort and enabling easy participation is at the core of the user-friendly design that compels users to dwell online.

A non-digital-centric (Pink et al. 2016) view of everyday struggles on the margins reveals how problematic labels such as “addictive” are. Idle scrolls are not merely the result of “addictive” features; they are also related to how effortless design creates appealing spaces that fit seamlessly into an everyday life filled with boredom and waiting.

Suzana Jovicic is a PhD researcher at the Institute of Social and Cultural Anthropology, University of Vienna. Her dissertation deals with the use of smartphones among marginalized youth based on ethnographic research in Viennese youth centers.

Link to full paper published in Ethos (2021): https://anthrosource.onlinelibrary.wiley.com/doi/full/10.1111/etho.12294

References:

Lewis, Paul. 2017. “‘Our Minds Can Be Hijacked’: The Tech Insiders Who Fear a Smartphone Dystopia.” The Guardian website, October 6. Accessed December 25, 2019. https://www.theguardian.com/technology/2017/oct/05/smartphone-addiction-silicon-valley-dystopia.

Pink, Sarah, Heather Horst, John Postill, Larissa Hjorth, Tania Lewis, and Jo Tacchi. 2016. Digital Ethnography: Principles and Practice. Los Angeles: SAGE Publications Ltd.

van den Berg, Marguerite, and Bruce O’Neill. 2017. “Introduction: Rethinking the Class Politics of Boredom.” Focaal: Tijdschrift voor Antropologie 2017 (78): 1-8.


Automated CBT and the Therapeutic (D)alliance in Australia
by Aaron Neiman

Image: Jake Nakos via Unsplash

The therapeutic alliance – the process of mutual recognition and interpersonal rapport between patient and therapist – is generally considered to be the ‘active ingredient’ of psychotherapy (Wolfe & Goldfried 1988, 450). This meaningful encounter with a skilled person is thought to capitalize on a uniquely human capability for psychic healing, and it gives “the talking cure” a privileged place among psychological interventions. However, for reasons of cost and convenience, computer-automated therapies have, some argue, increasingly removed the need for any human therapist at all. In Australia, for example, the government invests heavily in these technologies as part of a larger public mental health strategy aiming to free up precious in-person psychology appointments for more acutely ill people. Individuals with “mild-to-moderate” cases of the “high-prevalence” disorders (i.e., depression and anxiety) are encouraged instead to make use of “e-mental health” programs, particularly automated Internet-delivered cognitive behavioral therapy (eCBT).

Unsurprisingly, stripping away intersubjectivity through computer automation has provoked charges from more traditional therapists of cheapening, bastardizing, or perhaps even “cheating” the hard work of psychotherapy (Whitfield & Williams 2004; Waller & Gilbody 2009; Stallard et al. 2010). In my fieldwork with the researchers in Australia who are helping to develop these programs and bring them to market, I became interested in how my informants respond to this most common critique of their work: that by turning a two-player game into a single-player one, they have taken the essential “spark” out of therapy, perverting or removing the therapeutic alliance that allows for effective clinical practice.

Curiously, I found that for many of these individuals, it is the fallibility of the human therapist that poses a threat to the quality of therapy. As Dr. Gavin Andrews, a pioneering psychiatrist in the field of eCBT, lamented to me,

When therapists therapize people, often they get carried away with their pet homily or interest. Or they get seduced by the patient: “doctor I’ve got to tell you what happened to my sister’s child at the weekend.” And forty minutes later when you’ve heard what happened to her sister’s child, the scheduled hour is coming to an end and they’ve avoided you. And you got conned.

In this telling, the therapeutic alliance is formulated as something brittle, and the computer’s immunity to being “conned” by the patient affords it an advantage in that respect. His use of the word “seduced” is also telling: one of the most touted advantages of the computer psychotherapist has been that, unlike the real thing, “after all, the computer doesn’t burn out, look down on you, or try to have sex with you” (Colby qtd. in Turkle 1997, 115; see also Marks et al. 1998). We might playfully call this the “therapeutic dalliance”, referring both to the possibility of unwanted romantic and sexual entanglements and to the misuse of the “therapist’s hour” as a chat session.

Dr. Elizabeth Scott, head of a hospital psych unit where one of Dr. Andrews’ programs was being introduced for inpatients, similarly suggested to me that the therapeutic alliance may be more about flattering the sensibilities of the therapist than helping the patient: “Clinicians like working with people face-to-face. They have the therapeutic alliance and it makes them feel good.” If e-mental health is accused of lacking the essential human touch, its proponents shoot back that there is already plenty of hollow and inappropriate interaction with therapists themselves.

Aaron Neiman is a PhD student in the Department of Anthropology at Stanford University. Aaron’s doctoral research focuses on the growing use of online interventions to treat mental illness in Australia.

References:

Marks, Isaac, Susan Shaw, and Richard Parkin. “Computer-Aided Treatments of Mental Health Problems.” Clinical Psychology: Science and Practice 5, no. 2 (1998): 151–70. https://doi.org/10.1111/j.1468-2850.1998.tb00141.x.

Stallard, Paul, Thomas Richardson, and Sophie Velleman. “Clinicians’ Attitudes towards the Use of Computerized Cognitive Behaviour Therapy (CCBT) with Children and Adolescents.” Behavioural and Cognitive Psychotherapy 38, no. 5 (October 2010): 545–60. https://doi.org/10.1017/S1352465810000421.

Turkle, Sherry. Life on the Screen: Identity in the Age of the Internet. New York, NY: Simon & Schuster, 1997.

Waller, R., and S. Gilbody. “Barriers to the Uptake of Computerized Cognitive Behavioural Therapy: A Systematic Review of the Quantitative and Qualitative Evidence.” Psychological Medicine 39, no. 5 (May 2009): 705–12. https://doi.org/10.1017/S0033291708004224.

Whitfield, Graeme, and Chris Williams. “If the Evidence Is So Good – Why Doesn’t Anyone Use Them? A National Survey of the Use of Computerized Cognitive Behaviour Therapy.” Behavioural and Cognitive Psychotherapy 32, no. 1 (January 2004): 57–65. https://doi.org/10.1017/S1352465804001031.

Wolfe, B. E., and M. R. Goldfried. “Research on Psychotherapy Integration: Recommendations and Conclusions from an NIMH Workshop.” Journal of Consulting and Clinical Psychology 56, no. 3 (June 1988): 448–51. https://doi.org/10.1037//0022-006x.56.3.448.


Idle minds and slacking algorithms
by Johannes Bruder

Image: Scientific American 2010, Creative Commons

“The future of mindfulness training is digital, and the future is now,” a group of well-known neuroscientists and psychologists proclaim in Current Opinion in Psychology (Mrazek et al. 2019). In their paper, they elaborate on the advantages that app- and web-based interventions offer over traditional face-to-face formats: enhanced accessibility, standardization, personalization, and efficacy of mindfulness training. Originally a situated meditation practice requiring spatial and temporal immersion and long-term commitment, mindfulness is now delivered by popular smartphone apps that seamlessly integrate 5-minute exercises into the packed schedules of stressed-out white-collar workers.

Digital mindfulness-based interventions (d-MBIs) belong to a class of techniques that “untethers the synchronicity of place, time, people, and meaning characteristic of dominant (Western, biomedical) models of psy-care and knowledge production” (Bemme, Brenman, and Semel 2020). They disentangle the practice of mindfulness meditation from its “traditional” cultural contexts—which include ancient Buddhism as well as 1960s counterculture in North America—to rebrand it as an intervention into the attentional routines of contemporary workplaces.

This transformation of mindfulness into a popular technique of psy-care is linked to and supported by experimental shifts in North American and European neuroscience and psychology laboratories. Until the early 1990s, measurements of mental and cognitive activity were typically conducted only while the volunteer was occupied with an experimental task designed by the experimenter. That is, neuroscientists had defined cognition as “attention to task”, implying that what the brain does when we are unfocused is cognitively insignificant. In the then-emerging field of resting state studies, however, researchers began to analyze so-called “self-generated” brain activity, which peaks when humans rest. One mental phenomenon most vividly captured this sort of mental drift and hence became the go-to object of the new field: mind wandering.

In the behaviorist paradigm of neuroscience research, mind wandering was considered the epitome of distraction and thus a potentially pathological mental activity. Neuroscience had since the late nineteenth century been characterized by “an uncanny proximity between subjective responses to a task delivered in the laboratory and one prescribed on the shop floor” (Morrison et al. 2019, 64)—the time and energy spent wandering from task was thus largely ignored. Over the last few decades, however, mind wandering has been radically reconceived as a source of creativity and subjectivity.

In 2006, neuroscientist and resting state forerunner Marcus Raichle first spoke of the “brain’s dark energy”—a concept originally chosen to situate resting state activity as an absent presence in the history of the neurosciences (Raichle 2006). But his paper also marks the moment when what was discovered through monitoring brain activity at rest was reconceptualized as the brain’s “default mode”: a mode of cognition, he argued, that never ceases to insist from the background of our conscious experience.

The transformation of the brain’s dark energy into its default mode of operation aligned cognitive neuroscience with contemporary labor regimes: it fits well with the idea of unbounded workdays and restless cognitive activity. In this context, digital mindfulness-based interventions are figured as algorithmic techniques that turn the dark energy of the brain into a source of enhanced productivity and augmented psychic resilience.

Johannes Bruder is the interim Head of the Institute of Experimental Design and Media Culture (IXDM). His research targets infrastructures, technologies, and media that support epistemologies and empiricisms in art, design, and science, and their (sub)cultural distortions.

Link to full paper published in Science, Technology, & Human Values (2021): https://journals.sagepub.com/doi/full/10.1177/01622439211025632

References:

Bemme, Dörte, Natassia Brenman, and Beth Semel. 2020. “Tracking Digital Psy: Mental Health and Technology in an Age of Disruption.” Somatosphere, August 28. http://somatosphere.net/2020/tracking-digital-psy-mental-health-and-technology-in-an-age-of-disruption.html/, accessed July 20, 2021.

Morrison, Hazel, Shannon McBriar, Hilary Powell, Jesse Proudfoot, Steven Stanley, Des Fitzgerald, and Felicity Callard. 2019. “What Is a Psychological Task? The Operational Pliability of ‘Task’ in Psychological Laboratory Experimentation.” Engaging Science, Technology, and Society 5: 61-85.

Mrazek, Alissa J, Michael D Mrazek, Casey M Cherolini, Jonathan N Cloughesy, David J Cynman, Lefeba J Gougis, Alex P Landry, Jordan V Reese, and Jonathan W Schooler. 2019. “The future of mindfulness training is digital, and the future is now.” Current Opinion in Psychology 28: 81-86.

Raichle, Marcus. 2006. “The Brain’s Dark Energy.” Science 314 (5803): 1249-1250.


Wisdom + capability: On Chinese discourses around artificial intelligence, algorithms, and “smart” technologies
by Gabriele de Seta

Image credit: Baidu Baike (2021), “Yan Shi”

As is the case for other scientific and technological developments, Chinese discussions of artificial intelligence often seek to anchor its contemporary advancements in ancient legends and foundational myths, laying claim to millennia of history. One of the most common referents for AI’s ancient past is “Yan Shi’s automaton”, which is also hailed as China’s first science fiction story. Described in the fifth chapter of the Liezi, a Taoist text compiled in the 4th century CE, this legendary object offers tantalizing insights into the history of artifice and automation.

As the story goes, a renowned artificer named Yan Shi presented King Mu of Zhou with a man-made changzhe, or ‘singer’ (which becomes an “automaton” only in Lionel Giles’s 1912 English translation). The human-like construct entertained the ruler until its uncanny likeness and flirtatious winks at the court’s concubines forced Yan Shi to dismantle it and unveil its artificial insides made of “leather, wood, glue, and paint”. Yan Shi’s singer moved like a believable human being: a touch on its chin would make it sing harmoniously, while holding its hand would make it start a dance routine.

And yet, in this section of the Liezi there is no explicit discussion of intelligence: the story emphasizes craftsmanship, the making of a contraption that can respond to its creators’ commands, each of its artificial organs seemingly contributing to a specific capability. As King Mu’s rhetorical question suggests, the key conclusion is about the human power to create the semblance of life:

“Can it be that human skill is on par with that of the creator?”

The Liezi’s emphasis on artificiality does not entail a lack of discussion of intelligence in ancient Chinese philosophy: as Shih-ying Yang and Robert J. Sternberg (1997) argue, different philosophical conceptions of intelligence are articulated in Confucian and Taoist canons from as early as the 6th century BCE. In the Analects, for example, Confucius defines intelligence in terms of “abilities”, in particular the capability to make moral judgments. In Taoist classics, conversely, one finds a conception of intelligence centered on “perceptual and conceptual flexibility”.

This hints at the complex relationship between the two concepts of zhi (knowledge) and neng (ability, capability), which come together in the term zhineng, ‘knowing-capacity’ or ‘intelligence’, as articulated by Xunzi, another early Confucian philosopher (Fung 2012). Today, zhineng is commonly used in the locution rengong zhineng, ‘artificial intelligence’, while zhihui (the contemporary Mandarin term for ‘wisdom’) translates the ‘smart’ adjective in a plethora of products ranging from smart cities to smart logistics, further complicating the lexical field.

Throughout the centuries separating ancient kingdoms and dynasties from China’s Republican modernity, Communist revolution, and market-socialist present, shifting conceptualizations of artifice and intelligence have informed contemporary imaginaries of AI. After scientist Qian Xuesen introduced cybernetics to China in the 1950s, revolutionary critiques questioned its pursuit of man-made ‘intelligence’ as a form of reactionary idealism. Only a few decades later, DeepMind’s AlphaGo – a contemporary automaton with a more-than-human gaming capability – triggered the “Sputnik moment” of China’s AI craze, embodying a much more conflictual and competitive conception of intelligence (Bory 2019).

How is ‘artificial intelligence’ discussed and understood in China today? Without essentializing the cultural specificity of these terms and their translations, tracing the history of how conceptions of intelligence and artifice have changed, sedimented and become articulated with one another can contribute to unraveling present mythologies of AI in China and, perhaps, make sense of its future trajectories.

Gabriele de Seta holds a PhD in Sociology from the Hong Kong Polytechnic University and was a Postdoctoral Fellow at the Institute of Ethnology, Academia Sinica in Taipei. Gabriele is currently a Postdoctoral Researcher at the University of Bergen, where he is part of the ERC-funded project “Machine Vision in Everyday Life”. His research, grounded in ethnographic engagement across multiple sites, focuses on digital media practices, sociotechnical entanglements, and vernacular creativity in the Chinese-speaking world.

References:

Bory, P. (2019). Deep new: The shifting narratives of artificial intelligence from Deep Blue to AlphaGo. Convergence, 25(4), 627-642.

Fung, Yiu-ming 馮耀明. (2012). Two Senses of “Wei 偽”: A New Interpretation of Xunzi’s Theory of Human Nature. Dao, 11, 187-200.

Yang, S.-y., & Sternberg, R. J. (1997). Conceptions of intelligence in ancient Chinese philosophy. Journal of Theoretical and Philosophical Psychology, 17(2), 101–119. https://doi.org/10.1037/h0091164


Conclusion: Time, Energy, Fatigue & Abstraction
by Emily Martin

The works presented here map the assumptions encoded into Artificial Intelligence/Machine Learning systems. In considering this cartography, Professor Emily Martin offers the themes of time, energy, and abstraction as signposts to the kinds of subjectivities and socialities these technologies produce. 

The first theme is time. Aaron Neiman brings in time explicitly, suggesting that the giving up of time by both patient and clinician is a mark of “authentic” interaction. A slight twist on this idea would bring in Johannes Fabian’s notion of coeval time. Perhaps what makes the interaction authentic is that patient and clinician can stand in the same time, in time that is coeval. Might this notion shed light on the time lived in by computer technologies? Do computer technologies live in a different time, one that is potentially eternal, or that can be endlessly renewed as with an updated operating system? Is it possible to be coeval with a computer technology? Might Suzana Jovicic’s description of people scrolling constitute a form of desire for coevalness with a computer, like listening to a concert or watching a movie—passive but tuned in?

The second theme is energy and fatigue. While reading these papers, my mind turned to Anson Rabinbach’s historical work The Human Motor. Rabinbach focuses on the nineteenth-century scientific and cultural framework of labor power. For this framework to arise, the human body and the industrial machine had to be seen as motors that converted energy into mechanical work.

These developments were in my mind when reading Johannes Bruder’s paper in particular. His account is rich in ideas about what it means to “rest” in the case of cognitive activities. Are there echoes here of the nineteenth-century preoccupation with harnessing energy toward an unlimited capacity for production?

The third theme is abstraction. Consider the abstraction of labor power: as Marx explained, this concept required all forms of labor, different as they actually are in skill, method, goal, history, purpose, moral value, difficulty, and so on, to be arrayed on the same scale, measured by labor time.

Abstraction is involved in the never-idle brain labor Johannes Bruder described: imagine the different forms of “narrow if-then” thinking, or the different forms of insight or creative play when the brain is at rest or in default mode—the qualitatively different forms of experience that are lumped under these abstractions. Other papers also place abstraction at their center. Alexandrine Royer trenchantly describes the “highly reductive” model of emotions in emotion AI and affective computing. Gabriele de Seta takes up the term “artificial intelligence” and how it is translated in Chinese. The translation – rengong zhineng – reflects specifically Chinese conceptions of intelligence, wisdom, and work. This notion of intelligence combines something like “wisdom” and “capability.” Thus the two Chinese terms combined hint at a conception of AI that places the energy of the mind and the body inextricably together.

Finally, one important take-home message from all these papers is that psychological anthropology can make an important contribution to understanding the social and cultural content of the many sciences involved here. Psychological anthropology can show how abstractions are produced (in exactly as much detail as Marx did for labor power), what kind of social work abstractions do, and what they conceal. The papers demonstrate the powerful anthropological effect of describing the lively and various forms of human practices and ideas that abstractions lump together and make invisible.

Emily Martin is a Professor Emerita of Anthropology at New York University.

