The dominant discourse surrounding digital psy technologies such as MindStrong, a teletherapy app designed to detect changes in mental health by monitoring typing speed “down to the millisecond,” is that they uncover interior states by gathering psychologically rich data that was always there. But as the Tracking Digital Psy series editors outline in their call for contributors, and subsequent pieces have underscored, digital psy technologies do not merely reveal. Rather, they are key points in a socio-technical assemblage geared toward the making of scientific facts about mental illness, the brain, and the body.
To learn more about the contingency of digital psy—and the labor, ideologies, and moral frameworks that shape its production—series co-editor Beth Semel had a conversation with visual artist and critical computing scholar-practitioner Jonathan Zong. Jonathan created Biometric Sans, an experimental typography system which elongates letterforms in response to an individual’s typing speed. The work is inspired by the practice of keystroke biometrics, the idea that individuals are uniquely identifiable by the way that they type.
In what follows, Beth and Jonathan discuss Biometric Sans as an artifact for disentangling the tensions between care and control that undergird digital psy projects. They explore the possibility of combining art and computation to open up space for contingency and embodied knowledge, and to treat speculation as a mode of intervention.
Beth Semel: Aside from the visuals and the touch-sensitive component of Biometric Sans, part of what drew me to the project is the connection I see between what you made and keystroke biometrics, which proponents of digital phenotyping tend to cite as the most obvious, and most compelling, use-case. I thought, if this person knows how to make a similar system, one that is ostensibly tracing and translating things like touch and typing quality into a visible, tangible trace, then maybe he could conjecture about the secret sauce that might go into making a digital phenotyping prototype, that is, a system that researchers suggest can “read” or interpret mental states through behavior like typing. So, to begin, could you say a little bit more about how — and why — you made Biometric Sans? And do you think people building a typing-tracking system for mental health applications would go about building that system in a similar way?

Jonathan Zong: I first learned about keystroke biometrics as an intern at Coursera, which is an ed tech online course company. I don’t know what they’re doing now, but they had these certificates that were identity verified, that you could pay for, that were more of a credential than the free classes. And so you’d type your name into a box a couple of times to create this identity verification profile, supposedly using keystroke biometrics. I was dimly aware of that and then I guess it kind of stuck in my head.
Years later, I created Biometric Sans as part of my undergraduate thesis work in the visual arts program at Princeton, advised by David Reinfurt. At the time, I was also doing a computer science thesis advised by Nathan Matias on designing research ethics systems. I was also thinking through questions about gesture and embodiment in a performance studies class with Judith Hamera. So the body as a contested site of control, and the way people are constantly asked to conform their bodies to systems so that systems can work, was on my mind.
As for the question of how to build Biometric Sans, I needed to capture the timing of keystrokes. I decided to do this in a website because websites are common interactive programs that already capture keyboard input and create visual output. Websites are also easy to share and circulate. My technical decision to build the system into a website, and my aesthetic decision to make every character take on multiple possible appearances based on timing, were also why I couldn’t package it as a font file.
Beth Semel: I find it very interesting that it’s not a packageable font file, that it exists on the website and can’t really exist independently anywhere else.
Jonathan Zong: Totally, yeah. It’s not a single font, but a program that describes an abstraction over many fonts. It just exists on the website as a program, and generates output images in the SVG format. It doesn’t generate text glyphs, so if you write in Biometric Sans and put those words on a website, it won’t be read as text by screen readers. It’s not search indexable. So even though it’s less legible to a person than a standard font, it’s not legible as text to machines at all.
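A rough, hypothetical sketch of the kind of output Jonathan describes might look like the following: the writing becomes SVG geometry rather than SVG text, so there is nothing for a screen reader or search index to latch onto. The path data and helper name below are illustrative stand-ins, not the actual Biometric Sans code.

```js
// Rough, illustrative sketch: Biometric Sans-style output as SVG geometry.
// The path data and helper below are hypothetical, not the real source.
function glyphToSvg(pathData, scaleX) {
  // Each typed character becomes a stretched outline, not a character.
  return `<path d="${pathData}" transform="scale(${scaleX} 1)" />`;
}

const svg = [
  '<svg xmlns="http://www.w3.org/2000/svg" width="400" height="100">',
  glyphToSvg('M10 80 L30 20 L50 80', 1.6), // a letter-like shape, widened
  '</svg>',
].join('\n');

// Compare: an SVG <text> element would expose its characters to screen
// readers and search indexing; this output exposes only geometry.
console.log(svg);
```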
Form and Content in Digital Psy
Beth Semel: Oh, that’s so interesting. Was that an intentional choice or something you were thinking about when you made it?
Jonathan Zong: I think about Biometric Sans and related work in terms of a text/image distinction. It’s kind of a fraught distinction and Biometric Sans attempts to blur the lines. In a technical environment, anything that’s not a very narrow definition of text is an image. When I was making it, I found that to do something interesting I just had to do it this way because what I wanted doesn’t fit into the existing technical standards that handle text. And so I was kind of disappointed that it’s not readable in this way but found it necessary.
The program that runs on the website is a JavaScript program just like any other program on any other website. It listens to key events that happen in the window. Because it’s not a conventional font, you’re not actually typing into a text field and directly seeing the output. So it’s kind of a hack—it’s basically intercepting a keyboard signal that would be destined for a text field if there was one. It takes the key you pressed, uses that to fetch the shape of the corresponding letter from a base font file, and then stretches it horizontally based on the interval between the keys.
This program records which key is pressed, and also the elapsed time since the last keypress. So like you said, this is creating a data set. You can think of Biometric Sans as a visualization of a typing data set. And that data set is basically exactly what I think someone would use to create a digital phenotyping system based on typing. They probably don’t even care which keys, just the timing. Although—and I’m speculating here—I feel like a more sophisticated digital phenotyping system would probably look at which keys are pressed and take into account the positions of the keys, because you might expect twisting your fingers for certain keys to take more time than for others.
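To make the mechanics concrete, a minimal sketch of the kind of key-event capture Jonathan describes might look something like the following. The variable names, the scaling constant, and the rendering helper are illustrative assumptions, not the actual Biometric Sans source.

```js
// A minimal sketch (not the actual Biometric Sans source) of the capture
// step described above: record which key was pressed and the elapsed time
// since the previous keystroke, then use that interval to decide how much
// to stretch the letterform horizontally.

const typingLog = [];   // the "typing data set": [{ key, deltaMs }, ...]
let lastTime = null;

// Hypothetical renderer: the real system fetches the glyph outline from a
// base font file and emits a stretched SVG path; here we just log it.
function renderStretchedGlyph(key, scaleX) {
  console.log(`draw "${key}" at horizontal scale ${scaleX.toFixed(2)}`);
}

window.addEventListener('keydown', (event) => {
  const now = performance.now();
  const deltaMs = lastTime === null ? 0 : now - lastTime;
  lastTime = now;

  typingLog.push({ key: event.key, deltaMs });

  // Illustrative mapping from interval to stretch: slower typing, wider glyph.
  const scaleX = 1 + Math.min(deltaMs / 500, 4);
  renderStretchedGlyph(event.key, scaleX);
});
```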
Beth Semel: This all reminds me of something I observed in my fieldwork, working alongside psychiatric and engineering professionals trying to build similar systems for capturing voice quality and exploring its connections with psychological, mental states. There was often this tension between form and content — my interlocutors were very committed to trying to pull the formal qualities of speech apart from the content, the meaning. But in pursuit of that splitting, they’d inevitably run into the inseparability of form and content, to the extent that it began to feel like a false binary. It seems very similar to what you’re saying — the focus on typing form would still lead researchers or tech workers back to the content of what a person has typed.
Jonathan Zong: Yeah. If you have the timing information, which includes the order that the keys are pressed, and you have the positions of keys, from which letters can be recovered given a keyboard layout—then by definition you can reconstruct the sentences that were typed. Informationally, it’s equivalent. The information you need to capture the form and the content is the same.
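A small illustration of that equivalence, continuing the hypothetical log format from the sketch above: given only per-keystroke records, the typed content falls right out. This is a simplified sketch, not any vendor’s actual pipeline.

```js
// Sketch of the form/content equivalence described above: the same
// per-keystroke log used to measure typing "form" (order and timing) is,
// by definition, enough to reconstruct the typed content. Simplified to
// printable keys and backspace.

function reconstructText(typingLog) {
  let text = '';
  for (const { key } of typingLog) {
    if (key === 'Backspace') {
      text = text.slice(0, -1);
    } else if (key.length === 1) {
      text += key;                 // printable character
    }
    // modifier and navigation keys ignored in this sketch
  }
  return text;
}

const exampleLog = [
  { key: 'h', deltaMs: 0 },
  { key: 'i', deltaMs: 120 },
  { key: '!', deltaMs: 310 },
];
console.log(reconstructText(exampleLog)); // "hi!"
```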
Beth Semel: Which would mean, then, that any kind of app or healthcare technology that tracks a user’s typing quality would also be tracking the content of whatever the user types. And this conflicts with the “privacy preserving” promise that digital phenotyping companies or initiatives often make: that gathering data on formal qualities is somehow “less invasive.” While that invasiveness is surprising, though, I think there’s a politics to who finds it surprising: for whom is it astonishing that a company or an institution is surveilling its users, despite promises otherwise? I’ve been thinking a lot about this since reading an interview with abolitionist activist Sarah Hamid in Logic Magazine, alongside the work of scholars like Simone Browne and Ruha Benjamin. They argue that racialization and hyper-visibility go hand-in-hand, especially when coupled with legacies of anti-Blackness (and, post-9/11, Islamophobia) that dictate which kinds of subjects the state marks as risky, dangerous, and therefore as needing to be constantly watched and tracked.
Empirics of Harm
Beth Semel: The things Sarah Hamid said about the concern for preserving privacy being a strategically white, liberal issue to rally behind have been on my mind, especially in light of the anniversary of Fred Hampton’s murder and thinking about the present-day overlaps with COINTELPRO, the surveillance program that played a role in his death. What Hamid, Browne, and Benjamin underscore is that revelations of invasive, deep surveillance — “they’re [a company, a state institution] spying on us!” — tend to be shocking for people who are used to being illegible or invisible or not watch-worthy to the state or other surveillance networks.
Jonathan Zong: That’s such an interesting point because “they’re spying on us” is an abstract fear, but harm is very empirical. When you talk about surveillance, you often have to talk about it in general, in terms of what could happen. Then you have so many problems when you try to talk about it in the specific, because companies who make surveillance systems obscure their inner workings. When you try to talk about a specific system, then they can say, “oh, but it doesn’t work exactly the way that you’re describing.” There are different levels of hypotheticals to disentangle. Digital phenotyping systems claim to do something impossible. Even if that weren’t the case, there are many institutions procuring different systems from different vendors. It’s so hard to zoom in from the abstract privacy risk to what is actually happening. Maybe the thing we’re convincing ourselves of in this form/content discussion is that the question of content, of “what harms are empirically happening,” is not always knowable given information asymmetries and power relations that obscure these systems’ mechanisms. This is why we resort to thinking formally, in terms of “what could happen given what we know to be likely.”
Beth Semel: I do feel there’s a great deal of mysticism, confusion, and, as you say, asymmetry in understandings of how computational systems or tracking devices actually work, hence my asking you to try and explain the secret sauce of keystroke biometrics. What I encountered in my fieldwork, what many other anthropologists, scholars, and historians of technology write about, and what things like Biometric Sans reveal is just how much material work and contingency goes into making computational systems function, or even building infrastructures to capture data that can then be labeled, trained on, and used to build an algorithm.
The discourse surrounding Big Data in biomedical or psychological contexts, for instance, tends to be one of seamlessness: “oh, there’s all this wonderfully rich data out there that could lead to clues about those things we call mental illnesses, and we just need the right net and then we can grab it, process it all, and finally unlock all these biological mysteries that have long eluded us.” But in fact there are so many other steps and bodies and people that go into even defining what the net, the device or mechanisms of capture, should be, and then getting it all to hang together on a granular level. How big will the holes in the data-fishing net be? Is plastic better than wire? What does the “fish” that we want to catch with it even look like? Not to mention the net-weavers, the dock hands, the people steering the ship, cleaning the toilets on the ship, international fishing laws that dictate where and WHO can fish, etc. It’s so much more complicated and slipshod than a “grab juicy data out of thin air” situation.
Sitting with the Tension
Beth Semel: Could you tell me more about this quote from Sun-Ha Hong that you’ve included on your website in your description of Biometric Sans? “The promise of data is the promise to turn bodies into facts.”

Jonathan Zong: I saw Sun-Ha Hong’s talk by almost complete coincidence. A friend had invited me to go the day before. It was at the Wattis Institute in San Francisco, a contemporary art space. So it felt like very appropriate provenance as an inspiration for Biometric Sans. It made a big impression on me at the time. I think this and the Vilem Flusser essay “Why Do Typewriters Go ‘Click’?” both kind of pushed me towards this question of data as a particular, but limited, way of knowing the world.
Initially, I thought about this quote in terms of what is erased or filtered out when things become data. What embodied characteristics (like race, gender, affect, ability) are just not knowable in digital environments? And I think optimistically I had thought of Biometric Sans as trying to bring back bodies in digital writing. Trying to reintroduce some element of gesture and individuality. And I think that’s still true. But of course, you circle back to, oh, but it’s gesture, affect, etc. read through biometrics. So still you’re back at the question of what’s lost when you cross these boundaries between virtual and real.
I find it helpful the way that Sun-Ha identifies this translation from bodies to facts as a promise of data. I feel like there’s a couple ways to read that. You can read that as, “Data promises to reveal the world through numbers but fails to achieve that promise.” You can also read that as, “The reduction from unknowable reality into manageable quantities is what people find promising about data.” Many people see the fact that data turns bodies into facts as an opportunity. I think both of these interpretations are interesting. I really like what you said in your initial email to me about this Somatosphere series:
We came together as feminist scholars really wanting to carve out a space for conversations that sit with the weirdness and discomfort of digital psy without writing it all off as either shortsighted reductionism or salvational silver bullet.
And I see a bit of that in this quote. Depending on how you read it, it’s pointing to both tendencies of data. Shortsighted reductionism and salvational silver bullet. As someone who works in computer science and builds systems and puts them into the world, I also want to sit in that space of discomfort. Because if I believed that data-driven modes of understanding were irredeemably harmful, I wouldn’t be doing what I’m doing. But I also, of course, can’t ignore that data is used in all these ways that go beyond its capacities. I like things like this quote that kind of point to the existence of a limitation. And then in my work I aspire not to stop at saying that there is a limitation, but to make our understanding of those limitations more precise. I’m paraphrasing Ruha Benjamin here, but I want to ask: limited in what way? What are the categories and frames that we can apply? What can we meaningfully do within digital environments that don’t have to solve all the world’s problems, but don’t have to be terrible either? Benjamin applies the abolitionist idea of reformist reforms vs. non-reformist reforms to tech, which has been helpful for me. We often can’t dismantle entire systems all at once, but our compromised actions within these systems can still be consistent with their dismantling.
Beth Semel: That’s beautiful. Right, it — the technology, the art object, the intervention — doesn’t have to solve all of the world’s problems at once, but we should also create and move in a way that’s mindful of: could this thing I’m making be adding one little, additional brick to bigger, deeper, infrastructural problems? The key is, as you’ve put it so well, not having all the answers, but rather keeping in mind what you’ve been saying about specificity, about harms having an empirics. I think there’s a lot of power in being really precise and careful about what is wrong, for instance, with violations of privacy or surveillance in digital mental health care contexts. Or what is precisely creepy about it.
I think the key is also moving forward with a clear understanding that working within structures that are complicit in violence — whether it be a tech company, academia, what have you — is not necessarily a complete moral failure or “selling out” but rather a starting point for doing the difficult work of dismantling those harmful structures. This was something that both Ruha Benjamin and Safiya Noble brought up, in reference to abolitionist approaches to studying technology, during a November 2020 panel on Anti-Blackness and Technology.
Jonathan Zong: Working from different starting points in computing and art, or art and research, even though the outputs take different forms, these don’t feel like distinct processes to me. But they’re kind of always pulled apart by the institutions that we have to work within. The work of understanding and remedying harm that happens within systems is not bucketable into disciplinary categories, or even methodological categories.
Beth Semel: Yeah, exactly. For instance, race science is race science. To a certain extent, trying to adhere to disciplinary distinctions and call something like research articles that are promoting “digital phrenology” a problem only for the machine learning community or a problem only entangled with the history of anthropology can detract from working on the problem together or worse, can be a way to excuse yourself from complicity.
Jonathan Zong: Definitely. I mean, what you’re doing with digital psy is just so important, because you’re noticing this tendency, not only in anthropology, to turn towards seductive but harmful approaches. And there can be a lot of “Oh, if only the machine learning people would listen to the humanities people.” I find that so unhelpful, and not in a “not all computer scientists” way. It’s unhelpful because like you said it reinforces that separation in an unproductive way. Computer science doesn’t have a monopoly on harm, just as the humanities and social sciences don’t have a monopoly on ways to address harm.
Beth Semel: I think about this often. In my own work, it might be easier to tell a story of, “look at these technologists and data scientists, I’ve caught them being reductionistic,” but in reality, they’re well aware of the limits of their approaches, aware that building algorithmic systems necessitates some amount of reduction. So, what’s the intervention then?
Jonathan Zong: That’s a really interesting point though, what you’re saying about how technologists live with reductionism. Even if it’s not spoken in these terms, I think people in engineering, especially, must know that their tools are reductionistic. It’s back to that discussion on different valences of “promise” in Sun-Ha’s quote. The reduction is the point because it makes problems tractable for engineering methods. The reduction allows us to advance knowledge in a limited but very real way. We can’t just call people out for existing in a world where the available tools are limited, we have to do more than that.
Embodied Knowledge, Visibility, Hiccups
Beth Semel: To me, Biometric Sans is an intervention that moves beyond either pointing out the fact of surveillance or the necessary reductionism involved in building computational things. And it makes this intervention as a performance piece, as something embodied, rather than, say, a traditional piece of academic writing.
Jonathan Zong: Oh, well, thanks. I appreciate that you feel that way. And I think being committed to embodied knowledge means accepting that there’s things that you can’t say in words, and things that you kind of have to feel and experience. Of course I narrativize Biometric Sans in certain ways, but I always like to hope that people can just use it. Then maybe I don’t need to come up with the words. I’ve watched a bunch of people use it and people do such different things, and think about it in such different ways, that I don’t even want to tell people what to think about it sometimes.
Beth Semel: What do people say? What do they write about? Do they just write their name?
Jonathan Zong: A lot of people start with “hi” or write their names. Or will just do a stream of consciousness or make up little phrases. But what I really find interesting is the different reactions that people have. While I often start with the assumption that handwriting is more individually expressive than digital typing, others have shared with me their experiences of being told to “fix” their handwriting in school by writing and rewriting lines. For them, handwriting is a disciplinary practice. I’ve had others tell me that writing with Biometric Sans is stressful because they feel very acutely aware of their delays. That they are confronted visually with the imperfection of their thoughts, almost, and that is hard for them, which I didn’t anticipate. Some people will continually write and backspace to try to get their words to look a certain way, in a feedback loop. I think this practice starts to kind of develop a visual language of affect and tone. Some people even use it to draw, making little faces out of letters.
Beth Semel: For me when I was using it, the only thing I could think of was, “is this how it looks to be seen by something like a digital phenotyping keyboard tracer?” Which is why, to bring up her work again, it made me think of the art pieces that Simone Browne writes about in her book Dark Matters and the book as a whole. Biometric Sans makes visible that process of being made biometrically visible. You watch yourself being watched, in a way that gestures to the power in the possibility of “watching the watchers,” of knowing precisely how you are being surveilled and how your body and movements are being data-fied.
Jonathan Zong: Yeah, that’s fantastic. I really, I love that. A visual representation of what being watched looks like. In a previous conversation, you mentioned the idea of Biometric Sans as a counterpoint to CV Dazzle, “where the strategy is not invisibility/illegibility, but legibility and visibility.” And I think that’s really interesting. And as someone who is trying to figure out what I think my approach is to critical interventions in art, I worry about stuff like this a lot of the time. What does it mean to make stuff visible? And what are we actually making visible?
One example of an art piece that is critical of AI is Kate Crawford and Trevor Paglen’s ImageNet Roulette project, which was widely covered and had a real impact in the world. This project resulted in certain harmful images being removed from the ImageNet data set, because it made certain forms of harm visible. And I think they would agree that that was their intervention. But I was uncomfortable with this project, despite its success as like a piece of art activism, I guess, in that it got something to happen.
I just found it really weird that, aside from amplifying the harms—putting a magnifying glass up to the harms inherent in ImageNet—there was no other intervention. I would see people tweet about it. And white people would tweet, “Oh, isn’t it so funny that like this AI called me like, a pilot or economist or whatever, something that I’m clearly not.” And then Black and brown people are tweeting, “Hey, this AI called me something really horrible that sucks and hurts.” And the reception to this art piece was split along the exact same lines of disproportionate harm that the thing it was critiquing creates. And that kind of sucks. So since then I’ve been thinking a lot about what it means to make harm visible and whether visibility is always the same as amplification. And I don’t think of Biometric Sans in that way. I don’t think I’m really hurting anyone by making these stretchy letters. But I would love to understand better what differentiates these approaches.
Beth Semel: At least in my experience interacting with Biometric Sans, I had this moment of “Hey, look how cool this is,” but also, and you say this, I think, somewhere on your website, this realization of potentially being surveilled. That I was not only producing a cool output—my words displayed in a visually interesting, responsive way—but also, through this act of play, I was being enrolled into some kind of system, being accounted for. In that regard, I see Biometric Sans as inviting users to sit with that tension instead of just replicating harm.
And that’s why it’s cool to learn that Biometric Sans is not available as a stand-alone typeface. If it were, I could see the danger in someone grabbing it, hacking it to reverse engineer it or use it in a way that you had never intended. But then, as you said, the words people produce when they type are not actually machine-readable. When people type, they’re not producing an image that could go on to have a nefarious afterlife. You’re not feeding any kind of data economy, in the way that a gamified image-tagging platform might. It’s more of a hiccup in any kind of biometric surveillance network than an addition. Does that make sense?
Jonathan Zong: Makes a lot of sense. I love the hiccup framing. I think I’m definitely not alone in that many artists are very committed to the hiccup, the glitch, the bug. For instance, see Legacy Russell’s Glitch Feminism manifesto. And I think for good reason, because hiccups kind of reveal the underlying logics of a system. They’re what happens when a person asserts their non-machine-ness and does something unexpected to a system. Even early on—so, both the installation version and, now, the website version of Biometric Sans have a blinky cursor to tell you to type. I added that with great hesitation, because originally it was just a blank white screen when you go to the URL and nothing really told you what to do. And I think in a gallery, when you’re walking into a room with no context and you haven’t had an hour-long conversation with me before, yeah, you’re gonna see that and not know what to do. So I added it. But I kind of liked the, I don’t know … Is it illegibility or maybe just … it’s a social hiccup in that we expect systems to tell us how to use them, and this system refused.

Care and Control
Beth Semel: Aside from digital phenotyping, there’s another use-case that Biometric Sans brings to mind: surveillance technologies used in remote learning contexts to track “student engagement.” This reminds me of how, more often than not, “novel” or “innovative” techniques like digital phenotyping are really just retooled systems from carceral or forensic contexts, rebranded as mechanisms for mental health care workers to expand their care of patients beyond the clinic, or reach otherwise “unreachable” patients; this “unreachable patient” is a construct that my co-editor, Dörte Bemme, writes about in the context of global digital mental health. So I’m really interested in, and feel it’s important to talk about, how biometric, computational technologies are constantly sliding in and out of different sectors. Do you think that COVID-19 is speeding up this kind of expansion? Are people using keystroke biometrics in those contexts, for those purposes? It strikes me that the initial idea for Biometric Sans came from your experience in a classroom environment.
Jonathan Zong: We’re hearing about all these test proctoring services that track things like keystrokes and eye movements and flag students for “abnormal behavior” while taking online exams. If students click too fast or too slow, get up to go to the bathroom, or someone they live with walks by in the background, they could be flagged for cheating. It reminds me of my friend’s story about Biometric Sans and being disciplined for writing the wrong way in school. Independently of what these systems do, there’s such a punitive logic behind them. What does it do for someone’s learning to punish them for going to the bathroom or sharing a small space with another person?
But yeah, I think it’s hard for me to talk about keystroke biometrics in particular because I’m not personally familiar with the empirical cases where it’s being used. But I just think back to that kind of identity verification context for taking certification quizzes on Coursera. There’s such a demand to import carceral technology into education. It’s opportunistic on the part of the technology makers. Other people have said this, or said this better, but, yeah, these vendors are kind of capitalizing off of a crisis. And the result is we’re habituated to all this tracking and measuring in contexts where the concerns are maybe abstract privacy concerns. Or maybe they’re real over a longer time horizon where it’s harder to attribute causality, as education surveillance sorts and classes students based on anticipated “outcomes”. But either way we become habituated to the technology, to the logic behind them. With carceral systems, it’s clearer to me how to tie broader concerns about this logic back to the empirics of harm—a policeman hitting you with a stick on behalf of the state. It’s a bit of a problem that it’s hard to do that with data in other contexts, that it’s harder to see the connection to categories of harm right away when causal chains get obscured within systems.
Beth Semel: I think about this often in the case of digital psychiatry-related tools or interventions: they live in this shakier, murkier water. A good number of people, in my experience, engage in digital psychiatry projects because they genuinely want to help people — this includes research subjects, who consent to participate in clinical trials of digital psy interventions or who produce the datasets that line the bottom of the eventual technological prototype. In my ethnographic experience studying the making of these tools, I often encountered the explanation: “certain kinds of people need to accept a certain level of surveillance in order to be taken care of—it’s ultimately for their own good and we are operating with good intentions.” But as many feminist technoscience scholars writing about care have underscored, care is not always benevolent or inherently good—even when driven by good intentions it can have violent consequences. This is why we’ve tried to frame our series on digital psy around gray spaces and queasiness, rather than a very dichotomous story of, “people trying to help people with technology” or “people harming people with technology.” Neither of these leaves a whole lot of room for agency, on behalf of either the makers or the users.
Jonathan Zong: That’s really interesting. The idea that some people have to be surveilled is an unsettling thought. Yeah. I guess I don’t know what exactly I can contribute, other than starting by affirming that that’s an important uncomfortable space to be in. I remember the quote you showed me before, the Thomas Insel quote, where he asked whether it could “become possible to assess how we function directly and continuously rather than using laboratory measures at a single point in time?” It’s a very reasonable sounding, well-intentioned question. That does in fact sound better, because it’s an acknowledgement of the way that real people change over time, whereas data is usually used as a fixed measure. But I recently read a post on Alexander Galloway’s blog where he was talking about the broader embrace of empiricism that has accompanied advances in data science. In these ways of knowing, nothing can ever change because all you have is purely descriptive frameworks. If measurement and description is a tool for understanding what is, I’m not sure that more of it necessarily adds up to a theory of what could be. Things often get better while staying fundamentally insufficient.
Speculation as Method
Beth Semel: As a way to wrap up our conversation: I’ve noticed a running theme, or rather a problem, of temporality. Digital psy interventions like digital phenotyping, and keystroke biometrics in particular, participate in present-day harm, replicate past harms, and provoke concerns about unintended, potential future harms. This all leads me to ask: do you think about the role of not just temporality but speculation in your work? I’m thinking here of how Ruha Benjamin writes about speculative fiction as method, as a kind of imaginative exercise for thinking across temporal scales.
Jonathan Zong: Yeah. One of the issues Nathan Matias and I think about in our work on collective refusal is that it’s not even just that harm could occur later. New forms of harm haven’t even been invented yet, but they will be. And some of these harms are outside of our current capacity to imagine them.
This is why I think speculative fiction is so important. When Donna Haraway talks about the feminist scholarly practice of speculative fabulation, she’s talking about the importance of the “stories we tell to tell other stories with” — and the thoughts we have at our disposal to think other thoughts. With Janet Zong York, who is a literature scholar and also my sister, I co-taught a class at Tufts University called “AI, Justice, Imagination: Technology and Speculative Fiction.” Our goal with the course was to turn to fiction’s imagined worlds for help imagining possible worlds in the present and near future. Much of the technology we’ve talked about today, whether in digital psy or surveillance more broadly, rests on narratives of infallibility, inevitability, or determinism. But as many scholars—particularly those working within feminist, decolonial, and other traditions—remind us, the future is contested ground. And so is our understanding of the past, which influences our expectations of the future. I’m thinking of Ken Liu, a sci-fi and fantasy writer who makes historical interventions and extrapolates speculative presents from them. Work like this ties the way we narrativize and imagine our past through memory to the way we imagine the future. This is something Janet is thinking about in her work and teaching. What Ruha Benjamin and others say about imagination suggests that it’s necessary to the success of critique, and necessarily precedes construction. And what is really important to me about her approach is that it’s much easier to describe critically than to articulate a vision for what could be or what ought to be. I think that’s a role that art can play as well, if not always. And that’s why I continue to feel so committed to art as a method. As an artist, I often think about “defamiliarization” as an intervention, making the concepts and assumptions we take for granted visible from a distance so that we can really, truly begin to see them. Unlike research, art and design operate on mass cultural forms. They operate on the level of symbols, metaphors, and ideological frames. We need those to see clearly, and to assemble a way to say what we want. And only then can we make it happen.
Jonathan Zong is a visual artist and computer scientist interested in the imperfect ways that interface design makes people and systems legible to each other. He is a PhD Candidate in Human-Computer Interaction at MIT CSAIL. Jonathan graduated from the Visual Arts and Computer Science departments at Princeton University in 2018. He is a 2019 Paul and Daisy Soros Fellow and NSF Graduate Research Fellow.
Beth Semel is a Postdoctoral Associate in Anthropology at the Massachusetts Institute of Technology, where she serves as the Associate Director of the Language and Technology Lab. Her research explores the intersection of communication sciences, computing, and biomedicine in the contemporary U.S. Her current project traces the development of automated voice analysis technologies for psychiatric assessment. Through a behind-the-screen look at efforts to render mental illnesses both informatic and audible, she tracks how these projects torque and reiterate hegemonic ideas about language, listening, labor, and care. She received her PhD in History, Anthropology, Science, Technology and Society at MIT and was a 2018-2019 Weatherhead Residential Fellow at the School for Advanced Research.
Tracking Digital Psy is a series edited by Dörte Bemme, Natassia Brenman and Beth Semel.