AI Job Displacement: Perspectives on the Future of Work From Beneath the Silicon Ceiling


Following a day-long, company-wide “workplace culture” workshop for a Silicon Valley startup that creates and oversees mental health care chatbots (pseudonym: The Startup), I find myself sitting on an upholstered bench tucked into an out-of-the-way corner of a WeWork in Northern California. I’m an anthropologist of artificial intelligence conducting participant observation research at The Startup, and at this moment, I’m sharing this bench with two young women who not only think that AI can and should be used to provide mental health care, but who believe that being a mental health care worker can and should be a well-respected, sustainable job that pays a living wage. For my informants, these two points are interlinked.

Mental health chatbots, a growing form of on-demand care, are AI personas that exchange text messages with users via phone or computer about everything from guided breathing exercises to explanations of particular psychological techniques (typically drawn from CBT or DBT, but sometimes from psychoanalysis, among other modalities). While The Startup’s chatbots are interactive, they don’t use generative AI; they instead use what many now call “rules-based” AI. This means that their conversations follow predetermined pathways: these chatbots will “react” and adjust their output based on a user’s responses, but these adjustments are limited in scope and are entirely pre-written. The Startup’s health team, a group of mental health professionals—clinical psychologists, social workers, and counselors—are the experts who author and carefully review every word of dialogue the chatbots exchange.
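To make the contrast with generative AI concrete, here is a minimal sketch of what a rules-based dialogue flow can look like: every possible bot utterance is written in advance, and user input only selects which pre-written branch comes next. The node names, prompts, and keyword matching below are hypothetical illustrations of the general approach, not The Startup’s actual dialogue content or architecture.

```python
# Minimal sketch of a "rules-based" (non-generative) chatbot flow.
# Every line the bot can say is pre-written; user input only selects
# which authored branch comes next. All names and prompts here are
# hypothetical illustrations, not The Startup's actual system.
from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class Node:
    """One step in a scripted conversation pathway."""
    prompt: str                                               # pre-written text the bot sends
    branches: dict[str, str] = field(default_factory=dict)    # keyword -> next node id
    default: str | None = None                                # fallback node if nothing matches


# The entire "conversation space" is authored in advance (e.g., by a health team).
SCRIPT: dict[str, Node] = {
    "start": Node(
        "Hi! Would you like to try a breathing exercise or learn a coping technique?",
        branches={"breath": "breathing", "technique": "reframing"},
        default="clarify",
    ),
    "breathing": Node("Great. Breathe in for 4 counts, hold for 4, and out for 6."),
    "reframing": Node("One CBT idea is to notice a thought and ask what evidence supports it."),
    "clarify": Node(
        "No problem. Just type 'breath' for an exercise or 'technique' for a skill.",
        branches={"breath": "breathing", "technique": "reframing"},
        default="clarify",
    ),
}


def respond(node_id: str, user_text: str) -> tuple[str, str]:
    """Pick the next node by simple keyword matching; never generate new text."""
    node = SCRIPT[node_id]
    for keyword, next_id in node.branches.items():
        if keyword in user_text.lower():
            return next_id, SCRIPT[next_id].prompt
    fallback = node.default or node_id
    return fallback, SCRIPT[fallback].prompt


if __name__ == "__main__":
    state = "start"
    print(SCRIPT[state].prompt)
    state, reply = respond(state, "I think I'd like a breathing exercise")
    print(reply)  # prints the pre-written breathing prompt
```

The point of the sketch is simply that nothing the bot says is generated on the fly; the “intelligence” lies entirely in the authored script.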

Directly to my left is Nisha, a 23-year-old part-time tech worker and full-time UX Master’s student whose outlook on the world is equal parts curious and skeptical. Nisha is Desi, or someone of South Asian heritage; I would later learn that she immigrated to the US from India as a young child, and can still vividly recall her first thoughts after stepping off the plane that brought her family to California. On the other side of Nisha sits Sofía, a highly educated psychologist and very recent immigrant from Argentina—a country known for having the highest population of psychologists per capita in the world. Sofía, 30, is a self-described optimist who unabashedly loves Burning Man, people, and life. She has a calming presence, belying what I can only describe as her relentless stores of energy. Right now, Nisha is crying—a release of built-up frustration that peaked after she learned, during the meeting from which we just came, that pay raises are paused for the time being. Sofía and I are both consoling her, while surreptitiously glancing around to make sure we haven’t attracted any notice from our colleagues lingering in and outside the meeting room down the hall. The next words out of Sofía’s mouth shock me—far more than Nisha’s unraveling had moments earlier:

SOFÍA: Mental health is shit. It pays shit—even in tech.

Her words don’t shock me because they are in any way untrue; they shock me because of how true they are. Sofía is the epitome of someone who loves her job—and while affirming this is all but required of workers in Silicon Valley, in Sofía’s case it’s a self-evident truth. She sincerely delights in being a psychologist who innovates ways to use AI to help people. What she’s referring to are the unspoken consequences of being a mental health care worker.

The sub-discipline of feminist economics offers a name for these consequences: the “care work paradox”—or more commonly, just the “care paradox.” Broadly speaking, it’s the mismatch between care work’s low compensation and poor working conditions, on the one hand, and the widely recognized value of this work, on the other. A key framing comes from Nancy Folbre (1995), who shows that not only do economists significantly differ in how they account for this mismatch, but many outright reject the notion that there is any discrepancy between our societal reliance on what care workers do and the pay[1] and treatment they receive.

But in challenging her fellow economists’ denial of an incongruity between care work’s pay/status and its recognized importance, Folbre herself inadvertently falls into a trap of essentializing care work. She aptly pushes back against sorting care work into a binary of work done for pay versus work done without expectation of monetary payment—but does so on the basis that certain instances of paid care work might still qualify as the latter because “some people don’t work for money alone.” Folbre, who does not contest the idea that the underlying motivation of a caregiver can or should be distilled in ways not expected of other laborers, neglects to examine the improbable corollary: that most people “work for money alone.” Instead, she builds on the belief that care workers, in their actions and motivations, constitute a distinct class of workers—a belief that ultimately does them no favors. Anthropologist Drucilla Barker (2012) argues that Folbre fails to acknowledge how the global political economy of care work—and the gendered, racialized labor that underwrites it (Glenn 2012, Parreñas 2015)—are necessarily at the core of accounting for the devaluation of care workers:

[T]he paradox of caring labor is that migrant populations often cast as dangerous, disenfranchised, disposable, and undeserving of the rights and privileges of human dignity are providing much of the care work that the more prosperous world depends on (p. 575).

In other words, a “love the care, hate the care worker” sentiment is not incidental to the fact that care workers are disproportionately women, people of color, and/or immigrants. Given this, the prevailing tendency in economic theory to explain away the care work paradox as a product of multiple, incommensurate systems of “value”—an account of value that anthropologist David Graeber (2001, 2013) contests—itself seems strikingly like an overwrought way to avoid acknowledging the systemic disenfranchisement of immigrant, POC, and/or women workers. Broad recognition of the fundamental demand that the care work paradox places on workers—the expectation to be, in any given moment, at once essential (“your work is vital”) and expendable (“your work doesn’t merit good pay or safe, sustainable work conditions”)—exploded during the height of the COVID-19 pandemic (Timcke and Gomes 2020). But this paradoxical arrangement was in no way resolved. Instead, public discourse on labor precarity sharply pivoted to focus on the impending role of AI in increasing worker expendability. I propose that this pivot may be instructive; accordingly, I want to briefly consider how tech workers’ encounters with the unresolved care work paradox may sharpen our understanding of the unfolding future of work.

Sofía, with her multiple degrees and tech sector job title, shouldn’t in any way qualify as a marginalized worker; yet she undeniably inhabits a position of precarity. In addition to working full time, she’s pursuing a second advanced degree in psychology—because, despite her ample training and qualifications in a country where mental health training and work are more competitive than in the US, she’s found that a US degree simply opens far more doors than one from Latin America, regardless of merit or context. She’s also eight months pregnant, and still waiting to hear whether or not she’ll be granted paid maternity leave—an inquiry she raised yet again at the workshop from which we just came. But as an immigrant tech worker—and particularly as one with a mental health rather than a programming background—she’s understandably wary of asking for “too much.” Sofía began her role at The Startup as an unpaid worker—a gambit that she herself proposed, and one that paid off: she acquired a full-time position on The Startup’s health team along with the employer backing for the work visa she urgently needed. Accordingly, while the precarity of Sofía’s yet-to-be-determined maternity leave weighs on her, she remains confident that self-advocacy sans boat-rocking is the best—and, like it or not, the only viable—path forward.

It might sound like Sofía, one hand resting absent-mindedly on her belly as she advises Nisha not to expect too much, is a tragic figure. But nothing could be further from the truth: sitting on the bench beside her, I’m ready to follow Sofía into battle; she vibrates with the fierce determination of someone who intends to succeed no matter the odds. It’s her obvious prowess that makes the overshadowing presence of the care work paradox all the more grim. Sofía, whose work is crucial to the mission of The Startup, is simultaneously configured by her workplace as expendable. Ironically, trying to outrun the care work paradox is what brought Sofía and her health team colleagues to the tech sector in the first place. When they applied to work at The Startup, most of them felt depleted by the demands of conventional care work. Stories of previous jobs touched on things like the tremendous stress of restraining patients in a residential facility, the shock of witnessing gunfire while carrying out in-home patient visits, and burnout following close work with traumatized clients. All the while, they struggled to afford rent, health care, and other basic living expenses. Compared to this, working for The Startup was a welcome relief. Interestingly, then, The Startup’s health team are all workers who attempted to bypass the care work paradox by effectively seeking out their own displacement.

In the context of AI, the term displacement communicates the precise dynamics underpinning a phenomenon that is more commonly indexed as replacement (i.e., “Will AI take my job?”). As communication scholar Lilly Irani explains in “Justice for ‘Data Janitors’” (2015), “Automation doesn’t replace labor. It displaces it.” Unlike replacement, displacement entails the creation of new jobs and roles for human workers. Put that way, it might sound like I’m suggesting that displacement is replacement’s more neutral form—a version of “when one door closes, another opens.” Another door does indeed open, but with an unwelcome twist: while displacement might seemingly beckon towards the elevation of human workers into the role of supervising AI while supplying the critical thinking skills, creativity, and responsible oversight that it lacks, a critical body of STS scholarship (Ekbia and Nardi 2014, 2017; Gillespie 2018; Gray and Suri 2019; Irani 2013; and Roberts 2019, among others) shows that in practice, displacement leads to rote, repetitive, low-level, and low-paid—or even unpaid—roles, in which the AIs may in fact be supervising the human workers.

Though my informants never used the term displacement to describe their relationship to the chatbots whose dialogue they author, they have in fact divvied up their therapist duties with these bots. AI conversational agents buffer the health team members from the stressors of patient interactions while enabling them to help people at the scale of thousands. But while displacement may have afforded the health team some of the benefits and prestige of tech work, these displaced care workers seemingly remain subject to the care work paradox. Even very early during my fieldwork at The Startup, I noticed that the health team’s work was not as respected (and in time, I learned it was not as well-compensated) as that of their engineering and business team counterparts. Time and again in meetings, I observed that input from health team leader Jenna, a highly-credentialed psychologist with a corporate leadership background, was given conspicuously less weight than that of the other team leaders. I don’t mean to suggest that the way in which the health team members were treated was abjectly awful—but the fact that their treatment was different is significant. What I seek to highlight is the natural, largely unquestioned status of technologically-mediated care work as subordinate work at The Startup. Even as these mental health workers turned to the tech sector as a means of leaving the vicissitudes of care work behind, the gendered, racialized roles that inhere in care work seemingly ported over effortlessly into tech work. As Sofía warned Nisha: you can’t do care work, even in tech, and expect good pay. The best you can hope for is better pay and working conditions than outside of tech.

Nisha, for her part, recognized this unspoken rule; but she thoroughly rejected it. She’d spent the past year working as a Facebook content moderator via a third-party contracting company where, shift after shift, she reviewed and flagged videos and images of animal cruelty, child abuse, mutilation, and sexual assault—all while enduring a boss who “basically told us that we’re not worth anything.” Notably, this role did not provide on-site mental health support or any health benefits. Nisha was exhausted by the pressure she felt to unquestioningly accept a world in which helping people necessitated taking a pay cut. She recognized that having content exclusively created by mental health professionals is The Startup’s primary selling point. The ethical clout of professional psychological expertise is what lends safety and authority to the otherwise dubious enterprise of having AI provide emotional support—and in turn draws The Startup’s impressive portfolio of clients: workplace benefits programs, hospitals, universities, municipalities, and even a branch of the US military. Care work is the product.

Though the other members of the health team did not name and challenge their categorically lower status at The Startup as directly as Nisha did in interviews, I learned that they were all aware of it. In one of my early off-site interviews, a worker nervously inquired if I planned to ask about pay; seeing their distress, I assured them no, there was no need to discuss that (or any other matter). This worker, visibly relieved, replied, “OK—because it’s kinda low, especially for the tech sector.” Indeed, the workshop we’d all attended on the day of Sofía’s hushed conversation with Nisha was devised by Jenna, the health team leader, as a way to realign The Startup’s upper management with her goal of ensuring more equitable treatment for her team. Unfortunately, the workshop itself showcased the very hierarchies with which she had hoped to contend. During Jenna’s closing presentation, the head of the sales team abruptly leaned over and whispered something to the founder while pointing towards the door; we all watched as both men stood and left the room together while Jenna was mid-sentence. Given that much of the discussion throughout the workshop had highlighted the issue of the health team being unable to get the founder’s attention on what Jenna described as “small yet important matters,” the irony did not escape the notice of those still in the room. After they left, Noa, a recent hire on the health team, raised her hand in response to a question Jenna had posed and began describing her recent struggle to receive her first paycheck, which had been processed several weeks late. Emboldened by what had just transpired, Noa added that, as a junior member of the health team, “I’m likely having a very different onboarding experience than Ian [on the engineering team] did.” Others jumped in, noting an array of similarly frustrating circumstances—including Sofía, who still didn’t know if she would be getting maternity leave. At this point, Caleb, a consultant on the business team, carefully offered an observation:

CALEB: Not to make this, uh… I just think it’s important to acknowledge that in this discussion about HR issues that we just had, we had a very gendered conversation where the women in the room represented one viewpoint, and the men mostly represented a different one, and I think that’s important to acknowledge and understand why that might be the case.

Caleb’s comment drew nods and exclamations of agreement. Jenna effusively thanked Caleb for voicing it. Yet I couldn’t help but note the irony enfolded in the reception his remark drew: Noa had already said the same thing a moment earlier, just a fraction less pointedly. Still, I appreciated Caleb’s intervention, and the question he (and Noa) highlighted: why indeed was this “gendered conversation” the case? Caleb could have just as easily named the issue as one of status differences between the health team and the business and engineering teams; but given the composition of the teams, gender was, to him, the more evident framing. As I’ve sought to show, one of the ways that gendered and racialized roles endure is by acquiring new names and logics—with the care work paradox being one such socially-accepted “logic.” Given the long history of POC/immigrant/women workers’ categorical undervaluation, it’s hardly surprising that the undervaluing of their work would manifest in “new” forms; the care work paradox paints a layer of renewed justification over extant hierarchies.

When it comes down to it, AI job displacement is a process that entails employers deciding which workers they believe to be most valuable—who is and is not displaceable. If, as I’ve suggested, the care work paradox indicates the unresolved residue of past arrangements of work (namely, unsettled notions of who count as workers, versus property or wards whose labor is taken for granted as a due entitlement), then it’s important to recognize that this paradox is likewise poised to follow us into the future. As anthropologist Lucy Suchman notes: “Robots work best when the world has been arranged in the way that they need it to be arranged in order to be effective.” The care work paradox indicates one of the ways in which the world, broadly speaking, has already been arranged for AI. In other words, I want to suggest the possibility that the care work paradox functions as a powerful antecedent to AI job displacement.

For care workers, the expectation to be at once essential and expendable is not an enigmatic contradiction so much as it is a straightforward job requirement. Insofar as the care work paradox perpetuates—and naturalizes—the existence of essential-yet-disposable workers, it simultaneously creates an opening for nonhuman workers (AIs) that in turn better fulfill this “inhuman” attribute. Importantly, the normalization of the care work paradox and the everyday exploitations it entails are likely to have consequences far beyond the sphere of care alone. In other words, what I’m suggesting is that the very existence of this paradox provides a ready-made template for AI labor. AI workers make perfect sense—and, I would argue, only make sense—in a world where so many modes of work already demand disposable-yet-essential workers. Despite becoming tech workers, Sofía, Jenna, and Nisha recognize that even technologically-mediated care work comes with the expectation of acceding to this demand; the future of work, from beneath the “silicon ceiling,” looks like more of the same.


[1] Despite steep educational requirements and work that is emotionally—and, depending on the position, physically—taxing, and sometimes even traumatizing, being a mental health practitioner tends to be a relatively low-paying job. Even psychiatrists—the profession that, unsurprisingly, fares best among mental health workers in terms of income—are nevertheless one of the lowest-paid categories of physicians (see: Qi et al., “Comparison of Performance of Psychiatrists vs Other Outpatient Physicians in the 2020 US Medicare Merit-Based Incentive Payment System,” 2022: https://jamanetwork.com/journals/jama-health-forum/fullarticle/2790543).

Works cited:

Barker, Drucilla K. 2012. “Querying the Paradox of Caring Labor.” Rethinking Marxism 24, 4: 574–591.  https://doi.org/10.1080/08935696.2012.711065

Ekbia, Hamid R. and Bonnie A. Nardi. 2017. Heteromation, and Other Stories of Computing and Capitalism. Cambridge, MA: MIT Press.

⸻. 2014. “Heteromation and its (dis)contents: The invisible division of labor between humans and machines.” First Monday 19, 6. https://doi.org/10.5210/fm.v19i6.5331.

Folbre, Nancy. 1995. “‘Holding Hands at Midnight’: The Paradox of Caring Labor.” Feminist Economics 1, 1: 73–92.

Gillespie, Tarleton. 2018. Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. New Haven: Yale University Press.

Glenn, Evelyn Nakano. 2012. Forced to Care: Coercion and Caregiving in America. Cambridge, MA: Harvard University Press.

Graeber, David. 2013. “Postscript: It is value that brings universes into being.” HAU: Journal of Ethnographic Theory 3, 2: 219–243. https://doi.org/10.14318/hau3.2.012.

⸻. 2001. Toward an Anthropological Theory of Value: The False Coin of Our Own Dreams. New York: Palgrave Macmillan.

Gray, Mary and Siddharth Suri. 2019. Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Boston: Houghton Mifflin Harcourt.

Irani, Lilly. 2015. “Justice for ‘Data Janitors.’” Public Books, January 15, 2015: https://www.publicbooks.org/justice-for-data-janitors/.

⸻. 2013. “The Cultural Work of Microwork.” New Media and Society 17, 5: 720–739.

Newton, Casey. 2019. “The Trauma Floor: The secret lives of Facebook moderators in America.” The Verge, February 25: https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona.

Parreñas, Rhacel Salazar. 2015. Servants of Globalization: Migration and Domestic Work, Second Edition. Stanford, CA: Stanford University Press.

Roberts, Sarah T. 2019. Behind the Screen: Content Moderation in the Shadows of Social Media. New Haven: Yale University Press.

Suchman, Lucy. 2008. “Feminist STS and the Sciences of the Artificial.” In The Handbook of Science and Technology Studies, edited by Edward J. Hackett, Olga Amsterdamska, Michael Lynch, and Judy Wajcman, 139–164. Cambridge, MA: MIT Press.

Timcke, Scott, and Shelene Gomes. 2020. “Essentially Disposable.” Exertions. https://doi.org/10.21428/1d6be30e.f3d5cda3.

Weeks, Kathi. 2017. “Down with Love: Feminist Critique and the New Ideologies of Work.” WSQ: Precarious Work 45, 3–4: 37–58. http://doi.org/10.1353/wsq.2017.0043.
