The AI identity crisis
The rise of algorithmic anxiety in the professions
When the work that defined you can be done by a machine in seconds, something happens that the change-management literature has no language for.
This is the conversation that is not being had inside organisations transforming themselves with AI. The other conversation. The one happening inside the heads and hearts of the people whose roles are being transformed.
Researchers are starting to name it. What they are finding is no longer a soft problem.
The empirical case
In 2022, a team led by Milad Mirbabaie published a paper in Springer’s Electronic Markets that established “AI identity threat” as a research construct. Their work made the case that AI was activating something different from previous technological disruptions: a threat operating below the level of skill loss, reaching into professional self-concept itself.
Three years on, the literature has accelerated. In September 2025, Stephanie McNamara and Joseph Thornton of the University of Florida proposed a clinical name for the syndrome they were now seeing in their offices: Artificial Intelligence Replacement Dysfunction. Its symptom list reads as a portrait of expert identity in collapse: anxiety, insomnia, paranoia, identity loss, and hopelessness.
Five months later, in February 2026, Anurag Shekhar and Musawenkosi Saurombe published a study in Frontiers in Psychology analysing 1,454 Reddit narratives from workers experiencing AI displacement. They identified seven themes, including identity erosion, expertise devaluation, and organisational betrayal. Their most striking finding was the gap between surface and substance: around 52% of posts read as positive on the surface, yet contextual analysis showed roughly 51% carried real negativity underneath. Workers were performing adaptability, often through dark humour, while privately reporting something closer to grief.
Around the same time, Futurism led the popular press coverage of the AIRD paper under the headline “It Turns Out That Constantly Telling Workers They’re About to Be Replaced by AI Has Grim Psychological Effects.”
That headline says it all.
The popular conversation
The mainstream business press has begun arriving at the same place from a different direction. In April 2026, Geoff Curtis published a piece in Fortune under the title “AI and job loss: the identity crisis no one is preparing for.” His central image was Catholic-theology-inflected: workers caught in a “professional identity purgatory,” a liminal state between an old self and a new one that has not yet arrived. Curtis was writing from the inside, having lost his own job to restructuring. He put it cleanly: “I didn’t just lose a job. I lost the scaffolding I’d built my professional identity on.”
Even those at the most optimistic end of the AI conversation are reporting interior disturbance. In May 2025, the economist Tyler Cowen and Avital Balwit, Anthropic’s chief of staff, had a published exchange in The Free Press. Balwit wrote that she felt “humbled by how easily [Claude] does what used to make me feel uniquely valuable.” Cowen, in the same piece, asked himself “how I will stay relevant” in the AI age, hitting on the question that many people are asking themselves.
The World Economic Forum’s Global Foresight Network has named the phenomenon institutionally. In an August 2025 piece, they introduced the idea of an “AI precariat” facing “occupational identity loss.” They proposed combined policy interventions covering mental-health care, retraining, and community supports, alongside an “AI Anxiety Index” to measure the phenomenon at scale.
The empirical literature, the clinical literature, the financial press, the leading public intellectuals, and the institutional risk infrastructure are all pointing at the same thing.
Why AI training is not enough
Inside organisations, the dominant response remains at the level of process, policy and training. Reskill, redeploy, restructure. Run another change-management workshop. The assumption is that lack of skill is the problem.
The material underneath the surface is older than skill. It is fear, shame, grief, and anger for which nobody has offered a language. A workshop on prompt engineering cannot reach any of that.
Training has its place. But the training-room frame assumes the only thing in play is technical capability. The empirical literature is now showing that what is actually in play is identity.
Jeffrey Pfeffer, in his book Dying for a Paycheck, traced how thoroughly Western professional identity has fused with the job itself. For many, the work has become the self. When the work changes faster than the self can adapt, something fractures.
This is most acute for the credentialed expert professions: medicine, law, finance, accounting, engineering, consulting, IT, and research. Professions where decades of training and practice produced an internal sense of competence that became indistinguishable from the person.
When AI compresses what those professions do, the ground on which the self had been standing becomes less solid.
What I bring to this
I should declare an interest. I have spent twenty-five years in financial modelling, mostly in project finance, infrastructure, and energy. I am also one of the people training senior practitioners in how to work with AI in the discipline. And alongside that work, over the past four years, I have trained as a Deep Process Psychotherapist, working with senior leaders and professionals. An unusual combination, to say the least.
But it does give me a particular perspective:
I am writing about this from inside two different rooms: the room where the AI compression of expert work is happening in real time, and the room where the people experiencing that compression are starting to find words for what it is doing to them.
The patterns the empirical literature is now describing closely track what shows up in the second room when there is space and time to look at it.
What depth psychology brings
Carl Jung, working in the early twentieth century, gave us a word for the parts of ourselves we have learned to keep out of sight. He called it the shadow: everything that, at some point in childhood, was not safe to show. The professional self gets built on top of those buried parts. They do not vanish; they wait.
The depth-psychology tradition has spent a hundred years learning what to do when those parts start to surface. That is what is happening now, at scale, in expert professions. The AI transition is doing more than shifting jobs. It is compressing the professional self that those jobs used to hold. As that compression happens, the material the professional self had been holding underground comes up for air.
What depth psychology offers is a different stance toward the material that is rising: meeting it instead of managing it, turning toward what was previously turned away from.
This is the stance the AI training rooms cannot offer. The corporate world is not set up to support the kind of relationship with one’s own difficulty that the work requires. Which is part of why the empirical literature is reaching for clinical names like AIRD: when the inner work cannot happen at work, it shows up as difficulties outside of work, and often as symptoms in the doctor’s office.
Two conversations, not yet talking
Two conversations are happening about AI and expert work right now, and they are not talking to each other.
The first is the empirical and clinical conversation: psychologists, organisational researchers, mental-health practitioners producing increasingly precise descriptions of what is happening. The vocabulary is sharpening fast: identity threat, algorithmic anxiety, replacement dysfunction, algorithmic paranoia.
The second is the depth-psychology conversation. A few writers, mostly in the Jungian and post-Jungian tradition, are theorising AI as a mythic event. Jason Batt and Jonathan Erickson edited a substantial volume in October 2025 called Depth Psychology, Myth and Artificial Intelligence: Soul and the Machine. It reads as the most serious recent attempt to bring the depth-psychology lineage into contact with the AI question. Its register is theoretical, and its focus is largely on AI itself as a cultural artefact rather than on what AI is doing to the working professional whose expertise is in the compression zone.
Neither conversation is meeting the senior practitioner where they actually live: at the desk, on a Wednesday afternoon, watching a tool produce in seconds what took them ten years to learn.
This is the gap The Shadow at Work is writing into.
What the path through looks like
If you are in this territory, the most useful thing this piece can offer is a different stance toward what you are experiencing.
The flat affect, the dread, the working-harder-rather-than-less, the questions you do not quite let yourself ask: they are signs that something old in you is being touched by something new in the world. You are experiencing it accurately. But perhaps through the lens of a professional self that has been trained to ignore its own experience.
The depth-psychology tradition would call this an opportunity, in a specific sense: material that has been buried for decades is now available to be met. The parts of the self that were exiled in order to function professionally are coming up for air. This is the ground the depth-psychology tradition has spent a hundred years preparing.
What the work of meeting them looks like in practice is a longer subject than this piece can hold. It draws on shadow work in the Jung, Bly, Ford and Richo lineage, parts work from the Internal Family Systems tradition, and somatic approaches that listen to the body. All of it requires turning toward what the professional self has spent decades turning away from.
This is uncomfortable work in the short term and generative work in the longer term, particularly when the structures that previously contained the self are loosening.
Closing
We are at the beginning of a transition that the surface conversation about AI is not capable of holding. The technical, institutional, and clinical conversations all need to continue. The conversation that is missing is the one about what is happening inside expert workers’ actual psyches as the ground moves under their feet.
That is the conversation I’m pursuing in The Shadow at Work.
What the moment requires is three things meeting each other: the practical tools the depth-psychology tradition has been developing for a century, the language the empirical literature is now producing for what is happening, and the lived experience of the people inside the compression. None of the three on its own is enough.
The ground is open. The work is the same as it has always been: meeting what is rising rather than managing it.