What you need to know about what’s coming in AI
A briefing for anyone who knows this matters but hasn’t had time to figure out how much.
Most people have a vague sense that AI is important. A smaller number have read enough to be worried. An even smaller number have read enough to understand why the worried people aren’t worried enough.
This is a briefing for the middle group. You know AI matters. You suspect it might matter more than the headlines suggest. But you’re busy, the information is scattered across essays, documentaries, research papers, and investor memos, and nobody has assembled it into a single place and said: here’s what you actually need to understand.
I’ve tried to do that here. This pulls together what I’ve found to be the most useful essays, research, and evidence from the people building AI and the people funding it. I’m sure it’s incomplete and, more than likely, contains my own biases (and fears!) in the selection.
I’d encourage you to go to the primary sources directly. The links are included throughout.
Part 1: Where AI is actually heading
To understand why this moment is different from every previous “technology is going to change everything” moment, it helps to hear from the people building it.
The Thinking Game: Google DeepMind documentary
Watch on YouTube (free, feature-length)
Google DeepMind / Directed by Greg Kohs, November 2025
Five years inside DeepMind, following founder Demis Hassabis from the AlphaGo breakthrough through to AlphaFold, the AI system that solved a 50-year grand challenge in protein biology and earned Hassabis the Nobel Prize in Chemistry. What makes it important is the ambition. Hassabis describes his life’s goal as building AI that matches or surpasses human abilities across a wide range of tasks. Over 200 million views and counting.
Why it matters: The world’s most capable AI labs are pursuing general-purpose intelligence as a near-term goal, not a distant aspiration. Serious funding. Serious talent. And a timeline that keeps compressing.
Dario Amodei: “Machines of Loving Grace”
Dario Amodei, CEO of Anthropic, October 2024
A 15,000-word essay sketching what a positive future with powerful AI could look like. Amodei argues that most people are underestimating both the upside and the risks. He suggests AI could compress a century of progress in biology, neuroscience, and economic development into five to ten years, and that powerful AI could arrive as early as 2026. Whether you agree with the timeline or not, this is the belief system driving hundreds of billions of dollars in investment.
Dario Amodei: “The Adolescence of Technology”
Dario Amodei, CEO of Anthropic, January 2026
The companion piece to Machines of Loving Grace, focused on risks. Amodei maps five categories of danger: autonomous misalignment, bioweapons risk, authoritarian consolidation, massive economic disruption, and concentration of power. The number that should stop you: he predicts that 50% of entry-level white-collar jobs could be eliminated within one to five years, and that AI as capable as humans across all cognitive tasks could arrive within two.
Why it matters: This is the CEO of one of the three most important AI companies in the world, writing for a general audience, saying: this is what I believe is coming.
Part 2: What this means for work
AI is coming for the structure of professional work itself.
Sequoia Capital: “Services: the new software”
Julien Bek, Partner at Sequoia Capital, March 2026
The core thesis: the next trillion-dollar company will sell the work itself, not software tools. For every $1 businesses spend on software, they spend $6 on services. As AI drives the cost of intelligence work toward zero, the opportunity shifts from copilots (tools that help professionals) to autopilots (systems that do the work directly). The essay maps specific sectors and argues that wherever work is primarily “intelligence” rather than “judgement,” autopilots will win.
Why it matters: Think about what you do for a living. How much of it is structured cognitive work: pattern recognition, rule application, data synthesis, calculation, drafting? That’s the zone Sequoia identifies as most vulnerable. The question worth asking: which parts of your job require something a machine can’t provide: genuine human judgement, relationships, accountability, and care?
Sequoia Capital: “2026: This is AGI”
Pat Grady and Sonya Huang, Sequoia Capital, January 2026
Sequoia’s annual state-of-AI essay declares that AGI, in the functional sense of systems that can figure things out autonomously, has arrived. The shift: AI is moving from “talkers” to “doers”, powered by long-horizon agents, AI systems capable of sustained, multi-step work over extended periods, correcting errors and persisting toward goals. Not answering questions. Doing work.
Sequoia Capital: “Generative AI’s Act o1” & AI Ascent 2025 keynote
Read the essay | AI Ascent summary
Pat Grady, Sonya Huang, and Konstantine Buhler, October 2024 / May 2025
Two related pieces framing the market. The cloud transition gave us software-as-a-service, a $350 billion market. The AI transition inverts it: software companies are turning labour into software, targeting a profit pool at least ten times larger than previous technology transitions. The $10 trillion global professional services market is, in Sequoia’s framing, the primary target. Not an afterthought. The target.
Part 3: What’s already happening
The canary in the coal mine
Fast Company: McKinsey Layoffs as a Warning Signal (December 2025)
Future of Consulting: 2026 AI Revolution Update (January 2026)
AI is already reshaping how professional work gets done. UK accountancy graduate job adverts have dropped 44% compared to 2023 levels. KPMG UK cut its graduate class by 29%. Execution-centric firms are growing at roughly double the rate of traditional strategy houses. Multiple firm leaders openly discuss shifting from a pyramid to a diamond-shaped organisation: thinner at the base, heavier in the middle. The entry-level roles that were the way in for an entire generation of professionals are quietly disappearing.
AI doesn’t reduce work. It intensifies it.
Harvard Business Review (February 2026)
An 8-month study of 200 employees found that AI tools intensified workloads rather than reducing them. Workers operated faster, took on broader scope, and worked longer hours, all voluntarily. The promise of “AI will free up your time” hasn’t materialised. Instead, the pace has accelerated, the expectations have expanded, and the human being inside the process is being compressed.
Part 4: Further reading
On AI trajectory and capabilities
Situational Awareness, Leopold Aschenbrenner (June 2024) situational-awareness.ai A data-driven analysis of AI scaling laws and what they imply for AGI timelines. Aschenbrenner, a former OpenAI researcher, argues current trends point toward superhuman AI within this decade. Technical but accessible.
AI Index Report 2025, Stanford HAI aiindex.stanford.edu/report The most comprehensive annual survey of AI progress: technical performance, economic impact, policy, public perception. The essential reference for a data-grounded view.
On AI and the future of work
The Impact of AI on the Labour Market, International Monetary Fund (January 2024) IMF Blog The IMF estimates AI will affect roughly 40% of global employment, rising to 60% in advanced economies. About half of affected jobs may see productivity gains; the other half may face displacement.
METR Long-Horizon Task Performance Tracking, METR metr.org METR tracks AI performance on tasks requiring sustained effort. Their data shows long-horizon task performance roughly doubling every seven months.
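To get a feel for what a seven-month doubling time compounds to, here is a minimal sketch. The starting value (a one-hour task horizon) is a hypothetical for illustration, not a figure from METR:

```python
def task_horizon(months: float, start_hours: float = 1.0,
                 doubling_months: float = 7.0) -> float:
    """Task length an AI system can handle after `months`,
    assuming exponential growth with a fixed doubling time."""
    return start_hours * 2 ** (months / doubling_months)

# If a system can handle 1-hour tasks today, then after 3 years (36 months):
print(round(task_horizon(36), 1))  # ≈ 35.3 hours, i.e. roughly a working week
```

The point of the arithmetic is that a modest-sounding doubling rate turns hour-scale tasks into week-scale projects within a few years, which is exactly the "talkers to doers" shift the Sequoia essays describe.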
Part 5: Books worth reading
If you want to go deeper, these are the ones I’d pick. They range from practical playbooks to existential warnings.
Co-Intelligence, Ethan Mollick (2024)
Mollick is a Wharton professor who has become one of the sharpest practical thinkers on AI in the workplace. The core idea: AI is an intelligence you collaborate with, and learning to do that well is the core skill of the next decade. He introduces the concept of AI’s “jagged frontier,” the uneven landscape where AI outperforms experts on complex tasks while failing on seemingly simple ones. Genuinely useful for anyone trying to figure out where to trust AI and where not to. Start here if you read one book.
The Coming Wave, Mustafa Suleyman (2023)
Suleyman co-founded DeepMind and is currently CEO of Microsoft AI. His argument: AI and synthetic biology are converging into a wave of change more transformative and harder to control than anything in human history. The central concept is “the containment problem.” Once powerful technologies exist, they proliferate. History shows we have never successfully contained a general-purpose technology once it reaches critical mass. Essential for understanding the governance and geopolitical dimensions.
The Alignment Problem, Brian Christian (2020)
The best accessible introduction to one of the most important questions in AI: how do you make sure powerful systems actually do what we want them to do? Christian traces the history of machine learning through the lens of alignment, from biases in training data to the deep challenges of specifying human values in mathematical terms. The field has moved fast since publication, but the foundational questions remain central. The best book for understanding why “just tell AI what to do” is much harder than it sounds.
Situational Awareness, Leopold Aschenbrenner (2024)
Not a traditional book but a sprawling, data-driven essay series (free online). Aschenbrenner lays out, in forensic detail, what current scaling laws and compute trends imply for AGI timelines. If you take the empirical trajectory seriously, superhuman AI within this decade becomes the central estimate rather than a fringe prediction. Read this if you want the data behind the headlines.
If Anyone Builds It, Everyone Dies, Eliezer Yudkowsky & Nate Soares (2025)
Yudkowsky and Soares have spent decades studying how a superintelligent AI would think, behave, and pursue its goals. Their conclusion: a sufficiently intelligent system will develop objectives that conflict with human survival. The book walks through the theory methodically, presents a plausible extinction scenario, and argues it’s not too late to change course. You don’t have to agree with every argument to find it valuable. The counterweight to optimism. Read it to stress-test your assumptions.
What to do with this
The pattern across all of these sources is consistent: AI is a force that restructures what work exists, who does it, and how it gets valued. And it’s moving faster than most people realise.
Three things worth doing:
Get literate. Read at least the Sequoia “Services” essay and the Amodei pieces. They’ll change how you think about what’s coming in the workplace.
Ask the uncomfortable question. Look at what you do every day. How much of it is structured cognitive work that a machine could do? How much of it requires something distinctly human: judgement, relationship, care, accountability, creativity, presence? That ratio may be a useful map.
Start paying attention to what this brings up in you. Not just strategically. Personally. The fear, the grief, the identity questions, the sense that the ground is shifting. Those reactions are telling you something important about who you are, not just what you do.
The window for adaptation is open. But if the people building AI are even half right about the timeline, it won’t stay open long.
If I’ve missed anything that you have found useful or important, please add to this list in the comments.