Parallel worlds
This is one of the most important pieces I’ve ever written. For many people, it won’t be a comfortable read — but right now, honesty matters far more than comfort.
It all started with a thought from Jack Clark, one of the co-founders of Anthropic:
“I expect that by summer 2026, many people working with advanced AI tools will feel like they’re living in a parallel world compared to those who aren’t.”
This idea resonates with me completely, because I can see it with my own eyes already. And it’s a far bigger problem than people realise. For now, much like with other exponential technologies, hardly anyone is paying attention.
Most people are still living in an illusion. They work at companies where not much is happening yet. They see that AI can code and automate a lot of things. But sometimes it doesn’t work, or it makes a mistake, and people conclude that AI will never be capable of doing work at a human level. That it will have the same impact as any other tool humanity has ever invented.
And yet we see more and more examples of what skilled people can do with AI, but until we see it with our own eyes and in our own work, we often assume it simply doesn’t apply to us.
The impact of artificial intelligence on our work will be enormous. And this year, we’ll see it with our own eyes for the first time. That’s why we need to prepare, as individuals, as companies, and as a society.
An important note: I know our work isn’t just about what we do on a computer. That it’s full of regulation and organisational inertia. That no model will replace judgement, relationships, or human emotion. That work won’t disappear, it will just change, as it always has. The problem is the unevenness. Some people are changing fast, while the majority aren’t changing at all. That’s what’s creating what I want to talk about: not just two classes of people, but two speeds of learning. And two entirely different realities.
Bifurcation is coming
When I was preparing for AI Predictions 2026, I went through data from the largest AI labs as well as our own AI programmes. I also kicked off a public discussion on LinkedIn with a simple question: how has your approach to AI changed over the past year?
When I fed more than 80 responses into AI, several clear patterns emerged: people are moving from chatting to more advanced tools, combining multiple models, and tackling more complex challenges.
But the most important finding was different: a growing divide.
The gap between advanced users and everyone else is deepening. In some companies, AI adoption has stalled. And advanced users are losing patience with the world around them.
One caveat: by “advanced users”, I don’t mean tech people who were always in their own world. Quite the opposite. These are people from all kinds of teams and departments: an architect, a marketer, a finance manager, a product specialist…
It was Claude who first surfaced the term bifurcation for me, and it describes exactly what’s happening:
Bifurcation (from the Latin for “fork”) means splitting into two parts. In geography, a river divides into two channels; in medicine, the aorta divides into the iliac arteries. In mathematics and physics, the term describes sudden changes in a system’s behaviour that lead to new states.
In other words: parallel worlds.
We are experiencing this bifurcation right now. We are entering an era of parallel worlds, one inhabited by people who work in symbiosis with AI. The other by everyone else.
Look at the real stories from people in my Future AI Leader programme:
“On Wednesday I used my first script to merge data. Today, a few days later, I’ve written around 15 more scripts with AI for various operations, automatically processed hundreds of documents, and saved weeks of time.”
And there are dozens more like it:
“I built a complete automation with AI integration…”
“I built an application for planning and evaluating production capacity…”
“I moved my vision from ‘I’d like to, but don’t know where to start’ to ‘I have everything ready’…”
Again, none of these are tech people…
I see the same in our own team. Whenever we show someone how we work, they think we’re from a completely different planet.
One of my colleagues was showing a customer from Sweden how we have our AI tools set up and how we use agents at work. Even though she had previously spoken with six other companies, when she saw ours, she could only manage one response: “You’re on a completely different level. Like, a completely different level.”
Here’s a concrete example. When I needed to gather feedback from colleagues on a new project, I asked my AI agent to create a voice assistant that would call each person and have a conversation about it. It took me three simple prompts. Three. The first two to build the voice bot, the third to analyse the conversations.
But that’s not all. When I shared it in the team chat, a colleague picked it up and built her own voice agent in exactly the same way:
“It calls the client after a proposal is sent. The call lasts 3–5 minutes. It speaks naturally, doesn’t sell. After every call I get an email with a summary, what the client said, what their vibe is, what they need.”
And I’m seeing examples like this more and more…
How about you? Are you dealing with things like this? Are you seeing examples like these around you?
If yes, great. You know what’s possible, and you know how AI will affect your work and how to prepare. If not, you have a problem. Because this is just the tip of the iceberg, and you have no idea what’s coming.
Parallel worlds of AI tools
Matt Shumer describes the gap between tools very well in his article Something Big Is Happening, which was seen by more than 80 million (!) people in just a few days:
“Today’s models are unrecognisable compared to what existed six months ago. The debate about whether AI is actually improving or has hit a wall is over. Anyone still claiming otherwise either hasn’t used the current models, has a reason to downplay the situation, or is drawing on 2024 experience that no longer applies. The gap between public perception and reality is enormous, and dangerous, because it prevents people from preparing.”
Just think about how the capabilities of these tools have shifted over the past few years, because this is the reality that’s hard to believe if you’re not watching it closely:
In 2023, AI made mistakes doing simple arithmetic. In 2024, it could write basic software. In 2025, top developers say AI produces most of their code.
And now, at the start of 2026, a new generation of models has arrived that makes everything before them feel like the technological dark ages. Incidentally, the creators of these models are already using previous generations of AI heavily in their development. Artificial intelligence is helping create even better artificial intelligence. That’s part of why the pace of development is accelerating the way it is.
I’m talking about Claude Opus 4.6, for example. A model so capable that I personally use nothing else. After the first week of working with it, it cost me hundreds of dollars, but I wouldn’t trade it. Through it I’m solving incredibly interesting problems, it’s exceptionally sharp at any kind of data analysis, and above all, it codes whatever I can dream up.
The differences aren’t just in the models. They’re in the AI tools people use at work. Some still only allow chat in a browser. Others are connected to your files and folders, can work independently, and will happily work for an hour straight to do something that would normally take you a full day.
Everyone on our team, including our assistants, uses Cursor. I personally use almost nothing else. When I needed to fill out a multi-page AML document for a real estate agency, I asked Cursor to do it. It found all the necessary information on my computer on its own, looked up the rest online, created a fully completed document, and saved it as a PDF. Without a single manual input.
Do you have something that can do that? Or are you still typing prompts into ChatGPT, copying the output into an email, and calling that “working with AI”?
If you want to see what the parallel world of top AI applications looks like, you have to play with the best ones. There’s simply no other way. And don’t limit yourself to what your employer provides. Seriously. This is your future. Pay for the best ones available, at least for a few months, and learn to work with them as well as you possibly can.
Even if it’s just for personal projects, or to learn something new. That experience is irreplaceable, and it could set you apart in your next interview for your dream job.
Parallel worlds of skills
One of my most important predictions is the extreme importance of the human being. Because only people with the right skills can extract real value from AI.
I could go on with more examples from our programmes:
“I built an automation to fill in budget spreadsheets, saving one full day per month. A recurring task I dreaded every month is now handled automatically.”
“I created 80 presentation slides plus 40 pages of documentation in 3 hours. A colleague estimated the work at a minimum of 2+ days. And 100% it wouldn’t have been as good, either in content or visually.”
I think you’re beginning to get a picture of what the parallel world of people who know how to work with AI looks like. The impact of these differences on the job market is already becoming visible today.
These people will have an enormous advantage and a wealth of opportunities. But they’ll also be competing against you in your next hiring process. And it’s not just a feeling: data confirms we’re entering what’s called a K-shaped economy, one that’s splitting into two trajectories. On one side: top performers who are several times more productive thanks to AI. On the other: everyone else.
According to the PwC Global AI Jobs Barometer, people with AI skills earn 56% more than people in the same role without them. And this wage premium doubled in just one year. Industries that are adopting AI are showing three times the revenue growth per employee. Yet many workers still haven’t undergone any serious AI training that would allow them to truly harness these tools (and I don’t mean a one-off session on ChatGPT or Copilot).
Goldman Sachs estimates that by 2028, AI will affect 300 million jobs globally. I don’t want to scare anyone, but that’s 24 months away…
Fortunately, everyone has the ability to change their trajectory. But the reality is that this gap widens every few months. And in a few years, it may no longer be possible to close it.
The good news is that the ability to work with AI can be learned. Like any other skill. You just have to make it a priority and give it time. The key isn’t one or two training sessions. It’s genuine learning through practice (learning by doing). This is confirmed by data from the Future AI Leader programme. The majority of participants spend many hours on AI beyond their regular working obligations. Not because they have to. But because they understood what’s at stake.
The second piece of good news is that working with AI is not built on technical skill. It’s far more inclusive than other technologies have been. Based on our experience, the ability to leverage AI rests on four pillars:
35% AI-first mindset
30% Ability to work with AI tools
25% Application in your own work
10% Technical component
See that? Only 10% is technical. The most important thing is mindset. Every top performer in our programmes names the mindset shift first. A person with an AI-first mindset and zero technical skills will learn everything. A person with perfect skills and the attitude of “I don’t have time for this” will get nothing out of it.
Add what’s called “high agency,” proactivity, taking ownership, and the ability to keep learning, and there’s a strong chance that within just a few months, you too could be among those living in the parallel world.
Parallel worlds at work
I fully agree with Anthropic CEO Dario Amodei, who stated that AI will eliminate 50% of junior office positions within one to five years. Personally, I think in the domain of knowledge work, it could be far more. There will be many positive consequences too, but certainly not for everyone.
We’re already seeing the early signs. Demand for AI skills is beginning to reshape the job market. Companies are hiring fewer new people, they’re letting people go, teams working with AI can be significantly smaller, and companies are looking for different people, people who know how to work with AI.
It will take a while before this shows up in economic data. But if you’re watching what’s happening, you can see it in advance, right now.
A telling example is the development of the new Claude Cowork tool. Anthropic built the entire product with the help of AI in ten days. A Google employee commented that they spend ten hours a day in meetings debating whether they should build something similar too.
Andrew Ng, one of the most prominent figures in AI, recently described what he’s seeing firsthand in Silicon Valley. Think of it as a preview of a near future that will soon reach everyone else:
“Companies aren’t replacing people with AI, they’re replacing people without AI with people who have AI. Marketers, recruiters, and analysts who can also code with AI are more productive than those who can’t. And companies are slowly parting ways with those who can’t adapt. Entire teams are also shrinking. A project that used to require 8 engineers and 1 product manager can now be handled by 2 engineers and 1 PM, or even by one person who combines both roles.”
So what will the parallel worlds at work actually look like?
World 1: They coordinate AI agents. They’re “gardeners” who build systems where the work works for them.
World 2: They do every task manually from start to finish. They’re “sculptors” who do everything themselves.
World 1: They document their know-how, processes, and thinking. AI knows them and delivers better outputs over time.
World 2: They carry everything in their heads. Every prompt starts from scratch; outputs are generic.
World 1: From idea to prototype in minutes. Fast materialisation, fast feedback, fast improvement.
World 2: From idea to execution in weeks. Meetings, approvals, waiting. A lot of things stay inside their heads.
World 1: The marketer runs their own ads, the salesperson their own reporting, the teacher builds their own test app.
World 2: When they need something technical, they send a request to IT and wait. And wait.
World 1: Proposal to clients 5 minutes after a meeting. An interactive web page instead of a PowerPoint. Better services.
World 2: Proposal in days. PowerPoints. Emails with attachments. Exactly like 10 years ago.
World 1: Ownership, high agency, they know “what good looks like.”
World 2: They wait for instructions, for training, for the company to introduce AI.
I’m not saying one world is better than the other. Different things suit different people. Some want to stay in control; some are content with what they do today; some will decide not to work with AI at all. And that’s perfectly fine.
But when making that decision, it’s worth asking:
Will I still be able to deliver value to clients and employers?
Will I help my colleagues, or mostly slow them down?
Will I be relevant in my field in the job market next year?
What you could do starting today
Matt Shumer puts it exactly right:
“This may be the most important year of your career. The person who walks into a meeting and says ‘I did this analysis with AI in an hour instead of three days’ will be the most valuable person in the room. Not sometime in the future. Right now. Learn these tools. Show what’s possible. This window will close soon, once everyone gets it, the advantage disappears.”
If you decide to discover the parallel world where you’ll truly live in symbiosis with AI, here are a few ways to get started:
Create your AI vision. Imagine yourself three to five years from now, when AI will be doing most of what you do today, but far better. What will you be doing? How will you use AI to get there? What do you need to learn to reach that future? This vision can become your north star, the compass by which you make decisions about your learning, your career, and your daily work.
Know your strengths. Take a talent assessment, for yourself and for your children. And look for ways to improve most in what you’re already good at. The more expert you become in your field, the greater the value you’ll deliver (with AI’s help). And the more you’ll enjoy working with it.
Join a community. Go to external events, meetups, conferences. Especially if you don’t have a group of enthusiasts within your own company. You need to see what’s real for others, what’s possible, and also that everyone is dealing with the same things you are. That nobody has all the answers. And that the journey is easier when you don’t go it alone.
Invest in quality training. And be wary of the low-quality kind. Prioritise training that draws on real practice, not consultants presenting pre-packaged slides or demo examples with no connection to reality. There’s no shortage of no-name courses today giving genuinely harmful advice. Be demanding about who you learn from.
But the single most important step is to start taking AI seriously. Pay for Claude, Gemini, or ChatGPT, the full version, and use the best available models. Try one of the vibe coding tools (Macaly, Replit, Lovable, Cursor) and build something in it, from start to finish. Don’t just focus on small tasks, go after the big problems and projects you’ve been carrying around in your head.
Because that’s the best thing about AI, the chance to finally bring to life the things we’ve been dreaming about for a long time.
FD