Are You Still Actually in Charge?
The case for human sovereignty in an age when AI is doing more of your job than you might realise.
Are you still actually in charge? Not of your company. Not of your team. Of your own thinking, your own judgment, your own sense of what you are here to do.
Because something is shifting, and most of the leaders I speak with feel it before they can name it. The tools are different. The pace is different. Decisions that used to require deep expertise now get drafted by a machine in seconds. And somewhere in the middle of all the efficiency and the automation, a quiet question has started to surface:
Who is actually making the calls around here?
I’ve been sitting with this question for some time. It was crystallised for me recently by a post from an AI insider who described, without drama, how his entire working week had been transformed, how he now describes a project to an AI, walks away, and returns hours later to find it finished, refined, and ready. Not a rough draft. The completed thing. He wrote the post, he said, because the gap between what he’d been telling people and what was actually happening had become too large to stay quiet about. Reading it, I felt the same way about a different gap: the space between how we talk about AI adoption in leadership — faster, smarter, more efficient — and what we almost never discuss, which is what happens to the human in the middle of all of it.
That is what I want to talk about here.
The Risk Nobody Is Talking About
The dominant leadership conversation about AI right now is a productivity conversation. How do we do more, faster, with less? How do we stay competitive? How do we adopt before we fall behind?
These are legitimate questions. But they are not the most important ones.
The more important question is this: as you hand more to the machine, are you remaining the author of your decisions, or are you becoming a mere approver of outputs you didn’t fully examine, produced by systems you don’t fully understand?
This happens gradually. Invisibly. Your team uses AI to summarise a complex report, and the summary becomes the basis for a board decision without anyone reading the original. A model pre-filters job candidates before a human sees a single resume. A forecast built by an algorithm is presented in a meeting, and nobody in the room can fully explain its assumptions. The AI recommends. The human, pressed for time, trusting the output, nods it through.
There is a meaningful difference between delegating a task and delegating your judgment. The first is smart leadership. The second is an abdication so gradual you may not notice it until something goes wrong in a way that can’t be undone.
The most dangerous AI risk for most leaders isn’t rogue technology. It’s the slow, comfortable erosion of human judgment dressed up as efficiency.
This is the sovereignty imperative: the active, deliberate choice to remain in charge of what matters most — your thinking, your decisions, your identity as a leader, even as AI becomes more capable and more present in every part of your work.
It is not an anti-AI position. I use AI daily. I believe it is one of the most powerful tools available to leaders right now. But a tool is only as good as the judgment of the person using it. And that judgment, your judgment, is what’s quietly at stake.
A Framework for Staying Sovereign
Over the past month, I’ve been developing a framework for what human sovereignty actually looks like for leaders who want to harness AI fully without losing themselves in the process. It comes down to four disciplines, each of which can be applied starting this week.
1. Audit your decision chain.
Once a quarter, map the ten decisions that most significantly affect your organisation. For each one, ask a single honest question: Is a human with real context and real authority still making the actual call, or has that call effectively been made by an AI model before it ever reached a person?
If the chain of reasoning disappears into a black box, that is your signal. Not to abandon the tool, but to redesign the process to strengthen human accountability.
2. Protect your cognitive edge.
AI is extraordinarily good at producing fluent, confident-sounding outputs. The risk, over time, is that leaders who rely on those outputs without interrogating them begin to lose the very capacities that made them effective: the ability to sit with an ambiguous problem, to spot the flaw in a compelling argument, to trust a well-developed instinct.
The discipline is simple, and it is counter-intuitive. Once a week, solve a meaningful problem without reaching for AI first. Not as a productivity exercise. As a practice of maintaining the cognitive muscle that no tool can replace. If that feels difficult, that difficulty is the point.
3. Invest in what AI cannot replicate.
The leaders who will have the most durable advantage over the next decade are not the ones who best mimic what AI can do. They are the ones who most fully develop what AI cannot, such as embodied judgment built over years of experience, genuine relationships grounded in trust, the moral clarity to make the right call when the data is ambiguous, and the physical and mental vitality to sustain performance over the long term.
This means treating your health, your relationships, your sense of purpose, and your continued learning as strategic assets. They are the foundation from which sovereign leadership is possible.
4. Know what you stand for and why.
As AI takes over more of the tasks that once gave professionals their identity and sense of competence, the question of meaning becomes urgent. Leaders who have a clear, examined answer to why they lead, what they are building, what they uniquely bring, what kind of person they are becoming in the process, will navigate this transition from a very different place than those who don’t.
In the age of AI, your purpose is a navigational system. When the tools change around you, and they will keep changing faster than any of us expect, it is the only thing that keeps you oriented toward what actually matters.
The Leaders Who Will Come Out Ahead
I want to be direct about something. The leaders who will thrive in the next decade are not the ones who adopted AI fastest, nor are they the ones who resisted it longest. They are the ones who stayed awake through the transition, who used AI with clear intention, who continued to develop their human capacities with the same seriousness they applied to their technology stack, and who never outsourced the most important question of leadership:
What do I bring to this that no tool, however capable, can bring for me?
That question is worth sitting with. Not once, but regularly. Because the pace of change is not slowing down, and the window for answering it on your own terms is right now.
The world is not asking you to become less human to lead well in the age of AI. It is asking you to become more deliberately, more consciously, more fully human than you perhaps needed to be before.
That is the sovereignty imperative. And it starts with a choice you can make today.
If this sparked something, forward it to a friend or colleague who’s thinking about the future of their life and work in the AI age. The conversation is worth having now.