More capable leaders?
What "capable" actually means in a world where software does the compliant, efficient work.
When companies talk about developing capable people, they usually mean three things: compliance (follow the rules), efficiency (do it faster), and performance (hit the numbers). These are reasonable goals if the job is routine. They're the wrong goals if the job requires judgement.
Software is getting very good at compliance and efficiency. The skills that are hard to automate are exactly the ones that get squeezed out by training programmes built around checklists and behavioural scripts.
What we actually need are people who are intuitive and independent, with the creativity to solve new problems. As David Lynch put it: "Intuition is the number one human tool."
The problem with "emotional scoring"
There's a growing category of tools that use AI to assess how people communicate: dashboards that score tone, flag affect, predict engagement. The promise is objectivity. The reality is something older.
The idea that gathering enough data about a person lets you predict their behaviour has appealed to authoritarians for decades. It keeps not working, for a fundamental reason. Stephen Wolfram demonstrated it in the context of computational systems: for many complex processes, there is no shortcut to predicting future behaviour other than running the whole process and seeing what happens. You can't compress a person into a score and expect the score to tell you much.
In practice, these tools create three problems. First, people who know they're being graded by an algorithm stop being themselves and start performing for it. The signal becomes noise the moment people learn what it's measuring. Second, the underlying models encode narrow ideas of what a "good" communicator looks and sounds like. If your natural style doesn't match that mould, the system marks you down regardless of how effective you actually are. Third, surveillance in organisations runs one way. Leaders get data on their teams; teams don't get data on leaders. That creates fear, not growth. And this is to say nothing of the corrosive effect of surveillance and lost privacy on trust and psychological safety.
A flight simulator for difficult conversations
The more useful question is what AI can do that a classroom can't. And the answer is: give people a safe place to practise the situations they actually find difficult.
Think of it as a flight simulator for difficult conversations. You can generate realistic scenes — a tense one-to-one, a room where someone is about to derail the meeting, a negotiation going sideways — and use them as material to stop, analyse, and discuss. Why is that person reacting that way? What's actually being said underneath the words? What would you do here?
You can run the conversation yourself, unscripted, and get feedback on what worked. You can repeat it. You can try a different approach. The AI has infinite patience for repetition in a way that a colleague asked to roleplay for the fifteenth time does not.
This is a very different use of AI from surveillance. Instead of watching people and generating scores, it gives people a space to experiment, fail without consequence, and build instincts through genuine practice.
Authenticity is the point
Most corporate training asks people to put on a mask: to perform active listening, to demonstrate empathy in the approved way, to hit the behavioural markers on the rubric. People can tell when it's a performance, and so can the people they're leading.
The leaders who actually move people aren't working from checklists. They're drawing on real experience, real emotion, real instinct. You can't train that by scoring it. You train it by giving people enough practice that they stop thinking about the technique and start trusting their own judgement.
That's what we're trying to build. If you want to talk about what it could look like for your organisation, get in touch.