The alignment problem your organization already has and hasn't named yet

Thanos Papadimitriou
Co-founder & President at Moveo.AI

At AI conferences, alignment almost always appears as a tomorrow problem: something to be resolved when systems become sufficiently advanced and the stakes get high enough to demand real attention.
Those who, like me, move between research and building AI products see this problem surface much more immediately. Alignment is already here. It just arrived disguised as something far more mundane: a configuration decision.
What academia calls alignment, operations calls configuration
In the podcast I produced for SKAI, I spoke with Vasilis Vassalos, a computer science professor and Stanford alumnus who spent years alongside the founders of Google when the company was still running out of university servers.
In episodes 5 and 6, we discussed alignment in its broadest sense: how to ensure an AI system adopts the right values as it becomes more autonomous.
Vassalos identified the core of the problem: the values we program into AI systems have no neurophysiological basis, the way parts of human morality do. They need to be chosen and encoded by whoever builds the system. And the criteria for that choice are among the most important open questions in AI.
That is the academic definition. In practice, the same decision appears under a different name.
When a team decides that an agent can offer a concession without human approval, it is making an alignment decision. When it configures the system so that certain situations are never resolved automatically, it is doing the same thing. The difference is that nobody usually calls it that.
The question most teams have not answered
Every AI agent deployment involves, consciously or not, answering a fundamental question: which actions does the agent take autonomously, and which ones need a human in the loop?
Most teams answer this question reactively. Something goes wrong, and the rule is adjusted. A customer complains, and the script is revised. A risk shows up in the financials, and the policy is tightened. The problem with this approach is that it defines the boundary of autonomy through failure, not through design.
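To make the contrast concrete, here is a minimal sketch, in Python, of what a boundary defined by design rather than by failure might look like. The action names, autonomy categories, and the POLICY structure are hypothetical illustrations, not any particular platform's API; the point is that every action class gets an explicit, reviewable decision, and anything unclassified escalates by default.

```python
from dataclasses import dataclass
from enum import Enum


class Autonomy(Enum):
    AUTONOMOUS = "autonomous"            # agent acts without review
    HUMAN_APPROVAL = "human_approval"    # agent proposes, a person decides
    NEVER_AUTOMATED = "never_automated"  # always routed to a human


@dataclass(frozen=True)
class ActionPolicy:
    action: str
    autonomy: Autonomy
    rationale: str  # why the boundary sits here, so the next review has context


# Hypothetical boundary for a customer-service agent; each entry is a
# deliberate alignment decision, not a patch made after an incident.
POLICY = {
    p.action: p
    for p in [
        ActionPolicy("answer_order_status", Autonomy.AUTONOMOUS,
                     "Low risk, fully covered by existing data"),
        ActionPolicy("offer_discount_under_limit", Autonomy.HUMAN_APPROVAL,
                     "Financial impact; a person confirms the concession"),
        ActionPolicy("handle_regulatory_complaint", Autonomy.NEVER_AUTOMATED,
                     "Compliance must own the resolution"),
    ]
}


def is_allowed_autonomously(action: str) -> bool:
    """Default to escalation when an action was never classified."""
    policy = POLICY.get(action)
    return policy is not None and policy.autonomy is Autonomy.AUTONOMOUS
```

The specific entries matter less than the fact that they exist before launch, with a recorded rationale, instead of emerging one incident at a time.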
As I explored when analyzing how traditional companies are adopting AI to survive competitive pressure, the pace of adoption rarely matches the maturity of the operational framework. Organizations arrive at AI in a hurry and define autonomy along the way.
What happens when the boundary is not defined
Three scenarios repeat themselves in the operations I observe.
The agent makes a concession it should not have. An exception outside policy, a promise the company cannot fulfill, a discount applied in a context that would have changed the decision entirely if a human had been in the room. The cost shows up in the results, not in the conversation with the customer.
The agent escalates a case it could have resolved on its own. The customer waits longer, the human team receives unnecessary volume, the experience deteriorates. The problem here is the inverse: insufficient autonomy where there should be more.
The agent resolves a case that required supervision. In regulated industries, this is the highest-risk scenario. A decision that should have gone through compliance, a situation that required a different approach, an exception that needed approval and did not get it.
In all three cases, the problem is not the agent's capability. It is the absence of deliberate design around where autonomy begins and where it ends.
Not sure where your operation stands on these three scenarios?
Take the Readiness Assessment →
Alignment is a cycle, not a checkbox
The operational alignment decision does not end on launch day. It needs to be revisited regularly because three things change: customer behavior, organizational policies, and the agent's own behavior as it learns from new data.
This connects directly to something I analyzed in another context: AI models depend on continuous updating to avoid drifting from what is expected of them. The operational equivalent is the same. An agent configured in one context may have response patterns that no longer make sense a few quarters later. The autonomy boundary that worked well at one volume of interactions may need revision when that volume grows.
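One way to operationalize that cycle, sketched here with made-up numbers: track the share of interactions the agent resolves autonomously and flag the boundary for review when it drifts from the level observed at launch. The function name, the four-week window, and the ten-point tolerance are all assumptions chosen for illustration, not recommended values.

```python
from statistics import mean


def flag_boundary_for_review(
    weekly_autonomous_share: list[float],
    baseline_share: float,
    tolerance: float = 0.10,
) -> bool:
    """Flag the autonomy boundary for human review when the share of
    interactions resolved autonomously drifts beyond a tolerance band
    around the level measured at launch. Thresholds are illustrative."""
    recent = mean(weekly_autonomous_share[-4:])  # average of the last four weeks
    return abs(recent - baseline_share) > tolerance


# Example: launched at 60% autonomous resolution; recent weeks trend upward.
if flag_boundary_for_review([0.61, 0.66, 0.72, 0.75], baseline_share=0.60):
    print("Autonomy boundary drifted; schedule a policy review.")
```

A check like this does not decide where the boundary should sit; it only guarantees that someone looks at it again before the drift shows up in the numbers.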
Organizations that treat alignment as an initial configuration will discover the problem when it shows up in the numbers. Those that treat it as a continuous process will have the chance to correct course before that.
The decision you are already making
What became clearest to me after the conversation with Vassalos on SKAI was this: the risk of alignment does not only lie in systems that become too autonomous. It lies in organizations that delegate to the system a decision that should be human, simply because no one named it as such.
Every organization that has already put an AI agent into production has already chosen where to draw that boundary.
The question is whether that choice was made by design or by default, and whether a process exists to revisit it before the results show it was wrong.
Thanos Papadimitriou teaches entrepreneurship at NYU Stern in New York and supply chain management at SDA Bocconi in Mumbai. He is one of the co-founders of technology startup Moveo.AI.