The window to define how AI will operate in your organization is closing

Thanos Papadimitriou

Co-founder & President at Moveo.AI



In an episode of my SKAI podcast, I asked Vasilis Vassalos, a computer science professor and Stanford alumnus who worked alongside the founders of Google during their Stanford days, about a letter signed by names like Steve Wozniak and Elon Musk. The letter, published in March 2023 by the Future of Life Institute, called for a six-month pause on training AI systems more powerful than GPT-4. What happened with it?

Vassalos's answer was direct: nothing happened. The titans of technology simply ignored it.

The reasons he gave say something important about where we are today, and about what this means for any organization that is already using, or planning to use, AI agents in its operations.

The analogy that stayed with me

Vassalos used the automobile as a reference. First, cars were built. Then people died in accidents. Then came traffic lights, pedestrian crossings, and road safety laws. Regulation arrived after the problem, but at a pace compatible with human life. Societies had decades to adapt.

With AI, that margin has collapsed.

Changes that once took generations now happen in years. And institutions, whether regulatory, corporate, or governance-related, are not designed to respond at that speed. The gap between what the technology can already do and what decision-making structures can process has never been wider.

The problem Vassalos described at the level of global regulation exists at the level of every organization too. And it has a direct consequence for anyone making decisions about AI in high-volume operations.

Why self-regulation is not working

The Wozniak and Musk letter failed for two reasons, according to Vassalos. First, structural: national interests rarely allow international coordination on problems of a global scale. The climate crisis is the most visible example. AI is becoming the second.

The second reason is more revealing. A very large segment of society believes technology companies should self-regulate, that the market will find the right balance without external intervention. The letter was ignored by the titans of technology precisely because that argument is convenient for them.

The problem, as Vassalos put it, is that self-regulation simply did not happen. And six months after the letter, nothing had changed.

The same pattern repeats inside organizations. Teams deploy AI agents while waiting for someone outside to define the rules before acting internally. The regulator will set the limits. The vendor will ensure compliance. The industry will create a standard.

Meanwhile, configuration decisions accumulate. And each one is, in practice, a choice about how AI will operate, which actions it will take autonomously, and what it will escalate to a human. Not defining those limits consciously does not mean they do not exist. It means they were defined by default.
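These defaults can be made explicit. As a purely illustrative sketch (the class, action names, and threshold below are hypothetical, not Moveo.AI's implementation), a written-down agent policy that separates autonomous actions from human escalation might look like:

```python
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    """Hypothetical governance policy for an AI agent: which actions
    run autonomously, which always go to a human, and a monetary
    threshold above which even allowed actions are escalated."""
    autonomous: set = field(
        default_factory=lambda: {"answer_faq", "check_order_status"}
    )
    always_escalate: set = field(
        default_factory=lambda: {"issue_refund", "close_account"}
    )
    amount_limit_eur: float = 50.0  # illustrative threshold

    def decide(self, action: str, amount: float = 0.0) -> str:
        """Return 'execute' or 'escalate' for a proposed agent action."""
        if action in self.always_escalate:
            return "escalate"
        if action in self.autonomous and amount <= self.amount_limit_eur:
            return "execute"
        # Default-deny: anything not explicitly allowed goes to a human.
        return "escalate"

policy = AgentPolicy()
print(policy.decide("check_order_status"))  # execute
print(policy.decide("issue_refund", 20.0))  # escalate
print(policy.decide("unknown_action"))      # escalate
```

The design choice that matters here is the default-deny branch: an action no one has classified is escalated, not executed, so an unreviewed configuration cannot silently become policy.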

What I observe in organizations that defer this decision

As someone who builds AI systems for enterprise operations at Moveo.AI, I see three consequences repeat themselves in organizations that postpone this decision.

1- Configuration decisions become policy without being treated as such

A parameter adjusted to solve a specific problem becomes the operational standard. Months later, no one knows exactly why the agent behaves a certain way, and changing it requires disproportionate effort.

2- Compliance risks appear in results, not in approval meetings

In regulated industries, the difference between an autonomous agent action inside policy and outside it can be invisible until an auditor or a customer surfaces it. At that point, the cost of correction is far greater than prevention would have been.

3- The competitive window closes

Organizations that define AI governance before being forced to do so gain the flexibility to expand agent use with more confidence and less rework. Those that wait for external regulation start that conversation already in a reactive position.

This moment will not last

Vassalos put the probability that superintelligence becomes a reality within the next ten to fifteen years at more than 50%. Regardless of what one thinks about that horizon, what is clear is that the next wave of capabilities will arrive before most organizations have defined how to handle the current one.

This is the window. The moment between deploying the first agents and the arrival of the next generation of systems is the most suitable for defining internally how AI will operate, with what limits, and who reviews those decisions over time.

Waiting for external regulation is not a neutral strategy. It is a decision to defer a choice that is already being made, by default, every day.

Define before being required to

Organizations that establish internally how AI will operate before being forced to do so will have more freedom, not less, when regulation finally arrives.

Because they will enter that conversation with decisions already made, reviewed, and documented, instead of needing to reconstruct the logic behind choices no one remembers making.

The window is open. For how long, no one knows.

Thanos Papadimitriou teaches entrepreneurship at NYU Stern in New York and supply chain management at SDA Bocconi in Mumbai. He is one of the co-founders of technology startup Moveo.AI.