The productivity conversation AI founders are not having

Thanos Papadimitriou

Co-founder & President at Moveo.AI

🏆 Leadership Insights

In every conversation I have about AI and work, there comes a moment when someone says: "But job losses are inevitable." And that phrase ends the discussion before it starts.

In a recent episode of my SKAI podcast, I spoke with Vasilis Vassalos, a computer science professor and Stanford alumnus, about the social and economic implications of superintelligence.

What became clearest to me after that conversation was this: the inevitability narrative is not a technical analysis. It is a rhetorical decision with political consequences. And AI founders who repeat it are being convenient, not honest.

The lesson history already taught us

Vassalos used a precise example. The productivity gains of the early twentieth century could have led only to wealth concentration and even longer working hours. They did not. The five-day work week and the eight-hour workday were not natural consequences of industrialization. They were the result of collective choices: labor organizing, legislation, and political decisions.

The same logic applies now. What AI will do with the human time it frees is not determined by the technology. It is determined by who makes decisions about how that technology is implemented and how its gains are distributed.

That seems obvious when stated plainly. But the dominant narrative moves in the opposite direction, treating the outcome as given and removing from view exactly the choices that can still be made.

What operations automation actually does

Building AI systems for enterprise operations at Moveo.AI, I frequently see what happens when an agent takes over high-volume tasks in customer service, accounts receivable, or collections. The agent handles hundreds of interactions that previously occupied the time of skilled professionals. That time becomes available.

What happens with that time depends entirely on how the organization chooses to use it. It is not automatic. It is a management decision that needs to be made consciously, before implementation, not after.

The functions that remain human

Vassalos was direct about this in the episode. Activities where emotion, care, and human presence play a central role are resilient to automation: hospital professions, home care, and artistic creation with live human participation.

The professional who handled two hundred collection calls a day has the capacity for higher-complexity, higher-value conversations that an agent cannot conduct. That is a reorganization of work, not an elimination.

But that reorganization does not happen on its own. It happens when companies design work that way, when there is training, when there is intention.

What is genuinely at risk

Vassalos was also honest about what does not hold up. Middle administrative layers, data processing, text production, and project coordination are genuinely exposed. There is no easy comfort here. What remains is a choice about what to do with that exposure: let it become unemployment, or convert it into something different.

That choice does not belong to the machines. It belongs to those who operate them, those who buy them, and the societies that decide how to regulate the outcome.

What founders have a responsibility to say

Founders who build automation tools and repeat the inevitability narrative are doing two things simultaneously. First, they remove accountability from those who decide how deployment is designed and how gains are distributed. Second, they discourage any collective conversation about alternatives.

Vassalos put it directly in the episode: whoever tries to convince us that AI's consequences for work are inevitable is, in practice, saying we are powerless to determine that path. I agree. And I would add: founders who use that argument are being convenient, not realistic.

What we can say honestly is that productivity gains are real. The human time freed by automation can go to more valuable places. But that depends on how deployment is designed, on what organizations decide to do with that time, and on the regulatory environment societies choose to create.

The window to make this choice

The same window that exists for defining how AI will operate internally also exists for defining how its gains will be distributed. Organizations that make these decisions before being pressured by regulation or labor market shifts will have more freedom, not less, when those pressures arrive.

And they will arrive. The speed at which technology advances makes the conversation more urgent, not less necessary. Treating the outcome as inevitable is a way of not participating in the decision. Founders who do that are choosing a position. They are just not being transparent about which one.

Thanos Papadimitriou teaches entrepreneurship at NYU Stern in New York and supply chain management at SDA Bocconi in Mumbai. He is one of the co-founders of technology startup Moveo.AI.