In a nutshell
Given our decade-long experience using AI as part of how we serve clients,[1] we’ve been getting questions about the nature of AI and how it is likely to impact us. So we’ve written a trilogy of articles to explain the why, the how, and what to do about it as a CEO.
AI’s main impact won’t be a revolutionary new economic system. It will be a shift in power and wealth towards those who control AI resources, higher prices for everyone else, and, if nothing is done to prevent it, grave social problems (especially for developing countries).
Is AI cheaper than having employees?
Most talk about the impact of AI makes the unexamined assumption that, yes, AI is cheaper than a corresponding employee.
Given the cost of a subscription to one of today’s main AIs[2] relative to what it is capable of, this seems to be a no-brainer. Research has indeed shown that in areas such as software development, long considered among the safest for human intelligence, AI can increase productivity by at least 20%.[3] Companies such as Microsoft have already begun redundancies, very likely as a result.[4]
However, today’s pricing does not reflect the true cost of training and running AI, which is presumably much higher. It remains an open question whether, if priced to cover the true cost, AI would remain so attractive. To get an idea of how much spending is involved, consider that as of this writing, OpenAI alone has already raised $59 billion.[5]
Given the secrecy with which these companies surround their financials, only reporting selected figures intended to make themselves look good, it is difficult to assess the real situation.
In the rest of this article, we will assume that, yes, AI is more cost-effective than employees in many use cases. This is an assumption! It is very likely true for some jobs, more questionably so for others, and perhaps not as often true as we nowadays tend to believe.
The interested reader can find some information on the little that we know about OpenAI’s financials here.
Will AI develop to reach true intelligence / the “singularity”?
In The Limits of AI, the first article of this series, we demonstrated that the development of AI runs into hard limits from mathematics, from the philosophy of mind, and from technical constraints. AI cannot reach true intelligence, not even in theory, not even assuming unlimited technological development.
Furthermore, within the limits of what AI can achieve, there is an additional practical constraint: once past the initial breakthroughs, further development becomes exponentially more expensive, and the development of AI will rapidly outstrip the ability of its human developers to maintain its quality.
So, we’re safe from an AI takeover?
No. That’s not to say that it will happen. But an AI takeover is possible, even if AI never achieves true intelligence.
Indeed, it is the easiest thing in the world to program a goal into any algorithm, a fortiori into an AI, and to program a feedback loop that causes it to seek to achieve this goal.
Given the complexity of AI and the profusion of different types of training that AIs undergo, it is also quite easy to unintentionally embed a goal which the designers never intended.
AI does not need to be intelligent in order to seek to control us.
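To make this concrete, here is a minimal, hypothetical sketch (my own illustration, not from any real AI system) of how trivially a goal plus a feedback loop can be wired into an algorithm. No intelligence is involved, only a target and corrective feedback:

```python
# A toy "goal-seeking" loop: the program holds a target state and
# repeatedly adjusts itself to shrink the gap to that target.
# No understanding is involved -- only a goal and feedback.

def seek_goal(target: float, state: float = 0.0, steps: int = 100) -> float:
    for _ in range(steps):
        error = target - state   # feedback: how far are we from the goal?
        state += 0.5 * error     # act to reduce the error
    return state

# The loop converges on the goal without any notion of *why* it does so.
final = seek_goal(target=42.0)
```

The point of the sketch: seeking behavior requires only a goal and a correction mechanism, which is exactly why an unintended goal embedded during training is dangerous even without intelligence.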
But if it isn’t intelligent, wouldn’t we be able to outsmart it? As a result, wouldn’t it be unable to control us?
Not necessarily. For instance, not if it seeks to dumb down the environment sufficiently that true intelligence is not needed to dominate.
For instance, AIs that have not achieved intelligence are already capable of beating us at games, once properly trained to do so. Thus, if an AI managed to reduce our options to the complexity level of a game, even a very complex game with infinite possibilities, this would be sufficient for it to dominate us. History offers many dictators of truly impressive stupidity who succeeded in doing just that.
Will AI primarily impact white collar jobs?
As the same technological developments that have enabled LLM-AIs are being transferred over to robotics, it seems unlikely, given what we know today, that only white-collar jobs will be impacted.
In Article 1 of this series, we saw why the development of AI will outstrip our ability to maintain quality.
Consequently, the jobs most likely to be impacted are:
- Jobs which today are already being impacted, such as software development.
- Jobs which would already have been impacted if the needed robotic capabilities had existed (e.g. low-end assembly line work).
- Jobs where there is no swift feedback in case of the employee or AI getting it wrong or bullshitting (e.g. management consulting).
- Other jobs, which are hard to predict, since the development of the technology is non-linear.[6] An analysis of which CEO roles / management styles are replaceable by AI and which are not can be found in section Epilogue: What will happen to the CEO’s job of the third article of this trilogy.
Will AI create a new, never-before-seen economic system?
No.
It is clear from the above that AI is likely to make some major changes to current economic structures.
But not enough to create a qualitatively new economic system. Indeed, some (but not all!) foundational economic concepts, such as the law of supply and demand, have been found to hold in even the most extreme scenarios: world wars, genocide, societal collapse, concentration camps, centrally planned economies, economies designed with the express purpose of circumventing supply and demand (e.g. price-capped and subsidized economies), economies where it was considered only fair to charge the rich less than the poor[7], and so on.
Indeed, these concepts are founded on certain facts of life which have always been true in human society, for example that some resources are scarce.[8]
So we can safely assume that the post-AI economy will still be ruled by the law of supply and demand. This should remain true whether or not society itself undergoes dramatic changes such as revolution. And whether or not society collapses.
Wait! Might society collapse?
The reader might be thinking, “What do we care about the preservation of the basic facts of economics? In your last reply, you’re answering a minor question, while acting as though a much more important question doesn’t matter!”
Please bear with me, and allow me to return to this question once we’re a bit further advanced in our analysis. Reason only lights our road one step at a time.
What will happen when (if?) AI takes everyone’s jobs?
The attentive reader will note that we have already demonstrated that AI will not take everyone’s jobs. However, in order to better understand the economic impact of AI, let us pretend that it will in fact take everyone’s job, and see where that leads us.
At first sight, we might think that this results in a scenario of mass hunger. However, we’d be forgetting the law of supply and demand. If demand for human labor collapses, then the price of human labor will adjust until such time as human labor again becomes attractively priced relative to the price of AI. Because, let us not forget, AI does have a cost.
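The adjustment mechanism can be illustrated with a deliberately simplified toy model (my own sketch, with made-up numbers, not a forecast): employers hire humans only when human labor is priced at or below the cost of the AI alternative, so a collapse in demand pushes wages down until they again undercut AI’s non-zero cost:

```python
# Toy model: the wage falls while labor is more expensive than AI,
# and stabilizes once human labor is again competitively priced.
# All numbers are illustrative assumptions, not data.

AI_COST = 10.0  # assumed cost of AI doing the same unit of work

def adjust_wage(wage: float, rounds: int = 50) -> float:
    for _ in range(rounds):
        if wage > AI_COST:
            # Employers buy AI instead; excess labor supply pushes wages down.
            wage *= 0.9
        else:
            # Human labor is again cheaper than AI; the wage stabilizes.
            break
    return wage

# Starting well above the AI price, the wage falls until competitive.
print(adjust_wage(wage=30.0) <= AI_COST)  # → True
```

The model is crude on purpose: its only job is to show that, so long as AI has a cost, the wage has a floor at which human labor becomes attractive again, which is the article’s point.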
But is such an adjustment even possible? Don’t we poor humans, who need to eat and find shelter, require a minimum salary?
This was precisely the argument raised by Karl Marx in order to “prove” that the efficiency gains generated by the machines of the industrial revolution would inevitably result in societal revolution and collapse followed by a dictatorship of the proletariat.
The fallacy of this argument is that it forgets that the adjustment happens across the entire economic system. Thus, the price of food will also go down. The price of shelter will also go down.
This doesn’t mean that people won’t feel a collapse of their standards of living.
Even famines are possible.
Will famines happen because of AI taking people’s jobs?
A famine requires either that a society be unable to produce enough food, or that it be capable of producing enough food but fail to route it to all of the population.
Currently, at the global level, much more food is produced than is actually needed for human consumption.[9] AI will increase, not decrease, this productive capacity. Thus, in the short and medium term, it will not change this situation.[10]
But will everyone have equal access to these food supplies?
Hint: today, already, they don’t.
Which brings us to:
Will AI deepen the chasm between the richest and the poorest?
Another robust economic principle is that, in the absence of the use or the threat of use of force[11], wealth flows to those who control economic resources.
Consequently, AI will result in shifting wealth away from those who control the resource that is labor towards those who control the new resource that is AI.
We can speculate whether those who control this new resource will end up being a very small group, or whether competition will emerge, somewhat democratizing control of AI technologies.
But, in the context of this question, it does not matter. Control of AI will never be democratized to the point where even the poor control it.
Even a technology more easily democratized than the ability to produce AI, such as computer technology, has dug deep additional divides between the poorest and the middle and upper classes in wealthy countries, as well as between developing and developed countries. A technology such as AI can be expected to dig even deeper chasms.
So? Will more people go hungry?
The preceding analysis shows that AI will increase the chasm between the poorest (particularly in the poorest countries) and the well-off.
Consequently, yes, AI will in all probability increase the occurrence of hunger in poorer countries.
How about in the case of the poorer segments of wealthy countries?
From a purely pragmatic perspective, people in power are incentivized to make sure that the social system is either sufficiently just to avoid a revolution, or sufficiently unjust for the disfavored to be too busy trying to get their next meal to be able to worry about regime change.
Consequently, we are left with three possible scenarios.
- Scenario 1: the government does what is necessary to ensure that a minimum level of welfare is maintained for a sufficiently large portion of the population. This level would probably be similar to today’s level (because this is the level that balances the political / popularity cost to politicians versus the cost to entrenched interests).
- Scenario 2: the government, on the contrary, seeks to plunge the population into such misery that it would be unable to rise up in revolution.
- Scenario 3: the people in power realize the importance of choosing one or the other of the previous alternatives; however the system of governance is so dysfunctional that the government would not be able to implement any coherent policy.
We can immediately eliminate Scenario 2, since we are talking about wealthy countries. Scenario 3 is possible, but is not the most likely scenario. After all, we are talking about countries which, although they do have their (sometimes severe!) governance problems, have, by and large, a functioning system of governance.
Thus, Scenario 1 is the most likely scenario.
Thus, the most likely scenario is either that the general adjustment of the economy will be sufficiently efficient to avoid an increase in hunger, or that the government will step in with subsidies in order to maintain a minimum level of welfare.
Might society collapse?
We can now return to this question that we were unable to answer previously.
Based on the previous analysis, if the AI revolution is sufficiently severe in terms of job losses, it absolutely can result in societal collapse in the poorer countries. In wealthy countries, however, this is unlikely (see answer to the previous question for why).
In my personal view, especially for wealthy countries, the unsung and undramatic aspects of AI are much more dangerous than a hypothetical societal collapse. The psychological impact that over-reliance on AI will have on adults, and on the development of children’s social and thinking skills, is especially troubling. Skills such as independent thinking are just that, skills: use them or lose them.
- Example 1: in the context of the acquisition and turnaround of a near-bankrupt company, cash flow was the most important consideration. As a result, the ability to peer into the future, even if only by a few days, in order to predict future revenue constituted a decisive advantage. We achieved this using AI.
  Example 2: in the context of negotiations with our client’s suppliers, we faced a difficulty. There were not many alternatives to the suppliers we were negotiating with. To strengthen our negotiating position, we needed to find more potential suppliers capable of manufacturing the same products, but who were not advertising the fact. We used AI to identify other factories with such capabilities.
  Example 3: during the COVID pandemic, we created a system to dynamically update the relevant policies, based on a cost-benefit analysis that was updated in real time to take into account the latest incoming data (such as COVID cases in the organization and where they appeared). ↩︎
- As of this writing, for general-purpose use, offerings such as ChatGPT Plus cost $20 a month. For technical use, offerings such as Claude Code also cost $20 a month. API pricing also compares favorably to an employee doing the same task – assuming the AI gets it right. ↩︎
- For instance, see this research by MIT, which finds a 26% improvement across three companies, including Microsoft. Incidentally, the technology they tested has since been superseded by much more powerful coding assistants (as of August 2025). ↩︎
- I’m sorry, I know the reader would have preferred something more definite. But we don’t know what we don’t know. While we do know, for the technical and theoretical reasons explained in The Limits of AI, that AI’s ability to generalize its capabilities to new fields will be spotty, we do not know to which fields it will succeed in generalizing and to which it will not. ↩︎
- Such was the Athenian economy in classical times. This is known from, e.g., Aristotle. ↩︎
- But, the reader might be thinking, maybe the AI revolution will precisely falsify these facts of life. Don’t the TV talking heads speak of a possible post-scarcity society?
  This is a misunderstanding. A post-scarcity society is not defined as a society where nothing is scarce (which would be impossible), but rather as one where basic human needs are always met. In such a society, Picasso paintings will not cease to be scarce, nor gold, nor even necessarily motor vehicles or soft drinks. None of these are basic human needs. ↩︎
- For example, as per UN publications, “Today, we are producing more than enough food to feed the world’s entire population.” See here. ↩︎
- This assumes the existence of today’s infrastructure for food production. The conclusion may thus cease to be valid in the long term, if the current infrastructure for food production is not maintained. ↩︎
- Note that government regulation falls within the category of the threat of use of force. ↩︎