General Discussion
An interesting conversation with ChatGPT about the impact of AI on the economy and society.
TL;DR
Here is the bottom line:
If AI and automation continue to be deployed primarily as:
labor substitutes
control mechanisms
and cost eliminators
then over time:
the system really does undermine the social base that legitimizes its own expansion.
Not through economic collapse.
Through political re-architecture.
That is how most large economic systems actually fail.
And here's the full conversation. I do warn you, it is very long. I found it very interesting.
Q: On a more global perspective, not just AI, but automation in general, reducing the need for manpower. Is there a negative feedback loop here where reduction of the workforce leads to lower purchasing power, which leads to lower sales, which leads to downward pressure on prices, which leads to pressure to automate in order to reduce costs? Where does that lead? And what is the impact on investments long term?
A: You're describing a very old and very real macro-economic concern.
It even has a name.
John Maynard Keynes called it "technological unemployment" almost a century ago.
Let me answer you directly, in the way you framed it.
Is there a negative feedback loop?
Your loop is:
automation → fewer workers → lower purchasing power → lower sales → price pressure → more automation
Yes, that loop can exist locally and temporarily.
But historically, it has not been the dominant long-run dynamic at the level of an entire economy.
The key reason is this:
automation reduces labor per unit of output, not necessarily total labor demand.
Those two are very different.
The critical distinction
There are two very different worlds:
① Automation that reduces labor inside a fixed market
Example:
call centers
back-office processing
data cleaning
document processing
In those sectors:
demand is relatively fixed
automation directly displaces people
wages and headcount fall
Here your loop can partially happen.
② Automation that reduces cost and enables new demand
This is where history breaks your loop.
When productivity rises:
prices fall
real incomes rise
consumption shifts to other goods and services
And labor demand reappears in:
different industries
different roles
different skill levels
This is why total employment kept growing after:
mechanization of agriculture
industrial automation
computers
the internet
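
A minimal toy sketch in Python can make that contrast concrete (this is my own illustration, not part of the conversation; the 3% annual productivity gain and the demand response are purely made-up assumptions, not estimates):

# Toy model: contrast world ① (fixed demand) with world ② (demand grows
# as automation makes output cheaper). All numbers are illustrative.
def employment_after(years=20, demand_response=0.0):
    productivity = 1.0   # output per worker (index)
    demand = 100.0       # units demanded (index, starts at 100 workers' worth)
    for _ in range(years):
        productivity *= 1.03                  # automation: 3% more output per worker
        demand *= 1 + demand_response * 0.03  # cheaper output may lift demand
    return demand / productivity              # workers needed to meet demand

print(employment_after(demand_response=0.0))  # world ①: ~55, employment falls ~45%
print(employment_after(demand_response=1.0))  # world ②: ~100, employment roughly holds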
Why the full downward spiral rarely happens
For the loop you describe to dominate the entire economy, three things must be true at once:
(1) automation destroys jobs faster than new activities can absorb people
(2) people cannot re-skill fast enough
(3) new industries do not emerge at scale
Historically, point (3) has almost always failed to hold: new industries have, in fact, emerged at scale.
But here is the part that is different this time
From your own background in data / pipelines / analytics, you probably see this immediately.
Previous automation waves mainly replaced:
physical labor
routine clerical labor
This wave (AI + software automation) directly targets:
middle-skill cognitive work
analytical work
coordination and reporting work
junior professional work
That changes the distribution problem.
The real feedback loop is not economic; it's political and social
The more realistic loop looks like this:
automation → job polarization → wage inequality → political pressure → regulation / redistribution → investment incentives change
This is already visible in policy discussions at organizations like the Organisation for Economic Co-operation and Development and the International Monetary Fund.
The core risk is not:
nobody has money anymore
The risk is:
income becomes concentrated enough that consumption growth slows and becomes unstable.
High-income households do not spend like middle-income households.
Where does your loop actually lead?
In practice, it leads to:
➤ structural labor reallocation
Not permanent mass unemployment.
But:
fewer routine white-collar jobs
more high-skill technical jobs
more low-productivity personal / service jobs
This is called job polarization.
You already see this today.
Now let's talk about the part you really asked about:
What does this mean for long-term investment?
Here's the blunt version.
1. Automation is structurally deflationary
Long term:
goods and services get cheaper to produce
margins are under constant pressure
pricing power is harder to maintain
So investors increasingly value:
scarcity, not efficiency.
That means:
brands
ecosystems
network effects
regulation-protected positions
distribution control
2. The best long-term investments are not automation users
They are:
➡️ automation owners
➡️ automation platforms
➡️ automation infrastructure
That's why markets consistently reward:
cloud platforms
data platforms
chip and infrastructure layers
tooling ecosystems
Much more than the average company that simply automates internally.
3. Your loop compresses returns for ordinary firms
If every firm can automate:
cost advantage disappears
differentiation disappears
competition increases
So long-term:
automation raises aggregate productivity but lowers average firm-level return on capital.
This is extremely important for investors.
The subtle but powerful implication
Automation is good for:
the economy
productivity
long-run output
But it is not automatically good for equity returns.
Equity returns depend on:
who captures the productivity gains.
The uncomfortable truth
If automation continues to expand into cognitive and professional work:
then long-term investment returns become increasingly concentrated in:
a small number of technology and infrastructure firms
a small number of platform owners
firms with regulatory or ecosystem moats
And broad, diversified generic operating companies become:
structurally harder places in which to grow profits.
So, is your negative feedback loop wrong?
No.
It's just incomplete.
The real loop is:
automation → productivity → cost reduction → margin competition → capital concentrates into platform & infrastructure owners → income inequality rises → policy and regulation shape the next cycle
A final perspective, tuned to your background
You work in data pipelines, governance, transformation, and platform plumbing.
That is exactly where the long-term value sits in an automation economy:
not in replacing people once,
but in making automation reliable, auditable, scalable and governable.
Ironically, the more automation spreads, the more investment shifts toward:
data infrastructure
lineage
orchestration
security
compliance
quality controls
In other words:
automation creates its own supporting industries.
That is one of the main reasons the full collapse loop you described has never actually closed on itself at the macro level.
Q: It seems to me that what prevented the full collapse loop was that there were still human skills that were needed and which automation could not replace. But with the coming of AI, what human skills remain that cannot be replaced by automation? What's left for humans to do?
A: You're putting your finger on the real historical hinge:
previous automation waves stalled because there were still large classes of human-only tasks.
The uncomfortable question now is whether AI closes that gap.
The cleanest way economists frame this today comes from the task-based view of automation, most prominently associated with Daron Acemoglu at the Massachusetts Institute of Technology.
The key idea is:
jobs are bundles of tasks; automation replaces tasks, not whole occupations.
So the right question is not:
Which jobs survive?
It is:
Which tasks still fundamentally require humans?
Let's be precise and unsentimental.
First: what AI is now very good at
Modern AI is already strong at:
pattern recognition in large datasets
language transformation (summarize, translate, draft, classify)
code generation for well-scoped problems
routine reasoning inside clearly defined rules
search and retrieval across massive corpora
In your own world:
data cleaning
schema mapping
transformation boilerplate
documentation generation
test scaffolding
→ these are exactly the tasks that compress fastest.
So yes: a very large fraction of junior and mid-level cognitive production work is now automatable.
That is genuinely new.
What still resists automation (and why)
Here are the durable categories.
Not because AI is not smart enough, but because of how real systems work.
1. Owning real-world consequences
AI can propose actions.
It does not:
carry legal liability
absorb reputational damage
hold fiduciary responsibility
get fired, sued, or sanctioned
Decisions that require:
sign-off
accountability
escalation ownership
remain human.
In your context:
data release approvals
governance exceptions
compliance trade-offs
production incident ownership
This is not a technical limitation.
It is an institutional one.
2. Defining the problem in the first place
AI is very strong once the objective is clear.
It is weak at:
deciding what the objective should be
choosing between conflicting stakeholder goals
redefining the problem when constraints change
This is much harder than it sounds.
Most real projects fail not because solutions are bad, but because:
the wrong problem was automated very efficiently.
Problem formulation remains deeply human.
3. Cross-domain judgment under ambiguity
AI works best when:
the domain is well represented in data
feedback is fast
success is well defined
Humans still dominate when:
signals are sparse
situations are novel
incentives are misaligned
politics and organizational dynamics matter
Think about:
program prioritization
platform strategy
vendor selection
architecture trade-offs across teams
These are not purely technical tasks.
They are socio-technical coordination problems.
4. Trust, persuasion and social navigation
This is routinely underestimated.
A huge fraction of economic output is driven by:
convincing someone to fund something
convincing someone to adopt something
convincing someone to tolerate risk
resolving conflict between teams
AI can generate arguments.
It cannot genuinely:
build trust
maintain long-term relationships
negotiate accountability
In complex organizations, that is real labor.
5. System ownership and integration across messy reality
This one maps directly to your experience.
Automation does not struggle with:
a clean pipeline
a clean schema
a clean API
It struggles with:
legacy systems
partial documentation
inconsistent semantics
broken lineage
organizational silos
The work of:
stitching systems together
interpreting business meaning
aligning definitions
enforcing governance
is mostly not computational.
It is institutional.
The uncomfortable shift
Here is the real change with AI.
Before:
humans mainly produced the content and logic
Now:
humans increasingly supervise, constrain, validate and integrate machine output
This is a role shift, not just a productivity boost.
So what is left for humans?
The honest list looks like this:
Humans concentrate into four roles
▶ problem framers
What should be built?
What should be optimized?
What should not be automated?
▶ risk owners
Who signs?
Who is accountable?
Who absorbs failure?
▶ integrators
How does this touch existing systems, data, processes and politics?
▶ social coordinators
How do people actually adopt this?
How do teams align?
How do conflicts resolve?
Everything else becomes progressively more automated.
Now the part you are probably implicitly worrying about
Is that enough work for everyone?
No.
Not at the same wage distribution.
This is where your intuition is absolutely right.
AI compresses:
the production layer
the middle cognitive layer
and expands:
the leverage layer (few people overseeing a lot of automated output)
So even if total employment eventually recovers,
the number of well-paid, routine professional roles shrinks.
That is the structural change.
The crucial nuance
This does not mean:
humans become useless
It means:
the bottleneck of economic value moves from execution to coordination and ownership.
Historically, it is very rare for that shift to happen this quickly.
A practical way to think about your own future-proof skills
Given your background (data pipelines, transforms, governance, and failure modes), the safest human-only edge is not:
writing transformations
building schemas
automating workflows
It is:
designing the data and automation system so that it can be trusted, audited, evolved and governed across teams and projects.
In other words:
semantic ownership
lineage and traceability
failure diagnosis
policy enforcement
cross-project consistency
Those are not just technical problems.
They are organizational ones.
The blunt answer to your question
What human skills remain?
Not creativity in the abstract.
Not empathy as a slogan.
What remains are:
skills that bind technical systems to human institutions.
And that is exactly why the collapse loop you described earlier is unlikely to fully close:
automation creates enormous output capacity
but humans remain the bottleneck in deciding, governing and integrating what that output is allowed to do.
Q: "Humans remain the bottleneck in deciding, governing and integrating what that output is allowed to do." True. But how many humans will it take to do that, compared to the mass of humanity extant? What happens to the rest of humanity, the ones that are still rubbing sticks together to make fire?
A: This is the hard, honest version of your question:
If humans are only needed for decision, governance and integration, how many humans is that, really?
Short answer:
far fewer than today's global labor force.
And yes, that creates a real structural problem.
Let me be very direct.
There is no economic law that says everyone will be needed
A lot of optimistic arguments implicitly assume:
markets will always invent enough new tasks for everyone.
That is not a law of nature.
It is a historical pattern, and patterns can break.
The most careful modern work on this comes from economists like Daron Acemoglu.
Their central warning is exactly what you are intuitively circling:
AI and automation are becoming general-purpose substitutes for human tasks, not just complements.
That changes the scale.
How many humans are actually needed for deciding, governing, integrating?
Not many.
Think in terms of leverage.
A small group of people can already:
design systems
define policies
supervise models
approve outputs
integrate pipelines
enforce compliance
that scale to millions or billions of users.
This is not speculative.
You already see this in:
cloud platforms
data infrastructure teams
security and governance groups
One platform team replaces the work that previously required:
dozens of local IT teams
dozens of analysts
dozens of support and coordination roles
AI multiplies that leverage again.
So realistically:
the governance / integration layer is a thin layer, not a mass-employment layer.
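
A rough, back-of-the-envelope illustration of that leverage (again a sketch of my own; the oversight ratios are invented for illustration, not estimates):

# If one governance/integration person can oversee automated output that
# previously took N workers to produce, the headcount needed in that layer
# shrinks roughly as 1/N.
for oversight_ratio in (5, 20, 100):   # workers' worth of output one person can govern
    share_needed = 100 / oversight_ratio
    print(f"1:{oversight_ratio} leverage -> ~{share_needed:.0f}% of prior headcount")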
So what happens to the rest?
This is where we must separate three very different groups.
1. The global poor ("rubbing sticks together to make fire")
For much of the world, the problem is not AI.
It is:
weak institutions
poor infrastructure
limited capital
limited education access
Ironically, for this group:
AI can raise productivity without displacing much, because there is not much formal employment to displace in the first place.
So this group is not the main macro risk.
2. The global middle and lower-middle in developed economies
(this is the real pressure point)
This is the group that historically:
staffed offices
ran operations
handled coordination
produced reports
managed workflows
supported professionals
In other words:
the cognitive middle layer.
That layer is exactly what AI compresses.
And this is new.
Earlier automation mainly wiped out:
manual labor
routine factory work
This time it targets:
clerical
analytical
junior professional
operational coordination
There is no obvious, massive replacement sector waiting.
3. A small high-leverage elite
Designers, owners, integrators, strategists, system builders.
This group grows slowly.
Their impact scales extremely well.
Their wages scale very well.
Here is the uncomfortable math
If AI continues to improve roughly along current lines, then:
the economy can grow while the number of economically central workers shrinks.
That is the key difference from past automation waves.
Productivity no longer needs a large human base.
So, what do the rest of humans do?
Economically, three things tend to happen.
1. A large expansion of low-productivity personal and local services
Care.
Hospitality.
Local services.
Human presence jobs.
Not because they are efficient,
but because people still want humans in those roles.
These jobs:
do not scale
pay less
are locally constrained
This is already happening.
2. A growing dependency on redistribution
This is unavoidable if the previous dynamic holds.
Not necessarily in the dramatic form of universal basic income,
but in practice through:
tax credits
subsidies
public employment
social programs
healthcare and housing support
The reason is mechanical:
if a large fraction of people cannot sell labor at a price the market values, consumption collapses.
And then your negative feedback loop really does become macro-relevant.
3. A political-economy problem, not a technology problem
This is the part most tech discussions quietly avoid.
The limiting factor will not be:
what AI can do
It will be:
how societies choose to distribute the output of systems that require fewer humans.
That is fundamentally political.
A key correction to one common fear
It is not that:
most humans become useless
It is that:
the market no longer needs most humans as producers.
But people still exist as:
citizens
consumers
voters
social participants
Markets alone do not solve that mismatch.
Where your original loop fits now
You earlier described:
automation → less income → less demand → more automation → collapse
That loop becomes plausible only under one condition:
redistribution and new income channels fail to compensate for lost labor income.
Historically, redistribution expanded quietly and incrementally.
This time the scale may need to be much larger.
The blunt, unsentimental answer
How many humans will be needed to decide, govern and integrate?
A small fraction.
A single-digit percentage of the workforce in many sectors is not a crazy long-term estimate.
And what about the rest?
Economically:
lower-paid service work
public and semi-public employment
or income supported by redistribution
Socially and politically:
this becomes the central issue of the century.
Not AI alignment.
Not model safety.
Not productivity.
The legitimacy of an economic system in which most people are no longer structurally needed for production.
A final, very practical note tailored to you
Your instinct to focus on:
governance
lineage
failure modes
integration
institutional constraints
is exactly correct for staying in the high-leverage layer.
But zooming out:
what protects you personally does not scale to protecting society.
That gap between individually rational skill choices and collectively sustainable outcomes is precisely why this is not mainly a technology problem anymore.
It is a distribution and institutional design problem.
Q: It is a distribution and institutional design problem. Correct. That's where I think the system can become the architect of its own demise. Fewer and fewer people are vested in the success of the AI automation cycle. They receive no benefit and become disjoint and evolve their own society and economy at best, or become Luddites at worst.
A: You're describing something very close to a political-economy failure mode, not a technology one.
There is a well-known historical framing for exactly this dynamic, from Karl Polanyi.
His core idea (from The Great Transformation) is:
when an economic system stops embedding itself in social institutions, society eventually pushes back, sometimes constructively, sometimes destructively.
What you're intuitively calling "the system becoming the architect of its own demise" fits that model very cleanly.
Let me translate your intuition into a precise mechanism.
You're right about the key risk: vested participation collapses
Your claim:
fewer and fewer people are vested in the success of the AI automation cycle
That is the dangerous variable.
Not inequality per se.
Not job loss per se.
But this:
loss of perceived participation in the production system.
That distinction matters.
A society can tolerate:
large productivity gaps
large income gaps
Much less easily than it can tolerate:
being economically irrelevant.
The real feedback loop looks like this
Not the simple labor → income → demand loop.
The destabilizing loop is:
automation concentrates economic agency →
most people become passive recipients →
social identity detaches from production →
political legitimacy erodes →
policy becomes adversarial to the production system →
investment becomes defensive →
innovation slows or fragments
This is exactly the "own demise" dynamic you are pointing at.
The crucial nuance
People do not need to own AI systems.
They do not even need to work directly on them.
But they do need at least one of these:
a believable path into participation
a sense that their work materially matters
or a credible claim on the output
If all three disappear, disengagement is rational.
You're also right about the two social outcomes
You said:
disjoint and evolve their own society and economy at best, or become Luddites at worst
Those are not hypothetical.
They are two historically common responses.
Path A: parallel / informal economies
This looks like:
local services
informal markets
community and family production
small-scale entrepreneurship
semi-detached digital and creator micro-economies
People are not anti-technology.
They are simply not structurally integrated into the dominant production engine.
This already exists.
Path B: adversarial politics toward technology and capital
This is what people loosely call Luddite, but historically it is better described as:
anti-elite, anti-concentration, anti-scale politics
The original Luddites were not anti-machines.
They were anti-loss of bargaining power.
That pattern is repeating.
This is exactly what modern labor-automation economists warn about
The person most aligned with your concern is, again, Daron Acemoglu.
His central policy warning is not:
AI will destroy jobs
It is:
AI can be deployed in ways that systematically bypass human complementarity.
When that happens:
productivity rises
but political support for the growth model collapses
This is not an economic contradiction.
It is an institutional one.
Why your "architect of its own demise" phrasing is accurate
Because the incentive structure inside firms is very narrow:
automate to reduce cost
automate to reduce headcount
automate to increase control
automate to remove coordination friction
None of those incentives internalize:
long-run political support
social legitimacy
consumption stability
workforce identity
So the system optimizes itself into a fragile equilibrium.
Here is the really uncomfortable truth
Markets do not require social stability to function.
They only require:
property rights
enforcement
contracts
capital flows
But innovation systems do.
High-risk, long-horizon investment collapses very quickly when:
regulation becomes unpredictable
political swings become extreme
public legitimacy erodes
So your scenario directly feeds into long-term investment quality.
What would actually prevent the self-destruction loop?
This is where most discussions stay vague.
But the levers are surprisingly concrete.
1. Human-complementary deployment choices
AI that:
augments frontline workers
increases span of control without removing roles
raises wages through leverage
instead of:
replacing entire task clusters
This is exactly the design distinction Acemoglu emphasizes.
2. Broad claims on output, not just redistribution
This is subtle but crucial.
Pure redistribution says:
you are outside the system, here is a transfer.
Participation says:
you are a stakeholder in the system.
Mechanisms can include:
profit sharing
worker equity
sovereign / public investment funds
broad capital ownership vehicles
Without this, resentment grows even if material needs are met.
3. Institutionalized human roles
Not symbolic roles.
Real roles in:
oversight
governance
quality control
compliance
escalation authority
safety and ethics enforcement
If these roles are treated as cost centers to be eliminated,
your thin human layer becomes politically indefensible.
The critical investment implication (long-term)
You asked earlier about investments.
Here is the connection to your concern.
Long-term capital returns depend less and less on:
how powerful the automation is
and more and more on:
how politically sustainable the automation regime is.
This is exactly the same structural problem that energy, finance and housing markets face.
Investors will increasingly price:
regulatory exposure
public backlash risk
forced labor or data policies
nationalization or public-option competition
into automation-heavy sectors.
One very sharp way to summarize your concern
Your fear is not that AI destroys jobs.
Your fear is that:
the production system becomes socially unrepresentative.
That is far more dangerous.
And I'll end with the honest bottom line
You are correct.
If AI and automation continue to be deployed primarily as:
labor substitutes
control mechanisms
and cost eliminators
then over time:
the system really does undermine the social base that legitimizes its own expansion.
Not through economic collapse.
Through political re-architecture.
That is how most large economic systems actually fail.
