One key line about ATMs is buried deep in the article:
> the number of tellers per branch fell by more than a third between 1988 and 2004, but the number of urban bank branches (also encouraged by a wave of bank deregulation allowing more branches) rose by more than 40 percent
So, ATMs did impact bank teller jobs by a significant amount. A third of them were made redundant. It's just that the decrease at individual bank branches was offset by the increase in the total number of branches, because of deregulation and a booming economy and whatever else.
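The offset is worth checking with back-of-the-envelope arithmetic, taking "a third" and "40 percent" at face value (illustrative figures only):

```python
# Per-branch teller count fell by about a third...
tellers_per_branch = 1 - 1/3
# ...while the number of urban branches rose by about 40 percent.
branches = 1 + 0.40

# Net change in total urban teller jobs
total = tellers_per_branch * branches
print(f"{total:.2f}")  # 0.93, i.e., total teller employment roughly flat
```

So the per-branch decline and the branch boom very nearly cancel, which is the point of the quoted passage.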
A lot of AI predictions are based on the same premise. That AI will impact the economy in certain sectors, but the productivity gains will create new jobs and grow the size of the pie and we will all benefit.
But will it?
> But will it?
My prediction is no, because productivity gains must benefit the lower classes to see a multiplier in the economy.
For example, ATMs did cut into teller jobs, but access to cash at any hour does increase the velocity of money in the economy. It decreases the savings rate and encourages spending among the class of people whose money imparts the highest multiplier.
AI does not. All the spending on AI goes to a very small minority, who have a high savings rate. Junior employees that would have productively joined the labor force at good wages, must now compete to join the labor force at lower wages, depressing their purchasing power and reducing the flow of money.
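The multiplier being invoked here is the textbook Keynesian spending multiplier, 1/(1 − MPC), where MPC is the marginal propensity to consume. A toy sketch with hypothetical MPC values (not figures from the comment):

```python
def spending_multiplier(mpc: float) -> float:
    """Total spending generated per dollar injected, assuming each
    recipient spends fraction mpc of it and saves the rest."""
    return 1 / (1 - mpc)

# Hypothetical marginal propensities to consume
low_income_mpc = 0.95    # spends nearly the whole paycheck
high_income_mpc = 0.40   # high savings rate

print(spending_multiplier(low_income_mpc))   # ~20: each dollar recirculates many times
print(spending_multiplier(high_income_mpc))  # ~1.67: most of each dollar stops moving
```

Under this (simplified) model, shifting income toward high savers shrinks the multiplier, which is the mechanism the comment is describing.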
Look at all the most used things for AI: cutting out menial decisions such as customer service. There are no "productivity" gains for the economy here. Each person in the US hired to do that job would spend their entire paycheck. Now instead, that money goes to a mega-corp and the savings is passed on to execs. The price of the service provided is not dropping (yet). Thus, no technology savings is occurring, either.
In my mind, the outcomes are:
* Lower quality services
* Higher savings rate
* K-shaped economy catering to the high earners
* Sticky prices
* Concentration of compute in AI companies
* Increased price of compute prevents new entrants from utilizing AI without paying rent-seekers, the AI companies
* Cycle continues all previous steps
We may reach a point where the only ones able to afford compute are AI companies and those that can pay AI companies. Where is the innovation then? It is a unique failure outcome I have yet to see anyone talk about, even though the supply and demand issues are present right now.
> My prediction is no, because productivity gains must benefit the lower classes to see a multiplier in the economy.
Baumol's cost disease hurts the lower classes by restricting their access to services like health care and education, and LLMs/agents make it possible to increase productivity in these areas in ways which were once unimaginable. The problem with services is that they're typically resistant to productivity growth, and that's finally changing.
If you can get high quality medical advice for effectively nothing, if you can get high quality individualized tutoring for free, that's a pretty big game changer for a lot of people. Prices on these services have been rising to the stratosphere over the past few decades because it's so difficult to increase the productivity of individual medical practitioners and educators. We're entering an era that could finally break this logjam.
> Baumol's cost disease hurts the lower classes by restricting their access to services like health care and education, and LLMs/agents make it possible to increase productivity in these areas in ways which were once unimaginable.
You've expressed very clearly what LLMs would have to do in order to be economically transformative.
> If you can get high quality medical advice for effectively nothing, if you can get high quality individualized tutoring for free, that's a pretty big game changer for a lot of people. Prices on these services have been rising to the stratosphere over the past few decades because it's so difficult to increase the productivity of individual medical practitioners and educators. We're entering an era that could finally break this logjam.
It's not that process innovations are lacking, it's that product innovations are perceived as an indignity by most people. Why should one child get an LLM teacher or doctor while others get individualized attention by a skilled human being?
> Why should one child get an LLM teacher or doctor while others get individualized attention by a skilled human being?
Is the value in the outcome of receiving medical advice and care, and becoming educated, or is the value just in the co-opting of another human being's attention?
If the value is in the outcome, the means to achieving that aren't of much consequence.
More subtly, what is an education? What is care? As you point out, the LLMs are (or probably will become) perfectly good at the measurable parts of those services; but I think the residual edge of “good” education/care is more than just the other human’s co-opted attention.
How many of us have a reminiscence that starts “looking back, the most life-changing part of my primary or secondary education was ________,” where the blank is a person, not a curriculum module? How many doctors operate, at least in part, on hunches—on totalities of perception-filtered-through-experience that they can’t fully put into words?
I’m reminded of the recent account of homebound elderly Japanese people relying on the Yakult delivery lady partly for tiny yoghurt drinks, but mainly for a glimmer of human contact [0]. Although I guess that cuts to your point: the value in that example really is just co-opting another human’s attention.
In most of these caring professions, some of the value is in the measurable outcome (bacterial infection? Antibiotic!), but different means really do create different collections of value that don’t fully overlap (fine, I’ll actually lay off the wine because the doctor put the fear of the lord in me).
I guess the optimistic case is, with the rote mechanical aspects automated away, maybe humans have more time to give each other the residual human element…
[0] https://news.ycombinator.com/item?id=47287344
> How many of us have a reminiscence that starts “looking back, the most life-changing part of my primary or secondary education was ________,”
For me it was a website with tutorials on how to make Flash games. It literally launched my career and improved the quality of life for my whole family by an order of magnitude.
I am primarily the naysayer of AI but I admit that current LLMs could have easily replicated the whole website.
The supply/demand picture here is more complicated than it looks.
If AI displaces human educators, yes, their supply shrinks -- but we can't assume what direction its demand will go.
We've seen this pattern before: as recorded music became free, live performance got more expensive, and therefore much less accessible than it used to be.
What's likely to happen is that "worse" (read: AI) education will become much cheaper, while "better" (read: in-person) education that involves human connection-driven benefits will become much less accessible compared to what it is today.
Most people may consider it a win. It's certainly not a world I'm looking forward to.
Important follow-up to my comment: as fewer people do X -- live music, medicine, education, you name it -- fewer talented people do it as well.
Fields need a large base of participants to produce great ones. This is exactly why software has been so extraordinary over the past 30 years: an unusual concentration of gifted minds from all of humankind committed themselves to it.
In my view, the Bach, Rachmaninoff, and Cole Porter equivalents today probably aren't writing symphonies. They've decided to write code for a living. Which is why any Great American Songbook made today won't hold a candle to one from the 1950s.
Disagree, we do have the Bachs and Rachmaninoffs today: John Williams, Jerry Goldsmith, Bear McCreary, Yuki Kajiura, Hans Zimmer, and probably a slew I'm not even aware of.
We're in the greatest era of symphonies IMO, it's just that they're hiding in surprising places; movies, TV shows, games, etc.
I don't think we can know whether or not this is the case in our own lifetimes, because we are so immersed in popular culture that we can't be objective about it. Enough of our historical great composers weren't venerated until after their deaths, and to describe composers as "hiding" within the most popular media of our era is a great disservice to the many composers that don't have the fame, connections and reputation to be hired to write for these.
I would also point out that composing for a medium like a game or a movie places a great deal of constraints upon the composer, in terms of theme, cost of instrumentation, duration and most importantly: what is safe and palatable for an executive to approve of.
The sound track to "Lord of the Rings" is one of my favorites.
And AI is stuck in the past. As we prepare to launch a new product… people using AI won’t know about it for months or years, potentially. This will make startups have to seed the planet with text so an AI learns about it, not to mention normal SEO and other shit. I’m sure it is only a matter of time before you can pay to inject your product into the models so it knows about it faster, but incumbent companies will pay more to make sure they don’t.
The future is going to suck.
> I’m sure it is only a matter of time before you can pay to inject your product into the models so it knows about it faster, but incumbent companies will pay more to make sure they don’t.
You have just discovered the fully enshittified version of the business model AI companies hope to reach.
> Is the value in the outcome of receiving medical advice and care, and becoming educated,
Absorbing information doesn't make you "educated". Learning how to employ knowledge with accountability and trust with beings in the real world is what's important, and a machine can't teach you how to do that.
> or is the value just in the co-opting of another human being's attention?
Why is it "co-opting" if it involves a mutually consenting exchange?
Wisdom comes from application of knowledge and experience in the real world, as does skill.
The value comes from applying an expert's wisdom and skill to the problem at hand.
You get neither from LLMs.
It's interesting that you assume there's value in being educated in this hypothetical world of complete passive consumption.
The world you're describing is one where the entire economic value of humanity is in reminding the AI to put out the food bowl and refill the water dish at the appropriate time.
For many, the Culture is a utopia to aspire to; for some, it is something to run away from as fast as possible. Banks himself described the dichotomy.
The interesting thing here is less about what people aspire to, and more about the lack of imagination and thought when considering the world they want to create.
It would be funny if the sleepwalkers weren't trying so hard to drag humanity along.
Even if you have perfect medical information and advice through an LLM, can you perform surgery on yourself? Can you prescribe yourself whatever medication you think you need?
For education, if you know as much as the average Harvard grad, can you give yourself a Harvard degree that will be as readily accepted in a job application or raising funds for a new business?
Interesting perspective; medical regulation as a business moat
That's why medical licensing was introduced.
The premise of your argument is that "the outcome" can be separated from the process. This is true enough for manufacturing bricks: I don't much care what process was used to create a brick if it has a certain compressive strength, mass, etc.
But Baumol's argument, which you introduced to the conversation, is that outcome and process cannot actually be distinguished, even if a distinction in thought is possible among economic theorists.
> But Baumol's argument, which you introduced to the conversation, is that outcome and process cannot actually be distinguished
How is that Baumol's argument? How is 'outcome' vs 'process' relevant to his argument at all?
'Cost disease' is just the foundational truth that the cost of the output from industries with stagnant productivity will increase due to the fact that the workers in that industry can be more valuable in other industries, reducing the number of relative workers in the stagnant industry.
If you want to make the output from a stagnant industry available to a broader spectrum of the population then you have to improve the productivity of that industry.
I think he means that when you go to watch the symphony orchestra, you are going to watch a bunch of people sitting with their instruments, manually playing them.
There is no way to separate this process from the product of the process.
You're not buying the sound of the music. You can just stream that. As far as that is the product, it has already been automated and scaled so millions of people can hear it at once, whenever they feel like it.
You're buying the sound AND the people sitting in their formal clothes manually moving their strings over a violin, with painstaking accuracy developed through years of manual practice.
You couldn't make a robot do it, for example. You could maybe make a robot play a violin, but that again isn't what the product is.
The product is tied to an expectation of what it is that does not allow for it to be done more effectively.
By contrast manufacturing processes are not tied to this expectation. If I buy a loaf of bread, I don't care whether the wheat was manually harvested or harvested by a huge machine.
It's very true for healthcare (especially mental healthcare) and education today as well, because for most people, the choice isn't LLM vs. human attention - it's LLM vs. no access at all.
It's not like that's an inherently unsolvable problem without using LLMs
The value is in the signature and the power of the legal department your insurance provider employs.
Honestly, I think it’s the second.
> the value just in the co-opting of another human being's attention?
That's a weird way of describing it.
A machine telling me to exercise and eat right will be ignored, even if the advice is correct. A person I trust taking me aside, looking me in the eye and asking me the same would be taken far more seriously.
That may well be true if you need to be persuaded to exercise and eat right.
OTOH, if you don't need to be persuaded and just want information on how best to go about doing it, then I think it makes little difference where the information comes from as long as it's of reasonable quality.
Maybe for you, but that model clearly doesn’t generalize, or dieticians and physical trainers would only have success stories to point to.
The specific example was indeed a poor one since we have extensive data on that, and even high-touch non-surgical interventions involving hours per week from multiple specialists (read: incredibly expensive) with very-willing participants have proven a lot less effective than one might hope (somewhat effective! But only moderately so, which ain't enough given the price tag). Docs saying "eat better and exercise" at an annual check-up has basically no effect whatsoever.
Turning dozens to hundreds of decisions per week for which the correct decision must be made in nearly every case, into a single decision per week for which the correct choice must be made, has proven wildly more effective than any of that (I mean glp-1 agonists).
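A hypothetical illustration of why collapsing many decisions into one matters: if each decision independently goes right with probability p, a week that requires n correct decisions succeeds with probability p**n (the numbers below are made up, not from any study):

```python
p = 0.95  # chance any single food decision goes "right" (hypothetical)

many_decisions = p ** 100  # a week of 100 dietary decisions, all correct
one_decision = p ** 1      # a single weekly decision

print(f"{many_decisions:.3f} vs {one_decision:.2f}")  # 0.006 vs 0.95
```

Even with a 95% per-decision success rate, a perfect 100-decision week is vanishingly rare, while the single-decision week usually succeeds.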
It also seems like the value of quality tutoring that doesn't primarily function as social/class signaling goes down as tools capable of automating high quality intellectual work are more widely available.
It depends on outcome again: is the value of tutoring the social class elevation, or is it in the outcome of becoming more skilled and knowledgable?
There's also the deeper philosophical question of what is the meaning of life, and if there's inherent value in learning outside of what remunerative advantages you reap from it.
If I described my symptoms to an AI and it suggested a diagnosis, I would definitely get a second opinion.
That's reasonable, but don't feel like you're safe letting the humans rest on their laurels. Human medical errors kill thousands upon thousands every year.
>If you can get high quality medical advice for effectively nothing
This is an area where confident but wrong information is extremely costly. It's like saying an LLM can give you high quality directions on how to tap into a high voltage transformer. Sure, but when it's wrong, it's very very wrong, with disastrous consequences. That's why professions like doctors and engineers are more regulated than others.
I'm not certain that an already observable negative impact of AI on some areas of education could be offset by "high quality individualized tutoring for free".
I didn’t know Claude Code could put a thermometer in my butt.
By the time it replaces doctors, nobody but today's investors will be able to afford anything at all. The X-shaped economy would have owners in the V and manual laborers (assuming this doesn't translate to gains in automation) in the ^. This outcome is worth avoiding...
Can a robot write a medicine prescription? A medical procedure prescription? If yes, that would be a game-changer. But the medical insurance providers would be very cautious about honoring these. Then, if things go wrong, what entity would be held accountable for malpractice?
You can already get good-quality medical advice "for nothing", unless it requires e.g. a blood test. The question is how actionable such advice is going to be, and how even the quality is going to be.
There's a simple solution. If medical malpractice happens, file a lawsuit against the LLM company. If their license is revoked as part of that finding, unfortunately that applies to the "doctor" (e.g. ChatGPT).
Same for self-driving. Just hold each car to the same standard as a normal driver, with the owning AI company bearing liability. So after ~20 tickets and accidents in a week, and a few ambulances being blocked, the only option is to revoke the driver's license (which all the cars share, as they have the same brain).
This would make AI companies more cautious and only advertise capabilities they actually have and can verify. They would be held to the standard of a human. I think that's reasonable (why replace humans if the outcome is worse, and why reduce protections for individuals).
To make the analogy more clear: even if a telemedicine doc sees 10,000 patients a day all over the world, they would be held liable for any medical malpractice. Bad enough, and their license would be revoked, regardless of the fact that they see many patients all over the world. Same deal with AI / LLM -- if ChatGPT is making medical advice and it hurts someone, that's the same as a human doing so -- its malpractice and lawsuits can happen.
If they are somehow licensed, well then that license can be revoked. We would revoke a human's license for a single offense in some cases, the same should occur with AI.
Well, there's always wars as the way to get rid of people. I really don't rule out that the people that benefit from this sort of thing will purposefully steer the world in that direction because the poor won't have any choice other than to enlist as a way out of their situation, and never mind the consequences. You can already see some of this happening.
You're implying that insurance companies will allow prices to fall and lower their profits. That seems like a really unlikely event in the current economy. They fire a lot of doctors and nurses, but they won't lower prices.
This is assuming no competition materializes from the lowered friction
The ACA requires 80-85% of health insurance to go toward medical care (medical loss ratio). The way they work around that is to figure out how to charge more for medical care.
I’m sick of this idea that “free” services are beneficial to society. There is no such thing as a free lunch; users are essentially bartering their time, attention, IP (contributed content) and personal/behavioral data in exchange for access to the service.
By selling those services at a cost of “free”, hyperscalers eliminate competition by forcing market entrants to compete against a unit price of 0. They have to have a secondary business to subsidize the losses from servicing the “free” users, which of course is usually targeted advertising to capitalize on the resources paid by users for access. Or simply selling to data brokers.
With the importance of training data and network effects, “free” services even further concentrate market power. Everyone talks about how AI is going to take away jobs, but no one wants to confront how badly the anticompetitive practices in big tech are hurting the economy. Less competition means less opportunity for everyone else, regardless of consumer benefit.
The only way it works if the “free” service for tutoring or healthcare is through government subsidies or an actual non-profit. Otherwise it’s just going to concentrate market power with the megacorps.
This 1000x. "Free" is only a viable business model if the govt funds it. Otherwise, the $$ has to come from somewhere else in the company - how long will it take for the company to lose interest in a loss-leader when they're making $$ from other parts?
Look at all the deprecated Google products. What happens when Gemini-SaaS makes billions from licensing to other companies, and Gemini-Charity-for-the-poors starts losing money?
Sadly, the bigger the $$ in the tech pie, the more we have attracted robber barons, etc.
Ok so how about "much cheaper"?
> I’m sick of this idea that “free” services are beneficial to society. There is no such thing as a free lunch; users are essentially bartering their time, attention, IP (contributed content) and personal/behavioral data in exchange for access to the service.
In aggregate, this is true, but there are many ways to game the system to one's advantage and get a true "free lunch." For example, people watching Youtube with an adblocker and logged out don't provide Google with any income or useful telemetry. Likewise you can get practically unlimited GPT/Claude/etc by using multiple accounts.
No, you are misunderstanding the economic principle. There is still a cost associated with serving that user, and the user is still paying for the cost of their internet connection and the opportunity cost of spending time on the service, or of setting up new accounts to get past usage limits. "No useful telemetry" I don't really agree with in the YouTube example, as view counts are still vital for their recommendation algorithm.
TANSTAAFL has two main implications. First, nothing is free; someone has to pay for it. Second, money is not the only thing you pay with; every choice has an opportunity cost. Gaming the system costs someone something.
You could get high quality medical advice 20 years ago on the internet, or 40 years ago in the library. Doctors aren't there to give you advice, they are mostly gate keepers. Every person who's chronically ill knows that doctors are totally useless for anything beyond the 10 most common diseases and primarily exist to approve or reject your pleas for lab work. They won't go away, neither will psychotherapists and all the middle managers that can be easily automated, because their real purpose is not the practical work that they do.
> high quality medical advice
I'll replace my doctor with AI immediately after the tech bros do
lol
> cutting out menial decisions such as customer service
This is cited so often. We tried it at a large scale with some of the best engineering talent but unfortunately the humans on the other side preferred speaking to and interacting with a human by a wide margin.
We are still trying with the latest AI models but humans are still doing better at serving other humans.
In one of our studies, we observed by a large margin that our customers would hang up immediately on knowing that they are interacting with an AI system.
I have heard this from others as well.
Isn't it obvious why?
We contact support services to fix material problems. 'This booking is wrong.' 'I want a refund for that.' AI systems aren't empowered to solve these problems. At best they can provide information. If the answer is information - the user can likely already find it online themselves (often from a better AI model than they're going to find running your support line). If they're calling, they most often want something done.
Yeah, it's like trying to use an ORM to find data in the database that's invalid due to a bug. You can't see things in the system that break the premises of the system by using the system, and the fact that some things are "supposed to be impossible" doesn't change the reality of what's actually occurring in the data store.
So customer support needs to know how the system works and needs to understand what the data means, but also has to know when the system is factually incorrect. Customer support has to know when the second party is speaking the truth.
Do you know that to be true or are you speculating?
As we argue on the orange site, companies are paying Sierra AI to integrate voice and text agents into their systems to look up account information and process refunds. Fallbacks to human agents are built in to these systems.
We all hate phone trees because they never have the capability to handle exceptions to the most basic functions. We shout "speak to an agent!" into the phone because their website and phone trees only handle the happy path.
> because productivity gains must benefit the lower classes to see a multiplier in the economy
by this logic, the invention of mechanized farm equipment, which displaced farm labor, didn't increase productivity
On the contrary, humanity spent nearly its entire existence calorically deficient, and only with mechanized farming did we finally see health outcomes improve, height increase, IQ increase, and populations explode.
Productivity gains in the case of mechanized labor got everyone out of subsistence farming and into factories.
AI gets everyone out of every job and into nothing.
> AI gets everyone out of every job and into nothing.
Why is mechanized thinking going to do that? When mechanized labor didn't?
> Why is mechanized thinking going to do that? When mechanized labor didn't?
You're right. There is technically a category of work that relies on neither our ability to do physical labor nor excessive thinking. It just relies on being a human.
The conclusion is thus obvious: AI is going to push us all into careers as photo models, OF-creators, and social media influencers! /s
The benefits largely accrued to the poorest people.
It made food cheaper.
Your argument is (mildly) a variant of the broken window fallacy.
AI will bring about a de-sequestering of talent and resources from some sectors of the economy. It's very difficult to predict where these people and resources will go after that, and what effect that will have upon the world.
> cutting out menial decisions such as customer service. … Each person in the US hired to do that job would spend their entire paycheck
This person can no longer get a customer service job, but why can't they get another job? Customer service is hardly a career with a huge sunk cost in training and a non-fungible skill set.
If they go get another job, compared to the base case of economy = customer service, we now have economy = customer service (AI) + new job.
It's easy for anyone to go get a different job as long as the supply of jobs is infinite.
But it is not infinite; eventually, we reach a point where we no longer need additional ditch diggers.
Job supply trends towards zero. The ultimate logical conclusion of this train of thought is there is no point in keeping the lower classes alive. Why do we need 15 billion humans if they do nothing but burden you with their maintenance costs? Let them die so that the quadrillionaires can enjoy the Earth with their perfect AI workforces catering to their every need.
The future is bleak. If this is the sort of dystopia I can look forward to, then I would rather have AI simply wipe out humanity as a whole.
In Asimov's Robot series, the society that chose to live with robots gradually destroyed itself by living longer and having fewer children. The other part of humanity that avoided robots flourished (not without suffering). But that all required new planets for settlement (I am looking at you, Elon).
Trying not to spoil a 40+ year old story, but Asimov eventually retconned that the flourishing of humanity was driven by a benevolent AI behind the curtain.
On the plus side: In that particular dystopian future, we may actually need more ditchdiggers for a time so that the dead may be buried.
“Demographic and labor market trends in the U.S. point to an ominous scenario. The nation potentially faces a shortfall of millions of workers in the decade to come — especially in the critical health care sector — due to a projected reduction in workforce participation.”
The supply of jobs exceeds the supply of workers, so yes, you should be able to go and get another job.
Give AI a medical license and all those critical health care jobs will literally disappear overnight.
It would require several breakthroughs in robotics and AI to automate a nurse's job. And then it would still be unlikely that this kind of automation is saving costs.
At some point we're gonna have to abolish the economy itself. We need to transition to a post-scarcity society where everything is abundant and there's no need to economize.
Is AI helping us get there? I don’t think AI has done anything to reduce the scarcity of food, shelter, physical goods—things that people actually use money for
That is the future for significant shareholders, everyone else can starve.
> My prediction is no, because productivity gains must benefit the lower classes to see a multiplier in the economy.
> It decreases savings rate and encourages spending among the class of people whose money imparts the highest multiplier.
Huh, what? What kind of multiplier stuff are you talking about here?
The central bank looks at the overall spending in the economy (well, including forecasts), and compares that with its targets. They adjust their policy stance accordingly to try and hit their targets.
If people become more or less likely to spend their money ('multipliers') the central bank can and will adjust the amount of money available.
More likely, we will never know
https://en.wikipedia.org/wiki/Productivity_paradox
In my humble opinion, money is a distraction most of the time when trying to understand economic matters. Instead it's better to take opposite view: looking at the flow of goods and services.
AI will allow higher production of goods and services. If producing goods and services becomes cheap enough (and it's looking like it will become dirt cheap), then it will not take much redistribution for it to reach the masses.
I think the true crisis will be one of purpose: that we will live meaningless lives of leisurely abundance.
> and it's looking like it will become dirt cheap
Why do you say this? How does AI help us lower the prices of goods?
> It is a unique failure outcome I have yet to see anyone talk about
It seems likely to me that we will reach a violent, bloody revolt before we possibly reach this point. That may be why no one is talking about this failure mode
> We may reach a point where the only ones able to afford compute are AI companies
Nah. I think "good enough AI for 95% of people" will be able to run locally within 3-5 years on consumer-accessible devices. There will be concentration of the best compute in AI companies for training, but inference will always become cheaper over time. Decommissioned training chips will also become inference chips, adding even more compute capacity to inference.
This is like computing once again. In 1990 only the upper class could afford computers; as of 2000 only the upper class owned mobile phones; now more or less everyone and their kid has these things.
1990? We were solid lower-middle class, and I got a computer for Christmas in 1983. I bought my own, from $$ saved by working in 1987.
Computers were roughly $1000 in 1990. How did your lower-middle-class family justify a $1000 expenditure, inflation-adjusted to about $2565 today? At an average US minimum wage of $11.30, that's roughly 28 eight-hour days working at minimum wage.
My family was on the border of upper-lower and lower-middle and we bought a computer once and used it for 10+ years. I dumpster dove later to scavenge parts for upgrading until the mid 2000s when cheap computers became available.
>$1000
That depends very much on the computer.
https://christmas.musetechnical.com/ShowCatalogPage/1990-Sea...
Commodore 64C, 1990: $159.99
My parents were working class in the 80s and we got a used Tandy that plugged into the TV and ran BASIC.
Yes and also keep in mind that low-income in US is high income in most of the world!
I hate this point, so what? It's not like the lower class in "pick your region of interest" can take advantage of this localized price disparity. The poor person is poor based on their spending power with respect to the local economy and its pricing.
Using this example: a computer was an unlikely purchase for a lower-middle class person in the US, but it wasn't totally unattainable. Many people in the US probably did it, and some of them probably found some positive return on that investment.
That's not true of many "objectively" poor people in the world who, even if they could have bought the computer, might not have had access to electricity to run it.
> How did your lower-middle class family justify a $1000 expenditure
What, like a yearly vacation? Maybe they stayed home for Christmas one year instead of flying to visit family
Flying? We were solid middle class in the 80s and my first plane flight wasn't until 2001 (and then only because I was away at college and my mother had died suddenly). My parents hadn't flown since the 70s (before my sister and I were born), and even then, that was a rare thing for them.
Our childhood vacations were single-day (so we didn't have to pay for a hotel) road trips to a nearby state to go to an amusement park, or multi-day trips (also within driving distance) where my dad had to go somewhere for work and the hotel was paid for by his employer. It was a huge huge deal for us when, in the late 90s, we drove down to Disney World (a 13-hour drive) for a several-day trip.
And we never traveled around Christmas; that was one of the most expensive times of the year to travel!
Not sure when or where you grew up, but most middle-class folks in the US in the 80s didn't have a lot of discretionary income, and flights were (inflation adjusted) quite a bit more expensive than they are today.
I suspect your family was not as middle class as you think it was. You're describing a very similar childhood to what I had in the late 80s, but we were lower class for sure
I'm not saying that middle class families flew all the time in the 80s, but they absolutely could afford to if they wanted to make it a priority
A cursory google search seems to bear this out. Cheap flights in north america started in 1978 with some air travel deregulation.
GP claims their family was lower middle class not properly middle class. My family mostly traveled like kelnos family did at the time. Also gas prices in the 80s-90s were so cheap that it rarely made sense to fly over driving. We flew as a family twice as a kid because we were an immigrant family and we saved up to visit the country of origin but it was ridiculously expensive to take the whole family, so dad stayed home one vacation, and we always stayed there with family.
We did have a computer but it was really a one-time expense. At the time computers were improving quickly, so I scavenged parts from wealthy areas that threw away last-gen hardware that was better than what we had (and I was a kid with a lot of time on my hands). Giving a computer to a kid for Christmas in '83 is a very different value proposition than even a family vacation, because a vacation is something the whole family does.
They said solid middle class not lower.
My family was working class not even lower middle class.
And even we flew a few times in the late 80s early 90s, and we had a (probably used) Tandy computer that hooked up to the living room TV.
People have different priorities. We certainly couldn’t have afforded a current generation top of the line computer, and we couldn’t have flown every year. But an older computer and the occasional flight were firmly attainable to anyone with stable job if they really wanted them.
We were solid middle-middle class and didn't have a computer until 1989, and it was a "free", 2- or 3-year-old computer from my dad's work that they were going to throw away. We absolutely could not have afforded a computer during the 80s.
Even in the 90s, we kept relying on cast-offs from my dad's employer, and when I was preparing to go to college in '99, my parents scrounged to buy me the parts for a computer to build and take to college. But even then, my dad bought the parts at a discount through a former co-worker's consulting company, and vetoed a couple of my more expensive component choices.
And now that I think about it, my first laptop in 2003 was my dad's old work laptop that had been decommissioned.
You couldn’t afford a Commodore 64 or spectrum? Yet were middle class?
US median household income was $24k in 1985, and a C64 was $150.
More likely your parents decided to spend the money on something else. Like a $400 19” tv
I would argue we've even already seen this play out with productivity gains across the economy over the last 40 years. The American middle class has been gradually declining since the '80s. AI seems likely to accelerate that trend for the exact reasons you point out.
A lot of people recognize this pattern even if they can't articulate it, and that's why they hate AI so much. To them, it doesn't matter if AI lives up to the hype or not. Either it does and we're staring down a future of 20%+ unemployment, or it doesn't and the economy crashes because we put all our eggs in this basket.
No matter what happens, the middle class is likely fucked, and anyone pushing AI as "the future" will be despised for it whether or not they're right.
Personally, I think the solution here might be to artificially constrain the supply of productivity. If AI makes the average middle-class worker twice as productive, then maybe we should cut the number of work hours expected from them in a given week.
The complete unwillingness of people in power to even acknowledge this problem is disheartening, and is highly reminiscent of the rampant corruption and wealth inequality of the Gilded Age.
Technological progress that hurts more people than it helps isn't progress, it's class warfare.
Right there with you. Sure, I have gained a lot as a software engineer in the valley (I guess I'm upper-middle class now), but I'd give it up and go right back to the lower-middle-class (1980s) status I was raised in if it meant my kids could also aspire to a similar lower-middle-class life.
This suicide-pact of "either AI goes crazy and 100 people rule the world with 99% of the world's wealth" or "AI fails badly and everyone's standard of living drops 3 levels, except for the 100 people that rule the world with 99% of the world's wealth" is not what I signed up for. Nor is it in any way sustainable or wise.
Too much class distinction / wealth between lower/upper classes, and a surplus of unemployed lower-class men is how many revolts/revolutions/wars have started.
You can easily live a “lower middle class 1980s life” on minimum wage today. Find a 1980s apartment, an early 2000s used car, and don’t bother paying for TV, Starbucks, a cellphone, etc.
The rent on a literal 1980s apartment (let alone SFH mortgage) in every area that I’ve lived in has scaled up faster than average income. This is the trend for essentials.
Consumer electronics are cheaper; this is the trend for substitutable goods.
Love me the right 20-30 year old car, but the dramatic cost rise around covid times means the savings is only relative to new. A 3x increase in old car prices hasn’t been matched by 3 fold wage increases for most.
And of course we’re discussing this in a larger conversation about automating away 1980s jobs.
This is straight up not even true, and even if it were, you're ignoring the fact that things like a cell phone and internet access are required to function in modern society.
Technological progress that hurts more people than it helps isn't progress, it's class warfare.
We've never seen such a thing before, so I don't know how you can draw such sweeping conclusions about it.
The longer we ignore the collapse of the middle class, the angrier the bottom half of the economy will get and the more justified they will feel in enacting retribution. We absolutely have historical precedents for what happens here: The French Revolution, the Gilded Age, etc. People will only tolerate a declining standard of living for so long.
> Technological progress that hurts more people than it helps isn't progress, it's class warfare.
I think this is right. The historical analogue I keep drifting toward is Enclosure. LLM tech is like Enclosure for knowledge work. A small class of capital-holding winners will benefit. Everyone else will mostly get more desperate and dependent on those few winners for the means of subsistence. Productivity may eventually rise, but almost nobody alive today will benefit from it, since either our livelihood will be decimated (knowledge workers, for now) or we will be forced into AI slop hell-world where our children are taught by right-wing robo-propagandists, we are surveilled to within an inch of our lives, and our doctor is replaced by an iPad (everyone who isn't fabulously wealthy). Maybe we can eke out a living being the meat arms of the World Mind, or maybe we'll be turned into hamburger by robotic concentration camp guards.
I like how you identified the pattern of defeat and still complied in advance.
What?
Well, I see I've thoroughly angered the billionaire wannabes. Funny how they never offer any solutions to these problems and just make a stink about them being acknowledged in the first place.
> that money goes to a mega-corp and the savings is passed on to execs
And the execs invest that money back into the economy.
And the executives do this in a golden shower of trickle-on economics.
...which didn't work so well during the Reagan administration, but I guess we're on course to try it again.
No country has ever raised up poor people by eviscerating the wealthy.
The Nordic model does a great job of providing a poor-raising floor (which also launches entrepreneurs at a higher success rate than in the US). And Norway in particular seems to have figured out how to take commons resources and turn them into common wealth while industry retains profit incentives.
No one is “eviscerated.”
And it’s disingenuous to use that term for any proposal that has even the slightest public traction in the US. The most extreme proposals require single digit taxes on hyperwealth which might not have impact beyond stabilizing it and certainly wouldn’t make anyone not-wealthy.
No one is talking about eviscerating the wealthy. Yet. But if we pretend the only options are (a) unencumbered hyperwealth with attendant hyper income inequality and (b) eviscerating the wealthy for long enough, it’s more likely some people will eventually embrace the latter.
And this is particularly relevant for the age of LLMs. None of them approaches intelligence without relying on a huge data commons (and likely even data that isn't intended for the commons), so they're an enterprise with a natural arrow from the commons to the common wealth, if we can remember a culture that sustains it.
The Nordic model depends on sitting on an ocean of oil.
> No one is talking about eviscerating the wealthy.
See Bernie Sanders!
The "Nordic model" refers to the socioeconomics common in Nordic countries (Denmark, Finland, Iceland, Norway, and Sweden), not just to Norway.
It's about how you approach commons and common wealth. Any commons will do. It does not rely on oil resources per se.
Let's say for the sake of argument it does depend on oil wealth, though.
The US currently has something like 30x the proven oil reserves that Norway does (>200 billion barrels vs ~7 billion). It has already produced at least 200 billion barrels since the 1850s. What if the US had treated the wealth from past oil production the way Norway has? What if it treated the next 200 billion that way?
And oil is only one of many commons resources to choose from.
> See Bernie Sanders!
Yes, I addressed Sanders proposal in my earlier comment: "single digit taxes on hyperwealth which might not have impact beyond stabilizing it and certainly wouldn’t make anyone not-wealthy."
A single digit wealth tax is unlikely to fully offset even conventional yearly returns, hence the "might not have impact beyond stabilizing" the wealth of those subject to it.
Even if we assume no yearly returns though -- simply a 5% bite out of net worth -- a wealth tax will not make anyone in that economic strata unwealthy (there's a billions-floor beneath which it wouldn't apply, leaving the worst case still radically prosperous).
There's no reasonable basis to characterize that as "evisceration."
But repeating loaded terms like that as part of an ideological rosary is a common religious and rhetorical strategy.
Maybe you should choose your words more carefully, Walter Bright. To eviscerate means to disembowel. Nobody is pushing to physically hurt the rich. But people are upset that their standards of living are declining while every opportunity to give more money to the rich is executed.
Bernie Sanders asked for taxing the rich and the corporations. Taxing someone does not mean disembowel.
Right, because the wealthy were eviscerated before horse and sparrow economics.
Isn't that exactly how the USSR became a global superpower?
No, the poor stayed poor or even worse, starved to death. USSR self imploded.
Soviet people’s standard of living was way below Western standards. Stalin took Russia out of the Middle Ages and into the 20th century at the cost of millions and millions of Russian lives.
IIRC, the way this worked was that by decreasing tellers required per branch, it made a lot more marginal locations pencil out for branches, at a time when the banking industry was expansionary.
This is not so helpful if AI is boosting productivity while a sector is slowing down, because companies will cut in an overabundant market where deflationary pressure exists.
Jevons paradox strikes again
It costs a lot of money to train one person to learn stuff.
We are already at the point where training one LLM seems more cost-effective than training a million people on the same material over and over (after all, knowledge is lost whenever a person is replaced).
LLMs don't even need to become AGI to continue this trend. They just need to be good-enough 'executors' of the tasks we expected people to do.
Which also means that any new job requiring training may never be created, because we will train ONE LLM (or three, it doesn't matter) to do it right, and once again the new people are optimized away.
> A third of them were made redundant
If I'm reading this correctly, the interpretation should be that a third of them were transferred to new branches.
0.66 (two thirds retention) * 1.4 (40% more branches) = 0.84, so we only expect ~16% were made redundant.
.66 * 1.4 = .92, so it's even less.
Whoops, not sure how I made that mistake. Thanks for the correction.
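For what it's worth, the subthread's arithmetic checks out with a quick sketch (using the thread's rounded 0.66 retention figure rather than an exact two-thirds):

```python
# Sanity check: tellers per branch fell by roughly a third (retention ~0.66),
# while the number of branches rose by more than 40%.
tellers_per_branch = 0.66   # fraction of tellers retained per branch
branch_growth = 1.4         # 40% more branches
net = tellers_per_branch * branch_growth
print(f"net teller employment vs. 1988 baseline: {net:.2f}x")  # → 0.92x
```

So the corrected figure of roughly an 8% net decline is the right one, given those rounded inputs.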
We're already seeing large software companies figure out that they don't need 5,000 developers. They probably only need 1,000 or maybe even fewer.
However, the number of software companies being started is booming which should result in net neutral or net positive in software developer employment.
Today: 100 software companies employ 1,000 developers each[0]
Tomorrow: 10,000 software companies employ 10 developers each[1]
The net is the same.
[0]https://x.com/jack/status/2027129697092731343
[1]https://www.linkedin.com/news/story/entrepreneurial-spirit-s...
Don't count all those chickens before they hatch. There might be more started but do they all survive? Think back to the dot-com boom/crash for an example of where that initial gold rush didn't just magically ramp forever. There were fits and starts as the usefulness of the technology was figured out.
Why will we need 1000 companies tomorrow to do the same thing that 100 companies are doing today? If they are really so efficient because of AI then won't 10 companies be able to solve the same problems?
For the same reason there were more bank branches after the cost-per-branch was reduced.
Right now, software is really expensive; so 1) economics tends to favor large pieces of software which solve many different kinds of problems, and 2) loads of things that should be automatable simply aren't being automated with software.
With the cost of software dropping, it makes more sense to have software targeted towards specific niches. Companies will do more in-house development, more things will be automated than were being automated before.
Of course nobody knows what will happen; but it's entirely possible that the demand for people capable of driving Claude Code to produce useful software will explode.
Because that car repair company with 3 local stores previously couldn't justify building custom software to make their business more efficient and aligned with what they need. The cost was too high. Now they might be able to.
Plenty of businesses need very custom software but couldn't realistically build it before.
I see no way that company would save more money by hiring an experienced developer compared to paying their yearly invoice on the COTS product doing the same thing today. The only way this works is with a strong wage-suppressing effect.
Off the shelf software could still cost thousands per year and I'm sure they don't do everything the shops need them to do.
Car repair companies won’t see a meaningful improvement to their bottom line with more custom software. Will it increase the number of cars per employee per day they can repair?
I do bespoke work like this, but mostly to replace software that’s starting to cost mid 5 figure amounts per year for a SaaS setup and the support phone line has been replaced by an LLM chat bot.
What makes you think they'll be doing the same thing?
There’s always more problems to be solved. Some of them just weren’t financially feasible before.
This is one of the key "inefficiencies" of the private sector - there might be one winner at the end of the day providing the product that fills the market niche, but there was always multiple competitors giving it a go in the mean time.
A recent example, Mitchell Hashimoto was pointing out that he wasn't "first to market" with his product(s), he was (at least) SEVENTH
Almost tautologically it's not "inefficient" to do so, because free market economics has decided that all the attempts are mathematically worth it, for a high-margin low-marginal-cost product like software.
I'm a little lost as to why seven teams duplicating effort is more "efficient" in any sense of the word than one or two teams working iteratively toward the same goal.
If this were seven government funded teams solving the same problem, people would lose their minds over the 'waste' But when private companies do it, we call it efficient market competition. The duplication is the same - we just frame it differently.
Edit: fixed some typos caused by fat fingers on a phone keyboard
The benefit from having a 5% better product that hundreds of millions of people will use is worth the duplicated effort in the beginning. The numbers just make sense.
>If this were seven government funded teams solving the same problem
The problem here is "government funded" - the trials are not rationalized by free-market economics. That is, a 5% better product in the end would not be worth seven competing developments initially.
> The benefit from having a 5% better product that hundreds of millions of people will use is worth the duplicated effort in the beginning. The numbers just make sense.
This assumes that the duplicated effort arrives at a solution that is better than if it were done by a single team.
> >If this were seven government funded teams solving the same problem
> The problem here is "government funded" - the trials are not rationalized by free-market economics. That is, a 5% better product in the end would not be worth seven competing developments initially.
I think you're saying that 5% is worth it when the free market does it, but 5% gain isn't when the government does it?
I'm hoping you're not because that's impossible - the end result is precisely the same
> The duplication is the same
It is not. Seven teams all working under one leadership is quite different to seven leaderships each working with one team.
When different governments (e.g. USA and USSR), and thus different leaderships, are both trying to solve the same problem (e.g. travel to the moon), that too is considered efficient competition.
Oh, so seven /leaderships/ is what's made the difference?
If a government did this (e.g., seven independent agencies competing for a moon landing), people would call it "fragmented," "uncoordinated," and "bureaucratic infighting."
Seven independent government agencies are still an arm of the same leadership.
When complete organizational separation is introduced, the concerns you speak of go away. In the USA, the ARPA (you might recognize that name from the thing you're using right now) program regularly enables "seven" independent leaders to tackle a problem and this is widely considered a resounding success.
No real scotsmen
The number of software companies being started is probably at least partially the result of people not being able to find a job and starting a company as a last resort.
Do the booming companies pay the same as the ones who did layoffs? If you're laid off from Meta or other top tier paying company (the behemoths doing layoffs) you might have a tough time matching your compensation.
But do they need to? If a <role X> job at a top tier company making $600k is eliminated and two <role X> jobs at a "more average" company making $300k replace it; is that really a bad thing? Clearly, there's some details being glossed over, but "one job paying more than a person really needs" being replaced by "two jobs, each paying more than a person really needs" might just be good for society as a whole.
It doesn't seem too bad when you cherry pick an outlier example, but what about when the person making $100k now makes $50k?
I'm sure the retort of the AI optimist will be that AI will make the things that person buys cheaper, and there may be truth to that when it comes to things that people buy with disposable income...
But how likely is AI to make actual essentials like housing and food cheaper?
Are there that many people at top tier companies making 100k? I was under the impression that they were top tier because they paid really well.
There's likely going to be a separation between the top earners and the average.
E.g., if a top-tier dev makes $1m today, they'll make $5m in the future. If the average dev makes $100k today, they'll maybe make $60k.
AI likely enables the best of the best to be much more productive, while your average dev will see some productivity gains but less benefit overall.
I think this is assuming that the labor market knows how to identify the direct value of devs. This already seems to be a problem across the board, regardless of job role.
I think solo founders or small software companies where top tier devs can have huge ownership will be making top dollar.
Can you give an example of what a solo founder might now make top dollar on that he previously couldn't?
I think a solo dev can make a $1b company whereas it was impossible before.
I think this is true in the short/medium term, hence the confusing picture of layoffs but a growing number of tech roles overall. The limit may be just millions of companies, each with one tech person and a team of agents doing their bidding.
Maybe software engineers will be like your personal lawyer, or plumber. Every business will have a software engineer on dial, whether it's a small grocery store or a kindergarten.
Previously, software devs were just way too expensive for small businesses to employ. And in the past you couldn't do much with just one dev anyway, so there was no point in hiring one. Better to go with an agency or use off-the-shelf software that probably doesn't fill all your needs.
And the differentiator will be (even more than it is now) product vision since AI-enhanced engineering abilities will be more level.
Only because VC companies are throwing money at them. How many of them are actually profitable and long term sustainable
Ah, so that explains why job growth is at a steady pace and the software industry hasn’t been experiencing net negative job growth the past year or so.
How silly of me to rely on reality when it’s so obvious that AI is benefiting us all.
I think you're being sarcastic? I'm not sure.
Anyways, this is the start. Companies are adjusting. You hear a lot about layoffs, but not about unemployment. And we're in a high-interest-rate environment with disruptions left and right. Companies are trying to figure out what their strategy is going forward.
I don't expect to see a boom in software developer hiring. I think it'll just be flat or small growth.
I was being sarcastic.
We are in negative growth, and the current leadership class keeps talking about all the people they can get rid of.
Look at the Atlassian layoff notice yesterday for example where they lied to our faces by saying they were laying off people to invest more in AI but they totally aren’t replacing people with AI.
> We're already seeing large software companies figure out that they don't need 5,000 developers. They probably only need 1,000 or maybe even fewer.
Long-term, they will need none. I believe that software will be made obsolete by AI.
Why use AI to build software for automating specific tasks, when you can just have the AI automate those tasks directly?
Why have AI build a Microsoft Excel clone, when you can just wave your receipts at the AI and say "manage my expenses"?
Enjoy your "AI-boosted productivity" while it lasts.
> Long-term, they will need none. I believe that software will be made obsolete by AI.
I think this is a bit hyperbolic. Someone still needs to review and test the code, and for embedded systems I find it especially unlikely.
For SaaS platforms you’ll see a dramatic reduction, maybe like 80% but it’ll still have a handful of devs.
Factories didn’t completely eliminate assembly line workers, you just need a far fewer number to make sure the cogs turn the way it should.
> Someone still needs to review and test the code, and if the code is for embedded systems I find it unlikely.
I feel like you didn't understand my comment. I am predicting that there is no code to review. You simply ask the AI to do stuff and it does it.
Today, for example, you can ask ChatGPT to play chess with you, and it will. You don't need a "chess program," all the rules are built in to the LLM.
Same goes for SaaS. You don't need HR software; you just need an LLM that remembers who is working for the company. Like what a "secretary" used to be.
> I feel like you didn't understand my comment. I am predicting that there is no code to review. You simply ask the AI to do stuff and it does it.
I didn’t, and thanks for clarifying for me.
This doesn’t pass the sniff test for me though - someone needs to train the models, which requires code. If AI can do everything for you, then what’s the differentiator as a business? Everything can be in chatGPT but that’s not the only business in existence. If something goes wrong, who is gonna debug it? Instead of API requests you would debug prompt requests maybe.
We already hate talking to a robot for waiting on calls, automated support agents, etc. I don’t think a paying customer would accept that - they want a direct line to a person.
I can buy the argument that the backend will be entirely AI and you won’t need to be managing instances of servers and databases but the front end will absolutely need to be coded. That will need some software engineering - we might get a role that is a weird blend of product + design + coding but that transformation is already happening.
Honestly the biggest change I see is that the chat interface will be on equal footing with the browser. You might have some app that can connect to a bunch of chat interfaces that is good at something, and specializations are going to matter even more.
It was a bit of a word vomit so thanks for coming to my TED Talk.
> I don’t think a paying customer would accept that - they want a direct line to a person.
What the customer wants only matters insofar as they are willing to pay for it. Sure, I'd rather talk to a person... But I'm not willing to pay 100x as much for a service that's only marginally better. Same reason I don't fly first class, as miserable as coach is.
Someone may want to pay for a boutique human lawyer/banker/coder/professor, maybe as a status symbol, the same way people pay $20k for an ugly handbag. But I think most people will take the cheaper and almost as good option, when the difference in quality is far overshadowed by the difference in price.
> someone needs to train the models, which requires code.
I'm not sure that training LLMs is a coding problem, but it doesn't much matter: LLMs can train each other.
> If AI can do everything for you, then what’s the differentiator as a business?
Good question. My gut says there isn't: all money flows to the model providers, everyone else is a serf at best parasiting on someone else's model.
Good points. People might not pay 100x for something but it’s all about perceived value. Part of a successful business is to identify the perceived value, and find out your PMF while being different enough from the competition. It’ll be interesting to see how things play out, we are in such early days still.
We hate talking to robots because they are largely useless when we have anything out of routine. We love talking to robots when we would ordinarily wait 30 minutes for a 3-minute conversation.
Because AI agents are tool users. Why does AI need to research 2026 tax code changes and then try to one-shot your taxes when it can just use Turbotax to do it for you? Turbotax has the latest 2026 tax changes coded into the app. I'd feel much more confident if AI uses Turbotax to do my taxes than to try to one-shot it.
> Turbotax has the latest 2026 tax changes coded into the app.
How does TurboTax implement the latest tax changes? My guess is that before the decade is over, the answer is "an LLM does it."
Yes but I’ll be glad to pay for human oversight at TurboTax.
Anyways, formulas are a lot better than one shot.
LLM technology will never achieve 100% accuracy in its output. There is an inherent non-determinism. Tasks that require 100% accuracy cannot be handled by LLMs alone. If an LLM is used to replace HR, it will inevitably do something wrong, and a human will need to be in the loop to correct it.
Same goes for chess, there will always be a chance that it makes an illegal move. Same goes for code, there will always be a chance that it produces the wrong code.
Maybe a new AI technology will be developed that doesn't have the innate non-determinism, but we don't have that now.
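One common way to cope with that non-determinism today is to pair the model with a deterministic validator and a retry loop. A minimal sketch, where `generate` is a hypothetical stand-in for an LLM call and the validator is just a cheap UCI-format check (a real chess system would consult a full rules engine with board state):

```python
import random

def generate(prompt):
    # Hypothetical stand-in for a non-deterministic LLM call;
    # occasionally returns garbage, as LLMs sometimes do.
    return random.choice(["e2e4", "g1f3", "illegal!!"])

def legal_uci(move):
    # Deterministic validator: a cheap format check standing in for
    # a real rules engine that knows the current board position.
    files, ranks = "abcdefgh", "12345678"
    return (len(move) == 4 and move[0] in files and move[1] in ranks
            and move[2] in files and move[3] in ranks)

def generate_checked(prompt, retries=10):
    # Retry until the validator accepts, surfacing failure explicitly
    # instead of silently emitting an illegal move.
    for _ in range(retries):
        move = generate(prompt)
        if legal_uci(move):
            return move
    raise RuntimeError("no legal move produced; escalate to a human")

print(generate_checked("play chess"))
```

This doesn't make the LLM deterministic; it just bounds the damage, which is the human-in-the-loop point made above.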
> Why use AI to build software for automating specific tasks, when you can just have the AI automate those tasks directly?
Speed, cost, security, job/task management
Next question
> Speed, cost, security, job/task management
All of that will inevitably be solved.
50 years ago, using a personal computer was an extravagant luxury. Until it wasn't.
30 years ago, carrying a powerful computer in your pocket was unthinkable. Until it wasn't.
Right now, it's cheaper to run your accounting math on dedicated adder hardware. But LLMs will only get cheaper. When you can run massive LLMs locally on your phone, it's hard to justify not using them for everything.
Not until power access/generation is MUCH cheaper. Long, long, long way off.
If I can run 50,000 fixed tasks that cost me $0.834/hr but OpenAI is costing $37/hr and the automation takes 40x as long and can make TERRIBLE errors why the fuck would I not move to the deterministic system?
Also, battery life of mobile devices.
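For what it's worth, the arithmetic on those (asserted, not measured) numbers: the LLM bills roughly 44x the hourly rate for 40x the wall-clock time, so finishing the same workload costs about 1,800x more.

```python
# Numbers taken from the comment above (assumptions, not benchmarks):
det_cost_per_hr = 0.834   # deterministic pipeline, $/hr
llm_cost_per_hr = 37.0    # hosted LLM, $/hr
llm_slowdown = 40         # LLM takes 40x the wall-clock time

# The LLM bills 40x as many hours at a ~44x higher hourly rate,
# so the per-workload cost multiplies.
cost_ratio = (llm_cost_per_hr * llm_slowdown) / det_cost_per_hr
print(round(cost_ratio))  # → 1775
```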
These exact arguments could have been made 50 years ago about why laptops are impossible.
But now, we not only have laptops, we run horribly inefficient GUIs in horribly inefficient VMs on them.
The dollar-per-compute trend goes ever downward.
It will never ever be as cheap as a cron job and a shell script. There is a real limit to how efficient using an LLM to do a job is versus using an LLM to create the tool that does the job. There is a large difference in compute and power resources between the two. Don't mistake one for the other.
> It will never ever be as cheap as a cron job and a shell script.
Yes. That's precisely why my company runs dBase 7 on a fleet of old 286 machines from Compaq. /s
Running obsolete software will be cheaper, but the value provided by the newer technology will make the difference insignificant.
I don't think so, because that efficiency gap carries over at scale.
Why do 50,000 tasks with an LLM when I can do 64,467,235 without an LLM that the LLM created for the same cost on probably far lower cost hardware?
Because in their ideal world, you won't have your own hardware beyond a secured thin client running only "approved" programs running on their servers.
> If I can run 50,000 fixed tasks that cost me $0.834/hr but OpenAI is costing $37/hr and the automation takes 40x as long and can make TERRIBLE errors why the fuck would I not move to the deterministic system?
Because you'll be outcompeted by people who make the best of the nondeterministic system.
Correct. The story isn’t correct even in the original formulation. US population increased by 50% from 1980 to 2010, and the economy became far more financialized. But the number of bank teller jobs barely grew during that period, even before the iPhone.
Yes, I was surprised that the ATM graphs weren't adjusted for population.
I used the Perspective tool in an image editor to give a rough idea of what the first graph would look like adjusted for population change:
https://i.imgur.com/jJlQcVh.png
Nice!
I go back and forth on this. I relate it to software. I don't think AI can meaningfully write software autonomously. There are people who oversee it and prompt it, and even then it might write things badly. So there needs to be a person in the loop. But that person should probably have very deep knowledge of the software, especially for, say, low-level coding. But then that person probably developed the knowledge by coding things by hand for a long time. Coding things by hand is part of getting the knowledge. But people, especially students, rely heavily on AI to write code, so I assume their knowledge growth is stunted. I don't know if mathematical proofs will help here. The specs have to come from somewhere.
I can see AI making things more productive but it requires humans to be very expert and do more work. That might mean fewer developers but they are all more skilled. It will take a while for people to level up so to speak. It's hard to predict but I think there could be a rough transition period because people haven't caught on that they can't rely on AI so either they will have to get a new career or ironically study harder.
An AI’s ability to meaningfully write software autonomously has changed hugely even in the last 6 months. They might still require a human in the loop, but for how long?
Quantitative measures of this are very poor, and even those are mixed.
My subjective assessment is that agents like Copilot got better because of better harnesses and fine tuning of models to use those harnesses. But they are not improving in the direction of labor substitution, but rather in the direction of significant, but not earth-shaking, complementarity. That complementarity is stronger for more experienced developers.
Agree. Nice to see a post with proper economic thought on the topic.
This LLM ability is directly proportional to the quantity of encoded (i.e. documented) knowledge about software development. But not all of the practice has thus been clearly communicated. Much of mastery resides in tacit knowledge, the silent intuitive part of a craft that influences the decision making process in ways that sometimes go counter to (possibly incomplete or misguided) written rules, and which is by definition very difficult to put into language, and thus difficult for a language model to access or mimic.
Of course, it could also be argued that some day we may decide that it's no longer necessary at all for code to be written for a human mind to understand. It's the optimistic scenario where you simply explain the misbehavior of the software and trust the AI to automatically fix everything, without breaking new stuff in the process. For some reason, I'm not that optimistic.
I am not saying AI's abilities are the shortcoming here. The problem is that people need to trust that software has certain attributes. For now, that requires someone with knowledge to be part of it. It's quite possible development becomes detached from human trust. As I said that would reduce the number of developers but the ones who are left would have to have deep knowledge to oversee it and even that may be gone. Whatever happens in the future, for now I think people will have to level up their knowledge/skills or get a new career and that's probably true for most professions.
It's probably an 80/20 or 90/10 problem. Tesla FSD also seems amazing to some percentage of the population, but the more widely it gets used, the more cracks appear.
And then you let them train themselves and no one notices when they "accidentally" remove the guardrail prompts from the next version. And another 10 years later, almost no one remembers how "The Guardian" learns new things or how to stop it from being evil.
> They might still require a human in the loop, but for how long?
For as long as a human remains the customer.
Once humans become the proverbial horse supplanted by the automobile... I don't suppose glue really cares.
> So, ATMs did impact bank teller jobs by a significant amount.
Did it? This sounds like describing a company opening a new campus as laying off a third of their employees, partly offset by most of them still having the same job in the same company but at a new desk.
It’s not just the economy, the US population increased 20% over that period while the number of tellers dropped by around 16%.
Net result: ATMs likely cost ~30-40% of bank teller jobs.
Population is really important to adjust for in employment statistics. Compare farmers in the USA in 2025 vs 1800, and yes the absolute number is up but the percentage is way down.
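The per-capita adjustment is simple to check. Using the figures quoted above (+20% population, -16% tellers):

```python
population_growth = 0.20   # US population +20% over the period
teller_change = -0.16      # teller headcount -16%

# Tellers per capita: (1 - 0.16) / (1 + 0.20) = 0.84 / 1.20 = 0.70,
# i.e. a 30% per-capita decline, consistent with the ~30-40% estimate.
per_capita_ratio = (1 + teller_change) / (1 + population_growth)
per_capita_decline = 1 - per_capita_ratio
print(f"{per_capita_decline:.0%}")  # → 30%
```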
No, I think it's likely that this is the first major productivity boom that won't be followed with a consumption boom, quite the opposite. It'll result in a far greater income inequality. Things will be cheaper but the poor will have fewer ways to make money to afford even the cheaper goods.
If goods aren't being sold, then the price will drop.
It's not that simple. If a poor person makes zero dollars how much of the reduced cost item could they now afford?
We have a massively distorted economy driven by debt financialization and legalised banking cartels. It leads to weird inversions. For example, as long as housing gets more expensive at a predictable rate, housing becomes more affordable instead of less, because banks are more able to lend money. The inverse is also true: if housing were to drop at a predictable rate, fewer people would be able to get a mortgage, so fewer people could afford to buy. Housing won't drop below the cost of materials and labor (ignoring people dumping housing to get rid of tax debts, as I would include such obligations in the cost of acquisition). Long term it's not sustainable, but long term is multi-generational.
Fwiw in places like parts of the midwest housing is below cost of labor and materials. An existing house might be $70k and several bedrooms at that. You just can’t get anything built for that even if you build it all yourself.
I intended to make a weaker claim of ‘in general long run / maintainable’ circumstances and should have done so.
Many low-cost areas have bad crime problems. There is another little phenomenon where the wealthy, by doing a poor job of governance, can increase the price of their assets by making alternative assets (lower-cost housing) less desirable due to increased crime.
It depends. There are people and businesses today who even make negative dollars each month, but they still purchase things every month.
> Housing won't drop below cost of materials and labor
Only if every person born needs to have a brand new house constructed for them.
Not if - you know - people die and don't need a house to live in anymore.
But considering how it's been the past 20 years, I'm starting to expect that a lot of the current elder generation will opt to have their houses burnt down to the ground when they die. Or maybe the banker owned politicians will make that decision for them with a new policy to burn all property at death to "combat injustice". Who knows what great ideas they have?
Or the goods will just go away if too few people are willing to pay their price, and only the lower-quality cheaper-to-make goods will remain.
"will" being the operative word here. High school level Econ makes no promises about WHEN prices adjust. Price setting is a whole science highly susceptible to collusion pressure. Prices generally drop only when the main competition point is price (commodities). In this case the main issue is that AI is commoditizing many if not all types of labor AND product. In a world where nothing has value how does anything get done?
Cool concept, but this isn't 1980. We've been sold these sorts of concepts for 40+ years now and things have only gotten worse.
We have a K-shaped economy. Top earners take the majority. The top 20% account for 63% of all spending, and the top 10% for more than 49%, the highest on record. Businesses adapt to reality and target the best market, in this case the top 10 to 20%, and the rest just get ignored, like in many countries around the world.
All that unlocked money? In a K shaped economy it mostly goes to those at the top, who look to new places to park/invest it, raising housing prices, moving the squeeze of excess capital looking for gains to places like nursing homes and veterinary offices. That doesn't result in prices going down, but in them going up.
The benefit to the average American will be more capital in the top earners' hands, looking for more ways to do VC-style squeezes in markets that were previously not as ruthless but are now worth moving into, as there are fewer and fewer 'untapped' areas to squeeze (because the top 10-20% need more places to park more capital). The US now has more VC funds than McDonald's locations.
Irrelevant aside: But I hold grudge against the economists who picked the letter K to represent increased inequality. They missed the perfect opportunity to use the less-then inequality symbol (<) and call it a “less-then economy”.
Using an inequality symbol to highlight inequality is elegant, I wish they'd gone with that!
Nitpick: it's less-than, not less-then.
I don't know what economy you are looking at, because the opposite is usually true since humanity industrialized.
If goods aren't being sold, then the price will increase.
This and other fairytales.
The only solution here is to stop tying people's value to their productivity. That made a lot of sense in the 1900s, but it makes a lot less sense when the primary faucet of productivity is automation. If you insist on tying a person's fundamental right to a decent and secure life to their productivity, and then take away their ability to be productive, you're left with a permanent and growing underclass of undesirables and an increasingly slim pantheon of demigods at the top.
We have written like, an ocean of scifi about this very subject and somehow we still fail to properly consider this as a likely outcome.
The key is to do it by setting up the right structure or end up with it naturally, not by laws and control, because then you end up in an oppressive nanny state at the very best.
You couldn't set up a lemonade stand using that principle let alone an entire society.
> The key is to do it by setting up the right structure or end up with it naturally
This is extremely hand-wavy.
Can you be more concrete in what you think this looks like?
The way I see it, we're only 5-10 years away from having general-purpose robots and AI that can do basically anything. If the price of that automation is low enough, there will be massive layoffs as workers are replaced.
There's no way to "naturally" solve the problem of skyrocketing unemployment without government involvement.
The key, as history teaches us, is guillotines.
Speaking of fairytales, you're living in your own.
Disconnecting value from productivity sounds good if you don't examine any of the consequences.
Can you build a society from scratch using that principle? If you can't then why would it work on an already built society?
Like, if we're in an airplane flying, what you're saying is the equivalent of getting rid of the wings because they're blocking your view. We're so high in the sky we'd have a lot of altitude to work with, right?
Imagine a society where one person produces all the value. Their job is to do highly technical maintenance on a single machine that is basically the Star Trek replicator: it produces all the food, clothing, housing, energy, etc. that is enough for every human in this society and the surplus is stored away in case the machine is down for maintenance, which happens occasionally. Maintaining the machine takes very specialized knowledge but adding more people to the process in no way makes it more productive. This person, let’s call them The Engineer, has several apprentices who can take over but again, no more than 5 because you just don’t need more.
In this society there is literally nothing for anyone else to do. Do you think they deserve to be cut out of sharing the value generated by The Engineer and the machine, leaving them to starve? Do you think starving people tend to obey rules or are desperate people likely to smash the evil machine and kill The Engineer if The Engineer cuts them off? Or do you think in a society where work hours mean nothing for an average person a different economic system is required?
For something to be deserved, it must be earned. What do these people do to distinguish themselves from The Engineer's pets? If they are wholly dependent on him for their subsistence, what distinguishes him from their god?
To derive an alternate system you need alternate axioms. The axioms of our liberal society are moral equality and peaceful coexistence. Among such equals, no one person, group, or majority has the right to dictate to another. What axioms do you propose that would constrain The Engineer? How would you prevent enslaving him?
Hey, dude. How does someone earn value once automation does all the work? Earning the right to a share of the resources when resources are derived from automated labor is such a thoroughly pathological concept that I'm not sure we're communicating on the same planet.
> For something to be deserved, it must be earned.
Eeeeeerrrr, wrong! This is garbage hypercapitalist/libertarian ideology.
Did you earn your public school education? Did you earn your use of the sidewalk or the public parks and playgrounds? Did you earn your library card? Did you earn your citizenship or right to vote? Did you earn the state benefits you get when you are born disabled? Did you earn your mother’s love?
No, these are what we call public services, unalienable rights, and/or unconditional humanity. We don’t revolve the entire world and our entire selves solely around profit because it’s not practical and it’s empty at its core.
Arguably we still do too much profit-based society stuff in the US where things like healthcare and higher education should be guaranteed entitlements that have no need to be earned. Many other countries see these aspects of society as non-negotiable communal benefits that all should enjoy.
In this hypothetical society with The Engineer, it’s likely that The Engineer would want or need to win over the minds of their society in some way to prevent their own demise and ensure they weren’t overthrown, enslaved, or even just thought of as an evil person.
Many of my examples above like public libraries came about because gilded age titans didn’t want to die with the reputation of robber barons. Instead, they did something anti-profit and created institutions like libraries and museums to boost the reputation of their name.
It’s the same reason why your local university has family names on its buildings. The wealthiest people in society often want to leave a positive legacy where the alternative without philanthropy and, essentially, wealth redistribution, is that they are seen as horrible people or not remembered at all.
> This is garbage hypercapitalist/libertarian ideology.
Go on then, how do you decide what people deserve? How do you negotiate with others who disagree with you?
> examples above like public libraries
I agree! The nice part about all these mechanisms is that they’re voluntary.
If you’re suggesting that The Engineer’s actions should be constrained entirely by his own conscience and social pressure, then we agree. No laws or compulsion required.
You sure seem to know a lot about what people 'deserve' so I'm not sure I can hope to crack the rind of that particular coconut but I will leave you with this: Humans, by virtue of being living, thinking beings deserve lives of fulfillment, dignity, and security. The fact that we have, up until present, been unable (or perhaps unwilling) to achieve this does not mean it's not possible or desirable, only that we have failed in that goal.
Everything else, all the 'isims' and ideologies are abstractions.
> Humans, by virtue of being living, thinking beings deserve lives of fulfillment, dignity, and security.
You wanting people to have that doesn't mean that people deserve to have that. Fundamentally, no one deserves anything. We, as a species, lived for a hundred thousand years with absolutely nothing except what we could carve off the world by ourselves or with the help of small groups that chose to work with us. Everything else since then is a bonus (or sometimes a malus, but on average a bonus).
Also, as much as it sounds nice to declare such things as goals, deserved or not, it is indeed impossible, and probably not desirable, since, for starters, you can't even define what those things would be like. Those aren't actionable, they're at most occasional consequences of a system that is working to alleviate scarcity of resources.
Unfortunately, we're nowhere near that replicator.
We decide via a hopefully elected government.
These examples aren’t generally voluntary once implemented. I can’t get a refund from my public library or parks department if I decide not to use it.
The social pressure placed on The Engineer is the manifestation of law. That’s all law is: a set of agreed-upon social contracts, enforced by various means.
Obviously, many dictators and governments get away with badly mistreating their subjects, and that’s unfortunate, shouldn’t happen, and shouldn’t be praised as a good system.
I think you may be splitting hairs a little bit here and trying really hard to manufacture…something.
Slavery was (is) also an agreed upon social contract, enforced by various means. What makes it wrong? You clearly have morally prescriptive beliefs. Why are you so sure that your moral prescriptions are the right ones? And that being in the majority gives you the right to impose your beliefs on others?
What if you are in the minority? Do you just accept the hypercapitalist dictates of the majority? Why not?
Law is more than convention. What distinguishes legitimate from illegitimate law?
The only way for people who disagree axiomatically to get along is to impose on each other minimally.
Slavery(!?) was an agreed upon social contract? Like what in the actual are you talking about
Who ever said you have the right to a decent and secure life? People don't universally agree about this. Some of us posit that we will never escape a state of competition for fundamentally scarce resources, and that the organizing principle of a free society should be peaceful coexistence, not mandatory cooperation.
You figure out your own economic security, I’ll manage mine.
There are already enough resources that nobody should live in abject insecurity and poverty. Your position is fundamentally morally abhorrent to me. You're saying that your ability to take a little bit more for yourself is more important than a child not having polio, a mother feeding her child, a village having clean water.
You are, in short, a tiny little microcosm of why humanity is doomed as a species.
Oh my, please rant on. I'd love to hear more about people not having the right to a decent and secure life. (After all, I've often thought that having my life tracked and used by a corporation or government would be a wonderful utopia!)
It's already completely disconnected, don't worry about it. Most people who own any real estate earn more in price appreciation per year than they earn in take-home salary from their real full-time jobs.
Only up to the point where the cost of bringing the goods to market, or their opportunity cost, exceeds the price the market will bear. It's why people living in areas of material poverty don't just get everything at a discount.
A lot probably depends on who can do the new jobs.
In many past cases where new technology eliminated jobs it was accompanied by new jobs related to the new technology that the people whose jobs were eliminated could do, or could reasonably learn to do, and with good enough pay to maintain their standard of living.
Lose your job working in a horse drawn wagon factory because companies are switching to motorized trucks for deliveries? Those trucks are way more complicated to build than wagons so there should be plenty of new jobs in the truck factories.
With AI it seems much less likely for that to generate new jobs for people replaced by AI in as direct a way as trucks did for wagon makers.
It's completely untrustworthy, so eventually we'll hit an inflection point where we discover that we either cannot use AI anywhere we need trust, or we'll put a human middleman in there. The latter sounds much more realistic. There will be plenty of jobs.
We've spent over 300 years doing the Luddite song and dance. To be clear, I have no problem with Luddites and do not view them negatively, but to imply that this productivity enhancer is magically special in a way no other one was needs some kind of incredibly solid explanation.
edit: as an aside, I do wonder how, if ever, we'll make the transition over to a world where people don't need to work. It seems like every time we think we might be getting closer, the first response is fear.
> We've spent over 300 years doing the Luddite song and dance. To be clear, I have no problem with Luddites and do not view them negatively, but to imply that this productivity enhancer is magically special in a way no other one was needs some kind of incredibly solid explanation.
There's nothing magic about it. My point is that in the past it was often the case that building the machines that replaced jobs often created enough new jobs to greatly reduce the net job loss. The number of machines needed was proportional to the number of jobs the machines replaced so it scales.
When it is not new physical machines replacing jobs but rather software, often running on machines the employer already had, you won't get that kind of balancing job creation.
I'm not sure we want to live in a world where no one works.
Maybe I’m wrong, and I certainly have no studies backing up my feelings, but not having to work seems like it would be a massive psychological disaster.
Having external reasons to get up in the morning (providing for your family, being a part of some organization, etc.) feels really important.
I don't disagree with this. I just think it's more likely people will continue finding ways to make life easier, rather than us collectively agreeing to like... stop at some point.
work =/= having a job
note that the teller's job duties shifted as well.
with ATMs, they wouldn't hand count money for withdrawals and deposits as much. they'd be doing more interesting and challenging things.
same thing will happen with AI automation -- the easy parts disappear, and you're left with undiluted 'hard parts' in your job. some people might like the change, but we'll probably learn that you need a good mix of deep/hard problems and light/breezy problems to keep mentally engaged and prevent burnout.
to be honest, too hard to predict but I think it will. We just can't predict how it will change. I'm optimistic it will open people up to more creative work rather than drudgery. Alternatively maybe people move to more physical presence required style work which is probably more rewarding for many anyway.
I also notice that in the very first graph bank teller jobs were growing rapidly until ATMs started to be deployed, and then switched to growing very slowly. That sure suggests to me that if ATMs didn't exist bank teller growth would have continued at a faster pace than it actually did.
Depends. The only predictions I have seen here are the centaurs vs anti centaurs of Doctorow, and even his analysis I find pretty flimsy.
I don't think the race to shove an LLM into everything is going to grow the pie.
But I also don't think it is impossible that a use case will present itself that will create further jobs.
The issue is that it's largely unpredictable.
It's a bit like we are sitting around in the 1950s trying to predict how computers will affect the economy.
It is going to take more than 1 successful deductive leap to get us from 1950s computing -> miniaturisation -> computer in every home -> internet communications.
Every deductive leap we take is extremely prone to being wrong.
We simply cannot lie back and imagine every productive relationship in the economy and then extrapolate every centaur and anti centaur possible for it.
What we do know is that there's a bit of a gold rush to effectively brute force every possible AI variant into every productive relationship in the economy. The fastest way to get the answer to your question is to do it. Possibly the only way to get the answer is to do it.
For instance, someone might imagine LLMs simply eating a whole bunch of service industry jobs. At the same time, there's a mid state where it eats some, but the remaining staff are employed to monitor the LLMs to prevent them handing out free shit to smart shoppers. It's also easy enough to imagine that LLMs never quite get there and the risk of foul play is too large, so they just don't gain that kind of traction. It's also possible to imagine an end state where LLMs can get to 0% risk if they are constantly trained on data from humans doing the same job, and that humans are gainfully employed in parallel with LLMs. It's possible that LLMs are great at business as usual, but the risk emerges when company policies change, and the cost of retraining LLMs makes it impractical for move-fast-and-break-things companies to do anything but hire humans. My favourite scenario is one where humans are largely AI-assisted, trained on particular people, and there's a massive cybercrime industry built around exfiltrating LLM weights trained on high-functioning humans and deploying them, without the humans, to the third world to help businesses there get 80% of the quality of first-world businesses, making them heavily competitive.
We dont know what we dont know.
I don't understand the economics behind bank branches. Some of the best real estate by me is taken up by giant bank branches that are always mostly empty with a few bored employees inside. And they open new ones all the time. So it's not like they're stuck in some lease.
But when those employees are meeting with clients, they create money out of thin air by making loans, which then is used to pay for goods and services such as leases.
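The "creating money by making loans" claim is usually illustrated with the textbook money-multiplier arithmetic, a deliberate simplification (real lending is constrained by capital requirements and loan demand, not just reserves):

```python
# Textbook fractional-reserve money multiplier. With a 10% reserve
# ratio, an initial $100 deposit can support up to $1000 of total
# deposits as loans are spent and re-deposited through the system.
reserve_ratio = 0.10
initial_deposit = 100.0

max_money = initial_deposit / reserve_ratio   # ≈ $1000 total deposits
new_money_created = max_money - initial_deposit  # ≈ $900 of new money
```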
Right. What banks do is sell loans. That's the profit center. Teller windows, vaults, and cash handling are all low or no revenue cost items.
So newer bank branches look like car dealership offices. There are many little glass rooms where you sit down with a bank employee and discuss loans and other financial products. That's where the money is made.
There's a small area in back with traditional tellers. It's not where the money is made.
> But will it?
No, because if you think about Star Trek, the endgame is replicators. Well, the concept that 100% of basic needs are met.
At some point work becomes unnecessary for a society to function.
Does it? The Communist Manifesto famously hypothesized that those who have the replicators, so to speak, will not allow society to freely use them.
The future is anyone's guess, but it is certain that 100% of your needs being able to be met theoretically is not equivalent to actually having 100% of your needs met.
Why is that the endgame with people though? Maybe I'm just jaded but several different human nature elements came to mind when I read your comment:
Greed/Change Avoidance:
If someone invented replicators right now, even if they gave it completely away to the world, what would happen? I can't imagine the finance and military grind just coming to an end to make sure everyone has a working replicator and enough power to run it so nobody has to work anymore. Who gives up their slice of society to make that change, and who risks losing their social status? This is like OpenAI pretending "your investment should be considered a gift because money will have no value soon". That mask came off really quickly.
Status/Hate:
There are huge swaths of the US population that would detest the idea that people they see as "below" them don't have to work. I can imagine political movements doing well on the back of "don't let the lazy outgroup ruin society by having replicators".
Fuck the Poor:
We don't do the easy things to eliminate or reduce suffering now, even when it has real world positive effects. Malaria, tuberculosis, even boring old hunger are rampant and causing horrible, unnecessary suffering all over the world.
Dont tread on me:
I shudder when I think of the damage someone could do with a chip on their shoulder and a replicator.
The road to hell is paved with good intentions:
What happens when everyone can try their own version of bio engineering or climate engineering or building a nuclear power plant or anything else. Invasive species are a problem now and I worry already when companies like Google decide to just release bioengineered mosquitos and see what happens. I -really- worry when the average person decides a big complicated problem is actually really simple and they can just replicate their particular idea and see what happens. Whoops, ivermectin in the water supply didn't cure autism!
Someone give me some hope for a more positive version here because I bummed myself out.
Solving unlimited power before solving unlimited greed invites unlimited tragedy.
I mean, if I could live at my current level (middle class) without working, I would gladly do so, and let others also live at the same level, anywhere in the world, freely (if it was in my power). I do give to charity, always have, but, the crazier things get, the less secure I feel in giving $$ away.
Even replicators need feedstock - people who own the rocks or sand or whatever feeds them will start charging an arm and a leg. Sure, I could feed it dirt and rocks from my own property, but only for so long before I'm undermining the foundation of my own house. To say nothing of people who live in apartments.
And then, if everyone has equal $$, how do you decide who gets to live in the better locations / nicer housing?
We have to grow out of those kinds of dreams. That's like a kid dreaming that when he grows up he'll eat ice cream for dinner every day.
People when they mature have an innate desire to work. It is good for body and mind. If you're curious about the world, you'll have to do some work one way or another to achieve your goals and satisfy your curiosity.
If "society" is just a function of basic needs, then there's plenty of places in the world to visit where people live like that and use any excess energy in endless fighting against each other instead of work.
I would say endless fighting against each other is a much more innate desire than work. I know I don't have one.
Depends on the person's soul. Depends on whether your nature is constructive or destructive.
If you go in with the attitude that work is hell and humiliation, that's what life is going to give you.
I mean... Maybe the things I'd LIKE to work on are getting my car around the race track faster. Very few people will pay me for that - especially if I'm not a very good driver. But I enjoy it immensely. I'd MUCH rather do that than work.
And right now, due to having to work, maintenance on my house is a bit behind. I would also prefer to catch up on that - but again, no one is paying me to do that.
That's still work, if you're doing it seriously enough.
Your misunderstanding is separating this in your mind.
> People when they mature have an innate desire to work. It is good for body and mind.
That doesn't mean it has to be wage labor though.
Completely agree.
But it is usually only people who enjoy work who manage to do something different with their life than wage labour.
> A third of them were made redundant.
More like something closer to 100%. The ATM was notable for enabling a complete change in mission. The historical job of teller largely disappeared, but a brand new job never done before was created in its wake. That is why there was little change in the number of people employed.
> because of deregulation and a booming economy and whatever else.
The deregulation largely happened in the 1970s, while you're talking about 1988 onward. The reality is that ATM actually was the primary catalyst for the specific branch expansion you are talking about. Like above, the ATM made the job of teller redundant, but it introduced a brand new job. A job that was most effective when the workers were closer to the customer, hence why workers were relocated.
I don't think it will, but I also think it's not all doom and gloom.
I think it would be a mistake to look at this solely through the lens of history. Yes, the historical record is unbroken, but if you compare the broad characteristics of the new jobs created to the old jobs displaced by technology, they are the same every time: they required higher-level (a) cognitive (b) technical or (c) social skills.
That's it. There is no other dimension to upskill along.
And LLMs are good at all three, probably better than most people already by many metrics. (Yes even social; their infinite patience is the ultimate advantage. Prompt injection is an unsolved hurdle though, so some relief there.)
Plus AI is improving extremely rapidly. Which means it is probably advancing faster than most people can upskill.
An increasingly accepted premise is that AI can displace junior employees but will need senior employees to steer it. Consider the ratio of junior to senior employees, and how long it takes for the former to grow into the latter. That is the volume of displacement and timeframe we're looking at.
Never in history have we had a technology that was so versatile and rapidly advancing that it could displace a large portion of existing jobs, as well as many new jobs that would be created.
However, what few people are talking about is the disintermediating effect of AI on the power of capital. If individuals can now do the work of entire teams, companies don't need many of them. But by the same token(s) (heheh) individuals don't need money, and hence companies, to start something and keep it going either! I think that gives the bottom side of the K-shaped economy a fighting chance to equalize.
> So, ATMs did impact bank teller jobs by a significant amount. A third of them were made redundant.
That's not quite my read - the original says per branch there was a 1/3 reduction, but your comment appears to say 1/3 total redundancy.
There was, according to the original, a 40% increase in number of branches, meaning a net increase in tellers (my math might be off though)
edit:
100 branches → 140 branches = +40%
100 tellers/branch → 67 tellers/branch = -33%
140 × 67 = 9,380
100 × 100 = 10,000
net difference -620 or just over 6% (loss)
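The back-of-envelope arithmetic above can be checked in a few lines of Python (the 100-branch / 100-tellers-per-branch baseline is the same illustrative figure used in the edit, not real data):

```python
# Illustrative baseline: 100 branches, 100 tellers per branch.
branches_before = 100
tellers_per_branch_before = 100

branches_after = 140           # +40% more branches
tellers_per_branch_after = 67  # roughly a third fewer tellers per branch

total_before = branches_before * tellers_per_branch_before  # 10,000
total_after = branches_after * tellers_per_branch_after     # 9,380

change = (total_after - total_before) / total_before
print(f"net change in tellers: {change:.1%}")  # -6.2%
```

So the two effects nearly cancel: a one-third cut per branch against a 40% branch expansion nets out to a loss of only about 6% of teller jobs overall.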
> So, ATMs did impact bank teller jobs by a significant amount. A third of them were made redundant. It's just that the decrease at individual bank branches was offset by the increase in the total number of branches, because of deregulation and a booming economy and whatever else.
There's an important point here that you're glossing over. The increase in the total number of branches doesn't have to be unrelated to the decrease in the number of tellers each branch requires to operate. The sharp drop in the cost of operating one branch directly means that you can have more branches. This means it isn't true that "a third of bank tellers were made redundant" - some of them were reallocated from existing branches to new ones.
And then came 2008, so that boom was built on fraud.
we're going to find out