Sam said yesterday that ChatGPT handles ~700M weekly users. Meanwhile, I can't even run a single GPT-4-class model locally without insane VRAM or painfully slow speeds.
Sure, they have huge GPU clusters, but there must be more going on - model optimizations, sharding, custom hardware, clever load balancing, etc.
What engineering tricks make this possible at such massive scale while keeping latency low?
Curious to hear insights from people who've built large-scale ML systems.
Same explanation but with less mysticism:
Inference is (mostly) stateless. So unlike training where you need to have memory coherence over something like 100k machines and somehow avoid the certainty of machine failure, you just need to route mostly small amounts of data to a bunch of big machines.
I don't know what the specs of their inference machines are, but where I worked the machines research used were all 8-GPU monsters. So long as your model fit in (combined) VRAM, your job was a good'un.
To scale, the secret ingredient was industrial amounts of cash. Sure, we had DGXs (fun fact: Nvidia sent literal gold-plated DGX machines), but they weren't dense, and were very expensive.
Most large companies have robust RPC and orchestration, which means the hard part isn't routing the message, it's making the model fit in the boxes you have. (That's not my area of expertise, though.)
> Inference is (mostly) stateless. ... you just need to route mostly small amounts of data to a bunch of big machines.
I think this might just be the key insight. The key advantage of doing batched inference at a huge scale is that once you maximize parallelism and sharding, your model parameters and the memory bandwidth associated with them are essentially free (since at any given moment they're being shared among a huge number of requests!); you "only" pay for the request-specific raw compute and the memory storage+bandwidth for the activations. And the proprietary models are now huge, highly quantized, extreme-MoE models where the former factor (model size) is huge and the latter (request-specific compute) has been correspondingly minimized - and where it hasn't, you're definitely paying "pro" pricing for it. I think this goes a long way towards explaining how inference at scale can work better than it does locally.
(There are "tricks" you could do locally to try and compete with this setup, such as storing model parameters on disk and accessing them via mmap, at least when doing token gen on CPU. But of course you're paying for that with increased latency, which you may or may not be okay with in that context.)
> The key advantage of doing batched inference at a huge scale is that once you maximize parallelism and sharding, your model parameters and the memory bandwidth associated with them are essentially free (since at any given moment they're being shared among a huge number of requests!)
Kind of unrelated, but this comment made me wonder when we will start seeing side channel attacks that force queries to leak into each other.
I asked a colleague about this recently and he explained it away with a wave of the hand saying, "different streams of tokens and their context are on different ranks of the matrices". And I kinda believed him, based on the diagrams I see on Welch Labs YouTube channel.
On the other hand, I've learned that when I ask questions about security to experts in a field (who are not experts in security) I almost always get convincing hand waves, and they are almost always proven to be completely wrong.
Sigh.
mmap is not free. It just moves bandwidth around.
Using mmap for model parameters allows you to run vastly larger models for any given amount of system RAM. It's especially worthwhile when you're running MoE models, where parameters for unused "experts" can just be evicted from RAM, leaving room for more relevant data. But of course this applies more generally, e.g. to single model layers.
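A minimal sketch of what that looks like, assuming a flat fp16 weight file with made-up offsets (llama.cpp does something similar with GGUF files, I believe):

```python
# Minimal sketch of mmap-backed weights; the file name, dtype and offsets are
# made up. Pages are faulted in on first access, and the OS can evict cold ones
# (e.g. experts that haven't been routed to recently) under memory pressure.
import numpy as np

weights = np.memmap("model-f16.bin", dtype=np.float16, mode="r")

def expert_params(expert_id, base_offset, expert_size):
    """Return a view over one expert's parameters; only the pages actually
    touched get pulled into RAM."""
    start = base_offset + expert_id * expert_size
    return weights[start:start + expert_size]
```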
> Inference is (mostly) stateless
Quite the opposite. Context caching requires state (the KV cache) kept in or near VRAM. Streaming requires state. Constrained decoding (known as Structured Outputs) also requires state.
> Quite the opposite.
Unless something has dramatically changed, the model is stateless. The context cache needs to be injected before the new prompt, but from what I understand (and please do correct me if I'm wrong) the context cache isn't that big, on the order of a few tens of kilobytes. Plus the cache saves seconds of GPU time, so having an extra 100ms of latency is nothing compared to a cache miss. So a broad cache is much, much better than a narrow local cache.
But! Even if it's larger, your bottleneck isn't the network, it's waiting on the GPUs to be free [1]. So whilst having the cache really close (i.e. in the same rack, or the same machine) will give the best performance, it will limit your scale (because the cache is only effective for a small number of users).
[1] 100 MB of data shared over the same datacentre network every 2-3 seconds per node isn't that much, especially if you have a partitioned network (i.e. like AWS, where you have a block network and a "network" network).
The KV cache for dense models is on the order of 50% of the parameters. For sparse MoE models it can be significantly smaller, I believe, but I don't think it is measured in kilobytes.
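Whether it rivals the weights really depends on context length and batch size. A quick back-of-envelope with made-up (but plausible-ish) model dimensions:

```python
# Back-of-envelope KV cache size; the model dimensions are hypothetical.
def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, dtype_bytes=2):
    return 2 * layers * kv_heads * head_dim * seq_len * dtype_bytes  # 2 = keys + values

# e.g. a 70B-ish dense model with grouped-query attention at 32k context:
per_request = kv_cache_bytes(layers=80, kv_heads=8, head_dim=128, seq_len=32_768)
print(f"{per_request / 2**30:.0f} GiB per request")           # ~10 GiB
print(f"{64 * per_request / 2**30:.0f} GiB for 64 requests")  # ~640 GiB
```

So per request it's far from tens of kilobytes, and a big batch of long contexts can easily exceed the weights themselves.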
> So I simultaneously can tell you that its smart people really thinking about every facet of the problem, and I can't tell you much more than that.
"we do 1970s mainframe style timesharing"
there, that was easy
For real. Say it takes 1 machine 5 seconds to reply, and that a machine can only possibly form 1 reply at a time (which I doubt, but for argument).
If the requests were regularly spaced, and they certainly won't be, but for the sake of argument, then 1 machine could serve about 17,000 requests per day, or 120,000 per week. At that rate, you'd need about 5,800 machines to serve 700M requests. That's a lot to me, but not to someone who owns a data center.
Yes, those 700M users will issue more than 1 query per week and they won’t be evenly spaced. However, I’d bet most of those queries will take well under 1 second to answer, and I’d also bet each machine can handle more than one at a time.
It’s a large problem, to be sure, but that seems tractable.
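The same napkin math in a few lines, with every assumption made explicit (and invented):

```python
# Back-of-envelope serving capacity; all assumptions are deliberately pessimistic.
weekly_users = 700e6
queries_per_user_per_week = 1   # the implicit assumption above; surely higher in reality
seconds_per_reply = 5           # one reply ties up a machine for 5 s
concurrent_per_machine = 1      # no batching at all

replies_per_machine_per_week = 7 * 24 * 3600 // seconds_per_reply * concurrent_per_machine
machines = weekly_users * queries_per_user_per_week / replies_per_machine_per_week
print(f"~{machines:,.0f} machines")   # ~5,800; ~58,000 at 10 queries per user per week
```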
Yes. And batched inference is a thing, where intelligent grouping/bin packing and routing of requests happens. I expect a good amount of "secret sauce" is at this layer.
Here's an entry-level link I found quickly on Google, OP: https://medium.com/@wearegap/a-brief-introduction-to-optimiz...
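For a flavour of what that layer does, a toy sketch (the knob names are invented). Real serving stacks, e.g. vLLM or TensorRT-LLM, do this continuously, admitting and retiring requests at every decode step rather than waiting for fixed batches:

```python
# Toy request batcher: wait briefly to collect requests, then serve them in one pass.
import queue, time

def batching_loop(requests, run_batch, max_batch=32, max_wait_s=0.01):
    while True:
        batch = [requests.get()]                      # block for the first request
        deadline = time.monotonic() + max_wait_s
        while len(batch) < max_batch:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break
            try:
                batch.append(requests.get(timeout=remaining))
            except queue.Empty:
                break
        run_batch(batch)                              # one forward pass serves them all
```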
But that’s not accurate. There are all sorts of tricks around KV cache where different users will have the same first X bytes because they share system prompts, caching entire inputs / outputs when the context and user data is identical, and more.
Not sure if you were just joking or really believe that, but for other peoples’ sake, it’s wildly wrong.
Really? So the system recognises someone asked the same question and serves the same answer? And who on earth shares the exact same context?
I mean, I get the idea, but it sounds so incredibly rare that it would mean absolutely nothing optimisation-wise.
Yes. It is not incredibly rare, it's incredibly common. A huge percentage of queries to retail LLMs are things like "hello" and "what can you do", with static system prompts that make the total context identical.
It's worth maybe a 3% reduction in GPU usage. So call it a half billion dollars a year or so, for a medium to large service.
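A toy sketch of that kind of caching (all names invented): key on a hash of the token prefix, so requests sharing a system prompt, or repeating an identical query, reuse previously computed state instead of recomputing it.

```python
# Toy prefix cache keyed on token prefixes; all names here are invented.
import hashlib

class PrefixCache:
    def __init__(self):
        self._store = {}   # prefix digest -> precomputed KV state (or a full reply)

    @staticmethod
    def _key(tokens):
        return hashlib.sha256(repr(tokens).encode()).hexdigest()

    def lookup(self, tokens):
        """Find the longest cached prefix of `tokens`; those tokens need no prefill."""
        for end in range(len(tokens), 0, -1):
            hit = self._store.get(self._key(tokens[:end]))
            if hit is not None:
                return end, hit
        return 0, None

    def insert(self, tokens, state):
        self._store[self._key(tokens)] = state
```

Production systems do this at block granularity (vLLM's automatic prefix caching, for example) rather than hashing every possible prefix length, but the effect is the same: shared system prompts and repeated queries skip most of the prefill.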
Even if that were the case you wouldn't be wrong. Adding caching and deduplication (and clever routing and sharding, and ...) on top of timesharing doesn't somehow make it not timesharing anymore. The core observation about the raw numbers still applies.
I'm pretty sure that's not right.
They're definitely running cluster knoppix.
:-)
Makes perfect sense, completely understand now!
I don't think it's either useful or particularly accurate to characterize modern disagg racks of inference gear, well-understood RDMA and other low-overhead networking techniques, aggressive MLA and related cache optimizations that are in the literature, and all the other stuff that goes into a system like this as being some kind of mystical thing attended to by a priesthood of people from a different tier of hacker.
This stuff is well understood in public, and where a big name has something highly custom going on? Often as not it's a liability around attachment to some legacy thing. You run this stuff at scale by having the correct institutions and processes in place that it takes to run any big non-trivial system: that's everything from procurement and SRE training to the RTL on the new TPU, and all of the stuff is interesting, but if anyone was 10x out in front of everyone else? You'd be able to tell.
Signed, Someone Who Also Did Megascale Inference for a TOP-5 For a Decade.
Doesn't Google have TPUs that make inference of their own models much more profitable than, say, having to rent Nvidia cards?
Doesn't OpenAI depend mostly on its relationship/partnership with Microsoft to get GPUs to run inference on?
Thanks for the links, interesting book!
Yes. Google is probably gonna win the LLM game, tbh. They had a massive head start with TPUs, which are very energy-efficient compared to Nvidia cards.
The only one who can stop Google is Google.
They’ll definitely have the best model, but there is a chance they will f*up the product / integration into their products.
It would take talent for them to mess up hosting businesses who want to use their TPUs on GCP.
But then again, even there, their reputation for abandoning products, lack of customer service, and condescension toward large enterprises' "legacy tech" lets Microsoft, the king of hand-holding big enterprise, and even AWS run roughshod over them.
When I was at AWS ProServe, we didn’t even bother coming up with talking points when competing with GCP except to point out how they abandon services. Was it partially FUD? Probably. But it worked.
>It would take talent for them to mess up hosting businesses who want to use their TPUs on GCP.
there are few groups as talented at losing a head start as google.
Google employees collectively have a lot of talent.
A truly astonishing amount of talent applied to… hosting emails very well, and losing the search battle against SEO spammers.
Well, Search had no chance when the sites also make money from Google ads. Google fucked their own Search by creating incentives for themselves around bounce rate.
> It would take talent for them to mess up hosting businesses who want to use their TPUs on GCP.
> But then again, even there, their reputation for abandoning products
What are the chances of abandoning TPU-related projects where the company literally invested billions in infrastructure? Zero.
Enterprise sales and support takes a lot of people skills, hand holding, showing respect for the current state, being willing to deal with and navigate the internal politics of the customer, etc.
All things that Google is remarkably bad at.
I don't know what scale of "billions" you're talking about, but Intel blew 1–2 billion on Larrabee. Even worse: Intel blew 5+ billion on mobile pre-iPhone. I remember when that team was shown the door; that's when we had to evaluate the early RGX GPUs as a backstop to try to win Apple's business. The RGXs were turds.
Penny-wise pound-foolish.
Bit of an aside but Larrabee didn't fail. Intel inexplicably abandoned the consumer GPU market but the same tech was successfully sold to enterprise customers in the form of Xeon Phi. Several of the largest supercomputing clusters have used them.
https://tomforsyth1000.github.io/blog.wiki.html#%5B%5BWhy%20...
Intel also wasted untold billions trying to compete with Qualcomm building cellular chips, with lackluster results, and then sold the division to Apple, which has spent billions more just to end up with the lackluster C1 in the SE.
There is plenty of time left to fumble the ball.
And they already did many times.
Google will win the LLM game if the LLM game is about compute, which is the common wisdom and maybe true, but not foreordained by God. There's an argument that if compute were the dominant term, Google would never have been anything but leading by a lot.
Personally, right now I see one clear leader and one group going 0-99 like a five-sigma cosmic ray: Anthropic and the PRC. But this is because I believe/know that all the benchmarks are gamed as hell; it's like asking if a movie star had cosmetic surgery. On quality, Opus 4 is 15x the cost and sold out / backordered. Qwen 3 is arguably in next place.
In both of those cases, extreme quality expert labeling at scale (assisted by the tool) seems to be the secret sauce.
Which is how it would play out if history is any guide: when compute as a scaling lever starts to flatten, you expert-label like it's 1987 and claim it's compute and algorithms until the government wises up and stops treating your success personally as a national security priority. It's the easiest trillion Xi Jinping ever made: pretending to think LLMs are AGI too, fast-following for pennies on the dollar, and propping up a stock market bubble to go with the fentanyl crisis? 9-D chess. It's what I would do about AI if I were China.
Time will tell.
I believe Google might win the LLM game simply because they have the infrastructure to make it profitable - via ads.
All the LLM vendors are going to have to cope with the fact that they're lighting money on fire, and Google has the paying customers (advertisers) and, with the user-specific context they get from their LLM products, one of the juiciest and most targetable ad audiences of all time.
Everyone seems to forget about MuZero, which was arguably more important than the transformer architecture.
Yeah, honestly. They could just try selling solutions and SLAs combining their TPU hardware with on-prem SOTA models and practically dominate enterprise. From what I understand, that's GCP's game plan too for most regulated enterprise clients.
Google's bread and butter is advertising, so they have a huge interest in keeping things in-house. Data is more valuable to them than money from hardware sales.
Even then, I think their primary use case is going to be good consumer-grade AI on phones. I dunno why the Gemma QAT models fly so low on the radar, but you can basically get full-scale Llama 3-like performance from a single 3090 now, at home.
https://www.cnbc.com/2025/04/09/google-will-let-companies-ru...
Google has already started the process of letting companies self-host Gemini, even on NVidia Blackwell GPUs.
Although imho, they really should bundle it with their TPUs as a turnkey solution for those clients who haven't invested in large scale infra like DCs yet.
It's the same format as other software: you release the actual software for free but offer managed services that work with that software way better and easier.
Yeah, but those are on Google's managed cloud, and not on-prem. But that recent announcement was specifically for Google Distributed Cloud, which is huge.
My point was a bit more specific, though. To elaborate, I know of a number of publicly traded companies (USD $200M+ market cap) globally which have identified use cases for on-prem AI and want to implement them actively but cannot, because they lack the know-how to work with on-prem, and hiring talent to implement that is just extremely expensive. Google should simply provide it as a turnkey bundle and milk them for it.
My guess is that either Google wants a high level of physical control over their TPUs, or they have one sort of deal or another with Nvidia and don't want to step on their toes.
And also, Google's track record with hardware.
It's my understanding that Google makes the bulk of its ad money from search ads - sure, they harvest a ton of data, but it isn't as valuable to them as you'd think. I suspect they know that could change, so they're hoovering up as much as they can to hedge their bets. Meta, on the other hand, is all about targeted ads.
Right, so keeping things in-house and seeing what people are asking Gemini would probably be better for them?
Gemma's terms of use?
Renting hardware like that would be such a cleansing old-school revenue stream for Google... just imagine...
Hasn’t the Inferentia chip been around long enough to make the same argument? AWS and Google probably have the same order of magnitude of their own custom chips
Inferentia has a generally worse stack but yes
But they’re ASICs so any big architecture changes will be painful for them right?
TPUs are accelerators that accelerate the common operations found in neural nets. A big part is simply a massive number of matrix FMA units to process enormous matrix operations, which comprise the bulk of a forward pass through a model. Caching enhancements and massively growing memory were necessary to facilitate transformers, but on the hardware side not a huge amount has changed, and the fundamentals from years ago still power the latest models. The hardware is just getting faster, with more memory and more parallel processing units, and later gained more data types to enable hardware-supported quantization.
So it isn't like Google designed a TPU for a specific model or architecture. They're pretty general purpose in a narrow field (oxymoron, but you get the point).
The set of operations Google designed into a TPU is very similar to what Nvidia did, and it's about as broadly capable. But Google owns the IP, doesn't pay the premium, and gets to design for their own specific needs.
There are plenty of matrix multiplies in the backward pass too. Obviously this is less useful when serving but it's useful for training.
I'd think no. They have the hardware and software experience, likely have next and next-next plans in place already. The big hurdle is money, which G has a bunch of.
I'm a research person building models, so I can't answer your questions well (save for one part).
That is, as a research person using our GPUs and TPUs, I see first-hand how choices from the high-level Python code, through JAX, down to the TPU architecture all work together to make training and inference efficient. You can see a bit of that in the gif on the front page of the book. https://jax-ml.github.io/scaling-book/
I also see how sometimes bad choices by me can make things inefficient. Luckily for me if my code/models are running slow I can ping colleagues who are able to debug at both a depth and speed that is quite incredible.
And because we're on HN, I want to preemptively call out my positive bias for Google! It's a privilege to be able to see all this technology first-hand, work with great people, and do my best to ship this at scale across the globe.
> Another great resource to look at is the unsloth guides.
And folks at LMSys: https://lmsys.org/blog/
This caught my attention: "But today even “small” models run so close to hardware limits".
Sounds analogous to the 60's and 70's i.e "even small programs run so close to hardware limits". If optimization and efficiency is dead in software engineering, it's certainly alive and well in LLM development.
Why does the unsloth guide for gemma 3n say:
> llama.cpp an other inference engines auto add a <bos> - DO NOT add TWO <bos> tokens! You should ignore the <bos> when prompting the model!
That makes me want to try exactly that. Weird.
Nothing smart about making something that is not useful for humans.
No, you just over complicate things.
If people at Google are so smart, why can't google.com get a 100% Lighthouse score?
I have met a lot of people at Google; they have some really good engineers and some mediocre ones. But most importantly, they are just normal engineers dealing with normal office politics.
I don't like how the grandparent mystifies this. This problem is just normal engineering. Any good engineer could learn how to do it.
Because most smart people are not generalists. My first boss was really smart and managed to found a university institute in computer science. The 3 other professors he hired were, ahem, strange choices. We 28-year-old assistants could only shake our heads. After fighting with his own hires for a couple of years, the founder left in frustration to found another institution.
One of my colleagues was only 25, really smart in his field, and became a professor less than 10 years later. But he was incredibly naive about everyday chores. Buying groceries or filing taxes regularly resulted in major screw-ups.
I have met those supersmart specialists but in my experience there are also a lot of smart people who are more generalists.
The real answer is likely internal company politics and priorities. Google certainly has people with the technical skills to solve it but do they care and if they care can they allocate those skilled people to the task?
My observation is that, in general, smart generalists are smarter than smart specialists. I work at Google, and it's just that these generalist folks are extremely fast learners. They can cover the breadth and depth of an arbitrary topic in a matter of 15 minutes, just enough to solve the problem at hand.
It's quite intimidating how fast they can break down difficult concepts into first principles. I've witnessed this first-hand and it's beyond intimidating. Makes you wonder what you're doing at this company... That being said, the caliber of folks I'm talking about is quite rare, like the top 10% of the top 1% of teams at Google.
That is my experience too. It sometimes seems the supersmart generalists are people whose strongest skill is learning.
Pro-tip they're just not. A lot of tech nerds really like to think they're a genius with all the answers ("why don't they just do XX"), but some eventually learn that the world is not so black and white.
The Dunning-Kruger effect also applies to smart people. You don't stop when you are estimating your ability correctly. As you learn more, you gain more awareness of your ignorance and continue being conservative with your self estimates.
A lot of really smart people working on problems that don't even really need to be solved is an interesting aspect of market allocation.
Can you explain what you mean about 'not needing to be solved'? There are versions of that kind of critique that would seem, at least on the surface, to better apply to finance or flash trading.
I ask because scaling a system that a substantial chunk of the population finds incredibly useful, including for the more efficient production of public goods (scientific research, for example), does seem like a problem that a) needs to be solved from a business point of view, and b) should be solved from a civic-minded point of view.
I think the problem I see with this type of response is that it doesn't take into context the waste of resources involved. If the 700M users per week is legitimate then my question to you is: how many of those invocations are worth the cost of resources that are spent, in the name of things that are truly productive?
And if AI were truly the holy grail it's being sold as, then there wouldn't be 700M users per week wasting all of these resources as heavily as we are, because generative AI would have already solved for something better. It really does seem like these platforms aren't, and won't be, anywhere near as useful as they're continuously claimed to be.
Just like Tesla FSD, we keep hearing about a "breakaway" model and the broken record of AGI. Instead of getting anything exceptionally better we seem to be getting models tuned for benchmarks and only marginal improvements.
I really try to limit what I'm using an LLM for these days. And not simply because of the resource pigs they are, but because it's also often a time sink. I spent an hour today testing out GPT-5 and asking it about a specific problem I was solving for using only 2 well documented technologies. After that hour it had hallucinated about a half dozen assumptions that were completely incorrect. One so obvious that I couldn't understand how it had gotten it so wrong. This particular technology, by default, consumes raw SSE. But GPT-5, even after telling it that it was wrong, continued to give me examples that were in a lot of ways worse and kept resorting to telling me to validate my server responses were JSON formatted in a particularly odd way.
Instead of continuing to waste my time correcting the model I just went back to reading the docs and GitHub issues to figure out the problem I was solving for. And that led me down a dark chain of thought: so what happens when the "teaching" mode rethinks history, or math fundamentals?
I'm sure a lot of people think ChatGPT is incredibly useful. And a lot of people are bought into not wanting to miss the boat, especially those who don't have any clue how it works and what it takes to execute any given prompt. I actually think LLMs have a trajectory that will be similar to social media's. The curve is different, and hopefully we haven't yet seen the most useful aspects of it come to fruition. But I do think that if OpenAI is serving 700M users per week then, once again, we are the product. Because if AI could actually displace workers en masse today, you wouldn't have access to it for $20/month. And they wouldn't offer it to you at 50% off for the next 3 months when you go to hit the cancel button. In fact, if it could do most of the things executives are claiming, then you wouldn't have access to it at all. But, again, the users are the product - in very much the same way they were with social media.
Finally, I'd surmise that of those 700M weekly users, less than 10% of the sessions are being used for anything productive that you've mentioned, and I'd place a high wager that the 10% is wildly conservative. I could be wrong, but again - we'd know about that if it were the actual truth.
> If the 700M users per week is legitimate then my question to you is: how many of those invocations are worth the cost of resources that are spent, in the name of things that are truly productive?
Is everything you spend resources on truly productive?
Who determines whether something is worth it? Is price/willingness of both parties to transact not an important factor?
I don't think ChatGPT can do most things I do. But it does eliminate drudgery.
I don't believe everything in my world is as efficient as it could be. But I genuinely think about the costs involved [0]. When doing automations that are perfectly handled by deterministic systems why would I put the outcomes of those in the hands of a non-deterministic one? And at that cost differential?
We know a few things: LLMs are not efficient, LLMs are consuming more water than traditional compute, the providers know but haven't shared any tangible metrics, and the build process also involves an exceptional amount of time, wattage, and water.
For me it's: if you have access to a supercomputer do you use it to tell you a joke or work on a life saving medicine?
We didn't have these tools 5 years ago. 5 years ago you dealt with said "drudgery". On the other hand you then say it can't do "most things I do". It seems as though the lines of fatalism and paradox are in full force for a lot of the arguments around AI.
I think the real kicker for me this week (and it changes week-over-week, which is at least entertaining) is when Paul Graham told his Twitter feed [1] that a "hotshot" programmer is writing 10k LOC that are not "bug-filled crap" in 12 hours. That's 14 LOC per minute, compared to industry norms of 50-150 LOC per 8-hour day. Apparently, this "hotshot" is not "naive", though, implying that it's most definitely legit.
[0] https://www.sciencenews.org/article/ai-energy-carbon-emissio...
[1] https://x.com/paulg/status/1953289830982664236
> When doing automations that are perfectly handled by deterministic systems why would I put the outcomes of those in the hands of a non-deterministic one?
The stuff I'm punting isn't stuff I can automate. It's stuff like, "build me a quick command line tool to model passes from this set of possible orbits" or "convert this bulleted list to a course articulation in the format preferred by the University of California" or "Tell me the 5 worst sentences in this draft and give me proposed fixes."
Human assistants that I would punt this stuff to also consume a lot of wattage and power. ;)
> We didn't have these tools 5 years ago. 5 years ago you dealt with said "drudgery". On the other hand you then say it can't do "most things I do".
I'm not sure why you think this is paradoxical.
I probably eliminate 20-30% of tasks at this point with AI. Honestly, it probably does these tasks better than I would (not better than I could, but you can't give maximum effort on everything). As a result, I get 30-40% more done, and a bigger proportion of it is higher value work.
And, AI sometimes helps me with stuff that I -can't- do, like making a good illustration of something. It doesn't surpass top humans at this stuff, but it surpasses me and probably even where I can get to with reasonable effort.
It is absolutely impossible that human assistants being given those tasks would use even remotely within the same order of magnitude the power that LLM’s use.
I am not an anti-LLM’er here but having models that are this power hungry and this generalisable makes no sense economically in the long term. Why would the model that you use to build a command tool have to be able to produce poetry? You’re paying a premium for seldom used flexibility.
Either the power drain will have to come down, prices at the consumer margin significantly up or the whole thing comes crashing down like a house of cards.
> It is absolutely impossible that human assistants being given those tasks would use even remotely within the same order of magnitude the power that LLM’s use.
A human eats 2000 kilocalories of food per day.
Thus, sitting around for an hour to do a task takes about 350 kJ of food energy. Depending on what people eat, it takes 350 kJ to 7000 kJ of fossil fuel energy input to get that much food energy. In the West, we eat a lot of meat, so expect the high end of this range.
The low end, 350 kJ, is enough to answer 100-200 ChatGPT requests. It's generous, too, because humans also have an amortized share of sleep and non-working time, other energy inputs/uses to keep them alive, eat fancier food, use energy for recreation, drive to work, etc.
Shoot, just lighting their part of the room they sit in is probably 90kJ.
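Spelled out, with the per-request figure as an explicit assumption (public estimates span roughly 0.3-3 Wh per query):

```python
# Checking the comparison above; the per-request energy is an assumption, not a measurement.
kcal_per_day = 2000
kj_per_hour = kcal_per_day * 4.184 / 24     # ~349 kJ of food energy per hour
wh_per_hour = kj_per_hour / 3.6             # ~97 Wh

wh_per_request = 0.5                        # assumed; public estimates span ~0.3-3 Wh
print(f"~{wh_per_hour / wh_per_request:.0f} requests per hour of human 'idle' energy")
```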
> I am not an anti-LLM’er here but having models that are this power hungry and this generalisable makes no sense economically in the long term. Why would the model that you use to build a command tool have to be able to produce poetry? You’re paying a premium for seldom used flexibility.
Modern Mixture-of-Experts (MoE) models don't activate the parameters/do the math related to poetry, but just light up a portion of the model that the router expects to be most useful.
Of course, we've found that broader training for LLMs increases their usefulness even on loosely related tasks.
> Either the power drain will have to come down, prices at the consumer margin significantly up
I think we all expect some mixture of these: LLM usefulness goes up, LLM cost goes up, LLM efficiency goes up.
Reading your two comments in conjunction - I find your take reasonable, so I apologise for jumping the gun and going knee first in my previous comment. It was early where I was, but should be no excuse.
I feel like if you're going to go down the route of the energy consumption needed to sustain the entire human organism, you have to do that on the other side as well - as the actual activation cost of human neurons and articulating fingers to operate a keyboard won't be in that range - but you went for the low ball so I'm not going to argue that, as you didn't argue some of the other stuff that sustains humans.
But I will argue with the wider implication of your comment that a like-for-like comparison is easy - it's not. Staying with the energy cost of neuron activation would probably be simpler to calculate, and there you'd arrive at a smaller ChatGPT ratio: more like 10-20, as opposed to 100-200. I will concede that economies of scale mean there's an energy efficiency in sustaining a ChatGPT workforce compared to a human workforce, if we really want to go full dystopian, but also that there's an outsized energy inefficiency in needing the industry and the materials to construct a ChatGPT workforce large enough to sustain those economies of scale, compared to humans, which we kind of have and are stuck with.
There is a wider point that ChatGPT is less autonomous than an assistant, as no matter the tenure with it, you'll not give it the level of autonomy that a human assistant would have as it would self correct to a level where you'd be comfortable with that. So you need a human at the wheel, which will spend some of that human brain power and finger articulation, so you have to add that to the scale of the ChatGPT workflow energy cost.
Having said all that - you make a good point with MoE - but the router activation is inefficient; and the experts are still outsized to the processing required to do the task at hand - but what I argue is that this will get better with further distillation, specialisation and better routing however only for economically viable task pathways. I think we agree on this, reading between the lines.
I would argue though (but this is an assumption, I haven't seen data on neuron activation at task level) that for writing a command-line tool, the neurons still have to activate in a sufficiently large manner to parse a natural language input, abstract it and construct formal language output that will pass the parsers. So you would be spending a higher range of energy than for an average Chat GPT task
In the end, you seem to agree with me that the current unit economics are unsustainable, and we'll need three processes to make them sustainable: cost going up, efficiency going up, and usefulness going up. Unless usefulness goes up radically (which it won't, due to the scaling limitations of LLMs), full autonomy won't be possible, so the value of the additional labour will need to be very marginal to a human, which, given the scaling laws of GPUs, doesn't seem likely.
Meanwhile, we're telling the masses at large to get on with the programme, without considering that maybe for some classes of tasks it just won't be economically viable; which creates lock-in and might be difficult to disentangle in the future.
All because we must maintain the vibes that this technology is more powerful than it actually is. And that frustrates me, because there are plenty of pathways where it's obvious it will be viable, and instead of doubling down on those, we insist on generalisability.
> There is a wider point that ChatGPT is less autonomous than an assistant, as no matter the tenure with it, you'll not give it the level of autonomy that a human assistant would have as it would self correct to a level where you'd be comfortable with that.
IDK. I didn't give human entry level employees that much autonomy. ChatGPT runs off and does things for a minute or two consuming thousands and thousands of tokens, which is a lot like letting someone junior spin for several hours.
Indeed, the cost is so low -- better to let it "see its vision through" than to interrupt it. A lot of the reason why I'd manage junior employees closely are to A) contain costs, and B) prevent discouragement. Neither of those apply here.
(And, you know -- getting the thing back while I remember exactly what I asked and still have some context to rapidly interpret the result-- this is qualitatively different from getting back work from a junior employee hours later).
> that maybe for some classes of tasks it just won't be economically viable;
Running an LLM is expensive. But it's expensive in the sense "serving a human costs about the same as a long distance phone call in the 90's." And the vast majority of businesses did not worry about what they were expending on long distance too much.
And the cost can be expected to decrease, even though the price will go up from "free." I don't expect it will go up too high; some players will have advantages from scale and special sauce to make things more efficient, but it's looking like the barriers to entry are not that substantial.
The unit economics is fine. Inference cost has reduced several orders of magnitude over the last couple years. It's pretty cheap.
OpenAI reportedly had a loss of $5B last year. That's really small for a service with hundreds of millions of users (most of which are free and not monetized in any way). That means OpenAI could easily turn a profit with ads, however they may choose to implement them.
> so what happens when the "teaching" mode rethinks history, or math fundamentals?
The person attempting to learn either (hopefully) figures out the AI model was wrong, or sadly learns the wrong material. The level of impact is probably relative to how useful the knowledge is in one's life.
The good or bad news, depending on how you look at it, is that humans are already great at rewriting history and believing wrong facts, so I am not entirely sure an LLM can do that much worse.
Maybe ChatGPT might just kill off the ignorant, like it already has? GPT already told a user to combine bleach and vinegar, which produces chlorine gas. [1]
[1] https://futurism.com/chatgpt-bleach-vinegar
Reminds me of our president
https://www.bbc.com/news/world-us-canada-52407177.amp
They won’t be honest and explain it to you but I will. Takes like the one you’re responding to are from loathsome pessimistic anti-llm people that are so far detached from reality they can just confidently assert things that have no bearing on truth or evidence. It’s a coping mechanism and it’s basically a prolific mental illness at this point
And what does that make you? A "loathsome clueless pro-llm zealot detached from reality"? LLMs are essentially next word predictors marketed as oracles. And people use them as that. And that's killing them. Because LLMs don't actually "know", they don't "know that they don't know", and won't tell you they are inadequate when they are. And that's a problem left completely unsolved. At the core of very legitimate concerns about the proliferation of LLMs. If someone here sounds irrational and "coping", it very much appears to be you.
> so far detached from reality they can just confidently assert things that have no bearing on truth or evidence
So not unlike an LLM then?
> working on problems that don't even really need to be solved
Very, very few problems _need_ to be solved. Feeding yourself is a problem that needs to be solved in order for you to continue living. People solve problems for different reasons. If you don't think LLMs are valuable, you can just say that.
The few problems humanity has that need to be solved:
1. How to identify humanity's needs on all levels, including cosmic ones...(we're in the Space Age so we need to prepare ourselves for meeting beings from other places)
2. How to meet all of humanity's needs
Pointing this out regularly is probably necessary because the issue isn't why people are choosing what they're doing... it's that our systems actively disincentivize collectively addressing these two problems in a way that doesn't sacrifice people's wellbeing/lives... and most people don't even think about it like this.
The notion that simply pretending not to understand that I was making a value judgment about worth counts as an argument is tiring.
Well, we all thought advertising was the worst thing to come out of the tech industry, someone had to prove us wrong!
Just wait until the two combine.