Requirements:
* MacBook is not an option
* I go through phases and switch between Windows and Linux as my primary OS.
* Want to be able to mess around with some local LLMs.
* I travel frequently, so portability is somewhat important. I currently own a 13-inch, but I think a 14-inch should work too.
Notes:
* My current laptop is a six-year-old Dell XPS. It has generally served me well.
* I bought an Asus Zenbook for a family member and I have been impressed with how well it has worked out. Anyone with any recent experience with Asus laptops for development?
* I have had bad experiences with Lenovo twice, which makes me wary of Thinkpads, but willing to consider it if it makes the most sense.
* Framework looks very appealing, but I have heard mixed reviews.
OP notes they switch between Linux and Windows.
I've not tried local LLMs on Windows, but I do loads with 'em on a three-year-old Legion running Arch.
That said, whilst small local models are nice for some use-cases, I'm leaning more towards APIs these days. I like the better selection of models and the ability to use them without bringing my machine to a halt.
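One practical upside of leaning on APIs: most hosted and local servers expose the same OpenAI-compatible chat endpoint, so the client code doesn't care where the model runs. Here's a minimal sketch of building such a request body; the model name and prompt are placeholders, not anything from this thread.

```python
import json

# Hypothetical sketch: build a request body for an OpenAI-compatible
# /v1/chat/completions endpoint. Model name and prompt are placeholders.
def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> str:
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return json.dumps(body)

# The same payload works against a hosted API or a local server;
# only the base URL and API key differ.
req = build_chat_request("some-model", "Explain mmap in one sentence.")
```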
> You have two requirements that are at odds:
Not really now that we have the AMD Strix Halo: https://arstechnica.com/gadgets/2025/02/review-asus-rog-flow....
The only SKU available right now is the one above, a weird gaming tablet/laptop hybrid that doesn't seem great at either role (too heavy for a tablet, too cramped for laptop use). But the performance is definitely there: roughly laptop RTX 4060 performance, with the whole APU drawing a TDP similar to that of the GPU alone, plus 32GB of unified memory for LLMs. The chipset itself supports up to 128GB of RAM, so technically we could see even better SKUs for LLMs in the future (nothing announced yet, AFAIK).
Cool, I will keep an eye on this.
I'm going to second this, hard. You're much better off paying for somebody's $10/month GitHub Copilot/Codey/Cursor plan, spending less to get a laptop that does everything else better, and then asking again in a year or two whether local LLMs or x86 laptops have gotten better.
What I can do with a local LLM on my MacBook is not worth paying extra for an x86 laptop that would be heavier, hotter, and louder, with worse battery life (especially if you're not going to play games).
PocketAI can be helpful:
https://www.adlinktech.com/en/pocket-ai-with-nvidia-rtx-a500...
Running LLMs is more of a nice to have. If I can run something like DeepSeek-Coder-V2 even if it's a bit slow, I'll be happy.
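A rough back-of-the-envelope check on that: weight memory for a quantized model is roughly parameter count times bits per weight, divided by 8, before KV cache and activation overhead. The parameter counts below are approximate figures from public model cards (DeepSeek-Coder-V2 is reportedly a ~236B-parameter MoE, with a ~16B Lite variant), not anything stated in this thread.

```python
# Hand-wavy sizing sketch: GB of weight memory for a quantized model,
# ignoring KV cache and activation overhead.
def est_weight_gb(params_billions: float, bits_per_weight: int) -> float:
    return params_billions * bits_per_weight / 8  # weights only, in GB

# DeepSeek-Coder-V2 (reportedly ~236B total params, MoE) at 4-bit:
full = est_weight_gb(236, 4)  # ~118 GB -> far beyond a 32 GB laptop
# DeepSeek-Coder-V2-Lite (reportedly ~16B total params) at 4-bit:
lite = est_weight_gb(16, 4)   # ~8 GB -> plausible on a 32 GB machine
```

So "a bit slow" is optimistic for the full model on any laptop, but the Lite variant is in reach.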
If you have a powerful computer at home, you can also offload your AI work to it. It's still local in the sense that it's your computer, but it would require network access.
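As a sketch of that setup, assuming the home machine runs a server that speaks Ollama's HTTP API on its default port 11434 (the hostname "homebox" is a placeholder):

```python
import json
from urllib import request

OLLAMA_PORT = 11434  # Ollama's default listening port

def generate_url(host: str) -> str:
    """URL of Ollama's /api/generate endpoint on a remote host."""
    return f"http://{host}:{OLLAMA_PORT}/api/generate"

def ask(host: str, model: str, prompt: str) -> bytes:
    """Send a non-streaming generate request to the home box."""
    body = json.dumps({"model": model, "prompt": prompt,
                       "stream": False}).encode()
    req = request.Request(generate_url(host), data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:  # needs network access to the home box
        return resp.read()

# e.g. ask("homebox", "some-model", "hello"), once the server is reachable
```

The laptop then only needs enough hardware to run an editor and a network stack; the heavy lifting stays on the desktop.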
You have a third option: a USB processing unit, such as a Coral TPU.