Can someone explain the other iot devices using the same broker? I tried cross referencing the feature list, information about the user base, kickstarter origin and flutter app with some search results and I’m pretty sure that I found the company and product in question. But they don’t (publicly) produce iot devices? Sooo I’m wondering if different companies are streaming their data into a shared sink and why they would do that?
At this point, I trust LLMs to come up with something more secure than the cheapest engineering firm for hire.
"Anyone else out there vibe circuit-building?"
https://xcancel.com/beneater/status/2012988790709928305
Is there more context to this? I'm assuming Ben is experimenting and demonstrating the danger of vibe circuit designing? Mostly because I know he has a ton of experience and I'd expect him to not make this mistake (also seems like he told the AI why it was wrong)
I'm not sure, it was posted on HN a couple weeks ago with the same title as the text in his tweet. I'd guess he was experimenting and trying to show the dangers, like you suggested.
People make these mistakes too. Several times in my high school shop class kids shorted out 9V batteries trying to build circuits because they didn't understand how electronics work. At no point did our teacher stop them from doing so - on at least one occasion I unplugged one from a breadboard before it got too toasty to handle (and I was/am an electronics nublet). Similarly there was also a lot of hand-wringing about the Gemini pizza glue in a world where people do wacky stuff like cook fish in a dishwasher or defrost chicken overnight on the counter or put cooked steak on the same plate it was on when raw just a few minutes prior.
LLMs are just surfacing the fact that assessing and managing risk is an acquired, difficult-to-learn skill. Most people don't know what they don't know and fail to think about what might happen if they do something (correctly or otherwise) before they do it, let alone what they'd do if it goes wrong.
Well said, but I'd add that LLMs are also surfacing the fact that there's a swathe of people out there who will treat the machines as more trustworthy than humans by default, and don't believe they need to do any assessment or risk management in the first place.
What's your point?
The AI is being sold as an expert, not a student. These are categorically different things.
The mistake in the post is one that can be avoided by taking a single class at a community college. No PhD required, not even a B.S., not even an electricians certificate.
So I don't get your point. You're comparing a person in a learning environment to the equivalent of a person claiming to have a PhD in electrical engineering. A student letting the magic smoke escape from a basic circuit is a learnable experience (a memorable one that has high impact), especially when done in a learning environment where an expert can ensure more dangerous mistakes are less likely or non existent. But the same action from a PhD educated engineer would make you reasonably question their qualifications. Yes, humans make mistakes but if you follow the AI's instructions and light things on fire you get sued. If you follow the engineer's instructions and set things on fire then that engineer gets fired likely loses their license.
So what is your point?
No one thinks their breadboard wont catch on fire because an AI agent told them it wouldn’t. Its never been easier to learn because of these agents.
Lawyers are getting in trouble because they use AI and submit fabricated citations about fabricated cases as precedent. A bunch of charges were recently thrown out in Wisconsin because of this, and it's not the first time such behavior has made the news.
https://www.wpr.org/news/judge-sanctions-kenosha-county-da-a...
AI is indeed being understood to be an expert that replaces human judgement, and people are being hurt because of it.
In my experience people don’t use LLMs to learn but to circumvent learning.
I am sure this is true. On the flip side, as someone who is addicted to learning stuff, I've been finding LLMs to be amazing at feeding my addiction. :)
Some recent examples:
* foreign languages ("explain the difference between these two words that have the same English translation", "here's a photo of a mock German exam paper and here is my written answer - mark it & show me how I could have done better")
* domains that I'm familiar with but might not know the exact commands off the top of my head (troubleshooting some ARP weirdness across a bunch of OSX/Linux/Windows boxes on an Omada network)
* learning basic skills in a new domain ("I'm building this thing out of 4mm mild steel - how do I go about choosing the right type of threading tap?", "what's the difference between Type B and Type F RCCB?")
Many of these can be easily answered with a web search, but the ability to ask follow-up questions has been a game changer.
I'd love to hear from other addicts - are there areas where LLMs have really accelerated your learning?
Just because a calculator will only ever be used by a subset of the population to type 80085 and giggle, doesn't mean it can't also be used for complex calculations.
AI is a tool that can accelerate learning, or severely inhibit it. I do think the tooling is going to continue to make it easier and easier to get good output without knowing what you're doing, though.
Exactly. I like to say that learning feels like frustration. If I'm right, then LLM's eliminate precisely the thing that is learning.
That's a very strong claim. I don't think people expect their circuits to ignite, LLM instruction or not. But I'd expect learning from a book or dedicated website would be less likely for that to occur. (Even accounting for bad manufacturing)
You're biased because you're not considering that by definition the student is inexperienced. Unknown unknowns. Tons of people don't know very basic things (why would they?) like circuits with capacitors bring dangerous when the power is off.
Why are you defending there LLM? Would you be as nice to a person? I'd expect not because these threads tend to point out a person's idiocy. I'm not sure why we give greater leeway to the machine. I'm not sure why we forgive them as if they are a student learning but someone posting similar instructions on a blog gets (rightfully) thrashed. That blog writer is almost never claiming PhD expertise
I agree that LLMs can greatly aid in learning. But I also think they can greatly hinder learning. I'm not sure why anyone thinks it's any different than when people got access to the internet. We gave people access to all the information in the world and people "do their own research" and end up making egregious errors because they don't know how to research (naively think it's "searching for information"), what questions to ask, or how to interrogate data (and much more). Instead we've ended up with lots of conspiratorial thinking. Now a sycophantic search engine is going to fix that? I'm unconvinced. Mostly because we can observe the result.
> We gave people access to all the information in the world and people "do their own research" and end up making egregious errors because they don't know how to research (naively think it's "searching for information"), what questions to ask, or how to interrogate data (and much more).
You pin pointed a major problem with education, indeed. Personally, I think 3 crucial courses should be taught in school to mitigate that: 1) rational thinking 2) learning how to learn 3) learning how to do a research.
The result of more people getting into electronics because it’s easier now?
When reading I suggest trying to interpret what the person wrote rather than just ignore it. I'd probably start by taking the advice of your username
The difference is that LLMs pretend to be experts on all things. The high school shop kids aren’t under the impression they can build a smart toaster or whatever.
What’s wrong with dishwasher salmon?
It doesn't get hot enough to be a safe cooking method
https://youtu.be/dSwzau2_KF8?t=1108
Ha ha, I said this before when Ben's post came up earlier, but, yes I am. And so far it has been a positive experience.
The cheapest engineering firms you hire are also using LLMs.
The operator is still a factor.
Yeah, but they’ll add another layer of complexity over doing it yourself
The people doing these kickstarters are outsourcing the work because they can’t do it themselves. If they use an LLM, they don’t know what to look for or even ask for, which is how they get these problems where the production backend uses shared credentials and has no access control.
The LLM got it to “working” state, but the people operating it didn’t understand what it was doing. They just prompt until it looks like it works and then ship it.
You're still not following.
The parents are saying they'd rather vibe code themselves than trust an unproven engineering firm that does(n't) vibe code.
I’m following exactly, but the parent commenter is off on a tangent unrelated to the topic.
We’re not taking about the parent commenter, we’re talking about unskilled Kickstarter operators making decisions. Not a skilled programmer using an LLM.
> they'd rather vibe code themselves than trust an unproven engineering firm
You could cut the statement short here, and it would still be a reasonable position to take these days.
LLMs are still complex, sharp tools - despite their simple appearance and proteststions of both biggest fans and haters alike, the dominating factor for effectiveness of an LLM tool on a problem is still whether or not you're holding it wrong.
I forgot about that Jobs/Apple reference!
Paraphasing, LLMs are great (bad) tools for the right (wrong) job...
in the right hands,
at the right time,
in the right place...
I don’t know, you can get a lot of nice engineering done in a Shenzhen dark alley.
LLMs definitely write more robust code than most. They don't take shortcuts or resort to ugly hacks. They have no problem writing tedious guards against edge cases that humans brush off. They also keep comments up to date and obsess over tests.
> They don't take shortcuts or resort to ugly hacks.
That hasn't, universally, been my experience. Sometimes the code is fine. Sometimes it is functional, but organized poorly, or does things in a very unusual way that is hard to understand. And sometimes it produces code that might work sometimes but misses important edge cases and isn't robust at all, or does things in an incredibly slow way.
> They have no problem writing tedious guards against edge cases that humans brush off.
The flip side of that is that instead of coming up with a good design that doesn't have as many edge cases, it will write verbose code that handles many different cases in similar, but not quite the same ways.
> They also keep comments up to date and obsess over tests.
Sure but they will often make comments or tests that aren't actually useful, or modify tests to succeed instead of fixing the code.
One significant danger of LLMs is that the quality of the output is higly variable and unpredictable.
That's ok, if you have someone knowledgeable reviewing and correcting it. But if you blindly trust it, because it produced decent results a few times, you'll probably be sorry.
I have a hard time getting them to write small and flexible functions. Even with explicit instructions about how a specific routine should be done. (Really easy to produce in bash scripts as they seem to avoid using functions, but so do people, but most people suck at bash) IME they're fixated on the end goal and do not grasp the larger context (which is often implicit though I still find difficulty when I'm highly explicit. Which at that point it's usually faster to write myself)
It also makes me question context. Are humans not doing this because they don't think about it or because we've been training people to ignore things? How often do we hear "I just care that it works?" I've only heard that phrase from those that also love to talk about minimum viable products because... frankly, who is not concerned if it works? That's always been a disagreement about what is sufficient. Only very junior people believe in perfection. It's why we have sayings like "there's no solution more permanent than a temporary fix that works". It's the same people who believe tests are proof of correctness rather than a bound on correctness. The same people who read that last sentence and think I'm suggesting to not write tests or believe tests are useless.
I'd be concerned with the LLM operator quite a bit because of this. Subtle things are important when instructing LLMs. Subtle things in the prompts can wildly change the output
They absolutely take shortcuts and resort to ugly hacks.
My AGENTS.md is filled with specific lines to counter all of them that come up.
What? Yes they do take shortcuts and hacks. They change the tests case to make it pass. As the context gets longer it is less reliable at following earlier instructions. I literally had Claude hallucinate nonexistent APIs and then admitted “You caught me! I didn’t actually know, let me do a web search” and then after the web search it still mixes deprecated patterns and APIs against instructions.
I’m much more worried about the reliability of software produced by LLMs.
I had 5.3-Codex take two tries to satisfy a linter on Typescript type definitions.
It gave up, removed the code it had written directly accessing the correct property, and replaced it with a new function that did a BFS to walk through every single field in the API response object while applying a regex "looksLikeHttpsUrl" and hoping the first valid URL that had https:// would be the correct key to use.
On the contrary, the shift from pretraining driving most gains to RL driving most gains is pressuring these models resort to new hacks and shortcuts that are increasingly novel and disturbing!
> LLMs definitely write more robust code than most.
I’ve been using Opus 4.6 and GPT-Codex-5.3 daily and I see plenty of hacks and problems all day long.
I think this is missing the point. The code in this product might be robust in the sense that it follows documentation and does things without hacks, but the things it’s doing are a mismatch for what is needed in the situation.
It might be perfectly structured code, but it uses hardcoded shared credentials.
A skilled operator could have directed it to do the right things and implement something secure, but an unskilled operator doesn’t even know how to specify the right requirements.
Interesting and completely wrong statement, what gave you this impression?
I know right. I kept waiting for a sarcasm tag at the end
right and wrong don't exist when evaluating subjective quantifiers
The discourse around LLMs has created this notion that humans are not lazy and write perfect code. They get compared to an ideal programmer instead of real devs.
This. The hacks, shortcuts and bugs I saw in our product code after i got hired, were stuff every LLM would tell you not to do.
LLM's at best asymptotically approach a human doing the same task. They are trained on the best and the worst. Nothing they output deserves faith other than what can be proven beyond a shadow of a doubt with your own eyes and tooling. I'll say the same thing to anyone vibe coding that I'd say to programmatically illiterate. Trust this only insofar as you can prove it works, and you can stay ahead of the machine. Dabble if you want, but to use something safely enough to rely on, you need to be 10% smarter than it is.
Amen. On top of that, especially now, with good prompting you can get closer to that better than you think.
And the cheapest engineering firm won't use LLMs as well, wherever possible?
The cheapest engineering firm will turn out to be headed up by an openclaw instance.
fun fact, LLMs come in cheapest and useless and expensive but actually does what's being asked, too.
So, will they? Probably. Can you trust the kind of LLM that you would use to do a better job than the cheapest firm? Absolutely.
this.
Oh gosh anyone who thinks LLMs make firmware free haven’t seriously tried to use it for firmware engineering then.