Everyone in the comments is like, "take a look at this AI tool for Ghidra"
This is indicative of two things.
1. While I can't stand the guy, y'all need to watch Peter Thiel's talk from 10-15 years ago at Stanford about not building the same thing everyone else is building, i.e., the obvious thing.
2. People are really attracted to using LLMs for deep thinking tasks, offshoring their thinking to a "Think-for-me SaaS". This won't end well for you; there are no shortcuts in life that don't come with a (huge) cost.
The person who showed their work and scored A's on math tests instead of just learning how to use a calculator is better off in their career/endeavors than the 80% of others who did the latter. If Laurie Wired makes an MCP for Ghidra and uses it, that's one thing; you using it without ever reverse engineering extensively is completely different. I'd bet my bottom dollar that Laurie Wired doesn't prefer the MCP over her own mental processes 8/10 times.
I was wondering why so many people were suddenly hopping into my humble profession and declaring me redundant. Ah, a YouTube influencer is at the center of it. Makes sense.
lol
This feels like a bit of a false dichotomy. Just because I give some thinking tasks to an AI doesn't mean I'm sitting there doing nothing while it thinks.
Interpret the intent of the parent's comment more and focus less on finding points to critique. The irony here is that the critique you made is the most obvious one, which also means it's the one the parent most likely assumed you'd understand the implicit context around. I don't think anyone has handed all their thinking over to LLMs; it's always somewhere on a spectrum. I think we can assume the parent isn't framing things as a binary outcome. If they were, we should ignore everything they're saying.
I’ve made frameworks that turn a project entirely over to the AI — e.g., turn a paragraph summary of what I want into a book on that topic.
Obviously I get much less out of that — I’m not denying the tradeoff, just saying that some people go all the way to “write a short request, accept the result” for (certain) thinking tasks.
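To give a rough idea of what I mean, that kind of framework can be as simple as a two-step prompt pipeline. This is a minimal sketch, not my actual code; llm() is a hypothetical stand-in for whatever model API you use, and all the names here are illustrative:

    # Minimal sketch, assuming a hypothetical llm(prompt) -> str helper
    # that wraps whatever model API you use; all names are illustrative.
    def llm(prompt: str) -> str:
        raise NotImplementedError  # plug in your model call here

    def paragraph_to_book(summary: str, chapters: int = 10) -> str:
        # Step 1: expand the one-paragraph summary into a chapter outline.
        outline = llm(f"Write a {chapters}-chapter outline for a book about:\n{summary}")
        # Step 2: expand each non-empty outline line into a full chapter.
        parts = [outline]
        for line in outline.splitlines():
            if line.strip():
                parts.append(llm(f"Write the full chapter for this outline item:\n{line}"))
        return "\n\n".join(parts)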
Sure, but even that falls on the spectrum: the request itself requires some thinking. So if we're not being pedantic, then people will criticize because natural language isn't
I'd say _this_ is the comment guilty of making a false dichotomy.
Do you have a background in reverse engineering?
You literally have a blog post called "AI can only solve boring problems"
Are you just trying to argue for the sake of arguing?
What does my blog post have to do with anything? (But since you mention it - a large part of reverse engineering falls under the "boring" category I define in that article)
A VC might want variety, and advise people that he'll vote with his dollars for variety, because he doesn't want to fund the same thing everyone else is funding.
Being first and being the winner requires a lot to line up, so it shouldn't be the only, default, or best setting. Pursuing it is optimizing for one narrow outcome.
Also, a message from 10-15 years ago might not reflect the same context as today.
"A VC might want variety and advise people he will vote with his dollars for variety".
In other words, what's good for Peter Theil might not be goid for you.
Yup. Postulating it as a truth or standard is fine if that's what you agree with and want to pursue, but it's important to keep in mind that valid goals span a spectrum.