76 comments
  • randomtoast4m

    I hope that one day we have a tool that can convert any proprietary binary to source code with a single click. It would be so much fun to have an "open source" version of all games. Currently, there are projects like https://github.com/Try/OpenGothic and https://github.com/SFTtech/openage, but these require years of community effort.

    • airza4m

      Current SOTA models are really bad at RE, and I don't really expect this to improve through training on open data.

      There are just not a lot of high-quality examples on the internet, and more importantly, the people writing this code are doing their best to make it actively more difficult.

      • sebzim45004m

        It is quite easy to produce high-quality synthetic data for training reverse engineering: just take any open source project, compile it, and ask the model to produce the code (or something equivalent) given the binary.
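
        A minimal sketch of that pipeline, assuming gcc and objdump are on PATH (the corpus layout and output file are made up):

          # Build (disassembly, source) training pairs from open-source C files.
          import json
          import pathlib
          import subprocess

          def make_pair(c_file: pathlib.Path, opt: str) -> dict:
              """Compile one file, then pair its disassembly with the original source."""
              obj = c_file.with_suffix(".o")
              subprocess.run(["gcc", opt, "-c", str(c_file), "-o", str(obj)], check=True)
              disasm = subprocess.run(["objdump", "-d", str(obj)],
                                      capture_output=True, text=True, check=True).stdout
              return {"input": disasm, "target": c_file.read_text(), "opt": opt}

          # Varying optimization levels (and, per the reply below, obfuscators)
          # multiplies the diversity of examples per source file.
          pairs = [make_pair(p, opt)
                   for p in pathlib.Path("corpus").glob("**/*.c")  # hypothetical corpus
                   for opt in ("-O0", "-O2", "-Os")]
          pathlib.Path("re_pairs.jsonl").write_text(
              "\n".join(json.dumps(p) for p in pairs))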

        • ai-christianson4m

          Right. You could even run it through code obfuscators and such to create more diverse, realistic examples.

    • gus_massa4m

      You can't open source code that is not yours. They are implementing a clean new version.

      In the other direction, a company can't take a GPL project, decompile the code, and release it as proprietary.

      • randomtoast4m

        > They are implementing a clean new version.

        Much of reverse engineering involves analyzing existing code, and this is not a secret. There are forums where people discuss and share their reverse engineering findings. Without this, creating a nearly 100% compatible clone, such as one that can use the original game files, would be nearly impossible.

  • Xx_crazy420_xX4m

    For LLMs to solve code, I think they should be AST-native. Code is a tree, not a sequence — yet we feed it to models linearly, with no explicit structure. Today's models lack recurrence or true memory, so they can't reason over hierarchical structures effectively.
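
    A minimal illustration with Python's stdlib, showing the flat token stream a model actually sees versus the tree the parser recovers:

      import ast, io, tokenize

      src = "x = f(a + b) * 2"

      # Linear view: the sequence models are fed.
      tokens = [t.string for t in tokenize.generate_tokens(io.StringIO(src).readline)
                if t.string.strip()]
      print(tokens)  # ['x', '=', 'f', '(', 'a', '+', 'b', ')', '*', '2']

      # Tree view: the explicit structure the linear form hides.
      print(ast.dump(ast.parse(src), indent=2))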

    • Nesco4m

      LLMs are autoregressive models. However, the notion of order in ASTs might be nonexistent, especially for parallel branches of computation/control flow. You could attempt to untangle each branch into N sequences, but this would erase control-flow information.

      Even when there is an objective ordering of the children of every node, you still have four traversal options: {preorder, postorder} × {BF, DF}.

      Note: For children lacking an objective ordering, you might apply generic rules to define a traversal order, but you’d end up with as many depth-first traversals as there are possible orders—essentially a crude heuristic. If you want the evaluation order to be dynamic at each step (e.g., using RL), the complexity grows geometrically worse. That’s been my experience tinkering with a custom AST DSL for ARC-AGI.
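
      For concreteness, even a trivial Python AST yields a different "sentence" under each traversal (a sketch using the stdlib ast module):

        import ast
        from collections import deque

        tree = ast.parse("x = f(a + b) * 2")
        label = lambda n: type(n).__name__

        def preorder(n):   # depth-first, parent before children
            yield label(n)
            for c in ast.iter_child_nodes(n):
                yield from preorder(c)

        def postorder(n):  # depth-first, children before parent
            for c in ast.iter_child_nodes(n):
                yield from postorder(c)
            yield label(n)

        def level_order(n):  # breadth-first
            queue = deque([n])
            while queue:
                node = queue.popleft()
                yield label(node)
                queue.extend(ast.iter_child_nodes(node))

        # Same tree, three different serializations to train on:
        print(list(preorder(tree)))
        print(list(postorder(tree)))
        print(list(level_order(tree)))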

      • Xx_crazy420_xX4m

        Cool to hear you've worked on ARC-AGI — I poked at it too. You're totally right about the messy traversal space, especially with parallel branches. What feels ambiguous at the token level becomes structured ambiguity in the AST — and that's progress.

        My hunch is that LLMs don’t need to solve the whole traversal space — they just need a clean, abstract interface. Even parallel branches can be normalized into a schema that the model can reason over consistently. And in practice, you rarely need full recursion or a complete tree walk to understand a node — but having that option unlocks deeper comprehension when it counts.

        This kind of structural understanding would also massively improve Copilot-style tools, especially for less popular libraries where token-level familiarity breaks down. If models could reason over types and structure instead of guessing based on frequency, completions would be a lot more reliable outside the top 1% of APIs.

      • dragonwriter4m

        > LLMs are autoregressive models.

        Most LLMs are autoregressive models, but exceptions exist, e.g., Mercury [0] is a diffusion LLM.

        [0] https://www.inceptionlabs.ai/news

        • Nesco4m

          Well, from my very limited understanding of diffusion models, they apply to fixed-length structures, mostly in a continuous space. Maybe a way to make them work with tree structures could be found, but that's no trivial task.

          • dragonwriter4m

            Autoregressive LLMs don't usually work on tree structures, they work on capped-length linear token sequences, which are isomorphic to fixed-length sequences.

            I'm not sure why you think working on tree structures rather than fixed length sequences would be necessary for diffusion language models—which, again, actually exist; aside from Mercury which is proprietary, there is also LLaDA: https://ml-gsai.github.io/LLaDA-demo/

    • gnfargbl4m

      Has there been much work on reversing binaries into an AST form? It seems like something that somebody would have thought of researching, but I've not come across any efforts.

      Is it something you can do generically, or do you need to know the specific compiler? Do you need to know the specific language, even, or could you perhaps create some other hypothetical AST in a different language that would have led to the same binary?

    • lmeyerov4m

      The graph part, more so than the AST part, makes sense to me. We reason over programs as hairy dataflow/control-flow/etc. dependency graphs that happen to originally be encoded as some sort of text->AST.

      GNNs went down some roads here, but never felt like a path to reasoning. So how do we get an RL reasoner flow to do what is easy for Datalog, natively and/or as a tool?
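
      For reference, the kind of query that is one rule in Datalog, written here as a naive Python fixpoint over a made-up def-use edge set:

        # reaches(X, Y) :- edge(X, Y).
        # reaches(X, Z) :- reaches(X, Y), edge(Y, Z).
        edges = {("a", "b"), ("b", "c"), ("c", "d"), ("x", "c")}  # toy dataflow graph

        reaches = set(edges)
        while True:
            new = {(x, z) for (x, y) in reaches
                          for (y2, z) in edges if y == y2} - reaches
            if not new:
                break
            reaches |= new

        print(sorted(reaches))  # every value-flow (source, sink) pair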

    • pilooch4m

      Or we could just forget about code and have the model act directly :) That's my bet.

    • otabdeveloper44m

      LLMs process information in a strictly sequential manner. It's their core capability and what makes them feel so anthropomorphic.

      • dragonwriter4m

        > LLMs process information in a strictly sequential manner.

        "LLMs" as a class do not. Most LLMs, because most LLMs are autoregressive models, but diffusion LLMs exist and are not sequential in the way that autoregressive models are.

        > It's their core capability

        Being sequential is not a capability at all, much less a core one defining Large Language Models.

        > and what makes them feel so anthropomorphic.

        I disagree with this, too; I think what makes LLMs "feel so anthropomorphic" is the fact that most humans are very focused on language in perceiving other humans as human, and LLMs' output (as their name suggests) models human use of language, directly targeting a key feature used to identify something as human-like.

        • otabdeveloper44m

          The gimmick of the LLM is that it outputs text sequentially, as if it is talking to us. That's what makes them feel "alive" and "intelligent" to us. (And yes, ironically it's this sequential nature that actually limits their intelligence in practice, but whatever. The AI hype is about appearances, not facts.)

          • lucianbr4m

            > That's what makes them feel "alive" and "intelligent" to us.

            What is the basis for this claim? "A" (chatbots output text sequentially) is true, and "B" (they feel intelligent to us) is true, and you're claiming "A causes B" without any support at all. That both happen to be true and that you personally feel there is a causal relationship proves nothing.

          • dragonwriter4m

            > The gimmick of the LLM is that it outputs text sequentially, as if it is talking to us. That's what makes them feel "alive" and "intelligent" to us.

            Yes, I got that that was the original claim. I still disagree with it. What makes them feel alive and intelligent is that they produce human-like language output, not that the process by which they construct that output is sequential. Non-autoregressive LLMs of equal output quality would (do) appear just as alive and intelligent as autoregressive LLMs. An autoregressive LLM behind a non-streaming request/response interface, where the token-by-token sequencing of the response is not exposed to the user, still seems just as intelligent as one where the output is streamed to the user.

          • rowanG0774m

            Are you saying that if LLMs visually output their text all at once instead of sequentially, they would not be as successful as they are?

            • otabdeveloper44m

              Yes. Human speech is sequential (we make sounds one by one), and when LLMs mimic this with token-by-token autocomplete they seem more anthropomorphic to us.

              (I take issue with the word "successful", though. Selling LLMs as a human-like intelligence is a gimmick and a borderline scam.)

              • 4m
                [deleted]
        • 4m
          [deleted]
      • mike_hearn4m

        Not fully.

        The point of transformer attention is cross-wise processing of tokens that computes their relationship to each other at multiple levels of abstraction. That's why LLMs can read so fast: they're processing all the input tokens in parallel.

        LLMs emit tokens in a sequential manner at the level of the outer loop, but clearly inside the activations is a non-sequential map of the entire planned output, otherwise they wouldn't be able to make coherent sentences or speak German (which puts verbs at the end).
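
        A toy single-head attention pass makes the parallelism concrete (numpy, no masking, sizes made up):

          import numpy as np

          T, d = 6, 8  # 6 tokens, 8-dim embeddings
          rng = np.random.default_rng(0)
          Q, K, V = (rng.normal(size=(T, d)) for _ in range(3))

          scores = Q @ K.T / np.sqrt(d)  # all T*T token-pair relationships at once
          weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax
          out = weights @ V  # each row mixes information from every input token

          print(out.shape)  # (6, 8): every position updated simultaneously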

  • qwertox4m

    Which tools can currently invoke MCP? I have read only a little about MCP and learned that Claude's desktop application can use MCP locally.

    Are there any chat interfaces which allow using MCP remotely?

    I would like to be able to specify MCP endpoints and the functions they offer in ChatGPT's, Claude's and Gemini's web interfaces so that I can have them call my servers remotely. A bit like "GPTs" and "Gems".

    • lauriewired4m

      I touch on this briefly in the video. Besides Claude Desktop, 5ire is a fairly model-agnostic local MCP client; I'm sure there are others.

      sama also recently mentioned ChatGPT Desktop is getting MCP client functionality "soon".

      As for remote clients, Cloudflare has some really useful tooling, look at their "AI Playground".

    • electroly4m

      I use them in Cursor. Writing an MCP server is trivial: just ask Cursor to put one together in TypeScript. You would use your local MCP server to call whatever remote API you want (or perform some other task). The MCP server uses stdin/stdout to talk to Cursor.
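
      The Python SDK version is similarly small; a minimal sketch using the official mcp package's FastMCP (the tool itself is a made-up example):

        from mcp.server.fastmcp import FastMCP

        mcp = FastMCP("demo")

        @mcp.tool()
        def service_status(name: str) -> str:
            """Hypothetical tool: report status for a named service."""
            return f"{name}: ok"

        if __name__ == "__main__":
            mcp.run()  # stdio transport by default, which is what Cursor speaks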

    • efunnekol4m

      You can use MCP servers in SAM (Solace Agent Mesh). That has a chat interface and can be run remotely. Perhaps the easiest way to do it remotely is to use a Slack integration to SAM with a free Slack workspace, which doesn't require opening a firewall hole to serve the browser UI.

      https://github.com/SolaceLabs/solace-agent-mesh

    • jevyjevjevs4m

      I'm using LibreChat, which I've found to be quite feature-complete. I updated an Obsidian MCP to pull my most recent journal entries so it can act like a therapist. Example setup here: https://www.jevy.org/articles/obsidian-mcps-to-work-with-not...

      • dockerd4m

        @jevyjevjevs,

        Can you add an RSS feed to your blog? I found a few of the articles interesting and helpful. I would like to subscribe, but I don't see an RSS or email subscription option.

    • nekitamo4m

      I had the same question as you, and some quick Googling led me to this list here:

      https://github.com/punkpeye/awesome-mcp-clients

    • salgorithm4m

      Block has an open source tool called Goose that invokes MCP. https://block.github.io/goose/

      • hedgehog4m

        Is there a trick to making it work well? I tried Goose briefly but it seemed very flaky compared to Open Web UI with hand-configured tool calling.

    • fixprix4m

      Unity, Blender and Photoshop all have rough MCP integrations available. You can find them on GitHub.

    • mettamage4m

      If you run some proxy server, you could run MCP servers remotely.

    • asphodel_gray4m

      Cursor has support for it, I believe.

  • mdaniel4m

    Her previous integration with Ghidra and an LLM had a good video, too: https://news.ycombinator.com/item?id=42860849

    Malimite – iOS and macOS Decompiler - https://news.ycombinator.com/item?id=42829402 - Jan, 2025 (37 comments)

  • sorenjan4m

    If you haven't watched her YouTube channel before, I recommend checking it out. Besides the technical content, I think the editing with retro OS graphics is fun.

    • foooorsyth4m

      It's really impressive. Technical content, GitHub repos that go along with the videos, set design, retro editing -- much higher quality than a lot of stuff out there from major studios.

  • ngneer4m

    Thought experiment. Suppose all binaries could be instantly reverse engineered to perfection. How would that change security?

    • LegionMammal9784m

      Everyone would just replace all their proprietary programs with dumb clients that communicate with a server. Either that, or they'd go all in on homomorphic encryption.

    • ynniv4m

      Only formally proven systems would be secure.

    • xeckr4m

      Everything is open source if you speak assembly.

    • gosub1004m

      Secure enclaves would appear in most computers. Nothing would be run without everything being encrypted.

  • brokensegue4m

    My experience with just copying and pasting things from Ghidra into LLMs and asking them to figure it out wasn't so successful. It'd be cool to have benchmarks for this stuff, though.

    • Everdred2dx4m

      I've actually only tried this once, but I had the opposite experience. I gave it 5 or so related functions from a PS2 game, and it correctly inferred they were related to graphics code, properly typing and naming the parameters. I'm sure this sort of thing is extremely hit or miss, though.

      • strstr4m

        Had the same experience. I gave it the janky decompilation from Ghidra, and it was able to name parameters and functions. It even figured out the game based on a single name in a string. Based on my read of the labeled decompilation, it seemed largely correct. And definitely a lot faster than me.

        Even if I wouldn't rely on it 100%, it was definitely a great draft pass over the functions.

      • cedws4m

        Most likely there was just a mangled symbol somewhere that it recognised from its training data.

        • rowanG0774m

          Where is that coming from? The chances that some random PS2 game's code symbols are in the training data are infinitesimal. It's much more likely that it can understand code and rewrite it, which is basically what LLMs have been capable of for years now.

          • sitkack4m

            Parent is supposing without any experience. LLMs can see in hex, bytecode, base64, rot13, etc. I use LLMs to decompile bytecode all the time.

        • unit1494m

          [dead]

    • rfoo4m

      I've been thinking about how to build a benchmark for this stuff for a while, and I don't have a good idea other than LLM-as-judge (which quickly gets messy). I guess there's a reason why current neural decompilation attempts are all evaluated on "seemingly meaningless" benchmarks like "can it recompile without syntax errors" or "functional equivalence of the recompilation", etc.

      • vessenes4m

        Hmm, specifically when it comes to reverse engineering, you have the best benchmark ever - you can check the original code, no?

        • brokensegue4m

          That requires LLM-as-judge.

          • dataangel4m

            No it doesn't; you just diff against the real source code. Probably something more fuzzy/continuous than an actual diff, but still.

            • rfoo4m

              Besides functional equivalence, a significant part of the value in neural decompilation is the symbols it recovers (function names, variable names, struct definitions including member names). So, if the LLM predicted "FindFirstFitContainer" for a function originally called "find_pool", is this correct? Wrong? 26.333% correct?
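
              One crude option is word-level overlap after splitting identifiers, though any weighting is arbitrary (a sketch):

                import re

                def words(ident: str) -> set:
                    # Split camelCase, PascalCase, snake_case, acronyms, digits.
                    parts = re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", ident)
                    return {p.lower() for p in parts}

                def name_score(predicted: str, original: str) -> float:
                    """Jaccard overlap of identifier words, in [0, 1]."""
                    a, b = words(predicted), words(original)
                    return len(a & b) / len(a | b) if a | b else 0.0

                print(name_score("FindFirstFitContainer", "find_pool"))  # 0.2: shares "find"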

            • brokensegue4m

              Proving that two pieces of code are equivalent sounds very hard (it's undecidable in general).

      • bitfieldz4m

        [dead]

  • Everdred2dx4m

    Is anyone working on a "catalog" of MCP servers? Searching on GitHub is not exactly the best way to discover these.

  • celesian4m

    This is very cool, but it would be nice to have more features on the MCP server, such as arbitrary reads and writes of program memory. For example, I was working on a self-unpacking CTF challenge which XORed instructions. It would be nice to have it be able to read the values at the addresses it XORed.
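
    The decode step itself is tiny once an arbitrary-read primitive exists; a hypothetical sketch (bytes and key are made up):

      def xor_decode(data: bytes, key: bytes) -> bytes:
          """XOR a byte range against a repeating key, as the unpacker does."""
          return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

      # data would come from an MCP "read bytes at address" tool, if one existed.
      print(xor_decode(b"\x55\x48\x89\xe5", b"\xaa").hex())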

  • dang4m

    Related (but merged hither):

    GhidraMCP: Now AI can reverse malware [video] - https://news.ycombinator.com/item?id=43475025

  • userbinator4m

    RE is exactly the sort of work that requires precision and careful reasoning, not hallucinatory statistical inference. Seeing how LLMs stumble very heavily on the former makes it clear that AI will not replace us.

    • iugtmkbdfil8344m

      I hate to be that guy, but one does not follow from the other. To some, just the initial appearance of 'acceptable'/'good enough' is, well, good enough. The current set of LLMs can absolutely replace us while breaking a lot in the process.

    • bitfieldz4m

      [dead]

  • enigma1014m

    You just opened Pandora's box, Lady Wired.

  • dprophecyguy4m

    I love you, lauriewired.

  • securemepro4m

    [flagged]