alt.hn

3/25/2025 at 6:47:37 PM

MCP server for Ghidra

https://github.com/LaurieWired/GhidraMCP

by tanelpoder

3/27/2025 at 12:35:50 PM

I hope that one day we have a tool that can convert any proprietary binary to source code with a single click. It would be so much fun to have an "open source" version of all games. Currently, there are projects like https://github.com/Try/OpenGothic and https://github.com/SFTtech/openage, but these require years of community effort.

by randomtoast

3/27/2025 at 3:15:43 PM

Current SOTA models are really bad at RE, and I don't really expect this to improve through training on open data.

There are just not a lot of high-quality examples on the internet, and more importantly, the people writing this code are doing their best to make it actively more difficult.

by airza

3/27/2025 at 4:13:24 PM

It is quite easy to produce high-quality synthetic data for training reverse engineering: just take any open-source project, compile it, and train the model to produce the code (or something equivalent) given the binary.
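A minimal sketch of such a pipeline in Python (the gcc/objdump invocation and the prompt format here are illustrative assumptions, not anyone's actual tooling):

```python
import pathlib
import subprocess
import tempfile

def compile_and_disassemble(c_source: str) -> str:
    """Compile a C snippet and return its disassembly (assumes gcc and objdump exist)."""
    with tempfile.TemporaryDirectory() as tmp:
        src = pathlib.Path(tmp) / "sample.c"
        obj = pathlib.Path(tmp) / "sample.o"
        src.write_text(c_source)
        subprocess.run(["gcc", "-O2", "-c", str(src), "-o", str(obj)], check=True)
        out = subprocess.run(["objdump", "-d", str(obj)],
                             capture_output=True, text=True, check=True)
        return out.stdout

def make_training_example(disassembly: str, source: str) -> dict:
    # Pair the binary-side text with the original source as one supervised example.
    return {
        "prompt": "Recover the C source for this disassembly:\n" + disassembly,
        "completion": source,
    }
```

Run over thousands of permissively licensed repos at several optimization levels, and you get (binary, source) pairs for free.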

by sebzim4500

3/27/2025 at 5:49:39 PM

Right. You could even run it through code obfuscators and such to create more diverse, realistic examples.

by ai-christianson

3/27/2025 at 3:26:31 PM

You can't open-source code that is not yours. They are implementing a clean new version.

In the other direction, a company can't take a GPL project, decompile the code, and release it as proprietary.

by gus_massa

3/27/2025 at 3:41:57 PM

> They are implementing a clean new version.

Much of reverse engineering involves analyzing existing code, and this is not a secret. There are forums where people discuss and share their reverse engineering findings. Without this, creating a nearly 100% compatible clone, such as one that can use the original game files, would be nearly impossible.

by randomtoast

3/27/2025 at 7:40:34 AM

For LLMs to solve code, I think they should be AST-native. Code is a tree, not a sequence — yet we feed it to models linearly, with no explicit structure. Today's models lack recurrence or true memory, so they can't reason over hierarchical structures effectively.

by Xx_crazy420_xX

3/27/2025 at 8:00:07 AM

LLMs are autoregressive models. However, the notion of order in ASTs might be nonexistent, especially for parallel branches of computation/control flow. You could attempt to untangle each branch into N sequences, but this would erase control-flow information.

Even when there is an objective ordering of the children of every node, you still have four traversal options: {preorder, postorder} × {BF, DF}.

Note: For children lacking an objective ordering, you might apply generic rules to define a traversal order, but you’d end up with as many depth-first traversals as there are possible orders—essentially a crude heuristic. If you want the evaluation order to be dynamic at each step (e.g., using RL), the complexity grows geometrically worse. That’s been my experience tinkering with a custom AST DSL for ARC-AGI.
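A toy enumeration of those four traversal options on a small expression tree (illustrative only, not the custom DSL mentioned above):

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    children: list = field(default_factory=list)

def dfs_preorder(n: Node) -> list:
    """Depth-first, parent before its subtrees."""
    out = [n.label]
    for c in n.children:
        out += dfs_preorder(c)
    return out

def dfs_postorder(n: Node) -> list:
    """Depth-first, subtrees before the parent (evaluation order)."""
    out = []
    for c in n.children:
        out += dfs_postorder(c)
    out.append(n.label)
    return out

def bfs_preorder(n: Node) -> list:
    """Breadth-first, parents before children (level order)."""
    out, q = [], deque([n])
    while q:
        cur = q.popleft()
        out.append(cur.label)
        q.extend(cur.children)
    return out

def bfs_postorder(n: Node) -> list:
    """Breadth-first, children before parents (reversed level order)."""
    return list(reversed(bfs_preorder(n)))

# add(mul(a, b), c) — one tree, four distinct linearizations
tree = Node("add", [Node("mul", [Node("a"), Node("b")]), Node("c")])
```

Each traversal yields a different token sequence for the same tree, which is exactly the ambiguity an autoregressive model has to commit to up front.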

by Nesco

3/27/2025 at 8:40:55 AM

Cool to hear you've worked on ARC-AGI — I poked at it too. You're totally right about the messy traversal space, especially with parallel branches. What feels ambiguous at the token level becomes structured ambiguity in the AST — and that's progress.

My hunch is that LLMs don’t need to solve the whole traversal space — they just need a clean, abstract interface. Even parallel branches can be normalized into a schema that the model can reason over consistently. And in practice, you rarely need full recursion or a complete tree walk to understand a node — but having that option unlocks deeper comprehension when it counts.

This kind of structural understanding would also massively improve Copilot-style tools, especially for less popular libraries where token-level familiarity breaks down. If models could reason over types and structure instead of guessing based on frequency, completions would be a lot more reliable outside the top 1% of APIs.

by Xx_crazy420_xX

3/27/2025 at 8:39:59 AM

> LLMs are autoregressive models.

Most LLMs are autoregressive models, but exceptions exist, e.g., Mercury [0] is a diffusion LLM.

[0] https://www.inceptionlabs.ai/news

by dragonwriter

3/28/2025 at 10:05:27 AM

Well, from my very limited comprehension of diffusion models, they apply to fixed-length structures, mostly over a continuous space. Maybe a way to make them work with tree structures could be found, but that's no trivial task.

by Nesco

3/28/2025 at 3:24:20 PM

Autoregressive LLMs don't usually work on tree structures, they work on capped-length linear token sequences, which are isomorphic to fixed-length sequences.

I'm not sure why you think working on tree structures rather than fixed length sequences would be necessary for diffusion language models—which, again, actually exist; aside from Mercury which is proprietary, there is also LLaDA: https://ml-gsai.github.io/LLaDA-demo/

by dragonwriter

3/27/2025 at 12:09:04 PM

Has there been much work on reversing binaries into an AST form? It seems like something that somebody would have thought of researching, but I've not come across any efforts.

Is it something you can do generically, or do you need to know the specific compiler? Do you need to know the specific language, even, or could you perhaps create some other hypothetical AST in a different language that would have led to the same binary?

by gnfargbl

3/27/2025 at 4:40:56 PM

The graph part, more so than the AST part, makes sense to me. We reason over programs as hairy dataflow/controlflow/etc. dependency graphs that happen to originally be encoded as some sort of text->AST.

GNNs went down some roads here, but never felt like a path to reasoning. So how to get an RL reasoner flow to do what is easy for datalog, natively and/or as a tool?

by lmeyerov

3/27/2025 at 2:03:47 PM

Or we could just forget about code and have the model act directly :) That's my bet.

by pilooch

3/27/2025 at 8:26:01 AM

LLMs process information in a strictly sequential manner. It's their core capability and what makes them feel so anthropomorphic.

by otabdeveloper4

3/27/2025 at 8:35:54 AM

> LLMs process information in a strictly sequential manner.

"LLMs" as a class do not. Most LLMs, because most LLMs are autoregressive models, but diffusion LLMs exist and are not sequential in the way that autoregressive models are.

> It's their core capability

Being sequential is not a capability at all, much less a core one defining Large Language Models.

> and what makes them feel so anthropomorphic.

I disagree with this, too; I think what makes LLMs "feel so anthropomorphic" is the fact that most humans are very focused on language in perceiving other humans as human, and LLMs' output (as their name suggests) models human use of language, directly targeting a key feature used to identify something as human-like.

by dragonwriter

3/27/2025 at 10:51:32 AM

The gimmick of the LLM is that it outputs text sequentially, as if it is talking to us. That's what makes them feel "alive" and "intelligent" to us. (And yes, ironically it's this sequential nature that actually limits their intelligence in practice, but whatever. The AI hype is about appearances, not facts.)

by otabdeveloper4

3/27/2025 at 1:13:00 PM

> That's what makes them feel "alive" and "intelligent" to us.

What is the basis for this claim? Seems like "A" (chatbots output text sequentially) is true, and "B" (they feel intelligent to us) is true, and you're claiming "A causes B" without any support at all. They just happen to both be true, and you personally feel there is a causal relationship, which proves nothing.

by lucianbr

3/27/2025 at 6:33:55 PM

> The gimmick of the LLM is that it outputs text sequentially, as if it is talking to us. That's what makes them feel "alive" and "intelligent" to us.

Yes, I got that that was the original claim. I still disagree with it. What makes them feel alive and intelligent is that they produce human-like language output, not that the process by which they construct that output is sequential. Non-autoregressive LLMs of equal output quality would (do) appear just as alive and intelligent as autoregressive LLMs. An autoregressive LLM behind a non-streaming request/response interface, where the token-by-token sequencing of the response is not exposed to the user, still seems just as intelligent as one where the output is streamed to the user.

by dragonwriter

3/27/2025 at 11:28:09 AM

Are you saying that if LLMs visually displayed their output all at once instead of sequentially, they would not be as successful as they are?

by rowanG077

3/28/2025 at 8:38:22 AM

Yes. Human speech is sequential (we make sounds one by one), and when LLMs mimic this with token-by-token autocomplete they seem more anthropomorphic to us.

(I take issue with the word "successful", though. Selling LLMs as a human-like intelligence is a gimmick and a borderline scam.)

by otabdeveloper4

3/27/2025 at 8:51:41 AM

Not fully.

The point of transformer attention is cross-wise processing of tokens that computes their relationship to each other at multiple levels of abstraction. That's why LLMs can read so fast: they're processing all the input tokens in parallel.

LLMs emit tokens in a sequential manner at the level of the outer loop, but clearly inside the activations is a non-sequential map of the entire planned output, otherwise they wouldn't be able to make coherent sentences or speak German (which puts verbs at the end).
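That all-pairs parallelism shows up in a bare-bones self-attention sketch (single head, no learned projections, purely to illustrate that there is no left-to-right loop over input tokens):

```python
import numpy as np

def self_attention(X: np.ndarray) -> np.ndarray:
    """Mix every token with every other token in one matrix product.

    X has shape (sequence_length, model_dim). No step depends on a
    previous step: the whole input is processed at once.
    """
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                   # every token scored against every other
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability for softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the full sequence
    return weights @ X                              # each output mixes the entire input
```

Sequentiality only enters at generation time, in the outer loop that appends one output token and re-runs this parallel computation.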

by mike_hearn

3/26/2025 at 7:08:20 PM

Which tools can currently invoke MCP? I have only read a little about MCP and learned that Claude's desktop application is capable of using it locally.

Are there any chat interfaces which allow using MCP remotely?

I would like to be able to specify MCP endpoints and the functions they offer in ChatGPT's, Claude's and Gemini's web interfaces so that I can have them call my servers remotely. A bit like "GPTs" and "Gems".

by qwertox

3/26/2025 at 7:23:43 PM

I touch on this briefly in the video. Besides Claude Desktop, 5ire is a fairly model-agnostic local MCP client; I'm sure there are others.

sama also recently mentioned ChatGPT Desktop is getting MCP client functionality "soon".

As for remote clients, Cloudflare has some really useful tooling; look at their "AI Playground".

by lauriewired

3/26/2025 at 11:13:31 PM

I use them in Cursor. Writing an MCP server is trivial, just ask Cursor to put one together in TypeScript. You would use your local MCP server to call whatever remote API you want (or perform some other task). The MCP server uses stdin/stdout to talk to Cursor.
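As a rough illustration of that stdin/stdout transport, here is a toy JSON-RPC loop in Python. The tool names and response shapes are simplified placeholders, and a real server would implement the actual MCP schema, typically via an official SDK:

```python
import json
import sys

def handle_request(req: dict) -> dict:
    """Toy dispatcher; real MCP defines methods like initialize, tools/list, tools/call."""
    method = req.get("method")
    if method == "tools/list":
        result = {"tools": [{"name": "echo", "description": "Echo back the given text"}]}
    elif method == "tools/call":
        args = req.get("params", {}).get("arguments", {})
        result = {"content": [{"type": "text", "text": args.get("text", "")}]}
    else:
        result = {}
    return {"jsonrpc": "2.0", "id": req.get("id"), "result": result}

def main() -> None:
    # One JSON-RPC message per line on stdin; responses go to stdout.
    for line in sys.stdin:
        if line.strip():
            resp = handle_request(json.loads(line))
            sys.stdout.write(json.dumps(resp) + "\n")
            sys.stdout.flush()

# main()  # would run as a long-lived child process of the client
```

The client (Cursor, Claude Desktop, etc.) spawns the server as a subprocess and speaks this framing over the pipes, which is why "remote" use typically means wrapping remote API calls inside a local server like this.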

by electroly

3/27/2025 at 11:04:42 AM

I'm using Librechat which I've found to be quite feature complete. I updated an Obsidian MCP to get my most recent journal entries to act like a therapist. Example setup here: https://www.jevy.org/articles/obsidian-mcps-to-work-with-not...

by jevyjevjevs

3/27/2025 at 11:39:03 AM

@jevyjevjevs,

Can you add an RSS feed to your site's blog? I found a few of the articles interesting and helpful. I would like to subscribe, but I don't see an RSS or email subscription option.

by dockerd

3/26/2025 at 7:56:05 PM

You can use MCP servers in SAM (Solace Agent Mesh). It has a chat interface and can be run remotely. Perhaps the easiest way to do it remotely is to use a Slack integration to SAM with a free Slack workspace, which doesn't require poking a hole to serve the browser UI.

https://github.com/SolaceLabs/solace-agent-mesh

by efunnekol

3/26/2025 at 8:00:35 PM

Block has an open source tool called Goose that invokes MCP. https://block.github.io/goose/

by salgorithm

3/27/2025 at 2:15:38 AM

Is there a trick to making it work well? I tried Goose briefly but it seemed very flaky compared to Open Web UI with hand-configured tool calling.

by hedgehog

3/26/2025 at 10:23:30 PM

Unity, Blender and Photoshop all have rough MCP integrations available. You can find them on GitHub.

by fixprix

3/26/2025 at 7:59:48 PM

If you run some proxy server, you could run MCP servers remotely

by mettamage

3/26/2025 at 8:10:41 PM

Cursor has support for it I believe

by asphodel_gray

3/26/2025 at 9:54:18 PM

If you haven't watched her YouTube channel before, I recommend checking it out. Besides the technical content, I think the editing with retro OS graphics is fun.

by sorenjan

3/27/2025 at 2:46:49 AM

It's really impressive. Technical content, GitHub repos that go along with the videos, set design, retro editing -- much higher quality than a lot of stuff out there from major studios

by foooorsyth

3/27/2025 at 1:07:37 AM

Thought experiment. Suppose all binaries could be instantly reverse engineered to perfection. How would that change security?

by ngneer

3/27/2025 at 2:43:24 AM

Everyone would just replace all their proprietary programs with dumb clients that communicate with a server. Either that, or they'd go all in on homomorphic encryption.

by LegionMammal978

3/27/2025 at 2:45:03 AM

Only formally proven systems will be secure

by ynniv

3/27/2025 at 1:31:56 AM

Everything is open source if you speak assembly.

by xeckr

3/27/2025 at 2:35:04 PM

Secure enclaves would appear in most computers. Nothing would be run without everything being encrypted.

by gosub100

3/26/2025 at 3:01:59 AM

My experience with just copying and pasting things from Ghidra into LLMs and asking them to figure it out wasn't so successful. It'd be cool to have benchmarks for this stuff though.

by brokensegue

3/26/2025 at 5:27:38 AM

I actually have only tried this once but had the opposite experience. Gave it 5 or so related functions from a ps2 game and it correctly inferred they were related to graphics code, properly typing and naming the parameters. I’m sure this sort of thing is extremely hit or miss though

by Everdred2dx

3/27/2025 at 4:30:01 AM

Had the same experience. Took the janky decompilation from Ghidra, and it was able to name parameters and functions. It even figured out the game based on a single name in a string. Based on my read of the labeled decompilation, it seemed largely correct. And definitely a lot faster than me.

Even if I weren’t to rely on it 100% it was definitely a great draft pass over the functions.

by strstr

3/26/2025 at 6:45:26 AM

Most likely there was just a mangled symbol somewhere that it recognised from its training data.

by cedws

3/26/2025 at 5:52:43 PM

Where is that coming from? The chances that some random PS2 game's code symbols are in the training data are infinitesimal. It's much more likely that it can understand code and rewrite it. That's basically what LLMs have been capable of for years now.

by rowanG077

3/26/2025 at 6:02:46 PM

The parent is supposing without any experience. LLMs can read hex, bytecode, base64, rot13, etc. I use LLMs to decompile bytecode all the time.

by sitkack

3/26/2025 at 6:22:28 PM

I've been thinking on how to build a benchmark for this stuff for a while, and don't have a good idea other than LLM-as-judge (which quickly gets messy). I guess there's a reason why current neural decompilation attempts are all evaluated on "seemingly meaningless" benchmarks like "can it recompile without syntax error" or "functional equivalence of recompilation" etc.

by rfoo

3/26/2025 at 8:16:39 PM

Hmm, specifically when it comes to reverse engineering, you have the best benchmark ever - you can check the original code, no?

by vessenes

3/27/2025 at 2:58:09 AM

that requires LLM as judge

by brokensegue

3/27/2025 at 11:40:19 AM

no it doesn't, you just diff against the real source code. probably something more fuzzy/continuous than actual diff, but still
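One cheap fuzzy score along those lines, using Python's standard-library difflib (just one of many possible similarity metrics):

```python
import difflib

def source_similarity(decompiled: str, original: str) -> float:
    """Return a score in [0, 1]: 1.0 for identical text, lower as the diff grows."""
    return difflib.SequenceMatcher(None, decompiled, original).ratio()
```

A serious benchmark would likely normalize both sides first (strip comments, canonicalize whitespace, maybe compare token streams), but the idea is the same: score against the known ground-truth source, no judge model required.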

by dataangel

3/30/2025 at 9:58:39 AM

Besides functional equivalence, a significant part of the value in neural decompilation is the symbols (function names, variable names, struct definitions including member names) it recovers. So, if the LLM predicted "FindFirstFitContainer" for a function originally called "find_pool", is that correct? Wrong? 26.333% correct?
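One crude, hypothetical way to put a number on that is token-level overlap between the predicted and original identifiers (the splitting and scoring rules here are arbitrary choices, which is rather the point):

```python
import re

def name_tokens(name: str) -> set:
    """Split camelCase / snake_case identifiers into lowercase word tokens."""
    spaced = re.sub(r"([a-z0-9])([A-Z])", r"\1 \2", name).replace("_", " ")
    return {p.lower() for p in spaced.split()}

def name_score(predicted: str, original: str) -> float:
    """Jaccard overlap between the two names' token sets."""
    a, b = name_tokens(predicted), name_tokens(original)
    return len(a & b) / len(a | b) if a | b else 1.0
```

On the example above it gives 1/5 = 0.2, which mostly demonstrates how arbitrary any such scalar score is: the names share only "find", yet a human might call the prediction semantically close.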

by rfoo

3/27/2025 at 12:28:17 PM

Proving that two pieces of code are equivalent sounds very hard (it's undecidable in general).

by brokensegue

3/27/2025 at 4:22:31 AM

Is anyone working on a "catalog" of MCP servers? Searching on Github is not exactly the best way to discover these.

by Everdred2dx

3/27/2025 at 4:58:07 AM

I've noticed a lot of websites popping up recently that are basically just lists of MCP servers. Some examples:

- https://mcpservers.org/

- https://glama.ai/mcp/servers

- https://www.claudemcp.com/servers

Not to mention the usual GitHub ones:

- https://github.com/punkpeye/awesome-mcp-servers

The hype is real.

by meander_water

3/27/2025 at 6:27:32 AM

To clarify somewhat: while they all index the MCP servers out there, some of them will also _host_ MCP servers remotely. Glama, mcp.run, and just recently Cloudflare have offerings in this realm.

by knowaveragejoe

3/28/2025 at 1:20:24 PM

Do these MCP registries expose an MCP server too, so that a client can do MCP server auto-discovery through the registry?

by Klaster_1

3/27/2025 at 3:26:27 AM

This is very cool, but it would be nice to have more features on the MCP server, such as arbitrary reads and writes of program memory. For example, I was working on a self-unpacking CTF challenge which XORed instructions. It would be nice for it to be able to read the values at the addresses it XORed.

by celesian

3/27/2025 at 4:12:49 AM

RE is exactly the sort of work that requires precision and careful reasoning, not hallucinatory statistical inference. Seeing how LLMs stumble very heavily on the former makes it clear that AI will not replace us.

by userbinator

3/27/2025 at 4:38:54 AM

I hate to be that guy, but the one does not follow from the other. To some, just the initial appearance of 'acceptable'/'good enough' is, well, good enough. The current set of LLMs can absolutely replace us while breaking a lot in the process.

by iugtmkbdfil834

3/27/2025 at 11:47:50 AM

You just opened Pandora's box, LaurieWired.

by enigma101

3/27/2025 at 3:42:37 AM

i love you lauriewired.

by dprophecyguy
