5/22/2025 at 12:11:34 AM
I've been saying for two years that "any sufficiently advanced agent is indistinguishable from a DSL." Rather than asking an agent to internalize its algorithm, you should teach it an API and then ask it to design an algorithm which you can run in user space. There are very few situations where I think it makes sense (for cost or accuracy) for an LLM to internalize its algorithm. It's like asking an engineer to step through a function in their head instead of just running it.
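Concretely, something like this (a minimal made-up sketch; llm_complete and the order functions are illustrative stand-ins, not any real library):

    API_SPEC = """
    list_orders() -> list[str]    # all order ids
    get_status(order_id) -> str   # e.g. "ok" or "damaged"
    refund(order_id) -> None
    """

    def llm_complete(prompt):
        # Stand-in for a real chat-completion call. Given the API spec,
        # the model would return a program like this one.
        return ("def solve():\n"
                "    for oid in list_orders():\n"
                "        if get_status(oid) == 'damaged':\n"
                "            refund(oid)\n")

    # Toy implementations of the API the agent was taught.
    ORDERS = {"a1": "damaged", "a2": "ok"}
    def list_orders(): return list(ORDERS)
    def get_status(oid): return ORDERS[oid]
    def refund(oid): print("refunded", oid)

    # The model designs the algorithm once; we run it in user space
    # instead of asking the model to step through the loop itself.
    code = llm_complete(f"Given this API:\n{API_SPEC}\nwrite solve() to refund damaged orders.")
    env = {"list_orders": list_orders, "get_status": get_status, "refund": refund}
    exec(code, env)
    env["solve"]()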
by madrox
5/23/2025 at 1:16:11 AM
I think I understand what you're proposing, but I'm not sure. So in concrete terms I'm imagining:
1. Create a prompt that gives the complete API specification and some general guidance about what role the agent will have.
2. In that prompt, ask it to write a function that the agent can use concisely, written to be consumed by the agent and from the agent's perspective. The body of that function translates the agent-oriented function definition into an API call.
3. Now the agent can use these modified versions of the API that expose only what's really important from its perspective.
4. But there's no reason APIs and functions have to map 1:1. You can wrap multiple APIs in one function, or break things up however makes the most sense.
5. Now the API-consuming agent is just writing library routines for other agents, and creating a custom environment for those agents.
6. This is all really starting to look like a team of programmers building a platform.
7. You could design the whole thing top-down as well, speculating about and then creating the functions the agents will likely want, and using whatever capabilities you have to implement those functions. The API calls are just an available set of functionality.
And really you could have multiple APIs used in one function call, and any number of ways to rephrase the raw capabilities as more targeted, specific capabilities (see the sketch below).
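For example, a rough Python sketch of steps 2-4 (the endpoints and fields are hypothetical):

    import requests

    # Agent-facing wrapper: one concise call, phrased from the agent's
    # perspective, hiding two raw API calls behind it.
    def customer_snapshot(email: str) -> dict:
        base = "https://api.example.com"
        user = requests.get(f"{base}/users", params={"email": email}).json()
        orders = requests.get(f"{base}/users/{user['id']}/orders").json()
        # Expose only what actually matters to the agent.
        return {
            "name": user["name"],
            "open_orders": [o["id"] for o in orders if o["status"] == "open"],
        }

The agent only ever sees customer_snapshot, never the raw endpoints.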
by ianbicking
5/22/2025 at 1:43:00 AM
Evidence that the path to ASI is not extending the capabilities of LLMs, but instead distilling out and compiling self-improving algorithms externally in a symbolic application.
by symbolicAGI
5/22/2025 at 5:13:09 AM
Can you point to evidence of widespread use of the word 'agent' in this context from two years ago?
by fooker
5/22/2025 at 12:52:52 PM
Here are the top articles for the month of May 2023 on HN with "agent" in the title [0]. It looks like early days for the term, but with a few large hits (like the HuggingFace announcement), which suggests OP was surprisingly precise in citing two years as the time window. Also, since you're implicitly questioning OP's claim to have been saying this all along, here's a comment from September 2023 where they first used the same quote and said they'd been building agents for 3 months by that point [1]. That's close enough to 2 years in my book.
[0] https://hn.algolia.com/?dateEnd=1685491200&dateRange=custom&...
by lolinder
5/22/2025 at 3:43:29 PM
We need more examples of posts like the one you made, where you call out a naysayer for being extremely wrong. Actually, folks can and often do tell the truth on the internet!
by Der_Einzige
5/22/2025 at 7:26:44 AM
https://news.ycombinator.com/item?id=37626877
by madrox
5/23/2025 at 3:11:07 AM
Neat!
by fooker
5/22/2025 at 6:57:59 PM
I'm sure you can find it in chatbot documentation from the 90s. It's a generic term carried over from non-AI chat. People responding to support chats were called agents.
by nitwit005