alt.hn

5/21/2025 at 5:18:52 PM

LLM function calls don't scale; code orchestration is simpler, more effective

https://jngiam.bearblog.dev/mcp-large-data/

by jngiam1

5/22/2025 at 12:11:34 AM

I've been saying for two years that "any sufficiently advanced agent is indistinguishable from a DSL."

Rather than asking an agent to internalize its algorithm, you should teach it an API and then ask it to design an algorithm which you can then run in user space. There are very few situations where I think it makes sense (for cost or accuracy) for an LLM to internalize its algorithm. It's like asking an engineer to step through a function in their head instead of just running it.
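To make that concrete, a rough sketch of the shape I mean (everything here is a placeholder, not a real SDK):

  # Hand the model an API spec, ask for a plain function, then run that
  # function in user space instead of looping tool calls through the model.
  API_SPEC = """
  get_issues(project: str) -> list[dict]      # each dict has 'id', 'status', 'updated_at'
  get_comments(issue_id: str) -> list[dict]
  """

  def llm_complete(prompt: str) -> str:
      """Stand-in for whatever model client you use; should return Python source."""
      raise NotImplementedError

  def build_algorithm(task: str) -> str:
      prompt = (
          "You can call this API:\n" + API_SPEC +
          "\nWrite a Python function solve(api) that " + task +
          " and returns JSON-serializable data. Return only code."
      )
      return llm_complete(prompt)

  def run_in_user_space(code: str, api) -> object:
      # The model designed the algorithm; we execute it, it doesn't.
      namespace = {}
      exec(code, namespace)          # sandbox this in anything real
      return namespace["solve"](api)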

by madrox

5/23/2025 at 1:16:11 AM

I think I understand what you're proposing, but I'm not sure.

So in concrete terms I'm imagining:

1. Create a prompt that gives the complete API specification and some general guidance about what role the agent will have.

2. In that prompt, ask it to write a function that can be used concisely by the agent, written to be consumed from the agent's perspective. The body of that function translates the agent-oriented function definition into an API call.

3. Now the agent can use these modified versions of the API that expose only what's really important from its perspective.

4. But there's no reason APIs and functions have to map 1:1. You can wrap multiple APIs in one function, or break things up however makes most sense (see the sketch at the end of this comment).

5. Now the API-consuming agent is just writing library routines for other agents, and creating a custom environment for those agents.

6. This is all really starting to look like a team of programmers building a platform.

7. You could design the whole thing top-down as well, speculating then creating the functions the agents will likely want, and using whatever capabilities you have to implement those functions. The API calls are just an available set of functionality.

And really you could have multiple APIs being used in one function call, and any number of ways to rephrase the raw capabilities as more targeted and specific capabilities.

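To make point 4 concrete, here's the kind of thing I'm imagining (the two clients below are made-up stubs, not real SDKs):

  # One agent-facing function that hides two raw API calls behind a small,
  # agent-oriented return value.
  class CrmStub:
      def get_customer(self, customer_id):
          return {"name": "Acme", "plan": "pro", "internal_crm_rev": 42}

  class BillingStub:
      def list_invoices(self, customer_id):
          return [{"id": "inv_1", "status": "overdue"}, {"id": "inv_2", "status": "paid"}]

  crm_api, billing_api = CrmStub(), BillingStub()

  def customer_health(customer_id: str) -> dict:
      """What the downstream agent sees: one call, one small dict."""
      profile = crm_api.get_customer(customer_id)        # raw API call 1
      invoices = billing_api.list_invoices(customer_id)  # raw API call 2
      overdue = [i for i in invoices if i["status"] == "overdue"]
      # Expose only what matters from the agent's perspective.
      return {"name": profile["name"], "plan": profile["plan"], "overdue_invoices": len(overdue)}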

by ianbicking

5/22/2025 at 1:43:00 AM

Evidence that the path to ASI is not extending the capabilities of LLMs, but instead distilling out and compiling self-improving algorithms externally in a symbolic application.

by symbolicAGI

5/22/2025 at 5:13:09 AM

Can you point to evidence of widespread use of the word 'agent' in this context from two years ago?

by fooker

5/22/2025 at 12:52:52 PM

Here are the top articles for the month of May 2023 on HN with "agent" in the title [0]. Looks like early days for the term but with a few large hits (like the HuggingFace announcement), which suggests OP was surprisingly precise in their citation of two years as the time window.

Also, since you're implicitly questioning OP's claim to have been saying this all along, here's a comment from September 2023 where they first said the same quote and said they'd been building agents for 3 months by that point [1]. That's close enough to 2 years in my book.

[0] https://hn.algolia.com/?dateEnd=1685491200&dateRange=custom&...

[1] https://news.ycombinator.com/item?id=37626877

by lolinder

5/22/2025 at 3:43:29 PM

We need more examples of posts like the one you made, where you call out a naysayer for being extremely wrong. Actually, folks can and often do tell the truth on the internet!

by Der_Einzige

5/22/2025 at 6:57:59 PM

I'm sure you can find it in chatbot documentation from the 90s. It's a generic term carried over from non-AI chat. People responding to support chats were called agents.

by nitwit005

5/22/2025 at 1:52:22 AM

I've been building agentic systems for my ecommerce business. I evaluated smolagents. It's elegant and has a lot of appealing qualities, but it adds a lot of complexity to the system. For some tasks it's perfect; dynamic reporting environments that can sort and aggregate data without a schema might be a good one. For most tasks it's just overkill. Gemini and OpenAI both offer Python interpreters as tools, which can cover a lot of the use cases for code agents.

It's true that cramming endless messages onto a stack of tool calls and interactions is not scalable; that is not a good way to use these tools. Most agentic workflows are short-lived. Complexity is managed with structure and discipline. These are well-known problems in software development, and the old lessons still apply to the new tools. Function calls can absolutely scale well in an agentic system, or they can become a mess, just like they can in any codebase.

Personally, building a system that works well is as much about managing cognitive load for the developer as it is about managing control flow and runtime performance. A simple solution that works well enough is usually superior to a clever solution with great performance. Composing function calls is the simple solution. Structured data can still be parsed and transformed the old-fashioned way. If the structure is unknown, even the cheap models are great at parsing. Managing complexity in an agentic system can be broken down into a problem of carefully managing application state. The message stack can be manipulated as needed to feed the models the active context. It's memory management in a constrained environment.

by jacob019

5/22/2025 at 10:20:30 AM

Great summary of the trade-offs in agentic systems. We've tackled these exact challenges as we built out our conversational product discovery tool for e-commerce at IsarTech [0].

I agree function composition and structured data are essential for keeping complexity in check. In our experience, well-defined structured outputs are the real scalability lever in tool calling. Typed schemas keep both cognitive load and system complexity manageable. We rely on deterministic behavior wherever possible, and reserve LLM processing for cases where schema-less data or ambiguity is involved. It's a great tool for mapping fuzzy user requests to a more structured deterministic system.

That said, finding the right balance between taking complexity out of high-entropy input and introducing complexity through chained tool calling is a tradeoff that needs to be struck carefully. In real-world commerce settings, you rarely get away with just one approach. Structured outputs are great until you hit ambiguous intents; then things get messy and you need fallback strategies.

[0] https://isartech.io/

by qu0b

5/22/2025 at 11:25:21 AM

Ambiguity must be explicitly handled, like uncertainty in predictive modeling, and that can be challenging. I run into trouble with task complexity. At a certain point even the best models start making dumb mistakes, and it's tough to draw the line for decomposing tasks. Role playing to induce planning and reflection helps, but I feel that upper bound. I've noticed that model performance declines when using constrained outputs. Last year I would go to all this trouble decomposing tasks in ways that seem silly given the current models. At the pace that things are moving, I expect to see models soon that can handle 10x the complexity and 10MB of context; I just hope I can afford to use them.

by jacob019

5/21/2025 at 8:35:15 PM

The issue is not with function calls but with HOW MCP got designed here and how you are using it.

Most MCPs are replicating an API, returning blobs of data.

1. This uses a lot of input context by formatting as JSON and escaping JSON inside what is already JSON. 2. It contains a lot of irrelevant information that you could save on.

So the issue is the MCP tool. It should instead flatten the data as much as possible, since it goes back through JSON encoding again, and remove some fields if needed.

So MCP SaaS offerings here are mainly API gateways.

That brings this noise! And most of all, they are not optimizing MCPs.
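For example, something like this inside the MCP tool instead of returning the raw blob (field names are invented, following a Jira-like shape):

  # Flatten the API response and drop fields the model never needs before it
  # goes back through JSON encoding.
  def flatten_issue(raw: dict) -> dict:
      return {
          "key": raw["key"],
          "title": raw["fields"]["summary"],
          "status": raw["fields"]["status"]["name"],
          "assignee": (raw["fields"].get("assignee") or {}).get("displayName"),
      }

  def tool_list_issues(api_response: dict) -> list[dict]:
      # Return a small flat list instead of the nested, escaped original.
      return [flatten_issue(i) for i in api_response["issues"]]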

by mehdibl

5/22/2025 at 5:36:51 AM

This is what GraphQL was designed for: only select the fields you really need. We've built an OSS Gateway that turns a collection of GraphQL queries into an MCP server to make this simple: https://wundergraph.com/mcp-gateway

by jensneuse

5/22/2025 at 1:33:08 AM

> 1. This is using a lot of input context in formating as JSON and escaping a Json inside already a JSON.

Isn't it a model problem that they don't respect complex json schemas?

by never_inline

5/22/2025 at 7:23:24 PM

MCP doesn't help but filtering is not always a good solution - sometimes you just need the agent to process a lot of data.

In that scenario, running code over the data while showing the model only a minimal view of it (e.g. a schema with an explanation) is a much better approach, and it will scale up to use cases of a certain complexity.

Even this system is not perfect: once your data definitions and orchestration grow too big, you'll face the same problems.

This should allow you to scale to pretty complex problems, though, while the naive approach of just embedding API responses in the chat fails quickly (I run into this issue frequently, maintaining a relatively simple system with a few tool calls).

The only proper solution is reproducing the level of granularity of human decisions in code and call this "decisional system" from an LLM (which would be then reduced to a mere language interface between human language and the internal system). Easier said than done, though.

by jokethrowaway

5/22/2025 at 6:52:55 AM

Just for fun, I used ChatGPT to reverse a string as my first test of using their API. I was amused at how much work it took to get the LLM to give me just the reversed string, and even then I didn't feel I could fully trust it. I learned my lesson, and now I have multiple LLMs check to see if the string has actually been reversed. Soon I'll be spinning up a data center to host the GPUs necessary to correctly count the number of Rs in strawberry.

by devoutsalsa

5/21/2025 at 7:34:15 PM

My team at Shopify just open sourced Roast [1] recently. It lets us embed non-deterministic LLM jobs within orchestrated workflows. Essential when trying to automate work on codebases with millions of lines of code.

[1] https://github.com/shopify/roast

by obiefernandez

5/21/2025 at 10:20:52 PM

Wow - Roast looks fantastic. You architected and put names and constraints on some things that I've been wrestling with for a while. I really like how you are blending the determinism and non-determinism. (One thing that is not obvious to me after reading the README a couple of times (quickly), is whether/how the LLM can orchestrate multiple tool calls if necessary and make decisions about which tools to call in which order. It seems like it does when you tell it to refactor, but I couldn't tell if this would be suitable for the task of "improve, then run tests. Repeat until done.")

by TheTaytay

5/21/2025 at 8:30:35 PM

Nice to see Ruby continuing to exist and deliver... even in the age of "AI"

by drewda

5/22/2025 at 12:40:50 PM

This is great! Reading the docs tickles my brain. Nice way to package up LLM functionality in a declarative way!

by bandoti

5/22/2025 at 4:53:56 AM

This looks pretty cool! I'm curious how these sort of workflows are being used internally at Shopify. Any examples you can share?

by crakhamster01

5/21/2025 at 11:11:14 PM

good stuff!

i just broke Claude Code Research Preview, and i've crashed ChatGPT 4.5 Pro Deep Research. and i have the receipts :), so i'm looking for tools that work

by The_Blade

5/21/2025 at 6:40:47 PM

I feel that the optimal solution is hybrid, not polarized. That is, we use a deterministic approach as much as we can, but leverage LLMs to handle the remaining complex parts that are hard to spec out or describe deterministically.

by hintymad

5/21/2025 at 6:42:41 PM

Yes - in particular, I think one interesting angle is to use the LLM to generate deterministic approaches (code). And then, if the code works, save it for future use and it becomes deterministic moving forward.
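A minimal sketch of that caching idea (the cache layout and the LLM call are placeholders):

  # Generate code once with the LLM, cache it, and reuse the cached version on
  # later runs so the step becomes deterministic from then on.
  import hashlib, pathlib

  CACHE_DIR = pathlib.Path("generated_tools")

  def get_or_generate(task_description: str, llm_generate) -> str:
      CACHE_DIR.mkdir(exist_ok=True)
      key = hashlib.sha256(task_description.encode()).hexdigest()[:16]
      path = CACHE_DIR / f"{key}.py"
      if path.exists():                      # already solved: deterministic from here on
          return path.read_text()
      code = llm_generate(task_description)  # only pay the LLM cost the first time
      path.write_text(code)
      return code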

by jngiam1

5/21/2025 at 6:47:07 PM

Yes, and the other way around: use the deterministic methods to generate the best possible input to LLM.

by hintymad

5/21/2025 at 7:33:05 PM

Can you give an example so we can visualise this?

by seunosewa

5/21/2025 at 9:18:59 PM

For instance, in an AIOps project we still run a number of time series algorithms and then feed the results along with the original time series data to the LLM. The LLM will produce much more relevant and in-depth analysis than using the raw data alone as input.
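Roughly this shape; the z-score check below is a toy stand-in for the actual algorithms we run:

  # Run deterministic time-series analysis first, then hand both the findings
  # and the raw series to the LLM.
  import statistics

  def find_anomalies(series: list[float], threshold: float = 3.0) -> list[int]:
      mean, stdev = statistics.mean(series), statistics.pstdev(series) or 1.0
      return [i for i, v in enumerate(series) if abs(v - mean) / stdev > threshold]

  def build_prompt(series: list[float]) -> str:
      anomalies = find_anomalies(series)
      return (
          f"Raw series: {series}\n"
          f"Deterministic analysis flagged indices {anomalies} as anomalous.\n"
          "Explain likely causes and what to check next."
      )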

by hintymad

5/21/2025 at 7:36:37 PM

I agree. You want to use as little LLM as possible in your workflows.

by nowittyusername

5/21/2025 at 8:10:03 PM

I've been developing software for decades without LLMs, turns out you can get away with very little!

by mort96

5/21/2025 at 11:32:39 PM

You need very little for software development. Linters, IDEs, debuggers, and even programming languages are all optional, but they sure help shorten deadlines!

by nomel

5/21/2025 at 9:07:48 PM

Sorry I’ve been out of the industry for the last year or so, is this madness really what people are doing now?

by padjo

5/21/2025 at 9:39:05 PM

No, not most people. But some people are experimenting.

No one has found anything revolutionary yet, but there are some useful applications to be sure.

by _se

5/21/2025 at 11:18:30 PM

Or, we have a hammer and we’re hitting things with it to see if they’re nails.

by padjo

5/21/2025 at 11:27:40 PM

I think this is true, with the pretext that we have never seen a hammer before and don't know what nails are yet.

by resonious

5/22/2025 at 9:17:04 AM

Or, more cynically, the hammer is made of glass, and everyone is required to believe that everything is a nail, lest the markets catch on.

At this point, it _really_ seems like a solution in desperate, near-the-end-of-the-runway, search of a problem.

by rsynnott

5/22/2025 at 6:16:34 AM

Some people believe that if you're not doing this now, you might be out of the industry again pretty soon.

by tobyhinloopen

5/22/2025 at 5:40:06 AM

My daily job by now is massively using AI to develop an AI agent designer, which means a lot of stuff like this.

I really did not even want this, it just happened.

by czechdeveloper

5/21/2025 at 7:47:47 PM

I'm slightly confused as to why you'd use a LLM to sort structured data in the first place?

by codyb

5/21/2025 at 7:56:17 PM

The goal is to do more complex data processing, like build dashboards, agentically figure out which tickets are stalled, do a quarterly review of things done, etc. Sorting is a tiny task in the bigger ones, but hopefully more easily exemplifies the problem.

by jngiam1

5/21/2025 at 9:02:37 PM

I don't understand how this can work. Given the probabilistic nature of LLMs, the more steps you have, the more chances something goes off. What good is the dashboard if you cannot be sure it was not partially hallucinated?

by kikimora

5/21/2025 at 9:34:00 PM

> What is good in the dashboard if you cannot be sure it was not partially hallucinated?

A lot of the time the dashboard contents don't actually matter anyway, they just need to look pretty...

On a serious note, the systems being built now will eventually be "correct enough most of the time" and that will be good enough (read: cheaper than doing it any other way).

by staunton

5/22/2025 at 7:36:50 PM

We're all just going to be glorified debuggers trawling through reams of generated code we've never seen before to root out the hallucinations.

by codyb

5/21/2025 at 10:32:43 PM

Probabilistic nature means nothing on its own. An LLM that can solve your deterministic task will easily assign 100% to the correct answer (or 99%; the noise floor can be truncated with a sampler). If it doesn't do that and your reply is unstable, it cannot solve the task confidently. That happens to all LLMs on a sufficiently complex task, but it's not related to their probabilistic nature.

Of course that still doesn't mean that you should do it. If you want to maximize the model's performance, offload as much distracting stuff as possible to the code.

by orbital-decay

5/22/2025 at 7:34:53 AM

Everything you described is already solved by Metabase and a few other tools. It takes a few hours to make daily reports there and the dashboard of your dreams.

And it's not like it changes every day. KPIs etc. stay the same for months, and then you can easily update it in an hour.

So what exactly does the LLM solve here?

by risyachka

5/21/2025 at 5:54:04 PM

That's kind of the entire premise of Hugging Face's smolagents, and while it does work really well when it works, it also increases the challenges in rolling back failed actions.

I guess one could in principle wrap the entire execution block into a distributed transaction, but LLMs try to make code that is robust, which works against this pattern as it makes failures hard to understand.

by avereveard

5/21/2025 at 6:18:43 PM

Agree, the smolagent premise is good; but the hard part is handling execution, errors, etc.

For example, when the code execution fails mid-way, we really want the model to be able to pick up from where it failed (with the states of the variables at the time of failure) and be able to continue from there.

We've found that the LLM is able to generate correct code that picks up gracefully. The hard part now is building the runtime that makes that possible; we've something that works pretty well in many cases now in production at Lutra.
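Our runtime is more involved than this, but a toy version of the resumable idea looks roughly like the following (assumes step results are JSON-serializable):

  # Persist each step's result so a re-generated script can pick up with the
  # same variable values instead of re-running earlier steps.
  import json, pathlib

  STATE = pathlib.Path("session_state.json")

  def load_state() -> dict:
      return json.loads(STATE.read_text()) if STATE.exists() else {}

  def run_step(name: str, fn, state: dict):
      if name in state:                     # already completed in a previous attempt
          return state[name]
      result = fn(state)
      state[name] = result
      STATE.write_text(json.dumps(state))   # checkpoint after every step
      return result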

by jngiam1

5/21/2025 at 7:09:58 PM

I think in principle you can make the entire API exposed to the LLM idempotent, so that it becomes irrelevant for the backend whether the LLM replays the whole action or just the failed steps.
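e.g. every write takes an idempotency key, so replaying the whole script is harmless (a sketch; the in-memory dict stands in for a real backend store):

  # Idempotent write: the same key replayed twice only applies once, so the
  # LLM can safely re-run the whole script after a failure.
  _applied: dict[str, dict] = {}

  def create_document(idempotency_key: str, payload: dict) -> dict:
      if idempotency_key in _applied:
          return _applied[idempotency_key]      # replay: return the original result
      doc = {"id": f"doc_{len(_applied) + 1}", **payload}
      _applied[idempotency_key] = doc           # record so replays are no-ops
      return doc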

by avereveard

5/21/2025 at 7:13:49 PM

That'd work well for read-only APIs, but we also want the LLMs to be able to update data, create documents, etc. Feels a bit harder when there are side-effects.

by jngiam1

5/21/2025 at 6:35:48 PM

Could you implement an actual state machine and have your agent work with that?

by hooverd

5/21/2025 at 7:08:35 PM

that's the LangGraph idea. each LangGraph node can then be a smolagent

latency tho, would be unbearable for real time.

by avereveard

5/21/2025 at 8:41:37 PM

I think that there may be another solution for this: have the LLM write valid code that calls the MCPs as functions. See it like a Python script where each MCP is mapped to a function. A simple example:

  def process(param1, param2):
      my_data = mcp_get_data(param1)
      sorted_data = mcp_sort(my_data, by=param2)
      return sorted_data

by bguberfain

5/21/2025 at 8:58:55 PM

Yes! If you want to see how this can work in practice, check out https://lutra.ai ; we've been using a similar pattern there. The challenge is making the code runtime work well for it.

by jngiam1

5/21/2025 at 8:35:55 PM

LLMs clearly struggle when presented with JSON, especially large amounts of it.

There's nothing stopping your endpoints from returning data in some other format. LLMs actually seem to excel with XML for instance. But you could just use a template to define some narrative text.

by CSMastermind

5/21/2025 at 8:55:26 PM

I'm consistently surprised that people don't use XML for LLMs as the default given XML comes with built-in semantic context. Convert the XML to JSON output deterministically when you need to feed it to other pipelines.
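The conversion is cheap in either direction; a toy sketch (no escaping or nesting handled here):

  # Render a record as XML-ish tags for the prompt, keep JSON for the rest of
  # the pipeline. Tag names come straight from the keys.
  def to_tagged(record: dict, tag: str) -> str:
      inner = "".join(f"<{k}>{v}</{k}>" for k, v in record.items())
      return f"<{tag}>{inner}</{tag}>"

  print(to_tagged({"name": "Ada", "role": "engineer"}, "employee"))
  # <employee><name>Ada</name><role>engineer</role></employee>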

by ryoshu

5/22/2025 at 1:24:37 AM

We've been using Markdown tables to return data to the LLM with some success

by crabl

5/21/2025 at 9:57:00 PM

Any reason for this for my own learning? Was XML more prevalent during training? Something better about XML that makes it easier for the LLM to work with?

XML seems more text heavy, more tokens. However, maybe more context helps?

by iJohnDoe

5/22/2025 at 12:48:48 AM

It's in the official OpenAI prompting guidelines: https://cookbook.openai.com/examples/gpt4-1_prompting_guide#...

But it's also evident for anyone who has used these models. It's also not unique to OpenAI, this bias is prevalent in every model I've ever tested from GPT 3 to the latest offerings from every single frontier model provider.

As to why I would guess it's because XML bakes semantic meaning into the tags it uses so it's easier for the model to understand the structure of the data. <employee>...</employee> is a lot easier to understand than { "employee": { ... }}.

I would guess that the models are largely ignoring the angular brackets and just focusing on the words which have unique tokens and thus are easier to pair up than the curly braces that are the same throughout JSON. Just speculation on my part though.

And this only applies to the input. Earlier models struggled to reliably output JSON so they've been both fine-tuned and wrapped in specific formatters that reliably force clean JSON outputs.

by CSMastermind

5/22/2025 at 7:04:24 PM

I've seen the suggestion it's because it's been trained on a lot of HTML, but the GPT docs suggest using markdown as a default choice, which is relatively less common.

by nitwit005

5/21/2025 at 9:48:30 PM

I am kind of confused: why can't you just create a new MCP tool that encapsulates parsing and the other required steps together in a code block?

Wouldn't this be more reliable than expecting the LLM to generate working code 100% of the time?

by arjunchint

5/21/2025 at 10:09:44 PM

You should for sure do this for common post processing tasks. However, you're usually not going to know all the types of post-processing users will want to do with tool call output at design-time.

by Centigonal

5/21/2025 at 10:19:44 PM

I would really like to see output-aware LLM inference engines. For example, imagine if the LLM output some tokens that meant "I'm going to do a tool call now", and the inference engine (e.g. llama.cpp) changed the grammar on the fly so the next token could only be valid for the available tools.

Or, if I gave the LLM a list of my users and asked it to filter based on some criteria, the grammar would change to only output user IDs that existed in my list.

I don't know how useful this would be in practice, but at least it would make it impossible for the LLM to hallucinate for these cases.
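For the user-ID case you can get part of the way there today by generating a grammar from the list and handing it to an engine that supports constrained decoding (llama.cpp accepts GBNF grammars, for example). A rough sketch of generating such a grammar; the plumbing into a particular engine will vary:

  # Build a GBNF-style grammar that only allows IDs from my own list; an
  # engine with grammar support can then refuse to sample anything else.
  def ids_to_gbnf(user_ids: list[str]) -> str:
      alternatives = " | ".join(f'"{uid}"' for uid in user_ids)
      # root is a comma-separated list of known IDs and nothing else
      return 'root ::= id (", " id)*\n' + f"id ::= {alternatives}\n"

  print(ids_to_gbnf(["u_101", "u_205", "u_317"]))
  # root ::= id (", " id)*
  # id ::= "u_101" | "u_205" | "u_317"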

by stavros

5/21/2025 at 11:14:47 PM

Of course it would hallucinate. It would just pick arbitrary/wrong values.

by molf

5/21/2025 at 11:19:28 PM

It would be wrong, but it wouldn't hallucinate non-existent IDs.

by stavros

5/21/2025 at 9:41:05 PM

> Allowing an execution environment to also access MCPs, tools, and user data requires careful design to where API keys are stored, and how tools are exposed.

If your tools are calling APIs on behalf of users, it's better to use OAuth flows so users of the app can give explicit consent to the APIs/scopes they want the tools to access. That way, tools use scoped tokens to make calls instead of hard-to-manage API keys (or even client credentials).

by norcalkc

5/22/2025 at 12:45:28 AM

Agreed, OAuth is certainly preferred for many reasons, but replace "API keys" with "OAuth access tokens" and you have the same fundamental challenge of ensuring an LLM or untrusted code never has access to the user's secrets.

by vrv

5/21/2025 at 9:46:21 PM

Do you know of any examples which use MCP and oauth cleanly?

by iandanforth

5/22/2025 at 6:23:59 AM

This is something I have been attempting for quite a while now. One simple tool I started is a deterministic data extraction system where AI helps find the data to be extracted, and then the code tries to "template" it. Once we have the template, extraction on any similar string happens deterministically.

Think of extracting parts of an email subject. The LLM is great at going through unseen subject lines and telling us what can be extracted and where. For things like dates, times, city, country, etc., we can then deterministically re-run the template on new strings to extract them.

https://github.com/pixlie/determined
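The templating step is conceptually something like this (a simplified sketch of the idea, not the repo's actual code):

  # The LLM tells us which substrings are fields; we turn the rest of the
  # subject into a regex template, and later subjects are extracted
  # deterministically.
  import re

  def build_template(subject: str, fields: dict[str, str]) -> re.Pattern:
      pattern = re.escape(subject)
      for name, value in fields.items():
          pattern = pattern.replace(re.escape(value), f"(?P<{name}>.+?)", 1)
      return re.compile(pattern + "$")

  tmpl = build_template(
      "Your flight to Lisbon departs on 2025-06-01",
      {"city": "Lisbon", "date": "2025-06-01"},
  )
  match = tmpl.match("Your flight to Tokyo departs on 2025-07-15")
  print(match.groupdict())   # {'city': 'Tokyo', 'date': '2025-07-15'}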

by brainless

5/21/2025 at 9:43:49 PM

We’ve been using smolagents, which takes this approach, and are impressed.

Slight tangent, but as a long term user of OpenAI models, I was surprised at how well Claude Sonnet 3.7 through the desktop app handles multi-hop problem solving using tools (over MCP). As long as tool descriptions are good, it’s quite capable of chaining and “lateral thinking” without any customisation of the system or user prompts.

For those of you using Sonnet over API: is this behaviour similar there out of the box? If not, does simply pasting the recently exfiltrated[1] “agentic” prompt into the API system prompt get you (most of the way) there?

[1] https://news.ycombinator.com/item?id=43909409

by darkteflon

5/21/2025 at 10:15:44 PM

How does it compare to MCP servers?

by 3abiton

5/21/2025 at 10:59:34 PM

Not sure if I correctly understand your question. I was saying that Sonnet 3.7 in the desktop app is good out-of-the-box at orchestrating tools exposed as MCP servers and asking whether that behaviour is also present over the Anthropic API or, if not, whether copy-pasting the exfiltrated system prompt gets you there.

by darkteflon

5/21/2025 at 5:57:27 PM

> Most execution environments are stateful (e.g., they may rely on running Jupyter kernels for each user session). This is hard to manage and expensive if users expect to be able to come back to AI task sessions later. A stateless-but-persistent execution environment is paramount for long running (multi-day) task sessions.

It's interesting how architectural patterns built at large tech companies (for completely different use-cases than AI) have become so relevant to the AI execution space.

You see a lot of AI startups learning the hard way the value of event sourcing and (eventually) durable execution, but these patterns aren't commonly adopted on Day 1. I blame the AI frameworks.

(disclaimer - currently working on a durable execution platform)

by abelanger

5/21/2025 at 6:39:42 PM

I see all of this as a constant negotiation of what is and isn't needed out of traditional computing. Eventually they find that what they want from any of it is determinism, unfortunately for LLMs.

by th0ma5

5/21/2025 at 6:38:05 PM

Maybe we just need models that can reference spans by start:end range. Then they can pass arguments by reference instead of explicit quotation. We can use these spans as answers in extractive QA tasks, or as arguments for a code block, or to construct a graph from pointers, and do graph computation. If we define a "hide span" operation the LLM could dynamically open and close its context, which could lead to context size reduction. Basically - add explicit indexing to context memory, and make it powerful, the LLM can act like a CPU.
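Roughly the kind of context memory I'm imagining, just to make it concrete:

  # Context memory with explicit span handles: the model refers to text by
  # (start, end) and can hide spans it no longer needs (assumes spans don't
  # overlap).
  class SpanContext:
      def __init__(self, text: str):
          self.text = text
          self.hidden: list[tuple[int, int]] = []

      def get(self, start: int, end: int) -> str:
          """Pass-by-reference: resolve a span only when something consumes it."""
          return self.text[start:end]

      def hide(self, start: int, end: int) -> None:
          self.hidden.append((start, end))

      def visible(self) -> str:
          # Rebuild the prompt with hidden spans elided to shrink the context.
          out, pos = [], 0
          for start, end in sorted(self.hidden):
              out.append(self.text[pos:start])
              out.append("[…]")
              pos = end
          out.append(self.text[pos:])
          return "".join(out)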

by visarga

5/21/2025 at 8:17:52 PM

This is exactly what I've encountered, at least with Claude: it writes out huge artifacts (static ones retrieved from the file system or wherever) character for character. What I'm going to try this weekend is just integrating a Redis cache or SQLite into the MCP tool calls, so Claude doesn't have to write everything out character by character... no idea if it will work as expected...

also looking into "fire and forget" tools, to see even if that is possible

by fullstackchris

5/21/2025 at 8:36:14 PM

You don't have to use a full write.

Use grep and edit lines and sequences instead of full files.

This way you can edit files with 50k lines of code without issue, while Claude will blow up if you ever try to write such a file.

by mehdibl

5/22/2025 at 12:13:13 PM

In that case grep is fine, but if I have a specific artifact I need to transport from one function to another, I'll need some sort of background set / get.

by fullstackchris

5/22/2025 at 1:37:45 PM

Another example.

I have web page fetching.

But web page fetching brings a lot of JS / noise.

So the fetched page instead goes through https://jina.ai/reader/ and I get Markdown. But is this enough? No, there is still a lot of links and stuff, so I do another pass to strip a lot, like URLs I usually don't need, as the focus here is on content.

by mehdibl

5/22/2025 at 1:35:03 PM

Then the artifact is saved ONCE in the filesystem. I don't get why you use artifacts in the first place.

It gets edited in the FS, then the next agent/tools read it directly.

The issue is the workflow here, as you make everything go through the model and combine tools you control with tools you don't control (Claude Artifacts). By default I disable EVERYTHING from Claude and use the filesystem. With that I have git diff to check the changes, and can, as I said, do granular changes and edits.

As I said, the issue is in your workflow.

by mehdibl

5/22/2025 at 3:34:33 AM

It’s because MCP return types are so basic. It’s text. Or image. Or one other type in the protocol I forget.

It’s not well thought out. I’ve been building one with the new auth spec and their official code and tooling is really lacking.

It could have been so much simpler and straight forward by now.

Instead you have 3 different server types, and one is deprecated already (SSE). It's almost funny.

by zackify

5/21/2025 at 9:54:24 PM

What are the current best options for sandboxed execution environments? HuggingFace seems to have a tie-up with E2B, although by default smolagents runs something ephemeral in-process. I feel like there must be a good Docker container solution to this that doesn’t require signing up to yet another SaaS. Any recommendations?

by darkteflon

5/22/2025 at 6:41:43 AM

Are you looking for an open-source sandboxing solution? Self hosting is available for E2B. You still have to subscribe to a SaaS for ephemeral cloud compute though.

by ATechGuy

5/21/2025 at 10:11:10 PM

Try gVisor

by colonCapitalDee

5/22/2025 at 12:40:17 AM

That sounds like a category error? An alternative OCI runtime is not what GP asked for.

by codethief

5/22/2025 at 5:37:19 PM

Using gVisor to run a Docker container creates a sandboxed execution environment inside the Docker container

by colonCapitalDee

5/21/2025 at 11:03:10 PM

In the example request, they want a list of issues in their project but don’t need the ID of each issue. But, what about when you want a list of issues and DO want the ID?

by yahoozoo

5/22/2025 at 12:40:37 AM

If the output schema specifies an id field, the LLM can write a code snippet that references it based on the context of the subsequent request, but the LLM doesn't need to observe the underlying value unless necessary. E.g., it can pass the 'id' opaquely to another call that receives the "id" as an input. If the user specifically wants to see the "id", the code orchestration approach can have the LLM just print the content.
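Concretely, the generated code can look something like this (tool names invented); the id only exists as a variable inside the snippet, never in the model's context:

  # The id flows between calls as a Python variable, so the model never has to
  # read or re-type the value itself.
  def generated_snippet(project_tools):
      issues = project_tools.list_issues(project="ACME")
      stalled = [i for i in issues if i["status"] == "stalled"]
      for issue in stalled:
          # 'id' is passed opaquely; it only gets printed if the user asks for it
          project_tools.add_comment(issue_id=issue["id"], body="Ping: any updates?")
      return {"stalled_count": len(stalled)}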

by vrv

5/21/2025 at 11:08:32 PM

I had the same question.

by wyett

5/21/2025 at 10:01:13 PM

That's MCP for you.

MCP is literally just a wrapper around an API call, but because it has some LLM buzz sprinkled on top, people expect it to do some magic, when they wouldn't expect the same magic from the underlying API.

by iLoveOncall

5/22/2025 at 12:16:24 AM

It is just a wrapper around an API call. And that's all you need for magic.

Explain how I would do this without an LLM:

https://blog.nawaz.org/posts/2025/May/gemini-figured-out-my-...

by BeetleB

5/22/2025 at 9:25:26 AM

Is this a trick question? You lay out exactly how you would do this without an LLM in the prompt...

by iLoveOncall

5/22/2025 at 5:52:05 PM

The whole point of this is I didn't need to come up with a strategy for figuring out a kid's name. The LLM has enough intelligence to do it.

Put another way, now that I have the MCP in place, I no longer need to write any programs to do these tasks (unless you consider my prompt to be the program).

I used it to find the name of a different person's kid. And it used a different set of queries than the one I sent you. How are you going to encode all possibilities merely by using an API?

I use the exact same MCP tool to summarize emails, get me links from emails etc. You want me to write a program for each use case when I can do it all with just one program?

by BeetleB

5/22/2025 at 7:57:13 AM

In which the industry reinvents the concept of a schema-ful API surface like the kinds we've had for 30 years. Rediscovering the past shouldn't be revolutionary

by quotemstr

5/21/2025 at 11:30:43 PM

I’m confused as to why no one is just having LLMs dynamically produce and expose new tools on the fly as combinations of many small tools or even write new functions from scratch, to handle cases where there isn’t an ideal tool to process some input with one efficient tool call.

by deadbabe

5/21/2025 at 11:42:16 PM

I am building a company in this space, so can hopefully give some insight [0].

The issue right now is that both (1) function calling and (2) codegen just aren't very good. The hype train far exceeds capabilities. Great demos, like fetching some Stripe customers, generating an email, or getting the weather, work flawlessly. But anything more sophisticated goes off the rails very quickly. It's difficult to get models to reliably call functions with the right parameters, to set up multi-step workflows, and more.

Add codegen into the mix and it's hairier. You need a deployment and testing apparatus to make sure the code actually works... and then what is it doing? Does it need secret keys to make web requests to other services? Should we rely on functions for those?

The price / performance curve is a consideration, too. Good models are slow and expensive. Which means their utility has to be higher in order to charge a customer to pay for the costs, but they also take a lot longer to respond to requests which reduces perception of value. Codegen is even slower in this case. So there's a lot of alpha in finding the right "mixture of models" that can plan and execute functions quickly and accurately.

For example, OpenAI's GPT-4.1-nano is the fastest function calling model on the market. But it routinely tries to execute the same function twice in parallel. So if you combine it with another fast model, like Gemini Flash, you can reduce error rates - e.g. 4.1-nano does planning, Flash executes. But this is non-obvious to anybody building these systems until they've tried and failed countless times.

I hope to see capabilities improve and costs and latency trend downwards, but what you're suggesting isn't quite feasible yet. That said I (and many others) are interested in making it happen!

[0] https://instant.bot

by keithwhor

5/22/2025 at 12:30:50 AM

Well, in the meantime we could just have the LLM shoot Jira tickets at human developers to build out the new tools it requires ASAP, and until they're done have a placeholder message returned to the client? Could be a good way to keep developers working constantly. And eventually, when the tech is good, you replace the human devs with LLMs.

by deadbabe

5/21/2025 at 8:05:45 PM

> TL;DR: Giving LLMs the full output of tool calls is costly and slow.

Is this true for all tool calls? Even if the tool returns little data?

by koakuma-chan

5/21/2025 at 8:23:11 PM

From my experience it's about the speed of a very competent human. One of my favorite custom tools I've written is just access to a series of bash commands. I haven't tested with others, but Claude very quickly browses through files, reads them, and so on to do whatever it was you prompted. But even then it is all contextual - for example, I had to remove 'find' because, as one would expect, running 'find' against a huge directory set is very slow!

by fullstackchris

5/21/2025 at 11:13:08 PM

Well, the bottleneck there would usually be the LLM, because, e.g., a tool to inspect a filesystem directory would be very fast, and it wouldn't necessarily return a lot of data, so I am confused what this article is really trying to say.

by koakuma-chan