3/26/2025 at 9:27:48 PM
Today MCP added Streamable HTTP [0], which is a huge step forward because it doesn't require an "always-on" connection to remote HTTP servers. However, if you look at the specification, it's clear that bringing the LSP-style paradigm to remote HTTP servers adds a bunch of extra complexity. This is a tool call, for example:
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "get_weather",
    "arguments": {
      "location": "New York"
    }
  }
}
Which traditionally would just be an HTTP POST to `/get_weather` with `{ "location": "New York" }`. I've made a suggestion to remove some of this complexity [1] and fall back to a traditional HTTP server, where a session can be negotiated with an `Authorization` header and we rely on traditional endpoints plus OpenAPI + JSON Schema endpoint definitions. I think it would make server construction a lot easier, and web frameworks would not have to be materially updated to adhere to the spec -- perhaps just adding a single endpoint.
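For comparison, a quick sketch of the two shapes (the flat route and body here are illustrative, not part of any spec):

```python
import json

# The MCP Streamable HTTP tool call wraps the invocation in a JSON-RPC 2.0 envelope.
jsonrpc_call = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_weather",
        "arguments": {"location": "New York"},
    },
}

# The "traditional" equivalent: the tool name becomes the route,
# and the arguments become the request body, unchanged.
plain_route = "/" + jsonrpc_call["params"]["name"]
plain_body = jsonrpc_call["params"]["arguments"]

print(plain_route)             # the POST endpoint
print(json.dumps(plain_body))  # the POST body
```

Same information either way; the envelope is pure overhead from the plain-HTTP point of view.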
[0] https://spec.modelcontextprotocol.io/specification/2025-03-2...
[1] https://github.com/modelcontextprotocol/specification/issues...
by keithwhor
3/26/2025 at 10:04:38 PM
I fully agree. MCP is just too complex for what it is supposed to do. I don't see what the benefit is. It is the kind of thing that has the potential to be a huge time waste because it requires custom dev tools to develop and troubleshoot.
It is not even a protocol in the traditional sense - more of a convention.
And of course we will implement it, like everyone else, because it is gathering momentum, but I do not believe it is the right approach at all. A simpler HTTP-based OpenAPI service would have been a lot better and it is already well supported in all frameworks.
The only way I can make sense of MCP is in the context of STDIO.
by _pdp_
3/26/2025 at 10:11:21 PM
The `stdio` approach for local services makes complete sense to me, including using JSON-RPC. But for remote HTTP MCP servers there should be a dead simple solution. A couple of years ago OpenAI launched plugins as `.well-known/ai-plugin.json`, which contained a link to your API spec; ChatGPT could read it, and voila. So all you needed to implement was this one endpoint and ChatGPT could read your whole API. It was pretty cool.
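For reference, the manifest was roughly this shape (abbreviated from memory; the name, description, and URL here are illustrative):

```json
{
  "schema_version": "v1",
  "name_for_model": "weather",
  "description_for_model": "Get current weather conditions for a location.",
  "api": {
    "type": "openapi",
    "url": "https://example.com/openapi.json"
  }
}
```

One static file pointing at an OpenAPI spec was the entire integration surface.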
ChatGPT Plugins failed, however. I'm confident it wasn't because of the tech stack; it was because the integration demand wasn't really there yet: companies were in the early stages of building their own LLM stacks, and ChatGPT desktop didn't exist. It also wasn't marketed as a developer-first global integration solution: little to no consistent developer advocacy was done around it. It was marketed towards consumers, and it was pretty unwieldy.
IMO the single-endpoint solution and adhering to existing paradigms is the simplest and most robust solution. For MCP, I'd advocate that this is what the `mcp/` endpoint should become.
Edit: Also tool calling in models circa 2023 was not nearly as good as it is now.
by keithwhor
3/26/2025 at 10:24:33 PM
I agree. What OpenAI did was simple and beautiful. Also, I think there is a fundamental misunderstanding that MCP services are plug and play. They are not. Function names and descriptions are literally prompts, so it is almost certain you will need to modify the names or descriptions to add some nuance to how you want these to be called. Since MCP servers are not really meant to be extensible in that way, the only other alternative is to add more context into the prompt, which is not easy unless you have a ton of experience. Most of our customers fail at prompting.
The reason I like the ai-plugin.json approach is that you don't have to change the API to make the description of a function a little bit different. One day MCP might support this, but it will add another layer of complexity that could have been avoided with a remotely hosted JSON / YAML file.
by _pdp_
4/1/2025 at 8:11:58 PM
It’s not just about passing prompts — in production systems like Ramp’s, they had to build a custom ETL pipeline to process data from their endpoints and host a separate database to serve structured transaction data into the LLM context window effectively. We’ve seen similar pre-processing strategies in many efficient LLM-integrated APIs — whether it’s GraphQL shaping data precisely, SQL transformations for LLM compatibility, or LLM-assisted data shaping like Exa does for Search.
https://engineering.ramp.com/ramp-mcp
PS: When building agents, prompt and context management becomes a real bottleneck. You often need to juggle dynamic prompts, tool descriptions, and task-specific data — all without blowing the context window or inducing hallucinations. MCP servers help solve this by acting as a "plug-and-play" prompt loader — dynamically fetching task-relevant prompts or tool wrappers just-in-time. This leads to more efficient tool selection, reduced prompt bloat, and better overall reasoning for agent workflows.
by sarthak_chauhan
3/26/2025 at 10:29:24 PM
The good thing to note is that (AFAIK) MCP is intended to be a collaborative, industry-wide effort, whereas plugins were OpenAI-specific. So, hopefully, we can contribute and help direct the development! I think this dialogue is helpful, and I'm hoping the creators respond via GitHub or otherwise.
by keithwhor
3/27/2025 at 12:00:42 AM
It took me a minute to even understand this comment, because for me the “obvious” use-case for MCP is local filesystem tasks, not web requests. Using MCP to manipulate files is my primary LLM use-case and has been ever since Anthropic released it and integrated it into Claude Desktop. I understand where you’re coming from, but I suspect that the idea here is to build something that is more “filesystem first.”
by mordymoop
3/27/2025 at 12:11:55 AM
That makes sense. But if that's the case, I think we should call a spade a spade and differentiate "Local-first MCP" and "Remote MCP", because what (many? most?) companies are really trying to do is integrate with the latter. Which is where you see this sort of feedback, where a bunch of us API engineers are like: "there's already a well-trodden path for doing all of this stuff. Can we just do that and agree that it's the standard?"
by keithwhor
3/26/2025 at 11:30:36 PM
100%. I know I’m in the “get off my lawn” phase of my career when I see things like MCP and LangChain, but I know I would have been excited about them earlier in my career.
by tlrobinson
3/26/2025 at 11:57:43 PM
LangChain is an objectively terrible Frankenstein's monster of an API. If you were a good developer in your youth, you'd have still held it in contempt, and you should treat MCP with caution. The MCP API is pretty bad, too; it's just that a paradigm is starting to emerge regarding modularity, integration, and agentic tooling, and MCP happens to be the only real shot in that direction at this particular moment.
by soulofmischief
3/27/2025 at 5:26:58 PM
How is the MCP API bad? It uses simple, widely available, standards-based transports while still allowing custom transport implementations, along with a simple API that can be used easily without any libraries, in any programming language under the sun.
by zambachi
3/27/2025 at 4:48:30 AM
Could you elaborate on your issues with LangChain? We're kinda headed towards using it, as it seems to be a flexible enough abstraction that is relatively stable to work with, so I'd like to know if I'm overlooking something..?
by gnatolf
3/27/2025 at 5:01:21 AM
A lot of folks use it to get started quickly and then realize the unnecessary abstractions are obfuscating the actual hard parts.
by gregorymichael
3/27/2025 at 8:28:53 AM
A big soup of untyped JSON blobs and Python, all working highly asynchronously in complicated ways. Is this variable available in my agent’s state?
Depends! Is it an agent that started an hour ago before you made that other change? Is it in the manager state or not? Did the previous agent run correctly or not? Did something else go wrong?
by davedx
3/27/2025 at 8:12:51 AM
You spend more time fighting the tool/framework than working on the product you’re trying to build.
by jinushaun
3/29/2025 at 5:33:20 PM
Cryptic errors that will make you tear your hair out, and docs that are constantly changing
by anxman
3/27/2025 at 8:26:46 AM
I’m seriously considering getting out of IT because of it
by davedx
3/27/2025 at 12:04:26 PM
because of MCP and langchain?by romankolpak
3/30/2025 at 3:42:16 PM
Eh, more the wider cultural effects of them. Vibe coding, everyone now creating kitchen-sink apps that do everything under the sun, k8s everywhere, agents everywhere. It feels like a big part of the industry has lost all focus and is sprinting towards some endless vague panacea instead of focusing on solving specific, well-defined business problems. It’s always been a bit like this, but it feels particularly bad since AI hit mainstream coding tooling. Just my 2c :)
by davedx
3/31/2025 at 3:59:46 PM
I tend to agree here. A lot of the use cases are around STDIO and using MCPs within existing apps like Cursor/Windsurf etc. Most developers want to build and integrate their own tools, and the complexity required here (building out the server, bundling a client, and the additional latency) is probably not worth the ROI. There also seems to be a notion that security is "handled" by MCP, which might be premature. I mean, there are several good decisions in the standard (TLS with SSE, sampling, etc.), but critical choices around auth and scope for agents or 3rd-party MCP providers are still wide open, IMO. Overall, a step in the right direction, but still early. I wrote more on this here: https://newsletter.victordibia.com/p/no-mcps-have-not-won-ye...
by vykthur
4/1/2025 at 8:24:22 PM
How much of this is already addressed by Cloudflare’s Remote MCP setup? https://blog.cloudflare.com/remote-model-context-protocol-se...
by sarthak_chauhan
3/26/2025 at 10:25:11 PM
Yeah, I had more luck with just giving an AI the OpenAPI spec and letting it figure everything out. I like a lot about MCP (structure, tool guidance, etc.), but couldn't it just have been a REST API and a webserver?
by pcarolan
3/27/2025 at 5:22:05 PM
I think people often think of their specific use-case and tend to forget the bigger picture. MCP does not force one transport or the other, and that is great—use any transport you want as long as it uses JSON-RPC as the payload. The two built-in transports are also extremely minimalistic, and the SSE transport uses regular HTTP—no need for WebSockets or other heavier dependencies, because SSE events are lightweight and broadly supported.
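To illustrate how little machinery SSE needs, a frame is just prefixed text lines over a plain HTTP response (a sketch of the wire format, not the MCP SDK's actual code):

```python
def sse_event(data, event=None):
    """Format one Server-Sent Event frame: plain text lines, no framing protocol."""
    lines = []
    if event:
        lines.append("event: " + event)
    # Each line of the payload gets its own "data:" prefix per the SSE format.
    for chunk in data.splitlines() or [""]:
        lines.append("data: " + chunk)
    # A blank line terminates the event.
    return "\n".join(lines) + "\n\n"

# A JSON-RPC response delivered as a single SSE frame:
frame = sse_event('{"jsonrpc": "2.0", "id": 1, "result": {}}', event="message")
print(frame)
```

Any HTTP client that can read a streaming body can parse this; no extra dependency required.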
by zambachi
3/27/2025 at 3:26:32 PM
Second that. A lot of our use cases are "remote tooling", i.e. calling APIs. Implementing an MCP server to wrap APIs seems very complex, both in terms of implementation and infrastructure. We have found GraphQL to be a great "semantic" interface for API tool definitions, since the GraphQL schema allows for descriptions in the spec and is very humanly readable. For "data-heavy" AI use cases, the flexibility of GraphQL is nice because you can expose different levels of "data depth", which is very useful in controlling cost (i.e. context window) and performance of LLM apps.
In case anybody else wants to call GraphQL APIs as tools in their chatbot/agents/LLM apps, we open sourced a library for the boilerplate code: https://github.com/DataSQRL/acorn.js
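For illustration, a schema along these lines (hypothetical types and descriptions, not from any real API) shows how SDL descriptions can double as tool documentation for the LLM:

```graphql
type Transaction {
  id: ID!
  amount: Float!
}

type Query {
  "Transactions for a merchant; this description doubles as the tool prompt"
  transactions(
    "Merchant name to filter by"
    merchant: String!
    "Maximum rows to return; controls context-window cost"
    limit: Int = 10
  ): [Transaction!]!
}
```

The `limit` argument is the "data depth" control mentioned above: the model can request exactly as much data as the context budget allows.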
by mbroecheler
3/27/2025 at 8:46:54 PM
Oh wow. Amazing. I did not think of that. I am not a fan of GraphQL, but you might be onto something here. I have not checked the code, and perhaps this is not the right channel for this, but my read is that this library allows any generic GraphQL server to be exposed in this sort of way?
by _pdp_
3/28/2025 at 11:40:57 PM
Exactly, any generic GraphQL server can be turned into a set of LLM tools with minimal overhead and complexity.by mbroecheler
3/27/2025 at 2:54:48 PM
I am actually working on such a thing, and I want to get it right. This is like the RSS vs. Atom or Jabber/XMPP wars, or OpenGraph vs. Twitter’s meta tags, etc. I want interoperability, which is cool, but I also want it to seamlessly interoperate with human roles and services. What is the best way to connect with you? I would like to discuss ideas and protocols if you’re up for that.
by EGreg
3/27/2025 at 8:05:59 AM
Exactly my points: https://taoofmac.com/space/notes/2025/03/22/1900
by rcarmo
4/1/2025 at 8:43:56 PM
https://engineering.ramp.com/ramp-mcp
https://blog.cloudflare.com/remote-model-context-protocol-se...
by sarthak_chauhan
3/27/2025 at 7:14:31 AM
"Tool calling" is just one part of MCP, there are more things like "Sampling" which allow the server itself to initiate stuff on the client. As for tool calling, having a layer like MCP makes sense because there a lot of things which don't have a REST-API + may need direct access to the computer (filesystem, processes, etc).Examples:
* Running SQL commands on a DB or a Redis instance.
* Launching Docker containers, SSHing to a server and running some command.
* Reading a file and extracting relevant information from it, like OCR.
* Controlling a remote browser using the WebDriver protocol, or having some kind of persistent connection to a backend.
As for pure REST API use cases, I think MCP serves what Swagger / OpenAPI Spec are meant to do, i.e. enforce some kind of format and give each endpoint a "name" plus a list of params which the LLM can invoke. The issue is that there is no standardised way to pass these API specs to LLMs as tools (maybe something can be built in this space). In the future, I can easily see some kind of library/abstraction that allows an MCP server to parse an existing API spec file to expose those APIs as tools, which can be combined with some local state on the computer to allow stateful interactions with a REST API.
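A sketch of what such a spec-to-tools abstraction could look like (a hypothetical helper, not an existing library; real specs would also need parameter schemas, auth, and $ref resolution):

```python
def openapi_to_tools(spec):
    """Flatten an OpenAPI document's paths into LLM tool definitions."""
    tools = []
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            # Prefer operationId as the tool name; fall back to a derived one.
            name = op.get("operationId") or method + "_" + path.strip("/").replace("/", "_")
            tools.append({
                "name": name,
                "description": op.get("summary", ""),
                "method": method.upper(),
                "path": path,
            })
    return tools

# A toy one-endpoint spec to exercise the helper:
spec = {
    "paths": {
        "/get_weather": {
            "post": {"operationId": "get_weather",
                     "summary": "Current weather for a location"}
        }
    }
}
tools = openapi_to_tools(spec)
```

The tool list produced this way could be served verbatim from an MCP `tools/list` response, which is the bridge the comment is describing.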
by pulkitsh1234
3/27/2025 at 4:52:57 PM
I just made a tool which parses any OpenAPI spec to MCP spec: https://www.open-mcp.org (literally just deployed so I don't know if the DNS has propagated globally yet..)
by dan-kwiat
3/27/2025 at 8:37:58 PM
I received a 500 response when I attempted to create an MCP server for an API. I was using this URL: https://engineapi.moonstream.to/metatx/openapi.json
The response body:
{success: false, error: "Server URL must start with https:// or http://"}
by zomglings
3/27/2025 at 9:57:07 PM
Thanks, I just added support for relative URLs; try again. Your OpenAPI spec defines a relative base URL for the server, /metatx, but no domain. You can now specify the full base URL in your MCP client's environment variables: `OPEN_MCP_BASE_URL="https://engineapi.moonstream.to/metatx"`
by dan-kwiat
3/28/2025 at 12:58:09 AM
very cool, I tried to feed it the toolhouse.ai openapi spec and it worked VERY quickly!! wow
by orliesaurus
3/27/2025 at 1:58:02 AM
My bias is I generally think RPC is much nicer than REST. But what's kind of funny here is that we have
(1) an RPC to call a (remote) method called "tools/call", which is a method that
(2) calls a method called get_weather
Both methods have arguments. But the arguments of "tools/call" are called "params" and the arguments of "get_weather" are called "arguments".
I realize this is a common pattern when you have to shell out, e.g. in python's subprocess.run().
But it also seems like there could be a cleaner API with better types.
by ants_everywhere
3/27/2025 at 2:08:15 AM
I don’t disagree. I fought this battle for a long time — I ran a company where I tried to simplify SDK development by making every endpoint a POST with JSON params; sorta like SOAP, or just simple RPC. Why do you need all the HTTP methods when most SDKs simplify everything to .retrieve etc.? Why not name the endpoints that? What I realized was that these specs are valuable because they’re stable over long periods of time and handle many sorts of edge cases. Also, from a systems-integration perspective, everybody already knows and is trained in them. Over many years I accepted the wisdom of the commons.
A lot of tooling already exists to make development of these sorts of systems easy to implement and debug. Hence why I think for Remote MCP servers, HTTP as it exists is a great choice.
by keithwhor
3/27/2025 at 9:09:09 AM
I don't feel that's really true; it's easy to forget how fast things have moved. For a long time lots of servers didn't really support PUT or DELETE, and it was only in the early 2010s that support became common.
It's still a problem sometimes that you have to explicitly enable them (I'm looking at you IIS + WebDAV).
PATCH wasn't even added till 2010 and you still don't see it commonly used.
Perhaps we have different understandings of 'stable' and 'many years'.
I also agree with you on RPC, it's pretty ridiculous that some guys tried to boil every single API down to essentially 4 verbs. I remember when Google went all crazy on implementing pure REST and their APIs were atrocious.
And everyone still goes along with it even though it clearly doesn't work, so you always end up with a mix of REST and RPC in any non-trivial API.
But pure RPC doesn't really work as then you have to change every call to a POST, as you mention. Which is also confusing as everyone is now used to using the really restricted REST CRUD interface.
So now pure REST sucks and pure RPC sucks. Great job HTTP standards team!
To be fair to them, I know it's hard and at some point you can't fix your mistakes. These days I guess I've just accepted that almost all standards are going to suck a bit.
by mattmanser
3/27/2025 at 12:59:03 AM
Ad hoc RPC [1] that involves JSON request/response payloads and is wed to HTTP transport is arguably worse than conforming to the JSON-RPC 2.0 specification [2].
[1] If it’s not REST (even giving a pass on HATEOAS) then it’s probably, eventually, effectively RPC, and it’s still ad hoc even if it’s well documented
by michaelsbradley
3/27/2025 at 9:17:32 AM
The big irony behind HATEOAS is that LLMs are the mythical "evolvable agents" necessary to make HATEOAS work in the first place. HATEOAS was essentially built around human-level intelligence that can automatically crawl your endpoints and read documentation written in human language, and then they scratched their heads about why it didn't catch on. Only browser-like clients could conform to HATEOAS, because they essentially delegate all the hard parts (dealing with a dynamically changing, structureless API) to a human.
by imtringued
3/27/2025 at 5:20:36 PM
Well, like I wrote, "giving a pass on HATEOAS". With e.g. JSON over HTTP, you can implement an API that satisfies the statelessness constraint of REST and so on. Without hypermedia controls it would fit at Level 2 of the Richardson Maturity Model, more or less.
In that shape, it would still be a different beast from RPC. And a disciplined team or API overlord could get it into that shape and keep it there, especially if they start out with that intention.
The problem I've seen many times is that a JSON over HTTP API can start out as, or devolve into, a messy mix of client-server interactions and wind up as ad hoc RPC that's difficult to maintain and very brittle.
So, if a team/project isn't committed to REST, and it's foreseeable that the API will end up dominated by RPC/-like interactions, then why not embrace that reality and do RPC properly? Conforming to a specification like JSON-RPC can be helpful in that regard.
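To illustrate how little ceremony "doing RPC properly" takes, here is a toy JSON-RPC 2.0 dispatcher (the method registry and the weather stub are made up for the example; -32601 is the spec's "method not found" code):

```python
import json

# Toy method registry; the weather values are a hard-coded stub.
METHODS = {
    "get_weather": lambda location: {"location": location, "temp_f": 54},
}

def handle(raw):
    """One transport-agnostic entry point with uniform envelopes and errors."""
    req = json.loads(raw)
    try:
        result = METHODS[req["method"]](**req.get("params", {}))
        resp = {"jsonrpc": "2.0", "id": req["id"], "result": result}
    except KeyError:
        # Unknown method: the spec-defined error envelope.
        resp = {"jsonrpc": "2.0", "id": req.get("id"),
                "error": {"code": -32601, "message": "Method not found"}}
    return json.dumps(resp)

out = json.loads(handle(
    '{"jsonrpc": "2.0", "id": 1, "method": "get_weather",'
    ' "params": {"location": "New York"}}'))
```

The same handler works behind HTTP, stdio, or a message queue, which is the main argument for committing to the spec rather than ad hoc JSON-over-HTTP.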
by michaelsbradley
3/27/2025 at 10:00:43 AM
Yeah, I always thought MCP was a bit verbose. It reminds me of the WSDL and SOAP mess of the 2000s. Model tool calls are just RPCs into some other service, so JSON-RPC makes sense. Is there anything else that has wide adoption and good client support? XML-RPC? gRPC? Protobufs? I mean, it shouldn't need extra libraries to use. You can handroll a JSON-RPC request/response pretty easily from any programming language. Regarding the verbosity, yeah, it's interesting how model providers make more money from more tokens used, and you/we end up paying for it somehow. When you're doing lots of tool calls, it adds up!
by ammmir
3/27/2025 at 6:32:48 AM
Why is this get_weather location "New York" always the example when people talk about tool calling?
by antupis
3/27/2025 at 6:09:44 PM
1/ pre-trained models don't know current weather
2/ easy enough for people to understand
by richsong
3/27/2025 at 12:27:54 PM
Because if you can make it New York City, you can make it anywhere.
by deadbabe
3/27/2025 at 1:18:11 AM
Totally agree - a well-defined REST API "standard" for tool listing and tool execution would have been much better. Could extend as needed to websockets for persistent connections / streaming data.
by pacjam
3/27/2025 at 3:58:31 AM
Maybe MCP was developed with AI. LLMs tend to be overly verbose
by croes
3/27/2025 at 12:07:50 PM
Fully agree, and that's what we do at ACI.dev as a managed authenticated tool-calling platform: https://www.aci.dev/ And we expose more tool-discovery flexibility rather than just list_tools()
by thisisfixer
3/27/2025 at 12:10:37 PM
> And we expose more tool discovery flexibility rather than just list_tools()
Extremely curious about this, as just directly listing all tools to an agent will obviously not scale well. What does the interface look like?
by ramesh31
3/27/2025 at 1:25:24 AM
perhaps openai is the wrong tool for this thing.
by cyanydeez
3/26/2025 at 11:30:50 PM
I've been working on BLAH - Barely Logical Agent Host (https://github.com/thomasdavis/blah/blob/master/packages/cli...) for the past few weeks. It is essentially a standard (it has a schema) with an ecosystem of tools around it.
(completely open source; no protocol/bridge lock-in, no vendor/provider lock-in, no ide/client/auton lock-in; http/sse/jsonrpc/whatever; local/remote; composable)
So far I'm categorically calling MCP a "bridge", because BLAH supports other bridges such as SLOP (https://github.com/agnt-gg/slop/blob/main/README.md) and conceptually OpenAPI (or simply HTTP) is a bridge.
An example blah.json looks like this:
"tools": [
{
"name": "jsonresume",
"bridge": "mcp",
"command": "npx -y @jsonresume/mcp@3.0.0",
},
{
"name": "slop_example",
"description": "Slop example",
"bridge": "slop",
"source": "https://ajax-valjs_slop_example.web.val.run"
},
{
"name": "normal_http_example",
"description": "Normal HTTP example",
"bridge": "openapi",
"source": "https:/example.com/openapi.json"
}
],
}
So blah can orchestrate/compose any number of bridges, and it can list the tools of any bridge in a common style/format (sorry if that sounds dumb). For example, you can run `blah mcp start --config blah.json` and add that to your Cursor/Claude MCP servers. When you fetch the tools, it loops over all the tools in your blah.json and fetches them whether each one is an MCP server, a SLOP server, or OpenAPI; it's agnostic and will return a full list of tools pulled from all bridges.
And then you can do `blah slop start`, which will do the same but in the opposite direction: start a SLOP server that boots up MCP servers and serves them over HTTP too.
So essentially a bridge/protocol agnostic system, you ask for a list of tools, and it traverses everything that can list tools and splats them all together.
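The idea can be sketched like this (hypothetical classes and shapes, not BLAH's actual API): each bridge only has to implement a tool lister, and the host merges the results.

```python
class McpBridge:
    """Stand-in for a bridge that lists tools from an MCP server."""
    def list_tools(self):
        return [{"name": "jsonresume", "bridge": "mcp"}]

class OpenApiBridge:
    """Stand-in for a bridge that derives tools from an OpenAPI spec."""
    def list_tools(self):
        return [{"name": "normal_http_example", "bridge": "openapi"}]

def list_all_tools(bridges):
    """Splat every bridge's tools together into one flat list."""
    tools = []
    for bridge in bridges:
        tools.extend(bridge.list_tools())
    return tools

all_tools = list_all_tools([McpBridge(), OpenApiBridge()])
```

The caller never needs to know which protocol a tool came from, which is the whole point of the bridge abstraction.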
That's a little of the main pitch, but there are larger plans to try to build an ecosystem like npmjs, with a public registry of any tool (which are just functions at the end of the day). The Cloudflare AI team is really getting it right now (https://blog.cloudflare.com/remote-model-context-protocol-se...); most people would rather just have tools hosted for them, and Workers is perfect for that.
It also means you just have to add one MCP server and use one configuration file to manage your tools. (Also has way more control over logging which MCP devs would love)
---
And that all maybe sounds complicated, but it's meant to make things A LOT easier. The README.md is horrible (just lots of random ideas) and my comment isn't much better, but I'm aiming to have an alpha version out in a week or so. Would absolutely love feedback!
4.5 Deep Research Report on BLAH - https://gist.github.com/thomasdavis/3827a1647533e488c107e64a...
by thomasfromcdnjs
3/29/2025 at 5:20:17 PM
I agree; people don't want to manually add all the MCP servers they want to connect to. I think we're headed in a direction where dynamic tool discovery will be important. E.g. I want to do "x", which requires doing "y" and "z". The client or server should then search for the tools (or agents-as-tools) that can do the job. So this would basically be a semantic search for which tools are most likely to do the job, then use those tools, and search for more tools as needed. RAG for tools, essentially. I wrote an article on how to give ChatGPT unlimited functions a year ago, where I implemented this method. I basically created a vector database for functions: I built the store from the tool descriptions, then added each tool definition as metadata. Then for every request I ran a lookup for tools and added them to the tools list. Since tool definitions are added to the context, we will have to do something like this at some stage when we want to implement a lot of tools.
https://blogg.bekk.no/function-retrieval-add-unlimited-funct...
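A toy version of that lookup, using bag-of-words cosine similarity as a stand-in for a real embedding model (the tool registry here is made up):

```python
import math
from collections import Counter

TOOLS = [
    {"name": "get_weather", "description": "current weather forecast for a city"},
    {"name": "send_email", "description": "send an email message to a recipient"},
    {"name": "search_flights", "description": "search airline flights between airports"},
]

def embed(text):
    # Stand-in for an embedding model: bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_tools(query, k=1):
    """Rank tools by similarity between the query and each description."""
    q = embed(query)
    ranked = sorted(TOOLS, key=lambda t: cosine(q, embed(t["description"])),
                    reverse=True)
    return ranked[:k]

best = retrieve_tools("what is the weather forecast in New York")
```

Only the top-k tool definitions get injected into the context, so the tool catalog can grow without bloating every request.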
by haakjacobs
3/26/2025 at 11:35:15 PM
This is very cool, but it definitely has the XKCD "standards" vibe [0]. If the industry is standardizing on MCP but then we decide it's not good enough, we just end up back where we started. I hope there's enough willpower (and low enough ego) to get to a really tight, single great implementation.
by keithwhor
3/26/2025 at 11:42:09 PM
Complete with the alt text mentioning USB, which is used on the MCP website to describe it. Someone else said it and I agree: it's not a good analogy. Most of what we do in software development is connecting things; saying "this is like USB but for X" could cover a huge chunk of what software is. Besides, these "think of it as" analogies kinda irk me, because I don't want you to give me a mental image you think I can digest; I want to know how it works. Abstractions in software are great and all, but the fact that for some reason most explainers have decided they should be opaque is very unhelpful when you're trying to learn.
by namaria
3/27/2025 at 12:00:06 AM
Only a certain subset of developers spends most of their time "connecting things", and if that's the kind of developer you consider yourself, I'd be looking to either upskill or change professions, as this will be the first kind of developer eliminated if we continue to see decent progress in automation.
by soulofmischief
3/27/2025 at 12:07:45 AM
Would disagree there — system integration probably accounts for something like 90% of development work, just at different layers of abstraction.
by keithwhor
3/27/2025 at 7:41:17 AM
Doesn't mean it's not all the same ol' boring drudge work :). Though I disagree with GP's reply to you about being product-oriented and such — 90% of products are just "system integration" with some custom marketing graphics and a sprinkle of vendor lock-in :).
The combination of standardization and AI will end in a great carnage of software developer jobs, since system integration is basically the poster child of a job that should not exist in an ideal world — i.e. all problems are problems of insufficient automation or insufficient ability to cope with complexity. But there's only so much demand for R&D and for creating bona fide new capabilities, and not everyone wants to be a manager or a salesperson...
IDK, might be really the time to pick some trade skills, as manual labor in the field is likely the last thing to be automated away.
by TeMPOraL
3/27/2025 at 2:10:58 AM
I would urge you to not think this way: https://www.osmos.io/fabric
by kiratp
3/27/2025 at 3:24:05 AM
You probably should've disclosed that you are promoting your own company's AI product here...
by ewoodrich
3/27/2025 at 12:19:24 AM
I would still feel zero job security in such a position, and would be looking for work which is not only intellectually and creatively rewarding, but considerably more difficult to automate. Often this means becoming product-oriented or getting into leadership positions.
by soulofmischief
3/27/2025 at 12:00:43 AM
aha can't disagree with that sentiment, best I can do is not make the standard suck.
by thomasfromcdnjs
3/27/2025 at 1:17:46 PM
Ehh. Sometimes you don't want to have to use string builders to assemble REST calls.
by thanatropism