Discussion
MCP is Dead; Long Live MCP!
codemog: As soon as MCP came out I thought it was over-engineered crud and didn’t invest any time in it. I have yet to regret this decision. Same with LangChain.
fartfeatures: All the code I work on now has an MCP interface so that the LLM can debug more easily. I'd argue it is as important as the UI these days. The amount of time it has saved me is unreal. It might be worth investing a very small amount of your time in it to see if it is a good fit. Even a poor protocol can provide useful functionality.
mlnj: You are right. Although I have been a skeptic of MCP, it has been an immense help with agents. I do not have an alternative at the moment.
jollyllama: > Centralization is Key
> (I preface that this is primarily relevant for orgs and enterprises; it really has no relevance for individual vibe-coders)
The thing about tools that "democratize" software development, whether it is Visual Studio/Delphi/QT or LLMs, is that you wind up with people in organizations building internal tools on which business processes will depend who do not understand that centralization is key. They will build these tools in ignorance of the necessity of centralization-centric approaches (APIs, MCP, etc.) and create Byzantine architectures revolving around file transfers, with increasing epicycles to try to overcome the pitfalls of such an approach.
CharlieDigital: There's a distinction between individual devs and organizations like Amazon or even a medium-sized startup. Once you have 10-20 people using agents in wildly different ways getting wildly different results, the question of "how do I baseline the capabilities across my team?" becomes very real.
In our team, we want to let every dev use the agent harness that they are comfortable with, and that means we need a standard mechanism for delivering standard capabilities, config, and content across the org.
I don't see it as democratization versus corporate fascism so much as "can we get consistent output from developers of varying degrees of skill using these agents in different ways?"
SilverElfin: This came up in recent discussions about the Google apps CLI that was recently released. Google initially included an MCP server but then silently removed it, and it seemed like suddenly a lot of people were talking about how MCP is dead. But fundamentally that doesn’t make sense. If an AI needs to be fed instructions or schemas (context) to understand how to use something via MCP, wouldn’t it need the same things via a CLI? How could it not?
As for Google: they previously said they are going to support MCP, and they’ve rolled out that support even recently (example from a quick search: https://cloud.google.com/blog/products/ai-machine-learning/a...). But now with the Google Workspace CLI and the existence of “Gemini CLI Extensions” (https://geminicli.com/extensions/about/), it seems like they may be trying to diminish MCP and push their own CLI-centric extension strategy.
whattheheckheck: So let's say you have a RAG LLM chat API connected to an enterprise's document corpus. Do you not expose an MCP endpoint? Literally every VS Code or opencode node gets it for free (a small JSON snippet in their mcp.json config) if you do auth right.
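For reference, that mcp.json snippet is roughly this shape (a sketch only: the server name, URL, and token variable are hypothetical, and the exact key layout varies slightly between harnesses):

```json
{
  "mcpServers": {
    "corpus-rag": {
      "url": "https://mcp.example.internal/rag",
      "headers": {
        "Authorization": "Bearer ${CORPUS_TOKEN}"
      }
    }
  }
}
```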
ph4rsikal: LangChain is not over-engineered; it's not engineered at all. Pure Chaos.
skybrian: If it's a remote API, I suppose the argument is that you might as well fetch the documentation from the remote server, rather than using a skill that might go out of date. You're trusting the API provider anyway.
But it's putting a lot of trust in the remote server not to prompt-inject you, perhaps accidentally. Also, what if the remote docs don't suit local conditions? You could make local edits to a skill if needed.
Better to avoid depending on a remote API when a local tool will do.
CharlieDigital: Or just build your own remote MCP server for docs? It's easy enough now that the protocol and supporting SDKs have stabilized.
Most folks are familiar with MCP tools but not so much MCP resources[0] and MCP prompts[1]. I'd make the case that these latter two are way more powerful and significant because (most) tools support them (to varying degrees at the moment, to be fair).
For teams/orgs, these are really powerful because they simplify delivery of skills and docs and move them out of the repo (yes, there are benefits to this, especially when the content is applicable across multiple repos) on top of surfacing telemetry that informs usage and efficacy.
Why would you do it? One reason is that now you can index your docs with more powerful tools: Postgres FTS, graph databases to build a knowledge base, extracting code snippets to build a best-practices snippet repo, automatically linking related documents by using search, etc.
[0] https://modelcontextprotocol.io/specification/2025-06-18/ser...
[1] https://modelcontextprotocol.io/specification/2025-06-18/ser...
CharlieDigital: Not only editors, but also different runtime contexts like GitHub Agents running in Actions. We can plug in MCP almost anywhere with just a small snippet of JSON, and because we're serving it from a server, we get very clear telemetry regardless of tooling and environment.
ambicapter: If AI is AI, why does it need a protocol to figure out how to interact with HTTP, FTP, etc.? MCP is a way to quickly get those integrations up and running, but purely because the underlying technology has not lived up to its hyped abilities so far. That's why people think of MCP as a band-aid fix.
jswny: MCP loads all tools immediately. A CLI does not, because it’s not auto-exposed to the agent; you have more control over the context of which tools exist and how to deliver that context.
moralestapia: Our workflows must be massively different. I code in 8 languages, regularly, for several open source and industry projects. I use AI a lot nowadays, but have never ever interacted with an MCP server.
I have no idea what I'm missing. I am very interested in learning more about what you use it for.
chatmasta: What are you using for hosting and deploying the MCP servers? I’d like something low-friction for enterprise teams to be able to push their MCP definitions as easily as pushing a Git repo (or ideally, as part of a Git repo, kinda like GitHub Pages). It’s obviously not sustainable for every team to host their own MCP servers in their own way.
So what’s the best centralized gateway available today, with telemetry and auth and all the goodness espoused in this blog post?
jswny: MCP is fine, particularly remote MCP, which is the lowest-friction way to get access to some hosted service with auth handled for you.
However, MCP is context bloat and not very good compared to CLIs + skills mechanically. With a CLI you get the ability to filter/pipe (regular Unix bash) without having to expand the entire tool call every single time in context.
CLIs also let you use heredocs for complex inputs that are otherwise hard to escape.
CLIs can easily generate skills from the --help output, and add agent-specific instructions on top. That means you can give the agent all the instructions it needs to know how to use the tools and what tools exist, lazy-loaded, and without bloating the context window with all the tools upfront (yes, I know tool search in Claude partially solves this).
CLIs also don’t have to run persistent processes like MCP, but can if needed.
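To illustrate the heredoc point: a multi-line, quote-heavy payload goes straight to the tool with no escaping, and the result flows through ordinary Unix plumbing (a sketch; `grep` stands in here for whatever CLI the agent is driving):

```shell
# Feed a multi-line payload via heredoc -- the embedded quotes and
# newlines need no escaping -- then filter with plain Unix tools.
result=$(grep -c 'ERROR' <<'EOF'
2024-01-01 INFO  service started
2024-01-01 ERROR connection refused: "db:5432"
2024-01-02 ERROR timeout after 30s
EOF
)
echo "$result"   # 2
```

With an MCP tool call, that same payload would have to be escaped into a JSON string field and the full response would land in context.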
twapi: > Influencer Driven Hype Cycle
CharlieDigital: > I have no idea what I'm missing.
The questions I'd ask:
- Do you work in a team context of 10+ engineers?
- Do you all use different agent harnesses?
- Do you need to support the same behavior in ephemeral runtimes (GH Agents in Actions)?
- Do you need to share common "canonical" docs across multiple repos?
- Is it your objective to ensure a higher baseline of quality and output across the eng org?
- Would your workload benefit from telemetry and visibility into tool activation?
If none of those apply, then it's not for you. Server-hosted MCP over streamable HTTP benefits orgs and teams and has virtually no benefit for individuals.
s0ulf3re: I’ve always felt like MCP is way better suited to consumer usage than development environments. Yeah, MCP uses a lot of the context window, is more complex in structure than it should be, and isn’t nearly as easy for models to call as a command-line tool would be. But I believe it’s also the most consumer-friendly option available right now.
It’s much easier for users to find out exactly what a model can do with your app through it, compared to building a skill that would work with it, since clients can display every available tool to the user. There’s also no need for the model to set up any environment, since it’s essentially just writing out a function, which saves time since there’s no need to set up as many virtual machine instructions.
It obviously isn’t as useful in development environments, where a higher level of risk can be accepted since changes can always be rolled back in the repository.
If I recall correctly, there’s even a whole system being built for MCP so it can actually show responses in a GUI, much like Siri and the Google Assistant can.
simianwords: but you need to _install_ a CLI. With MCP, you just configure!
CharlieDigital: Because protocols provide structure that increases correctness. It is not a guarantee (as we see with structured output schemas), but it significantly increases compliance.
ambicapter: You're interacting with an LLM, so correctness is already out the window. So model-makers train LLMs to work better with MCP to increase correctness. So the only reason correctness is increased with MCP is because LLMs are specifically trained against it.
So why MCP? Are there other protocols that would provide more correctness when trained? Have we tried? Maybe a protocol that offers more compression of commands would take up less context overall, thus offering better correctness.
MCP seems arbitrary as a protocol, because it kinda is. It doesn't >>cause<< the increase in correctness in and of itself; the fact that it >>is<< a protocol is the reason it may increase correctness. Thus, any other protocol would do the same thing.
CharlieDigital: > So why MCP? ... MCP seems arbitrary as a protocol
You're right, it is an arbitrary protocol, but it's one that is supported by the industry. See the screencaps at the end of the post that show why this protocol. Maybe one day we will get a better protocol. But that day is not today; today we have MCP.
MaxLeiter: MCPs are great for some use cases.
In v0, people can add e.g. Supabase or Neon to their projects with one click. We then auto-connect and auth to the integration’s remote MCP server on behalf of the user.
v0 can then use the tools the integration provider wants users to have, on behalf of the user, with no additional configuration. Query tables, run migrations, whatever. Zero maintenance burden on the team to manage the tools. And if users want to bring their own remote MCPs, that works via the same code path.
We also use various optimizations like a search_tools tool to avoid overfilling context.
fartfeatures: > You're interacting with an LLM, so correctness is already out the window.
With all due respect, if you are prompting correctly and following approaches such as TDD / extensive testing, then correctness is not out the window. That is a misunderstanding likely caused by older versions of these models.
Correctness can be as complete as any other new code. I've used the AI to port algorithms from Python to Rust which I've then tested against math oracles and published examples. Not only can I check my code mathematically, but in several instances I've found and fixed subtle bugs upstream, even in well-reviewed code that has been around for many years and is well used. It is simply a tool.
CharlieDigital: You can solve the same problem by giving subsets of MCP tools to subagents, so each subagent is responsible for only a subset of tools.
Or...just don't slam 100 tools into your agent in the first place.
simianwords: > Or...just don't slam 100 tools into your agent in the first place.
But I can do that with CLIs, so isn't that a negative for MCP?
CharlieDigital: You've missed the point and hyperfocused on the story around context, and not on why an org would want to have centralized servers exposing MCP endpoints instead of CLIs.
simianwords: I would want to know what point I missed. I can have 100 CLIs but not 100 MCP tools. 100 MCP tools will bloat the context whereas 100 CLIs won't. Which part do you disagree with?
kubanczyk: > if something looks like crud, it probably is crudYes, technically, but you've probably meant cruft here.
nonethewiser: If AI is AI why does it need me to prompt it?
8note: Why the desire to reinvent the wheel every time? Agents can do it accurately, but you have to wait for them to figure it out every time, and waste tokens on non-differentiated work.
The agents are writing the MCPs, so they can figure out those HTTP and FTP calls. MCP makes it so they don't have to every time they want to do something.
I wouldn't hire a new person to read a manual and then make a bespoke JSON to call an HTTP server every single time I want to make a call, and that's not a knock on the person's intelligence. It's just a waste of time doing the same work over and over again. I want the results of calling the API, not to spend all my time figuring out how to call the API.
winrid: Many products provide MCP servers to connect LLMs. For example, I can have Claude examine things through my Ahrefs account without me using the UI, etc.
8n4vidtmkvmk: That's also one of the things that worries me the most. What kind of data is being sent to these random endpoints? What if they go rogue or change their behavior?
A static set of tools is safer and more reliable.
simianwords: > This is absolutely necessary since you can (and will) use AI for a million different things
The point is: is it necessary to create a new protocol?
paulddraper: It’s not a new protocol. It’s JSON-RPC plus OAuth. (Plus a couple of bits around managing a local server lifecycle.)
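To make that concrete, here is a minimal sketch (Python stdlib only) of what an MCP tool invocation looks like as a JSON-RPC 2.0 message. The tool name and arguments are hypothetical, and a real client sends this envelope over stdio or streamable HTTP rather than just printing it:

```python
import json

def jsonrpc_request(req_id, method, params):
    """Build the JSON-RPC 2.0 envelope that MCP uses on the wire."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

# A tool invocation: MCP's "tools/call" method carries the tool name
# and its arguments inside "params".
call = jsonrpc_request(1, "tools/call", {
    "name": "query_tables",            # hypothetical tool name
    "arguments": {"schema": "public"},
})

print(json.dumps(call))
```

Everything else the protocol adds (capability negotiation, tool listing, OAuth for remote servers) rides on top of this same envelope.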
charcircuit: You just paste in a web link to a skill. Your agent is smart enough to know how to use it or save it.
8note: MCP is generally a static set of tools, where auth is handled by deterministic code and not exposed to the agent. The agent sees tools as allowed or not by the harness/your MCP config.
For the most part, the same company that you're connecting to is providing the MCP, so it's not having your data go to random places, but you can also just write your own. It's a fairly thin wrapper: a bit of code to call the remote service, and a bit of documentation of when/what/why to do so.
kybernetikos: I've just been discovering this pattern too. It's made a huge difference. Trying to get Claude to remote-control an app for testing via the various other means was miserable and unreliable.
I got it to build an MCP server into the app that supported sending commands to allow Claude to interact with it as if it was a user, including keypresses and grabbing screenshots, and the difference was immediate and really beneficial.
Visual issues were previously one of the things it would tend to struggle with.
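The core of that pattern is just a dispatch table from tool names to app actions; the MCP server is a thin wrapper around it. A stdlib-only sketch, where the `App` class and both tool names are hypothetical stand-ins for the real application and the real MCP SDK wiring:

```python
class App:
    """Stand-in for the application under test."""
    def __init__(self):
        self.keys = []

    def press(self, key):
        self.keys.append(key)          # record the simulated keypress
        return f"pressed {key}"

    def screenshot(self):
        # A real implementation would capture and return image bytes.
        return {"width": 800, "height": 600, "format": "png"}

app = App()

# The tool registry an embedded MCP server would expose to the agent.
TOOLS = {
    "send_key": lambda args: app.press(args["key"]),
    "take_screenshot": lambda args: app.screenshot(),
}

def call_tool(name, args):
    """Dispatch an agent's tool call to the matching app action."""
    if name not in TOOLS:
        raise ValueError(f"unknown tool: {name}")
    return TOOLS[name](args)

print(call_tool("send_key", {"key": "Enter"}))  # pressed Enter
```

The agent only ever sees the tool names and their results; the app's internal state stays behind the dispatch boundary.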
charcircuit: > The LLM has no way of knowing which CLI to use and how it should use it…unless each tool is listed with a description somewhere either in AGENTS|CLAUDE.md or a README.md
This is what the skill file is for.
> Centralizing this behind MCP allows each developer to authenticate via OAuth to the MCP server and sensitive API keys and secrets can be controlled behind the server
This doesn't require MCP. Nothing is stopping you from creating a service to proxy requests from a CLI.
The problem with this article is it doesn't recognize that skills are a more general superset of MCP. Anything done with MCP could have an equivalent done with a skill.
8note: It's very similar to the switch from a text editor + command line to having an IDE with a debugger. The AI gets to do two things:
- expose hidden state
- do interactions with the app, and see before/after/errors
It gives more time where the LLM can verify its own work without you needing to step in. It's also a bit more integration-test-y than unit.
If you were to add one MCP, make it Playwright or some similar browser automation MCP. Very little has value-add over just being able to control a browser.
avereveard: Each AI needs context management per conversation; this is something that would be very clunky to replicate on top of HTTP or FTP (as in requiring side-channel information for session and conversation management).
Everyone looks at APIs, and sure, MCP seems redundant there, but look at an agent driving a browser: the get-DOM method depends on all the actions performed from when the window opened, and it needs to be per agent, per conversation.
Can you do that as REST? Sure, sneak a session and conversation into a parameter or cookie, but then the protocol is not really just HTTP, is it? It's all this clunky coupling that comes with a side of unknowns, like: when is a conversation finished? Did the client terminate, or are we just between messages? And as you go and solve these for the hundredth time, you'd start itching for standardization.
Jayakumark: Can you please share source code for the Resources/Prompts example ?
tptacek: I can add Supabase or Stripe to my project with zero clicks just by setting up a .envrc.
tptacek: I still don't really understand what LangChain even is.
CPLX: I’ve been using Chrome DevTools MCP a lot for this purpose and have been very happy with it.
tptacek: Why is this the right way to go? It's not solving the problem it looks like it's solving. If your challenge is that you need to communicate with a foreign API, the obvious solution to that is a progressively discoverable CLI or API specification: the normal tool developers use.
The reason we have MCP is because early agent designs couldn't run arbitrary CLIs. Once you can run commands, MCP becomes silly.
There is a clear problem that you'd like an "automatic" solution for, but it's not "we don't have a standard protocol that captures every possible API shape", it's "we need a good way to simulate what a CLI does for agents that can't run bash".
isbvhodnvemrwvn: It's significantly more difficult to secure random CLIs than those APIs. All LLM tools today bypass their ignore files by running commands their harness can't control.
jamesrom: What part of MCP do you think is over-engineered?This is quite literally the opposite opinion I and many others had when first exploring MCP. It's so _obviously_ simple, which is why it gained traction in the first place.
MaxLeiter: But then the LLM needs to write its own tools/code for interacting with said service. Which is fine, but slower and it can make mistakes vs officially provided tools
superturkey650: All MCP adds is a session token. How is that not already a solved problem?
jamesrom: The problem with MCP isn't MCP. It's the way it's invoked by your agent.
IMO, by default MCP tools should run in a forked context. Only a compacted version of the tool response should be returned to the main context. This costs tokens, yes, but doesn't blow out your entire context.
If other information is required post hoc, the full response can be explored on disk.
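One way to sketch that compaction step (the helper name and return shape are hypothetical; a real harness would hook this into tool dispatch): spill the full response to disk and return only a preview plus the path for post-hoc exploration.

```python
import os
import tempfile
from pathlib import Path

def compact_tool_response(response: str, limit: int = 200) -> dict:
    """Write the full tool response to disk and return only a preview,
    so the main context never sees the whole payload."""
    fd, name = tempfile.mkstemp(suffix=".out")
    os.close(fd)                       # mkstemp leaves the fd open
    Path(name).write_text(response)
    return {
        "preview": response[:limit],
        "truncated": len(response) > limit,
        "full_response_path": name,    # agent can read this file if needed
    }

big = "x" * 10_000                     # stand-in for a huge tool response
compacted = compact_tool_response(big)
print(compacted["truncated"], len(compacted["preview"]))  # True 200
```

The main context then carries ~200 characters instead of the full payload, and the agent can grep or re-read the file on disk only when it actually needs more.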
Frannky: I don't know. Skill + HTTP endpoint feels way safer, more powerful, and more robust. The problem is usually that the entity offering the endpoint, if the endpoint is AI-powered, incurs the LLM costs, while via MCP the coding agent is eating that cost, unless you are also the one running the API and so can use the coding plan endpoint to do the AI thing.
drdaeman: The world would surely be a saner place if instead of “MCP vs CLI” people talked about “JSON-RPC vs execlp(3)”. Not accurate, but at least it makes one think of the underlying semantics. Because, really, what matters is some DSL to discover and describe action invocations.
socketcluster: I find that skills work very well. The main SKILL file has an overview of all the capabilities of my platform at a high level, and each section links to a more specific file which contains the full information with all possible parameters for that particular capability.
Then I have a troubleshooting file (also linked from the main SKILL file) which basically lists out all the 'gotchas' that are unique to my platform and that the LLM may thus struggle with in complex scenarios.
After a lot of testing, I identified just 5 gotchas and wrote a short section for each one. The title of each section describes the issue and lists out possible causes with a brief explanation of the underlying mechanism and an example solution.
Adding the troubleshooting file was a game changer. Whenever it runs into a tricky issue, it checks that file, and it's highly effective.
My platform was designed to reduce applications down to HTML tags which stream data to each other, so the goal is low token count and no debugging. I basically replaced debugging with troubleshooting, and the 5 cases I mentioned are literally all that was left. It seems to be able to quickly assemble any app without bugs now.
The 'gotchas' are not exactly bugs but more like "Why doesn't this value update in realtime?" kinds of issues. There are a few optimizations that the LLM needs to be aware of.
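As an illustration of that layout (all file and capability names below are hypothetical), the main SKILL file might look like:

```markdown
# SKILL: MyPlatform

High-level overview of capabilities; each section links to a file
with the full parameter reference, so detail is loaded lazily.

## Capabilities
- Realtime data tags: see capabilities/realtime.md
- Auth tags: see capabilities/auth.md

## Troubleshooting
See troubleshooting.md. Check it whenever behavior differs from
expectations (e.g. "Why doesn't this value update in realtime?"),
with each gotcha listing causes, mechanism, and an example fix.
```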