A software engineer will be a person who inspects the AI's work, same as a building inspector today. A software architect will co-sign on someone's printed-up AI plans, same as a building architect today. Some will be in-house, some will do contract work, and some will be artists trying to create something special, same as today. The brute labor is automated away, and the creativity (and liability) is captured by humans.
If I was a bot I would probably write some perfectly punctuated garbage about how your site is a crucial testament to the ever evolving digital landscape or use big words to delve into the multifaceted tapestry of internet ethics. But honestly your website about stopping sloppy pasta is just so dumb and a complete waste of time. Your acting like somebody writing a fake story with ai is the end of the world or something. Literaly nobody cares if some random article was written by a computer so maybe stop pretending your the heroic saviors of the web. Get a real hobby and stop whining about people using chat bots because its really not that deep bro.
- now the fun part: which AI did I use to write the above?
Write a response to this website: https://stopsloppypasta.ai/
Make sure to avoid all common AI-isms and not make it look like it was written by AI. Include mistakes, don't use em-dashes, don't use common AI phrases, etc. Plan out what would normally look like AI first, and avoid those things. Also don't make it a narrative, make it one paragraph that is simple and to the point. Try to have a snarky attitude.
Are you familiar with parallel construction? That's what this is for. If they have a warrant and show it to you, it says what they can search and why. If they don't tell you what they're searching for and why, they can look for anything, and then construct a separate scenario which just happens to expose the thing they knew would be there from the first fishing expedition. They then use this (usually circumstantial) evidence to accuse you of a crime, and they can win, even if you didn't commit a crime, but it looks like you did. And now they can do it with digital information, automatically, behind the scenes, without your knowledge. (or they can take your laptop and phone and do it then)
I don't see the problem with this. It's inadvisable to try to stop the police from doing whatever they want to do if they assert that they have the right to do it. You then get the lawyers involved and sort it out afterwards. Comparing the timestamp on the warrant to the time of the police action should hopefully determine whether parallel construction is taking place.
> It's inadvisable to try to stop the police from doing whatever they want to do if they assert that they have the right to do it.
The police regularly lie to and manipulate people about their rights in order to coerce them into consent. If you believe the officer is in the wrong, push back.
> You then get the lawyers involved and sort it out afterwards. Comparing the timestamp on the warrant to the time of the police action should hopefully determine whether parallel construction is taking place.
Parallel construction means they are using the opportunity to go on a fishing expedition. Dealing with it later is too late, they've already gone fishing.
This is a much bigger issue regarding the metadata of a wireless carrier. They're not issuing the warrant to you, they're issuing it to the carrier, who has a duty to reject overly broad searches. If they don't even get to see the warrant, they can't reject the search based on the merits. So now the police get to collect everyone's metadata. Who cares if we look at the warrant after? They've already got the data. Even if they "delete it" after, they already got to go fishing.
Nothing good is going to be solved by expanding law enforcement's power, reach, or lightening any existing restrictions. We are not suffering from crimes due to lack of law enforcement's legal scope. It's quite the opposite.
Your parallel construction is still too linear; this isn't git history. If they get a warrant AND tell you about it, the warrant dictates what they can look at, what you have to share, etc. Now they can look at anything because you have no idea what is off limits. If they find something unrelated they don't have to act on it immediately; they can then look for motivating reasons to get a warrant targeting an area they know will turn out. They go fishing, but for next time.
But the warrant still has to originally exist with, presumably, a timestamp that shows it existed prior to the search. And modification of the timestamp or lack of such a feature would be a good way to get the evidence thrown out?
That’s not how evidence works in Canada. Illegally obtained evidence is still evidence - you simply also have a tort against the officer for breaching your rights.
Yes, in some cases, but this is not automatic, nor even close. The more serious the trial (ex, murder, child pornography), the more likely it serves the court’s interest to use the illegally obtained evidence. See https://doi.org/10.60082/2817-5069.3711 for a longitudinal study. Illegally obtained evidence is routinely used.
My understanding: within the context of that specific action, the evidence still exists. But if there is less clarity about how and when it was collected, there is far more opportunity to use broad evidence obtained in the periphery of an undisclosed warrant in other contexts.
You used a conditional so I assume you also know how such a system can fail. It's not hard to figure out how that can be exploited, right? You can't rely on that conditional being executed perfectly every time, even without adversarial actors. But why ignore adversarial actors?
The existence of a category of warrants that allows operation that is indistinguishable from warrantless searches creates a kind of legal hazard and personal risk that is hard to overlook. Police lie on the regular.
...and are allowed to lie within narrow and specific contexts, which seems like a "balance of rights" scenario. My fear in this case is that a lie of omission is far more dangerous (specifically for misuse) than a specific and explicit lie.
There were two commenters that responded 15 minutes prior to your comment. I'd suggest starting there if you want to understand. Then if you disagree with those, you can comment and actually contribute to the conversation ;)
For the authors of openguard: if you want me to use your tool, you have to publish engineering documentation. All you have is a quickstart guide and configuration section. I have no idea how this works under the hood or whether it works for all my use cases, so I'm not even going to try it.
Thank you for the feedback! It's very early days of the project, there's indeed a lot to improve in this aspect.
OpenGuard is an OpenAI/Anthropic-compatible LLM proxy with middleware-style configuration for protocol-level inspections of the traffic that goes through it. Right now it has a small set of guards that is being actively expanded.
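OpenGuard's internals aren't documented yet, so purely as an illustration of what "middleware-style guards" usually means in a proxy like this (every name below is hypothetical, not OpenGuard's actual API):

```python
# Generic sketch of a middleware-style guard chain for an LLM proxy.
# All names here are hypothetical illustrations, not OpenGuard's API.
from dataclasses import dataclass

@dataclass
class Request:
    prompt: str
    blocked: bool = False
    reason: str = ""

def profanity_guard(req: Request) -> Request:
    # Toy inspection: block requests containing a banned token.
    if "BANNED" in req.prompt:
        req.blocked, req.reason = True, "banned token"
    return req

def length_guard(req: Request) -> Request:
    # Toy inspection: reject oversized prompts.
    if len(req.prompt) > 1000:
        req.blocked, req.reason = True, "prompt too long"
    return req

def run_guards(req: Request, guards) -> Request:
    # Each guard inspects the traffic in turn, short-circuiting on a block,
    # the same way HTTP middleware chains typically work.
    for guard in guards:
        req = guard(req)
        if req.blocked:
            break
    return req

result = run_guards(Request("hello"), [profanity_guard, length_guard])
print(result.blocked)  # False for an innocuous prompt
```

The point of the pattern is that guards are composable and order-sensitive, which fits the "actively expanded set of guards" described above.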
MCP is a fixed specification/protocol for AI app communication (built on top of an HTTP CRUD app). This is absolutely the right way to go for anything that wants to interoperate with an AI app.
For a long time now, SWEs seem to have been bamboozled into thinking the only way to connect different applications together is "integrations" (tightly coupling your app to the bespoke API of another app). I'm very happy somebody finally remembered what protocols are for: reusable communications abstractions that are application-agnostic.
The point of MCP is to be a common communications language, in the same way HTTP is, FTP is, SMTP, IMAP, etc. This is absolutely necessary since you can (and will) use AI for a million different things, but AI has specific kinds of things it might want to communicate with specific considerations. If you haven't yet, read the spec: https://modelcontextprotocol.io/specification/2025-11-25
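For a concrete sense of the "common language" claim: on the wire, MCP messages are JSON-RPC 2.0, so a tool invocation is just a small JSON object. The tool name and arguments below are invented for illustration:

```python
import json

# An MCP tool call is a JSON-RPC 2.0 request; the "tools/call" method
# comes from the MCP spec, while the tool name and arguments below are
# invented for illustration.
tool_call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",
        "arguments": {"city": "Berlin"},
    },
}

# Serialize for transport, then decode as a server would.
wire = json.dumps(tool_call)
decoded = json.loads(wire)
print(decoded["method"])  # tools/call
```

Any client and server that agree on this envelope can interoperate, which is the same trick HTTP pulls off for the web.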
Why is this the right way to go? It's not solving the problem it looks like it's solving. If your challenge is that you need to communicate with a foreign API, the obvious solution to that is a progressively discoverable CLI or API specification --- the normal tool developers use.
The reason we have MCP is because early agent designs couldn't run arbitrary CLIs. Once you can run commands, MCP becomes silly.
There is a clear problem that you'd like an "automatic" solution for, but it's not "we don't have a standard protocol that captures every possible API shape", it's "we need a good way to simulate what a CLI does for agents that can't run bash".
A lot of the reasons to use MCP are contained in the architecture document (https://modelcontextprotocol.io/specification/2025-11-25/arc...) and others. Among them, chief is security, but then there's standardization of AI-specific features, and all the features you need in a distributed system with asynchronous tasks and parallel operation. There is a lot of stuff that has nothing to do with calling tools.
For any sufficiently complex set of AI tasks, you will eventually need to invent MCP. The article posted here talks about those cases and reasons. However, there are cases when you should not use MCP, and the article points those out too.
Security is the chief reason in that it's the most important, since AI security is like nuclear waste. But the reason you should use it is it's a standard, and it's better to use one standard and be compatible with 10,000 apps, than have to write 10,000 custom integrations.
When I first used ChatGPT, I thought, "surely someone has written some kind of POP3 or IMAP plugin for ChatGPT so it can just connect to my mail server and download my mail." Nope; you needed to write a ChatGPT-specific integration for mail, which needed to be approved by ChatGPT, etc. Whereas if they supported any remote MCP server, I could just write an MCP server for mail, and have ChatGPT connect to it, ask it to "/search_mail_for_string" or whatever, and poof, You Have Mail(tm).
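If such a mail MCP server existed, here is a rough guess at how it might advertise that tool in a tools/list response. MCP tools carry a name, a description, and a JSON Schema input definition; the schema below is made up:

```python
import json

# A guess at how a hypothetical mail MCP server might describe its
# search tool. The tool name mirrors the comment above; the schema
# fields are invented for illustration.
search_mail_tool = {
    "name": "search_mail_for_string",
    "description": "Search the mailbox for messages containing a string.",
    "inputSchema": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}

# Minimal sanity check that the MCP-required fields are present.
assert {"name", "description", "inputSchema"} <= set(search_mail_tool)
print(json.dumps(search_mail_tool["inputSchema"]["required"]))
```

The client never needs mail-specific code: it reads this description, fills in `query`, and sends a tools/call request.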
They did the right thing, in hindsight: leave security open until clear patterns emerge, then solidify those patterns into a spec. The spec is still in draft, and currently they are trying to find a simpler solution for client registration than DCR, which ephemeral clients apparently solve for now.
If they had made the security spec without waiting for user information they would most certainly have chosen a suboptimal solution.
I am the creator of HasMCP (so my response may be a little biased). Not everyone has a home or work computer, often by preference. I know a lot of people who just use an iPad or Android tablet in addition to their phone, and they still use applications to get their work done. That is not a small number of people. They need to access open-world data or service-specific data, and this is where MCP is still one of the best options.
It tries to standardize the auth, messaging, and feedback loop in ways a bare API can't do alone. A CLI app can do it, sure, but we are talking about a standard. Maybe the answer is something like an mcpcli you could install on your phone, but would you really prefer installing a bunch of applications on your personal device?
Some points that MCP is still not good as of today:
- It has no standard way to manage context well; you have to find your own hack. The most widely accepted one is search plus add/remove tool; another is cataloging the tools.
- Lack of client tooling to support elicitation in many clients (this really hurts productivity, though it is not solved by a CLI either).
- Lack of mcp-ui adoption (mcp-ui vs. OpenAI's MCP apps).
I would suggest you keep building whatever helps you and your users. I am not a sponsor of MCP, just sharing my personal opinion. I am also the creator of HasCLI, but I am admittedly biased toward MCP over CLI in terms of coverage and standardization.
The biggest disappointment I have with MCP today is that many clients are still half-assed about supporting the features beyond MCP tools.
Namely, two very useful features, resources and prompts, have varying levels of support across clients (Codex being one of the worst).
These two are possibly the most powerful, since they allow consistent, org-level remote delivery of context, and I would like to see all major clients support them and eventually catch up on the other features like elicitation, progress, tasks, etc.
> It tries to standardize the auth, messaging, and feedback loop in ways a bare API can't do alone.
If it tried to do that, you wouldn't have the pain point list.
It's a vibe-coded protocol that keeps using one-directional protocols for bi-directional communication, invents its own terms for existing concepts (elicitation, lol), didn't even have any auth at the beginning, etc.
For the agent to use a CLI, don't we have to install the CLI in the runtime environment first? With MCP over streamable HTTP, we don't have to install anything; we just specify the tool call in the context, don't we?
This rolls up to my original point. I get that if you stipulate the agent can't run code, you need some kind of systems solution to the problem of "let the agent talk to an API". I just don't get why that's a network protocol coupling the agent to the API and attempting to capture the shape of every possible API. That seems... dumb.
The argument that MCP is poorly designed is different from "just use a CLI," which is further different from "MCP is a dead end."
I agree MCP is bad as a protocol and likely not what solves the problem long term. But the CLI focus is clearly an artifact of coding agents being just the tip of the iceberg of LLM agent use cases.
>CLI doesn’t work for your coworkers that aren’t technical.
This actually isn't true. I've written bespoke CLI tools for my small business and non-technical people run them without issue. They get intimidated at first but within a day or so they're completely used to it - it's basically just magic incantations on a black box.
CLI’s and shell commands can be wrapped by and packaged into scripts, those scripts can have meaningful names. On Windows at least you can assign special icons to shortcuts to those scripts.
I’ve used that approach to get non-technical near-retirees as early adopters of command line tooling (version control and internal apps). A semantic layer to the effect of ‘make-docs, share-docs, get-newest-app, announce-new-app-version’.
The users saw a desktop folder with big buttons to double click. Errors opened up an email to devs/support with full details (minimizing error communication errors and time to fix). A few minutes of training, expanded and refined to meet individual needs, and our accountants & SME’s loved SVN/Git. And the discussion was all about process and needs, not about tooling or associated mental models.
MCP also doesn't work for coworkers that are technical. It works for their agents only.
CLI works for both agents and technical people.
REST API works for both agents and technical people.
MCP works only for agents (unless I can curl to it; some are HTTP-based).
This should be trivial if you have proper API documentation in something like Swagger. You can generate a CLI tool without "figuring out" anything, either.
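A minimal sketch of that generation step, using only the standard library and a toy spec fragment (the endpoint, operation, and parameter names are invented):

```python
import argparse

# Toy OpenAPI-style fragment; the path, operationId, and parameter
# are invented for illustration.
SPEC = {
    "paths": {
        "/users": {
            "get": {
                "operationId": "list_users",
                "parameters": [{"name": "limit", "schema": {"type": "integer"}}],
            }
        }
    }
}

def build_cli(spec):
    # One subcommand per operationId, one flag per declared parameter.
    parser = argparse.ArgumentParser(prog="api")
    sub = parser.add_subparsers(dest="command")
    for path, ops in spec["paths"].items():
        for method, op in ops.items():
            p = sub.add_parser(op["operationId"], help=f"{method.upper()} {path}")
            for param in op.get("parameters", []):
                p.add_argument(f"--{param['name']}")
    return parser

args = build_cli(SPEC).parse_args(["list_users", "--limit", "10"])
print(args.command, args.limit)  # list_users 10
```

A real generator would also wire each subcommand to an HTTP call, but the mapping from spec to CLI surface is the mechanical part being claimed above.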
Nothing is “trivial” when you combine humans and computers. I worked at the MIT Computing Help Desk during my undergraduate years. We joked that we received calls from Nobel laureates who could find subatomic particles but couldn’t find the Windows Start button.
My company is currently trying to rollout shared MCPs and skills throughout the company. The engineers who have been using AI tools for the past 1-2 years have few, if any, issues. The designers, product managers, and others have numerous issues.
Having a single MCP gateway with very clear instructions for connecting to Claude Desktop and authenticating with Google eliminates numerous problems that would arise from installing and authenticating a CLI.
The MCP is also available on mobile devices. I can jot down ideas and interact with real data with Claude iOS and the remote MCP. Can’t do that with a CLI.
It's significantly more difficult to secure random CLIs than those APIs. All LLM tools today bypass their ignore files by running commands their harness can't control.
I'm fuzzy when we're talking about what makes an LLM work best because I'm not really an expert. But, on this question of securing/constraining CLIs and APIs? No. It is not easier to secure an MCP than it is a CLI. Constraining a CLI is a very old problem, one security teams have been solving for at least 2 decades. Securing MCPs is an open problem. I'll take the CLI every time.
You should read the article; it explains very well why that is completely wrong. CLIs don't have a good security story, are you serious? They either use a secret, in which case the LLM has the exact same permissions as you the user, which is bonkers (not to mention the LLM can now leak your secret to anyone by making a simple curl request) and prevents auditing of the AI, since it's not the AI that appears to use the secret, it's just you. Or the alternative is to run OAuth flows by making you authorize in the browser :). That at least allows some sort of auditing, since the agent can use a specific OAuth client to authorize you. But now you have no ability to run the agent unattended; you have to log in to every possible CLI service before you let the agent work, which means your agent is just sitting there with all your access. Ignorance of security best practices really makes this industry a joke. We need zero standing trust, auditability, and the minimum access required for a task. By letting your agent use your CLIs as if it were you, you throw all of that away.
OP never mentioned letting the agent run as him or use his secrets. All of the issues you mention can be solved by giving the agent its own set of secrets or using basic file permissions, which are table stakes.
Back to the MCP debate: in a world where most web APIs have a schema endpoint, their own authentication and authorization mechanisms, and in many cases easy-to-install clients in the form of CLIs... why do we need a new protocol, a new server, a new whatever? KISS.
> OP never mentioned letting the agent run as him or use his secrets
That is implicit with a CLI, because it is being invoked in the user's session unless the session itself has been sandboxed first. Then, for the CLI to access a protected resource, it would of course need API keys or access tokens. Sure, a user could set up a sandbox and provision agent-specific keys, but then again, everyone could always enable 2FA, pick strong passwords, use authenticators, etc., and every org would have perfect security.
Yes, this has been the gradual evolution of AI context and tooling. The same thing is occurring with some of the use cases for a vector DB and RAG. Once you can have the agent interact with the already existing conventional data store using existing queries, there is no point in introducing that workflow for inference.
no, it's all about auth. MCP lets less-technical people plug their existing tools into agents. They can click through the auth flow in about 10 seconds and everything just works. They cannot run CLIs because they're not running anything locally, they're just using some web app. The creator of the app just needed to support MCP and they got connectivity with just about everything else that supports MCP.
Write better CLIs for the agents of the less-technical people. The MCPs you're talking about don't exist yet either. This doesn't seem complicated; MCP seems like a real dead end.
How are those CLIs being installed and run on hosted services? You'll need to sandbox them and have a way to install them automatically which seems difficult. How does the auth flow work? You'd need to invent some convention or write glue for each service. These are far more complicated than just using MCP, regardless of the benefits of the protocol itself.
I think a big part of why this discussion is coming up again and again is that people assume the way they are using AI is universal, but there's a bunch of different ways to leverage it. If you have an agent which runs within a product it usually cannot touch the outside world at all by design, you do not need an explicit sandbox (i.e. a VM or container) at all because it lives in an isolated environment. As soon as you say "we use CLIs not MCP" well now you need a sandbox and everything else that goes along with it.
If you can tell ahead of time what external connectors you need and you're already sandboxing then by all means go with CLIs, if you can't then MCP is literally the only economical and ergonomic solution as it stands today.
> ...people assume the way they are using AI is universal
This is what led me back to MCP. Our team is using Claude CLI, Claude VSCX, Codex, OpenCode, GCHP, and we need to support GH Agents in GH Actions.
We wanted telemetry and observability to see how agents are using tool and docs.
There's no sane way to do this as an org without MCP unless we standardize and enforce a specific toolset/harness that we wrap with telemetry. And no one wants that.
> Why is this the right way to go? It's not solving the problem it looks like it's solving. If your challenge is that you need to communicate with a foreign API, the obvious solution to that is a progressively discoverable CLI or API specification --- the normal tool developers use.
That sounds like a hack to get around the lack of MCP. If your goal is to expose your tools through an interface that a coding agent can easily parse and use, what compels you to believe throwing amorphous structured text is a better fit than exposing it through a protocol specially designed to provide context to a model?
> The reason we have MCP is because early agent designs couldn't run arbitrary CLIs. Once you can run commands, MCP becomes silly.
I think you got it backwards. Early agents couldn't handle it, and the problem was solved with the introduction of an interface that models can easily handle. It became a solved problem. Now you're arguing that if today's models work hard enough, they can be willed into using tools without requiring MCP. That's neat, but a silly way to reinvent the wheel, poorly.
If AI is AI, why does it need a protocol to figure out how to interact with HTTP, FTP, etc.? MCP is a way to quickly get those integrations up and running, but purely because the underlying technology has not lived up to its hyped abilities so far. That's why people think of MCP as a band-aid fix.
Why the desire to reinvent the wheel every time? Agents can do it accurately, but you have to wait for them to figure it out every time, and you waste tokens on undifferentiated work.
The agents are writing the MCPs, so they can figure out those HTTP and FTP calls. MCP makes it so they don't have to every time they want to do something.
I wouldn't hire a new person to read a manual and then hand-craft a bespoke JSON payload to call an HTTP server every single time I want to make a call, and that's not a knock on the person's intelligence. It's just a waste of time doing the same work over and over again. I want the results of calling the API, not to spend all my time figuring out how to call the API.
It’s simply about making standard, centralized plugins available. Right now Claude benefits from a “link GitHub Connector” button with a clear manifest of actions.
Obviously if the self-modifying, Clawd-native development thing catches on, any old API will work. (Preferably documented but that’s not a hard requirement.)
For now, though, Anthropic doesn’t host a clawd for you, so there isn’t yet a good way for it to persist custom integrations.
Each AI needs context management per conversation, and this is something that would be very clunky to replicate on top of HTTP or FTP (as in, it requires side-channel information for session and conversation management).
Everyone looks at APIs, and sure, MCP seems redundant there. But look at an agent driving a browser: the get-DOM method depends on all the actions performed since the window opened, and it needs to be per agent, per conversation.
Can you do that as REST? Sure, sneak a session and conversation ID into a parameter or a cookie. But then the protocol is not really just HTTP anymore, is it? It's all this clunky coupling that comes with a side of unknowns, like: when is a conversation finished? Did the client terminate, or are we just between messages? As you go and solve these for the hundredth time, you'd start itching for standardization.
MCP makes it part of the protocol, so the LLM doesn't have to handle it, which would be brittle.
And look at the parent post I replied to for the choice of protocol: I'd like to see a session token over FTP, where you need to track the current folder per conversation.
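The bookkeeping in question, per-conversation state threaded through a stateless protocol by hand, can be sketched like this (a toy illustration, not any real implementation):

```python
# Toy illustration of smuggling per-conversation state (e.g. an
# FTP-style "current folder") through a stateless request/response
# protocol: every handler must thread the session token by hand.
sessions: dict[str, dict] = {}

def handle(token: str, command: str, arg: str = "") -> str:
    # Look up (or create) this conversation's state by its token.
    state = sessions.setdefault(token, {"cwd": "/"})
    if command == "cd":
        state["cwd"] = arg
        return "ok"
    if command == "pwd":
        return state["cwd"]
    return "unknown command"

handle("conv-1", "cd", "/projects")
handle("conv-2", "cd", "/home")
print(handle("conv-1", "pwd"))  # /projects
```

Nothing here answers when a conversation ends or when `sessions` can be garbage-collected, which is exactly the category of unknowns a protocol-level standard would pin down.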
But the agent harness is still handling the session token for you either way. MCP might be an easy way for agent harness creators to abstract the issue away, but I don’t want to lose all REST conventions just to make it a little easier for them to write an agent harness.
It makes it harder for the LLM to understand what’s going on, not easier.
No, but MCPs aren’t free to build either. So if you need to build an API on top, why would you build an MCP instead of using one of the existing standards that both LLMs and humans already know how to work with?
You're interacting with an LLM, so correctness is already out the window. So model-makers train LLMs to work better with MCP to increase correctness. So the only reason correctness is increased with MCP is because LLMs are specifically trained against it.
So why MCP? Are there other protocols that will provide more correctness when trained? Have we tried? Maybe a protocol that offers more compression of commands will overall take up more context, thus offering better correctness.
MCP seems arbitrary as a protocol, because it kinda is. It doesn't >>cause<< the increase in correctness in and of itself; the fact that it >>is<< a protocol is the reason it may increase correctness. Thus, any other protocol would do the same thing.
> You're interacting with an LLM, so correctness is already out the window.
With all due respect if you are prompting correctly and following approaches such as TDD / extensive testing then correctness is not out the window. That is a misunderstanding likely caused by older versions of these models.
Correctness can be as complete as any other new code, I've used the AI to port algorithms from Python to Rust which I've then tested against math oracles and published examples. Not only can I check my code mathematically but in several instances I've found and fixed subtle bugs upstream. Even in well reviewed code that has been around for many years and is well used. It is simply a tool.
> So why MCP? ... MCP seems arbitrary as a protocol
You're right, it is an arbitrary protocol, but it's one that is supported by the industry.
See the screencaps at the end of the post that show why this protocol. Maybe one day, we will get a better protocol. But that day is not today; today we have MCP.
You mean, why not ask the AI to "find a way to use FTP", including either using a tool, or writing its own code? Besides the security issues?
One simple reason is "determinism". If you ask the AI to "just figure it out", it will do that in different ways and you won't have a reliable experience. The protocol provides AI a way to do this without guessing or working in different ways, because the server does all the work, deterministically.
But the second reason is all the other reasons. There is a lot in the specification that the AI literally cannot figure out, because it would require custom integration with every application and system. MCP is also a client/server distributed system, which "calling a tool" is not, so it does stuff that is impossible to do on your existing system without setting up a whole other system... a system like MCP. And all this applies to both the clients and the servers.
Here's another way to think of it. The AI is a psychopath in prison. You want the psycho to pick up your laundry. Do you hand the psycho the keys to your car? Or do you hand him a phone, where he can call someone who is in charge of your car? Now the psycho doesn't need to know how to drive a car, and he can't drive it off a bridge. All he can do is talk to your driver and tell him where to go. And your driver will definitely not drive off a bridge or stab anyone. And this works for planes, trains, boats, etc, just by adding a phone in between.
Exactly this. I've made some MCP servers and attached tons of other people's MCP servers to my llms and I still don't understand why we can't just use OpenAPI.
Why did we have to invent an entire new transport protocol for this, when the only stated purpose is documentation?
The world would surely be a saner place if, instead of “MCP vs CLI,” people talked about “JSON-RPC vs execlp(3).”
Not accurate, but it at least makes one think of the underlying semantics. Because, really, what matters is some DSL to discover and describe action invocations.
By and large, it is a very simple protocol and if you build something with it, you will see that it is just a series of defined flows and message patterns. When running over streamable HTTP, it is more or less just a simple REST API over HTTP with JSON RPC payload format and known schema.
No, this misunderstands what MCP is for and how it works.
Let's say you use Claude's chat interface. How can you make Claude connect to, say, the lights in your house?
Without MCP, you would need Anthropic the company to add support to Claude the web interface to connect over a network to your home, use some custom routing software (that you don't have) to communicate over whatever lightbulb-specific IoT protocol your bulbs use, to be able to control them. Claude needs to support your specific lightbulb stack, and some kind of routing software would need to be added in your home to connect the external network to the internal devices.
But with MCP, Claude only has to support MCP. They don't have to know anything about your lightbulbs or have some custom routing thing for your home. You just need to run an MCP server that talks to the lightbulbs... which the lightbulb company should make and publish, so you don't have to do anything but download the lightbulb MCP server and run it. Now Claude can talk to your lightbulbs, and neither you nor Claude had to do any extra work.
In addition to the communication, there is also asynchronous task control features, AI-specific features, security features, etc that are all necessary for AI work. All this is baked into MCP.
This is the power of standardized communications abstractions. It's why everyone uses HTTP and doesn't have their own custom application-specific tcp-server-language. The world wide web would just be 10 websites.
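The lightbulb story above can be sketched as a tool dispatch table: the server is the only component that knows the vendor-specific calls, while the client just names a tool. All names below are invented for illustration:

```python
# Sketch: an MCP server is the only component that knows the lightbulb
# vendor's API; the client only names a tool. Names are invented.
def vendor_set_power(on: bool) -> str:
    # Stand-in for a bulb-specific IoT call.
    return "bulb on" if on else "bulb off"

# The server's tool table maps generic tool names to vendor calls.
TOOLS = {
    "lights_on":  lambda args: vendor_set_power(True),
    "lights_off": lambda args: vendor_set_power(False),
}

def call_tool(name: str, args: dict) -> str:
    # What the server does when a tools/call request arrives.
    if name not in TOOLS:
        return f"unknown tool: {name}"
    return TOOLS[name](args)

print(call_tool("lights_on", {}))  # bulb on
```

Swap the vendor functions and the same client works against a different bulb, which is the decoupling being described.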
No, that's not MCP. That's a pleasant idea that MCP has been shoehorned into trying to solve. But MCP the spec is far more complicated than it needs to be to support that story. The streamable HTTP transport makes it much more workable, and I imagine it was designed by real people rather than whoever produced the version prior to it, but it's still much more than it needs to be.
Ultimately, 90% of use cases would be solved by a dramatically simpler spec that was simply an API discovery mechanism, maybe an OpenAPI spec at a .well-known location, plus a simple public-client OAuth approach for authentication and authorization. The full-on DCR approach and the stateful connections specified in the spec are dramatically harder to implement.
More than it needs? Buddy, HTTP is more than any web app needs. It has a lot of stuff in it because it's intended to solve a lot of problems. The fact that there is a bidirectional stateful mode for HTTP is horrifying, but it's there now, and it solves problems. MCP is here, it solves problems we have now, it's supported by industry. If there are pain points, we can fix them in the standard without throwing the baby out with the bathwater.
> The fact that there is a bidirectional stateful mode for HTTP is horrifying,
Oh no, really? So why did the new vibe-coded hotness use WebSockets for bidirectional communication?
> MCP is here, it solves problems we have now,
Many other protocols solve the exact same problem of client-server communication with well-defined ways of discovering available API calls.
> it's supported by industry.
It's supported by hype and people who have very little knowledge of what exists in the world.
Also, industry is notorious for supporting a lot of crazy and bad shit. Doesn't make it good.
> If there are pain points, we can fix them in the standard without throwing the baby out with the bathwater.
You have already thrown out a lot of babies by deciding that the vibe-coded MCP protocol is the only true way to set up two-way communication between a server and a client, and by refusing to even entertain the thought that it might not be a good protocol to begin with.
> But with MCP, Claude only has to support MCP. They don't have to know anything about your lightbulbs
Except the fact that it has to "know" about that specific manufacturer's bespoke API aka "tool calls" for that specific lightbulb. If the manufacturer provides an API for the lightbulb.
MCP is a vibe-coded communications protocol. There's nothing more standard or re-usable in MCP than HTTP, or any protocol built on top of it. Hell, using GraphQL would be a more standardized, re-usable and discoverable way of doing things than MCP. Fielding described an architecture for machine-discoverable APIs (REST) back in 2000.
1) MCP does more than just make an API call, 2) only the MCP server has to know about the lightbulb.
Example: right now, I want to add web search to my local AI agent. Normally you'd have to add some custom logic to the agent to do this. But instead, I merely support MCP in the agent. Now I can connect to a SearXNG MCP server, and tell my agent to "use /web_search". Boom, I have web search, and the agent didn't need anything added to it. Similarly, SearXNG didn't need to know anything about my AI agent.
If you "just used HTTP", you could not do that. You'd need to add extra code to SearXNG, or extra code to the AI agent, just to support this one use case.
GraphQL does not have any of the AI-specific features in it, and is way more complex than MCP.
It literally does that. What MCP calls a "tool call" is literally an API call (well, technically an RPC call since it's just JSON-RPC underneath).
But that's beside the point. Your original claim was this:
--- start quote ---
The only way you can connect different applications together are "integrations" (tightly coupling your app into the bespoke API of another app).
--- end quote ---
1. MCP doesn't solve that. Every MCP server you connect to will expose its own bespoke API (aka tools) incompatible with anything else, in data formats incompatible with anything else.
2. No idea what SearXNG is, but if you used Swagger/OpenAPI or GraphQL you could easily have provided a standard way to discover what your API offers, and ways of calling that API
> You'd need to add extra code to SearXNG
You literally added extra code to SearXNG to expose an MCP server.
> GraphQL does not have any of the AI-specific features in it
Neither does MCP. Just because they invented new cute terms for JSON-RPC doesn't make it any more suitable for AI than literally any other protocol. And don't forget the idiocy of using a one-way communication protocol for two-way communication.
MCP re-invented SOAP, badly, with none of the advantages, and most of the disadvantages
Tell me how many different formats of help output you have seen for commands, and then say "reusable" again. MCP exists exactly to solve this. The rest is just JSON-RPC with simple key-value pairs.
You can probably let the LLM guess the help flag and try to parse the help message, but the success rate depends entirely on the model you are using.
The problem is that engineers of data formats have ignored the concept of layers. With network protocols, you make one layer (Ethernet), you add another layer (IP), then another (TCP), then another (HTTP). Each one fits inside the last, but is independent, and you can deal with them separately or together. Each one has a specialty and is used for certain things. The benefits are 1) you don't need "a kitchen sink", 2) you can replace layers as needed for your use-case, 3) you can ship them together or individually.
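A minimal sketch of what that layering buys you: HTTP is just text wrapped around a payload, and it neither knows nor cares which transport carries it:

```python
# Each layer only wraps the payload of the layer above it. Here HTTP is
# plain text; TCP/IP/Ethernet below it are someone else's problem.
def http_get(host: str, path: str) -> bytes:
    """Build a minimal HTTP/1.1 request, independent of any transport."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode("ascii")

request = http_get("example.com", "/index.html")
# Handing these bytes to a TCP socket, a TLS wrapper, or a test harness
# is interchangeable -- that interchangeability is the layer boundary.
```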
I don't think anyone designs formats this way, and I doubt any popular formats are designed for this. I'm not that familiar with enterprise/big-data formats so maybe one of them is?
For example: CSV is great, but obviously limited, and not specified all that well. A replacement table data format could be binary (it's 2026, let's stop "escaping quotes", and make room for binary data). Each row can have header metadata to define which columns are contained, so you can skip empty columns. Each cell can be any data format you want (specifically so you can layer!). The header at the beginning of the data format could (optionally) include an index of all the rows, or it could come at the end of the file. And this whole table data format could be wrapped by another format. Due to this design, you can embed it in other formats, you can choose how to define cells (pick a cell-data-format of your choosing to fit your data/type/etc, replace it later without replacing the whole table), you can view it out-of-order, you can stream it, and you can use an index.
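A toy sketch of the per-row header idea (this wire format is invented purely for illustration, not a real spec):

```python
import struct

# Toy version of the proposed format: every row carries its own header
# listing which columns are present, so empty columns cost nothing, and
# each cell is an opaque length-prefixed byte blob (layer anything inside).
def pack_row(cells: dict[int, bytes]) -> bytes:
    out = struct.pack("<H", len(cells))            # row header: cell count
    for col, data in sorted(cells.items()):
        out += struct.pack("<HI", col, len(data))  # column id + cell length
        out += data                                # raw bytes, no escaping
    return out

def unpack_row(buf: bytes) -> dict[int, bytes]:
    (count,) = struct.unpack_from("<H", buf, 0)
    offset, cells = 2, {}
    for _ in range(count):
        col, length = struct.unpack_from("<HI", buf, offset)
        offset += 6
        cells[col] = buf[offset:offset + length]
        offset += length
    return cells

row = pack_row({0: b"alice", 2: b"\x89PNG..."})    # column 1 is simply absent
```

Because cells are length-prefixed rather than delimited, binary data needs no quoting, and a reader can skip cells it doesn't understand.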
> With network protocols, you make one layer (Ethernet), you add another layer (IP), then another (TCP), then another (HTTP). Each one fits inside the last, but is independent, and you can deal with them separately or together.
It looks neat when you illustrate it with stacked boxes or concentric circles, but real-world problems quickly show the ugly seams. For example, how do you handle encryption? There are arguments (and solutions!) for every layer, each with its own tradeoffs. But it can't be neatly slotted into the layered structure once and for all. Then you have things like session persistence, network mobility, you name it.
Data formats have other sets of tradeoffs pulling them in different directions, but I don't think that layered design would come near to solving any of them.
Some early binary formats followed similar concepts. Look up the Interchange File Format (IFF), AIFF, RIFF, their applications, and all the file formats still using this structure to this day.
I would say that most video file formats today are a bit like that too: they allow different stream data encoding schemes, with metadata defining the particular format (a more familiar example, even if it's not as generic).
Have a look at Asset Administration Shells (AAS) -- it is a data exchange format built on top of JSON and XML (and RDF, and OPC UA and Protobuf, etc.).
Eh, this escaping problem was basically solved ages ago.
If we really wanted to make a UTF-8 data interchange format that needs minimal escaping, we already have ␜ (FS File Separator U+001C), ␝ (GS Group Separator U+001D), ␞ (RS Record Separator U+001E), and ␟ (US Unit Separator U+001F). The problem is that they suck to type out, so they suck for character-based interchange. But we could add them to that emoji keyboard widget on modern OSs that usually gets bound to <Meta> + <.>.
If we put those someplace people could easily type them, that would solve the problem.
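A minimal sketch of separator-based interchange (assuming, of course, that the data itself never contains these control characters):

```python
# The four C0 information separators, unused by virtually all text data.
US, RS, GS, FS = "\x1f", "\x1e", "\x1d", "\x1c"  # unit/record/group/file

def encode(rows: list[list[str]]) -> str:
    """Join fields with US and records with RS -- no quoting, no escaping."""
    return RS.join(US.join(row) for row in rows)

def decode(blob: str) -> list[list[str]]:
    return [row.split(US) for row in blob.split(RS)]

table = [["name", "note"], ["bob", 'said "hi, there"']]
assert decode(encode(table)) == table  # commas and quotes pass through untouched
```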
But, binary data? Eh, that really should be transmitted as binary data and not as data encoded in a character format. Like not only not using Base64, but also not using a character representation of a byte stream like "0x89504E470D0A1A0A...". Instead you should send a byte stream as a separate file.
So we need a way to combine a bunch of files into a streaming, compressed format.
And the thing is, we already have that format. It's .tar.lz4!
Row separator is great, until you find that someone has put one in a data field. Like your comment. It just moves the problem (control and data mixed together) to a less-used control character.
different things. adding levels of abstraction is not the same as having a statistical model generate abstractions for you.
you can still call it spec-programming but if you don't audit your generated code then you're simply doing it wrong; you just don't realize that yet because you've been getting away with it until now.
"what is the best open weight model for high-quality coding that fits in 8GB VRAM and 32GB system RAM with t/s >= 30 and context >= 32768" -> Qwen2.5-Coder-7B-Instruct
"what is the best open weight model for research w/web search that fits in 24GB VRAM and 32GB system RAM with t/s >= 60 and context >= 400k" -> Qwen3-30B-A3B-Instruct-2507
"what is the best open weight embedding model for RAG on a collection of 100,000 documents that fits in 40GB VRAM and 128GB system RAM with t/s >= 50 and context >= 200k" -> Qwen3-Embedding-8B
Specific models & sizes for specific use cases on specific hardware at specific speeds.
- The t/s estimation per machine is off. Some of these models run generation at twice the speed listed (I just checked on a couple of Macs and an AMD laptop). I guess there's no way around that, but some sort of sliding scale might be better.
- Ollama vs llama.cpp vs others produce different results. I can run gpt-oss 20b with Ollama on a 16GB Mac, but it fails with "out of memory" with the latest llama.cpp (regardless of param tuning, using their mxfp4). OTOH, when llama.cpp does work, you can usually tweak it to be faster, if you learn the secret arts (like offloading only specific MoE tensors). So the t/s rating is even more subjective than just the hardware.
- It's great that they list speed and size per-quant, but that needs to be a filter for the main list. It might be "16 t/s" at Q4, but if it's a small model you need higher quant (Q5/6/8) to not lose quality, so the advertised t/s should be one of those
- Why is there an initial section which is all "performs poorly", and then "all models" below it shows a ton of models that perform well?
> The only way to stop this from happening is half the country refuse to buy any tech that implements OS age verification
No, the way to stop it is to talk to your representatives.
You have the power. You just have to pick up a phone, and ask your friends, relatives, neighbors, to do the same. (They will, because it affects all of them.) Tell your reps to remove the legislation or you're voting them out. They don't want to lose their jobs. They will change if you tell them to. But only if you tell them. That is your power. Use it or lose it.
> the way to stop it is to talk to your representatives.
I keep seeing this advice, yet whenever it actually matters, it doesn't really work
No amount of talking to representatives stopped the genocide in Gaza, no amount of talking to representatives is stopping what the US is doing now in Iran
Majority of Congress voted to continue war in Iran, despite an overwhelming majority of Americans being opposed to it
I hate to be negative here but every single time I have spoken with a representative, they will just take the party line. "Thank you for reaching out. We are doing X as advised by the department of Y based on our evidence of Z."
Then they just continue with what was already happening.