Discussion
The open source AI coding agent
sergiotapia: If I wanted to switch from Claude Code to this - what openai model is comparable to opus 4.6? And is it the same speed or slower/faster? Thank you!
ramon156: The agent that is blacklisted from Anthropic, with more soon to come. I really like how their subagents work; as a bonus I get to choose which model is in which agent. Sadly I have to resort to the mess that Anthropic calls Claude Code.
pimeys: GPT 5.4 has been the winner this week. Last week Opus 4.6. You can use both in OpenCode.
lima: You can still use OpenCode with the Anthropic API.
pimeys: Yep. That's what I do. Just API keys and you can switch from Opus to GPT especially this week when Opus has been kind of wonky.
jatora: 'just API key' lol. just hundreds of dollars at a minimum
swyx: do you care about harness benchmarks or no?
sergiotapia: Just a data point, I would need to use it for my workflows. I do have a monorepo with a root level claude.md, and project level claude.md files for backend/frontend.
cgeier: I'm a big fan of OpenCode. I'm mostly using it via https://github.com/prokube/pk-opencode-webui which I built with my colleague (using OpenCode).
gwd: Or have Claude write the code and Gemini review it. (Was using GPT for review until the recent Pentagon thing.)
vadepaysa: Things that make me an OpenCode fanboy:

1. The OpenCode source code is even more awesome. I have learned so much from the way they have organized tools, agents, settings and prompts.
2. models.dev is an amazing free resource of LLM endpoints these guys have put together.
3. OpenCode Zen almost always has a FREE coding model that you can use for all kinds of work. I recently used the free tier to organize and rename all my documents.
nopurpose: Claude Code subscription is still usable, but requires plugin like https://github.com/griffinmartin/opencode-claude-auth
canadiantim: Sure but will you get banned by anthropic anyway?
specproc: This is the problem with this bollocks. Outsourcing our brains at a per-token rate. It'd be exciting if I didn't have to pay Americans for it.
arbuge: How does it compare to using GPT 5.4 inside Codex?
stavros: I pay $100/mo to Anthropic. Yesterday I coded one small feature via an API key by accident and it cost $6. At this rate, it will cost me $1000/mo to develop with Opus. I might as well code by hand, or switch to the $20 Codex plan, which will probably be more than enough.

I'd rather switch to OpenAI than give up my favorite harness.
hereme888: The reason I'm switching again next month, from Claude back to OpenAI.
hungryhobbit: Yeah, support the company that promised to help your government illegally mass surveil and mass kill people, because they offered a new micro-optimization.
pczy: They are not blacklisted. You are allowed to use the API at commercial usage pricing. You are just not allowed to use your Claude Code subscription with OpenCode (or any other third‑party harness for the record).
hereme888: Was it not obvious what the OP meant by blacklisted?
enraged_camel: No, it was not? For those whose native language is English, "blacklisted" implies Claude API will not allow OpenCode.
QubridAI: OpenCode feels like the “open-source Copilot agent” moment the more control, hackability, and no black-box lock-in.
oldestofsports: I don't understand this, what is the difference, technically?
hereme888: Subscription = a token that requires refreshing 1-2x/day, and you get the freedom to use your subscription-level usage amount any way you want.

API = way more expensive, but you're allowed to use it on your terms without Anthropic hindering you.
p0w3n3d: For some reason OpenCode does not have an option to disable the streaming HTTP client, which renders some inference providers unavailable... There's also a request and a PR to add such an option, but it was closed for "not adhering to community standards".
jedisct1: For open models with limited context, Swival works really well: https://swival.dev
avereveard: Isn't this the one with default-on telemetry that needs a code change to turn off?
flexagoon: No
avereveard: https://github.com/anomalyco/opencode/issues/5554

https://www.reddit.com/r/LocalLLaMA/comments/1rv690j/opencod...?
xienze: Yeah I had a similar experience one time. Which is why I laugh when people suggest Anthropic is profitable. Sure, maybe if everyone does API pricing. Which they won’t because it’s so damn expensive. Another way to think about it is API pricing is a glimpse into the future when everyone is dependent on these services and the subscription model price increases start.
stavros: Both of them promised to help their government illegally mass surveil and mass kill people. One of them just didn't want it done to US citizens.

I'm not a US citizen, so both companies are the same, as far as I'm concerned.
hungryhobbit: You are absolutely correct that both are evil ... as are most corporations.

Still, I feel like "will commit illegal mass murder against their own citizens" is a significant degree more evil. I think lots of corporations will help their government murder citizens of other countries, but very few would go so far as to agree to murder their own (fellow) citizens ... just to get a juicy contract.
caderosche: I feel like Anthropic really needs to fork this for Claude Code or something. The render bugs in Claude Code drive me nuts.
rbanffy: If you want faster, anything running on a Cerebras machine will do. Never tried it for much coding, though.
eli: Outside of their (hard to buy) GLM 4.7 coding plans, it's also extremely expensive.
Frannky: I don't use it for coding but as an agent backend. Maybe OpenCode was designed mainly for coding, but for me it's incredibly good as an agent, especially when paired with skills and a FastAPI server, and opencode go (minimax) is just so much intelligence at an incredibly cheap price. Plus, you can talk to it via channels if you use a claw.
lairv: I tried to use it but OpenCode won't even open for me on Wayland (Ubuntu 24.04), whichever terminal emulator I use. I wasn't even aware TUI could have compatibility issues with Wayland
arikrahman: Can anyone clarify how this compares with Aider?
KronisLV: With Anthropic, you either pay per token with an API key (expensive), or use their subscription, but only with the tools that they provide you - Claude, Claude Cowork and Claude Code (both GUI and CLI variants). Individuals generally get to use the subscriptions; companies, especially the ones building services on top of their models, are expected to pay per token. The same applies to various third-party tools.

The belief is that the subscriptions are subsidized by them (or just heavily cut into profit margins), so for whatever reason they're trying to maintain control over the harness - maybe to gather more usage analytics, gain an edge over competitors, and improve how their models work with it, or perhaps to route certain requests to Haiku or Sonnet instead of using Opus for everything, to cut down on the compute.

Given the ample usage limits, I personally just use Claude Code now with their 100 USD per month subscription because it gives me the best value - kind of sucks that they won't support other harnesses though (especially custom GUIs for managing parallel tasks/projects). OpenCode never worked well for me on Windows anyway; I've also used Codex and Gemini CLI.
anonym29: > or perhaps to route certain requests to Haiku or Sonnet instead of using Opus for everything, to cut down on the compute

You can point Claude Code at a local inference server (e.g. llama.cpp, vLLM) and see which model names it sends each request to. It's not hard to do a MITM against it either. Claude Code does send some requests to Haiku, but not the ones you're making with whatever model you have it set to - those are tool-result processing requests, conversation summary / title generation requests, etc. - low-complexity background stuff.

Now, Anthropic could simply take requests to their Opus model and internally route them to Sonnet on the server side, but then it wouldn't really matter which harness was used or what the client requests anyway, as this would be happening server-side.
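The "point it at a local server and watch what it asks for" check above can be sketched as a tiny logging endpoint. This is illustrative only: the `/v1/messages` path and the `"model"` field follow the public Anthropic Messages API shape, and the stub reply is a placeholder (a real MITM would forward the request upstream).

```python
# Minimal sketch: log which model name each incoming request asks for.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def model_of(body: bytes) -> str:
    """Extract the requested model name from a JSON request body."""
    try:
        return json.loads(body).get("model", "<missing>")
    except (ValueError, AttributeError):
        return "<unparseable>"

class LoggingHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        # Print the path and requested model for every call the client makes.
        print(f"{self.path} -> model={model_of(body)}")
        # Reply with an empty stub; a real proxy would forward upstream instead.
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(b"{}")

# To run, point the client here (e.g. ANTHROPIC_BASE_URL=http://127.0.0.1:8089):
# HTTPServer(("127.0.0.1", 8089), LoggingHandler).serve_forever()
```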
brendanmc6: I've been extraordinarily productive with this, their $10 Go plan, and a rigorous spec-driven workflow. Haven't touched Claude in 2 months.

I sprinkle in some billed API usage to power my task-planner and reviewer subagents (both use GPT 5.4 now).

The ability to switch models is very useful and a great learning experience. GLM, Kimi and their free models surprised me. Not the best, not perfect, but still very productive. I would be a wary shareholder if I owned a stake in the frontier labs… that moat seems to be shrinking fast.
hippycruncher22: I'm a https://pi.dev man myself.
anonym29: Just remember, OpenCode is sending all of your prompts and responses to their own servers, even when you're using your own locally hosted models. There are no environment variables, flags, or other configuration options to disable this behavior.¹

At least you can easily turn off telemetry in Claude Code - just set CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC to 1.

You can use Claude Code with llama.cpp and vLLM too, right out of the box, with no additional software necessary - just point ANTHROPIC_BASE_URL at your inference server of choice, with any value in ANTHROPIC_API_KEY.

Some people think that Anthropic could disable this at any time, but that's not really true - you can disable automatic updates and back up and reuse native Claude Code binaries, ensuring Anthropic cannot change your existing local Claude Code binary's behavior.

With that said, I much prefer the idea of an open source TUI agent that won't spy on me without my consent over a closed source TUI agent whose telemetry I can effectively neuter - but sadly, OpenCode is not the former. It's just another piece of VC-funded spyware that's destined for enshittification.

¹https://github.com/anomalyco/opencode/blob/4d7cbdcbef92bb696...
kristopolous: I've often thought about making things that just send garbage to any data-collecting service.

You'd be surprised how useless datasets become with like 10% garbage data when you don't know which data is garbage.
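A toy illustration of that claim: with ~10% junk mixed in and no way to tell which records are junk, simple aggregate statistics become unreliable. The "telemetry values" here are invented for the demo.

```python
# Demo: poisoning 10% of a dataset wrecks its aggregate statistics.
import random
import statistics

def poison(data, frac=0.10, rng=None):
    """Replace a random `frac` of the values with wide-range junk."""
    rng = rng or random.Random(0)
    out = list(data)
    for i in rng.sample(range(len(out)), int(len(out) * frac)):
        out[i] = rng.uniform(-1_000_000, 1_000_000)
    return out

rng = random.Random(42)
clean = [rng.gauss(100, 10) for _ in range(1000)]  # pretend real telemetry
dirty = poison(clean)

print(f"clean mean={statistics.mean(clean):.1f} stdev={statistics.stdev(clean):.1f}")
print(f"dirty mean={statistics.mean(dirty):.1f} stdev={statistics.stdev(dirty):.1f}")
```

The poisoned mean and standard deviation are dominated by the 10% of junk values, even though 90% of the data is untouched.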
smetannik: This shouldn't be related to Wayland. It works perfectly fine on Niri, Hyprland and other Wayland WMs. What problem do you have?
flexagoon: You can scroll down literally two messages in the Github issue you linked:> there isnt any telemetry, the open telemetry thing is if you want to get spans like the ai sdk has spans to track tokens and stuff but we dont send them anywhere and they arent enabled either> most likely these requests are for models.dev (our models api which allows us to update the models list without needing new releases)
kristopolous: Gemini's CLI is clearly a fork of it, btw.
stavros: I see your viewpoint but, to me, "both will happily murder you but one is better because they won't murder ME!" isn't very compelling. Like, I get it, but also it changes nothing for me. They're both bad.
cyanydeez: Watching Trump get elected twice, you can see why americanos have no problemos with mental backflips when choosing. But you're still choosing evil when you could try local models.
thefnordling: opus/sonnet 4.6 can be used in opencode with a github copilot subscription
solomatov: Does github copilot ToS allow this?
solomatov: Do they have any sandbox out of the box?
samtheprogram: Definitely not Wayland related, I'd wager. I'm on Wayland and never had any issues, and it's a TUI, where the terminal emulator does or doesn't do the GPU work. What led you to that conclusion?
flexagoon: > I wasn't even aware TUI could have compatibility issues with WaylandThey shouldn't, as long as your terminal emulator doesn't. Why do you think it's Wayland related?
ianschmitz: That linked code is not used by the opencode agent instance though right? Looks related to their web server?
planckscnst: I love OpenCode! I wrote a plugin that adds two tools: prune and retrieve. Prune lets the LLM select messages to remove from the conversation and replace with a summary and key terms. The retrieve tool lets it get those original messages back in case they're needed. I've been livestreaming the development and using it on side projects to make sure it's actually effective... and it turns out it really is! It feels like working with an infinite context window.

https://www.youtube.com/live/z0JYVTAqeQM?si=oLvyLlZiFLTxL7p0
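The prune/retrieve idea can be sketched as a small piece of bookkeeping. This is not the actual OpenCode plugin API - just an illustration of the mechanism described: pruned messages are swapped for a short summary stub and stashed by id so the model can ask for the originals back later. All names here are hypothetical.

```python
# Sketch of prune/retrieve bookkeeping for a conversation history.
from dataclasses import dataclass, field

@dataclass
class PrunableHistory:
    messages: dict[int, str]                       # id -> message text in context
    archive: dict[int, str] = field(default_factory=dict)  # pruned originals

    def prune(self, msg_id: int, summary: str) -> None:
        """Replace a message with a summary stub; keep the original retrievable."""
        self.archive[msg_id] = self.messages[msg_id]
        self.messages[msg_id] = f"[pruned #{msg_id}: {summary}]"

    def retrieve(self, msg_id: int) -> str:
        """Restore a pruned message's original text into the context."""
        self.messages[msg_id] = self.archive.pop(msg_id)
        return self.messages[msg_id]

h = PrunableHistory({1: "long build log ...", 2: "user: fix the failing test"})
h.prune(1, "build log, 3 warnings")
# The context now holds only the stub; the original comes back on demand:
original = h.retrieve(1)
```

The net effect matches the description above: the live context stays small, but nothing is irreversibly lost.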
Duplicake: Why is this upvoted again on Hacker News? This is an old thing.
zer0tonin: Because this site is basically dead for any other subject than vibecoding and AI agents.
Robdel12: > mass kill people

https://www.washingtonpost.com/technology/2026/03/04/anthrop...
kykat: Will you send me an H100?
miki123211: Anthropic's model deployments for Claude Code are likely optimized for Claude Code. I wouldn't be surprised if they had optimizations like sharing of system prompt KV-cache across users, or a speculative execution model specifically fine-tuned for the way Claude Code does tool calls.When setting your token limits, their economics calculations likely assume that those optimizations are going to work. If you're using a different agent, you're basically underpaying for your tokens.
echelon: - OR - it's about lock-in.

Build the single pane of glass everyone uses. Offer it under cost. Salt the earth and kill everything else that moves. Nobody can afford to run alternative interfaces, so they die. This game is as old as time. Remember Reddit apps? Alternative Twitter clients?

In a few years, CC will be the only survivor and viable option. It also kneecaps attempts to distill Opus.
debazel: Are you sure that endpoint is sending all traffic to opencode? I'm not familiar with Hono but it looks like a catch all route if none of the above match anything and is used to serve the front-end web interface?
flexagoon: You are correct, it is indeed a route for the web interface
flexagoon: They don't. That is just the route for their WebUI, which is completely optional.
fnordpiglet: It’s probably a mixture of things including direct control over how the api is called and used as pointed out above and giving a discount for using their ecosystem. They are in fact a business so it should not surprise anyone they act as one.
quietsegfault: Can you talk more about how you leverage higher quality models for the stuff that counts? Anywhere I can read more on the philosophy of when to use each?
singpolyma3: OpenCode vs Aider vs Crush?
cyanydeez: Are you doing something that actually demands it? Have you tried local models on either the mac or AMD395+?