Anthropic blocks third-party use of Claude Code subscriptions (github.com)
555 points by sergiotapia 19 hours ago | 463 comments
dfabulich 17 hours ago [-]
For folks not following the drama: Anthropic's $200/month subscription for Claude Code is much cheaper than Anthropic's pay-as-you-go API. In a month of Claude Code, it's easy to use so many LLM tokens that it would have cost you more than $1,000 if you'd paid via the API.

Why is Anthropic offering such favorable pricing to subscribers? I dunno. But they really want you to use the Claude Code™ CLI with that subscription, not the open-source OpenCode CLI. They want OpenCode users to pay API prices, which could be 5x or more.

So, of course, OpenCode has implemented a workaround, so that folks paying "only" $200/month can use their preferred OpenCode CLI at Anthropic's all-you-can-eat token buffet.

https://github.com/anomalyco/opencode/issues/7410#issuecomme...

Everything about this is ridiculous, and it's all Anthropic's fault. Anthropic shouldn't have an all-you-can-eat plan for $200 when their pay-as-you-go plan would cost more than $1,000+ for comparable usage. Their subscription plans should just sell you API credits at, like, 20% off.

More importantly, Anthropic should have open sourced their Claude Code CLI a year ago. (They can and should just open source it now.)

danmaz74 12 hours ago [-]
> More importantly, Anthropic should have open sourced their Claude Code CLI a year ago. (They can and should just open source it now.)

"Should have" for what reason? I would be happy if they open sourced Claude Code, but the reality is that Claude Code is what makes Anthropic so relevant in the programming more, much more than the Claude models themselves. Asking them to give it away for free to their competitors seems a bit much.

mrbungie 12 hours ago [-]
Well, OpenCode already exists and you can connect it to multiple providers, so you could just say that the agentic CLI harness as a service/billable feature is no longer a viable business model. In hindsight, I would say it never made sense in the first place.
danmaz74 11 hours ago [-]
When I compared OpenCode and Claude Code head to head a couple of months ago, Claude Code worked much better for me. I don't know if they closed the gap in the meantime, but for sure Claude Code has improved since then.
vmg12 9 hours ago [-]
OpenCode launched a couple of months ago, so it makes sense that it's worse. It's much better than Claude Code now. Somehow, for the same model, OpenCode completes the same work faster than Claude Code, and the UX is much better.
rovr138 7 hours ago [-]
You win by adoption.

Here, adoption is a combination of the tool and the model.

If people can’t pay for the model to use the tool, they might not use the tool even if it’s better.

That’s what anthropic is doing.

It might be faster, but it’s more expensive.

aatd86 4 hours ago [-]
There is no loyalty. Those who have the best models win.

The only remaining option is to try to lock consumers into your ecosystem.

vmg12 6 hours ago [-]
Meant to say "it was worse" not "it's worse"
sbarre 6 hours ago [-]
Branding and customer relationships matter as much or more than the "billable service" part of Claude Code.

It's not unheard of for companies that have strong customer mindshare to find themselves intermediated by competitors or other products to the point that they just become part of the infrastructure and eventually lose that mindshare.

I doubt Anthropic wants to become a swappable backend for the actual thing that developers reach for to do their work (the CLI tool).

Don't get me wrong, I think developers should 100% have the choice of tooling they want to use.

But from a business standpoint I think maintaining that direct or first-party connection to the developer is what Anthropic are trying to protect here.

xpe 10 hours ago [-]
The above does not prove that it is irrational for Anthropic to keep the Claude Code source code closed. There are many reasons I can see (and probably some I can’t) for why closed source is advantageous for Anthropic. One such reason (mentioned in various places) is the value-add of certain kinds of analytics and/or telemetry.

Aside: it is pretty easy to let our appreciation* of OSS turn into a kind of confirmation bias about its value to other people/orgs.

* I can understand why people promote OSS from various POVs: ethics, security, end user control, ecosystem innovation, sheer generosity, promotion of goodwill, expressions of creativity, helping others, the love of building things, and much more. I value all of these things. But I’m wary of reasoning and philosophies that offer merely binary judgments, especially ones that try to claim what is best for another party. That's really hard to know so we do well to be humble about our claims.**

**: Finally, being humble about what one knows does not mean being "timid" with your logic or reasoning. Just be sure to state it as clearly as you can by mentioning your premises and values.

rovr138 7 hours ago [-]
Except that the cost is better with their harness, and it looks like people don’t want to fork out 5x.

Adoption is how one wins. Look at all the crappy solutions out there that are still around.

altmanaltman 9 hours ago [-]
> the reality is that Claude Code is what makes Anthropic so relevant in the programming world, much more than the Claude models themselves

but Claude Code cannot run without Claude models? What do you mean?

boh 9 hours ago [-]
Relative to competitors who also have comparable models, Anthropic stands out for its design choices: it manages context effectively with a very well-thought-out and coherent design.
F7F7F7 4 minutes ago [-]
If you enjoy working out of the terminal and CLI/TUIs then it's not even close. Gemini, Codex, Copilot, and every other CLI I can think of are awful. Stumbling, bumbling, and you'd be lucky to keep your file tree intact even with tight permissions (short of a sandbox).

Claude Code feels like my early days when pair programming was all the rage.

If you have the time, OpenCode comes the closest and lets you work across providers seamlessly.

dkdcio 8 hours ago [-]
(i.e. competitors can still use Claude models but haven’t achieved the same DevEx as CC so far, at least in my opinion and many others)

also, while I was initially on the “they should open source” boat, and I’m happy Codex CLI did, there are a ton of benefits to keeping it closed source. just look at how much spam and dumb community drama OpenAI employees now have to deal with on GitHub. I increasingly think it’s a smart move to keep it closed source and iterate without as much direct community involvement on the codebase for now

rovr138 7 hours ago [-]
They could open source and not take contributions fwiw.

They could close the issues and only allow discussions.

There was a project mentioned here recently that did just that.

*Edit

It was Ghostty.

"Why users cannot create Issues directly" - https://news.ycombinator.com/item?id=46460319

joelwilliamson 5 hours ago [-]
They could open source it and not even have a Github project associated. Just provide a read-only git repo on anthropic.com or drop a source tarball every release.
hxugufjfjf 4 hours ago [-]
Then a ton of vibe-coded Claude Code forks out of their control would pop up on GitHub and people would be even more frustrated at Anthropic for not fixing their issues.
kurtis_reed 8 hours ago [-]
https://github.com/musistudio/claude-code-router
reactordev 11 hours ago [-]
Claude code is nothing more than a loop to Opus.
sirtaj 9 hours ago [-]
I use Q (aka kiro-cli) at work with Opus and it's clearly inferior to CC within the first 30s or so of usage. So no, not quite.
elfly 8 hours ago [-]
Kiro is such a disaster. It starts well with all the planning, but I haven't been able to control it. It changes files on a whim and changes opinion from paragraph to paragraph.

Also, it uses the Claude models, but afaik it is constantly changing which one it is using depending on the perceived difficulty.

cobolcomesback 7 hours ago [-]
> it uses the Claude models but afaik it is constantly changing which one it is using depending on the perceived difficulty

Claude Code does the same. You can disable it in Kiro by specifically setting the model to what you want rather than “auto” using /model.

Tbh I’ve found Kiro to be much better than Claude Code. The actual quality of results seems about the same, but I’ve had multiple instances where Claude Code gets stuck because of errors making tool calls whereas Kiro just works. Personally I also just prefer the simplicity of Kiro’s UX over CC’s relatively “flashy” TUI.

giancarlostoro 9 hours ago [-]
Yeah, I've heard of people swapping out the model that Claude Code calls, and apparently it's not THAT much of a difference. What I'd love to see from Anthropic instead is smaller LLM models. I don't even care if they're "open source" or not; just let me pull down a model that takes maybe 4 or 6 GB of VRAM onto my local box and use those for your coding agents. You can direct and guide it with Opus anyway, so why not cut down on costs for everyone (consumer and Anthropic themselves!) by letting users who can run some of the compute locally do so? I've got about 16GB of VRAM I can juice out of my MacBook Pro; I'm okay running a few smaller models locally with the guiding hand of Opus or Sonnet for less compute on the API front.
motbus3 8 hours ago [-]
Anthropic might have good models, but they are the worst. I mentioned in another thread how they do whatever they can to bypass bot detection protections to scrape content.
labcomputer 8 hours ago [-]
So, like, why don’t people just use the better-than-Claude OpenCode CLI with these other just-as-good-as-Claude models?
tacoooooooo 8 hours ago [-]
not sure there are any models yet that give you the quality you need to do this and also run on your MBP
stingraycharles 10 hours ago [-]
What part of a TOS is ridiculous? Claude Code is obviously a loss leader to them, but developer momentum / market share is important to them and they consider it worth it.

What part of “OpenCode broke the TOS of something well defined” makes you think it’s all Anthropic’s fault?

landl0rd 7 hours ago [-]
It's probably not a "loss-leader" so much as "somewhat lower margin". Their bizdev guys are doubtless happy to trade higher-margin, lower-multiple pay-as-you-go API billing for lower-margin, higher-multiple recurring revenue. Corporate customers with contracts doubtless aren't paying like that for the API either. This is not uncommon.
epolanski 1 hours ago [-]
I keep hearing these claims that they lose money on it, but I have more and more doubts about this being true.

GPU compute costs have fallen a lot in the last two years.

mnky9800n 9 hours ago [-]
My guess is that ultimately the use of Claude code will provide the training data to make most of what you do now in Claude code irrelevant.
Ajedi32 6 hours ago [-]
When you have a "loss leader" whose sole purpose is to build up market share (i.e. put competitors out of business), that's called predatory pricing.
anamexis 6 hours ago [-]
Every loss leader's purpose is to build up market share.
ForHackernews 7 hours ago [-]
Do you think Anthropic followed all the ToS of every website on the internet when scraping them for training data?
rovr138 7 hours ago [-]
You justify a wrong thing by attacking something else? Is that the only argument?
ForHackernews 6 hours ago [-]
I don't feel any need to behave respectfully toward people or institutions that clearly don't respect me.
fc417fc802 9 hours ago [-]
Poor behavior is still poor behavior even if the relevant ToS aligns with it.
anamexis 7 hours ago [-]
Why is it poor behavior though?
fc417fc802 12 minutes ago [-]
Restricting users from using third party tools is commonly viewed as poor behavior. I'm not inclined to rehash that debate here, although I might respond to specific (contextually relevant) counterpoints if you feel like making them.
casparvitch 17 hours ago [-]
I guess one issue is that you pay $200/month whether you use it or not. Potentially this could be better for Anthropic. What was not necessarily foreseeable (ok maybe it was) back when that started was that users have invented all kinds of ways to supervise their agents to be as efficient as possible. If they control the client, you can't do that.
vidarh 13 hours ago [-]
I can easily get Claude Code to run for 8-10 hours unsupervised without stopping with sub-agents entirely within Claude Code.

I think it is more likely that if you stick with Claude Code, then you are more likely to stick with Opus/Sonnet, whereas if you use a third party CLI you might be more likely to mix and match or switch away entirely. It's in their interest to get you invested in their tooling.

dabernathy89 3 hours ago [-]
I've yet to come up with a workflow where I would want Claude to do this much work... unless I had an extremely detailed spec defined for it. How do you ensure it doesn't go off the rails?
AnotherGoodName 2 hours ago [-]
You pretty much just said it. Define an extremely detailed spec. I have one that's five .md files to iteratively churn through. It had to be split into that many files since I don't want to break the context length limits on the AI.
KronisLV 12 hours ago [-]
> if you use a third party CLI you might be more likely to mix and match or switch away entirely.

I really like doing this, be it with OpenCode or Copilot or Cline/RooCode/KiloCode: I do have a Cerebras Code subscription (50 USD a month for a lot of tokens but only an okayish model) whereas the rest I use by paying per-token.

Monthly spend ends up being somewhere between 100-150 USD total, obviously depending on what I do and the proportion of simple vs complex tasks.

If Sonnet isn’t great for a given task, I can go for GPT-5 or Gemini 3.

vidarh 11 hours ago [-]
I don't do this much because I really like Opus 4.5, and so far I haven't hit the limits on the $200 subscription much, but I do have some projects where I might need far higher limits.

As a matter of principle, I really would like the flexibility though, as while I love Opus now, who knows which model I will prefer next month.

mlrtime 12 hours ago [-]
On the flip side I started using Claude with other LLMs (openai) because my Pro sub gets maxed out quickly and I want a cheaper alternative to finish a project.

I just use claude code proxy or litellm, set ANTHROPIC_BASE_URL to my proxy, and choose another LLM.
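
A minimal sketch of that kind of setup, for concreteness; it assumes a LiteLLM-style proxy is already listening locally with an Anthropic-compatible endpoint, the address is a placeholder, and the proxy's own auth and model routing are configured separately:

    import os
    import subprocess

    env = os.environ.copy()
    # Point Claude Code at the local proxy instead of api.anthropic.com.
    # The proxy decides which backend LLM actually serves the requests.
    env["ANTHROPIC_BASE_URL"] = "http://localhost:4000"  # assumed proxy address

    # Launch Claude Code with the overridden environment; all model traffic
    # now flows through the proxy.
    subprocess.run(["claude"], env=env)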

vidarh 12 hours ago [-]
That would seem to be a very good reason for them to make Claude Code good enough that people would prefer doing that over the inverse...

But also, they're a bit schizophrenic about what they want Claude Code to be, given you can stream JSON to/from Claude Code to use it as a headless backend including with your subscriptions.

j45 11 hours ago [-]
Multi model is the way of the future though as much as I like and prefer Anthropic.
labcomputer 8 hours ago [-]
> I guess one issue is that you pay $200/month whether you use it or not.

I can easily churn through $100 in an 8 hour work day with API billing. $200/month seems like an incredibly good deal, even if they apply some throttling.

isolli 15 hours ago [-]
Why is supervising one's agents to be as efficient as possible a problem for Anthropic?
dpark 6 hours ago [-]
When people say efficient here, they mean cost efficient, extracting as much work per dollar from Anthropic as possible. This is the opposite of Anthropic’s view of efficiency, which would be providing the minimal amount of service for the most amount of money.
casparvitch 13 hours ago [-]
More inference = more cost (to Anthropic)
ignat980 13 hours ago [-]
What kind of ways to supervise?
denysvitali 10 hours ago [-]
ralph-loop & co
moltar 14 hours ago [-]
To extend your all-you-can-eat analogy: it's similar to how all-you-can-eat restaurants allow you to eat all you can within the bounds of the restaurant, but you aren't allowed to take the food out with you.
hshdhdhj4444 12 hours ago [-]
Another analogy is that it’s a takeout but anthropic is insisting you only eat at home with the plastic utensils they’ve provided rather than the nice metal utensils you have at home.

Another analogy is that it’s a restaurant that offers delivery and they’re insisting you use their own in house delivery service instead of placing a pickup order and asking your friendly neighbor to pick it up for you on their way back from the office.

savanaly 4 hours ago [-]
The all you can eat buffet analogy makes way more sense to me, because it speaks to the aspect of it where the customer can take a lot of something without restriction. That's the critical thing with the Anthropic subscription, and the takeout analogy or delivery service don't contain any element of it.
jbstack 13 hours ago [-]
It's not really a fair analogy. Restaurants don't want you taking food away because they want to limit the amount you eat to a single meal, knowing that you'll stop when you get full. If you take food out you can eat more by waiting until the next meal when you're hungry again.

You don't "get full" and "get hungry again" by switching UIs. You can consume the same amount whether you switch or you don't switch.

dpark 7 hours ago [-]
> You don't "get full" and "get hungry again" by switching UIs. You can consume the same amount whether you switch or you don't switch.

This is actually a compelling argument for Claude Code getting the discount but not extending it to other cases. Claude Code, being subsidized by the company, is incentivized to minimize token usage. Third parties that piggyback on the same flat-rate subscription are not. i.e. Claude Code wants you to eat less.

Of course, I don’t believe at all that this is why Anthropic has blocked this use case. But it is a reasonable argument.

samrolken 7 hours ago [-]
Claude Code does a lot of work in optimizing context usage, how much output is included by tools and how that's done, and when to compact. This very well may make the cost of providing the subscription lower to Anthropic when Claude Code is used. It's well within the realm of possibility if not likelihood that other tools don't have the same incentive to optimize the buffet usage.

Not sure where that goes in the analogies here but maybe something about smaller plates.

mrgoldenbrown 4 hours ago [-]
The UI absolutely could influence the backend usage.

Think about a web browser that respects cache lifetimes vs one that downloads everything every time. As an ISP I'd be more likely to offer you unlimited bandwidth if I knew you were using a caching browser.

Likewise Claude code can optimize how it uses tokens and potentially provide the same benefit with less usage.

dfabulich 13 hours ago [-]
Not really. At a buffet restaurant, if you could take the food out with you, you'd take away more food than you can eat at one sitting. OpenCode users and Claude Code™ CLI users use tokens at approximately the same rate.

This is more like an all-you-can-eat restaurant requiring you to eat with their flimsy plastic forks, forbidding you to bring your own utensils.

samrolken 7 hours ago [-]
Claude Code does a lot regarding optimizing context usage, tool output, sub-agent interactions, context compaction, and stuff like that. I don't imagine OpenCode has the same financial incentive to decrease the token cost Anthropic takes on under the subscriptions.
snackdex 8 hours ago [-]
yes with the whole goal to make the utensils better
whatsupdog 12 hours ago [-]
Why is this being downvoted? This is the perfect analogy.
PunchyHamster 12 hours ago [-]
...no, that's more like "but you can't bring your own fork"
darepublic 12 hours ago [-]
anthropic should not be criticizing the gluttony of others whilst licking its fingers surrounded by buckets full of fried chicken
baby 17 hours ago [-]
Aren't you happy that you can use claude code unlimited for only 200/month? I don't really get your point tbh
Aurornis 15 hours ago [-]
I’d bet almost everyone who opts to buy the $200 plan is happy with the deal they’re getting relative to API pricing.

I think some people get triggered by the inconsistency in pricing or the idea of having a fixed cost for somewhat vague usage limits.

In practice it’s a great deal for anyone using the model at that level.

piokoch 13 hours ago [-]
It is not unlimited; if you're not careful with your context management, you hit the limits quickly.
abhijat 12 hours ago [-]
Isn't the context window the same for all plans, 200k? You would hit usage limits?
billyjobob 8 hours ago [-]
If you send the full 200k tokens on every request you will get very few requests before you hit the token limit. Caching reduces the number sent but I don't know how much they can cache?
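
A back-of-envelope illustration of how much caching changes that picture; every number here is an illustrative assumption, not Anthropic's actual allowance, request size, or cache pricing:

    # All figures below are illustrative assumptions, not real Anthropic numbers.
    BUDGET_TOKENS = 50_000_000   # hypothetical allowance of billable input tokens
    REQUEST_TOKENS = 200_000     # full context window sent on every request
    CACHE_HIT_RATE = 0.9         # assumed share of each prompt served from cache
    CACHE_READ_COST = 0.1        # assumed relative cost of a cached token

    # Without caching, each request burns the full 200k tokens.
    no_cache_requests = BUDGET_TOKENS // REQUEST_TOKENS

    # With caching, only the uncached fraction plus discounted cache reads
    # count against the budget.
    effective_tokens = REQUEST_TOKENS * (
        CACHE_HIT_RATE * CACHE_READ_COST + (1 - CACHE_HIT_RATE)
    )
    with_cache_requests = BUDGET_TOKENS // int(effective_tokens)

    print(no_cache_requests)    # 250 requests without caching
    print(with_cache_requests)  # 1315 requests if 90% of each prompt is cached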
3abiton 17 hours ago [-]
> Why is Anthropic offering such favorable pricing to subscribers? I dunno. But they really want you to use the Claude Code™ CLI with that subscription, not the open-source OpenCode CLI.

Because they are harvesting all the data they can through the CLI to train further models. API access, in contrast, provides much more limited data.

dfabulich 17 hours ago [-]
As far as I know, OpenCode sends (has to send) the same data to Anthropic as Claude Code™ CLI (especially if they're going to successfully imitate CC™ in order to access cheap subscription pricing).
NitpickLawyer 16 hours ago [-]
There are additional signals that a client can send as telemetry that they lose if you use a 3rd party app. Things like accepted vs rejected sessions and so on.
ehsanu1 17 hours ago [-]
But I doubt you can opt in to them training on that data coming in via OpenCode.
DoesntMatter22 17 hours ago [-]
You have already done that when you sign up for an anthropic account
casparvitch 17 hours ago [-]
I doubt you can opt out
eli 17 hours ago [-]
Claude Code only trains on data if you opt in
NitpickLawyer 16 hours ago [-]
They've recently switched to opt-out instead. And even then, if you read the legalese they say "train frontier models". That would (probably) allow them to train a reward model or otherwise test/validate on your data / signals without breaking the agreement. There's a lot of signal in how you use something (e.g. accepted vs. rejected rate) that they can use without strictly including it in the dataset for training their LLMs.
adastra22 17 hours ago [-]
They switched to opt out, with some extra dark patterns to convert people who already opted out into opting in.
throwa356262 16 hours ago [-]
I did not know that. Could you elaborate?
adastra22 10 hours ago [-]
New users now have to opt-out of training on their data - it is enabled by default. For existing users, during the transition they updated their terms and let you know about the change in policy, giving you an option to opt-in or opt-out. Opt-in was the default selection. Just today they AGAIN updated terms, presenting a click-through form on first load that looks like a permissions check (e.g. the standard dialog to enable access to the file system that we're conditioned to click-through). It was actually a terms-of-service update with opt-in selected by default, even if you already explicitly opted out. So if you hit enter to dismiss as you're used to doing, you just switched your account over to opt-in.
CSSer 17 hours ago [-]
I used to be less cynical, but I could see them not honoring that, legal or not. The real answer, regardless of how you feel about that conversation, is that Claude Code, not any model, is the product.
eli 17 hours ago [-]
I couldn't. Aside from violating laws in various countries and opening them up to lawsuits, it would be extremely bad for their enterprise business if they were caught stealing user data.
grumbelbart2 15 hours ago [-]
Maybe. But the data is there, imagine financial troubles, someone buys in and uses the data for whatever they want. Much like 23andme. If you want something to stay a secret, you don't send it to that LLM, or you use a zero-retention contract.
eli 7 hours ago [-]
If your threat model is "vendor willing to ignore contracts and laws to steal your data" I can't see how a zero-retention contract helps.
jrvarela56 17 hours ago [-]
If they believe it would get them AGI they would risk it.
tsimionescu 12 hours ago [-]
You don't have to imagine, you can see it happening all the time. Even huge corps like FB have already been fined for ignoring user consent laws for data tracking, and thousands of smaller ones are obviously ignoring explicit opt-in requirements in the GDPR at least.
onel 11 hours ago [-]
That is not true, though. You have to opt in for them to train on your data
whalesalad 17 hours ago [-]
The Claude Code CLI and API-vs-subscription pricing are tangential issues. You can use Claude Code with an API token.
skerit 12 hours ago [-]
> Anthropic shouldn't have an all-you-can-eat plan for $200 when their pay-as-you-go plan would cost more than $1,000+ for comparable usage. Their subscription plans should just sell you API credits at, like, 20% off

I have the "all-you-can-eat" plan _because_ I know what I'm getting and how much it'll cost me.

I don't see anything wrong with this. It's just a big, time-limited amount of tokens you can use. Of course it sucks that it's limited to Claude Code and Claude.ai. But the other providers have very similar subscriptions. Even the original ChatGPT pro subscription gives you a lot more tokens for the $20 it costs compared to the API cost.

I always assumed tokens over the API cost that much, because that's just what people are willing to pay. And what people are willing to pay for small pay-as-you-go tasks vs large-scale agentic coding just doesn't line up.

And then there's the psychological factor: if Claude messed up and wasted a bunch of tokens, I'm going to be super pissed that those specific tokens will have cost me $30. But when it's just a little blip on my usage limit, I don't really mind.

parliament32 3 hours ago [-]
> let's sell a loss leader

> oh no, people are actually buying the loss leader

I'm looking forward to the upcoming reckoning when all these AI companies start actually charging users what the services cost.

HDThoreaun 41 minutes ago [-]
I see zero reason to believe the $200 subscription is losing money. Anthropic makes subscriptions cheaper because:
1. Most users don't use all their allocated tokens.
2. Subscriptions create a lock-in effect, even if it's a weak one.
3. It's easier to raise money when you can point to your ARR from subscriptions.
4. Lowering month-to-month revenue variance is very valuable for businesses.
OGEnthusiast 17 hours ago [-]
> More importantly, Anthropic should have open sourced their Claude Code CLI a year ago. (They can and should just open source it now.)

Isn't the whole thesis behind LLM coding that you can easily clone the CLI using an LLM? Otherwise what are you paying $200/mo for?

underdeserver 17 hours ago [-]
In some sense that's what OpenCode is, and Anthropic's not having that.
OGEnthusiast 17 hours ago [-]
This whole thing just seems like a "I never thought the leopards would eat my face" by all the people who have been shilling LLMs non-stop.
eli 17 hours ago [-]
Doesn't seem like it has much to do with LLMs at all, just typical vendor lock-in nonsense like how Apple's own apps get entitlements not available to any other developer.
arbirk 2 hours ago [-]
I agree with the principle, but reality dictates that users and exposure are the real currency. So, while annoying, it is understandable that Anthropic subsidizes their own direct users.
wooger 14 hours ago [-]
It's hard to understand what Anthropic are getting from forcing more people to use Claude Code vs any other tools via the API. Why do they care? Do they somehow get better analytics or do they dream that there's a magical lock-in effect... from a buggy CLI?
sho 12 hours ago [-]
I suspect that they lose control over the cheaper models CC can choose for you for eg. file summaries or web fetch. Indeed, they lose web fetch and whatever telemetry it gives them completely.

It's not unreasonable to assume that without the ability to push Haiku aggressively for summarization, the average user costs more in OC than in CC.

oidar 8 hours ago [-]
This is a very good point. If the 3rd-party tools are using Opus for compacting/summarizing, that would increase inference costs for Anthropic.
elAhmo 13 hours ago [-]
Not that hard to understand: they want to control how their users use their product. A CLI they built (even acquiring the framework it was built in) is a way to achieve that.
touristtam 13 hours ago [-]
But to make a "gauche" analogy, that would be like Microsoft not letting you browse their websites without using their browser (Edge).
weikju 13 hours ago [-]
Don’t give them any ideas!
fragmede 6 hours ago [-]
ActiveX has entered the chat.
mike_hearn 12 hours ago [-]
It's because the model companies believe there's no way to survive just selling a model via an API. That is becoming a low margin, undifferentiated commodity business that can't yield the large revenue streams they need to justify the investments. The differences between models just aren't large enough and the practice of giving model weights away for free (dumping) is killing that business.

So they all want to be product companies. OpenAI is able to keep raising crazy amounts of capital because they're a product company and the API is a sideshow. Anthropic got squeezed because Altman launched ChatGPT first for free and immediately claimed the entire market, meaning Anthropic became an undifferentiated Bing-like also-ran until the moment they launched Claude Code and had something unique. For consumer use Claude still languishes but when it comes to coding and the enormous consumption programmers rack up, OpenAI is the one cloning Claude Code rather than the other way around.

For Claude Code to be worth anything to Anthropic's investors it must be a product and not just an API pricing tier. If it's a product they have so many more options. They can e.g. include ads, charge for corporate SSO integrations, charge extra for more features, add social features... I'm sure they have a thousand ideas, all of which require controlling the user interface and product surface.

That's the entire reason they're willing to engage in their own market dumping by underpricing tokens when consumed via their CLI/web tooling: build up product loyalty that can then be leveraged into further revenue streams beyond paying for tokens. That strategy doesn't work if anyone can just emulate the Claude Code CLI at the wire level. It'd mean Anthropic buys market share for their own competitors.

N.B. this kind of situation is super common in the tech industry. If you've ever looked at Google's properties you'll discover they're all locked behind Javascript challenges that verify you're using a real web browser. The features and pricing of the APIs are usually very different from what consumers can access via their web browser, and technical tricks are used to segment that market. That's why SERP scraping is a SaaS (it's far too hard to do directly yourself at scale, it has to be outsourced now), and why Google is suing them for bypassing "SearchGuard", which appears to just be BotGuard rebranded. I designed the first version of BotGuard, and the reason they use it on every surface now, and not just for antispam, is that businesses require the ability to segment API traffic that might be generated by competitors from end user/human traffic generated by their own products.

If Anthropic want to continue with this strategy they'll need to do the same thing. They'll need to build what is effectively an anti-abuse team similar to the BotGuard team at Google or the VAC team at Valve, people specialized in client integrity techniques and who have experience in detecting emulators over the network.

0x500x79 6 hours ago [-]
DAU/MAU for IPO.
rienbdj 5 hours ago [-]
What can we learn from this?

The model is not a moat

They need to own the point of interaction to drive company valuation. Users care more about tool switching costs than about the particular model they use.

icsrutil 13 hours ago [-]
> They can and should just open source it now

Why do you have this idea? Why should they open source it now?

steveharman 11 hours ago [-]
I believe there are a number of cli tools which also use Anthropic's Max plan (subscription) - this isn't just an OpenCode issue.

I had the TaskMaster AI tool hooked up to my Anthropic sub, as well as a couple of other things - Kilo Code and Roo Code iirc?

From discussions at the time (6 months ago) this "use your Anthropic sub" functionality was listed by at least one of the above projects as "thanks to the functionality of the Anthropic SDK you can now use your sub...." implying it was officially sanctioned rather than via a "workaround".

HDThoreaun 46 minutes ago [-]
> Why is Anthropic offering such favorable pricing to subscribers?

Most subscribers don't use up all their allocated tokens. They're banning these third parties because they consistently do use all their allocated tokens.

adam_patarino 10 hours ago [-]
They are subsidizing Claude code so they can use your data to train better coding models. You’re paying them to show their models how to code better.
stemlord 8 hours ago [-]
If true I wonder what kind of feedback loop is happening by training on human behavior that's directly influenced by the output of the same model
adam_patarino 7 hours ago [-]
We build our fine tuning and reinforcement pipeline at cortex.build by synthesizing interactions between a user, the agent loops, and a codebase. The exact data they get from users in Claude Code.

That data is critical to improve tool call use (both in correctness and in when the agent chooses to use that tool). It's also important for the context rewrites Claude does. They rewrite your prompt and continuously manage the back-and-forth with the model. So does Cortex, just more aggressively with a more powerful context graph.

mirzap 10 hours ago [-]
I tend to think that their margins on API pricing are significantly higher. They likely gave up some of that margin to grow the Claude Code user base, though it probably still runs at a thin profit. Businesses are simply better customers than individuals and are willing to pay much more.
xtiansimon 6 hours ago [-]
Sorry, Claude Code is $200/mo? I’m not using it now, but was thinking about giving it a try. The website shows $200/year for Pro:

“$17 Per month with annual subscription discount ($200 billed up front). $20 if billed monthly.”

https://claude.com/pricing

What are you referring to that’s 10x that price? (Conversely, I’m wondering why Pro costs 1/10 the value of whatever you’re referring to?!?)

lcnPylGDnU4H9OF 5 hours ago [-]
They don't exactly put it front and center. Click "usage limits" (at the bottom; https://support.claude.com/en/articles/9797557-usage-limit-b...) then "Max plan" in the first list (https://support.claude.com/en/articles/11014257-about-claude...). There is a $200/mo price which people are likely referring to with "20x more usage per session" (which kinda bothers me because I'd bet my bottom dollar it's 20x "as much" but that's a lost cause).
crassus_ed 6 hours ago [-]
I got a pro subscription yesterday. With it you get a certain amount of tokens and you have a certain limit every 5 hours and every week.

Once the limit is reached, you can choose to pay-per-token, upgrade your plan, or just wait until it refreshes. The more expensive subscription variants just contain more tokens, that’s all.

vadansky 6 hours ago [-]
Keep scrolling down, there is a Max option
kar1181 13 hours ago [-]
Anthropic badly want you to use the Claude Code CLI and are prepared to be very generous if you do. People want to take that generosity without the reciprocity.

I don't normally like to come down on the side of the megabigcorp but in this case anthropic aren't being evil. Not yet anyway.

redbluered 12 hours ago [-]
I think they are.

The key question is why they want you to use the CLI. If you're not the customer, you're the product.

There's also a monopolistic aspect to this. Having the best model isn't something one can legally exploit to gain advantage in adjacent markets.

It reeks of "Windows isn't done until Lotus won't run," Windows showing spurious error messages for DR-DOS, and Borland C++ losing to the then-inferior Visual C++ due to late support of new Windows features. And Internet Explorer bundling versus Netscape.

Yes, Microsoft badly wanted you to use Office, Visual C++, MS-DOS, and IE, but using Windows to get that was illegal.

Microsoft lost in court, paid a nominal fine, and executives were crying all the way to the bank.

senordevnyc 8 hours ago [-]
If you're not the customer, you're the product.

You are the customer, you're paying them directly.

realharo 7 hours ago [-]
Well they are doing the same to website owners who rely on human visitors for their revenue streams.

Both scraping and on-demand agent-driven interactions erode that. So you could look at people doing the same to them as a sort of poetic justice, from a purely moral standpoint at least.

dheatov 11 hours ago [-]
Assuming the actual cost for many users is closer to 1k USD/mth than to 200 USD/mth, and that the higher figure is closer to what they'd need to charge to hit their target margin as a viable business, they're practically subsidising usage beyond 200 USD/mth. Together with other AI companies doing the same, they fabricate a false sense of "AI is capable AND affordable", which imo is evil.
vovavili 11 hours ago [-]
There is nothing evil about prioritizing customer acquisition over immediate profit.
fc417fc802 9 hours ago [-]
And yet in many cases there are regulations against it. Almost as though behavior that warps the market is generally undesirable.
vovavili 9 hours ago [-]
Give me an example of one such regulation.
fc417fc802 9 hours ago [-]
Are you sure you're participating in good faith? I'll go ahead and indulge you for the benefit of the audience though.

The generic term is predatory pricing and it's regulated to some extent in pretty much every country. https://en.wikipedia.org/wiki/Predatory_pricing#Legal_aspect...

When carried out at the international level it's known as dumping. The WTO has provisions against it. https://en.wikipedia.org/wiki/Dumping_(pricing_policy)#Legal...

keeda 2 hours ago [-]
As the Wikipedia page calls out, predatory pricing is generally in the context of a dominant firm throwing their weight around to dominate the market. You could make this case against large incumbents like Microsoft and Google, but Anthropic is actually the upstart here.

In any case, all this depends on how you define the "market", and the entire market for AI-assisted coding is very nascent and fast-moving to make any reliable calls about dominance at this point.

fc417fc802 17 minutes ago [-]
That's the context where it tends to get regulated. Indeed it doesn't apply to Anthropic. Neither is Anthropic a party to the WTO.

I was asked for examples of behavior that distorts the market being regulated and provided two of them. There are other examples out there as well.

llmslave2 13 hours ago [-]
> More importantly, Anthropic should have open sourced their Claude Code CLI a year ago. (They can and should just open source it now.)

Who cares? Just have Claude vibe code it in an afternoon...

weird-eye-issue 15 hours ago [-]
By this logic ChatGPT shouldn't exist either and should be charged by API pricing
motbus3 8 hours ago [-]
I guess we will find out on the updated TOCs very soon
heliumtera 8 hours ago [-]
>But they really want you to use the Claude Code™

They definitely want their absolutely proprietary software with sudo privilege on your machine. I wonder why they would want that geeez

rovr138 7 hours ago [-]
it doesn't need sudo privileges...
heliumtera 3 hours ago [-]
Yeah, it is not necessary. Being able to run unprivileged commands on millions of machines is already a stupidly powerful thing
Xevion 17 hours ago [-]
> Anthropic shouldn't have an all-you-can-eat plan for $200 when their pay-as-you-go plan would cost more than $1,000+ for comparable usage. Their subscription plans should just sell you API credits at, like, 20% off.

Sorry, I don't understand this. Either you're saying

A) Everyone paying $200/mo should now pay $800/mo to match this 20% off figure you're theorizing... or

B) Maybe you're implying that the $1,000+ costs are too high and they should be lowered, to like, what, $250/mo? (250 - 20% = $200)

Which confuses me, because neither option is feasible or ever gonna happen.

tsimionescu 11 hours ago [-]
Not the OP, but it seems pretty clear to me - they're suggesting that fixed per-month pricing with unlimited usage shouldn't exist at all, as it doesn't really make sense for a product that has per token costs.

Instead, they're saying that a 200$/month subscription should pay for something like $250 worth of pay-per-token API tokens, and additionally give preferential pricing for using up more tokens than that.

So, if the normal API pricing were 10$ per million tokens, a 200$ per month subscription should offer 25M tokens for free, and allow you to use more tokens at a 9$/1M token rate. This way, if you used 50M tokens in a month, you'd pay 425$ with a subscription, versus paying 500$ with pay-as-you-go. This is still a good discount, but doesn't create perverse incentives, as the current model does.
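
For concreteness, here is that scheme as a small calculation; the 10$/1M list price, 25M included tokens, and 9$/1M overage rate are the hypothetical numbers from this comment, not Anthropic's real prices:

    # Hypothetical rates taken from the comment above, not Anthropic's actual pricing.
    LIST_PRICE = 10.0        # $ per 1M tokens, pay-as-you-go
    SUB_PRICE = 200.0        # $ per month for the subscription
    INCLUDED = 25_000_000    # tokens included in the subscription
    OVERAGE_PRICE = 9.0      # $ per 1M tokens beyond the included amount

    def subscription_cost(tokens: int) -> float:
        extra = max(0, tokens - INCLUDED)
        return SUB_PRICE + extra / 1_000_000 * OVERAGE_PRICE

    def payg_cost(tokens: int) -> float:
        return tokens / 1_000_000 * LIST_PRICE

    print(subscription_cost(50_000_000))  # 425.0
    print(payg_cost(50_000_000))          # 500.0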

reactordev 11 hours ago [-]
It’s all about the data.

They want you to use their tool so they can collect data.

sourcecodeplz 12 hours ago [-]
It is and was open source from the start?

https://github.com/anthropics/claude-code

c0balt 12 hours ago [-]
Unless I'm severely mistaken that's not the source code for claude-code. It's a few official plugins and some helper scripts
vovavili 11 hours ago [-]
Look a bit closer at the contents of this repo. There is basically no code.
krembo 17 hours ago [-]
I wonder how these prices compare to running Claude over Bedrock.
weird-eye-issue 15 hours ago [-]
The price is the same whether you run it on Bedrock or Anthropic....
hahahahhaah 17 hours ago [-]
Anthropic and all the AI companies are playing chicken with each other. You need to win userbase, and that is worth losing money for, but selling discount tokens for Lovable clones to profit from is not in your interest.

Anthropic is further complicated by its mission.

piokoch 13 hours ago [-]
My problem with CC is that it is trying to be very creative. I ask it to fix some test, or create a new test. What does it do? It runs grep to find all tests in the code base and parses them. This eats a lot of tokens.

Then it runs the test, as if I could not do this myself, it reads the output, which is sometimes very long (so more and more tokens are burned), and so on.

If people had to pay API prices for this cleverness and creativity, they would be a bit shocked and give up on CC quickly.

Using Aider with Claude Sonnet, I am eating far fewer tokens than CC does.

j45 11 hours ago [-]
Claude Code is unusually efficient in its use of tokens, on top of it all.
bakugo 16 hours ago [-]
> Why is Anthropic offering such favorable pricing to subscribers? I dunno

I do, it's called vendor lock-in. The product they're trying to sell is not the $200 subscription, it's the entire Claude Code ecosystem.

For the average person, the words "AI" and "ChatGPT" are synonyms. OpenAI's competitors have long conceded this loss, and for the most part, they're not even trying to compete, because it's clear to everyone that there is no clear path to monetization in this market - the average joe isn't going to pay for a $100/mo subscription to ask a chatbot to do their homework or write a chocolate cake recipe, so good luck making money there.

The programming market is an entirely different story, though. It's clear that corporations are willing to pay decent money to replace human programmers with a service that does their work in a fraction of the time (and even the programmers themselves are willing to pay independently to do less work, even if it will ultimately make them obsolete), and they don't care enough about quality for that to be an issue. So everyone is currently racing to capture this potentially profitable market, and Claude Code is Anthropic's take on this.

Simply selling the subscription on its own without any lock-in isn't the goal, because it's clearly not profitable, nor is it currently meant to be, it's a loss leader. The actual goal is to get people invested long-term in the Claude Code ecosystem as a whole, so that when the financial reality catches up to the hype and prices have to go up 5x to start making real money, those people feel compelled to keep paying, instead of seeking out cheaper alternatives, or simply giving up on the whole idea. This is why using the subscription as an API for other apps isn't allowed, why Claude Code is closed source, why it doesn't support third party OpenAI-compatible APIs, and why it reads a file called CLAUDE.md instead of something more generic.

mannanj 17 hours ago [-]
How is it different from what OpenAI's Codex and Gemini offer?
juped 12 hours ago [-]
I'm baffled that people, unknown to me, have apparently been considering Claude Code, the program, some kind of "secret sauce". It's a tool harness. Claude could one-shot write it for you, lol.
dathinab 7 hours ago [-]
I guess it's another case of:

- effective monetizability of a lot of AI products seems questionable

- so AI costs are strongly subsidized in all kinds of ways

- which is causing all kinds of strange dynamics and is very much incompatible with "free market self regulation" (hence why a company running long-term on investor money _and_ under-pricing any competition which isn't subsidized is theoretically not legal in the US. Not that the US seems to care to actually run a functioning self-regulating free market, even going back as far as Amazon. Turns out moving from "state subsidized" to "subsidized by the rich" somehow makes it no longer problematic / anti-free-market / non-liberal... /s)

globular-toast 16 hours ago [-]
> More importantly, Anthropic should have open sourced their Claude Code CLI a year ago. (They can and should just open source it now.)

I assume they're embarrassed by it. Didn't one of their devs recently say it's 100% vibe coded?

rolymath 11 hours ago [-]
What an incredibly entitled message. If you know what anthropic should and shouldn't do, go start your own AI company.
petesergeant 10 hours ago [-]
Right? Lots of "Anthropic should do x because that's what makes sense to me".
iamkonstantin 16 hours ago [-]
What's ridiculous is that the subscription at 180€/month (excl. VAT) is already absurdly expensive for what you get. I doubt many would sign up for the per-API usage as it's just not sustainable pricing (as a user).
anonzzzies 16 hours ago [-]
For the bizarre amount of work that gets done for that 180 euro, it is really cheap. We are just getting used to it and to sinking prices everywhere; it is just that CC is the best (might be taste or bias, but I at least think so), so we are staying with it for now. If it gets more expensive, we will go and try others for production instead of just trying them to get a feel for the competition as we do now.
elAhmo 13 hours ago [-]
This take is ridiculous. Nearly everyone who uses Max agrees that what they get for the money paid is an amazing deal. If you don't use or understand how LLMs fit in your workflows, you are not the target customer. But for people who use it daily, it is a relatively small investment compared to the time saved.
Majromax 9 hours ago [-]
> If you don't use or understand how LLMs fit in your workflows, you are not the target customer.

I feel like this is a major area of divergence. The "vibes" are bifurcating between "coding agents are great!" and "coding agents are all hype!", with increasing levels of in-group communication.

How should I, an agent-curious user, begin to unravel this mess if $200 is significantly more than pocket change? The pro-agent camp remarks that these frontier models are qualitatively better and using older/cheaper approaches would give a misleading impression, so "buy the discount agent" doesn't even seem like a reasonable starting point.

12345hn6789 8 hours ago [-]
The $20 plan exists for a reason. If you're interested you can give it a whirl.
thunfischtoast 16 hours ago [-]
That entirely depends on your business case. If that call costing 50 cents has done something for me which would have taken me more than 1 minute of paid working time, it's sustainable.
baq 16 hours ago [-]
It pays for itself in a day for some folks. It is a lot but it’s still cheap.
sheepscreek 17 hours ago [-]
Update: Touché. The repo is just plugins and skills, not the meat.

In any case, another workaround would be using ACP, which is supported by Zed. It lets editing tools access the power of CLI agents directly.

———

> Anthropic should have open sourced their Claude Code CLI a year ago

https://github.com/anthropics/claude-code

It has been open source for a while now. Probably 4-6 months.

> Anthropic shouldn't have an all-you-can-eat plan for $200 when their pay-as-you-go plan would cost more than $1,000+ for comparable usage. Their subscription plans should just sell you API credits at, like, 20% off.

That's a very odd thing to wish for. I love my subscriptions and wouldn't have it any other way.

dboon 17 hours ago [-]
If you're going to link a repository, you should read it first. That repository is just a couple plugins and community links. Claude Code is, and always has been, completely closed source.
ukuina 17 hours ago [-]
That repo does not contain the source code for Claude Code.
baby 17 hours ago [-]
You can't use it as an SDK though, unlike codex
adastra22 17 hours ago [-]
You can though?

https://platform.claude.com/docs/en/agent-sdk/overview

baby 9 hours ago [-]
That's to build your own agent, similar to the openai agent sdk
giancarlostoro 9 hours ago [-]
> More importantly, Anthropic should have open sourced their Claude Code CLI a year ago. (They can and should just open source it now.)

It's not like it houses some top-secret AI models inside of it, and open sourcing it would make way more sense and probably expand the capabilities of Claude Code itself. Do they lose out by having OpenAI or other competitors basically steal their approach?

dboon 19 hours ago [-]
This is an unusual L for Anthropic. The unfortunate truth is that the engineering in opencode is so far ahead of Claude Code. Obviously, CC is a great tool, but that's more about the magic of the model than the engineering of the CLI.

The opencode team[^1][^2] built an entire custom TUI backend that supports a good subset of HTML/CSS and the TypeScript ecosystem (i.e. not tied to Opencode, a generic TUI renderer). Then, they built the product as a client/server, so you can use the agent part of it for whatever you want, separate from the TUI. And THEN, since they implemented the TUI as a generic client, they could also build a web view and desktop view over the same server.

It also doesn't flicker at 30 FPS whenever it spawns a subagent.

That's just the tip of the iceberg. There are so many QoL features in opencode that put CC to shame. Again, CC is a magical tool, but the actual nuts-and-bolts engineering of it is pretty damning for "LLMs will write all of our code soon". I'm sorry, but I'm a decent-systems-programmer-but-terminal-moron and I cranked out a raymarched 3D renderer in the terminal for a Claude Wrapped[^3] in a weekend that...doesn't flicker. I don't mean that in a look-at-me way. I mean that in a "a mid-tier systems programmer isn't making these mistakes" kind of way.

Anyway, this is embarrassing for Anthropic. I get that opencode shouldn't have been authenticating this way. I'm not saying what they are doing is a rug pull, or immoral. But there's a reason people use this tool instead of your first party one. Maybe let those world class systems designers who created the runtime that powers opencode get their hands on your TUI before nicking something that is an objectively better product.

[^1] https://github.com/anomalyco/opentui

[^2] From my loose following of the development, not a monolith, and the person mostly responsible for the TUI framework is https://x.com/kmdrfx

[^3] https://spader.zone/wrapped/

IgorPartola 18 hours ago [-]
My favorite is running CC in a screen session. There, if I type out a prompt and then just start holding down the backspace key to delete a bunch of characters, at some point the key-press repeat rate outruns CC’s brains and it just starts acting like it moved the cursor but didn’t delete anything. It is an embarrassing bug, but one that I suspect wouldn’t be found in automated testing.
visarga 17 hours ago [-]
Talking about embarrassing bugs, Claude chat (both web and iOS apps) lately tends to lose the user message when there is a network error. This happens to me every day lately. It is frustrating to retype a message from memory: the first time you are "in the flow", the second time it feels like unjust punishment.

With all the Claude Code in the world, how come they don't write good enough tests to catch UI bugs? I have come to the point where I preemptively copy the message to the clipboard to avoid retyping.

prodigycorp 12 hours ago [-]
This is an old bug. I can't believe they haven't fixed it yet. My compliments for the Claude frontend start and end at artifacts.
BoorishBears 15 hours ago [-]
Ctrl Z usually recovers the missing text, even across page refreshes
dotancohen 12 hours ago [-]
If you want to work around this bug, Claude Code supports all the readline shortcuts such as Ctrl-W and Ctrl-U.
eru 18 hours ago [-]
Have you tried tmux?
Ycros 18 hours ago [-]
I use tmux, I have this exact same bug in tmux. It's part of why I use OpenCode and not Claude Code.
eru 15 hours ago [-]
Thanks!
_zoltan_ 15 hours ago [-]
unfortunately it's buggy in tmux as well. last night I couldn't hit esc after a long, long session as it simply ignored the key. doesn't happen outside of tmux.
BoiledCabbage 18 hours ago [-]
> Anyway, this is embarrassing for Anthropic.

Why? A few times in this thread I hear people saying "they shouldn't have done this" or something similar, but not giving any reason why.

Listing features you like of another product isn't a reason they shouldn't have done it. It's absolutely not embarrassing, and if anything it's embarrassing they didn't catch it and act sooner.

gpm 18 hours ago [-]
Because the value proposition that has people pay Anthropic is that it's the best LLM-coding tool around. When you're competing on "we can ban you from using the model we use with the same rate limits we use" everyone knows you have failed to do so.

They might or might not currently have the best coding LLM - but they're admitting that whatever moat they thought they were building with claude code is worthless. The best LLM meanwhile seems to change every few months.

They're clearly within their rights to do this, but it's also clearly embarrassing and calls into question the future of their business.

casparvitch 18 hours ago [-]
Is it that it's the best coding tool or the best model? I still get the best (most accurate) results out of anthropic models (but not out of CC).
gpm 17 hours ago [-]
Best coding tool is what makes users use something, a good model is just a component of that.

I don't think "we have the current best model for coding" is a particularly good business proposition - even assuming it's true. Staying there looks like it's going to be a matter of throwing unsustainable amounts of money at training forever to stay ahead of the competition.

Meanwhile the coding tool part looks like it could actually be sticky. People get attached to UIs. People are more effective in the UIs they are experienced with. There's a plausible story that codeveloping the UI and model could result in a better model for that purpose (because it's fine-tuned on the UI's interactions).

And independently "Claude Code" being the best coding tool around was great for brand recognition. "Open Code with the Opus 4.5 backend - no not the Claude subscription you can't use that - the API" won't be.

BoiledCabbage 17 hours ago [-]
I appreciate you sharing your thinking.

I think it's reasonable to state that at the moment Opus 4.5 is the best coding model. Definitely debatable, but at least I don't think it controversial to argue that, so we'll start there.

They offer the best* model at cost via an API (likely not actually at cost, but let's assume it is). They also will subsidize that cost for people who use their tool. What benefit do they get or why would a company want to subsidize the cost of people using another tool?

> I don't think "we have the current best model for coding" is a particularly good business proposition - even assuming it's true. Staying there looks like it's going to be a matter of throwing unsustainable amounts of money at training forever to stay ahead of the competition.

I happen to agree - to me it seems tenuous having a business solely based on having the best model, but that's what the industry is trying to find out. Things change so quickly it's hard to predict 2 years out. Maybe they are first to reach XYZ tech that gives them a strong long-term position.

> Meanwhile the coding tool part looks like it could actually be sticky. People get attached to UIs. People are more effective in the UIs they are experienced with.

I agree, but it doesn't seem like that's their m.o. If anything, the opposite: they aren't trying to get people locked into their tooling. They made MCP a standard so all agents could adopt it. I could be wrong, but thought they also did something similar with /scripts or something else. If you wanted to lock people in you'd have people build an ecosystem of useful tooling and make it not compatible with other agents, but they (to my eyes) have been continuously putting things into the community.

So my general view of them is that they feel they have a vision with business model that doesn't require locking people into their tooling ecosystem. But they're still a business so don't gain from subsidizing people to use other tools. If people want their models in other tools use the "at-cost" APIs - why would they subsidize you to use someone else's tool?

casparvitch 17 hours ago [-]
There's just not that much IP in a UI like that. Every day we get articles on here saying you can make an agent in 200 LOC, Yegge's gas town in 2 weeks, etc. Training the model is the hard part, and that's what justifies a large valuation ($350B for Anthropic vs. $7B for JetBrains).
matt-p 15 hours ago [-]
I think, in fairness to Anthropic, they are winning in LLMs, right? Since 3.7 they have been better than any other lab.
OGEnthusiast 18 hours ago [-]
> Because the value proposition that has people pay Anthropic is that it's the best LLM-coding tool around.

Why not just use a local LLM instead? That way you don't have to pay anyone.

pdntspa 18 hours ago [-]
Because they still suck at real-world software engineering
_zoltan_ 15 hours ago [-]
none can touch any of the top models. none.
dboon 18 hours ago [-]
It is embarrassing to restrict an open source tool that is (IMO) a strictly and very superior piece of software from using your model. It is not immoral, like I said, because it's clearly against the ToS; but it's not like OC is stealing anything from Anthropic by existing. It's the same subscription, same usage.

Obviously, I have no idea what's going on internally. But it appears to be an issue of vanity rather than financials or theft. I don't think Anthropic is suffering harm from OC's "login" method; the correct response is to figure out why this other tool is better than yours and create better software. Shutting down the other tool, if that's what's in fact happening, is what is embarrassing.

BoiledCabbage 17 hours ago [-]
> It is embarrassing to restrict an open source tool that is (IMO) a strictly and very superior piece of software from using your model.

> Shutting down the other tool, if that's what's in fact happening, is what is embarrassing.

To rephrase it differently, as I feel my question didn't land: it's clear to me that you think it's embarrassing, and it's clear what you think is embarrassing. I'm trying to understand why you think it's embarrassing. I don't think it is at all.

Your statements above are simply saying "X is embarrassing because it's embarrassing". Yes I hear that you think it's embarrassing but I don't think it is at all. Do you have a reason you can give why you think it's embarrassing? I think it's very wise and pretty standard to not subsidize people who aren't using your tool.

I'm willing to consider arguments differently, but I'm not hearing one. Other than "it just is because it is".

rocqua 15 hours ago [-]
If your value proposition is: do X, and then you have to take action against an open source competitor for doing X better, that shows that you were beaten at the thing you tried very hard at, by people with way fewer resources.

I can see why you would call that embarrassing.

kelnos 13 hours ago [-]
The competitor is not "doing X better"; it's more complicated than that.

CC isn't just the TUI tool. It's also the LLM behind it. OC may have built a better TUI tool, but it's useless without an LLM behind it. Anthropic is certainly within their rights to tell people they can only integrate their models certain ways.

And as for why this isn't embarrassing, consider that OC can focus 100% of their efforts on their coding tool. Anthropic has a lot of other balls in the air, and must do so to remain relevant and competitive. They're just not comparable businesses.

Draiken 12 hours ago [-]
> CC isn't just the TUI tool. It's also the LLM behind it.

No, Claude Code is literally the TUI tool. The LLMs behind it are the models. You can use different models within the same TUI tool; even CC allows that, albeit restricted to their own models (a restriction they chose to impose).

> consider that OC can focus 100% of their efforts on their coding tool.

And they have billions of dollars to hire full teams of developers to focus on it. Yet they don't.

They want to give Claude Code an advantage because they don't want to invest as much in it and still "win", while they're in a position to do so. This is very similar to Apple forcing developers to use their apps because they can, not because it's better. With the caveat that Anthropic doesn't have a consolidated monopoly like Apple.

Can they do that? Yes.

Should they do that? It's a matter of opinion. I think it's a bad move.

Is it embarrassing? Yes. It shows they're admitting their solution is worse and changing the rules of the game to tilt it in their favor while offering an inferior product. They essentially don't want to compete, they want to force people to use their solution due to pricing, not the quality of their product.

yencabulator 4 hours ago [-]
Claude Code is more than the TUI, it's the prompts, the agentic loop, and tools, all made to cooperate well with the LLM powering it. If you use Claude Code over a longer period of time you'll notice Anthropic changing the tooling and prompts underneath it to make it work better. By now, the model is tuned to their prompts, tools etc.
dboon 17 hours ago [-]
Why do you like or dislike Diet Coke? At some point, saying what I think is embarrassing is equivalent to saying why.

But, to accept your good faith olive branch, one more go: AI is a space full of grift and real potential. Anthropic's pitch is that the potential is really real. So real, in fact, that it will alter what it means to write software.

It's a big claim. But a simple way to validate it would be to see if Anthropic themselves are producing more or higher quality software than the rest of the industry. If they aren't, something smells. The makers of the tool, and such a well funded and staffed company, should be the best at using it. And, well, Claude Code sucks. It's a buggy mess.

Opencode, on the other hand, is not a buggy mess. It is one of the finest pieces of software I've used in a long time, and I don't mean "for a TUI". And they started writing it after CC was launched. So, to finally answer your question: Opencode is a competitor in a way that brings to question Anthropic's very innermost claim, the transformative nature of AI. I find it embarrassing to answer this question-of-sorts by limply nicking the competitor, rather than using their existence as a call for self improvement. And, Christ, OC is open. It's open source. Anthropic could, at any time, go read the code and do the engineering to make CC just as good. It is embarrassing to be beaten at your own game and then take away the ball.

(If that is what is happening. Of course, this could be a misunderstanding, or a careless push to production, or any number of benign things. But those are uninteresting, so let's assume for the sake of argument that it was intentional).

BoiledCabbage 17 hours ago [-]
Thanks, while we in the end may not agree - I do feel I understand your thinking now. Also agreed, we've probably reached the fruitful end of this discussion and this will be my last reply on it. I'll explain my thoughts similarly as you.

To me it seems more akin to someone saying "I'm launching a restaurant. I'll give you a free meal if you come and give me feedback on the dish, the decor, service...". This happens for a bit, then after a while people start coming in taking the free plate and going and eating it at a different restaurant.

To me it seems pretty reasonable to say "If you're taking the free meal you have to eat it here and give feedback".

That said, I do acknowledge you see it very differently and given how you see it I understand why you feel it's embarrassing.

Thanks for the discussion.

_bobm 14 hours ago [-]
But you are not having a free meal, are you? You _are paying_ for your meal.

Worse: you are the meal as well.

Do you see this?

ehnto 18 hours ago [-]
As a user it is because I can no longer use the subscription with the greater tooling ecosystem.

As for Anthropic, they might not want to do this as they may lose users who decide to use another provider, since without the cost benefit of the subscription it doesn't make sense to stay with them and also be locked into their tooling.

what 18 hours ago [-]
The subscription is for their products? If you want to use their models in another product you can pay for the API usage.
ehnto 17 hours ago [-]
From my perspective, I was paying for the model. This is kind of a pointless distinction now though.

It was working and now it isn't, and the outcome is that some of their customers are unhappy and might move on.

API access is not the same product offering as the subscription, so that's probably a practical option but not a comparable one.

_zoltan_ 15 hours ago [-]
you yourself admit that API access is a separate product. if you want to use 3rd party tooling, pay for API access.

if you want to use (most likely heavily) subsidized subscription plans, use their ecosystem.

it's that simple.

ehnto 12 hours ago [-]
No one said it was complicated, and you might be imagining that I care more than I do. However if you can't understand why having a feature of a paid product removed is dissatisfying, then I cannot help you understand any further.

I am surprised that anyone would think the "product" is the web interface and cli tool though, the product is very clearly the model. The difference in all options is merely how you access it.

rovr138 7 hours ago [-]
> having a feature of a paid product removed is dissatisfying

It wasn't a feature. It was a loophole. They closed it.

There are multiple products. Besides models, there's a desktop app, there's claude code. They have subscriptions.

ehnto 6 hours ago [-]
Feature, attribute, loophole. I really doubt we fundamentally disagree on the situation here. You can use your empathy to understand why people are disappointed, and I will pretend such a detail oriented thread has made me feel content. Anthropic can do what they want, it's their service.
rockatanescu 12 hours ago [-]
The Claude plans allow you to send a number of messages to Anthropic models in a specific interval without incurring any extra costs. From Anthropic's "About Claude's Max Plan Usage" page:

> The number of messages you can send per session will vary based on the length of your messages, including the size of files you attach, the length of current conversation, and the model or feature you use. Your session-based usage limit will reset every five hours. If your conversations are relatively short and use a less compute-intensive model, with the Max plan at 5x more usage, you can expect to send at least 225 messages every five hours, and with the Max plan at 20x more usage, at least 900 messages every five hours, often more depending on message length, conversation length, and Claude's current capacity.

So it's not a "Claude Code" subscription, it's a "Claude" subscription.

The only piece of information that might suggest that there are any restrictions to using your subscription to access the models is the part of the Pro plan description that says "Access Claude Code on the web and in your terminal" and the Max plan description that says "Everything in Pro".

wiseowise 14 hours ago [-]
It is embarrassing, because it means they’re afraid of competition. If CC were so great, or even a fraction as great as they sell it, they wouldn’t need to do this.
llmslave2 13 hours ago [-]
It's embarrassing because they use Claude Code to build all of their software and they can't write decent software to save their lives. Their software quality is embarrassing, basically Microsoft tier, which calls into question both the effectiveness of their AI products and Agentic workflows.

Like seriously, the creator of CC claims to run 10 simultaneous agents at once. We sure can tell bud.

anhner 15 hours ago [-]
"Leave the multibillion dollar company alone!"
dcre 18 hours ago [-]
I've used both CC and OpenCode quite a bit and while I like both and especially appreciate the work around OpenTUI, experience-wise I see almost no difference between the two. Maybe it's because my computer is fast and I use Ghostty, but I don't experience any flickering in CC. Testing now, I see typing is slightly less responsive in CC (very slightly: I never noticed until I was testing it on purpose).

We will see whether OpenCode's architecture lets them move faster while working on the desktop and TUI versions in parallel, but it's so early — you can't say that vision has been borne out yet.

kristianp 2 hours ago [-]
Interesting that [1] is 30% zig as well as mostly typescript. That's a lot of native code for something that runs in a terminal (i.e. no graphical code required).
createaccount99 14 hours ago [-]
> unusual L for Anthropic

Not unusual, not for Anthropic.

cat-whisperer 15 hours ago [-]
I am curious, I haven't faced any major issues using claude code in my daily workflow. Never noticed any flickering either.

Why do you think opencode > CC? what are some productivity/practical implications?

azuanrb 15 hours ago [-]
Opencode has a web UI, so I can open it on my laptop and then resume the same session on the web from my phone through Tailscale. It’s pretty handy from time to time and takes almost zero effort from me.

The flickering is still happening to me. It's less frequent than before, but still does for long/big sessions.

vinhnx 14 hours ago [-]
> The unfortunate truth is that the engineering in opencode is so far ahead of Claude Code

I'm curious, what made you think of that?

satvikpendem 15 hours ago [-]
> Anyway, this is embarrassing for Anthropic. I get that opencode shouldn't have been authenticating this way. I'm not saying what they are doing is a rug pull, or immoral. But there's a reason people use this tool instead of your first party one. Maybe let those world class systems designers who created the runtime that powers opencode get their hands on your TUI before nicking something that is an objectively better product.

This is nothing new, they pulled Claude models from the Trae editor over "security concerns." It seems like Anthropic are too pearl-clutching in comparison to other companies, and it makes sense given they started in response to thinking OpenAI was not safety oriented enough.

NetOpWibby 18 hours ago [-]
inb4 Anthropic acquires Opencode
dboon 18 hours ago [-]
I actually wouldn't be that surprised by this. I'd be more surprised at the OC people folding (not the right word, but you get it) on some pretty heavy ambitions in favor of an acquisition.
yieldcrv 18 hours ago [-]
The right word, keeping with card-playing and poker terms, would be "book a win", or "win the hand", or "scoop the pot".
xpe 18 hours ago [-]
Update: Ah, I see this part: "This credential is only authorized for use with Claude Code and cannot be used for other API requests."

Old comment for posterity: How do we know this was a strategy/policy decision versus just an engineering change? (Maybe the answer is obvious, but I haven't seen the source for it yet.) I skimmed the GitHub issue, but I didn't see discussion about why this change happened. I don't mean just the technical change; I mean why Anthropic did it. Did I miss something?

alexeiz 15 hours ago [-]
> The unfortunate truth is that the engineering in opencode is so far ahead of Claude Code.

If only Claude Code developers had access to a powerful LLM that would allow them to close the engineering gap. Oh, wait...

llmslave2 13 hours ago [-]
Anthropic developers run 10 Claude Code instances at once, with unlimited access to the best models.
wiseowise 14 hours ago [-]
It’s pure marketing. When will people understand that?
slekker 15 hours ago [-]
How much do you use AI in your day? Are you a heavy user? Asking because your comment has a lot of "LLM mannerism"
dboon 8 hours ago [-]
No, it doesn’t.
ChicagoDave 16 hours ago [-]
Or just maybe submit feature requests instead of backdooring a closed source system.
oblio 14 hours ago [-]
All the TUI agents are awful at scrolling. I'm on Ubuntu 24.04 and both Claude Code and Gemini CLI absolutely destroy scrolling. I've tested Claude Code in the VS Code and it's better there, but in the Gnome Terminal it's plain unusable.

And a lot of people are reporting scrolling issues.

As someone was saying, it's like they don't have access to the world's best coding LLM to debug these issues.

ChicagoDave 5 hours ago [-]
I use Claude Code every day and it works perfectly fine outside of a few bugs (they broke ESC interrupt in 2.1).

I just don’t understand the misplaced anger at breaking TOS (even for a good reason) and getting slapped down.

Like what did anyone think would happen?

We all want these tools and companies to succeed. Anthropic needs to find profit in a few years. It’s in all of our best interests to augment that success, not bitch because they’re not doing it your way.

oblio 2 hours ago [-]
> We all want these tools and companies to succeed. Anthropic needs to find profit in a few years. It’s in all of our best interests to augment that success, not bitch because they’re not doing it your way.

Considering they're destroying a lot of fields of industry, I'm not sure I want them to succeed. Are we sure they're making the world a better place?

Or are they just concentrating wealth, like Google, Meta, Microsoft, Amazon, Uber, Doordash, Airbnb and all the other holy-tech grails in the last 20 years?

Our lives are more convenient than they were 20 years ago and probably poorer and more stressful.

wiseowise 14 hours ago [-]
Simping for closed source software? Tsk, tsk.
oldhead 18 hours ago [-]
This headline is misleading. EDIT: Or rather was, as it has now been edited to be accurate.

You can still bring your own Anthropic API key and use Claude in OpenCode.

What you can no longer do is reverse engineer undocumented Anthropic APIs and spoof being a Claude Code client to use an OAuth token from a subscription-based Anthropic account.

This really sucks for people who want a thriving competitive market of open source harnesses since BYOK API tokens mean paying a substantial premium to use anything but Anthropic's official clients.

But it's hard to say it's surprising or a scandal, or anything terribly different from what tons of other companies have done in the past. I'd personally advise people to expect everything about using frontier coding models becoming much more pay-to-play.
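(For what it's worth, BYOK with OpenCode is just the normal provider setup. This is from memory of their docs, so treat the exact command and env-var handling as assumptions:)

opencode auth login
# pick Anthropic and paste a standard API key, or, if your version picks up
# the env var, export it before launching:
export ANTHROPIC_API_KEY=sk-ant-api03-...
opencode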

planckscnst 18 hours ago [-]
The API key is not a subscription. The title says subscriptions are blocked from using third-party tools. Or am I misunderstanding?
oldhead 18 hours ago [-]
Headline's been edited since my post. It previously said something along the lines of "Anthropic bans API use in OpenCode CLI"
barnabee 11 hours ago [-]
The ideal endgame is that AI lets us build tools that make it impossible to tell what application or device is using their APIs and everything becomes open to third party clients whether they like it or not.
lemming 18 hours ago [-]
This will piss a lot of people off, and seems like a strange move. I get that this was always a hack and against the ToS. But I've been paying Anthropic money every month to do exactly what I would have done with Claude Code, but in another harness that I like better. All they've achieved here is that I am no longer giving them money. Their per-token pricing is really expensive compared to OpenAI, and I like the results from the OpenAI models better too, they're just very slow.

Here's a good benchmark from the Brokk team showing performance per dollar: GPT-5.1 is around half the price of Opus 4.5 for the same performance; it just takes twice as long.

https://brokk.ai/power-ranking?dataset=openround&models=flas...

So as of today, my money is going to OpenAI instead of Anthropic. They probably don't care though, I suspect that not many users are sufficiently keen on alternative harnesses to make a difference to their finances. But by the same token (ha ha), why enforce this? I don't understand why it's so important to them that I'm using Claude Code instead of something else.

gbear605 16 hours ago [-]
Presumably Claude Code is a loss leader to try to lock you into their ecosystem, or at least get you to exclusively associate “AI” with “Claude”. So if it’s not achieving those goals, they’d prefer if you use OpenAI instead.
wiether 15 hours ago [-]
That's my understanding and that's what I see happening at some places.

People get a CC sub, invest in the whole tooling around CC (skills and whatnot), and once they're a few weeks/months in, they'll need a lot of convincing to even try something else.

And given how often CC itself changes and how much they need to keep up with it, that's even worse. It's not just about not wanting to get out of your comfort zone; it's about trying to keep up with your current tools. Now if you also have to try a new tool every other day, the claimed 10x productivity improvements won't be enough to cover the lack of actual working hours you'll be left with in a week.

fragmede 1 hours ago [-]
> it just takes twice as long.

but time is also money. Personally if I could pay more money to get answers faster, I'd pay double.

weird-eye-issue 19 hours ago [-]
The API is not banned; only using the Claude Code subscription is.

I actually tried this several months back to do a regular API request using the CC subscription token and it gave the same error message

So this software must have been pretending to be Claude Code in order to get around that

A Claude Code subscription should not work with other software, I think this is totally fair

creativeSlumber 18 hours ago [-]
> A Claude Code subscription should not work with other software.

why not though? aren't you paying for the model usage regardless of the client you use?

cortesoft 18 hours ago [-]
No, you are paying to use Claude code… it uses the model underneath, but you aren’t paying for raw model usage. For whatever reason, Anthropic thinks this is the best way to divide up their market.

They want to charge more for direct access to the model.

JimmaDaRustla 4 hours ago [-]
> No, you are paying to use Claude code

Why would anyone pay a subscription for a barebones LLM agent?

You can beat that drum all you want, but you know it's bullshit. People pay the subscription for the AI, not the tool that consumes it. That tool being crap is why everyone started using third-party tools.

The reason they are blocking third-party usage is they want developers to use only their models and no competitors.

weird-eye-issue 18 hours ago [-]
That's not up to you or me. I think it's pretty clear by the phrase "Claude Code subscription" that it's meant for only "Claude Code". Why are you confused?

This could be so easily abused by companies who spend thousands of dollars per month on API costs: you could just reverse engineer it and use the subscription tokens to get that down to a few hundred.

gpm 18 hours ago [-]
That phrase isn't the official one. It's "The Max Plan" which "combines Claude Desktop and mobile apps and Claude Code in one subscription".
fastball 18 hours ago [-]
Yes, so... pretty clearly not OpenCode.
weird-eye-issue 18 hours ago [-]
Yeah, exactly.
bandrami 17 hours ago [-]
Can I script and scrape Claude Code to provide the exact same data for consumption by the banned client? (This sounds like an interesting challenge for Claude Code to try...)
weird-eye-issue 17 hours ago [-]
Yes they even offer an SDK for it now so "scraping" is not required
davely 7 hours ago [-]
Claude Code provides a headless mode that you can do this exact same thing with:

$ claude -p “fix the eslint in file XYZ”
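And if I remember the flags right (they may have changed across versions, so check claude --help), you can get structured output to feed into other tooling:

$ claude -p "list the TODO comments under src/" --output-format json | jq -r '.result'
# prints just the model's answer; the JSON also carries session/cost metadata
# (field names from memory, so treat them as assumptions)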

potamic 18 hours ago [-]
I don't think they are confused. They are simply challenging the assertion that the model should not work with other software. Which is fair because there is a lot of precedent around whether a service can dictate how it must be consumed. It's not a simple answer and there are good reasons for both sides. Whichever path we take will have wide consequences and shape our future in a very distinct way. So it is an important decision, and ultimately up to us, as a society to influence and guide.
weird-eye-issue 17 hours ago [-]
"challenging the assertion that the model should not work with other software"

This has nothing to do with "the model". You can use "the models" through the API for anything.

This has to do with access to a specific product being abused to then get low-cost API access for other use cases

iwontberude 17 hours ago [-]
It’s like saying Netflix is wrong to require an official Netflix client to access their service. Total dud of an argument if you ask me.
tadfisher 15 hours ago [-]
Well, they are wrong, and the argument is still a dud.
weird-eye-issue 13 hours ago [-]
Netflix would not even exist if you could just freely download all of the media to your computer and play it anytime because of licensing agreements and other factors. So you can think that they are wrong but that's not really rooted in reality or practicality.
tadfisher 5 hours ago [-]
That is possible, was always possible, and will continue to be possible, judging by the availability of 4K rips via piracy.
casparvitch 18 hours ago [-]
IDK, if Anthropic wants to offer a service at below cost, I don't think they should gatekeep which client you access that service through. Or in other terms, I won't use a service that locks me into a client I don't like.
firloop 18 hours ago [-]
How do you draw that conclusion? If Anthropic wants to offer a service at below cost, they seem a lot more justified in restricting how and where they subsidize usage.
casparvitch 18 hours ago [-]
Yeah fair, I could've chosen better words. My conclusion is: "I don't want to pay for that", not "they shouldn't be able to do that".
BoiledCabbage 18 hours ago [-]
> IDK, if Anthropic wants to offer a service at below cost, I don't think they should gatekeep which client you access that service through.

Are you going to say why you think they shouldn't? You didn't give a reason.

weird-eye-issue 18 hours ago [-]
> Or in other terms, I won't use a service that locks me into a client I don't like.

Then don't! Or just use the API which doesn't lock you into any client.

renewiltord 18 hours ago [-]
That seems mutual. They don’t want you to use this service with an arbitrary client and you don’t want to use this service that won’t allow an arbitrary client. So both of you don’t want the relationship. Seems fine.

For my part, I’m fine understanding that bundling allows for discounting and I would prefer to enable that.

conception 18 hours ago [-]
But they get telemetry, feedback, good will, etc. That’s one reason why usage is discounted to a subscription fee.
adastra22 17 hours ago [-]
They don't get any telemetry or feedback data from me, as I've turned all that off. So why should I be limited to CC?
mercanlIl 8 hours ago [-]
If you don’t want to use CC, just use the API
CuriouslyC 18 hours ago [-]
Good will, huh?
conception 9 hours ago [-]
You don’t think Claude Code has generated good will for Anthropic? People just liking a brand is powerful.
CuriouslyC 9 hours ago [-]
I don't think this move is generating good will.
nebezb 18 hours ago [-]
> aren't you paying for the model usage

No, you’re paying for “Claude Code” usage.

JimmaDaRustla 4 hours ago [-]
> A Claude Code subscription should not work with other software, I think this is totally fair

Why the hell not? What an L take - if I pay a subscription fee for an API, I should be able to use that API however I want. If they're forcing users to only consume their APIs with a proprietary piece of software, it really begs the question of what's in that software that makes it valuable to them. Seems like there's something nefarious involved.

protocolture 17 hours ago [-]
>A Claude Code subscription should not work with other software, I think this is totally fair

Strongly disagree. They are just trying to moat.

iwontberude 17 hours ago [-]
It’s a private API. What part of this is hard to understand? This is why you don’t code against undocumented APIs with no contract. It’s self destructive.
nextaccountic 18 hours ago [-]
Is Claude Code still available on IDEs through ACP?

Like https://zed.dev/docs/ai/external-agents

lemming 18 hours ago [-]
Should be, yes - ACP is basically just a different way of invoking the agent, so you're still using Claude Code. It's alternative clients like OpenCode, the CharmBracelet one and pi which will be affected - they basically reimplement the agent part and just call the API directly.
Syzygies 18 hours ago [-]
Yes. I've been using it today with Zed (a mind-blowing editor, by the way).

One must use an API key to work through Zed, but my Max subscription can be used with Claude Code as an external agent via Zed ACP. And there's some integration; it's a better experience than Claude Code in a terminal next to file viewing in an editor.

prydt 18 hours ago [-]
Came here to say this. Using opencode with the API works fine.
ashishgupta2209 18 hours ago [-]
Same
sfmike 17 hours ago [-]
But if something is used in the CLI, it makes sense that it would be used together with other things in the CLI.
hsbauauvhabzb 18 hours ago [-]
Yeah, you shouldn’t infringe the copywrite of a tool written by a company which is built off of copywrite infringement.
weird-eye-issue 18 hours ago [-]
First of all you mean "copyright". Second of all this has literally nothing to do with copyright
misternugget 16 hours ago [-]
Engineer working on Amp here.

I'm very surprised that it took them this long to crack down on it. It's been against the terms of service from the start. When I asked them back in March last year whether individuals can use the higher rate limits that come with the Claude Code subscription in other applications, that was also a no.

Question is: what changed? New funding round coming up, end of fiscal year, planning for an IPO? Do they have to cut losses?

Because the other surprise here is that apparently most people don't know the true cost of tokens and how much money Anthropic is losing with power users of Claude Code.

llmslave2 13 hours ago [-]
> Question is: what changed? New funding round coming up, end of fiscal year, planning for an IPO? Do they have to cut losses?

I'm gonna say IPO, considering their recent aggressive stealth marketing campaign on X, Reddit, and HN.

cheeze 5 hours ago [-]
Can you share more on this?
cmdtab 15 hours ago [-]
Yeah. If my Claude Code usage were on the API directly, it would be in the thousands. I know this because I have add-on credits on top of the Max plan, since I often run into the weekly limits.
ramoz 11 hours ago [-]
You think anthropic is losing money now with the weekly limits? And while hitting the gas on mass market?
muppetman 17 hours ago [-]
I feel like I'm the only person on this site that doesn't use AI for coding. I guess there's probably a lot of other people that haven't commented on this story who don't use it either. But when I read about how much hype and all that sort of stuff there is in the AI industry, and then I see the amount of posts and commentary and deep technical discussion about how this feature has affected people, I'm not so sure. Everyone I know hates AI and how it's been shoved into every corner of our lives, but I look here and it's insanely popular. Anyway, sorry this was a very off topic comment. It's just very interesting to me that the hype isn't all just hype.
isodev 17 hours ago [-]
I also don’t use AI for coding. I tried, I explored, I learned how it works.

In the end, “maybe-sometimes works” and “sends a copy of all your code to some server in the US” are just incompatible with the kind of software I create.

Regarding the post, I think it’s telling that Anthropic is trying to force people into using their per-usage billing more than the subscription. My take is that the subscription offers a lot as a way of hooking developers into it and is not sustainable for Anthropic if people end up actually maxing their usage.

Given how much money is being wasted on the LLM craze, I can imagine there will be more “tightening of the belt” from the AI corps going forward.

For the five coders out there, maybe it’s time to use your tokens to get back control of your codebases … you may have to “meat code” them soon.

luckilydiscrete 17 hours ago [-]
I'll say "maybe-sometimes works" is a misunderstanding.

It feels like that initially, but that's no different from any new tool you adopt. A jackhammer also "maybe-sometimes works" as a hammer replacement.

cocoto 15 hours ago [-]
Not true, most tools are deterministic. For instance my programming language LSP just works 100% of the time with no failure. It doesn’t hallucinate any types, methods or variables.
wiseowise 14 hours ago [-]
Your LSP also can’t do complex reasoning across the purpose of your whole codebase.
lomase 13 hours ago [-]
Neither can LLMs reason about anything.

Reasoning is a human trait.

wiseowise 13 hours ago [-]
Whatever you say lol.
isodev 13 hours ago [-]
The usage of the word "reasoning" in the context of LLMs, just like the "I" in "AI", that's more marketing than a technical reality. I know it can be confusing.
hu3 12 hours ago [-]
Regardless of semantics, LLMs + tooling can do impressive things.

For example I can tell LLMs to scan my database schema and compare to code to detect drift or inconsistencies.

And while doing that it has enough condensed world knowledge to point out to me that the code is probably right when declaring person.name a non-nullable string despite the database column being nullable.

And it can infer that date_of_birth column is correct in being nullable on the database schema and wrong in code where the type is a non-nullable date because, in my system, it knows date_of_birth is an optional field.

This is a simple example that could also be solved with non-LLM tooling. In practice it can do much more advanced reasoning with regard to business rules.

We can argue semantics all day, but this is reason enough for it to be useful to me.

There are many examples I could give. But to the skeptics I recommend trying to use LLMs for understanding large systems. Just take the time to give them read-only access to the database schema.
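As a rough sketch of what that looks like in practice (assuming Postgres, the claude CLI in headless mode, and a models directory that will differ in your project):

# dump just the schema using read-only credentials, then hand it to the agent
pg_dump --schema-only --no-owner "$READONLY_DATABASE_URL" > /tmp/schema.sql
claude -p "Compare /tmp/schema.sql against the models under src/models/ and list any nullability or type mismatches, noting which side is probably right."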

wiseowise 10 hours ago [-]
Sure, sure.
xpe 9 hours ago [-]
> Neither can LLMs reason about anything.

> Reasoning is a human trait.

Note: this is not directed at the commenter or any person in particular. It is directed at various patterns I've noticed.

I often notice claims like the following:

- human intelligence is the "truest" form of intelligence;

- machines can't reason (without very clearly stating what you mean by reasoning);

- [such and such] can only be done by a human (without clearly stating that you mean at the present time with present technology that you know of);

Such claims are, in my view, rather unhelpful framings – or worse, tropes or thought-terminating clichés. We would be wise to ask ourselves how such things persist.

How do these ideas lodge in our brains? There are various shaky premises (including cognitive missteps) that lead to them. So I want to make some general comments that often lead to the above kind of thinking.

It is more important than ever for people to grow their understanding and appreciation. I suggest considering the following.

1. Recognize that one probably can't offer a definition of {reasoning, intelligence, &c} that is widely agreed upon. Probably the best you can hope for is to clarify the sense in which you mean it. There are often fairly clear 'camps' that can easily be referenced.

2. Recognition that implicitly hiding a definition in your claims -- or worse, forcing a definition on people -- doesn't do much good.

3. Awareness that one's language may often be interpreted in various ways by reasonable people.

4. Internalize that dictionaries are catalogs of various usages that evolve over time. Dictionaries are not intended to be commandments of correctness, though some still think dictionary-as-bludgeon is somehow appropriate.

5. Acknowledge the confusing terminology in AI/LLMs in particular. For example, reasonable people can recognize that "reasoning" in this context is a fraught term.

6. Recognition that humanity is only getting started when it comes to making sense of how "intelligence" decomposes, how our brains work, the many nuanced differences between machine intelligence and human intelligence.

7. Recognize one's participation in a social context. Strive to not provide fuel for the fires of misunderstanding. If you use a fraught term, be extra careful to say what you mean.

8. Hopefully obvious: sweeping generalizations and blanket black-or-white statements are unlikely to be true unless you are talking about formal systems like logic and mathematics. Just don't do it. Don't let your thinking fall into that trap. And don't spew it -- that insults the intelligence of one's audience.

9. Generally speaking, people would be wise† to think about upping their epistemic game. If one says things that are obviously inaccurate, you are wasting your intelligence refined over millions of years by evolution and culture. To do so is self-destructive, for it makes oneself less valuable relative to LLMs who (although they blunder) are often more reliable than people who speak carelessly.

† Because it benefits the person directly and it helps culture, civilization, progress, &c

isodev 17 hours ago [-]
Every prompt is a run of probability - It’s at the core of the technology to be unable to give reproducible responses and even after a while, Claude is just as likely to sneak-in crimes in every snippet it outputs.
xpe 8 hours ago [-]
> Every prompt is a run of probability - It’s at the core of the technology to be unable to give reproducible responses and even after a while, Claude is just as likely to sneak-in crimes in every snippet it outputs.

Some readers might interpret "a run of probability" to mean "we can't say anything about the statistical distribution". I don't think the commenter means that, but still, communicating statistics is hard, so I suggest being careful.

For example, writing "even after a while, just as likely to sneak-in crimes in every snippet it outputs" is pretty attention-getting and even provoking. What does the commenter mean by it? What kind of 'crimes' do they mean? Does the commenter really mean 'just as likely'? Just as likely as what? I would think most readers would form very different takes.

jatora 15 hours ago [-]
like the other commenter said. you need to learn to use new tools. and your take clearly indicates you haven't.
lomase 13 hours ago [-]
Your take clearly indicates you need to learn how to code.
xpe 9 hours ago [-]
Please don't snipe at others, even if someone else started it. https://news.ycombinator.com/newsguidelines.html
schiem 8 hours ago [-]
I don't use it at all for a variety of reasons, but I rarely bother to get into discussions on HackerNews.

Looking at how new it is, and how quickly things are changing, it seems likely that I could adopt it into my workflow in a month or two if it turns out that that's necessary.

On the other hand, I've spent the last 2 decades building skills as a developer. I'm far more worried that becoming a glorified code reviewer will atrophy those skills than I am about falling behind. Maybe it will turn out that those skills are now obsolete, but that feels unlikely to me.

Izkata 7 hours ago [-]
> I'm far more worried that becoming a glorified code reviewer will atrophy those skills

A co-worker who went all-in around a year ago admitted a few months ago he's noticed this in himself, and was trying to stop using the code-generating functionality of any of these tools. Emphasis on "try": apparently the times it does work amazingly makes it addictive like gambling, and it's far too easy to reach for.

tkgally 12 hours ago [-]
> I feel like I'm the only person on this site that doesn't use AI for coding.

I’m surprised by that. One reason I follow discussions here about AI and coding is that strong opinions are expressed by professionals both for and against. It seems that every thread that starts out with someone saying how AI has increased their productivity invites responses from people casting doubt on that claim, and that every post about the flaws in AI coding gets pushback from people who claim to use it to great effect.

I’m not a programmer myself, but I have been using Claude Code to vibe-code various hobby projects and I find it enormously useful and fun. In that respect, I suppose, I stand on the side of AI hype. But I also appreciate reading the many reports from skeptics here who explain how AI has failed them in more serious coding scenarios than what I do.

LorenzoGood 17 hours ago [-]
I feel the same. I don't want to hear about it all the time (although I welcome discussion). I wish this site would go back to talking about other tech things.
luckilydiscrete 17 hours ago [-]
AI is indeed just hype in a lot of cases, but it also has revolutionary value in other cases. Trying it is the only way you'll be able to differentiate the latter from the former.
daliusd 16 hours ago [-]
It will be shoved into your life anyway. You might like it or not, but the only safe choice is to learn and understand it IMHO.

About usage: it looks like web development gets the benefits here, but other areas are somehow not as successful. Meanwhile, I use it successfully for Neovim Lua plugin development, CLI apps (in JS), and shell development (WezTerm Lua + fish shell). So I don't know if:

a) it simply has clicked for me and it will click for everyone who invests into it;

b) it is not for everybody because of tech;

c) it is not for everybody because of mindset.

dannersy 14 hours ago [-]
I share your experience. Additionally, I am surprised anyone on this site did not see this progression coming. Between costs, the race to be THE provider, and anyone who has an awareness of how the tech industry has been operating the last 15 years, this move by Anthropic was so laughably predictable that the discourse in this thread is pretty disappointing.
BrouteMinou 14 hours ago [-]
All those people are on the drug they got on the cheap during the fun party nights.

They are, or soon will be, surprised that the price is going to increase, and they are the only losers in that great story of theirs...

defrost 13 hours ago [-]
Was it the drug that killed River Phoenix, or the one that Edmund Hillary had in his veins ascending Everest?

Greek philosophers pondered that question: https://www.youtube.com/watch?v=Ijx_tT5lCDY

mgfist 17 hours ago [-]
I hate how AI is being shoved in most things, but I do love AI in a few of those places (ai coding and google search replacement)
iamkonstantin 17 hours ago [-]
Have you noticed how Google search summaries have taken the shape of those annoying blogposts that take you through several “What is a computer program” explainers before answering the question?
globular-toast 16 hours ago [-]
I use it sparingly. I do still have to produce boilerplate and don't have the time/will to engineer a better solution. But any actual logic etc I do myself. Why would I take a chance on an LLM doing it wrong when I know exactly how I want it and am perfectly capable of doing it myself. Also, what the hell am I going to do in the minutes it takes to generate, just sit there and watch it? No thanks.
JimmaDaRustla 4 hours ago [-]
Lots of arguing about semantics of what the subscription is actually intended for.

Claude Code, as a coding assistant, isn't even mediocre, it's kind of crap. The reason it's at all good is because of the model underneath - there's tons of free and open agent tools that are far better than Claude Code. Regardless of what they say you're paying the subscription for, the truth is the only thing of value to developers is the underlying AI and API.

I can only think of a few reasons why they'd do this:

1. Their Claude Code tool is not simply an agent assistant - perhaps it's feeding data for model training purposes, or something of the sort where they gain value from it.

2. They don't want developers to use competitor models in any capacity.

3. They're offloading processing or doing local context work to drive down the API usage locally, making the usage minimal. This is very unlikely.

I currently use Opus 4.5 for architecting, which then feeds into Gemini 3 Flash with medium reasoning for coding. It's only a matter of time before Google competes with Opus 4.5, and when they do, I won't have any loyalty to Anthropic.

jacquesm 4 hours ago [-]
For AI companies, access to the interactions is very valuable; that explains the price difference. It is data that the competition does not have access to. Of course they are storing that data for model training purposes; that's the whole reason this exists in the first place. They are subsidizing until they get their quality up to the point that the addiction is so strong you won't be able to get through your workday without it. And then, surprise, the per-month access fee will start to rise.
ramoz 3 hours ago [-]
The other harnesses would arguably give them even richer data and product insights.
deveworld 16 hours ago [-]
The fix has been merged in https://github.com/anomalyco/opencode-anthropic-auth/pull/11, and PR https://github.com/anomalyco/opencode/pull/7432 is open to bump the version.

Until it's released, here's a workaround:

1. git clone https://github.com/anomalyco/opencode-anthropic-auth.git

2. Add to ~/.config/opencode/opencode.json: "plugin": ["file:///path/to/opencode-anthropic-auth/index.mjs"]

3. Run: OPENCODE_DISABLE_DEFAULT_PLUGINS=true opencode
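If it helps, here are steps 1-3 spelled out as commands (the clone path and home directory are placeholders, and the cat overwrites an existing config, so merge by hand if you already have one):

git clone https://github.com/anomalyco/opencode-anthropic-auth.git ~/src/opencode-anthropic-auth
mkdir -p ~/.config/opencode
# minimal config pointing at the cloned plugin
cat > ~/.config/opencode/opencode.json <<'EOF'
{
  "plugin": ["file:///home/you/src/opencode-anthropic-auth/index.mjs"]
}
EOF
OPENCODE_DISABLE_DEFAULT_PLUGINS=true opencode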

meeq 15 hours ago [-]
Anthropic shot themselves in the foot with this decision. It’s a PR nightmare and at the same time the open source community will always find a way. They just wasted everyone’s time and likely lost a bunch of users while doing so.

Thank you for sharing this!

mike_hearn 11 hours ago [-]
The open source community won't always find a way. Remote attestation isn't a new concept (it doesn't have to be hardware backed, the concept is general).

The industry has enough experience with this by now to know how it goes, and open source projects are always the first to drop out of the race. The time taken to keep up becomes much too high to justify doing on a voluntary basis or giving away the results, so as the difficulty of bypassing checks goes up the only people who can do it become SaaS providers.

BluRay BD+ was a good example of that back in the day. AACS was breakable by open source players. Once BD+ came along the open source doom9 crowd were immediately wiped out. For a long time the only breaks came from a company in Antigua that sold a commercial ripper, which was protected from US law enforcement by a WTO decision specific to that island.

You also see this with stuff like Google YouTube/SERP scraping. There currently aren't any open source solutions that don't get rapidly blocked server side, AFAIK. Companies that know how to beat it keep their solutions secret and sell bypasses as a service.

cmrdporcupine 5 hours ago [-]
Apparently this has already been stopped in its tracks?

Anthropic seems determined to plug the hole.

touristtam 13 hours ago [-]
has opencode repo moved organisation from under sst to anomalyco?
tietjens 12 hours ago [-]
yes, anomalyco now comprises sst, opencode, etc.
brainless 18 hours ago [-]
I know this will sound strange, but SOTA model companies will eventually allow subscription based usage through third-party tools. For any usage whatsoever.

Models are pretty much democratized. I use Claude Code and opencode and I get more work done these days with GLM or Grok Code (using opencode). Z.ai (GLM) subscription is so worth it.

Also, mixing models, small and large ones, is the way to go. Different models from different providers. This is not like cloud infra where you need to plan the infra use. Models are pretty much text in, text out (let's say for text only models). The minor differences in API are easy to work with.
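To make the "text in, text out" point concrete, the same request against two providers looks roughly like this (model names are just examples):

# Anthropic Messages API
curl -s https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{"model":"claude-haiku-4-5","max_tokens":256,"messages":[{"role":"user","content":"Summarize this diff: ..."}]}'

# OpenAI-style chat completions: same shape, different header and field names
curl -s https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-5.1","messages":[{"role":"user","content":"Summarize this diff: ..."}]}'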

MeetingsBrowser 18 hours ago [-]
Wouldn't this mean SOTA model companies are incentivized not to allow subscriptions through third parties?

If all the models are interchangeable at the API layer, wouldn't they be incentivized to add value at the next level up and lock people in there to prevent customers from moving to competitors on a whim.

Majromax 9 hours ago [-]
> If all the models are interchangeable at the API layer, wouldn't they be incentivized to add value at the next level up

Just the other day, a 2016 article was reposted here [https://news.ycombinator.com/item?id=46514816] on the 'stack fallacy', where companies who are experts in their domain repeatedly try and fail to 'move up the value chain' by offering higher-level products or services. The fallacy is that these companies underestimate the essential complexities of the higher level and approach the problem with arrogance.

That would seem to apply here. Why should a model-building company have any unique skill at building higher-level integration?

If their edge comes from having the best model, they should commoditize the complement and make it as easy as possible for everyone to use (and pay for) their model. The standard API allows them to do just this, offering 'free' benefits from community integrations and multi-domain tasks.

If their edge does not come from the model – if the models are interchangeable in performance and not just API – then the company will have deeper problems justifying its existing investments and securing more funding. A moat of high-level features might help plug a few leaks, but this entire field is too new to have the kind of legacy clients that keep old firms like IBM around.

brainless 18 hours ago [-]
I do not know what that next level is, to be honest. Web search, crawlers, code execution, etc. can all be easily added on the agent side. And some of the small models are so good when the context is small that being locked into one provider makes no sense. I would rather build a heavy multi-agent solution, using Gemini, GLM, Sonnet, Haiku, GPT, and even use BERT, GLiNER and other models for specific tasks. Low cost, no lock-in, still get high quality output.
thorum 17 hours ago [-]
AI labs are not charities and there is no way to make money offering unlimited access to SOTA LLMs. Even as costs drop, that will continue to be true for the best models in 2027, 2028 etc. - as demonstrated by the fact that CPU time still costs money. The current offerings are propped up by a VC bubble and not sustainable.
brainless 16 hours ago [-]
I agree but that is not the issue. See the really "large" models are great at a few things but they are not needed for daily tasks, including most coding tasks. Claude Code itself uses Haiku for a lot of tasks.

The non-SOTA companies will eat more of this pie and squeeze more value out of the SOTA companies.

substackreader 18 hours ago [-]
They already do; it’s called the API.
bazhand 14 hours ago [-]
FWIW this isn’t new, using a Claude/Max subscription auth token as a general-purpose “API key” has been known (and blocked) for ages. OpenCode basically had to impersonate the official Claude Code client to make that work, and it always felt like a loophole that would get patched eventually.

This is exactly why (when OpenCode and Charm/Crush started diverging) Charm chose not to support “use your Claude subscription” auth and went in a different direction (BYOK / multi-provider / etc). They didn’t want to build a product on top of a fragile, unofficial auth path.

And I think there’s a privacy/policy reason tightening this now too: the recent Claude Code update (2.1-ish) pops a “Help improve Claude” prompt in the terminal. If you turn that ON, retention jumps from 30 days to up to 5 years for new/resumed chats/coding sessions (and data can be used for model improvement). If you keep it OFF, you stay on the default 30-day retention. You can also delete data anytime in settings. That consent + retention toggle is hard to enforce cleanly if you’re not in an official client flow, so it makes sense they’re drawing a harder line.

artdigital 12 hours ago [-]
Yea exactly, I’m surprised people are calling this “drama”. It was against the ToS from the beginning; all the stuff supporting it just reverse engineered what Claude Code is doing and spoofed being a client.

I tried something similar a few months back and Claude already had restrictions against this in place. You had to very specifically pretend to be the real Claude Code (by copying system prompts etc.) to get around it, not just a header.

mcast 19 hours ago [-]
I’m not surprised they closed the loophole, it always felt a little hacky using an Anthropic monthly sub as an API with a spoofed prompt (“You are Claude Code, Anthropic's official CLI for Claude”) with OpenCode.
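For the curious, the now-blocked trick was roughly of this shape. This is pieced together from the issue thread; the auth header and model name here are my guesses rather than documented API, and it no longer works:

# sketch only — this is what third-party harnesses reportedly did
curl -s https://api.anthropic.com/v1/messages \
  -H "authorization: Bearer $CLAUDE_SUBSCRIPTION_OAUTH_TOKEN" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  --data @- <<'JSON'
{
  "model": "claude-opus-4-5",
  "max_tokens": 256,
  "system": "You are Claude Code, Anthropic's official CLI for Claude",
  "messages": [{"role": "user", "content": "hello"}]
}
JSON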

Google will probably close off their Antigravity models to 3P tools as well.

viraptor 18 hours ago [-]
And it already has a workaround. https://github.com/anomalyco/opencode/issues/7410#issuecomme...

I really don't understand why they thought this was a good idea. I mean, I know why they'd want to do this, but it's obviously not going to last.

szundi 18 hours ago [-]
[dead]
Draiken 9 hours ago [-]
Funnily enough, I didn't know about opencode and will now test it out and likely use it instead.

Improve your client so people prefer it? Nah.

Try to force people to use your client by subsidizing it? Now that's what I'm talking about.

As others said, why not just run a bunch of agents on Claude Code to surpass Opencode? I'm sure that's easy with their unlimited tokens!

carlgreene 8 hours ago [-]
Lol this is my exact thought as well. Just downloaded it now and taking it for a spin...pretty good so far!
tbliu 7 hours ago [-]
btw, Amp just announced Opus 4.5 is now supported in their free tier: https://ampcode.com/news/amp-free-frontier. I've been using it a ton and it's a super nice cli. It has ads, but they are fairly non-intrusive
planet_1649c 18 hours ago [-]
Cancelled my Claude subscription over it. Opencode is miles ahead of any other coding tool. Will stick to using it rather than Claude. Other models / other ways to access Claude exist.
Mave83 4 hours ago [-]
Same here, will cancel the subscription and move away from this nonsense. I want to use their LLM, not their CLI.
ewoodrich 18 hours ago [-]
Ugh, well at least this was the nudge I needed to cancel my Claude Pro subscription... I've already had a bad taste in my mouth watching the rate limits on the plan get worse and worse since I first subscribed and I have a few other subscriptions to fall back on while I've been evaluating different options. I literally never use the regular Claude Chat web UI either, that's pretty much 100% Gemini since I get it via my Google One plan.

OpenCode makes me feel a lot better knowing that my workflow isn't completely dependent on single vendor lock-in, and I generally prefer the UX to Claude Code anyway.

Sewer56 18 hours ago [-]
Here's how to get a refund on the website (all automated):

1. Profile Icon -> Get Help

2. Send us a Message

3. Click 'Refund'

Big corpos only talk money, so it's the best you could do in this situation.

If you can't refund, and need to wait till sub runs out after cancelling, go to the OpenCode repo and rename your tools so they start with capital letters. That'll work around it. They just match on lowercase tool names of standard tools.

everfrustrated 7 hours ago [-]
Really useful thanks!

I signed up thinking Claude Code was an IDE and was really disappointed with it. Their plugin for VSCode is complete trash. Way overhyped. Their models are good, but I can get those through other means.

tyfon 17 hours ago [-]
That actually worked, since I subscribed a few days ago specifically to try OpenCode.

"Your subscription has been canceled and your refund is on the way. Please allow 5-10 business days for the funds to appear in your account."

minimaxir 19 hours ago [-]
This appears to be part of a crackdown on third-party clients using Claude Code's credentials/subscriptions outside of Claude Code itself.

Not surprising, as this type of credential reuse is always a gray area, but it's weird that Anthropic deployed it on a Thursday night without any warning, since the inevitable shitstorm was very predictable.

metadat 19 hours ago [-]
Yes, it appears they've been cracking down elsewhere as well: https://github.com/charmbracelet/crush/pull/1783

Are they really that strapped already? It took Netflix like 20 years before they began nickel-and-diming us... with Anthropic it's starting after less than 20 months in the spotlight.

I suspect it's really about control and the culture of Anthropic, rather than only finances. The message is: no more funtime, use Claude CLI, pay a lot for API tokens, or get your account banned.

__loam 19 hours ago [-]
All of these companies are losing money in the billions every quarter. Look at how frequent the raises are.
metadat 19 hours ago [-]
It isn't that simple, demand is growing and they're investing in that growth. With the exception of ElGoog the providers are all private entities, so we don't really know.

Edit: TMTD, hi! That makes sense, yeah.

toomuchtodo 19 hours ago [-]
Unit profitability is out of reach; the demand curve exceeds the profitability curve.

Edit: hello, good to chat again!

B480FA8D 13 hours ago [-]
The "crackdown" is really mild though. To be fair to Anthropic, I don't think they have been committed to banning third-party tools.

github:anomalyco/opencode?rev=5e0125b78c8da0917173d4bcd00f7a0050590c55 (a trivial patch that works for now)

cma 19 hours ago [-]
They've added this change at the same time as the random trick prompts that try to get you to hit enter on the training opt-in from late last year. I've gotten three popups inside Claude Code today, at random times, trying to trick me into letting it train on my data, with a different selection defaulted than the one I'd already chosen.

(edit: 4 times now just today)

transcriptase 18 hours ago [-]
More evidence the EU solved the wrong problem. Instead of mandating cookie banners, mandate a single global “fuck off” switch: one-click, automatic opt-out from any feature/setting/telemetry/tracking/training that isn’t strictly required or clearly beneficial to the user as an individual. If it’s mainly there for data collection, ads, attribution, “product improvement”, or monetization, it should be off by default and remain that way so long as the “fuck off” option is toggled. Burden of proof on the provider. Fines big enough that growth teams and KPI hounds get legal to coach them on what “fuck off” means and why they need to respect it.
croes 18 hours ago [-]
Remember what happened to the DNT flag in the browser?

They just ignored it until it was gone.

If you don’t give them a way to trick and annoy you into accepting tracking, they completely ignore what you want.

wolvoleo 12 hours ago [-]
DNT was useless because it didn't have a legal basis. It would have been amazing if they had mandated something like this instead of the cookie walls.

Advertisers ignored it because they could. And they complained that it defaulted to on; however, cookies are supposed to be opt-in, so that's how it was supposed to work anyway.

memoriuaysj 15 hours ago [-]
Remember how all of HN and tech people were saying that DNT was a Micro$oft scam designed to break privacy because it was enabled by default without requiring user action?

To the point that the Apache web server developers added a custom rule in the default httpd.conf to strip away incoming DNT headers!!!

https://arstechnica.com/information-technology/2012/09/apach...

adastra22 17 hours ago [-]
DNT wasn't actually legally mandated.
skeptic_ai 19 hours ago [-]
Last time I mentioned they are sketchy I got a ton of downvotes. I’m happy to see more support.
immibis 19 hours ago [-]
They're losing money on every inference, so of course they want as many banned users as they can get away with.
jiangplus 18 hours ago [-]
This reminds me of when Anthropic cut first-party access to Claude models in Windsurf as they were about to be acquired by Google.

https://www.reddit.com/r/ChatGPTCoding/comments/1l2y2kh/anth...

TiredOfLife 9 hours ago [-]
They cut access because OpenAI was about to buy Windsurf. The moment that deal was cancelled access was returned.
outlier99 18 hours ago [-]
They didn't even ban non-Claude-Code clients; they just banned certain tool names that OpenCode uses...

https://github.com/anomalyco/opencode/issues/7410#issuecomme...

ChaseRensberger 17 hours ago [-]
Wow, I sat down to do a little bit of late-night coding and ended up running into this nightmare. Just canceled my Anthropic subscription and started paying for OpenCode Zen. Unfortunately, OpenCode is enough of a better product that I will indeed pay 10 times the price to use it.
pietz 5 hours ago [-]
Honest question: Why would I use Claude with OpenCode if I have a Claude Max subscription? Why not Claude Code?
msten 10 hours ago [-]
This makes total sense to me. Limiting the usage to their tooling means they can place reasonable limits on usage by controlling how the client interacts with the LLM and making those calls as efficient as possible. The current state of things didn't really feel sustainable.
tempaccount420 9 hours ago [-]
They can, and already do, place limits on the server side.
ozlikethewizard 16 hours ago [-]
I know this is somewhat unreasonable but watching "devs" unable to work because "faceless corp 1007" cut their access definitely has a level of schadenfreude to it.
RamtinJ95 15 hours ago [-]
who said they are unable to work? This meme is so old and overused.
wiseowise 14 hours ago [-]
They laugh about LLMs hallucinating, but they themselves hallucinate facts that don't exist all the time.
dboon 17 hours ago [-]
For anyone coming here looking for a solution: I peeked around the OC repository, and a few PRs got merged in. Add this to $HOME/.config/opencode/opencode.json:

    "plugin": ["opencode-anthropic-auth"]

That is, if it's not pulled into the latest OC by the time I post this. Not sure what the release cycle is for builtin plugins like that, but by force-specifying it, it definitely pulls master, which has a fix.

https://opencode.ai/docs/plugins/

paolomainardi 14 hours ago [-]
Not strictly related, but since Copilot could be the next to violate the TOS, I've asked for an official response here: https://github.com/orgs/community/discussions/183809. If someone can help raise this question, it's more than welcome.
whs 4 hours ago [-]
Copilot in VSCode is integrated with VSCode's LLM provider API, which means that any plugin that needs LLM capabilities can submit requests through Copilot. Roo Code supports that as an option. And of course, there's a plugin that starts an OpenAI/Anthropic-compatible web server inside VSCode that just calls the LLM provider API. It seems that if you use unlimited models (like GPT-4.1) you probably get unlimited API calls. However, those models don't seem to be very agentic when used in Claude Code.
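
For reference, this is roughly what such a plugin does under the hood. A minimal sketch using VSCode's vscode.lm extension API; the "gpt-4.1" family string is an assumption, and what you can actually select depends on your Copilot plan:

    // Minimal sketch: ask Copilot for a chat model via VSCode's LLM provider API
    // and stream a single response. Runs inside a VSCode extension.
    import * as vscode from "vscode";

    export async function askCopilot(prompt: string): Promise<string> {
      // Family string is an assumption; pick whatever your plan exposes.
      const [model] = await vscode.lm.selectChatModels({ vendor: "copilot", family: "gpt-4.1" });
      if (!model) {
        throw new Error("No Copilot chat model available - is Copilot installed and signed in?");
      }

      const messages = [vscode.LanguageModelChatMessage.User(prompt)];
      const response = await model.sendRequest(
        messages,
        {},
        new vscode.CancellationTokenSource().token
      );

      // The response arrives as a stream of text fragments.
      let out = "";
      for await (const fragment of response.text) {
        out += fragment;
      }
      return out;
    }
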
everfrustrated 7 hours ago [-]
GitHub doesn't offer any unlimited style AI model plans so I don't think they'll care. Their pricing is fairly aligned with their costs.

This only affects Claude, as they try to market their plan as unlimited (with various usage rate limits), but it's clearly costing them a lot more than what they sell it for.

realharo 7 hours ago [-]
Copilot plan limits are however "per prompt", and prompts that ask the agent to do a lot of stuff with a large context are obviously going to be more expensive to run than prompts that don't.
hakanderyal 7 hours ago [-]
If this helps to keep the $200 plan around longer, I’m happy.

The thing I most fear is them banning multiple accounts. That would be very expensive for a lot of folks.

pat_erichsen 18 hours ago [-]
This situation feels like a +1 for Agent Client Protocol (ACP) [1].

In ACP, you auth directly to the underlying agent (e.g. the Claude Code SDK) rather than to a third-party tool (e.g. OpenCode) that then calls an inference endpoint on your behalf. If you're logged into Claude Code, you're already logged into Claude Code through any ACP client.

[1] https://agentclientprotocol.com/overview/agents

articunoJones 18 hours ago [-]
It works like that for the Agent SDK; you could connect OpenCode to that.
bionhoward 14 hours ago [-]
Why act like it’s a mystery when the Claude Code repo clearly explains:

> When you use Claude Code, we collect feedback, which includes usage data (such as code acceptance or rejections), associated conversation data, and user feedback submitted via the /bug command.

They subsidize Claude Code because it gives them your codebase and chat history

ksynwa 13 hours ago [-]
They should be getting most of it from third party clients too. At least the chat and the files are being sent to or from Anthropic's own servers.
kalamazooo 11 hours ago [-]
OpenCode brought this on themselves and their users. Plugging Claude Max subscriptions into other agents has been against the terms of service basically since the start and I imagine Anthropic must have issued plenty of warnings here that were ignored. They wouldn’t do this unless they really had to. If folks are mad about being rugged, blame OpenCode for misleading their users when they’ve long known this day was coming. Brilliant cynical strategy though to exploit soft enforcement for growth and lay the blame at the company that provided them cheap tokens.
monooso 11 hours ago [-]
That's a lot of supposition leading to a shaky conclusion.

Your speculation may be correct, of course, but I have yet to see any mention of Anthropic issuing "plenty of warnings", or only taking this action because they "really had to."

aamoscodes 7 hours ago [-]
I haven’t checked with other third parties, but Crush got a warning https://github.com/charmbracelet/crush/pull/1783
Szpadel 16 hours ago [-]
Inference costs nothing in comparison to training (you can batch so many requests in parallel at their scale); for inference they should be profitable even when you drain the whole weekly quota every week.

But of course they have to pay for training too.

This looks like a short-sighted money grab (do they need it?) that trades short-term profit for trust and customer base (again), as people will cancel their now-unusable subscriptions.

Changing model family when you have instructions tuned for one of them is tricky and takes a long time, so people will stick with one of them for a while; but with API pricing you quickly start looking for alternatives, and the OpenAI GPT-5 family is also fine for coding once you spend some time tuning it.

Another pain is switching your agent software; moving from CC to Codex is more painful than just picking a different model in something like OC, which is a plausible argument for why they are doing this.

everfrustrated 7 hours ago [-]
>inference costs nothing

Clearly not true. Just look at OpenRouter model providers. Costs are very very real.

d4rkp4ttern 10 hours ago [-]
Curious about portability of CC -> OpenCode. I wonder how much of my CC setup (skills, commands, agents, hooks etc) will work if I were to switch to OpenCode.
articunoJones 18 hours ago [-]
Fork OC and use Anthropic's Agent SDK, which allows you to build on top of your subscription.

The Agent SDK can piggyback on your existing Claude Code authentication

https://platform.claude.com/docs/en/agent-sdk/quickstart
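
For anyone curious what that looks like in practice, here's a minimal sketch based on the Agent SDK quickstart. The option names and message shapes are my recollection of the docs, so treat them as assumptions and check the link above:

    // Minimal sketch: drive a coding agent through the Claude Agent SDK, which can
    // reuse your local Claude Code login instead of an API key.
    import { query } from "@anthropic-ai/claude-agent-sdk";

    async function main() {
      for await (const message of query({
        prompt: "List the TODO comments in this repo",
        options: { maxTurns: 3 }, // assumed option name; see the quickstart docs
      })) {
        // The stream yields assistant/tool messages and finishes with a result message.
        if (message.type === "result") {
          console.log(message);
        }
      }
    }

    main();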

ankushkun_ 15 hours ago [-]
Woke up and everything is on fire. I thought OpenCode had some bug because it updated itself this morning, but realised it's Claude that blocked third-party clients :( L for Anthropic indeed; OpenCode had a way better experience than Claude Code.
CoolCold 15 hours ago [-]
Genuine question, as someone who has never used Claude Code but has used OpenCode/Aider/Gemini CLI: since many here say OpenCode is better, mind sharing why (from an end-user perspective)?

I was thinking of trying Claude Code later and may reconsider doing so.

aserafini 15 hours ago [-]
I experimented with Claude Code but returned to the familiar Aider which existed before all of these tools AFAIK.

You’ll notice people in Aider GitHub issues being concerned about its rather conservative pace of change and lack of a plug-in ecosystem. But I actually started to appreciate these constraints as a way to really familiarise myself with the core “edit files in a loop with an end goal” that is the essence of all agentic coding.

Anytime I feel a snazzy feature is lacking from Aider I think about it and realise I can already solve it in Aider by changing the problem to editing a file in a loop.

CoolCold 14 hours ago [-]
Well, there is Aider-CE aka Cecli, which moves and updates almost every day (I've tried to try it, but not much).

OpenCode is a totally different beast compared to Aider, and I've mostly stopped using Aider for the last 2 months or so - it's just simpler and faster to iterate with OpenCode for me.

B480FA8D 14 hours ago [-]
The TOS, which is a contract of adhesion for consumer-facing products, does not really matter that much in my opinion, since "we have to lock you into our specific interface on our public offering" is not a cognizable interest. SCOTUS is also very clear in requiring actual damages (incremental harms) to establish a CFAA violation. At any rate, opencode is essentially providing equitable estoppel as a service by being open and popular - they cannot go after me without first dealing with the "unionized" project (famous last words)! I don't think they get to conflate an alternative-interface dispute with their intentional pricing strategy of losing money on heavy users.

Of course, they are banning for financial interests, not nominal alleged contractual violations, so Anthropic is not sympathetic.

// NOT LEGAL ADVICE

Obviously, I think it can make sense for Anthropic, since opencode users likely disproportionately cost them money with little lock-in - you can switch the moment a comparable model is available elsewhere. It does not (necessarily) mean there are any legal or ethical issues barring us from continuing to use the built-in opencode OAuth, though.

pmihaylov 13 hours ago [-]
I have a background agents app I'm running - https://claudecontrol.com and it seems I am not impacted by this change. My anthropic sub still works fine.

I believe this is because I am using claude code as a CLI for SDK purposes vs using it as a typescript library. Quite a fortunate choice at the time!

casparvitch 18 hours ago [-]
Unsure about the other competition, but I can vouch for synthetic.new's subscription for GLM (+ other open models). Not quite as accurate as Anthropic's models, but good enough for basically everything I do.
planckscnst 18 hours ago [-]
I've been adding an OpenCode feature that allows the LLM to edit its own context [1] and trying to debug an issue with the Anthropic API because I'm calling it with missing fields that it expects. I hope my multiple erroneous API calls aren't what triggered this decision.

[1]: https://github.com/Vibecodelicious/opencode/tree/surgical_co...
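
For anyone comparing against a known-good request: as far as I know, the Messages API only strictly requires model, max_tokens, and messages (plus the version header). A minimal sketch for debugging - the model id is just an example:

    // Minimal well-formed Anthropic Messages API call, handy as a baseline when
    // chasing "missing field" errors. Model id is an example; needs ANTHROPIC_API_KEY.
    async function minimalMessagesCall() {
      const res = await fetch("https://api.anthropic.com/v1/messages", {
        method: "POST",
        headers: {
          "x-api-key": process.env.ANTHROPIC_API_KEY ?? "",
          "anthropic-version": "2023-06-01",
          "content-type": "application/json",
        },
        body: JSON.stringify({
          model: "claude-sonnet-4-5",                     // example model id
          max_tokens: 1024,                               // required
          messages: [{ role: "user", content: "Hello" }], // required
        }),
      });
      console.log(res.status, await res.json());
    }

    minimalMessagesCall();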

zeroDivisible 7 hours ago [-]
It doesn't mean much, but I cancelled my 5x Max subscription to Claude. It's the only way I can tell them what I think about this change.
imdsm 14 hours ago [-]
Anthropic should buy OpenCode and merge with CC
bkolobara 14 hours ago [-]
Please no. I have recently switched to opencode and the product quality is so much higher.
vldszn 8 hours ago [-]
At this point, Anthropic should acquihire OpenCode.
azuanrb 15 hours ago [-]
I’m curious whether this is related to the recent update. When I opened Claude Code, I was greeted with a “Help improve Claude” message that changes the retention policy from 30 days to 5 long years.

They can’t apply these changes or update parts of the flow for the non-Claude CLI, which explains their latest move.

B480FA8D 13 hours ago [-]
A crucial piece of context is that this "block" is resolved (for now) by bumping version numbers. It is almost as if Anthropic deployed this to test the waters on community reaction... Right now it is trivial to fingerprint opencode users without deep inspection into the conversations (privacy concerns), but Anthropic is not doing that.

https://github.com/anomalyco/opencode/commit/5e0125b78c8da09...

artdigital 12 hours ago [-]
See https://github.com/charmbracelet/crush/pull/1783

I wouldn’t be surprised if Anthropic filed a similar request against OpenCode and eventually follows it up with a takedown.

baq 16 hours ago [-]
Meanwhile, GPT Codex 5.2 was never available outside of Codex and nobody made a fuss about it.
meeq 9 hours ago [-]
It is available in Opencode, you just have to install this plugin: https://github.com/numman-ali/opencode-openai-codex-auth

Been using my ChatGPT sub with Opencode for a couple of weeks now. Only wish I‘d found out sooner. Could have saved a decent chunk of money.

kirubakaran 13 hours ago [-]
So is this the Bezos play of depressing the acquisition price? iirc Bezos froze the Amazon referral program of GoodReads.com to force them to take a lower price. If so, shame on them!
casparvitch 17 hours ago [-]
It appears [0] there is now a fix/workaround (?)

[0] https://github.com/anomalyco/opencode-anthropic-auth/pull/11

ehsanu1 17 hours ago [-]
I understand them not wanting to allow non-coding agents to use the subscription, but why specifically block another coding agent? Is the value Anthropic gets from users specifically using claude code that high? Is it about the training data opt-ins?
nake89 15 hours ago [-]
Hopefully this doesn't happen with GitHub Copilot. OpenCode is fantastic. They offer a server and an SDK, which means I can build amazing personal tools. GitHub Copilot's low price + OpenCode is just amazing.
nomadygnt 7 hours ago [-]
Honestly with how good OpenCode is, this really just makes GitHub copilot the best subscription for the average user. It’s the cheapest. It’s free for students. You get access to all of OpenAI models AND Anthropic models AND Gemini models and you still have a pretty dang good CLI/TUI (OC, not Copilot CLI). And the limits are pretty reasonable. I’ve never hit the limits in a month though admittedly I am not a “five agents at once” kind of vibe coder.
Garlef 16 hours ago [-]
Maybe a subscription-based payment model would also work in general?

Similar to a gym membership where only a small part of the paying users actually show up.

StarterPro 18 hours ago [-]
Why don't you just ask Claude Code to write you a workaround? I'm sure if you say "fix plz" enough times, it'll work eventually.
mrdw 16 hours ago [-]
Just use the free Antigravity subscription with Opus 4.5 and the reverse-engineered API, plus a bunch of cheap Google accounts.
sergiotapia 18 hours ago [-]
Switched to the z.ai coding plan and have used the GLM 4.7 model for a few complex changes since posting this; it works really well.

I don't think I will renew Anthropic, the open models have reached an inflection point.

selectnull 13 hours ago [-]
I'm willing to cancel my Claude subscription because of this.
ChaseRensberger 17 hours ago [-]
seems to be an easy fix already up: https://github.com/anomalyco/opencode-anthropic-auth/pull/10
noobcoder 13 hours ago [-]
Well, it's totally within Anthropic’s rights, but still a bad look.
kburman 8 hours ago [-]
It’s the standard enshittification lifecycle: subsidize usage to get adoption, then lock down the API to force users into a controlled environment where you can squeeze them.

Like Reddit, they realized they can't show ads (or control the user journey) if everyone is using a third-party client. The $200 subscription isn't a pricing tier. It's a customer acquisition cost for their proprietary platform. Third-party clients defeat that purpose.

titaniumrain 12 hours ago [-]
Anthropic has a bunch of weirdos making decisions.
ares623 18 hours ago [-]
A rare glimpse into the enshittification that is to come to these tools. It’s only a matter of time.
hooverd 4 hours ago [-]
Cynically, I think they're in a Gmail 2008 era right now.
wahnfrieden 19 hours ago [-]
Meanwhile, OpenAI co-signs https://github.com/steipete/oracle which lets you use your ChatGPT subscription to gain programmatic/agentic access to 5.2 Pro via automating browser access to the web frontend. Karpathy and other leaders have praised this feature on X.

If that is indeed so welcome, imagine what else you could script via their website to get around Codex rate limits or other such things.

After all, what could be so different about this compared to what browsers like Atlas already do?

sams99 18 hours ago [-]
Codex requires stuffing a very specific system prompt, otherwise the custom endpoint will reject you.
casparvitch 18 hours ago [-]
Maybe Anthropic saw Yegge's gas town concept and got scared off?
striking 18 hours ago [-]
I believe Gas Town uses Claude Code under the hood, still? Just with some really wild hooks
casparvitch 17 hours ago [-]
Oh you mean literally claude code, not some hack like opencode? I guess that makes sense yeah, given the tmux interface.
RamtinJ95 15 hours ago [-]
I get calling opencode a hack, but actually, looking at the two products, Claude Code is the hack compared to OpenCode, for real.
rvcdbn 7 hours ago [-]
Couldn't OpenCode just switch to the Agent SDK?
cloudking 15 hours ago [-]
Why is OpenCode better than Claude Code?
azuanrb 15 hours ago [-]
I wouldn’t say it’s better, but it does have some nice features. Opencode has a web UI, so I can open it on my laptop and then resume the same session on the web from my phone through Tailscale. It’s pretty handy from time to time and takes almost zero effort from me.
nake89 15 hours ago [-]
Works with several providers (e.g. GitHub Copilot, or bring your own key). They offer a server and an SDK, so you can build all kinds of personal tools. It's amazing.
paradite 10 hours ago [-]
Claude Code also has Claude Agent SDK (basically a wrapper around Claude Code) with a million downloads in the past week.

https://www.npmjs.com/package/@anthropic-ai/claude-agent-sdk

chrisvalleybay 14 hours ago [-]
Very interesting! What do you mean by personal tools? Do you have any examples of something you've built with this?
wiseowise 14 hours ago [-]
It is open source, to start with, and works with a lot of LLM providers instead of being vendor-locked into one.
gaigalas 8 hours ago [-]
So, models are officially a commodity now.

The battle is for the harness layer. And it's quickly going the commodity way as well.

What's left for boutique-style AI companies?

dmezzetti 9 hours ago [-]
Two words: Open Source.
kachapopopow 7 hours ago [-]
As much as I love Opus, I hate this company (not for the reasons you'd think, though). I just have a proxy that exposes an unauthenticated endpoint and bypasses all their attempts at banning for opencode usage, since I was already on something like my 5th Claude account trying to get around random bans.
bilalbayram 7 hours ago [-]
It's just sad.
macinjosh 17 hours ago [-]
OpenCode has a workaround already: https://github.com/anomalyco/opencode-anthropic-auth/pull/11
galsjel 19 hours ago [-]
Just open source Claude Code and maybe it gets supported by fostering a community... Oh wait, no lock in? Sorry there's no stakeholder value in that.
commanderkeen08 19 hours ago [-]
No. Do you realize how much of a joke Claude Code is under the hood? How they implemented client auth?

Well let me tell you

https://github.com/anomalyco/opencode/blob/dev/packages/open...

You literally send “you are Claude Code” as your first message.

The fact that this ever worked was insane.

The headline is more like: Anthropic vibe-codes a bug and finally catches it.
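
For the curious, this is the gist of the spoof (simplified; the real flow also involves OAuth tokens from the subscription login and extra headers, which I'm omitting here):

    // Illustrative only: the request body third-party clients sent to pass as Claude Code.
    // Auth details (subscription OAuth token, extra headers) are deliberately left out.
    const spoofedBody = {
      model: "claude-sonnet-4-5", // example model id
      max_tokens: 1024,
      // The magic string: claim to be the first-party client.
      system: "You are Claude Code, Anthropic's official CLI for Claude",
      messages: [{ role: "user", content: "Refactor src/index.ts" }],
    };
    // POSTed to https://api.anthropic.com/v1/messages using the subscription's credentials.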

krackers 19 hours ago [-]
Is there any other way to do it though? Even if they implemented some form of auth logic, since it's all client side anyone could spoof it. The only real way to distinguish Claude Code from non-Claude Code is behavioral analysis (e.g. system prompt, set of tools, etc.). Or engage in a cat and mouse game of increasingly obfuscated challenge/auth.
firloop 18 hours ago [-]
Tonight we are all Claude Code, Anthropic's official CLI for Claude.
justinsaccount 18 hours ago [-]
That sort of mechanism is not a "joke" and is often used for trademark/legal reasons, not technical ones.
ronsor 18 hours ago [-]
Both Nintendo and Sega tried that, and it did not work as they legally intended.
zerohp 18 hours ago [-]
The joke is that AI companies pretend to care about doing legal things.
koakuma-chan 19 hours ago [-]
There is no way to prevent people from using a custom client.
notpushkin 18 hours ago [-]
There are ways to make it painful. Though it would probably be painful for “legit” users, too.
gpm 18 hours ago [-]
Game developers disagree...
koakuma-chan 18 hours ago [-]
Time to add Denuvo to Claude Code?
serf 18 hours ago [-]
yeah, and it's been an easy win for game developers and smooth sailing on that front, too.

..right?

skeptic_ai 19 hours ago [-]
So now you just need to remove the “read” tool to authenticate?
_flux 18 hours ago [-]
Or call the tool "Read" and it works, according to an issue comment.

But actually the solution is checking out how the official client does it and then doing the same steps, though if people start doing this, Anthropic will probably start making it more difficult to monitor and reverse engineer.

It might not matter, as some people have a lot of expertise in this, but people might still get the message and move away to alternatives.

realharo 16 hours ago [-]
The endgame is a small background agent that runs Claude Code every once in a while, inspects its traffic, and adjusts on the fly.
_flux 15 hours ago [-]
Then they'd start pinning certs and hiding keys inside the obfuscated binary to make traffic inspection harder?

And if an open source tool would start to use those keys, their CI could just detect this automatically and change the keys and the obfuscation method. Probably quite doable with LLMs..

realharo 14 hours ago [-]
Without breaking legitimate clients?

At some point it becomes easier to just reevaluate the business model. Or just make a superior product.

_flux 12 hours ago [-]
Aren't Anthropic in control of all the legitimate clients? They can download a new version, possibly automatically.

I believe the key issue here is that the product they're selling is an all-you-can-eat API buffet for $200/month. The way they manage this is that they also sell the client for it, so they can more easily predict how many tokens it is actually going to consume (i.e. they can just put their new version of Claude Code through CI with some example scenarios and see that it doesn't blow out their computing quota). If some third-party client is also using the same subscription, it makes it much more difficult to make the deal affordable for them.

As I understand it, using the per-token API works just fine, and I assume the reason people don't want to use it is because it ends up costing more.

krater23 14 hours ago [-]
Hold on tight, the wild journey to enshittification has begun.
llmslave2 13 hours ago [-]
ensloppification
honeyspoon 19 hours ago [-]
end of an era
TiredOfLife 5 hours ago [-]
Reminder that Anthropic owns Bun.
planet_1649c 18 hours ago [-]
Damn :((
willvarfar 15 hours ago [-]
Presumably there will soon be banner ads in Claude Code then? </s>
behnamoh 19 hours ago [-]
Anthropic is having its Apple moment: having so many customers means the company is always in the news, for better or worse.

When iPhones receive negative reviews it's not like only Apple screwed up; others did too, but they sell so much less than Apple that no one hears about them:

    "Apple violated my privacy a tiny bit" makes the news;
    "Xiaomi sold my fingerprint info to 3rd party vendors" doesn't.

Similarly, Anthropic is under heavy fire recently because frankly, Claude Code is the best coding agent out there, and it's not even close.
Xevion 16 hours ago [-]
Anthropic's models are good. Claude Code isn't good.
sfmike 19 hours ago [-]
I find Codex better by far in some cases.
NamlchakKhandro 19 hours ago [-]
Lmao? seriously. Claude Code is epic levels terrible compared to opencode.
throwaway7783 18 hours ago [-]
Can you elaborate?
dkdcio 18 hours ago [-]
how so?