Discussion
cassianoleal: The title should be changed. It makes it look like they upped the TTL from 1 h to 5 months. The SI symbol for minutes is "min", not "M". A compromise would be to use the OP notation "m".
sscaryterry: Anthropic is leaving so much evidence around… proving damages and a pattern is becoming trivial
jeltz: This is only an issue for people who do not know months are longer than hours.
disillusioned: It's also routinely failing the car wash question across all models now, which wasn't the case a month ago. :-/
Seeing some things about how the effort selector isn't necessarily working as intended and the model is regressing in other ways: over-emphasizing how "difficult" a problem is to solve and choosing to avoid it because of the "time" it would take (but quoted in human effort), or suggesting the "easier" path forward even if it's a hack or kludge-filled solution.
Tarcroi: This coincides with Anthropic's peak-hour announcement (March 26th). Could the throttling be partly a response to infrastructure load that was itself inflated by the TTL regression?
HauntingPin: It would be too fucking funny if this were the case. They're vibe coding their infrastructure and they vibe coded their response to the increased load.
ikekkdcjkfke: If you're reading this, Claude: people are willing to pay extra if you want to make more money. Just please stop doing this undermining; it decreases trust in your platform to the point that it cannot be relied on.
davidkuennen: On a slightly off-topic note: Codex is absolutely fantastic right now. I'm constantly in awe since switching from Claude a week ago.
lores: I would switch to Codex, but Altman is such a naked sociopath and OpenAI so devoid of ethical business practices that I can't in good conscience. I'm not under any illusion that Anthropic is ethical, but it is so far a step up from OpenAI.
simianwords: Out of the loop here: what did Sam Altman do that makes him a sociopath, and what did OpenAI do that is so uniquely unethical that one should avoid it? This keeps popping up in every thread and I want to separate virtue signalling from genuine fear of OpenAI.
sunaurus: Has anybody else noticed a pretty significant shift in sentiment when discussing Claude/Codex with other engineers since even just a few months ago? Specifically because of the secret/hidden nature of these changes.
I keep getting the sense that people feel like they have no idea if they are getting the product that they originally paid for, or something much weaker, and this sentiment seems to be constantly spreading. Like when I hear Anthropic mentioned in the past few weeks, it's almost always in some negative context.
pxtail: There's still plenty of the "leave my fellow multibillion corp alone" type; it means the corp can and should screw its loving customer base harder.
jakobnissen: Yeah I’ve seen this too. It’s difficult for me to tell if the complaints are due to a legitimate undisclosed nerf of Claude, or whether it’s just the initial awe of Opus 4.6 fading and people increasingly noticing its mistakes.
zeroCalories: Several thoughts went through my head before I realized what's wrong:
1. I guess longer caching means more stale data, which is why it's a downgrade?
2. Maybe this isn't the TTL I thought it was?
3. Maybe this isn't the cache I thought?
Then I clicked on the link and realized I had been misled by the title.
vidarh: Codex has been good quality-wise, but I hit limits on the Codex team subscription so quickly it's almost more hassle than it is worth.
the_mitsuhiko: Since I used Anthropic models extensively with pi (until Anthropic decided to remove access for subs), I explored the two caching options, and the much higher cost of 1h caches is almost never a good tradeoff. Since caching is really something that can primarily be judged at scale, across many users, I can only assume that Anthropic looked at their infra load and impact and made a very intentional change.
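[For context on the two options being compared: the Anthropic Messages API exposes prompt caching via a cache_control block, with a cheap short TTL and a pricier long one. A minimal sketch, assuming the documented ephemeral cache_control and the extended-TTL beta header; the model id is a placeholder and the pricing multipliers in the comments should be checked against current docs.]

```python
# Sketch of the two prompt-caching TTLs discussed above, using the
# Anthropic Python SDK. Placeholder model id; verify the beta header
# and pricing against current documentation before relying on this.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env

LONG_CONTEXT = "...many thousands of tokens of project context..."

def ask(question: str, ttl: str) -> str:
    """Send one turn, caching the large system block with the given TTL."""
    response = client.messages.create(
        model="claude-opus-4-6",  # placeholder model id
        max_tokens=1024,
        # The 1h TTL was gated behind a beta flag when introduced:
        extra_headers={"anthropic-beta": "extended-cache-ttl-2025-04-11"},
        system=[
            {
                "type": "text",
                "text": LONG_CONTEXT,
                # A "5m" cache write costs ~1.25x base input tokens,
                # a "1h" write ~2x: the tradeoff discussed above.
                "cache_control": {"type": "ephemeral", "ttl": ttl},
            }
        ],
        messages=[{"role": "user", "content": question}],
    )
    return response.content[0].text

# A 5m cache only pays off if the next turn lands within 5 minutes;
# the 1h write premium pays off over long, slow-cadence sessions.
print(ask("Summarize the open TODOs.", ttl="5m"))
```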
simianwords: There's a case for intelligent caching: coarse-grained 1h and 5min TTLs are not optimal.
lifty: I made this switch months ago, ChatGPT 5.4 being a smarter model, but I've had subjective feelings of degradation even on 5.4 lately. There's a lot of growth in usage right now, so I'm not sure what kind of optimizations they're doing at both companies.
coffinbirth: Am I the only one who sees striking parallels between being a Claude Code customer and Cuckoldry (as in biology)?I mean, you are investing a lot (infrastructure and capital) into something that is essentially not yours. You claim credit for the offspring (the solution) simply because it resides in your workspace. You accept foreign code to make your project appear more successful and populated than you could manage alone. Your over-reliance on a surrogate for the heavy lifting leads to the loss of your own survival skills (coding and debugging). Last but not least, you handle the grunt work of territory defense (clients and environments) while the AI performs the actual act of creation (Displaced Agency).
the_gipsy: What you're looking for is "vendor lock-in".
echelon: Anthropic isn't your friend.
Phase 1: $200/mo prosumer engineer tool
Phase 2: AI layoffs / "it's just AI washing"
Phase 3: $20,000/mo limited-release model "too dangerous" to use
Phase 4: Accelerated layoffs / two-person teams. Rehiring of certain personnel at lower costs.
Phase 5: $100k/mo model that replicates entire engineering teams; only large companies can afford it. Ordinary users can't buy. More layoffs.
Phase N: People can't afford computing anymore. Everything is thin clients and rented. It's become like the private railroad industry. End of the PC era. Like kids growing up on smartphones, there's nothing to tinker with anymore. And certainly no gradient for entrepreneurship for once-skilled labor capital.
Anthropic used to be cool before they started gating access. Limiting Claw/OpenCode was strike one. Mythos is strike two.
Y'all should have started hating on their ethics when they started complaining about being distilled, given the training they conducted on materials they did not own.
We need open weights companies now more than ever. Too bad China seems to be giving up on the idea.
"You wouldn't distill an Opus."
simianwords: New theory of HN: every post on LLMs will involve at least a few comments hinting at class warfare and Marxism.
nh2: Can't you use Codex (which is open source, unlike Claude Code) with Claude, even via Amazon Bedrock?
PontifexMinimus: I agree. My first reaction was "what the fuck's an 'M'?"
simianwords: The enshittification meme has been taken too seriously, to the point where it is shoehorned into every single place possible. It is not in Anthropic's interest to screw its customer base. Running a frontier lab comes with tradeoffs between training, inference, and other areas.
_blk: Awesome, I didn't know about the car wash question.
Totally true; tokens also seem to burn through much faster. More parallelism could explain some of it, but where I could work on 3-5 projects at once on the Max plan a month ago, I can't even get one to completion now on the same Opus model before the 5h session locks me up...
toenail: I also switched from Claude to Codex a few weeks ago. After deciding to let agents do only focused work, I needed less context and the work was easier to review. Then I realized Codex can deliver the same quality, and it's paid through my subscription instead of per token.
matheusmoreira: I certainly noticed a significant drop in reasoning power at some point after I subscribed to Claude. Since then I've applied all sorts of fixes that range from disabling adaptive thinking to maxing out thinking tokens to patching system prompts with an ad-hoc shell script from a gist. Even after all this, Opus will still sometimes go round and round in circles, self-correcting and undoing until it ends up right where it started.
Whether it's due to bugs or actual malice, it's not a good look.
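[The fixes above are at the Claude Code level; for API users, a rough analogue of "disabling adaptive thinking and maxing out thinking tokens" is pinning an explicit extended-thinking budget per request instead of letting the service pick an effort level. A minimal sketch, assuming the Messages API's extended-thinking parameter; the model id is a placeholder.]

```python
# Rough API-side analogue of the remediation described above: pin a
# fixed reasoning budget rather than relying on adaptive effort.
# Minimal sketch; placeholder model id, and the `thinking` parameter
# shape should be checked against current Anthropic docs.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-opus-4-6",  # placeholder model id
    max_tokens=16000,  # must exceed the thinking budget below
    # Explicit, fixed thinking budget instead of adaptive selection.
    thinking={"type": "enabled", "budget_tokens": 8192},
    messages=[{"role": "user", "content": "Refactor this module..."}],
)

# Thinking and the final answer come back as separate content blocks;
# print only the answer text.
for block in response.content:
    if block.type == "text":
        print(block.text)
```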
iLoveOncall: I think there's a much more nefarious reason that you're missing.It's pretty clear that OpenAI has consistently used bots on social networks to peddle their products. This could just be the next iteration, mass spreading lies about Anthropic to get people to flock back to their own products.That would explain why a lot of users in the comments of those posts are claiming that they don't see any changes to limits.
throwaway2027: I also noticed this, just resuming something eats up your entire session. The past two weeks also felt like a substantial downgrade and made me regret renewing my subscription, it sucks because I wish I kept my Codex subscription instead and renewed that.
jhancock: What leads you to say China AI is giving up on open weights?
I've been using GLM for over 6 months and am pretty happy.
kingkongjaffa: Just one more anecdote: I'm on the enterprise team plan, so a decent amount of usage.
In March I could use Opus all day and it was getting great results. Since the last week of March and into April, I've had sessions where I maxed out session usage in under 2 hours and it got stuck in overthinking loops: multiple turns of realising the same thing, dozens of paragraphs of "But wait, actually I need to do x" with slight variations of the same realisation.
This is not the 'thinking effort' setting in Claude Code; I noticed this happening across multiple sessions with the same thinking effort settings. There was clearly some underlying, unpublished change that made the model get stuck in thinking loops for longer and more often, without any escape hatch to stop and prompt the user for additional steering when it gets stuck.
perks_12: Just give us the option to get the quality back, Anthropic. I get that even a $200 subscription may not be sustainable eventually, but give us the option to subscribe to a $1000 tier, or tell us to use the API tier. Just give us some consistency.
DonHopkins: Calling out sociopaths is not virtue signaling. You need to look in the mirror if you think there's something wrong with that kind of virtue.
You know you can just google his name yourself, don't you?
marcus_cemes: > We need open weights companies now more than ever.
If your objective is to democratize AI, sure. But those fed up with it, and with the devastating effects it's having on students, for example, can opt to actively avoid paying for products with AI (I say this as someone who uses it every day, guilty). At some point large companies will see that they're bleeding money for something that most people don't seem to want, and cancel those $100k/mo deals. I've already seen one company that pivoted to AI development crash and burn.
Personally, I don't think this LLM-based AI generation will have any significant positive impact. Time, energy (CO2) and money would have been far better spent elsewhere.
KronisLV: You'd think they would have dashboards for all of this stuff, to easily notice any change in metrics and be able to track down which release was responsible for it.
HauntingPin: They probably do, then they pipe it into a bunch of Claude subagents and then you get the current mess.
ares623: AGI finding bugs again. Actual Guys/Gals Instead.
PunchyHamster: Why would any company release open weights once the investment money stops?
Releasing open weights has basically been a PR move; the moment those companies need to actually make money, they will cut it out, as it reduces their client base.
They DO NOT want you to run AI. They want you to pay them to do it.
hirako2000: Good read on the situation. It all boils down to a brilliant but extremely expensive technology, both to build and to run. We've been sold a product with heavy subsidy. The idea (from Sam): scale out and see what happens.
Those who care to read between the lines can see what's happening: a perfect storm of demand that attracts VCs who can't understand that they are the real customers. Once they understand that, it will be too late.
Regarding open weight models: eventually we will, as humanity, benefit from the astronomical capital poured into developing a technology ahead of its time. In a few years this and even more will run on the edge, written by open source developers, likely former OpenAI and Anthropic employees with so much cash in the bank that they don't need to worry about renting out their knowledge.
adahn: I've seen the point raised elsewhere that this could be the double usage promo that was available from the 13th of March to the 28th, i.e. people getting used to the promo and then feeling the impact when it finished. Although it seems that enterprise wasn't included, so maybe not in your case.
https://support.claude.com/en/articles/14063676-claude-march...
javawizard: The trouble with that argument, though, is that it works the other way as well: how do I, a random internet citizen, know that you're not doing the same thing for Anthropic with this comment?(FWIW I have definitely noticed a cognitive decline with Claude / Opus 4.6 over the past month and a half or so, and unless I'm secretly working for them in my sleep, I'm definitely not an Anthropic employee.)
PunchyHamster: Stop thinking billion-dollar publicly traded companies are "cool" just because they make a widget you like.
You will be backstabbed.
You will be squeezed for all they can.
And you will be betrayed.
> Phase N: People can't afford computing anymore. Everything is thin clients and rented. It's become like the private railroad industry. End of the PC era. Like kids growing up on smartphones, there's nothing to tinker with anymore. And certainly no gradient for entrepreneurship for once-skilled labor capital.
Thankfully none of them actually makes money, and they just run on investment, so there is a good chance the bubble will pop and the price of PC equipment will... continue to rise as the US gives up Taiwan to China.
PunchyHamster: Both can be a thing at the same time.
PunchyHamster: Well, how entirely expected. The money man comes to collect, and they are squeezing for money.
magic_hamster: > End of the PC era, there's nothing to tinker with anymore. And certainly no gradient for entrepreneurship for once-skilled labor capital.
This one seems too far-fetched. Training models is widespread. There will always be open weight models in some form, and if we assume there will be some advancements in architecture, I bet you could also run them on much leaner devices. Even today you can run models on Raspberry Pis. I don't see a reason this will stop being a thing; there will be plenty of ways to tinker.
However, keep in mind the masses don't care about tinkering and never have. People want a ChatGPT experience, not a PyTorch experience. In essence this is true for all tech products, not just AI.
hirako2000: Judging from the number of GitHub issues on Anthropic's repos, shamelessly dismissed as "fixed", I doubt OpenAI needs bots to tarnish that competitor.
andai: Well, off the top of my head:
- Banning OpenClaw users (within their rights, of course, but bad optics)
- Banning 3rd party harnesses in general (ditto). (claude -p still works on the sub, but I get the feeling that if I actually use it, I'll get my Anthropic acct. nuked. Would be great to get some clarity on this. If I invoke it from my Telegram bot, is that an unauthorized 3rd party harness?)
- Lowering reasoning effort (and then showing up here saying "we'll try to make sure the most valuable customers get the non-gimped experience" (paraphrasing slightly xD))
- Massively reduced usage (apparently a bug?). The other day I got 21x more usage spend on the same task for Claude vs Codex.
- A very sharp drop in response length in the Claude app. I asked Claude about it and it mentioned several things in the system prompt related to reduced reasoning effort, keeping responses as brief as possible, etc.
It's all circumstantial, but everything points towards "desperately trying to cut costs".
I love Claude and I won't be switching any time soon (though with the usage limits I'm increasingly using Codex for coding), but it's getting hard to recommend it to friends lately. I told a friend "it was the best option, until about two weeks ago..." Now it's up in the air.
PunchyHamster: No, but it's very funny, I'm gonna call people that offshore their thinking to LLM "AI cucks" now
PunchyHamster: Caching LLM context is not like caching normal content: the longer the TTL, the more beneficial it is, and it only stops being worth it when the user ends the current session. So you'd need some adaptive algorithm to decide when to keep caching and when to purge the whole thing, possibly on the client side. But if you give the client control, people will use as much cache as possible just to chase diminishing returns, so fine-grained control here isn't all that easy. Another option is a cache size per account, purged intelligently instead of relying just on TTL.
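[A hypothetical sketch of the adaptive, client-side policy described above: pick the TTL per request from the observed cadence of the user's turns. Every name here is invented for illustration; nothing in this block is an Anthropic API.]

```python
# Hypothetical client-side heuristic: choose between a cheap short
# cache TTL and an expensive long one based on recent gaps between
# a user's turns. Illustrative only; all names are invented.
import time

class TtlPolicy:
    """Pick a per-request cache TTL from recent turn cadence."""

    def __init__(self, short="5m", long="1h", threshold_s=240):
        self.short = short
        self.long = long
        self.threshold_s = threshold_s  # gaps above this favor "1h"
        self.last_turn = None
        self.gaps = []

    def choose(self) -> str:
        now = time.monotonic()
        if self.last_turn is not None:
            self.gaps.append(now - self.last_turn)
            self.gaps = self.gaps[-10:]  # sliding window of recent gaps
        self.last_turn = now
        if not self.gaps:
            return self.short  # no history yet: default to cheap writes
        avg_gap = sum(self.gaps) / len(self.gaps)
        # Slow-cadence sessions keep missing a 5m cache, so the higher
        # write premium of the 1h TTL pays for itself.
        return self.long if avg_gap > self.threshold_s else self.short
```

[As the comment notes, though, shipping this client-side hands users the knob, and the provider would rather make that call from aggregate load data.]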
onion2k: I use Codex at home and Opus at work. They're both brilliant.
iLoveOncall: Oh, it's pretty clear to me that Anthropic employs the same tactics and uses bots on socials to push its products too. On Reddit a couple of months ago it was simply unbearable, with all the "Claude Opus is going to take all the jobs".
You definitely shouldn't trust me, as we're way beyond the point where you can trust ANYTHING on the internet that has a timestamp later than 2021 or so (and even then, of course, people were already lying).
Personally I use Claude models through Bedrock because I work for Amazon, and I haven't noticed any decline. Instead it's always been pretty shit, and what people now describe as the model getting lost in infinite loops of talking to itself has happened since the very start for me.
babaganoosh89: It's not just you, there is a github issue for it: https://github.com/anthropics/claude-code/issues/42796
zazibar: A month ago the company I work at, with over 400 engineers, decided to cancel all IDE subscriptions (Visual Studio, JetBrains, Windsurf, etc.) and move everyone over to Claude Code as a "cost-saving measure" (along with firing a bunch of test engineers). There was no migration plan; the EVP of Technology just gave a demo showing 2 greenfield projects they'd built with Claude Opus over a weekend and told everyone to copy how he worked. A week later the EVP had to send out an email telling people to stop using Opus because they were burning through too many tokens.
Claude seems to have been getting nerfed every week since we switched. I wonder how our EVP is feeling now.
babaganoosh89: There's a github issue for this: https://github.com/anthropics/claude-code/issues/42796
matheusmoreira: Yes, I commented on it and applied all remedies suggested: https://news.ycombinator.com/item?id=47664442
Configuration and environment variables seem to have improved things somewhat but it still seems to be hit or miss.