Discussion
Learn Claude Code by doing, not reading.
mrtksn: Are people again learning a new set of tools? Just tell the AI what you want; if the AI tool doesn't allow that, then tell another AI tool to make you a translation layer that converts natural language to the commands, etc. What's the point of learning yet another tool?
faeyanpiraat: I cannot decipher what you mean; have you mixed up the tabs and wanted to post this somewhere else? The linked site is a pretty good interactive Claude tutorial for beginners.
mrtksn: Nope, why would anybody type commands to a machine that does natural language processing? Just tell the thing what you want.
npilk: Strongly agree with the sentiment, but I'd say if you're familiar with the terminal you may as well just install it and truly 'learn by doing'! I could see this being great for true beginners, but for them it might be nice to have even more basics to start (how do I open the terminal, what is a command, etc.).
cyanydeez: I think somewhere between 2016 and 2026 the market realized that programmers _love_ writing tools for themselves and others, and it went full bore into catering to the Bike Shedding economy, and now AI is accelerating this to an absurd degree.
Yiin: find your level -> answer D to everything -> you're a beginner! And I thought I had high standards...
faeyanpiraat: Yes, but you gotta learn what is possible. I wouldn't have thought to tell the machine to compact its context if I didn't know it has context that can be compacted, right?
mrtksn: Good point, but IMHO the learning material for this should be the basics of LLM.
grewil2: Side note: I don’t know what Anthropic changed but now Claude Code consumes the quota incredibly fast. I have the Max5 plan, and it just consumed about 10% of the session quota in 10 minutes on a single prompt. For $100/month, I have higher expectations.
landr0id: Relevant: https://www.reddit.com/r/ClaudeAI/comments/1s7zgj0/investiga... https://www.reddit.com/r/ClaudeAI/comments/1s7mkn3/psa_claud...
no1youknowz: I've been jumping from Claude -> Gemini -> GPT Codex. Both Claude and Gemini really reduced quotas, so I cancelled. Only subbed GPT for the special 2x quota in March, and now my allocation is done as well. I decided to give opencode a try today. It's $5 for the first month. Didn't get much success with Kimi K2: overly chatty, built too complex solutions, burned 40% of my allocation and nothing worked. ¯\_(ツ)_/¯. But Minimax m2.7. Wow, it feels just like Claude Opus 4.6. Really has serious chops in Rust. Tomorrow/Wednesday I'll try a month of their $40 plan and see how it goes.
MeetingsBrowser: I use claude code every day, I've written plugins and skills, use MCP servers, subagent workflows, and filled out the "Find your level" quiz as such.According to the quiz, I am a beginner!
nickphx: Why would anyone want to "learn" how to use some non-deterministic black box of bullshit that is frequently wrong? When you get different output for the same input, how do you learn? How is that beneficial? Why would you waste your time learning something that is frequently changing at the whims of some greedy third party? No thanks.
sznio: I don't understand the purpose of a tutorial for a natural language ai system.
rco8786: Claude Code is a tool that uses natural language ai systems. It itself is not a natural language ai system.
ForHackernews: Because you will soon be working for it unless you learn to make it work for you.
conception: I noticed 1M context window is default and no way not to use it. If your context is at 500-900k tokens every prompt, you’re gonna hit limits fast.
aberoham: export CLAUDE_CODE_DISABLE_1M_CONTEXT=1
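(Editor's note: to persist the variable above rather than exporting it per shell, Claude Code settings files accept an env map; worth verifying the key name against the current docs. A sketch of a project-level `.claude/settings.json`:)

```json
{
  "env": {
    "CLAUDE_CODE_DISABLE_1M_CONTEXT": "1"
  }
}
```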
victorbjorklund: Minimax 2.7 is great. Not close to Claude but good enough for a lot of coding tasks.
arbitrary_name: sounds like you might benefit from a tutorial!
i_love_retros: It's fucking insane that we all have to pay rent every month to an AI company just to keep doing our jobs.
nice_byte: you literally don't have to. you can literally just keep doing your job the way that you always have.
Adam_cipher: The quota problem is partly a context problem. Claude Code with 1M context re-reads your entire project structure every turn. If your agent already knows "this file handles auth, that module does payments" from previous sessions, it doesn't need to rediscover it. I've been running an autonomous agent 24/7 for 60 days. The unlock wasn't a bigger context window — it was external memory. The agent stores facts as it learns, retrieves only what's relevant before each action, and scores whether the retrieved context actually helped. Facts that work get promoted; wrong ones decay. Practically, this means my agent's effective context per turn dropped from ~500K tokens to ~50K, because it's not re-reading the entire codebase — just pulling the 20-30 facts it needs for the current task. Built it as an MCP server so any Claude Code session can use it: https://engram.cipherbuilds.ai
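(Editor's note: the store/retrieve/promote/decay loop described in the comment above can be sketched in a few lines of Python. This is illustrative only; `MemoryStore`, `remember`, `retrieve`, and `feedback` are made-up names for this sketch, not the actual engram API, and real systems would use embedding search rather than keyword overlap.)

```python
from dataclasses import dataclass

@dataclass
class Fact:
    text: str
    score: float = 1.0  # usefulness score: promoted when a fact helps, decayed when it doesn't

class MemoryStore:
    """Toy external memory: keyword retrieval plus score promotion/decay."""

    def __init__(self):
        self.facts: list[Fact] = []

    def remember(self, text: str) -> None:
        self.facts.append(Fact(text))

    def retrieve(self, query: str, k: int = 3) -> list[Fact]:
        # Rank by word overlap with the query, breaking ties by score,
        # so facts that have proven useful float to the top over time.
        words = set(query.lower().split())
        ranked = sorted(
            self.facts,
            key=lambda f: (len(words & set(f.text.lower().split())), f.score),
            reverse=True,
        )
        return ranked[:k]

    def feedback(self, fact: Fact, helped: bool) -> None:
        # Promote facts that helped the last action; decay ones that didn't.
        fact.score *= 1.2 if helped else 0.8

store = MemoryStore()
store.remember("auth.py handles login and session tokens")
store.remember("payments module wraps the Stripe client")
top = store.retrieve("where is login handled?", k=1)
```

The point of the sketch is the feedback loop, not the retrieval: only a handful of high-scoring facts reach the model each turn, which is where the claimed ~500K-to-~50K context reduction would come from.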
i_love_retros: I probably won't have a job for much longer if I do that, unfortunately
Wowfunhappy: I had to double check that they'd removed the non-1M option, and... WTF? This is what's in `config` → `model`:

1. Default (recommended) · Opus 4.6 with 1M context · Most capable for complex work
2. Sonnet · Sonnet 4.6 · Best for everyday tasks
3. Sonnet (1M context) · Sonnet 4.6 with 1M context · Billed as extra usage · $3/$15 per Mtok
4. Haiku · Haiku 4.5 · Fastest for quick answers

So there's an option to use non-1M Sonnet, but not non-1M Opus? Except wait, I guess that actually makes sense, because it says Sonnet 1M is billed as extra usage... but also WTF, why is Sonnet 1M billed as extra usage? So Opus 1M is included in Max, but if you want the worse model with that much context, you have to pay extra? Why the heck would anyone do that? The screen does also say "For other/previous model names, specify with --model", so maybe you can use that to get 200K Opus, but I'm very confused why Anthropic wouldn't include that in the list of options. What a strange UX decision.
jurakovic: Is that quiz correct? I answered mostly C or D and maybe a few Bs, but still got "Beginner". How?!
roxolotl: The quiz is super weird too. A-C are knowledge questions; D is something you've done.
jatora: AI comment.
nixpulvis: I think it's funny and interesting how LLMs are commoditizing information generation. It's completely expected, but also somewhat challenging to figure out what the best combination of "learning" "fact" systems is.I'd be curious to know more about how this compares to other approaches.
rzzzt: Why do I need to tell the machine to compact its context? This feels like homework and/or ceremony.
teaearlgraycold: Anthropic is not building good will as a consumer brand. They've got the best product right now but there's a spring charging behind me ready to launch me into OpenCode as soon as the time is right.
kylecazar: Would you use Opus if you switched to OpenCode?
teaearlgraycold: I'd like to use Opus with OpenCode right now to combine the best TUI agent app with the best LLM. But my understanding is Anthropic will nuke me from orbit if I try that.
fercircularbuf: I love the pedagogical approach here and the ability to easily home in on your level before diving into content. Your approach would work really well for other subjects as well.
Esophagus4: Did anyone not get beginner? I got it as well.
Uncorrelated: I responded with a mix of mostly B and C answers and got “advanced.” Yet, as pointed out by another commenter, selecting all D answers (which would make you an expert!) gets you called a beginner.I can only assume the quiz itself was vibe-coded and not tested. What an incredible time we live in.
sidrag22: There is certainly a future where this isn't the case. Learning how to use AI in your workflows will almost surely be part of any serious dev's future, but being beholden to a data center does not seem to reflect reality. Consider all the 5m-8m models and how powerful they are today compared to what the best models did 2 years ago. If you want to stay on the absolute bleeding edge model-wise, sure, you'll be stuck with a data center for some time... Why isn't this just seen as a repeat of the original birth of computers? Consider the IBM 350 (3.5 MB), rented in the '50s for thousands per month. Now I have a drawer filled with SD cards that go up to 128 GB that I can't even give away.
corford: OpenCode with a Copilot Business sub and Opus 4.6 as the model works well
htx80nerd: I continue to find the non-stop claude spam fascinating. Gemini and ChatGPT have been very good for my needs, Claude not so much. Every week, if not every day, Claude spam is all over this site. But barely a peep about Gemini or ChatGPT coding capabilities.
retrofuturism: `/model opus` sets it to the original non-1M Opus... for now.