Discussion
Identity verification on Claude
megous: No. At least until there are actual KYC laws for LLM access in my country...
throwatdem12311: Why do companies keep working with Persona even though they have proven time and time again to be untrustworthy?
dinoqqq: It's worrying that they don't specify in which cases they require identity checks.
benterix: Identity verification to use an API?? And via Persona? I can't say if it's real. But if they really try to enforce that, I guess goodbye Anthropic forever.
finghin: They were all the same from the beginning. Every tech company of a certain size and significance eventually begins collecting data and sharing it with state actors, as far as I can see.
nannal: The under-18 detection is also error-prone. It seems simpler to me to initiate a GDPR data request, archive it, and then make a new account.
zoobab: Time to set up my own local LLM.
llm_nerd: Just a few days ago, on Friday, my 15-year-old son had his Claude account suspended with a demand for ID to prove he is 18 or older. He had his own Claude Max subscription (he out-earns me fairly frequently in his circle of gaming programmers), and was unaware Anthropic had a must-be-18 rule, as was I. Their email said "Our team found signals that your account was used by a child. This breaks our rules, so we paused your access to Claude." So I guess if you ever ask a question that seems to originate from a teen or younger, expect to hit an ID gate.

So now he's a Codex user. OpenAI and Google both have a minimum age of 13.
trollbridge: ... so let me understand this. It is frequently said that programming directly is obsolete, and the skill you must have now is knowing how to operate agentic AIs. Yet you aren't allowed to do this until you're 18. So, developing software is now 18+ only?
llm_nerd: It seems out of step and foolish, and the cynic in me says that Anthropic has a side hustle of identity harvesting and is looking for justifications, but on the flip side, there is a real risk of pearl-clutching if a child ever uses AI, and maybe Anthropic just wants to steer clear of all of that. Though simply putting it in the ToS should be sufficient legal shielding, and the idea that they're harvesting chats to age-fingerprint conversations seems dubious.
LoganDark: Persona is bad news. They should not be using Persona. This is bad.

> Your ID and selfie are collected and held by Persona, not on Anthropic's systems. Anthropic can access verification records through Persona's platform when needed—for example, to review an appeal—but we don't copy or store those images ourselves.

They persist this data. That's unacceptable.

> Persona is contractually limited in how they can use your data: only to provide and support verification and to improve their ability to prevent fraud. They're bound to protect it with industry-standard security controls and delete it in line with the retention limits we've set and applicable law.

It's good to hear that they're criminals. That means nothing for me though. Nothing.
helsinkiandrew: The "Why did my account get banned after verification?" section gives some reasons:

- Repeated violations of our Usage Policy
- Account creation from an unsupported location
- Terms of Service violations
- Under-18 usage
Mordisquitos: Those are reasons for banning after verification, not reasons for requesting identity verification in the first place.
helsinkiandrew: Wouldn't the reasons for requesting identification be the same as those for banning people - the system has flagged that you might be from the wrong location/under 18/creating multiple free accounts etc. - so it's validating.
square_usual: > It is frequently said that programming directly is obsolete

Who says this?
Kim_Bruning: This is highly problematic.

I may consider showing my ID to a company I already have a business relationship with, given demonstrable legal obligations, contractual necessities, legitimate interests, etc. - e.g. the standard GDPR list.

I do have an existing business relationship with Anthropic, so I might under some circumstances decide to show them my ID. I don't have a business relationship with Persona, though.

I understand the instinct: they want to insulate themselves from holding PII. Not the worst idea. But bringing in the third party might not be the solution they're looking for.
FpUser: In the old USSR one had to register a typewriter. Sweet memories. And at that time Western people (deservedly) laughed at it or used facts like this to show how backwards the country was.
varispeed: It's okay when a corporation does it...
varispeed: This is deranged. Say you wanted to use AI to prepare a whistleblowing submission, to get the regulatory language right and test for any weak points. Then Claude flags it and requires you to identify yourself. It's not a stretch of the imagination that before you manage to send the bundle, you find yourself in a suitcase somewhere in the woods. People explore all kinds of sensitive stuff, and I can see that it is tempting for AI companies to see the exact person behind it, and then it takes one disgruntled employee to put lives in danger. WTF
trollbridge: Due to zero consequences of that untrustworthiness.
Kim_Bruning: I think minimal opsec here would suggest you not share your data with a random corporation in the USA.
LoganDark: They request ID for bans so that they can ban you personally. ID checks may as well be a sign that you've already been banned and they're fishing for ways to make the ban harder to evade. Venmo does the same thing.
mothballed: Maybe Anthropic just likes creating a market for dark identities. Because that's the most likely effect of such stupidity: generating more ID theft victims with no change to services for criminals.
LoganDark: Is a "dark identity" one that's never been shared with an identity-theft-as-a-service? Or is it just one that's (supposed to be) privacy-conscious (and wouldn't otherwise have been an easy victim)?
daliusd: I can guess at least one valid reason:

* preventing actors from North Korea, China, Russia, Iran, etc. from accessing the service. They absolutely use workarounds to access AI; e.g. I bet there are companies who act as a proxy between Anthropic and those countries.

I imagine there will be quite a few false positives while identifying those.
a2128: This will do absolutely nothing to prevent those actors from accessing Claude... they already recruit young unemployed Americans to do proxy job interviews [0][1], etc. They'll just pay young unemployed Americans to do verification for them.

[0] https://www.tradingview.com/news/cointelegraph:6192f38e3094b...

[1] https://youtube.com/watch?v=QebpXFM1ha0
mothballed: This reminds me of when I was homeless and could not open a bank account because I had no proof of address, which was required for KYC. A criminal would spend 5 minutes in Photoshop and produce a utility document. The end result is that these sorts of things mostly filter out honest users while doing nothing to criminals. It is a performative form of self-flagellation and job creation for compliance bureaucrats.
pajamasam: Anthropic says they may not train their models using your data, but apparently Persona (the service they will use for identity verification) WILL, according to https://thelocalstack.eu/posts/linkedin-identity-verificatio...

Persona also might send your data to 17 different subprocessors (16 if you exclude Anthropic itself).
redbell: > Persona also might send your data to 17 different subprocessors

You reminded me of this submission from two months ago: I verified my LinkedIn identity. Here's what I handed over (https://news.ycombinator.com/item?id=47098245)
Imustaskforhelp: Recently, a few days back, I had to verify my LinkedIn identity on a new account (I am 17, for context). I used Proton Mail, and LinkedIn immediately blocked the account and asked for verification.

I legally couldn't verify because Persona doesn't detect the Aadhaar card, and their support system on Twitter/mail/whatever was incredibly bad, so much so that it felt like copy-paste, and I still haven't gotten the card. I have written about my experience too.

https://smileplease.mataroa.blog/blog/linkedin/ : (the title of this is) LinkedIn's "final decision", restricting my account, making me feel unheard, Persona being Persona & the time I asked LinkedIn support what 351/13 is to prove if they are human or not.
Scaled: And after those laws, VPN to a free country and download local models. Never give in to the panopticon.
Kim_Bruning: Qwen3 runs locally on reasonable hardware, and is comparable to a mid-2025 Claude Sonnet (albeit possibly rather slower). Local models are chasing the online frontier models pretty hard. So worst case, that's the fallback (FWIW, YMMV).
HWR_14: What's "reasonable hardware"?
Borealid: A machine with 128GB of unified system RAM will run reasonable-fidelity quantizations (4-bit or more).

If you ever want to answer this type of question yourself, you can look at the size of the model files. Loading a model usually uses an amount of RAM around the size it occupies on disk, plus a few gigabytes for the context window. Qwen3.5-122B-A10B is 120GB. Quantized to 4 bits it is ~70GB. You can run a 70GB model in 80GB of VRAM or 128GB of unified normal RAM.

Systems with that capability cost a small number of thousand USD to purchase new.

If you are willing to sacrifice some performance, you can take advantage of the model being a mixture-of-experts and use disk space to get by with less RAM/VRAM, but inference speed will suffer.
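For a quick sanity check, here's that arithmetic as a rough Python sketch (the parameter count is the one above; the overhead constant is just a guess):

    # Rough back-of-the-envelope estimate of RAM/VRAM needed to load a model.
    # Assumes resident size ~= parameter count * bits per weight / 8,
    # plus a few GB of overhead for the context window / KV cache.

    def estimated_load_gb(params_billions: float, bits_per_weight: int,
                          context_overhead_gb: float = 4.0) -> float:
        weights_gb = params_billions * 1e9 * bits_per_weight / 8 / 1e9
        return weights_gb + context_overhead_gb

    for bits in (16, 8, 4):
        print(f"122B model @ {bits}-bit: ~{estimated_load_gb(122, bits):.0f} GB")
    # -> ~248 GB, ~126 GB, ~65 GB: hence a 4-bit quantization fitting
    #    in 80GB of VRAM or 128GB of unified RAM.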
Sol-: I figured they already have your identity via the payment process. Not like you can do anything (risky or not) via the free tier.
duskdozer: > untrustworthy

For the user, sure. But for companies and governments? I'm pretty sure Persona is quite trustworthy.
wheybags: Basically the only relevant question, and it's the one they didn't answer
duskdozer: > Say you wanted to use AI to prepare a whistleblowing submission, to get the regulatory language right and test for any weak points.

Why would you do this? If you can't write it yourself, you're just sabotaging your effort once the hallucinations are revealed. Secondly, a whistleblower is going to use a corporate LLM provider? Even without ID checks, that's an extremely uncompensated risk.
Wowfunhappy: That sounds likely to increase their costs and create new opportunities to get caught. Not a silver bullet but not "absolutely nothing". Like how anti-money laundering laws don't wipe out all crime, but are still worthwhile.
Someone1234: People have tried to run Qwen3-235B-A22B-Thinking-2507 on 4x $600 used Nvidia 3090s with 24 GB of VRAM each (96 GB total), and while it runs, it is too slow for production grade (<8 tokens/second). So we're already at $2400 before you've purchased system memory and CPU, and it is still too slow for a "Sonnet equivalent" setup...

You can quantize it of course, but if the idea is "as close to Sonnet as possible," then while quantized models are objectively more efficient, they are sacrificing precision for it.

So the next step, in order to up the speed, is 4x $1300 Nvidia 5090s with 32 GB of VRAM each (128 GB), or $5,200 before RAM/CPU/etc. All of this additional cost to increase your tokens/second without lobotomizing the model. This still may not be enough.

I guess my point is: you see this conversation a LOT online. "Qwen3 can be near Sonnet!" But then when asked how, instead of giving you an answer for the true "near Sonnet" model per benchmarks, they suddenly start talking about a substantially inferior Qwen3 model that is cheap to run at home (e.g. 27B/30B quantized down to Q4/Q5).

The local models absolutely DO exist that are "near Sonnet." The hardware to actually run them is the bottleneck, and it is a HUGE financial/practical bottleneck. If you had a $10K all-in budget, it isn't actually insane for this class of model, and the sky really is the limit (again, to reduce quantization and/or increase tokens/second).

PS - And electricity costs are non-trivial for 4x 3090s or 4x 5090s.
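To put numbers on those two configs, a small sketch (prices and VRAM are the figures from this comment; the ~10 GB slack for KV cache and runtime overhead is a guess):

    # Does a ~235B-parameter model fit in VRAM at a given quantization?
    # GPU prices/VRAM are the assumed figures from above.

    CONFIGS = {
        "4x used 3090": {"cost_usd": 4 * 600,  "vram_gb": 4 * 24},  # 96 GB
        "4x 5090":      {"cost_usd": 4 * 1300, "vram_gb": 4 * 32},  # 128 GB
    }

    def weights_gb(params_b: float, bits: int) -> float:
        return params_b * bits / 8  # billions of params -> GB of weights

    for name, cfg in CONFIGS.items():
        for bits in (8, 4):
            need = weights_gb(235, bits) + 10  # ~10 GB slack, a rough guess
            fits = "fits" if need <= cfg["vram_gb"] else "does NOT fit"
            print(f'{name} (${cfg["cost_usd"]}, {cfg["vram_gb"]} GB): '
                  f'235B @ {bits}-bit needs ~{need:.0f} GB -> {fits}')

Even at 4-bit (~128 GB with slack), the 3090 box has to spill to system RAM, which is plausibly where the <8 tokens/second comes from.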
Kim_Bruning: I may have genuinely new data for you.

Qwen3.5-35B-A3B is reported to perform slightly better than the model you mentioned.

It runs fine on a single 3090 with 131072 tokens of context (or even twice that, but I wanted some VRAM left over), and due to the hybrid attention architecture, the cost of context scales rather less drastically than ctx^2. I've had friends with smaller cards still getting work out of it. Generation is at around 20 tokens/sec on that 3090. You'll need enough DRAM to hold the bits of the model that don't fit. Nothing to write home about, but genuinely usable in a pinch or for tasks that don't need immediate interactivity.

It's the first local model that passes my personal kimbench usability benchmark, at least. Just be aware that it is extremely verbose in thinking mode. Seems to be a Qwen thing.
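To illustrate why that much context fits, here's a toy sketch of KV-cache sizing when most full-attention layers are swapped for sliding-window ones. Every hyperparameter here is invented for illustration; the point is only the growth curve:

    # Toy KV-cache sizing: sliding-window layers cap their cache at the
    # window size, so total cache grows much more slowly with context
    # than in an all-full-attention stack. All hyperparameters made up.

    def kv_cache_gb(ctx: int, layers: int, full_attn_layers: int,
                    window: int = 4096, kv_heads: int = 8,
                    head_dim: int = 128, bytes_per_val: int = 2) -> float:
        per_tok = 2 * kv_heads * head_dim * bytes_per_val  # K and V, per layer
        full = full_attn_layers * ctx * per_tok
        sliding = (layers - full_attn_layers) * min(ctx, window) * per_tok
        return (full + sliding) / 1e9

    for ctx in (8192, 32768, 131072):
        hybrid = kv_cache_gb(ctx, layers=48, full_attn_layers=8)
        dense = kv_cache_gb(ctx, layers=48, full_attn_layers=48)
        print(f"ctx={ctx:>6}: hybrid ~{hybrid:.1f} GB vs all-full ~{dense:.1f} GB")

At 131072 tokens that's roughly 5 GB versus 26 GB with these made-up numbers, which is the difference between fitting alongside the weights on a 24 GB card and not.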
fy20: If you want something off the shelf, get a MacBook Pro M5 (base "Pro" CPU) with 48GB RAM:

Gemma 4 31B Q6: 9 tok/s. I'd say it is smarter than GPT-4o, but yeah, it's slow. Good for coding.

Gemma 4 26B A4B Q4: 50 tok/s. Feels faster than ChatGPT 5.4, but not as smart (as it reasons less). Good for general chatting and research.