Discussion
Alexey On Data
grizmaldi: > "Instead of going through the plan manually, I let Claude Code run terraform plan and then terraform apply"

It doesn't matter whether it was you or the bot running terraform; the whole point of the two-step process is to confirm the plan looks right before executing the apply. Looking at the plan after the apply is already running is insane.
testplzignore: You should always use Object Lock with compliance mode on your S3 backups. Always.
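For Terraform-managed buckets, the guard testplzignore describes might look something like this sketch (bucket name and the 30-day retention window are placeholder assumptions; resource names follow AWS provider v4+ conventions):

```hcl
# Versioning plus Object Lock in COMPLIANCE mode: nobody, not even the
# account root, can delete or overwrite locked object versions before
# the retention period expires.
resource "aws_s3_bucket" "backups" {
  bucket              = "example-db-backups" # hypothetical name
  object_lock_enabled = true                 # must be set at creation time
}

resource "aws_s3_bucket_versioning" "backups" {
  bucket = aws_s3_bucket.backups.id
  versioning_configuration {
    status = "Enabled" # Object Lock requires versioning
  }
}

resource "aws_s3_bucket_object_lock_configuration" "backups" {
  bucket = aws_s3_bucket.backups.id
  rule {
    default_retention {
      mode = "COMPLIANCE" # vs. GOVERNANCE, which privileged users can bypass
      days = 30           # assumed retention window
    }
  }
}
```

With this in place, even an agent with full credentials running a destroy cannot purge backups until the retention clock runs out.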
kami23: Props for sharing this!

> Claude was trying to talk me out of it, saying I should keep it separate, but I wanted to save a bit because I have this setup where everything is inside a Virtual Private Cloud (VPC) with all resources in a private network, a bastion for hosting machines

I'll admit that I've also ignored Claude's very good suggestions in the past and it has bitten me in the butt. Ultimately, with great automation comes a greater risk of doing the worst thing possible even faster. Just thinking about this specific problem makes me keen to recommend that people keep their backups and their production data behind two different access keys in their Terraform setups. I'm not sure how difficult that is; I haven't touched Terraform in about 7 years now. Wow, how time flies.
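kami23's idea of separate credentials for backups isn't too hard in current Terraform: a provider alias can point backup resources at a different set of credentials than the rest of the stack. A minimal sketch (the profile and bucket names are hypothetical):

```hcl
# Default provider: the credentials the agent normally uses.
provider "aws" {
  region = "us-east-1"
}

# Aliased provider backed by a separate AWS profile whose IAM policy
# allows writing backups but denies s3:DeleteObject and rds:DeleteDBSnapshot.
provider "aws" {
  alias   = "backups"
  region  = "us-east-1"
  profile = "backups-admin" # assumed profile name in ~/.aws/credentials
}

# Backup resources opt in to the restricted credentials explicitly.
resource "aws_s3_bucket" "db_backups" {
  provider = aws.backups
  bucket   = "example-db-backups" # hypothetical name
}
```

A `terraform destroy` run with only the default credentials would then fail on the backup resources instead of silently removing them.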
jpalmer: If you delete all your backups, AWS maintains shadow backups they can restore? Is that right?
Ciantic: I've used Claude and AWS CDK to build infra code over the past year. It's a great help, but it is not to be trusted. I would not even consider it for Ralph Wiggum Loop-style iteration, let alone allow it to run `cdk deploy` or `cdk destroy`. It can generate decent-looking constructs, but it makes up values for you, like serverlessV2MinCapacity, and sometimes it creates resources I don't need. It can end up costing a lot if you then deploy something you didn't expect to. Since running destroy and deploy also takes a long time, gets stuck, throws weird errors, etc., one still needs to read the docs for many things and understand the constructs it outputs.
sealthedeal: You should never let Claude manage data in this way. You should, if anything, have Claude come up with a plan that you manually execute. I get why you would go this path, but it's pure laziness, and in any normal environment where you weren't the owner, you would be terminated and potentially sued for negligence.
jopsen: There will probably be some yolo startups that deploy write-only code to production with unreviewed terraform plans -- who knows, this could be disruptive -- but I'm also certain this won't be the last such story.

All that being said: it's kind of sad, because Terraform is fairly declarative and the plans are fairly high-level. Hence, terraform files and plans are exactly the stuff you should review. Whereas a bunch of imperative code implementing CRUD with a fancy UI might be the kind of spaghetti code that's hard to review.
bombcar: Shoot first and ask questions later! Measure once and cut thrice!
Eddy_Viscosity2: More like 'Shoot yourself first and then complain about it later!'
wackget: How many users does this website have? It must be relatively tiny. Why the hell is this anywhere near AWS, or Terraform, or any other PaaS nonsense? I'd wager this thing could be run off a $5 VPS with 30 minutes of setup.
fny: Even though a lot of what people do with agents is reckless, they often build their own guillotine in the process too.

Problem #1: He decided to shoehorn two projects into one even though Claude told him not to.
Problem #2: Claude started creating a bunch of unnecessary resources because another archive was unpacked. Despite his "terror", the author let Claude continue and did not investigate.
Problem #3: He approved "terraform destroy", which obviously nukes the DB! It's clear he didn't understand that, and he didn't even have a backup!

> That looked logical: if Terraform created the resources, Terraform should remove them. So I didn’t stop the agent from running terraform destroy
hnthrow0287345: Surely more and harder leetcode interviews will prevent this from happening
01284a7e: I'm cool with blogging about your fuck-ups, but honestly, not really. Is "I'm incompetent" a good content strategy? Your product is a thousand bucks a year. I'm not going near it. But that's just me?
jvolkman: Back in my day, we didn't need AI to accidentally drop production databases.
otterley: The author is extremely lucky that support was able to find a snapshot for him after he deleted them all. I worked for AWS for many years and was a customer for years before that, and they were almost never able to recover deleted customer data. And this is on purpose: when a customer asks AWS to delete data, they want to assure the customer that it is, in fact, gone. That’s a security promise. So the fact that they were able to do it for the author is both winning the lottery and, frankly, a little concerning.

What bothers me more is that the Terraform provider is deleting snapshots that are related to, but not part of, the database resource itself. Once a snapshot is made, it's supposed to be decoupled from the database for infrastructure management purposes. That needs to be addressed IMO.

And yes, this is an object lesson in why human-in-the-loop is still very much needed to check the work of agents that can perform destructive actions.
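Until the provider behavior otterley describes changes, the damage can at least be limited with guard rails on the database resource itself. A hedged sketch (identifiers and sizing are placeholders, not the author's actual setup):

```hcl
resource "aws_db_instance" "prod" {
  identifier        = "prod-db"       # hypothetical
  engine            = "postgres"
  instance_class    = "db.t4g.micro"  # assumed sizing
  allocated_storage = 20
  username          = "app"
  manage_master_user_password = true

  # API-level guard: AWS itself refuses DeleteDBInstance calls.
  deletion_protection = true

  # If a delete does go through, force one last snapshot first.
  skip_final_snapshot       = false
  final_snapshot_identifier = "prod-db-final" # hypothetical name

  # Plan-level guard: any plan that would destroy this resource errors out.
  lifecycle {
    prevent_destroy = true
  }
}
```

With `prevent_destroy`, a `terraform destroy` fails during planning rather than at apply time, so even an agent approving its own prompts hits a hard stop.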
yibers: Having customers delete all their data by mistake and then try to recover it happens more often than you think. It has become common practice to soft-delete at first. Usually a hard delete is performed 30 days later.
otterley: Oh, I know it happens. Over the years AWS has added functionality across various services to help prevent accidental deletion, but absent some documented behavior to the contrary, when a customer confirms that data is to be deleted, AWS is supposed to make that data completely inaccessible by anyone, including AWS themselves.
gneray: Everyone here firing shots at this guy should try holding their tongues.You/we are all susceptible to this sort of thing, and I call BS on anyone who says they check every little thing their agent does with the same level of scrutiny as they would if they were doing it manually.
stackskipton: SRE here. Why would you let your AI run "tofu plan" for you vs doing it on your own? This is an example of someone who has let AI do too much of their "thinking" for them, and it's led to brain rot.
otterley: Having the agent autonomously perform the plan stage is fine; that’s not destructive. It’s the blind application stage without human validation or other safety checks that is the problem.
stackskipton: I mean, apply is not destructive without a human in the loop if you don't pass in -auto-approve. In any case, I think spending a few seconds typing into your terminal to get yourself into human-review mode is a mindset improvement, even if it's not 100% speed-optimal.
otterley: Agents are perfectly capable of responding to confirmation prompts. The auto approve flag requirement won’t stop a determined agent if it concludes that’s what the principal desires.
nusl: Bit of a story of negligence, ignorance, and laziness. I can't say I have much sympathy. There were multiple points where they could have intervened and chose not to. Good story of what not to do, though.
bigstrat2003: I'm not susceptible to it because I am not foolish or lazy enough to give the clanker access to my command line. Anyone who does that is begging for trouble and I'm not gonna have much sympathy when they get bitten.
paulddraper: So…the less crucial the system, the closer to the metal?
oneneptune: I think people will be quick to engage with the "ai is risky" angle, but the thing that jumps out to me is that you were working against a production state in the first place.The agent made a mistake that plenty of humans have made. A separate staging environment on real infrastructure goes a long way. Test and document your command sequence / rollout plan there before running it against production. Especially for any project with meaningful data or users.
paulddraper: AI is like explosions.There’s a lot of other ways to die, but that one is the most exciting.
UltraSane: I'm amazed at how some people are willing to tell the world about making incredibly stupid mistakes like this. The user he was using should NOT have had delete permissions.
HackerThemAll: Again the same crying dev baby that did not make backups, blaming AI on the issue. Idiocracy is happening right before our eyes.
yomismoaqui: Oh, the missing Terraform state file. I haven't used Terraform in anger, but when I experimented with it I was scared of exactly the scenario that happened to the original poster. I thought, "it's a footgun, but sure, I won't execute commands blindly like that" -- but in the world of clankers, it seems this can happen easily.
mrweasel: In a previous job we used Terraform pretty heavily. I never got good at it, because it felt confusing, dangerous, and unnecessarily complicated for our use. More than once we saw that Terraform wanted to delete critical, stateful resources. I get that the state file is probably some form of optimization, but it seems like a fairly broken concept. A friend of mine still uses Terraform daily, and probably weekly he encounters Terraform wanting to do stupid shit. Honestly, if I never have to use Terraform again, I'd be pretty happy.
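For what it's worth, the "Terraform wants to delete my stateful resource" case has gotten a dedicated escape hatch: since Terraform 1.7, a `removed` block tells Terraform to drop a resource from state without destroying the real infrastructure. A minimal sketch with a hypothetical resource address:

```hcl
# Stop managing the database without touching the actual instance.
# Terraform forgets it from state; the RDS instance keeps running.
removed {
  from = aws_db_instance.prod # hypothetical address

  lifecycle {
    destroy = false # forget, don't delete
  }
}
```

This is the declarative equivalent of `terraform state rm`, and it shows up in the plan output, so a reviewer can see that the resource is being forgotten rather than destroyed.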
8note: I've only used CloudFormation, but things like deletion protection and the hug of death are quite nice to have for making things feel safer. At least with my organization into separate stacks for {network, data, and compute}, CloudFormation would refuse to delete the database until you first tore down the API that uses it, and while that would still cause an outage, you don't lose data before knowing something is wrong.