Discussion
How We Hacked McKinsey's AI Platform
sd9: Cool but impossible to read with all the LLM-isms
sgt101: Why was there a public endpoint? Surely this should all have been behind the firewall, accessible only from a corporate device with an associated MAC address?
lenerdenator: Not exactly clear from the link: were they doing red team work for McKinsey, or is this just "we found a company we thought wouldn't get us arrested and ran an AI vuln detector over their stuff"? You'd think that the world's "most prestigious consulting firm" would have already had someone doing this sort of work for them.
cmiles8: I can only remember a McKinsey team pushing Watson on us hard ages ago. It was a total train wreck. They've long been all hype, no substance on AI, and it looks like not much has changed. They might be good at other things, but I would run for the hills if McKinsey folks want to talk AI.
vanillameow: Tiring. Internet in 2026 is LLMs reporting on LLMs pen-testing LLM-generated software.
fhd2: > This was McKinsey & Company — a firm with world-class technology teams [...]Not exactly the word on the street in my experience. Is McKinsey more respected for software than I thought? Otherwise I'm curious why TFA didn't just politely leave this bit out.
aerhardt: The LLM that wrote this simply couldn’t help itself.
codechicago277: Didn’t see it until the last paragraph, but yeah clearly drafted with at least some AI help.
paxys: > named after the first professional woman hired by the firm in 1945Going out of their way to find a woman's name for an AI assistant is not as empowering as the creators probably thought in their heads.
joenot443: > One of those unprotected endpoints wrote user search queries to the database. The values were safely parameterised, but the JSON keys — the field names — were concatenated directly into SQL.I was expecting prompt injection, but in this case it was just good ol' fashioned SQL injection, possible only due to the naivety of the LLM which wrote McKinsey's AI platform.
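For anyone who hasn't seen this anti-pattern before, here's a minimal sketch of what that kind of bug looks like (illustrative only — the table and function names are made up, not McKinsey's actual code). The values go through placeholders, but the JSON keys are spliced straight into the SQL text:

```python
import sqlite3

def log_search(conn, query_record: dict):
    # Values use ? placeholders -- safely parameterised.
    # Keys are concatenated into the SQL string -- injectable.
    cols = ", ".join(query_record.keys())        # attacker-controlled field names
    params = ", ".join("?" * len(query_record))
    sql = f"INSERT INTO searches ({cols}) VALUES ({params})"
    conn.execute(sql, tuple(query_record.values()))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE searches (term TEXT, ts TEXT)")

# Benign request: works as intended.
log_search(conn, {"term": "revenue model", "ts": "2026-01-01"})

# A hostile request needs no special values -- the *key* carries the payload,
# e.g. {"term) ...attacker SQL here... --": "x"} reaches the SQL parser verbatim.
```

The fix is the usual one: validate field names against an allowlist of known columns, since placeholders can't parameterise identifiers.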
simonw: Yeah, gotta admit I'm a bit disappointed here. This was a run-of-the-mill SQL injection, albeit one discovered by a vulnerability-scanning LLM agent. I thought we might finally have a high-profile prompt injection attack against a name-brand company we could point people to.
frereubu: From TFA: "Fun fact: As part of our research preview, the CodeWall research agent autonomously suggested McKinsey as a target, citing their public responsible disclosure policy (to keep within guardrails) and recent updates to their Lilli platform. In the AI era, the threat landscape is shifting drastically — AI agents autonomously selecting and attacking targets will become the new normal."
frankfrank13: Some insider knowledge: Lilli was, at least a year ago, internal only. VPN access, SSO, all the bells and whistles required. Not sure when that changed. McKinsey requires hiring an external pen-testing company to launch even to a small group of coworkers.

I can forgive this kind of mistake on the part of the Lilli devs. A lot of things have to fail for an "agentic" security company to even find a public endpoint, much less start exploiting it.

That being said, the mistakes in here are brutal. Seems like close to zero authz. Based on very outdated knowledge, my guess is a Sr. Partner pulled some strings to get Lilli made publicly available. By that time, much/most/all of the original Lilli team had "rolled off" (gone to client projects), as McKinsey HEAVILY punishes working on internal projects.

So Lilli was likely staffed by people who couldn't get staffed elsewhere, didn't know the code, and didn't care. Internal work, for better or worse, is basically a half day. This is a failure of McKinsey's culture around technology.
mnmnmn: McKinsey can eat shit
danenania: > I thought we might finally have a high profile prompt injection attack against a name-brand company we could point people to.

These folks have found a bunch: https://www.promptarmor.com/resources

But I guess you mean one that has been exploited in the wild?
palmotea: With all we've been learning from stuff like the Epstein emails, it would have been nice if someone had leaked this data:> 46.5 million chat messages. From a workforce that uses this tool to discuss strategy, client engagements, financials, M&A activity, and internal research. Every conversation, stored in plaintext, accessible without authentication.> 728,000 files. 192,000 PDFs. 93,000 Excel spreadsheets. 93,000 PowerPoint decks. 58,000 Word documents. The filenames alone were sensitive and a direct download URL for anyone who knew where to look.I'm sure lots of very informative journalism could have been done about how corporate power actually works behind the scenes.