Discussion
The Webpage Has Instructions. The Agent Has Your Credentials.
redgridtactical: This is the natural consequence of building everything around "the agent needs access to everything to be useful." The more capabilities you hand an agent, the larger the attack surface when it encounters a malicious page.The simplest mitigation is also the least popular one: don't give the agent credentials in the first place. Scope it to read-only where possible, and treat every page it visits as untrusted input. But that limits what agents can do, which is why nobody wants to hear it.
rocho: I absolutely agree, although even that doesn't solve the root problem. The underlying LLM architecture is fundamentally insecure as it doesn't separate between instructions and pure content to read/operate on.I wonder if it'd be possible to train an LLM with such architecture: one input for the instructions/conversation and one input for "operational content" (I don't have a better word for it). Training would ensure that the latter isn't interpreted as instructions.
stavros: Why does the agent have your credentials? There's no need for that! I made one that doesn't:https://github.com/skorokithakis/stavrobot
indigodaddy: So this is like a claw type thing? I’ve never used these “agents”. Not sure what I would do with them. Probably not for coding right?
stavros: Yeah, it's more of a personal assistant. It can do coding, but it's most useful as a PA.
amelius: You can do basically anything with a claw agent. For example, I asked one to build me a Dyson sphere. It is still working on it, but so far so good.
indigodaddy: So. Yesterday I had a need to, from my android phone, have ChatGPT et Al mobile app do something I THOUGHT was very simple. Read a publicly available Google spreadsheet (I gave it the /htmlview which in incognito I could see ALL the rows (maybe close to 1000 rows). None could do it. Not ChatGPT, not MS Copilot, not Claude app, not Gemini, not even GitHub copilot in a web tab. Some said I can’t even see that. Some could see it but couldn’t do anything with it. Some could see it but only the first 100 lines. All I wanted to do was have it ingest the entire thing and then spit me back out in a csv or txt any rows that mentioned 4K. Seemed simple but these things couldn’t even get past that first hurdle. Weirdly, I remembered I had the Grok app too and gave it a shot, and, it could do it. I guess it is more intelligent in it’s abilities to scrape/parse/whatever all kinds of different types of sites.I’d guess this is the type of thing that might actually excel in your agent or these claw clones, because they literally can just do whatever bash/tool type actions on whatever VM or sandboxed environment they live on?
stavros: Yeah, I think this was an issue of Google blocking bot user agents more than the LLMs not being smart enough. A bot that can run curl (like mine) should read it no problem.
indigodaddy: Ah ok that actually makes sense as the reason. And I think I’ve seen that with even coding agents when they are trying to look up stuff on the web or URLs you give them, now that I think about it..
0xbadcafebee: [delayed]