Discussion
nnevatie: Considering that Claude sometimes confuses the identities of itself and the user, this might as well cite the user - "you just said X".
Gijs4g: The website fully stutters to a halt. Managed to ask if Ali Khamenei is still alive. It answered "Yes, ..."
jampekka: The HN title is quite a strong claim, but it's nowhere to be seen in the repo. It seems to be fully prompt-based, so the AI can still say anything it pleases. How well do these complicated prompt systems usually work? My strategy is to stick mostly to simple prompts, with potentially some deterministic tools and vendor harnesses, on the rationale that these are what the models are trained and evaluated with, and that LLMs still often get tripped up when their context is spammed with too much stuff.
0x3f: Well, I would have tried it, but the website kills Firefox. Hard to see how you could really make this work, though. You might as well just add "fetch and re-read all sources explicitly to make sure they are correct" to a normal prompt.
4ndrewl: I tried it with the Car Wash question (it failed), and its claims were mostly fuel-consumption or emissions-related, plus this: "factual (ai) Weather, traffic, and personal urgency are the only significant variables that could tilt the decision toward driving." My gut feeling is that if this could be done, it would be a core part of one of the model providers' output.
Lionga: This is akin to writing "No hallucinations" in your proompt. So strange that even HN thinks it is worth anything.
sigmoid10: The crazy thing is, you could do this. And it can be done 100% in code with zero prompting: just limit the output token set to a structured format, and then further constrain parts of that to sources that were retrieved beforehand. I know because I already wrote such a system. It can still match sources and answers incorrectly (just like this approach), but there is no need to rely on crazy prompts and agents to prevent hallucinations or missing outputs (which in the end still lack any guarantees).
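The constrained-decoding idea the comment describes can be sketched without any specific library: mask the logits of every token outside the allowed set before sampling, so a citation field can only ever name a source that was actually retrieved. Everything below is illustrative; `vocab`, the logit values, and the doc IDs are made up, and in a real system the masking would plug into an LM's per-step sampling loop rather than a one-shot choice.

```python
import math

def mask_logits(logits, vocab, allowed):
    """Set logits of disallowed tokens to -inf so they can never be sampled."""
    return [l if tok in allowed else -math.inf for tok, l in zip(vocab, logits)]

def greedy(logits, vocab):
    """Pick the highest-scoring token (stand-in for the LM's sampling step)."""
    return vocab[max(range(len(vocab)), key=lambda i: logits[i])]

retrieved = {"doc-12", "doc-47"}          # sources fetched before generation
vocab = ["doc-12", "doc-47", "doc-99"]    # candidate citation tokens
logits = [0.1, 0.3, 2.0]                  # model actually prefers doc-99,
                                          # which was never retrieved

masked = mask_logits(logits, vocab, retrieved)
print(greedy(masked, vocab))  # doc-47: the hallucinated doc-99 is unreachable
```

The same trick applied over a whole grammar (JSON schema, enum of source IDs, etc.) is what guarantees structure at decode time; as the commenter notes, it prevents out-of-set citations but not a wrong pairing of a valid source with a claim.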
est: Looks like it just finds sources in Confluence to check against the bullshit Claude Code says? I thought it could search for online citations.
todotask2: The interactive app made my mouse movement really sluggish on macOS.
doginasuit: I'm positive there are use cases for this tool, but after several years of working with LLMs, hallucinations have become a non-issue. You start to get a sense of the likely gaps in their knowledge, just like you would with a person. Questions about application settings, for example, where to find a particular setting in a particular app: the LLM has a sense of how application settings are generally structured, but the answer is almost never spot on. I just prefix these questions with "do a web search" or provide a link to documentation, and that is usually enough to get a decent response along with citations.