Discussion
Grafeo
Aurornis: Does anyone have any experience with this DB? Or context about where it came from?

From the commit history it's obvious that this is an AI-coded project. It was started a few months ago, 99% of commits are from 1 contributor, and that 1 contributor has sometimes committed 100,000 lines of code per week. (EDIT: 200,000 lines of code in the first week)

I'm not anti-LLM, but I've done enough AI coding to know that one person submitting 100,000 lines of code a week is not doing deep thought and review on the AI output. I also know from experience that letting AI code the majority of a complex project leads to something very fragile, overly complicated, and not well thought out. I've been burned enough times by investigating projects that turned out to be AI slop with polished landing pages. In some cases the claimed benchmarks were improperly run or just hallucinated by the AI.

So is anyone actually using this? Or is this someone's personal experiment in building a resume portfolio project by letting AI run against a problem for a few months?
gdotv: Agreed, there's been a literal explosion in the last 3 months of new graph databases coded from scratch, clearly largely LLM assisted. I'm having to keep track of the industry quite a bit to decide what to add support for on https://gdotv.com and frankly these days it's getting tedious.
jandrewrogers: That is a lot of code for what appears to be a vanilla graph database with a conventional architecture. The thing I would be cautious about is that graph database engines in particular are known for hiding many sharp edges unless a lot of subtle and sophisticated design goes into them. It isn't obvious that the necessary level of attention to detail has been paid here.
justonceokay: Yes, a graph database will happily lead you down an O(n^3) (or worse!) path when trying to query for a single relation if you are not wise about your indexes, etc.
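To make the complexity point concrete, here is a minimal Python sketch (names and data invented for illustration, not from any of the databases discussed): resolving a three-hop pattern over a bare edge list degenerates into three nested scans of the whole list, roughly O(E^3), while an adjacency index makes each hop a cheap lookup proportional to out-degree.

```python
# Hypothetical illustration of why indexes matter for graph queries.

def three_hop_naive(edges):
    """Find all paths a->b->c->d by scanning the full edge list per hop: O(E^3)."""
    paths = []
    for (a, b1) in edges:
        for (b2, c1) in edges:
            if b1 != b2:
                continue
            for (c2, d) in edges:
                if c1 != c2:
                    continue
                paths.append((a, b1, c1, d))
    return paths

def three_hop_indexed(edges):
    """Same query, but each hop is an adjacency-map lookup: cost ~ sum of degrees."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
    paths = []
    for a, bs in adj.items():
        for b in bs:
            for c in adj.get(b, ()):
                for d in adj.get(c, ()):
                    paths.append((a, b, c, d))
    return paths

edges = [("a", "b"), ("b", "c"), ("b", "d"), ("c", "e")]
print(three_hop_naive(edges))    # [('a', 'b', 'c', 'e')]
print(three_hop_indexed(edges))  # [('a', 'b', 'c', 'e')]
```

Both functions return the same paths; the difference is only in how much work is done to find them, which is exactly the gap a missing index opens up.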
adsharma: There are 25 graph databases, all going "me too" in the AI/LLM-driven cycle. Writing it in Rust gets visibility because of the popularity of the language on HN.

Here's why we are not doing it for LadybugDB. Would love to explore a more gradual/incremental path. Also focusing on just one query language: strongly typed Cypher.

https://github.com/LadybugDB/ladybug/discussions/141
tadfisher: Is LadybugDB not one of these 25 projects?
adsharma: LadybugDB is backed by this tech (I didn't write it): https://vldb.org/cidrdb/2023/kuzu-graph-database-management-...

You can judge for yourself what work has been done in the last 5 months. Many short videos here, with new open source contributors who I didn't know before ramping up: https://youtube.com/@ladybugdb
adsharma: Are you talking about the Andy Pavlo bet here? https://news.ycombinator.com/item?id=29737326

Kuzu folks took some of these discussions and implemented them: SIP, ASP joins, factorized joins, and WCOJ. Internally it's structured very similarly to DuckDB, except for the differences noted above. DuckDB 1.5 implemented sideways information passing (SIP), and LadybugDB is bringing in support for DuckDB node tables.

So the idea that graph databases have shaky internals stems primarily from pre-2021 incumbents. 4 more years to go to 2030!
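For readers unfamiliar with WCOJ: a toy Python sketch of the worst-case-optimal-join idea for the triangle query Q(a,b,c) :- E(a,b), E(b,c), E(c,a). Instead of joining two relations at a time (whose intermediate result can blow up), a generic join binds one variable at a time and checks each candidate against every relation that mentions it. This is my own illustration of the concept, not Kuzu's or DuckDB's actual implementation.

```python
# Generic-join-style triangle enumeration over a directed edge list.

def triangles(edges):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
    out = set()
    for a, nbrs in adj.items():          # bind a
        for b in nbrs:                   # bind b, satisfying E(a,b)
            for c in adj.get(b, ()):     # bind c, satisfying E(b,c)...
                if a in adj.get(c, ()):  # ...and E(c,a) at the same time
                    out.add((a, b, c))
    return out

print(sorted(triangles([(1, 2), (2, 3), (3, 1), (2, 4)])))
# [(1, 2, 3), (2, 3, 1), (3, 1, 2)]
```

The key property: a candidate binding for c is only accepted if it satisfies both remaining relations at once, so the bad intermediate result of a pairwise join plan (all E(a,b)-E(b,c) pairs, triangle or not) is never materialized.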
jandrewrogers: I wasn't referring to the Pavlo bet, but I would make the same one! Poor algorithm and architecture scalability is a serious bottleneck. I was part of a research program working on the fundamental computer science of high-scale graph databases ~15 years ago. Even back then we could show that the architectures you mention couldn't scale even in theory. Just about everyone has been re-hashing the same basic design for decades.

As I like to point out, for two decades DARPA has offered to pay many millions of dollars to anyone who can demonstrate a graph database that can handle a sparse trillion-edge graph. That data model easily fits on a single machine. No one has been able to claim the money.

Inexplicably, major advances in this area 15-20 years ago under the auspices of government programs never bled into the academic literature, even though they materially improved the situation. (This case is the best example I've seen of obviously valuable advanced research that became lost for mundane reasons, which is pretty wild if you think about it.)
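A quick back-of-envelope supports the "fits on a single machine" claim. The encoding here is my own assumption (each edge as a raw pair of 64-bit node identifiers, ignoring indexes and compression), not anything stated in the thread:

```python
# Rough storage estimate for a sparse trillion-edge graph.
edges = 10**12                        # one trillion edges
bytes_per_edge = 2 * 8                # two 8-byte node identifiers per edge
raw_terabytes = edges * bytes_per_edge / 10**12
print(raw_terabytes)                  # 16.0 -- within a single server's storage
```

So the challenge is not raw capacity but whether the engine's algorithms and architecture can actually process a graph of that size.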
cluckindan: The d:Document syntax looks so happy!
piyh: I'm turning off my brain and using neo4j