Discussion
@codemix/graph: A real-time, type-safe graph database in a CRDT.
cyanydeez: Eventually someone will figure out how to use a graph database to allow an agent to efficiently build & cull context to achieve near-deterministic activities. Seems like one needs a sufficiently powerful schema and a harness that properly builds the graph of agent knowledge, like how ants naturally figure out where sugar is, when that stockpile depletes, and shift to other sources. This looks neat, but if you want it to be used for AI purposes, you might want to show a schema more complicated than a Twitter network.
phpnode: the airline graph is more complex, I can show the schema for that if you think it's useful?
embedding-shape: I'd wager the problem is on the side of "LLMs can't value/rank information well enough" rather than "the graph database wasn't flexible/good enough", but I'd be happy to be shown counter-examples. I'm sure once that problem has been solved, you can use the built-in map/object of whatever language, and it'll be good enough. Add save/load to disk via JSON and you have long-term persistence too. But since LLMs still aren't clever enough, I don't think the underlying implementation matters too much.
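The "built-in map plus JSON persistence" idea above can be sketched in a few lines. This is illustrative only; the `KnowledgeStore` name and its methods are made up for this example, not part of any library discussed here.

```typescript
import * as fs from "node:fs";

type Fact = { subject: string; predicate: string; object: string };

// A naive long-term store: a plain Map keyed by the fact itself,
// serialized to disk as JSON for persistence.
class KnowledgeStore {
  private facts: Map<string, Fact> = new Map();

  add(fact: Fact): void {
    const key = `${fact.subject}|${fact.predicate}|${fact.object}`;
    this.facts.set(key, fact); // identical facts are deduplicated
  }

  query(subject: string): Fact[] {
    return [...this.facts.values()].filter((f) => f.subject === subject);
  }

  save(path: string): void {
    fs.writeFileSync(path, JSON.stringify([...this.facts.values()]));
  }

  load(path: string): void {
    const facts: Fact[] = JSON.parse(fs.readFileSync(path, "utf8"));
    facts.forEach((f) => this.add(f));
  }
}
```

Whether this is "good enough" is exactly the commenter's point: the hard part is ranking and culling what goes in, not the container.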
2ndorderthought: Can anyone explain why it is a good idea to make a graph database in TypeScript? This is not a language flamewar question, more of an implementation-details question. Though TypeScript is pretty fast and the language is flexible, we all know how demanding graph databases are, and how hard they are to shard, etc. It seems like this could be a performance trap. Are there successful RDBMSs or NoSQL databases out there written in TypeScript? Also, why is everything about LLMs now? Can't we discuss technologies at face value anymore? It's getting kind of old to me personally.
phpnode: I needed it to be possible to run the graph in the browser and in Cloudflare Workers, so TS was a natural fit here. It was built as an experiment in end-to-end type safety - nothing to do with LLMs, but it ended up being useful in the product I'm building. It's not designed for large data sets.
2ndorderthought: Makes sense, thanks for explaining the use case. The LLM question was only because of the comments at the time of the post. The query syntax looks nice, by the way.
phpnode: thanks, it was as close to Gremlin [0] as I could get without losing type safety (Gremlin is untyped).
[0] https://tinkerpop.apache.org/
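The trade-off being described - a Gremlin-style fluent traversal that stays typed - can be sketched with generics. This is a hypothetical illustration of the technique, not the library's actual API: each step constrains keys to `keyof T`, so the compiler knows the element type at every point in the chain.

```typescript
type Person = { name: string; age: number };

// A minimal typed traversal: `has` filters by a property, `values`
// projects one. Both are constrained to real keys of T, so a typo
// like .has("agee", 30) is a compile error, not a runtime surprise.
class Traversal<T> {
  constructor(private items: T[]) {}

  has<K extends keyof T>(key: K, value: T[K]): Traversal<T> {
    return new Traversal(this.items.filter((i) => i[key] === value));
  }

  values<K extends keyof T>(key: K): T[K][] {
    return this.items.map((i) => i[key]);
  }
}

const people = new Traversal<Person>([
  { name: "alice", age: 30 },
  { name: "bob", age: 41 },
]);

// The compiler infers `names: string[]` from the chain.
const names = people.has("age", 30).values("name");
```

A Cypher-style string query cannot carry this inference without parsing the query language inside the type system, which is why the chained form tends to win when type safety is the goal.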
lmeyerov: It's interesting to think of where the value comes from.

One of the main lessons of the RAG era of LLMs was that reranked multi-retrieval is a great balance of test time, test compute, and quality, at the expense of maintaining a few costly index types. Graph ended up a nice little lift when put next to text, vector, and relational indexing, by solving some n-hop use cases. I'm unsure if the juice is worth the squeeze, but it does make some sense as infra. Making and using these flows isn't that conceptually complicated, and most pieces have good, simple OSS around them.

There is another universe of richer KG extraction with even heavier indexing work. I'm less clear on the relative ROI here in typical benchmarks. Imagine going full RDF, vs the simpler property-graph queries & ontologies here, and investing in heavy entity-resolution etc. preprocessing during writes. I don't know how well these improve scores vs the regular multi-retrieval above, or how easy it is to do at any reasonable scale. However, a lot of the work shifts out of the DB and out of the agent, and into a much fancier KG pipeline. So now there is a missing layer with a less clear proof/value burden.
lo1tuma: 15 years ago I was a big fan of this method-chaining pattern. These days I don’t like it anymore. Especially when it comes to unit testing and implementing fake objects, it becomes quite cumbersome to set up the exact same interface.
phpnode: unfortunately it's unavoidable if you want to preserve type safety. I did consider parsing Cypher in typescript types, but it's not worth the effort and it's not possible to do safely.
rglullis: > It's not designed for large data sets.

How large is large, here? Tens of thousands of triples? Hundreds? Millions? I'm working on a local-first browser extension for ActivityPub, and currently I am parsing the JSON-LD and storing the triples in specialized tables on pglite to be able to make fast queries on that data. It would be amazing to ditch the whole thing and just deal with triples based on the expanded JSON-LD, but I wonder how the performance would be. While using the browser extension for a week, the store accumulated ~90k JSON-LD documents, which would probably mean 5 times as many triples. Storage-wise it's okay (~300MB), but I think that a graph database would only be useful to manage "hot data", not a whole archive of user activity.
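The expanded-JSON-LD-to-triples step being described can be sketched roughly as below. This is a simplified assumption of how such a flattening might look; it handles only `@value` and `@id` objects, while real expanded JSON-LD has more cases (`@list`, language maps, blank nodes, etc.).

```typescript
type Triple = { s: string; p: string; o: string };

// Turn one expanded JSON-LD node object into subject/predicate/object
// triples: the node's @id is the subject, every non-keyword property
// is a predicate, and each value object contributes one object.
function toTriples(node: Record<string, unknown>): Triple[] {
  const s = node["@id"] as string;
  const triples: Triple[] = [];
  for (const [p, values] of Object.entries(node)) {
    if (p.startsWith("@")) continue; // skip JSON-LD keywords like @id
    for (const v of values as Record<string, string>[]) {
      triples.push({ s, p, o: v["@id"] ?? v["@value"] });
    }
  }
  return triples;
}

// An expanded ActivityStreams-style note (illustrative data).
const note = {
  "@id": "https://example.org/note/1",
  "https://www.w3.org/ns/activitystreams#content": [{ "@value": "hi" }],
};
const triples = toTriples(note);
```

At ~450k triples (5x the ~90k documents mentioned), the question is less the flattening and more whether the index structures stay fast at that size.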
rounce: Why not with a pipe that returns a function, the type of which is determined by the args of the pipe? That is possible to make typesafe in TS. That way you can have both APIs where the chained version is just wrapping successive pipe calls.
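The pipe idea in the comment above can be sketched with overloads: the return type of `pipe` is inferred from the functions passed in, and a chained API could be thin sugar over it. The step functions here are placeholders, not real traversal steps.

```typescript
type Fn<A, B> = (a: A) => B;

// Overloads give precise inference for 1-3 steps; a real library
// would extend this to more arities or use a recursive tuple type.
function pipe<A, B>(f1: Fn<A, B>): Fn<A, B>;
function pipe<A, B, C>(f1: Fn<A, B>, f2: Fn<B, C>): Fn<A, C>;
function pipe<A, B, C, D>(f1: Fn<A, B>, f2: Fn<B, C>, f3: Fn<C, D>): Fn<A, D>;
function pipe(...fns: Array<(a: any) => any>) {
  return (x: any) => fns.reduce((acc, f) => f(acc), x);
}

// Placeholder steps standing in for traversal steps.
const toUpper = (s: string) => s.toUpperCase();
const exclaim = (s: string) => s + "!";

// Inferred as (a: string) => string from the arguments alone.
const shout = pipe(toUpper, exclaim);
```

The appeal for testing is that each step is a plain function that can be faked in isolation, instead of having to reimplement a whole chained interface.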
alansaber: I'm conceptually very bullish on B (entity resolution and hierarchy pre-processing during writes). I'm less certain that A and B need to be merged into a single library. Obviously, a search agent should know the properties of the KG being searched, but as the previous poster mentioned, these graph DBs are inherently inaccurate and only form part of the retrieval pattern anyway.
lmeyerov: Maybe it's useful to split out B1) KG pipelines from B2) the choice of simple property-graph ontologies & queries vs advanced RDF ontologies and SPARQL queries. It sounds like you are thinking about KG pipelines, but I'm unclear on whether typed property graphs, vs more advanced RDF/SPARQL, are needed in your view on the graph-engine side?
esafak: Got benchmarks?
rapnie: Gleam might be a great choice perhaps. Compiles to typescript and Erlang/BEAM.
AlotOfReading: I'm not terribly familiar with graph databases, but perhaps someone who is can explain the advantage of this awfully complicated-seeming design. There's Gremlin, Cypher, Yjs, and Zod, all of which I understand are different languages/libraries for different problems. What's the advantage of using all these different things in one system? You can do all of this in Datalog. You get strong eventual consistency naturally. LLMs know how to write it. It's type safe. JS implementations exist [0].
[0] https://github.com/tonsky/datascript
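For readers unfamiliar with the Datalog style being suggested, the core operation is matching triple patterns with variables against a fact store. This is a naive illustrative sketch of that idea in TypeScript, not how DataScript itself is implemented or queried; variables are marked with a leading `?`.

```typescript
type Triple = [string, string, string];

// A tiny fact store.
const db: Triple[] = [
  ["alice", "follows", "bob"],
  ["bob", "follows", "carol"],
];

// Match one pattern against the db, returning variable bindings.
// Constants must equal the triple's component; variables bind to it.
function match(pattern: Triple): Record<string, string>[] {
  return db.flatMap((triple) => {
    const binding: Record<string, string> = {};
    for (let i = 0; i < 3; i++) {
      const p = pattern[i];
      if (p.startsWith("?")) binding[p] = triple[i];
      else if (p !== triple[i]) return []; // constant mismatch
    }
    return [binding];
  });
}

// "Who does alice follow?"
const answers = match(["alice", "follows", "?x"]);
```

A real Datalog engine adds joins across patterns, rules, and recursion on top of this primitive, which is what makes n-hop graph queries expressible without a separate query language.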