Discussion
A backend for AI-coded apps
chrysoprace: Is InstantDB no longer about local-first or is the AI angle just a marketing thing?
ghm2199: For people like me — who are kind of familiar with how React / Jetpack Compose / Flutter-like frameworks work — I recall using React widgets/composables which seamlessly update when they register to receive updates to the underlying data model. The persistence boundary in these apps was the app/device where it was running. The data model was local. You still had to worry about pushing data updates to servers and back to get to other devices/apps.

Instant crosses that persistence boundary: your app can propagate updates so you don't have to. Right? But how is this different/better than things like Vercel?
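The register-for-updates pattern described above can be sketched in plain TypeScript (names are made up; this is only the local, single-device model the comment describes, before anything crosses the persistence boundary):

```typescript
// A tiny observable store: "widgets" subscribe and are re-run
// whenever the underlying data model changes. All local, one device.
type Listener<T> = (state: T) => void;

class LocalStore<T> {
  private listeners = new Set<Listener<T>>();
  constructor(private state: T) {}

  get(): T {
    return this.state;
  }

  // A widget/composable registers here to receive updates.
  subscribe(fn: Listener<T>): () => void {
    this.listeners.add(fn);
    fn(this.state); // render once with the current state
    return () => this.listeners.delete(fn);
  }

  // Any update notifies every subscribed widget.
  set(next: T): void {
    this.state = next;
    this.listeners.forEach((fn) => fn(next));
  }
}

// Usage: a "widget" renders from the shared data model.
const todos = new LocalStore<string[]>([]);
const rendered: string[] = [];
todos.subscribe((t) => rendered.push(`list: ${t.join(",")}`));
todos.set(["buy milk"]);
// Syncing this state to a server and other devices is still on you --
// that is the boundary the comment says Instant crosses for you.
```

This is the whole trick of reactive UI frameworks locally; a sync engine extends the `set`/notify loop across the network.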
asdev: I wonder if people really need this. How many people are really building multiplayer apps like Figma, Linear etc? I'm guessing 99% are CRUD and I doubt that will change. Even if so, would you want to vendor lock into some proprietary technology rather than build with tried and tested open source components?
nezaj: For what it's worth, Instant is 100% open source! https://github.com/instantdb/instant
ladon86: Looks very nice! I'll give it a spin for prototypes. Would love to check out /docs but it's currently a 404.
nezaj: Docs should be working now! If anyone else has issues please let us know!
nharada: This is super cool and exactly what I've been looking for for personal projects, I think. I wanna try it out, but the "agent" part could be more seamless. How does my coding agent know how to work this thing? I'd suggest including a skill for this, or if there's already one, linking to it on the blog!
nezaj: We do have a skill! `npx skills add instantdb/skills` Would recommend doing `bunx/pnpx/npx create-instant-app` to scaffold a project too!
jamest: They actually deliver on the promise of "relational queries && real-time," which is no small feat. Though, their console feels like it didn't get the love that the rest of the infra/website did. Congrats on the 1.0 launch! I'm excited to keep building with Instant.
stopachka: Thank you! We spent a lot of time on the demos on the home page, the essays page, and upgrading the docs. We're going to redesign the dashboard in the next few weeks.

One interesting observation from our users: though they use the dashboard less in some ways (the AI agents spin up apps and make schema changes for them), we found people use it _more_ in other ways. Instant comes with an Explorer component, which lets you query your data. We found users want to engage with that a lot more.
risyachka: Yeah, I kinda agree. Considering LLMs write most of the code today, the need for fancy tech is lower than ever. A good old CRUD app looks like a perfect fit for AI: it's simple and repetitive, and AI is great at SQL. A Go binary for the backend and React for the frontend covers 99.9% of use cases with basically zero resource usage. A $5 node will handle 100k MAU without breaking a sweat.
LoganDark: Can I view the source code of this skill / install it manually? I am incredibly not a fan of automated installers for this type of stuff.
nezaj: You can! The skill lives here: https://github.com/instantdb/skills
stopachka: Good idea! I went ahead and updated the essay: https://github.com/instantdb/instant/pull/2530 It should be live in a few minutes.
ghm2199: One thing I have always wanted to do is cancel a remotely executing AI agent that I kicked off, as it streamed its part-by-part response (a part could be words, a list of URLs, or whatever you want the FE to display). A good example is a web-researcher agent that searches and fetches web pages remotely and sends them back to the local sub-agent to summarize the results. This is something claude-code in the terminal does not quite provide. In Instant, would this be trivial to build?

Here is how I built it in a WUI: I sent SSE events from Server -> Client streaming web-search progress, and the client could update an `x` box on the "parent" widget via a simple REST call, using the `id` from an SSE event. The `id` could belong to the parent web-search or to certain URLs being fetched. And then whatever is yielding your SSE lines would check the db and cancel the send (assuming it had not sent all the words already).
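The check-a-flag-between-chunks pattern described here can be sketched as follows (hypothetical names; an in-memory set stands in for the db row the REST endpoint would flip):

```typescript
// Sketch of the cancel-via-flag pattern: the streaming side checks a
// shared "cancelled" store before each chunk, so a plain REST call
// from the client can stop the stream mid-flight.
const cancelled = new Set<string>(); // stands in for the db flag

// What the REST handler would do when the user clicks the `x` box.
function requestCancel(id: string): void {
  cancelled.add(id);
}

// What the SSE endpoint would iterate over, one event per chunk.
async function* streamSearch(
  id: string,
  chunks: string[],
): AsyncGenerator<string> {
  for (const chunk of chunks) {
    if (cancelled.has(id)) return; // stop before sending more
    yield `id: ${id}\ndata: ${chunk}\n\n`; // SSE wire format
  }
}

// Usage: the "client" cancels after receiving the second chunk.
async function demo(): Promise<string[]> {
  const sent: string[] = [];
  let n = 0;
  for await (const event of streamSearch("search-1", ["a", "b", "c", "d"])) {
    sent.push(event);
    if (++n === 2) requestCancel("search-1"); // user clicked `x`
  }
  return sent; // only the first two chunks make it out
}
```

The granularity of the `id` (whole search vs. individual URL fetches) decides what the `x` box cancels, exactly as described above.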
stopachka: If I understood you correctly: you kick off an agent, it reports work back to the user, the user can click cancel, and the agent gets terminated.

You are right, this kind of UX comes very naturally with Instant. If an agent writes data to Instant, it will show up right away for the user. If the user clicks an `X` button, it will propagate to the agent.

The basic sync engine would handle a lot of the complexity here. If the data streaming gets more complicated, you may want to use Instant streams. For example, if you want to convey updates character by character, Instant streams (an included service) does this extremely efficiently.

More about the sync engine: https://www.instantdb.com/product/sync
More about streams: https://www.instantdb.com/docs/streams
stopachka: > 5 usd node will handle 100k mau without breaking a sweat.

One problem you may encounter with the $5 node: how do you handle multiple projects? You could put them all in one VM, but that setup can get esoteric, and as you look for more isolation, the processes won't fit on such a small machine.

With Instant, you can make unlimited projects. Your app also gets a sync engine, which is both good for your users and, at least in our experiments, something the AIs prefer building with.

And if you ever want to get off Instant, the whole system is open source.

I still resonate with a good Hetzner box though, and it can make sense to self-host or to use more tried-and-true tech. For what it's worth, with Instant you would get a lot more support for easy projects.
shay_ker: with a huge multi-tenant database, how do you deal with noisy neighbors?
stopachka: There are two answers: data structures and operations.

1. Data structures

One data structure that helps a lot is the grouped queue. I cover it in the essay here: https://www.instantdb.com/essays/architecture#:~:text=is%20t...

To summarize: in places where we process throughput, we generally stick a grouped queue and a threadpool that takes from it. The mechanics of this queue make it so that if there's one noisy neighbor, it can't hog all the threads.

2. Operations

There's also a big part that's just about operating the system. If you think about what happens in big companies, they are effectively dealing with noisy neighbors all the time. You keep an eye on the system at all times and manage against any spikes.

The benefit of centralizing the operations is that when a small company gets big quickly, we likely already have the buffer to help them scale. The drawback to all systems like this is that sometimes we get it wrong.

When we get it wrong though, we write it down and improve our operations.
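The grouped-queue idea can be sketched like this (a toy model, not Instant's actual implementation; names are made up): jobs are bucketed per group, at most one job per group is in flight at a time, and groups are served round-robin, so one tenant can never occupy every worker.

```typescript
// A toy grouped queue: jobs are bucketed by group (e.g. per app),
// at most one job per group runs at a time, and groups are served
// round-robin. A noisy group therefore holds at most one worker no
// matter how many jobs it enqueues.
class GroupedQueue<T> {
  private buckets = new Map<string, T[]>();
  private order: string[] = []; // round-robin order over groups
  private inFlight = new Set<string>();

  enqueue(group: string, job: T): void {
    if (!this.buckets.has(group)) {
      this.buckets.set(group, []);
      this.order.push(group);
    }
    this.buckets.get(group)!.push(job);
  }

  // Take the next job from the first eligible group. Returns null
  // when every non-empty group already has a job in flight.
  take(): { group: string; job: T } | null {
    for (let i = 0; i < this.order.length; i++) {
      const group = this.order.shift()!;
      this.order.push(group); // rotate for fairness
      const bucket = this.buckets.get(group)!;
      if (bucket.length > 0 && !this.inFlight.has(group)) {
        this.inFlight.add(group);
        return { group, job: bucket.shift()! };
      }
    }
    return null;
  }

  // A worker calls this when it finishes, freeing the group.
  done(group: string): void {
    this.inFlight.delete(group);
  }
}

// Usage: the noisy tenant enqueues 3 jobs, the quiet one enqueues 1.
const q = new GroupedQueue<number>();
q.enqueue("noisy", 1);
q.enqueue("noisy", 2);
q.enqueue("noisy", 3);
q.enqueue("quiet", 9);
const a = q.take(); // a noisy job starts
const b = q.take(); // the quiet job: noisy can't take a second worker
const c = q.take(); // null: both groups are busy
```

Running jobs within a group serially also gives you per-group ordering for free, which matters when the jobs are writes to the same app.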