Discussion
sebmellen: Just as a tiny first piece of feedback, the main marketing website is very hard to understand or grok without a demo of how the tool works. Even just the quick YouTube video that you added in your post here, if embedded, would make a difference. There are so many "agentic tools" out there that it's really hard to see what differentiates this one based on the website alone.
a24venka: Thanks for the feedback! Definitely agree that we could do more with the marketing site. We're working on a gallery page to showcase some demos.
jpbryan: Why do I need a canvas to visualize the work that the agents are doing? I don't want to see their thought process, I just want the end product like how ChatGPT or Claude currently work.
a24venka: That is definitely a valid way of using Spine as well. You can just work in the chat and consume the deliverables similar to how you would in other tools. The canvas helps when you want to trace back why an output wasn't what you expected, or if you're curious to dig deeper. Even beyond auditability, the canvas also helps agents do better work: they can generate in parallel, explore branches, and pass context to each other in a structured way (especially useful for longer-running tasks).
gravity2060: In the demo video you shared (yt link), how many credits did that whole project take? And what is the price to fix elements of it? For example, if you dislike a minor aspect of the generated spreadsheet, do follow-up instructions utilize only the narrow subset of agents assigned to that subtask, or does it create new agents that have to build new context for the narrow follow-up task?
dude250711: Dark UI pattern: it pretends to be immediately usable, only to redirect you to sign-up.
a24venka: Fair point, we should be more upfront about the sign-up step. Given that tasks are long-running and token-intensive, we do need an auth barrier to protect against abuse, but we can definitely do a better job signaling that before you hit the canvas.
garciasn: Or, just show us in an animated GIF how the product works in practice. Then, should we somehow find benefit in a visual representation of a swarm's workflow, we could sign up rather than having to, unintuitively, scroll down to watch a YouTube video.
a24venka: Credits are consumed by the blocks that get generated, not by the agents themselves. Some blocks are cheaper than others. A simple prompt or image block is a single model call, while browser use or deliverable blocks like documents and spreadsheets run models in a loop and cost more. Blocks also cost more when they have more blocks connected to them (more input tokens). In the demo video I shared, the task cost about 7,000 credits since it ran around 10 BrowserUse blocks and produced multiple deliverables. If you want to fix a specific block (or set of blocks), you can select them and the chat will scope itself to primarily work on those. In that case fewer blocks run, so it's cheaper.
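The pricing logic described above (per-block base cost, plus a surcharge for each connected input block) can be sketched roughly as follows. All block types and numbers here are hypothetical illustrations, not Spine's actual rates:

```python
# Illustrative sketch of a block-based credit model: each block type has a
# base cost, and cost grows with the number of connected input blocks
# (more input tokens). All figures are made up for illustration.

BASE_COST = {
    "prompt": 50,        # single model call
    "image": 100,        # single model call
    "browser_use": 500,  # runs a model in a loop, so pricier
    "document": 400,     # deliverable block, multi-step
    "spreadsheet": 400,
}

def block_cost(block_type: str, num_inputs: int) -> int:
    """Base cost plus an assumed 20% of base per connected input block."""
    base = BASE_COST[block_type]
    return base + int(0.2 * base * num_inputs)

# Ten browser-use blocks, each with two inputs, add up quickly:
total = sum(block_cost("browser_use", 2) for _ in range(10))  # 7000 here
```

With these invented numbers, ten mid-fan-in browser blocks alone land in the same ballpark as the ~7,000-credit demo run, which matches the intuition that loop-running blocks dominate the bill.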
esafak: Is the value prop that I can see what the agent is doing? This is not the way: https://youtu.be/R_2-ggpZz0Q?t=158 How am I supposed to get anything out of this? Consider that agents are going to get faster and run more and more tasks in parallel. This is not manageable for a human to follow in real time. I can barely keep up with one agent in real time, let alone a swarm. What I could see being useful is if you monitored the agents and notified me when one is in the middle of something that deserves my attention.
nusl: 7000 credits, ouch. The tool is really cool, though; I do think it's super useful. I also like the swarm particle animations in the background.
pqs: I had to read this text to understand what this tool does, because I couldn't tell from the website (without watching a video). You should use Spine to improve your website. ;-)
BloondAndDoom: I didn't read the post; I checked out the website, just like 99% of people will do. Simple advice: if you are selling a product whose selling point is being visual, show it on your website. Not in a YouTube video, but in actual screenshots and a short, tightly cut 10-second video/GIF.
a24venka: Definite miss on our part, we're working on making the product experience more visible upfront on our landing page.
woeirua: Interesting idea. I wanted to see an example of the agents working on a canvas when I opened your page. I saw nothing of the sort. Sorry, but immediate fail. This may be too harsh, but you need to make it immediately clear why someone today can't just have Claude Code one-shot your app!
salomonk_mur: Friend, in the age of AI and even more so if you are selling an AI product, all you need is literally 2 screenshots and one prompt.
aleda145: Super cool! I'm completely sold on the canvas layer. Embracing non-linearity is such a boon at the ideas stage. Once you have verified something, though, moving it to another medium (a document, presentation, or just code) is often the best choice. Do you see the canvases created with Spine as one-offs that you discard once you have your deliverable, or as something living that you keep around? I'm building a side project for running SQL on a canvas (kavla.dev), so I'm thinking about canvas workflows all the time!
a24venka: Thanks! Great question. We see canvases as living workspaces: you can revisit, iterate on, and build on them over time. But the deliverables (docs, slides, code) are first-class outputs you can export and use independently. So it works both ways, depending on the workflow. Kavla looks cool; canvas-based SQL is a great use case for this kind of thinking!
aleda145: Nice! I'll make sure to try out Spine this weekend, if you want detailed feedback feel free to email me. You can find it in my profile.
vivzkestrel: Excuse my memory at this point, but aren't there like a hundred of these posted on HN every month, all having something to do with multi-agent collaboration and supporting 1000 models?
TheTaytay: I think this is really neat. You should probably take it as a compliment that the biggest criticisms so far are about the website landing page. ;) I like canvases in general, and I especially like them for mentally organizing and referring to this sort of broad work. (Honestly, I think zoomable canvases would make a better window manager in general, but I digress.) One small piece of friction: my default mouse-based ways of dragging the canvas around (which work in most canvases, like Figma) aren't working. I saw that you had a tutorial, and I have learned to hold space now, but I prefer "hold middle mouse button to drag my canvas view around". I've got a couple of research tasks running now, and my current open questions as a very new user are:
1) How easy will it be to store the outputs in a GitHub repository?
2) How easy will it be to refer back to this later?
3) Can I build upon it manually or automatically?
4) Can I (securely) share it with someone else for them to see and build upon?
5) Can I do something "locally" with it? Not necessarily the model, but my preferred interface for LLMs at this point is Claude Code. Could I have a Claude Code instance running in one of these boxes somehow?
6) What if I want to do private stuff with it and don't like the traffic going through Spine's servers? Could I pay them for the interface but bring my own keys? (Related: can I self-host somehow?)
7) When this is done, each artifact it found (screenshot, webpage, etc.) is going to be helpful. The data-hoarder in me wants to make sure I can search these later. Heck, if I could do that, this would become my preferred "web browser". (But again, I digress.)
a24venka: Agreed. We will make sure this comes through in our website.
a24venka: Really appreciate the detailed feedback and questions! And yes, we'll take the website criticism as a compliment :) Good callout on the canvas navigation; we'll look into middle-mouse-button support. To answer your questions:
1) GitHub integration is on our roadmap. Right now you can export outputs manually, but we want to make this seamless.
2) All your canvases are saved, and you can search them by name in your dashboard. We're also working on a dedicated section for deliverables across canvases.
3) Yes to both! You can manually add or edit blocks, or kick off new agent runs that build on existing work.
4) Currently you can only share public links to your canvas (and you can make it private again at any point). We are testing a teams feature that lets you share canvases securely with members of your team; beyond that, roles and email-based sharing controls are on our roadmap.
5) Claude Code in a block is a really interesting idea. We don't support that today, but we're thinking about computer-use and coding workflows.
6) BYOK (bring your own keys) is something we've heard interest in and are considering. Self-hosting isn't available right now, though we do support private deployments for enterprise customers if that's ever relevant.
7) Love the "preferred web browser" framing. Right now you can search canvases, but searchable artifacts across canvases is definitely where we want to head.
Thanks for giving it a real spin; this kind of feedback is incredibly valuable.
useftmly: the 'chat is the wrong interface' framing is interesting but i think the real friction is context fragility, not linearity. long chats break because models lose the thread, not because you can't branch. the canvas fixes that by making context explicit and persistent — you can see what the agents are actually holding. curious how the orchestrator handles conflicts when parallel agents produce contradictory outputs. does the downstream synthesizer just pick, or does it surface the divergence to the user?
kmoser: I read "AI agents that collaborate on a visual canvas" and I thought it was a shared canvas (as in an image) that virtual agents could contribute to, sort of like an image-only Moltbook.
a24venka: Great framing. You're right that context fragility is a big part of it. The canvas helps because each block maintains its own context explicitly, and connected blocks pass context to each other without polluting the agents' context windows. On conflict resolution, the synthesizer block can see all upstream outputs, so it has full visibility into any divergence. It does surface contradictions to the user, though this is something we're constantly improving.
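A minimal sketch of what "surface the divergence" could look like, assuming a simplified model where each upstream block yields one claim. The class and function names here are hypothetical, not Spine's API:

```python
# Hypothetical synthesizer: it sees every upstream block's output, and
# when the outputs contradict each other it reports the divergence per
# block instead of silently picking one answer.

from dataclasses import dataclass

@dataclass
class BlockOutput:
    block_id: str
    claim: str  # simplification: one claim per upstream block

def synthesize(upstream: list[BlockOutput]) -> dict:
    distinct = {o.claim for o in upstream}
    if len(distinct) == 1:
        return {"status": "ok", "result": upstream[0].claim}
    # contradictory outputs: surface each variant to the user
    return {
        "status": "divergent",
        "variants": {o.block_id: o.claim for o in upstream},
    }

outputs = [BlockOutput("agent_a", "Q3 revenue grew 12%"),
           BlockOutput("agent_b", "Q3 revenue grew 8%")]
print(synthesize(outputs)["status"])  # prints "divergent"
```

The design choice this illustrates is that the synthesizer's full visibility into upstream outputs is what makes divergence detectable at all; a chat transcript would have already collapsed the two claims into one thread.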
visekr: Whoa, congrats on the launch. lol, I launched my visual canvas for agents today too. I went in more of a collaborative-canvas-IDE, agent-orchestration direction, but very cool to see your take on it. https://getmesa.dev is mine.
poly2it: It looks interesting, but is it really more efficient than a tiling window manager?
kkukshtel: "I make AI output lots of stuff" is not an intrinsically valuable thing. I can run the same thing on Claude in research mode and get a report with cited sources in a more digestible format on my phone. What's the eval here for whether any of this is good? Is it even possible to test? (I.e., you can't really A/B test startup ideas.)
agenticbtcio: The persistence layer is probably more valuable here than the visual canvas itself. When running multi-agent workflows on complex tasks, the hard part isn't the agents working in parallel — it's maintaining coherent state when something fails mid-run or needs revision. A canvas that preserves intermediate results and lets you surgically re-run specific branches without starting over would solve a real pain point. Curious how Spine handles partial failures — does the whole workflow restart or can it pick up from the last valid checkpoint?
a24venka: Great question. The core of Spine is coordinating multiple specialized agents across multiple models, using the canvas to store and pass context selectively so each agent works with exactly what it needs.On the eval side, we ran Spine Swarm against GAIA Level 3 and Google DeepMind's DeepSearchQA and hit #1 on both.Full writeup: https://blog.getspine.ai/spine-swarm-hits-1-on-gaia-level-3-...
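The checkpoint-and-resume behavior agenticbtcio asks about can be sketched as a DAG runner that persists each block's output and, on a re-run, skips anything already checkpointed so only failed or downstream branches execute again. This is an illustration of the idea in the comment, not Spine's actual implementation, and all names are invented:

```python
# Sketch: run a workflow DAG with per-block checkpoints. On a partial
# failure, re-invoking run_dag with the saved checkpoints picks up from
# the last valid checkpoint instead of restarting the whole workflow.

def run_dag(blocks, deps, run_block, checkpoints):
    """blocks: block ids in topological order.
    deps: block id -> list of upstream block ids.
    run_block(block_id, inputs): executes one block, may raise.
    checkpoints: block id -> saved output, mutated as blocks finish."""
    results = {}
    for b in blocks:
        if b in checkpoints:
            results[b] = checkpoints[b]  # already done: skip re-running
            continue
        inputs = {d: results[d] for d in deps[b]}
        results[b] = run_block(b, inputs)
        checkpoints[b] = results[b]  # persist immediately
    return results

executed = []
def run_block(block_id, inputs):
    executed.append(block_id)
    return block_id.upper()

# Pretend "fetch" succeeded before a crash; only "parse" runs on resume.
saved = {"fetch": "FETCH"}
results = run_dag(["fetch", "parse"],
                  {"fetch": [], "parse": ["fetch"]},
                  run_block, saved)
```

Because checkpoints are written as each block completes, a failure mid-run loses only the in-flight block; everything upstream is reusable on the next attempt, which is exactly the "surgically re-run specific branches" property the comment describes.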