Discussion
jrswab: I built Axe because I got tired of every AI tool trying to be a chatbot. Most frameworks want a long-lived session with a massive context window doing everything at once. That's expensive, slow, and fragile. Good software is small, focused, and composable; AI agents should be too.

Axe treats LLM agents like Unix programs. Each agent is a TOML config with one focused job: code reviewer, log analyzer, commit message writer. You run them from the CLI, pipe data in, and get results out. You can use pipes to chain them together, or trigger them from cron, git hooks, or CI.

What Axe is:

- 12MB binary, two dependencies. No framework, no Python, no Docker (unless you want it)
- Stdin piping: something like `git diff | axe run reviewer` just works
- Sub-agent delegation: agents call other agents via tool use, depth-limited
- Persistent memory: if you want, agents can remember across runs without you managing state
- MCP support: Axe can connect any MCP server to your agents
- Built-in tools: web_search and url_fetch out of the box
- Multi-provider: bring what you love to use, whether Anthropic, OpenAI, Ollama, or anything in models.dev format
- Path-sandboxed file ops: keeps agents locked to a working directory

Written in Go. No daemon, no GUI.

What would you automate first?
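To make the "agent as TOML config" idea concrete, here is a minimal sketch of what such a definition might look like. The field names (`name`, `model`, `system_prompt`, `tools`, `working_dir`) are illustrative assumptions, not Axe's actual schema:

```toml
# hypothetical agent definition; field names are illustrative, not Axe's real schema
name = "reviewer"
model = "claude-sonnet"   # any provider in models.dev format
system_prompt = """
You review unified diffs. Report bugs, style issues, and risky changes.
Output a short bulleted list and nothing else.
"""
tools = ["web_search"]    # opt into built-in tools per agent
working_dir = "./"        # path sandbox for file operations
```

An agent defined this way would then be invoked non-interactively, e.g. `git diff | axe run reviewer`, with the diff arriving on stdin.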
punkpeye: What are some things you've automated using Axe?
jedbrooke: Looks interesting. I agree that chat is not always the right interface for agents, and an LLM-boosted CLI sometimes feels like the right paradigm (especially for dev-related tasks). How would you say this compares to similar tools like Google's dotprompt? https://google.github.io/dotprompt/getting-started/
ufish235: Why is this comment an ad?
ForceBru: This is the OP promoting their project — makes sense to me
bensyverson: It's exciting to see so much experimentation when it comes to form factors for agent orchestration!

The first question that comes to mind is: how do you think about cost control? Putting a ton in a giant context window is expensive, but unintentionally fanning out 10 agents with a slightly smaller context window is even more expensive. The answer might be "well, don't do that," and that certainly maps to the UNIX analogy, where you're given powerful and possibly destructive tools, and it's up to you to construct the workflow carefully. But I'm curious how you would approach budget when using Axe.
jrswab: > how you would approach budget when using Axe

Great question, and it's something I've not dug into yet. But I see no problem adding a way to limit LLM usage by token count or something similar to keep the cost for the user within reason.
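One possible shape for such a limit, sketched as a hypothetical TOML fragment. None of these fields exist in Axe today; the author only says a cap could be added, so this is purely illustrative:

```toml
# hypothetical per-agent budget fields; not part of Axe's current config
max_tokens_per_run = 8000     # hard cap on a single invocation
max_runs_per_hour  = 20       # throttle cron- or CI-triggered agents
on_budget_exceeded = "abort"  # fail loudly rather than silently fanning out
```

A per-agent cap fits the Unix framing: each small agent carries its own limit, so a chain's worst-case cost is bounded by the sum of its parts.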
ozgurozkan: The Unix-philosophy framing resonates: focused, composable, single-purpose agents are genuinely safer architecturally than monolithic long-lived sessions with massive context windows.

That said, composability introduces its own attack surface. When agents chain together via pipes or tool calls, each handoff is a trust boundary. A compromised upstream output becomes a prompt injection vector for the next agent in the chain.

This is one of the patterns we stress-test at audn.ai (https://audn.ai), where we do adversarial testing of AI agents and MCP tool chains. The depth-limited sub-agent delegation you mention is exactly the kind of structure where multi-step drift and argument injection can cause real damage. A malicious intermediate output can poison a downstream agent's context in ways that are really hard to audit after the fact.

The small-binary, minimal-deps approach is great for reducing supply chain risk. Have you thought about trust boundaries between agents when piping? Would be curious whether there's a signing or validation layer planned between agent handoffs.
r_lee: wow, like 10 posts within 5 minutes, how great! love me some AI slop on HN @dang
zrail: Looks pretty interesting!

Tiny note: there's a typo in your repo description.
mark_l_watson: If I have time I want to try this today because it matches my LLM-based work style, especially when I am using local models: I have command-line tools that help me generate large one-shot prompts that I just paste into an Ollama REPL, then I check back in a while. It looks like Axe works the same way: fire off a request and look at the results later.
jrswab: Exactly! I also made it to chain them together so each agent only gets what it needs to complete its one specific job.
hrimfaxi: Yeah, I've been going and flagging everything until they're banned. Old account, too.
jrswab: I'd not heard of that before, but after looking into it I think they're solving different problems.

Dotprompt is a prompt template format that lives inside app code to standardize how we write prompts.

Axe is an execution runtime you run from the shell. There's no code to write (unless you want the LLM to run a script). You define the agent in TOML, run it with `axe run <agent name>`, and pipe data into it.
saberience: I’m having trouble understanding when/where I would use this? Is this a replacement for pi or codex?
jrswab: This is not a replacement for either, in my opinion. Apps like codex and pi are interactive, but Axe is non-interactive: you define an agent once and then trigger it however you please.
btbuildem: I really like seeing the movement away from MCP across the various projects. Here the composition of the new with the old (the ol' Unix composability) seems to work very nicely.

OP, what have you used this on in practice, with success?
Lliora: 12MB for an "AI framework replacement"? That's either brilliant compression or someone's redefining "framework" to mean "toy model that works on my laptop." Show me the benchmarks on actual workloads, not the readme poetry.
jrswab: This is not an LLM but a binary that runs LLMs as single-purpose agents that can be chained together.
mrweasel: Yeah I was disappointed by that too.