Discussion
If DSPy is So Great, Why Isn't Anyone Using It?
TheTaytay: I tried it in the past, one time "in earnest." But when I discovered that none of my actual optimized prompts were extractable, I got cold feet and went a different route. The idea of needing to fully commit to a framework scares me. The idea of having a computer optimize a prompt as a compilation step makes a lot of sense, but treating the underlying output prompt as an opaque blob doesn't. Some of my use cases were JUST far enough off the beaten path that dspy was confusing, which didn't help. And lastly, I felt like committing to dspy meant that I would be shutting the door on any other framework or tool or prompting approach down the road. I think I might have just misunderstood how to use it.
sbpayne: I don't know that you misunderstood. This is one of my biggest gripes with Dspy as well. I think it takes the "prompt is a parameter" concept a bit too far. I highly recommend checking out this community plugin from Maxime; it helps "bridge the gap": https://github.com/dspy-community/dspy-template-adapter
memothon: I think the real problem with using DSPy is that many of the problems people are trying to solve with LLMs (agents, chat) don't have an obvious path to evaluate. You have to really think carefully about how to build up a training and evaluation dataset that you can throw to DSPy to get it to optimize. This takes a ton of upfront work and careful thinking. As soon as you move the goalposts of what you're trying to achieve, you also have to update the training and evaluation dataset to cover that new use case. This can actually get in the way of moving fast. Often teams are not yet trying to optimize their prompts; they're still trying to figure out what the set of questions and right answers should be!
sbpayne: Yeah, I think Dspy often does not really show its benefit until you have a good 'automated metric', which can be difficult to get to. I think the unfortunate part is: the way it encourages you to structure your code is good for other reasons that might not be an 'acute' pain. And over time, it seems inevitable that you'll end up building something that looks like it.
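For readers wondering what an 'automated metric' looks like in its simplest form, here is a dependency-free sketch: exact match plus a rough token-overlap F1 against a gold answer. The function names and scoring scheme are illustrative, not part of DSPy's API.

```python
def exact_match(prediction: str, gold: str) -> float:
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(prediction.strip().lower() == gold.strip().lower())


def token_f1(prediction: str, gold: str) -> float:
    """Harmonic mean of token-level precision and recall.
    Uses set overlap, so duplicate tokens are ignored (fine for a sketch)."""
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    if not pred_tokens or not gold_tokens:
        return float(pred_tokens == gold_tokens)
    common = len(set(pred_tokens) & set(gold_tokens))
    if common == 0:
        return 0.0
    precision = common / len(pred_tokens)
    recall = common / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

A metric like this only makes sense once you have gold answers, which is exactly the upfront-dataset work the thread is describing.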
dzonga: at /u/sbpayne: very useful info, and the pricing page as well. Useful for upcoming consultants to learn how to price services too.
QuadmasterXLII: If you find yourself adding a database because that's less painful than regular deployments from your version control, something is hair-on-fire levels of wrong with your CI/CD setup.
sbpayne: I think this misunderstands the need for iteration! Maybe I could have written it more clearly :). The reality is that you don't want to re-deploy for every prompt change, especially early on. You want a really tight feedback loop. If a prompt change requires a re-deploy, that is usually too slow. You don't have to use a database to solve this, but it's pretty common to see in my experience.
villgax: Nobody uses it except for maybe the weaviate developer advocates running those jupyter cells.
ndr: It's not as ergonomic as they make it out to be. The fact that you have to bundle input+output signatures, and that everything is dynamically typed (sometimes into the args), just makes it annoying to use in codebases that have type annotations everywhere. Plus their out-of-the-box agent loop has been a joke for the longest time; writing your own is feasible, but it's night and day when trying to get something done with pydantic-ai. Too bad, because it has a lot of nice things. I wish it were more popular.
sbpayne: Yeah! I can agree with this. There are some ergonomic improvements to be had here.
pjmlp: Never heard of it, that is already a reason.
sbpayne: hahaha this is true!
deaux: I don't see it at all.

> Typed I/O for every LLM call. Use Pydantic. Define what goes in and out.

Sure, but that's not related to DSPy, and it's completely table stakes. Also not sure why the whole article assumes the only language in the world is Python.

> Separate prompts from code. Forces you to think about prompts as distinct things.

There's really no reason prompts must live in a file with a .md or .json or .txt extension rather than .py/.ts/.go/.., except if you indeed work at a company that decided it's a good idea to let random people change prod runtime behavior. If someone can think of a scenario where this is actually a good idea, feel free to enlighten me. I don't see how it's any more advisable than editing code in prod while it's running.

> Composable units. Every LLM call should be testable, mockable, chainable.

> Abstract model calls. Make swapping GPT-4 for Claude a one-line change.

And LiteLLM or `ai` (Vercel), the actually most-used packages, aren't? You're comparing downloads with Langchain, probably the worst package to gain popularity of the last decade. It was just first to market; after a short while most realized it's horrifically architected, and now it's just coasting on former name recognition while everyone who needs to get shit done uses something lighter like the above two.

> Eval infrastructure early. Day one. How will you know if a change helped?

Sure, to an extent. Outside of programming, most things where LLMs deliver actual value are very nondeterministic with no right answer. That's exactly what they offer. Plenty of it an LLM can't judge the quality of. Having basic evals is useful, but you can quickly run into their development taking more time than it's worth.

But above all: the comments on this post immediately make clear that the biggest differentiator of DSPy is the prompt optimization. Yet this article doesn't mention that at all? Weird.
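The patterns deaux quotes (typed I/O, mockable units, abstracted model calls) can be sketched with nothing but the standard library. The names here (`ExtractRequest`, `LLMClient`, `FakeClient`) are illustrative, not from DSPy, Pydantic, or any other package; in real code the dataclasses would likely be Pydantic models.

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class ExtractRequest:
    """Typed input for the LLM call."""
    document: str


@dataclass
class ExtractResult:
    """Typed output for the LLM call."""
    title: str
    author: str


class LLMClient(Protocol):
    """Abstracted model call: swapping providers touches only the client."""
    def complete(self, prompt: str) -> str: ...


def extract_metadata(req: ExtractRequest, client: LLMClient) -> ExtractResult:
    """A composable unit: takes typed input, returns typed output,
    and accepts any client, so it is trivially mockable in tests."""
    raw = client.complete(f"Extract title and author:\n{req.document}")
    title, author = raw.split("|", 1)  # toy parsing, just for the sketch
    return ExtractResult(title=title.strip(), author=author.strip())


class FakeClient:
    """A mock client for tests; a real one would call an API instead."""
    def complete(self, prompt: str) -> str:
        return "Moby-Dick | Herman Melville"
```

Whether this counts as "table stakes" or as a pattern worth a framework is exactly the disagreement in this thread.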
sbpayne: I think all of these things are table stakes; yet I see that they are implemented/supported poorly across many companies. All I'm saying is that there are some patterns here that are important, and it makes sense to enter into building AI systems understanding them (whether or not you use Dspy) :)
stephantul: Mannnn, here I thought this was going to be an informative article! But it’s just a commercial for the author’s consulting business.
halb: The author itself is probably AI-generated. The contact section of the blog is just placeholder values. I think the age of informative articles is gone.
tilt: Curious what you think of https://github.com/pipevals/pipevals (author)
andyg_blog: > the whole article assumes the only language in the world is Python.

This was my take as well. My company recently started using Dspy, but you know what? We had to stand up an entire new repo in Python for it, because the vast majority of our code is not Python.
markab21: I think the entire premise that prompting is the surface area for optimizing the application is fundamentally the wrong framing, in the same way that "better CPAN will save CGI" was in 1998. It's solving the wrong problems now, and it's the limitations in context and model intelligence that require a tool like Dspy. The only thing I'd grab dspy for at this point is to automate the edges of the agentic pipeline that could be improved with RL patterns. But if that is true, you're really shorting yourself by giving your domain to DSPy. You should be building your own RL learning loops. My experience: if you find yourself reaching for a tool like Dspy, you might be sitting on a scenario where reinforcement learning approaches would help even further up the stack than your prompts, and you're probably missing where the real optimization win is. (Think bigger)
sbpayne: Yeah, I find it hard to recommend Dspy. At the same time, I can't escape the observation that many companies are re-implementing a lot of parts of it. So I think it's important to at least learn from what Dspy is :)
panelcu: https://www.tensorzero.com/docs has similar abstractions but doesn't require Python and doesn't require committing to the framework or a language. It's also pretty hard to onboard, but solves the same problems better and makes evaluating changes to models / prompts much easier to reason about.
verdverm: Have you looked at ADK? How does it compare? Does it even fit in the same place as Dspy? https://google.github.io/adk-docs/ Disclaimer: I use ADK and haven't really looked at Dspy (though I had heard of it prior). ADK certainly addresses all of the points you have in the post.
sbpayne: I personally haven't looked super closely at ADK. But I would love if someone more knowledgeable could do a sort of comparison. I imagine there are a lot of similar/shared ideas!
CraftingLinks: I used dspy in production, then reverted the bloat, as it literally gave me nothing of added value in practice but a lot of friction when I needed precise control over the context. Avoid!
memothon: Yeah, I agree with this. I will try to use it in earnest on my next project. That metric is the key piece. I don't know the right way to build an automated metric for a lot of the systems I want to build that will stand the test of time.
sbpayne: To be clear: I don't know that I would recommend using it, exactly. I would just make sure you understand the lessons so you see how it best makes sense to apply to your project :)
TZubiri: > Stage 2: "Can we tweak the prompt without deploying?"

Are we playing philosophy here? If you move some part of the code from the repo into a database, then changing that database is still part of the deployment, but now you've just made your versioning have an identity crisis. Just put your prompts in your git repo and say no when someone requests that an anti-pattern be implemented.
sbpayne: I think the core challenge here is that being able to (in "development") quickly change the prompt or other parameters and re-run the system to see how it changes is really valuable for making a tight iteration loop. It's annoying/difficult in practice if this is strictly in code. I don't think a database is necessarily the way to go; it's just a common pattern I see. And to be clear, I really strongly believe this is more of a need for a "development-time override" than the primary way to deploy to production.
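The "development-time override" sbpayne describes can be as small as a lookup that prefers an environment variable or a local file over the version-controlled default. A minimal sketch; the names, env-var scheme, and paths are made up for illustration, not from any framework:

```python
import os
from pathlib import Path

# Version-controlled defaults: the source of truth that ships with the repo.
DEFAULT_PROMPTS = {
    "summarize": "Summarize the following document in three sentences:",
}


def get_prompt(name: str, override_dir: str = ".prompt_overrides") -> str:
    """Return a development-time override if one exists, else the default.

    Precedence: PROMPT_<NAME> env var, then a local override file
    (typically gitignored), then the checked-in default.
    """
    env_key = f"PROMPT_{name.upper()}"
    if env_key in os.environ:
        return os.environ[env_key]
    override = Path(override_dir) / f"{name}.txt"
    if override.exists():
        return override.read_text().strip()
    return DEFAULT_PROMPTS[name]
```

This keeps git as the deployment path (answering TZubiri's objection) while still letting a developer iterate on a prompt without a redeploy.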
CharlieDigital: I work with the author; the author is definitely not AI-generated.
msp26: > Data extraction tasks are amongst the easiest to evaluate because there's a known "right" answer.

Wrong. There can be a lot of subjectivity, and pretending that some golden answer exists does more harm and narrows down the scope of what you can build.
sbpayne: This is very true! I could have been more careful/precise in how I worded this. I was really just trying to get across that it's, in a sense, easier than some tasks that can be much more open-ended. I'll think about how to word this better, thanks for the feedback!
rco8786: I think they're just saying that data extraction tasks are easy to evaluate because for a given input text/file you can specify the exact structured output you expect from it.
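A toy version of rco8786's point: an extraction can be scored field by field against a hand-labeled gold record. The function and field names are illustrative:

```python
def field_accuracy(predicted: dict, gold: dict) -> float:
    """Fraction of gold fields the prediction matches exactly.
    Assumes the gold record is the complete expected output."""
    if not gold:
        return 1.0
    correct = sum(1 for key, value in gold.items() if predicted.get(key) == value)
    return correct / len(gold)
```

This is only as good as the gold labels, which is precisely msp26's objection: for subjective fields there may be no single "exact" answer to encode.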
giorgioz: Loved the article, because I hit exactly all the stages up to the 5th! Thank you for making me see the whole picture and journey!

I think a problem for DSPy is that they don't know the concept of THE WHOLE PRODUCT: https://en.wikipedia.org/wiki/Whole_product Look at https://mastra.ai/ and https://www.copilotkit.ai/ to see how much more inviting their pages look. A company is not selling only the product itself but all the other things around the product = THE WHOLE PRODUCT. A similar concept in developer tools is "the docs are the product".

Also, I'm a fullstack JavaScript engineer and I don't use Python. Docs usually have a switch for the language at the top. Stripe.com is famous for its docs and Developer Experience: https://docs.stripe.com/search#examples It's great to study other great products to get inspiration and copy the best traits that are relevant to your product as well.