Theory of Constraints for AI
How can you get better results from AI?
When you’re using LLMs — to code, to write, whatever — the single biggest factor in the quality of your results is the quality of the underlying model.
We’re all benefiting from the continued progress of the AI labs. They keep releasing models that perform better and better — and you and I don’t need to do anything. Just toggle to the new one and start getting improved results.
But there are a few levers that are in our control, and I’ve found they can make the difference between good and great outputs. I anticipate an increasing focus on these three levers, including startups built around them and LLM interfaces leaning on them more heavily. The levers are (with a concrete sketch after the list):
- Prompts (what you send to the agent as the request)
- Context (additional data, knowledge, background, attachments, and source materials you send)
- Tools (the ability to act on the world beyond the chat box, whether searching the web or creating an artifact)
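To make the levers concrete, here is a minimal sketch of a single request that sets all three, assuming the OpenAI Python SDK. The model name, file path, tool name, and prompt are illustrative placeholders; any chat-style API with tool calling follows the same shape.

```python
from pathlib import Path
from openai import OpenAI  # assumes the OpenAI Python SDK; other providers look similar

client = OpenAI()

# Lever 1 (prompt): a specific, well-scoped request.
prompt = "Draft a one-page product spec for the CSV-import feature."

# Lever 2 (context): background the model cannot know on its own.
context = Path("docs/csv_import_notes.md").read_text()  # hypothetical file

# Lever 3 (tools): a way to act beyond emitting text.
tools = [{
    "type": "function",
    "function": {
        "name": "search_internal_docs",  # hypothetical tool
        "description": "Full-text search over the team's internal documentation.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a product manager who writes crisp specs."},
        {"role": "user", "content": f"{prompt}\n\nBackground:\n{context}"},
    ],
    tools=tools,
)
print(response.choices[0].message)
```

All three are arguments you control on every call, independent of which model sits behind them.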
If you can start thinking in these terms — since each is in your control — you will start getting better results.
So far, the runaway, highest-productivity-gain use case for LLMs is coding. Cursor, Claude Code, and other applications are growing unbelievably fast, and with good reason.
They (mostly) use the same underlying models as every other LLM product. So why are they so much more impactful than non-coding AI products?
You can make arguments about code being easy for LLMs to understand (perhaps, although I’m not convinced that’s true for general LLM architectures), or about them being trained on lots of code (true, but so too for other forms of content). But I would contend that it is actually because of context and tool use — and a bit of prompting (and really, the combination of all three).
When you’re using AI coding products, the agent typically has access to a ton of context: documentation, files across your project, and the READMEs, .cursorrules, and AGENTS.md files that guide it.

Plus, the primary way the agent expresses itself is not just returning an answer; it is tool use. Agents call purpose-built, deterministic tools to create and apply patches to files, editing them in precise, targeted ways.
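As a sketch of what such a tool can look like, here is a deterministic file-edit function of the sort an agent might be given. The name and behavior are illustrative, not any particular product’s implementation.

```python
from pathlib import Path

def apply_edit(path: str, old: str, new: str) -> None:
    """Replace one exact snippet in a file, failing loudly if it isn't found.

    A deterministic tool like this lets the model describe a precise change
    (old snippet -> new snippet) instead of re-emitting the whole file and
    hoping it reproduced every untouched line correctly.
    """
    file = Path(path)
    text = file.read_text()
    if old not in text:
        raise ValueError(f"Snippet not found in {path}; refusing to guess.")
    file.write_text(text.replace(old, new, 1))
```

The specifics don’t matter; the point is that the model’s intent is routed through code that applies changes exactly.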
The “issue” with LLMs, by default, is twofold:
- They don’t have knowledge of your specific situation: your own data, or even pointers to the relevant parts of their training data
- They don’t have any way to act other than outputting streams of text
Context fixes the former, and tools fix the latter. AI coding tools help you do this by default — making it easy to add context (and being in a context-rich environment in the first place), and giving the LLMs a bunch of useful tools.
But you can do the same when you’re using LLMs for other things. Take the time and effort to give them context, same as you would for a teammate starting on a new project. And, to the extent possible, ensure they have useful tools.
I’ve written about Theory of Constraints before. It can be applied here too.
We can consider the “factory floor” of an AI task: a prompt, context, and tools go in, the LLM processes them, and the output comes out the other end.

Improving the LLM itself is not within (most of) our control. So we’ll leave that out and look for the bottleneck in the rest of the process.

What you’ll find, per the Theory of Constraints, is that there is always some limiting factor or bottleneck in a system. In the case of the “LLM output factory, sans LLM itself” system, it is one of three things: the prompt, the context, or the tools.
Finding which one is the limiting factor and improving it is the best path towards improving the quality of your output.
To take a simple example: if you’re trying to have AI write a product spec, but your prompt is “write spec plz,” it almost doesn’t matter how much context or how good a set of tools you give it; its output quality is effectively capped.
Or if you have a really good prompt for a product spec, but you don’t give it enough information about the product in question, its output will be capped.
So when you have a “serious” AI use case, take a step back, do the Theory of Constraints process, and consider whether the agent needs better prompts, context, or tools — and then figure out a way to grant that.