Prompt Engineering at Scale: From One-Off Prompts to Managed Prompt Systems
The first prompt you write for a production system is a string in your code. The fiftieth prompt is a liability if it changes without testing, if it diverges be...
All posts tagged with #llm-engineering
A single agent trying to do everything is both expensive and fragile: expensive because you're running a general-purpose model with a large context window for t...
When a traditional API call fails, you check the status code, look at the error message, and fix it. When an LLM-powered agent produces a wrong answer or behave...
Tool calling is the part of LLM applications where the model makes decisions that have real-world effects. When it goes wrong (and it will), the agent either produ...
Every LLM integration starts the same way: you ask the model to "respond in JSON" and then write a JSON.parse() call to get the data out. This works in developm...
I spent 15 years writing TypeScript. Then AI happened, and a significant fraction of the tooling, research implementations, and production frameworks I needed t...
In 2017, Andrej Karpathy wrote a post called "Software 2.0" that described a shift in how we write software. Instead of writing explicit instructions (Software ...