- Please consult me when you encounter any ambiguous edge cases
Attaching the AI directly to production so it can take actions via API calls is bad. For me, the only place an app should use AI is for reading/categorizing/etc., basically replacing the "R" in old CRUD apps. If you want that same AI-based "R" endpoint to auto-fill forms for the "C", "U", and "D" based on a prompt, that's cool, but it should never mutate anything for a customer before a human reviews it. CRUD apps are still CRUD apps (and this will always be true); they just gain a very intelligent "R" endpoint that can auto-complete forms for customers (or your internal tooling/Jenkins pipelines/etc.), or suggest (but never invoke) an action.
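For illustration, a minimal sketch of that boundary; call_llm and db are hypothetical placeholders for a model client and a data layer. The AI side only produces a draft, and the one mutating code path refuses drafts a human hasn't reviewed:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: the AI "R" endpoint only prefills a draft;
# mutation requires an explicit human approval step.

@dataclass
class DraftUpdate:
    record_id: str
    fields: dict = field(default_factory=dict)  # values the model suggests
    reviewed: bool = False                      # flipped only by a human

def suggest_update(prompt: str, record_id: str) -> DraftUpdate:
    """The intelligent "R": read data, prefill a form, never mutate."""
    fields = call_llm(f"Fill the update form for record {record_id}: {prompt}")
    return DraftUpdate(record_id=record_id, fields=fields)

def apply_update(draft: DraftUpdate) -> None:
    """The only code path that touches "C"/"U"/"D" for a customer."""
    if not draft.reviewed:
        raise PermissionError("a human must review this draft first")
    db.update(draft.record_id, draft.fields)  # placeholder DB call
```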
Why not check the logprobs of the output and take action when the probabilities of the first and second most likely tokens are too similar (or below a certain threshold)?
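A minimal sketch of that idea, assuming the OpenAI chat completions API with logprobs enabled; the model name and the 0.2 probability margin are arbitrary choices:

```python
import math
from openai import OpenAI  # assumes the official openai>=1.x client

client = OpenAI()

def answer_or_escalate(question: str, margin: float = 0.2) -> str | None:
    """Return the model's answer, or None when any position had two
    candidate tokens too close in probability to trust the choice."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",              # any model that exposes logprobs
        messages=[{"role": "user", "content": question}],
        logprobs=True,
        top_logprobs=2,
        max_tokens=50,
    )
    choice = resp.choices[0]
    for tok in choice.logprobs.content:
        if len(tok.top_logprobs) < 2:
            continue
        p1 = math.exp(tok.top_logprobs[0].logprob)  # most likely token
        p2 = math.exp(tok.top_logprobs[1].logprob)  # runner-up
        if p1 - p2 < margin:      # the model was nearly undecided here
            return None           # abstain / hand off to a human
    return choice.message.content
```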
llm -> prompt -> result
llm -> prompt + prompt encoded as skill -> result
llm -> prompt + deterministic code encoded as skill -> result
I do think prompting to generate code early can shortcut the path to deterministic code, but we're still essentially embedding deterministic code in a non-deterministic wrapper. In many cases there is a missing layer of determinism that is what actually makes long-horizon tasks succeed. We need deterministic code outside the non-deterministic boundary, via an agentic loop or framework. That sandwiches the non-deterministic decision making between layers of determinism:
deterministic agentic flows -> non-deterministic decision making -> deterministic tools
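A minimal sketch of that sandwich; call_llm, the tool implementations, and finalize are all hypothetical stand-ins:

```python
# Hypothetical sketch: the outer loop and the tools are plain code;
# only the *choice* of tool is left to the model.

TOOLS = {
    "lookup_order": lambda arg: db_lookup(arg),     # deterministic tool
    "refund_order": lambda arg: queue_refund(arg),  # deterministic tool
}

def run_task(task: str, max_steps: int = 5) -> str:
    history = []
    for _ in range(max_steps):           # deterministic agentic loop
        decision = call_llm(             # the only non-deterministic step;
            f"Task: {task}\n"            # assume it returns .tool / .argument
            f"History: {history}\n"
            f"Pick one tool from {list(TOOLS)} and an argument, or say DONE."
        )
        if decision.tool == "DONE":
            break
        result = TOOLS[decision.tool](decision.argument)
        history.append((decision.tool, result))
    return finalize(history)             # deterministic post-processing
```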
This has been a very powerful pattern in my experiments and it gets even stronger when the agents are building their own determinism via tools like auto-researcher.
The hardware control team delivers a spec as a document and a spreadsheet. The mobile team had been using that to hand-code the interface library, validating their code against the server. I converted the document to TSV, sent parts of it to Claude, and had it write a parser for the TSV that keeps all the nuances of the human-written spec. It took more than 150 iterations to get the parser to handle all the edge cases and emit an intermediate JSON output. Then Claude helped me write a code generator, using some custom glue on top of Apollo, to produce the code consumed by the mobile app.
This whole pipeline runs as part of GitHub Actions and calls Claude only when our library validator fails. On failure, an md file is sent to Claude as part of the request, asking it to figure out what went wrong, propose a solution, and create a PR. That is followed by human review, rework, and merge. Total credits consumed to get here: under $350.
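For illustration, a rough sketch of the failure-triggered step using the Anthropic Python client; the validator script, the md file name, the model name, and open_pr_with_patch are placeholders, not the actual pipeline:

```python
import pathlib
import subprocess
import anthropic  # assumes the official anthropic Python client

def validate_and_escalate() -> None:
    """Run the library validator; only spend tokens when it fails."""
    result = subprocess.run(["./validate_library.sh"],   # placeholder validator
                            capture_output=True, text=True)
    if result.returncode == 0:
        return  # nothing to fix, no Claude call

    context = pathlib.Path("spec_notes.md").read_text()  # the md file sent on failure
    client = anthropic.Anthropic()
    msg = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder model name
        max_tokens=2000,
        messages=[{
            "role": "user",
            "content": f"{context}\n\nValidator output:\n"
                       f"{result.stdout}\n{result.stderr}\n"
                       "Figure out what went wrong and propose a fix.",
        }],
    )
    open_pr_with_patch(msg.content[0].text)  # placeholder: turns the proposal into a PR
```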
I think most problems with AI come down to one question: can you deterministically test the thing you are asking it to do?
How many of us would ever show our work without first going to check the thing we just built?
Of course: have it write tests first, and run them to check its work.
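A minimal sketch of that tests-first loop; call_llm and write_file are hypothetical placeholders, and pytest is the deterministic check on every attempt:

```python
import subprocess

# Hypothetical sketch: tests come first, then the model iterates on the
# implementation until the deterministic test run passes.
def test_first_loop(spec: str, max_rounds: int = 5) -> bool:
    write_file("test_feature.py", call_llm(f"Write pytest tests for: {spec}"))
    for _ in range(max_rounds):
        write_file("feature.py",
                   call_llm(f"Write feature.py so test_feature.py passes: {spec}"))
        run = subprocess.run(["pytest", "test_feature.py", "-q"],
                             capture_output=True, text=True)
        if run.returncode == 0:
            return True                                      # work checked, ship it
        spec += f"\nPrevious attempt failed:\n{run.stdout}"  # feed failures back
    return False
```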
Works well for refactoring, but greenfield implementations still rely on a spec that is guaranteed to be incomplete in places, over-specified in others, and wrong in many ways.
https://www.decisional.com/blog/workflow-automation-should-b...
I think there is a fundamental incentive problem: code + LLM + harness is bound to be more efficient, but the labs want you to burn tokens, so they are not going to tell you to use the code; they'll tell you to burn more tokens. They are asking us to forget about token cost and reliability for now, because the model will get better.
This means most people just believe their agent should be able to do anything with the help of some model fairy dust sprinkled over prompts + skills.
Unfortunately, people need to watch their agents fail in production before they come to the right conclusion.
E.g., loop in the code and process each subagent's non-deterministic response for its individual task.
This takes 10 minutes to set up: you just run something like /skill-builder and describe the desired workflow.
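For illustration, a minimal sketch of that loop-in-code pattern; run_subagent is a hypothetical stand-in for whatever harness dispatches the subagent:

```python
import json

# Hypothetical sketch: the loop is plain code; only each bounded,
# checkable task goes to the model.
def process_all(items: list[dict]) -> list[dict]:
    results = []
    for item in items:  # deterministic loop, not driven by the model
        raw = run_subagent(
            f"Categorize this record and reply as JSON only: {json.dumps(item)}"
        )
        try:
            parsed = json.loads(raw)  # validate the non-deterministic output
        except json.JSONDecodeError:
            parsed = {"error": "unparseable", "raw": raw}
        results.append(parsed)
    return results
```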
I imagine many people just don’t know that it’s possible. I only discovered it a few days ago myself.
It worked on the first try.
They are particularly bad at complex multiline parsing, writing all sorts of weird, crude Python/awk scripts and getting confused in the process.
I wish they would use Raku (Perl 6) grammars, Haskell's Parsec, or something similar, and write better parsing scripts.
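A Python analogue of that wish (not Raku or Parsec, but the same grammar-first idea) using the Lark library; the grammar and sample input are invented:

```python
from lark import Lark  # pip install lark

# Invented example: "key: value" spec lines parsed with a declarative
# grammar instead of ad-hoc regex/awk.
grammar = r"""
    start: pair+
    pair: KEY ":" VALUE NEWLINE
    KEY: /[A-Za-z_]+/
    VALUE: /[^\s][^\n]*/
    %import common.NEWLINE
    %import common.WS_INLINE
    %ignore WS_INLINE
"""

parser = Lark(grammar, parser="lalr")
tree = parser.parse("voltage: 3.3V\nmode: pulse\n")
for pair in tree.children:
    key, value = pair.children[0], pair.children[1]
    print(f"{key} => {value}")
```

The grammar stays readable as a spec in its own right, which makes edge-case fixes a one-line rule change rather than another layer of regex.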
Correct. The concept of having probabilistic output with deterministic acceptance “guardrails” is illogical. If the domain resists deterministic modeling such that you’re using an LLM, the guardrails don’t magically gain that capability.