That reduces the quality of the response though.
reply
As opposed to emitting non-JSON tokens and having to throw away the answer?
reply
Don't shoot the messenger
reply
Or just run json.dumps on the correct answer that came back in the wrong format.
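
Roughly like this (untested sketch; the function name is made up, and it assumes the "wrong format" is Python-literal syntax like single quotes or True/None rather than strict JSON):

    import ast, json

    def to_strict_json(model_output):
        # If the output is already valid JSON, just normalize it.
        try:
            return json.dumps(json.loads(model_output))
        except json.JSONDecodeError:
            # Otherwise parse it as a Python literal (single quotes,
            # True/False/None) and re-emit strict JSON. Raises if the
            # text is neither valid JSON nor a Python literal.
            return json.dumps(ast.literal_eval(model_output))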
reply
THIS IS A LIE: https://blog.dottxt.ai/say-what-you-mean.html

I will die on this hill, and I have a bunch of other arXiv links from better peer-reviewed sources than yours to back my claim up (i.e. NeurIPS-caliber papers with more citations than the papers claiming it does harm the outputs).

Any actual impact of structured/constrained generation on the outputs is a SAMPLER problem, and you can fix what little impact may exist with things like https://arxiv.org/abs/2410.01103
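
To be concrete about what "a SAMPLER problem" means (toy sketch, not that paper's actual method; the function name and interface are made up): constrained generation only masks invalid tokens and renormalizes at the sampling step, so the model's logits are untouched and any distortion is introduced right here:

    import numpy as np

    def constrained_sample(logits, valid_mask, temperature=1.0, rng=None):
        # Mask off grammar-invalid tokens, renormalize, then sample.
        # Assumes at least one token in valid_mask is True.
        rng = rng or np.random.default_rng()
        valid_mask = np.asarray(valid_mask, dtype=bool)
        logits = np.array(logits, dtype=float) / temperature
        logits[~valid_mask] = -np.inf          # forbid invalid tokens
        probs = np.exp(logits - logits.max())  # numerically stable softmax
        probs /= probs.sum()                   # renormalize over valid tokens
        return int(rng.choice(len(probs), p=probs))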

Decoding is intentionally nerfed/kept to top_k/top_p by model providers because of a conspiracy against high-temperature sampling: https://gist.github.com/Hellisotherpeople/71ba712f9f899adcb0...
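
For reference, this is the whole "nerfed" pipeline, sketched the way top-k/top-p filtering is commonly implemented (assumes a numpy logits vector, default cutoffs picked arbitrarily):

    import numpy as np

    def top_k_top_p_filter(logits, top_k=40, top_p=0.95):
        # Step 1: keep only the k highest logits.
        logits = np.array(logits, dtype=float)
        if top_k > 0:
            kth = np.sort(logits)[-min(top_k, logits.size)]
            logits[logits < kth] = -np.inf
        # Step 2 (nucleus): keep the smallest set of tokens, by
        # descending probability, whose cumulative mass exceeds p;
        # ban everything after that boundary.
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        order = np.argsort(probs)[::-1]
        over = order[np.cumsum(probs[order]) > top_p]
        logits[over[1:]] = -np.inf  # keep the token that crosses the p boundary
        return logits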

reply
I'd honestly like to hope people would be more up in arms over this, but... based on historical human tendencies, convenience will win here.
reply
I use LLMs for Actual Work (boring shit).

I always set temperature to literally zero and don't sample.
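
In the limit, temperature 0 is just argmax over the logits, i.e. greedy decoding (trivial sketch):

    import numpy as np

    def greedy_token(logits):
        # Temperature -> 0 collapses the softmax onto the single
        # highest-logit token, so "don't sample" means plain argmax.
        return int(np.argmax(logits))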

reply