Hacker News
by kubb, 10 hours ago
by prodigycorp, 9 hours ago:
Are you sure about that? Chain of thought does not need to be semantically useful to improve LLM performance.
https://arxiv.org/abs/2404.15758
by kubb, 3 hours ago:
If you're misusing LLMs to solve TC^0 problems, which is what the paper is about, then you also don't need the slop avalanche. You can just inject a bunch of filler tokens yourself.
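To illustrate: injecting filler tokens in the sense of arXiv:2404.15758 is just padding the prompt with meaningless tokens instead of a semantically useful chain of thought. A minimal sketch, where the prompt template, the choice of "." as the filler token, and the padding length are all illustrative assumptions:

```python
def with_filler(question: str, n_fillers: int = 64, filler: str = ".") -> str:
    """Pad a prompt with meaningless filler tokens before requesting the
    final answer, in place of a chain of thought (template is an assumption)."""
    padding = " ".join([filler] * n_fillers)
    return f"{question}\n{padding}\nAnswer:"

# Example: 8 filler dots between the question and the answer request.
prompt = with_filler("Does [1, -2, 1, 5] contain three numbers summing to 0?", n_fillers=8)
print(prompt)
```

No model call is needed to make the point: the padding carries no semantic content, so any benefit would come purely from the extra token positions.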
by davidguetta, 9 hours ago:
That still doesn't mean all tokens are useful; that's the point of benchmarks.
by prodigycorp, 9 hours ago:
Care to share the benchmarks backing the claims in this repo?