Hacker News
by manmal, 9 hours ago
dktp, 9 hours ago:
The idea is that smarter models might use fewer turns to accomplish the same task, reducing overall token usage. Though, from my limited testing, the new model is far more token-hungry overall.
manmal, 8 hours ago:
Well, you'll need the same prompt for input tokens?
httgbgg, 8 hours ago:
Only the first one. Ideally, there is now no second prompt.
manmal, 8 hours ago:
Are you aware that every tool call produces output, which also counts as input to the LLM?
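A minimal sketch of that point, using a toy one-token-per-word "tokenizer" and hypothetical messages: with a chat-style agent loop, the full conversation so far, including every prior tool result, is re-sent (and billed as input) on each new turn.

```python
def tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: one token per whitespace-separated word.
    return len(text.split())

def total_input(prompt: str, turns: int) -> int:
    """Total input tokens billed across a hypothetical tool-calling loop."""
    conversation = [prompt]
    total = 0
    for turn in range(turns):
        # Everything accumulated so far counts as input on this turn.
        total += sum(tokens(m) for m in conversation)
        # Each turn appends a (hypothetical) assistant message and tool result,
        # which all later turns will re-read as input.
        conversation.append(f"assistant: calling tool (turn {turn})")
        conversation.append(f"tool output: 200 lines of file contents {turn}")
    return total

print(total_input("Initial prompt: refactor the parser module", 3))  # → 57
```

Because each tool result is re-read on every subsequent turn, input tokens grow faster than linearly in the number of turns, which is why fewer turns can still mean fewer total tokens even with an identical initial prompt.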
kalkin, 9 hours ago:
That's valid, but it's also worth knowing it's only one part of the puzzle. The submission title doesn't say "input".