Hacker News
points | by DeathArrow 5 hours ago | comments
by zozbot234 4 hours ago
Arguably it makes more sense technically to get the model support into llama.cpp, which provides many options for GPU+CPU split inference already.
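For context, the GPU+CPU split the comment refers to is exposed in llama.cpp via the `-ngl` / `--n-gpu-layers` flag, which offloads a chosen number of transformer layers to the GPU and runs the rest on the CPU. A minimal sketch (the model path and layer count here are illustrative, not from the thread):

```shell
# Offload 20 layers to the GPU; remaining layers run on the CPU.
# Requires a llama.cpp build with GPU support (e.g. CUDA or Metal).
./llama-cli -m ./models/model.gguf -ngl 20 -p "Hello"
```

Tuning `-ngl` up or down lets you trade VRAM usage against CPU-side throughput on machines where the full model does not fit in GPU memory.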