by metalliqaz, 7 hours ago
cpburns2009, 6 hours ago:
I meant ollama uses llama.cpp internally. Sorry for the confusion.
naasking, 5 hours ago:
From what I understand, the ROCm 7.x series is noticeably buggier and has performance regressions on many GPUs. Vulkan performance for LLM inference is reportedly not far behind ROCm, and it is far more stable and predictable at the moment.
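For anyone who wants to compare the two backends themselves, a rough sketch of how you'd build llama.cpp against each and benchmark the same model follows. The CMake options (`GGML_VULKAN`, `GGML_HIP`) and the `llama-bench` tool are my understanding of the current llama.cpp build system; flag names have changed between versions, so check the repo's build docs. `model.gguf` is a placeholder for whatever model you test with.

```shell
# Sketch: build llama.cpp twice, once per backend, then benchmark
# the same model with each. Flag names are assumptions and may
# differ depending on the llama.cpp version you have checked out.

# Vulkan backend (needs the Vulkan SDK installed)
cmake -B build-vulkan -DGGML_VULKAN=ON
cmake --build build-vulkan --config Release

# ROCm/HIP backend (needs a working ROCm install)
cmake -B build-rocm -DGGML_HIP=ON
cmake --build build-rocm --config Release

# Benchmark the same GGUF model on each build and compare
# prompt-processing and token-generation throughput.
./build-vulkan/bin/llama-bench -m model.gguf
./build-rocm/bin/llama-bench -m model.gguf
```

Running both against the same model on the same card is the most direct way to see whether the Vulkan backend's stability is worth whatever throughput gap remains on your hardware.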