Hacker News
Submitted by alfonsodev, 23 hours ago
lreeves, 22 hours ago:
I'm running the unquantized base model on 2x A6000s (Ampere generation, 48 GB each). It runs at about 25 tokens/second.
NitpickLawyer, 21 hours ago (replying to lreeves):
FYI, they also released FP8 quants, and those should be faster on your setup (we have the same one). As long as you keep the KV cache at 16-bit, FP8 weights should be close to lossless compared to 16-bit, while leaving memory free for more context and giving faster inference.
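To give a feel for why FP8 weights are "close to lossless": the E4M3 format keeps 3 mantissa bits, so the worst-case relative rounding error per weight is about 1/16, and with per-tensor scaling the aggregate error on a typical weight tensor is a couple of percent. Below is a minimal NumPy sketch that simulates an E4M3 round-trip on a random Gaussian tensor standing in for model weights; the function names and the scaling scheme are illustrative, not any particular library's API.

```python
import numpy as np

def quantize_e4m3(x):
    """Simulate an FP8 E4M3 round-trip: 4 exponent bits, 3 mantissa bits.

    Normal exponents are clamped to [-6, 8] and the largest finite value
    is 448; values below 2**-6 fall into the subnormal range (step 2**-9).
    """
    x = np.asarray(x, dtype=np.float64)
    sign = np.sign(x)
    mag = np.abs(x)
    nz = mag > 0
    # Exponent of each value, clamped to the E4M3 normal range [-6, 8].
    exp = np.log2(mag, where=nz, out=np.full_like(mag, -6.0))
    exp = np.clip(np.floor(exp), -6, 8)
    scale = 2.0 ** exp
    # Round the mantissa to 3 fractional bits (8 steps per binade).
    mant = np.round(mag / scale * 8) / 8
    return np.where(nz, sign * np.minimum(mant * scale, 448.0), 0.0)

def fp8_roundtrip(w):
    # Per-tensor scaling (as FP8 weight quantization schemes typically do):
    # map the largest magnitude onto E4M3's max finite value before rounding.
    s = 448.0 / np.max(np.abs(w))
    return quantize_e4m3(w * s) / s

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.02, size=10_000)   # stand-in for a weight tensor
q = fp8_roundtrip(w)
rel_l2 = np.linalg.norm(q - w) / np.linalg.norm(w)
print(f"relative L2 error of FP8 round-trip: {rel_l2:.4f}")
```

Note this only models weight quantization; the KV cache in the comment above stays at 16-bit, which is why generation quality holds up while the FP8 weights halve memory traffic.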