Good to hear! Do you mind sharing your setup and tokens/second performance?
I'm running the unquantized base model on 2xA6000s (Ampere gen, 48GB each). Runs at about 25 tokens/second.
FYI, they also released FP8 quants, which should be faster on your setup (we have the same). As long as you keep the KV cache at 16-bit, FP8 weights should be close to lossless compared to the 16-bit model, while freeing up memory for more context and giving faster inference.
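In case it helps, here's a rough sketch of that config assuming vLLM as the server (the thread doesn't say which one you're using); `<model-id>` is a placeholder for whatever checkpoint you're running:

```shell
# Sketch, assuming vLLM. <model-id> is a placeholder, not the actual model.
# --tensor-parallel-size 2 splits the weights across both A6000s.
# --quantization fp8 quantizes a 16-bit checkpoint to FP8 on the fly;
#   if you load a pre-quantized FP8 checkpoint instead, drop that flag.
# --kv-cache-dtype auto keeps the KV cache at the model's 16-bit dtype.
vllm serve <model-id> \
  --tensor-parallel-size 2 \
  --quantization fp8 \
  --kv-cache-dtype auto
```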