- If you already HAVE a computer and are looking for models: LLMFit
- If you are looking to BUY a computer/hardware, and want to compare/contrast for local LLM usage: This
You cannot exactly run LLMFit on hardware you don't have.
You can check out here how it does that: https://github.com/AlexsJones/llmfit/blob/main/llmfit-core/s...
To detect NVIDIA GPUs, for example: https://github.com/AlexsJones/llmfit/blob/main/llmfit-core/s...
In this case it just runs the command `nvidia-smi`.
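Shelling out to `nvidia-smi` is a common way to probe for an NVIDIA GPU without linking against driver libraries. llmfit itself is written in Rust, but the same idea can be sketched in a few lines of Node/TypeScript; `detectNvidiaGpu` is a hypothetical name, not llmfit's API, and the `--query-gpu` flags are standard `nvidia-smi` options:

```typescript
import { execSync } from "node:child_process";

// Hypothetical sketch: detect an NVIDIA GPU by shelling out to nvidia-smi,
// the same basic approach as the linked llmfit code (which is Rust).
function detectNvidiaGpu(): string | null {
  try {
    // nvidia-smi prints one GPU name per line with these flags.
    const out = execSync("nvidia-smi --query-gpu=name --format=csv,noheader", {
      encoding: "utf8",
      stdio: ["ignore", "pipe", "ignore"],
    });
    const name = out.trim().split("\n")[0];
    return name || null;
  } catch {
    // nvidia-smi missing or erroring: treat as "no NVIDIA GPU detected".
    return null;
  }
}

console.log(detectNvidiaGpu() ?? "no NVIDIA GPU detected");
```

The `try/catch` matters: on machines without the NVIDIA driver the binary simply isn't on `PATH`, so absence of the command is itself the negative signal.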
Note: llmfit is not web-based.
I too was a little surprised by this. My browser (Vivaldi) makes a big deal of being privacy-conscious, but apparently browser fingerprinting is not on their radar.
> Estimates based on browser APIs. Actual specs may vary
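That hedge is warranted: the browser APIs presumably involved here, such as `navigator.deviceMemory` and `navigator.hardwareConcurrency`, are deliberately coarse. `deviceMemory` in particular is rounded to a power of two and capped at 8, precisely to limit fingerprinting, so a machine with far more RAM still reports 8. A hedged sketch (the `estimateSpecs` function and `NavigatorLike` shape are made up for illustration, and take a mock `navigator` so the logic runs outside a browser):

```typescript
// Hypothetical sketch of browser-API hardware estimation.
interface NavigatorLike {
  deviceMemory?: number;        // GiB; real browsers cap this at 8 (Chromium-only API)
  hardwareConcurrency: number;  // logical cores; some browsers clamp this too
}

function estimateSpecs(nav: NavigatorLike): { ramGiB: number; cores: number; lowerBound: boolean } {
  const ramGiB = nav.deviceMemory ?? 8; // assume 8 GiB when the API is unavailable
  return {
    ramGiB,
    cores: nav.hardwareConcurrency,
    // At the 8 GiB cap the true figure may be much higher -- hence "may vary".
    lowerBound: ramGiB >= 8,
  };
}

// Example: a 96 GB machine still reports deviceMemory = 8 in the browser.
console.log(estimateSpecs({ deviceMemory: 8, hardwareConcurrency: 12 }));
```

So a web-based estimate can only ever say "at least 8 GiB", which is why the actual-specs disclaimer is there.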
It looks like I can run more local LLMs than I thought; I'll have to give some of those a try. I have decent memory (96 GB), but my M2 Max MBP is a few years old now and I figured it would be getting inadequate for the latest models. But llmfit thinks it's a really good fit for the vast majority of them. Interesting!