Can't see how NVIDIA justifies its valuation/forward P/E ratio with these developments, and with on-device also becoming viable for 98% of people's AI needs.
reply
On-device is incredibly far away from being viable. A $20 ChatGPT subscription beats the hell out of the 8B model that a $1,000 computer can run.

Nvidia's forward P/E ratio is only 20 for 2026. That's much lower than that of companies like Walmart and Costco. It's also growing nearly 100% YoY and has a $1 trillion backlog.

I think Nvidia is cheap.

reply
I run both MoE and dense models on laptops.

One set of models runs on 8GB VRAM / 16GB RAM and another runs on 24GB VRAM / 64GB RAM. They're very useful for easy and for easy-to-moderately-complex code, respectively.

The latest small open models are incredibly useful even at these sizes when configured properly (quant size, sampling params, careful use of context, etc.).
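
Roughly what that looks like in practice, sketched with llama-cpp-python (just one possible stack, not necessarily what I run; the model filename is made up):

    # Minimal local setup: quantized GGUF, bounded context, tuned sampling.
    from llama_cpp import Llama

    llm = Llama(
        model_path="qwen-coder-q4_k_m.gguf",  # hypothetical Q4_K_M quant
        n_ctx=8192,        # keep context modest; small models degrade as it fills
        n_gpu_layers=-1,   # offload all layers to the GPU if VRAM allows
    )

    out = llm.create_completion(
        "Write a Python function that parses an ISO 8601 date.",
        max_tokens=512,
        temperature=0.7,   # sampling params matter a lot at these sizes
        top_p=0.8,
        min_p=0.05,
    )
    print(out["choices"][0]["text"])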

reply
I think you overestimate what most people are doing with AI. A 2B model can give relationship advice and tell you how long to boil an egg.
reply
8B models can run on laptops. Of course a 1.8T model is more capable, but for a lot of tasks it really isn't 1,000x better.
reply
This is an assessment of the moment. When the rate of AI data center construction slows down, the P/E will start to grow. Or are we saying the pace will grow forever? There are already signs of a slowdown in construction.
reply
What are these signs you are referencing? Source?
reply
Why would it slow down? If 1% of human capability is currently replaced by AI, how would things look if that number goes to 15%? When autonomous robots come to fruition as image recognition improves, demand for compute will skyrocket.
reply
> On-device is incredibly far away from being viable. A $20 ChatGPT subscription beats the hell out of the 8B model that a $1,000 computer can run.

That's a very strange comment. Why would anyone run a dense model on a low-end computer? An 8B model only makes sense if you have a dGPU. And a Qwen3.6 or Gemma4 MoE isn't going to be "beaten the hell out of" for most tasks, especially if you use tools.

Finally, over the lifetime of your computer, the ChatGPT subscription will cost more than your reference computer itself! So the real question is whether you're better off with a $1,000 computer plus a ChatGPT subscription, or with a $2,000 computer (assuming a conservative four-year lifetime for the computer).

My Strix Halo desktop (which I paid ~1700€ for before OpenAI derailed the RAM market), paired with Qwen3.5, is a close replacement for a $200/month subscription, so the cost/benefit ratio strongly favors the local model in my use case.
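
Rough math behind that, assuming a four-year machine lifetime and treating € ≈ $ for simplicity:

    # Back-of-the-envelope cost comparison using the figures in this thread.
    months = 4 * 12

    chatgpt_plus = 20 * months   # $960 -- about the price of the $1,000 laptop
    chatgpt_pro = 200 * months   # $9,600 over four years
    strix_halo = 1700            # one-time, ~1700 EUR

    print(chatgpt_plus, chatgpt_pro, strix_halo)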

The complexity of following model releases and installing what's needed for self-hosting is a valid argument against local models, but that's absolutely not the same as saying local models are too bad to use (which is complete BS).

reply
I do think Nvidia isn't that badly priced; they still have dominance in training and a proven record of execution.

The biggest risk I see is Nvidia having delays, bad luck with R&D, or meh generations for long enough to depress their growth projections; and then everything gets revalued.

reply
Great! Can't wait to buy a decent GPU for inference for <$1k.
reply