points | by drob518 5 hours ago | comments
by bigyabai 1 hour ago | [-]
For FP16-native training of 100B+ models, you will probably still be offloading to swap unless you've got a $150,000 RDMA Mac Studio cluster. The workload would be deeply compute-constrained even if you could fit it in memory anyway.
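A rough sketch of the memory arithmetic behind this claim: FP16 weights alone for a 100B-parameter model need ~200 GB, and mixed-precision Adam training is commonly estimated at ~16 bytes per parameter (fp16 weights and gradients plus fp32 master weights, momentum, and variance), before counting activations. The helper below is illustrative, not from the thread:

```python
# Back-of-envelope training memory estimate (hypothetical helper).
# Assumes mixed-precision Adam: 2 (fp16 weights) + 2 (fp16 grads)
# + 4 + 4 + 4 (fp32 master weights, momentum, variance) = 16 bytes/param.
# Activations and framework overhead are excluded.

def training_memory_gb(n_params: float, bytes_per_param: int = 16) -> float:
    """Return estimated memory in GB for the given parameter count."""
    return n_params * bytes_per_param / 1e9

params = 100e9  # a 100B-parameter model
print(f"fp16 weights alone:  {training_memory_gb(params, 2):.0f} GB")  # 200 GB
print(f"Adam training state: {training_memory_gb(params):.0f} GB")     # 1600 GB
```

Either figure is far beyond a single machine's RAM, which is why the comment expects swap offloading without a large cluster.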