by flippant 7 hours ago | comments

dnautics 3 hours ago | [-]
No problem. It's an SLM; I have a dedicated on-prem GPU server that I deploy behind Tailscale for inference. For training, I reach out to Lambda Labs and just rent a beefy GPU for a few hours for the cost of a Starbucks coffee.
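The "Starbucks coffee" claim checks out as rough arithmetic, sketched below with assumed numbers (the hourly rate and session length are illustrative, not quoted from the comment; on-demand single-GPU cloud pricing is commonly in the low single-digit dollars per hour):

```python
# Back-of-envelope cost for a short GPU rental, per the setup described above.
# All numbers are assumptions for illustration, not quoted pricing.
hourly_rate = 1.50   # assumed $/hr for one rented cloud GPU
hours = 3            # "a few hours"
total = hourly_rate * hours
print(f"${total:.2f}")  # in the range of a fancy coffee order
```

At these assumed rates, a few hours of training lands around $4 to $5, which matches the comment's comparison.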