It IS significantly slower, about 3.5 minutes on my MacBook vs seconds on an H100. That's partly the pure-PyTorch backend overhead and partly just the hardware difference.
For my use case the tradeoff works -- I can iterate locally without paying for cloud GPUs or waiting in queues.