I did give up on OpenCode Go (GLM 5), though, as it was noticeably slower.
You need a reasonable pace for the chit-chat stages of a task; I don't care if the execution then takes a while.
You even have models you can run locally that outperform models from a year or so ago.
You'll still most likely need a top-of-the-line laptop to run them.