upvote
I'm not sure I follow, you develop code on a remote machine by speaking to your phone and are unimpressed by the result?
reply
It's not that I'm unimpressed by the results, it's that I think I'm saving time by pushing the agent along remotely, but the reality is that my messages to the agent(s) end up being a lot shorter, which inevitably leaves more open to interpretation.

Don't get me wrong, I still use Codex (and sometimes Claude Code) remotely every day, and am overall excited for this release, it's just that the benefit wasn't as high as I had initially hoped.

Part of this is due to the models getting better (no need to prod along with "continue"), and part of this is the nature of how I use my phone (short bursts of attention).

But again, maybe I'm just old and prefer big screens with a keyboard.

reply
Just...write longer messages. Maybe it's age, but I've written huge forum posts, such as on HN, all from my phone, often with multiple tabs open to source various links for footnotes. When I type for an LLM, I'll type a lot too if needed, and will often even type a little, wait to think, then continue, over the course of maybe 15 minutes, so that the intention of the prompt is correct. That saves much more time and produces better results than shorter messages.

I think you just need to type more rather than feeling constricted, as it's actually a form of liberation, to produce (or have an AI produce, whatever) something from wherever you are rather than needing to sit down on a laptop where you're gonna be waiting around anyway.

What tunnel setup do you use by the way? I'm on Android so it's kind of annoying all the LLM remote coding apps are iOS only.

reply
Oh, I agree completely. I avoid loose language, revise my wording, and usually write prompts that require scrolling on mobile.

It isn’t so much that I feel restricted; I guess it’s that mobile wasn’t as big of a game changer as it was ~6 months ago.

My bandwidth feels more restricted by my own cognitive capacity (usually due to context switching) than by the limits of the model itself, and the mobile interface makes that worse.

I’ve recently found myself reserving larger tasks for “keyboard time” and reverting my thinking back to notes (on mobile), which I’ll then formulate to the LLM at some future time.

> What tunnel setup do you use by the way?

I “vibecoded” an agentic runtime that operates my machine generally (including TUIs like Codex/Claude Code), which I connect through a custom proxy and mobile app (both also vibecoded).

I previously tried Cloudflare Tunnels and an SSH setup, but it all felt a bit hacky.

Unfortunately the app is iOS only, but I could open source it and you’d probably be able to make an Android clone quickly (:

reply
I've been coding on Android for a few months, mostly while walking around outside or showering. I'm on a mix of Tailscale + Termux + ssh server + tmux + codex CLI; Tailscale is great.
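For anyone curious, a minimal sketch of that kind of setup might look like the following. The hostname "devbox", the user name, and the tmux session name "codex" are all placeholders, and this assumes Tailscale, tmux, and the codex CLI are already installed on the dev machine:

```shell
# On the dev machine (one-time): join the tailnet, then start the
# codex CLI inside a detached tmux session so it survives disconnects.
sudo tailscale up
tmux new-session -d -s codex 'codex'

# On the phone, inside Termux: install an ssh client, then reach the
# machine over the tailnet and reattach to the same session.
pkg install openssh
ssh user@devbox -t 'tmux attach -t codex'
```

Because the agent runs inside tmux on the remote machine, dropping the phone connection (or switching apps) doesn't kill the session; reconnecting just reattaches to wherever it left off.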

I think you may be able to optimize your workflow more by drafting your prompt in ChatGPT first; get it to expand out the intent for you. Doing that has made phone coding a lot more tolerable for me.

I like to think that I've given phone coding a fair shot (and I continue to do it), but I agree with the other poster that there's something about the lack of a keyboard that really gets to me :) I wish I knew what it was.

reply
They are unimpressed by their (current) ability to use it, not the technology.
reply
the ums are exactly the sign that you speak much faster than you type, so you need a pause for your thoughts to catch up
reply
I've been trying voxtype (using whisper models) lately, and to my surprise all my ums are filtered out. It's really good now actually!
reply
I don't see any way to use that on a phone.
reply
Wispr flow cuts out ums. I love it
reply
the main thing is functionality, you can always work around the ergonomics
reply