This is what I did for BeatScratch! https://beatscratch.io

My music model is all Protobuf messages, which go from Dart/Flutter land to Kotlin/C/Swift/JS audio backends on target platforms. I also use Protobuf for saving and sharing. It’s been incredibly resilient and performant.
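The general shape, sketched in Rust with prost purely for illustration (the real code is Dart/Kotlin/Swift/JS, and Note here is a made-up message): one message type, and the same encoded bytes cross the FFI boundary, go over the wire, or get written to disk.

    use prost::Message;

    // Hypothetical message type; normally generated from a .proto file.
    #[derive(Clone, PartialEq, prost::Message)]
    struct Note {
        #[prost(uint32, tag = "1")]
        pitch: u32,
        #[prost(uint32, tag = "2")]
        velocity: u32,
    }

    fn main() -> std::io::Result<()> {
        let note = Note { pitch: 60, velocity: 100 };

        // One encoding serves every consumer: the bytes can be handed to a
        // native audio backend over FFI, sent to another device, or saved.
        let bytes = note.encode_to_vec();
        std::fs::write("note.pb", &bytes)?;

        // Any backend that shares the schema decodes them back.
        let roundtrip = Note::decode(bytes.as_slice()).expect("valid message");
        assert_eq!(roundtrip, note);
        Ok(())
    }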

reply
I don't understand what you mean by frontend and backend when you mention FFI. Is this backend on a remote server, or just part of the same app?

I used Protobuf with Rust: I had a Rust client that spoke to my Flutter frontend via D-Bus. The Rust client connected to my remote server via a WebSocket, and all messages were wrapped in Protobuf and sent as binary. It made everything a lot more concrete... but it basically forced me to build my own, much shittier version of gRPC, because if the WAN connection was ever killed, the client was notified too late and you'd end up with missing messages once the network buffer filled. We added a message ID and acknowledgement process, with SQLite backing up each message.

I still have nightmares about why I built that.
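Roughly, the outbox part boiled down to something like this (a from-memory sketch with rusqlite, not the actual code; the WireMessage envelope and table layout are made up):

    use rusqlite::{params, Connection};

    // Every outgoing protobuf payload gets a monotonically increasing id
    // so the other side can acknowledge it.
    struct WireMessage {
        id: i64,
        payload: Vec<u8>, // already-encoded protobuf bytes
    }

    fn init(conn: &Connection) -> rusqlite::Result<()> {
        conn.execute(
            "CREATE TABLE IF NOT EXISTS outbox (id INTEGER PRIMARY KEY, payload BLOB NOT NULL)",
            [],
        )?;
        Ok(())
    }

    // Persist before sending, so a dead socket can't silently lose a message.
    fn enqueue(conn: &Connection, payload: Vec<u8>) -> rusqlite::Result<WireMessage> {
        conn.execute("INSERT INTO outbox (payload) VALUES (?1)", params![payload])?;
        Ok(WireMessage { id: conn.last_insert_rowid(), payload })
    }

    // Only an explicit ack from the peer removes a message from the outbox.
    fn ack(conn: &Connection, id: i64) -> rusqlite::Result<()> {
        conn.execute("DELETE FROM outbox WHERE id = ?1", params![id])?;
        Ok(())
    }

    // On reconnect, everything still in the outbox gets resent.
    fn pending(conn: &Connection) -> rusqlite::Result<Vec<WireMessage>> {
        let mut stmt = conn.prepare("SELECT id, payload FROM outbox ORDER BY id")?;
        let rows = stmt.query_map([], |row| {
            Ok(WireMessage { id: row.get(0)?, payload: row.get(1)? })
        })?;
        rows.collect()
    }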

reply
Have you considered just using gRPC in this case? You gain 100% language separation (no FFI) and remote client/server at the cost of a little more call overhead.
reply
Most gRPC implementations buffer the entire payload in memory for large unary responses. They are not really written by people who care about performance. It's dumb, because the Protobuf binary wire format is perfectly designed for server-side incremental marshaling.
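For anyone wondering why that is: concatenating two valid encodings of the same message type decodes as their merge, so a server can marshal and flush repeated fields in chunks. A quick sketch with prost in Rust (the Rows message is made up):

    use prost::Message;

    // Made-up message with a repeated field, as if generated from a .proto file.
    #[derive(Clone, PartialEq, prost::Message)]
    struct Rows {
        #[prost(string, repeated, tag = "1")]
        items: Vec<String>,
    }

    fn main() {
        // Encode two chunks of the response independently...
        let first = Rows { items: vec!["row1".into(), "row2".into()] };
        let second = Rows { items: vec!["row3".into()] };

        let mut wire = first.encode_to_vec();
        wire.extend_from_slice(&second.encode_to_vec());

        // ...and the concatenated bytes decode as the merged message, so a
        // server could stream a large repeated field out in pieces instead
        // of buffering the whole unary response in memory.
        let merged = Rows::decode(wire.as_slice()).unwrap();
        assert_eq!(merged.items, vec!["row1", "row2", "row3"]);
    }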
reply
Performance is relative. gRPC is plenty fast enough for my use case, and for that matter, almost all client/server use cases that work across the Internet. If a JavaScript web client against a REST backend is fast enough latency-wise, then a local gRPC connection on a single PC is gonna feel like greased lightning. Of course, there will be a few scenarios where tight coupling of client and server is required for good enough performance, but they are few and far between.
reply
Not OP, but in the same situation. Not every platform can run gRPC over localhost easily or without extra privileges.

I used to use protobuf but now I just use JSON, over stdin/stdout on desktop. It’s honestly quite good.
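A minimal sketch of the idea in Rust with serde_json (newline-delimited JSON; the Request/Response shapes are just examples):

    use std::io::{self, BufRead, Write};

    use serde::{Deserialize, Serialize};

    // Example request/response shapes; the real protocol is whatever you define.
    #[derive(Deserialize)]
    struct Request {
        op: String,
        value: i64,
    }

    #[derive(Serialize)]
    struct Response {
        ok: bool,
        result: i64,
    }

    fn main() -> io::Result<()> {
        let stdin = io::stdin();
        let mut stdout = io::stdout();

        // One JSON document per line in, one per line out. The UI just spawns
        // this as a child process and talks over the pipes: no ports, no
        // privileges, no firewall prompts.
        for line in stdin.lock().lines() {
            let req: Request = serde_json::from_str(&line?).expect("valid request");
            let result = if req.op == "double" { req.value * 2 } else { req.value };
            let resp = Response { ok: true, result };
            writeln!(stdout, "{}", serde_json::to_string(&resp).expect("serializable"))?;
            stdout.flush()?;
        }
        Ok(())
    }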

reply
Which platforms? My product runs gRPC client/server on macOS, Linux and Windows. No issues with privileges. Or are you trying to run it on port 443? Yeah, don't do that, run it on 8443 or whatever instead.
reply
Then you have to deal with port collisions when some other software wants to use that port. And keeping a port open without any authentication is terrible for security, even if it only binds on localhost, so you have to find some secure way to share a key between the client and server.

Personally I wish we could just use UNIX sockets for "localhost-only TCP", but software support is just not there.
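At the plain-socket level the swap is trivial, which is what makes the missing library support so annoying. A Rust sketch (Unix-only; the socket path is made up):

    use std::io::Read;
    use std::net::TcpListener;
    use std::os::unix::net::UnixListener;

    fn main() -> std::io::Result<()> {
        // "Localhost-only TCP": anything on the machine can connect, and you
        // still have to pick a port and hope nothing else grabbed it.
        let _tcp = TcpListener::bind("127.0.0.1:8443")?;

        // UNIX domain socket: no port to collide on, and access is controlled
        // by ordinary file permissions on the socket path.
        let uds = UnixListener::bind("/tmp/myapp.sock")?;

        let (mut stream, _addr) = uds.accept()?;
        let mut buf = Vec::new();
        stream.read_to_end(&mut buf)?;
        println!("got {} bytes over the socket", buf.len());
        Ok(())
    }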

reply
I don't worry about security too much given it is just bound to localhost, but I do use a simple password (and make it modifiable by the user). Avoiding port collisions in the real world isn't a big issue: just ask an AI for the least-assigned default ports and the chance of collision is minor (and in the worst case, the port is also user-modifiable). In return, you get free "remotability", which is kind of a big deal IMO.
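The password check itself can be tiny. Roughly what it looks like as a gRPC interceptor with tonic in Rust (just a sketch; the secret and the generated service name are placeholders):

    use tonic::{metadata::MetadataValue, Request, Status};

    // Rejects any request whose "authorization" metadata doesn't match the
    // user-configurable shared secret.
    fn check_auth(req: Request<()>) -> Result<Request<()>, Status> {
        // In a real app this would come from user config, not a literal.
        let expected: MetadataValue<_> = "my-shared-secret".parse().unwrap();

        match req.metadata().get("authorization") {
            Some(token) if expected == token => Ok(req),
            _ => Err(Status::unauthenticated("missing or invalid password")),
        }
    }

    // Attached when building the server, e.g.:
    // let svc = MyServiceServer::with_interceptor(MyService::default(), check_auth);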

I do wish gRPC allowed for easy use of UNIX domain sockets and perhaps named pipes, however. Sometimes all you need is IPC, but in my case, I'm happy to have remote usage built in.

reply
Why not ConnectRPC? It's basically gRPC but without all the strange requirements for exotic HTTP features.
reply
I actually use this currently. Not nearly as many platforms, but you can always fall back to gRPC.
reply
Hah, yeah. I just did a deep dive into Protobuf and RPC for an embedded application. I left having learned a lot, and with a headache. Part of it was because this was using heapless, and I got errors until I configured the generator to use the right Vec sizes.
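The headache, boiled down to plain heapless (this isn't the generator's output, and the capacity of 4 is arbitrary): every repeated or bytes field becomes a fixed-capacity heapless::Vec, and if the configured capacity is smaller than what actually arrives, the push fails and the decode errors out.

    use heapless::Vec;

    fn main() {
        // What a generated repeated field effectively is: a fixed-capacity
        // vector whose size comes from the generator config, not the data.
        let mut items: Vec<u8, 4> = Vec::new();

        // Four elements fit...
        for byte in [1u8, 2, 3, 4] {
            items.push(byte).expect("within configured capacity");
        }

        // ...the fifth doesn't. In a generated decoder this surfaces as a
        // decode error, which is why the sizes have to match the real payload.
        assert!(items.push(5).is_err());
    }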
reply
That's a perfectly fine approach; Protobuf's strength is exactly this kind of use case.
reply