Yes. A web browser can't just read a .zip file as a web page. (Even if a browser decided to download it, decompress it, and open a GUI file browser, you'd still just get a list of files to click.) So far from satisfying the trilemma, it just doesn't work.
And if you fix that, you still generally have to choose between giving up single-file and giving up efficiency. (You can serve a split-up HTML page out of a single ZIP with some server-side software, which gets you efficiency, but now it's no longer single-file; and vice-versa, because if you serve the raw ZIP, how does the browser stop downloading and fetch only the parts it needs?)
Now, maybe you mean something like, 'a web server could additionally run some special CGI software or a plugin or do some fancy Lua scripting to munge a ZIP and split it up on the fly, serving it to clients as a regular, efficient multi-file HTML page'. Sure. I already cover that in the writeup: we seriously considered this, and got as far as writing a Lua nginx script to support special range requests. But then... it's not single-file. It's multi-file: the extra file is whatever that additional special config file, script, plugin, or executable is.
Tar is sequential. Each entry header sits right before its data. If the JSON manifest in the Gwtar preamble says an asset lives at byte offset N with size M, the browser fires one Range request and gets exactly those bytes.
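To make that concrete, here's a minimal sketch of the browser side, assuming a manifest entry with `path`/`offset`/`size` fields (the field names and the manifest shape are illustrative, not the actual Gwtar format):

```typescript
// Illustrative sketch, not the real Gwtar loader: fetch one asset out of the
// archive with a single HTTP Range request. Field names are assumptions.
interface ManifestEntry {
  path: string;   // asset path inside the archive
  offset: number; // byte offset N of the raw entry data within the tar
  size: number;   // byte length M of the asset
}

async function fetchAsset(tarUrl: string, entry: ManifestEntry): Promise<Blob> {
  // One Range request: bytes N through N+M-1, inclusive.
  const res = await fetch(tarUrl, {
    headers: { Range: `bytes=${entry.offset}-${entry.offset + entry.size - 1}` },
  });
  if (res.status !== 206) {
    // 206 Partial Content means the server honored the Range header.
    throw new Error(`Range request not honored: HTTP ${res.status}`);
  }
  return res.blob(); // exactly the asset's bytes; no unpacking step needed
}
```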
The other problem is decompression. Zip entries are individually deflate-compressed, so you'd need a JS inflate library in the self-extracting header. Tar entries are raw bytes, so the header script just slices at known offsets. Needing no decompression code is what keeps the preamble small.
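And a matching sketch of why the preamble stays small: with raw tar entries, 'extraction' is nothing more than `Blob.slice` at known offsets (same illustrative manifest shape as above, not the real Gwtar one; a ZIP would need a JS inflate library such as pako at the marked line instead):

```typescript
// Illustrative sketch (assumed manifest shape, not the actual Gwtar one):
// tar entries are raw bytes, so "extracting" them is a plain Blob.slice.
interface Entry {
  path: string;   // asset path inside the archive
  offset: number; // byte offset of the raw entry data
  size: number;   // byte length of the asset
  type?: string;  // optional MIME type for the resulting Blob
}

function extractAll(archive: Blob, manifest: Entry[]): Map<string, string> {
  const urls = new Map<string, string>();
  for (const e of manifest) {
    // Blob.slice is lazy: it makes a view of the bytes, copying nothing yet.
    // A ZIP would instead need e.g. pako.inflateRaw() on each entry here.
    const chunk = archive.slice(e.offset, e.offset + e.size, e.type ?? "");
    // Hand the asset to the page as an object URL (usable in <img src>, etc.).
    urls.set(e.path, URL.createObjectURL(chunk));
  }
  return urls;
}
```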