If you're only concerned about identical binaries on x86, it's not too bad because AMD and Intel tend to have intentionally identical implementations of most floating point operations, with the exception of a few of the approximate reciprocal SSE instructions (rcpps, rsqrtps, etc). Modern x86 instructions tend to have their exact results strictly defined to avoid this kind of inconsistency: https://software.intel.com/en-us/articles/reference-implemen...
If you want this to work across ARM and x86 (or even multiple ARM vendors), you are screwed, and need to restrict yourself to using only the basic arithmetic operations and reimplement everything else yourself.
They could transparently load balance a user from one backend platform to another with zero visible difference to the user.
Is this problematic for WASM implementations? The WASM spec requires IEEE 754-2019 compliance with the exception of NaN bits. I guess that could be problematic if you're branching on NaN bits, or serializing, but ideally your code is mostly correct and you don't end up serializing NaN anyway.
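For anyone curious what "branching on NaN bits" means concretely, here's a minimal C sketch (helper names are made up): the spec pins down whether a value *is* a NaN, but not which of the many NaN bit patterns you get, so code that inspects or serializes the raw bits can diverge across implementations.

```c
#include <stdint.h>
#include <string.h>
#include <math.h>

/* Read the raw bit pattern of a double (type-pun safely via memcpy). */
uint64_t bits_of(double d)
{
    uint64_t u;
    memcpy(&u, &d, sizeof u);
    return u;
}

/* A double is NaN iff all 11 exponent bits are set and the mantissa is
 * nonzero. WHICH nonzero mantissa (the payload) and the sign bit are
 * unspecified, so serializing bits_of(some_nan) is nondeterministic. */
int is_nan_bits(uint64_t u)
{
    return ((u >> 52) & 0x7FF) == 0x7FF && (u & 0xFFFFFFFFFFFFFULL) != 0;
}
```
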
On older versions of DirectX (maybe even in some modern Windows APIs?) there were cases where it would internally change the FPU mode, causing chaos for callers trying to use floats deterministically[1].
[1] https://gafferongames.com/post/floating_point_determinism/ (see the Elijah quote, especially)
Works fine.
This is not a small code base, and no particular care has been taken with the floating point operations used.
that RECIP14 link is AVX-512, i.e. not available on a bunch of hardware (incl. the newest Intel client CPUs), so you wouldn't ever use it in a deterministic-simulation multiplayer game anyway, even if you restrict yourself to x86-64-only; so you're still stuck with the basic IEEE-754 ops even on x86-64.
x86-64 is worse than aarch64 in a very important aspect - baseline x86-64 doesn't have fused multiply-add, whereas aarch64 does (granted, the x86-64 FMA extension came out not long after aarch64/armv8, but it's still a concern, such is life). Of course you can choose to not use fma, but that's throwing perf away. (regardless you'll want -ffp-contract=off or equivalent to make sure compiler optimizations don't screw things up, so any such use will need to be manual fma calls anyway)
Cool to see that the game is owned by Coffee Stain now, too. Satisfactory has been handled well by them, so I'm optimistic about the future of Teardown as well.
Dennis Gustafsson – Parallelizing the physics solver – BSC 2025
Anyway I recently bought it because of multiplayer. Can’t wait to try it out.
I have been trying to figure out a way to do physics completely in voxel space to ensure a global grid. But I have not been able to find any theory of Newtonian Mechanics that would work in discretised space (Movable Cellular Automata was the closest). I wonder if anyone in the Teardown dev team tried to solve this problem?
I tried this on a local project. It looks very jank and the math falls apart quickly. Unfortunately, using a fixed axis-aligned grid for rotating reference frames is not practical.
One thing I wanted to try but didn't was to use dynamic axes. So once an entity is created (that is, a group of voxels not attached to the world grid), it gets its own grid that can rotate relative to the world grid. The challenge would be collision detection between two unaligned grids of voxels. Converting the group to a mesh, like Teardown does, would probably be the easiest and most effective way, unless you want to invent some new game-physics math!
Not sure what you mean by the claim that Newtonian Mechanics doesn't work in discretised space? I know there are plenty of codes that discretise space and solve fluid mechanics problems, and that's all Newtonian physics.
Of course you need a quite high resolution (compared to the voxel grid in teardown) when you discretise for it to come out like it does in reality, but if you truly want discretised physics on the same coarse scale as the voxels in teardown you can just run these methods and accept it looks weird.
i had brainstormed a bit on a similar problem (non-world-aligned "dynamic debris" voxels in a destructible environment). One of the ideas that came up was to use a physics solver like the NVIDIA PhysX Flex SDK.
https://developer.nvidia.com/flex

* 12 years old, but it still runs on modern GPUs and is quite interesting in itself as a demo

* If you run it, consider turning on the "debug view"; it will show the collision primitives instead of the shapes.
General-purpose physics engine solvers aren't that GPU-friendly, but if the only primitive shape being simulated is the sphere (cubes are made of a few small spheres; everything is a bunch of spheres), the efficiency of the simulation improves quite a bit (no need for conditional treatment of collision pairs like sphere+cube, cube+cylinder, cylinder+sphere, and so on).
I wondered if it could be solved by having a single sphere per voxel, considering only the voxels at the surface of the physically simulated object.
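The appeal of the spheres-only idea is how small the narrow phase becomes — one distance test, no shape-pair special cases. A minimal C sketch (types and names are hypothetical):

```c
/* Hypothetical sphere-per-surface-voxel collision primitive. */
typedef struct { double x, y, z, r; } Sphere;

/* Two spheres collide iff the distance between centers is at most the
 * sum of radii. Comparing squared distances avoids a sqrt. */
int spheres_collide(Sphere a, Sphere b)
{
    double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    double rsum = a.r + b.r;
    return dx*dx + dy*dy + dz*dz <= rsum*rsum;
}
```

The same single function serves every pair in the scene, which is what makes the approach map well onto a GPU.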
Maybe you could simulate physics but completely constrain any rotation? Then you’d have falling stuff, and it could move linearly (still moving in 3d space but snapping to the world grid for display purposes)?
I’ve tried to load teardown levels in a homegrown engine and I always end up stuttering like hell as soon as GI becomes involved (or even before that).
I’m going to finally manage to replicate it, and then the new engine will be released and raise the bar again xD
Because it looks like your opponent is a Swedish former demoscener who started programming at age 12 on the C64 and Amiga computers in 1990, quickly moving on to writing games and demos in assembly, then professionally developing physics engines since 2001, specializing in game performance profiling and squeezing performance out of optimized mobile games.
As far as game dev stereotypes go you basically picked a Final Boss fight. Good luck, you'll need it :p
> Due to protection of web servers from repeated attacks, we were forced to restrict access to administrative interface of web pages to selected countries. If you are currently in a foreign country, please sign in to WebAdmin, proceed to your domain management and disable this GeoIP filter in OneClick Installer section.
[...]
3. Record the deterministic command stream, pass it to the joining client, and have that client apply all changes to the loaded scene before joining the game. The amount of data is much smaller than in option 2 since we’re not sending any voxel data, but applying the changes can take a while since it involves a lot of computation.
Once we started investigating option 3 we realized it was actually less data than we anticipated, but we still limit the buffer size and disable join-in-progress when it fills up. This allows late joins up to a certain amount of scene changes, beyond which applying the commands would simply take an unreasonably long time."
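The bounded-buffer behavior described in the quote can be sketched in C like this (all types, names, and the size limit are my own illustration, not Teardown's actual code):

```c
#include <stddef.h>

/* Hypothetical bounded log of deterministic world-mutating commands.
 * A late joiner replays the log on top of the loaded level; once the
 * log fills, join-in-progress is disabled, matching the quoted design. */
#define MAX_COMMANDS 4096

typedef struct { int opcode; int args[4]; } Command;

typedef struct {
    Command buf[MAX_COMMANDS];
    size_t count;
    int join_enabled;
} CommandLog;

/* Record one command; on overflow, disable late join instead of growing. */
int log_command(CommandLog *log, Command c)
{
    if (log->count == MAX_COMMANDS) {
        log->join_enabled = 0; /* too much history to replay in time */
        return 0;
    }
    log->buf[log->count++] = c;
    return 1;
}

/* A joining client applies every recorded command in order; determinism
 * guarantees it converges to the host's scene state. */
void replay(const CommandLog *log, void (*apply)(const Command *))
{
    for (size_t i = 0; i < log->count; i++)
        apply(&log->buf[i]);
}
```
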
So [1] is not an option for players who want to do it that way?
But if you've got a solution for [3] that works completely correctly, then writing lots of code for [1] becomes redundant, even with save/load code sitting right there. Might as well start from the beginning and replay it anyhow.
One of the things I will often do that I sometimes have to explain to my fellow engineers is bound the rate at which we'll have to handle certain changes based on what is making them. If you know that you've got a human on the other end of some system, they can only click and type and enter text so quickly. Yes, you still need to account for things like copy & paste if that's an issue for your system, where they may suddenly roll up with a novel's worth of text, but you know they can't be sending you War and Peace sixty times a second. You can make a lot of good scaling decisions around how many resources a user needs when you remember that. The bitrate coming out of a human being is generally fairly small; we do our human magic with the concatenation of lots of bits over lots of time but the bitrate itself is generally small. For all that Teardown is amazingly more technically complicated than Doom, the "list of instructions the humans gave to the game" is not necessarily all that much larger than a Doom demo (which is itself a recording of inputs that gets played back, not a video file), because even Doom was already brushing up on the limits of how many bits-per-second we humans can emit.
Obviously that won't scale if you intend to have dozens of players constantly joining a server rather than a "friends only" (or whatever more constrained scenario) where players only occasionally join mid game.
That said, I haven't played any of the more intricate mods out there, but I can see how it would become more of an issue.
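The back-of-envelope number behind the input-bitrate claim, sketched in C (Doom's demo format stores roughly 4 bytes of input per player per tic at 35 tics per second — treat these constants as approximations):

```c
/* Rough Doom-demo-style input recording cost: one small command per tic. */
enum { TICRATE = 35, BYTES_PER_CMD = 4 };

/* Bytes needed to record one player's inputs for a given duration. */
unsigned demo_bytes(unsigned seconds)
{
    return seconds * TICRATE * BYTES_PER_CMD;
}
```

At ~140 bytes per second, an hour of a single player's inputs is about half a megabyte — which is why replaying the human-generated command stream stays cheap even for a game far more complex than Doom.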
The "Reliable vs Unreliable" section implies that different parts of the scene are sent using a strict-ordering protocol so that the transforms happen in the same order on every client, but other parts happen in a state update stream with per client queueing.
But which is which? Which events are sent TCP and which are UDP (and is that literally what they're doing, or only a metaphor?)
Really the economy of the text in the blog seems backwards: this section has one short paragraph explaining the concept of deterministic event ordering as important for keeping things straight, and then 3 paragraphs about how player position and velocity are synced the same way as in any other game. I want to read more about the part that makes Teardown unique!
I presume you know this, but maybe for others: judging by what was written in that paragraph, I'd indeed assume he means the same paradigm that has been driving replicated real-time simulations since at least QuakeWorld - some world-state updates (in this case _"object transforms, velocities, and player positions"_, among others) don't have to be reliable, because in case you miss one and need a retransmit, by the time you receive it, it's already invalid - the next tick of the simulation has already been processed and you will get updated info anyway (IIRC this is essentially John Carmack's insight from his .plan files).
The command messages (player operations and events) _need_ to be reliable, because they essentially serve as the source of truth for the state of the world. Each game client depends on them to maintain a state that is the same for everyone (assuming the determinism the Teardown team has been working to ensure).
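The split described above can be sketched in C (the enum, struct, and tick scheme are my own illustration of the general pattern, not Teardown's protocol): simulation-mutating commands go on a reliable, ordered channel, while state snapshots go unreliable and stale ones are simply dropped on receipt.

```c
/* Hypothetical two-channel message tagging for a replicated simulation. */
typedef enum { CH_RELIABLE_ORDERED, CH_UNRELIABLE } Channel;

typedef struct {
    unsigned tick;    /* simulation tick the message belongs to */
    Channel channel;
} Message;

/* Reliable commands are always applied (the transport guarantees order).
 * Unreliable snapshots are discarded if a newer tick was already applied:
 * by the time a retransmit arrived, it would be stale anyway. */
int should_apply(const Message *m, unsigned last_applied_tick)
{
    if (m->channel == CH_RELIABLE_ORDERED)
        return 1;
    return m->tick > last_applied_tick;
}
```

This is why "missing" an unreliable update is harmless: the next snapshot supersedes it, while the command stream alone must never drop a message.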
I would be surprised if they actually had TCP at all
https://www.youtube.com/watch?v=XfcCyMQ13XM
(Edit: fixed link)
Who wants to play a game with 50ms+ keypress to screen update delay? Sounds miserable.
That would have a number of advantages, come to think of it. For starters, install size could be much lower, piracy would be a non-issue, and there would be no need to worry about cross-platform development concerns.
However, Teardown is in the set of games where it just barely works, and only if all the stars and the moon align. I'd characterize it something like this: cloud gaming spends 100% of the margin, so if anything, anything at all, goes wrong, it doesn't work very well.
(Plus, as excited as the companies are about locking us into subscriptions rather than purchases that we own, when it comes time to actually pay for the service they are delivering they sure do like to skimp, because it turns out it's kind of expensive to dedicate the equivalent of a high-end gaming console per person. Most stuff that lives in the cloud, a single user averages using a vanishing fraction of a machine over time, not consuming entire servers at a time. Which doesn't pair well with "you spent 100% of the margin just getting cloud gaming to work at all".)
Teardown, visually speaking, is a pretty noisy game at times, and doesn't offer great visual clarity when streamed at real-time-encoding bitrates during those noisy moments.
FPS mouse+keyboard is also one of the worst-case scenarios for Moonlight/GFNow/etc. remote play, because first person aiming with a mouse relies very heavily on a tight input-vision feedback loop, and first person camera movement is much harder to encode while preserving detail relative to, say, a static-camera overhead view, or even third person games where full-scene panning is slower and more infrequent.
Other comments are worrying about the streaming and latency but local split screen could also be another use case here.
Douche bags.