> Cheaters are by definition anomalies

So are very good players, very bad players, players with weird hardware issues, players who just got one in a million lucky…

When you have enough randomly distributed variables, by the law of large numbers some of them will be anomalous by pure chance. You can't just look at any statistical anomaly and declare it must mean something without investigating further.

In science, looking at a huge number of variables and hunting for one or two statistically significant ones so you can publish a paper is called p-hacking. This is why there are so many dubious and often even contradictory "health condition linked to X" articles.
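A toy simulation makes the point concrete. Under the null hypothesis p-values are uniform on [0, 1], so testing 200 unrelated noise variables at the usual 0.05 threshold still yields around ten "significant" findings. (Everything here is made up for illustration.)

```python
import random

random.seed(0)

def fake_p_value():
    # Stand-in for a significance test run on pure noise: under the
    # null hypothesis, p-values are uniformly distributed on [0, 1].
    return random.random()

# Test 200 unrelated "health variables" against the same outcome.
p_values = [fake_p_value() for _ in range(200)]
false_positives = sum(1 for p in p_values if p < 0.05)

# With a 0.05 threshold we expect ~10 spurious "significant" results
# out of 200 tests, even though every variable is pure noise.
print(false_positives)
```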

reply
> So are very good players, very bad players, players with weird hardware issues, players who just got one in a million lucky…

They will all cluster in very different latent spaces.

You don't automatically ban anomalies, you classify them. Once you have the data and a set of known cheaters you ask the model who else looks like the known cheaters.

Online games are in a position to collect a lot of data and to also actively probe players for more specific data such as their reactions to stimuli only cheaters should see.
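A minimal sketch of the classify-don't-ban idea, using a nearest-centroid rule over two invented features (reaction time, headshot rate). All numbers and labels are hypothetical; a real system would use far richer signals and a proper model.

```python
import math

# Toy feature vectors: (reaction_time_ms, headshot_rate). Values are
# invented for illustration only.
known_cheaters = [(90.0, 0.85), (100.0, 0.80), (95.0, 0.90)]
known_pros     = [(160.0, 0.55), (150.0, 0.60), (170.0, 0.50)]

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def classify(player):
    # Nearest-centroid: anomalous players get assigned to whichever
    # labelled cluster they actually resemble, instead of being banned
    # just for being anomalous.
    d_cheat = dist(player, centroid(known_cheaters))
    d_pro = dist(player, centroid(known_pros))
    return "looks-like-cheater" if d_cheat < d_pro else "looks-like-pro"

print(classify((98.0, 0.88)))   # near the cheater cluster
print(classify((165.0, 0.52)))  # near the pro cluster
```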

reply
Valve has already tried this with VACNET if I am not mistaken. Judging by how big the cheating problem still is, they were not very successful.
reply
For competitive gaming this becomes a problem.

But a good way of solving this in community managed multiplayer games is this: if a player is extremely good to the point where it’s destroying the fun of every other player: just kick them out.

Unfair if they weren’t cheating? Sure. But they can go play against better players elsewhere. Dominating 63 other players and ruining their day isn’t a right. You don’t need to prove beyond reasonable doubt they’re cheating if you treat this as community moderation.

reply
> Dominating 63 other players and ruining their day isn’t a right.

It is, if you're not cheating and are in fact just that good. That's called competitive sport, which participants engage in voluntarily.

reply
It's like if Nikola Jokic showed up to your local court every day and consistently beat you day after day. You'd eventually give up because it's not fun anymore.

People who engage in competitive sports all agree to it. Most people want to play for fun. They have a natural right to do so.

reply
Why do you feel someone has a right to play anywhere?

If a community manages a server, it's basically private property. And community-managed servers are always superior to official publisher-managed servers. Anticheat - or just crowd management - is done hands-on in the server rather than automated, async, and centralized.

Buying the game might mean you have a ”right” to play it, but not on my server you don’t.

reply
Then you are kicking full-time streamers like Stodeh, tanking your game's chances of any kind of success.
reply
”Your game”? It’s a publisher making a game. If I’m kicking someone off my server I’m not asking EA/Ubisoft etc.

I’m talking about normal old-fashioned server administration, i.e. people hosting/renting their game infra and doing the administration: making rules, enforcing them by kicking and banning, and charging fees, either for VIP status (no queuing, etc.) or even to play at all.

reply
I've been advocating for a statistical honeypot model for a while now. This is a much more robust anti-cheat measure than even streaming/LAN gaming provides. If someone figures out a way to obtain access to information they shouldn't have on a regular basis, they will eventually be found with these techniques. The exact mechanism of cheating doesn't matter. This even catches the "undetectable" screen-scraping mouse-robot AI wizard stuff. Any amount of signal integrated over enough time can provide damning evidence.

> With that goal in mind, we released a patch as soon as we understood the method these cheats were using. This patch created a honeypot: a section of data inside the game client that would never be read during normal gameplay, but that could be read by these exploits. Each of the accounts banned today read from this "secret" area in the client, giving us extremely high confidence that every ban was well-deserved.

https://www.dota2.com/newsentry/3677788723152833273
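The ban-wave logic Valve describes reduces to something very simple. Here is a hedged sketch: `read_log` is a hypothetical telemetry feed of (account, address) reads reported by the client; the honeypot region is never read during normal gameplay, so any read from it is damning.

```python
# Minimal sketch of the honeypot ban wave described above. All names,
# addresses, and the telemetry mechanism are invented for illustration.
HONEYPOT_REGION = range(0x7F000, 0x7F100)  # never read by normal play

read_log = [
    ("account_a", 0x7F010),  # read inside the honeypot -> exploit
    ("account_b", 0x10020),  # ordinary gameplay read
    ("account_c", 0x7F0FF),  # read inside the honeypot -> exploit
]

banned = sorted({acct for acct, addr in read_log
                 if addr in HONEYPOT_REGION})
# Every account listed here read data with no legitimate use.
print(banned)
```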

reply
This is said very often, but doesn't seem to be working out in practice.

Valve has spent a lot of time and money on machine learning models which analyze demo files (all inputs). Yet Counter-Strike is still infested with cheaters. We can speculate that it's just a faulty implementation, but clearly the solution isn't simply "throw an ML model at the problem".

reply
Honeypots are used pretty often, sure. They're not enough, though useful.

Behavioral analysis is way harder in practice than it sounds, because most closet cheaters don't give off enough signal to stand out, and the clusters move fast: the way people play the game is always changing. It's not a metric-selection problem, as it might appear to an engineer; you need to watch the community dynamics. Currently only humans are able to do that.

reply
If you play with friends and your cheats cooperate, I don't think honeypots would be fool-proof any longer. Unless you all get the same fake data.
reply
In CS2, a huge portion of cheaters can be identified just by the single stat 'time-to-damage'. Cheaters will often be 100ms faster to react than even the fastest pros. Not all cheaters use their advantage in this way; some simply always make perfect choices because they have more information than their opponents.
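A sketch of what flagging on that single stat could look like. The samples, the 150 ms "human floor", and the use of a median are all assumptions for illustration, not how Valve actually does it.

```python
# Toy time-to-damage data: milliseconds between an opponent becoming
# visible and first damage dealt. All numbers are invented.
samples = {
    "pro_player": [180, 210, 195, 230, 205],
    "suspect":    [60, 75, 55, 80, 70],    # consistently superhuman
    "avg_player": [320, 280, 400, 350, 300],
}

HUMAN_FLOOR_MS = 150  # assumed plausible lower bound for human reaction

def median(xs):
    s = sorted(xs)
    return s[len(s) // 2]

# Use the median so a single lucky pre-fire doesn't flag a clean player;
# only a *consistently* impossible time-to-damage stands out.
flagged = [name for name, ttd in samples.items()
           if median(ttd) < HUMAN_FLOOR_MS]
print(flagged)
```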
reply
deleted
reply
I disagree with the premise that it doesn't matter as long as users can't tell. Say you're running a Counter-Strike tournament with a 10k purse... Integrity matters there. And a smart cheater is running 'stealth' in that situation. Think a basic radar or a verrrrrry light aimbot, etc.

The problem is that traditional cheats (aimbot, wallhack, etc.) give users such a huge edge that they are multiple standard deviations from the norm on key metrics. I agree with you on that and there are anticheats that look for that exact thing.

I've also seen anticheats where flagged users have a session reviewed. E.g. you review a session with "cheats enabled" overlaid and try to determine whether you think the user is cheating. This works decently well in a game like CS where, over a large enough sample size, you can be reasonably confident whether a user is playing corners correctly, etc.

The issue with probing for game world entities is that at some point, you have to resolve it in the client. E.g. "this is a fake player, store it in memory next to the other player entities but don't render this one on screen." This exact thing has happened in multiple games, and has worked as a temporary solution. At the end of the day, it ends up being a cat and mouse game. Cheat developers detect this and use the same resolution logic as the game client does. Memory addresses change, etc. and the users are blocked from cheating for a few hours or a few days, but the developer patches and boom, off to the races.

These days game hacks are a huge business. Cheats are often offered as a subscription and can run anywhere from $10 to hundreds of dollars a month. Some of the larger cheat manufacturers are full-blown companies with tens of thousands of customers.

I think you're realistically left with two options. Require in-person LAN matches with hardware provided by the tournament which is tamper-resistant. Or run on a system so locked down that cheats don't exist.

Both have their own problems... In-person eliminates most of that risk but it's always possible to exploit. Running on a system which is super locked down (say, the most recent playstation) probably works, until someone has a 0day tucked away that they hoard specifically for their advantage. An unlikely scenario but with the money involved in some esports... Anything is possible.

https://www.documentcloud.org/documents/24698335-la22cv00051...

reply
> End of the day, it ends up being a cat and mouse game. Cheat developers detect this and use the same resolution logic as the game client does.

That just means the honeypot was poorly implemented. Only the server should be able to tell what the honeypot is. The point is to spawn an entity for one or more clients that is 100% real to them, but doesn't matter because without cheats it has no impact on them whatsoever. When the world evolves such that an impact becomes more likely, you de-spawn it.

This is only possible if the server makes an effort to send incomplete entity information (I believe this is common); that way the cheats cannot filter out the honeypots. The cheats would need to become very sophisticated to anticipate the logic the server may use in its honeypots, but the honeypot method can theoretically approach parity with real behavior, while the cheats' discrimination methods cannot (false positives will degrade cheater performance and may even leak signal as well).

For example you can use a player entity that the client hasn't seen yet (or one that exited entity broadcast/logic range for some time) as a fake player that's camping an invisible corner, then as the player approaches it you de-spawn it. A regular player will never even know it was there.
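The fake-camper idea above could be sketched like this. The class, the despawn range, and the "aiming at the ghost counts as signal" rule are all assumptions for illustration; a real server would integrate much subtler reactions over many rounds.

```python
import math

DESPAWN_RANGE = 20.0  # assumed: despawn well before the ghost is visible

class Honeypot:
    """Server-side ghost player only a cheater could react to."""

    def __init__(self, pos):
        self.pos = pos
        self.active = True

    def tick(self, player_pos, player_aim_target):
        """Return True if this tick produced cheat signal."""
        if not self.active:
            return False
        if math.dist(player_pos, self.pos) < DESPAWN_RANGE:
            # De-spawn before the real player could ever see it.
            self.active = False
            return False
        # Aiming at an entity you cannot legitimately know about is
        # exactly the kind of signal that integrates into evidence.
        return player_aim_target == self.pos

hp = Honeypot(pos=(100.0, 50.0))
# Far away but pre-aiming at the hidden ghost: signal.
print(hp.tick(player_pos=(10.0, 10.0), player_aim_target=(100.0, 50.0)))
# Approaching the corner: the ghost quietly de-spawns, no signal.
print(hp.tick(player_pos=(95.0, 48.0), player_aim_target=(0.0, 0.0)))
```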

Another vector to push is netcode optimizations for anti-cheating measures. To send as little information as possible to the client, decouple the audio system from the entity information - this would allow the honeypot methods to provide alternative interpretations for the audio, such as firefights between ghosts that only cheaters will react to. This will of course be very complex to implement.

The greatest complexity in the honeypot methods will no doubt be how to ensure no impact on regular players.

reply