Kernel anti-cheat is by far the best way to protect against cheats right now. The game with the strongest protection is Valorant, and it works very well; OW2 is light-years behind Valorant.
Not sure what your point is. Most of your post is inaccurate: DMA cheats represent a minority of cheats, because they're very expensive and you need a second computer.
elitepvpers - it's public. DMA cheats have grown and are now the primary way people cheat in games. One provider I know in the scene makes around 5m/month retail, counting the hardware, the bypass, and the cheats themselves (not sold under the same umbrella, for obvious reasons).
The scene has shifted immensely in the last few years; everyone and their grandmother has DMA now - you can buy these cards off Amazon. Koreans have been slow adopters since most of them play in gaming cafes, but the cafes have the benefit of running an old version of Hyper-V, which lets you just use the method described above. Hyper-V cheats are the most popular for Valorant.
I would argue that Valorant and Overwatch are pretty much on the same level based on what it feels like to play: I've seen just as many visible cheaters in Valorant as in Overwatch. Although I'll admit my knowledge is outdated since around mid 2025. Valorant allows you to ** around, so that might be related, and Overwatch bans rage hackers way faster than Valorant does as well.
OW2 is very different from CS and Valorant. OW doesn't suffer from cheats the same way because it's not a pure aim-based game with hitscan as the main mechanic; the vast majority of classes don't benefit from cheats the way other FPS games do.
I mained support and tank at Master level in OW, and besides ESP there is zero benefit to cheating.
I asked a guy I've known since 2021; he said that ability helpers are the most important features for an Overwatch cheat, and that ESP is basically unusable in GM since you get called out for it almost immediately - people just sus you out and report. The trust score of high-rated players eventually gets you banned (assumption).
Game companies have to use kernel anti-cheat because MS never implemented proper isolation in the first place. If Windows were secured like an Apple phone or a console, there wouldn't be a need for it.
Anti-cheat doesn't run on modern consoles; game devs know that the latest firmware on a console is secure enough that the console can't be tampered with.
Consoles and phones are "secure" because you don't own them. They aren't yours. They belong to the corporations. They're just generously allowing you to use the devices. And only in the ways they prescribe.
This is the exact sort of nonsense situation I want to prevent. We should own the computers, and the corporations should be forced to simply suck it up and deal with it. Cheating? It doesn't matter. Literal non-issue compared to the loss of our power and freedom.
It's just sad watching people sacrifice it all for video games. We were the owners of the machine but we gave it all up to play games. This is just hilarious, in a sad way.
All the games that use kernel anti cheat have the simulation running on the server.
You can't make a competitive FPS as a dumb terminal; the latency is too high, which is why you have to run a local predictive simulation.
You don't want to wait for the server to ack your inputs.
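The local predictive simulation described above can be sketched roughly like this - a toy model, with all names invented for illustration, not taken from any real engine:

```python
# Minimal sketch of client-side prediction with server reconciliation.
# The client applies inputs immediately, remembers unacknowledged ones,
# and replays them on top of each authoritative server state.

def apply_input(pos, inp):
    """Deterministic movement step shared by client and server."""
    return pos + inp

class PredictingClient:
    def __init__(self):
        self.pos = 0
        self.pending = []          # inputs not yet acknowledged by the server
        self.next_seq = 0

    def local_input(self, inp):
        # Apply immediately so the player never waits on a server ack.
        self.pos = apply_input(self.pos, inp)
        self.pending.append((self.next_seq, inp))
        self.next_seq += 1

    def on_server_state(self, last_acked_seq, server_pos):
        # Rewind to the authoritative state, then replay unacked inputs.
        self.pending = [(s, i) for s, i in self.pending if s > last_acked_seq]
        pos = server_pos
        for _, inp in self.pending:
            pos = apply_input(pos, inp)
        self.pos = pos

client = PredictingClient()
client.local_input(1)   # seq 0
client.local_input(2)   # seq 1
client.local_input(3)   # seq 2
# Server has only processed seq 0 so far and reports pos = 1.
client.on_server_state(0, 1)
print(client.pos)       # 6: server pos 1 + replayed inputs 2 and 3
```

The key point is that the server stays authoritative; the client only predicts ahead and reconciles when the ack arrives.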
> All the games that use kernel anti cheat have the simulation running on the server.
There's an exception with fighting games. Fighting games generally don't have server simulations (or servers at all), but every single client does their own full simulation. And 2XKO and Dragon Ball FighterZ have kernel anti cheat.
Well, I'm just nitpicking, and it's different because fighting games are one of the few competitive genres where the clients do full game-state simulations - RTS games being another.
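The full-client-simulation model mentioned above (deterministic lockstep) can be illustrated in a few lines - this is a toy, not any real game's netcode: clients exchange only inputs, each runs the whole simulation, and periodic state hashes catch desyncs.

```python
# Toy deterministic lockstep: two clients simulate the same frames from
# the same inputs and compare state hashes to detect divergence.
import hashlib

def step(state, inputs):
    # The simulation must be fully deterministic (no platform-dependent
    # float rounding, no unseeded randomness) for lockstep to work.
    return {p: state[p] + inputs.get(p, 0) for p in state}

def state_hash(state):
    return hashlib.sha256(repr(sorted(state.items())).encode()).hexdigest()

frames = [{"p1": 1, "p2": 0}, {"p1": 0, "p2": 2}]  # inputs sent each frame

a = b = {"p1": 0, "p2": 0}
for inputs in frames:
    a = step(a, inputs)   # client A's local simulation
    b = step(b, inputs)   # client B's local simulation
    assert state_hash(a) == state_hash(b)  # mismatch here would mean a desync

print(a)  # {'p1': 1, 'p2': 2}
```

This is also why cheating in such games takes a different form: the full game state lives on every client, so information cheats (like seeing hidden state) are trivial unless the client is protected.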
You can't compare a VPS with VMs from a major cloud provider; VPSes don't offer anything besides basic compute.
Also, virtualization from cloud providers is way better because they have custom hardware and software, so you don't suffer from noisy neighbours, for example.
Go is not difficult to maintain at large scale. Take Kubernetes, for example: it's "trivial" to understand and modify even though it's millions of lines of code.
Kubernetes is built by a trillion-dollar company that has the resources to manage QA and dev tooling at a scale the vast majority of teams do not.
We need more details on 6. This is the hard part: you swap connections from A to B, but if B is not synced properly and you write to it, the two start to diverge and there is no way back.
Say B is slightly out of date (replication-wise), the service modifies something on B, and then A's replication stream arrives with a change to the same data you just wrote.
How do you ensure B is up to date without stopping writes to A (i.e., with no downtime)?
Not sure how they do it, but I would do it like so:
Have the old database be the master and the new one a slave. Load in the latest db dump; it can take as long as it wants.
Then start replication and catch up on the delay.
You would also need, depending on the db type, a load balancer/failover manager - PgBouncer and Pgpool-II come to mind, and MySQL has equivalents. Point that at the master and slave, and connect the application to the database through that layer.
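The dump-then-replicate-then-cut-over flow above can be modeled in a few lines. This is a toy simulation of the control logic only - real setups would use something like pg_basebackup plus streaming replication and a proxy such as PgBouncer; all class and method names here are invented:

```python
# Toy model of "load dump, catch up via replication, cut over at zero lag".

class Primary:
    def __init__(self):
        self.log = []               # ordered change log (stands in for WAL)
    def write(self, change):
        self.log.append(change)

class Replica:
    def __init__(self):
        self.applied = 0
    def catch_up(self, primary):
        # Apply every logged change we haven't seen yet.
        self.applied = len(primary.log)
    def lag(self, primary):
        return len(primary.log) - self.applied

old, new = Primary(), Replica()
for i in range(1000):
    old.write(i)                    # traffic keeps flowing while the dump loads

new.catch_up(old)                   # initial load + first replication pass
old.write("late-1"); old.write("late-2")   # writes that arrived meanwhile

# Cut over only once lag hits zero: in a real system you would briefly
# pause writes at the proxy, drain the last changes, then repoint it.
while new.lag(old) > 0:
    new.catch_up(old)
assert new.lag(old) == 0            # safe to switch the proxy to `new`
print("cutover done, replica applied", new.applied, "changes")
```

The "no downtime" claim usually hides exactly that last step: a very short write pause (often milliseconds, handled by the proxy buffering connections) while the final lag drains.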
> Load in latest db dump, may take as long as it wants.
At 400TB that's about a week or more, no?
> Then start replication and catch up on the delay.
Then you have the changes accumulated during that delay, roughly ±1TB. Syncing those takes a few more days, while new changes keep coming in.
They said "current requests are buffered", which is impossible, especially for long distributed (optional) transactions in progress (those can take hours, or days for analytics).
Overall this article is BS, or some super-custom case that's irrelevant for common systems. You can't migrate without downtime; it's physically impossible.
"Take snapshot and begin streaming replication"... to where? The snapshot isn't even fully prepared yet, and it definitely hasn't reached the target. Where are you dumping/keeping those replication logs in the meantime?
Secondly, how are you handling database state changes from real-time update queries? They're definitely still going into the source tables at this point.
I don't get this. I'm still stuck on point 1... I've read it twice already.
He can't. It's not a reference, just a bunch of CLI examples. Please learn what a reference is. Even the docs are BS - wonderful product. Overall this article is typical advertising and clickbait.
The code is open source though; you can read it. The CLI examples point you towards the relevant bits of the actual database code to read.
For my own sake, I'm not sure what is so surprising here. "Turn up a hot second replica and fail over to it intentionally behind a global load balancer" is pretty well-trodden ground.
YES!! But the article claims it's a 400TB+ migration with no downtime. That is impossible, which is why it looks like clickbait and advertising for a product.
Thank you for the link, but it's not the same case ;) Google used storage switching, which migrates in a mixed mode - i.e., data is migrated on demand as users access it - and the API had a compatibility layer to read/write from/to both storage systems (I built this kind of migration mechanism about a decade ago). Google also spent about 8 years on that migration, which is fine. This article, on the other hand, is about a database migration that can be a periodic process (for critical schema changes, for example), and they describe it to us: take a snapshot, then race against the changes piling up on top of it, etc. I think we can leave it here. It's not a zero-downtime solution, because such a thing doesn't exist.
So you don't understand how something works. That's fine. But to then say the article and/or tech are BS is... a choice.
This work has been and is being used by some of the largest sites / apps in the world including Uber, Slack, GitHub, Square... But sure, "it's BS, super custom, and irrelevant". Gee, yer super smart! Thank you for the amazing insights. 5 stars.
OCSP is deprecated and basically dead at this point. Some clients still use it but I don't think many (any?) have actually enforced OCSP for years since it was notoriously fickle anyways.
Interesting. If you go to youtube.com it's all messed up; all the videos are missing from the listings. But if you follow a video embedded on another site to YouTube, it'll show and play fine. It'll break if you try to browse away from it, though.
Yeah, YouTube is not one server; it's hundreds of them. The videos are served mostly from CDNs (Content Delivery Networks), which are a different set of servers from the ones that handle account logins, routing, etc.
Some Google Services are also down at the moment, unrelated to YouTube, so probably a failure along some common infrastructure pipeline.
Your History, Subscriptions and search should all work. You should be able to see any creator's page if you go to it directly. The videos are all still watchable. It's primarily the home page and recommended videos that are having issues. Basically any place they recommend videos you haven't seen is broken right now, but the videos are still there and accessible.
I've tried via VPN from the U.S., U.K., Sweden, Germany, Russia, Colombia, etc. Same issue across the board.
Not true! There are a fair number of them, and they're even reasonably general-purpose, e.g. https://www.ponylang.io/
Most that I can recall achieve this by simply not having any locks at all. That's feasible with some careful design.
Outside proof-oriented languages, though, I'm not aware of any that prevent livelocks, much less both - that's excluding things that are single-threaded but might otherwise qualify, e.g. Elm. "Lack of progress" is what most people care about, though, and yeah, that realm is much more "you give up too much to get that guarantee" in nearly all cases (e.g. no Turing completeness).
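The "no locks at all" design mentioned above usually means actor-style message passing, as in Pony: each actor owns its state and mutates it only from its own mailbox. Here's a minimal Python sketch of the idea (a thread plus a queue standing in for an actor; names are illustrative):

```python
# Actor-style concurrency without locks: only the mailbox thread ever
# touches the actor's state, so no mutex is needed around `value`.
import queue, threading

class Counter:
    """A tiny actor: its state is touched only by its own mailbox thread."""
    def __init__(self):
        self.mailbox = queue.Queue()
        self.value = 0
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg == "stop":
                break
            self.value += msg       # exclusive access, so no lock is needed

    def send(self, msg):
        self.mailbox.put(msg)

c = Counter()
for _ in range(1000):
    c.send(1)          # any number of senders would be safe: the Queue
c.send("stop")         # serializes delivery into the single mailbox
c._thread.join()       # wait for the actor to drain its mailbox
print(c.value)         # 1000
```

Python's `queue.Queue` does use locks internally, of course; the point is the programming model - in a language like Pony the same ownership discipline is enforced by the type system, which is how deadlocks become impossible by construction.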