Replies: 1 comment
-
Sorry man, forgot to reply here. My fault.
Honestly, sometimes it's hard to answer the question of how people approach the problem without disclosing private information. An open question like this one is very broad and deep, you know. If you can narrow the scope of your question a little, I'll try to be more specific. Excuse me if in the meantime I've tried to give some ideas while remaining vague. 🙂
-
Hello! I'd love to get some input on server (or client) tick implementations. I'm curious to see how people approach the problem.
I'm refactoring an old game server implementation in which all the game packets were processed as soon as they were received by the network thread pool. It handles quite a lot of players without any issues, but it also creates a lot of problems and WTF bugs due to how unpredictable the "simulation" is. In the refactored one, I'm putting the network events in a "queue" and processing them with entt inside a "scheduler".
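To illustrate the queue-then-process shape (a minimal sketch with hypothetical names, standard library only, not entt's actual API): network threads push packets into a locked queue, and the tick drains a snapshot of it, so the simulation only mutates state at well-defined points.

```cpp
#include <mutex>
#include <utility>
#include <vector>

// Hypothetical packet type; real code would dispatch on the opcode.
struct Packet { int opcode; int payload; };

class EventQueue {
public:
    void push(Packet p) {
        std::lock_guard<std::mutex> lock(mutex_);
        pending_.push_back(std::move(p));
    }
    // Swap out everything queued so far; network threads can keep pushing
    // while the tick processes the snapshot without holding the lock.
    std::vector<Packet> drain() {
        std::lock_guard<std::mutex> lock(mutex_);
        std::vector<Packet> out;
        out.swap(pending_);
        return out;
    }
private:
    std::mutex mutex_;
    std::vector<Packet> pending_;
};

// One simulation tick: process every event received since the last tick.
int tick(EventQueue& queue) {
    int processed = 0;
    for (const Packet& p : queue.drain()) {
        (void)p;  // here: damage calc, area-of-interest updates, broadcasting
        ++processed;
    }
    return processed;
}
```

The swap-and-drain keeps lock contention tiny: the mutex is only held long enough to exchange two vector pointers, never while gameplay code runs.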
I'm aware that MMOs implement this in different ways, but it's the only way I found to keep the codebase clean and maintainable. I'm not worried about performance in this case, and I'm not simulating physics or anything like that. My use case is mostly damage calculation, area of interest, event broadcasting, and several timers to handle stuff like damage over time, cooldowns, recurring timers, one-shot timeouts, "cron jobs", and especially the "elapsed simulation time".
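One common way to keep all of those timers honest is to schedule them against the simulation's own tick counter instead of wall-clock time, so they can never drift relative to the simulation. A sketch (hypothetical names, assuming a fixed tick drives it):

```cpp
#include <cstdint>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

struct Timer {
    std::uint64_t due_tick;   // fires when the sim tick reaches this value
    std::uint64_t period;     // 0 = one-shot, otherwise re-armed (DoT, cron)
    std::function<void()> fn;
};

struct Later {
    bool operator()(const Timer& a, const Timer& b) const {
        return a.due_tick > b.due_tick;   // min-heap on due_tick
    }
};

class TimerWheel {
public:
    void schedule(std::uint64_t delay_ticks, std::uint64_t period,
                  std::function<void()> fn) {
        heap_.push({now_ + delay_ticks, period, std::move(fn)});
    }
    // Advance one simulation tick and fire everything that came due.
    void tick() {
        ++now_;
        while (!heap_.empty() && heap_.top().due_tick <= now_) {
            Timer t = heap_.top();
            heap_.pop();
            t.fn();
            if (t.period > 0) {            // recurring: re-arm relative to the
                t.due_tick += t.period;    // due time, not "now", so repeats
                heap_.push(std::move(t));  // never accumulate rounding error
            }
        }
    }
private:
    std::uint64_t now_ = 0;
    std::priority_queue<Timer, std::vector<Timer>, Later> heap_;
};
```

Cooldowns, DoT ticks, one-shot timeouts, and cron-style repeats all collapse into the same (delay, period) pair, and "elapsed simulation time" is just the tick counter times the fixed dt.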
From my tests and past experiences: accumulating time will often desync due to accuracy loss or "unacknowledged" processing time; timers are not that accurate depending on which one you use; and VPSs will eventually drift in time, causing a lot of trouble for longer player sessions (when you depend on the sync of both client and server clocks). On top of that, as soon as the amount of stuff you have to process increases, you can run late in the simulation.
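The usual mitigation for the accumulation problem is the classic fixed-timestep accumulator: measure elapsed time with a monotonic clock, run whole ticks out of the accumulator, and keep the remainder. Elapsed simulation time is then tick_count * dt, an exact integer quantity, rather than a running floating-point sum of deltas. A sketch (the catch-up cap is an assumption, tune to taste):

```cpp
#include <chrono>
#include <cstdint>

using Clock = std::chrono::steady_clock;   // monotonic; immune to NTP jumps
using Ns = std::chrono::nanoseconds;

struct FixedStepper {
    Ns dt;                        // fixed tick length, e.g. 50ms for 20 Hz
    Ns accumulator{0};
    std::uint64_t tick_count = 0;

    // Feed in real elapsed time, get back how many ticks to simulate now.
    int advance(Ns frame_time, int max_catch_up = 8) {
        accumulator += frame_time;
        int ticks = 0;
        while (accumulator >= dt && ticks < max_catch_up) {
            accumulator -= dt;
            ++ticks;
        }
        // Hitting the cap means we're badly late; dropping the surplus
        // trades "elapsed time" accuracy for staying responsive.
        if (ticks == max_catch_up) accumulator = Ns{0};
        tick_count += ticks;
        return ticks;
    }

    // Exact: no float accumulation, no drift from processing time.
    Ns elapsed_sim_time() const {
        return dt * static_cast<std::int64_t>(tick_count);
    }
};
```

This doesn't fix VPS wall-clock drift against the client, but it does make the server's own notion of simulation time internally consistent, which is the part that the "accumulate deltas" approach loses.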
Albion seems to handle packets as they are received, thus no "gameplay loop". Eve seems to lag everyone until the server's frame simulation ends. I know this really depends on the project's needs, but I'm curious what people do, or if there are any other tricks out there that I'm not aware of. Fixed tick rates, simulations with delta time, is that it?
My test implementation has "reliable" and "unreliable" execution policies. The "reliable" one will try to catch up if the simulation runs late, while the "unreliable" one is able to skip ticks and has a variable delta time. They have different use cases (e.g. unreliable for world updates, reliable for time-dependent events and cron scheduling).
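Those two policies can be sketched as follows (hypothetical types, not the actual implementation): a reliable task runs once per fixed tick and catches up on every tick it missed, while an unreliable task runs at most once per frame with whatever delta actually elapsed, silently dropping the ticks in between.

```cpp
#include <chrono>

using Ms = std::chrono::milliseconds;

struct ReliablePolicy {
    Ms dt;        // fixed step
    Ms owed{0};   // time not yet simulated
    // How many fixed-dt steps the task must execute this frame.
    int steps(Ms frame_time) {
        owed += frame_time;
        int n = 0;
        while (owed >= dt) { owed -= dt; ++n; }
        return n;   // never skips: a late frame simply returns > 1
    }
};

struct UnreliablePolicy {
    // Runs once with the real, variable delta; missed ticks are dropped.
    Ms step(Ms frame_time) { return frame_time; }
};
```

The split matches the use cases above: cron scheduling and time-dependent events need every step to happen (reliable), while a world-state broadcast only cares about the latest state, so skipping is harmless (unreliable).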
btw: Never mind the name; I have to deal with database access, file monitoring notifications, async callbacks, and futures here too. That's why I called it a scheduler and put a .submit() in place to be able to process events like an event loop.
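The .submit() shape described there might look roughly like this (a sketch, hypothetical names): database callbacks, file-watch notifications, and future completions are all posted as tasks, and the scheduler drains them at a defined point in the tick, like a single-threaded event loop.

```cpp
#include <cstddef>
#include <deque>
#include <functional>
#include <mutex>
#include <utility>

class Scheduler {
public:
    // Safe to call from any thread (DB pool, file watcher, etc.).
    void submit(std::function<void()> task) {
        std::lock_guard<std::mutex> lock(mutex_);
        tasks_.push_back(std::move(task));
    }
    // Run everything submitted so far; tasks submitted *while* running
    // are deferred to the next call, which keeps each drain bounded.
    std::size_t run_pending() {
        std::deque<std::function<void()>> batch;
        {
            std::lock_guard<std::mutex> lock(mutex_);
            batch.swap(tasks_);
        }
        for (auto& t : batch) t();
        return batch.size();
    }
private:
    std::mutex mutex_;
    std::deque<std::function<void()>> tasks_;
};
```

Calling run_pending() once per tick gives every callback a predictable execution point inside the simulation, which is exactly what the raw process-on-receive design lacked.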