I'd like to define our boundaries with respect to the limits we will place on ourselves.
Right now, let us assume that we are sending more data than the client can handle on LAN. The problem we have is that the client cuts out after 1024 packets have been found to be lost, which is when the latest packet it gets is 1024 (or 1025?) indices ahead of what the server last sent.
1) How much do we want to limit our packet buffer to? For example, if a mod is generating, let's say, 2000 packets' worth of data per tic, what exactly do we do in this case? It's not uncommon to see mods do this (see BestEver statistics), so we'll have to introduce a cap on the server end that says "too much data to transfer".
What is a good limit? Right now 1024 packets * 1024 bytes = 1 meg of memory; I'm guessing it's not unreasonable to do 8 of these? That would be 8 megs of memory max per player, which on a full 64-player server is 512 megs of additional RAM. Are we okay with this?
2) We may have to consider a throughput cap, and I'm not exactly sure how to make it work optimally. If we're piping through 256-512 packets per second (as an example), it needs to take the player's ping into account. If they have 300 ms ping, by the time they realize they're missing a packet, the 1024-packet buffer on the server could have already been blown past recovery.
3) Storing extra packets in memory: If we're going to have a buffer to deploy, might it be worth expanding the limit of how far back the server will remember?
I feel that these need some serious thought before even considering code development on a patch, since this is a larger task than I originally thought.