Summary 0003334: Tickrate discrepancies between clients/servers
Revision 2017-11-11 01:19 by Leonard
Description: This issue was first described by unknownna here: 0002859:0015907.
Of the issues listed in that summary, all but one are addressed by 0003314, 0003317 and 0003316.
The one remaining issue:
Quote from unknownna
* Desyncs a little consistently every 24-25 seconds.

It turns out this is a little off though: the exact period is 28 seconds.

When a client starts, on both Linux and Windows, a timing method is selected: if a timer event fired by the OS can be created, Zandronum uses it; otherwise time polling is used instead.
Unfortunately, as described in this thread from the ZDoom forums, Windows timer events are only precise down to the millisecond, and as such only a delay of 28ms can be achieved due to rounding (the ideal 35hz period would be 1000 / 35 ≈ 28.571ms).
Zandronum servers, on the other hand, always use time polling and, as a way of dealing with this issue, use a tickrate fixed at 35.75hz:
lNewTics = static_cast<LONG> ( lNowTime / (( 1.0 / (double)35.75 ) * 1000.0 ) ); // step = (1 / 35.75) * 1000 ≈ 27.972ms

Can you spot the problem here?
As described in the linked thread, a time step of exactly 28ms would theoretically result in a frequency of (1000 / 28) = 35.714285714...hz.
Using a tickrate of 35.75 instead results in a time step of (1000 / 35.75) = 27.972027972ms.
Calculating the difference and dividing the client time step by it (i.e. magnifying that difference until it accumulates to one full time step) gives, you guessed it, 28 seconds:
(28 / (28 - (1000 / 35.75))) = 1001 tics, or 28 seconds.
This means that after 28 seconds, the server will essentially be ahead of us by 1 tic.
The prediction will account for this, so clients won't notice a thing, but outside spectators sure will.
Attached is a demo of the desync occurring with cl_predict_players false (28ms.cld).
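
To sanity-check that arithmetic, here is a small standalone C++ program (mine, not Zandronum code) that reproduces the 1001-tic figure:

#include <cstdio>

int main()
{
    const double clientStepMs = 28.0;            // Windows timer-event period
    const double serverStepMs = 1000.0 / 35.75;  // ~27.972ms server polling step
    // Tics until the accumulated difference reaches one full client step:
    const double tics = clientStepMs / ( clientStepMs - serverStepMs );
    std::printf( "desync after %.0f tics (~%.1f seconds)\n", tics, tics * clientStepMs / 1000.0 );
    return 0;
}

This prints "desync after 1001 tics (~28.0 seconds)".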

By making the server loop use a proper time step of 28ms, the issue is fixed:
lNewTics = static_cast<LONG> ( lNowTime / 28.0 ); // fixed 28ms step, matching the Windows client timer
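
For context, here is a minimal sketch of what polling-based tic counting looks like with this fixed step; NowMs() is a hypothetical stand-in for the engine's millisecond clock, not an actual Zandronum function:

#include <chrono>

// Hypothetical stand-in for the engine's monotonic millisecond clock.
static long NowMs()
{
    using namespace std::chrono;
    return static_cast<long>( duration_cast<milliseconds>( steady_clock::now().time_since_epoch() ).count() );
}

// With polling, the current tic is simply elapsed time divided by the fixed 28ms step.
long CurrentTic( long lStartMs )
{
    return ( NowMs() - lStartMs ) / 28;
}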

However, a problem still remains: the Windows timer-event method is the ONLY one that has a time step of 28ms.
If for some reason the creation of a timer fails at start, Windows clients fall back to time polling, which runs at a proper 35hz.
Linux timers are precise down to the microsecond, as reported in the thread, so Linux clients run at a proper 35hz with both polling and timer events.
As such, even with a fixed time step of 28ms on the server, clients running at a proper 35hz will experience a desync too, and a much worse one at that.
Calculating the difference again:
((1000 / 35) / ((1000 / 35) - 28)) = 50 tics.
Only 50 tics, and these clients will appear to jitter to outside observers.
I will attach a demo of this with cl_predict_players false as well (35hz.cld).
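
The same standalone arithmetic as before, this time for a proper 35hz client against the fixed 28ms server step:

#include <cstdio>

int main()
{
    const double clientStepMs = 1000.0 / 35.0; // ~28.571ms, a proper 35hz
    const double serverStepMs = 28.0;
    std::printf( "desync after %.0f tics\n", clientStepMs / ( clientStepMs - serverStepMs ) ); // 50 tics
    return 0;
}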

The solution is obviously to enforce a consistent tickrate in every case, but which one?
Should we force the time step to be 28ms globally, or should we simply disable Windows timer events, use 35hz on the servers and be done with it?
In my opinion, we should simply disable Windows timer events, and here are my arguments why:
* The clients never slept to begin with if cl_capfps is off.
* As per 0001633, the timer events are already disabled on Linux for clients due to being faulty.
* Proper 35hz.
I'm not sure, but doesn't the fact that the tickrate is not a proper 35hz mean that mods relying on it being correct experience time drift?
For example, if a mod uses a timer in ACS that's supposed to count time at 35hz, does it keep track of the time correctly?
I haven't really checked this so I'm not sure, but that could be a thing (see the rough arithmetic after this list).
* If we want to look at some sort of standard, Quake 3 uses time polling for both its clients and servers and, just like Zandronum, only the servers sleep.
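
As for the ACS timer question above, a rough back-of-the-envelope calculation (my arithmetic only, not verified against an actual mod):

#include <cstdio>

int main()
{
    // An ACS timer assumes 35 tics == 1 second; at an actual 35.75hz it runs fast.
    const double actualHz = 35.75;
    const double assumedHz = 35.0;
    const double secondsGainedPerHour = 3600.0 * ( actualHz / assumedHz - 1.0 );
    std::printf( "~%.0f seconds gained per hour\n", secondsGainedPerHour ); // ~77 seconds
    return 0;
}

So a mod's nominal hour would drift by a bit over a minute against the wall clock, if the drift indeed accumulates this way.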