MantisBT - Zandronum
View Issue Details
0003334 | Zandronum | [All Projects] Bug | public | 2017-11-08 00:52 | 2018-06-03 22:39
needs testing | open
0003334: Tickrate discrepancies between clients/servers
This issue was first described by unknownna here: 0002859:0015907.
All but one of the issues in that summary are addressed by 0003314, 0003317 and 0003316. The remaining issue:
Quote from unknownna
* Desyncs a little consistently every 24-25 seconds.

It turns out this is a little off, though: the exact period is 28 seconds.

When clients start on either Linux or Windows, a timing method is selected: if a timing event fired by the OS can be created, Zandronum uses it; otherwise it falls back to time polling.
Unfortunately, as described in this thread from the ZDoom forums, Windows timing events are only precise down to the millisecond, so due to rounding only a delay of 28ms can be achieved.
Zandronum servers, on the other hand, always use time polling and, as a way of dealing with this issue, use a tickrate fixed at 35.75Hz:
lNewTics = static_cast<LONG> ( lNowTime / (( 1.0 / (double)35.75 ) * 1000.0 ) );

Can you spot the problem here?
As described in the linked thread, a time step of exactly 28ms would theoretically result in a frequency of (1000 / 28) = 35.714285714...Hz.
Using a tickrate of 35.75 instead results in a time step of (1000 / 35.75) = 27.972027972ms.
Calculate the difference, then divide the client time step by it (i.e. find how many tics it takes for that difference to accumulate to one full time step), and we get, you guessed it, 28 seconds:
(28 / (28 - (1000 / 35.75))) = 1001 tics, or about 28 seconds.
This means that after 28 seconds, the server will essentially be ahead of us by 1 tic.
The prediction will account for this, so clients won't notice a thing, but outside spectators sure will.
Attached is a demo of the desync occurring with cl_predict_players false (28ms.cld).

By making the server loop use a proper time step of 28ms, the issue is fixed:
lNewTics = static_cast<LONG> ( lNowTime / 28.0 );

However a problem still remains: the Windows timing method is the ONLY one that has a time step of 28ms.
If for some reason the creation of a timer fails at start, Windows will fall back to time polling, which will obviously run at a proper 35Hz.
Linux timer events are precise down to the microsecond, as reported in the thread, and will run at a proper 35Hz both for polling and for timing events.
As such, even with a fixed time step of 28ms on the server, clients using a proper 35Hz tickrate will experience a desync too, and a much worse one at that.
Calculating the difference again, etc...:
((1000 / 35) / ((1000 / 35) - 28)) = 50 tics.
Only 50 tics (about 1.4 seconds), and these clients will appear to jitter to outside observers.
I will attach a demo of this with cl_predict_players false as well (35hz.cld).

The solution is obviously to enforce a consistent tickrate in every case but which one?
Should we force the time step to be 28ms globally, or should we simply disable Windows timer events, use 35Hz on the servers and be done with it?
In my opinion, we should simply disable Windows timer events, and here are my arguments why:
* The clients never slept to begin with if cl_capfps is off.
* As per 0001633, the timer events are already disabled on Linux for clients due to being faulty.
* Proper 35Hz.
I'm not sure, but doesn't the fact that the tickrate is not a proper 35Hz mean that mods relying on it being correct would experience time drift?
For example, if a mod uses a timer in ACS that's supposed to count time at 35Hz, does it keep track of time correctly?
I haven't actually checked this, but it could be an issue.
* If we want to look at some sort of standard, Quake 3 uses time polling for both its clients and servers and, just like Zandronum, only the servers sleep.
Using the test map referenced in 0002859:0018627:
-host a server on MAP01
-join a client with cl_predict_players 0
-wait at least 28 seconds
-you will notice stuttering for about a second or so; this is what outside observers actually see
You may also use the debug output given by the linked commit; it will show the desync happening in real time (the server messages arriving at different times and the prediction suddenly having to run 2 tics even though this is a local connection).
The demos were both recorded on the same test map described in the steps to reproduce.
No tags attached.
related to 0001633 (needs testing, Leonard): [Linux x86_64] Multiplayer game completely broken
parent of 0002859 (needs testing, Leonard): Gametic-based unlagged seemingly goes out of sync often compared to ping-based unlagged
related to 0002491 (needs testing, Leonard): Screenshot exploit
related to 0003418 (needs testing, Leonard): (3.1 alpha) Stuttering ingame
Not all the children of this issue are yet resolved or closed.
28ms.cld (86,629 bytes) 2017-11-08 00:52
35hz.cld (112,668 bytes) 2017-11-08 00:57
jack.cld (62,779 bytes) 2018-02-11 19:45
jitter.cld (78,600 bytes) 2018-02-11 19:45
Issue History
2017-11-08 00:52 | Leonard | New Issue
2017-11-08 00:52 | Leonard | Status: new => assigned
2017-11-08 00:52 | Leonard | Assigned To: => Leonard
2017-11-08 00:52 | Leonard | File Added: 28ms.cld
2017-11-08 00:53 | Leonard | Description Updated: bug_revision_view_page.php?rev_id=11315#r11315
2017-11-08 00:57 | Leonard | File Added: 35hz.cld
2017-11-08 00:59 | Leonard | Relationship added: parent of 0002859
2017-11-08 10:54 | Edward-san | Note Added: 0018816
2017-11-08 17:26 | Leonard | Note Added: 0018818
2017-11-08 20:25 | Edward-san | Note Added: 0018820
2017-11-10 02:26 | Ru5tK1ng | Note Added: 0018850
2017-11-10 18:33 | Leonard | Note Added: 0018851
2017-11-10 18:33 | Leonard | Status: assigned => needs review
2017-11-10 20:17 | Blzut3 | Note Added: 0018852
2017-11-10 21:11 | Blzut3 | Note Edited: 0018852 bug_revision_view_page.php?bugnote_id=18852#r11327
2017-11-11 01:19 | Leonard | Note Added: 0018853
2017-11-11 01:19 | Leonard | Description Updated: bug_revision_view_page.php?rev_id=11328#r11328
2017-11-13 09:16 | Leonard | Relationship added: related to 0001633
2017-11-13 14:08 | Leonard | Status: needs review => assigned
2018-01-23 01:47 | Ru5tK1ng | Note Added: 0019006
2018-01-23 20:04 | Leonard | Note Added: 0019011
2018-01-24 01:40 | Ru5tK1ng | Note Added: 0019012
2018-01-24 01:40 | Ru5tK1ng | Note Edited: 0019012 bug_revision_view_page.php?bugnote_id=19012#r11382
2018-02-11 19:45 | Leonard | File Added: jack.cld
2018-02-11 19:45 | Leonard | File Added: jitter.cld
2018-02-11 19:46 | Leonard | Note Added: 0019031
2018-03-25 15:19 | Leonard | Note Added: 0019158
2018-03-25 15:19 | Leonard | Status: assigned => needs review
2018-03-25 15:22 | Leonard | Relationship added: related to 0002491
2018-04-30 15:47 | Leonard | Note Added: 0019177
2018-04-30 15:47 | Leonard | Status: needs review => needs testing
2018-05-02 16:05 | Leonard | Relationship added: related to 0003418
2018-05-07 02:07 | StrikerMan780 | Note Added: 0019211
2018-05-07 02:08 | StrikerMan780 | Note Edited: 0019211 bug_revision_view_page.php?bugnote_id=19211#r11522
2018-05-07 03:09 | StrikerMan780 | Note Edited: 0019211 bug_revision_view_page.php?bugnote_id=19211#r11523
2018-05-07 05:58 | StrikerMan780 | Note Edited: 0019211 bug_revision_view_page.php?bugnote_id=19211#r11524
2018-05-07 05:58 | StrikerMan780 | Note Edited: 0019211 bug_revision_view_page.php?bugnote_id=19211#r11525
2018-05-07 09:46 | Leonard | Note Added: 0019215
2018-06-03 22:39 | StrikerMan780 | Note Added: 0019267
2018-06-03 22:39 | StrikerMan780 | Note Edited: 0019267 bug_revision_view_page.php?bugnote_id=19267#r11567

2017-11-08 10:54 (Edward-san)
Just for curiosity: did you check also if GZDoom multiplayer has this desync problem?

AFAIR, some years ago when I tested on Ubuntu 64 with a Windows user, there weren't issues like that.
2017-11-08 17:26 (Leonard)
No, I didn't check GZDoom.
Quote from Edward-san
there weren't issues like that.

I'm assuming you're also talking about GZDoom here?
My guess is that, given the nature of GZDoom's multiplayer, it literally doesn't matter: everyone connected experiences the worst latency, so even on a LAN a different tickrate would just mean a slight extra latency (0.5ms) for the Windows user who is ticking faster.
If you're talking about Zandronum, then bear in mind the client does not see this for itself, and even if you hosted on Linux the server still uses the same loop, so you would still only notice the other user's jitter every 28 seconds.
2017-11-08 20:25 (Edward-san)
I'm assuming you're also talking about GZDoom here?

2017-11-10 02:26 (Ru5tK1ng)
It seems to make more sense, and to be more consistent, to disable Windows timer events. I would say just begin working to implement 35Hz.
2017-11-10 18:33 (Leonard)
This is for proper 35Hz.
If for some reason we end up going for 28ms though, I could always edit it later.
2017-11-10 20:17 (Blzut3)
(edited on: 2017-11-10 21:11)
I'm confused on "The clients never slept to begin with, why bother with a timer?" The difference between the polled and event timer should be whether Zandronum uses 100% CPU or not with cl_capfps on (I think vid_vsync would be affected too but not certain). If disabling the event timer does indeed cause 100% CPU usage expect a lot of angry laptop users.

On a related note, I believe 3.0 got the updated Linux timer and should be fine now modulo the issue you're trying to solve.

Edit: Regarding the ACS timing point. It has been too long but the last time I looked at the 35.75Hz issue I did notice some mods were correcting for the round off. So some mods are going to be wrong with either choice, but it really doesn't matter unless your hobby is watching timers and looking for drift. :P

2017-11-11 01:19 (Leonard)
Oh you're right, I didn't check for the I_WaitForTic functions.
Then I guess we lose sleeping with cl_capfps on for Windows.
2018-01-23 01:47 (Ru5tK1ng)
Is the loss of sleep for Windows the reason the patch wasn't pulled?
2018-01-23 20:04 (Leonard)
The loss of sleep issue was solved; the reason this wasn't addressed yet is that having consistent tickrates (both 35Hz and 28ms) revealed a much bigger problem that needs to be addressed first.
2018-01-24 01:40 (Ru5tK1ng)
For those following this ticket or interested parties, what bigger issue was revealed? I'm assuming 1633 is part of it.

2018-02-11 19:46 (Leonard)
Sorry for the extremely late reply; I wanted to respond once I had the basic implementation up and running, but time played against me again and I kind of forgot to reply.

Quote from Ru5tK1ng
I'm assuming 1633 is part of it.


Quote from Ru5tK1ng
what bigger issue was revealed?

I will try my best to explain this.
In this ticket it was found out that the tickrates between clients/servers differ.
This means that inevitably, over time, one is going to end up behind the other: after 28 seconds the Windows clients are so far behind that they lose one tic, and at the very moment this occurs, the server's ticking gets closer and closer to the client's, which is what causes the "desync".
At this point, when the client's and the server's ticking happen almost at the same time, the server seems to alternate between receiving the client's movement commands before and after ticking.
This produces what Alex called the "jackhammering effect", where in the worst case the server consistently processes 2 movement commands (the previous one that was received AFTER ticking and the current one) and then waits for the next tic.
Obviously this makes the player extremely jittery.
The problem is that once the tickrates were actually fixed and made consistent across clients/servers, on rare occasions a connecting client would get the jackhammering effect, except this time permanently.
This is presumably due to the fact that the clients/servers do not lag behind relative to each other anymore, which implies that a client that by "chance" ticks almost exactly at the same time as its server would do so permanently.

I'm not sure what causes it to happen in the first place; the fact that searching for such a problem doesn't seem to turn up anything useful, and that apparently it never occurred in any other game engine, made me doubt what was even happening.
Even then, I tried to debug further but couldn't find anything else useful on this issue.

After that I decided to work on smoothing the ticbuffer, which would not only fix (or kludge, if it's not something that is supposed to happen) the jackhammering problem but also benefit online play in general.
In particular, I expect this will come close to completely solving 0002491.
The way this works is simple: detect if movement command "drops" occur frequently and, if so, simply apply a "safe latency" to the processing so that the movement ends up being completely smooth.
I think this is the most logical way to solve such a thing: the jittery player shouldn't even notice, as both prediction and unlagged should account for any extra latency, except now, instead of being very hard to hit for outside observers, they will be much, much smoother.
I attached a demo (jack.cld) in which a player experiences the jackhammering effect, followed by the new ticbuffer "regulator" being turned on.
This means that players experiencing this problem will appear completely smooth to outside observers regardless, at the expense of one extra tic of latency.
I also attached a much worse example using gamer's proxy to simulate a jittery player (jitter.cld).
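The regulator described above might be sketched roughly like this. Everything here (class name, window size, drop threshold) is invented for illustration; the actual patch may well work differently:

```cpp
#include <cstddef>

// Hypothetical sketch of a ticbuffer "regulator": if a client's
// movement commands frequently "drop" (a server tic passes with no
// command available to process), hold one extra command in the buffer
// so processing runs one tic behind but perfectly smooth.
class TicBufferRegulator
{
public:
    // Called once per server tic with the number of movement commands
    // that arrived from this client since the last tic.
    void RecordTic(int commandsReceived)
    {
        if (commandsReceived == 0)
            ++drops_;
        if (++ticsObserved_ >= kWindow) {
            // Re-evaluate once per window: turn the regulator on
            // if drops were frequent, off otherwise.
            active_ = drops_ >= kDropThreshold;
            ticsObserved_ = 0;
            drops_ = 0;
        }
    }

    // How many commands must be buffered before processing resumes:
    // one extra tic of "safe latency" while the regulator is active.
    std::size_t RequiredBuffer() const { return active_ ? 2 : 1; }

    bool IsActive() const { return active_; }

private:
    static const int kWindow = 35;       // evaluate once per second (35 tics)
    static const int kDropThreshold = 3; // invented "frequent drops" cutoff
    int ticsObserved_ = 0;
    int drops_ = 0;
    bool active_ = false;
};
```

A jackhammering client (commands alternating between 0 and 2 per tic) would trip the threshold and get buffered one tic deeper, while a smooth client delivering exactly one command per tic would keep the regulator off.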
2018-03-25 15:19 (Leonard)
Since the ticbuffer smoothing is ready to be reviewed, this needs to be reviewed as well.
2018-04-30 15:47 (Leonard)
This was pulled.
2018-05-07 02:07 (StrikerMan780)
(edited on: 2018-05-07 05:58)
There seems to be an issue with this in the latest 3.1 build: player and actor movement is no longer interpolated, and it looks like it's running at 35fps even with uncapped framerate, both online and offline.

Everything else seems fine however.

2018-05-07 09:46 (Leonard)
That was reported in 0003418.
2018-06-03 22:39 (StrikerMan780)

Linking to this comment, since it may have more to do with this than the ticket I posted in. Not sure though.