
Detect unsynchronized TSCs across cores and fall back to the OS time source? #111

@tatsuya6502

Description

As we discovered in #61, some machines have TSCs that are not synchronized across cores. On such a machine, quanta::Instant will not return monotonically increasing time when queried from different cores. A user of my library, who uses a ThinkPad with a Ryzen mobile CPU, seems to have this problem:

moka-rs/moka#472 (comment)

CPU: AMD Ryzen 7 PRO 6850U with Radeon Graphics
System: NixOS Linux x86_64

EDIT: Looking at the linked issue:

Thinkpads with Ryzen mobile CPUs

Yep, this is the case here.

I added a workaround by doing a saturating subtraction of quanta::Instants instead of a checked subtraction, so it returns a zero duration instead of None when a time warp happens. But this is not a solution, as it only hides the problem.
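For reference, the workaround amounts to something like this (a minimal sketch, assuming quanta::Instant mirrors std::time::Instant's checked_duration_since / saturating_duration_since methods; it is not moka's actual code):

    use std::time::Duration;

    fn elapsed_between(earlier: quanta::Instant, later: quanta::Instant) -> Duration {
        // checked_duration_since returns None when `later` appears to be
        // before `earlier` (the time warp caused by an unsynchronized TSC);
        // saturating_duration_since returns Duration::ZERO instead, which
        // avoids the None but silently swallows the warp.
        later.saturating_duration_since(earlier)
    }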

I did some searching and found this: https://bugzilla.kernel.org/show_bug.cgi?id=202525#c2

I used a simple tool to watch the TSC values for each CPU, and it looks like CPUs 1-7 are synchronized, but CPU 0 has a large offset compared to the others. They're all advancing at the same frequency (2195MHz) though:

1028:     -8   1606   1606   1606   1606   1606   1606   1606  3695718 KHz

...
The interesting and incriminating part in the output above is that CPU0 is about -1600ms offset from the TSCs on the other CPUs.

So I am wondering if quanta can detect this situation and fall back to the OS time source. Maybe by detecting it in the calibration phase? I am not sure how to do this; the only way I can think of is to read the TSCs of different cores and check whether the differences between the values are within some threshold.

Maybe it is something like this? (A rough sketch follows the list.)

  1. Create a std::sync::Barrier to synchronize the threads.
  2. Spawn a thread on each core:
    • Pin the thread to the core using the core_affinity crate.
    • (This will not work on macOS, but I think that is not a problem, as Intel Macs would not have this unsynchronized-TSC problem.)
  3. Wait on the barrier.
  4. Read the TSC.
  5. Compare the differences between the TSCs of different cores.
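A rough sketch of those steps, using the core_affinity crate and the raw RDTSC intrinsic (this only compiles on x86_64, and it is an illustration, not quanta's implementation):

    use std::sync::{Arc, Barrier};
    use std::thread;

    #[cfg(target_arch = "x86_64")]
    fn max_tsc_spread() -> Option<u64> {
        // Step 2: spawn one thread per logical core.
        let core_ids = core_affinity::get_core_ids()?;
        let barrier = Arc::new(Barrier::new(core_ids.len()));

        let handles: Vec<_> = core_ids
            .into_iter()
            .map(|id| {
                let barrier = Arc::clone(&barrier);
                thread::spawn(move || {
                    // Pin this thread to one core (returns false on macOS).
                    core_affinity::set_for_current(id);
                    // Step 3: release all threads at roughly the same moment.
                    barrier.wait();
                    // Step 4: read this core's TSC.
                    unsafe { core::arch::x86_64::_rdtsc() }
                })
            })
            .collect();

        // Step 5: the spread between the smallest and largest reading.
        let readings: Vec<u64> = handles.into_iter().map(|h| h.join().unwrap()).collect();
        Some(readings.iter().max()? - readings.iter().min()?)
    }

Note that the spread measured this way also includes barrier wake-up jitter, so the readings will differ by some amount even on machines with perfectly synchronized TSCs; any threshold has to tolerate that scheduler noise.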

But I have questions, like: what would be a good threshold? (Given the -1600ms offset in the example above, maybe a few milliseconds would be large enough? Maybe it is application-dependent?)
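Whatever the threshold, comparing it against the measured spread means converting ticks to wall time using the TSC frequency from calibration. A minimal sketch (tsc_freq_hz is a hypothetical input, e.g. the frequency quanta measures while calibrating):

    use std::time::Duration;

    fn ticks_to_duration(ticks: u64, tsc_freq_hz: u64) -> Duration {
        // Widen to u128 so `ticks * 1_000_000_000` cannot overflow.
        let nanos = (ticks as u128 * 1_000_000_000) / tsc_freq_hz as u128;
        Duration::from_nanos(nanos as u64)
    }

    // Hypothetical check with a few-milliseconds threshold:
    // ticks_to_duration(spread, tsc_freq_hz) > Duration::from_millis(2)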

A completely different solution for my library would be to use the OS time source (e.g. std::time::Instant) where time accuracy is important, and use the TSC (quanta::Instant) elsewhere.
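That split could look like this (illustrative only; these are not moka's internals):

    fn main() {
        // OS clock: guaranteed monotonic, used where a backwards step
        // would be a correctness bug (e.g. expiration decisions).
        let precise = std::time::Instant::now();

        // TSC-backed clock: much cheaper to read, used on hot paths
        // where a small warp is tolerable (e.g. rough statistics).
        let clock = quanta::Clock::new();
        let fast = clock.now();

        let _ = (precise, fast);
    }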
