| Summary: | Watchdog can't work on Mac with M1 chip because of wrong thread CPU time. | ||
|---|---|---|---|
| Product: | WebKit | Reporter: | Vincent.yu <yuweishan> |
| Component: | JavaScriptCore | Assignee: | Nobody <webkit-unassigned> |
| Status: | NEW | ||
| Severity: | Normal | CC: | mark.lam, saam, webkit-bug-importer, ysuzuki |
| Priority: | P2 | Keywords: | InRadar |
| Version: | Other | ||
| Hardware: | Mac (Apple Silicon) | ||
| OS: | macOS 12 | ||
Watchdog.cpp uses `CPUTime::forCurrentThread`, which calls `clock_gettime(CLOCK_THREAD_CPUTIME_ID, &ts)` to read the thread's CPU clock. But on Macs with the M1 chip, `clock_gettime(CLOCK_THREAD_CPUTIME_ID)` always returns an implausibly small value. Here is the test code:

```
#include <time.h>
#include <math.h>

double CPUTime()
{
    struct timespec ts;
    clock_gettime(CLOCK_THREAD_CPUTIME_ID, &ts);
    return ts.tv_sec + ts.tv_nsec / 1000.0 / 1000.0 / 1000.0;
}

double RealTime()
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1000.0 / 1000.0 / 1000.0;
}

// Spin for roughly one second of wall-clock time.
void longTask()
{
    double begin = RealTime();
    double limit = 1.0;
    static double g_result = 0;
    while (1) {
        for (int i = 0; i < 1000000; i++)
            g_result = sin(g_result + 5);
        if (RealTime() - begin >= limit)
            break;
    }
}

// Time the same busy loop with both clocks; they should roughly agree.
void testClocks()
{
    double t0 = CPUTime();
    longTask();
    double t1 = CPUTime();
    double cpuTimeCost = t1 - t0;

    double t10 = RealTime();
    longTask();
    double t11 = RealTime();
    double realTimeCost = t11 - t10;

    NSLog(@"\nthread cpu: %lfs\nreal: %lfs", cpuTimeCost, realTimeCost);
}
```

It outputs:

```
thread cpu: 0.024395s
real: 1.006627s
```

Only the M1 chip has this problem; on Intel Macs the two clocks report almost the same elapsed time.