```js
var u = new Uint8Array(new Float64Array([NaN]).buffer);
u[1] = 1;
console.log(u);
var nu = new Uint8Array(new Float64Array([new Float64Array(u.buffer)[0]]).buffer);
console.log(nu);
```

Bun / Safari lose the NaN payload (the mantissa bits) in the process:

```
Uint8Array(8) [ 0, 1, 0, 0, 0, 0, 248, 127 ]
Uint8Array(8) [ 0, 0, 0, 0, 0, 0, 248, 127 ] // Note that the 1 is lost
```

Node / Chrome preserve the payload:

```
Uint8Array(8) [ 0, 1, 0, 0, 0, 0, 248, 127 ]
Uint8Array(8) [ 0, 1, 0, 0, 0, 0, 248, 127 ] // Note that the 1 is preserved
```

FWIW, the note in 6.1.6, along with the paragraph preceding the note, seems to suggest that both behaviors are spec-compliant.
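FWIW, the same round trip can be wrapped in a small check (the helper name below is my own, not from any spec or library) that reports whether the running engine preserves a non-canonical NaN payload once the value is read back as a JS number:

```js
// Hypothetical helper: poke a payload byte into a NaN, read it back through a
// Float64Array element (i.e. as a JS number), and check whether the byte survived.
function preservesNaNPayload() {
  const u = new Uint8Array(new Float64Array([NaN]).buffer);
  u[1] = 1;                                       // non-canonical payload bit
  const asNumber = new Float64Array(u.buffer)[0]; // NaN exposed as a JS value
  const nu = new Uint8Array(new Float64Array([asNumber]).buffer);
  return nu[1] === 1;
}

// Per the dumps above: true on Node / Chrome, false on Bun / Safari.
console.log(preservesNaNPayload());
```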
This is actually interesting! But it is expected behavior. When the value is evaluated in a JS context (in this case, `new Float64Array(u.buffer)[0]`), the engine only allows the canonical representation of the NaN value. Each engine enforces canonical NaN representation to some degree, so don't assume anything about the bit pattern of a NaN once it becomes a JS value. For example, in SpiderMonkey:

```js
var u = new Uint8Array(new Float64Array([NaN]).buffer);
u[7] = 255;
print(u);
var nu = new Uint8Array(new Float64Array([new Float64Array(u.buffer)[0]]).buffer);
print(nu);
```

generates

```
0,0,0,0,0,0,248,255
0,0,0,0,0,0,248,127
```

so the bits of a NaN can be modified once it becomes a JS value.
In JSC, the behavior is simple and deterministic. When a NaN is exposed as a JS value, it is a canonical NaN (0, 0, 0, 0, 0, 0, 248, 127). :)
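To make that byte dump concrete: those eight little-endian bytes decode to the quiet-NaN bit pattern `0x7ff8000000000000` (sign 0, exponent all ones, quiet bit set, payload zero). A minimal sketch showing the decoding (this snippet is mine, not from the thread):

```js
// Decode the canonical NaN bytes (little-endian, as in the dumps above)
// into a hex bit pattern: sign 0, exponent all ones, quiet bit set, payload 0.
const bytes = Uint8Array.from([0, 0, 0, 0, 0, 0, 248, 127]);
const bits = new DataView(bytes.buffer).getBigUint64(0, true); // true = little-endian
console.log('0x' + bits.toString(16));                         // 0x7ff8000000000000
console.log(Number.isNaN(new Float64Array(bytes.buffer)[0]));  // true
```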