RESOLVED FIXED 112141
LLInt CLoop backend misses Double2Ints() on 32bit architectures
https://bugs.webkit.org/show_bug.cgi?id=112141
Gabor Rapcsanyi
Reported 2013-03-12 06:01:27 PDT
LLInt CLoop backend misses Double2Ints() on 32-bit architectures
Attachments
proposed fix (2.30 KB, patch)
2013-03-12 06:19 PDT, Gabor Rapcsanyi
no flags
Gabor Rapcsanyi
Comment 1 2013-03-12 06:19:16 PDT
Created attachment 192723 [details] proposed fix
Filip Pizlo
Comment 2 2013-03-12 08:26:05 PDT
Comment on attachment 192723 [details] proposed fix

View in context: https://bugs.webkit.org/attachment.cgi?id=192723&action=review

R=me, but it looks like both Ints2Double and Double2Ints are not endianness-friendly.

> Source/JavaScriptCore/llint/LowLevelInterpreter.cpp:130
> +static void Double2Ints(double val, uint32_t& lo, uint32_t& hi)
> +{
> +    union {
> +        double dval;
> +        uint64_t ival64;
> +    } u;
> +    u.dval = val;
> +    hi = static_cast<uint32_t>(u.ival64 >> 32);
> +    lo = static_cast<uint32_t>(u.ival64);
> +}

Is this right? Does Double2Ints mandate 'hi' and 'lo' being the low-order bits and the high-order bits, or are they supposed to be the first bits and the last bits?
WebKit Review Bot
Comment 3 2013-03-12 08:30:24 PDT
Comment on attachment 192723 [details] proposed fix Clearing flags on attachment: 192723 Committed r145551: <http://trac.webkit.org/changeset/145551>
WebKit Review Bot
Comment 4 2013-03-12 08:30:27 PDT
All reviewed patches have been landed. Closing bug.
Gabor Rapcsanyi
Comment 5 2013-03-12 09:03:04 PDT
(In reply to comment #2)
> (From update of attachment 192723 [details])
> View in context: https://bugs.webkit.org/attachment.cgi?id=192723&action=review
>
> R=me, but it looks like both Ints2Double and Double2Ints are not endianness-friendly.

Nope, they aren't. Should we make them endianness-friendly?

> > Source/JavaScriptCore/llint/LowLevelInterpreter.cpp:130
> > +static void Double2Ints(double val, uint32_t& lo, uint32_t& hi)
> > +{
> > +    union {
> > +        double dval;
> > +        uint64_t ival64;
> > +    } u;
> > +    u.dval = val;
> > +    hi = static_cast<uint32_t>(u.ival64 >> 32);
> > +    lo = static_cast<uint32_t>(u.ival64);
> > +}
>
> Is this right? Does Double2Ints mandate 'hi' and 'lo' being the low-order bits and the high-order bits, or are they supposed to be the first bits and the last bits?

I've just made it match Ints2Double. We tried it on the JSC tests and everything went fine, so I think it's OK for little-endian architectures at least.