WebKit Bugzilla
Attachment 370529 Details for Bug 197979: [JSC] Implement op_wide16 / op_wide32 and introduce 16bit version bytecode
Patch
bug-197979-20190523155230.patch (text/plain), 85.77 KB, created by Yusuke Suzuki on 2019-05-23 15:52:31 PDT
Flags: patch, obsolete
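Before the diff: this patch replaces JSC's single `op_wide` prefix with two prefixes, `op_wide16` and `op_wide32`, so every instruction can be emitted at the smallest of three operand widths (1, 2, or 4 bytes per slot), and `Instruction::size()` becomes the opcode length scaled by that width plus one prefix byte for the wide forms. A minimal Python model of the width selection and size computation (the function names and signed-range check are illustrative, not JSC's actual tables):

```python
# Hypothetical model of the narrow/wide16/wide32 encoding this patch
# introduces; not JSC's real classes or opcode tables.

def shift_for_operands(operands):
    """Pick the smallest width: 0 -> narrow (1-byte slots),
    1 -> wide16 (2-byte slots), 2 -> wide32 (4-byte slots)."""
    for shift, bits in ((0, 8), (1, 16), (2, 32)):
        lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
        if all(lo <= op <= hi for op in operands):
            return shift
    raise ValueError("operand does not fit in 32 bits")

def instruction_size(opcode_length, shift):
    """opcode_length counts the opcode slot plus operand slots
    (JSC's opcodeLengths); wide forms pay one extra prefix byte."""
    padding = 1 if shift else 0  # the op_wide16 / op_wide32 prefix byte
    return opcode_length * (1 << shift) + padding
```

Under this model an instruction of length 4 (opcode plus three operands) occupies 4 bytes narrow, 9 bytes as wide16, and 17 bytes as wide32, mirroring the `size()` computation in the Instruction.h hunk below.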
>Subversion Revision: 245700 >diff --git a/Source/JavaScriptCore/ChangeLog b/Source/JavaScriptCore/ChangeLog >index 097a2cff8479aa8bd384fd2ff312dbb824dc9b66..e3adfbf3f8035b27298bafed0b9fbd20e3df03da 100644 >--- a/Source/JavaScriptCore/ChangeLog >+++ b/Source/JavaScriptCore/ChangeLog >@@ -1,3 +1,93 @@ >+2019-05-23 Tadeu Zagallo <tzagallo@apple.com> and Yusuke Suzuki <ysuzuki@apple.com> >+ >+ [JSC] Implement op_wide16 / op_wide32 and introduce 16bit version bytecode >+ https://bugs.webkit.org/show_bug.cgi?id=197979 >+ >+ Reviewed by NOBODY (OOPS!). >+ >+ * bytecode/BytecodeConventions.h: >+ * bytecode/BytecodeDumper.cpp: >+ (JSC::BytecodeDumper<Block>::dumpBlock): >+ * bytecode/BytecodeList.rb: >+ * bytecode/BytecodeRewriter.h: >+ (JSC::BytecodeRewriter::Fragment::align): >+ * bytecode/BytecodeUseDef.h: >+ (JSC::computeUsesForBytecodeOffset): >+ (JSC::computeDefsForBytecodeOffset): >+ * bytecode/CodeBlock.cpp: >+ (JSC::CodeBlock::finishCreation): >+ * bytecode/CodeBlock.h: >+ (JSC::CodeBlock::metadataTable const): >+ * bytecode/Fits.h: >+ * bytecode/Instruction.h: >+ (JSC::Instruction::opcodeID const): >+ (JSC::Instruction::isWide16 const): >+ (JSC::Instruction::isWide32 const): >+ (JSC::Instruction::hasMetadata const): >+ (JSC::Instruction::sizeShiftAmount const): >+ (JSC::Instruction::size const): >+ (JSC::Instruction::wide16 const): >+ (JSC::Instruction::wide32 const): >+ (JSC::Instruction::isWide const): Deleted. >+ (JSC::Instruction::wide const): Deleted. >+ * bytecode/InstructionStream.h: >+ (JSC::InstructionStreamWriter::write): >+ * bytecode/Opcode.h: >+ * bytecode/OpcodeSize.h: >+ * bytecompiler/BytecodeGenerator.cpp: >+ (JSC::BytecodeGenerator::alignWideOpcode16): >+ (JSC::BytecodeGenerator::alignWideOpcode32): >+ (JSC::BytecodeGenerator::emitGetByVal): >+ (JSC::BytecodeGenerator::emitYieldPoint): >+ (JSC::StructureForInContext::finalize): >+ (JSC::BytecodeGenerator::alignWideOpcode): Deleted. 
>+ * bytecompiler/BytecodeGenerator.h: >+ (JSC::BytecodeGenerator::write): >+ * dfg/DFGCapabilities.cpp: >+ (JSC::DFG::capabilityLevel): >+ * generator/Argument.rb: >+ * generator/DSL.rb: >+ * generator/Metadata.rb: >+ * generator/Opcode.rb: >+ * generator/Section.rb: >+ * jit/JITExceptions.cpp: >+ (JSC::genericUnwind): >+ * llint/LLIntData.cpp: >+ (JSC::LLInt::initialize): >+ * llint/LLIntData.h: >+ (JSC::LLInt::opcodeMapWide16): >+ (JSC::LLInt::opcodeMapWide32): >+ (JSC::LLInt::getOpcodeWide16): >+ (JSC::LLInt::getOpcodeWide32): >+ (JSC::LLInt::getWide16CodePtr): >+ (JSC::LLInt::getWide32CodePtr): >+ (JSC::LLInt::opcodeMapWide): Deleted. >+ (JSC::LLInt::getOpcodeWide): Deleted. >+ (JSC::LLInt::getWideCodePtr): Deleted. >+ * llint/LLIntSlowPaths.cpp: >+ (JSC::LLInt::LLINT_SLOW_PATH_DECL): >+ * llint/LLIntSlowPaths.h: >+ * llint/LowLevelInterpreter.asm: >+ * llint/LowLevelInterpreter.cpp: >+ (JSC::CLoop::execute): >+ * llint/LowLevelInterpreter32_64.asm: >+ * llint/LowLevelInterpreter64.asm: >+ * offlineasm/arm.rb: >+ * offlineasm/arm64.rb: >+ * offlineasm/cloop.rb: >+ * offlineasm/instructions.rb: >+ * offlineasm/mips.rb: >+ * offlineasm/x86.rb: >+ * parser/ResultType.h: >+ (JSC::OperandTypes::OperandTypes): >+ (JSC::OperandTypes::first const): >+ (JSC::OperandTypes::second const): >+ (JSC::OperandTypes::bits): >+ (JSC::OperandTypes::fromBits): >+ (): Deleted. >+ (JSC::OperandTypes::toInt): Deleted. >+ (JSC::OperandTypes::fromInt): Deleted. 
>+ > 2019-05-23 Ross Kirsling <ross.kirsling@sony.com> > > Lexer<T>::parseDecimal ought to ASSERT isASCIIDigit >diff --git a/Source/WTF/ChangeLog b/Source/WTF/ChangeLog >index a8f45dd384efde2aa808c7b6597253afc94a4667..5e3c94e7dc3e9065b776efe0840bdfac6ccc426b 100644 >--- a/Source/WTF/ChangeLog >+++ b/Source/WTF/ChangeLog >@@ -1,3 +1,15 @@ >+2019-05-23 Yusuke Suzuki <ysuzuki@apple.com> >+ >+ [JSC] Implement op_wide16 / op_wide32 and introduce 16bit version bytecode >+ https://bugs.webkit.org/show_bug.cgi?id=197979 >+ >+ Reviewed by NOBODY (OOPS!). >+ >+ * wtf/FastMalloc.h: >+ (WTF::FastMalloc::zeroedMalloc): >+ * wtf/MallocPtr.h: >+ (WTF::MallocPtr::zeroedMalloc): >+ > 2019-05-23 Ross Kirsling <ross.kirsling@sony.com> > > [PlayStation] Implement platformUserPreferredLanguages. >diff --git a/Source/JavaScriptCore/bytecode/BytecodeConventions.h b/Source/JavaScriptCore/bytecode/BytecodeConventions.h >index 7781378ce6de131c5503498d44546825e7555105..a6bdd127620b95db6c8023fbbc98c970f6534d21 100644 >--- a/Source/JavaScriptCore/bytecode/BytecodeConventions.h >+++ b/Source/JavaScriptCore/bytecode/BytecodeConventions.h >@@ -29,4 +29,8 @@ > // 0x80000000-0xFFFFFFFF Negative indices from the CallFrame pointer are entries in the call frame. > // 0x00000000-0x3FFFFFFF Forwards indices from the CallFrame pointer are local vars and temporaries with the function's callframe. > // 0x40000000-0x7FFFFFFF Positive indices from 0x40000000 specify entries in the constant pool on the CodeBlock. 
>-static const int FirstConstantRegisterIndex = 0x40000000; >+static constexpr int FirstConstantRegisterIndex = 0x40000000; >+ >+static constexpr int FirstConstantRegisterIndex8 = 16; >+static constexpr int FirstConstantRegisterIndex16 = 64; >+static constexpr int FirstConstantRegisterIndex32 = FirstConstantRegisterIndex; >diff --git a/Source/JavaScriptCore/bytecode/BytecodeDumper.cpp b/Source/JavaScriptCore/bytecode/BytecodeDumper.cpp >index 721d390552c92a8b3a264b08e9525812f8244f8e..ec31ef0e1f24d40e42bb23b993a4c8353e54ebd9 100644 >--- a/Source/JavaScriptCore/bytecode/BytecodeDumper.cpp >+++ b/Source/JavaScriptCore/bytecode/BytecodeDumper.cpp >@@ -193,22 +193,26 @@ template<class Block> > void BytecodeDumper<Block>::dumpBlock(Block* block, const InstructionStream& instructions, PrintStream& out, const ICStatusMap& statusMap) > { > size_t instructionCount = 0; >- size_t wideInstructionCount = 0; >+ size_t wide16InstructionCount = 0; >+ size_t wide32InstructionCount = 0; > size_t instructionWithMetadataCount = 0; > > for (const auto& instruction : instructions) { >- if (instruction->isWide()) >- ++wideInstructionCount; >- if (instruction->opcodeID() < NUMBER_OF_BYTECODE_WITH_METADATA) >+ if (instruction->isWide16()) >+ ++wide16InstructionCount; >+ else if (instruction->isWide32()) >+ ++wide32InstructionCount; >+ if (instruction->hasMetadata()) > ++instructionWithMetadataCount; > ++instructionCount; > } > > out.print(*block); > out.printf( >- ": %lu instructions (%lu wide instructions, %lu instructions with metadata); %lu bytes (%lu metadata bytes); %d parameter(s); %d callee register(s); %d variable(s)", >+ ": %lu instructions (%lu 16-bit instructions, %lu 32-bit instructions, %lu instructions with metadata); %lu bytes (%lu metadata bytes); %d parameter(s); %d callee register(s); %d variable(s)", > static_cast<unsigned long>(instructionCount), >- static_cast<unsigned long>(wideInstructionCount), >+ static_cast<unsigned long>(wide16InstructionCount), >+ 
static_cast<unsigned long>(wide32InstructionCount), > static_cast<unsigned long>(instructionWithMetadataCount), > static_cast<unsigned long>(instructions.sizeInBytes() + block->metadataSizeInBytes()), > static_cast<unsigned long>(block->metadataSizeInBytes()), >diff --git a/Source/JavaScriptCore/bytecode/BytecodeList.rb b/Source/JavaScriptCore/bytecode/BytecodeList.rb >index cdee569c8d020b0d793ce84ae88ac5dbe1b5191e..ea1bbe2d68d91fd4980b297c0c313f1930cf5d3c 100644 >--- a/Source/JavaScriptCore/bytecode/BytecodeList.rb >+++ b/Source/JavaScriptCore/bytecode/BytecodeList.rb >@@ -82,7 +82,8 @@ > asm_prefix: "llint_", > op_prefix: "op_" > >-op :wide >+op :wide16 >+op :wide32 > > op :enter > >@@ -1140,6 +1141,17 @@ > op :llint_cloop_did_return_from_js_21 > op :llint_cloop_did_return_from_js_22 > op :llint_cloop_did_return_from_js_23 >+op :llint_cloop_did_return_from_js_24 >+op :llint_cloop_did_return_from_js_25 >+op :llint_cloop_did_return_from_js_26 >+op :llint_cloop_did_return_from_js_27 >+op :llint_cloop_did_return_from_js_28 >+op :llint_cloop_did_return_from_js_29 >+op :llint_cloop_did_return_from_js_30 >+op :llint_cloop_did_return_from_js_31 >+op :llint_cloop_did_return_from_js_32 >+op :llint_cloop_did_return_from_js_33 >+op :llint_cloop_did_return_from_js_34 > > end_section :CLoopHelpers > >diff --git a/Source/JavaScriptCore/bytecode/BytecodeRewriter.h b/Source/JavaScriptCore/bytecode/BytecodeRewriter.h >index 367eaa98d0f1fb9268a564d2f8312e9eebd0099e..e261654b24d399133c9f91173dab734991b406a3 100644 >--- a/Source/JavaScriptCore/bytecode/BytecodeRewriter.h >+++ b/Source/JavaScriptCore/bytecode/BytecodeRewriter.h >@@ -161,7 +161,7 @@ WTF_MAKE_NONCOPYABLE(BytecodeRewriter); > { > #if CPU(NEEDS_ALIGNED_ACCESS) > m_bytecodeGenerator.withWriter(m_writer, [&] { >- while (m_bytecodeGenerator.instructions().size() % OpcodeSize::Wide) >+ while (m_bytecodeGenerator.instructions().size() % OpcodeSize::Wide32) > OpNop::emit<OpcodeSize::Narrow>(&m_bytecodeGenerator); > }); > #endif 
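The `BytecodeRewriter::Fragment::align()` hunk just above pads the stream with narrow `nop`s until its size is a multiple of `OpcodeSize::Wide32` (4 bytes) on `CPU(NEEDS_ALIGNED_ACCESS)` targets; aligning to the larger wide size also keeps the 2-byte wide16 operand slots aligned. A small sketch of that padding loop (the `OP_NOP` byte value is made up for illustration):

```python
OP_NOP = 0x01          # placeholder opcode value, not JSC's real encoding
WIDE32_ALIGNMENT = 4   # OpcodeSize::Wide32

def align_for_wide32(stream):
    """Append 1-byte nops until the stream length is 4-byte aligned,
    mirroring the rewriter's alignment loop."""
    while len(stream) % WIDE32_ALIGNMENT:
        stream.append(OP_NOP)
    return stream
```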
>diff --git a/Source/JavaScriptCore/bytecode/BytecodeUseDef.h b/Source/JavaScriptCore/bytecode/BytecodeUseDef.h >index 5718b5bd31faf54188dfad27cc30c34ea14d0764..4962c7618044bb147a63a118473d13285391f2f1 100644 >--- a/Source/JavaScriptCore/bytecode/BytecodeUseDef.h >+++ b/Source/JavaScriptCore/bytecode/BytecodeUseDef.h >@@ -68,7 +68,8 @@ void computeUsesForBytecodeOffset(Block* codeBlock, OpcodeID opcodeID, const Ins > }; > > switch (opcodeID) { >- case op_wide: >+ case op_wide16: >+ case op_wide32: > RELEASE_ASSERT_NOT_REACHED(); > > // No uses. >@@ -289,7 +290,8 @@ template<typename Block, typename Functor> > void computeDefsForBytecodeOffset(Block* codeBlock, OpcodeID opcodeID, const Instruction* instruction, const Functor& functor) > { > switch (opcodeID) { >- case op_wide: >+ case op_wide16: >+ case op_wide32: > RELEASE_ASSERT_NOT_REACHED(); > > // These don't define anything. >diff --git a/Source/JavaScriptCore/bytecode/CodeBlock.cpp b/Source/JavaScriptCore/bytecode/CodeBlock.cpp >index 18ba96a239bef08d458b2f36232520306f7488eb..ade2ace3d1a46d1316a043d31058201001770d50 100644 >--- a/Source/JavaScriptCore/bytecode/CodeBlock.cpp >+++ b/Source/JavaScriptCore/bytecode/CodeBlock.cpp >@@ -445,9 +445,14 @@ bool CodeBlock::finishCreation(VM& vm, ScriptExecutable* ownerExecutable, Unlink > const UnlinkedHandlerInfo& unlinkedHandler = unlinkedCodeBlock->exceptionHandler(i); > HandlerInfo& handler = m_rareData->m_exceptionHandlers[i]; > #if ENABLE(JIT) >- MacroAssemblerCodePtr<BytecodePtrTag> codePtr = instructions().at(unlinkedHandler.target)->isWide() >- ? 
LLInt::getWideCodePtr<BytecodePtrTag>(op_catch) >- : LLInt::getCodePtr<BytecodePtrTag>(op_catch); >+ auto instruction = instructions().at(unlinkedHandler.target); >+ MacroAssemblerCodePtr<BytecodePtrTag> codePtr; >+ if (instruction->isWide32()) >+ codePtr = LLInt::getWide32CodePtr<BytecodePtrTag>(op_catch); >+ else if (instruction->isWide16()) >+ codePtr = LLInt::getWide16CodePtr<BytecodePtrTag>(op_catch); >+ else >+ codePtr = LLInt::getCodePtr<BytecodePtrTag>(op_catch); > handler.initialize(unlinkedHandler, CodeLocationLabel<ExceptionHandlerPtrTag>(codePtr.retagged<ExceptionHandlerPtrTag>())); > #else > handler.initialize(unlinkedHandler); >diff --git a/Source/JavaScriptCore/bytecode/CodeBlock.h b/Source/JavaScriptCore/bytecode/CodeBlock.h >index 98dbf647549c789c24858c224157cc467e7cc7fb..4e62bdec22ee9e5e286f9165404b0fc32d77c289 100644 >--- a/Source/JavaScriptCore/bytecode/CodeBlock.h >+++ b/Source/JavaScriptCore/bytecode/CodeBlock.h >@@ -145,6 +145,8 @@ class CodeBlock : public JSCell { > void dumpAssumingJITType(PrintStream&, JITType) const; > JS_EXPORT_PRIVATE void dump(PrintStream&) const; > >+ MetadataTable* metadataTable() const { return m_metadata.get(); } >+ > int numParameters() const { return m_numParameters; } > void setNumParameters(int newValue); > >diff --git a/Source/JavaScriptCore/bytecode/Fits.h b/Source/JavaScriptCore/bytecode/Fits.h >index 24d7757c979465ef28018ad3b7c91f339c82a1d7..48d54e85d84268f7be6030641c05e7d708ca4292 100644 >--- a/Source/JavaScriptCore/bytecode/Fits.h >+++ b/Source/JavaScriptCore/bytecode/Fits.h >@@ -51,123 +51,126 @@ struct Fits; > // Implicit conversion for types of the same size > template<typename T, OpcodeSize size> > struct Fits<T, size, std::enable_if_t<sizeof(T) == size, std::true_type>> { >- static bool check(T) { return true; } >- >- static typename TypeBySize<size>::type convert(T t) { return bitwise_cast<typename TypeBySize<size>::type>(t); } >- >- template<class T1 = T, OpcodeSize size1 = size, typename = 
std::enable_if_t<!std::is_same<T1, typename TypeBySize<size1>::type>::value, std::true_type>> >- static T1 convert(typename TypeBySize<size1>::type t) { return bitwise_cast<T1>(t); } >-}; >+ using TargetType = typename TypeBySize<size>::unsignedType; > >-template<typename T, OpcodeSize size> >-struct Fits<T, size, std::enable_if_t<sizeof(T) < size, std::true_type>> { > static bool check(T) { return true; } > >- static typename TypeBySize<size>::type convert(T t) { return static_cast<typename TypeBySize<size>::type>(t); } >+ static TargetType convert(T t) { return bitwise_cast<TargetType>(t); } > >- template<class T1 = T, OpcodeSize size1 = size, typename = std::enable_if_t<!std::is_same<T1, typename TypeBySize<size1>::type>::value, std::true_type>> >- static T1 convert(typename TypeBySize<size1>::type t) { return static_cast<T1>(t); } >+ template<class T1 = T, OpcodeSize size1 = size, typename = std::enable_if_t<!std::is_same<T1, TargetType>::value, std::true_type>> >+ static T1 convert(TargetType t) { return bitwise_cast<T1>(t); } > }; > >-template<> >-struct Fits<uint32_t, OpcodeSize::Narrow> { >- static bool check(unsigned u) { return u <= UINT8_MAX; } >+template<typename T, OpcodeSize size> >+struct Fits<T, size, std::enable_if_t<std::is_integral<T>::value && sizeof(T) != size && !std::is_same<bool, T>::value, std::true_type>> { >+ using TargetType = std::conditional_t<std::is_unsigned<T>::value, typename TypeBySize<size>::unsignedType, typename TypeBySize<size>::signedType>; > >- static uint8_t convert(unsigned u) >+ static bool check(T t) > { >- ASSERT(check(u)); >- return static_cast<uint8_t>(u); >+ return t >= std::numeric_limits<TargetType>::min() && t <= std::numeric_limits<TargetType>::max(); > } >- static unsigned convert(uint8_t u) >+ >+ static TargetType convert(T t) > { >- return u; >+ ASSERT(check(t)); >+ return static_cast<TargetType>(t); > } >+ >+ template<class T1 = T, OpcodeSize size1 = size, typename TargetType1 = TargetType, typename = 
std::enable_if_t<!std::is_same<T1, TargetType1>::value, std::true_type>> >+ static T1 convert(TargetType1 t) { return static_cast<T1>(t); } > }; > >-template<> >-struct Fits<int, OpcodeSize::Narrow> { >- static bool check(int i) >- { >- return i >= INT8_MIN && i <= INT8_MAX; >- } >+template<OpcodeSize size> >+struct Fits<bool, size, std::enable_if_t<size != sizeof(bool), std::true_type>> : public Fits<uint8_t, size> { >+ using Base = Fits<uint8_t, size>; >+ >+ static bool check(bool e) { return Base::check(static_cast<uint8_t>(e)); } > >- static uint8_t convert(int i) >+ static typename Base::TargetType convert(bool e) > { >- ASSERT(check(i)); >- return static_cast<uint8_t>(i); >+ return Base::convert(static_cast<uint8_t>(e)); > } > >- static int convert(uint8_t i) >+ static bool convert(typename Base::TargetType e) > { >- return static_cast<int8_t>(i); >+ return Base::convert(e); > } > }; > >+template<OpcodeSize size> >+struct FirstConstant; >+ > template<> >-struct Fits<VirtualRegister, OpcodeSize::Narrow> { >+struct FirstConstant<OpcodeSize::Narrow> { >+ static constexpr int index = FirstConstantRegisterIndex8; >+}; >+ >+template<> >+struct FirstConstant<OpcodeSize::Wide16> { >+ static constexpr int index = FirstConstantRegisterIndex16; >+}; >+ >+template<OpcodeSize size> >+struct Fits<VirtualRegister, size, std::enable_if_t<size != OpcodeSize::Wide32, std::true_type>> { >+ // Narrow: > // -128..-1 local variables > // 0..15 arguments > // 16..127 constants >- static constexpr int s_firstConstantIndex = 16; >+ // >+ // Wide16: >+ // -2**15..-1 local variables >+ // 0..63 arguments >+ // 64..2**15-1 constants >+ >+ using TargetType = typename TypeBySize<size>::signedType; >+ >+ static constexpr int s_firstConstantIndex = FirstConstant<size>::index; > static bool check(VirtualRegister r) > { > if (r.isConstant()) >- return (s_firstConstantIndex + r.toConstantIndex()) <= INT8_MAX; >- return r.offset() >= INT8_MIN && r.offset() < s_firstConstantIndex; >+ return 
(s_firstConstantIndex + r.toConstantIndex()) <= std::numeric_limits<TargetType>::max(); >+ return r.offset() >= std::numeric_limits<TargetType>::min() && r.offset() < s_firstConstantIndex; > } > >- static uint8_t convert(VirtualRegister r) >+ static TargetType convert(VirtualRegister r) > { > ASSERT(check(r)); > if (r.isConstant()) >- return static_cast<int8_t>(s_firstConstantIndex + r.toConstantIndex()); >- return static_cast<int8_t>(r.offset()); >+ return static_cast<TargetType>(s_firstConstantIndex + r.toConstantIndex()); >+ return static_cast<TargetType>(r.offset()); > } > >- static VirtualRegister convert(uint8_t u) >+ static VirtualRegister convert(TargetType u) > { >- int i = static_cast<int>(static_cast<int8_t>(u)); >+ int i = static_cast<int>(static_cast<TargetType>(u)); > if (i >= s_firstConstantIndex) > return VirtualRegister { (i - s_firstConstantIndex) + FirstConstantRegisterIndex }; > return VirtualRegister { i }; > } > }; > >-template<> >-struct Fits<SymbolTableOrScopeDepth, OpcodeSize::Narrow> { >- static bool check(SymbolTableOrScopeDepth u) >- { >- return u.raw() <= UINT8_MAX; >- } >+template<OpcodeSize size> >+struct Fits<SymbolTableOrScopeDepth, size, std::enable_if_t<size != OpcodeSize::Wide32, std::true_type>> : public Fits<unsigned, size> { >+ using TargetType = typename TypeBySize<size>::unsignedType; >+ using Base = Fits<unsigned, size>; > >- static uint8_t convert(SymbolTableOrScopeDepth u) >- { >- ASSERT(check(u)); >- return static_cast<uint8_t>(u.raw()); >- } >+ static bool check(SymbolTableOrScopeDepth u) { return Base::check(u.raw()); } > >- static SymbolTableOrScopeDepth convert(uint8_t u) >+ static TargetType convert(SymbolTableOrScopeDepth u) > { >- return SymbolTableOrScopeDepth::raw(u); >+ return Base::convert(u.raw()); > } >-}; > >-template<> >-struct Fits<Special::Pointer, OpcodeSize::Narrow> : Fits<int, OpcodeSize::Narrow> { >- using Base = Fits<int, OpcodeSize::Narrow>; >- static bool check(Special::Pointer sp) { return 
Base::check(static_cast<int>(sp)); } >- static uint8_t convert(Special::Pointer sp) >- { >- return Base::convert(static_cast<int>(sp)); >- } >- static Special::Pointer convert(uint8_t sp) >+ static SymbolTableOrScopeDepth convert(TargetType u) > { >- return static_cast<Special::Pointer>(Base::convert(sp)); >+ return SymbolTableOrScopeDepth::raw(Base::convert(u)); > } > }; > >-template<> >-struct Fits<GetPutInfo, OpcodeSize::Narrow> { >+template<OpcodeSize size> >+struct Fits<GetPutInfo, size, std::enable_if_t<size != OpcodeSize::Wide32, std::true_type>> { >+ using TargetType = typename TypeBySize<size>::unsignedType; >+ > // 13 Resolve Types > // 3 Initialization Modes > // 2 Resolve Modes >@@ -197,7 +200,7 @@ struct Fits<GetPutInfo, OpcodeSize::Narrow> { > return resolveType < s_resolveTypeMax && initializationMode < s_initializationModeMax && resolveMode < s_resolveModeMax; > } > >- static uint8_t convert(GetPutInfo gpi) >+ static TargetType convert(GetPutInfo gpi) > { > ASSERT(check(gpi)); > auto resolveType = static_cast<uint8_t>(gpi.resolveType()); >@@ -206,7 +209,7 @@ struct Fits<GetPutInfo, OpcodeSize::Narrow> { > return (resolveType << 3) | (initializationMode << 1) | resolveMode; > } > >- static GetPutInfo convert(uint8_t gpi) >+ static GetPutInfo convert(TargetType gpi) > { > auto resolveType = static_cast<ResolveType>((gpi & s_resolveTypeBits) >> 3); > auto initializationMode = static_cast<InitializationMode>((gpi & s_initializationModeBits) >> 1); >@@ -215,108 +218,79 @@ struct Fits<GetPutInfo, OpcodeSize::Narrow> { > } > }; > >-template<> >-struct Fits<DebugHookType, OpcodeSize::Narrow> : Fits<int, OpcodeSize::Narrow> { >- using Base = Fits<int, OpcodeSize::Narrow>; >- static bool check(DebugHookType dht) { return Base::check(static_cast<int>(dht)); } >- static uint8_t convert(DebugHookType dht) >- { >- return Base::convert(static_cast<int>(dht)); >- } >- static DebugHookType convert(uint8_t dht) >- { >- return 
static_cast<DebugHookType>(Base::convert(dht)); >- } >-}; >+template<typename E, OpcodeSize size> >+struct Fits<E, size, std::enable_if_t<sizeof(E) != size && std::is_enum<E>::value, std::true_type>> : public Fits<std::underlying_type_t<E>, size> { >+ using Base = Fits<std::underlying_type_t<E>, size>; > >-template<> >-struct Fits<ProfileTypeBytecodeFlag, OpcodeSize::Narrow> : Fits<int, OpcodeSize::Narrow> { >- using Base = Fits<int, OpcodeSize::Narrow>; >- static bool check(ProfileTypeBytecodeFlag ptbf) { return Base::check(static_cast<int>(ptbf)); } >- static uint8_t convert(ProfileTypeBytecodeFlag ptbf) >- { >- return Base::convert(static_cast<int>(ptbf)); >- } >- static ProfileTypeBytecodeFlag convert(uint8_t ptbf) >- { >- return static_cast<ProfileTypeBytecodeFlag>(Base::convert(ptbf)); >- } >-}; >+ static bool check(E e) { return Base::check(static_cast<std::underlying_type_t<E>>(e)); } > >-template<> >-struct Fits<ResolveType, OpcodeSize::Narrow> : Fits<int, OpcodeSize::Narrow> { >- using Base = Fits<int, OpcodeSize::Narrow>; >- static bool check(ResolveType rt) { return Base::check(static_cast<int>(rt)); } >- static uint8_t convert(ResolveType rt) >+ static typename Base::TargetType convert(E e) > { >- return Base::convert(static_cast<int>(rt)); >+ return Base::convert(static_cast<std::underlying_type_t<E>>(e)); > } > >- static ResolveType convert(uint8_t rt) >+ static E convert(typename Base::TargetType e) > { >- return static_cast<ResolveType>(Base::convert(rt)); >+ return static_cast<E>(Base::convert(e)); > } > }; > >-template<> >-struct Fits<OperandTypes, OpcodeSize::Narrow> { >+template<OpcodeSize size> >+struct Fits<OperandTypes, size, std::enable_if_t<sizeof(OperandTypes) != size, std::true_type>> { >+ static_assert(sizeof(OperandTypes) == sizeof(uint16_t)); >+ using TargetType = typename TypeBySize<size>::unsignedType; >+ > // a pair of (ResultType::Type, ResultType::Type) - try to fit each type into 4 bits > // additionally, encode unknown types as 
0 rather than the | of all types >- static constexpr int s_maxType = 0x10; >+ static constexpr unsigned typeWidth = 4; >+ static constexpr unsigned maxType = (1 << typeWidth) - 1; > > static bool check(OperandTypes types) > { >- auto first = types.first().bits(); >- auto second = types.second().bits(); >- if (first == ResultType::unknownType().bits()) >- first = 0; >- if (second == ResultType::unknownType().bits()) >- second = 0; >- return first < s_maxType && second < s_maxType; >+ if (size == OpcodeSize::Narrow) { >+ auto first = types.first().bits(); >+ auto second = types.second().bits(); >+ if (first == ResultType::unknownType().bits()) >+ first = 0; >+ if (second == ResultType::unknownType().bits()) >+ second = 0; >+ return first <= maxType && second <= maxType; >+ } else >+ return true; > } > >- static uint8_t convert(OperandTypes types) >- { >- ASSERT(check(types)); >- auto first = types.first().bits(); >- auto second = types.second().bits(); >- if (first == ResultType::unknownType().bits()) >- first = 0; >- if (second == ResultType::unknownType().bits()) >- second = 0; >- return (first << 4) | second; >- } >- >- static OperandTypes convert(uint8_t types) >- { >- auto first = (types & (0xf << 4)) >> 4; >- auto second = (types & 0xf); >- if (!first) >- first = ResultType::unknownType().bits(); >- if (!second) >- second = ResultType::unknownType().bits(); >- return OperandTypes(ResultType(first), ResultType(second)); >- } >-}; >- >-template<> >-struct Fits<PutByIdFlags, OpcodeSize::Narrow> : Fits<int, OpcodeSize::Narrow> { >- // only ever encoded in the bytecode stream as 0 or 1, so the trivial encoding should be good enough >- using Base = Fits<int, OpcodeSize::Narrow>; >- static bool check(PutByIdFlags flags) { return Base::check(static_cast<int>(flags)); } >- static uint8_t convert(PutByIdFlags flags) >+ static TargetType convert(OperandTypes types) > { >- return Base::convert(static_cast<int>(flags)); >+ if (size == OpcodeSize::Narrow) { >+ 
ASSERT(check(types)); >+ auto first = types.first().bits(); >+ auto second = types.second().bits(); >+ if (first == ResultType::unknownType().bits()) >+ first = 0; >+ if (second == ResultType::unknownType().bits()) >+ second = 0; >+ return (first << typeWidth) | second; >+ } else >+ return static_cast<TargetType>(types.bits()); > } > >- static PutByIdFlags convert(uint8_t flags) >+ static OperandTypes convert(TargetType types) > { >- return static_cast<PutByIdFlags>(Base::convert(flags)); >+ if (size == OpcodeSize::Narrow) { >+ auto first = types >> typeWidth; >+ auto second = types & maxType; >+ if (!first) >+ first = ResultType::unknownType().bits(); >+ if (!second) >+ second = ResultType::unknownType().bits(); >+ return OperandTypes(ResultType(first), ResultType(second)); >+ } else >+ return OperandTypes::fromBits(static_cast<uint16_t>(types)); > } > }; > > template<OpcodeSize size> >-struct Fits<BoundLabel, size> : Fits<int, size> { >+struct Fits<BoundLabel, size> : public Fits<int, size> { > // This is a bit hacky: we need to delay computing jump targets, since we > // might have to emit `nop`s to align the instructions stream. 
Additionally, > // we have to compute the target before we start writing to the instruction >@@ -330,12 +304,12 @@ struct Fits<BoundLabel, size> : Fits<int, size> { > return Base::check(label.saveTarget()); > } > >- static typename TypeBySize<size>::type convert(BoundLabel& label) >+ static typename Base::TargetType convert(BoundLabel& label) > { > return Base::convert(label.commitTarget()); > } > >- static BoundLabel convert(typename TypeBySize<size>::type target) >+ static BoundLabel convert(typename Base::TargetType target) > { > return BoundLabel(Base::convert(target)); > } >diff --git a/Source/JavaScriptCore/bytecode/Instruction.h b/Source/JavaScriptCore/bytecode/Instruction.h >index fb278e9cad37a01667fe113a41bafd3b9956ce0d..651ce8f0c1c21c43bb8f230e03eb9600bc2d9cdb 100644 >--- a/Source/JavaScriptCore/bytecode/Instruction.h >+++ b/Source/JavaScriptCore/bytecode/Instruction.h >@@ -45,14 +45,16 @@ struct Instruction { > OpcodeID opcodeID() const { return static_cast<OpcodeID>(m_opcode); } > > private: >- typename TypeBySize<Width>::type m_opcode; >+ typename TypeBySize<Width>::unsignedType m_opcode; > }; > > public: > OpcodeID opcodeID() const > { >- if (isWide()) >- return wide()->opcodeID(); >+ if (isWide32()) >+ return wide32()->opcodeID(); >+ if (isWide16()) >+ return wide16()->opcodeID(); > return narrow()->opcodeID(); > } > >@@ -61,16 +63,35 @@ struct Instruction { > return opcodeNames[opcodeID()]; > } > >- bool isWide() const >+ bool isWide16() const > { >- return narrow()->opcodeID() == op_wide; >+ return narrow()->opcodeID() == op_wide16; >+ } >+ >+ bool isWide32() const >+ { >+ return narrow()->opcodeID() == op_wide32; >+ } >+ >+ bool hasMetadata() const >+ { >+ return opcodeID() < NUMBER_OF_BYTECODE_WITH_METADATA; >+ } >+ >+ int sizeShiftAmount() const >+ { >+ if (isWide32()) >+ return 2; >+ if (isWide16()) >+ return 1; >+ return 0; > } > > size_t size() const > { >- auto wide = isWide(); >- auto padding = wide ? 1 : 0; >- auto size = wide ? 
4 : 1; >+ auto sizeShiftAmount = this->sizeShiftAmount(); >+ auto padding = sizeShiftAmount ? 1 : 0; >+ auto size = 1 << sizeShiftAmount; > return opcodeLengths[opcodeID()] * size + padding; > } > >@@ -106,11 +127,18 @@ struct Instruction { > return reinterpret_cast<const Impl<OpcodeSize::Narrow>*>(this); > } > >- const Impl<OpcodeSize::Wide>* wide() const >+ const Impl<OpcodeSize::Wide16>* wide16() const >+ { >+ >+ ASSERT(isWide16()); >+ return reinterpret_cast<const Impl<OpcodeSize::Wide16>*>(bitwise_cast<uintptr_t>(this) + 1); >+ } >+ >+ const Impl<OpcodeSize::Wide32>* wide32() const > { > >- ASSERT(isWide()); >- return reinterpret_cast<const Impl<OpcodeSize::Wide>*>(bitwise_cast<uintptr_t>(this) + 1); >+ ASSERT(isWide32()); >+ return reinterpret_cast<const Impl<OpcodeSize::Wide32>*>(bitwise_cast<uintptr_t>(this) + 1); > } > }; > >diff --git a/Source/JavaScriptCore/bytecode/InstructionStream.h b/Source/JavaScriptCore/bytecode/InstructionStream.h >index ce9607b372f3bda529a9b5e1fbc74ab60f5c2584..99b5a5a906026cece8b4dd100449289d1ffe6438 100644 >--- a/Source/JavaScriptCore/bytecode/InstructionStream.h >+++ b/Source/JavaScriptCore/bytecode/InstructionStream.h >@@ -210,6 +210,20 @@ class InstructionStreamWriter : public InstructionStream { > m_position++; > } > } >+ >+ void write(uint16_t h) >+ { >+ ASSERT(!m_finalized); >+ uint8_t bytes[2]; >+ std::memcpy(bytes, &h, sizeof(h)); >+ >+ // Though not always obvious, we don't have to invert the order of the >+ // bytes written here for CPU(BIG_ENDIAN). This is because the incoming >+ // h value is already ordered in big endian on CPU(BIG_ENDIAN) platforms. 
>+ write(bytes[0]); >+ write(bytes[1]); >+ } >+ > void write(uint32_t i) > { > ASSERT(!m_finalized); >diff --git a/Source/JavaScriptCore/bytecode/Opcode.h b/Source/JavaScriptCore/bytecode/Opcode.h >index 4427dd98a40716f9e38c469e72cf0b60e9cc5995..c921dd813c65e0ccb3a86f91aa77bf8a1208143a 100644 >--- a/Source/JavaScriptCore/bytecode/Opcode.h >+++ b/Source/JavaScriptCore/bytecode/Opcode.h >@@ -66,8 +66,12 @@ const int numOpcodeIDs = NUMBER_OF_BYTECODE_IDS + NUMBER_OF_BYTECODE_HELPER_IDS; > > #if ENABLE(C_LOOP) && !HAVE(COMPUTED_GOTO) > >-#define OPCODE_ID_ENUM(opcode, length) opcode##_wide = numOpcodeIDs + opcode, >- enum OpcodeIDWide : unsigned { FOR_EACH_OPCODE_ID(OPCODE_ID_ENUM) }; >+#define OPCODE_ID_ENUM(opcode, length) opcode##_wide16 = numOpcodeIDs + opcode, >+ enum OpcodeIDWide16 : unsigned { FOR_EACH_OPCODE_ID(OPCODE_ID_ENUM) }; >+#undef OPCODE_ID_ENUM >+ >+#define OPCODE_ID_ENUM(opcode, length) opcode##_wide32 = numOpcodeIDs * 2 + opcode, >+ enum OpcodeIDWide32 : unsigned { FOR_EACH_OPCODE_ID(OPCODE_ID_ENUM) }; > #undef OPCODE_ID_ENUM > #endif > >diff --git a/Source/JavaScriptCore/bytecode/OpcodeSize.h b/Source/JavaScriptCore/bytecode/OpcodeSize.h >index 98943f39d8ef08efa8f876b882fec0ebeac7d786..24b162b93f79e84a0d1864231acdbd73e59aa063 100644 >--- a/Source/JavaScriptCore/bytecode/OpcodeSize.h >+++ b/Source/JavaScriptCore/bytecode/OpcodeSize.h >@@ -29,7 +29,8 @@ namespace JSC { > > enum OpcodeSize { > Narrow = 1, >- Wide = 4, >+ Wide16 = 2, >+ Wide32 = 4, > }; > > template<OpcodeSize> >@@ -37,12 +38,20 @@ struct TypeBySize; > > template<> > struct TypeBySize<OpcodeSize::Narrow> { >- using type = uint8_t; >+ using signedType = int8_t; >+ using unsignedType = uint8_t; > }; > > template<> >-struct TypeBySize<OpcodeSize::Wide> { >- using type = uint32_t; >+struct TypeBySize<OpcodeSize::Wide16> { >+ using signedType = int16_t; >+ using unsignedType = uint16_t; >+}; >+ >+template<> >+struct TypeBySize<OpcodeSize::Wide32> { >+ using signedType = int32_t; >+ using 
unsignedType = uint32_t; > }; > > template<OpcodeSize> >@@ -54,7 +63,12 @@ struct PaddingBySize<OpcodeSize::Narrow> { > }; > > template<> >-struct PaddingBySize<OpcodeSize::Wide> { >+struct PaddingBySize<OpcodeSize::Wide16> { >+ static constexpr uint8_t value = 1; >+}; >+ >+template<> >+struct PaddingBySize<OpcodeSize::Wide32> { > static constexpr uint8_t value = 1; > }; > >diff --git a/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp b/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp >index 667dac047e187ed7e6db983af30611b4efd7ae39..aa0a10819c762abc778cfa6ebb070031f71f9111 100644 >--- a/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp >+++ b/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp >@@ -1339,10 +1339,18 @@ void BytecodeGenerator::recordOpcode(OpcodeID opcodeID) > m_lastOpcodeID = opcodeID; > } > >-void BytecodeGenerator::alignWideOpcode() >+void BytecodeGenerator::alignWideOpcode16() > { > #if CPU(NEEDS_ALIGNED_ACCESS) >- while ((m_writer.position() + 1) % OpcodeSize::Wide) >+ while ((m_writer.position() + 1) % OpcodeSize::Wide16) >+ OpNop::emit<OpcodeSize::Narrow>(this); >+#endif >+} >+ >+void BytecodeGenerator::alignWideOpcode32() >+{ >+#if CPU(NEEDS_ALIGNED_ACCESS) >+ while ((m_writer.position() + 1) % OpcodeSize::Wide32) > OpNop::emit<OpcodeSize::Narrow>(this); > #endif > } >@@ -2721,13 +2729,20 @@ RegisterID* BytecodeGenerator::emitGetByVal(RegisterID* dst, RegisterID* base, R > > if (context.isIndexedForInContext()) { > auto& indexedContext = context.asIndexedForInContext(); >- OpGetByVal::emit<OpcodeSize::Wide>(this, kill(dst), base, indexedContext.index()); >+ kill(dst); >+ if (OpGetByVal::checkWithoutMetadataID<OpcodeSize::Narrow>(this, dst, base, property)) >+ OpGetByVal::emitWithSmallestSizeRequirement<OpcodeSize::Narrow>(this, dst, base, indexedContext.index()); >+ else if (OpGetByVal::checkWithoutMetadataID<OpcodeSize::Wide16>(this, dst, base, property)) >+ 
OpGetByVal::emitWithSmallestSizeRequirement<OpcodeSize::Wide16>(this, dst, base, indexedContext.index()); >+ else >+ OpGetByVal::emit<OpcodeSize::Wide32>(this, dst, base, indexedContext.index()); > indexedContext.addGetInst(m_lastInstruction.offset(), property->index()); > return dst; > } > >+ // We cannot do the above optimization here since OpGetDirectPname => OpGetByVal conversion involves different metadata ID allocation. > StructureForInContext& structureContext = context.asStructureForInContext(); >- OpGetDirectPname::emit<OpcodeSize::Wide>(this, kill(dst), base, property, structureContext.index(), structureContext.enumerator()); >+ OpGetDirectPname::emit<OpcodeSize::Wide32>(this, kill(dst), base, property, structureContext.index(), structureContext.enumerator()); > > structureContext.addGetInst(m_lastInstruction.offset(), property->index()); > return dst; >@@ -4480,7 +4495,7 @@ void BytecodeGenerator::emitYieldPoint(RegisterID* argument, JSAsyncGeneratorFun > #if CPU(NEEDS_ALIGNED_ACCESS) > // conservatively align for the bytecode rewriter: it will delete this yield and > // append a fragment, so we make sure that the start of the fragments is aligned >- while (m_writer.position() % OpcodeSize::Wide) >+ while (m_writer.position() % OpcodeSize::Wide32) > OpNop::emit<OpcodeSize::Narrow>(this); > #endif > OpYield::emit(this, generatorFrameRegister(), yieldPointIndex, argument); >@@ -4983,7 +4998,7 @@ void StructureForInContext::finalize(BytecodeGenerator& generator, UnlinkedCodeB > int propertyRegIndex = std::get<1>(instTuple); > auto instruction = generator.m_writer.ref(instIndex); > auto end = instIndex + instruction->size(); >- ASSERT(instruction->isWide()); >+ ASSERT(instruction->isWide32()); > > generator.m_writer.seek(instIndex); > >@@ -4996,7 +5011,7 @@ void StructureForInContext::finalize(BytecodeGenerator& generator, UnlinkedCodeB > // 1. dst stays the same. > // 2. base stays the same. > // 3. property gets switched to the original property. 
>- OpGetByVal::emit<OpcodeSize::Wide>(&generator, bytecode.m_dst, bytecode.m_base, VirtualRegister(propertyRegIndex)); >+ OpGetByVal::emit<OpcodeSize::Wide32>(&generator, bytecode.m_dst, bytecode.m_base, VirtualRegister(propertyRegIndex)); > > // 4. nop out the remaining bytes > while (generator.m_writer.position() < end) >diff --git a/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.h b/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.h >index 1c90313c1affb5950ba5dc69afbffac8ce4aa6d8..e97686aa3edcdd8d3f9b57c4a6d123bac9cb3987 100644 >--- a/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.h >+++ b/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.h >@@ -1162,8 +1162,13 @@ namespace JSC { > RegisterID* emitThrowExpressionTooDeepException(); > > void write(uint8_t byte) { m_writer.write(byte); } >+ void write(uint16_t h) { m_writer.write(h); } > void write(uint32_t i) { m_writer.write(i); } >- void alignWideOpcode(); >+ void write(int8_t byte) { m_writer.write(static_cast<uint8_t>(byte)); } >+ void write(int16_t h) { m_writer.write(static_cast<uint16_t>(h)); } >+ void write(int32_t i) { m_writer.write(static_cast<uint32_t>(i)); } >+ void alignWideOpcode16(); >+ void alignWideOpcode32(); > > class PreservedTDZStack { > private: >diff --git a/Source/JavaScriptCore/dfg/DFGCapabilities.cpp b/Source/JavaScriptCore/dfg/DFGCapabilities.cpp >index 20c2340cc00a0c2f4f2100f5ffafedda2867b023..dfe6c16d51a00cab03af98010754a96c7969e345 100644 >--- a/Source/JavaScriptCore/dfg/DFGCapabilities.cpp >+++ b/Source/JavaScriptCore/dfg/DFGCapabilities.cpp >@@ -108,7 +108,8 @@ CapabilityLevel capabilityLevel(OpcodeID opcodeID, CodeBlock* codeBlock, const I > UNUSED_PARAM(pc); > > switch (opcodeID) { >- case op_wide: >+ case op_wide16: >+ case op_wide32: > RELEASE_ASSERT_NOT_REACHED(); > case op_enter: > case op_to_this: >diff --git a/Source/JavaScriptCore/generator/Argument.rb b/Source/JavaScriptCore/generator/Argument.rb >index 
99dcb93455200ebe230d0e163f0526444ba67218..38a28153554a3aafc71104eba10a2898d53f2b53 100644 >--- a/Source/JavaScriptCore/generator/Argument.rb >+++ b/Source/JavaScriptCore/generator/Argument.rb >@@ -42,6 +42,10 @@ def create_param > "#{@type.to_s} #{@name}" > end > >+ def create_reference_param >+ "#{@type.to_s}& #{@name}" >+ end >+ > def field_name > "m_#{@name}" > end >@@ -67,8 +71,10 @@ def setter > template<typename Functor> > void set#{capitalized_name}(#{@type.to_s} value, Functor func) > { >- if (isWide()) >- set#{capitalized_name}<OpcodeSize::Wide>(value, func); >+ if (isWide32()) >+ set#{capitalized_name}<OpcodeSize::Wide32>(value, func); >+ else if (isWide16()) >+ set#{capitalized_name}<OpcodeSize::Wide16>(value, func); > else > set#{capitalized_name}<OpcodeSize::Narrow>(value, func); > } >@@ -78,7 +84,7 @@ def setter > { > if (!#{Fits::check "size", "value", @type}) > value = func(); >- auto* stream = bitwise_cast<typename TypeBySize<size>::type*>(reinterpret_cast<uint8_t*>(this) + #{@index} * size + PaddingBySize<size>::value); >+ auto* stream = bitwise_cast<typename TypeBySize<size>::unsignedType*>(reinterpret_cast<uint8_t*>(this) + #{@index} * size + PaddingBySize<size>::value); > *stream = #{Fits::convert "size", "value", @type}; > } > EOF >diff --git a/Source/JavaScriptCore/generator/DSL.rb b/Source/JavaScriptCore/generator/DSL.rb >index 9407aad24fb56c597bf8f2c069484d7655e901cd..92c7f946ceb0eec5b03a94c07b712f491f045662 100644 >--- a/Source/JavaScriptCore/generator/DSL.rb >+++ b/Source/JavaScriptCore/generator/DSL.rb >@@ -144,7 +144,7 @@ def self.write_init_asm(bytecode_list, init_asm_filename) > GeneratedFile::create(init_asm_filename, bytecode_list) do |template| > template.multiline_comment = nil > template.line_comment = "#" >- template.body = (opcodes.map.with_index(&:set_entry_address) + opcodes.map.with_index(&:set_entry_address_wide)) .join("\n") >+ template.body = (opcodes.map.with_index(&:set_entry_address) + 
opcodes.map.with_index(&:set_entry_address_wide16) + opcodes.map.with_index(&:set_entry_address_wide32)) .join("\n") > end > end > >diff --git a/Source/JavaScriptCore/generator/Metadata.rb b/Source/JavaScriptCore/generator/Metadata.rb >index ad5efa562b3505e4f8d0462395c3933817514386..c3886f877cf96b5bc572b64f1ef43c798eb57b43 100644 >--- a/Source/JavaScriptCore/generator/Metadata.rb >+++ b/Source/JavaScriptCore/generator/Metadata.rb >@@ -112,9 +112,13 @@ def create_emitter_local > EOF > end > >+ def emitter_local_name >+ "__metadataID" >+ end >+ > def emitter_local > unless @@emitter_local >- @@emitter_local = Argument.new("__metadataID", :unsigned, -1) >+ @@emitter_local = Argument.new(emitter_local_name, :unsigned, -1) > end > > return @@emitter_local >diff --git a/Source/JavaScriptCore/generator/Opcode.rb b/Source/JavaScriptCore/generator/Opcode.rb >index 05c259595c80c97cc33f3ff8eb7b66b5160b4d1c..1cbf2237d41ff0b5789d7dc5533a5d417585a269 100644 >--- a/Source/JavaScriptCore/generator/Opcode.rb >+++ b/Source/JavaScriptCore/generator/Opcode.rb >@@ -32,7 +32,8 @@ class Opcode > > module Size > Narrow = "OpcodeSize::Narrow" >- Wide = "OpcodeSize::Wide" >+ Wide16 = "OpcodeSize::Wide16" >+ Wide32 = "OpcodeSize::Wide32" > end > > @@id = 0 >@@ -74,6 +75,12 @@ def typed_args > @args.map(&:create_param).unshift("").join(", ") > end > >+ def typed_reference_args >+ return if @args.nil? >+ >+ @args.map(&:create_reference_param).unshift("").join(", ") >+ end >+ > def untyped_args > return if @args.nil? > >@@ -81,7 +88,7 @@ def untyped_args > end > > def map_fields_with_size(prefix, size, &block) >- args = [Argument.new("opcodeID", :unsigned, 0)] >+ args = [Argument.new("opcodeID", :OpcodeID, 0)] > args += @args.dup if @args > unless @metadata.empty? 
> args << @metadata.emitter_local >@@ -108,15 +115,14 @@ def opcodeID > end > > def emitter >- op_wide = Argument.new("op_wide", :unsigned, 0) >+ op_wide16 = Argument.new("op_wide16", :OpcodeID, 0) >+ op_wide32 = Argument.new("op_wide32", :OpcodeID, 0) > metadata_param = @metadata.empty? ? "" : ", #{@metadata.emitter_local.create_param}" > metadata_arg = @metadata.empty? ? "" : ", #{@metadata.emitter_local.name}" > <<-EOF.chomp > static void emit(BytecodeGenerator* gen#{typed_args}) > { >- #{@metadata.create_emitter_local} >- emit<OpcodeSize::Narrow, NoAssert, true>(gen#{untyped_args}#{metadata_arg}) >- || emit<OpcodeSize::Wide, Assert, true>(gen#{untyped_args}#{metadata_arg}); >+ emitWithSmallestSizeRequirement<OpcodeSize::Narrow>(gen#{untyped_args}); > } > #{%{ > template<OpcodeSize size, FitsAssertion shouldAssert = Assert> >@@ -124,6 +130,13 @@ def emitter > {#{@metadata.create_emitter_local} > return emit<size, shouldAssert>(gen#{untyped_args}#{metadata_arg}); > } >+ >+ template<OpcodeSize size> >+ static bool checkWithoutMetadataID(BytecodeGenerator* gen#{typed_args}) >+ { >+ decltype(gen->addMetadataFor(opcodeID)) __metadataID { }; >+ return checkImpl<size>(gen#{untyped_args}#{metadata_arg}); >+ } > } unless @metadata.empty?} > template<OpcodeSize size, FitsAssertion shouldAssert = Assert, bool recordOpcode = true> > static bool emit(BytecodeGenerator* gen#{typed_args}#{metadata_param}) >@@ -134,18 +147,45 @@ def emitter > return didEmit; > } > >+ template<OpcodeSize size> >+ static void emitWithSmallestSizeRequirement(BytecodeGenerator* gen#{typed_args}) >+ { >+ #{@metadata.create_emitter_local} >+ if (static_cast<unsigned>(size) <= static_cast<unsigned>(OpcodeSize::Narrow)) { >+ if (emit<OpcodeSize::Narrow, NoAssert, true>(gen#{untyped_args}#{metadata_arg})) >+ return; >+ } >+ if (static_cast<unsigned>(size) <= static_cast<unsigned>(OpcodeSize::Wide16)) { >+ if (emit<OpcodeSize::Wide16, NoAssert, true>(gen#{untyped_args}#{metadata_arg})) >+ return; >+ } >+ 
emit<OpcodeSize::Wide32, Assert, true>(gen#{untyped_args}#{metadata_arg}); >+ } >+ > private: >+ template<OpcodeSize size> >+ static bool checkImpl(BytecodeGenerator* gen#{typed_reference_args}#{metadata_param}) >+ { >+ UNUSED_PARAM(gen); >+ return #{map_fields_with_size("", "size", &:fits_check).join "\n && "} >+ && (size == OpcodeSize::Wide16 ? #{op_wide16.fits_check(Size::Narrow)} : true) >+ && (size == OpcodeSize::Wide32 ? #{op_wide32.fits_check(Size::Narrow)} : true); >+ } >+ > template<OpcodeSize size, bool recordOpcode> > static bool emitImpl(BytecodeGenerator* gen#{typed_args}#{metadata_param}) > { >- if (size == OpcodeSize::Wide) >- gen->alignWideOpcode(); >- if (#{map_fields_with_size("", "size", &:fits_check).join "\n && "} >- && (size == OpcodeSize::Wide ? #{op_wide.fits_check(Size::Narrow)} : true)) { >+ if (size == OpcodeSize::Wide16) >+ gen->alignWideOpcode16(); >+ else if (size == OpcodeSize::Wide32) >+ gen->alignWideOpcode32(); >+ if (checkImpl<size>(gen#{untyped_args}#{metadata_arg})) { > if (recordOpcode) > gen->recordOpcode(opcodeID); >- if (size == OpcodeSize::Wide) >- #{op_wide.fits_write Size::Narrow} >+ if (size == OpcodeSize::Wide16) >+ #{op_wide16.fits_write Size::Narrow} >+ else if (size == OpcodeSize::Wide32) >+ #{op_wide32.fits_write Size::Narrow} > #{map_fields_with_size(" ", "size", &:fits_write).join "\n"} > return true; > } >@@ -159,9 +199,9 @@ def emitter > def dumper > <<-EOF > template<typename Block> >- void dump(BytecodeDumper<Block>* dumper, InstructionStream::Offset __location, bool __isWide) >+ void dump(BytecodeDumper<Block>* dumper, InstructionStream::Offset __location, int __sizeShiftAmount) > { >- dumper->printLocationAndOp(__location, &"*#{@name}"[!__isWide]); >+ dumper->printLocationAndOp(__location, &"**#{@name}"[2 - __sizeShiftAmount]); > #{print_args { |arg| > <<-EOF.chomp > dumper->dumpOperand(#{arg.field_name}, #{arg.index == 1}); >@@ -181,20 +221,27 @@ def constructors > { > ASSERT_UNUSED(stream, stream[0] == 
opcodeID); > } >+ # >+ #{capitalized_name}(const uint16_t* stream) >+ #{init.call("OpcodeSize::Wide16")} >+ { >+ ASSERT_UNUSED(stream, stream[0] == opcodeID); >+ } >+ > > #{capitalized_name}(const uint32_t* stream) >- #{init.call("OpcodeSize::Wide")} >+ #{init.call("OpcodeSize::Wide32")} > { > ASSERT_UNUSED(stream, stream[0] == opcodeID); > } > > static #{capitalized_name} decode(const uint8_t* stream) > { >- if (*stream != op_wide) >- return { stream }; >- >- auto wideStream = bitwise_cast<const uint32_t*>(stream + 1); >- return { wideStream }; >+ if (*stream == op_wide32) >+ return { bitwise_cast<const uint32_t*>(stream + 1) }; >+ if (*stream == op_wide16) >+ return { bitwise_cast<const uint16_t*>(stream + 1) }; >+ return { stream }; > } > EOF > end >@@ -219,8 +266,12 @@ def set_entry_address(id) > "setEntryAddress(#{id}, _#{full_name})" > end > >- def set_entry_address_wide(id) >- "setEntryAddressWide(#{id}, _#{full_name}_wide)" >+ def set_entry_address_wide16(id) >+ "setEntryAddressWide16(#{id}, _#{full_name}_wide16)" >+ end >+ >+ def set_entry_address_wide32(id) >+ "setEntryAddressWide32(#{id}, _#{full_name}_wide32)" > end > > def struct_indices >@@ -253,7 +304,7 @@ def self.dump_bytecode(opcodes) > #{opcodes.map { |op| > <<-EOF.chomp > case #{op.name}: >- __instruction->as<#{op.capitalized_name}>().dump(dumper, __location, __instruction->isWide()); >+ __instruction->as<#{op.capitalized_name}>().dump(dumper, __location, __instruction->sizeShiftAmount()); > break; > EOF > }.join "\n"} >diff --git a/Source/JavaScriptCore/generator/Section.rb b/Source/JavaScriptCore/generator/Section.rb >index 7a6afcc2194d4c7af942613576970e8f13828ecf..8cd21db9168417dac389cca5c742ff7bebe6b12f 100644 >--- a/Source/JavaScriptCore/generator/Section.rb >+++ b/Source/JavaScriptCore/generator/Section.rb >@@ -100,7 +100,10 @@ def header_helpers(num_opcodes) > out.write("#define #{opcode.name}_value_string \"#{opcode.id}\"\n") > } > opcodes.each { |opcode| >- out.write("#define 
#{opcode.name}_wide_value_string \"#{num_opcodes + opcode.id}\"\n") >+ out.write("#define #{opcode.name}_wide16_value_string \"#{num_opcodes + opcode.id}\"\n") >+ } >+ opcodes.each { |opcode| >+ out.write("#define #{opcode.name}_wide32_value_string \"#{num_opcodes * 2 + opcode.id}\"\n") > } > end > out.string >diff --git a/Source/JavaScriptCore/jit/JITExceptions.cpp b/Source/JavaScriptCore/jit/JITExceptions.cpp >index 7fb225b17199d37b35991a7affb9e92cb2e91e72..95bbe508b7b051065bbad01d71ab9f94b6633900 100644 >--- a/Source/JavaScriptCore/jit/JITExceptions.cpp >+++ b/Source/JavaScriptCore/jit/JITExceptions.cpp >@@ -74,9 +74,12 @@ void genericUnwind(VM* vm, ExecState* callFrame) > #if ENABLE(JIT) > catchRoutine = handler->nativeCode.executableAddress(); > #else >- catchRoutine = catchPCForInterpreter->isWide() >- ? LLInt::getWideCodePtr(catchPCForInterpreter->opcodeID()) >- : LLInt::getCodePtr(catchPCForInterpreter->opcodeID()); >+ if (catchPCForInterpreter->isWide32()) >+ catchRoutine = LLInt::getWide32CodePtr(catchPCForInterpreter->opcodeID()); >+ else if (catchPCForInterpreter->isWide16()) >+ catchRoutine = LLInt::getWide16CodePtr(catchPCForInterpreter->opcodeID()); >+ else >+ catchRoutine = LLInt::getCodePtr(catchPCForInterpreter->opcodeID()); > #endif > } else > catchRoutine = LLInt::getCodePtr<ExceptionHandlerPtrTag>(handleUncaughtException).executableAddress(); >diff --git a/Source/JavaScriptCore/llint/LLIntData.cpp b/Source/JavaScriptCore/llint/LLIntData.cpp >index 58f18e47594e7684493479ed3b2121382291fc92..e34a79f58d764d3e94b46d404a928ec86f52344c 100644 >--- a/Source/JavaScriptCore/llint/LLIntData.cpp >+++ b/Source/JavaScriptCore/llint/LLIntData.cpp >@@ -49,10 +49,11 @@ namespace LLInt { > > uint8_t Data::s_exceptionInstructions[maxOpcodeLength + 1] = { }; > Opcode g_opcodeMap[numOpcodeIDs] = { }; >-Opcode g_opcodeMapWide[numOpcodeIDs] = { }; >+Opcode g_opcodeMapWide16[numOpcodeIDs] = { }; >+Opcode g_opcodeMapWide32[numOpcodeIDs] = { }; > > #if !ENABLE(C_LOOP) 
>-extern "C" void llint_entry(void*, void*); >+extern "C" void llint_entry(void*, void*, void*); > #endif > > void initialize() >@@ -61,11 +62,12 @@ void initialize() > CLoop::initialize(); > > #else // !ENABLE(C_LOOP) >- llint_entry(&g_opcodeMap, &g_opcodeMapWide); >+ llint_entry(&g_opcodeMap, &g_opcodeMapWide16, &g_opcodeMapWide32); > > for (int i = 0; i < numOpcodeIDs; ++i) { > g_opcodeMap[i] = tagCodePtr(g_opcodeMap[i], BytecodePtrTag); >- g_opcodeMapWide[i] = tagCodePtr(g_opcodeMapWide[i], BytecodePtrTag); >+ g_opcodeMapWide16[i] = tagCodePtr(g_opcodeMapWide16[i], BytecodePtrTag); >+ g_opcodeMapWide32[i] = tagCodePtr(g_opcodeMapWide32[i], BytecodePtrTag); > } > > ASSERT(llint_throw_from_slow_path_trampoline < UINT8_MAX); >diff --git a/Source/JavaScriptCore/llint/LLIntData.h b/Source/JavaScriptCore/llint/LLIntData.h >index b248abcda43653f3765935db59608679aad2f0ab..de39056636a249f53c5c2572e2a91fd588000bb3 100644 >--- a/Source/JavaScriptCore/llint/LLIntData.h >+++ b/Source/JavaScriptCore/llint/LLIntData.h >@@ -43,7 +43,8 @@ typedef void (*LLIntCode)(); > namespace LLInt { > > extern "C" JS_EXPORT_PRIVATE Opcode g_opcodeMap[numOpcodeIDs]; >-extern "C" JS_EXPORT_PRIVATE Opcode g_opcodeMapWide[numOpcodeIDs]; >+extern "C" JS_EXPORT_PRIVATE Opcode g_opcodeMapWide16[numOpcodeIDs]; >+extern "C" JS_EXPORT_PRIVATE Opcode g_opcodeMapWide32[numOpcodeIDs]; > > class Data { > >@@ -57,11 +58,14 @@ class Data { > > friend Instruction* exceptionInstructions(); > friend Opcode* opcodeMap(); >- friend Opcode* opcodeMapWide(); >+ friend Opcode* opcodeMapWide16(); >+ friend Opcode* opcodeMapWide32(); > friend Opcode getOpcode(OpcodeID); >- friend Opcode getOpcodeWide(OpcodeID); >+ friend Opcode getOpcodeWide16(OpcodeID); >+ friend Opcode getOpcodeWide32(OpcodeID); > template<PtrTag tag> friend MacroAssemblerCodePtr<tag> getCodePtr(OpcodeID); >- template<PtrTag tag> friend MacroAssemblerCodePtr<tag> getWideCodePtr(OpcodeID); >+ template<PtrTag tag> friend MacroAssemblerCodePtr<tag> 
getWide16CodePtr(OpcodeID); >+ template<PtrTag tag> friend MacroAssemblerCodePtr<tag> getWide32CodePtr(OpcodeID); > template<PtrTag tag> friend MacroAssemblerCodeRef<tag> getCodeRef(OpcodeID); > }; > >@@ -77,9 +81,14 @@ inline Opcode* opcodeMap() > return g_opcodeMap; > } > >-inline Opcode* opcodeMapWide() >+inline Opcode* opcodeMapWide16() > { >- return g_opcodeMapWide; >+ return g_opcodeMapWide16; >+} >+ >+inline Opcode* opcodeMapWide32() >+{ >+ return g_opcodeMapWide32; > } > > inline Opcode getOpcode(OpcodeID id) >@@ -91,10 +100,20 @@ inline Opcode getOpcode(OpcodeID id) > #endif > } > >-inline Opcode getOpcodeWide(OpcodeID id) >+inline Opcode getOpcodeWide16(OpcodeID id) >+{ >+#if ENABLE(COMPUTED_GOTO_OPCODES) >+ return g_opcodeMapWide16[id]; >+#else >+ UNUSED_PARAM(id); >+ RELEASE_ASSERT_NOT_REACHED(); >+#endif >+} >+ >+inline Opcode getOpcodeWide32(OpcodeID id) > { > #if ENABLE(COMPUTED_GOTO_OPCODES) >- return g_opcodeMapWide[id]; >+ return g_opcodeMapWide32[id]; > #else > UNUSED_PARAM(id); > RELEASE_ASSERT_NOT_REACHED(); >@@ -110,9 +129,17 @@ ALWAYS_INLINE MacroAssemblerCodePtr<tag> getCodePtr(OpcodeID opcodeID) > } > > template<PtrTag tag> >-ALWAYS_INLINE MacroAssemblerCodePtr<tag> getWideCodePtr(OpcodeID opcodeID) >+ALWAYS_INLINE MacroAssemblerCodePtr<tag> getWide16CodePtr(OpcodeID opcodeID) >+{ >+ void* address = reinterpret_cast<void*>(getOpcodeWide16(opcodeID)); >+ address = retagCodePtr<BytecodePtrTag, tag>(address); >+ return MacroAssemblerCodePtr<tag>::createFromExecutableAddress(address); >+} >+ >+template<PtrTag tag> >+ALWAYS_INLINE MacroAssemblerCodePtr<tag> getWide32CodePtr(OpcodeID opcodeID) > { >- void* address = reinterpret_cast<void*>(getOpcodeWide(opcodeID)); >+ void* address = reinterpret_cast<void*>(getOpcodeWide32(opcodeID)); > address = retagCodePtr<BytecodePtrTag, tag>(address); > return MacroAssemblerCodePtr<tag>::createFromExecutableAddress(address); > } >@@ -141,9 +168,14 @@ ALWAYS_INLINE void* getCodePtr(OpcodeID id) > return 
reinterpret_cast<void*>(getOpcode(id)); > } > >-ALWAYS_INLINE void* getWideCodePtr(OpcodeID id) >+ALWAYS_INLINE void* getWide16CodePtr(OpcodeID id) >+{ >+ return reinterpret_cast<void*>(getOpcodeWide16(id)); >+} >+ >+ALWAYS_INLINE void* getWide32CodePtr(OpcodeID id) > { >- return reinterpret_cast<void*>(getOpcodeWide(id)); >+ return reinterpret_cast<void*>(getOpcodeWide32(id)); > } > #endif > >diff --git a/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp b/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp >index 362ab2de3651ad6bd8f5155e16e8a8baa0dc7485..b3be9d815863d8b822caf59b3ba79410bd6c3eaf 100644 >--- a/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp >+++ b/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp >@@ -1722,9 +1722,14 @@ LLINT_SLOW_PATH_DECL(slow_path_call_eval) > return commonCallEval(exec, pc, LLInt::getCodePtr<JSEntryPtrTag>(llint_generic_return_point)); > } > >-LLINT_SLOW_PATH_DECL(slow_path_call_eval_wide) >+LLINT_SLOW_PATH_DECL(slow_path_call_eval_wide16) > { >- return commonCallEval(exec, pc, LLInt::getWideCodePtr<JSEntryPtrTag>(llint_generic_return_point)); >+ return commonCallEval(exec, pc, LLInt::getWide16CodePtr<JSEntryPtrTag>(llint_generic_return_point)); >+} >+ >+LLINT_SLOW_PATH_DECL(slow_path_call_eval_wide32) >+{ >+ return commonCallEval(exec, pc, LLInt::getWide32CodePtr<JSEntryPtrTag>(llint_generic_return_point)); > } > > LLINT_SLOW_PATH_DECL(slow_path_strcat) >diff --git a/Source/JavaScriptCore/llint/LLIntSlowPaths.h b/Source/JavaScriptCore/llint/LLIntSlowPaths.h >index dc357a161ca08bdcc3cf9500718a038e1f88b215..c24c2d861a6de9eabe4fa12bd6f0ba97838f5854 100644 >--- a/Source/JavaScriptCore/llint/LLIntSlowPaths.h >+++ b/Source/JavaScriptCore/llint/LLIntSlowPaths.h >@@ -117,7 +117,8 @@ LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_tail_call_varargs); > LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_tail_call_forward_arguments); > LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_construct_varargs); > LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_call_eval); 
>-LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_call_eval_wide); >+LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_call_eval_wide16); >+LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_call_eval_wide32); > LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_tear_off_arguments); > LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_strcat); > LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_to_primitive); >diff --git a/Source/JavaScriptCore/llint/LowLevelInterpreter.asm b/Source/JavaScriptCore/llint/LowLevelInterpreter.asm >index 60d41709ab504cfa6c60b09c4e402ba0a165c650..f72bc47a0e50721b31df19bf5318f21cfd334b60 100644 >--- a/Source/JavaScriptCore/llint/LowLevelInterpreter.asm >+++ b/Source/JavaScriptCore/llint/LowLevelInterpreter.asm >@@ -1,4 +1,4 @@ >-# Copyright (C) 2011-2019 Apple Inc. All rights reserved. >+# Copyrsght (C) 2011-2019 Apple Inc. All rights reserved. > # > # Redistribution and use in source and binary forms, with or without > # modification, are permitted provided that the following conditions >@@ -311,31 +311,39 @@ macro dispatchOp(size, opcodeName) > dispatch(constexpr %opcodeName%_length) > end > >- macro dispatchWide() >+ macro dispatchWide16() >+ dispatch(constexpr %opcodeName%_length * 2 + 1) >+ end >+ >+ macro dispatchWide32() > dispatch(constexpr %opcodeName%_length * 4 + 1) > end > >- size(dispatchNarrow, dispatchWide, macro (dispatch) dispatch() end) >+ size(dispatchNarrow, dispatchWide16, dispatchWide32, macro (dispatch) dispatch() end) > end > > macro getu(size, opcodeStruct, fieldName, dst) >- size(getuOperandNarrow, getuOperandWide, macro (getu) >+ size(getuOperandNarrow, getuOperandWide16, getOperandWide32, macro (getu) > getu(opcodeStruct, fieldName, dst) > end) > end > > macro get(size, opcodeStruct, fieldName, dst) >- size(getOperandNarrow, getOperandWide, macro (get) >+ size(getOperandNarrow, getOperandWide16, getOperandWide32, macro (get) > get(opcodeStruct, fieldName, dst) > end) > end > >-macro narrow(narrowFn, wideFn, k) >+macro narrow(narrowFn, wide16Fn, wide32Fn, k) > k(narrowFn) > end > 
>-macro wide(narrowFn, wideFn, k) >- k(wideFn) >+macro wide16(narrowFn, wide16Fn, wide32Fn, k) >+ k(wide16Fn) >+end >+ >+macro wide32(narrowFn, wide16Fn, wide32Fn, k) >+ k(wide32Fn) > end > > macro metadata(size, opcode, dst, scratch) >@@ -362,9 +370,13 @@ _%label%: > prologue() > fn(narrow) > >-_%label%_wide: >+_%label%_wide16: >+ prologue() >+ fn(wide16) >+ >+_%label%_wide32: > prologue() >- fn(wide) >+ fn(wide32) > end > > macro op(l, fn) >@@ -475,8 +487,9 @@ const MasqueradesAsUndefined = constexpr MasqueradesAsUndefined > const ImplementsDefaultHasInstance = constexpr ImplementsDefaultHasInstance > > # Bytecode operand constants. >-const FirstConstantRegisterIndexNarrow = 16 >-const FirstConstantRegisterIndexWide = constexpr FirstConstantRegisterIndex >+const FirstConstantRegisterIndexNarrow = constexpr FirstConstantRegisterIndex8 >+const FirstConstantRegisterIndexWide16 = constexpr FirstConstantRegisterIndex16 >+const FirstConstantRegisterIndexWide32 = constexpr FirstConstantRegisterIndex > > # Code type constants. 
> const GlobalCode = constexpr GlobalCode >@@ -1027,7 +1040,7 @@ macro checkSwitchToJITForEpilogue() > end > > macro assertNotConstant(size, index) >- size(FirstConstantRegisterIndexNarrow, FirstConstantRegisterIndexWide, macro (FirstConstantRegisterIndex) >+ size(FirstConstantRegisterIndexNarrow, FirstConstantRegisterIndexWide16, FirstConstantRegisterIndexWide32, macro (FirstConstantRegisterIndex) > assert(macro (ok) bilt index, FirstConstantRegisterIndex, ok end) > end) > end >@@ -1312,41 +1325,45 @@ else > end > end > >-# The PC base is in t2, as this is what _llint_entry leaves behind through >-# initPCRelative(t2) >+# The PC base is in t3, as this is what _llint_entry leaves behind through >+# initPCRelative(t3) > macro setEntryAddress(index, label) > setEntryAddressCommon(index, label, a0) > end > >-macro setEntryAddressWide(index, label) >+macro setEntryAddressWide16(index, label) > setEntryAddressCommon(index, label, a1) > end > >+macro setEntryAddressWide32(index, label) >+ setEntryAddressCommon(index, label, a2) >+end >+ > macro setEntryAddressCommon(index, label, map) > if X86_64 or X86_64_WIN >- leap (label - _relativePCBase)[t2], t3 >- move index, t4 >- storep t3, [map, t4, 8] >+ leap (label - _relativePCBase)[t3], t4 >+ move index, t5 >+ storep t4, [map, t5, 8] > elsif X86 or X86_WIN >- leap (label - _relativePCBase)[t2], t3 >- move index, t4 >- storep t3, [map, t4, 4] >+ leap (label - _relativePCBase)[t3], t4 >+ move index, t5 >+ storep t4, [map, t5, 4] > elsif ARM64 or ARM64E >- pcrtoaddr label, t2 >+ pcrtoaddr label, t3 > move index, t4 >- storep t2, [map, t4, PtrSize] >+ storep t3, [map, t4, PtrSize] > elsif ARMv7 > mvlbl (label - _relativePCBase), t4 >- addp t4, t2, t4 >- move index, t3 >- storep t4, [map, t3, 4] >+ addp t4, t3, t4 >+ move index, t5 >+ storep t4, [map, t5, 4] > elsif MIPS > la label, t4 > la _relativePCBase, t3 > subp t3, t4 >- addp t4, t2, t4 >- move index, t3 >- storep t4, [map, t3, 4] >+ addp t4, t3, t4 >+ move index, t5 >+ 
storep t4, [map, t5, 4] > end > end > >@@ -1358,9 +1375,10 @@ _llint_entry: > if X86 or X86_WIN > loadp 20[sp], a0 > loadp 24[sp], a1 >+ loadp 28[sp], a2 > end > >- initPCRelative(t2) >+ initPCRelative(t3) > > # Include generated bytecode initialization file. > include InitBytecodes >@@ -1370,14 +1388,23 @@ _llint_entry: > ret > end > >-_llint_op_wide: >- nextInstructionWide() >+_llint_op_wide16: >+ nextInstructionWide16() > >-_llint_op_wide_wide: >+_llint_op_wide32: >+ nextInstructionWide32() >+ >+macro noWide(label) >+_llint_%label%_wide16: > crash() > >-_llint_op_enter_wide: >+_llint_%label%_wide32: > crash() >+end >+ >+noWide(op_wide16) >+noWide(op_wide32) >+noWide(op_enter) > > op(llint_program_prologue, macro () > prologue(notFunctionCodeBlockGetter, notFunctionCodeBlockSetter, _llint_entry_osr, _llint_trace_prologue) >@@ -1778,12 +1805,20 @@ _llint_op_call_eval: > _llint_slow_path_call_eval, > prepareForRegularCall) > >-_llint_op_call_eval_wide: >+_llint_op_call_eval_wide16: > slowPathForCall( >- wide, >+ wide16, > OpCallEval, >- macro () dispatchOp(wide, op_call_eval) end, >- _llint_slow_path_call_eval_wide, >+ macro () dispatchOp(wide16, op_call_eval) end, >+ _llint_slow_path_call_eval_wide16, >+ prepareForRegularCall) >+ >+_llint_op_call_eval_wide32: >+ slowPathForCall( >+ wide32, >+ OpCallEval, >+ macro () dispatchOp(wide32, op_call_eval) end, >+ _llint_slow_path_call_eval_wide32, > prepareForRegularCall) > > _llint_generic_return_point: >@@ -1791,9 +1826,14 @@ _llint_generic_return_point: > dispatchOp(narrow, op_call_eval) > end) > >-_llint_generic_return_point_wide: >- dispatchAfterCall(wide, OpCallEval, macro() >- dispatchOp(wide, op_call_eval) >+_llint_generic_return_point_wide16: >+ dispatchAfterCall(wide16, OpCallEval, macro() >+ dispatchOp(wide16, op_call_eval) >+ end) >+ >+_llint_generic_return_point_wide32: >+ dispatchAfterCall(wide32, OpCallEval, macro() >+ dispatchOp(wide32, op_call_eval) > end) > > llintOp(op_identity_with_profile, 
OpIdentityWithProfile, macro (unused, unused, dispatch) >diff --git a/Source/JavaScriptCore/llint/LowLevelInterpreter.cpp b/Source/JavaScriptCore/llint/LowLevelInterpreter.cpp >index 6c4cee7c539cf047501932d9acdaa5666cf2ade1..b061ff4852fc4c2c792d1858e03f7a39fca215ab 100644 >--- a/Source/JavaScriptCore/llint/LowLevelInterpreter.cpp >+++ b/Source/JavaScriptCore/llint/LowLevelInterpreter.cpp >@@ -249,12 +249,14 @@ JSValue CLoop::execute(OpcodeID entryOpcodeID, void* executableAddress, VM* vm, > // are at play. > if (UNLIKELY(isInitializationPass)) { > Opcode* opcodeMap = LLInt::opcodeMap(); >- Opcode* opcodeMapWide = LLInt::opcodeMapWide(); >+ Opcode* opcodeMapWide16 = LLInt::opcodeMapWide16(); >+ Opcode* opcodeMapWide32 = LLInt::opcodeMapWide32(); > > #if ENABLE(COMPUTED_GOTO_OPCODES) > #define OPCODE_ENTRY(__opcode, length) \ > opcodeMap[__opcode] = bitwise_cast<void*>(&&__opcode); \ >- opcodeMapWide[__opcode] = bitwise_cast<void*>(&&__opcode##_wide); >+ opcodeMapWide16[__opcode] = bitwise_cast<void*>(&&__opcode##_wide16); \ >+ opcodeMapWide32[__opcode] = bitwise_cast<void*>(&&__opcode##_wide32); > > #define LLINT_OPCODE_ENTRY(__opcode, length) \ > opcodeMap[__opcode] = bitwise_cast<void*>(&&__opcode); >@@ -263,7 +265,8 @@ JSValue CLoop::execute(OpcodeID entryOpcodeID, void* executableAddress, VM* vm, > // narrow opcodes don't need any mapping and wide opcodes just need to add numOpcodeIDs > #define OPCODE_ENTRY(__opcode, length) \ > opcodeMap[__opcode] = __opcode; \ >- opcodeMapWide[__opcode] = static_cast<OpcodeID>(__opcode##_wide); >+ opcodeMapWide16[__opcode] = static_cast<OpcodeID>(__opcode##_wide16); \ >+ opcodeMapWide32[__opcode] = static_cast<OpcodeID>(__opcode##_wide32); > > #define LLINT_OPCODE_ENTRY(__opcode, length) \ > opcodeMap[__opcode] = __opcode; >@@ -285,7 +288,7 @@ JSValue CLoop::execute(OpcodeID entryOpcodeID, void* executableAddress, VM* vm, > } > > // Define the pseudo registers used by the LLINT C Loop backend: >- ASSERT(sizeof(CLoopRegister) 
== sizeof(intptr_t)); >+ static_assert(sizeof(CLoopRegister) == sizeof(intptr_t)); > > // The CLoop llint backend is initially based on the ARMv7 backend, and > // then further enhanced with a few instructions from the x86 backend to >diff --git a/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm b/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm >index c2e60ab6dfb0b2d8d8868750cf58be9f335a10de..6dff84a8c0331dbb35f4ef3ec658dd83276207e3 100644 >--- a/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm >+++ b/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm >@@ -29,9 +29,15 @@ macro nextInstruction() > jmp [t1, t0, 4], BytecodePtrTag > end > >-macro nextInstructionWide() >+macro nextInstructionWide16() >+ loadh 1[PC], t0 >+ leap _g_opcodeMapWide16, t1 >+ jmp [t1, t0, 4], BytecodePtrTag >+end >+ >+macro nextInstructionWide32() > loadi 1[PC], t0 >- leap _g_opcodeMapWide, t1 >+ leap _g_opcodeMapWide32, t1 > jmp [t1, t0, 4], BytecodePtrTag > end > >@@ -40,14 +46,22 @@ macro getuOperandNarrow(opcodeStruct, fieldName, dst) > end > > macro getOperandNarrow(opcodeStruct, fieldName, dst) >- loadbsp constexpr %opcodeStruct%_%fieldName%_index[PC], dst >+ loadbsi constexpr %opcodeStruct%_%fieldName%_index[PC], dst >+end >+ >+macro getuOperandWide16(opcodeStruct, fieldName, dst) >+ loadh constexpr %opcodeStruct%_%fieldName%_index * 2 + 1[PC], dst >+end >+ >+macro getOperandWide16(opcodeStruct, fieldName, dst) >+ loadhsi constexpr %opcodeStruct%_%fieldName%_index * 2 + 1[PC], dst > end > >-macro getuOperandWide(opcodeStruct, fieldName, dst) >+macro getuOperandWide32(opcodeStruct, fieldName, dst) > loadi constexpr %opcodeStruct%_%fieldName%_index * 4 + 1[PC], dst > end > >-macro getOperandWide(opcodeStruct, fieldName, dst) >+macro getOperandWide32(opcodeStruct, fieldName, dst) > loadis constexpr %opcodeStruct%_%fieldName%_index * 4 + 1[PC], dst > end > >@@ -447,7 +461,7 @@ end > # Index, tag, and payload must be different registers. 
Index is not > # changed. > macro loadConstantOrVariable(size, index, tag, payload) >- size(FirstConstantRegisterIndexNarrow, FirstConstantRegisterIndexWide, macro (FirstConstantRegisterIndex) >+ size(FirstConstantRegisterIndexNarrow, FirstConstantRegisterIndexWide16, FirstConstantRegisterIndexWide32, macro (FirstConstantRegisterIndex) > bigteq index, FirstConstantRegisterIndex, .constant > loadi TagOffset[cfr, index, 8], tag > loadi PayloadOffset[cfr, index, 8], payload >@@ -463,7 +477,7 @@ macro loadConstantOrVariable(size, index, tag, payload) > end > > macro loadConstantOrVariableTag(size, index, tag) >- size(FirstConstantRegisterIndexNarrow, FirstConstantRegisterIndexWide, macro (FirstConstantRegisterIndex) >+ size(FirstConstantRegisterIndexNarrow, FirstConstantRegisterIndexWide16, FirstConstantRegisterIndexWide32, macro (FirstConstantRegisterIndex) > bigteq index, FirstConstantRegisterIndex, .constant > loadi TagOffset[cfr, index, 8], tag > jmp .done >@@ -478,7 +492,7 @@ end > > # Index and payload may be the same register. Index may be clobbered. 
> macro loadConstantOrVariable2Reg(size, index, tag, payload) >- size(FirstConstantRegisterIndexNarrow, FirstConstantRegisterIndexWide, macro (FirstConstantRegisterIndex) >+ size(FirstConstantRegisterIndexNarrow, FirstConstantRegisterIndexWide16, FirstConstantRegisterIndexWide32, macro (FirstConstantRegisterIndex) > bigteq index, FirstConstantRegisterIndex, .constant > loadi TagOffset[cfr, index, 8], tag > loadi PayloadOffset[cfr, index, 8], payload >@@ -496,7 +510,7 @@ macro loadConstantOrVariable2Reg(size, index, tag, payload) > end > > macro loadConstantOrVariablePayloadTagCustom(size, index, tagCheck, payload) >- size(FirstConstantRegisterIndexNarrow, FirstConstantRegisterIndexWide, macro (FirstConstantRegisterIndex) >+ size(FirstConstantRegisterIndexNarrow, FirstConstantRegisterIndexWide16, FirstConstantRegisterIndexWide32, macro (FirstConstantRegisterIndex) > bigteq index, FirstConstantRegisterIndex, .constant > tagCheck(TagOffset[cfr, index, 8]) > loadi PayloadOffset[cfr, index, 8], payload >diff --git a/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm b/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm >index 8119da2cbff7ab482032b72ad337b51abfb19c35..9369d90056869045d1b3788f9b10b61e8504711a 100644 >--- a/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm >+++ b/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm >@@ -30,9 +30,15 @@ macro nextInstruction() > jmp [t1, t0, PtrSize], BytecodePtrTag > end > >-macro nextInstructionWide() >+macro nextInstructionWide16() >+ loadh 1[PB, PC, 1], t0 >+ leap _g_opcodeMapWide16, t1 >+ jmp [t1, t0, PtrSize], BytecodePtrTag >+end >+ >+macro nextInstructionWide32() > loadi 1[PB, PC, 1], t0 >- leap _g_opcodeMapWide, t1 >+ leap _g_opcodeMapWide32, t1 > jmp [t1, t0, PtrSize], BytecodePtrTag > end > >@@ -41,14 +47,22 @@ macro getuOperandNarrow(opcodeStruct, fieldName, dst) > end > > macro getOperandNarrow(opcodeStruct, fieldName, dst) >- loadbsp constexpr %opcodeStruct%_%fieldName%_index[PB, PC, 1], dst >+ 
loadbsq constexpr %opcodeStruct%_%fieldName%_index[PB, PC, 1], dst >+end >+ >+macro getuOperandWide16(opcodeStruct, fieldName, dst) >+ loadh constexpr %opcodeStruct%_%fieldName%_index * 2 + 1[PB, PC, 1], dst >+end >+ >+macro getOperandWide16(opcodeStruct, fieldName, dst) >+ loadhsq constexpr %opcodeStruct%_%fieldName%_index * 2 + 1[PB, PC, 1], dst > end > >-macro getuOperandWide(opcodeStruct, fieldName, dst) >+macro getuOperandWide32(opcodeStruct, fieldName, dst) > loadi constexpr %opcodeStruct%_%fieldName%_index * 4 + 1[PB, PC, 1], dst > end > >-macro getOperandWide(opcodeStruct, fieldName, dst) >+macro getOperandWide32(opcodeStruct, fieldName, dst) > loadis constexpr %opcodeStruct%_%fieldName%_index * 4 + 1[PB, PC, 1], dst > end > >@@ -450,19 +464,30 @@ macro loadConstantOrVariable(size, index, value) > .done: > end > >- macro loadWide() >- bpgteq index, FirstConstantRegisterIndexWide, .constant >+ macro loadWide16() >+ bpgteq index, FirstConstantRegisterIndexWide16, .constant >+ loadq [cfr, index, 8], value >+ jmp .done >+ .constant: >+ loadp CodeBlock[cfr], value >+ loadp CodeBlock::m_constantRegisters + VectorBufferOffset[value], value >+ loadq -(FirstConstantRegisterIndexWide16 * 8)[value, index, 8], value >+ .done: >+ end >+ >+ macro loadWide32() >+ bpgteq index, FirstConstantRegisterIndexWide32, .constant > loadq [cfr, index, 8], value > jmp .done > .constant: > loadp CodeBlock[cfr], value > loadp CodeBlock::m_constantRegisters + VectorBufferOffset[value], value >- subp FirstConstantRegisterIndexWide, index >+ subp FirstConstantRegisterIndexWide32, index > loadq [value, index, 8], value > .done: > end > >- size(loadNarrow, loadWide, macro (load) load() end) >+ size(loadNarrow, loadWide16, loadWide32, macro (load) load() end) > end > > macro loadConstantOrVariableInt32(size, index, value, slow) >@@ -1518,7 +1543,7 @@ llintOpWithMetadata(op_get_by_val, OpGetByVal, macro (size, get, dispatch, metad > bia t2, Int8ArrayType - FirstTypedArrayType, 
.opGetByValUint8ArrayOrUint8ClampedArray > > # We have Int8ArrayType. >- loadbs [t3, t1], t0 >+ loadbsi [t3, t1], t0 > finishIntGetByVal(t0, t1) > > .opGetByValUint8ArrayOrUint8ClampedArray: >@@ -1538,7 +1563,7 @@ llintOpWithMetadata(op_get_by_val, OpGetByVal, macro (size, get, dispatch, metad > bia t2, Int16ArrayType - FirstTypedArrayType, .opGetByValUint16Array > > # We have Int16ArrayType. >- loadhs [t3, t1, 2], t0 >+ loadhsi [t3, t1, 2], t0 > finishIntGetByVal(t0, t1) > > .opGetByValUint16Array: >diff --git a/Source/JavaScriptCore/offlineasm/arm.rb b/Source/JavaScriptCore/offlineasm/arm.rb >index 85e0b8ec5b2f6a8960362a3174c615bd0b554032..9881e4dbd037f823aa3cdec30532d6dbd32ad2b5 100644 >--- a/Source/JavaScriptCore/offlineasm/arm.rb >+++ b/Source/JavaScriptCore/offlineasm/arm.rb >@@ -444,13 +444,13 @@ def lowerARMCommon > $asm.puts "str #{armOperands(operands)}" > when "loadb" > $asm.puts "ldrb #{armFlippedOperands(operands)}" >- when "loadbs", "loadbsp" >+ when "loadbsi" > $asm.puts "ldrsb.w #{armFlippedOperands(operands)}" > when "storeb" > $asm.puts "strb #{armOperands(operands)}" > when "loadh" > $asm.puts "ldrh #{armFlippedOperands(operands)}" >- when "loadhs" >+ when "loadhsi" > $asm.puts "ldrsh.w #{armFlippedOperands(operands)}" > when "storeh" > $asm.puts "strh #{armOperands(operands)}" >diff --git a/Source/JavaScriptCore/offlineasm/arm64.rb b/Source/JavaScriptCore/offlineasm/arm64.rb >index 9c0cbdca34b01df090af76718557a4bf57f6d9a9..58bbc2aafd9b484edc665e6ad02648c44c9e177c 100644 >--- a/Source/JavaScriptCore/offlineasm/arm64.rb >+++ b/Source/JavaScriptCore/offlineasm/arm64.rb >@@ -278,7 +278,7 @@ def arm64LowerLabelReferences(list) > | node | > if node.is_a? Instruction > case node.opcode >- when "loadi", "loadis", "loadp", "loadq", "loadb", "loadbs", "loadh", "loadhs", "leap" >+ when "loadi", "loadis", "loadp", "loadq", "loadb", "loadbsi", "loadbsq", "loadh", "loadhsi", "loadhsq", "leap" > labelRef = node.operands[0] > if labelRef.is_a? 
LabelReference > tmp = Tmp.new(node.codeOrigin, :gpr) >@@ -374,9 +374,9 @@ def getModifiedListARM64(result = @list) > result = riscLowerMalformedAddresses(result) { > | node, address | > case node.opcode >- when "loadb", "loadbs", "loadbsp", "storeb", /^bb/, /^btb/, /^cb/, /^tb/ >+ when "loadb", "loadbsi", "loadbsq", "storeb", /^bb/, /^btb/, /^cb/, /^tb/ > size = 1 >- when "loadh", "loadhs" >+ when "loadh", "loadhsi", "loadhsq" > size = 2 > when "loadi", "loadis", "storei", "addi", "andi", "lshifti", "muli", "negi", > "noti", "ori", "rshifti", "urshifti", "subi", "xori", /^bi/, /^bti/, >@@ -709,16 +709,18 @@ def lowerARM64 > emitARM64Unflipped("str", operands, :quad) > when "loadb" > emitARM64Access("ldrb", "ldurb", operands[1], operands[0], :word) >- when "loadbs" >+ when "loadbsi" > emitARM64Access("ldrsb", "ldursb", operands[1], operands[0], :word) >- when "loadbsp" >- emitARM64Access("ldrsb", "ldursb", operands[1], operands[0], :ptr) >+ when "loadbsq" >+ emitARM64Access("ldrsb", "ldursb", operands[1], operands[0], :quad) > when "storeb" > emitARM64Unflipped("strb", operands, :word) > when "loadh" > emitARM64Access("ldrh", "ldurh", operands[1], operands[0], :word) >- when "loadhs" >+ when "loadhsi" > emitARM64Access("ldrsh", "ldursh", operands[1], operands[0], :word) >+ when "loadhsq" >+ emitARM64Access("ldrsh", "ldursh", operands[1], operands[0], :quad) > when "storeh" > emitARM64Unflipped("strh", operands, :word) > when "loadd" >diff --git a/Source/JavaScriptCore/offlineasm/cloop.rb b/Source/JavaScriptCore/offlineasm/cloop.rb >index 933e809bf6bd99e34284d5bf6971b2b8244cf843..6001b41bebee3c2b4fa54cbda3de0b06917059c8 100644 >--- a/Source/JavaScriptCore/offlineasm/cloop.rb >+++ b/Source/JavaScriptCore/offlineasm/cloop.rb >@@ -656,16 +656,18 @@ def lowerC_LOOP > $asm.putc "#{operands[1].intptrMemRef} = #{operands[0].clValue(:intptr)};" > when "loadb" > $asm.putc "#{operands[1].clLValue(:intptr)} = #{operands[0].uint8MemRef};" >- when "loadbs" >- $asm.putc 
"#{operands[1].clLValue(:intptr)} = (uint32_t)(#{operands[0].int8MemRef});" >- when "loadbsp" >- $asm.putc "#{operands[1].clLValue(:intptr)} = #{operands[0].int8MemRef};" >+ when "loadbsi" >+ $asm.putc "#{operands[1].clLValue(:int32)} = #{operands[0].int8MemRef};" >+ when "loadbsq" >+ $asm.putc "#{operands[1].clLValue(:int64)} = #{operands[0].int8MemRef};" > when "storeb" > $asm.putc "#{operands[1].uint8MemRef} = #{operands[0].clValue(:int8)};" > when "loadh" > $asm.putc "#{operands[1].clLValue(:intptr)} = #{operands[0].uint16MemRef};" >- when "loadhs" >- $asm.putc "#{operands[1].clLValue(:intptr)} = (uint32_t)(#{operands[0].int16MemRef});" >+ when "loadhsi" >+ $asm.putc "#{operands[1].clLValue(:int32)} = #{operands[0].int16MemRef};" >+ when "loadhsq" >+ $asm.putc "#{operands[1].clLValue(:int64)} = #{operands[0].int16MemRef};" > when "storeh" > $asm.putc "*#{operands[1].uint16MemRef} = #{operands[0].clValue(:int16)};" > when "loadd" >diff --git a/Source/JavaScriptCore/offlineasm/instructions.rb b/Source/JavaScriptCore/offlineasm/instructions.rb >index 69e4b6aa8fe4b468b1a48df380e9741c7bf9d417..2ad5944e9055a559136c28e36b872edb9bdd84e9 100644 >--- a/Source/JavaScriptCore/offlineasm/instructions.rb >+++ b/Source/JavaScriptCore/offlineasm/instructions.rb >@@ -53,10 +53,11 @@ > "loadi", > "loadis", > "loadb", >- "loadbs", >- "loadbsp", >+ "loadbsi", >+ "loadbsq", > "loadh", >- "loadhs", >+ "loadhsi", >+ "loadhsq", > "storei", > "storeb", > "loadd", >diff --git a/Source/JavaScriptCore/offlineasm/mips.rb b/Source/JavaScriptCore/offlineasm/mips.rb >index 36e1fb7e3febe5474223aabd347e001add04c09f..e44647cd3b2751acfa2c5154f20a1f337b4e085f 100644 >--- a/Source/JavaScriptCore/offlineasm/mips.rb >+++ b/Source/JavaScriptCore/offlineasm/mips.rb >@@ -880,13 +880,13 @@ def lowerMIPS > $asm.puts "sw #{mipsOperands(operands)}" > when "loadb" > $asm.puts "lbu #{mipsFlippedOperands(operands)}" >- when "loadbs", "loadbsp" >+ when "loadbsi" > $asm.puts "lb #{mipsFlippedOperands(operands)}" 
> when "storeb" > $asm.puts "sb #{mipsOperands(operands)}" > when "loadh" > $asm.puts "lhu #{mipsFlippedOperands(operands)}" >- when "loadhs" >+ when "loadhsi" > $asm.puts "lh #{mipsFlippedOperands(operands)}" > when "storeh" > $asm.puts "shv #{mipsOperands(operands)}" >diff --git a/Source/JavaScriptCore/offlineasm/x86.rb b/Source/JavaScriptCore/offlineasm/x86.rb >index f2deba81b76317568d812e6b8dc750ad34245bd8..1eb709b6c116291ae4fb61f8e33ba50929004800 100644 >--- a/Source/JavaScriptCore/offlineasm/x86.rb >+++ b/Source/JavaScriptCore/offlineasm/x86.rb >@@ -939,17 +939,17 @@ def lowerX86Common > else > $asm.puts "movzx #{x86LoadOperands(:byte, :int)}" > end >- when "loadbs" >+ when "loadbsi" > if !isIntelSyntax > $asm.puts "movsbl #{x86LoadOperands(:byte, :int)}" > else > $asm.puts "movsx #{x86LoadOperands(:byte, :int)}" > end >- when "loadbsp" >+ when "loadbsq" > if !isIntelSyntax >- $asm.puts "movsb#{x86Suffix(:ptr)} #{x86LoadOperands(:byte, :ptr)}" >+ $asm.puts "movsbq #{x86LoadOperands(:byte, :quad)}" > else >- $asm.puts "movsx #{x86LoadOperands(:byte, :ptr)}" >+ $asm.puts "movsx #{x86LoadOperands(:byte, :quad)}" > end > when "loadh" > if !isIntelSyntax >@@ -957,12 +957,18 @@ def lowerX86Common > else > $asm.puts "movzx #{x86LoadOperands(:half, :int)}" > end >- when "loadhs" >+ when "loadhsi" > if !isIntelSyntax > $asm.puts "movswl #{x86LoadOperands(:half, :int)}" > else > $asm.puts "movsx #{x86LoadOperands(:half, :int)}" > end >+ when "loadhsq" >+ if !isIntelSyntax >+ $asm.puts "movswq #{x86LoadOperands(:half, :quad)}" >+ else >+ $asm.puts "movsx #{x86LoadOperands(:half, :quad)}" >+ end > when "storeb" > $asm.puts "mov#{x86Suffix(:byte)} #{x86Operands(:byte, :byte)}" > when "loadd" >diff --git a/Source/JavaScriptCore/parser/ResultType.h b/Source/JavaScriptCore/parser/ResultType.h >index c53d17c4ac8dff4b243ec9d063395e50b897d123..cce0f6d6001e3c2c31d9eca8e14f9f8d43ea59f1 100644 >--- a/Source/JavaScriptCore/parser/ResultType.h >+++ 
b/Source/JavaScriptCore/parser/ResultType.h >@@ -194,40 +194,32 @@ namespace JSC { > { > OperandTypes(ResultType first = ResultType::unknownType(), ResultType second = ResultType::unknownType()) > { >- // We have to initialize one of the int to ensure that >- // the entire struct is initialized. >- m_u.i = 0; >- m_u.rds.first = first.m_bits; >- m_u.rds.second = second.m_bits; >+ m_first = first.m_bits; >+ m_second = second.m_bits; > } > >- union { >- struct { >- ResultType::Type first; >- ResultType::Type second; >- } rds; >- int i; >- } m_u; >+ ResultType::Type m_first; >+ ResultType::Type m_second; > > ResultType first() const > { >- return ResultType(m_u.rds.first); >+ return ResultType(m_first); > } > > ResultType second() const > { >- return ResultType(m_u.rds.second); >+ return ResultType(m_second); > } > >- int toInt() >+ uint16_t bits() > { >- return m_u.i; >+ static_assert(sizeof(OperandTypes) == sizeof(uint16_t)); >+ return bitwise_cast<uint16_t>(*this); > } >- static OperandTypes fromInt(int value) >+ >+ static OperandTypes fromBits(uint16_t bits) > { >- OperandTypes types; >- types.m_u.i = value; >- return types; >+ return bitwise_cast<OperandTypes>(bits); > } > > void dump(PrintStream& out) const >diff --git a/Source/WTF/wtf/FastMalloc.h b/Source/WTF/wtf/FastMalloc.h >index efefb3a3133e83f64336fd2ec2da52a54832e621..0ea6d2d28ccb33b1672c35b1b5cabe3e4f405787 100644 >--- a/Source/WTF/wtf/FastMalloc.h >+++ b/Source/WTF/wtf/FastMalloc.h >@@ -199,6 +199,8 @@ struct FastMalloc { > return realResult; > return nullptr; > } >+ >+ static void* zeroedMalloc(size_t size) { return fastZeroedMalloc(size); } > > static void* realloc(void* p, size_t size) { return fastRealloc(p, size); } > >diff --git a/Source/WTF/wtf/MallocPtr.h b/Source/WTF/wtf/MallocPtr.h >index 83b3a51fba102890274f4ecead768ae643e36bba..88a771da36b74a891574781e7df6e50adf184335 100644 >--- a/Source/WTF/wtf/MallocPtr.h >+++ b/Source/WTF/wtf/MallocPtr.h >@@ -101,6 +101,11 @@ template<typename T, 
typename Malloc = FastMalloc> class MallocPtr { > > template<typename U> friend MallocPtr<U> adoptMallocPtr(U*); > >+ static MallocPtr zeroedMalloc(size_t size) >+ { >+ return MallocPtr { static_cast<T*>(Malloc::zeroedMalloc(size)) }; >+ } >+ > static MallocPtr malloc(size_t size) > { > return MallocPtr { static_cast<T*>(Malloc::malloc(size)) };
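A minimal sketch (not part of the patch) of the operand layout the new `getOperandNarrow` / `getOperandWide16` / `getOperandWide32` macros above decode: operand `i` of an instruction whose operands are `w` bytes wide lives at byte offset `i * w + 1`, past the one-byte opcode. The opcode value, instruction bytes, and `read_operand` helper below are hypothetical, for illustration only.

```python
import struct

# Operand widths matching the three dispatch sizes in the patch.
NARROW, WIDE16, WIDE32 = 1, 2, 4

def read_operand(stream, index, width, signed=True):
    """Read operand `index` from an instruction whose operands are `width` bytes.

    Mirrors the offlineasm offset computation:
    constexpr %opcodeStruct%_%fieldName%_index * width + 1.
    """
    offset = index * width + 1  # skip the 1-byte opcode
    codes = {1: 'b', 2: 'h', 4: 'i'} if signed else {1: 'B', 2: 'H', 4: 'I'}
    return struct.unpack_from('<' + codes[width], stream, offset)[0]

# A made-up wide16 instruction: opcode byte 0x2A followed by two
# 16-bit signed operands, 300 and -5.
insn = bytes([0x2A]) + struct.pack('<hh', 300, -5)
assert read_operand(insn, 0, WIDE16) == 300
assert read_operand(insn, 1, WIDE16) == -5
```

Note that narrow operands are a single signed byte (hence `loadbsi`/`loadbsq`), wide16 operands use the half-word loads (`loadhsi`/`loadhsq`), and wide32 operands use full 32-bit loads — the `q`-suffixed variants sign-extend to 64 bits on the 64-bit backends.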