WebKit Bugzilla
Attachment 370237 Details for Bug 197993: Allow OSR exit to the LLInt
Description: patch
Filename: b-backup.diff
MIME Type: text/plain
Creator: Saam Barati
Created: 2019-05-19 21:34:15 PDT
Size: 61.38 KB
Flags: patch, obsolete
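Among the helpers the patch introduces is `calleeSaveSlot()` (in DFGOSRExitCompilerCommon.cpp in the diff below), which walks a code block's callee-save register list to find the slot, inside a reified inline call frame, where a given callee-save register is preserved. A minimal Python model of that lookup, using hypothetical register names and offsets rather than values from the patch:

```python
def callee_save_slot(inline_frame_offset_bytes, callee_saves, wanted_reg):
    """Model of the patch's calleeSaveSlot(): callee_saves is a list of
    (register_name, offset_in_bytes) pairs, analogous to JSC's
    RegisterAtOffsetList; the result is the frame-pointer-relative byte
    offset of the slot where wanted_reg was preserved."""
    for reg, offset in callee_saves:
        if reg == wanted_reg:
            return inline_frame_offset_bytes + offset
    # Mirrors the RELEASE_ASSERT_NOT_REACHED() in the real helper:
    # every register we ask about must appear in the callee-save list.
    raise AssertionError(f"{wanted_reg} is not a callee save")

# Hypothetical layout: the inline frame starts 64 bytes below the frame
# pointer and preserves two LLInt registers within it.
saves = [("pb", -8), ("metadataTable", -16)]
print(callee_save_slot(-64, saves, "metadataTable"))  # -80
```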
>Index: JSTests/ChangeLog >=================================================================== >--- JSTests/ChangeLog (revision 245507) >+++ JSTests/ChangeLog (working copy) >@@ -1,3 +1,17 @@ >+2019-05-19 Saam barati <sbarati@apple.com> >+ >+ Allow OSR exit to the LLInt >+ https://bugs.webkit.org/show_bug.cgi?id=197993 >+ >+ Reviewed by NOBODY (OOPS!). >+ >+ * stress/exit-from-getter-by-val.js: Added. >+ (field): >+ (foo): >+ * stress/exit-from-setter-by-val.js: Added. >+ (field): >+ (foo): >+ > 2019-05-17 Justin Michaud <justin_michaud@apple.com> > > [WASM-References] Add support for Anyref in parameters and return types, Ref.null and Ref.is_null for Anyref values. >Index: JSTests/stress/exit-from-getter-by-val.js >=================================================================== >--- JSTests/stress/exit-from-getter-by-val.js (nonexistent) >+++ JSTests/stress/exit-from-getter-by-val.js (working copy) >@@ -0,0 +1,25 @@ >+function field() { return "f"; } >+noInline(field); >+ >+(function() { >+ var o = {_f:42}; >+ o.__defineGetter__("f", function() { return this._f * 100; }); >+ var result = 0; >+ var n = 50000; >+ function foo(o) { >+ return o[field()] + 11; >+ } >+ noInline(foo); >+ for (var i = 0; i < n; ++i) { >+ result += foo(o); >+ } >+ if (result != n * (42 * 100 + 11)) >+ throw "Error: bad result: " + result; >+ o._f = 1000000000; >+ result = 0; >+ for (var i = 0; i < n; ++i) { >+ result += foo(o); >+ } >+ if (result != n * (1000000000 * 100 + 11)) >+ throw "Error: bad result (2): " + result; >+})(); >Index: JSTests/stress/exit-from-setter-by-val.js >=================================================================== >--- JSTests/stress/exit-from-setter-by-val.js (nonexistent) >+++ JSTests/stress/exit-from-setter-by-val.js (working copy) >@@ -0,0 +1,27 @@ >+function field() { return "f"; } >+noInline(field); >+ >+(function() { >+ var o = {_f:42}; >+ o.__defineSetter__("f", function(value) { this._f = value * 100; }); >+ var n = 50000; >+ function foo(o_, 
v_) { >+ let f = field(); >+ var o = o_[f]; >+ var v = v_[f]; >+ o[f] = v; >+ o[f] = v + 1; >+ } >+ noInline(foo); >+ for (var i = 0; i < n; ++i) { >+ foo({f:o}, {f:11}); >+ } >+ if (o._f != (11 + 1) * 100) >+ throw "Error: bad o._f: " + o._f; >+ for (var i = 0; i < n; ++i) { >+ foo({f:o}, {f:1000000000}); >+ } >+ if (o._f != 100 * (1000000000 + 1)) >+ throw "Error: bad o._f (2): " + o._f; >+})(); >+ >Index: Source/JavaScriptCore/ChangeLog >=================================================================== >--- Source/JavaScriptCore/ChangeLog (revision 245507) >+++ Source/JavaScriptCore/ChangeLog (working copy) >@@ -1,3 +1,71 @@ >+2019-05-19 Saam barati <sbarati@apple.com> >+ >+ Allow OSR exit to the LLInt >+ https://bugs.webkit.org/show_bug.cgi?id=197993 >+ >+ Reviewed by NOBODY (OOPS!). >+ >+ This patch makes it so we can OSR exit to the LLInt. >+ Here are the interesting implementation details: >+ >+ 1. We no longer baseline compile everything in the inline stack. >+ >+ 2. When the top frame is a LLInt frame, we exit to the corresponding >+ LLInt bytecode. However, we need to materialize the LLInt registers >+ for PC, PB, and metadata. >+ >+ 3. When dealing with inline call frames where the caller is LLInt, we >+ need to return to the appropriate place. Let's consider we're exiting >+ at a place A->B (A calls B), where A is LLInt. If A is a normal call, >+ we place the return PC in the frame we materialize to B to be right >+ after the LLInt's inline cache for calls. If A is a varargs call, we place >+ it at the return location for vararg calls. The interesting scenario here >+ is where A is a getter/setter. This means that A might be get_by_id, >+ get_by_val, put_by_id, or put_by_val. Since the LLInt does not have any >+ form of IC for getters/setters, we make this work by creating new LLInt >+ "return location" stubs for these opcodes. >+ >+ 4. We need to update what callee saves we store in the callee if the caller frame >+ is a LLInt frame. 
Let's consider an inline stack A->B->C, where A is an LLInt frame. >+ When we materialize the stack frame for B, we need to ensure that the LLInt callee >+ saves that A uses are stored into B's preserved callee saves. Specifically, this >+ is just the PB/metadata registers. >+ >+ This patch also fixes offlineasm's macro expansion to allow us to >+ use computed label names for global labels. >+ >+ * JavaScriptCore.xcodeproj/project.pbxproj: >+ * Sources.txt: >+ * bytecode/CodeBlock.h: >+ (JSC::CodeBlock::metadataTable): >+ (JSC::CodeBlock::instructionsRawPointer): >+ * dfg/DFGOSRExit.cpp: >+ (JSC::DFG::OSRExit::executeOSRExit): >+ (JSC::DFG::reifyInlinedCallFrames): >+ (JSC::DFG::adjustAndJumpToTarget): >+ (JSC::DFG::OSRExit::compileOSRExit): >+ (JSC::DFG::OSRExit::compileExit): >+ * dfg/DFGOSRExit.h: >+ (JSC::DFG::OSRExitState::OSRExitState): >+ * dfg/DFGOSRExitCompilerCommon.cpp: >+ (JSC::DFG::callerReturnPC): >+ (JSC::DFG::calleeSaveSlot): >+ (JSC::DFG::reifyInlinedCallFrames): >+ (JSC::DFG::adjustAndJumpToTarget): >+ * dfg/DFGOSRExitCompilerCommon.h: >+ * dfg/DFGOSRExitPreparation.cpp: Removed. >+ * dfg/DFGOSRExitPreparation.h: Removed. 
>+ * ftl/FTLOSRExitCompiler.cpp: >+ (JSC::FTL::compileStub): >+ (JSC::FTL::compileFTLOSRExit): >+ * llint/LLIntData.h: >+ * llint/LowLevelInterpreter.asm: >+ * llint/LowLevelInterpreter32_64.asm: >+ * llint/LowLevelInterpreter64.asm: >+ * offlineasm/asm.rb: >+ * offlineasm/transform.rb: >+ * runtime/Options.h: >+ > 2019-05-18 Tadeu Zagallo <tzagallo@apple.com> > > Add extra information to dumpJITMemory >Index: Source/JavaScriptCore/Sources.txt >=================================================================== >--- Source/JavaScriptCore/Sources.txt (revision 245507) >+++ Source/JavaScriptCore/Sources.txt (working copy) >@@ -380,7 +380,6 @@ dfg/DFGOSRExitBase.cpp > dfg/DFGOSRExitCompilerCommon.cpp > dfg/DFGOSRExitFuzz.cpp > dfg/DFGOSRExitJumpPlaceholder.cpp >-dfg/DFGOSRExitPreparation.cpp > dfg/DFGObjectAllocationSinkingPhase.cpp > dfg/DFGObjectMaterializationData.cpp > dfg/DFGOperations.cpp >Index: Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj >=================================================================== >--- Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj (revision 245507) >+++ Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj (working copy) >@@ -171,7 +171,6 @@ > 0F235BE017178E1C00690C7F /* FTLOSRExitCompiler.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F235BCA17178E1C00690C7F /* FTLOSRExitCompiler.h */; settings = {ATTRIBUTES = (Private, ); }; }; > 0F235BE217178E1C00690C7F /* FTLThunks.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F235BCC17178E1C00690C7F /* FTLThunks.h */; settings = {ATTRIBUTES = (Private, ); }; }; > 0F235BEC17178E7300690C7F /* DFGOSRExitBase.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F235BE817178E7300690C7F /* DFGOSRExitBase.h */; }; >- 0F235BEE17178E7300690C7F /* DFGOSRExitPreparation.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F235BEA17178E7300690C7F /* DFGOSRExitPreparation.h */; }; > 0F24E54117EA9F5900ABB217 /* AssemblyHelpers.h in Headers */ = {isa = PBXBuildFile; 
fileRef = 0F24E53C17EA9F5900ABB217 /* AssemblyHelpers.h */; settings = {ATTRIBUTES = (Private, ); }; }; > 0F24E54217EA9F5900ABB217 /* CCallHelpers.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F24E53D17EA9F5900ABB217 /* CCallHelpers.h */; settings = {ATTRIBUTES = (Private, ); }; }; > 0F24E54317EA9F5900ABB217 /* FPRInfo.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F24E53E17EA9F5900ABB217 /* FPRInfo.h */; settings = {ATTRIBUTES = (Private, ); }; }; >@@ -2187,8 +2186,6 @@ > 0F235BCC17178E1C00690C7F /* FTLThunks.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = FTLThunks.h; path = ftl/FTLThunks.h; sourceTree = "<group>"; }; > 0F235BE717178E7300690C7F /* DFGOSRExitBase.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGOSRExitBase.cpp; path = dfg/DFGOSRExitBase.cpp; sourceTree = "<group>"; }; > 0F235BE817178E7300690C7F /* DFGOSRExitBase.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGOSRExitBase.h; path = dfg/DFGOSRExitBase.h; sourceTree = "<group>"; }; >- 0F235BE917178E7300690C7F /* DFGOSRExitPreparation.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGOSRExitPreparation.cpp; path = dfg/DFGOSRExitPreparation.cpp; sourceTree = "<group>"; }; >- 0F235BEA17178E7300690C7F /* DFGOSRExitPreparation.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGOSRExitPreparation.h; path = dfg/DFGOSRExitPreparation.h; sourceTree = "<group>"; }; > 0F24E53B17EA9F5900ABB217 /* AssemblyHelpers.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = AssemblyHelpers.cpp; sourceTree = "<group>"; }; > 0F24E53C17EA9F5900ABB217 /* AssemblyHelpers.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = AssemblyHelpers.h; sourceTree = "<group>"; }; > 0F24E53D17EA9F5900ABB217 /* 
CCallHelpers.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CCallHelpers.h; sourceTree = "<group>"; }; >@@ -7657,8 +7654,6 @@ > 0F392C881B46188400844728 /* DFGOSRExitFuzz.h */, > 0FEFC9A71681A3B000567F53 /* DFGOSRExitJumpPlaceholder.cpp */, > 0FEFC9A81681A3B000567F53 /* DFGOSRExitJumpPlaceholder.h */, >- 0F235BE917178E7300690C7F /* DFGOSRExitPreparation.cpp */, >- 0F235BEA17178E7300690C7F /* DFGOSRExitPreparation.h */, > 0F6237951AE45CA700D402EA /* DFGPhantomInsertionPhase.cpp */, > 0F6237961AE45CA700D402EA /* DFGPhantomInsertionPhase.h */, > 0FFFC94F14EF909500C72532 /* DFGPhase.cpp */, >@@ -9009,7 +9004,6 @@ > 0F7025AA1714B0FC00382C0E /* DFGOSRExitCompilerCommon.h in Headers */, > 0F392C8A1B46188400844728 /* DFGOSRExitFuzz.h in Headers */, > 0FEFC9AB1681A3B600567F53 /* DFGOSRExitJumpPlaceholder.h in Headers */, >- 0F235BEE17178E7300690C7F /* DFGOSRExitPreparation.h in Headers */, > 0F6237981AE45CA700D402EA /* DFGPhantomInsertionPhase.h in Headers */, > 0FFFC95C14EF90AF00C72532 /* DFGPhase.h in Headers */, > 0F2B9CEB19D0BA7D00B1D1B5 /* DFGPhiChildren.h in Headers */, >Index: Source/JavaScriptCore/bytecode/CodeBlock.h >=================================================================== >--- Source/JavaScriptCore/bytecode/CodeBlock.h (revision 245507) >+++ Source/JavaScriptCore/bytecode/CodeBlock.h (working copy) >@@ -884,6 +884,9 @@ public: > return m_unlinkedCode->metadataSizeInBytes(); > } > >+ MetadataTable* metadataTable() { return m_metadata.get(); } >+ const void* instructionsRawPointer() { return m_instructionsRawPointer; } >+ > protected: > void finalizeLLIntInlineCaches(); > #if ENABLE(JIT) >Index: Source/JavaScriptCore/dfg/DFGOSRExit.cpp >=================================================================== >--- Source/JavaScriptCore/dfg/DFGOSRExit.cpp (revision 245507) >+++ Source/JavaScriptCore/dfg/DFGOSRExit.cpp (working copy) >@@ -33,7 +33,6 @@ > #include "DFGGraph.h" > #include "DFGMayExit.h" > #include 
"DFGOSRExitCompilerCommon.h" >-#include "DFGOSRExitPreparation.h" > #include "DFGOperations.h" > #include "DFGSpeculativeJIT.h" > #include "DirectArguments.h" >@@ -371,9 +370,6 @@ void OSRExit::executeOSRExit(Context& co > // results will be cached in the OSRExitState record for use of the rest of the > // exit ramp code. > >- // Ensure we have baseline codeBlocks to OSR exit to. >- prepareCodeOriginForOSRExit(exec, exit.m_codeOrigin); >- > CodeBlock* baselineCodeBlock = codeBlock->baselineAlternative(); > ASSERT(baselineCodeBlock->jitType() == JITType::BaselineJIT); > >@@ -405,11 +401,24 @@ void OSRExit::executeOSRExit(Context& co > adjustedThreshold = BaselineExecutionCounter::clippedThreshold(codeBlock->globalObject(), adjustedThreshold); > > CodeBlock* codeBlockForExit = baselineCodeBlockForOriginAndBaselineCodeBlock(exit.m_codeOrigin, baselineCodeBlock); >- const JITCodeMap& codeMap = codeBlockForExit->jitCodeMap(); >- CodeLocationLabel<JSEntryPtrTag> codeLocation = codeMap.find(exit.m_codeOrigin.bytecodeIndex()); >- ASSERT(codeLocation); >+ bool exitToLLInt = Options::forceOSRExitToLLInt() || codeBlockForExit->jitType() == JITType::InterpreterThunk; >+ void* jumpTarget; >+ if (exitToLLInt) { >+ unsigned bytecodeOffset = exit.m_codeOrigin.bytecodeIndex(); >+ const Instruction& currentInstruction = *codeBlockForExit->instructions().at(bytecodeOffset).ptr(); >+ MacroAssemblerCodePtr<JSEntryPtrTag> destination; >+ if (currentInstruction.isWide()) >+ destination = LLInt::getWideCodePtr<JSEntryPtrTag>(currentInstruction.opcodeID()); >+ else >+ destination = LLInt::getCodePtr<JSEntryPtrTag>(currentInstruction.opcodeID()); > >- void* jumpTarget = codeLocation.executableAddress(); >+ jumpTarget = destination.executableAddress(); >+ } else { >+ const JITCodeMap& codeMap = codeBlockForExit->jitCodeMap(); >+ CodeLocationLabel<JSEntryPtrTag> codeLocation = codeMap.find(exit.m_codeOrigin.bytecodeIndex()); >+ ASSERT(codeLocation); >+ jumpTarget = 
codeLocation.executableAddress(); >+ } > > // Compute the value recoveries. > Operands<ValueRecovery> operands; >@@ -417,7 +426,7 @@ void OSRExit::executeOSRExit(Context& co > dfgJITCode->variableEventStream.reconstruct(codeBlock, exit.m_codeOrigin, dfgJITCode->minifiedDFG, exit.m_streamIndex, operands, &undefinedOperandSpans); > ptrdiff_t stackPointerOffset = -static_cast<ptrdiff_t>(codeBlock->jitCode()->dfgCommon()->requiredRegisterCountForExit) * sizeof(Register); > >- exit.exitState = adoptRef(new OSRExitState(exit, codeBlock, baselineCodeBlock, operands, WTFMove(undefinedOperandSpans), recovery, stackPointerOffset, activeThreshold, adjustedThreshold, jumpTarget, arrayProfile)); >+ exit.exitState = adoptRef(new OSRExitState(exit, codeBlock, baselineCodeBlock, operands, WTFMove(undefinedOperandSpans), recovery, stackPointerOffset, activeThreshold, adjustedThreshold, jumpTarget, arrayProfile, exitToLLInt)); > > if (UNLIKELY(vm.m_perBytecodeProfiler && codeBlock->jitCode()->dfgCommon()->compilation)) { > Profiler::Database& database = *vm.m_perBytecodeProfiler; >@@ -515,13 +524,18 @@ void OSRExit::executeOSRExit(Context& co > break; > > // Begin extra initilization level: ArrayProfileUpdate >- ArrayProfile* arrayProfile = exitState.arrayProfile; >- if (arrayProfile) { >+ if (ArrayProfile* arrayProfile = exitState.arrayProfile) { > ASSERT(!!exit.m_jsValueSource); > ASSERT(exit.m_kind == BadCache || exit.m_kind == BadIndexingType); >- Structure* structure = profiledValue.asCell()->structure(vm); >- arrayProfile->observeStructure(structure); >- arrayProfile->observeArrayMode(arrayModesFromStructure(structure)); >+ >+ CodeBlock* profiledCodeBlock = baselineCodeBlockForOriginAndBaselineCodeBlock(exit.m_codeOriginForExitProfile, baselineCodeBlock); >+ const Instruction* instruction = profiledCodeBlock->instructions().at(exit.m_codeOriginForExitProfile.bytecodeIndex()).ptr(); >+ bool doProfile = instruction->opcodeID() != op_get_by_id || 
instruction->as<OpGetById>().metadata(profiledCodeBlock).m_mode == GetByIdMode::ArrayLength; >+ if (doProfile) { >+ Structure* structure = profiledValue.asCell()->structure(vm); >+ arrayProfile->observeStructure(structure); >+ arrayProfile->observeArrayMode(arrayModesFromStructure(structure)); >+ } > } > if (extraInitializationLevel <= ExtraInitializationLevel::ArrayProfileUpdate) > break; >@@ -763,6 +777,8 @@ static void reifyInlinedCallFrames(Conte > CodeOrigin* trueCaller = inlineCallFrame->getCallerSkippingTailCalls(&trueCallerCallKind); > void* callerFrame = cpu.fp(); > >+ bool callerIsLLInt = false; >+ > if (!trueCaller) { > ASSERT(inlineCallFrame->isTail()); > void* returnPC = frame.get<void*>(CallFrame::returnPCOffset()); >@@ -776,46 +792,16 @@ static void reifyInlinedCallFrames(Conte > } else { > CodeBlock* baselineCodeBlockForCaller = baselineCodeBlockForOriginAndBaselineCodeBlock(*trueCaller, outermostBaselineCodeBlock); > unsigned callBytecodeIndex = trueCaller->bytecodeIndex(); >- MacroAssemblerCodePtr<JSInternalPtrTag> jumpTarget; >- >- switch (trueCallerCallKind) { >- case InlineCallFrame::Call: >- case InlineCallFrame::Construct: >- case InlineCallFrame::CallVarargs: >- case InlineCallFrame::ConstructVarargs: >- case InlineCallFrame::TailCall: >- case InlineCallFrame::TailCallVarargs: { >- CallLinkInfo* callLinkInfo = >- baselineCodeBlockForCaller->getCallLinkInfoForBytecodeIndex(callBytecodeIndex); >- RELEASE_ASSERT(callLinkInfo); >- >- jumpTarget = callLinkInfo->callReturnLocation(); >- break; >- } >- >- case InlineCallFrame::GetterCall: >- case InlineCallFrame::SetterCall: { >- StructureStubInfo* stubInfo = >- baselineCodeBlockForCaller->findStubInfo(CodeOrigin(callBytecodeIndex)); >- RELEASE_ASSERT(stubInfo); >- >- jumpTarget = stubInfo->doneLocation(); >- break; >- } >- >- default: >- RELEASE_ASSERT_NOT_REACHED(); >- } >+ void* jumpTarget = callerReturnPC(baselineCodeBlockForCaller, callBytecodeIndex, trueCallerCallKind, callerIsLLInt); > > if 
(trueCaller->inlineCallFrame()) > callerFrame = cpu.fp<uint8_t*>() + trueCaller->inlineCallFrame()->stackOffset * sizeof(EncodedJSValue); > >- void* targetAddress = jumpTarget.executableAddress(); > #if CPU(ARM64E) > void* newEntrySP = cpu.fp<uint8_t*>() + inlineCallFrame->returnPCOffset() + sizeof(void*); >- targetAddress = retagCodePtr(targetAddress, JSInternalPtrTag, bitwise_cast<PtrTag>(newEntrySP)); >+ jumpTarget = tagCodePtr(jumpTarget, bitwise_cast<PtrTag>(newEntrySP)); > #endif >- frame.set<void*>(inlineCallFrame->returnPCOffset(), targetAddress); >+ frame.set<void*>(inlineCallFrame->returnPCOffset(), jumpTarget); > } > > frame.setOperand<void*>(inlineCallFrame->stackOffset + CallFrameSlot::codeBlock, baselineCodeBlock); >@@ -825,6 +811,14 @@ static void reifyInlinedCallFrames(Conte > // copy the prior contents of the tag registers already saved for the outer frame to this frame. > saveOrCopyCalleeSavesFor(context, baselineCodeBlock, VirtualRegister(inlineCallFrame->stackOffset), !trueCaller); > >+ if (callerIsLLInt) { >+ CodeBlock* baselineCodeBlockForCaller = baselineCodeBlockForOriginAndBaselineCodeBlock(*trueCaller, outermostBaselineCodeBlock); >+ frame.set<const void*>(calleeSaveSlot(inlineCallFrame, baselineCodeBlock, LLInt::Registers::metadataTableGPR).offset, baselineCodeBlockForCaller->metadataTable()); >+#if USE(JSVALUE64) >+ frame.set<const void*>(calleeSaveSlot(inlineCallFrame, baselineCodeBlock, LLInt::Registers::pbGPR).offset, baselineCodeBlockForCaller->instructionsRawPointer()); >+#endif >+ } >+ > if (!inlineCallFrame->isVarargs()) > frame.setOperand<uint32_t>(inlineCallFrame->stackOffset + CallFrameSlot::argumentCount, PayloadOffset, inlineCallFrame->argumentCountIncludingThis); > ASSERT(callerFrame); >@@ -889,6 +883,24 @@ static void adjustAndJumpToTarget(Contex > } > > vm.topCallFrame = context.fp<ExecState*>(); >+ >+ if (exitState->isJumpToLLInt) { >+ CodeBlock* codeBlockForExit = 
baselineCodeBlockForOriginAndBaselineCodeBlock(exit.m_codeOrigin, baselineCodeBlock); >+ unsigned bytecodeOffset = exit.m_codeOrigin.bytecodeIndex(); >+ const Instruction& currentInstruction = *codeBlockForExit->instructions().at(bytecodeOffset).ptr(); >+ >+ context.gpr(LLInt::Registers::metadataTableGPR) = bitwise_cast<uintptr_t>(codeBlockForExit->metadataTable()); >+#if USE(JSVALUE64) >+ context.gpr(LLInt::Registers::pbGPR) = bitwise_cast<uintptr_t>(codeBlockForExit->instructionsRawPointer()); >+ context.gpr(LLInt::Registers::pcGPR) = static_cast<uintptr_t>(exit.m_codeOrigin.bytecodeIndex()); >+#else >+ context.gpr(LLInt::Registers::pcGPR) = bitwise_cast<uintptr_t>(&currentInstruction); >+#endif >+ >+ if (exit.isExceptionHandler()) >+ vm.targetInterpreterPCForThrow = &currentInstruction; >+ } >+ > context.pc() = untagCodePtr<JSEntryPtrTag>(jumpTarget); > } > >@@ -1047,8 +1059,6 @@ void JIT_OPERATION OSRExit::compileOSREx > ASSERT(!vm->callFrameForCatch || exit.m_kind == GenericUnwind); > EXCEPTION_ASSERT_UNUSED(scope, !!scope.exception() || !exit.isExceptionHandler()); > >- prepareCodeOriginForOSRExit(exec, exit.m_codeOrigin); >- > // Compute the value recoveries. 
> Operands<ValueRecovery> operands; > codeBlock->jitCode()->dfg()->variableEventStream.reconstruct(codeBlock, exit.m_codeOrigin, codeBlock->jitCode()->dfg()->minifiedDFG, exit.m_streamIndex, operands); >@@ -1167,6 +1177,13 @@ void OSRExit::compileExit(CCallHelpers& > > CodeOrigin codeOrigin = exit.m_codeOriginForExitProfile; > if (ArrayProfile* arrayProfile = jit.baselineCodeBlockFor(codeOrigin)->getArrayProfile(codeOrigin.bytecodeIndex())) { >+ const Instruction* instruction = jit.baselineCodeBlockFor(codeOrigin)->instructions().at(codeOrigin.bytecodeIndex()).ptr(); >+ CCallHelpers::Jump skipProfile; >+ if (instruction->opcodeID() == op_get_by_id) { >+ auto& metadata = instruction->as<OpGetById>().metadata(jit.baselineCodeBlockFor(codeOrigin)); >+ skipProfile = jit.branch8(CCallHelpers::NotEqual, CCallHelpers::AbsoluteAddress(&metadata.m_mode), CCallHelpers::TrustedImm32(static_cast<uint8_t>(GetByIdMode::ArrayLength))); >+ } >+ > #if USE(JSVALUE64) > GPRReg usedRegister; > if (exit.m_jsValueSource.isAddress()) >@@ -1242,6 +1259,9 @@ void OSRExit::compileExit(CCallHelpers& > jit.pop(scratch2); > jit.pop(scratch1); > } >+ >+ if (skipProfile.isSet()) >+ skipProfile.link(&jit); > } > } > >Index: Source/JavaScriptCore/dfg/DFGOSRExit.h >=================================================================== >--- Source/JavaScriptCore/dfg/DFGOSRExit.h (revision 245507) >+++ Source/JavaScriptCore/dfg/DFGOSRExit.h (working copy) >@@ -106,7 +106,7 @@ private: > enum class ExtraInitializationLevel; > > struct OSRExitState : RefCounted<OSRExitState> { >- OSRExitState(OSRExitBase& exit, CodeBlock* codeBlock, CodeBlock* baselineCodeBlock, Operands<ValueRecovery>& operands, Vector<UndefinedOperandSpan>&& undefinedOperandSpans, SpeculationRecovery* recovery, ptrdiff_t stackPointerOffset, int32_t activeThreshold, double memoryUsageAdjustedThreshold, void* jumpTarget, ArrayProfile* arrayProfile) >+ OSRExitState(OSRExitBase& exit, CodeBlock* codeBlock, CodeBlock* baselineCodeBlock, 
Operands<ValueRecovery>& operands, Vector<UndefinedOperandSpan>&& undefinedOperandSpans, SpeculationRecovery* recovery, ptrdiff_t stackPointerOffset, int32_t activeThreshold, double memoryUsageAdjustedThreshold, void* jumpTarget, ArrayProfile* arrayProfile, bool isJumpToLLInt) > : exit(exit) > , codeBlock(codeBlock) > , baselineCodeBlock(baselineCodeBlock) >@@ -118,6 +118,7 @@ struct OSRExitState : RefCounted<OSRExit > , memoryUsageAdjustedThreshold(memoryUsageAdjustedThreshold) > , jumpTarget(jumpTarget) > , arrayProfile(arrayProfile) >+ , isJumpToLLInt(isJumpToLLInt) > { } > > OSRExitBase& exit; >@@ -131,6 +132,7 @@ struct OSRExitState : RefCounted<OSRExit > double memoryUsageAdjustedThreshold; > void* jumpTarget; > ArrayProfile* arrayProfile; >+ bool isJumpToLLInt; > > ExtraInitializationLevel extraInitializationLevel; > Profiler::OSRExit* profilerExit { nullptr }; >Index: Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp >=================================================================== >--- Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp (revision 245507) >+++ Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp (working copy) >@@ -33,10 +33,29 @@ > #include "JIT.h" > #include "JSCJSValueInlines.h" > #include "JSCInlines.h" >+#include "LLIntData.h" > #include "StructureStubInfo.h" > > namespace JSC { namespace DFG { > >+// These are the LLInt OSR exit return points. 
>+extern "C" void op_call_return_location_wide(); >+extern "C" void op_call_return_location_narrow(); >+extern "C" void op_construct_return_location_wide(); >+extern "C" void op_construct_return_location_narrow(); >+extern "C" void op_call_varargs_slow_return_location_wide(); >+extern "C" void op_call_varargs_slow_return_location_narrow(); >+extern "C" void op_construct_varargs_slow_return_location_wide(); >+extern "C" void op_construct_varargs_slow_return_location_narrow(); >+extern "C" void op_get_by_id_return_location_narrow(); >+extern "C" void op_get_by_id_return_location_wide(); >+extern "C" void op_get_by_val_return_location_narrow(); >+extern "C" void op_get_by_val_return_location_wide(); >+extern "C" void op_put_by_id_return_location_narrow(); >+extern "C" void op_put_by_id_return_location_wide(); >+extern "C" void op_put_by_val_return_location_narrow(); >+extern "C" void op_put_by_val_return_location_wide(); >+ > void handleExitCounts(CCallHelpers& jit, const OSRExitBase& exit) > { > if (!exitKindMayJettison(exit.m_kind)) { >@@ -136,6 +155,102 @@ void handleExitCounts(CCallHelpers& jit, > doneAdjusting.link(&jit); > } > >+void* callerReturnPC(CodeBlock* baselineCodeBlockForCaller, unsigned callBytecodeIndex, InlineCallFrame::Kind trueCallerCallKind, bool& callerIsLLInt) >+{ >+ callerIsLLInt = Options::forceOSRExitToLLInt() || baselineCodeBlockForCaller->jitType() == JITType::InterpreterThunk; >+ >+ void* jumpTarget; >+ >+ if (callerIsLLInt) { >+ const Instruction& callInstruction = *baselineCodeBlockForCaller->instructions().at(callBytecodeIndex).ptr(); >+ bool isWide = callInstruction.isWide(); >+ >+#define LLINT_RETURN_LOCATION(name) FunctionPtr<NoPtrTag>(isWide ? 
name##_wide : name##_narrow).executableAddress() >+ >+ switch (trueCallerCallKind) { >+ case InlineCallFrame::Call: >+ jumpTarget = LLINT_RETURN_LOCATION(op_call_return_location); >+ break; >+ case InlineCallFrame::Construct: >+ jumpTarget = LLINT_RETURN_LOCATION(op_construct_return_location); >+ break; >+ case InlineCallFrame::CallVarargs: >+ jumpTarget = LLINT_RETURN_LOCATION(op_call_varargs_slow_return_location); >+ break; >+ case InlineCallFrame::ConstructVarargs: >+ jumpTarget = LLINT_RETURN_LOCATION(op_construct_varargs_slow_return_location); >+ break; >+ case InlineCallFrame::GetterCall: { >+ if (callInstruction.opcodeID() == op_get_by_id) >+ jumpTarget = LLINT_RETURN_LOCATION(op_get_by_id_return_location); >+ else if (callInstruction.opcodeID() == op_get_by_val) >+ jumpTarget = LLINT_RETURN_LOCATION(op_get_by_val_return_location); >+ else >+ RELEASE_ASSERT_NOT_REACHED(); >+ break; >+ } >+ case InlineCallFrame::SetterCall: { >+ if (callInstruction.opcodeID() == op_put_by_id) >+ jumpTarget = LLINT_RETURN_LOCATION(op_put_by_id_return_location); >+ else if (callInstruction.opcodeID() == op_put_by_val) >+ jumpTarget = LLINT_RETURN_LOCATION(op_put_by_val_return_location); >+ else >+ RELEASE_ASSERT_NOT_REACHED(); >+ break; >+ } >+ >+ default: >+ RELEASE_ASSERT_NOT_REACHED(); >+ } >+ >+#undef LLINT_RETURN_LOCATION >+ >+ } else { >+ switch (trueCallerCallKind) { >+ case InlineCallFrame::Call: >+ case InlineCallFrame::Construct: >+ case InlineCallFrame::CallVarargs: >+ case InlineCallFrame::ConstructVarargs: { >+ CallLinkInfo* callLinkInfo = >+ baselineCodeBlockForCaller->getCallLinkInfoForBytecodeIndex(callBytecodeIndex); >+ RELEASE_ASSERT(callLinkInfo); >+ >+ jumpTarget = callLinkInfo->callReturnLocation().untaggedExecutableAddress(); >+ break; >+ } >+ >+ case InlineCallFrame::GetterCall: >+ case InlineCallFrame::SetterCall: { >+ StructureStubInfo* stubInfo = >+ baselineCodeBlockForCaller->findStubInfo(CodeOrigin(callBytecodeIndex)); >+ RELEASE_ASSERT(stubInfo); >+ 
>+ jumpTarget = stubInfo->doneLocation().untaggedExecutableAddress(); >+ break; >+ } >+ >+ default: >+ RELEASE_ASSERT_NOT_REACHED(); >+ } >+ } >+ >+ return jumpTarget; >+} >+ >+CCallHelpers::Address calleeSaveSlot(InlineCallFrame* inlineCallFrame, CodeBlock* baselineCodeBlock, GPRReg calleeSave) >+{ >+ const RegisterAtOffsetList* calleeSaves = baselineCodeBlock->calleeSaveRegisters(); >+ for (unsigned i = 0; i < calleeSaves->size(); i++) { >+ RegisterAtOffset entry = calleeSaves->at(i); >+ if (entry.reg() != calleeSave) >+ continue; >+ return CCallHelpers::Address(CCallHelpers::framePointerRegister, static_cast<VirtualRegister>(inlineCallFrame->stackOffset).offsetInBytes() + entry.offset()); >+ } >+ >+ RELEASE_ASSERT_NOT_REACHED(); >+ return CCallHelpers::Address(CCallHelpers::framePointerRegister); >+} >+ > void reifyInlinedCallFrames(CCallHelpers& jit, const OSRExitBase& exit) > { > // FIXME: We shouldn't leave holes on the stack when performing an OSR exit >@@ -152,6 +267,8 @@ void reifyInlinedCallFrames(CCallHelpers > CodeOrigin* trueCaller = inlineCallFrame->getCallerSkippingTailCalls(&trueCallerCallKind); > GPRReg callerFrameGPR = GPRInfo::callFrameRegister; > >+ bool callerIsLLInt = false; >+ > if (!trueCaller) { > ASSERT(inlineCallFrame->isTail()); > jit.loadPtr(AssemblyHelpers::Address(GPRInfo::callFrameRegister, CallFrame::returnPCOffset()), GPRInfo::regT3); >@@ -167,36 +284,7 @@ void reifyInlinedCallFrames(CCallHelpers > } else { > CodeBlock* baselineCodeBlockForCaller = jit.baselineCodeBlockFor(*trueCaller); > unsigned callBytecodeIndex = trueCaller->bytecodeIndex(); >- void* jumpTarget = nullptr; >- >- switch (trueCallerCallKind) { >- case InlineCallFrame::Call: >- case InlineCallFrame::Construct: >- case InlineCallFrame::CallVarargs: >- case InlineCallFrame::ConstructVarargs: >- case InlineCallFrame::TailCall: >- case InlineCallFrame::TailCallVarargs: { >- CallLinkInfo* callLinkInfo = >- 
baselineCodeBlockForCaller->getCallLinkInfoForBytecodeIndex(callBytecodeIndex); >- RELEASE_ASSERT(callLinkInfo); >- >- jumpTarget = callLinkInfo->callReturnLocation().untaggedExecutableAddress(); >- break; >- } >- >- case InlineCallFrame::GetterCall: >- case InlineCallFrame::SetterCall: { >- StructureStubInfo* stubInfo = >- baselineCodeBlockForCaller->findStubInfo(CodeOrigin(callBytecodeIndex)); >- RELEASE_ASSERT(stubInfo); >- >- jumpTarget = stubInfo->doneLocation().untaggedExecutableAddress(); >- break; >- } >- >- default: >- RELEASE_ASSERT_NOT_REACHED(); >- } >+ void* jumpTarget = callerReturnPC(baselineCodeBlockForCaller, callBytecodeIndex, trueCallerCallKind, callerIsLLInt); > > if (trueCaller->inlineCallFrame()) { > jit.addPtr( >@@ -227,6 +315,14 @@ void reifyInlinedCallFrames(CCallHelpers > trueCaller ? AssemblyHelpers::UseExistingTagRegisterContents : AssemblyHelpers::CopyBaselineCalleeSavedRegistersFromBaseFrame, > GPRInfo::regT2); > >+ if (callerIsLLInt) { >+ CodeBlock* baselineCodeBlockForCaller = jit.baselineCodeBlockFor(*trueCaller); >+ jit.storePtr(CCallHelpers::TrustedImmPtr(baselineCodeBlockForCaller->metadataTable()), calleeSaveSlot(inlineCallFrame, baselineCodeBlock, LLInt::Registers::metadataTableGPR)); >+#if USE(JSVALUE64) >+ jit.storePtr(CCallHelpers::TrustedImmPtr(baselineCodeBlockForCaller->instructionsRawPointer()), calleeSaveSlot(inlineCallFrame, baselineCodeBlock, LLInt::Registers::pbGPR)); >+#endif >+ } >+ > if (!inlineCallFrame->isVarargs()) > jit.store32(AssemblyHelpers::TrustedImm32(inlineCallFrame->argumentCountIncludingThis), AssemblyHelpers::payloadFor((VirtualRegister)(inlineCallFrame->stackOffset + CallFrameSlot::argumentCount))); > #if USE(JSVALUE64) >@@ -310,11 +406,38 @@ void adjustAndJumpToTarget(VM& vm, CCall > > CodeBlock* codeBlockForExit = jit.baselineCodeBlockFor(exit.m_codeOrigin); > ASSERT(codeBlockForExit == codeBlockForExit->baselineVersion()); >- ASSERT(codeBlockForExit->jitType() == JITType::BaselineJIT); >- 
CodeLocationLabel<JSEntryPtrTag> codeLocation = codeBlockForExit->jitCodeMap().find(exit.m_codeOrigin.bytecodeIndex()); >- ASSERT(codeLocation); > >- void* jumpTarget = codeLocation.retagged<OSRExitPtrTag>().executableAddress(); >+ void* jumpTarget; >+ bool exitToLLInt = Options::forceOSRExitToLLInt() || codeBlockForExit->jitType() == JITType::InterpreterThunk; >+ if (exitToLLInt) { >+ unsigned bytecodeOffset = exit.m_codeOrigin.bytecodeIndex(); >+ const Instruction& currentInstruction = *codeBlockForExit->instructions().at(bytecodeOffset).ptr(); >+ MacroAssemblerCodePtr<JSEntryPtrTag> destination; >+ if (currentInstruction.isWide()) >+ destination = LLInt::getWideCodePtr<JSEntryPtrTag>(currentInstruction.opcodeID()); >+ else >+ destination = LLInt::getCodePtr<JSEntryPtrTag>(currentInstruction.opcodeID()); >+ >+ if (exit.isExceptionHandler()) { >+ jit.move(CCallHelpers::TrustedImmPtr(&currentInstruction), GPRInfo::regT2); >+ jit.storePtr(GPRInfo::regT2, &vm.targetInterpreterPCForThrow); >+ } >+ >+ jit.move(CCallHelpers::TrustedImmPtr(codeBlockForExit->metadataTable()), LLInt::Registers::metadataTableGPR); >+#if USE(JSVALUE64) >+ jit.move(CCallHelpers::TrustedImmPtr(codeBlockForExit->instructionsRawPointer()), LLInt::Registers::pbGPR); >+ jit.move(CCallHelpers::TrustedImm32(bytecodeOffset), LLInt::Registers::pcGPR); >+#else >+ jit.move(CCallHelpers::TrustedImmPtr(&currentInstruction), LLInt::Registers::pcGPR); >+#endif >+ jumpTarget = destination.retagged<OSRExitPtrTag>().executableAddress(); >+ } else { >+ CodeLocationLabel<JSEntryPtrTag> codeLocation = codeBlockForExit->jitCodeMap().find(exit.m_codeOrigin.bytecodeIndex()); >+ ASSERT(codeLocation); >+ >+ jumpTarget = codeLocation.retagged<OSRExitPtrTag>().executableAddress(); >+ } >+ > jit.addPtr(AssemblyHelpers::TrustedImm32(JIT::stackPointerOffsetFor(codeBlockForExit) * sizeof(Register)), GPRInfo::callFrameRegister, AssemblyHelpers::stackPointerRegister); > if (exit.isExceptionHandler()) { > // Since we're jumping to 
op_catch, we need to set callFrameForCatch. >Index: Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.h >=================================================================== >--- Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.h (revision 245507) >+++ Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.h (working copy) >@@ -39,6 +39,8 @@ namespace JSC { namespace DFG { > void handleExitCounts(CCallHelpers&, const OSRExitBase&); > void reifyInlinedCallFrames(CCallHelpers&, const OSRExitBase&); > void adjustAndJumpToTarget(VM&, CCallHelpers&, const OSRExitBase&); >+void* callerReturnPC(CodeBlock* baselineCodeBlockForCaller, unsigned callBytecodeOffset, InlineCallFrame::Kind callerKind, bool& callerIsLLInt); >+CCallHelpers::Address calleeSaveSlot(InlineCallFrame*, CodeBlock* baselineCodeBlock, GPRReg calleeSave); > > template <typename JITCodeType> > void adjustFrameAndStackInOSRExitCompilerThunk(MacroAssembler& jit, VM* vm, JITType jitType) >Index: Source/JavaScriptCore/dfg/DFGOSRExitPreparation.cpp >=================================================================== >--- Source/JavaScriptCore/dfg/DFGOSRExitPreparation.cpp (revision 245507) >+++ Source/JavaScriptCore/dfg/DFGOSRExitPreparation.cpp (nonexistent) >@@ -1,53 +0,0 @@ >-/* >- * Copyright (C) 2013, 2014 Apple Inc. All rights reserved. >- * >- * Redistribution and use in source and binary forms, with or without >- * modification, are permitted provided that the following conditions >- * are met: >- * 1. Redistributions of source code must retain the above copyright >- * notice, this list of conditions and the following disclaimer. >- * 2. Redistributions in binary form must reproduce the above copyright >- * notice, this list of conditions and the following disclaimer in the >- * documentation and/or other materials provided with the distribution. >- * >- * THIS SOFTWARE IS PROVIDED BY APPLE INC. 
``AS IS'' AND ANY >- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE >- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR >- * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR >- * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, >- * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, >- * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR >- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY >- * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT >- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE >- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. >- */ >- >-#include "config.h" >-#include "DFGOSRExitPreparation.h" >- >-#if ENABLE(DFG_JIT) >- >-#include "CodeBlock.h" >-#include "JIT.h" >-#include "JITCode.h" >-#include "JITWorklist.h" >-#include "JSCInlines.h" >- >-namespace JSC { namespace DFG { >- >-void prepareCodeOriginForOSRExit(ExecState* exec, CodeOrigin codeOrigin) >-{ >- VM& vm = exec->vm(); >- DeferGC deferGC(vm.heap); >- >- for (; codeOrigin.inlineCallFrame(); codeOrigin = codeOrigin.inlineCallFrame()->directCaller) { >- CodeBlock* codeBlock = codeOrigin.inlineCallFrame()->baselineCodeBlock.get(); >- JITWorklist::ensureGlobalWorklist().compileNow(codeBlock); >- } >-} >- >-} } // namespace JSC::DFG >- >-#endif // ENABLE(DFG_JIT) >- >Index: Source/JavaScriptCore/dfg/DFGOSRExitPreparation.h >=================================================================== >--- Source/JavaScriptCore/dfg/DFGOSRExitPreparation.h (revision 245507) >+++ Source/JavaScriptCore/dfg/DFGOSRExitPreparation.h (nonexistent) >@@ -1,48 +0,0 @@ >-/* >- * Copyright (C) 2013 Apple Inc. All rights reserved. >- * >- * Redistribution and use in source and binary forms, with or without >- * modification, are permitted provided that the following conditions >- * are met: >- * 1. 
Redistributions of source code must retain the above copyright >- * notice, this list of conditions and the following disclaimer. >- * 2. Redistributions in binary form must reproduce the above copyright >- * notice, this list of conditions and the following disclaimer in the >- * documentation and/or other materials provided with the distribution. >- * >- * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY >- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE >- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR >- * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR >- * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, >- * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, >- * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR >- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY >- * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT >- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE >- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. >- */ >- >-#pragma once >- >-#if ENABLE(DFG_JIT) >- >-#include "CallFrame.h" >-#include "CodeOrigin.h" >- >-namespace JSC { namespace DFG { >- >-// Make sure all code on our inline stack is JIT compiled. This is necessary since >-// we may opt to inline a code block even before it had ever been compiled by the >-// JIT, but our OSR exit infrastructure currently only works if the target of the >-// OSR exit is JIT code. This could be changed since there is nothing particularly >-// hard about doing an OSR exit into the interpreter, but for now this seems to make >-// sense in that if we're OSR exiting from inlined code of a DFG code block, then >-// probably it's a good sign that the thing we're exiting into is hot. 
Even more >-// interestingly, since the code was inlined, it may never otherwise get JIT >-// compiled since the act of inlining it may ensure that it otherwise never runs. >-void prepareCodeOriginForOSRExit(ExecState*, CodeOrigin); >- >-} } // namespace JSC::DFG >- >-#endif // ENABLE(DFG_JIT) >Index: Source/JavaScriptCore/ftl/FTLOSRExitCompiler.cpp >=================================================================== >--- Source/JavaScriptCore/ftl/FTLOSRExitCompiler.cpp (revision 245507) >+++ Source/JavaScriptCore/ftl/FTLOSRExitCompiler.cpp (working copy) >@@ -28,8 +28,8 @@ > > #if ENABLE(FTL_JIT) > >+#include "BytecodeStructs.h" > #include "DFGOSRExitCompilerCommon.h" >-#include "DFGOSRExitPreparation.h" > #include "FTLExitArgumentForOperand.h" > #include "FTLJITCode.h" > #include "FTLLocation.h" >@@ -37,6 +37,7 @@ > #include "FTLOperations.h" > #include "FTLState.h" > #include "FTLSaveRestore.h" >+#include "GetByIdMetadata.h" > #include "LinkBuffer.h" > #include "MaxFrameExtentForSlowPathCall.h" > #include "OperandsInlines.h" >@@ -249,6 +250,14 @@ static void compileStub( > if (exit.m_kind == BadCache || exit.m_kind == BadIndexingType) { > CodeOrigin codeOrigin = exit.m_codeOriginForExitProfile; > if (ArrayProfile* arrayProfile = jit.baselineCodeBlockFor(codeOrigin)->getArrayProfile(codeOrigin.bytecodeIndex())) { >+ >+ const Instruction* instruction = jit.baselineCodeBlockFor(codeOrigin)->instructions().at(codeOrigin.bytecodeIndex()).ptr(); >+ CCallHelpers::Jump skipProfile; >+ if (instruction->opcodeID() == op_get_by_id) { >+ auto& metadata = instruction->as<OpGetById>().metadata(jit.baselineCodeBlockFor(codeOrigin)); >+ skipProfile = jit.branch8(CCallHelpers::NotEqual, CCallHelpers::AbsoluteAddress(&metadata.m_mode), CCallHelpers::TrustedImm32(static_cast<uint8_t>(GetByIdMode::ArrayLength))); >+ } >+ > jit.load32(MacroAssembler::Address(GPRInfo::regT0, JSCell::structureIDOffset()), GPRInfo::regT1); > jit.store32(GPRInfo::regT1, 
arrayProfile->addressOfLastSeenStructureID()); > >@@ -266,6 +275,9 @@ static void compileStub( > jit.lshift32(GPRInfo::regT1, GPRInfo::regT2); > storeArrayModes.link(&jit); > jit.or32(GPRInfo::regT2, MacroAssembler::AbsoluteAddress(arrayProfile->addressOfArrayModes())); >+ >+ if (skipProfile.isSet()) >+ skipProfile.link(&jit); > } > } > >@@ -532,8 +544,6 @@ extern "C" void* compileFTLOSRExit(ExecS > } > } > >- prepareCodeOriginForOSRExit(exec, exit.m_codeOrigin); >- > compileStub(exitID, jitCode, exit, &vm, codeBlock); > > MacroAssembler::repatchJump( >Index: Source/JavaScriptCore/llint/LLIntData.h >=================================================================== >--- Source/JavaScriptCore/llint/LLIntData.h (revision 245507) >+++ Source/JavaScriptCore/llint/LLIntData.h (working copy) >@@ -25,6 +25,7 @@ > > #pragma once > >+#include "GPRInfo.h" > #include "JSCJSValue.h" > #include "MacroAssemblerCodeRef.h" > #include "Opcode.h" >@@ -152,4 +153,23 @@ ALWAYS_INLINE void* getCodePtr(JSC::Enco > return bitwise_cast<void*>(glueHelper); > } > >+#if ENABLE(JIT) >+struct Registers { >+ static const GPRReg pcGPR = GPRInfo::regT4; >+ >+#if CPU(X86_64) && !OS(WINDOWS) >+ static const GPRReg metadataTableGPR = GPRInfo::regCS1; >+ static const GPRReg pbGPR = GPRInfo::regCS2; >+#elif CPU(X86_64) && OS(WINDOWS) >+ static const GPRReg metadataTableGPR = GPRInfo::regCS3; >+ static const GPRReg pbGPR = GPRInfo::regCS4; >+#elif CPU(ARM64) >+ static const GPRReg metadataTableGPR = GPRInfo::regCS6; >+ static const GPRReg pbGPR = GPRInfo::regCS7; >+#elif CPU(MIPS) || CPU(ARM) >+ static const GPRReg metadataTableGPR = GPRInfo::regCS0; >+#endif >+}; >+#endif >+ > } } // namespace JSC::LLInt >Index: Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm >=================================================================== >--- Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm (revision 245507) >+++ Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm (working copy) >@@ 
-1391,6 +1391,13 @@ llintOpWithMetadata(op_get_by_id, OpGetB > .opGetByIdSlow: > callSlowPath(_llint_slow_path_get_by_id) > dispatch() >+ >+.osrReturnPoint: >+ getterSetterOSRExitReturnPoint(op_get_by_id, size) >+ metadata(t2, t3) >+ valueProfile(OpGetById, t2, r1, r0) >+ return(r1, r0) >+ > end) > > >@@ -1453,6 +1460,11 @@ llintOpWithMetadata(op_put_by_id, OpPutB > .opPutByIdSlow: > callSlowPath(_llint_slow_path_put_by_id) > dispatch() >+ >+.osrReturnPoint: >+ getterSetterOSRExitReturnPoint(op_put_by_id, size) >+ dispatch() >+ > end) > > >@@ -1504,10 +1516,17 @@ llintOpWithMetadata(op_get_by_val, OpGet > .opGetByValSlow: > callSlowPath(_llint_slow_path_get_by_val) > dispatch() >+ >+.osrReturnPoint: >+ getterSetterOSRExitReturnPoint(op_get_by_val, size) >+ metadata(t2, t3) >+ valueProfile(OpGetByVal, t2, r1, r0) >+ return(r1, r0) >+ > end) > > >-macro putByValOp(opcodeName, opcodeStruct) >+macro putByValOp(opcodeName, opcodeStruct, osrExitPoint) > llintOpWithMetadata(op_%opcodeName%, opcodeStruct, macro (size, get, dispatch, metadata, return) > macro contiguousPutByVal(storeCallback) > biaeq t3, -sizeof IndexingHeader + IndexingHeader::u.lengths.publicLength[t0], .outOfBounds >@@ -1595,13 +1614,20 @@ macro putByValOp(opcodeName, opcodeStruc > .opPutByValSlow: > callSlowPath(_llint_slow_path_%opcodeName%) > dispatch() >+ >+ .osrExitPoint: >+ osrExitPoint(size, dispatch) > end) > end > > >-putByValOp(put_by_val, OpPutByVal) >+putByValOp(put_by_val, OpPutByVal, macro (size, dispatch) >+ .osrReturnPoint: >+ getterSetterOSRExitReturnPoint(op_put_by_val, size) >+ dispatch() >+end) > >-putByValOp(put_by_val_direct, OpPutByValDirect) >+putByValOp(put_by_val_direct, OpPutByValDirect, macro (a, b) end) > > > macro llintJumpTrueOrFalseOp(opcodeName, opcodeStruct, conditionOp) >@@ -1850,10 +1876,10 @@ macro commonCallOp(opcodeName, slowPath, > storei CellTag, Callee + TagOffset[t3] > move t3, sp > prepareCall(%opcodeStruct%::Metadata::m_callLinkInfo.machineCodeTarget[t5], t2, 
t3, t4, JSEntryPtrTag) >- callTargetFunction(size, opcodeStruct, dispatch, %opcodeStruct%::Metadata::m_callLinkInfo.machineCodeTarget[t5], JSEntryPtrTag) >+ callTargetFunction(opcodeName, size, opcodeStruct, dispatch, %opcodeStruct%::Metadata::m_callLinkInfo.machineCodeTarget[t5], JSEntryPtrTag) > > .opCallSlow: >- slowPathForCall(size, opcodeStruct, dispatch, slowPath, prepareCall) >+ slowPathForCall(opcodeName, size, opcodeStruct, dispatch, slowPath, prepareCall) > end) > end > >Index: Source/JavaScriptCore/llint/LowLevelInterpreter64.asm >=================================================================== >--- Source/JavaScriptCore/llint/LowLevelInterpreter64.asm (revision 245507) >+++ Source/JavaScriptCore/llint/LowLevelInterpreter64.asm (working copy) >@@ -1285,7 +1285,6 @@ llintOpWithMetadata(op_get_by_id_direct, > dispatch() > end) > >- > llintOpWithMetadata(op_get_by_id, OpGetById, macro (size, get, dispatch, metadata, return) > metadata(t2, t1) > loadb OpGetById::Metadata::m_mode[t2], t1 >@@ -1336,6 +1335,13 @@ llintOpWithMetadata(op_get_by_id, OpGetB > .opGetByIdSlow: > callSlowPath(_llint_slow_path_get_by_id) > dispatch() >+ >+.osrReturnPoint: >+ getterSetterOSRExitReturnPoint(op_get_by_id, size) >+ metadata(t2, t3) >+ valueProfile(OpGetById, t2, r0) >+ return(r0) >+ > end) > > >@@ -1408,6 +1414,11 @@ llintOpWithMetadata(op_put_by_id, OpPutB > .opPutByIdSlow: > callSlowPath(_llint_slow_path_put_by_id) > dispatch() >+ >+.osrReturnPoint: >+ getterSetterOSRExitReturnPoint(op_put_by_id, size) >+ dispatch() >+ > end) > > >@@ -1577,10 +1588,17 @@ llintOpWithMetadata(op_get_by_val, OpGet > .opGetByValSlow: > callSlowPath(_llint_slow_path_get_by_val) > dispatch() >+ >+.osrReturnPoint: >+ getterSetterOSRExitReturnPoint(op_get_by_val, size) >+ metadata(t5, t2) >+ valueProfile(OpGetByVal, t5, r0) >+ return(r0) >+ > end) > > >-macro putByValOp(opcodeName, opcodeStruct) >+macro putByValOp(opcodeName, opcodeStruct, osrExitPoint) > llintOpWithMetadata(op_%opcodeName%, 
opcodeStruct, macro (size, get, dispatch, metadata, return) > macro contiguousPutByVal(storeCallback) > biaeq t3, -sizeof IndexingHeader + IndexingHeader::u.lengths.publicLength[t0], .outOfBounds >@@ -1668,12 +1686,19 @@ macro putByValOp(opcodeName, opcodeStruc > .opPutByValSlow: > callSlowPath(_llint_slow_path_%opcodeName%) > dispatch() >+ >+ osrExitPoint(size, dispatch) >+ > end) > end > >-putByValOp(put_by_val, OpPutByVal) >+putByValOp(put_by_val, OpPutByVal, macro (size, dispatch) >+ .osrReturnPoint: >+ getterSetterOSRExitReturnPoint(op_put_by_val, size) >+ dispatch() >+end) > >-putByValOp(put_by_val_direct, OpPutByValDirect) >+putByValOp(put_by_val_direct, OpPutByValDirect, macro (a, b) end) > > > macro llintJumpTrueOrFalseOp(opcodeName, opcodeStruct, conditionOp) >@@ -1944,10 +1969,10 @@ macro commonCallOp(opcodeName, slowPath, > storei t2, ArgumentCount + PayloadOffset[t3] > move t3, sp > prepareCall(%opcodeStruct%::Metadata::m_callLinkInfo.machineCodeTarget[t5], t2, t3, t4, JSEntryPtrTag) >- callTargetFunction(size, opcodeStruct, dispatch, %opcodeStruct%::Metadata::m_callLinkInfo.machineCodeTarget[t5], JSEntryPtrTag) >+ callTargetFunction(opcodeName, size, opcodeStruct, dispatch, %opcodeStruct%::Metadata::m_callLinkInfo.machineCodeTarget[t5], JSEntryPtrTag) > > .opCallSlow: >- slowPathForCall(size, opcodeStruct, dispatch, slowPath, prepareCall) >+ slowPathForCall(opcodeName, size, opcodeStruct, dispatch, slowPath, prepareCall) > end) > end > >Index: Source/JavaScriptCore/llint/LowLevelInterpreter.asm >=================================================================== >--- Source/JavaScriptCore/llint/LowLevelInterpreter.asm (revision 245507) >+++ Source/JavaScriptCore/llint/LowLevelInterpreter.asm (working copy) >@@ -898,12 +898,24 @@ macro traceExecution() > end > end > >-macro callTargetFunction(size, opcodeStruct, dispatch, callee, callPtrTag) >+macro callTargetFunction(opcodeName, size, opcodeStruct, dispatch, callee, callPtrTag) > if C_LOOP > 
cloopCallJSFunction callee > else > call callee, callPtrTag > end >+ >+ macro defineWide() >+ global _%opcodeName%_return_location_wide >+ _%opcodeName%_return_location_wide: >+ end >+ >+ macro defineNarrow() >+ global _%opcodeName%_return_location_narrow >+ _%opcodeName%_return_location_narrow: >+ end >+ >+ size(defineNarrow, defineWide, macro (f) f() end) > restoreStackPointerAfterCall() > dispatchAfterCall(size, opcodeStruct, dispatch) > end >@@ -973,7 +985,7 @@ macro prepareForTailCall(callee, temp1, > jmp callee, callPtrTag > end > >-macro slowPathForCall(size, opcodeStruct, dispatch, slowPath, prepareCall) >+macro slowPathForCall(opcodeName, size, opcodeStruct, dispatch, slowPath, prepareCall) > callCallSlowPath( > slowPath, > # Those are r0 and r1 >@@ -982,10 +994,26 @@ macro slowPathForCall(size, opcodeStruct > move calleeFramePtr, sp > prepareCall(callee, t2, t3, t4, SlowPathPtrTag) > .dontUpdateSP: >- callTargetFunction(size, opcodeStruct, dispatch, callee, SlowPathPtrTag) >+ callTargetFunction(%opcodeName%_slow, size, opcodeStruct, dispatch, callee, SlowPathPtrTag) > end) > end > >+macro getterSetterOSRExitReturnPoint(opName, size) >+ macro defineWide() >+ global _%opName%_return_location_wide >+ _%opName%_return_location_wide: >+ end >+ >+ macro defineNarrow() >+ global _%opName%_return_location_narrow >+ _%opName%_return_location_narrow: >+ end >+ >+ size(defineNarrow, defineWide, macro (f) f() end) >+ restoreStackPointerAfterCall() >+ loadi ArgumentCount + TagOffset[cfr], PC >+end >+ > macro arrayProfile(offset, cellAndIndexingType, metadata, scratch) > const cell = cellAndIndexingType > const indexingType = cellAndIndexingType >@@ -1687,7 +1715,7 @@ end) > callOp(construct, OpConstruct, prepareForRegularCall, macro (getu, metadata) end) > > >-macro doCallVarargs(size, opcodeStruct, dispatch, frameSlowPath, slowPath, prepareCall) >+macro doCallVarargs(opcodeName, size, opcodeStruct, dispatch, frameSlowPath, slowPath, prepareCall) > 
callSlowPath(frameSlowPath) > branchIfException(_llint_throw_from_slow_path_trampoline) > # calleeFrame in r1 >@@ -1702,19 +1730,19 @@ macro doCallVarargs(size, opcodeStruct, > subp r1, CallerFrameAndPCSize, sp > end > end >- slowPathForCall(size, opcodeStruct, dispatch, slowPath, prepareCall) >+ slowPathForCall(opcodeName, size, opcodeStruct, dispatch, slowPath, prepareCall) > end > > > llintOp(op_call_varargs, OpCallVarargs, macro (size, get, dispatch) >- doCallVarargs(size, OpCallVarargs, dispatch, _llint_slow_path_size_frame_for_varargs, _llint_slow_path_call_varargs, prepareForRegularCall) >+ doCallVarargs(op_call_varargs, size, OpCallVarargs, dispatch, _llint_slow_path_size_frame_for_varargs, _llint_slow_path_call_varargs, prepareForRegularCall) > end) > > llintOp(op_tail_call_varargs, OpTailCallVarargs, macro (size, get, dispatch) > checkSwitchToJITForEpilogue() > # We lie and perform the tail call instead of preparing it since we can't > # prepare the frame for a call opcode >- doCallVarargs(size, OpTailCallVarargs, dispatch, _llint_slow_path_size_frame_for_varargs, _llint_slow_path_tail_call_varargs, prepareForTailCall) >+ doCallVarargs(op_tail_call_varargs, size, OpTailCallVarargs, dispatch, _llint_slow_path_size_frame_for_varargs, _llint_slow_path_tail_call_varargs, prepareForTailCall) > end) > > >@@ -1722,12 +1750,12 @@ llintOp(op_tail_call_forward_arguments, > checkSwitchToJITForEpilogue() > # We lie and perform the tail call instead of preparing it since we can't > # prepare the frame for a call opcode >- doCallVarargs(size, OpTailCallForwardArguments, dispatch, _llint_slow_path_size_frame_for_forward_arguments, _llint_slow_path_tail_call_forward_arguments, prepareForTailCall) >+ doCallVarargs(op_tail_call_forward_arguments, size, OpTailCallForwardArguments, dispatch, _llint_slow_path_size_frame_for_forward_arguments, _llint_slow_path_tail_call_forward_arguments, prepareForTailCall) > end) > > > llintOp(op_construct_varargs, OpConstructVarargs, macro 
(size, get, dispatch) >- doCallVarargs(size, OpConstructVarargs, dispatch, _llint_slow_path_size_frame_for_varargs, _llint_slow_path_construct_varargs, prepareForRegularCall) >+ doCallVarargs(op_construct_varargs, size, OpConstructVarargs, dispatch, _llint_slow_path_size_frame_for_varargs, _llint_slow_path_construct_varargs, prepareForRegularCall) > end) > > >@@ -1766,6 +1794,7 @@ end) > > _llint_op_call_eval: > slowPathForCall( >+ op_call_eval_narrow, > narrow, > OpCallEval, > macro () dispatchOp(narrow, op_call_eval) end, >@@ -1774,6 +1803,7 @@ _llint_op_call_eval: > > _llint_op_call_eval_wide: > slowPathForCall( >+ op_call_eval_wide, > wide, > OpCallEval, > macro () dispatchOp(wide, op_call_eval) end, >Index: Source/JavaScriptCore/offlineasm/asm.rb >=================================================================== >--- Source/JavaScriptCore/offlineasm/asm.rb (revision 245507) >+++ Source/JavaScriptCore/offlineasm/asm.rb (working copy) >@@ -401,7 +401,7 @@ File.open(outputFlnm, "w") { > lowLevelAST = lowLevelAST.resolve(buildOffsetsMap(lowLevelAST, offsetsList)) > lowLevelAST.validate > emitCodeInConfiguration(concreteSettings, lowLevelAST, backend) { >- $currentSettings = concreteSettings >+ $currentSettings = concreteSettings > $asm.inAsm { > lowLevelAST.lower(backend) > } >Index: Source/JavaScriptCore/offlineasm/transform.rb >=================================================================== >--- Source/JavaScriptCore/offlineasm/transform.rb (revision 245507) >+++ Source/JavaScriptCore/offlineasm/transform.rb (working copy) >@@ -259,7 +259,9 @@ class Label > match > end > } >- Label.forName(codeOrigin, name, @definedInFile) >+ result = Label.forName(codeOrigin, name, @definedInFile) >+ result.setGlobal() if @global >+ result > else > self > end >@@ -272,7 +274,9 @@ class Label > raise "Unknown variable `#{var.originalName}` in substitution at #{codeOrigin}" unless mapping[var] > mapping[var].name > } >- Label.forName(codeOrigin, name, @definedInFile) >+ 
result = Label.forName(codeOrigin, name, @definedInFile) >+ result.setGlobal() if @global >+ result > else > self > end >Index: Source/JavaScriptCore/runtime/Options.h >=================================================================== >--- Source/JavaScriptCore/runtime/Options.h (revision 245507) >+++ Source/JavaScriptCore/runtime/Options.h (working copy) >@@ -520,6 +520,7 @@ constexpr bool enableWebAssemblyStreamin > v(double, validateAbstractInterpreterStateProbability, 0.5, Normal, nullptr) \ > v(optionString, dumpJITMemoryPath, nullptr, Restricted, nullptr) \ > v(double, dumpJITMemoryFlushInterval, 10, Restricted, "Maximum time in between flushes of the JIT memory dump in seconds.") \ >+ v(bool, forceOSRExitToLLInt, false, Normal, "If true, we always exit to the LLInt. If false, we exit to whatever is most convenient.") \ > > > enum OptionEquivalence { >Index: Tools/ChangeLog >=================================================================== >--- Tools/ChangeLog (revision 245507) >+++ Tools/ChangeLog (working copy) >@@ -1,3 +1,21 @@ >+2019-05-19 Saam barati <sbarati@apple.com> >+ >+ Allow OSR exit to the LLInt >+ https://bugs.webkit.org/show_bug.cgi?id=197993 >+ >+ Reviewed by NOBODY (OOPS!). >+ >+ * Scripts/run-jsc-stress-tests: >+ >+2019-05-19 Saam barati <sbarati@apple.com> >+ >+ Allow OSR exit to the LLInt >+ https://bugs.webkit.org/show_bug.cgi?id=197993 >+ >+ Reviewed by NOBODY (OOPS!). 
>+ >+ * Scripts/run-jsc-stress-tests: >+ > 2019-05-19 Darin Adler <darin@apple.com> > > Change String::number to use "shortest" instead of "fixed precision 6 digits" >Index: Tools/Scripts/run-jsc-stress-tests >=================================================================== >--- Tools/Scripts/run-jsc-stress-tests (revision 245507) >+++ Tools/Scripts/run-jsc-stress-tests (working copy) >@@ -495,6 +495,7 @@ B3O1_OPTIONS = ["--defaultB3OptLevel=1"] > B3O0_OPTIONS = ["--defaultB3OptLevel=0"] > FTL_OPTIONS = ["--useFTLJIT=true"] > PROBE_OSR_EXIT_OPTION = ["--useProbeOSRExit=true"] >+FORCE_LLINT_EXIT_OPTIONS = ["--forceOSRExitToLLInt=true"] > > require_relative "webkitruby/jsc-stress-test-writer-#{$testWriter}" > >@@ -704,7 +705,7 @@ def runFTLNoCJIT(*optionalTestSpecificOp > end > > def runFTLNoCJITB3O0(*optionalTestSpecificOptions) >- run("ftl-no-cjit-b3o0", "--useArrayAllocationProfiling=false", "--forcePolyProto=true", *(FTL_OPTIONS + NO_CJIT_OPTIONS + B3O0_OPTIONS + optionalTestSpecificOptions)) >+ run("ftl-no-cjit-b3o0", "--useArrayAllocationProfiling=false", "--forcePolyProto=true", *(FTL_OPTIONS + NO_CJIT_OPTIONS + B3O0_OPTIONS + FORCE_LLINT_EXIT_OPTIONS + optionalTestSpecificOptions)) > end > > def runFTLNoCJITValidate(*optionalTestSpecificOptions) >@@ -724,7 +725,7 @@ def runFTLNoCJITOSRValidation(*optionalT > end > > def runDFGEager(*optionalTestSpecificOptions) >- run("dfg-eager", *(EAGER_OPTIONS + COLLECT_CONTINUOUSLY_OPTIONS + PROBE_OSR_EXIT_OPTION + optionalTestSpecificOptions)) >+ run("dfg-eager", *(EAGER_OPTIONS + COLLECT_CONTINUOUSLY_OPTIONS + PROBE_OSR_EXIT_OPTION + FORCE_LLINT_EXIT_OPTIONS + optionalTestSpecificOptions)) > end > > def runDFGEagerNoCJITValidate(*optionalTestSpecificOptions) >@@ -741,7 +742,7 @@ def runFTLEagerWatchdog(*optionalTestSpe > end > > def runFTLEagerNoCJITValidate(*optionalTestSpecificOptions) >- run("ftl-eager-no-cjit", "--validateGraph=true", "--airForceIRCAllocator=true", *(FTL_OPTIONS + NO_CJIT_OPTIONS + EAGER_OPTIONS 
+ COLLECT_CONTINUOUSLY_OPTIONS + optionalTestSpecificOptions)) >+ run("ftl-eager-no-cjit", "--validateGraph=true", "--airForceIRCAllocator=true", *(FTL_OPTIONS + NO_CJIT_OPTIONS + EAGER_OPTIONS + COLLECT_CONTINUOUSLY_OPTIONS + FORCE_LLINT_EXIT_OPTIONS + optionalTestSpecificOptions)) > end > > def runFTLEagerNoCJITB3O1(*optionalTestSpecificOptions)
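
The page head shows the patch's new exit-from-setter-by-val.js stress test truncated mid-definition (at `function foo(o_,`). By analogy with the getter test that is shown in full, the sketch below illustrates the pattern such a test likely exercises: a by-val store that dispatches through a setter, so that an OSR exit taken inside the optimized code must land on the `op_put_by_val` return point this patch adds. The body of `foo` here is an assumption, not the attachment's actual code; `noInline` is a jsc test-shell intrinsic, stubbed so the sketch also runs under other engines.

```javascript
// Hypothetical reconstruction of the setter-path stress pattern.
// `noInline` exists only in the jsc shell; stub it elsewhere.
if (typeof noInline === "undefined")
    globalThis.noInline = function () {};

function field() { return "f"; }
noInline(field);

const target = (function () {
    var o = { _f: 42 };
    // Every write to o.f goes through this setter.
    o.__defineSetter__("f", function (value) { this._f = value * 100; });
    var n = 50000;
    function foo(o_, value) {
        o_[field()] = value; // by-val store that resolves to the setter
    }
    noInline(foo);
    for (var i = 0; i < n; ++i)
        foo(o, 42);
    if (o._f !== 42 * 100)
        throw "Error: bad result: " + o._f;
    return o;
})();
```

Under jsc the hot loop tiers `foo` up, and the setter call inside the optimized code is where an exit back to the LLInt (or baseline) return location is exercised; under other engines the sketch simply verifies the setter semantics.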