WebKit Bugzilla
Attachment 370136 Details for Bug 197940: [JSC] Shrink Metadata
Description: Patch
Filename: bug-197940-20190517130320.patch
MIME Type: text/plain
Creator: Yusuke Suzuki
Created: 2019-05-17 13:03:21 PDT
Size: 76.12 KB
>Subversion Revision: 245433 >diff --git a/Source/JavaScriptCore/ChangeLog b/Source/JavaScriptCore/ChangeLog >index 9fc25a80b85b4607e82217d7ee5e3064a81d2940..fe5ed7e8e0ea96d6c9edce4edcdffa3e5d91e7bf 100644 >--- a/Source/JavaScriptCore/ChangeLog >+++ b/Source/JavaScriptCore/ChangeLog >@@ -1,3 +1,151 @@ >+2019-05-17 Yusuke Suzuki <ysuzuki@apple.com> >+ >+ [JSC] Shrink Metadata >+ https://bugs.webkit.org/show_bug.cgi?id=197940 >+ >+ Reviewed by NOBODY (OOPS!). >+ >+ We get Metadata related data in Gmail and it turns out the following things. >+ >+ 1. At peak, MetadataTable eats a lot of bytes (30 MB - 50 MB, sometimes 70 MB while total Gmail footprint is 400 - 500 MB). >+ 2. After full GC happens, most of Metadata is destroyed while some are kept. Still keeps 1 MB. But after the GC, # of MetadataTable eventually grows again. >+ >+ If we shrink Metadata, we can get peak memory footprint in Gmail. >+ >+ This patch attempts to shrink Metadata. This patch first focus on low hanging fruits: it does not include the change removing OSR exit JSValue in ValueProfile. >+ This patch uses fancy bit juggling & leverages nice data types to reduce Metadata, as follows. >+ >+ 1. ValueProfile is reduced from 32 to 24. It reduces Metadata using ValueProfile. >+ 2. ArrayProfile is reduced from 16 to 12. Ditto. >+ 3. OpCall::Metadata is reduced from 88 to 64. >+ 4. OpGetById::Metadata is reduced from 56 to 40. >+ 5. OpToThis::Metadata is reduced from 48 to 32. >+ 6. OpNewObject::Metadata is reduced from 32 to 16. >+ >+ According to the gathered data, it should reduce 1-2MB in steady state in Gmail, much more in peak memory, ~1 MB in the state just after full GC. >+ It also improves RAMification by 0.3% (6 runs). >+ >+ * bytecode/ArrayProfile.cpp: >+ * bytecode/ArrayProfile.h: >+ (JSC::ArrayProfile::ArrayProfile): >+ (JSC::ArrayProfile::bytecodeOffset const): Deleted. >+ (JSC::ArrayProfile::isValid const): Deleted. >+ * bytecode/BytecodeList.rb: >+ * bytecode/CallLinkStatus.cpp: >+ (JSC::CallLinkStatus::computeFromLLInt): >+ * bytecode/CodeBlock.cpp: >+ (JSC::CodeBlock::finishCreation): >+ (JSC::CodeBlock::finalizeLLIntInlineCaches): >+ (JSC::CodeBlock::getArrayProfile): >+ (JSC::CodeBlock::updateAllPredictionsAndCountLiveness): >+ (JSC::CodeBlock::dumpValueProfiles): >+ * bytecode/CodeBlock.h: >+ (JSC::CodeBlock::valueProfileForArgument): >+ * bytecode/CodeBlockInlines.h: >+ (JSC::CodeBlock::forEachValueProfile): >+ (JSC::CodeBlock::forEachArrayProfile): >+ * bytecode/GetByIdMetadata.h: >+ We use ProtoLoad's JSObject's high bits to embed hitCountForLLIntCaching and mode, since they >+ are always zero for ProtoLoad mode. >+ >+ (): Deleted. >+ * bytecode/GetByIdStatus.cpp: >+ (JSC::GetByIdStatus::computeFromLLInt): >+ * bytecode/LLIntCallLinkInfo.h: >+ (JSC::LLIntCallLinkInfo::isLinked const): >+ (JSC::LLIntCallLinkInfo::link): >+ (JSC::LLIntCallLinkInfo::unlink): >+ (JSC::LLIntCallLinkInfo::callee const): >+ (JSC::LLIntCallLinkInfo::lastSeenCallee const): >+ (JSC::LLIntCallLinkInfo::clearLastSeenCallee): >+ (JSC::LLIntCallLinkInfo::LLIntCallLinkInfo): Deleted. >+ (JSC::LLIntCallLinkInfo::isLinked): Deleted. >+ In LLIntCallLinkInfo, we always set the same value to lastSeenCallee and callee. But later, callee can be cleared. >+ It means that we can represent them in one value + cleared flag. We encode this flag into the lowest bit of the callee cell so >+ that we can make them one pointer. We also use PackedRawSentinelNode to get some space, and embed ArrayProfile into this space >+ to get further memory reduction. 
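(Note: the LLIntCallLinkInfo paragraph above describes folding callee and lastSeenCallee into one word by tagging the low bit of the callee pointer, which is always zero for an aligned JSObject cell. The following is an illustrative, self-contained sketch of that low-bit-tagging idea, using simplified stand-in names rather than the patch's actual LLIntCallLinkInfo; it is not part of the patch.)

#include <cstdint>

// Stand-in for any pointer type with alignment >= 2, so bit 0 is always free.
struct JSObject;

class TaggedCallee {
public:
    static constexpr uintptr_t unlinkedBit = 0x1;

    // Linking stores the callee with the flag cleared; the same bits also serve
    // as the last-seen callee from now on.
    void link(JSObject* callee) { m_bits = reinterpret_cast<uintptr_t>(callee); }

    // Unlinking only sets the flag: callee() becomes null, but lastSeenCallee()
    // still returns the previously linked object.
    void unlink() { m_bits |= unlinkedBit; }

    bool isLinked() const { return !(m_bits & unlinkedBit); }

    JSObject* callee() const
    {
        return isLinked() ? reinterpret_cast<JSObject*>(m_bits) : nullptr;
    }

    JSObject* lastSeenCallee() const
    {
        return reinterpret_cast<JSObject*>(m_bits & ~unlinkedBit);
    }

    void clearLastSeenCallee() { m_bits = unlinkedBit; }

private:
    uintptr_t m_bits { unlinkedBit }; // starts out unlinked, with no last-seen callee
};

Because the flag lives in the pointer itself, unlinking does not lose the last-seen callee, which is exactly the property the LLInt relies on when it compares the incoming callee against this word.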
>+ >+ * bytecode/LLIntPrototypeLoadAdaptiveStructureWatchpoint.cpp: >+ (JSC::LLIntPrototypeLoadAdaptiveStructureWatchpoint::clearLLIntGetByIdCache): >+ * bytecode/LazyOperandValueProfile.h: >+ (JSC::LazyOperandValueProfile::LazyOperandValueProfile): >+ (JSC::LazyOperandValueProfile::key const): >+ * bytecode/MetadataTable.h: >+ (JSC::MetadataTable::buffer): >+ * bytecode/ObjectAllocationProfile.h: >+ (JSC::ObjectAllocationProfileBase::offsetOfAllocator): >+ (JSC::ObjectAllocationProfileBase::offsetOfStructure): >+ (JSC::ObjectAllocationProfileBase::clear): >+ (JSC::ObjectAllocationProfileBase::visitAggregate): >+ (JSC::ObjectAllocationProfile::setPrototype): >+ (JSC::ObjectAllocationProfileWithPrototype::prototype): >+ (JSC::ObjectAllocationProfileWithPrototype::clear): >+ (JSC::ObjectAllocationProfileWithPrototype::visitAggregate): >+ (JSC::ObjectAllocationProfileWithPrototype::setPrototype): >+ (JSC::ObjectAllocationProfile::offsetOfAllocator): Deleted. >+ (JSC::ObjectAllocationProfile::offsetOfStructure): Deleted. >+ (JSC::ObjectAllocationProfile::offsetOfInlineCapacity): Deleted. >+ (JSC::ObjectAllocationProfile::ObjectAllocationProfile): Deleted. >+ (JSC::ObjectAllocationProfile::isNull): Deleted. >+ (JSC::ObjectAllocationProfile::structure): Deleted. >+ (JSC::ObjectAllocationProfile::prototype): Deleted. >+ (JSC::ObjectAllocationProfile::inlineCapacity): Deleted. >+ (JSC::ObjectAllocationProfile::clear): Deleted. >+ (JSC::ObjectAllocationProfile::visitAggregate): Deleted. >+ * bytecode/ObjectAllocationProfileInlines.h: >+ (JSC::ObjectAllocationProfileBase<Derived>::initializeProfile): >+ (JSC::ObjectAllocationProfileBase<Derived>::possibleDefaultPropertyCount): >+ (JSC::ObjectAllocationProfile::initializeProfile): Deleted. >+ (JSC::ObjectAllocationProfile::possibleDefaultPropertyCount): Deleted. >+ OpNewObject's ObjectAllocationProfile does not need to hold prototype. So we have two versions now, ObjectAllocationProfile and ObjectAllocationProfileWithPrototype >+ to cut one pointer. We also remove inline capacity since this can be retrieved from Structure. >+ >+ * bytecode/Opcode.h: >+ * bytecode/ValueProfile.h: >+ (JSC::ValueProfileBase::ValueProfileBase): >+ (JSC::ValueProfileBase::totalNumberOfSamples const): >+ (JSC::ValueProfileBase::isSampledBefore const): >+ (JSC::ValueProfileBase::dump): >+ (JSC::ValueProfileBase::computeUpdatedPrediction): >+ (JSC::MinimalValueProfile::MinimalValueProfile): >+ (JSC::ValueProfileWithLogNumberOfBuckets::ValueProfileWithLogNumberOfBuckets): >+ (JSC::ValueProfile::ValueProfile): >+ (JSC::getValueProfileBytecodeOffset): Deleted. >+ Bytecode offset is no longer used. And sample count is not used effectively. 
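(Note: the ObjectAllocationProfile paragraph above splits the class into a CRTP base plus two derived variants, so op_new_object's profile no longer carries a prototype pointer, and inline capacity is re-read from the Structure instead of being cached. The sketch below illustrates that CRTP shape with simplified stand-in types and raw pointers instead of WriteBarrier; it is not part of the patch.)

// Simplified stand-ins for the real JSC types.
struct Structure;
struct JSObject;

template<typename Derived>
class AllocationProfileBase {
public:
    // Shared initialization path; prototype handling is delegated to the
    // derived class so that it can be a no-op.
    void initialize(Structure* structure, JSObject* prototype)
    {
        m_structure = structure;
        static_cast<Derived*>(this)->setPrototype(prototype);
    }

    Structure* structure() const { return m_structure; }

private:
    Structure* m_structure { nullptr };
};

// Used where no prototype is needed (op_new_object): one pointer smaller.
class AllocationProfile : public AllocationProfileBase<AllocationProfile> {
public:
    void setPrototype(JSObject*) { } // intentionally empty
};

// Used where the prototype is still required (op_create_this).
class AllocationProfileWithPrototype : public AllocationProfileBase<AllocationProfileWithPrototype> {
public:
    void setPrototype(JSObject* prototype) { m_prototype = prototype; }
    JSObject* prototype() const { return m_prototype; }

private:
    JSObject* m_prototype { nullptr };
};

static_assert(sizeof(AllocationProfile) < sizeof(AllocationProfileWithPrototype), "the no-prototype variant is smaller");

Because setPrototype() is resolved statically through the CRTP cast, the prototype-free variant compiles away to nothing and the shared initialization path needs no virtual dispatch.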
>+ >+ * dfg/DFGByteCodeParser.cpp: >+ (JSC::DFG::ByteCodeParser::parseBlock): >+ * dfg/DFGOperations.cpp: >+ * dfg/DFGSpeculativeJIT.cpp: >+ (JSC::DFG::SpeculativeJIT::compileCreateThis): >+ * ftl/FTLAbstractHeapRepository.h: >+ * jit/JITCall.cpp: >+ (JSC::JIT::compileSetupFrame): >+ * jit/JITCall32_64.cpp: >+ (JSC::JIT::compileSetupFrame): >+ * jit/JITOpcodes.cpp: >+ (JSC::JIT::emit_op_catch): >+ (JSC::JIT::emit_op_to_this): >+ (JSC::JIT::emit_op_create_this): >+ * jit/JITOpcodes32_64.cpp: >+ (JSC::JIT::emit_op_catch): >+ (JSC::JIT::emit_op_create_this): >+ (JSC::JIT::emit_op_to_this): >+ * jit/JITOperations.cpp: >+ * jit/JITPropertyAccess.cpp: >+ (JSC::JIT::emit_op_get_by_id): >+ * llint/LLIntSlowPaths.cpp: >+ (JSC::LLInt::setupGetByIdPrototypeCache): >+ (JSC::LLInt::LLINT_SLOW_PATH_DECL): >+ (JSC::LLInt::setUpCall): >+ * llint/LowLevelInterpreter32_64.asm: >+ * llint/LowLevelInterpreter64.asm: >+ * runtime/CommonSlowPaths.cpp: >+ (JSC::SLOW_PATH_DECL): >+ * runtime/FunctionRareData.h: >+ * tools/HeapVerifier.cpp: >+ (JSC::HeapVerifier::validateJSCell): >+ > 2019-05-16 Keith Miller <keith_miller@apple.com> > > Wasm should cage the memory base pointers in structs >diff --git a/Source/JavaScriptCore/bytecode/ArrayProfile.cpp b/Source/JavaScriptCore/bytecode/ArrayProfile.cpp >index c00aef21b6ebb619e91c746af4fb09c584fd66bb..3e150a012396b12824528a5e9f2b9837e8c7de7c 100644 >--- a/Source/JavaScriptCore/bytecode/ArrayProfile.cpp >+++ b/Source/JavaScriptCore/bytecode/ArrayProfile.cpp >@@ -33,10 +33,6 @@ > > namespace JSC { > >-#if !ASSERT_DISABLED >-const char* const ArrayProfile::s_typeName = "ArrayProfile"; >-#endif >- > // Keep in sync with the order of TypedArrayType. > const ArrayModes typedArrayModes[NumberOfTypedArrayTypesExcludingDataView] = { > Int8ArrayMode, >diff --git a/Source/JavaScriptCore/bytecode/ArrayProfile.h b/Source/JavaScriptCore/bytecode/ArrayProfile.h >index bfe38e3f55642a540d9852efec790ade889d0c39..fb348b92dd2623c0651efd2e4a8c0e8de237b201 100644 >--- a/Source/JavaScriptCore/bytecode/ArrayProfile.h >+++ b/Source/JavaScriptCore/bytecode/ArrayProfile.h >@@ -27,7 +27,6 @@ > > #include "ConcurrentJSLock.h" > #include "Structure.h" >-#include <wtf/SegmentedVector.h> > > namespace JSC { > >@@ -194,21 +193,13 @@ class ArrayProfile { > friend class CodeBlock; > > public: >- ArrayProfile() >- : ArrayProfile(std::numeric_limits<unsigned>::max()) >- { >- } >- >- explicit ArrayProfile(unsigned bytecodeOffset) >- : m_bytecodeOffset(bytecodeOffset) >- , m_mayInterceptIndexedAccesses(false) >+ explicit ArrayProfile() >+ : m_mayInterceptIndexedAccesses(false) > , m_usesOriginalArrayStructures(true) > , m_didPerformFirstRunPruning(false) > { > } > >- unsigned bytecodeOffset() const { return m_bytecodeOffset; } >- > StructureID* addressOfLastSeenStructureID() { return &m_lastSeenStructureID; } > ArrayModes* addressOfArrayModes() { return &m_observedArrayModes; } > bool* addressOfMayStoreToHole() { return &m_mayStoreToHole; } >@@ -238,16 +229,11 @@ class ArrayProfile { > CString briefDescription(const ConcurrentJSLocker&, CodeBlock*); > CString briefDescriptionWithoutUpdating(const ConcurrentJSLocker&); > >-#if !ASSERT_DISABLED >- inline bool isValid() const { return m_typeName == s_typeName; } >-#endif >- > private: > friend class LLIntOffsetsExtractor; > > static Structure* polymorphicStructure() { return static_cast<Structure*>(reinterpret_cast<void*>(1)); } > >- unsigned m_bytecodeOffset; > StructureID m_lastSeenStructureID { 0 }; > bool m_mayStoreToHole { false }; // This flag may become 
overloaded to indicate other special cases that were encountered during array access, as it depends on indexing type. Since we currently have basically just one indexing type (two variants of ArrayStorage), this flag for now just means exactly what its name implies. > bool m_outOfBounds { false }; >@@ -255,13 +241,7 @@ class ArrayProfile { > bool m_usesOriginalArrayStructures : 1; > bool m_didPerformFirstRunPruning : 1; > ArrayModes m_observedArrayModes { 0 }; >- >-#if !ASSERT_DISABLED >- static const char* const s_typeName; >- const char* m_typeName { s_typeName }; >-#endif > }; >- >-typedef SegmentedVector<ArrayProfile, 4> ArrayProfileVector; >+static_assert(sizeof(ArrayProfile) == 12); > > } // namespace JSC >diff --git a/Source/JavaScriptCore/bytecode/BytecodeList.rb b/Source/JavaScriptCore/bytecode/BytecodeList.rb >index 0695a2507b62397649a87989616755c0939377ab..cdee569c8d020b0d793ce84ae88ac5dbe1b5191e 100644 >--- a/Source/JavaScriptCore/bytecode/BytecodeList.rb >+++ b/Source/JavaScriptCore/bytecode/BytecodeList.rb >@@ -136,7 +136,7 @@ > srcDst: VirtualRegister, > }, > metadata: { >- cachedStructure: WriteBarrierBase[Structure], >+ cachedStructureID: StructureID, > toThisStatus: ToThisStatus, > profile: ValueProfile, > } >@@ -414,8 +414,6 @@ > property: unsigned, > }, > metadata: { >- mode: GetByIdMode, >- hitCountForLLIntCaching: unsigned, > modeMetadata: GetByIdModeMetadata, > profile: ValueProfile, > } >@@ -706,7 +704,6 @@ > }, > metadata: { > callLinkInfo: LLIntCallLinkInfo, >- arrayProfile: ArrayProfile, > profile: ValueProfile, > } > >@@ -719,7 +716,6 @@ > }, > metadata: { > callLinkInfo: LLIntCallLinkInfo, >- arrayProfile: ArrayProfile, > profile: ValueProfile, > } > >@@ -732,7 +728,6 @@ > }, > metadata: { > callLinkInfo: LLIntCallLinkInfo, >- arrayProfile: ArrayProfile, > profile: ValueProfile, > } > >@@ -787,7 +782,6 @@ > }, > metadata: { > callLinkInfo: LLIntCallLinkInfo, >- arrayProfile: ArrayProfile, > profile: ValueProfile, > } > >diff --git a/Source/JavaScriptCore/bytecode/CallLinkStatus.cpp b/Source/JavaScriptCore/bytecode/CallLinkStatus.cpp >index 93e33dac090ae7244ba906e9b3f53296b0fcc42c..3b7e193b07fb2885b91e257d69335705a656e0ee 100644 >--- a/Source/JavaScriptCore/bytecode/CallLinkStatus.cpp >+++ b/Source/JavaScriptCore/bytecode/CallLinkStatus.cpp >@@ -86,7 +86,7 @@ CallLinkStatus CallLinkStatus::computeFromLLInt(const ConcurrentJSLocker&, CodeB > } > > >- return CallLinkStatus(callLinkInfo->lastSeenCallee.get()); >+ return CallLinkStatus(callLinkInfo->lastSeenCallee()); > } > > CallLinkStatus CallLinkStatus::computeFor( >diff --git a/Source/JavaScriptCore/bytecode/CodeBlock.cpp b/Source/JavaScriptCore/bytecode/CodeBlock.cpp >index bc9ed129def31a37d4f78451c3a167a71b8dee09..79f06dc3253a251c229d42501a723147f743bd4a 100644 >--- a/Source/JavaScriptCore/bytecode/CodeBlock.cpp >+++ b/Source/JavaScriptCore/bytecode/CodeBlock.cpp >@@ -482,13 +482,8 @@ bool CodeBlock::finishCreation(VM& vm, ScriptExecutable* ownerExecutable, Unlink > // Bookkeep the strongly referenced module environments. 
> HashSet<JSModuleEnvironment*> stronglyReferencedModuleEnvironments; > >- auto link_profile = [&](const auto& instruction, auto /*bytecode*/, auto& metadata) { >+ auto link_profile = [&](const auto& /*instruction*/, auto /*bytecode*/, auto& /*metadata*/) { > m_numberOfNonArgumentValueProfiles++; >- metadata.m_profile.m_bytecodeOffset = instruction.offset(); >- }; >- >- auto link_arrayProfile = [&](const auto& instruction, auto /*bytecode*/, auto& metadata) { >- metadata.m_arrayProfile.m_bytecodeOffset = instruction.offset(); > }; > > auto link_objectAllocationProfile = [&](const auto& /*instruction*/, auto bytecode, auto& metadata) { >@@ -499,10 +494,6 @@ bool CodeBlock::finishCreation(VM& vm, ScriptExecutable* ownerExecutable, Unlink > metadata.m_arrayAllocationProfile.initializeIndexingMode(bytecode.m_recommendedIndexingType); > }; > >- auto link_hitCountForLLIntCaching = [&](const auto& /*instruction*/, auto /*bytecode*/, auto& metadata) { >- metadata.m_hitCountForLLIntCaching = Options::prototypeHitCountForLLIntCaching(); >- }; >- > #define LINK_FIELD(__field) \ > WTF_LAZY_JOIN(link_, __field)(instruction, bytecode, metadata); > >@@ -527,13 +518,13 @@ bool CodeBlock::finishCreation(VM& vm, ScriptExecutable* ownerExecutable, Unlink > OpcodeID opcodeID = instruction->opcodeID(); > m_bytecodeCost += opcodeLengths[opcodeID]; > switch (opcodeID) { >- LINK(OpHasIndexedProperty, arrayProfile) >+ LINK(OpHasIndexedProperty) > >- LINK(OpCallVarargs, arrayProfile, profile) >- LINK(OpTailCallVarargs, arrayProfile, profile) >- LINK(OpTailCallForwardArguments, arrayProfile, profile) >- LINK(OpConstructVarargs, arrayProfile, profile) >- LINK(OpGetByVal, arrayProfile, profile) >+ LINK(OpCallVarargs, profile) >+ LINK(OpTailCallVarargs, profile) >+ LINK(OpTailCallForwardArguments, profile) >+ LINK(OpConstructVarargs, profile) >+ LINK(OpGetByVal, profile) > > LINK(OpGetDirectPname, profile) > LINK(OpGetByIdWithThis, profile) >@@ -550,16 +541,16 @@ bool CodeBlock::finishCreation(VM& vm, ScriptExecutable* ownerExecutable, Unlink > LINK(OpBitnot, profile) > LINK(OpBitxor, profile) > >- LINK(OpGetById, profile, hitCountForLLIntCaching) >+ LINK(OpGetById, profile) > >- LINK(OpCall, profile, arrayProfile) >- LINK(OpTailCall, profile, arrayProfile) >- LINK(OpCallEval, profile, arrayProfile) >- LINK(OpConstruct, profile, arrayProfile) >+ LINK(OpCall, profile) >+ LINK(OpTailCall, profile) >+ LINK(OpCallEval, profile) >+ LINK(OpConstruct, profile) > >- LINK(OpInByVal, arrayProfile) >- LINK(OpPutByVal, arrayProfile) >- LINK(OpPutByValDirect, arrayProfile) >+ LINK(OpInByVal) >+ LINK(OpPutByVal) >+ LINK(OpPutByValDirect) > > LINK(OpNewArray) > LINK(OpNewArrayWithSize) >@@ -1208,7 +1199,7 @@ void CodeBlock::finalizeLLIntInlineCaches() > switch (curInstruction->opcodeID()) { > case op_get_by_id: { > auto& metadata = curInstruction->as<OpGetById>().metadata(this); >- if (metadata.m_mode != GetByIdMode::Default) >+ if (metadata.m_modeMetadata.mode != GetByIdMode::Default) > break; > StructureID oldStructureID = metadata.m_modeMetadata.defaultMode.structureID; > if (!oldStructureID || vm.heap.isMarked(vm.heap.structureIDTable().get(oldStructureID))) >@@ -1252,11 +1243,13 @@ void CodeBlock::finalizeLLIntInlineCaches() > break; > case op_to_this: { > auto& metadata = curInstruction->as<OpToThis>().metadata(this); >- if (!metadata.m_cachedStructure || vm.heap.isMarked(metadata.m_cachedStructure.get())) >+ if (!metadata.m_cachedStructureID || vm.heap.isMarked(vm.heap.structureIDTable().get(metadata.m_cachedStructureID))) > 
break; >- if (Options::verboseOSR()) >- dataLogF("Clearing LLInt to_this with structure %p.\n", metadata.m_cachedStructure.get()); >- metadata.m_cachedStructure.clear(); >+ if (Options::verboseOSR()) { >+ Structure* structure = !metadata.m_cachedStructureID ? nullptr : vm.heap.structureIDTable().get(metadata.m_cachedStructureID); >+ dataLogF("Clearing LLInt to_this with structure %p.\n", structure); >+ } >+ metadata.m_cachedStructureID = 0; > metadata.m_toThisStatus = merge(metadata.m_toThisStatus, ToThisClearedByGC); > break; > } >@@ -1324,13 +1317,13 @@ void CodeBlock::finalizeLLIntInlineCaches() > }); > > forEachLLIntCallLinkInfo([&](LLIntCallLinkInfo& callLinkInfo) { >- if (callLinkInfo.isLinked() && !vm.heap.isMarked(callLinkInfo.callee.get())) { >+ if (callLinkInfo.isLinked() && !vm.heap.isMarked(callLinkInfo.callee())) { > if (Options::verboseOSR()) > dataLog("Clearing LLInt call from ", *this, "\n"); > callLinkInfo.unlink(); > } >- if (!!callLinkInfo.lastSeenCallee && !vm.heap.isMarked(callLinkInfo.lastSeenCallee.get())) >- callLinkInfo.lastSeenCallee.clear(); >+ if (callLinkInfo.lastSeenCallee() && !vm.heap.isMarked(callLinkInfo.lastSeenCallee())) >+ callLinkInfo.clearLastSeenCallee(); > }); > } > >@@ -2575,17 +2568,24 @@ ArrayProfile* CodeBlock::getArrayProfile(const ConcurrentJSLocker&, unsigned byt > { > auto instruction = instructions().at(bytecodeOffset); > switch (instruction->opcodeID()) { >-#define CASE(Op) \ >+#define CASE1(Op) \ > case Op::opcodeID: \ > return &instruction->as<Op>().metadata(this).m_arrayProfile; > >- FOR_EACH_OPCODE_WITH_ARRAY_PROFILE(CASE) >-#undef CASE >+#define CASE2(Op) \ >+ case Op::opcodeID: \ >+ return &instruction->as<Op>().metadata(this).m_callLinkInfo.m_arrayProfile; >+ >+ FOR_EACH_OPCODE_WITH_ARRAY_PROFILE(CASE1) >+ FOR_EACH_OPCODE_WITH_LLINT_CALL_LINK_INFO(CASE2) >+ >+#undef CASE1 >+#undef CASE2 > > case OpGetById::opcodeID: { > auto bytecode = instruction->as<OpGetById>(); > auto& metadata = bytecode.metadata(this); >- if (metadata.m_mode == GetByIdMode::ArrayLength) >+ if (metadata.m_modeMetadata.mode == GetByIdMode::ArrayLength) > return &metadata.m_modeMetadata.arrayLengthMode.arrayProfile; > break; > } >@@ -2633,16 +2633,17 @@ void CodeBlock::updateAllPredictionsAndCountLiveness(unsigned& numberOfLiveNonAr > numberOfLiveNonArgumentValueProfiles = 0; > numberOfSamplesInProfiles = 0; // If this divided by ValueProfile::numberOfBuckets equals numberOfValueProfiles() then value profiles are full. > >- forEachValueProfile([&](ValueProfile& profile) { >+ forEachValueProfile([&](ValueProfile& profile, bool isArgument) { > unsigned numSamples = profile.totalNumberOfSamples(); >+ static_assert(ValueProfile::numberOfBuckets == 1); > if (numSamples > ValueProfile::numberOfBuckets) > numSamples = ValueProfile::numberOfBuckets; // We don't want profiles that are extremely hot to be given more weight. 
> numberOfSamplesInProfiles += numSamples; >- if (profile.m_bytecodeOffset < 0) { >+ if (isArgument) { > profile.computeUpdatedPrediction(locker); > return; > } >- if (profile.numberOfSamples() || profile.m_prediction != SpecNone) >+ if (profile.numberOfSamples() || profile.isSampledBefore()) > numberOfLiveNonArgumentValueProfiles++; > profile.computeUpdatedPrediction(locker); > }); >@@ -2650,7 +2651,7 @@ void CodeBlock::updateAllPredictionsAndCountLiveness(unsigned& numberOfLiveNonAr > if (auto* rareData = m_rareData.get()) { > for (auto& profileBucket : rareData->m_catchProfiles) { > profileBucket->forEach([&] (ValueProfileAndOperand& profile) { >- profile.m_profile.computeUpdatedPrediction(locker); >+ profile.computeUpdatedPrediction(locker); > }); > } > } >@@ -2800,12 +2801,11 @@ void CodeBlock::notifyLexicalBindingUpdate() > void CodeBlock::dumpValueProfiles() > { > dataLog("ValueProfile for ", *this, ":\n"); >- forEachValueProfile([](ValueProfile& profile) { >- if (profile.m_bytecodeOffset < 0) { >- ASSERT(profile.m_bytecodeOffset == -1); >- dataLogF(" arg = %u: ", i); >- } else >- dataLogF(" bc = %d: ", profile.m_bytecodeOffset); >+ forEachValueProfile([](ValueProfile& profile, bool isArgument) { >+ if (isArgument) >+ dataLogF(" arg: "); >+ else >+ dataLogF(" bc: "); > if (!profile.numberOfSamples() && profile.m_prediction == SpecNone) { > dataLogF("<empty>\n"); > continue; >diff --git a/Source/JavaScriptCore/bytecode/CodeBlock.h b/Source/JavaScriptCore/bytecode/CodeBlock.h >index a93c36fb918f3b30cab0c798e61e70ca591b18cf..f5dbfec694bd1472b99ac5f0eb31d0d0ad0cffdc 100644 >--- a/Source/JavaScriptCore/bytecode/CodeBlock.h >+++ b/Source/JavaScriptCore/bytecode/CodeBlock.h >@@ -478,7 +478,6 @@ class CodeBlock : public JSCell { > { > ASSERT(vm()->canUseJIT()); // This is only called from the various JIT compilers or places that first check numberOfArgumentValueProfiles before calling this. 
> ValueProfile& result = m_argumentValueProfiles[argumentIndex]; >- ASSERT(result.m_bytecodeOffset == -1); > return result; > } > >@@ -974,7 +973,7 @@ class CodeBlock : public JSCell { > VM* m_vm; > > const void* m_instructionsRawPointer { nullptr }; >- SentinelLinkedList<LLIntCallLinkInfo, BasicRawSentinelNode<LLIntCallLinkInfo>> m_incomingLLIntCalls; >+ SentinelLinkedList<LLIntCallLinkInfo, PackedRawSentinelNode<LLIntCallLinkInfo>> m_incomingLLIntCalls; > StructureWatchpointMap m_llintGetByIdWatchpointMap; > RefPtr<JITCode> m_jitCode; > #if ENABLE(JIT) >diff --git a/Source/JavaScriptCore/bytecode/CodeBlockInlines.h b/Source/JavaScriptCore/bytecode/CodeBlockInlines.h >index f00b3978732e4f67dd2e55610477fdc2f7a5cacb..22e76b185eb9eba97de9bb081d64186bf54ce4aa 100644 >--- a/Source/JavaScriptCore/bytecode/CodeBlockInlines.h >+++ b/Source/JavaScriptCore/bytecode/CodeBlockInlines.h >@@ -35,11 +35,11 @@ template<typename Functor> > void CodeBlock::forEachValueProfile(const Functor& func) > { > for (unsigned i = 0; i < numberOfArgumentValueProfiles(); ++i) >- func(valueProfileForArgument(i)); >+ func(valueProfileForArgument(i), true); > > if (m_metadata) { > #define VISIT(__op) \ >- m_metadata->forEach<__op>([&] (auto& metadata) { func(metadata.m_profile); }); >+ m_metadata->forEach<__op>([&] (auto& metadata) { func(metadata.m_profile, false); }); > > FOR_EACH_OPCODE_WITH_VALUE_PROFILE(VISIT) > >@@ -53,16 +53,21 @@ void CodeBlock::forEachArrayProfile(const Functor& func) > { > if (m_metadata) { > m_metadata->forEach<OpGetById>([&] (auto& metadata) { >- if (metadata.m_mode == GetByIdMode::ArrayLength) >+ if (metadata.m_modeMetadata.mode == GetByIdMode::ArrayLength) > func(metadata.m_modeMetadata.arrayLengthMode.arrayProfile); > }); > >-#define VISIT(__op) \ >+#define VISIT1(__op) \ > m_metadata->forEach<__op>([&] (auto& metadata) { func(metadata.m_arrayProfile); }); > >- FOR_EACH_OPCODE_WITH_ARRAY_PROFILE(VISIT) >+#define VISIT2(__op) \ >+ m_metadata->forEach<__op>([&] (auto& metadata) { func(metadata.m_callLinkInfo.m_arrayProfile); }); > >-#undef VISIT >+ FOR_EACH_OPCODE_WITH_ARRAY_PROFILE(VISIT1) >+ FOR_EACH_OPCODE_WITH_LLINT_CALL_LINK_INFO(VISIT2) >+ >+#undef VISIT1 >+#undef VISIT2 > } > } > >diff --git a/Source/JavaScriptCore/bytecode/GetByIdMetadata.h b/Source/JavaScriptCore/bytecode/GetByIdMetadata.h >index dc434e6c4b7fd4ed847fcb3f37c12a3751051e9d..59c43fec406e9db95f630b1a78a530f5ce166ab4 100644 >--- a/Source/JavaScriptCore/bytecode/GetByIdMetadata.h >+++ b/Source/JavaScriptCore/bytecode/GetByIdMetadata.h >@@ -28,34 +28,79 @@ > namespace JSC { > > enum class GetByIdMode : uint8_t { >- Default = 0, >- Unset = 1, >- ProtoLoad = 2, >+ ProtoLoad = 0, // This must be zero to reuse the higher bits of the pointer as this ProtoLoad mode. >+ Default = 1, >+ Unset = 2, > ArrayLength = 3, > }; > >+// In 64bit Little endian architecture, this union shares ProtoLoad's JSObject* cachedSlot with "higCountForLLIntCaching" and "mode". >+// This is possible because these values must be zero if we use ProtoLoad mode. 
>+#if CPU(LITTLE_ENDIAN) && CPU(ADDRESS64) > union GetByIdModeMetadata { >+#else >+struct GetByIdModeMetadata { >+#endif > GetByIdModeMetadata() >- { } >+ { >+ defaultMode.structureID = 0; >+ defaultMode.cachedOffset = 0; >+ defaultMode.padding1 = 0; >+ mode = GetByIdMode::Default; >+ hitCountForLLIntCaching = Options::prototypeHitCountForLLIntCaching(); >+ } > > struct Default { > StructureID structureID; > PropertyOffset cachedOffset; >- } defaultMode; >+ unsigned padding1; >+ }; >+ static_assert(sizeof(Default) == 12); > > struct Unset { > StructureID structureID; >- } unsetMode; >+ unsigned padding1; >+ unsigned padding2; >+ }; >+ static_assert(sizeof(Unset) == 12); >+ >+ struct ArrayLength { >+ ArrayProfile arrayProfile; >+ }; >+ static_assert(sizeof(ArrayLength) == 12); > > struct ProtoLoad { > StructureID structureID; > PropertyOffset cachedOffset; > JSObject* cachedSlot; >- } protoLoadMode; >+ }; > >- struct ArrayLength { >- ArrayProfile arrayProfile; >- } arrayLengthMode; >+#if CPU(LITTLE_ENDIAN) && CPU(ADDRESS64) >+ struct { >+ uint32_t padding1; >+ uint32_t padding2; >+ uint32_t padding3; >+ uint16_t padding4; >+ GetByIdMode mode; >+ uint8_t hitCountForLLIntCaching; // This must be zero when we use ProtoLoad mode. >+ }; >+ Default defaultMode; >+ Unset unsetMode; >+ ArrayLength arrayLengthMode; >+ ProtoLoad protoLoadMode; >+#else >+ union { >+ Default defaultMode; >+ Unset unsetMode; >+ ArrayLength arrayLengthMode; >+ ProtoLoad protoLoadMode; >+ }; >+ GetByIdMode mode; >+ uint8_t hitCountForLLIntCaching; >+#endif > }; >+#if CPU(LITTLE_ENDIAN) && CPU(ADDRESS64) >+static_assert(sizeof(GetByIdModeMetadata) == 16); >+#endif > > } // namespace JSC >diff --git a/Source/JavaScriptCore/bytecode/GetByIdStatus.cpp b/Source/JavaScriptCore/bytecode/GetByIdStatus.cpp >index 6e4e8bfdbe175c06bf982e8340ec642c39c20186..db9ad160ac28130f39841ff3bfeb32b683ddead6 100644 >--- a/Source/JavaScriptCore/bytecode/GetByIdStatus.cpp >+++ b/Source/JavaScriptCore/bytecode/GetByIdStatus.cpp >@@ -64,7 +64,7 @@ GetByIdStatus GetByIdStatus::computeFromLLInt(CodeBlock* profiledBlock, unsigned > auto& metadata = instruction->as<OpGetById>().metadata(profiledBlock); > // FIXME: We should not just bail if we see a get_by_id_proto_load. 
> // https://bugs.webkit.org/show_bug.cgi?id=158039 >- if (metadata.m_mode != GetByIdMode::Default) >+ if (metadata.m_modeMetadata.mode != GetByIdMode::Default) > return GetByIdStatus(NoInformation, false); > structureID = metadata.m_modeMetadata.defaultMode.structureID; > break; >diff --git a/Source/JavaScriptCore/bytecode/LLIntCallLinkInfo.h b/Source/JavaScriptCore/bytecode/LLIntCallLinkInfo.h >index 84b5c11c58665d32bbd5c220d9a91838284a1e16..7c27cdf66e7a9bddaa2f6abea2628fcb46939f60 100644 >--- a/Source/JavaScriptCore/bytecode/LLIntCallLinkInfo.h >+++ b/Source/JavaScriptCore/bytecode/LLIntCallLinkInfo.h >@@ -34,10 +34,13 @@ namespace JSC { > > struct Instruction; > >-struct LLIntCallLinkInfo : public BasicRawSentinelNode<LLIntCallLinkInfo> { >- LLIntCallLinkInfo() >- { >- } >+class LLIntCallLinkInfo : public PackedRawSentinelNode<LLIntCallLinkInfo> { >+public: >+ friend class LLIntOffsetsExtractor; >+ >+ static constexpr uintptr_t unlinkedBit = 0x1; >+ >+ LLIntCallLinkInfo() = default; > > ~LLIntCallLinkInfo() > { >@@ -45,19 +48,49 @@ struct LLIntCallLinkInfo : public BasicRawSentinelNode<LLIntCallLinkInfo> { > remove(); > } > >- bool isLinked() { return !!callee; } >+ bool isLinked() const { return !(m_calleeOrLastSeenCalleeWithLinkBit & unlinkedBit); } > >+ >+ void link(VM& vm, JSCell* owner, JSObject* callee, MacroAssemblerCodePtr<JSEntryPtrTag> codePtr) >+ { >+ if (isOnList()) >+ remove(); >+ m_calleeOrLastSeenCalleeWithLinkBit = bitwise_cast<uintptr_t>(callee); >+ vm.heap.writeBarrier(owner, callee); >+ m_machineCodeTarget = codePtr; >+ } >+ > void unlink() > { >- callee.clear(); >- machineCodeTarget = MacroAssemblerCodePtr<JSEntryPtrTag>(); >+ // Make link invalidated. It works because LLInt tests the given callee with this pointer. But it is still valid as lastSeenCallee! >+ m_calleeOrLastSeenCalleeWithLinkBit |= unlinkedBit; >+ m_machineCodeTarget = MacroAssemblerCodePtr<JSEntryPtrTag>(); > if (isOnList()) > remove(); > } >- >- WriteBarrier<JSObject> callee; >- WriteBarrier<JSObject> lastSeenCallee; >- MacroAssemblerCodePtr<JSEntryPtrTag> machineCodeTarget; >+ >+ JSObject* callee() const >+ { >+ if (!isLinked()) >+ return nullptr; >+ return bitwise_cast<JSObject*>(m_calleeOrLastSeenCalleeWithLinkBit); >+ } >+ >+ JSObject* lastSeenCallee() const >+ { >+ return bitwise_cast<JSObject*>(m_calleeOrLastSeenCalleeWithLinkBit & ~unlinkedBit); >+ } >+ >+ void clearLastSeenCallee() >+ { >+ m_calleeOrLastSeenCalleeWithLinkBit = unlinkedBit; >+ } >+ >+ ArrayProfile m_arrayProfile; >+ >+private: >+ uintptr_t m_calleeOrLastSeenCalleeWithLinkBit { unlinkedBit }; >+ MacroAssemblerCodePtr<JSEntryPtrTag> m_machineCodeTarget; > }; > > } // namespace JSC >diff --git a/Source/JavaScriptCore/bytecode/LLIntPrototypeLoadAdaptiveStructureWatchpoint.cpp b/Source/JavaScriptCore/bytecode/LLIntPrototypeLoadAdaptiveStructureWatchpoint.cpp >index 0ef6a55c9c719603f1a2e38b580566f0a2467be1..57abb9a4c03f2a9ff51ecedcdbf214e84a9408d4 100644 >--- a/Source/JavaScriptCore/bytecode/LLIntPrototypeLoadAdaptiveStructureWatchpoint.cpp >+++ b/Source/JavaScriptCore/bytecode/LLIntPrototypeLoadAdaptiveStructureWatchpoint.cpp >@@ -65,7 +65,8 @@ void LLIntPrototypeLoadAdaptiveStructureWatchpoint::fireInternal(VM& vm, const F > > void LLIntPrototypeLoadAdaptiveStructureWatchpoint::clearLLIntGetByIdCache(OpGetById::Metadata& metadata) > { >- metadata.m_mode = GetByIdMode::Default; >+ // Keep hitCountForLLIntCaching value. 
>+ metadata.m_modeMetadata.mode = GetByIdMode::Default; > metadata.m_modeMetadata.defaultMode.cachedOffset = 0; > metadata.m_modeMetadata.defaultMode.structureID = 0; > } >diff --git a/Source/JavaScriptCore/bytecode/LazyOperandValueProfile.h b/Source/JavaScriptCore/bytecode/LazyOperandValueProfile.h >index 9c3b068427034b089ba098e822ffd0e16ef35096..8f60eb0f40ac3b16d6ab03daddc622889ceb4c2d 100644 >--- a/Source/JavaScriptCore/bytecode/LazyOperandValueProfile.h >+++ b/Source/JavaScriptCore/bytecode/LazyOperandValueProfile.h >@@ -129,17 +129,18 @@ struct LazyOperandValueProfile : public MinimalValueProfile { > } > > explicit LazyOperandValueProfile(const LazyOperandValueProfileKey& key) >- : MinimalValueProfile(key.bytecodeOffset()) >- , m_operand(key.operand()) >+ : MinimalValueProfile() >+ , m_key(key) > { > } > > LazyOperandValueProfileKey key() const > { >- return LazyOperandValueProfileKey(m_bytecodeOffset, m_operand); >+ return m_key; > } > > VirtualRegister m_operand; >+ LazyOperandValueProfileKey m_key; > > typedef SegmentedVector<LazyOperandValueProfile, 8> List; > }; >diff --git a/Source/JavaScriptCore/bytecode/MetadataTable.h b/Source/JavaScriptCore/bytecode/MetadataTable.h >index a5d4121b1d41d161bf26542e09286cfbd99395b8..954d0ba16d78210397bcfff94868983ba202c46a 100644 >--- a/Source/JavaScriptCore/bytecode/MetadataTable.h >+++ b/Source/JavaScriptCore/bytecode/MetadataTable.h >@@ -88,11 +88,6 @@ class MetadataTable { > return refCount() == 1; > } > >- UnlinkedMetadataTable::Offset* buffer() >- { >- return bitwise_cast<UnlinkedMetadataTable::Offset*>(this); >- } >- > private: > MetadataTable(UnlinkedMetadataTable&); > >@@ -101,6 +96,11 @@ class MetadataTable { > return *bitwise_cast<UnlinkedMetadataTable::LinkingData*>((bitwise_cast<uint8_t*>(this) - sizeof(UnlinkedMetadataTable::LinkingData))); > } > >+ UnlinkedMetadataTable::Offset* buffer() >+ { >+ return bitwise_cast<UnlinkedMetadataTable::Offset*>(this); >+ } >+ > ALWAYS_INLINE uint8_t* getImpl(unsigned i) > { > return bitwise_cast<uint8_t*>(this) + buffer()[i]; >diff --git a/Source/JavaScriptCore/bytecode/ObjectAllocationProfile.h b/Source/JavaScriptCore/bytecode/ObjectAllocationProfile.h >index a70753a7b9964dc8c5af6183f3802e0564e5de8c..f5679c93e5ce0d4c5bb148ae35c728e532333298 100644 >--- a/Source/JavaScriptCore/bytecode/ObjectAllocationProfile.h >+++ b/Source/JavaScriptCore/bytecode/ObjectAllocationProfile.h >@@ -35,17 +35,14 @@ namespace JSC { > > class FunctionRareData; > >-class ObjectAllocationProfile { >+template<typename Derived> >+class ObjectAllocationProfileBase { > friend class LLIntOffsetsExtractor; > public: >- static ptrdiff_t offsetOfAllocator() { return OBJECT_OFFSETOF(ObjectAllocationProfile, m_allocator); } >- static ptrdiff_t offsetOfStructure() { return OBJECT_OFFSETOF(ObjectAllocationProfile, m_structure); } >- static ptrdiff_t offsetOfInlineCapacity() { return OBJECT_OFFSETOF(ObjectAllocationProfile, m_inlineCapacity); } >+ static ptrdiff_t offsetOfAllocator() { return OBJECT_OFFSETOF(ObjectAllocationProfileBase, m_allocator); } >+ static ptrdiff_t offsetOfStructure() { return OBJECT_OFFSETOF(ObjectAllocationProfileBase, m_structure); } > >- ObjectAllocationProfile() >- : m_inlineCapacity(0) >- { >- } >+ ObjectAllocationProfileBase() = default; > > bool isNull() { return !m_structure; } > >@@ -58,37 +55,74 @@ class ObjectAllocationProfile { > WTF::loadLoadFence(); > return structure; > } >+ >+protected: >+ void clear() >+ { >+ m_allocator = Allocator(); >+ m_structure.clear(); >+ ASSERT(isNull()); >+ } >+ >+ 
void visitAggregate(SlotVisitor& visitor) >+ { >+ visitor.append(m_structure); >+ } >+ >+private: >+ unsigned possibleDefaultPropertyCount(VM&, JSObject* prototype); >+ >+ Allocator m_allocator; // Precomputed to make things easier for generated code. >+ WriteBarrier<Structure> m_structure; >+}; >+ >+class ObjectAllocationProfile : public ObjectAllocationProfileBase<ObjectAllocationProfile> { >+public: >+ using Base = ObjectAllocationProfileBase<ObjectAllocationProfile>; >+ >+ ObjectAllocationProfile() = default; >+ >+ using Base::clear; >+ using Base::visitAggregate; >+ >+ void setPrototype(VM&, JSCell*, JSObject*) { } >+}; >+ >+class ObjectAllocationProfileWithPrototype : public ObjectAllocationProfileBase<ObjectAllocationProfileWithPrototype> { >+public: >+ using Base = ObjectAllocationProfileBase<ObjectAllocationProfileWithPrototype>; >+ >+ ObjectAllocationProfileWithPrototype() = default; >+ > JSObject* prototype() > { > JSObject* prototype = m_prototype.get(); > WTF::loadLoadFence(); > return prototype; > } >- unsigned inlineCapacity() { return m_inlineCapacity; } >- > > void clear() > { >- m_allocator = Allocator(); >- m_structure.clear(); >+ Base::clear(); > m_prototype.clear(); >- m_inlineCapacity = 0; > ASSERT(isNull()); > } > > void visitAggregate(SlotVisitor& visitor) > { >- visitor.append(m_structure); >+ Base::visitAggregate(visitor); > visitor.append(m_prototype); > } > >-private: >- unsigned possibleDefaultPropertyCount(VM&, JSObject* prototype); >+ void setPrototype(VM& vm, JSCell* owner, JSObject* object) >+ { >+ m_prototype.set(vm, owner, object); >+ } > >- Allocator m_allocator; // Precomputed to make things easier for generated code. >- WriteBarrier<Structure> m_structure; >+private: > WriteBarrier<JSObject> m_prototype; >- unsigned m_inlineCapacity; > }; > >+ >+ > } // namespace JSC >diff --git a/Source/JavaScriptCore/bytecode/ObjectAllocationProfileInlines.h b/Source/JavaScriptCore/bytecode/ObjectAllocationProfileInlines.h >index 4999121b1660643c8597e320a0a625ddc2c5d682..f11c4cc4f5f43aa6f734785440adf7a0be818934 100644 >--- a/Source/JavaScriptCore/bytecode/ObjectAllocationProfileInlines.h >+++ b/Source/JavaScriptCore/bytecode/ObjectAllocationProfileInlines.h >@@ -31,12 +31,11 @@ > > namespace JSC { > >-ALWAYS_INLINE void ObjectAllocationProfile::initializeProfile(VM& vm, JSGlobalObject* globalObject, JSCell* owner, JSObject* prototype, unsigned inferredInlineCapacity, JSFunction* constructor, FunctionRareData* functionRareData) >+template<typename Derived> >+ALWAYS_INLINE void ObjectAllocationProfileBase<Derived>::initializeProfile(VM& vm, JSGlobalObject* globalObject, JSCell* owner, JSObject* prototype, unsigned inferredInlineCapacity, JSFunction* constructor, FunctionRareData* functionRareData) > { > ASSERT(!m_allocator); > ASSERT(!m_structure); >- ASSERT(!m_prototype); >- ASSERT(!m_inlineCapacity); > > // FIXME: Teach create_this's fast path how to allocate poly > // proto objects: https://bugs.webkit.org/show_bug.cgi?id=177517 >@@ -56,8 +55,7 @@ ALWAYS_INLINE void ObjectAllocationProfile::initializeProfile(VM& vm, JSGlobalOb > RELEASE_ASSERT(structure->typeInfo().type() == FinalObjectType); > m_allocator = Allocator(); > m_structure.set(vm, owner, structure); >- m_prototype.set(vm, owner, prototype); >- m_inlineCapacity = structure->inlineCapacity(); >+ static_cast<Derived*>(this)->setPrototype(vm, owner, prototype); > return; > } > >@@ -138,11 +136,11 @@ ALWAYS_INLINE void ObjectAllocationProfile::initializeProfile(VM& vm, JSGlobalOb > WTF::storeStoreFence(); > > 
m_structure.set(vm, owner, structure); >- m_prototype.set(vm, owner, prototype); >- m_inlineCapacity = inlineCapacity; >+ static_cast<Derived*>(this)->setPrototype(vm, owner, prototype); > } > >-ALWAYS_INLINE unsigned ObjectAllocationProfile::possibleDefaultPropertyCount(VM& vm, JSObject* prototype) >+template<typename Derived> >+ALWAYS_INLINE unsigned ObjectAllocationProfileBase<Derived>::possibleDefaultPropertyCount(VM& vm, JSObject* prototype) > { > if (prototype == prototype->globalObject(vm)->objectPrototype()) > return 0; >diff --git a/Source/JavaScriptCore/bytecode/Opcode.h b/Source/JavaScriptCore/bytecode/Opcode.h >index 8ac603e3cc5f7b193544bb0cff0848ddb22074ce..4427dd98a40716f9e38c469e72cf0b60e9cc5995 100644 >--- a/Source/JavaScriptCore/bytecode/Opcode.h >+++ b/Source/JavaScriptCore/bytecode/Opcode.h >@@ -111,10 +111,6 @@ extern const unsigned opcodeLengths[]; > macro(OpTailCallForwardArguments) \ > macro(OpConstructVarargs) \ > macro(OpGetByVal) \ >- macro(OpCall) \ >- macro(OpTailCall) \ >- macro(OpCallEval) \ >- macro(OpConstruct) \ > macro(OpInByVal) \ > macro(OpPutByVal) \ > macro(OpPutByValDirect) \ >diff --git a/Source/JavaScriptCore/bytecode/ValueProfile.h b/Source/JavaScriptCore/bytecode/ValueProfile.h >index fa0d3b07a3c446bd1a0030e5124184e3b44fae95..4883b4b3d21bbfc8ed66cfb947590d7095ab61ef 100644 >--- a/Source/JavaScriptCore/bytecode/ValueProfile.h >+++ b/Source/JavaScriptCore/bytecode/ValueProfile.h >@@ -44,14 +44,6 @@ struct ValueProfileBase { > static const unsigned totalNumberOfBuckets = numberOfBuckets + numberOfSpecFailBuckets; > > ValueProfileBase() >- : m_bytecodeOffset(-1) >- { >- for (unsigned i = 0; i < totalNumberOfBuckets; ++i) >- m_buckets[i] = JSValue::encode(JSValue()); >- } >- >- ValueProfileBase(int bytecodeOffset) >- : m_bytecodeOffset(bytecodeOffset) > { > for (unsigned i = 0; i < totalNumberOfBuckets; ++i) > m_buckets[i] = JSValue::encode(JSValue()); >@@ -86,8 +78,10 @@ struct ValueProfileBase { > > unsigned totalNumberOfSamples() const > { >- return numberOfSamples() + m_numberOfSamplesInPrediction; >+ return numberOfSamples() + isSampledBefore(); > } >+ >+ bool isSampledBefore() const { return m_prediction != SpecNone; } > > bool isLive() const > { >@@ -109,7 +103,7 @@ struct ValueProfileBase { > > void dump(PrintStream& out) > { >- out.print("samples = ", totalNumberOfSamples(), " prediction = ", SpeculationDump(m_prediction)); >+ out.print("sampled before = ", isSampledBefore(), " live samples = ", numberOfSamples(), " prediction = ", SpeculationDump(m_prediction)); > bool first = true; > for (unsigned i = 0; i < totalNumberOfBuckets; ++i) { > JSValue value = JSValue::decode(m_buckets[i]); >@@ -133,7 +127,6 @@ struct ValueProfileBase { > if (!value) > continue; > >- m_numberOfSamplesInPrediction++; > mergeSpeculation(m_prediction, speculationFromValue(value)); > > m_buckets[i] = JSValue::encode(JSValue()); >@@ -142,17 +135,13 @@ struct ValueProfileBase { > return m_prediction; > } > >- int m_bytecodeOffset; // -1 for prologue >- unsigned m_numberOfSamplesInPrediction { 0 }; >- >- SpeculatedType m_prediction { SpecNone }; >- > EncodedJSValue m_buckets[totalNumberOfBuckets]; >+ >+ SpeculatedType m_prediction { SpecNone }; > }; > > struct MinimalValueProfile : public ValueProfileBase<0> { > MinimalValueProfile(): ValueProfileBase<0>() { } >- MinimalValueProfile(int bytecodeOffset): ValueProfileBase<0>(bytecodeOffset) { } > }; > > template<unsigned logNumberOfBucketsArgument> >@@ -163,23 +152,12 @@ struct ValueProfileWithLogNumberOfBuckets : public 
ValueProfileBase<1 << logNumb > : ValueProfileBase<1 << logNumberOfBucketsArgument>() > { > } >- ValueProfileWithLogNumberOfBuckets(int bytecodeOffset) >- : ValueProfileBase<1 << logNumberOfBucketsArgument>(bytecodeOffset) >- { >- } > }; > > struct ValueProfile : public ValueProfileWithLogNumberOfBuckets<0> { > ValueProfile() : ValueProfileWithLogNumberOfBuckets<0>() { } >- ValueProfile(int bytecodeOffset) : ValueProfileWithLogNumberOfBuckets<0>(bytecodeOffset) { } > }; > >-template<typename T> >-inline int getValueProfileBytecodeOffset(T* valueProfile) >-{ >- return valueProfile->m_bytecodeOffset; >-} >- > // This is a mini value profile to catch pathologies. It is a counter that gets > // incremented when we take the slow path on any instruction. > struct RareCaseProfile { >@@ -198,8 +176,7 @@ inline int getRareCaseProfileBytecodeOffset(RareCaseProfile* rareCaseProfile) > return rareCaseProfile->m_bytecodeOffset; > } > >-struct ValueProfileAndOperand { >- ValueProfile m_profile; >+struct ValueProfileAndOperand : public ValueProfile { > int m_operand; > }; > >diff --git a/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp b/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp >index 6505ad57934702119c58fb6787d9a6cd8834d3c5..b6865f7acac7120d8f16a5b4a645706bed6a0ccf 100644 >--- a/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp >+++ b/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp >@@ -4795,7 +4795,10 @@ void ByteCodeParser::parseBlock(unsigned limit) > case op_to_this: { > Node* op1 = getThis(); > auto& metadata = currentInstruction->as<OpToThis>().metadata(codeBlock); >- Structure* cachedStructure = metadata.m_cachedStructure.get(); >+ StructureID cachedStructureID = metadata.m_cachedStructureID; >+ Structure* cachedStructure = nullptr; >+ if (cachedStructureID) >+ cachedStructure = m_vm->heap.structureIDTable().get(cachedStructureID); > if (metadata.m_toThisStatus != ToThisOK > || !cachedStructure > || cachedStructure->classInfo()->methodTable.toThis != JSObject::info()->methodTable.toThis >@@ -6011,7 +6014,7 @@ void ByteCodeParser::parseBlock(unsigned limit) > > buffer->forEach([&] (ValueProfileAndOperand& profile) { > VirtualRegister operand(profile.m_operand); >- SpeculatedType prediction = profile.m_profile.computeUpdatedPrediction(locker); >+ SpeculatedType prediction = profile.computeUpdatedPrediction(locker); > if (operand.isLocal()) > localPredictions.append(prediction); > else { >diff --git a/Source/JavaScriptCore/dfg/DFGOperations.cpp b/Source/JavaScriptCore/dfg/DFGOperations.cpp >index ee53d61d7f7546781a9046760f2a5a9ef38b6ed5..25c4c090d6001e6bc11f40b646b9df676b6c0418 100644 >--- a/Source/JavaScriptCore/dfg/DFGOperations.cpp >+++ b/Source/JavaScriptCore/dfg/DFGOperations.cpp >@@ -355,7 +355,7 @@ JSCell* JIT_OPERATION operationCreateThis(ExecState* exec, JSObject* constructor > if (constructor->type() == JSFunctionType && jsCast<JSFunction*>(constructor)->canUseAllocationProfile()) { > auto rareData = jsCast<JSFunction*>(constructor)->ensureRareDataAndAllocationProfile(exec, inlineCapacity); > scope.releaseAssertNoException(); >- ObjectAllocationProfile* allocationProfile = rareData->objectAllocationProfile(); >+ ObjectAllocationProfileWithPrototype* allocationProfile = rareData->objectAllocationProfile(); > Structure* structure = allocationProfile->structure(); > JSObject* result = constructEmptyObject(exec, structure); > if (structure->hasPolyProto()) { >diff --git a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp >index 
debd03eb1ec685d07c0a75b1e8d3fa4a9e5c6752..d0410b35d39017f80db5905ed492d3e33868bc5c 100644 >--- a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp >+++ b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp >@@ -12565,14 +12565,13 @@ void SpeculativeJIT::compileCreateThis(Node* node) > slowPath.append(m_jit.branchIfNotFunction(calleeGPR)); > m_jit.loadPtr(JITCompiler::Address(calleeGPR, JSFunction::offsetOfRareData()), rareDataGPR); > slowPath.append(m_jit.branchTestPtr(MacroAssembler::Zero, rareDataGPR)); >- m_jit.loadPtr(JITCompiler::Address(rareDataGPR, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfile::offsetOfAllocator()), allocatorGPR); >- m_jit.loadPtr(JITCompiler::Address(rareDataGPR, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfile::offsetOfStructure()), structureGPR); >+ m_jit.loadPtr(JITCompiler::Address(rareDataGPR, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfileWithPrototype::offsetOfAllocator()), allocatorGPR); >+ m_jit.loadPtr(JITCompiler::Address(rareDataGPR, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfileWithPrototype::offsetOfStructure()), structureGPR); > > auto butterfly = TrustedImmPtr(nullptr); > emitAllocateJSObject(resultGPR, JITAllocator::variable(), allocatorGPR, structureGPR, butterfly, scratchGPR, slowPath); > >- m_jit.loadPtr(JITCompiler::Address(calleeGPR, JSFunction::offsetOfRareData()), rareDataGPR); >- m_jit.load32(JITCompiler::Address(rareDataGPR, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfile::offsetOfInlineCapacity()), inlineCapacityGPR); >+ m_jit.load8(JITCompiler::Address(structureGPR, Structure::inlineCapacityOffset()), inlineCapacityGPR); > m_jit.emitInitializeInlineStorage(resultGPR, inlineCapacityGPR); > m_jit.mutatorFence(*m_jit.vm()); > >diff --git a/Source/JavaScriptCore/ftl/FTLAbstractHeapRepository.h b/Source/JavaScriptCore/ftl/FTLAbstractHeapRepository.h >index 89995e6748d3f242a5e0205ca0cae16458397e30..70c993a718a771f161dcf2be34212f0918eb9f0c 100644 >--- a/Source/JavaScriptCore/ftl/FTLAbstractHeapRepository.h >+++ b/Source/JavaScriptCore/ftl/FTLAbstractHeapRepository.h >@@ -57,8 +57,8 @@ namespace JSC { namespace FTL { > macro(DirectArguments_minCapacity, DirectArguments::offsetOfMinCapacity()) \ > macro(DirectArguments_mappedArguments, DirectArguments::offsetOfMappedArguments()) \ > macro(DirectArguments_modifiedArgumentsDescriptor, DirectArguments::offsetOfModifiedArgumentsDescriptor()) \ >- macro(FunctionRareData_allocator, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfile::offsetOfAllocator()) \ >- macro(FunctionRareData_structure, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfile::offsetOfStructure()) \ >+ macro(FunctionRareData_allocator, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfileWithPrototype::offsetOfAllocator()) \ >+ macro(FunctionRareData_structure, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfileWithPrototype::offsetOfStructure()) \ > macro(GetterSetter_getter, GetterSetter::offsetOfGetter()) \ > macro(GetterSetter_setter, GetterSetter::offsetOfSetter()) \ > macro(JSArrayBufferView_length, JSArrayBufferView::offsetOfLength()) \ >diff --git a/Source/JavaScriptCore/jit/JITCall.cpp b/Source/JavaScriptCore/jit/JITCall.cpp >index afc4b2953d3225de08616727473cad7cec3dc259..c064387bea60c493873a1e91e55d4d559f7ac75d 100644 >--- a/Source/JavaScriptCore/jit/JITCall.cpp >+++ 
b/Source/JavaScriptCore/jit/JITCall.cpp >@@ -69,7 +69,7 @@ JIT::compileSetupFrame(const Op& bytecode, CallLinkInfo*) > emitGetVirtualRegister(registerOffset + CallFrame::argumentOffsetIncludingThis(0), regT0); > Jump done = branchIfNotCell(regT0); > load32(Address(regT0, JSCell::structureIDOffset()), regT0); >- store32(regT0, metadata.m_arrayProfile.addressOfLastSeenStructureID()); >+ store32(regT0, metadata.m_callLinkInfo.m_arrayProfile.addressOfLastSeenStructureID()); > done.link(this); > } > >diff --git a/Source/JavaScriptCore/jit/JITCall32_64.cpp b/Source/JavaScriptCore/jit/JITCall32_64.cpp >index b42e997aab63dbb152dae242d7b94f0e34e93697..af608f0310d18eb2fbbe2dd684b2e5cefd83b752 100644 >--- a/Source/JavaScriptCore/jit/JITCall32_64.cpp >+++ b/Source/JavaScriptCore/jit/JITCall32_64.cpp >@@ -160,7 +160,7 @@ JIT::compileSetupFrame(const Op& bytecode, CallLinkInfo*) > emitLoad(registerOffset + CallFrame::argumentOffsetIncludingThis(0), regT0, regT1); > Jump done = branchIfNotCell(regT0); > load32(Address(regT1, JSCell::structureIDOffset()), regT1); >- store32(regT1, metadata.m_arrayProfile.addressOfLastSeenStructureID()); >+ store32(regT1, metadata.m_callLinkInfo.m_arrayProfile.addressOfLastSeenStructureID()); > done.link(this); > } > >diff --git a/Source/JavaScriptCore/jit/JITOpcodes.cpp b/Source/JavaScriptCore/jit/JITOpcodes.cpp >index 18ceba6ad90b808c4e04ffd18f1cb926522f4262..f05dba964b2e677a893b1dd1efbadc24c3e83a40 100644 >--- a/Source/JavaScriptCore/jit/JITOpcodes.cpp >+++ b/Source/JavaScriptCore/jit/JITOpcodes.cpp >@@ -707,7 +707,7 @@ void JIT::emit_op_catch(const Instruction* currentInstruction) > buffer->forEach([&] (ValueProfileAndOperand& profile) { > JSValueRegs regs(regT0); > emitGetVirtualRegister(profile.m_operand, regs); >- emitValueProfilingSite(profile.m_profile); >+ emitValueProfilingSite(static_cast<ValueProfile&>(profile)); > }); > } > #endif // ENABLE(DFG_JIT) >@@ -878,15 +878,13 @@ void JIT::emit_op_to_this(const Instruction* currentInstruction) > { > auto bytecode = currentInstruction->as<OpToThis>(); > auto& metadata = bytecode.metadata(m_codeBlock); >- WriteBarrierBase<Structure>* cachedStructure = &metadata.m_cachedStructure; >+ StructureID* cachedStructureID = &metadata.m_cachedStructureID; > emitGetVirtualRegister(bytecode.m_srcDst.offset(), regT1); > > emitJumpSlowCaseIfNotJSCell(regT1); > > addSlowCase(branchIfNotType(regT1, FinalObjectType)); >- loadPtr(cachedStructure, regT2); >- addSlowCase(branchTestPtr(Zero, regT2)); >- load32(Address(regT2, Structure::structureIDOffset()), regT2); >+ load32(cachedStructureID, regT2); > addSlowCase(branch32(NotEqual, Address(regT1, JSCell::structureIDOffset()), regT2)); > } > >@@ -908,8 +906,8 @@ void JIT::emit_op_create_this(const Instruction* currentInstruction) > addSlowCase(branchIfNotFunction(calleeReg)); > loadPtr(Address(calleeReg, JSFunction::offsetOfRareData()), rareDataReg); > addSlowCase(branchTestPtr(Zero, rareDataReg)); >- loadPtr(Address(rareDataReg, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfile::offsetOfAllocator()), allocatorReg); >- loadPtr(Address(rareDataReg, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfile::offsetOfStructure()), structureReg); >+ loadPtr(Address(rareDataReg, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfileWithPrototype::offsetOfAllocator()), allocatorReg); >+ loadPtr(Address(rareDataReg, FunctionRareData::offsetOfObjectAllocationProfile() + 
ObjectAllocationProfileWithPrototype::offsetOfStructure()), structureReg); > > loadPtr(cachedFunction, cachedFunctionReg); > Jump hasSeenMultipleCallees = branchPtr(Equal, cachedFunctionReg, TrustedImmPtr(JSCell::seenMultipleCalleeObjects())); >@@ -919,9 +917,7 @@ void JIT::emit_op_create_this(const Instruction* currentInstruction) > JumpList slowCases; > auto butterfly = TrustedImmPtr(nullptr); > emitAllocateJSObject(resultReg, JITAllocator::variable(), allocatorReg, structureReg, butterfly, scratchReg, slowCases); >- emitGetVirtualRegister(callee, scratchReg); >- loadPtr(Address(scratchReg, JSFunction::offsetOfRareData()), scratchReg); >- load32(Address(scratchReg, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfile::offsetOfInlineCapacity()), scratchReg); >+ load8(Address(structureReg, Structure::inlineCapacityOffset()), scratchReg); > emitInitializeInlineStorage(resultReg, scratchReg); > addSlowCase(slowCases); > emitPutVirtualRegister(bytecode.m_dst.offset()); >diff --git a/Source/JavaScriptCore/jit/JITOpcodes32_64.cpp b/Source/JavaScriptCore/jit/JITOpcodes32_64.cpp >index 25a9c8f350adc3e69440daca34d82422ee4f182c..97ae2fbe5b2f375954448de85a81118ded89d03e 100644 >--- a/Source/JavaScriptCore/jit/JITOpcodes32_64.cpp >+++ b/Source/JavaScriptCore/jit/JITOpcodes32_64.cpp >@@ -899,7 +899,7 @@ void JIT::emit_op_catch(const Instruction* currentInstruction) > buffer->forEach([&] (ValueProfileAndOperand& profile) { > JSValueRegs regs(regT1, regT0); > emitGetVirtualRegister(profile.m_operand, regs); >- emitValueProfilingSite(profile.m_profile); >+ emitValueProfilingSite(profile); > }); > } > #endif // ENABLE(DFG_JIT) >@@ -1020,8 +1020,8 @@ void JIT::emit_op_create_this(const Instruction* currentInstruction) > addSlowCase(branchIfNotFunction(calleeReg)); > loadPtr(Address(calleeReg, JSFunction::offsetOfRareData()), rareDataReg); > addSlowCase(branchTestPtr(Zero, rareDataReg)); >- load32(Address(rareDataReg, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfile::offsetOfAllocator()), allocatorReg); >- loadPtr(Address(rareDataReg, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfile::offsetOfStructure()), structureReg); >+ load32(Address(rareDataReg, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfileWithPrototype::offsetOfAllocator()), allocatorReg); >+ loadPtr(Address(rareDataReg, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfileWithPrototype::offsetOfStructure()), structureReg); > > loadPtr(cachedFunction, cachedFunctionReg); > Jump hasSeenMultipleCallees = branchPtr(Equal, cachedFunctionReg, TrustedImmPtr(JSCell::seenMultipleCalleeObjects())); >@@ -1031,9 +1031,7 @@ void JIT::emit_op_create_this(const Instruction* currentInstruction) > JumpList slowCases; > auto butterfly = TrustedImmPtr(nullptr); > emitAllocateJSObject(resultReg, JITAllocator::variable(), allocatorReg, structureReg, butterfly, scratchReg, slowCases); >- emitLoadPayload(callee, scratchReg); >- loadPtr(Address(scratchReg, JSFunction::offsetOfRareData()), scratchReg); >- load32(Address(scratchReg, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfile::offsetOfInlineCapacity()), scratchReg); >+ load8(Address(structureReg, Structure::inlineCapacityOffset()), scratchReg); > emitInitializeInlineStorage(resultReg, scratchReg); > addSlowCase(slowCases); > emitStoreCell(bytecode.m_dst.offset(), resultReg); >@@ -1043,7 +1041,7 @@ void JIT::emit_op_to_this(const Instruction* 
currentInstruction) > { > auto bytecode = currentInstruction->as<OpToThis>(); > auto& metadata = bytecode.metadata(m_codeBlock); >- WriteBarrierBase<Structure>* cachedStructure = &metadata.m_cachedStructure; >+ StructureID* cachedStructureID = &metadata.m_cachedStructureID; > int thisRegister = bytecode.m_srcDst.offset(); > > emitLoad(thisRegister, regT3, regT2); >@@ -1051,7 +1049,7 @@ void JIT::emit_op_to_this(const Instruction* currentInstruction) > addSlowCase(branchIfNotCell(regT3)); > addSlowCase(branchIfNotType(regT2, FinalObjectType)); > loadPtr(Address(regT2, JSCell::structureIDOffset()), regT0); >- loadPtr(cachedStructure, regT2); >+ load32(cachedStructureID, regT2); > addSlowCase(branchPtr(NotEqual, regT0, regT2)); > } > >diff --git a/Source/JavaScriptCore/jit/JITOperations.cpp b/Source/JavaScriptCore/jit/JITOperations.cpp >index 5a003aaf30031fd361df40b7de7a26b256f8ecd8..12b78aadf4589dd483cc8792d2d6386351990ef4 100644 >--- a/Source/JavaScriptCore/jit/JITOperations.cpp >+++ b/Source/JavaScriptCore/jit/JITOperations.cpp >@@ -1695,7 +1695,7 @@ char* JIT_OPERATION operationTryOSREnterAtCatchAndValueProfile(ExecState* exec, > auto bytecode = codeBlock->instructions().at(bytecodeIndex)->as<OpCatch>(); > auto& metadata = bytecode.metadata(codeBlock); > metadata.m_buffer->forEach([&] (ValueProfileAndOperand& profile) { >- profile.m_profile.m_buckets[0] = JSValue::encode(exec->uncheckedR(profile.m_operand).jsValue()); >+ profile.m_buckets[0] = JSValue::encode(exec->uncheckedR(profile.m_operand).jsValue()); > }); > > return nullptr; >diff --git a/Source/JavaScriptCore/jit/JITPropertyAccess.cpp b/Source/JavaScriptCore/jit/JITPropertyAccess.cpp >index 519ad7aec364aba9266014c2d3eb462d259b3713..9d621481f77474efb831bd4e5f1f58229e332556 100644 >--- a/Source/JavaScriptCore/jit/JITPropertyAccess.cpp >+++ b/Source/JavaScriptCore/jit/JITPropertyAccess.cpp >@@ -583,7 +583,7 @@ void JIT::emit_op_get_by_id(const Instruction* currentInstruction) > emitJumpSlowCaseIfNotJSCell(regT0, baseVReg); > > if (*ident == m_vm->propertyNames->length && shouldEmitProfiling()) { >- Jump notArrayLengthMode = branch8(NotEqual, AbsoluteAddress(&metadata.m_mode), TrustedImm32(static_cast<uint8_t>(GetByIdMode::ArrayLength))); >+ Jump notArrayLengthMode = branch8(NotEqual, AbsoluteAddress(&metadata.m_modeMetadata.mode), TrustedImm32(static_cast<uint8_t>(GetByIdMode::ArrayLength))); > emitArrayProfilingSiteWithCell(regT0, regT1, &metadata.m_modeMetadata.arrayLengthMode.arrayProfile); > notArrayLengthMode.link(this); > } >diff --git a/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp b/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp >index 2e90c701da1ec75b1822aa6d7b3be7c7426f25b7..254fcdcddafd624c075801076c807d8777f1fe7e 100644 >--- a/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp >+++ b/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp >@@ -740,19 +740,24 @@ static void setupGetByIdPrototypeCache(ExecState* exec, VM& vm, const Instructio > ConcurrentJSLocker locker(codeBlock->m_lock); > > if (slot.isUnset()) { >- metadata.m_mode = GetByIdMode::Unset; >+ metadata.m_modeMetadata.mode = GetByIdMode::Unset; > metadata.m_modeMetadata.unsetMode.structureID = structure->id(); > return; > } > ASSERT(slot.isValue()); > >- metadata.m_mode = GetByIdMode::ProtoLoad; >+ metadata.m_modeMetadata.mode = GetByIdMode::ProtoLoad; // This must be first set. In 64bit architecture, this field is shared with protoLoadMode.cachedSlot. 
> metadata.m_modeMetadata.protoLoadMode.structureID = structure->id(); > metadata.m_modeMetadata.protoLoadMode.cachedOffset = offset; >- metadata.m_modeMetadata.protoLoadMode.cachedSlot = slot.slotBase(); > // We know that this pointer will remain valid because it will be cleared by either a watchpoint fire or > // during GC when we clear the LLInt caches. > metadata.m_modeMetadata.protoLoadMode.cachedSlot = slot.slotBase(); >+ >+ ASSERT(metadata.m_modeMetadata.mode == GetByIdMode::ProtoLoad); >+ ASSERT(!metadata.m_modeMetadata.hitCountForLLIntCaching); >+ ASSERT(metadata.m_modeMetadata.protoLoadMode.structureID == structure->id()); >+ ASSERT(metadata.m_modeMetadata.protoLoadMode.cachedOffset == offset); >+ ASSERT(metadata.m_modeMetadata.protoLoadMode.cachedSlot == slot.slotBase()); > } > > >@@ -775,8 +780,7 @@ LLINT_SLOW_PATH_DECL(slow_path_get_by_id) > && slot.isCacheable()) { > { > StructureID oldStructureID; >- auto mode = metadata.m_mode; >- switch (mode) { >+ switch (metadata.m_modeMetadata.mode) { > case GetByIdMode::Default: > oldStructureID = metadata.m_modeMetadata.defaultMode.structureID; > break; >@@ -804,12 +808,12 @@ LLINT_SLOW_PATH_DECL(slow_path_get_by_id) > Structure* structure = baseCell->structure(vm); > if (slot.isValue() && slot.slotBase() == baseValue) { > // Start out by clearing out the old cache. >- metadata.m_mode = GetByIdMode::Default; >+ metadata.m_modeMetadata.mode = GetByIdMode::Default; > metadata.m_modeMetadata.defaultMode.structureID = 0; > metadata.m_modeMetadata.defaultMode.cachedOffset = 0; > > // Prevent the prototype cache from ever happening. >- metadata.m_hitCountForLLIntCaching = 0; >+ metadata.m_modeMetadata.hitCountForLLIntCaching = 0; > > if (structure->propertyAccessesAreCacheable() > && !structure->needImpurePropertyWatchpoint()) { >@@ -820,21 +824,22 @@ LLINT_SLOW_PATH_DECL(slow_path_get_by_id) > metadata.m_modeMetadata.defaultMode.structureID = structure->id(); > metadata.m_modeMetadata.defaultMode.cachedOffset = slot.cachedOffset(); > } >- } else if (UNLIKELY(metadata.m_hitCountForLLIntCaching && (slot.isValue() || slot.isUnset()))) { >+ } else if (UNLIKELY(metadata.m_modeMetadata.hitCountForLLIntCaching && (slot.isValue() || slot.isUnset()))) { > ASSERT(slot.slotBase() != baseValue); > >- if (!(--metadata.m_hitCountForLLIntCaching)) >+ if (!(--metadata.m_modeMetadata.hitCountForLLIntCaching)) > setupGetByIdPrototypeCache(exec, vm, pc, metadata, baseCell, slot, ident); > } > } else if (!LLINT_ALWAYS_ACCESS_SLOW > && isJSArray(baseValue) > && ident == vm.propertyNames->length) { >- metadata.m_mode = GetByIdMode::ArrayLength; >- new (&metadata.m_modeMetadata.arrayLengthMode.arrayProfile) ArrayProfile(codeBlock->bytecodeOffset(pc)); >+ ConcurrentJSLocker locker(codeBlock->m_lock); >+ metadata.m_modeMetadata.mode = GetByIdMode::ArrayLength; >+ new (&metadata.m_modeMetadata.arrayLengthMode.arrayProfile) ArrayProfile; > metadata.m_modeMetadata.arrayLengthMode.arrayProfile.observeStructure(baseValue.asCell()->structure(vm)); > > // Prevent the prototype cache from ever happening. 
>- metadata.m_hitCountForLLIntCaching = 0; >+ metadata.m_modeMetadata.hitCountForLLIntCaching = 0; > } > > LLINT_PROFILE_VALUE(result); >@@ -1502,12 +1507,7 @@ inline SlowPathReturnType setUpCall(ExecState* execCallee, CodeSpecializationKin > CodeBlock* callerCodeBlock = exec->codeBlock(); > > ConcurrentJSLocker locker(callerCodeBlock->m_lock); >- >- if (callLinkInfo->isOnList()) >- callLinkInfo->remove(); >- callLinkInfo->callee.set(vm, callerCodeBlock, internalFunction); >- callLinkInfo->lastSeenCallee.set(vm, callerCodeBlock, internalFunction); >- callLinkInfo->machineCodeTarget = codePtr; >+ callLinkInfo->link(vm, callerCodeBlock, internalFunction, codePtr); > } > > assertIsTaggedWith(codePtr.executableAddress(), JSEntryPtrTag); >@@ -1550,12 +1550,7 @@ inline SlowPathReturnType setUpCall(ExecState* execCallee, CodeSpecializationKin > CodeBlock* callerCodeBlock = exec->codeBlock(); > > ConcurrentJSLocker locker(callerCodeBlock->m_lock); >- >- if (callLinkInfo->isOnList()) >- callLinkInfo->remove(); >- callLinkInfo->callee.set(vm, callerCodeBlock, callee); >- callLinkInfo->lastSeenCallee.set(vm, callerCodeBlock, callee); >- callLinkInfo->machineCodeTarget = codePtr; >+ callLinkInfo->link(vm, callerCodeBlock, callee, codePtr); > if (codeBlock) > codeBlock->linkIncomingCall(exec, callLinkInfo); > } >@@ -1928,7 +1923,7 @@ LLINT_SLOW_PATH_DECL(slow_path_profile_catch) > auto bytecode = pc->as<OpCatch>(); > auto& metadata = bytecode.metadata(exec); > metadata.m_buffer->forEach([&] (ValueProfileAndOperand& profile) { >- profile.m_profile.m_buckets[0] = JSValue::encode(exec->uncheckedR(profile.m_operand).jsValue()); >+ profile.m_buckets[0] = JSValue::encode(exec->uncheckedR(profile.m_operand).jsValue()); > }); > > LLINT_END(); >diff --git a/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm b/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm >index c77fb7e36bcde53ea34ade7e483906aec5bf3632..c2e60ab6dfb0b2d8d8868750cf58be9f335a10de 100644 >--- a/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm >+++ b/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm >@@ -745,7 +745,7 @@ llintOpWithMetadata(op_to_this, OpToThis, macro (size, get, dispatch, metadata, > loadi PayloadOffset[cfr, t0, 8], t0 > bbneq JSCell::m_type[t0], FinalObjectType, .opToThisSlow > metadata(t2, t3) >- loadp OpToThis::Metadata::m_cachedStructure[t2], t2 >+ loadi OpToThis::Metadata::m_cachedStructureID[t2], t2 > bineq JSCell::m_structureID[t0], t2, .opToThisSlow > dispatch() > >@@ -1344,7 +1344,7 @@ end) > > llintOpWithMetadata(op_get_by_id, OpGetById, macro (size, get, dispatch, metadata, return) > metadata(t5, t0) >- loadb OpGetById::Metadata::m_mode[t5], t1 >+ loadb OpGetById::Metadata::m_modeMetadata.mode[t5], t1 > get(m_base, t0) > > .opGetByIdProtoLoad: >@@ -1823,7 +1823,7 @@ macro arrayProfileForCall(opcodeStruct, getu) > bineq ThisArgumentOffset + TagOffset[cfr, t3, 8], CellTag, .done > loadi ThisArgumentOffset + PayloadOffset[cfr, t3, 8], t0 > loadi JSCell::m_structureID[t0], t0 >- storei t0, %opcodeStruct%::Metadata::m_arrayProfile.m_lastSeenStructureID[t5] >+ storei t0, %opcodeStruct%::Metadata::m_callLinkInfo.m_arrayProfile.m_lastSeenStructureID[t5] > .done: > end > >@@ -1836,7 +1836,7 @@ macro commonCallOp(opcodeName, slowPath, opcodeStruct, prepareCall, prologue) > end, metadata) > > get(m_callee, t0) >- loadp %opcodeStruct%::Metadata::m_callLinkInfo.callee[t5], t2 >+ loadp %opcodeStruct%::Metadata::m_callLinkInfo.m_calleeOrLastSeenCalleeWithLinkBit[t5], t2 > 
loadConstantOrVariablePayload(size, t0, CellTag, t3, .opCallSlow) > bineq t3, t2, .opCallSlow > getu(size, opcodeStruct, m_argv, t3) >@@ -1849,8 +1849,8 @@ macro commonCallOp(opcodeName, slowPath, opcodeStruct, prepareCall, prologue) > storei t2, ArgumentCount + PayloadOffset[t3] > storei CellTag, Callee + TagOffset[t3] > move t3, sp >- prepareCall(%opcodeStruct%::Metadata::m_callLinkInfo.machineCodeTarget[t5], t2, t3, t4, JSEntryPtrTag) >- callTargetFunction(size, opcodeStruct, dispatch, %opcodeStruct%::Metadata::m_callLinkInfo.machineCodeTarget[t5], JSEntryPtrTag) >+ prepareCall(%opcodeStruct%::Metadata::m_callLinkInfo.m_machineCodeTarget[t5], t2, t3, t4, JSEntryPtrTag) >+ callTargetFunction(size, opcodeStruct, dispatch, %opcodeStruct%::Metadata::m_callLinkInfo.m_machineCodeTarget[t5], JSEntryPtrTag) > > .opCallSlow: > slowPathForCall(size, opcodeStruct, dispatch, slowPath, prepareCall) >diff --git a/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm b/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm >index c80743584509cfee30993a6eb11255d213351c62..8119da2cbff7ab482032b72ad337b51abfb19c35 100644 >--- a/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm >+++ b/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm >@@ -701,10 +701,10 @@ llintOpWithMetadata(op_to_this, OpToThis, macro (size, get, dispatch, metadata, > loadq [cfr, t0, 8], t0 > btqnz t0, tagMask, .opToThisSlow > bbneq JSCell::m_type[t0], FinalObjectType, .opToThisSlow >- loadStructureWithScratch(t0, t1, t2, t3) >+ loadi JSCell::m_structureID[t0], t1 > metadata(t2, t3) >- loadp OpToThis::Metadata::m_cachedStructure[t2], t2 >- bpneq t1, t2, .opToThisSlow >+ loadi OpToThis::Metadata::m_cachedStructureID[t2], t2 >+ bineq t1, t2, .opToThisSlow > dispatch() > > .opToThisSlow: >@@ -1288,7 +1288,7 @@ end) > > llintOpWithMetadata(op_get_by_id, OpGetById, macro (size, get, dispatch, metadata, return) > metadata(t2, t1) >- loadb OpGetById::Metadata::m_mode[t2], t1 >+ loadb OpGetById::Metadata::m_modeMetadata.mode[t2], t1 > get(m_base, t0) > loadConstantOrVariableCell(size, t0, t3, .opGetByIdSlow) > >@@ -1918,7 +1918,7 @@ macro arrayProfileForCall(opcodeStruct, getu) > loadq ThisArgumentOffset[cfr, t3, 8], t0 > btqnz t0, tagMask, .done > loadi JSCell::m_structureID[t0], t3 >- storei t3, %opcodeStruct%::Metadata::m_arrayProfile.m_lastSeenStructureID[t5] >+ storei t3, %opcodeStruct%::Metadata::m_callLinkInfo.m_arrayProfile.m_lastSeenStructureID[t5] > .done: > end > >@@ -1931,7 +1931,7 @@ macro commonCallOp(opcodeName, slowPath, opcodeStruct, prepareCall, prologue) > end, metadata) > > get(m_callee, t0) >- loadp %opcodeStruct%::Metadata::m_callLinkInfo.callee[t5], t2 >+ loadp %opcodeStruct%::Metadata::m_callLinkInfo.m_calleeOrLastSeenCalleeWithLinkBit[t5], t2 > loadConstantOrVariable(size, t0, t3) > bqneq t3, t2, .opCallSlow > getu(size, opcodeStruct, m_argv, t3) >@@ -1943,8 +1943,8 @@ macro commonCallOp(opcodeName, slowPath, opcodeStruct, prepareCall, prologue) > storei PC, ArgumentCount + TagOffset[cfr] > storei t2, ArgumentCount + PayloadOffset[t3] > move t3, sp >- prepareCall(%opcodeStruct%::Metadata::m_callLinkInfo.machineCodeTarget[t5], t2, t3, t4, JSEntryPtrTag) >- callTargetFunction(size, opcodeStruct, dispatch, %opcodeStruct%::Metadata::m_callLinkInfo.machineCodeTarget[t5], JSEntryPtrTag) >+ prepareCall(%opcodeStruct%::Metadata::m_callLinkInfo.m_machineCodeTarget[t5], t2, t3, t4, JSEntryPtrTag) >+ callTargetFunction(size, opcodeStruct, dispatch, %opcodeStruct%::Metadata::m_callLinkInfo.m_machineCodeTarget[t5], 
JSEntryPtrTag) > > .opCallSlow: > slowPathForCall(size, opcodeStruct, dispatch, slowPath, prepareCall) >diff --git a/Source/JavaScriptCore/runtime/CommonSlowPaths.cpp b/Source/JavaScriptCore/runtime/CommonSlowPaths.cpp >index 8544188836fe689cd8a67d2bff437252173cf5be..4e78a48d6d19eb9b06d3c9ed074e958112d08a7a 100644 >--- a/Source/JavaScriptCore/runtime/CommonSlowPaths.cpp >+++ b/Source/JavaScriptCore/runtime/CommonSlowPaths.cpp >@@ -242,7 +242,7 @@ SLOW_PATH_DECL(slow_path_create_this) > cachedCallee.setWithoutWriteBarrier(JSCell::seenMultipleCalleeObjects()); > > size_t inlineCapacity = bytecode.m_inlineCapacity; >- ObjectAllocationProfile* allocationProfile = constructor->ensureRareDataAndAllocationProfile(exec, inlineCapacity)->objectAllocationProfile(); >+ ObjectAllocationProfileWithPrototype* allocationProfile = constructor->ensureRareDataAndAllocationProfile(exec, inlineCapacity)->objectAllocationProfile(); > throwScope.releaseAssertNoException(); > Structure* structure = allocationProfile->structure(); > result = constructEmptyObject(exec, structure); >@@ -272,16 +272,17 @@ SLOW_PATH_DECL(slow_path_to_this) > auto& metadata = bytecode.metadata(exec); > JSValue v1 = GET(bytecode.m_srcDst).jsValue(); > if (v1.isCell()) { >- Structure* myStructure = v1.asCell()->structure(vm); >- Structure* otherStructure = metadata.m_cachedStructure.get(); >- if (myStructure != otherStructure) { >- if (otherStructure) >+ StructureID myStructureID = v1.asCell()->structureID(); >+ StructureID otherStructureID = metadata.m_cachedStructureID; >+ if (myStructureID != otherStructureID) { >+ if (otherStructureID) > metadata.m_toThisStatus = ToThisConflicted; >- metadata.m_cachedStructure.set(vm, exec->codeBlock(), myStructure); >+ metadata.m_cachedStructureID = myStructureID; >+ vm.heap.writeBarrier(exec->codeBlock(), vm.getStructure(myStructureID)); > } > } else { > metadata.m_toThisStatus = ToThisConflicted; >- metadata.m_cachedStructure.clear(); >+ metadata.m_cachedStructureID = 0; > } > // Note: We only need to do this value profiling here on the slow path. The fast path > // just returns the input to to_this if the structure check succeeds. If the structure >diff --git a/Source/JavaScriptCore/runtime/FunctionRareData.h b/Source/JavaScriptCore/runtime/FunctionRareData.h >index a6a6df00d454789801904ff5eecea22bddc7921c..2eda32ba3853610d01a945a6a49f5fa404c3b348 100644 >--- a/Source/JavaScriptCore/runtime/FunctionRareData.h >+++ b/Source/JavaScriptCore/runtime/FunctionRareData.h >@@ -66,7 +66,7 @@ class FunctionRareData final : public JSCell { > return OBJECT_OFFSETOF(FunctionRareData, m_objectAllocationProfile); > } > >- ObjectAllocationProfile* objectAllocationProfile() >+ ObjectAllocationProfileWithPrototype* objectAllocationProfile() > { > return &m_objectAllocationProfile; > } >@@ -145,7 +145,7 @@ class FunctionRareData final : public JSCell { > // > // We don't really care about 1) since this memory is rare and small in total. 2) is unfortunate but is > // probably outweighed by the cost of 3). 
>- ObjectAllocationProfile m_objectAllocationProfile; >+ ObjectAllocationProfileWithPrototype m_objectAllocationProfile; > InlineWatchpointSet m_objectAllocationProfileWatchpoint; > InternalFunctionAllocationProfile m_internalFunctionAllocationProfile; > WriteBarrier<Structure> m_boundFunctionStructure; >diff --git a/Source/JavaScriptCore/tools/HeapVerifier.cpp b/Source/JavaScriptCore/tools/HeapVerifier.cpp >index d773d855ffc2e01548ecf6daa8cd8d55591c8c96..05e4099b164c6a8a1d71374cfdc45b62d83860e8 100644 >--- a/Source/JavaScriptCore/tools/HeapVerifier.cpp >+++ b/Source/JavaScriptCore/tools/HeapVerifier.cpp >@@ -330,7 +330,7 @@ bool HeapVerifier::validateJSCell(VM* expectedVM, JSCell* cell, CellProfile* pro > CodeBlock* codeBlock = jsDynamicCast<CodeBlock*>(vm, cell); > if (UNLIKELY(codeBlock)) { > bool success = true; >- codeBlock->forEachValueProfile([&](ValueProfile& valueProfile) { >+ codeBlock->forEachValueProfile([&](ValueProfile& valueProfile, bool) { > for (unsigned i = 0; i < ValueProfile::totalNumberOfBuckets; ++i) { > JSValue value = JSValue::decode(valueProfile.m_buckets[i]); > if (!value)
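
The to_this hunks above (JITOpcodes32_64.cpp, LowLevelInterpreter*.asm, CommonSlowPaths.cpp) swap the cached WriteBarrier<Structure> for a raw 32-bit StructureID and issue the write barrier by hand when the slow path refreshes the cache. A minimal sketch of that pattern, using stand-in types (HeapStub, CodeBlockStub) rather than the real JSC classes:

// Sketch only: stand-in types, not the real JavaScriptCore classes.
#include <cstdint>

using StructureID = uint32_t;   // 0 means "nothing cached"
struct Structure;               // opaque here
struct CodeBlockStub;           // owner of the metadata

struct HeapStub {
    // Stands in for vm.heap.writeBarrier(owner, cell); a no-op in this sketch.
    void writeBarrier(CodeBlockStub*, Structure*) {}
};

struct ToThisMetadataSketch {
    StructureID m_cachedStructureID { 0 };  // 4 bytes instead of a barriered pointer
};

// Slow-path update: store the small ID, then barrier the owning code block
// manually, since there is no longer a WriteBarrier<> member to do it implicitly.
inline void updateCache(ToThisMetadataSketch& metadata, StructureID seenID,
                        Structure* seenStructure, CodeBlockStub* owner, HeapStub& heap)
{
    if (metadata.m_cachedStructureID == seenID)
        return;                              // fast path is a plain 32-bit compare
    metadata.m_cachedStructureID = seenID;
    heap.writeBarrier(owner, seenStructure);
}

The interpreter fast paths then become a 32-bit load and compare (loadi / load32 against JSCell::m_structureID) instead of a pointer load, which is what the LowLevelInterpreter64.asm hunk shows.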
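
The setupGetByIdPrototypeCache hunk notes that on 64-bit the mode byte is shared with protoLoadMode.cachedSlot, which is why the mode must be written before the pointer. The general trick is packing a small field into the always-zero high bits of a user-space pointer. A self-contained illustration with a made-up class (PackedPointerAndMode); the actual layout lives in GetByIdMetadata.h and is not reproduced here:

// Illustrative only: a tag byte packed into the high bits of a 64-bit pointer.
#include <cstdint>

class PackedPointerAndMode {
public:
    void setPointer(void* p)
    {
        // User-space pointers keep their top bits clear on x86_64/arm64,
        // so a masked update leaves the tag byte intact.
        m_bits = (m_bits & tagMask) | (reinterpret_cast<uintptr_t>(p) & pointerMask);
    }
    void* pointer() const { return reinterpret_cast<void*>(m_bits & pointerMask); }

    void setMode(uint8_t mode) { m_bits = (m_bits & pointerMask) | (uintptr_t(mode) << tagShift); }
    uint8_t mode() const { return uint8_t(m_bits >> tagShift); }

private:
    static constexpr unsigned tagShift = 56;
    static constexpr uintptr_t tagMask = uintptr_t(0xff) << tagShift;
    static constexpr uintptr_t pointerMask = ~tagMask;
    uintptr_t m_bits { 0 };
};

The ASSERTs added after the cachedSlot store in the patch play the same role as the masking here: they check that writing the pointer did not clobber the mode and hit-count bits that overlap it.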
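
The call hunks now load a single m_calleeOrLastSeenCalleeWithLinkBit word and compare it directly against the callee cell, and the slow paths go through callLinkInfo->link(...) instead of setting callee, lastSeenCallee, and machineCodeTarget separately. A hypothetical sketch of that one-word encoding; the names and exact bit choice are illustrative, not the real LLIntCallLinkInfo:

// Hypothetical: callee cells are at least 8-byte aligned, so bit 0 is free for a flag.
#include <cstdint>

struct CalleeCellStub { alignas(8) char bytes[8]; };

class CallLinkSketch {
public:
    void link(CalleeCellStub* callee) { m_word = reinterpret_cast<uintptr_t>(callee); }
    void unlink() { m_word |= clearedBit; }   // keep the last seen callee, drop the link
    bool isLinked() const { return m_word && !(m_word & clearedBit); }

    CalleeCellStub* callee() const
    {
        return isLinked() ? reinterpret_cast<CalleeCellStub*>(m_word) : nullptr;
    }
    CalleeCellStub* lastSeenCallee() const
    {
        return reinterpret_cast<CalleeCellStub*>(m_word & ~clearedBit);
    }

private:
    // A linked entry is the plain pointer, so an interpreter fast path can
    // compare the loaded word against the callee cell with a single branch,
    // as the commonCallOp hunks above do.
    static constexpr uintptr_t clearedBit = 1;
    uintptr_t m_word { 0 };
};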