JSC should be a triple-tier VM
author  fpizlo@apple.com <fpizlo@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Wed, 22 Feb 2012 05:23:19 +0000 (05:23 +0000)
committer  fpizlo@apple.com <fpizlo@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Wed, 22 Feb 2012 05:23:19 +0000 (05:23 +0000)
https://bugs.webkit.org/show_bug.cgi?id=75812
<rdar://problem/10079694>

Source/JavaScriptCore:

Reviewed by Gavin Barraclough.

Implemented an interpreter that uses the JIT's calling convention. This
interpreter is called LLInt, or the Low Level Interpreter. JSC will now
start by executing code in LLInt and will only tier up to the old JIT
after the code is proven hot.

LLInt is written in a modified form of our macro assembly. This new macro
assembly is compiled by an offline assembler (see offlineasm), which
implements many modern conveniences such as a Turing-complete CPS-based
macro language and direct access to relevant C++ type information
(basically offsets of fields and sizes of structs/classes).

Code executing in LLInt appears to the rest of the JSC world "as if" it
were executing in the old JIT. Hence, things like exception handling and
cross-execution-engine calls just work and require pretty much no
additional overhead.

This interpreter is 2-2.5x faster than our old interpreter on SunSpider,
V8, and Kraken. With triple-tiering turned on, we're neutral on SunSpider,
V8, and Kraken, but appear to get a double-digit improvement on real-world
websites due to a huge reduction in the amount of JIT'ing.

* CMakeLists.txt:
* GNUmakefile.am:
* GNUmakefile.list.am:
* JavaScriptCore.pri:
* JavaScriptCore.vcproj/JavaScriptCore/JavaScriptCore.vcproj:
* JavaScriptCore.vcproj/JavaScriptCore/JavaScriptCoreCommon.vsprops:
* JavaScriptCore.vcproj/JavaScriptCore/copy-files.cmd:
* JavaScriptCore.xcodeproj/project.pbxproj:
* Target.pri:
* assembler/LinkBuffer.h:
* assembler/MacroAssemblerCodeRef.h:
(MacroAssemblerCodePtr):
(JSC::MacroAssemblerCodePtr::createFromExecutableAddress):
* bytecode/BytecodeConventions.h: Added.
* bytecode/CallLinkStatus.cpp:
(JSC::CallLinkStatus::computeFromLLInt):
(JSC):
(JSC::CallLinkStatus::computeFor):
* bytecode/CallLinkStatus.h:
(JSC::CallLinkStatus::isSet):
(JSC::CallLinkStatus::operator!):
(CallLinkStatus):
* bytecode/CodeBlock.cpp:
(JSC::CodeBlock::dump):
(JSC::CodeBlock::CodeBlock):
(JSC::CodeBlock::~CodeBlock):
(JSC::CodeBlock::finalizeUnconditionally):
(JSC::CodeBlock::stronglyVisitStrongReferences):
(JSC):
(JSC::CodeBlock::unlinkCalls):
(JSC::CodeBlock::unlinkIncomingCalls):
(JSC::CodeBlock::bytecodeOffset):
(JSC::ProgramCodeBlock::jettison):
(JSC::EvalCodeBlock::jettison):
(JSC::FunctionCodeBlock::jettison):
(JSC::ProgramCodeBlock::jitCompileImpl):
(JSC::EvalCodeBlock::jitCompileImpl):
(JSC::FunctionCodeBlock::jitCompileImpl):
* bytecode/CodeBlock.h:
(JSC):
(CodeBlock):
(JSC::CodeBlock::baselineVersion):
(JSC::CodeBlock::linkIncomingCall):
(JSC::CodeBlock::bytecodeOffset):
(JSC::CodeBlock::jitCompile):
(JSC::CodeBlock::hasOptimizedReplacement):
(JSC::CodeBlock::addPropertyAccessInstruction):
(JSC::CodeBlock::addGlobalResolveInstruction):
(JSC::CodeBlock::addLLIntCallLinkInfo):
(JSC::CodeBlock::addGlobalResolveInfo):
(JSC::CodeBlock::numberOfMethodCallLinkInfos):
(JSC::CodeBlock::valueProfilePredictionForBytecodeOffset):
(JSC::CodeBlock::likelyToTakeSlowCase):
(JSC::CodeBlock::couldTakeSlowCase):
(JSC::CodeBlock::likelyToTakeSpecialFastCase):
(JSC::CodeBlock::likelyToTakeDeepestSlowCase):
(JSC::CodeBlock::likelyToTakeAnySlowCase):
(JSC::CodeBlock::addFrequentExitSite):
(JSC::CodeBlock::dontJITAnytimeSoon):
(JSC::CodeBlock::jitAfterWarmUp):
(JSC::CodeBlock::jitSoon):
(JSC::CodeBlock::llintExecuteCounter):
(ProgramCodeBlock):
(EvalCodeBlock):
(FunctionCodeBlock):
* bytecode/GetByIdStatus.cpp:
(JSC::GetByIdStatus::computeFromLLInt):
(JSC):
(JSC::GetByIdStatus::computeFor):
* bytecode/GetByIdStatus.h:
(JSC::GetByIdStatus::GetByIdStatus):
(JSC::GetByIdStatus::wasSeenInJIT):
(GetByIdStatus):
* bytecode/Instruction.h:
(JSC):
(JSC::Instruction::Instruction):
(Instruction):
* bytecode/LLIntCallLinkInfo.h: Added.
(JSC):
(JSC::LLIntCallLinkInfo::LLIntCallLinkInfo):
(LLIntCallLinkInfo):
(JSC::LLIntCallLinkInfo::~LLIntCallLinkInfo):
(JSC::LLIntCallLinkInfo::isLinked):
(JSC::LLIntCallLinkInfo::unlink):
* bytecode/MethodCallLinkStatus.cpp:
(JSC::MethodCallLinkStatus::computeFor):
* bytecode/Opcode.cpp:
(JSC):
* bytecode/Opcode.h:
(JSC):
(JSC::padOpcodeName):
* bytecode/PutByIdStatus.cpp:
(JSC::PutByIdStatus::computeFromLLInt):
(JSC):
(JSC::PutByIdStatus::computeFor):
* bytecode/PutByIdStatus.h:
(PutByIdStatus):
* bytecompiler/BytecodeGenerator.cpp:
(JSC::BytecodeGenerator::emitResolve):
(JSC::BytecodeGenerator::emitResolveWithBase):
(JSC::BytecodeGenerator::emitGetById):
(JSC::BytecodeGenerator::emitPutById):
(JSC::BytecodeGenerator::emitDirectPutById):
(JSC::BytecodeGenerator::emitCall):
(JSC::BytecodeGenerator::emitConstruct):
(JSC::BytecodeGenerator::emitCatch):
* dfg/DFGByteCodeParser.cpp:
(JSC::DFG::ByteCodeParser::getPredictionWithoutOSRExit):
(JSC::DFG::ByteCodeParser::handleInlining):
(JSC::DFG::ByteCodeParser::parseBlock):
* dfg/DFGCapabilities.h:
(JSC::DFG::canCompileOpcode):
* dfg/DFGOSRExitCompiler.cpp:
* dfg/DFGOperations.cpp:
* heap/Heap.h:
(JSC):
(JSC::Heap::firstAllocatorWithoutDestructors):
(Heap):
* heap/MarkStack.cpp:
(JSC::visitChildren):
* heap/MarkedAllocator.h:
(JSC):
(MarkedAllocator):
* heap/MarkedSpace.h:
(JSC):
(MarkedSpace):
(JSC::MarkedSpace::firstAllocator):
* interpreter/CallFrame.cpp:
(JSC):
(JSC::CallFrame::bytecodeOffsetForNonDFGCode):
(JSC::CallFrame::setBytecodeOffsetForNonDFGCode):
(JSC::CallFrame::currentVPC):
(JSC::CallFrame::setCurrentVPC):
(JSC::CallFrame::trueCallerFrame):
* interpreter/CallFrame.h:
(JSC::ExecState::hasReturnPC):
(JSC::ExecState::clearReturnPC):
(ExecState):
(JSC::ExecState::bytecodeOffsetForNonDFGCode):
(JSC::ExecState::currentVPC):
(JSC::ExecState::setCurrentVPC):
* interpreter/Interpreter.cpp:
(JSC::Interpreter::Interpreter):
(JSC::Interpreter::~Interpreter):
(JSC):
(JSC::Interpreter::initialize):
(JSC::Interpreter::isOpcode):
(JSC::Interpreter::unwindCallFrame):
(JSC::getCallerInfo):
(JSC::Interpreter::privateExecute):
(JSC::Interpreter::retrieveLastCaller):
* interpreter/Interpreter.h:
(JSC):
(Interpreter):
(JSC::Interpreter::getOpcode):
(JSC::Interpreter::getOpcodeID):
(JSC::Interpreter::classicEnabled):
* interpreter/RegisterFile.h:
(JSC):
(RegisterFile):
* jit/ExecutableAllocator.h:
(JSC):
* jit/HostCallReturnValue.cpp: Added.
(JSC):
(JSC::getHostCallReturnValueWithExecState):
* jit/HostCallReturnValue.h: Added.
(JSC):
(JSC::initializeHostCallReturnValue):
* jit/JIT.cpp:
(JSC::JIT::privateCompileMainPass):
(JSC::JIT::privateCompileSlowCases):
(JSC::JIT::privateCompile):
* jit/JITCode.h:
(JSC::JITCode::isOptimizingJIT):
(JITCode):
(JSC::JITCode::isBaselineCode):
(JSC::JITCode::JITCode):
* jit/JITDriver.h:
(JSC::jitCompileIfAppropriate):
(JSC::jitCompileFunctionIfAppropriate):
* jit/JITExceptions.cpp:
(JSC::jitThrow):
* jit/JITInlineMethods.h:
(JSC::JIT::updateTopCallFrame):
* jit/JITStubs.cpp:
(JSC::DEFINE_STUB_FUNCTION):
(JSC):
* jit/JITStubs.h:
(JSC):
* jit/JSInterfaceJIT.h:
* llint: Added.
* llint/LLIntCommon.h: Added.
* llint/LLIntData.cpp: Added.
(LLInt):
(JSC::LLInt::Data::Data):
(JSC::LLInt::Data::performAssertions):
(JSC::LLInt::Data::~Data):
* llint/LLIntData.h: Added.
(JSC):
(LLInt):
(Data):
(JSC::LLInt::Data::exceptionInstructions):
(JSC::LLInt::Data::opcodeMap):
(JSC::LLInt::Data::performAssertions):
* llint/LLIntEntrypoints.cpp: Added.
(LLInt):
(JSC::LLInt::getFunctionEntrypoint):
(JSC::LLInt::getEvalEntrypoint):
(JSC::LLInt::getProgramEntrypoint):
* llint/LLIntEntrypoints.h: Added.
(JSC):
(LLInt):
(JSC::LLInt::getEntrypoint):
* llint/LLIntExceptions.cpp: Added.
(LLInt):
(JSC::LLInt::interpreterThrowInCaller):
(JSC::LLInt::returnToThrowForThrownException):
(JSC::LLInt::returnToThrow):
(JSC::LLInt::callToThrow):
* llint/LLIntExceptions.h: Added.
(JSC):
(LLInt):
* llint/LLIntOfflineAsmConfig.h: Added.
* llint/LLIntOffsetsExtractor.cpp: Added.
(JSC):
(LLIntOffsetsExtractor):
(JSC::LLIntOffsetsExtractor::dummy):
(main):
* llint/LLIntSlowPaths.cpp: Added.
(LLInt):
(JSC::LLInt::llint_trace_operand):
(JSC::LLInt::llint_trace_value):
(JSC::LLInt::LLINT_SLOW_PATH_DECL):
(JSC::LLInt::traceFunctionPrologue):
(JSC::LLInt::shouldJIT):
(JSC::LLInt::entryOSR):
(JSC::LLInt::resolveGlobal):
(JSC::LLInt::getByVal):
(JSC::LLInt::handleHostCall):
(JSC::LLInt::setUpCall):
(JSC::LLInt::genericCall):
* llint/LLIntSlowPaths.h: Added.
(JSC):
(LLInt):
* llint/LLIntThunks.cpp: Added.
(LLInt):
(JSC::LLInt::generateThunkWithJumpTo):
(JSC::LLInt::functionForCallEntryThunkGenerator):
(JSC::LLInt::functionForConstructEntryThunkGenerator):
(JSC::LLInt::functionForCallArityCheckThunkGenerator):
(JSC::LLInt::functionForConstructArityCheckThunkGenerator):
(JSC::LLInt::evalEntryThunkGenerator):
(JSC::LLInt::programEntryThunkGenerator):
* llint/LLIntThunks.h: Added.
(JSC):
(LLInt):
* llint/LowLevelInterpreter.asm: Added.
* llint/LowLevelInterpreter.cpp: Added.
* llint/LowLevelInterpreter.h: Added.
* offlineasm: Added.
* offlineasm/armv7.rb: Added.
* offlineasm/asm.rb: Added.
* offlineasm/ast.rb: Added.
* offlineasm/backends.rb: Added.
* offlineasm/generate_offset_extractor.rb: Added.
* offlineasm/instructions.rb: Added.
* offlineasm/offset_extractor_constants.rb: Added.
* offlineasm/offsets.rb: Added.
* offlineasm/opt.rb: Added.
* offlineasm/parser.rb: Added.
* offlineasm/registers.rb: Added.
* offlineasm/self_hash.rb: Added.
* offlineasm/settings.rb: Added.
* offlineasm/transform.rb: Added.
* offlineasm/x86.rb: Added.
* runtime/CodeSpecializationKind.h: Added.
(JSC):
* runtime/CommonSlowPaths.h:
(JSC::CommonSlowPaths::arityCheckFor):
(CommonSlowPaths):
* runtime/Executable.cpp:
(JSC::jettisonCodeBlock):
(JSC):
(JSC::EvalExecutable::jitCompile):
(JSC::samplingDescription):
(JSC::EvalExecutable::compileInternal):
(JSC::ProgramExecutable::jitCompile):
(JSC::ProgramExecutable::compileInternal):
(JSC::FunctionExecutable::baselineCodeBlockFor):
(JSC::FunctionExecutable::jitCompileForCall):
(JSC::FunctionExecutable::jitCompileForConstruct):
(JSC::FunctionExecutable::compileForCallInternal):
(JSC::FunctionExecutable::compileForConstructInternal):
* runtime/Executable.h:
(JSC):
(EvalExecutable):
(ProgramExecutable):
(FunctionExecutable):
(JSC::FunctionExecutable::jitCompileFor):
* runtime/ExecutionHarness.h: Added.
(JSC):
(JSC::prepareForExecution):
(JSC::prepareFunctionForExecution):
* runtime/JSArray.h:
(JSC):
(JSArray):
* runtime/JSCell.h:
(JSC):
(JSCell):
* runtime/JSFunction.h:
(JSC):
(JSFunction):
* runtime/JSGlobalData.cpp:
(JSC::JSGlobalData::JSGlobalData):
* runtime/JSGlobalData.h:
(JSC):
(JSGlobalData):
* runtime/JSGlobalObject.h:
(JSC):
(JSGlobalObject):
* runtime/JSObject.h:
(JSC):
(JSObject):
(JSFinalObject):
* runtime/JSPropertyNameIterator.h:
(JSC):
(JSPropertyNameIterator):
* runtime/JSString.h:
(JSC):
(JSString):
* runtime/JSTypeInfo.h:
(JSC):
(TypeInfo):
* runtime/JSValue.cpp:
(JSC::JSValue::description):
* runtime/JSValue.h:
(LLInt):
(JSValue):
* runtime/JSVariableObject.h:
(JSC):
(JSVariableObject):
* runtime/Options.cpp:
(Options):
(JSC::Options::initializeOptions):
* runtime/Options.h:
(Options):
* runtime/ScopeChain.h:
(JSC):
(ScopeChainNode):
* runtime/Structure.cpp:
(JSC::Structure::addPropertyTransition):
* runtime/Structure.h:
(JSC):
(Structure):
* runtime/StructureChain.h:
(JSC):
(StructureChain):
* wtf/InlineASM.h:
* wtf/Platform.h:
* wtf/SentinelLinkedList.h:
(SentinelLinkedList):
(WTF::SentinelLinkedList::isEmpty):
* wtf/text/StringImpl.h:
(JSC):
(StringImpl):

Source/WebCore:

Reviewed by Gavin Barraclough.

No new tests, because there is no change in behavior.

* CMakeLists.txt:

Source/WebKit:

Reviewed by Gavin Barraclough.

Changed EFL's build system to include a new directory in JavaScriptCore.

* CMakeLists.txt:

Tools:

Reviewed by Gavin Barraclough.

Changed EFL's build system to include a new directory in JavaScriptCore.

* DumpRenderTree/efl/CMakeLists.txt:

git-svn-id: http://svn.webkit.org/repository/webkit/trunk@108444 268f45cc-cd09-0410-ab3c-d52691b4dbfc

115 files changed:
Source/JavaScriptCore/CMakeLists.txt
Source/JavaScriptCore/ChangeLog
Source/JavaScriptCore/GNUmakefile.am
Source/JavaScriptCore/GNUmakefile.list.am
Source/JavaScriptCore/JavaScriptCore.pri
Source/JavaScriptCore/JavaScriptCore.vcproj/JavaScriptCore/JavaScriptCore.vcproj
Source/JavaScriptCore/JavaScriptCore.vcproj/JavaScriptCore/JavaScriptCoreCommon.vsprops
Source/JavaScriptCore/JavaScriptCore.vcproj/JavaScriptCore/copy-files.cmd
Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
Source/JavaScriptCore/Target.pri
Source/JavaScriptCore/assembler/LinkBuffer.h
Source/JavaScriptCore/assembler/MacroAssemblerCodeRef.h
Source/JavaScriptCore/bytecode/BytecodeConventions.h [new file with mode: 0644]
Source/JavaScriptCore/bytecode/CallLinkStatus.cpp
Source/JavaScriptCore/bytecode/CallLinkStatus.h
Source/JavaScriptCore/bytecode/CodeBlock.cpp
Source/JavaScriptCore/bytecode/CodeBlock.h
Source/JavaScriptCore/bytecode/GetByIdStatus.cpp
Source/JavaScriptCore/bytecode/GetByIdStatus.h
Source/JavaScriptCore/bytecode/Instruction.h
Source/JavaScriptCore/bytecode/LLIntCallLinkInfo.h [new file with mode: 0644]
Source/JavaScriptCore/bytecode/MethodCallLinkStatus.cpp
Source/JavaScriptCore/bytecode/Opcode.cpp
Source/JavaScriptCore/bytecode/Opcode.h
Source/JavaScriptCore/bytecode/PutByIdStatus.cpp
Source/JavaScriptCore/bytecode/PutByIdStatus.h
Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp
Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
Source/JavaScriptCore/dfg/DFGCapabilities.h
Source/JavaScriptCore/dfg/DFGOSRExitCompiler.cpp
Source/JavaScriptCore/dfg/DFGOperations.cpp
Source/JavaScriptCore/heap/Heap.h
Source/JavaScriptCore/heap/MarkStack.cpp
Source/JavaScriptCore/heap/MarkedAllocator.h
Source/JavaScriptCore/heap/MarkedSpace.h
Source/JavaScriptCore/interpreter/CallFrame.cpp
Source/JavaScriptCore/interpreter/CallFrame.h
Source/JavaScriptCore/interpreter/Interpreter.cpp
Source/JavaScriptCore/interpreter/Interpreter.h
Source/JavaScriptCore/interpreter/RegisterFile.h
Source/JavaScriptCore/jit/ExecutableAllocator.h
Source/JavaScriptCore/jit/HostCallReturnValue.cpp [new file with mode: 0644]
Source/JavaScriptCore/jit/HostCallReturnValue.h [new file with mode: 0644]
Source/JavaScriptCore/jit/JIT.cpp
Source/JavaScriptCore/jit/JITCode.h
Source/JavaScriptCore/jit/JITDriver.h
Source/JavaScriptCore/jit/JITExceptions.cpp
Source/JavaScriptCore/jit/JITInlineMethods.h
Source/JavaScriptCore/jit/JITStubs.cpp
Source/JavaScriptCore/jit/JITStubs.h
Source/JavaScriptCore/jit/JSInterfaceJIT.h
Source/JavaScriptCore/llint/LLIntCommon.h [new file with mode: 0644]
Source/JavaScriptCore/llint/LLIntData.cpp [new file with mode: 0644]
Source/JavaScriptCore/llint/LLIntData.h [new file with mode: 0644]
Source/JavaScriptCore/llint/LLIntEntrypoints.cpp [new file with mode: 0644]
Source/JavaScriptCore/llint/LLIntEntrypoints.h [new file with mode: 0644]
Source/JavaScriptCore/llint/LLIntExceptions.cpp [new file with mode: 0644]
Source/JavaScriptCore/llint/LLIntExceptions.h [new file with mode: 0644]
Source/JavaScriptCore/llint/LLIntOfflineAsmConfig.h [new file with mode: 0644]
Source/JavaScriptCore/llint/LLIntOffsetsExtractor.cpp [new file with mode: 0644]
Source/JavaScriptCore/llint/LLIntSlowPaths.cpp [new file with mode: 0644]
Source/JavaScriptCore/llint/LLIntSlowPaths.h [new file with mode: 0644]
Source/JavaScriptCore/llint/LLIntThunks.cpp [new file with mode: 0644]
Source/JavaScriptCore/llint/LLIntThunks.h [new file with mode: 0644]
Source/JavaScriptCore/llint/LowLevelInterpreter.asm [new file with mode: 0644]
Source/JavaScriptCore/llint/LowLevelInterpreter.cpp [new file with mode: 0644]
Source/JavaScriptCore/llint/LowLevelInterpreter.h [new file with mode: 0644]
Source/JavaScriptCore/offlineasm/armv7.rb [new file with mode: 0644]
Source/JavaScriptCore/offlineasm/asm.rb [new file with mode: 0644]
Source/JavaScriptCore/offlineasm/ast.rb [new file with mode: 0644]
Source/JavaScriptCore/offlineasm/backends.rb [new file with mode: 0644]
Source/JavaScriptCore/offlineasm/generate_offset_extractor.rb [new file with mode: 0644]
Source/JavaScriptCore/offlineasm/instructions.rb [new file with mode: 0644]
Source/JavaScriptCore/offlineasm/offsets.rb [new file with mode: 0644]
Source/JavaScriptCore/offlineasm/opt.rb [new file with mode: 0644]
Source/JavaScriptCore/offlineasm/parser.rb [new file with mode: 0644]
Source/JavaScriptCore/offlineasm/registers.rb [new file with mode: 0644]
Source/JavaScriptCore/offlineasm/self_hash.rb [new file with mode: 0644]
Source/JavaScriptCore/offlineasm/settings.rb [new file with mode: 0644]
Source/JavaScriptCore/offlineasm/transform.rb [new file with mode: 0644]
Source/JavaScriptCore/offlineasm/x86.rb [new file with mode: 0644]
Source/JavaScriptCore/runtime/CodeSpecializationKind.h [new file with mode: 0644]
Source/JavaScriptCore/runtime/CommonSlowPaths.h
Source/JavaScriptCore/runtime/Executable.cpp
Source/JavaScriptCore/runtime/Executable.h
Source/JavaScriptCore/runtime/ExecutionHarness.h [new file with mode: 0644]
Source/JavaScriptCore/runtime/JSArray.h
Source/JavaScriptCore/runtime/JSCell.h
Source/JavaScriptCore/runtime/JSFunction.h
Source/JavaScriptCore/runtime/JSGlobalData.cpp
Source/JavaScriptCore/runtime/JSGlobalData.h
Source/JavaScriptCore/runtime/JSGlobalObject.h
Source/JavaScriptCore/runtime/JSObject.h
Source/JavaScriptCore/runtime/JSPropertyNameIterator.h
Source/JavaScriptCore/runtime/JSString.h
Source/JavaScriptCore/runtime/JSTypeInfo.h
Source/JavaScriptCore/runtime/JSValue.cpp
Source/JavaScriptCore/runtime/JSValue.h
Source/JavaScriptCore/runtime/JSVariableObject.h
Source/JavaScriptCore/runtime/Options.cpp
Source/JavaScriptCore/runtime/Options.h
Source/JavaScriptCore/runtime/ScopeChain.h
Source/JavaScriptCore/runtime/Structure.cpp
Source/JavaScriptCore/runtime/Structure.h
Source/JavaScriptCore/runtime/StructureChain.h
Source/JavaScriptCore/wtf/InlineASM.h
Source/JavaScriptCore/wtf/Platform.h
Source/JavaScriptCore/wtf/SentinelLinkedList.h
Source/JavaScriptCore/wtf/text/StringImpl.h
Source/WebCore/CMakeLists.txt
Source/WebCore/ChangeLog
Source/WebKit/CMakeLists.txt
Source/WebKit/ChangeLog
Tools/ChangeLog
Tools/DumpRenderTree/efl/CMakeLists.txt

index 815a5bc..d9d6231 100644 (file)
@@ -11,6 +11,7 @@ SET(JavaScriptCore_INCLUDE_DIRECTORIES
     "${JAVASCRIPTCORE_DIR}/debugger"
     "${JAVASCRIPTCORE_DIR}/interpreter"
     "${JAVASCRIPTCORE_DIR}/jit"
+    "${JAVASCRIPTCORE_DIR}/llint"
     "${JAVASCRIPTCORE_DIR}/parser"
     "${JAVASCRIPTCORE_DIR}/profiler"
     "${JAVASCRIPTCORE_DIR}/runtime"
@@ -102,6 +103,7 @@ SET(JavaScriptCore_SOURCES
     interpreter/RegisterFile.cpp
 
     jit/ExecutableAllocator.cpp
+    jit/HostCallReturnValue.cpp
     jit/JITArithmetic32_64.cpp
     jit/JITArithmetic.cpp
     jit/JITCall32_64.cpp
index 654cd10..8d6d252 100644
@@ -57,6 +57,7 @@ javascriptcore_cppflags += \
        -I$(srcdir)/Source/JavaScriptCore/interpreter \
        -I$(srcdir)/Source/JavaScriptCore/jit \
        -I$(srcdir)/Source/JavaScriptCore/jit \
+       -I$(srcdir)/Source/JavaScriptCore/llint \
        -I$(srcdir)/Source/JavaScriptCore/parser \
        -I$(srcdir)/Source/JavaScriptCore/profiler \
        -I$(srcdir)/Source/JavaScriptCore/runtime \
index c39eda0..26fa271 100644
@@ -81,6 +81,7 @@ javascriptcore_sources += \
        Source/JavaScriptCore/assembler/RepatchBuffer.h \
        Source/JavaScriptCore/assembler/SH4Assembler.h \
        Source/JavaScriptCore/assembler/X86Assembler.h \
+       Source/JavaScriptCore/bytecode/BytecodeConventions.h \
        Source/JavaScriptCore/bytecode/CallLinkInfo.cpp \
        Source/JavaScriptCore/bytecode/CallLinkInfo.h \
        Source/JavaScriptCore/bytecode/CallLinkStatus.cpp \
@@ -102,6 +103,7 @@ javascriptcore_sources += \
        Source/JavaScriptCore/bytecode/Instruction.h \
        Source/JavaScriptCore/bytecode/JumpTable.cpp \
        Source/JavaScriptCore/bytecode/JumpTable.h \
+       Source/JavaScriptCore/bytecode/LLIntCallLinkInfo.h \
        Source/JavaScriptCore/bytecode/LineInfo.h \
        Source/JavaScriptCore/bytecode/MethodCallLinkInfo.cpp \
        Source/JavaScriptCore/bytecode/MethodCallLinkInfo.h \
@@ -297,6 +299,8 @@ javascriptcore_sources += \
        Source/JavaScriptCore/jit/CompactJITCodeMap.h \
        Source/JavaScriptCore/jit/ExecutableAllocator.cpp \
        Source/JavaScriptCore/jit/ExecutableAllocator.h \
+       Source/JavaScriptCore/jit/HostCallReturnValue.cpp \
+       Source/JavaScriptCore/jit/HostCallReturnValue.h \
        Source/JavaScriptCore/jit/JITArithmetic32_64.cpp \
        Source/JavaScriptCore/jit/JITArithmetic.cpp \
        Source/JavaScriptCore/jit/JITCall32_64.cpp \
@@ -320,6 +324,7 @@ javascriptcore_sources += \
        Source/JavaScriptCore/jit/SpecializedThunkJIT.h \
        Source/JavaScriptCore/jit/ThunkGenerators.cpp \
        Source/JavaScriptCore/jit/ThunkGenerators.h \
+       Source/JavaScriptCore/llint/LLIntData.h \
        Source/JavaScriptCore/os-win32/stdbool.h \
        Source/JavaScriptCore/os-win32/stdint.h \
        Source/JavaScriptCore/parser/ASTBuilder.h \
@@ -370,6 +375,7 @@ javascriptcore_sources += \
        Source/JavaScriptCore/runtime/CallData.cpp \
        Source/JavaScriptCore/runtime/CallData.h \
        Source/JavaScriptCore/runtime/ClassInfo.h \
+       Source/JavaScriptCore/runtime/CodeSpecializationKind.h \
        Source/JavaScriptCore/runtime/CommonIdentifiers.cpp \
        Source/JavaScriptCore/runtime/CommonIdentifiers.h \
        Source/JavaScriptCore/runtime/CommonSlowPaths.h \
@@ -398,6 +404,7 @@ javascriptcore_sources += \
        Source/JavaScriptCore/runtime/ExceptionHelpers.h \
        Source/JavaScriptCore/runtime/Executable.cpp \
        Source/JavaScriptCore/runtime/Executable.h \
+       Source/JavaScriptCore/runtime/ExecutionHarness.h \
        Source/JavaScriptCore/runtime/FunctionConstructor.cpp \
        Source/JavaScriptCore/runtime/FunctionConstructor.h \
        Source/JavaScriptCore/runtime/FunctionPrototype.cpp \
index 11749d4..eeace17 100644
@@ -20,6 +20,7 @@ INCLUDEPATH += \
     $$SOURCE_DIR/debugger \
     $$SOURCE_DIR/interpreter \
     $$SOURCE_DIR/jit \
+    $$SOURCE_DIR/llint \
     $$SOURCE_DIR/parser \
     $$SOURCE_DIR/profiler \
     $$SOURCE_DIR/runtime \
index 8989f80..7ec9b8c 100644
                                >
                        </File>
                        <File
+                               RelativePath="..\..\jit\HostCallReturnValue.cpp"
+                               >
+                       </File>
+                       <File
                                RelativePath="..\..\jit\JIT.cpp"
                                >
                        </File>
                        </File>
                </Filter>
                <Filter
+                       Name="llint"
+                       >
+                       <File
+                               RelativePath="..\..\llint\LLIntData.h"
+                               >
+                       </File>
+               </Filter>
+               <Filter
                        Name="interpreter"
                        >
                        <File
index 33b5344..b0b45d3 100644
@@ -6,7 +6,7 @@
        >
        <Tool
                Name="VCCLCompilerTool"
-               AdditionalIncludeDirectories="&quot;$(ConfigurationBuildDir)\obj\JavaScriptCore\DerivedSources\&quot;;../../;../../API/;../../parser/;../../bytecompiler/;../../dfg/;../../jit/;../../runtime/;../../tools/;../../bytecode/;../../interpreter/;../../wtf/;../../profiler;../../assembler/;../../debugger/;../../heap/;&quot;$(WebKitLibrariesDir)\include&quot;;&quot;$(WebKitLibrariesDir)\include\private&quot;;&quot;$(ConfigurationBuildDir)\include&quot;;&quot;$(ConfigurationBuildDir)\include\JavaScriptCore&quot;;&quot;$(ConfigurationBuildDir)\include\private&quot;;&quot;$(WebKitLibrariesDir)\include\pthreads&quot;"
+               AdditionalIncludeDirectories="&quot;$(ConfigurationBuildDir)\obj\JavaScriptCore\DerivedSources\&quot;;../../;../../API/;../../parser/;../../bytecompiler/;../../dfg/;../../jit/;../../llint/;../../runtime/;../../tools/;../../bytecode/;../../interpreter/;../../wtf/;../../profiler;../../assembler/;../../debugger/;../../heap/;&quot;$(WebKitLibrariesDir)\include&quot;;&quot;$(WebKitLibrariesDir)\include\private&quot;;&quot;$(ConfigurationBuildDir)\include&quot;;&quot;$(ConfigurationBuildDir)\include\JavaScriptCore&quot;;&quot;$(ConfigurationBuildDir)\include\private&quot;;&quot;$(WebKitLibrariesDir)\include\pthreads&quot;"
                PreprocessorDefinitions="__STD_C"
                ForcedIncludeFiles="ICUVersion.h"
        />
index 71fa151..a5b151a 100644
@@ -7,6 +7,15 @@
        objects = {
 
 /* Begin PBXAggregateTarget section */
+               0F4680A914BA7FD900BFE272 /* LLInt Offsets */ = {
+                       isa = PBXAggregateTarget;
+                       buildConfigurationList = 0F4680AC14BA7FD900BFE272 /* Build configuration list for PBXAggregateTarget "LLInt Offsets" */;
+                       buildPhases = (
+                               0F4680AA14BA7FD900BFE272 /* Generate Derived Sources */,
+                       );
+                       name = "LLInt Offsets";
+                       productName = "Derived Sources";
+               };
                65FB3F6609D11E9100F49DEB /* Derived Sources */ = {
                        isa = PBXAggregateTarget;
                        buildConfigurationList = 65FB3F7709D11EBD00F49DEB /* Build configuration list for PBXAggregateTarget "Derived Sources" */;
@@ -14,6 +23,9 @@
                                65FB3F6509D11E9100F49DEB /* Generate Derived Sources */,
                                5D35DEE10C7C140B008648B2 /* Generate DTrace header */,
                        );
+                       dependencies = (
+                               0FF922D614F46B600041A24E /* PBXTargetDependency */,
+                       );
                        name = "Derived Sources";
                        productName = "Derived Sources";
                };
                0BAC94A01338728400CF135B /* ThreadRestrictionVerifier.h in Headers */ = {isa = PBXBuildFile; fileRef = 0BAC949E1338728400CF135B /* ThreadRestrictionVerifier.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0BCD83571485845200EA2003 /* TemporaryChange.h in Headers */ = {isa = PBXBuildFile; fileRef = 0BCD83541485841200EA2003 /* TemporaryChange.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0BF28A2911A33DC300638F84 /* SizeLimits.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0BF28A2811A33DC300638F84 /* SizeLimits.cpp */; };
+               0F0B839A14BCF45D00885B4F /* LLIntEntrypoints.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F0B839514BCF45A00885B4F /* LLIntEntrypoints.cpp */; };
+               0F0B839B14BCF46000885B4F /* LLIntEntrypoints.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F0B839614BCF45A00885B4F /* LLIntEntrypoints.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0F0B839C14BCF46300885B4F /* LLIntThunks.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F0B839714BCF45A00885B4F /* LLIntThunks.cpp */; };
+               0F0B839D14BCF46600885B4F /* LLIntThunks.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F0B839814BCF45A00885B4F /* LLIntThunks.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F0B83A714BCF50700885B4F /* CodeType.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F0B83A514BCF50400885B4F /* CodeType.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F0B83A914BCF56200885B4F /* HandlerInfo.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F0B83A814BCF55E00885B4F /* HandlerInfo.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F0B83AB14BCF5BB00885B4F /* ExpressionRangeInfo.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F0B83AA14BCF5B900885B4F /* ExpressionRangeInfo.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F0B83B514BCF86200885B4F /* MethodCallLinkInfo.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F0B83B314BCF85E00885B4F /* MethodCallLinkInfo.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F0B83B714BCF8E100885B4F /* GlobalResolveInfo.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F0B83B614BCF8DF00885B4F /* GlobalResolveInfo.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F0B83B914BCF95F00885B4F /* CallReturnOffsetToBytecodeOffset.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F0B83B814BCF95B00885B4F /* CallReturnOffsetToBytecodeOffset.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0F0FC45A14BD15F500B81154 /* LLIntCallLinkInfo.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F0FC45814BD15F100B81154 /* LLIntCallLinkInfo.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F15F15F14B7A73E005DE37D /* CommonSlowPaths.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F15F15D14B7A73A005DE37D /* CommonSlowPaths.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F16D726142C39C000CF784A /* BitVector.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F16D724142C39A200CF784A /* BitVector.cpp */; };
                0F21C26814BE5F6800ADC64B /* JITDriver.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F21C26614BE5F5E00ADC64B /* JITDriver.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0F21C27C14BE727600ADC64B /* ExecutionHarness.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F21C27A14BE727300ADC64B /* ExecutionHarness.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0F21C27D14BE727A00ADC64B /* CodeSpecializationKind.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F21C27914BE727300ADC64B /* CodeSpecializationKind.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0F21C27F14BEAA8200ADC64B /* BytecodeConventions.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F21C27E14BEAA8000ADC64B /* BytecodeConventions.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F242DA713F3B1E8007ADD4C /* WeakReferenceHarvester.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F242DA513F3B1BB007ADD4C /* WeakReferenceHarvester.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F2C556F14738F3100121E4F /* DFGCodeBlocks.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F2C556E14738F2E00121E4F /* DFGCodeBlocks.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F2C557014738F3500121E4F /* DFGCodeBlocks.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F2C556D14738F2E00121E4F /* DFGCodeBlocks.cpp */; };
                0F431738146BAC69007E3890 /* ListableHandler.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F431736146BAC65007E3890 /* ListableHandler.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F46808214BA572D00BFE272 /* JITExceptions.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F46808014BA572700BFE272 /* JITExceptions.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F46808314BA573100BFE272 /* JITExceptions.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F46807F14BA572700BFE272 /* JITExceptions.cpp */; };
+               0F4680A314BA7F8D00BFE272 /* LLIntExceptions.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F46809E14BA7F8200BFE272 /* LLIntExceptions.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0F4680A414BA7F8D00BFE272 /* LLIntSlowPaths.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F46809F14BA7F8200BFE272 /* LLIntSlowPaths.cpp */; settings = {COMPILER_FLAGS = "-Wno-unused-parameter"; }; };
+               0F4680A514BA7F8D00BFE272 /* LLIntSlowPaths.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F4680A014BA7F8200BFE272 /* LLIntSlowPaths.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0F4680A814BA7FAB00BFE272 /* LLIntExceptions.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F46809D14BA7F8200BFE272 /* LLIntExceptions.cpp */; };
+               0F4680CA14BBB16C00BFE272 /* LLIntCommon.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F4680C514BBB16900BFE272 /* LLIntCommon.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0F4680CB14BBB17200BFE272 /* LLIntOfflineAsmConfig.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F4680C614BBB16900BFE272 /* LLIntOfflineAsmConfig.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0F4680CC14BBB17A00BFE272 /* LowLevelInterpreter.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F4680C714BBB16900BFE272 /* LowLevelInterpreter.cpp */; };
+               0F4680CD14BBB17D00BFE272 /* LowLevelInterpreter.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F4680C814BBB16900BFE272 /* LowLevelInterpreter.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0F4680D214BBD16500BFE272 /* LLIntData.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F4680CE14BBB3D100BFE272 /* LLIntData.cpp */; };
+               0F4680D314BBD16700BFE272 /* LLIntData.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F4680CF14BBB3D100BFE272 /* LLIntData.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0F4680D414BBD24900BFE272 /* HostCallReturnValue.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F4680D014BBC5F800BFE272 /* HostCallReturnValue.cpp */; };
+               0F4680D514BBD24B00BFE272 /* HostCallReturnValue.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F4680D114BBC5F800BFE272 /* HostCallReturnValue.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F55F0F414D1063900AC7649 /* AbstractPC.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F55F0F114D1063600AC7649 /* AbstractPC.cpp */; };
                0F55F0F514D1063C00AC7649 /* AbstractPC.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F55F0F214D1063600AC7649 /* AbstractPC.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F5F08CF146C7633000472A9 /* UnconditionalFinalizer.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F5F08CE146C762F000472A9 /* UnconditionalFinalizer.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0FD82F4B142806A100179C94 /* BitVector.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FD82F491428069200179C94 /* BitVector.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0FE228ED1436AB2700196C48 /* Options.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FE228EB1436AB2300196C48 /* Options.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0FE228EE1436AB2C00196C48 /* Options.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FE228EA1436AB2300196C48 /* Options.cpp */; };
+               0FF922D414F46B410041A24E /* LLIntOffsetsExtractor.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F4680A114BA7F8200BFE272 /* LLIntOffsetsExtractor.cpp */; };
                0FFFC95514EF909A00C72532 /* DFGArithNodeFlagsInferencePhase.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FFFC94914EF909500C72532 /* DFGArithNodeFlagsInferencePhase.cpp */; };
                0FFFC95614EF909C00C72532 /* DFGArithNodeFlagsInferencePhase.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FFFC94A14EF909500C72532 /* DFGArithNodeFlagsInferencePhase.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0FFFC95714EF90A000C72532 /* DFGCFAPhase.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FFFC94B14EF909500C72532 /* DFGCFAPhase.cpp */; };
                86B99AE3117E578100DF5A90 /* StringBuffer.h in Headers */ = {isa = PBXBuildFile; fileRef = 86B99AE1117E578100DF5A90 /* StringBuffer.h */; settings = {ATTRIBUTES = (Private, ); }; };
                86BB09C0138E381B0056702F /* DFGRepatch.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 86BB09BE138E381B0056702F /* DFGRepatch.cpp */; };
                86BB09C1138E381B0056702F /* DFGRepatch.h in Headers */ = {isa = PBXBuildFile; fileRef = 86BB09BF138E381B0056702F /* DFGRepatch.h */; };
-               86C36EEA0EE1289D00B3DF59 /* MacroAssembler.h in Headers */ = {isa = PBXBuildFile; fileRef = 86C36EE90EE1289D00B3DF59 /* MacroAssembler.h */; };
+               86C36EEA0EE1289D00B3DF59 /* MacroAssembler.h in Headers */ = {isa = PBXBuildFile; fileRef = 86C36EE90EE1289D00B3DF59 /* MacroAssembler.h */; settings = {ATTRIBUTES = (Private, ); }; };
                86C568E011A213EE0007F7F0 /* MacroAssemblerARM.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 86C568DD11A213EE0007F7F0 /* MacroAssemblerARM.cpp */; };
                86C568E111A213EE0007F7F0 /* MacroAssemblerMIPS.h in Headers */ = {isa = PBXBuildFile; fileRef = 86C568DE11A213EE0007F7F0 /* MacroAssemblerMIPS.h */; };
                86C568E211A213EE0007F7F0 /* MIPSAssembler.h in Headers */ = {isa = PBXBuildFile; fileRef = 86C568DF11A213EE0007F7F0 /* MIPSAssembler.h */; };
 /* End PBXBuildFile section */
 
 /* Begin PBXContainerItemProxy section */
+               0FF922D214F46B2F0041A24E /* PBXContainerItemProxy */ = {
+                       isa = PBXContainerItemProxy;
+                       containerPortal = 0867D690FE84028FC02AAC07 /* Project object */;
+                       proxyType = 1;
+                       remoteGlobalIDString = 0F4680A914BA7FD900BFE272;
+                       remoteInfo = "LLInt Offsets";
+               };
+               0FF922D514F46B600041A24E /* PBXContainerItemProxy */ = {
+                       isa = PBXContainerItemProxy;
+                       containerPortal = 0867D690FE84028FC02AAC07 /* Project object */;
+                       proxyType = 1;
+                       remoteGlobalIDString = 0FF922C314F46B130041A24E;
+                       remoteInfo = JSCLLIntOffsetsExtractor;
+               };
                141214BE0A49190E00480255 /* PBXContainerItemProxy */ = {
                        isa = PBXContainerItemProxy;
                        containerPortal = 0867D690FE84028FC02AAC07 /* Project object */;
                0BAC949E1338728400CF135B /* ThreadRestrictionVerifier.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ThreadRestrictionVerifier.h; sourceTree = "<group>"; };
                0BCD83541485841200EA2003 /* TemporaryChange.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = TemporaryChange.h; sourceTree = "<group>"; };
                0BF28A2811A33DC300638F84 /* SizeLimits.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = SizeLimits.cpp; sourceTree = "<group>"; };
+               0F0B839514BCF45A00885B4F /* LLIntEntrypoints.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = LLIntEntrypoints.cpp; path = llint/LLIntEntrypoints.cpp; sourceTree = "<group>"; };
+               0F0B839614BCF45A00885B4F /* LLIntEntrypoints.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = LLIntEntrypoints.h; path = llint/LLIntEntrypoints.h; sourceTree = "<group>"; };
+               0F0B839714BCF45A00885B4F /* LLIntThunks.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = LLIntThunks.cpp; path = llint/LLIntThunks.cpp; sourceTree = "<group>"; };
+               0F0B839814BCF45A00885B4F /* LLIntThunks.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = LLIntThunks.h; path = llint/LLIntThunks.h; sourceTree = "<group>"; };
                0F0B83A514BCF50400885B4F /* CodeType.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CodeType.h; sourceTree = "<group>"; };
                0F0B83A814BCF55E00885B4F /* HandlerInfo.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = HandlerInfo.h; sourceTree = "<group>"; };
                0F0B83AA14BCF5B900885B4F /* ExpressionRangeInfo.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ExpressionRangeInfo.h; sourceTree = "<group>"; };
                0F0B83B314BCF85E00885B4F /* MethodCallLinkInfo.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = MethodCallLinkInfo.h; sourceTree = "<group>"; };
                0F0B83B614BCF8DF00885B4F /* GlobalResolveInfo.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = GlobalResolveInfo.h; sourceTree = "<group>"; };
                0F0B83B814BCF95B00885B4F /* CallReturnOffsetToBytecodeOffset.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CallReturnOffsetToBytecodeOffset.h; sourceTree = "<group>"; };
+               0F0FC45814BD15F100B81154 /* LLIntCallLinkInfo.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = LLIntCallLinkInfo.h; sourceTree = "<group>"; };
                0F15F15D14B7A73A005DE37D /* CommonSlowPaths.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CommonSlowPaths.h; sourceTree = "<group>"; };
                0F16D724142C39A200CF784A /* BitVector.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = BitVector.cpp; sourceTree = "<group>"; };
                0F21C26614BE5F5E00ADC64B /* JITDriver.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JITDriver.h; sourceTree = "<group>"; };
+               0F21C27914BE727300ADC64B /* CodeSpecializationKind.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CodeSpecializationKind.h; sourceTree = "<group>"; };
+               0F21C27A14BE727300ADC64B /* ExecutionHarness.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ExecutionHarness.h; sourceTree = "<group>"; };
+               0F21C27E14BEAA8000ADC64B /* BytecodeConventions.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = BytecodeConventions.h; sourceTree = "<group>"; };
                0F242DA513F3B1BB007ADD4C /* WeakReferenceHarvester.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = WeakReferenceHarvester.h; sourceTree = "<group>"; };
                0F2C556D14738F2E00121E4F /* DFGCodeBlocks.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = DFGCodeBlocks.cpp; sourceTree = "<group>"; };
                0F2C556E14738F2E00121E4F /* DFGCodeBlocks.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = DFGCodeBlocks.h; sourceTree = "<group>"; };
                0F431736146BAC65007E3890 /* ListableHandler.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ListableHandler.h; sourceTree = "<group>"; };
                0F46807F14BA572700BFE272 /* JITExceptions.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JITExceptions.cpp; sourceTree = "<group>"; };
                0F46808014BA572700BFE272 /* JITExceptions.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JITExceptions.h; sourceTree = "<group>"; };
+               0F46809D14BA7F8200BFE272 /* LLIntExceptions.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = LLIntExceptions.cpp; path = llint/LLIntExceptions.cpp; sourceTree = "<group>"; };
+               0F46809E14BA7F8200BFE272 /* LLIntExceptions.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = LLIntExceptions.h; path = llint/LLIntExceptions.h; sourceTree = "<group>"; };
+               0F46809F14BA7F8200BFE272 /* LLIntSlowPaths.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = LLIntSlowPaths.cpp; path = llint/LLIntSlowPaths.cpp; sourceTree = "<group>"; };
+               0F4680A014BA7F8200BFE272 /* LLIntSlowPaths.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = LLIntSlowPaths.h; path = llint/LLIntSlowPaths.h; sourceTree = "<group>"; };
+               0F4680A114BA7F8200BFE272 /* LLIntOffsetsExtractor.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = LLIntOffsetsExtractor.cpp; path = llint/LLIntOffsetsExtractor.cpp; sourceTree = "<group>"; };
+               0F4680C514BBB16900BFE272 /* LLIntCommon.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = LLIntCommon.h; path = llint/LLIntCommon.h; sourceTree = "<group>"; };
+               0F4680C614BBB16900BFE272 /* LLIntOfflineAsmConfig.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = LLIntOfflineAsmConfig.h; path = llint/LLIntOfflineAsmConfig.h; sourceTree = "<group>"; };
+               0F4680C714BBB16900BFE272 /* LowLevelInterpreter.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = LowLevelInterpreter.cpp; path = llint/LowLevelInterpreter.cpp; sourceTree = "<group>"; };
+               0F4680C814BBB16900BFE272 /* LowLevelInterpreter.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = LowLevelInterpreter.h; path = llint/LowLevelInterpreter.h; sourceTree = "<group>"; };
+               0F4680CE14BBB3D100BFE272 /* LLIntData.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = LLIntData.cpp; path = llint/LLIntData.cpp; sourceTree = "<group>"; };
+               0F4680CF14BBB3D100BFE272 /* LLIntData.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = LLIntData.h; path = llint/LLIntData.h; sourceTree = "<group>"; };
+               0F4680D014BBC5F800BFE272 /* HostCallReturnValue.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = HostCallReturnValue.cpp; sourceTree = "<group>"; };
+               0F4680D114BBC5F800BFE272 /* HostCallReturnValue.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = HostCallReturnValue.h; sourceTree = "<group>"; };
                0F55F0F114D1063600AC7649 /* AbstractPC.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = AbstractPC.cpp; sourceTree = "<group>"; };
                0F55F0F214D1063600AC7649 /* AbstractPC.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = AbstractPC.h; sourceTree = "<group>"; };
                0F5F08CC146BE602000472A9 /* DFGByteCodeCache.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGByteCodeCache.h; path = dfg/DFGByteCodeCache.h; sourceTree = "<group>"; };
                0FD82F491428069200179C94 /* BitVector.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = BitVector.h; sourceTree = "<group>"; };
                0FE228EA1436AB2300196C48 /* Options.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = Options.cpp; sourceTree = "<group>"; };
                0FE228EB1436AB2300196C48 /* Options.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = Options.h; sourceTree = "<group>"; };
+               0FF922CF14F46B130041A24E /* JSCLLIntOffsetsExtractor */ = {isa = PBXFileReference; explicitFileType = "compiled.mach-o.executable"; includeInIndex = 0; path = JSCLLIntOffsetsExtractor; sourceTree = BUILT_PRODUCTS_DIR; };
                0FFFC94914EF909500C72532 /* DFGArithNodeFlagsInferencePhase.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGArithNodeFlagsInferencePhase.cpp; path = dfg/DFGArithNodeFlagsInferencePhase.cpp; sourceTree = "<group>"; };
                0FFFC94A14EF909500C72532 /* DFGArithNodeFlagsInferencePhase.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGArithNodeFlagsInferencePhase.h; path = dfg/DFGArithNodeFlagsInferencePhase.h; sourceTree = "<group>"; };
                0FFFC94B14EF909500C72532 /* DFGCFAPhase.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGCFAPhase.cpp; path = dfg/DFGCFAPhase.cpp; sourceTree = "<group>"; };
 /* End PBXFileReference section */
 
 /* Begin PBXFrameworksBuildPhase section */
+               0FF922C614F46B130041A24E /* Frameworks */ = {
+                       isa = PBXFrameworksBuildPhase;
+                       buildActionMask = 2147483647;
+                       files = (
+                       );
+                       runOnlyForDeploymentPostprocessing = 0;
+               };
                1412111E0A48793C00480255 /* Frameworks */ = {
                        isa = PBXFrameworksBuildPhase;
                        buildActionMask = 2147483647;
                                141211200A48793C00480255 /* minidom */,
                                14BD59BF0A3E8F9000BAF59C /* testapi */,
                                6511230514046A4C002B101D /* testRegExp */,
+                               0FF922CF14F46B130041A24E /* JSCLLIntOffsetsExtractor */,
                        );
                        name = Products;
                        sourceTree = "<group>";
                                45E12D8806A49B0F00E9DF84 /* jsc.cpp */,
                                A767FF9F14F4502900789059 /* JSCTypedArrayStubs.h */,
                                F68EBB8C0255D4C601FF60F7 /* config.h */,
+                               0F46809C14BA7F4D00BFE272 /* llint */,
                                1432EBD70A34CAD400717B9F /* API */,
                                9688CB120ED12B4E001D649F /* assembler */,
                                969A078F0ED1D3AE00F1F681 /* bytecode */,
                        tabWidth = 4;
                        usesTabs = 0;
                };
+               0F46809C14BA7F4D00BFE272 /* llint */ = {
+                       isa = PBXGroup;
+                       children = (
+                               0F0B839514BCF45A00885B4F /* LLIntEntrypoints.cpp */,
+                               0F0B839614BCF45A00885B4F /* LLIntEntrypoints.h */,
+                               0F0B839714BCF45A00885B4F /* LLIntThunks.cpp */,
+                               0F0B839814BCF45A00885B4F /* LLIntThunks.h */,
+                               0F4680CE14BBB3D100BFE272 /* LLIntData.cpp */,
+                               0F4680CF14BBB3D100BFE272 /* LLIntData.h */,
+                               0F4680C514BBB16900BFE272 /* LLIntCommon.h */,
+                               0F4680C614BBB16900BFE272 /* LLIntOfflineAsmConfig.h */,
+                               0F4680C714BBB16900BFE272 /* LowLevelInterpreter.cpp */,
+                               0F4680C814BBB16900BFE272 /* LowLevelInterpreter.h */,
+                               0F46809D14BA7F8200BFE272 /* LLIntExceptions.cpp */,
+                               0F46809E14BA7F8200BFE272 /* LLIntExceptions.h */,
+                               0F46809F14BA7F8200BFE272 /* LLIntSlowPaths.cpp */,
+                               0F4680A014BA7F8200BFE272 /* LLIntSlowPaths.h */,
+                               0F4680A114BA7F8200BFE272 /* LLIntOffsetsExtractor.cpp */,
+                       );
+                       name = llint;
+                       sourceTree = "<group>";
+               };
                141211000A48772600480255 /* tests */ = {
                        isa = PBXGroup;
                        children = (
                1429D92C0ED22D7000B89619 /* jit */ = {
                        isa = PBXGroup;
                        children = (
+                               0F4680D014BBC5F800BFE272 /* HostCallReturnValue.cpp */,
+                               0F4680D114BBC5F800BFE272 /* HostCallReturnValue.h */,
                                0F46807F14BA572700BFE272 /* JITExceptions.cpp */,
                                0F46808014BA572700BFE272 /* JITExceptions.h */,
                                0FD82E37141AB14200179C94 /* CompactJITCodeMap.h */,
                7EF6E0BB0EB7A1EC0079AFAF /* runtime */ = {
                        isa = PBXGroup;
                        children = (
+                               0F21C27914BE727300ADC64B /* CodeSpecializationKind.h */,
+                               0F21C27A14BE727300ADC64B /* ExecutionHarness.h */,
                                0F15F15D14B7A73A005DE37D /* CommonSlowPaths.h */,
                                BCF605110E203EF800B9A64D /* ArgList.cpp */,
                                BCF605120E203EF800B9A64D /* ArgList.h */,
                969A078F0ED1D3AE00F1F681 /* bytecode */ = {
                        isa = PBXGroup;
                        children = (
+                               0F21C27E14BEAA8000ADC64B /* BytecodeConventions.h */,
+                               0F0FC45814BD15F100B81154 /* LLIntCallLinkInfo.h */,
                                0F9FC8BF14E1B5FB00D52AE0 /* PolymorphicPutByIdList.cpp */,
                                0F9FC8C014E1B5FB00D52AE0 /* PolymorphicPutByIdList.h */,
                                0F9FC8C114E1B5FB00D52AE0 /* PutKind.h */,
                                86704B8A12DBA33700A9FE7B /* YarrPattern.h in Headers */,
                                86704B4312DB8A8100A9FE7B /* YarrSyntaxChecker.h in Headers */,
                                0F15F15F14B7A73E005DE37D /* CommonSlowPaths.h in Headers */,
+                               0F4680A314BA7F8D00BFE272 /* LLIntExceptions.h in Headers */,
+                               0F4680A514BA7F8D00BFE272 /* LLIntSlowPaths.h in Headers */,
                                0F46808214BA572D00BFE272 /* JITExceptions.h in Headers */,
+                               0F4680CA14BBB16C00BFE272 /* LLIntCommon.h in Headers */,
+                               0F4680CB14BBB17200BFE272 /* LLIntOfflineAsmConfig.h in Headers */,
+                               0F4680CD14BBB17D00BFE272 /* LowLevelInterpreter.h in Headers */,
+                               0F4680D314BBD16700BFE272 /* LLIntData.h in Headers */,
+                               0F4680D514BBD24B00BFE272 /* HostCallReturnValue.h in Headers */,
+                               0F0B839B14BCF46000885B4F /* LLIntEntrypoints.h in Headers */,
+                               0F0B839D14BCF46600885B4F /* LLIntThunks.h in Headers */,
                                0F0B83A714BCF50700885B4F /* CodeType.h in Headers */,
                                0F0B83A914BCF56200885B4F /* HandlerInfo.h in Headers */,
                                0F0B83AB14BCF5BB00885B4F /* ExpressionRangeInfo.h in Headers */,
                                0F0B83B514BCF86200885B4F /* MethodCallLinkInfo.h in Headers */,
                                0F0B83B714BCF8E100885B4F /* GlobalResolveInfo.h in Headers */,
                                0F0B83B914BCF95F00885B4F /* CallReturnOffsetToBytecodeOffset.h in Headers */,
+                               0F0FC45A14BD15F500B81154 /* LLIntCallLinkInfo.h in Headers */,
                                0F21C26814BE5F6800ADC64B /* JITDriver.h in Headers */,
+                               0F21C27C14BE727600ADC64B /* ExecutionHarness.h in Headers */,
+                               0F21C27D14BE727A00ADC64B /* CodeSpecializationKind.h in Headers */,
+                               0F21C27F14BEAA8200ADC64B /* BytecodeConventions.h in Headers */,
                                0F7B294A14C3CD29007C3DB1 /* DFGCCallHelpers.h in Headers */,
                                0F7B294B14C3CD2F007C3DB1 /* DFGCapabilities.h in Headers */,
                                0F7B294C14C3CD43007C3DB1 /* DFGByteCodeCache.h in Headers */,
 /* End PBXHeadersBuildPhase section */
 
 /* Begin PBXNativeTarget section */
+               0FF922C314F46B130041A24E /* JSCLLIntOffsetsExtractor */ = {
+                       isa = PBXNativeTarget;
+                       buildConfigurationList = 0FF922CA14F46B130041A24E /* Build configuration list for PBXNativeTarget "JSCLLIntOffsetsExtractor" */;
+                       buildPhases = (
+                               0FF922C414F46B130041A24E /* Sources */,
+                               0FF922C614F46B130041A24E /* Frameworks */,
+                       );
+                       buildRules = (
+                       );
+                       dependencies = (
+                               0FF922D314F46B2F0041A24E /* PBXTargetDependency */,
+                       );
+                       name = JSCLLIntOffsetsExtractor;
+                       productInstallPath = /usr/local/bin;
+                       productName = jsc;
+                       productReference = 0FF922CF14F46B130041A24E /* JSCLLIntOffsetsExtractor */;
+                       productType = "com.apple.product-type.tool";
+               };
                1412111F0A48793C00480255 /* minidom */ = {
                        isa = PBXNativeTarget;
                        buildConfigurationList = 141211390A48798400480255 /* Build configuration list for PBXNativeTarget "minidom" */;
                                14BD59BE0A3E8F9000BAF59C /* testapi */,
                                932F5BDA0822A1C700736975 /* jsc */,
                                651122F714046A4C002B101D /* testRegExp */,
+                               0F4680A914BA7FD900BFE272 /* LLInt Offsets */,
+                               0FF922C314F46B130041A24E /* JSCLLIntOffsetsExtractor */,
                        );
                };
 /* End PBXProject section */
 
 /* Begin PBXShellScriptBuildPhase section */
+               0F4680AA14BA7FD900BFE272 /* Generate Derived Sources */ = {
+                       isa = PBXShellScriptBuildPhase;
+                       buildActionMask = 2147483647;
+                       files = (
+                       );
+                       inputPaths = (
+                               "$(SRCROOT)/llint/LowLevelInterpreter.asm",
+                       );
+                       name = "Generate Derived Sources";
+                       outputPaths = (
+                               "$(BUILT_PRODUCTS_DIR)/LLIntOffsets/LLIntDesiredOffsets.h",
+                       );
+                       runOnlyForDeploymentPostprocessing = 0;
+                       shellPath = /bin/sh;
+                       shellScript = "mkdir -p \"${BUILT_PRODUCTS_DIR}/LLIntOffsets/\"\n\n/usr/bin/env ruby \"${SRCROOT}/offlineasm/generate_offset_extractor.rb\" \"${SRCROOT}/llint/LowLevelInterpreter.asm\" \"${BUILT_PRODUCTS_DIR}/LLIntOffsets/LLIntDesiredOffsets.h\"\n";
+               };
                3713F014142905240036387F /* Check For Inappropriate Objective-C Class Names */ = {
                        isa = PBXShellScriptBuildPhase;
                        buildActionMask = 2147483647;
                        );
                        runOnlyForDeploymentPostprocessing = 0;
                        shellPath = /bin/sh;
-                       shellScript = "mkdir -p \"${BUILT_PRODUCTS_DIR}/DerivedSources/JavaScriptCore/docs\"\ncd \"${BUILT_PRODUCTS_DIR}/DerivedSources/JavaScriptCore\"\n\n/bin/ln -sfh \"${SRCROOT}\" JavaScriptCore\nexport JavaScriptCore=\"JavaScriptCore\"\nexport BUILT_PRODUCTS_DIR=\"../..\"\n\nmake --no-builtin-rules -f \"JavaScriptCore/DerivedSources.make\" -j `/usr/sbin/sysctl -n hw.ncpu`\n";
+                       shellScript = "mkdir -p \"${BUILT_PRODUCTS_DIR}/DerivedSources/JavaScriptCore/docs\"\ncd \"${BUILT_PRODUCTS_DIR}/DerivedSources/JavaScriptCore\"\n\n/bin/ln -sfh \"${SRCROOT}\" JavaScriptCore\nexport JavaScriptCore=\"JavaScriptCore\"\nexport BUILT_PRODUCTS_DIR=\"../..\"\n\nmake --no-builtin-rules -f \"JavaScriptCore/DerivedSources.make\" -j `/usr/sbin/sysctl -n hw.ncpu`\n\n/usr/bin/env ruby JavaScriptCore/offlineasm/asm.rb JavaScriptCore/llint/LowLevelInterpreter.asm ${BUILT_PRODUCTS_DIR}/JSCLLIntOffsetsExtractor LLIntAssembly.h\n";
                };
                9319586B09D9F91A00A56FD4 /* Check For Global Initializers */ = {
                        isa = PBXShellScriptBuildPhase;
 /* End PBXShellScriptBuildPhase section */
 
 /* Begin PBXSourcesBuildPhase section */
+               0FF922C414F46B130041A24E /* Sources */ = {
+                       isa = PBXSourcesBuildPhase;
+                       buildActionMask = 2147483647;
+                       files = (
+                               0FF922D414F46B410041A24E /* LLIntOffsetsExtractor.cpp in Sources */,
+                       );
+                       runOnlyForDeploymentPostprocessing = 0;
+               };
                1412111D0A48793C00480255 /* Sources */ = {
                        isa = PBXSourcesBuildPhase;
                        buildActionMask = 2147483647;
                                86704B8612DBA33700A9FE7B /* YarrJIT.cpp in Sources */,
                                86704B8912DBA33700A9FE7B /* YarrPattern.cpp in Sources */,
                                86704B4212DB8A8100A9FE7B /* YarrSyntaxChecker.cpp in Sources */,
+                               0F4680A414BA7F8D00BFE272 /* LLIntSlowPaths.cpp in Sources */,
+                               0F4680A814BA7FAB00BFE272 /* LLIntExceptions.cpp in Sources */,
                                0F46808314BA573100BFE272 /* JITExceptions.cpp in Sources */,
+                               0F4680CC14BBB17A00BFE272 /* LowLevelInterpreter.cpp in Sources */,
+                               0F4680D214BBD16500BFE272 /* LLIntData.cpp in Sources */,
+                               0F4680D414BBD24900BFE272 /* HostCallReturnValue.cpp in Sources */,
+                               0F0B839A14BCF45D00885B4F /* LLIntEntrypoints.cpp in Sources */,
+                               0F0B839C14BCF46300885B4F /* LLIntThunks.cpp in Sources */,
                                0F0B83B014BCF71600885B4F /* CallLinkInfo.cpp in Sources */,
                                0F0B83B414BCF86000885B4F /* MethodCallLinkInfo.cpp in Sources */,
                                F69E86C314C6E551002C2C62 /* NumberOfCores.cpp in Sources */,
 /* End PBXSourcesBuildPhase section */
 
 /* Begin PBXTargetDependency section */
+               0FF922D314F46B2F0041A24E /* PBXTargetDependency */ = {
+                       isa = PBXTargetDependency;
+                       target = 0F4680A914BA7FD900BFE272 /* LLInt Offsets */;
+                       targetProxy = 0FF922D214F46B2F0041A24E /* PBXContainerItemProxy */;
+               };
+               0FF922D614F46B600041A24E /* PBXTargetDependency */ = {
+                       isa = PBXTargetDependency;
+                       target = 0FF922C314F46B130041A24E /* JSCLLIntOffsetsExtractor */;
+                       targetProxy = 0FF922D514F46B600041A24E /* PBXContainerItemProxy */;
+               };
                141214BF0A49190E00480255 /* PBXTargetDependency */ = {
                        isa = PBXTargetDependency;
                        target = 1412111F0A48793C00480255 /* minidom */;
 /* End PBXTargetDependency section */
 
 /* Begin XCBuildConfiguration section */
+               0F4680AD14BA7FD900BFE272 /* Debug */ = {
+                       isa = XCBuildConfiguration;
+                       buildSettings = {
+                               PRODUCT_NAME = "Derived Sources copy";
+                       };
+                       name = Debug;
+               };
+               0F4680AE14BA7FD900BFE272 /* Release */ = {
+                       isa = XCBuildConfiguration;
+                       buildSettings = {
+                               PRODUCT_NAME = "Derived Sources copy";
+                       };
+                       name = Release;
+               };
+               0F4680AF14BA7FD900BFE272 /* Profiling */ = {
+                       isa = XCBuildConfiguration;
+                       buildSettings = {
+                               PRODUCT_NAME = "Derived Sources copy";
+                       };
+                       name = Profiling;
+               };
+               0F4680B014BA7FD900BFE272 /* Production */ = {
+                       isa = XCBuildConfiguration;
+                       buildSettings = {
+                               PRODUCT_NAME = "Derived Sources copy";
+                       };
+                       name = Production;
+               };
+               0FF922CB14F46B130041A24E /* Debug */ = {
+                       isa = XCBuildConfiguration;
+                       baseConfigurationReference = 5DAFD6CB146B686300FBEFB4 /* JSC.xcconfig */;
+                       buildSettings = {
+                               PRODUCT_NAME = JSCLLIntOffsetsExtractor;
+                               USER_HEADER_SEARCH_PATHS = ". icu $(HEADER_SEARCH_PATHS) $(BUILT_PRODUCTS_DIR)/LLIntOffsets";
+                       };
+                       name = Debug;
+               };
+               0FF922CC14F46B130041A24E /* Release */ = {
+                       isa = XCBuildConfiguration;
+                       baseConfigurationReference = 5DAFD6CB146B686300FBEFB4 /* JSC.xcconfig */;
+                       buildSettings = {
+                               PRODUCT_NAME = JSCLLIntOffsetsExtractor;
+                               USER_HEADER_SEARCH_PATHS = ". icu $(HEADER_SEARCH_PATHS) $(BUILT_PRODUCTS_DIR)/LLIntOffsets";
+                       };
+                       name = Release;
+               };
+               0FF922CD14F46B130041A24E /* Profiling */ = {
+                       isa = XCBuildConfiguration;
+                       baseConfigurationReference = 5DAFD6CB146B686300FBEFB4 /* JSC.xcconfig */;
+                       buildSettings = {
+                               PRODUCT_NAME = JSCLLIntOffsetsExtractor;
+                               USER_HEADER_SEARCH_PATHS = ". icu $(HEADER_SEARCH_PATHS) $(BUILT_PRODUCTS_DIR)/LLIntOffsets";
+                       };
+                       name = Profiling;
+               };
+               0FF922CE14F46B130041A24E /* Production */ = {
+                       isa = XCBuildConfiguration;
+                       baseConfigurationReference = 5DAFD6CB146B686300FBEFB4 /* JSC.xcconfig */;
+                       buildSettings = {
+                               PRODUCT_NAME = JSCLLIntOffsetsExtractor;
+                               USER_HEADER_SEARCH_PATHS = ". icu $(HEADER_SEARCH_PATHS) $(BUILT_PRODUCTS_DIR)/LLIntOffsets";
+                       };
+                       name = Production;
+               };
                1412113A0A48798400480255 /* Debug */ = {
                        isa = XCBuildConfiguration;
                        buildSettings = {
 /* End XCBuildConfiguration section */
 
 /* Begin XCConfigurationList section */
+               0F4680AC14BA7FD900BFE272 /* Build configuration list for PBXAggregateTarget "LLInt Offsets" */ = {
+                       isa = XCConfigurationList;
+                       buildConfigurations = (
+                               0F4680AD14BA7FD900BFE272 /* Debug */,
+                               0F4680AE14BA7FD900BFE272 /* Release */,
+                               0F4680AF14BA7FD900BFE272 /* Profiling */,
+                               0F4680B014BA7FD900BFE272 /* Production */,
+                       );
+                       defaultConfigurationIsVisible = 0;
+                       defaultConfigurationName = Production;
+               };
+               0FF922CA14F46B130041A24E /* Build configuration list for PBXNativeTarget "JSCLLIntOffsetsExtractor" */ = {
+                       isa = XCConfigurationList;
+                       buildConfigurations = (
+                               0FF922CB14F46B130041A24E /* Debug */,
+                               0FF922CC14F46B130041A24E /* Release */,
+                               0FF922CD14F46B130041A24E /* Profiling */,
+                               0FF922CE14F46B130041A24E /* Production */,
+                       );
+                       defaultConfigurationIsVisible = 0;
+                       defaultConfigurationName = Production;
+               };
                141211390A48798400480255 /* Build configuration list for PBXNativeTarget "minidom" */ = {
                        isa = XCConfigurationList;
                        buildConfigurations = (
index ba77229..e1a96d7 100644 (file)
@@ -113,6 +113,7 @@ SOURCES += \
     interpreter/RegisterFile.cpp \
     jit/ExecutableAllocatorFixedVMPool.cpp \
     jit/ExecutableAllocator.cpp \
+    jit/HostCallReturnValue.cpp \
     jit/JITArithmetic.cpp \
     jit/JITArithmetic32_64.cpp \
     jit/JITCall.cpp \
index 6177c91..2c07d13 100644 (file)
@@ -34,7 +34,7 @@
 #define GLOBAL_THUNK_ID reinterpret_cast<void*>(static_cast<intptr_t>(-1))
 #define REGEXP_CODE_ID reinterpret_cast<void*>(static_cast<intptr_t>(-2))
 
-#include <MacroAssembler.h>
+#include "MacroAssembler.h"
 #include <wtf/DataLog.h>
 #include <wtf/Noncopyable.h>
 
index c59d151..3d7d845 100644 (file)
@@ -31,8 +31,6 @@
 #include "RefPtr.h"
 #include "UnusedParam.h"
 
-#if ENABLE(ASSEMBLER)
-
 // ASSERT_VALID_CODE_POINTER checks that ptr is a non-null pointer, and that it is a valid
 // instruction address on the platform (for example, check any alignment requirements).
 #if CPU(ARM_THUMB2)
@@ -273,6 +271,14 @@ public:
     {
         ASSERT_VALID_CODE_POINTER(m_value);
     }
+    
+    static MacroAssemblerCodePtr createFromExecutableAddress(void* value)
+    {
+        ASSERT_VALID_CODE_POINTER(value);
+        MacroAssemblerCodePtr result;
+        result.m_value = value;
+        return result;
+    }
 
     explicit MacroAssemblerCodePtr(ReturnAddressPtr ra)
         : m_value(ra.value())
@@ -360,6 +366,4 @@ private:
 
 } // namespace JSC
 
-#endif // ENABLE(ASSEMBLER)
-
 #endif // MacroAssemblerCodeRef_h
diff --git a/Source/JavaScriptCore/bytecode/BytecodeConventions.h b/Source/JavaScriptCore/bytecode/BytecodeConventions.h
new file mode 100644 (file)
index 0000000..f33b060
--- /dev/null
@@ -0,0 +1,36 @@
+/*
+ * Copyright (C) 2012 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#ifndef BytecodeConventions_h
+#define BytecodeConventions_h
+
+// Register numbers used in bytecode operations have different meaning according to their ranges:
+//      0x80000000-0xFFFFFFFF  Negative indices from the CallFrame pointer are entries in the call frame, see RegisterFile.h.
+//      0x00000000-0x3FFFFFFF  Forward indices from the CallFrame pointer are local vars and temporaries within the function's callframe.
+//      0x40000000-0x7FFFFFFF  Positive indices from 0x40000000 specify entries in the constant pool on the CodeBlock.
+static const int FirstConstantRegisterIndex = 0x40000000;
+
+#endif // BytecodeConventions_h
+
index f3fd5bb..7f9e9ee 100644 (file)
 #include "CallLinkStatus.h"
 
 #include "CodeBlock.h"
+#include "LLIntCallLinkInfo.h"
 
 namespace JSC {
 
+CallLinkStatus CallLinkStatus::computeFromLLInt(CodeBlock* profiledBlock, unsigned bytecodeIndex)
+{
+    UNUSED_PARAM(profiledBlock);
+    UNUSED_PARAM(bytecodeIndex);
+#if ENABLE(LLINT)
+    Instruction* instruction = profiledBlock->instructions().begin() + bytecodeIndex;
+    LLIntCallLinkInfo* callLinkInfo = instruction[4].u.callLinkInfo;
+    
+    return CallLinkStatus(callLinkInfo->lastSeenCallee.get(), false);
+#else
+    return CallLinkStatus(0, false);
+#endif
+}
+
 CallLinkStatus CallLinkStatus::computeFor(CodeBlock* profiledBlock, unsigned bytecodeIndex)
 {
     UNUSED_PARAM(profiledBlock);
     UNUSED_PARAM(bytecodeIndex);
 #if ENABLE(JIT) && ENABLE(VALUE_PROFILER)
-    return CallLinkStatus(
-        profiledBlock->getCallLinkInfo(bytecodeIndex).lastSeenCallee.get(),
-        profiledBlock->couldTakeSlowCase(bytecodeIndex));
+    if (!profiledBlock->numberOfCallLinkInfos())
+        return computeFromLLInt(profiledBlock, bytecodeIndex);
+    
+    if (profiledBlock->couldTakeSlowCase(bytecodeIndex))
+        return CallLinkStatus(0, true);
+    
+    JSFunction* target = profiledBlock->getCallLinkInfo(bytecodeIndex).lastSeenCallee.get();
+    if (!target)
+        return computeFromLLInt(profiledBlock, bytecodeIndex);
+    
+    return CallLinkStatus(target, false);
 #else
     return CallLinkStatus(0, false);
 #endif
index e1c7410..5f72019 100644 (file)
@@ -47,15 +47,17 @@ public:
     
     static CallLinkStatus computeFor(CodeBlock*, unsigned bytecodeIndex);
     
-    bool isSet() const { return !!m_callTarget; }
+    bool isSet() const { return !!m_callTarget || m_couldTakeSlowPath; }
     
-    bool operator!() const { return !m_callTarget; }
+    bool operator!() const { return !isSet(); }
     
     bool couldTakeSlowPath() const { return m_couldTakeSlowPath; }
     
     JSFunction* callTarget() const { return m_callTarget; }
     
 private:
+    static CallLinkStatus computeFromLLInt(CodeBlock*, unsigned bytecodeIndex);
+    
     JSFunction* m_callTarget;
     bool m_couldTakeSlowPath;
 };
index 5e366a6..7c8eab9 100644 (file)
@@ -42,6 +42,7 @@
 #include "JSFunction.h"
 #include "JSStaticScopeObject.h"
 #include "JSValue.h"
+#include "LowLevelInterpreter.h"
 #include "RepatchBuffer.h"
 #include "UStringConcatenate.h"
 #include <stdio.h>
@@ -356,10 +357,10 @@ void CodeBlock::dump(ExecState* exec) const
     for (size_t i = 0; i < instructions().size(); i += opcodeLengths[exec->interpreter()->getOpcodeID(instructions()[i].u.opcode)])
         ++instructionCount;
 
-    dataLog("%lu m_instructions; %lu bytes at %p; %d parameter(s); %d callee register(s)\n\n",
+    dataLog("%lu m_instructions; %lu bytes at %p; %d parameter(s); %d callee register(s); %d variable(s)\n\n",
         static_cast<unsigned long>(instructionCount),
         static_cast<unsigned long>(instructions().size() * sizeof(Instruction)),
-        this, m_numParameters, m_numCalleeRegisters);
+        this, m_numParameters, m_numCalleeRegisters, m_numVars);
 
     Vector<Instruction>::const_iterator begin = instructions().begin();
     Vector<Instruction>::const_iterator end = instructions().end();
@@ -896,6 +897,14 @@ void CodeBlock::dump(ExecState* exec, const Vector<Instruction>::const_iterator&
             printPutByIdOp(exec, location, it, "put_by_id_transition");
             break;
         }
+        case op_put_by_id_transition_direct: {
+            printPutByIdOp(exec, location, it, "put_by_id_transition_direct");
+            break;
+        }
+        case op_put_by_id_transition_normal: {
+            printPutByIdOp(exec, location, it, "put_by_id_transition_normal");
+            break;
+        }
         case op_put_by_id_generic: {
             printPutByIdOp(exec, location, it, "put_by_id_generic");
             break;
@@ -1453,6 +1462,7 @@ CodeBlock::CodeBlock(CopyParsedBlockTag, CodeBlock& other, SymbolTable* symTab)
 {
     setNumParameters(other.numParameters());
     optimizeAfterWarmUp();
+    jitAfterWarmUp();
     
     if (other.m_rareData) {
         createRareDataIfNecessary();
@@ -1501,6 +1511,7 @@ CodeBlock::CodeBlock(ScriptExecutable* ownerExecutable, CodeType codeType, JSGlo
     ASSERT(m_source);
     
     optimizeAfterWarmUp();
+    jitAfterWarmUp();
 
 #if DUMP_CODE_BLOCK_STATISTICS
     liveCodeBlockSet.add(this);
@@ -1518,7 +1529,11 @@ CodeBlock::~CodeBlock()
 #if ENABLE(VERBOSE_VALUE_PROFILE)
     dumpValueProfiles();
 #endif
-    
+
+#if ENABLE(LLINT)    
+    while (m_incomingLLIntCalls.begin() != m_incomingLLIntCalls.end())
+        m_incomingLLIntCalls.begin()->remove();
+#endif // ENABLE(LLINT)
 #if ENABLE(JIT)
     // We may be destroyed before any CodeBlocks that refer to us are destroyed.
     // Consider that two CodeBlocks become unreachable at the same time. There
@@ -1730,8 +1745,69 @@ void CodeBlock::finalizeUnconditionally()
 #else
     static const bool verboseUnlinking = false;
 #endif
-#endif
+#endif // ENABLE(JIT)
     
+#if ENABLE(LLINT)
+    Interpreter* interpreter = m_globalData->interpreter;
+    // interpreter->classicEnabled() returns true if the old C++ interpreter is enabled. If that's enabled
+    // then we're not using LLInt.
+    if (!interpreter->classicEnabled()) {
+        for (size_t size = m_propertyAccessInstructions.size(), i = 0; i < size; ++i) {
+            Instruction* curInstruction = &instructions()[m_propertyAccessInstructions[i]];
+            switch (interpreter->getOpcodeID(curInstruction[0].u.opcode)) {
+            case op_get_by_id:
+            case op_put_by_id:
+                if (!curInstruction[4].u.structure || Heap::isMarked(curInstruction[4].u.structure.get()))
+                    break;
+                if (verboseUnlinking)
+                    dataLog("Clearing LLInt property access with structure %p.\n", curInstruction[4].u.structure.get());
+                curInstruction[4].u.structure.clear();
+                curInstruction[5].u.operand = 0;
+                break;
+            case op_put_by_id_transition_direct:
+            case op_put_by_id_transition_normal:
+                if (Heap::isMarked(curInstruction[4].u.structure.get())
+                    && Heap::isMarked(curInstruction[6].u.structure.get())
+                    && Heap::isMarked(curInstruction[7].u.structureChain.get()))
+                    break;
+                if (verboseUnlinking) {
+                    dataLog("Clearing LLInt put transition with structures %p -> %p, chain %p.\n",
+                            curInstruction[4].u.structure.get(),
+                            curInstruction[6].u.structure.get(),
+                            curInstruction[7].u.structureChain.get());
+                }
+                curInstruction[4].u.structure.clear();
+                curInstruction[6].u.structure.clear();
+                curInstruction[7].u.structureChain.clear();
+                curInstruction[0].u.opcode = interpreter->getOpcode(op_put_by_id);
+                break;
+            default:
+                ASSERT_NOT_REACHED();
+            }
+        }
+        for (size_t size = m_globalResolveInstructions.size(), i = 0; i < size; ++i) {
+            Instruction* curInstruction = &instructions()[m_globalResolveInstructions[i]];
+            ASSERT(interpreter->getOpcodeID(curInstruction[0].u.opcode) == op_resolve_global
+                   || interpreter->getOpcodeID(curInstruction[0].u.opcode) == op_resolve_global_dynamic);
+            if (!curInstruction[3].u.structure || Heap::isMarked(curInstruction[3].u.structure.get()))
+                continue;
+            if (verboseUnlinking)
+                dataLog("Clearing LLInt global resolve cache with structure %p.\n", curInstruction[3].u.structure.get());
+            curInstruction[3].u.structure.clear();
+            curInstruction[4].u.operand = 0;
+        }
+        for (unsigned i = 0; i < m_llintCallLinkInfos.size(); ++i) {
+            if (m_llintCallLinkInfos[i].isLinked() && !Heap::isMarked(m_llintCallLinkInfos[i].callee.get())) {
+                if (verboseUnlinking)
+                    dataLog("Clearing LLInt call from %p.\n", this);
+                m_llintCallLinkInfos[i].unlink();
+            }
+            if (!!m_llintCallLinkInfos[i].lastSeenCallee && !Heap::isMarked(m_llintCallLinkInfos[i].lastSeenCallee.get()))
+                m_llintCallLinkInfos[i].lastSeenCallee.clear();
+        }
+    }
+#endif // ENABLE(LLINT)
+
 #if ENABLE(DFG_JIT)
     // Check if we're not live. If we are, then jettison.
     if (!(shouldImmediatelyAssumeLivenessDuringScan() || m_dfgData->livenessHasBeenProved)) {
@@ -1754,7 +1830,7 @@ void CodeBlock::finalizeUnconditionally()
         for (unsigned i = 0; i < numberOfCallLinkInfos(); ++i) {
             if (callLinkInfo(i).isLinked() && !Heap::isMarked(callLinkInfo(i).callee.get())) {
                 if (verboseUnlinking)
-                    dataLog("Clearing call from %p.\n", this);
+                    dataLog("Clearing call from %p to %p.\n", this, callLinkInfo(i).callee.get());
                 callLinkInfo(i).unlink(*m_globalData, repatchBuffer);
             }
             if (!!callLinkInfo(i).lastSeenCallee
@@ -1852,10 +1928,12 @@ void CodeBlock::stronglyVisitStrongReferences(SlotVisitor& visitor)
     for (size_t i = 0; i < m_functionDecls.size(); ++i)
         visitor.append(&m_functionDecls[i]);
 #if ENABLE(CLASSIC_INTERPRETER)
-    for (size_t size = m_propertyAccessInstructions.size(), i = 0; i < size; ++i)
-        visitStructures(visitor, &instructions()[m_propertyAccessInstructions[i]]);
-    for (size_t size = m_globalResolveInstructions.size(), i = 0; i < size; ++i)
-        visitStructures(visitor, &instructions()[m_globalResolveInstructions[i]]);
+    if (m_globalData->interpreter->classicEnabled()) {
+        for (size_t size = m_propertyAccessInstructions.size(), i = 0; i < size; ++i)
+            visitStructures(visitor, &instructions()[m_propertyAccessInstructions[i]]);
+        for (size_t size = m_globalResolveInstructions.size(), i = 0; i < size; ++i)
+            visitStructures(visitor, &instructions()[m_globalResolveInstructions[i]]);
+    }
 #endif
 
 #if ENABLE(DFG_JIT)
@@ -1863,8 +1941,9 @@ void CodeBlock::stronglyVisitStrongReferences(SlotVisitor& visitor)
         // Make sure that executables that we have inlined don't die.
         // FIXME: If they would have otherwise died, we should probably trigger recompilation.
         for (size_t i = 0; i < inlineCallFrames().size(); ++i) {
-            visitor.append(&inlineCallFrames()[i].executable);
-            visitor.append(&inlineCallFrames()[i].callee);
+            InlineCallFrame& inlineCallFrame = inlineCallFrames()[i];
+            visitor.append(&inlineCallFrame.executable);
+            visitor.append(&inlineCallFrame.callee);
         }
     }
 #endif
@@ -2068,12 +2147,18 @@ unsigned CodeBlock::addOrFindConstant(JSValue v)
     }
     return addConstant(v);
 }
-    
+
 #if ENABLE(JIT)
 void CodeBlock::unlinkCalls()
 {
     if (!!m_alternative)
         m_alternative->unlinkCalls();
+#if ENABLE(LLINT)
+    for (size_t i = 0; i < m_llintCallLinkInfos.size(); ++i) {
+        if (m_llintCallLinkInfos[i].isLinked())
+            m_llintCallLinkInfos[i].unlink();
+    }
+#endif
     if (!(m_callLinkInfos.size() || m_methodCallLinkInfos.size()))
         return;
     if (!m_globalData->canUseJIT())
@@ -2088,10 +2173,62 @@ void CodeBlock::unlinkCalls()
 
 void CodeBlock::unlinkIncomingCalls()
 {
+#if ENABLE(LLINT)
+    while (m_incomingLLIntCalls.begin() != m_incomingLLIntCalls.end())
+        m_incomingLLIntCalls.begin()->unlink();
+#endif
+    if (m_incomingCalls.isEmpty())
+        return;
     RepatchBuffer repatchBuffer(this);
     while (m_incomingCalls.begin() != m_incomingCalls.end())
         m_incomingCalls.begin()->unlink(*m_globalData, repatchBuffer);
 }
+
+unsigned CodeBlock::bytecodeOffset(ExecState* exec, ReturnAddressPtr returnAddress)
+{
+#if ENABLE(LLINT)
+    if (returnAddress.value() >= bitwise_cast<void*>(&llint_begin)
+        && returnAddress.value() <= bitwise_cast<void*>(&llint_end)) {
+        ASSERT(exec->codeBlock());
+        ASSERT(exec->codeBlock() == this);
+        ASSERT(JITCode::isBaselineCode(getJITType()));
+        Instruction* instruction = exec->currentVPC();
+        ASSERT(instruction);
+        
+        // The LLInt stores the PC after the call instruction rather than the PC of
+        // the call instruction. This requires some correcting. We rely on the fact
+        // that the preceding instruction must be one of the call instructions, so
+        // either it's a call_varargs or it's a call, construct, or eval.
+        ASSERT(OPCODE_LENGTH(op_call_varargs) <= OPCODE_LENGTH(op_call));
+        ASSERT(OPCODE_LENGTH(op_call) == OPCODE_LENGTH(op_construct));
+        ASSERT(OPCODE_LENGTH(op_call) == OPCODE_LENGTH(op_call_eval));
+        if (instruction[-OPCODE_LENGTH(op_call_varargs)].u.pointer == bitwise_cast<void*>(llint_op_call_varargs)) {
+            // We know that the preceding instruction must be op_call_varargs because there is no way that
+            // the pointer to the call_varargs could be an operand to the call.
+            instruction -= OPCODE_LENGTH(op_call_varargs);
+            ASSERT(instruction[-OPCODE_LENGTH(op_call)].u.pointer != bitwise_cast<void*>(llint_op_call)
+                   && instruction[-OPCODE_LENGTH(op_call)].u.pointer != bitwise_cast<void*>(llint_op_construct)
+                   && instruction[-OPCODE_LENGTH(op_call)].u.pointer != bitwise_cast<void*>(llint_op_call_eval));
+        } else {
+            // Must be that the last instruction was some op_call.
+            ASSERT(instruction[-OPCODE_LENGTH(op_call)].u.pointer == bitwise_cast<void*>(llint_op_call)
+                   || instruction[-OPCODE_LENGTH(op_call)].u.pointer == bitwise_cast<void*>(llint_op_construct)
+                   || instruction[-OPCODE_LENGTH(op_call)].u.pointer == bitwise_cast<void*>(llint_op_call_eval));
+            instruction -= OPCODE_LENGTH(op_call);
+        }
+        
+        return bytecodeOffset(instruction);
+    }
+#else
+    UNUSED_PARAM(exec);
+#endif
+    if (!m_rareData)
+        return 1;
+    Vector<CallReturnOffsetToBytecodeOffset>& callIndices = m_rareData->m_callReturnIndexVector;
+    if (!callIndices.size())
+        return 1;
+    return binarySearch<CallReturnOffsetToBytecodeOffset, unsigned, getCallReturnOffset>(callIndices.begin(), callIndices.size(), getJITCode().offsetOf(returnAddress.value()))->bytecodeOffset;
+}
 #endif
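A note on the `bytecodeOffset(ExecState*, ReturnAddressPtr)` hunk above: the LLInt saves the PC *after* the call instruction, so the code walks backward by the call opcode's length, disambiguating `op_call_varargs` from the equal-length `op_call`/`op_construct`/`op_call_eval` group. A minimal standalone sketch of that backward correction, with hypothetical opcode lengths (not JSC's real encodings):

```cpp
#include <cassert>
#include <cstddef>

// Simplified stand-ins for JSC's opcode machinery (values are illustrative).
enum Opcode { op_call = 1, op_call_varargs = 2, op_other = 3 };
const size_t op_call_length = 6;          // assumed: shared by construct/eval
const size_t op_call_varargs_length = 5;  // assumed: <= op_call_length

// Given an index just past a call instruction (what the LLInt saves),
// step back to the start of the call instruction itself.
size_t correctedOffset(const Opcode* stream, size_t returnPC)
{
    // Check the shorter encoding first: if the slot op_call_varargs_length
    // back holds op_call_varargs, that slot cannot be an operand of one of
    // the longer call forms, so the preceding instruction is call_varargs.
    if (stream[returnPC - op_call_varargs_length] == op_call_varargs)
        return returnPC - op_call_varargs_length;
    // Otherwise the preceding instruction must be call/construct/eval.
    return returnPC - op_call_length;
}
```

The ordering of the two checks mirrors the patch: the varargs test is safe precisely because a varargs opcode pointer can never appear as an operand of the longer call forms.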
 
 void CodeBlock::clearEvalCache()
@@ -2187,24 +2324,45 @@ bool FunctionCodeBlock::canCompileWithDFGInternal()
 
 void ProgramCodeBlock::jettison()
 {
-    ASSERT(getJITType() != JITCode::BaselineJIT);
+    ASSERT(JITCode::isOptimizingJIT(getJITType()));
     ASSERT(this == replacement());
     static_cast<ProgramExecutable*>(ownerExecutable())->jettisonOptimizedCode(*globalData());
 }
 
 void EvalCodeBlock::jettison()
 {
-    ASSERT(getJITType() != JITCode::BaselineJIT);
+    ASSERT(JITCode::isOptimizingJIT(getJITType()));
     ASSERT(this == replacement());
     static_cast<EvalExecutable*>(ownerExecutable())->jettisonOptimizedCode(*globalData());
 }
 
 void FunctionCodeBlock::jettison()
 {
-    ASSERT(getJITType() != JITCode::BaselineJIT);
+    ASSERT(JITCode::isOptimizingJIT(getJITType()));
     ASSERT(this == replacement());
     static_cast<FunctionExecutable*>(ownerExecutable())->jettisonOptimizedCodeFor(*globalData(), m_isConstructor ? CodeForConstruct : CodeForCall);
 }
+
+void ProgramCodeBlock::jitCompileImpl(JSGlobalData& globalData)
+{
+    ASSERT(getJITType() == JITCode::InterpreterThunk);
+    ASSERT(this == replacement());
+    return static_cast<ProgramExecutable*>(ownerExecutable())->jitCompile(globalData);
+}
+
+void EvalCodeBlock::jitCompileImpl(JSGlobalData& globalData)
+{
+    ASSERT(getJITType() == JITCode::InterpreterThunk);
+    ASSERT(this == replacement());
+    return static_cast<EvalExecutable*>(ownerExecutable())->jitCompile(globalData);
+}
+
+void FunctionCodeBlock::jitCompileImpl(JSGlobalData& globalData)
+{
+    ASSERT(getJITType() == JITCode::InterpreterThunk);
+    ASSERT(this == replacement());
+    return static_cast<FunctionExecutable*>(ownerExecutable())->jitCompileFor(globalData, m_isConstructor ? CodeForConstruct : CodeForCall);
+}
 #endif
 
 #if ENABLE(VALUE_PROFILER)
index 3091df0..8e6d07b 100644

@@ -30,6 +30,7 @@
 #ifndef CodeBlock_h
 #define CodeBlock_h
 
+#include "BytecodeConventions.h"
 #include "CallLinkInfo.h"
 #include "CallReturnOffsetToBytecodeOffset.h"
 #include "CodeOrigin.h"
@@ -50,6 +51,7 @@
 #include "JITWriteBarrier.h"
 #include "JSGlobalObject.h"
 #include "JumpTable.h"
+#include "LLIntCallLinkInfo.h"
 #include "LineInfo.h"
 #include "Nodes.h"
 #include "PredictionTracker.h"
 #include <wtf/Vector.h>
 #include "StructureStubInfo.h"
 
-// Register numbers used in bytecode operations have different meaning according to their ranges:
-//      0x80000000-0xFFFFFFFF  Negative indices from the CallFrame pointer are entries in the call frame, see RegisterFile.h.
-//      0x00000000-0x3FFFFFFF  Forwards indices from the CallFrame pointer are local vars and temporaries with the function's callframe.
-//      0x40000000-0x7FFFFFFF  Positive indices from 0x40000000 specify entries in the constant pool on the CodeBlock.
-static const int FirstConstantRegisterIndex = 0x40000000;
-
 namespace JSC {
 
-    class ExecState;
     class DFGCodeBlocks;
+    class ExecState;
+    class LLIntOffsetsExtractor;
 
     inline int unmodifiedArgumentsRegister(int argumentsRegister) { return argumentsRegister - 1; }
 
@@ -83,6 +80,7 @@ namespace JSC {
     class CodeBlock : public UnconditionalFinalizer, public WeakReferenceHarvester {
         WTF_MAKE_FAST_ALLOCATED;
         friend class JIT;
+        friend class LLIntOffsetsExtractor;
     public:
         enum CopyParsedBlockTag { CopyParsedBlock };
     protected:
@@ -123,7 +121,7 @@ namespace JSC {
             while (result->alternative())
                 result = result->alternative();
             ASSERT(result);
-            ASSERT(result->getJITType() == JITCode::BaselineJIT);
+            ASSERT(JITCode::isBaselineCode(result->getJITType()));
             return result;
         }
 #endif
@@ -192,15 +190,7 @@ namespace JSC {
             return *(binarySearch<MethodCallLinkInfo, unsigned, getMethodCallLinkInfoBytecodeIndex>(m_methodCallLinkInfos.begin(), m_methodCallLinkInfos.size(), bytecodeIndex));
         }
 
-        unsigned bytecodeOffset(ReturnAddressPtr returnAddress)
-        {
-            if (!m_rareData)
-                return 1;
-            Vector<CallReturnOffsetToBytecodeOffset>& callIndices = m_rareData->m_callReturnIndexVector;
-            if (!callIndices.size())
-                return 1;
-            return binarySearch<CallReturnOffsetToBytecodeOffset, unsigned, getCallReturnOffset>(callIndices.begin(), callIndices.size(), getJITCode().offsetOf(returnAddress.value()))->bytecodeOffset;
-        }
+        unsigned bytecodeOffset(ExecState*, ReturnAddressPtr);
 
         unsigned bytecodeOffsetForCallAtIndex(unsigned index)
         {
@@ -221,11 +211,17 @@ namespace JSC {
         {
             m_incomingCalls.push(incoming);
         }
+#if ENABLE(LLINT)
+        void linkIncomingCall(LLIntCallLinkInfo* incoming)
+        {
+            m_incomingLLIntCalls.push(incoming);
+        }
+#endif // ENABLE(LLINT)
         
         void unlinkIncomingCalls();
-#endif
+#endif // ENABLE(JIT)
 
-#if ENABLE(DFG_JIT)
+#if ENABLE(DFG_JIT) || ENABLE(LLINT)
         void setJITCodeMap(PassOwnPtr<CompactJITCodeMap> jitCodeMap)
         {
             m_jitCodeMap = jitCodeMap;
@@ -234,7 +230,9 @@ namespace JSC {
         {
             return m_jitCodeMap.get();
         }
+#endif
         
+#if ENABLE(DFG_JIT)
         void createDFGDataIfNecessary()
         {
             if (!!m_dfgData)
@@ -333,12 +331,11 @@ namespace JSC {
         }
 #endif
 
-#if ENABLE(CLASSIC_INTERPRETER)
         unsigned bytecodeOffset(Instruction* returnAddress)
         {
+            ASSERT(returnAddress >= instructions().begin() && returnAddress < instructions().end());
             return static_cast<Instruction*>(returnAddress) - instructions().begin();
         }
-#endif
 
         void setIsNumericCompareFunction(bool isNumericCompareFunction) { m_isNumericCompareFunction = isNumericCompareFunction; }
         bool isNumericCompareFunction() { return m_isNumericCompareFunction; }
@@ -376,6 +373,20 @@ namespace JSC {
         ExecutableMemoryHandle* executableMemory() { return getJITCode().getExecutableMemory(); }
         virtual JSObject* compileOptimized(ExecState*, ScopeChainNode*) = 0;
         virtual void jettison() = 0;
+        bool jitCompile(JSGlobalData& globalData)
+        {
+            if (getJITType() != JITCode::InterpreterThunk) {
+                ASSERT(getJITType() == JITCode::BaselineJIT);
+                return false;
+            }
+#if ENABLE(JIT)
+            jitCompileImpl(globalData);
+            return true;
+#else
+            UNUSED_PARAM(globalData);
+            return false;
+#endif
+        }
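The `jitCompile` helper added above encodes the tier-up contract: only code still running as an `InterpreterThunk` is eligible to compile; baseline code is already past that tier. A toy model of that gate (type and field names are invented for illustration):

```cpp
#include <cassert>

// Simplified tier enum mirroring the JITType distinction in the patch.
enum Tier { InterpreterThunk, BaselineJIT, DFGJIT };

struct FakeCodeBlock {
    Tier tier = InterpreterThunk;
    // Returns true if a compile was triggered, mirroring jitCompile()'s
    // contract: only code still in the interpreter thunk is eligible, and
    // calling it on already-compiled baseline code is a harmless no-op.
    bool jitCompile()
    {
        if (tier != InterpreterThunk) {
            assert(tier == BaselineJIT);  // never reached on optimized code
            return false;
        }
        tier = BaselineJIT;
        return true;
    }
};
```

The idempotent second call matters because multiple call sites (OSR checks, prologue counters) may race to request compilation of the same block.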
         virtual CodeBlock* replacement() = 0;
 
         enum CompileWithDFGState {
@@ -395,13 +406,13 @@ namespace JSC {
 
         bool hasOptimizedReplacement()
         {
-            ASSERT(getJITType() == JITCode::BaselineJIT);
+            ASSERT(JITCode::isBaselineCode(getJITType()));
             bool result = replacement()->getJITType() > getJITType();
 #if !ASSERT_DISABLED
             if (result)
                 ASSERT(replacement()->getJITType() == JITCode::DFGJIT);
             else {
-                ASSERT(replacement()->getJITType() == JITCode::BaselineJIT);
+                ASSERT(JITCode::isBaselineCode(replacement()->getJITType()));
                 ASSERT(replacement() == this);
             }
 #endif
@@ -460,18 +471,21 @@ namespace JSC {
 
         void clearEvalCache();
 
-#if ENABLE(CLASSIC_INTERPRETER)
         void addPropertyAccessInstruction(unsigned propertyAccessInstruction)
         {
-            if (!m_globalData->canUseJIT())
-                m_propertyAccessInstructions.append(propertyAccessInstruction);
+            m_propertyAccessInstructions.append(propertyAccessInstruction);
         }
         void addGlobalResolveInstruction(unsigned globalResolveInstruction)
         {
-            if (!m_globalData->canUseJIT())
-                m_globalResolveInstructions.append(globalResolveInstruction);
+            m_globalResolveInstructions.append(globalResolveInstruction);
         }
         bool hasGlobalResolveInstructionAtBytecodeOffset(unsigned bytecodeOffset);
+#if ENABLE(LLINT)
+        LLIntCallLinkInfo* addLLIntCallLinkInfo()
+        {
+            m_llintCallLinkInfos.append(LLIntCallLinkInfo());
+            return &m_llintCallLinkInfos.last();
+        }
 #endif
 #if ENABLE(JIT)
         void setNumberOfStructureStubInfos(size_t size) { m_structureStubInfos.grow(size); }
@@ -480,8 +494,7 @@ namespace JSC {
 
         void addGlobalResolveInfo(unsigned globalResolveInstruction)
         {
-            if (m_globalData->canUseJIT())
-                m_globalResolveInfos.append(GlobalResolveInfo(globalResolveInstruction));
+            m_globalResolveInfos.append(GlobalResolveInfo(globalResolveInstruction));
         }
         GlobalResolveInfo& globalResolveInfo(int index) { return m_globalResolveInfos[index]; }
         bool hasGlobalResolveInfoAtBytecodeOffset(unsigned bytecodeOffset);
@@ -492,6 +505,7 @@ namespace JSC {
 
         void addMethodCallLinkInfos(unsigned n) { ASSERT(m_globalData->canUseJIT()); m_methodCallLinkInfos.grow(n); }
         MethodCallLinkInfo& methodCallLinkInfo(int index) { return m_methodCallLinkInfos[index]; }
+        size_t numberOfMethodCallLinkInfos() { return m_methodCallLinkInfos.size(); }
 #endif
         
 #if ENABLE(VALUE_PROFILER)
@@ -533,6 +547,10 @@ namespace JSC {
                                    bytecodeOffset].u.opcode)) - 1].u.profile == result);
             return result;
         }
+        PredictedType valueProfilePredictionForBytecodeOffset(int bytecodeOffset)
+        {
+            return valueProfileForBytecodeOffset(bytecodeOffset)->computeUpdatedPrediction();
+        }
         
         unsigned totalNumberOfValueProfiles()
         {
@@ -559,12 +577,16 @@ namespace JSC {
         
         bool likelyToTakeSlowCase(int bytecodeOffset)
         {
+            if (!numberOfRareCaseProfiles())
+                return false;
             unsigned value = rareCaseProfileForBytecodeOffset(bytecodeOffset)->m_counter;
             return value >= Options::likelyToTakeSlowCaseMinimumCount && static_cast<double>(value) / m_executionEntryCount >= Options::likelyToTakeSlowCaseThreshold;
         }
         
         bool couldTakeSlowCase(int bytecodeOffset)
         {
+            if (!numberOfRareCaseProfiles())
+                return false;
             unsigned value = rareCaseProfileForBytecodeOffset(bytecodeOffset)->m_counter;
             return value >= Options::couldTakeSlowCaseMinimumCount && static_cast<double>(value) / m_executionEntryCount >= Options::couldTakeSlowCaseThreshold;
         }
@@ -583,12 +605,16 @@ namespace JSC {
         
         bool likelyToTakeSpecialFastCase(int bytecodeOffset)
         {
+            if (!numberOfRareCaseProfiles())
+                return false;
             unsigned specialFastCaseCount = specialFastCaseProfileForBytecodeOffset(bytecodeOffset)->m_counter;
             return specialFastCaseCount >= Options::likelyToTakeSlowCaseMinimumCount && static_cast<double>(specialFastCaseCount) / m_executionEntryCount >= Options::likelyToTakeSlowCaseThreshold;
         }
         
         bool likelyToTakeDeepestSlowCase(int bytecodeOffset)
         {
+            if (!numberOfRareCaseProfiles())
+                return false;
             unsigned slowCaseCount = rareCaseProfileForBytecodeOffset(bytecodeOffset)->m_counter;
             unsigned specialFastCaseCount = specialFastCaseProfileForBytecodeOffset(bytecodeOffset)->m_counter;
             unsigned value = slowCaseCount - specialFastCaseCount;
@@ -597,6 +623,8 @@ namespace JSC {
         
         bool likelyToTakeAnySlowCase(int bytecodeOffset)
         {
+            if (!numberOfRareCaseProfiles())
+                return false;
             unsigned slowCaseCount = rareCaseProfileForBytecodeOffset(bytecodeOffset)->m_counter;
             unsigned specialFastCaseCount = specialFastCaseProfileForBytecodeOffset(bytecodeOffset)->m_counter;
             unsigned value = slowCaseCount + specialFastCaseCount;
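The guarded profiling predicates in this hunk (`likelyToTakeSlowCase` and friends) all share one shape: an absolute floor on the slow-path count plus a minimum fraction of total executions. A sketch of that shared heuristic; the threshold constants here are placeholders, not JSC's actual `Options` values:

```cpp
#include <cassert>

// Shared shape of likelyToTakeSlowCase and friends: require both an absolute
// floor on the slow-path count and a minimum fraction of all executions.
// minimumCount/threshold are illustrative defaults, not JSC's Options values.
bool likelySlow(unsigned slowCount, unsigned entryCount,
                unsigned minimumCount = 100, double threshold = 0.3)
{
    if (!entryCount)
        return false;  // no profile data: assume the fast path
    return slowCount >= minimumCount
        && static_cast<double>(slowCount) / entryCount >= threshold;
}
```

The absolute floor keeps rarely-run code from tripping the ratio on noise; the ratio keeps hot code from tripping the floor on a handful of cold-start slow paths.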
@@ -694,7 +722,7 @@ namespace JSC {
         
         bool addFrequentExitSite(const DFG::FrequentExitSite& site)
         {
-            ASSERT(getJITType() == JITCode::BaselineJIT);
+            ASSERT(JITCode::isBaselineCode(getJITType()));
             return m_exitProfile.add(site);
         }
 
@@ -802,6 +830,29 @@ namespace JSC {
         void copyPostParseDataFrom(CodeBlock* alternative);
         void copyPostParseDataFromAlternative();
         
+        // Functions for controlling when JITting kicks in, in a mixed mode
+        // execution world.
+        
+        void dontJITAnytimeSoon()
+        {
+            m_llintExecuteCounter = Options::executionCounterValueForDontJITAnytimeSoon;
+        }
+        
+        void jitAfterWarmUp()
+        {
+            m_llintExecuteCounter = Options::executionCounterValueForJITAfterWarmUp;
+        }
+        
+        void jitSoon()
+        {
+            m_llintExecuteCounter = Options::executionCounterValueForJITSoon;
+        }
+        
+        int32_t llintExecuteCounter() const
+        {
+            return m_llintExecuteCounter;
+        }
+        
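The `m_llintExecuteCounter` accessors added above drive when the LLInt tiers up to the baseline JIT: the counter is seeded with a negative value and counted toward zero on entry. A toy model of that scheme (the seed values are illustrative, not the real `Options::executionCounterValue…` constants):

```cpp
#include <cassert>
#include <cstdint>

// Toy version of the LLInt execute counter: seed with a negative value,
// count up on each entry, and fire tier-up when the counter reaches zero.
struct ExecuteCounter {
    int32_t counter = INT32_MIN;               // "don't JIT anytime soon"
    void jitAfterWarmUp() { counter = -1000; } // assumed warm-up threshold
    void jitSoon()        { counter = -100; }  // assumed eager threshold
    bool countEntryAndCheck()  // called once per function entry in the LLInt
    {
        return ++counter >= 0;
    }
};
```

Encoding the threshold as a negative seed keeps the hot-path check to a single increment-and-test, which matters for an interpreter prologue executed on every call.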
         // Functions for controlling when tiered compilation kicks in. This
         // controls both when the optimizing compiler is invoked and when OSR
         // entry happens. Two triggers exist: the loop trigger and the return
@@ -994,6 +1045,9 @@ namespace JSC {
         bool m_shouldDiscardBytecode;
 
     protected:
+#if ENABLE(JIT)
+        virtual void jitCompileImpl(JSGlobalData&) = 0;
+#endif
         virtual void visitWeakReferences(SlotVisitor&);
         virtual void finalizeUnconditionally();
         
@@ -1075,9 +1129,11 @@ namespace JSC {
         RefPtr<SourceProvider> m_source;
         unsigned m_sourceOffset;
 
-#if ENABLE(CLASSIC_INTERPRETER)
         Vector<unsigned> m_propertyAccessInstructions;
         Vector<unsigned> m_globalResolveInstructions;
+#if ENABLE(LLINT)
+        SegmentedVector<LLIntCallLinkInfo, 8> m_llintCallLinkInfos;
+        SentinelLinkedList<LLIntCallLinkInfo, BasicRawSentinelNode<LLIntCallLinkInfo> > m_incomingLLIntCalls;
 #endif
 #if ENABLE(JIT)
         Vector<StructureStubInfo> m_structureStubInfos;
@@ -1088,9 +1144,10 @@ namespace JSC {
         MacroAssemblerCodePtr m_jitCodeWithArityCheck;
         SentinelLinkedList<CallLinkInfo, BasicRawSentinelNode<CallLinkInfo> > m_incomingCalls;
 #endif
-#if ENABLE(DFG_JIT)
+#if ENABLE(DFG_JIT) || ENABLE(LLINT)
         OwnPtr<CompactJITCodeMap> m_jitCodeMap;
-        
+#endif
+#if ENABLE(DFG_JIT)
         struct WeakReferenceTransition {
             WeakReferenceTransition() { }
             
@@ -1153,12 +1210,14 @@ namespace JSC {
 
         OwnPtr<CodeBlock> m_alternative;
         
+        int32_t m_llintExecuteCounter;
+        
         int32_t m_jitExecuteCounter;
         uint32_t m_speculativeSuccessCounter;
         uint32_t m_speculativeFailCounter;
         uint8_t m_optimizationDelayCounter;
         uint8_t m_reoptimizationRetryCounter;
-
+        
         struct RareData {
            WTF_MAKE_FAST_ALLOCATED;
         public:
@@ -1234,6 +1293,7 @@ namespace JSC {
     protected:
         virtual JSObject* compileOptimized(ExecState*, ScopeChainNode*);
         virtual void jettison();
+        virtual void jitCompileImpl(JSGlobalData&);
         virtual CodeBlock* replacement();
         virtual bool canCompileWithDFGInternal();
 #endif
@@ -1268,6 +1328,7 @@ namespace JSC {
     protected:
         virtual JSObject* compileOptimized(ExecState*, ScopeChainNode*);
         virtual void jettison();
+        virtual void jitCompileImpl(JSGlobalData&);
         virtual CodeBlock* replacement();
         virtual bool canCompileWithDFGInternal();
 #endif
@@ -1305,6 +1366,7 @@ namespace JSC {
     protected:
         virtual JSObject* compileOptimized(ExecState*, ScopeChainNode*);
         virtual void jettison();
+        virtual void jitCompileImpl(JSGlobalData&);
         virtual CodeBlock* replacement();
         virtual bool canCompileWithDFGInternal();
 #endif
index 5eff1d4..11aead3 100644
 #include "GetByIdStatus.h"
 
 #include "CodeBlock.h"
+#include "LowLevelInterpreter.h"
 
 namespace JSC {
 
+GetByIdStatus GetByIdStatus::computeFromLLInt(CodeBlock* profiledBlock, unsigned bytecodeIndex, Identifier& ident)
+{
+    UNUSED_PARAM(profiledBlock);
+    UNUSED_PARAM(bytecodeIndex);
+    UNUSED_PARAM(ident);
+#if ENABLE(LLINT)
+    Instruction* instruction = profiledBlock->instructions().begin() + bytecodeIndex;
+    
+    if (instruction[0].u.opcode == llint_op_method_check)
+        instruction++;
+
+    Structure* structure = instruction[4].u.structure.get();
+    if (!structure)
+        return GetByIdStatus(NoInformation, StructureSet(), notFound, false);
+    
+    size_t offset = structure->get(*profiledBlock->globalData(), ident);
+    if (offset == notFound)
+        return GetByIdStatus(NoInformation, StructureSet(), notFound, false);
+    
+    return GetByIdStatus(SimpleDirect, StructureSet(structure), offset, false);
+#else
+    return GetByIdStatus(NoInformation, StructureSet(), notFound, false);
+#endif
+}
+
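`GetByIdStatus::computeFromLLInt` above reads the structure the LLInt cached into the instruction stream and, if the property resolves on that structure, reports a simple direct load. A self-contained sketch of that decision, with toy stand-ins for `Structure` and the status type:

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <string>

const size_t notFound = static_cast<size_t>(-1);

// Toy stand-in: a Structure maps property names to storage offsets.
struct Structure {
    std::map<std::string, size_t> offsets;
    size_t get(const std::string& ident) const
    {
        std::map<std::string, size_t>::const_iterator it = offsets.find(ident);
        return it == offsets.end() ? notFound : it->second;
    }
};

enum State { NoInformation, SimpleDirect };
struct Status { State state; size_t offset; };

// Mirrors computeFromLLInt's shape: no cached structure, or a property miss
// on it, yields NoInformation; a hit yields a direct load at a known offset.
Status computeFromCachedStructure(const Structure* cached, const std::string& ident)
{
    if (!cached)
        return { NoInformation, notFound };
    size_t offset = cached->get(ident);
    if (offset == notFound)
        return { NoInformation, notFound };
    return { SimpleDirect, offset };
}
```

This is why the JIT paths in `computeFor` fall back to `computeFromLLInt` when no stub info exists: the LLInt's inline caches carry usable profiling even before any machine code is generated.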
 GetByIdStatus GetByIdStatus::computeFor(CodeBlock* profiledBlock, unsigned bytecodeIndex, Identifier& ident)
 {
     UNUSED_PARAM(profiledBlock);
     UNUSED_PARAM(bytecodeIndex);
     UNUSED_PARAM(ident);
 #if ENABLE(JIT) && ENABLE(VALUE_PROFILER)
+    if (!profiledBlock->numberOfStructureStubInfos())
+        return computeFromLLInt(profiledBlock, bytecodeIndex, ident);
+    
     // First check if it either makes calls, in which case we want to be super careful, or
     // if it's not set at all, in which case we punt.
     StructureStubInfo& stubInfo = profiledBlock->getStubInfo(bytecodeIndex);
     if (!stubInfo.seen)
-        return GetByIdStatus(NoInformation, StructureSet(), notFound);
+        return computeFromLLInt(profiledBlock, bytecodeIndex, ident);
     
     PolymorphicAccessStructureList* list;
     int listSize;
@@ -60,18 +89,19 @@ GetByIdStatus GetByIdStatus::computeFor(CodeBlock* profiledBlock, unsigned bytec
     }
     for (int i = 0; i < listSize; ++i) {
         if (!list->list[i].isDirect)
-            return GetByIdStatus(MakesCalls, StructureSet(), notFound);
+            return GetByIdStatus(MakesCalls, StructureSet(), notFound, true);
     }
     
     // Next check if it takes slow case, in which case we want to be kind of careful.
     if (profiledBlock->likelyToTakeSlowCase(bytecodeIndex))
-        return GetByIdStatus(TakesSlowPath, StructureSet(), notFound);
+        return GetByIdStatus(TakesSlowPath, StructureSet(), notFound, true);
     
     // Finally figure out if we can derive an access strategy.
     GetByIdStatus result;
+    result.m_wasSeenInJIT = true;
     switch (stubInfo.accessType) {
     case access_unset:
-        return GetByIdStatus(NoInformation, StructureSet(), notFound);
+        return computeFromLLInt(profiledBlock, bytecodeIndex, ident);
         
     case access_get_by_id_self: {
         Structure* structure = stubInfo.u.getByIdSelf.baseObjectStructure.get();
@@ -130,7 +160,7 @@ GetByIdStatus GetByIdStatus::computeFor(CodeBlock* profiledBlock, unsigned bytec
     
     return result;
 #else // ENABLE(JIT)
-    return GetByIdStatus(NoInformation, StructureSet(), notFound);
+    return GetByIdStatus(NoInformation, StructureSet(), notFound, false);
 #endif // ENABLE(JIT)
 }
 
index 00e50e7..39476c0 100644
@@ -49,10 +49,11 @@ public:
     {
     }
     
-    GetByIdStatus(State state, const StructureSet& structureSet, size_t offset)
+    GetByIdStatus(State state, const StructureSet& structureSet, size_t offset, bool wasSeenInJIT)
         : m_state(state)
         , m_structureSet(structureSet)
         , m_offset(offset)
+        , m_wasSeenInJIT(wasSeenInJIT)
     {
         ASSERT((state == SimpleDirect) == (offset != notFound));
     }
@@ -70,10 +71,15 @@ public:
     const StructureSet& structureSet() const { return m_structureSet; }
     size_t offset() const { return m_offset; }
     
+    bool wasSeenInJIT() const { return m_wasSeenInJIT; }
+    
 private:
+    static GetByIdStatus computeFromLLInt(CodeBlock*, unsigned bytecodeIndex, Identifier&);
+    
     State m_state;
     StructureSet m_structureSet;
     size_t m_offset;
+    bool m_wasSeenInJIT;
 };
 
 } // namespace JSC
index 7fe1152..c4989d2 100644
@@ -48,6 +48,7 @@ namespace JSC {
     class JSCell;
     class Structure;
     class StructureChain;
+    struct LLIntCallLinkInfo;
     struct ValueProfile;
 
 #if ENABLE(JIT)
@@ -146,6 +147,11 @@ namespace JSC {
 #endif
 
     struct Instruction {
+        Instruction()
+        {
+            u.jsCell.clear();
+        }
+        
         Instruction(Opcode opcode)
         {
 #if !ENABLE(COMPUTED_GOTO_CLASSIC_INTERPRETER)
@@ -182,6 +188,8 @@ namespace JSC {
 
         Instruction(PropertySlot::GetValueFunc getterFunc) { u.getterFunc = getterFunc; }
         
+        Instruction(LLIntCallLinkInfo* callLinkInfo) { u.callLinkInfo = callLinkInfo; }
+        
         Instruction(ValueProfile* profile) { u.profile = profile; }
 
         union {
@@ -191,7 +199,9 @@ namespace JSC {
             WriteBarrierBase<StructureChain> structureChain;
             WriteBarrierBase<JSCell> jsCell;
             PropertySlot::GetValueFunc getterFunc;
+            LLIntCallLinkInfo* callLinkInfo;
             ValueProfile* profile;
+            void* pointer;
         } u;
         
     private:
diff --git a/Source/JavaScriptCore/bytecode/LLIntCallLinkInfo.h b/Source/JavaScriptCore/bytecode/LLIntCallLinkInfo.h
new file mode 100644
index 0000000..bfb9510
--- /dev/null
@@ -0,0 +1,66 @@
+/*
+ * Copyright (C) 2012 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#ifndef LLIntCallLinkInfo_h
+#define LLIntCallLinkInfo_h
+
+#include "JSFunction.h"
+#include "MacroAssemblerCodeRef.h"
+#include <wtf/SentinelLinkedList.h>
+
+namespace JSC {
+
+struct Instruction;
+
+struct LLIntCallLinkInfo : public BasicRawSentinelNode<LLIntCallLinkInfo> {
+    LLIntCallLinkInfo()
+    {
+    }
+    
+    ~LLIntCallLinkInfo()
+    {
+        if (isOnList())
+            remove();
+    }
+    
+    bool isLinked() { return callee; }
+    
+    void unlink()
+    {
+        callee.clear();
+        machineCodeTarget = MacroAssemblerCodePtr();
+        if (isOnList())
+            remove();
+    }
+    
+    WriteBarrier<JSFunction> callee;
+    WriteBarrier<JSFunction> lastSeenCallee;
+    MacroAssemblerCodePtr machineCodeTarget;
+};
+
+} // namespace JSC
+
+#endif // LLIntCallLinkInfo_h
+
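The new `LLIntCallLinkInfo` relies on the intrusive-list pattern of `BasicRawSentinelNode`: a link-info node removes itself from whatever incoming-calls list holds it when unlinked or destroyed, so a dying callee never leaves dangling entries. A minimal sketch of that self-unlinking node (not WTF's actual implementation):

```cpp
#include <cassert>

// Minimal intrusive doubly-linked node in the spirit of BasicRawSentinelNode:
// the node unlinks itself on destruction, so list membership can never
// outlive the object.
struct Node {
    Node* prev = nullptr;
    Node* next = nullptr;
    bool isOnList() const { return prev || next; }
    void remove()
    {
        if (prev) prev->next = next;
        if (next) next->prev = prev;
        prev = next = nullptr;
    }
    ~Node() { if (isOnList()) remove(); }
};
```

This is the property `LLIntCallLinkInfo::~LLIntCallLinkInfo` and `unlink()` depend on: tearing down a CodeBlock's call-link infos silently repairs every list they sat on.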
index e7d721c..795b41b 100644
@@ -35,6 +35,11 @@ MethodCallLinkStatus MethodCallLinkStatus::computeFor(CodeBlock* profiledBlock,
     UNUSED_PARAM(profiledBlock);
     UNUSED_PARAM(bytecodeIndex);
 #if ENABLE(JIT) && ENABLE(VALUE_PROFILER)
+    // NOTE: This does not have an LLInt fall-back because LLInt does not do any method
+    // call link caching.
+    if (!profiledBlock->numberOfMethodCallLinkInfos())
+        return MethodCallLinkStatus();
+    
     MethodCallLinkInfo& methodCall = profiledBlock->getMethodCallLinkInfo(bytecodeIndex);
     
     if (!methodCall.seen || !methodCall.cachedStructure)
index 33db8e6..a277140 100644
@@ -39,16 +39,12 @@ using namespace std;
 
 namespace JSC {
 
-#if !defined(NDEBUG) || ENABLE(OPCODE_SAMPLING) || ENABLE(CODEBLOCK_SAMPLING) || ENABLE(OPCODE_STATS)
-
 const char* const opcodeNames[] = {
 #define OPCODE_NAME_ENTRY(opcode, size) #opcode,
     FOR_EACH_OPCODE_ID(OPCODE_NAME_ENTRY)
 #undef OPCODE_NAME_ENTRY
 };
 
-#endif
-
 #if ENABLE(OPCODE_STATS)
 
 long long OpcodeStats::opcodeCounts[numOpcodeIDs];
index d416af4..a47fa5e 100644
@@ -123,6 +123,8 @@ namespace JSC {
         macro(op_get_arguments_length, 4) \
         macro(op_put_by_id, 9) \
         macro(op_put_by_id_transition, 9) \
+        macro(op_put_by_id_transition_direct, 9) \
+        macro(op_put_by_id_transition_normal, 9) \
         macro(op_put_by_id_replace, 9) \
         macro(op_put_by_id_generic, 9) \
         macro(op_del_by_id, 4) \
@@ -201,6 +203,7 @@ namespace JSC {
         typedef enum { FOR_EACH_OPCODE_ID(OPCODE_ID_ENUM) } OpcodeID;
     #undef OPCODE_ID_ENUM
 
+    const int maxOpcodeLength = 9;
     const int numOpcodeIDs = op_end + 1;
 
     #define OPCODE_ID_LENGTHS(id, length) const int id##_length = length;
@@ -217,7 +220,7 @@ namespace JSC {
         FOR_EACH_OPCODE_ID(VERIFY_OPCODE_ID);
     #undef VERIFY_OPCODE_ID
 
-#if ENABLE(COMPUTED_GOTO_CLASSIC_INTERPRETER)
+#if ENABLE(COMPUTED_GOTO_CLASSIC_INTERPRETER) || ENABLE(LLINT)
 #if COMPILER(RVCT) || COMPILER(INTEL)
     typedef void* Opcode;
 #else
@@ -227,8 +230,6 @@ namespace JSC {
     typedef OpcodeID Opcode;
 #endif
 
-#if !defined(NDEBUG) || ENABLE(OPCODE_SAMPLING) || ENABLE(CODEBLOCK_SAMPLING) || ENABLE(OPCODE_STATS)
-
 #define PADDING_STRING "                                "
 #define PADDING_STRING_LENGTH static_cast<unsigned>(strlen(PADDING_STRING))
 
@@ -244,8 +245,6 @@ namespace JSC {
 #undef PADDING_STRING_LENGTH
 #undef PADDING_STRING
 
-#endif
-
 #if ENABLE(OPCODE_STATS)
 
     struct OpcodeStats {
index 45a5e61..209d4cd 100644 (file)
 #include "PutByIdStatus.h"
 
 #include "CodeBlock.h"
+#include "LowLevelInterpreter.h"
 #include "Structure.h"
 #include "StructureChain.h"
 
 namespace JSC {
 
+PutByIdStatus PutByIdStatus::computeFromLLInt(CodeBlock* profiledBlock, unsigned bytecodeIndex, Identifier& ident)
+{
+    UNUSED_PARAM(profiledBlock);
+    UNUSED_PARAM(bytecodeIndex);
+    UNUSED_PARAM(ident);
+#if ENABLE(LLINT)
+    Instruction* instruction = profiledBlock->instructions().begin() + bytecodeIndex;
+
+    Structure* structure = instruction[4].u.structure.get();
+    if (!structure)
+        return PutByIdStatus(NoInformation, 0, 0, 0, notFound);
+    
+    if (instruction[0].u.opcode == llint_op_put_by_id) {
+        size_t offset = structure->get(*profiledBlock->globalData(), ident);
+        if (offset == notFound)
+            return PutByIdStatus(NoInformation, 0, 0, 0, notFound);
+        
+        return PutByIdStatus(SimpleReplace, structure, 0, 0, offset);
+    }
+    
+    ASSERT(instruction[0].u.opcode == llint_op_put_by_id_transition_direct
+           || instruction[0].u.opcode == llint_op_put_by_id_transition_normal);
+    
+    Structure* newStructure = instruction[6].u.structure.get();
+    StructureChain* chain = instruction[7].u.structureChain.get();
+    ASSERT(newStructure);
+    ASSERT(chain);
+    
+    size_t offset = newStructure->get(*profiledBlock->globalData(), ident);
+    if (offset == notFound)
+        return PutByIdStatus(NoInformation, 0, 0, 0, notFound);
+    
+    return PutByIdStatus(SimpleTransition, structure, newStructure, chain, offset);
+#else
+    return PutByIdStatus(NoInformation, 0, 0, 0, notFound);
+#endif
+}
+
 PutByIdStatus PutByIdStatus::computeFor(CodeBlock* profiledBlock, unsigned bytecodeIndex, Identifier& ident)
 {
     UNUSED_PARAM(profiledBlock);
     UNUSED_PARAM(bytecodeIndex);
     UNUSED_PARAM(ident);
 #if ENABLE(JIT) && ENABLE(VALUE_PROFILER)
+    if (!profiledBlock->numberOfStructureStubInfos())
+        return computeFromLLInt(profiledBlock, bytecodeIndex, ident);
+    
     if (profiledBlock->likelyToTakeSlowCase(bytecodeIndex))
         return PutByIdStatus(TakesSlowPath, 0, 0, 0, notFound);
     
     StructureStubInfo& stubInfo = profiledBlock->getStubInfo(bytecodeIndex);
     if (!stubInfo.seen)
-        return PutByIdStatus(NoInformation, 0, 0, 0, notFound);
+        return computeFromLLInt(profiledBlock, bytecodeIndex, ident);
     
     switch (stubInfo.accessType) {
     case access_unset:
-        return PutByIdStatus(NoInformation, 0, 0, 0, notFound);
+        return computeFromLLInt(profiledBlock, bytecodeIndex, ident);
         
     case access_put_by_id_replace: {
         size_t offset = stubInfo.u.putByIdReplace.baseObjectStructure->get(
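The LLInt fall-back above reads its profiling data straight out of the bytecode stream: slot 4 of a `put_by_id` holds the cached old structure, and for transitions slots 6 and 7 hold the new structure and the structure chain. A minimal sketch of that classification logic, using hypothetical stand-in types (not the real JSC `Structure`/`PutByIdStatus` classes):

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical, simplified stand-ins for JSC's types.
enum OpcodeKind { PutById, PutByIdTransition };
enum StatusKind { NoInformation, SimpleReplace, SimpleTransition };

struct Structure {
    static constexpr std::size_t notFound = static_cast<std::size_t>(-1);

    // Returns the slot offset of a property, or notFound.
    std::size_t offsetOf(int propertyId) const {
        return propertyId == storedProperty ? storedOffset : notFound;
    }

    int storedProperty;
    std::size_t storedOffset;
};

struct Status {
    StatusKind kind;
    std::size_t offset;
};

// Mirrors the shape of PutByIdStatus::computeFromLLInt: inspect the cached
// structures recorded by the interpreter and classify the access.
inline Status classifyPutById(OpcodeKind opcode,
                              const Structure* oldStructure,
                              const Structure* newStructure,
                              int propertyId)
{
    if (!oldStructure)
        return { NoInformation, Structure::notFound };

    if (opcode == PutById) {
        std::size_t offset = oldStructure->offsetOf(propertyId);
        if (offset == Structure::notFound)
            return { NoInformation, Structure::notFound };
        return { SimpleReplace, offset };
    }

    // Transition: the property lives in the *new* structure.
    // (The real code asserts both the new structure and chain are present.)
    if (!newStructure)
        return { NoInformation, Structure::notFound };
    std::size_t offset = newStructure->offsetOf(propertyId);
    if (offset == Structure::notFound)
        return { NoInformation, Structure::notFound };
    return { SimpleTransition, offset };
}
```

The point of the sketch is the asymmetry the patch introduces: a plain `put_by_id` resolves against the old structure, while the new transition opcodes resolve against the new one.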
index b33f4d0..a6d95a4 100644 (file)
@@ -93,6 +93,8 @@ public:
     size_t offset() const { return m_offset; }
     
 private:
+    static PutByIdStatus computeFromLLInt(CodeBlock*, unsigned bytecodeIndex, Identifier&);
+    
     State m_state;
     Structure* m_oldStructure;
     Structure* m_newStructure;
index 16a112a..6fa0ce9 100644 (file)
@@ -35,6 +35,7 @@
 #include "JSActivation.h"
 #include "JSFunction.h"
 #include "Interpreter.h"
+#include "LowLevelInterpreter.h"
 #include "ScopeChain.h"
 #include "StrongInlines.h"
 #include "UString.h"
@@ -1278,9 +1279,7 @@ RegisterID* BytecodeGenerator::emitResolve(RegisterID* dst, const ResolveResult&
 #if ENABLE(JIT)
         m_codeBlock->addGlobalResolveInfo(instructions().size());
 #endif
-#if ENABLE(CLASSIC_INTERPRETER)
         m_codeBlock->addGlobalResolveInstruction(instructions().size());
-#endif
         bool dynamic = resolveResult.isDynamic() && resolveResult.depth();
         ValueProfile* profile = emitProfiledOpcode(dynamic ? op_resolve_global_dynamic : op_resolve_global);
         instructions().append(dst->index());
@@ -1384,9 +1383,6 @@ RegisterID* BytecodeGenerator::emitResolveWithBase(RegisterID* baseDst, Register
         return baseDst;
     }
 
-
-
-
     ValueProfile* profile = emitProfiledOpcode(op_resolve_with_base);
     instructions().append(baseDst->index());
     instructions().append(propDst->index());
@@ -1494,9 +1490,7 @@ void BytecodeGenerator::emitMethodCheck()
 
 RegisterID* BytecodeGenerator::emitGetById(RegisterID* dst, RegisterID* base, const Identifier& property)
 {
-#if ENABLE(CLASSIC_INTERPRETER)
     m_codeBlock->addPropertyAccessInstruction(instructions().size());
-#endif
 
     ValueProfile* profile = emitProfiledOpcode(op_get_by_id);
     instructions().append(dst->index());
@@ -1522,9 +1516,7 @@ RegisterID* BytecodeGenerator::emitGetArgumentsLength(RegisterID* dst, RegisterI
 
 RegisterID* BytecodeGenerator::emitPutById(RegisterID* base, const Identifier& property, RegisterID* value)
 {
-#if ENABLE(CLASSIC_INTERPRETER)
     m_codeBlock->addPropertyAccessInstruction(instructions().size());
-#endif
 
     emitOpcode(op_put_by_id);
     instructions().append(base->index());
@@ -1540,9 +1532,7 @@ RegisterID* BytecodeGenerator::emitPutById(RegisterID* base, const Identifier& p
 
 RegisterID* BytecodeGenerator::emitDirectPutById(RegisterID* base, const Identifier& property, RegisterID* value)
 {
-#if ENABLE(CLASSIC_INTERPRETER)
     m_codeBlock->addPropertyAccessInstruction(instructions().size());
-#endif
     
     emitOpcode(op_put_by_id);
     instructions().append(base->index());
@@ -1823,7 +1813,11 @@ RegisterID* BytecodeGenerator::emitCall(OpcodeID opcodeID, RegisterID* dst, Regi
     instructions().append(func->index()); // func
     instructions().append(callArguments.argumentCountIncludingThis()); // argCount
     instructions().append(callArguments.registerOffset()); // registerOffset
+#if ENABLE(LLINT)
+    instructions().append(m_codeBlock->addLLIntCallLinkInfo());
+#else
     instructions().append(0);
+#endif
     instructions().append(0);
     if (dst != ignoredResult()) {
         ValueProfile* profile = emitProfiledOpcode(op_call_put_result);
@@ -1927,7 +1921,11 @@ RegisterID* BytecodeGenerator::emitConstruct(RegisterID* dst, RegisterID* func,
     instructions().append(func->index()); // func
     instructions().append(callArguments.argumentCountIncludingThis()); // argCount
     instructions().append(callArguments.registerOffset()); // registerOffset
+#if ENABLE(LLINT)
+    instructions().append(m_codeBlock->addLLIntCallLinkInfo());
+#else
     instructions().append(0);
+#endif
     instructions().append(0);
     if (dst != ignoredResult()) {
         ValueProfile* profile = emitProfiledOpcode(op_call_put_result);
@@ -2188,7 +2186,11 @@ RegisterID* BytecodeGenerator::emitCatch(RegisterID* targetRegister, Label* star
 {
     m_usesExceptions = true;
 #if ENABLE(JIT)
+#if ENABLE(LLINT)
+    HandlerInfo info = { start->bind(0, 0), end->bind(0, 0), instructions().size(), m_dynamicScopeDepth + m_baseScopeDepth, CodeLocationLabel(MacroAssemblerCodePtr::createFromExecutableAddress(bitwise_cast<void*>(&llint_op_catch))) };
+#else
     HandlerInfo info = { start->bind(0, 0), end->bind(0, 0), instructions().size(), m_dynamicScopeDepth + m_baseScopeDepth, CodeLocationLabel() };
+#endif
 #else
     HandlerInfo info = { start->bind(0, 0), end->bind(0, 0), instructions().size(), m_dynamicScopeDepth + m_baseScopeDepth };
 #endif
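The `emitCall`/`emitConstruct` hunks above change the operand that used to be a constant 0 into an index into a per-CodeBlock side table of LLInt call link infos. A rough model of that bookkeeping, with hypothetical simplified types standing in for `CodeBlock` and its instruction stream:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical stand-in for the LLInt's per-call-site cache slot.
struct LLIntCallLinkInfo {
    void* callee = nullptr; // filled in lazily when the call links
};

struct MiniCodeBlock {
    std::vector<LLIntCallLinkInfo> callLinkInfos;
    std::vector<std::ptrdiff_t> instructions;

    // Analogous to CodeBlock::addLLIntCallLinkInfo(): grow the side table
    // and hand back the new slot's index.
    std::size_t addLLIntCallLinkInfo() {
        callLinkInfos.push_back(LLIntCallLinkInfo());
        return callLinkInfos.size() - 1;
    }

    void emitCall(int funcReg, int argCount, int registerOffset, bool llintEnabled) {
        instructions.push_back(funcReg);
        instructions.push_back(argCount);
        instructions.push_back(registerOffset);
        // With LLInt enabled this operand points at a call link info slot;
        // otherwise it stays 0, as before the patch.
        instructions.push_back(llintEnabled
            ? static_cast<std::ptrdiff_t>(addLLIntCallLinkInfo())
            : 0);
        instructions.push_back(0); // trailing operand, as in the real bytecode
    }
};
```

Each call site gets its own slot, so the interpreter can cache a linked callee per site without any JIT-side link info.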
index 6378203..6e465af 100644 (file)
@@ -585,9 +585,7 @@ private:
     {
         UNUSED_PARAM(nodeIndex);
         
-        ValueProfile* profile = m_inlineStackTop->m_profiledBlock->valueProfileForBytecodeOffset(bytecodeIndex);
-        ASSERT(profile);
-        PredictedType prediction = profile->computeUpdatedPrediction();
+        PredictedType prediction = m_inlineStackTop->m_profiledBlock->valueProfilePredictionForBytecodeOffset(bytecodeIndex);
 #if DFG_ENABLE(DEBUG_VERBOSE)
         dataLog("Dynamic [@%u, bc#%u] prediction: %s\n", nodeIndex, bytecodeIndex, predictionToString(prediction));
 #endif
@@ -1022,6 +1020,9 @@ bool ByteCodeParser::handleInlining(bool usesResult, int callTarget, NodeIndex c
     
     // If we get here then it looks like we should definitely inline this code. Proceed
     // with parsing the code to get bytecode, so that we can then parse the bytecode.
+    // Note that when LLInt is enabled, the bytecode is always available, and
+    // so we may end up inlining a code block that has never been JITted
+    // before!
     CodeBlock* codeBlock = m_codeBlockCache.get(CodeBlockKey(executable, kind), expectedFunction->scope());
     if (!codeBlock)
         return false;
@@ -1722,7 +1723,7 @@ bool ByteCodeParser::parseBlock(unsigned limit)
                 m_inlineStackTop->m_profiledBlock, m_currentIndex);
             
             if (methodCallStatus.isSet()
-                && !getByIdStatus.isSet()
+                && !getByIdStatus.wasSeenInJIT()
                 && !m_inlineStackTop->m_exitProfile.hasExitSite(m_currentIndex, BadCache)) {
                 // It's monomorphic as far as we can tell, since the method_check was linked
                 // but the slow path (i.e. the normal get_by_id) never fired.
@@ -1791,7 +1792,9 @@ bool ByteCodeParser::parseBlock(unsigned limit)
 
             NEXT_OPCODE(op_get_by_id);
         }
-        case op_put_by_id: {
+        case op_put_by_id:
+        case op_put_by_id_transition_direct:
+        case op_put_by_id_transition_normal: {
             NodeIndex value = get(currentInstruction[3].u.operand);
             NodeIndex base = get(currentInstruction[1].u.operand);
             unsigned identifierNumber = m_inlineStackTop->m_identifierRemap[currentInstruction[2].u.operand];
index 2653c73..e339714 100644 (file)
@@ -111,6 +111,8 @@ inline bool canCompileOpcode(OpcodeID opcodeID)
     case op_put_scoped_var:
     case op_get_by_id:
     case op_put_by_id:
+    case op_put_by_id_transition_direct:
+    case op_put_by_id_transition_normal:
     case op_get_global_var:
     case op_put_global_var:
     case op_jmp:
index 68c3e72..a195ee3 100644 (file)
@@ -48,6 +48,21 @@ void compileOSRExit(ExecState* exec)
     uint32_t exitIndex = globalData->osrExitIndex;
     OSRExit& exit = codeBlock->osrExit(exitIndex);
     
+    // Make sure all code on our inline stack is JIT compiled. This is necessary since
+    // we may opt to inline a code block even before it had ever been compiled by the
+    // JIT, but our OSR exit infrastructure currently only works if the target of the
+    // OSR exit is JIT code. This could be changed, since there is nothing particularly
+    // hard about OSR exiting into the interpreter, but for now it seems reasonable:
+    // if we're OSR exiting from inlined code in a DFG code block, that is probably
+    // a good sign that the code we're exiting into is hot. More interestingly,
+    // because the code was inlined, it might otherwise never get JIT compiled at
+    // all, since the act of inlining it may ensure that it otherwise never runs.
+    for (CodeOrigin codeOrigin = exit.m_codeOrigin; codeOrigin.inlineCallFrame; codeOrigin = codeOrigin.inlineCallFrame->caller) {
+        static_cast<FunctionExecutable*>(codeOrigin.inlineCallFrame->executable.get())
+            ->baselineCodeBlockFor(codeOrigin.inlineCallFrame->isCall ? CodeForCall : CodeForConstruct)
+            ->jitCompile(*globalData);
+    }
+    
     SpeculationRecovery* recovery = 0;
     if (exit.m_recoveryIndex)
         recovery = &codeBlock->speculationRecovery(exit.m_recoveryIndex - 1);
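The loop added above walks the inline stack from the exit's code origin outward and force-compiles every enclosing code block, since the OSR exit target must be JIT code even if that code only ever ran in LLInt. A minimal sketch of the traversal, with a hypothetical `InlineCallFrame` stand-in:

```cpp
#include <cassert>

// Hypothetical, simplified model of the inline stack walked by
// compileOSRExit: each frame points at its caller, and jitCompile()
// is idempotent, like CodeBlock::jitCompile() in the real code.
struct InlineCallFrame {
    InlineCallFrame* caller;
    bool compiled;

    void jitCompile() { compiled = true; }
};

// Walk from the innermost inlined frame out to the machine frame,
// making sure everything on the way has baseline JIT code.
inline void ensureInlineStackCompiled(InlineCallFrame* innermost)
{
    for (InlineCallFrame* frame = innermost; frame; frame = frame->caller)
        frame->jitCompile();
}
```

The real loop additionally picks the call-vs-construct baseline code block for each frame's executable; the sketch only shows the walk itself.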
index 2e5aad7..165a214 100644 (file)
 #include "config.h"
 #include "DFGOperations.h"
 
-#if ENABLE(DFG_JIT)
-
 #include "CodeBlock.h"
 #include "DFGOSRExit.h"
 #include "DFGRepatch.h"
+#include "HostCallReturnValue.h"
 #include "GetterSetter.h"
 #include "InlineASM.h"
 #include "Interpreter.h"
@@ -38,6 +37,8 @@
 #include "JSGlobalData.h"
 #include "Operations.h"
 
+#if ENABLE(DFG_JIT)
+
 #if CPU(X86_64)
 
 #define FUNCTION_WRAPPER_WITH_RETURN_ADDRESS(function, register) \
@@ -737,50 +738,6 @@ size_t DFG_OPERATION operationCompareStrictEq(ExecState* exec, EncodedJSValue en
     return JSValue::strictEqual(exec, JSValue::decode(encodedOp1), JSValue::decode(encodedOp2));
 }
 
-EncodedJSValue DFG_OPERATION getHostCallReturnValue();
-EncodedJSValue DFG_OPERATION getHostCallReturnValueWithExecState(ExecState*);
-
-#if CPU(X86_64)
-asm (
-".globl " SYMBOL_STRING(getHostCallReturnValue) "\n"
-HIDE_SYMBOL(getHostCallReturnValue) "\n"
-SYMBOL_STRING(getHostCallReturnValue) ":" "\n"
-    "mov -40(%r13), %r13\n"
-    "mov %r13, %rdi\n"
-    "jmp " SYMBOL_STRING_RELOCATION(getHostCallReturnValueWithExecState) "\n"
-);
-#elif CPU(X86)
-asm (
-".globl " SYMBOL_STRING(getHostCallReturnValue) "\n"
-HIDE_SYMBOL(getHostCallReturnValue) "\n"
-SYMBOL_STRING(getHostCallReturnValue) ":" "\n"
-    "mov -40(%edi), %edi\n"
-    "mov %edi, 4(%esp)\n"
-    "jmp " SYMBOL_STRING_RELOCATION(getHostCallReturnValueWithExecState) "\n"
-);
-#elif CPU(ARM_THUMB2)
-asm (
-".text" "\n"
-".align 2" "\n"
-".globl " SYMBOL_STRING(getHostCallReturnValue) "\n"
-HIDE_SYMBOL(getHostCallReturnValue) "\n"
-".thumb" "\n"
-".thumb_func " THUMB_FUNC_PARAM(getHostCallReturnValue) "\n"
-SYMBOL_STRING(getHostCallReturnValue) ":" "\n"
-    "ldr r5, [r5, #-40]" "\n"
-    "cpy r0, r5" "\n"
-    "b " SYMBOL_STRING_RELOCATION(getHostCallReturnValueWithExecState) "\n"
-);
-#endif
-
-EncodedJSValue DFG_OPERATION getHostCallReturnValueWithExecState(ExecState* exec)
-{
-    JSGlobalData* globalData = &exec->globalData();
-    NativeCallFrameTracer tracer(globalData, exec);
-    
-    return JSValue::encode(exec->globalData().hostCallReturnValue);
-}
-
 static void* handleHostCall(ExecState* execCallee, JSValue callee, CodeSpecializationKind kind)
 {
     ExecState* exec = execCallee->callerFrame();
@@ -788,6 +745,7 @@ static void* handleHostCall(ExecState* execCallee, JSValue callee, CodeSpecializ
 
     execCallee->setScopeChain(exec->scopeChain());
     execCallee->setCodeBlock(0);
+    execCallee->clearReturnPC();
 
     if (kind == CodeForCall) {
         CallData callData;
@@ -1093,3 +1051,52 @@ void DFG_OPERATION debugOperationPrintSpeculationFailure(ExecState* exec, void*
 } } // namespace JSC::DFG
 
 #endif
+
+#if COMPILER(GCC)
+
+namespace JSC {
+
+#if CPU(X86_64)
+asm (
+".globl " SYMBOL_STRING(getHostCallReturnValue) "\n"
+HIDE_SYMBOL(getHostCallReturnValue) "\n"
+SYMBOL_STRING(getHostCallReturnValue) ":" "\n"
+    "mov -40(%r13), %r13\n"
+    "mov %r13, %rdi\n"
+    "jmp " SYMBOL_STRING_RELOCATION(getHostCallReturnValueWithExecState) "\n"
+);
+#elif CPU(X86)
+asm (
+".globl " SYMBOL_STRING(getHostCallReturnValue) "\n"
+HIDE_SYMBOL(getHostCallReturnValue) "\n"
+SYMBOL_STRING(getHostCallReturnValue) ":" "\n"
+    "mov -40(%edi), %edi\n"
+    "mov %edi, 4(%esp)\n"
+    "jmp " SYMBOL_STRING_RELOCATION(getHostCallReturnValueWithExecState) "\n"
+);
+#elif CPU(ARM_THUMB2)
+asm (
+".text" "\n"
+".align 2" "\n"
+".globl " SYMBOL_STRING(getHostCallReturnValue) "\n"
+HIDE_SYMBOL(getHostCallReturnValue) "\n"
+".thumb" "\n"
+".thumb_func " THUMB_FUNC_PARAM(getHostCallReturnValue) "\n"
+SYMBOL_STRING(getHostCallReturnValue) ":" "\n"
+    "ldr r5, [r5, #-40]" "\n"
+    "cpy r0, r5" "\n"
+    "b " SYMBOL_STRING_RELOCATION(getHostCallReturnValueWithExecState) "\n"
+);
+#endif
+
+extern "C" EncodedJSValue HOST_CALL_RETURN_VALUE_OPTION getHostCallReturnValueWithExecState(ExecState* exec)
+{
+    if (!exec)
+        return JSValue::encode(JSValue());
+    return JSValue::encode(exec->globalData().hostCallReturnValue);
+}
+
+} // namespace JSC
+
+#endif // COMPILER(GCC)
+
index 250f3ef..bcacee6 100644 (file)
@@ -50,6 +50,7 @@ namespace JSC {
     class JSGlobalData;
     class JSValue;
     class LiveObjectIterator;
+    class LLIntOffsetsExtractor;
     class MarkedArgumentBuffer;
     class RegisterFile;
     class UString;
@@ -95,6 +96,7 @@ namespace JSC {
         // true if an allocation or collection is in progress
         inline bool isBusy();
         
+        MarkedAllocator& firstAllocatorWithoutDestructors() { return m_objectSpace.firstAllocator(); }
         MarkedAllocator& allocatorForObjectWithoutDestructor(size_t bytes) { return m_objectSpace.allocatorFor(bytes); }
         MarkedAllocator& allocatorForObjectWithDestructor(size_t bytes) { return m_objectSpace.destructorAllocatorFor(bytes); }
         CheckedBoolean tryAllocateStorage(size_t, void**);
@@ -136,12 +138,13 @@ namespace JSC {
         void getConservativeRegisterRoots(HashSet<JSCell*>& roots);
 
     private:
+        friend class CodeBlock;
+        friend class LLIntOffsetsExtractor;
         friend class MarkedSpace;
         friend class MarkedAllocator;
         friend class MarkedBlock;
         friend class CopiedSpace;
         friend class SlotVisitor;
-        friend class CodeBlock;
         template<typename T> friend void* allocateCell(Heap&);
 
         void* allocateWithDestructor(size_t);
index eab2482..129a7ab 100644 (file)
@@ -303,7 +303,7 @@ ALWAYS_INLINE static void visitChildren(SlotVisitor& visitor, const JSCell* cell
 #endif
 
     ASSERT(Heap::isMarked(cell));
-
+    
     if (isJSString(cell)) {
         JSString::visitChildren(const_cast<JSCell*>(cell), visitor);
         return;
index 16207f6..1c6af77 100644 (file)
@@ -8,6 +8,7 @@ namespace JSC {
 
 class Heap;
 class MarkedSpace;
+class LLIntOffsetsExtractor;
 
 namespace DFG {
 class SpeculativeJIT;
@@ -33,6 +34,8 @@ public:
     void init(Heap*, MarkedSpace*, size_t cellSize, bool cellsNeedDestruction);
     
 private:
+    friend class LLIntOffsetsExtractor;
+    
     JS_EXPORT_PRIVATE void* allocateSlowCase();
     void* tryAllocate();
     void* tryAllocateHelper();
index 7df68f0..cfcf3f8 100644 (file)
@@ -41,6 +41,7 @@ namespace JSC {
 class Heap;
 class JSCell;
 class LiveObjectIterator;
+class LLIntOffsetsExtractor;
 class WeakGCHandle;
 class SlotVisitor;
 
@@ -51,6 +52,7 @@ public:
 
     MarkedSpace(Heap*);
 
+    MarkedAllocator& firstAllocator();
     MarkedAllocator& allocatorFor(size_t);
     MarkedAllocator& allocatorFor(MarkedBlock*);
     MarkedAllocator& destructorAllocatorFor(size_t);
@@ -79,6 +81,8 @@ public:
     void didConsumeFreeList(MarkedBlock*);
 
 private:
+    friend class LLIntOffsetsExtractor;
+    
     // [ 32... 256 ]
     static const size_t preciseStep = MarkedBlock::atomSize;
     static const size_t preciseCutoff = 256;
@@ -129,6 +133,11 @@ template<typename Functor> inline typename Functor::ReturnType MarkedSpace::forE
     return forEachCell(functor);
 }
 
+inline MarkedAllocator& MarkedSpace::firstAllocator()
+{
+    return m_normalSpace.preciseAllocators[0];
+}
+
 inline MarkedAllocator& MarkedSpace::allocatorFor(size_t bytes)
 {
     ASSERT(bytes && bytes <= maxCellSize);
index bad6424..b0e5ea0 100644 (file)
@@ -50,6 +50,29 @@ RegisterFile* CallFrame::registerFile()
 
 #endif
 
+#if USE(JSVALUE32_64)
+unsigned CallFrame::bytecodeOffsetForNonDFGCode() const
+{
+    ASSERT(codeBlock());
+    return currentVPC() - codeBlock()->instructions().begin();
+}
+
+void CallFrame::setBytecodeOffsetForNonDFGCode(unsigned offset)
+{
+    ASSERT(codeBlock());
+    setCurrentVPC(codeBlock()->instructions().begin() + offset);
+}
+#else
+Instruction* CallFrame::currentVPC() const
+{
+    return codeBlock()->instructions().begin() + bytecodeOffsetForNonDFGCode();
+}
+void CallFrame::setCurrentVPC(Instruction* vpc)
+{
+    setBytecodeOffsetForNonDFGCode(vpc - codeBlock()->instructions().begin());
+}
+#endif
+    
 #if ENABLE(DFG_JIT)
 bool CallFrame::isInlineCallFrameSlow()
 {
@@ -142,7 +165,7 @@ CallFrame* CallFrame::trueCallerFrame()
     //    more frames above the true caller due to inlining.
 
     // Am I an inline call frame? If so, we're done.
-    if (isInlineCallFrame() || !hasReturnPC())
+    if (isInlineCallFrame())
         return callerFrame()->removeHostCallFrameFlag();
     
     // I am a machine call frame, so the question is: is my caller a machine call frame
@@ -153,7 +176,7 @@ CallFrame* CallFrame::trueCallerFrame()
     ASSERT(!machineCaller->isInlineCallFrame());
     
     // Figure out how we want to get the current code location.
-    if (hasHostCallFrameFlag() || returnAddressIsInCtiTrampoline(returnPC()))
+    if (!hasReturnPC() || returnAddressIsInCtiTrampoline(returnPC()))
         return machineCaller->trueCallFrameFromVMCode()->removeHostCallFrameFlag();
     
     return machineCaller->trueCallFrame(returnPC())->removeHostCallFrameFlag();
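The new `CallFrame` accessors above make the virtual PC and the bytecode offset two views of the same datum: on 64-bit the frame stores an offset and derives the VPC by pointer arithmetic against the code block's instruction stream (on 32-bit it is the other way around). A small sketch of the 64-bit direction, with a hypothetical simplified frame type:

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical stand-ins: a trivial Instruction and a frame that, like
// the JSVALUE64 CallFrame, stores a bytecode offset and derives the VPC.
typedef int Instruction;

struct MiniFrame {
    const Instruction* instructionsBegin; // codeBlock()->instructions().begin()
    unsigned bytecodeOffset;              // bytecodeOffsetForNonDFGCode()

    const Instruction* currentVPC() const {
        return instructionsBegin + bytecodeOffset;
    }

    void setCurrentVPC(const Instruction* vpc) {
        bytecodeOffset = static_cast<unsigned>(vpc - instructionsBegin);
    }
};
```

Round-tripping through either representation is lossless, which is what lets the LLInt (which naturally has a VPC) and the JITs (which naturally have an offset) share the same frame layout.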
index 9ef41c9..5bf2b94 100644 (file)
@@ -103,11 +103,16 @@ namespace JSC  {
 
         CallFrame* callerFrame() const { return this[RegisterFile::CallerFrame].callFrame(); }
 #if ENABLE(JIT)
-        bool hasReturnPC() const { return this[RegisterFile::ReturnPC].vPC(); }
         ReturnAddressPtr returnPC() const { return ReturnAddressPtr(this[RegisterFile::ReturnPC].vPC()); }
+        bool hasReturnPC() const { return !!this[RegisterFile::ReturnPC].vPC(); }
+        void clearReturnPC() { registers()[RegisterFile::ReturnPC] = static_cast<Instruction*>(0); }
 #endif
         AbstractPC abstractReturnPC(JSGlobalData& globalData) { return AbstractPC(globalData, this); }
-        unsigned bytecodeOffsetForNonDFGCode()
+#if USE(JSVALUE32_64)
+        unsigned bytecodeOffsetForNonDFGCode() const;
+        void setBytecodeOffsetForNonDFGCode(unsigned offset);
+#else
+        unsigned bytecodeOffsetForNonDFGCode() const
         {
             ASSERT(codeBlock());
             return this[RegisterFile::ArgumentCount].tag();
@@ -118,6 +123,7 @@ namespace JSC  {
             ASSERT(codeBlock());
             this[RegisterFile::ArgumentCount].tag() = static_cast<int32_t>(offset);
         }
+#endif
 
 #if ENABLE(DFG_JIT)
         InlineCallFrame* inlineCallFrame() const { return this[RegisterFile::ReturnPC].asInlineCallFrame(); }
@@ -135,6 +141,19 @@ namespace JSC  {
 #if ENABLE(CLASSIC_INTERPRETER)
         Instruction* returnVPC() const { return this[RegisterFile::ReturnPC].vPC(); }
 #endif
+#if USE(JSVALUE32_64)
+        Instruction* currentVPC() const
+        {
+            return bitwise_cast<Instruction*>(this[RegisterFile::ArgumentCount].tag());
+        }
+        void setCurrentVPC(Instruction* vpc)
+        {
+            this[RegisterFile::ArgumentCount].tag() = bitwise_cast<int32_t>(vpc);
+        }
+#else
+        Instruction* currentVPC() const;
+        void setCurrentVPC(Instruction* vpc);
+#endif
 
         void setCallerFrame(CallFrame* callerFrame) { static_cast<Register*>(this)[RegisterFile::CallerFrame] = callerFrame; }
         void setScopeChain(ScopeChainNode* scopeChain) { static_cast<Register*>(this)[RegisterFile::ScopeChain] = scopeChain; }
index e1bfe7f..eb6a29d 100644 (file)
@@ -69,7 +69,7 @@
 #include "JIT.h"
 #endif
 
-#define WTF_USE_GCC_COMPUTED_GOTO_WORKAROUND (ENABLE(COMPUTED_GOTO_CLASSIC_INTERPRETER) && !defined(__llvm__))
+#define WTF_USE_GCC_COMPUTED_GOTO_WORKAROUND ((ENABLE(COMPUTED_GOTO_CLASSIC_INTERPRETER) || ENABLE(LLINT)) && !defined(__llvm__))
 
 using namespace std;
 
@@ -543,34 +543,59 @@ Interpreter::Interpreter()
 #if !ASSERT_DISABLED
     , m_initialized(false)
 #endif
-    , m_enabled(false)
+    , m_classicEnabled(false)
 {
 }
 
-void Interpreter::initialize(bool canUseJIT)
+Interpreter::~Interpreter()
 {
-#if ENABLE(COMPUTED_GOTO_CLASSIC_INTERPRETER)
+#if ENABLE(LLINT)
+    if (m_classicEnabled)
+        delete[] m_opcodeTable;
+#endif
+}
+
+void Interpreter::initialize(LLInt::Data* llintData, bool canUseJIT)
+{
+    UNUSED_PARAM(llintData);
+    UNUSED_PARAM(canUseJIT);
+#if ENABLE(COMPUTED_GOTO_CLASSIC_INTERPRETER) || ENABLE(LLINT)
+#if !ENABLE(COMPUTED_GOTO_CLASSIC_INTERPRETER)
+    // Having LLInt enabled, but not being able to use the JIT, and not having
+    // a computed goto interpreter, is not supported. Not because we cannot
+    // support it, but because I decided to draw the line at the number of
+    // permutations of execution engines that I wanted this code to grok.
+    ASSERT(canUseJIT);
+#endif
     if (canUseJIT) {
+#if ENABLE(LLINT)
+        m_opcodeTable = llintData->opcodeMap();
+        for (int i = 0; i < numOpcodeIDs; ++i)
+            m_opcodeIDTable.add(m_opcodeTable[i], static_cast<OpcodeID>(i));
+#else
         // If the JIT is present, don't use jump destinations for opcodes.
         
         for (int i = 0; i < numOpcodeIDs; ++i) {
             Opcode opcode = bitwise_cast<void*>(static_cast<uintptr_t>(i));
             m_opcodeTable[i] = opcode;
         }
+#endif
     } else {
+#if ENABLE(LLINT)
+        m_opcodeTable = new Opcode[numOpcodeIDs];
+#endif
         privateExecute(InitializeAndReturn, 0, 0);
         
         for (int i = 0; i < numOpcodeIDs; ++i)
             m_opcodeIDTable.add(m_opcodeTable[i], static_cast<OpcodeID>(i));
         
-        m_enabled = true;
+        m_classicEnabled = true;
     }
 #else
-    UNUSED_PARAM(canUseJIT);
 #if ENABLE(CLASSIC_INTERPRETER)
-    m_enabled = true;
+    m_classicEnabled = true;
 #else
-    m_enabled = false;
+    m_classicEnabled = false;
 #endif
 #endif // ENABLE(COMPUTED_GOTO_CLASSIC_INTERPRETER)
 #if !ASSERT_DISABLED
@@ -667,9 +692,11 @@ void Interpreter::dumpRegisters(CallFrame* callFrame)
 
 bool Interpreter::isOpcode(Opcode opcode)
 {
-#if ENABLE(COMPUTED_GOTO_CLASSIC_INTERPRETER)
-    if (!m_enabled)
+#if ENABLE(COMPUTED_GOTO_CLASSIC_INTERPRETER) || ENABLE(LLINT)
+#if !ENABLE(LLINT)
+    if (!m_classicEnabled)
         return opcode >= 0 && static_cast<OpcodeID>(bitwise_cast<uintptr_t>(opcode)) <= op_end;
+#endif
     return opcode != HashTraits<Opcode>::emptyValue()
         && !HashTraits<Opcode>::isDeletedValue(opcode)
         && m_opcodeIDTable.contains(opcode);
@@ -726,11 +753,11 @@ NEVER_INLINE bool Interpreter::unwindCallFrame(CallFrame*& callFrame, JSValue ex
     // have to subtract 1.
 #if ENABLE(JIT) && ENABLE(CLASSIC_INTERPRETER)
     if (callerFrame->globalData().canUseJIT())
-        bytecodeOffset = codeBlock->bytecodeOffset(callFrame->returnPC());
+        bytecodeOffset = codeBlock->bytecodeOffset(callerFrame, callFrame->returnPC());
     else
         bytecodeOffset = codeBlock->bytecodeOffset(callFrame->returnVPC()) - 1;
 #elif ENABLE(JIT)
-    bytecodeOffset = codeBlock->bytecodeOffset(callFrame->returnPC());
+    bytecodeOffset = codeBlock->bytecodeOffset(callerFrame, callFrame->returnPC());
 #else
     bytecodeOffset = codeBlock->bytecodeOffset(callFrame->returnVPC()) - 1;
 #endif
@@ -857,7 +884,7 @@ static CallFrame* getCallerInfo(JSGlobalData* globalData, CallFrame* callFrame,
             }
         } else
     #endif
-            bytecodeOffset = callerCodeBlock->bytecodeOffset(callFrame->returnPC());
+            bytecodeOffset = callerCodeBlock->bytecodeOffset(callerFrame, callFrame->returnPC());
 #endif
     }
 
@@ -1815,7 +1842,7 @@ JSValue Interpreter::privateExecute(ExecutionFlag flag, RegisterFile* registerFi
     }
     
     ASSERT(m_initialized);
-    ASSERT(m_enabled);
+    ASSERT(m_classicEnabled);
     
 #if ENABLE(JIT)
 #if ENABLE(CLASSIC_INTERPRETER)
@@ -3466,6 +3493,8 @@ skip_id_custom_self:
 #if USE(GCC_COMPUTED_GOTO_WORKAROUND)
       skip_put_by_id:
 #endif
+    DEFINE_OPCODE(op_put_by_id_transition_direct)
+    DEFINE_OPCODE(op_put_by_id_transition_normal)
     DEFINE_OPCODE(op_put_by_id_transition) {
         /* op_put_by_id_transition base(r) property(id) value(r) oldStructure(sID) newStructure(sID) structureChain(chain) offset(n) direct(b)
          
@@ -5299,10 +5328,10 @@ void Interpreter::retrieveLastCaller(CallFrame* callFrame, int& lineNumber, intp
         bytecodeOffset = callerCodeBlock->bytecodeOffset(callFrame->returnVPC());
 #if ENABLE(JIT)
     else
-        bytecodeOffset = callerCodeBlock->bytecodeOffset(callFrame->returnPC());
+        bytecodeOffset = callerCodeBlock->bytecodeOffset(callerFrame, callFrame->returnPC());
 #endif
 #else
-    bytecodeOffset = callerCodeBlock->bytecodeOffset(callFrame->returnPC());
+    bytecodeOffset = callerCodeBlock->bytecodeOffset(callerFrame, callFrame->returnPC());
 #endif
     lineNumber = callerCodeBlock->lineNumberForBytecodeOffset(bytecodeOffset - 1);
     sourceID = callerCodeBlock->ownerExecutable()->sourceID();
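With LLInt enabled, `Interpreter::initialize` above no longer synthesizes an opcode table: it borrows the LLInt's table of entry points and builds the reverse map for decompilation. A hedged sketch of that forward/reverse mapping, using plain `void*` dummy addresses in place of real LLInt entry points:

```cpp
#include <cassert>
#include <cstddef>
#include <unordered_map>
#include <vector>

// Hypothetical, simplified model of the LLInt path in
// Interpreter::initialize: opcodeTable maps OpcodeID => entry point for
// compiling, and opcodeIDTable maps entry point => OpcodeID for
// decompiling.
typedef void* Opcode;

struct MiniInterpreter {
    std::vector<Opcode> opcodeTable;
    std::unordered_map<Opcode, int> opcodeIDTable;

    void initialize(const std::vector<Opcode>& llintEntrypoints) {
        opcodeTable = llintEntrypoints; // borrowed from LLInt::Data in the sketch
        for (std::size_t i = 0; i < opcodeTable.size(); ++i)
            opcodeIDTable[opcodeTable[i]] = static_cast<int>(i);
    }

    Opcode getOpcode(int id) const { return opcodeTable[id]; }
    int getOpcodeID(Opcode opcode) const { return opcodeIDTable.at(opcode); }
};
```

The two tables are inverses of each other, which is why `getOpcodeID(getOpcode(id))` must round-trip; the real code asserts `isOpcode(opcode)` before the reverse lookup.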
index 6920eb2..51881a5 100644 (file)
@@ -34,6 +34,7 @@
 #include "JSFunction.h"
 #include "JSValue.h"
 #include "JSObject.h"
+#include "LLIntData.h"
 #include "Opcode.h"
 #include "RegisterFile.h"
 
@@ -46,6 +47,7 @@ namespace JSC {
     class ExecutableBase;
     class FunctionExecutable;
     class JSGlobalObject;
+    class LLIntOffsetsExtractor;
     class ProgramExecutable;
     class Register;
     class ScopeChainNode;
@@ -158,19 +160,21 @@ namespace JSC {
 
     class Interpreter {
         WTF_MAKE_FAST_ALLOCATED;
-        friend class JIT;
         friend class CachedCall;
+        friend class LLIntOffsetsExtractor;
+        friend class JIT;
     public:
         Interpreter();
+        ~Interpreter();
         
-        void initialize(bool canUseJIT);
+        void initialize(LLInt::Data*, bool canUseJIT);
 
         RegisterFile& registerFile() { return m_registerFile; }
         
         Opcode getOpcode(OpcodeID id)
         {
             ASSERT(m_initialized);
-#if ENABLE(COMPUTED_GOTO_CLASSIC_INTERPRETER)
+#if ENABLE(COMPUTED_GOTO_CLASSIC_INTERPRETER) || ENABLE(LLINT)
             return m_opcodeTable[id];
 #else
             return id;
@@ -180,9 +184,12 @@ namespace JSC {
         OpcodeID getOpcodeID(Opcode opcode)
         {
             ASSERT(m_initialized);
-#if ENABLE(COMPUTED_GOTO_CLASSIC_INTERPRETER)
+#if ENABLE(LLINT)
+            ASSERT(isOpcode(opcode));
+            return m_opcodeIDTable.get(opcode);
+#elif ENABLE(COMPUTED_GOTO_CLASSIC_INTERPRETER)
             ASSERT(isOpcode(opcode));
-            if (!m_enabled)
+            if (!m_classicEnabled)
                 return static_cast<OpcodeID>(bitwise_cast<uintptr_t>(opcode));
 
             return m_opcodeIDTable.get(opcode);
@@ -190,6 +197,11 @@ namespace JSC {
             return opcode;
 #endif
         }
+        
+        bool classicEnabled()
+        {
+            return m_classicEnabled;
+        }
 
         bool isOpcode(Opcode);
 
@@ -259,7 +271,10 @@ namespace JSC {
 
         RegisterFile m_registerFile;
         
-#if ENABLE(COMPUTED_GOTO_CLASSIC_INTERPRETER)
+#if ENABLE(LLINT)
+        Opcode* m_opcodeTable; // Maps OpcodeID => Opcode for compiling
+        HashMap<Opcode, OpcodeID> m_opcodeIDTable; // Maps Opcode => OpcodeID for decompiling
+#elif ENABLE(COMPUTED_GOTO_CLASSIC_INTERPRETER)
         Opcode m_opcodeTable[numOpcodeIDs]; // Maps OpcodeID => Opcode for compiling
         HashMap<Opcode, OpcodeID> m_opcodeIDTable; // Maps Opcode => OpcodeID for decompiling
 #endif
@@ -267,7 +282,7 @@ namespace JSC {
 #if !ASSERT_DISABLED
         bool m_initialized;
 #endif
-        bool m_enabled;
+        bool m_classicEnabled;
     };
 
     // This value must not be an object that would require this conversion (WebCore's global object).
index e45b869..21ad7fb 100644 (file)
@@ -39,6 +39,7 @@ namespace JSC {
 
     class ConservativeRoots;
     class DFGCodeBlocks;
+    class LLIntOffsetsExtractor;
 
     class RegisterFile {
         WTF_MAKE_NONCOPYABLE(RegisterFile);
@@ -81,6 +82,8 @@ namespace JSC {
         }
 
     private:
+        friend class LLIntOffsetsExtractor;
+        
         bool growSlowCase(Register*);
         void releaseExcessCapacity();
         void addToCommittedByteCount(long);
index bc8b816..7520913 100644 (file)
@@ -89,12 +89,12 @@ inline size_t roundUpAllocationSize(size_t request, size_t granularity)
 
 }
 
-#if ENABLE(JIT) && ENABLE(ASSEMBLER)
-
 namespace JSC {
 
 typedef WTF::MetaAllocatorHandle ExecutableMemoryHandle;
 
+#if ENABLE(JIT) && ENABLE(ASSEMBLER)
+
 class ExecutableAllocator {
     enum ProtectionSetting { Writable, Executable };
 
@@ -235,8 +235,8 @@ private:
 #endif
 };
 
-} // namespace JSC
-
 #endif // ENABLE(JIT) && ENABLE(ASSEMBLER)
 
+} // namespace JSC
+
 #endif // !defined(ExecutableAllocator)
diff --git a/Source/JavaScriptCore/jit/HostCallReturnValue.cpp b/Source/JavaScriptCore/jit/HostCallReturnValue.cpp
new file mode 100644 (file)
index 0000000..924bc76
--- /dev/null
@@ -0,0 +1,40 @@
+/*
+ * Copyright (C) 2012 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#include "config.h"
+#include "HostCallReturnValue.h"
+
+#include "CallFrame.h"
+#include "InlineASM.h"
+#include "JSObject.h"
+#include "JSValueInlineMethods.h"
+#include "ScopeChain.h"
+
+namespace JSC {
+
+// Nothing to see here.
+
+} // namespace JSC
+
diff --git a/Source/JavaScriptCore/jit/HostCallReturnValue.h b/Source/JavaScriptCore/jit/HostCallReturnValue.h
new file mode 100644 (file)
index 0000000..12fe10b
--- /dev/null
@@ -0,0 +1,67 @@
+/*
+ * Copyright (C) 2012 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#ifndef HostCallReturnValue_h
+#define HostCallReturnValue_h
+
+#include "JSValue.h"
+#include "MacroAssemblerCodeRef.h"
+#include <wtf/Platform.h>
+
+// Unfortunately this only works on GCC-like compilers. It is currently only used
+// by LLInt and the DFG, which are likewise restricted to GCC-like compilers. We
+// should probably fix that at some point.
+#if COMPILER(GCC)
+
+#if CALLING_CONVENTION_IS_STDCALL
+#define HOST_CALL_RETURN_VALUE_OPTION CDECL
+#else
+#define HOST_CALL_RETURN_VALUE_OPTION
+#endif
+
+namespace JSC {
+
+extern "C" EncodedJSValue HOST_CALL_RETURN_VALUE_OPTION getHostCallReturnValue();
+
+// This is a public declaration only to convince Clang not to elide it.
+extern "C" EncodedJSValue HOST_CALL_RETURN_VALUE_OPTION getHostCallReturnValueWithExecState(ExecState*);
+
+inline void initializeHostCallReturnValue()
+{
+    getHostCallReturnValueWithExecState(0);
+}
+
+}
+
+#else // COMPILER(GCC)
+
+namespace JSC {
+inline void initializeHostCallReturnValue() { }
+}
+
+#endif // COMPILER(GCC)
+
+#endif // HostCallReturnValue_h
+
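The pattern in this header, giving a function extern "C" linkage so hand-written assembly can reference its unmangled name, and calling it once at startup so the toolchain cannot elide it, can be sketched in isolation (the symbol names below are hypothetical, not JSC's):

```cpp
#include <cassert>

// A value an assembly stub would hand back to C++ land.
static long g_returnValue;

// extern "C" gives the symbol an unmangled name that hand-written
// (or offline-assembled) code can reference directly.
extern "C" long getHostishReturnValue()
{
    return g_returnValue;
}

// Calling the function once from ordinary C++ marks it as used, so the
// compiler and linker keep the definition around for the assembly caller.
inline void initializeHostishReturnValue()
{
    g_returnValue = 42;
    (void)getHostishReturnValue();
}
```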
index f3f833a..2adc596 100644 (file)
@@ -325,6 +325,8 @@ void JIT::privateCompileMainPass()
         DEFINE_OP(op_profile_will_call)
         DEFINE_OP(op_push_new_scope)
         DEFINE_OP(op_push_scope)
+        case op_put_by_id_transition_direct:
+        case op_put_by_id_transition_normal:
         DEFINE_OP(op_put_by_id)
         DEFINE_OP(op_put_by_index)
         DEFINE_OP(op_put_by_val)
@@ -486,6 +488,8 @@ void JIT::privateCompileSlowCases()
         DEFINE_SLOWCASE_OP(op_post_inc)
         DEFINE_SLOWCASE_OP(op_pre_dec)
         DEFINE_SLOWCASE_OP(op_pre_inc)
+        case op_put_by_id_transition_direct:
+        case op_put_by_id_transition_normal:
         DEFINE_SLOWCASE_OP(op_put_by_id)
         DEFINE_SLOWCASE_OP(op_put_by_val)
         DEFINE_SLOWCASE_OP(op_resolve_global)
@@ -525,6 +529,10 @@ void JIT::privateCompileSlowCases()
 
 JITCode JIT::privateCompile(CodePtr* functionEntryArityCheck)
 {
+#if ENABLE(JIT_VERBOSE_OSR)
+    printf("Compiling JIT code!\n");
+#endif
+    
 #if ENABLE(VALUE_PROFILER)
     m_canBeOptimized = m_codeBlock->canCompileWithDFG();
 #endif
@@ -693,8 +701,12 @@ JITCode JIT::privateCompile(CodePtr* functionEntryArityCheck)
         info.callReturnLocation = m_codeBlock->structureStubInfo(m_methodCallCompilationInfo[i].propertyAccessIndex).callReturnLocation;
     }
 
-#if ENABLE(DFG_JIT)
-    if (m_canBeOptimized) {
+#if ENABLE(DFG_JIT) || ENABLE(LLINT)
+    if (m_canBeOptimized
+#if ENABLE(LLINT)
+        || true
+#endif
+        ) {
         CompactJITCodeMap::Encoder jitCodeMapEncoder;
         for (unsigned bytecodeOffset = 0; bytecodeOffset < m_labels.size(); ++bytecodeOffset) {
             if (m_labels[bytecodeOffset].isSet())
index f63c4a1..3ae5ff2 100644 (file)
@@ -48,7 +48,7 @@ namespace JSC {
         JITCode() { }
 #endif
     public:
-        enum JITType { HostCallThunk, BaselineJIT, DFGJIT };
+        enum JITType { None, HostCallThunk, InterpreterThunk, BaselineJIT, DFGJIT };
         
         static JITType bottomTierJIT()
         {
@@ -66,8 +66,19 @@ namespace JSC {
             return DFGJIT;
         }
         
+        static bool isOptimizingJIT(JITType jitType)
+        {
+            return jitType == DFGJIT;
+        }
+        
+        static bool isBaselineCode(JITType jitType)
+        {
+            return jitType == InterpreterThunk || jitType == BaselineJIT;
+        }
+        
 #if ENABLE(JIT)
         JITCode()
+            : m_jitType(None)
         {
         }
 
@@ -75,6 +86,7 @@ namespace JSC {
             : m_ref(ref)
             , m_jitType(jitType)
         {
+            ASSERT(jitType != None);
         }
         
         bool operator !() const
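The new tier predicates above can be illustrated with a standalone sketch (same enum shape as the diff; free functions stand in for the static members):

```cpp
#include <cassert>

// Execution tiers, lowest to highest. InterpreterThunk is the LLInt.
enum JITType { None, HostCallThunk, InterpreterThunk, BaselineJIT, DFGJIT };

// Only the DFG performs speculative optimization.
inline bool isOptimizingJIT(JITType t) { return t == DFGJIT; }

// "Baseline" code is anything that profiles and can still tier up:
// the LLInt and the baseline JIT.
inline bool isBaselineCode(JITType t)
{
    return t == InterpreterThunk || t == BaselineJIT;
}
```

Adding InterpreterThunk between HostCallThunk and BaselineJIT is what lets the rest of the engine treat LLInt frames "as if" they were JIT frames.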
index 4b8df47..b204c77 100644 (file)
 #include "BytecodeGenerator.h"
 #include "DFGDriver.h"
 #include "JIT.h"
+#include "LLIntEntrypoints.h"
 
 namespace JSC {
 
 template<typename CodeBlockType>
 inline bool jitCompileIfAppropriate(JSGlobalData& globalData, OwnPtr<CodeBlockType>& codeBlock, JITCode& jitCode, JITCode::JITType jitType)
 {
+    if (jitType == codeBlock->getJITType())
+        return true;
+    
     if (!globalData.canUseJIT())
         return true;
     
+    codeBlock->unlinkIncomingCalls();
+    
     bool dfgCompiled = false;
     if (jitType == JITCode::DFGJIT)
         dfgCompiled = DFG::tryCompile(globalData, codeBlock.get(), jitCode);
@@ -62,9 +68,14 @@ inline bool jitCompileIfAppropriate(JSGlobalData& globalData, OwnPtr<CodeBlockTy
 
 inline bool jitCompileFunctionIfAppropriate(JSGlobalData& globalData, OwnPtr<FunctionCodeBlock>& codeBlock, JITCode& jitCode, MacroAssemblerCodePtr& jitCodeWithArityCheck, SharedSymbolTable*& symbolTable, JITCode::JITType jitType)
 {
+    if (jitType == codeBlock->getJITType())
+        return true;
+    
     if (!globalData.canUseJIT())
         return true;
     
+    codeBlock->unlinkIncomingCalls();
+    
     bool dfgCompiled = false;
     if (jitType == JITCode::DFGJIT)
         dfgCompiled = DFG::tryCompileFunction(globalData, codeBlock.get(), jitCode, jitCodeWithArityCheck);
@@ -79,7 +90,6 @@ inline bool jitCompileFunctionIfAppropriate(JSGlobalData& globalData, OwnPtr<Fun
         }
         jitCode = JIT::compile(&globalData, codeBlock.get(), &jitCodeWithArityCheck);
     }
-    
     codeBlock->setJITCode(jitCode, jitCodeWithArityCheck);
     
     return true;
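The early return added to both drivers above skips recompilation when the code block is already at the requested tier. A toy sketch of that guard (hypothetical types, not the real compile driver):

```cpp
#include <cassert>

enum JITType { None, InterpreterThunk, BaselineJIT, DFGJIT };

struct CodeBlockish {
    JITType jitType = InterpreterThunk;
    int compileCount = 0;   // stands in for the cost of a real JIT invocation
};

// Returns true on success. If the block is already at the requested tier,
// recompiling would be wasted work, so bail out before doing anything.
inline bool compileIfAppropriate(CodeBlockish& block, JITType requested)
{
    if (requested == block.jitType)
        return true;
    ++block.compileCount;
    block.jitType = requested;
    return true;
}
```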
index 24baca4..2edd340 100644 (file)
@@ -64,7 +64,7 @@ ExceptionHandler genericThrow(JSGlobalData* globalData, ExecState* callFrame, JS
 
 ExceptionHandler jitThrow(JSGlobalData* globalData, ExecState* callFrame, JSValue exceptionValue, ReturnAddressPtr faultLocation)
 {
-    return genericThrow(globalData, callFrame, exceptionValue, callFrame->codeBlock()->bytecodeOffset(faultLocation));
+    return genericThrow(globalData, callFrame, exceptionValue, callFrame->codeBlock()->bytecodeOffset(callFrame, faultLocation));
 }
 
 }
index 36643d0..e031056 100644 (file)
@@ -265,8 +265,13 @@ ALWAYS_INLINE void JIT::restoreArgumentReference()
 ALWAYS_INLINE void JIT::updateTopCallFrame()
 {
     ASSERT(static_cast<int>(m_bytecodeOffset) >= 0);
-    if (m_bytecodeOffset)
+    if (m_bytecodeOffset) {
+#if USE(JSVALUE32_64)
+        storePtr(TrustedImmPtr(m_codeBlock->instructions().begin() + m_bytecodeOffset + 1), intTagFor(RegisterFile::ArgumentCount));
+#else
         store32(Imm32(m_bytecodeOffset + 1), intTagFor(RegisterFile::ArgumentCount));
+#endif
+    }
     storePtr(callFrameRegister, &m_globalData->topCallFrame);
 }
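Storing m_bytecodeOffset + 1 rather than the raw offset is presumably so that a stored zero can mean "no offset recorded" while offset 0 remains representable. A sketch of that encoding, under that assumption:

```cpp
#include <cassert>

// Encode a bytecode offset with a +1 bias so that 0 is free to mean
// "no bytecode offset recorded" (e.g. a host call frame).
inline unsigned encodeBytecodeOffset(unsigned offset) { return offset + 1; }

inline bool hasBytecodeOffset(unsigned stored) { return stored != 0; }

inline unsigned decodeBytecodeOffset(unsigned stored)
{
    assert(hasBytecodeOffset(stored));
    return stored - 1;
}
```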
 
index ad2f94b..a0a8165 100644 (file)
@@ -1446,6 +1446,7 @@ DEFINE_STUB_FUNCTION(void, op_put_by_id_direct)
     PutPropertySlot slot(callFrame->codeBlock()->isStrictMode());
     JSValue baseValue = stackFrame.args[0].jsValue();
     ASSERT(baseValue.isObject());
+    
     asObject(baseValue)->putDirect(callFrame->globalData(), ident, stackFrame.args[2].jsValue(), slot);
     
     CodeBlock* codeBlock = stackFrame.callFrame->codeBlock();
@@ -1931,7 +1932,7 @@ DEFINE_STUB_FUNCTION(void, optimize_from_loop)
     unsigned bytecodeIndex = stackFrame.args[0].int32();
 
 #if ENABLE(JIT_VERBOSE_OSR)
-    dataLog("Entered optimize_from_loop with executeCounter = %d, reoptimizationRetryCounter = %u, optimizationDelayCounter = %u\n", codeBlock->jitExecuteCounter(), codeBlock->reoptimizationRetryCounter(), codeBlock->optimizationDelayCounter());
+    dataLog("%p: Entered optimize_from_loop with executeCounter = %d, reoptimizationRetryCounter = %u, optimizationDelayCounter = %u\n", codeBlock, codeBlock->jitExecuteCounter(), codeBlock->reoptimizationRetryCounter(), codeBlock->optimizationDelayCounter());
 #endif
 
     if (codeBlock->hasOptimizedReplacement()) {
@@ -2186,45 +2187,13 @@ DEFINE_STUB_FUNCTION(void*, op_construct_jitCompile)
     return result;
 }
 
-inline CallFrame* arityCheckFor(CallFrame* callFrame, RegisterFile* registerFile, CodeSpecializationKind kind)
-{
-    JSFunction* callee = asFunction(callFrame->callee());
-    ASSERT(!callee->isHostFunction());
-    CodeBlock* newCodeBlock = &callee->jsExecutable()->generatedBytecodeFor(kind);
-    int argumentCountIncludingThis = callFrame->argumentCountIncludingThis();
-
-    // This ensures enough space for the worst case scenario of zero arguments passed by the caller.
-    if (!registerFile->grow(callFrame->registers() + newCodeBlock->numParameters() + newCodeBlock->m_numCalleeRegisters))
-        return 0;
-
-    ASSERT(argumentCountIncludingThis < newCodeBlock->numParameters());
-
-    // Too few arguments -- copy call frame and arguments, then fill in missing arguments with undefined.
-    size_t delta = newCodeBlock->numParameters() - argumentCountIncludingThis;
-    Register* src = callFrame->registers();
-    Register* dst = callFrame->registers() + delta;
-
-    int i;
-    int end = -CallFrame::offsetFor(argumentCountIncludingThis);
-    for (i = -1; i >= end; --i)
-        dst[i] = src[i];
-
-    end -= delta;
-    for ( ; i >= end; --i)
-        dst[i] = jsUndefined();
-
-    CallFrame* newCallFrame = CallFrame::create(dst);
-    ASSERT((void*)newCallFrame <= registerFile->end());
-    return newCallFrame;
-}
-
 DEFINE_STUB_FUNCTION(void*, op_call_arityCheck)
 {
     STUB_INIT_STACK_FRAME(stackFrame);
 
     CallFrame* callFrame = stackFrame.callFrame;
 
-    CallFrame* newCallFrame = arityCheckFor(callFrame, stackFrame.registerFile, CodeForCall);
+    CallFrame* newCallFrame = CommonSlowPaths::arityCheckFor(callFrame, stackFrame.registerFile, CodeForCall);
     if (!newCallFrame)
         return throwExceptionFromOpCall<void*>(stackFrame, callFrame, STUB_RETURN_ADDRESS, createStackOverflowError(callFrame->callerFrame()));
 
@@ -2237,7 +2206,7 @@ DEFINE_STUB_FUNCTION(void*, op_construct_arityCheck)
 
     CallFrame* callFrame = stackFrame.callFrame;
 
-    CallFrame* newCallFrame = arityCheckFor(callFrame, stackFrame.registerFile, CodeForConstruct);
+    CallFrame* newCallFrame = CommonSlowPaths::arityCheckFor(callFrame, stackFrame.registerFile, CodeForConstruct);
     if (!newCallFrame)
         return throwExceptionFromOpCall<void*>(stackFrame, callFrame, STUB_RETURN_ADDRESS, createStackOverflowError(callFrame->callerFrame()));
 
@@ -2314,6 +2283,7 @@ DEFINE_STUB_FUNCTION(EncodedJSValue, op_call_NotJSFunction)
     STUB_INIT_STACK_FRAME(stackFrame);
 
     CallFrame* callFrame = stackFrame.callFrame;
+    
     JSValue callee = callFrame->calleeAsValue();
 
     CallData callData;
index fe5f522..890d997 100644 (file)
@@ -37,8 +37,6 @@
 #include "ThunkGenerators.h"
 #include <wtf/HashMap.h>
 
-#if ENABLE(JIT)
-
 namespace JSC {
 
     struct StructureStubInfo;
@@ -263,6 +261,8 @@ namespace JSC {
 
 #define JITSTACKFRAME_ARGS_INDEX (OBJECT_OFFSETOF(JITStackFrame, args) / sizeof(void*))
 
+#if ENABLE(JIT)
+
 #define STUB_ARGS_DECLARATION void** args
 #define STUB_ARGS (args)
 
@@ -456,8 +456,8 @@ extern "C" {
     void* JIT_STUB cti_vm_throw(STUB_ARGS_DECLARATION);
 } // extern "C"
 
-} // namespace JSC
-
 #endif // ENABLE(JIT)
 
+} // namespace JSC
+
 #endif // JITStubs_h
index d54dedc..05d1ce5 100644 (file)
 #ifndef JSInterfaceJIT_h
 #define JSInterfaceJIT_h
 
+#include "BytecodeConventions.h"
 #include "JITCode.h"
 #include "JITStubs.h"
+#include "JSString.h"
 #include "JSValue.h"
 #include "MacroAssembler.h"
 #include "RegisterFile.h"
diff --git a/Source/JavaScriptCore/llint/LLIntCommon.h b/Source/JavaScriptCore/llint/LLIntCommon.h
new file mode 100644 (file)
index 0000000..6b908ea
--- /dev/null
@@ -0,0 +1,49 @@
+/*
+ * Copyright (C) 2012 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#ifndef LLIntCommon_h
+#define LLIntCommon_h
+
+// Print every instruction executed.
+#define LLINT_EXECUTION_TRACING 0
+
+// Print some information for some of the more subtle slow paths.
+#define LLINT_SLOW_PATH_TRACING 0
+
+// Disable inline allocation in the interpreter. This is great if you're changing
+// how the GC allocates.
+#define LLINT_ALWAYS_ALLOCATE_SLOW 0
+
+// Enable OSR into the JIT. Disabling this while the LLInt is enabled effectively
+// turns off all JIT'ing, since in LLInt's parlance, OSR subsumes any form of JIT
+// invocation.
+#if ENABLE(JIT)
+#define LLINT_OSR_TO_JIT 1
+#else
+#define LLINT_OSR_TO_JIT 0
+#endif
+
+#endif // LLIntCommon_h
+
diff --git a/Source/JavaScriptCore/llint/LLIntData.cpp b/Source/JavaScriptCore/llint/LLIntData.cpp
new file mode 100644 (file)
index 0000000..c0fe781
--- /dev/null
@@ -0,0 +1,116 @@
+/*
+ * Copyright (C) 2011 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#include "config.h"
+#include "LLIntData.h"
+
+#if ENABLE(LLINT)
+
+#include "BytecodeConventions.h"
+#include "CodeType.h"
+#include "Instruction.h"
+#include "LowLevelInterpreter.h"
+#include "Opcode.h"
+
+namespace JSC { namespace LLInt {
+
+Data::Data()
+    : m_exceptionInstructions(new Instruction[maxOpcodeLength + 1])
+    , m_opcodeMap(new Opcode[numOpcodeIDs])
+{
+    for (int i = 0; i < maxOpcodeLength + 1; ++i)
+        m_exceptionInstructions[i].u.pointer = bitwise_cast<void*>(&llint_throw_from_slow_path_trampoline);
+#define OPCODE_ENTRY(opcode, length) m_opcodeMap[opcode] = bitwise_cast<void*>(&llint_##opcode);
+    FOR_EACH_OPCODE_ID(OPCODE_ENTRY);
+#undef OPCODE_ENTRY
+}
+
+#if COMPILER(CLANG)
+#pragma clang diagnostic push
+#pragma clang diagnostic ignored "-Wmissing-noreturn"
+#endif
+void Data::performAssertions(JSGlobalData& globalData)
+{
+    UNUSED_PARAM(globalData);
+    
+    // Assertions to match LowLevelInterpreter.asm.  If you change any of this code, be
+    // prepared to change LowLevelInterpreter.asm as well!!
+    ASSERT(RegisterFile::CallFrameHeaderSize * 8 == 48);
+    ASSERT(RegisterFile::ArgumentCount * 8 == -48);
+    ASSERT(RegisterFile::CallerFrame * 8 == -40);
+    ASSERT(RegisterFile::Callee * 8 == -32);
+    ASSERT(RegisterFile::ScopeChain * 8 == -24);
+    ASSERT(RegisterFile::ReturnPC * 8 == -16);
+    ASSERT(RegisterFile::CodeBlock * 8 == -8);
+    ASSERT(CallFrame::argumentOffsetIncludingThis(0) == -RegisterFile::CallFrameHeaderSize - 1);
+#if CPU(BIG_ENDIAN)
+    ASSERT(OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.tag) == 0);
+    ASSERT(OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.payload) == 4);
+#else
+    ASSERT(OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.tag) == 4);
+    ASSERT(OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.payload) == 0);
+#endif
+    ASSERT(JSValue::Int32Tag == -1);
+    ASSERT(JSValue::BooleanTag == -2);
+    ASSERT(JSValue::NullTag == -3);
+    ASSERT(JSValue::UndefinedTag == -4);
+    ASSERT(JSValue::CellTag == -5);
+    ASSERT(JSValue::EmptyValueTag == -6);
+    ASSERT(JSValue::DeletedValueTag == -7);
+    ASSERT(JSValue::LowestTag == -7);
+    ASSERT(StringType == 5);
+    ASSERT(ObjectType == 13);
+    ASSERT(MasqueradesAsUndefined == 1);
+    ASSERT(ImplementsHasInstance == 2);
+    ASSERT(ImplementsDefaultHasInstance == 8);
+    ASSERT(&globalData.heap.allocatorForObjectWithoutDestructor(sizeof(JSFinalObject)) - &globalData.heap.firstAllocatorWithoutDestructors() == 3);
+    ASSERT(FirstConstantRegisterIndex == 0x40000000);
+    ASSERT(GlobalCode == 0);
+    ASSERT(EvalCode == 1);
+    ASSERT(FunctionCode == 2);
+    
+    // FIXME: make these assertions less horrible.
+#if !ASSERT_DISABLED
+    Vector<int> testVector;
+    testVector.resize(42);
+    ASSERT(bitwise_cast<size_t*>(&testVector)[0] == 42);
+    ASSERT(bitwise_cast<int**>(&testVector)[1] == testVector.begin());
+#endif
+
+    ASSERT(StringImpl::s_hashFlag8BitBuffer == 64);
+}
+#if COMPILER(CLANG)
+#pragma clang diagnostic pop
+#endif
+
+Data::~Data()
+{
+    delete[] m_exceptionInstructions;
+    delete[] m_opcodeMap;
+}
+
+} } // namespace JSC::LLInt
+
+#endif // ENABLE(LLINT)
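Data's constructor fills an entire synthetic instruction stream with pointers to the throw trampoline, so that returning to any slot within the longest possible opcode still lands on the exception handler. A standalone sketch of that idea (the value of maxOpcodeLength here is an assumption, not JSC's real constant):

```cpp
#include <cassert>

constexpr int maxOpcodeLength = 9;   // assumed; the real value comes from Opcode.h

using Handler = void (*)();
static void throwTrampoline() { }

// Every slot of the exception "instruction stream" points at the throw
// trampoline, so a return address at any offset inside the widest opcode
// resolves to the handler.
struct ExceptionInstructions {
    Handler slots[maxOpcodeLength + 1];
    ExceptionInstructions()
    {
        for (int i = 0; i < maxOpcodeLength + 1; ++i)
            slots[i] = &throwTrampoline;
    }
};
```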
diff --git a/Source/JavaScriptCore/llint/LLIntData.h b/Source/JavaScriptCore/llint/LLIntData.h
new file mode 100644 (file)
index 0000000..ba8daed
--- /dev/null
@@ -0,0 +1,93 @@
+/*
+ * Copyright (C) 2011 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#ifndef LLIntData_h
+#define LLIntData_h
+
+#include "Opcode.h"
+#include <wtf/Platform.h>
+
+namespace JSC {
+
+class JSGlobalData;
+struct Instruction;
+
+namespace LLInt {
+
+#if ENABLE(LLINT)
+class Data {
+public:
+    Data();
+    ~Data();
+    
+    void performAssertions(JSGlobalData&);
+    
+    Instruction* exceptionInstructions()
+    {
+        return m_exceptionInstructions;
+    }
+    
+    Opcode* opcodeMap()
+    {
+        return m_opcodeMap;
+    }
+private:
+    Instruction* m_exceptionInstructions;
+    Opcode* m_opcodeMap;
+};
+#else // ENABLE(LLINT)
+
+#if COMPILER(CLANG)
+#pragma clang diagnostic push
+#pragma clang diagnostic ignored "-Wmissing-noreturn"
+#endif
+
+class Data {
+public:
+    void performAssertions(JSGlobalData&) { }
+
+    Instruction* exceptionInstructions()
+    {
+        ASSERT_NOT_REACHED();
+        return 0;
+    }
+    
+    Opcode* opcodeMap()
+    {
+        ASSERT_NOT_REACHED();
+        return 0;
+    }
+};
+
+#if COMPILER(CLANG)
+#pragma clang diagnostic pop
+#endif
+
+#endif // ENABLE(LLINT)
+
+} } // namespace JSC::LLInt
+
+#endif // LLIntData_h
+
diff --git a/Source/JavaScriptCore/llint/LLIntEntrypoints.cpp b/Source/JavaScriptCore/llint/LLIntEntrypoints.cpp
new file mode 100644 (file)
index 0000000..f610f4b
--- /dev/null
@@ -0,0 +1,86 @@
+/*
+ * Copyright (C) 2012 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#include "config.h"
+#include "LLIntEntrypoints.h"
+
+#if ENABLE(LLINT)
+
+#include "JITCode.h"
+#include "JSGlobalData.h"
+#include "LLIntThunks.h"
+#include "LowLevelInterpreter.h"
+
+namespace JSC { namespace LLInt {
+
+void getFunctionEntrypoint(JSGlobalData& globalData, CodeSpecializationKind kind, JITCode& jitCode, MacroAssemblerCodePtr& arityCheck)
+{
+    if (!globalData.canUseJIT()) {
+        if (kind == CodeForCall) {
+            jitCode = JITCode::HostFunction(MacroAssemblerCodeRef::createSelfManagedCodeRef(MacroAssemblerCodePtr(bitwise_cast<void*>(&llint_function_for_call_prologue))));
+            arityCheck = MacroAssemblerCodePtr(bitwise_cast<void*>(&llint_function_for_call_arity_check));
+            return;
+        }
+
+        ASSERT(kind == CodeForConstruct);
+        jitCode = JITCode::HostFunction(MacroAssemblerCodeRef::createSelfManagedCodeRef(MacroAssemblerCodePtr(bitwise_cast<void*>(&llint_function_for_construct_prologue))));
+        arityCheck = MacroAssemblerCodePtr(bitwise_cast<void*>(&llint_function_for_construct_arity_check));
+        return;
+    }
+    
+    if (kind == CodeForCall) {
+        jitCode = JITCode(globalData.getCTIStub(functionForCallEntryThunkGenerator), JITCode::InterpreterThunk);
+        arityCheck = globalData.getCTIStub(functionForCallArityCheckThunkGenerator).code();
+        return;
+    }
+
+    ASSERT(kind == CodeForConstruct);
+    jitCode = JITCode(globalData.getCTIStub(functionForConstructEntryThunkGenerator), JITCode::InterpreterThunk);
+    arityCheck = globalData.getCTIStub(functionForConstructArityCheckThunkGenerator).code();
+}
+
+void getEvalEntrypoint(JSGlobalData& globalData, JITCode& jitCode)
+{
+    if (!globalData.canUseJIT()) {
+        jitCode = JITCode::HostFunction(MacroAssemblerCodeRef::createSelfManagedCodeRef(MacroAssemblerCodePtr(bitwise_cast<void*>(&llint_eval_prologue))));
+        return;
+    }
+    
+    jitCode = JITCode(globalData.getCTIStub(evalEntryThunkGenerator), JITCode::InterpreterThunk);
+}
+
+void getProgramEntrypoint(JSGlobalData& globalData, JITCode& jitCode)
+{
+    if (!globalData.canUseJIT()) {
+        jitCode = JITCode::HostFunction(MacroAssemblerCodeRef::createSelfManagedCodeRef(MacroAssemblerCodePtr(bitwise_cast<void*>(&llint_program_prologue))));
+        return;
+    }
+    
+    jitCode = JITCode(globalData.getCTIStub(programEntryThunkGenerator), JITCode::InterpreterThunk);
+}
+
+} } // namespace JSC::LLInt
+
+#endif // ENABLE(LLINT)
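getFunctionEntrypoint above picks a direct LLInt prologue when the JIT is unavailable and a generated thunk otherwise, so callers stay tier-agnostic. A toy sketch of that selection, returning symbol names as strings purely for illustration:

```cpp
#include <cassert>
#include <string>

// Mirrors the branch structure of getFunctionEntrypoint: no JIT means
// entering the LLInt prologue directly; otherwise go through a CTI thunk.
inline std::string selectEntrypoint(bool canUseJIT, bool forConstruct)
{
    if (!canUseJIT)
        return forConstruct ? "llint_function_for_construct_prologue"
                            : "llint_function_for_call_prologue";
    return forConstruct ? "construct_entry_thunk" : "call_entry_thunk";
}
```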
diff --git a/Source/JavaScriptCore/llint/LLIntEntrypoints.h b/Source/JavaScriptCore/llint/LLIntEntrypoints.h
new file mode 100644 (file)
index 0000000..dd7c277
--- /dev/null
@@ -0,0 +1,64 @@
+/*
+ * Copyright (C) 2012 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#ifndef LLIntEntrypoints_h
+#define LLIntEntrypoints_h
+
+#include <wtf/Platform.h>
+
+#if ENABLE(LLINT)
+
+#include "CodeSpecializationKind.h"
+
+namespace JSC {
+
+class EvalCodeBlock;
+class JITCode;
+class JSGlobalData;
+class MacroAssemblerCodePtr;
+class MacroAssemblerCodeRef;
+class ProgramCodeBlock;
+
+namespace LLInt {
+
+void getFunctionEntrypoint(JSGlobalData&, CodeSpecializationKind, JITCode&, MacroAssemblerCodePtr& arityCheck);
+void getEvalEntrypoint(JSGlobalData&, JITCode&);
+void getProgramEntrypoint(JSGlobalData&, JITCode&);
+
+inline void getEntrypoint(JSGlobalData& globalData, EvalCodeBlock*, JITCode& jitCode)
+{
+    getEvalEntrypoint(globalData, jitCode);
+}
+
+inline void getEntrypoint(JSGlobalData& globalData, ProgramCodeBlock*, JITCode& jitCode)
+{
+    getProgramEntrypoint(globalData, jitCode);
+}
+
+} } // namespace JSC::LLInt
+
+#endif // ENABLE(LLINT)
+
+#endif // LLIntEntrypoints_h
diff --git a/Source/JavaScriptCore/llint/LLIntExceptions.cpp b/Source/JavaScriptCore/llint/LLIntExceptions.cpp
new file mode 100644 (file)
index 0000000..a7d1a96
--- /dev/null
@@ -0,0 +1,80 @@
+/*
+ * Copyright (C) 2011 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#include "config.h"
+#include "LLIntExceptions.h"
+
+#if ENABLE(LLINT)
+
+#include "CallFrame.h"
+#include "CodeBlock.h"
+#include "Instruction.h"
+#include "JITExceptions.h"
+#include "LLIntCommon.h"
+#include "LowLevelInterpreter.h"
+
+namespace JSC { namespace LLInt {
+
+void interpreterThrowInCaller(ExecState* exec, ReturnAddressPtr pc)
+{
+    JSGlobalData* globalData = &exec->globalData();
+#if LLINT_SLOW_PATH_TRACING
+    dataLog("Throwing exception %s.\n", globalData->exception.description());
+#endif
+    genericThrow(
+        globalData, exec, globalData->exception,
+        exec->codeBlock()->bytecodeOffset(exec, pc));
+}
+
+Instruction* returnToThrowForThrownException(ExecState* exec)
+{
+    return exec->globalData().llintData.exceptionInstructions();
+}
+
+Instruction* returnToThrow(ExecState* exec, Instruction* pc)
+{
+    JSGlobalData* globalData = &exec->globalData();
+#if LLINT_SLOW_PATH_TRACING
+    dataLog("Throwing exception %s (returnToThrow).\n", globalData->exception.description());
+#endif
+    genericThrow(globalData, exec, globalData->exception, pc - exec->codeBlock()->instructions().begin());
+    
+    return globalData->llintData.exceptionInstructions();
+}
+
+void* callToThrow(ExecState* exec, Instruction* pc)
+{
+    JSGlobalData* globalData = &exec->globalData();
+#if LLINT_SLOW_PATH_TRACING
+    dataLog("Throwing exception %s (callToThrow).\n", globalData->exception.description());
+#endif
+    genericThrow(globalData, exec, globalData->exception, pc - exec->codeBlock()->instructions().begin());
+    
+    return bitwise_cast<void*>(&llint_throw_during_call_trampoline);
+}
+
+} } // namespace JSC::LLInt
+
+#endif // ENABLE(LLINT)
diff --git a/Source/JavaScriptCore/llint/LLIntExceptions.h b/Source/JavaScriptCore/llint/LLIntExceptions.h
new file mode 100644 (file)
index 0000000..3baa3f4
--- /dev/null
@@ -0,0 +1,66 @@
+/*
+ * Copyright (C) 2011 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#ifndef LLIntExceptions_h
+#define LLIntExceptions_h
+
+#include <wtf/Platform.h>
+#include <wtf/StdLibExtras.h>
+
+#if ENABLE(LLINT)
+
+#include "MacroAssemblerCodeRef.h"
+
+namespace JSC {
+
+class ExecState;
+struct Instruction;
+
+namespace LLInt {
+
+// Throw the currently active exception in the context of the caller's call frame.
+void interpreterThrowInCaller(ExecState* callerFrame, ReturnAddressPtr);
+
+// Tells you where to jump to if you want to return-to-throw, after you've already
+// set up all information needed to throw the exception.
+Instruction* returnToThrowForThrownException(ExecState*);
+
+// Saves the current PC in the global data for safe-keeping, and gives you a PC
+// that you can tell the interpreter to go to, which when advanced between 1
+// and 9 slots will give you an "instruction" that threads to the interpreter's
+// exception handler. Note that if you give it the PC for exception handling,
+// it's smart enough to just return that PC without doing anything else; this
+// lets you thread exception handling through common helper functions used by
+// other helpers.
+Instruction* returnToThrow(ExecState*, Instruction*);
+
+// Use this when you're throwing to a call thunk.
+void* callToThrow(ExecState*, Instruction*);
+
+} } // namespace JSC::LLInt
+
+#endif // ENABLE(LLINT)
+
+#endif // LLIntExceptions_h
diff --git a/Source/JavaScriptCore/llint/LLIntOfflineAsmConfig.h b/Source/JavaScriptCore/llint/LLIntOfflineAsmConfig.h
new file mode 100644 (file)
index 0000000..f140701
--- /dev/null
@@ -0,0 +1,90 @@
+/*
+ * Copyright (C) 2012 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#ifndef LLIntOfflineAsmConfig_h
+#define LLIntOfflineAsmConfig_h
+
+#include "LLIntCommon.h"
+#include <wtf/Assertions.h>
+#include <wtf/InlineAsm.h>
+#include <wtf/Platform.h>
+
+#if CPU(X86)
+#define OFFLINE_ASM_X86 1
+#else
+#define OFFLINE_ASM_X86 0
+#endif
+
+#if CPU(ARM_THUMB2)
+#define OFFLINE_ASM_ARMv7 1
+#else
+#define OFFLINE_ASM_ARMv7 0
+#endif
+
+#if !ASSERT_DISABLED
+#define OFFLINE_ASM_ASSERT_ENABLED 1
+#else
+#define OFFLINE_ASM_ASSERT_ENABLED 0
+#endif
+
+#if CPU(BIG_ENDIAN)
+#define OFFLINE_ASM_BIG_ENDIAN 1
+#else
+#define OFFLINE_ASM_BIG_ENDIAN 0
+#endif
+
+#if LLINT_OSR_TO_JIT
+#define OFFLINE_ASM_JIT_ENABLED 1
+#else
+#define OFFLINE_ASM_JIT_ENABLED 0
+#endif
+
+#if LLINT_EXECUTION_TRACING
+#define OFFLINE_ASM_EXECUTION_TRACING 1
+#else
+#define OFFLINE_ASM_EXECUTION_TRACING 0
+#endif
+
+#if LLINT_ALWAYS_ALLOCATE_SLOW
+#define OFFLINE_ASM_ALWAYS_ALLOCATE_SLOW 1
+#else
+#define OFFLINE_ASM_ALWAYS_ALLOCATE_SLOW 0
+#endif
+
+#if CPU(ARM_THUMB2)
+#define OFFLINE_ASM_GLOBAL_LABEL(label)          \
+    ".globl " SYMBOL_STRING(label) "\n"          \
+    HIDE_SYMBOL(label) "\n"                      \
+    ".thumb\n"                                   \
+    ".thumb_func " THUMB_FUNC_PARAM(label) "\n"  \
+    SYMBOL_STRING(label) ":\n"
+#else
+#define OFFLINE_ASM_GLOBAL_LABEL(label)         \
+    ".globl " SYMBOL_STRING(label) "\n"         \
+    HIDE_SYMBOL(label) "\n"                     \
+    SYMBOL_STRING(label) ":\n"
+#endif
+
+#endif // LLIntOfflineAsmConfig_h
diff --git a/Source/JavaScriptCore/llint/LLIntOffsetsExtractor.cpp b/Source/JavaScriptCore/llint/LLIntOffsetsExtractor.cpp
new file mode 100644 (file)
index 0000000..e8bb378
--- /dev/null
@@ -0,0 +1,82 @@
+/*
+ * Copyright (C) 2012 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#include "config.h"
+
+#include "CodeBlock.h"
+#include "Executable.h"
+#include "Heap.h"
+#include "Interpreter.h"
+#include "JITStubs.h"
+#include "JSArray.h"
+#include "JSCell.h"
+#include "JSFunction.h"
+#include "JSGlobalData.h"
+#include "JSGlobalObject.h"
+#include "JSObject.h"
+#include "JSPropertyNameIterator.h"
+#include "JSString.h"
+#include "JSTypeInfo.h"
+#include "JSVariableObject.h"
+#include "JumpTable.h"
+#include "LLIntOfflineAsmConfig.h"
+#include "MarkedSpace.h"
+#include "RegisterFile.h"
+#include "ScopeChain.h"
+#include "Structure.h"
+#include "StructureChain.h"
+#include "ValueProfile.h"
+#include <wtf/text/StringImpl.h>
+#include <stdio.h>
+
+namespace JSC {
+
+#define OFFLINE_ASM_OFFSETOF(clazz, field) OBJECT_OFFSETOF(clazz, field)
+
+class LLIntOffsetsExtractor {
+public:
+    static const unsigned* dummy();
+};
+
+const unsigned* LLIntOffsetsExtractor::dummy()
+{
+// The file included below is generated by offlineasm/generate_offsets_extractor.rb, and
+// contains code to create a table of offsets and sizes, plus a header identifying what
+// combination of Platform.h macros we have set. We include it inside of a method on
+// LLIntOffsetsExtractor because the fields whose offsets we're extracting are mostly
+// private. So we make their classes friends with LLIntOffsetsExtractor, and include the
+// header here, to get the C++ compiler to kindly step aside and yield to our best
+// intentions.
+#include "LLIntDesiredOffsets.h"
+    return extractorTable;
+}
+
+} // namespace JSC
+
+int main(int, char**)
+{
+    // Out of an abundance of caution, make sure that LLIntOffsetsExtractor::dummy() is live,
+    // and the extractorTable is live, too.
+    printf("%p\n", JSC::LLIntOffsetsExtractor::dummy());
+    return 0;
+}
diff --git a/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp b/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp
new file mode 100644 (file)
index 0000000..3203d25
--- /dev/null
@@ -0,0 +1,1558 @@
+/*
+ * Copyright (C) 2011, 2012 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#include "config.h"
+#include "LLIntSlowPaths.h"
+
+#if ENABLE(LLINT)
+
+#include "Arguments.h"
+#include "CallFrame.h"
+#include "CommonSlowPaths.h"
+#include "GetterSetter.h"
+#include "HostCallReturnValue.h"
+#include "Interpreter.h"
+#include "JIT.h"
+#include "JITDriver.h"
+#include "JSActivation.h"
+#include "JSByteArray.h"
+#include "JSGlobalObjectFunctions.h"
+#include "JSPropertyNameIterator.h"
+#include "JSStaticScopeObject.h"
+#include "JSString.h"
+#include "JSValue.h"
+#include "LLIntCommon.h"
+#include "LLIntExceptions.h"
+#include "LowLevelInterpreter.h"
+#include "Operations.h"
+
+namespace JSC { namespace LLInt {
+
+#define LLINT_BEGIN_NO_SET_PC() \
+    JSGlobalData& globalData = exec->globalData();      \
+    NativeCallFrameTracer tracer(&globalData, exec)
+
+#define LLINT_SET_PC_FOR_STUBS() \
+    exec->setCurrentVPC(pc + 1)
+
+#define LLINT_BEGIN()                           \
+    LLINT_BEGIN_NO_SET_PC();                    \
+    LLINT_SET_PC_FOR_STUBS()
+
+#define LLINT_OP(index) (exec->uncheckedR(pc[index].u.operand))
+#define LLINT_OP_C(index) (exec->r(pc[index].u.operand))
+
+#define LLINT_RETURN_TWO(first, second) do {       \
+        union {                                    \
+            struct {                               \
+                void* a;                           \
+                void* b;                           \
+            } pair;                                \
+            int64_t i;                             \
+        } __rt_u;                                  \
+        __rt_u.pair.a = first;                     \
+        __rt_u.pair.b = second;                    \
+        return __rt_u.i;                           \
+    } while (false)
+
+#define LLINT_END_IMPL() LLINT_RETURN_TWO(pc, exec)
+
+#define LLINT_THROW(exceptionToThrow) do {                        \
+        globalData.exception = (exceptionToThrow);                \
+        pc = returnToThrow(exec, pc);                             \
+        LLINT_END_IMPL();                                         \
+    } while (false)
+
+#define LLINT_CHECK_EXCEPTION() do {                    \
+        if (UNLIKELY(globalData.exception)) {           \
+            pc = returnToThrow(exec, pc);               \
+            LLINT_END_IMPL();                           \
+        }                                               \
+    } while (false)
+
+#define LLINT_END() do {                        \
+        LLINT_CHECK_EXCEPTION();                \
+        LLINT_END_IMPL();                       \
+    } while (false)
+
+#define LLINT_BRANCH(opcode, condition) do {                      \
+        bool __b_condition = (condition);                         \
+        LLINT_CHECK_EXCEPTION();                                  \
+        if (__b_condition)                                        \
+            pc += pc[OPCODE_LENGTH(opcode) - 1].u.operand;        \
+        else                                                      \
+            pc += OPCODE_LENGTH(opcode);                          \
+        LLINT_END_IMPL();                                         \
+    } while (false)
+
+#define LLINT_RETURN(value) do {                \
+        JSValue __r_returnValue = (value);      \
+        LLINT_CHECK_EXCEPTION();                \
+        LLINT_OP(1) = __r_returnValue;          \
+        LLINT_END_IMPL();                       \
+    } while (false)
+
+#define LLINT_RETURN_PROFILED(opcode, value) do {               \
+        JSValue __rp_returnValue = (value);                     \
+        LLINT_CHECK_EXCEPTION();                                \
+        LLINT_OP(1) = __rp_returnValue;                         \
+        pc[OPCODE_LENGTH(opcode) - 1].u.profile->m_buckets[0] = \
+            JSValue::encode(__rp_returnValue);                  \
+        LLINT_END_IMPL();                                       \
+    } while (false)
+
+#define LLINT_CALL_END_IMPL(exec, callTarget) LLINT_RETURN_TWO((callTarget), (exec))
+
+#define LLINT_CALL_THROW(exec, pc, exceptionToThrow) do {               \
+        ExecState* __ct_exec = (exec);                                  \
+        Instruction* __ct_pc = (pc);                                    \
+        globalData.exception = (exceptionToThrow);                      \
+        LLINT_CALL_END_IMPL(__ct_exec, callToThrow(__ct_exec, __ct_pc)); \
+    } while (false)
+
+#define LLINT_CALL_CHECK_EXCEPTION(exec, pc) do {                       \
+        ExecState* __cce_exec = (exec);                                 \
+        Instruction* __cce_pc = (pc);                                   \
+        if (UNLIKELY(globalData.exception))                              \
+            LLINT_CALL_END_IMPL(__cce_exec, callToThrow(__cce_exec, __cce_pc)); \
+    } while (false)
+
+#define LLINT_CALL_RETURN(exec, pc, callTarget) do {                    \
+        ExecState* __cr_exec = (exec);                                  \
+        Instruction* __cr_pc = (pc);                                    \
+        void* __cr_callTarget = (callTarget);                           \
+        LLINT_CALL_CHECK_EXCEPTION(__cr_exec->callerFrame(), __cr_pc);  \
+        LLINT_CALL_END_IMPL(__cr_exec, __cr_callTarget);                \
+    } while (false)
+
+extern "C" SlowPathReturnType llint_trace_operand(ExecState* exec, Instruction* pc, int fromWhere, int operand)
+{
+    LLINT_BEGIN();
+    dataLog("%p / %p: executing bc#%zu, op#%u: Trace(%d): %d: %d\n",
+            exec->codeBlock(),
+            exec,
+            static_cast<size_t>(pc - exec->codeBlock()->instructions().begin()),
+            exec->globalData().interpreter->getOpcodeID(pc[0].u.opcode),
+            fromWhere,
+            operand,
+            pc[operand].u.operand);
+    LLINT_END();
+}
+
+extern "C" SlowPathReturnType llint_trace_value(ExecState* exec, Instruction* pc, int fromWhere, int operand)
+{
+    LLINT_BEGIN();
+    JSValue value = LLINT_OP_C(operand).jsValue();
+    union {
+        struct {
+            uint32_t tag;
+            uint32_t payload;
+        } bits;
+        EncodedJSValue asValue;
+    } u;
+    u.asValue = JSValue::encode(value);
+    dataLog("%p / %p: executing bc#%zu, op#%u: Trace(%d): %d: %d: %08x:%08x: %s\n",
+            exec->codeBlock(),
+            exec,
+            static_cast<size_t>(pc - exec->codeBlock()->instructions().begin()),
+            exec->globalData().interpreter->getOpcodeID(pc[0].u.opcode),
+            fromWhere,
+            operand,
+            pc[operand].u.operand,
+            u.bits.tag,
+            u.bits.payload,
+            value.description());
+    LLINT_END();
+}
+
+LLINT_SLOW_PATH_DECL(trace_prologue)
+{
+    LLINT_BEGIN();
+    dataLog("%p / %p: in prologue.\n", exec->codeBlock(), exec);
+    LLINT_END();
+}
+
+static void traceFunctionPrologue(ExecState* exec, const char* comment, CodeSpecializationKind kind)
+{
+    JSFunction* callee = asFunction(exec->callee());
+    FunctionExecutable* executable = callee->jsExecutable();
+    CodeBlock* codeBlock = &executable->generatedBytecodeFor(kind);
+    dataLog("%p / %p: in %s of function %p, executable %p; numVars = %u, numParameters = %u, numCalleeRegisters = %u, caller = %p.\n",
+            codeBlock, exec, comment, callee, executable,
+            codeBlock->m_numVars, codeBlock->numParameters(), codeBlock->m_numCalleeRegisters,
+            exec->callerFrame());
+}
+
+LLINT_SLOW_PATH_DECL(trace_prologue_function_for_call)
+{
+    LLINT_BEGIN();
+    traceFunctionPrologue(exec, "call prologue", CodeForCall);
+    LLINT_END();
+}
+
+LLINT_SLOW_PATH_DECL(trace_prologue_function_for_construct)
+{
+    LLINT_BEGIN();
+    traceFunctionPrologue(exec, "construct prologue", CodeForConstruct);
+    LLINT_END();
+}
+
+LLINT_SLOW_PATH_DECL(trace_arityCheck_for_call)
+{
+    LLINT_BEGIN();
+    traceFunctionPrologue(exec, "call arity check", CodeForCall);
+    LLINT_END();
+}
+
+LLINT_SLOW_PATH_DECL(trace_arityCheck_for_construct)
+{
+    LLINT_BEGIN();
+    traceFunctionPrologue(exec, "construct arity check", CodeForConstruct);
+    LLINT_END();
+}
+
+LLINT_SLOW_PATH_DECL(trace)
+{
+    LLINT_BEGIN();
+    dataLog("%p / %p: executing bc#%zu, %s, scope %p\n",
+            exec->codeBlock(),
+            exec,
+            static_cast<size_t>(pc - exec->codeBlock()->instructions().begin()),
+            opcodeNames[exec->globalData().interpreter->getOpcodeID(pc[0].u.opcode)],
+            exec->scopeChain());
+    LLINT_END();
+}
+
+LLINT_SLOW_PATH_DECL(special_trace)
+{
+    LLINT_BEGIN();
+    dataLog("%p / %p: executing special case bc#%zu, op#%u, return PC is %p\n",
+            exec->codeBlock(),
+            exec,
+            static_cast<size_t>(pc - exec->codeBlock()->instructions().begin()),
+            exec->globalData().interpreter->getOpcodeID(pc[0].u.opcode),
+            exec->returnPC().value());
+    LLINT_END();
+}
+
+inline bool shouldJIT(ExecState* exec)
+{
+    // You can modify this to turn off JITting without rebuilding the world.
+    return exec->globalData().canUseJIT();
+}
+
+enum EntryKind { Prologue, ArityCheck };
+static SlowPathReturnType entryOSR(ExecState* exec, Instruction* pc, CodeBlock* codeBlock, const char* name, EntryKind kind)
+{
+#if ENABLE(JIT_VERBOSE_OSR)
+    dataLog("%p: Entered %s with executeCounter = %d\n", codeBlock, name, codeBlock->llintExecuteCounter());
+#endif
+    
+    if (!shouldJIT(exec)) {
+        codeBlock->dontJITAnytimeSoon();
+        LLINT_RETURN_TWO(0, exec);
+    }
+    if (!codeBlock->jitCompile(exec->globalData())) {
+#if ENABLE(JIT_VERBOSE_OSR)
+        dataLog("    Code was already compiled.\n");
+#endif
+    }
+    codeBlock->jitSoon();
+    if (kind == Prologue)
+        LLINT_RETURN_TWO(codeBlock->getJITCode().executableAddressAtOffset(0), exec);
+    ASSERT(kind == ArityCheck);
+    LLINT_RETURN_TWO(codeBlock->getJITCodeWithArityCheck().executableAddress(), exec);
+}
+
+LLINT_SLOW_PATH_DECL(entry_osr)
+{
+    return entryOSR(exec, pc, exec->codeBlock(), "entry_osr", Prologue);
+}
+
+LLINT_SLOW_PATH_DECL(entry_osr_function_for_call)
+{
+    return entryOSR(exec, pc, &asFunction(exec->callee())->jsExecutable()->generatedBytecodeFor(CodeForCall), "entry_osr_function_for_call", Prologue);
+}
+
+LLINT_SLOW_PATH_DECL(entry_osr_function_for_construct)
+{
+    return entryOSR(exec, pc, &asFunction(exec->callee())->jsExecutable()->generatedBytecodeFor(CodeForConstruct), "entry_osr_function_for_construct", Prologue);
+}
+
+LLINT_SLOW_PATH_DECL(entry_osr_function_for_call_arityCheck)
+{
+    return entryOSR(exec, pc, &asFunction(exec->callee())->jsExecutable()->generatedBytecodeFor(CodeForCall), "entry_osr_function_for_call_arityCheck", ArityCheck);
+}
+
+LLINT_SLOW_PATH_DECL(entry_osr_function_for_construct_arityCheck)
+{
+    return entryOSR(exec, pc, &asFunction(exec->callee())->jsExecutable()->generatedBytecodeFor(CodeForConstruct), "entry_osr_function_for_construct_arityCheck", ArityCheck);
+}
+
+LLINT_SLOW_PATH_DECL(loop_osr)
+{
+    CodeBlock* codeBlock = exec->codeBlock();
+    
+#if ENABLE(JIT_VERBOSE_OSR)
+    dataLog("%p: Entered loop_osr with executeCounter = %d\n", codeBlock, codeBlock->llintExecuteCounter());
+#endif
+    
+    if (!shouldJIT(exec)) {
+        codeBlock->dontJITAnytimeSoon();
+        LLINT_RETURN_TWO(0, exec);
+    }
+    
+    if (!codeBlock->jitCompile(exec->globalData())) {
+#if ENABLE(JIT_VERBOSE_OSR)
+        dataLog("    Code was already compiled.\n");
+#endif
+    }
+    codeBlock->jitSoon();
+    
+    ASSERT(codeBlock->getJITType() == JITCode::BaselineJIT);
+    
+    Vector<BytecodeAndMachineOffset> map;
+    codeBlock->jitCodeMap()->decode(map);
+    BytecodeAndMachineOffset* mapping = binarySearch<BytecodeAndMachineOffset, unsigned, BytecodeAndMachineOffset::getBytecodeIndex>(map.begin(), map.size(), pc - codeBlock->instructions().begin());
+    ASSERT(mapping);
+    ASSERT(mapping->m_bytecodeIndex == static_cast<unsigned>(pc - codeBlock->instructions().begin()));
+    
+    void* jumpTarget = codeBlock->getJITCode().executableAddressAtOffset(mapping->m_machineCodeOffset);
+    ASSERT(jumpTarget);
+    
+    LLINT_RETURN_TWO(jumpTarget, exec);
+}
+
+LLINT_SLOW_PATH_DECL(replace)
+{
+    CodeBlock* codeBlock = exec->codeBlock();
+    
+#if ENABLE(JIT_VERBOSE_OSR)
+    dataLog("%p: Entered replace with executeCounter = %d\n", codeBlock, codeBlock->llintExecuteCounter());
+#endif
+    
+    if (shouldJIT(exec)) {
+        if (!codeBlock->jitCompile(exec->globalData())) {
+#if ENABLE(JIT_VERBOSE_OSR)
+            dataLog("    Code was already compiled.\n");
+#endif
+        }
+        codeBlock->jitSoon();
+    } else
+        codeBlock->dontJITAnytimeSoon();
+    LLINT_END_IMPL();
+}
+
+LLINT_SLOW_PATH_DECL(register_file_check)
+{
+    LLINT_BEGIN();
+#if LLINT_SLOW_PATH_TRACING
+    dataLog("Checking stack height with exec = %p.\n", exec);
+    dataLog("CodeBlock = %p.\n", exec->codeBlock());
+    dataLog("Num callee registers = %u.\n", exec->codeBlock()->m_numCalleeRegisters);
+    dataLog("Num vars = %u.\n", exec->codeBlock()->m_numVars);
+    dataLog("Current end is at %p.\n", exec->globalData().interpreter->registerFile().end());
+#endif
+    ASSERT(&exec->registers()[exec->codeBlock()->m_numCalleeRegisters] > exec->globalData().interpreter->registerFile().end());
+    if (UNLIKELY(!globalData.interpreter->registerFile().grow(&exec->registers()[exec->codeBlock()->m_numCalleeRegisters]))) {
+        ReturnAddressPtr returnPC = exec->returnPC();
+        exec = exec->callerFrame();
+        globalData.exception = createStackOverflowError(exec);
+        interpreterThrowInCaller(exec, returnPC);
+        pc = returnToThrowForThrownException(exec);
+    }
+    LLINT_END_IMPL();
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_call_arityCheck)
+{
+    LLINT_BEGIN();
+    ExecState* newExec = CommonSlowPaths::arityCheckFor(exec, &globalData.interpreter->registerFile(), CodeForCall);
+    if (!newExec) {
+        ReturnAddressPtr returnPC = exec->returnPC();
+        exec = exec->callerFrame();
+        globalData.exception = createStackOverflowError(exec);
+        interpreterThrowInCaller(exec, returnPC);
+        LLINT_RETURN_TWO(bitwise_cast<void*>(static_cast<uintptr_t>(1)), exec);
+    }
+    LLINT_RETURN_TWO(0, newExec);
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_construct_arityCheck)
+{
+    LLINT_BEGIN();
+    ExecState* newExec = CommonSlowPaths::arityCheckFor(exec, &globalData.interpreter->registerFile(), CodeForConstruct);
+    if (!newExec) {
+        ReturnAddressPtr returnPC = exec->returnPC();
+        exec = exec->callerFrame();
+        globalData.exception = createStackOverflowError(exec);
+        interpreterThrowInCaller(exec, returnPC);
+        LLINT_RETURN_TWO(bitwise_cast<void*>(static_cast<uintptr_t>(1)), exec);
+    }
+    LLINT_RETURN_TWO(0, newExec);
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_create_activation)
+{
+    LLINT_BEGIN();
+#if LLINT_SLOW_PATH_TRACING
+    dataLog("Creating an activation, exec = %p!\n", exec);
+#endif
+    JSActivation* activation = JSActivation::create(globalData, exec, static_cast<FunctionExecutable*>(exec->codeBlock()->ownerExecutable()));
+    exec->setScopeChain(exec->scopeChain()->push(activation));
+    LLINT_RETURN(JSValue(activation));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_create_arguments)
+{
+    LLINT_BEGIN();
+    JSValue arguments = JSValue(Arguments::create(globalData, exec));
+    LLINT_CHECK_EXCEPTION();
+    exec->uncheckedR(pc[1].u.operand) = arguments;
+    exec->uncheckedR(unmodifiedArgumentsRegister(pc[1].u.operand)) = arguments;
+    LLINT_END();
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_create_this)
+{
+    LLINT_BEGIN();
+    JSFunction* constructor = asFunction(exec->callee());
+    
+#if !ASSERT_DISABLED
+    ConstructData constructData;
+    ASSERT(constructor->methodTable()->getConstructData(constructor, constructData) == ConstructTypeJS);
+#endif
+    
+    Structure* structure;
+    JSValue proto = LLINT_OP(2).jsValue();
+    if (proto.isObject())
+        structure = asObject(proto)->inheritorID(globalData);
+    else
+        structure = constructor->scope()->globalObject->emptyObjectStructure();
+    
+    LLINT_RETURN(constructEmptyObject(exec, structure));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_convert_this)
+{
+    LLINT_BEGIN();
+    JSValue v1 = LLINT_OP(1).jsValue();
+    ASSERT(v1.isPrimitive());
+    LLINT_RETURN(v1.toThisObject(exec));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_new_object)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN(constructEmptyObject(exec));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_new_array)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN(constructArray(exec, bitwise_cast<JSValue*>(&LLINT_OP(2)), pc[3].u.operand));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_new_array_buffer)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN(constructArray(exec, exec->codeBlock()->constantBuffer(pc[2].u.operand), pc[3].u.operand));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_new_regexp)
+{
+    LLINT_BEGIN();
+    RegExp* regExp = exec->codeBlock()->regexp(pc[2].u.operand);
+    if (!regExp->isValid())
+        LLINT_THROW(createSyntaxError(exec, "Invalid flag supplied to RegExp constructor."));
+    LLINT_RETURN(RegExpObject::create(globalData, exec->lexicalGlobalObject(), exec->lexicalGlobalObject()->regExpStructure(), regExp));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_not)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN(jsBoolean(!LLINT_OP_C(2).jsValue().toBoolean(exec)));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_eq)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN(jsBoolean(JSValue::equal(exec, LLINT_OP_C(2).jsValue(), LLINT_OP_C(3).jsValue())));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_neq)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN(jsBoolean(!JSValue::equal(exec, LLINT_OP_C(2).jsValue(), LLINT_OP_C(3).jsValue())));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_stricteq)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN(jsBoolean(JSValue::strictEqual(exec, LLINT_OP_C(2).jsValue(), LLINT_OP_C(3).jsValue())));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_nstricteq)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN(jsBoolean(!JSValue::strictEqual(exec, LLINT_OP_C(2).jsValue(), LLINT_OP_C(3).jsValue())));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_less)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN(jsBoolean(jsLess<true>(exec, LLINT_OP_C(2).jsValue(), LLINT_OP_C(3).jsValue())));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_lesseq)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN(jsBoolean(jsLessEq<true>(exec, LLINT_OP_C(2).jsValue(), LLINT_OP_C(3).jsValue())));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_greater)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN(jsBoolean(jsLess<false>(exec, LLINT_OP_C(3).jsValue(), LLINT_OP_C(2).jsValue())));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_greatereq)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN(jsBoolean(jsLessEq<false>(exec, LLINT_OP_C(3).jsValue(), LLINT_OP_C(2).jsValue())));
+}
+
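The `greater` and `greatereq` paths above reuse the "less" primitives with swapped operands; the boolean template parameter on `jsLess`/`jsLessEq` records which operand the source program evaluated first, so conversion side effects keep their original order. A minimal sketch of the comparison half of that trick (all names here are hypothetical, and NaN handling stands in for the full ECMAScript abstract relational comparison):

```cpp
#include <cmath>

// Sketch: a single "less" primitive, parameterized the way jsLess is.
// The template parameter would steer evaluation order in the real code;
// here we only model the comparison itself.
template<bool leftFirst>
static bool sketchLess(double a, double b)
{
    // NaN makes every relational comparison false.
    if (std::isnan(a) || std::isnan(b))
        return false;
    return a < b;
}

// a > b is implemented as b < a, with the operands swapped at the call
// site, mirroring slow_path_greater above.
static bool sketchGreater(double a, double b)
{
    return sketchLess<false>(b, a);
}
```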
+LLINT_SLOW_PATH_DECL(slow_path_pre_inc)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN(jsNumber(LLINT_OP(1).jsValue().toNumber(exec) + 1));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_pre_dec)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN(jsNumber(LLINT_OP(1).jsValue().toNumber(exec) - 1));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_post_inc)
+{
+    LLINT_BEGIN();
+    double result = LLINT_OP(2).jsValue().toNumber(exec);
+    LLINT_OP(2) = jsNumber(result + 1);
+    LLINT_RETURN(jsNumber(result));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_post_dec)
+{
+    LLINT_BEGIN();
+    double result = LLINT_OP(2).jsValue().toNumber(exec);
+    LLINT_OP(2) = jsNumber(result - 1);
+    LLINT_RETURN(jsNumber(result));
+}
+
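The post-increment and post-decrement slow paths above read the operand, write back the updated number, and return the *original* converted value. A small sketch of that contract (the `toNumber` conversion is assumed to have already happened in this simplification):

```cpp
// Hypothetical sketch of the post-increment contract slow_path_post_inc
// implements: the register is updated with value + 1, but the expression
// yields the pre-increment numeric value.
static double postIncSketch(double& slot)
{
    double original = slot;  // value after toNumber() in the real path
    slot = original + 1;     // write back the incremented value
    return original;         // result of the expression
}
```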
+LLINT_SLOW_PATH_DECL(slow_path_to_jsnumber)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN(jsNumber(LLINT_OP_C(2).jsValue().toNumber(exec)));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_negate)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN(jsNumber(-LLINT_OP_C(2).jsValue().toNumber(exec)));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_add)
+{
+    LLINT_BEGIN();
+    JSValue v1 = LLINT_OP_C(2).jsValue();
+    JSValue v2 = LLINT_OP_C(3).jsValue();
+    
+#if LLINT_SLOW_PATH_TRACING
+    dataLog("Trying to add %s", v1.description());
+    dataLog(" to %s.\n", v2.description());
+#endif
+    
+    if (v1.isString() && !v2.isObject())
+        LLINT_RETURN(jsString(exec, asString(v1), v2.toString(exec)));
+    
+    if (v1.isNumber() && v2.isNumber())
+        LLINT_RETURN(jsNumber(v1.asNumber() + v2.asNumber()));
+    
+    LLINT_RETURN(jsAddSlowCase(exec, v1, v2));
+}
+
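`slow_path_add` dispatches three ways: string-led additions concatenate, number + number adds numerically, and everything else falls through to `jsAddSlowCase`. A rough sketch of that dispatch, using `std::variant` in place of `JSValue` (names hypothetical):

```cpp
#include <string>
#include <variant>

using Val = std::variant<double, std::string>;

// Sketch of the three-way dispatch in slow_path_add. The real code's
// third branch defers to jsAddSlowCase(exec, v1, v2); here it is just a
// placeholder return.
static Val addSketch(const Val& a, const Val& b)
{
    if (std::holds_alternative<std::string>(a)
        && std::holds_alternative<std::string>(b)) {
        // String on the left: concatenation.
        return std::get<std::string>(a) + std::get<std::string>(b);
    }
    if (std::holds_alternative<double>(a) && std::holds_alternative<double>(b))
        return std::get<double>(a) + std::get<double>(b);
    // Generic ToPrimitive slow case would run here.
    return Val(0.0);
}
```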
+LLINT_SLOW_PATH_DECL(slow_path_mul)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN(jsNumber(LLINT_OP_C(2).jsValue().toNumber(exec) * LLINT_OP_C(3).jsValue().toNumber(exec)));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_sub)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN(jsNumber(LLINT_OP_C(2).jsValue().toNumber(exec) - LLINT_OP_C(3).jsValue().toNumber(exec)));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_div)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN(jsNumber(LLINT_OP_C(2).jsValue().toNumber(exec) / LLINT_OP_C(3).jsValue().toNumber(exec)));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_mod)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN(jsNumber(fmod(LLINT_OP_C(2).jsValue().toNumber(exec), LLINT_OP_C(3).jsValue().toNumber(exec))));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_lshift)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN(jsNumber(LLINT_OP_C(2).jsValue().toInt32(exec) << (LLINT_OP_C(3).jsValue().toUInt32(exec) & 31)));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_rshift)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN(jsNumber(LLINT_OP_C(2).jsValue().toInt32(exec) >> (LLINT_OP_C(3).jsValue().toUInt32(exec) & 31)));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_urshift)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN(jsNumber(LLINT_OP_C(2).jsValue().toUInt32(exec) >> (LLINT_OP_C(3).jsValue().toUInt32(exec) & 31)));
+}
+
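The shift slow paths above mask the shift count with `& 31`, following the ECMAScript rule that only the low five bits of the (uint32-converted) right operand are used. A sketch of that rule in isolation:

```cpp
#include <cstdint>

// Sketch of the ECMAScript shift-count rule: shifting by 33 behaves
// like shifting by 1, because only the low five bits of the count are
// consulted.
static int32_t lshiftSketch(int32_t value, uint32_t shift)
{
    return value << (shift & 31);
}

static uint32_t urshiftSketch(uint32_t value, uint32_t shift)
{
    return value >> (shift & 31);
}
```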
+LLINT_SLOW_PATH_DECL(slow_path_bitand)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN(jsNumber(LLINT_OP_C(2).jsValue().toInt32(exec) & LLINT_OP_C(3).jsValue().toInt32(exec)));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_bitor)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN(jsNumber(LLINT_OP_C(2).jsValue().toInt32(exec) | LLINT_OP_C(3).jsValue().toInt32(exec)));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_bitxor)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN(jsNumber(LLINT_OP_C(2).jsValue().toInt32(exec) ^ LLINT_OP_C(3).jsValue().toInt32(exec)));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_bitnot)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN(jsNumber(~LLINT_OP_C(2).jsValue().toInt32(exec)));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_check_has_instance)
+{
+    LLINT_BEGIN();
+    JSValue baseVal = LLINT_OP_C(1).jsValue();
+#ifndef NDEBUG
+    TypeInfo typeInfo(UnspecifiedType);
+    ASSERT(!baseVal.isObject()
+           || !(typeInfo = asObject(baseVal)->structure()->typeInfo()).implementsHasInstance());
+#endif
+    LLINT_THROW(createInvalidParamError(exec, "instanceof", baseVal));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_instanceof)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN(jsBoolean(CommonSlowPaths::opInstanceOfSlow(exec, LLINT_OP_C(2).jsValue(), LLINT_OP_C(3).jsValue(), LLINT_OP_C(4).jsValue())));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_typeof)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN(jsTypeStringForValue(exec, LLINT_OP_C(2).jsValue()));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_is_undefined)
+{
+    LLINT_BEGIN();
+    JSValue v = LLINT_OP_C(2).jsValue();
+    LLINT_RETURN(jsBoolean(v.isCell() ? v.asCell()->structure()->typeInfo().masqueradesAsUndefined() : v.isUndefined()));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_is_boolean)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN(jsBoolean(LLINT_OP_C(2).jsValue().isBoolean()));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_is_number)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN(jsBoolean(LLINT_OP_C(2).jsValue().isNumber()));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_is_string)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN(jsBoolean(isJSString(LLINT_OP_C(2).jsValue())));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_is_object)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN(jsBoolean(jsIsObjectType(LLINT_OP_C(2).jsValue())));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_is_function)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN(jsBoolean(jsIsFunctionType(LLINT_OP_C(2).jsValue())));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_in)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN(jsBoolean(CommonSlowPaths::opIn(exec, LLINT_OP_C(2).jsValue(), LLINT_OP_C(3).jsValue())));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_resolve)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN_PROFILED(op_resolve, CommonSlowPaths::opResolve(exec, exec->codeBlock()->identifier(pc[2].u.operand)));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_resolve_skip)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN_PROFILED(
+        op_resolve_skip,
+        CommonSlowPaths::opResolveSkip(
+            exec,
+            exec->codeBlock()->identifier(pc[2].u.operand),
+            pc[3].u.operand));
+}
+
+static JSValue resolveGlobal(ExecState* exec, Instruction* pc)
+{
+    CodeBlock* codeBlock = exec->codeBlock();
+    JSGlobalObject* globalObject = codeBlock->globalObject();
+    ASSERT(globalObject->isGlobalObject());
+    int property = pc[2].u.operand;
+    Structure* structure = pc[3].u.structure.get();
+    
+    ASSERT_UNUSED(structure, structure != globalObject->structure());
+    
+    Identifier& ident = codeBlock->identifier(property);
+    PropertySlot slot(globalObject);
+    
+    if (globalObject->getPropertySlot(exec, ident, slot)) {
+        JSValue result = slot.getValue(exec, ident);
+        if (slot.isCacheableValue() && !globalObject->structure()->isUncacheableDictionary()
+            && slot.slotBase() == globalObject) {
+            pc[3].u.structure.set(
+                exec->globalData(), codeBlock->ownerExecutable(), globalObject->structure());
+            pc[4] = slot.cachedOffset();
+        }
+        
+        return result;
+    }
+    
+    exec->globalData().exception = createUndefinedVariableError(exec, ident);
+    return JSValue();
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_resolve_global)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN_PROFILED(op_resolve_global, resolveGlobal(exec, pc));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_resolve_global_dynamic)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN_PROFILED(op_resolve_global_dynamic, resolveGlobal(exec, pc));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_resolve_for_resolve_global_dynamic)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN_PROFILED(op_resolve_global_dynamic, CommonSlowPaths::opResolve(exec, exec->codeBlock()->identifier(pc[2].u.operand)));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_resolve_base)
+{
+    LLINT_BEGIN();
+    Identifier& ident = exec->codeBlock()->identifier(pc[2].u.operand);
+    if (pc[3].u.operand) {
+        JSValue base = JSC::resolveBase(exec, ident, exec->scopeChain(), true);
+        if (!base)
+            LLINT_THROW(createErrorForInvalidGlobalAssignment(exec, ident.ustring()));
+        LLINT_RETURN(base);
+    }
+    
+    LLINT_RETURN_PROFILED(op_resolve_base, JSC::resolveBase(exec, ident, exec->scopeChain(), false));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_ensure_property_exists)
+{
+    LLINT_BEGIN();
+    JSObject* object = asObject(LLINT_OP(1).jsValue());
+    PropertySlot slot(object);
+    Identifier& ident = exec->codeBlock()->identifier(pc[2].u.operand);
+    if (!object->getPropertySlot(exec, ident, slot))
+        LLINT_THROW(createErrorForInvalidGlobalAssignment(exec, ident.ustring()));
+    LLINT_END();
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_resolve_with_base)
+{
+    LLINT_BEGIN();
+    JSValue result = CommonSlowPaths::opResolveWithBase(exec, exec->codeBlock()->identifier(pc[3].u.operand), LLINT_OP(1));
+    LLINT_CHECK_EXCEPTION();
+    LLINT_OP(2) = result;
+    // FIXME: Technically this should have profiling, but we don't do it because the DFG won't use it.
+    LLINT_END();
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_resolve_with_this)
+{
+    LLINT_BEGIN();
+    JSValue result = CommonSlowPaths::opResolveWithThis(exec, exec->codeBlock()->identifier(pc[3].u.operand), LLINT_OP(1));
+    LLINT_CHECK_EXCEPTION();
+    LLINT_OP(2) = result;
+    // FIXME: Technically this should have profiling, but we don't do it because the DFG won't use it.
+    LLINT_END();
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_get_by_id)
+{
+    LLINT_BEGIN();
+    CodeBlock* codeBlock = exec->codeBlock();
+    Identifier& ident = codeBlock->identifier(pc[3].u.operand);
+    JSValue baseValue = LLINT_OP_C(2).jsValue();
+    PropertySlot slot(baseValue);
+
+    JSValue result = baseValue.get(exec, ident, slot);
+    LLINT_CHECK_EXCEPTION();
+    LLINT_OP(1) = result;
+
+    if (baseValue.isCell()
+        && slot.isCacheable()
+        && slot.slotBase() == baseValue
+        && slot.cachedPropertyType() == PropertySlot::Value) {
+        
+        JSCell* baseCell = baseValue.asCell();
+        Structure* structure = baseCell->structure();
+        
+        if (!structure->isUncacheableDictionary()
+            && !structure->typeInfo().prohibitsPropertyCaching()) {
+            pc[4].u.structure.set(
+                globalData, codeBlock->ownerExecutable(), structure);
+            pc[5].u.operand = slot.cachedOffset() * sizeof(JSValue);
+        }
+    }
+    
+    pc[OPCODE_LENGTH(op_get_by_id) - 1].u.profile->m_buckets[0] = JSValue::encode(result);
+    LLINT_END();
+}
+
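On a cacheable hit, `slow_path_get_by_id` writes the observed structure and the property's byte offset directly into the instruction stream (`pc[4]`/`pc[5]`), so the LLInt fast path can later service the access with one structure compare and one load. A minimal sketch of that monomorphic inline-cache shape (all types and names here are hypothetical simplifications):

```cpp
#include <cstddef>

struct SketchStructure { int id; };
struct SketchObject { SketchStructure* structure; double slots[4]; };

// Stands in for the two instruction-stream slots the slow path fills in.
struct SketchCache {
    SketchStructure* expected = nullptr; // analogous to pc[4].u.structure
    size_t offset = 0;                   // analogous to pc[5].u.operand
};

static bool cachedGet(SketchObject* object, SketchCache& cache, double& result)
{
    // Fast path: one structure check, then a direct indexed load.
    if (object->structure == cache.expected) {
        result = object->slots[cache.offset];
        return true;
    }
    // The slow path would do the full lookup here and repopulate the cache.
    return false;
}
```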
+LLINT_SLOW_PATH_DECL(slow_path_get_arguments_length)
+{
+    LLINT_BEGIN();
+    CodeBlock* codeBlock = exec->codeBlock();
+    Identifier& ident = codeBlock->identifier(pc[3].u.operand);
+    JSValue baseValue = LLINT_OP(2).jsValue();
+    PropertySlot slot(baseValue);
+    LLINT_RETURN(baseValue.get(exec, ident, slot));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_put_by_id)
+{
+    LLINT_BEGIN();
+    CodeBlock* codeBlock = exec->codeBlock();
+    Identifier& ident = codeBlock->identifier(pc[2].u.operand);
+    
+    JSValue baseValue = LLINT_OP_C(1).jsValue();
+    PutPropertySlot slot(codeBlock->isStrictMode());
+    if (pc[8].u.operand)
+        asObject(baseValue)->putDirect(globalData, ident, LLINT_OP_C(3).jsValue(), slot);
+    else
+        baseValue.put(exec, ident, LLINT_OP_C(3).jsValue(), slot);
+    LLINT_CHECK_EXCEPTION();
+    
+    if (baseValue.isCell()
+        && slot.isCacheable()) {
+        
+        JSCell* baseCell = baseValue.asCell();
+        Structure* structure = baseCell->structure();
+        
+        if (!structure->isUncacheableDictionary()
+            && !structure->typeInfo().prohibitsPropertyCaching()
+            && baseCell == slot.base()) {
+            
+            if (slot.type() == PutPropertySlot::NewProperty) {
+                if (!structure->isDictionary() && structure->previousID()->propertyStorageCapacity() == structure->propertyStorageCapacity()) {
+                    // This is needed because some of the methods we call
+                    // below may GC.
+                    pc[0].u.opcode = bitwise_cast<void*>(&llint_op_put_by_id);
+
+                    normalizePrototypeChain(exec, baseCell);
+                    
+                    ASSERT(structure->previousID()->isObject());
+                    pc[4].u.structure.set(
+                        globalData, codeBlock->ownerExecutable(), structure->previousID());
+                    pc[5].u.operand = slot.cachedOffset() * sizeof(JSValue);
+                    pc[6].u.structure.set(
+                        globalData, codeBlock->ownerExecutable(), structure);
+                    StructureChain* chain = structure->prototypeChain(exec);
+                    ASSERT(chain);
+                    pc[7].u.structureChain.set(
+                        globalData, codeBlock->ownerExecutable(), chain);
+                    
+                    if (pc[8].u.operand)
+                        pc[0].u.opcode = bitwise_cast<void*>(&llint_op_put_by_id_transition_direct);
+                    else
+                        pc[0].u.opcode = bitwise_cast<void*>(&llint_op_put_by_id_transition_normal);
+                }
+            } else {
+                pc[0].u.opcode = bitwise_cast<void*>(&llint_op_put_by_id);
+                pc[4].u.structure.set(
+                    globalData, codeBlock->ownerExecutable(), structure);
+                pc[5].u.operand = slot.cachedOffset() * sizeof(JSValue);
+            }
+        }
+    }
+    
+    LLINT_END();
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_del_by_id)
+{
+    LLINT_BEGIN();
+    CodeBlock* codeBlock = exec->codeBlock();
+    JSObject* baseObject = LLINT_OP_C(2).jsValue().toObject(exec);
+    bool couldDelete = baseObject->methodTable()->deleteProperty(baseObject, exec, codeBlock->identifier(pc[3].u.operand));
+    LLINT_CHECK_EXCEPTION();
+    if (!couldDelete && codeBlock->isStrictMode())
+        LLINT_THROW(createTypeError(exec, "Unable to delete property."));
+    LLINT_RETURN(jsBoolean(couldDelete));
+}
+
+inline JSValue getByVal(ExecState* exec, JSValue baseValue, JSValue subscript)
+{
+    if (LIKELY(baseValue.isCell() && subscript.isString())) {
+        if (JSValue result = baseValue.asCell()->fastGetOwnProperty(exec, asString(subscript)->value(exec)))
+            return result;
+    }
+    
+    if (subscript.isUInt32()) {
+        uint32_t i = subscript.asUInt32();
+        if (isJSString(baseValue) && asString(baseValue)->canGetIndex(i))
+            return asString(baseValue)->getIndex(exec, i);
+        
+        if (isJSByteArray(baseValue) && asByteArray(baseValue)->canAccessIndex(i))
+            return asByteArray(baseValue)->getIndex(exec, i);
+        
+        return baseValue.get(exec, i);
+    }
+    
+    Identifier property(exec, subscript.toString(exec)->value(exec));
+    return baseValue.get(exec, property);
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_get_by_val)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN_PROFILED(op_get_by_val, getByVal(exec, LLINT_OP_C(2).jsValue(), LLINT_OP_C(3).jsValue()));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_get_argument_by_val)
+{
+    LLINT_BEGIN();
+    JSValue arguments = LLINT_OP(2).jsValue();
+    if (!arguments) {
+        arguments = Arguments::create(globalData, exec);
+        LLINT_CHECK_EXCEPTION();
+        LLINT_OP(2) = arguments;
+        exec->uncheckedR(unmodifiedArgumentsRegister(pc[2].u.operand)) = arguments;
+    }
+    
+    LLINT_RETURN(getByVal(exec, arguments, LLINT_OP_C(3).jsValue()));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_get_by_pname)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN(getByVal(exec, LLINT_OP(2).jsValue(), LLINT_OP(3).jsValue()));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_put_by_val)
+{
+    LLINT_BEGIN();
+    
+    JSValue baseValue = LLINT_OP_C(1).jsValue();
+    JSValue subscript = LLINT_OP_C(2).jsValue();
+    JSValue value = LLINT_OP_C(3).jsValue();
+    
+    if (LIKELY(subscript.isUInt32())) {
+        uint32_t i = subscript.asUInt32();
+        if (isJSArray(baseValue)) {
+            JSArray* jsArray = asArray(baseValue);
+            if (jsArray->canSetIndex(i))
+                jsArray->setIndex(globalData, i, value);
+            else
+                JSArray::putByIndex(jsArray, exec, i, value);
+            LLINT_END();
+        }
+        if (isJSByteArray(baseValue)
+            && asByteArray(baseValue)->canAccessIndex(i)) {
+            JSByteArray* jsByteArray = asByteArray(baseValue);
+            if (value.isInt32()) {
+                jsByteArray->setIndex(i, value.asInt32());
+                LLINT_END();
+            }
+            if (value.isNumber()) {
+                jsByteArray->setIndex(i, value.asNumber());
+                LLINT_END();
+            }
+        }
+        baseValue.put(exec, i, value);
+        LLINT_END();
+    }
+    
+    Identifier property(exec, subscript.toString(exec)->value(exec));
+    LLINT_CHECK_EXCEPTION();
+    PutPropertySlot slot(exec->codeBlock()->isStrictMode());
+    baseValue.put(exec, property, value, slot);
+    LLINT_END();
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_del_by_val)
+{
+    LLINT_BEGIN();
+    JSValue baseValue = LLINT_OP_C(2).jsValue();
+    JSObject* baseObject = baseValue.toObject(exec);
+    
+    JSValue subscript = LLINT_OP_C(3).jsValue();
+    
+    bool couldDelete;
+    
+    uint32_t i;
+    if (subscript.getUInt32(i))
+        couldDelete = baseObject->methodTable()->deletePropertyByIndex(baseObject, exec, i);
+    else {
+        LLINT_CHECK_EXCEPTION();
+        Identifier property(exec, subscript.toString(exec)->value(exec));
+        LLINT_CHECK_EXCEPTION();
+        couldDelete = baseObject->methodTable()->deleteProperty(baseObject, exec, property);
+    }
+    
+    if (!couldDelete && exec->codeBlock()->isStrictMode())
+        LLINT_THROW(createTypeError(exec, "Unable to delete property."));
+    
+    LLINT_RETURN(jsBoolean(couldDelete));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_put_by_index)
+{
+    LLINT_BEGIN();
+    LLINT_OP_C(1).jsValue().put(exec, pc[2].u.operand, LLINT_OP_C(3).jsValue());
+    LLINT_END();
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_put_getter_setter)
+{
+    LLINT_BEGIN();
+    ASSERT(LLINT_OP(1).jsValue().isObject());
+    JSObject* baseObj = asObject(LLINT_OP(1).jsValue());
+    
+    GetterSetter* accessor = GetterSetter::create(exec);
+    LLINT_CHECK_EXCEPTION();
+    
+    JSValue getter = LLINT_OP(3).jsValue();
+    JSValue setter = LLINT_OP(4).jsValue();
+    ASSERT(getter.isObject() || getter.isUndefined());
+    ASSERT(setter.isObject() || setter.isUndefined());
+    ASSERT(getter.isObject() || setter.isObject());
+    
+    if (!getter.isUndefined())
+        accessor->setGetter(globalData, asObject(getter));
+    if (!setter.isUndefined())
+        accessor->setSetter(globalData, asObject(setter));
+    baseObj->putDirectAccessor(
+        globalData,
+        exec->codeBlock()->identifier(pc[2].u.operand),
+        accessor, Accessor);
+    LLINT_END();
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_jmp_scopes)
+{
+    LLINT_BEGIN();
+    unsigned count = pc[1].u.operand;
+    ScopeChainNode* tmp = exec->scopeChain();
+    while (count--)
+        tmp = tmp->pop();
+    exec->setScopeChain(tmp);
+    pc += pc[2].u.operand;
+    LLINT_END();
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_jtrue)
+{
+    LLINT_BEGIN();
+    LLINT_BRANCH(op_jtrue, LLINT_OP_C(1).jsValue().toBoolean(exec));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_jfalse)
+{
+    LLINT_BEGIN();
+    LLINT_BRANCH(op_jfalse, !LLINT_OP_C(1).jsValue().toBoolean(exec));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_jless)
+{
+    LLINT_BEGIN();
+    LLINT_BRANCH(op_jless, jsLess<true>(exec, LLINT_OP_C(1).jsValue(), LLINT_OP_C(2).jsValue()));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_jnless)
+{
+    LLINT_BEGIN();
+    LLINT_BRANCH(op_jnless, !jsLess<true>(exec, LLINT_OP_C(1).jsValue(), LLINT_OP_C(2).jsValue()));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_jgreater)
+{
+    LLINT_BEGIN();
+    LLINT_BRANCH(op_jgreater, jsLess<false>(exec, LLINT_OP_C(2).jsValue(), LLINT_OP_C(1).jsValue()));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_jngreater)
+{
+    LLINT_BEGIN();
+    LLINT_BRANCH(op_jngreater, !jsLess<false>(exec, LLINT_OP_C(2).jsValue(), LLINT_OP_C(1).jsValue()));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_jlesseq)
+{
+    LLINT_BEGIN();
+    LLINT_BRANCH(op_jlesseq, jsLessEq<true>(exec, LLINT_OP_C(1).jsValue(), LLINT_OP_C(2).jsValue()));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_jnlesseq)
+{
+    LLINT_BEGIN();
+    LLINT_BRANCH(op_jnlesseq, !jsLessEq<true>(exec, LLINT_OP_C(1).jsValue(), LLINT_OP_C(2).jsValue()));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_jgreatereq)
+{
+    LLINT_BEGIN();
+    LLINT_BRANCH(op_jgreatereq, jsLessEq<false>(exec, LLINT_OP_C(2).jsValue(), LLINT_OP_C(1).jsValue()));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_jngreatereq)
+{
+    LLINT_BEGIN();
+    LLINT_BRANCH(op_jngreatereq, !jsLessEq<false>(exec, LLINT_OP_C(2).jsValue(), LLINT_OP_C(1).jsValue()));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_switch_imm)
+{
+    LLINT_BEGIN();
+    JSValue scrutinee = LLINT_OP_C(3).jsValue();
+    ASSERT(scrutinee.isDouble());
+    double value = scrutinee.asDouble();
+    int32_t intValue = static_cast<int32_t>(value);
+    int defaultOffset = pc[2].u.operand;
+    if (value == intValue) {
+        CodeBlock* codeBlock = exec->codeBlock();
+        pc += codeBlock->immediateSwitchJumpTable(pc[1].u.operand).offsetForValue(intValue, defaultOffset);
+    } else
+        pc += defaultOffset;
+    LLINT_END();
+}
+
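`slow_path_switch_imm` only consults the jump table when the double scrutinee round-trips exactly through int32; otherwise it takes the default offset. That check can be sketched as:

```cpp
#include <cstdint>

// Sketch of the integer-identity test above: 2.0 matches case 2, but
// 2.5 (or any non-int32-representable double) falls through to the
// default target.
static bool fitsImmediateSwitch(double value, int32_t& intValue)
{
    intValue = static_cast<int32_t>(value);
    return value == static_cast<double>(intValue);
}
```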
+LLINT_SLOW_PATH_DECL(slow_path_switch_string)
+{
+    LLINT_BEGIN();
+    JSValue scrutinee = LLINT_OP_C(3).jsValue();
+    int defaultOffset = pc[2].u.operand;
+    if (!scrutinee.isString())
+        pc += defaultOffset;
+    else {
+        CodeBlock* codeBlock = exec->codeBlock();
+        pc += codeBlock->stringSwitchJumpTable(pc[1].u.operand).offsetForValue(asString(scrutinee)->value(exec).impl(), defaultOffset);
+    }
+    LLINT_END();
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_new_func)
+{
+    LLINT_BEGIN();
+    CodeBlock* codeBlock = exec->codeBlock();
+    ASSERT(codeBlock->codeType() != FunctionCode
+           || !codeBlock->needsFullScopeChain()
+           || exec->uncheckedR(codeBlock->activationRegister()).jsValue());
+#if LLINT_SLOW_PATH_TRACING
+    dataLog("Creating function!\n");
+#endif
+    LLINT_RETURN(codeBlock->functionDecl(pc[2].u.operand)->make(exec, exec->scopeChain()));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_new_func_exp)
+{
+    LLINT_BEGIN();
+    CodeBlock* codeBlock = exec->codeBlock();
+    FunctionExecutable* function = codeBlock->functionExpr(pc[2].u.operand);
+    JSFunction* func = function->make(exec, exec->scopeChain());
+    
+    if (!function->name().isNull()) {
+        JSStaticScopeObject* functionScopeObject = JSStaticScopeObject::create(exec, function->name(), func, ReadOnly | DontDelete);
+        func->setScope(globalData, func->scope()->push(functionScopeObject));
+    }
+    
+    LLINT_RETURN(func);
+}
+
+static SlowPathReturnType handleHostCall(ExecState* execCallee, Instruction* pc, JSValue callee, CodeSpecializationKind kind)
+{
+    ExecState* exec = execCallee->callerFrame();
+    JSGlobalData& globalData = exec->globalData();
+    
+    execCallee->setScopeChain(exec->scopeChain());
+    execCallee->setCodeBlock(0);
+    execCallee->clearReturnPC();
+
+    if (kind == CodeForCall) {
+        CallData callData;
+        CallType callType = getCallData(callee, callData);
+    
+        ASSERT(callType != CallTypeJS);
+    
+        if (callType == CallTypeHost) {
+            globalData.hostCallReturnValue = JSValue::decode(callData.native.function(execCallee));
+            
+            LLINT_CALL_RETURN(execCallee, pc, reinterpret_cast<void*>(getHostCallReturnValue));
+        }
+        
+#if LLINT_SLOW_PATH_TRACING
+        dataLog("Call callee is not a function: %s\n", callee.description());
+#endif
+
+        ASSERT(callType == CallTypeNone);
+        LLINT_CALL_THROW(exec, pc, createNotAFunctionError(exec, callee));
+    }
+
+    ASSERT(kind == CodeForConstruct);
+    
+    ConstructData constructData;
+    ConstructType constructType = getConstructData(callee, constructData);
+    
+    ASSERT(constructType != ConstructTypeJS);
+    
+    if (constructType == ConstructTypeHost) {
+        globalData.hostCallReturnValue = JSValue::decode(constructData.native.function(execCallee));
+
+        LLINT_CALL_RETURN(execCallee, pc, reinterpret_cast<void*>(getHostCallReturnValue));
+    }
+    
+#if LLINT_SLOW_PATH_TRACING
+    dataLog("Constructor callee is not a function: %s\n", callee.description());
+#endif
+
+    ASSERT(constructType == ConstructTypeNone);
+    LLINT_CALL_THROW(exec, pc, createNotAConstructorError(exec, callee));
+}
+
+inline SlowPathReturnType setUpCall(ExecState* execCallee, Instruction* pc, CodeSpecializationKind kind, JSValue calleeAsValue, LLIntCallLinkInfo* callLinkInfo = 0)
+{
+#if LLINT_SLOW_PATH_TRACING
+    dataLog("Performing call with recorded PC = %p\n", execCallee->callerFrame()->currentVPC());
+#endif
+
+    JSCell* calleeAsFunctionCell = getJSFunction(calleeAsValue);
+    if (!calleeAsFunctionCell)
+        return handleHostCall(execCallee, pc, calleeAsValue, kind);
+    
+    JSFunction* callee = asFunction(calleeAsFunctionCell);
+    ScopeChainNode* scope = callee->scopeUnchecked();
+    JSGlobalData& globalData = *scope->globalData;
+    execCallee->setScopeChain(scope);
+    ExecutableBase* executable = callee->executable();
+    
+    MacroAssemblerCodePtr codePtr;
+    CodeBlock* codeBlock = 0;
+    if (executable->isHostFunction())
+        codePtr = executable->generatedJITCodeFor(kind).addressForCall();
+    else {
+        FunctionExecutable* functionExecutable = static_cast<FunctionExecutable*>(executable);
+        JSObject* error = functionExecutable->compileFor(execCallee, callee->scope(), kind);
+        if (error)
+            LLINT_CALL_THROW(execCallee->callerFrame(), pc, error);
+        codeBlock = &functionExecutable->generatedBytecodeFor(kind);
+        ASSERT(codeBlock);
+        if (execCallee->argumentCountIncludingThis() < static_cast<size_t>(codeBlock->numParameters()))
+            codePtr = functionExecutable->generatedJITCodeWithArityCheckFor(kind);
+        else
+            codePtr = functionExecutable->generatedJITCodeFor(kind).addressForCall();
+    }
+    
+    if (callLinkInfo) {
+        if (callLinkInfo->isOnList())
+            callLinkInfo->remove();
+        ExecState* execCaller = execCallee->callerFrame();
+        callLinkInfo->callee.set(globalData, execCaller->codeBlock()->ownerExecutable(), callee);
+        callLinkInfo->lastSeenCallee.set(globalData, execCaller->codeBlock()->ownerExecutable(), callee);
+        callLinkInfo->machineCodeTarget = codePtr;
+        if (codeBlock)
+            codeBlock->linkIncomingCall(callLinkInfo);
+    }
+    
+    LLINT_CALL_RETURN(execCallee, pc, codePtr.executableAddress());
+}
+
+inline SlowPathReturnType genericCall(ExecState* exec, Instruction* pc, CodeSpecializationKind kind)
+{
+    // This needs to:
+    // - Set up a call frame.
+    // - Figure out what to call and compile it if necessary.
+    // - If possible, link the call's inline cache.
+    // - Return a tuple of machine code address to call and the new call frame.
+    
+    JSValue calleeAsValue = LLINT_OP_C(1).jsValue();
+    
+    ExecState* execCallee = exec + pc[3].u.operand;
+    
+    execCallee->setArgumentCountIncludingThis(pc[2].u.operand);
+    execCallee->uncheckedR(RegisterFile::Callee) = calleeAsValue;
+    execCallee->setCallerFrame(exec);
+    
+    ASSERT(pc[4].u.callLinkInfo);
+    return setUpCall(execCallee, pc, kind, calleeAsValue, pc[4].u.callLinkInfo);
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_call)
+{
+    LLINT_BEGIN_NO_SET_PC();
+    return genericCall(exec, pc, CodeForCall);
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_construct)
+{
+    LLINT_BEGIN_NO_SET_PC();
+    return genericCall(exec, pc, CodeForConstruct);
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_call_varargs)
+{
+    LLINT_BEGIN();
+    // This needs to:
+    // - Set up a call frame while respecting the variable arguments.
+    // - Figure out what to call and compile it if necessary.
+    // - Return a tuple of machine code address to call and the new call frame.
+    
+    JSValue calleeAsValue = LLINT_OP_C(1).jsValue();
+    
+    ExecState* execCallee = loadVarargs(
+        exec, &globalData.interpreter->registerFile(),
+        LLINT_OP_C(2).jsValue(), LLINT_OP_C(3).jsValue(), pc[4].u.operand);
+    LLINT_CALL_CHECK_EXCEPTION(exec, pc);
+    
+    execCallee->uncheckedR(RegisterFile::Callee) = calleeAsValue;
+    execCallee->setCallerFrame(exec);
+    exec->uncheckedR(RegisterFile::ArgumentCount).tag() = bitwise_cast<int32_t>(pc + OPCODE_LENGTH(op_call_varargs));
+    
+    return setUpCall(execCallee, pc, CodeForCall, calleeAsValue);
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_call_eval)
+{
+    LLINT_BEGIN_NO_SET_PC();
+    JSValue calleeAsValue = LLINT_OP(1).jsValue();
+    
+    ExecState* execCallee = exec + pc[3].u.operand;
+    
+    execCallee->setArgumentCountIncludingThis(pc[2].u.operand);
+    execCallee->setCallerFrame(exec);
+    execCallee->uncheckedR(RegisterFile::Callee) = calleeAsValue;
+    execCallee->setScopeChain(exec->scopeChain());
+    execCallee->setReturnPC(bitwise_cast<Instruction*>(&llint_generic_return_point));
+    execCallee->setCodeBlock(0);
+    exec->uncheckedR(RegisterFile::ArgumentCount).tag() = bitwise_cast<int32_t>(pc + OPCODE_LENGTH(op_call_eval));
+    
+    if (!isHostFunction(calleeAsValue, globalFuncEval))
+        return setUpCall(execCallee, pc, CodeForCall, calleeAsValue);
+    
+    globalData.hostCallReturnValue = eval(execCallee);
+    LLINT_CALL_RETURN(execCallee, pc, reinterpret_cast<void*>(getHostCallReturnValue));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_tear_off_activation)
+{
+    LLINT_BEGIN();
+    ASSERT(exec->codeBlock()->needsFullScopeChain());
+    JSValue activationValue = LLINT_OP(1).jsValue();
+    if (!activationValue) {
+        if (JSValue v = exec->uncheckedR(unmodifiedArgumentsRegister(pc[2].u.operand)).jsValue()) {
+            if (!exec->codeBlock()->isStrictMode())
+                asArguments(v)->tearOff(exec);
+        }
+        LLINT_END();
+    }
+    JSActivation* activation = asActivation(activationValue);
+    activation->tearOff(globalData);
+    if (JSValue v = exec->uncheckedR(unmodifiedArgumentsRegister(pc[2].u.operand)).jsValue())
+        asArguments(v)->didTearOffActivation(globalData, activation);
+    LLINT_END();
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_tear_off_arguments)
+{
+    LLINT_BEGIN();
+    ASSERT(exec->codeBlock()->usesArguments() && !exec->codeBlock()->needsFullScopeChain());
+    asArguments(exec->uncheckedR(unmodifiedArgumentsRegister(pc[1].u.operand)).jsValue())->tearOff(exec);
+    LLINT_END();
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_strcat)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN(jsString(exec, &LLINT_OP(2), pc[3].u.operand));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_to_primitive)
+{
+    LLINT_BEGIN();
+    LLINT_RETURN(LLINT_OP_C(2).jsValue().toPrimitive(exec));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_get_pnames)
+{
+    LLINT_BEGIN();
+    JSValue v = LLINT_OP(2).jsValue();
+    if (v.isUndefinedOrNull()) {
+        pc += pc[5].u.operand;
+        LLINT_END();
+    }
+    
+    JSObject* o = v.toObject(exec);
+    Structure* structure = o->structure();
+    JSPropertyNameIterator* jsPropertyNameIterator = structure->enumerationCache();
+    if (!jsPropertyNameIterator || jsPropertyNameIterator->cachedPrototypeChain() != structure->prototypeChain(exec))
+        jsPropertyNameIterator = JSPropertyNameIterator::create(exec, o);
+    
+    LLINT_OP(1) = JSValue(jsPropertyNameIterator);
+    LLINT_OP(2) = JSValue(o);
+    LLINT_OP(3) = Register::withInt(0);
+    LLINT_OP(4) = Register::withInt(jsPropertyNameIterator->size());
+    
+    pc += OPCODE_LENGTH(op_get_pnames);
+    LLINT_END();
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_next_pname)
+{
+    LLINT_BEGIN();
+    JSObject* base = asObject(LLINT_OP(2).jsValue());
+    JSString* property = asString(LLINT_OP(1).jsValue());
+    if (base->hasProperty(exec, Identifier(exec, property->value(exec)))) {
+        // Go to target.
+        pc += pc[6].u.operand;
+    } // Else, don't change the PC, so the interpreter will reloop.
+    LLINT_END();
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_push_scope)
+{
+    LLINT_BEGIN();
+    JSValue v = LLINT_OP(1).jsValue();
+    JSObject* o = v.toObject(exec);
+    LLINT_CHECK_EXCEPTION();
+    
+    LLINT_OP(1) = o;
+    exec->setScopeChain(exec->scopeChain()->push(o));
+    
+    LLINT_END();
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_pop_scope)
+{
+    LLINT_BEGIN();
+    exec->setScopeChain(exec->scopeChain()->pop());
+    LLINT_END();
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_push_new_scope)
+{
+    LLINT_BEGIN();
+    CodeBlock* codeBlock = exec->codeBlock();
+    JSObject* scope = JSStaticScopeObject::create(exec, codeBlock->identifier(pc[2].u.operand), LLINT_OP(3).jsValue(), DontDelete);
+    exec->setScopeChain(exec->scopeChain()->push(scope));
+    LLINT_RETURN(scope);
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_throw)
+{
+    LLINT_BEGIN();
+    LLINT_THROW(LLINT_OP_C(1).jsValue());
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_throw_reference_error)
+{
+    LLINT_BEGIN();
+    LLINT_THROW(createReferenceError(exec, LLINT_OP_C(1).jsValue().toString(exec)->value(exec)));
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_debug)
+{
+    LLINT_BEGIN();
+    int debugHookID = pc[1].u.operand;
+    int firstLine = pc[2].u.operand;
+    int lastLine = pc[3].u.operand;
+    
+    globalData.interpreter->debug(exec, static_cast<DebugHookID>(debugHookID), firstLine, lastLine);
+    
+    LLINT_END();
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_profile_will_call)
+{
+    LLINT_BEGIN();
+    (*Profiler::enabledProfilerReference())->willExecute(exec, LLINT_OP(1).jsValue());
+    LLINT_END();
+}
+
+LLINT_SLOW_PATH_DECL(slow_path_profile_did_call)
+{
+    LLINT_BEGIN();
+    (*Profiler::enabledProfilerReference())->didExecute(exec, LLINT_OP(1).jsValue());
+    LLINT_END();
+}
+
+} } // namespace JSC::LLInt
+
+#endif // ENABLE(LLINT)
diff --git a/Source/JavaScriptCore/llint/LLIntSlowPaths.h b/Source/JavaScriptCore/llint/LLIntSlowPaths.h
new file mode 100644 (file)
index 0000000..fe684d3
--- /dev/null
@@ -0,0 +1,171 @@
+/*
+ * Copyright (C) 2011 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#ifndef LLIntSlowPaths_h
+#define LLIntSlowPaths_h
+
+#include <wtf/Platform.h>
+#include <wtf/StdLibExtras.h>
+
+#if ENABLE(LLINT)
+
+namespace JSC {
+
+class ExecState;
+struct Instruction;
+
+namespace LLInt {
+
+typedef int64_t SlowPathReturnType;
+
+extern "C" SlowPathReturnType llint_trace_operand(ExecState*, Instruction*, int fromWhere, int operand);
+extern "C" SlowPathReturnType llint_trace_value(ExecState*, Instruction*, int fromWhere, int operand);
+
+#define LLINT_SLOW_PATH_DECL(name) \
+    extern "C" SlowPathReturnType llint_##name(ExecState* exec, Instruction* pc)
+
+LLINT_SLOW_PATH_DECL(trace_prologue);
+LLINT_SLOW_PATH_DECL(trace_prologue_function_for_call);
+LLINT_SLOW_PATH_DECL(trace_prologue_function_for_construct);
+LLINT_SLOW_PATH_DECL(trace_arityCheck_for_call);
+LLINT_SLOW_PATH_DECL(trace_arityCheck_for_construct);
+LLINT_SLOW_PATH_DECL(trace);
+LLINT_SLOW_PATH_DECL(special_trace);
+LLINT_SLOW_PATH_DECL(entry_osr);
+LLINT_SLOW_PATH_DECL(entry_osr_function_for_call);
+LLINT_SLOW_PATH_DECL(entry_osr_function_for_construct);
+LLINT_SLOW_PATH_DECL(entry_osr_function_for_call_arityCheck);
+LLINT_SLOW_PATH_DECL(entry_osr_function_for_construct_arityCheck);
+LLINT_SLOW_PATH_DECL(loop_osr);
+LLINT_SLOW_PATH_DECL(replace);
+LLINT_SLOW_PATH_DECL(register_file_check);
+LLINT_SLOW_PATH_DECL(slow_path_call_arityCheck);
+LLINT_SLOW_PATH_DECL(slow_path_construct_arityCheck);
+LLINT_SLOW_PATH_DECL(slow_path_create_activation);
+LLINT_SLOW_PATH_DECL(slow_path_create_arguments);
+LLINT_SLOW_PATH_DECL(slow_path_create_this);
+LLINT_SLOW_PATH_DECL(slow_path_convert_this);
+LLINT_SLOW_PATH_DECL(slow_path_new_object);
+LLINT_SLOW_PATH_DECL(slow_path_new_array);
+LLINT_SLOW_PATH_DECL(slow_path_new_array_buffer);
+LLINT_SLOW_PATH_DECL(slow_path_new_regexp);
+LLINT_SLOW_PATH_DECL(slow_path_not);
+LLINT_SLOW_PATH_DECL(slow_path_eq);
+LLINT_SLOW_PATH_DECL(slow_path_neq);
+LLINT_SLOW_PATH_DECL(slow_path_stricteq);
+LLINT_SLOW_PATH_DECL(slow_path_nstricteq);
+LLINT_SLOW_PATH_DECL(slow_path_less);
+LLINT_SLOW_PATH_DECL(slow_path_lesseq);
+LLINT_SLOW_PATH_DECL(slow_path_greater);
+LLINT_SLOW_PATH_DECL(slow_path_greatereq);
+LLINT_SLOW_PATH_DECL(slow_path_pre_inc);
+LLINT_SLOW_PATH_DECL(slow_path_pre_dec);
+LLINT_SLOW_PATH_DECL(slow_path_post_inc);
+LLINT_SLOW_PATH_DECL(slow_path_post_dec);
+LLINT_SLOW_PATH_DECL(slow_path_to_jsnumber);
+LLINT_SLOW_PATH_DECL(slow_path_negate);
+LLINT_SLOW_PATH_DECL(slow_path_add);
+LLINT_SLOW_PATH_DECL(slow_path_mul);
+LLINT_SLOW_PATH_DECL(slow_path_sub);
+LLINT_SLOW_PATH_DECL(slow_path_div);
+LLINT_SLOW_PATH_DECL(slow_path_mod);
+LLINT_SLOW_PATH_DECL(slow_path_lshift);
+LLINT_SLOW_PATH_DECL(slow_path_rshift);
+LLINT_SLOW_PATH_DECL(slow_path_urshift);
+LLINT_SLOW_PATH_DECL(slow_path_bitand);
+LLINT_SLOW_PATH_DECL(slow_path_bitor);
+LLINT_SLOW_PATH_DECL(slow_path_bitxor);
+LLINT_SLOW_PATH_DECL(slow_path_bitnot);
+LLINT_SLOW_PATH_DECL(slow_path_check_has_instance);
+LLINT_SLOW_PATH_DECL(slow_path_instanceof);
+LLINT_SLOW_PATH_DECL(slow_path_typeof);
+LLINT_SLOW_PATH_DECL(slow_path_is_undefined);
+LLINT_SLOW_PATH_DECL(slow_path_is_boolean);
+LLINT_SLOW_PATH_DECL(slow_path_is_number);
+LLINT_SLOW_PATH_DECL(slow_path_is_string);
+LLINT_SLOW_PATH_DECL(slow_path_is_object);
+LLINT_SLOW_PATH_DECL(slow_path_is_function);
+LLINT_SLOW_PATH_DECL(slow_path_in);
+LLINT_SLOW_PATH_DECL(slow_path_resolve);
+LLINT_SLOW_PATH_DECL(slow_path_resolve_skip);
+LLINT_SLOW_PATH_DECL(slow_path_resolve_global);
+LLINT_SLOW_PATH_DECL(slow_path_resolve_global_dynamic);
+LLINT_SLOW_PATH_DECL(slow_path_resolve_for_resolve_global_dynamic);
+LLINT_SLOW_PATH_DECL(slow_path_resolve_base);
+LLINT_SLOW_PATH_DECL(slow_path_ensure_property_exists);
+LLINT_SLOW_PATH_DECL(slow_path_resolve_with_base);
+LLINT_SLOW_PATH_DECL(slow_path_resolve_with_this);
+LLINT_SLOW_PATH_DECL(slow_path_get_by_id);
+LLINT_SLOW_PATH_DECL(slow_path_get_arguments_length);
+LLINT_SLOW_PATH_DECL(slow_path_put_by_id);
+LLINT_SLOW_PATH_DECL(slow_path_del_by_id);
+LLINT_SLOW_PATH_DECL(slow_path_get_by_val);
+LLINT_SLOW_PATH_DECL(slow_path_get_argument_by_val);
+LLINT_SLOW_PATH_DECL(slow_path_get_by_pname);
+LLINT_SLOW_PATH_DECL(slow_path_put_by_val);
+LLINT_SLOW_PATH_DECL(slow_path_del_by_val);
+LLINT_SLOW_PATH_DECL(slow_path_put_by_index);
+LLINT_SLOW_PATH_DECL(slow_path_put_getter_setter);
+LLINT_SLOW_PATH_DECL(slow_path_jmp_scopes);
+LLINT_SLOW_PATH_DECL(slow_path_jtrue);
+LLINT_SLOW_PATH_DECL(slow_path_jfalse);
+LLINT_SLOW_PATH_DECL(slow_path_jless);
+LLINT_SLOW_PATH_DECL(slow_path_jnless);
+LLINT_SLOW_PATH_DECL(slow_path_jgreater);
+LLINT_SLOW_PATH_DECL(slow_path_jngreater);
+LLINT_SLOW_PATH_DECL(slow_path_jlesseq);
+LLINT_SLOW_PATH_DECL(slow_path_jnlesseq);
+LLINT_SLOW_PATH_DECL(slow_path_jgreatereq);
+LLINT_SLOW_PATH_DECL(slow_path_jngreatereq);
+LLINT_SLOW_PATH_DECL(slow_path_switch_imm);
+LLINT_SLOW_PATH_DECL(slow_path_switch_char);
+LLINT_SLOW_PATH_DECL(slow_path_switch_string);
+LLINT_SLOW_PATH_DECL(slow_path_new_func);
+LLINT_SLOW_PATH_DECL(slow_path_new_func_exp);
+LLINT_SLOW_PATH_DECL(slow_path_call);
+LLINT_SLOW_PATH_DECL(slow_path_construct);
+LLINT_SLOW_PATH_DECL(slow_path_call_varargs);
+LLINT_SLOW_PATH_DECL(slow_path_call_eval);
+LLINT_SLOW_PATH_DECL(slow_path_tear_off_activation);
+LLINT_SLOW_PATH_DECL(slow_path_tear_off_arguments);
+LLINT_SLOW_PATH_DECL(slow_path_strcat);
+LLINT_SLOW_PATH_DECL(slow_path_to_primitive);
+LLINT_SLOW_PATH_DECL(slow_path_get_pnames);
+LLINT_SLOW_PATH_DECL(slow_path_next_pname);
+LLINT_SLOW_PATH_DECL(slow_path_push_scope);
+LLINT_SLOW_PATH_DECL(slow_path_pop_scope);
+LLINT_SLOW_PATH_DECL(slow_path_push_new_scope);
+LLINT_SLOW_PATH_DECL(slow_path_throw);
+LLINT_SLOW_PATH_DECL(slow_path_throw_reference_error);
+LLINT_SLOW_PATH_DECL(slow_path_debug);
+LLINT_SLOW_PATH_DECL(slow_path_profile_will_call);
+LLINT_SLOW_PATH_DECL(slow_path_profile_did_call);
+
+} } // namespace JSC::LLInt
+
+#endif // ENABLE(LLINT)
+
+#endif // LLIntSlowPaths_h
+
diff --git a/Source/JavaScriptCore/llint/LLIntThunks.cpp b/Source/JavaScriptCore/llint/LLIntThunks.cpp
new file mode 100644 (file)
index 0000000..ddb0c46
--- /dev/null
@@ -0,0 +1,81 @@
+/*
+ * Copyright (C) 2012 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#include "config.h"
+#include "LLIntThunks.h"
+
+#if ENABLE(LLINT)
+
+#include "JSInterfaceJIT.h"
+#include "LinkBuffer.h"
+#include "LowLevelInterpreter.h"
+
+namespace JSC { namespace LLInt {
+
+static MacroAssemblerCodeRef generateThunkWithJumpTo(JSGlobalData* globalData, void (*target)())
+{
+    JSInterfaceJIT jit;
+    
+    // FIXME: there's probably a better way to do it on X86, but I'm not sure I care.
+    jit.move(JSInterfaceJIT::TrustedImmPtr(bitwise_cast<void*>(target)), JSInterfaceJIT::regT0);
+    jit.jump(JSInterfaceJIT::regT0);
+    
+    LinkBuffer patchBuffer(*globalData, &jit, GLOBAL_THUNK_ID);
+    return patchBuffer.finalizeCode();
+}
+
+MacroAssemblerCodeRef functionForCallEntryThunkGenerator(JSGlobalData* globalData)
+{
+    return generateThunkWithJumpTo(globalData, llint_function_for_call_prologue);
+}
+
+MacroAssemblerCodeRef functionForConstructEntryThunkGenerator(JSGlobalData* globalData)
+{
+    return generateThunkWithJumpTo(globalData, llint_function_for_construct_prologue);
+}
+
+MacroAssemblerCodeRef functionForCallArityCheckThunkGenerator(JSGlobalData* globalData)
+{
+    return generateThunkWithJumpTo(globalData, llint_function_for_call_arity_check);
+}
+
+MacroAssemblerCodeRef functionForConstructArityCheckThunkGenerator(JSGlobalData* globalData)
+{
+    return generateThunkWithJumpTo(globalData, llint_function_for_construct_arity_check);
+}
+
+MacroAssemblerCodeRef evalEntryThunkGenerator(JSGlobalData* globalData)
+{
+    return generateThunkWithJumpTo(globalData, llint_eval_prologue);
+}
+
+MacroAssemblerCodeRef programEntryThunkGenerator(JSGlobalData* globalData)
+{
+    return generateThunkWithJumpTo(globalData, llint_program_prologue);
+}
+
+} } // namespace JSC::LLInt
+
+#endif // ENABLE(LLINT)
diff --git a/Source/JavaScriptCore/llint/LLIntThunks.h b/Source/JavaScriptCore/llint/LLIntThunks.h
new file mode 100644 (file)
index 0000000..ee119e0
--- /dev/null
@@ -0,0 +1,52 @@
+/*
+ * Copyright (C) 2012 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#ifndef LLIntThunks_h
+#define LLIntThunks_h
+
+#include <wtf/Platform.h>
+
+#if ENABLE(LLINT)
+
+#include "MacroAssemblerCodeRef.h"
+
+namespace JSC {
+
+class JSGlobalData;
+
+namespace LLInt {
+
+MacroAssemblerCodeRef functionForCallEntryThunkGenerator(JSGlobalData*);
+MacroAssemblerCodeRef functionForConstructEntryThunkGenerator(JSGlobalData*);
+MacroAssemblerCodeRef functionForCallArityCheckThunkGenerator(JSGlobalData*);
+MacroAssemblerCodeRef functionForConstructArityCheckThunkGenerator(JSGlobalData*);
+MacroAssemblerCodeRef evalEntryThunkGenerator(JSGlobalData*);
+MacroAssemblerCodeRef programEntryThunkGenerator(JSGlobalData*);
+
+} } // namespace JSC::LLInt
+
+#endif // ENABLE(LLINT)
+
+#endif // LLIntThunks_h
diff --git a/Source/JavaScriptCore/llint/LowLevelInterpreter.asm b/Source/JavaScriptCore/llint/LowLevelInterpreter.asm
new file mode 100644 (file)
index 0000000..a9f83f6
--- /dev/null
@@ -0,0 +1,2390 @@
+# Copyright (C) 2011, 2012 Apple Inc. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+# 1. Redistributions of source code must retain the above copyright
+#    notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+#    notice, this list of conditions and the following disclaimer in the
+#    documentation and/or other materials provided with the distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+# BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+# THE POSSIBILITY OF SUCH DAMAGE.
+
+
+# Crash course on the language that this is written in (which I just call
+# "assembly" even though it's more than that):
+#
+# - Mostly gas-style operand ordering. The last operand tends to be the
+#   destination. So "a := b" is written as "move b, a". But unlike gas,
+#   comparisons are in-order, so "if (a < b)" is written as
+#   "bilt a, b, ...".
+#
+# - "b" = byte, "h" = 16-bit word, "i" = 32-bit word, "p" = pointer.
+#   Currently this is just 32-bit, so "i" and "p" are interchangeable
+#   except when an op supports one but not the other.
+#
+# - In general, valid operands for macro invocations and instructions are
+#   registers (eg "t0"), addresses (eg "4[t0]"), base-index addresses
+#   (eg "7[t0, t1, 2]"), absolute addresses (eg "0xa0000000[]"), or labels
+#   (eg "_foo" or ".foo"). Macro invocations can also take anonymous
+#   macros as operands. Instructions cannot take anonymous macros.
+#
+# - Labels must have names that begin with either "_" or ".".  A "." label
+#   is local and gets renamed before code gen to minimize namespace
+#   pollution. A "_" label is an extern symbol (i.e. ".globl"). The "_"
+#   may or may not be removed during code gen depending on whether the asm
+#   conventions for C name mangling on the target platform mandate a "_"
+#   prefix.
+#
+# - A "macro" is a lambda expression, which may be either anonymous or
+#   named. But this has caveats. "macro" can take zero or more arguments,
+#   which may be macros or any valid operands, but it can only return
+#   code. But you can do Turing-complete things via continuation passing
+#   style: "macro foo (a, b) b(a) end foo(foo, foo)". Actually, don't do
+#   that, since you'll just crash the assembler.
+#
+# - An "if" is a conditional on settings. Any identifier supplied in the
+#   predicate of an "if" is assumed to be a #define that is available
+#   during code gen. So you can't use "if" for computation in a macro, but
+#   you can use it to select different pieces of code for different
+#   platforms.
+#
+# - Arguments to macros follow lexical scoping rather than dynamic scoping.
+#   Consts also follow lexical scoping and may override (hide) arguments
+#   or other consts. All variables (arguments and constants) can be bound
+#   to operands. Additionally, arguments (but not constants) can be bound
+#   to macros.
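As a concrete (and entirely made-up) illustration of the conventions above — destination-last operands, in-order comparisons, "_" vs "." labels, and a macro taking a label as an operand:

```
# t0 := t1 + 4, destination last as in gas; then an in-order comparison.
macro addAndBranchIfLess(target)
    addi 4, t1, t0
    bilt t0, t2, target       # "if (t0 < t2) goto target"
end

_example_global:              # "_" label: extern symbol
    addAndBranchIfLess(.done) # macros may take labels as operands
.done:                        # "." label: local, renamed before code gen
    ret
```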
+
+
+# Below we have a bunch of constant declarations. Each constant must have
+# a corresponding ASSERT() in LLIntData.cpp.
+
+# These declarations must match interpreter/RegisterFile.h.
+const CallFrameHeaderSize = 48
+const ArgumentCount = -48
+const CallerFrame = -40
+const Callee = -32
+const ScopeChain = -24
+const ReturnPC = -16
+const CodeBlock = -8
+
+const ThisArgumentOffset = -CallFrameHeaderSize - 8
+
+# Declare some aliases for the registers we will use.
+const PC = t4
+
+# Offsets needed for reasoning about value representation.
+if BIG_ENDIAN
+    const TagOffset = 0
+    const PayloadOffset = 4
+else
+    const TagOffset = 4
+    const PayloadOffset = 0
+end
+
+# Value representation constants.
+const Int32Tag = -1
+const BooleanTag = -2
+const NullTag = -3
+const UndefinedTag = -4
+const CellTag = -5
+const EmptyValueTag = -6
+const DeletedValueTag = -7
+const LowestTag = DeletedValueTag
+
+# Type constants.
+const StringType = 5
+const ObjectType = 13
+
+# Type flags constants.
+const MasqueradesAsUndefined = 1
+const ImplementsHasInstance = 2
+const ImplementsDefaultHasInstance = 8
+
+# Heap allocation constants.
+const JSFinalObjectSizeClassIndex = 3
+
+# Bytecode operand constants.
+const FirstConstantRegisterIndex = 0x40000000
+
+# Code type constants.
+const GlobalCode = 0
+const EvalCode = 1
+const FunctionCode = 2
+
+# The interpreter steals the tag word of the argument count.
+const LLIntReturnPC = ArgumentCount + TagOffset
+
+# This must match wtf/Vector.h.
+const VectorSizeOffset = 0
+const VectorBufferOffset = 4
+
+# String flags.
+const HashFlags8BitBuffer = 64
+
+# Utilities
+macro crash()
+    storei 0, 0xbbadbeef[]
+    move 0, t0
+    call t0
+end
+
+macro assert(assertion)
+    if ASSERT_ENABLED
+        assertion(.ok)
+        crash()
+    .ok:
+    end
+end
+
+macro preserveReturnAddressAfterCall(destinationRegister)
+    if ARMv7
+        move lr, destinationRegister
+    elsif X86
+        pop destinationRegister
+    else
+        error
+    end
+end
+
+macro restoreReturnAddressBeforeReturn(sourceRegister)
+    if ARMv7
+        move sourceRegister, lr
+    elsif X86
+        push sourceRegister
+    else
+        error
+    end
+end
+
+macro dispatch(advance)
+    addp advance * 4, PC
+    jmp [PC]
+end
+
+macro dispatchBranchWithOffset(pcOffset)
+    lshifti 2, pcOffset
+    addp pcOffset, PC
+    jmp [PC]
+end
+
+macro dispatchBranch(pcOffset)
+    loadi pcOffset, t0
+    dispatchBranchWithOffset(t0)
+end
+
+macro dispatchAfterCall()
+    loadi ArgumentCount + TagOffset[cfr], PC
+    jmp [PC]
+end
+
+macro cCall2(function, arg1, arg2)
+    if ARMv7
+        move arg1, t0
+        move arg2, t1
+    elsif X86
+        poke arg1, 0
+        poke arg2, 1
+    else
+        error
+    end
+    call function
+end
+
+# This barely works. arg3 and arg4 should probably be immediates.
+macro cCall4(function, arg1, arg2, arg3, arg4)
+    if ARMv7
+        move arg1, t0
+        move arg2, t1
+        move arg3, t2
+        move arg4, t3
+    elsif X86
+        poke arg1, 0
+        poke arg2, 1
+        poke arg3, 2
+        poke arg4, 3
+    else
+        error
+    end
+    call function
+end
+
+macro callSlowPath(slow_path)
+    cCall2(slow_path, cfr, PC)
+    move t0, PC
+    move t1, cfr
+end
+
+# Debugging operation if you'd like to print an operand in the instruction stream. fromWhere
+# should be an immediate integer (any integer you like); use it to identify the place you're
+# debugging from. operand should likewise be an immediate, and should identify the operand
+# in the instruction stream you'd like to print out.
+macro traceOperand(fromWhere, operand)
+    cCall4(_llint_trace_operand, cfr, PC, fromWhere, operand)
+    move t0, PC
+    move t1, cfr
+end
+
+# Debugging operation if you'd like to print the value of an operand in the instruction
+# stream. Same as traceOperand(), but assumes that the operand is a register, and prints its
+# value.
+macro traceValue(fromWhere, operand)
+    cCall4(_llint_trace_value, cfr, PC, fromWhere, operand)
+    move t0, PC
+    move t1, cfr
+end
+
+macro traceExecution()
+    if EXECUTION_TRACING
+        callSlowPath(_llint_trace)
+    end
+end
+
+# Call a slow_path for call opcodes.
+macro callCallSlowPath(advance, slow_path, action)
+    addp advance * 4, PC, t0
+    storep t0, ArgumentCount + TagOffset[cfr]
+    cCall2(slow_path, cfr, PC)
+    move t1, cfr
+    action(t0)
+end
+
+macro slowPathForCall(advance, slow_path)
+    callCallSlowPath(
+        advance,
+        slow_path,
+        macro (callee)
+            call callee
+            dispatchAfterCall()
+        end)
+end
+
+macro checkSwitchToJIT(increment, action)
+    if JIT_ENABLED
+        loadp CodeBlock[cfr], t0
+        baddis increment, CodeBlock::m_llintExecuteCounter[t0], .continue
+        action()
+    .continue:
+    end
+end
+
+macro checkSwitchToJITForLoop()
+    checkSwitchToJIT(
+        1,
+        macro ()
+            storei PC, ArgumentCount + TagOffset[cfr]
+            cCall2(_llint_loop_osr, cfr, PC)
+            move t1, cfr
+            btpz t0, .recover
+            jmp t0
+        .recover:
+            loadi ArgumentCount + TagOffset[cfr], PC
+        end)
+end
+
+macro checkSwitchToJITForEpilogue()
+    checkSwitchToJIT(
+        10,
+        macro ()
+            callSlowPath(_llint_replace)
+        end)
+end
+
+macro assertNotConstant(index)
+    assert(macro (ok) bilt index, FirstConstantRegisterIndex, ok end)
+end
+
+# Index, tag, and payload must be different registers. Index is not
+# changed.
+macro loadConstantOrVariable(index, tag, payload)
+    bigteq index, FirstConstantRegisterIndex, .constant
+    loadi TagOffset[cfr, index, 8], tag
+    loadi PayloadOffset[cfr, index, 8], payload
+    jmp .done
+.constant:
+    loadp CodeBlock[cfr], payload
+    loadp CodeBlock::m_constantRegisters + VectorBufferOffset[payload], payload
+    # There is a bit of evil here: if the index contains a value >= FirstConstantRegisterIndex,
+    # then value << 3 will be equal to (value - FirstConstantRegisterIndex) << 3.
+    loadp TagOffset[payload, index, 8], tag
+    loadp PayloadOffset[payload, index, 8], payload
+.done:
+end
+
+# Index and payload may be the same register. Index may be clobbered.
+macro loadConstantOrVariable2Reg(index, tag, payload)
+    bigteq index, FirstConstantRegisterIndex, .constant
+    loadi TagOffset[cfr, index, 8], tag
+    loadi PayloadOffset[cfr, index, 8], payload
+    jmp .done
+.constant:
+    loadp CodeBlock[cfr], tag
+    loadp CodeBlock::m_constantRegisters + VectorBufferOffset[tag], tag
+    # There is a bit of evil here: if the index contains a value >= FirstConstantRegisterIndex,
+    # then value << 3 will be equal to (value - FirstConstantRegisterIndex) << 3.
+    lshifti 3, index
+    addp index, tag
+    loadp PayloadOffset[tag], payload
+    loadp TagOffset[tag], tag
+.done:
+end
+
+macro loadConstantOrVariablePayloadTagCustom(index, tagCheck, payload)
+    bigteq index, FirstConstantRegisterIndex, .constant
+    tagCheck(TagOffset[cfr, index, 8])
+    loadi PayloadOffset[cfr, index, 8], payload
+    jmp .done
+.constant:
+    loadp CodeBlock[cfr], payload
+    loadp CodeBlock::m_constantRegisters + VectorBufferOffset[payload], payload
+    # There is a bit of evil here: if the index contains a value >= FirstConstantRegisterIndex,
+    # then value << 3 will be equal to (value - FirstConstantRegisterIndex) << 3.
+    tagCheck(TagOffset[payload, index, 8])
+    loadp PayloadOffset[payload, index, 8], payload
+.done:
+end
+
+# Index and payload must be different registers. Index is not mutated. Use
+# this if you know what the tag of the variable should be. Doing the tag
+# test as part of loading the variable reduces register use, but may not
+# be faster than doing loadConstantOrVariable followed by a branch on the
+# tag.
+macro loadConstantOrVariablePayload(index, expectedTag, payload, slow)
+    loadConstantOrVariablePayloadTagCustom(
+        index,
+        macro (actualTag) bineq actualTag, expectedTag, slow end,
+        payload)
+end
+
+macro loadConstantOrVariablePayloadUnchecked(index, payload)
+    loadConstantOrVariablePayloadTagCustom(
+        index,
+        macro (actualTag) end,
+        payload)
+end
+
+macro writeBarrier(tag, payload)
+    # Nothing to do, since we don't have a generational or incremental collector.
+end
+
+macro valueProfile(tag, payload, profile)
+    if JIT_ENABLED
+        storei tag, ValueProfile::m_buckets + TagOffset[profile]
+        storei payload, ValueProfile::m_buckets + PayloadOffset[profile]
+    end
+end
+
+
+# Indicate the beginning of LLInt.
+_llint_begin:
+    crash()
+
+
+# Entrypoints into the interpreter
+
+macro functionForCallCodeBlockGetter(targetRegister)
+    loadp Callee[cfr], targetRegister
+    loadp JSFunction::m_executable[targetRegister], targetRegister
+    loadp FunctionExecutable::m_codeBlockForCall[targetRegister], targetRegister
+end
+
+macro functionForConstructCodeBlockGetter(targetRegister)
+    loadp Callee[cfr], targetRegister
+    loadp JSFunction::m_executable[targetRegister], targetRegister
+    loadp FunctionExecutable::m_codeBlockForConstruct[targetRegister], targetRegister
+end
+
+macro notFunctionCodeBlockGetter(targetRegister)
+    loadp CodeBlock[cfr], targetRegister
+end
+
+macro functionCodeBlockSetter(sourceRegister)
+    storep sourceRegister, CodeBlock[cfr]
+end
+
+macro notFunctionCodeBlockSetter(sourceRegister)
+    # Nothing to do!
+end
+
+# Do the bare minimum required to execute code. Sets up the PC and leaves
+# the CodeBlock* in t1. May also trigger prologue entry OSR.
+macro prologue(codeBlockGetter, codeBlockSetter, osrSlowPath, traceSlowPath)
+    preserveReturnAddressAfterCall(t2)
+    
+    # Set up the call frame and check if we should OSR.
+    storep t2, ReturnPC[cfr]
+    if EXECUTION_TRACING
+        callSlowPath(traceSlowPath)
+    end
+    codeBlockGetter(t1)
+    if JIT_ENABLED
+        baddis 5, CodeBlock::m_llintExecuteCounter[t1], .continue
+        cCall2(osrSlowPath, cfr, PC)
+        move t1, cfr
+        btpz t0, .recover
+        loadp ReturnPC[cfr], t2
+        restoreReturnAddressBeforeReturn(t2)
+        jmp t0
+    .recover:
+        codeBlockGetter(t1)
+    .continue:
+    end
+    codeBlockSetter(t1)
+    
+    # Set up the PC.
+    loadp CodeBlock::m_instructions[t1], t0
+    loadp CodeBlock::Instructions::m_instructions + VectorBufferOffset[t0], PC
+end
+
+# Expects that CodeBlock is in t1, which is what prologue() leaves behind.
+# Must call dispatch(0) after calling this.
+macro functionInitialization(profileArgSkip)
+    if JIT_ENABLED
+        # Profile the arguments. Unfortunately, we have no choice but to do this. This
+        # code is pretty horrendous because of the difference in ordering between
+        # arguments and value profiles, the desire to have a simple loop-down-to-zero
+        # loop, and the desire to use only three registers so as to preserve the PC and
+        # the code block. It is likely that this code should be rewritten in a more
+        # optimal way for architectures that have more than five registers available
+        # for arbitrary use in the interpreter.
+        loadi CodeBlock::m_numParameters[t1], t0
+        addi -profileArgSkip, t0 # Use addi because that's what has the peephole
+        assert(macro (ok) bigteq t0, 0, ok end)
+        btiz t0, .argumentProfileDone
+        loadp CodeBlock::m_argumentValueProfiles + VectorBufferOffset[t1], t3
+        muli sizeof ValueProfile, t0, t2 # Aaaaahhhh! Need strength reduction!
+        negi t0
+        lshifti 3, t0
+        addp t2, t3
+    .argumentProfileLoop:
+        loadi ThisArgumentOffset + TagOffset + 8 - profileArgSkip * 8[cfr, t0], t2
+        subp sizeof ValueProfile, t3
+        storei t2, profileArgSkip * sizeof ValueProfile + ValueProfile::m_buckets + TagOffset[t3]
+        loadi ThisArgumentOffset + PayloadOffset + 8 - profileArgSkip * 8[cfr, t0], t2
+        storei t2, profileArgSkip * sizeof ValueProfile + ValueProfile::m_buckets + PayloadOffset[t3]
+        baddinz 8, t0, .argumentProfileLoop
+    .argumentProfileDone:
+    end
+        
+    # Check stack height.
+    loadi CodeBlock::m_numCalleeRegisters[t1], t0
+    loadp CodeBlock::m_globalData[t1], t2
+    loadp JSGlobalData::interpreter[t2], t2   # FIXME: Can get to the RegisterFile from the JITStackFrame
+    lshifti 3, t0
+    addp t0, cfr, t0
+    bpaeq Interpreter::m_registerFile + RegisterFile::m_end[t2], t0, .stackHeightOK
+
+    # Stack height check failed - need to call a slow_path.
+    callSlowPath(_llint_register_file_check)
+.stackHeightOK:
+end
+
+# Expects that CodeBlock is in t1, which is what prologue() leaves behind.
+macro functionArityCheck(doneLabel, slow_path)
+    loadi PayloadOffset + ArgumentCount[cfr], t0
+    biaeq t0, CodeBlock::m_numParameters[t1], doneLabel
+    cCall2(slow_path, cfr, PC)   # This slow_path has a simple protocol: t0 = 0 => no error, t0 != 0 => error
+    move t1, cfr
+    btiz t0, .continue
+    loadp JITStackFrame::globalData[sp], t1
+    loadp JSGlobalData::callFrameForThrow[t1], t0
+    jmp JSGlobalData::targetMachinePCForThrow[t1]
+.continue:
+    # Reload CodeBlock and PC, since the slow_path clobbered them.
+    loadp CodeBlock[cfr], t1
+    loadp CodeBlock::m_instructions[t1], t0
+    loadp CodeBlock::Instructions::m_instructions + VectorBufferOffset[t0], PC
+    jmp doneLabel
+end
+
+_llint_program_prologue:
+    prologue(notFunctionCodeBlockGetter, notFunctionCodeBlockSetter, _llint_entry_osr, _llint_trace_prologue)
+    dispatch(0)
+
+
+_llint_eval_prologue:
+    prologue(notFunctionCodeBlockGetter, notFunctionCodeBlockSetter, _llint_entry_osr, _llint_trace_prologue)
+    dispatch(0)
+
+
+_llint_function_for_call_prologue:
+    prologue(functionForCallCodeBlockGetter, functionCodeBlockSetter, _llint_entry_osr_function_for_call, _llint_trace_prologue_function_for_call)
+.functionForCallBegin:
+    functionInitialization(0)
+    dispatch(0)
+    
+
+_llint_function_for_construct_prologue:
+    prologue(functionForConstructCodeBlockGetter, functionCodeBlockSetter, _llint_entry_osr_function_for_construct, _llint_trace_prologue_function_for_construct)
+.functionForConstructBegin:
+    functionInitialization(1)
+    dispatch(0)
+    
+
+_llint_function_for_call_arity_check:
+    prologue(functionForCallCodeBlockGetter, functionCodeBlockSetter, _llint_entry_osr_function_for_call_arityCheck, _llint_trace_arityCheck_for_call)
+    functionArityCheck(.functionForCallBegin, _llint_slow_path_call_arityCheck)
+
+
+_llint_function_for_construct_arity_check:
+    prologue(functionForConstructCodeBlockGetter, functionCodeBlockSetter, _llint_entry_osr_function_for_construct_arityCheck, _llint_trace_arityCheck_for_construct)
+    functionArityCheck(.functionForConstructBegin, _llint_slow_path_construct_arityCheck)
+
+# Instruction implementations
+
+_llint_op_enter:
+    traceExecution()
+    loadp CodeBlock[cfr], t2
+    loadi CodeBlock::m_numVars[t2], t2
+    btiz t2, .opEnterDone
+    move UndefinedTag, t0
+    move 0, t1
+.opEnterLoop:
+    subi 1, t2
+    storei t0, TagOffset[cfr, t2, 8]
+    storei t1, PayloadOffset[cfr, t2, 8]
+    btinz t2, .opEnterLoop
+.opEnterDone:
+    dispatch(1)
+
+
+_llint_op_create_activation:
+    traceExecution()
+    loadi 4[PC], t0
+    bineq TagOffset[cfr, t0, 8], EmptyValueTag, .opCreateActivationDone
+    callSlowPath(_llint_slow_path_create_activation)
+.opCreateActivationDone:
+    dispatch(2)
+
+
+_llint_op_init_lazy_reg:
+    traceExecution()
+    loadi 4[PC], t0
+    storei EmptyValueTag, TagOffset[cfr, t0, 8]
+    storei 0, PayloadOffset[cfr, t0, 8]
+    dispatch(2)
+
+
+_llint_op_create_arguments:
+    traceExecution()
+    loadi 4[PC], t0
+    bineq TagOffset[cfr, t0, 8], EmptyValueTag, .opCreateArgumentsDone
+    callSlowPath(_llint_slow_path_create_arguments)
+.opCreateArgumentsDone:
+    dispatch(2)
+
+
+macro allocateBasicJSObject(sizeClassIndex, classInfoOffset, structure, result, scratch1, scratch2, slowCase)
+    if ALWAYS_ALLOCATE_SLOW
+        jmp slowCase
+    else
+        const offsetOfMySizeClass =
+            JSGlobalData::heap +
+            Heap::m_objectSpace +
+            MarkedSpace::m_normalSpace +
+            MarkedSpace::Subspace::preciseAllocators +
+            sizeClassIndex * sizeof MarkedAllocator
+        
+        # FIXME: we can get the global data in one load from the stack.
+        loadp CodeBlock[cfr], scratch1
+        loadp CodeBlock::m_globalData[scratch1], scratch1
+        
+        # Get the object from the free list.    
+        loadp offsetOfMySizeClass + MarkedAllocator::m_firstFreeCell[scratch1], result
+        btpz result, slowCase
+        
+        # Remove the object from the free list.
+        loadp [result], scratch2
+        storep scratch2, offsetOfMySizeClass + MarkedAllocator::m_firstFreeCell[scratch1]
+    
+        # Initialize the object.
+        loadp classInfoOffset[scratch1], scratch2
+        storep scratch2, [result]
+        storep structure, JSCell::m_structure[result]
+        storep 0, JSObject::m_inheritorID[result]
+        addp sizeof JSObject, result, scratch1
+        storep scratch1, JSObject::m_propertyStorage[result]
+    end
+end
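+
+# In C-like pseudocode (illustrative names, not actual declarations), the
+# inline allocation path above is roughly:
+#     cell = allocator->m_firstFreeCell;          // head of the free list
+#     if (!cell) goto slowCase;                   // list empty => slow path
+#     allocator->m_firstFreeCell = *(void**)cell; // pop: first word links to next
+#     cell->classInfo = globalData->classInfo;    // initialize the object
+#     cell->structure = structure;
+#     cell->inheritorID = 0;
+#     cell->propertyStorage = (char*)cell + sizeof(JSObject); // inline storage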
+
+_llint_op_create_this:
+    traceExecution()
+    loadi 8[PC], t0
+    assertNotConstant(t0)
+    bineq TagOffset[cfr, t0, 8], CellTag, .opCreateThisSlow
+    loadi PayloadOffset[cfr, t0, 8], t0
+    loadp JSCell::m_structure[t0], t1
+    bbb Structure::m_typeInfo + TypeInfo::m_type[t1], ObjectType, .opCreateThisSlow
+    loadp JSObject::m_inheritorID[t0], t2
+    btpz t2, .opCreateThisSlow
+    allocateBasicJSObject(JSFinalObjectSizeClassIndex, JSGlobalData::jsFinalObjectClassInfo, t2, t0, t1, t3, .opCreateThisSlow)
+    loadi 4[PC], t1
+    storei CellTag, TagOffset[cfr, t1, 8]
+    storei t0, PayloadOffset[cfr, t1, 8]
+    dispatch(3)
+
+.opCreateThisSlow:
+    callSlowPath(_llint_slow_path_create_this)
+    dispatch(3)
+
+
+_llint_op_get_callee:
+    traceExecution()
+    loadi 4[PC], t0
+    loadp PayloadOffset + Callee[cfr], t1
+    storei CellTag, TagOffset[cfr, t0, 8]
+    storei t1, PayloadOffset[cfr, t0, 8]
+    dispatch(2)
+
+
+_llint_op_convert_this:
+    traceExecution()
+    loadi 4[PC], t0
+    bineq TagOffset[cfr, t0, 8], CellTag, .opConvertThisSlow
+    loadi PayloadOffset[cfr, t0, 8], t0
+    loadp JSCell::m_structure[t0], t0
+    bbb Structure::m_typeInfo + TypeInfo::m_type[t0], ObjectType, .opConvertThisSlow
+    dispatch(2)
+
+.opConvertThisSlow:
+    callSlowPath(_llint_slow_path_convert_this)
+    dispatch(2)
+
+
+_llint_op_new_object:
+    traceExecution()
+    loadp CodeBlock[cfr], t0
+    loadp CodeBlock::m_globalObject[t0], t0
+    loadp JSGlobalObject::m_emptyObjectStructure[t0], t1
+    allocateBasicJSObject(JSFinalObjectSizeClassIndex, JSGlobalData::jsFinalObjectClassInfo, t1, t0, t2, t3, .opNewObjectSlow)
+    loadi 4[PC], t1
+    storei CellTag, TagOffset[cfr, t1, 8]
+    storei t0, PayloadOffset[cfr, t1, 8]
+    dispatch(2)
+
+.opNewObjectSlow:
+    callSlowPath(_llint_slow_path_new_object)
+    dispatch(2)
+
+
+_llint_op_new_array:
+    traceExecution()
+    callSlowPath(_llint_slow_path_new_array)
+    dispatch(4)
+
+
+_llint_op_new_array_buffer:
+    traceExecution()
+    callSlowPath(_llint_slow_path_new_array_buffer)
+    dispatch(4)
+
+
+_llint_op_new_regexp:
+    traceExecution()
+    callSlowPath(_llint_slow_path_new_regexp)
+    dispatch(3)
+
+
+_llint_op_mov:
+    traceExecution()
+    loadi 8[PC], t1
+    loadi 4[PC], t0
+    loadConstantOrVariable(t1, t2, t3)
+    storei t2, TagOffset[cfr, t0, 8]
+    storei t3, PayloadOffset[cfr, t0, 8]
+    dispatch(3)
+
+
+_llint_op_not:
+    traceExecution()
+    loadi 8[PC], t0
+    loadi 4[PC], t1
+    loadConstantOrVariable(t0, t2, t3)
+    bineq t2, BooleanTag, .opNotSlow
+    xori 1, t3
+    storei t2, TagOffset[cfr, t1, 8]
+    storei t3, PayloadOffset[cfr, t1, 8]
+    dispatch(3)
+
+.opNotSlow:
+    callSlowPath(_llint_slow_path_not)
+    dispatch(3)
+
+
+_llint_op_eq:
+    traceExecution()
+    loadi 12[PC], t2
+    loadi 8[PC], t0
+    loadConstantOrVariable(t2, t3, t1)
+    loadConstantOrVariable2Reg(t0, t2, t0)
+    bineq t2, t3, .opEqSlow
+    bieq t2, CellTag, .opEqSlow
+    bib t2, LowestTag, .opEqSlow
+    loadi 4[PC], t2
+    cieq t0, t1, t0
+    storei BooleanTag, TagOffset[cfr, t2, 8]
+    storei t0, PayloadOffset[cfr, t2, 8]
+    dispatch(4)
+
+.opEqSlow:
+    callSlowPath(_llint_slow_path_eq)
+    dispatch(4)
+
+
+_llint_op_eq_null:
+    traceExecution()
+    loadi 8[PC], t0
+    loadi 4[PC], t3
+    assertNotConstant(t0)
+    loadi TagOffset[cfr, t0, 8], t1
+    loadi PayloadOffset[cfr, t0, 8], t0
+    bineq t1, CellTag, .opEqNullImmediate
+    loadp JSCell::m_structure[t0], t1
+    tbnz Structure::m_typeInfo + TypeInfo::m_flags[t1], MasqueradesAsUndefined, t1
+    jmp .opEqNullNotImmediate
+.opEqNullImmediate:
+    cieq t1, NullTag, t2
+    cieq t1, UndefinedTag, t1
+    ori t2, t1
+.opEqNullNotImmediate:
+    storei BooleanTag, TagOffset[cfr, t3, 8]
+    storei t1, PayloadOffset[cfr, t3, 8]
+    dispatch(3)
+
+
+_llint_op_neq:
+    traceExecution()
+    loadi 12[PC], t2
+    loadi 8[PC], t0
+    loadConstantOrVariable(t2, t3, t1)
+    loadConstantOrVariable2Reg(t0, t2, t0)
+    bineq t2, t3, .opNeqSlow
+    bieq t2, CellTag, .opNeqSlow
+    bib t2, LowestTag, .opNeqSlow
+    loadi 4[PC], t2
+    cineq t0, t1, t0
+    storei BooleanTag, TagOffset[cfr, t2, 8]
+    storei t0, PayloadOffset[cfr, t2, 8]
+    dispatch(4)
+
+.opNeqSlow:
+    callSlowPath(_llint_slow_path_neq)
+    dispatch(4)
+    
+
+_llint_op_neq_null:
+    traceExecution()
+    loadi 8[PC], t0
+    loadi 4[PC], t3
+    assertNotConstant(t0)
+    loadi TagOffset[cfr, t0, 8], t1
+    loadi PayloadOffset[cfr, t0, 8], t0
+    bineq t1, CellTag, .opNeqNullImmediate
+    loadp JSCell::m_structure[t0], t1
+    tbz Structure::m_typeInfo + TypeInfo::m_flags[t1], MasqueradesAsUndefined, t1
+    jmp .opNeqNullNotImmediate
+.opNeqNullImmediate:
+    cineq t1, NullTag, t2
+    cineq t1, UndefinedTag, t1
+    andi t2, t1
+.opNeqNullNotImmediate:
+    storei BooleanTag, TagOffset[cfr, t3, 8]
+    storei t1, PayloadOffset[cfr, t3, 8]
+    dispatch(3)
+
+
+macro strictEq(equalityOperation, slow_path)
+    loadi 12[PC], t2
+    loadi 8[PC], t0
+    loadConstantOrVariable(t2, t3, t1)
+    loadConstantOrVariable2Reg(t0, t2, t0)
+    bineq t2, t3, .slow
+    bib t2, LowestTag, .slow
+    bineq t2, CellTag, .notString
+    loadp JSCell::m_structure[t0], t2
+    loadp JSCell::m_structure[t1], t3
+    bbneq Structure::m_typeInfo + TypeInfo::m_type[t2], StringType, .notString
+    bbeq Structure::m_typeInfo + TypeInfo::m_type[t3], StringType, .slow
+.notString:
+    loadi 4[PC], t2
+    equalityOperation(t0, t1, t0)
+    storei BooleanTag, TagOffset[cfr, t2, 8]
+    storei t0, PayloadOffset[cfr, t2, 8]
+    dispatch(4)
+
+.slow:
+    callSlowPath(slow_path)
+    dispatch(4)
+end
+
+_llint_op_stricteq:
+    traceExecution()
+    strictEq(macro (left, right, result) cieq left, right, result end, _llint_slow_path_stricteq)
+
+
+_llint_op_nstricteq:
+    traceExecution()
+    strictEq(macro (left, right, result) cineq left, right, result end, _llint_slow_path_nstricteq)
+
+
+_llint_op_less:
+    traceExecution()
+    callSlowPath(_llint_slow_path_less)
+    dispatch(4)
+
+
+_llint_op_lesseq:
+    traceExecution()
+    callSlowPath(_llint_slow_path_lesseq)
+    dispatch(4)
+
+
+_llint_op_greater:
+    traceExecution()
+    callSlowPath(_llint_slow_path_greater)
+    dispatch(4)
+
+
+_llint_op_greatereq:
+    traceExecution()
+    callSlowPath(_llint_slow_path_greatereq)
+    dispatch(4)
+
+
+_llint_op_pre_inc:
+    traceExecution()
+    loadi 4[PC], t0
+    bineq TagOffset[cfr, t0, 8], Int32Tag, .opPreIncSlow
+    loadi PayloadOffset[cfr, t0, 8], t1
+    baddio 1, t1, .opPreIncSlow
+    storei t1, PayloadOffset[cfr, t0, 8]
+    dispatch(2)
+
+.opPreIncSlow:
+    callSlowPath(_llint_slow_path_pre_inc)
+    dispatch(2)
+
+
+_llint_op_pre_dec:
+    traceExecution()
+    loadi 4[PC], t0
+    bineq TagOffset[cfr, t0, 8], Int32Tag, .opPreDecSlow
+    loadi PayloadOffset[cfr, t0, 8], t1
+    bsubio 1, t1, .opPreDecSlow
+    storei t1, PayloadOffset[cfr, t0, 8]
+    dispatch(2)
+
+.opPreDecSlow:
+    callSlowPath(_llint_slow_path_pre_dec)
+    dispatch(2)
+
+
+_llint_op_post_inc:
+    traceExecution()
+    loadi 8[PC], t0
+    loadi 4[PC], t1
+    bineq TagOffset[cfr, t0, 8], Int32Tag, .opPostIncSlow
+    bieq t0, t1, .opPostIncDone
+    loadi PayloadOffset[cfr, t0, 8], t2
+    move t2, t3
+    baddio 1, t3, .opPostIncSlow
+    storei Int32Tag, TagOffset[cfr, t1, 8]
+    storei t2, PayloadOffset[cfr, t1, 8]
+    storei t3, PayloadOffset[cfr, t0, 8]
+.opPostIncDone:
+    dispatch(3)
+
+.opPostIncSlow:
+    callSlowPath(_llint_slow_path_post_inc)
+    dispatch(3)
+
+
+_llint_op_post_dec:
+    traceExecution()
+    loadi 8[PC], t0
+    loadi 4[PC], t1
+    bineq TagOffset[cfr, t0, 8], Int32Tag, .opPostDecSlow
+    bieq t0, t1, .opPostDecDone
+    loadi PayloadOffset[cfr, t0, 8], t2
+    move t2, t3
+    bsubio 1, t3, .opPostDecSlow
+    storei Int32Tag, TagOffset[cfr, t1, 8]
+    storei t2, PayloadOffset[cfr, t1, 8]
+    storei t3, PayloadOffset[cfr, t0, 8]
+.opPostDecDone:
+    dispatch(3)
+
+.opPostDecSlow:
+    callSlowPath(_llint_slow_path_post_dec)
+    dispatch(3)
+
+
+_llint_op_to_jsnumber:
+    traceExecution()
+    loadi 8[PC], t0
+    loadi 4[PC], t1
+    loadConstantOrVariable(t0, t2, t3)
+    bieq t2, Int32Tag, .opToJsnumberIsInt
+    biaeq t2, EmptyValueTag, .opToJsnumberSlow
+.opToJsnumberIsInt:
+    storei t2, TagOffset[cfr, t1, 8]
+    storei t3, PayloadOffset[cfr, t1, 8]
+    dispatch(3)
+
+.opToJsnumberSlow:
+    callSlowPath(_llint_slow_path_to_jsnumber)
+    dispatch(3)
+
+
+_llint_op_negate:
+    traceExecution()
+    loadi 8[PC], t0
+    loadi 4[PC], t3
+    loadConstantOrVariable(t0, t1, t2)
+    bineq t1, Int32Tag, .opNegateSrcNotInt
+    btiz t2, 0x7fffffff, .opNegateSlow
+    negi t2
+    storei Int32Tag, TagOffset[cfr, t3, 8]
+    storei t2, PayloadOffset[cfr, t3, 8]
+    dispatch(3)
+.opNegateSrcNotInt:
+    bia t1, LowestTag, .opNegateSlow
+    xori 0x80000000, t1
+    storei t1, TagOffset[cfr, t3, 8]
+    storei t2, PayloadOffset[cfr, t3, 8]
+    dispatch(3)
+
+.opNegateSlow:
+    callSlowPath(_llint_slow_path_negate)
+    dispatch(3)
+
+
+macro binaryOpCustomStore(integerOperationAndStore, doubleOperation, slow_path)
+    loadi 12[PC], t2
+    loadi 8[PC], t0
+    loadConstantOrVariable(t2, t3, t1)
+    loadConstantOrVariable2Reg(t0, t2, t0)
+    bineq t2, Int32Tag, .op1NotInt
+    bineq t3, Int32Tag, .op2NotInt
+    loadi 4[PC], t2
+    integerOperationAndStore(t3, t1, t0, .slow, t2)
+    dispatch(5)
+
+.op1NotInt:
+    # First operand is definitely not an int, the second operand could be anything.
+    bia t2, LowestTag, .slow
+    bib t3, LowestTag, .op1NotIntOp2Double
+    bineq t3, Int32Tag, .slow
+    ci2d t1, ft1
+    jmp .op1NotIntReady
+.op1NotIntOp2Double:
+    fii2d t1, t3, ft1
+.op1NotIntReady:
+    loadi 4[PC], t1
+    fii2d t0, t2, ft0
+    doubleOperation(ft1, ft0)
+    stored ft0, [cfr, t1, 8]
+    dispatch(5)
+
+.op2NotInt:
+    # First operand is definitely an int, the second operand is definitely not.
+    loadi 4[PC], t2
+    bia t3, LowestTag, .slow
+    ci2d t0, ft0
+    fii2d t1, t3, ft1
+    doubleOperation(ft1, ft0)
+    stored ft0, [cfr, t2, 8]
+    dispatch(5)
+
+.slow:
+    callSlowPath(slow_path)
+    dispatch(5)
+end
+
+macro binaryOp(integerOperation, doubleOperation, slow_path)
+    binaryOpCustomStore(
+        macro (int32Tag, left, right, slow, index)
+            integerOperation(left, right, slow)
+            storei int32Tag, TagOffset[cfr, index, 8]
+            storei right, PayloadOffset[cfr, index, 8]
+        end,
+        doubleOperation, slow_path)
+end
+
+_llint_op_add:
+    traceExecution()
+    binaryOp(
+        macro (left, right, slow) baddio left, right, slow end,
+        macro (left, right) addd left, right end,
+        _llint_slow_path_add)
+
+
+_llint_op_mul:
+    traceExecution()
+    binaryOpCustomStore(
+        macro (int32Tag, left, right, slow, index)
+            const scratch = int32Tag   # We know that we can reuse the int32Tag register since it has a constant.
+            move right, scratch
+            bmulio left, scratch, slow
+            btinz scratch, .done
+            bilt left, 0, slow
+            bilt right, 0, slow
+        .done:
+            storei Int32Tag, TagOffset[cfr, index, 8]
+            storei scratch, PayloadOffset[cfr, index, 8]
+        end,
+        macro (left, right) muld left, right end,
+        _llint_slow_path_mul)
+
+
+_llint_op_sub:
+    traceExecution()
+    binaryOp(
+        macro (left, right, slow) bsubio left, right, slow end,
+        macro (left, right) subd left, right end,
+        _llint_slow_path_sub)
+
+
+_llint_op_div:
+    traceExecution()
+    binaryOpCustomStore(
+        macro (int32Tag, left, right, slow, index)
+            ci2d left, ft0
+            ci2d right, ft1
+            divd ft0, ft1
+            bcd2i ft1, right, .notInt
+            storei int32Tag, TagOffset[cfr, index, 8]
+            storei right, PayloadOffset[cfr, index, 8]
+            jmp .done
+        .notInt:
+            stored ft1, [cfr, index, 8]
+        .done:
+        end,
+        macro (left, right) divd left, right end,
+        _llint_slow_path_div)
+
+
+_llint_op_mod:
+    traceExecution()
+    callSlowPath(_llint_slow_path_mod)
+    dispatch(4)
+
+
+macro bitOp(operation, slow_path, advance)
+    loadi 12[PC], t2
+    loadi 8[PC], t0
+    loadConstantOrVariable(t2, t3, t1)
+    loadConstantOrVariable2Reg(t0, t2, t0)
+    bineq t3, Int32Tag, .slow
+    bineq t2, Int32Tag, .slow
+    loadi 4[PC], t2
+    operation(t1, t0, .slow)
+    storei t3, TagOffset[cfr, t2, 8]
+    storei t0, PayloadOffset[cfr, t2, 8]
+    dispatch(advance)
+
+.slow:
+    callSlowPath(slow_path)
+    dispatch(advance)
+end
+
+_llint_op_lshift:
+    traceExecution()
+    bitOp(
+        macro (left, right, slow) lshifti left, right end,
+        _llint_slow_path_lshift,
+        4)
+
+
+_llint_op_rshift:
+    traceExecution()
+    bitOp(
+        macro (left, right, slow) rshifti left, right end,
+        _llint_slow_path_rshift,
+        4)
+
+
+_llint_op_urshift:
+    traceExecution()
+    bitOp(
+        macro (left, right, slow)
+            urshifti left, right
+            bilt right, 0, slow
+        end,
+        _llint_slow_path_urshift,
+        4)
+
+
+_llint_op_bitand:
+    traceExecution()
+    bitOp(
+        macro (left, right, slow) andi left, right end,
+        _llint_slow_path_bitand,
+        5)
+
+
+_llint_op_bitxor:
+    traceExecution()
+    bitOp(
+        macro (left, right, slow) xori left, right end,
+        _llint_slow_path_bitxor,
+        5)
+
+
+_llint_op_bitor:
+    traceExecution()
+    bitOp(
+        macro (left, right, slow) ori left, right end,
+        _llint_slow_path_bitor,
+        5)
+
+
+_llint_op_bitnot:
+    traceExecution()
+    loadi 8[PC], t1
+    loadi 4[PC], t0
+    loadConstantOrVariable(t1, t2, t3)
+    bineq t2, Int32Tag, .opBitnotSlow
+    noti t3
+    storei t2, TagOffset[cfr, t0, 8]
+    storei t3, PayloadOffset[cfr, t0, 8]
+    dispatch(3)
+
+.opBitnotSlow:
+    callSlowPath(_llint_slow_path_bitnot)
+    dispatch(3)
+
+
+_llint_op_check_has_instance:
+    traceExecution()
+    loadi 4[PC], t1
+    loadConstantOrVariablePayload(t1, CellTag, t0, .opCheckHasInstanceSlow)
+    loadp JSCell::m_structure[t0], t0
+    btbz Structure::m_typeInfo + TypeInfo::m_flags[t0], ImplementsHasInstance, .opCheckHasInstanceSlow
+    dispatch(2)
+
+.opCheckHasInstanceSlow:
+    callSlowPath(_llint_slow_path_check_has_instance)
+    dispatch(2)
+
+
+_llint_op_instanceof:
+    traceExecution()
+    # Check that baseVal implements the default HasInstance behavior.
+    # FIXME: This should be deprecated.
+    loadi 12[PC], t1
+    loadConstantOrVariablePayloadUnchecked(t1, t0)
+    loadp JSCell::m_structure[t0], t0
+    btbz Structure::m_typeInfo + TypeInfo::m_flags[t0], ImplementsDefaultHasInstance, .opInstanceofSlow
+    
+    # Actually do the work.
+    loadi 16[PC], t0
+    loadi 4[PC], t3
+    loadConstantOrVariablePayload(t0, CellTag, t1, .opInstanceofSlow)
+    loadp JSCell::m_structure[t1], t2
+    bbb Structure::m_typeInfo + TypeInfo::m_type[t2], ObjectType, .opInstanceofSlow
+    loadi 8[PC], t0
+    loadConstantOrVariablePayload(t0, CellTag, t2, .opInstanceofSlow)
+    
+    # Register state: t1 = prototype, t2 = value
+    move 1, t0
+.opInstanceofLoop:
+    loadp JSCell::m_structure[t2], t2
+    loadi Structure::m_prototype + PayloadOffset[t2], t2
+    bpeq t2, t1, .opInstanceofDone
+    btinz t2, .opInstanceofLoop
+
+    move 0, t0
+.opInstanceofDone:
+    storei BooleanTag, TagOffset[cfr, t3, 8]
+    storei t0, PayloadOffset[cfr, t3, 8]
+    dispatch(5)
+
+.opInstanceofSlow:
+    callSlowPath(_llint_slow_path_instanceof)
+    dispatch(5)
+
+
+_llint_op_typeof:
+    traceExecution()
+    callSlowPath(_llint_slow_path_typeof)
+    dispatch(3)
+
+
+_llint_op_is_undefined:
+    traceExecution()
+    callSlowPath(_llint_slow_path_is_undefined)
+    dispatch(3)
+
+
+_llint_op_is_boolean:
+    traceExecution()
+    callSlowPath(_llint_slow_path_is_boolean)
+    dispatch(3)
+
+
+_llint_op_is_number:
+    traceExecution()
+    callSlowPath(_llint_slow_path_is_number)
+    dispatch(3)
+
+
+_llint_op_is_string:
+    traceExecution()
+    callSlowPath(_llint_slow_path_is_string)
+    dispatch(3)
+
+
+_llint_op_is_object:
+    traceExecution()
+    callSlowPath(_llint_slow_path_is_object)
+    dispatch(3)
+
+
+_llint_op_is_function:
+    traceExecution()
+    callSlowPath(_llint_slow_path_is_function)
+    dispatch(3)
+
+
+_llint_op_in:
+    traceExecution()
+    callSlowPath(_llint_slow_path_in)
+    dispatch(4)
+
+
+_llint_op_resolve:
+    traceExecution()
+    callSlowPath(_llint_slow_path_resolve)
+    dispatch(4)
+
+
+_llint_op_resolve_skip:
+    traceExecution()
+    callSlowPath(_llint_slow_path_resolve_skip)
+    dispatch(5)
+
+
+macro resolveGlobal(size, slow)
+    # Operands are as follows:
+    # 4[PC]   Destination for the load.
+    # 8[PC]   Property identifier index in the code block.
+    # 12[PC]  Structure pointer, initialized to 0 by bytecode generator.
+    # 16[PC]  Offset in global object, initialized to 0 by bytecode generator.
+    loadp CodeBlock[cfr], t0
+    loadp CodeBlock::m_globalObject[t0], t0
+    loadp JSCell::m_structure[t0], t1
+    bpneq t1, 12[PC], slow
+    loadi 16[PC], t1
+    loadp JSObject::m_propertyStorage[t0], t0
+    loadi TagOffset[t0, t1, 8], t2
+    loadi PayloadOffset[t0, t1, 8], t3
+    loadi 4[PC], t0
+    storei t2, TagOffset[cfr, t0, 8]
+    storei t3, PayloadOffset[cfr, t0, 8]
+    loadi (size - 1) * 4[PC], t0
+    valueProfile(t2, t3, t0)
+end
+
+_llint_op_resolve_global:
+    traceExecution()
+    resolveGlobal(6, .opResolveGlobalSlow)
+    dispatch(6)
+
+.opResolveGlobalSlow:
+    callSlowPath(_llint_slow_path_resolve_global)
+    dispatch(6)
+
+
+# Gives you the scope in t0, while allowing you to optionally perform additional checks on the
+# scopes as they are traversed. scopeCheck() is called with two arguments: the register
+# holding the scope, and a register that can be used for scratch. Note that this does not
+# use t3, so you can hold stuff in t3 if need be.
+macro getScope(deBruijnIndexOperand, scopeCheck)
+    loadp ScopeChain + PayloadOffset[cfr], t0
+    loadi deBruijnIndexOperand, t2
+    
+    btiz t2, .done
+    
+    loadp CodeBlock[cfr], t1
+    bineq CodeBlock::m_codeType[t1], FunctionCode, .loop
+    btbz CodeBlock::m_needsFullScopeChain[t1], .loop
+    
+    loadi CodeBlock::m_activationRegister[t1], t1
+
+    # Need to conditionally skip over one scope.
+    bieq TagOffset[cfr, t1, 8], EmptyValueTag, .noActivation
+    scopeCheck(t0, t1)
+    loadp ScopeChainNode::next[t0], t0
+.noActivation:
+    subi 1, t2
+    
+    btiz t2, .done
+.loop:
+    scopeCheck(t0, t1)
+    loadp ScopeChainNode::next[t0], t0
+    subi 1, t2
+    btinz t2, .loop
+
+.done:
+end
+
+_llint_op_resolve_global_dynamic:
+    traceExecution()
+    loadp JITStackFrame::globalData[sp], t3
+    loadp JSGlobalData::activationStructure[t3], t3
+    getScope(
+        20[PC],
+        macro (scope, scratch)
+            loadp ScopeChainNode::object[scope], scratch
+            bpneq JSCell::m_structure[scratch], t3, .opResolveGlobalDynamicSuperSlow
+        end)
+    resolveGlobal(7, .opResolveGlobalDynamicSlow)
+    dispatch(7)
+
+.opResolveGlobalDynamicSuperSlow:
+    callSlowPath(_llint_slow_path_resolve_for_resolve_global_dynamic)
+    dispatch(7)
+
+.opResolveGlobalDynamicSlow:
+    callSlowPath(_llint_slow_path_resolve_global_dynamic)
+    dispatch(7)
+
+
+_llint_op_get_scoped_var:
+    traceExecution()
+    # Operands are as follows:
+    # 4[PC]   Destination for the load.
+    # 8[PC]   Index of register in the scope.
+    # 12[PC]  De Bruijn index.
+    getScope(12[PC], macro (scope, scratch) end)
+    loadi 4[PC], t1
+    loadi 8[PC], t2
+    loadp ScopeChainNode::object[t0], t0
+    loadp JSVariableObject::m_registers[t0], t0
+    loadi TagOffset[t0, t2, 8], t3
+    loadi PayloadOffset[t0, t2, 8], t0
+    storei t3, TagOffset[cfr, t1, 8]
+    storei t0, PayloadOffset[cfr, t1, 8]
+    loadi 16[PC], t1
+    valueProfile(t3, t0, t1)
+    dispatch(5)
+
+
+_llint_op_put_scoped_var:
+    traceExecution()
+    getScope(8[PC], macro (scope, scratch) end)
+    loadi 12[PC], t1
+    loadConstantOrVariable(t1, t3, t2)
+    loadi 4[PC], t1
+    writeBarrier(t3, t2)
+    loadp ScopeChainNode::object[t0], t0
+    loadp JSVariableObject::m_registers[t0], t0
+    storei t3, TagOffset[t0, t1, 8]
+    storei t2, PayloadOffset[t0, t1, 8]
+    dispatch(4)
+
+
+_llint_op_get_global_var:
+    traceExecution()
+    loadi 8[PC], t1
+    loadi 4[PC], t3
+    loadp CodeBlock[cfr], t0
+    loadp CodeBlock::m_globalObject[t0], t0
+    loadp JSGlobalObject::m_registers[t0], t0
+    loadi TagOffset[t0, t1, 8], t2
+    loadi PayloadOffset[t0, t1, 8], t1
+    storei t2, TagOffset[cfr, t3, 8]
+    storei t1, PayloadOffset[cfr, t3, 8]
+    loadi 12[PC], t3
+    valueProfile(t2, t1, t3)
+    dispatch(4)
+
+
+_llint_op_put_global_var:
+    traceExecution()
+    loadi 8[PC], t1
+    loadp CodeBlock[cfr], t0
+    loadp CodeBlock::m_globalObject[t0], t0
+    loadp JSGlobalObject::m_registers[t0], t0
+    loadConstantOrVariable(t1, t2, t3)
+    loadi 4[PC], t1
+    writeBarrier(t2, t3)
+    storei t2, TagOffset[t0, t1, 8]
+    storei t3, PayloadOffset[t0, t1, 8]
+    dispatch(3)
+
+
+_llint_op_resolve_base:
+    traceExecution()
+    callSlowPath(_llint_slow_path_resolve_base)
+    dispatch(5)
+
+
+_llint_op_ensure_property_exists:
+    traceExecution()
+    callSlowPath(_llint_slow_path_ensure_property_exists)
+    dispatch(3)
+
+
+_llint_op_resolve_with_base:
+    traceExecution()
+    callSlowPath(_llint_slow_path_resolve_with_base)
+    dispatch(5)
+
+
+_llint_op_resolve_with_this:
+    traceExecution()
+    callSlowPath(_llint_slow_path_resolve_with_this)
+    dispatch(5)
+
+
+_llint_op_get_by_id:
+    traceExecution()
+    # We only do monomorphic get_by_id caching for now, and we do not modify the
+    # opcode. We do, however, allow the cache to change any time it fails, since
+    # ping-ponging is free. At best we get lucky and the get_by_id will continue
+    # to take the fast path on the new cache. At worst we take the slow path, which
+    # is what we would have been doing anyway.
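+    #
+    # In C-like pseudocode (illustrative names only), the monomorphic fast
+    # path below is:
+    #     if (base->structure != cachedStructure) goto slowPath;
+    #     result = *(JSValue*)((char*)base->propertyStorage + cachedByteOffset);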
+    loadi 8[PC], t0
+    loadi 16[PC], t1
+    loadConstantOrVariablePayload(t0, CellTag, t3, .opGetByIdSlow)
+    loadi 20[PC], t2
+    loadp JSObject::m_propertyStorage[t3], t0
+    bpneq JSCell::m_structure[t3], t1, .opGetByIdSlow
+    loadi 4[PC], t1
+    loadi TagOffset[t0, t2], t3
+    loadi PayloadOffset[t0, t2], t2
+    storei t3, TagOffset[cfr, t1, 8]
+    storei t2, PayloadOffset[cfr, t1, 8]
+    loadi 32[PC], t1
+    valueProfile(t3, t2, t1)
+    dispatch(9)
+
+.opGetByIdSlow:
+    callSlowPath(_llint_slow_path_get_by_id)
+    dispatch(9)
+
+
+_llint_op_get_arguments_length:
+    traceExecution()
+    loadi 8[PC], t0
+    loadi 4[PC], t1
+    bineq TagOffset[cfr, t0, 8], EmptyValueTag, .opGetArgumentsLengthSlow
+    loadi ArgumentCount + PayloadOffset[cfr], t2
+    subi 1, t2
+    storei Int32Tag, TagOffset[cfr, t1, 8]
+    storei t2, PayloadOffset[cfr, t1, 8]
+    dispatch(4)
+
+.opGetArgumentsLengthSlow:
+    callSlowPath(_llint_slow_path_get_arguments_length)
+    dispatch(4)
+
+
+_llint_op_put_by_id:
+    traceExecution()
+    loadi 4[PC], t3
+    loadi 16[PC], t1
+    loadConstantOrVariablePayload(t3, CellTag, t0, .opPutByIdSlow)
+    loadi 12[PC], t2
+    loadp JSObject::m_propertyStorage[t0], t3
+    bpneq JSCell::m_structure[t0], t1, .opPutByIdSlow
+    loadi 20[PC], t1
+    loadConstantOrVariable2Reg(t2, t0, t2)
+    writeBarrier(t0, t2)
+    storei t0, TagOffset[t3, t1]
+    storei t2, PayloadOffset[t3, t1]
+    dispatch(9)
+
+.opPutByIdSlow:
+    callSlowPath(_llint_slow_path_put_by_id)
+    dispatch(9)
+
+
+macro putByIdTransition(additionalChecks)
+    traceExecution()
+    loadi 4[PC], t3
+    loadi 16[PC], t1
+    loadConstantOrVariablePayload(t3, CellTag, t0, .opPutByIdSlow)
+    loadi 12[PC], t2
+    bpneq JSCell::m_structure[t0], t1, .opPutByIdSlow
+    additionalChecks(t1, t3, .opPutByIdSlow)
+    loadi 20[PC], t1
+    loadp JSObject::m_propertyStorage[t0], t3
+    addp t1, t3
+    loadConstantOrVariable2Reg(t2, t1, t2)
+    writeBarrier(t1, t2)
+    storei t1, TagOffset[t3]
+    loadi 24[PC], t1
+    storei t2, PayloadOffset[t3]
+    storep t1, JSCell::m_structure[t0]
+    dispatch(9)
+end
+
+_llint_op_put_by_id_transition_direct:
+    putByIdTransition(macro (oldStructure, scratch, slow) end)
+
+
+_llint_op_put_by_id_transition_normal:
+    putByIdTransition(
+        macro (oldStructure, scratch, slow)
+            const protoCell = oldStructure   # Reusing the oldStructure register for the proto
+        
+            loadp 28[PC], scratch
+            assert(macro (ok) btpnz scratch, ok end)
+            loadp StructureChain::m_vector[scratch], scratch
+            assert(macro (ok) btpnz scratch, ok end)
+            bieq Structure::m_prototype + TagOffset[oldStructure], NullTag, .done
+        .loop:
+            loadi Structure::m_prototype + PayloadOffset[oldStructure], protoCell
+            loadp JSCell::m_structure[protoCell], oldStructure
+            bpneq oldStructure, [scratch], slow
+            addp 4, scratch
+            bineq Structure::m_prototype + TagOffset[oldStructure], NullTag, .loop
+        .done:
+        end)
+
+
+_llint_op_del_by_id:
+    traceExecution()
+    callSlowPath(_llint_slow_path_del_by_id)
+    dispatch(4)
+
+
+_llint_op_get_by_val:
+    traceExecution()
+    loadp CodeBlock[cfr], t1
+    loadi 8[PC], t2
+    loadi 12[PC], t3
+    loadp CodeBlock::m_globalData[t1], t1
+    loadConstantOrVariablePayload(t2, CellTag, t0, .opGetByValSlow)
+    loadp JSGlobalData::jsArrayClassInfo[t1], t2
+    loadConstantOrVariablePayload(t3, Int32Tag, t1, .opGetByValSlow)
+    bpneq [t0], t2, .opGetByValSlow
+    loadp JSArray::m_storage[t0], t3
+    biaeq t1, JSArray::m_vectorLength[t0], .opGetByValSlow
+    loadi 4[PC], t0
+    loadi ArrayStorage::m_vector + TagOffset[t3, t1, 8], t2
+    loadi ArrayStorage::m_vector + PayloadOffset[t3, t1, 8], t1
+    bieq t2, EmptyValueTag, .opGetByValSlow
+    storei t2, TagOffset[cfr, t0, 8]
+    storei t1, PayloadOffset[cfr, t0, 8]
+    loadi 16[PC], t0
+    valueProfile(t2, t1, t0)
+    dispatch(5)
+
+.opGetByValSlow:
+    callSlowPath(_llint_slow_path_get_by_val)
+    dispatch(5)
+
+
+_llint_op_get_argument_by_val:
+    traceExecution()
+    loadi 8[PC], t0
+    loadi 12[PC], t1
+    bineq TagOffset[cfr, t0, 8], EmptyValueTag, .opGetArgumentByValSlow
+    loadConstantOrVariablePayload(t1, Int32Tag, t2, .opGetArgumentByValSlow)
+    addi 1, t2
+    loadi ArgumentCount + PayloadOffset[cfr], t1
+    biaeq t2, t1, .opGetArgumentByValSlow
+    negi t2
+    loadi 4[PC], t3
+    loadi ThisArgumentOffset + TagOffset[cfr, t2, 8], t0
+    loadi ThisArgumentOffset + PayloadOffset[cfr, t2, 8], t1
+    storei t0, TagOffset[cfr, t3, 8]
+    storei t1, PayloadOffset[cfr, t3, 8]
+    dispatch(5)
+
+.opGetArgumentByValSlow:
+    callSlowPath(_llint_slow_path_get_argument_by_val)
+    dispatch(5)
+
+
+_llint_op_get_by_pname:
+    traceExecution()
+    loadi 12[PC], t0
+    loadConstantOrVariablePayload(t0, CellTag, t1, .opGetByPnameSlow)
+    loadi 16[PC], t0
+    bpneq t1, PayloadOffset[cfr, t0, 8], .opGetByPnameSlow
+    loadi 8[PC], t0
+    loadConstantOrVariablePayload(t0, CellTag, t2, .opGetByPnameSlow)
+    loadi 20[PC], t0
+    loadi PayloadOffset[cfr, t0, 8], t3
+    loadp JSCell::m_structure[t2], t0
+    bpneq t0, JSPropertyNameIterator::m_cachedStructure[t3], .opGetByPnameSlow
+    loadi 24[PC], t0
+    loadi [cfr, t0, 8], t0
+    subi 1, t0
+    biaeq t0, JSPropertyNameIterator::m_numCacheableSlots[t3], .opGetByPnameSlow
+    loadp JSObject::m_propertyStorage[t2], t2
+    loadi TagOffset[t2, t0, 8], t1
+    loadi PayloadOffset[t2, t0, 8], t3
+    loadi 4[PC], t0
+    storei t1, TagOffset[cfr, t0, 8]
+    storei t3, PayloadOffset[cfr, t0, 8]
+    dispatch(7)
+
+.opGetByPnameSlow:
+    callSlowPath(_llint_slow_path_get_by_pname)
+    dispatch(7)
+
+
+_llint_op_put_by_val:
+    traceExecution()
+    loadi 4[PC], t0
+    loadConstantOrVariablePayload(t0, CellTag, t1, .opPutByValSlow)
+    loadi 8[PC], t0
+    loadConstantOrVariablePayload(t0, Int32Tag, t2, .opPutByValSlow)
+    loadp CodeBlock[cfr], t0
+    loadp CodeBlock::m_globalData[t0], t0
+    loadp JSGlobalData::jsArrayClassInfo[t0], t0
+    bpneq [t1], t0, .opPutByValSlow
+    biaeq t2, JSArray::m_vectorLength[t1], .opPutByValSlow
+    loadp JSArray::m_storage[t1], t0
+    bieq ArrayStorage::m_vector + TagOffset[t0, t2, 8], EmptyValueTag, .opPutByValEmpty
+.opPutByValStoreResult:
+    loadi 12[PC], t3
+    loadConstantOrVariable2Reg(t3, t1, t3)
+    writeBarrier(t1, t3)
+    storei t1, ArrayStorage::m_vector + TagOffset[t0, t2, 8]
+    storei t3, ArrayStorage::m_vector + PayloadOffset[t0, t2, 8]
+    dispatch(4)
+
+.opPutByValEmpty:
+    addi 1, ArrayStorage::m_numValuesInVector[t0]
+    bib t2, ArrayStorage::m_length[t0], .opPutByValStoreResult
+    addi 1, t2, t1
+    storei t1, ArrayStorage::m_length[t0]
+    jmp .opPutByValStoreResult
+
+.opPutByValSlow:
+    callSlowPath(_llint_slow_path_put_by_val)
+    dispatch(4)
+
+
+_llint_op_del_by_val:
+    traceExecution()
+    callSlowPath(_llint_slow_path_del_by_val)
+    dispatch(4)
+
+
+_llint_op_put_by_index:
+    traceExecution()
+    callSlowPath(_llint_slow_path_put_by_index)
+    dispatch(4)
+
+
+_llint_op_put_getter_setter:
+    traceExecution()
+    callSlowPath(_llint_slow_path_put_getter_setter)
+    dispatch(5)
+
+
+_llint_op_loop:
+    nop
+_llint_op_jmp:
+    traceExecution()
+    dispatchBranch(4[PC])
+
+
+_llint_op_jmp_scopes:
+    traceExecution()
+    callSlowPath(_llint_slow_path_jmp_scopes)
+    dispatch(0)
+
+
+macro jumpTrueOrFalse(conditionOp, slow)
+    loadi 4[PC], t1
+    loadConstantOrVariablePayload(t1, BooleanTag, t0, .slow)
+    conditionOp(t0, .target)
+    dispatch(3)
+
+.target:
+    dispatchBranch(8[PC])
+
+.slow:
+    callSlowPath(slow)
+    dispatch(0)
+end
+
+_llint_op_loop_if_true:
+    nop
+_llint_op_jtrue:
+    traceExecution()
+    jumpTrueOrFalse(
+        macro (value, target) btinz value, target end,
+        _llint_slow_path_jtrue)
+
+
+_llint_op_loop_if_false:
+    nop
+_llint_op_jfalse:
+    traceExecution()
+    jumpTrueOrFalse(
+        macro (value, target) btiz value, target end,
+        _llint_slow_path_jfalse)
+
+
+macro equalNull(cellHandler, immediateHandler)
+    loadi 4[PC], t0
+    loadi TagOffset[cfr, t0, 8], t1
+    loadi PayloadOffset[cfr, t0, 8], t0
+    bineq t1, CellTag, .immediate
+    loadp JSCell::m_structure[t0], t2
+    cellHandler(Structure::m_typeInfo + TypeInfo::m_flags[t2], .target)
+    dispatch(3)
+
+.target:
+    dispatchBranch(8[PC])
+
+.immediate:
+    ori 1, t1
+    immediateHandler(t1, .target)
+    dispatch(3)
+end
+
+_llint_op_jeq_null:
+    traceExecution()
+    equalNull(
+        macro (value, target) btbnz value, MasqueradesAsUndefined, target end,
+        macro (value, target) bieq value, NullTag, target end)
+    
+
+_llint_op_jneq_null:
+    traceExecution()
+    equalNull(
+        macro (value, target) btbz value, MasqueradesAsUndefined, target end,
+        macro (value, target) bineq value, NullTag, target end)
+
+
+_llint_op_jneq_ptr:
+    traceExecution()
+    loadi 4[PC], t0
+    loadi 8[PC], t1
+    bineq TagOffset[cfr, t0, 8], CellTag, .opJneqPtrBranch
+    bpeq PayloadOffset[cfr, t0, 8], t1, .opJneqPtrFallThrough
+.opJneqPtrBranch:
+    dispatchBranch(12[PC])
+.opJneqPtrFallThrough:
+    dispatch(4)
+
+
+macro compare(integerCompare, doubleCompare, slow_path)
+    loadi 4[PC], t2
+    loadi 8[PC], t3
+    loadConstantOrVariable(t2, t0, t1)
+    loadConstantOrVariable2Reg(t3, t2, t3)
+    bineq t0, Int32Tag, .op1NotInt
+    bineq t2, Int32Tag, .op2NotInt
+    integerCompare(t1, t3, .jumpTarget)
+    dispatch(4)
+
+.op1NotInt:
+    bia t0, LowestTag, .slow
+    bib t2, LowestTag, .op1NotIntOp2Double
+    bineq t2, Int32Tag, .slow
+    ci2d t3, ft1
+    jmp .op1NotIntReady
+.op1NotIntOp2Double:
+    fii2d t3, t2, ft1
+.op1NotIntReady:
+    fii2d t1, t0, ft0
+    doubleCompare(ft0, ft1, .jumpTarget)
+    dispatch(4)
+
+.op2NotInt:
+    ci2d t1, ft0
+    bia t2, LowestTag, .slow
+    fii2d t3, t2, ft1
+    doubleCompare(ft0, ft1, .jumpTarget)
+    dispatch(4)
+
+.jumpTarget:
+    dispatchBranch(12[PC])
+
+.slow:
+    callSlowPath(slow_path)
+    dispatch(0)
+end
+
+_llint_op_loop_if_less:
+    nop
+_llint_op_jless:
+    traceExecution()
+    compare(
+        macro (left, right, target) bilt left, right, target end,
+        macro (left, right, target) bdlt left, right, target end,
+        _llint_slow_path_jless)
+
+
+_llint_op_jnless:
+    traceExecution()
+    compare(
+        macro (left, right, target) bigteq left, right, target end,
+        macro (left, right, target) bdgtequn left, right, target end,
+        _llint_slow_path_jnless)
+
+
+_llint_op_loop_if_greater:
+    nop
+_llint_op_jgreater:
+    traceExecution()
+    compare(
+        macro (left, right, target) bigt left, right, target end,
+        macro (left, right, target) bdgt left, right, target end,
+        _llint_slow_path_jgreater)
+
+
+_llint_op_jngreater:
+    traceExecution()
+    compare(
+        macro (left, right, target) bilteq left, right, target end,
+        macro (left, right, target) bdltequn left, right, target end,
+        _llint_slow_path_jngreater)
+
+
+_llint_op_loop_if_lesseq:
+    nop
+_llint_op_jlesseq:
+    traceExecution()
+    compare(
+        macro (left, right, target) bilteq left, right, target end,
+        macro (left, right, target) bdlteq left, right, target end,
+        _llint_slow_path_jlesseq)
+
+
+_llint_op_jnlesseq:
+    traceExecution()
+    compare(
+        macro (left, right, target) bigt left, right, target end,
+        macro (left, right, target) bdgtun left, right, target end,
+        _llint_slow_path_jnlesseq)
+
+
+_llint_op_loop_if_greatereq:
+    nop
+_llint_op_jgreatereq:
+    traceExecution()
+    compare(
+        macro (left, right, target) bigteq left, right, target end,
+        macro (left, right, target) bdgteq left, right, target end,
+        _llint_slow_path_jgreatereq)
+
+
+_llint_op_jngreatereq:
+    traceExecution()
+    compare(
+        macro (left, right, target) bilt left, right, target end,
+        macro (left, right, target) bdltun left, right, target end,
+        _llint_slow_path_jngreatereq)
+
+
+_llint_op_loop_hint:
+    traceExecution()
+    checkSwitchToJITForLoop()
+    dispatch(1)
+
+
+_llint_op_switch_imm:
+    traceExecution()
+    loadi 12[PC], t2
+    loadi 4[PC], t3
+    loadConstantOrVariable(t2, t1, t0)
+    loadp CodeBlock[cfr], t2
+    loadp CodeBlock::m_rareData[t2], t2
+    muli sizeof SimpleJumpTable, t3   # FIXME: would be nice to peephole this!
+    loadp CodeBlock::RareData::m_immediateSwitchJumpTables + VectorBufferOffset[t2], t2
+    addp t3, t2
+    bineq t1, Int32Tag, .opSwitchImmNotInt
+    subi SimpleJumpTable::min[t2], t0
+    biaeq t0, SimpleJumpTable::branchOffsets + VectorSizeOffset[t2], .opSwitchImmFallThrough
+    loadp SimpleJumpTable::branchOffsets + VectorBufferOffset[t2], t3
+    loadi [t3, t0, 4], t1
+    btiz t1, .opSwitchImmFallThrough
+    dispatchBranchWithOffset(t1)
+
+.opSwitchImmNotInt:
+    bib t1, LowestTag, .opSwitchImmSlow  # Go to slow path if it's a double.
+.opSwitchImmFallThrough:
+    dispatchBranch(8[PC])
+
+.opSwitchImmSlow:
+    callSlowPath(_llint_slow_path_switch_imm)
+    dispatch(0)
+
+
+_llint_op_switch_char:
+    traceExecution()
+    loadi 12[PC], t2
+    loadi 4[PC], t3
+    loadConstantOrVariable(t2, t1, t0)
+    loadp CodeBlock[cfr], t2
+    loadp CodeBlock::m_rareData[t2], t2
+    muli sizeof SimpleJumpTable, t3
+    loadp CodeBlock::RareData::m_characterSwitchJumpTables + VectorBufferOffset[t2], t2
+    addp t3, t2
+    bineq t1, CellTag, .opSwitchCharFallThrough
+    loadp JSCell::m_structure[t0], t1
+    bbneq Structure::m_typeInfo + TypeInfo::m_type[t1], StringType, .opSwitchCharFallThrough
+    loadp JSString::m_value[t0], t0
+    bineq StringImpl::m_length[t0], 1, .opSwitchCharFallThrough
+    loadp StringImpl::m_data8[t0], t1
+    btinz StringImpl::m_hashAndFlags[t0], HashFlags8BitBuffer, .opSwitchChar8Bit
+    loadh [t1], t0
+    jmp .opSwitchCharReady
+.opSwitchChar8Bit:
+    loadb [t1], t0
+.opSwitchCharReady:
+    subi SimpleJumpTable::min[t2], t0
+    biaeq t0, SimpleJumpTable::branchOffsets + VectorSizeOffset[t2], .opSwitchCharFallThrough
+    loadp SimpleJumpTable::branchOffsets + VectorBufferOffset[t2], t2
+    loadi [t2, t0, 4], t1
+    btiz t1, .opSwitchCharFallThrough
+    dispatchBranchWithOffset(t1)
+
+.opSwitchCharFallThrough:
+    dispatchBranch(8[PC])
+
+
+_llint_op_switch_string:
+    traceExecution()
+    callSlowPath(_llint_slow_path_switch_string)
+    dispatch(0)
+
+
+_llint_op_new_func:
+    traceExecution()
+    btiz 12[PC], .opNewFuncUnchecked
+    loadi 4[PC], t1
+    bineq TagOffset[cfr, t1, 8], EmptyValueTag, .opNewFuncDone
+.opNewFuncUnchecked:
+    callSlowPath(_llint_slow_path_new_func)
+.opNewFuncDone:
+    dispatch(4)
+
+
+_llint_op_new_func_exp:
+    traceExecution()
+    callSlowPath(_llint_slow_path_new_func_exp)
+    dispatch(3)
+
+
+macro doCall(slow_path)
+    loadi 4[PC], t0
+    loadi 16[PC], t1
+    loadp LLIntCallLinkInfo::callee[t1], t2
+    loadConstantOrVariablePayload(t0, CellTag, t3, .opCallSlow)
+    bineq t3, t2, .opCallSlow
+    loadi 12[PC], t3
+    addp 24, PC
+    lshifti 3, t3
+    addp cfr, t3  # t3 contains the new value of cfr
+    loadp JSFunction::m_scopeChain[t2], t0
+    storei t2, Callee + PayloadOffset[t3]
+    storei t0, ScopeChain + PayloadOffset[t3]
+    loadi 8 - 24[PC], t2
+    storei PC, ArgumentCount + TagOffset[cfr]
+    storep cfr, CallerFrame[t3]
+    storei t2, ArgumentCount + PayloadOffset[t3]
+    storei CellTag, Callee + TagOffset[t3]
+    storei CellTag, ScopeChain + TagOffset[t3]
+    move t3, cfr
+    call LLIntCallLinkInfo::machineCodeTarget[t1]
+    dispatchAfterCall()
+
+.opCallSlow:
+    slowPathForCall(6, slow_path)
+end
+
+_llint_op_call:
+    traceExecution()
+    doCall(_llint_slow_path_call)
+
+
+_llint_op_construct:
+    traceExecution()
+    doCall(_llint_slow_path_construct)
+
+
+_llint_op_call_varargs:
+    traceExecution()
+    slowPathForCall(6, _llint_slow_path_call_varargs)
+
+
+_llint_op_call_eval:
+    traceExecution()
+    
+    # Eval is executed in one of two modes:
+    #
+    # 1) We find that we're really invoking eval() in which case the
+    #    execution is performed entirely inside the slow_path, and it
+    #    returns the PC of a function that just returns the return value
+    #    that the eval returned.
+    #
+    # 2) We find that we're invoking something called eval() that is not
+    #    the real eval. Then the slow_path returns the PC of the thing to
+    #    call, and we call it.
+    #
+    # This allows us to handle two cases that would otherwise require up to
+    # four pieces of state, which cannot be easily packed into two registers
+    # (C functions can easily return values in up to two registers):
+    #
+    # - The call frame register. This may or may not have been modified
+    #   by the slow_path, but the convention is that it returns it. It's not
+    #   totally clear if that's necessary, since the cfr is callee-save.
+    #   But that's our style in this here interpreter so we stick with it.
+    #
+    # - A bit to say if the slow_path successfully executed the eval and has
+    #   the return value, or did not execute the eval but has a PC for us
+    #   to call.
+    #
+    # - Either:
+    #   - The JS return value (two registers), or
+    #
+    #   - The PC to call.
+    #
+    # It turns out to be easier to just always have this return the cfr
+    # and a PC to call, and that PC may be a dummy thunk that just
+    # returns the JS value that the eval returned.
+    
+    slowPathForCall(4, _llint_slow_path_call_eval)
+
+
+_llint_generic_return_point:
+    dispatchAfterCall()
+
+
+_llint_op_tear_off_activation:
+    traceExecution()
+    loadi 4[PC], t0
+    loadi 8[PC], t1
+    bineq TagOffset[cfr, t0, 8], EmptyValueTag, .opTearOffActivationCreated
+    bieq TagOffset[cfr, t1, 8], EmptyValueTag, .opTearOffActivationNotCreated
+.opTearOffActivationCreated:
+    callSlowPath(_llint_slow_path_tear_off_activation)
+.opTearOffActivationNotCreated:
+    dispatch(3)
+
+
+_llint_op_tear_off_arguments:
+    traceExecution()
+    loadi 4[PC], t0
+    subi 1, t0   # Get the unmodifiedArgumentsRegister
+    bieq TagOffset[cfr, t0, 8], EmptyValueTag, .opTearOffArgumentsNotCreated
+    callSlowPath(_llint_slow_path_tear_off_arguments)
+.opTearOffArgumentsNotCreated:
+    dispatch(2)
+
+
+macro doReturn()
+    loadp ReturnPC[cfr], t2
+    loadp CallerFrame[cfr], cfr
+    restoreReturnAddressBeforeReturn(t2)
+    ret
+end
+
+_llint_op_ret:
+    traceExecution()
+    checkSwitchToJITForEpilogue()
+    loadi 4[PC], t2
+    loadConstantOrVariable(t2, t1, t0)
+    doReturn()
+
+
+_llint_op_call_put_result:
+    loadi 4[PC], t2
+    loadi 8[PC], t3
+    storei t1, TagOffset[cfr, t2, 8]
+    storei t0, PayloadOffset[cfr, t2, 8]
+    valueProfile(t1, t0, t3)
+    traceExecution() # Needs to be here because it would clobber t1, t0
+    dispatch(3)
+
+
+_llint_op_ret_object_or_this:
+    traceExecution()
+    checkSwitchToJITForEpilogue()
+    loadi 4[PC], t2
+    loadConstantOrVariable(t2, t1, t0)
+    bineq t1, CellTag, .opRetObjectOrThisNotObject
+    loadp JSCell::m_structure[t0], t2
+    bbb Structure::m_typeInfo + TypeInfo::m_type[t2], ObjectType, .opRetObjectOrThisNotObject
+    doReturn()
+
+.opRetObjectOrThisNotObject:
+    loadi 8[PC], t2
+    loadConstantOrVariable(t2, t1, t0)
+    doReturn()
+
+
+_llint_op_method_check:
+    traceExecution()
+    # We ignore method checks and use normal get_by_id optimizations.
+    dispatch(1)
+
+
+_llint_op_strcat:
+    traceExecution()
+    callSlowPath(_llint_slow_path_strcat)
+    dispatch(4)
+
+
+_llint_op_to_primitive:
+    traceExecution()
+    loadi 8[PC], t2
+    loadi 4[PC], t3
+    loadConstantOrVariable(t2, t1, t0)
+    bineq t1, CellTag, .opToPrimitiveIsImm
+    loadp JSCell::m_structure[t0], t2
+    bbneq Structure::m_typeInfo + TypeInfo::m_type[t2], StringType, .opToPrimitiveSlowCase
+.opToPrimitiveIsImm:
+    storei t1, TagOffset[cfr, t3, 8]
+    storei t0, PayloadOffset[cfr, t3, 8]
+    dispatch(3)
+
+.opToPrimitiveSlowCase:
+    callSlowPath(_llint_slow_path_to_primitive)
+    dispatch(3)
+
+
+_llint_op_get_pnames:
+    traceExecution()
+    callSlowPath(_llint_slow_path_get_pnames)
+    dispatch(0) # The slow_path either advances the PC or jumps us to somewhere else.
+
+
+_llint_op_next_pname:
+    traceExecution()
+    loadi 12[PC], t1
+    loadi 16[PC], t2
+    loadi PayloadOffset[cfr, t1, 8], t0
+    bieq t0, PayloadOffset[cfr, t2, 8], .opNextPnameEnd
+    loadi 20[PC], t2
+    loadi PayloadOffset[cfr, t2, 8], t2
+    loadp JSPropertyNameIterator::m_jsStrings[t2], t3
+    loadi [t3, t0, 8], t3
+    addi 1, t0
+    storei t0, PayloadOffset[cfr, t1, 8]
+    loadi 4[PC], t1
+    storei CellTag, TagOffset[cfr, t1, 8]
+    storei t3, PayloadOffset[cfr, t1, 8]
+    loadi 8[PC], t3
+    loadi PayloadOffset[cfr, t3, 8], t3
+    loadp JSCell::m_structure[t3], t1
+    bpneq t1, JSPropertyNameIterator::m_cachedStructure[t2], .opNextPnameSlow
+    loadp JSPropertyNameIterator::m_cachedPrototypeChain[t2], t0
+    loadp StructureChain::m_vector[t0], t0
+    btpz [t0], .opNextPnameTarget
+.opNextPnameCheckPrototypeLoop:
+    bieq Structure::m_prototype + TagOffset[t1], NullTag, .opNextPnameSlow
+    loadp Structure::m_prototype + PayloadOffset[t1], t2
+    loadp JSCell::m_structure[t2], t1
+    bpneq t1, [t0], .opNextPnameSlow
+    addp 4, t0
+    btpnz [t0], .opNextPnameCheckPrototypeLoop
+.opNextPnameTarget:
+    dispatchBranch(24[PC])
+
+.opNextPnameEnd:
+    dispatch(7)
+
+.opNextPnameSlow:
+    callSlowPath(_llint_slow_path_next_pname) # This either keeps the PC where it was (causing us to loop) or sets it to target.
+    dispatch(0)
+
+
+_llint_op_push_scope:
+    traceExecution()
+    callSlowPath(_llint_slow_path_push_scope)
+    dispatch(2)
+
+
+_llint_op_pop_scope:
+    traceExecution()
+    callSlowPath(_llint_slow_path_pop_scope)
+    dispatch(1)
+
+
+_llint_op_push_new_scope:
+    traceExecution()
+    callSlowPath(_llint_slow_path_push_new_scope)
+    dispatch(4)
+
+
+_llint_op_catch:
+    # This is where we end up from the JIT's throw trampoline (because the
+    # machine code return address will be set to _llint_op_catch), and from
+    # the interpreter's throw trampoline (see _llint_throw_trampoline).
+    # The JIT throwing protocol calls for the cfr to be in t0. The throwing
+    # code must have known that we were throwing to the interpreter, and have
+    # set JSGlobalData::targetInterpreterPCForThrow.
+    move t0, cfr
+    loadp JITStackFrame::globalData[sp], t3
+    loadi JSGlobalData::targetInterpreterPCForThrow[t3], PC
+    loadi JSGlobalData::exception + PayloadOffset[t3], t0
+    loadi JSGlobalData::exception + TagOffset[t3], t1
+    storei 0, JSGlobalData::exception + PayloadOffset[t3]
+    storei EmptyValueTag, JSGlobalData::exception + TagOffset[t3]       
+    loadi 4[PC], t2
+    storei t0, PayloadOffset[cfr, t2, 8]
+    storei t1, TagOffset[cfr, t2, 8]
+    traceExecution()  # This needs to be here because we don't want to clobber t0, t1, t2, t3 above.
+    dispatch(2)
+
+
+_llint_op_throw:
+    traceExecution()
+    callSlowPath(_llint_slow_path_throw)
+    dispatch(2)
+
+
+_llint_op_throw_reference_error:
+    traceExecution()
+    callSlowPath(_llint_slow_path_throw_reference_error)
+    dispatch(2)
+
+
+_llint_op_jsr:
+    traceExecution()
+    loadi 4[PC], t0
+    addi 3 * 4, PC, t1
+    storei t1, [cfr, t0, 8]
+    dispatchBranch(8[PC])
+
+
+_llint_op_sret:
+    traceExecution()
+    loadi 4[PC], t0
+    loadp [cfr, t0, 8], PC
+    dispatch(0)
+
+
+_llint_op_debug:
+    traceExecution()
+    callSlowPath(_llint_slow_path_debug)
+    dispatch(4)
+
+
+_llint_op_profile_will_call:
+    traceExecution()
+    loadp JITStackFrame::enabledProfilerReference[sp], t0
+    btpz [t0], .opProfileWillCallDone
+    callSlowPath(_llint_slow_path_profile_will_call)
+.opProfileWillCallDone:
+    dispatch(2)
+
+
+_llint_op_profile_did_call:
+    traceExecution()
+    loadp JITStackFrame::enabledProfilerReference[sp], t0
+    btpz [t0], .opProfileDidCallDone
+    callSlowPath(_llint_slow_path_profile_did_call)
+.opProfileDidCallDone:
+    dispatch(2)
+
+
+_llint_op_end:
+    traceExecution()
+    checkSwitchToJITForEpilogue()
+    loadi 4[PC], t0
+    loadi TagOffset[cfr, t0, 8], t1
+    loadi PayloadOffset[cfr, t0, 8], t0
+    doReturn()
+
+
+_llint_throw_from_slow_path_trampoline:
+    # We come here when throwing from the interpreter (i.e. throwing from
+    # LLIntSlowPaths), since the throw target is not necessarily interpreted code.
+    # This essentially emulates the JIT's throwing protocol.
+    loadp JITStackFrame::globalData[sp], t1
+    loadp JSGlobalData::callFrameForThrow[t1], t0
+    jmp JSGlobalData::targetMachinePCForThrow[t1]
+
+
+_llint_throw_during_call_trampoline:
+    preserveReturnAddressAfterCall(t2)
+    loadp JITStackFrame::globalData[sp], t1
+    loadp JSGlobalData::callFrameForThrow[t1], t0
+    jmp JSGlobalData::targetMachinePCForThrow[t1]
+
+
+# Lastly, make sure that we can link even though we don't support all opcodes.
+# These opcodes should never arise when using LLInt or either JIT. We assert
+# as much.
+
+macro notSupported()
+    if ASSERT_ENABLED
+        crash()
+    else
+        # We should use whatever the smallest possible instruction is, just to
+        # ensure that there is a gap between instruction labels. If multiple
+        # smallest instructions exist, we should pick the one that is most
+        # likely to result in execution being halted. Currently that is the break
+        # instruction on all architectures we're interested in. (Break is int3
+        # on Intel, which is 1 byte, and bkpt on ARMv7, which is 2 bytes.)
+        break
+    end
+end
+
+_llint_op_get_array_length:
+    notSupported()
+
+_llint_op_get_by_id_chain:
+    notSupported()
+
+_llint_op_get_by_id_custom_chain:
+    notSupported()
+
+_llint_op_get_by_id_custom_proto:
+    notSupported()
+
+_llint_op_get_by_id_custom_self:
+    notSupported()
+
+_llint_op_get_by_id_generic:
+    notSupported()
+
+_llint_op_get_by_id_getter_chain:
+    notSupported()
+
+_llint_op_get_by_id_getter_proto:
+    notSupported()
+
+_llint_op_get_by_id_getter_self:
+    notSupported()
+
+_llint_op_get_by_id_proto:
+    notSupported()
+
+_llint_op_get_by_id_self:
+    notSupported()
+
+_llint_op_get_string_length:
+    notSupported()
+
+_llint_op_put_by_id_generic:
+    notSupported()
+
+_llint_op_put_by_id_replace:
+    notSupported()
+
+_llint_op_put_by_id_transition:
+    notSupported()
+
+
+# Indicate the end of LLInt.
+_llint_end:
+    crash()
+
diff --git a/Source/JavaScriptCore/llint/LowLevelInterpreter.cpp b/Source/JavaScriptCore/llint/LowLevelInterpreter.cpp
new file mode 100644 (file)
index 0000000..b95a500
--- /dev/null
@@ -0,0 +1,38 @@
+/*
+ * Copyright (C) 2012 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#include "config.h"
+#include "LowLevelInterpreter.h"
+
+#if ENABLE(LLINT)
+
+#include "LLIntOfflineAsmConfig.h"
+#include <wtf/InlineASM.h>
+
+// This is a file generated by offlineasm, which contains all of the assembly code
+// for the interpreter, as compiled from LowLevelInterpreter.asm.
+#include "LLIntAssembly.h"
+
+#endif // ENABLE(LLINT)
diff --git a/Source/JavaScriptCore/llint/LowLevelInterpreter.h b/Source/JavaScriptCore/llint/LowLevelInterpreter.h
new file mode 100644 (file)
index 0000000..e5a54a4
--- /dev/null
@@ -0,0 +1,53 @@
+/*
+ * Copyright (C) 2012 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#ifndef LowLevelInterpreter_h
+#define LowLevelInterpreter_h
+
+#include <wtf/Platform.h>
+
+#if ENABLE(LLINT)
+
+#include "Opcode.h"
+
+#define LLINT_INSTRUCTION_DECL(opcode, length) extern "C" void llint_##opcode();
+    FOR_EACH_OPCODE_ID(LLINT_INSTRUCTION_DECL);
+#undef LLINT_INSTRUCTION_DECL
+
+extern "C" void llint_begin();
+extern "C" void llint_end();
+extern "C" void llint_program_prologue();
+extern "C" void llint_eval_prologue();
+extern "C" void llint_function_for_call_prologue();
+extern "C" void llint_function_for_construct_prologue();
+extern "C" void llint_function_for_call_arity_check();
+extern "C" void llint_function_for_construct_arity_check();
+extern "C" void llint_generic_return_point();
+extern "C" void llint_throw_from_slow_path_trampoline();
+extern "C" void llint_throw_during_call_trampoline();
+
+#endif // ENABLE(LLINT)
+
+#endif // LowLevelInterpreter_h
diff --git a/Source/JavaScriptCore/offlineasm/armv7.rb b/Source/JavaScriptCore/offlineasm/armv7.rb
new file mode 100644 (file)
index 0000000..eb8df68
--- /dev/null
@@ -0,0 +1,1032 @@
+# Copyright (C) 2011 Apple Inc. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+# 1. Redistributions of source code must retain the above copyright
+#    notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+#    notice, this list of conditions and the following disclaimer in the
+#    documentation and/or other materials provided with the distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+# BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+# THE POSSIBILITY OF SUCH DAMAGE.
+
+require "ast"
+require "opt"
+
+class Node
+    def armV7Single
+        doubleOperand = armV7Operand
+        raise "Bogus register name #{doubleOperand}" unless doubleOperand =~ /^d/
+        "s" + ($~.post_match.to_i * 2).to_s
+    end
+end
+
+class SpecialRegister < NoChildren
+    def initialize(name)
+        @name = name
+    end
+    
+    def armV7Operand
+        @name
+    end
+    
+    def address?
+        false
+    end
+    
+    def label?
+        false
+    end
+    
+    def immediate?
+        false
+    end
+    
+    def register?
+        true
+    end
+end
+
+ARMv7_EXTRA_GPRS = [SpecialRegister.new("r9"), SpecialRegister.new("r8"), SpecialRegister.new("r3")]
+ARMv7_EXTRA_FPRS = [SpecialRegister.new("d7")]
+ARMv7_SCRATCH_FPR = SpecialRegister.new("d8")
+
+def armV7MoveImmediate(value, register)
+    # Currently we only handle the simple cases, and fall back to mov/movt for the complex ones.
+    if value >= 0 && value < 256
+        $asm.puts "movw #{register.armV7Operand}, \##{value}"
+    elsif (~value) >= 0 && (~value) < 256
+        $asm.puts "mvn #{register.armV7Operand}, \##{~value}"
+    else
+        $asm.puts "movw #{register.armV7Operand}, \##{value & 0xffff}"
+        if (value & 0xffff0000) != 0
+            $asm.puts "movt #{register.armV7Operand}, \##{value >> 16}"
+        end
+    end
+end
+
+class RegisterID
+    def armV7Operand
+        case name
+        when "t0", "a0", "r0"
+            "r0"
+        when "t1", "a1", "r1"
+            "r1"
+        when "t2", "a2"
+            "r2"
+        when "a3"
+            "r3"
+        when "t3"
+            "r4"
+        when "t4"
+            "r7"
+        when "cfr"
+            "r5"
+        when "lr"
+            "lr"
+        when "sp"
+            "sp"
+        else
+            raise "Bad register #{name} for ARMv7 at #{codeOriginString}"
+        end
+    end
+end
+
+class FPRegisterID
+    def armV7Operand
+        case name
+        when "ft0", "fr"
+            "d0"
+        when "ft1"
+            "d1"
+        when "ft2"
+            "d2"
+        when "ft3"
+            "d3"
+        when "ft4"
+            "d4"
+        when "ft5"
+            "d5"
+        else
+            raise "Bad register #{name} for ARMv7 at #{codeOriginString}"
+        end
+    end
+end
+
+class Immediate
+    def armV7Operand
+        raise "Invalid immediate #{value} at #{codeOriginString}" if value < 0 or value > 255
+        "\##{value}"
+    end
+end
+
+class Address
+    def armV7Operand
+        raise "Bad offset at #{codeOriginString}" if offset.value < -0xff or offset.value > 0xfff
+        "[#{base.armV7Operand}, \##{offset.value}]"
+    end
+end
+
+class BaseIndex
+    def armV7Operand
+        raise "Bad offset at #{codeOriginString}" if offset.value != 0
+        "[#{base.armV7Operand}, #{index.armV7Operand}, lsl \##{scaleShift}]"
+    end
+end
+
+class AbsoluteAddress
+    def armV7Operand
+        raise "Unconverted absolute address at #{codeOriginString}"
+    end
+end
+
+#
+# Lowering of branch ops. For example:
+#
+# baddiz foo, bar, baz
+#
+# will become:
+#
+# addi foo, bar
+# bz baz
+#
+
+def armV7LowerBranchOps(list)
+    newList = []
+    list.each {
+        | node |
+        if node.is_a? Instruction
+            case node.opcode
+            when /^b(addi|subi|ori|addp)/
+                op = $1
+                branch = "b" + $~.post_match
+                
+                case op
+                when "addi", "addp"
+                    op = "addis"
+                when "subi"
+                    op = "subis"
+                when "ori"
+                    op = "oris"
+                end
+                
+                newList << Instruction.new(node.codeOrigin, op, node.operands[0..-2])
+                newList << Instruction.new(node.codeOrigin, branch, [node.operands[-1]])
+            when "bmulio"
+                tmp1 = Tmp.new(node.codeOrigin, :gpr)
+                tmp2 = Tmp.new(node.codeOrigin, :gpr)
+                newList << Instruction.new(node.codeOrigin, "smulli", [node.operands[0], node.operands[1], node.operands[1], tmp1])
+                newList << Instruction.new(node.codeOrigin, "rshifti", [node.operands[-2], Immediate.new(node.codeOrigin, 31), tmp2])
+                newList << Instruction.new(node.codeOrigin, "bineq", [tmp1, tmp2, node.operands[-1]])
+            when /^bmuli/
+                condition = $~.post_match
+                newList << Instruction.new(node.codeOrigin, "muli", node.operands[0..-2])
+                newList << Instruction.new(node.codeOrigin, "bti" + condition, [node.operands[-2], node.operands[-1]])
+            else
+                newList << node
+            end
+        else
+            newList << node
+        end
+    }
+    newList
+end
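As a hypothetical standalone sketch (plain Ruby, not the offlineasm `Instruction`/`Tmp` classes used above), the fused-branch split performed by `armV7LowerBranchOps` looks like this; the opcode table and list-of-pairs representation are illustrative only:

```ruby
# Illustrative only: split a fused branch op such as "baddiz t0, t1, .done"
# into a flag-setting arithmetic op ("addis t0, t1") followed by a
# conditional branch ("bz .done"), mirroring armV7LowerBranchOps above.
def lower_fused_branch(opcode, operands)
  if opcode =~ /^b(addi|subi|ori)/
    op = { "addi" => "addis", "subi" => "subis", "ori" => "oris" }[$1]
    branch = "b" + $~.post_match  # e.g. condition suffix "z" becomes "bz"
    [[op, operands[0..-2]], [branch, [operands[-1]]]]
  else
    [[opcode, operands]]  # everything else passes through unchanged
  end
end

p lower_fused_branch("baddiz", ["t0", "t1", ".done"])
# => [["addis", ["t0", "t1"]], ["bz", [".done"]]]
```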
+
+#
+# Lowering of shift ops. For example:
+#
+# lshifti foo, bar
+#
+# will become:
+#
+# andi foo, 31, tmp
+# lshifti tmp, bar
+#
+
+def armV7SanitizeShift(operand, list)
+    return operand if operand.immediate?
+    
+    tmp = Tmp.new(operand.codeOrigin, :gpr)
+    list << Instruction.new(operand.codeOrigin, "andi", [operand, Immediate.new(operand.codeOrigin, 31), tmp])
+    tmp
+end
+
+def armV7LowerShiftOps(list)
+    newList = []
+    list.each {
+        | node |
+        if node.is_a? Instruction
+            case node.opcode
+            when "lshifti", "rshifti", "urshifti"
+                if node.operands.size == 2
+                    newList << Instruction.new(node.codeOrigin, node.opcode, [armV7SanitizeShift(node.operands[0], newList), node.operands[1]])
+                else
+                    raise "Wrong number of operands for shift at #{node.codeOriginString}" unless node.operands.size == 3
+                    newList << Instruction.new(node.codeOrigin, node.opcode, [node.operands[0], armV7SanitizeShift(node.operands[1], newList), node.operands[2]])
+                end
+            else
+                newList << node
+            end
+        else
+            newList << node
+        end
+    }
+    newList
+end
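The masking above exists because an ARM register-specified shift uses the low byte of the count, while JavaScript (and the x86 behavior the other backends inherit) takes the count modulo 32. A small standalone illustration of the intended semantics, assuming 32-bit wrap-around:

```ruby
# Illustrative semantics only: a JS-style 32-bit left shift takes the shift
# count modulo 32 -- the "andi 31" inserted by armV7SanitizeShift forces the
# ARM register shift to agree with this.
def js_left_shift(value, amount)
  (value << (amount & 31)) & 0xffffffff
end

p js_left_shift(1, 33)  # => 2, the same as a shift by 1
```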
+
+#
+# Lowering of malformed addresses. For example:
+#
+# loadp 10000[foo], bar
+#
+# will become:
+#
+# move 10000, tmp
+# addp foo, tmp
+# loadp 0[tmp], bar
+#
+
+class Node
+    def armV7LowerMalformedAddressesRecurse(list)
+        mapChildren {
+            | node |
+            node.armV7LowerMalformedAddressesRecurse(list)
+        }
+    end
+end
+
+class Address
+    def armV7LowerMalformedAddressesRecurse(list)
+        if offset.value < -0xff or offset.value > 0xfff
+            tmp = Tmp.new(codeOrigin, :gpr)
+            list << Instruction.new(codeOrigin, "move", [offset, tmp])
+            list << Instruction.new(codeOrigin, "addp", [base, tmp])
+            Address.new(codeOrigin, tmp, Immediate.new(codeOrigin, 0))
+        else
+            self
+        end
+    end
+end
+
+class BaseIndex
+    def armV7LowerMalformedAddressesRecurse(list)
+        if offset.value != 0
+            tmp = Tmp.new(codeOrigin, :gpr)
+            list << Instruction.new(codeOrigin, "move", [offset, tmp])
+            list << Instruction.new(codeOrigin, "addp", [base, tmp])
+            BaseIndex.new(codeOrigin, tmp, index, scale, Immediate.new(codeOrigin, 0))
+        else
+            self
+        end
+    end
+end
+
+class AbsoluteAddress
+    def armV7LowerMalformedAddressesRecurse(list)
+        tmp = Tmp.new(codeOrigin, :gpr)
+        list << Instruction.new(codeOrigin, "move", [address, tmp])
+        Address.new(codeOrigin, tmp, Immediate.new(codeOrigin, 0))
+    end
+end
+
+def armV7LowerMalformedAddresses(list)
+    newList = []
+    list.each {
+        | node |
+        newList << node.armV7LowerMalformedAddressesRecurse(newList)
+    }
+    newList
+end
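The -0xff..0xfff window in the `Address` check above corresponds to the reach of Thumb-2 ldr/str immediate offsets (negative offsets only get the shorter range). A trivial standalone predicate with the same bounds:

```ruby
# Same bounds as Address#armV7LowerMalformedAddressesRecurse above: offsets
# outside roughly [-255, 4095] cannot be encoded directly in a Thumb-2
# ldr/str, so the address is materialized into a temporary register first.
def needs_address_lowering?(offset)
  offset < -0xff || offset > 0xfff
end

p needs_address_lowering?(10000)  # => true
```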
+
+#
+# Lowering of malformed addresses in double loads and stores. For example:
+#
+# loadd [foo, bar, 8], baz
+#
+# becomes:
+#
+# leap [foo, bar, 8], tmp
+# loadd [tmp], baz
+#
+
+class Node
+    def armV7DoubleAddress(list)
+        self
+    end
+end
+
+class BaseIndex
+    def armV7DoubleAddress(list)
+        tmp = Tmp.new(codeOrigin, :gpr)
+        list << Instruction.new(codeOrigin, "leap", [self, tmp])
+        Address.new(codeOrigin, tmp, Immediate.new(codeOrigin, 0))
+    end
+end
+
+def armV7LowerMalformedAddressesDouble(list)
+    newList = []
+    list.each {
+        | node |
+        if node.is_a? Instruction
+            case node.opcode
+            when "loadd"
+                newList << Instruction.new(node.codeOrigin, "loadd", [node.operands[0].armV7DoubleAddress(newList), node.operands[1]])
+            when "stored"
+                newList << Instruction.new(node.codeOrigin, "stored", [node.operands[0], node.operands[1].armV7DoubleAddress(newList)])
+            else
+                newList << node
+            end
+        else
+            newList << node
+        end
+    }
+    newList
+end
+
+#
+# Lowering of misplaced immediates. For example:
+#
+# storei 0, [foo]
+#
+# will become:
+#
+# move 0, tmp
+# storei tmp, [foo]
+#
+
+def armV7LowerMisplacedImmediates(list)
+    newList = []
+    list.each {
+        | node |
+        if node.is_a? Instruction
+            case node.opcode
+            when "storei", "storep"
+                operands = node.operands
+                newOperands = []
+                operands.each {
+                    | operand |
+                    if operand.is_a? Immediate
+                        tmp = Tmp.new(operand.codeOrigin, :gpr)
+                        newList << Instruction.new(operand.codeOrigin, "move", [operand, tmp])
+                        newOperands << tmp
+                    else
+                        newOperands << operand
+                    end
+                }
+                newList << Instruction.new(node.codeOrigin, node.opcode, newOperands)
+            else
+                newList << node
+            end
+        else
+            newList << node
+        end
+    }
+    newList
+end
+
+#
+# Lowering of malformed immediates except when used in a "move" instruction.
+# For example:
+#
+# addp 642641, foo
+#
+# will become:
+#
+# move 642641, tmp
+# addp tmp, foo
+#
+
+class Node
+    def armV7LowerMalformedImmediatesRecurse(list)
+        mapChildren {
+            | node |
+            node.armV7LowerMalformedImmediatesRecurse(list)
+        }
+    end
+end
+
+class Address
+    def armV7LowerMalformedImmediatesRecurse(list)
+        self
+    end
+end
+
+class BaseIndex
+    def armV7LowerMalformedImmediatesRecurse(list)
+        self
+    end
+end
+
+class AbsoluteAddress
+    def armV7LowerMalformedImmediatesRecurse(list)
+        self
+    end
+end
+
+class Immediate
+    def armV7LowerMalformedImmediatesRecurse(list)
+        if value < 0 or value > 255
+            tmp = Tmp.new(codeOrigin, :gpr)
+            list << Instruction.new(codeOrigin, "move", [self, tmp])
+            tmp
+        else
+            self
+        end
+    end
+end
+
+def armV7LowerMalformedImmediates(list)
+    newList = []
+    list.each {
+        | node |
+        if node.is_a? Instruction
+            case node.opcode
+            when "move"
+                newList << node
+            when "addi", "addp", "addis", "subi", "subp", "subis"
+                if node.operands[0].is_a? Immediate and
+                        node.operands[0].value < 0 and
+                        node.operands[0].value >= -255 and
+                        node.operands.size == 2
+                    if node.opcode =~ /add/
+                        newOpcode = "sub" + node.opcode[-1..-1]
+                    else
+                        newOpcode = "add" + node.opcode[-1..-1]
+                    end
+                    newList << Instruction.new(node.codeOrigin, newOpcode,
+                                               [Immediate.new(node.codeOrigin, -node.operands[0].value)] + node.operands[1..-1])
+                else
+                    newList << node.armV7LowerMalformedImmediatesRecurse(newList)
+                end
+            when "muli"
+                if node.operands[0].is_a? Immediate
+                    tmp = Tmp.new(node.codeOrigin, :gpr)
+                    newList << Instruction.new(node.codeOrigin, "move", [node.operands[0], tmp])
+                    newList << Instruction.new(node.codeOrigin, "muli", [tmp] + node.operands[1..-1])
+                else
+                    newList << node.armV7LowerMalformedImmediatesRecurse(newList)
+                end
+            else
+                newList << node.armV7LowerMalformedImmediatesRecurse(newList)
+            end
+        else
+            newList << node
+        end
+    }
+    newList
+end
+
+#
+# Lowering of misplaced addresses. For example:
+#
+# addi foo, [bar]
+#
+# will become:
+#
+# loadi [bar], tmp
+# addi foo, tmp
+# storei tmp, [bar]
+#
+# Another example:
+#
+# addi [foo], bar
+#
+# will become:
+#
+# loadi [foo], tmp
+# addi tmp, bar
+#
+
+def armV7AsRegister(preList, postList, operand, suffix, needStore)
+    return operand unless operand.address?
+    
+    tmp = Tmp.new(operand.codeOrigin, if suffix == "d" then :fpr else :gpr end)
+    preList << Instruction.new(operand.codeOrigin, "load" + suffix, [operand, tmp])
+    if needStore
+        postList << Instruction.new(operand.codeOrigin, "store" + suffix, [tmp, operand])
+    end
+    tmp
+end
+
+def armV7AsRegisters(preList, postList, operands, suffix)
+    newOperands = []
+    operands.each_with_index {
+        | operand, index |
+        newOperands << armV7AsRegister(preList, postList, operand, suffix, index == operands.size - 1)
+    }
+    newOperands
+end
+
+def armV7LowerMisplacedAddresses(list)
+    newList = []
+    list.each {
+        | node |
+        if node.is_a? Instruction
+            postInstructions = []
+            case node.opcode
+            when "addi", "addp", "addis", "andi", "andp", "lshifti", "muli", "negi", "noti", "ori", "oris",
+                "orp", "rshifti", "urshifti", "subi", "subp", "subis", "xori", "xorp", /^bi/, /^bp/, /^bti/,
+                /^btp/, /^ci/, /^cp/, /^ti/
+                newList << Instruction.new(node.codeOrigin,
+                                           node.opcode,
+                                           armV7AsRegisters(newList, postInstructions, node.operands, "i"))
+            when "bbeq", "bbneq", "bba", "bbaeq", "bbb", "bbbeq", "btbo", "btbz", "btbnz", "tbz", "tbnz",
+                "tbo"
+                newList << Instruction.new(node.codeOrigin,
+                                           node.opcode,
+                                           armV7AsRegisters(newList, postInstructions, node.operands, "b"))
+            when "bbgt", "bbgteq", "bblt", "bblteq", "btbs", "tbs"
+                newList << Instruction.new(node.codeOrigin,
+                                           node.opcode,
+                                           armV7AsRegisters(newList, postInstructions, node.operands, "bs"))
+            when "addd", "divd", "subd", "muld", "sqrtd", /^bd/
+                newList << Instruction.new(node.codeOrigin,
+                                           node.opcode,
+                                           armV7AsRegisters(newList, postInstructions, node.operands, "d"))
+            when "jmp", "call"
+                newList << Instruction.new(node.codeOrigin,
+                                           node.opcode,
+                                           [armV7AsRegister(newList, postInstructions, node.operands[0], "p", false)])
+            else
+                newList << node
+            end
+            newList += postInstructions
+        else
+            newList << node
+        end
+    }
+    newList
+end
+
+#
+# Lowering of register reuse in compare instructions. For example:
+#
+# cieq t0, t1, t0
+#
+# will become:
+#
+# move t0, tmp
+# cieq tmp, t1, t0
+#
+
+def armV7LowerRegisterReuse(list)
+    newList = []
+    list.each {
+        | node |
+        if node.is_a? Instruction
+            case node.opcode
+            when "cieq", "cineq", "cia", "ciaeq", "cib", "cibeq", "cigt", "cigteq", "cilt", "cilteq",
+                "cpeq", "cpneq", "cpa", "cpaeq", "cpb", "cpbeq", "cpgt", "cpgteq", "cplt", "cplteq",
+                "tio", "tis", "tiz", "tinz", "tbo", "tbs", "tbz", "tbnz"
+                if node.operands.size == 2
+                    if node.operands[0] == node.operands[1]
+                        tmp = Tmp.new(node.codeOrigin, :gpr)
+                        newList << Instruction.new(node.codeOrigin, "move", [node.operands[0], tmp])
+                        newList << Instruction.new(node.codeOrigin, node.opcode, [tmp, node.operands[1]])
+                    else
+                        newList << node
+                    end
+                else
+                    raise "Wrong number of arguments at #{node.codeOriginString}" unless node.operands.size == 3
+                    if node.operands[0] == node.operands[2]
+                        tmp = Tmp.new(node.codeOrigin, :gpr)
+                        newList << Instruction.new(node.codeOrigin, "move", [node.operands[0], tmp])
+                        newList << Instruction.new(node.codeOrigin, node.opcode, [tmp, node.operands[1], node.operands[2]])
+                    elsif node.operands[1] == node.operands[2]
+                        tmp = Tmp.new(node.codeOrigin, :gpr)
+                        newList << Instruction.new(node.codeOrigin, "move", [node.operands[1], tmp])
+                        newList << Instruction.new(node.codeOrigin, node.opcode, [node.operands[0], tmp, node.operands[2]])
+                    else
+                        newList << node
+                    end
+                end
+            else
+                newList << node
+            end
+        else
+            newList << node
+        end
+    }
+    newList
+end
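The hazard handled above is that the emitted sequence begins with "movs dest, #0", so a source register that aliases the destination would be clobbered before the comparison runs. A hypothetical sketch of just the alias detection and renaming (the real lowering inserts an explicit move into a fresh Tmp):

```ruby
# Hypothetical sketch: rename any source operand that aliases the
# destination (the last operand), since "movs dest, #0" is emitted before
# the compare and would otherwise clobber that source.
def fix_register_reuse(operands)
  srcs, dest = operands[0..-2], operands[-1]
  srcs.map { |s| s == dest ? "tmp" : s } + [dest]
end

p fix_register_reuse(["t0", "t1", "t0"])  # => ["tmp", "t1", "t0"]
```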
+
+#
+# Lea support.
+#
+
+class Address
+    def armV7EmitLea(destination)
+        if destination == base
+            $asm.puts "adds #{destination.armV7Operand}, \##{offset.value}"
+        else
+            $asm.puts "adds #{destination.armV7Operand}, #{base.armV7Operand}, \##{offset.value}"
+        end
+    end
+end
+
+class BaseIndex
+    def armV7EmitLea(destination)
+        raise "Malformed BaseIndex, offset should be zero at #{codeOriginString}" unless offset.value == 0
+        $asm.puts "add.w #{destination.armV7Operand}, #{base.armV7Operand}, #{index.armV7Operand}, lsl \##{scaleShift}"
+    end
+end
+
+# FIXME: we could support AbsoluteAddress for lea, but we don't.
+
+#
+# Actual lowering code follows.
+#
+
+class Sequence
+    def lowerARMv7
+        myList = @list
+        
+        # Verify that we will only see instructions and labels.
+        myList.each {
+            | node |
+            unless node.is_a? Instruction or
+                    node.is_a? Label or
+                    node.is_a? LocalLabel or
+                    node.is_a? Skip
+                raise "Unexpected #{node.inspect} at #{node.codeOrigin}" 
+            end
+        }
+        
+        myList = armV7LowerBranchOps(myList)
+        myList = armV7LowerShiftOps(myList)
+        myList = armV7LowerMalformedAddresses(myList)
+        myList = armV7LowerMalformedAddressesDouble(myList)
+        myList = armV7LowerMisplacedImmediates(myList)
+        myList = armV7LowerMalformedImmediates(myList)
+        myList = armV7LowerMisplacedAddresses(myList)
+        myList = armV7LowerRegisterReuse(myList)
+        myList = assignRegistersToTemporaries(myList, :gpr, ARMv7_EXTRA_GPRS)
+        myList = assignRegistersToTemporaries(myList, :fpr, ARMv7_EXTRA_FPRS)
+        myList.each {
+            | node |
+            node.lower("ARMv7")
+        }
+    end
+end
+
+def armV7Operands(operands)
+    operands.map{|v| v.armV7Operand}.join(", ")
+end
+
+def armV7FlippedOperands(operands)
+    armV7Operands([operands[-1]] + operands[0..-2])
+end
+
+def emitArmV7Compact(opcode2, opcode3, operands)
+    if operands.size == 3
+        $asm.puts "#{opcode3} #{armV7FlippedOperands(operands)}"
+    else
+        raise unless operands.size == 2
+        raise unless operands[1].is_a? RegisterID
+        if operands[0].is_a? Immediate
+            $asm.puts "#{opcode3} #{operands[1].armV7Operand}, #{operands[1].armV7Operand}, #{operands[0].armV7Operand}"
+        else
+            $asm.puts "#{opcode2} #{armV7FlippedOperands(operands)}"
+        end
+    end
+end
+
+def emitArmV7(opcode, operands)
+    if operands.size == 3
+        $asm.puts "#{opcode} #{armV7FlippedOperands(operands)}"
+    else
+        raise unless operands.size == 2
+        $asm.puts "#{opcode} #{operands[1].armV7Operand}, #{operands[1].armV7Operand}, #{operands[0].armV7Operand}"
+    end
+end
+
+def emitArmV7DoubleBranch(branchOpcode, operands)
+    $asm.puts "vcmpe.f64 #{armV7Operands(operands[0..1])}"
+    $asm.puts "vmrs apsr_nzcv, fpscr"
+    $asm.puts "#{branchOpcode} #{operands[2].asmLabel}"
+end
+
+def emitArmV7Test(operands)
+    value = operands[0]
+    case operands.size
+    when 2
+        mask = Immediate.new(nil, -1)
+    when 3
+        mask = operands[1]
+    else
+        raise "Expected 2 or 3 operands but got #{operands.size} at #{value.codeOriginString}"
+    end
+    
+    if mask.is_a? Immediate and mask.value == -1
+        $asm.puts "tst #{value.armV7Operand}, #{value.armV7Operand}"
+    elsif mask.is_a? Immediate
+        $asm.puts "tst.w #{value.armV7Operand}, #{mask.armV7Operand}"
+    else
+        $asm.puts "tst #{value.armV7Operand}, #{mask.armV7Operand}"
+    end
+end
+
+def emitArmV7Compare(operands, code)
+    $asm.puts "movs #{operands[2].armV7Operand}, \#0"
+    $asm.puts "cmp #{operands[0].armV7Operand}, #{operands[1].armV7Operand}"
+    $asm.puts "it #{code}"
+    $asm.puts "mov#{code} #{operands[2].armV7Operand}, \#1"
+end
+
+def emitArmV7TestSet(operands, code)
+    $asm.puts "movs #{operands[-1].armV7Operand}, \#0"
+    emitArmV7Test(operands)
+    $asm.puts "it #{code}"
+    $asm.puts "mov#{code} #{operands[-1].armV7Operand}, \#1"
+end
+
+class Instruction
+    def lowerARMv7
+        $asm.comment codeOriginString
+        case opcode
+        when "addi", "addp", "addis"
+            if opcode == "addis"
+                suffix = "s"
+            else
+                suffix = ""
+            end
+            if operands.size == 3 and operands[0].is_a? Immediate
+                raise unless operands[1].is_a? RegisterID
+                raise unless operands[2].is_a? RegisterID
+                if operands[0].value == 0 and suffix.empty?
+                    unless operands[1] == operands[2]
+                        $asm.puts "mov #{operands[2].armV7Operand}, #{operands[1].armV7Operand}"
+                    end
+                else
+                    $asm.puts "adds #{operands[2].armV7Operand}, #{operands[1].armV7Operand}, #{operands[0].armV7Operand}"
+                end
+            elsif operands.size == 3 and operands[0].is_a? RegisterID
+                raise unless operands[1].is_a? RegisterID
+                raise unless operands[2].is_a? RegisterID
+                $asm.puts "adds #{armV7FlippedOperands(operands)}"
+            else
+                if operands[0].is_a? Immediate
+                    unless Immediate.new(nil, 0) == operands[0]
+                        $asm.puts "adds #{armV7FlippedOperands(operands)}"
+                    end
+                else
+                    $asm.puts "add#{suffix} #{armV7FlippedOperands(operands)}"
+                end
+            end
+        when "andi", "andp"
+            emitArmV7Compact("ands", "and", operands)
+        when "ori", "orp"
+            emitArmV7Compact("orrs", "orr", operands)
+        when "oris"
+            emitArmV7Compact("orrs", "orrs", operands)
+        when "xori", "xorp"
+            emitArmV7Compact("eors", "eor", operands)
+        when "lshifti"
+            emitArmV7Compact("lsls", "lsls", operands)
+        when "rshifti"
+            emitArmV7Compact("asrs", "asrs", operands)
+        when "urshifti"
+            emitArmV7Compact("lsrs", "lsrs", operands)
+        when "muli"
+            if operands.size == 2 or operands[0] == operands[2] or operands[1] == operands[2]
+                emitArmV7("muls", operands)
+            else
+                $asm.puts "mov #{operands[2].armV7Operand}, #{operands[0].armV7Operand}"
+                $asm.puts "muls #{operands[2].armV7Operand}, #{operands[2].armV7Operand}, #{operands[1].armV7Operand}"
+            end
+        when "subi", "subp", "subis"
+            emitArmV7Compact("subs", "subs", operands)
+        when "negi"
+            $asm.puts "rsbs #{operands[0].armV7Operand}, #{operands[0].armV7Operand}, \#0"
+        when "noti"
+            $asm.puts "mvns #{operands[0].armV7Operand}, #{operands[0].armV7Operand}"
+        when "loadi", "loadp"
+            $asm.puts "ldr #{armV7FlippedOperands(operands)}"
+        when "storei", "storep"
+            $asm.puts "str #{armV7Operands(operands)}"
+        when "loadb"
+            $asm.puts "ldrb #{armV7FlippedOperands(operands)}"
+        when "loadbs"
+            $asm.puts "ldrsb.w #{armV7FlippedOperands(operands)}"
+        when "storeb"
+            $asm.puts "strb #{armV7Operands(operands)}"
+        when "loadh"
+            $asm.puts "ldrh #{armV7FlippedOperands(operands)}"
+        when "loadhs"
+            $asm.puts "ldrsh.w #{armV7FlippedOperands(operands)}"
+        when "storeh"
+            $asm.puts "strh #{armV7Operands(operands)}"
+        when "loadd"
+            $asm.puts "vldr.64 #{armV7FlippedOperands(operands)}"
+        when "stored"
+            $asm.puts "vstr.64 #{armV7Operands(operands)}"
+        when "addd"
+            emitArmV7("vadd.f64", operands)
+        when "divd"
+            emitArmV7("vdiv.f64", operands)
+        when "subd"
+            emitArmV7("vsub.f64", operands)
+        when "muld"
+            emitArmV7("vmul.f64", operands)
+        when "sqrtd"
+            $asm.puts "vsqrt.f64 #{armV7FlippedOperands(operands)}"
+        when "ci2d"
+            $asm.puts "vmov #{operands[1].armV7Single}, #{operands[0].armV7Operand}"
+            $asm.puts "vcvt.f64.s32 #{operands[1].armV7Operand}, #{operands[1].armV7Single}"
+        when "bdeq"
+            emitArmV7DoubleBranch("beq", operands)
+        when "bdneq"
+            $asm.puts "vcmpe.f64 #{armV7Operands(operands[0..1])}"
+            $asm.puts "vmrs apsr_nzcv, fpscr"
+            isUnordered = LocalLabel.unique("bdneq")
+            $asm.puts "bvs #{LabelReference.new(codeOrigin, isUnordered).asmLabel}"
+            $asm.puts "bne #{operands[2].asmLabel}"
+            isUnordered.lower("ARMv7")
+        when "bdgt"
+            emitArmV7DoubleBranch("bgt", operands)
+        when "bdgteq"
+            emitArmV7DoubleBranch("bge", operands)
+        when "bdlt"
+            emitArmV7DoubleBranch("bmi", operands)
+        when "bdlteq"
+            emitArmV7DoubleBranch("bls", operands)
+        when "bdequn"
+            $asm.puts "vcmpe.f64 #{armV7Operands(operands[0..1])}"
+            $asm.puts "vmrs apsr_nzcv, fpscr"
+            $asm.puts "bvs #{operands[2].asmLabel}"
+            $asm.puts "beq #{operands[2].asmLabel}"
+        when "bdnequn"
+            emitArmV7DoubleBranch("bne", operands)
+        when "bdgtun"
+            emitArmV7DoubleBranch("bhi", operands)
+        when "bdgtequn"
+            emitArmV7DoubleBranch("bpl", operands)
+        when "bdltun"
+            emitArmV7DoubleBranch("blt", operands)
+        when "bdltequn"
+            emitArmV7DoubleBranch("ble", operands)
+        when "btd2i"
+            # FIXME: may be a good idea to just get rid of this instruction, since the interpreter
+            # currently does not use it.
+            raise "ARMv7 does not support this opcode yet, #{codeOrigin}"
+        when "td2i"
+            $asm.puts "vcvt.s32.f64 #{ARMv7_SCRATCH_FPR.armV7Single}, #{operands[0].armV7Operand}"
+            $asm.puts "vmov #{operands[1].armV7Operand}, #{ARMv7_SCRATCH_FPR.armV7Single}"
+        when "bcd2i"
+            $asm.puts "vcvt.s32.f64 #{ARMv7_SCRATCH_FPR.armV7Single}, #{operands[0].armV7Operand}"
+            $asm.puts "vmov #{operands[1].armV7Operand}, #{ARMv7_SCRATCH_FPR.armV7Single}"
+            $asm.puts "vcvt.f64.s32 #{ARMv7_SCRATCH_FPR.armV7Operand}, #{ARMv7_SCRATCH_FPR.armV7Single}"
+            emitArmV7DoubleBranch("bne", [ARMv7_SCRATCH_FPR, operands[0], operands[2]])
+            $asm.puts "tst #{operands[1].armV7Operand}, #{operands[1].armV7Operand}"
+            $asm.puts "beq #{operands[2].asmLabel}"
+        when "movdz"
+            # FIXME: either support this or remove it.
+            raise "ARMv7 does not support this opcode yet, #{codeOrigin}"
+        when "pop"
+            $asm.puts "pop { #{operands[0].armV7Operand} }"
+        when "push"
+            $asm.puts "push { #{operands[0].armV7Operand} }"
+        when "move", "sxi2p", "zxi2p"
+            if operands[0].is_a? Immediate
+                armV7MoveImmediate(operands[0].value, operands[1])
+            else
+                $asm.puts "mov #{armV7FlippedOperands(operands)}"
+            end
+        when "nop"
+            $asm.puts "nop"
+        when "bieq", "bpeq", "bbeq"
+            if Immediate.new(nil, 0) == operands[0]
+                $asm.puts "tst #{operands[1].armV7Operand}, #{operands[1].armV7Operand}"
+            elsif Immediate.new(nil, 0) == operands[1]
+                $asm.puts "tst #{operands[0].armV7Operand}, #{operands[0].armV7Operand}"
+            else
+                $asm.puts "cmp #{armV7Operands(operands[0..1])}"
+            end
+            $asm.puts "beq #{operands[2].asmLabel}"
+        when "bineq", "bpneq", "bbneq"
+            if Immediate.new(nil, 0) == operands[0]
+                $asm.puts "tst #{operands[1].armV7Operand}, #{operands[1].armV7Operand}"
+            elsif Immediate.new(nil, 0) == operands[1]
+                $asm.puts "tst #{operands[0].armV7Operand}, #{operands[0].armV7Operand}"
+            else
+                $asm.puts "cmp #{armV7Operands(operands[0..1])}"
+            end
+            $asm.puts "bne #{operands[2].asmLabel}"
+        when "bia", "bpa", "bba"
+            $asm.puts "cmp #{armV7Operands(operands[0..1])}"
+            $asm.puts "bhi #{operands[2].asmLabel}"
+        when "biaeq", "bpaeq", "bbaeq"
+            $asm.puts "cmp #{armV7Operands(operands[0..1])}"
+            $asm.puts "bhs #{operands[2].asmLabel}"
+        when "bib", "bpb", "bbb"
+            $asm.puts "cmp #{armV7Operands(operands[0..1])}"
+            $asm.puts "blo #{operands[2].asmLabel}"
+        when "bibeq", "bpbeq", "bbbeq"
+            $asm.puts "cmp #{armV7Operands(operands[0..1])}"
+            $asm.puts "bls #{operands[2].asmLabel}"
+        when "bigt", "bpgt", "bbgt"
+            $asm.puts "cmp #{armV7Operands(operands[0..1])}"
+            $asm.puts "bgt #{operands[2].asmLabel}"
+        when "bigteq", "bpgteq", "bbgteq"
+            $asm.puts "cmp #{armV7Operands(operands[0..1])}"
+            $asm.puts "bge #{operands[2].asmLabel}"
+        when "bilt", "bplt", "bblt"
+            $asm.puts "cmp #{armV7Operands(operands[0..1])}"
+            $asm.puts "blt #{operands[2].asmLabel}"
+        when "bilteq", "bplteq", "bblteq"
+            $asm.puts "cmp #{armV7Operands(operands[0..1])}"
+            $asm.puts "ble #{operands[2].asmLabel}"
+        when "btiz", "btpz", "btbz"
+            emitArmV7Test(operands)
+            $asm.puts "beq #{operands[-1].asmLabel}"
+        when "btinz", "btpnz", "btbnz"
+            emitArmV7Test(operands)
+            $asm.puts "bne #{operands[-1].asmLabel}"
+        when "btio", "btpo", "btbo"
+            emitArmV7Test(operands)
+            $asm.puts "bvs #{operands[-1].asmLabel}"
+        when "btis", "btps", "btbs"
+            emitArmV7Test(operands)
+            $asm.puts "bmi #{operands[-1].asmLabel}"
+        when "jmp"
+            if operands[0].label?
+                $asm.puts "b #{operands[0].asmLabel}"
+            else
+                $asm.puts "mov pc, #{operands[0].armV7Operand}"
+            end
+        when "call"
+            if operands[0].label?
+                $asm.puts "blx #{operands[0].asmLabel}"
+            else
+                $asm.puts "blx #{operands[0].armV7Operand}"
+            end
+        when "break"
+            $asm.puts "bkpt"
+        when "ret"
+            $asm.puts "bx lr"
+        when "cieq", "cpeq"
+            emitArmV7Compare(operands, "eq")
+        when "cineq", "cpneq"
+            emitArmV7Compare(operands, "ne")
+        when "cia", "cpa"
+            emitArmV7Compare(operands, "hi")
+        when "ciaeq", "cpaeq"
+            emitArmV7Compare(operands, "hs")
+        when "cib", "cpb"
+            emitArmV7Compare(operands, "lo")
+        when "cibeq", "cpbeq"
+            emitArmV7Compare(operands, "ls")
+        when "cigt", "cpgt"
+            emitArmV7Compare(operands, "gt")
+        when "cigteq", "cpgteq"
+            emitArmV7Compare(operands, "ge")
+        when "cilt", "cplt"
+            emitArmV7Compare(operands, "lt")
+        when "cilteq", "cplteq"
+            emitArmV7Compare(operands, "le")
+        when "tio", "tbo"
+            emitArmV7TestSet(operands, "vs")
+        when "tis", "tbs"
+            emitArmV7TestSet(operands, "mi")
+        when "tiz", "tbz"
+            emitArmV7TestSet(operands, "eq")
+        when "tinz", "tbnz"
+            emitArmV7TestSet(operands, "ne")
+        when "peek"
+            $asm.puts "ldr #{operands[1].armV7Operand}, [sp, \##{operands[0].value * 4}]"
+        when "poke"
+            $asm.puts "str #{operands[1].armV7Operand}, [sp, \##{operands[0].value * 4}]"
+        when "fii2d"
+            $asm.puts "vmov #{operands[2].armV7Operand}, #{operands[0].armV7Operand}, #{operands[1].armV7Operand}"
+        when "fd2ii"
+            $asm.puts "vmov #{operands[1].armV7Operand}, #{operands[2].armV7Operand}, #{operands[0].armV7Operand}"
+        when "bo"
+            $asm.puts "bvs #{operands[0].asmLabel}"
+        when "bs"
+            $asm.puts "bmi #{operands[0].asmLabel}"
+        when "bz"
+            $asm.puts "beq #{operands[0].asmLabel}"
+        when "bnz"
+            $asm.puts "bne #{operands[0].asmLabel}"
+        when "leai", "leap"
+            operands[0].armV7EmitLea(operands[1])
+        when "smulli"
+            raise "Wrong number of arguments to smull in #{self.inspect} at #{codeOriginString}" unless operands.length == 4
+            $asm.puts "smull #{operands[2].armV7Operand}, #{operands[3].armV7Operand}, #{operands[0].armV7Operand}, #{operands[1].armV7Operand}"
+        else
+            raise "Unhandled opcode #{opcode} at #{codeOriginString}"
+        end
+    end
+end
+
diff --git a/Source/JavaScriptCore/offlineasm/asm.rb b/Source/JavaScriptCore/offlineasm/asm.rb
new file mode 100644
index 0000000..a93a8c5
--- /dev/null
+++ b/Source/JavaScriptCore/offlineasm/asm.rb
@@ -0,0 +1,176 @@
+#!/usr/bin/env ruby
+
+# Copyright (C) 2011 Apple Inc. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+# 1. Redistributions of source code must retain the above copyright
+#    notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+#    notice, this list of conditions and the following disclaimer in the
+#    documentation and/or other materials provided with the distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+# BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+# THE POSSIBILITY OF SUCH DAMAGE.
+
+$: << File.dirname(__FILE__)
+
+require "backends"
+require "digest/sha1"
+require "offsets"
+require "parser"
+require "self_hash"
+require "settings"
+require "transform"
+
+class Assembler
+    def initialize(outp)
+        @outp = outp
+        @state = :cpp
+        @commentState = :none
+        @comment = nil
+    end
+    
+    def enterAsm
+        @outp.puts "asm ("
+        @state = :asm
+    end
+    
+    def leaveAsm
+        putsLastComment
+        @outp.puts ");"
+        @state = :cpp
+    end
+    
+    def inAsm
+        enterAsm
+        yield
+        leaveAsm
+    end
+    
+    def lastComment
+        if @comment
+            result = "// #{@comment}"
+        else
+            result = ""
+        end
+        @commentState = :none
+        @comment = nil
+        result
+    end
+    
+    def putsLastComment
+        comment = lastComment
+        unless comment.empty?
+            @outp.puts comment
+        end
+    end
+    
+    def puts(*line)
+        raise unless @state == :asm
+        @outp.puts("\"\\t" + line.join('') + "\\n\" #{lastComment}")
+    end
+    
+    def print(line)
+        raise unless @state == :asm
+        @outp.print("\"" + line + "\"")
+    end
+    
+    def putsLabel(labelName)
+        raise unless @state == :asm
+        @outp.puts("OFFLINE_ASM_GLOBAL_LABEL(#{labelName}) #{lastComment}")
+    end
+    
+    def putsLocalLabel(labelName)
+        raise unless @state == :asm
+        @outp.puts("LOCAL_LABEL_STRING(#{labelName}) \":\\n\" #{lastComment}")
+    end
+    
+    def self.labelReference(labelName)
+        "\" SYMBOL_STRING(#{labelName}) \""
+    end
+    
+    def self.localLabelReference(labelName)
+        "\" LOCAL_LABEL_STRING(#{labelName}) \""
+    end
+    
+    def comment(text)
+        case @commentState
+        when :none
+            @comment = text
+            @commentState = :one
+        when :one
+            @outp.puts "// #{@comment}"
+            @outp.puts "// #{text}"
+            @comment = nil
+            @commentState = :many
+        when :many
+            @outp.puts "// #{text}"
+        else
+            raise
+        end
+    end
+end
+
+asmFile = ARGV.shift
+offsetsFile = ARGV.shift
+outputFlnm = ARGV.shift
+
+$stderr.puts "offlineasm: Parsing #{asmFile} and #{offsetsFile} and creating assembly file #{outputFlnm}."
+
+configurationList = offsetsAndConfigurationIndex(offsetsFile)
+inputData = IO::read(asmFile)
+
+inputHash =
+    "// offlineasm input hash: " + Digest::SHA1.hexdigest(inputData) +
+    " " + Digest::SHA1.hexdigest(configurationList.map{|v| (v[0] + [v[1]]).join(' ')}.join(' ')) +
+    " " + selfHash
+
+if FileTest.exist? outputFlnm
+    File.open(outputFlnm, "r") {
+        | inp |
+        firstLine = inp.gets
+        if firstLine and firstLine.chomp == inputHash
+            $stderr.puts "offlineasm: Nothing changed."
+            exit 0
+        end
+    }
+end
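The caching scheme above — record a digest of all inputs on the first line of the generated file, and skip regeneration when that line already matches — can be sketched in isolation. This is an illustrative reduction, not the offlineasm code itself; the file names and the tiny generator are hypothetical.

```ruby
# Sketch of the first-line hash cache used above: regeneration is skipped
# when the existing output's first line already records the same input hash.
# The generator body and file names here are hypothetical.
require "digest/sha1"

def up_to_date?(output_path, input_hash)
    return false unless File.exist? output_path
    File.open(output_path, "r") do |inp|
        first_line = inp.gets
        first_line and first_line.chomp == input_hash
    end
end

def generate(output_path, input_data)
    input_hash = "// input hash: " + Digest::SHA1.hexdigest(input_data)
    return :skipped if up_to_date?(output_path, input_hash)
    File.open(output_path, "w") do |outp|
        outp.puts input_hash
        outp.puts "// ... generated output would go here ..."
    end
    :generated
end
```

A second run with identical input finds the matching hash line and returns without rewriting the file, which is what lets the build skip reassembly when nothing changed.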
+
+File.open(outputFlnm, "w") {
+    | outp |
+    $output = outp
+    $output.puts inputHash
+    
+    $asm = Assembler.new($output)
+    
+    ast = parse(lex(inputData))
+    
+    configurationList.each {
+        | configuration |
+        offsetsList = configuration[0]
+        configIndex = configuration[1]
+        forSettings(computeSettingsCombinations(ast)[configIndex], ast) {
+            | concreteSettings, lowLevelAST, backend |
+            lowLevelAST = lowLevelAST.resolve(*buildOffsetsMap(lowLevelAST, offsetsList))
+            emitCodeInConfiguration(concreteSettings, lowLevelAST, backend) {
+                $asm.inAsm {
+                    lowLevelAST.lower(backend)
+                }
+            }
+        }
+    }
+}
+
+$stderr.puts "offlineasm: Assembly file #{outputFlnm} successfully generated."
+
diff --git a/Source/JavaScriptCore/offlineasm/ast.rb b/Source/JavaScriptCore/offlineasm/ast.rb
new file mode 100644 (file)
index 0000000..f67b0fc
--- /dev/null
+++ b/Source/JavaScriptCore/offlineasm/ast.rb
@@ -0,0 +1,1039 @@
+# Copyright (C) 2011 Apple Inc. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+# 1. Redistributions of source code must retain the above copyright
+#    notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+#    notice, this list of conditions and the following disclaimer in the
+#    documentation and/or other materials provided with the distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+# BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+# THE POSSIBILITY OF SUCH DAMAGE.
+
+#
+# Base utility types for the AST.
+#
+
+# Valid methods for Node:
+#
+# node.children -> Returns an array of immediate children.
+#
+# node.descendants -> Returns an array of all strict descendants (children
+#     and children of children, transitively).
+#
+# node.flatten -> Returns an array containing the strict descendants and
+#     the node itself.
+#
+# node.filter(type) -> Returns an array containing those elements in
+#     node.flatten that are of the given type (is_a? type returns true).
+#
+# node.mapChildren{|v| ...} -> Returns a new node with all children
+#     replaced according to the given block.
+#
+# Examples:
+#
+# node.filter(Setting).uniq -> Returns all of the settings that the AST's
+#     IfThenElse blocks depend on.
+#
+# node.filter(StructOffset).uniq -> Returns all of the structure offsets
+#     that the AST depends on.
+
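The traversal contract described above can be sketched standalone. The `DemoNode`/`DemoLeaf` classes here are illustrative stand-ins, not the actual AST classes; they only show how `flatten` and `filter` compose out of `children`.

```ruby
# Minimal sketch of the traversal API described above. DemoNode and
# DemoLeaf are illustrative, not part of offlineasm.
class DemoLeaf
    attr_reader :tag
    def initialize(tag)
        @tag = tag
    end
    def children
        []
    end
    def flatten
        [self]
    end
end

class DemoNode
    def initialize(kids)
        @kids = kids
    end
    def children
        @kids
    end
    # descendants = children plus children of children, transitively.
    def descendants
        children.collect{|v| v.flatten}.flatten
    end
    def flatten
        [self] + descendants
    end
    # filter(type) selects every element of flatten that is_a? type.
    def filter(type)
        flatten.select{|v| v.is_a? type}
    end
end

tree = DemoNode.new([DemoLeaf.new(:a), DemoNode.new([DemoLeaf.new(:b)])])
```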
+class Node
+    attr_reader :codeOrigin
+    
+    def initialize(codeOrigin)
+        @codeOrigin = codeOrigin
+    end
+    
+    def codeOriginString
+        "line number #{@codeOrigin}"
+    end
+    
+    def descendants
+        children.collect{|v| v.flatten}.flatten
+    end
+    
+    def flatten
+        [self] + descendants
+    end
+    
+    def filter(type)
+        flatten.select{|v| v.is_a? type}
+    end
+end
+
+class NoChildren < Node
+    def initialize(codeOrigin)
+        super(codeOrigin)
+    end
+    
+    def children
+        []
+    end
+    
+    def mapChildren
+        self
+    end
+end
+
+class StructOffsetKey
+    attr_reader :struct, :field
+    
+    def initialize(struct, field)
+        @struct = struct
+        @field = field
+    end
+    
+    def hash
+        @struct.hash + @field.hash * 3
+    end
+    
+    def eql?(other)
+        @struct == other.struct and @field == other.field
+    end
+end
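StructOffsetKey overrides `hash` and `eql?` so that two distinct instances with the same struct/field behave as the same Hash key — the mechanism `StructOffset.forField` relies on for interning. A minimal standalone sketch of that value-equality pattern (`DemoKey` is a hypothetical stand-in):

```ruby
# Sketch of value-equality Hash keys, as StructOffsetKey does: overriding
# hash and eql? makes two distinct instances with equal fields hit the
# same Hash slot. DemoKey is an illustrative stand-in.
class DemoKey
    attr_reader :struct, :field
    def initialize(struct, field)
        @struct = struct
        @field = field
    end
    def hash
        @struct.hash + @field.hash * 3
    end
    def eql?(other)
        @struct == other.struct and @field == other.field
    end
end

mapping = {}
mapping[DemoKey.new("JSCell", "m_structure")] = 0
```

Without both overrides, a freshly constructed key would miss the cache and the interning table would grow a duplicate entry per lookup.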
+
+#
+# AST nodes.
+#
+
+class StructOffset < NoChildren
+    attr_reader :struct, :field
+    
+    def initialize(codeOrigin, struct, field)
+        super(codeOrigin)
+        @struct = struct
+        @field = field
+    end
+    
+    @@mapping = {}
+    
+    def self.forField(codeOrigin, struct, field)
+        key = StructOffsetKey.new(struct, field)
+        
+        unless @@mapping[key]
+            @@mapping[key] = StructOffset.new(codeOrigin, struct, field)
+        end
+        @@mapping[key]
+    end
+    
+    def dump
+        "#{struct}::#{field}"
+    end
+    
+    def <=>(other)
+        if @struct != other.struct
+            return @struct <=> other.struct
+        end
+        @field <=> other.field
+    end
+    
+    def address?
+        false
+    end
+    
+    def label?
+        false
+    end
+    
+    def immediate?
+        true
+    end
+    
+    def register?
+        false
+    end
+end
+
+class Sizeof < NoChildren
+    attr_reader :struct
+    
+    def initialize(codeOrigin, struct)
+        super(codeOrigin)
+        @struct = struct
+    end
+    
+    @@mapping = {}
+    
+    def self.forName(codeOrigin, struct)
+        unless @@mapping[struct]
+            @@mapping[struct] = Sizeof.new(codeOrigin, struct)
+        end
+        @@mapping[struct]
+    end
+    
+    def dump
+        "sizeof #{@struct}"
+    end
+    
+    def <=>(other)
+        @struct <=> other.struct
+    end
+    
+    def address?
+        false
+    end
+    
+    def label?
+        false
+    end
+    
+    def immediate?
+        true
+    end
+    
+    def register?
+        false
+    end
+end
+
+class Immediate < NoChildren
+    attr_reader :value
+    
+    def initialize(codeOrigin, value)
+        super(codeOrigin)
+        @value = value
+        raise "Bad immediate value #{value.inspect} at #{codeOriginString}" unless value.is_a? Integer
+    end
+    
+    def dump
+        "#{value}"
+    end
+    
+    def ==(other)
+        other.is_a? Immediate and other.value == @value
+    end
+    
+    def address?
+        false
+    end
+    
+    def label?
+        false
+    end
+    
+    def immediate?
+        true
+    end
+    
+    def register?
+        false
+    end
+end
+
+class AddImmediates < Node
+    attr_reader :left, :right
+    
+    def initialize(codeOrigin, left, right)
+        super(codeOrigin)
+        @left = left
+        @right = right
+    end
+    
+    def children
+        [@left, @right]
+    end
+    
+    def mapChildren
+        AddImmediates.new(codeOrigin, (yield @left), (yield @right))
+    end
+    
+    def dump
+        "(#{left.dump} + #{right.dump})"
+    end
+    
+    def address?
+        false
+    end
+    
+    def label?
+        false
+    end
+    
+    def immediate?
+        true
+    end
+    
+    def register?
+        false
+    end
+end
+
+class SubImmediates < Node
+    attr_reader :left, :right
+    
+    def initialize(codeOrigin, left, right)
+        super(codeOrigin)
+        @left = left
+        @right = right
+    end
+    
+    def children
+        [@left, @right]
+    end
+    
+    def mapChildren
+        SubImmediates.new(codeOrigin, (yield @left), (yield @right))
+    end
+    
+    def dump
+        "(#{left.dump} - #{right.dump})"
+    end
+    
+    def address?
+        false
+    end
+    
+    def label?
+        false
+    end
+    
+    def immediate?
+        true
+    end
+    
+    def register?
+        false
+    end
+end
+
+class MulImmediates < Node
+    attr_reader :left, :right
+    
+    def initialize(codeOrigin, left, right)
+        super(codeOrigin)
+        @left = left
+        @right = right
+    end
+    
+    def children
+        [@left, @right]
+    end
+    
+    def mapChildren
+        MulImmediates.new(codeOrigin, (yield @left), (yield @right))
+    end
+    
+    def dump
+        "(#{left.dump} * #{right.dump})"
+    end
+    
+    def address?
+        false
+    end
+    
+    def label?
+        false
+    end
+    
+    def immediate?
+        true
+    end
+    
+    def register?
+        false
+    end
+end
+
+class NegImmediate < Node
+    attr_reader :child
+    
+    def initialize(codeOrigin, child)
+        super(codeOrigin)
+        @child = child
+    end
+    
+    def children
+        [@child]
+    end
+    
+    def mapChildren
+        NegImmediate.new(codeOrigin, (yield @child))
+    end
+    
+    def dump
+        "(-#{@child.dump})"
+    end
+    
+    def address?
+        false
+    end
+    
+    def label?
+        false
+    end
+    
+    def immediate?
+        true
+    end
+    
+    def register?
+        false
+    end
+end
+
+class RegisterID < NoChildren
+    attr_reader :name
+    
+    def initialize(codeOrigin, name)
+        super(codeOrigin)
+        @name = name
+    end
+    
+    @@mapping = {}
+    
+    def self.forName(codeOrigin, name)
+        unless @@mapping[name]
+            @@mapping[name] = RegisterID.new(codeOrigin, name)
+        end
+        @@mapping[name]
+    end
+    
+    def dump
+        name
+    end
+    
+    def address?
+        false
+    end
+    
+    def label?
+        false
+    end
+    
+    def immediate?
+        false
+    end
+    
+    def register?
+        true
+    end
+end
+
+class FPRegisterID < NoChildren
+    attr_reader :name
+    
+    def initialize(codeOrigin, name)
+        super(codeOrigin)
+        @name = name
+    end
+    
+    @@mapping = {}
+    
+    def self.forName(codeOrigin, name)
+        unless @@mapping[name]
+            @@mapping[name] = FPRegisterID.new(codeOrigin, name)
+        end
+        @@mapping[name]
+    end
+    
+    def dump
+        name
+    end
+    
+    def address?
+        false
+    end
+    
+    def label?
+        false
+    end
+    
+    def immediate?
+        false
+    end
+    
+    def register?
+        true
+    end
+end
+
+class Variable < NoChildren
+    attr_reader :name
+    
+    def initialize(codeOrigin, name)
+        super(codeOrigin)
+        @name = name
+    end
+    
+    @@mapping = {}
+    
+    def self.forName(codeOrigin, name)
+        unless @@mapping[name]
+            @@mapping[name] = Variable.new(codeOrigin, name)
+        end
+        @@mapping[name]
+    end
+    
+    def dump
+        name
+    end
+end
+
+class Address < Node
+    attr_reader :base, :offset
+    
+    def initialize(codeOrigin, base, offset)
+        super(codeOrigin)
+        @base = base
+        @offset = offset
+        raise "Bad base for address #{base.inspect} at #{codeOriginString}" unless base.is_a? Variable or base.register?
+        raise "Bad offset for address #{offset.inspect} at #{codeOriginString}" unless offset.is_a? Variable or offset.immediate?
+    end
+    
+    def children
+        [@base, @offset]
+    end
+    
+    def mapChildren
+        Address.new(codeOrigin, (yield @base), (yield @offset))
+    end
+    
+    def dump
+        "#{offset.dump}[#{base.dump}]"
+    end
+    
+    def address?
+        true
+    end
+    
+    def label?
+        false
+    end
+    
+    def immediate?
+        false
+    end
+    
+    def register?
+        false
+    end
+end
+
+class BaseIndex < Node
+    attr_reader :base, :index, :scale, :offset
+    
+    def initialize(codeOrigin, base, index, scale, offset)
+        super(codeOrigin)
+        @base = base
+        @index = index
+        @scale = scale
+        raise unless [1, 2, 4, 8].member? @scale
+        @offset = offset
+    end
+    
+    def scaleShift
+        case scale
+        when 1
+            0
+        when 2
+            1
+        when 4
+            2
+        when 8
+            3
+        else
+            raise "Bad scale at #{codeOriginString}"
+        end
+    end
+    
+    def children
+        [@base, @index, @offset]
+    end
+    
+    def mapChildren
+        BaseIndex.new(codeOrigin, (yield @base), (yield @index), @scale, (yield @offset))
+    end
+    
+    def dump
+        "#{offset.dump}[#{base.dump}, #{index.dump}, #{scale}]"
+    end
+    
+    def address?
+        true
+    end
+    
+    def label?
+        false
+    end
+    
+    def immediate?
+        false
+    end
+    
+    def register?
+        false
+    end
+end
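The `scale` of a BaseIndex operand maps to a shift amount via `scaleShift`, because scaled-index addressing computes base + (index << shift) + offset. A standalone sketch of that arithmetic (the helper names are illustrative, not offlineasm API):

```ruby
# Illustrative sketch of the address a BaseIndex operand denotes, using
# the same scale-to-shift mapping as scaleShift above.
def scale_shift(scale)
    { 1 => 0, 2 => 1, 4 => 2, 8 => 3 }.fetch(scale) { raise "Bad scale" }
end

def effective_address(base, index, scale, offset)
    base + (index << scale_shift(scale)) + offset
end
```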
+
+class AbsoluteAddress < NoChildren
+    attr_reader :address
+    
+    def initialize(codeOrigin, address)
+        super(codeOrigin)
+        @address = address
+    end
+    
+    def dump
+        "#{address.dump}[]"
+    end
+    
+    def address?
+        true
+    end
+    
+    def label?
+        false
+    end
+    
+    def immediate?
+        false
+    end
+    
+    def register?
+        false
+    end
+end
+
+class Instruction < Node
+    attr_reader :opcode, :operands
+    
+    def initialize(codeOrigin, opcode, operands)
+        super(codeOrigin)
+        @opcode = opcode
+        @operands = operands
+    end
+    
+    def children
+        operands
+    end
+    
+    def mapChildren(&proc)
+        Instruction.new(codeOrigin, @opcode, @operands.map(&proc))
+    end
+    
+    def dump
+        "\t" + opcode.to_s + " " + operands.collect{|v| v.dump}.join(", ")
+    end
+end
+
+class Error < NoChildren
+    def initialize(codeOrigin)
+        super(codeOrigin)
+    end
+    
+    def dump
+        "\terror"
+    end
+end
+
+class ConstDecl < Node
+    attr_reader :variable, :value
+    
+    def initialize(codeOrigin, variable, value)
+        super(codeOrigin)
+        @variable = variable
+        @value = value
+    end
+    
+    def children
+        [@variable, @value]
+    end
+    
+    def mapChildren
+        ConstDecl.new(codeOrigin, (yield @variable), (yield @value))
+    end
+    
+    def dump
+        "const #{@variable.dump} = #{@value.dump}"
+    end
+end
+
+$labelMapping = {}
+
+class Label < NoChildren
+    attr_reader :name
+    
+    def initialize(codeOrigin, name)
+        super(codeOrigin)
+        @name = name
+    end
+    
+    def self.forName(codeOrigin, name)
+        if $labelMapping[name]
+            raise "Label name collision: #{name}" unless $labelMapping[name].is_a? Label
+        else
+            $labelMapping[name] = Label.new(codeOrigin, name)
+        end
+        $labelMapping[name]
+    end
+    
+    def dump
+        "#{name}:"
+    end
+end
+
+class LocalLabel < NoChildren
+    attr_reader :name
+    
+    def initialize(codeOrigin, name)
+        super(codeOrigin)
+        @name = name
+    end
+
+    @@uniqueNameCounter = 0
+    
+    def self.forName(codeOrigin, name)
+        if $labelMapping[name]
+            raise "Label name collision: #{name}" unless $labelMapping[name].is_a? LocalLabel
+        else
+            $labelMapping[name] = LocalLabel.new(codeOrigin, name)
+        end
+        $labelMapping[name]
+    end
+    
+    def self.unique(comment)
+        newName = "_#{comment}"
+        if $labelMapping[newName]
+            while $labelMapping[newName = "_#{@@uniqueNameCounter}_#{comment}"]
+                @@uniqueNameCounter += 1
+            end
+        end
+        forName(nil, newName)
+    end
+    
+    def cleanName
+        if name =~ /^\./
+            "_" + name[1..-1]
+        else
+            name
+        end
+    end
+    
+    def dump
+        "#{name}:"
+    end
+end
+
+class LabelReference < Node
+    attr_reader :label
+    
+    def initialize(codeOrigin, label)
+        super(codeOrigin)
+        @label = label
+    end
+    
+    def children
+        [@label]
+    end
+    
+    def mapChildren
+        LabelReference.new(codeOrigin, (yield @label))
+    end
+    
+    def name
+        label.name
+    end
+    
+    def dump
+        label.name
+    end
+    
+    def address?
+        false
+    end
+    
+    def label?
+        true
+    end
+end
+
+class LocalLabelReference < NoChildren
+    attr_reader :label
+    
+    def initialize(codeOrigin, label)
+        super(codeOrigin)
+        @label = label
+    end
+    
+    def children
+        [@label]
+    end
+    
+    def mapChildren
+        LocalLabelReference.new(codeOrigin, (yield @label))
+    end
+    
+    def name
+        label.name
+    end
+    
+    def dump
+        label.name
+    end
+    
+    def address?
+        false
+    end
+    
+    def label?
+        true
+    end
+end
+
+class Sequence < Node
+    attr_reader :list
+    
+    def initialize(codeOrigin, list)
+        super(codeOrigin)
+        @list = list
+    end
+    
+    def children
+        list
+    end
+    
+    def mapChildren(&proc)
+        Sequence.new(codeOrigin, @list.map(&proc))
+    end
+    
+    def dump
+        list.collect{|v| v.dump}.join("\n")
+    end
+end
+
+class True < NoChildren
+    def initialize
+        super(nil)
+    end
+    
+    @@instance = True.new
+    
+    def self.instance
+        @@instance
+    end
+    
+    def value
+        true
+    end
+    
+    def dump
+        "true"
+    end
+end
+
+class False < NoChildren
+    def initialize
+        super(nil)
+    end
+    
+    @@instance = False.new
+    
+    def self.instance
+        @@instance
+    end
+    
+    def value
+        false
+    end
+    
+    def dump
+        "false"
+    end
+end
+
+class TrueClass
+    def asNode
+        True.instance
+    end
+end
+
+class FalseClass
+    def asNode
+        False.instance
+    end
+end
+
+class Setting < NoChildren
+    attr_reader :name
+    
+    def initialize(codeOrigin, name)
+        super(codeOrigin)
+        @name = name
+    end
+    
+    @@mapping = {}
+    
+    def self.forName(codeOrigin, name)
+        unless @@mapping[name]
+            @@mapping[name] = Setting.new(codeOrigin, name)
+        end
+        @@mapping[name]
+    end
+    
+    def dump
+        name
+    end
+end
+
+class And < Node
+    attr_reader :left, :right
+    
+    def initialize(codeOrigin, left, right)
+        super(codeOrigin)
+        @left = left
+        @right = right
+    end
+    
+    def children
+        [@left, @right]
+    end
+    
+    def mapChildren
+        And.new(codeOrigin, (yield @left), (yield @right))
+    end
+    
+    def dump
+        "(#{left.dump} and #{right.dump})"
+    end
+end
+
+class Or < Node
+    attr_reader :left, :right
+    
+    def initialize(codeOrigin, left, right)
+        super(codeOrigin)
+        @left = left
+        @right = right
+    end
+    
+    def children
+        [@left, @right]
+    end
+    
+    def mapChildren
+        Or.new(codeOrigin, (yield @left), (yield @right))
+    end
+    
+    def dump
+        "(#{left.dump} or #{right.dump})"
+    end
+end
+
+class Not < Node
+    attr_reader :child
+    
+    def initialize(codeOrigin, child)
+        super(codeOrigin)
+        @child = child
+    end
+    
+    def children
+        [@child]
+    end
+    
+    def mapChildren
+        Not.new(codeOrigin, (yield @child))
+    end
+    
+    def dump
+        "(not #{child.dump})"
+    end
+end
+
+class Skip < NoChildren
+    def initialize(codeOrigin)
+        super(codeOrigin)
+    end
+    
+    def dump
+        "\tskip"
+    end
+end
+
+class IfThenElse < Node
+    attr_reader :predicate, :thenCase
+    attr_accessor :elseCase
+    
+    def initialize(codeOrigin, predicate, thenCase)
+        super(codeOrigin)
+        @predicate = predicate
+        @thenCase = thenCase
+        @elseCase = Skip.new(codeOrigin)
+    end
+    
+    def children
+        if @elseCase
+            [@predicate, @thenCase, @elseCase]
+        else
+            [@predicate, @thenCase]
+        end
+    end
+    
+    def mapChildren
+        ifThenElse = IfThenElse.new(codeOrigin, (yield @predicate), (yield @thenCase))
+        ifThenElse.elseCase = (yield @elseCase)
+        ifThenElse
+    end
+    
+    def dump
+        "if #{predicate.dump}\n" + thenCase.dump + "\nelse\n" + elseCase.dump + "\nend"
+    end
+end
+
+class Macro < Node
+    attr_reader :name, :variables, :body
+    
+    def initialize(codeOrigin, name, variables, body)
+        super(codeOrigin)
+        @name = name
+        @variables = variables
+        @body = body
+    end
+    
+    def children
+        @variables + [@body]
+    end
+    
+    def mapChildren
+        Macro.new(codeOrigin, @name, @variables.map{|v| yield v}, (yield @body))
+    end
+    
+    def dump
+        "macro #{name}(" + variables.collect{|v| v.dump}.join(", ") + ")\n" + body.dump + "\nend"
+    end
+end
+
+class MacroCall < Node
+    attr_reader :name, :operands
+    
+    def initialize(codeOrigin, name, operands)
+        super(codeOrigin)
+        @name = name
+        @operands = operands
+        raise unless @operands
+        @operands.each{|v| raise unless v}
+    end
+    
+    def children
+        @operands
+    end
+    
+    def mapChildren(&proc)
+        MacroCall.new(codeOrigin, @name, @operands.map(&proc))
+    end
+    
+    def dump
+        "\t#{name}(" + operands.collect{|v| v.dump}.join(", ") + ")"
+    end
+end
+
diff --git a/Source/JavaScriptCore/offlineasm/backends.rb b/Source/JavaScriptCore/offlineasm/backends.rb
new file mode 100644 (file)
index 0000000..2c65b51
--- /dev/null
+++ b/Source/JavaScriptCore/offlineasm/backends.rb
@@ -0,0 +1,96 @@
+# Copyright (C) 2011 Apple Inc. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+# 1. Redistributions of source code must retain the above copyright
+#    notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+#    notice, this list of conditions and the following disclaimer in the
+#    documentation and/or other materials provided with the distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+# BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+# THE POSSIBILITY OF SUCH DAMAGE.
+
+require "armv7"
+require "ast"
+require "x86"
+
+BACKENDS =
+    [
+     "X86",
+     "ARMv7"
+    ]
+
+# Keep the set of working backends separate from the set of backends that might
+# be supported. This is useful because the BACKENDS list acts almost like a
+# reserved-words list: it causes settings resolution to treat those words
+# specially. Hence it lets us set aside the name of a backend we might want to
+# support in the future without actually supporting that backend yet.
+WORKING_BACKENDS =
+    [
+     "X86",
+     "ARMv7"
+    ]
+
+BACKEND_PATTERN = Regexp.new('\\A(' + BACKENDS.join(')|(') + ')\\Z')
+
+class Node
+    def lower(name)
+        send("lower" + name)
+    end
+end
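`Node#lower` above dispatches to a per-backend method by name (e.g. `lowerX86` for the "X86" backend) via `send`, while backend-agnostic nodes override `lower` directly. A minimal sketch of that dispatch pattern (`DemoInstruction` and its method bodies are illustrative):

```ruby
# Sketch of the send-based backend dispatch used by Node#lower:
# lower("X86") calls lowerX86, lower("ARMv7") calls lowerARMv7, and a
# missing backend method surfaces as NoMethodError. DemoInstruction is
# an illustrative stand-in.
class DemoInstruction
    def lower(name)
        send("lower" + name)
    end
    def lowerX86
        "x86 lowering"
    end
    def lowerARMv7
        "armv7 lowering"
    end
end
```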
+
+# Overrides for lower() for those nodes that are backend-agnostic
+
+class Label
+    def lower(name)
+        $asm.putsLabel(self.name[1..-1])
+    end
+end
+
+class LocalLabel
+    def lower(name)
+        $asm.putsLocalLabel "_offlineasm_#{self.name[1..-1]}"
+    end
+end
+
+class LabelReference
+    def asmLabel
+        Assembler.labelReference(name[1..-1])
+    end
+end
+
+class LocalLabelReference
+    def asmLabel
+        Assembler.localLabelReference("_offlineasm_"+name[1..-1])
+    end
+end
+
+class Skip
+    def lower(name)
+    end
+end
+
+class Sequence
+    def lower(name)
+        if respond_to? "lower#{name}"
+            send("lower#{name}")
+        else
+            @list.each {
+                | node |
+                node.lower(name)
+            }
+        end
+    end
+end
+
diff --git a/Source/JavaScriptCore/offlineasm/generate_offset_extractor.rb b/Source/JavaScriptCore/offlineasm/generate_offset_extractor.rb
new file mode 100644 (file)
index 0000000..8bdf450
--- /dev/null
+++ b/Source/JavaScriptCore/offlineasm/generate_offset_extractor.rb
@@ -0,0 +1,146 @@
+#!/usr/bin/env ruby
+
+# Copyright (C) 2011 Apple Inc. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+# 1. Redistributions of source code must retain the above copyright
+#    notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+#    notice, this list of conditions and the following disclaimer in the
+#    documentation and/or other materials provided with the distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+# BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+# THE POSSIBILITY OF SUCH DAMAGE.
+
+$: << File.dirname(__FILE__)
+
+require "backends"
+require "digest/sha1"
+require "offsets"
+require "parser"
+require "self_hash"
+require "settings"
+require "transform"
+
+inputFlnm = ARGV.shift
+outputFlnm = ARGV.shift
+
+$stderr.puts "offlineasm: Parsing #{inputFlnm} and creating offset extractor #{outputFlnm}."
+
+def emitMagicNumber
+    OFFSET_MAGIC_NUMBERS.each {
+        | number |
+        $output.puts "#{number},"
+    }
+end
+
+inputData = IO::read(inputFlnm)
+inputHash = "// offlineasm input hash: #{Digest::SHA1.hexdigest(inputData)} #{selfHash}"
+
+if FileTest.exist? outputFlnm
+    File.open(outputFlnm, "r") {
+        | inp |
+        firstLine = inp.gets
+        if firstLine and firstLine.chomp == inputHash
+            $stderr.puts "offlineasm: Nothing changed."
+            exit 0
+        end
+    }
+end
+
+originalAST = parse(lex(inputData))
+
+#
+# Optimize the AST to make configuration extraction faster. This reduces the AST to a form
+# that only contains the things that matter for our purposes: offsets, sizes, and if
+# statements.
+#
+
+class Node
+    def offsetsPruneTo(sequence)
+        children.each {
+            | child |
+            child.offsetsPruneTo(sequence)
+        }
+    end
+    
+    def offsetsPrune
+        result = Sequence.new(codeOrigin, [])
+        offsetsPruneTo(result)
+        result
+    end
+end
+
+class IfThenElse
+    def offsetsPruneTo(sequence)
+        ifThenElse = IfThenElse.new(codeOrigin, predicate, thenCase.offsetsPrune)
+        ifThenElse.elseCase = elseCase.offsetsPrune
+        sequence.list << ifThenElse
+    end
+end
+
+class StructOffset
+    def offsetsPruneTo(sequence)
+        sequence.list << self
+    end
+end
+
+class Sizeof
+    def offsetsPruneTo(sequence)
+        sequence.list << self
+    end
+end
+
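The pruning pass above is a small visitor: plain nodes recurse into children, `StructOffset` and `Sizeof` copy themselves into the output, and `IfThenElse` rebuilds a pruned version of itself. The same shape on a toy tree (the `Demo*` classes are illustrative stand-ins for the AST classes):

```ruby
# Standalone sketch of the pruning pass above: walk a tree, keeping only
# nodes of interest in a flat output list. The Demo* classes are
# illustrative stand-ins.
class DemoBranch
    def initialize(children)
        @children = children
    end
    def prune_to(output)
        @children.each { |child| child.prune_to(output) }
    end
end

class DemoOffset
    attr_reader :name
    def initialize(name)
        @name = name
    end
    def prune_to(output)
        output << self
    end
end

class DemoOther
    def prune_to(output)
        # Not interesting to the extractor; contributes nothing.
    end
end

tree = DemoBranch.new([DemoOffset.new("a"), DemoOther.new,
                       DemoBranch.new([DemoOffset.new("b")])])
pruned = []
tree.prune_to(pruned)
```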
+prunedAST = originalAST.offsetsPrune
+
+File.open(outputFlnm, "w") {
+    | outp |
+    $output = outp
+    outp.puts inputHash
+    length = 0
+    emitCodeInAllConfigurations(prunedAST) {
+        | settings, ast, backend, index |
+        offsetsList = ast.filter(StructOffset).uniq.sort
+        sizesList = ast.filter(Sizeof).uniq.sort
+        length += OFFSET_HEADER_MAGIC_NUMBERS.size + (OFFSET_MAGIC_NUMBERS.size + 1) * (1 + offsetsList.size + sizesList.size)
+    }
+    outp.puts "static const unsigned extractorTable[#{length}] = {"
+    emitCodeInAllConfigurations(prunedAST) {
+        | settings, ast, backend, index |
+        OFFSET_HEADER_MAGIC_NUMBERS.each {
+            | number |
+            $output.puts "#{number},"
+        }
+
+        offsetsList = ast.filter(StructOffset).uniq.sort
+        sizesList = ast.filter(Sizeof).uniq.sort
+        
+        emitMagicNumber
+        outp.puts "#{index},"
+        offsetsList.each {
+            | offset |
+            emitMagicNumber
+            outp.puts "OFFLINE_ASM_OFFSETOF(#{offset.struct}, #{offset.field}),"
+        }
+        sizesList.each {
+            | offset |
+            emitMagicNumber
+            outp.puts "sizeof(#{offset.struct}),"
+        }
+    }
+    outp.puts "};"
+}
+
+$stderr.puts "offlineasm: offset extractor #{outputFlnm} successfully generated."
+
diff --git a/Source/JavaScriptCore/offlineasm/instructions.rb b/Source/JavaScriptCore/offlineasm/instructions.rb
new file mode 100644 (file)
index 0000000..497b473
--- /dev/null
+++ b/Source/JavaScriptCore/offlineasm/instructions.rb
@@ -0,0 +1,217 @@
+# Copyright (C) 2011 Apple Inc. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+# 1. Redistributions of source code must retain the above copyright
+#    notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+#    notice, this list of conditions and the following disclaimer in the
+#    documentation and/or other materials provided with the distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+# BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+# THE POSSIBILITY OF SUCH DAMAGE.
+
+# Interesting invariant, which we take advantage of: branching instructions
+# always begin with "b", and no non-branching instructions begin with "b".
+# Terminal instructions are "jmp" and "ret".
+
+MACRO_INSTRUCTIONS =
+    [
+     "addi",
+     "andi",
+     "lshifti",
+     "muli",
+     "negi",
+     "noti",
+     "ori",
+     "rshifti",
+     "urshifti",
+     "subi",
+     "xori",
+     "loadi",
+     "loadb",
+     "loadbs",
+     "loadh",
+     "loadhs",
+     "storei",
+     "storeb",
+     "loadd",
+     "moved",
+     "stored",
+     "addd",
+     "divd",
+     "subd",
+     "muld",
+     "sqrtd",
+     "ci2d",
+     "fii2d", # usage: fii2d <gpr with least significant bits>, <gpr with most significant bits>, <fpr>
+     "fd2ii", # usage: fd2ii <fpr>, <gpr with least significant bits>, <gpr with most significant bits>
+     "bdeq",
+     "bdneq",
+     "bdgt",
+     "bdgteq",
+     "bdlt",
+     "bdlteq",
+     "bdequn",
+     "bdnequn",
+     "bdgtun",
+     "bdgtequn",
+     "bdltun",
+     "bdltequn",
+     "btd2i",
+     "td2i",
+     "bcd2i",
+     "movdz",
+     "pop",
+     "push",
+     "move",
+     "sxi2p",
+     "zxi2p",
+     "nop",
+     "bieq",
+     "bineq",
+     "bia",
+     "biaeq",
+     "bib",
+     "bibeq",
+     "bigt",
+     "bigteq",
+     "bilt",
+     "bilteq",
+     "bbeq",
+     "bbneq",
+     "bba",
+     "bbaeq",
+     "bbb",
+     "bbbeq",
+     "bbgt",
+     "bbgteq",
+     "bblt",
+     "bblteq",
+     "btio",
+     "btis",
+     "btiz",
+     "btinz",
+     "btbo",
+     "btbs",
+     "btbz",
+     "btbnz",
+     "jmp",
+     "baddio",
+     "baddis",
+     "baddiz",
+     "baddinz",
+     "bsubio",
+     "bsubis",
+     "bsubiz",
+     "bsubinz",
+     "bmulio",
+     "bmulis",
+     "bmuliz",
+     "bmulinz",
+     "borio",
+     "boris",
+     "boriz",
+     "borinz",
+     "break",
+     "call",
+     "ret",
+     "cieq",
+     "cineq",
+     "cia",
+     "ciaeq",
+     "cib",
+     "cibeq",
+     "cigt",
+     "cigteq",
+     "cilt",
+     "cilteq",
+     "tio",
+     "tis",
+     "tiz",
+     "tinz",
+     "tbo",
+     "tbs",
+     "tbz",
+     "tbnz",
+     "peek",
+     "poke",
+     "bpeq",
+     "bpneq",
+     "bpa",
+     "bpaeq",
+     "bpb",
+     "bpbeq",
+     "bpgt",
+     "bpgteq",
+     "bplt",
+     "bplteq",
+     "addp",
+     "andp",
+     "orp",
+     "subp",
+     "xorp",
+     "loadp",
+     "cpeq",
+     "cpneq",
+     "cpa",
+     "cpaeq",
+     "cpb",
+     "cpbeq",
+     "cpgt",
+     "cpgteq",
+     "cplt",
+     "cplteq",
+     "storep",
+     "btpo",
+     "btps",
+     "btpz",
+     "btpnz",
+     "baddpo",
+     "baddps",
+     "baddpz",
+     "baddpnz",
+     "bo",
+     "bs",
+     "bz",
+     "bnz",
+     "leai",
+     "leap",
+    ]
+
+X86_INSTRUCTIONS =
+    [
+     "cdqi",
+     "idivi"
+    ]
+
+ARMv7_INSTRUCTIONS =
+    [
+     "smulli",
+     "addis",
+     "subis",
+     "oris"
+    ]
+
+INSTRUCTIONS = MACRO_INSTRUCTIONS + X86_INSTRUCTIONS + ARMv7_INSTRUCTIONS
+
+INSTRUCTION_PATTERN = Regexp.new('\\A((' + INSTRUCTIONS.join(')|(') + '))\\Z')
+
+def isBranch(instruction)
+    instruction =~ /^b/
+end
+
+def hasFallThrough(instruction)
+    instruction != "ret" and instruction != "jmp"
+end
+
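The naming invariant described at the top of instructions.rb can be exercised in isolation. A minimal standalone sketch (re-declaring a few instruction names rather than loading instructions.rb; `SAMPLE_INSTRUCTIONS` is illustrative, not the real list):

```ruby
# Standalone illustration of the offlineasm naming invariant:
# branch instructions all begin with "b"; "jmp" and "ret" are terminal.
SAMPLE_INSTRUCTIONS = %w[addi bieq baddio jmp ret move loadp btpz]

def isBranch(instruction)
    instruction =~ /^b/
end

def hasFallThrough(instruction)
    instruction != "ret" and instruction != "jmp"
end

# Branches are exactly the "b"-prefixed names; terminals never fall through.
branches = SAMPLE_INSTRUCTIONS.select { |insn| isBranch(insn) }
terminals = SAMPLE_INSTRUCTIONS.reject { |insn| hasFallThrough(insn) }
```

This is why the invariant is worth preserving: downstream passes can classify instructions by name alone, with no per-instruction metadata table.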
diff --git a/Source/JavaScriptCore/offlineasm/offsets.rb b/Source/JavaScriptCore/offlineasm/offsets.rb
new file mode 100644 (file)
index 0000000..21e1706
--- /dev/null
@@ -0,0 +1,173 @@
+# Copyright (C) 2011 Apple Inc. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+# 1. Redistributions of source code must retain the above copyright
+#    notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+#    notice, this list of conditions and the following disclaimer in the
+#    documentation and/or other materials provided with the distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+# BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+# THE POSSIBILITY OF SUCH DAMAGE.
+
+require "ast"
+
+OFFSET_HEADER_MAGIC_NUMBERS = [ 0x9e43fd66, 0x4379bfba ]
+OFFSET_MAGIC_NUMBERS = [ 0xec577ac7, 0x0ff5e755 ]
+
+#
+# offsetsList(ast)
+# sizesList(ast)
+#
+# Return the sorted lists of unique offsets and sizes used by the AST.
+#
+
+def offsetsList(ast)
+    ast.filter(StructOffset).uniq.sort
+end
+
+def sizesList(ast)
+    ast.filter(Sizeof).uniq.sort
+end
+
+#
+# offsetsAndConfigurationIndex(file) ->
+#     [[offsets, index], ...]
+#
+# Parses the offsets from a compiled offset-extractor file and returns a
+# list of [offsets, configuration index] pairs, one for each configuration
+# that is valid for this build target.
+#
+
+def offsetsAndConfigurationIndex(file)
+    endiannessMarkerBytes = nil
+    result = []
+    
+    def readInt(endianness, bytes)
+        if endianness == :little
+            # Little endian
+            (bytes[0] << 0 |
+             bytes[1] << 8 |
+             bytes[2] << 16 |
+             bytes[3] << 24)
+        else
+            # Big endian
+            (bytes[0] << 24 |
+             bytes[1] << 16 |
+             bytes[2] << 8 |
+             bytes[3] << 0)
+        end
+    end
+    
+    def prepareMagic(endianness, numbers)
+        magicBytes = []
+        numbers.each {
+            | number |
+            currentBytes = []
+            4.times {
+                currentBytes << (number & 0xff)
+                number >>= 8
+            }
+            if endianness == :big
+                currentBytes.reverse!
+            end
+            magicBytes += currentBytes
+        }
+        magicBytes
+    end
+    
+    fileBytes = []
+    
+    File.open(file, "r") {
+        | inp |
+        loop {
+            byte = inp.getbyte
+            break unless byte
+            fileBytes << byte
+        }
+    }
+    
+    def sliceByteArrays(byteArray, pattern)
+        result = []
+        lastSlicePoint = 0
+        (byteArray.length - pattern.length + 1).times {
+            | index |
+            foundOne = true
+            pattern.length.times {
+                | subIndex |
+                if byteArray[index + subIndex] != pattern[subIndex]
+                    foundOne = false
+                    break
+                end
+            }
+            if foundOne
+                result << byteArray[lastSlicePoint...index]
+                lastSlicePoint = index + pattern.length
+            end
+        }
+        
+        result << byteArray[lastSlicePoint...(byteArray.length)]
+        
+        result
+    end
+    
+    [:little, :big].each {
+        | endianness |
+        headerMagicBytes = prepareMagic(endianness, OFFSET_HEADER_MAGIC_NUMBERS)
+        magicBytes = prepareMagic(endianness, OFFSET_MAGIC_NUMBERS)
+        
+        bigArray = sliceByteArrays(fileBytes, headerMagicBytes)
+        unless bigArray.size <= 1
+            bigArray[1..-1].each {
+                | configArray |
+                array = sliceByteArrays(configArray, magicBytes)
+                index = readInt(endianness, array[1])
+                offsets = []
+                array[2..-1].each {
+                    | data |
+                    offsets << readInt(endianness, data)
+                }
+                result << [offsets, index]
+            }
+        end
+    }
+    
+    raise unless result.length >= 1
+    raise if result.map{|v| v[1]}.uniq.size < result.map{|v| v[1]}.size
+    
+    result
+end
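The byte-order handling inside `offsetsAndConfigurationIndex` can be illustrated on its own. A minimal sketch of the same little/big-endian decoding, using hypothetical sample bytes rather than a real offsets file:

```ruby
# Decode four bytes into a 32-bit integer, mirroring the readInt helper
# used when parsing the offset extractor's output.
def readInt(endianness, bytes)
    if endianness == :little
        # Little endian: least significant byte first
        bytes[0] | (bytes[1] << 8) | (bytes[2] << 16) | (bytes[3] << 24)
    else
        # Big endian: most significant byte first
        (bytes[0] << 24) | (bytes[1] << 16) | (bytes[2] << 8) | bytes[3]
    end
end

bytes = [0x78, 0x56, 0x34, 0x12]
little = readInt(:little, bytes)  # 0x12345678
big    = readInt(:big, bytes)     # 0x78563412
```

Trying both byte orders against the magic numbers is what lets the build extract offsets from a target binary without knowing the target's endianness in advance.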
+
+#
+# buildOffsetsMap(ast, offsetsList) -> [offsetsMap, sizesMap]
+#
+# Builds mappings from the AST's StructOffset and Sizeof nodes to their
+# concrete values, consuming offsetsList in the process.
+#
+
+def buildOffsetsMap(ast, offsetsList)
+    offsetsMap = {}
+    sizesMap = {}
+    astOffsetsList = offsetsList(ast)
+    astSizesList = sizesList(ast)
+    raise unless astOffsetsList.size + astSizesList.size == offsetsList.size
+    astOffsetsList.each {
+        | structOffset |
+        offsetsMap[structOffset] = offsetsList.shift
+    }
+    astSizesList.each {
+        | sizeof |
+        sizesMap[sizeof] = offsetsList.shift
+    }
+    [offsetsMap, sizesMap]
+end
+
diff --git a/Source/JavaScriptCore/offlineasm/opt.rb b/Source/JavaScriptCore/offlineasm/opt.rb
new file mode 100644 (file)
index 0000000..3170d3a
--- /dev/null
@@ -0,0 +1,134 @@
+# Copyright (C) 2011 Apple Inc. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+# 1. Redistributions of source code must retain the above copyright
+#    notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+#    notice, this list of conditions and the following disclaimer in the
+#    documentation and/or other materials provided with the distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+# BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+# THE POSSIBILITY OF SUCH DAMAGE.
+
+require "ast"
+
+#
+# "Optimization" passes. These are used to lower the representation for
+# backends that cannot handle some of our higher-level instructions.
+#
+
+#
+# A temporary - a variable that will be allocated to a register after we're
+# done.
+#
+
+class Node
+    def replaceTemporariesWithRegisters(kind)
+        mapChildren {
+            | node |
+            node.replaceTemporariesWithRegisters(kind)
+        }
+    end
+end
+
+class Tmp < NoChildren
+    attr_reader :firstMention, :lastMention
+    attr_reader :kind
+    attr_accessor :register
+
+    def initialize(codeOrigin, kind)
+        super(codeOrigin)
+        @kind = kind
+    end
+    
+    def dump
+        "$tmp#{object_id}"
+    end
+    
+    def mention!(position)
+        if not @firstMention or position < @firstMention
+            @firstMention = position
+        end
+        if not @lastMention or position > @lastMention
+            @lastMention = position
+        end
+    end
+    
+    def replaceTemporariesWithRegisters(kind)
+        if @kind == kind
+            raise "Did not allocate register to temporary at #{codeOriginString}" unless @register
+            @register
+        else
+            self
+        end
+    end
+    
+    def address?
+        false
+    end
+    
+    def label?
+        false
+    end
+    
+    def immediate?
+        false
+    end
+    
+    def register?
+        true
+    end
+end
+
+# Assign registers to temporaries by tracking where each temporary's live
+# range begins and ends. Note that this relies on temporary live ranges not
+# crossing basic block boundaries.
+
+def assignRegistersToTemporaries(list, kind, registers)
+    list.each_with_index {
+        | node, index |
+        node.filter(Tmp).uniq.each {
+            | tmp |
+            if tmp.kind == kind
+                tmp.mention! index
+            end
+        }
+    }
+    
+    freeRegisters = registers.dup
+    list.each_with_index {
+        | node, index |
+        tmpList = node.filter(Tmp).uniq
+        tmpList.each {
+            | tmp |
+            if tmp.kind == kind and tmp.firstMention == index
+                raise "Could not allocate register to temporary at #{node.codeOriginString}" if freeRegisters.empty?
+                tmp.register = freeRegisters.pop
+            end
+        }
+        tmpList.each {
+            | tmp |
+            if tmp.kind == kind and tmp.lastMention == index
+                freeRegisters.push tmp.register
+                raise "Register allocation inconsistency at #{node.codeOriginString}" if freeRegisters.size > registers.size
+            end
+        }
+    }
+    
+    list.map {
+        | node |
+        node.replaceTemporariesWithRegisters(kind)
+    }
+end
+
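The allocation strategy in `assignRegistersToTemporaries` can be sketched without the AST machinery: a register is handed out at a temporary's first mention and returned to the free pool at its last. A minimal standalone sketch under that assumption, with hypothetical live ranges standing in for Tmp mentions:

```ruby
# Allocate registers to temporaries from first to last mention, reusing a
# register as soon as its temporary's last mention has passed -- the same
# scheme assignRegistersToTemporaries uses within a basic block.
def allocate(liveRanges, registers)
    free = registers.dup
    assignment = {}
    maxIndex = liveRanges.values.map { |range| range.last }.max
    (0..maxIndex).each do |index|
        # Allocate before freeing, so ranges that touch at a boundary
        # are treated conservatively as interfering.
        liveRanges.each_key do |tmp|
            if liveRanges[tmp].first == index
                raise "out of registers" if free.empty?
                assignment[tmp] = free.pop
            end
        end
        liveRanges.each_key do |tmp|
            free.push(assignment[tmp]) if liveRanges[tmp].last == index
        end
    end
    assignment
end

# :t0 and :t1 overlap, so they need distinct registers; :t2 starts after
# :t0 ends and can reuse its register.
ranges = { :t0 => (0..2), :t1 => (1..3), :t2 => (3..4) }
result = allocate(ranges, [:r0, :r1])
```

Because live ranges never cross basic block boundaries, this purely local stack discipline suffices; no global interference graph is needed.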
diff --git a/Source/JavaScriptCore/offlineasm/parser.rb b/Source/JavaScriptCore/offlineasm/parser.rb
new file mode 100644 (file)
index 0000000..f0e4b00
--- /dev/null
@@ -0,0 +1,586 @@
+# Copyright (C) 2011 Apple Inc. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+# 1. Redistributions of source code must retain the above copyright
+#    notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+#    notice, this list of conditions and the following disclaimer in the
+#    documentation and/or other materials provided with the distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+# BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+# THE POSSIBILITY OF SUCH DAMAGE.
+
+require "ast"
+require "instructions"
+require "registers"
+
+class Token
+    attr_reader :codeOrigin, :string
+    
+    def initialize(codeOrigin, string)
+        @codeOrigin = codeOrigin
+        @string = string
+    end
+    
+    def ==(other)
+        if other.is_a? Token
+            @string == other.string
+        else
+            @string == other
+        end
+    end
+    
+    def =~(other)
+        @string =~ other
+    end
+    
+    def to_s
+        "#{@string.inspect} at line #{codeOrigin}"
+    end
+    
+    def parseError(*comment)
+        if comment.empty?
+            raise "Parse error: #{to_s}"
+        else
+            raise "Parse error: #{to_s}: #{comment[0]}"
+        end
+    end
+end
+
+#
+# The lexer. Takes a string and returns an array of tokens.
+#
+
+def lex(str)
+    result = []
+    lineNumber = 1
+    while not str.empty?
+        case str
+        when /\A\#([^\n]*)/
+            # comment, ignore
+        when /\A\n/
+            result << Token.new(lineNumber, $&)
+            lineNumber += 1
+        when /\A[a-zA-Z]([a-zA-Z0-9_]*)/
+            result << Token.new(lineNumber, $&)
+        when /\A\.([a-zA-Z0-9_]*)/
+            result << Token.new(lineNumber, $&)
+        when /\A_([a-zA-Z0-9_]*)/
+            result << Token.new(lineNumber, $&)
+        when /\A([ \t]+)/
+            # whitespace, ignore
+        when /\A0x([0-9a-fA-F]+)/
+            result << Token.new(lineNumber, $&.hex.to_s)
+        when /\A0([0-7]+)/
+            result << Token.new(lineNumber, $&.oct.to_s)
+        when /\A([0-9]+)/
+            result << Token.new(lineNumber, $&)
+        when /\A::/
+            result << Token.new(lineNumber, $&)
+        when /\A[:,\(\)\[\]=\+\-*]/
+            result << Token.new(lineNumber, $&)
+        else
+            raise "Lexer error at line number #{lineNumber}, unexpected sequence #{str[0..20].inspect}"
+        end
+        str = $~.post_match
+    end
+    result
+end
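The dispatch order in `lex` matters: hex and octal literals must be tried before plain decimals, and comments and whitespace are consumed without emitting tokens. A stripped-down sketch of the same loop (tokens as plain strings, no Token class; `lexStrings` is an illustrative name, not part of the patch):

```ruby
# Simplified lexer loop: try each pattern against the head of the string,
# emit a token (or skip), then continue from the end of the match.
def lexStrings(str)
    result = []
    until str.empty?
        case str
        when /\A\#([^\n]*)/            then nil                     # comment, ignore
        when /\A\n/                    then result << $&
        when /\A[a-zA-Z][a-zA-Z0-9_]*/ then result << $&            # identifier
        when /\A[ \t]+/                then nil                     # whitespace, ignore
        when /\A0x[0-9a-fA-F]+/        then result << $&.hex.to_s   # hex before decimal
        when /\A0[0-7]+/               then result << $&.oct.to_s   # octal before decimal
        when /\A[0-9]+/                then result << $&
        when /\A[:,\(\)\[\]=\+\-*]/    then result << $&
        else raise "unexpected sequence #{str[0..20].inspect}"
        end
        str = $~.post_match
    end
    result
end

tokens = lexStrings("addi 0x10, t0 # bump")
```

Note that numeric literals are normalized to decimal strings at lex time, so the parser only ever sees one integer syntax.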
+
+#
+# Token identification.
+#
+
+def isRegister(token)
+    token =~ REGISTER_PATTERN
+end
+
+def isInstruction(token)
+    token =~ INSTRUCTION_PATTERN
+end
+
+def isKeyword(token)
+    token =~ /\A((true)|(false)|(if)|(then)|(else)|(elsif)|(end)|(and)|(or)|(not)|(macro)|(const)|(sizeof)|(error))\Z/ or
+        token =~ REGISTER_PATTERN or
+        token =~ INSTRUCTION_PATTERN
+end
+
+def isIdentifier(token)
+    token =~ /\A[a-zA-Z]([a-zA-Z0-9_]*)\Z/ and not isKeyword(token)
+end
+
+def isLabel(token)
+    token =~ /\A_([a-zA-Z0-9_]*)\Z/
+end
+
+def isLocalLabel(token)
+    token =~ /\A\.([a-zA-Z0-9_]*)\Z/
+end
+
+def isVariable(token)
+    isIdentifier(token) or isRegister(token)
+end
+
+def isInteger(token)
+    token =~ /\A[0-9]/
+end
+
+#
+# The parser. Takes an array of tokens and returns an AST. Methods
+# other than parse(tokens) are not for public consumption.
+#
+
+class Parser
+    def initialize(tokens)
+        @tokens = tokens
+        @idx = 0
+    end
+    
+    def parseError(*comment)
+        if @tokens[@idx]
+            @tokens[@idx].parseError(*comment)
+        else
+            if comment.empty?
+                raise "Parse error at end of file"
+            else
+                raise "Parse error at end of file: #{comment[0]}"
+            end
+        end
+    end
+    
+    def consume(regexp)
+        if regexp
+            parseError unless @tokens[@idx] =~ regexp
+        else
+            parseError unless @idx == @tokens.length
+        end
+        @idx += 1
+    end
+    
+    def skipNewLine
+        while @tokens[@idx] == "\n"
+            @idx += 1
+        end
+    end
+    
+    def parsePredicateAtom
+        if @tokens[@idx] == "not"
+            @idx += 1
+            parsePredicateAtom
+        elsif @tokens[@idx] == "("
+            @idx += 1
+            skipNewLine
+            result = parsePredicate
+            parseError unless @tokens[@idx] == ")"
+            @idx += 1
+            result
+        elsif @tokens[@idx] == "true"
+            result = True.instance
+            @idx += 1
+            result
+        elsif @tokens[@idx] == "false"
+            result = False.instance
+            @idx += 1
+            result
+        elsif isIdentifier @tokens[@idx]
+            result = Setting.forName(@tokens[@idx].codeOrigin, @tokens[@idx].string)
+            @idx += 1
+            result
+        else
+            parseError
+        end
+    end
+    
+    def parsePredicateAnd
+        result = parsePredicateAtom
+        while @tokens[@idx] == "and"
+            codeOrigin = @tokens[@idx].codeOrigin
+            @idx += 1
+            skipNewLine
+            right = parsePredicateAtom
+            result = And.new(codeOrigin, result, right)
+        end
+        result
+    end
+    
+    def parsePredicate
+        # some examples of precedence:
+        # not a and b -> (not a) and b
+        # a and b or c -> (a and b) or c
+        # a or b and c -> a or (b and c)
+        
+        result = parsePredicateAnd
+        while @tokens[@idx] == "or"
+            codeOrigin = @tokens[@idx].codeOrigin
+            @idx += 1
+            skipNewLine
+            right = parsePredicateAnd
+            result = Or.new(codeOrigin, result, right)
+        end
+        result
+    end
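The precedence encoded by `parsePredicate` calling `parsePredicateAnd` in its loop -- "and" binding tighter than "or" -- can be demonstrated with a tiny standalone evaluator. A sketch over plain token arrays (parentheses and "not" omitted for brevity; the names are illustrative):

```ruby
# "or" layer: left-associative, operands are "and"-chains.
def evalPredicate(tokens, env)
    result, rest = evalAnd(tokens, env)
    while rest.first == "or"
        right, rest = evalAnd(rest[1..-1], env)
        result = result || right
    end
    [result, rest]
end

# "and" layer: binds tighter, so it is consumed before "or" is considered.
def evalAnd(tokens, env)
    result = env[tokens.first]
    rest = tokens[1..-1]
    while rest.first == "and"
        result = result && env[rest[1]]
        rest = rest[2..-1]
    end
    [result, rest]
end

# "a or b and c" groups as a or (b and c): true here, whereas
# (a or b) and c would be false.
value, = evalPredicate(%w[a or b and c],
                       { "a" => true, "b" => false, "c" => false })
```

This mirrors how the two-level recursive descent gives "and" higher precedence without any explicit precedence table.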
+    
+    def parseVariable
+        if isRegister(@tokens[@idx])
+            if @tokens[@idx] =~ FPR_PATTERN
+                result = FPRegisterID.forName(@tokens[@idx].codeOrigin, @tokens[@idx].string)
+            else
+                result = RegisterID.forName(@tokens[@idx].codeOrigin, @tokens[@idx].string)
+            end
+        elsif isIdentifier(@tokens[@idx])
+            result = Variable.forName(@tokens[@idx].codeOrigin, @tokens[@idx].string)
+        else
+            parseError
+        end
+        @idx += 1
+        result
+    end
+    
+    def parseAddress(offset)
+        parseError unless @tokens[@idx] == "["
+        codeOrigin = @tokens[@idx].codeOrigin
+        
+        # Four possibilities:
+        # []       -> AbsoluteAddress
+        # [a]      -> Address
+        # [a,b]    -> BaseIndex with scale = 1
+        # [a,b,c]  -> BaseIndex
+        
+        @idx += 1
+        if @tokens[@idx] == "]"
+            @idx += 1
+            return AbsoluteAddress.new(codeOrigin, offset)
+        end
+        a = parseVariable
+        if @tokens[@idx] == "]"
+            result = Address.new(codeOrigin, a, offset)
+        else
+            parseError unless @tokens[@idx] == ","
+            @idx += 1
+            b = parseVariable
+            if @tokens[@idx] == "]"
+                result = BaseIndex.new(codeOrigin, a, b, 1, offset)
+            else
+                parseError unless @tokens[@idx] == ","
+                @idx += 1
+                parseError unless ["1", "2", "4", "8"].member? @tokens[@idx].string
+                c = @tokens[@idx].string.to_i
+                @idx += 1
+                parseError unless @tokens[@idx] == "]"
+                result = BaseIndex.new(codeOrigin, a, b, c, offset)
+            end
+        end
+        @idx += 1
+        result
+    end
+    
+    def parseColonColon
+        skipNewLine
+        codeOrigin = @tokens[@idx].codeOrigin
+        parseError unless isIdentifier @tokens[@idx]
+        names = [@tokens[@idx].string]
+        @idx += 1
+        while @tokens[@idx] == "::"
+            @idx += 1
+            parseError unless isIdentifier @tokens[@idx]
+            names << @tokens[@idx].string
+            @idx += 1
+        end
+        raise if names.empty?
+        [codeOrigin, names]
+    end
+    
+    def parseExpressionAtom
+        skipNewLine
+        if @tokens[@idx] == "-"
+            @idx += 1
+            NegImmediate.new(@tokens[@idx - 1].codeOrigin, parseExpressionAtom)
+        elsif @tokens[@idx] == "("
+            @idx += 1
+            result = parseExpression
+            parseError unless @tokens[@idx] == ")"
+            @idx += 1
+            result
+        elsif isInteger @tokens[@idx]
+            result = Immediate.new(@tokens[@idx].codeOrigin, @tokens[@idx].string.to_i)
+            @idx += 1
+            result
+        elsif isIdentifier @tokens[@idx]
+            codeOrigin, names = parseColonColon
+            if names.size > 1
+                StructOffset.forField(codeOrigin, names[0..-2].join('::'), names[-1])
+            else
+                Variable.forName(codeOrigin, names[0])
+            end
+        elsif isRegister @tokens[@idx]
+            parseVariable
+        elsif @tokens[@idx] == "sizeof"
+            @idx += 1
+            codeOrigin, names = parseColonColon
+            Sizeof.forName(codeOrigin, names.join('::'))
+        else
+            parseError
+        end
+    end
+    
+    def parseExpressionMul
+        skipNewLine
+        result = parseExpressionAtom
+        while @tokens[@idx] == "*"
+            @idx += 1
+            result = MulImmediates.new(@tokens[@idx - 1].codeOrigin, result, parseExpressionAtom)
+        end
+        result
+    end
+    
+    def couldBeExpression
+        @tokens[@idx] == "-" or @tokens[@idx] == "sizeof" or isInteger(@tokens[@idx]) or isVariable(@tokens[@idx]) or @tokens[@idx] == "("
+    end
+    
+    def parseExpression
+        skipNewLine
+        result = parseExpressionMul
+        while @tokens[@idx] == "+" or @tokens[@idx] == "-"
+            if @tokens[@idx] == "+"
+                @idx += 1
+                result = AddImmediates.new(@tokens[@idx - 1].codeOrigin, result, parseExpressionMul)
+            elsif @tokens[@idx] == "-"
+                @idx += 1
+                result = SubImmediates.new(@tokens[@idx - 1].codeOrigin, result, parseExpressionMul)
+            else
+                raise
+            end
+        end
+        result
+    end
+    
+    def parseOperand(comment)
+        skipNewLine
+        if couldBeExpression
+            expr = parseExpression
+            if @tokens[@idx] == "["
+                parseAddress(expr)
+            else
+                expr
+            end
+        elsif @tokens[@idx] == "["
+            parseAddress(Immediate.new(@tokens[@idx].codeOrigin, 0))
+        elsif isLabel @tokens[@idx]
+            result = LabelReference.new(@tokens[@idx].codeOrigin, Label.forName(@tokens[@idx].codeOrigin, @tokens[@idx].string))
+            @idx += 1
+            result
+        elsif isLocalLabel @tokens[@idx]
+            result = LocalLabelReference.new(@tokens[@idx].codeOrigin, LocalLabel.forName(@tokens[@idx].codeOrigin, @tokens[@idx].string))
+            @idx += 1
+            result
+        else
+            parseError(comment)
+        end
+    end
+    
+    def parseMacroVariables
+        skipNewLine
+        consume(/\A\(\Z/)
+        variables = []
+        loop {
+            skipNewLine
+            if @tokens[@idx] == ")"
+                @idx += 1
+                break
+            elsif isIdentifier(@tokens[@idx])
+                variables << Variable.forName(@tokens[@idx].codeOrigin, @tokens[@idx].string)
+                @idx += 1
+                skipNewLine
+                if @tokens[@idx] == ")"
+                    @idx += 1
+                    break
+                elsif @tokens[@idx] == ","
+                    @idx += 1
+                else
+                    parseError
+                end
+            else
+                parseError
+            end
+        }
+        variables
+    end
+    
+    def parseSequence(final, comment)
+        firstCodeOrigin = @tokens[@idx].codeOrigin
+        list = []
+        loop {
+            if (@idx == @tokens.length and not final) or (final and @tokens[@idx] =~ final)
+                break
+            elsif @tokens[@idx] == "\n"
+                # ignore
+                @idx += 1
+            elsif @tokens[@idx] == "const"
+                @idx += 1
+                parseError unless isVariable @tokens[@idx]
+                variable = Variable.forName(@tokens[@idx].codeOrigin, @tokens[@idx].string)
+                @idx += 1
+                parseError unless @tokens[@idx] == "="
+                @idx += 1
+                value = parseOperand("while inside of const #{variable.name}")
+                list << ConstDecl.new(@tokens[@idx].codeOrigin, variable, value)
+            elsif @tokens[@idx] == "error"
+                list << Error.new(@tokens[@idx].codeOrigin)
+                @idx += 1
+            elsif @tokens[@idx] == "if"
+                codeOrigin = @tokens[@idx].codeOrigin
+                @idx += 1
+                skipNewLine
+                predicate = parsePredicate
+                consume(/\A((then)|(\n))\Z/)
+                skipNewLine
+                ifThenElse = IfThenElse.new(codeOrigin, predicate, parseSequence(/\A((else)|(end)|(elsif))\Z/, "while inside of \"if #{predicate.dump}\""))
+                list << ifThenElse
+                while @tokens[@idx] == "elsif"
+                    codeOrigin = @tokens[@idx].codeOrigin
+                    @idx += 1
+                    skipNewLine
+                    predicate = parsePredicate
+                    consume(/\A((then)|(\n))\Z/)
+                    skipNewLine
+                    elseCase = IfThenElse.new(codeOrigin, predicate, parseSequence(/\A((else)|(end)|(elsif))\Z/, "while inside of \"if #{predicate.dump}\""))
+                    ifThenElse.elseCase = elseCase
+                    ifThenElse = elseCase
+                end
+                if @tokens[@idx] == "else"
+                    @idx += 1
+                    ifThenElse.elseCase = parseSequence(/\Aend\Z/, "while inside of else case for \"if #{predicate.dump}\"")
+                    @idx += 1
+                else
+                    parseError unless @tokens[@idx] == "end"
+                    @idx += 1
+                end
+            elsif @tokens[@idx] == "macro"
+                codeOrigin = @tokens[@idx].codeOrigin
+                @idx += 1
+                skipNewLine
+                parseError unless isIdentifier(@tokens[@idx])
+                name = @tokens[@idx].string
+                @idx += 1
+                variables = parseMacroVariables
+                body = parseSequence(/\Aend\Z/, "while inside of macro #{name}")
+                @idx += 1
+                list << Macro.new(codeOrigin, name, variables, body)
+            elsif isInstruction @tokens[@idx]
+                codeOrigin = @tokens[@idx].codeOrigin
+                name = @tokens[@idx].string
+                @idx += 1
+                if (not final and @idx == @tokens.size) or (final and @tokens[@idx] =~ final)
+                    # Zero operand instruction, and it's the last one.
+                    list << Instruction.new(codeOrigin, name, [])
+                    break
+                elsif @tokens[@idx] == "\n"
+                    # Zero operand instruction.
+                    list << Instruction.new(codeOrigin, name, [])
+                    @idx += 1
+                else
+                    # It's definitely an instruction, and it has at least one operand.
+                    operands = []
+                    endOfSequence = false
+                    loop {
+                        operands << parseOperand("while inside of instruction #{name}")
+                        if (not final and @idx == @tokens.size) or (final and @tokens[@idx] =~ final)
+                            # The end of the instruction and of the sequence.
+                            endOfSequence = true
+                            break
+                        elsif @tokens[@idx] == ","
+                            # Has another operand.
+                            @idx += 1
+                        elsif @tokens[@idx] == "\n"
+                            # The end of the instruction.
+                            @idx += 1
+                            break
+                        else
+                            parseError("Expected a comma, newline, or #{final} after #{operands.last.dump}")
+                        end
+                    }
+                    list << Instruction.new(codeOrigin, name, operands)
+                    if endOfSequence
+                        break
+                    end
+                end
+            elsif isIdentifier @tokens[@idx]
+                codeOrigin = @tokens[@idx].codeOrigin
+                name = @tokens[@idx].string
+                @idx += 1
+                if @tokens[@idx] == "("
+                    # Macro invocation.
+                    @idx += 1
+                    operands = []
+                    skipNewLine
+                    if @tokens[@idx] == ")"
+                        @idx += 1
+                    else
+                        loop {
+                            skipNewLine
+                            if @tokens[@idx] == "macro"
+                                # It's a macro lambda!
+                                codeOriginInner = @tokens[@idx].codeOrigin
+                                @idx += 1
+                                variables = parseMacroVariables
+                                body = parseSequence(/\Aend\Z/, "while inside of anonymous macro passed as argument to #{name}")
+                                @idx += 1
+                                operands << Macro.new(codeOriginInner, nil, variables, body)
+                            else
+                                operands << parseOperand("while inside of macro call to #{name}")
+                            end
+                            skipNewLine
+                            if @tokens[@idx] == ")"
+                                @idx += 1
+                                break
+                            elsif @tokens[@idx] == ","
+                                @idx += 1
+                            else
+                                parseError "Unexpected #{@tokens[@idx].string.inspect} while parsing invocation of macro #{name}"
+                            end
+                        }
+                    end
+                    list << MacroCall.new(codeOrigin, name, operands)
+                else
+                    parseError "Expected \"(\" after #{name}"
+                end
+            elsif isLabel @tokens[@idx] or isLocalLabel @tokens[@idx]
+                codeOrigin = @tokens[@idx].codeOrigin
+                name = @tokens[@idx].string
+                @idx += 1
+                parseError unless @tokens[@idx] == ":"
+                # It's a label.
+                if isLabel name
+                    list << Label.forName(codeOrigin, name)
+                else
+                    list << LocalLabel.forName(codeOrigin, name)
+                end
+                @idx += 1
+            else
+                parseError "Expecting terminal #{final} #{comment}"
+            end
+        }
+        Sequence.new(firstCodeOrigin, list)
+    end
+end
+
+def parse(tokens)
+    parser = Parser.new(tokens)
+    parser.parseSequence(nil, "")
+end
+
diff --git a/Source/JavaScriptCore/offlineasm/registers.rb b/Source/JavaScriptCore/offlineasm/registers.rb
new file mode 100644 (file)
index 0000000..75fae41
--- /dev/null
@@ -0,0 +1,60 @@
+# Copyright (C) 2011 Apple Inc. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+# 1. Redistributions of source code must retain the above copyright
+#    notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+#    notice, this list of conditions and the following disclaimer in the
+#    documentation and/or other materials provided with the distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+# BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+# THE POSSIBILITY OF SUCH DAMAGE.
+
+GPRS =
+    [
+     "t0",
+     "t1",
+     "t2",
+     "t3",
+     "t4",
+     "cfr",
+     "a0",
+     "a1",
+     "r0",
+     "r1",
+     "sp",
+     "lr"
+    ]
+
+FPRS =
+    [
+     "ft0",
+     "ft1",
+     "ft2",
+     "ft3",
+     "ft4",
+     "ft5",
+     "fa0",
+     "fa1",
+     "fa2",
+     "fa3",
+     "fr"
+    ]
+
+REGISTERS = GPRS + FPRS
+
+GPR_PATTERN = Regexp.new('\\A((' + GPRS.join(')|(') + '))\\Z')
+FPR_PATTERN = Regexp.new('\\A((' + FPRS.join(')|(') + '))\\Z')
+
+REGISTER_PATTERN = Regexp.new('\\A((' + REGISTERS.join(')|(') + '))\\Z')
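The anchored-alternation patterns above let later passes classify a token as a GPR or FPR name with a single regexp match. A standalone sketch of the same trick (names copied from the GPRS list, matcher name invented):

```ruby
# Join the register names into one regexp that only matches a complete
# token: \A and \Z anchor the alternation so "cfrx" is not mistaken for "cfr".
GPR_NAMES = ["t0", "t1", "t2", "t3", "t4", "cfr", "a0", "a1", "r0", "r1", "sp", "lr"]
GPR_MATCHER = Regexp.new('\\A((' + GPR_NAMES.join(')|(') + '))\\Z')

puts(("cfr" =~ GPR_MATCHER) != nil)  # exact register name matches
puts(("cfrx" =~ GPR_MATCHER) != nil) # partial match is rejected by the anchors
```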
diff --git a/Source/JavaScriptCore/offlineasm/self_hash.rb b/Source/JavaScriptCore/offlineasm/self_hash.rb
new file mode 100644 (file)
index 0000000..a7b51e1
--- /dev/null
@@ -0,0 +1,46 @@
+# Copyright (C) 2011 Apple Inc. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+# 1. Redistributions of source code must retain the above copyright
+#    notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+#    notice, this list of conditions and the following disclaimer in the
+#    documentation and/or other materials provided with the distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+# BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+# THE POSSIBILITY OF SUCH DAMAGE.
+
+require "digest/sha1"
+require "pathname"
+
+#
+# selfHash -> SHA1 hexdigest
+#
+# Returns a hash of the offlineasm source code. This allows dependency
+# tracking for not just changes in input, but also changes in the assembler
+# itself.
+#
+
+def selfHash
+    contents = ""
+    myPath = Pathname.new(__FILE__).dirname
+    Dir.foreach(myPath) {
+        | entry |
+        if entry =~ /\.rb$/
+            contents += IO::read(myPath + entry)
+        end
+    }
+    return Digest::SHA1.hexdigest(contents)
+end
+
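The point of selfHash is that a build step can rebuild its output not only when the input `.asm` changes, but also when the generator's own sources change. A self-contained sketch of the same idea, using a temporary directory and an invented helper name:

```ruby
require "digest/sha1"
require "tmpdir"

# Hash the concatenated source of every .rb file in a directory, so a
# dependency check can detect edits to the generator itself.
def hashOfRubySources(dir)
    contents = ""
    Dir.foreach(dir) {
        | entry |
        contents += IO::read(File.join(dir, entry)) if entry =~ /\.rb$/
    }
    Digest::SHA1.hexdigest(contents)
end

Dir.mktmpdir {
    | dir |
    File.write(File.join(dir, "a.rb"), "puts 1\n")
    before = hashOfRubySources(dir)
    File.write(File.join(dir, "a.rb"), "puts 2\n")
    after = hashOfRubySources(dir)
    puts(before != after) # any edit to the sources changes the hash
}
```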
diff --git a/Source/JavaScriptCore/offlineasm/settings.rb b/Source/JavaScriptCore/offlineasm/settings.rb
new file mode 100644 (file)
index 0000000..3459818
--- /dev/null
@@ -0,0 +1,205 @@
+# Copyright (C) 2011 Apple Inc. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+# 1. Redistributions of source code must retain the above copyright
+#    notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+#    notice, this list of conditions and the following disclaimer in the
+#    documentation and/or other materials provided with the distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+# BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+# THE POSSIBILITY OF SUCH DAMAGE.
+
+require "ast"
+require "backends"
+require "parser"
+require "transform"
+
+#
+# computeSettingsCombinations(ast) -> settingsCombinations
+#
+# Computes an array of settings maps, where a settings map constitutes
+# a configuration for the assembly code being generated. The map
+# contains key value pairs where keys are settings names (strings) and
+# the values are booleans (true for enabled, false for disabled).
+#
+
+def computeSettingsCombinations(ast)
+    settingsCombinations = []
+    
+    def settingsCombinator(settingsCombinations, mapSoFar, remaining)
+        if remaining.empty?
+            settingsCombinations << mapSoFar
+            return
+        end
+        
+        newMap = mapSoFar.dup
+        newMap[remaining[0]] = true
+        settingsCombinator(settingsCombinations, newMap, remaining[1..-1])
+        
+        newMap = mapSoFar.dup
+        newMap[remaining[0]] = false
+        settingsCombinator(settingsCombinations, newMap, remaining[1..-1])
+    end
+    
+    settingsCombinator(settingsCombinations, {}, (ast.filter(Setting).uniq.collect{|v| v.name} + ["X86", "ARMv7"]).uniq)
+    
+    settingsCombinations
+end
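The recursive combinator above enumerates every true/false assignment over the setting names, i.e. the power set of the settings. A minimal standalone sketch of that enumeration (invented function name, no AST dependency):

```ruby
# Enumerate all 2^n boolean assignments over a list of setting names,
# mirroring what settingsCombinator builds up via mapSoFar/remaining.
def allCombinations(names)
    return [{}] if names.empty?
    rest = allCombinations(names[1..-1])
    rest.collect { |m| m.merge(names[0] => true) } +
        rest.collect { |m| m.merge(names[0] => false) }
end

puts allCombinations(["X86", "ARMv7"]).size # 2 settings -> 4 maps
```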
+
+#
+# forSettings(concreteSettings, ast) {
+#     | concreteSettings, lowLevelAST, backend | ... }
+#
+# Determines if the settings combination is valid, and if so, calls
+# the block with the information you need to generate code.
+#
+
+def forSettings(concreteSettings, ast)
+    # Check which architectures this settings combination claims to support.
+    numClaimedBackends = 0
+    selectedBackend = nil
+    BACKENDS.each {
+        | backend |
+        isSupported = concreteSettings[backend]
+        raise unless isSupported != nil
+        numClaimedBackends += if isSupported then 1 else 0 end
+        if isSupported
+            selectedBackend = backend
+        end
+    }
+    
+    return if numClaimedBackends > 1
+    
+    # Resolve the AST down to a low-level form (no macros or conditionals).
+    lowLevelAST = ast.resolveSettings(concreteSettings)
+    
+    yield concreteSettings, lowLevelAST, selectedBackend
+end
+
+#
+# forEachValidSettingsCombination(ast) {
+#     | concreteSettings, ast, backend, index | ... }
+#
+# forEachValidSettingsCombination(ast, settingsCombinations) {
+#     | concreteSettings, ast, backend, index | ... }
+#
+# Executes the given block for each valid settings combination in the
+# settings map. The ast passed into the block is resolved
+# (ast.resolve) against the settings.
+#
+# The first form will call computeSettingsCombinations(ast) for you.
+#
+
+def forEachValidSettingsCombination(ast, *optionalSettingsCombinations)
+    raise if optionalSettingsCombinations.size > 1
+    
+    if optionalSettingsCombinations.empty?
+        settingsCombinations = computeSettingsCombinations(ast)
+    else
+        settingsCombinations = optionalSettingsCombinations[0]
+    end
+    
+    settingsCombinations.each_with_index {
+        | concreteSettings, index |
+        forSettings(concreteSettings, ast) {
+            | concreteSettings_, lowLevelAST, backend |
+            yield concreteSettings, lowLevelAST, backend, index
+        }
+    }
+end
+
+#
+# cppSettingsTest(concreteSettings)
+#
+# Returns the C++ code used to test if we are in a configuration that
+# corresponds to the given concrete settings.
+#
+
+def cppSettingsTest(concreteSettings)
+    "#if " + concreteSettings.to_a.collect{
+        | pair |
+        (if pair[1]
+             ""
+         else
+             "!"
+         end) + "OFFLINE_ASM_" + pair[0]
+    }.join(" && ")
+end
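To make the emitted guard concrete, here is a standalone re-implementation sketch of the same transformation (invented function name), showing the preprocessor condition produced for one settings map:

```ruby
# Turn a settings map into a C preprocessor guard: each enabled setting
# becomes OFFLINE_ASM_<NAME>, each disabled one is negated with "!".
def settingsGuard(settings)
    "#if " + settings.to_a.collect {
        | pair |
        (pair[1] ? "" : "!") + "OFFLINE_ASM_" + pair[0]
    }.join(" && ")
end

puts settingsGuard({"X86" => true, "ARMv7" => false})
# => #if OFFLINE_ASM_X86 && !OFFLINE_ASM_ARMv7
```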
+
+#
+# isASTErroneous(ast)
+#
+# Tests to see if the AST claims that there is an error - i.e. if the
+# user's code, after settings resolution, has Error nodes.
+#
+
+def isASTErroneous(ast)
+    not ast.filter(Error).empty?
+end
+
+#
+# assertConfiguration(concreteSettings)
+#
+# Emits a check that asserts that we're using the given configuration.
+#
+
+def assertConfiguration(concreteSettings)
+    $output.puts cppSettingsTest(concreteSettings)
+    $output.puts "#else"
+    $output.puts "#error \"Configuration mismatch.\""
+    $output.puts "#endif"
+end
+
+#
+# emitCodeInConfiguration(concreteSettings, ast, backend) {
+#     | concreteSettings, ast, backend | ... }
+#
+# Emits all relevant guards to see if the configuration holds and
+# calls the block if the configuration is not erroneous.
+#
+
+def emitCodeInConfiguration(concreteSettings, ast, backend)
+    $output.puts cppSettingsTest(concreteSettings)
+    
+    if isASTErroneous(ast)
+        $output.puts "#error \"Invalid configuration.\""
+    elsif not WORKING_BACKENDS.include? backend
+        $output.puts "#error \"This backend is not supported yet.\""
+    else
+        yield concreteSettings, ast, backend
+    end
+    
+    $output.puts "#endif"
+end
+
+#
+# emitCodeInAllConfigurations(ast) {
+#     | concreteSettings, ast, backend, index | ... }
+#
+# Emits guard codes for all valid configurations, and calls the block
+# for those configurations that are valid and not erroneous.
+#
+
+def emitCodeInAllConfigurations(ast)
+    forEachValidSettingsCombination(ast) {
+        | concreteSettings, lowLevelAST, backend, index |
+        $output.puts cppSettingsTest(concreteSettings)
+        yield concreteSettings, lowLevelAST, backend, index
+        $output.puts "#endif"
+    }
+end
+
+
+
diff --git a/Source/JavaScriptCore/offlineasm/transform.rb b/Source/JavaScriptCore/offlineasm/transform.rb
new file mode 100644 (file)
index 0000000..5f5024d
--- /dev/null
@@ -0,0 +1,342 @@
+# Copyright (C) 2011 Apple Inc. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+# 1. Redistributions of source code must retain the above copyright
+#    notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+#    notice, this list of conditions and the following disclaimer in the
+#    documentation and/or other materials provided with the distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+# BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+# THE POSSIBILITY OF SUCH DAMAGE.
+
+require "ast"
+
+#
+# node.resolveSettings(settings)
+#
+# Construct a new AST that does not have any IfThenElse nodes by
+# substituting concrete boolean values for each Setting.
+#
+
+class Node
+    def resolveSettings(settings)
+        mapChildren {
+            | child |
+            child.resolveSettings(settings)
+        }
+    end
+end
+
+class True
+    def resolveSettings(settings)
+        self
+    end
+end
+
+class False
+    def resolveSettings(settings)
+        self
+    end
+end
+
+class Setting
+    def resolveSettings(settings)
+        settings[@name].asNode
+    end
+end
+
+class And
+    def resolveSettings(settings)
+        (@left.resolveSettings(settings).value and @right.resolveSettings(settings).value).asNode
+    end
+end
+
+class Or
+    def resolveSettings(settings)
+        (@left.resolveSettings(settings).value or @right.resolveSettings(settings).value).asNode
+    end
+end
+
+class Not
+    def resolveSettings(settings)
+        (not @child.resolveSettings(settings).value).asNode
+    end
+end
+
+class IfThenElse
+    def resolveSettings(settings)
+        if @predicate.resolveSettings(settings).value
+            @thenCase.resolveSettings(settings)
+        else
+            @elseCase.resolveSettings(settings)
+        end
+    end
+end
+
+class Sequence
+    def resolveSettings(settings)
+        newList = []
+        @list.each {
+            | item |
+            item = item.resolveSettings(settings)
+            if item.is_a? Sequence
+                newList += item.list
+            else
+                newList << item
+            end
+        }
+        Sequence.new(codeOrigin, newList)
+    end
+end
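The effect of settings resolution is that conditionals are evaluated against the settings map and replaced by the taken branch, so the resolved tree contains no IfThenElse nodes. A toy sketch with invented node structs (not the real AST classes):

```ruby
# Minimal model of resolveSettings: a Setting leaf becomes the concrete
# boolean, and an if/then/else collapses to whichever branch is taken.
SettingNode = Struct.new(:name)
IfNode = Struct.new(:predicate, :thenCase, :elseCase)

def resolveNode(node, settings)
    case node
    when SettingNode
        settings[node.name]
    when IfNode
        if resolveNode(node.predicate, settings)
            resolveNode(node.thenCase, settings)
        else
            resolveNode(node.elseCase, settings)
        end
    else
        node
    end
end

ast = IfNode.new(SettingNode.new("JIT_ENABLED"), "fastPath", "slowPath")
puts resolveNode(ast, {"JIT_ENABLED" => true}) # => fastPath
```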
+
+#
+# node.demacroify(macros)
+# node.substitute(mapping)
+#
+# demacroify() constructs a new AST that does not have any Macro
+# nodes, while substitute() replaces Variable nodes with the given
+# nodes in the mapping.
+#
+
+class Node
+    def demacroify(macros)
+        mapChildren {
+            | child |
+            child.demacroify(macros)
+        }
+    end
+    
+    def substitute(mapping)
+        mapChildren {
+            | child |
+            child.substitute(mapping)
+        }
+    end
+    
+    def substituteLabels(mapping)
+        mapChildren {
+            | child |
+            child.substituteLabels(mapping)
+        }
+    end
+end
+
+class Macro
+    def substitute(mapping)
+        myMapping = {}
+        mapping.each_pair {
+            | key, value |
+            unless @variables.include? key
+                myMapping[key] = value
+            end
+        }
+        mapChildren {
+            | child |
+            child.substitute(myMapping)
+        }
+    end
+end
+
+class Variable
+    def substitute(mapping)
+        if mapping[self]
+            mapping[self]
+        else
+            self
+        end
+    end
+end
+
+class LocalLabel
+    def substituteLabels(mapping)
+        if mapping[self]
+            mapping[self]
+        else
+            self
+        end
+    end
+end
+
+class Sequence
+    def substitute(constants)
+        newList = []
+        myConstants = constants.dup
+        @list.each {
+            | item |
+            if item.is_a? ConstDecl
+                myConstants[item.variable] = item.value.substitute(myConstants)
+            else
+                newList << item.substitute(myConstants)
+            end
+        }
+        Sequence.new(codeOrigin, newList)
+    end
+    
+    def renameLabels(comment)
+        mapping = {}
+        
+        @list.each {
+            | item |
+            if item.is_a? LocalLabel
+                mapping[item] = LocalLabel.unique(if comment then comment + "_" else "" end + item.cleanName)
+            end
+        }
+        
+        substituteLabels(mapping)
+    end
+    
+    def demacroify(macros)
+        myMacros = macros.dup
+        @list.each {
+            | item |
+            if item.is_a? Macro
+                myMacros[item.name] = item
+            end
+        }
+        newList = []
+        @list.each {
+            | item |
+            if item.is_a? Macro
+                # Ignore.
+            elsif item.is_a? MacroCall
+                mapping = {}
+                myMyMacros = myMacros.dup
+                raise "Could not find macro #{item.name} at #{item.codeOriginString}" unless myMacros[item.name]
+                raise "Argument count mismatch for call to #{item.name} at #{item.codeOriginString}" unless item.operands.size == myMacros[item.name].variables.size
+                item.operands.size.times {
+                    | idx |
+                    if item.operands[idx].is_a? Variable and myMacros[item.operands[idx].name]
+                        myMyMacros[myMacros[item.name].variables[idx].name] = myMacros[item.operands[idx].name]
+                        mapping[myMacros[item.name].variables[idx].name] = nil
+                    elsif item.operands[idx].is_a? Macro
+                        myMyMacros[myMacros[item.name].variables[idx].name] = item.operands[idx]
+                        mapping[myMacros[item.name].variables[idx].name] = nil
+                    else
+                        myMyMacros[myMacros[item.name].variables[idx]] = nil
+                        mapping[myMacros[item.name].variables[idx]] = item.operands[idx]
+                    end
+                }
+                newList += myMacros[item.name].body.substitute(mapping).demacroify(myMyMacros).renameLabels(item.name).list
+            else
+                newList << item.demacroify(myMacros)
+            end
+        }
+        Sequence.new(codeOrigin, newList).substitute({})
+    end
+end
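At its core, the macro-call expansion above substitutes the call's operands for the macro's formal variables throughout the macro body. A toy sketch of just that substitution step, with an invented token-list representation instead of the real AST:

```ruby
# A macro body as a flat token list; expansion maps each formal variable
# to the corresponding operand and leaves other tokens untouched.
MacroDef = Struct.new(:variables, :body)

def expandMacro(macro, operands)
    mapping = Hash[macro.variables.zip(operands)]
    macro.body.collect {
        | token |
        mapping.has_key?(token) ? mapping[token] : token
    }
end

dispatch = MacroDef.new(["advance"], ["addp", "advance", "PC", "jmp", "[PC]"])
puts expandMacro(dispatch, ["8"]).join(" ") # => addp 8 PC jmp [PC]
```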
+
+#
+# node.resolveOffsets(offsets, sizes)
+#
+# Construct a new AST that has offset values instead of symbolic
+# offsets.
+#
+
+class Node
+    def resolveOffsets(offsets, sizes)
+        mapChildren {
+            | child |
+            child.resolveOffsets(offsets, sizes)
+        }
+    end
+end
+
+class StructOffset
+    def resolveOffsets(offsets, sizes)
+        if offsets[self]
+            Immediate.new(codeOrigin, offsets[self])
+        else
+            self
+        end
+    end
+end
+
+class Sizeof
+    def resolveOffsets(offsets, sizes)
+        if sizes[self]
+            Immediate.new(codeOrigin, sizes[self])
+        else
+            puts "Could not find #{self.inspect} in #{sizes.keys.inspect}"
+            puts "sizes = #{sizes.inspect}"
+            self
+        end
+    end
+end
+
+#
+# node.fold
+#
+# Resolve constant references and compute arithmetic expressions.
+#
+
+class Node
+    def fold
+        mapChildren {
+            | child |
+            child.fold
+        }
+    end
+end
+
+class AddImmediates
+    def fold
+        @left = @left.fold
+        @right = @right.fold
+        return self unless @left.is_a? Immediate
+        return self unless @right.is_a? Immediate
+        Immediate.new(codeOrigin, @left.value + @right.value)
+    end
+end
+
+class SubImmediates
+    def fold
+        @left = @left.fold
+        @right = @right.fold
+        return self unless @left.is_a? Immediate
+        return self unless @right.is_a? Immediate
+        Immediate.new(codeOrigin, @left.value - @right.value)
+    end
+end
+
+class MulImmediates
+    def fold
+        @left = @left.fold
+        @right = @right.fold
+        return self unless @left.is_a? Immediate
+        return self unless @right.is_a? Immediate
+        Immediate.new(codeOrigin, @left.value * @right.value)
+    end
+end
+
+class NegImmediate
+    def fold
+        @child = @child.fold
+        return self unless @child.is_a? Immediate
+        Immediate.new(codeOrigin, -@child.value)
+    end
+end
+
+#
+# node.resolve(offsets, sizes)
+#
+# Compile assembly against a set of offsets.
+#
+
+class Node
+    def resolve(offsets, sizes)
+        demacroify({}).resolveOffsets(offsets, sizes).fold
+    end
+end
+
diff --git a/Source/JavaScriptCore/offlineasm/x86.rb b/Source/JavaScriptCore/offlineasm/x86.rb
new file mode 100644 (file)
index 0000000..b89f2d9
--- /dev/null
@@ -0,0 +1,681 @@
+# Copyright (C) 2011 Apple Inc. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+# 1. Redistributions of source code must retain the above copyright
+#    notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+#    notice, this list of conditions and the following disclaimer in the
+#    documentation and/or other materials provided with the distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+# BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+# THE POSSIBILITY OF SUCH DAMAGE.
+
+class RegisterID
+    def supports8BitOnX86
+        case name
+        when "t0", "a0", "r0", "t1", "a1", "r1", "t2", "t3"
+            true
+        when "t4", "cfr"
+            false
+        else
+            raise
+        end
+    end
+    
+    def x86Operand(kind)
+        case name
+        when "t0", "a0", "r0"
+            case kind
+            when :byte
+                "%al"
+            when :half
+                "%ax"
+            when :int
+                "%eax"
+            else
+                raise
+            end
+        when "t1", "a1", "r1"
+            case kind
+            when :byte
+                "%dl"
+            when :half
+                "%dx"
+            when :int
+                "%edx"
+            else
+                raise
+            end
+        when "t2"
+            case kind
+            when :byte
+                "%cl"
+            when :half
+                "%cx"
+            when :int
+                "%ecx"
+            else
+                raise
+            end
+        when "t3"
+            case kind
+            when :byte
+                "%bl"
+            when :half
+                "%bx"
+            when :int
+                "%ebx"
+            else
+                raise
+            end
+        when "t4"
+            case kind
+            when :byte
+                "%sil"
+            when :half
+                "%si"
+            when :int
+                "%esi"
+            else
+                raise
+            end
+        when "cfr"
+            case kind
+            when :byte
+                "%dil"
+            when :half
+                "%di"
+            when :int
+                "%edi"
+            else
+                raise
+            end
+        when "sp"
+            case kind
+            when :byte
+                "%spl"
+            when :half
+                "%sp"
+            when :int
+                "%esp"
+            else
+                raise
+            end
+        else
+            raise "Bad register #{name} for X86 at #{codeOriginString}"
+        end
+    end
+    def x86CallOperand(kind)
+        "*#{x86Operand(kind)}"
+    end
+end
+
+class FPRegisterID
+    def x86Operand(kind)
+        raise unless kind == :double
+        case name
+        when "ft0", "fa0", "fr"
+            "%xmm0"
+        when "ft1", "fa1"
+            "%xmm1"
+        when "ft2", "fa2"
+            "%xmm2"
+        when "ft3", "fa3"
+            "%xmm3"
+        when "ft4"
+            "%xmm4"
+        when "ft5"
+            "%xmm5"
+        else
+            raise "Bad register #{name} for X86 at #{codeOriginString}"
+        end
+    end
+    def x86CallOperand(kind)
+        "*#{x86Operand(kind)}"
+    end
+end
+
+class Immediate
+    def x86Operand(kind)
+        "$#{value}"
+    end
+    def x86CallOperand(kind)
+        "#{value}"
+    end
+end
+
+class Address
+    def supports8BitOnX86
+        true
+    end
+    
+    def x86Operand(kind)
+        "#{offset.value}(#{base.x86Operand(:int)})"
+    end
+    def x86CallOperand(kind)
+        "*#{x86Operand(kind)}"
+    end
+end
+
+class BaseIndex
+    def supports8BitOnX86
+        true
+    end
+    
+    def x86Operand(kind)
+        "#{offset.value}(#{base.x86Operand(:int)}, #{index.x86Operand(:int)}, #{scale})"
+    end
+
+    def x86CallOperand(kind)
+        "*#{x86Operand(kind)}"
+    end
+end
+
+class AbsoluteAddress
+    def supports8BitOnX86
+        true
+    end
+    
+    def x86Operand(kind)
+        "#{address.value}"
+    end
+
+    def x86CallOperand(kind)
+        "*#{address.value}"
+    end
+end
+
+class LabelReference
+    def x86CallOperand(kind)
+        asmLabel
+    end
+end
+
+class LocalLabelReference
+    def x86CallOperand(kind)
+        asmLabel
+    end
+end
+
+class Instruction
+    def x86Operands(*kinds)
+        raise unless kinds.size == operands.size
+        result = []
+        kinds.size.times {
+            | idx |
+            result << operands[idx].x86Operand(kinds[idx])
+        }
+        result.join(", ")
+    end
+
+    def x86Suffix(kind)
+        case kind
+        when :byte
+            "b"
+        when :half
+            "w"
+        when :int
+            "l"
+        when :double
+            "sd"
+        else
+            raise
+        end
+    end
+    
+    def handleX86OpWithNumOperands(opcode, kind, numOperands)
+        if numOperands == 3
+            if operands[0] == operands[2]
+                $asm.puts "#{opcode} #{operands[1].x86Operand(kind)}, #{operands[2].x86Operand(kind)}"
+            elsif operands[1] == operands[2]
+                $asm.puts "#{opcode} #{operands[0].x86Operand(kind)}, #{operands[2].x86Operand(kind)}"
+            else
+                $asm.puts "mov#{x86Suffix(kind)} #{operands[0].x86Operand(kind)}, #{operands[2].x86Operand(kind)}"
+                $asm.puts "#{opcode} #{operands[1].x86Operand(kind)}, #{operands[2].x86Operand(kind)}"
+            end
+        else
+            $asm.puts "#{opcode} #{operands[0].x86Operand(kind)}, #{operands[1].x86Operand(kind)}"
+        end
+    end
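The method above bridges the offlineasm three-operand form onto x86's two-operand instructions: when the destination aliases a source the op is emitted directly, otherwise a `mov` into the destination comes first. A string-level sketch of the same lowering (invented function name, `:int` kind assumed):

```ruby
# Lower "op a, b, dest" to two-operand x86: reuse dest when it aliases a
# source, otherwise materialize a into dest and then apply the op with b.
def lowerThreeOperand(opcode, a, b, dest)
    if a == dest
        ["#{opcode} #{b}, #{dest}"]
    elsif b == dest
        ["#{opcode} #{a}, #{dest}"]
    else
        ["movl #{a}, #{dest}", "#{opcode} #{b}, #{dest}"]
    end
end

puts lowerThreeOperand("addl", "%eax", "%edx", "%ecx")
# movl %eax, %ecx
# addl %edx, %ecx
```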
+    
+    def handleX86Op(opcode, kind)
+        handleX86OpWithNumOperands(opcode, kind, operands.size)
+    end
+    
+    def handleX86Shift(opcode, kind)
+        if operands[0].is_a? Immediate or operands[0] == RegisterID.forName(nil, "t2")
+            $asm.puts "#{opcode} #{operands[0].x86Operand(:byte)}, #{operands[1].x86Operand(kind)}"
+        else
+            $asm.puts "xchgl #{operands[0].x86Operand(:int)}, %ecx"
+            $asm.puts "#{opcode} %cl, #{operands[1].x86Operand(kind)}"
+            $asm.puts "xchgl #{operands[0].x86Operand(:int)}, %ecx"
+        end
+    end
+    
+    def handleX86DoubleBranch(branchOpcode, mode)
+        case mode
+        when :normal
+            $asm.puts "ucomisd #{operands[1].x86Operand(:double)}, #{operands[0].x86Operand(:double)}"
+        when :reverse
+            $asm.puts "ucomisd #{operands[0].x86Operand(:double)}, #{operands[1].x86Operand(:double)}"
+        else
+            raise mode.inspect
+        end
+        $asm.puts "#{branchOpcode} #{operands[2].asmLabel}"
+    end
+    
+    def handleX86IntCompare(opcodeSuffix, kind)
+        if operands[0].is_a? Immediate and operands[0].value == 0 and operands[1].is_a? RegisterID and (opcodeSuffix == "e" or opcodeSuffix == "ne")
+            $asm.puts "test#{x86Suffix(kind)} #{operands[1].x86Operand(kind)}, #{operands[1].x86Operand(kind)}"
+        elsif operands[1].is_a? Immediate and operands[1].value == 0 and operands[0].is_a? RegisterID and (opcodeSuffix == "e" or opcodeSuffix == "ne")
+            $asm.puts "test#{x86Suffix(kind)} #{operands[0].x86Operand(kind)}, #{operands[0].x86Operand(kind)}"
+        else
+            $asm.puts "cmp#{x86Suffix(kind)} #{operands[1].x86Operand(kind)}, #{operands[0].x86Operand(kind)}"
+        end
+    end
+    
+    def handleX86IntBranch(branchOpcode, kind)
+        handleX86IntCompare(branchOpcode[1..-1], kind)
+        $asm.puts "#{branchOpcode} #{operands[2].asmLabel}"
+    end
+    
+    def handleX86Set(setOpcode, operand)
+        if operand.supports8BitOnX86
+            $asm.puts "#{setOpcode} #{operand.x86Operand(:byte)}"
+            $asm.puts "movzbl #{operand.x86Operand(:byte)}, #{operand.x86Operand(:int)}"
+        else
+            $asm.puts "xchgl #{operand.x86Operand(:int)}, %eax"
+            $asm.puts "#{setOpcode} %al"
+            $asm.puts "movzbl %al, %eax"
+            $asm.puts "xchgl #{operand.x86Operand(:int)}, %eax"
+        end
+    end
+    
+    def handleX86IntCompareSet(setOpcode, kind)
+        handleX86IntCompare(setOpcode[3..-1], kind)
+        handleX86Set(setOpcode, operands[2])
+    end
+    
+    def handleX86Test(kind)
+        value = operands[0]
+        case operands.size
+        when 2
+            mask = Immediate.new(codeOrigin, -1)
+        when 3
+            mask = operands[1]
+        else
+            raise "Expected 2 or 3 operands, but got #{operands.size} at #{codeOriginString}"
+        end
+        
+        if mask.is_a? Immediate and mask.value == -1
+            if value.is_a? RegisterID
+                $asm.puts "test#{x86Suffix(kind)} #{value.x86Operand(kind)}, #{value.x86Operand(kind)}"
+            else
+                $asm.puts "cmp#{x86Suffix(kind)} $0, #{value.x86Operand(kind)}"
+            end
+        else
+            $asm.puts "test#{x86Suffix(kind)} #{mask.x86Operand(kind)}, #{value.x86Operand(kind)}"
+        end
+    end
+    
+    def handleX86BranchTest(branchOpcode, kind)
+        handleX86Test(kind)
+        $asm.puts "#{branchOpcode} #{operands.last.asmLabel}"
+    end
+    
+    def handleX86SetTest(setOpcode, kind)
+        handleX86Test(kind)
+        handleX86Set(setOpcode, operands.last)
+    end
+    
+    def handleX86OpBranch(opcode, branchOpcode, kind)
+        handleX86OpWithNumOperands(opcode, kind, operands.size - 1)
+        case operands.size
+        when 4
+            jumpTarget = operands[3]
+        when 3
+            jumpTarget = operands[2]
+        else
+            raise self.inspect
+        end
+        $asm.puts "#{branchOpcode} #{jumpTarget.asmLabel}"
+    end
+    
+    def handleX86SubBranch(branchOpcode, kind)
+        if operands.size == 4 and operands[1] == operands[2]
+            $asm.puts "negl #{operands[2].x86Operand(:int)}"
+            $asm.puts "addl #{operands[0].x86Operand(:int)}, #{operands[2].x86Operand(:int)}"
+        else
+            handleX86OpWithNumOperands("sub#{x86Suffix(kind)}", kind, operands.size - 1)
+        end
+        case operands.size
+        when 4
+            jumpTarget = operands[3]
+        when 3
+            jumpTarget = operands[2]
+        else
+            raise self.inspect
+        end
+        $asm.puts "#{branchOpcode} #{jumpTarget.asmLabel}"
+    end
+    
+    def lowerX86
+        $asm.comment codeOriginString
+        case opcode
+        when "addi", "addp"
+            if operands.size == 3 and operands[0].is_a? Immediate
+                raise unless operands[1].is_a? RegisterID
+                raise unless operands[2].is_a? RegisterID
+                if operands[0].value == 0
+                    unless operands[1] == operands[2]
+                        $asm.puts "movl #{operands[1].x86Operand(:int)}, #{operands[2].x86Operand(:int)}"
+                    end
+                else
+                    $asm.puts "leal #{operands[0].value}(#{operands[1].x86Operand(:int)}), #{operands[2].x86Operand(:int)}"
+                end
+            elsif operands.size == 3 and operands[0].is_a? RegisterID
+                raise unless operands[1].is_a? RegisterID
+                raise unless operands[2].is_a? RegisterID
+                $asm.puts "leal (#{operands[0].x86Operand(:int)}, #{operands[1].x86Operand(:int)}), #{operands[2].x86Operand(:int)}"
+            else
+                unless Immediate.new(nil, 0) == operands[0]
+                    $asm.puts "addl #{x86Operands(:int, :int)}"
+                end
+            end
+        when "andi", "andp"
+            handleX86Op("andl", :int)
+        when "lshifti"
+            handleX86Shift("sall", :int)
+        when "muli"
+            if operands.size == 3 and operands[0].is_a? Immediate
+                $asm.puts "imull #{x86Operands(:int, :int, :int)}"
+            else
+                # FIXME: could add a peephole optimization for the case where the
+                # left operand is an immediate power of two (shift instead of multiply).
+                handleX86Op("imull", :int)
+            end
+        when "negi"
+            $asm.puts "negl #{x86Operands(:int)}"
+        when "noti"
+            $asm.puts "notl #{x86Operands(:int)}"
+        when "ori", "orp"
+            handleX86Op("orl", :int)
+        when "rshifti"
+            handleX86Shift("sarl", :int)
+        when "urshifti"
+            handleX86Shift("shrl", :int)
+        when "subi", "subp"
+            if operands.size == 3 and operands[1] == operands[2]
+                $asm.puts "negl #{operands[2].x86Operand(:int)}"
+                $asm.puts "addl #{operands[0].x86Operand(:int)}, #{operands[2].x86Operand(:int)}"
+            else
+                handleX86Op("subl", :int)
+            end
+        when "xori", "xorp"
+            handleX86Op("xorl", :int)
+        when "loadi", "storei", "loadp", "storep"
+            $asm.puts "movl #{x86Operands(:int, :int)}"
+        when "loadb"
+            $asm.puts "movzbl #{operands[0].x86Operand(:byte)}, #{operands[1].x86Operand(:int)}"
+        when "loadbs"
+            $asm.puts "movsbl #{operands[0].x86Operand(:byte)}, #{operands[1].x86Operand(:int)}"
+        when "loadh"
+            $asm.puts "movzwl #{operands[0].x86Operand(:half)}, #{operands[1].x86Operand(:int)}"
+        when "loadhs"
+            $asm.puts "movswl #{operands[0].x86Operand(:half)}, #{operands[1].x86Operand(:int)}"
+        when "storeb"
+            $asm.puts "movb #{x86Operands(:byte, :byte)}"
+        when "loadd", "moved", "stored"
+            $asm.puts "movsd #{x86Operands(:double, :double)}"
+        when "addd"
+            $asm.puts "addsd #{x86Operands(:double, :double)}"
+        when "divd"
+            $asm.puts "divsd #{x86Operands(:double, :double)}"
+        when "subd"
+            $asm.puts "subsd #{x86Operands(:double, :double)}"
+        when "muld"
+            $asm.puts "mulsd #{x86Operands(:double, :double)}"
+        when "sqrtd"
+            $asm.puts "sqrtsd #{operands[0].x86Operand(:double)}, #{operands[1].x86Operand(:double)}"
+        when "ci2d"
+            $asm.puts "cvtsi2sd #{operands[0].x86Operand(:int)}, #{operands[1].x86Operand(:double)}"
+        when "bdeq"
+            isUnordered = LocalLabel.unique("bdeq")
+            $asm.puts "ucomisd #{operands[0].x86Operand(:double)}, #{operands[1].x86Operand(:double)}"
+            $asm.puts "jp #{LabelReference.new(codeOrigin, isUnordered).asmLabel}"
+            $asm.puts "je #{LabelReference.new(codeOrigin, operands[2]).asmLabel}"
+            isUnordered.lower("X86")
+        when "bdneq"
+            handleX86DoubleBranch("jne", :normal)
+        when "bdgt"
+            handleX86DoubleBranch("ja", :normal)
+        when "bdgteq"
+            handleX86DoubleBranch("jae", :normal)
+        when "bdlt"
+            handleX86DoubleBranch("ja", :reverse)
+        when "bdlteq"
+            handleX86DoubleBranch("jae", :reverse)
+        when "bdequn"
+            handleX86DoubleBranch("je", :normal)
+        when "bdnequn"
+            isUnordered = LocalLabel.unique("bdnequn")
+            isEqual = LocalLabel.unique("bdnequn")
+            $asm.puts "ucomisd #{operands[0].x86Operand(:double)}, #{operands[1].x86Operand(:double)}"
+            $asm.puts "jp #{LabelReference.new(codeOrigin, isUnordered).asmLabel}"
+            $asm.puts "je #{LabelReference.new(codeOrigin, isEqual).asmLabel}"
+            isUnordered.lower("X86")
+            $asm.puts "jmp #{operands[2].asmLabel}"
+            isEqual.lower("X86")
+        when "bdgtun"
+            handleX86DoubleBranch("jb", :reverse)
+        when "bdgtequn"
+            handleX86DoubleBranch("jbe", :reverse)
+        when "bdltun"
+            handleX86DoubleBranch("jb", :normal)
+        when "bdltequn"
+            handleX86DoubleBranch("jbe", :normal)
+        when "btd2i"
+            $asm.puts "cvttsd2si #{operands[0].x86Operand(:double)}, #{operands[1].x86Operand(:int)}"
+            $asm.puts "cmpl $0x80000000 #{operands[1].x86Operand(:int)}"
+            $asm.puts "je #{operands[2].asmLabel}"
+        when "td2i"
+            $asm.puts "cvttsd2si #{operands[0].x86Operand(:double)}, #{operands[1].x86Operand(:int)}"
+        when "bcd2i"
+            $asm.puts "cvttsd2si #{operands[0].x86Operand(:double)}, #{operands[1].x86Operand(:int)}"
+            $asm.puts "testl #{operands[1].x86Operand(:int)}, #{operands[1].x86Operand(:int)}"
+            $asm.puts "je #{operands[2].asmLabel}"
+            $asm.puts "cvtsi2sd #{operands[1].x86Operand(:int)}, %xmm7"
+            $asm.puts "ucomisd #{operands[0].x86Operand(:double)}, %xmm7"
+            $asm.puts "jp #{operands[2].asmLabel}"
+            $asm.puts "jne #{operands[2].asmLabel}"
+        when "movdz"
+            $asm.puts "xorpd #{operands[0].x86Operand(:double)}, #{operands[0].x86Operand(:double)}"
+        when "pop"
+            $asm.puts "pop #{operands[0].x86Operand(:int)}"
+        when "push"
+            $asm.puts "push #{operands[0].x86Operand(:int)}"
+        when "move", "sxi2p", "zxi2p"
+            if Immediate.new(nil, 0) == operands[0] and operands[1].is_a? RegisterID
+                $asm.puts "xorl #{operands[1].x86Operand(:int)}, #{operands[1].x86Operand(:int)}"
+            elsif operands[0] != operands[1]
+                $asm.puts "movl #{x86Operands(:int, :int)}"
+            end
+        when "nop"
+            $asm.puts "nop"
+        when "bieq", "bpeq"
+            handleX86IntBranch("je", :int)
+        when "bineq", "bpneq"
+            handleX86IntBranch("jne", :int)
+        when "bia", "bpa"
+            handleX86IntBranch("ja", :int)
+        when "biaeq", "bpaeq"
+            handleX86IntBranch("jae", :int)
+        when "bib", "bpb"
+            handleX86IntBranch("jb", :int)
+        when "bibeq", "bpbeq"
+            handleX86IntBranch("jbe", :int)
+        when "bigt", "bpgt"
+            handleX86IntBranch("jg", :int)
+        when "bigteq", "bpgteq"
+            handleX86IntBranch("jge", :int)
+        when "bilt", "bplt"
+            handleX86IntBranch("jl", :int)
+        when "bilteq", "bplteq"
+            handleX86IntBranch("jle", :int)
+        when "bbeq"
+            handleX86IntBranch("je", :byte)
+        when "bbneq"
+            handleX86IntBranch("jne", :byte)
+        when "bba"
+            handleX86IntBranch("ja", :byte)
+        when "bbaeq"
+            handleX86IntBranch("jae", :byte)
+        when "bbb"
+            handleX86IntBranch("jb", :byte)
+        when "bbbeq"
+            handleX86IntBranch("jbe", :byte)
+        when "bbgt"
+            handleX86IntBranch("jg", :byte)
+        when "bbgteq"
+            handleX86IntBranch("jge", :byte)
+        when "bblt"
+            handleX86IntBranch("jl", :byte)
+        when "bblteq"
+            handleX86IntBranch("jlteq", :byte)
+        when "btio", "btpo"
+            handleX86BranchTest("jo", :int)
+        when "btis", "btps"
+            handleX86BranchTest("js", :int)
+        when "btiz", "btpz"
+            handleX86BranchTest("jz", :int)
+        when "btinz", "btpnz"
+            handleX86BranchTest("jnz", :int)
+        when "btbo"
+            handleX86BranchTest("jo", :byte)
+        when "btbs"
+            handleX86BranchTest("js", :byte)
+        when "btbz"
+            handleX86BranchTest("jz", :byte)
+        when "btbnz"
+            handleX86BranchTest("jnz", :byte)
+        when "jmp"
+            $asm.puts "jmp #{operands[0].x86CallOperand(:int)}"
+        when "baddio", "baddpo"
+            handleX86OpBranch("addl", "jo", :int)
+        when "baddis", "baddps"
+            handleX86OpBranch("addl", "js", :int)
+        when "baddiz", "baddpz"
+            handleX86OpBranch("addl", "jz", :int)
+        when "baddinz", "baddpnz"
+            handleX86OpBranch("addl", "jnz", :int)
+        when "bsubio"
+            handleX86SubBranch("jo", :int)
+        when "bsubis"
+            handleX86SubBranch("js", :int)
+        when "bsubiz"
+            handleX86SubBranch("jz", :int)
+        when "bsubinz"
+            handleX86SubBranch("jnz", :int)
+        when "bmulio"
+            handleX86OpBranch("imull", "jo", :int)
+        when "bmulis"
+            handleX86OpBranch("imull", "js", :int)
+        when "bmuliz"
+            handleX86OpBranch("imull", "jz", :int)
+        when "bmulinz"
+            handleX86OpBranch("imull", "jnz", :int)
+        when "borio"
+            handleX86OpBranch("orl", "jo", :int)
+        when "boris"
+            handleX86OpBranch("orl", "js", :int)
+        when "boriz"
+            handleX86OpBranch("orl", "jz", :int)
+        when "borinz"
+            handleX86OpBranch("orl", "jnz", :int)
+        when "break"
+            $asm.puts "int $3"
+        when "call"
+            $asm.puts "call #{operands[0].x86CallOperand(:int)}"
+        when "ret"
+            $asm.puts "ret"
+        when "cieq", "cpeq"
+            handleX86IntCompareSet("sete", :int)
+        when "cineq", "cpneq"
+            handleX86IntCompareSet("setne", :int)
+        when "cia", "cpa"
+            handleX86IntCompareSet("seta", :int)
+        when "ciaeq", "cpaeq"
+            handleX86IntCompareSet("setae", :int)
+        when "cib", "cpb"
+            handleX86IntCompareSet("setb", :int)
+        when "cibeq", "cpbeq"
+            handleX86IntCompareSet("setbe", :int)
+        when "cigt", "cpgt"
+            handleX86IntCompareSet("setg", :int)
+        when "cigteq", "cpgteq"
+            handleX86IntCompareSet("setge", :int)
+        when "cilt", "cplt"
+            handleX86IntCompareSet("setl", :int)
+        when "cilteq", "cplteq"
+            handleX86IntCompareSet("setle", :int)
+        when "tio"
+            handleX86SetTest("seto", :int)
+        when "tis"
+            handleX86SetTest("sets", :int)
+        when "tiz"
+            handleX86SetTest("setz", :int)
+        when "tinz"
+            handleX86SetTest("setnz", :int)
+        when "tbo"
+            handleX86SetTest("seto", :byte)
+        when "tbs"
+            handleX86SetTest("sets", :byte)
+        when "tbz"
+            handleX86SetTest("setz", :byte)
+        when "tbnz"
+            handleX86SetTest("setnz", :byte)
+        when "peek"
+            $asm.puts "movl #{operands[0].value * 4}(%esp), #{operands[1].x86Operand(:int)}"
+        when "poke"
+            $asm.puts "movl #{operands[0].x86Operand(:int)}, #{operands[1].value * 4}(%esp)"
+        when "cdqi"
+            $asm.puts "cdq"
+        when "idivi"
+            $asm.puts "idivl #{operands[0].x86Operand(:int)}"
+        when "fii2d"
+            $asm.puts "movd #{operands[0].x86Operand(:int)}, #{operands[2].x86Operand(:double)}"
+            $asm.puts "movd #{operands[1].x86Operand(:int)}, %xmm7"
+            $asm.puts "psllq $32, %xmm7"
+            $asm.puts "por %xmm7, #{operands[2].x86Operand(:double)}"
+        when "fd2ii"
+            $asm.puts "movd #{operands[0].x86Operand(:double)}, #{operands[1].x86Operand(:int)}"
+            $asm.puts "movsd #{operands[0].x86Operand(:double)}, %xmm7"
+            $asm.puts "psrlq $32, %xmm7"
+            $asm.puts "movsd %xmm7, #{operands[2].x86Operand(:int)}"
+        when "bo"
+            $asm.puts "jo #{operands[0].asmLabel}"
+        when "bs"
+            $asm.puts "js #{operands[0].asmLabel}"
+        when "bz"
+            $asm.puts "jz #{operands[0].asmLabel}"
+        when "bnz"
+            $asm.puts "jnz #{operands[0].asmLabel}"
+        when "leai", "leap"
+            $asm.puts "leal #{operands[0].x86Operand(:int)}, #{operands[1].x86Operand(:int)}"
+        else
+            raise "Bad opcode: #{opcode}"
+        end
+    end
+end
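On x86, a variable shift count must live in %cl, which is why the `handleX86Shift` lowering above swaps the count register through %ecx when it is not already there. A standalone sketch of that trick (hypothetical helper, not the real offlineasm classes; like the code above, it ignores the corner case where the destination itself is %ecx):

```ruby
# Simplified model of the shift lowering above. x86 only accepts a variable
# shift count in %cl, so a count held elsewhere is swapped into %ecx with
# xchgl, the shift is performed, and the registers are swapped back.
def lower_shift(opcode, count_reg, dest_reg)
  if count_reg == "%ecx"
    ["#{opcode} %cl, #{dest_reg}"]
  else
    ["xchgl #{count_reg}, %ecx",   # count is now in %ecx (and so in %cl)
     "#{opcode} %cl, #{dest_reg}",
     "xchgl #{count_reg}, %ecx"]   # restore both registers
  end
end
```

The xchgl-based approach costs two extra instructions but avoids the register allocator having to reserve %ecx for every shift.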
+
diff --git a/Source/JavaScriptCore/runtime/CodeSpecializationKind.h b/Source/JavaScriptCore/runtime/CodeSpecializationKind.h
new file mode 100644 (file)
index 0000000..ba2a54f
--- /dev/null
@@ -0,0 +1,36 @@
+/*
+ * Copyright (C) 2012 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#ifndef CodeSpecializationKind_h
+#define CodeSpecializationKind_h
+
+namespace JSC {
+
+enum CodeSpecializationKind { CodeForCall, CodeForConstruct };
+
+} // namespace JSC
+
+#endif // CodeSpecializationKind_h
+
index 86c4bd5..345af2e 100644 (file)
@@ -27,6 +27,7 @@
 #define CommonSlowPaths_h
 
 #include "CodeBlock.h"
+#include "CodeSpecializationKind.h"
 #include "ExceptionHelpers.h"
 #include "JSArray.h"
 
@@ -41,6 +42,38 @@ namespace JSC {
 
 namespace CommonSlowPaths {
 
+ALWAYS_INLINE ExecState* arityCheckFor(ExecState* exec, RegisterFile* registerFile, CodeSpecializationKind kind)
+{
+    JSFunction* callee = asFunction(exec->callee());
+    ASSERT(!callee->isHostFunction());
+    CodeBlock* newCodeBlock = &callee->jsExecutable()->generatedBytecodeFor(kind);
+    int argumentCountIncludingThis = exec->argumentCountIncludingThis();
+
+    // This ensures enough space for the worst case scenario of zero arguments passed by the caller.
+    if (!registerFile->grow(exec->registers() + newCodeBlock->numParameters() + newCodeBlock->m_numCalleeRegisters))
+        return 0;
+
+    ASSERT(argumentCountIncludingThis < newCodeBlock->numParameters());
+
+    // Too few arguments -- copy call frame and arguments, then fill in missing arguments with undefined.
+    size_t delta = newCodeBlock->numParameters() - argumentCountIncludingThis;
+    Register* src = exec->registers();
+    Register* dst = exec->registers() + delta;
+
+    int i;
+    int end = -ExecState::offsetFor(argumentCountIncludingThis);
+    for (i = -1; i >= end; --i)
+        dst[i] = src[i];
+
+    end -= delta;
+    for ( ; i >= end; --i)
+        dst[i] = jsUndefined();
+
+    ExecState* newExec = ExecState::create(dst);
+    ASSERT((void*)newExec <= registerFile->end());
+    return newExec;
+}
+
 ALWAYS_INLINE bool opInstanceOfSlow(ExecState* exec, JSValue value, JSValue baseVal, JSValue proto)
 {
     ASSERT(!value.isCell() || !baseVal.isCell() || !proto.isCell()
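The argument fixup in `arityCheckFor` above can be pictured with a small model (hypothetical, eliding the in-place register-file slide and JSC's negative frame offsets): the caller's arguments are kept as-is, and the declared-but-missing parameter slots are padded with undefined.

```ruby
# Toy model of the arity fixup above (not the real ExecState layout): when
# fewer arguments arrive than the callee declares, the frame is extended and
# the missing parameter slots are filled with undefined.
def pad_arguments(passed, num_parameters)
  raise "arity fixup only runs when too few arguments were passed" unless passed.size < num_parameters
  passed + [:undefined] * (num_parameters - passed.size)
end
```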
index c7a7f04..25ddf76 100644 (file)
@@ -29,6 +29,7 @@
 #include "BytecodeGenerator.h"
 #include "CodeBlock.h"
 #include "DFGDriver.h"
+#include "ExecutionHarness.h"
 #include "JIT.h"
 #include "JITDriver.h"
 #include "Parser.h"
@@ -88,7 +89,7 @@ Intrinsic NativeExecutable::intrinsic() const
 template<typename T>
 static void jettisonCodeBlock(JSGlobalData& globalData, OwnPtr<T>& codeBlock)
 {
-    ASSERT(codeBlock->getJITType() != JITCode::BaselineJIT);
+    ASSERT(JITCode::isOptimizingJIT(codeBlock->getJITType()));
     ASSERT(codeBlock->alternative());
     OwnPtr<T> codeBlockToJettison = codeBlock.release();
     codeBlock = static_pointer_cast<T>(codeBlockToJettison->releaseAlternative());
@@ -175,9 +176,32 @@ JSObject* EvalExecutable::compileOptimized(ExecState* exec, ScopeChainNode* scop
     return error;
 }
 
+#if ENABLE(JIT)
+void EvalExecutable::jitCompile(JSGlobalData& globalData)
+{
+    bool result = jitCompileIfAppropriate(globalData, m_evalCodeBlock, m_jitCodeForCall, JITCode::bottomTierJIT());
+    ASSERT_UNUSED(result, result);
+}
+#endif
+
+inline const char* samplingDescription(JITCode::JITType jitType)
+{
+    switch (jitType) {
+    case JITCode::InterpreterThunk:
+        return "Interpreter Compilation (TOTAL)";
+    case JITCode::BaselineJIT:
+        return "Baseline Compilation (TOTAL)";
+    case JITCode::DFGJIT:
+        return "DFG Compilation (TOTAL)";
+    default:
+        ASSERT_NOT_REACHED();
+        return 0;
+    }
+}
+
 JSObject* EvalExecutable::compileInternal(ExecState* exec, ScopeChainNode* scopeChainNode, JITCode::JITType jitType)
 {
-    SamplingRegion samplingRegion(jitType == JITCode::BaselineJIT ? "Baseline Compilation (TOTAL)" : "DFG Compilation (TOTAL)");
+    SamplingRegion samplingRegion(samplingDescription(jitType));
     
 #if !ENABLE(JIT)
     UNUSED_PARAM(jitType);
@@ -218,7 +242,7 @@ JSObject* EvalExecutable::compileInternal(ExecState* exec, ScopeChainNode* scope
     }
 
 #if ENABLE(JIT)
-    if (!jitCompileIfAppropriate(*globalData, m_evalCodeBlock, m_jitCodeForCall, jitType))
+    if (!prepareForExecution(*globalData, m_evalCodeBlock, m_jitCodeForCall, jitType))
         return 0;
 #endif
 
@@ -303,9 +327,17 @@ JSObject* ProgramExecutable::compileOptimized(ExecState* exec, ScopeChainNode* s
     return error;
 }
 
+#if ENABLE(JIT)
+void ProgramExecutable::jitCompile(JSGlobalData& globalData)
+{
+    bool result = jitCompileIfAppropriate(globalData, m_programCodeBlock, m_jitCodeForCall, JITCode::bottomTierJIT());
+    ASSERT_UNUSED(result, result);
+}
+#endif
+
 JSObject* ProgramExecutable::compileInternal(ExecState* exec, ScopeChainNode* scopeChainNode, JITCode::JITType jitType)
 {
-    SamplingRegion samplingRegion(jitType == JITCode::BaselineJIT ? "Baseline Compilation (TOTAL)" : "DFG Compilation (TOTAL)");
+    SamplingRegion samplingRegion(samplingDescription(jitType));
     
 #if !ENABLE(JIT)
     UNUSED_PARAM(jitType);
@@ -344,7 +376,7 @@ JSObject* ProgramExecutable::compileInternal(ExecState* exec, ScopeChainNode* sc
     }
 
 #if ENABLE(JIT)
-    if (!jitCompileIfAppropriate(*globalData, m_programCodeBlock, m_jitCodeForCall, jitType))
+    if (!prepareForExecution(*globalData, m_programCodeBlock, m_jitCodeForCall, jitType))
         return 0;
 #endif
 
@@ -420,7 +452,7 @@ FunctionCodeBlock* FunctionExecutable::baselineCodeBlockFor(CodeSpecializationKi
     while (result->alternative())
         result = static_cast<FunctionCodeBlock*>(result->alternative());
     ASSERT(result);
-    ASSERT(result->getJITType() == JITCode::BaselineJIT);
+    ASSERT(JITCode::isBaselineCode(result->getJITType()));
     return result;
 }
 
@@ -446,6 +478,20 @@ JSObject* FunctionExecutable::compileOptimizedForConstruct(ExecState* exec, Scop
     return error;
 }
 
+#if ENABLE(JIT)
+void FunctionExecutable::jitCompileForCall(JSGlobalData& globalData)
+{
+    bool result = jitCompileFunctionIfAppropriate(globalData, m_codeBlockForCall, m_jitCodeForCall, m_jitCodeForCallWithArityCheck, m_symbolTable, JITCode::bottomTierJIT());
+    ASSERT_UNUSED(result, result);
+}
+
+void FunctionExecutable::jitCompileForConstruct(JSGlobalData& globalData)
+{
+    bool result = jitCompileFunctionIfAppropriate(globalData, m_codeBlockForConstruct, m_jitCodeForConstruct, m_jitCodeForConstructWithArityCheck, m_symbolTable, JITCode::bottomTierJIT());
+    ASSERT_UNUSED(result, result);
+}
+#endif
+
 FunctionCodeBlock* FunctionExecutable::codeBlockWithBytecodeFor(CodeSpecializationKind kind)
 {
     FunctionCodeBlock* codeBlock = baselineCodeBlockFor(kind);
@@ -490,7 +536,7 @@ PassOwnPtr<FunctionCodeBlock> FunctionExecutable::produceCodeBlockFor(ScopeChain
 
 JSObject* FunctionExecutable::compileForCallInternal(ExecState* exec, ScopeChainNode* scopeChainNode, JITCode::JITType jitType)
 {
-    SamplingRegion samplingRegion(jitType == JITCode::BaselineJIT ? "Baseline Compilation (TOTAL)" : "DFG Compilation (TOTAL)");
+    SamplingRegion samplingRegion(samplingDescription(jitType));
     
 #if !ENABLE(JIT)
     UNUSED_PARAM(exec);
@@ -512,7 +558,7 @@ JSObject* FunctionExecutable::compileForCallInternal(ExecState* exec, ScopeChain
     m_symbolTable = m_codeBlockForCall->sharedSymbolTable();
 
 #if ENABLE(JIT)
-    if (!jitCompileFunctionIfAppropriate(exec->globalData(), m_codeBlockForCall, m_jitCodeForCall, m_jitCodeForCallWithArityCheck, m_symbolTable, jitType))
+    if (!prepareFunctionForExecution(exec->globalData(), m_codeBlockForCall, m_jitCodeForCall, m_jitCodeForCallWithArityCheck, m_symbolTable, jitType, CodeForCall))
         return 0;
 #endif
 
@@ -532,7 +578,7 @@ JSObject* FunctionExecutable::compileForCallInternal(ExecState* exec, ScopeChain
 
 JSObject* FunctionExecutable::compileForConstructInternal(ExecState* exec, ScopeChainNode* scopeChainNode, JITCode::JITType jitType)
 {
-    SamplingRegion samplingRegion(jitType == JITCode::BaselineJIT ? "Baseline Compilation (TOTAL)" : "DFG Compilation (TOTAL)");
+    SamplingRegion samplingRegion(samplingDescription(jitType));
     
 #if !ENABLE(JIT)
     UNUSED_PARAM(jitType);
@@ -554,7 +600,7 @@ JSObject* FunctionExecutable::compileForConstructInternal(ExecState* exec, Scope
     m_symbolTable = m_codeBlockForConstruct->sharedSymbolTable();
 
 #if ENABLE(JIT)
-    if (!jitCompileFunctionIfAppropriate(exec->globalData(), m_codeBlockForConstruct, m_jitCodeForConstruct, m_jitCodeForConstructWithArityCheck, m_symbolTable, jitType))
+    if (!prepareFunctionForExecution(exec->globalData(), m_codeBlockForConstruct, m_jitCodeForConstruct, m_jitCodeForConstructWithArityCheck, m_symbolTable, jitType, CodeForConstruct))
         return 0;
 #endif
 
index d40263b..69e80b2 100644 (file)
@@ -27,6 +27,7 @@
 #define Executable_h
 
 #include "CallData.h"
+#include "CodeSpecializationKind.h"
 #include "JSFunction.h"
 #include "Interpreter.h"
 #include "Nodes.h"
@@ -39,12 +40,12 @@ namespace JSC {
     class Debugger;
     class EvalCodeBlock;
     class FunctionCodeBlock;
+    class LLIntOffsetsExtractor;
     class ProgramCodeBlock;
     class ScopeChainNode;
 
     struct ExceptionInfo;
     
-    enum CodeSpecializationKind { CodeForCall, CodeForConstruct };
     enum CompilationKind { FirstCompilation, OptimizingCompilation };
 
     inline bool isCall(CodeSpecializationKind kind)
@@ -325,6 +326,7 @@ namespace JSC {
     };
 
     class EvalExecutable : public ScriptExecutable {
+        friend class LLIntOffsetsExtractor;
     public:
         typedef ScriptExecutable Base;
 
@@ -344,6 +346,7 @@ namespace JSC {
         
 #if ENABLE(JIT)
         void jettisonOptimizedCode(JSGlobalData&);
+        void jitCompile(JSGlobalData&);
 #endif
 
         EvalCodeBlock& generatedBytecode()
@@ -390,6 +393,7 @@ namespace JSC {
     };
 
     class ProgramExecutable : public ScriptExecutable {
+        friend class LLIntOffsetsExtractor;
     public:
         typedef ScriptExecutable Base;
 
@@ -417,6 +421,7 @@ namespace JSC {
         
 #if ENABLE(JIT)
         void jettisonOptimizedCode(JSGlobalData&);
+        void jitCompile(JSGlobalData&);
 #endif
 
         ProgramCodeBlock& generatedBytecode()
@@ -459,6 +464,7 @@ namespace JSC {
 
     class FunctionExecutable : public ScriptExecutable {
         friend class JIT;
+        friend class LLIntOffsetsExtractor;
     public:
         typedef ScriptExecutable Base;
 
@@ -514,6 +520,7 @@ namespace JSC {
         
 #if ENABLE(JIT)
         void jettisonOptimizedCodeForCall(JSGlobalData&);
+        void jitCompileForCall(JSGlobalData&);
 #endif
 
         bool isGeneratedForCall() const
@@ -541,6 +548,7 @@ namespace JSC {
         
 #if ENABLE(JIT)
         void jettisonOptimizedCodeForConstruct(JSGlobalData&);
+        void jitCompileForConstruct(JSGlobalData&);
 #endif
 
         bool isGeneratedForConstruct() const
@@ -588,6 +596,16 @@ namespace JSC {
                 jettisonOptimizedCodeForConstruct(globalData);
             }
         }
+        
+        void jitCompileFor(JSGlobalData& globalData, CodeSpecializationKind kind)
+        {
+            if (kind == CodeForCall) {
+                jitCompileForCall(globalData);
+                return;
+            }
+            ASSERT(kind == CodeForConstruct);
+            jitCompileForConstruct(globalData);
+        }
 #endif
         
         bool isGeneratedFor(CodeSpecializationKind kind)
diff --git a/Source/JavaScriptCore/runtime/ExecutionHarness.h b/Source/JavaScriptCore/runtime/ExecutionHarness.h
new file mode 100644 (file)
index 0000000..774c5bf
--- /dev/null
@@ -0,0 +1,72 @@
+/*
+ * Copyright (C) 2012 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#ifndef ExecutionHarness_h
+#define ExecutionHarness_h
+
+#include <wtf/Platform.h>
+
+#if ENABLE(JIT)
+
+#include "JITDriver.h"
+#include "LLIntEntrypoints.h"
+
+namespace JSC {
+
+template<typename CodeBlockType>
+inline bool prepareForExecution(JSGlobalData& globalData, OwnPtr<CodeBlockType>& codeBlock, JITCode& jitCode, JITCode::JITType jitType)
+{
+#if ENABLE(LLINT)
+    if (JITCode::isBaselineCode(jitType)) {
+        // Start off in the low level interpreter.
+        LLInt::getEntrypoint(globalData, codeBlock.get(), jitCode);
+        codeBlock->setJITCode(jitCode, MacroAssemblerCodePtr());
+        return true;
+    }
+#endif // ENABLE(LLINT)
+    return jitCompileIfAppropriate(globalData, codeBlock, jitCode, jitType);
+}
+
+inline bool prepareFunctionForExecution(JSGlobalData& globalData, OwnPtr<FunctionCodeBlock>& codeBlock, JITCode& jitCode, MacroAssemblerCodePtr& jitCodeWithArityCheck, SharedSymbolTable*& symbolTable, JITCode::JITType jitType, CodeSpecializationKind kind)
+{
+#if ENABLE(LLINT)
+    if (JITCode::isBaselineCode(jitType)) {
+        // Start off in the low level interpreter.
+        LLInt::getFunctionEntrypoint(globalData, kind, jitCode, jitCodeWithArityCheck);
+        codeBlock->setJITCode(jitCode, jitCodeWithArityCheck);
+        return true;
+    }
+#else
+    UNUSED_PARAM(kind);
+#endif // ENABLE(LLINT)
+    return jitCompileFunctionIfAppropriate(globalData, codeBlock, jitCode, jitCodeWithArityCheck, symbolTable, jitType);
+}
+
+} // namespace JSC
+
+#endif // ENABLE(JIT)
+
+#endif // ExecutionHarness_h
+
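`prepareForExecution` above encodes the triple-tier entry policy: a baseline-tier request starts in LLInt when it is enabled, and anything else falls through to the JIT driver. A toy model of that dispatch (names hypothetical):

```ruby
# Toy model of the tier selection in prepareForExecution (names hypothetical).
def choose_entrypoint(jit_type, llint_enabled: true)
  if llint_enabled && jit_type == :baseline
    :llint_entrypoint   # start off in the low level interpreter
  else
    :jit_compile        # compile machine code up front
  end
end
```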
index df75084..405dcb8 100644 (file)
@@ -28,6 +28,7 @@
 namespace JSC {
 
     class JSArray;
+    class LLIntOffsetsExtractor;
 
     struct SparseArrayEntry : public WriteBarrier<Unknown> {
         typedef WriteBarrier<Unknown> Base;
@@ -122,6 +123,7 @@ namespace JSC {
     };
 
     class JSArray : public JSNonFinalObject {
+        friend class LLIntOffsetsExtractor;
         friend class Walker;
 
     protected:
index baaa9ef..78d2d08 100644 (file)
 namespace JSC {
 
     class JSGlobalObject;
-    class Structure;
+    class LLIntOffsetsExtractor;
     class PropertyDescriptor;
     class PropertyNameArray;
+    class Structure;
 
     enum EnumerationMode {
         ExcludeDontEnumProperties,
@@ -163,6 +164,8 @@ namespace JSC {
         static bool getOwnPropertyDescriptor(JSObject*, ExecState*, const Identifier&, PropertyDescriptor&);
 
     private:
+        friend class LLIntOffsetsExtractor;
+        
         const ClassInfo* m_classInfo;
         WriteBarrier<Structure> m_structure;
     };
index d73a7ff..6e8557f 100644 (file)
@@ -33,6 +33,7 @@ namespace JSC {
     class FunctionPrototype;
     class JSActivation;
     class JSGlobalObject;
+    class LLIntOffsetsExtractor;
     class NativeExecutable;
     class SourceCode;
     namespace DFG {
@@ -140,6 +141,8 @@ namespace JSC {
         static void visitChildren(JSCell*, SlotVisitor&);
 
     private:
+        friend class LLIntOffsetsExtractor;
+        
         JS_EXPORT_PRIVATE bool isHostFunctionNonInline() const;
 
         static JSValue argumentsGetter(ExecState*, JSValue, const Identifier&);
index e3593ff..2bdc28a 100644 (file)
@@ -35,6 +35,7 @@
 #include "DebuggerActivation.h"
 #include "FunctionConstructor.h"
 #include "GetterSetter.h"
+#include "HostCallReturnValue.h"
 #include "Interpreter.h"
 #include "JSActivation.h"
 #include "JSAPIValueWrapper.h"
@@ -141,6 +142,8 @@ JSGlobalData::JSGlobalData(GlobalDataType globalDataType, ThreadStackType thread
     , keywords(adoptPtr(new Keywords(this)))
     , interpreter(0)
     , heap(this, heapSize)
+    , jsArrayClassInfo(&JSArray::s_info)
+    , jsFinalObjectClassInfo(&JSFinalObject::s_info)
 #if ENABLE(DFG_JIT)
     , sizeOfLastScratchBuffer(0)
 #endif
@@ -217,9 +220,13 @@ JSGlobalData::JSGlobalData(GlobalDataType globalDataType, ThreadStackType thread
     jitStubs = adoptPtr(new JITThunks(this));
 #endif
 
-    interpreter->initialize(this->canUseJIT());
+    interpreter->initialize(&llintData, this->canUseJIT());
+    
+    initializeHostCallReturnValue(); // This is needed to convince the linker not to drop host call return support.
 
     heap.notifyIsSafeToCollect();
+    
+    llintData.performAssertions(*this);
 }
 
 void JSGlobalData::clearBuiltinStructures()
index 96a69de..7e54c00 100644 (file)
 #define JSGlobalData_h
 
 #include "CachedTranscendentalFunction.h"
-#include "Intrinsic.h"
 #include "DateInstanceCache.h"
 #include "ExecutableAllocator.h"
 #include "Heap.h"
-#include "Strong.h"
+#include "Intrinsic.h"
 #include "JITStubs.h"
 #include "JSValue.h"
+#include "LLIntData.h"
 #include "NumericStrings.h"
 #include "SmallStrings.h"
+#include "Strong.h"
 #include "Terminator.h"
 #include "TimeoutChecker.h"
 #include "WeakRandom.h"
@@ -65,6 +66,7 @@ namespace JSC {
     class JSGlobalObject;
     class JSObject;
     class Keywords;
+    class LLIntOffsetsExtractor;
     class NativeExecutable;
     class ParserArena;
     class RegExpCache;
@@ -251,7 +253,12 @@ namespace JSC {
         Heap heap;
 
         JSValue exception;
-#if ENABLE(JIT)
+
+        const ClassInfo* const jsArrayClassInfo;
+        const ClassInfo* const jsFinalObjectClassInfo;
+
+        LLInt::Data llintData;
+
         ReturnAddressPtr exceptionLocation;
         JSValue hostCallReturnValue;
         CallFrame* callFrameForThrow;
@@ -281,7 +288,6 @@ namespace JSC {
             return scratchBuffers.last();
         }
 #endif
-#endif
 
         HashMap<OpaqueJSClass*, OwnPtr<OpaqueJSClassContextData> > opaqueJSClassData;
 
@@ -356,6 +362,8 @@ namespace JSC {
 #undef registerTypedArrayFunction
 
     private:
+        friend class LLIntOffsetsExtractor;
+        
         JSGlobalData(GlobalDataType, ThreadStackType, HeapSize);
         static JSGlobalData*& sharedInstanceInternal();
         void createNativeThunk();
index 781a840..cbc436e 100644 (file)
@@ -44,6 +44,7 @@ namespace JSC {
     class FunctionPrototype;
     class GetterSetter;
     class GlobalCodeBlock;
+    class LLIntOffsetsExtractor;
     class NativeErrorConstructor;
     class ProgramCodeBlock;
     class RegExpConstructor;
@@ -340,6 +341,8 @@ namespace JSC {
         JS_EXPORT_PRIVATE void addStaticGlobals(GlobalPropertyInfo*, int count);
 
     private:
+        friend class LLIntOffsetsExtractor;
+        
         // FIXME: Fold reset into init.
         JS_EXPORT_PRIVATE void init(JSObject* thisValue);
         void reset(JSValue prototype);
index 65130db..c117cff 100644 (file)
@@ -49,6 +49,7 @@ namespace JSC {
     class GetterSetter;
     class HashEntry;
     class InternalFunction;
+    class LLIntOffsetsExtractor;
     class MarkedBlock;
     class PropertyDescriptor;
     class PropertyNameArray;
@@ -264,6 +265,8 @@ namespace JSC {
         JSObject(JSGlobalData&, Structure*, PropertyStorage inlineStorage);
 
     private:
+        friend class LLIntOffsetsExtractor;
+        
         // Nobody should ever ask any of these questions on something already known to be a JSObject.
         using JSCell::isAPIValueWrapper;
         using JSCell::isGetterSetter;
@@ -369,6 +372,8 @@ COMPILE_ASSERT((JSFinalObject_inlineStorageCapacity >= JSNonFinalObject_inlineSt
         }
 
     private:
+        friend class LLIntOffsetsExtractor;
+        
         explicit JSFinalObject(JSGlobalData& globalData, Structure* structure)
             : JSObject(globalData, structure, m_inlineStorage)
         {
index d52e3ea..7530d75 100644 (file)
@@ -38,6 +38,7 @@ namespace JSC {
 
     class Identifier;
     class JSObject;
+    class LLIntOffsetsExtractor;
 
     class JSPropertyNameIterator : public JSCell {
         friend class JIT;
@@ -96,6 +97,8 @@ namespace JSC {
         }
 
     private:
+        friend class LLIntOffsetsExtractor;
+        
         JSPropertyNameIterator(ExecState*, PropertyNameArrayData* propertyNameArrayData, size_t numCacheableSlot);
 
         WriteBarrier<Structure> m_cachedStructure;
index c0637a6..32a3278 100644 (file)
@@ -32,6 +32,7 @@
 namespace JSC {
 
     class JSString;
+    class LLIntOffsetsExtractor;
 
     JSString* jsEmptyString(JSGlobalData*);
     JSString* jsEmptyString(ExecState*);
@@ -240,6 +241,8 @@ namespace JSC {
         static void visitChildren(JSCell*, SlotVisitor&);
 
     private:
+        friend class LLIntOffsetsExtractor;
+        
         JS_EXPORT_PRIVATE void resolveRope(ExecState*) const;
         void resolveRopeSlowCase8(LChar*) const;
         void resolveRopeSlowCase(UChar*) const;
index 3e23aa2..83a3594 100644 (file)
@@ -34,6 +34,8 @@
 
 namespace JSC {
 
+    class LLIntOffsetsExtractor;
+
     static const unsigned MasqueradesAsUndefined = 1; // WebCore uses MasqueradesAsUndefined to make document.all undetectable.
     static const unsigned ImplementsHasInstance = 1 << 1;
     static const unsigned OverridesHasInstance = 1 << 2;
@@ -87,6 +89,8 @@ namespace JSC {
         }
 
     private:
+        friend class LLIntOffsetsExtractor;
+        
         bool isSetOnFlags1(unsigned flag) const { ASSERT(flag <= (1 << 7)); return m_flags & flag; }
         bool isSetOnFlags2(unsigned flag) const { ASSERT(flag >= (1 << 8)); return m_flags2 & (flag >> 8); }
 
index 72cf5a8..e3843f0 100644 (file)
@@ -118,7 +118,7 @@ JSObject* JSValue::synthesizePrototype(ExecState* exec) const
 
 char* JSValue::description()
 {
-    static const size_t size = 64;
+    static const size_t size = 128;
     static char description[size];
 
     if (!*this)
@@ -127,14 +127,14 @@ char* JSValue::description()
         snprintf(description, size, "Int32: %d", asInt32());
     else if (isDouble()) {
 #if USE(JSVALUE64)
-        snprintf(description, size, "Double: %lf, %lx", asDouble(), reinterpretDoubleToIntptr(asDouble()));
+        snprintf(description, size, "Double: %lx, %lf", reinterpretDoubleToIntptr(asDouble()), asDouble());
 #else
         union {
             double asDouble;
             uint32_t asTwoInt32s[2];
         } u;
         u.asDouble = asDouble();
-        snprintf(description, size, "Double: %lf, %08x:%08x", asDouble(), u.asTwoInt32s[1], u.asTwoInt32s[0]);
+        snprintf(description, size, "Double: %08x:%08x, %lf", u.asTwoInt32s[1], u.asTwoInt32s[0], asDouble());
 #endif
     } else if (isCell())
         snprintf(description, size, "Cell: %p", asCell());
index 80acbb1..9f797e0 100644 (file)
@@ -55,6 +55,9 @@ namespace JSC {
         class SpeculativeJIT;
     }
 #endif
+    namespace LLInt {
+        class Data;
+    }
 
     struct ClassInfo;
     struct Instruction;
@@ -118,6 +121,7 @@ namespace JSC {
         friend class DFG::OSRExitCompiler;
         friend class DFG::SpeculativeJIT;
 #endif
+        friend class LLInt::Data;
 
     public:
         static EncodedJSValue encode(JSValue);
index c1d05ff..8d058f1 100644 (file)
 
 namespace JSC {
 
+    class LLIntOffsetsExtractor;
     class Register;
 
     class JSVariableObject : public JSNonFinalObject {
         friend class JIT;
+        friend class LLIntOffsetsExtractor;
 
     public:
         typedef JSNonFinalObject Base;
index ddfba6e..5500508 100644 (file)
@@ -52,6 +52,10 @@ unsigned maximumFunctionForConstructInlineCandidateInstructionCount;
 
 unsigned maximumInliningDepth;
 
+int32_t executionCounterValueForJITAfterWarmUp;
+int32_t executionCounterValueForDontJITAnytimeSoon;
+int32_t executionCounterValueForJITSoon;
+
 int32_t executionCounterValueForOptimizeAfterWarmUp;
 int32_t executionCounterValueForOptimizeAfterLongWarmUp;
 int32_t executionCounterValueForDontOptimizeAnytimeSoon;
@@ -137,6 +141,10 @@ void initializeOptions()
     
     SET(maximumInliningDepth, 5);
 
+    SET(executionCounterValueForJITAfterWarmUp,     -100);
+    SET(executionCounterValueForDontJITAnytimeSoon, std::numeric_limits<int32_t>::min());
+    SET(executionCounterValueForJITSoon,            -100);
+
     SET(executionCounterValueForOptimizeAfterWarmUp,     -1000);
     SET(executionCounterValueForOptimizeAfterLongWarmUp, -5000);
     SET(executionCounterValueForDontOptimizeAnytimeSoon, std::numeric_limits<int32_t>::min());
@@ -185,6 +193,8 @@ void initializeOptions()
     if (cpusToUse < 1)
         cpusToUse = 1;
     
+    cpusToUse = 1;
+    
     SET(numberOfGCMarkers, cpusToUse);
 
     ASSERT(executionCounterValueForDontOptimizeAnytimeSoon <= executionCounterValueForOptimizeAfterLongWarmUp);
index feebd37..b9e68f9 100644 (file)
@@ -37,6 +37,10 @@ extern unsigned maximumFunctionForConstructInlineCandidateInstructionCount;
 
 extern unsigned maximumInliningDepth; // Depth of inline stack, so 1 = no inlining, 2 = one level, etc.
 
+extern int32_t executionCounterValueForJITAfterWarmUp;
+extern int32_t executionCounterValueForDontJITAnytimeSoon;
+extern int32_t executionCounterValueForJITSoon;
+
 extern int32_t executionCounterValueForOptimizeAfterWarmUp;
 extern int32_t executionCounterValueForOptimizeAfterLongWarmUp;
 extern int32_t executionCounterValueForDontOptimizeAnytimeSoon;
index 6e358d7..c382008 100644 (file)
@@ -30,6 +30,7 @@ namespace JSC {
     class JSGlobalData;
     class JSGlobalObject;
     class JSObject;
+    class LLIntOffsetsExtractor;
     class ScopeChainIterator;
     class SlotVisitor;
     
@@ -91,6 +92,8 @@ namespace JSC {
         static JS_EXPORTDATA const ClassInfo s_info;
 
     private:
+        friend class LLIntOffsetsExtractor;
+        
         static const unsigned StructureFlags = OverridesVisitChildren;
     };
 
index f292072..8716c75 100644 (file)
@@ -325,7 +325,7 @@ Structure* Structure::addPropertyTransition(JSGlobalData& globalData, Structure*
             transition->growPropertyStorageCapacity();
         return transition;
     }
-
+    
     Structure* transition = create(globalData, structure);
 
     transition->m_cachedPrototypeChain.setMayBeNull(globalData, transition, structure->m_cachedPrototypeChain.get());
index 7a2f415..e2655a0 100644 (file)
@@ -45,6 +45,7 @@
 
 namespace JSC {
 
+    class LLIntOffsetsExtractor;
     class PropertyNameArray;
     class PropertyNameArrayData;
     class StructureChain;
@@ -206,6 +207,8 @@ namespace JSC {
         static JS_EXPORTDATA const ClassInfo s_info;
 
     private:
+        friend class LLIntOffsetsExtractor;
+        
         JS_EXPORT_PRIVATE Structure(JSGlobalData&, JSGlobalObject*, JSValue prototype, const TypeInfo&, const ClassInfo*);
         Structure(JSGlobalData&);
         Structure(JSGlobalData&, const Structure*);
index df7a37f..3b19d4c 100644 (file)
@@ -37,6 +37,7 @@
 
 namespace JSC {
 
+    class LLIntOffsetsExtractor;
     class Structure;
 
     class StructureChain : public JSCell {
@@ -74,6 +75,8 @@ namespace JSC {
         }
 
     private:
+        friend class LLIntOffsetsExtractor;
+        
         StructureChain(JSGlobalData&, Structure*);
         static void destroy(JSCell*);
         OwnArrayPtr<WriteBarrier<Structure> > m_vector;
index 379ebd3..1c99e65 100644 (file)
 #define HIDE_SYMBOL(name)
 #endif
 
+// FIXME: figure out how this works on all the platforms. I know that
+// on Linux, the preferred form is ".Lstuff" as opposed to "Lstuff".
+// Don't know about any of the others.
+#if PLATFORM(MAC)
+#define LOCAL_LABEL_STRING(name) "L" #name
+#endif
+
 #endif // InlineASM_h
index faa0c08..bf6ad6e 100644 (file)
 #define ENABLE_JIT 1
 #endif
 
+/* On some of the platforms where we have a JIT, we want to also have the 
+   low-level interpreter. */
+#if !defined(ENABLE_LLINT) && ENABLE(JIT) && OS(DARWIN) && (CPU(X86) || CPU(ARM_THUMB2)) && USE(JSVALUE32_64)
+#define ENABLE_LLINT 1
+#endif
+
 #if !defined(ENABLE_DFG_JIT) && ENABLE(JIT)
 /* Enable the DFG JIT on X86 and X86_64.  Only tested on Mac and GNU/Linux. */
 #if (CPU(X86) || CPU(X86_64)) && (PLATFORM(MAC) || OS(LINUX))
index ecd6024..3943aa5 100644 (file)
@@ -86,6 +86,8 @@ public:
 
     iterator begin();
     iterator end();
+    
+    bool isEmpty() { return begin() == end(); }
 
 private:
     RawNode m_headSentinel;
index 9d5e7a5..667335b 100644 (file)
@@ -43,6 +43,8 @@ typedef const struct __CFString * CFStringRef;
 // Landing the file moves in one patch, will follow on with patches to change the namespaces.
 namespace JSC {
 struct IdentifierCStringTranslator;
+namespace LLInt { class Data; }
+class LLIntOffsetsExtractor;
 template <typename T> struct IdentifierCharBufferTranslator;
 struct IdentifierLCharFromUCharTranslator;
 }
@@ -72,7 +74,9 @@ class StringImpl {
     friend struct WTF::SubstringTranslator;
     friend struct WTF::UCharBufferTranslator;
     friend class AtomicStringImpl;
-
+    friend class JSC::LLInt::Data;
+    friend class JSC::LLIntOffsetsExtractor;
+    
 private:
     enum BufferOwnership {
         BufferInternal,
index 20470c8..1ce5e0b 100644 (file)
@@ -76,6 +76,7 @@ SET(WebCore_INCLUDE_DIRECTORIES
     "${JAVASCRIPTCORE_DIR}/debugger"
     "${JAVASCRIPTCORE_DIR}/interpreter"
     "${JAVASCRIPTCORE_DIR}/jit"
+    "${JAVASCRIPTCORE_DIR}/llint"
     "${JAVASCRIPTCORE_DIR}/parser"
     "${JAVASCRIPTCORE_DIR}/profiler"
     "${JAVASCRIPTCORE_DIR}/runtime"
index e5be537..315ba21 100644 (file)
@@ -1,3 +1,15 @@
+2012-02-21  Filip Pizlo  <fpizlo@apple.com>
+
+        JSC should be a triple-tier VM
+        https://bugs.webkit.org/show_bug.cgi?id=75812
+        <rdar://problem/10079694>
+
+        Reviewed by Gavin Barraclough.
+        
+        No new tests, because there is no change in behavior.
+
+        * CMakeLists.txt:
+
 2012-02-21  Kentaro Hara  <haraken@chromium.org>
 
         NavigatorMediaStream.idl defines an interface for NavigatorGamepad
index c3ac166..c7e5c8c 100644 (file)
@@ -45,6 +45,7 @@ SET(WebKit_INCLUDE_DIRECTORIES
     "${JAVASCRIPTCORE_DIR}/debugger"
     "${JAVASCRIPTCORE_DIR}/interpreter"
     "${JAVASCRIPTCORE_DIR}/jit"
+    "${JAVASCRIPTCORE_DIR}/llint"
     "${JAVASCRIPTCORE_DIR}/parser"
     "${JAVASCRIPTCORE_DIR}/profiler"
     "${JAVASCRIPTCORE_DIR}/runtime"
index 2889ffc..feb9658 100644 (file)
@@ -1,3 +1,15 @@
+2012-02-20  Filip Pizlo  <fpizlo@apple.com>
+
+        JSC should be a triple-tier VM
+        https://bugs.webkit.org/show_bug.cgi?id=75812
+        <rdar://problem/10079694>
+
+        Reviewed by Gavin Barraclough.
+
+        Changed EFL's build system to include a new directory in JavaScriptCore.
+        
+        * CMakeLists.txt:
+
 2012-02-21  Jon Lee  <jonlee@apple.com>
 
         Bring notifications support to WK1 mac: showing, canceling, removing notifications
 
         * CMakeLists.txt:
 
-2012-02-20  Filip Pizlo  <fpizlo@apple.com>
-
-        JSC should be a triple-tier VM
-        https://bugs.webkit.org/show_bug.cgi?id=75812
-        <rdar://problem/10079694>
-
-        Reviewed by Gavin Barraclough.
-
-        Changed EFL's build system to include a new directory in JavaScriptCore.
-        
-        * CMakeLists.txt:
-
 2012-02-16  Leo Yang  <leo.yang@torchmobile.com.cn>
 
         [BlackBerry] Adapt to the removal of WebStringIml.h
index 9787ac6..217b2ae 100644 (file)
@@ -1,3 +1,15 @@
+2012-02-21  Filip Pizlo  <fpizlo@apple.com>
+
+        JSC should be a triple-tier VM
+        https://bugs.webkit.org/show_bug.cgi?id=75812
+        <rdar://problem/10079694>
+
+        Reviewed by Gavin Barraclough.
+
+        Changed EFL's build system to include a new directory in JavaScriptCore.
+
+        * DumpRenderTree/efl/CMakeLists.txt:
+
 2012-02-21  Daniel Cheng  <dcheng@chromium.org>
 
         [chromium] Fix image drag out on Chromium
index ba3026d..436dd6d 100644 (file)
@@ -70,10 +70,12 @@ SET(DumpRenderTree_INCLUDE_DIRECTORIES
     ${JAVASCRIPTCORE_DIR}
     ${JAVASCRIPTCORE_DIR}/API
     ${JAVASCRIPTCORE_DIR}/assembler
+    ${JAVASCRIPTCORE_DIR}/bytecode
     ${JAVASCRIPTCORE_DIR}/dfg
     ${JAVASCRIPTCORE_DIR}/heap
     ${JAVASCRIPTCORE_DIR}/interpreter
     ${JAVASCRIPTCORE_DIR}/jit
+    ${JAVASCRIPTCORE_DIR}/llint
     ${JAVASCRIPTCORE_DIR}/runtime
     ${JAVASCRIPTCORE_DIR}/ForwardingHeaders
     ${JAVASCRIPTCORE_DIR}/wtf