+++ /dev/null
-syntax: glob
-*.egg
-*.pyc
-*.pyo
-.*.sw[op]
-.idea/
-.ropeproject
-.project
-.tags
-.tox
-Pygments.egg-info/*
-TAGS
-build/*
-dist/*
-doc/_build
-TAGS
-tests/.coverage
-tests/cover
-tests/examplefiles/output
+++ /dev/null
-634420aa4221cc1eb2b3753bd571166bd9e611d4 0.9
-942ecbb5c84ca5d57ae82f5697775973f4e12717 0.10
-63632d0340958d891176db20fe9a32a56abcd5ea 0.11
-13834ec94d2c5a90a68bc2c2a327abd962c486bc 0.11.1
-a5748745272afffd725570e068a560d46e28dc1f 1.0
-5a794a620dc711a219722a7af94d9d2e95cda26d 1.1
-dd81c35efd95292de4965153c66c8bbfe435f1c4 1.1.1
-e7691aa4f473a2cdaa2e5b7bfed8aec196719aca 0.5.1
-6f53364d63ddb8bd9532bb6ea402e3af05275b03 0.5
-11efe99c11e601071c3a77910b9fca769de66fbf 0.6
-99df0a7404d168b05626ffced6fd16edcf58c145 0.7
-d0b08fd569d3d9dafec4c045a7d8876442b3ef64 0.7.1
-1054522d1dda9c7899516ead3e65e5e363fdf30d 0.8
-066e56d8f5caa31e15386fff6f938bedd85a8732 0.8.1
-bae0833cae75e5a641abe3c4b430fa384cd9d258 1.2
-f6e5acee4f761696676e05a9112c91a5a5670b49 1.2.1
-580c5ce755486bc92c79c50f80cfc79924e15140 1.2.2
-c62867700c9e98cc2988c62f298ec54cee9b6927 1.3
-3a3846c2503db85bb70a243c8bc702629c4bce57 1.3.1
-8ad6d35dd2ab0530a1e2c088ab7fe0e00426b5f9 1.4
-eff3aee4abff2b72564ddfde77fcc82adbba52ad 1.5
-2c262bfc66b05a8aecc1109c3acc5b9447a5213c 1.6rc1
-7c962dcb484cb73394aec7f41709940340dc8a9c 1.6
-da509a68ea620bbb8ee3f5d5cf7761375d8f4451 2.0rc1
-ed3206a773e9cb90a0edeabee8ef6b56b5b9a53c 2.0
-94e1e056c92d97e3a54759f9216e8deff22efbdd 2.0.1
-142a870bf0f1822414649ae26f433b112a5c92d5 2.0.2
-34530db252d35d7ef57a8dbb9fce7bcc46f6ba6b 2.1
-2935c3a59672e8ae74ffb7ea66ea6567f49782f6 2.1.1
-8e7ebc56153cf899067333bff4f15ae98758a2e1 2.1.2
-88527db663dce0729c2cd6e3bc2f3c657ae39254 2.1.3
* Sam Aaron -- Ioke lexer
* Ali Afshar -- image formatter
-* Thomas Aglassinger -- Easytrieve, JCL and Rexx lexers
+* Thomas Aglassinger -- Easytrieve, JCL, Rexx and Transact-SQL lexers
* Muthiah Annamalai -- Ezhil lexer
* Kumar Appaiah -- Debian control lexer
* Andreas Amann -- AppleScript lexer
* Adam Blinkinsop -- Haskell, Redcode lexers
* Frits van Bommel -- assembler lexers
* Pierre Bourdon -- bugfixes
+* Matthias Bussonnier -- ANSI style handling for terminal-256 formatter
* chebee7i -- Python traceback lexer improvements
* Hiram Chirino -- Scaml and Jade lexers
+* Mauricio Caceres -- SAS and Stata lexers
* Ian Cooper -- VGL lexer
-* David Corbett -- Inform, Jasmin, and TADS 3 lexers
+* David Corbett -- Inform, Jasmin, JSGF, Snowball, and TADS 3 lexers
* Leaf Corcoran -- MoonScript lexer
* Christopher Creutzig -- MuPAD lexer
* Daniël W. Crompton -- Pike lexer
* Alex Gilding -- BlitzBasic lexer
* Bertrand Goetzmann -- Groovy lexer
* Krzysiek Goj -- Scala lexer
+* Andrey Golovizin -- BibTeX lexers
* Matt Good -- Genshi, Cheetah lexers
* Michał Górny -- vim modeline support
* Alex Gosse -- TrafficScript lexer
* Tim Howard -- BlitzMax lexer
* Dustin Howett -- Logos lexer
* Ivan Inozemtsev -- Fantom lexer
-* Hiroaki Itoh -- Shell console rewrite, Lexers for PowerShell session, MSDOS session, BC
+* Hiroaki Itoh -- Shell console rewrite, Lexers for PowerShell session,
+ MSDOS session, BC, WDiff
* Brian R. Jackson -- Tea lexer
* Christian Jann -- ShellSession lexer
* Dennis Kaarsemaker -- sources.list lexer
* Jon Larimer, Google Inc. -- Smali lexer
* Olov Lassus -- Dart lexer
* Matt Layman -- TAP lexer
+* Kristian Lyngstøl -- Varnish lexers
* Sylvestre Ledru -- Scilab lexer
+* Chee Sing Lee -- Flatline lexer
* Mark Lee -- Vala lexer
* Valentin Lorentz -- C++ lexer improvements
* Ben Mabey -- Gherkin lexer
* Mher Movsisyan -- DTD lexer
* Dejan Muhamedagic -- Crmsh lexer
* Ana Nelson -- Ragel, ANTLR, R console lexers
+* Kurt Neufeld -- Markdown lexer
* Nam T. Nguyen -- Monokai style
* Jesper Noehr -- HTML formatter "anchorlinenos"
* Mike Nolta -- Julia lexer
* Dominik Picheta -- Nimrod lexer
* Andrew Pinkham -- RTF Formatter Refactoring
* Clément Prévost -- UrbiScript lexer
+* Tanner Prynn -- cmdline -x option and loading lexers from files
+* Oleh Prypin -- Crystal lexer (based on Ruby lexer)
* Elias Rabel -- Fortran fixed form lexer
* raichoo -- Idris lexer
* Kashif Rasul -- CUDA lexer
* Corey Richardson -- Rust lexer updates
* Lubomir Rintel -- GoodData MAQL and CL lexers
* Andre Roberge -- Tango style
+* Georg Rollinger -- HSAIL lexer
+* Michiel Roos -- TypoScript lexer
* Konrad Rudolph -- LaTeX formatter enhancements
* Mario Ruggier -- Evoque lexers
* Miikka Salminen -- Lovelace style, Hexdump lexer, lexer enhancements
* Matteo Sasso -- Common Lisp lexer
* Joe Schafer -- Ada lexer
* Ken Schutte -- Matlab lexers
+* René Schwaiger -- Rainbow Dash style
+* Sebastian Schweizer -- Whiley lexer
* Tassilo Schweyer -- Io, MOOCode lexers
* Ted Shaw -- AutoIt lexer
* Joerg Sieker -- ABAP lexer
* Robert Simmons -- Standard ML lexer
* Kirill Simonov -- YAML lexer
+* Corbin Simpson -- Monte lexer
* Alexander Smishlajev -- Visual FoxPro lexer
* Steve Spigarelli -- XQuery lexer
* Jerome St-Louis -- eC lexer
+* Camil Staps -- Clean and NuSMV lexers
* James Strachan -- Kotlin lexer
* Tom Stuart -- Treetop lexer
* Colin Sullivan -- SuperCollider lexer
+* Ben Swift -- Extempore lexer
* Edoardo Tenani -- Arduino lexer
* Tiberius Teng -- default style overhaul
* Jeremy Thurgood -- Erlang, Squid config lexers
* Brian Tiffin -- OpenCOBOL lexer
* Bob Tolbert -- Hy lexer
+* Matthias Trute -- Forth lexer
* Erick Tryzelaar -- Felix lexer
* Alexander Udalov -- Kotlin lexer improvements
* Thomas Van Doren -- Chapel lexer
* Abe Voelker -- OpenEdge ABL lexer
* Pepijn de Vos -- HTML formatter CTags support
* Matthias Vallentin -- Bro lexer
+* Benoît Vinot -- AMPL lexer
* Linh Vu Hong -- RSL lexer
* Nathan Weizenbaum -- Haml and Sass lexers
* Nathan Whetsell -- Csound lexers
pull request numbers to the requests at
<https://bitbucket.org/birkenfeld/pygments-main/pull-requests/merged>.
+Version 2.2.0
+-------------
+(released Jan 22, 2017)
+
+- Added lexers:
+
+ * AMPL
+ * TypoScript (#1173)
+ * Varnish config (PR#554)
+ * Clean (PR#503)
+ * WDiff (PR#513)
+ * Flatline (PR#551)
+ * Silver (PR#537)
+ * HSAIL (PR#518)
+ * JSGF (PR#546)
+ * NCAR command language (PR#536)
+ * Extempore (PR#530)
+ * Cap'n Proto (PR#595)
+ * Whiley (PR#573)
+ * Monte (PR#592)
+ * Crystal (PR#576)
+ * Snowball (PR#589)
+ * CapDL (PR#579)
+ * NuSMV (PR#564)
+ * SAS, Stata (PR#593)
+
+- Added the ability to load lexer and formatter classes directly from files
+ with the `-x` command line option and the `lexers.load_lexer_from_file()`
+ and `formatters.load_formatter_from_file()` functions. (PR#559)
+
+- Added `lexers.find_lexer_class_by_name()`. (#1203)
+
+- Added new token types and lexing for magic methods and variables in Python
+ and PHP.
+
+- Added a new token type for string affixes and lexing for them in Python, C++
+ and Postgresql lexers.
+
+- Added a new token type for heredoc (and similar) string delimiters and
+ lexing for them in C++, Perl, PHP, Postgresql and Ruby lexers.
+
+- Styles can now define colors with ANSI colors for use in the 256-color
+ terminal formatter. (PR#531)
+
+- Improved the CSS lexer. (#1083, #1130)
+
+- Added "Rainbow Dash" style. (PR#623)
+
+- Delay loading `pkg_resources`, which takes a long while to import. (PR#690)
+
+
Version 2.1.3
-------------
(released Mar 2, 2016)
-Copyright (c) 2006-2015 by the respective authors (see AUTHORS file).
+Copyright (c) 2006-2017 by the respective authors (see AUTHORS file).
All rights reserved.
Redistribution and use in source and binary forms, with or without
include pygmentize
include external/*
-include Makefile CHANGES LICENSE AUTHORS TODO ez_setup.py
+include Makefile CHANGES LICENSE AUTHORS TODO
recursive-include tests *
recursive-include doc *
recursive-include scripts *
#
# Combines scripts for common tasks.
#
-# :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+# :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
# :license: BSD, see LICENSE for details.
#
test-coverage:
@$(PYTHON) tests/run.py -d --with-coverage --cover-package=pygments --cover-erase $(TEST)
+test-examplefiles:
+ nosetests tests/test_examplefiles.py
+
tox-test:
@tox -- $(TEST)
Metadata-Version: 1.1
Name: Pygments
-Version: 2.1.3
+Version: 2.2.0
Summary: Pygments is a syntax highlighting package written in Python.
Home-page: http://pygments.org/
Author: Georg Brandl
* a number of output formats, presently HTML, LaTeX, RTF, SVG, all image formats that PIL supports and ANSI sequences
* it is usable as a command-line tool and as a library
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
Keywords: syntax highlighting
Metadata-Version: 1.1
Name: Pygments
-Version: 2.1.3
+Version: 2.2.0
Summary: Pygments is a syntax highlighting package written in Python.
Home-page: http://pygments.org/
Author: Georg Brandl
* a number of output formats, presently HTML, LaTeX, RTF, SVG, all image formats that PIL supports and ANSI sequences
* it is usable as a command-line tool and as a library
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
Keywords: syntax highlighting
-.hgignore
-.hgtags
AUTHORS
CHANGES
LICENSE
Makefile
README.rst
TODO
-ez_setup.py
pygmentize
-requirements.txt
setup.cfg
setup.py
-tox.ini
Pygments.egg-info/PKG-INFO
Pygments.egg-info/SOURCES.txt
Pygments.egg-info/dependency_links.txt
pygments/lexers/_scilab_builtins.py
pygments/lexers/_sourcemod_builtins.py
pygments/lexers/_stan_builtins.py
+pygments/lexers/_stata_builtins.py
+pygments/lexers/_tsql_builtins.py
pygments/lexers/_vim_builtins.py
pygments/lexers/actionscript.py
pygments/lexers/agile.py
pygments/lexers/algebra.py
pygments/lexers/ambient.py
+pygments/lexers/ampl.py
pygments/lexers/apl.py
pygments/lexers/archetype.py
pygments/lexers/asm.py
pygments/lexers/automation.py
pygments/lexers/basic.py
+pygments/lexers/bibtex.py
pygments/lexers/business.py
pygments/lexers/c_cpp.py
pygments/lexers/c_like.py
+pygments/lexers/capnproto.py
pygments/lexers/chapel.py
+pygments/lexers/clean.py
pygments/lexers/compiled.py
pygments/lexers/configs.py
pygments/lexers/console.py
+pygments/lexers/crystal.py
pygments/lexers/csound.py
pygments/lexers/css.py
pygments/lexers/d.py
pygments/lexers/factor.py
pygments/lexers/fantom.py
pygments/lexers/felix.py
+pygments/lexers/forth.py
pygments/lexers/fortran.py
pygments/lexers/foxpro.py
pygments/lexers/functional.py
pygments/lexers/ml.py
pygments/lexers/modeling.py
pygments/lexers/modula2.py
+pygments/lexers/monte.py
+pygments/lexers/ncl.py
pygments/lexers/nimrod.py
pygments/lexers/nit.py
pygments/lexers/nix.py
pygments/lexers/rdf.py
pygments/lexers/rebol.py
pygments/lexers/resource.py
+pygments/lexers/rnc.py
pygments/lexers/roboconf.py
pygments/lexers/robotframework.py
pygments/lexers/ruby.py
pygments/lexers/rust.py
+pygments/lexers/sas.py
pygments/lexers/scripting.py
pygments/lexers/shell.py
pygments/lexers/smalltalk.py
+pygments/lexers/smv.py
pygments/lexers/snobol.py
pygments/lexers/special.py
pygments/lexers/sql.py
+pygments/lexers/stata.py
pygments/lexers/supercollider.py
pygments/lexers/tcl.py
pygments/lexers/templates.py
pygments/lexers/textfmts.py
pygments/lexers/theorem.py
pygments/lexers/trafficscript.py
+pygments/lexers/typoscript.py
pygments/lexers/urbi.py
+pygments/lexers/varnish.py
+pygments/lexers/verification.py
pygments/lexers/web.py
pygments/lexers/webmisc.py
+pygments/lexers/whiley.py
pygments/lexers/x10.py
pygments/styles/__init__.py
+pygments/styles/abap.py
pygments/styles/algol.py
pygments/styles/algol_nu.py
pygments/styles/arduino.py
pygments/styles/paraiso_light.py
pygments/styles/pastie.py
pygments/styles/perldoc.py
+pygments/styles/rainbow_dash.py
pygments/styles/rrt.py
+pygments/styles/sas.py
+pygments/styles/stata.py
pygments/styles/tango.py
pygments/styles/trac.py
pygments/styles/vim.py
scripts/get_vimkw.py
scripts/pylintrc
scripts/vim2pygments.py
-tests/.coverage
tests/run.py
tests/string_asserts.py
-tests/string_asserts.pyc
tests/support.py
-tests/support.pyc
tests/test_basic_api.py
-tests/test_basic_api.pyc
+tests/test_bibtex.py
tests/test_cfm.py
-tests/test_cfm.pyc
tests/test_clexer.py
-tests/test_clexer.pyc
tests/test_cmdline.py
-tests/test_cmdline.pyc
+tests/test_cpp.py
+tests/test_crystal.py
+tests/test_data.py
tests/test_examplefiles.py
-tests/test_examplefiles.pyc
tests/test_ezhil.py
-tests/test_ezhil.pyc
tests/test_html_formatter.py
-tests/test_html_formatter.pyc
tests/test_inherit.py
-tests/test_inherit.pyc
tests/test_irc_formatter.py
-tests/test_irc_formatter.pyc
tests/test_java.py
-tests/test_java.pyc
+tests/test_javascript.py
+tests/test_julia.py
tests/test_latex_formatter.py
-tests/test_latex_formatter.pyc
tests/test_lexers_other.py
-tests/test_lexers_other.pyc
+tests/test_modeline.py
tests/test_objectiveclexer.py
-tests/test_objectiveclexer.pyc
tests/test_perllexer.py
-tests/test_perllexer.pyc
+tests/test_php.py
+tests/test_praat.py
+tests/test_properties.py
+tests/test_python.py
tests/test_qbasiclexer.py
-tests/test_qbasiclexer.pyc
tests/test_regexlexer.py
-tests/test_regexlexer.pyc
tests/test_regexopt.py
-tests/test_regexopt.pyc
tests/test_rtf_formatter.py
-tests/test_rtf_formatter.pyc
tests/test_ruby.py
-tests/test_ruby.pyc
tests/test_shell.py
-tests/test_shell.pyc
tests/test_smarty.py
-tests/test_smarty.pyc
+tests/test_sql.py
tests/test_string_asserts.py
-tests/test_string_asserts.pyc
tests/test_terminal_formatter.py
-tests/test_terminal_formatter.pyc
tests/test_textfmts.py
-tests/test_textfmts.pyc
tests/test_token.py
-tests/test_token.pyc
tests/test_unistring.py
-tests/test_unistring.pyc
tests/test_using_api.py
-tests/test_using_api.pyc
tests/test_util.py
-tests/test_util.pyc
-tests/__pycache__/string_asserts.cpython-33.pyc
-tests/__pycache__/string_asserts.cpython-35.pyc
-tests/__pycache__/support.cpython-33.pyc
-tests/__pycache__/support.cpython-35.pyc
-tests/__pycache__/test_basic_api.cpython-33.pyc
-tests/__pycache__/test_basic_api.cpython-35.pyc
-tests/__pycache__/test_cfm.cpython-33.pyc
-tests/__pycache__/test_cfm.cpython-35.pyc
-tests/__pycache__/test_clexer.cpython-33.pyc
-tests/__pycache__/test_clexer.cpython-35.pyc
-tests/__pycache__/test_cmdline.cpython-33.pyc
-tests/__pycache__/test_cmdline.cpython-35.pyc
-tests/__pycache__/test_examplefiles.cpython-33.pyc
-tests/__pycache__/test_examplefiles.cpython-35.pyc
-tests/__pycache__/test_ezhil.cpython-35.pyc
-tests/__pycache__/test_html_formatter.cpython-33.pyc
-tests/__pycache__/test_html_formatter.cpython-35.pyc
-tests/__pycache__/test_inherit.cpython-33.pyc
-tests/__pycache__/test_inherit.cpython-35.pyc
-tests/__pycache__/test_irc_formatter.cpython-35.pyc
-tests/__pycache__/test_java.cpython-33.pyc
-tests/__pycache__/test_java.cpython-35.pyc
-tests/__pycache__/test_latex_formatter.cpython-33.pyc
-tests/__pycache__/test_latex_formatter.cpython-35.pyc
-tests/__pycache__/test_lexers_other.cpython-33.pyc
-tests/__pycache__/test_lexers_other.cpython-35.pyc
-tests/__pycache__/test_objectiveclexer.cpython-33.pyc
-tests/__pycache__/test_objectiveclexer.cpython-35.pyc
-tests/__pycache__/test_perllexer.cpython-33.pyc
-tests/__pycache__/test_perllexer.cpython-35.pyc
-tests/__pycache__/test_qbasiclexer.cpython-33.pyc
-tests/__pycache__/test_qbasiclexer.cpython-35.pyc
-tests/__pycache__/test_regexlexer.cpython-33.pyc
-tests/__pycache__/test_regexlexer.cpython-35.pyc
-tests/__pycache__/test_regexopt.cpython-33.pyc
-tests/__pycache__/test_regexopt.cpython-35.pyc
-tests/__pycache__/test_rtf_formatter.cpython-33.pyc
-tests/__pycache__/test_rtf_formatter.cpython-35.pyc
-tests/__pycache__/test_ruby.cpython-33.pyc
-tests/__pycache__/test_ruby.cpython-35.pyc
-tests/__pycache__/test_shell.cpython-33.pyc
-tests/__pycache__/test_shell.cpython-35.pyc
-tests/__pycache__/test_smarty.cpython-33.pyc
-tests/__pycache__/test_smarty.cpython-35.pyc
-tests/__pycache__/test_string_asserts.cpython-33.pyc
-tests/__pycache__/test_string_asserts.cpython-35.pyc
-tests/__pycache__/test_terminal_formatter.cpython-35.pyc
-tests/__pycache__/test_textfmts.cpython-33.pyc
-tests/__pycache__/test_textfmts.cpython-35.pyc
-tests/__pycache__/test_token.cpython-33.pyc
-tests/__pycache__/test_token.cpython-35.pyc
-tests/__pycache__/test_unistring.cpython-33.pyc
-tests/__pycache__/test_unistring.cpython-35.pyc
-tests/__pycache__/test_using_api.cpython-33.pyc
-tests/__pycache__/test_using_api.cpython-35.pyc
-tests/__pycache__/test_util.cpython-33.pyc
-tests/__pycache__/test_util.cpython-35.pyc
-tests/cover/coverage_html.js
-tests/cover/jquery.hotkeys.js
-tests/cover/jquery.isonscreen.js
-tests/cover/jquery.min.js
-tests/cover/jquery.tablesorter.min.js
-tests/cover/keybd_closed.png
-tests/cover/keybd_open.png
-tests/cover/status.dat
-tests/cover/style.css
+tests/test_whiley.py
tests/dtds/HTML4-f.dtd
tests/dtds/HTML4-s.dtd
tests/dtds/HTML4.dcl
tests/examplefiles/RoleQ.pm6
tests/examplefiles/SmallCheck.hs
tests/examplefiles/Sorting.mod
+tests/examplefiles/StdGeneric.icl
tests/examplefiles/Sudoku.lhs
tests/examplefiles/abnf_example1.abnf
tests/examplefiles/abnf_example2.abnf
tests/examplefiles/bnf_example1.bnf
tests/examplefiles/boot-9.scm
tests/examplefiles/ca65_example
+tests/examplefiles/capdl_example.cdl
tests/examplefiles/cbmbas_example
tests/examplefiles/cells.ps
tests/examplefiles/ceval.c
tests/examplefiles/demo.ahk
tests/examplefiles/demo.cfm
tests/examplefiles/demo.css.in
+tests/examplefiles/demo.frt
tests/examplefiles/demo.hbs
tests/examplefiles/demo.js.in
tests/examplefiles/demo.thrift
tests/examplefiles/demo.xul.in
tests/examplefiles/django_sample.html+django
tests/examplefiles/docker.docker
+tests/examplefiles/durexmania.aheui
tests/examplefiles/dwarf.cw
tests/examplefiles/eg_example1.eg
tests/examplefiles/ember.handlebars
tests/examplefiles/example.jag
tests/examplefiles/example.java
tests/examplefiles/example.jcl
+tests/examplefiles/example.jsgf
tests/examplefiles/example.jsonld
+tests/examplefiles/example.juttle
tests/examplefiles/example.kal
tests/examplefiles/example.kt
tests/examplefiles/example.lagda
tests/examplefiles/example.lua
tests/examplefiles/example.ma
tests/examplefiles/example.mac
+tests/examplefiles/example.md
tests/examplefiles/example.monkey
tests/examplefiles/example.moo
tests/examplefiles/example.moon
tests/examplefiles/example.mq4
tests/examplefiles/example.mqh
tests/examplefiles/example.msc
+tests/examplefiles/example.ng2
tests/examplefiles/example.ni
tests/examplefiles/example.nim
tests/examplefiles/example.nix
tests/examplefiles/example.rkt
tests/examplefiles/example.rpf
tests/examplefiles/example.rts
+tests/examplefiles/example.sbl
tests/examplefiles/example.scd
tests/examplefiles/example.sh
tests/examplefiles/example.sh-session
tests/examplefiles/example.snobol
tests/examplefiles/example.stan
tests/examplefiles/example.tap
+tests/examplefiles/example.tasm
tests/examplefiles/example.tea
tests/examplefiles/example.tf
tests/examplefiles/example.thy
tests/examplefiles/example.todotxt
-tests/examplefiles/example.ts
tests/examplefiles/example.ttl
tests/examplefiles/example.u
tests/examplefiles/example.weechatlog
+tests/examplefiles/example.whiley
tests/examplefiles/example.x10
tests/examplefiles/example.xhtml
tests/examplefiles/example.xtend
+tests/examplefiles/example.xtm
tests/examplefiles/example.yaml
tests/examplefiles/example1.cadl
tests/examplefiles/example2.aspx
+tests/examplefiles/example2.cpp
tests/examplefiles/example2.msc
tests/examplefiles/exampleScript.cfc
tests/examplefiles/exampleTag.cfc
tests/examplefiles/example_elixir.ex
tests/examplefiles/example_file.fy
tests/examplefiles/ezhil_primefactors.n
+tests/examplefiles/fibonacci.tokigun.aheui
tests/examplefiles/firefox.mak
+tests/examplefiles/flatline_example
tests/examplefiles/flipflop.sv
tests/examplefiles/foo.sce
tests/examplefiles/format.ml
tests/examplefiles/glsl.frag
tests/examplefiles/glsl.vert
tests/examplefiles/grammar-test.p6
+tests/examplefiles/guidance.smv
tests/examplefiles/hash_syntax.rb
+tests/examplefiles/hello-world.puzzlet.aheui
tests/examplefiles/hello.at
tests/examplefiles/hello.golo
tests/examplefiles/hello.lsl
tests/examplefiles/phpMyAdmin.spec
tests/examplefiles/phpcomplete.vim
tests/examplefiles/pkgconfig_example.pc
+tests/examplefiles/plain.bst
tests/examplefiles/pleac.in.rb
tests/examplefiles/postgresql_test.txt
tests/examplefiles/pppoe.applescript
tests/examplefiles/regex.js
tests/examplefiles/resourcebundle_demo
tests/examplefiles/reversi.lsp
+tests/examplefiles/rnc_example.rnc
tests/examplefiles/roboconf.graph
tests/examplefiles/roboconf.instances
tests/examplefiles/robotframework_test.txt
tests/examplefiles/test.asy
tests/examplefiles/test.awk
tests/examplefiles/test.bb
+tests/examplefiles/test.bib
tests/examplefiles/test.bmx
tests/examplefiles/test.boo
tests/examplefiles/test.bpl
tests/examplefiles/test.bro
tests/examplefiles/test.cadl
+tests/examplefiles/test.cr
tests/examplefiles/test.cs
tests/examplefiles/test.csd
tests/examplefiles/test.css
tests/examplefiles/test.ec
tests/examplefiles/test.eh
tests/examplefiles/test.erl
+tests/examplefiles/test.escript
tests/examplefiles/test.evoque
tests/examplefiles/test.fan
tests/examplefiles/test.flx
tests/examplefiles/test.gdc
tests/examplefiles/test.gradle
tests/examplefiles/test.groovy
+tests/examplefiles/test.hsail
tests/examplefiles/test.html
tests/examplefiles/test.idr
tests/examplefiles/test.ini
tests/examplefiles/test.mask
tests/examplefiles/test.mod
tests/examplefiles/test.moo
+tests/examplefiles/test.mt
tests/examplefiles/test.myt
+tests/examplefiles/test.ncl
tests/examplefiles/test.nim
tests/examplefiles/test.odin
tests/examplefiles/test.opa
tests/examplefiles/test.scaml
tests/examplefiles/test.sco
tests/examplefiles/test.shen
+tests/examplefiles/test.sil
tests/examplefiles/test.ssp
tests/examplefiles/test.swift
tests/examplefiles/test.tcsh
tests/examplefiles/test2.pypylog
tests/examplefiles/test_basic.adls
tests/examplefiles/truncated.pytb
+tests/examplefiles/tsql_example.sql
tests/examplefiles/twig_test
tests/examplefiles/type.lisp
+tests/examplefiles/typescript_example
+tests/examplefiles/typoscript_example
tests/examplefiles/underscore.coffee
tests/examplefiles/unicode.applescript
tests/examplefiles/unicode.go
tests/examplefiles/unicode.js
tests/examplefiles/unicodedoc.py
tests/examplefiles/unix-io.lid
+tests/examplefiles/varnish.vcl
tests/examplefiles/vbnet_test.bas
tests/examplefiles/vctreestatus_hg
tests/examplefiles/vimrc
tests/examplefiles/vpath.mk
+tests/examplefiles/wdiff_example1.wdiff
+tests/examplefiles/wdiff_example3.wdiff
tests/examplefiles/webkit-transition.css
tests/examplefiles/while.pov
tests/examplefiles/wiki.factor
tests/examplefiles/xml_example
tests/examplefiles/yahalom.cpsa
tests/examplefiles/zmlrpc.f90
+tests/support/empty.py
+tests/support/html_formatter.py
+tests/support/python_lexer.py
tests/support/tags
\ No newline at end of file
{% block footer %}
<div class="footer" role="contentinfo">
- © Copyright 2006-2015, Georg Brandl and Pygments contributors.
+ © Copyright 2006-2017, Georg Brandl and Pygments contributors.
Created using <a href="http://sphinx-doc.org/">Sphinx</a> {{
sphinx_version }}. <br/>
Pygments logo created by <a href="http://joelunger.com">Joel Unger</a>.
*
* Sphinx stylesheet -- pygments14 theme. Heavily copied from sphinx13.
*
- * :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ * :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
* :license: BSD, see LICENSE for details.
*
*/
Will raise :exc:`pygments.util.ClassNotFound` if no lexer for that mimetype
is found.
+.. function:: load_lexer_from_file(filename, lexername="CustomLexer", **options)
+
+ Return a `Lexer` subclass instance loaded from the provided file, relative
+ to the current directory. The file is expected to contain a Lexer class
+ named `lexername` (by default, CustomLexer). Users should be very careful with
+ the input, because this method is equivalent to running eval on the input file.
+ The lexer is given the `options` at its instantiation.
+
+ :exc:`ClassNotFound` is raised if there are any errors loading the Lexer.
+
+ .. versionadded:: 2.2
+
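A minimal sketch of how this function can be used (assuming Pygments 2.2+ is installed; the file name ``mylexer.py`` and the lexer body are hypothetical, written to a temporary directory here only to keep the example self-contained):

```python
import os
import tempfile
import textwrap

from pygments.lexers import load_lexer_from_file

# A minimal lexer module; in practice this would be your own file on disk.
LEXER_SRC = textwrap.dedent(r'''
    from pygments.lexer import RegexLexer
    from pygments.token import Text

    class CustomLexer(RegexLexer):
        name = 'MyExample'
        tokens = {'root': [(r'.+\n?', Text)]}
''')

with tempfile.TemporaryDirectory() as tmpdir:
    path = os.path.join(tmpdir, 'mylexer.py')
    with open(path, 'w') as f:
        f.write(LEXER_SRC)
    # The default class name looked up in the file is CustomLexer.
    lexer = load_lexer_from_file(path)
    print(lexer.name)
```

Only load files you trust this way; as noted above, the file is effectively executed.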
.. function:: guess_lexer(text, **options)
Return a `Lexer` subclass instance that's guessed from the text in
.. versionadded:: 0.6
+.. function:: find_lexer_class_by_name(alias)
+
+ Return the `Lexer` subclass that has `alias` in its aliases list, without
+ instantiating it.
+
+ Will raise :exc:`pygments.util.ClassNotFound` if no lexer with that alias is
+ found.
+
+ .. versionadded:: 2.2
+
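For illustration, a quick use of the lookup (assuming Pygments 2.2+):

```python
from pygments.lexers import find_lexer_class_by_name
from pygments.util import ClassNotFound

# Look up the lexer class registered under the 'python' alias
# without instantiating it.
cls = find_lexer_class_by_name('python')
print(cls.__name__)

# Unknown aliases raise ClassNotFound.
try:
    find_lexer_class_by_name('no-such-language')
except ClassNotFound:
    print('not found')
```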
+.. function:: find_lexer_class(name)
+
+ Return the `Lexer` subclass whose *name* attribute matches the given
+ *name* argument.
+
.. module:: pygments.formatters
Will raise :exc:`pygments.util.ClassNotFound` if no formatter for that filename
is found.
+.. function:: load_formatter_from_file(filename, formattername="CustomFormatter", **options)
+
+ Return a `Formatter` subclass instance loaded from the provided file, relative
+ to the current directory. The file is expected to contain a Formatter class
+ named ``formattername`` (by default, ``CustomFormatter``). Users should be very
+ careful with the input, because this method is equivalent to running ``eval``
+ on the input file. The formatter is given the `options` at its instantiation.
+
+ :exc:`ClassNotFound` is raised if there are any errors loading the Formatter.
+
+ .. versionadded:: 2.2
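A sketch of the formatter counterpart (assuming Pygments 2.2+; the file name ``myformatter.py`` and the formatter body are hypothetical, written to a temporary directory only to keep the example self-contained):

```python
import os
import tempfile
import textwrap

from pygments.formatters import load_formatter_from_file

# A minimal formatter module; in practice this would be your own file.
FORMATTER_SRC = textwrap.dedent('''
    from pygments.formatter import Formatter

    class CustomFormatter(Formatter):
        name = 'MyFormatter'

        def format(self, tokensource, outfile):
            # Write token values back out unchanged.
            for ttype, value in tokensource:
                outfile.write(value)
''')

with tempfile.TemporaryDirectory() as tmpdir:
    path = os.path.join(tmpdir, 'myformatter.py')
    with open(path, 'w') as f:
        f.write(FORMATTER_SRC)
    # The default class name looked up in the file is CustomFormatter.
    formatter = load_formatter_from_file(path)
    print(formatter.name)
```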
.. module:: pygments.styles
will print out ``python``. It won't highlight anything yet. If no specific
lexer is known for that filename, ``text`` is printed.
+Custom Lexers and Formatters
+----------------------------
+
+.. versionadded:: 2.2
+
+The ``-x`` flag enables custom lexers and formatters to be loaded
+from files relative to the current directory. Create a file with a class named
+CustomLexer or CustomFormatter, then specify it on the command line::
+
+ $ pygmentize -l your_lexer.py -f your_formatter.py -x
+
+You can also specify the name of your class with a colon::
+
+ $ pygmentize -l your_lexer.py:SomeLexer -x
+
+For more information, see :doc:`the Pygments documentation on Lexer development
+<lexerdevelopment>`.
Getting help
------------
Adding and testing a new lexer
==============================
-To make Pygments aware of your new lexer, you have to perform the following
-steps:
+The easiest way to use a new lexer is to use Pygments' support for loading
+the lexer from a file relative to your current directory.
-First, change to the current directory containing the Pygments source code:
+First, change the name of your lexer class to CustomLexer:
+
+.. code-block:: python
+
+ from pygments.lexer import RegexLexer
+ from pygments.token import *
+
+ class CustomLexer(RegexLexer):
+ """All your lexer code goes here!"""
+
+Then you can load the lexer from the command line with the additional
+flag ``-x``:
+
+.. code-block:: console
+
+ $ pygmentize -l your_lexer_file.py -x
+
+To specify a class name other than CustomLexer, append it with a colon:
+
+.. code-block:: console
+
+ $ pygmentize -l your_lexer.py:SomeLexer -x
+
+Or, using the Python API:
+
+.. code-block:: python
+
+ # For a lexer named CustomLexer
+ your_lexer = load_lexer_from_file(filename, **options)
+
+ # For a lexer named MyNewLexer
+ your_named_lexer = load_lexer_from_file(filename, "MyNewLexer", **options)
+
+When loading custom lexers and formatters, be extremely careful to use only
+trusted files; Pygments will perform the equivalent of ``eval`` on them.
+
+If you only want to use your lexer with the Pygments API, you can import and
+instantiate the lexer yourself, then pass it to :func:`pygments.highlight`.
+
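For instance, using the API directly (a sketch with a built-in lexer standing in for your own class; any lexer instance works the same way):

```python
from pygments import highlight
from pygments.formatters import HtmlFormatter
from pygments.lexers.python import PythonLexer  # stand-in for your custom lexer

# Instantiate the lexer yourself and pass it straight to highlight().
html = highlight('print("hi")', PythonLexer(), HtmlFormatter())
print(html)
```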
+To prepare your new lexer for inclusion in the Pygments distribution, so that it
+will be found when passing filenames or lexer aliases from the command line, you
+have to perform the following steps.
+
+First, change to the directory containing the Pygments source code. You will
+need to have either an unpacked source tarball, or (preferably) a copy cloned
+from BitBucket.
.. code-block:: console
your lexer class.
Next, make sure the lexer is known from outside of the module. All modules in
-the ``pygments.lexers`` specify ``__all__``. For example, ``esoteric.py`` sets::
+the ``pygments.lexers`` package specify ``__all__``. For example,
+``esoteric.py`` sets::
__all__ = ['BrainfuckLexer', 'BefungeLexer', ...]
-Simply add the name of your lexer class to this list.
+Add the name of your lexer class to this list (or create the list if your lexer
+is the only class in the module).
Finally the lexer can be made publicly known by rebuilding the lexer mapping:
tokens = {...}
def get_tokens_unprocessed(self, text, stack=('root', 'otherstate')):
- for item in RegexLexer.get_tokens_unprocessed(text, stack):
+ for item in RegexLexer.get_tokens_unprocessed(self, text, stack):
yield item
Some lexers like the `PhpLexer` use this to make the leading ``<?php``
If this option is set to ``"guess"``, a simple UTF-8 vs. Latin-1
detection is used, if it is set to ``"chardet"``, the
- `chardet library <http://chardet.feedparser.org/>`_ is used to
+ `chardet library <https://chardet.github.io/>`_ is used to
guess the encoding of the input.
.. versionadded:: 0.6
>>> from pygments.styles import get_all_styles
>>> styles = list(get_all_styles())
+
+
+.. _AnsiTerminalStyle:
+
+Terminal Styles
+===============
+
+.. versionadded:: 2.2
+
+Custom styles used with the 256-color terminal formatter can also map colors to
+use the 8 default ANSI colors. To do so, use ``#ansigreen``, ``#ansired`` or
+any other colors defined in :attr:`pygments.style.ansicolors`. Foreground ANSI
+colors will be mapped to the corresponding `escape codes 30 to 37
+<https://en.wikipedia.org/wiki/ANSI_escape_code#Colors>`_ thus respecting any
+custom color mapping and themes provided by many terminal emulators. Light
+variants are treated as foreground colors with an added bold flag.
+``bg:#ansi<color>`` will also be respected, except that light background
+variants are rendered in the same shade as their dark counterparts.
+
+See the following example, where the color of the string ``"Hello World"`` is
+governed by the escape sequence ``\x1b[34;41;01m`` (ANSI blue foreground, red
+background (code 41), bold) instead of an extended foreground & background
+color.
+
+.. sourcecode:: pycon
+
+ >>> from pygments import highlight
+ >>> from pygments.style import Style
+ >>> from pygments.token import Token
+ >>> from pygments.lexers import Python3Lexer
+ >>> from pygments.formatters import Terminal256Formatter
+
+    >>> class MyStyle(Style):
+    ...     styles = {
+    ...         Token.String: '#ansiblue bg:#ansired',
+    ...     }
+
+ >>> code = 'print("Hello World")'
+ >>> result = highlight(code, Python3Lexer(), Terminal256Formatter(style=MyStyle))
+ >>> print(result.encode())
+ b'\x1b[34;41;01m"\x1b[39;49;00m\x1b[34;41;01mHello World\x1b[39;49;00m\x1b[34;41;01m"\x1b[39;49;00m'
+
+Colors specified using ``#ansi*`` are converted to a default set of RGB colors
+when used with formatters other than the terminal-256 formatter.
+
+By definition of ANSI, the following colors are considered "light" colors, and
+will be rendered by most terminals as bold:
+
+- "darkgray", "red", "green", "yellow", "blue", "fuchsia", "turquoise", "white"
+
+The following are considered "dark" colors and will be rendered as non-bold:
+
+- "black", "darkred", "darkgreen", "brown", "darkblue", "purple", "teal",
+ "lightgray"
+
+Exact behavior might depend on the terminal emulator you are using and its
+settings.
`Name.Function`
Token type for function names.
+`Name.Function.Magic`
+ same as `Name.Function` but for special function names that have an implicit
+ use in a language (e.g. the ``__init__`` method in Python).
+
`Name.Label`
Token type for label names (e.g. in languages that support ``goto``).
`Name.Variable.Instance`
same as `Name.Variable` but for instance variables.
+`Name.Variable.Magic`
+ same as `Name.Variable` but for special variable names that have an implicit use
+ in a language (e.g. ``__doc__`` in Python).
+
Literals
========
`String`
For any string literal.
+`String.Affix`
+ Token type for affixes that further specify the type of the string they're
+ attached to (e.g. the prefixes ``r`` and ``u8`` in ``r"foo"`` and ``u8"foo"``).
+
`String.Backtick`
Token type for strings enclosed in backticks.
`String.Char`
Token type for single characters (e.g. Java, C).
+`String.Delimiter`
+ Token type for delimiting identifiers in "heredoc", raw and other similar
+ strings (e.g. the word ``END`` in Perl code ``print <<'END';``).
+
`String.Doc`
Token type for documentation strings (for example Python).
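As a quick illustration of the new ``String.Affix`` type (a sketch assuming Pygments 2.2 or later, where this token type was introduced), the prefix of a raw string literal is emitted as its own token:

```python
from pygments.lexers import Python3Lexer
from pygments.token import Token

# Tokenize a raw string literal; the leading "r" is emitted as
# Token.Literal.String.Affix, separate from the string body.
tokens = list(Python3Lexer().get_tokens('r"foo"'))
for ttype, value in tokens:
    print(ttype, repr(value))
```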
options dict with lexers and formatters, and still have different input and
output encodings.
-.. _chardet: http://chardet.feedparser.org/
+.. _chardet: https://chardet.github.io/
* Common Lisp
* Coq
* Cryptol (incl. Literate Cryptol)
+* `Crystal <http://crystal-lang.org>`_
* `Cython <http://cython.org>`_
* `D <http://dlang.org>`_
* Dart
* Delphi
* Dylan
+* `Elm <http://elm-lang.org/>`_
* Erlang
* `Ezhil <http://ezhillang.org>`_ Ezhil - A Tamil programming language
* Factor
#!/bin/bash
# Best effort auto-pygmentization with transparent decompression
-# by Reuben Thomas 2008-2015
+# by Reuben Thomas 2008-2016
# This program is in the public domain.
# Strategy: first see if pygmentize can find a lexer; if not, ask file; if that finds nothing, fail
text/x-awk) lexer=awk;;
text/x-c) lexer=c;;
text/x-c++) lexer=cpp;;
+ text/x-crystal) lexer=crystal;;
text/x-diff) lexer=diff;;
text/x-fortran) lexer=fortran;;
text/x-gawk) lexer=gawk;;
esac
fi
+# Find a preprocessor for compressed files
+concat=cat
+case $(file $file_common_opts --mime-type "$file") in
+ application/x-gzip) concat=zcat;;
+ application/x-bzip2) concat=bzcat;;
+ application/x-xz) concat=xzcat;;
+esac
+
+# Find a suitable lexer, preceded by a hex dump for binary files
+prereader=""
encoding=$(file --mime-encoding --uncompress $file_common_opts "$file")
-if [[ $encoding == "us-asciibinarybinary" ]]; then
- encoding="us-ascii"
+if [[ $encoding == "binary" ]]; then
+ prereader="od -x" # POSIX fallback
+ if [[ -n $(which hd) ]]; then
+ prereader="hd" # preferred
+ fi
+ lexer=hexdump
+ encoding=latin1
fi
-
if [[ -n "$lexer" ]]; then
- concat=cat
- case $(file $file_common_opts --mime-type "$file") in
- application/x-gzip) concat=zcat;;
- application/x-bzip2) concat=bzcat;;
- application/x-xz) concat=xzcat;;
- esac
- exec $concat "$file" | pygmentize -O inencoding=$encoding $PYGMENTIZE_OPTS $options -l $lexer
+ reader="pygmentize -O inencoding=$encoding $PYGMENTIZE_OPTS $options -l $lexer"
+fi
+
+# If we found a reader, run it
+if [[ -n "$reader" ]]; then
+ if [[ -n "$prereader" ]]; then
+ exec $concat "$file" | $prereader | $reader
+ else
+ exec $concat "$file" | $reader
+ fi
fi
exit 1
.. _Markdown: https://pypi.python.org/pypi/Markdown
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
If you do not want to do that and are willing to accept larger HTML
output, you can set the INLINESTYLES option below to True.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
.. _directive documentation:
http://docutils.sourceforge.net/docs/howto/rst-directives.html
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
+++ /dev/null
-#!python
-"""Bootstrap setuptools installation
-
-If you want to use setuptools in your package's setup.py, just include this
-file in the same directory with it, and add this to the top of your setup.py::
-
- from ez_setup import use_setuptools
- use_setuptools()
-
-If you want to require a specific version of setuptools, set a download
-mirror, or use an alternate download directory, you can do so by supplying
-the appropriate options to ``use_setuptools()``.
-
-This file can also be run as a script to install or upgrade setuptools.
-"""
-import os
-import shutil
-import sys
-import tempfile
-import tarfile
-import optparse
-import subprocess
-import platform
-
-from distutils import log
-
-try:
- from site import USER_SITE
-except ImportError:
- USER_SITE = None
-
-DEFAULT_VERSION = "1.4.2"
-DEFAULT_URL = "https://pypi.python.org/packages/source/s/setuptools/"
-
-def _python_cmd(*args):
- args = (sys.executable,) + args
- return subprocess.call(args) == 0
-
-def _check_call_py24(cmd, *args, **kwargs):
- res = subprocess.call(cmd, *args, **kwargs)
- class CalledProcessError(Exception):
- pass
- if not res == 0:
- msg = "Command '%s' return non-zero exit status %d" % (cmd, res)
- raise CalledProcessError(msg)
-vars(subprocess).setdefault('check_call', _check_call_py24)
-
-def _install(tarball, install_args=()):
- # extracting the tarball
- tmpdir = tempfile.mkdtemp()
- log.warn('Extracting in %s', tmpdir)
- old_wd = os.getcwd()
- try:
- os.chdir(tmpdir)
- tar = tarfile.open(tarball)
- _extractall(tar)
- tar.close()
-
- # going in the directory
- subdir = os.path.join(tmpdir, os.listdir(tmpdir)[0])
- os.chdir(subdir)
- log.warn('Now working in %s', subdir)
-
- # installing
- log.warn('Installing Setuptools')
- if not _python_cmd('setup.py', 'install', *install_args):
- log.warn('Something went wrong during the installation.')
- log.warn('See the error message above.')
- # exitcode will be 2
- return 2
- finally:
- os.chdir(old_wd)
- shutil.rmtree(tmpdir)
-
-
-def _build_egg(egg, tarball, to_dir):
- # extracting the tarball
- tmpdir = tempfile.mkdtemp()
- log.warn('Extracting in %s', tmpdir)
- old_wd = os.getcwd()
- try:
- os.chdir(tmpdir)
- tar = tarfile.open(tarball)
- _extractall(tar)
- tar.close()
-
- # going in the directory
- subdir = os.path.join(tmpdir, os.listdir(tmpdir)[0])
- os.chdir(subdir)
- log.warn('Now working in %s', subdir)
-
- # building an egg
- log.warn('Building a Setuptools egg in %s', to_dir)
- _python_cmd('setup.py', '-q', 'bdist_egg', '--dist-dir', to_dir)
-
- finally:
- os.chdir(old_wd)
- shutil.rmtree(tmpdir)
- # returning the result
- log.warn(egg)
- if not os.path.exists(egg):
- raise IOError('Could not build the egg.')
-
-
-def _do_download(version, download_base, to_dir, download_delay):
- egg = os.path.join(to_dir, 'setuptools-%s-py%d.%d.egg'
- % (version, sys.version_info[0], sys.version_info[1]))
- if not os.path.exists(egg):
- tarball = download_setuptools(version, download_base,
- to_dir, download_delay)
- _build_egg(egg, tarball, to_dir)
- sys.path.insert(0, egg)
-
- # Remove previously-imported pkg_resources if present (see
- # https://bitbucket.org/pypa/setuptools/pull-request/7/ for details).
- if 'pkg_resources' in sys.modules:
- del sys.modules['pkg_resources']
-
- import setuptools
- setuptools.bootstrap_install_from = egg
-
-
-def use_setuptools(version=DEFAULT_VERSION, download_base=DEFAULT_URL,
- to_dir=os.curdir, download_delay=15):
- # making sure we use the absolute path
- to_dir = os.path.abspath(to_dir)
- was_imported = 'pkg_resources' in sys.modules or \
- 'setuptools' in sys.modules
- try:
- import pkg_resources
- except ImportError:
- return _do_download(version, download_base, to_dir, download_delay)
- try:
- pkg_resources.require("setuptools>=" + version)
- return
- except pkg_resources.VersionConflict:
- e = sys.exc_info()[1]
- if was_imported:
- sys.stderr.write(
- "The required version of setuptools (>=%s) is not available,\n"
- "and can't be installed while this script is running. Please\n"
- "install a more recent version first, using\n"
- "'easy_install -U setuptools'."
- "\n\n(Currently using %r)\n" % (version, e.args[0]))
- sys.exit(2)
- else:
- del pkg_resources, sys.modules['pkg_resources'] # reload ok
- return _do_download(version, download_base, to_dir,
- download_delay)
- except pkg_resources.DistributionNotFound:
- return _do_download(version, download_base, to_dir,
- download_delay)
-
-def _clean_check(cmd, target):
- """
- Run the command to download target. If the command fails, clean up before
- re-raising the error.
- """
- try:
- subprocess.check_call(cmd)
- except subprocess.CalledProcessError:
- if os.access(target, os.F_OK):
- os.unlink(target)
- raise
-
-def download_file_powershell(url, target):
- """
- Download the file at url to target using Powershell (which will validate
- trust). Raise an exception if the command cannot complete.
- """
- target = os.path.abspath(target)
- cmd = [
- 'powershell',
- '-Command',
- "(new-object System.Net.WebClient).DownloadFile(%(url)r, %(target)r)" % vars(),
- ]
- _clean_check(cmd, target)
-
-def has_powershell():
- if platform.system() != 'Windows':
- return False
- cmd = ['powershell', '-Command', 'echo test']
- devnull = open(os.path.devnull, 'wb')
- try:
- try:
- subprocess.check_call(cmd, stdout=devnull, stderr=devnull)
- except:
- return False
- finally:
- devnull.close()
- return True
-
-download_file_powershell.viable = has_powershell
-
-def download_file_curl(url, target):
- cmd = ['curl', url, '--silent', '--output', target]
- _clean_check(cmd, target)
-
-def has_curl():
- cmd = ['curl', '--version']
- devnull = open(os.path.devnull, 'wb')
- try:
- try:
- subprocess.check_call(cmd, stdout=devnull, stderr=devnull)
- except:
- return False
- finally:
- devnull.close()
- return True
-
-download_file_curl.viable = has_curl
-
-def download_file_wget(url, target):
- cmd = ['wget', url, '--quiet', '--output-document', target]
- _clean_check(cmd, target)
-
-def has_wget():
- cmd = ['wget', '--version']
- devnull = open(os.path.devnull, 'wb')
- try:
- try:
- subprocess.check_call(cmd, stdout=devnull, stderr=devnull)
- except:
- return False
- finally:
- devnull.close()
- return True
-
-download_file_wget.viable = has_wget
-
-def download_file_insecure(url, target):
- """
- Use Python to download the file, even though it cannot authenticate the
- connection.
- """
- try:
- from urllib.request import urlopen
- except ImportError:
- from urllib2 import urlopen
- src = dst = None
- try:
- src = urlopen(url)
- # Read/write all in one block, so we don't create a corrupt file
- # if the download is interrupted.
- data = src.read()
- dst = open(target, "wb")
- dst.write(data)
- finally:
- if src:
- src.close()
- if dst:
- dst.close()
-
-download_file_insecure.viable = lambda: True
-
-def get_best_downloader():
- downloaders = [
- download_file_powershell,
- download_file_curl,
- download_file_wget,
- download_file_insecure,
- ]
-
- for dl in downloaders:
- if dl.viable():
- return dl
-
-def download_setuptools(version=DEFAULT_VERSION, download_base=DEFAULT_URL,
- to_dir=os.curdir, delay=15,
- downloader_factory=get_best_downloader):
- """Download setuptools from a specified location and return its filename
-
- `version` should be a valid setuptools version number that is available
- as an egg for download under the `download_base` URL (which should end
- with a '/'). `to_dir` is the directory where the egg will be downloaded.
- `delay` is the number of seconds to pause before an actual download
- attempt.
-
- ``downloader_factory`` should be a function taking no arguments and
- returning a function for downloading a URL to a target.
- """
- # making sure we use the absolute path
- to_dir = os.path.abspath(to_dir)
- tgz_name = "setuptools-%s.tar.gz" % version
- url = download_base + tgz_name
- saveto = os.path.join(to_dir, tgz_name)
- if not os.path.exists(saveto): # Avoid repeated downloads
- log.warn("Downloading %s", url)
- downloader = downloader_factory()
- downloader(url, saveto)
- return os.path.realpath(saveto)
-
-
-def _extractall(self, path=".", members=None):
- """Extract all members from the archive to the current working
- directory and set owner, modification time and permissions on
- directories afterwards. `path' specifies a different directory
- to extract to. `members' is optional and must be a subset of the
- list returned by getmembers().
- """
- import copy
- import operator
- from tarfile import ExtractError
- directories = []
-
- if members is None:
- members = self
-
- for tarinfo in members:
- if tarinfo.isdir():
- # Extract directories with a safe mode.
- directories.append(tarinfo)
- tarinfo = copy.copy(tarinfo)
- tarinfo.mode = 448 # decimal for oct 0700
- self.extract(tarinfo, path)
-
- # Reverse sort directories.
- if sys.version_info < (2, 4):
- def sorter(dir1, dir2):
- return cmp(dir1.name, dir2.name)
- directories.sort(sorter)
- directories.reverse()
- else:
- directories.sort(key=operator.attrgetter('name'), reverse=True)
-
- # Set correct owner, mtime and filemode on directories.
- for tarinfo in directories:
- dirpath = os.path.join(path, tarinfo.name)
- try:
- self.chown(tarinfo, dirpath)
- self.utime(tarinfo, dirpath)
- self.chmod(tarinfo, dirpath)
- except ExtractError:
- e = sys.exc_info()[1]
- if self.errorlevel > 1:
- raise
- else:
- self._dbg(1, "tarfile: %s" % e)
-
-
-def _build_install_args(options):
- """
- Build the arguments to 'python setup.py install' on the setuptools package
- """
- install_args = []
- if options.user_install:
- if sys.version_info < (2, 6):
- log.warn("--user requires Python 2.6 or later")
- raise SystemExit(1)
- install_args.append('--user')
- return install_args
-
-def _parse_args():
- """
- Parse the command line for options
- """
- parser = optparse.OptionParser()
- parser.add_option(
- '--user', dest='user_install', action='store_true', default=False,
- help='install in user site package (requires Python 2.6 or later)')
- parser.add_option(
- '--download-base', dest='download_base', metavar="URL",
- default=DEFAULT_URL,
- help='alternative URL from where to download the setuptools package')
- parser.add_option(
- '--insecure', dest='downloader_factory', action='store_const',
- const=lambda: download_file_insecure, default=get_best_downloader,
- help='Use internal, non-validating downloader'
- )
- options, args = parser.parse_args()
- # positional arguments are ignored
- return options
-
-def main(version=DEFAULT_VERSION):
- """Install or upgrade setuptools and EasyInstall"""
- options = _parse_args()
- tarball = download_setuptools(download_base=options.download_base,
- downloader_factory=options.downloader_factory)
- return _install(tarball, _build_install_args(options))
-
-if __name__ == '__main__':
- sys.exit(main())
.. _Pygments tip:
http://bitbucket.org/birkenfeld/pygments-main/get/tip.zip#egg=Pygments-dev
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
+import sys
+
+from pygments.util import StringIO, BytesIO
-__version__ = '2.1.3'
+__version__ = '2.2.0'
__docformat__ = 'restructuredtext'
__all__ = ['lex', 'format', 'highlight']
-import sys
-
-from pygments.util import StringIO, BytesIO
-
-
def lex(code, lexer):
"""
Lex ``code`` with ``lexer`` and return an iterable of tokens.
try:
return lexer.get_tokens(code)
except TypeError as err:
- if isinstance(err.args[0], str) and \
- ('unbound method get_tokens' in err.args[0] or
- 'missing 1 required positional argument' in err.args[0]):
+ if (isinstance(err.args[0], str) and
+ ('unbound method get_tokens' in err.args[0] or
+ 'missing 1 required positional argument' in err.args[0])):
raise TypeError('lex() argument must be a lexer instance, '
'not a class')
raise
else:
formatter.format(tokens, outfile)
except TypeError as err:
- if isinstance(err.args[0], str) and \
- ('unbound method format' in err.args[0] or
- 'missing 1 required positional argument' in err.args[0]):
+ if (isinstance(err.args[0], str) and
+ ('unbound method format' in err.args[0] or
+ 'missing 1 required positional argument' in err.args[0])):
raise TypeError('format() argument must be a formatter instance, '
'not a class')
raise
Command line interface.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
from pygments.util import ClassNotFound, OptionError, docstring_headline, \
guess_decode, guess_decode_from_terminal, terminal_encoding
from pygments.lexers import get_all_lexers, get_lexer_by_name, guess_lexer, \
- get_lexer_for_filename, find_lexer_class_for_filename
+ load_lexer_from_file, get_lexer_for_filename, find_lexer_class_for_filename
from pygments.lexers.special import TextLexer
from pygments.formatters.latex import LatexEmbeddedLexer, LatexFormatter
from pygments.formatters import get_all_formatters, get_formatter_by_name, \
- get_formatter_for_filename, find_formatter_class
+ load_formatter_from_file, get_formatter_for_filename, find_formatter_class
from pygments.formatters.terminal import TerminalFormatter
from pygments.filters import get_all_filters, find_filter_class
from pygments.styles import get_all_styles, get_style_by_name
USAGE = """\
Usage: %s [-l <lexer> | -g] [-F <filter>[:<options>]] [-f <formatter>]
- [-O <options>] [-P <option=value>] [-s] [-v] [-o <outfile>] [<infile>]
+ [-O <options>] [-P <option=value>] [-s] [-v] [-x] [-o <outfile>] [<infile>]
%s -S <style> -f <formatter> [-a <arg>] [-O <options>] [-P <option=value>]
%s -L [<which> ...]
the extension of the output file name. If no output file is given,
the terminal formatter will be used by default.
+The additional option -x allows custom lexers and formatters to be
+loaded from a .py file relative to the current working directory. For
+example, ``-l ./customlexer.py -x``. By default, this option expects a
+file with a class named CustomLexer or CustomFormatter; you can also
+specify your own class name with a colon (``-l ./lexer.py:MyLexer``).
+Users should be very careful not to use this option with untrusted files,
+because it will import and run them.
+
With the -O option, you can give the lexer and formatter a comma-
separated list of options, e.g. ``-O bg=light,python=cool``.
return 0
if opts.pop('-V', None) is not None:
- print('Pygments version %s, (c) 2006-2015 by Georg Brandl.' % __version__)
+ print('Pygments version %s, (c) 2006-2017 by Georg Brandl.' % __version__)
return 0
# handle ``pygmentize -L``
F_opts = _parse_filters(F_opts)
opts.pop('-F', None)
+ allow_custom_lexer_formatter = False
+ # -x: allow custom (eXternal) lexers and formatters
+ if opts.pop('-x', None) is not None:
+ allow_custom_lexer_formatter = True
+
# select lexer
lexer = None
# given by name?
lexername = opts.pop('-l', None)
if lexername:
- try:
- lexer = get_lexer_by_name(lexername, **parsed_opts)
- except (OptionError, ClassNotFound) as err:
- print('Error:', err, file=sys.stderr)
- return 1
+ # custom lexer, located relative to user's cwd
+ if allow_custom_lexer_formatter and '.py' in lexername:
+ try:
+ if ':' in lexername:
+ filename, name = lexername.rsplit(':', 1)
+ lexer = load_lexer_from_file(filename, name,
+ **parsed_opts)
+ else:
+ lexer = load_lexer_from_file(lexername, **parsed_opts)
+ except ClassNotFound as err:
+ print('Error:', err, file=sys.stderr)
+ return 1
+ else:
+ try:
+ lexer = get_lexer_by_name(lexername, **parsed_opts)
+ except (OptionError, ClassNotFound) as err:
+ print('Error:', err, file=sys.stderr)
+ return 1
# read input code
code = None
outfn = opts.pop('-o', None)
fmter = opts.pop('-f', None)
if fmter:
- try:
- fmter = get_formatter_by_name(fmter, **parsed_opts)
- except (OptionError, ClassNotFound) as err:
- print('Error:', err, file=sys.stderr)
- return 1
+ # custom formatter, located relative to user's cwd
+ if allow_custom_lexer_formatter and '.py' in fmter:
+ try:
+ if ':' in fmter:
+ file, fmtername = fmter.rsplit(':', 1)
+ fmter = load_formatter_from_file(file, fmtername,
+ **parsed_opts)
+ else:
+ fmter = load_formatter_from_file(fmter, **parsed_opts)
+ except ClassNotFound as err:
+ print('Error:', err, file=sys.stderr)
+ return 1
+ else:
+ try:
+ fmter = get_formatter_by_name(fmter, **parsed_opts)
+ except (OptionError, ClassNotFound) as err:
+ print('Error:', err, file=sys.stderr)
+ return 1
if outfn:
if not fmter:
usage = USAGE % ((args[0],) * 6)
try:
- popts, args = getopt.getopt(args[1:], "l:f:F:o:O:P:LS:a:N:vhVHgs")
+ popts, args = getopt.getopt(args[1:], "l:f:F:o:O:P:LS:a:N:vhVHgsx")
except getopt.GetoptError:
print(usage, file=sys.stderr)
return 2
Format colored console output.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
esc = "\x1b["
codes = {}
-codes[""] = ""
-codes["reset"] = esc + "39;49;00m"
+codes[""] = ""
+codes["reset"] = esc + "39;49;00m"
-codes["bold"] = esc + "01m"
-codes["faint"] = esc + "02m"
-codes["standout"] = esc + "03m"
+codes["bold"] = esc + "01m"
+codes["faint"] = esc + "02m"
+codes["standout"] = esc + "03m"
codes["underline"] = esc + "04m"
-codes["blink"] = esc + "05m"
-codes["overline"] = esc + "06m"
+codes["blink"] = esc + "05m"
+codes["overline"] = esc + "06m"
-dark_colors = ["black", "darkred", "darkgreen", "brown", "darkblue",
- "purple", "teal", "lightgray"]
+dark_colors = ["black", "darkred", "darkgreen", "brown", "darkblue",
+ "purple", "teal", "lightgray"]
light_colors = ["darkgray", "red", "green", "yellow", "blue",
"fuchsia", "turquoise", "white"]
del d, l, x
-codes["darkteal"] = codes["turquoise"]
+codes["darkteal"] = codes["turquoise"]
codes["darkyellow"] = codes["brown"]
-codes["fuscia"] = codes["fuchsia"]
-codes["white"] = codes["bold"]
+codes["fuscia"] = codes["fuchsia"]
+codes["white"] = codes["bold"]
def reset_color():
Module that implements the default filter.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
yield ttype, value.lower()
"""
return type(f.__name__, (FunctionFilter,), {
- 'function': f,
- '__module__': getattr(f, '__module__'),
- '__doc__': f.__doc__
- })
+ '__module__': getattr(f, '__module__'),
+ '__doc__': f.__doc__,
+ 'function': f,
+ })
class Filter(object):
Module containing filter lookup functions and default
filters.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Base formatter class.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
def __init__(self, **options):
self.style = _lookup_style(options.get('style', 'default'))
- self.full = get_bool_opt(options, 'full', False)
+ self.full = get_bool_opt(options, 'full', False)
self.title = options.get('title', '')
self.encoding = options.get('encoding', None) or None
if self.encoding in ('guess', 'chardet'):
Pygments formatters.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
from pygments.util import ClassNotFound, itervalues
__all__ = ['get_formatter_by_name', 'get_formatter_for_filename',
- 'get_all_formatters'] + list(FORMATTERS)
+ 'get_all_formatters', 'load_formatter_from_file'] + list(FORMATTERS)
_formatter_cache = {} # classes by name
_pattern_cache = {}
return cls(**options)
+def load_formatter_from_file(filename, formattername="CustomFormatter",
+ **options):
+ """Load a formatter from a file.
+
+ This method expects a file located relative to the current working
+ directory, which contains a Formatter class. By default, it expects
+ the Formatter to be named CustomFormatter; you can specify your own
+ class name as the second argument to this function.
+
+ Users should be very careful with the input, because this method
+ runs ``exec`` on the contents of the input file.
+
+ Raises ClassNotFound if there are any problems importing the Formatter.
+
+ .. versionadded:: 2.2
+ """
+ try:
+ # This empty dict will contain the namespace for the exec'd file
+ custom_namespace = {}
+ exec(open(filename, 'rb').read(), custom_namespace)
+ # Retrieve the class `formattername` from that namespace
+ if formattername not in custom_namespace:
+ raise ClassNotFound('no valid %s class found in %s' %
+ (formattername, filename))
+ formatter_class = custom_namespace[formattername]
+ # And finally instantiate it with the options
+ return formatter_class(**options)
+ except IOError as err:
+ raise ClassNotFound('cannot read %s' % filename)
+ except ClassNotFound as err:
+ raise
+ except Exception as err:
+ raise ClassNotFound('error when loading custom formatter: %s' % err)
+
+
def get_formatter_for_filename(fn, **options):
"""Lookup and instantiate a formatter by filename pattern.
Do not alter the FORMATTERS dictionary by hand.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
BBcode formatter.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Formatter for HTML output.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Formatter for Pixmap output.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
+import os
import sys
from pygments.formatter import Formatter
# A sane default for modern systems
DEFAULT_FONT_NAME_NIX = 'Bitstream Vera Sans Mono'
DEFAULT_FONT_NAME_WIN = 'Courier New'
+DEFAULT_FONT_NAME_MAC = 'Courier New'
class PilNotAvailable(ImportError):
if not font_name:
self.font_name = DEFAULT_FONT_NAME_WIN
self._create_win()
+ elif sys.platform.startswith('darwin'):
+ if not font_name:
+ self.font_name = DEFAULT_FONT_NAME_MAC
+ self._create_mac()
else:
if not font_name:
self.font_name = DEFAULT_FONT_NAME_NIX
else:
self.fonts[style] = self.fonts['NORMAL']
+ def _get_mac_font_path(self, font_map, name, style):
+ return font_map.get((name + ' ' + style).strip().lower())
+
+ def _create_mac(self):
+ font_map = {}
+ for font_dir in (os.path.join(os.getenv("HOME"), 'Library/Fonts/'),
+ '/Library/Fonts/', '/System/Library/Fonts/'):
+ font_map.update(
+ ((os.path.splitext(f)[0].lower(), os.path.join(font_dir, f))
+ for f in os.listdir(font_dir) if f.lower().endswith('ttf')))
+
+ for name in STYLES['NORMAL']:
+ path = self._get_mac_font_path(font_map, self.font_name, name)
+ if path is not None:
+ self.fonts['NORMAL'] = ImageFont.truetype(path, self.font_size)
+ break
+ else:
+ raise FontNotFound('No usable fonts named: "%s"' %
+ self.font_name)
+ for style in ('ITALIC', 'BOLD', 'BOLDITALIC'):
+ for stylename in STYLES[style]:
+ path = self._get_mac_font_path(font_map, self.font_name, stylename)
+ if path is not None:
+ self.fonts[style] = ImageFont.truetype(path, self.font_size)
+ break
+ else:
+ if style == 'BOLDITALIC':
+ self.fonts[style] = self.fonts['BOLD']
+ else:
+ self.fonts[style] = self.fonts['NORMAL']
+
def _lookup_win(self, key, basename, styles, fail=False):
for suffix in ('', ' (TrueType)'):
for style in styles:
Formatter for IRC output
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Formatter for LaTeX fancyvrb output.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Other formatters: NullFormatter, RawTokenFormatter.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
A formatter that generates RTF files.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Formatter for SVG output.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Formatter for terminal output with ANSI sequences.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Formatter version 1.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
import sys
from pygments.formatter import Formatter
+from pygments.console import codes
+from pygments.style import ansicolors
__all__ = ['Terminal256Formatter', 'TerminalTrueColorFormatter']
def color_string(self):
attrs = []
if self.fg is not None:
- attrs.extend(("38", "5", "%i" % self.fg))
+ if self.fg in ansicolors:
+ esc = codes[self.fg[5:]]
+ if ';01m' in esc:
+ self.bold = True
+ # extract fg color code.
+ attrs.append(esc[2:4])
+ else:
+ attrs.extend(("38", "5", "%i" % self.fg))
if self.bg is not None:
- attrs.extend(("48", "5", "%i" % self.bg))
+ if self.bg in ansicolors:
+ esc = codes[self.bg[5:]]
+ # extract fg color code, add 10 for bg.
+ attrs.append(str(int(esc[2:4])+10))
+ else:
+ attrs.extend(("48", "5", "%i" % self.bg))
if self.bold:
attrs.append("01")
if self.underline:
.. versionadded:: 0.9
+ .. versionchanged:: 2.2
+ If the style defines foreground colors of the form ``#ansi*``, then
+ `Terminal256Formatter` will map these to non-extended foreground colors.
+ See :ref:`AnsiTerminalStyle` for more information.
+
Options accepted:
`style`
def _color_index(self, color):
index = self.best_match.get(color, None)
+ if color in ansicolors:
+ # strip the `#ansi` part and look up code
+ index = color
+ self.best_match[color] = index
if index is None:
try:
rgb = int(str(color), 16)
def _setup_styles(self):
for ttype, ndef in self.style:
escape = EscapeSequence()
- if ndef['color']:
+ # get foreground from ansicolor if set
+ if ndef['ansicolor']:
+ escape.fg = self._color_index(ndef['ansicolor'])
+ elif ndef['color']:
escape.fg = self._color_index(ndef['color'])
- if ndef['bgcolor']:
+ if ndef['bgansicolor']:
+ escape.bg = self._color_index(ndef['bgansicolor'])
+ elif ndef['bgcolor']:
escape.bg = self._color_index(ndef['bgcolor'])
if self.usebold and ndef['bold']:
escape.bold = True
Base lexer classes.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
if data is not None:
if ctx:
ctx.pos = match.start(i + 1)
- for item in action(
- lexer, _PseudoMatch(match.start(i + 1), data), ctx):
+ for item in action(lexer,
+ _PseudoMatch(match.start(i + 1), data), ctx):
if item:
yield item
if ctx:
Pygments lexers.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
__all__ = ['get_lexer_by_name', 'get_lexer_for_filename', 'find_lexer_class',
- 'guess_lexer'] + list(LEXERS)
+ 'guess_lexer', 'load_lexer_from_file'] + list(LEXERS)
_lexer_cache = {}
_pattern_cache = {}
return cls
+def find_lexer_class_by_name(_alias):
+ """Lookup a lexer class by alias.
+
+ Like `get_lexer_by_name`, but does not instantiate the class.
+
+ .. versionadded:: 2.2
+ """
+ if not _alias:
+ raise ClassNotFound('no lexer for alias %r found' % _alias)
+ # lookup builtin lexers
+ for module_name, name, aliases, _, _ in itervalues(LEXERS):
+ if _alias.lower() in aliases:
+ if name not in _lexer_cache:
+ _load_lexers(module_name)
+ return _lexer_cache[name]
+ # continue with lexers from setuptools entrypoints
+ for cls in find_plugin_lexers():
+ if _alias.lower() in cls.aliases:
+ return cls
+ raise ClassNotFound('no lexer for alias %r found' % _alias)
+
+
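The alias lookup added above can be sketched in isolation. Here `LEXERS` is a hypothetical two-entry stand-in for the real mapping of lexer class names to `(module, name, aliases, filenames, mimetypes)` tuples, and plugin lexers are omitted:

```python
# Standalone sketch of the alias lookup in find_lexer_class_by_name.
# LEXERS here is a hypothetical two-entry stand-in for the real registry.
LEXERS = {
    'PythonLexer': ('pygments.lexers.python', 'Python',
                    ('python', 'py'), ('*.py',), ('text/x-python',)),
    'LuaLexer': ('pygments.lexers.scripting', 'Lua',
                 ('lua',), ('*.lua',), ('text/x-lua',)),
}

def find_class_name_by_alias(alias):
    """Return the registry key whose alias tuple contains `alias`."""
    if not alias:
        raise LookupError('no lexer for alias %r found' % alias)
    for class_name, (_mod, _name, aliases, _files, _mimes) in LEXERS.items():
        if alias.lower() in aliases:
            return class_name
    raise LookupError('no lexer for alias %r found' % alias)

print(find_class_name_by_alias('PY'))  # PythonLexer
```

Note that the alias is lowercased before comparison, matching the case-insensitive behavior of `get_lexer_by_name`.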
def get_lexer_by_name(_alias, **options):
"""Get a lexer by an alias.
raise ClassNotFound('no lexer for alias %r found' % _alias)
+def load_lexer_from_file(filename, lexername="CustomLexer", **options):
+ """Load a lexer from a file.
+
+ This method expects a file located relative to the current working
+ directory, which contains a Lexer class. By default, it expects the
+ Lexer to be named CustomLexer; you can specify your own class name
+ as the second argument to this function.
+
+ Users should be very careful with the input, because this method
+ is equivalent to running eval on the input file.
+
+ Raises ClassNotFound if there are any problems importing the Lexer.
+
+ .. versionadded:: 2.2
+ """
+ try:
+ # This empty dict will contain the namespace for the exec'd file
+ custom_namespace = {}
+ with open(filename, 'rb') as f:
+ exec(f.read(), custom_namespace)
+ # Retrieve the class `lexername` from that namespace
+ if lexername not in custom_namespace:
+ raise ClassNotFound('no valid %s class found in %s' %
+ (lexername, filename))
+ lexer_class = custom_namespace[lexername]
+ # And finally instantiate it with the options
+ return lexer_class(**options)
+ except IOError:
+ raise ClassNotFound('cannot read %s' % filename)
+ except ClassNotFound:
+ raise
+ except Exception as err:
+ raise ClassNotFound('error when loading custom lexer: %s' % err)
+
+
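The exec-into-a-namespace mechanism that `load_lexer_from_file` relies on can be shown with an inline source string standing in for the file contents (the class body here is a placeholder, not a real lexer):

```python
# Minimal sketch of the exec-into-a-namespace pattern used by
# load_lexer_from_file; `source` stands in for the custom lexer file.
source = '''
class CustomLexer(object):
    def __init__(self, **options):
        self.options = options
'''

custom_namespace = {}
exec(source, custom_namespace)            # run the code, filling the dict
lexer_class = custom_namespace['CustomLexer']
lexer = lexer_class(stripnl=False)        # instantiate with keyword options
print(lexer.options)                      # {'stripnl': False}
```

This is also why the docstring warns about untrusted input: any top-level code in the file runs with full interpreter privileges.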
def find_lexer_class_for_filename(_fn, code=None):
"""Get a lexer for a filename.
# gets turned into 0.0. Run scripts/detect_missing_analyse_text.py
# to find lexers which need it overridden.
if code:
- return cls.analyse_text(code) + bonus
- return cls.priority + bonus
+ return cls.analyse_text(code) + bonus, cls.__name__
+ return cls.priority + bonus, cls.__name__
if matches:
matches.sort(key=get_rating)
TODO: perl/python script in Asymptote SVN similar to asy-list.pl but only
for function and variable names.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
ANSI Common Lisp builtins.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
File may also be used as a standalone generator for the above.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
pygments.lexers._csound_builtins
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Built-in Lasso types, traits, methods, and members.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
'curl_netrc_ignored',
'curl_netrc_optional',
'curl_netrc_required',
+ 'curl_sslversion_default',
+ 'curl_sslversion_sslv2',
+ 'curl_sslversion_sslv3',
+ 'curl_sslversion_tlsv1',
'curl_version_asynchdns',
'curl_version_debug',
'curl_version_gssnegotiate',
'json_open_array',
'json_open_object',
'json_period',
+ 'json_positive',
'json_quote_double',
'json_rpccall',
'json_serialize',
'lcapi_loadmodules',
'lcapi_updatedatasourceslist',
'ldap_scope_base',
+ 'ldap_scope_children',
'ldap_scope_onelevel',
'ldap_scope_subtree',
'library_once',
'iscntrl',
'isdigit',
'isdir',
+ 'isdirectory',
'isempty',
'isemptyelement',
'isfirststep',
Do not edit the MODULES dict by hand.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
from __future__ import print_function
-
MODULES = {'basic': ('_G',
'_VERSION',
'assert',
'collectgarbage',
'dofile',
'error',
- 'getfenv',
'getmetatable',
'ipairs',
'load',
'loadfile',
- 'loadstring',
'next',
'pairs',
'pcall',
'print',
'rawequal',
'rawget',
+ 'rawlen',
'rawset',
'select',
- 'setfenv',
'setmetatable',
'tonumber',
'tostring',
'type',
- 'unpack',
'xpcall'),
+ 'bit32': ('bit32.arshift',
+ 'bit32.band',
+ 'bit32.bnot',
+ 'bit32.bor',
+ 'bit32.btest',
+ 'bit32.bxor',
+ 'bit32.extract',
+ 'bit32.lrotate',
+ 'bit32.lshift',
+ 'bit32.replace',
+ 'bit32.rrotate',
+ 'bit32.rshift'),
'coroutine': ('coroutine.create',
+ 'coroutine.isyieldable',
'coroutine.resume',
'coroutine.running',
'coroutine.status',
'coroutine.wrap',
'coroutine.yield'),
'debug': ('debug.debug',
- 'debug.getfenv',
'debug.gethook',
'debug.getinfo',
'debug.getlocal',
'debug.getmetatable',
'debug.getregistry',
'debug.getupvalue',
- 'debug.setfenv',
+ 'debug.getuservalue',
'debug.sethook',
'debug.setlocal',
'debug.setmetatable',
'debug.setupvalue',
- 'debug.traceback'),
+ 'debug.setuservalue',
+ 'debug.traceback',
+ 'debug.upvalueid',
+ 'debug.upvaluejoin'),
'io': ('io.close',
'io.flush',
'io.input',
'io.output',
'io.popen',
'io.read',
+ 'io.stderr',
+ 'io.stdin',
+ 'io.stdout',
'io.tmpfile',
'io.type',
'io.write'),
'math': ('math.abs',
'math.acos',
'math.asin',
- 'math.atan2',
'math.atan',
+ 'math.atan2',
'math.ceil',
- 'math.cosh',
'math.cos',
+ 'math.cosh',
'math.deg',
'math.exp',
'math.floor',
'math.frexp',
'math.huge',
'math.ldexp',
- 'math.log10',
'math.log',
'math.max',
+ 'math.maxinteger',
'math.min',
+ 'math.mininteger',
'math.modf',
'math.pi',
'math.pow',
'math.rad',
'math.random',
'math.randomseed',
- 'math.sinh',
'math.sin',
+ 'math.sinh',
'math.sqrt',
+ 'math.tan',
'math.tanh',
- 'math.tan'),
- 'modules': ('module',
- 'require',
+ 'math.tointeger',
+ 'math.type',
+ 'math.ult'),
+ 'modules': ('package.config',
'package.cpath',
'package.loaded',
'package.loadlib',
'package.path',
'package.preload',
- 'package.seeall'),
+ 'package.searchers',
+ 'package.searchpath',
+ 'require'),
'os': ('os.clock',
'os.date',
'os.difftime',
'string.len',
'string.lower',
'string.match',
+ 'string.pack',
+ 'string.packsize',
'string.rep',
'string.reverse',
'string.sub',
+ 'string.unpack',
'string.upper'),
'table': ('table.concat',
'table.insert',
- 'table.maxn',
+ 'table.move',
+ 'table.pack',
'table.remove',
- 'table.sort')}
-
+ 'table.sort',
+ 'table.unpack'),
+ 'utf8': ('utf8.char',
+ 'utf8.charpattern',
+ 'utf8.codepoint',
+ 'utf8.codes',
+ 'utf8.len',
+ 'utf8.offset')}
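For reference, a consumer of a MODULES-style dict typically flattens it into one membership set for fast builtin checks; a minimal sketch with a hypothetical two-module dict:

```python
# Hedged sketch: flattening a MODULES-style dict into a single lookup
# set, the way a highlighter tests whether a name is a known builtin.
MODULES = {
    'basic': ('assert', 'print', 'type'),
    'math': ('math.abs', 'math.floor', 'math.huge'),
}

ALL_BUILTINS = set()
for functions in MODULES.values():
    ALL_BUILTINS.update(functions)

print('math.floor' in ALL_BUILTINS)  # True
print('getfenv' in ALL_BUILTINS)     # False (removed after Lua 5.1)
```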
if __name__ == '__main__': # pragma: no cover
import re
+ import sys
+
+ # urllib ends up wanting to import a module called 'math' -- if
+ # pygments/lexers is in the path, this ends badly.
+ for i in range(len(sys.path)-1, -1, -1):
+ if sys.path[i].endswith('/lexers'):
+ del sys.path[i]
+
try:
from urllib import urlopen
except ImportError:
def get_newest_version():
f = urlopen('http://www.lua.org/manual/')
- r = re.compile(r'^<A HREF="(\d\.\d)/">Lua \1</A>')
+ r = re.compile(r'^<A HREF="(\d\.\d)/">(Lua )?\1</A>')
for line in f:
m = r.match(line)
if m is not None:
def get_lua_functions(version):
f = urlopen('http://www.lua.org/manual/%s/' % version)
- r = re.compile(r'^<A HREF="manual.html#pdf-(.+)">\1</A>')
+ r = re.compile(r'^<A HREF="manual.html#pdf-(?!lua|LUA)([^:]+)">\1</A>')
functions = []
for line in f:
m = r.match(line)
def run():
version = get_newest_version()
- print('> Downloading function index for Lua %s' % version)
- functions = get_lua_functions(version)
- print('> %d functions found:' % len(functions))
+ functions = set()
+ for v in ('5.2', version):
+ print('> Downloading function index for Lua %s' % v)
+ f = get_lua_functions(v)
+ print('> %d functions found, %d new:' %
+ (len(f), len(set(f) - functions)))
+ functions |= set(f)
+
+ functions = sorted(functions)
modules = {}
for full_function_name in functions:
print('>> %s' % full_function_name)
m = get_function_module(full_function_name)
modules.setdefault(m, []).append(full_function_name)
+ modules = {k: tuple(v) for k, v in modules.items()}
regenerate(__file__, modules)
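The grouping step in `run()` uses `dict.setdefault` to bucket full function names by module; a standalone sketch with a hypothetical `get_function_module` that simply splits on the first dot:

```python
# Sketch of the setdefault grouping in run(): bucket full function names
# by module prefix. get_function_module here is a hypothetical helper;
# the real one also consults the 'modules' table for names like require.
def get_function_module(name):
    return name.split('.', 1)[0] if '.' in name else 'basic'

functions = ['assert', 'math.abs', 'math.floor', 'string.rep']
modules = {}
for full_name in sorted(functions):
    modules.setdefault(get_function_module(full_name), []).append(full_name)

print(modules)
# {'basic': ['assert'], 'math': ['math.abs', 'math.floor'], 'string': ['string.rep']}
```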
Do not alter the LEXERS dictionary by hand.
- :copyright: Copyright 2006-2014 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2014, 2016 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
'AdaLexer': ('pygments.lexers.pascal', 'Ada', ('ada', 'ada95', 'ada2005'), ('*.adb', '*.ads', '*.ada'), ('text/x-ada',)),
'AdlLexer': ('pygments.lexers.archetype', 'ADL', ('adl',), ('*.adl', '*.adls', '*.adlf', '*.adlx'), ()),
'AgdaLexer': ('pygments.lexers.haskell', 'Agda', ('agda',), ('*.agda',), ('text/x-agda',)),
+ 'AheuiLexer': ('pygments.lexers.esoteric', 'Aheui', ('aheui',), ('*.aheui',), ()),
'AlloyLexer': ('pygments.lexers.dsls', 'Alloy', ('alloy',), ('*.als',), ('text/x-alloy',)),
'AmbientTalkLexer': ('pygments.lexers.ambient', 'AmbientTalk', ('at', 'ambienttalk', 'ambienttalk/2'), ('*.at',), ('text/x-ambienttalk',)),
+ 'AmplLexer': ('pygments.lexers.ampl', 'Ampl', ('ampl',), ('*.run',), ()),
+ 'Angular2HtmlLexer': ('pygments.lexers.templates', 'HTML + Angular2', ('html+ng2',), ('*.ng2',), ()),
+ 'Angular2Lexer': ('pygments.lexers.templates', 'Angular2', ('ng2',), (), ()),
'AntlrActionScriptLexer': ('pygments.lexers.parsers', 'ANTLR With ActionScript Target', ('antlr-as', 'antlr-actionscript'), ('*.G', '*.g'), ()),
'AntlrCSharpLexer': ('pygments.lexers.parsers', 'ANTLR With C# Target', ('antlr-csharp', 'antlr-c#'), ('*.G', '*.g'), ()),
'AntlrCppLexer': ('pygments.lexers.parsers', 'ANTLR With CPP Target', ('antlr-cpp',), ('*.G', '*.g'), ()),
'AwkLexer': ('pygments.lexers.textedit', 'Awk', ('awk', 'gawk', 'mawk', 'nawk'), ('*.awk',), ('application/x-awk',)),
'BBCodeLexer': ('pygments.lexers.markup', 'BBCode', ('bbcode',), (), ('text/x-bbcode',)),
'BCLexer': ('pygments.lexers.algebra', 'BC', ('bc',), ('*.bc',), ()),
+ 'BSTLexer': ('pygments.lexers.bibtex', 'BST', ('bst', 'bst-pybtex'), ('*.bst',), ()),
'BaseMakefileLexer': ('pygments.lexers.make', 'Base Makefile', ('basemake',), (), ()),
- 'BashLexer': ('pygments.lexers.shell', 'Bash', ('bash', 'sh', 'ksh', 'shell'), ('*.sh', '*.ksh', '*.bash', '*.ebuild', '*.eclass', '*.exheres-0', '*.exlib', '.bashrc', 'bashrc', '.bash_*', 'bash_*', 'PKGBUILD'), ('application/x-sh', 'application/x-shellscript')),
+ 'BashLexer': ('pygments.lexers.shell', 'Bash', ('bash', 'sh', 'ksh', 'zsh', 'shell'), ('*.sh', '*.ksh', '*.bash', '*.ebuild', '*.eclass', '*.exheres-0', '*.exlib', '*.zsh', '.bashrc', 'bashrc', '.bash_*', 'bash_*', 'zshrc', '.zshrc', 'PKGBUILD'), ('application/x-sh', 'application/x-shellscript')),
'BashSessionLexer': ('pygments.lexers.shell', 'Bash Session', ('console', 'shell-session'), ('*.sh-session', '*.shell-session'), ('application/x-shell-session', 'application/x-sh-session')),
'BatchLexer': ('pygments.lexers.shell', 'Batchfile', ('bat', 'batch', 'dosbatch', 'winbatch'), ('*.bat', '*.cmd'), ('application/x-dos-batch',)),
'BefungeLexer': ('pygments.lexers.esoteric', 'Befunge', ('befunge',), ('*.befunge',), ('application/x-befunge',)),
+ 'BibTeXLexer': ('pygments.lexers.bibtex', 'BibTeX', ('bib', 'bibtex'), ('*.bib',), ('text/x-bibtex',)),
'BlitzBasicLexer': ('pygments.lexers.basic', 'BlitzBasic', ('blitzbasic', 'b3d', 'bplus'), ('*.bb', '*.decls'), ('text/x-bb',)),
'BlitzMaxLexer': ('pygments.lexers.basic', 'BlitzMax', ('blitzmax', 'bmax'), ('*.bmx',), ('text/x-bmx',)),
'BnfLexer': ('pygments.lexers.grammar_notation', 'BNF', ('bnf',), ('*.bnf',), ('text/x-bnf',)),
'BooLexer': ('pygments.lexers.dotnet', 'Boo', ('boo',), ('*.boo',), ('text/x-boo',)),
- 'BoogieLexer': ('pygments.lexers.esoteric', 'Boogie', ('boogie',), ('*.bpl',), ()),
+ 'BoogieLexer': ('pygments.lexers.verification', 'Boogie', ('boogie',), ('*.bpl',), ()),
'BrainfuckLexer': ('pygments.lexers.esoteric', 'Brainfuck', ('brainfuck', 'bf'), ('*.bf', '*.b'), ('application/x-brainfuck',)),
'BroLexer': ('pygments.lexers.dsls', 'Bro', ('bro',), ('*.bro',), ()),
'BugsLexer': ('pygments.lexers.modeling', 'BUGS', ('bugs', 'winbugs', 'openbugs'), ('*.bug',), ()),
'CSharpLexer': ('pygments.lexers.dotnet', 'C#', ('csharp', 'c#'), ('*.cs',), ('text/x-csharp',)),
'Ca65Lexer': ('pygments.lexers.asm', 'ca65 assembler', ('ca65',), ('*.s',), ()),
'CadlLexer': ('pygments.lexers.archetype', 'cADL', ('cadl',), ('*.cadl',), ()),
+ 'CapDLLexer': ('pygments.lexers.esoteric', 'CapDL', ('capdl',), ('*.cdl',), ()),
+ 'CapnProtoLexer': ('pygments.lexers.capnproto', "Cap'n Proto", ('capnp',), ('*.capnp',), ()),
'CbmBasicV2Lexer': ('pygments.lexers.basic', 'CBM BASIC V2', ('cbmbas',), ('*.bas',), ()),
'CeylonLexer': ('pygments.lexers.jvm', 'Ceylon', ('ceylon',), ('*.ceylon',), ('text/x-ceylon',)),
'Cfengine3Lexer': ('pygments.lexers.configs', 'CFEngine3', ('cfengine3', 'cf3'), ('*.cf',), ()),
'CheetahXmlLexer': ('pygments.lexers.templates', 'XML+Cheetah', ('xml+cheetah', 'xml+spitfire'), (), ('application/xml+cheetah', 'application/xml+spitfire')),
'CirruLexer': ('pygments.lexers.webmisc', 'Cirru', ('cirru',), ('*.cirru',), ('text/x-cirru',)),
'ClayLexer': ('pygments.lexers.c_like', 'Clay', ('clay',), ('*.clay',), ('text/x-clay',)),
+ 'CleanLexer': ('pygments.lexers.clean', 'Clean', ('clean',), ('*.icl', '*.dcl'), ()),
'ClojureLexer': ('pygments.lexers.jvm', 'Clojure', ('clojure', 'clj'), ('*.clj',), ('text/x-clojure', 'application/x-clojure')),
'ClojureScriptLexer': ('pygments.lexers.jvm', 'ClojureScript', ('clojurescript', 'cljs'), ('*.cljs',), ('text/x-clojurescript', 'application/x-clojurescript')),
'CobolFreeformatLexer': ('pygments.lexers.business', 'COBOLFree', ('cobolfree',), ('*.cbl', '*.CBL'), ()),
'CrmshLexer': ('pygments.lexers.dsls', 'Crmsh', ('crmsh', 'pcmk'), ('*.crmsh', '*.pcmk'), ()),
'CrocLexer': ('pygments.lexers.d', 'Croc', ('croc',), ('*.croc',), ('text/x-crocsrc',)),
'CryptolLexer': ('pygments.lexers.haskell', 'Cryptol', ('cryptol', 'cry'), ('*.cry',), ('text/x-cryptol',)),
+ 'CrystalLexer': ('pygments.lexers.crystal', 'Crystal', ('cr', 'crystal'), ('*.cr',), ('text/x-crystal',)),
'CsoundDocumentLexer': ('pygments.lexers.csound', 'Csound Document', ('csound-document', 'csound-csd'), ('*.csd',), ()),
'CsoundOrchestraLexer': ('pygments.lexers.csound', 'Csound Orchestra', ('csound', 'csound-orc'), ('*.orc',), ()),
'CsoundScoreLexer': ('pygments.lexers.csound', 'Csound Score', ('csound-score', 'csound-sco'), ('*.sco',), ()),
'DarcsPatchLexer': ('pygments.lexers.diff', 'Darcs Patch', ('dpatch',), ('*.dpatch', '*.darcspatch'), ()),
'DartLexer': ('pygments.lexers.javascript', 'Dart', ('dart',), ('*.dart',), ('text/x-dart',)),
'DebianControlLexer': ('pygments.lexers.installers', 'Debian Control file', ('control', 'debcontrol'), ('control',), ()),
- 'DelphiLexer': ('pygments.lexers.pascal', 'Delphi', ('delphi', 'pas', 'pascal', 'objectpascal'), ('*.pas',), ('text/x-pascal',)),
+ 'DelphiLexer': ('pygments.lexers.pascal', 'Delphi', ('delphi', 'pas', 'pascal', 'objectpascal'), ('*.pas', '*.dpr'), ('text/x-pascal',)),
'DgLexer': ('pygments.lexers.python', 'dg', ('dg',), ('*.dg',), ('text/x-dg',)),
'DiffLexer': ('pygments.lexers.diff', 'Diff', ('diff', 'udiff'), ('*.diff', '*.patch'), ('text/x-diff', 'text/x-patch')),
'DjangoLexer': ('pygments.lexers.templates', 'Django/Jinja', ('django', 'jinja'), (), ('application/x-django-templating', 'application/x-jinja')),
'FantomLexer': ('pygments.lexers.fantom', 'Fantom', ('fan',), ('*.fan',), ('application/x-fantom',)),
'FelixLexer': ('pygments.lexers.felix', 'Felix', ('felix', 'flx'), ('*.flx', '*.flxh'), ('text/x-felix',)),
'FishShellLexer': ('pygments.lexers.shell', 'Fish', ('fish', 'fishshell'), ('*.fish', '*.load'), ('application/x-fish',)),
+ 'FlatlineLexer': ('pygments.lexers.dsls', 'Flatline', ('flatline',), (), ('text/x-flatline',)),
+ 'ForthLexer': ('pygments.lexers.forth', 'Forth', ('forth',), ('*.frt', '*.fs'), ('application/x-forth',)),
'FortranFixedLexer': ('pygments.lexers.fortran', 'FortranFixed', ('fortranfixed',), ('*.f', '*.F'), ()),
'FortranLexer': ('pygments.lexers.fortran', 'Fortran', ('fortran',), ('*.f03', '*.f90', '*.F03', '*.F90'), ('text/x-fortran',)),
'FoxProLexer': ('pygments.lexers.foxpro', 'FoxPro', ('foxpro', 'vfp', 'clipper', 'xbase'), ('*.PRG', '*.prg'), ()),
'HaskellLexer': ('pygments.lexers.haskell', 'Haskell', ('haskell', 'hs'), ('*.hs',), ('text/x-haskell',)),
'HaxeLexer': ('pygments.lexers.haxe', 'Haxe', ('hx', 'haxe', 'hxsl'), ('*.hx', '*.hxsl'), ('text/haxe', 'text/x-haxe', 'text/x-hx')),
'HexdumpLexer': ('pygments.lexers.hexdump', 'Hexdump', ('hexdump',), (), ()),
+ 'HsailLexer': ('pygments.lexers.asm', 'HSAIL', ('hsail', 'hsa'), ('*.hsail',), ('text/x-hsail',)),
'HtmlDjangoLexer': ('pygments.lexers.templates', 'HTML+Django/Jinja', ('html+django', 'html+jinja', 'htmldjango'), (), ('text/html+django', 'text/html+jinja')),
'HtmlGenshiLexer': ('pygments.lexers.templates', 'HTML+Genshi', ('html+genshi', 'html+kid'), (), ('text/html+genshi',)),
'HtmlLexer': ('pygments.lexers.html', 'HTML', ('html',), ('*.html', '*.htm', '*.xhtml', '*.xslt'), ('text/html', 'application/xhtml+xml')),
'IrcLogsLexer': ('pygments.lexers.textfmts', 'IRC logs', ('irc',), ('*.weechatlog',), ('text/x-irclog',)),
'IsabelleLexer': ('pygments.lexers.theorem', 'Isabelle', ('isabelle',), ('*.thy',), ('text/x-isabelle',)),
'JLexer': ('pygments.lexers.j', 'J', ('j',), ('*.ijs',), ('text/x-j',)),
- 'JadeLexer': ('pygments.lexers.html', 'Jade', ('jade',), ('*.jade',), ('text/x-jade',)),
'JagsLexer': ('pygments.lexers.modeling', 'JAGS', ('jags',), ('*.jag', '*.bug'), ()),
'JasminLexer': ('pygments.lexers.jvm', 'Jasmin', ('jasmin', 'jasminxt'), ('*.j',), ()),
'JavaLexer': ('pygments.lexers.jvm', 'Java', ('java',), ('*.java',), ('text/x-java',)),
'JavascriptPhpLexer': ('pygments.lexers.templates', 'JavaScript+PHP', ('js+php', 'javascript+php'), (), ('application/x-javascript+php', 'text/x-javascript+php', 'text/javascript+php')),
'JavascriptSmartyLexer': ('pygments.lexers.templates', 'JavaScript+Smarty', ('js+smarty', 'javascript+smarty'), (), ('application/x-javascript+smarty', 'text/x-javascript+smarty', 'text/javascript+smarty')),
'JclLexer': ('pygments.lexers.scripting', 'JCL', ('jcl',), ('*.jcl',), ('text/x-jcl',)),
+ 'JsgfLexer': ('pygments.lexers.grammar_notation', 'JSGF', ('jsgf',), ('*.jsgf',), ('application/jsgf', 'application/x-jsgf', 'text/jsgf')),
+ 'JsonBareObjectLexer': ('pygments.lexers.data', 'JSONBareObject', ('json-object',), (), ('application/json-object',)),
'JsonLdLexer': ('pygments.lexers.data', 'JSON-LD', ('jsonld', 'json-ld'), ('*.jsonld',), ('application/ld+json',)),
'JsonLexer': ('pygments.lexers.data', 'JSON', ('json',), ('*.json',), ('application/json',)),
'JspLexer': ('pygments.lexers.templates', 'Java Server Page', ('jsp',), ('*.jsp',), ('application/x-jsp',)),
'JuliaConsoleLexer': ('pygments.lexers.julia', 'Julia console', ('jlcon',), (), ()),
'JuliaLexer': ('pygments.lexers.julia', 'Julia', ('julia', 'jl'), ('*.jl',), ('text/x-julia', 'application/x-julia')),
+ 'JuttleLexer': ('pygments.lexers.javascript', 'Juttle', ('juttle',), ('*.juttle',), ('application/juttle', 'application/x-juttle', 'text/x-juttle', 'text/juttle')),
'KalLexer': ('pygments.lexers.javascript', 'Kal', ('kal',), ('*.kal',), ('text/kal', 'application/kal')),
'KconfigLexer': ('pygments.lexers.configs', 'Kconfig', ('kconfig', 'menuconfig', 'linux-config', 'kernel-config'), ('Kconfig', '*Config.in*', 'external.in*', 'standard-modules.in'), ('text/x-kconfig',)),
'KokaLexer': ('pygments.lexers.haskell', 'Koka', ('koka',), ('*.kk', '*.kki'), ('text/x-koka',)),
'MakoLexer': ('pygments.lexers.templates', 'Mako', ('mako',), ('*.mao',), ('application/x-mako',)),
'MakoXmlLexer': ('pygments.lexers.templates', 'XML+Mako', ('xml+mako',), (), ('application/xml+mako',)),
'MaqlLexer': ('pygments.lexers.business', 'MAQL', ('maql',), ('*.maql',), ('text/x-gooddata-maql', 'application/x-gooddata-maql')),
+ 'MarkdownLexer': ('pygments.lexers.markup', 'markdown', ('md',), ('*.md',), ('text/x-markdown',)),
'MaskLexer': ('pygments.lexers.javascript', 'Mask', ('mask',), ('*.mask',), ('text/x-mask',)),
'MasonLexer': ('pygments.lexers.templates', 'Mason', ('mason',), ('*.m', '*.mhtml', '*.mc', '*.mi', 'autohandler', 'dhandler'), ('application/x-mason',)),
'MathematicaLexer': ('pygments.lexers.algebra', 'Mathematica', ('mathematica', 'mma', 'nb'), ('*.nb', '*.cdf', '*.nbp', '*.ma'), ('application/mathematica', 'application/vnd.wolfram.mathematica', 'application/vnd.wolfram.mathematica.package', 'application/vnd.wolfram.cdf')),
'Modula2Lexer': ('pygments.lexers.modula2', 'Modula-2', ('modula2', 'm2'), ('*.def', '*.mod'), ('text/x-modula2',)),
'MoinWikiLexer': ('pygments.lexers.markup', 'MoinMoin/Trac Wiki markup', ('trac-wiki', 'moin'), (), ('text/x-trac-wiki',)),
'MonkeyLexer': ('pygments.lexers.basic', 'Monkey', ('monkey',), ('*.monkey',), ('text/x-monkey',)),
+ 'MonteLexer': ('pygments.lexers.monte', 'Monte', ('monte',), ('*.mt',), ()),
'MoonScriptLexer': ('pygments.lexers.scripting', 'MoonScript', ('moon', 'moonscript'), ('*.moon',), ('text/x-moonscript', 'application/x-moonscript')),
'MozPreprocCssLexer': ('pygments.lexers.markup', 'CSS+mozpreproc', ('css+mozpreproc',), ('*.css.in',), ()),
'MozPreprocHashLexer': ('pygments.lexers.markup', 'mozhashpreproc', ('mozhashpreproc',), (), ()),
'MyghtyJavascriptLexer': ('pygments.lexers.templates', 'JavaScript+Myghty', ('js+myghty', 'javascript+myghty'), (), ('application/x-javascript+myghty', 'text/x-javascript+myghty', 'text/javascript+mygthy')),
'MyghtyLexer': ('pygments.lexers.templates', 'Myghty', ('myghty',), ('*.myt', 'autodelegate'), ('application/x-myghty',)),
'MyghtyXmlLexer': ('pygments.lexers.templates', 'XML+Myghty', ('xml+myghty',), (), ('application/xml+myghty',)),
+ 'NCLLexer': ('pygments.lexers.ncl', 'NCL', ('ncl',), ('*.ncl',), ('text/ncl',)),
'NSISLexer': ('pygments.lexers.installers', 'NSIS', ('nsis', 'nsi', 'nsh'), ('*.nsi', '*.nsh'), ('text/x-nsis',)),
'NasmLexer': ('pygments.lexers.asm', 'NASM', ('nasm',), ('*.asm', '*.ASM'), ('text/x-nasm',)),
'NasmObjdumpLexer': ('pygments.lexers.asm', 'objdump-nasm', ('objdump-nasm',), ('*.objdump-intel',), ('text/x-nasm-objdump',)),
'NemerleLexer': ('pygments.lexers.dotnet', 'Nemerle', ('nemerle',), ('*.n',), ('text/x-nemerle',)),
'NesCLexer': ('pygments.lexers.c_like', 'nesC', ('nesc',), ('*.nc',), ('text/x-nescsrc',)),
- 'NewLispLexer': ('pygments.lexers.lisp', 'NewLisp', ('newlisp',), ('*.lsp', '*.nl'), ('text/x-newlisp', 'application/x-newlisp')),
+ 'NewLispLexer': ('pygments.lexers.lisp', 'NewLisp', ('newlisp',), ('*.lsp', '*.nl', '*.kif'), ('text/x-newlisp', 'application/x-newlisp')),
'NewspeakLexer': ('pygments.lexers.smalltalk', 'Newspeak', ('newspeak',), ('*.ns2',), ('text/x-newspeak',)),
- 'NginxConfLexer': ('pygments.lexers.configs', 'Nginx configuration file', ('nginx',), (), ('text/x-nginx-conf',)),
- 'NimrodLexer': ('pygments.lexers.nimrod', 'Nimrod', ('nimrod', 'nim'), ('*.nim', '*.nimrod'), ('text/x-nimrod',)),
+ 'NginxConfLexer': ('pygments.lexers.configs', 'Nginx configuration file', ('nginx',), ('nginx.conf',), ('text/x-nginx-conf',)),
+ 'NimrodLexer': ('pygments.lexers.nimrod', 'Nimrod', ('nim', 'nimrod'), ('*.nim', '*.nimrod'), ('text/x-nim',)),
'NitLexer': ('pygments.lexers.nit', 'Nit', ('nit',), ('*.nit',), ()),
'NixLexer': ('pygments.lexers.nix', 'Nix', ('nixos', 'nix'), ('*.nix',), ('text/x-nix',)),
+ 'NuSMVLexer': ('pygments.lexers.smv', 'NuSMV', ('nusmv',), ('*.smv',), ()),
'NumPyLexer': ('pygments.lexers.python', 'NumPy', ('numpy',), (), ()),
'ObjdumpLexer': ('pygments.lexers.asm', 'objdump', ('objdump',), ('*.objdump',), ('text/x-objdump',)),
'ObjectiveCLexer': ('pygments.lexers.objective', 'Objective-C', ('objective-c', 'objectivec', 'obj-c', 'objc'), ('*.m', '*.h'), ('text/x-objective-c',)),
'PrologLexer': ('pygments.lexers.prolog', 'Prolog', ('prolog',), ('*.ecl', '*.prolog', '*.pro', '*.pl'), ('text/x-prolog',)),
'PropertiesLexer': ('pygments.lexers.configs', 'Properties', ('properties', 'jproperties'), ('*.properties',), ('text/x-java-properties',)),
'ProtoBufLexer': ('pygments.lexers.dsls', 'Protocol Buffer', ('protobuf', 'proto'), ('*.proto',), ()),
+ 'PugLexer': ('pygments.lexers.html', 'Pug', ('pug', 'jade'), ('*.pug', '*.jade'), ('text/x-pug', 'text/x-jade')),
'PuppetLexer': ('pygments.lexers.dsls', 'Puppet', ('puppet',), ('*.pp',), ()),
'PyPyLogLexer': ('pygments.lexers.console', 'PyPy Log', ('pypylog', 'pypy'), ('*.pypylog',), ('application/x-pypylog',)),
'Python3Lexer': ('pygments.lexers.python', 'Python 3', ('python3', 'py3'), (), ('text/x-python3', 'application/x-python3')),
'QVToLexer': ('pygments.lexers.qvt', 'QVTO', ('qvto', 'qvt'), ('*.qvto',), ()),
'QmlLexer': ('pygments.lexers.webmisc', 'QML', ('qml', 'qbs'), ('*.qml', '*.qbs'), ('application/x-qml', 'application/x-qt.qbs+qml')),
'RConsoleLexer': ('pygments.lexers.r', 'RConsole', ('rconsole', 'rout'), ('*.Rout',), ()),
+ 'RNCCompactLexer': ('pygments.lexers.rnc', 'Relax-NG Compact', ('rnc', 'rng-compact'), ('*.rnc',), ()),
'RPMSpecLexer': ('pygments.lexers.installers', 'RPMSpec', ('spec',), ('*.spec',), ('text/x-rpm-spec',)),
'RacketLexer': ('pygments.lexers.lisp', 'Racket', ('racket', 'rkt'), ('*.rkt', '*.rktd', '*.rktl'), ('text/x-racket', 'application/x-racket')),
'RagelCLexer': ('pygments.lexers.parsers', 'Ragel in C Host', ('ragel-c',), ('*.rl',), ()),
'RubyConsoleLexer': ('pygments.lexers.ruby', 'Ruby irb session', ('rbcon', 'irb'), (), ('text/x-ruby-shellsession',)),
'RubyLexer': ('pygments.lexers.ruby', 'Ruby', ('rb', 'ruby', 'duby'), ('*.rb', '*.rbw', 'Rakefile', '*.rake', '*.gemspec', '*.rbx', '*.duby', 'Gemfile'), ('text/x-ruby', 'application/x-ruby')),
'RustLexer': ('pygments.lexers.rust', 'Rust', ('rust',), ('*.rs', '*.rs.in'), ('text/rust',)),
+ 'SASLexer': ('pygments.lexers.sas', 'SAS', ('sas',), ('*.SAS', '*.sas'), ('text/x-sas', 'text/sas', 'application/x-sas')),
'SLexer': ('pygments.lexers.r', 'S', ('splus', 's', 'r'), ('*.S', '*.R', '.Rhistory', '.Rprofile', '.Renviron'), ('text/S-plus', 'text/S', 'text/x-r-source', 'text/x-r', 'text/x-R', 'text/x-r-history', 'text/x-r-profile')),
'SMLLexer': ('pygments.lexers.ml', 'Standard ML', ('sml',), ('*.sml', '*.sig', '*.fun'), ('text/x-standardml', 'application/x-standardml')),
'SassLexer': ('pygments.lexers.css', 'Sass', ('sass',), ('*.sass',), ('text/x-sass',)),
'ScilabLexer': ('pygments.lexers.matlab', 'Scilab', ('scilab',), ('*.sci', '*.sce', '*.tst'), ('text/scilab',)),
'ScssLexer': ('pygments.lexers.css', 'SCSS', ('scss',), ('*.scss',), ('text/x-scss',)),
'ShenLexer': ('pygments.lexers.lisp', 'Shen', ('shen',), ('*.shen',), ('text/x-shen', 'application/x-shen')),
+ 'SilverLexer': ('pygments.lexers.verification', 'Silver', ('silver',), ('*.sil', '*.vpr'), ()),
'SlimLexer': ('pygments.lexers.webmisc', 'Slim', ('slim',), ('*.slim',), ('text/x-slim',)),
'SmaliLexer': ('pygments.lexers.dalvik', 'Smali', ('smali',), ('*.smali',), ('text/smali',)),
'SmalltalkLexer': ('pygments.lexers.smalltalk', 'Smalltalk', ('smalltalk', 'squeak', 'st'), ('*.st',), ('text/x-smalltalk',)),
'SmartyLexer': ('pygments.lexers.templates', 'Smarty', ('smarty',), ('*.tpl',), ('application/x-smarty',)),
'SnobolLexer': ('pygments.lexers.snobol', 'Snobol', ('snobol',), ('*.snobol',), ('text/x-snobol',)),
+ 'SnowballLexer': ('pygments.lexers.dsls', 'Snowball', ('snowball',), ('*.sbl',), ()),
'SourcePawnLexer': ('pygments.lexers.pawn', 'SourcePawn', ('sp',), ('*.sp',), ('text/x-sourcepawn',)),
'SourcesListLexer': ('pygments.lexers.installers', 'Debian Sourcelist', ('sourceslist', 'sources.list', 'debsources'), ('sources.list',), ()),
'SparqlLexer': ('pygments.lexers.rdf', 'SPARQL', ('sparql',), ('*.rq', '*.sparql'), ('application/sparql-query',)),
'SquidConfLexer': ('pygments.lexers.configs', 'SquidConf', ('squidconf', 'squid.conf', 'squid'), ('squid.conf',), ('text/x-squidconf',)),
'SspLexer': ('pygments.lexers.templates', 'Scalate Server Page', ('ssp',), ('*.ssp',), ('application/x-ssp',)),
'StanLexer': ('pygments.lexers.modeling', 'Stan', ('stan',), ('*.stan',), ()),
+ 'StataLexer': ('pygments.lexers.stata', 'Stata', ('stata', 'do'), ('*.do', '*.ado'), ('text/x-stata', 'text/stata', 'application/x-stata')),
'SuperColliderLexer': ('pygments.lexers.supercollider', 'SuperCollider', ('sc', 'supercollider'), ('*.sc', '*.scd'), ('application/supercollider', 'text/supercollider')),
'SwiftLexer': ('pygments.lexers.objective', 'Swift', ('swift',), ('*.swift',), ('text/x-swift',)),
'SwigLexer': ('pygments.lexers.c_like', 'SWIG', ('swig',), ('*.swg', '*.i'), ('text/swig',)),
'SystemVerilogLexer': ('pygments.lexers.hdl', 'systemverilog', ('systemverilog', 'sv'), ('*.sv', '*.svh'), ('text/x-systemverilog',)),
'TAPLexer': ('pygments.lexers.testing', 'TAP', ('tap',), ('*.tap',), ()),
'Tads3Lexer': ('pygments.lexers.int_fiction', 'TADS 3', ('tads3',), ('*.t',), ()),
+ 'TasmLexer': ('pygments.lexers.asm', 'TASM', ('tasm',), ('*.asm', '*.ASM', '*.tasm'), ('text/x-tasm',)),
'TclLexer': ('pygments.lexers.tcl', 'Tcl', ('tcl',), ('*.tcl', '*.rvt'), ('text/x-tcl', 'text/x-script.tcl', 'application/x-tcl')),
'TcshLexer': ('pygments.lexers.shell', 'Tcsh', ('tcsh', 'csh'), ('*.tcsh', '*.csh'), ('application/x-csh',)),
'TcshSessionLexer': ('pygments.lexers.shell', 'Tcsh Session', ('tcshcon',), (), ()),
'TextLexer': ('pygments.lexers.special', 'Text only', ('text',), ('*.txt',), ('text/plain',)),
'ThriftLexer': ('pygments.lexers.dsls', 'Thrift', ('thrift',), ('*.thrift',), ('application/x-thrift',)),
'TodotxtLexer': ('pygments.lexers.textfmts', 'Todotxt', ('todotxt',), ('todo.txt', '*.todotxt'), ('text/x-todo',)),
+ 'TransactSqlLexer': ('pygments.lexers.sql', 'Transact-SQL', ('tsql', 't-sql'), ('*.sql',), ('text/x-tsql',)),
'TreetopLexer': ('pygments.lexers.parsers', 'Treetop', ('treetop',), ('*.treetop', '*.tt'), ()),
'TurtleLexer': ('pygments.lexers.rdf', 'Turtle', ('turtle',), ('*.ttl',), ('text/turtle', 'application/x-turtle')),
'TwigHtmlLexer': ('pygments.lexers.templates', 'HTML+Twig', ('html+twig',), ('*.twig',), ('text/html+twig',)),
'TwigLexer': ('pygments.lexers.templates', 'Twig', ('twig',), (), ('application/x-twig',)),
'TypeScriptLexer': ('pygments.lexers.javascript', 'TypeScript', ('ts', 'typescript'), ('*.ts',), ('text/x-typescript',)),
+ 'TypoScriptCssDataLexer': ('pygments.lexers.typoscript', 'TypoScriptCssData', ('typoscriptcssdata',), (), ()),
+ 'TypoScriptHtmlDataLexer': ('pygments.lexers.typoscript', 'TypoScriptHtmlData', ('typoscripthtmldata',), (), ()),
+ 'TypoScriptLexer': ('pygments.lexers.typoscript', 'TypoScript', ('typoscript',), ('*.ts', '*.txt'), ('text/x-typoscript',)),
'UrbiscriptLexer': ('pygments.lexers.urbi', 'UrbiScript', ('urbiscript',), ('*.u',), ('application/x-urbiscript',)),
+ 'VCLLexer': ('pygments.lexers.varnish', 'VCL', ('vcl',), ('*.vcl',), ('text/x-vclsrc',)),
+ 'VCLSnippetLexer': ('pygments.lexers.varnish', 'VCLSnippets', ('vclsnippets', 'vclsnippet'), (), ('text/x-vclsnippet',)),
'VCTreeStatusLexer': ('pygments.lexers.console', 'VCTreeStatus', ('vctreestatus',), (), ()),
'VGLLexer': ('pygments.lexers.dsls', 'VGL', ('vgl',), ('*.rpf',), ()),
'ValaLexer': ('pygments.lexers.c_like', 'Vala', ('vala', 'vapi'), ('*.vala', '*.vapi'), ('text/x-vala',)),
'VerilogLexer': ('pygments.lexers.hdl', 'verilog', ('verilog', 'v'), ('*.v',), ('text/x-verilog',)),
'VhdlLexer': ('pygments.lexers.hdl', 'vhdl', ('vhdl',), ('*.vhdl', '*.vhd'), ('text/x-vhdl',)),
'VimLexer': ('pygments.lexers.textedit', 'VimL', ('vim',), ('*.vim', '.vimrc', '.exrc', '.gvimrc', '_vimrc', '_exrc', '_gvimrc', 'vimrc', 'gvimrc'), ('text/x-vim',)),
+ 'WDiffLexer': ('pygments.lexers.diff', 'WDiff', ('wdiff',), ('*.wdiff',), ()),
+ 'WhileyLexer': ('pygments.lexers.whiley', 'Whiley', ('whiley',), ('*.whiley',), ('text/x-whiley',)),
'X10Lexer': ('pygments.lexers.x10', 'X10', ('x10', 'xten'), ('*.x10',), ('text/x-x10',)),
'XQueryLexer': ('pygments.lexers.webmisc', 'XQuery', ('xquery', 'xqy', 'xq', 'xql', 'xqm'), ('*.xqy', '*.xquery', '*.xq', '*.xql', '*.xqm'), ('text/xquery', 'application/xquery')),
'XmlDjangoLexer': ('pygments.lexers.templates', 'XML+Django/Jinja', ('xml+django', 'xml+jinja'), (), ('application/xml+django', 'application/xml+jinja')),
'XmlSmartyLexer': ('pygments.lexers.templates', 'XML+Smarty', ('xml+smarty',), (), ('application/xml+smarty',)),
'XsltLexer': ('pygments.lexers.html', 'XSLT', ('xslt',), ('*.xsl', '*.xslt', '*.xpl'), ('application/xsl+xml', 'application/xslt+xml')),
'XtendLexer': ('pygments.lexers.jvm', 'Xtend', ('xtend',), ('*.xtend',), ('text/x-xtend',)),
+ 'XtlangLexer': ('pygments.lexers.lisp', 'xtlang', ('extempore',), ('*.xtm',), ()),
'YamlJinjaLexer': ('pygments.lexers.templates', 'YAML+Jinja', ('yaml+jinja', 'salt', 'sls'), ('*.sls',), ('text/x-yaml+jinja', 'text/x-sls')),
'YamlLexer': ('pygments.lexers.data', 'YAML', ('yaml',), ('*.yaml', '*.yml'), ('text/x-yaml',)),
'ZephirLexer': ('pygments.lexers.php', 'Zephir', ('zephir',), ('*.zep',), ()),
Builtins for the MqlLexer.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
types = (
Builtin list for the OpenEdgeLexer.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
internet connection. Don't run that at home, use
a server ;-)
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Self-updating data files for PostgreSQL lexer.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Builtin list for the ScilabLexer.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Do not edit the FUNCTIONS list by hand.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
This file contains the names of functions for Stan used by
``pygments.lexers.math.StanLexer``. This is for Stan language version 2.8.0.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
--- /dev/null
+# -*- coding: utf-8 -*-
+"""
+ pygments.lexers._stata_builtins
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ Builtins for Stata
+
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
+ :license: BSD, see LICENSE for details.
+"""
+
+
+builtins_base = (
+ "if", "else", "in", "foreach", "for", "forv", "forva",
+ "forval", "forvalu", "forvalue", "forvalues", "by", "bys",
+ "bysort", "quietly", "qui", "about", "ac",
+ "ac_7", "acprplot", "acprplot_7", "adjust", "ado", "adopath",
+ "adoupdate", "alpha", "ameans", "an", "ano", "anov", "anova",
+ "anova_estat", "anova_terms", "anovadef", "aorder", "ap", "app",
+ "appe", "appen", "append", "arch", "arch_dr", "arch_estat",
+ "arch_p", "archlm", "areg", "areg_p", "args", "arima",
+ "arima_dr", "arima_estat", "arima_p", "as", "asmprobit",
+ "asmprobit_estat", "asmprobit_lf", "asmprobit_mfx__dlg",
+ "asmprobit_p", "ass", "asse", "asser", "assert", "avplot",
+ "avplot_7", "avplots", "avplots_7", "bcskew0", "bgodfrey",
+ "binreg", "bip0_lf", "biplot", "bipp_lf", "bipr_lf",
+ "bipr_p", "biprobit", "bitest", "bitesti", "bitowt", "blogit",
+ "bmemsize", "boot", "bootsamp", "bootstrap", "bootstrap_8",
+ "boxco_l", "boxco_p", "boxcox", "boxcox_6", "boxcox_p",
+ "bprobit", "br", "break", "brier", "bro", "brow", "brows",
+ "browse", "brr", "brrstat", "bs", "bs_7", "bsampl_w",
+ "bsample", "bsample_7", "bsqreg", "bstat", "bstat_7", "bstat_8",
+ "bstrap", "bstrap_7", "ca", "ca_estat", "ca_p", "cabiplot",
+ "camat", "canon", "canon_8", "canon_8_p", "canon_estat",
+ "canon_p", "cap", "caprojection", "capt", "captu", "captur",
+ "capture", "cat", "cc", "cchart", "cchart_7", "cci",
+ "cd", "censobs_table", "centile", "cf", "char", "chdir",
+ "checkdlgfiles", "checkestimationsample", "checkhlpfiles",
+ "checksum", "chelp", "ci", "cii", "cl", "class", "classutil",
+ "clear", "cli", "clis", "clist", "clo", "clog", "clog_lf",
+ "clog_p", "clogi", "clogi_sw", "clogit", "clogit_lf",
+ "clogit_p", "clogitp", "clogl_sw", "cloglog", "clonevar",
+ "clslistarray", "cluster", "cluster_measures", "cluster_stop",
+ "cluster_tree", "cluster_tree_8", "clustermat", "cmdlog",
+ "cnr", "cnre", "cnreg", "cnreg_p", "cnreg_sw", "cnsreg",
+ "codebook", "collaps4", "collapse", "colormult_nb",
+ "colormult_nw", "compare", "compress", "conf", "confi",
+ "confir", "confirm", "conren", "cons", "const", "constr",
+ "constra", "constrai", "constrain", "constraint", "continue",
+ "contract", "copy", "copyright", "copysource", "cor", "corc",
+ "corr", "corr2data", "corr_anti", "corr_kmo", "corr_smc",
+ "corre", "correl", "correla", "correlat", "correlate",
+ "corrgram", "cou", "coun", "count", "cox", "cox_p", "cox_sw",
+ "coxbase", "coxhaz", "coxvar", "cprplot", "cprplot_7",
+ "crc", "cret", "cretu", "cretur", "creturn", "cross", "cs",
+ "cscript", "cscript_log", "csi", "ct", "ct_is", "ctset",
+ "ctst_5", "ctst_st", "cttost", "cumsp", "cumsp_7", "cumul",
+ "cusum", "cusum_7", "cutil", "d", "datasig", "datasign",
+ "datasigna", "datasignat", "datasignatu", "datasignatur",
+ "datasignature", "datetof", "db", "dbeta", "de", "dec",
+ "deco", "decod", "decode", "deff", "des", "desc", "descr",
+ "descri", "describ", "describe", "destring", "dfbeta",
+ "dfgls", "dfuller", "di", "di_g", "dir", "dirstats", "dis",
+ "discard", "disp", "disp_res", "disp_s", "displ", "displa",
+ "display", "distinct", "do", "doe", "doed", "doedi",
+ "doedit", "dotplot", "dotplot_7", "dprobit", "drawnorm",
+ "drop", "ds", "ds_util", "dstdize", "duplicates", "durbina",
+ "dwstat", "dydx", "e", "ed", "edi", "edit", "egen",
+ "eivreg", "emdef", "en", "enc", "enco", "encod", "encode",
+ "eq", "erase", "ereg", "ereg_lf", "ereg_p", "ereg_sw",
+ "ereghet", "ereghet_glf", "ereghet_glf_sh", "ereghet_gp",
+ "ereghet_ilf", "ereghet_ilf_sh", "ereghet_ip", "eret",
+ "eretu", "eretur", "ereturn", "err", "erro", "error", "est",
+ "est_cfexist", "est_cfname", "est_clickable", "est_expand",
+ "est_hold", "est_table", "est_unhold", "est_unholdok",
+ "estat", "estat_default", "estat_summ", "estat_vce_only",
+ "esti", "estimates", "etodow", "etof", "etomdy", "ex",
+ "exi", "exit", "expand", "expandcl", "fac", "fact", "facto",
+ "factor", "factor_estat", "factor_p", "factor_pca_rotated",
+ "factor_rotate", "factormat", "fcast", "fcast_compute",
+ "fcast_graph", "fdades", "fdadesc", "fdadescr", "fdadescri",
+ "fdadescrib", "fdadescribe", "fdasav", "fdasave", "fdause",
+ "fh_st", "open", "read", "close",
+ "file", "filefilter", "fillin", "find_hlp_file", "findfile",
+ "findit", "findit_7", "fit", "fl", "fli", "flis", "flist",
+ "for5_0", "form", "forma", "format", "fpredict", "frac_154",
+ "frac_adj", "frac_chk", "frac_cox", "frac_ddp", "frac_dis",
+ "frac_dv", "frac_in", "frac_mun", "frac_pp", "frac_pq",
+ "frac_pv", "frac_wgt", "frac_xo", "fracgen", "fracplot",
+ "fracplot_7", "fracpoly", "fracpred", "fron_ex", "fron_hn",
+ "fron_p", "fron_tn", "fron_tn2", "frontier", "ftodate", "ftoe",
+ "ftomdy", "ftowdate", "g", "gamhet_glf", "gamhet_gp",
+ "gamhet_ilf", "gamhet_ip", "gamma", "gamma_d2", "gamma_p",
+ "gamma_sw", "gammahet", "gdi_hexagon", "gdi_spokes", "ge",
+ "gen", "gene", "gener", "genera", "generat", "generate",
+ "genrank", "genstd", "genvmean", "gettoken", "gl", "gladder",
+ "gladder_7", "glim_l01", "glim_l02", "glim_l03", "glim_l04",
+ "glim_l05", "glim_l06", "glim_l07", "glim_l08", "glim_l09",
+ "glim_l10", "glim_l11", "glim_l12", "glim_lf", "glim_mu",
+ "glim_nw1", "glim_nw2", "glim_nw3", "glim_p", "glim_v1",
+ "glim_v2", "glim_v3", "glim_v4", "glim_v5", "glim_v6",
+ "glim_v7", "glm", "glm_6", "glm_p", "glm_sw", "glmpred", "glo",
+ "glob", "globa", "global", "glogit", "glogit_8", "glogit_p",
+ "gmeans", "gnbre_lf", "gnbreg", "gnbreg_5", "gnbreg_p",
+ "gomp_lf", "gompe_sw", "gomper_p", "gompertz", "gompertzhet",
+ "gomphet_glf", "gomphet_glf_sh", "gomphet_gp", "gomphet_ilf",
+ "gomphet_ilf_sh", "gomphet_ip", "gphdot", "gphpen",
+ "gphprint", "gprefs", "gprobi_p", "gprobit", "gprobit_8", "gr",
+ "gr7", "gr_copy", "gr_current", "gr_db", "gr_describe",
+ "gr_dir", "gr_draw", "gr_draw_replay", "gr_drop", "gr_edit",
+ "gr_editviewopts", "gr_example", "gr_example2", "gr_export",
+ "gr_print", "gr_qscheme", "gr_query", "gr_read", "gr_rename",
+ "gr_replay", "gr_save", "gr_set", "gr_setscheme", "gr_table",
+ "gr_undo", "gr_use", "graph", "graph7", "grebar", "greigen",
+ "greigen_7", "greigen_8", "grmeanby", "grmeanby_7",
+ "gs_fileinfo", "gs_filetype", "gs_graphinfo", "gs_stat",
+ "gsort", "gwood", "h", "hadimvo", "hareg", "hausman",
+ "haver", "he", "heck_d2", "heckma_p", "heckman", "heckp_lf",
+ "heckpr_p", "heckprob", "hel", "help", "hereg", "hetpr_lf",
+ "hetpr_p", "hetprob", "hettest", "hexdump", "hilite",
+ "hist", "hist_7", "histogram", "hlogit", "hlu", "hmeans",
+ "hotel", "hotelling", "hprobit", "hreg", "hsearch", "icd9",
+ "icd9_ff", "icd9p", "iis", "impute", "imtest", "inbase",
+ "include", "inf", "infi", "infil", "infile", "infix", "inp",
+ "inpu", "input", "ins", "insheet", "insp", "inspe",
+ "inspec", "inspect", "integ", "inten", "intreg", "intreg_7",
+ "intreg_p", "intrg2_ll", "intrg_ll", "intrg_ll2", "ipolate",
+ "iqreg", "ir", "irf", "irf_create", "irfm", "iri", "is_svy",
+ "is_svysum", "isid", "istdize", "ivprob_1_lf", "ivprob_lf",
+ "ivprobit", "ivprobit_p", "ivreg", "ivreg_footnote",
+ "ivtob_1_lf", "ivtob_lf", "ivtobit", "ivtobit_p", "jackknife",
+ "jacknife", "jknife", "jknife_6", "jknife_8", "jkstat",
+ "joinby", "kalarma1", "kap", "kap_3", "kapmeier", "kappa",
+ "kapwgt", "kdensity", "kdensity_7", "keep", "ksm", "ksmirnov",
+ "ktau", "kwallis", "l", "la", "lab", "labe", "label",
+ "labelbook", "ladder", "levels", "levelsof", "leverage",
+ "lfit", "lfit_p", "li", "lincom", "line", "linktest",
+ "lis", "list", "lloghet_glf", "lloghet_glf_sh", "lloghet_gp",
+ "lloghet_ilf", "lloghet_ilf_sh", "lloghet_ip", "llogi_sw",
+ "llogis_p", "llogist", "llogistic", "llogistichet",
+ "lnorm_lf", "lnorm_sw", "lnorma_p", "lnormal", "lnormalhet",
+ "lnormhet_glf", "lnormhet_glf_sh", "lnormhet_gp",
+ "lnormhet_ilf", "lnormhet_ilf_sh", "lnormhet_ip", "lnskew0",
+ "loadingplot", "loc", "loca", "local", "log", "logi",
+ "logis_lf", "logistic", "logistic_p", "logit", "logit_estat",
+ "logit_p", "loglogs", "logrank", "loneway", "lookfor",
+ "lookup", "lowess", "lowess_7", "lpredict", "lrecomp", "lroc",
+ "lroc_7", "lrtest", "ls", "lsens", "lsens_7", "lsens_x",
+ "lstat", "ltable", "ltable_7", "ltriang", "lv", "lvr2plot",
+ "lvr2plot_7", "m", "ma", "mac", "macr", "macro", "makecns",
+ "man", "manova", "manova_estat", "manova_p", "manovatest",
+ "mantel", "mark", "markin", "markout", "marksample", "mat",
+ "mat_capp", "mat_order", "mat_put_rr", "mat_rapp", "mata",
+ "mata_clear", "mata_describe", "mata_drop", "mata_matdescribe",
+ "mata_matsave", "mata_matuse", "mata_memory", "mata_mlib",
+ "mata_mosave", "mata_rename", "mata_which", "matalabel",
+ "matcproc", "matlist", "matname", "matr", "matri",
+ "matrix", "matrix_input__dlg", "matstrik", "mcc", "mcci",
+ "md0_", "md1_", "md1debug_", "md2_", "md2debug_", "mds",
+ "mds_estat", "mds_p", "mdsconfig", "mdslong", "mdsmat",
+ "mdsshepard", "mdytoe", "mdytof", "me_derd", "mean",
+ "means", "median", "memory", "memsize", "meqparse", "mer",
+ "merg", "merge", "mfp", "mfx", "mhelp", "mhodds", "minbound",
+ "mixed_ll", "mixed_ll_reparm", "mkassert", "mkdir",
+ "mkmat", "mkspline", "ml", "ml_5", "ml_adjs", "ml_bhhhs",
+ "ml_c_d", "ml_check", "ml_clear", "ml_cnt", "ml_debug",
+ "ml_defd", "ml_e0", "ml_e0_bfgs", "ml_e0_cycle", "ml_e0_dfp",
+ "ml_e0i", "ml_e1", "ml_e1_bfgs", "ml_e1_bhhh", "ml_e1_cycle",
+ "ml_e1_dfp", "ml_e2", "ml_e2_cycle", "ml_ebfg0", "ml_ebfr0",
+ "ml_ebfr1", "ml_ebh0q", "ml_ebhh0", "ml_ebhr0", "ml_ebr0i",
+ "ml_ecr0i", "ml_edfp0", "ml_edfr0", "ml_edfr1", "ml_edr0i",
+ "ml_eds", "ml_eer0i", "ml_egr0i", "ml_elf", "ml_elf_bfgs",
+ "ml_elf_bhhh", "ml_elf_cycle", "ml_elf_dfp", "ml_elfi",
+ "ml_elfs", "ml_enr0i", "ml_enrr0", "ml_erdu0", "ml_erdu0_bfgs",
+ "ml_erdu0_bhhh", "ml_erdu0_bhhhq", "ml_erdu0_cycle",
+ "ml_erdu0_dfp", "ml_erdu0_nrbfgs", "ml_exde", "ml_footnote",
+ "ml_geqnr", "ml_grad0", "ml_graph", "ml_hbhhh", "ml_hd0",
+ "ml_hold", "ml_init", "ml_inv", "ml_log", "ml_max",
+ "ml_mlout", "ml_mlout_8", "ml_model", "ml_nb0", "ml_opt",
+ "ml_p", "ml_plot", "ml_query", "ml_rdgrd", "ml_repor",
+ "ml_s_e", "ml_score", "ml_searc", "ml_technique", "ml_unhold",
+ "mleval", "mlf_", "mlmatbysum", "mlmatsum", "mlog", "mlogi",
+ "mlogit", "mlogit_footnote", "mlogit_p", "mlopts", "mlsum",
+ "mlvecsum", "mnl0_", "mor", "more", "mov", "move", "mprobit",
+ "mprobit_lf", "mprobit_p", "mrdu0_", "mrdu1_", "mvdecode",
+ "mvencode", "mvreg", "mvreg_estat", "n", "nbreg",
+ "nbreg_al", "nbreg_lf", "nbreg_p", "nbreg_sw", "nestreg", "net",
+ "newey", "newey_7", "newey_p", "news", "nl", "nl_7", "nl_9",
+ "nl_9_p", "nl_p", "nl_p_7", "nlcom", "nlcom_p", "nlexp2",
+ "nlexp2_7", "nlexp2a", "nlexp2a_7", "nlexp3", "nlexp3_7",
+ "nlgom3", "nlgom3_7", "nlgom4", "nlgom4_7", "nlinit", "nllog3",
+ "nllog3_7", "nllog4", "nllog4_7", "nlog_rd", "nlogit",
+ "nlogit_p", "nlogitgen", "nlogittree", "nlpred", "no",
+ "nobreak", "noi", "nois", "noisi", "noisil", "noisily", "note",
+ "notes", "notes_dlg", "nptrend", "numlabel", "numlist", "odbc",
+ "old_ver", "olo", "olog", "ologi", "ologi_sw", "ologit",
+ "ologit_p", "ologitp", "on", "one", "onew", "onewa", "oneway",
+ "op_colnm", "op_comp", "op_diff", "op_inv", "op_str", "opr",
+ "opro", "oprob", "oprob_sw", "oprobi", "oprobi_p", "oprobit",
+ "oprobitp", "opts_exclusive", "order", "orthog", "orthpoly",
+ "ou", "out", "outf", "outfi", "outfil", "outfile", "outs",
+ "outsh", "outshe", "outshee", "outsheet", "ovtest", "pac",
+ "pac_7", "palette", "parse", "parse_dissim", "pause", "pca",
+ "pca_8", "pca_display", "pca_estat", "pca_p", "pca_rotate",
+ "pcamat", "pchart", "pchart_7", "pchi", "pchi_7", "pcorr",
+ "pctile", "pentium", "pergram", "pergram_7", "permute",
+ "permute_8", "personal", "peto_st", "pkcollapse", "pkcross",
+ "pkequiv", "pkexamine", "pkexamine_7", "pkshape", "pksumm",
+ "pksumm_7", "pl", "plo", "plot", "plugin", "pnorm",
+ "pnorm_7", "poisgof", "poiss_lf", "poiss_sw", "poisso_p",
+ "poisson", "poisson_estat", "post", "postclose", "postfile",
+ "postutil", "pperron", "pr", "prais", "prais_e", "prais_e2",
+ "prais_p", "predict", "predictnl", "preserve", "print",
+ "pro", "prob", "probi", "probit", "probit_estat", "probit_p",
+ "proc_time", "procoverlay", "procrustes", "procrustes_estat",
+ "procrustes_p", "profiler", "prog", "progr", "progra",
+ "program", "prop", "proportion", "prtest", "prtesti", "pwcorr",
+ "pwd", "q", "s", "qby", "qbys", "qchi", "qchi_7", "qladder",
+ "qladder_7", "qnorm", "qnorm_7", "qqplot", "qqplot_7", "qreg",
+ "qreg_c", "qreg_p", "qreg_sw", "qu", "quadchk", "quantile",
+ "quantile_7", "que", "quer", "query", "range", "ranksum",
+ "ratio", "rchart", "rchart_7", "rcof", "recast", "reclink",
+ "recode", "reg", "reg3", "reg3_p", "regdw", "regr", "regre",
+ "regre_p2", "regres", "regres_p", "regress", "regress_estat",
+ "regriv_p", "remap", "ren", "rena", "renam", "rename",
+ "renpfix", "repeat", "replace", "report", "reshape",
+ "restore", "ret", "retu", "retur", "return", "rm", "rmdir",
+ "robvar", "roccomp", "roccomp_7", "roccomp_8", "rocf_lf",
+ "rocfit", "rocfit_8", "rocgold", "rocplot", "rocplot_7",
+ "roctab", "roctab_7", "rolling", "rologit", "rologit_p",
+ "rot", "rota", "rotat", "rotate", "rotatemat", "rreg",
+ "rreg_p", "ru", "run", "runtest", "rvfplot", "rvfplot_7",
+ "rvpplot", "rvpplot_7", "sa", "safesum", "sample",
+ "sampsi", "sav", "save", "savedresults", "saveold", "sc",
+ "sca", "scal", "scala", "scalar", "scatter", "scm_mine",
+ "sco", "scob_lf", "scob_p", "scobi_sw", "scobit", "scor",
+ "score", "scoreplot", "scoreplot_help", "scree", "screeplot",
+ "screeplot_help", "sdtest", "sdtesti", "se", "search",
+ "separate", "seperate", "serrbar", "serrbar_7", "serset", "set",
+ "set_defaults", "sfrancia", "sh", "she", "shel", "shell",
+ "shewhart", "shewhart_7", "signestimationsample", "signrank",
+ "signtest", "simul", "simul_7", "simulate", "simulate_8",
+ "sktest", "sleep", "slogit", "slogit_d2", "slogit_p", "smooth",
+ "snapspan", "so", "sor", "sort", "spearman", "spikeplot",
+ "spikeplot_7", "spikeplt", "spline_x", "split", "sqreg",
+ "sqreg_p", "sret", "sretu", "sretur", "sreturn", "ssc", "st",
+ "st_ct", "st_hc", "st_hcd", "st_hcd_sh", "st_is", "st_issys",
+ "st_note", "st_promo", "st_set", "st_show", "st_smpl",
+ "st_subid", "stack", "statsby", "statsby_8", "stbase", "stci",
+ "stci_7", "stcox", "stcox_estat", "stcox_fr", "stcox_fr_ll",
+ "stcox_p", "stcox_sw", "stcoxkm", "stcoxkm_7", "stcstat",
+ "stcurv", "stcurve", "stcurve_7", "stdes", "stem", "stepwise",
+ "stereg", "stfill", "stgen", "stir", "stjoin", "stmc", "stmh",
+ "stphplot", "stphplot_7", "stphtest", "stphtest_7",
+ "stptime", "strate", "strate_7", "streg", "streg_sw", "streset",
+ "sts", "sts_7", "stset", "stsplit", "stsum", "sttocc",
+ "sttoct", "stvary", "stweib", "su", "suest", "suest_8",
+ "sum", "summ", "summa", "summar", "summari", "summariz",
+ "summarize", "sunflower", "sureg", "survcurv", "survsum",
+ "svar", "svar_p", "svmat", "svy", "svy_disp", "svy_dreg",
+ "svy_est", "svy_est_7", "svy_estat", "svy_get", "svy_gnbreg_p",
+ "svy_head", "svy_header", "svy_heckman_p", "svy_heckprob_p",
+ "svy_intreg_p", "svy_ivreg_p", "svy_logistic_p", "svy_logit_p",
+ "svy_mlogit_p", "svy_nbreg_p", "svy_ologit_p", "svy_oprobit_p",
+ "svy_poisson_p", "svy_probit_p", "svy_regress_p", "svy_sub",
+ "svy_sub_7", "svy_x", "svy_x_7", "svy_x_p", "svydes",
+ "svydes_8", "svygen", "svygnbreg", "svyheckman", "svyheckprob",
+ "svyintreg", "svyintreg_7", "svyintrg", "svyivreg", "svylc",
+ "svylog_p", "svylogit", "svymarkout", "svymarkout_8",
+ "svymean", "svymlog", "svymlogit", "svynbreg", "svyolog",
+ "svyologit", "svyoprob", "svyoprobit", "svyopts",
+ "svypois", "svypois_7", "svypoisson", "svyprobit", "svyprobt",
+ "svyprop", "svyprop_7", "svyratio", "svyreg", "svyreg_p",
+ "svyregress", "svyset", "svyset_7", "svyset_8", "svytab",
+ "svytab_7", "svytest", "svytotal", "sw", "sw_8", "swcnreg",
+ "swcox", "swereg", "swilk", "swlogis", "swlogit",
+ "swologit", "swoprbt", "swpois", "swprobit", "swqreg",
+ "swtobit", "swweib", "symmetry", "symmi", "symplot",
+ "symplot_7", "syntax", "sysdescribe", "sysdir", "sysuse",
+ "szroeter", "ta", "tab", "tab1", "tab2", "tab_or", "tabd",
+ "tabdi", "tabdis", "tabdisp", "tabi", "table", "tabodds",
+ "tabodds_7", "tabstat", "tabu", "tabul", "tabula", "tabulat",
+ "tabulate", "te", "tempfile", "tempname", "tempvar", "tes",
+ "test", "testnl", "testparm", "teststd", "tetrachoric",
+ "time_it", "timer", "tis", "tob", "tobi", "tobit", "tobit_p",
+ "tobit_sw", "token", "tokeni", "tokeniz", "tokenize",
+ "tostring", "total", "translate", "translator", "transmap",
+ "treat_ll", "treatr_p", "treatreg", "trim", "trnb_cons",
+ "trnb_mean", "trpoiss_d2", "trunc_ll", "truncr_p", "truncreg",
+ "tsappend", "tset", "tsfill", "tsline", "tsline_ex",
+ "tsreport", "tsrevar", "tsrline", "tsset", "tssmooth",
+ "tsunab", "ttest", "ttesti", "tut_chk", "tut_wait", "tutorial",
+ "tw", "tware_st", "two", "twoway", "twoway__fpfit_serset",
+ "twoway__function_gen", "twoway__histogram_gen",
+ "twoway__ipoint_serset", "twoway__ipoints_serset",
+ "twoway__kdensity_gen", "twoway__lfit_serset",
+ "twoway__normgen_gen", "twoway__pci_serset",
+ "twoway__qfit_serset", "twoway__scatteri_serset",
+ "twoway__sunflower_gen", "twoway_ksm_serset", "ty", "typ",
+ "type", "typeof", "u", "unab", "unabbrev", "unabcmd",
+ "update", "us", "use", "uselabel", "var", "var_mkcompanion",
+ "var_p", "varbasic", "varfcast", "vargranger", "varirf",
+ "varirf_add", "varirf_cgraph", "varirf_create", "varirf_ctable",
+ "varirf_describe", "varirf_dir", "varirf_drop", "varirf_erase",
+ "varirf_graph", "varirf_ograph", "varirf_rename", "varirf_set",
+ "varirf_table", "varlist", "varlmar", "varnorm", "varsoc",
+ "varstable", "varstable_w", "varstable_w2", "varwle",
+ "vce", "vec", "vec_fevd", "vec_mkphi", "vec_p", "vec_p_w",
+ "vecirf_create", "veclmar", "veclmar_w", "vecnorm",
+ "vecnorm_w", "vecrank", "vecstable", "verinst", "vers",
+ "versi", "versio", "version", "view", "viewsource", "vif",
+ "vwls", "wdatetof", "webdescribe", "webseek", "webuse",
+ "weib1_lf", "weib2_lf", "weib_lf", "weib_lf0", "weibhet_glf",
+ "weibhet_glf_sh", "weibhet_glfa", "weibhet_glfa_sh",
+ "weibhet_gp", "weibhet_ilf", "weibhet_ilf_sh", "weibhet_ilfa",
+ "weibhet_ilfa_sh", "weibhet_ip", "weibu_sw", "weibul_p",
+ "weibull", "weibull_c", "weibull_s", "weibullhet",
+ "wh", "whelp", "whi", "which", "whil", "while", "wilc_st",
+ "wilcoxon", "win", "wind", "windo", "window", "winexec",
+ "wntestb", "wntestb_7", "wntestq", "xchart", "xchart_7",
+ "xcorr", "xcorr_7", "xi", "xi_6", "xmlsav", "xmlsave",
+ "xmluse", "xpose", "xsh", "xshe", "xshel", "xshell",
+ "xt_iis", "xt_tis", "xtab_p", "xtabond", "xtbin_p",
+ "xtclog", "xtcloglog", "xtcloglog_8", "xtcloglog_d2",
+ "xtcloglog_pa_p", "xtcloglog_re_p", "xtcnt_p", "xtcorr",
+ "xtdata", "xtdes", "xtfront_p", "xtfrontier", "xtgee",
+ "xtgee_elink", "xtgee_estat", "xtgee_makeivar", "xtgee_p",
+ "xtgee_plink", "xtgls", "xtgls_p", "xthaus", "xthausman",
+ "xtht_p", "xthtaylor", "xtile", "xtint_p", "xtintreg",
+ "xtintreg_8", "xtintreg_d2", "xtintreg_p", "xtivp_1",
+ "xtivp_2", "xtivreg", "xtline", "xtline_ex", "xtlogit",
+ "xtlogit_8", "xtlogit_d2", "xtlogit_fe_p", "xtlogit_pa_p",
+ "xtlogit_re_p", "xtmixed", "xtmixed_estat", "xtmixed_p",
+ "xtnb_fe", "xtnb_lf", "xtnbreg", "xtnbreg_pa_p",
+ "xtnbreg_refe_p", "xtpcse", "xtpcse_p", "xtpois", "xtpoisson",
+ "xtpoisson_d2", "xtpoisson_pa_p", "xtpoisson_refe_p", "xtpred",
+ "xtprobit", "xtprobit_8", "xtprobit_d2", "xtprobit_re_p",
+ "xtps_fe", "xtps_lf", "xtps_ren", "xtps_ren_8", "xtrar_p",
+ "xtrc", "xtrc_p", "xtrchh", "xtrefe_p", "xtreg", "xtreg_be",
+ "xtreg_fe", "xtreg_ml", "xtreg_pa_p", "xtreg_re",
+ "xtregar", "xtrere_p", "xtset", "xtsf_ll", "xtsf_llti",
+ "xtsum", "xttab", "xttest0", "xttobit", "xttobit_8",
+ "xttobit_p", "xttrans", "yx", "yxview__barlike_draw",
+ "yxview_area_draw", "yxview_bar_draw", "yxview_dot_draw",
+ "yxview_dropline_draw", "yxview_function_draw",
+ "yxview_iarrow_draw", "yxview_ilabels_draw",
+ "yxview_normal_draw", "yxview_pcarrow_draw",
+ "yxview_pcbarrow_draw", "yxview_pccapsym_draw",
+ "yxview_pcscatter_draw", "yxview_pcspike_draw",
+ "yxview_rarea_draw", "yxview_rbar_draw", "yxview_rbarm_draw",
+ "yxview_rcap_draw", "yxview_rcapsym_draw",
+ "yxview_rconnected_draw", "yxview_rline_draw",
+ "yxview_rscatter_draw", "yxview_rspike_draw",
+ "yxview_spike_draw", "yxview_sunflower_draw", "zap_s", "zinb",
+ "zinb_llf", "zinb_plf", "zip", "zip_llf", "zip_p", "zip_plf",
+ "zt_ct_5", "zt_hc_5", "zt_hcd_5", "zt_is_5", "zt_iss_5",
+ "zt_sho_5", "zt_smp_5", "ztbase_5", "ztcox_5", "ztdes_5",
+ "ztereg_5", "ztfill_5", "ztgen_5", "ztir_5", "ztjoin_5", "ztnb",
+ "ztnb_p", "ztp", "ztp_p", "zts_5", "ztset_5", "ztspli_5",
+ "ztsum_5", "zttoct_5", "ztvary_5", "ztweib_5"
+)
+
+builtins_functions = (
+ "Cdhms", "Chms", "Clock", "Cmdyhms", "Cofc", "Cofd", "F",
+ "Fden", "Ftail", "I", "J", "_caller", "abbrev", "abs", "acos",
+ "acosh", "asin", "asinh", "atan", "atan2", "atanh",
+ "autocode", "betaden", "binomial", "binomialp", "binomialtail",
+ "binormal", "bofd", "byteorder", "c", "ceil", "char",
+ "chi2", "chi2den", "chi2tail", "cholesky", "chop", "clip",
+ "clock", "cloglog", "cofC", "cofd", "colnumb", "colsof", "comb",
+ "cond", "corr", "cos", "cosh", "d", "daily", "date", "day",
+ "det", "dgammapda", "dgammapdada", "dgammapdadx", "dgammapdx",
+ "dgammapdxdx", "dhms", "diag", "diag0cnt", "digamma",
+ "dofC", "dofb", "dofc", "dofh", "dofm", "dofq", "dofw",
+ "dofy", "dow", "doy", "dunnettprob", "e", "el", "epsdouble",
+ "epsfloat", "exp", "fileexists", "fileread", "filereaderror",
+ "filewrite", "float", "floor", "fmtwidth", "gammaden",
+ "gammap", "gammaptail", "get", "group", "h", "hadamard",
+ "halfyear", "halfyearly", "has_eprop", "hh", "hhC", "hms",
+ "hofd", "hours", "hypergeometric", "hypergeometricp", "ibeta",
+ "ibetatail", "index", "indexnot", "inlist", "inrange", "int",
+ "inv", "invF", "invFtail", "invbinomial", "invbinomialtail",
+ "invchi2", "invchi2tail", "invcloglog", "invdunnettprob",
+ "invgammap", "invgammaptail", "invibeta", "invibetatail",
+ "invlogit", "invnFtail", "invnbinomial", "invnbinomialtail",
+ "invnchi2", "invnchi2tail", "invnibeta", "invnorm", "invnormal",
+ "invnttail", "invpoisson", "invpoissontail", "invsym", "invt",
+ "invttail", "invtukeyprob", "irecode", "issym", "issymmetric",
+ "itrim", "length", "ln", "lnfact", "lnfactorial", "lngamma",
+ "lnnormal", "lnnormalden", "log", "log10", "logit", "lower",
+ "ltrim", "m", "match", "matmissing", "matrix", "matuniform",
+ "max", "maxbyte", "maxdouble", "maxfloat", "maxint", "maxlong",
+ "mdy", "mdyhms", "mi", "min", "minbyte", "mindouble",
+ "minfloat", "minint", "minlong", "minutes", "missing", "mm",
+ "mmC", "mod", "mofd", "month", "monthly", "mreldif",
+ "msofhours", "msofminutes", "msofseconds", "nF", "nFden",
+ "nFtail", "nbetaden", "nbinomial", "nbinomialp", "nbinomialtail",
+ "nchi2", "nchi2den", "nchi2tail", "nibeta", "norm", "normal",
+ "normalden", "normd", "npnF", "npnchi2", "npnt", "nt", "ntden",
+ "nttail", "nullmat", "plural", "poisson", "poissonp",
+ "poissontail", "proper", "q", "qofd", "quarter", "quarterly",
+ "r", "rbeta", "rbinomial", "rchi2", "real", "recode", "regexm",
+ "regexr", "regexs", "reldif", "replay", "return", "reverse",
+ "rgamma", "rhypergeometric", "rnbinomial", "rnormal", "round",
+ "rownumb", "rowsof", "rpoisson", "rt", "rtrim", "runiform", "s",
+ "scalar", "seconds", "sign", "sin", "sinh", "smallestdouble",
+ "soundex", "soundex_nara", "sqrt", "ss", "ssC", "strcat",
+ "strdup", "string", "strlen", "strlower", "strltrim", "strmatch",
+ "strofreal", "strpos", "strproper", "strreverse", "strrtrim",
+ "strtoname", "strtrim", "strupper", "subinstr", "subinword",
+ "substr", "sum", "sweep", "syminv", "t", "tC", "tan", "tanh",
+ "tc", "td", "tden", "th", "tin", "tm", "tq", "trace",
+ "trigamma", "trim", "trunc", "ttail", "tukeyprob", "tw",
+ "twithin", "uniform", "upper", "vec", "vecdiag", "w", "week",
+ "weekly", "wofd", "word", "wordcount", "year", "yearly",
+ "yh", "ym", "yofd", "yq", "yw"
+)
+
+
--- /dev/null
+# -*- coding: utf-8 -*-
+"""
+ pygments.lexers._tsql_builtins
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ These are manually translated lists from https://msdn.microsoft.com.
+
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
+ :license: BSD, see LICENSE for details.
+"""
+
+# See https://msdn.microsoft.com/en-us/library/ms174986.aspx.
+OPERATORS = (
+ '!<',
+ '!=',
+ '!>',
+ '<',
+ '<=',
+ '<>',
+ '=',
+ '>',
+ '>=',
+ '+',
+ '+=',
+ '-',
+ '-=',
+ '*',
+ '*=',
+ '/',
+ '/=',
+ '%',
+ '%=',
+ '&',
+ '&=',
+ '|',
+ '|=',
+ '^',
+ '^=',
+ '~',
+ '::',
+)
+
+OPERATOR_WORDS = (
+ 'all',
+ 'and',
+ 'any',
+ 'between',
+ 'except',
+ 'exists',
+ 'in',
+ 'intersect',
+ 'like',
+ 'not',
+ 'or',
+ 'some',
+ 'union',
+)
+
+_KEYWORDS_SERVER = (
+ 'add',
+ 'all',
+ 'alter',
+ 'and',
+ 'any',
+ 'as',
+ 'asc',
+ 'authorization',
+ 'backup',
+ 'begin',
+ 'between',
+ 'break',
+ 'browse',
+ 'bulk',
+ 'by',
+ 'cascade',
+ 'case',
+ 'catch',
+ 'check',
+ 'checkpoint',
+ 'close',
+ 'clustered',
+ 'coalesce',
+ 'collate',
+ 'column',
+ 'commit',
+ 'compute',
+ 'constraint',
+ 'contains',
+ 'containstable',
+ 'continue',
+ 'convert',
+ 'create',
+ 'cross',
+ 'current',
+ 'current_date',
+ 'current_time',
+ 'current_timestamp',
+ 'current_user',
+ 'cursor',
+ 'database',
+ 'dbcc',
+ 'deallocate',
+ 'declare',
+ 'default',
+ 'delete',
+ 'deny',
+ 'desc',
+ 'disk',
+ 'distinct',
+ 'distributed',
+ 'double',
+ 'drop',
+ 'dump',
+ 'else',
+ 'end',
+ 'errlvl',
+ 'escape',
+ 'except',
+ 'exec',
+ 'execute',
+ 'exists',
+ 'exit',
+ 'external',
+ 'fetch',
+ 'file',
+ 'fillfactor',
+ 'for',
+ 'foreign',
+ 'freetext',
+ 'freetexttable',
+ 'from',
+ 'full',
+ 'function',
+ 'goto',
+ 'grant',
+ 'group',
+ 'having',
+ 'holdlock',
+ 'identity',
+ 'identity_insert',
+ 'identitycol',
+ 'if',
+ 'in',
+ 'index',
+ 'inner',
+ 'insert',
+ 'intersect',
+ 'into',
+ 'is',
+ 'join',
+ 'key',
+ 'kill',
+ 'left',
+ 'like',
+ 'lineno',
+ 'load',
+ 'merge',
+ 'national',
+ 'nocheck',
+ 'nonclustered',
+ 'not',
+ 'null',
+ 'nullif',
+ 'of',
+ 'off',
+ 'offsets',
+ 'on',
+ 'open',
+ 'opendatasource',
+ 'openquery',
+ 'openrowset',
+ 'openxml',
+ 'option',
+ 'or',
+ 'order',
+ 'outer',
+ 'over',
+ 'percent',
+ 'pivot',
+ 'plan',
+ 'precision',
+ 'primary',
+ 'print',
+ 'proc',
+ 'procedure',
+ 'public',
+ 'raiserror',
+ 'read',
+ 'readtext',
+ 'reconfigure',
+ 'references',
+ 'replication',
+ 'restore',
+ 'restrict',
+ 'return',
+ 'revert',
+ 'revoke',
+ 'right',
+ 'rollback',
+ 'rowcount',
+ 'rowguidcol',
+ 'rule',
+ 'save',
+ 'schema',
+ 'securityaudit',
+ 'select',
+ 'semantickeyphrasetable',
+ 'semanticsimilaritydetailstable',
+ 'semanticsimilaritytable',
+ 'session_user',
+ 'set',
+ 'setuser',
+ 'shutdown',
+ 'some',
+ 'statistics',
+ 'system_user',
+ 'table',
+ 'tablesample',
+ 'textsize',
+ 'then',
+ 'throw',
+ 'to',
+ 'top',
+ 'tran',
+ 'transaction',
+ 'trigger',
+ 'truncate',
+ 'try',
+ 'try_convert',
+ 'tsequal',
+ 'union',
+ 'unique',
+ 'unpivot',
+ 'update',
+ 'updatetext',
+ 'use',
+ 'user',
+ 'values',
+ 'varying',
+ 'view',
+ 'waitfor',
+ 'when',
+ 'where',
+ 'while',
+ 'with',
+ 'within',
+ 'writetext',
+)
+
+_KEYWORDS_FUTURE = (
+ 'absolute',
+ 'action',
+ 'admin',
+ 'after',
+ 'aggregate',
+ 'alias',
+ 'allocate',
+ 'are',
+ 'array',
+ 'asensitive',
+ 'assertion',
+ 'asymmetric',
+ 'at',
+ 'atomic',
+ 'before',
+ 'binary',
+ 'bit',
+ 'blob',
+ 'boolean',
+ 'both',
+ 'breadth',
+ 'call',
+ 'called',
+ 'cardinality',
+ 'cascaded',
+ 'cast',
+ 'catalog',
+ 'char',
+ 'character',
+ 'class',
+ 'clob',
+ 'collation',
+ 'collect',
+ 'completion',
+ 'condition',
+ 'connect',
+ 'connection',
+ 'constraints',
+ 'constructor',
+ 'corr',
+ 'corresponding',
+ 'covar_pop',
+ 'covar_samp',
+ 'cube',
+ 'cume_dist',
+ 'current_catalog',
+ 'current_default_transform_group',
+ 'current_path',
+ 'current_role',
+ 'current_schema',
+ 'current_transform_group_for_type',
+ 'cycle',
+ 'data',
+ 'date',
+ 'day',
+ 'dec',
+ 'decimal',
+ 'deferrable',
+ 'deferred',
+ 'depth',
+ 'deref',
+ 'describe',
+ 'descriptor',
+ 'destroy',
+ 'destructor',
+ 'deterministic',
+ 'diagnostics',
+ 'dictionary',
+ 'disconnect',
+ 'domain',
+ 'dynamic',
+ 'each',
+ 'element',
+ 'end-exec',
+ 'equals',
+ 'every',
+ 'exception',
+ 'false',
+ 'filter',
+ 'first',
+ 'float',
+ 'found',
+ 'free',
+ 'fulltexttable',
+ 'fusion',
+ 'general',
+ 'get',
+ 'global',
+ 'go',
+ 'grouping',
+ 'hold',
+ 'host',
+ 'hour',
+ 'ignore',
+ 'immediate',
+ 'indicator',
+ 'initialize',
+ 'initially',
+ 'inout',
+ 'input',
+ 'int',
+ 'integer',
+ 'intersection',
+ 'interval',
+ 'isolation',
+ 'iterate',
+ 'language',
+ 'large',
+ 'last',
+ 'lateral',
+ 'leading',
+ 'less',
+ 'level',
+ 'like_regex',
+ 'limit',
+ 'ln',
+ 'local',
+ 'localtime',
+ 'localtimestamp',
+ 'locator',
+ 'map',
+ 'match',
+ 'member',
+ 'method',
+ 'minute',
+ 'mod',
+ 'modifies',
+ 'modify',
+ 'module',
+ 'month',
+ 'multiset',
+ 'names',
+ 'natural',
+ 'nchar',
+ 'nclob',
+ 'new',
+ 'next',
+ 'no',
+ 'none',
+ 'normalize',
+ 'numeric',
+ 'object',
+ 'occurrences_regex',
+ 'old',
+ 'only',
+ 'operation',
+ 'ordinality',
+ 'out',
+ 'output',
+ 'overlay',
+ 'pad',
+ 'parameter',
+ 'parameters',
+ 'partial',
+ 'partition',
+ 'path',
+ 'percent_rank',
+ 'percentile_cont',
+ 'percentile_disc',
+ 'position_regex',
+ 'postfix',
+ 'prefix',
+ 'preorder',
+ 'prepare',
+ 'preserve',
+ 'prior',
+ 'privileges',
+ 'range',
+ 'reads',
+ 'real',
+ 'recursive',
+ 'ref',
+ 'referencing',
+ 'regr_avgx',
+ 'regr_avgy',
+ 'regr_count',
+ 'regr_intercept',
+ 'regr_r2',
+ 'regr_slope',
+ 'regr_sxx',
+ 'regr_sxy',
+ 'regr_syy',
+ 'relative',
+ 'release',
+ 'result',
+ 'returns',
+ 'role',
+ 'rollup',
+ 'routine',
+ 'row',
+ 'rows',
+ 'savepoint',
+ 'scope',
+ 'scroll',
+ 'search',
+ 'second',
+ 'section',
+ 'sensitive',
+ 'sequence',
+ 'session',
+ 'sets',
+ 'similar',
+ 'size',
+ 'smallint',
+ 'space',
+ 'specific',
+ 'specifictype',
+ 'sql',
+ 'sqlexception',
+ 'sqlstate',
+ 'sqlwarning',
+ 'start',
+ 'state',
+ 'statement',
+ 'static',
+ 'stddev_pop',
+ 'stddev_samp',
+ 'structure',
+ 'submultiset',
+ 'substring_regex',
+ 'symmetric',
+ 'system',
+ 'temporary',
+ 'terminate',
+ 'than',
+ 'time',
+ 'timestamp',
+ 'timezone_hour',
+ 'timezone_minute',
+ 'trailing',
+ 'translate_regex',
+ 'translation',
+ 'treat',
+ 'true',
+ 'uescape',
+ 'under',
+ 'unknown',
+ 'unnest',
+ 'usage',
+ 'using',
+ 'value',
+ 'var_pop',
+ 'var_samp',
+ 'varchar',
+ 'variable',
+ 'whenever',
+ 'width_bucket',
+ 'window',
+ 'within',
+ 'without',
+ 'work',
+ 'write',
+ 'xmlagg',
+ 'xmlattributes',
+ 'xmlbinary',
+ 'xmlcast',
+ 'xmlcomment',
+ 'xmlconcat',
+ 'xmldocument',
+ 'xmlelement',
+ 'xmlexists',
+ 'xmlforest',
+ 'xmliterate',
+ 'xmlnamespaces',
+ 'xmlparse',
+ 'xmlpi',
+ 'xmlquery',
+ 'xmlserialize',
+ 'xmltable',
+ 'xmltext',
+ 'xmlvalidate',
+ 'year',
+ 'zone',
+)
+
+_KEYWORDS_ODBC = (
+ 'absolute',
+ 'action',
+ 'ada',
+ 'add',
+ 'all',
+ 'allocate',
+ 'alter',
+ 'and',
+ 'any',
+ 'are',
+ 'as',
+ 'asc',
+ 'assertion',
+ 'at',
+ 'authorization',
+ 'avg',
+ 'begin',
+ 'between',
+ 'bit',
+ 'bit_length',
+ 'both',
+ 'by',
+ 'cascade',
+ 'cascaded',
+ 'case',
+ 'cast',
+ 'catalog',
+ 'char',
+ 'char_length',
+ 'character',
+ 'character_length',
+ 'check',
+ 'close',
+ 'coalesce',
+ 'collate',
+ 'collation',
+ 'column',
+ 'commit',
+ 'connect',
+ 'connection',
+ 'constraint',
+ 'constraints',
+ 'continue',
+ 'convert',
+ 'corresponding',
+ 'count',
+ 'create',
+ 'cross',
+ 'current',
+ 'current_date',
+ 'current_time',
+ 'current_timestamp',
+ 'current_user',
+ 'cursor',
+ 'date',
+ 'day',
+ 'deallocate',
+ 'dec',
+ 'decimal',
+ 'declare',
+ 'default',
+ 'deferrable',
+ 'deferred',
+ 'delete',
+ 'desc',
+ 'describe',
+ 'descriptor',
+ 'diagnostics',
+ 'disconnect',
+ 'distinct',
+ 'domain',
+ 'double',
+ 'drop',
+ 'else',
+ 'end',
+ 'end-exec',
+ 'escape',
+ 'except',
+ 'exception',
+ 'exec',
+ 'execute',
+ 'exists',
+ 'external',
+ 'extract',
+ 'false',
+ 'fetch',
+ 'first',
+ 'float',
+ 'for',
+ 'foreign',
+ 'fortran',
+ 'found',
+ 'from',
+ 'full',
+ 'get',
+ 'global',
+ 'go',
+ 'goto',
+ 'grant',
+ 'group',
+ 'having',
+ 'hour',
+ 'identity',
+ 'immediate',
+ 'in',
+ 'include',
+ 'index',
+ 'indicator',
+ 'initially',
+ 'inner',
+ 'input',
+ 'insensitive',
+ 'insert',
+ 'int',
+ 'integer',
+ 'intersect',
+ 'interval',
+ 'into',
+ 'is',
+ 'isolation',
+ 'join',
+ 'key',
+ 'language',
+ 'last',
+ 'leading',
+ 'left',
+ 'level',
+ 'like',
+ 'local',
+ 'lower',
+ 'match',
+ 'max',
+ 'min',
+ 'minute',
+ 'module',
+ 'month',
+ 'names',
+ 'national',
+ 'natural',
+ 'nchar',
+ 'next',
+ 'no',
+ 'none',
+ 'not',
+ 'null',
+ 'nullif',
+ 'numeric',
+ 'octet_length',
+ 'of',
+ 'on',
+ 'only',
+ 'open',
+ 'option',
+ 'or',
+ 'order',
+ 'outer',
+ 'output',
+ 'overlaps',
+ 'pad',
+ 'partial',
+ 'pascal',
+ 'position',
+ 'precision',
+ 'prepare',
+ 'preserve',
+ 'primary',
+ 'prior',
+ 'privileges',
+ 'procedure',
+ 'public',
+ 'read',
+ 'real',
+ 'references',
+ 'relative',
+ 'restrict',
+ 'revoke',
+ 'right',
+ 'rollback',
+ 'rows',
+ 'schema',
+ 'scroll',
+ 'second',
+ 'section',
+ 'select',
+ 'session',
+ 'session_user',
+ 'set',
+ 'size',
+ 'smallint',
+ 'some',
+ 'space',
+ 'sql',
+ 'sqlca',
+ 'sqlcode',
+ 'sqlerror',
+ 'sqlstate',
+ 'sqlwarning',
+ 'substring',
+ 'sum',
+ 'system_user',
+ 'table',
+ 'temporary',
+ 'then',
+ 'time',
+ 'timestamp',
+ 'timezone_hour',
+ 'timezone_minute',
+ 'to',
+ 'trailing',
+ 'transaction',
+ 'translate',
+ 'translation',
+ 'trim',
+ 'true',
+ 'union',
+ 'unique',
+ 'unknown',
+ 'update',
+ 'upper',
+ 'usage',
+ 'user',
+ 'using',
+ 'value',
+ 'values',
+ 'varchar',
+ 'varying',
+ 'view',
+ 'when',
+ 'whenever',
+ 'where',
+ 'with',
+ 'work',
+ 'write',
+ 'year',
+ 'zone',
+)
+
+# See https://msdn.microsoft.com/en-us/library/ms189822.aspx.
+KEYWORDS = sorted(set(_KEYWORDS_FUTURE + _KEYWORDS_ODBC + _KEYWORDS_SERVER))
+
+# See https://msdn.microsoft.com/en-us/library/ms187752.aspx.
+TYPES = (
+ 'bigint',
+ 'binary',
+ 'bit',
+ 'char',
+ 'cursor',
+ 'date',
+ 'datetime',
+ 'datetime2',
+ 'datetimeoffset',
+ 'decimal',
+ 'float',
+ 'hierarchyid',
+ 'image',
+ 'int',
+ 'money',
+ 'nchar',
+ 'ntext',
+ 'numeric',
+ 'nvarchar',
+ 'real',
+ 'smalldatetime',
+ 'smallint',
+ 'smallmoney',
+ 'sql_variant',
+ 'table',
+ 'text',
+ 'time',
+ 'timestamp',
+ 'tinyint',
+ 'uniqueidentifier',
+ 'varbinary',
+ 'varchar',
+ 'xml',
+)
+
+# See https://msdn.microsoft.com/en-us/library/ms174318.aspx.
+FUNCTIONS = (
+ '$partition',
+ 'abs',
+ 'acos',
+ 'app_name',
+ 'applock_mode',
+ 'applock_test',
+ 'ascii',
+ 'asin',
+ 'assemblyproperty',
+ 'atan',
+ 'atn2',
+ 'avg',
+ 'binary_checksum',
+ 'cast',
+ 'ceiling',
+ 'certencoded',
+ 'certprivatekey',
+ 'char',
+ 'charindex',
+ 'checksum',
+ 'checksum_agg',
+ 'choose',
+ 'col_length',
+ 'col_name',
+ 'columnproperty',
+ 'compress',
+ 'concat',
+ 'connectionproperty',
+ 'context_info',
+ 'convert',
+ 'cos',
+ 'cot',
+ 'count',
+ 'count_big',
+ 'current_request_id',
+ 'current_timestamp',
+ 'current_transaction_id',
+ 'current_user',
+ 'cursor_status',
+ 'database_principal_id',
+ 'databasepropertyex',
+ 'dateadd',
+ 'datediff',
+ 'datediff_big',
+ 'datefromparts',
+ 'datename',
+ 'datepart',
+ 'datetime2fromparts',
+ 'datetimefromparts',
+ 'datetimeoffsetfromparts',
+ 'day',
+ 'db_id',
+ 'db_name',
+ 'decompress',
+ 'degrees',
+ 'dense_rank',
+ 'difference',
+ 'eomonth',
+ 'error_line',
+ 'error_message',
+ 'error_number',
+ 'error_procedure',
+ 'error_severity',
+ 'error_state',
+ 'exp',
+ 'file_id',
+ 'file_idex',
+ 'file_name',
+ 'filegroup_id',
+ 'filegroup_name',
+ 'filegroupproperty',
+ 'fileproperty',
+ 'floor',
+ 'format',
+ 'formatmessage',
+ 'fulltextcatalogproperty',
+ 'fulltextserviceproperty',
+ 'get_filestream_transaction_context',
+ 'getansinull',
+ 'getdate',
+ 'getutcdate',
+ 'grouping',
+ 'grouping_id',
+ 'has_perms_by_name',
+ 'host_id',
+ 'host_name',
+ 'iif',
+ 'index_col',
+ 'indexkey_property',
+ 'indexproperty',
+ 'is_member',
+ 'is_rolemember',
+ 'is_srvrolemember',
+ 'isdate',
+ 'isjson',
+ 'isnull',
+ 'isnumeric',
+ 'json_modify',
+ 'json_query',
+ 'json_value',
+ 'left',
+ 'len',
+ 'log',
+ 'log10',
+ 'lower',
+ 'ltrim',
+ 'max',
+ 'min',
+ 'min_active_rowversion',
+ 'month',
+ 'nchar',
+ 'newid',
+ 'newsequentialid',
+ 'ntile',
+ 'object_definition',
+ 'object_id',
+ 'object_name',
+ 'object_schema_name',
+ 'objectproperty',
+ 'objectpropertyex',
+ 'opendatasource',
+ 'openjson',
+ 'openquery',
+ 'openrowset',
+ 'openxml',
+ 'original_db_name',
+ 'original_login',
+ 'parse',
+ 'parsename',
+ 'patindex',
+ 'permissions',
+ 'pi',
+ 'power',
+ 'pwdcompare',
+ 'pwdencrypt',
+ 'quotename',
+ 'radians',
+ 'rand',
+ 'rank',
+ 'replace',
+ 'replicate',
+ 'reverse',
+ 'right',
+ 'round',
+ 'row_number',
+ 'rowcount_big',
+ 'rtrim',
+ 'schema_id',
+ 'schema_name',
+ 'scope_identity',
+ 'serverproperty',
+ 'session_context',
+ 'session_user',
+ 'sign',
+ 'sin',
+ 'smalldatetimefromparts',
+ 'soundex',
+ 'sp_helplanguage',
+ 'space',
+ 'sqrt',
+ 'square',
+ 'stats_date',
+ 'stdev',
+ 'stdevp',
+ 'str',
+ 'string_escape',
+ 'string_split',
+ 'stuff',
+ 'substring',
+ 'sum',
+ 'suser_id',
+ 'suser_name',
+ 'suser_sid',
+ 'suser_sname',
+ 'switchoffset',
+ 'sysdatetime',
+ 'sysdatetimeoffset',
+ 'system_user',
+ 'sysutcdatetime',
+ 'tan',
+ 'textptr',
+ 'textvalid',
+ 'timefromparts',
+ 'todatetimeoffset',
+ 'try_cast',
+ 'try_convert',
+ 'try_parse',
+ 'type_id',
+ 'type_name',
+ 'typeproperty',
+ 'unicode',
+ 'upper',
+ 'user_id',
+ 'user_name',
+ 'var',
+ 'varp',
+ 'xact_state',
+ 'year',
+)
This file is autogenerated by scripts/get_vimkw.py
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Lexers for ActionScript and MXML.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Just export lexer classes previously contained in this module.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Lexers for computer algebra systems.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
(r'#\d*', Name.Variable),
(r'([a-zA-Z]+[a-zA-Z0-9]*)', Name),
- (r'-?[0-9]+\.[0-9]*', Number.Float),
- (r'-?[0-9]*\.[0-9]+', Number.Float),
- (r'-?[0-9]+', Number.Integer),
+ (r'-?\d+\.\d*', Number.Float),
+ (r'-?\d*\.\d+', Number.Float),
+ (r'-?\d+', Number.Integer),
(words(operators), Operator),
(words(punctuation), Punctuation),
Lexers for AmbientTalk language.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
--- /dev/null
+# -*- coding: utf-8 -*-
+"""
+ pygments.lexers.ampl
+ ~~~~~~~~~~~~~~~~~~~~
+
+ Lexers for the AMPL language <http://ampl.com/>.
+
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
+ :license: BSD, see LICENSE for details.
+"""
+
+from pygments.lexer import RegexLexer, bygroups, using, this, words
+from pygments.token import Text, Comment, Operator, Keyword, Name, String, \
+ Number, Punctuation
+
+__all__ = ['AmplLexer']
+
+
+class AmplLexer(RegexLexer):
+ """
+ For AMPL source code.
+
+ .. versionadded:: 2.2
+ """
+ name = 'Ampl'
+ aliases = ['ampl']
+ filenames = ['*.run']
+
+ tokens = {
+ 'root': [
+ (r'\n', Text),
+ (r'\s+', Text.Whitespace),
+ (r'#.*?\n', Comment.Single),
+ (r'/[*](.|\n)*?[*]/', Comment.Multiline),
+ (words((
+ 'call', 'cd', 'close', 'commands', 'data', 'delete', 'display',
+ 'drop', 'end', 'environ', 'exit', 'expand', 'include', 'load',
+ 'model', 'objective', 'option', 'problem', 'purge', 'quit',
+ 'redeclare', 'reload', 'remove', 'reset', 'restore', 'shell',
+ 'show', 'solexpand', 'solution', 'solve', 'update', 'unload',
+ 'xref', 'coeff', 'coef', 'cover', 'obj', 'interval', 'default',
+ 'from', 'to', 'to_come', 'net_in', 'net_out', 'dimen',
+ 'dimension', 'check', 'complements', 'write', 'function',
+ 'pipe', 'format', 'if', 'then', 'else', 'in', 'while', 'repeat',
+ 'for'), suffix=r'\b'), Keyword.Reserved),
+ (r'(integer|binary|symbolic|ordered|circular|reversed|INOUT|IN|OUT|LOCAL)',
+ Keyword.Type),
+ (r'\".*?\"', String.Double),
+ (r'\'.*?\'', String.Single),
+ (r'[()\[\]{},;:]+', Punctuation),
+ (r'\b(\w+)(\.)(astatus|init0|init|lb0|lb1|lb2|lb|lrc|'
+ r'lslack|rc|relax|slack|sstatus|status|ub0|ub1|ub2|'
+ r'ub|urc|uslack|val)',
+ bygroups(Name.Variable, Punctuation, Keyword.Reserved)),
+ (r'(set|param|var|arc|minimize|maximize|subject to|s\.t\.|subj to|'
+ r'node|table|suffix|read table|write table)(\s+)(\w+)',
+ bygroups(Keyword.Declaration, Text, Name.Variable)),
+ (r'(param)(\s*)(:)(\s*)(\w+)(\s*)(:)(\s*)((\w|\s)+)',
+ bygroups(Keyword.Declaration, Text, Punctuation, Text,
+ Name.Variable, Text, Punctuation, Text, Name.Variable)),
+ (r'(let|fix|unfix)(\s*)((?:\{.*\})?)(\s*)(\w+)',
+ bygroups(Keyword.Declaration, Text, using(this), Text, Name.Variable)),
+ (words((
+ 'abs', 'acos', 'acosh', 'alias', 'asin', 'asinh', 'atan', 'atan2',
+ 'atanh', 'ceil', 'ctime', 'cos', 'exp', 'floor', 'log', 'log10',
+ 'max', 'min', 'precision', 'round', 'sin', 'sinh', 'sqrt', 'tan',
+ 'tanh', 'time', 'trunc', 'Beta', 'Cauchy', 'Exponential', 'Gamma',
+ 'Irand224', 'Normal', 'Normal01', 'Poisson', 'Uniform', 'Uniform01',
+ 'num', 'num0', 'ichar', 'char', 'length', 'substr', 'sprintf',
+ 'match', 'sub', 'gsub', 'print', 'printf', 'next', 'nextw', 'prev',
+ 'prevw', 'first', 'last', 'ord', 'ord0', 'card', 'arity',
+ 'indexarity'), prefix=r'\b', suffix=r'\b'), Name.Builtin),
+ (r'(\+|\-|\*|/|\*\*|=|<=|>=|==|\||\^|<|>|\!|\.\.|:=|\&|\!=|<<|>>)',
+ Operator),
+ (words((
+ 'or', 'exists', 'forall', 'and', 'in', 'not', 'within', 'union',
+ 'diff', 'difference', 'symdiff', 'inter', 'intersect',
+ 'intersection', 'cross', 'setof', 'by', 'less', 'sum', 'prod',
+ 'product', 'div', 'mod'), suffix=r'\b'),
+ Keyword.Reserved),  # Operator.Word would fit, but is not emphasized enough
+ (r'(\d+\.(?!\.)\d*|\.(?!.)\d+)([eE][+-]?\d+)?', Number.Float),
+ (r'\d+([eE][+-]?\d+)?', Number.Integer),
+ (r'[+-]?Infinity', Number.Integer),
+ (r'(\w+|(\.(?!\.)))', Text)
+ ]
+
+ }
Lexers for APL.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Contributed by Thomas Beale <https://github.com/wolandscat>,
<https://bitbucket.org/thomas_beale>.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Lexers for assembly languages.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
import re
-from pygments.lexer import RegexLexer, include, bygroups, using, DelegatingLexer
+from pygments.lexer import RegexLexer, include, bygroups, using, words, \
+ DelegatingLexer
from pygments.lexers.c_cpp import CppLexer, CLexer
from pygments.lexers.d import DLexer
from pygments.token import Text, Name, Number, String, Comment, Punctuation, \
Other, Keyword, Operator
__all__ = ['GasLexer', 'ObjdumpLexer', 'DObjdumpLexer', 'CppObjdumpLexer',
- 'CObjdumpLexer', 'LlvmLexer', 'NasmLexer', 'NasmObjdumpLexer',
- 'Ca65Lexer']
+ 'CObjdumpLexer', 'HsailLexer', 'LlvmLexer', 'NasmLexer',
+ 'NasmObjdumpLexer', 'TasmLexer', 'Ca65Lexer']
class GasLexer(RegexLexer):
(number, Number.Integer),
(r'[\r\n]+', Text, '#pop'),
- (r'#.*?$', Comment, '#pop'),
-
include('punctuation'),
include('whitespace')
],
('$'+number, Number.Integer),
(r"$'(.|\\')'", String.Char),
(r'[\r\n]+', Text, '#pop'),
- (r'#.*?$', Comment, '#pop'),
+
include('punctuation'),
include('whitespace')
],
'whitespace': [
(r'\n', Text),
(r'\s+', Text),
- (r'#.*?\n', Comment)
+ (r'[;#].*?\n', Comment)
],
'punctuation': [
(r'[-*,.()\[\]!:]+', Punctuation)
super(CObjdumpLexer, self).__init__(CLexer, ObjdumpLexer, **options)
+class HsailLexer(RegexLexer):
+ """
+ For HSAIL assembly code.
+
+ .. versionadded:: 2.2
+ """
+ name = 'HSAIL'
+ aliases = ['hsail', 'hsa']
+ filenames = ['*.hsail']
+ mimetypes = ['text/x-hsail']
+
+ string = r'"[^"]*?"'
+ identifier = r'[a-zA-Z_][\w.]*'
+ # Registers
+ register_number = r'[0-9]+'
+ register = r'(\$(c|s|d|q)' + register_number + ')'
+ # Qualifiers
+ alignQual = r'(align\(\d+\))'
+ widthQual = r'(width\((\d+|all)\))'
+ allocQual = r'(alloc\(agent\))'
+ # Instruction Modifiers
+ roundingMod = (r'((_ftz)?(_up|_down|_zero|_near))')
+ datatypeMod = (r'_('
+ # packedTypes
+ r'u8x4|s8x4|u16x2|s16x2|u8x8|s8x8|u16x4|s16x4|u32x2|s32x2|'
+ r'u8x16|s8x16|u16x8|s16x8|u32x4|s32x4|u64x2|s64x2|'
+ r'f16x2|f16x4|f16x8|f32x2|f32x4|f64x2|'
+ # baseTypes
+ r'u8|s8|u16|s16|u32|s32|u64|s64|'
+ r'b128|b8|b16|b32|b64|b1|'
+ r'f16|f32|f64|'
+ # opaqueType
+ r'roimg|woimg|rwimg|samp|sig32|sig64)')
+
+ # Numeric Constant
+ float = r'((\d+\.)|(\d*\.\d+))[eE][+-]?\d+'
+ hexfloat = r'0[xX](([0-9a-fA-F]+\.[0-9a-fA-F]*)|([0-9a-fA-F]*\.[0-9a-fA-F]+))[pP][+-]?\d+'
+ ieeefloat = r'0((h|H)[0-9a-fA-F]{4}|(f|F)[0-9a-fA-F]{8}|(d|D)[0-9a-fA-F]{16})'
+
+ tokens = {
+ 'root': [
+ include('whitespace'),
+ include('comments'),
+
+ (string, String),
+
+ (r'@' + identifier + ':?', Name.Label),
+
+ (register, Name.Variable.Anonymous),
+
+ include('keyword'),
+
+ (r'&' + identifier, Name.Variable.Global),
+ (r'%' + identifier, Name.Variable),
+
+ (hexfloat, Number.Hex),
+ (r'0[xX][a-fA-F0-9]+', Number.Hex),
+ (ieeefloat, Number.Float),
+ (float, Number.Float),
+ (r'\d+', Number.Integer),
+
+ (r'[=<>{}\[\]()*.,:;!]|x\b', Punctuation)
+ ],
+ 'whitespace': [
+ (r'(\n|\s)+', Text),
+ ],
+ 'comments': [
+ (r'/\*.*?\*/', Comment.Multiline),
+ (r'//.*?\n', Comment.Single),
+ ],
+ 'keyword': [
+ # Types
+ (r'kernarg' + datatypeMod, Keyword.Type),
+
+ # Regular keywords
+ (r'\$(full|base|small|large|default|zero|near)', Keyword),
+ (words((
+ 'module', 'extension', 'pragma', 'prog', 'indirect', 'signature',
+ 'decl', 'kernel', 'function', 'enablebreakexceptions',
+ 'enabledetectexceptions', 'maxdynamicgroupsize', 'maxflatgridsize',
+ 'maxflatworkgroupsize', 'requireddim', 'requiredgridsize',
+ 'requiredworkgroupsize', 'requirenopartialworkgroups'),
+ suffix=r'\b'), Keyword),
+
+ # instructions
+ (roundingMod, Keyword),
+ (datatypeMod, Keyword),
+ (r'_(' + alignQual + '|' + widthQual + ')', Keyword),
+ (r'_kernarg', Keyword),
+ (r'(nop|imagefence)\b', Keyword),
+ (words((
+ 'cleardetectexcept', 'clock', 'cuid', 'debugtrap', 'dim',
+ 'getdetectexcept', 'groupbaseptr', 'kernargbaseptr', 'laneid',
+ 'maxcuid', 'maxwaveid', 'packetid', 'setdetectexcept', 'waveid',
+ 'workitemflatabsid', 'workitemflatid', 'nullptr', 'abs', 'bitrev',
+ 'currentworkgroupsize', 'currentworkitemflatid', 'fract', 'ncos',
+ 'neg', 'nexp2', 'nlog2', 'nrcp', 'nrsqrt', 'nsin', 'nsqrt',
+ 'gridgroups', 'gridsize', 'not', 'sqrt', 'workgroupid',
+ 'workgroupsize', 'workitemabsid', 'workitemid', 'ceil', 'floor',
+ 'rint', 'trunc', 'add', 'bitmask', 'borrow', 'carry', 'copysign',
+ 'div', 'rem', 'sub', 'shl', 'shr', 'and', 'or', 'xor', 'unpackhi',
+ 'unpacklo', 'max', 'min', 'fma', 'mad', 'bitextract', 'bitselect',
+ 'shuffle', 'cmov', 'bitalign', 'bytealign', 'lerp', 'nfma', 'mul',
+ 'mulhi', 'mul24hi', 'mul24', 'mad24', 'mad24hi', 'bitinsert',
+ 'combine', 'expand', 'lda', 'mov', 'pack', 'unpack', 'packcvt',
+ 'unpackcvt', 'sad', 'sementp', 'ftos', 'stof', 'cmp', 'ld', 'st',
+ '_eq', '_ne', '_lt', '_le', '_gt', '_ge', '_equ', '_neu', '_ltu',
+ '_leu', '_gtu', '_geu', '_num', '_nan', '_seq', '_sne', '_slt',
+ '_sle', '_sgt', '_sge', '_snum', '_snan', '_sequ', '_sneu', '_sltu',
+ '_sleu', '_sgtu', '_sgeu', 'atomic', '_ld', '_st', '_cas', '_add',
+ '_and', '_exch', '_max', '_min', '_or', '_sub', '_wrapdec',
+ '_wrapinc', '_xor', 'ret', 'cvt', '_readonly', '_kernarg', '_global',
+ 'br', 'cbr', 'sbr', '_scacq', '_screl', '_scar', '_rlx', '_wave',
+ '_wg', '_agent', '_system', 'ldimage', 'stimage', '_v2', '_v3', '_v4',
+ '_1d', '_2d', '_3d', '_1da', '_2da', '_1db', '_2ddepth', '_2dadepth',
+ '_width', '_height', '_depth', '_array', '_channelorder',
+ '_channeltype', 'querysampler', '_coord', '_filter', '_addressing',
+ 'barrier', 'wavebarrier', 'initfbar', 'joinfbar', 'waitfbar',
+ 'arrivefbar', 'leavefbar', 'releasefbar', 'ldf', 'activelaneid',
+ 'activelanecount', 'activelanemask', 'activelanepermute', 'call',
+ 'scall', 'icall', 'alloca', 'packetcompletionsig',
+ 'addqueuewriteindex', 'casqueuewriteindex', 'ldqueuereadindex',
+ 'stqueuereadindex', 'readonly', 'global', 'private', 'group',
+ 'spill', 'arg', '_upi', '_downi', '_zeroi', '_neari', '_upi_sat',
+ '_downi_sat', '_zeroi_sat', '_neari_sat', '_supi', '_sdowni',
+ '_szeroi', '_sneari', '_supi_sat', '_sdowni_sat', '_szeroi_sat',
+ '_sneari_sat', '_pp', '_ps', '_sp', '_ss', '_s', '_p', '_pp_sat',
+ '_ps_sat', '_sp_sat', '_ss_sat', '_s_sat', '_p_sat')), Keyword),
+
+ # Integer types
+ (r'i[1-9]\d*', Keyword)
+ ]
+ }
+
+
class LlvmLexer(RegexLexer):
"""
For LLVM assembly code.
],
'keyword': [
# Regular keywords
- (r'(begin|end'
- r'|true|false'
- r'|declare|define'
- r'|global|constant'
-
- r'|private|linker_private|internal|available_externally|linkonce'
- r'|linkonce_odr|weak|weak_odr|appending|dllimport|dllexport'
- r'|common|default|hidden|protected|extern_weak|external'
- r'|thread_local|zeroinitializer|undef|null|to|tail|target|triple'
- r'|datalayout|volatile|nuw|nsw|nnan|ninf|nsz|arcp|fast|exact|inbounds'
- r'|align|addrspace|section|alias|module|asm|sideeffect|gc|dbg'
- r'|linker_private_weak'
- r'|attributes|blockaddress|initialexec|localdynamic|localexec'
- r'|prefix|unnamed_addr'
-
- r'|ccc|fastcc|coldcc|x86_stdcallcc|x86_fastcallcc|arm_apcscc'
- r'|arm_aapcscc|arm_aapcs_vfpcc|ptx_device|ptx_kernel'
- r'|intel_ocl_bicc|msp430_intrcc|spir_func|spir_kernel'
- r'|x86_64_sysvcc|x86_64_win64cc|x86_thiscallcc'
-
- r'|cc|c'
-
- r'|signext|zeroext|inreg|sret|nounwind|noreturn|noalias|nocapture'
- r'|byval|nest|readnone|readonly'
- r'|inlinehint|noinline|alwaysinline|optsize|ssp|sspreq|noredzone'
- r'|noimplicitfloat|naked'
- r'|builtin|cold|nobuiltin|noduplicate|nonlazybind|optnone'
- r'|returns_twice|sanitize_address|sanitize_memory|sanitize_thread'
- r'|sspstrong|uwtable|returned'
-
- r'|type|opaque'
-
- r'|eq|ne|slt|sgt|sle'
- r'|sge|ult|ugt|ule|uge'
- r'|oeq|one|olt|ogt|ole'
- r'|oge|ord|uno|ueq|une'
- r'|x'
- r'|acq_rel|acquire|alignstack|atomic|catch|cleanup|filter'
- r'|inteldialect|max|min|monotonic|nand|personality|release'
- r'|seq_cst|singlethread|umax|umin|unordered|xchg'
-
- # instructions
- r'|add|fadd|sub|fsub|mul|fmul|udiv|sdiv|fdiv|urem|srem|frem|shl'
- r'|lshr|ashr|and|or|xor|icmp|fcmp'
-
- r'|phi|call|trunc|zext|sext|fptrunc|fpext|uitofp|sitofp|fptoui'
- r'|fptosi|inttoptr|ptrtoint|bitcast|addrspacecast'
- r'|select|va_arg|ret|br|switch'
- r'|invoke|unwind|unreachable'
- r'|indirectbr|landingpad|resume'
-
- r'|malloc|alloca|free|load|store|getelementptr'
-
- r'|extractelement|insertelement|shufflevector|getresult'
- r'|extractvalue|insertvalue'
-
- r'|atomicrmw|cmpxchg|fence'
-
- r')\b', Keyword),
+ (words((
+ 'begin', 'end', 'true', 'false', 'declare', 'define', 'global',
+ 'constant', 'private', 'linker_private', 'internal',
+ 'available_externally', 'linkonce', 'linkonce_odr', 'weak',
+ 'weak_odr', 'appending', 'dllimport', 'dllexport', 'common',
+ 'default', 'hidden', 'protected', 'extern_weak', 'external',
+ 'thread_local', 'zeroinitializer', 'undef', 'null', 'to', 'tail',
+ 'target', 'triple', 'datalayout', 'volatile', 'nuw', 'nsw', 'nnan',
+ 'ninf', 'nsz', 'arcp', 'fast', 'exact', 'inbounds', 'align',
+ 'addrspace', 'section', 'alias', 'module', 'asm', 'sideeffect',
+ 'gc', 'dbg', 'linker_private_weak', 'attributes', 'blockaddress',
+ 'initialexec', 'localdynamic', 'localexec', 'prefix', 'unnamed_addr',
+ 'ccc', 'fastcc', 'coldcc', 'x86_stdcallcc', 'x86_fastcallcc',
+ 'arm_apcscc', 'arm_aapcscc', 'arm_aapcs_vfpcc', 'ptx_device',
+ 'ptx_kernel', 'intel_ocl_bicc', 'msp430_intrcc', 'spir_func',
+ 'spir_kernel', 'x86_64_sysvcc', 'x86_64_win64cc', 'x86_thiscallcc',
+ 'cc', 'c', 'signext', 'zeroext', 'inreg', 'sret', 'nounwind',
+ 'noreturn', 'noalias', 'nocapture', 'byval', 'nest', 'readnone',
+ 'readonly', 'inlinehint', 'noinline', 'alwaysinline', 'optsize', 'ssp',
+ 'sspreq', 'noredzone', 'noimplicitfloat', 'naked', 'builtin', 'cold',
+ 'nobuiltin', 'noduplicate', 'nonlazybind', 'optnone', 'returns_twice',
+ 'sanitize_address', 'sanitize_memory', 'sanitize_thread', 'sspstrong',
+ 'uwtable', 'returned', 'type', 'opaque', 'eq', 'ne', 'slt', 'sgt',
+ 'sle', 'sge', 'ult', 'ugt', 'ule', 'uge', 'oeq', 'one', 'olt', 'ogt',
+ 'ole', 'oge', 'ord', 'uno', 'ueq', 'une', 'x', 'acq_rel', 'acquire',
+ 'alignstack', 'atomic', 'catch', 'cleanup', 'filter', 'inteldialect',
+ 'max', 'min', 'monotonic', 'nand', 'personality', 'release', 'seq_cst',
+ 'singlethread', 'umax', 'umin', 'unordered', 'xchg', 'add', 'fadd',
+ 'sub', 'fsub', 'mul', 'fmul', 'udiv', 'sdiv', 'fdiv', 'urem', 'srem',
+ 'frem', 'shl', 'lshr', 'ashr', 'and', 'or', 'xor', 'icmp', 'fcmp',
+ 'phi', 'call', 'trunc', 'zext', 'sext', 'fptrunc', 'fpext', 'uitofp',
+ 'sitofp', 'fptoui', 'fptosi', 'inttoptr', 'ptrtoint', 'bitcast',
+ 'addrspacecast', 'select', 'va_arg', 'ret', 'br', 'switch', 'invoke',
+ 'unwind', 'unreachable', 'indirectbr', 'landingpad', 'resume',
+ 'malloc', 'alloca', 'free', 'load', 'store', 'getelementptr',
+ 'extractelement', 'insertelement', 'shufflevector', 'getresult',
+ 'extractvalue', 'insertvalue', 'atomicrmw', 'cmpxchg', 'fence',
+ 'allocsize', 'amdgpu_cs', 'amdgpu_gs', 'amdgpu_kernel', 'amdgpu_ps',
+ 'amdgpu_vs', 'any', 'anyregcc', 'argmemonly', 'avr_intrcc',
+ 'avr_signalcc', 'caller', 'catchpad', 'catchret', 'catchswitch',
+ 'cleanuppad', 'cleanupret', 'comdat', 'convergent', 'cxx_fast_tlscc',
+ 'deplibs', 'dereferenceable', 'dereferenceable_or_null', 'distinct',
+ 'exactmatch', 'externally_initialized', 'from', 'ghccc', 'hhvm_ccc',
+ 'hhvmcc', 'ifunc', 'inaccessiblemem_or_argmemonly', 'inaccessiblememonly',
+ 'inalloca', 'jumptable', 'largest', 'local_unnamed_addr', 'minsize',
+ 'musttail', 'noduplicates', 'none', 'nonnull', 'norecurse', 'notail',
+ 'preserve_allcc', 'preserve_mostcc', 'prologue', 'safestack', 'samesize',
+ 'source_filename', 'swiftcc', 'swifterror', 'swiftself', 'webkit_jscc',
+ 'within', 'writeonly', 'x86_intrcc', 'x86_vectorcallcc'),
+ suffix=r'\b'), Keyword),
# Types
- (r'void|half|float|double|x86_fp80|fp128|ppc_fp128|label|metadata',
- Keyword.Type),
+ (words(('void', 'half', 'float', 'double', 'x86_fp80', 'fp128',
+ 'ppc_fp128', 'label', 'metadata', 'token')), Keyword.Type),
# Integer types
(r'i[1-9]\d*', Keyword)
tokens = _objdump_lexer_tokens(NasmLexer)
+class TasmLexer(RegexLexer):
+ """
+ For Tasm (Turbo Assembler) assembly code.
+ """
+ name = 'TASM'
+ aliases = ['tasm']
+ filenames = ['*.asm', '*.ASM', '*.tasm']
+ mimetypes = ['text/x-tasm']
+
+ identifier = r'[@a-z$._?][\w$.?#@~]*'
+ hexn = r'(?:0x[0-9a-f]+|$0[0-9a-f]*|[0-9]+[0-9a-f]*h)'
+ octn = r'[0-7]+q'
+ binn = r'[01]+b'
+ decn = r'[0-9]+'
+ floatn = decn + r'\.e?' + decn
+ string = r'"(\\"|[^"\n])*"|' + r"'(\\'|[^'\n])*'|" + r"`(\\`|[^`\n])*`"
+ declkw = r'(?:res|d)[bwdqt]|times'
+ register = (r'r[0-9][0-5]?[bwd]|'
+ r'[a-d][lh]|[er]?[a-d]x|[er]?[sb]p|[er]?[sd]i|[c-gs]s|st[0-7]|'
+ r'mm[0-7]|cr[0-4]|dr[0-367]|tr[3-7]')
+ wordop = r'seg|wrt|strict'
+ type = r'byte|[dq]?word'
+ directives = (r'BITS|USE16|USE32|SECTION|SEGMENT|ABSOLUTE|EXTERN|GLOBAL|'
+ r'ORG|ALIGN|STRUC|ENDSTRUC|ENDS|COMMON|CPU|GROUP|UPPERCASE|INCLUDE|'
+ r'EXPORT|LIBRARY|MODULE|PROC|ENDP|USES|ARG|DATASEG|UDATASEG|END|IDEAL|'
+ r'P386|MODEL|ASSUME|CODESEG|SIZE')
+ # Treating T[A-Z][a-z]+ identifiers as types is only a convention; ideally the
+ # lexer would collect STRUC definitions and add those names to datatype.
+ datatype = (r'db|dd|dw|T[A-Z][a-z]+')
+
+ flags = re.IGNORECASE | re.MULTILINE
+ tokens = {
+ 'root': [
+ (r'^\s*%', Comment.Preproc, 'preproc'),
+ include('whitespace'),
+ (identifier + ':', Name.Label),
+ (directives, Keyword, 'instruction-args'),
+ (r'(%s)(\s+)(%s)' % (identifier, datatype),
+ bygroups(Name.Constant, Keyword.Declaration, Keyword.Declaration),
+ 'instruction-args'),
+ (declkw, Keyword.Declaration, 'instruction-args'),
+ (identifier, Name.Function, 'instruction-args'),
+ (r'[\r\n]+', Text)
+ ],
+ 'instruction-args': [
+ (string, String),
+ (hexn, Number.Hex),
+ (octn, Number.Oct),
+ (binn, Number.Bin),
+ (floatn, Number.Float),
+ (decn, Number.Integer),
+ include('punctuation'),
+ (register, Name.Builtin),
+ (identifier, Name.Variable),
+ # Do not match newline when it's preceded by a backslash
+ (r'(\\\s*)(;.*)([\r\n])', bygroups(Text, Comment.Single, Text)),
+ (r'[\r\n]+', Text, '#pop'),
+ include('whitespace')
+ ],
+ 'preproc': [
+ (r'[^;\n]+', Comment.Preproc),
+ (r';.*?\n', Comment.Single, '#pop'),
+ (r'\n', Comment.Preproc, '#pop'),
+ ],
+ 'whitespace': [
+ (r'[\n\r]', Text),
+ (r'\\[\n\r]', Text),
+ (r'[ \t]+', Text),
+ (r';.*', Comment.Single)
+ ],
+ 'punctuation': [
+ (r'[,():\[\]]+', Punctuation),
+ (r'[&|^<>+*=/%~-]+', Operator),
+ (r'[$]+', Keyword.Constant),
+ (wordop, Operator.Word),
+ (type, Keyword.Type)
+ ],
+ }
+
+
class Ca65Lexer(RegexLexer):
"""
For ca65 assembler sources.
Lexers for automation scripting languages.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Lexers for BASIC like languages (other than VB.net).
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
--- /dev/null
+# -*- coding: utf-8 -*-
+"""
+ pygments.lexers.bibtex
+ ~~~~~~~~~~~~~~~~~~~~~~
+
+ Lexers for BibTeX bibliography data and styles.
+
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
+ :license: BSD, see LICENSE for details.
+"""
+
+import re
+
+from pygments.lexer import RegexLexer, ExtendedRegexLexer, include, default, \
+ words
+from pygments.token import Name, Comment, String, Error, Number, Text, \
+ Keyword, Punctuation
+
+__all__ = ['BibTeXLexer', 'BSTLexer']
+
+
+class BibTeXLexer(ExtendedRegexLexer):
+ """
+ A lexer for BibTeX bibliography data format.
+
+ .. versionadded:: 2.2
+ """
+
+ name = 'BibTeX'
+ aliases = ['bib', 'bibtex']
+ filenames = ['*.bib']
+ mimetypes = ["text/x-bibtex"]
+ flags = re.IGNORECASE
+
+ ALLOWED_CHARS = r'@!$&*+\-./:;<>?\[\\\]^`|~'
+ IDENTIFIER = '[{0}][{1}]*'.format('a-z_' + ALLOWED_CHARS, r'\w' + ALLOWED_CHARS)
+
+ def open_brace_callback(self, match, ctx):
+ opening_brace = match.group()
+ ctx.opening_brace = opening_brace
+ yield match.start(), Punctuation, opening_brace
+ ctx.pos = match.end()
+
+ def close_brace_callback(self, match, ctx):
+ closing_brace = match.group()
+ if (
+ ctx.opening_brace == '{' and closing_brace != '}' or
+ ctx.opening_brace == '(' and closing_brace != ')'
+ ):
+ yield match.start(), Error, closing_brace
+ else:
+ yield match.start(), Punctuation, closing_brace
+ del ctx.opening_brace
+ ctx.pos = match.end()
+
+ tokens = {
+ 'root': [
+ include('whitespace'),
+ ('@comment', Comment),
+ ('@preamble', Name.Class, ('closing-brace', 'value', 'opening-brace')),
+ ('@string', Name.Class, ('closing-brace', 'field', 'opening-brace')),
+ ('@' + IDENTIFIER, Name.Class,
+ ('closing-brace', 'command-body', 'opening-brace')),
+ ('.+', Comment),
+ ],
+ 'opening-brace': [
+ include('whitespace'),
+ (r'[{(]', open_brace_callback, '#pop'),
+ ],
+ 'closing-brace': [
+ include('whitespace'),
+ (r'[})]', close_brace_callback, '#pop'),
+ ],
+ 'command-body': [
+ include('whitespace'),
+ (r'[^\s\,\}]+', Name.Label, ('#pop', 'fields')),
+ ],
+ 'fields': [
+ include('whitespace'),
+ (',', Punctuation, 'field'),
+ default('#pop'),
+ ],
+ 'field': [
+ include('whitespace'),
+ (IDENTIFIER, Name.Attribute, ('value', '=')),
+ default('#pop'),
+ ],
+ '=': [
+ include('whitespace'),
+ ('=', Punctuation, '#pop'),
+ ],
+ 'value': [
+ include('whitespace'),
+ (IDENTIFIER, Name.Variable),
+ ('"', String, 'quoted-string'),
+ (r'\{', String, 'braced-string'),
+ (r'[\d]+', Number),
+ ('#', Punctuation),
+ default('#pop'),
+ ],
+ 'quoted-string': [
+ (r'\{', String, 'braced-string'),
+ ('"', String, '#pop'),
+ ('[^\{\"]+', String),
+ ],
+ 'braced-string': [
+ (r'\{', String, '#push'),
+ (r'\}', String, '#pop'),
+ (r'[^\{\}]+', String),
+ ],
+ 'whitespace': [
+ (r'\s+', Text),
+ ],
+ }
+
+
+class BSTLexer(RegexLexer):
+ """
+ A lexer for BibTeX bibliography styles.
+
+ .. versionadded:: 2.2
+ """
+
+ name = 'BST'
+ aliases = ['bst', 'bst-pybtex']
+ filenames = ['*.bst']
+ flags = re.IGNORECASE | re.MULTILINE
+
+ tokens = {
+ 'root': [
+ include('whitespace'),
+ (words(['read', 'sort']), Keyword),
+ (words(['execute', 'integers', 'iterate', 'reverse', 'strings']),
+ Keyword, 'group'),
+ (words(['function', 'macro']), Keyword, ('group', 'group')),
+ (words(['entry']), Keyword, ('group', 'group', 'group')),
+ ],
+ 'group': [
+ include('whitespace'),
+ (r'\{', Punctuation, ('#pop', 'group-end', 'body')),
+ ],
+ 'group-end': [
+ include('whitespace'),
+ (r'\}', Punctuation, '#pop'),
+ ],
+ 'body': [
+ include('whitespace'),
+ (r"\'[^#\"\{\}\s]+", Name.Function),
+ (r'[^#\"\{\}\s]+\$', Name.Builtin),
+ (r'[^#\"\{\}\s]+', Name.Variable),
+ (r'"[^\"]*"', String),
+ (r'#-?\d+', Number),
+ (r'\{', Punctuation, ('group-end', 'body')),
+ default('#pop'),
+ ],
+ 'whitespace': [
+ (r'\s+', Text),
+ ('%.*?$', Comment.Single),
+ ],
+ }
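Several rules above push a whole tuple of states at once, e.g. `('closing-brace', 'value', 'opening-brace')`: the last element ends up on top of the state stack and is processed first, with each state popping itself when done. A simplified model of that stack behaviour (this is a sketch, not the real `pygments.lexer` machinery):

```python
# Simplified model of a RegexLexer state stack: pushing a tuple puts the
# last element on top, so 'opening-brace' is entered first, then 'value',
# then 'closing-brace' as each state pops itself with '#pop'.
stack = ['root']
stack.extend(('closing-brace', 'value', 'opening-brace'))
active = stack.pop()  # the state the lexer enters next
```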
Lexers for "business-oriented" languages.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
],
'core': [
# Figurative constants
- (r'(^|(?<=[^0-9a-z_\-]))(ALL\s+)?'
+ (r'(^|(?<=[^\w\-]))(ALL\s+)?'
r'((ZEROES)|(HIGH-VALUE|LOW-VALUE|QUOTE|SPACE|ZERO)(S)?)'
- r'\s*($|(?=[^0-9a-z_\-]))',
+ r'\s*($|(?=[^\w\-]))',
Name.Constant),
# Reserved words STATEMENTS and other bolds
'RETURN', 'REWRITE', 'SCREEN', 'SD', 'SEARCH', 'SECTION', 'SET',
'SORT', 'START', 'STOP', 'STRING', 'SUBTRACT', 'SUPPRESS',
'TERMINATE', 'THEN', 'UNLOCK', 'UNSTRING', 'USE', 'VALIDATE',
- 'WORKING-STORAGE', 'WRITE'), prefix=r'(^|(?<=[^0-9a-z_\-]))',
- suffix=r'\s*($|(?=[^0-9a-z_\-]))'),
+ 'WORKING-STORAGE', 'WRITE'), prefix=r'(^|(?<=[^\w\-]))',
+ suffix=r'\s*($|(?=[^\w\-]))'),
Keyword.Reserved),
# Reserved words
'ALPHABET', 'ALPHABETIC', 'ALPHABETIC-LOWER', 'ALPHABETIC-UPPER',
 'ALPHANUMERIC', 'ALPHANUMERIC-EDITED', 'ALSO', 'ALTER', 'ALTERNATE',
'ANY', 'ARE', 'AREA', 'AREAS', 'ARGUMENT-NUMBER', 'ARGUMENT-VALUE', 'AS',
- 'ASCENDING', 'ASSIGN', 'AT', 'AUTO', 'AUTO-SKIP', 'AUTOMATIC', 'AUTOTERMINATE',
- 'BACKGROUND-COLOR', 'BASED', 'BEEP', 'BEFORE', 'BELL',
+ 'ASCENDING', 'ASSIGN', 'AT', 'AUTO', 'AUTO-SKIP', 'AUTOMATIC',
+ 'AUTOTERMINATE', 'BACKGROUND-COLOR', 'BASED', 'BEEP', 'BEFORE', 'BELL',
'BLANK', 'BLINK', 'BLOCK', 'BOTTOM', 'BY', 'BYTE-LENGTH', 'CHAINING',
- 'CHARACTER', 'CHARACTERS', 'CLASS', 'CODE', 'CODE-SET', 'COL', 'COLLATING',
- 'COLS', 'COLUMN', 'COLUMNS', 'COMMA', 'COMMAND-LINE', 'COMMIT', 'COMMON',
- 'CONSTANT', 'CONTAINS', 'CONTENT', 'CONTROL',
+ 'CHARACTER', 'CHARACTERS', 'CLASS', 'CODE', 'CODE-SET', 'COL',
+ 'COLLATING', 'COLS', 'COLUMN', 'COLUMNS', 'COMMA', 'COMMAND-LINE',
+ 'COMMIT', 'COMMON', 'CONSTANT', 'CONTAINS', 'CONTENT', 'CONTROL',
'CONTROLS', 'CONVERTING', 'COPY', 'CORR', 'CORRESPONDING', 'COUNT', 'CRT',
- 'CURRENCY', 'CURSOR', 'CYCLE', 'DATE', 'DAY', 'DAY-OF-WEEK', 'DE', 'DEBUGGING',
- 'DECIMAL-POINT', 'DECLARATIVES', 'DEFAULT', 'DELIMITED',
+ 'CURRENCY', 'CURSOR', 'CYCLE', 'DATE', 'DAY', 'DAY-OF-WEEK', 'DE',
+ 'DEBUGGING', 'DECIMAL-POINT', 'DECLARATIVES', 'DEFAULT', 'DELIMITED',
'DELIMITER', 'DEPENDING', 'DESCENDING', 'DETAIL', 'DISK',
'DOWN', 'DUPLICATES', 'DYNAMIC', 'EBCDIC',
'ENTRY', 'ENVIRONMENT-NAME', 'ENVIRONMENT-VALUE', 'EOL', 'EOP',
'EOS', 'ERASE', 'ERROR', 'ESCAPE', 'EXCEPTION',
- 'EXCLUSIVE', 'EXTEND', 'EXTERNAL',
- 'FILE-ID', 'FILLER', 'FINAL', 'FIRST', 'FIXED', 'FLOAT-LONG', 'FLOAT-SHORT',
- 'FOOTING', 'FOR', 'FOREGROUND-COLOR', 'FORMAT', 'FROM', 'FULL', 'FUNCTION',
- 'FUNCTION-ID', 'GIVING', 'GLOBAL', 'GROUP',
+ 'EXCLUSIVE', 'EXTEND', 'EXTERNAL', 'FILE-ID', 'FILLER', 'FINAL',
+ 'FIRST', 'FIXED', 'FLOAT-LONG', 'FLOAT-SHORT',
+ 'FOOTING', 'FOR', 'FOREGROUND-COLOR', 'FORMAT', 'FROM', 'FULL',
+ 'FUNCTION', 'FUNCTION-ID', 'GIVING', 'GLOBAL', 'GROUP',
'HEADING', 'HIGHLIGHT', 'I-O', 'ID',
'IGNORE', 'IGNORING', 'IN', 'INDEX', 'INDEXED', 'INDICATE',
- 'INITIAL', 'INITIALIZED', 'INPUT',
- 'INTO', 'INTRINSIC', 'INVALID', 'IS', 'JUST', 'JUSTIFIED', 'KEY', 'LABEL',
+ 'INITIAL', 'INITIALIZED', 'INPUT', 'INTO', 'INTRINSIC', 'INVALID',
+ 'IS', 'JUST', 'JUSTIFIED', 'KEY', 'LABEL',
'LAST', 'LEADING', 'LEFT', 'LENGTH', 'LIMIT', 'LIMITS', 'LINAGE',
'LINAGE-COUNTER', 'LINE', 'LINES', 'LOCALE', 'LOCK',
- 'LOWLIGHT', 'MANUAL', 'MEMORY', 'MINUS', 'MODE',
- 'MULTIPLE', 'NATIONAL', 'NATIONAL-EDITED', 'NATIVE',
- 'NEGATIVE', 'NEXT', 'NO', 'NULL', 'NULLS', 'NUMBER', 'NUMBERS', 'NUMERIC',
- 'NUMERIC-EDITED', 'OBJECT-COMPUTER', 'OCCURS', 'OF', 'OFF', 'OMITTED', 'ON', 'ONLY',
+ 'LOWLIGHT', 'MANUAL', 'MEMORY', 'MINUS', 'MODE', 'MULTIPLE',
+ 'NATIONAL', 'NATIONAL-EDITED', 'NATIVE', 'NEGATIVE', 'NEXT', 'NO',
+ 'NULL', 'NULLS', 'NUMBER', 'NUMBERS', 'NUMERIC', 'NUMERIC-EDITED',
+ 'OBJECT-COMPUTER', 'OCCURS', 'OF', 'OFF', 'OMITTED', 'ON', 'ONLY',
'OPTIONAL', 'ORDER', 'ORGANIZATION', 'OTHER', 'OUTPUT', 'OVERFLOW',
'OVERLINE', 'PACKED-DECIMAL', 'PADDING', 'PAGE', 'PARAGRAPH',
'PLUS', 'POINTER', 'POSITION', 'POSITIVE', 'PRESENT', 'PREVIOUS',
'UNSIGNED-INT', 'UNSIGNED-LONG', 'UNSIGNED-SHORT', 'UNTIL', 'UP',
'UPDATE', 'UPON', 'USAGE', 'USING', 'VALUE', 'VALUES', 'VARYING',
'WAIT', 'WHEN', 'WITH', 'WORDS', 'YYYYDDD', 'YYYYMMDD'),
- prefix=r'(^|(?<=[^0-9a-z_\-]))', suffix=r'\s*($|(?=[^0-9a-z_\-]))'),
+ prefix=r'(^|(?<=[^\w\-]))', suffix=r'\s*($|(?=[^\w\-]))'),
Keyword.Pseudo),
# inactive reserved words
(words((
- 'ACTIVE-CLASS', 'ALIGNED', 'ANYCASE', 'ARITHMETIC', 'ATTRIBUTE', 'B-AND',
- 'B-NOT', 'B-OR', 'B-XOR', 'BIT', 'BOOLEAN', 'CD', 'CENTER', 'CF', 'CH', 'CHAIN', 'CLASS-ID',
- 'CLASSIFICATION', 'COMMUNICATION', 'CONDITION', 'DATA-POINTER',
- 'DESTINATION', 'DISABLE', 'EC', 'EGI', 'EMI', 'ENABLE', 'END-RECEIVE',
- 'ENTRY-CONVENTION', 'EO', 'ESI', 'EXCEPTION-OBJECT', 'EXPANDS', 'FACTORY',
- 'FLOAT-BINARY-16', 'FLOAT-BINARY-34', 'FLOAT-BINARY-7',
- 'FLOAT-DECIMAL-16', 'FLOAT-DECIMAL-34', 'FLOAT-EXTENDED', 'FORMAT',
- 'FUNCTION-POINTER', 'GET', 'GROUP-USAGE', 'IMPLEMENTS', 'INFINITY',
- 'INHERITS', 'INTERFACE', 'INTERFACE-ID', 'INVOKE', 'LC_ALL', 'LC_COLLATE',
+ 'ACTIVE-CLASS', 'ALIGNED', 'ANYCASE', 'ARITHMETIC', 'ATTRIBUTE',
+ 'B-AND', 'B-NOT', 'B-OR', 'B-XOR', 'BIT', 'BOOLEAN', 'CD', 'CENTER',
+ 'CF', 'CH', 'CHAIN', 'CLASS-ID', 'CLASSIFICATION', 'COMMUNICATION',
+ 'CONDITION', 'DATA-POINTER', 'DESTINATION', 'DISABLE', 'EC', 'EGI',
+ 'EMI', 'ENABLE', 'END-RECEIVE', 'ENTRY-CONVENTION', 'EO', 'ESI',
+ 'EXCEPTION-OBJECT', 'EXPANDS', 'FACTORY', 'FLOAT-BINARY-16',
+ 'FLOAT-BINARY-34', 'FLOAT-BINARY-7', 'FLOAT-DECIMAL-16',
+ 'FLOAT-DECIMAL-34', 'FLOAT-EXTENDED', 'FORMAT', 'FUNCTION-POINTER',
+ 'GET', 'GROUP-USAGE', 'IMPLEMENTS', 'INFINITY', 'INHERITS',
+ 'INTERFACE', 'INTERFACE-ID', 'INVOKE', 'LC_ALL', 'LC_COLLATE',
'LC_CTYPE', 'LC_MESSAGES', 'LC_MONETARY', 'LC_NUMERIC', 'LC_TIME',
- 'LINE-COUNTER', 'MESSAGE', 'METHOD', 'METHOD-ID', 'NESTED', 'NONE', 'NORMAL',
- 'OBJECT', 'OBJECT-REFERENCE', 'OPTIONS', 'OVERRIDE', 'PAGE-COUNTER', 'PF', 'PH',
- 'PROPERTY', 'PROTOTYPE', 'PURGE', 'QUEUE', 'RAISE', 'RAISING', 'RECEIVE',
- 'RELATION', 'REPLACE', 'REPRESENTS-NOT-A-NUMBER', 'RESET', 'RESUME', 'RETRY',
- 'RF', 'RH', 'SECONDS', 'SEGMENT', 'SELF', 'SEND', 'SOURCES', 'STATEMENT', 'STEP',
- 'STRONG', 'SUB-QUEUE-1', 'SUB-QUEUE-2', 'SUB-QUEUE-3', 'SUPER', 'SYMBOL',
- 'SYSTEM-DEFAULT', 'TABLE', 'TERMINAL', 'TEXT', 'TYPEDEF', 'UCS-4', 'UNIVERSAL',
- 'USER-DEFAULT', 'UTF-16', 'UTF-8', 'VAL-STATUS', 'VALID', 'VALIDATE',
- 'VALIDATE-STATUS'),
- prefix=r'(^|(?<=[^0-9a-z_\-]))', suffix=r'\s*($|(?=[^0-9a-z_\-]))'),
+ 'LINE-COUNTER', 'MESSAGE', 'METHOD', 'METHOD-ID', 'NESTED', 'NONE',
+ 'NORMAL', 'OBJECT', 'OBJECT-REFERENCE', 'OPTIONS', 'OVERRIDE',
+ 'PAGE-COUNTER', 'PF', 'PH', 'PROPERTY', 'PROTOTYPE', 'PURGE',
+ 'QUEUE', 'RAISE', 'RAISING', 'RECEIVE', 'RELATION', 'REPLACE',
+ 'REPRESENTS-NOT-A-NUMBER', 'RESET', 'RESUME', 'RETRY', 'RF', 'RH',
+ 'SECONDS', 'SEGMENT', 'SELF', 'SEND', 'SOURCES', 'STATEMENT',
+ 'STEP', 'STRONG', 'SUB-QUEUE-1', 'SUB-QUEUE-2', 'SUB-QUEUE-3',
+ 'SUPER', 'SYMBOL', 'SYSTEM-DEFAULT', 'TABLE', 'TERMINAL', 'TEXT',
+ 'TYPEDEF', 'UCS-4', 'UNIVERSAL', 'USER-DEFAULT', 'UTF-16', 'UTF-8',
+ 'VAL-STATUS', 'VALID', 'VALIDATE', 'VALIDATE-STATUS'),
+ prefix=r'(^|(?<=[^\w\-]))', suffix=r'\s*($|(?=[^\w\-]))'),
Error),
# Data Types
- (r'(^|(?<=[^0-9a-z_\-]))'
+ (r'(^|(?<=[^\w\-]))'
r'(PIC\s+.+?(?=(\s|\.\s))|PICTURE\s+.+?(?=(\s|\.\s))|'
r'(COMPUTATIONAL)(-[1-5X])?|(COMP)(-[1-5X])?|'
r'BINARY-C-LONG|'
r'BINARY-CHAR|BINARY-DOUBLE|BINARY-LONG|BINARY-SHORT|'
- r'BINARY)\s*($|(?=[^0-9a-z_\-]))', Keyword.Type),
+ r'BINARY)\s*($|(?=[^\w\-]))', Keyword.Type),
# Operators
(r'(\*\*|\*|\+|-|/|<=|>=|<|>|==|/=|=)', Operator),
(r'([(),;:&%.])', Punctuation),
# Intrinsics
- (r'(^|(?<=[^0-9a-z_\-]))(ABS|ACOS|ANNUITY|ASIN|ATAN|BYTE-LENGTH|'
+ (r'(^|(?<=[^\w\-]))(ABS|ACOS|ANNUITY|ASIN|ATAN|BYTE-LENGTH|'
r'CHAR|COMBINED-DATETIME|CONCATENATE|COS|CURRENT-DATE|'
r'DATE-OF-INTEGER|DATE-TO-YYYYMMDD|DAY-OF-INTEGER|DAY-TO-YYYYDDD|'
r'EXCEPTION-(?:FILE|LOCATION|STATEMENT|STATUS)|EXP10|EXP|E|'
r'STANDARD-DEVIATION|STORED-CHAR-LENGTH|SUBSTITUTE(?:-CASE)?|'
r'SUM|TAN|TEST-DATE-YYYYMMDD|TEST-DAY-YYYYDDD|TRIM|'
r'UPPER-CASE|VARIANCE|WHEN-COMPILED|YEAR-TO-YYYY)\s*'
- r'($|(?=[^0-9a-z_\-]))', Name.Function),
+ r'($|(?=[^\w\-]))', Name.Function),
# Booleans
- (r'(^|(?<=[^0-9a-z_\-]))(true|false)\s*($|(?=[^0-9a-z_\-]))', Name.Builtin),
+ (r'(^|(?<=[^\w\-]))(true|false)\s*($|(?=[^\w\-]))', Name.Builtin),
# Comparing Operators
- (r'(^|(?<=[^0-9a-z_\-]))(equal|equals|ne|lt|le|gt|ge|'
- r'greater|less|than|not|and|or)\s*($|(?=[^0-9a-z_\-]))', Operator.Word),
+ (r'(^|(?<=[^\w\-]))(equal|equals|ne|lt|le|gt|ge|'
+ r'greater|less|than|not|and|or)\s*($|(?=[^\w\-]))', Operator.Word),
],
# \"[^\"\n]*\"|\'[^\'\n]*\'
(r'\s+', Text),
(r'^\*.*$', Comment.Single),
(r'\".*?\n', Comment.Single),
+ (r'##\w+', Comment.Special),
],
'variable-names': [
(r'<\S+>', Name.Variable),
'root': [
include('common'),
# function calls
- (r'(CALL\s+(?:BADI|CUSTOMER-FUNCTION|FUNCTION))(\s+)(\'?\S+\'?)',
- bygroups(Keyword, Text, Name.Function)),
+ (r'CALL\s+(?:BADI|CUSTOMER-FUNCTION|FUNCTION)',
+ Keyword),
(r'(CALL\s+(?:DIALOG|SCREEN|SUBSCREEN|SELECTION-SCREEN|'
r'TRANSACTION|TRANSFORMATION))\b',
Keyword),
# call methodnames returning style
(r'(?<=(=|-)>)([\w\-~]+)(?=\()', Name.Function),
+ # text elements
+ (r'(TEXT)(-)(\d{3})',
+ bygroups(Keyword, Punctuation, Number.Integer)),
+ (r'(TEXT)(-)(\w{3})',
+ bygroups(Keyword, Punctuation, Name.Variable)),
+
# keywords with dashes in them.
# these need to be first, because for instance the -ID part
# of MESSAGE-ID wouldn't get highlighted if MESSAGE was
r'OUTPUT-LENGTH|PRINT-CONTROL|'
r'SELECT-OPTIONS|START-OF-SELECTION|SUBTRACT-CORRESPONDING|'
r'SYNTAX-CHECK|SYSTEM-EXCEPTIONS|'
- r'TYPE-POOL|TYPE-POOLS'
+ r'TYPE-POOL|TYPE-POOLS|NO-DISPLAY'
r')\b', Keyword),
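As the comment above notes, dashed keywords have to be listed before their prefixes: regex alternation takes the first matching branch, so a rule that tries `MESSAGE` before `MESSAGE-ID` stops at the prefix and leaves `-ID` behind. A quick standalone illustration (both patterns are throwaway examples, not the lexer's actual rules):

```python
import re

# First alternative wins: with the short keyword first, only the prefix of
# 'MESSAGE-ID' is consumed and '-ID' would be tokenized separately.
prefix_first = re.compile(r'\b(MESSAGE|MESSAGE-ID)\b')
dashed_first = re.compile(r'\b(MESSAGE-ID|MESSAGE)\b')
```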
 # keyword combinations
- (r'CREATE\s+(PUBLIC|PRIVATE|DATA|OBJECT)|'
- r'((PUBLIC|PRIVATE|PROTECTED)\s+SECTION|'
- r'(TYPE|LIKE)(\s+(LINE\s+OF|REF\s+TO|'
+ (r'(?<![-\>])(CREATE\s+(PUBLIC|PRIVATE|DATA|OBJECT)|'
+ r'(PUBLIC|PRIVATE|PROTECTED)\s+SECTION|'
+ r'(TYPE|LIKE)\s+((LINE\s+OF|REF\s+TO|'
r'(SORTED|STANDARD|HASHED)\s+TABLE\s+OF))?|'
r'FROM\s+(DATABASE|MEMORY)|CALL\s+METHOD|'
r'(GROUP|ORDER) BY|HAVING|SEPARATED BY|'
r'(BEGIN|END)\s+OF|'
r'DELETE(\s+ADJACENT\s+DUPLICATES\sFROM)?|'
r'COMPARING(\s+ALL\s+FIELDS)?|'
- r'INSERT(\s+INITIAL\s+LINE\s+INTO|\s+LINES\s+OF)?|'
+ r'(INSERT|APPEND)(\s+INITIAL\s+LINE\s+(IN)?TO|\s+LINES\s+OF)?|'
r'IN\s+((BYTE|CHARACTER)\s+MODE|PROGRAM)|'
r'END-OF-(DEFINITION|PAGE|SELECTION)|'
 r'WITH\s+FRAME(\s+TITLE)?|'
+ r'(REPLACE|FIND)\s+((FIRST|ALL)\s+OCCURRENCES?\s+OF\s+)?(SUBSTRING|REGEX)?|'
+ r'MATCH\s+(LENGTH|COUNT|LINE|OFFSET)|'
+ r'(RESPECTING|IGNORING)\s+CASE|'
+ r'IN\s+UPDATE\s+TASK|'
+ r'(SOURCE|RESULT)\s+(XML)?|'
+ r'REFERENCE\s+INTO|'
 # simple combinations
r'AND\s+(MARK|RETURN)|CLIENT\s+SPECIFIED|CORRESPONDING\s+FIELDS\s+OF|'
r'MODIFY\s+SCREEN|NESTING\s+LEVEL|NO\s+INTERVALS|OF\s+STRUCTURE|'
r'RADIOBUTTON\s+GROUP|RANGE\s+OF|REF\s+TO|SUPPRESS DIALOG|'
r'TABLE\s+OF|UPPER\s+CASE|TRANSPORTING\s+NO\s+FIELDS|'
- r'VALUE\s+CHECK|VISIBLE\s+LENGTH|HEADER\s+LINE)\b', Keyword),
+ r'VALUE\s+CHECK|VISIBLE\s+LENGTH|HEADER\s+LINE|COMMON\s+PART)\b', Keyword),
# single word keywords.
- (r'(^|(?<=(\s|\.)))(ABBREVIATED|ADD|ALIASES|APPEND|ASSERT|'
- r'ASSIGN(ING)?|AT(\s+FIRST)?|'
+ (r'(^|(?<=(\s|\.)))(ABBREVIATED|ABSTRACT|ADD|ALIASES|ALIGN|ALPHA|'
+ r'ASSERT|AS|ASSIGN(ING)?|AT(\s+FIRST)?|'
r'BACK|BLOCK|BREAK-POINT|'
r'CASE|CATCH|CHANGING|CHECK|CLASS|CLEAR|COLLECT|COLOR|COMMIT|'
r'CREATE|COMMUNICATION|COMPONENTS?|COMPUTE|CONCATENATE|CONDENSE|'
- r'CONSTANTS|CONTEXTS|CONTINUE|CONTROLS|'
- r'DATA|DECIMALS|DEFAULT|DEFINE|DEFINITION|DEFERRED|DEMAND|'
- r'DETAIL|DIRECTORY|DIVIDE|DO|'
- r'ELSE(IF)?|ENDAT|ENDCASE|ENDCLASS|ENDDO|ENDFORM|ENDFUNCTION|'
- r'ENDIF|ENDLOOP|ENDMETHOD|ENDMODULE|ENDSELECT|ENDTRY|'
- r'ENHANCEMENT|EVENTS|EXCEPTIONS|EXIT|EXPORT|EXPORTING|EXTRACT|'
- r'FETCH|FIELDS?|FIND|FOR|FORM|FORMAT|FREE|FROM|'
+ r'CONSTANTS|CONTEXTS|CONTINUE|CONTROLS|COUNTRY|CURRENCY|'
+ r'DATA|DATE|DECIMALS|DEFAULT|DEFINE|DEFINITION|DEFERRED|DEMAND|'
+ r'DETAIL|DIRECTORY|DIVIDE|DO|DUMMY|'
+ r'ELSE(IF)?|ENDAT|ENDCASE|ENDCATCH|ENDCLASS|ENDDO|ENDFORM|ENDFUNCTION|'
+ r'ENDIF|ENDINTERFACE|ENDLOOP|ENDMETHOD|ENDMODULE|ENDSELECT|ENDTRY|ENDWHILE|'
+ r'ENHANCEMENT|EVENTS|EXACT|EXCEPTIONS?|EXIT|EXPONENT|EXPORT|EXPORTING|EXTRACT|'
+ r'FETCH|FIELDS?|FOR|FORM|FORMAT|FREE|FROM|FUNCTION|'
r'HIDE|'
r'ID|IF|IMPORT|IMPLEMENTATION|IMPORTING|IN|INCLUDE|INCLUDING|'
r'INDEX|INFOTYPES|INITIALIZATION|INTERFACE|INTERFACES|INTO|'
- r'LENGTH|LINES|LOAD|LOCAL|'
+ r'LANGUAGE|LEAVE|LENGTH|LINES|LOAD|LOCAL|'
r'JOIN|'
r'KEY|'
- r'MAXIMUM|MESSAGE|METHOD[S]?|MINIMUM|MODULE|MODIFY|MOVE|MULTIPLY|'
- r'NODES|'
- r'OBLIGATORY|OF|OFF|ON|OVERLAY|'
- r'PACK|PARAMETERS|PERCENTAGE|POSITION|PROGRAM|PROVIDE|PUBLIC|PUT|'
- r'RAISE|RAISING|RANGES|READ|RECEIVE|REFRESH|REJECT|REPORT|RESERVE|'
- r'RESUME|RETRY|RETURN|RETURNING|RIGHT|ROLLBACK|'
- r'SCROLL|SEARCH|SELECT|SHIFT|SINGLE|SKIP|SORT|SPLIT|STATICS|STOP|'
- r'SUBMIT|SUBTRACT|SUM|SUMMARY|SUMMING|SUPPLY|'
- r'TABLE|TABLES|TIMES|TITLE|TO|TOP-OF-PAGE|TRANSFER|TRANSLATE|TRY|TYPES|'
+ r'NEXT|'
+ r'MAXIMUM|MESSAGE|METHOD[S]?|MINIMUM|MODULE|MODIFIER|MODIFY|MOVE|MULTIPLY|'
+ r'NODES|NUMBER|'
+ r'OBLIGATORY|OBJECT|OF|OFF|ON|OTHERS|OVERLAY|'
+ r'PACK|PAD|PARAMETERS|PERCENTAGE|POSITION|PROGRAM|PROVIDE|PUBLIC|PUT|PF\d\d|'
+ r'RAISE|RAISING|RANGES?|READ|RECEIVE|REDEFINITION|REFRESH|REJECT|REPORT|RESERVE|'
+ r'RESUME|RETRY|RETURN|RETURNING|RIGHT|ROLLBACK|REPLACE|'
+ r'SCROLL|SEARCH|SELECT|SHIFT|SIGN|SINGLE|SIZE|SKIP|SORT|SPLIT|STATICS|STOP|'
+ r'STYLE|SUBMATCHES|SUBMIT|SUBTRACT|SUM(?!\()|SUMMARY|SUMMING|SUPPLY|'
+ r'TABLE|TABLES|TIMESTAMP|TIMES?|TIMEZONE|TITLE|\??TO|'
+ r'TOP-OF-PAGE|TRANSFER|TRANSLATE|TRY|TYPES|'
r'ULINE|UNDER|UNPACK|UPDATE|USING|'
- r'VALUE|VALUES|VIA|'
- r'WAIT|WHEN|WHERE|WHILE|WITH|WINDOW|WRITE)\b', Keyword),
+ r'VALUE|VALUES|VIA|VARYING|VARY|'
+ r'WAIT|WHEN|WHERE|WIDTH|WHILE|WITH|WINDOW|WRITE|XSD|ZERO)\b', Keyword),
# builtins
(r'(abs|acos|asin|atan|'
# operators which look like variable names before
# parsing variable names.
- (r'(?<=(\s|.))(AND|EQ|NE|GT|LT|GE|LE|CO|CN|CA|NA|CS|NOT|NS|CP|NP|'
+ (r'(?<=(\s|.))(AND|OR|EQ|NE|GT|LT|GE|LE|CO|CN|CA|NA|CS|NOT|NS|CP|NP|'
r'BYTE-CO|BYTE-CN|BYTE-CA|BYTE-NA|BYTE-CS|BYTE-NS|'
- r'IS\s+(NOT\s+)?(INITIAL|ASSIGNED|REQUESTED|BOUND))\b', Operator),
+ r'IS\s+(NOT\s+)?(INITIAL|ASSIGNED|REQUESTED|BOUND))\b', Operator.Word),
include('variable-names'),
- # standard oparators after variable names,
+ # standard operators after variable names,
# because < and > are part of field symbols.
- (r'[?*<>=\-+]', Operator),
+ (r'[?*<>=\-+&]', Operator),
(r"'(''|[^'])*'", String.Single),
(r"`([^`])*`", String.Single),
- (r'[/;:()\[\],.]', Punctuation)
+ (r"([|}])([^{}|]*?)([|{])",
+ bygroups(Punctuation, String.Single, Punctuation)),
+ (r'[/;:()\[\],.]', Punctuation),
+ (r'(!)(\w+)', bygroups(Operator, Name)),
],
}
filenames = ['*.p', '*.cls']
mimetypes = ['text/x-openedge', 'application/x-openedge']
- types = (r'(?i)(^|(?<=[^0-9a-z_\-]))(CHARACTER|CHAR|CHARA|CHARAC|CHARACT|CHARACTE|'
+ types = (r'(?i)(^|(?<=[^\w\-]))(CHARACTER|CHAR|CHARA|CHARAC|CHARACT|CHARACTE|'
r'COM-HANDLE|DATE|DATETIME|DATETIME-TZ|'
r'DECIMAL|DEC|DECI|DECIM|DECIMA|HANDLE|'
r'INT64|INTEGER|INT|INTE|INTEG|INTEGE|'
- r'LOGICAL|LONGCHAR|MEMPTR|RAW|RECID|ROWID)\s*($|(?=[^0-9a-z_\-]))')
+ r'LOGICAL|LONGCHAR|MEMPTR|RAW|RECID|ROWID)\s*($|(?=[^\w\-]))')
keywords = words(OPENEDGEKEYWORDS,
- prefix=r'(?i)(^|(?<=[^0-9a-z_\-]))',
- suffix=r'\s*($|(?=[^0-9a-z_\-]))')
+ prefix=r'(?i)(^|(?<=[^\w\-]))',
+ suffix=r'\s*($|(?=[^\w\-]))')
tokens = {
'root': [
Lexers for C/C++ languages.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
(r'\n', Text),
(r'\s+', Text),
(r'\\\n', Text), # line continuation
- (r'//(\n|(.|\n)*?[^\\]\n)', Comment.Single),
- (r'/(\\\n)?[*](.|\n)*?[*](\\\n)?/', Comment.Multiline),
+ (r'//(\n|[\w\W]*?[^\\]\n)', Comment.Single),
+ (r'/(\\\n)?[*][\w\W]*?[*](\\\n)?/', Comment.Multiline),
+ # Open until EOF, so no closing delimiter
+ (r'/(\\\n)?[*][\w\W]*', Comment.Multiline),
],
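The third rule added above matches a `/*` that is never closed, so an unterminated comment at end of file is still highlighted as a comment instead of falling through as an error. The three patterns can be exercised directly with `re` (standalone sketch of the rules, outside the lexer):

```python
import re

single = re.compile(r'//(\n|[\w\W]*?[^\\]\n)')          # // to end of line
multi = re.compile(r'/(\\\n)?[*][\w\W]*?[*](\\\n)?/')   # /* ... */
open_ended = re.compile(r'/(\\\n)?[*][\w\W]*')          # /* with no close
```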
'statements': [
- (r'L?"', String, 'string'),
- (r"L?'(\\.|\\[0-7]{1,3}|\\x[a-fA-F0-9]{1,2}|[^\\\'\n])'", String.Char),
+ (r'(L?)(")', bygroups(String.Affix, String), 'string'),
+ (r"(L?)(')(\\.|\\[0-7]{1,3}|\\x[a-fA-F0-9]{1,2}|[^\\\'\n])(')",
+ bygroups(String.Affix, String.Char, String.Char, String.Char)),
(r'(\d+\.\d*|\.\d+|\d+)[eE][+-]?\d+[LlUu]*', Number.Float),
(r'(\d+\.\d*|\.\d+|\d+[fF])[fF]?', Number.Float),
(r'0x[0-9a-fA-F]+[LlUu]*', Number.Hex),
(r'\*/', Error),
(r'[~!%^&*+=|?:<>/-]', Operator),
(r'[()\[\],.]', Punctuation),
- (words(('auto', 'break', 'case', 'const', 'continue', 'default', 'do',
- 'else', 'enum', 'extern', 'for', 'goto', 'if', 'register',
- 'restricted', 'return', 'sizeof', 'static', 'struct',
- 'switch', 'typedef', 'union', 'volatile', 'while'),
+ (words(('asm', 'auto', 'break', 'case', 'const', 'continue',
+ 'default', 'do', 'else', 'enum', 'extern', 'for', 'goto',
+ 'if', 'register', 'restricted', 'return', 'sizeof',
+ 'static', 'struct', 'switch', 'typedef', 'union',
+ 'volatile', 'while'),
suffix=r'\b'), Keyword),
(r'(bool|int|long|float|short|double|char|unsigned|signed|void)\b',
Keyword.Type),
(r'\\', String), # stray backslash
],
'macro': [
- (r'(include)(' + _ws1 + ')([^\n]+)', bygroups(Comment.Preproc, Text, Comment.PreprocFile)),
+ (r'(include)(' + _ws1 + r')([^\n]+)',
+ bygroups(Comment.Preproc, Text, Comment.PreprocFile)),
(r'[^/\n]+', Comment.Preproc),
(r'/[*](.|\n)*?[*]/', Comment.Multiline),
(r'//.*?\n', Comment.Single, '#pop'),
tokens = {
'statements': [
(words((
- 'asm', 'catch', 'const_cast', 'delete', 'dynamic_cast', 'explicit',
+ 'catch', 'const_cast', 'delete', 'dynamic_cast', 'explicit',
'export', 'friend', 'mutable', 'namespace', 'new', 'operator',
'private', 'protected', 'public', 'reinterpret_cast',
'restrict', 'static_cast', 'template', 'this', 'throw', 'throws',
(r'char(16_t|32_t)\b', Keyword.Type),
(r'(class)(\s+)', bygroups(Keyword, Text), 'classname'),
# C++11 raw strings
- (r'R"\(', String, 'rawstring'),
+ (r'(R)(")([^\\()\s]{,16})(\()((?:.|\n)*?)(\)\3)(")',
+ bygroups(String.Affix, String, String.Delimiter, String.Delimiter,
+ String, String.Delimiter, String)),
+ # C++11 UTF-8/16/32 strings
+ (r'(u8|u|U)(")', bygroups(String.Affix, String), 'string'),
inherit,
],
'root': [
# template specification
(r'\s*(?=>)', Text, '#pop'),
],
- 'rawstring': [
- (r'\)"', String, '#pop'),
- (r'[^)]+', String),
- (r'\)', String),
- ],
}
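The replacement raw-string rule matches the whole literal in one regex by capturing the delimiter in group 3 and requiring the identical delimiter again via the `\3` backreference — something the removed `rawstring` state could not do for custom delimiters like `R"xx(...)xx"`. The pattern can be tried standalone:

```python
import re

# Same pattern as the lexer rule: group 3 captures the optional delimiter
# and ')\3' demands the identical delimiter before the closing quote.
raw_string = re.compile(r'(R)(")([^\\()\s]{,16})(\()((?:.|\n)*?)(\)\3)(")')
m = raw_string.match('R"xx(a)b)xx"')
```

With a custom delimiter, an embedded `)` no longer terminates the literal: group 5 is the body `a)b`.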
def analyse_text(text):
Lexers for other C-like languages.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
filenames = ['*.ino']
mimetypes = ['text/x-arduino']
- # Language constants
- constants = set(('DIGITAL_MESSAGE', 'FIRMATA_STRING', 'ANALOG_MESSAGE',
- 'REPORT_DIGITAL', 'REPORT_ANALOG', 'INPUT_PULLUP',
- 'SET_PIN_MODE', 'INTERNAL2V56', 'SYSTEM_RESET', 'LED_BUILTIN',
- 'INTERNAL1V1', 'SYSEX_START', 'INTERNAL', 'EXTERNAL',
- 'DEFAULT', 'OUTPUT', 'INPUT', 'HIGH', 'LOW'))
-
# Language sketch main structure functions
structure = set(('setup', 'loop'))
- # Language variable types
- storage = set(('boolean', 'const', 'byte', 'word', 'string', 'String', 'array'))
+ # Language operators
+ operators = set(('not', 'or', 'and', 'xor'))
+
+ # Language 'variables'
+ variables = set((
+ 'DIGITAL_MESSAGE', 'FIRMATA_STRING', 'ANALOG_MESSAGE', 'REPORT_DIGITAL',
+ 'REPORT_ANALOG', 'INPUT_PULLUP', 'SET_PIN_MODE', 'INTERNAL2V56', 'SYSTEM_RESET',
+ 'LED_BUILTIN', 'INTERNAL1V1', 'SYSEX_START', 'INTERNAL', 'EXTERNAL', 'HIGH',
+ 'LOW', 'INPUT', 'OUTPUT', 'INPUT_PULLUP', 'LED_BUILTIN', 'true', 'false',
+ 'void', 'boolean', 'char', 'unsigned char', 'byte', 'int', 'unsigned int',
+ 'word', 'long', 'unsigned long', 'short', 'float', 'double', 'string', 'String',
+ 'array', 'static', 'volatile', 'const', 'boolean', 'byte', 'word', 'string',
+ 'String', 'array', 'int', 'float', 'private', 'char', 'virtual', 'operator',
+ 'sizeof', 'uint8_t', 'uint16_t', 'uint32_t', 'uint64_t', 'int8_t', 'int16_t',
+ 'int32_t', 'int64_t', 'dynamic_cast', 'typedef', 'const_cast', 'const',
+ 'struct', 'static_cast', 'union', 'unsigned', 'long', 'volatile', 'static',
+ 'protected', 'bool', 'public', 'friend', 'auto', 'void', 'enum', 'extern',
+ 'class', 'short', 'reinterpret_cast', 'double', 'register', 'explicit',
+ 'signed', 'inline', 'delete', '_Bool', 'complex', '_Complex', '_Imaginary',
+ 'atomic_bool', 'atomic_char', 'atomic_schar', 'atomic_uchar', 'atomic_short',
+ 'atomic_ushort', 'atomic_int', 'atomic_uint', 'atomic_long', 'atomic_ulong',
+ 'atomic_llong', 'atomic_ullong', 'PROGMEM'))
 # Functions and classes shipped with the language
- functions = set(('KeyboardController', 'MouseController', 'SoftwareSerial',
- 'EthernetServer', 'EthernetClient', 'LiquidCrystal',
- 'RobotControl', 'GSMVoiceCall', 'EthernetUDP', 'EsploraTFT',
- 'HttpClient', 'RobotMotor', 'WiFiClient', 'GSMScanner',
- 'FileSystem', 'Scheduler', 'GSMServer', 'YunClient', 'YunServer',
- 'IPAddress', 'GSMClient', 'GSMModem', 'Keyboard', 'Ethernet',
- 'Console', 'GSMBand', 'Esplora', 'Stepper', 'Process',
- 'WiFiUDP', 'GSM_SMS', 'Mailbox', 'USBHost', 'Firmata', 'PImage',
- 'Client', 'Server', 'GSMPIN', 'FileIO', 'Bridge', 'Serial',
- 'EEPROM', 'Stream', 'Mouse', 'Audio', 'Servo', 'File', 'Task',
- 'GPRS', 'WiFi', 'Wire', 'TFT', 'GSM', 'SPI', 'SD',
- 'runShellCommandAsynchronously', 'analogWriteResolution',
- 'retrieveCallingNumber', 'printFirmwareVersion',
- 'analogReadResolution', 'sendDigitalPortPair',
- 'noListenOnLocalhost', 'readJoystickButton', 'setFirmwareVersion',
- 'readJoystickSwitch', 'scrollDisplayRight', 'getVoiceCallStatus',
- 'scrollDisplayLeft', 'writeMicroseconds', 'delayMicroseconds',
- 'beginTransmission', 'getSignalStrength', 'runAsynchronously',
- 'getAsynchronously', 'listenOnLocalhost', 'getCurrentCarrier',
- 'readAccelerometer', 'messageAvailable', 'sendDigitalPorts',
- 'lineFollowConfig', 'countryNameWrite', 'runShellCommand',
- 'readStringUntil', 'rewindDirectory', 'readTemperature',
- 'setClockDivider', 'readLightSensor', 'endTransmission',
- 'analogReference', 'detachInterrupt', 'countryNameRead',
- 'attachInterrupt', 'encryptionType', 'readBytesUntil',
- 'robotNameWrite', 'readMicrophone', 'robotNameRead', 'cityNameWrite',
- 'userNameWrite', 'readJoystickY', 'readJoystickX', 'mouseReleased',
- 'openNextFile', 'scanNetworks', 'noInterrupts', 'digitalWrite',
- 'beginSpeaker', 'mousePressed', 'isActionDone', 'mouseDragged',
- 'displayLogos', 'noAutoscroll', 'addParameter', 'remoteNumber',
- 'getModifiers', 'keyboardRead', 'userNameRead', 'waitContinue',
- 'processInput', 'parseCommand', 'printVersion', 'readNetworks',
- 'writeMessage', 'blinkVersion', 'cityNameRead', 'readMessage',
- 'setDataMode', 'parsePacket', 'isListening', 'setBitOrder',
- 'beginPacket', 'isDirectory', 'motorsWrite', 'drawCompass',
- 'digitalRead', 'clearScreen', 'serialEvent', 'rightToLeft',
- 'setTextSize', 'leftToRight', 'requestFrom', 'keyReleased',
- 'compassRead', 'analogWrite', 'interrupts', 'WiFiServer',
- 'disconnect', 'playMelody', 'parseFloat', 'autoscroll',
- 'getPINUsed', 'setPINUsed', 'setTimeout', 'sendAnalog',
- 'readSlider', 'analogRead', 'beginWrite', 'createChar',
- 'motorsStop', 'keyPressed', 'tempoWrite', 'readButton',
- 'subnetMask', 'debugPrint', 'macAddress', 'writeGreen',
- 'randomSeed', 'attachGPRS', 'readString', 'sendString',
- 'remotePort', 'releaseAll', 'mouseMoved', 'background',
- 'getXChange', 'getYChange', 'answerCall', 'getResult',
- 'voiceCall', 'endPacket', 'constrain', 'getSocket', 'writeJSON',
- 'getButton', 'available', 'connected', 'findUntil', 'readBytes',
- 'exitValue', 'readGreen', 'writeBlue', 'startLoop', 'IPAddress',
- 'isPressed', 'sendSysex', 'pauseMode', 'gatewayIP', 'setCursor',
- 'getOemKey', 'tuneWrite', 'noDisplay', 'loadImage', 'switchPIN',
- 'onRequest', 'onReceive', 'changePIN', 'playFile', 'noBuffer',
- 'parseInt', 'overflow', 'checkPIN', 'knobRead', 'beginTFT',
- 'bitClear', 'updateIR', 'bitWrite', 'position', 'writeRGB',
- 'highByte', 'writeRed', 'setSpeed', 'readBlue', 'noStroke',
- 'remoteIP', 'transfer', 'shutdown', 'hangCall', 'beginSMS',
- 'endWrite', 'attached', 'maintain', 'noCursor', 'checkReg',
- 'checkPUK', 'shiftOut', 'isValid', 'shiftIn', 'pulseIn',
- 'connect', 'println', 'localIP', 'pinMode', 'getIMEI',
- 'display', 'noBlink', 'process', 'getBand', 'running', 'beginSD',
- 'drawBMP', 'lowByte', 'setBand', 'release', 'bitRead', 'prepare',
- 'pointTo', 'readRed', 'setMode', 'noFill', 'remove', 'listen',
- 'stroke', 'detach', 'attach', 'noTone', 'exists', 'buffer',
- 'height', 'bitSet', 'circle', 'config', 'cursor', 'random',
- 'IRread', 'sizeof', 'setDNS', 'endSMS', 'getKey', 'micros',
- 'millis', 'begin', 'print', 'write', 'ready', 'flush', 'width',
- 'isPIN', 'blink', 'clear', 'press', 'mkdir', 'rmdir', 'close',
- 'point', 'yield', 'image', 'float', 'BSSID', 'click', 'delay',
- 'read', 'text', 'move', 'peek', 'beep', 'rect', 'line', 'open',
- 'seek', 'fill', 'size', 'turn', 'stop', 'home', 'find', 'char',
- 'byte', 'step', 'word', 'long', 'tone', 'sqrt', 'RSSI', 'SSID',
- 'end', 'bit', 'tan', 'cos', 'sin', 'pow', 'map', 'abs', 'max',
- 'min', 'int', 'get', 'run', 'put'))
-
+ functions = set((
+ 'KeyboardController', 'MouseController', 'SoftwareSerial', 'EthernetServer',
+ 'EthernetClient', 'LiquidCrystal', 'RobotControl', 'GSMVoiceCall',
+ 'EthernetUDP', 'EsploraTFT', 'HttpClient', 'RobotMotor', 'WiFiClient',
+ 'GSMScanner', 'FileSystem', 'Scheduler', 'GSMServer', 'YunClient', 'YunServer',
+ 'IPAddress', 'GSMClient', 'GSMModem', 'Keyboard', 'Ethernet', 'Console',
+ 'GSMBand', 'Esplora', 'Stepper', 'Process', 'WiFiUDP', 'GSM_SMS', 'Mailbox',
+ 'USBHost', 'Firmata', 'PImage', 'Client', 'Server', 'GSMPIN', 'FileIO',
+ 'Bridge', 'Serial', 'EEPROM', 'Stream', 'Mouse', 'Audio', 'Servo', 'File',
+ 'Task', 'GPRS', 'WiFi', 'Wire', 'TFT', 'GSM', 'SPI', 'SD',
+ 'runShellCommandAsynchronously', 'analogWriteResolution',
+ 'retrieveCallingNumber', 'printFirmwareVersion', 'analogReadResolution',
+ 'sendDigitalPortPair', 'noListenOnLocalhost', 'readJoystickButton',
+ 'setFirmwareVersion', 'readJoystickSwitch', 'scrollDisplayRight',
+ 'getVoiceCallStatus', 'scrollDisplayLeft', 'writeMicroseconds',
+ 'delayMicroseconds', 'beginTransmission', 'getSignalStrength',
+ 'runAsynchronously', 'getAsynchronously', 'listenOnLocalhost',
+ 'getCurrentCarrier', 'readAccelerometer', 'messageAvailable',
+ 'sendDigitalPorts', 'lineFollowConfig', 'countryNameWrite', 'runShellCommand',
+ 'readStringUntil', 'rewindDirectory', 'readTemperature', 'setClockDivider',
+ 'readLightSensor', 'endTransmission', 'analogReference', 'detachInterrupt',
+ 'countryNameRead', 'attachInterrupt', 'encryptionType', 'readBytesUntil',
+ 'robotNameWrite', 'readMicrophone', 'robotNameRead', 'cityNameWrite',
+ 'userNameWrite', 'readJoystickY', 'readJoystickX', 'mouseReleased',
+ 'openNextFile', 'scanNetworks', 'noInterrupts', 'digitalWrite', 'beginSpeaker',
+ 'mousePressed', 'isActionDone', 'mouseDragged', 'displayLogos', 'noAutoscroll',
+ 'addParameter', 'remoteNumber', 'getModifiers', 'keyboardRead', 'userNameRead',
+ 'waitContinue', 'processInput', 'parseCommand', 'printVersion', 'readNetworks',
+ 'writeMessage', 'blinkVersion', 'cityNameRead', 'readMessage', 'setDataMode',
+ 'parsePacket', 'isListening', 'setBitOrder', 'beginPacket', 'isDirectory',
+ 'motorsWrite', 'drawCompass', 'digitalRead', 'clearScreen', 'serialEvent',
+ 'rightToLeft', 'setTextSize', 'leftToRight', 'requestFrom', 'keyReleased',
+ 'compassRead', 'analogWrite', 'interrupts', 'WiFiServer', 'disconnect',
+ 'playMelody', 'parseFloat', 'autoscroll', 'getPINUsed', 'setPINUsed',
+ 'setTimeout', 'sendAnalog', 'readSlider', 'analogRead', 'beginWrite',
+ 'createChar', 'motorsStop', 'keyPressed', 'tempoWrite', 'readButton',
+ 'subnetMask', 'debugPrint', 'macAddress', 'writeGreen', 'randomSeed',
+ 'attachGPRS', 'readString', 'sendString', 'remotePort', 'releaseAll',
+ 'mouseMoved', 'background', 'getXChange', 'getYChange', 'answerCall',
+ 'getResult', 'voiceCall', 'endPacket', 'constrain', 'getSocket', 'writeJSON',
+ 'getButton', 'available', 'connected', 'findUntil', 'readBytes', 'exitValue',
+ 'readGreen', 'writeBlue', 'startLoop', 'IPAddress', 'isPressed', 'sendSysex',
+ 'pauseMode', 'gatewayIP', 'setCursor', 'getOemKey', 'tuneWrite', 'noDisplay',
+ 'loadImage', 'switchPIN', 'onRequest', 'onReceive', 'changePIN', 'playFile',
+ 'noBuffer', 'parseInt', 'overflow', 'checkPIN', 'knobRead', 'beginTFT',
+ 'bitClear', 'updateIR', 'bitWrite', 'position', 'writeRGB', 'highByte',
+ 'writeRed', 'setSpeed', 'readBlue', 'noStroke', 'remoteIP', 'transfer',
+ 'shutdown', 'hangCall', 'beginSMS', 'endWrite', 'attached', 'maintain',
+ 'noCursor', 'checkReg', 'checkPUK', 'shiftOut', 'isValid', 'shiftIn', 'pulseIn',
+ 'connect', 'println', 'localIP', 'pinMode', 'getIMEI', 'display', 'noBlink',
+ 'process', 'getBand', 'running', 'beginSD', 'drawBMP', 'lowByte', 'setBand',
+ 'release', 'bitRead', 'prepare', 'pointTo', 'readRed', 'setMode', 'noFill',
+ 'remove', 'listen', 'stroke', 'detach', 'attach', 'noTone', 'exists', 'buffer',
+ 'height', 'bitSet', 'circle', 'config', 'cursor', 'random', 'IRread', 'setDNS',
+ 'endSMS', 'getKey', 'micros', 'millis', 'begin', 'print', 'write', 'ready',
+ 'flush', 'width', 'isPIN', 'blink', 'clear', 'press', 'mkdir', 'rmdir', 'close',
+ 'point', 'yield', 'image', 'BSSID', 'click', 'delay', 'read', 'text', 'move',
+ 'peek', 'beep', 'rect', 'line', 'open', 'seek', 'fill', 'size', 'turn', 'stop',
+ 'home', 'find', 'step', 'tone', 'sqrt', 'RSSI', 'SSID', 'end', 'bit', 'tan',
+ 'cos', 'sin', 'pow', 'map', 'abs', 'max', 'min', 'get', 'run', 'put',
+ 'isAlphaNumeric', 'isAlpha', 'isAscii', 'isWhitespace', 'isControl', 'isDigit',
+ 'isGraph', 'isLowerCase', 'isPrintable', 'isPunct', 'isSpace', 'isUpperCase',
+ 'isHexadecimalDigit'))
+
+ # do not highlight
+ suppress_highlight = set((
+ 'namespace', 'template', 'mutable', 'using', 'asm', 'typeid',
+ 'typename', 'this', 'alignof', 'constexpr', 'decltype', 'noexcept',
+ 'static_assert', 'thread_local', 'restrict'))
+
def get_tokens_unprocessed(self, text):
for index, token, value in CppLexer.get_tokens_unprocessed(self, text):
- if token is Name:
- if value in self.constants:
- yield index, Keyword.Constant, value
- elif value in self.functions:
- yield index, Name.Function, value
- elif value in self.storage:
- yield index, Keyword.Type, value
- else:
- yield index, token, value
- elif token is Name.Function:
- if value in self.structure:
- yield index, Name.Other, value
- else:
- yield index, token, value
- elif token is Keyword:
- if value in self.storage:
- yield index, Keyword.Type, value
- else:
- yield index, token, value
+ if value in self.structure:
+ yield index, Name.Builtin, value
+ elif value in self.operators:
+ yield index, Operator, value
+ elif value in self.variables:
+ yield index, Keyword.Reserved, value
+ elif value in self.suppress_highlight:
+ yield index, Name, value
+ elif value in self.functions:
+ yield index, Name.Function, value
else:
yield index, token, value
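The rewritten `get_tokens_unprocessed` above post-processes the parent `CppLexer`'s token stream, swapping in more specific token types by value lookup. A minimal standalone sketch of that pattern (the sets and token names here are illustrative stand-ins, not the full Arduino lists):

```python
# Walk a parent lexer's (index, token, value) stream and re-tag values
# found in known sets, passing everything else through unchanged.
STRUCTURE = {'setup', 'loop'}
FUNCTIONS = {'digitalWrite', 'delay'}

def retag(stream):
    for index, token, value in stream:
        if value in STRUCTURE:
            yield index, 'Name.Builtin', value
        elif value in FUNCTIONS:
            yield index, 'Name.Function', value
        else:
            yield index, token, value

stream = [(0, 'Name', 'setup'), (6, 'Name', 'delay'), (12, 'Name', 'foo')]
result = list(retag(stream))
```

Because the fallthrough branch yields the original triple, unknown names keep whatever type the parent lexer assigned.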
--- /dev/null
+# -*- coding: utf-8 -*-
+"""
+ pygments.lexers.capnproto
+ ~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ Lexers for the Cap'n Proto schema language.
+
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
+ :license: BSD, see LICENSE for details.
+"""
+
+import re
+
+from pygments.lexer import RegexLexer, default
+from pygments.token import Text, Comment, Keyword, Name, Literal
+
+__all__ = ['CapnProtoLexer']
+
+
+class CapnProtoLexer(RegexLexer):
+ """
+ For `Cap'n Proto <https://capnproto.org>`_ source.
+
+ .. versionadded:: 2.2
+ """
+ name = 'Cap\'n Proto'
+ filenames = ['*.capnp']
+ aliases = ['capnp']
+
+ flags = re.MULTILINE | re.UNICODE
+
+ tokens = {
+ 'root': [
+ (r'#.*?$', Comment.Single),
+ (r'@[0-9a-zA-Z]*', Name.Decorator),
+ (r'=', Literal, 'expression'),
+ (r':', Name.Class, 'type'),
+ (r'\$', Name.Attribute, 'annotation'),
+ (r'(struct|enum|interface|union|import|using|const|annotation|'
+ r'extends|in|of|on|as|with|from|fixed)\b',
+ Keyword),
+ (r'[\w.]+', Name),
+ (r'[^#@=:$\w]+', Text),
+ ],
+ 'type': [
+ (r'[^][=;,(){}$]+', Name.Class),
+ (r'[[(]', Name.Class, 'parentype'),
+ default('#pop'),
+ ],
+ 'parentype': [
+ (r'[^][;()]+', Name.Class),
+ (r'[[(]', Name.Class, '#push'),
+ (r'[])]', Name.Class, '#pop'),
+ default('#pop'),
+ ],
+ 'expression': [
+ (r'[^][;,(){}$]+', Literal),
+ (r'[[(]', Literal, 'parenexp'),
+ default('#pop'),
+ ],
+ 'parenexp': [
+ (r'[^][;()]+', Literal),
+ (r'[[(]', Literal, '#push'),
+ (r'[])]', Literal, '#pop'),
+ default('#pop'),
+ ],
+ 'annotation': [
+ (r'[^][;,(){}=:]+', Name.Attribute),
+ (r'[[(]', Name.Attribute, 'annexp'),
+ default('#pop'),
+ ],
+ 'annexp': [
+ (r'[^][;()]+', Name.Attribute),
+ (r'[[(]', Name.Attribute, '#push'),
+ (r'[])]', Name.Attribute, '#pop'),
+ default('#pop'),
+ ],
+ }
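The `@[0-9a-zA-Z]*` rule above is what tags Cap'n Proto IDs and field ordinals as `Name.Decorator`. A quick standalone check of that pattern against a schema fragment (the snippet is an illustrative example, not from the source):

```python
import re

# Same pattern as the CapnProtoLexer decorator rule.
decorator = re.compile(r'@[0-9a-zA-Z]*')
schema = 'struct Person @0xdeadbeef { name @0 :Text; }'
found = decorator.findall(schema)
```

Both the hex file/type ID and the small decimal ordinal match, since the character class covers hex digits and the `x` prefix alike.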
Lexer for the Chapel language.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
(r'(bool|complex|imag|int|opaque|range|real|string|uint)\b',
Keyword.Type),
(words((
- 'align', 'atomic', 'begin', 'break', 'by', 'cobegin', 'coforall',
- 'continue', 'delete', 'dmapped', 'do', 'domain', 'else', 'enum',
- 'except', 'export', 'extern', 'for', 'forall', 'if', 'index',
- 'inline', 'iter', 'label', 'lambda', 'let', 'local', 'new',
- 'noinit', 'on', 'only', 'otherwise', 'pragma', 'private',
- 'public', 'reduce', 'require', 'return', 'scan', 'select',
- 'serial', 'single', 'sparse', 'subdomain', 'sync', 'then',
- 'use', 'when', 'where', 'while', 'with', 'yield', 'zip'),
- suffix=r'\b'),
+ 'align', 'as', 'atomic', 'begin', 'break', 'by', 'cobegin',
+ 'coforall', 'continue', 'delete', 'dmapped', 'do', 'domain',
+ 'else', 'enum', 'except', 'export', 'extern', 'for', 'forall',
+ 'if', 'index', 'inline', 'iter', 'label', 'lambda', 'let',
+ 'local', 'new', 'noinit', 'on', 'only', 'otherwise', 'pragma',
+ 'private', 'public', 'reduce', 'require', 'return', 'scan',
+ 'select', 'serial', 'single', 'sparse', 'subdomain', 'sync',
+ 'then', 'use', 'when', 'where', 'while', 'with', 'yield',
+ 'zip'), suffix=r'\b'),
Keyword),
- (r'(proc)((?:\s|\\\s)+)', bygroups(Keyword, Text), 'procname'),
+ (r'(proc)((?:\s)+)', bygroups(Keyword, Text), 'procname'),
(r'(class|module|record|union)(\s+)', bygroups(Keyword, Text),
'classname'),
(r'[a-zA-Z_][\w$]*', Name.Class, '#pop'),
],
'procname': [
- (r'[a-zA-Z_][\w$]*', Name.Function, '#pop'),
+ (r'([a-zA-Z_][\w$]*|\~[a-zA-Z_][\w$]*|[+*/!~%<>=&^|\-]{1,2})',
+ Name.Function, '#pop'),
],
}
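The widened `procname` rule above lets Chapel operator overloads (e.g. `proc <=`) and destructor-style `~Name` procs highlight as function names. A simplified standalone version of that alternation (this sketch uses `*` quantifiers so single-letter proc names also match; treat the exact quantifier as an assumption):

```python
import re

# Three alternatives: plain identifier, ~identifier, or a 1-2 char operator.
procname = re.compile(r'[a-zA-Z_][\w$]*|~[a-zA-Z_][\w$]*|[+*/!~%<>=&^|\-]{1,2}')
names = [procname.fullmatch(s) is not None
         for s in ('main', '~C', '<=', '+', 'x')]
```

All five candidates are accepted, which is the point of the change: the previous rule only matched plain identifiers.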
--- /dev/null
+# -*- coding: utf-8 -*-
+"""
+ pygments.lexers.clean
+ ~~~~~~~~~~~~~~~~~~~~~
+
+ Lexer for the Clean language.
+
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
+ :license: BSD, see LICENSE for details.
+"""
+
+from pygments.lexer import ExtendedRegexLexer, LexerContext, \
+ bygroups, words, include, default
+from pygments.token import Comment, Keyword, Literal, Name, Number, Operator, \
+ Punctuation, String, Text, Whitespace
+
+__all__ = ['CleanLexer']
+
+
+class CleanLexer(ExtendedRegexLexer):
+ """
+ Lexer for the general purpose, state-of-the-art, pure and lazy functional
+ programming language Clean (http://clean.cs.ru.nl/Clean).
+
+ .. versionadded:: 2.2
+ """
+ name = 'Clean'
+ aliases = ['clean']
+ filenames = ['*.icl', '*.dcl']
+
+ def get_tokens_unprocessed(self, text=None, context=None):
+ ctx = LexerContext(text, 0)
+ ctx.indent = 0
+ return ExtendedRegexLexer.get_tokens_unprocessed(self, text, context=ctx)
+
+ def check_class_not_import(lexer, match, ctx):
+ if match.group(0) == 'import':
+ yield match.start(), Keyword.Namespace, match.group(0)
+ ctx.stack = ctx.stack[:-1] + ['fromimportfunc']
+ else:
+ yield match.start(), Name.Class, match.group(0)
+ ctx.pos = match.end()
+
+ def check_instance_class(lexer, match, ctx):
+ if match.group(0) == 'instance' or match.group(0) == 'class':
+ yield match.start(), Keyword, match.group(0)
+ else:
+ yield match.start(), Name.Function, match.group(0)
+ ctx.stack = ctx.stack + ['fromimportfunctype']
+ ctx.pos = match.end()
+
+ @staticmethod
+ def indent_len(text):
+ # Tabs are four spaces:
+ # https://svn.cs.ru.nl/repos/clean-platform/trunk/doc/STANDARDS.txt
+ text = text.replace('\n', '')
+ return len(text.replace('\t', '    ')), len(text)
+
+ def store_indent(lexer, match, ctx):
+ ctx.indent, _ = CleanLexer.indent_len(match.group(0))
+ ctx.pos = match.end()
+ yield match.start(), Text, match.group(0)
+
+ def check_indent1(lexer, match, ctx):
+ indent, reallen = CleanLexer.indent_len(match.group(0))
+ if indent > ctx.indent:
+ yield match.start(), Whitespace, match.group(0)
+ ctx.pos = match.start() + reallen + 1
+ else:
+ ctx.indent = 0
+ ctx.pos = match.start()
+ ctx.stack = ctx.stack[:-1]
+ yield match.start(), Whitespace, match.group(0)[1:]
+
+ def check_indent2(lexer, match, ctx):
+ indent, reallen = CleanLexer.indent_len(match.group(0))
+ if indent > ctx.indent:
+ yield match.start(), Whitespace, match.group(0)
+ ctx.pos = match.start() + reallen + 1
+ else:
+ ctx.indent = 0
+ ctx.pos = match.start()
+ ctx.stack = ctx.stack[:-2]
+
+ def check_indent3(lexer, match, ctx):
+ indent, reallen = CleanLexer.indent_len(match.group(0))
+ if indent > ctx.indent:
+ yield match.start(), Whitespace, match.group(0)
+ ctx.pos = match.start() + reallen + 1
+ else:
+ ctx.indent = 0
+ ctx.pos = match.start()
+ ctx.stack = ctx.stack[:-3]
+ yield match.start(), Whitespace, match.group(0)[1:]
+ if match.group(0) == '\n\n':
+ ctx.pos = ctx.pos + 1
+
+ def skip(lexer, match, ctx):
+ ctx.stack = ctx.stack[:-1]
+ ctx.pos = match.end()
+ yield match.start(), Comment, match.group(0)
+
+ keywords = ('class', 'instance', 'where', 'with', 'let', 'let!',
+ 'in', 'case', 'of', 'infix', 'infixr', 'infixl', 'generic',
+ 'derive', 'otherwise', 'code', 'inline')
+
+ tokens = {
+ 'common': [
+ (r';', Punctuation, '#pop'),
+ (r'//', Comment, 'singlecomment'),
+ ],
+ 'root': [
+ # Comments
+ (r'//.*\n', Comment.Single),
+ (r'(?s)/\*\*.*?\*/', Comment.Special),
+ (r'(?s)/\*.*?\*/', Comment.Multi),
+
+ # Modules, imports, etc.
+ (r'\b((?:implementation|definition|system)\s+)?(module)(\s+)([\w`.]+)',
+ bygroups(Keyword.Namespace, Keyword.Namespace, Text, Name.Class)),
+ (r'(?<=\n)import(?=\s)', Keyword.Namespace, 'import'),
+ (r'(?<=\n)from(?=\s)', Keyword.Namespace, 'fromimport'),
+
+ # Keywords
+ # We cannot use (?s)^|(?<=\s) as prefix, so need to repeat this
+ (words(keywords, prefix=r'(?<=\s)', suffix=r'(?=\s)'), Keyword),
+ (words(keywords, prefix=r'^', suffix=r'(?=\s)'), Keyword),
+
+ # Function definitions
+ (r'(?=\{\|)', Whitespace, 'genericfunction'),
+ (r'(?<=\n)([ \t]*)([\w`$()=\-<>~*\^|+&%]+)((?:\s+\w)*)(\s*)(::)',
+ bygroups(store_indent, Name.Function, Keyword.Type, Whitespace,
+ Punctuation),
+ 'functiondefargs'),
+
+ # Type definitions
+ (r'(?<=\n)([ \t]*)(::)', bygroups(store_indent, Punctuation), 'typedef'),
+ (r'^([ \t]*)(::)', bygroups(store_indent, Punctuation), 'typedef'),
+
+ # Literals
+ (r'\'\\?.(?<!\\)\'', String.Char),
+ (r'\'\\\d+\'', String.Char),
+ (r'\'\\\\\'', String.Char), # (special case for '\\')
+ (r'[+\-~]?\s*\d+\.\d+(E[+\-~]?\d+)?\b', Number.Float),
+ (r'[+\-~]?\s*0[0-7]\b', Number.Oct),
+ (r'[+\-~]?\s*0x[0-9a-fA-F]\b', Number.Hex),
+ (r'[+\-~]?\s*\d+\b', Number.Integer),
+ (r'"', String.Double, 'doubleqstring'),
+ (words(('True', 'False'), prefix=r'(?<=\s)', suffix=r'(?=\s)'),
+ Literal),
+
+ # Qualified names
+ (r'(\')([\w.]+)(\'\.)',
+ bygroups(Punctuation, Name.Namespace, Punctuation)),
+
+ # Everything else is some name
+ (r'([\w`$%/?@]+\.?)*[\w`$%/?@]+', Name),
+
+ # Punctuation
+ (r'[{}()\[\],:;.#]', Punctuation),
+ (r'[+\-=!<>|&~*\^/]', Operator),
+ (r'\\\\', Operator),
+
+ # Lambda expressions
+ (r'\\.*?(->|\.|=)', Name.Function),
+
+ # Whitespace
+ (r'\s', Whitespace),
+
+ include('common'),
+ ],
+ 'fromimport': [
+ include('common'),
+ (r'([\w`.]+)', check_class_not_import),
+ (r'\n', Whitespace, '#pop'),
+ (r'\s', Whitespace),
+ ],
+ 'fromimportfunc': [
+ include('common'),
+ (r'(::)(\s+)([^,\s]+)', bygroups(Punctuation, Text, Keyword.Type)),
+ (r'([\w`$()=\-<>~*\^|+&%/]+)', check_instance_class),
+ (r',', Punctuation),
+ (r'\n', Whitespace, '#pop'),
+ (r'\s', Whitespace),
+ ],
+ 'fromimportfunctype': [
+ include('common'),
+ (r'[{(\[]', Punctuation, 'combtype'),
+ (r',', Punctuation, '#pop'),
+ (r'[:;.#]', Punctuation),
+ (r'\n', Whitespace, '#pop:2'),
+ (r'[^\S\n]+', Whitespace),
+ (r'\S+', Keyword.Type),
+ ],
+ 'combtype': [
+ include('common'),
+ (r'[})\]]', Punctuation, '#pop'),
+ (r'[{(\[]', Punctuation, '#pop'),
+ (r'[,:;.#]', Punctuation),
+ (r'\s+', Whitespace),
+ (r'\S+', Keyword.Type),
+ ],
+ 'import': [
+ include('common'),
+ (words(('from', 'import', 'as', 'qualified'),
+ prefix=r'(?<=\s)', suffix=r'(?=\s)'), Keyword.Namespace),
+ (r'[\w`.]+', Name.Class),
+ (r'\n', Whitespace, '#pop'),
+ (r',', Punctuation),
+ (r'[^\S\n]+', Whitespace),
+ ],
+ 'singlecomment': [
+ (r'(.)(?=\n)', skip),
+ (r'.+(?!\n)', Comment),
+ ],
+ 'doubleqstring': [
+ (r'[^\\"]+', String.Double),
+ (r'"', String.Double, '#pop'),
+ (r'\\.', String.Double),
+ ],
+ 'typedef': [
+ include('common'),
+ (r'[\w`]+', Keyword.Type),
+ (r'[:=|(),\[\]{}!*]', Punctuation),
+ (r'->', Punctuation),
+ (r'\n(?=[^\s|])', Whitespace, '#pop'),
+ (r'\s', Whitespace),
+ (r'.', Keyword.Type),
+ ],
+ 'genericfunction': [
+ include('common'),
+ (r'\{\|', Punctuation),
+ (r'\|\}', Punctuation, '#pop'),
+ (r',', Punctuation),
+ (r'->', Punctuation),
+ (r'(\s+of\s+)(\{)', bygroups(Keyword, Punctuation), 'genericftypes'),
+ (r'\s', Whitespace),
+ (r'[\w`\[\]{}!]+', Keyword.Type),
+ (r'[*()]', Punctuation),
+ ],
+ 'genericftypes': [
+ include('common'),
+ (r'[\w`]+', Keyword.Type),
+ (r',', Punctuation),
+ (r'\s', Whitespace),
+ (r'\}', Punctuation, '#pop'),
+ ],
+ 'functiondefargs': [
+ include('common'),
+ (r'\n(\s*)', check_indent1),
+ (r'[!{}()\[\],:;.#]', Punctuation),
+ (r'->', Punctuation, 'functiondefres'),
+ (r'^(?=\S)', Whitespace, '#pop'),
+ (r'\S', Keyword.Type),
+ (r'\s', Whitespace),
+ ],
+ 'functiondefres': [
+ include('common'),
+ (r'\n(\s*)', check_indent2),
+ (r'^(?=\S)', Whitespace, '#pop:2'),
+ (r'[!{}()\[\],:;.#]', Punctuation),
+ (r'\|', Punctuation, 'functiondefclasses'),
+ (r'\S', Keyword.Type),
+ (r'\s', Whitespace),
+ ],
+ 'functiondefclasses': [
+ include('common'),
+ (r'\n(\s*)', check_indent3),
+ (r'^(?=\S)', Whitespace, '#pop:3'),
+ (r'[,&]', Punctuation),
+ (r'\[', Punctuation, 'functiondefuniquneq'),
+ (r'[\w`$()=\-<>~*\^|+&%/{}\[\]@]', Name.Function, 'functionname'),
+ (r'\s+', Whitespace),
+ ],
+ 'functiondefuniquneq': [
+ include('common'),
+ (r'[a-z]+', Keyword.Type),
+ (r'\s+', Whitespace),
+ (r'<=|,', Punctuation),
+ (r'\]', Punctuation, '#pop')
+ ],
+ 'functionname': [
+ include('common'),
+ (r'[\w`$()=\-<>~*\^|+&%/]+', Name.Function),
+ (r'(?=\{\|)', Punctuation, 'genericfunction'),
+ default('#pop'),
+ ]
+ }
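The `check_indent*` callbacks above all compare against `ctx.indent`, a logical width computed by `indent_len`, which expands tabs to four spaces per the Clean style guide. A standalone sketch of that helper and its two return values:

```python
# Mirrors CleanLexer.indent_len: returns (logical width with tab = 4
# spaces, real character count) for a leading-whitespace match.
def indent_len(text):
    text = text.replace('\n', '')
    return len(text.replace('\t', '    ')), len(text)

widths = indent_len('\n\t  ')
```

Here one tab plus two spaces has logical width 6 but only 3 real characters; the callbacks need both numbers because `ctx.pos` advances by real characters while indentation is compared by logical width.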
Just export lexer classes previously contained in this module.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
from pygments.lexers.ooc import OocLexer
from pygments.lexers.felix import FelixLexer
from pygments.lexers.nimrod import NimrodLexer
+from pygments.lexers.crystal import CrystalLexer
__all__ = []
Lexers for configuration file formats.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
(r'[;#].*', Comment.Single),
(r'\[.*?\]$', Keyword),
(r'(.*?)([ \t]*)(=)([ \t]*)(.*(?:\n[ \t].+)*)',
- bygroups(Name.Attribute, Text, Operator, Text, String))
- ]
+ bygroups(Name.Attribute, Text, Operator, Text, String)),
+ # standalone option, supported by some INI parsers
+ (r'(.+?)$', Name.Attribute),
+ ],
}
def analyse_text(text):
"""
Lexer for configuration files in Java's properties format.
+ Note: trailing whitespace counts as part of the value, as per the spec.
+
.. versionadded:: 1.4
"""
tokens = {
'root': [
- (r'\s+', Text),
- (r'(?:[;#]|//).*$', Comment),
+ (r'^(\w+)([ \t])(\w+\s*)$', bygroups(Name.Attribute, Text, String)),
+ (r'^\w+(\\[ \t]\w*)*$', Name.Attribute),
+ (r'(^ *)([#!].*)', bygroups(Text, Comment)),
+ # More controversial comments
+ (r'(^ *)((?:;|//).*)', bygroups(Text, Comment)),
(r'(.*?)([ \t]*)([=:])([ \t]*)(.*(?:(?<=\\)\n.*)*)',
bygroups(Name.Attribute, Text, Operator, Text, String)),
+ (r'\s', Text),
],
}
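The key/value rule above keeps backslash-continued lines inside a single `String` token via a lookbehind, so a multi-line properties value is not split. Checking that regex standalone against a two-line value:

```python
import re

# Same pattern as the PropertiesLexer key/value rule: group 5 keeps
# consuming lines as long as the previous line ends with a backslash.
pat = re.compile(r'(.*?)([ \t]*)([=:])([ \t]*)(.*(?:(?<=\\)\n.*)*)')
m = pat.match('key = value one \\\n    value two')
key, value = m.group(1), m.group(5)
```

The `(?<=\\)` lookbehind is what allows `\n` into group 5 only after a trailing backslash; an unescaped newline terminates the value as expected.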
"""
name = 'Nginx configuration file'
aliases = ['nginx']
- filenames = []
+ filenames = ['nginx.conf']
mimetypes = ['text/x-nginx-conf']
tokens = {
(r'(".*")', bygroups(String.Double)),
],
'punctuation': [
- (r'[\[\]\(\),.]', Punctuation),
+ (r'[\[\](),.]', Punctuation),
],
# Keep this separate from punctuation - we sometimes want to use different
# Tokens for { }
.. versionadded:: 2.1
"""
name = 'Termcap'
- aliases = ['termcap',]
-
- filenames = ['termcap', 'termcap.src',]
+ aliases = ['termcap']
+ filenames = ['termcap', 'termcap.src']
mimetypes = []
# NOTE:
tokens = {
'root': [
(r'^#.*$', Comment),
- (r'^[^\s#:\|]+', Name.Tag, 'names'),
+ (r'^[^\s#:|]+', Name.Tag, 'names'),
],
'names': [
(r'\n', Text, '#pop'),
(r':', Punctuation, 'defs'),
(r'\|', Punctuation),
- (r'[^:\|]+', Name.Attribute),
+ (r'[^:|]+', Name.Attribute),
],
'defs': [
(r'\\\n[ \t]*', Text),
.. versionadded:: 2.1
"""
name = 'Terminfo'
- aliases = ['terminfo',]
-
- filenames = ['terminfo', 'terminfo.src',]
+ aliases = ['terminfo']
+ filenames = ['terminfo', 'terminfo.src']
mimetypes = []
# NOTE:
tokens = {
'root': [
(r'^#.*$', Comment),
- (r'^[^\s#,\|]+', Name.Tag, 'names'),
+ (r'^[^\s#,|]+', Name.Tag, 'names'),
],
'names': [
(r'\n', Text, '#pop'),
(r'(,)([ \t]*)', bygroups(Punctuation, Text), 'defs'),
(r'\|', Punctuation),
- (r'[^,\|]+', Name.Attribute),
+ (r'[^,|]+', Name.Attribute),
],
'defs': [
(r'\n[ \t]+', Text),
"""
name = 'PkgConfig'
- aliases = ['pkgconfig',]
- filenames = ['*.pc',]
+ aliases = ['pkgconfig']
+ filenames = ['*.pc']
mimetypes = []
tokens = {
"""
name = 'PacmanConf'
- aliases = ['pacmanconf',]
- filenames = ['pacman.conf',]
+ aliases = ['pacmanconf']
+ filenames = ['pacman.conf']
mimetypes = []
tokens = {
'%u', # url
), suffix=r'\b'),
Name.Variable),
-
+
# fallback
(r'.', Text),
],
Lexers for misc console output.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
--- /dev/null
+# -*- coding: utf-8 -*-
+"""
+ pygments.lexers.crystal
+ ~~~~~~~~~~~~~~~~~~~~~~~
+
+ Lexer for Crystal.
+
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
+ :license: BSD, see LICENSE for details.
+"""
+
+import re
+
+from pygments.lexer import ExtendedRegexLexer, include, \
+ bygroups, default, LexerContext, words
+from pygments.token import Text, Comment, Operator, Keyword, Name, String, \
+ Number, Punctuation, Error
+
+__all__ = ['CrystalLexer']
+
+line_re = re.compile('.*?\n')
+
+
+CRYSTAL_OPERATORS = [
+ '!=', '!~', '!', '%', '&&', '&', '**', '*', '+', '-', '/', '<=>', '<<', '<=', '<',
+ '===', '==', '=~', '=', '>=', '>>', '>', '[]=', '[]?', '[]', '^', '||', '|', '~'
+]
+
+
+class CrystalLexer(ExtendedRegexLexer):
+ """
+ For `Crystal <http://crystal-lang.org>`_ source code.
+
+ .. versionadded:: 2.2
+ """
+
+ name = 'Crystal'
+ aliases = ['cr', 'crystal']
+ filenames = ['*.cr']
+ mimetypes = ['text/x-crystal']
+
+ flags = re.DOTALL | re.MULTILINE
+
+ def heredoc_callback(self, match, ctx):
+ # okay, this is the hardest part of parsing Crystal...
+ # match: 1 = <<-?, 2 = quote? 3 = name 4 = quote? 5 = rest of line
+
+ start = match.start(1)
+ yield start, Operator, match.group(1) # <<-?
+ yield match.start(2), String.Heredoc, match.group(2) # quote ", ', `
+ yield match.start(3), String.Delimiter, match.group(3) # heredoc name
+ yield match.start(4), String.Heredoc, match.group(4) # quote again
+
+ heredocstack = ctx.__dict__.setdefault('heredocstack', [])
+ outermost = not bool(heredocstack)
+ heredocstack.append((match.group(1) == '<<-', match.group(3)))
+
+ ctx.pos = match.start(5)
+ ctx.end = match.end(5)
+ # this may find other heredocs
+ for i, t, v in self.get_tokens_unprocessed(context=ctx):
+ yield i, t, v
+ ctx.pos = match.end()
+
+ if outermost:
+ # this is the outer heredoc again, now we can process them all
+ for tolerant, hdname in heredocstack:
+ lines = []
+ for match in line_re.finditer(ctx.text, ctx.pos):
+ if tolerant:
+ check = match.group().strip()
+ else:
+ check = match.group().rstrip()
+ if check == hdname:
+ for amatch in lines:
+ yield amatch.start(), String.Heredoc, amatch.group()
+ yield match.start(), String.Delimiter, match.group()
+ ctx.pos = match.end()
+ break
+ else:
+ lines.append(match)
+ else:
+ # end of heredoc not found -- error!
+ for amatch in lines:
+ yield amatch.start(), Error, amatch.group()
+ ctx.end = len(ctx.text)
+ del heredocstack[:]
+
+ def gen_crystalstrings_rules():
+ def intp_regex_callback(self, match, ctx):
+ yield match.start(1), String.Regex, match.group(1) # begin
+ nctx = LexerContext(match.group(3), 0, ['interpolated-regex'])
+ for i, t, v in self.get_tokens_unprocessed(context=nctx):
+ yield match.start(3)+i, t, v
+ yield match.start(4), String.Regex, match.group(4) # end[imsx]*
+ ctx.pos = match.end()
+
+ def intp_string_callback(self, match, ctx):
+ yield match.start(1), String.Other, match.group(1)
+ nctx = LexerContext(match.group(3), 0, ['interpolated-string'])
+ for i, t, v in self.get_tokens_unprocessed(context=nctx):
+ yield match.start(3)+i, t, v
+ yield match.start(4), String.Other, match.group(4) # end
+ ctx.pos = match.end()
+
+ states = {}
+ states['strings'] = [
+ (r'\:@{0,2}[a-zA-Z_]\w*[!?]?', String.Symbol),
+ (words(CRYSTAL_OPERATORS, prefix=r'\:@{0,2}'), String.Symbol),
+ (r":'(\\\\|\\'|[^'])*'", String.Symbol),
+ # This allows arbitrary text after '\ for simplicity
+ (r"'(\\\\|\\'|[^']|\\[^'\\]+)'", String.Char),
+ (r':"', String.Symbol, 'simple-sym'),
+ # Crystal doesn't have "symbol:"s but this simplifies function args
+ (r'([a-zA-Z_]\w*)(:)(?!:)', bygroups(String.Symbol, Punctuation)),
+ (r'"', String.Double, 'simple-string'),
+ (r'(?<!\.)`', String.Backtick, 'simple-backtick'),
+ ]
+
+ # double-quoted string and symbol
+ for name, ttype, end in ('string', String.Double, '"'), \
+ ('sym', String.Symbol, '"'), \
+ ('backtick', String.Backtick, '`'):
+ states['simple-'+name] = [
+ include('string-escaped' if name == 'sym' else 'string-intp-escaped'),
+ (r'[^\\%s#]+' % end, ttype),
+ (r'[\\#]', ttype),
+ (end, ttype, '#pop'),
+ ]
+
+ # braced quoted strings
+ for lbrace, rbrace, bracecc, name in \
+ ('\\{', '\\}', '{}', 'cb'), \
+ ('\\[', '\\]', '\\[\\]', 'sb'), \
+ ('\\(', '\\)', '()', 'pa'), \
+ ('<', '>', '<>', 'ab'):
+ states[name+'-intp-string'] = [
+ (r'\\[' + lbrace + ']', String.Other),
+ (lbrace, String.Other, '#push'),
+ (rbrace, String.Other, '#pop'),
+ include('string-intp-escaped'),
+ (r'[\\#' + bracecc + ']', String.Other),
+ (r'[^\\#' + bracecc + ']+', String.Other),
+ ]
+ states['strings'].append((r'%' + lbrace, String.Other,
+ name+'-intp-string'))
+ states[name+'-string'] = [
+ (r'\\[\\' + bracecc + ']', String.Other),
+ (lbrace, String.Other, '#push'),
+ (rbrace, String.Other, '#pop'),
+ (r'[\\#' + bracecc + ']', String.Other),
+ (r'[^\\#' + bracecc + ']+', String.Other),
+ ]
+ # http://crystal-lang.org/docs/syntax_and_semantics/literals/array.html
+ states['strings'].append((r'%[wi]' + lbrace, String.Other,
+ name+'-string'))
+ states[name+'-regex'] = [
+ (r'\\[\\' + bracecc + ']', String.Regex),
+ (lbrace, String.Regex, '#push'),
+ (rbrace + '[imsx]*', String.Regex, '#pop'),
+ include('string-intp'),
+ (r'[\\#' + bracecc + ']', String.Regex),
+ (r'[^\\#' + bracecc + ']+', String.Regex),
+ ]
+ states['strings'].append((r'%r' + lbrace, String.Regex,
+ name+'-regex'))
+
+ # these must come after %<brace>!
+ states['strings'] += [
+ # %r regex
+ (r'(%r([\W_]))((?:\\\2|(?!\2).)*)(\2[imsx]*)',
+ intp_regex_callback),
+ # regular fancy strings with qsw
+ (r'(%[wi]([\W_]))((?:\\\2|(?!\2).)*)(\2)',
+ intp_string_callback),
+ # special forms of fancy strings after operators or
+ # in method calls with braces
+ (r'(?<=[-+/*%=<>&!^|~,(])(\s*)(%([\t ])(?:(?:\\\3|(?!\3).)*)\3)',
+ bygroups(Text, String.Other, None)),
+ # and because of fixed width lookbehinds the whole thing a
+ # second time for line startings...
+ (r'^(\s*)(%([\t ])(?:(?:\\\3|(?!\3).)*)\3)',
+ bygroups(Text, String.Other, None)),
+ # all regular fancy strings without qsw
+ (r'(%([\[{(<]))((?:\\\2|(?!\2).)*)(\2)',
+ intp_string_callback),
+ ]
+
+ return states
+
+ tokens = {
+ 'root': [
+ (r'#.*?$', Comment.Single),
+ # keywords
+ (words('''
+ abstract asm as begin break case do else elsif end ensure extend ifdef if
+ include instance_sizeof next of pointerof private protected rescue return
+ require sizeof super then typeof unless until when while with yield
+ '''.split(), suffix=r'\b'), Keyword),
+ (words(['true', 'false', 'nil'], suffix=r'\b'), Keyword.Constant),
+ # start of function, class and module names
+ (r'(module|lib)(\s+)([a-zA-Z_]\w*(?:::[a-zA-Z_]\w*)*)',
+ bygroups(Keyword, Text, Name.Namespace)),
+ (r'(def|fun|macro)(\s+)((?:[a-zA-Z_]\w*::)*)',
+ bygroups(Keyword, Text, Name.Namespace), 'funcname'),
+ (r'def(?=[*%&^`~+-/\[<>=])', Keyword, 'funcname'),
+ (r'(class|struct|union|type|alias|enum)(\s+)((?:[a-zA-Z_]\w*::)*)',
+ bygroups(Keyword, Text, Name.Namespace), 'classname'),
+ (r'(self|out|uninitialized)\b|(is_a|responds_to)\?', Keyword.Pseudo),
+ # macros
+ (words('''
+ debugger record pp assert_responds_to spawn parallel
+ getter setter property delegate def_hash def_equals def_equals_and_hash
+ forward_missing_to
+ '''.split(), suffix=r'\b'), Name.Builtin.Pseudo),
+ (r'getter[!?]|property[!?]|__(DIR|FILE|LINE)__\b', Name.Builtin.Pseudo),
+ # builtins
+ # http://crystal-lang.org/api/toplevel.html
+ (words('''
+ Object Value Struct Reference Proc Class Nil Symbol Enum Void
+ Bool Number Int Int8 Int16 Int32 Int64 UInt8 UInt16 UInt32 UInt64
+ Float Float32 Float64 Char String
+ Pointer Slice Range Exception Regex
+ Mutex StaticArray Array Hash Set Tuple Deque Box Process File
+ Dir Time Channel Concurrent Scheduler
+ abort at_exit caller delay exit fork future get_stack_top gets
+ lazy loop main p print printf puts
+ raise rand read_line sleep sprintf system with_color
+ '''.split(), prefix=r'(?<!\.)', suffix=r'\b'), Name.Builtin),
+ # normal heredocs
+ (r'(?<!\w)(<<-?)(["`\']?)([a-zA-Z_]\w*)(\2)(.*?\n)',
+ heredoc_callback),
+ # empty string heredocs
+ (r'(<<-?)("|\')()(\2)(.*?\n)', heredoc_callback),
+ (r'__END__', Comment.Preproc, 'end-part'),
+ # multiline regex (after keywords or assignments)
+ (r'(?:^|(?<=[=<>~!:])|'
+ r'(?<=(?:\s|;)when\s)|'
+ r'(?<=(?:\s|;)or\s)|'
+ r'(?<=(?:\s|;)and\s)|'
+ r'(?<=\.index\s)|'
+ r'(?<=\.scan\s)|'
+ r'(?<=\.sub\s)|'
+ r'(?<=\.sub!\s)|'
+ r'(?<=\.gsub\s)|'
+ r'(?<=\.gsub!\s)|'
+ r'(?<=\.match\s)|'
+ r'(?<=(?:\s|;)if\s)|'
+ r'(?<=(?:\s|;)elsif\s)|'
+ r'(?<=^when\s)|'
+ r'(?<=^index\s)|'
+ r'(?<=^scan\s)|'
+ r'(?<=^sub\s)|'
+ r'(?<=^gsub\s)|'
+ r'(?<=^sub!\s)|'
+ r'(?<=^gsub!\s)|'
+ r'(?<=^match\s)|'
+ r'(?<=^if\s)|'
+ r'(?<=^elsif\s)'
+ r')(\s*)(/)', bygroups(Text, String.Regex), 'multiline-regex'),
+ # multiline regex (in method calls or subscripts)
+ (r'(?<=\(|,|\[)/', String.Regex, 'multiline-regex'),
+ # multiline regex (this time the funny no whitespace rule)
+ (r'(\s+)(/)(?![\s=])', bygroups(Text, String.Regex),
+ 'multiline-regex'),
+ # lex numbers and ignore following regular expressions which
+ # are division operators in fact (grrrr. i hate that. any
+ # better ideas?)
+ # since pygments 0.7 we also eat a "?" operator after numbers
+ # so that the char operator does not work. Chars are not allowed
+ # there so that you can use the ternary operator.
+ # stupid example:
+ # x>=0?n[x]:""
+ (r'(0o[0-7]+(?:_[0-7]+)*(?:_?[iu][0-9]+)?)\b(\s*)([/?])?',
+ bygroups(Number.Oct, Text, Operator)),
+ (r'(0x[0-9A-Fa-f]+(?:_[0-9A-Fa-f]+)*(?:_?[iu][0-9]+)?)\b(\s*)([/?])?',
+ bygroups(Number.Hex, Text, Operator)),
+ (r'(0b[01]+(?:_[01]+)*(?:_?[iu][0-9]+)?)\b(\s*)([/?])?',
+ bygroups(Number.Bin, Text, Operator)),
+ # 3 separate expressions for floats because any of the 3 optional
+ # parts makes it a float
+ (r'((?:0(?![0-9])|[1-9][\d_]*)(?:\.\d[\d_]*)(?:e[+-]?[0-9]+)?'
+ r'(?:_?f[0-9]+)?)(\s*)([/?])?',
+ bygroups(Number.Float, Text, Operator)),
+ (r'((?:0(?![0-9])|[1-9][\d_]*)(?:\.\d[\d_]*)?(?:e[+-]?[0-9]+)'
+ r'(?:_?f[0-9]+)?)(\s*)([/?])?',
+ bygroups(Number.Float, Text, Operator)),
+ (r'((?:0(?![0-9])|[1-9][\d_]*)(?:\.\d[\d_]*)?(?:e[+-]?[0-9]+)?'
+ r'(?:_?f[0-9]+))(\s*)([/?])?',
+ bygroups(Number.Float, Text, Operator)),
+ (r'(0\b|[1-9][\d]*(?:_\d+)*(?:_?[iu][0-9]+)?)\b(\s*)([/?])?',
+ bygroups(Number.Integer, Text, Operator)),
+ # Names
+ (r'@@[a-zA-Z_]\w*', Name.Variable.Class),
+ (r'@[a-zA-Z_]\w*', Name.Variable.Instance),
+ (r'\$\w+', Name.Variable.Global),
+ (r'\$[!@&`\'+~=/\\,;.<>_*$?:"^-]', Name.Variable.Global),
+ (r'\$-[0adFiIlpvw]', Name.Variable.Global),
+ (r'::', Operator),
+ include('strings'),
+ # chars
+ (r'\?(\\[MC]-)*' # modifiers
+ r'(\\([\\befnrtv#"\']|x[a-fA-F0-9]{1,2}|[0-7]{1,3})|\S)'
+ r'(?!\w)',
+ String.Char),
+ (r'[A-Z][A-Z_]+\b', Name.Constant),
+ # macro expansion
+ (r'\{%', String.Interpol, 'in-macro-control'),
+ (r'\{\{', String.Interpol, 'in-macro-expr'),
+ # attributes
+ (r'(@\[)(\s*)([A-Z]\w*)',
+ bygroups(Operator, Text, Name.Decorator), 'in-attr'),
+ # this is needed because Crystal attributes can look
+ # like keywords (class) or like this: ` ?!?
+ (words(CRYSTAL_OPERATORS, prefix=r'(\.|::)'),
+ bygroups(Operator, Name.Operator)),
+ (r'(\.|::)([a-zA-Z_]\w*[!?]?|[*%&^`~+\-/\[<>=])',
+ bygroups(Operator, Name)),
+ # Names can end with [!?] unless it's "!="
+ (r'[a-zA-Z_]\w*(?:[!?](?!=))?', Name),
+ (r'(\[|\]\??|\*\*|<=>?|>=|<<?|>>?|=~|===|'
+ r'!~|&&?|\|\||\.{1,3})', Operator),
+ (r'[-+/*%=<>&!^|~]=?', Operator),
+ (r'[(){};,/?:\\]', Punctuation),
+ (r'\s+', Text)
+ ],
+ 'funcname': [
+ (r'(?:([a-zA-Z_]\w*)(\.))?'
+ r'([a-zA-Z_]\w*[!?]?|\*\*?|[-+]@?|'
+ r'[/%&|^`~]|\[\]=?|<<|>>|<=?>|>=?|===?)',
+ bygroups(Name.Class, Operator, Name.Function), '#pop'),
+ default('#pop')
+ ],
+ 'classname': [
+ (r'[A-Z_]\w*', Name.Class),
+ (r'(\()(\s*)([A-Z_]\w*)(\s*)(\))',
+ bygroups(Punctuation, Text, Name.Class, Text, Punctuation)),
+ default('#pop')
+ ],
+ 'in-intp': [
+ (r'\{', String.Interpol, '#push'),
+ (r'\}', String.Interpol, '#pop'),
+ include('root'),
+ ],
+ 'string-intp': [
+ (r'#\{', String.Interpol, 'in-intp'),
+ ],
+ 'string-escaped': [
+ (r'\\([\\befnstv#"\']|x[a-fA-F0-9]{1,2}|[0-7]{1,3})', String.Escape)
+ ],
+ 'string-intp-escaped': [
+ include('string-intp'),
+ include('string-escaped'),
+ ],
+ 'interpolated-regex': [
+ include('string-intp'),
+ (r'[\\#]', String.Regex),
+ (r'[^\\#]+', String.Regex),
+ ],
+ 'interpolated-string': [
+ include('string-intp'),
+ (r'[\\#]', String.Other),
+ (r'[^\\#]+', String.Other),
+ ],
+ 'multiline-regex': [
+ include('string-intp'),
+ (r'\\\\', String.Regex),
+ (r'\\/', String.Regex),
+ (r'[\\#]', String.Regex),
+ (r'[^\\/#]+', String.Regex),
+ (r'/[imsx]*', String.Regex, '#pop'),
+ ],
+ 'end-part': [
+ (r'.+', Comment.Preproc, '#pop')
+ ],
+ 'in-macro-control': [
+ (r'\{%', String.Interpol, '#push'),
+ (r'%\}', String.Interpol, '#pop'),
+ (r'for\b|in\b', Keyword),
+ include('root'),
+ ],
+ 'in-macro-expr': [
+ (r'\{\{', String.Interpol, '#push'),
+ (r'\}\}', String.Interpol, '#pop'),
+ include('root'),
+ ],
+ 'in-attr': [
+ (r'\[', Operator, '#push'),
+ (r'\]', Operator, '#pop'),
+ include('root'),
+ ],
+ }
+ tokens.update(gen_crystalstrings_rules())
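The "normal heredocs" rule feeding `heredoc_callback` above captures the opener, optional quote, delimiter name, and closing quote in separate groups, with a backreference tying the quotes together. A standalone check of that pattern (same regex and flags as the lexer):

```python
import re

heredoc = re.compile(r'(?<!\w)(<<-?)(["`\']?)([a-zA-Z_]\w*)(\2)(.*?\n)',
                     re.DOTALL | re.MULTILINE)
m = heredoc.match('<<-EOS\nhello\nEOS\n')
opener, name = m.group(1), m.group(3)
```

With an unquoted delimiter, group 2 matches empty and the `(\2)` backreference matches empty too, so the same rule handles both `<<-EOS` and `<<-"EOS"` forms; the callback then uses group 1 to decide whether the terminator may be indented.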
Lexers for CSound languages.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
-import copy, re
+import re
from pygments.lexer import RegexLexer, bygroups, default, include, using, words
from pygments.token import Comment, Keyword, Name, Number, Operator, Punctuation, \
__all__ = ['CsoundScoreLexer', 'CsoundOrchestraLexer', 'CsoundDocumentLexer']
-newline = (r'((?:;|//).*)*(\n)', bygroups(Comment.Single, Text))
+newline = (r'((?:(?:;|//).*)*)(\n)', bygroups(Comment.Single, Text))
class CsoundLexer(RegexLexer):
(r'0[xX][a-fA-F0-9]+', Number.Hex),
(r'\d+', Number.Integer),
(r'"', String, 'single-line string'),
- (r'{{', String, 'multi-line string'),
+ (r'\{\{', String, 'multi-line string'),
(r'[+\-*/%^!=&|<>#~¬]', Operator),
(r'[](),?:[]', Punctuation),
(words((
(r'[\\"~$%\^\n]', String)
],
'multi-line string': [
- (r'}}', String, '#pop'),
- (r'[^\}]+|\}(?!\})', String)
+ (r'\}\}', String, '#pop'),
+ (r'[^}]+|\}(?!\})', String)
],
'scoreline opcode': [
include('whitespace or macro call'),
- (r'{{', String, 'scoreline'),
+ (r'\{\{', String, 'scoreline'),
default('#pop')
],
'scoreline': [
- (r'}}', String, '#pop'),
- (r'([^\}]+)|\}(?!\})', using(CsoundScoreLexer))
+ (r'\}\}', String, '#pop'),
+ (r'([^}]+)|\}(?!\})', using(CsoundScoreLexer))
],
'python opcode': [
include('whitespace or macro call'),
- (r'{{', String, 'python'),
+ (r'\{\{', String, 'python'),
default('#pop')
],
'python': [
- (r'}}', String, '#pop'),
- (r'([^\}]+)|\}(?!\})', using(PythonLexer))
+ (r'\}\}', String, '#pop'),
+ (r'([^}]+)|\}(?!\})', using(PythonLexer))
],
'lua opcode': [
include('whitespace or macro call'),
(r'"', String, 'single-line string'),
- (r'{{', String, 'lua'),
+ (r'\{\{', String, 'lua'),
(r',', Punctuation),
default('#pop')
],
'lua': [
- (r'}}', String, '#pop'),
- (r'([^\}]+)|\}(?!\})', using(LuaLexer))
+ (r'\}\}', String, '#pop'),
+ (r'([^}]+)|\}(?!\})', using(LuaLexer))
]
}
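The corrected `newline` pattern earlier in this Csound change wraps the repeated comment alternative in an outer group, so group 1 is always present (possibly empty) rather than `None` when a line has no comment, which matters for `bygroups()`. A side-by-side check of the two forms:

```python
import re

old = re.compile(r'((?:;|//).*)*(\n)')        # group 1 inside the repetition
new = re.compile(r'((?:(?:;|//).*)*)(\n)')    # group 1 wraps the repetition

bare = '\n'
old_g1 = old.match(bare).group(1)  # repetition ran zero times: None
new_g1 = new.match(bare).group(1)  # outer group matched empty string
```

A group that never participates in a match yields `None` in Python's `re`, so the outer wrapper is the standard way to guarantee a (possibly empty) string for each `bygroups` slot.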
"""
For `Csound <http://csound.github.io>`_ documents.
-
+ .. versionadded:: 2.1
"""
name = 'Csound Document'
Lexers for CSS and related stylesheet formats.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
__all__ = ['CssLexer', 'SassLexer', 'ScssLexer', 'LessCssLexer']
+# List of vendor prefixes obtained from:
+# https://www.w3.org/TR/CSS21/syndata.html#vendor-keyword-history
+_vendor_prefixes = (
+ '-ms-', 'mso-', '-moz-', '-o-', '-xv-', '-atsc-', '-wap-', '-khtml-',
+ '-webkit-', 'prince-', '-ah-', '-hp-', '-ro-', '-rim-', '-tc-',
+)
+
+# List of CSS properties obtained from:
+# https://www.w3.org/Style/CSS/all-properties.en.html
+# Note: handle --* separately
+_css_properties = (
+ 'align-content', 'align-items', 'align-self', 'alignment-baseline', 'all',
+ 'animation', 'animation-delay', 'animation-direction',
+ 'animation-duration', 'animation-fill-mode', 'animation-iteration-count',
+ 'animation-name', 'animation-play-state', 'animation-timing-function',
+ 'appearance', 'azimuth', 'backface-visibility', 'background',
+ 'background-attachment', 'background-blend-mode', 'background-clip',
+ 'background-color', 'background-image', 'background-origin',
+ 'background-position', 'background-repeat', 'background-size',
+ 'baseline-shift', 'bookmark-label', 'bookmark-level', 'bookmark-state',
+ 'border', 'border-bottom', 'border-bottom-color',
+ 'border-bottom-left-radius', 'border-bottom-right-radius',
+ 'border-bottom-style', 'border-bottom-width', 'border-boundary',
+ 'border-collapse', 'border-color', 'border-image', 'border-image-outset',
+ 'border-image-repeat', 'border-image-slice', 'border-image-source',
+ 'border-image-width', 'border-left', 'border-left-color',
+ 'border-left-style', 'border-left-width', 'border-radius', 'border-right',
+ 'border-right-color', 'border-right-style', 'border-right-width',
+ 'border-spacing', 'border-style', 'border-top', 'border-top-color',
+ 'border-top-left-radius', 'border-top-right-radius', 'border-top-style',
+ 'border-top-width', 'border-width', 'bottom', 'box-decoration-break',
+ 'box-shadow', 'box-sizing', 'box-snap', 'box-suppress', 'break-after',
+ 'break-before', 'break-inside', 'caption-side', 'caret', 'caret-animation',
+ 'caret-color', 'caret-shape', 'chains', 'clear', 'clip', 'clip-path',
+ 'clip-rule', 'color', 'color-interpolation-filters', 'column-count',
+ 'column-fill', 'column-gap', 'column-rule', 'column-rule-color',
+ 'column-rule-style', 'column-rule-width', 'column-span', 'column-width',
+ 'columns', 'content', 'counter-increment', 'counter-reset', 'counter-set',
+ 'crop', 'cue', 'cue-after', 'cue-before', 'cursor', 'direction', 'display',
+ 'dominant-baseline', 'elevation', 'empty-cells', 'filter', 'flex',
+ 'flex-basis', 'flex-direction', 'flex-flow', 'flex-grow', 'flex-shrink',
+ 'flex-wrap', 'float', 'float-defer', 'float-offset', 'float-reference',
+ 'flood-color', 'flood-opacity', 'flow', 'flow-from', 'flow-into', 'font',
+ 'font-family', 'font-feature-settings', 'font-kerning',
+ 'font-language-override', 'font-size', 'font-size-adjust', 'font-stretch',
+ 'font-style', 'font-synthesis', 'font-variant', 'font-variant-alternates',
+ 'font-variant-caps', 'font-variant-east-asian', 'font-variant-ligatures',
+ 'font-variant-numeric', 'font-variant-position', 'font-weight',
+ 'footnote-display', 'footnote-policy', 'glyph-orientation-vertical',
+ 'grid', 'grid-area', 'grid-auto-columns', 'grid-auto-flow',
+ 'grid-auto-rows', 'grid-column', 'grid-column-end', 'grid-column-gap',
+ 'grid-column-start', 'grid-gap', 'grid-row', 'grid-row-end',
+ 'grid-row-gap', 'grid-row-start', 'grid-template', 'grid-template-areas',
+ 'grid-template-columns', 'grid-template-rows', 'hanging-punctuation',
+ 'height', 'hyphenate-character', 'hyphenate-limit-chars',
+ 'hyphenate-limit-last', 'hyphenate-limit-lines', 'hyphenate-limit-zone',
+ 'hyphens', 'image-orientation', 'image-resolution', 'initial-letter',
+ 'initial-letter-align', 'initial-letter-wrap', 'isolation',
+ 'justify-content', 'justify-items', 'justify-self', 'left',
+ 'letter-spacing', 'lighting-color', 'line-break', 'line-grid',
+ 'line-height', 'line-snap', 'list-style', 'list-style-image',
+ 'list-style-position', 'list-style-type', 'margin', 'margin-bottom',
+ 'margin-left', 'margin-right', 'margin-top', 'marker-side',
+ 'marquee-direction', 'marquee-loop', 'marquee-speed', 'marquee-style',
+ 'mask', 'mask-border', 'mask-border-mode', 'mask-border-outset',
+ 'mask-border-repeat', 'mask-border-slice', 'mask-border-source',
+ 'mask-border-width', 'mask-clip', 'mask-composite', 'mask-image',
+ 'mask-mode', 'mask-origin', 'mask-position', 'mask-repeat', 'mask-size',
+ 'mask-type', 'max-height', 'max-lines', 'max-width', 'min-height',
+ 'min-width', 'mix-blend-mode', 'motion', 'motion-offset', 'motion-path',
+ 'motion-rotation', 'move-to', 'nav-down', 'nav-left', 'nav-right',
+ 'nav-up', 'object-fit', 'object-position', 'offset-after', 'offset-before',
+ 'offset-end', 'offset-start', 'opacity', 'order', 'orphans', 'outline',
+ 'outline-color', 'outline-offset', 'outline-style', 'outline-width',
+ 'overflow', 'overflow-style', 'overflow-wrap', 'overflow-x', 'overflow-y',
+ 'padding', 'padding-bottom', 'padding-left', 'padding-right', 'padding-top',
+ 'page', 'page-break-after', 'page-break-before', 'page-break-inside',
+ 'page-policy', 'pause', 'pause-after', 'pause-before', 'perspective',
+ 'perspective-origin', 'pitch', 'pitch-range', 'play-during', 'polar-angle',
+ 'polar-distance', 'position', 'presentation-level', 'quotes',
+ 'region-fragment', 'resize', 'rest', 'rest-after', 'rest-before',
+ 'richness', 'right', 'rotation', 'rotation-point', 'ruby-align',
+ 'ruby-merge', 'ruby-position', 'running', 'scroll-snap-coordinate',
+ 'scroll-snap-destination', 'scroll-snap-points-x', 'scroll-snap-points-y',
+ 'scroll-snap-type', 'shape-image-threshold', 'shape-inside', 'shape-margin',
+ 'shape-outside', 'size', 'speak', 'speak-as', 'speak-header',
+ 'speak-numeral', 'speak-punctuation', 'speech-rate', 'stress', 'string-set',
+ 'tab-size', 'table-layout', 'text-align', 'text-align-last',
+ 'text-combine-upright', 'text-decoration', 'text-decoration-color',
+ 'text-decoration-line', 'text-decoration-skip', 'text-decoration-style',
+ 'text-emphasis', 'text-emphasis-color', 'text-emphasis-position',
+ 'text-emphasis-style', 'text-indent', 'text-justify', 'text-orientation',
+ 'text-overflow', 'text-shadow', 'text-space-collapse', 'text-space-trim',
+ 'text-spacing', 'text-transform', 'text-underline-position', 'text-wrap',
+ 'top', 'transform', 'transform-origin', 'transform-style', 'transition',
+ 'transition-delay', 'transition-duration', 'transition-property',
+ 'transition-timing-function', 'unicode-bidi', 'user-select',
+ 'vertical-align', 'visibility', 'voice-balance', 'voice-duration',
+ 'voice-family', 'voice-pitch', 'voice-range', 'voice-rate', 'voice-stress',
+ 'voice-volume', 'volume', 'white-space', 'widows', 'width', 'will-change',
+ 'word-break', 'word-spacing', 'word-wrap', 'wrap-after', 'wrap-before',
+ 'wrap-flow', 'wrap-inside', 'wrap-through', 'writing-mode', 'z-index',
+)
+
+# List of keyword values obtained from:
+# http://cssvalues.com/
+_keyword_values = (
+ 'absolute', 'alias', 'all', 'all-petite-caps', 'all-scroll',
+ 'all-small-caps', 'allow-end', 'alpha', 'alternate', 'alternate-reverse',
+ 'always', 'armenian', 'auto', 'avoid', 'avoid-column', 'avoid-page',
+ 'backwards', 'balance', 'baseline', 'below', 'blink', 'block', 'bold',
+ 'bolder', 'border-box', 'both', 'bottom', 'box-decoration', 'break-word',
+ 'capitalize', 'cell', 'center', 'circle', 'clip', 'clone', 'close-quote',
+ 'col-resize', 'collapse', 'color', 'color-burn', 'color-dodge', 'column',
+ 'column-reverse', 'compact', 'condensed', 'contain', 'container',
+ 'content-box', 'context-menu', 'copy', 'cover', 'crisp-edges', 'crosshair',
+ 'currentColor', 'cursive', 'darken', 'dashed', 'decimal',
+ 'decimal-leading-zero', 'default', 'descendants', 'difference', 'digits',
+ 'disc', 'distribute', 'dot', 'dotted', 'double', 'double-circle', 'e-resize',
+ 'each-line', 'ease', 'ease-in', 'ease-in-out', 'ease-out', 'edges',
+ 'ellipsis', 'end', 'ew-resize', 'exclusion', 'expanded', 'extra-condensed',
+ 'extra-expanded', 'fantasy', 'fill', 'fill-box', 'filled', 'first', 'fixed',
+ 'flat', 'flex', 'flex-end', 'flex-start', 'flip', 'force-end', 'forwards',
+ 'from-image', 'full-width', 'geometricPrecision', 'georgian', 'groove',
+ 'hanging', 'hard-light', 'help', 'hidden', 'hide', 'horizontal', 'hue',
+ 'icon', 'infinite', 'inherit', 'initial', 'ink', 'inline', 'inline-block',
+ 'inline-flex', 'inline-table', 'inset', 'inside', 'inter-word', 'invert',
+ 'isolate', 'italic', 'justify', 'large', 'larger', 'last', 'left',
+ 'lighten', 'lighter', 'line-through', 'linear', 'list-item', 'local',
+ 'loose', 'lower-alpha', 'lower-greek', 'lower-latin', 'lower-roman',
+ 'lowercase', 'ltr', 'luminance', 'luminosity', 'mandatory', 'manipulation',
+ 'manual', 'margin-box', 'match-parent', 'medium', 'mixed', 'monospace',
+ 'move', 'multiply', 'n-resize', 'ne-resize', 'nesw-resize',
+ 'no-close-quote', 'no-drop', 'no-open-quote', 'no-repeat', 'none', 'normal',
+ 'not-allowed', 'nowrap', 'ns-resize', 'nw-resize', 'nwse-resize', 'objects',
+ 'oblique', 'off', 'on', 'open', 'open-quote', 'optimizeLegibility',
+ 'optimizeSpeed', 'outset', 'outside', 'over', 'overlay', 'overline',
+ 'padding-box', 'page', 'pan-down', 'pan-left', 'pan-right', 'pan-up',
+ 'pan-x', 'pan-y', 'paused', 'petite-caps', 'pixelated', 'pointer',
+ 'preserve-3d', 'progress', 'proximity', 'relative', 'repeat',
+ 'repeat no-repeat', 'repeat-x', 'repeat-y', 'reverse', 'ridge', 'right',
+ 'round', 'row', 'row-resize', 'row-reverse', 'rtl', 'ruby', 'ruby-base',
+ 'ruby-base-container', 'ruby-text', 'ruby-text-container', 'run-in',
+ 'running', 's-resize', 'sans-serif', 'saturation', 'scale-down', 'screen',
+ 'scroll', 'se-resize', 'semi-condensed', 'semi-expanded', 'separate',
+ 'serif', 'sesame', 'show', 'sideways', 'sideways-left', 'sideways-right',
+ 'slice', 'small', 'small-caps', 'smaller', 'smooth', 'snap', 'soft-light',
+ 'solid', 'space', 'space-around', 'space-between', 'spaces', 'square',
+ 'start', 'static', 'step-end', 'step-start', 'sticky', 'stretch', 'strict',
+ 'stroke-box', 'style', 'sw-resize', 'table', 'table-caption', 'table-cell',
+ 'table-column', 'table-column-group', 'table-footer-group',
+ 'table-header-group', 'table-row', 'table-row-group', 'text', 'thick',
+ 'thin', 'titling-caps', 'to', 'top', 'triangle', 'ultra-condensed',
+ 'ultra-expanded', 'under', 'underline', 'unicase', 'unset', 'upper-alpha',
+ 'upper-latin', 'upper-roman', 'uppercase', 'upright', 'use-glyph-orientation',
+ 'vertical', 'vertical-text', 'view-box', 'visible', 'w-resize', 'wait',
+ 'wavy', 'weight', 'weight style', 'wrap', 'wrap-reverse', 'x-large',
+ 'x-small', 'xx-large', 'xx-small', 'zoom-in', 'zoom-out',
+)
+
+# List of extended color keywords obtained from:
+# https://drafts.csswg.org/css-color/#named-colors
+_color_keywords = (
+ 'aliceblue', 'antiquewhite', 'aqua', 'aquamarine', 'azure', 'beige',
+ 'bisque', 'black', 'blanchedalmond', 'blue', 'blueviolet', 'brown',
+ 'burlywood', 'cadetblue', 'chartreuse', 'chocolate', 'coral',
+ 'cornflowerblue', 'cornsilk', 'crimson', 'cyan', 'darkblue', 'darkcyan',
+ 'darkgoldenrod', 'darkgray', 'darkgreen', 'darkgrey', 'darkkhaki',
+ 'darkmagenta', 'darkolivegreen', 'darkorange', 'darkorchid', 'darkred',
+ 'darksalmon', 'darkseagreen', 'darkslateblue', 'darkslategray',
+ 'darkslategrey', 'darkturquoise', 'darkviolet', 'deeppink', 'deepskyblue',
+ 'dimgray', 'dimgrey', 'dodgerblue', 'firebrick', 'floralwhite',
+ 'forestgreen', 'fuchsia', 'gainsboro', 'ghostwhite', 'gold', 'goldenrod',
+ 'gray', 'green', 'greenyellow', 'grey', 'honeydew', 'hotpink', 'indianred',
+ 'indigo', 'ivory', 'khaki', 'lavender', 'lavenderblush', 'lawngreen',
+ 'lemonchiffon', 'lightblue', 'lightcoral', 'lightcyan',
+ 'lightgoldenrodyellow', 'lightgray', 'lightgreen', 'lightgrey',
+ 'lightpink', 'lightsalmon', 'lightseagreen', 'lightskyblue',
+ 'lightslategray', 'lightslategrey', 'lightsteelblue', 'lightyellow',
+ 'lime', 'limegreen', 'linen', 'magenta', 'maroon', 'mediumaquamarine',
+ 'mediumblue', 'mediumorchid', 'mediumpurple', 'mediumseagreen',
+ 'mediumslateblue', 'mediumspringgreen', 'mediumturquoise',
+ 'mediumvioletred', 'midnightblue', 'mintcream', 'mistyrose', 'moccasin',
+ 'navajowhite', 'navy', 'oldlace', 'olive', 'olivedrab', 'orange',
+ 'orangered', 'orchid', 'palegoldenrod', 'palegreen', 'paleturquoise',
+ 'palevioletred', 'papayawhip', 'peachpuff', 'peru', 'pink', 'plum',
+ 'powderblue', 'purple', 'rebeccapurple', 'red', 'rosybrown', 'royalblue',
+ 'saddlebrown', 'salmon', 'sandybrown', 'seagreen', 'seashell', 'sienna',
+ 'silver', 'skyblue', 'slateblue', 'slategray', 'slategrey', 'snow',
+ 'springgreen', 'steelblue', 'tan', 'teal', 'thistle', 'tomato', 'turquoise',
+ 'violet', 'wheat', 'white', 'whitesmoke', 'yellow', 'yellowgreen',
+) + ('transparent',)
+
+# List of other keyword values from other sources:
+_other_keyword_values = (
+ 'above', 'aural', 'behind', 'bidi-override', 'center-left', 'center-right',
+ 'cjk-ideographic', 'continuous', 'crop', 'cross', 'embed', 'far-left',
+ 'far-right', 'fast', 'faster', 'hebrew', 'high', 'higher', 'hiragana',
+ 'hiragana-iroha', 'katakana', 'katakana-iroha', 'landscape', 'left-side',
+ 'leftwards', 'level', 'loud', 'low', 'lower', 'message-box', 'middle',
+ 'mix', 'narrower', 'once', 'portrait', 'right-side', 'rightwards', 'silent',
+ 'slow', 'slower', 'small-caption', 'soft', 'spell-out', 'status-bar',
+ 'super', 'text-bottom', 'text-top', 'wider', 'x-fast', 'x-high', 'x-loud',
+ 'x-low', 'x-soft', 'yes', 'pre', 'pre-wrap', 'pre-line',
+)
+
+# List of functional notation and function keyword values:
+_functional_notation_keyword_values = (
+ 'attr', 'blackness', 'blend', 'blenda', 'blur', 'brightness', 'calc',
+ 'circle', 'color-mod', 'contrast', 'counter', 'cubic-bezier', 'device-cmyk',
+ 'drop-shadow', 'ellipse', 'gray', 'grayscale', 'hsl', 'hsla', 'hue',
+ 'hue-rotate', 'hwb', 'image', 'inset', 'invert', 'lightness',
+ 'linear-gradient', 'matrix', 'matrix3d', 'opacity', 'perspective',
+ 'polygon', 'radial-gradient', 'rect', 'repeating-linear-gradient',
+ 'repeating-radial-gradient', 'rgb', 'rgba', 'rotate', 'rotate3d', 'rotateX',
+ 'rotateY', 'rotateZ', 'saturate', 'saturation', 'scale', 'scale3d',
+ 'scaleX', 'scaleY', 'scaleZ', 'sepia', 'shade', 'skewX', 'skewY', 'steps',
+ 'tint', 'toggle', 'translate', 'translate3d', 'translateX', 'translateY',
+ 'translateZ', 'whiteness',
+)
+# Note! Handle url(...) separately.
+
+# List of units obtained from:
+# https://www.w3.org/TR/css3-values/
+_angle_units = (
+ 'deg', 'grad', 'rad', 'turn',
+)
+_frequency_units = (
+ 'Hz', 'kHz',
+)
+_length_units = (
+ 'em', 'ex', 'ch', 'rem',
+ 'vh', 'vw', 'vmin', 'vmax',
+ 'px', 'mm', 'cm', 'in', 'pt', 'pc', 'q',
+)
+_resolution_units = (
+ 'dpi', 'dpcm', 'dppx',
+)
+_time_units = (
+ 's', 'ms',
+)
+_all_units = _angle_units + _frequency_units + _length_units + \
+ _resolution_units + _time_units
+
+
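The unit tuples above are joined into a single alternation via `words(...)` in the `numeric-end` state further down; a minimal stdlib sketch of that idea, using an illustrative helper name and a subset of the unit tuples (not part of Pygments):

```python
import re

# Illustrative subset of the unit tuples defined above.
_angle_units = ('deg', 'grad', 'rad', 'turn')
_length_units = ('em', 'ex', 'ch', 'rem', 'px', 'mm', 'cm', 'in', 'pt', 'pc', 'q')
_time_units = ('s', 'ms')
_all_units = _angle_units + _length_units + _time_units

# Longest-first alternation so that e.g. 'ms' is tried before 's',
# mirroring what words(...) builds for the lexer.
_unit_re = re.compile(
    r'([+\-]?[0-9]*\.?[0-9]+)(' +
    '|'.join(sorted(_all_units, key=len, reverse=True)) +
    '|%)')

def split_dimension(token):
    """Split a CSS dimension like '1.5em' into (number, unit), or None."""
    m = _unit_re.fullmatch(token)
    return (m.group(1), m.group(2)) if m else None
```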
class CssLexer(RegexLexer):
"""
For CSS (Cascading Style Sheets).
(r'\s+', Text),
(r'/\*(?:.|\n)*?\*/', Comment),
(r'\{', Punctuation, 'content'),
- (r'\:[\w-]+', Name.Decorator),
- (r'\.[\w-]+', Name.Class),
- (r'\#[\w-]+', Name.Namespace),
- (r'@[\w-]+', Keyword, 'atrule'),
+ (r'(\:{1,2})([\w-]+)', bygroups(Punctuation, Name.Decorator)),
+ (r'(\.)([\w-]+)', bygroups(Punctuation, Name.Class)),
+ (r'(\#)([\w-]+)', bygroups(Punctuation, Name.Namespace)),
+ (r'(@)([\w-]+)', bygroups(Punctuation, Keyword), 'atrule'),
(r'[\w-]+', Name.Tag),
(r'[~^*!%&$\[\]()<>|+=@:;,./?-]', Operator),
(r'"(\\\\|\\"|[^"])*"', String.Double),
'content': [
(r'\s+', Text),
(r'\}', Punctuation, '#pop'),
- (r'url\(.*?\)', String.Other),
+ (r';', Punctuation),
(r'^@.*?$', Comment.Preproc),
- (words((
- 'azimuth', 'background-attachment', 'background-color',
- 'background-image', 'background-position', 'background-repeat',
- 'background', 'border-bottom-color', 'border-bottom-style',
- 'border-bottom-width', 'border-left-color', 'border-left-style',
- 'border-left-width', 'border-right', 'border-right-color',
- 'border-right-style', 'border-right-width', 'border-top-color',
- 'border-top-style', 'border-top-width', 'border-bottom',
- 'border-collapse', 'border-left', 'border-width', 'border-color',
- 'border-spacing', 'border-style', 'border-top', 'border', 'caption-side',
- 'clear', 'clip', 'color', 'content', 'counter-increment', 'counter-reset',
- 'cue-after', 'cue-before', 'cue', 'cursor', 'direction', 'display',
- 'elevation', 'empty-cells', 'float', 'font-family', 'font-size',
- 'font-size-adjust', 'font-stretch', 'font-style', 'font-variant',
- 'font-weight', 'font', 'height', 'letter-spacing', 'line-height',
- 'list-style-type', 'list-style-image', 'list-style-position',
- 'list-style', 'margin-bottom', 'margin-left', 'margin-right',
- 'margin-top', 'margin', 'marker-offset', 'marks', 'max-height', 'max-width',
- 'min-height', 'min-width', 'opacity', 'orphans', 'outline-color',
- 'outline-style', 'outline-width', 'outline', 'overflow', 'overflow-x',
- 'overflow-y', 'padding-bottom', 'padding-left', 'padding-right', 'padding-top',
- 'padding', 'page', 'page-break-after', 'page-break-before', 'page-break-inside',
- 'pause-after', 'pause-before', 'pause', 'pitch-range', 'pitch',
- 'play-during', 'position', 'quotes', 'richness', 'right', 'size',
- 'speak-header', 'speak-numeral', 'speak-punctuation', 'speak',
- 'speech-rate', 'stress', 'table-layout', 'text-align', 'text-decoration',
- 'text-indent', 'text-shadow', 'text-transform', 'top', 'unicode-bidi',
- 'vertical-align', 'visibility', 'voice-family', 'volume', 'white-space',
- 'widows', 'width', 'word-spacing', 'z-index', 'bottom',
- 'above', 'absolute', 'always', 'armenian', 'aural', 'auto', 'avoid', 'baseline',
- 'behind', 'below', 'bidi-override', 'blink', 'block', 'bolder', 'bold', 'both',
- 'capitalize', 'center-left', 'center-right', 'center', 'circle',
- 'cjk-ideographic', 'close-quote', 'collapse', 'condensed', 'continuous',
- 'crop', 'crosshair', 'cross', 'cursive', 'dashed', 'decimal-leading-zero',
- 'decimal', 'default', 'digits', 'disc', 'dotted', 'double', 'e-resize', 'embed',
- 'extra-condensed', 'extra-expanded', 'expanded', 'fantasy', 'far-left',
- 'far-right', 'faster', 'fast', 'fixed', 'georgian', 'groove', 'hebrew', 'help',
- 'hidden', 'hide', 'higher', 'high', 'hiragana-iroha', 'hiragana', 'icon',
- 'inherit', 'inline-table', 'inline', 'inset', 'inside', 'invert', 'italic',
- 'justify', 'katakana-iroha', 'katakana', 'landscape', 'larger', 'large',
- 'left-side', 'leftwards', 'left', 'level', 'lighter', 'line-through', 'list-item',
- 'loud', 'lower-alpha', 'lower-greek', 'lower-roman', 'lowercase', 'ltr',
- 'lower', 'low', 'medium', 'message-box', 'middle', 'mix', 'monospace',
- 'n-resize', 'narrower', 'ne-resize', 'no-close-quote', 'no-open-quote',
- 'no-repeat', 'none', 'normal', 'nowrap', 'nw-resize', 'oblique', 'once',
- 'open-quote', 'outset', 'outside', 'overline', 'pointer', 'portrait', 'px',
- 'relative', 'repeat-x', 'repeat-y', 'repeat', 'rgb', 'ridge', 'right-side',
- 'rightwards', 's-resize', 'sans-serif', 'scroll', 'se-resize',
- 'semi-condensed', 'semi-expanded', 'separate', 'serif', 'show', 'silent',
- 'slower', 'slow', 'small-caps', 'small-caption', 'smaller', 'soft', 'solid',
- 'spell-out', 'square', 'static', 'status-bar', 'super', 'sw-resize',
- 'table-caption', 'table-cell', 'table-column', 'table-column-group',
- 'table-footer-group', 'table-header-group', 'table-row',
- 'table-row-group', 'text-bottom', 'text-top', 'text', 'thick', 'thin',
- 'transparent', 'ultra-condensed', 'ultra-expanded', 'underline',
- 'upper-alpha', 'upper-latin', 'upper-roman', 'uppercase', 'url',
- 'visible', 'w-resize', 'wait', 'wider', 'x-fast', 'x-high', 'x-large', 'x-loud',
- 'x-low', 'x-small', 'x-soft', 'xx-large', 'xx-small', 'yes'), suffix=r'\b'),
- Name.Builtin),
- (words((
- 'indigo', 'gold', 'firebrick', 'indianred', 'yellow', 'darkolivegreen',
- 'darkseagreen', 'mediumvioletred', 'mediumorchid', 'chartreuse',
- 'mediumslateblue', 'black', 'springgreen', 'crimson', 'lightsalmon', 'brown',
- 'turquoise', 'olivedrab', 'cyan', 'silver', 'skyblue', 'gray', 'darkturquoise',
- 'goldenrod', 'darkgreen', 'darkviolet', 'darkgray', 'lightpink', 'teal',
- 'darkmagenta', 'lightgoldenrodyellow', 'lavender', 'yellowgreen', 'thistle',
- 'violet', 'navy', 'orchid', 'blue', 'ghostwhite', 'honeydew', 'cornflowerblue',
- 'darkblue', 'darkkhaki', 'mediumpurple', 'cornsilk', 'red', 'bisque', 'slategray',
- 'darkcyan', 'khaki', 'wheat', 'deepskyblue', 'darkred', 'steelblue', 'aliceblue',
- 'gainsboro', 'mediumturquoise', 'floralwhite', 'coral', 'purple', 'lightgrey',
- 'lightcyan', 'darksalmon', 'beige', 'azure', 'lightsteelblue', 'oldlace',
- 'greenyellow', 'royalblue', 'lightseagreen', 'mistyrose', 'sienna',
- 'lightcoral', 'orangered', 'navajowhite', 'lime', 'palegreen', 'burlywood',
- 'seashell', 'mediumspringgreen', 'fuchsia', 'papayawhip', 'blanchedalmond',
- 'peru', 'aquamarine', 'white', 'darkslategray', 'ivory', 'dodgerblue',
- 'lemonchiffon', 'chocolate', 'orange', 'forestgreen', 'slateblue', 'olive',
- 'mintcream', 'antiquewhite', 'darkorange', 'cadetblue', 'moccasin',
- 'limegreen', 'saddlebrown', 'darkslateblue', 'lightskyblue', 'deeppink',
- 'plum', 'aqua', 'darkgoldenrod', 'maroon', 'sandybrown', 'magenta', 'tan',
- 'rosybrown', 'pink', 'lightblue', 'palevioletred', 'mediumseagreen',
- 'dimgray', 'powderblue', 'seagreen', 'snow', 'mediumblue', 'midnightblue',
- 'paleturquoise', 'palegoldenrod', 'whitesmoke', 'darkorchid', 'salmon',
- 'lightslategray', 'lawngreen', 'lightgreen', 'tomato', 'hotpink',
- 'lightyellow', 'lavenderblush', 'linen', 'mediumaquamarine', 'green',
- 'blueviolet', 'peachpuff'), suffix=r'\b'),
- Name.Builtin),
+
+ (words(_vendor_prefixes,), Keyword.Pseudo),
+ (r'('+r'|'.join(_css_properties)+r')(\s*)(\:)',
+ bygroups(Keyword, Text, Punctuation), 'value-start'),
+ (r'([a-zA-Z_][\w-]*)(\s*)(\:)', bygroups(Name, Text, Punctuation),
+ 'value-start'),
+
+ (r'/\*(?:.|\n)*?\*/', Comment),
+ ],
+ 'value-start': [
+ (r'\s+', Text),
+ (words(_vendor_prefixes,), Name.Builtin.Pseudo),
+ include('urls'),
+ (r'('+r'|'.join(_functional_notation_keyword_values)+r')(\()',
+ bygroups(Name.Builtin, Punctuation), 'function-start'),
+ (r'([a-zA-Z_][\w-]+)(\()', bygroups(Name.Function, Punctuation), 'function-start'),
+ (words(_keyword_values, suffix=r'\b'), Keyword.Constant),
+ (words(_other_keyword_values, suffix=r'\b'), Keyword.Constant),
+ (words(_color_keywords, suffix=r'\b'), Keyword.Constant),
+ (words(_css_properties, suffix=r'\b'), Keyword), # for transition-property etc.
(r'\!important', Comment.Preproc),
(r'/\*(?:.|\n)*?\*/', Comment),
- (r'\#[a-zA-Z0-9]{1,6}', Number),
- (r'[.-]?[0-9]*[.]?[0-9]+(em|px|pt|pc|in|mm|cm|ex|s)\b', Number),
- # Separate regex for percentages, as can't do word boundaries with %
- (r'[.-]?[0-9]*[.]?[0-9]+%', Number),
- (r'-?[0-9]+', Number),
- (r'[~^*!%&<>|+=@:,./?-]+', Operator),
- (r'[\[\]();]+', Punctuation),
+
+ include('numeric-values'),
+
+ (r'[~^*!%&<>|+=@:./?-]+', Operator),
+ (r'[\[\](),]+', Punctuation),
(r'"(\\\\|\\"|[^"])*"', String.Double),
(r"'(\\\\|\\'|[^'])*'", String.Single),
- (r'[a-zA-Z_]\w*', Name)
- ]
+ (r'[a-zA-Z_][\w-]*', Name),
+ (r';', Punctuation, '#pop'),
+ (r'\}', Punctuation, '#pop:2'),
+ ],
+ 'function-start': [
+ (r'\s+', Text),
+ include('urls'),
+ (words(_vendor_prefixes,), Keyword.Pseudo),
+ (words(_keyword_values, suffix=r'\b'), Keyword.Constant),
+ (words(_other_keyword_values, suffix=r'\b'), Keyword.Constant),
+ (words(_color_keywords, suffix=r'\b'), Keyword.Constant),
+
+ # function-start may be entered recursively
+ (r'(' + r'|'.join(_functional_notation_keyword_values) + r')(\()',
+ bygroups(Name.Builtin, Punctuation), 'function-start'),
+ (r'([a-zA-Z_][\w-]+)(\()', bygroups(Name.Function, Punctuation), 'function-start'),
+
+ (r'/\*(?:.|\n)*?\*/', Comment),
+ include('numeric-values'),
+ (r'[*+/-]', Operator),
+ (r'[,]', Punctuation),
+ (r'"(\\\\|\\"|[^"])*"', String.Double),
+ (r"'(\\\\|\\'|[^'])*'", String.Single),
+ (r'[a-zA-Z_-]\w*', Name),
+ (r'\)', Punctuation, '#pop'),
+ ],
+ 'urls': [
+ (r'(url)(\()(".*?")(\))', bygroups(Name.Builtin, Punctuation,
+ String.Double, Punctuation)),
+ (r"(url)(\()('.*?')(\))", bygroups(Name.Builtin, Punctuation,
+ String.Single, Punctuation)),
+ (r'(url)(\()(.*?)(\))', bygroups(Name.Builtin, Punctuation,
+ String.Other, Punctuation)),
+ ],
+ 'numeric-values': [
+ (r'\#[a-zA-Z0-9]{1,6}', Number.Hex),
+ (r'[+\-]?[0-9]*[.][0-9]+', Number.Float, 'numeric-end'),
+ (r'[+\-]?[0-9]+', Number.Integer, 'numeric-end'),
+ ],
+ 'numeric-end': [
+ (words(_all_units, suffix=r'\b'), Keyword.Type),
+ (r'%', Keyword.Type),
+ default('#pop'),
+ ],
}
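The three patterns in the `urls` state above try the double-quoted, single-quoted, and unquoted forms in that order; a minimal stdlib sketch of the same precedence (the function name is illustrative, not part of Pygments):

```python
import re

# Same order as the 'urls' state: quoted forms are tried first,
# the unquoted form is the fallback.
_url_patterns = [
    (re.compile(r'url\("(.*?)"\)'), 'double-quoted'),
    (re.compile(r"url\('(.*?)'\)"), 'single-quoted'),
    (re.compile(r'url\((.*?)\)'), 'unquoted'),
]

def classify_url(token):
    """Return (kind, inner text) for a CSS url(...) token, or None."""
    for pattern, kind in _url_patterns:
        m = pattern.fullmatch(token)
        if m:
            return kind, m.group(1)
    return None
```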
(r'[!$][\w-]+', Name.Variable),
(r'url\(', String.Other, 'string-url'),
(r'[a-z_-][\w-]*(?=\()', Name.Function),
- (words((
- 'azimuth', 'background-attachment', 'background-color',
- 'background-image', 'background-position', 'background-repeat',
- 'background', 'border-bottom-color', 'border-bottom-style',
- 'border-bottom-width', 'border-left-color', 'border-left-style',
- 'border-left-width', 'border-right', 'border-right-color',
- 'border-right-style', 'border-right-width', 'border-top-color',
- 'border-top-style', 'border-top-width', 'border-bottom',
- 'border-collapse', 'border-left', 'border-width', 'border-color',
- 'border-spacing', 'border-style', 'border-top', 'border', 'caption-side',
- 'clear', 'clip', 'color', 'content', 'counter-increment', 'counter-reset',
- 'cue-after', 'cue-before', 'cue', 'cursor', 'direction', 'display',
- 'elevation', 'empty-cells', 'float', 'font-family', 'font-size',
- 'font-size-adjust', 'font-stretch', 'font-style', 'font-variant',
- 'font-weight', 'font', 'height', 'letter-spacing', 'line-height',
- 'list-style-type', 'list-style-image', 'list-style-position',
- 'list-style', 'margin-bottom', 'margin-left', 'margin-right',
- 'margin-top', 'margin', 'marker-offset', 'marks', 'max-height', 'max-width',
- 'min-height', 'min-width', 'opacity', 'orphans', 'outline', 'outline-color',
- 'outline-style', 'outline-width', 'overflow', 'padding-bottom',
- 'padding-left', 'padding-right', 'padding-top', 'padding', 'page',
- 'page-break-after', 'page-break-before', 'page-break-inside',
- 'pause-after', 'pause-before', 'pause', 'pitch', 'pitch-range',
- 'play-during', 'position', 'quotes', 'richness', 'right', 'size',
- 'speak-header', 'speak-numeral', 'speak-punctuation', 'speak',
- 'speech-rate', 'stress', 'table-layout', 'text-align', 'text-decoration',
- 'text-indent', 'text-shadow', 'text-transform', 'top', 'unicode-bidi',
- 'vertical-align', 'visibility', 'voice-family', 'volume', 'white-space',
- 'widows', 'width', 'word-spacing', 'z-index', 'bottom', 'left',
+ (words(_css_properties + (
'above', 'absolute', 'always', 'armenian', 'aural', 'auto', 'avoid', 'baseline',
'behind', 'below', 'bidi-override', 'blink', 'block', 'bold', 'bolder', 'both',
'capitalize', 'center-left', 'center-right', 'center', 'circle',
'visible', 'w-resize', 'wait', 'wider', 'x-fast', 'x-high', 'x-large', 'x-loud',
'x-low', 'x-small', 'x-soft', 'xx-large', 'xx-small', 'yes'), suffix=r'\b'),
Name.Constant),
- (words((
- 'indigo', 'gold', 'firebrick', 'indianred', 'darkolivegreen',
- 'darkseagreen', 'mediumvioletred', 'mediumorchid', 'chartreuse',
- 'mediumslateblue', 'springgreen', 'crimson', 'lightsalmon', 'brown',
- 'turquoise', 'olivedrab', 'cyan', 'skyblue', 'darkturquoise',
- 'goldenrod', 'darkgreen', 'darkviolet', 'darkgray', 'lightpink',
- 'darkmagenta', 'lightgoldenrodyellow', 'lavender', 'yellowgreen', 'thistle',
- 'violet', 'orchid', 'ghostwhite', 'honeydew', 'cornflowerblue',
- 'darkblue', 'darkkhaki', 'mediumpurple', 'cornsilk', 'bisque', 'slategray',
- 'darkcyan', 'khaki', 'wheat', 'deepskyblue', 'darkred', 'steelblue', 'aliceblue',
- 'gainsboro', 'mediumturquoise', 'floralwhite', 'coral', 'lightgrey',
- 'lightcyan', 'darksalmon', 'beige', 'azure', 'lightsteelblue', 'oldlace',
- 'greenyellow', 'royalblue', 'lightseagreen', 'mistyrose', 'sienna',
- 'lightcoral', 'orangered', 'navajowhite', 'palegreen', 'burlywood',
- 'seashell', 'mediumspringgreen', 'papayawhip', 'blanchedalmond',
- 'peru', 'aquamarine', 'darkslategray', 'ivory', 'dodgerblue',
- 'lemonchiffon', 'chocolate', 'orange', 'forestgreen', 'slateblue',
- 'mintcream', 'antiquewhite', 'darkorange', 'cadetblue', 'moccasin',
- 'limegreen', 'saddlebrown', 'darkslateblue', 'lightskyblue', 'deeppink',
- 'plum', 'darkgoldenrod', 'sandybrown', 'magenta', 'tan',
- 'rosybrown', 'pink', 'lightblue', 'palevioletred', 'mediumseagreen',
- 'dimgray', 'powderblue', 'seagreen', 'snow', 'mediumblue', 'midnightblue',
- 'paleturquoise', 'palegoldenrod', 'whitesmoke', 'darkorchid', 'salmon',
- 'lightslategray', 'lawngreen', 'lightgreen', 'tomato', 'hotpink',
- 'lightyellow', 'lavenderblush', 'linen', 'mediumaquamarine',
- 'blueviolet', 'peachpuff'), suffix=r'\b'),
- Name.Entity),
+ (words(_color_keywords, suffix=r'\b'), Name.Entity),
(words((
'black', 'silver', 'gray', 'white', 'maroon', 'red', 'purple', 'fuchsia', 'green',
'lime', 'olive', 'yellow', 'navy', 'blue', 'teal', 'aqua'), suffix=r'\b'),
(r'@[\w-]+', Keyword, 'selector'),
(r'(\$[\w-]*\w)([ \t]*:)', bygroups(Name.Variable, Operator), 'value'),
# TODO: broken, and prone to infinite loops.
- #(r'(?=[^;{}][;}])', Name.Attribute, 'attr'),
- #(r'(?=[^;{}:]+:[^a-z])', Name.Attribute, 'attr'),
+ # (r'(?=[^;{}][;}])', Name.Attribute, 'attr'),
+ # (r'(?=[^;{}:]+:[^a-z])', Name.Attribute, 'attr'),
default('selector'),
],
inherit,
],
'content': [
- (r'{', Punctuation, '#push'),
+ (r'\{', Punctuation, '#push'),
inherit,
],
}
Lexers for D languages.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Pygments lexers for Dalvik VM-related languages.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Lexers for data file format.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
from pygments.lexer import RegexLexer, ExtendedRegexLexer, LexerContext, \
include, bygroups, inherit
from pygments.token import Text, Comment, Keyword, Name, String, Number, \
- Punctuation, Literal
+ Punctuation, Literal, Error
-__all__ = ['YamlLexer', 'JsonLexer', 'JsonLdLexer']
+__all__ = ['YamlLexer', 'JsonLexer', 'JsonBareObjectLexer', 'JsonLdLexer']
class YamlLexerContext(LexerContext):
# tags, anchors, aliases
'descriptors': [
# a full-form tag
- (r'!<[\w;/?:@&=+$,.!~*\'()\[\]%-]+>', Keyword.Type),
+ (r'!<[\w#;/?:@&=+$,.!~*\'()\[\]%-]+>', Keyword.Type),
# a tag in the form '!', '!suffix' or '!handle!suffix'
- (r'!(?:[\w-]+)?'
- r'(?:![\w;/?:@&=+$,.!~*\'()\[\]%-]+)?', Keyword.Type),
+ (r'!(?:[\w-]+!)?'
+ r'[\w#;/?:@&=+$,.!~*\'()\[\]%-]+', Keyword.Type),
# an anchor
(r'&[\w-]+', Name.Label),
# an alias
# comma terminates the attribute but expects more
(r',', Punctuation, '#pop'),
# a closing bracket terminates the entire object, so pop twice
- (r'\}', Punctuation, ('#pop', '#pop')),
+ (r'\}', Punctuation, '#pop:2'),
],
# a json object - { attr, attr, ... }
],
}
+
+class JsonBareObjectLexer(JsonLexer):
+ """
+ For JSON data structures whose enclosing curly braces are omitted.
+
+ .. versionadded:: 2.2
+ """
+
+ name = 'JSONBareObject'
+ aliases = ['json-object']
+ filenames = []
+ mimetypes = ['application/json-object']
+
+ tokens = {
+ 'root': [
+ (r'\}', Error),
+ include('objectvalue'),
+ ],
+ 'objectattribute': [
+ (r'\}', Error),
+ inherit,
+ ],
+ }
+
+
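`JsonBareObjectLexer` highlights object bodies whose surrounding braces are omitted, flagging a stray closing brace as an error. A minimal stdlib sketch (not part of Pygments) of treating such a fragment as a regular object by re-wrapping it:

```python
import json

def parse_bare_object(fragment):
    """Parse a brace-less JSON object body like '"a": 1, "b": 2'."""
    # Re-adding the braces turns the bare body back into ordinary JSON.
    return json.loads('{' + fragment + '}')
```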
class JsonLdLexer(JsonLexer):
"""
For `JSON-LD <http://json-ld.org/>`_ linked data.
Lexers for diff/patch formats.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
+import re
+
from pygments.lexer import RegexLexer, include, bygroups
from pygments.token import Text, Comment, Operator, Keyword, Name, Generic, \
Literal
-__all__ = ['DiffLexer', 'DarcsPatchLexer']
+__all__ = ['DiffLexer', 'DarcsPatchLexer', 'WDiffLexer']
class DiffLexer(RegexLexer):
(r'[^\n\[]+', Generic.Deleted),
],
}
+
+
+class WDiffLexer(RegexLexer):
+ """
+ A `wdiff <https://www.gnu.org/software/wdiff/>`_ lexer.
+
+ Note that:
+
+ * It only handles normal wdiff output (produced without options such as -l).
+ * If the target files of wdiff themselves contain "[-", "-]", "{+" or "+}",
+ especially unbalanced ones, this lexer will get confused.
+
+ .. versionadded:: 2.2
+ """
+
+ name = 'WDiff'
+ aliases = ['wdiff']
+ filenames = ['*.wdiff']
+ mimetypes = []
+
+ flags = re.MULTILINE | re.DOTALL
+
+ # We can only assume that a "[-" appearing after a "[-" but before the
+ # matching "-]" is nested, as in wdiff output run through wdiff again.
+ # There is no way to tell wdiff's own markers apart from the same
+ # characters occurring in the original text.
+
+ ins_op = r"\{\+"
+ ins_cl = r"\+\}"
+ del_op = r"\[\-"
+ del_cl = r"\-\]"
+ normal = r'[^{}[\]+-]+' # for performance
+ tokens = {
+ 'root': [
+ (ins_op, Generic.Inserted, 'inserted'),
+ (del_op, Generic.Deleted, 'deleted'),
+ (normal, Text),
+ (r'.', Text),
+ ],
+ 'inserted': [
+ (ins_op, Generic.Inserted, '#push'),
+ (del_op, Generic.Inserted, '#push'),
+ (del_cl, Generic.Inserted, '#pop'),
+
+ (ins_cl, Generic.Inserted, '#pop'),
+ (normal, Generic.Inserted),
+ (r'.', Generic.Inserted),
+ ],
+ 'deleted': [
+ (del_op, Generic.Deleted, '#push'),
+ (ins_op, Generic.Deleted, '#push'),
+ (ins_cl, Generic.Deleted, '#pop'),
+
+ (del_cl, Generic.Deleted, '#pop'),
+ (normal, Generic.Deleted),
+ (r'.', Generic.Deleted),
+ ],
+ }
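As a sketch of the marker syntax the lexer targets, the four wdiff delimiters can be exercised directly with the stdlib `re` module (the sample line is illustrative):

```python
import re

# The four wdiff markers, mirroring the lexer's definitions above.
ins_op, ins_cl = r"\{\+", r"\+\}"
del_op, del_cl = r"\[\-", r"\-\]"

sample = "unchanged [-removed-] {+inserted+} text"
# Non-greedy spans between opening and closing markers.
deleted = re.findall(del_op + r"(.*?)" + del_cl, sample)
inserted = re.findall(ins_op + r"(.*?)" + ins_cl, sample)
```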
Lexers for .net languages.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
import re
from pygments.lexer import RegexLexer, DelegatingLexer, bygroups, include, \
- using, this, default
+ using, this, default, words
from pygments.token import Punctuation, \
Text, Comment, Operator, Keyword, Name, String, Number, Literal, Other
from pygments.util import get_choice_opt, iteritems
filenames = ['*.vb', '*.bas']
mimetypes = ['text/x-vbnet', 'text/x-vba'] # (?)
- uni_name = '[_' + uni.combine('Lu', 'Ll', 'Lt', 'Lm', 'Nl') + ']' + \
- '[' + uni.combine('Lu', 'Ll', 'Lt', 'Lm', 'Nl', 'Nd', 'Pc',
+ uni_name = '[_' + uni.combine('Ll', 'Lt', 'Lm', 'Nl') + ']' + \
+ '[' + uni.combine('Ll', 'Lt', 'Lm', 'Nl', 'Nd', 'Pc',
'Cf', 'Mn', 'Mc') + ']*'
flags = re.MULTILINE | re.IGNORECASE
(r'[(){}!#,.:]', Punctuation),
(r'Option\s+(Strict|Explicit|Compare)\s+'
r'(On|Off|Binary|Text)', Keyword.Declaration),
- (r'(?<!\.)(AddHandler|Alias|'
- r'ByRef|ByVal|Call|Case|Catch|CBool|CByte|CChar|CDate|'
- r'CDec|CDbl|CInt|CLng|CObj|Continue|CSByte|CShort|'
- r'CSng|CStr|CType|CUInt|CULng|CUShort|Declare|'
- r'Default|Delegate|DirectCast|Do|Each|Else|ElseIf|'
- r'EndIf|Erase|Error|Event|Exit|False|Finally|For|'
- r'Friend|Get|Global|GoSub|GoTo|Handles|If|'
- r'Implements|Inherits|Interface|'
- r'Let|Lib|Loop|Me|MustInherit|'
- r'MustOverride|MyBase|MyClass|Narrowing|New|Next|'
- r'Not|Nothing|NotInheritable|NotOverridable|Of|On|'
- r'Operator|Option|Optional|Overloads|Overridable|'
- r'Overrides|ParamArray|Partial|Private|Protected|'
- r'Public|RaiseEvent|ReadOnly|ReDim|RemoveHandler|Resume|'
- r'Return|Select|Set|Shadows|Shared|Single|'
- r'Static|Step|Stop|SyncLock|Then|'
- r'Throw|To|True|Try|TryCast|Wend|'
- r'Using|When|While|Widening|With|WithEvents|'
- r'WriteOnly)\b', Keyword),
+ (words((
+ 'AddHandler', 'Alias', 'ByRef', 'ByVal', 'Call', 'Case',
+ 'Catch', 'CBool', 'CByte', 'CChar', 'CDate', 'CDec', 'CDbl',
+ 'CInt', 'CLng', 'CObj', 'Continue', 'CSByte', 'CShort', 'CSng',
+ 'CStr', 'CType', 'CUInt', 'CULng', 'CUShort', 'Declare',
+ 'Default', 'Delegate', 'DirectCast', 'Do', 'Each', 'Else',
+ 'ElseIf', 'EndIf', 'Erase', 'Error', 'Event', 'Exit', 'False',
+ 'Finally', 'For', 'Friend', 'Get', 'Global', 'GoSub', 'GoTo',
+ 'Handles', 'If', 'Implements', 'Inherits', 'Interface', 'Let',
+ 'Lib', 'Loop', 'Me', 'MustInherit', 'MustOverride', 'MyBase',
+ 'MyClass', 'Narrowing', 'New', 'Next', 'Not', 'Nothing',
+ 'NotInheritable', 'NotOverridable', 'Of', 'On', 'Operator',
+ 'Option', 'Optional', 'Overloads', 'Overridable', 'Overrides',
+ 'ParamArray', 'Partial', 'Private', 'Protected', 'Public',
+ 'RaiseEvent', 'ReadOnly', 'ReDim', 'RemoveHandler', 'Resume',
+ 'Return', 'Select', 'Set', 'Shadows', 'Shared', 'Single',
+ 'Static', 'Step', 'Stop', 'SyncLock', 'Then', 'Throw', 'To',
+ 'True', 'Try', 'TryCast', 'Wend', 'Using', 'When', 'While',
+ 'Widening', 'With', 'WithEvents', 'WriteOnly'),
+ prefix=r'(?<!\.)', suffix=r'\b'), Keyword),
(r'(?<!\.)End\b', Keyword, 'end'),
(r'(?<!\.)(Dim|Const)\b', Keyword, 'dim'),
(r'(?<!\.)(Function|Sub|Property)(\s+)',
Lexers for various domain-specific languages.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
import re
-from pygments.lexer import RegexLexer, bygroups, words, include, default, \
- this, using, combined
+from pygments.lexer import ExtendedRegexLexer, RegexLexer, bygroups, words, \
+ include, default, this, using, combined
from pygments.token import Text, Comment, Operator, Keyword, Name, String, \
Number, Punctuation, Literal, Whitespace
__all__ = ['ProtoBufLexer', 'BroLexer', 'PuppetLexer', 'RslLexer',
'MscgenLexer', 'VGLLexer', 'AlloyLexer', 'PanLexer',
- 'CrmshLexer', 'ThriftLexer']
+ 'CrmshLexer', 'ThriftLexer', 'FlatlineLexer', 'SnowballLexer']
class ProtoBufLexer(RegexLexer):
tokens = {
'root': [
(r'[ \t]+', Text),
- (r'[,;{}\[\]()]', Punctuation),
+ (r'[,;{}\[\]()<>]', Punctuation),
(r'/(\\\n)?/(\n|(.|\n)*?[^\\]\n)', Comment.Single),
(r'/(\\\n)?\*(.|\n)*?\*(\\\n)?/', Comment.Multiline),
(words((
include('keywords'),
include('numbers'),
(r'[&=]', Operator),
- (r'[:;\,\{\}\(\)\<>\[\]]', Punctuation),
- (r'[a-zA-Z_](\.[a-zA-Z_0-9]|[a-zA-Z_0-9])*', Name),
+ (r'[:;,{}()<>\[\]]', Punctuation),
+ (r'[a-zA-Z_](\.\w|\w)*', Name),
],
'whitespace': [
(r'\n', Text.Whitespace),
(r'[^\\\'\n]+', String.Single),
],
'namespace': [
- (r'[a-z\*](\.[a-zA-Z_0-9]|[a-zA-Z_0-9])*', Name.Namespace, '#pop'),
+ (r'[a-z*](\.\w|\w)*', Name.Namespace, '#pop'),
default('#pop'),
],
'class': [
Keyword.Namespace),
(words((
'void', 'bool', 'byte', 'i16', 'i32', 'i64', 'double',
- 'string', 'binary', 'void', 'map', 'list', 'set', 'slist',
+ 'string', 'binary', 'map', 'list', 'set', 'slist',
'senum'), suffix=r'\b'),
Keyword.Type),
(words((
'if', 'for', 'with', 'else', 'type', 'bind', 'while', 'valid', 'final',
'prefix', 'unique', 'object', 'foreach', 'include', 'template',
'function', 'variable', 'structure', 'extensible', 'declaration'),
- prefix=r'\b', suffix=r'\s*\b'),
+ prefix=r'\b', suffix=r'\s*\b'),
Keyword),
(words((
'file_contents', 'format', 'index', 'length', 'match', 'matches',
'is_number', 'is_property', 'is_resource', 'is_string', 'to_boolean',
'to_double', 'to_long', 'to_string', 'clone', 'delete', 'exists',
'path_exists', 'if_exists', 'return', 'value'),
- prefix=r'\b', suffix=r'\s*\b'),
+ prefix=r'\b', suffix=r'\s*\b'),
Name.Builtin),
(r'#.*', Comment),
(r'\\[\w\W]', String.Escape),
(r'\s+|\n', Whitespace),
],
}
+
+
+class FlatlineLexer(RegexLexer):
+ """
+ Lexer for `Flatline <https://github.com/bigmlcom/flatline>`_ expressions.
+
+ .. versionadded:: 2.2
+ """
+ name = 'Flatline'
+ aliases = ['flatline']
+ filenames = []
+ mimetypes = ['text/x-flatline']
+
+ special_forms = ('let',)
+
+ builtins = (
+ "!=", "*", "+", "-", "<", "<=", "=", ">", ">=", "abs", "acos", "all",
+ "all-but", "all-with-defaults", "all-with-numeric-default", "and",
+ "asin", "atan", "avg", "avg-window", "bin-center", "bin-count", "call",
+ "category-count", "ceil", "cond", "cond-window", "cons", "cos", "cosh",
+ "count", "diff-window", "div", "ensure-value", "ensure-weighted-value",
+ "epoch", "epoch-day", "epoch-fields", "epoch-hour", "epoch-millisecond",
+ "epoch-minute", "epoch-month", "epoch-second", "epoch-weekday",
+ "epoch-year", "exp", "f", "field", "field-prop", "fields", "filter",
+ "first", "floor", "head", "if", "in", "integer", "language", "length",
+ "levenshtein", "linear-regression", "list", "ln", "log", "log10", "map",
+ "matches", "matches?", "max", "maximum", "md5", "mean", "median", "min",
+ "minimum", "missing", "missing-count", "missing?", "missing_count",
+ "mod", "mode", "normalize", "not", "nth", "occurrences", "or",
+ "percentile", "percentile-label", "population", "population-fraction",
+ "pow", "preferred", "preferred?", "quantile-label", "rand", "rand-int",
+ "random-value", "re-quote", "real", "replace", "replace-first", "rest",
+ "round", "row-number", "segment-label", "sha1", "sha256", "sin", "sinh",
+ "sqrt", "square", "standard-deviation", "standard_deviation", "str",
+ "subs", "sum", "sum-squares", "sum-window", "sum_squares", "summary",
+ "summary-no", "summary-str", "tail", "tan", "tanh", "to-degrees",
+ "to-radians", "variance", "vectorize", "weighted-random-value", "window",
+ "winnow", "within-percentiles?", "z-score",
+ )
+
+ valid_name = r'(?!#)[\w!$%*+<=>?/.#-]+'
+
+ tokens = {
+ 'root': [
+ # whitespaces - usually not relevant
+ (r'[,\s]+', Text),
+
+ # numbers
+ (r'-?\d+\.\d+', Number.Float),
+ (r'-?\d+', Number.Integer),
+ (r'0x-?[a-f\d]+', Number.Hex),
+
+ # strings, symbols and characters
+ (r'"(\\\\|\\"|[^"])*"', String),
+ (r"\\(.|[a-z]+)", String.Char),
+
+ # expression template placeholder
+ (r'_', String.Symbol),
+
+ # highlight the special forms
+ (words(special_forms, suffix=' '), Keyword),
+
+ # highlight the builtins
+ (words(builtins, suffix=' '), Name.Builtin),
+
+ # the remaining functions
+ (r'(?<=\()' + valid_name, Name.Function),
+
+ # find the remaining variables
+ (valid_name, Name.Variable),
+
+ # parentheses
+ (r'(\(|\))', Punctuation),
+ ],
+ }
+
+
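The `valid_name` pattern above allows Lisp-style punctuation in identifiers; the leading negative lookahead only rules out names starting with `#`. A quick check with example names:

```python
import re

# Same pattern as the lexer's valid_name.
valid_name = r'(?!#)[\w!$%*+<=>?/.#-]+'

predicate = re.fullmatch(valid_name, 'missing?')  # predicate-style builtin
hash_name = re.fullmatch(valid_name, '#illegal')  # rejected by the lookahead
```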
+class SnowballLexer(ExtendedRegexLexer):
+ """
+ Lexer for `Snowball <http://snowballstem.org/>`_ source code.
+
+ .. versionadded:: 2.2
+ """
+
+ name = 'Snowball'
+ aliases = ['snowball']
+ filenames = ['*.sbl']
+
+ _ws = r'\n\r\t '
+
+ def __init__(self, **options):
+ self._reset_stringescapes()
+ ExtendedRegexLexer.__init__(self, **options)
+
+ def _reset_stringescapes(self):
+ self._start = "'"
+ self._end = "'"
+
+ def _string(do_string_first):
+ def callback(lexer, match, ctx):
+ s = match.start()
+ text = match.group()
+ string = re.compile(r'([^%s]*)(.)' % re.escape(lexer._start)).match
+ escape = re.compile(r'([^%s]*)(.)' % re.escape(lexer._end)).match
+ pos = 0
+ do_string = do_string_first
+ while pos < len(text):
+ if do_string:
+ match = string(text, pos)
+ yield s + match.start(1), String.Single, match.group(1)
+ if match.group(2) == "'":
+ yield s + match.start(2), String.Single, match.group(2)
+ ctx.stack.pop()
+ break
+ yield s + match.start(2), String.Escape, match.group(2)
+ pos = match.end()
+ match = escape(text, pos)
+ yield s + match.start(), String.Escape, match.group()
+ if match.group(2) != lexer._end:
+ ctx.stack[-1] = 'escape'
+ break
+ pos = match.end()
+ do_string = True
+ ctx.pos = s + match.end()
+ return callback
+
+ def _stringescapes(lexer, match, ctx):
+ lexer._start = match.group(3)
+ lexer._end = match.group(5)
+ return bygroups(Keyword.Reserved, Text, String.Escape, Text,
+ String.Escape)(lexer, match, ctx)
+
+ tokens = {
+ 'root': [
+ (words(('len', 'lenof'), suffix=r'\b'), Operator.Word),
+ include('root1'),
+ ],
+ 'root1': [
+ (r'[%s]+' % _ws, Text),
+ (r'\d+', Number.Integer),
+ (r"'", String.Single, 'string'),
+ (r'[()]', Punctuation),
+ (r'/\*[\w\W]*?\*/', Comment.Multiline),
+ (r'//.*', Comment.Single),
+ (r'[!*+\-/<=>]=|[-=]>|<[+-]|[$*+\-/<=>?\[\]]', Operator),
+ (words(('as', 'get', 'hex', 'among', 'define', 'decimal',
+ 'backwardmode'), suffix=r'\b'),
+ Keyword.Reserved),
+ (words(('strings', 'booleans', 'integers', 'routines', 'externals',
+ 'groupings'), suffix=r'\b'),
+ Keyword.Reserved, 'declaration'),
+ (words(('do', 'or', 'and', 'for', 'hop', 'non', 'not', 'set', 'try',
+ 'fail', 'goto', 'loop', 'next', 'test', 'true',
+ 'false', 'unset', 'atmark', 'attach', 'delete', 'gopast',
+ 'insert', 'repeat', 'sizeof', 'tomark', 'atleast',
+ 'atlimit', 'reverse', 'setmark', 'tolimit', 'setlimit',
+ 'backwards', 'substring'), suffix=r'\b'),
+ Operator.Word),
+ (words(('size', 'limit', 'cursor', 'maxint', 'minint'),
+ suffix=r'\b'),
+ Name.Builtin),
+ (r'(stringdef\b)([%s]*)([^%s]+)' % (_ws, _ws),
+ bygroups(Keyword.Reserved, Text, String.Escape)),
+ (r'(stringescapes\b)([%s]*)(.)([%s]*)(.)' % (_ws, _ws),
+ _stringescapes),
+ (r'[A-Za-z]\w*', Name),
+ ],
+ 'declaration': [
+ (r'\)', Punctuation, '#pop'),
+ (words(('len', 'lenof'), suffix=r'\b'), Name,
+ ('root1', 'declaration')),
+ include('root1'),
+ ],
+ 'string': [
+ (r"[^']*'", _string(True)),
+ ],
+ 'escape': [
+ (r"[^']*'", _string(False)),
+ ],
+ }
+
+ def get_tokens_unprocessed(self, text=None, context=None):
+ self._reset_stringescapes()
+ return ExtendedRegexLexer.get_tokens_unprocessed(self, text, context)
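Snowball's `stringescapes` directive lets a program choose its own escape delimiters, which is why this lexer rebuilds its string patterns at runtime from `_start`/`_end`. A rough sketch of the pattern the `_string` callback compiles, with delimiters chosen purely for illustration:

```python
import re

# With e.g. `stringescapes { }`, text inside a literal up to the next '{'
# is plain string content; the character after it is the delimiter itself.
start = '{'
string = re.compile(r'([^%s]*)(.)' % re.escape(start))

m = string.match("abc{U+0041}def'")
plain, delim = m.group(1), m.group(2)
```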
Lexers for the Dylan language.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Lexers for the ECL language.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Lexer for the Eiffel language.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Lexer for the Elm programming language.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
'root': [
# Comments
- (r'{-', Comment.Multiline, 'comment'),
+ (r'\{-', Comment.Multiline, 'comment'),
(r'--.*', Comment.Single),
# Whitespace
(validName, Name.Variable),
# Parens
- (r'[,\(\)\[\]{}]', Punctuation),
+ (r'[,()\[\]{}]', Punctuation),
],
'comment': [
- (r'-(?!})', Comment.Multiline),
- (r'{-', Comment.Multiline, 'comment'),
+ (r'-(?!\})', Comment.Multiline),
+ (r'\{-', Comment.Multiline, 'comment'),
(r'[^-}]', Comment.Multiline),
- (r'-}', Comment.Multiline, '#pop'),
+ (r'-\}', Comment.Multiline, '#pop'),
],
'doublequote': [
- (r'\\u[0-9a-fA-F]\{4}', String.Escape),
- (r'\\[nrfvb\\\"]', String.Escape),
+ (r'\\u[0-9a-fA-F]{4}', String.Escape),
+ (r'\\[nrfvb\\"]', String.Escape),
(r'[^"]', String),
(r'"', String, '#pop'),
],
Lexers for Erlang.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
variable_re = r'(?:[A-Z_]\w*)'
- escape_re = r'(?:\\(?:[bdefnrstv\'"\\/]|[0-7][0-7]?[0-7]?|\^[a-zA-Z]))'
+ esc_char_re = r'[bdefnrstv\'"\\]'
+ esc_octal_re = r'[0-7][0-7]?[0-7]?'
+ esc_hex_re = r'(?:x[0-9a-fA-F]{2}|x\{[0-9a-fA-F]+\})'
+ esc_ctrl_re = r'\^[a-zA-Z]'
+ escape_re = r'(?:\\(?:'+esc_char_re+r'|'+esc_octal_re+r'|'+esc_hex_re+r'|'+esc_ctrl_re+r'))'
macro_re = r'(?:'+variable_re+r'|'+atom_re+r')'
(r'\?'+macro_re, Name.Constant),
(r'\$(?:'+escape_re+r'|\\[ %]|[^\\])', String.Char),
(r'#'+atom_re+r'(:?\.'+atom_re+r')?', Name.Label),
+
+ # Erlang script shebang
+ (r'\A#!.+\n', Comment.Hashbang),
+
+ # EEP 43: Maps
+ # http://www.erlang.org/eeps/eep-0043.html
+ (r'#\{', Punctuation, 'map_key'),
],
'string': [
(escape_re, String.Escape),
(r'"', String, '#pop'),
- (r'~[0-9.*]*[~#+bBcdefginpPswWxX]', String.Interpol),
+ (r'~[0-9.*]*[~#+BPWXb-ginpswx]', String.Interpol),
(r'[^"\\~]+', String),
(r'~', String),
],
bygroups(Name.Entity, Text, Punctuation, Name.Label), '#pop'),
(atom_re, Name.Entity, '#pop'),
],
+ 'map_key': [
+ include('root'),
+ (r'=>', Punctuation, 'map_val'),
+ (r':=', Punctuation, 'map_val'),
+ (r'\}', Punctuation, '#pop'),
+ ],
+ 'map_val': [
+ include('root'),
+ (r',', Punctuation, '#pop'),
+ (r'(?=\})', Punctuation, '#pop'),
+ ],
}
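The new `map_key`/`map_val` states cover the EEP 43 map syntax referenced above. As a minimal sketch of what those states parse, maps open with `#{` and associate keys via `=>` (create) or `:=` (update):

```python
import re

# An illustrative EEP 43 map literal and its key/operator pairs.
sample = '#{answer => 42, "mode" := fast}'
opens_map = re.match(r'#\{', sample)
pairs = re.findall(r'(\w+|"[^"]*")\s*(=>|:=)', sample)
```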
KEYWORD_OPERATOR = ('not', 'and', 'or', 'when', 'in')
BUILTIN = (
'case', 'cond', 'for', 'if', 'unless', 'try', 'receive', 'raise',
- 'quote', 'unquote', 'unquote_splicing', 'throw', 'super'
+ 'quote', 'unquote', 'unquote_splicing', 'throw', 'super',
)
BUILTIN_DECLARATION = (
'def', 'defp', 'defmodule', 'defprotocol', 'defmacro', 'defmacrop',
- 'defdelegate', 'defexception', 'defstruct', 'defimpl', 'defcallback'
+ 'defdelegate', 'defexception', 'defstruct', 'defimpl', 'defcallback',
)
BUILTIN_NAMESPACE = ('import', 'require', 'use', 'alias')
OPERATORS1 = ('<', '>', '+', '-', '*', '/', '!', '^', '&')
PUNCTUATION = (
- '\\\\', '<<', '>>', '=>', '(', ')', ':', ';', ',', '[', ']'
+ '\\\\', '<<', '>>', '=>', '(', ')', ':', ';', ',', '[', ']',
)
def get_tokens_unprocessed(self, text):
Lexers for esoteric languages.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
from pygments.lexer import RegexLexer, include, words
from pygments.token import Text, Comment, Operator, Keyword, Name, String, \
- Number, Punctuation, Error, Whitespace
+ Number, Punctuation, Error
-__all__ = ['BrainfuckLexer', 'BefungeLexer', 'BoogieLexer', 'RedcodeLexer', 'CAmkESLexer']
+__all__ = ['BrainfuckLexer', 'BefungeLexer', 'RedcodeLexer', 'CAmkESLexer',
+ 'CapDLLexer', 'AheuiLexer']
class BrainfuckLexer(RegexLexer):
filenames = ['*.camkes', '*.idl4']
tokens = {
- 'root':[
+ 'root': [
# C pre-processor directive
(r'^\s*#.*\n', Comment.Preproc),
(r'/\*(.|\n)*?\*/', Comment),
(r'//.*\n', Comment),
- (r'[\[\(\){},\.;=\]]', Punctuation),
+ (r'[\[(){},.;\]]', Punctuation),
+ (r'[~!%^&*+=|?:<>/-]', Operator),
(words(('assembly', 'attribute', 'component', 'composition',
'configuration', 'connection', 'connector', 'consumes',
- 'control', 'dataport', 'Dataport', 'emits', 'event',
- 'Event', 'from', 'group', 'hardware', 'has', 'interface',
- 'Interface', 'maybe', 'procedure', 'Procedure', 'provides',
- 'template', 'to', 'uses'), suffix=r'\b'), Keyword),
+ 'control', 'dataport', 'Dataport', 'Dataports', 'emits',
+ 'event', 'Event', 'Events', 'export', 'from', 'group',
+ 'hardware', 'has', 'interface', 'Interface', 'maybe',
+ 'procedure', 'Procedure', 'Procedures', 'provides',
+ 'template', 'thread', 'threads', 'to', 'uses', 'with'),
+ suffix=r'\b'), Keyword),
(words(('bool', 'boolean', 'Buf', 'char', 'character', 'double',
'float', 'in', 'inout', 'int', 'int16_6', 'int32_t',
'int64_t', 'int8_t', 'integer', 'mutex', 'out', 'real',
- 'refin', 'semaphore', 'signed', 'string', 'uint16_t',
- 'uint32_t', 'uint64_t', 'uint8_t', 'uintptr_t', 'unsigned',
- 'void'), suffix=r'\b'), Keyword.Type),
+ 'refin', 'semaphore', 'signed', 'string', 'struct',
+ 'uint16_t', 'uint32_t', 'uint64_t', 'uint8_t', 'uintptr_t',
+ 'unsigned', 'void'),
+ suffix=r'\b'), Keyword.Type),
# Recognised attributes
(r'[a-zA-Z_]\w*_(priority|domain|buffer)', Keyword.Reserved),
(r'-?[\d]+', Number),
(r'-?[\d]+\.[\d]+', Number.Float),
(r'"[^"]*"', String),
+ (r'[Tt]rue|[Ff]alse', Name.Builtin),
# Identifiers
(r'[a-zA-Z_]\w*', Name),
}
+class CapDLLexer(RegexLexer):
+ """
+ Basic lexer for
+ `CapDL <https://ssrg.nicta.com.au/publications/nictaabstracts/Kuz_KLW_10.abstract.pml>`_.
+
+ The source of the primary tool that reads such specifications is available
+ at https://github.com/seL4/capdl/tree/master/capDL-tool. Note that this
+ lexer only supports a subset of the grammar. For example, identifiers can
+ shadow type names, but these instances are currently incorrectly
+ highlighted as types. Supporting this would require a stateful lexer,
+ which is considered unnecessarily complex for now.
+
+ .. versionadded:: 2.2
+ """
+ name = 'CapDL'
+ aliases = ['capdl']
+ filenames = ['*.cdl']
+
+ tokens = {
+ 'root': [
+ # C pre-processor directive
+ (r'^\s*#.*\n', Comment.Preproc),
+
+ # Whitespace, comments
+ (r'\s+', Text),
+ (r'/\*(.|\n)*?\*/', Comment),
+ (r'(//|--).*\n', Comment),
+
+ (r'[<>\[(){},:;=\]]', Punctuation),
+ (r'\.\.', Punctuation),
+
+ (words(('arch', 'arm11', 'caps', 'child_of', 'ia32', 'irq', 'maps',
+ 'objects'), suffix=r'\b'), Keyword),
+
+ (words(('aep', 'asid_pool', 'cnode', 'ep', 'frame', 'io_device',
+ 'io_ports', 'io_pt', 'notification', 'pd', 'pt', 'tcb',
+ 'ut', 'vcpu'), suffix=r'\b'), Keyword.Type),
+
+ # Properties
+ (words(('asid', 'addr', 'badge', 'cached', 'dom', 'domainID', 'elf',
+ 'fault_ep', 'G', 'guard', 'guard_size', 'init', 'ip',
+ 'prio', 'sp', 'R', 'RG', 'RX', 'RW', 'RWG', 'RWX', 'W',
+ 'WG', 'WX', 'level', 'masked', 'master_reply', 'paddr',
+ 'ports', 'reply', 'uncached'), suffix=r'\b'),
+ Keyword.Reserved),
+
+ # Literals
+ (r'0[xX][\da-fA-F]+', Number.Hex),
+ (r'\d+(\.\d+)?(k|M)?', Number),
+ (words(('bits',), suffix=r'\b'), Number),
+ (words(('cspace', 'vspace', 'reply_slot', 'caller_slot',
+ 'ipc_buffer_slot'), suffix=r'\b'), Number),
+
+ # Identifiers
+ (r'[a-zA-Z_][-@\.\w]*', Name),
+ ],
+ }
+
+
class RedcodeLexer(RegexLexer):
"""
A simple Redcode lexer based on ICWS'94.
}
-class BoogieLexer(RegexLexer):
+class AheuiLexer(RegexLexer):
"""
- For `Boogie <https://boogie.codeplex.com/>`_ source code.
+ Aheui_ Lexer.
+
+ Aheui_ is an esoteric language based on the Korean alphabet.
+
+ .. _Aheui: http://aheui.github.io/
- .. versionadded:: 2.1
"""
- name = 'Boogie'
- aliases = ['boogie']
- filenames = ['*.bpl']
+
+ name = 'Aheui'
+ aliases = ['aheui']
+ filenames = ['*.aheui']
tokens = {
'root': [
- # Whitespace and Comments
- (r'\n', Whitespace),
- (r'\s+', Whitespace),
- (r'//[/!](.*?)\n', Comment.Doc),
- (r'//(.*?)\n', Comment.Single),
- (r'/\*', Comment.Multiline, 'comment'),
-
- (words((
- 'axiom', 'break', 'call', 'ensures', 'else', 'exists', 'function',
- 'forall', 'if', 'invariant', 'modifies', 'procedure', 'requires',
- 'then', 'var', 'while'),
- suffix=r'\b'), Keyword),
- (words(('const',), suffix=r'\b'), Keyword.Reserved),
-
- (words(('bool', 'int', 'ref'), suffix=r'\b'), Keyword.Type),
- include('numbers'),
- (r"(>=|<=|:=|!=|==>|&&|\|\||[+/\-=>*<\[\]])", Operator),
- (r"([{}():;,.])", Punctuation),
- # Identifier
- (r'[a-zA-Z_]\w*', Name),
- ],
- 'comment': [
- (r'[^*/]+', Comment.Multiline),
- (r'/\*', Comment.Multiline, '#push'),
- (r'\*/', Comment.Multiline, '#pop'),
- (r'[*/]', Comment.Multiline),
- ],
- 'numbers': [
- (r'[0-9]+', Number.Integer),
+ (u'['
+ u'나-낳냐-냫너-넣녀-녛노-놓뇨-눟뉴-닇'
+ u'다-닿댜-댷더-덯뎌-뎧도-돟됴-둫듀-딓'
+ u'따-땋땨-떃떠-떻뗘-뗳또-똫뚀-뚷뜌-띟'
+ u'라-랗랴-럏러-렇려-렿로-롷료-뤃류-릫'
+ u'마-맣먀-먛머-멓며-몋모-뫃묘-뭏뮤-믷'
+ u'바-밯뱌-뱧버-벟벼-볗보-봏뵤-붛뷰-빃'
+ u'빠-빻뺘-뺳뻐-뻫뼈-뼣뽀-뽛뾰-뿧쀼-삏'
+ u'사-샇샤-샿서-섷셔-셯소-솧쇼-숳슈-싛'
+ u'싸-쌓쌰-썋써-쎃쎠-쎻쏘-쏳쑈-쑿쓔-씧'
+ u'자-잫쟈-쟣저-젛져-졓조-좋죠-줗쥬-즿'
+ u'차-챃챠-챻처-첳쳐-쳫초-촣쵸-춯츄-칗'
+ u'카-캏캬-컇커-컿켜-켷코-콯쿄-쿻큐-킣'
+ u'타-탛탸-턓터-텋텨-톃토-톻툐-퉇튜-틯'
+ u'파-팧퍄-퍟퍼-펗펴-폏포-퐇표-풓퓨-픻'
+ u'하-핳햐-햫허-헣혀-혛호-홓효-훟휴-힇'
+ u']', Operator),
+ ('.', Comment),
],
}
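The operator rule above is one large character class of Hangul syllable ranges; anything outside it falls through to `Comment`. A reduced sketch using only the first range (the full class in the lexer covers many more rows):

```python
import re

# First row of the lexer's Hangul class: syllables from 나 to 낳.
op_chars = re.compile(u'[나-낳]')

instruction = op_chars.match(u'나')  # in-range Hangul syllable: matches
latin = op_chars.match(u'a')         # Latin text would be lexed as Comment
```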
~~~~~~~~~~~~~~~~~~~~~
Pygments lexers for Ezhil language.
-
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
__all__ = ['EzhilLexer']
+
class EzhilLexer(RegexLexer):
"""
Lexer for `Ezhil, a Tamil script-based programming language <http://ezhillang.org>`_
(r'#.*\n', Comment.Single),
(r'[@+/*,^\-%]|[!<>=]=?|&&?|\|\|?', Operator),
(u'இல்', Operator.Word),
- (words(('assert', 'max', 'min',
- 'நீளம்','சரம்_இடமாற்று','சரம்_கண்டுபிடி',
- 'பட்டியல்','பின்இணை','வரிசைப்படுத்து',
- 'எடு','தலைகீழ்','நீட்டிக்க','நுழைக்க','வை',
- 'கோப்பை_திற','கோப்பை_எழுது','கோப்பை_மூடு',
- 'pi','sin','cos','tan','sqrt','hypot','pow','exp','log','log10'
- 'min','max','exit',
+ (words((u'assert', u'max', u'min',
+ u'நீளம்', u'சரம்_இடமாற்று', u'சரம்_கண்டுபிடி',
+ u'பட்டியல்', u'பின்இணை', u'வரிசைப்படுத்து',
+ u'எடு', u'தலைகீழ்', u'நீட்டிக்க', u'நுழைக்க', u'வை',
+ u'கோப்பை_திற', u'கோப்பை_எழுது', u'கோப்பை_மூடு',
+ u'pi', u'sin', u'cos', u'tan', u'sqrt', u'hypot', u'pow',
+ u'exp', u'log', u'log10', u'exit',
), suffix=r'\b'), Name.Builtin),
(r'(True|False)\b', Keyword.Constant),
(r'[^\S\n]+', Text),
(r'(?u)\d+', Number.Integer),
]
}
-
+
def __init__(self, **options):
super(EzhilLexer, self).__init__(**options)
self.encoding = options.get('encoding', 'utf-8')
Lexers for the Factor language.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Lexer for the Fantom language.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Lexer for the Felix language.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
],
'strings': [
(r'%(\([a-zA-Z0-9]+\))?[-#0 +]*([0-9]+|[*])?(\.([0-9]+|[*]))?'
- '[hlL]?[diouxXeEfFgGcrs%]', String.Interpol),
+ '[hlL]?[E-GXc-giorsux%]', String.Interpol),
(r'[^\\\'"%\n]+', String),
# quotes, percents and backslashes must be parsed one at a time
(r'[\'"\\]', String),
--- /dev/null
+# -*- coding: utf-8 -*-
+"""
+ pygments.lexers.forth
+ ~~~~~~~~~~~~~~~~~~~~~
+
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
+ :license: BSD, see LICENSE for details.
+"""
+
+import re
+
+from pygments.lexer import RegexLexer, include, bygroups
+from pygments.token import Error, Punctuation, Literal, Token, \
+ Text, Comment, Operator, Keyword, Name, String, Number, Generic
+
+
+__all__ = ['ForthLexer']
+
+
+class ForthLexer(RegexLexer):
+ """
+ Lexer for Forth files.
+
+ .. versionadded:: 2.2
+ """
+ name = 'Forth'
+ aliases = ['forth']
+ filenames = ['*.frt', '*.fs']
+ mimetypes = ['application/x-forth']
+
+ delimiter = r'\s'
+ delimiter_end = r'(?=[%s])' % delimiter
+
+ valid_name_chars = r'[^%s]' % delimiter
+ valid_name = r"%s+%s" % (valid_name_chars, delimiter_end)
+
+ flags = re.IGNORECASE | re.MULTILINE
+
+ tokens = {
+ 'root': [
+ (r'\s+', Text),
+ # All comment types
+ (r'\\.*?\n', Comment.Single),
+ (r'\([\s].*?\)', Comment.Single),
+ # defining words. The next word is a new command name
+ (r'(:|variable|constant|value|buffer:)(\s+)',
+ bygroups(Keyword.Namespace, Text), 'worddef'),
+ # strings are rather simple
+ (r'([.sc]")(\s+?)', bygroups(String, Text), 'stringdef'),
+ # keywords from the various wordsets
+ # *** Wordset BLOCK
+ (r'(blk|block|buffer|evaluate|flush|load|save-buffers|update|'
+ # *** Wordset BLOCK-EXT
+ r'empty-buffers|list|refill|scr|thru|'
+ # *** Wordset CORE
+ r'\#s|\*\/mod|\+loop|\/mod|0<|0=|1\+|1-|2!|'
+ r'2\*|2\/|2@|2drop|2dup|2over|2swap|>body|'
+ r'>in|>number|>r|\?dup|abort|abort\"|abs|'
+ r'accept|align|aligned|allot|and|base|begin|'
+ r'bl|c!|c,|c@|cell\+|cells|char|char\+|'
+ r'chars|constant|count|cr|create|decimal|'
+ r'depth|do|does>|drop|dup|else|emit|environment\?|'
+ r'evaluate|execute|exit|fill|find|fm\/mod|'
+ r'here|hold|i|if|immediate|invert|j|key|'
+ r'leave|literal|loop|lshift|m\*|max|min|'
+ r'mod|move|negate|or|over|postpone|quit|'
+ r'r>|r@|recurse|repeat|rot|rshift|s\"|s>d|'
+ r'sign|sm\/rem|source|space|spaces|state|swap|'
+ r'then|type|u\.|u\<|um\*|um\/mod|unloop|until|'
+ r'variable|while|word|xor|\[char\]|\[\'\]|'
+ r'@|!|\#|<\#|\#>|:|;|\+|-|\*|\/|,|<|>|\|1\+|1-|\.|'
+ # *** Wordset CORE-EXT
+ r'\.r|0<>|'
+ r'0>|2>r|2r>|2r@|:noname|\?do|again|c\"|'
+ r'case|compile,|endcase|endof|erase|false|'
+ r'hex|marker|nip|of|pad|parse|pick|refill|'
+ r'restore-input|roll|save-input|source-id|to|'
+ r'true|tuck|u\.r|u>|unused|value|within|'
+ r'\[compile\]|'
+ # *** Wordset CORE-EXT-obsolescent
+ r'\#tib|convert|expect|query|span|'
+ r'tib|'
+ # *** Wordset DOUBLE
+ r'2constant|2literal|2variable|d\+|d-|'
+ r'd\.|d\.r|d0<|d0=|d2\*|d2\/|d<|d=|d>s|'
+ r'dabs|dmax|dmin|dnegate|m\*\/|m\+|'
+ # *** Wordset DOUBLE-EXT
+ r'2rot|du<|'
+ # *** Wordset EXCEPTION
+ r'catch|throw|'
+ # *** Wordset EXCEPTION-EXT
+ r'abort|abort\"|'
+ # *** Wordset FACILITY
+ r'at-xy|key\?|page|'
+ # *** Wordset FACILITY-EXT
+ r'ekey|ekey>char|ekey\?|emit\?|ms|time&date|'
+ # *** Wordset FILE
+ r'BIN|CLOSE-FILE|CREATE-FILE|DELETE-FILE|FILE-POSITION|'
+ r'FILE-SIZE|INCLUDE-FILE|INCLUDED|OPEN-FILE|R\/O|'
+ r'R\/W|READ-FILE|READ-LINE|REPOSITION-FILE|RESIZE-FILE|'
+ r'S\"|SOURCE-ID|W/O|WRITE-FILE|WRITE-LINE|'
+ # *** Wordset FILE-EXT
+ r'FILE-STATUS|FLUSH-FILE|REFILL|RENAME-FILE|'
+ # *** Wordset FLOAT
+ r'>float|d>f|'
+ r'f!|f\*|f\+|f-|f\/|f0<|f0=|f<|f>d|f@|'
+ r'falign|faligned|fconstant|fdepth|fdrop|fdup|'
+ r'fliteral|float\+|floats|floor|fmax|fmin|'
+ r'fnegate|fover|frot|fround|fswap|fvariable|'
+ r'represent|'
+ # *** Wordset FLOAT-EXT
+ r'df!|df@|dfalign|dfaligned|dfloat\+|'
+ r'dfloats|f\*\*|f\.|fabs|facos|facosh|falog|'
+ r'fasin|fasinh|fatan|fatan2|fatanh|fcos|fcosh|'
+ r'fe\.|fexp|fexpm1|fln|flnp1|flog|fs\.|fsin|'
+ r'fsincos|fsinh|fsqrt|ftan|ftanh|f~|precision|'
+ r'set-precision|sf!|sf@|sfalign|sfaligned|sfloat\+|'
+ r'sfloats|'
+ # *** Wordset LOCAL
+ r'\(local\)|to|'
+ # *** Wordset LOCAL-EXT
+ r'locals\||'
+ # *** Wordset MEMORY
+ r'allocate|free|resize|'
+ # *** Wordset SEARCH
+ r'definitions|find|forth-wordlist|get-current|'
+ r'get-order|search-wordlist|set-current|set-order|'
+ r'wordlist|'
+ # *** Wordset SEARCH-EXT
+ r'also|forth|only|order|previous|'
+ # *** Wordset STRING
+ r'-trailing|\/string|blank|cmove|cmove>|compare|'
+ r'search|sliteral|'
+ # *** Wordset TOOLS
+ r'\.s|dump|see|words|'
+ # *** Wordset TOOLS-EXT
+ r';code|'
+ r'ahead|assembler|bye|code|cs-pick|cs-roll|'
+ r'editor|state|\[else\]|\[if\]|\[then\]|'
+ # *** Wordset TOOLS-EXT-obsolescent
+ r'forget|'
+ # Forth 2012
+ r'defer|defer@|defer!|action-of|begin-structure|field:|buffer:|'
+ r'parse-name|buffer:|traverse-wordlist|n>r|nr>|2value|fvalue|'
+ r'name>interpret|name>compile|name>string|'
+ r'cfield:|end-structure)'+delimiter, Keyword),
+
+ # Numbers
+ (r'(\$[0-9A-F]+)', Number.Hex),
+ (r'(\#|%|&|\-|\+)?[0-9]+', Number.Integer),
+ (r'(\#|%|&|\-|\+)?[0-9.]+', Keyword.Type),
+ # amforth specific
+ (r'(@i|!i|@e|!e|pause|noop|turnkey|sleep|'
+ r'itype|icompare|sp@|sp!|rp@|rp!|up@|up!|'
+ r'>a|a>|a@|a!|a@+|a@-|>b|b>|b@|b!|b@+|b@-|'
+ r'find-name|1ms|'
+ r'sp0|rp0|\(evaluate\)|int-trap|int!)' + delimiter,
+ Name.Constant),
+ # a proposal
+ (r'(do-recognizer|r:fail|recognizer:|get-recognizers|'
+ r'set-recognizers|r:float|r>comp|r>int|r>post|'
+ r'r:name|r:word|r:dnum|r:num|recognizer|forth-recognizer|'
+ r'rec:num|rec:float|rec:word)' + delimiter, Name.Decorator),
+ # defining words. The next word is a new command name
+ (r'(Evalue|Rvalue|Uvalue|Edefer|Rdefer|Udefer)(\s+)',
+ bygroups(Keyword.Namespace, Text), 'worddef'),
+
+ (valid_name, Name.Function), # Anything else is executed
+
+ ],
+ 'worddef': [
+ (r'\S+', Name.Class, '#pop'),
+ ],
+ 'stringdef': [
+ (r'[^"]+', String, '#pop'),
+ ],
+ }
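Forth words are delimited purely by whitespace, which is why the keyword patterns above end in the `delimiter_end` lookahead instead of `\b` (names like `2dup` or `+loop` contain no word boundary in the regex sense). A small check of that lookahead:

```python
import re

# Same construction as the lexer's delimiter_end.
delimiter = r'\s'
delimiter_end = r'(?=[%s])' % delimiter

pattern = re.compile(r'dup' + delimiter_end)
whole_word = pattern.match('dup swap')    # whitespace follows: matches
longer_word = pattern.match('dup2 swap')  # part of a longer word: no match
```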
Lexers for Fortran languages.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
import re
-from pygments.lexer import RegexLexer, bygroups, include, words, using
+from pygments.lexer import RegexLexer, bygroups, include, words, using, default
from pygments.token import Text, Comment, Operator, Keyword, Name, String, \
Number, Punctuation, Generic
'nums': [
(r'\d+(?![.e])(_[a-z]\w+)?', Number.Integer),
- (r'[+-]?\d*\.\d+(e[-+]?\d+)?(_[a-z]\w+)?', Number.Float),
- (r'[+-]?\d+\.\d*(e[-+]?\d+)?(_[a-z]\w+)?', Number.Float),
+ (r'[+-]?\d*\.\d+([ed][-+]?\d+)?(_[a-z]\w+)?', Number.Float),
+ (r'[+-]?\d+\.\d*([ed][-+]?\d+)?(_[a-z]\w+)?', Number.Float),
],
}
(r'(.{5})', Name.Label, 'cont-char'),
(r'.*\n', using(FortranLexer)),
],
-
'cont-char': [
(' ', Text, 'code'),
('0', Comment, 'code'),
- ('.', Generic.Strong, 'code')
+ ('.', Generic.Strong, 'code'),
],
-
'code': [
(r'(.{66})(.*)(\n)',
bygroups(_lex_fortran, Comment, Text), 'root'),
(r'(.*)(\n)', bygroups(_lex_fortran, Text), 'root'),
- (r'', Text, 'root')]
+ default('root'),
+ ]
}
Simple lexer for Microsoft Visual FoxPro source code.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Just export lexer classes previously contained in this module.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Lexers for the Google Go language.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Lexers for grammar notations like BNF.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
-from pygments.lexer import RegexLexer, bygroups, words
-from pygments.token import Punctuation, Text, Comment, Operator, \
- Keyword, Name, Literal
+import re
-__all__ = ['BnfLexer', 'AbnfLexer']
+from pygments.lexer import RegexLexer, bygroups, include, this, using, words
+from pygments.token import Comment, Keyword, Literal, Name, Number, \
+ Operator, Punctuation, String, Text
+
+__all__ = ['BnfLexer', 'AbnfLexer', 'JsgfLexer']
class BnfLexer(RegexLexer):
(r'.', Text),
],
}
+
+
+class JsgfLexer(RegexLexer):
+ """
+ For `JSpeech Grammar Format <https://www.w3.org/TR/jsgf/>`_
+ grammars.
+
+ .. versionadded:: 2.2
+ """
+ name = 'JSGF'
+ aliases = ['jsgf']
+ filenames = ['*.jsgf']
+ mimetypes = ['application/jsgf', 'application/x-jsgf', 'text/jsgf']
+
+ flags = re.MULTILINE | re.UNICODE
+
+ tokens = {
+ 'root': [
+ include('comments'),
+ include('non-comments'),
+ ],
+ 'comments': [
+ (r'/\*\*(?!/)', Comment.Multiline, 'documentation comment'),
+ (r'/\*[\w\W]*?\*/', Comment.Multiline),
+ (r'//.*', Comment.Single),
+ ],
+ 'non-comments': [
+ (r'\A#JSGF[^;]*', Comment.Preproc),
+ (r'\s+', Text),
+ (r';', Punctuation),
+ (r'[=|()\[\]*+]', Operator),
+ (r'/[^/]+/', Number.Float),
+ (r'"', String.Double, 'string'),
+ (r'\{', String.Other, 'tag'),
+ (words(('import', 'public'), suffix=r'\b'), Keyword.Reserved),
+ (r'grammar\b', Keyword.Reserved, 'grammar name'),
+ (r'(<)(NULL|VOID)(>)',
+ bygroups(Punctuation, Name.Builtin, Punctuation)),
+ (r'<', Punctuation, 'rulename'),
+ (r'\w+|[^\s;=|()\[\]*+/"{<\w]+', Text),
+ ],
+ 'string': [
+ (r'"', String.Double, '#pop'),
+ (r'\\.', String.Escape),
+ (r'[^\\"]+', String.Double),
+ ],
+ 'tag': [
+ (r'\}', String.Other, '#pop'),
+ (r'\\.', String.Escape),
+ (r'[^\\}]+', String.Other),
+ ],
+ 'grammar name': [
+ (r';', Punctuation, '#pop'),
+ (r'\s+', Text),
+ (r'\.', Punctuation),
+ (r'[^;\s.]+', Name.Namespace),
+ ],
+ 'rulename': [
+ (r'>', Punctuation, '#pop'),
+ (r'\*', Punctuation),
+ (r'\s+', Text),
+ (r'([^.>]+)(\s*)(\.)', bygroups(Name.Namespace, Text, Punctuation)),
+ (r'[^.>]+', Name.Constant),
+ ],
+ 'documentation comment': [
+ (r'\*/', Comment.Multiline, '#pop'),
+ (r'(^\s*\*?\s*)(@(?:example|see)\s+)'
+ r'([\w\W]*?(?=(?:^\s*\*?\s*@|\*/)))',
+ bygroups(Comment.Multiline, Comment.Special,
+ using(this, state='example'))),
+ (r'(^\s*\*?\s*)(@\S*)',
+ bygroups(Comment.Multiline, Comment.Special)),
+ (r'[^*\n@]+|\w|\W', Comment.Multiline),
+ ],
+ 'example': [
+ (r'\n\s*\*', Comment.Multiline),
+ include('non-comments'),
+ (r'.', Comment.Multiline),
+ ],
+ }
Lexers for graph query languages.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Lexers for computer graphics and plotting related languages.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Lexers for Haskell and related languages.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
flags = re.MULTILINE | re.UNICODE
reserved = ('case', 'class', 'data', 'default', 'deriving', 'do', 'else',
- 'if', 'in', 'infix[lr]?', 'instance',
+ 'family', 'if', 'in', 'infix[lr]?', 'instance',
'let', 'newtype', 'of', 'then', 'type', 'where', '_')
ascii = ('NUL', 'SOH', '[SE]TX', 'EOT', 'ENQ', 'ACK',
'BEL', 'BS', 'HT', 'LF', 'VT', 'FF', 'CR', 'S[OI]', 'DLE',
(r'^[_' + uni.Ll + r'][\w\']*', Name.Function),
(r"'?[_" + uni.Ll + r"][\w']*", Name),
(r"('')?[" + uni.Lu + r"][\w\']*", Keyword.Type),
+ (r"(')[" + uni.Lu + r"][\w\']*", Keyword.Type),
+ # tuples and lists get special treatment in GHC
+ (r"(')\[[^\]]*\]", Keyword.Type),
+ (r"(')\([^)]*\)", Keyword.Type),
# Operators
(r'\\(?![:!#$%&*+.\\/<=>?@^|~-]+)', Name.Function), # lambda operator
(r'(<-|::|->|=>|=)(?![:!#$%&*+.\\/<=>?@^|~-]+)', Operator.Word), # specials
'module': [
(r'\{-', Comment.Multiline, 'comment'),
(r'[a-zA-Z][\w.]*', Name, '#pop'),
- (r'[^a-zA-Z]+', Text)
+ (r'[\W0-9_]+', Text)
],
'comment': HaskellLexer.tokens['comment'],
'character': HaskellLexer.tokens['character'],
Lexers for Haxe and related stuff.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Lexers for hardware descriptor languages.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Lexers for hexadecimal dumps.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
-import re
-
from pygments.lexer import RegexLexer, bygroups, include
from pygments.token import Text, Name, Number, String, Punctuation
* ``od -t x1z FILE``
* ``xxd FILE``
* ``DEBUG.EXE FILE.COM`` and entering ``d`` at the prompt.
-
+
.. versionadded:: 2.1
"""
name = 'Hexdump'
'root': [
(r'\n', Text),
include('offset'),
- (r'('+hd+r'{2})(\-)('+hd+r'{2})', bygroups(Number.Hex, Punctuation, Number.Hex)),
+ (r'('+hd+r'{2})(\-)('+hd+r'{2})',
+ bygroups(Number.Hex, Punctuation, Number.Hex)),
(hd+r'{2}', Number.Hex),
- (r'(\s{2,3})(\>)(.{16})(\<)$', bygroups(Text, Punctuation, String, Punctuation), 'bracket-strings'),
- (r'(\s{2,3})(\|)(.{16})(\|)$', bygroups(Text, Punctuation, String, Punctuation), 'piped-strings'),
- (r'(\s{2,3})(\>)(.{1,15})(\<)$', bygroups(Text, Punctuation, String, Punctuation)),
- (r'(\s{2,3})(\|)(.{1,15})(\|)$', bygroups(Text, Punctuation, String, Punctuation)),
+ (r'(\s{2,3})(\>)(.{16})(\<)$',
+ bygroups(Text, Punctuation, String, Punctuation), 'bracket-strings'),
+ (r'(\s{2,3})(\|)(.{16})(\|)$',
+ bygroups(Text, Punctuation, String, Punctuation), 'piped-strings'),
+ (r'(\s{2,3})(\>)(.{1,15})(\<)$',
+ bygroups(Text, Punctuation, String, Punctuation)),
+ (r'(\s{2,3})(\|)(.{1,15})(\|)$',
+ bygroups(Text, Punctuation, String, Punctuation)),
(r'(\s{2,3})(.{1,15})$', bygroups(Text, String)),
(r'(\s{2,3})(.{16}|.{20})$', bygroups(Text, String), 'nonpiped-strings'),
(r'\s', Text),
(r'\n', Text),
include('offset'),
(hd+r'{2}', Number.Hex),
- (r'(\s{2,3})(\|)(.{1,16})(\|)$', bygroups(Text, Punctuation, String, Punctuation)),
+ (r'(\s{2,3})(\|)(.{1,16})(\|)$',
+ bygroups(Text, Punctuation, String, Punctuation)),
(r'\s', Text),
(r'^\*', Punctuation),
],
(r'\n', Text),
include('offset'),
(hd+r'{2}', Number.Hex),
- (r'(\s{2,3})(\>)(.{1,16})(\<)$', bygroups(Text, Punctuation, String, Punctuation)),
+ (r'(\s{2,3})(\>)(.{1,16})(\<)$',
+ bygroups(Text, Punctuation, String, Punctuation)),
(r'\s', Text),
(r'^\*', Punctuation),
],
'nonpiped-strings': [
(r'\n', Text),
include('offset'),
- (r'('+hd+r'{2})(\-)('+hd+r'{2})', bygroups(Number.Hex, Punctuation, Number.Hex)),
+ (r'('+hd+r'{2})(\-)('+hd+r'{2})',
+ bygroups(Number.Hex, Punctuation, Number.Hex)),
(hd+r'{2}', Number.Hex),
(r'(\s{19,})(.{1,20}?)$', bygroups(Text, String)),
(r'(\s{2,3})(.{1,20})$', bygroups(Text, String)),
Lexers for HTML, XML and related markup.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
from pygments.lexers.ruby import RubyLexer
__all__ = ['HtmlLexer', 'DtdLexer', 'XmlLexer', 'XsltLexer', 'HamlLexer',
- 'ScamlLexer', 'JadeLexer']
+ 'ScamlLexer', 'PugLexer']
class HtmlLexer(RegexLexer):
}
-class JadeLexer(ExtendedRegexLexer):
+class PugLexer(ExtendedRegexLexer):
"""
- For Jade markup.
- Jade is a variant of Scaml, see:
+ For Pug markup.
+ Pug is a variant of Scaml, see:
http://scalate.fusesource.org/documentation/scaml-reference.html
.. versionadded:: 1.4
"""
- name = 'Jade'
- aliases = ['jade']
- filenames = ['*.jade']
- mimetypes = ['text/x-jade']
+ name = 'Pug'
+ aliases = ['pug', 'jade']
+ filenames = ['*.pug', '*.jade']
+ mimetypes = ['text/x-pug', 'text/x-jade']
flags = re.IGNORECASE
_dot = r'.'
(r'\n', Text, 'root'),
],
}
+JadeLexer = PugLexer  # compatibility alias for the old name
Lexers for IDL.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
(r'\b(mod|lt|le|eq|ne|ge|gt|not|and|or|xor)\b', Operator),
(r'"[^\"]*"', String.Double),
(r"'[^\']*'", String.Single),
- (r'\b[\+\-]?([0-9]*\.[0-9]+|[0-9]+\.[0-9]*)(D|E)?([\+\-]?[0-9]+)?\b', Number.Float),
- (r'\b\'[\+\-]?[0-9A-F]+\'X(U?(S?|L{1,2})|B)\b', Number.Hex),
- (r'\b\'[\+\-]?[0-7]+\'O(U?(S?|L{1,2})|B)\b', Number.Oct),
- (r'\b[\+\-]?[0-9]+U?L{1,2}\b', Number.Integer.Long),
- (r'\b[\+\-]?[0-9]+U?S?\b', Number.Integer),
- (r'\b[\+\-]?[0-9]+B\b', Number),
+ (r'\b[+\-]?([0-9]*\.[0-9]+|[0-9]+\.[0-9]*)(D|E)?([+\-]?[0-9]+)?\b',
+ Number.Float),
+ (r'\b\'[+\-]?[0-9A-F]+\'X(U?(S?|L{1,2})|B)\b', Number.Hex),
+ (r'\b\'[+\-]?[0-7]+\'O(U?(S?|L{1,2})|B)\b', Number.Oct),
+ (r'\b[+\-]?[0-9]+U?L{1,2}\b', Number.Integer.Long),
+ (r'\b[+\-]?[0-9]+U?S?\b', Number.Integer),
+ (r'\b[+\-]?[0-9]+B\b', Number),
(r'.', Text),
]
}
Lexers for Igor Pro.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
types = (
'variable', 'string', 'constant', 'strconstant', 'NVAR', 'SVAR', 'WAVE',
'STRUCT', 'dfref', 'funcref', 'char', 'uchar', 'int16', 'uint16', 'int32',
- 'uint32', 'float', 'double'
+ 'uint32', 'int64', 'uint64', 'float', 'double'
)
keywords = (
'override', 'ThreadSafe', 'MultiThread', 'static', 'Proc',
'Structure', 'EndStructure', 'EndMacro', 'Menu', 'SubMenu'
)
operations = (
- 'Abort', 'AddFIFOData', 'AddFIFOVectData', 'AddMovieAudio',
- 'AddMovieFrame', 'APMath', 'Append', 'AppendImage',
- 'AppendLayoutObject', 'AppendMatrixContour', 'AppendText',
- 'AppendToGraph', 'AppendToLayout', 'AppendToTable', 'AppendXYZContour',
- 'AutoPositionWindow', 'BackgroundInfo', 'Beep', 'BoundingBall',
- 'BrowseURL', 'BuildMenu', 'Button', 'cd', 'Chart', 'CheckBox',
- 'CheckDisplayed', 'ChooseColor', 'Close', 'CloseMovie', 'CloseProc',
- 'ColorScale', 'ColorTab2Wave', 'Concatenate', 'ControlBar',
- 'ControlInfo', 'ControlUpdate', 'ConvexHull', 'Convolve', 'CopyFile',
- 'CopyFolder', 'CopyScales', 'Correlate', 'CreateAliasShortcut', 'Cross',
- 'CtrlBackground', 'CtrlFIFO', 'CtrlNamedBackground', 'Cursor',
- 'CurveFit', 'CustomControl', 'CWT', 'Debugger', 'DebuggerOptions',
- 'DefaultFont', 'DefaultGuiControls', 'DefaultGuiFont', 'DefineGuide',
- 'DelayUpdate', 'DeleteFile', 'DeleteFolder', 'DeletePoints',
- 'Differentiate', 'dir', 'Display', 'DisplayHelpTopic',
- 'DisplayProcedure', 'DoAlert', 'DoIgorMenu', 'DoUpdate', 'DoWindow',
- 'DoXOPIdle', 'DrawAction', 'DrawArc', 'DrawBezier', 'DrawLine',
- 'DrawOval', 'DrawPICT', 'DrawPoly', 'DrawRect', 'DrawRRect', 'DrawText',
- 'DSPDetrend', 'DSPPeriodogram', 'Duplicate', 'DuplicateDataFolder',
- 'DWT', 'EdgeStats', 'Edit', 'ErrorBars', 'Execute', 'ExecuteScriptText',
- 'ExperimentModified', 'Extract', 'FastGaussTransform', 'FastOp',
- 'FBinRead', 'FBinWrite', 'FFT', 'FIFO2Wave', 'FIFOStatus', 'FilterFIR',
- 'FilterIIR', 'FindLevel', 'FindLevels', 'FindPeak', 'FindPointsInPoly',
- 'FindRoots', 'FindSequence', 'FindValue', 'FPClustering', 'fprintf',
- 'FReadLine', 'FSetPos', 'FStatus', 'FTPDelete', 'FTPDownload',
- 'FTPUpload', 'FuncFit', 'FuncFitMD', 'GetAxis', 'GetFileFolderInfo',
- 'GetLastUserMenuInfo', 'GetMarquee', 'GetSelection', 'GetWindow',
- 'GraphNormal', 'GraphWaveDraw', 'GraphWaveEdit', 'Grep', 'GroupBox',
- 'Hanning', 'HideIgorMenus', 'HideInfo', 'HideProcedures', 'HideTools',
- 'HilbertTransform', 'Histogram', 'IFFT', 'ImageAnalyzeParticles',
- 'ImageBlend', 'ImageBoundaryToMask', 'ImageEdgeDetection',
- 'ImageFileInfo', 'ImageFilter', 'ImageFocus', 'ImageGenerateROIMask',
- 'ImageHistModification', 'ImageHistogram', 'ImageInterpolate',
- 'ImageLineProfile', 'ImageLoad', 'ImageMorphology', 'ImageRegistration',
- 'ImageRemoveBackground', 'ImageRestore', 'ImageRotate', 'ImageSave',
- 'ImageSeedFill', 'ImageSnake', 'ImageStats', 'ImageThreshold',
- 'ImageTransform', 'ImageUnwrapPhase', 'ImageWindow', 'IndexSort',
- 'InsertPoints', 'Integrate', 'IntegrateODE', 'Interp3DPath',
- 'Interpolate3D', 'KillBackground', 'KillControl', 'KillDataFolder',
- 'KillFIFO', 'KillFreeAxis', 'KillPath', 'KillPICTs', 'KillStrings',
- 'KillVariables', 'KillWaves', 'KillWindow', 'KMeans', 'Label', 'Layout',
- 'Legend', 'LinearFeedbackShiftRegister', 'ListBox', 'LoadData',
- 'LoadPackagePreferences', 'LoadPICT', 'LoadWave', 'Loess',
- 'LombPeriodogram', 'Make', 'MakeIndex', 'MarkPerfTestTime',
- 'MatrixConvolve', 'MatrixCorr', 'MatrixEigenV', 'MatrixFilter',
- 'MatrixGaussJ', 'MatrixInverse', 'MatrixLinearSolve',
- 'MatrixLinearSolveTD', 'MatrixLLS', 'MatrixLUBkSub', 'MatrixLUD',
- 'MatrixMultiply', 'MatrixOP', 'MatrixSchur', 'MatrixSolve',
- 'MatrixSVBkSub', 'MatrixSVD', 'MatrixTranspose', 'MeasureStyledText',
- 'Modify', 'ModifyContour', 'ModifyControl', 'ModifyControlList',
- 'ModifyFreeAxis', 'ModifyGraph', 'ModifyImage', 'ModifyLayout',
- 'ModifyPanel', 'ModifyTable', 'ModifyWaterfall', 'MoveDataFolder',
- 'MoveFile', 'MoveFolder', 'MoveString', 'MoveSubwindow', 'MoveVariable',
- 'MoveWave', 'MoveWindow', 'NeuralNetworkRun', 'NeuralNetworkTrain',
- 'NewDataFolder', 'NewFIFO', 'NewFIFOChan', 'NewFreeAxis', 'NewImage',
- 'NewLayout', 'NewMovie', 'NewNotebook', 'NewPanel', 'NewPath',
- 'NewWaterfall', 'Note', 'Notebook', 'NotebookAction', 'Open',
- 'OpenNotebook', 'Optimize', 'ParseOperationTemplate', 'PathInfo',
- 'PauseForUser', 'PauseUpdate', 'PCA', 'PlayMovie', 'PlayMovieAction',
- 'PlaySnd', 'PlaySound', 'PopupContextualMenu', 'PopupMenu',
- 'Preferences', 'PrimeFactors', 'Print', 'printf', 'PrintGraphs',
- 'PrintLayout', 'PrintNotebook', 'PrintSettings', 'PrintTable',
- 'Project', 'PulseStats', 'PutScrapText', 'pwd', 'Quit',
- 'RatioFromNumber', 'Redimension', 'Remove', 'RemoveContour',
+ 'Abort', 'AddFIFOData', 'AddFIFOVectData', 'AddMovieAudio', 'AddMovieFrame',
+ 'AdoptFiles', 'APMath', 'Append', 'AppendImage', 'AppendLayoutObject',
+ 'AppendMatrixContour', 'AppendText', 'AppendToGizmo', 'AppendToGraph',
+ 'AppendToLayout', 'AppendToTable', 'AppendXYZContour', 'AutoPositionWindow',
+ 'BackgroundInfo', 'Beep', 'BoundingBall', 'BoxSmooth', 'BrowseURL', 'BuildMenu',
+ 'Button', 'cd', 'Chart', 'CheckBox', 'CheckDisplayed', 'ChooseColor', 'Close',
+ 'CloseHelp', 'CloseMovie', 'CloseProc', 'ColorScale', 'ColorTab2Wave',
+ 'Concatenate', 'ControlBar', 'ControlInfo', 'ControlUpdate',
+ 'ConvertGlobalStringTextEncoding', 'ConvexHull', 'Convolve', 'CopyFile',
+ 'CopyFolder', 'CopyScales', 'Correlate', 'CreateAliasShortcut', 'CreateBrowser',
+ 'Cross', 'CtrlBackground', 'CtrlFIFO', 'CtrlNamedBackground', 'Cursor',
+ 'CurveFit', 'CustomControl', 'CWT', 'Debugger', 'DebuggerOptions', 'DefaultFont',
+ 'DefaultGuiControls', 'DefaultGuiFont', 'DefaultTextEncoding', 'DefineGuide',
+ 'DelayUpdate', 'DeleteAnnotations', 'DeleteFile', 'DeleteFolder', 'DeletePoints',
+ 'Differentiate', 'dir', 'Display', 'DisplayHelpTopic', 'DisplayProcedure',
+ 'DoAlert', 'DoIgorMenu', 'DoUpdate', 'DoWindow', 'DoXOPIdle', 'DPSS',
+ 'DrawAction', 'DrawArc', 'DrawBezier', 'DrawLine', 'DrawOval', 'DrawPICT',
+ 'DrawPoly', 'DrawRect', 'DrawRRect', 'DrawText', 'DrawUserShape', 'DSPDetrend',
+ 'DSPPeriodogram', 'Duplicate', 'DuplicateDataFolder', 'DWT', 'EdgeStats', 'Edit',
+ 'ErrorBars', 'EstimatePeakSizes', 'Execute', 'ExecuteScriptText',
+ 'ExperimentModified', 'ExportGizmo', 'Extract', 'FastGaussTransform', 'FastOp',
+ 'FBinRead', 'FBinWrite', 'FFT', 'FIFOStatus', 'FIFO2Wave', 'FilterFIR',
+ 'FilterIIR', 'FindAPeak', 'FindContour', 'FindDuplicates', 'FindLevel',
+ 'FindLevels', 'FindPeak', 'FindPointsInPoly', 'FindRoots', 'FindSequence',
+ 'FindValue', 'FPClustering', 'fprintf', 'FReadLine', 'FSetPos', 'FStatus',
+ 'FTPCreateDirectory', 'FTPDelete', 'FTPDownload', 'FTPUpload', 'FuncFit',
+ 'FuncFitMD', 'GBLoadWave', 'GetAxis', 'GetCamera', 'GetFileFolderInfo',
+ 'GetGizmo', 'GetLastUserMenuInfo', 'GetMarquee', 'GetMouse', 'GetSelection',
+ 'GetWindow', 'GPIBReadBinaryWave2', 'GPIBReadBinary2', 'GPIBReadWave2',
+ 'GPIBRead2', 'GPIBWriteBinaryWave2', 'GPIBWriteBinary2', 'GPIBWriteWave2',
+ 'GPIBWrite2', 'GPIB2', 'GraphNormal', 'GraphWaveDraw', 'GraphWaveEdit', 'Grep',
+ 'GroupBox', 'Hanning', 'HDF5CloseFile', 'HDF5CloseGroup', 'HDF5ConvertColors',
+ 'HDF5CreateFile', 'HDF5CreateGroup', 'HDF5CreateLink', 'HDF5Dump',
+ 'HDF5DumpErrors', 'HDF5DumpState', 'HDF5ListAttributes', 'HDF5ListGroup',
+ 'HDF5LoadData', 'HDF5LoadGroup', 'HDF5LoadImage', 'HDF5OpenFile', 'HDF5OpenGroup',
+ 'HDF5SaveData', 'HDF5SaveGroup', 'HDF5SaveImage', 'HDF5TestOperation',
+ 'HDF5UnlinkObject', 'HideIgorMenus', 'HideInfo', 'HideProcedures', 'HideTools',
+ 'HilbertTransform', 'Histogram', 'ICA', 'IFFT', 'ImageAnalyzeParticles',
+ 'ImageBlend', 'ImageBoundaryToMask', 'ImageEdgeDetection', 'ImageFileInfo',
+ 'ImageFilter', 'ImageFocus', 'ImageFromXYZ', 'ImageGenerateROIMask', 'ImageGLCM',
+ 'ImageHistModification', 'ImageHistogram', 'ImageInterpolate', 'ImageLineProfile',
+ 'ImageLoad', 'ImageMorphology', 'ImageRegistration', 'ImageRemoveBackground',
+ 'ImageRestore', 'ImageRotate', 'ImageSave', 'ImageSeedFill', 'ImageSkeleton3d',
+ 'ImageSnake', 'ImageStats', 'ImageThreshold', 'ImageTransform',
+ 'ImageUnwrapPhase', 'ImageWindow', 'IndexSort', 'InsertPoints', 'Integrate',
+ 'IntegrateODE', 'Integrate2D', 'Interpolate2', 'Interpolate3D', 'Interp3DPath',
+ 'JCAMPLoadWave', 'JointHistogram', 'KillBackground', 'KillControl',
+ 'KillDataFolder', 'KillFIFO', 'KillFreeAxis', 'KillPath', 'KillPICTs',
+ 'KillStrings', 'KillVariables', 'KillWaves', 'KillWindow', 'KMeans', 'Label',
+ 'Layout', 'LayoutPageAction', 'LayoutSlideShow', 'Legend',
+ 'LinearFeedbackShiftRegister', 'ListBox', 'LoadData', 'LoadPackagePreferences',
+ 'LoadPICT', 'LoadWave', 'Loess', 'LombPeriodogram', 'Make', 'MakeIndex',
+ 'MarkPerfTestTime', 'MatrixConvolve', 'MatrixCorr', 'MatrixEigenV',
+ 'MatrixFilter', 'MatrixGaussJ', 'MatrixGLM', 'MatrixInverse', 'MatrixLinearSolve',
+ 'MatrixLinearSolveTD', 'MatrixLLS', 'MatrixLUBkSub', 'MatrixLUD', 'MatrixLUDTD',
+ 'MatrixMultiply', 'MatrixOP', 'MatrixSchur', 'MatrixSolve', 'MatrixSVBkSub',
+ 'MatrixSVD', 'MatrixTranspose', 'MeasureStyledText', 'MLLoadWave', 'Modify',
+ 'ModifyBrowser', 'ModifyCamera', 'ModifyContour', 'ModifyControl',
+ 'ModifyControlList', 'ModifyFreeAxis', 'ModifyGizmo', 'ModifyGraph',
+ 'ModifyImage', 'ModifyLayout', 'ModifyPanel', 'ModifyTable', 'ModifyWaterfall',
+ 'MoveDataFolder', 'MoveFile', 'MoveFolder', 'MoveString', 'MoveSubwindow',
+ 'MoveVariable', 'MoveWave', 'MoveWindow', 'MultiTaperPSD',
+ 'MultiThreadingControl', 'NeuralNetworkRun', 'NeuralNetworkTrain', 'NewCamera',
+ 'NewDataFolder', 'NewFIFO', 'NewFIFOChan', 'NewFreeAxis', 'NewGizmo', 'NewImage',
+ 'NewLayout', 'NewMovie', 'NewNotebook', 'NewPanel', 'NewPath', 'NewWaterfall',
+ 'NI4882', 'Note', 'Notebook', 'NotebookAction', 'Open', 'OpenHelp',
+ 'OpenNotebook', 'Optimize', 'ParseOperationTemplate', 'PathInfo', 'PauseForUser',
+ 'PauseUpdate', 'PCA', 'PlayMovie', 'PlayMovieAction', 'PlaySound',
+ 'PopupContextualMenu', 'PopupMenu', 'Preferences', 'PrimeFactors', 'Print',
+ 'printf', 'PrintGraphs', 'PrintLayout', 'PrintNotebook', 'PrintSettings',
+ 'PrintTable', 'Project', 'PulseStats', 'PutScrapText', 'pwd', 'Quit',
+ 'RatioFromNumber', 'Redimension', 'Remove', 'RemoveContour', 'RemoveFromGizmo',
'RemoveFromGraph', 'RemoveFromLayout', 'RemoveFromTable', 'RemoveImage',
- 'RemoveLayoutObjects', 'RemovePath', 'Rename', 'RenameDataFolder',
- 'RenamePath', 'RenamePICT', 'RenameWindow', 'ReorderImages',
- 'ReorderTraces', 'ReplaceText', 'ReplaceWave', 'Resample',
- 'ResumeUpdate', 'Reverse', 'Rotate', 'Save', 'SaveData',
- 'SaveExperiment', 'SaveGraphCopy', 'SaveNotebook',
- 'SavePackagePreferences', 'SavePICT', 'SaveTableCopy',
- 'SetActiveSubwindow', 'SetAxis', 'SetBackground', 'SetDashPattern',
- 'SetDataFolder', 'SetDimLabel', 'SetDrawEnv', 'SetDrawLayer',
- 'SetFileFolderInfo', 'SetFormula', 'SetIgorHook', 'SetIgorMenuMode',
- 'SetIgorOption', 'SetMarquee', 'SetProcessSleep', 'SetRandomSeed',
- 'SetScale', 'SetVariable', 'SetWaveLock', 'SetWindow', 'ShowIgorMenus',
- 'ShowInfo', 'ShowTools', 'Silent', 'Sleep', 'Slider', 'Smooth',
- 'SmoothCustom', 'Sort', 'SoundInRecord', 'SoundInSet',
- 'SoundInStartChart', 'SoundInStatus', 'SoundInStopChart',
- 'SphericalInterpolate', 'SphericalTriangulate', 'SplitString',
- 'sprintf', 'sscanf', 'Stack', 'StackWindows',
+ 'RemoveLayoutObjects', 'RemovePath', 'Rename', 'RenameDataFolder', 'RenamePath',
+ 'RenamePICT', 'RenameWindow', 'ReorderImages', 'ReorderTraces', 'ReplaceText',
+ 'ReplaceWave', 'Resample', 'ResumeUpdate', 'Reverse', 'Rotate', 'Save',
+ 'SaveData', 'SaveExperiment', 'SaveGraphCopy', 'SaveNotebook',
+ 'SavePackagePreferences', 'SavePICT', 'SaveTableCopy', 'SetActiveSubwindow',
+ 'SetAxis', 'SetBackground', 'SetDashPattern', 'SetDataFolder', 'SetDimLabel',
+ 'SetDrawEnv', 'SetDrawLayer', 'SetFileFolderInfo', 'SetFormula', 'SetIgorHook',
+ 'SetIgorMenuMode', 'SetIgorOption', 'SetMarquee', 'SetProcessSleep',
+ 'SetRandomSeed', 'SetScale', 'SetVariable', 'SetWaveLock', 'SetWaveTextEncoding',
+ 'SetWindow', 'ShowIgorMenus', 'ShowInfo', 'ShowTools', 'Silent', 'Sleep',
+ 'Slider', 'Smooth', 'SmoothCustom', 'Sort', 'SortColumns', 'SoundInRecord',
+ 'SoundInSet', 'SoundInStartChart', 'SoundInStatus', 'SoundInStopChart',
+ 'SoundLoadWave', 'SoundSaveWave', 'SphericalInterpolate', 'SphericalTriangulate',
+ 'SplitString', 'SplitWave', 'sprintf', 'sscanf', 'Stack', 'StackWindows',
'StatsAngularDistanceTest', 'StatsANOVA1Test', 'StatsANOVA2NRTest',
'StatsANOVA2RMTest', 'StatsANOVA2Test', 'StatsChiTest',
- 'StatsCircularCorrelationTest', 'StatsCircularMeans',
- 'StatsCircularMoments', 'StatsCircularTwoSampleTest',
- 'StatsCochranTest', 'StatsContingencyTable', 'StatsDIPTest',
- 'StatsDunnettTest', 'StatsFriedmanTest', 'StatsFTest',
- 'StatsHodgesAjneTest', 'StatsJBTest', 'StatsKendallTauTest',
+ 'StatsCircularCorrelationTest', 'StatsCircularMeans', 'StatsCircularMoments',
+ 'StatsCircularTwoSampleTest', 'StatsCochranTest', 'StatsContingencyTable',
+ 'StatsDIPTest', 'StatsDunnettTest', 'StatsFriedmanTest', 'StatsFTest',
+ 'StatsHodgesAjneTest', 'StatsJBTest', 'StatsKDE', 'StatsKendallTauTest',
'StatsKSTest', 'StatsKWTest', 'StatsLinearCorrelationTest',
- 'StatsLinearRegression', 'StatsMultiCorrelationTest',
- 'StatsNPMCTest', 'StatsNPNominalSRTest', 'StatsQuantiles',
- 'StatsRankCorrelationTest', 'StatsResample', 'StatsSample',
- 'StatsScheffeTest', 'StatsSignTest', 'StatsSRTest', 'StatsTTest',
- 'StatsTukeyTest', 'StatsVariancesTest', 'StatsWatsonUSquaredTest',
- 'StatsWatsonWilliamsTest', 'StatsWheelerWatsonTest',
- 'StatsWilcoxonRankTest', 'StatsWRCorrelationTest', 'String',
- 'StructGet', 'StructPut', 'TabControl', 'Tag', 'TextBox', 'Tile',
- 'TileWindows', 'TitleBox', 'ToCommandLine', 'ToolsGrid',
- 'Triangulate3d', 'Unwrap', 'ValDisplay', 'Variable', 'WaveMeanStdv',
- 'WaveStats', 'WaveTransform', 'wfprintf', 'WignerTransform',
- 'WindowFunction',
+ 'StatsLinearRegression', 'StatsMultiCorrelationTest', 'StatsNPMCTest',
+ 'StatsNPNominalSRTest', 'StatsQuantiles', 'StatsRankCorrelationTest',
+ 'StatsResample', 'StatsSample', 'StatsScheffeTest', 'StatsShapiroWilkTest',
+ 'StatsSignTest', 'StatsSRTest', 'StatsTTest', 'StatsTukeyTest',
+ 'StatsVariancesTest', 'StatsWatsonUSquaredTest', 'StatsWatsonWilliamsTest',
+ 'StatsWheelerWatsonTest', 'StatsWilcoxonRankTest', 'StatsWRCorrelationTest',
+ 'String', 'StructGet', 'StructPut', 'SumDimension', 'SumSeries', 'TabControl',
+ 'Tag', 'TextBox', 'ThreadGroupPutDF', 'ThreadStart', 'Tile', 'TileWindows',
+ 'TitleBox', 'ToCommandLine', 'ToolsGrid', 'Triangulate3d', 'Unwrap', 'URLRequest',
+ 'ValDisplay', 'Variable', 'VDTClosePort2', 'VDTGetPortList2', 'VDTGetStatus2',
+ 'VDTOpenPort2', 'VDTOperationsPort2', 'VDTReadBinaryWave2', 'VDTReadBinary2',
+ 'VDTReadHexWave2', 'VDTReadHex2', 'VDTReadWave2', 'VDTRead2', 'VDTTerminalPort2',
+ 'VDTWriteBinaryWave2', 'VDTWriteBinary2', 'VDTWriteHexWave2', 'VDTWriteHex2',
+ 'VDTWriteWave2', 'VDTWrite2', 'VDT2', 'WaveMeanStdv', 'WaveStats',
+ 'WaveTransform', 'wfprintf', 'WignerTransform', 'WindowFunction', 'XLLoadWave'
)
functions = (
- 'abs', 'acos', 'acosh', 'AiryA', 'AiryAD', 'AiryB', 'AiryBD', 'alog',
- 'area', 'areaXY', 'asin', 'asinh', 'atan', 'atan2', 'atanh',
- 'AxisValFromPixel', 'Besseli', 'Besselj', 'Besselk', 'Bessely', 'bessi',
- 'bessj', 'bessk', 'bessy', 'beta', 'betai', 'BinarySearch',
+ 'abs', 'acos', 'acosh', 'AddListItem', 'AiryA', 'AiryAD', 'AiryB', 'AiryBD',
+ 'alog', 'AnnotationInfo', 'AnnotationList', 'area', 'areaXY', 'asin', 'asinh',
+ 'atan', 'atanh', 'atan2', 'AxisInfo', 'AxisList', 'AxisValFromPixel', 'Besseli',
+ 'Besselj', 'Besselk', 'Bessely', 'beta', 'betai', 'BinarySearch',
'BinarySearchInterp', 'binomial', 'binomialln', 'binomialNoise', 'cabs',
- 'CaptureHistoryStart', 'ceil', 'cequal', 'char2num', 'chebyshev',
- 'chebyshevU', 'CheckName', 'cmplx', 'cmpstr', 'conj', 'ContourZ', 'cos',
- 'cosh', 'cot', 'CountObjects', 'CountObjectsDFR', 'cpowi',
- 'CreationDate', 'csc', 'DataFolderExists', 'DataFolderRefsEqual',
- 'DataFolderRefStatus', 'date2secs', 'datetime', 'DateToJulian',
- 'Dawson', 'DDEExecute', 'DDEInitiate', 'DDEPokeString', 'DDEPokeWave',
- 'DDERequestWave', 'DDEStatus', 'DDETerminate', 'defined', 'deltax', 'digamma',
- 'DimDelta', 'DimOffset', 'DimSize', 'ei', 'enoise', 'equalWaves', 'erf',
- 'erfc', 'exists', 'exp', 'expInt', 'expNoise', 'factorial', 'fakedata',
- 'faverage', 'faverageXY', 'FindDimLabel', 'FindListItem', 'floor',
+ 'CaptureHistory', 'CaptureHistoryStart', 'ceil', 'cequal', 'char2num',
+ 'chebyshev', 'chebyshevU', 'CheckName', 'ChildWindowList', 'CleanupName', 'cmplx',
+ 'cmpstr', 'conj', 'ContourInfo', 'ContourNameList', 'ContourNameToWaveRef',
+ 'ContourZ', 'ControlNameList', 'ConvertTextEncoding', 'cos', 'cosh',
+ 'cosIntegral', 'cot', 'coth', 'CountObjects', 'CountObjectsDFR', 'cpowi',
+ 'CreationDate', 'csc', 'csch', 'CsrInfo', 'CsrWave', 'CsrWaveRef', 'CsrXWave',
+ 'CsrXWaveRef', 'CTabList', 'DataFolderDir', 'DataFolderExists',
+ 'DataFolderRefsEqual', 'DataFolderRefStatus', 'date', 'datetime', 'DateToJulian',
+ 'date2secs', 'Dawson', 'DDERequestString', 'defined', 'deltax', 'digamma',
+ 'dilogarithm', 'DimDelta', 'DimOffset', 'DimSize', 'ei', 'enoise', 'equalWaves',
+ 'erf', 'erfc', 'erfcw', 'exists', 'exp', 'ExpConvExp', 'ExpConvExpFit',
+ 'ExpConvExpFitBL', 'ExpConvExpFit1Shape', 'ExpConvExpFit1ShapeBL', 'ExpGauss',
+ 'ExpGaussFit', 'ExpGaussFitBL', 'ExpGaussFit1Shape', 'ExpGaussFit1ShapeBL',
+ 'expInt', 'expIntegralE1', 'expNoise', 'factorial', 'fakedata', 'faverage',
+ 'faverageXY', 'FetchURL', 'FindDimLabel', 'FindListItem', 'floor', 'FontList',
'FontSizeHeight', 'FontSizeStringWidth', 'FresnelCos', 'FresnelSin',
- 'gamma', 'gammaInc', 'gammaNoise', 'gammln', 'gammp', 'gammq', 'Gauss',
- 'Gauss1D', 'Gauss2D', 'gcd', 'GetDefaultFontSize',
- 'GetDefaultFontStyle', 'GetKeyState', 'GetRTError', 'gnoise',
- 'GrepString', 'hcsr', 'hermite', 'hermiteGauss', 'HyperG0F1',
- 'HyperG1F1', 'HyperG2F1', 'HyperGNoise', 'HyperGPFQ', 'IgorVersion',
- 'ilim', 'imag', 'Inf', 'Integrate1D', 'interp', 'Interp2D', 'Interp3D',
- 'inverseERF', 'inverseERFC', 'ItemsInList', 'jlim', 'Laguerre',
- 'LaguerreA', 'LaguerreGauss', 'leftx', 'LegendreA', 'limit', 'ln',
- 'log', 'logNormalNoise', 'lorentzianNoise', 'magsqr', 'MandelbrotPoint',
- 'MarcumQ', 'MatrixDet', 'MatrixDot', 'MatrixRank', 'MatrixTrace', 'max',
- 'mean', 'min', 'mod', 'ModDate', 'NaN', 'norm', 'NumberByKey',
- 'numpnts', 'numtype', 'NumVarOrDefault', 'NVAR_Exists', 'p2rect',
- 'ParamIsDefault', 'pcsr', 'Pi', 'PixelFromAxisVal', 'pnt2x',
- 'poissonNoise', 'poly', 'poly2D', 'PolygonArea', 'qcsr', 'r2polar',
- 'real', 'rightx', 'round', 'sawtooth', 'ScreenResolution', 'sec',
- 'SelectNumber', 'sign', 'sin', 'sinc', 'sinh', 'SphericalBessJ',
- 'SphericalBessJD', 'SphericalBessY', 'SphericalBessYD',
- 'SphericalHarmonics', 'sqrt', 'StartMSTimer', 'StatsBetaCDF',
- 'StatsBetaPDF', 'StatsBinomialCDF', 'StatsBinomialPDF',
- 'StatsCauchyCDF', 'StatsCauchyPDF', 'StatsChiCDF', 'StatsChiPDF',
- 'StatsCMSSDCDF', 'StatsCorrelation', 'StatsDExpCDF', 'StatsDExpPDF',
- 'StatsErlangCDF', 'StatsErlangPDF', 'StatsErrorPDF', 'StatsEValueCDF',
- 'StatsEValuePDF', 'StatsExpCDF', 'StatsExpPDF', 'StatsFCDF',
- 'StatsFPDF', 'StatsFriedmanCDF', 'StatsGammaCDF', 'StatsGammaPDF',
- 'StatsGeometricCDF', 'StatsGeometricPDF', 'StatsHyperGCDF',
- 'StatsHyperGPDF', 'StatsInvBetaCDF', 'StatsInvBinomialCDF',
- 'StatsInvCauchyCDF', 'StatsInvChiCDF', 'StatsInvCMSSDCDF',
- 'StatsInvDExpCDF', 'StatsInvEValueCDF', 'StatsInvExpCDF',
- 'StatsInvFCDF', 'StatsInvFriedmanCDF', 'StatsInvGammaCDF',
- 'StatsInvGeometricCDF', 'StatsInvKuiperCDF', 'StatsInvLogisticCDF',
- 'StatsInvLogNormalCDF', 'StatsInvMaxwellCDF', 'StatsInvMooreCDF',
- 'StatsInvNBinomialCDF', 'StatsInvNCChiCDF', 'StatsInvNCFCDF',
- 'StatsInvNormalCDF', 'StatsInvParetoCDF', 'StatsInvPoissonCDF',
- 'StatsInvPowerCDF', 'StatsInvQCDF', 'StatsInvQpCDF',
+ 'FuncRefInfo', 'FunctionInfo', 'FunctionList', 'FunctionPath', 'gamma',
+ 'gammaEuler', 'gammaInc', 'gammaNoise', 'gammln', 'gammp', 'gammq', 'Gauss',
+ 'GaussFit', 'GaussFitBL', 'GaussFit1Width', 'GaussFit1WidthBL', 'Gauss1D',
+ 'Gauss2D', 'gcd', 'GetBrowserLine', 'GetBrowserSelection', 'GetDataFolder',
+ 'GetDataFolderDFR', 'GetDefaultFont', 'GetDefaultFontSize', 'GetDefaultFontStyle',
+ 'GetDimLabel', 'GetEnvironmentVariable', 'GetErrMessage', 'GetFormula',
+ 'GetIndependentModuleName', 'GetIndexedObjName', 'GetIndexedObjNameDFR',
+ 'GetKeyState', 'GetRTErrMessage', 'GetRTError', 'GetRTLocation', 'GetRTLocInfo',
+ 'GetRTStackInfo', 'GetScrapText', 'GetUserData', 'GetWavesDataFolder',
+ 'GetWavesDataFolderDFR', 'GizmoInfo', 'GizmoScale', 'gnoise', 'GrepList',
+ 'GrepString', 'GuideInfo', 'GuideNameList', 'Hash', 'hcsr', 'HDF5AttributeInfo',
+ 'HDF5DatasetInfo', 'HDF5LibraryInfo', 'HDF5TypeInfo', 'hermite', 'hermiteGauss',
+ 'HyperGNoise', 'HyperGPFQ', 'HyperG0F1', 'HyperG1F1', 'HyperG2F1', 'IgorInfo',
+ 'IgorVersion', 'imag', 'ImageInfo', 'ImageNameList', 'ImageNameToWaveRef',
+ 'IndependentModuleList', 'IndexedDir', 'IndexedFile', 'Inf', 'Integrate1D',
+ 'interp', 'Interp2D', 'Interp3D', 'inverseERF', 'inverseERFC', 'ItemsInList',
+ 'JacobiCn', 'JacobiSn', 'JulianToDate', 'Laguerre', 'LaguerreA', 'LaguerreGauss',
+ 'LambertW', 'LayoutInfo', 'leftx', 'LegendreA', 'limit', 'ListMatch',
+ 'ListToTextWave', 'ListToWaveRefWave', 'ln', 'log', 'logNormalNoise',
+ 'LorentzianFit', 'LorentzianFitBL', 'LorentzianFit1Width',
+ 'LorentzianFit1WidthBL', 'lorentzianNoise', 'LowerStr', 'MacroList', 'magsqr',
+ 'MandelbrotPoint', 'MarcumQ', 'MatrixCondition', 'MatrixDet', 'MatrixDot',
+ 'MatrixRank', 'MatrixTrace', 'max', 'mean', 'median', 'min', 'mod', 'ModDate',
+ 'MPFXEMGPeak', 'MPFXExpConvExpPeak', 'MPFXGaussPeak', 'MPFXLorenzianPeak',
+ 'MPFXVoigtPeak', 'NameOfWave', 'NaN', 'NewFreeDataFolder', 'NewFreeWave', 'norm',
+ 'NormalizeUnicode', 'note', 'NumberByKey', 'numpnts', 'numtype',
+ 'NumVarOrDefault', 'num2char', 'num2istr', 'num2str', 'NVAR_Exists',
+ 'OperationList', 'PadString', 'PanelResolution', 'ParamIsDefault',
+ 'ParseFilePath', 'PathList', 'pcsr', 'Pi', 'PICTInfo', 'PICTList',
+ 'PixelFromAxisVal', 'pnt2x', 'poissonNoise', 'poly', 'PolygonArea', 'poly2D',
+ 'PossiblyQuoteName', 'ProcedureText', 'p2rect', 'qcsr', 'real', 'RemoveByKey',
+ 'RemoveEnding', 'RemoveFromList', 'RemoveListItem', 'ReplaceNumberByKey',
+ 'ReplaceString', 'ReplaceStringByKey', 'rightx', 'round', 'r2polar', 'sawtooth',
+ 'scaleToIndex', 'ScreenResolution', 'sec', 'sech', 'Secs2Date', 'Secs2Time',
+ 'SelectNumber', 'SelectString', 'SetEnvironmentVariable', 'sign', 'sin', 'sinc',
+ 'sinh', 'sinIntegral', 'SortList', 'SpecialCharacterInfo', 'SpecialCharacterList',
+ 'SpecialDirPath', 'SphericalBessJ', 'SphericalBessJD', 'SphericalBessY',
+ 'SphericalBessYD', 'SphericalHarmonics', 'sqrt', 'StartMSTimer', 'StatsBetaCDF',
+ 'StatsBetaPDF', 'StatsBinomialCDF', 'StatsBinomialPDF', 'StatsCauchyCDF',
+ 'StatsCauchyPDF', 'StatsChiCDF', 'StatsChiPDF', 'StatsCMSSDCDF',
+ 'StatsCorrelation', 'StatsDExpCDF', 'StatsDExpPDF', 'StatsErlangCDF',
+ 'StatsErlangPDF', 'StatsErrorPDF', 'StatsEValueCDF', 'StatsEValuePDF',
+ 'StatsExpCDF', 'StatsExpPDF', 'StatsFCDF', 'StatsFPDF', 'StatsFriedmanCDF',
+ 'StatsGammaCDF', 'StatsGammaPDF', 'StatsGeometricCDF', 'StatsGeometricPDF',
+ 'StatsGEVCDF', 'StatsGEVPDF', 'StatsHyperGCDF', 'StatsHyperGPDF',
+ 'StatsInvBetaCDF', 'StatsInvBinomialCDF', 'StatsInvCauchyCDF', 'StatsInvChiCDF',
+ 'StatsInvCMSSDCDF', 'StatsInvDExpCDF', 'StatsInvEValueCDF', 'StatsInvExpCDF',
+ 'StatsInvFCDF', 'StatsInvFriedmanCDF', 'StatsInvGammaCDF', 'StatsInvGeometricCDF',
+ 'StatsInvKuiperCDF', 'StatsInvLogisticCDF', 'StatsInvLogNormalCDF',
+ 'StatsInvMaxwellCDF', 'StatsInvMooreCDF', 'StatsInvNBinomialCDF',
+ 'StatsInvNCChiCDF', 'StatsInvNCFCDF', 'StatsInvNormalCDF', 'StatsInvParetoCDF',
+ 'StatsInvPoissonCDF', 'StatsInvPowerCDF', 'StatsInvQCDF', 'StatsInvQpCDF',
'StatsInvRayleighCDF', 'StatsInvRectangularCDF', 'StatsInvSpearmanCDF',
'StatsInvStudentCDF', 'StatsInvTopDownCDF', 'StatsInvTriangularCDF',
'StatsInvUsquaredCDF', 'StatsInvVonMisesCDF', 'StatsInvWeibullCDF',
- 'StatsKuiperCDF', 'StatsLogisticCDF', 'StatsLogisticPDF',
- 'StatsLogNormalCDF', 'StatsLogNormalPDF', 'StatsMaxwellCDF',
- 'StatsMaxwellPDF', 'StatsMedian', 'StatsMooreCDF', 'StatsNBinomialCDF',
- 'StatsNBinomialPDF', 'StatsNCChiCDF', 'StatsNCChiPDF', 'StatsNCFCDF',
- 'StatsNCFPDF', 'StatsNCTCDF', 'StatsNCTPDF', 'StatsNormalCDF',
- 'StatsNormalPDF', 'StatsParetoCDF', 'StatsParetoPDF', 'StatsPermute',
- 'StatsPoissonCDF', 'StatsPoissonPDF', 'StatsPowerCDF',
- 'StatsPowerNoise', 'StatsPowerPDF', 'StatsQCDF', 'StatsQpCDF',
- 'StatsRayleighCDF', 'StatsRayleighPDF', 'StatsRectangularCDF',
- 'StatsRectangularPDF', 'StatsRunsCDF', 'StatsSpearmanRhoCDF',
- 'StatsStudentCDF', 'StatsStudentPDF', 'StatsTopDownCDF',
+ 'StatsKuiperCDF', 'StatsLogisticCDF', 'StatsLogisticPDF', 'StatsLogNormalCDF',
+ 'StatsLogNormalPDF', 'StatsMaxwellCDF', 'StatsMaxwellPDF', 'StatsMedian',
+ 'StatsMooreCDF', 'StatsNBinomialCDF', 'StatsNBinomialPDF', 'StatsNCChiCDF',
+ 'StatsNCChiPDF', 'StatsNCFCDF', 'StatsNCFPDF', 'StatsNCTCDF', 'StatsNCTPDF',
+ 'StatsNormalCDF', 'StatsNormalPDF', 'StatsParetoCDF', 'StatsParetoPDF',
+ 'StatsPermute', 'StatsPoissonCDF', 'StatsPoissonPDF', 'StatsPowerCDF',
+ 'StatsPowerNoise', 'StatsPowerPDF', 'StatsQCDF', 'StatsQpCDF', 'StatsRayleighCDF',
+ 'StatsRayleighPDF', 'StatsRectangularCDF', 'StatsRectangularPDF', 'StatsRunsCDF',
+ 'StatsSpearmanRhoCDF', 'StatsStudentCDF', 'StatsStudentPDF', 'StatsTopDownCDF',
'StatsTriangularCDF', 'StatsTriangularPDF', 'StatsTrimmedMean',
- 'StatsUSquaredCDF', 'StatsVonMisesCDF', 'StatsVonMisesNoise',
- 'StatsVonMisesPDF', 'StatsWaldCDF', 'StatsWaldPDF', 'StatsWeibullCDF',
- 'StatsWeibullPDF', 'StopMSTimer', 'str2num', 'stringCRC', 'stringmatch',
- 'strlen', 'strsearch', 'StudentA', 'StudentT', 'sum', 'SVAR_Exists',
- 'TagVal', 'tan', 'tanh', 'ThreadGroupCreate', 'ThreadGroupRelease',
- 'ThreadGroupWait', 'ThreadProcessorCount', 'ThreadReturnValue', 'ticks',
- 'trunc', 'Variance', 'vcsr', 'WaveCRC', 'WaveDims', 'WaveExists',
- 'WaveMax', 'WaveMin', 'WaveRefsEqual', 'WaveType', 'WhichListItem',
- 'WinType', 'WNoise', 'x2pnt', 'xcsr', 'zcsr', 'ZernikeR',
- )
- functions += (
- 'AddListItem', 'AnnotationInfo', 'AnnotationList', 'AxisInfo',
- 'AxisList', 'CaptureHistory', 'ChildWindowList', 'CleanupName',
- 'ContourInfo', 'ContourNameList', 'ControlNameList', 'CsrInfo',
- 'CsrWave', 'CsrXWave', 'CTabList', 'DataFolderDir', 'date',
- 'DDERequestString', 'FontList', 'FuncRefInfo', 'FunctionInfo',
- 'FunctionList', 'FunctionPath', 'GetDataFolder', 'GetDefaultFont',
- 'GetDimLabel', 'GetErrMessage', 'GetFormula',
- 'GetIndependentModuleName', 'GetIndexedObjName', 'GetIndexedObjNameDFR',
- 'GetRTErrMessage', 'GetRTStackInfo', 'GetScrapText', 'GetUserData',
- 'GetWavesDataFolder', 'GrepList', 'GuideInfo', 'GuideNameList', 'Hash',
- 'IgorInfo', 'ImageInfo', 'ImageNameList', 'IndexedDir', 'IndexedFile',
- 'JulianToDate', 'LayoutInfo', 'ListMatch', 'LowerStr', 'MacroList',
- 'NameOfWave', 'note', 'num2char', 'num2istr', 'num2str',
- 'OperationList', 'PadString', 'ParseFilePath', 'PathList', 'PICTInfo',
- 'PICTList', 'PossiblyQuoteName', 'ProcedureText', 'RemoveByKey',
- 'RemoveEnding', 'RemoveFromList', 'RemoveListItem',
- 'ReplaceNumberByKey', 'ReplaceString', 'ReplaceStringByKey',
- 'Secs2Date', 'Secs2Time', 'SelectString', 'SortList',
- 'SpecialCharacterInfo', 'SpecialCharacterList', 'SpecialDirPath',
- 'StringByKey', 'StringFromList', 'StringList', 'StrVarOrDefault',
- 'TableInfo', 'TextFile', 'ThreadGroupGetDF', 'time', 'TraceFromPixel',
- 'TraceInfo', 'TraceNameList', 'UniqueName', 'UnPadString', 'UpperStr',
- 'VariableList', 'WaveInfo', 'WaveList', 'WaveName', 'WaveUnits',
- 'WinList', 'WinName', 'WinRecreation', 'XWaveName',
- 'ContourNameToWaveRef', 'CsrWaveRef', 'CsrXWaveRef',
- 'ImageNameToWaveRef', 'NewFreeWave', 'TagWaveRef', 'TraceNameToWaveRef',
- 'WaveRefIndexed', 'XWaveRefFromTrace', 'GetDataFolderDFR',
- 'GetWavesDataFolderDFR', 'NewFreeDataFolder', 'ThreadGroupGetDFR',
+ 'StatsUSquaredCDF', 'StatsVonMisesCDF', 'StatsVonMisesNoise', 'StatsVonMisesPDF',
+ 'StatsWaldCDF', 'StatsWaldPDF', 'StatsWeibullCDF', 'StatsWeibullPDF',
+ 'StopMSTimer', 'StringByKey', 'stringCRC', 'StringFromList', 'StringList',
+ 'stringmatch', 'strlen', 'strsearch', 'StrVarOrDefault', 'str2num', 'StudentA',
+ 'StudentT', 'sum', 'SVAR_Exists', 'TableInfo', 'TagVal', 'TagWaveRef', 'tan',
+ 'tanh', 'TextEncodingCode', 'TextEncodingName', 'TextFile', 'ThreadGroupCreate',
+ 'ThreadGroupGetDF', 'ThreadGroupGetDFR', 'ThreadGroupRelease', 'ThreadGroupWait',
+ 'ThreadProcessorCount', 'ThreadReturnValue', 'ticks', 'time', 'TraceFromPixel',
+ 'TraceInfo', 'TraceNameList', 'TraceNameToWaveRef', 'trunc', 'UniqueName',
+ 'UnPadString', 'UnsetEnvironmentVariable', 'UpperStr', 'URLDecode', 'URLEncode',
+ 'VariableList', 'Variance', 'vcsr', 'Voigt', 'VoigtFit', 'VoigtFitBL',
+ 'VoigtFit1Shape', 'VoigtFit1ShapeBL', 'VoigtFit1Shape1Width',
+ 'VoigtFit1Shape1WidthBL', 'VoigtFunc', 'WaveCRC', 'WaveDims', 'WaveExists',
+ 'WaveInfo', 'WaveList', 'WaveMax', 'WaveMin', 'WaveName', 'WaveRefIndexed',
+ 'WaveRefIndexedDFR', 'WaveRefsEqual', 'WaveRefWaveToList', 'WaveTextEncoding',
+ 'WaveType', 'WaveUnits', 'WhichListItem', 'WinList', 'WinName', 'WinRecreation',
+ 'WinType', 'WMFindWholeWord', 'WNoise', 'xcsr', 'XWaveName', 'XWaveRefFromTrace',
+ 'x2pnt', 'zcsr', 'ZernikeR', 'zeta'
)
tokens = {
# Built-in functions.
(words(functions, prefix=r'\b', suffix=r'\b'), Name.Function),
# Compiler directives.
- (r'^#(include|pragma|define|ifdef|ifndef|endif)',
+ (r'^#(include|pragma|define|undef|ifdef|ifndef|if|elif|else|endif)',
Name.Decorator),
(r'[^a-z"/]+$', Text),
(r'.', Text),
Lexers for Inferno os and all the related stuff.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Lexers for installer/packager DSLs and formats.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Lexers for interactive fiction languages.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Lexers for the Io language.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Lexer for the J programming language.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
# Definitions
(r'0\s+:\s*0|noun\s+define\s*$', Name.Entity, 'nounDefinition'),
- (r'\b(([1-4]|13)\s+:\s*0)|((adverb|conjunction|dyad|monad|verb)\s+define)\b',
+ (r'(([1-4]|13)\s+:\s*0|(adverb|conjunction|dyad|monad|verb)\s+define)\b',
Name.Function, 'explicitDefinition'),
# Flow Control
'fetch', 'file2url', 'fixdotdot', 'fliprgb', 'getargs',
'getenv', 'hfd', 'inv', 'inverse', 'iospath',
'isatty', 'isutf8', 'items', 'leaf', 'list',
- 'nameclass', 'namelist', 'namelist', 'names', 'nc',
- 'nl', 'on', 'pick', 'pick', 'rows',
+ 'nameclass', 'namelist', 'names', 'nc',
+ 'nl', 'on', 'pick', 'rows',
'script', 'scriptd', 'sign', 'sminfo', 'smoutput',
'sort', 'split', 'stderr', 'stdin', 'stdout',
'table', 'take', 'timespacex', 'timex', 'tmoutput',
Lexers for JavaScript and related languages.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
__all__ = ['JavascriptLexer', 'KalLexer', 'LiveScriptLexer', 'DartLexer',
'TypeScriptLexer', 'LassoLexer', 'ObjectiveJLexer',
- 'CoffeeScriptLexer', 'MaskLexer', 'EarlGreyLexer']
+ 'CoffeeScriptLexer', 'MaskLexer', 'EarlGreyLexer', 'JuttleLexer']
JS_IDENT_START = ('(?:[$_' + uni.combine('Lu', 'Ll', 'Lt', 'Lm', 'Lo', 'Nl') +
']|\\\\u[a-fA-F0-9]{4})')
'slashstartsregex': [
include('commentsandwhitespace'),
(r'/(\\.|[^[/\\\n]|\[(\\.|[^\]\\\n])*])+/'
- r'([gim]+\b|\B)', String.Regex, '#pop'),
+ r'([gimuy]+\b|\B)', String.Regex, '#pop'),
(r'(?=/)', Text, ('#pop', 'badregex')),
default('#pop')
],
(r'\A#! ?/.*?\n', Comment.Hashbang), # recognized by node.js
(r'^(?=\s|/|<!--)', Text, 'slashstartsregex'),
include('commentsandwhitespace'),
+ (r'(\.\d+|[0-9]+\.[0-9]*)([eE][-+]?[0-9]+)?', Number.Float),
+ (r'0[bB][01]+', Number.Bin),
+ (r'0[oO][0-7]+', Number.Oct),
+ (r'0[xX][0-9a-fA-F]+', Number.Hex),
+ (r'[0-9]+', Number.Integer),
+ (r'\.\.\.|=>', Punctuation),
(r'\+\+|--|~|&&|\?|:|\|\||\\(?=\n)|'
- r'(<<|>>>?|=>|==?|!=?|[-<>+*%&|^/])=?', Operator, 'slashstartsregex'),
- (r'\.\.\.', Punctuation),
+ r'(<<|>>>?|==?|!=?|[-<>+*%&|^/])=?', Operator, 'slashstartsregex'),
(r'[{(\[;,]', Punctuation, 'slashstartsregex'),
(r'[})\].]', Punctuation),
(r'(for|in|while|do|break|return|continue|switch|case|default|if|else|'
r'Error|eval|isFinite|isNaN|isSafeInteger|parseFloat|parseInt|'
r'document|this|window)\b', Name.Builtin),
(JS_IDENT, Name.Other),
- (r'[0-9][0-9]*\.[0-9]+([eE][0-9]+)?[fd]?', Number.Float),
- (r'0b[01]+', Number.Bin),
- (r'0o[0-7]+', Number.Oct),
- (r'0x[0-9a-fA-F]+', Number.Hex),
- (r'[0-9]+', Number.Integer),
(r'"(\\\\|\\"|[^"])*"', String.Double),
(r"'(\\\\|\\'|[^'])*'", String.Single),
(r'`', String.Backtick, 'interp'),
(r'`', String.Backtick, '#pop'),
(r'\\\\', String.Backtick),
(r'\\`', String.Backtick),
- (r'\${', String.Interpol, 'interp-inside'),
+ (r'\$\{', String.Interpol, 'interp-inside'),
(r'\$', String.Backtick),
(r'[^`\\$]+', String.Backtick),
],
'interp-inside': [
# TODO: should this include single-line comments and allow nesting strings?
- (r'}', String.Interpol, '#pop'),
+ (r'\}', String.Interpol, '#pop'),
include('root'),
],
# (\\\\|\\`|[^`])*`', String.Backtick),
(r'\b(assert|break|case|catch|continue|default|do|else|finally|for|'
r'if|in|is|new|return|super|switch|this|throw|try|while)\b',
Keyword),
- (r'\b(abstract|const|extends|factory|final|get|implements|'
- r'native|operator|set|static|typedef|var)\b', Keyword.Declaration),
- (r'\b(bool|double|Dynamic|int|num|Object|String|void)\b', Keyword.Type),
+ (r'\b(abstract|async|await|const|extends|factory|final|get|'
+ r'implements|native|operator|set|static|sync|typedef|var|with|'
+ r'yield)\b', Keyword.Declaration),
+ (r'\b(bool|double|dynamic|int|num|Object|String|void)\b', Keyword.Type),
(r'\b(false|null|true)\b', Keyword.Constant),
(r'[~!%^&*+=|?:<>/-]|as\b', Operator),
(r'[a-zA-Z_$]\w*:', Name.Label),
name = 'TypeScript'
aliases = ['ts', 'typescript']
- filenames = ['*.ts']
+ filenames = ['*.ts', '*.tsx']
mimetypes = ['text/x-typescript']
flags = re.DOTALL | re.MULTILINE
(r'[0-9]+', Number.Integer),
(r'"(\\\\|\\"|[^"])*"', String.Double),
(r"'(\\\\|\\'|[^'])*'", String.Single),
+ (r'`', String.Backtick, 'interp'),
# Match stuff like: Decorators
(r'@\w+', Keyword.Declaration),
- ]
+ ],
+
+ # The 'interp*' rules match those in JavascriptLexer. Changes made
+ # there should be reflected here as well.
+ 'interp': [
+ (r'`', String.Backtick, '#pop'),
+ (r'\\\\', String.Backtick),
+ (r'\\`', String.Backtick),
+ (r'\$\{', String.Interpol, 'interp-inside'),
+ (r'\$', String.Backtick),
+ (r'[^`\\$]+', String.Backtick),
+ ],
+ 'interp-inside': [
+ # TODO: should this include single-line comments and allow nesting strings?
+ (r'\}', String.Interpol, '#pop'),
+ include('root'),
+ ],
}
+ def analyse_text(text):
+ if re.search(r'^(import.+(from\s+)?["\']|'
+ r'(export\s*)?(interface|class|function)\s+)',
+ text, re.MULTILINE):
+ return 1.0
+
class LassoLexer(RegexLexer):
"""
tokens = {
'root': [
(r'^#![ \S]+lasso9\b', Comment.Preproc, 'lasso'),
- (r'\[no_square_brackets\]', Comment.Preproc, 'nosquarebrackets'),
- (r'\[noprocess\]', Comment.Preproc, ('delimiters', 'noprocess')),
- (r'\[', Comment.Preproc, ('delimiters', 'squarebrackets')),
- (r'<\?(LassoScript|lasso|=)', Comment.Preproc,
- ('delimiters', 'anglebrackets')),
- (r'<(!--.*?-->)?', Other, 'delimiters'),
+ (r'(?=\[|<)', Other, 'delimiters'),
(r'\s+', Other),
default(('delimiters', 'lassofile')),
],
(r'\[no_square_brackets\]', Comment.Preproc, 'nosquarebrackets'),
(r'\[noprocess\]', Comment.Preproc, 'noprocess'),
(r'\[', Comment.Preproc, 'squarebrackets'),
- (r'<\?(LassoScript|lasso|=)', Comment.Preproc, 'anglebrackets'),
+ (r'<\?(lasso(script)?|=)', Comment.Preproc, 'anglebrackets'),
(r'<(!--.*?-->)?', Other),
(r'[^[<]+', Other),
],
'nosquarebrackets': [
(r'\[noprocess\]', Comment.Preproc, 'noprocess'),
(r'\[', Other),
- (r'<\?(LassoScript|lasso|=)', Comment.Preproc, 'anglebrackets'),
+ (r'<\?(lasso(script)?|=)', Comment.Preproc, 'anglebrackets'),
(r'<(!--.*?-->)?', Other),
(r'[^[<]+', Other),
],
# names
(r'\$[a-z_][\w.]*', Name.Variable),
- (r'#([a-z_][\w.]*|\d+)', Name.Variable.Instance),
+ (r'#([a-z_][\w.]*|\d+\b)', Name.Variable.Instance),
(r"(\.\s*)('[a-z_][\w.]*')",
bygroups(Name.Builtin.Pseudo, Name.Variable.Class)),
(r"(self)(\s*->\s*)('[a-z_][\w.]*')",
r'Database_TableNames|Define_Tag|Define_Type|Email_Batch|'
r'Encode_Set|HTML_Comment|Handle|Handle_Error|Header|If|Inline|'
r'Iterate|LJAX_Target|Link|Link_CurrentAction|Link_CurrentGroup|'
- r'Link_CurrentRecord|Link_Detail|Link_FirstGroup|'
- r'Link_FirstRecord|Link_LastGroup|Link_LastRecord|Link_NextGroup|'
- r'Link_NextRecord|Link_PrevGroup|Link_PrevRecord|Log|Loop|'
- r'NoProcess|Output_None|Portal|Private|Protect|Records|Referer|'
- r'Referrer|Repeating|ResultSet|Rows|Search_Args|Search_Arguments|'
- r'Select|Sort_Args|Sort_Arguments|Thread_Atomic|Value_List|While|'
- r'Abort|Case|Else|If_Empty|If_False|If_Null|If_True|Loop_Abort|'
- r'Loop_Continue|Loop_Count|Params|Params_Up|Return|Return_Value|'
- r'Run_Children|SOAP_DefineTag|SOAP_LastRequest|SOAP_LastResponse|'
- r'Tag_Name|ascending|average|by|define|descending|do|equals|'
- r'frozen|group|handle_failure|import|in|into|join|let|match|max|'
- r'min|on|order|parent|protected|provide|public|require|returnhome|'
- r'skip|split_thread|sum|take|thread|to|trait|type|where|with|'
- r'yield|yieldhome)\b',
+ r'Link_CurrentRecord|Link_Detail|Link_FirstGroup|Link_FirstRecord|'
+ r'Link_LastGroup|Link_LastRecord|Link_NextGroup|Link_NextRecord|'
+ r'Link_PrevGroup|Link_PrevRecord|Log|Loop|Output_None|Portal|'
+ r'Private|Protect|Records|Referer|Referrer|Repeating|ResultSet|'
+ r'Rows|Search_Args|Search_Arguments|Select|Sort_Args|'
+ r'Sort_Arguments|Thread_Atomic|Value_List|While|Abort|Case|Else|'
+ r'Fail_If|Fail_IfNot|Fail|If_Empty|If_False|If_Null|If_True|'
+ r'Loop_Abort|Loop_Continue|Loop_Count|Params|Params_Up|Return|'
+ r'Return_Value|Run_Children|SOAP_DefineTag|SOAP_LastRequest|'
+ r'SOAP_LastResponse|Tag_Name|ascending|average|by|define|'
+ r'descending|do|equals|frozen|group|handle_failure|import|in|into|'
+ r'join|let|match|max|min|on|order|parent|protected|provide|public|'
+ r'require|returnhome|skip|split_thread|sum|take|thread|to|trait|'
+ r'type|where|with|yield|yieldhome)\b',
bygroups(Punctuation, Keyword)),
# other
filenames = ['*.coffee']
mimetypes = ['text/coffeescript']
+
+ _operator_re = (
+ r'\+\+|~|&&|\band\b|\bor\b|\bis\b|\bisnt\b|\bnot\b|\?|:|'
+ r'\|\||\\(?=\n)|'
+ r'(<<|>>>?|==?(?!>)|!=?|=(?!>)|-(?!>)|[<>+*`%&\|\^/])=?')
+
flags = re.DOTALL
tokens = {
'commentsandwhitespace': [
(r'///', String.Regex, ('#pop', 'multilineregex')),
(r'/(?! )(\\.|[^[/\\\n]|\[(\\.|[^\]\\\n])*])+/'
r'([gim]+\b|\B)', String.Regex, '#pop'),
+ # This isn't really guarding against mishighlighting well-formed
+ # code, just the ability to infinite-loop between root and
+ # slashstartsregex.
+ (r'/', Operator),
default('#pop'),
],
'root': [
- # this next expr leads to infinite loops root -> slashstartsregex
- # (r'^(?=\s|/|<!--)', Text, 'slashstartsregex'),
include('commentsandwhitespace'),
- (r'\+\+|~|&&|\band\b|\bor\b|\bis\b|\bisnt\b|\bnot\b|\?|:|'
- r'\|\||\\(?=\n)|'
- r'(<<|>>>?|==?(?!>)|!=?|=(?!>)|-(?!>)|[<>+*`%&|^/])=?',
- Operator, 'slashstartsregex'),
- (r'(?:\([^()]*\))?\s*[=-]>', Name.Function),
+ (r'^(?=\s|/)', Text, 'slashstartsregex'),
+ (_operator_re, Operator, 'slashstartsregex'),
+ (r'(?:\([^()]*\))?\s*[=-]>', Name.Function, 'slashstartsregex'),
(r'[{(\[;,]', Punctuation, 'slashstartsregex'),
(r'[})\].]', Punctuation),
(r'(?<![.$])(for|own|in|of|while|until|'
(r'@[$a-zA-Z_][\w.:$]*\s*[:=]\s', Name.Variable.Instance,
'slashstartsregex'),
(r'@', Name.Other, 'slashstartsregex'),
- (r'@?[$a-zA-Z_][\w$]*', Name.Other, 'slashstartsregex'),
+ (r'@?[$a-zA-Z_][\w$]*', Name.Other),
(r'[0-9][0-9]*\.[0-9]+([eE][0-9]+)?[fd]?', Number.Float),
(r'0x[0-9a-fA-F]+', Number.Hex),
(r'[0-9]+', Number.Integer),
include('control'),
(r'[^\S\n]+', Text),
(r';;.*\n', Comment),
- (r'[\[\]\{\}\:\(\)\,\;]', Punctuation),
+ (r'[\[\]{}:(),;]', Punctuation),
(r'\\\n', Text),
(r'\\', Text),
include('errors'),
(words((
'with', 'where', 'when', 'and', 'not', 'or', 'in',
'as', 'of', 'is'),
- prefix=r'(?<=\s|\[)', suffix=r'(?![\w\$\-])'),
+ prefix=r'(?<=\s|\[)', suffix=r'(?![\w$\-])'),
Operator.Word),
- (r'[\*@]?->', Name.Function),
+ (r'[*@]?->', Name.Function),
(r'[+\-*/~^<>%&|?!@#.]*=', Operator.Word),
(r'\.{2,3}', Operator.Word), # Range Operator
(r'([+*/~^<>&|?!]+)|([#\-](?=\s))|@@+(?=\s)|=+', Operator),
- (r'(?<![\w\$\-])(var|let)(?:[^\w\$])', Keyword.Declaration),
+ (r'(?<![\w$\-])(var|let)(?:[^\w$])', Keyword.Declaration),
include('keywords'),
include('builtins'),
include('assignment'),
(r'''(?x)
- (?:()([a-zA-Z$_](?:[a-zA-Z$0-9_\-]*[a-zA-Z$0-9_])?)|
- (?<=[\s\{\[\(])(\.)([a-zA-Z$_](?:[a-zA-Z$0-9_\-]*[a-zA-Z$0-9_])?))
+ (?:()([a-zA-Z$_](?:[\w$\-]*[\w$])?)|
+ (?<=[\s{\[(])(\.)([a-zA-Z$_](?:[\w$\-]*[\w$])?))
(?=.*%)''',
bygroups(Punctuation, Name.Tag, Punctuation, Name.Class.Start), 'dbs'),
(r'[rR]?`', String.Backtick, 'bt'),
(r'[rR]?```', String.Backtick, 'tbt'),
- (r'(?<=[\s\[\{\(,;])\.([a-zA-Z$_](?:[a-zA-Z$0-9_-]*[a-zA-Z$0-9_])?)'
- r'(?=[\s\]\}\),;])', String.Symbol),
+ (r'(?<=[\s\[{(,;])\.([a-zA-Z$_](?:[\w$\-]*[\w$])?)'
+ r'(?=[\s\]}),;])', String.Symbol),
include('nested'),
(r'(?:[rR]|[rR]\.[gmi]{1,3})?"', String, combined('stringescape', 'dqs')),
(r'(?:[rR]|[rR]\.[gmi]{1,3})?\'', String, combined('stringescape', 'sqs')),
include('numbers'),
],
'dbs': [
- (r'(\.)([a-zA-Z$_](?:[a-zA-Z$0-9_\-]*[a-zA-Z$0-9_])?)(?=[\[\.\s])',
+ (r'(\.)([a-zA-Z$_](?:[\w$\-]*[\w$])?)(?=[.\[\s])',
bygroups(Punctuation, Name.Class.DBS)),
- (r'(\[)([\^#][a-zA-Z$_](?:[a-zA-Z$0-9_\-]*[a-zA-Z$0-9_])?)(\])',
+ (r'(\[)([\^#][a-zA-Z$_](?:[\w$\-]*[\w$])?)(\])',
bygroups(Punctuation, Name.Entity.DBS, Punctuation)),
(r'\s+', Text),
(r'%', Operator.DBS, '#pop'),
bygroups(Text.Whitespace, Text)),
],
'assignment': [
- (r'(\.)?([a-zA-Z$_](?:[a-zA-Z$0-9_-]*[a-zA-Z$0-9_])?)'
+ (r'(\.)?([a-zA-Z$_](?:[\w$\-]*[\w$])?)'
r'(?=\s+[+\-*/~^<>%&|?!@#.]*\=\s)',
bygroups(Punctuation, Name.Variable))
],
'errors': [
(words(('Error', 'TypeError', 'ReferenceError'),
- prefix=r'(?<![\w\$\-\.])', suffix=r'(?![\w\$\-\.])'),
+ prefix=r'(?<![\w\-$.])', suffix=r'(?![\w\-$.])'),
Name.Exception),
(r'''(?x)
- (?<![\w\$])
- E\.[\w\$](?:[\w\$\-]*[\w\$])?
- (?:\.[\w\$](?:[\w\$\-]*[\w\$])?)*
- (?=[\(\{\[\?\!\s])''',
+ (?<![\w$])
+ E\.[\w$](?:[\w$\-]*[\w$])?
+ (?:\.[\w$](?:[\w$\-]*[\w$])?)*
+ (?=[({\[?!\s])''',
Name.Exception),
],
'control': [
(r'''(?x)
- ([a-zA-Z$_](?:[a-zA-Z$0-9_-]*[a-zA-Z$0-9_])?)
+ ([a-zA-Z$_](?:[\w$-]*[\w$])?)
(?!\n)\s+
(?!and|as|each\*|each|in|is|mod|of|or|when|where|with)
- (?=(?:[+\-*/~^<>%&|?!@#.])?[a-zA-Z$_](?:[a-zA-Z$0-9_-]*[a-zA-Z$0-9_])?)''',
+ (?=(?:[+\-*/~^<>%&|?!@#.])?[a-zA-Z$_](?:[\w$-]*[\w$])?)''',
Keyword.Control),
- (r'([a-zA-Z$_](?:[a-zA-Z$0-9_-]*[a-zA-Z$0-9_])?)(?!\n)\s+(?=[\'"\d\{\[\(])',
+ (r'([a-zA-Z$_](?:[\w$-]*[\w$])?)(?!\n)\s+(?=[\'"\d{\[(])',
Keyword.Control),
(r'''(?x)
(?:
(?<=with|each|with)|
(?<=each\*|where)
)(\s+)
- ([a-zA-Z$_](?:[a-zA-Z$0-9_\-]*[a-zA-Z$0-9_])?)(:)''',
+ ([a-zA-Z$_](?:[\w$-]*[\w$])?)(:)''',
bygroups(Text, Keyword.Control, Punctuation)),
(r'''(?x)
(?<![+\-*/~^<>%&|?!@#.])(\s+)
- ([a-zA-Z$_](?:[a-zA-Z$0-9_-]*[a-zA-Z$0-9_])?)(:)''',
+ ([a-zA-Z$_](?:[\w$-]*[\w$])?)(:)''',
bygroups(Text, Keyword.Control, Punctuation)),
],
'nested': [
(r'''(?x)
- (?<=[a-zA-Z$0-9_\]\}\)])(\.)
- ([a-zA-Z$_](?:[a-zA-Z$0-9_-]*[a-zA-Z$0-9_])?)
+ (?<=[\w$\]})])(\.)
+ ([a-zA-Z$_](?:[\w$-]*[\w$])?)
(?=\s+with(?:\s|\n))''',
bygroups(Punctuation, Name.Function)),
(r'''(?x)
(?<!\s)(\.)
- ([a-zA-Z$_](?:[a-zA-Z$0-9_-]*[a-zA-Z$0-9_])?)
- (?=[\}\]\)\.,;:\s])''',
+ ([a-zA-Z$_](?:[\w$-]*[\w$])?)
+ (?=[}\]).,;:\s])''',
bygroups(Punctuation, Name.Field)),
(r'''(?x)
- (?<=[a-zA-Z$0-9_\]\}\)])(\.)
- ([a-zA-Z$_](?:[a-zA-Z$0-9_-]*[a-zA-Z$0-9_])?)
- (?=[\[\{\(:])''',
+ (?<=[\w$\]})])(\.)
+ ([a-zA-Z$_](?:[\w$-]*[\w$])?)
+ (?=[\[{(:])''',
bygroups(Punctuation, Name.Function)),
],
'keywords': [
'continue', 'elif', 'expr-value', 'if', 'match',
'return', 'yield', 'pass', 'else', 'require', 'var',
'let', 'async', 'method', 'gen'),
- prefix=r'(?<![\w\$\-\.])', suffix=r'(?![\w\$\-\.])'),
+ prefix=r'(?<![\w\-$.])', suffix=r'(?![\w\-$.])'),
Keyword.Pseudo),
(words(('this', 'self', '@'),
- prefix=r'(?<![\w\$\-\.])', suffix=r'(?![\w\$\-])'),
+ prefix=r'(?<![\w\-$.])', suffix=r'(?![\w\-$])'),
Keyword.Constant),
(words((
'Function', 'Object', 'Array', 'String', 'Number',
'Boolean', 'ErrorFactory', 'ENode', 'Promise'),
- prefix=r'(?<![\w\$\-\.])', suffix=r'(?![\w\$\-])'),
+ prefix=r'(?<![\w\-$.])', suffix=r'(?![\w\-$])'),
Keyword.Type),
],
'builtins': [
'getChecker', 'get-checker', 'getProperty', 'get-property',
'getProjector', 'get-projector', 'consume', 'take',
'promisify', 'spawn', 'constructor'),
- prefix=r'(?<![\w\-#\.])', suffix=r'(?![\w\-\.])'),
+ prefix=r'(?<![\w\-#.])', suffix=r'(?![\w\-.])'),
Name.Builtin),
(words((
'true', 'false', 'null', 'undefined'),
- prefix=r'(?<![\w\$\-\.])', suffix=r'(?![\w\$\-\.])'),
+ prefix=r'(?<![\w\-$.])', suffix=r'(?![\w\-$.])'),
Name.Constant),
],
'name': [
- (r'@([a-zA-Z$_](?:[a-zA-Z$0-9_-]*[a-zA-Z$0-9_])?)', Name.Variable.Instance),
- (r'([a-zA-Z$_](?:[a-zA-Z$0-9_-]*[a-zA-Z$0-9_])?)(\+\+|\-\-)?',
+ (r'@([a-zA-Z$_](?:[\w$-]*[\w$])?)', Name.Variable.Instance),
+ (r'([a-zA-Z$_](?:[\w$-]*[\w$])?)(\+\+|\-\-)?',
bygroups(Name.Symbol, Operator.Word))
],
'tuple': [
- (r'#[a-zA-Z_][a-zA-Z_\-0-9]*(?=[\s\{\(,;\n])', Name.Namespace)
+ (r'#[a-zA-Z_][\w\-]*(?=[\s{(,;])', Name.Namespace)
],
'interpoling_string': [
(r'\}', String.Interpol, '#pop'),
(r'```', String.Backtick, '#pop'),
(r'\n', String.Backtick),
(r'\^=?', String.Escape),
- (r'[^\`]+', String.Backtick),
+ (r'[^`]+', String.Backtick),
],
'numbers': [
(r'\d+\.(?!\.)\d*([eE][+-]?[0-9]+)?', Number.Float),
(r'8r[0-7]+', Number.Oct),
(r'2r[01]+', Number.Bin),
(r'16r[a-fA-F0-9]+', Number.Hex),
- (r'([3-79]|[1-2][0-9]|3[0-6])r[a-zA-Z\d]+(\.[a-zA-Z\d]+)?', Number.Radix),
+ (r'([3-79]|[12][0-9]|3[0-6])r[a-zA-Z\d]+(\.[a-zA-Z\d]+)?', Number.Radix),
(r'\d+', Number.Integer)
],
}
+
+class JuttleLexer(RegexLexer):
+ """
+ For `Juttle`_ source code.
+
+ .. _Juttle: https://github.com/juttle/juttle
+
+ """
+
+ name = 'Juttle'
+ aliases = ['juttle']
+ filenames = ['*.juttle']
+ mimetypes = ['application/juttle', 'application/x-juttle',
+ 'text/x-juttle', 'text/juttle']
+
+ flags = re.DOTALL | re.UNICODE | re.MULTILINE
+
+ tokens = {
+ 'commentsandwhitespace': [
+ (r'\s+', Text),
+ (r'//.*?\n', Comment.Single),
+ (r'/\*.*?\*/', Comment.Multiline)
+ ],
+ 'slashstartsregex': [
+ include('commentsandwhitespace'),
+ (r'/(\\.|[^[/\\\n]|\[(\\.|[^\]\\\n])*])+/'
+ r'([gim]+\b|\B)', String.Regex, '#pop'),
+ (r'(?=/)', Text, ('#pop', 'badregex')),
+ default('#pop')
+ ],
+ 'badregex': [
+ (r'\n', Text, '#pop')
+ ],
+ 'root': [
+ (r'^(?=\s|/)', Text, 'slashstartsregex'),
+ include('commentsandwhitespace'),
+ (r':\d{2}:\d{2}:\d{2}(\.\d*)?:', String.Moment),
+ (r':(now|beginning|end|forever|yesterday|today|tomorrow|(\d+(\.\d*)?|\.\d+)(ms|[smhdwMy])?):', String.Moment),
+ (r':\d{4}-\d{2}-\d{2}(T\d{2}:\d{2}:\d{2}(\.\d*)?)?(Z|[+-]\d{2}:\d{2}|[+-]\d{4})?:', String.Moment),
+ (r':((\d+(\.\d*)?|\.\d+)[ ]+)?(millisecond|second|minute|hour|day|week|month|year)[s]?'
+ r'(([ ]+and[ ]+(\d+[ ]+)?(millisecond|second|minute|hour|day|week|month|year)[s]?)'
+ r'|[ ]+(ago|from[ ]+now))*:', String.Moment),
+ (r'\+\+|--|~|&&|\?|:|\|\||\\(?=\n)|'
+ r'(==?|!=?|[-<>+*%&|^/])=?', Operator, 'slashstartsregex'),
+ (r'[{(\[;,]', Punctuation, 'slashstartsregex'),
+ (r'[})\].]', Punctuation),
+ (r'(import|return|continue|if|else)\b', Keyword, 'slashstartsregex'),
+ (r'(var|const|function|reducer|sub|input)\b', Keyword.Declaration, 'slashstartsregex'),
+ (r'(batch|emit|filter|head|join|keep|pace|pass|put|read|reduce|remove|'
+ r'sequence|skip|sort|split|tail|unbatch|uniq|view|write)\b', Keyword.Reserved),
+ (r'(true|false|null|Infinity)\b', Keyword.Constant),
+ (r'(Array|Date|Juttle|Math|Number|Object|RegExp|String)\b', Name.Builtin),
+ (JS_IDENT, Name.Other),
+ (r'[0-9][0-9]*\.[0-9]+([eE][0-9]+)?[fd]?', Number.Float),
+ (r'[0-9]+', Number.Integer),
+ (r'"(\\\\|\\"|[^"])*"', String.Double),
+ (r"'(\\\\|\\'|[^'])*'", String.Single)
+ ]
+
+ }
Lexers for the Julia language.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
import re
-from pygments.lexer import Lexer, RegexLexer, bygroups, combined, do_insertions
+from pygments.lexer import Lexer, RegexLexer, bygroups, do_insertions, \
+ words, include
from pygments.token import Text, Comment, Operator, Keyword, Name, String, \
Number, Punctuation, Generic
from pygments.util import shebang_matches, unirange
__all__ = ['JuliaLexer', 'JuliaConsoleLexer']
+allowed_variable = (
+ u'(?:[a-zA-Z_\u00A1-\uffff]|%s)(?:[a-zA-Z_0-9\u00A1-\uffff]|%s)*!*' %
+ ((unirange(0x10000, 0x10ffff),) * 2))
+
class JuliaLexer(RegexLexer):
"""
.. versionadded:: 1.6
"""
+
name = 'Julia'
aliases = ['julia', 'jl']
filenames = ['*.jl']
flags = re.MULTILINE | re.UNICODE
- builtins = [
- 'exit', 'whos', 'edit', 'load', 'is', 'isa', 'isequal', 'typeof', 'tuple',
- 'ntuple', 'uid', 'hash', 'finalizer', 'convert', 'promote', 'subtype',
- 'typemin', 'typemax', 'realmin', 'realmax', 'sizeof', 'eps', 'promote_type',
- 'method_exists', 'applicable', 'invoke', 'dlopen', 'dlsym', 'system',
- 'error', 'throw', 'assert', 'new', 'Inf', 'Nan', 'pi', 'im',
- ]
-
tokens = {
'root': [
(r'\n', Text),
(r'[^\S\n]+', Text),
(r'#=', Comment.Multiline, "blockcomment"),
(r'#.*$', Comment),
- (r'[]{}:(),;[@]', Punctuation),
- (r'\\\n', Text),
- (r'\\', Text),
+ (r'[\[\]{}(),;]', Punctuation),
# keywords
- (r'(begin|while|for|in|return|break|continue|'
- r'macro|quote|let|if|elseif|else|try|catch|end|'
- r'bitstype|ccall|do|using|module|import|export|'
- r'importall|baremodule|immutable)\b', Keyword),
+ (r'in\b', Keyword.Pseudo),
+ (r'(true|false)\b', Keyword.Constant),
(r'(local|global|const)\b', Keyword.Declaration),
- (r'(Bool|Int|Int8|Int16|Int32|Int64|Uint|Uint8|Uint16|Uint32|Uint64'
- r'|Float32|Float64|Complex64|Complex128|Any|Nothing|None)\b',
- Keyword.Type),
-
+ (words([
+ 'function', 'type', 'typealias', 'abstract', 'immutable',
+ 'baremodule', 'begin', 'bitstype', 'break', 'catch', 'ccall',
+ 'continue', 'do', 'else', 'elseif', 'end', 'export', 'finally',
+ 'for', 'if', 'import', 'importall', 'let', 'macro', 'module',
+ 'quote', 'return', 'try', 'using', 'while'],
+ suffix=r'\b'), Keyword),
+
+ # NOTE
+ # Patterns below work only for definition sites and thus hardly reliable.
+ #
# functions
- (r'(function)((?:\s|\\\s)+)',
- bygroups(Keyword, Name.Function), 'funcname'),
-
+ # (r'(function)(\s+)(' + allowed_variable + ')',
+ # bygroups(Keyword, Text, Name.Function)),
+ #
# types
- (r'(type|typealias|abstract|immutable)((?:\s|\\\s)+)',
- bygroups(Keyword, Name.Class), 'typename'),
-
- # operators
- (r'==|!=|<=|>=|->|&&|\|\||::|<:|[-~+/*%=<>&^|.?!$]', Operator),
- (r'\.\*|\.\^|\.\\|\.\/|\\', Operator),
+ # (r'(type|typealias|abstract|immutable)(\s+)(' + allowed_variable + ')',
+ # bygroups(Keyword, Text, Name.Class)),
+
+ # type names
+ (words([
+ 'ANY', 'ASCIIString', 'AbstractArray', 'AbstractChannel',
+ 'AbstractFloat', 'AbstractMatrix', 'AbstractRNG',
+ 'AbstractSparseArray', 'AbstractSparseMatrix',
+ 'AbstractSparseVector', 'AbstractString', 'AbstractVecOrMat',
+ 'AbstractVector', 'Any', 'ArgumentError', 'Array',
+ 'AssertionError', 'Associative', 'Base64DecodePipe',
+ 'Base64EncodePipe', 'Bidiagonal', 'BigFloat', 'BigInt',
+ 'BitArray', 'BitMatrix', 'BitVector', 'Bool', 'BoundsError',
+ 'Box', 'BufferStream', 'CapturedException', 'CartesianIndex',
+ 'CartesianRange', 'Cchar', 'Cdouble', 'Cfloat', 'Channel',
+ 'Char', 'Cint', 'Cintmax_t', 'Clong', 'Clonglong',
+ 'ClusterManager', 'Cmd', 'Coff_t', 'Colon', 'Complex',
+ 'Complex128', 'Complex32', 'Complex64', 'CompositeException',
+ 'Condition', 'Cptrdiff_t', 'Cshort', 'Csize_t', 'Cssize_t',
+ 'Cstring', 'Cuchar', 'Cuint', 'Cuintmax_t', 'Culong',
+ 'Culonglong', 'Cushort', 'Cwchar_t', 'Cwstring', 'DataType',
+ 'Date', 'DateTime', 'DenseArray', 'DenseMatrix',
+ 'DenseVecOrMat', 'DenseVector', 'Diagonal', 'Dict',
+ 'DimensionMismatch', 'Dims', 'DirectIndexString', 'Display',
+ 'DivideError', 'DomainError', 'EOFError', 'EachLine', 'Enum',
+ 'Enumerate', 'ErrorException', 'Exception', 'Expr',
+ 'Factorization', 'FileMonitor', 'FileOffset', 'Filter',
+ 'Float16', 'Float32', 'Float64', 'FloatRange', 'Function',
+ 'GenSym', 'GlobalRef', 'GotoNode', 'HTML', 'Hermitian', 'IO',
+ 'IOBuffer', 'IOStream', 'IPv4', 'IPv6', 'InexactError',
+ 'InitError', 'Int', 'Int128', 'Int16', 'Int32', 'Int64', 'Int8',
+ 'IntSet', 'Integer', 'InterruptException', 'IntrinsicFunction',
+ 'InvalidStateException', 'Irrational', 'KeyError', 'LabelNode',
+ 'LambdaStaticData', 'LinSpace', 'LineNumberNode', 'LoadError',
+ 'LocalProcess', 'LowerTriangular', 'MIME', 'Matrix',
+ 'MersenneTwister', 'Method', 'MethodError', 'MethodTable',
+ 'Module', 'NTuple', 'NewvarNode', 'NullException', 'Nullable',
+ 'Number', 'ObjectIdDict', 'OrdinalRange', 'OutOfMemoryError',
+ 'OverflowError', 'Pair', 'ParseError', 'PartialQuickSort',
+ 'Pipe', 'PollingFileWatcher', 'ProcessExitedException',
+ 'ProcessGroup', 'Ptr', 'QuoteNode', 'RandomDevice', 'Range',
+ 'Rational', 'RawFD', 'ReadOnlyMemoryError', 'Real',
+ 'ReentrantLock', 'Ref', 'Regex', 'RegexMatch',
+ 'RemoteException', 'RemoteRef', 'RepString', 'RevString',
+ 'RopeString', 'RoundingMode', 'SegmentationFault',
+ 'SerializationState', 'Set', 'SharedArray', 'SharedMatrix',
+ 'SharedVector', 'Signed', 'SimpleVector', 'SparseMatrixCSC',
+ 'StackOverflowError', 'StatStruct', 'StepRange', 'StridedArray',
+ 'StridedMatrix', 'StridedVecOrMat', 'StridedVector', 'SubArray',
+ 'SubString', 'SymTridiagonal', 'Symbol', 'SymbolNode',
+ 'Symmetric', 'SystemError', 'TCPSocket', 'Task', 'Text',
+ 'TextDisplay', 'Timer', 'TopNode', 'Tridiagonal', 'Tuple',
+ 'Type', 'TypeConstructor', 'TypeError', 'TypeName', 'TypeVar',
+ 'UDPSocket', 'UInt', 'UInt128', 'UInt16', 'UInt32', 'UInt64',
+ 'UInt8', 'UTF16String', 'UTF32String', 'UTF8String',
+ 'UndefRefError', 'UndefVarError', 'UnicodeError', 'UniformScaling',
+ 'Union', 'UnitRange', 'Unsigned', 'UpperTriangular', 'Val',
+ 'Vararg', 'VecOrMat', 'Vector', 'VersionNumber', 'Void', 'WString',
+ 'WeakKeyDict', 'WeakRef', 'WorkerConfig', 'Zip'], suffix=r'\b'),
+ Keyword.Type),
# builtins
- ('(' + '|'.join(builtins) + r')\b', Name.Builtin),
+ (words([
+ u'ARGS', u'CPU_CORES', u'C_NULL', u'DevNull', u'ENDIAN_BOM',
+ u'ENV', u'I', u'Inf', u'Inf16', u'Inf32', u'Inf64',
+ u'InsertionSort', u'JULIA_HOME', u'LOAD_PATH', u'MergeSort',
+ u'NaN', u'NaN16', u'NaN32', u'NaN64', u'OS_NAME',
+ u'QuickSort', u'RoundDown', u'RoundFromZero', u'RoundNearest',
+ u'RoundNearestTiesAway', u'RoundNearestTiesUp',
+ u'RoundToZero', u'RoundUp', u'STDERR', u'STDIN', u'STDOUT',
+ u'VERSION', u'WORD_SIZE', u'catalan', u'e', u'eu',
+ u'eulergamma', u'golden', u'im', u'nothing', u'pi', u'γ',
+ u'π', u'φ'],
+ suffix=r'\b'), Name.Builtin),
- # backticks
- (r'`(?s).*?`', String.Backtick),
+ # operators
+ # see: https://github.com/JuliaLang/julia/blob/master/src/julia-parser.scm
+ (words([
+ # prec-assignment
+ u'=', u':=', u'+=', u'-=', u'*=', u'/=', u'//=', u'.//=', u'.*=', u'./=',
+ u'\\=', u'.\\=', u'^=', u'.^=', u'÷=', u'.÷=', u'%=', u'.%=', u'|=', u'&=',
+ u'$=', u'=>', u'<<=', u'>>=', u'>>>=', u'~', u'.+=', u'.-=',
+ # prec-conditional
+ u'?',
+ # prec-arrow
+ u'--', u'-->',
+ # prec-lazy-or
+ u'||',
+ # prec-lazy-and
+ u'&&',
+ # prec-comparison
+ u'>', u'<', u'>=', u'≥', u'<=', u'≤', u'==', u'===', u'≡', u'!=', u'≠',
+ u'!==', u'≢', u'.>', u'.<', u'.>=', u'.≥', u'.<=', u'.≤', u'.==', u'.!=',
+ u'.≠', u'.=', u'.!', u'<:', u'>:', u'∈', u'∉', u'∋', u'∌', u'⊆',
+ u'⊈', u'⊂',
+ u'⊄', u'⊊',
+ # prec-pipe
+ u'|>', u'<|',
+ # prec-colon
+ u':',
+ # prec-plus
+ u'+', u'-', u'.+', u'.-', u'|', u'∪', u'$',
+ # prec-bitshift
+ u'<<', u'>>', u'>>>', u'.<<', u'.>>', u'.>>>',
+ # prec-times
+ u'*', u'/', u'./', u'÷', u'.÷', u'%', u'⋅', u'.%', u'.*', u'\\', u'.\\', u'&', u'∩',
+ # prec-rational
+ u'//', u'.//',
+ # prec-power
+ u'^', u'.^',
+ # prec-decl
+ u'::',
+ # prec-dot
+ u'.',
+ # unary op
+ u'+', u'-', u'!', u'~', u'√', u'∛', u'∜'
+ ]), Operator),
# chars
(r"'(\\.|\\[0-7]{1,3}|\\x[a-fA-F0-9]{1,3}|\\u[a-fA-F0-9]{1,4}|"
(r'(?<=[.\w)\]])\'+', Operator),
# strings
- (r'(?:[IL])"', String, 'string'),
- (r'[E]?"', String, combined('stringescape', 'string')),
+ (r'"""', String, 'tqstring'),
+ (r'"', String, 'string'),
+
+ # regular expressions
+ (r'r"""', String.Regex, 'tqregex'),
+ (r'r"', String.Regex, 'regex'),
+
+ # backticks
+ (r'`', String.Backtick, 'command'),
# names
- (r'@[\w.]+', Name.Decorator),
- (u'(?:[a-zA-Z_\u00A1-\uffff]|%s)(?:[a-zA-Z_0-9\u00A1-\uffff]|%s)*!*' %
- ((unirange(0x10000, 0x10ffff),)*2), Name),
+ (allowed_variable, Name),
+ (r'@' + allowed_variable, Name.Decorator),
# numbers
(r'(\d+(_\d+)+\.\d*|\d*\.\d+(_\d+)+)([eEf][+-]?[0-9]+)?', Number.Float),
(r'\d+', Number.Integer)
],
- 'funcname': [
- ('[a-zA-Z_]\w*', Name.Function, '#pop'),
- ('\([^\s\w{]{1,2}\)', Operator, '#pop'),
- ('[^\s\w{]{1,2}', Operator, '#pop'),
- ],
-
- 'typename': [
- ('[a-zA-Z_]\w*', Name.Class, '#pop')
- ],
-
- 'stringescape': [
- (r'\\([\\abfnrtv"\']|\n|N\{.*?\}|u[a-fA-F0-9]{4}|'
- r'U[a-fA-F0-9]{8}|x[a-fA-F0-9]{2}|[0-7]{1,3})', String.Escape)
- ],
"blockcomment": [
(r'[^=#]', Comment.Multiline),
(r'#=', Comment.Multiline, '#push'),
(r'=#', Comment.Multiline, '#pop'),
(r'[=#]', Comment.Multiline),
],
+
'string': [
(r'"', String, '#pop'),
- (r'\\\\|\\"|\\\n', String.Escape), # included here for raw strings
+ # FIXME: This escape pattern is not perfect.
+ (r'\\([\\"\'$nrbtfav]|(x|u|U)[a-fA-F0-9]+|\d+)', String.Escape),
# Interpolation is defined as "$" followed by the shortest full
# expression, which is something we can't parse.
# Include the most common cases here: $word, and $(paren'd expr).
- (r'\$[a-zA-Z_]+', String.Interpol),
- (r'\$\(', String.Interpol, 'in-intp'),
+ (r'\$' + allowed_variable, String.Interpol),
+ # (r'\$[a-zA-Z_]+', String.Interpol),
+ (r'(\$)(\()', bygroups(String.Interpol, Punctuation), 'in-intp'),
# @printf and @sprintf formats
- (r'%[-#0 +]*([0-9]+|[*])?(\.([0-9]+|[*]))?[hlL]?[diouxXeEfFgGcrs%]',
+ (r'%[-#0 +]*([0-9]+|[*])?(\.([0-9]+|[*]))?[hlL]?[E-GXc-giorsux%]',
String.Interpol),
- (r'[^$%"\\]+', String),
- # unhandled special signs
- (r'[$%"\\]', String),
+ (r'.|\s', String),
],
+
+ 'tqstring': [
+ (r'"""', String, '#pop'),
+ (r'\\([\\"\'$nrbtfav]|(x|u|U)[a-fA-F0-9]+|\d+)', String.Escape),
+ (r'\$' + allowed_variable, String.Interpol),
+ (r'(\$)(\()', bygroups(String.Interpol, Punctuation), 'in-intp'),
+ (r'.|\s', String),
+ ],
+
+ 'regex': [
+ (r'"', String.Regex, '#pop'),
+ (r'\\"', String.Regex),
+ (r'.|\s', String.Regex),
+ ],
+
+ 'tqregex': [
+ (r'"""', String.Regex, '#pop'),
+ (r'.|\s', String.Regex),
+ ],
+
+ 'command': [
+ (r'`', String.Backtick, '#pop'),
+ (r'\$' + allowed_variable, String.Interpol),
+ (r'(\$)(\()', bygroups(String.Interpol, Punctuation), 'in-intp'),
+ (r'.|\s', String.Backtick)
+ ],
+
'in-intp': [
- (r'[^()]+', String.Interpol),
- (r'\(', String.Interpol, '#push'),
- (r'\)', String.Interpol, '#pop'),
+ (r'\(', Punctuation, '#push'),
+ (r'\)', Punctuation, '#pop'),
+ include('root'),
]
}
return shebang_matches(text, r'julia')
-line_re = re.compile('.*?\n')
-
-
class JuliaConsoleLexer(Lexer):
"""
For Julia console sessions. Modeled after MatlabSessionLexer.
def get_tokens_unprocessed(self, text):
jllexer = JuliaLexer(**self.options)
-
+ start = 0
curcode = ''
insertions = []
+ output = False
+ error = False
- for match in line_re.finditer(text):
- line = match.group()
-
+ for line in text.splitlines(True):
if line.startswith('julia>'):
- insertions.append((len(curcode),
- [(0, Generic.Prompt, line[:6])]))
+ insertions.append((len(curcode), [(0, Generic.Prompt, line[:6])]))
+ curcode += line[6:]
+ output = False
+ error = False
+ elif line.startswith('help?>') or line.startswith('shell>'):
+ yield start, Generic.Prompt, line[:6]
+ yield start + 6, Text, line[6:]
+ output = False
+ error = False
+ elif line.startswith('      ') and not output:
+ insertions.append((len(curcode), [(0, Text, line[:6])]))
curcode += line[6:]
-
- elif line.startswith('      '):
-
- idx = len(curcode)
-
- # without is showing error on same line as before...?
- line = "\n" + line
- token = (0, Generic.Traceback, line)
- insertions.append((idx, [token]))
-
else:
if curcode:
for item in do_insertions(
yield item
curcode = ''
insertions = []
-
- yield match.start(), Generic.Output, line
-
- if curcode: # or item:
+ if line.startswith('ERROR: ') or error:
+ yield start, Generic.Error, line
+ error = True
+ else:
+ yield start, Generic.Output, line
+ output = True
+ start += len(line)
+
+ if curcode:
for item in do_insertions(
insertions, jllexer.get_tokens_unprocessed(curcode)):
yield item
Pygments lexers for JVM languages.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Keyword.Type),
(r'(package)(\s+)', bygroups(Keyword.Namespace, Text), 'import'),
(r'(true|false|null)\b', Keyword.Constant),
- (r'(class|interface)(\s+)', bygroups(Keyword.Declaration, Text), 'class'),
- (r'(import)(\s+)', bygroups(Keyword.Namespace, Text), 'import'),
+ (r'(class|interface)(\s+)', bygroups(Keyword.Declaration, Text),
+ 'class'),
+ (r'(import(?:\s+static)?)(\s+)', bygroups(Keyword.Namespace, Text),
+ 'import'),
(r'"(\\\\|\\"|[^"])*"', String),
(r"'\\.'|'[^\\]'|'\\u[0-9a-fA-F]{4}'", String.Char),
(r'(\.)((?:[^\W\d]|\$)[\w$]*)', bygroups(Operator, Name.Attribute)),
(r'^\s*([^\W\d]|\$)[\w$]*:', Name.Label),
(r'([^\W\d]|\$)[\w$]*', Name),
- (r'([0-9](_*[0-9]+)*\.([0-9](_*[0-9]+)*)?|'
- r'([0-9](_*[0-9]+)*)?\.[0-9](_*[0-9]+)*)'
- r'([eE][+\-]?[0-9](_*[0-9]+)*)?[fFdD]?|'
- r'[0-9][eE][+\-]?[0-9](_*[0-9]+)*[fFdD]?|'
- r'[0-9]([eE][+\-]?[0-9](_*[0-9]+)*)?[fFdD]|'
- r'0[xX]([0-9a-fA-F](_*[0-9a-fA-F]+)*\.?|'
- r'([0-9a-fA-F](_*[0-9a-fA-F]+)*)?\.[0-9a-fA-F](_*[0-9a-fA-F]+)*)'
- r'[pP][+\-]?[0-9](_*[0-9]+)*[fFdD]?', Number.Float),
- (r'0[xX][0-9a-fA-F](_*[0-9a-fA-F]+)*[lL]?', Number.Hex),
- (r'0[bB][01](_*[01]+)*[lL]?', Number.Bin),
- (r'0(_*[0-7]+)+[lL]?', Number.Oct),
- (r'0|[1-9](_*[0-9]+)*[lL]?', Number.Integer),
+ (r'([0-9][0-9_]*\.([0-9][0-9_]*)?|'
+ r'\.[0-9][0-9_]*)'
+ r'([eE][+\-]?[0-9][0-9_]*)?[fFdD]?|'
+ r'[0-9][eE][+\-]?[0-9][0-9_]*[fFdD]?|'
+ r'[0-9]([eE][+\-]?[0-9][0-9_]*)?[fFdD]|'
+ r'0[xX]([0-9a-fA-F][0-9a-fA-F_]*\.?|'
+ r'([0-9a-fA-F][0-9a-fA-F_]*)?\.[0-9a-fA-F][0-9a-fA-F_]*)'
+ r'[pP][+\-]?[0-9][0-9_]*[fFdD]?', Number.Float),
+ (r'0[xX][0-9a-fA-F][0-9a-fA-F_]*[lL]?', Number.Hex),
+ (r'0[bB][01][01_]*[lL]?', Number.Bin),
+ (r'0[0-7_]+[lL]?', Number.Oct),
+ (r'0|[1-9][0-9_]*[lL]?', Number.Integer),
(r'[~^*!%&\[\](){}<>|+=:;,./?-]', Operator),
(r'\n', Text)
],
],
'slashRegexp': [
- (r'(?<!\\)/[oxpniums]*', String.Regex, '#pop'),
+ (r'(?<!\\)/[im-psux]*', String.Regex, '#pop'),
include('interpolatableText'),
(r'\\/', String.Regex),
(r'[^/]', String.Regex)
],
'squareRegexp': [
- (r'(?<!\\)][oxpniums]*', String.Regex, '#pop'),
+ (r'(?<!\\)][im-psux]*', String.Regex, '#pop'),
include('interpolatableText'),
(r'\\]', String.Regex),
(r'[^\]]', String.Regex)
Lexers for Lispy languages.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
from pygments.lexers.python import PythonLexer
__all__ = ['SchemeLexer', 'CommonLispLexer', 'HyLexer', 'RacketLexer',
- 'NewLispLexer', 'EmacsLispLexer', 'ShenLexer', 'CPSALexer']
+ 'NewLispLexer', 'EmacsLispLexer', 'ShenLexer', 'CPSALexer',
+ 'XtlangLexer']
class SchemeLexer(RegexLexer):
# Generated by example.rkt
_keywords = (
- '#%app', '#%datum', '#%declare', '#%expression', '#%module-begin',
- '#%plain-app', '#%plain-lambda', '#%plain-module-begin',
- '#%printing-module-begin', '#%provide', '#%require',
- '#%stratified-body', '#%top', '#%top-interaction',
- '#%variable-reference', '->', '->*', '->*m', '->d', '->dm', '->i',
- '->m', '...', ':do-in', '==', '=>', '_', 'absent', 'abstract',
- 'all-defined-out', 'all-from-out', 'and', 'any', 'augment', 'augment*',
- 'augment-final', 'augment-final*', 'augride', 'augride*', 'begin',
- 'begin-for-syntax', 'begin0', 'case', 'case->', 'case->m',
- 'case-lambda', 'class', 'class*', 'class-field-accessor',
- 'class-field-mutator', 'class/c', 'class/derived', 'combine-in',
- 'combine-out', 'command-line', 'compound-unit', 'compound-unit/infer',
- 'cond', 'contract', 'contract-out', 'contract-struct', 'contracted',
- 'define', 'define-compound-unit', 'define-compound-unit/infer',
- 'define-contract-struct', 'define-custom-hash-types',
- 'define-custom-set-types', 'define-for-syntax',
- 'define-local-member-name', 'define-logger', 'define-match-expander',
- 'define-member-name', 'define-module-boundary-contract',
- 'define-namespace-anchor', 'define-opt/c', 'define-sequence-syntax',
- 'define-serializable-class', 'define-serializable-class*',
- 'define-signature', 'define-signature-form', 'define-struct',
- 'define-struct/contract', 'define-struct/derived', 'define-syntax',
- 'define-syntax-rule', 'define-syntaxes', 'define-unit',
- 'define-unit-binding', 'define-unit-from-context',
- 'define-unit/contract', 'define-unit/new-import-export',
- 'define-unit/s', 'define-values', 'define-values-for-export',
- 'define-values-for-syntax', 'define-values/invoke-unit',
- 'define-values/invoke-unit/infer', 'define/augment',
- 'define/augment-final', 'define/augride', 'define/contract',
- 'define/final-prop', 'define/match', 'define/overment',
- 'define/override', 'define/override-final', 'define/private',
- 'define/public', 'define/public-final', 'define/pubment',
- 'define/subexpression-pos-prop', 'delay', 'delay/idle', 'delay/name',
- 'delay/strict', 'delay/sync', 'delay/thread', 'do', 'else', 'except',
- 'except-in', 'except-out', 'export', 'extends', 'failure-cont',
- 'false', 'false/c', 'field', 'field-bound?', 'file',
- 'flat-murec-contract', 'flat-rec-contract', 'for', 'for*', 'for*/and',
- 'for*/first', 'for*/fold', 'for*/fold/derived', 'for*/hash',
- 'for*/hasheq', 'for*/hasheqv', 'for*/last', 'for*/list', 'for*/lists',
- 'for*/mutable-set', 'for*/mutable-seteq', 'for*/mutable-seteqv',
- 'for*/or', 'for*/product', 'for*/set', 'for*/seteq', 'for*/seteqv',
- 'for*/sum', 'for*/vector', 'for*/weak-set', 'for*/weak-seteq',
- 'for*/weak-seteqv', 'for-label', 'for-meta', 'for-syntax',
- 'for-template', 'for/and', 'for/first', 'for/fold', 'for/fold/derived',
- 'for/hash', 'for/hasheq', 'for/hasheqv', 'for/last', 'for/list',
- 'for/lists', 'for/mutable-set', 'for/mutable-seteq',
- 'for/mutable-seteqv', 'for/or', 'for/product', 'for/set', 'for/seteq',
- 'for/seteqv', 'for/sum', 'for/vector', 'for/weak-set',
- 'for/weak-seteq', 'for/weak-seteqv', 'gen:custom-write', 'gen:dict',
- 'gen:equal+hash', 'gen:set', 'gen:stream', 'generic', 'get-field',
- 'if', 'implies', 'import', 'include', 'include-at/relative-to',
- 'include-at/relative-to/reader', 'include/reader', 'inherit',
- 'inherit-field', 'inherit/inner', 'inherit/super', 'init',
- 'init-depend', 'init-field', 'init-rest', 'inner', 'inspect',
- 'instantiate', 'interface', 'interface*', 'invoke-unit',
- 'invoke-unit/infer', 'lambda', 'lazy', 'let', 'let*', 'let*-values',
- 'let-syntax', 'let-syntaxes', 'let-values', 'let/cc', 'let/ec',
- 'letrec', 'letrec-syntax', 'letrec-syntaxes', 'letrec-syntaxes+values',
- 'letrec-values', 'lib', 'link', 'local', 'local-require', 'log-debug',
- 'log-error', 'log-fatal', 'log-info', 'log-warning', 'match', 'match*',
- 'match*/derived', 'match-define', 'match-define-values',
- 'match-lambda', 'match-lambda*', 'match-lambda**', 'match-let',
- 'match-let*', 'match-let*-values', 'match-let-values', 'match-letrec',
- 'match/derived', 'match/values', 'member-name-key', 'method-contract?',
- 'mixin', 'module', 'module*', 'module+', 'nand', 'new', 'nor',
- 'object-contract', 'object/c', 'only', 'only-in', 'only-meta-in',
- 'open', 'opt/c', 'or', 'overment', 'overment*', 'override',
- 'override*', 'override-final', 'override-final*', 'parameterize',
- 'parameterize*', 'parameterize-break', 'parametric->/c', 'place',
- 'place*', 'planet', 'prefix', 'prefix-in', 'prefix-out', 'private',
- 'private*', 'prompt-tag/c', 'protect-out', 'provide',
- 'provide-signature-elements', 'provide/contract', 'public', 'public*',
- 'public-final', 'public-final*', 'pubment', 'pubment*', 'quasiquote',
- 'quasisyntax', 'quasisyntax/loc', 'quote', 'quote-syntax',
- 'quote-syntax/prune', 'recontract-out', 'recursive-contract',
- 'relative-in', 'rename', 'rename-in', 'rename-inner', 'rename-out',
- 'rename-super', 'require', 'send', 'send*', 'send+', 'send-generic',
- 'send/apply', 'send/keyword-apply', 'set!', 'set!-values',
- 'set-field!', 'shared', 'stream', 'stream-cons', 'struct', 'struct*',
- 'struct-copy', 'struct-field-index', 'struct-out', 'struct/c',
- 'struct/ctc', 'struct/dc', 'submod', 'super', 'super-instantiate',
- 'super-make-object', 'super-new', 'syntax', 'syntax-case',
- 'syntax-case*', 'syntax-id-rules', 'syntax-rules', 'syntax/loc', 'tag',
- 'this', 'this%', 'thunk', 'thunk*', 'time', 'unconstrained-domain->',
- 'unit', 'unit-from-context', 'unit/c', 'unit/new-import-export',
- 'unit/s', 'unless', 'unquote', 'unquote-splicing', 'unsyntax',
- 'unsyntax-splicing', 'values/drop', 'when', 'with-continuation-mark',
- 'with-contract', 'with-handlers', 'with-handlers*', 'with-method',
- 'with-syntax', u'λ'
+ u'#%app', u'#%datum', u'#%declare', u'#%expression', u'#%module-begin',
+ u'#%plain-app', u'#%plain-lambda', u'#%plain-module-begin',
+ u'#%printing-module-begin', u'#%provide', u'#%require',
+ u'#%stratified-body', u'#%top', u'#%top-interaction',
+ u'#%variable-reference', u'->', u'->*', u'->*m', u'->d', u'->dm', u'->i',
+ u'->m', u'...', u':do-in', u'==', u'=>', u'_', u'absent', u'abstract',
+ u'all-defined-out', u'all-from-out', u'and', u'any', u'augment', u'augment*',
+ u'augment-final', u'augment-final*', u'augride', u'augride*', u'begin',
+ u'begin-for-syntax', u'begin0', u'case', u'case->', u'case->m',
+ u'case-lambda', u'class', u'class*', u'class-field-accessor',
+ u'class-field-mutator', u'class/c', u'class/derived', u'combine-in',
+ u'combine-out', u'command-line', u'compound-unit', u'compound-unit/infer',
+ u'cond', u'cons/dc', u'contract', u'contract-out', u'contract-struct',
+ u'contracted', u'define', u'define-compound-unit',
+ u'define-compound-unit/infer', u'define-contract-struct',
+ u'define-custom-hash-types', u'define-custom-set-types',
+ u'define-for-syntax', u'define-local-member-name', u'define-logger',
+ u'define-match-expander', u'define-member-name',
+ u'define-module-boundary-contract', u'define-namespace-anchor',
+ u'define-opt/c', u'define-sequence-syntax', u'define-serializable-class',
+ u'define-serializable-class*', u'define-signature',
+ u'define-signature-form', u'define-struct', u'define-struct/contract',
+ u'define-struct/derived', u'define-syntax', u'define-syntax-rule',
+ u'define-syntaxes', u'define-unit', u'define-unit-binding',
+ u'define-unit-from-context', u'define-unit/contract',
+ u'define-unit/new-import-export', u'define-unit/s', u'define-values',
+ u'define-values-for-export', u'define-values-for-syntax',
+ u'define-values/invoke-unit', u'define-values/invoke-unit/infer',
+ u'define/augment', u'define/augment-final', u'define/augride',
+ u'define/contract', u'define/final-prop', u'define/match',
+ u'define/overment', u'define/override', u'define/override-final',
+ u'define/private', u'define/public', u'define/public-final',
+ u'define/pubment', u'define/subexpression-pos-prop',
+ u'define/subexpression-pos-prop/name', u'delay', u'delay/idle',
+ u'delay/name', u'delay/strict', u'delay/sync', u'delay/thread', u'do',
+ u'else', u'except', u'except-in', u'except-out', u'export', u'extends',
+ u'failure-cont', u'false', u'false/c', u'field', u'field-bound?', u'file',
+ u'flat-murec-contract', u'flat-rec-contract', u'for', u'for*', u'for*/and',
+ u'for*/async', u'for*/first', u'for*/fold', u'for*/fold/derived',
+ u'for*/hash', u'for*/hasheq', u'for*/hasheqv', u'for*/last', u'for*/list',
+ u'for*/lists', u'for*/mutable-set', u'for*/mutable-seteq',
+ u'for*/mutable-seteqv', u'for*/or', u'for*/product', u'for*/set',
+ u'for*/seteq', u'for*/seteqv', u'for*/stream', u'for*/sum', u'for*/vector',
+ u'for*/weak-set', u'for*/weak-seteq', u'for*/weak-seteqv', u'for-label',
+ u'for-meta', u'for-syntax', u'for-template', u'for/and', u'for/async',
+ u'for/first', u'for/fold', u'for/fold/derived', u'for/hash', u'for/hasheq',
+ u'for/hasheqv', u'for/last', u'for/list', u'for/lists', u'for/mutable-set',
+ u'for/mutable-seteq', u'for/mutable-seteqv', u'for/or', u'for/product',
+ u'for/set', u'for/seteq', u'for/seteqv', u'for/stream', u'for/sum',
+ u'for/vector', u'for/weak-set', u'for/weak-seteq', u'for/weak-seteqv',
+ u'gen:custom-write', u'gen:dict', u'gen:equal+hash', u'gen:set',
+ u'gen:stream', u'generic', u'get-field', u'hash/dc', u'if', u'implies',
+ u'import', u'include', u'include-at/relative-to',
+ u'include-at/relative-to/reader', u'include/reader', u'inherit',
+ u'inherit-field', u'inherit/inner', u'inherit/super', u'init',
+ u'init-depend', u'init-field', u'init-rest', u'inner', u'inspect',
+ u'instantiate', u'interface', u'interface*', u'invariant-assertion',
+ u'invoke-unit', u'invoke-unit/infer', u'lambda', u'lazy', u'let', u'let*',
+ u'let*-values', u'let-syntax', u'let-syntaxes', u'let-values', u'let/cc',
+ u'let/ec', u'letrec', u'letrec-syntax', u'letrec-syntaxes',
+ u'letrec-syntaxes+values', u'letrec-values', u'lib', u'link', u'local',
+ u'local-require', u'log-debug', u'log-error', u'log-fatal', u'log-info',
+ u'log-warning', u'match', u'match*', u'match*/derived', u'match-define',
+ u'match-define-values', u'match-lambda', u'match-lambda*',
+ u'match-lambda**', u'match-let', u'match-let*', u'match-let*-values',
+ u'match-let-values', u'match-letrec', u'match-letrec-values',
+ u'match/derived', u'match/values', u'member-name-key', u'mixin', u'module',
+ u'module*', u'module+', u'nand', u'new', u'nor', u'object-contract',
+ u'object/c', u'only', u'only-in', u'only-meta-in', u'open', u'opt/c', u'or',
+ u'overment', u'overment*', u'override', u'override*', u'override-final',
+ u'override-final*', u'parameterize', u'parameterize*',
+ u'parameterize-break', u'parametric->/c', u'place', u'place*',
+ u'place/context', u'planet', u'prefix', u'prefix-in', u'prefix-out',
+ u'private', u'private*', u'prompt-tag/c', u'protect-out', u'provide',
+ u'provide-signature-elements', u'provide/contract', u'public', u'public*',
+ u'public-final', u'public-final*', u'pubment', u'pubment*', u'quasiquote',
+ u'quasisyntax', u'quasisyntax/loc', u'quote', u'quote-syntax',
+ u'quote-syntax/prune', u'recontract-out', u'recursive-contract',
+ u'relative-in', u'rename', u'rename-in', u'rename-inner', u'rename-out',
+ u'rename-super', u'require', u'send', u'send*', u'send+', u'send-generic',
+ u'send/apply', u'send/keyword-apply', u'set!', u'set!-values',
+ u'set-field!', u'shared', u'stream', u'stream*', u'stream-cons', u'struct',
+ u'struct*', u'struct-copy', u'struct-field-index', u'struct-out',
+ u'struct/c', u'struct/ctc', u'struct/dc', u'submod', u'super',
+ u'super-instantiate', u'super-make-object', u'super-new', u'syntax',
+ u'syntax-case', u'syntax-case*', u'syntax-id-rules', u'syntax-rules',
+ u'syntax/loc', u'tag', u'this', u'this%', u'thunk', u'thunk*', u'time',
+ u'unconstrained-domain->', u'unit', u'unit-from-context', u'unit/c',
+ u'unit/new-import-export', u'unit/s', u'unless', u'unquote',
+ u'unquote-splicing', u'unsyntax', u'unsyntax-splicing', u'values/drop',
+ u'when', u'with-continuation-mark', u'with-contract',
+ u'with-contract-continuation-mark', u'with-handlers', u'with-handlers*',
+ u'with-method', u'with-syntax', u'λ'
)
# Generated by example.rkt
_builtins = (
- '*', '+', '-', '/', '<', '</c', '<=', '<=/c', '=', '=/c', '>', '>/c',
- '>=', '>=/c', 'abort-current-continuation', 'abs', 'absolute-path?',
- 'acos', 'add-between', 'add1', 'alarm-evt', 'always-evt', 'and/c',
- 'andmap', 'angle', 'any/c', 'append', 'append*', 'append-map', 'apply',
- 'argmax', 'argmin', 'arithmetic-shift', 'arity-at-least',
- 'arity-at-least-value', 'arity-at-least?', 'arity-checking-wrapper',
- 'arity-includes?', 'arity=?', 'asin', 'assf', 'assoc', 'assq', 'assv',
- 'atan', 'bad-number-of-results', 'banner', 'base->-doms/c',
- 'base->-rngs/c', 'base->?', 'between/c', 'bitwise-and',
- 'bitwise-bit-field', 'bitwise-bit-set?', 'bitwise-ior', 'bitwise-not',
- 'bitwise-xor', 'blame-add-car-context', 'blame-add-cdr-context',
- 'blame-add-context', 'blame-add-missing-party',
- 'blame-add-nth-arg-context', 'blame-add-or-context',
- 'blame-add-range-context', 'blame-add-unknown-context',
- 'blame-context', 'blame-contract', 'blame-fmt->-string',
- 'blame-negative', 'blame-original?', 'blame-positive',
- 'blame-replace-negative', 'blame-source', 'blame-swap',
- 'blame-swapped?', 'blame-update', 'blame-value', 'blame?', 'boolean=?',
- 'boolean?', 'bound-identifier=?', 'box', 'box-cas!', 'box-immutable',
- 'box-immutable/c', 'box/c', 'box?', 'break-enabled', 'break-thread',
- 'build-chaperone-contract-property', 'build-compound-type-name',
- 'build-contract-property', 'build-flat-contract-property',
- 'build-list', 'build-path', 'build-path/convention-type',
- 'build-string', 'build-vector', 'byte-pregexp', 'byte-pregexp?',
- 'byte-ready?', 'byte-regexp', 'byte-regexp?', 'byte?', 'bytes',
- 'bytes->immutable-bytes', 'bytes->list', 'bytes->path',
- 'bytes->path-element', 'bytes->string/latin-1', 'bytes->string/locale',
- 'bytes->string/utf-8', 'bytes-append', 'bytes-append*',
- 'bytes-close-converter', 'bytes-convert', 'bytes-convert-end',
- 'bytes-converter?', 'bytes-copy', 'bytes-copy!',
- 'bytes-environment-variable-name?', 'bytes-fill!', 'bytes-join',
- 'bytes-length', 'bytes-no-nuls?', 'bytes-open-converter', 'bytes-ref',
- 'bytes-set!', 'bytes-utf-8-index', 'bytes-utf-8-length',
- 'bytes-utf-8-ref', 'bytes<?', 'bytes=?', 'bytes>?', 'bytes?', 'caaaar',
- 'caaadr', 'caaar', 'caadar', 'caaddr', 'caadr', 'caar', 'cadaar',
- 'cadadr', 'cadar', 'caddar', 'cadddr', 'caddr', 'cadr',
- 'call-in-nested-thread', 'call-with-atomic-output-file',
- 'call-with-break-parameterization',
- 'call-with-composable-continuation', 'call-with-continuation-barrier',
- 'call-with-continuation-prompt', 'call-with-current-continuation',
- 'call-with-default-reading-parameterization',
- 'call-with-escape-continuation', 'call-with-exception-handler',
- 'call-with-file-lock/timeout', 'call-with-immediate-continuation-mark',
- 'call-with-input-bytes', 'call-with-input-file',
- 'call-with-input-file*', 'call-with-input-string',
- 'call-with-output-bytes', 'call-with-output-file',
- 'call-with-output-file*', 'call-with-output-string',
- 'call-with-parameterization', 'call-with-semaphore',
- 'call-with-semaphore/enable-break', 'call-with-values', 'call/cc',
- 'call/ec', 'car', 'cdaaar', 'cdaadr', 'cdaar', 'cdadar', 'cdaddr',
- 'cdadr', 'cdar', 'cddaar', 'cddadr', 'cddar', 'cdddar', 'cddddr',
- 'cdddr', 'cddr', 'cdr', 'ceiling', 'channel-get', 'channel-put',
- 'channel-put-evt', 'channel-put-evt?', 'channel-try-get', 'channel/c',
- 'channel?', 'chaperone-box', 'chaperone-channel',
- 'chaperone-continuation-mark-key', 'chaperone-contract-property?',
- 'chaperone-contract?', 'chaperone-evt', 'chaperone-hash',
- 'chaperone-of?', 'chaperone-procedure', 'chaperone-prompt-tag',
- 'chaperone-struct', 'chaperone-struct-type', 'chaperone-vector',
- 'chaperone?', 'char->integer', 'char-alphabetic?', 'char-blank?',
- 'char-ci<=?', 'char-ci<?', 'char-ci=?', 'char-ci>=?', 'char-ci>?',
- 'char-downcase', 'char-foldcase', 'char-general-category',
- 'char-graphic?', 'char-iso-control?', 'char-lower-case?',
- 'char-numeric?', 'char-punctuation?', 'char-ready?', 'char-symbolic?',
- 'char-title-case?', 'char-titlecase', 'char-upcase',
- 'char-upper-case?', 'char-utf-8-length', 'char-whitespace?', 'char<=?',
- 'char<?', 'char=?', 'char>=?', 'char>?', 'char?',
- 'check-duplicate-identifier', 'checked-procedure-check-and-extract',
- 'choice-evt', 'class->interface', 'class-info', 'class?',
- 'cleanse-path', 'close-input-port', 'close-output-port',
- 'coerce-chaperone-contract', 'coerce-chaperone-contracts',
- 'coerce-contract', 'coerce-contract/f', 'coerce-contracts',
- 'coerce-flat-contract', 'coerce-flat-contracts', 'collect-garbage',
- 'collection-file-path', 'collection-path', 'compile',
- 'compile-allow-set!-undefined', 'compile-context-preservation-enabled',
- 'compile-enforce-module-constants', 'compile-syntax',
- 'compiled-expression?', 'compiled-module-expression?',
- 'complete-path?', 'complex?', 'compose', 'compose1', 'conjugate',
- 'cons', 'cons/c', 'cons?', 'const', 'continuation-mark-key/c',
- 'continuation-mark-key?', 'continuation-mark-set->context',
- 'continuation-mark-set->list', 'continuation-mark-set->list*',
- 'continuation-mark-set-first', 'continuation-mark-set?',
- 'continuation-marks', 'continuation-prompt-available?',
- 'continuation-prompt-tag?', 'continuation?',
- 'contract-continuation-mark-key', 'contract-first-order',
- 'contract-first-order-passes?', 'contract-name', 'contract-proc',
- 'contract-projection', 'contract-property?',
- 'contract-random-generate', 'contract-stronger?',
- 'contract-struct-exercise', 'contract-struct-generate',
- 'contract-val-first-projection', 'contract?', 'convert-stream',
- 'copy-directory/files', 'copy-file', 'copy-port', 'cos', 'cosh',
- 'count', 'current-blame-format', 'current-break-parameterization',
- 'current-code-inspector', 'current-command-line-arguments',
- 'current-compile', 'current-compiled-file-roots',
- 'current-continuation-marks', 'current-contract-region',
- 'current-custodian', 'current-directory', 'current-directory-for-user',
- 'current-drive', 'current-environment-variables', 'current-error-port',
- 'current-eval', 'current-evt-pseudo-random-generator',
- 'current-future', 'current-gc-milliseconds',
- 'current-get-interaction-input-port', 'current-inexact-milliseconds',
- 'current-input-port', 'current-inspector',
- 'current-library-collection-links', 'current-library-collection-paths',
- 'current-load', 'current-load-extension',
- 'current-load-relative-directory', 'current-load/use-compiled',
- 'current-locale', 'current-logger', 'current-memory-use',
- 'current-milliseconds', 'current-module-declare-name',
- 'current-module-declare-source', 'current-module-name-resolver',
- 'current-module-path-for-load', 'current-namespace',
- 'current-output-port', 'current-parameterization',
- 'current-preserved-thread-cell-values', 'current-print',
- 'current-process-milliseconds', 'current-prompt-read',
- 'current-pseudo-random-generator', 'current-read-interaction',
- 'current-reader-guard', 'current-readtable', 'current-seconds',
- 'current-security-guard', 'current-subprocess-custodian-mode',
- 'current-thread', 'current-thread-group',
- 'current-thread-initial-stack-size',
- 'current-write-relative-directory', 'curry', 'curryr',
- 'custodian-box-value', 'custodian-box?', 'custodian-limit-memory',
- 'custodian-managed-list', 'custodian-memory-accounting-available?',
- 'custodian-require-memory', 'custodian-shutdown-all', 'custodian?',
- 'custom-print-quotable-accessor', 'custom-print-quotable?',
- 'custom-write-accessor', 'custom-write-property-proc', 'custom-write?',
- 'date', 'date*', 'date*-nanosecond', 'date*-time-zone-name', 'date*?',
- 'date-day', 'date-dst?', 'date-hour', 'date-minute', 'date-month',
- 'date-second', 'date-time-zone-offset', 'date-week-day', 'date-year',
- 'date-year-day', 'date?', 'datum->syntax', 'datum-intern-literal',
- 'default-continuation-prompt-tag', 'degrees->radians',
- 'delete-directory', 'delete-directory/files', 'delete-file',
- 'denominator', 'dict->list', 'dict-can-functional-set?',
- 'dict-can-remove-keys?', 'dict-clear', 'dict-clear!', 'dict-copy',
- 'dict-count', 'dict-empty?', 'dict-for-each', 'dict-has-key?',
- 'dict-implements/c', 'dict-implements?', 'dict-iter-contract',
- 'dict-iterate-first', 'dict-iterate-key', 'dict-iterate-next',
- 'dict-iterate-value', 'dict-key-contract', 'dict-keys', 'dict-map',
- 'dict-mutable?', 'dict-ref', 'dict-ref!', 'dict-remove',
- 'dict-remove!', 'dict-set', 'dict-set!', 'dict-set*', 'dict-set*!',
- 'dict-update', 'dict-update!', 'dict-value-contract', 'dict-values',
- 'dict?', 'directory-exists?', 'directory-list', 'display',
- 'display-lines', 'display-lines-to-file', 'display-to-file',
- 'displayln', 'double-flonum?', 'drop', 'drop-right', 'dropf',
- 'dropf-right', 'dump-memory-stats', 'dup-input-port',
- 'dup-output-port', 'dynamic-get-field', 'dynamic-place',
- 'dynamic-place*', 'dynamic-require', 'dynamic-require-for-syntax',
- 'dynamic-send', 'dynamic-set-field!', 'dynamic-wind', 'eighth',
- 'empty', 'empty-sequence', 'empty-stream', 'empty?',
- 'environment-variables-copy', 'environment-variables-names',
- 'environment-variables-ref', 'environment-variables-set!',
- 'environment-variables?', 'eof', 'eof-evt', 'eof-object?',
- 'ephemeron-value', 'ephemeron?', 'eprintf', 'eq-contract-val',
- 'eq-contract?', 'eq-hash-code', 'eq?', 'equal-contract-val',
- 'equal-contract?', 'equal-hash-code', 'equal-secondary-hash-code',
- 'equal<%>', 'equal?', 'equal?/recur', 'eqv-hash-code', 'eqv?', 'error',
- 'error-display-handler', 'error-escape-handler',
- 'error-print-context-length', 'error-print-source-location',
- 'error-print-width', 'error-value->string-handler', 'eval',
- 'eval-jit-enabled', 'eval-syntax', 'even?', 'evt/c', 'evt?',
- 'exact->inexact', 'exact-ceiling', 'exact-floor', 'exact-integer?',
- 'exact-nonnegative-integer?', 'exact-positive-integer?', 'exact-round',
- 'exact-truncate', 'exact?', 'executable-yield-handler', 'exit',
- 'exit-handler', 'exn', 'exn-continuation-marks', 'exn-message',
- 'exn:break', 'exn:break-continuation', 'exn:break:hang-up',
- 'exn:break:hang-up?', 'exn:break:terminate', 'exn:break:terminate?',
- 'exn:break?', 'exn:fail', 'exn:fail:contract',
- 'exn:fail:contract:arity', 'exn:fail:contract:arity?',
- 'exn:fail:contract:blame', 'exn:fail:contract:blame-object',
- 'exn:fail:contract:blame?', 'exn:fail:contract:continuation',
- 'exn:fail:contract:continuation?', 'exn:fail:contract:divide-by-zero',
- 'exn:fail:contract:divide-by-zero?',
- 'exn:fail:contract:non-fixnum-result',
- 'exn:fail:contract:non-fixnum-result?', 'exn:fail:contract:variable',
- 'exn:fail:contract:variable-id', 'exn:fail:contract:variable?',
- 'exn:fail:contract?', 'exn:fail:filesystem',
- 'exn:fail:filesystem:errno', 'exn:fail:filesystem:errno-errno',
- 'exn:fail:filesystem:errno?', 'exn:fail:filesystem:exists',
- 'exn:fail:filesystem:exists?', 'exn:fail:filesystem:missing-module',
- 'exn:fail:filesystem:missing-module-path',
- 'exn:fail:filesystem:missing-module?', 'exn:fail:filesystem:version',
- 'exn:fail:filesystem:version?', 'exn:fail:filesystem?',
- 'exn:fail:network', 'exn:fail:network:errno',
- 'exn:fail:network:errno-errno', 'exn:fail:network:errno?',
- 'exn:fail:network?', 'exn:fail:object', 'exn:fail:object?',
- 'exn:fail:out-of-memory', 'exn:fail:out-of-memory?', 'exn:fail:read',
- 'exn:fail:read-srclocs', 'exn:fail:read:eof', 'exn:fail:read:eof?',
- 'exn:fail:read:non-char', 'exn:fail:read:non-char?', 'exn:fail:read?',
- 'exn:fail:syntax', 'exn:fail:syntax-exprs',
- 'exn:fail:syntax:missing-module',
- 'exn:fail:syntax:missing-module-path',
- 'exn:fail:syntax:missing-module?', 'exn:fail:syntax:unbound',
- 'exn:fail:syntax:unbound?', 'exn:fail:syntax?', 'exn:fail:unsupported',
- 'exn:fail:unsupported?', 'exn:fail:user', 'exn:fail:user?',
- 'exn:fail?', 'exn:misc:match?', 'exn:missing-module-accessor',
- 'exn:missing-module?', 'exn:srclocs-accessor', 'exn:srclocs?', 'exn?',
- 'exp', 'expand', 'expand-once', 'expand-syntax', 'expand-syntax-once',
- 'expand-syntax-to-top-form', 'expand-to-top-form', 'expand-user-path',
- 'explode-path', 'expt', 'externalizable<%>', 'false?', 'field-names',
- 'fifth', 'file->bytes', 'file->bytes-lines', 'file->lines',
- 'file->list', 'file->string', 'file->value', 'file-exists?',
- 'file-name-from-path', 'file-or-directory-identity',
- 'file-or-directory-modify-seconds', 'file-or-directory-permissions',
- 'file-position', 'file-position*', 'file-size',
- 'file-stream-buffer-mode', 'file-stream-port?', 'file-truncate',
- 'filename-extension', 'filesystem-change-evt',
- 'filesystem-change-evt-cancel', 'filesystem-change-evt?',
- 'filesystem-root-list', 'filter', 'filter-map', 'filter-not',
- 'filter-read-input-port', 'find-executable-path', 'find-files',
- 'find-library-collection-links', 'find-library-collection-paths',
- 'find-relative-path', 'find-system-path', 'findf', 'first', 'fixnum?',
- 'flat-contract', 'flat-contract-predicate', 'flat-contract-property?',
- 'flat-contract?', 'flat-named-contract', 'flatten',
- 'floating-point-bytes->real', 'flonum?', 'floor', 'flush-output',
- 'fold-files', 'foldl', 'foldr', 'for-each', 'force', 'format',
- 'fourth', 'fprintf', 'free-identifier=?', 'free-label-identifier=?',
- 'free-template-identifier=?', 'free-transformer-identifier=?',
- 'fsemaphore-count', 'fsemaphore-post', 'fsemaphore-try-wait?',
- 'fsemaphore-wait', 'fsemaphore?', 'future', 'future?',
- 'futures-enabled?', 'gcd', 'generate-member-key',
- 'generate-temporaries', 'generic-set?', 'generic?', 'gensym',
- 'get-output-bytes', 'get-output-string', 'get-preference',
- 'get/build-val-first-projection', 'getenv',
- 'global-port-print-handler', 'group-execute-bit', 'group-read-bit',
- 'group-write-bit', 'guard-evt', 'handle-evt', 'handle-evt?',
- 'has-contract?', 'hash', 'hash->list', 'hash-clear', 'hash-clear!',
- 'hash-copy', 'hash-copy-clear', 'hash-count', 'hash-empty?',
- 'hash-eq?', 'hash-equal?', 'hash-eqv?', 'hash-for-each',
- 'hash-has-key?', 'hash-iterate-first', 'hash-iterate-key',
- 'hash-iterate-next', 'hash-iterate-value', 'hash-keys', 'hash-map',
- 'hash-placeholder?', 'hash-ref', 'hash-ref!', 'hash-remove',
- 'hash-remove!', 'hash-set', 'hash-set!', 'hash-set*', 'hash-set*!',
- 'hash-update', 'hash-update!', 'hash-values', 'hash-weak?', 'hash/c',
- 'hash?', 'hasheq', 'hasheqv', 'identifier-binding',
- 'identifier-binding-symbol', 'identifier-label-binding',
- 'identifier-prune-lexical-context',
- 'identifier-prune-to-source-module',
- 'identifier-remove-from-definition-context',
- 'identifier-template-binding', 'identifier-transformer-binding',
- 'identifier?', 'identity', 'imag-part', 'immutable?',
- 'impersonate-box', 'impersonate-channel',
- 'impersonate-continuation-mark-key', 'impersonate-hash',
- 'impersonate-procedure', 'impersonate-prompt-tag',
- 'impersonate-struct', 'impersonate-vector', 'impersonator-contract?',
- 'impersonator-ephemeron', 'impersonator-of?',
- 'impersonator-prop:application-mark', 'impersonator-prop:contracted',
- 'impersonator-property-accessor-procedure?', 'impersonator-property?',
- 'impersonator?', 'implementation?', 'implementation?/c', 'in-bytes',
- 'in-bytes-lines', 'in-cycle', 'in-dict', 'in-dict-keys',
- 'in-dict-pairs', 'in-dict-values', 'in-directory', 'in-hash',
- 'in-hash-keys', 'in-hash-pairs', 'in-hash-values', 'in-indexed',
- 'in-input-port-bytes', 'in-input-port-chars', 'in-lines', 'in-list',
- 'in-mlist', 'in-naturals', 'in-parallel', 'in-permutations', 'in-port',
- 'in-producer', 'in-range', 'in-sequences', 'in-set', 'in-stream',
- 'in-string', 'in-value', 'in-values*-sequence', 'in-values-sequence',
- 'in-vector', 'inexact->exact', 'inexact-real?', 'inexact?',
- 'infinite?', 'input-port-append', 'input-port?', 'inspector?',
- 'instanceof/c', 'integer->char', 'integer->integer-bytes',
- 'integer-bytes->integer', 'integer-in', 'integer-length',
- 'integer-sqrt', 'integer-sqrt/remainder', 'integer?',
- 'interface->method-names', 'interface-extension?', 'interface?',
- 'internal-definition-context-seal', 'internal-definition-context?',
- 'is-a?', 'is-a?/c', 'keyword->string', 'keyword-apply', 'keyword<?',
- 'keyword?', 'keywords-match', 'kill-thread', 'last', 'last-pair',
- 'lcm', 'length', 'liberal-define-context?', 'link-exists?', 'list',
- 'list*', 'list->bytes', 'list->mutable-set', 'list->mutable-seteq',
- 'list->mutable-seteqv', 'list->set', 'list->seteq', 'list->seteqv',
- 'list->string', 'list->vector', 'list->weak-set', 'list->weak-seteq',
- 'list->weak-seteqv', 'list-ref', 'list-tail', 'list/c', 'list?',
- 'listof', 'load', 'load-extension', 'load-on-demand-enabled',
- 'load-relative', 'load-relative-extension', 'load/cd',
- 'load/use-compiled', 'local-expand', 'local-expand/capture-lifts',
- 'local-transformer-expand', 'local-transformer-expand/capture-lifts',
- 'locale-string-encoding', 'log', 'log-level?', 'log-max-level',
- 'log-message', 'log-receiver?', 'logger-name', 'logger?', 'magnitude',
- 'make-arity-at-least', 'make-base-empty-namespace',
- 'make-base-namespace', 'make-bytes', 'make-channel',
- 'make-chaperone-contract', 'make-continuation-mark-key',
- 'make-continuation-prompt-tag', 'make-contract', 'make-custodian',
- 'make-custodian-box', 'make-custom-hash', 'make-custom-hash-types',
- 'make-custom-set', 'make-custom-set-types', 'make-date', 'make-date*',
- 'make-derived-parameter', 'make-directory', 'make-directory*',
- 'make-do-sequence', 'make-empty-namespace',
- 'make-environment-variables', 'make-ephemeron', 'make-exn',
- 'make-exn:break', 'make-exn:break:hang-up', 'make-exn:break:terminate',
- 'make-exn:fail', 'make-exn:fail:contract',
- 'make-exn:fail:contract:arity', 'make-exn:fail:contract:blame',
- 'make-exn:fail:contract:continuation',
- 'make-exn:fail:contract:divide-by-zero',
- 'make-exn:fail:contract:non-fixnum-result',
- 'make-exn:fail:contract:variable', 'make-exn:fail:filesystem',
- 'make-exn:fail:filesystem:errno', 'make-exn:fail:filesystem:exists',
- 'make-exn:fail:filesystem:missing-module',
- 'make-exn:fail:filesystem:version', 'make-exn:fail:network',
- 'make-exn:fail:network:errno', 'make-exn:fail:object',
- 'make-exn:fail:out-of-memory', 'make-exn:fail:read',
- 'make-exn:fail:read:eof', 'make-exn:fail:read:non-char',
- 'make-exn:fail:syntax', 'make-exn:fail:syntax:missing-module',
- 'make-exn:fail:syntax:unbound', 'make-exn:fail:unsupported',
- 'make-exn:fail:user', 'make-file-or-directory-link',
- 'make-flat-contract', 'make-fsemaphore', 'make-generic',
- 'make-handle-get-preference-locked', 'make-hash',
- 'make-hash-placeholder', 'make-hasheq', 'make-hasheq-placeholder',
- 'make-hasheqv', 'make-hasheqv-placeholder',
- 'make-immutable-custom-hash', 'make-immutable-hash',
- 'make-immutable-hasheq', 'make-immutable-hasheqv',
- 'make-impersonator-property', 'make-input-port',
- 'make-input-port/read-to-peek', 'make-inspector',
- 'make-keyword-procedure', 'make-known-char-range-list',
- 'make-limited-input-port', 'make-list', 'make-lock-file-name',
- 'make-log-receiver', 'make-logger', 'make-mixin-contract',
- 'make-mutable-custom-set', 'make-none/c', 'make-object',
- 'make-output-port', 'make-parameter', 'make-phantom-bytes',
- 'make-pipe', 'make-pipe-with-specials', 'make-placeholder',
- 'make-polar', 'make-prefab-struct', 'make-primitive-class',
- 'make-proj-contract', 'make-pseudo-random-generator',
- 'make-reader-graph', 'make-readtable', 'make-rectangular',
- 'make-rename-transformer', 'make-resolved-module-path',
- 'make-security-guard', 'make-semaphore', 'make-set!-transformer',
- 'make-shared-bytes', 'make-sibling-inspector', 'make-special-comment',
- 'make-srcloc', 'make-string', 'make-struct-field-accessor',
- 'make-struct-field-mutator', 'make-struct-type',
- 'make-struct-type-property', 'make-syntax-delta-introducer',
- 'make-syntax-introducer', 'make-temporary-file',
- 'make-tentative-pretty-print-output-port', 'make-thread-cell',
- 'make-thread-group', 'make-vector', 'make-weak-box',
- 'make-weak-custom-hash', 'make-weak-custom-set', 'make-weak-hash',
- 'make-weak-hasheq', 'make-weak-hasheqv', 'make-will-executor', 'map',
- 'match-equality-test', 'matches-arity-exactly?', 'max', 'mcar', 'mcdr',
- 'mcons', 'member', 'member-name-key-hash-code', 'member-name-key=?',
- 'member-name-key?', 'memf', 'memq', 'memv', 'merge-input',
- 'method-in-interface?', 'min', 'mixin-contract', 'module->exports',
- 'module->imports', 'module->language-info', 'module->namespace',
- 'module-compiled-cross-phase-persistent?', 'module-compiled-exports',
- 'module-compiled-imports', 'module-compiled-language-info',
- 'module-compiled-name', 'module-compiled-submodules',
- 'module-declared?', 'module-path-index-join',
- 'module-path-index-resolve', 'module-path-index-split',
- 'module-path-index-submodule', 'module-path-index?', 'module-path?',
- 'module-predefined?', 'module-provide-protected?', 'modulo', 'mpair?',
- 'mutable-set', 'mutable-seteq', 'mutable-seteqv', 'n->th',
- 'nack-guard-evt', 'namespace-anchor->empty-namespace',
- 'namespace-anchor->namespace', 'namespace-anchor?',
- 'namespace-attach-module', 'namespace-attach-module-declaration',
- 'namespace-base-phase', 'namespace-mapped-symbols',
- 'namespace-module-identifier', 'namespace-module-registry',
- 'namespace-require', 'namespace-require/constant',
- 'namespace-require/copy', 'namespace-require/expansion-time',
- 'namespace-set-variable-value!', 'namespace-symbol->identifier',
- 'namespace-syntax-introduce', 'namespace-undefine-variable!',
- 'namespace-unprotect-module', 'namespace-variable-value', 'namespace?',
- 'nan?', 'natural-number/c', 'negate', 'negative?', 'never-evt',
- u'new-∀/c', u'new-∃/c', 'newline', 'ninth', 'non-empty-listof',
- 'none/c', 'normal-case-path', 'normalize-arity', 'normalize-path',
- 'normalized-arity?', 'not', 'not/c', 'null', 'null?', 'number->string',
- 'number?', 'numerator', 'object%', 'object->vector', 'object-info',
- 'object-interface', 'object-method-arity-includes?', 'object-name',
- 'object=?', 'object?', 'odd?', 'one-of/c', 'open-input-bytes',
- 'open-input-file', 'open-input-output-file', 'open-input-string',
- 'open-output-bytes', 'open-output-file', 'open-output-nowhere',
- 'open-output-string', 'or/c', 'order-of-magnitude', 'ormap',
- 'other-execute-bit', 'other-read-bit', 'other-write-bit',
- 'output-port?', 'pair?', 'parameter-procedure=?', 'parameter/c',
- 'parameter?', 'parameterization?', 'parse-command-line', 'partition',
- 'path->bytes', 'path->complete-path', 'path->directory-path',
- 'path->string', 'path-add-suffix', 'path-convention-type',
- 'path-element->bytes', 'path-element->string', 'path-element?',
- 'path-for-some-system?', 'path-list-string->path-list', 'path-only',
- 'path-replace-suffix', 'path-string?', 'path<?', 'path?',
- 'pathlist-closure', 'peek-byte', 'peek-byte-or-special', 'peek-bytes',
- 'peek-bytes!', 'peek-bytes!-evt', 'peek-bytes-avail!',
- 'peek-bytes-avail!*', 'peek-bytes-avail!-evt',
- 'peek-bytes-avail!/enable-break', 'peek-bytes-evt', 'peek-char',
- 'peek-char-or-special', 'peek-string', 'peek-string!',
- 'peek-string!-evt', 'peek-string-evt', 'peeking-input-port',
- 'permutations', 'phantom-bytes?', 'pi', 'pi.f', 'pipe-content-length',
- 'place-break', 'place-channel', 'place-channel-get',
- 'place-channel-put', 'place-channel-put/get', 'place-channel?',
- 'place-dead-evt', 'place-enabled?', 'place-kill', 'place-location?',
- 'place-message-allowed?', 'place-sleep', 'place-wait', 'place?',
- 'placeholder-get', 'placeholder-set!', 'placeholder?',
- 'poll-guard-evt', 'port->bytes', 'port->bytes-lines', 'port->lines',
- 'port->list', 'port->string', 'port-closed-evt', 'port-closed?',
- 'port-commit-peeked', 'port-count-lines!', 'port-count-lines-enabled',
- 'port-counts-lines?', 'port-display-handler', 'port-file-identity',
- 'port-file-unlock', 'port-next-location', 'port-print-handler',
- 'port-progress-evt', 'port-provides-progress-evts?',
- 'port-read-handler', 'port-try-file-lock?', 'port-write-handler',
- 'port-writes-atomic?', 'port-writes-special?', 'port?', 'positive?',
- 'predicate/c', 'prefab-key->struct-type', 'prefab-key?',
- 'prefab-struct-key', 'preferences-lock-file-mode', 'pregexp',
- 'pregexp?', 'pretty-display', 'pretty-format', 'pretty-print',
- 'pretty-print-.-symbol-without-bars',
- 'pretty-print-abbreviate-read-macros', 'pretty-print-columns',
- 'pretty-print-current-style-table', 'pretty-print-depth',
- 'pretty-print-exact-as-decimal', 'pretty-print-extend-style-table',
- 'pretty-print-handler', 'pretty-print-newline',
- 'pretty-print-post-print-hook', 'pretty-print-pre-print-hook',
- 'pretty-print-print-hook', 'pretty-print-print-line',
- 'pretty-print-remap-stylable', 'pretty-print-show-inexactness',
- 'pretty-print-size-hook', 'pretty-print-style-table?',
- 'pretty-printing', 'pretty-write', 'primitive-closure?',
- 'primitive-result-arity', 'primitive?', 'print', 'print-as-expression',
- 'print-boolean-long-form', 'print-box', 'print-graph',
- 'print-hash-table', 'print-mpair-curly-braces',
- 'print-pair-curly-braces', 'print-reader-abbreviations',
- 'print-struct', 'print-syntax-width', 'print-unreadable',
- 'print-vector-length', 'printable/c', 'printable<%>', 'printf',
- 'procedure->method', 'procedure-arity', 'procedure-arity-includes/c',
- 'procedure-arity-includes?', 'procedure-arity?',
- 'procedure-closure-contents-eq?', 'procedure-extract-target',
- 'procedure-keywords', 'procedure-reduce-arity',
- 'procedure-reduce-keyword-arity', 'procedure-rename',
- 'procedure-struct-type?', 'procedure?', 'process', 'process*',
- 'process*/ports', 'process/ports', 'processor-count', 'progress-evt?',
- 'promise-forced?', 'promise-running?', 'promise/c', 'promise?',
- 'prop:arity-string', 'prop:chaperone-contract',
- 'prop:checked-procedure', 'prop:contract', 'prop:contracted',
- 'prop:custom-print-quotable', 'prop:custom-write', 'prop:dict',
- 'prop:dict/contract', 'prop:equal+hash', 'prop:evt',
- 'prop:exn:missing-module', 'prop:exn:srclocs', 'prop:flat-contract',
- 'prop:impersonator-of', 'prop:input-port',
- 'prop:liberal-define-context', 'prop:opt-chaperone-contract',
- 'prop:opt-chaperone-contract-get-test', 'prop:opt-chaperone-contract?',
- 'prop:output-port', 'prop:place-location', 'prop:procedure',
- 'prop:rename-transformer', 'prop:sequence', 'prop:set!-transformer',
- 'prop:stream', 'proper-subset?', 'pseudo-random-generator->vector',
- 'pseudo-random-generator-vector?', 'pseudo-random-generator?',
- 'put-preferences', 'putenv', 'quotient', 'quotient/remainder',
- 'radians->degrees', 'raise', 'raise-argument-error',
- 'raise-arguments-error', 'raise-arity-error', 'raise-blame-error',
- 'raise-contract-error', 'raise-mismatch-error',
- 'raise-not-cons-blame-error', 'raise-range-error',
- 'raise-result-error', 'raise-syntax-error', 'raise-type-error',
- 'raise-user-error', 'random', 'random-seed', 'range', 'rational?',
- 'rationalize', 'read', 'read-accept-bar-quote', 'read-accept-box',
- 'read-accept-compiled', 'read-accept-dot', 'read-accept-graph',
- 'read-accept-infix-dot', 'read-accept-lang', 'read-accept-quasiquote',
- 'read-accept-reader', 'read-byte', 'read-byte-or-special',
- 'read-bytes', 'read-bytes!', 'read-bytes!-evt', 'read-bytes-avail!',
- 'read-bytes-avail!*', 'read-bytes-avail!-evt',
- 'read-bytes-avail!/enable-break', 'read-bytes-evt', 'read-bytes-line',
- 'read-bytes-line-evt', 'read-case-sensitive', 'read-char',
- 'read-char-or-special', 'read-curly-brace-as-paren',
- 'read-decimal-as-inexact', 'read-eval-print-loop', 'read-language',
- 'read-line', 'read-line-evt', 'read-on-demand-source',
- 'read-square-bracket-as-paren', 'read-string', 'read-string!',
- 'read-string!-evt', 'read-string-evt', 'read-syntax',
- 'read-syntax/recursive', 'read/recursive', 'readtable-mapping',
- 'readtable?', 'real->decimal-string', 'real->double-flonum',
- 'real->floating-point-bytes', 'real->single-flonum', 'real-in',
- 'real-part', 'real?', 'reencode-input-port', 'reencode-output-port',
- 'regexp', 'regexp-match', 'regexp-match*', 'regexp-match-evt',
- 'regexp-match-exact?', 'regexp-match-peek',
- 'regexp-match-peek-immediate', 'regexp-match-peek-positions',
- 'regexp-match-peek-positions*',
- 'regexp-match-peek-positions-immediate',
- 'regexp-match-peek-positions-immediate/end',
- 'regexp-match-peek-positions/end', 'regexp-match-positions',
- 'regexp-match-positions*', 'regexp-match-positions/end',
- 'regexp-match/end', 'regexp-match?', 'regexp-max-lookbehind',
- 'regexp-quote', 'regexp-replace', 'regexp-replace*',
- 'regexp-replace-quote', 'regexp-replaces', 'regexp-split',
- 'regexp-try-match', 'regexp?', 'relative-path?', 'relocate-input-port',
- 'relocate-output-port', 'remainder', 'remove', 'remove*',
- 'remove-duplicates', 'remq', 'remq*', 'remv', 'remv*',
- 'rename-file-or-directory', 'rename-transformer-target',
- 'rename-transformer?', 'reroot-path', 'resolve-path',
- 'resolved-module-path-name', 'resolved-module-path?', 'rest',
- 'reverse', 'round', 'second', 'seconds->date', 'security-guard?',
- 'semaphore-peek-evt', 'semaphore-peek-evt?', 'semaphore-post',
- 'semaphore-try-wait?', 'semaphore-wait', 'semaphore-wait/enable-break',
- 'semaphore?', 'sequence->list', 'sequence->stream',
- 'sequence-add-between', 'sequence-andmap', 'sequence-append',
- 'sequence-count', 'sequence-filter', 'sequence-fold',
- 'sequence-for-each', 'sequence-generate', 'sequence-generate*',
- 'sequence-length', 'sequence-map', 'sequence-ormap', 'sequence-ref',
- 'sequence-tail', 'sequence?', 'set', 'set!-transformer-procedure',
- 'set!-transformer?', 'set->list', 'set->stream', 'set-add', 'set-add!',
- 'set-box!', 'set-clear', 'set-clear!', 'set-copy', 'set-copy-clear',
- 'set-count', 'set-empty?', 'set-eq?', 'set-equal?', 'set-eqv?',
- 'set-first', 'set-for-each', 'set-implements/c', 'set-implements?',
- 'set-intersect', 'set-intersect!', 'set-map', 'set-mcar!', 'set-mcdr!',
- 'set-member?', 'set-mutable?', 'set-phantom-bytes!',
- 'set-port-next-location!', 'set-remove', 'set-remove!', 'set-rest',
- 'set-subtract', 'set-subtract!', 'set-symmetric-difference',
- 'set-symmetric-difference!', 'set-union', 'set-union!', 'set-weak?',
- 'set/c', 'set=?', 'set?', 'seteq', 'seteqv', 'seventh', 'sgn',
- 'shared-bytes', 'shell-execute', 'shrink-path-wrt', 'shuffle',
- 'simple-form-path', 'simplify-path', 'sin', 'single-flonum?', 'sinh',
- 'sixth', 'skip-projection-wrapper?', 'sleep',
- 'some-system-path->string', 'sort', 'special-comment-value',
- 'special-comment?', 'special-filter-input-port', 'split-at',
- 'split-at-right', 'split-path', 'splitf-at', 'splitf-at-right', 'sqr',
- 'sqrt', 'srcloc', 'srcloc->string', 'srcloc-column', 'srcloc-line',
- 'srcloc-position', 'srcloc-source', 'srcloc-span', 'srcloc?',
- 'stop-after', 'stop-before', 'stream->list', 'stream-add-between',
- 'stream-andmap', 'stream-append', 'stream-count', 'stream-empty?',
- 'stream-filter', 'stream-first', 'stream-fold', 'stream-for-each',
- 'stream-length', 'stream-map', 'stream-ormap', 'stream-ref',
- 'stream-rest', 'stream-tail', 'stream?', 'string',
- 'string->bytes/latin-1', 'string->bytes/locale', 'string->bytes/utf-8',
- 'string->immutable-string', 'string->keyword', 'string->list',
- 'string->number', 'string->path', 'string->path-element',
- 'string->some-system-path', 'string->symbol',
- 'string->uninterned-symbol', 'string->unreadable-symbol',
- 'string-append', 'string-append*', 'string-ci<=?', 'string-ci<?',
- 'string-ci=?', 'string-ci>=?', 'string-ci>?', 'string-copy',
- 'string-copy!', 'string-downcase', 'string-environment-variable-name?',
- 'string-fill!', 'string-foldcase', 'string-join', 'string-len/c',
- 'string-length', 'string-locale-ci<?', 'string-locale-ci=?',
- 'string-locale-ci>?', 'string-locale-downcase', 'string-locale-upcase',
- 'string-locale<?', 'string-locale=?', 'string-locale>?',
- 'string-no-nuls?', 'string-normalize-nfc', 'string-normalize-nfd',
- 'string-normalize-nfkc', 'string-normalize-nfkd',
- 'string-normalize-spaces', 'string-ref', 'string-replace',
- 'string-set!', 'string-split', 'string-titlecase', 'string-trim',
- 'string-upcase', 'string-utf-8-length', 'string<=?', 'string<?',
- 'string=?', 'string>=?', 'string>?', 'string?', 'struct->vector',
- 'struct-accessor-procedure?', 'struct-constructor-procedure?',
- 'struct-info', 'struct-mutator-procedure?',
- 'struct-predicate-procedure?', 'struct-type-info',
- 'struct-type-make-constructor', 'struct-type-make-predicate',
- 'struct-type-property-accessor-procedure?', 'struct-type-property/c',
- 'struct-type-property?', 'struct-type?', 'struct:arity-at-least',
- 'struct:date', 'struct:date*', 'struct:exn', 'struct:exn:break',
- 'struct:exn:break:hang-up', 'struct:exn:break:terminate',
- 'struct:exn:fail', 'struct:exn:fail:contract',
- 'struct:exn:fail:contract:arity', 'struct:exn:fail:contract:blame',
- 'struct:exn:fail:contract:continuation',
- 'struct:exn:fail:contract:divide-by-zero',
- 'struct:exn:fail:contract:non-fixnum-result',
- 'struct:exn:fail:contract:variable', 'struct:exn:fail:filesystem',
- 'struct:exn:fail:filesystem:errno',
- 'struct:exn:fail:filesystem:exists',
- 'struct:exn:fail:filesystem:missing-module',
- 'struct:exn:fail:filesystem:version', 'struct:exn:fail:network',
- 'struct:exn:fail:network:errno', 'struct:exn:fail:object',
- 'struct:exn:fail:out-of-memory', 'struct:exn:fail:read',
- 'struct:exn:fail:read:eof', 'struct:exn:fail:read:non-char',
- 'struct:exn:fail:syntax', 'struct:exn:fail:syntax:missing-module',
- 'struct:exn:fail:syntax:unbound', 'struct:exn:fail:unsupported',
- 'struct:exn:fail:user', 'struct:srcloc',
- 'struct:wrapped-extra-arg-arrow', 'struct?', 'sub1', 'subbytes',
- 'subclass?', 'subclass?/c', 'subprocess', 'subprocess-group-enabled',
- 'subprocess-kill', 'subprocess-pid', 'subprocess-status',
- 'subprocess-wait', 'subprocess?', 'subset?', 'substring',
- 'symbol->string', 'symbol-interned?', 'symbol-unreadable?', 'symbol<?',
- 'symbol=?', 'symbol?', 'symbols', 'sync', 'sync/enable-break',
- 'sync/timeout', 'sync/timeout/enable-break', 'syntax->datum',
- 'syntax->list', 'syntax-arm', 'syntax-column', 'syntax-disarm',
- 'syntax-e', 'syntax-line', 'syntax-local-bind-syntaxes',
- 'syntax-local-certifier', 'syntax-local-context',
- 'syntax-local-expand-expression', 'syntax-local-get-shadower',
- 'syntax-local-introduce', 'syntax-local-lift-context',
- 'syntax-local-lift-expression',
- 'syntax-local-lift-module-end-declaration',
- 'syntax-local-lift-provide', 'syntax-local-lift-require',
- 'syntax-local-lift-values-expression',
- 'syntax-local-make-definition-context',
- 'syntax-local-make-delta-introducer',
- 'syntax-local-module-defined-identifiers',
- 'syntax-local-module-exports',
- 'syntax-local-module-required-identifiers', 'syntax-local-name',
- 'syntax-local-phase-level', 'syntax-local-submodules',
- 'syntax-local-transforming-module-provides?', 'syntax-local-value',
- 'syntax-local-value/immediate', 'syntax-original?', 'syntax-position',
- 'syntax-property', 'syntax-property-symbol-keys', 'syntax-protect',
- 'syntax-rearm', 'syntax-recertify', 'syntax-shift-phase-level',
- 'syntax-source', 'syntax-source-module', 'syntax-span', 'syntax-taint',
- 'syntax-tainted?', 'syntax-track-origin',
- 'syntax-transforming-module-expression?', 'syntax-transforming?',
- 'syntax/c', 'syntax?', 'system', 'system*', 'system*/exit-code',
- 'system-big-endian?', 'system-idle-evt', 'system-language+country',
- 'system-library-subpath', 'system-path-convention-type', 'system-type',
- 'system/exit-code', 'tail-marks-match?', 'take', 'take-right', 'takef',
- 'takef-right', 'tan', 'tanh', 'tcp-abandon-port', 'tcp-accept',
- 'tcp-accept-evt', 'tcp-accept-ready?', 'tcp-accept/enable-break',
- 'tcp-addresses', 'tcp-close', 'tcp-connect',
- 'tcp-connect/enable-break', 'tcp-listen', 'tcp-listener?', 'tcp-port?',
- 'tentative-pretty-print-port-cancel',
- 'tentative-pretty-print-port-transfer', 'tenth', 'terminal-port?',
- 'the-unsupplied-arg', 'third', 'thread', 'thread-cell-ref',
- 'thread-cell-set!', 'thread-cell-values?', 'thread-cell?',
- 'thread-dead-evt', 'thread-dead?', 'thread-group?', 'thread-receive',
- 'thread-receive-evt', 'thread-resume', 'thread-resume-evt',
- 'thread-rewind-receive', 'thread-running?', 'thread-send',
- 'thread-suspend', 'thread-suspend-evt', 'thread-try-receive',
- 'thread-wait', 'thread/suspend-to-kill', 'thread?', 'time-apply',
- 'touch', 'transplant-input-port', 'transplant-output-port', 'true',
- 'truncate', 'udp-addresses', 'udp-bind!', 'udp-bound?', 'udp-close',
- 'udp-connect!', 'udp-connected?', 'udp-multicast-interface',
- 'udp-multicast-join-group!', 'udp-multicast-leave-group!',
- 'udp-multicast-loopback?', 'udp-multicast-set-interface!',
- 'udp-multicast-set-loopback!', 'udp-multicast-set-ttl!',
- 'udp-multicast-ttl', 'udp-open-socket', 'udp-receive!',
- 'udp-receive!*', 'udp-receive!-evt', 'udp-receive!/enable-break',
- 'udp-receive-ready-evt', 'udp-send', 'udp-send*', 'udp-send-evt',
- 'udp-send-ready-evt', 'udp-send-to', 'udp-send-to*', 'udp-send-to-evt',
- 'udp-send-to/enable-break', 'udp-send/enable-break', 'udp?', 'unbox',
- 'uncaught-exception-handler', 'unit?', 'unspecified-dom',
- 'unsupplied-arg?', 'use-collection-link-paths',
- 'use-compiled-file-paths', 'use-user-specific-search-paths',
- 'user-execute-bit', 'user-read-bit', 'user-write-bit',
- 'value-contract', 'values', 'variable-reference->empty-namespace',
- 'variable-reference->module-base-phase',
- 'variable-reference->module-declaration-inspector',
- 'variable-reference->module-path-index',
- 'variable-reference->module-source', 'variable-reference->namespace',
- 'variable-reference->phase',
- 'variable-reference->resolved-module-path',
- 'variable-reference-constant?', 'variable-reference?', 'vector',
- 'vector->immutable-vector', 'vector->list',
- 'vector->pseudo-random-generator', 'vector->pseudo-random-generator!',
- 'vector->values', 'vector-append', 'vector-argmax', 'vector-argmin',
- 'vector-copy', 'vector-copy!', 'vector-count', 'vector-drop',
- 'vector-drop-right', 'vector-fill!', 'vector-filter',
- 'vector-filter-not', 'vector-immutable', 'vector-immutable/c',
- 'vector-immutableof', 'vector-length', 'vector-map', 'vector-map!',
- 'vector-member', 'vector-memq', 'vector-memv', 'vector-ref',
- 'vector-set!', 'vector-set*!', 'vector-set-performance-stats!',
- 'vector-split-at', 'vector-split-at-right', 'vector-take',
- 'vector-take-right', 'vector/c', 'vector?', 'vectorof', 'version',
- 'void', 'void?', 'weak-box-value', 'weak-box?', 'weak-set',
- 'weak-seteq', 'weak-seteqv', 'will-execute', 'will-executor?',
- 'will-register', 'will-try-execute', 'with-input-from-bytes',
- 'with-input-from-file', 'with-input-from-string',
- 'with-output-to-bytes', 'with-output-to-file', 'with-output-to-string',
- 'would-be-future', 'wrap-evt', 'wrapped-extra-arg-arrow',
- 'wrapped-extra-arg-arrow-extra-neg-party-argument',
- 'wrapped-extra-arg-arrow-real-func', 'wrapped-extra-arg-arrow?',
- 'writable<%>', 'write', 'write-byte', 'write-bytes',
- 'write-bytes-avail', 'write-bytes-avail*', 'write-bytes-avail-evt',
- 'write-bytes-avail/enable-break', 'write-char', 'write-special',
- 'write-special-avail*', 'write-special-evt', 'write-string',
- 'write-to-file', 'xor', 'zero?', '~.a', '~.s', '~.v', '~a', '~e', '~r',
- '~s', '~v'
+ u'*', u'*list/c', u'+', u'-', u'/', u'<', u'</c', u'<=', u'<=/c', u'=', u'=/c',
+ u'>', u'>/c', u'>=', u'>=/c', u'abort-current-continuation', u'abs',
+ u'absolute-path?', u'acos', u'add-between', u'add1', u'alarm-evt',
+ u'always-evt', u'and/c', u'andmap', u'angle', u'any/c', u'append', u'append*',
+ u'append-map', u'apply', u'argmax', u'argmin', u'arithmetic-shift',
+ u'arity-at-least', u'arity-at-least-value', u'arity-at-least?',
+ u'arity-checking-wrapper', u'arity-includes?', u'arity=?',
+ u'arrow-contract-info', u'arrow-contract-info-accepts-arglist',
+ u'arrow-contract-info-chaperone-procedure',
+ u'arrow-contract-info-check-first-order', u'arrow-contract-info?',
+ u'asin', u'assf', u'assoc', u'assq', u'assv', u'atan',
+ u'bad-number-of-results', u'banner', u'base->-doms/c', u'base->-rngs/c',
+ u'base->?', u'between/c', u'bitwise-and', u'bitwise-bit-field',
+ u'bitwise-bit-set?', u'bitwise-ior', u'bitwise-not', u'bitwise-xor',
+ u'blame-add-car-context', u'blame-add-cdr-context', u'blame-add-context',
+ u'blame-add-missing-party', u'blame-add-nth-arg-context',
+ u'blame-add-range-context', u'blame-add-unknown-context',
+ u'blame-context', u'blame-contract', u'blame-fmt->-string',
+ u'blame-missing-party?', u'blame-negative', u'blame-original?',
+ u'blame-positive', u'blame-replace-negative', u'blame-source',
+ u'blame-swap', u'blame-swapped?', u'blame-update', u'blame-value',
+ u'blame?', u'boolean=?', u'boolean?', u'bound-identifier=?', u'box',
+ u'box-cas!', u'box-immutable', u'box-immutable/c', u'box/c', u'box?',
+ u'break-enabled', u'break-parameterization?', u'break-thread',
+ u'build-chaperone-contract-property', u'build-compound-type-name',
+ u'build-contract-property', u'build-flat-contract-property',
+ u'build-list', u'build-path', u'build-path/convention-type',
+ u'build-string', u'build-vector', u'byte-pregexp', u'byte-pregexp?',
+ u'byte-ready?', u'byte-regexp', u'byte-regexp?', u'byte?', u'bytes',
+ u'bytes->immutable-bytes', u'bytes->list', u'bytes->path',
+ u'bytes->path-element', u'bytes->string/latin-1', u'bytes->string/locale',
+ u'bytes->string/utf-8', u'bytes-append', u'bytes-append*',
+ u'bytes-close-converter', u'bytes-convert', u'bytes-convert-end',
+ u'bytes-converter?', u'bytes-copy', u'bytes-copy!',
+ u'bytes-environment-variable-name?', u'bytes-fill!', u'bytes-join',
+ u'bytes-length', u'bytes-no-nuls?', u'bytes-open-converter', u'bytes-ref',
+ u'bytes-set!', u'bytes-utf-8-index', u'bytes-utf-8-length',
+ u'bytes-utf-8-ref', u'bytes<?', u'bytes=?', u'bytes>?', u'bytes?', u'caaaar',
+ u'caaadr', u'caaar', u'caadar', u'caaddr', u'caadr', u'caar', u'cadaar',
+ u'cadadr', u'cadar', u'caddar', u'cadddr', u'caddr', u'cadr',
+ u'call-in-nested-thread', u'call-with-atomic-output-file',
+ u'call-with-break-parameterization',
+ u'call-with-composable-continuation', u'call-with-continuation-barrier',
+ u'call-with-continuation-prompt', u'call-with-current-continuation',
+ u'call-with-default-reading-parameterization',
+ u'call-with-escape-continuation', u'call-with-exception-handler',
+ u'call-with-file-lock/timeout', u'call-with-immediate-continuation-mark',
+ u'call-with-input-bytes', u'call-with-input-file',
+ u'call-with-input-file*', u'call-with-input-string',
+ u'call-with-output-bytes', u'call-with-output-file',
+ u'call-with-output-file*', u'call-with-output-string',
+ u'call-with-parameterization', u'call-with-semaphore',
+ u'call-with-semaphore/enable-break', u'call-with-values', u'call/cc',
+ u'call/ec', u'car', u'cartesian-product', u'cdaaar', u'cdaadr', u'cdaar',
+ u'cdadar', u'cdaddr', u'cdadr', u'cdar', u'cddaar', u'cddadr', u'cddar',
+ u'cdddar', u'cddddr', u'cdddr', u'cddr', u'cdr', u'ceiling', u'channel-get',
+ u'channel-put', u'channel-put-evt', u'channel-put-evt?',
+ u'channel-try-get', u'channel/c', u'channel?', u'chaperone-box',
+ u'chaperone-channel', u'chaperone-continuation-mark-key',
+ u'chaperone-contract-property?', u'chaperone-contract?', u'chaperone-evt',
+ u'chaperone-hash', u'chaperone-hash-set', u'chaperone-of?',
+ u'chaperone-procedure', u'chaperone-procedure*', u'chaperone-prompt-tag',
+ u'chaperone-struct', u'chaperone-struct-type', u'chaperone-vector',
+ u'chaperone?', u'char->integer', u'char-alphabetic?', u'char-blank?',
+ u'char-ci<=?', u'char-ci<?', u'char-ci=?', u'char-ci>=?', u'char-ci>?',
+ u'char-downcase', u'char-foldcase', u'char-general-category',
+ u'char-graphic?', u'char-in', u'char-in/c', u'char-iso-control?',
+ u'char-lower-case?', u'char-numeric?', u'char-punctuation?',
+ u'char-ready?', u'char-symbolic?', u'char-title-case?', u'char-titlecase',
+ u'char-upcase', u'char-upper-case?', u'char-utf-8-length',
+ u'char-whitespace?', u'char<=?', u'char<?', u'char=?', u'char>=?', u'char>?',
+ u'char?', u'check-duplicate-identifier', u'check-duplicates',
+ u'checked-procedure-check-and-extract', u'choice-evt',
+ u'class->interface', u'class-info', u'class-seal', u'class-unseal',
+ u'class?', u'cleanse-path', u'close-input-port', u'close-output-port',
+ u'coerce-chaperone-contract', u'coerce-chaperone-contracts',
+ u'coerce-contract', u'coerce-contract/f', u'coerce-contracts',
+ u'coerce-flat-contract', u'coerce-flat-contracts', u'collect-garbage',
+ u'collection-file-path', u'collection-path', u'combinations', u'compile',
+ u'compile-allow-set!-undefined', u'compile-context-preservation-enabled',
+ u'compile-enforce-module-constants', u'compile-syntax',
+ u'compiled-expression-recompile', u'compiled-expression?',
+ u'compiled-module-expression?', u'complete-path?', u'complex?', u'compose',
+ u'compose1', u'conjoin', u'conjugate', u'cons', u'cons/c', u'cons?', u'const',
+ u'continuation-mark-key/c', u'continuation-mark-key?',
+ u'continuation-mark-set->context', u'continuation-mark-set->list',
+ u'continuation-mark-set->list*', u'continuation-mark-set-first',
+ u'continuation-mark-set?', u'continuation-marks',
+ u'continuation-prompt-available?', u'continuation-prompt-tag?',
+ u'continuation?', u'contract-continuation-mark-key',
+ u'contract-custom-write-property-proc', u'contract-exercise',
+ u'contract-first-order', u'contract-first-order-passes?',
+ u'contract-late-neg-projection', u'contract-name', u'contract-proc',
+ u'contract-projection', u'contract-property?',
+ u'contract-random-generate', u'contract-random-generate-fail',
+ u'contract-random-generate-fail?',
+ u'contract-random-generate-get-current-environment',
+ u'contract-random-generate-stash', u'contract-random-generate/choose',
+ u'contract-stronger?', u'contract-struct-exercise',
+ u'contract-struct-generate', u'contract-struct-late-neg-projection',
+ u'contract-struct-list-contract?', u'contract-val-first-projection',
+ u'contract?', u'convert-stream', u'copy-directory/files', u'copy-file',
+ u'copy-port', u'cos', u'cosh', u'count', u'current-blame-format',
+ u'current-break-parameterization', u'current-code-inspector',
+ u'current-command-line-arguments', u'current-compile',
+ u'current-compiled-file-roots', u'current-continuation-marks',
+ u'current-contract-region', u'current-custodian', u'current-directory',
+ u'current-directory-for-user', u'current-drive',
+ u'current-environment-variables', u'current-error-port', u'current-eval',
+ u'current-evt-pseudo-random-generator',
+ u'current-force-delete-permissions', u'current-future',
+ u'current-gc-milliseconds', u'current-get-interaction-input-port',
+ u'current-inexact-milliseconds', u'current-input-port',
+ u'current-inspector', u'current-library-collection-links',
+ u'current-library-collection-paths', u'current-load',
+ u'current-load-extension', u'current-load-relative-directory',
+ u'current-load/use-compiled', u'current-locale', u'current-logger',
+ u'current-memory-use', u'current-milliseconds',
+ u'current-module-declare-name', u'current-module-declare-source',
+ u'current-module-name-resolver', u'current-module-path-for-load',
+ u'current-namespace', u'current-output-port', u'current-parameterization',
+ u'current-plumber', u'current-preserved-thread-cell-values',
+ u'current-print', u'current-process-milliseconds', u'current-prompt-read',
+ u'current-pseudo-random-generator', u'current-read-interaction',
+ u'current-reader-guard', u'current-readtable', u'current-seconds',
+ u'current-security-guard', u'current-subprocess-custodian-mode',
+ u'current-thread', u'current-thread-group',
+ u'current-thread-initial-stack-size',
+ u'current-write-relative-directory', u'curry', u'curryr',
+ u'custodian-box-value', u'custodian-box?', u'custodian-limit-memory',
+ u'custodian-managed-list', u'custodian-memory-accounting-available?',
+ u'custodian-require-memory', u'custodian-shutdown-all', u'custodian?',
+ u'custom-print-quotable-accessor', u'custom-print-quotable?',
+ u'custom-write-accessor', u'custom-write-property-proc', u'custom-write?',
+ u'date', u'date*', u'date*-nanosecond', u'date*-time-zone-name', u'date*?',
+ u'date-day', u'date-dst?', u'date-hour', u'date-minute', u'date-month',
+ u'date-second', u'date-time-zone-offset', u'date-week-day', u'date-year',
+ u'date-year-day', u'date?', u'datum->syntax', u'datum-intern-literal',
+ u'default-continuation-prompt-tag', u'degrees->radians',
+ u'delete-directory', u'delete-directory/files', u'delete-file',
+ u'denominator', u'dict->list', u'dict-can-functional-set?',
+ u'dict-can-remove-keys?', u'dict-clear', u'dict-clear!', u'dict-copy',
+ u'dict-count', u'dict-empty?', u'dict-for-each', u'dict-has-key?',
+ u'dict-implements/c', u'dict-implements?', u'dict-iter-contract',
+ u'dict-iterate-first', u'dict-iterate-key', u'dict-iterate-next',
+ u'dict-iterate-value', u'dict-key-contract', u'dict-keys', u'dict-map',
+ u'dict-mutable?', u'dict-ref', u'dict-ref!', u'dict-remove',
+ u'dict-remove!', u'dict-set', u'dict-set!', u'dict-set*', u'dict-set*!',
+ u'dict-update', u'dict-update!', u'dict-value-contract', u'dict-values',
+ u'dict?', u'directory-exists?', u'directory-list', u'disjoin', u'display',
+ u'display-lines', u'display-lines-to-file', u'display-to-file',
+ u'displayln', u'double-flonum?', u'drop', u'drop-common-prefix',
+ u'drop-right', u'dropf', u'dropf-right', u'dump-memory-stats',
+ u'dup-input-port', u'dup-output-port', u'dynamic->*', u'dynamic-get-field',
+ u'dynamic-object/c', u'dynamic-place', u'dynamic-place*',
+ u'dynamic-require', u'dynamic-require-for-syntax', u'dynamic-send',
+ u'dynamic-set-field!', u'dynamic-wind', u'eighth', u'empty',
+ u'empty-sequence', u'empty-stream', u'empty?',
+ u'environment-variables-copy', u'environment-variables-names',
+ u'environment-variables-ref', u'environment-variables-set!',
+ u'environment-variables?', u'eof', u'eof-evt', u'eof-object?',
+ u'ephemeron-value', u'ephemeron?', u'eprintf', u'eq-contract-val',
+ u'eq-contract?', u'eq-hash-code', u'eq?', u'equal-contract-val',
+ u'equal-contract?', u'equal-hash-code', u'equal-secondary-hash-code',
+ u'equal<%>', u'equal?', u'equal?/recur', u'eqv-hash-code', u'eqv?', u'error',
+ u'error-display-handler', u'error-escape-handler',
+ u'error-print-context-length', u'error-print-source-location',
+ u'error-print-width', u'error-value->string-handler', u'eval',
+ u'eval-jit-enabled', u'eval-syntax', u'even?', u'evt/c', u'evt?',
+ u'exact->inexact', u'exact-ceiling', u'exact-floor', u'exact-integer?',
+ u'exact-nonnegative-integer?', u'exact-positive-integer?', u'exact-round',
+ u'exact-truncate', u'exact?', u'executable-yield-handler', u'exit',
+ u'exit-handler', u'exn', u'exn-continuation-marks', u'exn-message',
+ u'exn:break', u'exn:break-continuation', u'exn:break:hang-up',
+ u'exn:break:hang-up?', u'exn:break:terminate', u'exn:break:terminate?',
+ u'exn:break?', u'exn:fail', u'exn:fail:contract',
+ u'exn:fail:contract:arity', u'exn:fail:contract:arity?',
+ u'exn:fail:contract:blame', u'exn:fail:contract:blame-object',
+ u'exn:fail:contract:blame?', u'exn:fail:contract:continuation',
+ u'exn:fail:contract:continuation?', u'exn:fail:contract:divide-by-zero',
+ u'exn:fail:contract:divide-by-zero?',
+ u'exn:fail:contract:non-fixnum-result',
+ u'exn:fail:contract:non-fixnum-result?', u'exn:fail:contract:variable',
+ u'exn:fail:contract:variable-id', u'exn:fail:contract:variable?',
+ u'exn:fail:contract?', u'exn:fail:filesystem',
+ u'exn:fail:filesystem:errno', u'exn:fail:filesystem:errno-errno',
+ u'exn:fail:filesystem:errno?', u'exn:fail:filesystem:exists',
+ u'exn:fail:filesystem:exists?', u'exn:fail:filesystem:missing-module',
+ u'exn:fail:filesystem:missing-module-path',
+ u'exn:fail:filesystem:missing-module?', u'exn:fail:filesystem:version',
+ u'exn:fail:filesystem:version?', u'exn:fail:filesystem?',
+ u'exn:fail:network', u'exn:fail:network:errno',
+ u'exn:fail:network:errno-errno', u'exn:fail:network:errno?',
+ u'exn:fail:network?', u'exn:fail:object', u'exn:fail:object?',
+ u'exn:fail:out-of-memory', u'exn:fail:out-of-memory?', u'exn:fail:read',
+ u'exn:fail:read-srclocs', u'exn:fail:read:eof', u'exn:fail:read:eof?',
+ u'exn:fail:read:non-char', u'exn:fail:read:non-char?', u'exn:fail:read?',
+ u'exn:fail:syntax', u'exn:fail:syntax-exprs',
+ u'exn:fail:syntax:missing-module',
+ u'exn:fail:syntax:missing-module-path',
+ u'exn:fail:syntax:missing-module?', u'exn:fail:syntax:unbound',
+ u'exn:fail:syntax:unbound?', u'exn:fail:syntax?', u'exn:fail:unsupported',
+ u'exn:fail:unsupported?', u'exn:fail:user', u'exn:fail:user?',
+ u'exn:fail?', u'exn:misc:match?', u'exn:missing-module-accessor',
+ u'exn:missing-module?', u'exn:srclocs-accessor', u'exn:srclocs?', u'exn?',
+ u'exp', u'expand', u'expand-once', u'expand-syntax', u'expand-syntax-once',
+ u'expand-syntax-to-top-form', u'expand-to-top-form', u'expand-user-path',
+ u'explode-path', u'expt', u'externalizable<%>', u'failure-result/c',
+ u'false?', u'field-names', u'fifth', u'file->bytes', u'file->bytes-lines',
+ u'file->lines', u'file->list', u'file->string', u'file->value',
+ u'file-exists?', u'file-name-from-path', u'file-or-directory-identity',
+ u'file-or-directory-modify-seconds', u'file-or-directory-permissions',
+ u'file-position', u'file-position*', u'file-size',
+ u'file-stream-buffer-mode', u'file-stream-port?', u'file-truncate',
+ u'filename-extension', u'filesystem-change-evt',
+ u'filesystem-change-evt-cancel', u'filesystem-change-evt?',
+ u'filesystem-root-list', u'filter', u'filter-map', u'filter-not',
+ u'filter-read-input-port', u'find-executable-path', u'find-files',
+ u'find-library-collection-links', u'find-library-collection-paths',
+ u'find-relative-path', u'find-system-path', u'findf', u'first',
+ u'first-or/c', u'fixnum?', u'flat-contract', u'flat-contract-predicate',
+ u'flat-contract-property?', u'flat-contract?', u'flat-named-contract',
+ u'flatten', u'floating-point-bytes->real', u'flonum?', u'floor',
+ u'flush-output', u'fold-files', u'foldl', u'foldr', u'for-each', u'force',
+ u'format', u'fourth', u'fprintf', u'free-identifier=?',
+ u'free-label-identifier=?', u'free-template-identifier=?',
+ u'free-transformer-identifier=?', u'fsemaphore-count', u'fsemaphore-post',
+ u'fsemaphore-try-wait?', u'fsemaphore-wait', u'fsemaphore?', u'future',
+ u'future?', u'futures-enabled?', u'gcd', u'generate-member-key',
+ u'generate-temporaries', u'generic-set?', u'generic?', u'gensym',
+ u'get-output-bytes', u'get-output-string', u'get-preference',
+ u'get/build-late-neg-projection', u'get/build-val-first-projection',
+ u'getenv', u'global-port-print-handler', u'group-by', u'group-execute-bit',
+ u'group-read-bit', u'group-write-bit', u'guard-evt', u'handle-evt',
+ u'handle-evt?', u'has-blame?', u'has-contract?', u'hash', u'hash->list',
+ u'hash-clear', u'hash-clear!', u'hash-copy', u'hash-copy-clear',
+ u'hash-count', u'hash-empty?', u'hash-eq?', u'hash-equal?', u'hash-eqv?',
+ u'hash-for-each', u'hash-has-key?', u'hash-iterate-first',
+ u'hash-iterate-key', u'hash-iterate-key+value', u'hash-iterate-next',
+ u'hash-iterate-pair', u'hash-iterate-value', u'hash-keys', u'hash-map',
+ u'hash-placeholder?', u'hash-ref', u'hash-ref!', u'hash-remove',
+ u'hash-remove!', u'hash-set', u'hash-set!', u'hash-set*', u'hash-set*!',
+ u'hash-update', u'hash-update!', u'hash-values', u'hash-weak?', u'hash/c',
+ u'hash?', u'hasheq', u'hasheqv', u'identifier-binding',
+ u'identifier-binding-symbol', u'identifier-label-binding',
+ u'identifier-prune-lexical-context',
+ u'identifier-prune-to-source-module',
+ u'identifier-remove-from-definition-context',
+ u'identifier-template-binding', u'identifier-transformer-binding',
+ u'identifier?', u'identity', u'if/c', u'imag-part', u'immutable?',
+ u'impersonate-box', u'impersonate-channel',
+ u'impersonate-continuation-mark-key', u'impersonate-hash',
+ u'impersonate-hash-set', u'impersonate-procedure',
+ u'impersonate-procedure*', u'impersonate-prompt-tag',
+ u'impersonate-struct', u'impersonate-vector', u'impersonator-contract?',
+ u'impersonator-ephemeron', u'impersonator-of?',
+ u'impersonator-prop:application-mark', u'impersonator-prop:blame',
+ u'impersonator-prop:contracted',
+ u'impersonator-property-accessor-procedure?', u'impersonator-property?',
+ u'impersonator?', u'implementation?', u'implementation?/c', u'in-bytes',
+ u'in-bytes-lines', u'in-combinations', u'in-cycle', u'in-dict',
+ u'in-dict-keys', u'in-dict-pairs', u'in-dict-values', u'in-directory',
+ u'in-hash', u'in-hash-keys', u'in-hash-pairs', u'in-hash-values',
+ u'in-immutable-hash', u'in-immutable-hash-keys',
+ u'in-immutable-hash-pairs', u'in-immutable-hash-values',
+ u'in-immutable-set', u'in-indexed', u'in-input-port-bytes',
+ u'in-input-port-chars', u'in-lines', u'in-list', u'in-mlist',
+ u'in-mutable-hash', u'in-mutable-hash-keys', u'in-mutable-hash-pairs',
+ u'in-mutable-hash-values', u'in-mutable-set', u'in-naturals',
+ u'in-parallel', u'in-permutations', u'in-port', u'in-producer', u'in-range',
+ u'in-sequences', u'in-set', u'in-slice', u'in-stream', u'in-string',
+ u'in-syntax', u'in-value', u'in-values*-sequence', u'in-values-sequence',
+ u'in-vector', u'in-weak-hash', u'in-weak-hash-keys', u'in-weak-hash-pairs',
+ u'in-weak-hash-values', u'in-weak-set', u'inexact->exact',
+ u'inexact-real?', u'inexact?', u'infinite?', u'input-port-append',
+ u'input-port?', u'inspector?', u'instanceof/c', u'integer->char',
+ u'integer->integer-bytes', u'integer-bytes->integer', u'integer-in',
+ u'integer-length', u'integer-sqrt', u'integer-sqrt/remainder', u'integer?',
+ u'interface->method-names', u'interface-extension?', u'interface?',
+ u'internal-definition-context-binding-identifiers',
+ u'internal-definition-context-introduce',
+ u'internal-definition-context-seal', u'internal-definition-context?',
+ u'is-a?', u'is-a?/c', u'keyword->string', u'keyword-apply', u'keyword<?',
+ u'keyword?', u'keywords-match', u'kill-thread', u'last', u'last-pair',
+ u'lcm', u'length', u'liberal-define-context?', u'link-exists?', u'list',
+ u'list*', u'list*of', u'list->bytes', u'list->mutable-set',
+ u'list->mutable-seteq', u'list->mutable-seteqv', u'list->set',
+ u'list->seteq', u'list->seteqv', u'list->string', u'list->vector',
+ u'list->weak-set', u'list->weak-seteq', u'list->weak-seteqv',
+ u'list-contract?', u'list-prefix?', u'list-ref', u'list-set', u'list-tail',
+ u'list-update', u'list/c', u'list?', u'listen-port-number?', u'listof',
+ u'load', u'load-extension', u'load-on-demand-enabled', u'load-relative',
+ u'load-relative-extension', u'load/cd', u'load/use-compiled',
+ u'local-expand', u'local-expand/capture-lifts',
+ u'local-transformer-expand', u'local-transformer-expand/capture-lifts',
+ u'locale-string-encoding', u'log', u'log-all-levels', u'log-level-evt',
+ u'log-level?', u'log-max-level', u'log-message', u'log-receiver?',
+ u'logger-name', u'logger?', u'magnitude', u'make-arity-at-least',
+ u'make-base-empty-namespace', u'make-base-namespace', u'make-bytes',
+ u'make-channel', u'make-chaperone-contract',
+ u'make-continuation-mark-key', u'make-continuation-prompt-tag',
+ u'make-contract', u'make-custodian', u'make-custodian-box',
+ u'make-custom-hash', u'make-custom-hash-types', u'make-custom-set',
+ u'make-custom-set-types', u'make-date', u'make-date*',
+ u'make-derived-parameter', u'make-directory', u'make-directory*',
+ u'make-do-sequence', u'make-empty-namespace',
+ u'make-environment-variables', u'make-ephemeron', u'make-exn',
+ u'make-exn:break', u'make-exn:break:hang-up', u'make-exn:break:terminate',
+ u'make-exn:fail', u'make-exn:fail:contract',
+ u'make-exn:fail:contract:arity', u'make-exn:fail:contract:blame',
+ u'make-exn:fail:contract:continuation',
+ u'make-exn:fail:contract:divide-by-zero',
+ u'make-exn:fail:contract:non-fixnum-result',
+ u'make-exn:fail:contract:variable', u'make-exn:fail:filesystem',
+ u'make-exn:fail:filesystem:errno', u'make-exn:fail:filesystem:exists',
+ u'make-exn:fail:filesystem:missing-module',
+ u'make-exn:fail:filesystem:version', u'make-exn:fail:network',
+ u'make-exn:fail:network:errno', u'make-exn:fail:object',
+ u'make-exn:fail:out-of-memory', u'make-exn:fail:read',
+ u'make-exn:fail:read:eof', u'make-exn:fail:read:non-char',
+ u'make-exn:fail:syntax', u'make-exn:fail:syntax:missing-module',
+ u'make-exn:fail:syntax:unbound', u'make-exn:fail:unsupported',
+ u'make-exn:fail:user', u'make-file-or-directory-link',
+ u'make-flat-contract', u'make-fsemaphore', u'make-generic',
+ u'make-handle-get-preference-locked', u'make-hash',
+ u'make-hash-placeholder', u'make-hasheq', u'make-hasheq-placeholder',
+ u'make-hasheqv', u'make-hasheqv-placeholder',
+ u'make-immutable-custom-hash', u'make-immutable-hash',
+ u'make-immutable-hasheq', u'make-immutable-hasheqv',
+ u'make-impersonator-property', u'make-input-port',
+ u'make-input-port/read-to-peek', u'make-inspector',
+ u'make-keyword-procedure', u'make-known-char-range-list',
+ u'make-limited-input-port', u'make-list', u'make-lock-file-name',
+ u'make-log-receiver', u'make-logger', u'make-mixin-contract',
+ u'make-mutable-custom-set', u'make-none/c', u'make-object',
+ u'make-output-port', u'make-parameter', u'make-parent-directory*',
+ u'make-phantom-bytes', u'make-pipe', u'make-pipe-with-specials',
+ u'make-placeholder', u'make-plumber', u'make-polar', u'make-prefab-struct',
+ u'make-primitive-class', u'make-proj-contract',
+ u'make-pseudo-random-generator', u'make-reader-graph', u'make-readtable',
+ u'make-rectangular', u'make-rename-transformer',
+ u'make-resolved-module-path', u'make-security-guard', u'make-semaphore',
+ u'make-set!-transformer', u'make-shared-bytes', u'make-sibling-inspector',
+ u'make-special-comment', u'make-srcloc', u'make-string',
+ u'make-struct-field-accessor', u'make-struct-field-mutator',
+ u'make-struct-type', u'make-struct-type-property',
+ u'make-syntax-delta-introducer', u'make-syntax-introducer',
+ u'make-temporary-file', u'make-tentative-pretty-print-output-port',
+ u'make-thread-cell', u'make-thread-group', u'make-vector',
+ u'make-weak-box', u'make-weak-custom-hash', u'make-weak-custom-set',
+ u'make-weak-hash', u'make-weak-hasheq', u'make-weak-hasheqv',
+ u'make-will-executor', u'map', u'match-equality-test',
+ u'matches-arity-exactly?', u'max', u'mcar', u'mcdr', u'mcons', u'member',
+ u'member-name-key-hash-code', u'member-name-key=?', u'member-name-key?',
+ u'memf', u'memq', u'memv', u'merge-input', u'method-in-interface?', u'min',
+ u'mixin-contract', u'module->exports', u'module->imports',
+ u'module->language-info', u'module->namespace',
+ u'module-compiled-cross-phase-persistent?', u'module-compiled-exports',
+ u'module-compiled-imports', u'module-compiled-language-info',
+ u'module-compiled-name', u'module-compiled-submodules',
+ u'module-declared?', u'module-path-index-join',
+ u'module-path-index-resolve', u'module-path-index-split',
+ u'module-path-index-submodule', u'module-path-index?', u'module-path?',
+ u'module-predefined?', u'module-provide-protected?', u'modulo', u'mpair?',
+ u'mutable-set', u'mutable-seteq', u'mutable-seteqv', u'n->th',
+ u'nack-guard-evt', u'namespace-anchor->empty-namespace',
+ u'namespace-anchor->namespace', u'namespace-anchor?',
+ u'namespace-attach-module', u'namespace-attach-module-declaration',
+ u'namespace-base-phase', u'namespace-mapped-symbols',
+ u'namespace-module-identifier', u'namespace-module-registry',
+ u'namespace-require', u'namespace-require/constant',
+ u'namespace-require/copy', u'namespace-require/expansion-time',
+ u'namespace-set-variable-value!', u'namespace-symbol->identifier',
+ u'namespace-syntax-introduce', u'namespace-undefine-variable!',
+ u'namespace-unprotect-module', u'namespace-variable-value', u'namespace?',
+ u'nan?', u'natural-number/c', u'negate', u'negative?', u'never-evt',
+ u'new-∀/c', u'new-∃/c', u'newline', u'ninth', u'non-empty-listof',
+ u'non-empty-string?', u'none/c', u'normal-case-path', u'normalize-arity',
+ u'normalize-path', u'normalized-arity?', u'not', u'not/c', u'null', u'null?',
+ u'number->string', u'number?', u'numerator', u'object%', u'object->vector',
+ u'object-info', u'object-interface', u'object-method-arity-includes?',
+ u'object-name', u'object-or-false=?', u'object=?', u'object?', u'odd?',
+ u'one-of/c', u'open-input-bytes', u'open-input-file',
+ u'open-input-output-file', u'open-input-string', u'open-output-bytes',
+ u'open-output-file', u'open-output-nowhere', u'open-output-string',
+ u'or/c', u'order-of-magnitude', u'ormap', u'other-execute-bit',
+ u'other-read-bit', u'other-write-bit', u'output-port?', u'pair?',
+ u'parameter-procedure=?', u'parameter/c', u'parameter?',
+ u'parameterization?', u'parse-command-line', u'partition', u'path->bytes',
+ u'path->complete-path', u'path->directory-path', u'path->string',
+ u'path-add-suffix', u'path-convention-type', u'path-element->bytes',
+ u'path-element->string', u'path-element?', u'path-for-some-system?',
+ u'path-list-string->path-list', u'path-only', u'path-replace-suffix',
+ u'path-string?', u'path<?', u'path?', u'pathlist-closure', u'peek-byte',
+ u'peek-byte-or-special', u'peek-bytes', u'peek-bytes!', u'peek-bytes!-evt',
+ u'peek-bytes-avail!', u'peek-bytes-avail!*', u'peek-bytes-avail!-evt',
+ u'peek-bytes-avail!/enable-break', u'peek-bytes-evt', u'peek-char',
+ u'peek-char-or-special', u'peek-string', u'peek-string!',
+ u'peek-string!-evt', u'peek-string-evt', u'peeking-input-port',
+ u'permutations', u'phantom-bytes?', u'pi', u'pi.f', u'pipe-content-length',
+ u'place-break', u'place-channel', u'place-channel-get',
+ u'place-channel-put', u'place-channel-put/get', u'place-channel?',
+ u'place-dead-evt', u'place-enabled?', u'place-kill', u'place-location?',
+ u'place-message-allowed?', u'place-sleep', u'place-wait', u'place?',
+ u'placeholder-get', u'placeholder-set!', u'placeholder?',
+ u'plumber-add-flush!', u'plumber-flush-all',
+ u'plumber-flush-handle-remove!', u'plumber-flush-handle?', u'plumber?',
+ u'poll-guard-evt', u'port->bytes', u'port->bytes-lines', u'port->lines',
+ u'port->list', u'port->string', u'port-closed-evt', u'port-closed?',
+ u'port-commit-peeked', u'port-count-lines!', u'port-count-lines-enabled',
+ u'port-counts-lines?', u'port-display-handler', u'port-file-identity',
+ u'port-file-unlock', u'port-next-location', u'port-number?',
+ u'port-print-handler', u'port-progress-evt',
+ u'port-provides-progress-evts?', u'port-read-handler',
+ u'port-try-file-lock?', u'port-write-handler', u'port-writes-atomic?',
+ u'port-writes-special?', u'port?', u'positive?', u'predicate/c',
+ u'prefab-key->struct-type', u'prefab-key?', u'prefab-struct-key',
+ u'preferences-lock-file-mode', u'pregexp', u'pregexp?', u'pretty-display',
+ u'pretty-format', u'pretty-print', u'pretty-print-.-symbol-without-bars',
+ u'pretty-print-abbreviate-read-macros', u'pretty-print-columns',
+ u'pretty-print-current-style-table', u'pretty-print-depth',
+ u'pretty-print-exact-as-decimal', u'pretty-print-extend-style-table',
+ u'pretty-print-handler', u'pretty-print-newline',
+ u'pretty-print-post-print-hook', u'pretty-print-pre-print-hook',
+ u'pretty-print-print-hook', u'pretty-print-print-line',
+ u'pretty-print-remap-stylable', u'pretty-print-show-inexactness',
+ u'pretty-print-size-hook', u'pretty-print-style-table?',
+ u'pretty-printing', u'pretty-write', u'primitive-closure?',
+ u'primitive-result-arity', u'primitive?', u'print', u'print-as-expression',
+ u'print-boolean-long-form', u'print-box', u'print-graph',
+ u'print-hash-table', u'print-mpair-curly-braces',
+ u'print-pair-curly-braces', u'print-reader-abbreviations',
+ u'print-struct', u'print-syntax-width', u'print-unreadable',
+ u'print-vector-length', u'printable/c', u'printable<%>', u'printf',
+ u'println', u'procedure->method', u'procedure-arity',
+ u'procedure-arity-includes/c', u'procedure-arity-includes?',
+ u'procedure-arity?', u'procedure-closure-contents-eq?',
+ u'procedure-extract-target', u'procedure-keywords',
+ u'procedure-reduce-arity', u'procedure-reduce-keyword-arity',
+ u'procedure-rename', u'procedure-result-arity', u'procedure-specialize',
+ u'procedure-struct-type?', u'procedure?', u'process', u'process*',
+ u'process*/ports', u'process/ports', u'processor-count', u'progress-evt?',
+ u'promise-forced?', u'promise-running?', u'promise/c', u'promise/name?',
+ u'promise?', u'prop:arity-string', u'prop:arrow-contract',
+ u'prop:arrow-contract-get-info', u'prop:arrow-contract?', u'prop:blame',
+ u'prop:chaperone-contract', u'prop:checked-procedure', u'prop:contract',
+ u'prop:contracted', u'prop:custom-print-quotable', u'prop:custom-write',
+ u'prop:dict', u'prop:dict/contract', u'prop:equal+hash', u'prop:evt',
+ u'prop:exn:missing-module', u'prop:exn:srclocs',
+ u'prop:expansion-contexts', u'prop:flat-contract',
+ u'prop:impersonator-of', u'prop:input-port',
+ u'prop:liberal-define-context', u'prop:object-name',
+ u'prop:opt-chaperone-contract', u'prop:opt-chaperone-contract-get-test',
+ u'prop:opt-chaperone-contract?', u'prop:orc-contract',
+ u'prop:orc-contract-get-subcontracts', u'prop:orc-contract?',
+ u'prop:output-port', u'prop:place-location', u'prop:procedure',
+ u'prop:recursive-contract', u'prop:recursive-contract-unroll',
+ u'prop:recursive-contract?', u'prop:rename-transformer', u'prop:sequence',
+ u'prop:set!-transformer', u'prop:stream', u'proper-subset?',
+ u'pseudo-random-generator->vector', u'pseudo-random-generator-vector?',
+ u'pseudo-random-generator?', u'put-preferences', u'putenv', u'quotient',
+ u'quotient/remainder', u'radians->degrees', u'raise',
+ u'raise-argument-error', u'raise-arguments-error', u'raise-arity-error',
+ u'raise-blame-error', u'raise-contract-error', u'raise-mismatch-error',
+ u'raise-not-cons-blame-error', u'raise-range-error',
+ u'raise-result-error', u'raise-syntax-error', u'raise-type-error',
+ u'raise-user-error', u'random', u'random-seed', u'range', u'rational?',
+ u'rationalize', u'read', u'read-accept-bar-quote', u'read-accept-box',
+ u'read-accept-compiled', u'read-accept-dot', u'read-accept-graph',
+ u'read-accept-infix-dot', u'read-accept-lang', u'read-accept-quasiquote',
+ u'read-accept-reader', u'read-byte', u'read-byte-or-special',
+ u'read-bytes', u'read-bytes!', u'read-bytes!-evt', u'read-bytes-avail!',
+ u'read-bytes-avail!*', u'read-bytes-avail!-evt',
+ u'read-bytes-avail!/enable-break', u'read-bytes-evt', u'read-bytes-line',
+ u'read-bytes-line-evt', u'read-case-sensitive', u'read-cdot', u'read-char',
+ u'read-char-or-special', u'read-curly-brace-as-paren',
+ u'read-curly-brace-with-tag', u'read-decimal-as-inexact',
+ u'read-eval-print-loop', u'read-language', u'read-line', u'read-line-evt',
+ u'read-on-demand-source', u'read-square-bracket-as-paren',
+ u'read-square-bracket-with-tag', u'read-string', u'read-string!',
+ u'read-string!-evt', u'read-string-evt', u'read-syntax',
+ u'read-syntax/recursive', u'read/recursive', u'readtable-mapping',
+ u'readtable?', u'real->decimal-string', u'real->double-flonum',
+ u'real->floating-point-bytes', u'real->single-flonum', u'real-in',
+ u'real-part', u'real?', u'reencode-input-port', u'reencode-output-port',
+ u'regexp', u'regexp-match', u'regexp-match*', u'regexp-match-evt',
+ u'regexp-match-exact?', u'regexp-match-peek',
+ u'regexp-match-peek-immediate', u'regexp-match-peek-positions',
+ u'regexp-match-peek-positions*',
+ u'regexp-match-peek-positions-immediate',
+ u'regexp-match-peek-positions-immediate/end',
+ u'regexp-match-peek-positions/end', u'regexp-match-positions',
+ u'regexp-match-positions*', u'regexp-match-positions/end',
+ u'regexp-match/end', u'regexp-match?', u'regexp-max-lookbehind',
+ u'regexp-quote', u'regexp-replace', u'regexp-replace*',
+ u'regexp-replace-quote', u'regexp-replaces', u'regexp-split',
+ u'regexp-try-match', u'regexp?', u'relative-path?', u'relocate-input-port',
+ u'relocate-output-port', u'remainder', u'remf', u'remf*', u'remove',
+ u'remove*', u'remove-duplicates', u'remq', u'remq*', u'remv', u'remv*',
+ u'rename-contract', u'rename-file-or-directory',
+ u'rename-transformer-target', u'rename-transformer?', u'replace-evt',
+ u'reroot-path', u'resolve-path', u'resolved-module-path-name',
+ u'resolved-module-path?', u'rest', u'reverse', u'round', u'second',
+ u'seconds->date', u'security-guard?', u'semaphore-peek-evt',
+ u'semaphore-peek-evt?', u'semaphore-post', u'semaphore-try-wait?',
+ u'semaphore-wait', u'semaphore-wait/enable-break', u'semaphore?',
+ u'sequence->list', u'sequence->stream', u'sequence-add-between',
+ u'sequence-andmap', u'sequence-append', u'sequence-count',
+ u'sequence-filter', u'sequence-fold', u'sequence-for-each',
+ u'sequence-generate', u'sequence-generate*', u'sequence-length',
+ u'sequence-map', u'sequence-ormap', u'sequence-ref', u'sequence-tail',
+ u'sequence/c', u'sequence?', u'set', u'set!-transformer-procedure',
+ u'set!-transformer?', u'set->list', u'set->stream', u'set-add', u'set-add!',
+ u'set-box!', u'set-clear', u'set-clear!', u'set-copy', u'set-copy-clear',
+ u'set-count', u'set-empty?', u'set-eq?', u'set-equal?', u'set-eqv?',
+ u'set-first', u'set-for-each', u'set-implements/c', u'set-implements?',
+ u'set-intersect', u'set-intersect!', u'set-map', u'set-mcar!', u'set-mcdr!',
+ u'set-member?', u'set-mutable?', u'set-phantom-bytes!',
+ u'set-port-next-location!', u'set-remove', u'set-remove!', u'set-rest',
+ u'set-some-basic-contracts!', u'set-subtract', u'set-subtract!',
+ u'set-symmetric-difference', u'set-symmetric-difference!', u'set-union',
+ u'set-union!', u'set-weak?', u'set/c', u'set=?', u'set?', u'seteq', u'seteqv',
+ u'seventh', u'sgn', u'shared-bytes', u'shell-execute', u'shrink-path-wrt',
+ u'shuffle', u'simple-form-path', u'simplify-path', u'sin',
+ u'single-flonum?', u'sinh', u'sixth', u'skip-projection-wrapper?', u'sleep',
+ u'some-system-path->string', u'sort', u'special-comment-value',
+ u'special-comment?', u'special-filter-input-port', u'split-at',
+ u'split-at-right', u'split-common-prefix', u'split-path', u'splitf-at',
+ u'splitf-at-right', u'sqr', u'sqrt', u'srcloc', u'srcloc->string',
+ u'srcloc-column', u'srcloc-line', u'srcloc-position', u'srcloc-source',
+ u'srcloc-span', u'srcloc?', u'stop-after', u'stop-before', u'stream->list',
+ u'stream-add-between', u'stream-andmap', u'stream-append', u'stream-count',
+ u'stream-empty?', u'stream-filter', u'stream-first', u'stream-fold',
+ u'stream-for-each', u'stream-length', u'stream-map', u'stream-ormap',
+ u'stream-ref', u'stream-rest', u'stream-tail', u'stream/c', u'stream?',
+ u'string', u'string->bytes/latin-1', u'string->bytes/locale',
+ u'string->bytes/utf-8', u'string->immutable-string', u'string->keyword',
+ u'string->list', u'string->number', u'string->path',
+ u'string->path-element', u'string->some-system-path', u'string->symbol',
+ u'string->uninterned-symbol', u'string->unreadable-symbol',
+ u'string-append', u'string-append*', u'string-ci<=?', u'string-ci<?',
+ u'string-ci=?', u'string-ci>=?', u'string-ci>?', u'string-contains?',
+ u'string-copy', u'string-copy!', u'string-downcase',
+ u'string-environment-variable-name?', u'string-fill!', u'string-foldcase',
+ u'string-join', u'string-len/c', u'string-length', u'string-locale-ci<?',
+ u'string-locale-ci=?', u'string-locale-ci>?', u'string-locale-downcase',
+ u'string-locale-upcase', u'string-locale<?', u'string-locale=?',
+ u'string-locale>?', u'string-no-nuls?', u'string-normalize-nfc',
+ u'string-normalize-nfd', u'string-normalize-nfkc',
+ u'string-normalize-nfkd', u'string-normalize-spaces', u'string-port?',
+ u'string-prefix?', u'string-ref', u'string-replace', u'string-set!',
+ u'string-split', u'string-suffix?', u'string-titlecase', u'string-trim',
+ u'string-upcase', u'string-utf-8-length', u'string<=?', u'string<?',
+ u'string=?', u'string>=?', u'string>?', u'string?', u'struct->vector',
+ u'struct-accessor-procedure?', u'struct-constructor-procedure?',
+ u'struct-info', u'struct-mutator-procedure?',
+ u'struct-predicate-procedure?', u'struct-type-info',
+ u'struct-type-make-constructor', u'struct-type-make-predicate',
+ u'struct-type-property-accessor-procedure?', u'struct-type-property/c',
+ u'struct-type-property?', u'struct-type?', u'struct:arity-at-least',
+ u'struct:arrow-contract-info', u'struct:date', u'struct:date*',
+ u'struct:exn', u'struct:exn:break', u'struct:exn:break:hang-up',
+ u'struct:exn:break:terminate', u'struct:exn:fail',
+ u'struct:exn:fail:contract', u'struct:exn:fail:contract:arity',
+ u'struct:exn:fail:contract:blame',
+ u'struct:exn:fail:contract:continuation',
+ u'struct:exn:fail:contract:divide-by-zero',
+ u'struct:exn:fail:contract:non-fixnum-result',
+ u'struct:exn:fail:contract:variable', u'struct:exn:fail:filesystem',
+ u'struct:exn:fail:filesystem:errno',
+ u'struct:exn:fail:filesystem:exists',
+ u'struct:exn:fail:filesystem:missing-module',
+ u'struct:exn:fail:filesystem:version', u'struct:exn:fail:network',
+ u'struct:exn:fail:network:errno', u'struct:exn:fail:object',
+ u'struct:exn:fail:out-of-memory', u'struct:exn:fail:read',
+ u'struct:exn:fail:read:eof', u'struct:exn:fail:read:non-char',
+ u'struct:exn:fail:syntax', u'struct:exn:fail:syntax:missing-module',
+ u'struct:exn:fail:syntax:unbound', u'struct:exn:fail:unsupported',
+ u'struct:exn:fail:user', u'struct:srcloc',
+ u'struct:wrapped-extra-arg-arrow', u'struct?', u'sub1', u'subbytes',
+ u'subclass?', u'subclass?/c', u'subprocess', u'subprocess-group-enabled',
+ u'subprocess-kill', u'subprocess-pid', u'subprocess-status',
+ u'subprocess-wait', u'subprocess?', u'subset?', u'substring', u'suggest/c',
+ u'symbol->string', u'symbol-interned?', u'symbol-unreadable?', u'symbol<?',
+ u'symbol=?', u'symbol?', u'symbols', u'sync', u'sync/enable-break',
+ u'sync/timeout', u'sync/timeout/enable-break', u'syntax->datum',
+ u'syntax->list', u'syntax-arm', u'syntax-column', u'syntax-debug-info',
+ u'syntax-disarm', u'syntax-e', u'syntax-line',
+ u'syntax-local-bind-syntaxes', u'syntax-local-certifier',
+ u'syntax-local-context', u'syntax-local-expand-expression',
+ u'syntax-local-get-shadower', u'syntax-local-identifier-as-binding',
+ u'syntax-local-introduce', u'syntax-local-lift-context',
+ u'syntax-local-lift-expression', u'syntax-local-lift-module',
+ u'syntax-local-lift-module-end-declaration',
+ u'syntax-local-lift-provide', u'syntax-local-lift-require',
+ u'syntax-local-lift-values-expression',
+ u'syntax-local-make-definition-context',
+ u'syntax-local-make-delta-introducer',
+ u'syntax-local-module-defined-identifiers',
+ u'syntax-local-module-exports',
+ u'syntax-local-module-required-identifiers', u'syntax-local-name',
+ u'syntax-local-phase-level', u'syntax-local-submodules',
+ u'syntax-local-transforming-module-provides?', u'syntax-local-value',
+ u'syntax-local-value/immediate', u'syntax-original?', u'syntax-position',
+ u'syntax-property', u'syntax-property-preserved?',
+ u'syntax-property-symbol-keys', u'syntax-protect', u'syntax-rearm',
+ u'syntax-recertify', u'syntax-shift-phase-level', u'syntax-source',
+ u'syntax-source-module', u'syntax-span', u'syntax-taint',
+ u'syntax-tainted?', u'syntax-track-origin',
+ u'syntax-transforming-module-expression?',
+ u'syntax-transforming-with-lifts?', u'syntax-transforming?', u'syntax/c',
+ u'syntax?', u'system', u'system*', u'system*/exit-code',
+ u'system-big-endian?', u'system-idle-evt', u'system-language+country',
+ u'system-library-subpath', u'system-path-convention-type', u'system-type',
+ u'system/exit-code', u'tail-marks-match?', u'take', u'take-common-prefix',
+ u'take-right', u'takef', u'takef-right', u'tan', u'tanh',
+ u'tcp-abandon-port', u'tcp-accept', u'tcp-accept-evt',
+ u'tcp-accept-ready?', u'tcp-accept/enable-break', u'tcp-addresses',
+ u'tcp-close', u'tcp-connect', u'tcp-connect/enable-break', u'tcp-listen',
+ u'tcp-listener?', u'tcp-port?', u'tentative-pretty-print-port-cancel',
+ u'tentative-pretty-print-port-transfer', u'tenth', u'terminal-port?',
+ u'the-unsupplied-arg', u'third', u'thread', u'thread-cell-ref',
+ u'thread-cell-set!', u'thread-cell-values?', u'thread-cell?',
+ u'thread-dead-evt', u'thread-dead?', u'thread-group?', u'thread-receive',
+ u'thread-receive-evt', u'thread-resume', u'thread-resume-evt',
+ u'thread-rewind-receive', u'thread-running?', u'thread-send',
+ u'thread-suspend', u'thread-suspend-evt', u'thread-try-receive',
+ u'thread-wait', u'thread/suspend-to-kill', u'thread?', u'time-apply',
+ u'touch', u'transplant-input-port', u'transplant-output-port', u'true',
+ u'truncate', u'udp-addresses', u'udp-bind!', u'udp-bound?', u'udp-close',
+ u'udp-connect!', u'udp-connected?', u'udp-multicast-interface',
+ u'udp-multicast-join-group!', u'udp-multicast-leave-group!',
+ u'udp-multicast-loopback?', u'udp-multicast-set-interface!',
+ u'udp-multicast-set-loopback!', u'udp-multicast-set-ttl!',
+ u'udp-multicast-ttl', u'udp-open-socket', u'udp-receive!',
+ u'udp-receive!*', u'udp-receive!-evt', u'udp-receive!/enable-break',
+ u'udp-receive-ready-evt', u'udp-send', u'udp-send*', u'udp-send-evt',
+ u'udp-send-ready-evt', u'udp-send-to', u'udp-send-to*', u'udp-send-to-evt',
+ u'udp-send-to/enable-break', u'udp-send/enable-break', u'udp?', u'unbox',
+ u'uncaught-exception-handler', u'unit?', u'unspecified-dom',
+ u'unsupplied-arg?', u'use-collection-link-paths',
+ u'use-compiled-file-paths', u'use-user-specific-search-paths',
+ u'user-execute-bit', u'user-read-bit', u'user-write-bit', u'value-blame',
+ u'value-contract', u'values', u'variable-reference->empty-namespace',
+ u'variable-reference->module-base-phase',
+ u'variable-reference->module-declaration-inspector',
+ u'variable-reference->module-path-index',
+ u'variable-reference->module-source', u'variable-reference->namespace',
+ u'variable-reference->phase',
+ u'variable-reference->resolved-module-path',
+ u'variable-reference-constant?', u'variable-reference?', u'vector',
+ u'vector->immutable-vector', u'vector->list',
+ u'vector->pseudo-random-generator', u'vector->pseudo-random-generator!',
+ u'vector->values', u'vector-append', u'vector-argmax', u'vector-argmin',
+ u'vector-copy', u'vector-copy!', u'vector-count', u'vector-drop',
+ u'vector-drop-right', u'vector-fill!', u'vector-filter',
+ u'vector-filter-not', u'vector-immutable', u'vector-immutable/c',
+ u'vector-immutableof', u'vector-length', u'vector-map', u'vector-map!',
+ u'vector-member', u'vector-memq', u'vector-memv', u'vector-ref',
+ u'vector-set!', u'vector-set*!', u'vector-set-performance-stats!',
+ u'vector-split-at', u'vector-split-at-right', u'vector-take',
+ u'vector-take-right', u'vector/c', u'vector?', u'vectorof', u'version',
+ u'void', u'void?', u'weak-box-value', u'weak-box?', u'weak-set',
+ u'weak-seteq', u'weak-seteqv', u'will-execute', u'will-executor?',
+ u'will-register', u'will-try-execute', u'with-input-from-bytes',
+ u'with-input-from-file', u'with-input-from-string',
+ u'with-output-to-bytes', u'with-output-to-file', u'with-output-to-string',
+ u'would-be-future', u'wrap-evt', u'wrapped-extra-arg-arrow',
+ u'wrapped-extra-arg-arrow-extra-neg-party-argument',
+ u'wrapped-extra-arg-arrow-real-func', u'wrapped-extra-arg-arrow?',
+ u'writable<%>', u'write', u'write-byte', u'write-bytes',
+ u'write-bytes-avail', u'write-bytes-avail*', u'write-bytes-avail-evt',
+ u'write-bytes-avail/enable-break', u'write-char', u'write-special',
+ u'write-special-avail*', u'write-special-evt', u'write-string',
+ u'write-to-file', u'writeln', u'xor', u'zero?', u'~.a', u'~.s', u'~.v', u'~a',
+ u'~e', u'~r', u'~s', u'~v'
)
_opening_parenthesis = r'[([{]'
name = 'NewLisp'
aliases = ['newlisp']
- filenames = ['*.lsp', '*.nl']
+ filenames = ['*.lsp', '*.nl', '*.kif']
mimetypes = ['text/x-newlisp', 'application/x-newlisp']
flags = re.IGNORECASE | re.MULTILINE | re.UNICODE
filenames = ['*.shen']
mimetypes = ['text/x-shen', 'application/x-shen']
- DECLARATIONS = re.findall(r'\S+', """
- datatype define defmacro defprolog defcc synonyms declare package
- type function
- """)
-
- SPECIAL_FORMS = re.findall(r'\S+', """
- lambda get let if cases cond put time freeze value load $
- protect or and not do output prolog? trap-error error
- make-string /. set @p @s @v
- """)
-
- BUILTINS = re.findall(r'\S+', """
- == = * + - / < > >= <= <-address <-vector abort absvector
- absvector? address-> adjoin append arity assoc bind boolean?
- bound? call cd close cn compile concat cons cons? cut destroy
- difference element? empty? enable-type-theory error-to-string
- eval eval-kl exception explode external fail fail-if file
- findall fix fst fwhen gensym get-time hash hd hdstr hdv head
- identical implementation in include include-all-but inferences
- input input+ integer? intern intersection is kill language
- length limit lineread loaded macro macroexpand map mapcan
- maxinferences mode n->string nl nth null number? occurrences
- occurs-check open os out port porters pos pr preclude
- preclude-all-but print profile profile-results ps quit read
- read+ read-byte read-file read-file-as-bytelist
- read-file-as-string read-from-string release remove return
- reverse run save set simple-error snd specialise spy step
- stinput stoutput str string->n string->symbol string? subst
- symbol? systemf tail tc tc? thaw tl tlstr tlv track tuple?
- undefmacro unify unify! union unprofile unspecialise untrack
- variable? vector vector-> vector? verified version warn when
- write-byte write-to-file y-or-n?
- """)
-
- BUILTINS_ANYWHERE = re.findall(r'\S+', """
- where skip >> _ ! <e> <!>
- """)
+ DECLARATIONS = (
+ 'datatype', 'define', 'defmacro', 'defprolog', 'defcc',
+ 'synonyms', 'declare', 'package', 'type', 'function',
+ )
+
+ SPECIAL_FORMS = (
+ 'lambda', 'get', 'let', 'if', 'cases', 'cond', 'put', 'time', 'freeze',
+ 'value', 'load', '$', 'protect', 'or', 'and', 'not', 'do', 'output',
+ 'prolog?', 'trap-error', 'error', 'make-string', '/.', 'set', '@p',
+ '@s', '@v',
+ )
+
+ BUILTINS = (
+ '==', '=', '*', '+', '-', '/', '<', '>', '>=', '<=', '<-address',
+ '<-vector', 'abort', 'absvector', 'absvector?', 'address->', 'adjoin',
+ 'append', 'arity', 'assoc', 'bind', 'boolean?', 'bound?', 'call', 'cd',
+ 'close', 'cn', 'compile', 'concat', 'cons', 'cons?', 'cut', 'destroy',
+ 'difference', 'element?', 'empty?', 'enable-type-theory',
+ 'error-to-string', 'eval', 'eval-kl', 'exception', 'explode', 'external',
+ 'fail', 'fail-if', 'file', 'findall', 'fix', 'fst', 'fwhen', 'gensym',
+ 'get-time', 'hash', 'hd', 'hdstr', 'hdv', 'head', 'identical',
+ 'implementation', 'in', 'include', 'include-all-but', 'inferences',
+ 'input', 'input+', 'integer?', 'intern', 'intersection', 'is', 'kill',
+ 'language', 'length', 'limit', 'lineread', 'loaded', 'macro', 'macroexpand',
+ 'map', 'mapcan', 'maxinferences', 'mode', 'n->string', 'nl', 'nth', 'null',
+ 'number?', 'occurrences', 'occurs-check', 'open', 'os', 'out', 'port',
+ 'porters', 'pos', 'pr', 'preclude', 'preclude-all-but', 'print', 'profile',
+ 'profile-results', 'ps', 'quit', 'read', 'read+', 'read-byte', 'read-file',
+ 'read-file-as-bytelist', 'read-file-as-string', 'read-from-string',
+ 'release', 'remove', 'return', 'reverse', 'run', 'save', 'set',
+ 'simple-error', 'snd', 'specialise', 'spy', 'step', 'stinput', 'stoutput',
+ 'str', 'string->n', 'string->symbol', 'string?', 'subst', 'symbol?',
+ 'systemf', 'tail', 'tc', 'tc?', 'thaw', 'tl', 'tlstr', 'tlv', 'track',
+ 'tuple?', 'undefmacro', 'unify', 'unify!', 'union', 'unprofile',
+ 'unspecialise', 'untrack', 'variable?', 'vector', 'vector->', 'vector?',
+ 'verified', 'version', 'warn', 'when', 'write-byte', 'write-to-file',
+ 'y-or-n?',
+ )
+
+ BUILTINS_ANYWHERE = ('where', 'skip', '>>', '_', '!', '<e>', '<!>')
MAPPINGS = dict((s, Keyword) for s in DECLARATIONS)
MAPPINGS.update((s, Name.Builtin) for s in BUILTINS)
MAPPINGS.update((s, Keyword) for s in SPECIAL_FORMS)
- valid_symbol_chars = r'[\w!$%*+,<=>?/.\'@&#:_-]'
+ valid_symbol_chars = r'[\w!$%*+,<=>?/.\'@&#:-]'
valid_name = '%s+' % valid_symbol_chars
symbol_name = r'[a-z!$%%*+,<=>?/.\'@&#_-]%s*' % valid_symbol_chars
variable = r'[A-Z]%s*' % valid_symbol_chars
# valid names for identifiers
# the only real restriction is that names may not consist entirely of digits,
# but this approximation should be good enough for now
- valid_name = r'[a-zA-Z0-9!$%&*+,/:<=>?@^_~|-]+'
+ valid_name = r'[\w!$%&*+,/:<=>?@^~|-]+'
tokens = {
'root': [
# strings, symbols and characters
(r'"(\\\\|\\"|[^"])*"', String),
(r"'" + valid_name, String.Symbol),
- (r"#\\([()/'\"._!§$%& ?=+-]{1}|[a-zA-Z0-9]+)", String.Char),
+ (r"#\\([()/'\"._!§$%& ?=+-]|[a-zA-Z0-9]+)", String.Char),
# constants
(r'(#t|#f)', Name.Constant),
(r'(\[|\])', Punctuation),
],
}
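The widened `valid_name` pattern above can be sanity-checked in isolation. A minimal standalone sketch (plain `re`, not part of this patch) showing what the `\w`-based class accepts:

```python
import re

# Standalone sketch (not part of the patch): the widened identifier
# class above uses \w instead of spelling out the alphanumerics, so
# underscore and Unicode word characters come for free.
valid_name = r'[\w!$%&*+,/:<=>?@^~|-]+'

assert re.fullmatch(valid_name, 'call/cc') is not None
assert re.fullmatch(valid_name, 'set!') is not None
assert re.fullmatch(valid_name, 'string->symbol') is not None
assert re.fullmatch(valid_name, '(') is None  # parens are excluded
```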
+
+
+class XtlangLexer(RegexLexer):
+ """An xtlang lexer for the `Extempore programming environment
+ <http://extempore.moso.com.au>`_.
+
+ This is a mixture of Scheme and xtlang, really. Keyword lists are
+ taken from the Extempore Emacs mode
+ (https://github.com/extemporelang/extempore-emacs-mode)
+
+ .. versionadded:: 2.2
+ """
+ name = 'xtlang'
+ aliases = ['extempore']
+ filenames = ['*.xtm']
+ mimetypes = []
+
+ common_keywords = (
+ 'lambda', 'define', 'if', 'else', 'cond', 'and',
+ 'or', 'let', 'begin', 'set!', 'map', 'for-each',
+ )
+ scheme_keywords = (
+ 'do', 'delay', 'quasiquote', 'unquote', 'unquote-splicing', 'eval',
+ 'case', 'let*', 'letrec', 'quote',
+ )
+ xtlang_bind_keywords = (
+ 'bind-func', 'bind-val', 'bind-lib', 'bind-type', 'bind-alias',
+ 'bind-poly', 'bind-dylib', 'bind-lib-func', 'bind-lib-val',
+ )
+ xtlang_keywords = (
+ 'letz', 'memzone', 'cast', 'convert', 'dotimes', 'doloop',
+ )
+ common_functions = (
+ '*', '+', '-', '/', '<', '<=', '=', '>', '>=', '%', 'abs', 'acos',
+ 'angle', 'append', 'apply', 'asin', 'assoc', 'assq', 'assv',
+ 'atan', 'boolean?', 'caaaar', 'caaadr', 'caaar', 'caadar',
+ 'caaddr', 'caadr', 'caar', 'cadaar', 'cadadr', 'cadar',
+ 'caddar', 'cadddr', 'caddr', 'cadr', 'car', 'cdaaar',
+ 'cdaadr', 'cdaar', 'cdadar', 'cdaddr', 'cdadr', 'cdar',
+ 'cddaar', 'cddadr', 'cddar', 'cdddar', 'cddddr', 'cdddr',
+ 'cddr', 'cdr', 'ceiling', 'cons', 'cos', 'floor', 'length',
+ 'list', 'log', 'max', 'member', 'min', 'modulo', 'not',
+ 'reverse', 'round', 'sin', 'sqrt', 'substring', 'tan',
+ 'println', 'random', 'null?', 'callback', 'now',
+ )
+ scheme_functions = (
+ 'call-with-current-continuation', 'call-with-input-file',
+ 'call-with-output-file', 'call-with-values', 'call/cc',
+ 'char->integer', 'char-alphabetic?', 'char-ci<=?', 'char-ci<?',
+ 'char-ci=?', 'char-ci>=?', 'char-ci>?', 'char-downcase',
+ 'char-lower-case?', 'char-numeric?', 'char-ready?',
+ 'char-upcase', 'char-upper-case?', 'char-whitespace?',
+ 'char<=?', 'char<?', 'char=?', 'char>=?', 'char>?', 'char?',
+ 'close-input-port', 'close-output-port', 'complex?',
+ 'current-input-port', 'current-output-port', 'denominator',
+ 'display', 'dynamic-wind', 'eof-object?', 'eq?', 'equal?',
+ 'eqv?', 'even?', 'exact->inexact', 'exact?', 'exp', 'expt',
+ 'force', 'gcd', 'imag-part', 'inexact->exact', 'inexact?',
+ 'input-port?', 'integer->char', 'integer?',
+ 'interaction-environment', 'lcm', 'list->string',
+ 'list->vector', 'list-ref', 'list-tail', 'list?', 'load',
+ 'magnitude', 'make-polar', 'make-rectangular', 'make-string',
+ 'make-vector', 'memq', 'memv', 'negative?', 'newline',
+ 'null-environment', 'number->string', 'number?',
+ 'numerator', 'odd?', 'open-input-file', 'open-output-file',
+ 'output-port?', 'pair?', 'peek-char', 'port?', 'positive?',
+ 'procedure?', 'quotient', 'rational?', 'rationalize', 'read',
+ 'read-char', 'real-part', 'real?',
+ 'remainder', 'scheme-report-environment', 'set-car!', 'set-cdr!',
+ 'string', 'string->list', 'string->number', 'string->symbol',
+ 'string-append', 'string-ci<=?', 'string-ci<?', 'string-ci=?',
+ 'string-ci>=?', 'string-ci>?', 'string-copy', 'string-fill!',
+ 'string-length', 'string-ref', 'string-set!', 'string<=?',
+ 'string<?', 'string=?', 'string>=?', 'string>?', 'string?',
+ 'symbol->string', 'symbol?', 'transcript-off', 'transcript-on',
+ 'truncate', 'values', 'vector', 'vector->list', 'vector-fill!',
+ 'vector-length', 'vector?',
+ 'with-input-from-file', 'with-output-to-file', 'write',
+ 'write-char', 'zero?',
+ )
+ xtlang_functions = (
+ 'toString', 'afill!', 'pfill!', 'tfill!', 'tbind', 'vfill!',
+ 'array-fill!', 'pointer-fill!', 'tuple-fill!', 'vector-fill!', 'free',
+ 'array', 'tuple', 'list', '~', 'cset!', 'cref', '&', 'bor',
+ 'ang-names', '<<', '>>', 'nil', 'printf', 'sprintf', 'null', 'now',
+ 'pset!', 'pref-ptr', 'vset!', 'vref', 'aset!', 'aref', 'aref-ptr',
+ 'tset!', 'tref', 'tref-ptr', 'salloc', 'halloc', 'zalloc', 'alloc',
+ 'schedule', 'exp', 'log', 'sin', 'cos', 'tan', 'asin', 'acos', 'atan',
+ 'sqrt', 'expt', 'floor', 'ceiling', 'truncate', 'round',
+ 'llvm_printf', 'push_zone', 'pop_zone', 'memzone', 'callback',
+ 'llvm_sprintf', 'make-array', 'array-set!', 'array-ref',
+ 'array-ref-ptr', 'pointer-set!', 'pointer-ref', 'pointer-ref-ptr',
+ 'stack-alloc', 'heap-alloc', 'zone-alloc', 'make-tuple', 'tuple-set!',
+ 'tuple-ref', 'tuple-ref-ptr', 'closure-set!', 'closure-ref', 'pref',
+ 'pdref', 'impc_null', 'bitcast', 'void', 'ifret', 'ret->', 'clrun->',
+ 'make-env-zone', 'make-env', '<>', 'dtof', 'ftod', 'i1tof',
+ 'i1tod', 'i1toi8', 'i1toi32', 'i1toi64', 'i8tof', 'i8tod',
+ 'i8toi1', 'i8toi32', 'i8toi64', 'i32tof', 'i32tod', 'i32toi1',
+ 'i32toi8', 'i32toi64', 'i64tof', 'i64tod', 'i64toi1',
+ 'i64toi8', 'i64toi32',
+ )
+
+ # valid names for Scheme identifiers (names cannot consist fully
+ # of numbers, but this should be good enough for now)
+ valid_scheme_name = r'[\w!$%&*+,/:<=>?@^~|-]+'
+
+ # valid characters in xtlang names & types
+ valid_xtlang_name = r'[\w.!-]+'
+ valid_xtlang_type = r'[]{}[\w<>,*/|!-]+'
+
+ tokens = {
+ # keep track of when we're exiting the xtlang form
+ 'xtlang': [
+ (r'\(', Punctuation, '#push'),
+ (r'\)', Punctuation, '#pop'),
+
+ (r'(?<=bind-func\s)' + valid_xtlang_name, Name.Function),
+ (r'(?<=bind-val\s)' + valid_xtlang_name, Name.Function),
+ (r'(?<=bind-type\s)' + valid_xtlang_name, Name.Function),
+ (r'(?<=bind-alias\s)' + valid_xtlang_name, Name.Function),
+ (r'(?<=bind-poly\s)' + valid_xtlang_name, Name.Function),
+ (r'(?<=bind-lib\s)' + valid_xtlang_name, Name.Function),
+ (r'(?<=bind-dylib\s)' + valid_xtlang_name, Name.Function),
+ (r'(?<=bind-lib-func\s)' + valid_xtlang_name, Name.Function),
+ (r'(?<=bind-lib-val\s)' + valid_xtlang_name, Name.Function),
+
+ # type annotations
+ (r':' + valid_xtlang_type, Keyword.Type),
+
+ # types
+ (r'(<' + valid_xtlang_type + r'>|\|' + valid_xtlang_type + r'\||/' +
+ valid_xtlang_type + r'/|' + valid_xtlang_type + r'\*)\**',
+ Keyword.Type),
+
+ # keywords
+ (words(xtlang_keywords, prefix=r'(?<=\()'), Keyword),
+
+ # builtins
+ (words(xtlang_functions, prefix=r'(?<=\()'), Name.Function),
+
+ include('common'),
+
+ # variables
+ (valid_xtlang_name, Name.Variable),
+ ],
+ 'scheme': [
+ # quoted symbols
+ (r"'" + valid_scheme_name, String.Symbol),
+
+ # char literals
+ (r"#\\([()/'\"._!§$%& ?=+-]|[a-zA-Z0-9]+)", String.Char),
+
+ # special operators
+ (r"('|#|`|,@|,|\.)", Operator),
+
+ # keywords
+ (words(scheme_keywords, prefix=r'(?<=\()'), Keyword),
+
+ # builtins
+ (words(scheme_functions, prefix=r'(?<=\()'), Name.Function),
+
+ include('common'),
+
+ # variables
+ (valid_scheme_name, Name.Variable),
+ ],
+ # common to both xtlang and Scheme
+ 'common': [
+ # comments
+ (r';.*$', Comment.Single),
+
+ # whitespace - usually not relevant
+ (r'\s+', Text),
+
+ # numbers
+ (r'-?\d+\.\d+', Number.Float),
+ (r'-?\d+', Number.Integer),
+
+ # binary/oct/hex literals
+ (r'(#b|#o|#x)[\da-fA-F.]+', Number),
+
+ # strings
+ (r'"(\\\\|\\"|[^"])*"', String),
+
+ # true/false constants
+ (r'(#t|#f)', Name.Constant),
+
+ # keywords
+ (words(common_keywords, prefix=r'(?<=\()'), Keyword),
+
+ # builtins
+ (words(common_functions, prefix=r'(?<=\()'), Name.Function),
+
+ # the famous parentheses!
+ (r'(\(|\))', Punctuation),
+ ],
+ 'root': [
+ # go into xtlang mode
+ (words(xtlang_bind_keywords, prefix=r'(?<=\()', suffix=r'\b'),
+ Keyword, 'xtlang'),
+
+ include('scheme')
+ ],
+ }
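The keyword rules above rely on a lookbehind prefix, `words(..., prefix=r'(?<=\()')`, so a name is highlighted as a keyword only directly after an opening parenthesis. An illustrative sketch (not part of the patch; `keyword_pattern` is a hypothetical stand-in for `pygments.lexer.words`, which builds an optimized alternation via `regex_opt`):

```python
import re

# Hypothetical mini-version of pygments.lexer.words() with the
# (?<=\() lookbehind prefix used by the xtlang/Scheme rules above.
def keyword_pattern(names):
    # longest-first alternation, roughly what words()/regex_opt produce
    alternation = '|'.join(
        re.escape(n) for n in sorted(names, key=len, reverse=True))
    return re.compile(r'(?<=\()(?:%s)' % alternation)

pat = keyword_pattern(['define', 'lambda', 'bind-func'])
assert pat.search('(define foo 1)') is not None
assert pat.search('redefine foo') is None  # no "(" right before it
```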
Lexers for Makefiles and similar.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
bygroups(Keyword, Text), 'export'),
(r'export\s+', Keyword),
# assignment
- (r'([\w${}.-]+)(\s*)([!?:+]?=)([ \t]*)((?:.*\\\n)+|.*\n)',
+ (r'([\w${}().-]+)(\s*)([!?:+]?=)([ \t]*)((?:.*\\\n)+|.*\n)',
bygroups(Name.Variable, Text, Operator, Text, using(BashLexer))),
# strings
(r'(?s)"(\\\\|\\.|[^"\\])*"', String.Double),
(r'\$\(', Keyword, 'expansion'),
],
'expansion': [
- (r'[^$a-zA-Z_)]+', Text),
+ (r'[^$a-zA-Z_()]+', Text),
(r'[a-zA-Z_]+', Name.Variable),
(r'\$', Keyword),
(r'\(', Keyword, '#push'),
Lexers for non-HTML markup languages.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
__all__ = ['BBCodeLexer', 'MoinWikiLexer', 'RstLexer', 'TexLexer', 'GroffLexer',
'MozPreprocHashLexer', 'MozPreprocPercentLexer',
'MozPreprocXulLexer', 'MozPreprocJavascriptLexer',
- 'MozPreprocCssLexer']
+ 'MozPreprocCssLexer', 'MarkdownLexer']
class BBCodeLexer(RegexLexer):
super(MozPreprocCssLexer, self).__init__(
CssLexer, MozPreprocPercentLexer, **options)
+
+class MarkdownLexer(RegexLexer):
+ """
+ For `Markdown <https://help.github.com/categories/writing-on-github/>`_ markup.
+
+ .. versionadded:: 2.2
+ """
+ name = 'markdown'
+ aliases = ['md']
+ filenames = ['*.md']
+ mimetypes = ["text/x-markdown"]
+ flags = re.MULTILINE
+
+ def _handle_codeblock(self, match):
+ """
+ match args: 1:backticks, 2:lang_name, 3:newline, 4:code, 5:backticks
+ """
+ from pygments.lexers import get_lexer_by_name
+
+ # code fence header: backticks, language name, trailing newline
+ yield match.start(1), String, match.group(1)
+ yield match.start(2), String, match.group(2)
+ yield match.start(3), Text, match.group(3)
+
+ # look up a lexer, if enabled and one exists for this language
+ lexer = None
+ if self.handlecodeblocks:
+ try:
+ lexer = get_lexer_by_name(match.group(2).strip())
+ except ClassNotFound:
+ pass
+ code = match.group(4)
+
+ # no lexer found for this language: emit the contents as a plain code block
+ if lexer is None:
+ yield match.start(4), String, code
+ return
+
+ for item in do_insertions([], lexer.get_tokens_unprocessed(code)):
+ yield item
+
+ yield match.start(5), String, match.group(5)
+
+ tokens = {
+ 'root': [
+ # heading with pound prefix
+ (r'^(#)([^#].+\n)', bygroups(Generic.Heading, Text)),
+ (r'^(#{2,6})(.+\n)', bygroups(Generic.Subheading, Text)),
+ # task list
+ (r'^(\s*)([*-] )(\[[ xX]\])( .+\n)',
+ bygroups(Text, Keyword, Keyword, using(this, state='inline'))),
+ # bulleted lists
+ (r'^(\s*)([*-])(\s)(.+\n)',
+ bygroups(Text, Keyword, Text, using(this, state='inline'))),
+ # numbered lists
+ (r'^(\s*)([0-9]+\.)( .+\n)',
+ bygroups(Text, Keyword, using(this, state='inline'))),
+ # quote
+ (r'^(\s*>\s)(.+\n)', bygroups(Keyword, Generic.Emph)),
+ # fenced code block without a language
+ (r'^(```\n)([\w\W]*?)(^```$)', bygroups(String, Text, String)),
+ # code block with language
+ (r'^(```)(\w+)(\n)([\w\W]*?)(^```$)', _handle_codeblock),
+
+ include('inline'),
+ ],
+ 'inline': [
+ # escape
+ (r'\\.', Text),
+ # italics
+ (r'(\s)([*_][^*_]+[*_])(\W|\n)', bygroups(Text, Generic.Emph, Text)),
+ # bold
+ # warning: the following rule eats internal tags, e.g. in **foo _bar_ baz** the "bar" is not rendered as italics
+ (r'(\s)((\*\*|__).*\3)((?=\W|\n))', bygroups(Text, Generic.Strong, None, Text)),
+ # "proper way" (r'(\s)([*_]{2}[^*_]+[*_]{2})((?=\W|\n))', bygroups(Text, Generic.Strong, Text)),
+ # strikethrough
+ (r'(\s)(~~[^~]+~~)((?=\W|\n))', bygroups(Text, Generic.Deleted, Text)),
+ # inline code
+ (r'`[^`]+`', String.Backtick),
+ # mentions and topics (twitter and github stuff)
+ (r'[@#][\w/:]+', Name.Entity),
+ # (image?) links, e.g.: ![an image](http://example.com/image.png)
+ (r'(!?\[)([^]]+)(\])(\()([^)]+)(\))', bygroups(Text, Name.Tag, Text, Text, Name.Attribute, Text)),
+
+ # general text, must come last!
+ (r'[^\\\s]+', Text),
+ (r'.', Text),
+ ],
+ }
+
+ def __init__(self, **options):
+ self.handlecodeblocks = get_bool_opt(options, 'handlecodeblocks', True)
+ RegexLexer.__init__(self, **options)
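The fenced-code-block rule that `_handle_codeblock` is attached to can be exercised on its own. A standalone sketch (not part of the patch) of the same regex; the triple backtick is built via string repetition only so this example nests cleanly here:

```python
import re

# Standalone sketch (not part of the patch) of the fenced-code-block
# regex used by MarkdownLexer._handle_codeblock above, with the same
# group layout: backticks, language, newline, body, closing backticks.
TICKS = '`' * 3
FENCE = re.compile(
    '^(' + TICKS + r')(\w+)(\n)([\w\W]*?)(^' + TICKS + '$)',
    re.MULTILINE)

text = TICKS + "python\nprint('hi')\n" + TICKS
m = FENCE.search(text)
assert m is not None
assert m.group(2) == 'python'        # the language name, group 2
assert m.group(4) == "print('hi')\n"  # the code body, group 4
```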
Just export lexers that were contained in this module.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Lexers for Matlab and related languages.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Lexers for ML family languages.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Lexers for modeling languages.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Multi-Dialect Lexer for Modula-2.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
],
'unigraph_punctuation': [
# Common Punctuation
- (r'[\(\)\[\]{},.:;\|]', Punctuation),
+ (r'[()\[\]{},.:;|]', Punctuation),
# Case Label Separator Synonym
(r'!', Punctuation), # ISO
# Blueprint Punctuation
--- /dev/null
+# -*- coding: utf-8 -*-
+"""
+ pygments.lexers.monte
+ ~~~~~~~~~~~~~~~~~~~~~
+
+ Lexer for the Monte programming language.
+
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
+ :license: BSD, see LICENSE for details.
+"""
+
+from pygments.token import Comment, Error, Keyword, Name, Number, Operator, \
+ Punctuation, String, Whitespace
+from pygments.lexer import RegexLexer, include, words
+
+__all__ = ['MonteLexer']
+
+
+# `var` handled separately
+# `interface` handled separately
+_declarations = ['bind', 'def', 'fn', 'object']
+_methods = ['method', 'to']
+_keywords = [
+ 'as', 'break', 'catch', 'continue', 'else', 'escape', 'exit', 'exports',
+ 'extends', 'finally', 'for', 'guards', 'if', 'implements', 'import',
+ 'in', 'match', 'meta', 'pass', 'return', 'switch', 'try', 'via', 'when',
+ 'while',
+]
+_operators = [
+ # Unary
+ '~', '!',
+ # Binary
+ '+', '-', '*', '/', '%', '**', '&', '|', '^', '<<', '>>',
+ # Binary augmented
+ '+=', '-=', '*=', '/=', '%=', '**=', '&=', '|=', '^=', '<<=', '>>=',
+ # Comparison
+ '==', '!=', '<', '<=', '>', '>=', '<=>',
+ # Patterns and assignment
+ ':=', '?', '=~', '!~', '=>',
+ # Calls and sends
+ '.', '<-', '->',
+]
+_escape_pattern = (
+ r'(?:\\x[0-9a-fA-F]{2}|\\u[0-9a-fA-F]{4}|\\U[0-9a-fA-F]{8}|'
+ r'\\["\'\\bftnr])')
+# _char = _escape_chars + [('.', String.Char)]
+_identifier = r'[_a-zA-Z]\w*'
+
+_constants = [
+ # Void constants
+ 'null',
+ # Bool constants
+ 'false', 'true',
+ # Double constants
+ 'Infinity', 'NaN',
+ # Special objects
+ 'M', 'Ref', 'throw', 'traceln',
+]
+
+_guards = [
+ 'Any', 'Binding', 'Bool', 'Bytes', 'Char', 'DeepFrozen', 'Double',
+ 'Empty', 'Int', 'List', 'Map', 'Near', 'NullOk', 'Same', 'Selfless',
+ 'Set', 'Str', 'SubrangeGuard', 'Transparent', 'Void',
+]
+
+_safeScope = [
+ '_accumulateList', '_accumulateMap', '_auditedBy', '_bind',
+ '_booleanFlow', '_comparer', '_equalizer', '_iterForever', '_loop',
+ '_makeBytes', '_makeDouble', '_makeFinalSlot', '_makeInt', '_makeList',
+ '_makeMap', '_makeMessageDesc', '_makeOrderedSpace', '_makeParamDesc',
+ '_makeProtocolDesc', '_makeSourceSpan', '_makeString', '_makeVarSlot',
+ '_makeVerbFacet', '_mapExtract', '_matchSame', '_quasiMatcher',
+ '_slotToBinding', '_splitList', '_suchThat', '_switchFailed',
+ '_validateFor', 'b__quasiParser', 'eval', 'import', 'm__quasiParser',
+ 'makeBrandPair', 'makeLazySlot', 'safeScope', 'simple__quasiParser',
+]
+
+
+class MonteLexer(RegexLexer):
+ """
+ Lexer for the `Monte <https://monte.readthedocs.io/>`_ programming language.
+
+ .. versionadded:: 2.2
+ """
+ name = 'Monte'
+ aliases = ['monte']
+ filenames = ['*.mt']
+
+ tokens = {
+ 'root': [
+ # Comments
+ (r'#[^\n]*\n', Comment),
+
+ # Docstrings
+ # Apologies for the non-greedy matcher here.
+ (r'/\*\*.*?\*/', String.Doc),
+
+ # `var` declarations
+ (r'\bvar\b', Keyword.Declaration, 'var'),
+
+ # `interface` declarations
+ (r'\binterface\b', Keyword.Declaration, 'interface'),
+
+ # method declarations
+ (words(_methods, prefix='\\b', suffix='\\b'),
+ Keyword, 'method'),
+
+ # All other declarations
+ (words(_declarations, prefix='\\b', suffix='\\b'),
+ Keyword.Declaration),
+
+ # Keywords
+ (words(_keywords, prefix=r'\b', suffix=r'\b'), Keyword),
+
+ # Literals
+ ('[+-]?0x[_0-9a-fA-F]+', Number.Hex),
+ (r'[+-]?[_0-9]+\.[_0-9]*([eE][+-]?[_0-9]+)?', Number.Float),
+ ('[+-]?[_0-9]+', Number.Integer),
+ ("'", String.Double, 'char'),
+ ('"', String.Double, 'string'),
+
+ # Quasiliterals
+ ('`', String.Backtick, 'ql'),
+
+ # Operators
+ (words(_operators), Operator),
+
+ # Verb operators
+ (_identifier + '=', Operator.Word),
+
+ # Safe scope constants
+ (words(_constants, prefix=r'\b', suffix=r'\b'),
+ Keyword.Pseudo),
+
+ # Safe scope guards
+ (words(_guards, prefix=r'\b', suffix=r'\b'), Keyword.Type),
+
+ # All other safe scope names
+ (words(_safeScope, prefix=r'\b', suffix=r'\b'),
+ Name.Builtin),
+
+ # Identifiers
+ (_identifier, Name),
+
+ # Punctuation
+ (r'[(){}\[\]:,]', Punctuation),
+
+ # Whitespace
+ (' +', Whitespace),
+
+ # Definite lexer errors
+ ('=', Error),
+ ],
+ 'char': [
+ # It is definitely an error to have a char of width == 0.
+ ("'", Error, '#pop'),
+ (_escape_pattern, String.Escape, 'charEnd'),
+ ('.', String.Char, 'charEnd'),
+ ],
+ 'charEnd': [
+ ("'", String.Char, '#pop:2'),
+ # It is definitely an error to have a char of width > 1.
+ ('.', Error),
+ ],
+ # The state of things coming into an interface.
+ 'interface': [
+ (' +', Whitespace),
+ (_identifier, Name.Class, '#pop'),
+ include('root'),
+ ],
+ # The state of things coming into a method.
+ 'method': [
+ (' +', Whitespace),
+ (_identifier, Name.Function, '#pop'),
+ include('root'),
+ ],
+ 'string': [
+ ('"', String.Double, '#pop'),
+ (_escape_pattern, String.Escape),
+ (r'\n', String.Double),
+ ('.', String.Double),
+ ],
+ 'ql': [
+ ('`', String.Backtick, '#pop'),
+ (r'\$' + _escape_pattern, String.Escape),
+ (r'\$\$', String.Escape),
+ (r'@@', String.Escape),
+ (r'\$\{', String.Interpol, 'qlNest'),
+ (r'@\{', String.Interpol, 'qlNest'),
+ (r'\$' + _identifier, Name),
+ ('@' + _identifier, Name),
+ ('.', String.Backtick),
+ ],
+ 'qlNest': [
+ (r'\}', String.Interpol, '#pop'),
+ include('root'),
+ ],
+ # The state of things immediately following `var`.
+ 'var': [
+ (' +', Whitespace),
+ (_identifier, Name.Variable, '#pop'),
+ include('root'),
+ ],
+ }
--- /dev/null
+# -*- coding: utf-8 -*-
+"""
+ pygments.lexers.ncl
+ ~~~~~~~~~~~~~~~~~~~
+
+ Lexers for NCAR Command Language.
+
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
+ :license: BSD, see LICENSE for details.
+"""
+
+import re
+
+from pygments.lexer import RegexLexer, include, words
+from pygments.token import Text, Comment, Operator, Keyword, Name, String, \
+ Number, Punctuation
+
+__all__ = ['NCLLexer']
+
+
+class NCLLexer(RegexLexer):
+ """
+ Lexer for NCL code.
+
+ .. versionadded:: 2.2
+ """
+ name = 'NCL'
+ aliases = ['ncl']
+ filenames = ['*.ncl']
+ mimetypes = ['text/ncl']
+ flags = re.MULTILINE
+
+ tokens = {
+ 'root': [
+ (r';.*\n', Comment),
+ include('strings'),
+ include('core'),
+ (r'[a-zA-Z_]\w*', Name),
+ include('nums'),
+ (r'\s+', Text),
+ ],
+ 'core': [
+ # Statements
+ (words((
+ 'begin', 'break', 'continue', 'create', 'defaultapp', 'do',
+ 'else', 'end', 'external', 'exit', 'True', 'False', 'file', 'function',
+ 'getvalues', 'graphic', 'group', 'if', 'list', 'load', 'local',
+ 'new', '_Missing', 'Missing', 'noparent', 'procedure',
+ 'quit', 'QUIT', 'Quit', 'record', 'return', 'setvalues', 'stop',
+ 'then', 'while'), prefix=r'\b', suffix=r'\s*\b'),
+ Keyword),
+
+ # Data Types
+ (words((
+ 'ubyte', 'uint', 'uint64', 'ulong', 'string', 'byte',
+ 'character', 'double', 'float', 'integer', 'int64', 'logical',
+ 'long', 'short', 'ushort', 'enumeric', 'numeric', 'snumeric'),
+ prefix=r'\b', suffix=r'\s*\b'),
+ Keyword.Type),
+
+ # Operators
+ (r'[%^*+\-/<>]', Operator),
+
+ # Punctuation
+ (r'[\[\]():@$!&|.,\\{}]', Punctuation),
+ (r'[=:]', Punctuation),
+
+ # Intrinsics
+ (words((
+ 'abs', 'acos', 'addfile', 'addfiles', 'all', 'angmom_atm', 'any',
+ 'area_conserve_remap', 'area_hi2lores', 'area_poly_sphere',
+ 'asciiread', 'asciiwrite', 'asin', 'atan', 'atan2', 'attsetvalues',
+ 'avg', 'betainc', 'bin_avg', 'bin_sum', 'bw_bandpass_filter',
+ 'cancor', 'cbinread', 'cbinwrite', 'cd_calendar', 'cd_inv_calendar',
+ 'cdfbin_p', 'cdfbin_pr', 'cdfbin_s', 'cdfbin_xn', 'cdfchi_p',
+ 'cdfchi_x', 'cdfgam_p', 'cdfgam_x', 'cdfnor_p', 'cdfnor_x',
+ 'cdft_p', 'cdft_t', 'ceil', 'center_finite_diff',
+ 'center_finite_diff_n', 'cfftb', 'cfftf', 'cfftf_frq_reorder',
+ 'charactertodouble', 'charactertofloat', 'charactertointeger',
+ 'charactertolong', 'charactertoshort', 'charactertostring',
+ 'chartodouble', 'chartofloat', 'chartoint', 'chartointeger',
+ 'chartolong', 'chartoshort', 'chartostring', 'chiinv', 'clear',
+ 'color_index_to_rgba', 'conform', 'conform_dims', 'cos', 'cosh',
+ 'count_unique_values', 'covcorm', 'covcorm_xy', 'craybinnumrec',
+ 'craybinrecread', 'create_graphic', 'csa1', 'csa1d', 'csa1s',
+ 'csa1x', 'csa1xd', 'csa1xs', 'csa2', 'csa2d', 'csa2l', 'csa2ld',
+ 'csa2ls', 'csa2lx', 'csa2lxd', 'csa2lxs', 'csa2s', 'csa2x',
+ 'csa2xd', 'csa2xs', 'csa3', 'csa3d', 'csa3l', 'csa3ld', 'csa3ls',
+ 'csa3lx', 'csa3lxd', 'csa3lxs', 'csa3s', 'csa3x', 'csa3xd',
+ 'csa3xs', 'csc2s', 'csgetp', 'css2c', 'cssetp', 'cssgrid', 'csstri',
+ 'csvoro', 'cumsum', 'cz2ccm', 'datatondc', 'day_of_week',
+ 'day_of_year', 'days_in_month', 'default_fillvalue', 'delete',
+ 'depth_to_pres', 'destroy', 'determinant', 'dewtemp_trh',
+ 'dgeevx_lapack', 'dim_acumrun_n', 'dim_avg', 'dim_avg_n',
+ 'dim_avg_wgt', 'dim_avg_wgt_n', 'dim_cumsum', 'dim_cumsum_n',
+ 'dim_gamfit_n', 'dim_gbits', 'dim_max', 'dim_max_n', 'dim_median',
+ 'dim_median_n', 'dim_min', 'dim_min_n', 'dim_num', 'dim_num_n',
+ 'dim_numrun_n', 'dim_pqsort', 'dim_pqsort_n', 'dim_product',
+ 'dim_product_n', 'dim_rmsd', 'dim_rmsd_n', 'dim_rmvmean',
+ 'dim_rmvmean_n', 'dim_rmvmed', 'dim_rmvmed_n', 'dim_spi_n',
+ 'dim_standardize', 'dim_standardize_n', 'dim_stat4', 'dim_stat4_n',
+ 'dim_stddev', 'dim_stddev_n', 'dim_sum', 'dim_sum_n', 'dim_sum_wgt',
+ 'dim_sum_wgt_n', 'dim_variance', 'dim_variance_n', 'dimsizes',
+ 'doubletobyte', 'doubletochar', 'doubletocharacter',
+ 'doubletofloat', 'doubletoint', 'doubletointeger', 'doubletolong',
+ 'doubletoshort', 'dpres_hybrid_ccm', 'dpres_plevel', 'draw',
+ 'draw_color_palette', 'dsgetp', 'dsgrid2', 'dsgrid2d', 'dsgrid2s',
+ 'dsgrid3', 'dsgrid3d', 'dsgrid3s', 'dspnt2', 'dspnt2d', 'dspnt2s',
+ 'dspnt3', 'dspnt3d', 'dspnt3s', 'dssetp', 'dtrend', 'dtrend_msg',
+ 'dtrend_msg_n', 'dtrend_n', 'dtrend_quadratic',
+ 'dtrend_quadratic_msg_n', 'dv2uvf', 'dv2uvg', 'dz_height',
+ 'echo_off', 'echo_on', 'eof2data', 'eof_varimax', 'eofcor',
+ 'eofcor_pcmsg', 'eofcor_ts', 'eofcov', 'eofcov_pcmsg', 'eofcov_ts',
+ 'eofunc', 'eofunc_ts', 'eofunc_varimax', 'equiv_sample_size', 'erf',
+ 'erfc', 'esacr', 'esacv', 'esccr', 'esccv', 'escorc', 'escorc_n',
+ 'escovc', 'exit', 'exp', 'exp_tapersh', 'exp_tapersh_wgts',
+ 'exp_tapershC', 'ezfftb', 'ezfftb_n', 'ezfftf', 'ezfftf_n',
+ 'f2fosh', 'f2foshv', 'f2fsh', 'f2fshv', 'f2gsh', 'f2gshv', 'fabs',
+ 'fbindirread', 'fbindirwrite', 'fbinnumrec', 'fbinread',
+ 'fbinrecread', 'fbinrecwrite', 'fbinwrite', 'fft2db', 'fft2df',
+ 'fftshift', 'fileattdef', 'filechunkdimdef', 'filedimdef',
+ 'fileexists', 'filegrpdef', 'filevarattdef', 'filevarchunkdef',
+ 'filevarcompressleveldef', 'filevardef', 'filevardimsizes',
+ 'filwgts_lancos', 'filwgts_lanczos', 'filwgts_normal',
+ 'floattobyte', 'floattochar', 'floattocharacter', 'floattoint',
+ 'floattointeger', 'floattolong', 'floattoshort', 'floor',
+ 'fluxEddy', 'fo2fsh', 'fo2fshv', 'fourier_info', 'frame', 'fspan',
+ 'ftcurv', 'ftcurvd', 'ftcurvi', 'ftcurvp', 'ftcurvpi', 'ftcurvps',
+ 'ftcurvs', 'ftest', 'ftgetp', 'ftkurv', 'ftkurvd', 'ftkurvp',
+ 'ftkurvpd', 'ftsetp', 'ftsurf', 'g2fsh', 'g2fshv', 'g2gsh',
+ 'g2gshv', 'gamma', 'gammainc', 'gaus', 'gaus_lobat',
+ 'gaus_lobat_wgt', 'gc_aangle', 'gc_clkwise', 'gc_dangle',
+ 'gc_inout', 'gc_latlon', 'gc_onarc', 'gc_pnt2gc', 'gc_qarea',
+ 'gc_tarea', 'generate_2d_array', 'get_color_index',
+ 'get_color_rgba', 'get_cpu_time', 'get_isolines', 'get_ncl_version',
+ 'get_script_name', 'get_script_prefix_name', 'get_sphere_radius',
+ 'get_unique_values', 'getbitsone', 'getenv', 'getfiledimsizes',
+ 'getfilegrpnames', 'getfilepath', 'getfilevaratts',
+ 'getfilevarchunkdimsizes', 'getfilevardims', 'getfilevardimsizes',
+ 'getfilevarnames', 'getfilevartypes', 'getvaratts', 'getvardims',
+ 'gradsf', 'gradsg', 'greg2jul', 'grid2triple', 'hlsrgb', 'hsvrgb',
+ 'hydro', 'hyi2hyo', 'idsfft', 'igradsf', 'igradsg', 'ilapsf',
+ 'ilapsg', 'ilapvf', 'ilapvg', 'ind', 'ind_resolve', 'int2p',
+ 'int2p_n', 'integertobyte', 'integertochar', 'integertocharacter',
+ 'integertoshort', 'inttobyte', 'inttochar', 'inttoshort',
+ 'inverse_matrix', 'isatt', 'isbigendian', 'isbyte', 'ischar',
+ 'iscoord', 'isdefined', 'isdim', 'isdimnamed', 'isdouble',
+ 'isenumeric', 'isfile', 'isfilepresent', 'isfilevar',
+ 'isfilevaratt', 'isfilevarcoord', 'isfilevardim', 'isfloat',
+ 'isfunc', 'isgraphic', 'isint', 'isint64', 'isinteger',
+ 'isleapyear', 'islogical', 'islong', 'ismissing', 'isnan_ieee',
+ 'isnumeric', 'ispan', 'isproc', 'isshort', 'issnumeric', 'isstring',
+ 'isubyte', 'isuint', 'isuint64', 'isulong', 'isunlimited',
+ 'isunsigned', 'isushort', 'isvar', 'jul2greg', 'kmeans_as136',
+ 'kolsm2_n', 'kron_product', 'lapsf', 'lapsg', 'lapvf', 'lapvg',
+ 'latlon2utm', 'lclvl', 'lderuvf', 'lderuvg', 'linint1', 'linint1_n',
+ 'linint2', 'linint2_points', 'linmsg', 'linmsg_n', 'linrood_latwgt',
+ 'linrood_wgt', 'list_files', 'list_filevars', 'list_hlus',
+ 'list_procfuncs', 'list_vars', 'ListAppend', 'ListCount',
+ 'ListGetType', 'ListIndex', 'ListIndexFromName', 'ListPop',
+ 'ListPush', 'ListSetType', 'loadscript', 'local_max', 'local_min',
+ 'log', 'log10', 'longtobyte', 'longtochar', 'longtocharacter',
+ 'longtoint', 'longtointeger', 'longtoshort', 'lspoly', 'lspoly_n',
+ 'mask', 'max', 'maxind', 'min', 'minind', 'mixed_layer_depth',
+ 'mixhum_ptd', 'mixhum_ptrh', 'mjo_cross_coh2pha',
+ 'mjo_cross_segment', 'moc_globe_atl', 'monthday', 'natgrid',
+ 'natgridd', 'natgrids', 'ncargpath', 'ncargversion', 'ndctodata',
+ 'ndtooned', 'new', 'NewList', 'ngezlogo', 'nggcog', 'nggetp',
+ 'nglogo', 'ngsetp', 'NhlAddAnnotation', 'NhlAddData',
+ 'NhlAddOverlay', 'NhlAddPrimitive', 'NhlAppGetDefaultParentId',
+ 'NhlChangeWorkstation', 'NhlClassName', 'NhlClearWorkstation',
+ 'NhlDataPolygon', 'NhlDataPolyline', 'NhlDataPolymarker',
+ 'NhlDataToNDC', 'NhlDestroy', 'NhlDraw', 'NhlFrame', 'NhlFreeColor',
+ 'NhlGetBB', 'NhlGetClassResources', 'NhlGetErrorObjectId',
+ 'NhlGetNamedColorIndex', 'NhlGetParentId',
+ 'NhlGetParentWorkstation', 'NhlGetWorkspaceObjectId',
+ 'NhlIsAllocatedColor', 'NhlIsApp', 'NhlIsDataComm', 'NhlIsDataItem',
+ 'NhlIsDataSpec', 'NhlIsTransform', 'NhlIsView', 'NhlIsWorkstation',
+ 'NhlName', 'NhlNDCPolygon', 'NhlNDCPolyline', 'NhlNDCPolymarker',
+ 'NhlNDCToData', 'NhlNewColor', 'NhlNewDashPattern', 'NhlNewMarker',
+ 'NhlPalGetDefined', 'NhlRemoveAnnotation', 'NhlRemoveData',
+ 'NhlRemoveOverlay', 'NhlRemovePrimitive', 'NhlSetColor',
+ 'NhlSetDashPattern', 'NhlSetMarker', 'NhlUpdateData',
+ 'NhlUpdateWorkstation', 'nice_mnmxintvl', 'nngetaspectd',
+ 'nngetaspects', 'nngetp', 'nngetsloped', 'nngetslopes', 'nngetwts',
+ 'nngetwtsd', 'nnpnt', 'nnpntd', 'nnpntend', 'nnpntendd',
+ 'nnpntinit', 'nnpntinitd', 'nnpntinits', 'nnpnts', 'nnsetp', 'num',
+ 'obj_anal_ic', 'omega_ccm', 'onedtond', 'overlay', 'paleo_outline',
+ 'pdfxy_bin', 'poisson_grid_fill', 'pop_remap', 'potmp_insitu_ocn',
+ 'prcwater_dp', 'pres2hybrid', 'pres_hybrid_ccm', 'pres_sigma',
+ 'print', 'print_table', 'printFileVarSummary', 'printVarSummary',
+ 'product', 'pslec', 'pslhor', 'pslhyp', 'qsort', 'rand',
+ 'random_chi', 'random_gamma', 'random_normal', 'random_setallseed',
+ 'random_uniform', 'rcm2points', 'rcm2rgrid', 'rdsstoi',
+ 'read_colormap_file', 'reg_multlin', 'regcoef', 'regCoef_n',
+ 'regline', 'relhum', 'replace_ieeenan', 'reshape', 'reshape_ind',
+ 'rgba_to_color_index', 'rgbhls', 'rgbhsv', 'rgbyiq', 'rgrid2rcm',
+ 'rhomb_trunc', 'rip_cape_2d', 'rip_cape_3d', 'round', 'rtest',
+ 'runave', 'runave_n', 'set_default_fillvalue', 'set_sphere_radius',
+ 'setfileoption', 'sfvp2uvf', 'sfvp2uvg', 'shaec', 'shagc',
+ 'shgetnp', 'shgetp', 'shgrid', 'shorttobyte', 'shorttochar',
+ 'shorttocharacter', 'show_ascii', 'shsec', 'shsetp', 'shsgc',
+ 'shsgc_R42', 'sigma2hybrid', 'simpeq', 'simpne', 'sin',
+ 'sindex_yrmo', 'sinh', 'sizeof', 'sleep', 'smth9', 'snindex_yrmo',
+ 'solve_linsys', 'span_color_indexes', 'span_color_rgba',
+ 'sparse_matrix_mult', 'spcorr', 'spcorr_n', 'specx_anal',
+ 'specxy_anal', 'spei', 'sprintf', 'sprinti', 'sqrt', 'sqsort',
+ 'srand', 'stat2', 'stat4', 'stat_medrng', 'stat_trim',
+ 'status_exit', 'stdatmus_p2tdz', 'stdatmus_z2tdp', 'stddev',
+ 'str_capital', 'str_concat', 'str_fields_count', 'str_get_cols',
+ 'str_get_dq', 'str_get_field', 'str_get_nl', 'str_get_sq',
+ 'str_get_tab', 'str_index_of_substr', 'str_insert', 'str_is_blank',
+ 'str_join', 'str_left_strip', 'str_lower', 'str_match',
+ 'str_match_ic', 'str_match_ic_regex', 'str_match_ind',
+ 'str_match_ind_ic', 'str_match_ind_ic_regex', 'str_match_ind_regex',
+ 'str_match_regex', 'str_right_strip', 'str_split',
+ 'str_split_by_length', 'str_split_csv', 'str_squeeze', 'str_strip',
+ 'str_sub_str', 'str_switch', 'str_upper', 'stringtochar',
+ 'stringtocharacter', 'stringtodouble', 'stringtofloat',
+ 'stringtoint', 'stringtointeger', 'stringtolong', 'stringtoshort',
+ 'strlen', 'student_t', 'sum', 'svd_lapack', 'svdcov', 'svdcov_sv',
+ 'svdstd', 'svdstd_sv', 'system', 'systemfunc', 'tan', 'tanh',
+ 'taper', 'taper_n', 'tdclrs', 'tdctri', 'tdcudp', 'tdcurv',
+ 'tddtri', 'tdez2d', 'tdez3d', 'tdgetp', 'tdgrds', 'tdgrid',
+ 'tdgtrs', 'tdinit', 'tditri', 'tdlbla', 'tdlblp', 'tdlbls',
+ 'tdline', 'tdlndp', 'tdlnpa', 'tdlpdp', 'tdmtri', 'tdotri',
+ 'tdpara', 'tdplch', 'tdprpa', 'tdprpi', 'tdprpt', 'tdsetp',
+ 'tdsort', 'tdstri', 'tdstrs', 'tdttri', 'thornthwaite', 'tobyte',
+ 'tochar', 'todouble', 'tofloat', 'toint', 'toint64', 'tointeger',
+ 'tolong', 'toshort', 'tosigned', 'tostring', 'tostring_with_format',
+ 'totype', 'toubyte', 'touint', 'touint64', 'toulong', 'tounsigned',
+ 'toushort', 'trend_manken', 'tri_trunc', 'triple2grid',
+ 'triple2grid2d', 'trop_wmo', 'ttest', 'typeof', 'undef',
+ 'unique_string', 'update', 'ushorttoint', 'ut_calendar',
+ 'ut_inv_calendar', 'utm2latlon', 'uv2dv_cfd', 'uv2dvf', 'uv2dvg',
+ 'uv2sfvpf', 'uv2sfvpg', 'uv2vr_cfd', 'uv2vrdvf', 'uv2vrdvg',
+ 'uv2vrf', 'uv2vrg', 'v5d_close', 'v5d_create', 'v5d_setLowLev',
+ 'v5d_setUnits', 'v5d_write', 'v5d_write_var', 'variance', 'vhaec',
+ 'vhagc', 'vhsec', 'vhsgc', 'vibeta', 'vinth2p', 'vinth2p_ecmwf',
+ 'vinth2p_ecmwf_nodes', 'vinth2p_nodes', 'vintp2p_ecmwf', 'vr2uvf',
+ 'vr2uvg', 'vrdv2uvf', 'vrdv2uvg', 'wavelet', 'wavelet_default',
+ 'weibull', 'wgt_area_smooth', 'wgt_areaave', 'wgt_areaave2',
+ 'wgt_arearmse', 'wgt_arearmse2', 'wgt_areasum2', 'wgt_runave',
+ 'wgt_runave_n', 'wgt_vert_avg_beta', 'wgt_volave', 'wgt_volave_ccm',
+ 'wgt_volrmse', 'wgt_volrmse_ccm', 'where', 'wk_smooth121', 'wmbarb',
+ 'wmbarbmap', 'wmdrft', 'wmgetp', 'wmlabs', 'wmsetp', 'wmstnm',
+ 'wmvect', 'wmvectmap', 'wmvlbl', 'wrf_avo', 'wrf_cape_2d',
+ 'wrf_cape_3d', 'wrf_dbz', 'wrf_eth', 'wrf_helicity', 'wrf_ij_to_ll',
+ 'wrf_interp_1d', 'wrf_interp_2d_xy', 'wrf_interp_3d_z',
+ 'wrf_latlon_to_ij', 'wrf_ll_to_ij', 'wrf_omega', 'wrf_pvo',
+ 'wrf_rh', 'wrf_slp', 'wrf_smooth_2d', 'wrf_td', 'wrf_tk',
+ 'wrf_updraft_helicity', 'wrf_uvmet', 'wrf_virtual_temp',
+ 'wrf_wetbulb', 'wrf_wps_close_int', 'wrf_wps_open_int',
+ 'wrf_wps_rddata_int', 'wrf_wps_rdhead_int', 'wrf_wps_read_int',
+ 'wrf_wps_write_int', 'write_matrix', 'write_table', 'yiqrgb',
+ 'z2geouv', 'zonal_mpsi', 'addfiles_GetVar', 'advect_variable',
+ 'area_conserve_remap_Wrap', 'area_hi2lores_Wrap',
+ 'array_append_record', 'assignFillValue', 'byte2flt',
+ 'byte2flt_hdf', 'calcDayAnomTLL', 'calcMonAnomLLLT',
+ 'calcMonAnomLLT', 'calcMonAnomTLL', 'calcMonAnomTLLL',
+ 'calculate_monthly_values', 'cd_convert', 'changeCase',
+ 'changeCaseChar', 'clmDayTLL', 'clmDayTLLL', 'clmMon2clmDay',
+ 'clmMonLLLT', 'clmMonLLT', 'clmMonTLL', 'clmMonTLLL', 'closest_val',
+ 'copy_VarAtts', 'copy_VarCoords', 'copy_VarCoords_1',
+ 'copy_VarCoords_2', 'copy_VarMeta', 'copyatt', 'crossp3',
+ 'cshstringtolist', 'cssgrid_Wrap', 'dble2flt', 'decimalPlaces',
+ 'delete_VarAtts', 'dim_avg_n_Wrap', 'dim_avg_wgt_n_Wrap',
+ 'dim_avg_wgt_Wrap', 'dim_avg_Wrap', 'dim_cumsum_n_Wrap',
+ 'dim_cumsum_Wrap', 'dim_max_n_Wrap', 'dim_min_n_Wrap',
+ 'dim_rmsd_n_Wrap', 'dim_rmsd_Wrap', 'dim_rmvmean_n_Wrap',
+ 'dim_rmvmean_Wrap', 'dim_rmvmed_n_Wrap', 'dim_rmvmed_Wrap',
+ 'dim_standardize_n_Wrap', 'dim_standardize_Wrap',
+ 'dim_stddev_n_Wrap', 'dim_stddev_Wrap', 'dim_sum_n_Wrap',
+ 'dim_sum_wgt_n_Wrap', 'dim_sum_wgt_Wrap', 'dim_sum_Wrap',
+ 'dim_variance_n_Wrap', 'dim_variance_Wrap', 'dpres_plevel_Wrap',
+ 'dtrend_leftdim', 'dv2uvF_Wrap', 'dv2uvG_Wrap', 'eof_north',
+ 'eofcor_Wrap', 'eofcov_Wrap', 'eofunc_north', 'eofunc_ts_Wrap',
+ 'eofunc_varimax_reorder', 'eofunc_varimax_Wrap', 'eofunc_Wrap',
+ 'epsZero', 'f2fosh_Wrap', 'f2foshv_Wrap', 'f2fsh_Wrap',
+ 'f2fshv_Wrap', 'f2gsh_Wrap', 'f2gshv_Wrap', 'fbindirSwap',
+ 'fbinseqSwap1', 'fbinseqSwap2', 'flt2dble', 'flt2string',
+ 'fo2fsh_Wrap', 'fo2fshv_Wrap', 'g2fsh_Wrap', 'g2fshv_Wrap',
+ 'g2gsh_Wrap', 'g2gshv_Wrap', 'generate_resample_indices',
+ 'generate_sample_indices', 'generate_unique_indices',
+ 'genNormalDist', 'get1Dindex', 'get1Dindex_Collapse',
+ 'get1Dindex_Exclude', 'get_file_suffix', 'GetFillColor',
+ 'GetFillColorIndex', 'getFillValue', 'getind_latlon2d',
+ 'getVarDimNames', 'getVarFillValue', 'grib_stime2itime',
+ 'hyi2hyo_Wrap', 'ilapsF_Wrap', 'ilapsG_Wrap', 'ind_nearest_coord',
+ 'indStrSubset', 'int2dble', 'int2flt', 'int2p_n_Wrap', 'int2p_Wrap',
+ 'isMonotonic', 'isStrSubset', 'latGau', 'latGauWgt', 'latGlobeF',
+ 'latGlobeFo', 'latRegWgt', 'linint1_n_Wrap', 'linint1_Wrap',
+ 'linint2_points_Wrap', 'linint2_Wrap', 'local_max_1d',
+ 'local_min_1d', 'lonFlip', 'lonGlobeF', 'lonGlobeFo', 'lonPivot',
+ 'merge_levels_sfc', 'mod', 'month_to_annual',
+ 'month_to_annual_weighted', 'month_to_season', 'month_to_season12',
+ 'month_to_seasonN', 'monthly_total_to_daily_mean', 'nameDim',
+ 'natgrid_Wrap', 'NewCosWeight', 'niceLatLon2D', 'NormCosWgtGlobe',
+ 'numAsciiCol', 'numAsciiRow', 'numeric2int',
+ 'obj_anal_ic_deprecated', 'obj_anal_ic_Wrap', 'omega_ccm_driver',
+ 'omega_to_w', 'oneDtostring', 'pack_values', 'pattern_cor', 'pdfx',
+ 'pdfxy', 'pdfxy_conform', 'pot_temp', 'pot_vort_hybrid',
+ 'pot_vort_isobaric', 'pres2hybrid_Wrap', 'print_clock',
+ 'printMinMax', 'quadroots', 'rcm2points_Wrap', 'rcm2rgrid_Wrap',
+ 'readAsciiHead', 'readAsciiTable', 'reg_multlin_stats',
+ 'region_ind', 'regline_stats', 'relhum_ttd', 'replaceSingleChar',
+ 'RGBtoCmap', 'rgrid2rcm_Wrap', 'rho_mwjf', 'rm_single_dims',
+ 'rmAnnCycle1D', 'rmInsufData', 'rmMonAnnCycLLLT', 'rmMonAnnCycLLT',
+ 'rmMonAnnCycTLL', 'runave_n_Wrap', 'runave_Wrap', 'short2flt',
+ 'short2flt_hdf', 'shsgc_R42_Wrap', 'sign_f90', 'sign_matlab',
+ 'smth9_Wrap', 'smthClmDayTLL', 'smthClmDayTLLL', 'SqrtCosWeight',
+ 'stat_dispersion', 'static_stability', 'stdMonLLLT', 'stdMonLLT',
+ 'stdMonTLL', 'stdMonTLLL', 'symMinMaxPlt', 'table_attach_columns',
+ 'table_attach_rows', 'time_to_newtime', 'transpose',
+ 'triple2grid_Wrap', 'ut_convert', 'uv2dvF_Wrap', 'uv2dvG_Wrap',
+ 'uv2vrF_Wrap', 'uv2vrG_Wrap', 'vr2uvF_Wrap', 'vr2uvG_Wrap',
+ 'w_to_omega', 'wallClockElapseTime', 'wave_number_spc',
+ 'wgt_areaave_Wrap', 'wgt_runave_leftdim', 'wgt_runave_n_Wrap',
+ 'wgt_runave_Wrap', 'wgt_vertical_n', 'wind_component',
+ 'wind_direction', 'yyyyddd_to_yyyymmdd', 'yyyymm_time',
+ 'yyyymm_to_yyyyfrac', 'yyyymmdd_time', 'yyyymmdd_to_yyyyddd',
+ 'yyyymmdd_to_yyyyfrac', 'yyyymmddhh_time', 'yyyymmddhh_to_yyyyfrac',
+ 'zonal_mpsi_Wrap', 'zonalAve', 'calendar_decode2', 'cd_string',
+ 'kf_filter', 'run_cor', 'time_axis_labels', 'ut_string',
+ 'wrf_contour', 'wrf_map', 'wrf_map_overlay', 'wrf_map_overlays',
+ 'wrf_map_resources', 'wrf_map_zoom', 'wrf_overlay', 'wrf_overlays',
+ 'wrf_user_getvar', 'wrf_user_ij_to_ll', 'wrf_user_intrp2d',
+ 'wrf_user_intrp3d', 'wrf_user_latlon_to_ij', 'wrf_user_list_times',
+ 'wrf_user_ll_to_ij', 'wrf_user_unstagger', 'wrf_user_vert_interp',
+ 'wrf_vector', 'gsn_add_annotation', 'gsn_add_polygon',
+ 'gsn_add_polyline', 'gsn_add_polymarker',
+ 'gsn_add_shapefile_polygons', 'gsn_add_shapefile_polylines',
+ 'gsn_add_shapefile_polymarkers', 'gsn_add_text', 'gsn_attach_plots',
+ 'gsn_blank_plot', 'gsn_contour', 'gsn_contour_map',
+ 'gsn_contour_shade', 'gsn_coordinates', 'gsn_create_labelbar',
+ 'gsn_create_legend', 'gsn_create_text',
+ 'gsn_csm_attach_zonal_means', 'gsn_csm_blank_plot',
+ 'gsn_csm_contour', 'gsn_csm_contour_map', 'gsn_csm_contour_map_ce',
+ 'gsn_csm_contour_map_overlay', 'gsn_csm_contour_map_polar',
+ 'gsn_csm_hov', 'gsn_csm_lat_time', 'gsn_csm_map', 'gsn_csm_map_ce',
+ 'gsn_csm_map_polar', 'gsn_csm_pres_hgt',
+ 'gsn_csm_pres_hgt_streamline', 'gsn_csm_pres_hgt_vector',
+ 'gsn_csm_streamline', 'gsn_csm_streamline_contour_map',
+ 'gsn_csm_streamline_contour_map_ce',
+ 'gsn_csm_streamline_contour_map_polar', 'gsn_csm_streamline_map',
+ 'gsn_csm_streamline_map_ce', 'gsn_csm_streamline_map_polar',
+ 'gsn_csm_streamline_scalar', 'gsn_csm_streamline_scalar_map',
+ 'gsn_csm_streamline_scalar_map_ce',
+ 'gsn_csm_streamline_scalar_map_polar', 'gsn_csm_time_lat',
+ 'gsn_csm_vector', 'gsn_csm_vector_map', 'gsn_csm_vector_map_ce',
+ 'gsn_csm_vector_map_polar', 'gsn_csm_vector_scalar',
+ 'gsn_csm_vector_scalar_map', 'gsn_csm_vector_scalar_map_ce',
+ 'gsn_csm_vector_scalar_map_polar', 'gsn_csm_x2y', 'gsn_csm_x2y2',
+ 'gsn_csm_xy', 'gsn_csm_xy2', 'gsn_csm_xy3', 'gsn_csm_y',
+ 'gsn_define_colormap', 'gsn_draw_colormap', 'gsn_draw_named_colors',
+ 'gsn_histogram', 'gsn_labelbar_ndc', 'gsn_legend_ndc', 'gsn_map',
+ 'gsn_merge_colormaps', 'gsn_open_wks', 'gsn_panel', 'gsn_polygon',
+ 'gsn_polygon_ndc', 'gsn_polyline', 'gsn_polyline_ndc',
+ 'gsn_polymarker', 'gsn_polymarker_ndc', 'gsn_retrieve_colormap',
+ 'gsn_reverse_colormap', 'gsn_streamline', 'gsn_streamline_map',
+ 'gsn_streamline_scalar', 'gsn_streamline_scalar_map', 'gsn_table',
+ 'gsn_text', 'gsn_text_ndc', 'gsn_vector', 'gsn_vector_map',
+ 'gsn_vector_scalar', 'gsn_vector_scalar_map', 'gsn_xy', 'gsn_y',
+ 'hsv2rgb', 'maximize_output', 'namedcolor2rgb', 'namedcolor2rgba',
+ 'reset_device_coordinates', 'span_named_colors'), prefix=r'\b'),
+ Name.Builtin),
+
+ # Resources
+ (words((
+ 'amDataXF', 'amDataYF', 'amJust', 'amOn', 'amOrthogonalPosF',
+ 'amParallelPosF', 'amResizeNotify', 'amSide', 'amTrackData',
+ 'amViewId', 'amZone', 'appDefaultParent', 'appFileSuffix',
+ 'appResources', 'appSysDir', 'appUsrDir', 'caCopyArrays',
+ 'caXArray', 'caXCast', 'caXMaxV', 'caXMinV', 'caXMissingV',
+ 'caYArray', 'caYCast', 'caYMaxV', 'caYMinV', 'caYMissingV',
+ 'cnCellFillEdgeColor', 'cnCellFillMissingValEdgeColor',
+ 'cnConpackParams', 'cnConstFEnableFill', 'cnConstFLabelAngleF',
+ 'cnConstFLabelBackgroundColor', 'cnConstFLabelConstantSpacingF',
+ 'cnConstFLabelFont', 'cnConstFLabelFontAspectF',
+ 'cnConstFLabelFontColor', 'cnConstFLabelFontHeightF',
+ 'cnConstFLabelFontQuality', 'cnConstFLabelFontThicknessF',
+ 'cnConstFLabelFormat', 'cnConstFLabelFuncCode', 'cnConstFLabelJust',
+ 'cnConstFLabelOn', 'cnConstFLabelOrthogonalPosF',
+ 'cnConstFLabelParallelPosF', 'cnConstFLabelPerimColor',
+ 'cnConstFLabelPerimOn', 'cnConstFLabelPerimSpaceF',
+ 'cnConstFLabelPerimThicknessF', 'cnConstFLabelSide',
+ 'cnConstFLabelString', 'cnConstFLabelTextDirection',
+ 'cnConstFLabelZone', 'cnConstFUseInfoLabelRes',
+ 'cnExplicitLabelBarLabelsOn', 'cnExplicitLegendLabelsOn',
+ 'cnExplicitLineLabelsOn', 'cnFillBackgroundColor', 'cnFillColor',
+ 'cnFillColors', 'cnFillDotSizeF', 'cnFillDrawOrder', 'cnFillMode',
+ 'cnFillOn', 'cnFillOpacityF', 'cnFillPalette', 'cnFillPattern',
+ 'cnFillPatterns', 'cnFillScaleF', 'cnFillScales', 'cnFixFillBleed',
+ 'cnGridBoundFillColor', 'cnGridBoundFillPattern',
+ 'cnGridBoundFillScaleF', 'cnGridBoundPerimColor',
+ 'cnGridBoundPerimDashPattern', 'cnGridBoundPerimOn',
+ 'cnGridBoundPerimThicknessF', 'cnHighLabelAngleF',
+ 'cnHighLabelBackgroundColor', 'cnHighLabelConstantSpacingF',
+ 'cnHighLabelCount', 'cnHighLabelFont', 'cnHighLabelFontAspectF',
+ 'cnHighLabelFontColor', 'cnHighLabelFontHeightF',
+ 'cnHighLabelFontQuality', 'cnHighLabelFontThicknessF',
+ 'cnHighLabelFormat', 'cnHighLabelFuncCode', 'cnHighLabelPerimColor',
+ 'cnHighLabelPerimOn', 'cnHighLabelPerimSpaceF',
+ 'cnHighLabelPerimThicknessF', 'cnHighLabelString', 'cnHighLabelsOn',
+ 'cnHighLowLabelOverlapMode', 'cnHighUseLineLabelRes',
+ 'cnInfoLabelAngleF', 'cnInfoLabelBackgroundColor',
+ 'cnInfoLabelConstantSpacingF', 'cnInfoLabelFont',
+ 'cnInfoLabelFontAspectF', 'cnInfoLabelFontColor',
+ 'cnInfoLabelFontHeightF', 'cnInfoLabelFontQuality',
+ 'cnInfoLabelFontThicknessF', 'cnInfoLabelFormat',
+ 'cnInfoLabelFuncCode', 'cnInfoLabelJust', 'cnInfoLabelOn',
+ 'cnInfoLabelOrthogonalPosF', 'cnInfoLabelParallelPosF',
+ 'cnInfoLabelPerimColor', 'cnInfoLabelPerimOn',
+ 'cnInfoLabelPerimSpaceF', 'cnInfoLabelPerimThicknessF',
+ 'cnInfoLabelSide', 'cnInfoLabelString', 'cnInfoLabelTextDirection',
+ 'cnInfoLabelZone', 'cnLabelBarEndLabelsOn', 'cnLabelBarEndStyle',
+ 'cnLabelDrawOrder', 'cnLabelMasking', 'cnLabelScaleFactorF',
+ 'cnLabelScaleValueF', 'cnLabelScalingMode', 'cnLegendLevelFlags',
+ 'cnLevelCount', 'cnLevelFlag', 'cnLevelFlags', 'cnLevelSelectionMode',
+ 'cnLevelSpacingF', 'cnLevels', 'cnLineColor', 'cnLineColors',
+ 'cnLineDashPattern', 'cnLineDashPatterns', 'cnLineDashSegLenF',
+ 'cnLineDrawOrder', 'cnLineLabelAngleF', 'cnLineLabelBackgroundColor',
+ 'cnLineLabelConstantSpacingF', 'cnLineLabelCount',
+ 'cnLineLabelDensityF', 'cnLineLabelFont', 'cnLineLabelFontAspectF',
+ 'cnLineLabelFontColor', 'cnLineLabelFontColors',
+ 'cnLineLabelFontHeightF', 'cnLineLabelFontQuality',
+ 'cnLineLabelFontThicknessF', 'cnLineLabelFormat',
+ 'cnLineLabelFuncCode', 'cnLineLabelInterval', 'cnLineLabelPerimColor',
+ 'cnLineLabelPerimOn', 'cnLineLabelPerimSpaceF',
+ 'cnLineLabelPerimThicknessF', 'cnLineLabelPlacementMode',
+ 'cnLineLabelStrings', 'cnLineLabelsOn', 'cnLinePalette',
+ 'cnLineThicknessF', 'cnLineThicknesses', 'cnLinesOn',
+ 'cnLowLabelAngleF', 'cnLowLabelBackgroundColor',
+ 'cnLowLabelConstantSpacingF', 'cnLowLabelCount', 'cnLowLabelFont',
+ 'cnLowLabelFontAspectF', 'cnLowLabelFontColor',
+ 'cnLowLabelFontHeightF', 'cnLowLabelFontQuality',
+ 'cnLowLabelFontThicknessF', 'cnLowLabelFormat', 'cnLowLabelFuncCode',
+ 'cnLowLabelPerimColor', 'cnLowLabelPerimOn', 'cnLowLabelPerimSpaceF',
+ 'cnLowLabelPerimThicknessF', 'cnLowLabelString', 'cnLowLabelsOn',
+ 'cnLowUseHighLabelRes', 'cnMaxDataValueFormat', 'cnMaxLevelCount',
+ 'cnMaxLevelValF', 'cnMaxPointDistanceF', 'cnMinLevelValF',
+ 'cnMissingValFillColor', 'cnMissingValFillPattern',
+ 'cnMissingValFillScaleF', 'cnMissingValPerimColor',
+ 'cnMissingValPerimDashPattern', 'cnMissingValPerimGridBoundOn',
+ 'cnMissingValPerimOn', 'cnMissingValPerimThicknessF',
+ 'cnMonoFillColor', 'cnMonoFillPattern', 'cnMonoFillScale',
+ 'cnMonoLevelFlag', 'cnMonoLineColor', 'cnMonoLineDashPattern',
+ 'cnMonoLineLabelFontColor', 'cnMonoLineThickness', 'cnNoDataLabelOn',
+ 'cnNoDataLabelString', 'cnOutOfRangeFillColor',
+ 'cnOutOfRangeFillPattern', 'cnOutOfRangeFillScaleF',
+ 'cnOutOfRangePerimColor', 'cnOutOfRangePerimDashPattern',
+ 'cnOutOfRangePerimOn', 'cnOutOfRangePerimThicknessF',
+ 'cnRasterCellSizeF', 'cnRasterMinCellSizeF', 'cnRasterModeOn',
+ 'cnRasterSampleFactorF', 'cnRasterSmoothingOn', 'cnScalarFieldData',
+ 'cnSmoothingDistanceF', 'cnSmoothingOn', 'cnSmoothingTensionF',
+ 'cnSpanFillPalette', 'cnSpanLinePalette', 'ctCopyTables',
+ 'ctXElementSize', 'ctXMaxV', 'ctXMinV', 'ctXMissingV', 'ctXTable',
+ 'ctXTableLengths', 'ctXTableType', 'ctYElementSize', 'ctYMaxV',
+ 'ctYMinV', 'ctYMissingV', 'ctYTable', 'ctYTableLengths',
+ 'ctYTableType', 'dcDelayCompute', 'errBuffer',
+ 'errFileName', 'errFilePtr', 'errLevel', 'errPrint', 'errUnitNumber',
+ 'gsClipOn', 'gsColors', 'gsEdgeColor', 'gsEdgeDashPattern',
+ 'gsEdgeDashSegLenF', 'gsEdgeThicknessF', 'gsEdgesOn',
+ 'gsFillBackgroundColor', 'gsFillColor', 'gsFillDotSizeF',
+ 'gsFillIndex', 'gsFillLineThicknessF', 'gsFillOpacityF',
+ 'gsFillScaleF', 'gsFont', 'gsFontAspectF', 'gsFontColor',
+ 'gsFontHeightF', 'gsFontOpacityF', 'gsFontQuality',
+ 'gsFontThicknessF', 'gsLineColor', 'gsLineDashPattern',
+ 'gsLineDashSegLenF', 'gsLineLabelConstantSpacingF', 'gsLineLabelFont',
+ 'gsLineLabelFontAspectF', 'gsLineLabelFontColor',
+ 'gsLineLabelFontHeightF', 'gsLineLabelFontQuality',
+ 'gsLineLabelFontThicknessF', 'gsLineLabelFuncCode',
+ 'gsLineLabelString', 'gsLineOpacityF', 'gsLineThicknessF',
+ 'gsMarkerColor', 'gsMarkerIndex', 'gsMarkerOpacityF', 'gsMarkerSizeF',
+ 'gsMarkerThicknessF', 'gsSegments', 'gsTextAngleF',
+ 'gsTextConstantSpacingF', 'gsTextDirection', 'gsTextFuncCode',
+ 'gsTextJustification', 'gsnAboveYRefLineBarColors',
+ 'gsnAboveYRefLineBarFillScales', 'gsnAboveYRefLineBarPatterns',
+ 'gsnAboveYRefLineColor', 'gsnAddCyclic', 'gsnAttachBorderOn',
+ 'gsnAttachPlotsXAxis', 'gsnBelowYRefLineBarColors',
+ 'gsnBelowYRefLineBarFillScales', 'gsnBelowYRefLineBarPatterns',
+ 'gsnBelowYRefLineColor', 'gsnBoxMargin', 'gsnCenterString',
+ 'gsnCenterStringFontColor', 'gsnCenterStringFontHeightF',
+ 'gsnCenterStringFuncCode', 'gsnCenterStringOrthogonalPosF',
+ 'gsnCenterStringParallelPosF', 'gsnContourLineThicknessesScale',
+ 'gsnContourNegLineDashPattern', 'gsnContourPosLineDashPattern',
+ 'gsnContourZeroLineThicknessF', 'gsnDebugWriteFileName', 'gsnDraw',
+ 'gsnFrame', 'gsnHistogramBarWidthPercent', 'gsnHistogramBinIntervals',
+ 'gsnHistogramBinMissing', 'gsnHistogramBinWidth',
+ 'gsnHistogramClassIntervals', 'gsnHistogramCompare',
+ 'gsnHistogramComputePercentages',
+ 'gsnHistogramComputePercentagesNoMissing',
+ 'gsnHistogramDiscreteBinValues', 'gsnHistogramDiscreteClassValues',
+ 'gsnHistogramHorizontal', 'gsnHistogramMinMaxBinsOn',
+ 'gsnHistogramNumberOfBins', 'gsnHistogramPercentSign',
+ 'gsnHistogramSelectNiceIntervals', 'gsnLeftString',
+ 'gsnLeftStringFontColor', 'gsnLeftStringFontHeightF',
+ 'gsnLeftStringFuncCode', 'gsnLeftStringOrthogonalPosF',
+ 'gsnLeftStringParallelPosF', 'gsnMajorLatSpacing',
+ 'gsnMajorLonSpacing', 'gsnMaskLambertConformal',
+ 'gsnMaskLambertConformalOutlineOn', 'gsnMaximize',
+ 'gsnMinorLatSpacing', 'gsnMinorLonSpacing', 'gsnPanelBottom',
+ 'gsnPanelCenter', 'gsnPanelDebug', 'gsnPanelFigureStrings',
+ 'gsnPanelFigureStringsBackgroundFillColor',
+ 'gsnPanelFigureStringsFontHeightF', 'gsnPanelFigureStringsJust',
+ 'gsnPanelFigureStringsPerimOn', 'gsnPanelLabelBar', 'gsnPanelLeft',
+ 'gsnPanelMainFont', 'gsnPanelMainFontColor',
+ 'gsnPanelMainFontHeightF', 'gsnPanelMainString', 'gsnPanelRight',
+ 'gsnPanelRowSpec', 'gsnPanelScalePlotIndex', 'gsnPanelTop',
+ 'gsnPanelXF', 'gsnPanelXWhiteSpacePercent', 'gsnPanelYF',
+ 'gsnPanelYWhiteSpacePercent', 'gsnPaperHeight', 'gsnPaperMargin',
+ 'gsnPaperOrientation', 'gsnPaperWidth', 'gsnPolar',
+ 'gsnPolarLabelDistance', 'gsnPolarLabelFont',
+ 'gsnPolarLabelFontHeightF', 'gsnPolarLabelSpacing', 'gsnPolarTime',
+ 'gsnPolarUT', 'gsnRightString', 'gsnRightStringFontColor',
+ 'gsnRightStringFontHeightF', 'gsnRightStringFuncCode',
+ 'gsnRightStringOrthogonalPosF', 'gsnRightStringParallelPosF',
+ 'gsnScalarContour', 'gsnScale', 'gsnShape', 'gsnSpreadColorEnd',
+ 'gsnSpreadColorStart', 'gsnSpreadColors', 'gsnStringFont',
+ 'gsnStringFontColor', 'gsnStringFontHeightF', 'gsnStringFuncCode',
+ 'gsnTickMarksOn', 'gsnXAxisIrregular2Linear', 'gsnXAxisIrregular2Log',
+ 'gsnXRefLine', 'gsnXRefLineColor', 'gsnXRefLineDashPattern',
+ 'gsnXRefLineThicknessF', 'gsnXYAboveFillColors', 'gsnXYBarChart',
+ 'gsnXYBarChartBarWidth', 'gsnXYBarChartColors',
+ 'gsnXYBarChartColors2', 'gsnXYBarChartFillDotSizeF',
+ 'gsnXYBarChartFillLineThicknessF', 'gsnXYBarChartFillOpacityF',
+ 'gsnXYBarChartFillScaleF', 'gsnXYBarChartOutlineOnly',
+ 'gsnXYBarChartOutlineThicknessF', 'gsnXYBarChartPatterns',
+ 'gsnXYBarChartPatterns2', 'gsnXYBelowFillColors', 'gsnXYFillColors',
+ 'gsnXYFillOpacities', 'gsnXYLeftFillColors', 'gsnXYRightFillColors',
+ 'gsnYAxisIrregular2Linear', 'gsnYAxisIrregular2Log', 'gsnYRefLine',
+ 'gsnYRefLineColor', 'gsnYRefLineColors', 'gsnYRefLineDashPattern',
+ 'gsnYRefLineDashPatterns', 'gsnYRefLineThicknessF',
+ 'gsnYRefLineThicknesses', 'gsnZonalMean', 'gsnZonalMeanXMaxF',
+ 'gsnZonalMeanXMinF', 'gsnZonalMeanYRefLine', 'lbAutoManage',
+ 'lbBottomMarginF', 'lbBoxCount', 'lbBoxEndCapStyle', 'lbBoxFractions',
+ 'lbBoxLineColor', 'lbBoxLineDashPattern', 'lbBoxLineDashSegLenF',
+ 'lbBoxLineThicknessF', 'lbBoxLinesOn', 'lbBoxMajorExtentF',
+ 'lbBoxMinorExtentF', 'lbBoxSeparatorLinesOn', 'lbBoxSizing',
+ 'lbFillBackground', 'lbFillColor', 'lbFillColors', 'lbFillDotSizeF',
+ 'lbFillLineThicknessF', 'lbFillPattern', 'lbFillPatterns',
+ 'lbFillScaleF', 'lbFillScales', 'lbJustification', 'lbLabelAlignment',
+ 'lbLabelAngleF', 'lbLabelAutoStride', 'lbLabelBarOn',
+ 'lbLabelConstantSpacingF', 'lbLabelDirection', 'lbLabelFont',
+ 'lbLabelFontAspectF', 'lbLabelFontColor', 'lbLabelFontHeightF',
+ 'lbLabelFontQuality', 'lbLabelFontThicknessF', 'lbLabelFuncCode',
+ 'lbLabelJust', 'lbLabelOffsetF', 'lbLabelPosition', 'lbLabelStride',
+ 'lbLabelStrings', 'lbLabelsOn', 'lbLeftMarginF', 'lbMaxLabelLenF',
+ 'lbMinLabelSpacingF', 'lbMonoFillColor', 'lbMonoFillPattern',
+ 'lbMonoFillScale', 'lbOrientation', 'lbPerimColor',
+ 'lbPerimDashPattern', 'lbPerimDashSegLenF', 'lbPerimFill',
+ 'lbPerimFillColor', 'lbPerimOn', 'lbPerimThicknessF',
+ 'lbRasterFillOn', 'lbRightMarginF', 'lbTitleAngleF',
+ 'lbTitleConstantSpacingF', 'lbTitleDirection', 'lbTitleExtentF',
+ 'lbTitleFont', 'lbTitleFontAspectF', 'lbTitleFontColor',
+ 'lbTitleFontHeightF', 'lbTitleFontQuality', 'lbTitleFontThicknessF',
+ 'lbTitleFuncCode', 'lbTitleJust', 'lbTitleOffsetF', 'lbTitleOn',
+ 'lbTitlePosition', 'lbTitleString', 'lbTopMarginF', 'lgAutoManage',
+ 'lgBottomMarginF', 'lgBoxBackground', 'lgBoxLineColor',
+ 'lgBoxLineDashPattern', 'lgBoxLineDashSegLenF', 'lgBoxLineThicknessF',
+ 'lgBoxLinesOn', 'lgBoxMajorExtentF', 'lgBoxMinorExtentF',
+ 'lgDashIndex', 'lgDashIndexes', 'lgItemCount', 'lgItemOrder',
+ 'lgItemPlacement', 'lgItemPositions', 'lgItemType', 'lgItemTypes',
+ 'lgJustification', 'lgLabelAlignment', 'lgLabelAngleF',
+ 'lgLabelAutoStride', 'lgLabelConstantSpacingF', 'lgLabelDirection',
+ 'lgLabelFont', 'lgLabelFontAspectF', 'lgLabelFontColor',
+ 'lgLabelFontHeightF', 'lgLabelFontQuality', 'lgLabelFontThicknessF',
+ 'lgLabelFuncCode', 'lgLabelJust', 'lgLabelOffsetF', 'lgLabelPosition',
+ 'lgLabelStride', 'lgLabelStrings', 'lgLabelsOn', 'lgLeftMarginF',
+ 'lgLegendOn', 'lgLineColor', 'lgLineColors', 'lgLineDashSegLenF',
+ 'lgLineDashSegLens', 'lgLineLabelConstantSpacingF', 'lgLineLabelFont',
+ 'lgLineLabelFontAspectF', 'lgLineLabelFontColor',
+ 'lgLineLabelFontColors', 'lgLineLabelFontHeightF',
+ 'lgLineLabelFontHeights', 'lgLineLabelFontQuality',
+ 'lgLineLabelFontThicknessF', 'lgLineLabelFuncCode',
+ 'lgLineLabelStrings', 'lgLineLabelsOn', 'lgLineThicknessF',
+ 'lgLineThicknesses', 'lgMarkerColor', 'lgMarkerColors',
+ 'lgMarkerIndex', 'lgMarkerIndexes', 'lgMarkerSizeF', 'lgMarkerSizes',
+ 'lgMarkerThicknessF', 'lgMarkerThicknesses', 'lgMonoDashIndex',
+ 'lgMonoItemType', 'lgMonoLineColor', 'lgMonoLineDashSegLen',
+ 'lgMonoLineLabelFontColor', 'lgMonoLineLabelFontHeight',
+ 'lgMonoLineThickness', 'lgMonoMarkerColor', 'lgMonoMarkerIndex',
+ 'lgMonoMarkerSize', 'lgMonoMarkerThickness', 'lgOrientation',
+ 'lgPerimColor', 'lgPerimDashPattern', 'lgPerimDashSegLenF',
+ 'lgPerimFill', 'lgPerimFillColor', 'lgPerimOn', 'lgPerimThicknessF',
+ 'lgRightMarginF', 'lgTitleAngleF', 'lgTitleConstantSpacingF',
+ 'lgTitleDirection', 'lgTitleExtentF', 'lgTitleFont',
+ 'lgTitleFontAspectF', 'lgTitleFontColor', 'lgTitleFontHeightF',
+ 'lgTitleFontQuality', 'lgTitleFontThicknessF', 'lgTitleFuncCode',
+ 'lgTitleJust', 'lgTitleOffsetF', 'lgTitleOn', 'lgTitlePosition',
+ 'lgTitleString', 'lgTopMarginF', 'mpAreaGroupCount',
+ 'mpAreaMaskingOn', 'mpAreaNames', 'mpAreaTypes', 'mpBottomAngleF',
+ 'mpBottomMapPosF', 'mpBottomNDCF', 'mpBottomNPCF',
+ 'mpBottomPointLatF', 'mpBottomPointLonF', 'mpBottomWindowF',
+ 'mpCenterLatF', 'mpCenterLonF', 'mpCenterRotF', 'mpCountyLineColor',
+ 'mpCountyLineDashPattern', 'mpCountyLineDashSegLenF',
+ 'mpCountyLineThicknessF', 'mpDataBaseVersion', 'mpDataResolution',
+ 'mpDataSetName', 'mpDefaultFillColor', 'mpDefaultFillPattern',
+ 'mpDefaultFillScaleF', 'mpDynamicAreaGroups', 'mpEllipticalBoundary',
+ 'mpFillAreaSpecifiers', 'mpFillBoundarySets', 'mpFillColor',
+ 'mpFillColors', 'mpFillColors-default', 'mpFillDotSizeF',
+ 'mpFillDrawOrder', 'mpFillOn', 'mpFillPatternBackground',
+ 'mpFillPattern', 'mpFillPatterns', 'mpFillPatterns-default',
+ 'mpFillScaleF', 'mpFillScales', 'mpFillScales-default',
+ 'mpFixedAreaGroups', 'mpGeophysicalLineColor',
+ 'mpGeophysicalLineDashPattern', 'mpGeophysicalLineDashSegLenF',
+ 'mpGeophysicalLineThicknessF', 'mpGreatCircleLinesOn',
+ 'mpGridAndLimbDrawOrder', 'mpGridAndLimbOn', 'mpGridLatSpacingF',
+ 'mpGridLineColor', 'mpGridLineDashPattern', 'mpGridLineDashSegLenF',
+ 'mpGridLineThicknessF', 'mpGridLonSpacingF', 'mpGridMaskMode',
+ 'mpGridMaxLatF', 'mpGridPolarLonSpacingF', 'mpGridSpacingF',
+ 'mpInlandWaterFillColor', 'mpInlandWaterFillPattern',
+ 'mpInlandWaterFillScaleF', 'mpLabelDrawOrder', 'mpLabelFontColor',
+ 'mpLabelFontHeightF', 'mpLabelsOn', 'mpLambertMeridianF',
+ 'mpLambertParallel1F', 'mpLambertParallel2F', 'mpLandFillColor',
+ 'mpLandFillPattern', 'mpLandFillScaleF', 'mpLeftAngleF',
+ 'mpLeftCornerLatF', 'mpLeftCornerLonF', 'mpLeftMapPosF',
+ 'mpLeftNDCF', 'mpLeftNPCF', 'mpLeftPointLatF',
+ 'mpLeftPointLonF', 'mpLeftWindowF', 'mpLimbLineColor',
+ 'mpLimbLineDashPattern', 'mpLimbLineDashSegLenF',
+ 'mpLimbLineThicknessF', 'mpLimitMode', 'mpMaskAreaSpecifiers',
+ 'mpMaskOutlineSpecifiers', 'mpMaxLatF', 'mpMaxLonF',
+ 'mpMinLatF', 'mpMinLonF', 'mpMonoFillColor', 'mpMonoFillPattern',
+ 'mpMonoFillScale', 'mpNationalLineColor', 'mpNationalLineDashPattern',
+ 'mpNationalLineThicknessF', 'mpOceanFillColor', 'mpOceanFillPattern',
+ 'mpOceanFillScaleF', 'mpOutlineBoundarySets', 'mpOutlineDrawOrder',
+ 'mpOutlineMaskingOn', 'mpOutlineOn', 'mpOutlineSpecifiers',
+ 'mpPerimDrawOrder', 'mpPerimLineColor', 'mpPerimLineDashPattern',
+ 'mpPerimLineDashSegLenF', 'mpPerimLineThicknessF', 'mpPerimOn',
+ 'mpPolyMode', 'mpProjection', 'mpProvincialLineColor',
+ 'mpProvincialLineDashPattern', 'mpProvincialLineDashSegLenF',
+ 'mpProvincialLineThicknessF', 'mpRelativeCenterLat',
+ 'mpRelativeCenterLon', 'mpRightAngleF', 'mpRightCornerLatF',
+ 'mpRightCornerLonF', 'mpRightMapPosF', 'mpRightNDCF',
+ 'mpRightNPCF', 'mpRightPointLatF', 'mpRightPointLonF',
+ 'mpRightWindowF', 'mpSatelliteAngle1F', 'mpSatelliteAngle2F',
+ 'mpSatelliteDistF', 'mpShapeMode', 'mpSpecifiedFillColors',
+ 'mpSpecifiedFillDirectIndexing', 'mpSpecifiedFillPatterns',
+ 'mpSpecifiedFillPriority', 'mpSpecifiedFillScales',
+ 'mpTopAngleF', 'mpTopMapPosF', 'mpTopNDCF', 'mpTopNPCF',
+ 'mpTopPointLatF', 'mpTopPointLonF', 'mpTopWindowF',
+ 'mpUSStateLineColor', 'mpUSStateLineDashPattern',
+ 'mpUSStateLineDashSegLenF', 'mpUSStateLineThicknessF',
+ 'pmAnnoManagers', 'pmAnnoViews', 'pmLabelBarDisplayMode',
+ 'pmLabelBarHeightF', 'pmLabelBarKeepAspect', 'pmLabelBarOrthogonalPosF',
+ 'pmLabelBarParallelPosF', 'pmLabelBarSide', 'pmLabelBarWidthF',
+ 'pmLabelBarZone', 'pmLegendDisplayMode', 'pmLegendHeightF',
+ 'pmLegendKeepAspect', 'pmLegendOrthogonalPosF',
+ 'pmLegendParallelPosF', 'pmLegendSide', 'pmLegendWidthF',
+ 'pmLegendZone', 'pmOverlaySequenceIds', 'pmTickMarkDisplayMode',
+ 'pmTickMarkZone', 'pmTitleDisplayMode', 'pmTitleZone',
+ 'prGraphicStyle', 'prPolyType', 'prXArray', 'prYArray',
+ 'sfCopyData', 'sfDataArray', 'sfDataMaxV', 'sfDataMinV',
+ 'sfElementNodes', 'sfExchangeDimensions', 'sfFirstNodeIndex',
+ 'sfMissingValueV', 'sfXArray', 'sfXCActualEndF', 'sfXCActualStartF',
+ 'sfXCEndIndex', 'sfXCEndSubsetV', 'sfXCEndV', 'sfXCStartIndex',
+ 'sfXCStartSubsetV', 'sfXCStartV', 'sfXCStride', 'sfXCellBounds',
+ 'sfYArray', 'sfYCActualEndF', 'sfYCActualStartF', 'sfYCEndIndex',
+ 'sfYCEndSubsetV', 'sfYCEndV', 'sfYCStartIndex', 'sfYCStartSubsetV',
+ 'sfYCStartV', 'sfYCStride', 'sfYCellBounds', 'stArrowLengthF',
+ 'stArrowStride', 'stCrossoverCheckCount',
+ 'stExplicitLabelBarLabelsOn', 'stLabelBarEndLabelsOn',
+ 'stLabelFormat', 'stLengthCheckCount', 'stLevelColors',
+ 'stLevelCount', 'stLevelPalette', 'stLevelSelectionMode',
+ 'stLevelSpacingF', 'stLevels', 'stLineColor', 'stLineOpacityF',
+ 'stLineStartStride', 'stLineThicknessF', 'stMapDirection',
+ 'stMaxLevelCount', 'stMaxLevelValF', 'stMinArrowSpacingF',
+ 'stMinDistanceF', 'stMinLevelValF', 'stMinLineSpacingF',
+ 'stMinStepFactorF', 'stMonoLineColor', 'stNoDataLabelOn',
+ 'stNoDataLabelString', 'stScalarFieldData', 'stScalarMissingValColor',
+ 'stSpanLevelPalette', 'stStepSizeF', 'stStreamlineDrawOrder',
+ 'stUseScalarArray', 'stVectorFieldData', 'stZeroFLabelAngleF',
+ 'stZeroFLabelBackgroundColor', 'stZeroFLabelConstantSpacingF',
+ 'stZeroFLabelFont', 'stZeroFLabelFontAspectF',
+ 'stZeroFLabelFontColor', 'stZeroFLabelFontHeightF',
+ 'stZeroFLabelFontQuality', 'stZeroFLabelFontThicknessF',
+ 'stZeroFLabelFuncCode', 'stZeroFLabelJust', 'stZeroFLabelOn',
+ 'stZeroFLabelOrthogonalPosF', 'stZeroFLabelParallelPosF',
+ 'stZeroFLabelPerimColor', 'stZeroFLabelPerimOn',
+ 'stZeroFLabelPerimSpaceF', 'stZeroFLabelPerimThicknessF',
+ 'stZeroFLabelSide', 'stZeroFLabelString', 'stZeroFLabelTextDirection',
+ 'stZeroFLabelZone', 'tfDoNDCOverlay', 'tfPlotManagerOn',
+ 'tfPolyDrawList', 'tfPolyDrawOrder', 'tiDeltaF', 'tiMainAngleF',
+ 'tiMainConstantSpacingF', 'tiMainDirection', 'tiMainFont',
+ 'tiMainFontAspectF', 'tiMainFontColor', 'tiMainFontHeightF',
+ 'tiMainFontQuality', 'tiMainFontThicknessF', 'tiMainFuncCode',
+ 'tiMainJust', 'tiMainOffsetXF', 'tiMainOffsetYF', 'tiMainOn',
+ 'tiMainPosition', 'tiMainSide', 'tiMainString', 'tiUseMainAttributes',
+ 'tiXAxisAngleF', 'tiXAxisConstantSpacingF', 'tiXAxisDirection',
+ 'tiXAxisFont', 'tiXAxisFontAspectF', 'tiXAxisFontColor',
+ 'tiXAxisFontHeightF', 'tiXAxisFontQuality', 'tiXAxisFontThicknessF',
+ 'tiXAxisFuncCode', 'tiXAxisJust', 'tiXAxisOffsetXF',
+ 'tiXAxisOffsetYF', 'tiXAxisOn', 'tiXAxisPosition', 'tiXAxisSide',
+ 'tiXAxisString', 'tiYAxisAngleF', 'tiYAxisConstantSpacingF',
+ 'tiYAxisDirection', 'tiYAxisFont', 'tiYAxisFontAspectF',
+ 'tiYAxisFontColor', 'tiYAxisFontHeightF', 'tiYAxisFontQuality',
+ 'tiYAxisFontThicknessF', 'tiYAxisFuncCode', 'tiYAxisJust',
+ 'tiYAxisOffsetXF', 'tiYAxisOffsetYF', 'tiYAxisOn', 'tiYAxisPosition',
+ 'tiYAxisSide', 'tiYAxisString', 'tmBorderLineColor',
+ 'tmBorderThicknessF', 'tmEqualizeXYSizes', 'tmLabelAutoStride',
+ 'tmSciNoteCutoff', 'tmXBAutoPrecision', 'tmXBBorderOn',
+ 'tmXBDataLeftF', 'tmXBDataRightF', 'tmXBFormat', 'tmXBIrrTensionF',
+ 'tmXBIrregularPoints', 'tmXBLabelAngleF', 'tmXBLabelConstantSpacingF',
+ 'tmXBLabelDeltaF', 'tmXBLabelDirection', 'tmXBLabelFont',
+ 'tmXBLabelFontAspectF', 'tmXBLabelFontColor', 'tmXBLabelFontHeightF',
+ 'tmXBLabelFontQuality', 'tmXBLabelFontThicknessF',
+ 'tmXBLabelFuncCode', 'tmXBLabelJust', 'tmXBLabelStride', 'tmXBLabels',
+ 'tmXBLabelsOn', 'tmXBMajorLengthF', 'tmXBMajorLineColor',
+ 'tmXBMajorOutwardLengthF', 'tmXBMajorThicknessF', 'tmXBMaxLabelLenF',
+ 'tmXBMaxTicks', 'tmXBMinLabelSpacingF', 'tmXBMinorLengthF',
+ 'tmXBMinorLineColor', 'tmXBMinorOn', 'tmXBMinorOutwardLengthF',
+ 'tmXBMinorPerMajor', 'tmXBMinorThicknessF', 'tmXBMinorValues',
+ 'tmXBMode', 'tmXBOn', 'tmXBPrecision', 'tmXBStyle', 'tmXBTickEndF',
+ 'tmXBTickSpacingF', 'tmXBTickStartF', 'tmXBValues', 'tmXMajorGrid',
+ 'tmXMajorGridLineColor', 'tmXMajorGridLineDashPattern',
+ 'tmXMajorGridThicknessF', 'tmXMinorGrid', 'tmXMinorGridLineColor',
+ 'tmXMinorGridLineDashPattern', 'tmXMinorGridThicknessF',
+ 'tmXTAutoPrecision', 'tmXTBorderOn', 'tmXTDataLeftF',
+ 'tmXTDataRightF', 'tmXTFormat', 'tmXTIrrTensionF',
+ 'tmXTIrregularPoints', 'tmXTLabelAngleF', 'tmXTLabelConstantSpacingF',
+ 'tmXTLabelDeltaF', 'tmXTLabelDirection', 'tmXTLabelFont',
+ 'tmXTLabelFontAspectF', 'tmXTLabelFontColor', 'tmXTLabelFontHeightF',
+ 'tmXTLabelFontQuality', 'tmXTLabelFontThicknessF',
+ 'tmXTLabelFuncCode', 'tmXTLabelJust', 'tmXTLabelStride', 'tmXTLabels',
+ 'tmXTLabelsOn', 'tmXTMajorLengthF', 'tmXTMajorLineColor',
+ 'tmXTMajorOutwardLengthF', 'tmXTMajorThicknessF', 'tmXTMaxLabelLenF',
+ 'tmXTMaxTicks', 'tmXTMinLabelSpacingF', 'tmXTMinorLengthF',
+ 'tmXTMinorLineColor', 'tmXTMinorOn', 'tmXTMinorOutwardLengthF',
+ 'tmXTMinorPerMajor', 'tmXTMinorThicknessF', 'tmXTMinorValues',
+ 'tmXTMode', 'tmXTOn', 'tmXTPrecision', 'tmXTStyle', 'tmXTTickEndF',
+ 'tmXTTickSpacingF', 'tmXTTickStartF', 'tmXTValues', 'tmXUseBottom',
+ 'tmYLAutoPrecision', 'tmYLBorderOn', 'tmYLDataBottomF',
+ 'tmYLDataTopF', 'tmYLFormat', 'tmYLIrrTensionF',
+ 'tmYLIrregularPoints', 'tmYLLabelAngleF', 'tmYLLabelConstantSpacingF',
+ 'tmYLLabelDeltaF', 'tmYLLabelDirection', 'tmYLLabelFont',
+ 'tmYLLabelFontAspectF', 'tmYLLabelFontColor', 'tmYLLabelFontHeightF',
+ 'tmYLLabelFontQuality', 'tmYLLabelFontThicknessF',
+ 'tmYLLabelFuncCode', 'tmYLLabelJust', 'tmYLLabelStride', 'tmYLLabels',
+ 'tmYLLabelsOn', 'tmYLMajorLengthF', 'tmYLMajorLineColor',
+ 'tmYLMajorOutwardLengthF', 'tmYLMajorThicknessF', 'tmYLMaxLabelLenF',
+ 'tmYLMaxTicks', 'tmYLMinLabelSpacingF', 'tmYLMinorLengthF',
+ 'tmYLMinorLineColor', 'tmYLMinorOn', 'tmYLMinorOutwardLengthF',
+ 'tmYLMinorPerMajor', 'tmYLMinorThicknessF', 'tmYLMinorValues',
+ 'tmYLMode', 'tmYLOn', 'tmYLPrecision', 'tmYLStyle', 'tmYLTickEndF',
+ 'tmYLTickSpacingF', 'tmYLTickStartF', 'tmYLValues', 'tmYMajorGrid',
+ 'tmYMajorGridLineColor', 'tmYMajorGridLineDashPattern',
+ 'tmYMajorGridThicknessF', 'tmYMinorGrid', 'tmYMinorGridLineColor',
+ 'tmYMinorGridLineDashPattern', 'tmYMinorGridThicknessF',
+ 'tmYRAutoPrecision', 'tmYRBorderOn', 'tmYRDataBottomF',
+ 'tmYRDataTopF', 'tmYRFormat', 'tmYRIrrTensionF',
+ 'tmYRIrregularPoints', 'tmYRLabelAngleF', 'tmYRLabelConstantSpacingF',
+ 'tmYRLabelDeltaF', 'tmYRLabelDirection', 'tmYRLabelFont',
+ 'tmYRLabelFontAspectF', 'tmYRLabelFontColor', 'tmYRLabelFontHeightF',
+ 'tmYRLabelFontQuality', 'tmYRLabelFontThicknessF',
+ 'tmYRLabelFuncCode', 'tmYRLabelJust', 'tmYRLabelStride', 'tmYRLabels',
+ 'tmYRLabelsOn', 'tmYRMajorLengthF', 'tmYRMajorLineColor',
+ 'tmYRMajorOutwardLengthF', 'tmYRMajorThicknessF', 'tmYRMaxLabelLenF',
+ 'tmYRMaxTicks', 'tmYRMinLabelSpacingF', 'tmYRMinorLengthF',
+ 'tmYRMinorLineColor', 'tmYRMinorOn', 'tmYRMinorOutwardLengthF',
+ 'tmYRMinorPerMajor', 'tmYRMinorThicknessF', 'tmYRMinorValues',
+ 'tmYRMode', 'tmYROn', 'tmYRPrecision', 'tmYRStyle', 'tmYRTickEndF',
+ 'tmYRTickSpacingF', 'tmYRTickStartF', 'tmYRValues', 'tmYUseLeft',
+ 'trGridType', 'trLineInterpolationOn',
+ 'trXAxisType', 'trXCoordPoints', 'trXInterPoints', 'trXLog',
+ 'trXMaxF', 'trXMinF', 'trXReverse', 'trXSamples', 'trXTensionF',
+ 'trYAxisType', 'trYCoordPoints', 'trYInterPoints', 'trYLog',
+ 'trYMaxF', 'trYMinF', 'trYReverse', 'trYSamples', 'trYTensionF',
+ 'txAngleF', 'txBackgroundFillColor', 'txConstantSpacingF', 'txDirection',
+ 'txFont', 'HLU-Fonts', 'txFontAspectF', 'txFontColor',
+ 'txFontHeightF', 'txFontOpacityF', 'txFontQuality',
+ 'txFontThicknessF', 'txFuncCode', 'txJust', 'txPerimColor',
+ 'txPerimDashLengthF', 'txPerimDashPattern', 'txPerimOn',
+ 'txPerimSpaceF', 'txPerimThicknessF', 'txPosXF', 'txPosYF',
+ 'txString', 'vcExplicitLabelBarLabelsOn', 'vcFillArrowEdgeColor',
+ 'vcFillArrowEdgeThicknessF', 'vcFillArrowFillColor',
+ 'vcFillArrowHeadInteriorXF', 'vcFillArrowHeadMinFracXF',
+ 'vcFillArrowHeadMinFracYF', 'vcFillArrowHeadXF', 'vcFillArrowHeadYF',
+ 'vcFillArrowMinFracWidthF', 'vcFillArrowWidthF', 'vcFillArrowsOn',
+ 'vcFillOverEdge', 'vcGlyphOpacityF', 'vcGlyphStyle',
+ 'vcLabelBarEndLabelsOn', 'vcLabelFontColor', 'vcLabelFontHeightF',
+ 'vcLabelsOn', 'vcLabelsUseVectorColor', 'vcLevelColors',
+ 'vcLevelCount', 'vcLevelPalette', 'vcLevelSelectionMode',
+ 'vcLevelSpacingF', 'vcLevels', 'vcLineArrowColor',
+ 'vcLineArrowHeadMaxSizeF', 'vcLineArrowHeadMinSizeF',
+ 'vcLineArrowThicknessF', 'vcMagnitudeFormat',
+ 'vcMagnitudeScaleFactorF', 'vcMagnitudeScaleValueF',
+ 'vcMagnitudeScalingMode', 'vcMapDirection', 'vcMaxLevelCount',
+ 'vcMaxLevelValF', 'vcMaxMagnitudeF', 'vcMinAnnoAngleF',
+ 'vcMinAnnoArrowAngleF', 'vcMinAnnoArrowEdgeColor',
+ 'vcMinAnnoArrowFillColor', 'vcMinAnnoArrowLineColor',
+ 'vcMinAnnoArrowMinOffsetF', 'vcMinAnnoArrowSpaceF',
+ 'vcMinAnnoArrowUseVecColor', 'vcMinAnnoBackgroundColor',
+ 'vcMinAnnoConstantSpacingF', 'vcMinAnnoExplicitMagnitudeF',
+ 'vcMinAnnoFont', 'vcMinAnnoFontAspectF', 'vcMinAnnoFontColor',
+ 'vcMinAnnoFontHeightF', 'vcMinAnnoFontQuality',
+ 'vcMinAnnoFontThicknessF', 'vcMinAnnoFuncCode', 'vcMinAnnoJust',
+ 'vcMinAnnoOn', 'vcMinAnnoOrientation', 'vcMinAnnoOrthogonalPosF',
+ 'vcMinAnnoParallelPosF', 'vcMinAnnoPerimColor', 'vcMinAnnoPerimOn',
+ 'vcMinAnnoPerimSpaceF', 'vcMinAnnoPerimThicknessF', 'vcMinAnnoSide',
+ 'vcMinAnnoString1', 'vcMinAnnoString1On', 'vcMinAnnoString2',
+ 'vcMinAnnoString2On', 'vcMinAnnoTextDirection', 'vcMinAnnoZone',
+ 'vcMinDistanceF', 'vcMinFracLengthF', 'vcMinLevelValF',
+ 'vcMinMagnitudeF', 'vcMonoFillArrowEdgeColor',
+ 'vcMonoFillArrowFillColor', 'vcMonoLineArrowColor',
+ 'vcMonoWindBarbColor', 'vcNoDataLabelOn', 'vcNoDataLabelString',
+ 'vcPositionMode', 'vcRefAnnoAngleF', 'vcRefAnnoArrowAngleF',
+ 'vcRefAnnoArrowEdgeColor', 'vcRefAnnoArrowFillColor',
+ 'vcRefAnnoArrowLineColor', 'vcRefAnnoArrowMinOffsetF',
+ 'vcRefAnnoArrowSpaceF', 'vcRefAnnoArrowUseVecColor',
+ 'vcRefAnnoBackgroundColor', 'vcRefAnnoConstantSpacingF',
+ 'vcRefAnnoExplicitMagnitudeF', 'vcRefAnnoFont',
+ 'vcRefAnnoFontAspectF', 'vcRefAnnoFontColor', 'vcRefAnnoFontHeightF',
+ 'vcRefAnnoFontQuality', 'vcRefAnnoFontThicknessF',
+ 'vcRefAnnoFuncCode', 'vcRefAnnoJust', 'vcRefAnnoOn',
+ 'vcRefAnnoOrientation', 'vcRefAnnoOrthogonalPosF',
+ 'vcRefAnnoParallelPosF', 'vcRefAnnoPerimColor', 'vcRefAnnoPerimOn',
+ 'vcRefAnnoPerimSpaceF', 'vcRefAnnoPerimThicknessF', 'vcRefAnnoSide',
+ 'vcRefAnnoString1', 'vcRefAnnoString1On', 'vcRefAnnoString2',
+ 'vcRefAnnoString2On', 'vcRefAnnoTextDirection', 'vcRefAnnoZone',
+ 'vcRefLengthF', 'vcRefMagnitudeF', 'vcScalarFieldData',
+ 'vcScalarMissingValColor', 'vcScalarValueFormat',
+ 'vcScalarValueScaleFactorF', 'vcScalarValueScaleValueF',
+ 'vcScalarValueScalingMode', 'vcSpanLevelPalette', 'vcUseRefAnnoRes',
+ 'vcUseScalarArray', 'vcVectorDrawOrder', 'vcVectorFieldData',
+ 'vcWindBarbCalmCircleSizeF', 'vcWindBarbColor',
+ 'vcWindBarbLineThicknessF', 'vcWindBarbScaleFactorF',
+ 'vcWindBarbTickAngleF', 'vcWindBarbTickLengthF',
+ 'vcWindBarbTickSpacingF', 'vcZeroFLabelAngleF',
+ 'vcZeroFLabelBackgroundColor', 'vcZeroFLabelConstantSpacingF',
+ 'vcZeroFLabelFont', 'vcZeroFLabelFontAspectF',
+ 'vcZeroFLabelFontColor', 'vcZeroFLabelFontHeightF',
+ 'vcZeroFLabelFontQuality', 'vcZeroFLabelFontThicknessF',
+ 'vcZeroFLabelFuncCode', 'vcZeroFLabelJust', 'vcZeroFLabelOn',
+ 'vcZeroFLabelOrthogonalPosF', 'vcZeroFLabelParallelPosF',
+ 'vcZeroFLabelPerimColor', 'vcZeroFLabelPerimOn',
+ 'vcZeroFLabelPerimSpaceF', 'vcZeroFLabelPerimThicknessF',
+ 'vcZeroFLabelSide', 'vcZeroFLabelString', 'vcZeroFLabelTextDirection',
+ 'vcZeroFLabelZone', 'vfCopyData', 'vfDataArray',
+ 'vfExchangeDimensions', 'vfExchangeUVData', 'vfMagMaxV', 'vfMagMinV',
+ 'vfMissingUValueV', 'vfMissingVValueV', 'vfPolarData',
+ 'vfSingleMissingValue', 'vfUDataArray', 'vfUMaxV', 'vfUMinV',
+ 'vfVDataArray', 'vfVMaxV', 'vfVMinV', 'vfXArray', 'vfXCActualEndF',
+ 'vfXCActualStartF', 'vfXCEndIndex', 'vfXCEndSubsetV', 'vfXCEndV',
+ 'vfXCStartIndex', 'vfXCStartSubsetV', 'vfXCStartV', 'vfXCStride',
+ 'vfYArray', 'vfYCActualEndF', 'vfYCActualStartF', 'vfYCEndIndex',
+ 'vfYCEndSubsetV', 'vfYCEndV', 'vfYCStartIndex', 'vfYCStartSubsetV',
+ 'vfYCStartV', 'vfYCStride', 'vpAnnoManagerId', 'vpClipOn',
+ 'vpHeightF', 'vpKeepAspect', 'vpOn', 'vpUseSegments', 'vpWidthF',
+ 'vpXF', 'vpYF', 'wkAntiAlias', 'wkBackgroundColor', 'wkBackgroundOpacityF',
+ 'wkColorMapLen', 'wkColorMap', 'wkColorModel', 'wkDashTableLength',
+ 'wkDefGraphicStyleId', 'wkDeviceLowerX', 'wkDeviceLowerY',
+ 'wkDeviceUpperX', 'wkDeviceUpperY', 'wkFileName', 'wkFillTableLength',
+ 'wkForegroundColor', 'wkFormat', 'wkFullBackground', 'wkGksWorkId',
+ 'wkHeight', 'wkMarkerTableLength', 'wkMetaName', 'wkOrientation',
+ 'wkPDFFileName', 'wkPDFFormat', 'wkPDFResolution', 'wkPSFileName',
+ 'wkPSFormat', 'wkPSResolution', 'wkPaperHeightF', 'wkPaperSize',
+ 'wkPaperWidthF', 'wkPause', 'wkTopLevelViews', 'wkViews',
+ 'wkVisualType', 'wkWidth', 'wkWindowId', 'wkXColorMode', 'wsCurrentSize',
+ 'wsMaximumSize', 'wsThresholdSize', 'xyComputeXMax',
+ 'xyComputeXMin', 'xyComputeYMax', 'xyComputeYMin', 'xyCoordData',
+ 'xyCoordDataSpec', 'xyCurveDrawOrder', 'xyDashPattern',
+ 'xyDashPatterns', 'xyExplicitLabels', 'xyExplicitLegendLabels',
+ 'xyLabelMode', 'xyLineColor', 'xyLineColors', 'xyLineDashSegLenF',
+ 'xyLineLabelConstantSpacingF', 'xyLineLabelFont',
+ 'xyLineLabelFontAspectF', 'xyLineLabelFontColor',
+ 'xyLineLabelFontColors', 'xyLineLabelFontHeightF',
+ 'xyLineLabelFontQuality', 'xyLineLabelFontThicknessF',
+ 'xyLineLabelFuncCode', 'xyLineThicknessF', 'xyLineThicknesses',
+ 'xyMarkLineMode', 'xyMarkLineModes', 'xyMarker', 'xyMarkerColor',
+ 'xyMarkerColors', 'xyMarkerSizeF', 'xyMarkerSizes',
+ 'xyMarkerThicknessF', 'xyMarkerThicknesses', 'xyMarkers',
+ 'xyMonoDashPattern', 'xyMonoLineColor', 'xyMonoLineLabelFontColor',
+ 'xyMonoLineThickness', 'xyMonoMarkLineMode', 'xyMonoMarker',
+ 'xyMonoMarkerColor', 'xyMonoMarkerSize', 'xyMonoMarkerThickness',
+ 'xyXIrrTensionF', 'xyXIrregularPoints', 'xyXStyle', 'xyYIrrTensionF',
+ 'xyYIrregularPoints', 'xyYStyle'), prefix=r'\b'),
+ Name.Builtin),
+
+ # Booleans
+ (r'\.(True|False)\.', Name.Builtin),
+ # Comparing Operators
+ (r'\.(eq|ne|lt|le|gt|ge|not|and|or|xor)\.', Operator.Word),
+ ],
+
+ 'strings': [
+ (r'(?s)"(\\\\|\\[0-7]+|\\.|[^"\\])*"', String.Double),
+ ],
+
+ 'nums': [
+ (r'\d+(?![.e])(_[a-z]\w+)?', Number.Integer),
+ (r'[+-]?\d*\.\d+(e[-+]?\d+)?(_[a-z]\w+)?', Number.Float),
+ (r'[+-]?\d+\.\d*(e[-+]?\d+)?(_[a-z]\w+)?', Number.Float),
+ ],
+ }
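The three number rules in the `'nums'` state above are easy to sanity-check outside the lexer. A minimal stand-alone sketch (not part of the patch; plain `re`, with `IGNORECASE` assumed here so the `e` exponent also covers `E`, and `fullmatch` standing in for the lexer's match-at-position semantics):

```python
import re

# Stand-alone copies of the three 'nums' rules above, tried in order.
INTEGER = re.compile(r'\d+(?![.e])(_[a-z]\w+)?', re.IGNORECASE)
FLOAT_A = re.compile(r'[+-]?\d*\.\d+(e[-+]?\d+)?(_[a-z]\w+)?', re.IGNORECASE)
FLOAT_B = re.compile(r'[+-]?\d+\.\d*(e[-+]?\d+)?(_[a-z]\w+)?', re.IGNORECASE)

def classify(token):
    """Return the token type the rules above would assign, or None."""
    if INTEGER.fullmatch(token):
        return 'Number.Integer'
    if FLOAT_A.fullmatch(token) or FLOAT_B.fullmatch(token):
        return 'Number.Float'
    return None
```

The `(?![.e])` lookahead is what keeps `3.14` out of the integer rule: the digit run cannot be followed by a decimal point or exponent, so `3.14` and `.5` fall through to the two float rules.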
pygments.lexers.nimrod
~~~~~~~~~~~~~~~~~~~~~~
- Lexer for the Nimrod language.
+ Lexer for the Nim language (formerly known as Nimrod).
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
class NimrodLexer(RegexLexer):
"""
- For `Nimrod <http://nimrod-code.org/>`_ source code.
+ For `Nim <http://nim-lang.org/>`_ source code.
.. versionadded:: 1.5
"""
name = 'Nimrod'
- aliases = ['nimrod', 'nim']
+ aliases = ['nim', 'nimrod']
filenames = ['*.nim', '*.nimrod']
- mimetypes = ['text/x-nimrod']
+ mimetypes = ['text/x-nim']
flags = re.MULTILINE | re.IGNORECASE | re.UNICODE
return "|".join(newWords)
keywords = [
- 'addr', 'and', 'as', 'asm', 'atomic', 'bind', 'block', 'break',
- 'case', 'cast', 'const', 'continue', 'converter', 'discard',
- 'distinct', 'div', 'elif', 'else', 'end', 'enum', 'except', 'finally',
- 'for', 'generic', 'if', 'implies', 'in', 'yield',
- 'is', 'isnot', 'iterator', 'lambda', 'let', 'macro', 'method',
- 'mod', 'not', 'notin', 'object', 'of', 'or', 'out', 'proc',
- 'ptr', 'raise', 'ref', 'return', 'shl', 'shr', 'template', 'try',
+ 'addr', 'and', 'as', 'asm', 'atomic', 'bind', 'block', 'break', 'case',
+ 'cast', 'concept', 'const', 'continue', 'converter', 'defer', 'discard',
+ 'distinct', 'div', 'do', 'elif', 'else', 'end', 'enum', 'except',
+ 'export', 'finally', 'for', 'func', 'if', 'in', 'yield', 'interface',
+ 'is', 'isnot', 'iterator', 'let', 'macro', 'method', 'mixin', 'mod',
+ 'not', 'notin', 'object', 'of', 'or', 'out', 'proc', 'ptr', 'raise',
+ 'ref', 'return', 'shared', 'shl', 'shr', 'static', 'template', 'try',
'tuple', 'type', 'when', 'while', 'with', 'without', 'xor'
]
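The Nim keyword list above is apparently run through a helper that builds a joined alternation of spelling variants (the stray `return "|".join(newWords)` context line), because Nim identifiers are style-insensitive: only the first character is compared case-sensitively, and underscores in the rest are ignored. A hypothetical sketch of that comparison rule (illustrative only, not part of the patch):

```python
def nim_ident_eq(a, b):
    """Nim-style identifier equality: the first character is compared
    case-sensitively; the rest ignores case and underscores."""
    normalize = lambda s: s[0] + s[1:].replace('_', '').lower()
    return normalize(a) == normalize(b)
```

Under this rule `parseInt` and `parse_int` name the same thing, which is why the lexer cannot simply match keywords literally.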
Lexer for the Nit language.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Lexers for the NixOS Nix language.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Lexers for Oberon family languages.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
(r'\s+', Text), # whitespace
],
'comments': [
- (r'\(\*([^\$].*?)\*\)', Comment.Multiline),
+ (r'\(\*([^$].*?)\*\)', Comment.Multiline),
# TODO: nested comments (* (* ... *) ... (* ... *) *) not supported!
],
'punctuation': [
- (r'[\(\)\[\]\{\},.:;\|]', Punctuation),
+ (r'[()\[\]{},.:;|]', Punctuation),
],
'numliterals': [
(r'[0-9A-F]+X\b', Number.Hex), # char code
(r'\$', Operator),
],
'identifiers': [
- (r'([a-zA-Z_\$][\w\$]*)', Name),
+ (r'([a-zA-Z_$][\w$]*)', Name),
],
'builtins': [
(words((
Lexers for Objective-C family languages.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
(r'\s+', Text),
(r'//', Comment.Single, 'comment-single'),
(r'/\*', Comment.Multiline, 'comment-multi'),
- (r'#(if|elseif|else|endif)\b', Comment.Preproc, 'preproc'),
+ (r'#(if|elseif|else|endif|available)\b', Comment.Preproc, 'preproc'),
# Keywords
include('keywords'),
],
'keywords': [
(words((
- 'break', 'case', 'continue', 'default', 'do', 'else',
- 'fallthrough', 'for', 'if', 'in', 'return', 'switch', 'where',
- 'while'), suffix=r'\b'),
+ 'as', 'break', 'case', 'catch', 'continue', 'default', 'defer',
+ 'do', 'else', 'fallthrough', 'for', 'guard', 'if', 'in', 'is',
+ 'repeat', 'return', '#selector', 'switch', 'throw', 'try',
+ 'where', 'while'), suffix=r'\b'),
Keyword),
(r'@availability\([^)]+\)', Keyword.Reserved),
(words((
'associativity', 'convenience', 'dynamic', 'didSet', 'final',
- 'get', 'infix', 'inout', 'lazy', 'left', 'mutating', 'none',
- 'nonmutating', 'optional', 'override', 'postfix', 'precedence',
- 'prefix', 'Protocol', 'required', 'right', 'set', 'Type',
- 'unowned', 'weak', 'willSet', '@availability', '@autoclosure',
- '@noreturn', '@NSApplicationMain', '@NSCopying', '@NSManaged',
- '@objc', '@UIApplicationMain', '@IBAction', '@IBDesignable',
+ 'get', 'indirect', 'infix', 'inout', 'lazy', 'left', 'mutating',
+ 'none', 'nonmutating', 'optional', 'override', 'postfix',
+ 'precedence', 'prefix', 'Protocol', 'required', 'rethrows',
+ 'right', 'set', 'throws', 'Type', 'unowned', 'weak', 'willSet',
+ '@availability', '@autoclosure', '@noreturn',
+ '@NSApplicationMain', '@NSCopying', '@NSManaged', '@objc',
+ '@UIApplicationMain', '@IBAction', '@IBDesignable',
'@IBInspectable', '@IBOutlet'), suffix=r'\b'),
Keyword.Reserved),
(r'(as|dynamicType|false|is|nil|self|Self|super|true|__COLUMN__'
- r'|__FILE__|__FUNCTION__|__LINE__|_)\b', Keyword.Constant),
+ r'|__FILE__|__FUNCTION__|__LINE__|_'
+ r'|#(?:file|line|column|function))\b', Keyword.Constant),
(r'import\b', Keyword.Declaration, 'module'),
(r'(class|enum|extension|struct|protocol)(\s+)([a-zA-Z_]\w*)',
bygroups(Keyword.Declaration, Text, Name.Class)),
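The Swift constants rule above now also recognizes the `#file`-style literals alongside the older `__FILE__` spellings. The updated alternation can be exercised stand-alone (illustrative sketch, not part of the patch):

```python
import re

# The updated Keyword.Constant alternation from the Swift lexer hunk.
CONSTANT = re.compile(
    r'(as|dynamicType|false|is|nil|self|Self|super|true|__COLUMN__'
    r'|__FILE__|__FUNCTION__|__LINE__|_'
    r'|#(?:file|line|column|function))\b')
```

Note that `#selector` is deliberately not in this alternation; the hunk above routes it through the keyword list instead.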
Lexers for the Ooc language.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Just export lexer classes previously contained in this module.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
from pygments.lexers.smalltalk import SmalltalkLexer, NewspeakLexer
from pygments.lexers.installers import NSISLexer, RPMSpecLexer
from pygments.lexers.textedit import AwkLexer
+from pygments.lexers.smv import NuSMVLexer
__all__ = []
Lexer for ParaSail.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
(r'[a-zA-Z]\w*', Name),
# Operators and Punctuation
(r'(<==|==>|<=>|\*\*=|<\|=|<<=|>>=|==|!=|=\?|<=|>=|'
- r'\*\*|<<|>>|=>|:=|\+=|-=|\*=|\||\|=|/=|\+|-|\*|/|'
+ r'\*\*|<<|>>|=>|:=|\+=|-=|\*=|\|=|\||/=|\+|-|\*|/|'
r'\.\.|<\.\.|\.\.<|<\.\.<)',
Operator),
(r'(<|>|\[|\]|\(|\)|\||:|;|,|.|\{|\}|->)',
Lexers for parser generators.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Lexers for Pascal family languages.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
"""
name = 'Delphi'
aliases = ['delphi', 'pas', 'pascal', 'objectpascal']
- filenames = ['*.pas']
+ filenames = ['*.pas', '*.dpr']
mimetypes = ['text/x-pascal']
TURBO_PASCAL_KEYWORDS = (
Lexers for the Pawn languages.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Lexers for Perl and related languages.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
(words((
'case', 'continue', 'do', 'else', 'elsif', 'for', 'foreach',
'if', 'last', 'my', 'next', 'our', 'redo', 'reset', 'then',
- 'unless', 'until', 'while', 'use', 'print', 'new', 'BEGIN',
+ 'unless', 'until', 'while', 'print', 'new', 'BEGIN',
'CHECK', 'INIT', 'END', 'return'), suffix=r'\b'),
Keyword),
(r'(format)(\s+)(\w+)(\s*)(=)(\s*\n)',
'getservbyport', 'getservent', 'getsockname', 'getsockopt', 'glob', 'gmtime',
'goto', 'grep', 'hex', 'import', 'index', 'int', 'ioctl', 'join', 'keys', 'kill', 'last',
'lc', 'lcfirst', 'length', 'link', 'listen', 'local', 'localtime', 'log', 'lstat',
- 'map', 'mkdir', 'msgctl', 'msgget', 'msgrcv', 'msgsnd', 'my', 'next', 'no', 'oct', 'open',
- 'opendir', 'ord', 'our', 'pack', 'package', 'pipe', 'pop', 'pos', 'printf',
+ 'map', 'mkdir', 'msgctl', 'msgget', 'msgrcv', 'msgsnd', 'my', 'next', 'oct', 'open',
+ 'opendir', 'ord', 'our', 'pack', 'pipe', 'pop', 'pos', 'printf',
'prototype', 'push', 'quotemeta', 'rand', 'read', 'readdir',
- 'readline', 'readlink', 'readpipe', 'recv', 'redo', 'ref', 'rename', 'require',
+ 'readline', 'readlink', 'readpipe', 'recv', 'redo', 'ref', 'rename',
'reverse', 'rewinddir', 'rindex', 'rmdir', 'scalar', 'seek', 'seekdir',
'select', 'semctl', 'semget', 'semop', 'send', 'setgrent', 'sethostent', 'setnetent',
'setpgrp', 'setpriority', 'setprotoent', 'setpwent', 'setservent',
'utime', 'values', 'vec', 'wait', 'waitpid', 'wantarray', 'warn', 'write'), suffix=r'\b'),
Name.Builtin),
(r'((__(DATA|DIE|WARN)__)|(STD(IN|OUT|ERR)))\b', Name.Builtin.Pseudo),
- (r'<<([\'"]?)([a-zA-Z_]\w*)\1;?\n.*?\n\2\n', String),
+ (r'(<<)([\'"]?)([a-zA-Z_]\w*)(\2;?\n.*?\n)(\3)(\n)',
+ bygroups(String, String, String.Delimiter, String, String.Delimiter, Text)),
(r'__END__', Comment.Preproc, 'end-part'),
(r'\$\^[ADEFHILMOPSTWX]', Name.Variable.Global),
(r"\$[\\\"\[\]'&`+*.,;=%~?@$!<>(^|/-](?!\w)", Name.Variable.Global),
(r'(q|qq|qw|qr|qx)\[', String.Other, 'sb-string'),
(r'(q|qq|qw|qr|qx)\<', String.Other, 'lt-string'),
(r'(q|qq|qw|qr|qx)([\W_])(.|\n)*?\2', String.Other),
- (r'package\s+', Keyword, 'modulename'),
- (r'sub\s+', Keyword, 'funcname'),
+ (r'(package)(\s+)([a-zA-Z_]\w*(?:::[a-zA-Z_]\w*)*)',
+ bygroups(Keyword, Text, Name.Namespace)),
+ (r'(use|require|no)(\s+)([a-zA-Z_]\w*(?:::[a-zA-Z_]\w*)*)',
+ bygroups(Keyword, Text, Name.Namespace)),
+ (r'(sub)(\s+)', bygroups(Keyword, Text), 'funcname'),
+ (words((
+ 'no', 'package', 'require', 'use'), suffix=r'\b'),
+ Keyword),
(r'(\[\]|\*\*|::|<<|>>|>=|<=>|<=|={3}|!=|=~|'
r'!~|&&?|\|\||\.{1,3})', Operator),
(r'[-+/*%=<>&^|!\\~]=?', Operator),
(r'[\w:]+', Name.Variable, '#pop'),
],
'name': [
- (r'\w+::', Name.Namespace),
+ (r'[a-zA-Z_]\w*(::[a-zA-Z_]\w*)*(::)?(?=\s*->)', Name.Namespace, '#pop'),
+ (r'[a-zA-Z_]\w*(::[a-zA-Z_]\w*)*::', Name.Namespace, '#pop'),
(r'[\w:]+', Name, '#pop'),
(r'[A-Z_]+(?=\W)', Name.Constant, '#pop'),
(r'(?=\W)', Text, '#pop'),
],
- 'modulename': [
- (r'[a-zA-Z_]\w*', Name.Namespace, '#pop')
- ],
'funcname': [
(r'[a-zA-Z_]\w*[!?]?', Name.Function),
(r'\s+', Text),
Lexers for PHP and related languages.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
import re
-from pygments.lexer import RegexLexer, include, bygroups, default, using, this
+from pygments.lexer import RegexLexer, include, bygroups, default, using, \
+ this, words
from pygments.token import Text, Comment, Operator, Keyword, Name, String, \
Number, Punctuation, Other
from pygments.util import get_bool_opt, get_list_opt, iteritems
],
'php': [
(r'\?>', Comment.Preproc, '#pop'),
- (r'<<<([\'"]?)(' + _ident_inner + r')\1\n.*?\n\s*\2;?\n', String),
+ (r'(<<<)([\'"]?)(' + _ident_inner + r')(\2\n.*?\n\s*)(\3)(;?)(\n)',
+ bygroups(String, String, String.Delimiter, String, String.Delimiter,
+ Punctuation, Text)),
(r'\s+', Text),
(r'#.*?\n', Comment.Single),
(r'//.*?\n', Comment.Single),
r'FALSE|print|for|require|continue|foreach|require_once|'
r'declare|return|default|static|do|switch|die|stdClass|'
r'echo|else|TRUE|elseif|var|empty|if|xor|enddeclare|include|'
- r'virtual|endfor|include_once|while|endforeach|global|__FILE__|'
- r'endif|list|__LINE__|endswitch|new|__sleep|endwhile|not|'
- r'array|__wakeup|E_ALL|NULL|final|php_user_filter|interface|'
+ r'virtual|endfor|include_once|while|endforeach|global|'
+ r'endif|list|endswitch|new|endwhile|not|'
+ r'array|E_ALL|NULL|final|php_user_filter|interface|'
r'implements|public|private|protected|abstract|clone|try|'
r'catch|throw|this|use|namespace|trait|yield|'
r'finally)\b', Keyword),
(r'(true|false|null)\b', Keyword.Constant),
+ include('magicconstants'),
(r'\$\{\$+' + _ident_inner + '\}', Name.Variable),
(r'\$+' + _ident_inner, Name.Variable),
(_ident_inner, Name.Other),
(r'`([^`\\]*(?:\\.[^`\\]*)*)`', String.Backtick),
(r'"', String.Double, 'string'),
],
+ 'magicfuncs': [
+ # source: http://php.net/manual/en/language.oop5.magic.php
+ (words((
+ '__construct', '__destruct', '__call', '__callStatic', '__get', '__set',
+ '__isset', '__unset', '__sleep', '__wakeup', '__toString', '__invoke',
+ '__set_state', '__clone', '__debugInfo',), suffix=r'\b'),
+ Name.Function.Magic),
+ ],
+ 'magicconstants': [
+ # source: http://php.net/manual/en/language.constants.predefined.php
+ (words((
+ '__LINE__', '__FILE__', '__DIR__', '__FUNCTION__', '__CLASS__',
+ '__TRAIT__', '__METHOD__', '__NAMESPACE__',),
+ suffix=r'\b'),
+ Name.Constant),
+ ],
'classname': [
(_ident_inner, Name.Class, '#pop')
],
'functionname': [
- (_ident_inner, Name.Function, '#pop')
+ include('magicfuncs'),
+ (_ident_inner, Name.Function, '#pop'),
+ default('#pop')
],
'string': [
(r'"', String.Double, '#pop'),
String.Interpol)),
(r'(\$\{)(\S+)(\})',
bygroups(String.Interpol, Name.Variable, String.Interpol)),
- (r'[${\\]+', String.Double)
+ (r'[${\\]', String.Double)
],
}
Lexer for Praat
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
aliases = ['praat']
filenames = ['*.praat', '*.proc', '*.psc']
- keywords = [
+ keywords = (
'if', 'then', 'else', 'elsif', 'elif', 'endif', 'fi', 'for', 'from', 'to',
'endfor', 'endproc', 'while', 'endwhile', 'repeat', 'until', 'select', 'plus',
'minus', 'demo', 'assert', 'stopwatch', 'nocheck', 'nowarn', 'noprogress',
'editor', 'endeditor', 'clearinfo',
- ]
+ )
- functions_string = [
+ functions_string = (
'backslashTrigraphsToUnicode', 'chooseDirectory', 'chooseReadFile',
'chooseWriteFile', 'date', 'demoKey', 'do', 'environment', 'extractLine',
'extractWord', 'fixed', 'info', 'left', 'mid', 'percent', 'readFile', 'replace',
'replace_regex', 'right', 'selected', 'string', 'unicodeToBackslashTrigraphs',
- ]
+ )
- functions_numeric = [
+ functions_numeric = (
'abs', 'appendFile', 'appendFileLine', 'appendInfo', 'appendInfoLine', 'arccos',
'arccosh', 'arcsin', 'arcsinh', 'arctan', 'arctan2', 'arctanh', 'barkToHertz',
'beginPause', 'beginSendPraat', 'besselI', 'besselK', 'beta', 'beta2',
'sincpi', 'sinh', 'soundPressureToPhon', 'sqrt', 'startsWith', 'studentP',
'studentQ', 'tan', 'tanh', 'variableExists', 'word', 'writeFile', 'writeFileLine',
'writeInfo', 'writeInfoLine',
- ]
+ )
- functions_array = [
+ functions_array = (
'linear', 'randomGauss', 'randomInteger', 'randomUniform', 'zero',
- ]
+ )
- objects = [
+ objects = (
'Activation', 'AffineTransform', 'AmplitudeTier', 'Art', 'Artword',
'Autosegment', 'BarkFilter', 'BarkSpectrogram', 'CCA', 'Categories',
'Cepstrogram', 'Cepstrum', 'Cepstrumc', 'ChebyshevSeries', 'ClassificationTable',
'Strings', 'StringsIndex', 'Table', 'TableOfReal', 'TextGrid', 'TextInterval',
'TextPoint', 'TextTier', 'Tier', 'Transition', 'VocalTract', 'VocalTractTier',
'Weight', 'WordList',
- ]
+ )
- variables_numeric = [
+ variables_numeric = (
'macintosh', 'windows', 'unix', 'praatVersion', 'pi', 'e', 'undefined',
- ]
+ )
- variables_string = [
+ variables_string = (
'praatVersion', 'tab', 'shellDirectory', 'homeDirectory',
'preferencesDirectory', 'newline', 'temporaryDirectory',
'defaultDirectory',
- ]
+ )
tokens = {
'root': [
(r"'(?=.*')", String.Interpol, 'string_interpolated'),
(r'\.{3}', Keyword, ('#pop', 'old_arguments')),
(r':', Keyword, ('#pop', 'comma_list')),
- (r'[\s\n]', Text, '#pop'),
+ (r'\s', Text, '#pop'),
],
'procedure_call': [
(r'\s+', Text),
],
'function_call': [
(words(functions_string, suffix=r'\$(?=\s*[:(])'), Name.Function, 'function'),
- (words(functions_array, suffix=r'#(?=\s*[:(])'), Name.Function, 'function'),
- (words(functions_numeric, suffix=r'(?=\s*[:(])'), Name.Function, 'function'),
+ (words(functions_array, suffix=r'#(?=\s*[:(])'), Name.Function, 'function'),
+ (words(functions_numeric, suffix=r'(?=\s*[:(])'), Name.Function, 'function'),
],
'function': [
(r'\s+', Text),
include('operator'),
include('number'),
+ (r'[()]', Text),
(r',', Punctuation),
],
'old_arguments': [
(r'[^\n]', Text),
],
'number': [
+ (r'\n', Text, '#pop'),
(r'\b\d+(\.\d*)?([eE][-+]?\d+)?%?', Number),
],
'object_attributes': [
bygroups(Name.Builtin, Name.Builtin, String.Interpol),
('object_attributes', 'string_interpolated')),
- (r'\.?_?[a-z][a-zA-Z0-9_.]*(\$|#)?', Text),
+ (r'\.?_?[a-z][\w.]*(\$|#)?', Text),
(r'[\[\]]', Punctuation, 'comma_list'),
(r"'(?=.*')", String.Interpol, 'string_interpolated'),
],
'operator': [
- (r'([+\/*<>=!-]=?|[&*|][&*|]?|\^|<>)', Operator),
- (r'\b(and|or|not|div|mod)\b', Operator.Word),
+ (r'([+\/*<>=!-]=?|[&*|][&*|]?|\^|<>)', Operator),
+ (r'(?<![\w.])(and|or|not|div|mod)(?![\w.])', Operator.Word),
],
'string_interpolated': [
- (r'\.?[_a-z][a-zA-Z0-9_.]*[\$#]?(?:\[[a-zA-Z0-9,]+\])?(:[0-9]+)?',
+ (r'\.?[_a-z][\w.]*[$#]?(?:\[[a-zA-Z0-9,]+\])?(:[0-9]+)?',
String.Interpol),
(r"'", String.Interpol, '#pop'),
],
(r'(optionmenu|choice)([ \t]+\S+:[ \t]+)',
bygroups(Keyword, Text), 'number'),
- (r'(option|button)([ \t]+)',
- bygroups(Keyword, Text), 'number'),
-
(r'(option|button)([ \t]+)',
bygroups(Keyword, Text), 'string_unquoted'),
Lexers for Prolog and Prolog-like languages.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Lexers for Python and related languages.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
return [
# the old style '%s' % (...) string formatting
(r'%(\(\w+\))?[-#0 +]*([0-9]+|[*])?(\.([0-9]+|[*]))?'
- '[hlL]?[diouxXeEfFgGcrs%]', String.Interpol),
+ '[hlL]?[E-GXc-giorsux%]', String.Interpol),
# backslashes, quotes and formatting signs must be parsed one at a time
(r'[^\\\'"%\n]+', ttype),
(r'[\'"\\]', ttype),
tokens = {
'root': [
(r'\n', Text),
- (r'^(\s*)([rRuU]{,2}"""(?:.|\n)*?""")', bygroups(Text, String.Doc)),
- (r"^(\s*)([rRuU]{,2}'''(?:.|\n)*?''')", bygroups(Text, String.Doc)),
+ (r'^(\s*)([rRuUbB]{,2})("""(?:.|\n)*?""")',
+ bygroups(Text, String.Affix, String.Doc)),
+ (r"^(\s*)([rRuUbB]{,2})('''(?:.|\n)*?''')",
+ bygroups(Text, String.Affix, String.Doc)),
(r'[^\S\n]+', Text),
(r'\A#!.+$', Comment.Hashbang),
(r'#.*$', Comment.Single),
(r'(import)((?:\s|\\\s)+)', bygroups(Keyword.Namespace, Text),
'import'),
include('builtins'),
+ include('magicfuncs'),
+ include('magicvars'),
include('backtick'),
- ('(?:[rR]|[uU][rR]|[rR][uU])"""', String.Double, 'tdqs'),
- ("(?:[rR]|[uU][rR]|[rR][uU])'''", String.Single, 'tsqs'),
- ('(?:[rR]|[uU][rR]|[rR][uU])"', String.Double, 'dqs'),
- ("(?:[rR]|[uU][rR]|[rR][uU])'", String.Single, 'sqs'),
- ('[uU]?"""', String.Double, combined('stringescape', 'tdqs')),
- ("[uU]?'''", String.Single, combined('stringescape', 'tsqs')),
- ('[uU]?"', String.Double, combined('stringescape', 'dqs')),
- ("[uU]?'", String.Single, combined('stringescape', 'sqs')),
+ ('([rR]|[uUbB][rR]|[rR][uUbB])(""")',
+ bygroups(String.Affix, String.Double), 'tdqs'),
+ ("([rR]|[uUbB][rR]|[rR][uUbB])(''')",
+ bygroups(String.Affix, String.Single), 'tsqs'),
+ ('([rR]|[uUbB][rR]|[rR][uUbB])(")',
+ bygroups(String.Affix, String.Double), 'dqs'),
+ ("([rR]|[uUbB][rR]|[rR][uUbB])(')",
+ bygroups(String.Affix, String.Single), 'sqs'),
+ ('([uUbB]?)(""")', bygroups(String.Affix, String.Double),
+ combined('stringescape', 'tdqs')),
+ ("([uUbB]?)(''')", bygroups(String.Affix, String.Single),
+ combined('stringescape', 'tsqs')),
+ ('([uUbB]?)(")', bygroups(String.Affix, String.Double),
+ combined('stringescape', 'dqs')),
+ ("([uUbB]?)(')", bygroups(String.Affix, String.Single),
+ combined('stringescape', 'sqs')),
include('name'),
include('numbers'),
],
'unichr', 'unicode', 'vars', 'xrange', 'zip'),
prefix=r'(?<!\.)', suffix=r'\b'),
Name.Builtin),
- (r'(?<!\.)(self|None|Ellipsis|NotImplemented|False|True'
+ (r'(?<!\.)(self|None|Ellipsis|NotImplemented|False|True|cls'
r')\b', Name.Builtin.Pseudo),
(words((
'ArithmeticError', 'AssertionError', 'AttributeError',
'ZeroDivisionError'), prefix=r'(?<!\.)', suffix=r'\b'),
Name.Exception),
],
+ 'magicfuncs': [
+ (words((
+ '__abs__', '__add__', '__and__', '__call__', '__cmp__', '__coerce__',
+ '__complex__', '__contains__', '__del__', '__delattr__', '__delete__',
+ '__delitem__', '__delslice__', '__div__', '__divmod__', '__enter__',
+ '__eq__', '__exit__', '__float__', '__floordiv__', '__ge__', '__get__',
+ '__getattr__', '__getattribute__', '__getitem__', '__getslice__', '__gt__',
+ '__hash__', '__hex__', '__iadd__', '__iand__', '__idiv__', '__ifloordiv__',
+ '__ilshift__', '__imod__', '__imul__', '__index__', '__init__',
+ '__instancecheck__', '__int__', '__invert__', '__iop__', '__ior__',
+ '__ipow__', '__irshift__', '__isub__', '__iter__', '__itruediv__',
+ '__ixor__', '__le__', '__len__', '__long__', '__lshift__', '__lt__',
+ '__missing__', '__mod__', '__mul__', '__ne__', '__neg__', '__new__',
+ '__nonzero__', '__oct__', '__op__', '__or__', '__pos__', '__pow__',
+ '__radd__', '__rand__', '__rcmp__', '__rdiv__', '__rdivmod__', '__repr__',
+ '__reversed__', '__rfloordiv__', '__rlshift__', '__rmod__', '__rmul__',
+ '__rop__', '__ror__', '__rpow__', '__rrshift__', '__rshift__', '__rsub__',
+ '__rtruediv__', '__rxor__', '__set__', '__setattr__', '__setitem__',
+ '__setslice__', '__str__', '__sub__', '__subclasscheck__', '__truediv__',
+ '__unicode__', '__xor__'), suffix=r'\b'),
+ Name.Function.Magic),
+ ],
+ 'magicvars': [
+ (words((
+ '__bases__', '__class__', '__closure__', '__code__', '__defaults__',
+ '__dict__', '__doc__', '__file__', '__func__', '__globals__',
+ '__metaclass__', '__module__', '__mro__', '__name__', '__self__',
+ '__slots__', '__weakref__'),
+ suffix=r'\b'),
+ Name.Variable.Magic),
+ ],
'numbers': [
(r'(\d+\.\d*|\d*\.\d+)([eE][+-]?[0-9]+)?j?', Number.Float),
(r'\d+[eE][+-]?[0-9]+j?', Number.Float),
('[a-zA-Z_]\w*', Name),
],
'funcname': [
- ('[a-zA-Z_]\w*', Name.Function, '#pop')
+ include('magicfuncs'),
+ ('[a-zA-Z_]\w*', Name.Function, '#pop'),
+ default('#pop'),
],
'classname': [
('[a-zA-Z_]\w*', Name.Class, '#pop')
return [
# the old style '%s' % (...) string formatting (still valid in Py3)
(r'%(\(\w+\))?[-#0 +]*([0-9]+|[*])?(\.([0-9]+|[*]))?'
- '[hlL]?[diouxXeEfFgGcrs%]', String.Interpol),
+ '[hlL]?[E-GXc-giorsux%]', String.Interpol),
# the new style '{}'.format(...) string formatting
(r'\{'
- '((\w+)((\.\w+)|(\[[^\]]+\]))*)?' # field name
- '(\![sra])?' # conversion
- '(\:(.?[<>=\^])?[-+ ]?#?0?(\d+)?,?(\.\d+)?[bcdeEfFgGnosxX%]?)?'
+ '((\w+)((\.\w+)|(\[[^\]]+\]))*)?' # field name
+ '(\![sra])?' # conversion
+ '(\:(.?[<>=\^])?[-+ ]?#?0?(\d+)?,?(\.\d+)?[E-GXb-gnosx%]?)?'
'\}', String.Interpol),
# backslashes, quotes and formatting signs must be parsed one at a time
- (r'[^\\\'"%\{\n]+', ttype),
+ (r'[^\\\'"%{\n]+', ttype),
(r'[\'"\\]', ttype),
# unhandled string formatting sign
(r'%|(\{{1,2})', ttype)
'sum', 'super', 'tuple', 'type', 'vars', 'zip'), prefix=r'(?<!\.)',
suffix=r'\b'),
Name.Builtin),
- (r'(?<!\.)(self|Ellipsis|NotImplemented)\b', Name.Builtin.Pseudo),
+ (r'(?<!\.)(self|Ellipsis|NotImplemented|cls)\b', Name.Builtin.Pseudo),
(words((
'ArithmeticError', 'AssertionError', 'AttributeError',
'BaseException', 'BufferError', 'BytesWarning', 'DeprecationWarning',
prefix=r'(?<!\.)', suffix=r'\b'),
Name.Exception),
]
+ tokens['magicfuncs'] = [
+ (words((
+ '__abs__', '__add__', '__aenter__', '__aexit__', '__aiter__', '__and__',
+ '__anext__', '__await__', '__bool__', '__bytes__', '__call__',
+ '__complex__', '__contains__', '__del__', '__delattr__', '__delete__',
+ '__delitem__', '__dir__', '__divmod__', '__enter__', '__eq__', '__exit__',
+ '__float__', '__floordiv__', '__format__', '__ge__', '__get__',
+ '__getattr__', '__getattribute__', '__getitem__', '__gt__', '__hash__',
+ '__iadd__', '__iand__', '__ifloordiv__', '__ilshift__', '__imatmul__',
+ '__imod__', '__import__', '__imul__', '__index__', '__init__',
+ '__instancecheck__', '__int__', '__invert__', '__ior__', '__ipow__',
+ '__irshift__', '__isub__', '__iter__', '__itruediv__', '__ixor__',
+ '__le__', '__len__', '__length_hint__', '__lshift__', '__lt__',
+ '__matmul__', '__missing__', '__mod__', '__mul__', '__ne__', '__neg__',
+ '__new__', '__next__', '__or__', '__pos__', '__pow__', '__prepare__',
+ '__radd__', '__rand__', '__rdivmod__', '__repr__', '__reversed__',
+ '__rfloordiv__', '__rlshift__', '__rmatmul__', '__rmod__', '__rmul__',
+ '__ror__', '__round__', '__rpow__', '__rrshift__', '__rshift__',
+ '__rsub__', '__rtruediv__', '__rxor__', '__set__', '__setattr__',
+ '__setitem__', '__str__', '__sub__', '__subclasscheck__', '__truediv__',
+ '__xor__'), suffix=r'\b'),
+ Name.Function.Magic),
+ ]
+ tokens['magicvars'] = [
+ (words((
+ '__annotations__', '__bases__', '__class__', '__closure__', '__code__',
+ '__defaults__', '__dict__', '__doc__', '__file__', '__func__',
+ '__globals__', '__kwdefaults__', '__module__', '__mro__', '__name__',
+ '__objclass__', '__qualname__', '__self__', '__slots__', '__weakref__'),
+ suffix=r'\b'),
+ Name.Variable.Magic),
+ ]
tokens['numbers'] = [
(r'(\d+\.\d*|\d*\.\d+)([eE][+-]?[0-9]+)?', Number.Float),
+ (r'\d+[eE][+-]?[0-9]+j?', Number.Float),
(r'0[oO][0-7]+', Number.Oct),
(r'0[bB][01]+', Number.Bin),
(r'0[xX][a-fA-F0-9]+', Number.Hex),
],
'strings': [
(r'%(\([a-zA-Z0-9]+\))?[-#0 +]*([0-9]+|[*])?(\.([0-9]+|[*]))?'
- '[hlL]?[diouxXeEfFgGcrs%]', String.Interpol),
+ '[hlL]?[E-GXc-giorsux%]', String.Interpol),
(r'[^\\\'"%\n]+', String),
# quotes, percents and backslashes must be parsed one at a time
(r'[\'"\\]', String),
(words((
'bool', 'bytearray', 'bytes', 'classmethod', 'complex', 'dict', 'dict\'',
'float', 'frozenset', 'int', 'list', 'list\'', 'memoryview', 'object',
- 'property', 'range', 'set', 'set\'', 'slice', 'staticmethod', 'str', 'super',
- 'tuple', 'tuple\'', 'type'), prefix=r'(?<!\.)', suffix=r'(?![\'\w])'),
+ 'property', 'range', 'set', 'set\'', 'slice', 'staticmethod', 'str',
+ 'super', 'tuple', 'tuple\'', 'type'),
+ prefix=r'(?<!\.)', suffix=r'(?![\'\w])'),
Name.Builtin),
(words((
'__import__', 'abs', 'all', 'any', 'bin', 'bind', 'chr', 'cmp', 'compile',
'complex', 'delattr', 'dir', 'divmod', 'drop', 'dropwhile', 'enumerate',
- 'eval', 'exhaust', 'filter', 'flip', 'foldl1?', 'format', 'fst', 'getattr',
- 'globals', 'hasattr', 'hash', 'head', 'hex', 'id', 'init', 'input',
- 'isinstance', 'issubclass', 'iter', 'iterate', 'last', 'len', 'locals',
- 'map', 'max', 'min', 'next', 'oct', 'open', 'ord', 'pow', 'print', 'repr',
- 'reversed', 'round', 'setattr', 'scanl1?', 'snd', 'sorted', 'sum', 'tail',
- 'take', 'takewhile', 'vars', 'zip'), prefix=r'(?<!\.)', suffix=r'(?![\'\w])'),
+ 'eval', 'exhaust', 'filter', 'flip', 'foldl1?', 'format', 'fst',
+ 'getattr', 'globals', 'hasattr', 'hash', 'head', 'hex', 'id', 'init',
+ 'input', 'isinstance', 'issubclass', 'iter', 'iterate', 'last', 'len',
+ 'locals', 'map', 'max', 'min', 'next', 'oct', 'open', 'ord', 'pow',
+ 'print', 'repr', 'reversed', 'round', 'setattr', 'scanl1?', 'snd',
+ 'sorted', 'sum', 'tail', 'take', 'takewhile', 'vars', 'zip'),
+ prefix=r'(?<!\.)', suffix=r'(?![\'\w])'),
Name.Builtin),
(r"(?<!\.)(self|Ellipsis|NotImplemented|None|True|False)(?!['\w])",
Name.Builtin.Pseudo),
],
'string': [
(r'%(\(\w+\))?[-#0 +]*([0-9]+|[*])?(\.([0-9]+|[*]))?'
- '[hlL]?[diouxXeEfFgGcrs%]', String.Interpol),
+ '[hlL]?[E-GXc-giorsux%]', String.Interpol),
(r'[^\\\'"%\n]+', String),
# quotes, percents and backslashes must be parsed one at a time
(r'[\'"\\]', String),
Lexer for QVT Operational language.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
-from pygments.lexer import RegexLexer, bygroups, include, combined
+from pygments.lexer import RegexLexer, bygroups, include, combined, default, \
+ words
from pygments.token import Text, Comment, Operator, Keyword, Punctuation, \
Name, String, Number
bygroups(Comment, Comment, Comment.Preproc, Comment)),
# Uncomment the following if you want to distinguish between
# '/*' and '/**', à la javadoc
- #(r'/[*]{2}(.|\n)*?[*]/', Comment.Multiline),
+ # (r'/[*]{2}(.|\n)*?[*]/', Comment.Multiline),
(r'/[*](.|\n)*?[*]/', Comment.Multiline),
(r'\\\n', Text),
(r'(and|not|or|xor|##?)\b', Operator.Word),
- (r'([:]{1-2}=|[-+]=)\b', Operator.Word),
- (r'(@|<<|>>)\b', Keyword), # stereotypes
- (r'!=|<>|=|==|!->|->|>=|<=|[.]{3}|[+/*%=<>&|.~]', Operator),
+ (r'(:{1,2}=|[-+]=)\b', Operator.Word),
+ (r'(@|<<|>>)\b', Keyword), # stereotypes
+ (r'!=|<>|==|=|!->|->|>=|<=|[.]{3}|[+/*%=<>&|.~]', Operator),
(r'[]{}:(),;[]', Punctuation),
(r'(true|false|unlimited|null)\b', Keyword.Constant),
(r'(this|self|result)\b', Name.Builtin.Pseudo),
(r'(var)\b', Keyword.Declaration),
(r'(from|import)\b', Keyword.Namespace, 'fromimport'),
- (r'(metamodel|class|exception|primitive|enum|transformation|library)(\s+)([a-zA-Z0-9_]+)',
+ (r'(metamodel|class|exception|primitive|enum|transformation|'
+ r'library)(\s+)(\w+)',
bygroups(Keyword.Word, Text, Name.Class)),
- (r'(exception)(\s+)([a-zA-Z0-9_]+)', bygroups(Keyword.Word, Text, Name.Exception)),
+ (r'(exception)(\s+)(\w+)',
+ bygroups(Keyword.Word, Text, Name.Exception)),
(r'(main)\b', Name.Function),
- (r'(mapping|helper|query)(\s+)', bygroups(Keyword.Declaration, Text), 'operation'),
+ (r'(mapping|helper|query)(\s+)',
+ bygroups(Keyword.Declaration, Text), 'operation'),
(r'(assert)(\s+)\b', bygroups(Keyword, Text), 'assert'),
(r'(Bag|Collection|Dict|OrderedSet|Sequence|Set|Tuple|List)\b',
Keyword.Type),
("'", String, combined('stringescape', 'sqs')),
include('name'),
include('numbers'),
- # (r'([a-zA-Z_][a-zA-Z0-9_]*)(::)([a-zA-Z_][a-zA-Z0-9_]*)',
+ # (r'([a-zA-Z_]\w*)(::)([a-zA-Z_]\w*)',
# bygroups(Text, Text, Text)),
- ],
+ ],
'fromimport': [
(r'(?:[ \t]|\\\n)+', Text),
- (r'[a-zA-Z_][a-zA-Z0-9_.]*', Name.Namespace),
- (r'', Text, '#pop'),
- ],
+ (r'[a-zA-Z_][\w.]*', Name.Namespace),
+ default('#pop'),
+ ],
'operation': [
(r'::', Text),
- (r'(.*::)([a-zA-Z_][a-zA-Z0-9_]*)[ \t]*(\()', bygroups(Text,Name.Function, Text), '#pop')
- ],
+ (r'(.*::)([a-zA-Z_]\w*)([ \t]*)(\()',
+ bygroups(Text, Name.Function, Text, Punctuation), '#pop')
+ ],
'assert': [
(r'(warning|error|fatal)\b', Keyword, '#pop'),
- (r'', Text, '#pop') # all else: go back
- ],
+ default('#pop'), # all else: go back
+ ],
'keywords': [
- (r'(abstract|access|any|assert|'
- r'blackbox|break|case|collect|collectNested|'
- r'collectOne|collectselect|collectselectOne|composes|'
- r'compute|configuration|constructor|continue|datatype|'
- r'default|derived|disjuncts|do|elif|else|end|'
- r'endif|except|exists|extends|'
- r'forAll|forEach|forOne|from|if|'
- r'implies|in|inherits|init|inout|'
- r'intermediate|invresolve|invresolveIn|invresolveone|'
- r'invresolveoneIn|isUnique|iterate|late|let|'
- r'literal|log|map|merges|'
- r'modeltype|new|object|one|'
- r'ordered|out|package|population|'
- r'property|raise|readonly|references|refines|'
- r'reject|resolve|resolveIn|resolveone|resolveoneIn|'
- r'return|select|selectOne|sortedBy|static|switch|'
- r'tag|then|try|typedef|'
- r'unlimited|uses|when|where|while|with|'
- r'xcollect|xmap|xselect)\b', Keyword),
+ (words((
+ 'abstract', 'access', 'any', 'assert', 'blackbox', 'break',
+ 'case', 'collect', 'collectNested', 'collectOne', 'collectselect',
+ 'collectselectOne', 'composes', 'compute', 'configuration',
+ 'constructor', 'continue', 'datatype', 'default', 'derived',
+ 'disjuncts', 'do', 'elif', 'else', 'end', 'endif', 'except',
+ 'exists', 'extends', 'forAll', 'forEach', 'forOne', 'from', 'if',
+ 'implies', 'in', 'inherits', 'init', 'inout', 'intermediate',
+ 'invresolve', 'invresolveIn', 'invresolveone', 'invresolveoneIn',
+ 'isUnique', 'iterate', 'late', 'let', 'literal', 'log', 'map',
+ 'merges', 'modeltype', 'new', 'object', 'one', 'ordered', 'out',
+ 'package', 'population', 'property', 'raise', 'readonly',
+ 'references', 'refines', 'reject', 'resolve', 'resolveIn',
+ 'resolveone', 'resolveoneIn', 'return', 'select', 'selectOne',
+ 'sortedBy', 'static', 'switch', 'tag', 'then', 'try', 'typedef',
+ 'unlimited', 'uses', 'when', 'where', 'while', 'with', 'xcollect',
+ 'xmap', 'xselect'), suffix=r'\b'), Keyword),
],
# There is no need to distinguish between String.Single and
'stringescape': [
(r'\\([\\btnfr"\']|u[0-3][0-7]{2}|u[0-7]{1,2})', String.Escape)
],
- 'dqs': [ # double-quoted string
+ 'dqs': [ # double-quoted string
(r'"', String, '#pop'),
(r'\\\\|\\"', String.Escape),
include('strings')
],
- 'sqs': [ # single-quoted string
+ 'sqs': [ # single-quoted string
(r"'", String, '#pop'),
(r"\\\\|\\'", String.Escape),
include('strings')
],
'name': [
- ('[a-zA-Z_][a-zA-Z0-9_]*', Name),
+ ('[a-zA-Z_]\w*', Name),
],
# numbers: excerpt taken from the python lexer
'numbers': [
(r'\d+[eE][+-]?[0-9]+', Number.Float),
(r'\d+', Number.Integer)
],
- }
-
+ }
Lexers for the R/S languages.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Lexers for semantic web and RDF query languages and markup.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
u'\u2c00-\u2fef'
u'\u3001-\ud7ff'
u'\uf900-\ufdcf'
- u'\ufdf0-\ufffd'
- u'\U00010000-\U000effff')
+ u'\ufdf0-\ufffd')
PN_CHARS_U_GRP = (PN_CHARS_BASE_GRP + '_')
HEX_GRP = '0-9A-Fa-f'
- PN_LOCAL_ESC_CHARS_GRP = r' _~.\-!$&""()*+,;=/?#@%'
+ PN_LOCAL_ESC_CHARS_GRP = r' _~.\-!$&"()*+,;=/?#@%'
# terminal productions ::
flags = re.IGNORECASE
patterns = {
- 'PNAME_NS': r'((?:[a-zA-Z][\w-]*)?\:)', # Simplified character range
+ 'PNAME_NS': r'((?:[a-z][\w-]*)?\:)', # Simplified character range
'IRIREF': r'(<[^<>"{}|^`\\\x00-\x20]*>)'
}
(r'.', String, '#pop'),
],
'end-of-string': [
-
- (r'(@)([a-zA-Z]+(:?-[a-zA-Z0-9]+)*)',
+ (r'(@)([a-z]+(?:-[a-z0-9]+)*)',
bygroups(Operator, Generic.Emph), '#pop:2'),
(r'(\^\^)%(IRIREF)s' % patterns, bygroups(Operator, Generic.Emph), '#pop:2'),
Lexers for the REBOL and related languages.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Lexer for resource definition files.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
}
def analyse_text(text):
- return text.startswith('root:table')
+ if text.startswith('root:table'):
+ return 1.0
--- /dev/null
+# -*- coding: utf-8 -*-
+"""
+ pygments.lexers.rnc
+ ~~~~~~~~~~~~~~~~~~~
+
+ Lexer for Relax-NG Compact syntax.
+
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
+ :license: BSD, see LICENSE for details.
+"""
+
+from pygments.lexer import RegexLexer
+from pygments.token import Text, Comment, Operator, Keyword, Name, String, \
+ Punctuation
+
+__all__ = ['RNCCompactLexer']
+
+
+class RNCCompactLexer(RegexLexer):
+ """
+ For `RelaxNG-compact <http://relaxng.org>`_ syntax.
+
+ .. versionadded:: 2.2
+ """
+
+ name = 'Relax-NG Compact'
+ aliases = ['rnc', 'rng-compact']
+ filenames = ['*.rnc']
+
+ tokens = {
+ 'root': [
+ (r'namespace\b', Keyword.Namespace),
+ (r'(?:default|datatypes)\b', Keyword.Declaration),
+ (r'##.*$', Comment.Preproc),
+ (r'#.*$', Comment.Single),
+ (r'"[^"]*"', String.Double),
+ # TODO single quoted strings and escape sequences outside of
+ # double-quoted strings
+ (r'(?:element|attribute|mixed)\b', Keyword.Declaration, 'variable'),
+ (r'(text\b|xsd:[^ ]+)', Keyword.Type, 'maybe_xsdattributes'),
+ (r'[,?&*=|~]|>>', Operator),
+ (r'[(){}]', Punctuation),
+ (r'.', Text),
+ ],
+
+ # a variable has been declared using `element` or `attribute`
+ 'variable': [
+ (r'[^{]+', Name.Variable),
+ (r'\{', Punctuation, '#pop'),
+ ],
+
+ # after an xsd:<datatype> declaration there may be attributes
+ 'maybe_xsdattributes': [
+ (r'\{', Punctuation, 'xsdattributes'),
+ (r'\}', Punctuation, '#pop'),
+ (r'.', Text),
+ ],
+
+ # attributes take the form { key1 = value1 key2 = value2 ... }
+ 'xsdattributes': [
+ (r'[^ =}]', Name.Attribute),
+ (r'=', Operator),
+ (r'"[^"]*"', String.Double),
+ (r'\}', Punctuation, '#pop'),
+ (r'.', Text),
+ ],
+ }
Lexers for Roboconf DSL.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Lexer for Robot Framework.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Lexers for Ruby and related languages.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
start = match.start(1)
yield start, Operator, match.group(1) # <<-?
- yield match.start(2), String.Heredoc, match.group(2) # quote ", ', `
- yield match.start(3), Name.Constant, match.group(3) # heredoc name
- yield match.start(4), String.Heredoc, match.group(4) # quote again
+ yield match.start(2), String.Heredoc, match.group(2) # quote ", ', `
+ yield match.start(3), String.Delimiter, match.group(3) # heredoc name
+ yield match.start(4), String.Heredoc, match.group(4) # quote again
heredocstack = ctx.__dict__.setdefault('heredocstack', [])
outermost = not bool(heredocstack)
if check == hdname:
for amatch in lines:
yield amatch.start(), String.Heredoc, amatch.group()
- yield match.start(), Name.Constant, match.group()
+ yield match.start(), String.Delimiter, match.group()
ctx.pos = match.end()
break
else:
Lexers for the Rust language.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
class RustLexer(RegexLexer):
"""
- Lexer for the Rust programming language (version 1.0).
+ Lexer for the Rust programming language (version 1.10).
.. versionadded:: 1.6
"""
aliases = ['rust']
mimetypes = ['text/rust']
+ keyword_types = (
+ words(('u8', 'u16', 'u32', 'u64', 'i8', 'i16', 'i32', 'i64',
+ 'usize', 'isize', 'f32', 'f64', 'str', 'bool'),
+ suffix=r'\b'),
+ Keyword.Type)
+
+ builtin_types = (words((
+ # Reexported core operators
+ 'Copy', 'Send', 'Sized', 'Sync',
+ 'Drop', 'Fn', 'FnMut', 'FnOnce',
+
+ # Reexported types and traits
+ 'Box',
+ 'ToOwned',
+ 'Clone',
+ 'PartialEq', 'PartialOrd', 'Eq', 'Ord',
+ 'AsRef', 'AsMut', 'Into', 'From',
+ 'Default',
+ 'Iterator', 'Extend', 'IntoIterator',
+ 'DoubleEndedIterator', 'ExactSizeIterator',
+ 'Option',
+ 'Some', 'None',
+ 'Result',
+ 'Ok', 'Err',
+ 'SliceConcatExt',
+ 'String', 'ToString',
+ 'Vec'), suffix=r'\b'),
+ Name.Builtin)
+
tokens = {
'root': [
# rust allows a file to start with a shebang, but if the first line
(r"""\$([a-zA-Z_]\w*|\(,?|\),?|,?)""", Comment.Preproc),
# Keywords
(words((
- 'as', 'box', 'crate', 'do', 'else', 'enum', 'extern', # break and continue are in labels
- 'fn', 'for', 'if', 'impl', 'in', 'loop', 'match', 'mut', 'priv',
- 'proc', 'pub', 'ref', 'return', 'static', 'struct',
- 'trait', 'true', 'type', 'unsafe', 'while'), suffix=r'\b'),
+ 'as', 'box', 'const', 'crate', 'else', 'extern',
+ 'for', 'if', 'impl', 'in', 'loop', 'match', 'move',
+ 'mut', 'pub', 'ref', 'return', 'static', 'super',
+ 'trait', 'unsafe', 'use', 'where', 'while'), suffix=r'\b'),
Keyword),
- (words(('alignof', 'be', 'const', 'offsetof', 'pure', 'sizeof',
- 'typeof', 'once', 'unsized', 'yield'), suffix=r'\b'),
+ (words(('abstract', 'alignof', 'become', 'do', 'final', 'macro',
+ 'offsetof', 'override', 'priv', 'proc', 'pure', 'sizeof',
+ 'typeof', 'unsized', 'virtual', 'yield'), suffix=r'\b'),
Keyword.Reserved),
- (r'(mod|use)\b', Keyword.Namespace),
(r'(true|false)\b', Keyword.Constant),
+ (r'mod\b', Keyword, 'modname'),
(r'let\b', Keyword.Declaration),
- (words(('u8', 'u16', 'u32', 'u64', 'i8', 'i16', 'i32', 'i64', 'usize',
- 'isize', 'f32', 'f64', 'str', 'bool'), suffix=r'\b'),
- Keyword.Type),
+ (r'fn\b', Keyword, 'funcname'),
+ (r'(struct|enum|type|union)\b', Keyword, 'typename'),
+ (r'(default)(\s+)(type|fn)\b', bygroups(Keyword, Text, Keyword)),
+ keyword_types,
(r'self\b', Name.Builtin.Pseudo),
# Prelude (taken from Rust’s src/libstd/prelude.rs)
- (words((
- # Reexported core operators
- 'Copy', 'Send', 'Sized', 'Sync',
- 'Drop', 'Fn', 'FnMut', 'FnOnce',
-
- # Reexported functions
- 'drop',
-
- # Reexported types and traits
- 'Box',
- 'ToOwned',
- 'Clone',
- 'PartialEq', 'PartialOrd', 'Eq', 'Ord',
- 'AsRef', 'AsMut', 'Into', 'From',
- 'Default',
- 'Iterator', 'Extend', 'IntoIterator',
- 'DoubleEndedIterator', 'ExactSizeIterator',
- 'Option',
- 'Some', 'None',
- 'Result',
- 'Ok', 'Err',
- 'SliceConcatExt',
- 'String', 'ToString',
- 'Vec',
- ), suffix=r'\b'),
- Name.Builtin),
+ builtin_types,
+ # Path separators, so types don't catch them.
+ (r'::\b', Text),
+ # Types in positions.
+ (r'(?::|->)', Text, 'typename'),
# Labels
- (r'(break|continue)(\s*)(\'[A-Za-z_]\w*)?', bygroups(Keyword, Text.Whitespace, Name.Label)),
+ (r'(break|continue)(\s*)(\'[A-Za-z_]\w*)?',
+ bygroups(Keyword, Text.Whitespace, Name.Label)),
# Character Literal
(r"""'(\\['"\\nrt]|\\x[0-7][0-9a-fA-F]|\\0"""
r"""|\\u\{[0-9a-fA-F]{1,6}\}|.)'""",
(r'0[xX][0-9a-fA-F_]+', Number.Hex, 'number_lit'),
# Decimal Literal
(r'[0-9][0-9_]*(\.[0-9_]+[eE][+\-]?[0-9_]+|'
- r'\.[0-9_]*(?!\.)|[eE][+\-]?[0-9_]+)', Number.Float, 'number_lit'),
+ r'\.[0-9_]*(?!\.)|[eE][+\-]?[0-9_]+)', Number.Float,
+ 'number_lit'),
(r'[0-9][0-9_]*', Number.Integer, 'number_lit'),
# String Literal
(r'b"', String, 'bytestring'),
(r'\*/', String.Doc, '#pop'),
(r'[*/]', String.Doc),
],
+ 'modname': [
+ (r'\s+', Text),
+ (r'[a-zA-Z_]\w*', Name.Namespace, '#pop'),
+ default('#pop'),
+ ],
+ 'funcname': [
+ (r'\s+', Text),
+ (r'[a-zA-Z_]\w*', Name.Function, '#pop'),
+ default('#pop'),
+ ],
+ 'typename': [
+ (r'\s+', Text),
+ (r'&', Keyword.Pseudo),
+ builtin_types,
+ keyword_types,
+ (r'[a-zA-Z_]\w*', Name.Class, '#pop'),
+ default('#pop'),
+ ],
'number_lit': [
(r'[ui](8|16|32|64|size)', Keyword, '#pop'),
(r'f(32|64)', Keyword, '#pop'),
--- /dev/null
+# -*- coding: utf-8 -*-
+"""
+ pygments.lexers.sas
+ ~~~~~~~~~~~~~~~~~~~
+
+ Lexer for SAS.
+
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
+ :license: BSD, see LICENSE for details.
+"""
+
+import re
+from pygments.lexer import RegexLexer, include, words
+from pygments.token import Comment, Keyword, Name, Number, String, Text, \
+ Other, Generic
+
+__all__ = ['SASLexer']
+
+
+class SASLexer(RegexLexer):
+ """
+ For `SAS <http://www.sas.com/>`_ files.
+
+ .. versionadded:: 2.2
+ """
+ # Syntax from syntax/sas.vim by James Kidd <james.kidd@covance.com>
+
+ name = 'SAS'
+ aliases = ['sas']
+ filenames = ['*.SAS', '*.sas']
+ mimetypes = ['text/x-sas', 'text/sas', 'application/x-sas']
+ flags = re.IGNORECASE | re.MULTILINE
+
+ builtins_macros = (
+ "bquote", "nrbquote", "cmpres", "qcmpres", "compstor", "datatyp",
+ "display", "do", "else", "end", "eval", "global", "goto", "if",
+ "index", "input", "keydef", "label", "left", "length", "let",
+ "local", "lowcase", "macro", "mend", "nrquote",
+ "nrstr", "put", "qleft", "qlowcase", "qscan",
+ "qsubstr", "qsysfunc", "qtrim", "quote", "qupcase", "scan",
+ "str", "substr", "superq", "syscall", "sysevalf", "sysexec",
+ "sysfunc", "sysget", "syslput", "sysprod", "sysrc", "sysrput",
+ "then", "to", "trim", "unquote", "until", "upcase", "verify",
+ "while", "window"
+ )
+
+ builtins_conditionals = (
+ "do", "if", "then", "else", "end", "until", "while"
+ )
+
+ builtins_statements = (
+ "abort", "array", "attrib", "by", "call", "cards", "cards4",
+ "catname", "continue", "datalines", "datalines4", "delete", "delim",
+ "delimiter", "display", "dm", "drop", "endsas", "error", "file",
+ "filename", "footnote", "format", "goto", "in", "infile", "informat",
+ "input", "keep", "label", "leave", "length", "libname", "link",
+ "list", "lostcard", "merge", "missing", "modify", "options", "output",
+ "out", "page", "put", "redirect", "remove", "rename", "replace",
+ "retain", "return", "select", "set", "skip", "startsas", "stop",
+ "title", "update", "waitsas", "where", "window", "x", "systask"
+ )
+
+ builtins_sql = (
+ "add", "and", "alter", "as", "cascade", "check", "create",
+ "delete", "describe", "distinct", "drop", "foreign", "from",
+ "group", "having", "index", "insert", "into", "in", "key", "like",
+ "message", "modify", "msgtype", "not", "null", "on", "or",
+ "order", "primary", "references", "reset", "restrict", "select",
+ "set", "table", "unique", "update", "validate", "view", "where"
+ )
+
+ builtins_functions = (
+ "abs", "addr", "airy", "arcos", "arsin", "atan", "attrc",
+ "attrn", "band", "betainv", "blshift", "bnot", "bor",
+ "brshift", "bxor", "byte", "cdf", "ceil", "cexist", "cinv",
+ "close", "cnonct", "collate", "compbl", "compound",
+ "compress", "cos", "cosh", "css", "curobs", "cv", "daccdb",
+ "daccdbsl", "daccsl", "daccsyd", "dacctab", "dairy", "date",
+ "datejul", "datepart", "datetime", "day", "dclose", "depdb",
+ "depdbsl", "depsl", "depsyd",
+ "deptab", "dequote", "dhms", "dif", "digamma",
+ "dim", "dinfo", "dnum", "dopen", "doptname", "doptnum",
+ "dread", "dropnote", "dsname", "erf", "erfc", "exist", "exp",
+ "fappend", "fclose", "fcol", "fdelete", "fetch", "fetchobs",
+ "fexist", "fget", "fileexist", "filename", "fileref",
+ "finfo", "finv", "fipname", "fipnamel", "fipstate", "floor",
+ "fnonct", "fnote", "fopen", "foptname", "foptnum", "fpoint",
+ "fpos", "fput", "fread", "frewind", "frlen", "fsep", "fuzz",
+ "fwrite", "gaminv", "gamma", "getoption", "getvarc", "getvarn",
+ "hbound", "hms", "hosthelp", "hour", "ibessel", "index",
+ "indexc", "indexw", "input", "inputc", "inputn", "int",
+ "intck", "intnx", "intrr", "irr", "jbessel", "juldate",
+ "kurtosis", "lag", "lbound", "left", "length", "lgamma",
+ "libname", "libref", "log", "log10", "log2", "logpdf", "logpmf",
+ "logsdf", "lowcase", "max", "mdy", "mean", "min", "minute",
+ "mod", "month", "mopen", "mort", "n", "netpv", "nmiss",
+ "normal", "note", "npv", "open", "ordinal", "pathname",
+ "pdf", "peek", "peekc", "pmf", "point", "poisson", "poke",
+ "probbeta", "probbnml", "probchi", "probf", "probgam",
+ "probhypr", "probit", "probnegb", "probnorm", "probt",
+ "put", "putc", "putn", "qtr", "quote", "ranbin", "rancau",
+ "ranexp", "rangam", "range", "rank", "rannor", "ranpoi",
+ "rantbl", "rantri", "ranuni", "repeat", "resolve", "reverse",
+ "rewind", "right", "round", "saving", "scan", "sdf", "second",
+ "sign", "sin", "sinh", "skewness", "soundex", "spedis",
+ "sqrt", "std", "stderr", "stfips", "stname", "stnamel",
+ "substr", "sum", "symget", "sysget", "sysmsg", "sysprod",
+ "sysrc", "system", "tan", "tanh", "time", "timepart", "tinv",
+ "tnonct", "today", "translate", "tranwrd", "trigamma",
+ "trim", "trimn", "trunc", "uniform", "upcase", "uss", "var",
+ "varfmt", "varinfmt", "varlabel", "varlen", "varname",
+ "varnum", "varray", "varrayx", "vartype", "verify", "vformat",
+ "vformatd", "vformatdx", "vformatn", "vformatnx", "vformatw",
+ "vformatwx", "vformatx", "vinarray", "vinarrayx", "vinformat",
+ "vinformatd", "vinformatdx", "vinformatn", "vinformatnx",
+ "vinformatw", "vinformatwx", "vinformatx", "vlabel",
+ "vlabelx", "vlength", "vlengthx", "vname", "vnamex", "vtype",
+ "vtypex", "weekday", "year", "yyq", "zipfips", "zipname",
+ "zipnamel", "zipstate"
+ )
+
+ tokens = {
+ 'root': [
+ include('comments'),
+ include('proc-data'),
+ include('cards-datalines'),
+ include('logs'),
+ include('general'),
+ (r'.', Text),
+ ],
+ # SAS is multi-line regardless, but * is ended by ;
+ 'comments': [
+ (r'^\s*\*.*?;', Comment),
+ (r'/\*.*?\*/', Comment),
+ (r'^\s*\*(.|\n)*?;', Comment.Multiline),
+ (r'/[*](.|\n)*?[*]/', Comment.Multiline),
+ ],
+ # Special highlight for proc, data, quit, run
+ 'proc-data': [
+ (r'(^|;)\s*(proc \w+|data|run|quit)[\s;]',
+ Keyword.Reserved),
+ ],
+ # Special highlight for cards and datalines
+ 'cards-datalines': [
+ (r'^\s*(datalines|cards)\s*;\s*$', Keyword, 'data'),
+ ],
+ 'data': [
+ (r'(.|\n)*^\s*;\s*$', Other, '#pop'),
+ ],
+ # Special highlight for put NOTE|ERROR|WARNING (order matters)
+ 'logs': [
+ (r'\n?^\s*%?put ', Keyword, 'log-messages'),
+ ],
+ 'log-messages': [
+ (r'NOTE(:|-).*', Generic, '#pop'),
+ (r'WARNING(:|-).*', Generic.Emph, '#pop'),
+ (r'ERROR(:|-).*', Generic.Error, '#pop'),
+ include('general'),
+ ],
+ 'general': [
+ include('keywords'),
+ include('vars-strings'),
+ include('special'),
+ include('numbers'),
+ ],
+ # Keywords, statements, functions, macros
+ 'keywords': [
+ (words(builtins_statements,
+ prefix=r'\b',
+ suffix=r'\b'),
+ Keyword),
+ (words(builtins_sql,
+ prefix=r'\b',
+ suffix=r'\b'),
+ Keyword),
+ (words(builtins_conditionals,
+ prefix=r'\b',
+ suffix=r'\b'),
+ Keyword),
+ (words(builtins_macros,
+ prefix=r'%',
+ suffix=r'\b'),
+ Name.Builtin),
+ (words(builtins_functions,
+ prefix=r'\b',
+ suffix=r'\('),
+ Name.Builtin),
+ ],
+ # Strings and user-defined variables and macros (order matters)
+ 'vars-strings': [
+ (r'&[a-z_]\w{0,31}\.?', Name.Variable),
+ (r'%[a-z_]\w{0,31}', Name.Function),
+ (r'\'', String, 'string_squote'),
+ (r'"', String, 'string_dquote'),
+ ],
+ 'string_squote': [
+ ('\'', String, '#pop'),
+ (r'\\\\|\\"|\\\n', String.Escape),
+ # AFAIK, macro variables are not evaluated in single quotes
+ # (r'&', Name.Variable, 'validvar'),
+ (r'[^$\'\\]+', String),
+ (r'[$\'\\]', String),
+ ],
+ 'string_dquote': [
+ (r'"', String, '#pop'),
+ (r'\\\\|\\"|\\\n', String.Escape),
+ (r'&', Name.Variable, 'validvar'),
+ (r'[^$&"\\]+', String),
+ (r'[$"\\]', String),
+ ],
+ 'validvar': [
+ (r'[a-z_]\w{0,31}\.?', Name.Variable, '#pop'),
+ ],
+ # SAS numbers and special variables
+ 'numbers': [
+ (r'\b[+-]?([0-9]+(\.[0-9]+)?|\.[0-9]+|\.)(E[+-]?[0-9]+)?i?\b',
+ Number),
+ ],
+ 'special': [
+ (r'(null|missing|_all_|_automatic_|_character_|_n_|'
+ r'_infile_|_name_|_null_|_numeric_|_user_|_webout_)',
+ Keyword.Constant),
+ ],
+ # 'operators': [
+ # (r'(-|=|<=|>=|<|>|<>|&|!=|'
+ # r'\||\*|\+|\^|/|!|~|~=)', Operator)
+ # ],
+ }
Lexer for scripting and embedded languages.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
filenames = ['*.lua', '*.wlua']
mimetypes = ['text/x-lua', 'application/x-lua']
+ _comment_multiline = r'(?:--\[(?P<level>=*)\[[\w\W]*?\](?P=level)\])'
+ _comment_single = r'(?:--.*$)'
+ _space = r'(?:\s+)'
+ _s = r'(?:%s|%s|%s)' % (_comment_multiline, _comment_single, _space)
+ _name = r'(?:[^\W\d]\w*)'
+
tokens = {
'root': [
- # lua allows a file to start with a shebang
- (r'#!(.*?)$', Comment.Preproc),
+ # Lua allows a file to start with a shebang.
+ (r'#!.*', Comment.Preproc),
default('base'),
],
+ 'ws': [
+ (_comment_multiline, Comment.Multiline),
+ (_comment_single, Comment.Single),
+ (_space, Text),
+ ],
'base': [
- (r'(?s)--\[(=*)\[.*?\]\1\]', Comment.Multiline),
- ('--.*$', Comment.Single),
+ include('ws'),
+ (r'(?i)0x[\da-f]*(\.[\da-f]*)?(p[+-]?\d+)?', Number.Hex),
(r'(?i)(\d*\.\d+|\d+\.\d*)(e[+-]?\d+)?', Number.Float),
(r'(?i)\d+e[+-]?\d+', Number.Float),
- ('(?i)0x[0-9a-f]*', Number.Hex),
(r'\d+', Number.Integer),
- (r'\n', Text),
- (r'[^\S\n]', Text),
# multiline strings
(r'(?s)\[(=*)\[.*?\]\1\]', String),
- (r'(==|~=|<=|>=|\.\.\.|\.\.|[=+\-*/%^<>#])', Operator),
+ (r'::', Punctuation, 'label'),
+ (r'\.{3}', Punctuation),
+ (r'[=<>|~&+\-*/%#^]+|\.\.', Operator),
(r'[\[\]{}().,:;]', Punctuation),
(r'(and|or|not)\b', Operator.Word),
('(break|do|else|elseif|end|for|if|in|repeat|return|then|until|'
- r'while)\b', Keyword),
+ r'while)\b', Keyword.Reserved),
+ (r'goto\b', Keyword.Reserved, 'goto'),
(r'(local)\b', Keyword.Declaration),
(r'(true|false|nil)\b', Keyword.Constant),
- (r'(function)\b', Keyword, 'funcname'),
+ (r'(function)\b', Keyword.Reserved, 'funcname'),
(r'[A-Za-z_]\w*(\.[A-Za-z_]\w*)?', Name),
],
'funcname': [
- (r'\s+', Text),
- ('(?:([A-Za-z_]\w*)(\.))?([A-Za-z_]\w*)',
- bygroups(Name.Class, Punctuation, Name.Function), '#pop'),
+ include('ws'),
+ (r'[.:]', Punctuation),
+ (r'%s(?=%s*[.:])' % (_name, _s), Name.Class),
+ (_name, Name.Function, '#pop'),
# inline function
('\(', Punctuation, '#pop'),
],
- # if I understand correctly, every character is valid in a lua string,
- # so this state is only for later corrections
- 'string': [
- ('.', String)
+ 'goto': [
+ include('ws'),
+ (_name, Name.Label, '#pop'),
+ ],
+
+ 'label': [
+ include('ws'),
+ (r'::', Punctuation, '#pop'),
+ (_name, Name.Label),
],
'stringescape': [
- (r'''\\([abfnrtv\\"']|\d{1,3})''', String.Escape)
+ (r'\\([abfnrtv\\"\']|[\r\n]{1,2}|z\s*|x[0-9a-fA-F]{2}|\d{1,3}|'
+ r'u\{[0-9a-fA-F]+\})', String.Escape),
],
'sqs': [
- ("'", String, '#pop'),
- include('string')
+ (r"'", String.Single, '#pop'),
+ (r"[^\\']+", String.Single),
],
'dqs': [
- ('"', String, '#pop'),
- include('string')
+ (r'"', String.Double, '#pop'),
+ (r'[^\\"]+', String.Double),
]
}
(r"'(''|[^'])*'", String),
(r'\s+', Whitespace),
# Everything else just belongs to a name
- (_NON_DELIMITER_OR_COMMENT_PATTERN + r'+', Name)
+ (_NON_DELIMITER_OR_COMMENT_PATTERN + r'+', Name),
],
'after_declaration': [
(_NON_DELIMITER_OR_COMMENT_PATTERN + r'+', Name.Function),
- ('', Whitespace, '#pop')
+ default('#pop'),
],
'after_macro_argument': [
(r'\*.*\n', Comment.Single, '#pop'),
(_OPERATORS_PATTERN, Operator, '#pop'),
(r"'(''|[^'])*'", String, '#pop'),
# Everything else just belongs to a name
- (_NON_DELIMITER_OR_COMMENT_PATTERN + r'+', Name)
+ (_NON_DELIMITER_OR_COMMENT_PATTERN + r'+', Name),
],
}
_COMMENT_LINE_REGEX = re.compile(r'^\s*\*')
class JclLexer(RegexLexer):
"""
- `Job Control Language (JCL) <http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/IEA2B570/CCONTENTS>`_
+ `Job Control Language (JCL)
+ <http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/IEA2B570/CCONTENTS>`_
is a scripting language used on mainframe platforms to instruct the system
on how to run a batch job or start a subsystem. It is somewhat
comparable to MS DOS batch and Unix shell scripts.
],
'statement': [
(r'\s*\n', Whitespace, '#pop'),
- (r'([a-z][a-z_0-9]*)(\s+)(exec|job)(\s*)',
+ (r'([a-z]\w*)(\s+)(exec|job)(\s*)',
bygroups(Name.Label, Whitespace, Keyword.Reserved, Whitespace),
'option'),
- (r'[a-z][a-z_0-9]*', Name.Variable, 'statement_command'),
+ (r'[a-z]\w*', Name.Variable, 'statement_command'),
(r'\s+', Whitespace, 'statement_command'),
],
'statement_command': [
(r'\*', Name.Builtin),
(r'[\[\](){}<>;,]', Punctuation),
(r'[-+*/=&%]', Operator),
- (r'[a-z_][a-z_0-9]*', Name),
- (r'[0-9]+\.[0-9]*', Number.Float),
- (r'\.[0-9]+', Number.Float),
- (r'[0-9]+', Number.Integer),
+ (r'[a-z_]\w*', Name),
+ (r'\d+\.\d*', Number.Float),
+ (r'\.\d+', Number.Float),
+ (r'\d+', Number.Integer),
(r"'", String, 'option_string'),
(r'[ \t]+', Whitespace, 'option_comment'),
(r'\.', Punctuation),
Lexers for various shells.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
class BashLexer(RegexLexer):
"""
- Lexer for (ba|k|)sh shell scripts.
+ Lexer for (ba|k|z|)sh shell scripts.
.. versionadded:: 0.6
"""
name = 'Bash'
- aliases = ['bash', 'sh', 'ksh', 'shell']
+ aliases = ['bash', 'sh', 'ksh', 'zsh', 'shell']
filenames = ['*.sh', '*.ksh', '*.bash', '*.ebuild', '*.eclass',
- '*.exheres-0', '*.exlib',
- '.bashrc', 'bashrc', '.bash_*', 'bash_*', 'PKGBUILD']
+ '*.exheres-0', '*.exlib', '*.zsh',
+ '.bashrc', 'bashrc', '.bash_*', 'bash_*', 'zshrc', '.zshrc',
+ 'PKGBUILD']
mimetypes = ['application/x-sh', 'application/x-shellscript']
tokens = {
(r'\$\(\(', Keyword, 'math'),
(r'\$\(', Keyword, 'paren'),
(r'\$\{#?', String.Interpol, 'curly'),
- (r'\$[a-zA-Z_][a-zA-Z0-9_]*', Name.Variable), # user variable
+ (r'\$[a-zA-Z_]\w*', Name.Variable), # user variable
(r'\$(?:\d+|[#$?!_*@-])', Name.Variable), # builtin
(r'\$', Text),
],
(r'\A#!.+\n', Comment.Hashbang),
(r'#.*\n', Comment.Single),
(r'\\[\w\W]', String.Escape),
- (r'(\b\w+)(\s*)(=)', bygroups(Name.Variable, Text, Operator)),
+ (r'(\b\w+)(\s*)(\+?=)', bygroups(Name.Variable, Text, Operator)),
(r'[\[\]{}()=]', Operator),
(r'<<<', Operator), # here-string
(r'<<-?\s*(\'?)\\?(\w+)[\w\W]+?\2', String),
(r'&', Punctuation),
(r'\|', Punctuation),
(r'\s+', Text),
- (r'\d+(?= |\Z)', Number),
+ (r'\d+\b', Number),
(r'[^=\s\[\]{}()$"\'`\\<&|;]+', Text),
(r'<', Text),
],
pos = 0
curcode = ''
insertions = []
+ backslash_continuation = False
for match in line_re.finditer(text):
line = match.group()
m = re.match(self._ps1rgx, line)
- if m:
+ if backslash_continuation:
+ curcode += line
+ backslash_continuation = curcode.endswith('\\\n')
+ elif m:
# To support output lexers (say diff output), the output
# needs to be broken by prompts whenever the output lexer
# changes.
insertions.append((len(curcode),
[(0, Generic.Prompt, m.group(1))]))
curcode += m.group(2)
+ backslash_continuation = curcode.endswith('\\\n')
elif line.startswith(self._ps2):
insertions.append((len(curcode),
[(0, Generic.Prompt, line[:len(self._ps2)])]))
curcode += line[len(self._ps2):]
+ backslash_continuation = curcode.endswith('\\\n')
else:
if insertions:
toks = innerlexer.get_tokens_unprocessed(curcode)
bygroups(String.Double, using(this, state='string'), Text,
Punctuation)),
(r'"', String.Double, ('#pop', 'for2', 'string')),
- (r"('(?:%s|[\w\W])*?')([%s%s]*)(\))" % (_variable, _nl, _ws),
+ (r"('(?:%%%%|%s|[\w\W])*?')([%s%s]*)(\))" % (_variable, _nl, _ws),
bygroups(using(this, state='sqstring'), Text, Punctuation)),
- (r'(`(?:%s|[\w\W])*?`)([%s%s]*)(\))' % (_variable, _nl, _ws),
+ (r'(`(?:%%%%|%s|[\w\W])*?`)([%s%s]*)(\))' % (_variable, _nl, _ws),
bygroups(using(this, state='bqstring'), Text, Punctuation)),
include('for2')
],
using(this, state='variable')), '#pop'),
(r'(exist%s)(%s%s)' % (_token_terminator, _space, _stoken),
bygroups(Keyword, using(this, state='text')), '#pop'),
- (r'(%s%s?)(==)(%s?%s)' % (_stoken, _space, _space, _stoken),
- bygroups(using(this, state='text'), Operator,
- using(this, state='text')), '#pop'),
(r'(%s%s)(%s)(%s%s)' % (_number, _space, _opword, _space, _number),
bygroups(using(this, state='arithmetic'), Operator.Word,
using(this, state='arithmetic')), '#pop'),
- (r'(%s%s)(%s)(%s%s)' % (_stoken, _space, _opword, _space, _stoken),
+ (_stoken, using(this, state='text'), ('#pop', 'if2')),
+ ],
+ 'if2': [
+ (r'(%s?)(==)(%s?%s)' % (_space, _space, _stoken),
+ bygroups(using(this, state='text'), Operator,
+ using(this, state='text')), '#pop'),
+ (r'(%s)(%s)(%s%s)' % (_space, _opword, _space, _stoken),
bygroups(using(this, state='text'), Operator.Word,
using(this, state='text')), '#pop')
],
Lexers for Smalltalk and related languages.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
--- /dev/null
+# -*- coding: utf-8 -*-
+"""
+ pygments.lexers.smv
+ ~~~~~~~~~~~~~~~~~~~
+
+ Lexers for the SMV languages.
+
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
+ :license: BSD, see LICENSE for details.
+"""
+
+from pygments.lexer import RegexLexer, words
+from pygments.token import Comment, Generic, Keyword, Name, Number, \
+ Operator, Punctuation, Text
+
+__all__ = ['NuSMVLexer']
+
+
+class NuSMVLexer(RegexLexer):
+ """
+ Lexer for the NuSMV language.
+
+ .. versionadded:: 2.2
+ """
+
+ name = 'NuSMV'
+ aliases = ['nusmv']
+ filenames = ['*.smv']
+ mimetypes = []
+
+ tokens = {
+ 'root': [
+ # Comments
+ (r'(?s)\/\-\-.*?\-\-/', Comment),
+ (r'--.*\n', Comment),
+
+ # Reserved
+ (words(('MODULE', 'DEFINE', 'MDEFINE', 'CONSTANTS', 'VAR', 'IVAR',
+ 'FROZENVAR', 'INIT', 'TRANS', 'INVAR', 'SPEC', 'CTLSPEC',
+ 'LTLSPEC', 'PSLSPEC', 'COMPUTE', 'NAME', 'INVARSPEC',
+ 'FAIRNESS', 'JUSTICE', 'COMPASSION', 'ISA', 'ASSIGN',
+ 'CONSTRAINT', 'SIMPWFF', 'CTLWFF', 'LTLWFF', 'PSLWFF',
+ 'COMPWFF', 'IN', 'MIN', 'MAX', 'MIRROR', 'PRED',
+ 'PREDICATES'), suffix=r'(?![\w$#-])'),
+ Keyword.Declaration),
+ (r'process(?![\w$#-])', Keyword),
+ (words(('array', 'of', 'boolean', 'integer', 'real', 'word'),
+ suffix=r'(?![\w$#-])'), Keyword.Type),
+ (words(('case', 'esac'), suffix=r'(?![\w$#-])'), Keyword),
+ (words(('word1', 'bool', 'signed', 'unsigned', 'extend', 'resize',
+ 'sizeof', 'uwconst', 'swconst', 'init', 'self', 'count',
+ 'abs', 'max', 'min'), suffix=r'(?![\w$#-])'),
+ Name.Builtin),
+ (words(('EX', 'AX', 'EF', 'AF', 'EG', 'AG', 'E', 'F', 'O', 'G',
+ 'H', 'X', 'Y', 'Z', 'A', 'U', 'S', 'V', 'T', 'BU', 'EBF',
+ 'ABF', 'EBG', 'ABG', 'next', 'mod', 'union', 'in', 'xor',
+ 'xnor'), suffix=r'(?![\w$#-])'),
+ Operator.Word),
+ (words(('TRUE', 'FALSE'), suffix=r'(?![\w$#-])'), Keyword.Constant),
+
+ # Names
+ (r'[a-zA-Z_][\w$#-]*', Name.Variable),
+
+ # Operators
+ (r':=', Operator),
+ (r'[-&|+*/<>!=]', Operator),
+
+ # Literals
+ (r'\-?\d+\b', Number.Integer),
+ (r'0[su][bB]\d*_[01_]+', Number.Bin),
+ (r'0[su][oO]\d*_[0-7_]+', Number.Oct),
+ (r'0[su][dD]\d*_[\d_]+', Number.Dec),
+ (r'0[su][hH]\d*_[\da-fA-F_]+', Number.Hex),
+
+ # Whitespace, punctuation and the rest
+ (r'\s+', Text.Whitespace),
+ (r'[()\[\]{};?:.,]', Punctuation),
+ ],
+ }
Lexers for the SNOBOL language.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Special lexers.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
aliases = ['text']
filenames = ['*.txt']
mimetypes = ['text/plain']
+ priority = 0.01
def get_tokens_unprocessed(self, text):
yield 0, Text, text
+ def analyse_text(text):
+ return TextLexer.priority
_ttype_cache = {}
The ``tests/examplefiles`` contains a few test files with data to be
parsed by these lexers.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
import re
from pygments.lexer import Lexer, RegexLexer, do_insertions, bygroups, words
-from pygments.token import Punctuation, \
+from pygments.token import Punctuation, Whitespace, Error, \
Text, Comment, Operator, Keyword, Name, String, Number, Generic
from pygments.lexers import get_lexer_by_name, ClassNotFound
from pygments.util import iteritems
from pygments.lexers._postgres_builtins import KEYWORDS, DATATYPES, \
PSEUDO_TYPES, PLPGSQL_KEYWORDS
+from pygments.lexers import _tsql_builtins
__all__ = ['PostgresLexer', 'PlPgsqlLexer', 'PostgresConsoleLexer',
- 'SqlLexer', 'MySqlLexer', 'SqliteConsoleLexer', 'RqlLexer']
+ 'SqlLexer', 'TransactSqlLexer', 'MySqlLexer',
+ 'SqliteConsoleLexer', 'RqlLexer']
line_re = re.compile('.*?\n')
language_re = re.compile(r"\s+LANGUAGE\s+'?(\w+)'?", re.IGNORECASE)
+do_re = re.compile(r'\bDO\b', re.IGNORECASE)
+
def language_callback(lexer, match):
"""Parse the content of a $-string using a lexer
- The lexer is chosen looking for a nearby LANGUAGE.
+ The lexer is chosen looking for a nearby LANGUAGE or assumed as
+ plpgsql if inside a DO statement and no LANGUAGE has been found.
"""
l = None
m = language_re.match(lexer.text[match.end():match.end()+100])
lexer.text[max(0, match.start()-100):match.start()]))
if m:
l = lexer._get_lexer(m[-1].group(1))
-
+ else:
+ m = list(do_re.finditer(
+ lexer.text[max(0, match.start()-25):match.start()]))
+ if m:
+ l = lexer._get_lexer('plpgsql')
+
+ # 1 = $, 2 = delimiter, 3 = $
+ yield (match.start(1), String, match.group(1))
+ yield (match.start(2), String.Delimiter, match.group(2))
+ yield (match.start(3), String, match.group(3))
+ # 4 = string contents
if l:
- yield (match.start(1), String, match.group(1))
- for x in l.get_tokens_unprocessed(match.group(2)):
+ for x in l.get_tokens_unprocessed(match.group(4)):
yield x
- yield (match.start(3), String, match.group(3))
-
else:
- yield (match.start(), String, match.group())
+ yield (match.start(4), String, match.group(4))
+ # 5 = $, 6 = delimiter, 7 = $
+ yield (match.start(5), String, match.group(5))
+ yield (match.start(6), String.Delimiter, match.group(6))
+ yield (match.start(7), String, match.group(7))
class PostgresBase(object):
tokens = {
'root': [
(r'\s+', Text),
- (r'--.*?\n', Comment.Single),
+ (r'--.*\n?', Comment.Single),
(r'/\*', Comment.Multiline, 'multiline-comments'),
(r'(' + '|'.join(s.replace(" ", "\s+")
for s in DATATYPES + PSEUDO_TYPES)
(r'\$\d+', Name.Variable),
(r'([0-9]*\.[0-9]*|[0-9]+)(e[+-]?[0-9]+)?', Number.Float),
(r'[0-9]+', Number.Integer),
- (r"(E|U&)?'", String.Single, 'string'),
- (r'(U&)?"', String.Name, 'quoted-ident'), # quoted identifier
- (r'(?s)(\$[^$]*\$)(.*?)(\1)', language_callback),
+ (r"((?:E|U&)?)(')", bygroups(String.Affix, String.Single), 'string'),
+ # quoted identifier
+ (r'((?:U&)?)(")', bygroups(String.Affix, String.Name), 'quoted-ident'),
+ (r'(?s)(\$)([^$]*)(\$)(.*?)(\$)(\2)(\$)', language_callback),
(r'[a-z_]\w*', Name),
# psql variable in SQL
tokens = {
'root': [
(r'\s+', Text),
- (r'--.*?\n', Comment.Single),
+ (r'--.*\n?', Comment.Single),
(r'/\*', Comment.Multiline, 'multiline-comments'),
(words((
'ABORT', 'ABS', 'ABSOLUTE', 'ACCESS', 'ADA', 'ADD', 'ADMIN', 'AFTER', 'AGGREGATE',
}
+class TransactSqlLexer(RegexLexer):
+ """
+ Transact-SQL (T-SQL) is Microsoft's and Sybase's proprietary extension to
+ SQL.
+
+ The list of keywords includes ODBC and keywords reserved for future use.
+ """
+
+ name = 'Transact-SQL'
+ aliases = ['tsql', 't-sql']
+ filenames = ['*.sql']
+ mimetypes = ['text/x-tsql']
+
+ # Use re.UNICODE to allow non-ASCII letters in names.
+ flags = re.IGNORECASE | re.UNICODE
+ tokens = {
+ 'root': [
+ (r'\s+', Whitespace),
+ (r'(?m)--.*?$\n?', Comment.Single),
+ (r'/\*', Comment.Multiline, 'multiline-comments'),
+ (words(_tsql_builtins.OPERATORS), Operator),
+ (words(_tsql_builtins.OPERATOR_WORDS, suffix=r'\b'), Operator.Word),
+ (words(_tsql_builtins.TYPES, suffix=r'\b'), Name.Class),
+ (words(_tsql_builtins.FUNCTIONS, suffix=r'\b'), Name.Function),
+ (r'(goto)(\s+)(\w+\b)', bygroups(Keyword, Whitespace, Name.Label)),
+ (words(_tsql_builtins.KEYWORDS, suffix=r'\b'), Keyword),
+ (r'(\[)([^]]+)(\])', bygroups(Operator, Name, Operator)),
+ (r'0x[0-9a-f]+', Number.Hex),
+ # Float variant 1, for example: 1., 1.e2, 1.2e3
+ (r'[0-9]+\.[0-9]*(e[+-]?[0-9]+)?', Number.Float),
+ # Float variant 2, for example: .1, .1e2
+ (r'\.[0-9]+(e[+-]?[0-9]+)?', Number.Float),
+ # Float variant 3, for example: 123e45
+ (r'[0-9]+e[+-]?[0-9]+', Number.Float),
+ (r'[0-9]+', Number.Integer),
+ (r"'(''|[^'])*'", String.Single),
+ (r'"(""|[^"])*"', String.Symbol),
+ (r'[;(),.]', Punctuation),
+ # Below we use \w even for the first "real" character because
+ # tokens starting with a digit have already been recognized
+ # as Number above.
+ (r'@@\w+', Name.Builtin),
+ (r'@\w+', Name.Variable),
+ (r'(\w+)(:)', bygroups(Name.Label, Punctuation)),
+ (r'#?#?\w+', Name), # names for temp tables and anything else
+ (r'\?', Name.Variable.Magic), # parameter for prepared statements
+ ],
+ 'multiline-comments': [
+ (r'/\*', Comment.Multiline, 'multiline-comments'),
+ (r'\*/', Comment.Multiline, '#pop'),
+ (r'[^/*]+', Comment.Multiline),
+ (r'[/*]', Comment.Multiline)
+ ]
+ }
+
+
class MySqlLexer(RegexLexer):
"""
Special lexer for MySQL.
tokens = {
'root': [
(r'\s+', Text),
- (r'(#|--\s+).*?\n', Comment.Single),
+ (r'(#|--\s+).*\n?', Comment.Single),
(r'/\*', Comment.Multiline, 'multiline-comments'),
(r'[0-9]+', Number.Integer),
(r'[0-9]*\.[0-9]+(e[+-][0-9]+)', Number.Float),
--- /dev/null
+# -*- coding: utf-8 -*-
+"""
+ pygments.lexers.stata
+ ~~~~~~~~~~~~~~~~~~~~~
+
+ Lexer for Stata.
+
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
+ :license: BSD, see LICENSE for details.
+"""
+
+from pygments.lexer import RegexLexer, include, words
+from pygments.token import Comment, Keyword, Name, Number, \
+ String, Text, Operator
+
+from pygments.lexers._stata_builtins import builtins_base, builtins_functions
+
+__all__ = ['StataLexer']
+
+
+class StataLexer(RegexLexer):
+ """
+ For `Stata <http://www.stata.com/>`_ do files.
+
+ .. versionadded:: 2.2
+ """
+ # Syntax based on
+ # - http://fmwww.bc.edu/RePEc/bocode/s/synlightlist.ado
+ # - http://github.com/isagalaev/highlight.js/blob/master/src/languages/stata.js
+ # - http://github.com/jpitblado/vim-stata/blob/master/syntax/stata.vim
+
+ name = 'Stata'
+ aliases = ['stata', 'do']
+ filenames = ['*.do', '*.ado']
+ mimetypes = ['text/x-stata', 'text/stata', 'application/x-stata']
+
+ tokens = {
+ 'root': [
+ include('comments'),
+ include('vars-strings'),
+ include('numbers'),
+ include('keywords'),
+ (r'.', Text),
+ ],
+ # Global and local macros; regular and special strings
+ 'vars-strings': [
+ (r'\$[\w{]', Name.Variable.Global, 'var_validglobal'),
+ (r'`\w{0,31}\'', Name.Variable),
+ (r'"', String, 'string_dquote'),
+ (r'`"', String, 'string_mquote'),
+ ],
+ # For either string type, highlight macros as macros
+ 'string_dquote': [
+ (r'"', String, '#pop'),
+ (r'\\\\|\\"|\\\n', String.Escape),
+ (r'\$', Name.Variable.Global, 'var_validglobal'),
+ (r'`', Name.Variable, 'var_validlocal'),
+ (r'[^$`"\\]+', String),
+ (r'[$"\\]', String),
+ ],
+ 'string_mquote': [
+ (r'"\'', String, '#pop'),
+ (r'\\\\|\\"|\\\n', String.Escape),
+ (r'\$', Name.Variable.Global, 'var_validglobal'),
+ (r'`', Name.Variable, 'var_validlocal'),
+ (r'[^$`"\\]+', String),
+ (r'[$"\\]', String),
+ ],
+ 'var_validglobal': [
+ (r'\{\w{0,32}\}', Name.Variable.Global, '#pop'),
+ (r'\w{1,32}', Name.Variable.Global, '#pop'),
+ ],
+ 'var_validlocal': [
+ (r'\w{0,31}\'', Name.Variable, '#pop'),
+ ],
+ # * only OK at line start, // OK anywhere
+ 'comments': [
+ (r'^\s*\*.*$', Comment),
+ (r'//.*', Comment.Single),
+ (r'/\*.*?\*/', Comment.Multiline),
+ (r'/[*](.|\n)*?[*]/', Comment.Multiline),
+ ],
+ # Built in functions and statements
+ 'keywords': [
+ (words(builtins_functions, prefix=r'\b', suffix=r'\('),
+ Name.Function),
+ (words(builtins_base, prefix=r'(^\s*|\s)', suffix=r'\b'),
+ Keyword),
+ ],
+ # http://www.stata.com/help.cgi?operators
+ 'operators': [
+ (r'-|==|<=|>=|<|>|&|!=', Operator),
+ (r'\*|\+|\^|/|!|~|==|~=', Operator)
+ ],
+ # Stata numbers
+ 'numbers': [
+ # decimal number
+ (r'\b[+-]?([0-9]+(\.[0-9]+)?|\.[0-9]+|\.)([eE][+-]?[0-9]+)?[i]?\b',
+ Number),
+ ],
+ # Stata formats
+ 'format': [
+ (r'%-?\d{1,2}(\.\d{1,2})?[gfe]c?', Name.Variable),
+ (r'%(21x|16H|16L|8H|8L)', Name.Variable),
+ (r'%-?(tc|tC|td|tw|tm|tq|th|ty|tg).{0,32}', Name.Variable),
+ (r'%[-~]?\d{1,4}s', Name.Variable),
+ ]
+ }
Lexer for SuperCollider
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
import re
-from pygments.lexer import RegexLexer, include, words
+from pygments.lexer import RegexLexer, include, words, default
from pygments.token import Text, Comment, Operator, Keyword, Name, String, \
Number, Punctuation
(r'/(\\.|[^[/\\\n]|\[(\\.|[^\]\\\n])*])+/'
r'([gim]+\b|\B)', String.Regex, '#pop'),
(r'(?=/)', Text, ('#pop', 'badregex')),
- (r'', Text, '#pop')
+ default('#pop'),
],
'badregex': [
(r'\n', Text, '#pop')
(words(('true', 'false', 'nil', 'inf'), suffix=r'\b'), Keyword.Constant),
(words((
'Array', 'Boolean', 'Date', 'Error', 'Function', 'Number',
- 'Object', 'Packages', 'RegExp', 'String', 'Error',
+ 'Object', 'Packages', 'RegExp', 'String',
'isFinite', 'isNaN', 'parseFloat', 'parseInt', 'super',
'thisFunctionDef', 'thisFunction', 'thisMethod', 'thisProcess',
'thisThread', 'this'), suffix=r'\b'),
Name.Builtin),
- (r'[$a-zA-Z_][a-zA-Z0-9_]*', Name.Other),
- (r'\\?[$a-zA-Z_][a-zA-Z0-9_]*', String.Symbol),
+ (r'[$a-zA-Z_]\w*', Name.Other),
+ (r'\\?[$a-zA-Z_]\w*', String.Symbol),
(r'[0-9][0-9]*\.[0-9]+([eE][0-9]+)?[fd]?', Number.Float),
(r'0x[0-9a-fA-F]+', Number.Hex),
(r'[0-9]+', Number.Integer),
Lexers for Tcl and related languages.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Lexers for various template engines' markup.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
'TeaTemplateLexer', 'LassoHtmlLexer', 'LassoXmlLexer',
'LassoCssLexer', 'LassoJavascriptLexer', 'HandlebarsLexer',
'HandlebarsHtmlLexer', 'YamlJinjaLexer', 'LiquidLexer',
- 'TwigLexer', 'TwigHtmlLexer']
+ 'TwigLexer', 'TwigHtmlLexer', 'Angular2Lexer', 'Angular2HtmlLexer']
class ErbLexer(Lexer):
(r'\}\}', Comment.Preproc, '#pop'),
# Handlebars
- (r'([#/]*)(each|if|unless|else|with|log|in)', bygroups(Keyword,
+ (r'([#/]*)(each|if|unless|else|with|log|in(line)?)', bygroups(Keyword,
Keyword)),
+ (r'#\*inline', Keyword),
# General {{#block}}
(r'([#/])([\w-]+)', bygroups(Name.Function, Name.Function)),
# {{opt=something}}
(r'([\w-]+)(=)', bygroups(Name.Attribute, Operator)),
+ # Partials {{> ...}}
+ (r'(>)(\s*)(@partial-block)', bygroups(Keyword, Text, Keyword)),
+ (r'(#?>)(\s*)([\w-]+)', bygroups(Keyword, Text, Name.Variable)),
+ (r'(>)(\s*)(\()', bygroups(Keyword, Text, Punctuation),
+ 'dynamic-partial'),
+
+ include('generic'),
+ ],
+ 'dynamic-partial': [
+ (r'\s+', Text),
+ (r'\)', Punctuation, '#pop'),
+
+ (r'(lookup)(\s+)(\.|this)(\s+)', bygroups(Keyword, Text,
+ Name.Variable, Text)),
+ (r'(lookup)(\s+)(\S+)', bygroups(Keyword, Text,
+ using(this, state='variable'))),
+ (r'[\w-]+', Name.Function),
+
+ include('generic'),
+ ],
+ 'variable': [
+ (r'[a-zA-Z][\w-]*', Name.Variable),
+ (r'\.[\w-]+', Name.Variable),
+ (r'(this\/|\.\/|(\.\.\/)+)[\w-]+', Name.Variable),
+ ],
+ 'generic': [
+ include('variable'),
+
# borrowed from DjangoLexer
(r':?"(\\\\|\\"|[^"])*"', String.Double),
(r":?'(\\\\|\\'|[^'])*'", String.Single),
- (r'[a-zA-Z][\w-]*', Name.Variable),
- (r'\.[\w-]+', Name.Variable),
(r"[0-9](\.[0-9]*)?(eE[+-][0-9])?[flFLdD]?|"
r"0[xX][0-9a-fA-F]+[Ll]?", Number),
]
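The new partial rules can be sanity-checked outside a full lexer run; this is an illustrative snippet reusing the name-capturing pattern from the rule above (the sample partial names are hypothetical):

```python
import re

# The rule for {{> partialName}} and {{#> block}} captures the marker
# (">" or "#>") and the partial name as separate groups.
partial = re.compile(r'(#?>)(\s*)([\w-]+)')

m = partial.match('> userMessage')
assert m is not None
assert m.group(1) == '>'
assert m.group(3) == 'userMessage'

# The optional leading '#' covers partial blocks.
assert partial.match('#> layout').group(1) == '#>'
```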
def __init__(self, **options):
super(TwigHtmlLexer, self).__init__(HtmlLexer, TwigLexer, **options)
+
+
+class Angular2Lexer(RegexLexer):
+ """
+ Generic
+ `angular2 <http://victorsavkin.com/post/119943127151/angular-2-template-syntax>`_
+ template lexer.
+
+ Highlights only the Angular template tags (stuff between `{{` and `}}` and
+ special attributes: '(event)=', '[property]=', '[(twoWayBinding)]=').
+ Everything else is left for a delegating lexer.
+
+ .. versionadded:: 2.1
+ """
+
+ name = "Angular2"
+ aliases = ['ng2']
+
+ tokens = {
+ 'root': [
+ (r'[^{([*#]+', Other),
+
+ # {{meal.name}}
+ (r'(\{\{)(\s*)', bygroups(Comment.Preproc, Text), 'ngExpression'),
+
+ # (click)="deleteOrder()"; [value]="test"; [(twoWayTest)]="foo.bar"
+ (r'([([]+)([\w:.-]+)([\])]+)(\s*)(=)(\s*)',
+ bygroups(Punctuation, Name.Attribute, Punctuation, Text, Operator, Text),
+ 'attr'),
+ (r'([([]+)([\w:.-]+)([\])]+)(\s*)',
+ bygroups(Punctuation, Name.Attribute, Punctuation, Text)),
+
+ # *ngIf="..."; #f="ngForm"
+ (r'([*#])([\w:.-]+)(\s*)(=)(\s*)',
+ bygroups(Punctuation, Name.Attribute, Text, Operator, Text), 'attr'),
+ (r'([*#])([\w:.-]+)(\s*)',
+ bygroups(Punctuation, Name.Attribute, Text)),
+ ],
+
+ 'ngExpression': [
+ (r'\s+(\|\s+)?', Text),
+ (r'\}\}', Comment.Preproc, '#pop'),
+
+ # Literals
+ (r':?(true|false)', String.Boolean),
+ (r':?"(\\\\|\\"|[^"])*"', String.Double),
+ (r":?'(\\\\|\\'|[^'])*'", String.Single),
+ (r"[0-9](\.[0-9]*)?(eE[+-][0-9])?[flFLdD]?|"
+ r"0[xX][0-9a-fA-F]+[Ll]?", Number),
+
+ # Variable text
+ (r'[a-zA-Z][\w-]*(\(.*\))?', Name.Variable),
+ (r'\.[\w-]+(\(.*\))?', Name.Variable),
+
+ # inline If
+ (r'(\?)(\s*)([^}\s]+)(\s*)(:)(\s*)([^}\s]+)(\s*)',
+ bygroups(Operator, Text, String, Text, Operator, Text, String, Text)),
+ ],
+ 'attr': [
+ ('".*?"', String, '#pop'),
+ ("'.*?'", String, '#pop'),
+ (r'[^\s>]+', String, '#pop'),
+ ],
+ }
+
+
+class Angular2HtmlLexer(DelegatingLexer):
+ """
+ Subclass of the `Angular2Lexer` that highlights unlexed data with the
+ `HtmlLexer`.
+
+ .. versionadded:: 2.1
+ """
+
+ name = "HTML + Angular2"
+ aliases = ["html+ng2"]
+ filenames = ['*.ng2']
+
+ def __init__(self, **options):
+ super(Angular2HtmlLexer, self).__init__(HtmlLexer, Angular2Lexer, **options)
Lexers for testing languages.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
(r'^TAP version \d+\n', Name.Namespace),
# Specify a plan with a plan line.
- (r'^1..\d+', Keyword.Declaration, 'plan'),
+ (r'^1\.\.\d+', Keyword.Declaration, 'plan'),
# A test failure
(r'^(not ok)([^\S\n]*)(\d*)',
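The escaping fix matters because an unescaped `.` matches any character; a minimal illustration of the old and new plan-line patterns:

```python
import re

# Old pattern: the two unescaped dots match any two characters.
old = re.compile(r'^1..\d+')
# Fixed pattern: only a literal TAP plan line such as "1..13" matches.
new = re.compile(r'^1\.\.\d+')

assert new.match('1..13')        # a real plan line
assert old.match('1ab3')         # false positive under the old rule
assert not new.match('1ab3')     # rejected by the fixed rule
```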
Lexers for non-source code file types.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Lexers for languages related to text processing.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Lexers for various text formats.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Lexers for theorem-proving languages.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
flags = re.MULTILINE | re.UNICODE
- keywords1 = ('import', 'abbreviation', 'opaque_hint', 'tactic_hint', 'definition', 'renaming',
- 'inline', 'hiding', 'exposing', 'parameter', 'parameters', 'conjecture',
- 'hypothesis', 'lemma', 'corollary', 'variable', 'variables', 'print', 'theorem',
- 'axiom', 'inductive', 'structure', 'universe', 'alias', 'help',
- 'options', 'precedence', 'postfix', 'prefix', 'calc_trans', 'calc_subst', 'calc_refl',
- 'infix', 'infixl', 'infixr', 'notation', 'eval', 'check', 'exit', 'coercion', 'end',
- 'private', 'using', 'namespace', 'including', 'instance', 'section', 'context',
- 'protected', 'expose', 'export', 'set_option', 'add_rewrite', 'extends',
- 'open', 'example', 'constant', 'constants', 'print', 'opaque', 'reducible', 'irreducible'
+ keywords1 = (
+ 'import', 'abbreviation', 'opaque_hint', 'tactic_hint', 'definition',
+ 'renaming', 'inline', 'hiding', 'exposing', 'parameter', 'parameters',
+ 'conjecture', 'hypothesis', 'lemma', 'corollary', 'variable', 'variables',
+ 'theorem', 'axiom', 'inductive', 'structure', 'universe', 'alias',
+ 'help', 'options', 'precedence', 'postfix', 'prefix', 'calc_trans',
+ 'calc_subst', 'calc_refl', 'infix', 'infixl', 'infixr', 'notation', 'eval',
+ 'check', 'exit', 'coercion', 'end', 'private', 'using', 'namespace',
+ 'including', 'instance', 'section', 'context', 'protected', 'expose',
+ 'export', 'set_option', 'add_rewrite', 'extends', 'open', 'example',
+ 'constant', 'constants', 'print', 'opaque', 'reducible', 'irreducible',
)
keywords2 = (
- 'forall', 'fun', 'Pi', 'obtain', 'from', 'have', 'show', 'assume', 'take',
- 'let', 'if', 'else', 'then', 'by', 'in', 'with', 'begin', 'proof', 'qed', 'calc', 'match'
+ 'forall', 'fun', 'Pi', 'obtain', 'from', 'have', 'show', 'assume',
+ 'take', 'let', 'if', 'else', 'then', 'by', 'in', 'with', 'begin',
+ 'proof', 'qed', 'calc', 'match',
)
keywords3 = (
)
operators = (
- '!=', '#', '&', '&&', '*', '+', '-', '/', '@', '!', '`',
- '-.', '->', '.', '..', '...', '::', ':>', ';', ';;', '<',
- '<-', '=', '==', '>', '_', '`', '|', '||', '~', '=>', '<=', '>=',
- '/\\', '\\/', u'∀', u'Π', u'λ', u'↔', u'∧', u'∨', u'≠', u'≤', u'≥',
- u'¬', u'⁻¹', u'⬝', u'▸', u'→', u'∃', u'ℕ', u'ℤ', u'≈', u'×', u'⌞', u'⌟', u'≡',
- u'⟨', u'⟩'
+ u'!=', u'#', u'&', u'&&', u'*', u'+', u'-', u'/', u'@', u'!', u'`',
+ u'-.', u'->', u'.', u'..', u'...', u'::', u':>', u';', u';;', u'<',
+ u'<-', u'=', u'==', u'>', u'_', u'|', u'||', u'~', u'=>', u'<=', u'>=',
+ u'/\\', u'\\/', u'∀', u'Π', u'λ', u'↔', u'∧', u'∨', u'≠', u'≤', u'≥',
+ u'¬', u'⁻¹', u'⬝', u'▸', u'→', u'∃', u'ℕ', u'ℤ', u'≈', u'×', u'⌞',
+ u'⌟', u'≡', u'⟨', u'⟩',
)
- punctuation = ('(', ')', ':', '{', '}', '[', ']', u'⦃', u'⦄', ':=', ',')
+ punctuation = (u'(', u')', u':', u'{', u'}', u'[', u']', u'⦃', u'⦄',
+ u':=', u',')
tokens = {
'root': [
Lexer for RiverBed's TrafficScript (RTS) language.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
--- /dev/null
+# -*- coding: utf-8 -*-
+"""
+ pygments.lexers.typoscript
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ Lexers for TypoScript.
+
+ `TypoScriptLexer`
+ A TypoScript lexer.
+
+ `TypoScriptCssDataLexer`
+ Lexer that highlights markers, constants and registers within css.
+
+ `TypoScriptHtmlDataLexer`
+ Lexer that highlights markers, constants and registers within html tags.
+
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
+ :license: BSD, see LICENSE for details.
+"""
+
+import re
+
+from pygments.lexer import RegexLexer, include, bygroups, using
+from pygments.token import Text, Comment, Name, String, Number, \
+ Operator, Punctuation
+
+__all__ = ['TypoScriptLexer', 'TypoScriptCssDataLexer', 'TypoScriptHtmlDataLexer']
+
+
+class TypoScriptCssDataLexer(RegexLexer):
+ """
+ Lexer that highlights markers, constants and registers within css blocks.
+
+ .. versionadded:: 2.2
+ """
+
+ name = 'TypoScriptCssData'
+ aliases = ['typoscriptcssdata']
+
+ tokens = {
+ 'root': [
+ # marker: ###MARK###
+ (r'(.*)(###\w+###)(.*)', bygroups(String, Name.Constant, String)),
+ # constant: {$some.constant}
+ (r'(\{)(\$)((?:[\w\-]+\.)*)([\w\-]+)(\})',
+ bygroups(String.Symbol, Operator, Name.Constant,
+ Name.Constant, String.Symbol)), # constant
+ # constant: {register:somevalue}
+ (r'(.*)(\{)([\w\-]+)(\s*:\s*)([\w\-]+)(\})(.*)',
+ bygroups(String, String.Symbol, Name.Constant, Operator,
+ Name.Constant, String.Symbol, String)), # constant
+ # whitespace
+ (r'\s+', Text),
+ # comments
+ (r'/\*(?:(?!\*/).)*\*/', Comment),
+ (r'(?<!(#|\'|"))(?:#(?!(?:[a-fA-F0-9]{6}|[a-fA-F0-9]{3}))[^\n#]+|//[^\n]*)',
+ Comment),
+ # other
+ (r'[<>,:=.*%+|]', String),
+ (r'[\w"\-!/&;(){}]+', String),
+ ]
+ }
+
+
+class TypoScriptHtmlDataLexer(RegexLexer):
+ """
+ Lexer that highlights markers, constants and registers within html tags.
+
+ .. versionadded:: 2.2
+ """
+
+ name = 'TypoScriptHtmlData'
+ aliases = ['typoscripthtmldata']
+
+ tokens = {
+ 'root': [
+ # INCLUDE_TYPOSCRIPT
+ (r'(INCLUDE_TYPOSCRIPT)', Name.Class),
+ # Language label or extension resource FILE:... or LLL:... or EXT:...
+ (r'(EXT|FILE|LLL):[^}\n"]*', String),
+ # marker: ###MARK###
+ (r'(.*)(###\w+###)(.*)', bygroups(String, Name.Constant, String)),
+ # constant: {$some.constant}
+ (r'(\{)(\$)((?:[\w\-]+\.)*)([\w\-]+)(\})',
+ bygroups(String.Symbol, Operator, Name.Constant,
+ Name.Constant, String.Symbol)), # constant
+ # constant: {register:somevalue}
+ (r'(.*)(\{)([\w\-]+)(\s*:\s*)([\w\-]+)(\})(.*)',
+ bygroups(String, String.Symbol, Name.Constant, Operator,
+ Name.Constant, String.Symbol, String)), # constant
+ # whitespace
+ (r'\s+', Text),
+ # other
+ (r'[<>,:=.*%+|]', String),
+ (r'[\w"\-!/&;(){}#]+', String),
+ ]
+ }
+
+
+class TypoScriptLexer(RegexLexer):
+ """
+ Lexer for TypoScript code.
+
+ http://docs.typo3.org/typo3cms/TyposcriptReference/
+
+ .. versionadded:: 2.2
+ """
+
+ name = 'TypoScript'
+ aliases = ['typoscript']
+ filenames = ['*.ts', '*.txt']
+ mimetypes = ['text/x-typoscript']
+
+ flags = re.DOTALL | re.MULTILINE
+
+ # Slightly higher than TypeScript (which is 0).
+ priority = 0.1
+
+ tokens = {
+ 'root': [
+ include('comment'),
+ include('constant'),
+ include('html'),
+ include('label'),
+ include('whitespace'),
+ include('keywords'),
+ include('punctuation'),
+ include('operator'),
+ include('structure'),
+ include('literal'),
+ include('other'),
+ ],
+ 'keywords': [
+ # Conditions
+ (r'(\[)(?i)(browser|compatVersion|dayofmonth|dayofweek|dayofyear|'
+ r'device|ELSE|END|GLOBAL|globalString|globalVar|hostname|hour|IP|'
+ r'language|loginUser|loginuser|minute|month|page|PIDinRootline|'
+ r'PIDupinRootline|system|treeLevel|useragent|userFunc|usergroup|'
+ r'version)([^\]]*)(\])',
+ bygroups(String.Symbol, Name.Constant, Text, String.Symbol)),
+ # Functions
+ (r'(?=[\w\-])(HTMLparser|HTMLparser_tags|addParams|cache|encapsLines|'
+ r'filelink|if|imageLinkWrap|imgResource|makelinks|numRows|numberFormat|'
+ r'parseFunc|replacement|round|select|split|stdWrap|strPad|tableStyle|'
+ r'tags|textStyle|typolink)(?![\w\-])', Name.Function),
+ # Toplevel objects and _*
+ (r'(?:(=?\s*<?\s+|^\s*))(cObj|field|config|content|constants|FEData|'
+ r'file|frameset|includeLibs|lib|page|plugin|register|resources|sitemap|'
+ r'sitetitle|styles|temp|tt_[^:.\s]*|types|xmlnews|INCLUDE_TYPOSCRIPT|'
+ r'_CSS_DEFAULT_STYLE|_DEFAULT_PI_VARS|_LOCAL_LANG)(?![\w\-])',
+ bygroups(Operator, Name.Builtin)),
+ # Content objects
+ (r'(?=[\w\-])(CASE|CLEARGIF|COA|COA_INT|COBJ_ARRAY|COLUMNS|CONTENT|'
+ r'CTABLE|EDITPANEL|FILE|FILES|FLUIDTEMPLATE|FORM|HMENU|HRULER|HTML|'
+ r'IMAGE|IMGTEXT|IMG_RESOURCE|LOAD_REGISTER|MEDIA|MULTIMEDIA|OTABLE|'
+ r'PAGE|QTOBJECT|RECORDS|RESTORE_REGISTER|SEARCHRESULT|SVG|SWFOBJECT|'
+ r'TEMPLATE|TEXT|USER|USER_INT)(?![\w\-])', Name.Class),
+ # Menu states
+ (r'(?=[\w\-])(ACTIFSUBRO|ACTIFSUB|ACTRO|ACT|CURIFSUBRO|CURIFSUB|CURRO|'
+ r'CUR|IFSUBRO|IFSUB|NO|SPC|USERDEF1RO|USERDEF1|USERDEF2RO|USERDEF2|'
+ r'USRRO|USR)', Name.Class),
+ # Menu objects
+ (r'(?=[\w\-])(GMENU_FOLDOUT|GMENU_LAYERS|GMENU|IMGMENUITEM|IMGMENU|'
+ r'JSMENUITEM|JSMENU|TMENUITEM|TMENU_LAYERS|TMENU)', Name.Class),
+ # PHP objects
+ (r'(?=[\w\-])(PHP_SCRIPT(_EXT|_INT)?)', Name.Class),
+ (r'(?=[\w\-])(userFunc)(?![\w\-])', Name.Function),
+ ],
+ 'whitespace': [
+ (r'\s+', Text),
+ ],
+ 'html': [
+ (r'<\S[^\n>]*>', using(TypoScriptHtmlDataLexer)),
+ (r'&[^;\n]*;', String),
+ (r'(_CSS_DEFAULT_STYLE)(\s*)(\()(?s)(.*(?=\n\)))',
+ bygroups(Name.Class, Text, String.Symbol, using(TypoScriptCssDataLexer))),
+ ],
+ 'literal': [
+ (r'0x[0-9A-Fa-f]+t?', Number.Hex),
+ # (r'[0-9]*\.[0-9]+([eE][0-9]+)?[fd]?\s*(?:[^=])', Number.Float),
+ (r'[0-9]+', Number.Integer),
+ (r'(###\w+###)', Name.Constant),
+ ],
+ 'label': [
+ # Language label or extension resource FILE:... or LLL:... or EXT:...
+ (r'(EXT|FILE|LLL):[^}\n"]*', String),
+ # Path to a resource
+ (r'(?![^\w\-])([\w\-]+(?:/[\w\-]+)+/?)(\S*\n)',
+ bygroups(String, String)),
+ ],
+ 'punctuation': [
+ (r'[,.]', Punctuation),
+ ],
+ 'operator': [
+ (r'[<>,:=.*%+|]', Operator),
+ ],
+ 'structure': [
+ # Brackets and braces
+ (r'[{}()\[\]\\]', String.Symbol),
+ ],
+ 'constant': [
+ # Constant: {$some.constant}
+ (r'(\{)(\$)((?:[\w\-]+\.)*)([\w\-]+)(\})',
+ bygroups(String.Symbol, Operator, Name.Constant,
+ Name.Constant, String.Symbol)), # constant
+ # Constant: {register:somevalue}
+ (r'(\{)([\w\-]+)(\s*:\s*)([\w\-]+)(\})',
+ bygroups(String.Symbol, Name.Constant, Operator,
+ Name.Constant, String.Symbol)), # constant
+ # Hex color: #ff0077
+ (r'(#[a-fA-F0-9]{6}\b|#[a-fA-F0-9]{3}\b)', String.Char)
+ ],
+ 'comment': [
+ (r'(?<!(#|\'|"))(?:#(?!(?:[a-fA-F0-9]{6}|[a-fA-F0-9]{3}))[^\n#]+|//[^\n]*)',
+ Comment),
+ (r'/\*(?:(?!\*/).)*\*/', Comment),
+ (r'(\s*#\s*\n)', Comment),
+ ],
+ 'other': [
+ (r'[\w"\-!/&;]+', Text),
+ ],
+ }
+
+ def analyse_text(text):
+ if '<INCLUDE_TYPOSCRIPT:' in text:
+ return 1.0
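One subtlety in the `comment` state above: `#` introduces a comment only when it is not followed by a 3- or 6-digit hex color, which the `constant` state claims instead. A standalone check of that pattern (copied from the rule, for illustration only):

```python
import re

# '#' starts a comment unless what follows looks like a hex color.
comment = re.compile(
    r'(?<!(#|\'|"))(?:#(?!(?:[a-fA-F0-9]{6}|[a-fA-F0-9]{3}))[^\n#]+|//[^\n]*)')

assert comment.match('# plain comment')      # ordinary comment
assert comment.match('#ff0077') is None      # hex color, not a comment
assert comment.match('// also a comment')    # C++-style comment
```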
Lexers for UrbiScript language.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
--- /dev/null
+# -*- coding: utf-8 -*-
+"""
+ pygments.lexers.varnish
+ ~~~~~~~~~~~~~~~~~~~~~~~
+
+ Lexers for Varnish configuration.
+
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
+ :license: BSD, see LICENSE for details.
+"""
+
+from pygments.lexer import RegexLexer, include, bygroups, using, this, \
+ inherit, words
+from pygments.token import Text, Comment, Operator, Keyword, Name, String, \
+ Number, Punctuation, Literal
+
+__all__ = ['VCLLexer', 'VCLSnippetLexer']
+
+
+class VCLLexer(RegexLexer):
+ """
+ For Varnish Configuration Language (VCL).
+
+ .. versionadded:: 2.2
+ """
+ name = 'VCL'
+ aliases = ['vcl']
+ filenames = ['*.vcl']
+ mimetypes = ['text/x-vclsrc']
+
+ def analyse_text(text):
+ # If the very first line is 'vcl 4.0;' it's pretty much guaranteed
+ # that this is VCL
+ if text.startswith('vcl 4.0;'):
+ return 1.0
+ # Skip over comments and blank lines
+ # This is accurate enough that returning 0.9 is reasonable.
+ # Almost no VCL files start without some comments.
+ elif '\nvcl 4.0;' in text[:1000]:
+ return 0.9
+
+ tokens = {
+ 'probe': [
+ include('whitespace'),
+ include('comments'),
+ (r'(\.\w+)(\s*=\s*)([^;]*)(;)',
+ bygroups(Name.Attribute, Operator, using(this), Punctuation)),
+ (r'\}', Punctuation, '#pop'),
+ ],
+ 'acl': [
+ include('whitespace'),
+ include('comments'),
+ (r'[!/]+', Operator),
+ (r';', Punctuation),
+ (r'\d+', Number),
+ (r'\}', Punctuation, '#pop'),
+ ],
+ 'backend': [
+ include('whitespace'),
+ (r'(\.probe)(\s*=\s*)(\w+)(;)',
+ bygroups(Name.Attribute, Operator, Name.Variable.Global, Punctuation)),
+ (r'(\.probe)(\s*=\s*)(\{)',
+ bygroups(Name.Attribute, Operator, Punctuation), 'probe'),
+ (r'(\.\w+\b)(\s*=\s*)([^;]*)(\s*;)',
+ bygroups(Name.Attribute, Operator, using(this), Punctuation)),
+ (r'\{', Punctuation, '#push'),
+ (r'\}', Punctuation, '#pop'),
+ ],
+ 'statements': [
+ (r'(\d\.)?\d+[sdwhmy]', Literal.Date),
+ (r'(\d\.)?\d+ms', Literal.Date),
+ (r'(vcl_pass|vcl_hash|vcl_hit|vcl_init|vcl_backend_fetch|vcl_pipe|'
+ r'vcl_backend_response|vcl_synth|vcl_deliver|vcl_backend_error|'
+ r'vcl_fini|vcl_recv|vcl_purge|vcl_miss)\b', Name.Function),
+ (r'(pipe|retry|hash|synth|deliver|purge|abandon|lookup|pass|fail|ok|'
+ r'miss|fetch|restart)\b', Name.Constant),
+ (r'(beresp|obj|resp|req|req_top|bereq)\.http\.[a-zA-Z_-]+\b', Name.Variable),
+ (words((
+ 'obj.status', 'req.hash_always_miss', 'beresp.backend', 'req.esi_level',
+ 'req.can_gzip', 'beresp.ttl', 'obj.uncacheable', 'req.ttl', 'obj.hits',
+ 'client.identity', 'req.hash_ignore_busy', 'obj.reason', 'req.xid',
+ 'req_top.proto', 'beresp.age', 'obj.proto', 'obj.age', 'local.ip',
+ 'beresp.uncacheable', 'req.method', 'beresp.backend.ip', 'now',
+ 'obj.grace', 'req.restarts', 'beresp.keep', 'req.proto', 'resp.proto',
+ 'bereq.xid', 'bereq.between_bytes_timeout', 'req.esi',
+ 'bereq.first_byte_timeout', 'bereq.method', 'bereq.connect_timeout',
+ 'beresp.do_gzip', 'resp.status', 'beresp.do_gunzip',
+ 'beresp.storage_hint', 'resp.is_streaming', 'beresp.do_stream',
+ 'req_top.method', 'bereq.backend', 'beresp.backend.name', 'beresp.status',
+ 'req.url', 'obj.keep', 'obj.ttl', 'beresp.reason', 'bereq.retries',
+ 'resp.reason', 'bereq.url', 'beresp.do_esi', 'beresp.proto', 'client.ip',
+ 'bereq.proto', 'server.hostname', 'remote.ip', 'req.backend_hint',
+ 'server.identity', 'req_top.url', 'beresp.grace', 'beresp.was_304',
+ 'server.ip', 'bereq.uncacheable'), suffix=r'\b'),
+ Name.Variable),
+ (r'[!%&+*\-,/<.}{>=|~]+', Operator),
+ (r'[();]', Punctuation),
+
+ (r'[,]+', Punctuation),
+ (words(('hash_data', 'regsub', 'regsuball', 'if', 'else',
+ 'elsif', 'elif', 'synth', 'synthetic', 'ban',
+ 'return', 'set', 'unset', 'import', 'include', 'new',
+ 'rollback', 'call'), suffix=r'\b'),
+ Keyword),
+ (r'storage\.\w+\.\w+\b', Name.Variable),
+ (words(('true', 'false')), Name.Builtin),
+ (r'\d+\b', Number),
+ (r'(backend)(\s+\w+)(\s*\{)',
+ bygroups(Keyword, Name.Variable.Global, Punctuation), 'backend'),
+ (r'(probe\s)(\s*\w+\s)(\{)',
+ bygroups(Keyword, Name.Variable.Global, Punctuation), 'probe'),
+ (r'(acl\s)(\s*\w+\s)(\{)',
+ bygroups(Keyword, Name.Variable.Global, Punctuation), 'acl'),
+ (r'(vcl )(4\.0)(;)$',
+ bygroups(Keyword.Reserved, Name.Constant, Punctuation)),
+ (r'(sub\s+)([a-zA-Z]\w*)(\s*\{)',
+ bygroups(Keyword, Name.Function, Punctuation)),
+ (r'([a-zA-Z_]\w*)'
+ r'(\.)'
+ r'([a-zA-Z_]\w*)'
+ r'(\s*\(.*\))',
+ bygroups(Name.Function, Punctuation, Name.Function, using(this))),
+ (r'[a-zA-Z_]\w*', Name),
+ ],
+ 'comment': [
+ (r'[^*/]+', Comment.Multiline),
+ (r'/\*', Comment.Multiline, '#push'),
+ (r'\*/', Comment.Multiline, '#pop'),
+ (r'[*/]', Comment.Multiline),
+ ],
+ 'comments': [
+ (r'#.*$', Comment),
+ (r'/\*', Comment.Multiline, 'comment'),
+ (r'//.*$', Comment),
+ ],
+ 'string': [
+ (r'"', String, '#pop'),
+ (r'[^"\n]+', String), # all other characters
+ ],
+ 'multistring': [
+ (r'[^"}]', String),
+ (r'"\}', String, '#pop'),
+ (r'["}]', String),
+ ],
+ 'whitespace': [
+ (r'L?"', String, 'string'),
+ (r'\{"', String, 'multistring'),
+ (r'\n', Text),
+ (r'\s+', Text),
+ (r'\\\n', Text), # line continuation
+ ],
+ 'root': [
+ include('whitespace'),
+ include('comments'),
+ include('statements'),
+ (r'\s+', Text),
+ ],
+ }
+
+
+class VCLSnippetLexer(VCLLexer):
+ """
+ For Varnish Configuration Language snippets.
+
+ .. versionadded:: 2.2
+ """
+ name = 'VCLSnippets'
+ aliases = ['vclsnippets', 'vclsnippet']
+ mimetypes = ['text/x-vclsnippet']
+ filenames = []
+
+ def analyse_text(text):
+ # override method inherited from VCLLexer
+ return 0
+
+ tokens = {
+ 'snippetspre': [
+ (r'\.\.\.+', Comment),
+ (r'(bereq|req|req_top|resp|beresp|obj|client|server|local|remote|'
+ r'storage)($|\.\*)', Name.Variable),
+ ],
+ 'snippetspost': [
+ (r'(backend)\b', Keyword.Reserved),
+ ],
+ 'root': [
+ include('snippetspre'),
+ inherit,
+ include('snippetspost'),
+ ],
+ }
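The `analyse_text` heuristic added above is easy to reason about in isolation; here is a standalone sketch of the same logic (returning 0.0 where the lexer method simply falls through):

```python
def looks_like_vcl(text):
    """Standalone sketch of VCLLexer.analyse_text, for illustration."""
    if text.startswith('vcl 4.0;'):
        return 1.0                       # first line: near certain
    if '\nvcl 4.0;' in text[:1000]:
        return 0.9                       # after a comment header: very likely
    return 0.0

assert looks_like_vcl('vcl 4.0;\nbackend default { .host = "x"; }') == 1.0
assert looks_like_vcl('# boilerplate header\nvcl 4.0;\n') == 0.9
assert looks_like_vcl('int main() { return 0; }') == 0.0
```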
--- /dev/null
+# -*- coding: utf-8 -*-
+"""
+ pygments.lexers.verification
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ Lexer for Intermediate Verification Languages (IVLs).
+
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
+ :license: BSD, see LICENSE for details.
+"""
+
+from pygments.lexer import RegexLexer, include, words
+from pygments.token import Comment, Operator, Keyword, Name, Number, \
+ Punctuation, Whitespace
+
+__all__ = ['BoogieLexer', 'SilverLexer']
+
+
+class BoogieLexer(RegexLexer):
+ """
+ For `Boogie <https://boogie.codeplex.com/>`_ source code.
+
+ .. versionadded:: 2.1
+ """
+ name = 'Boogie'
+ aliases = ['boogie']
+ filenames = ['*.bpl']
+
+ tokens = {
+ 'root': [
+ # Whitespace and Comments
+ (r'\n', Whitespace),
+ (r'\s+', Whitespace),
+ (r'//[/!](.*?)\n', Comment.Doc),
+ (r'//(.*?)\n', Comment.Single),
+ (r'/\*', Comment.Multiline, 'comment'),
+
+ (words((
+ 'axiom', 'break', 'call', 'ensures', 'else', 'exists', 'function',
+ 'forall', 'if', 'invariant', 'modifies', 'procedure', 'requires',
+ 'then', 'var', 'while'),
+ suffix=r'\b'), Keyword),
+ (words(('const',), suffix=r'\b'), Keyword.Reserved),
+
+ (words(('bool', 'int', 'ref'), suffix=r'\b'), Keyword.Type),
+ include('numbers'),
+ (r"(>=|<=|:=|!=|==>|&&|\|\||[+/\-=>*<\[\]])", Operator),
+ (r"([{}():;,.])", Punctuation),
+ # Identifier
+ (r'[a-zA-Z_]\w*', Name),
+ ],
+ 'comment': [
+ (r'[^*/]+', Comment.Multiline),
+ (r'/\*', Comment.Multiline, '#push'),
+ (r'\*/', Comment.Multiline, '#pop'),
+ (r'[*/]', Comment.Multiline),
+ ],
+ 'numbers': [
+ (r'[0-9]+', Number.Integer),
+ ],
+ }
+
+
+class SilverLexer(RegexLexer):
+ """
+ For `Silver <https://bitbucket.org/viperproject/silver>`_ source code.
+
+ .. versionadded:: 2.2
+ """
+ name = 'Silver'
+ aliases = ['silver']
+ filenames = ['*.sil', '*.vpr']
+
+ tokens = {
+ 'root': [
+ # Whitespace and Comments
+ (r'\n', Whitespace),
+ (r'\s+', Whitespace),
+ (r'//[/!](.*?)\n', Comment.Doc),
+ (r'//(.*?)\n', Comment.Single),
+ (r'/\*', Comment.Multiline, 'comment'),
+
+ (words((
+ 'result', 'true', 'false', 'null', 'method', 'function',
+ 'predicate', 'program', 'domain', 'axiom', 'var', 'returns',
+ 'field', 'define', 'requires', 'ensures', 'invariant',
+ 'fold', 'unfold', 'inhale', 'exhale', 'new', 'assert',
+ 'assume', 'goto', 'while', 'if', 'elseif', 'else', 'fresh',
+ 'constraining', 'Seq', 'Set', 'Multiset', 'union', 'intersection',
+ 'setminus', 'subset', 'unfolding', 'in', 'old', 'forall', 'exists',
+ 'acc', 'wildcard', 'write', 'none', 'epsilon', 'perm', 'unique',
+ 'apply', 'package', 'folding', 'label', 'forperm'),
+ suffix=r'\b'), Keyword),
+ (words(('Int', 'Perm', 'Bool', 'Ref'), suffix=r'\b'), Keyword.Type),
+ include('numbers'),
+
+ (r'[!%&*+=|?:<>/\-\[\]]', Operator),
+ (r'([{}():;,.])', Punctuation),
+ # Identifier
+ (r'[\w$]\w*', Name),
+ ],
+ 'comment': [
+ (r'[^*/]+', Comment.Multiline),
+ (r'/\*', Comment.Multiline, '#push'),
+ (r'\*/', Comment.Multiline, '#pop'),
+ (r'[*/]', Comment.Multiline),
+ ],
+ 'numbers': [
+ (r'[0-9]+', Number.Integer),
+ ],
+ }
Just export previously exported lexers.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Lexers for misc. web stuff.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
bygroups(Keyword, Text, Keyword), 'itemtype'),
(r'(treat)(\s+)(as)\b',
bygroups(Keyword, Text, Keyword), 'itemtype'),
- (r'(case)(\s+)(' + stringdouble + ')', bygroups(Keyword, Text, String.Double), 'itemtype'),
- (r'(case)(\s+)(' + stringsingle + ')', bygroups(Keyword, Text, String.Single), 'itemtype'),
+ (r'(case)(\s+)(' + stringdouble + ')',
+ bygroups(Keyword, Text, String.Double), 'itemtype'),
+ (r'(case)(\s+)(' + stringsingle + ')',
+ bygroups(Keyword, Text, String.Single), 'itemtype'),
(r'(case|as)\b', Keyword, 'itemtype'),
(r'(\))(\s*)(as)',
bygroups(Punctuation, Text, Keyword), 'itemtype'),
(r'(for|let|previous|next)(\s+)(\$)',
bygroups(Keyword, Text, Name.Variable), 'varname'),
(r'(for)(\s+)(tumbling|sliding)(\s+)(window)(\s+)(\$)',
- bygroups(Keyword, Text, Keyword, Text, Keyword, Text, Name.Variable), 'varname'),
+ bygroups(Keyword, Text, Keyword, Text, Keyword, Text, Name.Variable),
+ 'varname'),
# (r'\)|\?|\]', Punctuation, '#push'),
(r'\)|\?|\]', Punctuation),
(r'(empty)(\s+)(greatest|least)', bygroups(Keyword, Text, Keyword)),
(r'preserve|no-preserve', Keyword),
(r',', Punctuation),
],
- 'annotationname':[
+ 'annotationname': [
(r'\(:', Comment, 'comment'),
(qname, Name.Decorator),
(r'(\()(' + stringdouble + ')', bygroups(Punctuation, String.Double)),
(r'(\()(' + stringsingle + ')', bygroups(Punctuation, String.Single)),
- (r'(\,)(\s+)(' + stringdouble + ')', bygroups(Punctuation, Text, String.Double)),
- (r'(\,)(\s+)(' + stringsingle + ')', bygroups(Punctuation, Text, String.Single)),
+ (r'(\,)(\s+)(' + stringdouble + ')',
+ bygroups(Punctuation, Text, String.Double)),
+ (r'(\,)(\s+)(' + stringsingle + ')',
+ bygroups(Punctuation, Text, String.Single)),
(r'\)', Punctuation),
(r'(\s+)(\%)', bygroups(Text, Name.Decorator), 'annotationname'),
- (r'(\s+)(variable)(\s+)(\$)', bygroups(Text, Keyword.Declaration, Text, Name.Variable), 'varname'),
- (r'(\s+)(function)(\s+)', bygroups(Text, Keyword.Declaration, Text), 'root')
+ (r'(\s+)(variable)(\s+)(\$)',
+ bygroups(Text, Keyword.Declaration, Text, Name.Variable), 'varname'),
+ (r'(\s+)(function)(\s+)',
+ bygroups(Text, Keyword.Declaration, Text), 'root')
],
'varname': [
(r'\(:', Comment, 'comment'),
bygroups(Keyword, Text, Keyword), 'singletype'),
(r'(treat)(\s+)(as)', bygroups(Keyword, Text, Keyword)),
(r'(instance)(\s+)(of)', bygroups(Keyword, Text, Keyword)),
- (r'(case)(\s+)(' + stringdouble + ')', bygroups(Keyword, Text, String.Double), 'itemtype'),
- (r'(case)(\s+)(' + stringsingle + ')', bygroups(Keyword, Text, String.Single), 'itemtype'),
+ (r'(case)(\s+)(' + stringdouble + ')',
+ bygroups(Keyword, Text, String.Double), 'itemtype'),
+ (r'(case)(\s+)(' + stringsingle + ')',
+ bygroups(Keyword, Text, String.Single), 'itemtype'),
(r'case|as', Keyword, 'itemtype'),
(r'(\))(\s*)(as)', bygroups(Operator, Text, Keyword), 'itemtype'),
(ncname + r':\*', Keyword.Type, 'operator'),
--- /dev/null
+# -*- coding: utf-8 -*-
+"""
+ pygments.lexers.whiley
+ ~~~~~~~~~~~~~~~~~~~~~~
+
+ Lexers for the Whiley language.
+
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
+ :license: BSD, see LICENSE for details.
+"""
+
+from pygments.lexer import RegexLexer, bygroups, words
+from pygments.token import Comment, Keyword, Name, Number, Operator, \
+ Punctuation, String, Text
+
+__all__ = ['WhileyLexer']
+
+
+class WhileyLexer(RegexLexer):
+ """
+ Lexer for the Whiley programming language.
+
+ .. versionadded:: 2.2
+ """
+ name = 'Whiley'
+ filenames = ['*.whiley']
+ aliases = ['whiley']
+ mimetypes = ['text/x-whiley']
+
+ # See the language specification:
+ # http://whiley.org/download/WhileyLanguageSpec.pdf
+
+ tokens = {
+ 'root': [
+ # Whitespace
+ (r'\s+', Text),
+
+ # Comments
+ (r'//.*', Comment.Single),
+ # don't parse empty comment as doc comment
+ (r'/\*\*/', Comment.Multiline),
+ (r'(?s)/\*\*.*?\*/', String.Doc),
+ (r'(?s)/\*.*?\*/', Comment.Multiline),
+
+ # Keywords
+ (words((
+ 'if', 'else', 'while', 'for', 'do', 'return',
+ 'switch', 'case', 'default', 'break', 'continue',
+ 'requires', 'ensures', 'where', 'assert', 'assume',
+ 'all', 'no', 'some', 'in', 'is', 'new',
+ 'throw', 'try', 'catch', 'debug', 'skip', 'fail',
+ 'finite', 'total'), suffix=r'\b'), Keyword.Reserved),
+ (words((
+ 'function', 'method', 'public', 'private', 'protected',
+ 'export', 'native'), suffix=r'\b'), Keyword.Declaration),
+ # "constant" & "type" are not keywords unless used in declarations
+ (r'(constant|type)(\s+)([a-zA-Z_]\w*)(\s+)(is)\b',
+ bygroups(Keyword.Declaration, Text, Name, Text, Keyword.Reserved)),
+ (r'(true|false|null)\b', Keyword.Constant),
+ (r'(bool|byte|int|real|any|void)\b', Keyword.Type),
+ # "from" is not a keyword unless used with import
+ (r'(import)(\s+)(\*)([^\S\n]+)(from)\b',
+ bygroups(Keyword.Namespace, Text, Punctuation, Text, Keyword.Namespace)),
+ (r'(import)(\s+)([a-zA-Z_]\w*)([^\S\n]+)(from)\b',
+ bygroups(Keyword.Namespace, Text, Name, Text, Keyword.Namespace)),
+ (r'(package|import)\b', Keyword.Namespace),
+
+ # standard library: https://github.com/Whiley/WhileyLibs/
+ (words((
+ # types defined in whiley.lang.Int
+ 'i8', 'i16', 'i32', 'i64',
+ 'u8', 'u16', 'u32', 'u64',
+ 'uint', 'nat',
+
+ # whiley.lang.Any
+ 'toString'), suffix=r'\b'), Name.Builtin),
+
+ # byte literal
+ (r'[01]+b', Number.Bin),
+
+ # decimal literal
+ (r'[0-9]+\.[0-9]+', Number.Float),
+ # match "1." but not ranges like "3..5"
+ (r'[0-9]+\.(?!\.)', Number.Float),
+
+ # integer literal
+ (r'0x[0-9a-fA-F]+', Number.Hex),
+ (r'[0-9]+', Number.Integer),
+
+ # character literal
+ (r"""'[^\\]'""", String.Char),
+ (r"""(')(\\['"\\btnfr])(')""",
+ bygroups(String.Char, String.Escape, String.Char)),
+
+ # string literal
+ (r'"', String, 'string'),
+
+ # operators and punctuation
+ (r'[{}()\[\],.;]', Punctuation),
+ (u'[+\\-*/%&|<>^!~@=:?'
+ # unicode operators
+ u'\u2200\u2203\u2205\u2282\u2286\u2283\u2287'
+ u'\u222A\u2229\u2264\u2265\u2208\u2227\u2228'
+ u']', Operator),
+
+ # identifier
+ (r'[a-zA-Z_]\w*', Name),
+ ],
+ 'string': [
+ (r'"', String, '#pop'),
+ (r'\\[btnfr]', String.Escape),
+ (r'\\u[0-9a-fA-F]{4}', String.Escape),
+ (r'\\.', String),
+ (r'[^\\"]+', String),
+ ],
+ }
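The negative lookahead in the decimal-literal rule above is what keeps `1.` (a float) apart from `3..5` (a range); a quick standalone check of that pattern:

```python
import re

# '[0-9]+\.(?!\.)' accepts a trailing dot only when it is not the
# first dot of a '..' range operator.
float_rule = re.compile(r'[0-9]+\.(?!\.)')

assert float_rule.match('1.')               # float literal
assert float_rule.match('3..5') is None     # range, not a float
assert float_rule.match('2.5')              # prefix of a full decimal
```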
Lexers for the X10 programming language.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
A simple modeline parser (based on pymodeline).
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
ret = get_filetype_from_line(l)
if ret:
return ret
- for l in lines[max_lines:-1:-1]:
- ret = get_filetype_from_line(l)
- if ret:
- return ret
+ for i in range(max_lines, -1, -1):
+ if i < len(lines):
+ ret = get_filetype_from_line(lines[i])
+ if ret:
+ return ret
return None
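The rewritten loop guards the index explicitly, so a buffer shorter than `max_lines` can no longer wrap around the way a negative slice could; a sketch of the resulting scan order on a short buffer:

```python
# Scan order for a 3-line buffer with max_lines=5: indices 5..0 are
# tried in reverse, and out-of-range indices are simply skipped.
lines = ['a', 'b', 'c']
max_lines = 5

scanned = [lines[i] for i in range(max_lines, -1, -1) if i < len(lines)]
assert scanned == ['c', 'b', 'a']
```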
yourfilter = yourfilter:YourFilter
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
-try:
- import pkg_resources
-except ImportError:
- pkg_resources = None
-
LEXER_ENTRY_POINT = 'pygments.lexers'
FORMATTER_ENTRY_POINT = 'pygments.formatters'
STYLE_ENTRY_POINT = 'pygments.styles'
FILTER_ENTRY_POINT = 'pygments.filters'
+def iter_entry_points(group_name):
+ try:
+ import pkg_resources
+ except ImportError:
+ return []
+
+ return pkg_resources.iter_entry_points(group_name)
def find_plugin_lexers():
- if pkg_resources is None:
- return
- for entrypoint in pkg_resources.iter_entry_points(LEXER_ENTRY_POINT):
+ for entrypoint in iter_entry_points(LEXER_ENTRY_POINT):
yield entrypoint.load()
def find_plugin_formatters():
- if pkg_resources is None:
- return
- for entrypoint in pkg_resources.iter_entry_points(FORMATTER_ENTRY_POINT):
+ for entrypoint in iter_entry_points(FORMATTER_ENTRY_POINT):
yield entrypoint.name, entrypoint.load()
def find_plugin_styles():
- if pkg_resources is None:
- return
- for entrypoint in pkg_resources.iter_entry_points(STYLE_ENTRY_POINT):
+ for entrypoint in iter_entry_points(STYLE_ENTRY_POINT):
yield entrypoint.name, entrypoint.load()
def find_plugin_filters():
- if pkg_resources is None:
- return
- for entrypoint in pkg_resources.iter_entry_points(FILTER_ENTRY_POINT):
+ for entrypoint in iter_entry_points(FILTER_ENTRY_POINT):
yield entrypoint.name, entrypoint.load()
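The plugin hunks above replace a module-level `pkg_resources` import with a lazy import inside an `iter_entry_points` helper, so merely importing the plugin module no longer fails when setuptools is absent. A standalone sketch of the pattern (`demo.plugins` is a made-up group name):

```python
def iter_entry_points(group_name):
    # Import lazily: if pkg_resources (setuptools) is unavailable,
    # plugin discovery degrades to "no plugins" instead of raising
    # ImportError at module import time.
    try:
        import pkg_resources
    except ImportError:
        return []
    return pkg_resources.iter_entry_points(group_name)


def find_plugins(group_name):
    for entrypoint in iter_entry_points(group_name):
        yield entrypoint.name, entrypoint.load()


# Nothing registers the made-up group, so this is empty either way:
# [] when pkg_resources is missing, and [] when the group has no entries.
print(list(find_plugins('demo.plugins')))  # []
```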
An algorithm that generates optimized regexes for matching long lists of
literal strings.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
return open_paren + regex_opt_inner(rest, '') + '|' \
+ make_charset(oneletter) + close_paren
# print '-> only 1-character'
- return make_charset(oneletter)
+ return open_paren + make_charset(oneletter) + close_paren
prefix = commonprefix(strings)
if prefix:
plen = len(prefix)
Have a look at the `DelphiLexer` to get an idea of how to use
this scanner.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
import re
def test(self, pattern):
"""Apply a pattern on the current position and check
- if it patches. Doesn't touch pos."""
+ if it matches. Doesn't touch pos.
+ """
return self.check(pattern) is not None
def scan(self, pattern):
Sphinx extension to generate automatic documentation of lexers,
formatters and filters.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
'''
+
class PygmentsDoc(Directive):
"""
A directive to collect all lexers/formatters/filters and generate
Basic style object.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
from pygments.token import Token, STANDARD_TYPES
from pygments.util import add_metaclass
+# Default mapping of #ansixxx to RGB colors.
+_ansimap = {
+ # dark
+ '#ansiblack': '000000',
+ '#ansidarkred': '7f0000',
+ '#ansidarkgreen': '007f00',
+ '#ansibrown': '7f7fe0',
+ '#ansidarkblue': '00007f',
+ '#ansipurple': '7f007f',
+ '#ansiteal': '007f7f',
+ '#ansilightgray': 'e5e5e5',
+ # normal
+ '#ansidarkgray': '555555',
+ '#ansired': 'ff0000',
+ '#ansigreen': '00ff00',
+ '#ansiyellow': 'ffff00',
+ '#ansiblue': '0000ff',
+ '#ansifuchsia': 'ff00ff',
+ '#ansiturquoise': '00ffff',
+ '#ansiwhite': 'ffffff',
+}
+ansicolors = set(_ansimap)
+
class StyleMeta(type):
obj.styles[token] = ''
def colorformat(text):
+ if text in ansicolors:
+ return text
if text[0:1] == '#':
col = text[1:]
if len(col) == 6:
def style_for_token(cls, token):
t = cls._styles[token]
+ ansicolor = bgansicolor = None
+ color = t[0]
+ if color.startswith('#ansi'):
+ ansicolor = color
+ color = _ansimap[color]
+ bgcolor = t[4]
+ if bgcolor.startswith('#ansi'):
+ bgansicolor = bgcolor
+ bgcolor = _ansimap[bgcolor]
+
return {
- 'color': t[0] or None,
+ 'color': color or None,
'bold': bool(t[1]),
'italic': bool(t[2]),
'underline': bool(t[3]),
- 'bgcolor': t[4] or None,
+ 'bgcolor': bgcolor or None,
'border': t[5] or None,
'roman': bool(t[6]) or None,
'sans': bool(t[7]) or None,
'mono': bool(t[8]) or None,
+ 'ansicolor': ansicolor,
+ 'bgansicolor': bgansicolor,
}
def list_styles(cls):
Contains built-in styles.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
'lovelace': 'lovelace::LovelaceStyle',
'algol': 'algol::AlgolStyle',
'algol_nu': 'algol_nu::Algol_NuStyle',
+ 'arduino': 'arduino::ArduinoStyle',
+ 'rainbow_dash': 'rainbow_dash::RainbowDashStyle',
+ 'abap': 'abap::AbapStyle',
}
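The `style_for_token` hunk above resolves `#ansixxx` color names through `_ansimap` to concrete RGB values while still reporting the original ansi name, so terminal formatters can emit the ansi escape and other formatters can fall back to RGB. A reduced sketch of just that resolution step (the map is truncated to two entries here):

```python
_ansimap = {'#ansired': 'ff0000', '#ansiblue': '0000ff'}


def resolve_color(color):
    """Return (rgb_hex, ansi_name); ansi_name is None for plain hex colors."""
    if color.startswith('#ansi'):
        # keep the ansi name for terminal formatters, map to RGB for the rest
        return _ansimap[color], color
    return color, None


print(resolve_color('#ansired'))  # ('ff0000', '#ansired')
print(resolve_color('ff00ff'))    # ('ff00ff', None)
```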
--- /dev/null
+# -*- coding: utf-8 -*-
+"""
+ pygments.styles.abap
+ ~~~~~~~~~~~~~~~~~~~~
+
+ ABAP workbench like style.
+
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
+ :license: BSD, see LICENSE for details.
+"""
+
+from pygments.style import Style
+from pygments.token import Keyword, Name, Comment, String, Error, \
+ Number, Operator
+
+
+class AbapStyle(Style):
+ default_style = ""
+ styles = {
+ Comment: 'italic #888',
+ Comment.Special: '#888',
+ Keyword: '#00f',
+ Operator.Word: '#00f',
+ Name: '#000',
+ Number: '#3af',
+ String: '#5a2',
+
+ Error: '#F00',
+ }
[1] `Revised Report on the Algorithmic Language Algol-60 <http://www.masswerk.at/algol60/report.htm>`
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
[1] `Revised Report on the Algorithmic Language Algol-60 <http://www.masswerk.at/algol60/report.htm>`
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Arduino® Syntax highlighting style.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Comment: "#95a5a6", # class: 'c'
Comment.Multiline: "", # class: 'cm'
- Comment.Preproc: "#434f54", # class: 'cp'
+ Comment.Preproc: "#728E00", # class: 'cp'
Comment.Single: "", # class: 'c1'
Comment.Special: "", # class: 'cs'
Keyword.Declaration: "", # class: 'kd'
Keyword.Namespace: "", # class: 'kn'
Keyword.Pseudo: "#00979D", # class: 'kp'
- Keyword.Reserved: "", # class: 'kr'
+ Keyword.Reserved: "#00979D", # class: 'kr'
Keyword.Type: "#00979D", # class: 'kt'
- Operator: "#434f54", # class: 'o'
+ Operator: "#728E00", # class: 'o'
Operator.Word: "", # class: 'ow'
Name: "#434f54", # class: 'n'
Name.Attribute: "", # class: 'na'
- Name.Builtin: "", # class: 'nb'
+ Name.Builtin: "#728E00", # class: 'nb'
Name.Builtin.Pseudo: "", # class: 'bp'
Name.Class: "", # class: 'nc'
Name.Constant: "", # class: 'no'
Name.Variable.Global: "", # class: 'vg'
Name.Variable.Instance: "", # class: 'vi'
- Number: "#434f54", # class: 'm'
+ Number: "#8A7B52", # class: 'm'
Number.Float: "", # class: 'mf'
Number.Hex: "", # class: 'mh'
Number.Integer: "", # class: 'mi'
A colorful style, inspired by the terminal highlighting style.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Style similar to the style used in the Borland IDEs.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Simple black/white only style.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
A colorful style, inspired by CodeRay.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
The default highlighting style.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
A highlighting style for Pygments, inspired by Emacs.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
A modern style based on the VIM pyte theme.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
pygments version of my "fruity" vim theme.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Igor Pro default style.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
A desaturated, somewhat subdued style created for the Lovelace interactive
learning environment.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Name.Entity: _ESCAPE_LIME,
Name.Exception: _EXCEPT_YELLOW,
Name.Function: _FUN_BROWN,
+ Name.Function.Magic: _DOC_ORANGE,
Name.Label: _LABEL_CYAN,
Name.Namespace: _LABEL_CYAN,
Name.Tag: _KW_BLUE,
Name.Variable: '#b04040',
Name.Variable.Global:_EXCEPT_YELLOW,
+ Name.Variable.Magic: _DOC_ORANGE,
String: _STR_RED,
+ String.Affix: '#444444',
String.Char: _OW_PURPLE,
+ String.Delimiter: _DOC_ORANGE,
String.Doc: 'italic '+_DOC_ORANGE,
String.Escape: _ESCAPE_LIME,
String.Interpol: 'underline',
This is a port of the style used in the `php port`_ of pygments
by Manni. The style is called 'default' there.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
http://www.monokai.nl/blog/2006/07/15/textmate-color-theme/
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Murphy's style from CodeRay.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
pygments version of my "native" vim theme.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Created with Base16 Builder by Chris Kempson
(https://github.com/chriskempson/base16-builder).
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Created with Base16 Builder by Chris Kempson
(https://github.com/chriskempson/base16-builder).
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
.. _pastie: http://pastie.caboo.se/
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
.. _perldoc: http://perldoc.perl.org/
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Operator.Word: '#8B008B',
Keyword: '#8B008B bold',
- Keyword.Type: '#a7a7a7',
+ Keyword.Type: '#00688B',
Name.Class: '#008b45 bold',
Name.Exception: '#008b45 bold',
--- /dev/null
+# -*- coding: utf-8 -*-
+"""
+ pygments.styles.rainbow_dash
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ A bright and colorful syntax highlighting `theme`_.
+
+ .. _theme: http://sanssecours.github.io/Rainbow-Dash.tmbundle
+
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
+ :license: BSD, see LICENSE for details.
+"""
+
+from pygments.style import Style
+from pygments.token import (Comment, Error, Generic, Name, Number, Operator,
+ String, Text, Whitespace, Keyword)
+
+BLUE_LIGHT = '#0080ff'
+BLUE = '#2c5dcd'
+GREEN = '#00cc66'
+GREEN_LIGHT = '#ccffcc'
+GREEN_NEON = '#00cc00'
+GREY = '#aaaaaa'
+GREY_LIGHT = '#cbcbcb'
+GREY_DARK = '#4d4d4d'
+PURPLE = '#5918bb'
+RED = '#cc0000'
+RED_DARK = '#c5060b'
+RED_LIGHT = '#ffcccc'
+RED_BRIGHT = '#ff0000'
+WHITE = '#ffffff'
+TURQUOISE = '#318495'
+ORANGE = '#ff8000'
+
+
+class RainbowDashStyle(Style):
+ """
+ A bright and colorful syntax highlighting theme.
+ """
+
+ background_color = WHITE
+
+ styles = {
+ Comment: 'italic {}'.format(BLUE_LIGHT),
+ Comment.Preproc: 'noitalic',
+ Comment.Special: 'bold',
+
+ Error: 'bg:{} {}'.format(RED, WHITE),
+
+ Generic.Deleted: 'border:{} bg:{}'.format(RED_DARK, RED_LIGHT),
+ Generic.Emph: 'italic',
+ Generic.Error: RED_BRIGHT,
+ Generic.Heading: 'bold {}'.format(BLUE),
+ Generic.Inserted: 'border:{} bg:{}'.format(GREEN_NEON, GREEN_LIGHT),
+ Generic.Output: GREY,
+ Generic.Prompt: 'bold {}'.format(BLUE),
+ Generic.Strong: 'bold',
+ Generic.Subheading: 'bold {}'.format(BLUE),
+ Generic.Traceback: RED_DARK,
+
+ Keyword: 'bold {}'.format(BLUE),
+ Keyword.Pseudo: 'nobold',
+ Keyword.Type: PURPLE,
+
+ Name.Attribute: 'italic {}'.format(BLUE),
+ Name.Builtin: 'bold {}'.format(PURPLE),
+ Name.Class: 'underline',
+ Name.Constant: TURQUOISE,
+ Name.Decorator: 'bold {}'.format(ORANGE),
+ Name.Entity: 'bold {}'.format(PURPLE),
+ Name.Exception: 'bold {}'.format(PURPLE),
+ Name.Function: 'bold {}'.format(ORANGE),
+ Name.Tag: 'bold {}'.format(BLUE),
+
+ Number: 'bold {}'.format(PURPLE),
+
+ Operator: BLUE,
+ Operator.Word: 'bold',
+
+ String: GREEN,
+ String.Doc: 'italic',
+ String.Escape: 'bold {}'.format(RED_DARK),
+ String.Other: TURQUOISE,
+ String.Symbol: 'bold {}'.format(RED_DARK),
+
+ Text: GREY_DARK,
+
+ Whitespace: GREY_LIGHT
+ }
pygments "rrt" theme, based on Zap and Emacs defaults.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
--- /dev/null
+# -*- coding: utf-8 -*-
+"""
+ pygments.styles.sas
+ ~~~~~~~~~~~~~~~~~~~
+
+ Style inspired by SAS' enhanced program editor. Note: this is not
+ meant to be a complete style. It's merely meant to mimic SAS'
+ program editor syntax highlighting.
+
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
+ :license: BSD, see LICENSE for details.
+"""
+
+from pygments.style import Style
+from pygments.token import Keyword, Name, Comment, String, Error, \
+ Number, Other, Whitespace, Generic
+
+
+class SasStyle(Style):
+ """
+ Style inspired by SAS' enhanced program editor. Note: this is not
+ meant to be a complete style. It's merely meant to mimic SAS'
+ program editor syntax highlighting.
+ """
+
+ default_style = ''
+
+ styles = {
+ Whitespace: '#bbbbbb',
+ Comment: 'italic #008800',
+ String: '#800080',
+ Number: 'bold #2e8b57',
+ Other: 'bg:#ffffe0',
+ Keyword: '#2c2cff',
+ Keyword.Reserved: 'bold #353580',
+ Keyword.Constant: 'bold',
+ Name.Builtin: '#2c2cff',
+ Name.Function: 'bold italic',
+ Name.Variable: 'bold #2c2cff',
+ Generic: '#2c2cff',
+ Generic.Emph: '#008800',
+ Generic.Error: '#d30202',
+ Error: 'bg:#e3d2d2 #a61717'
+ }
--- /dev/null
+# -*- coding: utf-8 -*-
+"""
+ pygments.styles.stata
+ ~~~~~~~~~~~~~~~~~~~~~
+
+ Style inspired by Stata's do-file editor. Note this is not meant
+ to be a complete style. It's merely meant to mimic Stata's do file
+ editor syntax highlighting.
+
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
+ :license: BSD, see LICENSE for details.
+"""
+
+from pygments.style import Style
+from pygments.token import Keyword, Name, Comment, String, Error, \
+ Number, Operator, Whitespace
+
+
+class StataStyle(Style):
+ """
+ Style inspired by Stata's do-file editor. Note this is not meant
+ to be a complete style. It's merely meant to mimic Stata's do file
+ editor syntax highlighting.
+ """
+
+ default_style = ''
+
+ styles = {
+ Whitespace: '#bbbbbb',
+ Comment: 'italic #008800',
+ String: '#7a2424',
+ Number: '#2c2cff',
+ Operator: '',
+ Keyword: 'bold #353580',
+ Keyword.Constant: '',
+ Name.Function: '#2c2cff',
+ Name.Variable: 'bold #35baba',
+ Name.Variable.Global: 'bold #b5565e',
+ Error: 'bg:#e3d2d2 #a61717'
+ }
have been chosen to have the same style. Similarly, keywords (Keyword.*),
and Operator.Word (and, or, in) have been assigned the same style.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Port of the default trac highlighter design.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
A highlighting style for Pygments, inspired by vim.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Simple style with MS Visual Studio colors.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Style similar to the `Xcode` default theme.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Basic token types and the standard tokens.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
+
class _TokenType(tuple):
parent = None
return self
-Token = _TokenType()
+Token = _TokenType()
# Special token types
-Text = Token.Text
-Whitespace = Text.Whitespace
-Escape = Token.Escape
-Error = Token.Error
+Text = Token.Text
+Whitespace = Text.Whitespace
+Escape = Token.Escape
+Error = Token.Error
# Text that doesn't belong to this lexer (e.g. HTML in PHP)
-Other = Token.Other
+Other = Token.Other
# Common token types for source code
-Keyword = Token.Keyword
-Name = Token.Name
-Literal = Token.Literal
-String = Literal.String
-Number = Literal.Number
+Keyword = Token.Keyword
+Name = Token.Name
+Literal = Token.Literal
+String = Literal.String
+Number = Literal.Number
Punctuation = Token.Punctuation
-Operator = Token.Operator
-Comment = Token.Comment
+Operator = Token.Operator
+Comment = Token.Comment
# Generic types for non-source code
-Generic = Token.Generic
+Generic = Token.Generic
-# String and some others are not direct childs of Token.
+# String and some others are not direct children of Token.
# alias them:
Token.Token = Token
Token.String = String
Name.Entity: 'ni',
Name.Exception: 'ne',
Name.Function: 'nf',
+ Name.Function.Magic: 'fm',
Name.Property: 'py',
Name.Label: 'nl',
Name.Namespace: 'nn',
Name.Variable.Class: 'vc',
Name.Variable.Global: 'vg',
Name.Variable.Instance: 'vi',
+ Name.Variable.Magic: 'vm',
Literal: 'l',
Literal.Date: 'ld',
String: 's',
+ String.Affix: 'sa',
String.Backtick: 'sb',
String.Char: 'sc',
+ String.Delimiter: 'dl',
String.Doc: 'sd',
String.Double: 's2',
String.Escape: 'se',
Inspired by chartypes_create.py from the MoinMoin project.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Utility functions.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
split_path_re = re.compile(r'[/\\ ]')
-doctype_lookup_re = re.compile(r'''(?smx)
+doctype_lookup_re = re.compile(r'''
(<\?.*?\?>)?\s*
<!DOCTYPE\s+(
[a-zA-Z_][a-zA-Z0-9]*
"[^"]*")?
)
[^>]*>
-''')
-tag_re = re.compile(r'<(.+?)(\s.*?)?>.*?</.+?>(?uism)')
+''', re.DOTALL | re.MULTILINE | re.VERBOSE)
+tag_re = re.compile(r'<(.+?)(\s.*?)?>.*?</.+?>',
+ re.UNICODE | re.IGNORECASE | re.DOTALL | re.MULTILINE)
xml_decl_re = re.compile(r'\s*<\?xml[^>]*\?>', re.I)
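The hunk above moves inline regex flags like `(?smx)` and `(?uism)` out of the pattern and into explicit `re.compile` flag arguments; newer Python versions reject inline flag groups that do not appear at the start of the pattern. The two spellings are equivalent:

```python
import re

# Inline flags at the very start of a pattern...
inline = re.compile(r'(?is)<title>.*?</title>')
# ...are equivalent to passing the flags explicitly:
explicit = re.compile(r'<title>.*?</title>', re.IGNORECASE | re.DOTALL)

html = '<TITLE>\nhello\n</TITLE>'
print(bool(inline.search(html)), bool(explicit.search(html)))  # True True
```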
+++ /dev/null
-coverage
-nose
-pyflakes
-pylint
-tox
Make sure each Python file has a correct file header
including copyright and license information.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
name_mail_re = r'[\w ]+(<.*?>)?'
-copyright_re = re.compile(r'^ :copyright: Copyright 2006-2015 by '
+copyright_re = re.compile(r'^ :copyright: Copyright 2006-2017 by '
r'the Pygments team, see AUTHORS\.$', re.UNICODE)
copyright_2_re = re.compile(r'^ %s(, %s)*[,.]$' %
(name_mail_re, name_mail_re), re.UNICODE)
the text where Error tokens are being generated, along
with some context.
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
lxcls = find_lexer_class(name)
if lxcls is None:
raise AssertionError('no lexer found for file %r' % fn)
+ print('Using lexer: %s (%s.%s)' % (lxcls.name, lxcls.__module__,
+ lxcls.__name__))
debug_lexer = False
# if profile:
# # does not work for e.g. ExtendedRegexLexers
+++ /dev/null
-debug_lexer.py
\ No newline at end of file
--- /dev/null
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+"""
+ Lexing error finder
+ ~~~~~~~~~~~~~~~~~~~
+
+ For the source files given on the command line, display
+ the text where Error tokens are being generated, along
+ with some context.
+
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
+ :license: BSD, see LICENSE for details.
+"""
+
+from __future__ import print_function
+
+import os
+import sys
+
+# always prefer Pygments from source if exists
+srcpath = os.path.join(os.path.dirname(__file__), '..')
+if os.path.isdir(os.path.join(srcpath, 'pygments')):
+ sys.path.insert(0, srcpath)
+
+
+from pygments.lexer import RegexLexer, ExtendedRegexLexer, LexerContext, \
+ ProfilingRegexLexer, ProfilingRegexLexerMeta
+from pygments.lexers import get_lexer_by_name, find_lexer_class, \
+ find_lexer_class_for_filename
+from pygments.token import Error, Text, _TokenType
+from pygments.cmdline import _parse_options
+
+
+class DebuggingRegexLexer(ExtendedRegexLexer):
+ """Make the state stack, position and current match instance attributes."""
+
+ def get_tokens_unprocessed(self, text, stack=('root',)):
+ """
+ Split ``text`` into (tokentype, text) pairs.
+
+ ``stack`` is the initial stack (default: ``['root']``)
+ """
+ tokendefs = self._tokens
+ self.ctx = ctx = LexerContext(text, 0)
+ ctx.stack = list(stack)
+ statetokens = tokendefs[ctx.stack[-1]]
+ while 1:
+ for rexmatch, action, new_state in statetokens:
+ self.m = m = rexmatch(text, ctx.pos, ctx.end)
+ if m:
+ if action is not None:
+ if type(action) is _TokenType:
+ yield ctx.pos, action, m.group()
+ ctx.pos = m.end()
+ else:
+ if not isinstance(self, ExtendedRegexLexer):
+ for item in action(self, m):
+ yield item
+ ctx.pos = m.end()
+ else:
+ for item in action(self, m, ctx):
+ yield item
+ if not new_state:
+ # altered the state stack?
+ statetokens = tokendefs[ctx.stack[-1]]
+ if new_state is not None:
+ # state transition
+ if isinstance(new_state, tuple):
+ for state in new_state:
+ if state == '#pop':
+ ctx.stack.pop()
+ elif state == '#push':
+ ctx.stack.append(ctx.stack[-1])
+ else:
+ ctx.stack.append(state)
+ elif isinstance(new_state, int):
+ # pop
+ del ctx.stack[new_state:]
+ elif new_state == '#push':
+ ctx.stack.append(ctx.stack[-1])
+ else:
+ assert False, 'wrong state def: %r' % new_state
+ statetokens = tokendefs[ctx.stack[-1]]
+ break
+ else:
+ try:
+ if ctx.pos >= ctx.end:
+ break
+ if text[ctx.pos] == '\n':
+ # at EOL, reset state to 'root'
+ ctx.stack = ['root']
+ statetokens = tokendefs['root']
+ yield ctx.pos, Text, u'\n'
+ ctx.pos += 1
+ continue
+ yield ctx.pos, Error, text[ctx.pos]
+ ctx.pos += 1
+ except IndexError:
+ break
+
+
+def main(fn, lexer=None, options={}):
+ if lexer is not None:
+ lxcls = get_lexer_by_name(lexer).__class__
+ else:
+ lxcls = find_lexer_class_for_filename(os.path.basename(fn))
+ if lxcls is None:
+ name, rest = fn.split('_', 1)
+ lxcls = find_lexer_class(name)
+ if lxcls is None:
+ raise AssertionError('no lexer found for file %r' % fn)
+ print('Using lexer: %s (%s.%s)' % (lxcls.name, lxcls.__module__,
+ lxcls.__name__))
+ debug_lexer = False
+ # if profile:
+ # # does not work for e.g. ExtendedRegexLexers
+ # if lxcls.__bases__ == (RegexLexer,):
+ # # yes we can! (change the metaclass)
+ # lxcls.__class__ = ProfilingRegexLexerMeta
+ # lxcls.__bases__ = (ProfilingRegexLexer,)
+ # lxcls._prof_sort_index = profsort
+ # else:
+ # if lxcls.__bases__ == (RegexLexer,):
+ # lxcls.__bases__ = (DebuggingRegexLexer,)
+ # debug_lexer = True
+ # elif lxcls.__bases__ == (DebuggingRegexLexer,):
+ # # already debugged before
+ # debug_lexer = True
+ # else:
+ # # HACK: ExtendedRegexLexer subclasses will only partially work here.
+ # lxcls.__bases__ = (DebuggingRegexLexer,)
+ # debug_lexer = True
+
+ lx = lxcls(**options)
+ lno = 1
+ if fn == '-':
+ text = sys.stdin.read()
+ else:
+ with open(fn, 'rb') as fp:
+ text = fp.read().decode('utf-8')
+ text = text.strip('\n') + '\n'
+ tokens = []
+ states = []
+
+ def show_token(tok, state):
+ reprs = list(map(repr, tok))
+ print(' ' + reprs[1] + ' ' + ' ' * (29-len(reprs[1])) + reprs[0], end=' ')
+ if debug_lexer:
+ print(' ' + ' ' * (29-len(reprs[0])) + ' : '.join(state) if state else '', end=' ')
+ print()
+
+ for type, val in lx.get_tokens(text):
+ lno += val.count('\n')
+ if type == Error and not ignerror:
+ print('Error parsing', fn, 'on line', lno)
+ if not showall:
+ print('Previous tokens' + (debug_lexer and ' and states' or '') + ':')
+ for i in range(max(len(tokens) - num, 0), len(tokens)):
+ if debug_lexer:
+ show_token(tokens[i], states[i])
+ else:
+ show_token(tokens[i], None)
+ print('Error token:')
+ l = len(repr(val))
+ print(' ' + repr(val), end=' ')
+ if debug_lexer and hasattr(lx, 'ctx'):
+ print(' ' * (60-l) + ' : '.join(lx.ctx.stack), end=' ')
+ print()
+ print()
+ return 1
+ tokens.append((type, val))
+ if debug_lexer:
+ if hasattr(lx, 'ctx'):
+ states.append(lx.ctx.stack[:])
+ else:
+ states.append(None)
+ if showall:
+ show_token((type, val), states[-1] if debug_lexer else None)
+ return 0
+
+
+def print_help():
+ print('''\
+Pygments development helper to quickly debug lexers.
+
+ scripts/debug_lexer.py [options] file ...
+
+Give one or more filenames to lex them and display possible error tokens
+and/or profiling info. Files are assumed to be encoded in UTF-8.
+
+Selecting lexer and options:
+
+ -l NAME use lexer named NAME (default is to guess from
+ the given filenames)
+ -O OPTIONSTR use lexer options parsed from OPTIONSTR
+
+Debugging lexing errors:
+
+ -n N show the last N tokens on error
+ -a always show all lexed tokens (default is only
+ to show them when an error occurs)
+ -e do not stop on error tokens
+
+Profiling:
+
+ -p use the ProfilingRegexLexer to profile regexes
+ instead of the debugging lexer
+ -s N sort profiling output by column N (default is
+ column 4, the time per call)
+''')
+
+num = 10
+showall = False
+ignerror = False
+lexer = None
+options = {}
+profile = False
+profsort = 4
+
+if __name__ == '__main__':
+ import getopt
+ opts, args = getopt.getopt(sys.argv[1:], 'n:l:aepO:s:h')
+ for opt, val in opts:
+ if opt == '-n':
+ num = int(val)
+ elif opt == '-a':
+ showall = True
+ elif opt == '-e':
+ ignerror = True
+ elif opt == '-l':
+ lexer = val
+ elif opt == '-p':
+ profile = True
+ elif opt == '-s':
+ profsort = int(val)
+ elif opt == '-O':
+ options = _parse_options([val])
+ elif opt == '-h':
+ print_help()
+ sys.exit(0)
+ ret = 0
+ if not args:
+ print_help()
+ for f in args:
+ ret += main(f, lexer, options)
+ sys.exit(bool(ret))
This file is autogenerated by scripts/get_vimkw.py
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
[egg_info]
tag_build =
tag_date = 0
-tag_svn_revision = 0
[aliases]
-release = egg_info -RDb ''
+release = egg_info -Db ''
upload = upload --sign --identity=36580288
[bdist_wheel]
formats that PIL supports and ANSI sequences
* it is usable as a command-line tool and as a library
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
from setuptools import setup, find_packages
have_setuptools = True
except ImportError:
- try:
- import ez_setup
- ez_setup.use_setuptools()
- from setuptools import setup, find_packages
- have_setuptools = True
- except ImportError:
- from distutils.core import setup
- def find_packages(*args, **kwargs):
- return [
- 'pygments',
- 'pygments.lexers',
- 'pygments.formatters',
- 'pygments.styles',
- 'pygments.filters',
- ]
- have_setuptools = False
+ from distutils.core import setup
+ def find_packages(*args, **kwargs):
+ return [
+ 'pygments',
+ 'pygments.lexers',
+ 'pygments.formatters',
+ 'pygments.styles',
+ 'pygments.filters',
+ ]
+ have_setuptools = False
if have_setuptools:
add_keywords = dict(
setup(
name = 'Pygments',
- version = '2.1.3',
+ version = '2.2.0',
url = 'http://pygments.org/',
license = 'BSD License',
author = 'Georg Brandl',
description = 'Pygments is a syntax highlighting package written in Python.',
long_description = __doc__,
keywords = 'syntax highlighting',
- packages = find_packages(exclude=['ez_setup']),
+ packages = find_packages(),
platforms = 'any',
zip_safe = False,
include_package_data = True,
+++ /dev/null
-// Coverage.py HTML report browser code.
-/*jslint browser: true, sloppy: true, vars: true, plusplus: true, maxerr: 50, indent: 4 */
-/*global coverage: true, document, window, $ */
-
-coverage = {};
-
-// Find all the elements with shortkey_* class, and use them to assign a shotrtcut key.
-coverage.assign_shortkeys = function () {
- $("*[class*='shortkey_']").each(function (i, e) {
- $.each($(e).attr("class").split(" "), function (i, c) {
- if (/^shortkey_/.test(c)) {
- $(document).bind('keydown', c.substr(9), function () {
- $(e).click();
- });
- }
- });
- });
-};
-
-// Create the events for the help panel.
-coverage.wire_up_help_panel = function () {
- $("#keyboard_icon").click(function () {
- // Show the help panel, and position it so the keyboard icon in the
- // panel is in the same place as the keyboard icon in the header.
- $(".help_panel").show();
- var koff = $("#keyboard_icon").offset();
- var poff = $("#panel_icon").position();
- $(".help_panel").offset({
- top: koff.top-poff.top,
- left: koff.left-poff.left
- });
- });
- $("#panel_icon").click(function () {
- $(".help_panel").hide();
- });
-};
-
-// Loaded on index.html
-coverage.index_ready = function ($) {
- // Look for a cookie containing previous sort settings:
- var sort_list = [];
- var cookie_name = "COVERAGE_INDEX_SORT";
- var i;
-
- // This almost makes it worth installing the jQuery cookie plugin:
- if (document.cookie.indexOf(cookie_name) > -1) {
- var cookies = document.cookie.split(";");
- for (i = 0; i < cookies.length; i++) {
- var parts = cookies[i].split("=");
-
- if ($.trim(parts[0]) === cookie_name && parts[1]) {
- sort_list = eval("[[" + parts[1] + "]]");
- break;
- }
- }
- }
-
- // Create a new widget which exists only to save and restore
- // the sort order:
- $.tablesorter.addWidget({
- id: "persistentSort",
-
- // Format is called by the widget before displaying:
- format: function (table) {
- if (table.config.sortList.length === 0 && sort_list.length > 0) {
- // This table hasn't been sorted before - we'll use
- // our stored settings:
- $(table).trigger('sorton', [sort_list]);
- }
- else {
- // This is not the first load - something has
- // already defined sorting so we'll just update
- // our stored value to match:
- sort_list = table.config.sortList;
- }
- }
- });
-
- // Configure our tablesorter to handle the variable number of
- // columns produced depending on report options:
- var headers = [];
- var col_count = $("table.index > thead > tr > th").length;
-
- headers[0] = { sorter: 'text' };
- for (i = 1; i < col_count-1; i++) {
- headers[i] = { sorter: 'digit' };
- }
- headers[col_count-1] = { sorter: 'percent' };
-
- // Enable the table sorter:
- $("table.index").tablesorter({
- widgets: ['persistentSort'],
- headers: headers
- });
-
- coverage.assign_shortkeys();
- coverage.wire_up_help_panel();
-
- // Watch for page unload events so we can save the final sort settings:
- $(window).unload(function () {
- document.cookie = cookie_name + "=" + sort_list.toString() + "; path=/";
- });
-};
-
-// -- pyfile stuff --
-
-coverage.pyfile_ready = function ($) {
- // If we're directed to a particular line number, highlight the line.
- var frag = location.hash;
- if (frag.length > 2 && frag.charAt(1) === 'n') {
- $(frag).addClass('highlight');
- coverage.set_sel(parseInt(frag.substr(2), 10));
- }
- else {
- coverage.set_sel(0);
- }
-
- $(document)
- .bind('keydown', 'j', coverage.to_next_chunk_nicely)
- .bind('keydown', 'k', coverage.to_prev_chunk_nicely)
- .bind('keydown', '0', coverage.to_top)
- .bind('keydown', '1', coverage.to_first_chunk)
- ;
-
- $(".button_toggle_run").click(function (evt) {coverage.toggle_lines(evt.target, "run");});
- $(".button_toggle_exc").click(function (evt) {coverage.toggle_lines(evt.target, "exc");});
- $(".button_toggle_mis").click(function (evt) {coverage.toggle_lines(evt.target, "mis");});
- $(".button_toggle_par").click(function (evt) {coverage.toggle_lines(evt.target, "par");});
-
- coverage.assign_shortkeys();
- coverage.wire_up_help_panel();
-};
-
-coverage.toggle_lines = function (btn, cls) {
- btn = $(btn);
- var hide = "hide_"+cls;
- if (btn.hasClass(hide)) {
- $("#source ."+cls).removeClass(hide);
- btn.removeClass(hide);
- }
- else {
- $("#source ."+cls).addClass(hide);
- btn.addClass(hide);
- }
-};
-
-// Return the nth line div.
-coverage.line_elt = function (n) {
- return $("#t" + n);
-};
-
-// Return the nth line number div.
-coverage.num_elt = function (n) {
- return $("#n" + n);
-};
-
-// Return the container of all the code.
-coverage.code_container = function () {
- return $(".linenos");
-};
-
-// Set the selection. b and e are line numbers.
-coverage.set_sel = function (b, e) {
- // The first line selected.
- coverage.sel_begin = b;
- // The next line not selected.
- coverage.sel_end = (e === undefined) ? b+1 : e;
-};
-
-coverage.to_top = function () {
- coverage.set_sel(0, 1);
- coverage.scroll_window(0);
-};
-
-coverage.to_first_chunk = function () {
- coverage.set_sel(0, 1);
- coverage.to_next_chunk();
-};
-
-coverage.is_transparent = function (color) {
- // Different browsers return different colors for "none".
- return color === "transparent" || color === "rgba(0, 0, 0, 0)";
-};
-
-coverage.to_next_chunk = function () {
- var c = coverage;
-
- // Find the start of the next colored chunk.
- var probe = c.sel_end;
- while (true) {
- var probe_line = c.line_elt(probe);
- if (probe_line.length === 0) {
- return;
- }
- var color = probe_line.css("background-color");
- if (!c.is_transparent(color)) {
- break;
- }
- probe++;
- }
-
- // There's a next chunk, `probe` points to it.
- var begin = probe;
-
- // Find the end of this chunk.
- var next_color = color;
- while (next_color === color) {
- probe++;
- probe_line = c.line_elt(probe);
- next_color = probe_line.css("background-color");
- }
- c.set_sel(begin, probe);
- c.show_selection();
-};
-
-coverage.to_prev_chunk = function () {
- var c = coverage;
-
- // Find the end of the prev colored chunk.
- var probe = c.sel_begin-1;
- var probe_line = c.line_elt(probe);
- if (probe_line.length === 0) {
- return;
- }
- var color = probe_line.css("background-color");
- while (probe > 0 && c.is_transparent(color)) {
- probe--;
- probe_line = c.line_elt(probe);
- if (probe_line.length === 0) {
- return;
- }
- color = probe_line.css("background-color");
- }
-
- // There's a prev chunk, `probe` points to its last line.
- var end = probe+1;
-
- // Find the beginning of this chunk.
- var prev_color = color;
- while (prev_color === color) {
- probe--;
- probe_line = c.line_elt(probe);
- prev_color = probe_line.css("background-color");
- }
- c.set_sel(probe+1, end);
- c.show_selection();
-};
-
-// Return the line number of the line nearest pixel position pos
-coverage.line_at_pos = function (pos) {
- var l1 = coverage.line_elt(1),
- l2 = coverage.line_elt(2),
- result;
- if (l1.length && l2.length) {
- var l1_top = l1.offset().top,
- line_height = l2.offset().top - l1_top,
- nlines = (pos - l1_top) / line_height;
- if (nlines < 1) {
- result = 1;
- }
- else {
- result = Math.ceil(nlines);
- }
- }
- else {
- result = 1;
- }
- return result;
-};
-
-// Returns 0, 1, or 2: how many of the two ends of the selection are on
-// the screen right now?
-coverage.selection_ends_on_screen = function () {
- if (coverage.sel_begin === 0) {
- return 0;
- }
-
- var top = coverage.line_elt(coverage.sel_begin);
- var next = coverage.line_elt(coverage.sel_end-1);
-
- return (
- (top.isOnScreen() ? 1 : 0) +
- (next.isOnScreen() ? 1 : 0)
- );
-};
-
-coverage.to_next_chunk_nicely = function () {
- coverage.finish_scrolling();
- if (coverage.selection_ends_on_screen() === 0) {
- // The selection is entirely off the screen: select the top line on
- // the screen.
- var win = $(window);
- coverage.select_line_or_chunk(coverage.line_at_pos(win.scrollTop()));
- }
- coverage.to_next_chunk();
-};
-
-coverage.to_prev_chunk_nicely = function () {
- coverage.finish_scrolling();
- if (coverage.selection_ends_on_screen() === 0) {
- var win = $(window);
- coverage.select_line_or_chunk(coverage.line_at_pos(win.scrollTop() + win.height()));
- }
- coverage.to_prev_chunk();
-};
-
-// Select line number lineno, or if it is in a colored chunk, select the
-// entire chunk
-coverage.select_line_or_chunk = function (lineno) {
- var c = coverage;
- var probe_line = c.line_elt(lineno);
- if (probe_line.length === 0) {
- return;
- }
- var the_color = probe_line.css("background-color");
- if (!c.is_transparent(the_color)) {
- // The line is in a highlighted chunk.
- // Search backward for the first line.
- var probe = lineno;
- var color = the_color;
- while (probe > 0 && color === the_color) {
- probe--;
- probe_line = c.line_elt(probe);
- if (probe_line.length === 0) {
- break;
- }
- color = probe_line.css("background-color");
- }
- var begin = probe + 1;
-
- // Search forward for the last line.
- probe = lineno;
- color = the_color;
- while (color === the_color) {
- probe++;
- probe_line = c.line_elt(probe);
- color = probe_line.css("background-color");
- }
-
- coverage.set_sel(begin, probe);
- }
- else {
- coverage.set_sel(lineno);
- }
-};
-
-coverage.show_selection = function () {
- var c = coverage;
-
- // Highlight the lines in the chunk
- c.code_container().find(".highlight").removeClass("highlight");
- for (var probe = c.sel_begin; probe > 0 && probe < c.sel_end; probe++) {
- c.num_elt(probe).addClass("highlight");
- }
-
- c.scroll_to_selection();
-};
-
-coverage.scroll_to_selection = function () {
- // Scroll the page if the chunk isn't fully visible.
- if (coverage.selection_ends_on_screen() < 2) {
- // Need to move the page. The html,body trick makes it scroll in all
- // browsers, got it from http://stackoverflow.com/questions/3042651
- var top = coverage.line_elt(coverage.sel_begin);
- var top_pos = parseInt(top.offset().top, 10);
- coverage.scroll_window(top_pos - 30);
- }
-};
-
-coverage.scroll_window = function (to_pos) {
- $("html,body").animate({scrollTop: to_pos}, 200);
-};
-
-coverage.finish_scrolling = function () {
- $("html,body").stop(true, true);
-};
+++ /dev/null
-/*
- * jQuery Hotkeys Plugin
- * Copyright 2010, John Resig
- * Dual licensed under the MIT or GPL Version 2 licenses.
- *
- * Based upon the plugin by Tzury Bar Yochay:
- * http://github.com/tzuryby/hotkeys
- *
- * Original idea by:
- * Binny V A, http://www.openjs.com/scripts/events/keyboard_shortcuts/
-*/
-
-(function(jQuery){
-
- jQuery.hotkeys = {
- version: "0.8",
-
- specialKeys: {
- 8: "backspace", 9: "tab", 13: "return", 16: "shift", 17: "ctrl", 18: "alt", 19: "pause",
- 20: "capslock", 27: "esc", 32: "space", 33: "pageup", 34: "pagedown", 35: "end", 36: "home",
- 37: "left", 38: "up", 39: "right", 40: "down", 45: "insert", 46: "del",
- 96: "0", 97: "1", 98: "2", 99: "3", 100: "4", 101: "5", 102: "6", 103: "7",
- 104: "8", 105: "9", 106: "*", 107: "+", 109: "-", 110: ".", 111 : "/",
- 112: "f1", 113: "f2", 114: "f3", 115: "f4", 116: "f5", 117: "f6", 118: "f7", 119: "f8",
- 120: "f9", 121: "f10", 122: "f11", 123: "f12", 144: "numlock", 145: "scroll", 191: "/", 224: "meta"
- },
-
- shiftNums: {
- "`": "~", "1": "!", "2": "@", "3": "#", "4": "$", "5": "%", "6": "^", "7": "&",
- "8": "*", "9": "(", "0": ")", "-": "_", "=": "+", ";": ": ", "'": "\"", ",": "<",
- ".": ">", "/": "?", "\\": "|"
- }
- };
-
- function keyHandler( handleObj ) {
- // Only care when a possible input has been specified
- if ( typeof handleObj.data !== "string" ) {
- return;
- }
-
- var origHandler = handleObj.handler,
- keys = handleObj.data.toLowerCase().split(" ");
-
- handleObj.handler = function( event ) {
- // Don't fire in text-accepting inputs that we didn't directly bind to
- if ( this !== event.target && (/textarea|select/i.test( event.target.nodeName ) ||
- event.target.type === "text") ) {
- return;
- }
-
- // Keypress represents characters, not special keys
- var special = event.type !== "keypress" && jQuery.hotkeys.specialKeys[ event.which ],
- character = String.fromCharCode( event.which ).toLowerCase(),
- key, modif = "", possible = {};
-
- // check combinations (alt|ctrl|shift+anything)
- if ( event.altKey && special !== "alt" ) {
- modif += "alt+";
- }
-
- if ( event.ctrlKey && special !== "ctrl" ) {
- modif += "ctrl+";
- }
-
- // TODO: Need to make sure this works consistently across platforms
- if ( event.metaKey && !event.ctrlKey && special !== "meta" ) {
- modif += "meta+";
- }
-
- if ( event.shiftKey && special !== "shift" ) {
- modif += "shift+";
- }
-
- if ( special ) {
- possible[ modif + special ] = true;
-
- } else {
- possible[ modif + character ] = true;
- possible[ modif + jQuery.hotkeys.shiftNums[ character ] ] = true;
-
- // "$" can be triggered as "Shift+4" or "Shift+$" or just "$"
- if ( modif === "shift+" ) {
- possible[ jQuery.hotkeys.shiftNums[ character ] ] = true;
- }
- }
-
- for ( var i = 0, l = keys.length; i < l; i++ ) {
- if ( possible[ keys[i] ] ) {
- return origHandler.apply( this, arguments );
- }
- }
- };
- }
-
- jQuery.each([ "keydown", "keyup", "keypress" ], function() {
- jQuery.event.special[ this ] = { add: keyHandler };
- });
-
-})( jQuery );
+++ /dev/null
-/* Copyright (c) 2010
- * @author Laurence Wheway
- * Dual licensed under the MIT (http://www.opensource.org/licenses/mit-license.php)
- * and GPL (http://www.opensource.org/licenses/gpl-license.php) licenses.
- *
- * @version 1.2.0
- */
-(function($) {
- jQuery.extend({
- isOnScreen: function(box, container) {
- // Ensure values come in as numbers (not strings) and strip any 'px' suffix.
- for(var i in box){box[i] = parseFloat(box[i]);}
-
- if(!container){
- container = {
- left: $(window).scrollLeft(),
- top: $(window).scrollTop(),
- width: $(window).width(),
- height: $(window).height()
- };
- }
- for(var i in container){container[i] = parseFloat(container[i]);}
-
- if( box.left+box.width-container.left > 0 &&
- box.left < container.width+container.left &&
- box.top+box.height-container.top > 0 &&
- box.top < container.height+container.top
- ) return true;
- return false;
- }
- })
-
-
- jQuery.fn.isOnScreen = function (container) {
- if(!container){
- container = {
- left: $(window).scrollLeft(),
- top: $(window).scrollTop(),
- width: $(window).width(),
- height: $(window).height()
- };
- }
-
- // Coerce container values (possibly 'px' strings) into numbers.
- for(var i in container){container[i] = parseFloat(container[i]);}
-
- if( $(this).offset().left+$(this).width()-container.left > 0 &&
- $(this).offset().left < container.width+container.left &&
- $(this).offset().top+$(this).height()-container.top > 0 &&
- $(this).offset().top < container.height+container.top
- ) return true;
- return false;
- }
-})(jQuery);
+++ /dev/null
-/*!
- * jQuery JavaScript Library v1.4.3
- * http://jquery.com/
- *
- * Copyright 2010, John Resig
- * Dual licensed under the MIT or GPL Version 2 licenses.
- * http://jquery.org/license
- *
- * Includes Sizzle.js
- * http://sizzlejs.com/
- * Copyright 2010, The Dojo Foundation
- * Released under the MIT, BSD, and GPL Licenses.
- *
- * Date: Thu Oct 14 23:10:06 2010 -0400
- */
-(function(E,A){function U(){return false}function ba(){return true}function ja(a,b,d){d[0].type=a;return c.event.handle.apply(b,d)}function Ga(a){var b,d,e=[],f=[],h,k,l,n,s,v,B,D;k=c.data(this,this.nodeType?"events":"__events__");if(typeof k==="function")k=k.events;if(!(a.liveFired===this||!k||!k.live||a.button&&a.type==="click")){if(a.namespace)D=RegExp("(^|\\.)"+a.namespace.split(".").join("\\.(?:.*\\.)?")+"(\\.|$)");a.liveFired=this;var H=k.live.slice(0);for(n=0;n<H.length;n++){k=H[n];k.origType.replace(X,
-"")===a.type?f.push(k.selector):H.splice(n--,1)}f=c(a.target).closest(f,a.currentTarget);s=0;for(v=f.length;s<v;s++){B=f[s];for(n=0;n<H.length;n++){k=H[n];if(B.selector===k.selector&&(!D||D.test(k.namespace))){l=B.elem;h=null;if(k.preType==="mouseenter"||k.preType==="mouseleave"){a.type=k.preType;h=c(a.relatedTarget).closest(k.selector)[0]}if(!h||h!==l)e.push({elem:l,handleObj:k,level:B.level})}}}s=0;for(v=e.length;s<v;s++){f=e[s];if(d&&f.level>d)break;a.currentTarget=f.elem;a.data=f.handleObj.data;
-a.handleObj=f.handleObj;D=f.handleObj.origHandler.apply(f.elem,arguments);if(D===false||a.isPropagationStopped()){d=f.level;if(D===false)b=false}}return b}}function Y(a,b){return(a&&a!=="*"?a+".":"")+b.replace(Ha,"`").replace(Ia,"&")}function ka(a,b,d){if(c.isFunction(b))return c.grep(a,function(f,h){return!!b.call(f,h,f)===d});else if(b.nodeType)return c.grep(a,function(f){return f===b===d});else if(typeof b==="string"){var e=c.grep(a,function(f){return f.nodeType===1});if(Ja.test(b))return c.filter(b,
-e,!d);else b=c.filter(b,e)}return c.grep(a,function(f){return c.inArray(f,b)>=0===d})}function la(a,b){var d=0;b.each(function(){if(this.nodeName===(a[d]&&a[d].nodeName)){var e=c.data(a[d++]),f=c.data(this,e);if(e=e&&e.events){delete f.handle;f.events={};for(var h in e)for(var k in e[h])c.event.add(this,h,e[h][k],e[h][k].data)}}})}function Ka(a,b){b.src?c.ajax({url:b.src,async:false,dataType:"script"}):c.globalEval(b.text||b.textContent||b.innerHTML||"");b.parentNode&&b.parentNode.removeChild(b)}
-function ma(a,b,d){var e=b==="width"?a.offsetWidth:a.offsetHeight;if(d==="border")return e;c.each(b==="width"?La:Ma,function(){d||(e-=parseFloat(c.css(a,"padding"+this))||0);if(d==="margin")e+=parseFloat(c.css(a,"margin"+this))||0;else e-=parseFloat(c.css(a,"border"+this+"Width"))||0});return e}function ca(a,b,d,e){if(c.isArray(b)&&b.length)c.each(b,function(f,h){d||Na.test(a)?e(a,h):ca(a+"["+(typeof h==="object"||c.isArray(h)?f:"")+"]",h,d,e)});else if(!d&&b!=null&&typeof b==="object")c.isEmptyObject(b)?
-e(a,""):c.each(b,function(f,h){ca(a+"["+f+"]",h,d,e)});else e(a,b)}function S(a,b){var d={};c.each(na.concat.apply([],na.slice(0,b)),function(){d[this]=a});return d}function oa(a){if(!da[a]){var b=c("<"+a+">").appendTo("body"),d=b.css("display");b.remove();if(d==="none"||d==="")d="block";da[a]=d}return da[a]}function ea(a){return c.isWindow(a)?a:a.nodeType===9?a.defaultView||a.parentWindow:false}var u=E.document,c=function(){function a(){if(!b.isReady){try{u.documentElement.doScroll("left")}catch(i){setTimeout(a,
-1);return}b.ready()}}var b=function(i,r){return new b.fn.init(i,r)},d=E.jQuery,e=E.$,f,h=/^(?:[^<]*(<[\w\W]+>)[^>]*$|#([\w\-]+)$)/,k=/\S/,l=/^\s+/,n=/\s+$/,s=/\W/,v=/\d/,B=/^<(\w+)\s*\/?>(?:<\/\1>)?$/,D=/^[\],:{}\s]*$/,H=/\\(?:["\\\/bfnrt]|u[0-9a-fA-F]{4})/g,w=/"[^"\\\n\r]*"|true|false|null|-?\d+(?:\.\d*)?(?:[eE][+\-]?\d+)?/g,G=/(?:^|:|,)(?:\s*\[)+/g,M=/(webkit)[ \/]([\w.]+)/,g=/(opera)(?:.*version)?[ \/]([\w.]+)/,j=/(msie) ([\w.]+)/,o=/(mozilla)(?:.*? rv:([\w.]+))?/,m=navigator.userAgent,p=false,
-q=[],t,x=Object.prototype.toString,C=Object.prototype.hasOwnProperty,P=Array.prototype.push,N=Array.prototype.slice,R=String.prototype.trim,Q=Array.prototype.indexOf,L={};b.fn=b.prototype={init:function(i,r){var y,z,F;if(!i)return this;if(i.nodeType){this.context=this[0]=i;this.length=1;return this}if(i==="body"&&!r&&u.body){this.context=u;this[0]=u.body;this.selector="body";this.length=1;return this}if(typeof i==="string")if((y=h.exec(i))&&(y[1]||!r))if(y[1]){F=r?r.ownerDocument||r:u;if(z=B.exec(i))if(b.isPlainObject(r)){i=
-[u.createElement(z[1])];b.fn.attr.call(i,r,true)}else i=[F.createElement(z[1])];else{z=b.buildFragment([y[1]],[F]);i=(z.cacheable?z.fragment.cloneNode(true):z.fragment).childNodes}return b.merge(this,i)}else{if((z=u.getElementById(y[2]))&&z.parentNode){if(z.id!==y[2])return f.find(i);this.length=1;this[0]=z}this.context=u;this.selector=i;return this}else if(!r&&!s.test(i)){this.selector=i;this.context=u;i=u.getElementsByTagName(i);return b.merge(this,i)}else return!r||r.jquery?(r||f).find(i):b(r).find(i);
-else if(b.isFunction(i))return f.ready(i);if(i.selector!==A){this.selector=i.selector;this.context=i.context}return b.makeArray(i,this)},selector:"",jquery:"1.4.3",length:0,size:function(){return this.length},toArray:function(){return N.call(this,0)},get:function(i){return i==null?this.toArray():i<0?this.slice(i)[0]:this[i]},pushStack:function(i,r,y){var z=b();b.isArray(i)?P.apply(z,i):b.merge(z,i);z.prevObject=this;z.context=this.context;if(r==="find")z.selector=this.selector+(this.selector?" ":
-"")+y;else if(r)z.selector=this.selector+"."+r+"("+y+")";return z},each:function(i,r){return b.each(this,i,r)},ready:function(i){b.bindReady();if(b.isReady)i.call(u,b);else q&&q.push(i);return this},eq:function(i){return i===-1?this.slice(i):this.slice(i,+i+1)},first:function(){return this.eq(0)},last:function(){return this.eq(-1)},slice:function(){return this.pushStack(N.apply(this,arguments),"slice",N.call(arguments).join(","))},map:function(i){return this.pushStack(b.map(this,function(r,y){return i.call(r,
-y,r)}))},end:function(){return this.prevObject||b(null)},push:P,sort:[].sort,splice:[].splice};b.fn.init.prototype=b.fn;b.extend=b.fn.extend=function(){var i=arguments[0]||{},r=1,y=arguments.length,z=false,F,I,K,J,fa;if(typeof i==="boolean"){z=i;i=arguments[1]||{};r=2}if(typeof i!=="object"&&!b.isFunction(i))i={};if(y===r){i=this;--r}for(;r<y;r++)if((F=arguments[r])!=null)for(I in F){K=i[I];J=F[I];if(i!==J)if(z&&J&&(b.isPlainObject(J)||(fa=b.isArray(J)))){if(fa){fa=false;clone=K&&b.isArray(K)?K:[]}else clone=
-K&&b.isPlainObject(K)?K:{};i[I]=b.extend(z,clone,J)}else if(J!==A)i[I]=J}return i};b.extend({noConflict:function(i){E.$=e;if(i)E.jQuery=d;return b},isReady:false,readyWait:1,ready:function(i){i===true&&b.readyWait--;if(!b.readyWait||i!==true&&!b.isReady){if(!u.body)return setTimeout(b.ready,1);b.isReady=true;if(!(i!==true&&--b.readyWait>0)){if(q){for(var r=0;i=q[r++];)i.call(u,b);q=null}b.fn.triggerHandler&&b(u).triggerHandler("ready")}}},bindReady:function(){if(!p){p=true;if(u.readyState==="complete")return setTimeout(b.ready,
-1);if(u.addEventListener){u.addEventListener("DOMContentLoaded",t,false);E.addEventListener("load",b.ready,false)}else if(u.attachEvent){u.attachEvent("onreadystatechange",t);E.attachEvent("onload",b.ready);var i=false;try{i=E.frameElement==null}catch(r){}u.documentElement.doScroll&&i&&a()}}},isFunction:function(i){return b.type(i)==="function"},isArray:Array.isArray||function(i){return b.type(i)==="array"},isWindow:function(i){return i&&typeof i==="object"&&"setInterval"in i},isNaN:function(i){return i==
-null||!v.test(i)||isNaN(i)},type:function(i){return i==null?String(i):L[x.call(i)]||"object"},isPlainObject:function(i){if(!i||b.type(i)!=="object"||i.nodeType||b.isWindow(i))return false;if(i.constructor&&!C.call(i,"constructor")&&!C.call(i.constructor.prototype,"isPrototypeOf"))return false;for(var r in i);return r===A||C.call(i,r)},isEmptyObject:function(i){for(var r in i)return false;return true},error:function(i){throw i;},parseJSON:function(i){if(typeof i!=="string"||!i)return null;i=b.trim(i);
-if(D.test(i.replace(H,"@").replace(w,"]").replace(G,"")))return E.JSON&&E.JSON.parse?E.JSON.parse(i):(new Function("return "+i))();else b.error("Invalid JSON: "+i)},noop:function(){},globalEval:function(i){if(i&&k.test(i)){var r=u.getElementsByTagName("head")[0]||u.documentElement,y=u.createElement("script");y.type="text/javascript";if(b.support.scriptEval)y.appendChild(u.createTextNode(i));else y.text=i;r.insertBefore(y,r.firstChild);r.removeChild(y)}},nodeName:function(i,r){return i.nodeName&&i.nodeName.toUpperCase()===
-r.toUpperCase()},each:function(i,r,y){var z,F=0,I=i.length,K=I===A||b.isFunction(i);if(y)if(K)for(z in i){if(r.apply(i[z],y)===false)break}else for(;F<I;){if(r.apply(i[F++],y)===false)break}else if(K)for(z in i){if(r.call(i[z],z,i[z])===false)break}else for(y=i[0];F<I&&r.call(y,F,y)!==false;y=i[++F]);return i},trim:R?function(i){return i==null?"":R.call(i)}:function(i){return i==null?"":i.toString().replace(l,"").replace(n,"")},makeArray:function(i,r){var y=r||[];if(i!=null){var z=b.type(i);i.length==
-null||z==="string"||z==="function"||z==="regexp"||b.isWindow(i)?P.call(y,i):b.merge(y,i)}return y},inArray:function(i,r){if(r.indexOf)return r.indexOf(i);for(var y=0,z=r.length;y<z;y++)if(r[y]===i)return y;return-1},merge:function(i,r){var y=i.length,z=0;if(typeof r.length==="number")for(var F=r.length;z<F;z++)i[y++]=r[z];else for(;r[z]!==A;)i[y++]=r[z++];i.length=y;return i},grep:function(i,r,y){var z=[],F;y=!!y;for(var I=0,K=i.length;I<K;I++){F=!!r(i[I],I);y!==F&&z.push(i[I])}return z},map:function(i,
-r,y){for(var z=[],F,I=0,K=i.length;I<K;I++){F=r(i[I],I,y);if(F!=null)z[z.length]=F}return z.concat.apply([],z)},guid:1,proxy:function(i,r,y){if(arguments.length===2)if(typeof r==="string"){y=i;i=y[r];r=A}else if(r&&!b.isFunction(r)){y=r;r=A}if(!r&&i)r=function(){return i.apply(y||this,arguments)};if(i)r.guid=i.guid=i.guid||r.guid||b.guid++;return r},access:function(i,r,y,z,F,I){var K=i.length;if(typeof r==="object"){for(var J in r)b.access(i,J,r[J],z,F,y);return i}if(y!==A){z=!I&&z&&b.isFunction(y);
-for(J=0;J<K;J++)F(i[J],r,z?y.call(i[J],J,F(i[J],r)):y,I);return i}return K?F(i[0],r):A},now:function(){return(new Date).getTime()},uaMatch:function(i){i=i.toLowerCase();i=M.exec(i)||g.exec(i)||j.exec(i)||i.indexOf("compatible")<0&&o.exec(i)||[];return{browser:i[1]||"",version:i[2]||"0"}},browser:{}});b.each("Boolean Number String Function Array Date RegExp Object".split(" "),function(i,r){L["[object "+r+"]"]=r.toLowerCase()});m=b.uaMatch(m);if(m.browser){b.browser[m.browser]=true;b.browser.version=
-m.version}if(b.browser.webkit)b.browser.safari=true;if(Q)b.inArray=function(i,r){return Q.call(r,i)};if(!/\s/.test("\u00a0")){l=/^[\s\xA0]+/;n=/[\s\xA0]+$/}f=b(u);if(u.addEventListener)t=function(){u.removeEventListener("DOMContentLoaded",t,false);b.ready()};else if(u.attachEvent)t=function(){if(u.readyState==="complete"){u.detachEvent("onreadystatechange",t);b.ready()}};return E.jQuery=E.$=b}();(function(){c.support={};var a=u.documentElement,b=u.createElement("script"),d=u.createElement("div"),
-e="script"+c.now();d.style.display="none";d.innerHTML=" <link/><table></table><a href='/a' style='color:red;float:left;opacity:.55;'>a</a><input type='checkbox'/>";var f=d.getElementsByTagName("*"),h=d.getElementsByTagName("a")[0],k=u.createElement("select"),l=k.appendChild(u.createElement("option"));if(!(!f||!f.length||!h)){c.support={leadingWhitespace:d.firstChild.nodeType===3,tbody:!d.getElementsByTagName("tbody").length,htmlSerialize:!!d.getElementsByTagName("link").length,style:/red/.test(h.getAttribute("style")),
-hrefNormalized:h.getAttribute("href")==="/a",opacity:/^0.55$/.test(h.style.opacity),cssFloat:!!h.style.cssFloat,checkOn:d.getElementsByTagName("input")[0].value==="on",optSelected:l.selected,optDisabled:false,checkClone:false,scriptEval:false,noCloneEvent:true,boxModel:null,inlineBlockNeedsLayout:false,shrinkWrapBlocks:false,reliableHiddenOffsets:true};k.disabled=true;c.support.optDisabled=!l.disabled;b.type="text/javascript";try{b.appendChild(u.createTextNode("window."+e+"=1;"))}catch(n){}a.insertBefore(b,
-a.firstChild);if(E[e]){c.support.scriptEval=true;delete E[e]}a.removeChild(b);if(d.attachEvent&&d.fireEvent){d.attachEvent("onclick",function s(){c.support.noCloneEvent=false;d.detachEvent("onclick",s)});d.cloneNode(true).fireEvent("onclick")}d=u.createElement("div");d.innerHTML="<input type='radio' name='radiotest' checked='checked'/>";a=u.createDocumentFragment();a.appendChild(d.firstChild);c.support.checkClone=a.cloneNode(true).cloneNode(true).lastChild.checked;c(function(){var s=u.createElement("div");
-s.style.width=s.style.paddingLeft="1px";u.body.appendChild(s);c.boxModel=c.support.boxModel=s.offsetWidth===2;if("zoom"in s.style){s.style.display="inline";s.style.zoom=1;c.support.inlineBlockNeedsLayout=s.offsetWidth===2;s.style.display="";s.innerHTML="<div style='width:4px;'></div>";c.support.shrinkWrapBlocks=s.offsetWidth!==2}s.innerHTML="<table><tr><td style='padding:0;display:none'></td><td>t</td></tr></table>";var v=s.getElementsByTagName("td");c.support.reliableHiddenOffsets=v[0].offsetHeight===
-0;v[0].style.display="";v[1].style.display="none";c.support.reliableHiddenOffsets=c.support.reliableHiddenOffsets&&v[0].offsetHeight===0;s.innerHTML="";u.body.removeChild(s).style.display="none"});a=function(s){var v=u.createElement("div");s="on"+s;var B=s in v;if(!B){v.setAttribute(s,"return;");B=typeof v[s]==="function"}return B};c.support.submitBubbles=a("submit");c.support.changeBubbles=a("change");a=b=d=f=h=null}})();c.props={"for":"htmlFor","class":"className",readonly:"readOnly",maxlength:"maxLength",
-cellspacing:"cellSpacing",rowspan:"rowSpan",colspan:"colSpan",tabindex:"tabIndex",usemap:"useMap",frameborder:"frameBorder"};var pa={},Oa=/^(?:\{.*\}|\[.*\])$/;c.extend({cache:{},uuid:0,expando:"jQuery"+c.now(),noData:{embed:true,object:"clsid:D27CDB6E-AE6D-11cf-96B8-444553540000",applet:true},data:function(a,b,d){if(c.acceptData(a)){a=a==E?pa:a;var e=a.nodeType,f=e?a[c.expando]:null,h=c.cache;if(!(e&&!f&&typeof b==="string"&&d===A)){if(e)f||(a[c.expando]=f=++c.uuid);else h=a;if(typeof b==="object")if(e)h[f]=
-c.extend(h[f],b);else c.extend(h,b);else if(e&&!h[f])h[f]={};a=e?h[f]:h;if(d!==A)a[b]=d;return typeof b==="string"?a[b]:a}}},removeData:function(a,b){if(c.acceptData(a)){a=a==E?pa:a;var d=a.nodeType,e=d?a[c.expando]:a,f=c.cache,h=d?f[e]:e;if(b){if(h){delete h[b];d&&c.isEmptyObject(h)&&c.removeData(a)}}else if(d&&c.support.deleteExpando)delete a[c.expando];else if(a.removeAttribute)a.removeAttribute(c.expando);else if(d)delete f[e];else for(var k in a)delete a[k]}},acceptData:function(a){if(a.nodeName){var b=
-c.noData[a.nodeName.toLowerCase()];if(b)return!(b===true||a.getAttribute("classid")!==b)}return true}});c.fn.extend({data:function(a,b){if(typeof a==="undefined")return this.length?c.data(this[0]):null;else if(typeof a==="object")return this.each(function(){c.data(this,a)});var d=a.split(".");d[1]=d[1]?"."+d[1]:"";if(b===A){var e=this.triggerHandler("getData"+d[1]+"!",[d[0]]);if(e===A&&this.length){e=c.data(this[0],a);if(e===A&&this[0].nodeType===1){e=this[0].getAttribute("data-"+a);if(typeof e===
-"string")try{e=e==="true"?true:e==="false"?false:e==="null"?null:!c.isNaN(e)?parseFloat(e):Oa.test(e)?c.parseJSON(e):e}catch(f){}else e=A}}return e===A&&d[1]?this.data(d[0]):e}else return this.each(function(){var h=c(this),k=[d[0],b];h.triggerHandler("setData"+d[1]+"!",k);c.data(this,a,b);h.triggerHandler("changeData"+d[1]+"!",k)})},removeData:function(a){return this.each(function(){c.removeData(this,a)})}});c.extend({queue:function(a,b,d){if(a){b=(b||"fx")+"queue";var e=c.data(a,b);if(!d)return e||
-[];if(!e||c.isArray(d))e=c.data(a,b,c.makeArray(d));else e.push(d);return e}},dequeue:function(a,b){b=b||"fx";var d=c.queue(a,b),e=d.shift();if(e==="inprogress")e=d.shift();if(e){b==="fx"&&d.unshift("inprogress");e.call(a,function(){c.dequeue(a,b)})}}});c.fn.extend({queue:function(a,b){if(typeof a!=="string"){b=a;a="fx"}if(b===A)return c.queue(this[0],a);return this.each(function(){var d=c.queue(this,a,b);a==="fx"&&d[0]!=="inprogress"&&c.dequeue(this,a)})},dequeue:function(a){return this.each(function(){c.dequeue(this,
-a)})},delay:function(a,b){a=c.fx?c.fx.speeds[a]||a:a;b=b||"fx";return this.queue(b,function(){var d=this;setTimeout(function(){c.dequeue(d,b)},a)})},clearQueue:function(a){return this.queue(a||"fx",[])}});var qa=/[\n\t]/g,ga=/\s+/,Pa=/\r/g,Qa=/^(?:href|src|style)$/,Ra=/^(?:button|input)$/i,Sa=/^(?:button|input|object|select|textarea)$/i,Ta=/^a(?:rea)?$/i,ra=/^(?:radio|checkbox)$/i;c.fn.extend({attr:function(a,b){return c.access(this,a,b,true,c.attr)},removeAttr:function(a){return this.each(function(){c.attr(this,
-a,"");this.nodeType===1&&this.removeAttribute(a)})},addClass:function(a){if(c.isFunction(a))return this.each(function(s){var v=c(this);v.addClass(a.call(this,s,v.attr("class")))});if(a&&typeof a==="string")for(var b=(a||"").split(ga),d=0,e=this.length;d<e;d++){var f=this[d];if(f.nodeType===1)if(f.className){for(var h=" "+f.className+" ",k=f.className,l=0,n=b.length;l<n;l++)if(h.indexOf(" "+b[l]+" ")<0)k+=" "+b[l];f.className=c.trim(k)}else f.className=a}return this},removeClass:function(a){if(c.isFunction(a))return this.each(function(n){var s=
-c(this);s.removeClass(a.call(this,n,s.attr("class")))});if(a&&typeof a==="string"||a===A)for(var b=(a||"").split(ga),d=0,e=this.length;d<e;d++){var f=this[d];if(f.nodeType===1&&f.className)if(a){for(var h=(" "+f.className+" ").replace(qa," "),k=0,l=b.length;k<l;k++)h=h.replace(" "+b[k]+" "," ");f.className=c.trim(h)}else f.className=""}return this},toggleClass:function(a,b){var d=typeof a,e=typeof b==="boolean";if(c.isFunction(a))return this.each(function(f){var h=c(this);h.toggleClass(a.call(this,
-f,h.attr("class"),b),b)});return this.each(function(){if(d==="string")for(var f,h=0,k=c(this),l=b,n=a.split(ga);f=n[h++];){l=e?l:!k.hasClass(f);k[l?"addClass":"removeClass"](f)}else if(d==="undefined"||d==="boolean"){this.className&&c.data(this,"__className__",this.className);this.className=this.className||a===false?"":c.data(this,"__className__")||""}})},hasClass:function(a){a=" "+a+" ";for(var b=0,d=this.length;b<d;b++)if((" "+this[b].className+" ").replace(qa," ").indexOf(a)>-1)return true;return false},
-val:function(a){if(!arguments.length){var b=this[0];if(b){if(c.nodeName(b,"option")){var d=b.attributes.value;return!d||d.specified?b.value:b.text}if(c.nodeName(b,"select")){var e=b.selectedIndex;d=[];var f=b.options;b=b.type==="select-one";if(e<0)return null;var h=b?e:0;for(e=b?e+1:f.length;h<e;h++){var k=f[h];if(k.selected&&(c.support.optDisabled?!k.disabled:k.getAttribute("disabled")===null)&&(!k.parentNode.disabled||!c.nodeName(k.parentNode,"optgroup"))){a=c(k).val();if(b)return a;d.push(a)}}return d}if(ra.test(b.type)&&
-!c.support.checkOn)return b.getAttribute("value")===null?"on":b.value;return(b.value||"").replace(Pa,"")}return A}var l=c.isFunction(a);return this.each(function(n){var s=c(this),v=a;if(this.nodeType===1){if(l)v=a.call(this,n,s.val());if(v==null)v="";else if(typeof v==="number")v+="";else if(c.isArray(v))v=c.map(v,function(D){return D==null?"":D+""});if(c.isArray(v)&&ra.test(this.type))this.checked=c.inArray(s.val(),v)>=0;else if(c.nodeName(this,"select")){var B=c.makeArray(v);c("option",this).each(function(){this.selected=
-c.inArray(c(this).val(),B)>=0});if(!B.length)this.selectedIndex=-1}else this.value=v}})}});c.extend({attrFn:{val:true,css:true,html:true,text:true,data:true,width:true,height:true,offset:true},attr:function(a,b,d,e){if(!a||a.nodeType===3||a.nodeType===8)return A;if(e&&b in c.attrFn)return c(a)[b](d);e=a.nodeType!==1||!c.isXMLDoc(a);var f=d!==A;b=e&&c.props[b]||b;if(a.nodeType===1){var h=Qa.test(b);if((b in a||a[b]!==A)&&e&&!h){if(f){b==="type"&&Ra.test(a.nodeName)&&a.parentNode&&c.error("type property can't be changed");
-if(d===null)a.nodeType===1&&a.removeAttribute(b);else a[b]=d}if(c.nodeName(a,"form")&&a.getAttributeNode(b))return a.getAttributeNode(b).nodeValue;if(b==="tabIndex")return(b=a.getAttributeNode("tabIndex"))&&b.specified?b.value:Sa.test(a.nodeName)||Ta.test(a.nodeName)&&a.href?0:A;return a[b]}if(!c.support.style&&e&&b==="style"){if(f)a.style.cssText=""+d;return a.style.cssText}f&&a.setAttribute(b,""+d);if(!a.attributes[b]&&a.hasAttribute&&!a.hasAttribute(b))return A;a=!c.support.hrefNormalized&&e&&
-h?a.getAttribute(b,2):a.getAttribute(b);return a===null?A:a}}});var X=/\.(.*)$/,ha=/^(?:textarea|input|select)$/i,Ha=/\./g,Ia=/ /g,Ua=/[^\w\s.|`]/g,Va=function(a){return a.replace(Ua,"\\$&")},sa={focusin:0,focusout:0};c.event={add:function(a,b,d,e){if(!(a.nodeType===3||a.nodeType===8)){if(c.isWindow(a)&&a!==E&&!a.frameElement)a=E;if(d===false)d=U;var f,h;if(d.handler){f=d;d=f.handler}if(!d.guid)d.guid=c.guid++;if(h=c.data(a)){var k=a.nodeType?"events":"__events__",l=h[k],n=h.handle;if(typeof l===
-"function"){n=l.handle;l=l.events}else if(!l){a.nodeType||(h[k]=h=function(){});h.events=l={}}if(!n)h.handle=n=function(){return typeof c!=="undefined"&&!c.event.triggered?c.event.handle.apply(n.elem,arguments):A};n.elem=a;b=b.split(" ");for(var s=0,v;k=b[s++];){h=f?c.extend({},f):{handler:d,data:e};if(k.indexOf(".")>-1){v=k.split(".");k=v.shift();h.namespace=v.slice(0).sort().join(".")}else{v=[];h.namespace=""}h.type=k;if(!h.guid)h.guid=d.guid;var B=l[k],D=c.event.special[k]||{};if(!B){B=l[k]=[];
-if(!D.setup||D.setup.call(a,e,v,n)===false)if(a.addEventListener)a.addEventListener(k,n,false);else a.attachEvent&&a.attachEvent("on"+k,n)}if(D.add){D.add.call(a,h);if(!h.handler.guid)h.handler.guid=d.guid}B.push(h);c.event.global[k]=true}a=null}}},global:{},remove:function(a,b,d,e){if(!(a.nodeType===3||a.nodeType===8)){if(d===false)d=U;var f,h,k=0,l,n,s,v,B,D,H=a.nodeType?"events":"__events__",w=c.data(a),G=w&&w[H];if(w&&G){if(typeof G==="function"){w=G;G=G.events}if(b&&b.type){d=b.handler;b=b.type}if(!b||
-typeof b==="string"&&b.charAt(0)==="."){b=b||"";for(f in G)c.event.remove(a,f+b)}else{for(b=b.split(" ");f=b[k++];){v=f;l=f.indexOf(".")<0;n=[];if(!l){n=f.split(".");f=n.shift();s=RegExp("(^|\\.)"+c.map(n.slice(0).sort(),Va).join("\\.(?:.*\\.)?")+"(\\.|$)")}if(B=G[f])if(d){v=c.event.special[f]||{};for(h=e||0;h<B.length;h++){D=B[h];if(d.guid===D.guid){if(l||s.test(D.namespace)){e==null&&B.splice(h--,1);v.remove&&v.remove.call(a,D)}if(e!=null)break}}if(B.length===0||e!=null&&B.length===1){if(!v.teardown||
-v.teardown.call(a,n)===false)c.removeEvent(a,f,w.handle);delete G[f]}}else for(h=0;h<B.length;h++){D=B[h];if(l||s.test(D.namespace)){c.event.remove(a,v,D.handler,h);B.splice(h--,1)}}}if(c.isEmptyObject(G)){if(b=w.handle)b.elem=null;delete w.events;delete w.handle;if(typeof w==="function")c.removeData(a,H);else c.isEmptyObject(w)&&c.removeData(a)}}}}},trigger:function(a,b,d,e){var f=a.type||a;if(!e){a=typeof a==="object"?a[c.expando]?a:c.extend(c.Event(f),a):c.Event(f);if(f.indexOf("!")>=0){a.type=
-f=f.slice(0,-1);a.exclusive=true}if(!d){a.stopPropagation();c.event.global[f]&&c.each(c.cache,function(){this.events&&this.events[f]&&c.event.trigger(a,b,this.handle.elem)})}if(!d||d.nodeType===3||d.nodeType===8)return A;a.result=A;a.target=d;b=c.makeArray(b);b.unshift(a)}a.currentTarget=d;(e=d.nodeType?c.data(d,"handle"):(c.data(d,"__events__")||{}).handle)&&e.apply(d,b);e=d.parentNode||d.ownerDocument;try{if(!(d&&d.nodeName&&c.noData[d.nodeName.toLowerCase()]))if(d["on"+f]&&d["on"+f].apply(d,b)===
-false){a.result=false;a.preventDefault()}}catch(h){}if(!a.isPropagationStopped()&&e)c.event.trigger(a,b,e,true);else if(!a.isDefaultPrevented()){e=a.target;var k,l=f.replace(X,""),n=c.nodeName(e,"a")&&l==="click",s=c.event.special[l]||{};if((!s._default||s._default.call(d,a)===false)&&!n&&!(e&&e.nodeName&&c.noData[e.nodeName.toLowerCase()])){try{if(e[l]){if(k=e["on"+l])e["on"+l]=null;c.event.triggered=true;e[l]()}}catch(v){}if(k)e["on"+l]=k;c.event.triggered=false}}},handle:function(a){var b,d,e;
-d=[];var f,h=c.makeArray(arguments);a=h[0]=c.event.fix(a||E.event);a.currentTarget=this;b=a.type.indexOf(".")<0&&!a.exclusive;if(!b){e=a.type.split(".");a.type=e.shift();d=e.slice(0).sort();e=RegExp("(^|\\.)"+d.join("\\.(?:.*\\.)?")+"(\\.|$)")}a.namespace=a.namespace||d.join(".");f=c.data(this,this.nodeType?"events":"__events__");if(typeof f==="function")f=f.events;d=(f||{})[a.type];if(f&&d){d=d.slice(0);f=0;for(var k=d.length;f<k;f++){var l=d[f];if(b||e.test(l.namespace)){a.handler=l.handler;a.data=
-l.data;a.handleObj=l;l=l.handler.apply(this,h);if(l!==A){a.result=l;if(l===false){a.preventDefault();a.stopPropagation()}}if(a.isImmediatePropagationStopped())break}}}return a.result},props:"altKey attrChange attrName bubbles button cancelable charCode clientX clientY ctrlKey currentTarget data detail eventPhase fromElement handler keyCode layerX layerY metaKey newValue offsetX offsetY pageX pageY prevValue relatedNode relatedTarget screenX screenY shiftKey srcElement target toElement view wheelDelta which".split(" "),
-fix:function(a){if(a[c.expando])return a;var b=a;a=c.Event(b);for(var d=this.props.length,e;d;){e=this.props[--d];a[e]=b[e]}if(!a.target)a.target=a.srcElement||u;if(a.target.nodeType===3)a.target=a.target.parentNode;if(!a.relatedTarget&&a.fromElement)a.relatedTarget=a.fromElement===a.target?a.toElement:a.fromElement;if(a.pageX==null&&a.clientX!=null){b=u.documentElement;d=u.body;a.pageX=a.clientX+(b&&b.scrollLeft||d&&d.scrollLeft||0)-(b&&b.clientLeft||d&&d.clientLeft||0);a.pageY=a.clientY+(b&&b.scrollTop||
-d&&d.scrollTop||0)-(b&&b.clientTop||d&&d.clientTop||0)}if(a.which==null&&(a.charCode!=null||a.keyCode!=null))a.which=a.charCode!=null?a.charCode:a.keyCode;if(!a.metaKey&&a.ctrlKey)a.metaKey=a.ctrlKey;if(!a.which&&a.button!==A)a.which=a.button&1?1:a.button&2?3:a.button&4?2:0;return a},guid:1E8,proxy:c.proxy,special:{ready:{setup:c.bindReady,teardown:c.noop},live:{add:function(a){c.event.add(this,Y(a.origType,a.selector),c.extend({},a,{handler:Ga,guid:a.handler.guid}))},remove:function(a){c.event.remove(this,
-Y(a.origType,a.selector),a)}},beforeunload:{setup:function(a,b,d){if(c.isWindow(this))this.onbeforeunload=d},teardown:function(a,b){if(this.onbeforeunload===b)this.onbeforeunload=null}}}};c.removeEvent=u.removeEventListener?function(a,b,d){a.removeEventListener&&a.removeEventListener(b,d,false)}:function(a,b,d){a.detachEvent&&a.detachEvent("on"+b,d)};c.Event=function(a){if(!this.preventDefault)return new c.Event(a);if(a&&a.type){this.originalEvent=a;this.type=a.type}else this.type=a;this.timeStamp=
-c.now();this[c.expando]=true};c.Event.prototype={preventDefault:function(){this.isDefaultPrevented=ba;var a=this.originalEvent;if(a)if(a.preventDefault)a.preventDefault();else a.returnValue=false},stopPropagation:function(){this.isPropagationStopped=ba;var a=this.originalEvent;if(a){a.stopPropagation&&a.stopPropagation();a.cancelBubble=true}},stopImmediatePropagation:function(){this.isImmediatePropagationStopped=ba;this.stopPropagation()},isDefaultPrevented:U,isPropagationStopped:U,isImmediatePropagationStopped:U};
-var ta=function(a){var b=a.relatedTarget;try{for(;b&&b!==this;)b=b.parentNode;if(b!==this){a.type=a.data;c.event.handle.apply(this,arguments)}}catch(d){}},ua=function(a){a.type=a.data;c.event.handle.apply(this,arguments)};c.each({mouseenter:"mouseover",mouseleave:"mouseout"},function(a,b){c.event.special[a]={setup:function(d){c.event.add(this,b,d&&d.selector?ua:ta,a)},teardown:function(d){c.event.remove(this,b,d&&d.selector?ua:ta)}}});if(!c.support.submitBubbles)c.event.special.submit={setup:function(){if(this.nodeName.toLowerCase()!==
-"form"){c.event.add(this,"click.specialSubmit",function(a){var b=a.target,d=b.type;if((d==="submit"||d==="image")&&c(b).closest("form").length){a.liveFired=A;return ja("submit",this,arguments)}});c.event.add(this,"keypress.specialSubmit",function(a){var b=a.target,d=b.type;if((d==="text"||d==="password")&&c(b).closest("form").length&&a.keyCode===13){a.liveFired=A;return ja("submit",this,arguments)}})}else return false},teardown:function(){c.event.remove(this,".specialSubmit")}};if(!c.support.changeBubbles){var V,
-va=function(a){var b=a.type,d=a.value;if(b==="radio"||b==="checkbox")d=a.checked;else if(b==="select-multiple")d=a.selectedIndex>-1?c.map(a.options,function(e){return e.selected}).join("-"):"";else if(a.nodeName.toLowerCase()==="select")d=a.selectedIndex;return d},Z=function(a,b){var d=a.target,e,f;if(!(!ha.test(d.nodeName)||d.readOnly)){e=c.data(d,"_change_data");f=va(d);if(a.type!=="focusout"||d.type!=="radio")c.data(d,"_change_data",f);if(!(e===A||f===e))if(e!=null||f){a.type="change";a.liveFired=
-A;return c.event.trigger(a,b,d)}}};c.event.special.change={filters:{focusout:Z,beforedeactivate:Z,click:function(a){var b=a.target,d=b.type;if(d==="radio"||d==="checkbox"||b.nodeName.toLowerCase()==="select")return Z.call(this,a)},keydown:function(a){var b=a.target,d=b.type;if(a.keyCode===13&&b.nodeName.toLowerCase()!=="textarea"||a.keyCode===32&&(d==="checkbox"||d==="radio")||d==="select-multiple")return Z.call(this,a)},beforeactivate:function(a){a=a.target;c.data(a,"_change_data",va(a))}},setup:function(){if(this.type===
-"file")return false;for(var a in V)c.event.add(this,a+".specialChange",V[a]);return ha.test(this.nodeName)},teardown:function(){c.event.remove(this,".specialChange");return ha.test(this.nodeName)}};V=c.event.special.change.filters;V.focus=V.beforeactivate}u.addEventListener&&c.each({focus:"focusin",blur:"focusout"},function(a,b){function d(e){e=c.event.fix(e);e.type=b;return c.event.trigger(e,null,e.target)}c.event.special[b]={setup:function(){sa[b]++===0&&u.addEventListener(a,d,true)},teardown:function(){--sa[b]===
-0&&u.removeEventListener(a,d,true)}}});c.each(["bind","one"],function(a,b){c.fn[b]=function(d,e,f){if(typeof d==="object"){for(var h in d)this[b](h,e,d[h],f);return this}if(c.isFunction(e)||e===false){f=e;e=A}var k=b==="one"?c.proxy(f,function(n){c(this).unbind(n,k);return f.apply(this,arguments)}):f;if(d==="unload"&&b!=="one")this.one(d,e,f);else{h=0;for(var l=this.length;h<l;h++)c.event.add(this[h],d,k,e)}return this}});c.fn.extend({unbind:function(a,b){if(typeof a==="object"&&!a.preventDefault)for(var d in a)this.unbind(d,
-a[d]);else{d=0;for(var e=this.length;d<e;d++)c.event.remove(this[d],a,b)}return this},delegate:function(a,b,d,e){return this.live(b,d,e,a)},undelegate:function(a,b,d){return arguments.length===0?this.unbind("live"):this.die(b,null,d,a)},trigger:function(a,b){return this.each(function(){c.event.trigger(a,b,this)})},triggerHandler:function(a,b){if(this[0]){var d=c.Event(a);d.preventDefault();d.stopPropagation();c.event.trigger(d,b,this[0]);return d.result}},toggle:function(a){for(var b=arguments,d=
-1;d<b.length;)c.proxy(a,b[d++]);return this.click(c.proxy(a,function(e){var f=(c.data(this,"lastToggle"+a.guid)||0)%d;c.data(this,"lastToggle"+a.guid,f+1);e.preventDefault();return b[f].apply(this,arguments)||false}))},hover:function(a,b){return this.mouseenter(a).mouseleave(b||a)}});var wa={focus:"focusin",blur:"focusout",mouseenter:"mouseover",mouseleave:"mouseout"};c.each(["live","die"],function(a,b){c.fn[b]=function(d,e,f,h){var k,l=0,n,s,v=h||this.selector;h=h?this:c(this.context);if(typeof d===
-"object"&&!d.preventDefault){for(k in d)h[b](k,e,d[k],v);return this}if(c.isFunction(e)){f=e;e=A}for(d=(d||"").split(" ");(k=d[l++])!=null;){n=X.exec(k);s="";if(n){s=n[0];k=k.replace(X,"")}if(k==="hover")d.push("mouseenter"+s,"mouseleave"+s);else{n=k;if(k==="focus"||k==="blur"){d.push(wa[k]+s);k+=s}else k=(wa[k]||k)+s;if(b==="live"){s=0;for(var B=h.length;s<B;s++)c.event.add(h[s],"live."+Y(k,v),{data:e,selector:v,handler:f,origType:k,origHandler:f,preType:n})}else h.unbind("live."+Y(k,v),f)}}return this}});
-c.each("blur focus focusin focusout load resize scroll unload click dblclick mousedown mouseup mousemove mouseover mouseout mouseenter mouseleave change select submit keydown keypress keyup error".split(" "),function(a,b){c.fn[b]=function(d,e){if(e==null){e=d;d=null}return arguments.length>0?this.bind(b,d,e):this.trigger(b)};if(c.attrFn)c.attrFn[b]=true});E.attachEvent&&!E.addEventListener&&c(E).bind("unload",function(){for(var a in c.cache)if(c.cache[a].handle)try{c.event.remove(c.cache[a].handle.elem)}catch(b){}});
-(function(){function a(g,j,o,m,p,q){p=0;for(var t=m.length;p<t;p++){var x=m[p];if(x){x=x[g];for(var C=false;x;){if(x.sizcache===o){C=m[x.sizset];break}if(x.nodeType===1&&!q){x.sizcache=o;x.sizset=p}if(x.nodeName.toLowerCase()===j){C=x;break}x=x[g]}m[p]=C}}}function b(g,j,o,m,p,q){p=0;for(var t=m.length;p<t;p++){var x=m[p];if(x){x=x[g];for(var C=false;x;){if(x.sizcache===o){C=m[x.sizset];break}if(x.nodeType===1){if(!q){x.sizcache=o;x.sizset=p}if(typeof j!=="string"){if(x===j){C=true;break}}else if(l.filter(j,
-[x]).length>0){C=x;break}}x=x[g]}m[p]=C}}}var d=/((?:\((?:\([^()]+\)|[^()]+)+\)|\[(?:\[[^\[\]]*\]|['"][^'"]*['"]|[^\[\]'"]+)+\]|\\.|[^ >+~,(\[\\]+)+|[>+~])(\s*,\s*)?((?:.|\r|\n)*)/g,e=0,f=Object.prototype.toString,h=false,k=true;[0,0].sort(function(){k=false;return 0});var l=function(g,j,o,m){o=o||[];var p=j=j||u;if(j.nodeType!==1&&j.nodeType!==9)return[];if(!g||typeof g!=="string")return o;var q=[],t,x,C,P,N=true,R=l.isXML(j),Q=g,L;do{d.exec("");if(t=d.exec(Q)){Q=t[3];q.push(t[1]);if(t[2]){P=t[3];
-break}}}while(t);if(q.length>1&&s.exec(g))if(q.length===2&&n.relative[q[0]])x=M(q[0]+q[1],j);else for(x=n.relative[q[0]]?[j]:l(q.shift(),j);q.length;){g=q.shift();if(n.relative[g])g+=q.shift();x=M(g,x)}else{if(!m&&q.length>1&&j.nodeType===9&&!R&&n.match.ID.test(q[0])&&!n.match.ID.test(q[q.length-1])){t=l.find(q.shift(),j,R);j=t.expr?l.filter(t.expr,t.set)[0]:t.set[0]}if(j){t=m?{expr:q.pop(),set:D(m)}:l.find(q.pop(),q.length===1&&(q[0]==="~"||q[0]==="+")&&j.parentNode?j.parentNode:j,R);x=t.expr?l.filter(t.expr,
-t.set):t.set;if(q.length>0)C=D(x);else N=false;for(;q.length;){t=L=q.pop();if(n.relative[L])t=q.pop();else L="";if(t==null)t=j;n.relative[L](C,t,R)}}else C=[]}C||(C=x);C||l.error(L||g);if(f.call(C)==="[object Array]")if(N)if(j&&j.nodeType===1)for(g=0;C[g]!=null;g++){if(C[g]&&(C[g]===true||C[g].nodeType===1&&l.contains(j,C[g])))o.push(x[g])}else for(g=0;C[g]!=null;g++)C[g]&&C[g].nodeType===1&&o.push(x[g]);else o.push.apply(o,C);else D(C,o);if(P){l(P,p,o,m);l.uniqueSort(o)}return o};l.uniqueSort=function(g){if(w){h=
-k;g.sort(w);if(h)for(var j=1;j<g.length;j++)g[j]===g[j-1]&&g.splice(j--,1)}return g};l.matches=function(g,j){return l(g,null,null,j)};l.matchesSelector=function(g,j){return l(j,null,null,[g]).length>0};l.find=function(g,j,o){var m;if(!g)return[];for(var p=0,q=n.order.length;p<q;p++){var t=n.order[p],x;if(x=n.leftMatch[t].exec(g)){var C=x[1];x.splice(1,1);if(C.substr(C.length-1)!=="\\"){x[1]=(x[1]||"").replace(/\\/g,"");m=n.find[t](x,j,o);if(m!=null){g=g.replace(n.match[t],"");break}}}}m||(m=j.getElementsByTagName("*"));
-return{set:m,expr:g}};l.filter=function(g,j,o,m){for(var p=g,q=[],t=j,x,C,P=j&&j[0]&&l.isXML(j[0]);g&&j.length;){for(var N in n.filter)if((x=n.leftMatch[N].exec(g))!=null&&x[2]){var R=n.filter[N],Q,L;L=x[1];C=false;x.splice(1,1);if(L.substr(L.length-1)!=="\\"){if(t===q)q=[];if(n.preFilter[N])if(x=n.preFilter[N](x,t,o,q,m,P)){if(x===true)continue}else C=Q=true;if(x)for(var i=0;(L=t[i])!=null;i++)if(L){Q=R(L,x,i,t);var r=m^!!Q;if(o&&Q!=null)if(r)C=true;else t[i]=false;else if(r){q.push(L);C=true}}if(Q!==
-A){o||(t=q);g=g.replace(n.match[N],"");if(!C)return[];break}}}if(g===p)if(C==null)l.error(g);else break;p=g}return t};l.error=function(g){throw"Syntax error, unrecognized expression: "+g;};var n=l.selectors={order:["ID","NAME","TAG"],match:{ID:/#((?:[\w\u00c0-\uFFFF\-]|\\.)+)/,CLASS:/\.((?:[\w\u00c0-\uFFFF\-]|\\.)+)/,NAME:/\[name=['"]*((?:[\w\u00c0-\uFFFF\-]|\\.)+)['"]*\]/,ATTR:/\[\s*((?:[\w\u00c0-\uFFFF\-]|\\.)+)\s*(?:(\S?=)\s*(['"]*)(.*?)\3|)\s*\]/,TAG:/^((?:[\w\u00c0-\uFFFF\*\-]|\\.)+)/,CHILD:/:(only|nth|last|first)-child(?:\((even|odd|[\dn+\-]*)\))?/,
-POS:/:(nth|eq|gt|lt|first|last|even|odd)(?:\((\d*)\))?(?=[^\-]|$)/,PSEUDO:/:((?:[\w\u00c0-\uFFFF\-]|\\.)+)(?:\((['"]?)((?:\([^\)]+\)|[^\(\)]*)+)\2\))?/},leftMatch:{},attrMap:{"class":"className","for":"htmlFor"},attrHandle:{href:function(g){return g.getAttribute("href")}},relative:{"+":function(g,j){var o=typeof j==="string",m=o&&!/\W/.test(j);o=o&&!m;if(m)j=j.toLowerCase();m=0;for(var p=g.length,q;m<p;m++)if(q=g[m]){for(;(q=q.previousSibling)&&q.nodeType!==1;);g[m]=o||q&&q.nodeName.toLowerCase()===
-j?q||false:q===j}o&&l.filter(j,g,true)},">":function(g,j){var o=typeof j==="string",m,p=0,q=g.length;if(o&&!/\W/.test(j))for(j=j.toLowerCase();p<q;p++){if(m=g[p]){o=m.parentNode;g[p]=o.nodeName.toLowerCase()===j?o:false}}else{for(;p<q;p++)if(m=g[p])g[p]=o?m.parentNode:m.parentNode===j;o&&l.filter(j,g,true)}},"":function(g,j,o){var m=e++,p=b,q;if(typeof j==="string"&&!/\W/.test(j)){q=j=j.toLowerCase();p=a}p("parentNode",j,m,g,q,o)},"~":function(g,j,o){var m=e++,p=b,q;if(typeof j==="string"&&!/\W/.test(j)){q=
-j=j.toLowerCase();p=a}p("previousSibling",j,m,g,q,o)}},find:{ID:function(g,j,o){if(typeof j.getElementById!=="undefined"&&!o)return(g=j.getElementById(g[1]))&&g.parentNode?[g]:[]},NAME:function(g,j){if(typeof j.getElementsByName!=="undefined"){for(var o=[],m=j.getElementsByName(g[1]),p=0,q=m.length;p<q;p++)m[p].getAttribute("name")===g[1]&&o.push(m[p]);return o.length===0?null:o}},TAG:function(g,j){return j.getElementsByTagName(g[1])}},preFilter:{CLASS:function(g,j,o,m,p,q){g=" "+g[1].replace(/\\/g,
-"")+" ";if(q)return g;q=0;for(var t;(t=j[q])!=null;q++)if(t)if(p^(t.className&&(" "+t.className+" ").replace(/[\t\n]/g," ").indexOf(g)>=0))o||m.push(t);else if(o)j[q]=false;return false},ID:function(g){return g[1].replace(/\\/g,"")},TAG:function(g){return g[1].toLowerCase()},CHILD:function(g){if(g[1]==="nth"){var j=/(-?)(\d*)n((?:\+|-)?\d*)/.exec(g[2]==="even"&&"2n"||g[2]==="odd"&&"2n+1"||!/\D/.test(g[2])&&"0n+"+g[2]||g[2]);g[2]=j[1]+(j[2]||1)-0;g[3]=j[3]-0}g[0]=e++;return g},ATTR:function(g,j,o,
-m,p,q){j=g[1].replace(/\\/g,"");if(!q&&n.attrMap[j])g[1]=n.attrMap[j];if(g[2]==="~=")g[4]=" "+g[4]+" ";return g},PSEUDO:function(g,j,o,m,p){if(g[1]==="not")if((d.exec(g[3])||"").length>1||/^\w/.test(g[3]))g[3]=l(g[3],null,null,j);else{g=l.filter(g[3],j,o,true^p);o||m.push.apply(m,g);return false}else if(n.match.POS.test(g[0])||n.match.CHILD.test(g[0]))return true;return g},POS:function(g){g.unshift(true);return g}},filters:{enabled:function(g){return g.disabled===false&&g.type!=="hidden"},disabled:function(g){return g.disabled===
-true},checked:function(g){return g.checked===true},selected:function(g){return g.selected===true},parent:function(g){return!!g.firstChild},empty:function(g){return!g.firstChild},has:function(g,j,o){return!!l(o[3],g).length},header:function(g){return/h\d/i.test(g.nodeName)},text:function(g){return"text"===g.type},radio:function(g){return"radio"===g.type},checkbox:function(g){return"checkbox"===g.type},file:function(g){return"file"===g.type},password:function(g){return"password"===g.type},submit:function(g){return"submit"===
-g.type},image:function(g){return"image"===g.type},reset:function(g){return"reset"===g.type},button:function(g){return"button"===g.type||g.nodeName.toLowerCase()==="button"},input:function(g){return/input|select|textarea|button/i.test(g.nodeName)}},setFilters:{first:function(g,j){return j===0},last:function(g,j,o,m){return j===m.length-1},even:function(g,j){return j%2===0},odd:function(g,j){return j%2===1},lt:function(g,j,o){return j<o[3]-0},gt:function(g,j,o){return j>o[3]-0},nth:function(g,j,o){return o[3]-
-0===j},eq:function(g,j,o){return o[3]-0===j}},filter:{PSEUDO:function(g,j,o,m){var p=j[1],q=n.filters[p];if(q)return q(g,o,j,m);else if(p==="contains")return(g.textContent||g.innerText||l.getText([g])||"").indexOf(j[3])>=0;else if(p==="not"){j=j[3];o=0;for(m=j.length;o<m;o++)if(j[o]===g)return false;return true}else l.error("Syntax error, unrecognized expression: "+p)},CHILD:function(g,j){var o=j[1],m=g;switch(o){case "only":case "first":for(;m=m.previousSibling;)if(m.nodeType===1)return false;if(o===
-"first")return true;m=g;case "last":for(;m=m.nextSibling;)if(m.nodeType===1)return false;return true;case "nth":o=j[2];var p=j[3];if(o===1&&p===0)return true;var q=j[0],t=g.parentNode;if(t&&(t.sizcache!==q||!g.nodeIndex)){var x=0;for(m=t.firstChild;m;m=m.nextSibling)if(m.nodeType===1)m.nodeIndex=++x;t.sizcache=q}m=g.nodeIndex-p;return o===0?m===0:m%o===0&&m/o>=0}},ID:function(g,j){return g.nodeType===1&&g.getAttribute("id")===j},TAG:function(g,j){return j==="*"&&g.nodeType===1||g.nodeName.toLowerCase()===
-j},CLASS:function(g,j){return(" "+(g.className||g.getAttribute("class"))+" ").indexOf(j)>-1},ATTR:function(g,j){var o=j[1];o=n.attrHandle[o]?n.attrHandle[o](g):g[o]!=null?g[o]:g.getAttribute(o);var m=o+"",p=j[2],q=j[4];return o==null?p==="!=":p==="="?m===q:p==="*="?m.indexOf(q)>=0:p==="~="?(" "+m+" ").indexOf(q)>=0:!q?m&&o!==false:p==="!="?m!==q:p==="^="?m.indexOf(q)===0:p==="$="?m.substr(m.length-q.length)===q:p==="|="?m===q||m.substr(0,q.length+1)===q+"-":false},POS:function(g,j,o,m){var p=n.setFilters[j[2]];
-if(p)return p(g,o,j,m)}}},s=n.match.POS,v=function(g,j){return"\\"+(j-0+1)},B;for(B in n.match){n.match[B]=RegExp(n.match[B].source+/(?![^\[]*\])(?![^\(]*\))/.source);n.leftMatch[B]=RegExp(/(^(?:.|\r|\n)*?)/.source+n.match[B].source.replace(/\\(\d+)/g,v))}var D=function(g,j){g=Array.prototype.slice.call(g,0);if(j){j.push.apply(j,g);return j}return g};try{Array.prototype.slice.call(u.documentElement.childNodes,0)}catch(H){D=function(g,j){var o=j||[],m=0;if(f.call(g)==="[object Array]")Array.prototype.push.apply(o,
-g);else if(typeof g.length==="number")for(var p=g.length;m<p;m++)o.push(g[m]);else for(;g[m];m++)o.push(g[m]);return o}}var w,G;if(u.documentElement.compareDocumentPosition)w=function(g,j){if(g===j){h=true;return 0}if(!g.compareDocumentPosition||!j.compareDocumentPosition)return g.compareDocumentPosition?-1:1;return g.compareDocumentPosition(j)&4?-1:1};else{w=function(g,j){var o=[],m=[],p=g.parentNode,q=j.parentNode,t=p;if(g===j){h=true;return 0}else if(p===q)return G(g,j);else if(p){if(!q)return 1}else return-1;
-for(;t;){o.unshift(t);t=t.parentNode}for(t=q;t;){m.unshift(t);t=t.parentNode}p=o.length;q=m.length;for(t=0;t<p&&t<q;t++)if(o[t]!==m[t])return G(o[t],m[t]);return t===p?G(g,m[t],-1):G(o[t],j,1)};G=function(g,j,o){if(g===j)return o;for(g=g.nextSibling;g;){if(g===j)return-1;g=g.nextSibling}return 1}}l.getText=function(g){for(var j="",o,m=0;g[m];m++){o=g[m];if(o.nodeType===3||o.nodeType===4)j+=o.nodeValue;else if(o.nodeType!==8)j+=l.getText(o.childNodes)}return j};(function(){var g=u.createElement("div"),
-j="script"+(new Date).getTime();g.innerHTML="<a name='"+j+"'/>";var o=u.documentElement;o.insertBefore(g,o.firstChild);if(u.getElementById(j)){n.find.ID=function(m,p,q){if(typeof p.getElementById!=="undefined"&&!q)return(p=p.getElementById(m[1]))?p.id===m[1]||typeof p.getAttributeNode!=="undefined"&&p.getAttributeNode("id").nodeValue===m[1]?[p]:A:[]};n.filter.ID=function(m,p){var q=typeof m.getAttributeNode!=="undefined"&&m.getAttributeNode("id");return m.nodeType===1&&q&&q.nodeValue===p}}o.removeChild(g);
-o=g=null})();(function(){var g=u.createElement("div");g.appendChild(u.createComment(""));if(g.getElementsByTagName("*").length>0)n.find.TAG=function(j,o){var m=o.getElementsByTagName(j[1]);if(j[1]==="*"){for(var p=[],q=0;m[q];q++)m[q].nodeType===1&&p.push(m[q]);m=p}return m};g.innerHTML="<a href='#'></a>";if(g.firstChild&&typeof g.firstChild.getAttribute!=="undefined"&&g.firstChild.getAttribute("href")!=="#")n.attrHandle.href=function(j){return j.getAttribute("href",2)};g=null})();u.querySelectorAll&&
-function(){var g=l,j=u.createElement("div");j.innerHTML="<p class='TEST'></p>";if(!(j.querySelectorAll&&j.querySelectorAll(".TEST").length===0)){l=function(m,p,q,t){p=p||u;if(!t&&!l.isXML(p))if(p.nodeType===9)try{return D(p.querySelectorAll(m),q)}catch(x){}else if(p.nodeType===1&&p.nodeName.toLowerCase()!=="object"){var C=p.id,P=p.id="__sizzle__";try{return D(p.querySelectorAll("#"+P+" "+m),q)}catch(N){}finally{if(C)p.id=C;else p.removeAttribute("id")}}return g(m,p,q,t)};for(var o in g)l[o]=g[o];
-j=null}}();(function(){var g=u.documentElement,j=g.matchesSelector||g.mozMatchesSelector||g.webkitMatchesSelector||g.msMatchesSelector,o=false;try{j.call(u.documentElement,":sizzle")}catch(m){o=true}if(j)l.matchesSelector=function(p,q){try{if(o||!n.match.PSEUDO.test(q))return j.call(p,q)}catch(t){}return l(q,null,null,[p]).length>0}})();(function(){var g=u.createElement("div");g.innerHTML="<div class='test e'></div><div class='test'></div>";if(!(!g.getElementsByClassName||g.getElementsByClassName("e").length===
-0)){g.lastChild.className="e";if(g.getElementsByClassName("e").length!==1){n.order.splice(1,0,"CLASS");n.find.CLASS=function(j,o,m){if(typeof o.getElementsByClassName!=="undefined"&&!m)return o.getElementsByClassName(j[1])};g=null}}})();l.contains=u.documentElement.contains?function(g,j){return g!==j&&(g.contains?g.contains(j):true)}:function(g,j){return!!(g.compareDocumentPosition(j)&16)};l.isXML=function(g){return(g=(g?g.ownerDocument||g:0).documentElement)?g.nodeName!=="HTML":false};var M=function(g,
-j){for(var o=[],m="",p,q=j.nodeType?[j]:j;p=n.match.PSEUDO.exec(g);){m+=p[0];g=g.replace(n.match.PSEUDO,"")}g=n.relative[g]?g+"*":g;p=0;for(var t=q.length;p<t;p++)l(g,q[p],o);return l.filter(m,o)};c.find=l;c.expr=l.selectors;c.expr[":"]=c.expr.filters;c.unique=l.uniqueSort;c.text=l.getText;c.isXMLDoc=l.isXML;c.contains=l.contains})();var Wa=/Until$/,Xa=/^(?:parents|prevUntil|prevAll)/,Ya=/,/,Ja=/^.[^:#\[\.,]*$/,Za=Array.prototype.slice,$a=c.expr.match.POS;c.fn.extend({find:function(a){for(var b=this.pushStack("",
-"find",a),d=0,e=0,f=this.length;e<f;e++){d=b.length;c.find(a,this[e],b);if(e>0)for(var h=d;h<b.length;h++)for(var k=0;k<d;k++)if(b[k]===b[h]){b.splice(h--,1);break}}return b},has:function(a){var b=c(a);return this.filter(function(){for(var d=0,e=b.length;d<e;d++)if(c.contains(this,b[d]))return true})},not:function(a){return this.pushStack(ka(this,a,false),"not",a)},filter:function(a){return this.pushStack(ka(this,a,true),"filter",a)},is:function(a){return!!a&&c.filter(a,this).length>0},closest:function(a,
-b){var d=[],e,f,h=this[0];if(c.isArray(a)){var k={},l,n=1;if(h&&a.length){e=0;for(f=a.length;e<f;e++){l=a[e];k[l]||(k[l]=c.expr.match.POS.test(l)?c(l,b||this.context):l)}for(;h&&h.ownerDocument&&h!==b;){for(l in k){e=k[l];if(e.jquery?e.index(h)>-1:c(h).is(e))d.push({selector:l,elem:h,level:n})}h=h.parentNode;n++}}return d}k=$a.test(a)?c(a,b||this.context):null;e=0;for(f=this.length;e<f;e++)for(h=this[e];h;)if(k?k.index(h)>-1:c.find.matchesSelector(h,a)){d.push(h);break}else{h=h.parentNode;if(!h||
-!h.ownerDocument||h===b)break}d=d.length>1?c.unique(d):d;return this.pushStack(d,"closest",a)},index:function(a){if(!a||typeof a==="string")return c.inArray(this[0],a?c(a):this.parent().children());return c.inArray(a.jquery?a[0]:a,this)},add:function(a,b){var d=typeof a==="string"?c(a,b||this.context):c.makeArray(a),e=c.merge(this.get(),d);return this.pushStack(!d[0]||!d[0].parentNode||d[0].parentNode.nodeType===11||!e[0]||!e[0].parentNode||e[0].parentNode.nodeType===11?e:c.unique(e))},andSelf:function(){return this.add(this.prevObject)}});
-c.each({parent:function(a){return(a=a.parentNode)&&a.nodeType!==11?a:null},parents:function(a){return c.dir(a,"parentNode")},parentsUntil:function(a,b,d){return c.dir(a,"parentNode",d)},next:function(a){return c.nth(a,2,"nextSibling")},prev:function(a){return c.nth(a,2,"previousSibling")},nextAll:function(a){return c.dir(a,"nextSibling")},prevAll:function(a){return c.dir(a,"previousSibling")},nextUntil:function(a,b,d){return c.dir(a,"nextSibling",d)},prevUntil:function(a,b,d){return c.dir(a,"previousSibling",
-d)},siblings:function(a){return c.sibling(a.parentNode.firstChild,a)},children:function(a){return c.sibling(a.firstChild)},contents:function(a){return c.nodeName(a,"iframe")?a.contentDocument||a.contentWindow.document:c.makeArray(a.childNodes)}},function(a,b){c.fn[a]=function(d,e){var f=c.map(this,b,d);Wa.test(a)||(e=d);if(e&&typeof e==="string")f=c.filter(e,f);f=this.length>1?c.unique(f):f;if((this.length>1||Ya.test(e))&&Xa.test(a))f=f.reverse();return this.pushStack(f,a,Za.call(arguments).join(","))}});
-c.extend({filter:function(a,b,d){if(d)a=":not("+a+")";return b.length===1?c.find.matchesSelector(b[0],a)?[b[0]]:[]:c.find.matches(a,b)},dir:function(a,b,d){var e=[];for(a=a[b];a&&a.nodeType!==9&&(d===A||a.nodeType!==1||!c(a).is(d));){a.nodeType===1&&e.push(a);a=a[b]}return e},nth:function(a,b,d){b=b||1;for(var e=0;a;a=a[d])if(a.nodeType===1&&++e===b)break;return a},sibling:function(a,b){for(var d=[];a;a=a.nextSibling)a.nodeType===1&&a!==b&&d.push(a);return d}});var xa=/ jQuery\d+="(?:\d+|null)"/g,
-$=/^\s+/,ya=/<(?!area|br|col|embed|hr|img|input|link|meta|param)(([\w:]+)[^>]*)\/>/ig,za=/<([\w:]+)/,ab=/<tbody/i,bb=/<|&#?\w+;/,Aa=/<(?:script|object|embed|option|style)/i,Ba=/checked\s*(?:[^=]|=\s*.checked.)/i,cb=/\=([^="'>\s]+\/)>/g,O={option:[1,"<select multiple='multiple'>","</select>"],legend:[1,"<fieldset>","</fieldset>"],thead:[1,"<table>","</table>"],tr:[2,"<table><tbody>","</tbody></table>"],td:[3,"<table><tbody><tr>","</tr></tbody></table>"],col:[2,"<table><tbody></tbody><colgroup>","</colgroup></table>"],
-area:[1,"<map>","</map>"],_default:[0,"",""]};O.optgroup=O.option;O.tbody=O.tfoot=O.colgroup=O.caption=O.thead;O.th=O.td;if(!c.support.htmlSerialize)O._default=[1,"div<div>","</div>"];c.fn.extend({text:function(a){if(c.isFunction(a))return this.each(function(b){var d=c(this);d.text(a.call(this,b,d.text()))});if(typeof a!=="object"&&a!==A)return this.empty().append((this[0]&&this[0].ownerDocument||u).createTextNode(a));return c.text(this)},wrapAll:function(a){if(c.isFunction(a))return this.each(function(d){c(this).wrapAll(a.call(this,
-d))});if(this[0]){var b=c(a,this[0].ownerDocument).eq(0).clone(true);this[0].parentNode&&b.insertBefore(this[0]);b.map(function(){for(var d=this;d.firstChild&&d.firstChild.nodeType===1;)d=d.firstChild;return d}).append(this)}return this},wrapInner:function(a){if(c.isFunction(a))return this.each(function(b){c(this).wrapInner(a.call(this,b))});return this.each(function(){var b=c(this),d=b.contents();d.length?d.wrapAll(a):b.append(a)})},wrap:function(a){return this.each(function(){c(this).wrapAll(a)})},
-unwrap:function(){return this.parent().each(function(){c.nodeName(this,"body")||c(this).replaceWith(this.childNodes)}).end()},append:function(){return this.domManip(arguments,true,function(a){this.nodeType===1&&this.appendChild(a)})},prepend:function(){return this.domManip(arguments,true,function(a){this.nodeType===1&&this.insertBefore(a,this.firstChild)})},before:function(){if(this[0]&&this[0].parentNode)return this.domManip(arguments,false,function(b){this.parentNode.insertBefore(b,this)});else if(arguments.length){var a=
-c(arguments[0]);a.push.apply(a,this.toArray());return this.pushStack(a,"before",arguments)}},after:function(){if(this[0]&&this[0].parentNode)return this.domManip(arguments,false,function(b){this.parentNode.insertBefore(b,this.nextSibling)});else if(arguments.length){var a=this.pushStack(this,"after",arguments);a.push.apply(a,c(arguments[0]).toArray());return a}},remove:function(a,b){for(var d=0,e;(e=this[d])!=null;d++)if(!a||c.filter(a,[e]).length){if(!b&&e.nodeType===1){c.cleanData(e.getElementsByTagName("*"));
-c.cleanData([e])}e.parentNode&&e.parentNode.removeChild(e)}return this},empty:function(){for(var a=0,b;(b=this[a])!=null;a++)for(b.nodeType===1&&c.cleanData(b.getElementsByTagName("*"));b.firstChild;)b.removeChild(b.firstChild);return this},clone:function(a){var b=this.map(function(){if(!c.support.noCloneEvent&&!c.isXMLDoc(this)){var d=this.outerHTML,e=this.ownerDocument;if(!d){d=e.createElement("div");d.appendChild(this.cloneNode(true));d=d.innerHTML}return c.clean([d.replace(xa,"").replace(cb,'="$1">').replace($,
-"")],e)[0]}else return this.cloneNode(true)});if(a===true){la(this,b);la(this.find("*"),b.find("*"))}return b},html:function(a){if(a===A)return this[0]&&this[0].nodeType===1?this[0].innerHTML.replace(xa,""):null;else if(typeof a==="string"&&!Aa.test(a)&&(c.support.leadingWhitespace||!$.test(a))&&!O[(za.exec(a)||["",""])[1].toLowerCase()]){a=a.replace(ya,"<$1></$2>");try{for(var b=0,d=this.length;b<d;b++)if(this[b].nodeType===1){c.cleanData(this[b].getElementsByTagName("*"));this[b].innerHTML=a}}catch(e){this.empty().append(a)}}else c.isFunction(a)?
-this.each(function(f){var h=c(this);h.html(a.call(this,f,h.html()))}):this.empty().append(a);return this},replaceWith:function(a){if(this[0]&&this[0].parentNode){if(c.isFunction(a))return this.each(function(b){var d=c(this),e=d.html();d.replaceWith(a.call(this,b,e))});if(typeof a!=="string")a=c(a).detach();return this.each(function(){var b=this.nextSibling,d=this.parentNode;c(this).remove();b?c(b).before(a):c(d).append(a)})}else return this.pushStack(c(c.isFunction(a)?a():a),"replaceWith",a)},detach:function(a){return this.remove(a,
-true)},domManip:function(a,b,d){var e,f,h=a[0],k=[],l;if(!c.support.checkClone&&arguments.length===3&&typeof h==="string"&&Ba.test(h))return this.each(function(){c(this).domManip(a,b,d,true)});if(c.isFunction(h))return this.each(function(s){var v=c(this);a[0]=h.call(this,s,b?v.html():A);v.domManip(a,b,d)});if(this[0]){e=h&&h.parentNode;e=c.support.parentNode&&e&&e.nodeType===11&&e.childNodes.length===this.length?{fragment:e}:c.buildFragment(a,this,k);l=e.fragment;if(f=l.childNodes.length===1?l=l.firstChild:
-l.firstChild){b=b&&c.nodeName(f,"tr");f=0;for(var n=this.length;f<n;f++)d.call(b?c.nodeName(this[f],"table")?this[f].getElementsByTagName("tbody")[0]||this[f].appendChild(this[f].ownerDocument.createElement("tbody")):this[f]:this[f],f>0||e.cacheable||this.length>1?l.cloneNode(true):l)}k.length&&c.each(k,Ka)}return this}});c.buildFragment=function(a,b,d){var e,f,h;b=b&&b[0]?b[0].ownerDocument||b[0]:u;if(a.length===1&&typeof a[0]==="string"&&a[0].length<512&&b===u&&!Aa.test(a[0])&&(c.support.checkClone||
-!Ba.test(a[0]))){f=true;if(h=c.fragments[a[0]])if(h!==1)e=h}if(!e){e=b.createDocumentFragment();c.clean(a,b,e,d)}if(f)c.fragments[a[0]]=h?e:1;return{fragment:e,cacheable:f}};c.fragments={};c.each({appendTo:"append",prependTo:"prepend",insertBefore:"before",insertAfter:"after",replaceAll:"replaceWith"},function(a,b){c.fn[a]=function(d){var e=[];d=c(d);var f=this.length===1&&this[0].parentNode;if(f&&f.nodeType===11&&f.childNodes.length===1&&d.length===1){d[b](this[0]);return this}else{f=0;for(var h=
-d.length;f<h;f++){var k=(f>0?this.clone(true):this).get();c(d[f])[b](k);e=e.concat(k)}return this.pushStack(e,a,d.selector)}}});c.extend({clean:function(a,b,d,e){b=b||u;if(typeof b.createElement==="undefined")b=b.ownerDocument||b[0]&&b[0].ownerDocument||u;for(var f=[],h=0,k;(k=a[h])!=null;h++){if(typeof k==="number")k+="";if(k){if(typeof k==="string"&&!bb.test(k))k=b.createTextNode(k);else if(typeof k==="string"){k=k.replace(ya,"<$1></$2>");var l=(za.exec(k)||["",""])[1].toLowerCase(),n=O[l]||O._default,
-s=n[0],v=b.createElement("div");for(v.innerHTML=n[1]+k+n[2];s--;)v=v.lastChild;if(!c.support.tbody){s=ab.test(k);l=l==="table"&&!s?v.firstChild&&v.firstChild.childNodes:n[1]==="<table>"&&!s?v.childNodes:[];for(n=l.length-1;n>=0;--n)c.nodeName(l[n],"tbody")&&!l[n].childNodes.length&&l[n].parentNode.removeChild(l[n])}!c.support.leadingWhitespace&&$.test(k)&&v.insertBefore(b.createTextNode($.exec(k)[0]),v.firstChild);k=v.childNodes}if(k.nodeType)f.push(k);else f=c.merge(f,k)}}if(d)for(h=0;f[h];h++)if(e&&
-c.nodeName(f[h],"script")&&(!f[h].type||f[h].type.toLowerCase()==="text/javascript"))e.push(f[h].parentNode?f[h].parentNode.removeChild(f[h]):f[h]);else{f[h].nodeType===1&&f.splice.apply(f,[h+1,0].concat(c.makeArray(f[h].getElementsByTagName("script"))));d.appendChild(f[h])}return f},cleanData:function(a){for(var b,d,e=c.cache,f=c.event.special,h=c.support.deleteExpando,k=0,l;(l=a[k])!=null;k++)if(!(l.nodeName&&c.noData[l.nodeName.toLowerCase()]))if(d=l[c.expando]){if((b=e[d])&&b.events)for(var n in b.events)f[n]?
-c.event.remove(l,n):c.removeEvent(l,n,b.handle);if(h)delete l[c.expando];else l.removeAttribute&&l.removeAttribute(c.expando);delete e[d]}}});var Ca=/alpha\([^)]*\)/i,db=/opacity=([^)]*)/,eb=/-([a-z])/ig,fb=/([A-Z])/g,Da=/^-?\d+(?:px)?$/i,gb=/^-?\d/,hb={position:"absolute",visibility:"hidden",display:"block"},La=["Left","Right"],Ma=["Top","Bottom"],W,ib=u.defaultView&&u.defaultView.getComputedStyle,jb=function(a,b){return b.toUpperCase()};c.fn.css=function(a,b){if(arguments.length===2&&b===A)return this;
-return c.access(this,a,b,true,function(d,e,f){return f!==A?c.style(d,e,f):c.css(d,e)})};c.extend({cssHooks:{opacity:{get:function(a,b){if(b){var d=W(a,"opacity","opacity");return d===""?"1":d}else return a.style.opacity}}},cssNumber:{zIndex:true,fontWeight:true,opacity:true,zoom:true,lineHeight:true},cssProps:{"float":c.support.cssFloat?"cssFloat":"styleFloat"},style:function(a,b,d,e){if(!(!a||a.nodeType===3||a.nodeType===8||!a.style)){var f,h=c.camelCase(b),k=a.style,l=c.cssHooks[h];b=c.cssProps[h]||
-h;if(d!==A){if(!(typeof d==="number"&&isNaN(d)||d==null)){if(typeof d==="number"&&!c.cssNumber[h])d+="px";if(!l||!("set"in l)||(d=l.set(a,d))!==A)try{k[b]=d}catch(n){}}}else{if(l&&"get"in l&&(f=l.get(a,false,e))!==A)return f;return k[b]}}},css:function(a,b,d){var e,f=c.camelCase(b),h=c.cssHooks[f];b=c.cssProps[f]||f;if(h&&"get"in h&&(e=h.get(a,true,d))!==A)return e;else if(W)return W(a,b,f)},swap:function(a,b,d){var e={},f;for(f in b){e[f]=a.style[f];a.style[f]=b[f]}d.call(a);for(f in b)a.style[f]=
-e[f]},camelCase:function(a){return a.replace(eb,jb)}});c.curCSS=c.css;c.each(["height","width"],function(a,b){c.cssHooks[b]={get:function(d,e,f){var h;if(e){if(d.offsetWidth!==0)h=ma(d,b,f);else c.swap(d,hb,function(){h=ma(d,b,f)});return h+"px"}},set:function(d,e){if(Da.test(e)){e=parseFloat(e);if(e>=0)return e+"px"}else return e}}});if(!c.support.opacity)c.cssHooks.opacity={get:function(a,b){return db.test((b&&a.currentStyle?a.currentStyle.filter:a.style.filter)||"")?parseFloat(RegExp.$1)/100+"":
-b?"1":""},set:function(a,b){var d=a.style;d.zoom=1;var e=c.isNaN(b)?"":"alpha(opacity="+b*100+")",f=d.filter||"";d.filter=Ca.test(f)?f.replace(Ca,e):d.filter+" "+e}};if(ib)W=function(a,b,d){var e;d=d.replace(fb,"-$1").toLowerCase();if(!(b=a.ownerDocument.defaultView))return A;if(b=b.getComputedStyle(a,null)){e=b.getPropertyValue(d);if(e===""&&!c.contains(a.ownerDocument.documentElement,a))e=c.style(a,d)}return e};else if(u.documentElement.currentStyle)W=function(a,b){var d,e,f=a.currentStyle&&a.currentStyle[b],
-h=a.style;if(!Da.test(f)&&gb.test(f)){d=h.left;e=a.runtimeStyle.left;a.runtimeStyle.left=a.currentStyle.left;h.left=b==="fontSize"?"1em":f||0;f=h.pixelLeft+"px";h.left=d;a.runtimeStyle.left=e}return f};if(c.expr&&c.expr.filters){c.expr.filters.hidden=function(a){var b=a.offsetHeight;return a.offsetWidth===0&&b===0||!c.support.reliableHiddenOffsets&&(a.style.display||c.css(a,"display"))==="none"};c.expr.filters.visible=function(a){return!c.expr.filters.hidden(a)}}var kb=c.now(),lb=/<script\b[^<]*(?:(?!<\/script>)<[^<]*)*<\/script>/gi,
-mb=/^(?:select|textarea)/i,nb=/^(?:color|date|datetime|email|hidden|month|number|password|range|search|tel|text|time|url|week)$/i,ob=/^(?:GET|HEAD|DELETE)$/,Na=/\[\]$/,T=/\=\?(&|$)/,ia=/\?/,pb=/([?&])_=[^&]*/,qb=/^(\w+:)?\/\/([^\/?#]+)/,rb=/%20/g,sb=/#.*$/,Ea=c.fn.load;c.fn.extend({load:function(a,b,d){if(typeof a!=="string"&&Ea)return Ea.apply(this,arguments);else if(!this.length)return this;var e=a.indexOf(" ");if(e>=0){var f=a.slice(e,a.length);a=a.slice(0,e)}e="GET";if(b)if(c.isFunction(b)){d=
-b;b=null}else if(typeof b==="object"){b=c.param(b,c.ajaxSettings.traditional);e="POST"}var h=this;c.ajax({url:a,type:e,dataType:"html",data:b,complete:function(k,l){if(l==="success"||l==="notmodified")h.html(f?c("<div>").append(k.responseText.replace(lb,"")).find(f):k.responseText);d&&h.each(d,[k.responseText,l,k])}});return this},serialize:function(){return c.param(this.serializeArray())},serializeArray:function(){return this.map(function(){return this.elements?c.makeArray(this.elements):this}).filter(function(){return this.name&&
-!this.disabled&&(this.checked||mb.test(this.nodeName)||nb.test(this.type))}).map(function(a,b){var d=c(this).val();return d==null?null:c.isArray(d)?c.map(d,function(e){return{name:b.name,value:e}}):{name:b.name,value:d}}).get()}});c.each("ajaxStart ajaxStop ajaxComplete ajaxError ajaxSuccess ajaxSend".split(" "),function(a,b){c.fn[b]=function(d){return this.bind(b,d)}});c.extend({get:function(a,b,d,e){if(c.isFunction(b)){e=e||d;d=b;b=null}return c.ajax({type:"GET",url:a,data:b,success:d,dataType:e})},
-getScript:function(a,b){return c.get(a,null,b,"script")},getJSON:function(a,b,d){return c.get(a,b,d,"json")},post:function(a,b,d,e){if(c.isFunction(b)){e=e||d;d=b;b={}}return c.ajax({type:"POST",url:a,data:b,success:d,dataType:e})},ajaxSetup:function(a){c.extend(c.ajaxSettings,a)},ajaxSettings:{url:location.href,global:true,type:"GET",contentType:"application/x-www-form-urlencoded",processData:true,async:true,xhr:function(){return new E.XMLHttpRequest},accepts:{xml:"application/xml, text/xml",html:"text/html",
-script:"text/javascript, application/javascript",json:"application/json, text/javascript",text:"text/plain",_default:"*/*"}},ajax:function(a){var b=c.extend(true,{},c.ajaxSettings,a),d,e,f,h=b.type.toUpperCase(),k=ob.test(h);b.url=b.url.replace(sb,"");b.context=a&&a.context!=null?a.context:b;if(b.data&&b.processData&&typeof b.data!=="string")b.data=c.param(b.data,b.traditional);if(b.dataType==="jsonp"){if(h==="GET")T.test(b.url)||(b.url+=(ia.test(b.url)?"&":"?")+(b.jsonp||"callback")+"=?");else if(!b.data||
-!T.test(b.data))b.data=(b.data?b.data+"&":"")+(b.jsonp||"callback")+"=?";b.dataType="json"}if(b.dataType==="json"&&(b.data&&T.test(b.data)||T.test(b.url))){d=b.jsonpCallback||"jsonp"+kb++;if(b.data)b.data=(b.data+"").replace(T,"="+d+"$1");b.url=b.url.replace(T,"="+d+"$1");b.dataType="script";var l=E[d];E[d]=function(m){f=m;c.handleSuccess(b,w,e,f);c.handleComplete(b,w,e,f);if(c.isFunction(l))l(m);else{E[d]=A;try{delete E[d]}catch(p){}}v&&v.removeChild(B)}}if(b.dataType==="script"&&b.cache===null)b.cache=
-false;if(b.cache===false&&h==="GET"){var n=c.now(),s=b.url.replace(pb,"$1_="+n);b.url=s+(s===b.url?(ia.test(b.url)?"&":"?")+"_="+n:"")}if(b.data&&h==="GET")b.url+=(ia.test(b.url)?"&":"?")+b.data;b.global&&c.active++===0&&c.event.trigger("ajaxStart");n=(n=qb.exec(b.url))&&(n[1]&&n[1]!==location.protocol||n[2]!==location.host);if(b.dataType==="script"&&h==="GET"&&n){var v=u.getElementsByTagName("head")[0]||u.documentElement,B=u.createElement("script");if(b.scriptCharset)B.charset=b.scriptCharset;B.src=
-b.url;if(!d){var D=false;B.onload=B.onreadystatechange=function(){if(!D&&(!this.readyState||this.readyState==="loaded"||this.readyState==="complete")){D=true;c.handleSuccess(b,w,e,f);c.handleComplete(b,w,e,f);B.onload=B.onreadystatechange=null;v&&B.parentNode&&v.removeChild(B)}}}v.insertBefore(B,v.firstChild);return A}var H=false,w=b.xhr();if(w){b.username?w.open(h,b.url,b.async,b.username,b.password):w.open(h,b.url,b.async);try{if(b.data!=null&&!k||a&&a.contentType)w.setRequestHeader("Content-Type",
-b.contentType);if(b.ifModified){c.lastModified[b.url]&&w.setRequestHeader("If-Modified-Since",c.lastModified[b.url]);c.etag[b.url]&&w.setRequestHeader("If-None-Match",c.etag[b.url])}n||w.setRequestHeader("X-Requested-With","XMLHttpRequest");w.setRequestHeader("Accept",b.dataType&&b.accepts[b.dataType]?b.accepts[b.dataType]+", */*; q=0.01":b.accepts._default)}catch(G){}if(b.beforeSend&&b.beforeSend.call(b.context,w,b)===false){b.global&&c.active--===1&&c.event.trigger("ajaxStop");w.abort();return false}b.global&&
-c.triggerGlobal(b,"ajaxSend",[w,b]);var M=w.onreadystatechange=function(m){if(!w||w.readyState===0||m==="abort"){H||c.handleComplete(b,w,e,f);H=true;if(w)w.onreadystatechange=c.noop}else if(!H&&w&&(w.readyState===4||m==="timeout")){H=true;w.onreadystatechange=c.noop;e=m==="timeout"?"timeout":!c.httpSuccess(w)?"error":b.ifModified&&c.httpNotModified(w,b.url)?"notmodified":"success";var p;if(e==="success")try{f=c.httpData(w,b.dataType,b)}catch(q){e="parsererror";p=q}if(e==="success"||e==="notmodified")d||
-c.handleSuccess(b,w,e,f);else c.handleError(b,w,e,p);d||c.handleComplete(b,w,e,f);m==="timeout"&&w.abort();if(b.async)w=null}};try{var g=w.abort;w.abort=function(){w&&g.call&&g.call(w);M("abort")}}catch(j){}b.async&&b.timeout>0&&setTimeout(function(){w&&!H&&M("timeout")},b.timeout);try{w.send(k||b.data==null?null:b.data)}catch(o){c.handleError(b,w,null,o);c.handleComplete(b,w,e,f)}b.async||M();return w}},param:function(a,b){var d=[],e=function(h,k){k=c.isFunction(k)?k():k;d[d.length]=encodeURIComponent(h)+
-"="+encodeURIComponent(k)};if(b===A)b=c.ajaxSettings.traditional;if(c.isArray(a)||a.jquery)c.each(a,function(){e(this.name,this.value)});else for(var f in a)ca(f,a[f],b,e);return d.join("&").replace(rb,"+")}});c.extend({active:0,lastModified:{},etag:{},handleError:function(a,b,d,e){a.error&&a.error.call(a.context,b,d,e);a.global&&c.triggerGlobal(a,"ajaxError",[b,a,e])},handleSuccess:function(a,b,d,e){a.success&&a.success.call(a.context,e,d,b);a.global&&c.triggerGlobal(a,"ajaxSuccess",[b,a])},handleComplete:function(a,
-b,d){a.complete&&a.complete.call(a.context,b,d);a.global&&c.triggerGlobal(a,"ajaxComplete",[b,a]);a.global&&c.active--===1&&c.event.trigger("ajaxStop")},triggerGlobal:function(a,b,d){(a.context&&a.context.url==null?c(a.context):c.event).trigger(b,d)},httpSuccess:function(a){try{return!a.status&&location.protocol==="file:"||a.status>=200&&a.status<300||a.status===304||a.status===1223}catch(b){}return false},httpNotModified:function(a,b){var d=a.getResponseHeader("Last-Modified"),e=a.getResponseHeader("Etag");
-if(d)c.lastModified[b]=d;if(e)c.etag[b]=e;return a.status===304},httpData:function(a,b,d){var e=a.getResponseHeader("content-type")||"",f=b==="xml"||!b&&e.indexOf("xml")>=0;a=f?a.responseXML:a.responseText;f&&a.documentElement.nodeName==="parsererror"&&c.error("parsererror");if(d&&d.dataFilter)a=d.dataFilter(a,b);if(typeof a==="string")if(b==="json"||!b&&e.indexOf("json")>=0)a=c.parseJSON(a);else if(b==="script"||!b&&e.indexOf("javascript")>=0)c.globalEval(a);return a}});if(E.ActiveXObject)c.ajaxSettings.xhr=
-function(){if(E.location.protocol!=="file:")try{return new E.XMLHttpRequest}catch(a){}try{return new E.ActiveXObject("Microsoft.XMLHTTP")}catch(b){}};c.support.ajax=!!c.ajaxSettings.xhr();var da={},tb=/^(?:toggle|show|hide)$/,ub=/^([+\-]=)?([\d+.\-]+)(.*)$/,aa,na=[["height","marginTop","marginBottom","paddingTop","paddingBottom"],["width","marginLeft","marginRight","paddingLeft","paddingRight"],["opacity"]];c.fn.extend({show:function(a,b,d){if(a||a===0)return this.animate(S("show",3),a,b,d);else{a=
-0;for(b=this.length;a<b;a++){if(!c.data(this[a],"olddisplay")&&this[a].style.display==="none")this[a].style.display="";this[a].style.display===""&&c.css(this[a],"display")==="none"&&c.data(this[a],"olddisplay",oa(this[a].nodeName))}for(a=0;a<b;a++)this[a].style.display=c.data(this[a],"olddisplay")||"";return this}},hide:function(a,b,d){if(a||a===0)return this.animate(S("hide",3),a,b,d);else{a=0;for(b=this.length;a<b;a++){d=c.css(this[a],"display");d!=="none"&&c.data(this[a],"olddisplay",d)}for(a=
-0;a<b;a++)this[a].style.display="none";return this}},_toggle:c.fn.toggle,toggle:function(a,b,d){var e=typeof a==="boolean";if(c.isFunction(a)&&c.isFunction(b))this._toggle.apply(this,arguments);else a==null||e?this.each(function(){var f=e?a:c(this).is(":hidden");c(this)[f?"show":"hide"]()}):this.animate(S("toggle",3),a,b,d);return this},fadeTo:function(a,b,d,e){return this.filter(":hidden").css("opacity",0).show().end().animate({opacity:b},a,d,e)},animate:function(a,b,d,e){var f=c.speed(b,d,e);if(c.isEmptyObject(a))return this.each(f.complete);
-return this[f.queue===false?"each":"queue"](function(){var h=c.extend({},f),k,l=this.nodeType===1,n=l&&c(this).is(":hidden"),s=this;for(k in a){var v=c.camelCase(k);if(k!==v){a[v]=a[k];delete a[k];k=v}if(a[k]==="hide"&&n||a[k]==="show"&&!n)return h.complete.call(this);if(l&&(k==="height"||k==="width")){h.overflow=[this.style.overflow,this.style.overflowX,this.style.overflowY];if(c.css(this,"display")==="inline"&&c.css(this,"float")==="none")if(c.support.inlineBlockNeedsLayout)if(oa(this.nodeName)===
-"inline")this.style.display="inline-block";else{this.style.display="inline";this.style.zoom=1}else this.style.display="inline-block"}if(c.isArray(a[k])){(h.specialEasing=h.specialEasing||{})[k]=a[k][1];a[k]=a[k][0]}}if(h.overflow!=null)this.style.overflow="hidden";h.curAnim=c.extend({},a);c.each(a,function(B,D){var H=new c.fx(s,h,B);if(tb.test(D))H[D==="toggle"?n?"show":"hide":D](a);else{var w=ub.exec(D),G=H.cur(true)||0;if(w){var M=parseFloat(w[2]),g=w[3]||"px";if(g!=="px"){c.style(s,B,(M||1)+g);
-G=(M||1)/H.cur(true)*G;c.style(s,B,G+g)}if(w[1])M=(w[1]==="-="?-1:1)*M+G;H.custom(G,M,g)}else H.custom(G,D,"")}});return true})},stop:function(a,b){var d=c.timers;a&&this.queue([]);this.each(function(){for(var e=d.length-1;e>=0;e--)if(d[e].elem===this){b&&d[e](true);d.splice(e,1)}});b||this.dequeue();return this}});c.each({slideDown:S("show",1),slideUp:S("hide",1),slideToggle:S("toggle",1),fadeIn:{opacity:"show"},fadeOut:{opacity:"hide"}},function(a,b){c.fn[a]=function(d,e,f){return this.animate(b,
-d,e,f)}});c.extend({speed:function(a,b,d){var e=a&&typeof a==="object"?c.extend({},a):{complete:d||!d&&b||c.isFunction(a)&&a,duration:a,easing:d&&b||b&&!c.isFunction(b)&&b};e.duration=c.fx.off?0:typeof e.duration==="number"?e.duration:e.duration in c.fx.speeds?c.fx.speeds[e.duration]:c.fx.speeds._default;e.old=e.complete;e.complete=function(){e.queue!==false&&c(this).dequeue();c.isFunction(e.old)&&e.old.call(this)};return e},easing:{linear:function(a,b,d,e){return d+e*a},swing:function(a,b,d,e){return(-Math.cos(a*
-Math.PI)/2+0.5)*e+d}},timers:[],fx:function(a,b,d){this.options=b;this.elem=a;this.prop=d;if(!b.orig)b.orig={}}});c.fx.prototype={update:function(){this.options.step&&this.options.step.call(this.elem,this.now,this);(c.fx.step[this.prop]||c.fx.step._default)(this)},cur:function(){if(this.elem[this.prop]!=null&&(!this.elem.style||this.elem.style[this.prop]==null))return this.elem[this.prop];var a=parseFloat(c.css(this.elem,this.prop));return a&&a>-1E4?a:0},custom:function(a,b,d){function e(h){return f.step(h)}
-this.startTime=c.now();this.start=a;this.end=b;this.unit=d||this.unit||"px";this.now=this.start;this.pos=this.state=0;var f=this;a=c.fx;e.elem=this.elem;if(e()&&c.timers.push(e)&&!aa)aa=setInterval(a.tick,a.interval)},show:function(){this.options.orig[this.prop]=c.style(this.elem,this.prop);this.options.show=true;this.custom(this.prop==="width"||this.prop==="height"?1:0,this.cur());c(this.elem).show()},hide:function(){this.options.orig[this.prop]=c.style(this.elem,this.prop);this.options.hide=true;
-this.custom(this.cur(),0)},step:function(a){var b=c.now(),d=true;if(a||b>=this.options.duration+this.startTime){this.now=this.end;this.pos=this.state=1;this.update();this.options.curAnim[this.prop]=true;for(var e in this.options.curAnim)if(this.options.curAnim[e]!==true)d=false;if(d){if(this.options.overflow!=null&&!c.support.shrinkWrapBlocks){var f=this.elem,h=this.options;c.each(["","X","Y"],function(l,n){f.style["overflow"+n]=h.overflow[l]})}this.options.hide&&c(this.elem).hide();if(this.options.hide||
-this.options.show)for(var k in this.options.curAnim)c.style(this.elem,k,this.options.orig[k]);this.options.complete.call(this.elem)}return false}else{a=b-this.startTime;this.state=a/this.options.duration;b=this.options.easing||(c.easing.swing?"swing":"linear");this.pos=c.easing[this.options.specialEasing&&this.options.specialEasing[this.prop]||b](this.state,a,0,1,this.options.duration);this.now=this.start+(this.end-this.start)*this.pos;this.update()}return true}};c.extend(c.fx,{tick:function(){for(var a=
-c.timers,b=0;b<a.length;b++)a[b]()||a.splice(b--,1);a.length||c.fx.stop()},interval:13,stop:function(){clearInterval(aa);aa=null},speeds:{slow:600,fast:200,_default:400},step:{opacity:function(a){c.style(a.elem,"opacity",a.now)},_default:function(a){if(a.elem.style&&a.elem.style[a.prop]!=null)a.elem.style[a.prop]=(a.prop==="width"||a.prop==="height"?Math.max(0,a.now):a.now)+a.unit;else a.elem[a.prop]=a.now}}});if(c.expr&&c.expr.filters)c.expr.filters.animated=function(a){return c.grep(c.timers,function(b){return a===
-b.elem}).length};var vb=/^t(?:able|d|h)$/i,Fa=/^(?:body|html)$/i;c.fn.offset="getBoundingClientRect"in u.documentElement?function(a){var b=this[0],d;if(a)return this.each(function(k){c.offset.setOffset(this,a,k)});if(!b||!b.ownerDocument)return null;if(b===b.ownerDocument.body)return c.offset.bodyOffset(b);try{d=b.getBoundingClientRect()}catch(e){}var f=b.ownerDocument,h=f.documentElement;if(!d||!c.contains(h,b))return d||{top:0,left:0};b=f.body;f=ea(f);return{top:d.top+(f.pageYOffset||c.support.boxModel&&
-h.scrollTop||b.scrollTop)-(h.clientTop||b.clientTop||0),left:d.left+(f.pageXOffset||c.support.boxModel&&h.scrollLeft||b.scrollLeft)-(h.clientLeft||b.clientLeft||0)}}:function(a){var b=this[0];if(a)return this.each(function(s){c.offset.setOffset(this,a,s)});if(!b||!b.ownerDocument)return null;if(b===b.ownerDocument.body)return c.offset.bodyOffset(b);c.offset.initialize();var d=b.offsetParent,e=b.ownerDocument,f,h=e.documentElement,k=e.body;f=(e=e.defaultView)?e.getComputedStyle(b,null):b.currentStyle;
-for(var l=b.offsetTop,n=b.offsetLeft;(b=b.parentNode)&&b!==k&&b!==h;){if(c.offset.supportsFixedPosition&&f.position==="fixed")break;f=e?e.getComputedStyle(b,null):b.currentStyle;l-=b.scrollTop;n-=b.scrollLeft;if(b===d){l+=b.offsetTop;n+=b.offsetLeft;if(c.offset.doesNotAddBorder&&!(c.offset.doesAddBorderForTableAndCells&&vb.test(b.nodeName))){l+=parseFloat(f.borderTopWidth)||0;n+=parseFloat(f.borderLeftWidth)||0}d=b.offsetParent}if(c.offset.subtractsBorderForOverflowNotVisible&&f.overflow!=="visible"){l+=
-parseFloat(f.borderTopWidth)||0;n+=parseFloat(f.borderLeftWidth)||0}f=f}if(f.position==="relative"||f.position==="static"){l+=k.offsetTop;n+=k.offsetLeft}if(c.offset.supportsFixedPosition&&f.position==="fixed"){l+=Math.max(h.scrollTop,k.scrollTop);n+=Math.max(h.scrollLeft,k.scrollLeft)}return{top:l,left:n}};c.offset={initialize:function(){var a=u.body,b=u.createElement("div"),d,e,f,h=parseFloat(c.css(a,"marginTop"))||0;c.extend(b.style,{position:"absolute",top:0,left:0,margin:0,border:0,width:"1px",
-height:"1px",visibility:"hidden"});b.innerHTML="<div style='position:absolute;top:0;left:0;margin:0;border:5px solid #000;padding:0;width:1px;height:1px;'><div></div></div><table style='position:absolute;top:0;left:0;margin:0;border:5px solid #000;padding:0;width:1px;height:1px;' cellpadding='0' cellspacing='0'><tr><td></td></tr></table>";a.insertBefore(b,a.firstChild);d=b.firstChild;e=d.firstChild;f=d.nextSibling.firstChild.firstChild;this.doesNotAddBorder=e.offsetTop!==5;this.doesAddBorderForTableAndCells=
-f.offsetTop===5;e.style.position="fixed";e.style.top="20px";this.supportsFixedPosition=e.offsetTop===20||e.offsetTop===15;e.style.position=e.style.top="";d.style.overflow="hidden";d.style.position="relative";this.subtractsBorderForOverflowNotVisible=e.offsetTop===-5;this.doesNotIncludeMarginInBodyOffset=a.offsetTop!==h;a.removeChild(b);c.offset.initialize=c.noop},bodyOffset:function(a){var b=a.offsetTop,d=a.offsetLeft;c.offset.initialize();if(c.offset.doesNotIncludeMarginInBodyOffset){b+=parseFloat(c.css(a,
-"marginTop"))||0;d+=parseFloat(c.css(a,"marginLeft"))||0}return{top:b,left:d}},setOffset:function(a,b,d){var e=c.css(a,"position");if(e==="static")a.style.position="relative";var f=c(a),h=f.offset(),k=c.css(a,"top"),l=c.css(a,"left"),n=e==="absolute"&&c.inArray("auto",[k,l])>-1;e={};var s={};if(n)s=f.position();k=n?s.top:parseInt(k,10)||0;l=n?s.left:parseInt(l,10)||0;if(c.isFunction(b))b=b.call(a,d,h);if(b.top!=null)e.top=b.top-h.top+k;if(b.left!=null)e.left=b.left-h.left+l;"using"in b?b.using.call(a,
-e):f.css(e)}};c.fn.extend({position:function(){if(!this[0])return null;var a=this[0],b=this.offsetParent(),d=this.offset(),e=Fa.test(b[0].nodeName)?{top:0,left:0}:b.offset();d.top-=parseFloat(c.css(a,"marginTop"))||0;d.left-=parseFloat(c.css(a,"marginLeft"))||0;e.top+=parseFloat(c.css(b[0],"borderTopWidth"))||0;e.left+=parseFloat(c.css(b[0],"borderLeftWidth"))||0;return{top:d.top-e.top,left:d.left-e.left}},offsetParent:function(){return this.map(function(){for(var a=this.offsetParent||u.body;a&&!Fa.test(a.nodeName)&&
-c.css(a,"position")==="static";)a=a.offsetParent;return a})}});c.each(["Left","Top"],function(a,b){var d="scroll"+b;c.fn[d]=function(e){var f=this[0],h;if(!f)return null;if(e!==A)return this.each(function(){if(h=ea(this))h.scrollTo(!a?e:c(h).scrollLeft(),a?e:c(h).scrollTop());else this[d]=e});else return(h=ea(f))?"pageXOffset"in h?h[a?"pageYOffset":"pageXOffset"]:c.support.boxModel&&h.document.documentElement[d]||h.document.body[d]:f[d]}});c.each(["Height","Width"],function(a,b){var d=b.toLowerCase();
-c.fn["inner"+b]=function(){return this[0]?parseFloat(c.css(this[0],d,"padding")):null};c.fn["outer"+b]=function(e){return this[0]?parseFloat(c.css(this[0],d,e?"margin":"border")):null};c.fn[d]=function(e){var f=this[0];if(!f)return e==null?null:this;if(c.isFunction(e))return this.each(function(h){var k=c(this);k[d](e.call(this,h,k[d]()))});return c.isWindow(f)?f.document.compatMode==="CSS1Compat"&&f.document.documentElement["client"+b]||f.document.body["client"+b]:f.nodeType===9?Math.max(f.documentElement["client"+
-b],f.body["scroll"+b],f.documentElement["scroll"+b],f.body["offset"+b],f.documentElement["offset"+b]):e===A?parseFloat(c.css(f,d)):this.css(d,typeof e==="string"?e:e+"px")}})})(window);
+++ /dev/null
-
-(function($){$.extend({tablesorter:new function(){var parsers=[],widgets=[];this.defaults={cssHeader:"header",cssAsc:"headerSortUp",cssDesc:"headerSortDown",sortInitialOrder:"asc",sortMultiSortKey:"shiftKey",sortForce:null,sortAppend:null,textExtraction:"simple",parsers:{},widgets:[],widgetZebra:{css:["even","odd"]},headers:{},widthFixed:false,cancelSelection:true,sortList:[],headerList:[],dateFormat:"us",decimal:'.',debug:false};function benchmark(s,d){log(s+","+(new Date().getTime()-d.getTime())+"ms");}this.benchmark=benchmark;function log(s){if(typeof console!="undefined"&&typeof console.debug!="undefined"){console.log(s);}else{alert(s);}}function buildParserCache(table,$headers){if(table.config.debug){var parsersDebug="";}var rows=table.tBodies[0].rows;if(table.tBodies[0].rows[0]){var list=[],cells=rows[0].cells,l=cells.length;for(var i=0;i<l;i++){var p=false;if($.metadata&&($($headers[i]).metadata()&&$($headers[i]).metadata().sorter)){p=getParserById($($headers[i]).metadata().sorter);}else if((table.config.headers[i]&&table.config.headers[i].sorter)){p=getParserById(table.config.headers[i].sorter);}if(!p){p=detectParserForColumn(table,cells[i]);}if(table.config.debug){parsersDebug+="column:"+i+" parser:"+p.id+"\n";}list.push(p);}}if(table.config.debug){log(parsersDebug);}return list;};function detectParserForColumn(table,node){var l=parsers.length;for(var i=1;i<l;i++){if(parsers[i].is($.trim(getElementText(table.config,node)),table,node)){return parsers[i];}}return parsers[0];}function getParserById(name){var l=parsers.length;for(var i=0;i<l;i++){if(parsers[i].id.toLowerCase()==name.toLowerCase()){return parsers[i];}}return false;}function buildCache(table){if(table.config.debug){var cacheTime=new Date();}var totalRows=(table.tBodies[0]&&table.tBodies[0].rows.length)||0,totalCells=(table.tBodies[0].rows[0]&&table.tBodies[0].rows[0].cells.length)||0,parsers=table.config.parsers,cache={row:[],normalized:[]};for(var i=0;i<totalRows;++i){var 
-c=table.tBodies[0].rows[i],cols=[];cache.row.push($(c));for(var j=0;j<totalCells;++j){cols.push(parsers[j].format(getElementText(table.config,c.cells[j]),table,c.cells[j]));}cols.push(i);cache.normalized.push(cols);cols=null;};if(table.config.debug){benchmark("Building cache for "+totalRows+" rows:",cacheTime);}return cache;};function getElementText(config,node){if(!node)return"";var t="";if(config.textExtraction=="simple"){if(node.childNodes[0]&&node.childNodes[0].hasChildNodes()){t=node.childNodes[0].innerHTML;}else{t=node.innerHTML;}}else{if(typeof(config.textExtraction)=="function"){t=config.textExtraction(node);}else{t=$(node).text();}}return t;}function appendToTable(table,cache){if(table.config.debug){var appendTime=new Date()}var c=cache,r=c.row,n=c.normalized,totalRows=n.length,checkCell=(n[0].length-1),tableBody=$(table.tBodies[0]),rows=[];for(var i=0;i<totalRows;i++){rows.push(r[n[i][checkCell]]);if(!table.config.appender){var o=r[n[i][checkCell]];var l=o.length;for(var j=0;j<l;j++){tableBody[0].appendChild(o[j]);}}}if(table.config.appender){table.config.appender(table,rows);}rows=null;if(table.config.debug){benchmark("Rebuilt table:",appendTime);}applyWidget(table);setTimeout(function(){$(table).trigger("sortEnd");},0);};function buildHeaders(table){if(table.config.debug){var time=new Date();}var meta=($.metadata)?true:false,tableHeadersRows=[];for(var i=0;i<table.tHead.rows.length;i++){tableHeadersRows[i]=0;};$tableHeaders=$("thead th",table);$tableHeaders.each(function(index){this.count=0;this.column=index;this.order=formatSortingOrder(table.config.sortInitialOrder);if(checkHeaderMetadata(this)||checkHeaderOptions(table,index))this.sortDisabled=true;if(!this.sortDisabled){$(this).addClass(table.config.cssHeader);}table.config.headerList[index]=this;});if(table.config.debug){benchmark("Built headers:",time);log($tableHeaders);}return $tableHeaders;};function checkCellColSpan(table,rows,row){var arr=[],r=table.tHead.rows,c=r[row].cells;for(var 
-i=0;i<c.length;i++){var cell=c[i];if(cell.colSpan>1){arr=arr.concat(checkCellColSpan(table,headerArr,row++));}else{if(table.tHead.length==1||(cell.rowSpan>1||!r[row+1])){arr.push(cell);}}}return arr;};function checkHeaderMetadata(cell){if(($.metadata)&&($(cell).metadata().sorter===false)){return true;};return false;}function checkHeaderOptions(table,i){if((table.config.headers[i])&&(table.config.headers[i].sorter===false)){return true;};return false;}function applyWidget(table){var c=table.config.widgets;var l=c.length;for(var i=0;i<l;i++){getWidgetById(c[i]).format(table);}}function getWidgetById(name){var l=widgets.length;for(var i=0;i<l;i++){if(widgets[i].id.toLowerCase()==name.toLowerCase()){return widgets[i];}}};function formatSortingOrder(v){if(typeof(v)!="Number"){i=(v.toLowerCase()=="desc")?1:0;}else{i=(v==(0||1))?v:0;}return i;}function isValueInArray(v,a){var l=a.length;for(var i=0;i<l;i++){if(a[i][0]==v){return true;}}return false;}function setHeadersCss(table,$headers,list,css){$headers.removeClass(css[0]).removeClass(css[1]);var h=[];$headers.each(function(offset){if(!this.sortDisabled){h[this.column]=$(this);}});var l=list.length;for(var i=0;i<l;i++){h[list[i][0]].addClass(css[list[i][1]]);}}function fixColumnWidth(table,$headers){var c=table.config;if(c.widthFixed){var colgroup=$('<colgroup>');$("tr:first td",table.tBodies[0]).each(function(){colgroup.append($('<col>').css('width',$(this).width()));});$(table).prepend(colgroup);};}function updateHeaderSortCount(table,sortList){var c=table.config,l=sortList.length;for(var i=0;i<l;i++){var s=sortList[i],o=c.headerList[s[0]];o.count=s[1];o.count++;}}function multisort(table,sortList,cache){if(table.config.debug){var sortTime=new Date();}var dynamicExp="var sortWrapper = function(a,b) {",l=sortList.length;for(var i=0;i<l;i++){var c=sortList[i][0];var order=sortList[i][1];var 
-s=(getCachedSortType(table.config.parsers,c)=="text")?((order==0)?"sortText":"sortTextDesc"):((order==0)?"sortNumeric":"sortNumericDesc");var e="e"+i;dynamicExp+="var "+e+" = "+s+"(a["+c+"],b["+c+"]); ";dynamicExp+="if("+e+") { return "+e+"; } ";dynamicExp+="else { ";}var orgOrderCol=cache.normalized[0].length-1;dynamicExp+="return a["+orgOrderCol+"]-b["+orgOrderCol+"];";for(var i=0;i<l;i++){dynamicExp+="}; ";}dynamicExp+="return 0; ";dynamicExp+="}; ";eval(dynamicExp);cache.normalized.sort(sortWrapper);if(table.config.debug){benchmark("Sorting on "+sortList.toString()+" and dir "+order+" time:",sortTime);}return cache;};function sortText(a,b){return((a<b)?-1:((a>b)?1:0));};function sortTextDesc(a,b){return((b<a)?-1:((b>a)?1:0));};function sortNumeric(a,b){return a-b;};function sortNumericDesc(a,b){return b-a;};function getCachedSortType(parsers,i){return parsers[i].type;};this.construct=function(settings){return this.each(function(){if(!this.tHead||!this.tBodies)return;var $this,$document,$headers,cache,config,shiftDown=0,sortOrder;this.config={};config=$.extend(this.config,$.tablesorter.defaults,settings);$this=$(this);$headers=buildHeaders(this);this.config.parsers=buildParserCache(this,$headers);cache=buildCache(this);var sortCSS=[config.cssDesc,config.cssAsc];fixColumnWidth(this);$headers.click(function(e){$this.trigger("sortStart");var totalRows=($this[0].tBodies[0]&&$this[0].tBodies[0].rows.length)||0;if(!this.sortDisabled&&totalRows>0){var $cell=$(this);var i=this.column;this.order=this.count++%2;if(!e[config.sortMultiSortKey]){config.sortList=[];if(config.sortForce!=null){var a=config.sortForce;for(var j=0;j<a.length;j++){if(a[j][0]!=i){config.sortList.push(a[j]);}}}config.sortList.push([i,this.order]);}else{if(isValueInArray(i,config.sortList)){for(var j=0;j<config.sortList.length;j++){var 
-s=config.sortList[j],o=config.headerList[s[0]];if(s[0]==i){o.count=s[1];o.count++;s[1]=o.count%2;}}}else{config.sortList.push([i,this.order]);}};setTimeout(function(){setHeadersCss($this[0],$headers,config.sortList,sortCSS);appendToTable($this[0],multisort($this[0],config.sortList,cache));},1);return false;}}).mousedown(function(){if(config.cancelSelection){this.onselectstart=function(){return false};return false;}});$this.bind("update",function(){this.config.parsers=buildParserCache(this,$headers);cache=buildCache(this);}).bind("sorton",function(e,list){$(this).trigger("sortStart");config.sortList=list;var sortList=config.sortList;updateHeaderSortCount(this,sortList);setHeadersCss(this,$headers,sortList,sortCSS);appendToTable(this,multisort(this,sortList,cache));}).bind("appendCache",function(){appendToTable(this,cache);}).bind("applyWidgetId",function(e,id){getWidgetById(id).format(this);}).bind("applyWidgets",function(){applyWidget(this);});if($.metadata&&($(this).metadata()&&$(this).metadata().sortlist)){config.sortList=$(this).metadata().sortlist;}if(config.sortList.length>0){$this.trigger("sorton",[config.sortList]);}applyWidget(this);});};this.addParser=function(parser){var l=parsers.length,a=true;for(var i=0;i<l;i++){if(parsers[i].id.toLowerCase()==parser.id.toLowerCase()){a=false;}}if(a){parsers.push(parser);};};this.addWidget=function(widget){widgets.push(widget);};this.formatFloat=function(s){var i=parseFloat(s);return(isNaN(i))?0:i;};this.formatInt=function(s){var i=parseInt(s);return(isNaN(i))?0:i;};this.isDigit=function(s,config){var DECIMAL='\\'+config.decimal;var exp='/(^[+]?0('+DECIMAL+'0+)?$)|(^([-+]?[1-9][0-9]*)$)|(^([-+]?((0?|[1-9][0-9]*)'+DECIMAL+'(0*[1-9][0-9]*)))$)|(^[-+]?[1-9]+[0-9]*'+DECIMAL+'0+$)/';return RegExp(exp).test($.trim(s));};this.clearTableBody=function(table){if($.browser.msie){function 
-empty(){while(this.firstChild)this.removeChild(this.firstChild);}empty.apply(table.tBodies[0]);}else{table.tBodies[0].innerHTML="";}};}});$.fn.extend({tablesorter:$.tablesorter.construct});var ts=$.tablesorter;ts.addParser({id:"text",is:function(s){return true;},format:function(s){return $.trim(s.toLowerCase());},type:"text"});ts.addParser({id:"digit",is:function(s,table){var c=table.config;return $.tablesorter.isDigit(s,c);},format:function(s){return $.tablesorter.formatFloat(s);},type:"numeric"});ts.addParser({id:"currency",is:function(s){return/^[£$€?.]/.test(s);},format:function(s){return $.tablesorter.formatFloat(s.replace(new RegExp(/[^0-9.]/g),""));},type:"numeric"});ts.addParser({id:"ipAddress",is:function(s){return/^\d{2,3}[\.]\d{2,3}[\.]\d{2,3}[\.]\d{2,3}$/.test(s);},format:function(s){var a=s.split("."),r="",l=a.length;for(var i=0;i<l;i++){var item=a[i];if(item.length==2){r+="0"+item;}else{r+=item;}}return $.tablesorter.formatFloat(r);},type:"numeric"});ts.addParser({id:"url",is:function(s){return/^(https?|ftp|file):\/\/$/.test(s);},format:function(s){return jQuery.trim(s.replace(new RegExp(/(https?|ftp|file):\/\//),''));},type:"text"});ts.addParser({id:"isoDate",is:function(s){return/^\d{4}[\/-]\d{1,2}[\/-]\d{1,2}$/.test(s);},format:function(s){return $.tablesorter.formatFloat((s!="")?new Date(s.replace(new RegExp(/-/g),"/")).getTime():"0");},type:"numeric"});ts.addParser({id:"percent",is:function(s){return/\%$/.test($.trim(s));},format:function(s){return $.tablesorter.formatFloat(s.replace(new RegExp(/%/g),""));},type:"numeric"});ts.addParser({id:"usLongDate",is:function(s){return s.match(new RegExp(/^[A-Za-z]{3,10}\.? 
[0-9]{1,2}, ([0-9]{4}|'?[0-9]{2}) (([0-2]?[0-9]:[0-5][0-9])|([0-1]?[0-9]:[0-5][0-9]\s(AM|PM)))$/));},format:function(s){return $.tablesorter.formatFloat(new Date(s).getTime());},type:"numeric"});ts.addParser({id:"shortDate",is:function(s){return/\d{1,2}[\/\-]\d{1,2}[\/\-]\d{2,4}/.test(s);},format:function(s,table){var c=table.config;s=s.replace(/\-/g,"/");if(c.dateFormat=="us"){s=s.replace(/(\d{1,2})[\/\-](\d{1,2})[\/\-](\d{4})/,"$3/$1/$2");}else if(c.dateFormat=="uk"){s=s.replace(/(\d{1,2})[\/\-](\d{1,2})[\/\-](\d{4})/,"$3/$2/$1");}else if(c.dateFormat=="dd/mm/yy"||c.dateFormat=="dd-mm-yy"){s=s.replace(/(\d{1,2})[\/\-](\d{1,2})[\/\-](\d{2})/,"$1/$2/$3");}return $.tablesorter.formatFloat(new Date(s).getTime());},type:"numeric"});ts.addParser({id:"time",is:function(s){return/^(([0-2]?[0-9]:[0-5][0-9])|([0-1]?[0-9]:[0-5][0-9]\s(am|pm)))$/.test(s);},format:function(s){return $.tablesorter.formatFloat(new Date("2000/01/01 "+s).getTime());},type:"numeric"});ts.addParser({id:"metadata",is:function(s){return false;},format:function(s,table,cell){var c=table.config,p=(!c.parserMetadataName)?'sortValue':c.parserMetadataName;return $(cell).metadata()[p];},type:"numeric"});ts.addWidget({id:"zebra",format:function(table){if(table.config.debug){var time=new Date();}$("tr:visible",table.tBodies[0]).filter(':even').removeClass(table.config.widgetZebra.css[1]).addClass(table.config.widgetZebra.css[0]).end().filter(':odd').removeClass(table.config.widgetZebra.css[0]).addClass(table.config.widgetZebra.css[1]);if(table.config.debug){$.tablesorter.benchmark("Applying Zebra widget",time);}}});})(jQuery);
\ No newline at end of file
+++ /dev/null
-(dp1
-S'files'
-p2
-(dp3
-S'pygments_lexers_textedit'
-p4
-(dp5
-S'index'
-p6
-(dp7
-S'html_filename'
-p8
-S'pygments_lexers_textedit.html'
-p9
-sS'name'
-p10
-S'pygments.lexers.textedit'
-p11
-sS'nums'
-p12
-ccopy_reg
-_reconstructor
-p13
-(ccoverage.results
-Numbers
-p14
-c__builtin__
-object
-p15
-NtRp16
-(dp17
-S'n_files'
-p18
-I1
-sS'n_branches'
-p19
-I0
-sS'n_statements'
-p20
-I44
-sS'n_excluded'
-p21
-I0
-sS'n_partial_branches'
-p22
-I0
-sS'n_missing'
-p23
-I1
-sS'n_missing_branches'
-p24
-I0
-sbssS'hash'
-p25
-S'\xac\xd5/\x12\x80xh\xce\xb7\x87\xb9\xa5~g\x95&'
-p26
-ssS'pygments_lexers_robotframework'
-p27
-(dp28
-g6
-(dp29
-g8
-S'pygments_lexers_robotframework.html'
-p30
-sg10
-S'pygments.lexers.robotframework'
-p31
-sg12
-g13
-(g14
-g15
-NtRp32
-(dp33
-g18
-I1
-sg19
-I0
-sg20
-I395
-sg21
-I0
-sg22
-I0
-sg23
-I40
-sg24
-I0
-sbssg25
-S'\xb5\x90\x07_\x96m\xab\x02\xc2\xd5\xbdB\x16@\xd5\xf7'
-p34
-ssS'pygments_lexers_urbi'
-p35
-(dp36
-g6
-(dp37
-g8
-S'pygments_lexers_urbi.html'
-p38
-sg10
-S'pygments.lexers.urbi'
-p39
-sg12
-g13
-(g14
-g15
-NtRp40
-(dp41
-g18
-I1
-sg19
-I0
-sg20
-I27
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S"R\xd2\xcbD\xbe\x19}-\xcb'\xb5\x18\xf0\xe5\xcb\xe1"
-p42
-ssS'pygments_filters'
-p43
-(dp44
-g6
-(dp45
-S'html_filename'
-p46
-S'pygments_filters.html'
-p47
-sS'name'
-p48
-S'pygments.filters'
-p49
-sS'nums'
-p50
-g13
-(g14
-g15
-NtRp51
-(dp52
-g18
-I1
-sg19
-I0
-sg20
-I159
-sg21
-I0
-sg22
-I0
-sg23
-I124
-sg24
-I0
-sbssg25
-S'\xd7\xb0<O\xca,p)#)\xfe\xd1\x8a\xfb\xfc\x04'
-p53
-ssS'pygments_lexers_objective'
-p54
-(dp55
-g6
-(dp56
-g8
-S'pygments_lexers_objective.html'
-p57
-sg10
-S'pygments.lexers.objective'
-p58
-sg12
-g13
-(g14
-g15
-NtRp59
-(dp60
-g18
-I1
-sg19
-I0
-sg20
-I66
-sg21
-I0
-sg22
-I0
-sg23
-I5
-sg24
-I0
-sbssg25
-S'\xa8\x8f\xfd=W5\xe3\xbd\x02=\x9akg\x9eZR'
-p61
-ssS'pygments_lexers__asy_builtins'
-p62
-(dp63
-g6
-(dp64
-g8
-S'pygments_lexers__asy_builtins.html'
-p65
-sg10
-S'pygments.lexers._asy_builtins'
-p66
-sg12
-g13
-(g14
-g15
-NtRp67
-(dp68
-g18
-I1
-sg19
-I0
-sg20
-I3
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S"\x84\x1dN\xc7wi\xe2cX\xe4\x1dA\x97\xb0'\xaf"
-p69
-ssS'pygments_styles_pastie'
-p70
-(dp71
-g6
-(dp72
-g8
-S'pygments_styles_pastie.html'
-p73
-sg10
-S'pygments.styles.pastie'
-p74
-sg12
-g13
-(g14
-g15
-NtRp75
-(dp76
-g18
-I1
-sg19
-I0
-sg20
-I6
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'K\x8c\x81Y\xe1\xdb\x04n\x0e\x8a\x158|\x0e\x8f\x86'
-p77
-ssS'pygments_lexers_graph'
-p78
-(dp79
-g6
-(dp80
-g8
-S'pygments_lexers_graph.html'
-p81
-sg10
-S'pygments.lexers.graph'
-p82
-sg12
-g13
-(g14
-g15
-NtRp83
-(dp84
-g18
-I1
-sg19
-I0
-sg20
-I11
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\xd3\xe9\x8f\x1c\xd2\xaf\x80\xe9T\xf8\xb4\xe3kPVy'
-p85
-ssS'pygments_lexers_apl'
-p86
-(dp87
-g6
-(dp88
-g8
-S'pygments_lexers_apl.html'
-p89
-sg10
-S'pygments.lexers.apl'
-p90
-sg12
-g13
-(g14
-g15
-NtRp91
-(dp92
-g18
-I1
-sg19
-I0
-sg20
-I9
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'^\xc4\xbe8\xde6\x1c\x03@\xdd|{A\xec,/'
-p93
-ssS'pygments_styles'
-p94
-(dp95
-g6
-(dp96
-g46
-S'pygments_styles.html'
-p97
-sg48
-S'pygments.styles'
-p98
-sg50
-g13
-(g14
-g15
-NtRp99
-(dp100
-g18
-I1
-sg19
-I0
-sg20
-I27
-sg21
-I0
-sg22
-I0
-sg23
-I14
-sg24
-I0
-sbssg25
-S'\x03\xf2\x87\xc6\x80E\xb0\x88/\xd9\x87\xbd\x13[7\x12'
-p101
-ssS'pygments_lexers__vim_builtins'
-p102
-(dp103
-g6
-(dp104
-g8
-S'pygments_lexers__vim_builtins.html'
-p105
-sg10
-S'pygments.lexers._vim_builtins'
-p106
-sg12
-g13
-(g14
-g15
-NtRp107
-(dp108
-g18
-I1
-sg19
-I0
-sg20
-I13
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\x03\xabN\\<\x1e\xa9\xc3\xb3\xed\x01\x93`\xf8\t}'
-p109
-ssS'pygments_plugin'
-p110
-(dp111
-g6
-(dp112
-g46
-S'pygments_plugin.html'
-p113
-sg48
-S'pygments.plugin'
-p114
-sg50
-g13
-(g14
-g15
-NtRp115
-(dp116
-g18
-I1
-sg19
-I0
-sg20
-I29
-sg21
-I0
-sg22
-I0
-sg23
-I18
-sg24
-I0
-sbssg25
-S'Xq`P\xb9\x8f#\xef\x03\xac\xed\xda1\xca\xd6r'
-p117
-ssS'pygments_lexers_sql'
-p118
-(dp119
-g6
-(dp120
-g8
-S'pygments_lexers_sql.html'
-p121
-sg10
-S'pygments.lexers.sql'
-p122
-sg12
-g13
-(g14
-g15
-NtRp123
-(dp124
-g18
-I1
-sg19
-I0
-sg20
-I181
-sg21
-I0
-sg22
-I0
-sg23
-I7
-sg24
-I0
-sbssg25
-S'7\xaa\xaf(w\xf5N0\xecO|\x15FL\xac\x16'
-p125
-ssS'pygments_styles_trac'
-p126
-(dp127
-g6
-(dp128
-g8
-S'pygments_styles_trac.html'
-p129
-sg10
-S'pygments.styles.trac'
-p130
-sg12
-g13
-(g14
-g15
-NtRp131
-(dp132
-g18
-I1
-sg19
-I0
-sg20
-I6
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\x9dO\x95-9\x9a\xd1\xfe|Ud"@\xad\x0fn'
-p133
-ssS'pygments_lexers_matlab'
-p134
-(dp135
-g6
-(dp136
-g8
-S'pygments_lexers_matlab.html'
-p137
-sg10
-S'pygments.lexers.matlab'
-p138
-sg12
-g13
-(g14
-g15
-NtRp139
-(dp140
-g18
-I1
-sg19
-I0
-sg20
-I67
-sg21
-I0
-sg22
-I0
-sg23
-I2
-sg24
-I0
-sbssg25
-S'\x08\xb9\xf1\xaf\xe6\xec%6\xb4\xdd\xed\n\x84\x9f\x89\xf6'
-p141
-ssS'pygments_token'
-p142
-(dp143
-g6
-(dp144
-g46
-S'pygments_token.html'
-p145
-sg48
-S'pygments.token'
-p146
-sg50
-g13
-(g14
-g15
-NtRp147
-(dp148
-g18
-I1
-sg19
-I0
-sg20
-I55
-sg21
-I0
-sg22
-I0
-sg23
-I11
-sg24
-I0
-sbssg25
-S'0\xa2N\xae\x11{\x93za\xffv\xfc.\x95\x03\xc0'
-p149
-ssS'pygments_lexers_templates'
-p150
-(dp151
-g6
-(dp152
-g8
-S'pygments_lexers_templates.html'
-p153
-sg10
-S'pygments.lexers.templates'
-p154
-sg12
-g13
-(g14
-g15
-NtRp155
-(dp156
-g18
-I1
-sg19
-I0
-sg20
-I662
-sg21
-I0
-sg22
-I0
-sg23
-I43
-sg24
-I0
-sbssg25
-S'#==SU)\x17\xad\x13wn\xe5\x9b\x10\xfe\xc3'
-p157
-ssS'pygments_lexers_business'
-p158
-(dp159
-g6
-(dp160
-g8
-S'pygments_lexers_business.html'
-p161
-sg10
-S'pygments.lexers.business'
-p162
-sg12
-g13
-(g14
-g15
-NtRp163
-(dp164
-g18
-I1
-sg19
-I0
-sg20
-I49
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'?\x8b\xd1\xdfr\xb9\xfe\x1cV/6!\x8c\xdbG['
-p165
-ssS'pygments_lexers__sourcemod_builtins'
-p166
-(dp167
-g6
-(dp168
-g8
-S'pygments_lexers__sourcemod_builtins.html'
-p169
-sg10
-S'pygments.lexers._sourcemod_builtins'
-p170
-sg12
-g13
-(g14
-g15
-NtRp171
-(dp172
-g18
-I1
-sg19
-I0
-sg20
-I3
-sg21
-I51
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S' Kf\xc8\x84\xfb\xfc\xf3\xfc}\x13\x90\xb0\xe6c\xa5'
-p173
-ssS'pygments_lexers_automation'
-p174
-(dp175
-g6
-(dp176
-g8
-S'pygments_lexers_automation.html'
-p177
-sg10
-S'pygments.lexers.automation'
-p178
-sg12
-g13
-(g14
-g15
-NtRp179
-(dp180
-g18
-I1
-sg19
-I0
-sg20
-I19
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\xa5w\xad\x1d\x90\x11\xa1z\xfa\xa6J\xc1\x1c\xa9\xf6v'
-p181
-ssS'pygments_lexers_int_fiction'
-p182
-(dp183
-g6
-(dp184
-g8
-S'pygments_lexers_int_fiction.html'
-p185
-sg10
-S'pygments.lexers.int_fiction'
-p186
-sg12
-g13
-(g14
-g15
-NtRp187
-(dp188
-g18
-I1
-sg19
-I0
-sg20
-I132
-sg21
-I0
-sg22
-I0
-sg23
-I3
-sg24
-I0
-sbssg25
-S'Y\x0c\xfd\x86TX$x\x08;R\x05_\x82\xaf\xc2'
-p189
-ssS'pygments_lexers_rdf'
-p190
-(dp191
-g6
-(dp192
-g8
-S'pygments_lexers_rdf.html'
-p193
-sg10
-S'pygments.lexers.rdf'
-p194
-sg12
-g13
-(g14
-g15
-NtRp195
-(dp196
-g18
-I1
-sg19
-I0
-sg20
-I12
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\xc0\x18\x80\xd1\xcap^=\xaf*\xb4vOfD\xda'
-p197
-ssS'pygments_lexers_resource'
-p198
-(dp199
-g6
-(dp200
-g8
-S'pygments_lexers_resource.html'
-p201
-sg10
-S'pygments.lexers.resource'
-p202
-sg12
-g13
-(g14
-g15
-NtRp203
-(dp204
-g18
-I1
-sg19
-I0
-sg20
-I14
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\xa2\xac9\x90\xeb-~<e\xbb\x88\xed\x9e\xe9\xbd\xda'
-p205
-ssS'pygments_lexers__cl_builtins'
-p206
-(dp207
-g6
-(dp208
-g8
-S'pygments_lexers__cl_builtins.html'
-p209
-sg10
-S'pygments.lexers._cl_builtins'
-p210
-sg12
-g13
-(g14
-g15
-NtRp211
-(dp212
-g18
-I1
-sg19
-I0
-sg20
-I8
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'd\x93\x92\xd9~p\t\xc2 \xc7_\xe8\x18\x8c\xb0)'
-p213
-ssS'pygments_styles_emacs'
-p214
-(dp215
-g6
-(dp216
-g8
-S'pygments_styles_emacs.html'
-p217
-sg10
-S'pygments.styles.emacs'
-p218
-sg12
-g13
-(g14
-g15
-NtRp219
-(dp220
-g18
-I1
-sg19
-I0
-sg20
-I7
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\x16\x9c\xb5\xc4\xc2\xaf\x96\xd4\xa4\xd6\xc7*Am\x07\xee'
-p221
-ssS'pygments_lexers_snobol'
-p222
-(dp223
-g6
-(dp224
-g8
-S'pygments_lexers_snobol.html'
-p225
-sg10
-S'pygments.lexers.snobol'
-p226
-sg12
-g13
-(g14
-g15
-NtRp227
-(dp228
-g18
-I1
-sg19
-I0
-sg20
-I10
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\x10\x9fA\xd0\xd1\x9d\xc6]Cl\t\x8c\xa8\t\xb1\xcb'
-p229
-ssS'pygments_formatters_html'
-p230
-(dp231
-g6
-(dp232
-g8
-S'pygments_formatters_html.html'
-p233
-sg10
-S'pygments.formatters.html'
-p234
-sg12
-g13
-(g14
-g15
-NtRp235
-(dp236
-g18
-I1
-sg19
-I0
-sg20
-I342
-sg21
-I0
-sg22
-I0
-sg23
-I29
-sg24
-I0
-sbssg25
-S'\xc0Y7\xce \xa1\xe5\xaf\xff`\x953\xd7\x94\x08\x98'
-p237
-ssS'pygments_filter'
-p238
-(dp239
-g6
-(dp240
-g46
-S'pygments_filter.html'
-p241
-sg48
-S'pygments.filter'
-p242
-sg50
-g13
-(g14
-g15
-NtRp243
-(dp244
-g18
-I1
-sg19
-I0
-sg20
-I24
-sg21
-I0
-sg22
-I0
-sg23
-I11
-sg24
-I0
-sbssg25
-S'\xaa\xba.\x1b\xeb\x98\xae\xc6F\xc8\xee\xfd\xfd\xcf\xaf\xa6'
-p245
-ssS'pygments_formatters_terminal'
-p246
-(dp247
-g6
-(dp248
-g8
-S'pygments_formatters_terminal.html'
-p249
-sg10
-S'pygments.formatters.terminal'
-p250
-sg12
-g13
-(g14
-g15
-NtRp251
-(dp252
-g18
-I1
-sg19
-I0
-sg20
-I65
-sg21
-I0
-sg22
-I0
-sg23
-I27
-sg24
-I0
-sbssg25
-S'2\x9e3S\xd5\x8f[\xd8?"\x12^\xe6\r\x19\xf1'
-p253
-ssS'pygments_lexers_pawn'
-p254
-(dp255
-g6
-(dp256
-g8
-S'pygments_lexers_pawn.html'
-p257
-sg10
-S'pygments.lexers.pawn'
-p258
-sg12
-g13
-(g14
-g15
-NtRp259
-(dp260
-g18
-I1
-sg19
-I0
-sg20
-I38
-sg21
-I0
-sg22
-I0
-sg23
-I1
-sg24
-I0
-sbssg25
-S'\x85^\x0fG\x9a\x12\x90\x19T\xd2}\xea\xc0f\x93\x07'
-p261
-ssS'pygments_lexers_c_like'
-p262
-(dp263
-g6
-(dp264
-g8
-S'pygments_lexers_c_like.html'
-p265
-sg10
-S'pygments.lexers.c_like'
-p266
-sg12
-g13
-(g14
-g15
-NtRp267
-(dp268
-g18
-I1
-sg19
-I0
-sg20
-I87
-sg21
-I0
-sg22
-I0
-sg23
-I5
-sg24
-I0
-sbssg25
-S'\x12Q\x8f\xdd\xdcOL\xbfR\x8d/\xb9\x18\xc9\x8a?'
-p269
-ssS'pygments_styles_perldoc'
-p270
-(dp271
-g6
-(dp272
-g8
-S'pygments_styles_perldoc.html'
-p273
-sg10
-S'pygments.styles.perldoc'
-p274
-sg12
-g13
-(g14
-g15
-NtRp275
-(dp276
-g18
-I1
-sg19
-I0
-sg20
-I7
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\x05$\xff5\xa2\xa8~\x85\xf4\xa9\xea\xa4\x1c\x1aB\x89'
-p277
-ssS'pygments_styles_autumn'
-p278
-(dp279
-g6
-(dp280
-g8
-S'pygments_styles_autumn.html'
-p281
-sg10
-S'pygments.styles.autumn'
-p282
-sg12
-g13
-(g14
-g15
-NtRp283
-(dp284
-g18
-I1
-sg19
-I0
-sg20
-I6
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'p\x8c\xa9\xb7\xdaq5:~)\xc3\xd9e\xee/\xf9'
-p285
-ssS'pygments_formatters'
-p286
-(dp287
-g6
-(dp288
-g46
-S'pygments_formatters.html'
-p289
-sg48
-S'pygments.formatters'
-p290
-sg50
-g13
-(g14
-g15
-NtRp291
-(dp292
-g18
-I1
-sg19
-I0
-sg20
-I70
-sg21
-I0
-sg22
-I0
-sg23
-I35
-sg24
-I0
-sbssg25
-S'E\xb3z,?1A7\xf0\xea\x10\x98\x9aD|\xec'
-p293
-ssS'pygments_lexers_dylan'
-p294
-(dp295
-g6
-(dp296
-g8
-S'pygments_lexers_dylan.html'
-p297
-sg10
-S'pygments.lexers.dylan'
-p298
-sg12
-g13
-(g14
-g15
-NtRp299
-(dp300
-g18
-I1
-sg19
-I0
-sg20
-I68
-sg21
-I0
-sg22
-I0
-sg23
-I2
-sg24
-I0
-sbssg25
-S'U\xd8"\x92\xad\x9dV\x89H\xb4\x17P\xd5\xf3\xac\x0b'
-p301
-ssS'pygments_lexers_web'
-p302
-(dp303
-g6
-(dp304
-g8
-S'pygments_lexers_web.html'
-p305
-sg10
-S'pygments.lexers.web'
-p306
-sg12
-g13
-(g14
-g15
-NtRp307
-(dp308
-g18
-I1
-sg19
-I0
-sg20
-I10
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'T\xf6\x86\xf95J\xc9\x85\xdc\x06\x1f\xd7\xc4<\x10\x08'
-p309
-ssS'pygments_lexers__lua_builtins'
-p310
-(dp311
-g6
-(dp312
-g8
-S'pygments_lexers__lua_builtins.html'
-p313
-sg10
-S'pygments.lexers._lua_builtins'
-p314
-sg12
-g13
-(g14
-g15
-NtRp315
-(dp316
-g18
-I1
-sg19
-I0
-sg20
-I3
-sg21
-I73
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'KO_\xaf!k_D\t\xe9-\xd6y.\xd3\x82'
-p317
-ssS'pygments_lexers_data'
-p318
-(dp319
-g6
-(dp320
-g8
-S'pygments_lexers_data.html'
-p321
-sg10
-S'pygments.lexers.data'
-p322
-sg12
-g13
-(g14
-g15
-NtRp323
-(dp324
-g18
-I1
-sg19
-I0
-sg20
-I141
-sg21
-I0
-sg22
-I0
-sg23
-I18
-sg24
-I0
-sbssg25
-S'a\xec\xd0\x87\x8a\xc6\n\xbf\xdd\xeb\xdd>\x07\xdd\x91\xb4'
-p325
-ssS'pygments_lexers_haxe'
-p326
-(dp327
-g6
-(dp328
-g8
-S'pygments_lexers_haxe.html'
-p329
-sg10
-S'pygments.lexers.haxe'
-p330
-sg12
-g13
-(g14
-g15
-NtRp331
-(dp332
-g18
-I1
-sg19
-I0
-sg20
-I42
-sg21
-I0
-sg22
-I0
-sg23
-I1
-sg24
-I0
-sbssg25
-S'\x10\x1a\xfcG\nT(Z\xfb\x1bs\x1f:\xbdB\x17'
-p333
-ssS'pygments_lexers_factor'
-p334
-(dp335
-g6
-(dp336
-g8
-S'pygments_lexers_factor.html'
-p337
-sg10
-S'pygments.lexers.factor'
-p338
-sg12
-g13
-(g14
-g15
-NtRp339
-(dp340
-g18
-I1
-sg19
-I0
-sg20
-I23
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'{\xa5`lW\x02\xca\xaf\x8b\x8e\xf8\xfe\xa2@\x87G'
-p341
-ssS'pygments_formatters_bbcode'
-p342
-(dp343
-g6
-(dp344
-g8
-S'pygments_formatters_bbcode.html'
-p345
-sg10
-S'pygments.formatters.bbcode'
-p346
-sg12
-g13
-(g14
-g15
-NtRp347
-(dp348
-g18
-I1
-sg19
-I0
-sg20
-I56
-sg21
-I0
-sg22
-I0
-sg23
-I8
-sg24
-I0
-sbssg25
-S'\xdc\r\xa1\xf7-tGa\xaf\x82\xa1~\x18\xb5q\x14'
-p349
-ssS'pygments_styles_fruity'
-p350
-(dp351
-g6
-(dp352
-g8
-S'pygments_styles_fruity.html'
-p353
-sg10
-S'pygments.styles.fruity'
-p354
-sg12
-g13
-(g14
-g15
-NtRp355
-(dp356
-g18
-I1
-sg19
-I0
-sg20
-I7
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S"\xa9'\xdf\xb5\x92\xeeI\r\xd8-Ut\xd0m\xa7\xd7"
-p357
-ssS'pygments_lexers_dotnet'
-p358
-(dp359
-g6
-(dp360
-g8
-S'pygments_lexers_dotnet.html'
-p361
-sg10
-S'pygments.lexers.dotnet'
-p362
-sg12
-g13
-(g14
-g15
-NtRp363
-(dp364
-g18
-I1
-sg19
-I0
-sg20
-I103
-sg21
-I0
-sg22
-I0
-sg23
-I5
-sg24
-I0
-sbssg25
-S"\xee\xed\x1b\x10'\x9c2W\xfd\x11*\xb4M\xf8\xacp"
-p365
-ssS'pygments_lexers__stan_builtins'
-p366
-(dp367
-g6
-(dp368
-g8
-S'pygments_lexers__stan_builtins.html'
-p369
-sg10
-S'pygments.lexers._stan_builtins'
-p370
-sg12
-g13
-(g14
-g15
-NtRp371
-(dp372
-g18
-I1
-sg19
-I0
-sg20
-I6
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S"\xe9\xf0^3>/O\xe8\x95\xd6R'\x84\x9b%R"
-p373
-ssS'pygments_lexers__scilab_builtins'
-p374
-(dp375
-g6
-(dp376
-g8
-S'pygments_lexers__scilab_builtins.html'
-p377
-sg10
-S'pygments.lexers._scilab_builtins'
-p378
-sg12
-g13
-(g14
-g15
-NtRp379
-(dp380
-g18
-I1
-sg19
-I0
-sg20
-I5
-sg21
-I28
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S"x'N\x91L\\\xb0-\x1es\xae\xa3*.\xf7\x96"
-p381
-ssS'pygments_lexers_modeling'
-p382
-(dp383
-g6
-(dp384
-g8
-S'pygments_lexers_modeling.html'
-p385
-sg10
-S'pygments.lexers.modeling'
-p386
-sg12
-g13
-(g14
-g15
-NtRp387
-(dp388
-g18
-I1
-sg19
-I0
-sg20
-I51
-sg21
-I0
-sg22
-I0
-sg23
-I1
-sg24
-I0
-sbssg25
-S'\x05Rw\x8c\xbb\xe9\x01\xfe6^H\x83F\xedS\xf4'
-p389
-ssS'pygments_lexers_css'
-p390
-(dp391
-g6
-(dp392
-g8
-S'pygments_lexers_css.html'
-p393
-sg10
-S'pygments.lexers.css'
-p394
-sg12
-g13
-(g14
-g15
-NtRp395
-(dp396
-g18
-I1
-sg19
-I0
-sg20
-I55
-sg21
-I0
-sg22
-I0
-sg23
-I1
-sg24
-I0
-sbssg25
-S'\xdf\xcd\xed&2\x176v\xdd\x07\xca\t\xfe)\x08\xd6'
-p397
-ssS'pygments_formatter'
-p398
-(dp399
-g6
-(dp400
-g46
-S'pygments_formatter.html'
-p401
-sg48
-S'pygments.formatter'
-p402
-sg50
-g13
-(g14
-g15
-NtRp403
-(dp404
-g18
-I1
-sg19
-I0
-sg20
-I29
-sg21
-I0
-sg22
-I0
-sg23
-I3
-sg24
-I0
-sbssg25
-S'Q\n7\xa3DZ\x80{\x8c&\xeau\xec\xa10?'
-p405
-ssS'pygments_lexers_nimrod'
-p406
-(dp407
-g6
-(dp408
-g8
-S'pygments_lexers_nimrod.html'
-p409
-sg10
-S'pygments.lexers.nimrod'
-p410
-sg12
-g13
-(g14
-g15
-NtRp411
-(dp412
-g18
-I1
-sg19
-I0
-sg20
-I25
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\x07|\t\tcl\xe1\x14\xac\xb8a\xb4;\xea\x9bX'
-p413
-ssS'pygments_lexers_rust'
-p414
-(dp415
-g6
-(dp416
-g8
-S'pygments_lexers_rust.html'
-p417
-sg10
-S'pygments.lexers.rust'
-p418
-sg12
-g13
-(g14
-g15
-NtRp419
-(dp420
-g18
-I1
-sg19
-I0
-sg20
-I10
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\xbc<^\xbb\x88\\\x18\xbfg\xcc\x9a\x11\x89I8\xc2'
-p421
-ssS'pygments_styles_xcode'
-p422
-(dp423
-g6
-(dp424
-g8
-S'pygments_styles_xcode.html'
-p425
-sg10
-S'pygments.styles.xcode'
-p426
-sg12
-g13
-(g14
-g15
-NtRp427
-(dp428
-g18
-I1
-sg19
-I0
-sg20
-I6
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'K\xa1l0\xaf\x05+\x16#O)u\xed\xbf\xda\x9b'
-p429
-ssS'pygments_lexers_markup'
-p430
-(dp431
-g6
-(dp432
-g8
-S'pygments_lexers_markup.html'
-p433
-sg10
-S'pygments.lexers.markup'
-p434
-sg12
-g13
-(g14
-g15
-NtRp435
-(dp436
-g18
-I1
-sg19
-I0
-sg20
-I130
-sg21
-I0
-sg22
-I0
-sg23
-I9
-sg24
-I0
-sbssg25
-S'\x93<\xadV4\x7fZ\xa4}\x99=\xe9\xd2\xfa\x950'
-p437
-ssS'pygments_styles_vs'
-p438
-(dp439
-g6
-(dp440
-g8
-S'pygments_styles_vs.html'
-p441
-sg10
-S'pygments.styles.vs'
-p442
-sg12
-g13
-(g14
-g15
-NtRp443
-(dp444
-g18
-I1
-sg19
-I0
-sg20
-I7
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'*\xa8\x1dS"\xf2&\x12\xe4"t\xd8!\xb56\xf9'
-p445
-ssS'pygments_lexers_prolog'
-p446
-(dp447
-g6
-(dp448
-g8
-S'pygments_lexers_prolog.html'
-p449
-sg10
-S'pygments.lexers.prolog'
-p450
-sg12
-g13
-(g14
-g15
-NtRp451
-(dp452
-g18
-I1
-sg19
-I0
-sg20
-I30
-sg21
-I0
-sg22
-I0
-sg23
-I3
-sg24
-I0
-sbssg25
-S'G\x99\x18\xdc\x1f\x9a\x1d\xee+\x8c\xbd\xc0\xa8\xcc\xfb\xec'
-p453
-ssS'pygments_modeline'
-p454
-(dp455
-g6
-(dp456
-g46
-S'pygments_modeline.html'
-p457
-sg48
-S'pygments.modeline'
-p458
-sg50
-g13
-(g14
-g15
-NtRp459
-(dp460
-g18
-I1
-sg19
-I0
-sg20
-I19
-sg21
-I0
-sg22
-I0
-sg23
-I13
-sg24
-I0
-sbssg25
-S'\t\x0c\xa0\x84Yn\xd63\xfa\xb2:\x1a\xb4\xe0\x8e8'
-p461
-ssS'pygments_formatters_svg'
-p462
-(dp463
-g6
-(dp464
-g8
-S'pygments_formatters_svg.html'
-p465
-sg10
-S'pygments.formatters.svg'
-p466
-sg12
-g13
-(g14
-g15
-NtRp467
-(dp468
-g18
-I1
-sg19
-I0
-sg20
-I70
-sg21
-I0
-sg22
-I0
-sg23
-I4
-sg24
-I0
-sbssg25
-S'\x04\xbd\xb1p"\xb3\xebeJ*\xe0\xcd<\x94\xddb'
-p469
-ssS'pygments_styles_colorful'
-p470
-(dp471
-g6
-(dp472
-g8
-S'pygments_styles_colorful.html'
-p473
-sg10
-S'pygments.styles.colorful'
-p474
-sg12
-g13
-(g14
-g15
-NtRp475
-(dp476
-g18
-I1
-sg19
-I0
-sg20
-I6
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\xd7\xcf\x17\x94\x10]{BL\xc3\x1f_\x9b7\xf4!'
-p477
-ssS'pygments_scanner'
-p478
-(dp479
-g6
-(dp480
-g8
-S'pygments_scanner.html'
-p481
-sg10
-S'pygments.scanner'
-p482
-sg12
-g13
-(g14
-g15
-NtRp483
-(dp484
-g18
-I1
-sg19
-I0
-sg20
-I41
-sg21
-I0
-sg22
-I0
-sg23
-I3
-sg24
-I0
-sbssg25
-S'\x07\x83\xf0\x96\xf4\x10c\xbd_\xa4\x98\x07\xca\t\xc0k'
-p485
-ssS'pygments_styles_default'
-p486
-(dp487
-g6
-(dp488
-g8
-S'pygments_styles_default.html'
-p489
-sg10
-S'pygments.styles.default'
-p490
-sg12
-g13
-(g14
-g15
-NtRp491
-(dp492
-g18
-I1
-sg19
-I0
-sg20
-I7
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'&i\xcdL\xbfT\xe0Z\xc8D\x0e\x85\x83\xd8\xd46'
-p493
-ssS'pygments_lexers__php_builtins'
-p494
-(dp495
-g6
-(dp496
-g8
-S'pygments_lexers__php_builtins.html'
-p497
-sg10
-S'pygments.lexers._php_builtins'
-p498
-sg12
-g13
-(g14
-g15
-NtRp499
-(dp500
-g18
-I1
-sg19
-I0
-sg20
-I3
-sg21
-I66
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\xcc\x83C\xda\xe4\xeb\x9fe\x94`\x04h\x9b\xed\xbem'
-p501
-ssS'pygments_lexers_graphics'
-p502
-(dp503
-g6
-(dp504
-g8
-S'pygments_lexers_graphics.html'
-p505
-sg10
-S'pygments.lexers.graphics'
-p506
-sg12
-g13
-(g14
-g15
-NtRp507
-(dp508
-g18
-I1
-sg19
-I0
-sg20
-I52
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'VU\xf5\x88\xbb1\xf6\xb90\r\x05\xc0\xde\xa5\x0c\xfe'
-p509
-ssS'pygments_lexers_scripting'
-p510
-(dp511
-g6
-(dp512
-g8
-S'pygments_lexers_scripting.html'
-p513
-sg10
-S'pygments.lexers.scripting'
-p514
-sg12
-g13
-(g14
-g15
-NtRp515
-(dp516
-g18
-I1
-sg19
-I0
-sg20
-I135
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\x03\x9e\x82\xdc\xfa\xcfi\xb8i\x8f\xe60W\xfe\x836'
-p517
-ssS'pygments_lexers_ecl'
-p518
-(dp519
-g6
-(dp520
-g8
-S'pygments_lexers_ecl.html'
-p521
-sg10
-S'pygments.lexers.ecl'
-p522
-sg12
-g13
-(g14
-g15
-NtRp523
-(dp524
-g18
-I1
-sg19
-I0
-sg20
-I12
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\x98\x14\x8d\x11\xbd\xcf=\xd3\x19\xf6K\xa2\x96@#z'
-p525
-ssS'pygments_lexers_algebra'
-p526
-(dp527
-g6
-(dp528
-g8
-S'pygments_lexers_algebra.html'
-p529
-sg10
-S'pygments.lexers.algebra'
-p530
-sg12
-g13
-(g14
-g15
-NtRp531
-(dp532
-g18
-I1
-sg19
-I0
-sg20
-I25
-sg21
-I0
-sg22
-I0
-sg23
-I1
-sg24
-I0
-sbssg25
-S'\x8c\xbc\x06^m\xa9Z\xf4u\x91\xf3\x06O\xc7X\xdf'
-p533
-ssS'pygments_styles_rrt'
-p534
-(dp535
-g6
-(dp536
-g8
-S'pygments_styles_rrt.html'
-p537
-sg10
-S'pygments.styles.rrt'
-p538
-sg12
-g13
-(g14
-g15
-NtRp539
-(dp540
-g18
-I1
-sg19
-I0
-sg20
-I7
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\xc2\x1drQ^\x0b\x8a\xb1\xb7\xa2\x81\x19\x14\x87\xebD'
-p541
-ssS'pygments_styles_paraiso_dark'
-p542
-(dp543
-g6
-(dp544
-g8
-S'pygments_styles_paraiso_dark.html'
-p545
-sg10
-S'pygments.styles.paraiso_dark'
-p546
-sg12
-g13
-(g14
-g15
-NtRp547
-(dp548
-g18
-I1
-sg19
-I0
-sg20
-I22
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\xd4\xd3\x7f\xe8[\xb0\xe8\xc3\xfeF3\x8fl\xbf\xe50'
-p549
-ssS'pygments_lexers_jvm'
-p550
-(dp551
-g6
-(dp552
-g8
-S'pygments_lexers_jvm.html'
-p553
-sg10
-S'pygments.lexers.jvm'
-p554
-sg12
-g13
-(g14
-g15
-NtRp555
-(dp556
-g18
-I1
-sg19
-I0
-sg20
-I150
-sg21
-I0
-sg22
-I0
-sg23
-I4
-sg24
-I0
-sbssg25
-S'`\xa6\xe4\xc9\xe61\xd7\x94\x852\xd8\x85\xb6\xfd\xac\n'
-p557
-ssS'pygments_lexer'
-p558
-(dp559
-g6
-(dp560
-g46
-S'pygments_lexer.html'
-p561
-sg48
-S'pygments.lexer'
-p562
-sg50
-g13
-(g14
-g15
-NtRp563
-(dp564
-g18
-I1
-sg19
-I0
-sg20
-I470
-sg21
-I0
-sg22
-I0
-sg23
-I239
-sg24
-I0
-sbssg25
-S'LR\x13H\x1aj\x1cT!7PbU\x85\xa8\x02'
-p565
-ssS'pygments_lexers_asm'
-p566
-(dp567
-g6
-(dp568
-g8
-S'pygments_lexers_asm.html'
-p569
-sg10
-S'pygments.lexers.asm'
-p570
-sg12
-g13
-(g14
-g15
-NtRp571
-(dp572
-g18
-I1
-sg19
-I0
-sg20
-I94
-sg21
-I0
-sg22
-I0
-sg23
-I3
-sg24
-I0
-sbssg25
-S'\x89\x9e\xec\xc0&\xe5\x97\x9eG\x90\x93\x1d\x91\xa3\xda('
-p573
-ssS'pygments_console'
-p574
-(dp575
-S'index'
-p576
-(dp577
-S'html_filename'
-p578
-S'pygments_console.html'
-p579
-sS'name'
-p580
-S'pygments.console'
-p581
-sS'nums'
-p582
-g13
-(g14
-g15
-NtRp583
-(dp584
-g18
-I1
-sg19
-I0
-sg20
-I42
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssS'hash'
-p585
-S'b!s\x8d\xefc \xa3\x16\xc7\x1b\xeak\xd2c\xc4'
-p586
-ssS'pygments_formatters_terminal256'
-p587
-(dp588
-g6
-(dp589
-g8
-S'pygments_formatters_terminal256.html'
-p590
-sg10
-S'pygments.formatters.terminal256'
-p591
-sg12
-g13
-(g14
-g15
-NtRp592
-(dp593
-g18
-I1
-sg19
-I0
-sg20
-I133
-sg21
-I0
-sg22
-I0
-sg23
-I12
-sg24
-I0
-sbssg25
-S'\xaf\xc6Qu\x1e\xd5\xbaz\x89w/\x12V\xb9\x99\xd7'
-p594
-ssS'pygments_lexers_parsers'
-p595
-(dp596
-g6
-(dp597
-g8
-S'pygments_lexers_parsers.html'
-p598
-sg10
-S'pygments.lexers.parsers'
-p599
-sg12
-g13
-(g14
-g15
-NtRp600
-(dp601
-g18
-I1
-sg19
-I0
-sg20
-I164
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S"'%\xe3\xea\x91\xec\xf6;N\xb56?%\xf3\x07\xfd"
-p602
-ssS'pygments_formatters__mapping'
-p603
-(dp604
-g6
-(dp605
-g8
-S'pygments_formatters__mapping.html'
-p606
-sg10
-S'pygments.formatters._mapping'
-p607
-sg12
-g13
-(g14
-g15
-NtRp608
-(dp609
-g18
-I1
-sg19
-I0
-sg20
-I3
-sg21
-I26
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\xd2\x8fQ\x97\xe8\x7f\xbf\x87<\xc6\x18\x03\xeb\xb5Lt'
-p610
-ssS'pygments_lexers_ml'
-p611
-(dp612
-g6
-(dp613
-g8
-S'pygments_lexers_ml.html'
-p614
-sg10
-S'pygments.lexers.ml'
-p615
-sg12
-g13
-(g14
-g15
-NtRp616
-(dp617
-g18
-I1
-sg19
-I0
-sg20
-I62
-sg21
-I0
-sg22
-I0
-sg23
-I3
-sg24
-I0
-sbssg25
-S'\xd9P`\xca\xd0\x88,\x12~e\x82\xe40\x97\x0f\x1d'
-p618
-ssS'pygments_lexers_basic'
-p619
-(dp620
-g6
-(dp621
-g8
-S'pygments_lexers_basic.html'
-p622
-sg10
-S'pygments.lexers.basic'
-p623
-sg12
-g13
-(g14
-g15
-NtRp624
-(dp625
-g18
-I1
-sg19
-I0
-sg20
-I66
-sg21
-I0
-sg22
-I0
-sg23
-I3
-sg24
-I0
-sbssg25
-S'Q\x17\ntj\x97\x9bK\xbd\xce\xff\xa2\xfck\xa8\xa8'
-p626
-ssS'pygments_style'
-p627
-(dp628
-g6
-(dp629
-g46
-S'pygments_style.html'
-p630
-sg48
-S'pygments.style'
-p631
-sg50
-g13
-(g14
-g15
-NtRp632
-(dp633
-g18
-I1
-sg19
-I0
-sg20
-I76
-sg21
-I0
-sg22
-I0
-sg23
-I14
-sg24
-I0
-sbssg25
-S'b]\x18\xde\xe2\x8b\xfd\xdd\x04\xce\xaa3\xc1\x8d\xa0\x15'
-p634
-ssS'pygments_styles_bw'
-p635
-(dp636
-g6
-(dp637
-g8
-S'pygments_styles_bw.html'
-p638
-sg10
-S'pygments.styles.bw'
-p639
-sg12
-g13
-(g14
-g15
-NtRp640
-(dp641
-g18
-I1
-sg19
-I0
-sg20
-I7
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\xed\xf4\x113$lj\x82\xaf_\xdfC\\\xca\xc4['
-p642
-ssS'pygments'
-p643
-(dp644
-g576
-(dp645
-g46
-S'pygments.html'
-p646
-sg48
-S'pygments'
-p647
-sg50
-g13
-(g14
-g15
-NtRp648
-(dp649
-g18
-I1
-sg19
-I0
-sg20
-I26
-sg21
-I3
-sg22
-I0
-sg23
-I17
-sg24
-I0
-sbssg585
-S'\xf7R\xe4\x12\x0b\xe4\x10\x07\xa5\x93\xe2j\xbd\x04\xde\x90'
-p650
-ssS'pygments_styles_vim'
-p651
-(dp652
-g6
-(dp653
-g8
-S'pygments_styles_vim.html'
-p654
-sg10
-S'pygments.styles.vim'
-p655
-sg12
-g13
-(g14
-g15
-NtRp656
-(dp657
-g18
-I1
-sg19
-I0
-sg20
-I8
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'=\x0f\xf69\xbc\xc6}+\xd8SQZ(\xc3\xca<'
-p658
-ssS'pygments_styles_paraiso_light'
-p659
-(dp660
-g6
-(dp661
-g8
-S'pygments_styles_paraiso_light.html'
-p662
-sg10
-S'pygments.styles.paraiso_light'
-p663
-sg12
-g13
-(g14
-g15
-NtRp664
-(dp665
-g18
-I1
-sg19
-I0
-sg20
-I22
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'j\xeb\x14\x81z\xff\x90\x80E\xea\x83V\xb0\xb1\xe2\xf4'
-p666
-ssS'pygments_lexers_c_cpp'
-p667
-(dp668
-g6
-(dp669
-g8
-S'pygments_lexers_c_cpp.html'
-p670
-sg10
-S'pygments.lexers.c_cpp'
-p671
-sg12
-g13
-(g14
-g15
-NtRp672
-(dp673
-g18
-I1
-sg19
-I0
-sg20
-I47
-sg21
-I0
-sg22
-I0
-sg23
-I3
-sg24
-I0
-sbssg25
-S'\x98\x0c\x05F\xfa\xfa`\xe3\xe9\x107T\x16\x14x\x10'
-p674
-ssS'pygments_lexers_webmisc'
-p675
-(dp676
-g6
-(dp677
-g8
-S'pygments_lexers_webmisc.html'
-p678
-sg10
-S'pygments.lexers.webmisc'
-p679
-sg12
-g13
-(g14
-g15
-NtRp680
-(dp681
-g18
-I1
-sg19
-I0
-sg20
-I226
-sg21
-I0
-sg22
-I0
-sg23
-I66
-sg24
-I0
-sbssg25
-S'\x0fL\xaa\x1f\xe6\x15\xefMi\xe9\xd7(\xcc\xed\x9bZ'
-p682
-ssS'pygments_lexers_foxpro'
-p683
-(dp684
-g6
-(dp685
-g8
-S'pygments_lexers_foxpro.html'
-p686
-sg10
-S'pygments.lexers.foxpro'
-p687
-sg12
-g13
-(g14
-g15
-NtRp688
-(dp689
-g18
-I1
-sg19
-I0
-sg20
-I12
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\xdda\xff./5xT;\xc7WK\x04\xd8v\xd2'
-p690
-ssS'pygments_lexers_ruby'
-p691
-(dp692
-g6
-(dp693
-g8
-S'pygments_lexers_ruby.html'
-p694
-sg10
-S'pygments.lexers.ruby'
-p695
-sg12
-g13
-(g14
-g15
-NtRp696
-(dp697
-g18
-I1
-sg19
-I0
-sg20
-I109
-sg21
-I0
-sg22
-I0
-sg23
-I2
-sg24
-I0
-sbssg25
-S'\xa2\xbbv\xb7\xe1&\xb7#\x86h(\xb9\t\xaf)\xd7'
-p698
-ssS'pygments_styles_tango'
-p699
-(dp700
-g6
-(dp701
-g8
-S'pygments_styles_tango.html'
-p702
-sg10
-S'pygments.styles.tango'
-p703
-sg12
-g13
-(g14
-g15
-NtRp704
-(dp705
-g18
-I1
-sg19
-I0
-sg20
-I7
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\x860\xfd\x9fE\xdd\x1f\xe1&G\x7fO\x87^\xadw'
-p706
-ssS'pygments_cmdline'
-p707
-(dp708
-g6
-(dp709
-g8
-S'pygments_cmdline.html'
-p710
-sg10
-S'pygments.cmdline'
-p711
-sg12
-g13
-(g14
-g15
-NtRp712
-(dp713
-g18
-I1
-sg19
-I0
-sg20
-I318
-sg21
-I13
-sg22
-I0
-sg23
-I13
-sg24
-I0
-sbssg25
-S',9\x9e\xb7B\x1b\xbe\xdf\xcfE\xeb$^\x077\xf1'
-p714
-ssS'pygments_lexers_iolang'
-p715
-(dp716
-g6
-(dp717
-g8
-S'pygments_lexers_iolang.html'
-p718
-sg10
-S'pygments.lexers.iolang'
-p719
-sg12
-g13
-(g14
-g15
-NtRp720
-(dp721
-g18
-I1
-sg19
-I0
-sg20
-I10
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S"\xc7\xa5zkn\xee\xd4\x9e\xd7\xe3A\x98\xf4\x9d'\x9a"
-p722
-ssS'pygments_lexers_erlang'
-p723
-(dp724
-g6
-(dp725
-g8
-S'pygments_lexers_erlang.html'
-p726
-sg10
-S'pygments.lexers.erlang'
-p727
-sg12
-g13
-(g14
-g15
-NtRp728
-(dp729
-g18
-I1
-sg19
-I0
-sg20
-I158
-sg21
-I0
-sg22
-I0
-sg23
-I2
-sg24
-I0
-sbssg25
-S'J\xd8/\xd1\x04=\x08\x0e\xe8}\xfa\xc90\x01\xa9='
-p730
-ssS'pygments_lexers__mapping'
-p731
-(dp732
-g6
-(dp733
-g8
-S'pygments_lexers__mapping.html'
-p734
-sg10
-S'pygments.lexers._mapping'
-p735
-sg12
-g13
-(g14
-g15
-NtRp736
-(dp737
-g18
-I1
-sg19
-I0
-sg20
-I3
-sg21
-I24
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\x88\x8d\x9e\x9c!\x03Us\x95\x0cJBrd\xdc\xc0'
-p738
-ssS'pygments_lexers_haskell'
-p739
-(dp740
-g6
-(dp741
-g8
-S'pygments_lexers_haskell.html'
-p742
-sg10
-S'pygments.lexers.haskell'
-p743
-sg12
-g13
-(g14
-g15
-NtRp744
-(dp745
-g18
-I1
-sg19
-I0
-sg20
-I134
-sg21
-I0
-sg22
-I0
-sg23
-I1
-sg24
-I0
-sbssg25
-S'h\xdfV\x93\x81\xd2\xe5\xae\n\xb1\x95?S\xa6\xee;'
-p746
-ssS'pygments_unistring'
-p747
-(dp748
-g6
-(dp749
-g46
-S'pygments_unistring.html'
-p750
-sg48
-S'pygments.unistring'
-p751
-sg50
-g13
-(g14
-g15
-NtRp752
-(dp753
-g18
-I1
-sg19
-I0
-sg20
-I63
-sg21
-I57
-sg22
-I0
-sg23
-I7
-sg24
-I0
-sbssg25
-S'z*\xce\xf9t\xbc\xf7Z\xfd\x02\x9e\xd0\x99\xca\xe6\xd3'
-p754
-ssS'pygments_lexers_textfmts'
-p755
-(dp756
-g6
-(dp757
-g8
-S'pygments_lexers_textfmts.html'
-p758
-sg10
-S'pygments.lexers.textfmts'
-p759
-sg12
-g13
-(g14
-g15
-NtRp760
-(dp761
-g18
-I1
-sg19
-I0
-sg20
-I81
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S"\x0et\x93\x9e\xd01N \x06\x90\x1d\xa7'\xd1\xe9\xd1"
-p762
-ssS'pygments_lexers_configs'
-p763
-(dp764
-g6
-(dp765
-g8
-S'pygments_lexers_configs.html'
-p766
-sg10
-S'pygments.lexers.configs'
-p767
-sg12
-g13
-(g14
-g15
-NtRp768
-(dp769
-g18
-I1
-sg19
-I0
-sg20
-I98
-sg21
-I0
-sg22
-I0
-sg23
-I1
-sg24
-I0
-sbssg25
-S'\x9d\xf3#\xf7di\xbc\x9b?\x03}\xb3\xfd%\x14y'
-p770
-ssS'pygments_lexers_dalvik'
-p771
-(dp772
-g6
-(dp773
-g8
-S'pygments_lexers_dalvik.html'
-p774
-sg10
-S'pygments.lexers.dalvik'
-p775
-sg12
-g13
-(g14
-g15
-NtRp776
-(dp777
-g18
-I1
-sg19
-I0
-sg20
-I20
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'k\x1fq\xe8\xaa\xda\x87Oh_\x8a\xc3[\xb3\xb9\xa0'
-p778
-ssS'pygments_lexers_julia'
-p779
-(dp780
-g6
-(dp781
-g8
-S'pygments_lexers_julia.html'
-p782
-sg10
-S'pygments.lexers.julia'
-p783
-sg12
-g13
-(g14
-g15
-NtRp784
-(dp785
-g18
-I1
-sg19
-I0
-sg20
-I42
-sg21
-I0
-sg22
-I0
-sg23
-I12
-sg24
-I0
-sbssg25
-S'\x1a\x04+\x10p#\xcaO8\x8b6\xfe\xe7\x8a\xfa-'
-p786
-ssS'pygments_formatters_rtf'
-p787
-(dp788
-g6
-(dp789
-g8
-S'pygments_formatters_rtf.html'
-p790
-sg10
-S'pygments.formatters.rtf'
-p791
-sg12
-g13
-(g14
-g15
-NtRp792
-(dp793
-g18
-I1
-sg19
-I0
-sg20
-I65
-sg21
-I0
-sg22
-I0
-sg23
-I7
-sg24
-I0
-sbssg25
-S'\xa8h\xcd\x8e\xc3#R\x92\xaf|\xf9 U\xe2\xcb\x83'
-p794
-ssS'pygments_lexers_installers'
-p795
-(dp796
-g6
-(dp797
-g8
-S'pygments_lexers_installers.html'
-p798
-sg10
-S'pygments.lexers.installers'
-p799
-sg12
-g13
-(g14
-g15
-NtRp800
-(dp801
-g18
-I1
-sg19
-I0
-sg20
-I35
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S"\xd2\xca\x05\n\x99'\xbf\xf3Q\x05-[\\\xdco\xf1"
-p802
-ssS'pygments_styles_borland'
-p803
-(dp804
-g6
-(dp805
-g8
-S'pygments_styles_borland.html'
-p806
-sg10
-S'pygments.styles.borland'
-p807
-sg12
-g13
-(g14
-g15
-NtRp808
-(dp809
-g18
-I1
-sg19
-I0
-sg20
-I6
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'H\x88%n\xbb\x93\xac\x17k"\x1f\xd0\x99\xf8\x92\xb2'
-p810
-ssS'pygments_lexers_make'
-p811
-(dp812
-g6
-(dp813
-g8
-S'pygments_lexers_make.html'
-p814
-sg10
-S'pygments.lexers.make'
-p815
-sg12
-g13
-(g14
-g15
-NtRp816
-(dp817
-g18
-I1
-sg19
-I0
-sg20
-I48
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\x0fyp\xd3vK\xf2\x03F0\x10\n\x05\x8d\\h'
-p818
-ssS'pygments_lexers_diff'
-p819
-(dp820
-g6
-(dp821
-g8
-S'pygments_lexers_diff.html'
-p822
-sg10
-S'pygments.lexers.diff'
-p823
-sg12
-g13
-(g14
-g15
-NtRp824
-(dp825
-g18
-I1
-sg19
-I0
-sg20
-I23
-sg21
-I0
-sg22
-I0
-sg23
-I3
-sg24
-I0
-sbssg25
-S'\xb9\x86\xbfo\xcc\x1c\x04\xa0\xc4Y?\xc6^]\xe0W'
-p826
-ssS'pygments_formatters_other'
-p827
-(dp828
-g6
-(dp829
-g8
-S'pygments_formatters_other.html'
-p830
-sg10
-S'pygments.formatters.other'
-p831
-sg12
-g13
-(g14
-g15
-NtRp832
-(dp833
-g18
-I1
-sg19
-I0
-sg20
-I89
-sg21
-I0
-sg22
-I0
-sg23
-I25
-sg24
-I0
-sbssg25
-S'\xfa\x8d\xe484:bo\xe3\xde<\xc5\xe1*\xacr'
-p834
-ssS'pygments_lexers_chapel'
-p835
-(dp836
-g6
-(dp837
-g8
-S'pygments_lexers_chapel.html'
-p838
-sg10
-S'pygments.lexers.chapel'
-p839
-sg12
-g13
-(g14
-g15
-NtRp840
-(dp841
-g18
-I1
-sg19
-I0
-sg20
-I9
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\xf2\xb1\xeb\xbd2\x98-\x0c?\x1d\x80\xe3\xa6\xe8\xef\x17'
-p842
-ssS'pygments_styles_murphy'
-p843
-(dp844
-g6
-(dp845
-g8
-S'pygments_styles_murphy.html'
-p846
-sg10
-S'pygments.styles.murphy'
-p847
-sg12
-g13
-(g14
-g15
-NtRp848
-(dp849
-g18
-I1
-sg19
-I0
-sg20
-I6
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'@\x82\x16\xfc\x00\x9ex\xf2\x03P\x87\x90l\x1c\xcc\xc2'
-p850
-ssS'pygments_lexers_rebol'
-p851
-(dp852
-g6
-(dp853
-g8
-S'pygments_lexers_rebol.html'
-p854
-sg10
-S'pygments.lexers.rebol'
-p855
-sg12
-g13
-(g14
-g15
-NtRp856
-(dp857
-g18
-I1
-sg19
-I0
-sg20
-I88
-sg21
-I0
-sg22
-I0
-sg23
-I6
-sg24
-I0
-sbssg25
-S'\xdc\x82\xda,yd\xca\\f\x8c\x1b\x87;\x92\xa8\x1f'
-p858
-ssS'pygments_lexers_special'
-p859
-(dp860
-g6
-(dp861
-g8
-S'pygments_lexers_special.html'
-p862
-sg10
-S'pygments.lexers.special'
-p863
-sg12
-g13
-(g14
-g15
-NtRp864
-(dp865
-g18
-I1
-sg19
-I0
-sg20
-I56
-sg21
-I0
-sg22
-I0
-sg23
-I16
-sg24
-I0
-sbssg25
-S'fL\xbdpQ\xd2\xd9\x9c\x9d"\xbe\xd8\xe3\xef\xf7\x9f'
-p866
-ssS'pygments_lexers_actionscript'
-p867
-(dp868
-g6
-(dp869
-g8
-S'pygments_lexers_actionscript.html'
-p870
-sg10
-S'pygments.lexers.actionscript'
-p871
-sg12
-g13
-(g14
-g15
-NtRp872
-(dp873
-g18
-I1
-sg19
-I0
-sg20
-I32
-sg21
-I0
-sg22
-I0
-sg23
-I1
-sg24
-I0
-sbssg25
-S'\xd5\xc0\xce\xfd\xe00\xc8\x1e\xe2\xc3h\xa8\x93\xe7\x95I'
-p874
-ssS'pygments_styles_monokai'
-p875
-(dp876
-g6
-(dp877
-g8
-S'pygments_styles_monokai.html'
-p878
-sg10
-S'pygments.styles.monokai'
-p879
-sg12
-g13
-(g14
-g15
-NtRp880
-(dp881
-g18
-I1
-sg19
-I0
-sg20
-I7
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\x1f2\x05\x95\x8e\xa5\xe2\x88U\x92\xe6!\x1a\x9a6\xe6'
-p882
-ssS'pygments_lexers__cocoa_builtins'
-p883
-(dp884
-g6
-(dp885
-g8
-S'pygments_lexers__cocoa_builtins.html'
-p886
-sg10
-S'pygments.lexers._cocoa_builtins'
-p887
-sg12
-g13
-(g14
-g15
-NtRp888
-(dp889
-g18
-I1
-sg19
-I0
-sg20
-I5
-sg21
-I39
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\xf4f:\x8b\x91\xfc\xa3\xa9\xa1[]\xc0>Z\x1a\x1d'
-p890
-ssS'pygments_lexers_felix'
-p891
-(dp892
-g6
-(dp893
-g8
-S'pygments_lexers_felix.html'
-p894
-sg10
-S'pygments.lexers.felix'
-p895
-sg12
-g13
-(g14
-g15
-NtRp896
-(dp897
-g18
-I1
-sg19
-I0
-sg20
-I20
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\x9f"\xb9\x95;\xeeC\xf9\x1e\x0b\x82\xb5;z\x02\xfc'
-p898
-ssS'pygments_styles_igor'
-p899
-(dp900
-g6
-(dp901
-g8
-S'pygments_styles_igor.html'
-p902
-sg10
-S'pygments.styles.igor'
-p903
-sg12
-g13
-(g14
-g15
-NtRp904
-(dp905
-g18
-I1
-sg19
-I0
-sg20
-I6
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\xb1\x9d\xa8\x853\xafh\xf7\xbca\x18A\xeb"\x01z'
-p906
-ssS'pygments_lexers_hdl'
-p907
-(dp908
-g6
-(dp909
-g8
-S'pygments_lexers_hdl.html'
-p910
-sg10
-S'pygments.lexers.hdl'
-p911
-sg12
-g13
-(g14
-g15
-NtRp912
-(dp913
-g18
-I1
-sg19
-I0
-sg20
-I38
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\xe8\xe1}lz\xe0R\xca:\x13\x84\xa7\xd8\x8e\xde\xed'
-p914
-ssS'pygments_lexers__mql_builtins'
-p915
-(dp916
-g6
-(dp917
-g8
-S'pygments_lexers__mql_builtins.html'
-p918
-sg10
-S'pygments.lexers._mql_builtins'
-p919
-sg12
-g13
-(g14
-g15
-NtRp920
-(dp921
-g18
-I1
-sg19
-I0
-sg20
-I6
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\xce\x1f\xf4mf\xfa\xd4\xe9\xe9a\xc0I\x007GH'
-p922
-ssS'pygments_lexers_shell'
-p923
-(dp924
-g6
-(dp925
-g8
-S'pygments_lexers_shell.html'
-p926
-sg10
-S'pygments.lexers.shell'
-p927
-sg12
-g13
-(g14
-g15
-NtRp928
-(dp929
-g18
-I1
-sg19
-I0
-sg20
-I101
-sg21
-I0
-sg22
-I0
-sg23
-I1
-sg24
-I0
-sbssg25
-S'\xe9q\x9bK\x8e\x10\xb9\x8a\xe4\xf6\xc7\xad\xf1j=t'
-p930
-ssS'pygments_lexers_idl'
-p931
-(dp932
-g6
-(dp933
-g8
-S'pygments_lexers_idl.html'
-p934
-sg10
-S'pygments.lexers.idl'
-p935
-sg12
-g13
-(g14
-g15
-NtRp936
-(dp937
-g18
-I1
-sg19
-I0
-sg20
-I14
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S',\xea\x84\xdf\x83\xb2soE{\xea\xc3kXf\xcd'
-p938
-ssS'pygments_lexers_inferno'
-p939
-(dp940
-g6
-(dp941
-g8
-S'pygments_lexers_inferno.html'
-p942
-sg10
-S'pygments.lexers.inferno'
-p943
-sg12
-g13
-(g14
-g15
-NtRp944
-(dp945
-g18
-I1
-sg19
-I0
-sg20
-I14
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\x83\xd5-Av\xe0\xbb~\xe2K\xfc\x0b8\xa8\xddm'
-p946
-ssS'pygments_lexers_ambient'
-p947
-(dp948
-g6
-(dp949
-g8
-S'pygments_lexers_ambient.html'
-p950
-sg10
-S'pygments.lexers.ambient'
-p951
-sg12
-g13
-(g14
-g15
-NtRp952
-(dp953
-g18
-I1
-sg19
-I0
-sg20
-I13
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'1S\x1bp\xe7Y\xdcG\xb2\x9f\x8cH(\\]r'
-p954
-ssS'pygments_formatters_img'
-p955
-(dp956
-g6
-(dp957
-g8
-S'pygments_formatters_img.html'
-p958
-sg10
-S'pygments.formatters.img'
-p959
-sg12
-g13
-(g14
-g15
-NtRp960
-(dp961
-g18
-I1
-sg19
-I0
-sg20
-I241
-sg21
-I0
-sg22
-I0
-sg23
-I60
-sg24
-I0
-sbssg25
-S'\xd9[\xecC#\xb2<\x1e\xa7Hv\xe8W\xdc\xa7?'
-p962
-ssS'pygments_lexers_fortran'
-p963
-(dp964
-g6
-(dp965
-g8
-S'pygments_lexers_fortran.html'
-p966
-sg10
-S'pygments.lexers.fortran'
-p967
-sg12
-g13
-(g14
-g15
-NtRp968
-(dp969
-g18
-I1
-sg19
-I0
-sg20
-I12
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\xd9\xeb\x7f\xd0\xe5C%p&\xb7\nt!\xa1\xb8y'
-p970
-ssS'pygments_lexers_fantom'
-p971
-(dp972
-g6
-(dp973
-g8
-S'pygments_lexers_fantom.html'
-p974
-sg10
-S'pygments.lexers.fantom'
-p975
-sg12
-g13
-(g14
-g15
-NtRp976
-(dp977
-g18
-I1
-sg19
-I0
-sg20
-I13
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\xcdE;&\x94\xe4\xb1=)\xcc\x90\xbcV+_\xa2'
-p978
-ssS'pygments_lexers_go'
-p979
-(dp980
-g6
-(dp981
-g8
-S'pygments_lexers_go.html'
-p982
-sg10
-S'pygments.lexers.go'
-p983
-sg12
-g13
-(g14
-g15
-NtRp984
-(dp985
-g18
-I1
-sg19
-I0
-sg20
-I12
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\x9fp\x05^\x1a\x86L\xe4\x0c\x05\x94,\x8f\x07\xa2('
-p986
-ssS'pygments_lexers_tcl'
-p987
-(dp988
-g6
-(dp989
-g8
-S'pygments_lexers_tcl.html'
-p990
-sg10
-S'pygments.lexers.tcl'
-p991
-sg12
-g13
-(g14
-g15
-NtRp992
-(dp993
-g18
-I1
-sg19
-I0
-sg20
-I17
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S"E\xb3\x7f\xc1'q\xde\xeb\x0e\xc3\xc5\xcbm0Nx"
-p994
-ssS'pygments_lexers_agile'
-p995
-(dp996
-g6
-(dp997
-g8
-S'pygments_lexers_agile.html'
-p998
-sg10
-S'pygments.lexers.agile'
-p999
-sg12
-g13
-(g14
-g15
-NtRp1000
-(dp1001
-g18
-I1
-sg19
-I0
-sg20
-I12
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\x92\x0c/\xdb\xea\xfb\x9f\xcc@@\xe7\xb6\\\x07\xa7 '
-p1002
-ssS'pygments_styles_native'
-p1003
-(dp1004
-g6
-(dp1005
-g8
-S'pygments_styles_native.html'
-p1006
-sg10
-S'pygments.styles.native'
-p1007
-sg12
-g13
-(g14
-g15
-NtRp1008
-(dp1009
-g18
-I1
-sg19
-I0
-sg20
-I7
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\x12/\xf0\xd2\xc9"\xccZ\xab/\x04\xad\xbf\x1at\x02'
-p1010
-ssS'pygments_lexers_esoteric'
-p1011
-(dp1012
-g6
-(dp1013
-g8
-S'pygments_lexers_esoteric.html'
-p1014
-sg10
-S'pygments.lexers.esoteric'
-p1015
-sg12
-g13
-(g14
-g15
-NtRp1016
-(dp1017
-g18
-I1
-sg19
-I0
-sg20
-I23
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'H9D\xae\x9eZ\xc2\x03\x1a2\x83\x9cxK&;'
-p1018
-ssS'pygments_lexers_pascal'
-p1019
-(dp1020
-g6
-(dp1021
-g8
-S'pygments_lexers_pascal.html'
-p1022
-sg10
-S'pygments.lexers.pascal'
-p1023
-sg12
-g13
-(g14
-g15
-NtRp1024
-(dp1025
-g18
-I1
-sg19
-I0
-sg20
-I226
-sg21
-I0
-sg22
-I0
-sg23
-I23
-sg24
-I0
-sbssg25
-S'\x8fT\x10*q!\xda\x9d\xb4#\xc8^R\x82\xfb\xee'
-p1026
-ssS'pygments_regexopt'
-p1027
-(dp1028
-g6
-(dp1029
-g46
-S'pygments_regexopt.html'
-p1030
-sg48
-S'pygments.regexopt'
-p1031
-sg50
-g13
-(g14
-g15
-NtRp1032
-(dp1033
-g18
-I1
-sg19
-I0
-sg20
-I43
-sg21
-I0
-sg22
-I0
-sg23
-I2
-sg24
-I0
-sbssg25
-S">rG5'\xf0\xa6\xa0\xbc\xef,\x94\x06@\x86a"
-p1034
-ssS'pygments_lexers__lasso_builtins'
-p1035
-(dp1036
-g6
-(dp1037
-g8
-S'pygments_lexers__lasso_builtins.html'
-p1038
-sg10
-S'pygments.lexers._lasso_builtins'
-p1039
-sg12
-g13
-(g14
-g15
-NtRp1040
-(dp1041
-g18
-I1
-sg19
-I0
-sg20
-I3
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'&\xd8\x14\xfb+\xc1\xc7\xb9\x90\xf5\xd8k\x8c\x93y\xf3'
-p1042
-ssS'pygments_lexers_javascript'
-p1043
-(dp1044
-g6
-(dp1045
-g8
-S'pygments_lexers_javascript.html'
-p1046
-sg10
-S'pygments.lexers.javascript'
-p1047
-sg12
-g13
-(g14
-g15
-NtRp1048
-(dp1049
-g18
-I1
-sg19
-I0
-sg20
-I109
-sg21
-I0
-sg22
-I0
-sg23
-I1
-sg24
-I0
-sbssg25
-S'\xcf\xe5\x8du<\xa3$%\xb2\xe8\x1f\x14u\x9b\xc2\xa0'
-p1050
-ssS'pygments_lexers__openedge_builtins'
-p1051
-(dp1052
-g6
-(dp1053
-g8
-S'pygments_lexers__openedge_builtins.html'
-p1054
-sg10
-S'pygments.lexers._openedge_builtins'
-p1055
-sg12
-g13
-(g14
-g15
-NtRp1056
-(dp1057
-g18
-I1
-sg19
-I0
-sg20
-I2
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\x0eF\x90{\xda!\x03"\xebX\x08\x87$#BJ'
-p1058
-ssS'pygments_formatters_latex'
-p1059
-(dp1060
-g6
-(dp1061
-S'html_filename'
-p1062
-S'pygments_formatters_latex.html'
-p1063
-sS'name'
-p1064
-S'pygments.formatters.latex'
-p1065
-sS'nums'
-p1066
-g13
-(g14
-g15
-NtRp1067
-(dp1068
-g18
-I1
-sg19
-I0
-sg20
-I189
-sg21
-I0
-sg22
-I0
-sg23
-I72
-sg24
-I0
-sbssg25
-S'\xe2\n\x90\x01I\x1c6\xf2w)\xa4,\xe5\xc1\xbe='
-p1069
-ssS'pygments_lexers_html'
-p1070
-(dp1071
-g6
-(dp1072
-g8
-S'pygments_lexers_html.html'
-p1073
-sg10
-S'pygments.lexers.html'
-p1074
-sg12
-g13
-(g14
-g15
-NtRp1075
-(dp1076
-g18
-I1
-sg19
-I0
-sg20
-I80
-sg21
-I0
-sg22
-I0
-sg23
-I2
-sg24
-I0
-sbssg25
-S'\x18\x97\x07#\xe4\x9c\xa5\xff\xbf\xf1\xbc\xdb\x83`2?'
-p1077
-ssS'pygments_styles_friendly'
-p1078
-(dp1079
-g6
-(dp1080
-g8
-S'pygments_styles_friendly.html'
-p1081
-sg10
-S'pygments.styles.friendly'
-p1082
-sg12
-g13
-(g14
-g15
-NtRp1083
-(dp1084
-g18
-I1
-sg19
-I0
-sg20
-I7
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\x8b\x8a\x97\x9c\x17\x82W\xc1V> \xc5\xae\xc1N\x0e'
-p1085
-ssS'pygments_lexers_lisp'
-p1086
-(dp1087
-g6
-(dp1088
-g8
-S'pygments_lexers_lisp.html'
-p1089
-sg10
-S'pygments.lexers.lisp'
-p1090
-sg12
-g13
-(g14
-g15
-NtRp1091
-(dp1092
-g18
-I1
-sg19
-I0
-sg20
-I109
-sg21
-I0
-sg22
-I0
-sg23
-I2
-sg24
-I0
-sbssg25
-S'\x07\xb7\xeco4\x86L\n\xa8\xe8|\xeeRh\xa7\x14'
-p1093
-ssS'pygments_lexers_ooc'
-p1094
-(dp1095
-g6
-(dp1096
-g8
-S'pygments_lexers_ooc.html'
-p1097
-sg10
-S'pygments.lexers.ooc'
-p1098
-sg12
-g13
-(g14
-g15
-NtRp1099
-(dp1100
-g18
-I1
-sg19
-I0
-sg20
-I10
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\xb4\xb6\xef\x10/\x90v\x91\x19VN\x0e\xffE\xfe\xfc'
-p1101
-ssS'pygments_lexers_perl'
-p1102
-(dp1103
-g6
-(dp1104
-g8
-S'pygments_lexers_perl.html'
-p1105
-sg10
-S'pygments.lexers.perl'
-p1106
-sg12
-g13
-(g14
-g15
-NtRp1107
-(dp1108
-g18
-I1
-sg19
-I0
-sg20
-I129
-sg21
-I0
-sg22
-I0
-sg23
-I11
-sg24
-I0
-sbssg25
-S'\xfbf:\xae\x15\xbc\xa2\xedK\xc3\xda~\xebQl\xf7'
-p1109
-ssS'pygments_lexers'
-p1110
-(dp1111
-g6
-(dp1112
-g46
-S'pygments_lexers.html'
-p1113
-sg48
-S'pygments.lexers'
-p1114
-sg50
-g13
-(g14
-g15
-NtRp1115
-(dp1116
-g18
-I1
-sg19
-I0
-sg20
-I156
-sg21
-I0
-sg22
-I0
-sg23
-I115
-sg24
-I0
-sbssg25
-S'\xbdb&K\xc4y\x19=\xb1\xa7\xd4\xd0\x04\xfdjY'
-p1117
-ssS'pygments_lexers_console'
-p1118
-(dp1119
-g6
-(dp1120
-g8
-S'pygments_lexers_console.html'
-p1121
-sg10
-S'pygments.lexers.console'
-p1122
-sg12
-g13
-(g14
-g15
-NtRp1123
-(dp1124
-g18
-I1
-sg19
-I0
-sg20
-I16
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\xf0\xc1\x12_E;G\xefs\xbbC\xace\x1fvM'
-p1125
-ssS'pygments_lexers_nit'
-p1126
-(dp1127
-g6
-(dp1128
-g8
-S'pygments_lexers_nit.html'
-p1129
-sg10
-S'pygments.lexers.nit'
-p1130
-sg12
-g13
-(g14
-g15
-NtRp1131
-(dp1132
-g18
-I1
-sg19
-I0
-sg20
-I9
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\xe8%\xf4\xf5-\xc5\xf79\x96\xfa8m\xa4h\xb8_'
-p1133
-ssS'pygments_lexers_python'
-p1134
-(dp1135
-g6
-(dp1136
-g46
-S'pygments_lexers_python.html'
-p1137
-sg48
-S'pygments.lexers.python'
-p1138
-sg50
-g13
-(g14
-g15
-NtRp1139
-(dp1140
-g18
-I1
-sg19
-I0
-sg20
-I124
-sg21
-I0
-sg22
-I0
-sg23
-I52
-sg24
-I0
-sbssg25
-S'\x02\x11\x8c\x17W\x9c\xf2 \xf3Z\xae\xce\xcfM\xda\xf4'
-p1141
-ssS'pygments_util'
-p1142
-(dp1143
-g576
-(dp1144
-g46
-S'pygments_util.html'
-p1145
-sg48
-S'pygments.util'
-p1146
-sg50
-g13
-(g14
-g15
-NtRp1147
-(dp1148
-g18
-I1
-sg19
-I0
-sg20
-I207
-sg21
-I0
-sg22
-I0
-sg23
-I141
-sg24
-I0
-sbssg585
-S'N\x94T\xf7\xf2S8?\xbc6\xf2s\xde\x84dS'
-p1149
-ssS'pygments_lexers_nix'
-p1150
-(dp1151
-g6
-(dp1152
-g8
-S'pygments_lexers_nix.html'
-p1153
-sg10
-S'pygments.lexers.nix'
-p1154
-sg12
-g13
-(g14
-g15
-NtRp1155
-(dp1156
-g18
-I1
-sg19
-I0
-sg20
-I27
-sg21
-I0
-sg22
-I0
-sg23
-I3
-sg24
-I0
-sbssg25
-S'\x13\x97_n\xb1\x1aY\xbf\xaa/aNb\x9f\xf1\xd6'
-p1157
-ssS'pygments_lexers_dsls'
-p1158
-(dp1159
-g6
-(dp1160
-g8
-S'pygments_lexers_dsls.html'
-p1161
-sg10
-S'pygments.lexers.dsls'
-p1162
-sg12
-g13
-(g14
-g15
-NtRp1163
-(dp1164
-g18
-I1
-sg19
-I0
-sg20
-I59
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\xabaRq\xac&\x12\x84\xdd\x0c3I\xca\x97\x07\xdf'
-p1165
-ssS'pygments_lexers_theorem'
-p1166
-(dp1167
-g6
-(dp1168
-g8
-S'pygments_lexers_theorem.html'
-p1169
-sg10
-S'pygments.lexers.theorem'
-p1170
-sg12
-g13
-(g14
-g15
-NtRp1171
-(dp1172
-g18
-I1
-sg19
-I0
-sg20
-I66
-sg21
-I0
-sg22
-I0
-sg23
-I1
-sg24
-I0
-sbssg25
-S'\xae\xc2,\xcf\xd0v\xc8\x1c>\xe8\xc2\xc5\x0c\xb5[\xed'
-p1173
-ssS'pygments_lexers_r'
-p1174
-(dp1175
-g6
-(dp1176
-g8
-S'pygments_lexers_r.html'
-p1177
-sg10
-S'pygments.lexers.r'
-p1178
-sg12
-g13
-(g14
-g15
-NtRp1179
-(dp1180
-g18
-I1
-sg19
-I0
-sg20
-I44
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S"l[\x81\xc4\xa5\x00Ab\xdf'u\xe1(\xe9\x95\xd5"
-p1181
-ssS'pygments_lexers__postgres_builtins'
-p1182
-(dp1183
-g6
-(dp1184
-g8
-S'pygments_lexers__postgres_builtins.html'
-p1185
-sg10
-S'pygments.lexers._postgres_builtins'
-p1186
-sg12
-g13
-(g14
-g15
-NtRp1187
-(dp1188
-g18
-I1
-sg19
-I0
-sg20
-I6
-sg21
-I77
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\xba\xc8G)\x13\xdax\x03\x00\xbfd6\xcc\xf5\xfd\xff'
-p1189
-ssS'pygments_lexers_eiffel'
-p1190
-(dp1191
-g6
-(dp1192
-g8
-S'pygments_lexers_eiffel.html'
-p1193
-sg10
-S'pygments.lexers.eiffel'
-p1194
-sg12
-g13
-(g14
-g15
-NtRp1195
-(dp1196
-g18
-I1
-sg19
-I0
-sg20
-I10
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'7\xe6\xd4\xb9\xb4S\xd8I\xd9\xae\x8a\x8c\x15\xf2R\x06'
-p1197
-ssS'pygments_lexers_d'
-p1198
-(dp1199
-g6
-(dp1200
-g8
-S'pygments_lexers_d.html'
-p1201
-sg10
-S'pygments.lexers.d'
-p1202
-sg12
-g13
-(g14
-g15
-NtRp1203
-(dp1204
-g18
-I1
-sg19
-I0
-sg20
-I21
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\xab\x13a\xfe\xf8,\xb7u\xd0\x0fL\xe0\xc91\xa7z'
-p1205
-ssS'pygments_styles_manni'
-p1206
-(dp1207
-g6
-(dp1208
-g8
-S'pygments_styles_manni.html'
-p1209
-sg10
-S'pygments.styles.manni'
-p1210
-sg12
-g13
-(g14
-g15
-NtRp1211
-(dp1212
-g18
-I1
-sg19
-I0
-sg20
-I6
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'X\xfc\x88\xafu\x0e;\xa6\x07*Ex\x0f\x13U\x86'
-p1213
-ssS'pygments_lexers_smalltalk'
-p1214
-(dp1215
-g6
-(dp1216
-g8
-S'pygments_lexers_smalltalk.html'
-p1217
-sg10
-S'pygments.lexers.smalltalk'
-p1218
-sg12
-g13
-(g14
-g15
-NtRp1219
-(dp1220
-g18
-I1
-sg19
-I0
-sg20
-I16
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S'\n\xa8\xf0t\xd2\x9fVa\xad\xeba\x18=\xa2\x96\xc8'
-p1221
-ssS'pygments_lexers_php'
-p1222
-(dp1223
-g6
-(dp1224
-g8
-S'pygments_lexers_php.html'
-p1225
-sg10
-S'pygments.lexers.php'
-p1226
-sg12
-g13
-(g14
-g15
-NtRp1227
-(dp1228
-g18
-I1
-sg19
-I0
-sg20
-I53
-sg21
-I0
-sg22
-I0
-sg23
-I1
-sg24
-I0
-sbssg25
-S'\x9c\xabG\x80\xd0\x03\xd9\xa0\x14\x84\x00~1\xc5\x8c\x0c'
-p1229
-ssS'pygments_lexers_igor'
-p1230
-(dp1231
-g6
-(dp1232
-g8
-S'pygments_lexers_igor.html'
-p1233
-sg10
-S'pygments.lexers.igor'
-p1234
-sg12
-g13
-(g14
-g15
-NtRp1235
-(dp1236
-g18
-I1
-sg19
-I0
-sg20
-I18
-sg21
-I0
-sg22
-I0
-sg23
-I0
-sg24
-I0
-sbssg25
-S' \xd0\x11)\xab7Ey\x8b\x83y<Y\xaex\x17'
-p1237
-sssS'version'
-p1238
-S'3.7.1'
-p1239
-sS'settings'
-p1240
-S'\xc8!z\xf4\xa7\xa7\x82\xd3`\x15\x01\xc8\\\x84N\x11'
-p1241
-sS'format'
-p1242
-I1
-s.
\ No newline at end of file
+++ /dev/null
-/* CSS styles for Coverage. */
-/* Page-wide styles */
-html, body, h1, h2, h3, p, td, th {
- margin: 0;
- padding: 0;
- border: 0;
- outline: 0;
- font-weight: inherit;
- font-style: inherit;
- font-size: 100%;
- font-family: inherit;
- vertical-align: baseline;
- }
-
-/* Set baseline grid to 16 pt. */
-body {
- font-family: georgia, serif;
- font-size: 1em;
- }
-
-html>body {
- font-size: 16px;
- }
-
-/* Set base font size to 12/16 */
-p {
- font-size: .75em; /* 12/16 */
- line-height: 1.33333333em; /* 16/12 */
- }
-
-table {
- border-collapse: collapse;
- }
-
-a.nav {
- text-decoration: none;
- color: inherit;
- }
-a.nav:hover {
- text-decoration: underline;
- color: inherit;
- }
-
-/* Page structure */
-#header {
- background: #f8f8f8;
- width: 100%;
- border-bottom: 1px solid #eee;
- }
-
-#source {
- padding: 1em;
- font-family: "courier new", monospace;
- }
-
-#indexfile #footer {
- margin: 1em 3em;
- }
-
-#pyfile #footer {
- margin: 1em 1em;
- }
-
-#footer .content {
- padding: 0;
- font-size: 85%;
- font-family: verdana, sans-serif;
- color: #666666;
- font-style: italic;
- }
-
-#index {
- margin: 1em 0 0 3em;
- }
-
-/* Header styles */
-#header .content {
- padding: 1em 3em;
- }
-
-h1 {
- font-size: 1.25em;
-}
-
-h2.stats {
- margin-top: .5em;
- font-size: 1em;
-}
-.stats span {
- border: 1px solid;
- padding: .1em .25em;
- margin: 0 .1em;
- cursor: pointer;
- border-color: #999 #ccc #ccc #999;
-}
-.stats span.hide_run, .stats span.hide_exc,
-.stats span.hide_mis, .stats span.hide_par,
-.stats span.par.hide_run.hide_par {
- border-color: #ccc #999 #999 #ccc;
-}
-.stats span.par.hide_run {
- border-color: #999 #ccc #ccc #999;
-}
-
-.stats span.run {
- background: #ddffdd;
-}
-.stats span.exc {
- background: #eeeeee;
-}
-.stats span.mis {
- background: #ffdddd;
-}
-.stats span.hide_run {
- background: #eeffee;
-}
-.stats span.hide_exc {
- background: #f5f5f5;
-}
-.stats span.hide_mis {
- background: #ffeeee;
-}
-.stats span.par {
- background: #ffffaa;
-}
-.stats span.hide_par {
- background: #ffffcc;
-}
-
-/* Help panel */
-#keyboard_icon {
- float: right;
- cursor: pointer;
-}
-
-.help_panel {
- position: absolute;
- background: #ffc;
- padding: .5em;
- border: 1px solid #883;
- display: none;
-}
-
-#indexfile .help_panel {
- width: 20em; height: 4em;
-}
-
-#pyfile .help_panel {
- width: 16em; height: 8em;
-}
-
-.help_panel .legend {
- font-style: italic;
- margin-bottom: 1em;
-}
-
-#panel_icon {
- float: right;
- cursor: pointer;
-}
-
-.keyhelp {
- margin: .75em;
-}
-
-.keyhelp .key {
- border: 1px solid black;
- border-color: #888 #333 #333 #888;
- padding: .1em .35em;
- font-family: monospace;
- font-weight: bold;
- background: #eee;
-}
-
-/* Source file styles */
-.linenos p {
- text-align: right;
- margin: 0;
- padding: 0 .5em;
- color: #999999;
- font-family: verdana, sans-serif;
- font-size: .625em; /* 10/16 */
- line-height: 1.6em; /* 16/10 */
- }
-.linenos p.highlight {
- background: #ffdd00;
- }
-.linenos p a {
- text-decoration: none;
- color: #999999;
- }
-.linenos p a:hover {
- text-decoration: underline;
- color: #999999;
- }
-
-td.text {
- width: 100%;
- }
-.text p {
- margin: 0;
- padding: 0 0 0 .5em;
- border-left: 2px solid #ffffff;
- white-space: nowrap;
- }
-
-.text p.mis {
- background: #ffdddd;
- border-left: 2px solid #ff0000;
- }
-.text p.run, .text p.run.hide_par {
- background: #ddffdd;
- border-left: 2px solid #00ff00;
- }
-.text p.exc {
- background: #eeeeee;
- border-left: 2px solid #808080;
- }
-.text p.par, .text p.par.hide_run {
- background: #ffffaa;
- border-left: 2px solid #eeee99;
- }
-.text p.hide_run, .text p.hide_exc, .text p.hide_mis, .text p.hide_par,
-.text p.hide_run.hide_par {
- background: inherit;
- }
-
-.text span.annotate {
- font-family: georgia;
- font-style: italic;
- color: #666;
- float: right;
- padding-right: .5em;
- }
-.text p.hide_par span.annotate {
- display: none;
- }
-
-/* Syntax coloring */
-.text .com {
- color: green;
- font-style: italic;
- line-height: 1px;
- }
-.text .key {
- font-weight: bold;
- line-height: 1px;
- }
-.text .str {
- color: #000080;
- }
-
-/* index styles */
-#index td, #index th {
- text-align: right;
- width: 5em;
- padding: .25em .5em;
- border-bottom: 1px solid #eee;
- }
-#index th {
- font-style: italic;
- color: #333;
- border-bottom: 1px solid #ccc;
- cursor: pointer;
- }
-#index th:hover {
- background: #eee;
- border-bottom: 1px solid #999;
- }
-#index td.left, #index th.left {
- padding-left: 0;
- }
-#index td.right, #index th.right {
- padding-right: 0;
- }
-#index th.headerSortDown, #index th.headerSortUp {
- border-bottom: 1px solid #000;
- }
-#index td.name, #index th.name {
- text-align: left;
- width: auto;
- }
-#index td.name a {
- text-decoration: none;
- color: #000;
- }
-#index td.name a:hover {
- text-decoration: underline;
- color: #000;
- }
-#index tr.total {
- }
-#index tr.total td {
- font-weight: bold;
- border-top: 1px solid #ccc;
- border-bottom: none;
- }
-#index tr.file:hover {
- background: #eeeeee;
- }
--- /dev/null
+implementation module StdGeneric
+
+import StdInt, StdMisc, StdClass, StdFunc
+
+generic bimap a b :: Bimap .a .b
+
+bimapId :: Bimap .a .a
+bimapId = { map_to = id, map_from = id }
+
+bimap{|c|} = { map_to = id, map_from = id }
+
+bimap{|PAIR|} bx by = { map_to= map_to, map_from=map_from }
+where
+ map_to (PAIR x y) = PAIR (bx.map_to x) (by.map_to y)
+ map_from (PAIR x y) = PAIR (bx.map_from x) (by.map_from y)
+bimap{|EITHER|} bl br = { map_to= map_to, map_from=map_from }
+where
+ map_to (LEFT x) = LEFT (bl.map_to x)
+ map_to (RIGHT x) = RIGHT (br.map_to x)
+ map_from (LEFT x) = LEFT (bl.map_from x)
+ map_from (RIGHT x) = RIGHT (br.map_from x)
+
+bimap{|(->)|} barg bres = { map_to = map_to, map_from = map_from }
+where
+ map_to f = comp3 bres.map_to f barg.map_from
+ map_from f = comp3 bres.map_from f barg.map_to
+
+bimap{|CONS|} barg = { map_to= map_to, map_from=map_from }
+where
+ map_to (CONS x) = CONS (barg.map_to x)
+ map_from (CONS x) = CONS (barg.map_from x)
+
+bimap{|FIELD|} barg = { map_to= map_to, map_from=map_from }
+where
+ map_to (FIELD x) = FIELD (barg.map_to x)
+ map_from (FIELD x) = FIELD (barg.map_from x)
+
+bimap{|OBJECT|} barg = { map_to= map_to, map_from=map_from }
+where
+ map_to (OBJECT x) = OBJECT (barg.map_to x)
+ map_from (OBJECT x) = OBJECT (barg.map_from x)
+
+bimap{|Bimap|} x y = {map_to = map_to, map_from = map_from}
+where
+ map_to {map_to, map_from} =
+ { map_to = comp3 y.map_to map_to x.map_from
+ , map_from = comp3 x.map_to map_from y.map_from
+ }
+ map_from {map_to, map_from} =
+ { map_to = comp3 y.map_from map_to x.map_to
+ , map_from = comp3 x.map_from map_from y.map_to
+ }
+
+comp3 :: !(.a -> .b) u:(.c -> .a) !(.d -> .c) -> u:(.d -> .b)
+comp3 f g h
+ | is_id f
+ | is_id h
+ = cast g
+ = cast (\x -> g (h x))
+ | is_id h
+ = cast (\x -> f (g x))
+ = \x -> f (g (h x))
+where
+ is_id :: !.(.a -> .b) -> Bool
+ is_id f = code inline
+ {
+ eq_desc e_StdFunc_did 0 0
+ pop_a 1
+ }
+
+ cast :: !u:a -> u:b
+ cast f = code inline
+ {
+ pop_a 0
+ }
+
+getConsPath :: !GenericConsDescriptor -> [ConsPos]
+getConsPath {gcd_index, gcd_type_def={gtd_num_conses}}
+ = doit gcd_index gtd_num_conses
+where
+ doit i n
+ | n == 0
+ = abort "getConsPath: zero conses\n"
+ | i >= n
+ = abort "getConsPath: cons index >= number of conses"
+ | n == 1
+ = []
+ | i < (n/2)
+ = [ ConsLeft : doit i (n/2) ]
+ | otherwise
+ = [ ConsRight : doit (i - (n/2)) (n - (n/2)) ]
+
\ No newline at end of file
--- /dev/null
+#ifdef ARCH_ARM
+arch arm11
+#else
+arch ia32
+#endif
+
+objects {
+ my_ep = ep /* A synchronous endpoint */
+
+ /* Two thread control blocks */
+ tcb1 = tcb
+ tcb2 = tcb
+
+ /* Four frames of physical memory */
+ frame1 = frame (4k)
+ frame2 = frame (4k)
+ frame3 = frame (4k)
+ frame4 = frame (4k)
+
+ /* Two page tables */
+ pt1 = pt
+ pt2 = pt
+
+ /* Two page directories */
+ pd1 = pd
+ pd2 = pd
+
+ /* Two capability nodes */
+ cnode1 = cnode (2 bits)
+ cnode2 = cnode (3 bits)
+}
+caps {
+ cnode1 {
+ 0x1: frame1 (RW) /* read/write */
+ 0x2: my_ep (R) /* read-only */
+ }
+ cnode2 {
+ 0x1: my_ep (W) /* write-only */
+ }
+ tcb1 {
+ vspace: pd1
+ ipc_buffer_slot: frame1
+ cspace: cnode1
+ }
+ pd1 {
+ 0x10: pt1
+ }
+ pt1 {
+ 0x8: frame1 (RW)
+ 0x9: frame2 (R)
+ }
+ tcb2 {
+ vspace: pd2
+ ipc_buffer_slot: frame3
+ cspace: cnode2
+ }
+ pd2 {
+ 0x10: pt2
+ }
+ pt2 {
+ 0x10: frame3 (RW)
+ 0x12: frame4 (R)
+ }
+}
--- /dev/null
+2 3 + CR .
+: F ( blah ) DUP DROP 1 + ;
+1 F CR .
{{else}}
<button {{action expand}}>Show More...</button>
{{/if}}
+
+{{> myPartial}}
+{{> myPartial var="value" }}
+{{> myPartial var=../value}}
+{{> (myPartial)}}
+{{> (myPartial) var="value"}}
+{{> (lookup . "myPartial")}}
+{{> ( lookup . "myPartial" ) var="value" }}
+{{> (lookup ../foo "myPartial") var="value" }}
+{{> @partial-block}}
+
+{{#>myPartial}}
+...
+{{/myPartial}}
+
+{{#*inline "myPartial"}}
+...
+{{/inline}}
+
+{{../name}}
+{{./name}}
+{{this/name}}
--- /dev/null
+우주메이저☆듀렉스전도사♀♂♀♂♀♂♀♂♀♂♀♂♀♂♀♂♀♂♀♂♀♂♀♂♀♂♀♂♀♂♀♂♀♂♀♂♀♂♀♂♀♂♀♂♀♂♀♂♀♂♀♂♀♂♀♂♀♂♀♂♀♂♀♂♀♂♀♂♀♂♀♂♀♂♀♡먊
+삶은밥과야근밥샤주세양♡밥사밥사밥사밥사밥사땅땅땅빵☆따밦내발따밦다빵맣밥밥밥내놔밥줘밥밥밥밗땅땅땅박밝땅땅딻타밟타맣밦밣따박타맣밦밣따박타맣밦밣따박타맣박빵빵빵빵따따따따맣삶몲
+Original Source by @harunene // Run it on AheuiChem(http://yoo2001818.github.io/AheuiChem/)
+https://gist.github.com/item4/ca870a63b390da6cc6f1
) &>nul ver\r
if errorlevel 0 if not errorlevel 1 set /a _passed+=1\r
goto :eof\r
+FOR /F %%a IN ('%%c%%') DO %%a\r
+rem %x% %x% %x% %x% %x% %x% %x% %x% %x% %x% %x% %x% %x% %x% %x% %x%\r
:/?\r
goto :fail\r
-- some char literals:
charl = ['"', 'a', '\ESC', '\'', ' ']
+
+-- closed type families
+type family Fam (a :: Type) = r :: Type where
+ Fam Int = True
+ Fam a = False
+
+-- type literals
+type IntChar = '[Int, Char]
+type Falsy = 'False
+type Falsy = '(10, 20, 30)
--- /dev/null
+#JSGF V1.0 UTF-8 en;\r
+\r
+grammar org.pygments.example;\r
+\r
+// comment /*\r
+/*\r
+ multi-line\r
+ comment\r
+ /* without nesting\r
+ @example doesn't mean anything here.\r
+*/\r
+/**/\r
+\r
+/**\r
+ * A silly @example grammar.\r
+ *\r
+ * @author David Corbett\r
+ * @version 1\r
+* @see <org.pygments.example.en>\r
+ * @example hello world\r
+ @example hello ","\r
+ *\r
+ **@blah\r
+ **world\r
+ */\r
+public <sentence> = (/1/<en> | / 0.8f /<fr> |/0/<VOID>""""{}{})*<NULL>;\r
+<org.pygments.example.fr> = bonjour {hello} [,] "le monde" {world};\r
+<en> = (/56/hello|/3.14e3/"\"hello\"") {hello} {{ {\\//\} } world {world} !+ ;\r
--- /dev/null
+/* Block comment */
+/*
+ Multiline block
+ comment
+*/
+
+// inline comment
+function juttleFunction(arg) {
+ if (arg == null) {
+ return null;
+ }
+ else if (arg == 0) {
+ return 'zero';
+ }
+ else if (arg == 1) {
+ return "one";
+ }
+ else {
+ return 1.1;
+ }
+}
+
+reducer juttleReducer(field) {
+ var x = 0;
+ function update() {
+ x = *field;
+ }
+
+ function result() {
+ return x;
+ }
+}
+
+sub myemit(limit) {
+ emit -limit limit
+}
+
+input test: text -default 'input';
+const object = {
+ xyz: 123,
+ name: 'something'
+};
+
+const array = [
+ :2016-01-01:,
+ :2016-01-01T01:00:00:,
+ :2016-01-01T01:00:00.000:,
+ :2016-01-01T01:00:00.000Z:,
+ :2016-01-01T01:00:00.000-0800:,
+ :2016-01-01T01:00:00.000-08:00:,
+ :00:00:01:,
+ :00:00:00.001:,
+ :now:,
+ :beginning:,
+ :end:,
+ :forever:,
+ :yesterday:,
+ :today:,
+ :tomorrow:,
+ :1:,
+ :1.1:,
+ :1s:,
+ :1 second:,
+ :1 seconds:,
+ :100ms:,
+ :100 millisecond:,
+ :100 milliseconds:,
+ :1d:,
+ :1 day:,
+ :1 days:,
+ :.2h:,
+ :1.2h:,
+ :.2 hour:,
+ :1.2 hours:,
+ :.5d:,
+ :1.5d:,
+ :.5 day:,
+ :1.5 days:,
+ :5m:,
+ :5 minutes:,
+ :10w:,
+ :10 weeks:,
+ :10M:,
+ :10 months:,
+ :100y:,
+ :100 years:,
+ :1 year and 2 months and 2 days:
+];
+
+emit
+ | batch :10 minutes:
+ | filter x=true
+ | head 1
+ | join
+ | keep x
+ | pace -every :1 minute:
+ | pass
+ | put y=false
+ | remove z
+ | sequence
+ | skip 1
+ | sort field -desc
+ | split field
+ | tail 10
+ | unbatch
+ | uniq field
+;
+
+read adapter -last :day: 'search' AND field~/pattern/ OR field == 'string'
+ | write adapter
return DebugLib.Assert(addonName, test, message)
end
+--[==[
+Here follow further tests of Lua syntax.
+]]==]
+---[[
+local t = {
+ [ [[
+x
+]==] \]]]=1|2; a={b={c={}}},
+ 1, 1., 1.2, .2, 1e3, 1.e3, 1.2e3, .2e3, 1.2e+3, 1.2E-3;
+ 0xA, 0Xa, 0xA., 0x.F, 0xA.F, 0xA.Fp1, 0xA.FP+1, 0Xa.fp-1;
+}
+function t.f()
+ goto eof
+ os.exit()
+ :: eof ::
+end
+
+function t . a --[==[x]==] .b --[==[y]==] --
+-- () end
+ . c : d (file)
+ return '.\a.\b.\f.\n.\r.\t.\v.\\.\".\'.\
+.\z
+ .\0.\00.\000.\0000.\xFa.\u{1}.\u{1234}'
+end
--- /dev/null
+# this is a header
+
+## this is a 2nd level header
+
+* list item 1
+ * list item 1.1
+* list item 2
+- list item 3
+
+1. numbered list item 1
+1. numbered list item 2
+
+- [ ] todo
+- [x] done
+- [X] done
+
+The following is italic: *italic*
+The following is italic: _italic_
+
+The following is not italic: \*italic\*
+The following is not italic: \_italic\_
+
+The following is not italic: snake*case*word
+The following is not italic: snake_case_word
+
+The following is bold: **bold** **two or more words**
+The following is bold: __bold__ __two or more words__
+
+The following is not bold: snake**case**word
+The following is not bold: snake__case__word
+
+The following is strikethrough: ~~bold~~
+The following is not strikethrough: snake~~case~~word
+
+The following is bold with italics inside: **the next _word_ should have been italics**
+
+> this is a quote
+
+> this is a multiline
+> quote string thing
+
+this sentence `has monospace` in it
+
+this sentence @tweets a person about a #topic.
+
+[google](https://google.com/some/path.html)
+
+
+```
+ * this is just unformatted
+ __text__
+```
+
+some other text
+
+```python
+from pygments import token
+# comment
+```
+
+some more text
--- /dev/null
+<div>\r
+ <p>{{order.DueTime | date:'d. MMMM yyyy HH:mm'}}</p>\r
+ <p>Status: {{order.OrderState}}</p>\r
+ <button (click)="deleteOrder()" *ngIf="cancelable" [value]="test" [(twoWayTest)]="foo.bar">Remove</button>\r
+ <ul>\r
+ <li *ngFor="#meal of order.Positions">\r
+ {{meal.Name}}\r
+ </li>\r
+ </ul>\r
+ <p>Preis: <b>{{order.TotalPrice | currency:'EUR':true:'1.2-2'}}</b></p>\r
+</div>
\ No newline at end of file
sentence Blank
sentence My_sentence This should all be a string
text My_text This should also all be a string
- word My_word Only the first word is a string, the rest is invalid
+ word My_word Only the first word is a string, the rest is discarded
boolean Binary 1
boolean Text no
boolean Quoted "yes"
comment This should be a string
+ optionmenu Choice: 1
+ option Foo
+ option Bar
+ option 100
real left_Range -123.6
positive right_Range_max 3.3
integer Int 4
natural Nat 4
endform
+# Periods do not establish boundaries for keywords
+form.var = 10
+# Or operators
+not.an.operator$ = "Bad variable name"
+bad.or.not = 1
+
# External scripts
include /path/to/file
runScript: "/path/to/file"
string$ = "Strings can be 'interpolated'"
string$ = "But don't interpolate everything!"
+string$(10)
+
+repeat
+ string$ = string$ - right$(string$)
+until !length(string$)
Text... 1 Right 0.2 Half many----hyphens
Text... 1 Right -0.4 Bottom aحبيبa
Text... 1 Right -0.6 Bottom 日本
Draw circle (mm)... 0.5 0.5 i
-x=1
rows = Object_'table'.nrow
value$ = Table_'table'$[25, "f0"]
n = numberOfSelected("Sound")
for i from newStyle.local to n
name = selected$(extractWord$(selected$(), " "))
- sound'i' = selected("Sound", i)
+ sound'i' = selected("Sound", i+(a*b))
sound[i] = sound'i'
endfor
-for i from 1 to n
+i = 1
+while i < n
+ i++
# Different styles of object selection
select sound'i'
sound = selected()
sound$ = selected$("Sound")
select Sound 'sound$'
- selectObject(sound[i])
+ selectObject( sound[i])
selectObject: sound
# Pause commands
# Multi-line command with modifier
pitch = noprogress To Pitch (ac): 0, 75, 15, "no",
...0.03, 0.45, 0.01, 0.35, 0.14, 600
+ # Formulas are strings
+ Formula: "if col = 1 then row * Object_'pitch'.dx + 'first' else self fi"
# do-style command with assignment
minimum = do("Get minimum...", 0, 0, "Hertz", "Parabolic")
# New-style multi-line command call with broken strings
table = Create Table with column names: "table", 0,
- ..."file subject speaker
- ...f0 f1 f2 f3 " +
+ ..."file subject speaker
+ ... f0 f1 f2 f" + string$(3) + " " +
..."duration response"
# Function call with trailing space
demoWaitForInput ( )
demo Erase all
demo Text: 50, "centre", 50, "half", "Finished"
-endfor
+endwhile
switch$ = if switch == 1 then "a" else
... if switch == 2 then "b" else
assert a <> b || c
assert a < b | c
assert a > b
+
+assert (a)or (b)
+assert (a) or(b)
+assert (a)and(b)
+
assert "hello" = "he" + "llo"
assert "hello" == "hello world" - " world"
asserterror Unknown symbol:'newline$'« _
assert '_new_style.local'
+@proc: a, selected("string"), b
+# Comment
+
+for i to saveSelection.n
+ selectObject: saveSelection.id[i]
+ appendInfoLine: selected$()
+endfor
+
+@ok(if selected$("Sound") = "tone" then 1 else 0 fi,
+ ... "selected sound is tone")
+
+@ok_formula("selected$(""Sound"") = ""tone""", "selected sound is tone")
--- /dev/null
+/* Stemmer for Esperanto in UTF-8 */\r
+\r
+strings ()\r
+\r
+integers ()\r
+\r
+booleans ( foreign )\r
+\r
+routines (\r
+ apostrophe\r
+ canonical_form\r
+ correlative\r
+ interjection\r
+ short_word\r
+ standard_suffix\r
+ unuj\r
+)\r
+\r
+externals ( stem )\r
+\r
+groupings ( vowel aiou ao ou )\r
+\r
+stringdef a' decimal '225'\r
+stringdef e' hex 'E9'\r
+stringdef i' hex 'ED'\r
+stringdef o' hex ' f3'\r
+stringdef u' hex 'fa '\r
+\r
+stringdef cx hex '0109'\r
+stringdef gx hex '011D'\r
+stringdef hx hex '0125'\r
+stringdef jx hex '0135'\r
+stringdef sx hex '015D'\r
+stringdef ux hex '016D'\r
+\r
+define canonical_form as repeat (\r
+ [substring]\r
+ among (\r
+stringescapes //\r
+ '/a'/' (<- 'a' set foreign)\r
+ '/e'/' (<- 'e' set foreign)\r
+ '/i'/' (<- 'i' set foreign)\r
+ '/o'/' (<- 'o' set foreign)\r
+ '/u'/' (<- 'u' set foreign)\r
+stringescapes `'\r
+ 'cx' (<- '`cx'')\r
+ 'gx' (<- '`gx'')\r
+ 'hx' (<- '`hx'')\r
+ 'jx' (<- '`jx'')\r
+ 'sx' (<- '`sx'')\r
+ 'ux' (<- '`ux'')\r
+ '' (next)\r
+ )\r
+)\r
+\r
+backwardmode (\r
+ stringescapes { }\r
+\r
+ define apostrophe as (\r
+ (['un{'}'] atlimit <- 'unu') or\r
+ (['l{'}'] atlimit <- 'la') or\r
+ (['{'}'] <- 'o')\r
+ )\r
+\r
+ define vowel 'aeiou'\r
+ define aiou vowel - 'e'\r
+ define ao 'ao'\r
+ define ou 'ou'\r
+\r
+ define short_word as not (loop (maxint * 0 + 4 / 2) gopast vowel)\r
+\r
+ define interjection as (\r
+ among ('adia{ux}' 'aha' 'amen' 'hola' 'hura' 'mia{ux}' 'muu' 'oho')\r
+ atlimit\r
+ )\r
+\r
+ define correlative as (\r
+ []\r
+ // Ignore -al, -am, etc. since they can't be confused with suffixes.\r
+ test (\r
+ ('a' or (try 'n'] 'e') or (try 'n' try 'j'] ou))\r
+ 'i'\r
+ try ('k' or 't' or '{cx}' or 'nen')\r
+ atlimit\r
+ )\r
+ delete\r
+ )\r
+\r
+ define unuj as (\r
+ [try 'n' 'j'] 'unu' atlimit delete\r
+ )\r
+\r
+ define standard_suffix as (\r
+ [\r
+ try ((try 'n' try 'j' ao) or (try 's' aiou) or (try 'n' 'e'))\r
+ try '-' try 'a{ux}'\r
+ ] delete\r
+ )\r
+)\r
+\r
+define stem as (\r
+ do canonical_form\r
+ not foreign\r
+ backwards (\r
+ do apostrophe\r
+ short_word or interjection or\r
+ correlative or unuj or do standard_suffix\r
+ )\r
+)\r
--- /dev/null
+;----------------------------------------------------------------------------;
+; Does A* pathfinding for rockraiders and vehicles
+;
+; Copyright 2015 Ruben De Smet
+;
+; Redistribution and use in source and binary forms, with or without
+; modification, are permitted provided that the following conditions are
+; met:
+;
+; (1) Redistributions of source code must retain the above copyright
+; notice, this list of conditions and the following disclaimer.
+;
+; (2) Redistributions in binary form must reproduce the above copyright
+; notice, this list of conditions and the following disclaimer in
+; the documentation and/or other materials provided with the
+; distribution.
+;
+; (3) The name of the author may not be used to
+; endorse or promote products derived from this software without
+; specific prior written permission.
+;
+; THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+; IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+; WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+; DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT,
+; INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+; (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+; SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+; HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+; STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
+; IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+; POSSIBILITY OF SUCH DAMAGE.
+;
+;----------------------------------------------------------------------------;
+
+IDEAL
+P386
+MODEL FLAT, C
+ASSUME cs:_TEXT,ds:FLAT,es:FLAT,fs:FLAT,gs:FLAT
+
+INCLUDE "ASTAR.INC"
+INCLUDE "READLVL.INC"
+INCLUDE "DEBUG.INC"
+
+STRUC TPriorityField
+ heuristic dd ?
+ distance dd ?
+ x db ?
+ y db ?
+ fromx db ?
+ fromy db ?
+ENDS
+
+STRUC TField
+ distance dd ?
+ x db ?
+ y db ?
+ENDS
+
+CODESEG
+
+PROC getPath
+ USES ecx
+ ARG @@tgtx:dword, \
+ @@tgty:dword \
+ RETURNS eax, ebx ; eax contains x, ebx contains y
+
+ call getLevelWidth
+ imul eax, [@@tgty]
+ add eax, [@@tgtx]
+ imul eax, SIZE TField
+ add eax, offset backtraceGraph
+ mov ecx, eax
+
+ xor eax, eax
+ xor ebx, ebx
+
+ mov al, [(TField ptr ecx).x]
+ mov bl, [(TField ptr ecx).y]
+
+ ret
+ENDP getPath
+
+PROC findPath
+ ; eax will contain a 1 when a path has been found
+ ; 0 otherwise.
+ ARG @@srcx:dword, \
+ @@srcy:dword, \
+ @@tgtx:dword, \
+ @@tgty:dword, \
+ @@type:dword \
+ RETURNS eax
+
+ ; Check whether the target field is "allowed" for
+ ; the selected vehicle or rock raider
+ call getField, [@@tgtx], [@@tgty]
+ mov al, [byte ptr eax]
+ and eax, 0FFh
+
+ add eax, offset actionTable
+ mov eax, [eax]
+ and eax, [@@type] ; TODO: for now, rock raider is hard coded
+ jnz @canGoToTarget
+
+ mov eax, 0
+ ret
+@canGoToTarget:
+
+ call cleanData
+ mov eax, [@@type]
+ mov [currentType], eax
+
+ mov eax, [@@srcx]
+ mov [currentOpen.x], al
+ mov eax, [@@srcy]
+ mov [currentOpen.y], al
+
+ call distance, [@@srcx], [@@srcy], [@@tgtx], [@@tgty]
+ ; eax <- distance
+ call addOpen, [@@srcx], [@@srcy], eax, 0
+
+@openListNotEmpty:
+ call popOpen
+ cmp eax, 0
+ je @openListEmpty
+
+ call addToMap
+
+ call addClosed
+
+ mov eax, [@@tgtx]
+ cmp [currentOpen.x], al
+ jne @nextOpen
+ mov eax, [@@tgty]
+ cmp [currentOpen.y], al
+ jne @nextOpen
+
+ jmp @routeFound
+
+ @nextOpen:
+ call addNeighbours, [@@tgtx], [@@tgty]
+
+ jmp @openListNotEmpty
+
+@openListEmpty:
+ mov eax, 0
+ ret
+
+@routeFound:
+ mov eax, 1
+ ret
+ENDP findPath
+
+PROC addToMap
+ USES eax, ecx
+
+ call getLevelWidth
+ xor ecx, ecx
+ mov cl, [currentOpen.y]
+ imul eax, ecx
+ mov cl, [currentOpen.x]
+ add eax, ecx
+ imul eax, SIZE TField
+ add eax, offset backtraceGraph
+
+ mov ecx, [currentOpen.distance]
+ cmp [(TField ptr eax).distance], ecx
+ jbe @dontAdd
+
+ mov [(TField ptr eax).distance], ecx
+ mov cl, [currentOpen.fromx]
+ mov [(TField ptr eax).x], cl
+ mov cl, [currentOpen.fromy]
+ mov [(TField ptr eax).y], cl
+
+@dontAdd:
+ ret
+ENDP addToMap
+
+; isClosed checks whether the considered field is "closed" for being added to the open list.
+; It also checks whether the selected field can be entered.
+PROC isClosed
+ USES ebx, ecx, edx
+ ARG @@x:dword, \
+ @@y:dword RETURNS eax
+
+ ; Check bounds first:
+
+ call getLevelWidth
+ cmp [@@x], eax
+ ja notWithinBounds ; ja considers -1 > 10
+
+ call getLevelHeight
+ cmp [@@y], eax
+ ja notWithinBounds
+
+ ; Check whether this field is "allowed" for
+ ; the selected vehicle or rock raider
+ call getField, [@@x], [@@y]
+ mov al, [byte ptr eax]
+ and eax, 0FFh
+
+ add eax, offset actionTable
+ mov eax, [eax]
+ and eax, [currentType] ; TODO: for now, rock raider is hard coded
+ jnz @canGoHere
+
+
+ inc eax ; mov eax, 1
+ ret
+
+@canGoHere:
+
+ ; Getting here means the field is okay to walk/fly/whatever on
+
+ xor ecx, ecx
+ mov cx, [closedlistSize]
+ cmp cx, 0 ; If empty, return 0
+ jne @closedNotEmpty
+
+ mov eax, 0
+ ret
+
+@closedNotEmpty:
+ mov ebx, offset closedlist
+
+@loopClosed:
+ mov edx, [@@x]
+ cmp [(TField ptr ebx).x], dl
+ jne @nextClosed
+ mov edx, [@@y]
+ cmp [(TField ptr ebx).y], dl
+ jne @nextClosed
+
+ ; If reached here, yep, contained in closed list
+ mov eax, 1
+ ret
+
+ @nextClosed:
+ add ebx, SIZE TField
+ dec ecx
+ jnz @loopClosed
+
+ mov eax, 0
+ ret
+
+notWithinBounds:
+ mov eax, 1
+ ret
+ENDP isClosed
+
+PROC addNeighbours
+ USES eax, ebx, ecx, edx
+ ARG @@tgtx:dword, \
+ @@tgty:dword
+ ; Push all neighbours of currentOpen on openList
+
+ xor ebx, ebx
+ xor ecx, ecx
+
+ mov bl, [currentOpen.x]
+ mov cl, [currentOpen.y]
+ mov edx, [currentOpen.distance]
+ inc edx ; Next distance is one more.
+
+ ; Up
+ dec ecx
+ call isClosed, ebx, ecx
+ cmp eax, 0
+ jne @noUp
+ call distance, ebx, ecx, [@@tgtx], [@@tgty]
+ add eax, edx
+ call addOpen, ebx, ecx, eax, edx
+ @noUp:
+ inc ecx
+
+ ; Right
+ inc ebx
+ call isClosed, ebx, ecx
+ cmp eax, 0
+ jne @noRight
+ call distance, ebx, ecx, [@@tgtx], [@@tgty]
+ add eax, edx
+ call addOpen, ebx, ecx, eax, edx
+ @noRight:
+ dec ebx
+
+ ; Left
+ dec ebx
+ call isClosed, ebx, ecx
+ cmp eax, 0
+ jne @noLeft
+ call distance, ebx, ecx, [@@tgtx], [@@tgty]
+ add eax, edx
+ call addOpen, ebx, ecx, eax, edx
+ @noLeft:
+ inc ebx
+
+ ; Down
+ inc ecx
+ call isClosed, ebx, ecx
+ cmp eax, 0
+ jne @noDown
+ call distance, ebx, ecx, [@@tgtx], [@@tgty]
+ add eax, edx
+ call addOpen, ebx, ecx, eax, edx
+ @noDown:
+ dec ecx
+
+ ret
+ENDP addNeighbours
+
+PROC popOpen
+ ARG RETURNS eax
+ USES ebx, ecx, edx, esi, edi
+ ; eax contains the smallest current heuristic
+ ; ebx contains the index of that field
+
+ cmp [openlistSize], 0 ; If empty, return 0
+ jne @goForth
+
+ mov eax, 0
+ ret
+
+@goForth:
+
+ mov eax, 0FFFFFFFFh ; Longest distance possible in 32 bits.
+ xor ebx, ebx
+ xor ecx, ecx ; ecx contains the current index
+
+@searchFurther:
+ mov edx, ecx
+ imul edx, SIZE TPriorityField
+ cmp [(TPriorityField ptr (openlist + edx)).heuristic], eax
+ ja @notBetter
+ ; Better guess found, put right values in eax and ebx
+ mov eax, [(TPriorityField ptr (openlist + edx)).heuristic]
+ mov ebx, ecx
+
+@notBetter:
+
+ inc ecx
+ cmp cx, [openlistSize]
+ jne @searchFurther
+
+ ; By now, we have found the right item to pop from the priority queue.
+
+ ; Move the correct item in currentOpen
+ mov ecx, SIZE TPriorityField
+ mov esi, ebx
+ imul esi, ecx
+ add esi, offset openlist
+
+ mov edi, offset currentOpen
+ rep movsb
+
+ ; Now remove the popped item from the vector
+
+ xor ecx, ecx
+ mov cx, [openlistSize]
+ sub ecx, ebx
+ dec ecx
+ imul ecx, SIZE TPriorityField
+ mov edi, esi
+ sub edi, SIZE TPriorityField
+ rep movsb
+
+ dec [openlistSize]
+ mov eax, 1
+ ret
+ENDP popOpen
+
+PROC addClosed
+ USES eax, ebx
+
+ xor ebx, ebx
+ xor eax, eax
+
+ mov bx, [closedlistSize]
+ imul ebx, SIZE TField
+ add ebx, offset closedlist ; ebx contains the target TField
+
+ mov al, [currentOpen.x]
+ mov [(TField ptr ebx).x], al
+ mov al, [currentOpen.y]
+ mov [(TField ptr ebx).y], al
+ mov eax, [currentOpen.distance]
+ mov [(TField ptr ebx).distance], eax
+
+ inc [closedlistSize]
+ cmp [closedlistSize], CLOSED_LIST_SIZE_MAX
+ jne @noProblemWithClosedVector
+
+ xor eax, eax
+ mov ax, [closedlistSize]
+ call crash, offset closedOutOfMemory, eax
+
+@noProblemWithClosedVector:
+ ret
+ENDP addClosed
+
+PROC addOpen
+ USES eax, ebx
+ ARG @@x:dword, \
+ @@y:dword, \
+ @@priority:dword, \
+ @@distance:dword
+
+ xor eax, eax
+ mov ax, [openlistSize]
+ imul eax, SIZE TPriorityField
+ add eax, offset openlist
+
+ mov ebx, [@@x]
+ mov [(TPriorityField ptr eax).x], bl
+ mov ebx, [@@y]
+ mov [(TPriorityField ptr eax).y], bl
+
+ mov bl, [currentOpen.x]
+ mov [(TPriorityField ptr eax).fromx], bl
+ mov bl, [currentOpen.y]
+ mov [(TPriorityField ptr eax).fromy], bl
+
+ mov ebx, [@@priority]
+ mov [(TPriorityField ptr eax).heuristic], ebx
+ mov ebx, [@@distance]
+ mov [(TPriorityField ptr eax).distance], ebx
+
+ inc [openlistSize]
+ cmp [openlistSize], OPEN_LIST_SIZE_MAX
+ jne @noProblem
+
+ xor eax, eax
+ mov ax, [openlistSize]
+ call crash, offset openOutOfMemory, eax
+
+@noProblem:
+ ret
+ENDP
+
+PROC distance
+ USES ebx
+ ARG @@srcx:dword, \
+ @@srcy:dword, \
+ @@tgtx:dword, \
+ @@tgty:dword \
+ RETURNS eax
+
+ mov eax, [@@srcx]
+ sub eax, [@@tgtx]
+
+ jns @noSignChangex
+ neg eax
+
+ @noSignChangex:
+
+ mov ebx, [@@srcy]
+ sub ebx, [@@tgty]
+
+ jns @noSignChangey
+ neg ebx
+
+ @noSignChangey:
+ add eax, ebx
+ ret
+ENDP distance
+
+PROC cleanData
+ USES eax, ecx
+ mov [openlistSize], 0
+ mov [closedlistSize], 0
+
+ mov [currentOpen.x], -1
+ mov [currentOpen.y], -1
+ mov [currentOpen.distance], 0
+
+ call getLevelWidth
+ mov ecx, eax
+ call getLevelHeight
+ imul ecx, eax
+
+ mov eax, offset backtraceGraph
+@fieldIter:
+ mov [(TField ptr eax).distance], 0ffffffffh ; Set to approximately +inf
+ mov [(TField ptr eax).x], 0
+ mov [(TField ptr eax).y], 0
+ add eax, SIZE TField
+ dec ecx
+ jnz @fieldIter
+
+ ret
+ENDP cleanData
+
+DATASEG
+
+openOutOfMemory db "Out of openlistSize memory. Hi dev: Please increase$"
+closedOutOfMemory db "Out of closedlistSize memory. Hi dev: Please increase$"
+
+; power | discover | walking | sailing | flying
+actionTable db 00001101b, \ ;EMPTY
+ 00001101b, \ ;RUBBLE
+ 00000000b, \ ;GRAVEL
+ 00000000b, \ ;LOOSE ROCK
+ 00000000b, \ ;HARD ROCK
+ 00000000b, \ ;MASSIVE ROCK
+ 00000000b, \ ;CRYSTAL SOURCE
+ 00000000b, \ ;OREROCK
+ 00001011b, \ ;WATER
+ 00001001b, \ ;LAVA
+ 00001101b, \ ;SNAIL HOLE
+ 00001101b, \ ;EROSION
+ 00011101b, \ ;POWER PATH
+ 00011101b, \ ;BUILDING POWER PATH
+ 00011000b \ ;BUILDING
+
+UDATASEG
+
+currentType dd ?
+currentOpen TPriorityField ?
+
+openlist TPriorityField OPEN_LIST_SIZE_MAX dup(?)
+openlistSize dw ?
+closedlist TField CLOSED_LIST_SIZE_MAX dup(?)
+closedlistSize dw ?
+backtraceGraph TField MAX_LEVEL_SIZE dup(?)
+
+END
+++ /dev/null
-class Animal {
- constructor(public name) { }
- move(meters) {
- alert(this.name + " moved " + meters + "m.");
- }
-}
-
-class Snake extends Animal {
- constructor(name) { super(name); }
- move() {
- alert("Slithering...");
- super.move(5);
- }
-}
-
-class Horse extends Animal {
- constructor(name) { super(name); }
- move() {
- alert("Galloping...");
- super.move(45);
- }
-}
-
-@View({
- templateUrl: "app/components/LoginForm.html",
- directives: [FORM_DIRECTIVES, NgIf]
-})
-@Component({
- selector: "login-form"
-})
-class LoginForm {
-
-}
-
-var sam = new Snake("Sammy the Python")
-var tom: Animal = new Horse("Tommy the Palomino")
-
-sam.move()
-tom.move(34)
--- /dev/null
+/**
+ * Example Whiley program, taken from the Whiley benchmark suite.
+ * https://github.com/Whiley/WyBench/blob/master/src/101_interpreter/Main.whiley
+ */
+
+import whiley.lang.System
+import whiley.lang.Int
+import whiley.io.File
+import string from whiley.lang.ASCII
+import char from whiley.lang.ASCII
+
+// ====================================================
+// A simple calculator for expressions
+// ====================================================
+
+constant ADD is 0
+constant SUB is 1
+constant MUL is 2
+constant DIV is 3
+
+// binary operation
+type BOp is (int x) where ADD <= x && x <= DIV
+type BinOp is { BOp op, Expr lhs, Expr rhs }
+
+// variables
+type Var is { string id }
+
+// list access
+type ListAccess is {
+ Expr src,
+ Expr index
+}
+
+// expression tree
+type Expr is int | // constant
+ Var | // variable
+ BinOp | // binary operator
+ Expr[] | // array constructor
+ ListAccess // list access
+
+// values
+type Value is int | Value[]
+
+// stmts
+type Print is { Expr rhs }
+type Set is { string lhs, Expr rhs }
+type Stmt is Print | Set
+
+// ====================================================
+// Expression Evaluator
+// ====================================================
+
+type RuntimeError is { string msg }
+type Environment is [{string k, Value v}]
+
+// Evaluate an expression in a given environment, reducing either to a
+// value or a runtime error. The latter occurs if evaluation gets
+// "stuck" (e.g. the expression is not well-formed)
+function evaluate(Expr e, Environment env) -> Value | RuntimeError:
+ //
+ if e is int:
+ return e
+ else if e is Var:
+ return env[e.id]
+ else if e is BinOp:
+ Value|RuntimeError lhs = evaluate(e.lhs, env)
+ Value|RuntimeError rhs = evaluate(e.rhs, env)
+ // check if stuck
+ if !(lhs is int && rhs is int):
+ return {msg: "arithmetic attempted on non-numeric value"}
+ // switch statement would be good
+ if e.op == ADD:
+ return lhs + rhs
+ else if e.op == SUB:
+ return lhs - rhs
+ else if e.op == MUL:
+ return lhs * rhs
+ else if rhs != 0:
+ return lhs / rhs
+ return {msg: "divide-by-zero"}
+ else if e is Expr[]:
+ [Value] r = []
+ for i in e:
+ Value|RuntimeError v = evaluate(i, env)
+ if v is RuntimeError:
+ return v
+ else:
+ r = r ++ [v]
+ return r
+ else if e is ListAccess:
+ Value|RuntimeError src = evaluate(e.src, env)
+ Value|RuntimeError index = evaluate(e.index, env)
+ // sanity checks
+ if src is [Value] && index is int && index >= 0 && index < |src|:
+ return src[index]
+ else:
+ return {msg: "invalid list access"}
+ else:
+ return 0 // dead-code
+
+// ====================================================
+// Expression Parser
+// ====================================================
+
+type State is { string input, int pos }
+type SyntaxError is { string msg, int start, int end }
+
+function SyntaxError(string msg, int start, int end) -> SyntaxError:
+ return { msg: msg, start: start, end: end }
+
+// Top-level parse method
+function parse(State st) -> (Stmt,State)|SyntaxError:
+ //
+ Var keyword, Var v
+ Expr e
+ int start = st.pos
+ //
+ keyword,st = parseIdentifier(st)
+ switch keyword.id:
+ case "print":
+ any r = parseAddSubExpr(st)
+ if !(r is SyntaxError):
+ e,st = r
+ return {rhs: e},st
+ else:
+ return r // error case
+ case "set":
+ st = parseWhiteSpace(st)
+ v,st = parseIdentifier(st)
+ any r = parseAddSubExpr(st)
+ if !(r is SyntaxError):
+ e,st = r
+ return {lhs: v.id, rhs: e},st
+ else:
+ return r // error case
+ default:
+ return SyntaxError("unknown statement",start,st.pos-1)
+
+function parseAddSubExpr(State st) -> (Expr, State)|SyntaxError:
+ //
+ Expr lhs, Expr rhs
+ // First, pass left-hand side
+ any r = parseMulDivExpr(st)
+ //
+ if r is SyntaxError:
+ return r
+ //
+ lhs,st = r
+ st = parseWhiteSpace(st)
+ // Second, see if there is a right-hand side
+ if st.pos < |st.input| && st.input[st.pos] == '+':
+ // add expression
+ st.pos = st.pos + 1
+ r = parseAddSubExpr(st)
+ if !(r is SyntaxError):
+ rhs,st = r
+ return {op: ADD, lhs: lhs, rhs: rhs},st
+ else:
+ return r
+ else if st.pos < |st.input| && st.input[st.pos] == '-':
+ // subtract expression
+ st.pos = st.pos + 1
+ r = parseAddSubExpr(st)
+ if !(r is SyntaxError):
+ rhs,st = r
+ return {op: SUB, lhs: lhs, rhs: rhs},st
+ else:
+ return r
+ // No right-hand side
+ return (lhs,st)
+
+function parseMulDivExpr(State st) -> (Expr, State)|SyntaxError:
+ // First, parse left-hand side
+ Expr lhs, Expr rhs
+ any r = parseTerm(st)
+ if r is SyntaxError:
+ return r
+ //
+ lhs,st = r
+ st = parseWhiteSpace(st)
+ // Second, see if there is a right-hand side
+ if st.pos < |st.input| && st.input[st.pos] == '*':
+ // multiply expression
+ st.pos = st.pos + 1
+ r = parseMulDivExpr(st)
+ if !(r is SyntaxError):
+ rhs,st = r
+ return {op: MUL, lhs: lhs, rhs: rhs}, st
+ else:
+ return r
+ else if st.pos < |st.input| && st.input[st.pos] == '/':
+ // subtract expression
+ st.pos = st.pos + 1
+ r = parseMulDivExpr(st)
+ if !(r is SyntaxError):
+ rhs,st = r
+ return {op: DIV, lhs: lhs, rhs: rhs}, st
+ else:
+ return r
+ // No right-hand side
+ return (lhs,st)
+
+function parseTerm(State st) -> (Expr, State)|SyntaxError:
+ //
+ st = parseWhiteSpace(st)
+ if st.pos < |st.input|:
+ if ASCII.isLetter(st.input[st.pos]):
+ return parseIdentifier(st)
+ else if ASCII.isDigit(st.input[st.pos]):
+ return parseNumber(st)
+ else if st.input[st.pos] == '[':
+ return parseList(st)
+ //
+ return SyntaxError("expecting number or variable",st.pos,st.pos)
+
+function parseIdentifier(State st) -> (Var, State):
+ //
+ string txt = ""
+ // inch forward until end of identifier reached
+ while st.pos < |st.input| && ASCII.isLetter(st.input[st.pos]):
+ txt = txt ++ [st.input[st.pos]]
+ st.pos = st.pos + 1
+ return ({id:txt}, st)
+
+function parseNumber(State st) -> (Expr, State)|SyntaxError:
+ // inch forward until end of number reached
+ int start = st.pos
+ while st.pos < |st.input| && ASCII.isDigit(st.input[st.pos]):
+ st.pos = st.pos + 1
+ //
+ int|null iv = Int.parse(st.input[start..st.pos])
+ if iv == null:
+ return SyntaxError("Error parsing number",start,st.pos)
+ else:
+ return iv, st
+
+function parseList(State st) -> (Expr, State)|SyntaxError:
+ //
+ st.pos = st.pos + 1 // skip '['
+ st = parseWhiteSpace(st)
+ [Expr] l = [] // initial list
+ bool firstTime = true
+ while st.pos < |st.input| && st.input[st.pos] != ']':
+ if !firstTime && st.input[st.pos] != ',':
+ return SyntaxError("expecting comma",st.pos,st.pos)
+ else if !firstTime:
+ st.pos = st.pos + 1 // skip ','
+ firstTime = false
+ any r = parseAddSubExpr(st)
+ if r is SyntaxError:
+ return r
+ else:
+ Expr e
+ e,st = r
+ // perform annoying error check
+ l = l ++ [e]
+ st = parseWhiteSpace(st)
+ st.pos = st.pos + 1
+ return l,st
+
+// Parse all whitespace up to end-of-file
+function parseWhiteSpace(State st) -> State:
+ while st.pos < |st.input| && ASCII.isWhiteSpace(st.input[st.pos]):
+ st.pos = st.pos + 1
+ return st
+
+// ====================================================
+// Main Method
+// ====================================================
+
+public method main(System.Console sys):
+ if(|sys.args| == 0):
+ sys.out.println("no parameter provided!")
+ else:
+ File.Reader file = File.Reader(sys.args[0])
+ string input = ASCII.fromBytes(file.readAll())
+
+ Environment env = Environment()
+ State st = {pos: 0, input: input}
+ while st.pos < |st.input|:
+ Stmt s
+ any r = parse(st)
+ if r is SyntaxError:
+ sys.out.println("syntax error: " ++ r.msg)
+ return
+ s,st = r
+ Value|RuntimeError v = evaluate(s.rhs,env)
+ if v is RuntimeError:
+ sys.out.println("runtime error: " ++ v.msg)
+ return
+ if s is Set:
+ env[s.lhs] = v
+ else:
+ sys.out.println(r)
+ st = parseWhiteSpace(st)
+
--- /dev/null
+;;; example.xtm -- Extempore code examples
+
+;; Author: Ben Swift, Andrew Sorensen
+;; Keywords: extempore
+
+;;; Commentary:
+
+
+
+;;; Code:
+
+;; bit twiddling
+
+(xtmtest '(bind-func test_bit_twiddle_1
+ (lambda ()
+ (bitwise-and 65535 255 15 1)))
+
+ (test_bit_twiddle_1) 1)
+
+(xtmtest '(bind-func test_bit_twiddle_2
+ (lambda ()
+ (bitwise-not -1)))
+
+ (test_bit_twiddle_2) 0)
+
+(xtmtest '(bind-func test_bit_twiddle_3
+ (lambda ()
+ (bitwise-not 0)))
+
+ (test_bit_twiddle_3) -1)
+
+(xtmtest '(bind-func test_bit_twiddle_4
+ (lambda ()
+ (bitwise-shift-right 65535 8)
+ (bitwise-shift-right 65535 4 4)))
+
+ (test_bit_twiddle_4) 255)
+
+(xtmtest '(bind-func test_bit_twiddle_5
+ (lambda ()
+ (bitwise-shift-left (bitwise-shift-right 65535 8) 4 4)))
+
+ (test_bit_twiddle_5) 65280)
+
+(xtmtest '(bind-func test_bit_twiddle_6
+ (lambda ()
+ (bitwise-and (bitwise-or (bitwise-eor 21844 65534) (bitwise-eor 43690 65534)) 1)))
+
+ (test_bit_twiddle_6) 0)
+
+;; integer literals default to 64 bit integers
+(xtmtest '(bind-func int-literal-test
+ (lambda (a)
+ (* a 5)))
+
+ (int-literal-test 6) 30)
+
+;; float literals default to doubles
+(xtmtest '(bind-func float-literal-test
+ (lambda (a)
+ (* a 5.0)))
+
+ (float-literal-test 6.0) 30.0)
+
+;; you are free to recompile an existing closure
+(xtmtest '(bind-func int-literal-test
+ (lambda (a)
+ (/ a 5)))
+
+ (int-literal-test 30))
+
+(xtmtest '(bind-func closure-test1
+ (let ((power 0))
+ (lambda (x)
+ (set! power (+ power 1)) ;; set! for closure mutation as per scheme
+ (* x power))))
+
+ (closure-test1 2))
+
+(xtmtest '(bind-func closure-returns-closure-test
+ (lambda ()
+ (lambda (x)
+ (* x 3))))
+
+ (closure-returns-closure-test))
+
+(xtmtest '(bind-func incrementer-test1
+ (lambda (i:i64)
+ (lambda (incr)
+ (set! i (+ i incr))
+ i)))
+
+ (incrementer-test1 0))
+
+(define myf (incrementer-test1 0))
+
+;; so we need to type f properly
+(xtmtest '(bind-func incrementer-test2
+ (lambda (f:[i64,i64]* x)
+ (f x)))
+ (incrementer-test2 myf 1) 1)
+
+;; and we can call incrementer-test2
+;; to apply myf
+(xtmtest-result (incrementer-test2 myf 1) 2)
+(xtmtest-result (incrementer-test2 myf 1) 3)
+(xtmtest-result (incrementer-test2 myf 1) 4)
+
+;; of course the wrapper is only required if you
+;; need interaction with the scheme world.
+;; otherwise you just call incrementer-test1 directly
+
+;; this avoids the wrapper completely
+(xtmtest '(bind-func incrementer-test3
+ (let ((f (incrementer-test1 0)))
+ (lambda ()
+ (f 1))))
+
+ (incrementer-test3) 1)
+
+(xtmtest-result (incrementer-test3) 2)
+(xtmtest-result (incrementer-test3) 3)
+
+;; hopefully you're getting the idea.
+;; note that once we've compiled something
+;; we can then use it any of our new
+;; function definitions.
+
+;; do a little 16bit test
+(xtmtest '(bind-func bitsize-sixteen
+ (lambda (a:i16)
+ (dtoi16 (* (i16tod a) 5.0))))
+
+ (bitsize-sixteen 5) 25)
+
+;; while loop test
+
+(xtmtest '(bind-func test_while_loop_1
+ (lambda ()
+ (let ((count 0))
+ (while (< count 5)
+ (printf "count = %lld\n" count)
+ (set! count (+ count 1)))
+ count)))
+
+ (test_while_loop_1) 5)
+
+;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+;;
+;; Closures can be recursive
+;;
+
+(xtmtest '(bind-func recursive-closure-test
+ (lambda (a)
+ (if (< a 1)
+ (printf "done\n")
+ (begin (printf "a: %lld\n" a)
+ (recursive-closure-test (- a 1))))))
+
+ (recursive-closure-test 3))
+
+;; check TAIL OPTIMIZATION
+;; if there is no tail call optimization
+;; in place then this should blow the
+;; stack and crash the test
+
+;; CANNOT RUN THIS TEST ON WINDOWS (i.e. no salloc)!
+(if (not (equal? (sys:platform) "Windows"))
+ (xtmtest '(bind-func tail_opt_test
+ (lambda (n:i64)
+ (let ((a:float* (salloc 8000)))
+ (if (= n 0)
+ (printf "tail opt test passed!\n")
+ (tail_opt_test (- n 1))))))
+
+ (tail_opt_test 200)))
+
+(println 'A 'segfault 'here 'indicates 'that 'tail-call-optimizations 'are 'not 'working!)
+
+;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+;;
+;; some anon lambda tests
+;;
+
+(xtmtest '(bind-func infer_lambdas_test
+ (lambda ()
+ (let ((a 5)
+ (b (lambda (x) (* x x)))
+ (c (lambda (y) (* y y))))
+ (c (b a)))))
+
+ (infer_lambdas_test))
+
+
+;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+;; a simple tuple example
+;;
+;; tuple types are represented as <type,type,type>*
+;;
+
+;; make and return a simple tuple
+(xtmtest '(bind-func tuple-test1
+ (lambda ()
+ (let ((t:<i64,double,i32>* (alloc)))
+ t)))
+
+ (tuple-test1))
+
+;; logview shows [<i64,double,i32>*]*
+;; i.e. a closure that takes no arguments
+;; and returns the tuple <i64,double,i32>*
+
+
+;; here's another tuple example
+;; note that tuple-test2's return type is inferred
+;; from the tuple-reference index
+;; (i.e. i64 being tuple index 0)
+(xtmtest '(bind-func tuple-test2
+ (lambda ()
+ (let ((a:<i64,double>* (alloc)) ; returns pointer to type <i64,double>
+ (b 37)
+ (c 6.4))
+ (tuple-set! a 0 b) ;; set i64 to 37
+ (tset! a 1 c) ;; set double to 6.4 - tset! is an alias for tuple-set!
+ (printf "tuple:1 %lld::%f\n" (tuple-ref a 0) (tref a 1))
+ ;; we can fill a tuple in a single call by using tfill!
+ (tfill! a 77 77.7)
+ (printf "tuple:2 %lld::%f\n" (tuple-ref a 0) (tuple-ref a 1))
+ (tuple-ref a 0))))
+
+ (tuple-test2) 77)
+
+;; return first element which is i64
+;; should be 77 as tfill! overwrote the
+;; first element of the tuple
+;; (println (tuple-test2)) ; 77
+
+
+;; tbind binds variables to values
+;; based on tuple structure
+;; _ (underscore) means don't attempt
+;; to match against this position in
+;; the tuple (i.e. skip)
+(xtmtest '(bind-func tuple-bind-test
+ (lambda ()
+ (let ((t1:<i32,float,<i32,float>*,double>* (alloc))
+ (t2:<i32,float>* (alloc))
+ (a 0) (b:float 0.0) (c 0.0))
+ (tfill! t2 3 3.3)
+ (tfill! t1 1 2.0 t2 4.0)
+ (tbind t1 a b _ c)
+ c)))
+
+ (tuple-bind-test) 4.0)
+
+
+;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+;; some array code with *casting*
+;; this function returns void
+(xtmtest '(bind-func array-test1
+ (lambda ()
+ (let ((v1:|5,float|* (alloc))
+ (v2:|5,float|* (alloc))
+ (i 0)
+ (k 0))
+ (dotimes (i 5)
+ ;; random returns double so "truncate" to float
+ ;; which is what v expects
+ (array-set! v1 i (dtof (random))))
+ ;; we can use the afill! function to fill an array
+ (afill! v2 1.1 2.2 3.3 4.4 5.5)
+ (dotimes (k 5)
+ ;; unfortunately printf doesn't like floats
+ ;; so back to double for us :(
+ (printf "val: %lld::%f::%f\n" k
+ (ftod (array-ref v1 k))
+ (ftod (aref v2 k)))))))
+
+ (array-test1))
+
+;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+;; some crazy array code with
+;; closures and arrays
+;; try to figure out what this all does
+;;
+;; this example uses the array type
+;; the pretty print for this type is
+;; |num,type| num elements of type
+;; |5,i64| is an array of 5 x i64
+;;
+;; An array is not a pointer type
+;; i.e. |5,i64| cannot be bitcast to i64*
+;;
+;; However a pointer to an array can be
+;; bitcast to a pointer to its element type
+;; i.e. |5,i64|* can be bitcast to i64*
+;; i.e. |5,i64|** to i64** etc..
+;;
+;; make-array returns a pointer to an array
+;; i.e. (make-array 5 i64) returns type |5,i64|*
+;;
+;; aref (array-ref) and aset! (array-set!)
+;; can operate with either pointers to arrays or
+;; standard pointers.
+;;
+;; in other words aref and aset! are happy
+;; to work with either i64* or |5,i64|*
+
+(bind-func array-test2
+ (lambda (v:|5,i64|*)
+ (let ((f (lambda (x)
+ (* (array-ref v 2) x))))
+ f)))
+
+(bind-func array-test3
+ (lambda (v:|5,[i64,i64]*|*)
+ (let ((ff (aref v 0))) ; aref alias for array-ref
+ (ff 5))))
+
+(xtmtest '(bind-func array-test4
+ (lambda ()
+ (let ((v:|5,[i64,i64]*|* (alloc)) ;; make an array of closures!
+ (vv:|5,i64|* (alloc)))
+ (array-set! vv 2 3)
+ (aset! v 0 (array-test2 vv)) ;; aset! alias for array-set!
+ (array-test3 v))))
+
+ ;; try to guess the answer before you call this!!
+ (array-test4))
+
+;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+;; some conditionals
+
+(xtmtest '(bind-func cond-test1
+ (lambda (x:i64 y)
+ (if (> x y)
+ x
+ y)))
+
+ (cond-test1 12 13))
+
+;; returns boolean true
+(xtmtest '(bind-func cond-test2
+ (lambda (x:i64)
+ (cond ((= x 1) (printf "A\n"))
+ ((= x 2) (printf "B\n"))
+ ((= x 3) (printf "C\n"))
+ ((= x 4) (printf "D\n"))
+ (else (printf "E\n")))
+ #t))
+
+ (cond-test2 1))
+
+;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+;; making a linear envelope generator
+;; for signal processing and the like
+
+(bind-func envelope-segments
+ (lambda (points:double* num-of-points:i64)
+ (let ((lines:[double,double]** (zone-alloc num-of-points))
+ (k 0))
+ (dotimes (k num-of-points)
+ (let* ((idx (* k 2))
+ (x1 (pointer-ref points (+ idx 0)))
+ (y1 (pointer-ref points (+ idx 1)))
+ (x2 (pointer-ref points (+ idx 2)))
+ (y2 (pointer-ref points (+ idx 3)))
+ (m (if (= 0.0 (- x2 x1)) 0.0 (/ (- y2 y1) (- x2 x1))))
+ (c (- y2 (* m x2)))
+ (l (lambda (time) (+ (* m time) c))))
+ (pointer-set! lines k l)))
+ lines)))
+
+(bind-func make-envelope
+ (lambda (points:double* num-of-points)
+ (let ((klines:[double,double]** (envelope-segments points num-of-points))
+ (line-length num-of-points))
+ (lambda (time)
+ (let ((res -1.0)
+ (k:i64 0))
+ (dotimes (k num-of-points)
+ (let ((line (pointer-ref klines k))
+ (time-point (pointer-ref points (* k 2))))
+ (if (or (= time time-point)
+ (< time-point time))
+ (set! res (line time)))))
+ res)))))
+
+;; make a convenience wrapper
+(xtmtest '(bind-func env-wrap
+ (let* ((points 3)
+ (data:double* (zone-alloc (* points 2))))
+ (pointer-set! data 0 0.0) ;; point data
+ (pset! data 1 0.0)
+ (pset! data 2 2.0)
+ (pset! data 3 1.0)
+ (pset! data 4 4.0)
+ (pset! data 5 0.0)
+ (let ((f (make-envelope data points)))
+ (lambda (time:double)
+ (f time)))))
+ (env-wrap 0.0) 0.0)
+
+(xtmtest-result (env-wrap 1.0) 0.5)
+(xtmtest-result (env-wrap 2.0) 1.0)
+(xtmtest-result (env-wrap 2.5) 0.75)
+(xtmtest-result (env-wrap 4.0) 0.0)
+
+;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+;;
+;; direct access to a closures environment
+;;
+;; it is possible to directly access a closure's
+;; environment in order to read or modify data
+;; at runtime.
+;;
+;; You do this using a dot operator
+;; To access an environment slot you use
+;; closure.slot:type
+;; So for example
+;; (f.a:i32)
+;; would return the 32bit integer symbol 'a'
+;; from the closure 'f'
+;;
+;; To set an environment slot you just
+;; add a value of the correct type
+;; for example
+;; (f.a:i32 565)
+;; would set 'a' in 'f' to 565
+;;
+;; let's create a closure that captures 'a'
+
+
+(xtmtest '(bind-func dot-access-test1
+ (let ((a:i32 6))
+ (lambda ()
+ (printf "a:%d\n" a)
+ a)))
+ (dot-access-test1))
+
+;; now let's create a new function
+;; that calls dot-access-test1 twice
+;; once normally
+;; then we directly set the closure's 'a' binding
+;; then call again
+;;
+(xtmtest '(bind-func dot-access-test2
+ (lambda (x:i32)
+ (dot-access-test1)
+ (dot-access-test1.a:i32 x)
+ (dot-access-test1)))
+
+ (dot-access-test2 9))
+
+;; of course this works just as well for
+;; non-global closures
+(xtmtest '(bind-func dot-access-test3
+ (lambda (a:i32)
+ (let ((f (lambda ()
+ (* 3 a))))
+ f)))
+ (dot-access-test3 1))
+
+(xtmtest '(bind-func dot-access-test4
+ (lambda ()
+ (let ((f (dot-access-test3 5)))
+ (f.a:i32 7)
+ (f))))
+
+ (dot-access-test4)
+ 21)
+
+;; and you can get and set closures also!
+(xtmtest '(bind-func dot-access-test5
+ (lambda ()
+ (let ((f (lambda (x:i64) x)))
+ (lambda (z)
+ (f z)))))
+
+ (dot-access-test5))
+
+(xtmtest '(bind-func dot-access-test6
+ (lambda ()
+ (let ((t1 (dot-access-test5))
+ (t2 (dot-access-test5)))
+ ;; identity of 5
+ (printf "%lld:%lld\n" (t1 5) (t2 5))
+ (t1.f:[i64,i64]* (lambda (x:i64) (* x x)))
+ ;; square of 5
+ (printf "%lld:%lld\n" (t1 5) (t2 5))
+ ;; cube of 5
+ (t2.f:[i64,i64]* (lambda (y:i64) (* y y y)))
+ (printf "%lld:%lld\n" (t1 5) (t2 5))
+ void)))
+
+ (dot-access-test6)) ;; 5:5 > 25:5 > 25:125
+
+;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+;;
+;; named types
+
+;; it can sometimes be helpful to allocate
+;; a predefined tuple type on the stack
+;; you can do this using stack-alloc
+(bind-type vec3 <double,double,double>)
+
+;; String printing!
+(bind-func vec3_print:[void,vec3*]*
+ (lambda (x)
+ (printf "<%f,%f,%f>" (tref x 0) (tref x 1) (tref x 2))
+ void))
+
+(bind-poly print vec3_print)
+
+;; note that point is deallocated at the
+;; end of the function call. You can
+;; stack allocate (stack-alloc)
+;; any valid type (i64 for example)
+(xtmtest '(bind-func salloc-test
+ (lambda ()
+ (let ((point:vec3* (stack-alloc)))
+ (tset! point 0 0.0)
+ (tset! point 1 -1.0)
+ (tset! point 2 1.0)
+ 1)))
+
+ (salloc-test)) ;; 1
+
+;; all named types have 2 default constructors
+;; name (zone allocation) + name_h (heap allocation)
+;; and a default print poly
+(xtmtest '(bind-func data-constructor-test
+ (lambda ()
+ (let ((v1 (vec3 1.0 2.0 3.0))
+ (v2 (vec3_h 4.0 5.0 6.0)))
+ (println v1 v2)
+ ;; halloced vec3 needs freeing
+ (free v2)
+ void)))
+
+ (data-constructor-test))
+
+;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+;;
+;; aref-ptr and tref-ptr
+;;
+
+;; aref-ptr and tref-ptr return a pointer to an element
+;; just as aref and tref return elements, aref-ptr and
+;; tref-ptr return a pointer to those elements.
+
+;; This allows you to do things like create an array
+;; with an offset
+(xtmtest '(bind-func aref-ptr-test
+ (lambda ()
+ (let ((arr:|32,i64|* (alloc))
+ (arroff (aref-ptr arr 16))
+ (i 0)
+ (k 0))
+ ;; load arr
+ (dotimes (i 32) (aset! arr i i))
+ (dotimes (k 16)
+ (printf "index: %lld\tarr: %lld\tarroff: %lld\n"
+ k (aref arr k) (pref arroff k))))))
+
+ (aref-ptr-test))
+
+;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+;;
+;; arrays
+;; Extempore lang supports arrays as first class
+;; aggregate types (in other words as distinct from
+;; a pointer).
+;;
+;; an array is made up of a size and a type
+;; |32,i64| is an array of 32 elements of type i64
+;;
+
+(bind-type tuple-with-array <double,|32,|4,i32||,float>)
+
+(xtmtest '(bind-func array-test5
+ (lambda ()
+ (let ((tup:tuple-with-array* (stack-alloc))
+ (t2:|32,i64|* (stack-alloc)))
+ (aset! t2 0 9)
+ (tset! tup 2 5.5)
+ (aset! (aref-ptr (tref-ptr tup 1) 0) 0 0)
+ (aset! (aref-ptr (tref-ptr tup 1) 0) 1 1)
+ (aset! (aref-ptr (tref-ptr tup 1) 0) 2 2)
+ (printf "val: %lld %lld %f\n"
+ (aref (aref-ptr (tref-ptr tup 1) 0) 1)
+ (aref t2 0) (ftod (tref tup 2)))
+ (aref (aref-ptr (tref-ptr tup 1) 0) 1))))
+
+ (array-test5) 1) ;; val: 1 9 5.5
+
+;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+;;
+;; Global Variables
+;;
+;; You can allocate global variables using bind-val
+;;
+
+(bind-val g_var_a i32 5)
+
+;; increment g_var_a by incr
+;; and return new value of g_var_a
+(xtmtest '(bind-func global_var_test1
+ (lambda (incr)
+ (set! g_var_a (+ g_var_a incr))
+ g_var_a))
+
+ (global_var_test1 3) 8) ;; 8
+
+;; you can bind any primitive type
+(bind-val g_var_b double 5.5)
+(bind-val g_var_c i1 0)
+
+(xtmtest '(bind-func global_var_test1b
+ (lambda ()
+ (* g_var_b (if g_var_c 1.0 4.0))))
+
+ (global_var_test1b) 22.0)
+
+;; global strings
+
+(bind-val g_cstring i8* "Jiblet.")
+
+(xtmtest '(bind-func test_g_cstring
+ (lambda ()
+ (let ((i 0))
+ (dotimes (i 7)
+ (printf "g_cstring[%lld] = %c\n" i (pref g_cstring i)))
+ (printf "\nSpells... %s\n" g_cstring))))
+
+ (test_g_cstring))
+
+(xtmtest '(bind-func test_g_cstring1
+ (lambda ()
+ (let ((test_cstring "Niblot.")
+ (i 0)
+ (total 0))
+ (dotimes (i 7)
+ (let ((c1 (pref g_cstring i))
+ (c2 (pref test_cstring i)))
+ (printf "checking %c against %c\n" c1 c2)
+ (if (= c1 c2)
+ (set! total (+ total 1)))))
+ total)))
+
+ (test_g_cstring1) 5)
+
+
+;; for tuples, arrays and vectors, bind-val only takes *two*
+;; arguments. The tuple/array/vector will be initialised to zero.
+
+(bind-val g_tuple1 <i64,i64>)
+(bind-val g_tuple2 <double,double>)
+
+(xtmtest '(bind-func test_g_tuple
+ (lambda ()
+ (tfill! g_tuple1 1 4)
+ (tfill! g_tuple2 4.0 1.0)
+ (and (= (tref g_tuple1 0) (dtoi64 (tref g_tuple2 1)))
+ (= (dtoi64 (tref g_tuple2 0)) (tref g_tuple1 1)))))
+
+ (test_g_tuple) 1)
+
+;; same thing with arrays
+
+(bind-val g_array1 |10,double|)
+(bind-val g_array2 |10,i64|)
+
+;; if we just loop over and print the values in each array
+
+(xtmtest '(bind-func test_g_array11
+ (lambda ()
+ (let ((i 0))
+ (dotimes (i 10)
+ (printf "garray_1[%lld] = %f garray_2[%lld] = %lld\n"
+ i (aref g_array1 i) i (aref g_array2 i))))))
+
+ (test_g_array11) 1)
+
+;; but if we loop over and set some values into the arrays
+
+(xtmtest '(bind-func test_g_array2
+ (lambda ()
+ (let ((i 0))
+ (dotimes (i 10)
+ (aset! g_array1 i (i64tod i))
+ (aset! g_array2 i i)
+ (printf "garray_1[%lld] = %f garray_2[%lld] = %lld\n"
+ i (aref g_array1 i) i (aref g_array2 i)))
+ (= (dtoi64 (aref g_array1 5))
+ (aref g_array2 5)))))
+
+ (test_g_array2) 1)
+
+;; just to test, let's try a large array
+
+(bind-val g_array3 |100000000,i64|)
+
+(xtmtest '(bind-func test_g_array3
+ (lambda ()
+ (let ((i 0))
+ (dotimes (i 100000000)
+ (aset! g_array3 i i))
+ (= (pref g_array3 87654321)
+ 87654321))))
+
+ (test_g_array3) 1)
+
+;; if you want to bind a global pointer, then the third 'value'
+;; argument is the size of the memory to allocate (in elements, not in bytes)
+
+(bind-val g_ptr0 double* 10)
+
+(xtmtest '(bind-func test_g_ptr0
+ (lambda ()
+ (let ((total 0.0)
+ (i 0))
+ (dotimes (i 10)
+ (pset! g_ptr0 i (i64tod i))
+ (set! total (+ total (pref g_ptr0 i))))
+ total)))
+
+ (test_g_ptr0) 45.0)
+
+(bind-val g_ptr1 |4,i32|* 2)
+(bind-val g_ptr2 <i64,double>* 4)
+
+(xtmtest '(bind-func test_g_ptr1
+ (lambda ()
+ (afill! g_ptr1 11 66 35 81)
+ (tset! g_ptr2 1 35.0)
+ (printf "%f :: %d\n" (tref g_ptr2 1) (aref g_ptr1 2))
+ (aref g_ptr1 3)))
+
+ (test_g_ptr1) 81) ;; should also print 35.000000 :: 35
+
+;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+;;
+;; Callbacks
+
+(xtmtest '(bind-func callback-test
+ (lambda (time:i64 count:i64)
+ (printf "time: %lld:%lld\n" time count)
+ (callback (+ time 1000) callback-test (+ time 22050) (+ count 1))))
+
+ (callback-test (now) 0))
+
+;; compiling this will stop the callbacks
+;;
+;; of course we need to keep the type
+;; signature the same [void,i64,i64]*
+;;
+(xtmtest '(bind-func callback-test
+ (lambda (time:i64 count:i64)
+ #t))
+
+ (callback-test))
+
+;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+;;
+;; some memzone tests
+
+(xtmtest '(bind-func memzone-test1
+ (lambda ()
+ (let ((b:|5,double|* (zalloc)))
+ (aset! b 0
+ (memzone 1024
+ (let ((a:|10,double|* (zalloc)))
+ (aset! a 0 3.5)
+ (aref a 0))))
+ (let ((c:|9,i32|* (zalloc)))
+ (aset! c 0 99)
+ (aref b 0)))))
+
+ (memzone-test1) 3.5)
+
+(xtmtest '(bind-func memzone-test2
+ (lambda ()
+ (memzone 1024
+ (let ((k:|15,double|* (zalloc))
+ (f (lambda (fa:|15,double|*)
+ (memzone 1024
+ (let ((a:|10,double|* (zalloc))
+ (i 0))
+ (dotimes (i 10)
+ (aset! a i (* (aref fa i) (random))))
+ a)))))
+ (f k)))))
+
+ (memzone-test2))
+
+(xtmtest '(bind-func memzone-test3
+ (lambda ()
+ (let ((v (memzone-test2))
+ (i 0))
+ (dotimes (i 10) (printf "%lld:%f\n" i (aref v i))))))
+
+ (memzone-test3)) ;; should print all 0.0's
+
+(xtmtest '(bind-func memzone-test4
+ (lambda ()
+ (memzone 1024 (* 44100 10)
+ (let ((a:|5,double|* (alloc)))
+ (aset! a 0 5.5)
+ (aref a 0)))))
+
+ (memzone-test4) 5.50000)
+
+;;
+;; Large allocation of memory on BUILD (i.e. when the closure is created)
+;; requires an optional argument (i.e. an amount of memory to allocate
+;; specifically for closure creation)
+;;
+;; This memory is automatically free'd whenever you recompile the closure
+;; (it will be destroyed and replaced by a new allocation of the
+;; same amount or whatever new amount you have allocated for closure
+;; compilation)
+;;
+(xtmtest '(bind-func closure-zalloc-test 1000000
+ (let ((k:|100000,double|* (zalloc)))
+ (lambda ()
+ (aset! k 0 1.0)
+ (aref k 0))))
+
+ (closure-zalloc-test 1000000))
+
+;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+;;
+;; Ad-Hoc Polymorphism
+;;
+;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+
+;; extempore supports ad-hoc polymorphism
+;; at some stage in the future this will
+;; be implicit - but for the moment
+;; it is explicitly defined using bind-poly
+
+;; ad-hoc polymorphism allows you to provide
+;; different specialisations depending on
+;; type. In other words, a single 'name'
+;; can be bound to multiple function
+;; implementations each with a unique
+;; type.
+
+
+;; poly variables can be for functions of
+;; mixed argument lengths
+;;
+;; so for example:
+(bind-func poly-test4
+ (lambda (a:i8*)
+ (printf "%s\n" a)))
+
+(bind-func poly-test5
+ (lambda (a:i8* b:i8*)
+ (printf "%s %s\n" a b)))
+
+(bind-func poly-test6
+ (lambda (a:i8* b:i8* c:i8*)
+ (printf "%s %s %s\n" a b c)))
+
+;; bind these three functions to poly 'testprint'
+(bind-poly testprint poly-test4)
+(bind-poly testprint poly-test5)
+(bind-poly testprint poly-test6)
+
+(xtmtest '(bind-func poly-test7
+ (lambda ()
+ (testprint "extempore's")
+ (testprint "extempore's" "polymorphism")
+ (testprint "extempore's" "polymorphism" "rocks")))
+
+ (poly-test7))
+
+;; polys can also specialize
+;; on the return type
+(bind-func poly-test8
+ (lambda (a:double)
+ (* a a)))
+
+(bind-func poly-test9
+ (lambda (a:double)
+ (dtoi64 (* a a))))
+
+(bind-poly sqrd poly-test8)
+(bind-poly sqrd poly-test9)
+
+;; specialize on [i64,double]*
+;;
+(xtmtest '(bind-func poly-test10:[i64,double]*
+ (lambda (a)
+ (+ 1 (sqrd a))))
+ (poly-test10 5.0))
+
+;; specialize on [double,double]*
+(xtmtest '(bind-func poly-test11:[double,double]*
+ (lambda (a)
+ (+ 1.0 (sqrd a))))
+
+ (poly-test11 5.0))
+
+
+;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+;;
+;; a little test for zone cleanup
+;;
+(bind-func MyLittleCleanupTest
+ (lambda ()
+ (let ((tmp2:i8* (alloc 8)))
+ (cleanup (println "Clean up before leaving zone!"))
+ tmp2)))
+
+(xtmtest '(bind-func cleanup-test
+ (lambda ()
+ (letz ((tmp:i8* (alloc 8))
+ (t2 (MyLittleCleanupTest)))
+ (begin
+ (println "In Zone ...")
+ 1))
+ (println "Out of zone ...")
+ void))
+
+ (cleanup-test))
+
+;;;;;;;;;;;;;;;;;;
+;; vector types
+
+;; (bind-func vector-test1
+;; (lambda ()
+;; (let ((v1:/4,float/* (alloc))
+;; (v2:/4,float/* (alloc))
+;; (v3:/4,float/* (alloc)))
+;; (vfill! v1 4.0 3.0 2.0 1.0)
+;; (vfill! v2 1.0 2.0 3.0 4.0)
+;; (vfill! v3 5.0 5.0 5.0 5.0)
+;; (let ((v4 (* v1 v2))
+;; (v5 (> v3 v4))) ;; unfortunately vector conditionals don't work!
+;; (printf "mul:%f:%f:%f:%f\n" (ftod (vref v4 0)) (ftod (vref v4 1)) (ftod (vref v4 2)) (ftod (vref v4 3)))
+;; (printf "cmp:%d:%d:%d:%d\n" (i1toi32 (vref v5 0)) (i1toi32 (vref v5 1)) (i1toi32 (vref v5 2)) (i1toi32 (vref v5 3)))
+;; void))))
+
+;; (test-xtfunc (vector-test1))
+
+(bind-func vector-test2
+ (lambda ()
+ (let ((v1:/4,float/* (alloc))
+ (v2:/4,float/* (alloc)))
+ (vfill! v1 1.0 2.0 4.0 8.0)
+ (vfill! v2 2.0 2.5 2.25 2.125)
+ (* v1 v2))))
+
+(xtmtest '(bind-func vector-test3
+ (lambda ()
+ (let ((a (vector-test2)))
+ (printf "%f:%f:%f:%f\n"
+ (ftod (vref a 0))
+ (ftod (vref a 1))
+ (ftod (vref a 2))
+ (ftod (vref a 3)))
+ void)))
+
+ (vector-test3))
+
+;; vectorised sine func
+(bind-func vsinf4
+ (let ((p:/4,float/* (alloc))
+ (b:/4,float/* (alloc))
+ (c:/4,float/* (alloc))
+ (f1:/4,float/* (alloc))
+ (f2:/4,float/* (alloc))
+ (i:i32 0)
+ (p_ 0.225)
+ (b_ (dtof (/ 4.0 3.1415)))
+ (c_ (dtof (/ -4.0 (* 3.1415 3.1415)))))
+ (dotimes (i 4) (vset! p i p_) (vset! b i b_) (vset! c i c_))
+ (lambda (x:/4,float/)
+ ;; no SIMD for abs yet!
+ (dotimes (i 4) (vset! f1 i (fabs (vref x i))))
+ (let ((y (+ (* b x) (* c x f1))))
+ ;; no SIMD for abs yet!
+ (dotimes (i 4) (vset! f2 i (fabs (vref y i))))
+ (+ (* p (- (* y f2) y)) y)))))
+
+(bind-func vcosf4
+ (let ((p:/4,float/* (alloc))
+ (b:/4,float/* (alloc))
+ (c:/4,float/* (alloc))
+ (d:/4,float/* (alloc))
+ (f1:/4,float/* (alloc))
+ (f2:/4,float/* (alloc))
+ (i:i32 0)
+ (p_ 0.225)
+ (d_ (dtof (/ 3.1415 2.0)))
+ (b_ (dtof (/ 4.0 3.1415)))
+ (c_ (dtof (/ -4.0 (* 3.1415 3.1415)))))
+ (dotimes (i 4)
+ (vset! p i p_) (vset! b i b_) (vset! c i c_) (vset! d i d_))
+ (lambda (x:/4,float/)
+ ;; offset x for cos
+ (set! x (+ x d))
+ ;; no SIMD for abs yet!
+ (dotimes (i 4) (vset! f1 i (fabs (vref x i))))
+ (let ((y (+ (* b x) (* c x f1))))
+ ;; no SIMD for abs yet!
+ (dotimes (i 4) (vset! f2 i (fabs (vref y i))))
+ (+ (* p (- (* y f2) y)) y)))))
+
+
+(xtmtest '(bind-func vector-test4
+ (lambda ()
+ (let ((a:/4,float/* (alloc)))
+ (vfill! a 0.1 0.2 0.3 0.4)
+ (let ((b (vsinf4 (pref a 0)))
+ (c (vcosf4 (pref a 0))))
+ (printf "precision inaccuracy is expected:\n")
+ (printf " sinf:\t%f,%f,%f,%f\n"
+ (ftod (sin 0.1:f))
+ (ftod (sin 0.2:f))
+ (ftod (sin 0.3:f))
+ (ftod (sin 0.4:f)))
+ (printf "vsinf:\t%f,%f,%f,%f\n"
+ (ftod (vref b 0))
+ (ftod (vref b 1))
+ (ftod (vref b 2))
+ (ftod (vref b 3)))
+ (printf " cosf:\t%f,%f,%f,%f\n"
+ (ftod (cos 0.1:f))
+ (ftod (cos 0.2:f))
+ (ftod (cos 0.3:f))
+ (ftod (cos 0.4:f)))
+ (printf "vcosf:\t%f,%f,%f,%f\n"
+ (ftod (vref c 0))
+ (ftod (vref c 1))
+ (ftod (vref c 2))
+ (ftod (vref c 3)))
+ void))))
+
+ (vector-test4))
+
+;; test the call-as-xtlang macro
+
+;; make sure it'll handle multiple body forms
+(xtmtest-result (call-as-xtlang (println 1) (println 2) 5)
+ 5)
+
+
+;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+;;
+;; test globalvar as closure
+;;
+;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+
+(bind-func testinc
+ (lambda (incr:i64)
+ (lambda (x:i64)
+ (+ x incr))))
+
+(bind-val GlobalInc [i64,i64]* (testinc 2))
+
+(xtmtest '(bind-func ginc
+ (lambda ()
+ (GlobalInc 5)))
+ (ginc) 7)
+
+
+;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+;; syntax highlighting tests ;;
+;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+;; these don't return any values, they're visual tests---do they look
+;; right?
+
+(bind-func hl_test1a:[i32,double,|4,i32|**]* 4000
+ "docstring"
+ (lambda (a b)
+ (printf "done\n")))
+
+(bind-func hl_test1b:[i32]*
+ (lambda ()
+ (let ((i:i32 6))
+ (printf "done\n"))))
+
+(bind-val hl_test2 <i32,i32>)
+(bind-val hl_test3 |4,i8|)
+(bind-val hl_test4 double* 10)
+(bind-val hl_test5 i8* "teststr")
+
+(bind-type hl_test_type <i64>)
+
+(println '(bind-lib testlib testfn [i32,i32]*))
+
+;; (and 4 5)
+;; (bind-val hl_test4 double* 10)
+;; (bind-type hl_test_type <i64> "docstring")
+;; (bind-lib testlib testfn [i32,i32]*)
+#
+# Regression tests
+#
+
+%TAG ! tag:example.com:foo/
+---
+test: !foo/bar {a: 'asdf'}
+test2: fred
+...
#
# Examples from the Preview section of the YAML specification
--- /dev/null
+/*
+ * A Test file for the different string literals.
+ */
+
+#include <iostream>
+
+int main() {
+ char *_str = "a normal string";
+ wchar_t *L_str = L"a wide string";
+ char *u8_str = u8"utf-8 string";
+ char16_t *u_str = u"utf-16 string";
+ char32_t *U_str = U"utf-32 string";
+ char *R_str = R""""(raw string with
+"""
+as a delimiter)"""";
+
+ std::cout << R_str << std::endl;
+
+ return 0;
+}
--- /dev/null
+바싹반박나싼순
+뿌멓떠벌번멍뻐
+쌀삭쌀살다순옭
+어어선썬설썩옭
--- /dev/null
+(field "another field" 2)
+(f "000001" -2)
+
+(missing? "a field" 23)
+
+(random-value "age")
+(weighted-random-value "000001")
+
+(if (missing? "00000") (random-value "000000") (f "000000"))
+
+(ensure-value "000000")
+(ensure-weighted-value "000000")
+
+(normalize "000001")
+(normalize "length" 8 23)
+
+(z-score "a numeric field")
+(z-score 23)
+
+(field-prop string "00023" name)
+(field-prop numeric "00023" summary missing_count)
+
+(category-count "species" "Iris-versicolor")
+(category-count "species" (f "000004"))
+(bin-count "age" (f "bin-selector"))
+(bin-center "000003" 3)
+(bin-center (field "field-selector") 4)
+
+(let (v (f "age"))
+ (cond (< v 2) "baby"
+ (< v 10) "child"
+ (< v 20) "teenager"
+ "adult"))
+
+(segment-label "000000" "baby" 2 "child" 10 "teenager" 20 "adult")
+(segment-label 0 "1st fourth" "2nd fourth" "3rd fourth" "4th fourth")
+
+(let (max (maximum 0)
+ min (minimum 0)
+ step (/ (- max min) 4))
+ (segment-label 0 "1st fourth" (+ min step)
+ "2nd fourth" (+ min step step)
+ "3rd fourth" (+ min step step step)
+ "4th fourth"))
+
+(contains-items? "000000" "blue" "green" "darkblue")
+
+(<= (percentile "age" 0.5) (f "age") (percentile "age" 0.95))
+
+(within-percentiles? "age" 0.5 0.95)
+
+(percentile-label "000023" "1st" "2nd" "3rd" "4th")
+
+(cond (within-percentiles? "000023" 0 0.25) "1st"
+ (within-percentiles? "000023" 0.25 0.5) "2nd"
+ (within-percentiles? "000023" 0.5 0.75) "3rd"
+ "4th")
+
+(str 1 "hello " (field "a"))
+(str "value_" (+ 3 4) "/" (name "000001"))
+
+(length "abc")
+(length "")
+
+(levenshtein (f 0) "a random string")
+(if (< (levenshtein (f 0) "bluething") 5) "bluething" (f 0))
+
+(occurrences "howdy woman, howdy" "howdy")
+(occurrences "howdy woman" "Man" true)
+(occurrences "howdy man" "Man" true)
+(occurrences "hola, Holas" "hola" true "es")
+
+(md5 "a text")
+(sha1 "a text")
+(sha256 "")
+
+(matches? (field "name") ".*\\sHal\\s.*")
+(matches? (field "name") "(?i).*\\shal\\s.*")
+
+(if (matches? (f "result") (re-quote (f "target"))) "GOOD" "MISS")
+(matches? (f "name") (str "^" (re-quote (f "salutation")) "\\s *$"))
+
+(replace "Almost Pig Latin" "\\b(\\w)(\\w+)\\b" "$2$1ay")
+(replace-first "swap first two words" "(\\w+)(\\s+)(\\w+)" "$3$2$1")
+
+(language "this is an English phrase")
+
+(< (field 0) (field 1))
+(<= (field 0 -1) (field 0) (field 0 1))
+(> (field "date") "07-14-1969")
+(>= 23 (f "000004" -2))
+
+(= "Dante" (field "Author"))
+(= 1300 (field "Year"))
+(= (field "Year" -2) (field "Year" -1) (field "Year"))
+(!= (field "00033" -1) (field "00033" 1))
+
+(and (= 3 (field 1)) (= "meh" (f "a")) (< (f "pregnancies") 5))
+(not true)
+
+(linear-regression 1 1 2 2 3 3 4 4)
+(linear-regression 2.0 3.1 2.3 3.3 24.3 45.2)
+
+(epoch-fields (f "milliseconds"))
+(epoch-year (* 1000 (f "seconds")))
+
+(/ (f "a-datetime-string") 1000)
+(/ (epoch (f "a-datetime-string")) 1000)
+
+(epoch-fields (epoch "1969-14-07T06:00:12"))
+(epoch-hour (epoch "11~22~30" "hh~mm~ss"))
+
+(let (x (+ (window "a" -10 10))
+ a (/ (* x 3) 4.34)
+ y (if (< a 10) "Good" "Bad"))
+ (list x (str (f 10) "-" y) a y))
+
+(list (let (z (f 0)) (* 2 (* z z) (log z)))
+ (let (pi 3.141592653589793 r (f "radius")) (* 4 pi r r)))
+
+(if (< (field "age") 18) "non-adult" "adult")
+
+(if (= "oh" (field "000000")) "OH")
+
+(if (> (field "000001") (mean "000001"))
+ "above average"
+ (if (< (field "000001") (mean "000001"))
+ "below average"
+ "mediocre"))
+
+(cond (> (f "000001") (mean "000001")) "above average"
+ (= (f "000001") (mean "000001")) "below average"
+ "mediocre")
+
+(cond (or (= "a" (f 0)) (= "a+" (f 0))) 1
+ (or (= "b" (f 0)) (= "b+" (f 0))) 0
+ (or (= "c" (f 0)) (= "c+" (f 0))) -1)
+
+(cond (< (f "age") 2) "baby"
+ (and (<= 2 (f "age") 10) (= "F" (f "sex"))) "girl"
+ (and (<= 2 (f "age") 10) (= "M" (f "sex"))) "boy"
+ (< 10 (f "age") 20) "teenager"
+ "adult")
+
+(list (field "age")
+ (field "weight" -1)
+ (population "age"))
+
+(list 1.23
+ (if (< (field "age") 10) "child" "adult")
+ (field 3))
+
+(head (cons x lst))
+(tail (cons x lst))
+
+(count (list (f 1) (f 2)))
+(mode (list a b b c b a c c c))
+(max (list -1 2 -2 0.38))
+(min (list -1.3 2 1))
+(avg (list -1 -2 1 2 0.8 -0.8))
+
+(in 3 (1 2 3 2))
+(in "abc" (1 2 3))
+(in (f "size") ("X" "XXL"))
+
+(< _ 3)
+(+ (f "000001" _) 3)
+(< -18 _ (f 3))
+
+(map (* 2 _) (list (f 0 -1) (f 0) (f 0 1)))
+
+(all-but "id" "000023")
+(fields "000003" 3 "a field" "another" "0002a3b-3")
+
+(all-with-defaults "species" "Iris-versicolor"
+ "petal-width" 2.8
+ "000002" 0)
+
+(all-with-numeric-default "median")
+(all-with-numeric-default 0)
+
+(window "000001" -1 2)
+(filter (< _ 99.9) (map (+ 32 (* 1.8 _)) (window "Temp" -2 0)))
+
+(let (now (f "epoch"))
+ (avg (cond-window "temperature" (< (- (f "epoch") now) 240))))
--- /dev/null
+--
+-- Shuttle Digital Autopilot
+-- by Sergey Berezin (berez@cs.cmu.edu)
+--
+MODULE cont_3eo_mode_select(start,smode5,vel,q_bar,apogee_alt_LT_alt_ref,
+ h_dot_LT_hdot_reg2,alpha_n_GRT_alpha_reg2,
+ delta_r_GRT_del_r_usp,v_horiz_dnrng_LT_0,
+ high_rate_sep,meco_confirmed)
+
+VAR cont_3EO_start: boolean;
+ RTLS_abort_declared: boolean;
+ region_selected : boolean;
+ m_mode: {mm102, mm103, mm601};
+ r: {reg-1, reg0, reg1, reg2, reg3, reg102};
+ step : {1,2,3,4,5,6,7,8,9,10, exit, undef};
+
+ASSIGN
+ init(cont_3EO_start) := FALSE;
+ init(m_mode) := {mm102, mm103};
+ init(region_selected) := FALSE;
+ init(RTLS_abort_declared) := FALSE;
+ init(r) := reg-1;
+ init(step) := undef;
+
+ next(step) :=
+ case
+ step = 1 & m_mode = mm102 : exit;
+ step = 1 : 2;
+ step = 2 & smode5 : 5;
+ step = 2 & vel = GRT_vi_3eo_max: exit;
+ step = 2 : 3;
+ step = 3 & vel = LEQ_vi_3eo_min : 6;
+ step = 3 : 4;
+ step = 4 & apogee_alt_LT_alt_ref: exit;
+ step = 4 : 6;
+ step = 5 : 6;
+ step = 6 & r = reg0 : exit;
+ step = 6 : 7;
+ step = 7 : 8;
+ step = 8 & q_bar = GRT_qbar_reg3 & !high_rate_sep : 10;
+ step = 8 : 9;
+ step = 9 : 10;
+ step = 10: exit;
+ next(start): 1;
+ step = exit : undef;
+ TRUE: step;
+ esac;
+
+ next(cont_3EO_start) :=
+ case
+ step = 1 & m_mode = mm102 : TRUE;
+ step = 10 & meco_confirmed : TRUE;
+ TRUE : cont_3EO_start;
+ esac;
+
+ next(r) :=
+ case
+ step = 1 & m_mode = mm102 : reg102;
+ step = 2 & !smode5 & vel = GRT_vi_3eo_max: reg0;
+ step = 4 & apogee_alt_LT_alt_ref: reg0;
+ step = 5 & v_horiz_dnrng_LT_0 & delta_r_GRT_del_r_usp : reg0;
+ step = 8 & q_bar = GRT_qbar_reg3 & !high_rate_sep : reg3;
+ step = 9: case
+ (h_dot_LT_hdot_reg2 & alpha_n_GRT_alpha_reg2 &
+ q_bar = GRT_qbar_reg1) | high_rate_sep : reg2;
+ TRUE : reg1;
+ esac;
+ next(step) = 1 : reg-1;
+ TRUE: r;
+ esac;
+
+ next(RTLS_abort_declared) :=
+ case
+ step = 10 & meco_confirmed & m_mode = mm103 : TRUE;
+ TRUE: RTLS_abort_declared;
+ esac;
+
+ next(m_mode) :=
+ case
+ step = 10 & meco_confirmed & m_mode = mm103 : mm601;
+ TRUE: m_mode;
+ esac;
+
+ next(region_selected) :=
+ case
+ next(step) = 1 : FALSE;
+ next(step) = exit : TRUE;
+ TRUE : region_selected;
+ esac;
+
+MODULE cont_3eo_guide(start,cont_3EO_start, mode_select_completed, et_sep_cmd,
+ h_dot_LT_0, q_bar_a_GRT_qbar_max_sep, m_mode, r0,
+ cont_minus_z_compl, t_nav-t_et_sep_GRT_dt_min_z_102,
+ ABS_q_orb_GRT_q_minus_z_max, ABS_r_orb_GRT_r_minus_z_max,
+ excess_OMS_propellant, q_bar_a_LT_qbar_oms_dump,
+ entry_mnvr_couter_LE_0, rcs_all_jet_inhibit,
+ alt_GRT_alt_min_102_dump, t_nav-t_gmtlo_LT_t_dmp_last,
+ pre_sep, cond_18, q_orb_LT_0, ABS_alf_err_LT_alf_sep_err,
+ cond_20b, cond_21, ABS_beta_n_GRT_beta_max, cond_24, cond_26,
+ cond_27, cond_29, mm602_OK)
+VAR
+ step: {1,a1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,
+ b20, c20, d20, 21,22,23,24,25,26,27,28,29,exit, undef};
+ call_RTLS_abort_task : boolean;
+ first3: boolean; -- indicates if it is the first pass
+ first8: boolean;
+ first27: boolean;
+ s_unconv : boolean;
+ mode_2_indicator : boolean;
+ et_sep_man_initiate : boolean;
+ emerg_sep : boolean;
+ cont_3eo_pr_delay : {minus_z_reg1, minus_z_reg2,
+ minus_z_reg3, minus_z_reg4, minus_z_reg102, 0, 5};
+ etsep_y_drift : {undef, minus_z_reg1, minus_z_reg2,
+ minus_z_reg3, minus_z_reg4, minus_z_reg102, 0};
+ fwd_rcs_dump_enable : boolean;
+ fcs_accept_icnct : boolean;
+ oms_rcs_i_c_inh_ena_cmd : boolean;
+ orbiter_dump_ena : boolean;
+ frz_3eo : boolean;
+ high_rate_sep: boolean;
+ entry_gains : boolean;
+ cont_sep_cplt : boolean;
+ pch_cmd_reg4 : boolean;
+ alpha_ok : boolean;
+ r : {reg-1, reg0, reg1, reg2, reg3, reg4, reg102};
+ early_sep : boolean;
+--------------------------------------------
+----- Additional Variables -----------------
+--------------------------------------------
+ rtls_lo_f_d_delay : {undef, 0};
+ wcb2 : {undef, reg1_0, reg2_neg4, wcb2_3eo, reg4_0,
+ reg102_undef, post_sep_0};
+ q_gcb_i : {undef, quat_reg1, quat_reg2, quat_reg3, quat_reg4,
+ quat_reg102_undef, quat_entry_M50_to_cmdbody};
+ oms_nz_lim : {undef, oms_nz_lim_3eo, oms_nz_lim_iload, oms_nz_lim_std};
+ contingency_nz_lim : {undef, contingency_nz_lim_3eo,
+ contingency_nz_lim_iload, contingency_nz_lim_std};
+
+
+
+ASSIGN
+ init(entry_gains) := FALSE;
+ init(frz_3eo) := FALSE;
+ init(cont_3eo_pr_delay) := 5;
+ init(etsep_y_drift) := undef;
+ init(r) := reg-1;
+ init(step) := undef;
+ init(call_RTLS_abort_task) := FALSE;
+ init(first3) := TRUE;
+ init(first8) := TRUE;
+ init(first27) := TRUE;
+ init(cont_sep_cplt) := FALSE;
+ init(et_sep_man_initiate) := FALSE;
+ init(alpha_ok) := FALSE;
+ init(pch_cmd_reg4) := FALSE;
+
+-- Assumed initializations:
+
+ init(rtls_lo_f_d_delay) := undef;
+ init(wcb2) := undef;
+ init(q_gcb_i) := undef;
+ init(oms_nz_lim) := undef;
+ init(contingency_nz_lim) := undef;
+ init(oms_rcs_i_c_inh_ena_cmd) := FALSE;
+ init(orbiter_dump_ena) := FALSE;
+-- init(early_sep) := FALSE;
+
+-------------
+
+ next(step) := nextstep;
+
+ next(r) :=
+ case
+ step = a1 & (cont_3EO_start | mode_select_completed) : r0;
+ step = 21 & cond_21 : reg4;
+ step = 23 & ABS_beta_n_GRT_beta_max & !high_rate_sep : reg1;
+ TRUE : r;
+ esac;
+
+ next(first3) :=
+ case
+ step = 3 & cont_3EO_start : FALSE;
+ TRUE : first3;
+ esac;
+
+ next(first8) :=
+ case
+ step = 8 & excess_OMS_propellant & cont_3EO_start : FALSE;
+ TRUE : first8;
+ esac;
+
+ next(first27) :=
+ case
+ step = 27 : FALSE;
+ TRUE: first27;
+ esac;
+
+ next(s_unconv) :=
+ case
+ step = 3 : FALSE;
+ TRUE : s_unconv;
+ esac;
+
+ next(call_RTLS_abort_task) :=
+ case
+ step = 3 : TRUE;
+ TRUE : call_RTLS_abort_task;
+ esac;
+
+ next(mode_2_indicator) :=
+ case
+ step = 4 : TRUE;
+ TRUE : mode_2_indicator;
+ esac;
+
+ next(et_sep_man_initiate) :=
+ case
+ step = 5 & h_dot_LT_0 & q_bar_a_GRT_qbar_max_sep & m_mode != mm102 : TRUE;
+ step = 14 & pre_sep : TRUE;
+ step = 19 & q_orb_LT_0 : TRUE;
+ step = d20 : TRUE;
+ step = 26 & cond_26 : TRUE;
+ step = 29 & cond_29 : TRUE;
+ TRUE : et_sep_man_initiate;
+ esac;
+
+ next(emerg_sep) :=
+ case
+ next(step) = 1 : FALSE;
+ step = 5 & h_dot_LT_0 & q_bar_a_GRT_qbar_max_sep & m_mode != mm102: TRUE;
+ TRUE : emerg_sep;
+ esac;
+
+ next(cont_3eo_pr_delay) :=
+ case
+ next(step) = 1 : 5;
+ step = 5 & h_dot_LT_0 & q_bar_a_GRT_qbar_max_sep & m_mode != mm102 :
+ minus_z_reg3;
+ step = 7 & !cont_minus_z_compl & r = reg102 &
+ t_nav-t_et_sep_GRT_dt_min_z_102 &
+ (ABS_q_orb_GRT_q_minus_z_max | ABS_r_orb_GRT_r_minus_z_max) : 0;
+ step = 14 & pre_sep : minus_z_reg102;
+ step = 19 & q_orb_LT_0 : minus_z_reg4;
+ step = d20 : minus_z_reg3;
+ step = 26 & cond_26 : minus_z_reg2;
+ step = 27 & first27 : minus_z_reg1;
+ TRUE : cont_3eo_pr_delay;
+ esac;
+
+ next(etsep_y_drift) :=
+ case
+ step = 5 & h_dot_LT_0 & q_bar_a_GRT_qbar_max_sep & m_mode != mm102 :
+ minus_z_reg3;
+ step = 7 & !cont_minus_z_compl & r = reg102 &
+ t_nav-t_et_sep_GRT_dt_min_z_102 &
+ (ABS_q_orb_GRT_q_minus_z_max | ABS_r_orb_GRT_r_minus_z_max) : 0;
+ step = 14 & pre_sep : minus_z_reg102;
+ step = 19 & q_orb_LT_0 : minus_z_reg4;
+ step = d20 : minus_z_reg3;
+ step = 26 & cond_26 : minus_z_reg2;
+ step = 27 & first27 : minus_z_reg1;
+ TRUE : etsep_y_drift;
+ esac;
+
+ next(fwd_rcs_dump_enable) :=
+ case
+ step = 8 & excess_OMS_propellant & first8 : FALSE;
+ TRUE : fwd_rcs_dump_enable;
+ esac;
+
+ next(fcs_accept_icnct) :=
+ case
+ step = 9 & q_bar_a_LT_qbar_oms_dump & r != reg102 : TRUE;
+ TRUE : fcs_accept_icnct;
+ esac;
+
+ next(oms_rcs_i_c_inh_ena_cmd) :=
+ case
+-- next(step) = 1 & oms_rcs_i_c_inh_ena_cmd : {0,1};
+ next(step) = 1 & oms_rcs_i_c_inh_ena_cmd : FALSE; -- Assumed initialization
+ step = 9 & q_bar_a_LT_qbar_oms_dump & r != reg102 : TRUE;
+ TRUE : oms_rcs_i_c_inh_ena_cmd;
+ esac;
+
+ next(orbiter_dump_ena) :=
+ case
+ next(start) = TRUE : FALSE; -- Assumed initialization
+ step = 9 & q_bar_a_LT_qbar_oms_dump & r != reg102 : TRUE;
+ step = 13 & alt_GRT_alt_min_102_dump & t_nav-t_gmtlo_LT_t_dmp_last : TRUE;
+ TRUE : orbiter_dump_ena;
+ esac;
+
+ next(frz_3eo) :=
+ case
+ next(step) = 1 : FALSE;
+ step = 10 & entry_mnvr_couter_LE_0 & !rcs_all_jet_inhibit : FALSE;
+ step = 28 & !et_sep_man_initiate : TRUE;
+ TRUE : frz_3eo;
+ esac;
+
+ next(high_rate_sep) :=
+ case
+ step = 10 & entry_mnvr_couter_LE_0 & !rcs_all_jet_inhibit : FALSE;
+ step = 25 : TRUE;
+ TRUE : high_rate_sep;
+ esac;
+
+ next(entry_gains) :=
+ case
+ next(step) = 1 : FALSE;
+ step = 10 & entry_mnvr_couter_LE_0 & !rcs_all_jet_inhibit : TRUE;
+ TRUE : entry_gains;
+ esac;
+
+ next(cont_sep_cplt) :=
+ case
+ next(step) = 1 : FALSE;
+ step = 12 & mm602_OK : TRUE;
+ TRUE : cont_sep_cplt;
+ esac;
+
+ next(pch_cmd_reg4) :=
+ case
+ next(step) = 1 : FALSE;
+ step = 18 & !pch_cmd_reg4 & cond_18 : TRUE;
+ TRUE : pch_cmd_reg4;
+ esac;
+
+ next(alpha_ok) :=
+ case
+ next(step) = 1 : FALSE;
+ step = 20 & ABS_alf_err_LT_alf_sep_err : TRUE;
+ TRUE : alpha_ok;
+ esac;
+
+ next(early_sep) :=
+ case
+ step = 27 & first27 :
+ case
+ cond_27 : TRUE;
+ TRUE : FALSE;
+ esac;
+ TRUE : early_sep;
+ esac;
+
+--------------------------------------------
+----- Additional Variables -----------------
+--------------------------------------------
+
+ next(rtls_lo_f_d_delay) :=
+ case
+ next(start) = TRUE : undef; -- Assumed initialization
+ step = 8 & first8 & excess_OMS_propellant : 0;
+ TRUE : rtls_lo_f_d_delay;
+ esac;
+
+ next(wcb2) :=
+ case
+ next(start) = TRUE : undef; -- Assumed initialization
+ step = 10 & entry_mnvr_couter_LE_0 : post_sep_0;
+ step = 12 : case
+ r = reg4 : reg4_0;
+ TRUE : wcb2_3eo;
+ esac;
+ step = 14 & pre_sep : reg102_undef;
+ step = 15 : case
+ r = reg4 : reg4_0;
+ TRUE : wcb2_3eo;
+ esac;
+ step = 25 : reg2_neg4;
+ TRUE : wcb2;
+ esac;
+
+ next(q_gcb_i) :=
+ case
+ next(start) = TRUE : undef; -- Assumed initialization
+ step = 11 : quat_entry_M50_to_cmdbody;
+ step = 14 & pre_sep : quat_reg102_undef;
+ step = 16 : case
+ r = reg4 : quat_reg4;
+ TRUE : quat_reg3;
+ esac;
+ step = 22 : quat_reg2;
+
+-- Without this step the value "quat_reg2" would remain in "reg1":
+-- step = 23 & ABS_beta_n_GRT_beta_max & !high_rate_sep : undef;
+
+ TRUE : q_gcb_i;
+ esac;
+
+ next(oms_nz_lim) :=
+ case
+ next(start) = TRUE : undef; -- Assumed initialization
+ step = 9 & q_bar_a_LT_qbar_oms_dump & r != reg102 : oms_nz_lim_3eo;
+ step = 12 & mm602_OK : oms_nz_lim_std;
+ TRUE : oms_nz_lim;
+ esac;
+
+ next(contingency_nz_lim) :=
+ case
+ next(start) = TRUE : undef; -- Assumed initialization
+ step = 9 & q_bar_a_LT_qbar_oms_dump & r != reg102 :
+ contingency_nz_lim_3eo;
+ step = 12 & mm602_OK : contingency_nz_lim_std;
+ TRUE : contingency_nz_lim;
+ esac;
+
+DEFINE
+ finished := step = exit;
+ idle := step = undef;
+
+ start_cont_3eo_mode_select :=
+ case
+ step = 1 & !cont_3EO_start : TRUE;
+ TRUE : FALSE;
+ esac;
+
+ nextstep :=
+ case
+ step = 1 : a1;
+ step = a1 : case
+ (cont_3EO_start | mode_select_completed) : 2;
+ TRUE : step;
+ esac;
+ step = 2 : case
+ !cont_3EO_start : exit;
+ first3 : 3;
+ TRUE: 4;
+ esac;
+ step = 3 : 4;
+ step = 4 : case
+ et_sep_cmd : 7;
+ TRUE : 5;
+ esac;
+ step = 5 : case
+ h_dot_LT_0 & q_bar_a_GRT_qbar_max_sep &
+ m_mode != mm102 : exit;
+ TRUE : 6;
+ esac;
+ step = 6 :
+ case
+ r = reg102 : 13;
+ r in {reg3, reg4} : 15;
+ r = reg2 : 22;
+ r = reg1 : 27;
+ TRUE : exit;
+ esac;
+ step = 7 : case
+ cont_minus_z_compl : 8;
+ TRUE : exit;
+ esac;
+ step = 8 : case
+ excess_OMS_propellant & first8 : 9;
+ TRUE : 10;
+ esac;
+ step = 9 : exit;
+ step = 10 : case
+ !entry_mnvr_couter_LE_0 | rcs_all_jet_inhibit : exit;
+ TRUE : 11;
+ esac;
+ step = 11 : 12;
+ step = 12 : exit;
+ step = 13 : 14;
+ step = 14 : exit;
+ step = 15 : 16;
+ step = 16 : 17;
+ step = 17 : case
+ r = reg4 : 18;
+ TRUE : 20;
+ esac;
+ step = 18 : case
+ pch_cmd_reg4 | cond_18 : 19;
+ TRUE : exit;
+ esac;
+ step = 19 : exit;
+ step = 20 : case
+ ABS_alf_err_LT_alf_sep_err : b20;
+ TRUE : c20;
+ esac;
+ step = b20 : case
+ cond_20b : d20;
+ TRUE : exit;
+ esac;
+ step = c20 : case
+ alpha_ok : d20;
+ TRUE : 21;
+ esac;
+ step = d20 : exit;
+ TRUE : nextstep21;
+ esac;
+
+ nextstep21 :=
+ case
+ step = 21 : case
+ cond_21 : 15;
+ TRUE : exit;
+ esac;
+ step = 22 : 23;
+ step = 23 : case
+ ABS_beta_n_GRT_beta_max & !high_rate_sep : 27;
+ TRUE : 24;
+ esac;
+ step = 24 : case
+ cond_24 | high_rate_sep : 25;
+ TRUE : exit;
+ esac;
+ step = 25 : 26;
+ step = 26 : exit;
+ step = 27 : 28;
+ step = 28 : case
+ !et_sep_man_initiate : 29;
+ TRUE : exit;
+ esac;
+ step = 29 : exit;
+ start : 1;
+ step = exit : undef;
+ TRUE : step;
+ esac;
+
+ post_sep_mode := step in {7,8,9,10,11,12};
+
+------------------------------------------------------------------
+------------------------------------------------------------------
+
+MODULE main
+VAR
+ smode5: boolean;
+ vel : {GRT_vi_3eo_max, GRT_vi_3eo_min, LEQ_vi_3eo_min};
+ q_bar: {GRT_qbar_reg3, GRT_qbar_reg1, LEQ_qbar_reg1};
+ q_bar_a_GRT_qbar_max_sep : boolean;
+ q_bar_a_LT_qbar_oms_dump : boolean;
+ apogee_alt_LT_alt_ref : boolean;
+ h_dot_LT_hdot_reg2 : boolean;
+ h_dot_LT_0 : boolean;
+ alpha_n_GRT_alpha_reg2 : boolean;
+ delta_r_GRT_del_r_usp : boolean;
+ v_horiz_dnrng_LT_0: boolean;
+ meco_confirmed: boolean;
+ et_sep_cmd : boolean;
+ cont_minus_z_compl : boolean;
+ t_nav-t_et_sep_GRT_dt_min_z_102 : boolean;
+ ABS_q_orb_GRT_q_minus_z_max : boolean;
+ ABS_r_orb_GRT_r_minus_z_max : boolean;
+ excess_OMS_propellant : boolean;
+ entry_mnvr_couter_LE_0 : boolean;
+ rcs_all_jet_inhibit : boolean;
+ alt_GRT_alt_min_102_dump : boolean;
+ t_nav-t_gmtlo_LT_t_dmp_last : boolean;
+ pre_sep : boolean;
+ cond_18 : boolean;
+ q_orb_LT_0 : boolean;
+ ABS_alf_err_LT_alf_sep_err : boolean;
+ cond_20b : boolean;
+ cond_21 : boolean;
+ ABS_beta_n_GRT_beta_max : boolean;
+ cond_24 : boolean;
+ cond_26 : boolean;
+ cond_27 : boolean;
+ cond_29 : boolean;
+ mm602_OK : boolean;
+ start_guide : boolean;
+ mated_coast_mnvr : boolean;
+
+ cs: cont_3eo_mode_select(cg.start_cont_3eo_mode_select,
+ smode5,vel,q_bar,apogee_alt_LT_alt_ref,
+ h_dot_LT_hdot_reg2,alpha_n_GRT_alpha_reg2,
+ delta_r_GRT_del_r_usp,v_horiz_dnrng_LT_0,
+ cg.high_rate_sep,meco_confirmed);
+
+ cg: cont_3eo_guide(start_guide,
+ cs.cont_3EO_start, cs.region_selected, et_sep_cmd,
+ h_dot_LT_0, q_bar_a_GRT_qbar_max_sep, cs.m_mode, cs.r,
+ cont_minus_z_compl, t_nav-t_et_sep_GRT_dt_min_z_102,
+ ABS_q_orb_GRT_q_minus_z_max, ABS_r_orb_GRT_r_minus_z_max,
+ excess_OMS_propellant, q_bar_a_LT_qbar_oms_dump,
+ entry_mnvr_couter_LE_0, rcs_all_jet_inhibit,
+ alt_GRT_alt_min_102_dump, t_nav-t_gmtlo_LT_t_dmp_last,
+ pre_sep, cond_18, q_orb_LT_0, ABS_alf_err_LT_alf_sep_err,
+ cond_20b, cond_21, ABS_beta_n_GRT_beta_max, cond_24, cond_26,
+ cond_27, cond_29, mm602_OK);
+
+ASSIGN
+ init(start_guide) := FALSE;
+ init(mated_coast_mnvr) := FALSE;
+
+ next(entry_mnvr_couter_LE_0) :=
+ case
+ !entry_mnvr_couter_LE_0 : {FALSE, TRUE};
+ TRUE : TRUE;
+ esac;
+
+---------------------------------------------------------------------
+---------------------------------------------------------------------
+ next(start_guide) :=
+ case
+ start_guide : FALSE;
+ !cg.idle : FALSE;
+ TRUE : {FALSE, TRUE};
+ esac;
+
+ next(smode5) :=
+ case
+ fixed_values : smode5;
+ cg.idle : { FALSE, TRUE };
+ TRUE : smode5;
+ esac;
+
+ next(vel) :=
+ case
+ fixed_values : vel;
+ cg.idle : {GRT_vi_3eo_max, GRT_vi_3eo_min, LEQ_vi_3eo_min};
+ TRUE : vel;
+ esac;
+
+ next(q_bar) :=
+ case
+ fixed_values : q_bar;
+ cg.idle : {GRT_qbar_reg3, GRT_qbar_reg1, LEQ_qbar_reg1};
+ TRUE : q_bar;
+ esac;
+
+ next(q_bar_a_GRT_qbar_max_sep) :=
+ case
+ fixed_values : q_bar_a_GRT_qbar_max_sep;
+ cg.idle : { FALSE, TRUE };
+ TRUE : q_bar_a_GRT_qbar_max_sep;
+ esac;
+
+ next(apogee_alt_LT_alt_ref) :=
+ case
+ fixed_values : apogee_alt_LT_alt_ref;
+ cg.idle : { FALSE, TRUE };
+ TRUE : apogee_alt_LT_alt_ref;
+ esac;
+
+ next(h_dot_LT_hdot_reg2) :=
+ case
+ fixed_values : h_dot_LT_hdot_reg2;
+ cg.idle : { FALSE, TRUE };
+ TRUE : h_dot_LT_hdot_reg2;
+ esac;
+
+ next(h_dot_LT_0) :=
+ case
+ fixed_values : h_dot_LT_0;
+ cg.idle : { FALSE, TRUE };
+ TRUE : h_dot_LT_0;
+ esac;
+
+ next(alpha_n_GRT_alpha_reg2) :=
+ case
+ fixed_values : alpha_n_GRT_alpha_reg2;
+ cg.idle : { FALSE, TRUE };
+ TRUE : alpha_n_GRT_alpha_reg2;
+ esac;
+
+ next(delta_r_GRT_del_r_usp) :=
+ case
+ fixed_values : delta_r_GRT_del_r_usp;
+ cg.idle : { FALSE, TRUE };
+ TRUE : delta_r_GRT_del_r_usp;
+ esac;
+
+ next(v_horiz_dnrng_LT_0) :=
+ case
+ fixed_values : v_horiz_dnrng_LT_0;
+ cg.idle : { FALSE, TRUE };
+ TRUE : v_horiz_dnrng_LT_0;
+ esac;
+
+ next(meco_confirmed) :=
+ case
+ fixed_values : meco_confirmed;
+ meco_confirmed : TRUE;
+ cg.idle : { FALSE, TRUE };
+ TRUE : meco_confirmed;
+ esac;
+
+ next(et_sep_cmd) :=
+ case
+ fixed_values : et_sep_cmd;
+ et_sep_cmd : TRUE;
+ cg.idle : { FALSE, TRUE };
+ TRUE : et_sep_cmd;
+ esac;
+
+ next(cont_minus_z_compl) :=
+ case
+ fixed_values : cont_minus_z_compl;
+ cg.idle : { FALSE, TRUE };
+ TRUE : cont_minus_z_compl;
+ esac;
+
+ next(t_nav-t_et_sep_GRT_dt_min_z_102) :=
+ case
+ fixed_values : t_nav-t_et_sep_GRT_dt_min_z_102;
+ cg.idle : { FALSE, TRUE };
+ TRUE : t_nav-t_et_sep_GRT_dt_min_z_102;
+ esac;
+
+ next(ABS_q_orb_GRT_q_minus_z_max) :=
+ case
+ fixed_values : ABS_q_orb_GRT_q_minus_z_max;
+ cg.idle : { FALSE, TRUE };
+ TRUE : ABS_q_orb_GRT_q_minus_z_max;
+ esac;
+
+ next(ABS_r_orb_GRT_r_minus_z_max) :=
+ case
+ fixed_values : ABS_r_orb_GRT_r_minus_z_max;
+ cg.idle : { FALSE, TRUE };
+ TRUE : ABS_r_orb_GRT_r_minus_z_max;
+ esac;
+
+ next(excess_OMS_propellant) :=
+ case
+ fixed_values : excess_OMS_propellant;
+ cg.idle & excess_OMS_propellant : { FALSE, TRUE };
+ TRUE : excess_OMS_propellant;
+ esac;
+
+ next(q_bar_a_LT_qbar_oms_dump) :=
+ case
+ fixed_values : q_bar_a_LT_qbar_oms_dump;
+ cg.idle : { FALSE, TRUE };
+ TRUE : q_bar_a_LT_qbar_oms_dump;
+ esac;
+
+ next(rcs_all_jet_inhibit) :=
+ case
+ fixed_values : rcs_all_jet_inhibit;
+ cg.idle : { FALSE, TRUE };
+ TRUE : rcs_all_jet_inhibit;
+ esac;
+
+ next(alt_GRT_alt_min_102_dump) :=
+ case
+ fixed_values : alt_GRT_alt_min_102_dump;
+ cg.idle : { FALSE, TRUE };
+ TRUE : alt_GRT_alt_min_102_dump;
+ esac;
+
+ next(t_nav-t_gmtlo_LT_t_dmp_last) :=
+ case
+ fixed_values : t_nav-t_gmtlo_LT_t_dmp_last;
+ cg.idle : { FALSE, TRUE };
+ TRUE : t_nav-t_gmtlo_LT_t_dmp_last;
+ esac;
+
+ next(pre_sep) :=
+ case
+ fixed_values : pre_sep;
+ cg.idle : { FALSE, TRUE };
+ TRUE : pre_sep;
+ esac;
+
+ next(cond_18) :=
+ case
+ fixed_values : cond_18;
+ cg.idle : { FALSE, TRUE };
+ TRUE : cond_18;
+ esac;
+
+ next(q_orb_LT_0) :=
+ case
+ fixed_values : q_orb_LT_0;
+ cg.idle : { FALSE, TRUE };
+ TRUE : q_orb_LT_0;
+ esac;
+
+ next(ABS_alf_err_LT_alf_sep_err) :=
+ case
+ fixed_values : ABS_alf_err_LT_alf_sep_err;
+ cg.idle : { FALSE, TRUE };
+ TRUE : ABS_alf_err_LT_alf_sep_err;
+ esac;
+
+ next(cond_20b) :=
+ case
+ fixed_values : cond_20b;
+ cg.idle : { FALSE, TRUE };
+ TRUE : cond_20b;
+ esac;
+
+ next(cond_21) :=
+ case
+ fixed_values : cond_21;
+ cg.idle : { FALSE, TRUE };
+ TRUE : cond_21;
+ esac;
+
+ next(ABS_beta_n_GRT_beta_max) :=
+ case
+ fixed_values : ABS_beta_n_GRT_beta_max;
+ cg.idle : { FALSE, TRUE };
+ TRUE : ABS_beta_n_GRT_beta_max;
+ esac;
+
+ next(cond_24) :=
+ case
+ fixed_values : cond_24;
+ cg.idle : { FALSE, TRUE };
+ TRUE : cond_24;
+ esac;
+
+ next(cond_26) :=
+ case
+ fixed_values : cond_26;
+ cg.idle : { FALSE, TRUE };
+ TRUE : cond_26;
+ esac;
+
+ next(cond_27) :=
+ case
+ fixed_values : cond_27;
+ cg.idle : { FALSE, TRUE };
+ TRUE : cond_27;
+ esac;
+
+ next(cond_29) :=
+ case
+ fixed_values : cond_29;
+ cg.idle : { FALSE, TRUE };
+ TRUE : cond_29;
+ esac;
+
+ next(mm602_OK) :=
+ case
+ fixed_values : mm602_OK;
+ cg.idle : { FALSE, TRUE };
+ TRUE : mm602_OK;
+ esac;
+
+ next(mated_coast_mnvr) :=
+ case
+ next(cg.step) = 1 : FALSE;
+ cg.step = 6 & cg.r in {reg1, reg2, reg3, reg4, reg102} : TRUE;
+ TRUE : mated_coast_mnvr;
+ esac;
+
+---------------------------------------------------------------------
+---------------------------------------------------------------------
+DEFINE
+ fixed_values := FALSE;
+
+ output_ok :=
+ case
+ cg.q_gcb_i = undef | cg.wcb2 = undef |
+ cg.cont_3eo_pr_delay = 5 |
+ cg.etsep_y_drift = undef :
+ case
+ !mated_coast_mnvr: 1;
+ TRUE : undef;
+ esac;
+ !mated_coast_mnvr: toint(cg.q_gcb_i = quat_entry_M50_to_cmdbody &
+ cg.wcb2 = post_sep_0);
+-- reg1 never happens?
+-- cg.r = reg1 : (cg.q_gcb_i = quat_reg1 & cg.wcb2 = reg1_0 &
+-- cg.cont_3eo_pr_delay = minus_z_reg1 &
+-- cg.etsep_y_drift = minus_z_reg1) | cg.emerg_sep;
+ cg.r = reg2 : toint((cg.q_gcb_i = quat_reg2 & cg.wcb2 = reg2_neg4 &
+ cg.cont_3eo_pr_delay = minus_z_reg2 &
+ cg.etsep_y_drift = minus_z_reg2) | cg.emerg_sep);
+
+ cg.r = reg3 : toint((cg.q_gcb_i = quat_reg3 & cg.wcb2 = wcb2_3eo &
+ cg.cont_3eo_pr_delay = minus_z_reg3 &
+ cg.etsep_y_drift = minus_z_reg3) | cg.emerg_sep);
+ cg.r = reg4 : toint((cg.q_gcb_i = quat_reg4 & cg.wcb2 = reg4_0 &
+ cg.cont_3eo_pr_delay = minus_z_reg4 &
+ cg.etsep_y_drift = minus_z_reg4) | cg.emerg_sep);
+ cg.r = reg102 : toint((cg.q_gcb_i = quat_reg102_undef &
+ cg.wcb2 = reg102_undef &
+ cg.cont_3eo_pr_delay = minus_z_reg102 &
+ cg.etsep_y_drift = minus_z_reg102) | cg.emerg_sep);
+ TRUE : 0;
+ esac;
+
+---------------------------------------------------------------------
+-------- Specifications ---------------------------------------------
+---------------------------------------------------------------------
+
+-- Contingency Guide terminates
+
+SPEC AG(!cg.idle -> AF(cg.finished))
+
+-- Contingency guide can be executed infinitely often
+
+SPEC AG( (cg.idle | cg.finished) ->
+ EF(!(cg.idle | cg.finished) & EF(cg.finished)))
+
+-- Contingency mode select task works fine
+
+SPEC AG(cs.cont_3EO_start & cs.region_selected ->
+ ((cs.m_mode = mm102 | meco_confirmed) &
+ cs.r != reg-1 & cs.r != reg0))
+
+-- Bad (initial) value never happens again once region is computed
+-- unless we restart the task
+
+--SPEC AG(cs.r != reg-1 -> !E[!cg.start_cont_3eo_mode_select U
+-- cs.r = reg-1 & !cg.start_cont_3eo_mode_select])
+
+-- Comment out each of the regions and see if this is still true
+-- (Check, if ALL of the regions can happen)
+
+--SPEC AG(cs.r in {reg-1
+-- ,reg0
+-- ,reg1
+-- ,reg2
+-- ,reg3
+-- ,reg102
+-- })
+
+-- Comment out each of the regions and see if this is still true
+-- (Check, if ALL of the regions can happen)
+
+--SPEC AG(cg.r in {reg-1
+-- ,reg0
+-- ,reg1
+-- ,reg2
+-- ,reg3
+-- ,reg4
+-- ,reg102
+-- })
+
+-- Mode_select starts at the next step after its "start" bit is set:
+
+--SPEC AG(!cg.start_cont_3eo_mode_select ->
+-- AX(cg.start_cont_3eo_mode_select & cs.step in {exit, undef} ->
+-- AX(cs.step = 1 & !cs.region_selected)))
+
+-- During major mode 103, the inertial velocity is monitored.
+-- Below an I-loaded velocity, a MECO would constitute a contingency
+-- abort. (Must NOT be in SMODE=5 (??))
+
+SPEC AG(cg.start_cont_3eo_mode_select & cs.m_mode = mm103 &
+ vel = LEQ_vi_3eo_min & meco_confirmed & !smode5 ->
+ A[!cs.region_selected U cs.region_selected & cs.cont_3EO_start])
+
+-- Above a certain inertial velocity (in mode 103), the 3E/O field
+-- is blanked, indicating that a MECO at this point would not require
+-- an OPS 6 contingency abort.
+
+SPEC AG(cs.region_selected ->
+ (cs.m_mode = mm103 & vel = GRT_vi_3eo_max -> !cs.cont_3EO_start))
+
+-- Between the two velocities, an apogee altitude - velocity curve is
+-- constructed based on the current inertial velocity. If the apogee
+-- altitude is above this curve, a contingency abort capability is
+-- still required and a 3E/O region index will be calculated.
+-- Otherwise, the 3E/O field is blanked out and no further contingency
+-- abort calculations will be performed. (Must NOT be in SMODE=5 (??))
+
+SPEC AG(cg.start_cont_3eo_mode_select & cs.m_mode = mm103 &
+ vel = GRT_vi_3eo_min & meco_confirmed & !smode5 ->
+ A[!cs.region_selected U cs.region_selected &
+ apogee_alt_LT_alt_ref = !cs.cont_3EO_start])
+
+-- For an RTLS trajectory (SMODE=5), a check is made on the downrange
+-- velocity to see if the vehicle is heading away from the landing site.
+-- If this is the case, a 3E/O region index is calculated. If the vehicle
+-- is heading back to the landing site, and the current range to the MECO
+-- R-V line is greater than an I-loaded value, a 3E/O region index is
+-- calculated. Otherwise, an intact abort is possible and the 3E/O field
+-- is blanked.
+
+SPEC AG(cg.start_cont_3eo_mode_select & smode5 & meco_confirmed &
+ (!v_horiz_dnrng_LT_0 | !delta_r_GRT_del_r_usp) ->
+ A[!cs.region_selected U cs.region_selected & cs.cont_3EO_start])
+
+-- If this task is called prior to SRB separation [mm102], the 3E/O region
+-- index is set to 102 and the 3E/O contingency flag is set.
+
+SPEC AG(cs.m_mode = mm102 & cg.start_cont_3eo_mode_select ->
+ AX (A [ !cs.region_selected U cs.region_selected &
+ cs.r = reg102 & cs.cont_3EO_start]))
+
+-- After SRB separation, on every pass that the 3E/O region index is
+-- calculated, a check is made to see if MECO confirmed has occurred. If
+-- so, a check is made to see if the major mode is 103. If so, an RTLS is
+-- automatically invoked to transition to major mode 601.
+
+SPEC AG(!cs.region_selected & cs.m_mode = mm103 & meco_confirmed ->
+ A[!cs.region_selected U cs.region_selected & cs.r != reg0 ->
+ cs.m_mode = mm601 & cs.RTLS_abort_declared])
+
+-- Once the 3E/O contingency flag has been set, this task is no longer
+-- executed.
+
+SPEC AG(cs.cont_3EO_start -> AG(!cg.start_cont_3eo_mode_select))
+
+-- If MECO confirmed occurs in MM103 and an OPS 6 contingency abort
+-- procedure is still required, contingency 3E/O guidance sets the
+-- CONT_3EO_START flag ON. Contingency 3E/O guidance then switches
+-- from its display support function into an actual auto guidance
+-- steering process. [...] Contingency 3E/O guidance sets the RTLS abort
+-- declared flag and the MSC performs the transition from major mode
+-- 103 to 601.
+
+SPEC AG(!cg.idle & !cg.finished & !cs.region_selected & cs.m_mode = mm103 ->
+ A[ !cg.finished U cg.finished & cs.region_selected &
+ (cs.cont_3EO_start -> cs.m_mode = mm601 & cs.RTLS_abort_declared) ])
+
+-- If MECO confirmed occurs in a major mode 601 and a contingency abort
+-- procedure is still required, contingency 3E/O guidance sets the
+-- CONT_3EO_START flag ON. [...] Contingency 3E/O guidance then commands
+-- 3E/O auto maneuvers in major mode 601. [What are these maneuvers??]
+
+SPEC AG(cg.finished & cs.m_mode = mm601 & !et_sep_cmd &
+ meco_confirmed & cs.cont_3EO_start ->
+ cg.q_gcb_i in {quat_reg1, quat_reg2, quat_reg3, quat_reg4, undef}
+ | cg.emerg_sep)
+
+-- If MECO confirmed occurs in a first stage (MM102) [...], contingency
+-- 3E/O guidance will command a fast ET separation during SRB tailoff in
+-- major mode 102. CONT 3E/O GUID will then command maneuver post-sep in
+-- MM601 (???). [ I'm not sure what indicates fast ET sep.: emerg_sep or
+-- early_sep, or what? ]
+
+SPEC AG(cg.finished & cs.m_mode = mm102 & meco_confirmed & pre_sep ->
+ cg.emerg_sep | et_sep_cmd
+ | cg.et_sep_man_initiate
+ | cg.early_sep
+ )
+
+---------------------------------------------
+-- Invariants from Murphi code --------------
+---------------------------------------------
+
+--SPEC AG(cg.finished -> (output_ok != 0 | (output_ok = undef &
+-- (cg.emerg_sep | !cg.cont_sep_cplt))))
+
+--SPEC AG(!cg.finished & !cg.idle -> !mated_coast_mnvr | !et_sep_cmd)
+
+-- Stronger version !!!
+
+SPEC AG(cg.finished -> output_ok != 0)
+
+-- Contingency Guidance shall command an ET separation
+-- [under certain conditions :-].
+
+SPEC AG(cs.cont_3EO_start & cg.finished &
+ (cg.r = reg1 -> cond_29) &
+ (cg.r = reg2 -> cond_24 & cond_26) &
+ (cg.r = reg3 -> cg.alpha_ok &
+ (ABS_alf_err_LT_alf_sep_err -> cond_20b)) &
+ (cg.r = reg4 -> cond_18 & q_orb_LT_0) &
+ (cg.r = reg102 -> pre_sep) ->
+ et_sep_cmd | cg.et_sep_man_initiate
+ | cg.early_sep
+ | cg.emerg_sep
+ )
+
+-- Contingency Guidance shall command at most one interconnected OMS dump.
+
+SPEC AG(cg.finished & cg.oms_rcs_i_c_inh_ena_cmd ->
+ AG(!cg.oms_rcs_i_c_inh_ena_cmd -> AG(!cg.oms_rcs_i_c_inh_ena_cmd)))
+
+-- Contingency Guidance shall command a transition to glide RTLS
+-- (flight mode 602)
+
+SPEC AG(cg.finished & cs.m_mode = mm601 ->
+ --cg.cont_sep_cplt | cg.emerg_sep |
+ cg.call_RTLS_abort_task)
+
+-- Paper, p. 28, unstated assumption 2: at step 6 the region is
+-- among 102, 1-4.
+
+SPEC AG(cg.step = 6 -> cg.r in {reg102, reg1, reg2, reg3, reg4})
+
+-- The transition to mode 602 shall not occur until the entry maneuver
+-- has been calculated
+
+SPEC !E[cg.q_gcb_i = undef U cg.cont_sep_cplt & cg.q_gcb_i = undef]
+
+-- The entry maneuver calculations shall not commence until the OMS/RCS
+-- interconnect, if any, is complete (??? What does it exactly mean???)
+-- !!!
+--SPEC AG(cg.oms_rcs_i_c_inh_ena_cmd ->
+-- !E[cg.oms_rcs_i_c_inh_ena_cmd U
+-- cg.q_gcb_i != undef & cg.oms_rcs_i_c_inh_ena_cmd])
+
+SPEC AG(cg.oms_rcs_i_c_inh_ena_cmd ->
+ !E[rcs_all_jet_inhibit U
+ cg.q_gcb_i != undef & rcs_all_jet_inhibit])
+
+-- The OMS dump shall not be considered until the -Z translation is complete.
+
+SPEC !E[!cont_minus_z_compl & cg.r != reg102 U cg.orbiter_dump_ena]
+
+-- Completion of -Z translation shall not be checked until ET separation
+-- has been commanded
+
+SPEC !E[!et_sep_cmd U cg.step = 7]
+
+-- ET separation shall be commanded if and only if an abort maneuver
+-- region is assigned [and again there are *certain conditions*].
+
+SPEC AG(cg.finished & cs.cont_3EO_start &
+ (cg.r = reg1 -> cond_29) &
+ (cg.r = reg2 -> cond_24 & cond_26) &
+ (cg.r = reg3 -> cg.alpha_ok &
+ (ABS_alf_err_LT_alf_sep_err -> cond_20b)) &
+ (cg.r = reg4 -> cond_18 & q_orb_LT_0) &
+ (cg.r = reg102 -> pre_sep) ->
+ (cg.et_sep_man_initiate | et_sep_cmd
+ <-> cg.r in {reg1, reg2, reg3, reg4, reg102}))
+
+-- The assigned region cannot change arbitrarily.
+
+-- Regions 1 and 2 may interchange, but will not switch to any other region:
+
+SPEC AG(cg.finished & cs.cont_3EO_start & cg.r in {reg1,reg2} ->
+ AG(cg.finished -> cg.r in {reg1,reg2}))
+
+-- Regions 3 and 4 may interchange, but will not switch to any other region:
+
+SPEC AG(cg.finished & cs.cont_3EO_start & cg.r in {reg3,reg4} ->
+ AG(cg.finished -> cg.r in {reg3,reg4}))
+
+-- Region 102 never changes:
+
+SPEC AG(cg.finished & cg.r = reg102 -> AG(cg.finished -> cg.r = reg102))
--- /dev/null
+밤밣따빠밣밟따뿌
+빠맣파빨받밤뚜뭏
+돋밬탕빠맣붏두붇
+볻뫃박발뚷투뭏붖
+뫃도뫃희멓뭏뭏붘
+뫃봌토범더벌뿌뚜
+뽑뽀멓멓더벓뻐뚠
+뽀덩벐멓뻐덕더벅
+
+https://github.com/aheui/snippets/blob/master/hello-world/hello-world.puzzlet.aheui
--- /dev/null
+% BibTeX standard bibliography style `plain'
+ % Version 0.99b (8-Dec-10 release) for BibTeX versions 0.99a or later.
+ % Copyright (C) 1984, 1985, 1988, 2010 Howard Trickey and Oren Patashnik.
+ % Unlimited copying and redistribution of this file are permitted as long as
+ % it is unmodified. Modifications (and redistribution of modified versions)
+ % are also permitted, but only if the resulting file is renamed to something
+ % besides btxbst.doc, plain.bst, unsrt.bst, alpha.bst, and abbrv.bst.
+ % This restriction helps ensure that all standard styles are identical.
+ % The file btxbst.doc has the documentation for this style.
+
+ENTRY
+ { address
+ author
+ booktitle
+ chapter
+ edition
+ editor
+ howpublished
+ institution
+ journal
+ key
+ month
+ note
+ number
+ organization
+ pages
+ publisher
+ school
+ series
+ title
+ type
+ volume
+ year
+ }
+ {}
+ { label }
+
+INTEGERS { output.state before.all mid.sentence after.sentence after.block }
+
+FUNCTION {init.state.consts}
+{ #0 'before.all :=
+ #1 'mid.sentence :=
+ #2 'after.sentence :=
+ #3 'after.block :=
+}
+
+STRINGS { s t }
+
+FUNCTION {output.nonnull}
+{ 's :=
+ output.state mid.sentence =
+ { ", " * write$ }
+ { output.state after.block =
+ { add.period$ write$
+ newline$
+ "\newblock " write$
+ }
+ { output.state before.all =
+ 'write$
+ { add.period$ " " * write$ }
+ if$
+ }
+ if$
+ mid.sentence 'output.state :=
+ }
+ if$
+ s
+}
+
+FUNCTION {output}
+{ duplicate$ empty$
+ 'pop$
+ 'output.nonnull
+ if$
+}
+
+FUNCTION {output.check}
+{ 't :=
+ duplicate$ empty$
+ { pop$ "empty " t * " in " * cite$ * warning$ }
+ 'output.nonnull
+ if$
+}
+
+FUNCTION {output.bibitem}
+{ newline$
+ "\bibitem{" write$
+ cite$ write$
+ "}" write$
+ newline$
+ ""
+ before.all 'output.state :=
+}
+
+FUNCTION {fin.entry}
+{ add.period$
+ write$
+ newline$
+}
+
+FUNCTION {new.block}
+{ output.state before.all =
+ 'skip$
+ { after.block 'output.state := }
+ if$
+}
+
+FUNCTION {new.sentence}
+{ output.state after.block =
+ 'skip$
+ { output.state before.all =
+ 'skip$
+ { after.sentence 'output.state := }
+ if$
+ }
+ if$
+}
+
+FUNCTION {not}
+{ { #0 }
+ { #1 }
+ if$
+}
+
+FUNCTION {and}
+{ 'skip$
+ { pop$ #0 }
+ if$
+}
+
+FUNCTION {or}
+{ { pop$ #1 }
+ 'skip$
+ if$
+}
+
+FUNCTION {new.block.checka}
+{ empty$
+ 'skip$
+ 'new.block
+ if$
+}
+
+FUNCTION {new.block.checkb}
+{ empty$
+ swap$ empty$
+ and
+ 'skip$
+ 'new.block
+ if$
+}
+
+FUNCTION {new.sentence.checka}
+{ empty$
+ 'skip$
+ 'new.sentence
+ if$
+}
+
+FUNCTION {new.sentence.checkb}
+{ empty$
+ swap$ empty$
+ and
+ 'skip$
+ 'new.sentence
+ if$
+}
+
+FUNCTION {field.or.null}
+{ duplicate$ empty$
+ { pop$ "" }
+ 'skip$
+ if$
+}
+
+FUNCTION {emphasize}
+{ duplicate$ empty$
+ { pop$ "" }
+ { "{\em " swap$ * "}" * }
+ if$
+}
+
+INTEGERS { nameptr namesleft numnames }
+
+FUNCTION {format.names}
+{ 's :=
+ #1 'nameptr :=
+ s num.names$ 'numnames :=
+ numnames 'namesleft :=
+ { namesleft #0 > }
+ { s nameptr "{ff~}{vv~}{ll}{, jj}" format.name$ 't :=
+ nameptr #1 >
+ { namesleft #1 >
+ { ", " * t * }
+ { numnames #2 >
+ { "," * }
+ 'skip$
+ if$
+ t "others" =
+ { " et~al." * }
+ { " and " * t * }
+ if$
+ }
+ if$
+ }
+ 't
+ if$
+ nameptr #1 + 'nameptr :=
+ namesleft #1 - 'namesleft :=
+ }
+ while$
+}
+
+FUNCTION {format.authors}
+{ author empty$
+ { "" }
+ { author format.names }
+ if$
+}
+
+FUNCTION {format.editors}
+{ editor empty$
+ { "" }
+ { editor format.names
+ editor num.names$ #1 >
+ { ", editors" * }
+ { ", editor" * }
+ if$
+ }
+ if$
+}
+
+FUNCTION {format.title}
+{ title empty$
+ { "" }
+ { title "t" change.case$ }
+ if$
+}
+
+FUNCTION {n.dashify}
+{ 't :=
+ ""
+ { t empty$ not }
+ { t #1 #1 substring$ "-" =
+ { t #1 #2 substring$ "--" = not
+ { "--" *
+ t #2 global.max$ substring$ 't :=
+ }
+ { { t #1 #1 substring$ "-" = }
+ { "-" *
+ t #2 global.max$ substring$ 't :=
+ }
+ while$
+ }
+ if$
+ }
+ { t #1 #1 substring$ *
+ t #2 global.max$ substring$ 't :=
+ }
+ if$
+ }
+ while$
+}
+
+FUNCTION {format.date}
+{ year empty$
+ { month empty$
+ { "" }
+ { "there's a month but no year in " cite$ * warning$
+ month
+ }
+ if$
+ }
+ { month empty$
+ 'year
+ { month " " * year * }
+ if$
+ }
+ if$
+}
+
+FUNCTION {format.btitle}
+{ title emphasize
+}
+
+FUNCTION {tie.or.space.connect}
+{ duplicate$ text.length$ #3 <
+ { "~" }
+ { " " }
+ if$
+ swap$ * *
+}
+
+FUNCTION {either.or.check}
+{ empty$
+ 'pop$
+ { "can't use both " swap$ * " fields in " * cite$ * warning$ }
+ if$
+}
+
+FUNCTION {format.bvolume}
+{ volume empty$
+ { "" }
+ { "volume" volume tie.or.space.connect
+ series empty$
+ 'skip$
+ { " of " * series emphasize * }
+ if$
+ "volume and number" number either.or.check
+ }
+ if$
+}
+
+FUNCTION {format.number.series}
+{ volume empty$
+ { number empty$
+ { series field.or.null }
+ { output.state mid.sentence =
+ { "number" }
+ { "Number" }
+ if$
+ number tie.or.space.connect
+ series empty$
+ { "there's a number but no series in " cite$ * warning$ }
+ { " in " * series * }
+ if$
+ }
+ if$
+ }
+ { "" }
+ if$
+}
+
+FUNCTION {format.edition}
+{ edition empty$
+ { "" }
+ { output.state mid.sentence =
+ { edition "l" change.case$ " edition" * }
+ { edition "t" change.case$ " edition" * }
+ if$
+ }
+ if$
+}
+
+INTEGERS { multiresult }
+
+FUNCTION {multi.page.check}
+{ 't :=
+ #0 'multiresult :=
+ { multiresult not
+ t empty$ not
+ and
+ }
+ { t #1 #1 substring$
+ duplicate$ "-" =
+ swap$ duplicate$ "," =
+ swap$ "+" =
+ or or
+ { #1 'multiresult := }
+ { t #2 global.max$ substring$ 't := }
+ if$
+ }
+ while$
+ multiresult
+}
+
+FUNCTION {format.pages}
+{ pages empty$
+ { "" }
+ { pages multi.page.check
+ { "pages" pages n.dashify tie.or.space.connect }
+ { "page" pages tie.or.space.connect }
+ if$
+ }
+ if$
+}
+
+FUNCTION {format.vol.num.pages}
+{ volume field.or.null
+ number empty$
+ 'skip$
+ { "(" number * ")" * *
+ volume empty$
+ { "there's a number but no volume in " cite$ * warning$ }
+ 'skip$
+ if$
+ }
+ if$
+ pages empty$
+ 'skip$
+ { duplicate$ empty$
+ { pop$ format.pages }
+ { ":" * pages n.dashify * }
+ if$
+ }
+ if$
+}
+
+FUNCTION {format.chapter.pages}
+{ chapter empty$
+ 'format.pages
+ { type empty$
+ { "chapter" }
+ { type "l" change.case$ }
+ if$
+ chapter tie.or.space.connect
+ pages empty$
+ 'skip$
+ { ", " * format.pages * }
+ if$
+ }
+ if$
+}
+
+FUNCTION {format.in.ed.booktitle}
+{ booktitle empty$
+ { "" }
+ { editor empty$
+ { "In " booktitle emphasize * }
+ { "In " format.editors * ", " * booktitle emphasize * }
+ if$
+ }
+ if$
+}
+
+FUNCTION {empty.misc.check}
+{ author empty$ title empty$ howpublished empty$
+ month empty$ year empty$ note empty$
+ and and and and and
+ key empty$ not and
+ { "all relevant fields are empty in " cite$ * warning$ }
+ 'skip$
+ if$
+}
+
+FUNCTION {format.thesis.type}
+{ type empty$
+ 'skip$
+ { pop$
+ type "t" change.case$
+ }
+ if$
+}
+
+FUNCTION {format.tr.number}
+{ type empty$
+ { "Technical Report" }
+ 'type
+ if$
+ number empty$
+ { "t" change.case$ }
+ { number tie.or.space.connect }
+ if$
+}
+
+FUNCTION {format.article.crossref}
+{ key empty$
+ { journal empty$
+ { "need key or journal for " cite$ * " to crossref " * crossref *
+ warning$
+ ""
+ }
+ { "In {\em " journal * "\/}" * }
+ if$
+ }
+ { "In " key * }
+ if$
+ " \cite{" * crossref * "}" *
+}
+
+FUNCTION {format.crossref.editor}
+{ editor #1 "{vv~}{ll}" format.name$
+ editor num.names$ duplicate$
+ #2 >
+ { pop$ " et~al." * }
+ { #2 <
+ 'skip$
+ { editor #2 "{ff }{vv }{ll}{ jj}" format.name$ "others" =
+ { " et~al." * }
+ { " and " * editor #2 "{vv~}{ll}" format.name$ * }
+ if$
+ }
+ if$
+ }
+ if$
+}
+
+FUNCTION {format.book.crossref}
+{ volume empty$
+ { "empty volume in " cite$ * "'s crossref of " * crossref * warning$
+ "In "
+ }
+ { "Volume" volume tie.or.space.connect
+ " of " *
+ }
+ if$
+ editor empty$
+ editor field.or.null author field.or.null =
+ or
+ { key empty$
+ { series empty$
+ { "need editor, key, or series for " cite$ * " to crossref " *
+ crossref * warning$
+ "" *
+ }
+ { "{\em " * series * "\/}" * }
+ if$
+ }
+ { key * }
+ if$
+ }
+ { format.crossref.editor * }
+ if$
+ " \cite{" * crossref * "}" *
+}
+
+FUNCTION {format.incoll.inproc.crossref}
+{ editor empty$
+ editor field.or.null author field.or.null =
+ or
+ { key empty$
+ { booktitle empty$
+ { "need editor, key, or booktitle for " cite$ * " to crossref " *
+ crossref * warning$
+ ""
+ }
+ { "In {\em " booktitle * "\/}" * }
+ if$
+ }
+ { "In " key * }
+ if$
+ }
+ { "In " format.crossref.editor * }
+ if$
+ " \cite{" * crossref * "}" *
+}
+
+FUNCTION {article}
+{ output.bibitem
+ format.authors "author" output.check
+ new.block
+ format.title "title" output.check
+ new.block
+ crossref missing$
+ { journal emphasize "journal" output.check
+ format.vol.num.pages output
+ format.date "year" output.check
+ }
+ { format.article.crossref output.nonnull
+ format.pages output
+ }
+ if$
+ new.block
+ note output
+ fin.entry
+}
+
+FUNCTION {book}
+{ output.bibitem
+ author empty$
+ { format.editors "author and editor" output.check }
+ { format.authors output.nonnull
+ crossref missing$
+ { "author and editor" editor either.or.check }
+ 'skip$
+ if$
+ }
+ if$
+ new.block
+ format.btitle "title" output.check
+ crossref missing$
+ { format.bvolume output
+ new.block
+ format.number.series output
+ new.sentence
+ publisher "publisher" output.check
+ address output
+ }
+ { new.block
+ format.book.crossref output.nonnull
+ }
+ if$
+ format.edition output
+ format.date "year" output.check
+ new.block
+ note output
+ fin.entry
+}
+
+FUNCTION {booklet}
+{ output.bibitem
+ format.authors output
+ new.block
+ format.title "title" output.check
+ howpublished address new.block.checkb
+ howpublished output
+ address output
+ format.date output
+ new.block
+ note output
+ fin.entry
+}
+
+FUNCTION {inbook}
+{ output.bibitem
+ author empty$
+ { format.editors "author and editor" output.check }
+ { format.authors output.nonnull
+ crossref missing$
+ { "author and editor" editor either.or.check }
+ 'skip$
+ if$
+ }
+ if$
+ new.block
+ format.btitle "title" output.check
+ crossref missing$
+ { format.bvolume output
+ format.chapter.pages "chapter and pages" output.check
+ new.block
+ format.number.series output
+ new.sentence
+ publisher "publisher" output.check
+ address output
+ }
+ { format.chapter.pages "chapter and pages" output.check
+ new.block
+ format.book.crossref output.nonnull
+ }
+ if$
+ format.edition output
+ format.date "year" output.check
+ new.block
+ note output
+ fin.entry
+}
+
+FUNCTION {incollection}
+{ output.bibitem
+ format.authors "author" output.check
+ new.block
+ format.title "title" output.check
+ new.block
+ crossref missing$
+ { format.in.ed.booktitle "booktitle" output.check
+ format.bvolume output
+ format.number.series output
+ format.chapter.pages output
+ new.sentence
+ publisher "publisher" output.check
+ address output
+ format.edition output
+ format.date "year" output.check
+ }
+ { format.incoll.inproc.crossref output.nonnull
+ format.chapter.pages output
+ }
+ if$
+ new.block
+ note output
+ fin.entry
+}
+
+FUNCTION {inproceedings}
+{ output.bibitem
+ format.authors "author" output.check
+ new.block
+ format.title "title" output.check
+ new.block
+ crossref missing$
+ { format.in.ed.booktitle "booktitle" output.check
+ format.bvolume output
+ format.number.series output
+ format.pages output
+ address empty$
+ { organization publisher new.sentence.checkb
+ organization output
+ publisher output
+ format.date "year" output.check
+ }
+ { address output.nonnull
+ format.date "year" output.check
+ new.sentence
+ organization output
+ publisher output
+ }
+ if$
+ }
+ { format.incoll.inproc.crossref output.nonnull
+ format.pages output
+ }
+ if$
+ new.block
+ note output
+ fin.entry
+}
+
+FUNCTION {conference} { inproceedings }
+
+FUNCTION {manual}
+{ output.bibitem
+ author empty$
+ { organization empty$
+ 'skip$
+ { organization output.nonnull
+ address output
+ }
+ if$
+ }
+ { format.authors output.nonnull }
+ if$
+ new.block
+ format.btitle "title" output.check
+ author empty$
+ { organization empty$
+ { address new.block.checka
+ address output
+ }
+ 'skip$
+ if$
+ }
+ { organization address new.block.checkb
+ organization output
+ address output
+ }
+ if$
+ format.edition output
+ format.date output
+ new.block
+ note output
+ fin.entry
+}
+
+FUNCTION {mastersthesis}
+{ output.bibitem
+ format.authors "author" output.check
+ new.block
+ format.title "title" output.check
+ new.block
+ "Master's thesis" format.thesis.type output.nonnull
+ school "school" output.check
+ address output
+ format.date "year" output.check
+ new.block
+ note output
+ fin.entry
+}
+
+FUNCTION {misc}
+{ output.bibitem
+ format.authors output
+ title howpublished new.block.checkb
+ format.title output
+ howpublished new.block.checka
+ howpublished output
+ format.date output
+ new.block
+ note output
+ fin.entry
+ empty.misc.check
+}
+
+FUNCTION {phdthesis}
+{ output.bibitem
+ format.authors "author" output.check
+ new.block
+ format.btitle "title" output.check
+ new.block
+ "PhD thesis" format.thesis.type output.nonnull
+ school "school" output.check
+ address output
+ format.date "year" output.check
+ new.block
+ note output
+ fin.entry
+}
+
+FUNCTION {proceedings}
+{ output.bibitem
+ editor empty$
+ { organization output }
+ { format.editors output.nonnull }
+ if$
+ new.block
+ format.btitle "title" output.check
+ format.bvolume output
+ format.number.series output
+ address empty$
+ { editor empty$
+ { publisher new.sentence.checka }
+ { organization publisher new.sentence.checkb
+ organization output
+ }
+ if$
+ publisher output
+ format.date "year" output.check
+ }
+ { address output.nonnull
+ format.date "year" output.check
+ new.sentence
+ editor empty$
+ 'skip$
+ { organization output }
+ if$
+ publisher output
+ }
+ if$
+ new.block
+ note output
+ fin.entry
+}
+
+FUNCTION {techreport}
+{ output.bibitem
+ format.authors "author" output.check
+ new.block
+ format.title "title" output.check
+ new.block
+ format.tr.number output.nonnull
+ institution "institution" output.check
+ address output
+ format.date "year" output.check
+ new.block
+ note output
+ fin.entry
+}
+
+FUNCTION {unpublished}
+{ output.bibitem
+ format.authors "author" output.check
+ new.block
+ format.title "title" output.check
+ new.block
+ note "note" output.check
+ format.date output
+ fin.entry
+}
+
+FUNCTION {default.type} { misc }
+
+MACRO {jan} {"January"}
+
+MACRO {feb} {"February"}
+
+MACRO {mar} {"March"}
+
+MACRO {apr} {"April"}
+
+MACRO {may} {"May"}
+
+MACRO {jun} {"June"}
+
+MACRO {jul} {"July"}
+
+MACRO {aug} {"August"}
+
+MACRO {sep} {"September"}
+
+MACRO {oct} {"October"}
+
+MACRO {nov} {"November"}
+
+MACRO {dec} {"December"}
+
+MACRO {acmcs} {"ACM Computing Surveys"}
+
+MACRO {acta} {"Acta Informatica"}
+
+MACRO {cacm} {"Communications of the ACM"}
+
+MACRO {ibmjrd} {"IBM Journal of Research and Development"}
+
+MACRO {ibmsj} {"IBM Systems Journal"}
+
+MACRO {ieeese} {"IEEE Transactions on Software Engineering"}
+
+MACRO {ieeetc} {"IEEE Transactions on Computers"}
+
+MACRO {ieeetcad}
+ {"IEEE Transactions on Computer-Aided Design of Integrated Circuits"}
+
+MACRO {ipl} {"Information Processing Letters"}
+
+MACRO {jacm} {"Journal of the ACM"}
+
+MACRO {jcss} {"Journal of Computer and System Sciences"}
+
+MACRO {scp} {"Science of Computer Programming"}
+
+MACRO {sicomp} {"SIAM Journal on Computing"}
+
+MACRO {tocs} {"ACM Transactions on Computer Systems"}
+
+MACRO {tods} {"ACM Transactions on Database Systems"}
+
+MACRO {tog} {"ACM Transactions on Graphics"}
+
+MACRO {toms} {"ACM Transactions on Mathematical Software"}
+
+MACRO {toois} {"ACM Transactions on Office Information Systems"}
+
+MACRO {toplas} {"ACM Transactions on Programming Languages and Systems"}
+
+MACRO {tcs} {"Theoretical Computer Science"}
+
+READ
+
+FUNCTION {sortify}
+{ purify$
+ "l" change.case$
+}
+
+INTEGERS { len }
+
+FUNCTION {chop.word}
+{ 's :=
+ 'len :=
+ s #1 len substring$ =
+ { s len #1 + global.max$ substring$ }
+ 's
+ if$
+}
+
+FUNCTION {sort.format.names}
+{ 's :=
+ #1 'nameptr :=
+ ""
+ s num.names$ 'numnames :=
+ numnames 'namesleft :=
+ { namesleft #0 > }
+ { nameptr #1 >
+ { " " * }
+ 'skip$
+ if$
+ s nameptr "{vv{ } }{ll{ }}{ ff{ }}{ jj{ }}" format.name$ 't :=
+ nameptr numnames = t "others" = and
+ { "et al" * }
+ { t sortify * }
+ if$
+ nameptr #1 + 'nameptr :=
+ namesleft #1 - 'namesleft :=
+ }
+ while$
+}
+
+FUNCTION {sort.format.title}
+{ 't :=
+ "A " #2
+ "An " #3
+ "The " #4 t chop.word
+ chop.word
+ chop.word
+ sortify
+ #1 global.max$ substring$
+}
+
+FUNCTION {author.sort}
+{ author empty$
+ { key empty$
+ { "to sort, need author or key in " cite$ * warning$
+ ""
+ }
+ { key sortify }
+ if$
+ }
+ { author sort.format.names }
+ if$
+}
+
+FUNCTION {author.editor.sort}
+{ author empty$
+ { editor empty$
+ { key empty$
+ { "to sort, need author, editor, or key in " cite$ * warning$
+ ""
+ }
+ { key sortify }
+ if$
+ }
+ { editor sort.format.names }
+ if$
+ }
+ { author sort.format.names }
+ if$
+}
+
+FUNCTION {author.organization.sort}
+{ author empty$
+ { organization empty$
+ { key empty$
+ { "to sort, need author, organization, or key in " cite$ * warning$
+ ""
+ }
+ { key sortify }
+ if$
+ }
+ { "The " #4 organization chop.word sortify }
+ if$
+ }
+ { author sort.format.names }
+ if$
+}
+
+FUNCTION {editor.organization.sort}
+{ editor empty$
+ { organization empty$
+ { key empty$
+ { "to sort, need editor, organization, or key in " cite$ * warning$
+ ""
+ }
+ { key sortify }
+ if$
+ }
+ { "The " #4 organization chop.word sortify }
+ if$
+ }
+ { editor sort.format.names }
+ if$
+}
+
+FUNCTION {presort}
+{ type$ "book" =
+ type$ "inbook" =
+ or
+ 'author.editor.sort
+ { type$ "proceedings" =
+ 'editor.organization.sort
+ { type$ "manual" =
+ 'author.organization.sort
+ 'author.sort
+ if$
+ }
+ if$
+ }
+ if$
+ " "
+ *
+ year field.or.null sortify
+ *
+ " "
+ *
+ title field.or.null
+ sort.format.title
+ *
+ #1 entry.max$ substring$
+ 'sort.key$ :=
+}
+
+ITERATE {presort}
+
+SORT
+
+STRINGS { longest.label }
+
+INTEGERS { number.label longest.label.width }
+
+FUNCTION {initialize.longest.label}
+{ "" 'longest.label :=
+ #1 'number.label :=
+ #0 'longest.label.width :=
+}
+
+FUNCTION {longest.label.pass}
+{ number.label int.to.str$ 'label :=
+ number.label #1 + 'number.label :=
+ label width$ longest.label.width >
+ { label 'longest.label :=
+ label width$ 'longest.label.width :=
+ }
+ 'skip$
+ if$
+}
+
+EXECUTE {initialize.longest.label}
+
+ITERATE {longest.label.pass}
+
+FUNCTION {begin.bib}
+{ preamble$ empty$
+ 'skip$
+ { preamble$ write$ newline$ }
+ if$
+ "\begin{thebibliography}{" longest.label * "}" * write$ newline$
+}
+
+EXECUTE {begin.bib}
+
+EXECUTE {init.state.consts}
+
+ITERATE {call.type$}
+
+FUNCTION {end.bib}
+{ newline$
+ "\end{thebibliography}" write$ newline$
+}
+
+EXECUTE {end.bib}
SELECT U&'\0441\043B\043E\043D'
FROM U&"\0441\043B\043E\043D";
+-- Escapes
+SELECT E'1\n2\n3';
+
+-- DO example from the PostgreSQL documentation
+/*
+ * PostgreSQL is Copyright © 1996-2016 by the PostgreSQL Global Development Group.
+ *
+ * Postgres95 is Copyright © 1994-5 by the Regents of the University of California.
+ *
+ * Permission to use, copy, modify, and distribute this software and its
+ * documentation for any purpose, without fee, and without a written agreement
+ * is hereby granted, provided that the above copyright notice and this paragraph
+ * and the following two paragraphs appear in all copies.
+ *
+ * IN NO EVENT SHALL THE UNIVERSITY OF CALIFORNIA BE LIABLE TO ANY PARTY FOR
+ * DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING
+ * LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS DOCUMENTATION,
+ * EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ *
+ * THE UNIVERSITY OF CALIFORNIA SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING,
+ * BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE. THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS-IS" BASIS,
+ * AND THE UNIVERSITY OF CALIFORNIA HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE,
+ * SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
+ */
+DO $$DECLARE r record;
+BEGIN
+ FOR r IN SELECT table_schema, table_name FROM information_schema.tables
+ WHERE table_type = 'VIEW' AND table_schema = 'public'
+ LOOP
+ EXECUTE 'GRANT ALL ON ' || quote_ident(r.table_schema) || '.' || quote_ident(r.table_name) || ' TO webuser';
+ END LOOP;
+END$$;
--- /dev/null
+# This is a sample RNC file from the tutorial for the 2003 Working Draft
+# http://relaxng.org/compact-tutorial-20030326.html
+
+element html {
+ element head {
+ element title { text }
+ },
+ element body {
+ element table {
+ attribute class { "addressBook" },
+ element tr {
+ attribute class { "card" },
+ element td {
+ attribute class { "name" },
+ mixed {
+ element span {
+ attribute class { "givenName" },
+ text
+ }?,
+ element span {
+ attribute class { "familyName" },
+ text
+ }?
+ }
+ },
+ element td {
+ attribute class { "email" },
+ text
+ }
+ }+
+ }
+ }
+}
--- /dev/null
+This is an example BibTeX file.
+This text is a comment.
+
+@preamble{"%%% example BibTeX file"}
+
+@Preamble{"\newcommand{\noopsort}[1]{} "
+ "\newcommand{\noopsort}[1]{} "}
+
+@String{SCI = "Science"}
+
+@STRING{JFernandez = "Fernandez, Julio M."}
+@StRiNg{HGaub = "Gaub, Hermann E."}
+@string{MGautel = "Gautel, Mathias"}
+@String{FOesterhelt = "Oesterhelt, Filipp"}
+@String{MRief = "Rief, Matthias"}
+
+@Article{rief97b,
+ author = MRief #" and "# MGautel #" and "# FOesterhelt
+ #" and "# JFernandez #" and "# HGaub,
+ title = "Reversible Unfolding of Individual Titin
+ Immunoglobulin Domains by {AFM}",
+ journal = SCI,
+ volume = 276,
+ number = 5315,
+ pages = "1109--1112",
+ year = 1997,
+ doi = "10.1126/science.276.5315.1109",
+ URL = "http://www.sciencemag.org/cgi/content/abstract/276/5315/1109",
+ eprint = "http://www.sciencemag.org/cgi/reprint/276/5315/1109.pdf",
+}
+
+
+Parens can be used instead of braces:
+
+@ARTICLE(ruckenstein-diffusion,
+ author = "Liu, Hongquin and Ruckenstein, Eli",
+ language = "english",
+ title = "Predicting the Diffusion Coefficient in Supercritical Fluids",
+ journal = "Ind. Eng. Chem. Res.",
+ volume = "36",
+ year = "1997",
+ pages = "888-895"
+)
+
+@book{
+ viktorov-methods,
+ author = "Викторов, Михаил Маркович",
+ publisher = "Л.: <<Химия>>",
+ title = "Методы вычисления физико-химических величин и прикладные расчёты",
+ language = "russian",
+ year = "1977",
+ isbn = "000-0000000000",
+}
+
+@comment{jackson-commented-out,
+ author = "Jackson, P\'eter",
+ publisher = "Some Publisher",
+ language = "english",
+ title = "Some Title",
+ series = "Some series",
+ booktitle = "Commented Out",
+ number = "3",
+ edition = "Second",
+ year = "1933",
+ pages = "44--59"
+}
+
+@booklet{test-booklet,
+ author = "de Last, Jr., First Middle",
+ language = "english",
+ title = "Just a booklet",
+ year = 2006,
+ month = jan,
+ address = "Moscow",
+ howpublished = "Published by Foo"
+}
+
--- /dev/null
+# Examples taken from http://crystal-lang.org/docs/
+# Copyright 2012-2016 Manas Technology Solutions.
+
+
+require "http/server"
+
+server = HTTP::Server.new(8080) do |context|
+ context.response.content_type = "text/plain"
+ context.response.print "Hello world! The time is #{Time.now}"
+end
+
+puts "Listening on http://0.0.0.0:8080"
+server.listen
+
+
+module HTTP
+ class RequestHandler
+ end
+end
+
+alias NumericValue = Float32 | Float64 | Int32 | Int64
+
+enum Time::DayOfWeek
+end
+
+
+$global_greeting = "Hello world"
+
+class Greeting
+ @@default_greeting = "Hello world"
+
+ def initialize(@custom_greeting = nil)
+ end
+
+ def print_greeting
+ greeting = @custom_greeting || @@default_greeting
+ puts greeting
+ end
+end
+
+
+LUCKY_NUMBERS = [3, 7, 11]
+DOCUMENTATION_URL = "http://crystal-lang.org/docs"
+
+
+module Scorecard
+ class Parser
+ def parse(score_text)
+ begin
+ score_text.scan(SCORE_PATTERN) do |match|
+ handle_match(match)
+ end
+ rescue err : ParseError
+ # handle error ...
+ end
+ end
+ end
+end
+
+
+module Money
+ CURRENCIES = {
+ "EUR" => 1.0,
+ "ARS" => 10.55,
+ "USD" => 1.12,
+ "JPY" => 134.15,
+ }
+
+ class Amount
+ getter :currency, :value
+
+ def initialize(@currency, @value)
+ end
+ end
+
+ class CurrencyConversion
+ def initialize(@amount, @target_currency)
+ end
+
+ def amount
+ # implement conversion ...
+ end
+ end
+end
+
+
+i = 0
+while i < 10
+ proc = ->(x : Int32) do
+ spawn do
+ puts(x)
+ end
+ end
+ proc.call(i)
+ i += 1
+end
+
+Fiber.yield
+
+
+# A buffered channel of capacity 2
+channel = Channel(Int32).new(2)
+
+spawn do
+ channel.send(1)
+ channel.send(2)
+ channel.send(3)
+end
+
+3.times do |i|
+ puts channel.receive
+end
+
+
+class MyDictionary(K, V)
+end
+
+
+MyBox.new(1) #:: MyBox(Int32)
+MyBox.new("hello") #:: MyBox(String)
+
+
+module Moo(T)
+ def t
+ T
+ end
+end
+
+class Foo(U)
+ include Moo(U)
+
+ def initialize(@value : U)
+ end
+end
+
+foo = Foo.new(1)
+foo.t # Int32
+
+
+class Parent(T)
+end
+
+class Int32Child < Parent(Int32)
+end
+
+class GenericChild(T) < Parent(T)
+end
+
+
+class Person
+end
+
+
+a = 1
+ptr = pointerof(a)
+ptr[100_000] = 2 # undefined behaviour, probably a segmentation fault
+
+
+alias Int32OrString = Int32 | String
+
+
+alias Int32OrNil = Int32?
+
+
+alias Int32OrNil_ = Int32 | ::Nil
+
+
+alias Int32Ptr = Int32*
+
+
+alias Int32Ptr_ = Pointer(Int32)
+
+
+alias Int32_8 = Int32[8]
+
+
+alias Int32_8_ = StaticArray(Int32, 8)
+
+
+alias Int32StringTuple = {Int32, String}
+
+
+alias Int32StringTuple_ = Tuple(Int32, String)
+
+
+alias Int32ToString = Int32 -> String
+
+
+alias Int32ToString_ = Proc(Int32, String)
+
+
+alias ProcThatReturnsInt32 = -> Int32
+
+
+alias Int32AndCharToString = Int32, Char -> String
+
+
+alias ComplexProc = (Int32 -> Int32) -> String
+
+
+def foo(x : Int32)
+ "instance"
+end
+
+def foo(x : Int32.class)
+ "class"
+end
+
+foo 1 # "instance"
+foo Int32 # "class"
+
+
+class Parent
+end
+
+class Child1 < Parent
+end
+
+class Child2 < Parent
+end
+
+ary = [] of Parent.class
+ary << Child1
+ary << Child2
+
+
+# Same as not specifying a restriction, not very useful
+def foo(x : _)
+end
+
+# A bit more useful: any two-argument Proc that returns an Int32:
+def foo(x : _, _ -> Int32)
+end
+
+
+#alias SameAsInt32 = typeof(2)
+#alias Int32OrString_ = typeof(1, "a")
+
+
+class Person
+ def initialize(name)
+ @name = name
+ @age = 0
+ end
+
+ def name
+ @name
+ end
+
+ def age
+ @age
+ end
+end
+
+
+john = Person.new "John"
+peter = Person.new "Peter"
+
+john.name #=> "John"
+john.age #=> 0
+
+peter.name #=> "Peter"
+
+
+class Person
+ def self.new(name)
+ instance = Person.allocate
+ instance.initialize(name)
+ instance
+ end
+end
+
+
+if a.is_a?(String)
+ # here a is a String
+end
+
+if b.is_a?(Number)
+ # here b is a Number
+end
+
+
+a = some_condition ? 1 : "hello"
+# a : Int32 | String
+
+if a.is_a?(Number)
+ # a : Int32
+else
+ # a : String
+end
+
+
+if a.is_a?(String) && b.is_a?(Number)
+ # here a is a String and b is a Number
+end
+
+
+a.+(b)
+
+
+struct Vector2
+ getter x, y
+
+ def initialize(@x, @y)
+ end
+
+ def +(other)
+ Vector2.new(x + other.x, y + other.y)
+ end
+end
+
+v1 = Vector2.new(1, 2)
+v2 = Vector2.new(3, 4)
+v1 + v2 #=> Vector2(@x=4, @y=6)
+
+
+
+
+struct Vector2
+ def -
+ Vector2.new(-x, -y)
+ end
+end
+
+v1 = Vector2.new(1, 2)
+-v1 #=> Vector2(@x=-1, @y=-2)
+
+
+
+
+
+class MyArray
+ def [](index)
+ # ...
+ end
+
+ def [](index1, index2, index3)
+ # ...
+ end
+
+ def []=(index, value)
+ # ...
+ end
+end
+
+array = MyArray.new
+
+array[1] # invokes the first method
+array[1, 2, 3] # invokes the second method
+array[1] = 2 # invokes the third method
+
+array.[](1) # invokes the first method
+array.[](1, 2, 3) # invokes the second method
+array.[]=(1, 2) # invokes the third method
+
+
+raise "OH NO!"
+raise Exception.new("Some error")
+
+
+class MyException < Exception
+end
+
+
+begin
+ raise MyException.new("OH NO!")
+rescue ex : MyException
+ puts "Rescued MyException: #{ex.message}"
+end
+
+
+begin
+ # ...
+rescue ex : MyException | MyOtherException
+ # only MyException or MyOtherException
+rescue
+ # any other kind of exception
+ensure
+ puts "Cleanup..."
+end
+
+
+def some_method
+ something_dangerous
+rescue
+ # execute if an exception is raised
+end
+
+
+array = [1, 2, 3]
+array[4]  # raises IndexError
+array[4]? # returns nil because the index is out of bounds
+
+
+def some_proc(&block : Int32 -> Int32)
+ block
+end
+
+x = 0
+proc = ->(i : Int32) { x += i }
+proc = some_proc(&proc)
+proc.call(1) #=> 1
+proc.call(10) #=> 11
+x #=> 11
+
+
+def add(x, y)
+ x + y
+end
+
+adder = ->add(Int32, Int32)
+adder.call(1, 2) #=> 3
+
+
+module Curses
+ class Window
+ end
+end
+
+Curses::Window.new
+
+
+module ItemsSize
+ def size
+ items.size
+ end
+end
+
+class Items
+ include ItemsSize
+
+ def items
+ [1, 2, 3]
+ end
+end
+
+items = Items.new
+items.size #=> 3
+
+
+module Base64
+ extend self
+
+ def encode64(string)
+ # ...
+ end
+
+ def decode64(string)
+ # ...
+ end
+end
+
+Base64.encode64 "hello" #=> "aGVsbG8="
+
+
+if some_condition
+ a = 1
+else
+ a = "hello"
+end
+
+a_as_int = a as Int32
+a_as_int.abs # works, compiler knows that a_as_int is Int32
+
+
+ptr = Pointer(Int32).malloc(1)
+ptr as Int8* #:: Pointer(Int8)
+
+
+array = [1, 2, 3]
+
+# object_id returns the address of an object in memory,
+# so we create a pointer with that address
+ptr = Pointer(Void).new(array.object_id)
+
+# Now we cast that pointer to the same type, and
+# we should get the same value
+array2 = ptr as Array(Int32)
+array2.same?(array) #=> true
+
+
+a = 1
+b = a as Int32 | Float64
+b #:: Int32 | Float64
+
+
+ary = [1, 2, 3]
+
+# We want to create an array [1, 2, 3] of Int32 | Float64
+ary2 = ary.map { |x| x as Int32 | Float64 }
+
+ary2 #:: Array(Int32 | Float64)
+ary2 << 1.5 # OK
+
+
+class Person
+ def initialize(@name)
+ end
+
+ def name
+ @name
+ end
+end
+
+a = [] of Person
+x = a.map { |f| f.name } # Error: can't infer block return type
+
+
+a = [] of Person
+x = a.map { |f| f.name as String } # OK
+
+
+Person.new "John"
+
+a = [] of Person
+x = a.map { |f| f.name } # OK
+
+
+loop do
+ do_something
+ break if some_condition
+end
+
+
+class Point
+ def initialize(@x, @y)
+ end
+end
+
+Point.new 1, 2
+
+# 2 x Int32 = 2 x 4 = 8, plus 4 bytes for the type id
+instance_sizeof(Point) #=> 12
+
+
+a = 1
+while a < 5
+ a += 1
+ if a == 3
+ next
+ end
+ puts a
+end
+# The above prints the numbers 2, 4 and 5
+
+
+lib C
+ # In C: double cos(double x)
+ fun cos(value : Float64) : Float64
+
+ fun getch : Int32
+
+ fun srand(seed : UInt32)
+
+ fun exit(status : Int32) : NoReturn
+
+ fun printf(format : UInt8*, ...) : Int32
+end
+
+C.cos(1.5) #=> 0.0707372
+C.srand(1_u32)
+
+a = 1
+b = 2
+C.printf "%d + %d = %d\n", a, b, a + b
+
+
+lib LibSDL
+ fun init = SDL_Init(flags : UInt32) : Int32
+end
+
+lib LLVMIntrinsics
+ fun ceil_f32 = "llvm.ceil.f32"(value : Float32) : Float32
+end
+
+lib MyLib
+ fun my_fun(some_size : LibC::SizeT)
+end
+
+@[Link("pcre")]
+lib LibPCRE
+end
+
+
+lib C
+ ifdef x86_64
+ alias SizeT = UInt64
+ else
+ alias SizeT = UInt32
+ end
+
+ fun memcmp(p1 : Void*, p2 : Void*, size : C::SizeT) : Int32
+end
+
+
+lib X
+ enum SomeEnum
+ Ten = 10
+ Twenty = 10 * 2
+ ThirtyTwo = 1 << 5
+ end
+end
+
+
+lib X
+ enum SomeEnum
+ A = 1_u32
+ end
+end
+
+
+X::SomeEnum::Zero #=> 0_i8
+X::SomeEnum::Two #=> 2_i8
+
+
+lib X
+ fun callback(f : Int32 -> Int32)
+end
+
+
+f = ->(x : Int32) { x + 1 }
+X.callback(f)
+
+
+X.callback ->(x) { x + 1 }
+
+
+X.callback nil
+
+
+lib LibFoo
+ fun store_callback(callback : ->)
+ fun execute_callback
+end
+
+LibFoo.store_callback ->{ raise "OH NO!" }
+LibFoo.execute_callback
+
+
+lib LibFoo
+ fun store_callback(callback : ->)
+
+ @[Raises]
+ fun execute_callback
+end
+
+
+@[Link("pcre")]
+lib PCRE
+ INFO_CAPTURECOUNT = 2
+end
+
+PCRE::INFO_CAPTURECOUNT #=> 2
+
+
+lib U
+ # In C:
+ #
+ # union IntOrFloat {
+ # int some_int;
+ # double some_float;
+ # };
+ union IntOrFloat
+ some_int : Int32
+ some_float : Float64
+ end
+end
+
+
+value = U::IntOrFloat.new
+
+
+value = uninitialized U::IntOrFloat
+value.some_int #=> some garbage value
+
+
+value = U::IntOrFloat.new
+value.some_int = 1
+value.some_int #=> 1
+value.some_float #=> 4.94066e-324
+
+
+def change_it(value)
+ value.some_int = 1
+end
+
+value = U::IntOrFloat.new
+change_it value
+value.some_int #=> 0
+
+
+lib C
+ # In C:
+ #
+ # struct TimeZone {
+ # int minutes_west;
+ # int dst_time;
+ # };
+ struct TimeZone
+ minutes_west : Int32
+ dst_time : Int32
+ end
+end
+
+
+lib C
+ # This is a forward declaration
+ struct Node
+ end
+
+ struct Node
+ node : Node*
+ end
+end
+
+
+tz = C::TimeZone.new
+
+
+tz = uninitialized C::TimeZone
+tz.minutes_west #=> some garbage value
+
+
+tz = C::TimeZone.new
+tz.minutes_west = 1
+tz.minutes_west #=> 1
+
+
+tz = C::TimeZone.new minutes_west: 1, dst_time: 2
+tz.minutes_west #=> 1
+tz.dst_time #=> 2
+
+
+def change_it(tz)
+ tz.minutes_west = 1
+end
+
+tz = C::TimeZone.new
+change_it tz
+tz.minutes_west #=> 0
+
+
+lib C
+ $errno : Int32
+end
+
+
+C.errno #=> some value
+C.errno = 0
+C.errno #=> 0
+
+
+lib C
+ @[ThreadLocal]
+ $errno : Int32
+end
+
+
+lib C
+ fun waitpid(pid : Int32, status_ptr : Int32*, options : Int32) : Int32
+end
+
+
+status_ptr = uninitialized Int32
+
+C.waitpid(pid, pointerof(status_ptr), options)
+
+
+C.waitpid(pid, out status_ptr, options)
+
+
+lib X
+ type CInt = Int32
+end
+
+
+ifdef x86_64
+ # some specific code for 64 bits platforms
+else
+ # some specific code for non-64 bits platforms
+end
+
+
+ifdef linux && x86_64
+ # some specific code for linux 64 bits
+end
+
+
+lib C
+ ifdef linux
+ struct SomeStruct
+ some_field : Int32
+ end
+ else
+ struct SomeStruct
+ some_field : Int64
+ end
+ end
+end
+
+
+# Assigns to a local variable
+local = 1
+
+# Assigns to a global variable
+$global = 4
+
+class Testing
+ # Assigns to an instance variable
+ @instance = 2
+
+ # Assigns to a class variable
+ @@class = 3
+end
+
+
+local += 1 # same as: local = local + 1
+
+# The above is valid with these operators:
+# +, -, *, /, %, |, &, ^, **, <<, >>
+
+local ||= 1 # same as: local || (local = 1)
+local &&= 1 # same as: local && (local = 1)
+
+
+# A setter
+person.name=("John")
+
+# The above can be written as:
+person.name = "John"
+
+# An indexed assignment
+objects.[]=(2, 3)
+
+# The above can be written as:
+objects[2] = 3
+
+# Not assignment-related, but also syntax sugar:
+objects.[](2, 3)
+
+# The above can be written as:
+objects[2, 3]
+
+
+person.age += 1 # same as: person.age = person.age + 1
+
+person.name ||= "John" # same as: person.name || (person.name = "John")
+person.name &&= "John" # same as: person.name && (person.name = "John")
+
+objects[1] += 2 # same as: objects[1] = objects[1] + 2
+
+objects[1] ||= 2 # same as: objects[1]? || (objects[1] = 2)
+objects[1] &&= 2 # same as: objects[1]? && (objects[1] = 2)
+
+
+alias PInt32 = Pointer(Int32)
+
+ptr = PInt32.malloc(1) # : Pointer(Int32)
+
+
+alias RecArray = Array(Int32) | Array(RecArray)
+
+ary = [] of RecArray
+ary.push [1, 2, 3]
+ary.push ary
+ary #=> [[1, 2, 3], [...]]
+
+
+module Json
+ alias Type = Nil |
+ Bool |
+ Int64 |
+ Float64 |
+ String |
+ Array(Type) |
+ Hash(String, Type)
+end
+
+
+a = 1
+if a > 0
+ a = 10
+end
+a #=> 10
+
+b = 1
+if b > 2
+ b = 10
+else
+ b = 20
+end
+b #=> 20
+
+
+if some_condition
+ do_something
+elsif some_other_condition
+ do_something_else
+else
+ do_that
+end
+
+
+a = 1
+if some_condition
+ a = "hello"
+else
+ a = true
+end
+# a : String | Bool
+
+b = 1
+if some_condition
+ b = "hello"
+end
+# b : Int32 | String
+
+if some_condition
+ c = 1
+else
+ c = "hello"
+end
+# c : Int32 | String
+
+if some_condition
+ d = 1
+end
+# d : Int32 | Nil
+
+
+a = 1
+if some_condition
+ a = "hello"
+ # a : String
+ a.size
+end
+# a : String | Int32
+
+
+if some_condition
+ e = 1
+else
+ e = "hello"
+ # e : String
+ return
+end
+# e : Int32
+
+
+enum Color : UInt8
+ Red # 0
+ Green # 1
+  Blue = 5 # explicitly set to 5
+ Yellow # 6 (5 + 1)
+
+ def red?
+ self == Color::Red
+ end
+end
+
+Color::Red.value #:: UInt8
+
+
+@[Flags]
+enum IOMode
+ Read # 1
+ Write # 2
+ Async # 4
+end
+
+
+IOMode::None.value #=> 0
+IOMode::All.value #=> 7
+
+
+puts(Color::Red) # prints "Red"
+puts(IOMode::Write | IOMode::Async) # prints "Write, Async"
+
+
+puts Color.new(1) #=> prints "Green"
+
+
+puts Color.new(10) #=> prints "10"
+
+
+Color::Red.red? #=> true
+Color::Blue.red? #=> false
+
+
+def paint(color : Color)
+ case color
+ when Color::Red
+ # ...
+ else
+    # Unusual, but it can still happen
+ raise "unknown color: #{color}"
+ end
+end
+
+paint Color::Red
+
+
+def paint(color : Symbol)
+ case color
+ when :red
+ # ...
+ else
+ raise "unknown color: #{color}"
+ end
+end
+
+paint :red
+
+
+name = "Crystal"
+age = 1
+
+
+flower = "Tulip"
+# At this point 'flower' is a String
+
+flower = 1
+# At this point 'flower' is an Int32
+
+
+class Foo
+ def finalize
+ # Invoked when Foo is garbage-collected
+ puts "Bye bye from #{self}!"
+ end
+end
+
+# Prints "Bye bye ...!" forever
+loop do
+ Foo.new
+end
+
+
+# Defines a method in the program
+def add(x, y)
+ x + y
+end
+
+# Invokes the add method in the program
+add(1, 2) #=> 3
+
+
+def even?(num)
+ if num % 2 == 0
+ return true
+ end
+
+ return false
+end
+
+
+def add(x, y)
+ x + y
+end
+
+class Foo
+ def bar
+ # invokes the program's add method
+ add(1, 2)
+
+ # invokes Foo's baz method
+ baz(1, 2)
+ end
+
+ def baz(x, y)
+ x * y
+ end
+end
+
+
+def baz(x, y)
+ x + y
+end
+
+class Foo
+ def bar
+ baz(4, 2) #=> 2
+ ::baz(4, 2) #=> 6
+ end
+
+ def baz(x, y)
+ x - y
+ end
+end
+
+
+x = 1
+
+def add(y)
+ x + y # error: undefined local variable or method 'x'
+end
+
+add(2)
+
+
+add 1, 2 # same as add(1, 2)
+
+
+class Counter
+ @@instances = 0
+
+ def initialize
+ @@instances += 1
+ end
+
+ def self.instances
+ @@instances
+ end
+end
+
+Counter.instances #=> 0
+Counter.new
+Counter.new
+Counter.new
+Counter.instances #=> 3
+
+
+class Counter
+ def self.increment
+ @@instances += 1
+ end
+end
+
+Counter.increment # Error: undefined method '+' for Nil
+
+
+class Parent
+ @@counter = 0
+end
+
+class Child < Parent
+ def self.counter
+ @@counter
+ end
+end
+
+Child.counter #=> nil
+
+
+unless some_condition
+ then_expression
+else
+ else_expression
+end
+
+# Can also be written as a suffix
+close_door unless door_closed?
+
+
+a = 1
+b = typeof(a) #=> Int32
+
+
+typeof(1, "a", 'a') #=> (Int32 | String | Char)
+
+
+hash = {} of Int32 => String
+another_hash = typeof(hash).new #:: Hash(Int32, String)
+
+
+class Array
+ def self.elem_type(typ)
+ if typ.is_a?(Array)
+ elem_type(typ.first)
+ else
+ typ
+ end
+ end
+end
+
+nest = [1, ["b", [:c, ['d']]]]
+flat = Array(typeof(Array.elem_type(nest))).new
+typeof(nest) #=> Array(Int32 | Array(String | Array(Symbol | Array(Char))))
+typeof(flat) #=> Array(String | Int32 | Symbol | Char)
+
+
+a = 2 if some_condition
+
+
+x = 0
+proc = ->{ x += 1; x }
+proc.call #=> 1
+proc.call #=> 2
+x #=> 2
+
+
+def counter
+ x = 0
+ ->{ x += 1; x }
+end
+
+proc = counter
+proc.call #=> 1
+proc.call #=> 2
+
+
+def foo
+ yield
+end
+
+x = 1
+foo do
+ x = "hello"
+end
+x # : Int32 | String
+
+
+x = 1
+foo do
+ x = "hello"
+end
+x # : Int32 | String
+
+x = 'a'
+x # : Char
+
+
+def capture(&block)
+ block
+end
+
+x = 1
+capture { x = "hello" }
+
+x = 'a'
+x # : Int32 | String | Char
+
+
+def capture(&block)
+ block
+end
+
+x = 1
+->{ x = "hello" }
+
+x = 'a'
+x # : Int32 | String | Char
+
+
+abstract class Animal
+ # Makes this animal talk
+ abstract def talk
+end
+
+class Dog < Animal
+ def talk
+ "Woof!"
+ end
+end
+
+class Cat < Animal
+ def talk
+ "Miau"
+ end
+end
+
+class Person
+ getter pet
+
+ def initialize(@name, @pet)
+ end
+end
+
+john = Person.new "John", Dog.new
+peter = Person.new "Peter", Cat.new
+
+
+john.pet.talk #=> "Woof!"
+
+
+a = 1 > 2 ? 3 : 4
+
+# The above is the same as:
+a = if 1 > 2
+ 3
+ else
+ 4
+ end
+
+
+def some_method : String
+ "hello"
+end
+
+
+PI = 3.14
+
+module Earth
+ RADIUS = 6_371_000
+end
+
+PI #=> 3.14
+Earth::RADIUS #=> 6_371_000
+
+
+TEN = begin
+ a = 0
+ while a < 10
+ a += 1
+ end
+ a
+end
+
+TEN #=> 10
+
+
+class Person
+ getter name
+
+ def initialize(@name)
+ @age = 0
+ end
+end
+
+john = Person.new "John"
+john.name #=> "John"
+john.name.size #=> 4
+
+
+one = Person.new 1
+one.name #=> 1
+one.name + 2 #=> 3
+
+
+john = Person.new "John"
+one = Person.new 1
+
+
+john = Person.new "John"
+one = Person.new 1
+
+# Error: undefined method 'size' for Int32
+john.name.size
+
+# Error: no overload matches 'String#+' with types Int32
+john.name + 3
+
+
+john = Person.new "John"
+john.name.size
+one = Person.new 1
+
+
+class Person
+ getter name
+
+ def initialize(@name)
+ @age = 0
+ end
+
+ def address
+ @address
+ end
+
+ def address=(@address)
+ end
+end
+
+john = Person.new "John"
+john.address = "Argentina"
+
+
+# Error: undefined method 'size' for Nil
+john.address.size
+
+
+class Person
+ @age = 0
+
+ def initialize(@name)
+ end
+end
+
+
+class Person
+ @age : Int32
+
+ def initialize(@name)
+ @age = 0
+ end
+end
+
+
+a = if 2 > 1
+ 3
+ else
+ 4
+ end
+a #=> 3
+
+
+if 1 > 2
+else
+ 3
+end
+
+
+def twice(&block)
+ yield
+ yield
+end
+
+
+twice() do
+ puts "Hello!"
+end
+
+twice do
+ puts "Hello!"
+end
+
+twice { puts "Hello!" }
+
+
+def twice
+ yield 1
+ yield 2
+end
+
+twice do |i|
+ puts "Got #{i}"
+end
+
+
+twice { |i| puts "Got #{i}" }
+
+
+def many
+ yield 1, 2, 3
+end
+
+many do |x, y, z|
+ puts x + y + z
+end
+
+# Output: 6
+
+
+def many
+ yield 1, 2, 3
+end
+
+many do |x, y|
+ puts x + y
+end
+
+# Output: 3
+
+
+def twice
+ yield
+ yield
+end
+
+twice do |i|
+ puts i.inspect
+end
+
+
+def some
+ yield 1, 'a'
+ yield true, "hello"
+ yield 2
+end
+
+some do |first, second|
+ # first is Int32 | Bool
+ # second is Char | String | Nil
+end
+
+
+method do |argument|
+ argument.some_method
+end
+
+
+method(&.some_method)
+
+
+method &.some_method(arg1, arg2)
+
+
+method &.+(2)
+method &.[index]
+
+
+def twice
+ v1 = yield 1
+ puts v1
+
+ v2 = yield 2
+ puts v2
+end
+
+twice do |i|
+ i + 1
+end
+
+
+ary = [1, 2, 3]
+ary.map { |x| x + 1 } #=> [2, 3, 4]
+ary.select { |x| x % 2 == 1 } #=> [1, 3]
+
+
+def transform(value)
+ yield value
+end
+
+transform(1) { |x| x + 1 } #=> 2
+
+
+def thrice
+ puts "Before 1"
+ yield 1
+ puts "Before 2"
+ yield 2
+ puts "Before 3"
+ yield 3
+ puts "After 3"
+end
+
+thrice do |i|
+ if i == 2
+ break
+ end
+end
+
+
+def twice
+ yield 1
+ yield 2
+end
+
+twice { |i| i + 1 } #=> 3
+twice { |i| break "hello" } #=> "hello"
+
+
+value = twice do |i|
+ if i == 1
+ break "hello"
+ end
+ i + 1
+end
+value #:: Int32 | String
+
+
+values = twice { break 1, 2 }
+values #=> {1, 2}
+
+
+value = twice { break }
+value #=> nil
+
+
+def twice
+ yield 1
+ yield 2
+end
+
+twice do |i|
+ if i == 1
+ puts "Skipping 1"
+ next
+ end
+
+ puts "Got #{i}"
+end
+
+
+
+def twice
+ v1 = yield 1
+ puts v1
+
+ v2 = yield 2
+ puts v2
+end
+
+twice do |i|
+ if i == 1
+ next 10
+ end
+
+ i + 1
+end
+
+# Output
+# 10
+# 3
+
+
+class Foo
+ def one
+ 1
+ end
+
+ def yield_with_self
+ with self yield
+ end
+
+ def yield_normally
+ yield
+ end
+end
+
+def one
+ "one"
+end
+
+Foo.new.yield_with_self { one } # => 1
+Foo.new.yield_normally { one } # => "one"
+
+
+def twice
+ yield 1
+ yield 2
+end
+
+twice do |i|
+ puts "Got: #{i}"
+end
+
+
+i = 1
+puts "Got: #{i}"
+i = 2
+puts "Got: #{i}"
+
+
+3.times do |i|
+ puts i
+end
+
+
+struct Int
+ def times
+ i = 0
+ while i < self
+ yield i
+ i += 1
+ end
+ end
+end
+
+
+i = 0
+while i < 3
+ puts i
+ i += 1
+end
+
+
+class Person
+ def initialize(@name)
+ end
+
+ def greet
+ puts "Hi, I'm #{@name}"
+ end
+end
+
+class Employee < Person
+end
+
+employee = Employee.new "John"
+employee.greet # "Hi, I'm John"
+
+
+class Person
+ def initialize(@name)
+ end
+end
+
+class Employee < Person
+ def initialize(@name, @company_name)
+ end
+end
+
+Employee.new "John", "Acme" # OK
+Employee.new "Peter" # Error: wrong number of arguments
+ # for 'Employee:Class#new' (1 for 2)
+
+
+class Person
+ def greet(msg)
+ puts "Hi, #{msg}"
+ end
+end
+
+class Employee < Person
+ def greet(msg)
+ puts "Hello, #{msg}"
+ end
+end
+
+p = Person.new
+p.greet "everyone" # "Hi, everyone"
+
+e = Employee.new
+e.greet "everyone" # "Hello, everyone"
+
+
+class Person
+ def greet(msg)
+ puts "Hi, #{msg}"
+ end
+end
+
+class Employee < Person
+ def greet(msg : Int32)
+ puts "Hi, this is a number: #{msg}"
+ end
+end
+
+e = Employee.new
+e.greet "everyone" # "Hi, everyone"
+
+e.greet 1 # "Hi, this is a number: 1"
+
+
+class Person
+ def greet(msg)
+ puts "Hello, #{msg}"
+ end
+end
+
+class Employee < Person
+ def greet(msg)
+ super # Same as: super(msg)
+ super("another message")
+ end
+end
+
+
+def int_to_int(&block : Int32 -> Int32)
+ block
+end
+
+proc = int_to_int { |x| x + 1 }
+proc.call(1) #=> 2
+
+
+class Model
+ def on_save(&block)
+ @on_save_callback = block
+ end
+
+ def save
+ if callback = @on_save_callback
+ callback.call
+ end
+ end
+end
+
+model = Model.new
+model.on_save { puts "Saved!" }
+model.save # prints "Saved!"
+
+
+def some_proc(&block : Int32 ->)
+ block
+end
+
+proc = some_proc { |x| x + 1 }
+proc.call(1) # void
+
+
+def some_proc(&block : Int32 -> _)
+ block
+end
+
+proc = some_proc { |x| x + 1 }
+proc.call(1) # 2
+
+proc = some_proc { |x| x.to_s }
+proc.call(1) # "1"
+
+
+macro update_x
+ x = 1
+end
+
+x = 0
+update_x
+x #=> 1
+
+
+macro dont_update_x
+ %x = 1
+ puts %x
+end
+
+x = 0
+dont_update_x # outputs 1
+x #=> 0
+
+
+macro fresh_vars_sample(*names)
+ # First declare vars
+ {% for name, index in names %}
+ print "Declaring: ", "%name{index}", '\n'
+ %name{index} = {{index}}
+ {% end %}
+
+ # Then print them
+ {% for name, index in names %}
+ print "%name{index}: ", %name{index}, '\n'
+ {% end %}
+end
+
+fresh_vars_sample a, b, c
+
+# Sample output:
+# Declaring: __temp_255
+# Declaring: __temp_256
+# Declaring: __temp_257
+# __temp_255: 0
+# __temp_256: 1
+# __temp_257: 2
+
+
+class Object
+ macro def instance_vars_names : Array(String)
+ {{ @type.instance_vars.map &.name.stringify }}
+ end
+end
+
+class Person
+ def initialize(@name, @age)
+ end
+end
+
+person = Person.new "John", 30
+person.instance_vars_names #=> ["name", "age"]
+
+
+class Object
+ macro def has_instance_var?(name) : Bool
+ # We cannot access name inside the macro expansion here;
+ # instead we need to use the macro language to construct an array
+ # and do the inclusion check at runtime.
+ {{ @type.instance_vars.map &.name.stringify }}.includes? name
+ end
+end
+
+person = Person.new "John", 30
+person.has_instance_var?("name") #=> true
+person.has_instance_var?("birthday") #=> false
+
+
+class Parent
+ macro inherited
+ def {{@type.name.downcase.id}}
+ 1
+ end
+ end
+end
+
+class Child < Parent
+end
+
+Child.new.child #=> 1
+
+
+macro method_missing(name, args, block)
+ print "Got ", {{name.id.stringify}}, " with ", {{args.size}}, " arguments", '\n'
+end
+
+foo # Prints: Got foo with 0 arguments
+bar 'a', 'b' # Prints: Got bar with 2 arguments
+
+
+sizeof(Int32) #=> 4
+sizeof(Int64) #=> 8
+
+
+# On a 64-bit machine
+sizeof(Pointer(Int32)) #=> 8
+sizeof(String) #=> 8
+
+
+a = 1
+sizeof(typeof(a)) #=> 4
+
+
+class Foo
+ macro emphasize(value)
+ "***#{ {{value}} }***"
+ end
+
+ def yield_with_self
+ with self yield
+ end
+end
+
+Foo.new.yield_with_self { emphasize(10) } #=> "***10***"
+
+
+# This generates:
+#
+# def :foo
+# 1
+# end
+define_method :foo, 1
+
+
+macro define_method(name, content)
+ def {{name.id}}
+ {{content}}
+ end
+end
+
+# This correctly generates:
+#
+# def foo
+# 1
+# end
+define_method :foo, 1
+
+
+macro define_method(name, content)
+ def {{name}}
+ {% if content == 1 %}
+ "one"
+ {% else %}
+ {{content}}
+ {% end %}
+ end
+end
+
+define_method foo, 1
+define_method bar, 2
+
+foo #=> "one"
+bar #=> 2
+
+
+{% if env("TEST") %}
+ puts "We are in test mode"
+{% end %}
+
+
+macro define_dummy_methods(names)
+ {% for name, index in names %}
+ def {{name.id}}
+ {{index}}
+ end
+ {% end %}
+end
+
+define_dummy_methods [foo, bar, baz]
+
+foo #=> 0
+bar #=> 1
+baz #=> 2
+
+
+macro define_dummy_methods(hash)
+ {% for key, value in hash %}
+ def {{key.id}}
+ {{value}}
+ end
+ {% end %}
+end
+define_dummy_methods({foo: 10, bar: 20})
+foo #=> 10
+bar #=> 20
+
+
+{% for name, index in ["foo", "bar", "baz"] %}
+ def {{name.id}}
+ {{index}}
+ end
+{% end %}
+
+foo #=> 0
+bar #=> 1
+baz #=> 2
+
+
+macro define_dummy_methods(*names)
+ {% for name, index in names %}
+ def {{name.id}}
+ {{index}}
+ end
+ {% end %}
+end
+
+define_dummy_methods foo, bar, baz
+
+foo #=> 0
+bar #=> 1
+baz #=> 2
+
+
+macro println(*values)
+ print {{*values}}, '\n'
+end
+
+println 1, 2, 3 # outputs 123\n
+
+
+VALUES = [1, 2, 3]
+
+{% for value in VALUES %}
+ puts {{value}}
+{% end %}
+
+
+until some_condition
+ do_this
+end
+
+# The above is the same as:
+while !some_condition
+ do_this
+end
+
+
+a = some_condition ? nil : 3
+# a is Int32 or Nil
+
+if a
+ # Since the only way to get here is if a is truthy,
+ # a can't be nil. So here a is Int32.
+ a.abs
+end
+
+
+if a = some_expression
+ # here a is not nil
+end
+
+
+if a && b
+ # here both a and b are guaranteed not to be Nil
+end
+
+
+if @a
+ # here @a can be nil
+end
+
+
+# First option: assign it to a variable
+if a = @a
+ # here a can't be nil
+end
+
+# Second option: use `Object#try` found in the standard library
+@a.try do |a|
+ # here a can't be nil
+end
+
+
+if method # first call to a method that can return Int32 or Nil
+ # here we know that the first call did not return Nil
+ method # second call can still return Int32 or Nil
+end
+
+
+class Person
+ def become_older(by = 1)
+ @age += by
+ end
+end
+
+john = Person.new "John"
+john.age #=> 0
+
+john.become_older
+john.age #=> 1
+
+john.become_older 2
+john.age #=> 3
+
+
+john.become_older by: 5
+
+
+def some_method(x, y = 1, z = 2, w = 3)
+ # do something...
+end
+
+some_method 10 # x = 10, y = 1, z = 2, w = 3
+some_method 10, z: 10 # x = 10, y = 1, z = 10, w = 3
+some_method 10, w: 1, y: 2, z: 3 # x = 10, y = 2, z = 3, w = 1
+
+
+case exp
+when value1, value2
+ do_something
+when value3
+ do_something_else
+else
+ do_another_thing
+end
+
+
+case var
+when String
+ # var : String
+ do_something
+when Int32
+ # var : Int32
+ do_something_else
+else
+ # here var is neither a String nor an Int32
+ do_another_thing
+end
+
+
+case num
+when .even?
+ do_something
+when .odd?
+ do_something_else
+end
+
+
+case
+when cond1, cond2
+ do_something
+when cond3
+ do_something_else
+end
+
+
+a = 1
+a.responds_to?(:abs) #=> true
+a.responds_to?(:size) #=> false
+
+
+foo_or_bar = /foo|bar/
+heeello = /h(e+)llo/
+integer = /\d+/
+
+
+r = /foo/imx
+
+
+slash = /\//
+
+
+r = %r(regex with slash: /)
+
+
+"hello world"
+
+
+"\"" # double quote
+"\\" # backslash
+"\e" # escape
+"\f" # form feed
+"\n" # newline
+"\r" # carriage return
+"\t" # tab
+"\v" # vertical tab
+
+
+"\101" # == "A"
+"\123" # == "S"
+"\12" # == "\n"
+"\1" # string with one character with code point 1
+
+
+"\u0041" # == "A"
+
+
+"\u{41}" # == "A"
+"\u{1F52E}" # == "🔮"
+
+
+"hello
+ world" # same as "hello\n world"
+
+
+"hello " \
+"world, " \
+"no newlines" # same as "hello world, no newlines"
+
+
+"hello \
+ world, \
+ no newlines" # same as "hello world, no newlines"
+
+
+# Supports double quotes and nested parentheses
+%(hello ("world")) # same as "hello (\"world\")"
+
+# Supports double quotes and nested brackets
+%[hello ["world"]] # same as "hello [\"world\"]"
+
+# Supports double quotes and nested curlies
+%{hello {"world"}} # same as "hello {\"world\"}"
+
+# Supports double quotes and nested angles
+%<hello <"world">> # same as "hello <\"world\">"
+
+
+<<-XML
+<parent>
+ <child />
+</parent>
+XML
+
+
+# Same as "Hello\n world"
+<<-STRING
+ Hello
+ world
+ STRING
+
+# Same as " Hello\n world"
+<<-STRING
+ Hello
+ world
+ STRING
+
+
+a = 1
+b = 2
+"sum = #{a + b}" # "sum = 3"
+
+
+1.0 # Float64
+1.0_f32 # Float32
+1_f32 # Float32
+
+1e10 # Float64
+1.5e10 # Float64
+1.5e-7 # Float64
+
++1.3 # Float64
+-0.5 # Float64
+
+
+1_000_000.111_111 # better than 1000000.111111
+
+
+'a'
+'z'
+'0'
+'_'
+'あ'
+
+
+'\'' # single quote
+'\\' # backslash
+'\e' # escape
+'\f' # form feed
+'\n' # newline
+'\r' # carriage return
+'\t' # tab
+'\v' # vertical tab
+
+
+'\101' # == 'A'
+'\123' # == 'S'
+'\12' # == '\n'
+'\1' # code point 1
+
+
+'\u0041' # == 'A'
+
+
+'\u{41}' # == 'A'
+'\u{1F52E}' # == '🔮'
+
+
+{1 => 2, 3 => 4} # Hash(Int32, Int32)
+{1 => 2, 'a' => 3} # Hash(Int32 | Char, Int32)
+
+
+{} of Int32 => Int32 # same as Hash(Int32, Int32).new
+
+
+{key1: 'a', key2: 'b'} # Hash(Symbol, Char)
+
+
+{"key1": 'a', "key2": 'b'} # Hash(String, Char)
+
+
+MyType{"foo": "bar"}
+
+
+tmp = MyType.new
+tmp["foo"] = "bar"
+tmp
+
+
+tmp = MyType(typeof("foo"), typeof("bar")).new
+tmp["foo"] = "bar"
+tmp
+
+
+MyType(String, String) {"foo": "bar"}
+
+
+:hello
+:good_bye
+
+# With spaces and symbols
+:"symbol with spaces"
+
+# Ending with question and exclamation marks
+:question?
+:exclamation!
+
+# For the operators
+:+
+:-
+:*
+:/
+:==
+:<
+:<=
+:>
+:>=
+:!
+:!=
+:=~
+:!~
+:&
+:|
+:^
+:~
+:**
+:>>
+:<<
+:%
+:[]
+:[]?
+:[]=
+:<=>
+:===
+
+
+x..y # an inclusive range, in mathematics: [x, y]
+x...y # an exclusive range, in mathematics: [x, y)
+
+
+# A proc without arguments
+->{ 1 } # Proc(Int32)
+
+# A proc with one argument
+->(x : Int32) { x.to_s } # Proc(Int32, String)
+
+# A proc with two arguments:
+->(x : Int32, y : Int32) { x + y } # Proc(Int32, Int32, Int32)
+
+
+Proc(Int32, String).new { |x| x.to_s } # Proc(Int32, String)
+
+
+proc = ->(x : Int32, y : Int32) { x + y }
+proc.call(1, 2) #=> 3
+
+
+def one
+ 1
+end
+
+proc = ->one
+proc.call #=> 1
+
+
+def plus_one(x)
+ x + 1
+end
+
+proc = ->plus_one(Int32)
+proc.call(41) #=> 42
+
+
+str = "hello"
+proc = ->str.count(Char)
+proc.call('e') #=> 1
+proc.call('l') #=> 2
+
+
+tuple = {1, "hello", 'x'} # Tuple(Int32, String, Char)
+tuple[0] #=> 1 (Int32)
+tuple[1] #=> "hello" (String)
+tuple[2] #=> 'x' (Char)
+
+
+[1, 2, 3] # Array(Int32)
+[1, "hello", 'x'] # Array(Int32 | String | Char)
+
+
+[] of Int32 # same as Array(Int32).new
+
+
+%w(one two three) # ["one", "two", "three"]
+
+
+%i(one two three) # [:one, :two, :three]
+
+
+MyType{1, 2, 3}
+
+
+tmp = MyType.new
+tmp << 1
+tmp << 2
+tmp << 3
+tmp
+
+
+tmp = MyType(typeof(1, 2, 3)).new
+tmp << 1
+tmp << 2
+tmp << 3
+tmp
+
+
+MyType(Int32 | String) {1, 2, "foo"}
+
+
+nil
+
+
+1 # Int32
+
+1_i8 # Int8
+1_i16 # Int16
+1_i32 # Int32
+1_i64 # Int64
+
+1_u8 # UInt8
+1_u16 # UInt16
+1_u32 # UInt32
+1_u64 # UInt64
+
++10 # Int32
+-20 # Int32
+
+2147483648 # Int64
+9223372036854775808 # UInt64
+
+
+1_000_000 # better than 1000000
+
+
+0b1101 # == 13
+
+
+0o123 # == 83
+
+
+0xFE012D # == 16646445
+0xfe012d # == 16646445
+
+
+true # A Bool that is true
+false # A Bool that is false
+
+
+a = 1
+
+ptr = pointerof(a)
+ptr.value = 2
+
+a #=> 2
+
+
+class Point
+ def initialize(@x, @y)
+ end
+
+ def x
+ @x
+ end
+
+ def x_ptr
+ pointerof(@x)
+ end
+end
+
+point = Point.new 1, 2
+
+ptr = point.x_ptr
+ptr.value = 10
+
+point.x #=> 10
+
+
+def add(x : Number, y : Number)
+ x + y
+end
+
+# Ok
+add 1, 2 # Ok
+
+# Error: no overload matches 'add' with types Bool, Bool
+add true, false
+
+
+def add(x, y)
+ x + y
+end
+
+add true, false
+
+
+# A class that has a + method but isn't a Number
+class Six
+ def +(other)
+ 6 + other
+ end
+end
+
+# add method without type restrictions
+def add(x, y)
+ x + y
+end
+
+# OK
+add Six.new, 10
+
+# add method with type restrictions
+def restricted_add(x : Number, y : Number)
+ x + y
+end
+
+# Error: no overload matches 'restricted_add' with types Six, Int32
+restricted_add Six.new, 10
+
+
+class Person
+ def ==(other : self)
+ other.name == name
+ end
+
+ def ==(other)
+ false
+ end
+end
+
+john = Person.new "John"
+another_john = Person.new "John"
+peter = Person.new "Peter"
+
+john == another_john #=> true
+john == peter #=> false (names differ)
+john == 1 #=> false (because 1 is not a Person)
+
+
+class Person
+ def self.compare(p1 : self, p2 : self)
+ p1.name == p2.name
+ end
+end
+
+john = Person.new "John"
+peter = Person.new "Peter"
+
+Person.compare(john, peter) # OK
+
+
+def foo(x : Int32)
+end
+
+foo 1 # OK
+foo "hello" # Error
+
+
+def foo(x : Int32.class)
+end
+
+foo Int32 # OK
+foo String # Error
+
+
+def foo(x : Int32.class)
+ puts "Got Int32"
+end
+
+def foo(x : String.class)
+ puts "Got String"
+end
+
+foo Int32 # prints "Got Int32"
+foo String # prints "Got String"
+
+
+def foo(*args : Int32)
+end
+
+def foo(*args : String)
+end
+
+foo 1, 2, 3 # OK, invokes first overload
+foo "a", "b", "c" # OK, invokes second overload
+foo 1, 2, "hello" # Error
+foo() # Error
+
+
+def foo
+ # This is the empty-tuple case
+end
+
+
+def foo(x : T)
+ T
+end
+
+foo(1) #=> Int32
+foo("hello") #=> String
+
+
+def foo(x : Array(T))
+ T
+end
+
+foo([1, 2]) #=> Int32
+foo([1, "a"]) #=> (Int32 | String)
+
+
+def foo(x : T.class)
+ Array(T)
+end
+
+foo(Int32) #=> Array(Int32)
+foo(String) #=> Array(String)
+
+
+class Person
+ # Increases age by one
+ def become_older
+ @age += 1
+ end
+
+ # Increases age by the given number of years
+ def become_older(years : Int32)
+ @age += years
+ end
+
+ # Increases age by the given number of years, as a String
+ def become_older(years : String)
+ @age += years.to_i
+ end
+
+ # Yields the current age of this person and increases
+ # its age by the value returned by the block
+ def become_older
+ @age += yield @age
+ end
+end
+
+person = Person.new "John"
+
+person.become_older
+person.age #=> 1
+
+person.become_older 5
+person.age #=> 6
+
+person.become_older "12"
+person.age #=> 18
+
+person.become_older do |current_age|
+ current_age < 20 ? 10 : 30
+end
+person.age #=> 28
+
+
+a = 1
+a.is_a?(Int32) #=> true
+a.is_a?(String) #=> false
+a.is_a?(Number) #=> true
+a.is_a?(Int32 | String) #=> true
+
+
+# One for each thread
+@[ThreadLocal]
+$values = [] of Int32
+
+
+@[AlwaysInline]
+def foo
+ 1
+end
+
+
+@[NoInline]
+def foo
+ 1
+end
+
+
+lib LibFoo
+ @[CallConvention("X86_StdCall")]
+ fun foo : Int32
+end
+
+
+def sum(*elements)
+ total = 0
+ elements.each do |value|
+ total += value
+ end
+ total
+end
+
+# elements is Tuple(Int32, Int32, Int32, Float64)
+sum 1, 2, 3, 4.5
+
+
+if a.responds_to?(:abs)
+ # here a's type will be reduced to those responding to the 'abs' method
+end
+
+
+a = some_condition ? 1 : "hello"
+# a : Int32 | String
+
+if a.responds_to?(:abs)
+ # here a will be Int32, since Int32#abs exists but String#abs doesn't
+else
+ # here a will be String
+end
+
+
+if (a = @a).responds_to?(:abs)
+ # here a is guaranteed to respond to `abs`
+end
+
+
+def capture(&block)
+ block
+end
+
+def invoke(&block)
+ block.call
+end
+
+proc = capture { puts "Hello" }
+invoke(&proc) # prints "Hello"
+
+
+
+
+def capture(&block)
+ block
+end
+
+def twice
+ yield
+ yield
+end
+
+proc = capture { puts "Hello" }
+twice &proc
+
+
+twice &->{ puts "Hello" }
+
+
+def say_hello
+ puts "Hello"
+end
+
+twice &->say_hello
+
+
+def foo
+ yield 1
+end
+
+def wrap_foo
+ puts "Before foo"
+ foo do |x|
+ yield x
+ end
+ puts "After foo"
+end
+
+wrap_foo do |i|
+ puts i
+end
+
+
+def foo
+ yield 1
+end
+
+def wrap_foo(&block : Int32 -> _)
+ puts "Before foo"
+ foo(&block)
+ puts "After foo"
+end
+
+wrap_foo do |i|
+ puts i
+end
+
+
+foo_forward do |i|
+ break # error
+end
+
+
+a = 2
+while (a += 1) < 20
+ if a == 10
+ # goes to 'puts a'
+ break
+ end
+end
+puts a #=> 10
+
+
+class Person
+ private def say(message)
+ puts message
+ end
+
+ def say_hello
+ say "hello" # OK, no receiver
+ self.say "hello" # Error, self is a receiver
+
+ other = Person.new "Other"
+ other.say "hello" # Error, other is a receiver
+ end
+end
+
+
+class Employee < Person
+ def say_bye
+ say "bye" # OK
+ end
+end
+
+
+module Namespace
+ class Foo
+ protected def foo
+ puts "Hello"
+ end
+ end
+
+ class Bar
+ def bar
+ # Works, because Foo and Bar are under Namespace
+ Foo.new.foo
+ end
+ end
+end
+
+Namespace::Bar.new.bar
+
+
+class Person
+ protected def self.say(message)
+ puts message
+ end
+
+ def say_hello
+ Person.say "hello"
+ end
+end
+
+
+buffer = uninitialized UInt8[256]
a_list_comprehension() ->
[X*2 || X <- [1,2,3]].
+a_map() ->
+ M0 = #{ a => 1, b => 2 },
+ M1 = M0#{ b := 200 }.
+
+escape_sequences() ->
+ [ "\b\d\e\f\n\r\s\t\v\'\"\\"
+ , "\1\12\123" % octal
+ , "\x01" % short hex
+ , "\x{fff}" % long hex
+ , "\^a\^A" % control characters
+ ].
+
map(Fun, [H|T]) ->
[Fun(H) | map(Fun, T)];
--- /dev/null
+#!/usr/bin/env escript
+
+main(_Args) ->
+ ok.
--- /dev/null
+module &__llvm_hsail_module:1:0:$full:$large:$near;
+
+prog kernel &mmul2d(
+ kernarg_u64 %__arg_p0,
+ kernarg_u64 %__arg_p1,
+ kernarg_u64 %__arg_p2,
+ kernarg_u64 %__arg_p3)
+{
+ pragma "AMD RTI", "ARGSTART:mmul2d";
+ pragma "AMD RTI", "version:3:1:104";
+ pragma "AMD RTI", "device:generic";
+ pragma "AMD RTI", "uniqueid:1025";
+ pragma "AMD RTI", "function:1:0";
+ pragma "AMD RTI", "memory:64bitABI";
+ pragma "AMD RTI", "privateid:1";
+ pragma "AMD RTI", "ARGEND:mmul2d";
+ // BB#0: // %top
+ mov_f64 $d1, 0.0E+0;
+ gridsize_u32 $s0, 0;
+ workitemabsid_u32 $s1, 1;
+ workitemabsid_u32 $s2, 0;
+ cvt_u64_u32 $d0, $s2;
+ cvt_u64_u32 $d3, $s1;
+ cvt_u64_u32 $d4, $s0;
+ ld_kernarg_align(8)_width(all)_u64 $d2, [%__arg_p2];
+ ld_kernarg_align(8)_width(all)_u64 $d6, [%__arg_p1];
+ ld_kernarg_align(8)_width(all)_u64 $d5, [%__arg_p3];
+ ld_kernarg_align(8)_width(all)_u64 $d7, [%__arg_p0];
+ cmp_lt_b1_s64 $c0, $d5, 1;
+ cbr_b1 $c0, @BB0_3;
+ // BB#1: // %L.preheader
+ mul_u64 $d1, $d5, $d3;
+ shl_u64 $d1, $d1, 3;
+ shl_u64 $d8, $d0, 3;
+ add_u64 $d8, $d7, $d8;
+ add_u64 $d6, $d6, $d1;
+ shl_u64 $d7, $d4, 3;
+ mov_f64 $d1, 0D0000000000000000;
+
+@BB0_2:
+ // %L
+ add_u64 $d9, $d8, $d7;
+ ld_global_f64 $d8, [$d8];
+ ld_global_f64 $d10, [$d6];
+ mul_f64 $d8, $d8, $d10;
+ add_f64 $d1, $d1, $d8;
+ add_u64 $d6, $d6, 8;
+ add_u64 $d5, $d5, 18446744073709551615;
+ cmp_ne_b1_s64 $c0, $d5, 0;
+ mov_b64 $d8, $d9;
+ cbr_b1 $c0, @BB0_2;
+
+@BB0_3:
+ // %L.7
+ mul_u64 $d3, $d3, $d4;
+ add_u64 $d0, $d3, $d0;
+ shl_u64 $d0, $d0, 3;
+ add_u64 $d0, $d2, $d0;
+ st_global_f64 $d1, [$d0];
+ ret;
+};
+
--- /dev/null
+exports (main)
+
+def main(=> currentProcess) :Int as DeepFrozen:
+ traceln(`Current process: $currentProcess`)
+ "A \r \n \x00 \u1234"
+ '\u1234'
+ return 0
--- /dev/null
+load "$NCARG_ROOT/lib/ncarg/nclscripts/csm/gsn_code.ncl"
+load "$NCARG_ROOT/lib/ncarg/nclscripts/csm/gsn_csm.ncl"
+load "$NCARG_ROOT/lib/ncarg/nclscripts/csm/contributed.ncl"
+begin
+ int_num = 1
+ float_num = 0.1
+ str = "A void map"
+ array = (/1, 2, 3, 4, 5/)
+
+
+ wks = gsn_open_wks("X11", "test_for_pygments")
+
+ res = True
+ res@mpMinLonF = 90.
+ res@mpMaxLonF = 180.
+ res@mpMinLatF = 0.
+ res@mpMaxLatF = 90.
+
+ plot = gsn_csm_map_ce(wks, res)
+end
\ No newline at end of file
return $x;
}
+// Test highlighting of magic methods and variables
+class MagicClass {
+ public $magic_str;
+ public $ordinary_str;
+
+ public function __construct($some_var) {
+ $this->magic_str = __FILE__;
+ $this->ordinary_str = $some_var;
+ }
+
+ public function __toString() {
+ return $this->magic_str;
+ }
+
+ public function nonMagic() {
+ return $this->ordinary_str;
+ }
+}
+
+$magic = new MagicClass(__DIR__);
+__toString();
+$magic->nonMagic();
+$magic->__toString();
+
echo <<<EOF
Test the heredocs...
EOF;
+echo <<<"some_delimiter"
+more heredoc testing
+continues on this line
+some_delimiter;
+
?>
--- /dev/null
+domain Option__Node {
+ unique function Option__Node__Some(): Option__Node
+ unique function Option__Node__None(): Option__Node
+
+ function variantOfOptionNode(self: Ref): Option__Node
+
+ function isOptionNode(self: Ref): Bool
+
+ axiom ax_variantOfOptionNodeChoices {
+ forall x: Ref :: { variantOfOptionNode(x) }
+ (variantOfOptionNode(x) == Option__Node__Some() || variantOfOptionNode(x) == Option__Node__None())
+ }
+
+ axiom ax_isCounterState {
+ forall x: Ref :: { variantOfOptionNode(x) }
+ isOptionNode(x) == (variantOfOptionNode(x) == Option__Node__Some() ||
+ variantOfOptionNode(x) == Option__Node__None())
+ }
+}
+
+predicate validOption(this: Ref) {
+ isOptionNode(this) &&
+ variantOfOptionNode(this) == Option__Node__Some() ==> (
+ acc(this.Option__Node__Some__1, write) &&
+ acc(validNode(this.Option__Node__Some__1))
+ )
+}
+
+field Option__Node__Some__1: Ref
+
+field Node__v: Int
+field Node__next: Ref
+
+predicate validNode(this: Ref) {
+ acc(this.Node__v) &&
+ acc(this.Node__next) &&
+ acc(validOption(this.Node__next))
+}
+
+
+function length(this: Ref): Int
+ requires acc(validNode(this), write)
+ ensures result >= 1
+{
+ (unfolding acc(validNode(this), write) in
+ unfolding acc(validOption(this.Node__next)) in
+ (variantOfOptionNode(this.Node__next) == Option__Node__None()) ?
+ 1 : 1 + length(this.Node__next.Option__Node__Some__1)
+ )
+}
+
+function itemAt(this: Ref, i: Int): Int
+ requires acc(validNode(this), write)
+ requires 0 <= i && i < length(this)
+{
+ unfolding acc(validNode(this), write) in unfolding acc(validOption(this.Node__next)) in (
+ (i == 0) ?
+ this.Node__v:
+ (variantOfOptionNode(this.Node__next) == Option__Node__Some()) ?
+ itemAt(this.Node__next.Option__Node__Some__1, i-1) : this.Node__v
+ )
+}
+
+function sum(this$1: Ref): Int
+ requires acc(validNode(this$1), write)
+{
+ (unfolding acc(validNode(this$1), write) in unfolding acc(validOption(this$1.Node__next)) in
+ (variantOfOptionNode(this$1.Node__next) == Option__Node__None()) ? this$1.Node__v : this$1.Node__v + sum(this$1.Node__next.Option__Node__Some__1))
+}
+
+method append(this: Ref, val: Int)
+ requires acc(validNode(this), write)
+ ensures acc(validNode(this), write) /* POST1 */
+ ensures length(this) == (old(length(this)) + 1) /* POST2 */
+ ensures (forall i: Int :: (0 <= i && i < old(length(this))) ==> (itemAt(this, i) == old(itemAt(this, i)))) /* POST3 */
+ ensures itemAt(this, length(this) - 1) == val /* POST4 */
+ ensures true ==> true
+{
+ var tmp_node: Ref
+ var tmp_option: Ref
+
+ unfold acc(validNode(this), write)
+ unfold acc(validOption(this.Node__next), write)
+
+ if (variantOfOptionNode(this.Node__next) == Option__Node__None()) {
+ tmp_node := new(Node__next, Node__v)
+ tmp_node.Node__next := null
+ tmp_node.Node__v := val
+
+ assume variantOfOptionNode(tmp_node.Node__next) == Option__Node__None()
+ fold acc(validOption(tmp_node.Node__next))
+ fold acc(validNode(tmp_node), write)
+
+ tmp_option := new(Option__Node__Some__1)
+ tmp_option.Option__Node__Some__1 := tmp_node
+ assume variantOfOptionNode(tmp_option) == Option__Node__Some()
+ fold acc(validOption(tmp_option))
+
+ this.Node__next := tmp_option
+
+
+ unfold validOption(tmp_option)
+ assert length(tmp_node) == 1 /* TODO: Required by Silicon, POST2 fails otherwise */
+ assert itemAt(tmp_node, 0) == val /* TODO: Required by Silicon, POST4 fails otherwise */
+ fold validOption(tmp_option)
+ } else {
+ append(this.Node__next.Option__Node__Some__1, val)
+ fold acc(validOption(this.Node__next), write)
+ }
+
+ fold acc(validNode(this), write)
+}
+
+method prepend(tail: Ref, val: Int) returns (res: Ref)
+ requires acc(validNode(tail))
+ ensures acc(validNode(res))
+ //ensures acc(validNode(tail))
+ ensures length(res) == old(length(tail)) + 1
+
+ ensures (forall i: Int :: (1 <= i && i < length(res)) ==> (itemAt(res, i) == old(itemAt(tail, i-1)))) /* POST3 */
+ ensures itemAt(res, 0) == val
+{
+ var tmp_option: Ref
+
+ res := new(Node__v, Node__next)
+ res.Node__v := val
+
+ tmp_option := new(Option__Node__Some__1)
+ tmp_option.Option__Node__Some__1 := tail
+ assume variantOfOptionNode(tmp_option) == Option__Node__Some()
+
+ res.Node__next := tmp_option
+
+ assert acc(validNode(tail))
+ fold acc(validOption(res.Node__next))
+ fold acc(validNode(res))
+}
+
+method length_iter(list: Ref) returns (len: Int)
+ requires acc(validNode(list), write)
+ ensures old(length(list)) == len
+ // TODO we have to preserve this property
+ // ensures acc(validNode(list))
+{
+ var curr: Ref := list
+ var tmp: Ref := list
+
+ len := 1
+
+ unfold acc(validNode(curr))
+ unfold acc(validOption(curr.Node__next))
+ while(variantOfOptionNode(curr.Node__next) == Option__Node__Some())
+ invariant acc(curr.Node__v)
+ invariant acc(curr.Node__next)
+ invariant (variantOfOptionNode(curr.Node__next) == Option__Node__Some() ==> (
+ acc(curr.Node__next.Option__Node__Some__1, write) &&
+ acc(validNode(curr.Node__next.Option__Node__Some__1))
+ ))
+ invariant (variantOfOptionNode(curr.Node__next) == Option__Node__Some() ==> len + length(curr.Node__next.Option__Node__Some__1) == old(length(list)))
+ invariant (variantOfOptionNode(curr.Node__next) == Option__Node__None() ==> len == old(length(list)))
+ {
+ assert acc(validNode(curr.Node__next.Option__Node__Some__1))
+ len := len + 1
+ tmp := curr
+ curr := curr.Node__next.Option__Node__Some__1
+ unfold acc(validNode(curr))
+ unfold acc(validOption(curr.Node__next))
+ }
+}
+
+method t1()
+{
+ var l: Ref
+
+ l := new(Node__v, Node__next)
+ l.Node__next := null
+ l.Node__v := 1
+ assume variantOfOptionNode(l.Node__next) == Option__Node__None()
+
+ fold validOption(l.Node__next)
+ fold validNode(l)
+
+ assert length(l) == 1
+ assert itemAt(l, 0) == 1
+
+ append(l, 7)
+ assert itemAt(l, 1) == 7
+ assert itemAt(l, 0) == 1
+ assert length(l) == 2
+
+ l := prepend(l, 10)
+ assert itemAt(l, 2) == 7
+ assert itemAt(l, 1) == 1
+ assert itemAt(l, 0) == 10
+ assert length(l) == 3
+
+ //assert sum(l) == 18
+}
+
+method t2(l: Ref) returns (res: Ref)
+ requires acc(validNode(l), write)
+ ensures acc(validNode(res), write)
+ ensures length(res) > old(length(l))
+{
+ res := prepend(l, 10)
+}
--- /dev/null
+-- Example Transact-SQL file.
+
+-- Single line comment
+/* A comment
+ * spanning two lines. */
+ /* An indented comment
+ * spanning multiple
+ * lines. */
+/* A /* nested */ comment. */
+
+select
+ left(emp.firstname, 1) + '.' + [emp.surname] as "Name",
+ dep.name as [Department]
+into
+ #temp_employee
+from
+ employee as emp
+ inner join department as dep on
+ dep.ident_code = emp.department_id
+where
+ emp.date_of_birth >= '1990-01-01';
+go
+
+declare @TextToFind nvarchar(100) = N'some
+text across
+multiple lines';
+
+set @TextToFind varchar(32) = 'hello' + ' world';
+set @TextToFind += '!';
+
+declare @Count int = 17 * (3 - 5);
+
+delete from
+ [server].[database].[schema].[table]
+where
+ [Text] = @TextToFind and author Not LIKE '%some%';
+
+goto overthere;
+overthere:
+
+select
+ 123 as "int 1",
+ +123 as "int 2",
+ -123 as "int 3",
+ 0x20 as "hex int",
+ 123.45 as "float 1",
+ -1.23e45 as "float 2",
+ +1.23E+45 as "float 3",
+ -1.23e-45 as "float 4",
+ 1. as "float 5",
+ .1 as "float 6",
+ 1.e2 as "float 7",
+ .1e2 as "float 8";
+
+Select @@Error, $PARTITion.RangePF1(10);
+
+select top 3 Ähnliches from Müll;
+
+-- Example transaction
+BEGIN TRAN
+
+BEGIN TRY
+ INSERT INTO #temp_employee(Name, Department) VALUES ('L. Miller', 'Sales')
+ iNsErT inTO #temp_employee(Name, Department) VaLuEs ('M. Webster', 'Helpdesk')
+ COMMIT TRAN
+END TRY
+BEGIN CATCH
+ print 'cannot perform transaction; rolling back';
+ ROLLBACK TRAN
+END CATCH
+
+-- Comment at end without newline.
\ No newline at end of file
--- /dev/null
+class Animal {
+ constructor(public name) { }
+ move(meters) {
+ alert(this.name + " moved " + meters + "m.");
+ }
+}
+
+class Snake extends Animal {
+ constructor(name) { super(name); }
+ move() {
+ alert("Slithering...");
+ super.move(5);
+ }
+}
+
+class Horse extends Animal {
+ constructor(name) { super(name); }
+ move() {
+ alert("Galloping...");
+ super.move(45);
+ }
+}
+
+@View({
+ templateUrl: "app/components/LoginForm.html",
+ directives: [FORM_DIRECTIVES, NgIf]
+})
+@Component({
+ selector: "login-form"
+})
+class LoginForm {
+
+}
+
+var sam = new Snake("Sammy the Python")
+var tom: Animal = new Horse("Tommy the Palomino")
+
+sam.move()
+tom.move(34)
--- /dev/null
+# ***************************************************************************
+# Notice: "styles." (and "temp.") objects are UNSET after template parsing!
+# Use "lib." for persisting storage of objects.
+# ***************************************************************************
+
+<INCLUDE_TYPOSCRIPT: source="FILE: EXT:www_tue_nl/Configuration/TypoScript/Setup/Root.ts">
+
+page.80 = RECORDS
+page.80 {
+ source = 1
+ tables = tt_address
+ conf.tt_address = COA
+ conf.tt_address {
+ 20 = TEXT
+ 20.field = email
+ 20.typolink.parameter.field = email
+ }
+}
+
+ /*
+page.200 = PHP_SCRIPT_EXT
+page.200 {
+ 1 = TMENU
+ 1.wrap = <div style="width:200px; border: 1px solid;">|</div>
+ 1.expAll = 1
+ 1.submenuObjSuffixes = a |*| |*| b
+ 1.NO.allWrap = <b>|</b><br/>
+
+ 2 = TMENU
+ 2.NO.allWrap = <div style="background:red;">|</div>
+
+ 2a = TMENU
+ 2a.NO.allWrap = <div style="background:yellow;">|</div>
+*
+ 2b = TMENU
+ 2b.NO.allWrap = <div style="background:green;">|</div>
+}
+*/
+
+ # Add the CSS and JS files
+page {
+ includeCSS { # comment at the end of a line
+ file99 = fileadmin/your-fancybox.css
+ }
+ includeJSFooter {
+ fancybox = fileadmin/your-fancybox.js
+ }
+}
+
+ # Change the default rendering of images to match lightbox requirements
+tt_content.image.20.1.imageLinkWrap {
+ JSwindow = 0
+ test = MyExtension\Path\To\Class
+
+ directImageLink = 1
+ linkParams.ATagParams {
+ dataWrap = class= "lightbox" rel="fancybox{field:uid}"
+ }
+}
+
+tt_content.image.20.1.imageLinkWrap >
+tt_content.image.20.1.imageLinkWrap = 1
+tt_content.image.20.1.imageLinkWrap {
+ enable = 1
+ typolink {
+ # directly link to the recent image
+ parameter.cObject = IMG_RESOURCE
+ parameter.cObject.file.import.data = TSFE:lastImageInfo|origFile
+ parameter.cObject.file.maxW = {$styles.content.imgtext.maxW}
+ parameter.override.listNum.stdWrap.data = register : IMAGE_NUM_CURRENT
+ title.field = imagecaption // title
+ title.split.token.char = 10
+ title.if.isTrue.field = imagecaption // header
+ title.split.token.char = 10
+ title.split.returnKey.data = register : IMAGE_NUM_CURRENT
+ parameter.cObject = IMG_RESOURCE
+ parameter.cObject.file.import.data = TSFE:lastImageInfo|origFile
+ ATagParams = target="_blank"
+ }
+}
+
+10 = IMAGE
+10 {
+ # point to the image
+ file = fileadmin/demo/lorem_ipsum/images/a4.jpg
+ # make it rather small
+ file.width = 80
+ # add a link to tx_cms_showpic.php that shows the original image
+ imageLinkWrap = 1
+ imageLinkWrap {
+ enable = 1
+ # JSwindow = 1
+ }
+}
+
+# Clear out any constants in this reserved room!
+styles.content >
+
+# get content
+styles.content.get = CONTENT
+styles.content.get {
+ table = tt_content
+ select.orderBy = sorting
+ select.where = colPos=0
+ select.languageField = sys_language_uid
+}
+
+# get content, left
+styles.content.getLeft < styles.content.get
+styles.content.getLeft.select.where = colPos=1
+
+# get content, right
+styles.content.getRight < styles.content.get
+styles.content.getRight.select.where = colPos=2
+
+# get content, margin
+styles.content.getBorder < styles.content.get
+styles.content.getBorder.select.where = colPos=3
+
+# get news
+styles.content.getNews < styles.content.get
+styles.content.getNews.select.pidInList = {$styles.content.getNews.newsPid}
+
+# Edit page object:
+styles.content.editPanelPage = COA
+styles.content.editPanelPage {
+ 10 = EDITPANEL
+ 10 {
+ allow = toolbar,move,hide
+ label.data = LLL:EXT:css_styled_content/pi1/locallang.xml:eIcon.page
+ label.wrap = | <b>%s</b>
+ }
+}
+
+
+
+
+
+
+
+
+
+
+
+# *********************************************************************
+# "lib." objects are preserved from unsetting after template parsing
+# *********************************************************************
+
+# Creates persistent ParseFunc setup for non-HTML content. This is recommended to use (as a reference!)
+lib.parseFunc {
+ makelinks = 1
+ makelinks.http.keep = {$styles.content.links.keep}
+ makelinks.http.extTarget = {$styles.content.links.extTarget}
+ makelinks.mailto.keep = path
+ tags {
+ link = TEXT
+ link {
+ current = 1
+ typolink.parameter.data = parameters : allParams
+ typolink.extTarget = {$styles.content.links.extTarget}
+ typolink.target = {$styles.content.links.target}
+ parseFunc.constants =1
+ }
+ }
+ allowTags = {$styles.content.links.allowTags}
+ denyTags = *
+ sword = <span class="csc-sword">|</span>
+ constants = 1
+
+ nonTypoTagStdWrap.HTMLparser = 1
+ nonTypoTagStdWrap.HTMLparser {
+ keepNonMatchedTags = 1
+ htmlSpecialChars = 2
+ }
+}
+
+# good old parsefunc in "styles.content.parseFunc" is created for backwards compatibility. Don't use it, just ignore.
+styles.content.parseFunc < lib.parseFunc
+
+# Creates persistent ParseFunc setup for RTE content (which is mainly HTML) based on the "ts_css" transformation.
+lib.parseFunc_RTE < lib.parseFunc
+lib.parseFunc_RTE {
+ // makelinks >
+ # Processing <table> and <blockquote> blocks separately
+ externalBlocks = table, blockquote, dd, dl, ol, ul, div
+ externalBlocks {
+ # The blockquote content is passed into parseFunc again...
+ blockquote.stripNL=1
+ blockquote.callRecursive=1
+ blockquote.callRecursive.tagStdWrap.HTMLparser = 1
+ blockquote.callRecursive.tagStdWrap.HTMLparser.tags.blockquote.overrideAttribs = style="margin-bottom:0;margin-top:0;"
+
+ ol.stripNL=1
+ ol.stdWrap.parseFunc = < lib.parseFunc
+
+ ul.stripNL=1
+ ul.stdWrap.parseFunc = < lib.parseFunc
+
+ table.stripNL=1
+ table.stdWrap.HTMLparser = 1
+ table.stdWrap.HTMLparser.tags.table.fixAttrib.class {
+ default = contenttable
+ always = 1
+ list = contenttable
+ }
+ table.stdWrap.HTMLparser.keepNonMatchedTags = 1
+ table.HTMLtableCells=1
+ table.HTMLtableCells {
+ default.callRecursive=1
+ addChr10BetweenParagraphs=1
+ }
+ div.stripNL = 1
+ div.callRecursive = 1
+
+ # Definition list processing
+ dl < .div
+ dd < .div
+ }
+ nonTypoTagStdWrap.encapsLines {
+ encapsTagList = p,pre,h1,h2,h3,h4,h5,h6,hr,dt
+ remapTag.DIV = P
+ nonWrappedTag = P
+ innerStdWrap_all.ifBlank =
+ addAttributes.P.class = bodytext
+ addAttributes.P.class.setOnly=blank
+ }
+ nonTypoTagStdWrap.HTMLparser = 1
+ nonTypoTagStdWrap.HTMLparser {
+ keepNonMatchedTags = 1
+ htmlSpecialChars = 2
+ }
+}
+
+
+# Content header:
+lib.stdheader = COA
+lib.stdheader {
+
+ # Create align style-attribute for <Hx> tags
+ 2 = LOAD_REGISTER
+ 2.headerStyle.field = header_position
+ 2.headerStyle.required = 1
+ 2.headerStyle.noTrimWrap = | style="text-align:|;"|
+
+ # Create class="csc-firstHeader" attribute for <Hx> tags
+ 3 = LOAD_REGISTER
+ 3.headerClass = csc-firstHeader
+ 3.headerClass.if.value=1
+ 3.headerClass.if.equals.data = cObj:parentRecordNumber
+ 3.headerClass.noTrimWrap = | class="|"|
+
+ # Date format:
+ 5 = TEXT
+ 5.field = date
+ 5.if.isTrue.field = date
+ 5.strftime = %x
+ 5.wrap = <p class="csc-header-date">|</p>
+ 5.prefixComment = 2 | Header date:
+
+ # This CASE cObject renders the header content:
+ # currentValue is set to the header data, possibly wrapped in link-tags.
+ 10 = CASE
+ 10.setCurrent {
+ field = header
+ htmlSpecialChars = 1
+ typolink.parameter.field = header_link
+ }
+ 10.key.field = header_layout
+ 10.key.ifEmpty = {$content.defaultHeaderType}
+ 10.key.ifEmpty.override.data = register: defaultHeaderType
+
+ 10.1 = TEXT
+ 10.1.current = 1
+ 10.1.dataWrap = <h1{register:headerStyle}{register:headerClass}>|</h1>
+
+ 10.2 < .10.1
+ 10.2.dataWrap = <h2{register:headerStyle}{register:headerClass}>|</h2>
+
+ 10.3 < .10.1
+ 10.3.dataWrap = <h3{register:headerStyle}{register:headerClass}>|</h3>
+
+ 10.4 < .10.1
+ 10.4.dataWrap = <h4{register:headerStyle}{register:headerClass}>|</h4>
+
+ 10.5 < .10.1
+ 10.5.dataWrap = <h5{register:headerStyle}{register:headerClass}>|</h5>
+
+ # Pops the used registers off the stack:
+ 98 = RESTORE_REGISTER
+ 99 = RESTORE_REGISTER
+
+ # Post-processing:
+ stdWrap.fieldRequired = header
+ stdWrap.if {
+ equals.field = header_layout
+ value = 100
+ negate = 1
+ }
+
+ stdWrap.editIcons = tt_content : header, [header_layout | header_position], [header_link|date]
+ stdWrap.editIcons.beforeLastTag = 1
+ stdWrap.editIcons.iconTitle.data = LLL:EXT:css_styled_content/pi1/locallang.xml:eIcon.header
+
+ stdWrap.dataWrap = <div class="csc-header csc-header-n{cObj:parentRecordNumber}">|</div>
+ stdWrap.prefixComment = 2 | Header:
+}
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+#******************************************************
+# Including library for processing of some elements:
+#******************************************************
+includeLibs.tx_cssstyledcontent_pi1 = EXT:css_styled_content/pi1/class.tx_cssstyledcontent_pi1.php
+
+
+#**********************************
+# tt_content is started
+#**********************************
+tt_content >
+tt_content = CASE
+tt_content.key.field = CType
+tt_content.stdWrap {
+ innerWrap.cObject = CASE
+ innerWrap.cObject {
+ key.field = section_frame
+
+ default = COA
+ default {
+ 10 = TEXT
+ 10 {
+ value = <div id="c{field:uid}"
+ override.cObject = TEXT
+ override.cObject {
+ value = <div
+ if.value = div
+ if.equals.field = CType
+ }
+ insertData = 1
+ }
+
+ 15 = TEXT
+ 15 {
+ value = csc-default
+ noTrimWrap = | class="|" |
+ required = 1
+ }
+
+ 20 = COA
+ 20 {
+ 10 = COA
+ 10 {
+ 10 = TEXT
+ 10 {
+ value = {$content.spaceBefore}
+ wrap = |+
+ if.isTrue = {$content.spaceBefore}
+ }
+
+ 20 = TEXT
+ 20 {
+ field = spaceBefore
+ }
+
+ stdWrap {
+ prioriCalc = intval
+ wrap = margin-top:|px;
+ required = 1
+ ifEmpty.value =
+ }
+ }
+
+ 20 = COA
+ 20 {
+ 10 = TEXT
+ 10 {
+ value = {$content.spaceAfter}
+ wrap = |+
+ if.isTrue = {$content.spaceAfter}
+ }
+
+ 20 = TEXT
+ 20 {
+ field = spaceAfter
+ }
+
+ stdWrap {
+ prioriCalc = intval
+ wrap = margin-bottom:|px;
+ required = 1
+ ifEmpty.value =
+ }
+ }
+
+ stdWrap.noTrimWrap = | style="|" |
+ stdWrap.required = 1
+ }
+ 30 = TEXT
+ 30.value = >|</div>
+ }
+
+ 1 =< tt_content.stdWrap.innerWrap.cObject.default
+ 1.15.value = csc-frame csc-frame-invisible
+
+ 5 =< tt_content.stdWrap.innerWrap.cObject.default
+ 5.15.value = csc-frame csc-frame-rulerBefore
+
+ 6 =< tt_content.stdWrap.innerWrap.cObject.default
+ 6.15.value = csc-frame csc-frame-rulerAfter
+
+ 10 =< tt_content.stdWrap.innerWrap.cObject.default
+ 10.15.value = csc-frame csc-frame-indent
+
+ 11 =< tt_content.stdWrap.innerWrap.cObject.default
+ 11.15.value = csc-frame csc-frame-indent3366
+
+ 12 =< tt_content.stdWrap.innerWrap.cObject.default
+ 12.15.value = csc-frame csc-frame-indent6633
+
+ 20 =< tt_content.stdWrap.innerWrap.cObject.default
+ 20.15.value = csc-frame csc-frame-frame1
+
+ 21 =< tt_content.stdWrap.innerWrap.cObject.default
+ 21.15.value = csc-frame csc-frame-frame2
+
+ 66 = COA
+ 66 {
+ 10 = TEXT
+ 10 {
+ value = <a id="c{field:uid}"></a>
+ insertData = 1
+ }
+
+ 20 = COA
+ 20 {
+ 10 = TEXT
+ 10 {
+ value = {$content.spaceBefore}
+ wrap = |+
+ if.isTrue = {$content.spaceBefore}
+ }
+
+ 20 = TEXT
+ 20 {
+ field = spaceBefore
+ }
+
+ stdWrap {
+ prioriCalc = intval
+ wrap = margin-top:|px;
+ required = 1
+ ifEmpty.value =
+ wrap2 = <div style="|"></div>
+ }
+ }
+
+ 30 = TEXT
+ 30 {
+ value = |
+ }
+
+ 40 < .20
+ 40 {
+ 10 {
+ value = {$content.spaceAfter}
+ if.isTrue = {$content.spaceAfter}
+ }
+ 20.field = spaceAfter
+ stdWrap.wrap = margin-bottom:|px;
+ }
+ }
+
+ }
+
+ innerWrap2 = | <p class="csc-linkToTop"><a href="#">{LLL:EXT:css_styled_content/pi1/locallang.xml:label.toTop}</a></p>
+ innerWrap2.insertData = 1
+ innerWrap2.fieldRequired = linkToTop
+
+ prepend = TEXT
+ prepend.dataWrap = <a id="c{field:_LOCALIZED_UID}"></a>
+ prepend.if.isTrue.field = _LOCALIZED_UID
+
+ editPanel = 1
+ editPanel {
+ allow = move,new,edit,hide,delete
+ line = 5
+ label = %s
+ onlyCurrentPid = 1
+ previewBorder = 4
+ edit.displayRecord = 1
+ }
+
+ prefixComment = 1 | CONTENT ELEMENT, uid:{field:uid}/{field:CType}
+}
+
+
+
+# *****************
+# CType: header
+# *****************
+# See Object path "lib.stdheader"
+tt_content.header = COA
+tt_content.header {
+ 10 = < lib.stdheader
+
+ 20 = TEXT
+ 20 {
+ field = subheader
+ required = 1
+
+ dataWrap = <p class="csc-subheader csc-subheader-{field:layout}">|</p>
+ htmlSpecialChars = 1
+
+ editIcons = tt_content:subheader,layout
+ editIcons.beforeLastTag = 1
+ editIcons.iconTitle.data = LLL:EXT:css_styled_content/pi1/locallang.xml:eIcon.subheader
+
+ prefixComment = 2 | Subheader:
+ }
+}
+
+
+
+# *****************
+# CType: text
+# *****************
+tt_content.text = COA
+tt_content.text {
+ 10 = < lib.stdheader
+
+ 20 = TEXT
+ 20 {
+ field = bodytext
+ required = 1
+
+ parseFunc = < lib.parseFunc_RTE
+
+ editIcons = tt_content:bodytext, rte_enabled
+ editIcons.beforeLastTag = 1
+ editIcons.iconTitle.data = LLL:EXT:css_styled_content/pi1/locallang.xml:eIcon.bodytext
+
+ prefixComment = 2 | Text:
+ }
+}
+
+
+
+# *****************
+# CType: image
+# *****************
+# (also used for rendering 'textpic' type):
+tt_content.image = COA
+tt_content.image.10 = < lib.stdheader
+tt_content.image.20 = USER
+tt_content.image.20 {
+ userFunc = tx_cssstyledcontent_pi1->render_textpic
+
+ # Image source
+ imgList.field = image
+ imgPath = uploads/pics/
+
+ # Single image rendering
+ imgObjNum = 1
+ 1 {
+ file.import.current = 1
+ file.width.field = imagewidth
+ imageLinkWrap = 1
+ imageLinkWrap {
+ bodyTag = <body style="margin:0; background:#fff;">
+ wrap = <a href="javascript:close();"> | </a>
+ width = {$styles.content.imgtext.linkWrap.width}
+ height = {$styles.content.imgtext.linkWrap.height}
+ effects = {$styles.content.imgtext.linkWrap.effects}
+
+ JSwindow = 1
+ JSwindow.newWindow = {$styles.content.imgtext.linkWrap.newWindow}
+ JSwindow.if.isFalse = {$styles.content.imgtext.linkWrap.lightboxEnabled}
+
+ directImageLink = {$styles.content.imgtext.linkWrap.lightboxEnabled}
+
+ enable.field = image_zoom
+ enable.ifEmpty.typolink.parameter.field = image_link
+ enable.ifEmpty.typolink.parameter.listNum.splitChar = 10
+ enable.ifEmpty.typolink.parameter.listNum.stdWrap.data = register : IMAGE_NUM_CURRENT
+ enable.ifEmpty.typolink.returnLast = url
+
+ typolink.parameter.field = image_link
+ typolink.parameter.listNum.splitChar = 10
+ typolink.parameter.listNum.stdWrap.data = register : IMAGE_NUM_CURRENT
+ typolink.target = {$styles.content.links.target}
+ typolink.extTarget = {$styles.content.links.extTarget}
+
+ linkParams.ATagParams.dataWrap = class="{$styles.content.imgtext.linkWrap.lightboxCssClass}" rel="{$styles.content.imgtext.linkWrap.lightboxRelAttribute}"
+ }
+
+ altText = TEXT
+ altText {
+ field = altText
+ stripHtml = 1
+ split.token.char = 10
+ split.token.if.isTrue = {$styles.content.imgtext.imageTextSplit}
+ split.returnKey.data = register : IMAGE_NUM_CURRENT
+ }
+
+ titleText < .altText
+ titleText.field = titleText
+
+ longdescURL < .altText
+ longdescURL.field = longdescURL
+
+ emptyTitleHandling = {$styles.content.imgtext.emptyTitleHandling}
+ titleInLink = {$styles.content.imgtext.titleInLink}
+ titleInLinkAndImg = {$styles.content.imgtext.titleInLinkAndImg}
+ }
+
+ textPos.field = imageorient
+ maxW = {$styles.content.imgtext.maxW}
+ maxW.override.data = register:maxImageWidth
+ maxWInText = {$styles.content.imgtext.maxWInText}
+ maxWInText.override.data = register:maxImageWidthInText
+
+ equalH.field = imageheight
+
+ image_compression.field = image_compression
+ image_effects.field = image_effects
+
+ noRows.field = image_noRows
+
+ cols.field = imagecols
+ border.field = imageborder
+
+ caption {
+ 1 = TEXT
+ 1 {
+ field = imagecaption
+ required = 1
+ parseFunc =< lib.parseFunc
+ br = 1
+ split.token.char = 10
+ split.token.if.isPositive = {$styles.content.imgtext.imageTextSplit} + {$styles.content.imgtext.captionSplit}
+ split.returnKey.data = register : IMAGE_NUM_CURRENT
+ }
+ }
+ # captionSplit is deprecated, use imageTextSplit instead
+ captionSplit = {$styles.content.imgtext.captionSplit}
+ captionAlign.field = imagecaption_position
+ # caption/alttext/title/longdescURL splitting
+ imageTextSplit = {$styles.content.imgtext.imageTextSplit}
+
+ borderCol = {$styles.content.imgtext.borderColor}
+ borderThick = {$styles.content.imgtext.borderThick}
+ borderClass = {$styles.content.imgtext.borderClass}
+ colSpace = {$styles.content.imgtext.colSpace}
+ rowSpace = {$styles.content.imgtext.rowSpace}
+ textMargin = {$styles.content.imgtext.textMargin}
+
+ borderSpace = {$styles.content.imgtext.borderSpace}
+ separateRows = {$styles.content.imgtext.separateRows}
+ addClasses =
+ addClassesImage =
+ addClassesImage.ifEmpty = csc-textpic-firstcol csc-textpic-lastcol
+ addClassesImage.override = csc-textpic-firstcol |*| |*| csc-textpic-lastcol
+ addClassesImage.override.if {
+ isGreaterThan.field = imagecols
+ value = 1
+ }
+
+ #
+ imageStdWrap.dataWrap = <div class="csc-textpic-imagewrap" style="width:{register:totalwidth}px;"> | </div>
+ imageStdWrapNoWidth.wrap = <div class="csc-textpic-imagewrap"> | </div>
+
+ # if noRows is set, wrap around each column:
+ imageColumnStdWrap.dataWrap = <div class="csc-textpic-imagecolumn" style="width:{register:columnwidth}px;"> | </div>
+
+ layout = CASE
+ layout {
+ key.field = imageorient
+ # above-center
+ default = TEXT
+ default.value = <div class="csc-textpic csc-textpic-center csc-textpic-above###CLASSES###">###IMAGES######TEXT###</div><div class="csc-textpic-clear"><!-- --></div>
+ # above-right
+ 1 = TEXT
+ 1.value = <div class="csc-textpic csc-textpic-right csc-textpic-above###CLASSES###">###IMAGES######TEXT###</div><div class="csc-textpic-clear"><!-- --></div>
+ # above-left
+ 2 = TEXT
+ 2.value = <div class="csc-textpic csc-textpic-left csc-textpic-above###CLASSES###">###IMAGES######TEXT###</div><div class="csc-textpic-clear"><!-- --></div>
+ # below-center
+ 8 = TEXT
+ 8.value = <div class="csc-textpic csc-textpic-center csc-textpic-below###CLASSES###">###TEXT######IMAGES###</div><div class="csc-textpic-clear"><!-- --></div>
+ # below-right
+ 9 = TEXT
+ 9.value = <div class="csc-textpic csc-textpic-right csc-textpic-below###CLASSES###">###TEXT######IMAGES###</div><div class="csc-textpic-clear"><!-- --></div>
+ # below-left
+ 10 = TEXT
+ 10.value = <div class="csc-textpic csc-textpic-left csc-textpic-below###CLASSES###">###TEXT######IMAGES###</div><div class="csc-textpic-clear"><!-- --></div>
+ # intext-right
+ 17 = TEXT
+ 17.value = <div class="csc-textpic csc-textpic-intext-right###CLASSES###">###IMAGES######TEXT###</div>
+ 17.override = <div class="csc-textpic csc-textpic-intext-right###CLASSES###">###IMAGES######TEXT###</div><div class="csc-textpic-clear"><!-- --></div>
+ 17.override.if.isTrue = {$styles.content.imgtext.addIntextClearer}
+ # intext-left
+ 18 = TEXT
+ 18.value = <div class="csc-textpic csc-textpic-intext-left###CLASSES###">###IMAGES######TEXT###</div>
+ 18.override = <div class="csc-textpic csc-textpic-intext-left###CLASSES###">###IMAGES######TEXT###</div><div class="csc-textpic-clear"><!-- --></div>
+ 18.override.if.isTrue = {$styles.content.imgtext.addIntextClearer}
+ # intext-right-nowrap
+ 25 = TEXT
+ 25.value = <div class="csc-textpic csc-textpic-intext-right-nowrap###CLASSES###">###IMAGES###<div style="margin-right:{register:rowWidthPlusTextMargin}px;">###TEXT###</div></div><div class="csc-textpic-clear"><!-- --></div>
+ 25.insertData = 1
+ # intext-left-nowrap
+ 26 = TEXT
+ 26.value = <div class="csc-textpic csc-textpic-intext-left-nowrap###CLASSES###">###IMAGES###<div style="margin-left:{register:rowWidthPlusTextMargin}px;">###TEXT###</div></div><div class="csc-textpic-clear"><!-- --></div>
+ 26.insertData = 1
+ }
+
+ rendering {
+ dl {
+ # Choose another rendering for special edge cases
+ fallbackRendering = COA
+ fallbackRendering {
+ # Just one image without a caption => don't need the dl-overhead, use the "simple" rendering
+ 10 = TEXT
+ 10 {
+ if {
+ isFalse.field = imagecaption
+ value = 1
+ equals.data = register:imageCount
+ }
+ value = simple
+ }
+
+ # Multiple images and one global caption => "ul"
+ 20 = TEXT
+ 20 {
+ if {
+ value = 1
+ isGreaterThan.data = register:imageCount
+ isTrue.if.isTrue.data = register:renderGlobalCaption
+ isTrue.field = imagecaption
+ }
+ value = ul
+ }
+
+ # Multiple images and no caption at all => "ul"
+ 30 = TEXT
+ 30 {
+ if {
+ value = 1
+ isGreaterThan.data = register:imageCount
+ isFalse.field = imagecaption
+ }
+ value = ul
+ }
+ }
+ imageRowStdWrap.dataWrap = <div class="csc-textpic-imagerow" style="width:{register:rowwidth}px;"> | </div>
+ imageLastRowStdWrap.dataWrap = <div class="csc-textpic-imagerow csc-textpic-imagerow-last" style="width:{register:rowwidth}px;"> | </div>
+ noRowsStdWrap.wrap =
+ oneImageStdWrap.dataWrap = <dl class="csc-textpic-image###CLASSES###" style="width:{register:imagespace}px;"> | </dl>
+ imgTagStdWrap.wrap = <dt> | </dt>
+ editIconsStdWrap.wrap = <dd> | </dd>
+ caption {
+ required = 1
+ wrap = <dd class="csc-textpic-caption"> | </dd>
+ }
+ }
+ ul {
+ # Just one image without a caption => don't need the ul-overhead, use the "simple" rendering
+ fallbackRendering < tt_content.image.20.rendering.dl.fallbackRendering.10
+ imageRowStdWrap.dataWrap = <div class="csc-textpic-imagerow" style="width:{register:rowwidth}px;"><ul> | </ul></div>
+ imageLastRowStdWrap.dataWrap = <div class="csc-textpic-imagerow csc-textpic-imagerow-last" style="width:{register:rowwidth}px;"><ul> | </ul></div>
+ noRowsStdWrap.wrap = <ul> | </ul>
+ oneImageStdWrap.dataWrap = <li class="csc-textpic-image###CLASSES###" style="width:{register:imagespace}px;"> | </li>
+ imgTagStdWrap.wrap =
+ editIconsStdWrap.wrap = <div> | </div>
+ caption.wrap = <div class="csc-textpic-caption"> | </div>
+ }
+ div {
+ # Just one image without a caption => don't need the div-overhead, use the "simple" rendering
+ fallbackRendering < tt_content.image.20.rendering.dl.fallbackRendering.10
+ imageRowStdWrap.dataWrap = <div class="csc-textpic-imagerow" style="width:{register:rowwidth}px;"> | </div>
+ imageLastRowStdWrap.dataWrap = <div class="csc-textpic-imagerow csc-textpic-imagerow-last" style="width:{register:rowwidth}px;"> | </div>
+ noRowsStdWrap.wrap =
+ oneImageStdWrap.dataWrap = <div class="csc-textpic-image###CLASSES###" style="width:{register:imagespace}px;"> | </div>
+ imgTagStdWrap.wrap = <div> | </div>
+ editIconsStdWrap.wrap = <div> | </div>
+ caption.wrap = <div class="csc-textpic-caption"> | </div>
+ }
+ simple {
+ imageRowStdWrap.dataWrap = |
+ imageLastRowStdWrap.dataWrap = |
+ noRowsStdWrap.wrap =
+ oneImageStdWrap.dataWrap = |
+ imgTagStdWrap.wrap = |
+ editIconsStdWrap.wrap = |
+ caption.wrap = <div class="csc-textpic-caption"> | </div>
+ imageStdWrap.dataWrap = <div class="csc-textpic-imagewrap csc-textpic-single-image" style="width:{register:totalwidth}px;"> | </div>
+ imageStdWrapNoWidth.wrap = <div class="csc-textpic-imagewrap csc-textpic-single-image"> | </div>
+ }
+ }
+ renderMethod = dl
+
+ editIcons = tt_content : image [imageorient|imagewidth|imageheight], [imagecols|image_noRows|imageborder],[image_link|image_zoom],[image_compression|image_effects|image_frames],imagecaption[imagecaption_position]
+ editIcons.iconTitle.data = LLL:EXT:css_styled_content/pi1/locallang.xml:eIcon.images
+
+ caption.editIcons = tt_content : imagecaption[imagecaption_position]
+ caption.editIcons.beforeLastTag=1
+ caption.editIcons.iconTitle.data = LLL:EXT:css_styled_content/pi1/locallang.xml:eIcon.caption
+
+ stdWrap.prefixComment = 2 | Image block:
+}
+
+# *****************
+# CType: textpic
+# *****************
+tt_content.textpic = COA
+tt_content.textpic {
+ 10 = COA
+ 10.if.value = 25
+ 10.if.isLessThan.field = imageorient
+ 10.10 = < lib.stdheader
+
+ 20 = < tt_content.image.20
+ 20 {
+ text.10 = COA
+ text.10 {
+ if.value = 24
+ if.isGreaterThan.field = imageorient
+ 10 = < lib.stdheader
+ 10.stdWrap.dataWrap = <div class="csc-textpicHeader csc-textpicHeader-{field:imageorient}">|</div>
+ }
+ text.20 = < tt_content.text.20
+ text.wrap = <div class="csc-textpic-text"> | </div>
+ }
+}
+
+
+
+# *****************
+# CType: bullet
+# *****************
+tt_content.bullets = COA
+tt_content.bullets {
+ 10 = < lib.stdheader
+
+ 20 = TEXT
+ 20 {
+ field = bodytext
+ trim = 1
+ split{
+ token.char = 10
+ cObjNum = |*|1|| 2|*|
+ 1.current = 1
+ 1.parseFunc =< lib.parseFunc
+ 1.wrap = <li class="odd">|</li>
+
+ 2.current = 1
+ 2.parseFunc =< lib.parseFunc
+ 2.wrap = <li class="even">|</li>
+ }
+ dataWrap = <ul class="csc-bulletlist csc-bulletlist-{field:layout}">|</ul>
+ editIcons = tt_content: bodytext, [layout]
+ editIcons.beforeLastTag = 1
+ editIcons.iconTitle.data = LLL:EXT:css_styled_content/pi1/locallang.php:eIcon.bullets
+
+ prefixComment = 2 | Bullet list:
+ }
+}
+
+
+
+# *****************
+# CType: table
+# *****************
+# Rendered by a PHP function specifically written to handle CE tables. See css_styled_content/pi1/class.tx_cssstyledcontent_pi1.php
+tt_content.table = COA
+tt_content.table {
+ 10 = < lib.stdheader
+
+ 20 = USER
+ 20.userFunc = tx_cssstyledcontent_pi1->render_table
+ 20.field = bodytext
+
+ 20.color {
+ default =
+ 1 = #EDEBF1
+ 2 = #F5FFAA
+ }
+ 20.tableParams_0 {
+ border =
+ cellpadding =
+ cellspacing =
+ }
+ 20.tableParams_1 {
+ border =
+ cellpadding =
+ cellspacing =
+ }
+ 20.tableParams_2 {
+ border =
+ cellpadding =
+ cellspacing =
+ }
+ 20.tableParams_3 {
+ border =
+ cellpadding =
+ cellspacing =
+ }
+ 20.innerStdWrap.wrap = |
+ 20.innerStdWrap.parseFunc = < lib.parseFunc
+
+ 20.stdWrap {
+ editIcons = tt_content: cols, bodytext, [layout], [table_bgColor|table_border|table_cellspacing|table_cellpadding]
+ editIcons.beforeLastTag = 1
+ editIcons.iconTitle.data = LLL:EXT:css_styled_content/pi1/locallang.xml:eIcon.table
+
+ prefixComment = 2 | Table:
+ }
+}
+
+
+# *****************
+# CType: uploads
+# *****************
+# Rendered by a PHP function specifically written to handle CE filelists. See css_styled_content/pi1/class.tx_cssstyledcontent_pi1.php
+tt_content.uploads = COA
+tt_content.uploads {
+ 10 = < lib.stdheader
+
+ 20 = USER
+ 20.userFunc = tx_cssstyledcontent_pi1->render_uploads
+ 20.field = media
+ 20.filePath.field = select_key
+
+ 20 {
+ # Rendering for each file (e.g. rows of the table) as a cObject
+ itemRendering = COA
+ itemRendering {
+ wrap = <tr class="tr-odd tr-first">|</tr> |*| <tr class="tr-even">|</tr> || <tr class="tr-odd">|</tr> |*|
+
+ 10 = TEXT
+ 10.data = register:linkedIcon
+ 10.wrap = <td class="csc-uploads-icon">|</td>
+ 10.if.isPositive.field = layout
+
+ 20 = COA
+ 20.wrap = <td class="csc-uploads-fileName">|</td>
+ 20.1 = TEXT
+ 20.1 {
+ data = register:linkedLabel
+ wrap = <p>|</p>
+ }
+ 20.2 = TEXT
+ 20.2 {
+ data = register:description
+ wrap = <p class="csc-uploads-description">|</p>
+ required = 1
+ htmlSpecialChars = 1
+ }
+
+ 30 = TEXT
+ 30.if.isTrue.field = filelink_size
+ 30.data = register:fileSize
+ 30.wrap = <td class="csc-uploads-fileSize">|</td>
+ 30.bytes = 1
+ 30.bytes.labels = {$styles.content.uploads.filesizeBytesLabels}
+ }
+ useSpacesInLinkText = 0
+ stripFileExtensionFromLinkText = 0
+ }
+
+ 20.color {
+ default =
+ 1 = #EDEBF1
+ 2 = #F5FFAA
+ }
+ 20.tableParams_0 {
+ border =
+ cellpadding =
+ cellspacing =
+ }
+ 20.tableParams_1 {
+ border =
+ cellpadding =
+ cellspacing =
+ }
+ 20.tableParams_2 {
+ border =
+ cellpadding =
+ cellspacing =
+ }
+ 20.tableParams_3 {
+ border =
+ cellpadding =
+ cellspacing =
+ }
+
+ 20.linkProc {
+ target = _blank
+ jumpurl = {$styles.content.uploads.jumpurl}
+ jumpurl.secure = {$styles.content.uploads.jumpurl_secure}
+ jumpurl.secure.mimeTypes = {$styles.content.uploads.jumpurl_secure_mimeTypes}
+ removePrependedNumbers = 1
+
+ iconCObject = IMAGE
+ iconCObject.file.import.data = register : ICON_REL_PATH
+ iconCObject.file.width = 150
+ }
+
+ 20.filesize {
+ bytes = 1
+ bytes.labels = {$styles.content.uploads.filesizeBytesLabels}
+ }
+
+ 20.stdWrap {
+ editIcons = tt_content: media, layout [table_bgColor|table_border|table_cellspacing|table_cellpadding], filelink_size, imagecaption
+ editIcons.iconTitle.data = LLL:EXT:css_styled_content/pi1/locallang.xml:eIcon.filelist
+
+ prefixComment = 2 | File list:
+ }
+}
+
+
+# ******************
+# CType: multimedia
+# ******************
+tt_content.multimedia = COA
+tt_content.multimedia {
+ 10 = < lib.stdheader
+
+ 20 = MULTIMEDIA
+ 20.file.field = multimedia
+ 20.file.wrap = uploads/media/
+ 20.file.listNum = 0
+ 20.params.field = bodytext
+
+ 20.stdWrap {
+ editIcons = tt_content: multimedia, bodytext
+ editIcons.iconTitle.data = LLL:EXT:css_styled_content/pi1/locallang.xml:eIcon.multimedia
+
+ prefixComment = 2 | Multimedia element:
+ }
+}
+
+# *****************
+# CType: swfobject
+# *****************
+tt_content.swfobject = COA
+tt_content.swfobject {
+ 10 = < lib.stdheader
+
+ 20 = SWFOBJECT
+ 20 {
+ file =
+ width =
+ height =
+
+ flexParams.field = pi_flexform
+
+ alternativeContent.field = bodytext
+
+ layout = ###SWFOBJECT###
+
+ video {
+ player = {$styles.content.media.videoPlayer}
+
+ defaultWidth = {$styles.content.media.defaultVideoWidth}
+ defaultHeight = {$styles.content.media.defaultVideoHeight}
+
+ default {
+ params.quality = high
+ params.menu = false
+ params.allowScriptAccess = sameDomain
+ params.allowFullScreen = true
+ }
+ mapping {
+
+ }
+ }
+
+ audio {
+ player = {$styles.content.media.audioPlayer}
+
+ defaultWidth = {$styles.content.media.defaultAudioWidth}
+ defaultHeight = {$styles.content.media.defaultAudioHeight}
+
+ default {
+ params.quality = high
+ params.allowScriptAccess = sameDomain
+ params.menu = false
+ }
+ mapping {
+ flashvars.file = soundFile
+ }
+ }
+
+ }
+ 20.stdWrap {
+ editIcons = tt_content: multimedia, imagewidth, imageheight, pi_flexform, bodytext
+ editIcons.iconTitle.data = LLL:EXT:css_styled_content/pi1/locallang.xml:eIcon.multimedia
+
+ prefixComment = 2 | SWFobject element:
+ }
+}
+
+# *****************
+# CType: qtobject
+# *****************
+tt_content.qtobject = COA
+tt_content.qtobject {
+ 10 = < lib.stdheader
+
+ 20 = QTOBJECT
+ 20 {
+ file =
+ width =
+ height =
+
+ flexParams.field = pi_flexform
+
+ alternativeContent.field = bodytext
+
+ layout = ###QTOBJECT###
+
+ video {
+ player = {$styles.content.media.videoPlayer}
+
+ defaultWidth = {$styles.content.media.defaultVideoWidth}
+ defaultHeight = {$styles.content.media.defaultVideoHeight}
+
+ default {
+ params.quality = high
+ params.menu = false
+ params.allowScriptAccess = sameDomain
+ params.allowFullScreen = true
+ }
+ mapping {
+
+ }
+ }
+
+ audio {
+ player = {$styles.content.media.audioPlayer}
+
+ defaultWidth = {$styles.content.media.defaultAudioWidth}
+ defaultHeight = {$styles.content.media.defaultAudioHeight}
+
+ default {
+ params.quality = high
+ params.allowScriptAccess = sameDomain
+ params.menu = false
+ }
+ mapping {
+ flashvars.file = soundFile
+ }
+ }
+ }
+ 20.stdWrap {
+ editIcons = tt_content: multimedia, imagewidth, imageheight, pi_flexform, bodytext
+ editIcons.iconTitle.data = LLL:EXT:css_styled_content/pi1/locallang.xml:eIcon.multimedia
+
+ prefixComment = 2 | QTobject element:
+ }
+}
+
+# *****************
+# CType: media
+# *****************
+tt_content.media = COA
+tt_content.media {
+ 10 = < lib.stdheader
+
+ 20 = MEDIA
+ 20 {
+
+ flexParams.field = pi_flexform
+ alternativeContent < tt_content.text.20
+ alternativeContent.field = bodytext
+
+ type = video
+ renderType = auto
+ allowEmptyUrl = 0
+ forcePlayer = 1
+
+ fileExtHandler {
+ default = MEDIA
+ avi = MEDIA
+ asf = MEDIA
+ class = MEDIA
+ wmv = MEDIA
+ mp3 = SWF
+ mp4 = SWF
+ m4v = SWF
+ swa = SWF
+ flv = SWF
+ swf = SWF
+ mov = QT
+ m4v = QT
+ m4a = QT
+ }
+
+ mimeConf.swfobject < tt_content.swfobject.20
+ mimeConf.qtobject < tt_content.qtobject.20
+
+ }
+ 20.stdWrap {
+ editIcons = tt_content: pi_flexform, bodytext
+ editIcons.iconTitle.data = LLL:EXT:css_styled_content/pi1/locallang.xml:eIcon.multimedia
+
+ prefixComment = 2 | Media element:
+ }
+}
+
+# ******************
+# CType: mailform
+# ******************
+tt_content.mailform = COA
+tt_content.mailform.10 = < lib.stdheader
+tt_content.mailform.20 = FORM
+tt_content.mailform.20 {
+ accessibility = 1
+ noWrapAttr=1
+ formName = mailform
+ dontMd5FieldNames = 1
+ layout = <div class="csc-mailform-field">###LABEL### ###FIELD###</div>
+ labelWrap.wrap = |
+ commentWrap.wrap = |
+ radioWrap.wrap = |<br />
+ radioWrap.accessibilityWrap = <fieldset###RADIO_FIELD_ID###><legend>###RADIO_GROUP_LABEL###</legend>|</fieldset>
+ REQ = 1
+ REQ.labelWrap.wrap = |
+ COMMENT.layout = <div class="csc-mailform-label">###LABEL###</div>
+ RADIO.layout = <div class="csc-mailform-field">###LABEL### <span class="csc-mailform-radio">###FIELD###</span></div>
+ LABEL.layout = <div class="csc-mailform-field">###LABEL### <span class="csc-mailform-label">###FIELD###</span></div>
+ target = {$styles.content.mailform.target}
+ goodMess = {$styles.content.mailform.goodMess}
+ badMess = {$styles.content.mailform.badMess}
+ redirect.field = pages
+ redirect.listNum = 0
+ recipient.field = subheader
+ data.field = bodytext
+ locationData = 1
+ hiddenFields.stdWrap.wrap = <div style="display:none;">|</div>
+
+ params.radio = class="csc-mailform-radio"
+ params.check = class="csc-mailform-check"
+ params.submit = class="csc-mailform-submit"
+
+ stdWrap.wrap = <fieldset class="csc-mailform"> | </fieldset>
+ stdWrap {
+ editIcons = tt_content: bodytext, pages, subheader
+ editIcons.iconTitle.data = LLL:EXT:css_styled_content/pi1/locallang.xml:eIcon.form
+
+ prefixComment = 2 | Mail form inserted:
+ }
+}
+
+
+# ******************
+# CType: search
+# ******************
+tt_content.search = COA
+tt_content.search.10 = < lib.stdheader
+# Result:
+tt_content.search.20 = SEARCHRESULT
+tt_content.search.20 {
+ allowedCols = pages.title-subtitle-keywords-description : tt_content.header-bodytext-imagecaption : tt_address.name-title-address-email-company-city-country : tt_links.title-note-note2-url : tt_board.subject-message-author-email : tt_calender.title-note : tt_products.title-note-itemnumber
+ languageField.tt_content = sys_language_uid
+ renderObj = COA
+ renderObj {
+
+ 10 = TEXT
+ 10.field = pages_title
+ 10.htmlSpecialChars = 1
+ 10.typolink {
+ parameter.field = uid
+ target = {$styles.content.searchresult.resultTarget}
+ additionalParams.data = register:SWORD_PARAMS
+ additionalParams.required = 1
+ additionalParams.wrap = &no_cache=1
+ }
+ 10.htmlSpecialChars = 1
+ 10.wrap = <h3 class="csc-searchResultHeader">|</h3>
+
+ 20 = COA
+ 20 {
+ 10 = TEXT
+ 10.field = tt_content_bodytext
+ 10.stripHtml = 1
+ 10.htmlSpecialChars = 1
+ }
+ 20.stdWrap.crop = 200 | ...
+ 20.stdWrap.wrap = <p class="csc-searchResult">|</p>
+ }
+
+ layout = COA
+ layout {
+ wrap = <table border="0" cellspacing="0" cellpadding="2" class="csc-searchResultInfo"><tr> | </tr></table> ###RESULT###
+
+ 10 = TEXT
+ 10.data = LLL:EXT:css_styled_content/pi1/locallang.xml:search.resultRange
+ 10.wrap = <td class="csc-searchResultRange"><p>|</p></td>
+
+ 20 = TEXT
+ 20.value = ###PREV### ###NEXT###
+ 20.wrap = <td class="csc-searchResultPrevNext"><p>|</p></td>
+ }
+
+ noResultObj = COA
+ noResultObj {
+ 10 = TEXT
+ 10.data = LLL:EXT:css_styled_content/pi1/locallang.xml:search.emptySearch
+ 10.wrap = <h3 class="csc-noSearchResultMsg">|</h3>
+ }
+
+ next = TEXT
+ next.data = LLL:EXT:css_styled_content/pi1/locallang.xml:search.searchResultNext
+
+ prev = TEXT
+ prev.data = LLL:EXT:css_styled_content/pi1/locallang.xml:search.searchResultPrev
+
+ target = {$styles.content.searchresult.target}
+ range = 20
+
+ stdWrap.prefixComment = 2 | Search result:
+}
+
+# Form:
+tt_content.search.30 < tt_content.mailform.20
+tt_content.search.30 {
+ goodMess = {$styles.content.searchform.goodMess}
+ redirect >
+ recipient >
+ data >
+ dataArray {
+ 10.label.data = LLL:EXT:css_styled_content/pi1/locallang.xml:search.searchWord
+ 10.type = sword=input
+ 20.label.data = LLL:EXT:css_styled_content/pi1/locallang.xml:search.searchIn
+ 20.type = scols=select
+ 20.valueArray {
+ 10.label.data = LLL:EXT:css_styled_content/pi1/locallang.xml:search.headersKeywords
+ 10.value = pages.title-subtitle-keywords-description:tt_content.header
+ 20.label.data = LLL:EXT:css_styled_content/pi1/locallang.xml:search.pageContent
+ 20.value = tt_content.header-bodytext-imagecaption
+ }
+ 30.type = stype=hidden
+ 30.value = L0
+ 40.type = submit=submit
+ 40.value.data = LLL:EXT:css_styled_content/pi1/locallang.xml:search.searchButton
+ }
+ type.field = pages
+ type.listNum = 0
+ locationData = HTTP_POST_VARS
+ no_cache = 1
+
+ stdWrap.wrap = <table border="0" cellspacing="1" cellpadding="1" class="csc-searchform"> | </table>
+ stdWrap {
+ editIcons.iconTitle.data = LLL:EXT:css_styled_content/pi1/locallang.xml:eIcon.search
+
+ prefixComment = 2 | Search form inserted:
+ }
+}
+
+
+# ******************
+# CType: login
+# ******************
+tt_content.login < tt_content.mailform
+tt_content.login.10 = < lib.stdheader
+tt_content.login.20 {
+ goodMess = {$styles.content.loginform.goodMess}
+ redirect >
+ recipient >
+ data >
+ dataArray {
+ 10.label.data = LLL:EXT:css_styled_content/pi1/locallang.xml:login.username
+ 10.type = *user=input
+ 20.label.data = LLL:EXT:css_styled_content/pi1/locallang.xml:login.password
+ 20.type = *pass=password
+ 30.type = logintype=hidden
+ 30.value = login
+ 40.type = submit=submit
+ 40.value.data = LLL:EXT:css_styled_content/pi1/locallang.xml:login.login
+ }
+ type.field = pages
+ type.listNum = 0
+ target = {$styles.content.loginform.target}
+ locationData = 0
+ hiddenFields.pid = TEXT
+ hiddenFields.pid {
+ value = {$styles.content.loginform.pid}
+ override.field = pages
+ override.listNum = 1
+ }
+
+ stdWrap.wrap = <div class="csc-loginform"> | </div>
+ stdWrap {
+ editIcons.iconTitle.data = LLL:EXT:css_styled_content/pi1/locallang.xml:eIcon.login
+
+ prefixComment = 2 | Login/Logout form:
+ }
+}
+[loginUser = *]
+tt_content.login.20 {
+ dataArray >
+ dataArray {
+ 10.label.data = LLL:EXT:css_styled_content/pi1/locallang.xml:login.username
+ 10.label.wrap = | <!--###USERNAME###-->
+ 30.type = logintype=hidden
+ 30.value = logout
+ 40.type = submit=submit
+ 40.value.data = LLL:EXT:css_styled_content/pi1/locallang.xml:login.logout
+ }
+}
+[global]
+
+
+# ******************
+# CType: splash
+# ******************
+# Deprecated element.
+# Still here for backwards compatibility with plugins using the "text box" type.
+tt_content.splash = CASE
+tt_content.splash.key.field = splash_layout
+tt_content.splash.stdWrap {
+ prefixComment = 2 | Textbox inserted (Deprecated)
+}
+tt_content.splash.default = COA
+tt_content.splash.default {
+ 20 = CTABLE
+ 20 {
+ c.1 = < tt_content.text
+ lm.1 = IMAGE
+ lm.1.file {
+ import = uploads/pics/
+ import.field = image
+ import.listNum = 0
+ maxW.field = imagewidth
+ maxW.ifEmpty = 200
+ }
+ cMargins = 30,0,0,0
+ }
+}
+tt_content.splash.1 < tt_content.splash.default
+tt_content.splash.1.20.lm.1.file >
+tt_content.splash.1.20.lm.1.file = GIFBUILDER
+tt_content.splash.1.20.lm.1.file {
+ XY = [10.w]+10,[10.h]+10
+ backColor = {$content.splash.bgCol}
+ backColor.override.data = register:pageColor
+ format = jpg
+ 5 = BOX
+ 5.dimensions = 3,3,[10.w],[10.h]
+ 5.color = #333333
+ 7 = EFFECT
+ 7.value = blur=99|blur=99|blur=99|blur=99|blur=99|blur=99|blur=99
+ 10 = IMAGE
+ 10.file {
+ import = uploads/pics/
+ import.field = image
+ import.listNum = 0
+ maxW.field = imagewidth
+ maxW.ifEmpty = 200
+ }
+}
+// The image frames are not available unless TypoScript code from styles.content.imgFrames.x is provided manually:
+tt_content.splash.2 < tt_content.splash.default
+#tt_content.splash.2.20.lm.1.file.m < styles.content.imgFrames.1
+tt_content.splash.3 < tt_content.splash.default
+#tt_content.splash.3.20.lm.1.file.m < styles.content.imgFrames.2
+
+// From plugin.postit1, if included:
+tt_content.splash.20 = < plugin.postit1
+
+
+
+# ****************
+# CType: menu
+# ****************
+tt_content.menu = COA
+tt_content.menu {
+ 10 = < lib.stdheader
+
+ 20 = CASE
+ 20 {
+ key.field = menu_type
+
+ # "Menu of these pages"
+ default = HMENU
+ default {
+ special = list
+ special.value.field = pages
+ wrap = <ul class="csc-menu csc-menu-def">|</ul>
+ 1 = TMENU
+ 1 {
+ target = {$PAGE_TARGET}
+ NO {
+ stdWrap.htmlSpecialChars = 1
+ wrapItemAndSub = <li>|</li>
+ ATagTitle.field = description // title
+ }
+ noBlur = 1
+ }
+ }
+
+ # "Menu of subpages to these pages"
+ 1 < .default
+ 1 {
+ special = directory
+ wrap = <ul class="csc-menu csc-menu-1">|</ul>
+ }
+
+ # "Sitemap - liststyle"
+ 2 = HMENU
+ 2 {
+ wrap = <div class="csc-sitemap">|</div>
+ 1 = TMENU
+ 1 {
+ target = {$PAGE_TARGET}
+ noBlur = 1
+ expAll = 1
+ wrap = <ul>|</ul>
+ NO {
+ stdWrap.htmlSpecialChars = 1
+ wrapItemAndSub = <li>|</li>
+ ATagTitle.field = description // title
+ }
+ }
+ 2 < .1
+ 3 < .1
+ 4 < .1
+ 5 < .1
+ 6 < .1
+ 7 < .1
+ }
+
+ # "Section index (pagecontent w/Index checked - liststyle)"
+ 3 < styles.content.get
+ 3 {
+ wrap = <ul class="csc-menu csc-menu-3">|</ul>
+ select.andWhere = sectionIndex!=0
+ select.pidInList.override.field = pages
+ renderObj = TEXT
+ renderObj {
+ fieldRequired = header
+ trim = 1
+ field = header
+ htmlSpecialChars = 1
+ noBlur = 1
+ wrap = <li class="csc-section">|</li>
+ typolink.parameter.field = pid
+ typolink.section.field = uid
+ }
+ }
+
+ # "Menu of subpages to these pages (with abstract)"
+ 4 < .1
+ 4 {
+ wrap = <dl class="csc-menu csc-menu-4">|</dl>
+ 1.NO {
+ wrapItemAndSub >
+ linkWrap = <dt>|</dt>
+ after {
+ data = field : abstract // field : description // field : subtitle
+ required = 1
+ htmlSpecialChars = 1
+ wrap = <dd>|</dd>
+ }
+ ATagTitle.field = description // title
+ }
+ }
+
+ # "Recently updated pages"
+ 5 < .default
+ 5 {
+ wrap = <ul class="csc-menu csc-menu-5">|</ul>
+ special = updated
+ special {
+ maxAge = 3600*24*7
+ excludeNoSearchPages = 1
+ }
+ }
+
+ # "Related pages (based on keywords)"
+ 6 < .default
+ 6 {
+ wrap = <ul class="csc-menu csc-menu-6">|</ul>
+ special = keywords
+ special {
+ excludeNoSearchPages = 1
+ }
+ }
+
+ # "Menu of subpages to these pages + sections - liststyle"
+ 7 < .1
+ 7 {
+ wrap = <ul class="csc-menu csc-menu-7">|</ul>
+ 1.expAll = 1
+ 2 < .1
+ 2 {
+ sectionIndex = 1
+ sectionIndex.type = header
+ wrap = <ul>|</ul>
+ NO.wrapItemAndSub = <li class="csc-section">|</li>
+ }
+ }
+ }
+
+ 20.stdWrap {
+ editIcons = tt_content: menu_type, pages
+ editIcons.iconTitle.data = LLL:EXT:css_styled_content/pi1/locallang.xml:eIcon.menuSitemap
+
+ prefixComment = 2 | Menu/Sitemap element:
+ }
+}
+
+
+
+# ****************
+# CType: shortcut
+# ****************
+# Should be a complete copy from the old static template "content (default)"
+tt_content.shortcut = COA
+tt_content.shortcut {
+ 20 = CASE
+ 20.key.field = layout
+ 20.0= RECORDS
+ 20.0 {
+ source.field = records
+ tables = {$content.shortcut.tables}
+ # THESE are OLD plugins. Modern plugins register themselves automatically!
+ conf.tt_content = < tt_content
+ conf.tt_address = < tt_address
+ conf.tt_links = < tt_links
+ conf.tt_guest = < tt_guest
+ conf.tt_board = < tt_board
+ conf.tt_calender = < tt_calender
+ conf.tt_rating < tt_rating
+ conf.tt_products = < tt_products
+ conf.tt_news = < tt_news
+ conf.tt_poll = < plugin.tt_poll
+ }
+ 20.1= RECORDS
+ 20.1 {
+ source.field = records
+ tables = {$content.shortcut.tables}
+ conf.tt_poll = < plugin.tt_poll
+ conf.tt_poll.code = RESULT,SUBMITTEDVOTE
+ }
+
+ 20.stdWrap {
+ editIcons = tt_content: records
+ editIcons.iconTitle.data = LLL:EXT:css_styled_content/pi1/locallang.xml:eIcon.recordList
+
+ prefixComment = 2 | Inclusion of other records (by reference):
+ }
+}
+
+
+# ****************
+# CType: list
+# ****************
+# Should be a complete copy from the old static template "content (default)" (except "lib.stdheader")
+tt_content.list = COA
+tt_content.list {
+ 10 = < lib.stdheader
+
+ 20 = CASE
+ 20.key.field = list_type
+ 20 {
+ # LIST element references (NOT copy of objects!)
+ # THESE are OLD plugins. Modern plugins register themselves automatically!
+ 3 = CASE
+ 3.key.field = layout
+ 3.0 = < plugin.tt_guest
+
+ 4 = CASE
+ 4.key.field = layout
+ 4.0 = < plugin.tt_board_list
+ 4.1 = < plugin.tt_board_tree
+
+ 2 = CASE
+ 2.key.field = layout
+ 2.0 = < plugin.tt_board_tree
+
+ 5 = CASE
+ 5.key.field = layout
+ 5.0 = < plugin.tt_products
+
+ 7 = CASE
+ 7.key.field = layout
+ 7.0 = < plugin.tt_calender
+
+ 8 = CASE
+ 8.key.field = layout
+ 8.0 = < plugin.tt_rating
+
+ 9 = CASE
+ 9.key.field = layout
+ 9.0 = < plugin.tt_news
+
+ 11 = CASE
+ 11.key.field = layout
+ 11.0 = < plugin.tipafriend
+
+ 20 = CASE
+ 20.key.field = layout
+ 20.0 = < plugin.feadmin.fe_users
+
+ 21 = CASE
+ 21.key.field = layout
+ 21.0 = < plugin.feadmin.dmailsubscription
+ }
+
+ 20.stdWrap {
+ editIcons = tt_content: list_type, layout, select_key, pages [recursive]
+ editIcons.iconTitle.data = LLL:EXT:css_styled_content/pi1/locallang.xml:eIcon.plugin
+
+ prefixComment = 2 | Plugin inserted:
+ }
+}
+
+
+# ****************
+# CType: script
+# ****************
+# OBSOLETE! Please make extensions instead. The "script" content element was meant for these custom purposes in the past. Today extensions will do the job better.
+tt_content.script = TEXT
+tt_content.script {
+ value =
+
+ prefixComment = 2 | Script element (Deprecated)
+}
+
+
+# ****************
+# CType: div
+# ****************
+tt_content.div = TEXT
+tt_content.div {
+ value = <hr />
+ wrap = <div class="divider">|</div>
+ prefixComment = 2 | Div element
+}
+
+
+# ****************
+# CType: html
+# ****************
+# This truly IS a content object, launched from inside the PHP class of course.
+# Should be a complete copy from the old static template "content (default)"
+tt_content.html = TEXT
+tt_content.html {
+ field = bodytext
+
+ editIcons = tt_content: pages
+ editIcons.iconTitle.data = LLL:EXT:css_styled_content/pi1/locallang.xml:eIcon.html
+
+ prefixComment = 2 | Raw HTML content:
+}
+
+
+# ****************
+# Default error msg:
+# ****************
+tt_content.default = TEXT
+tt_content.default {
+ field = CType
+ wrap = <p style="background-color: yellow;"><b>ERROR:</b> Content Element type "|" has no rendering definition!</p>
+
+ prefixComment = 2 | Unknown element message:
+}
+
+# *********************************************************************
+# ACCESSIBILITY MODE
+# *********************************************************************
+
+
+
+
+
+
+
+plugin.tx_cssstyledcontent._CSS_DEFAULT_STYLE (
+ /* Captions */
+ DIV.csc-textpic-caption-c .csc-textpic-caption { text-align: center; }
+ DIV.csc-textpic-caption-r .csc-textpic-caption { text-align: right; }
+ DIV.csc-textpic-caption-l .csc-textpic-caption { text-align: left; }
+
+ /* Needed for noRows setting */
+ DIV.csc-textpic DIV.csc-textpic-imagecolumn { float: left; display: inline; }
+
+ /* Border just around the image */
+ {$styles.content.imgtext.borderSelector} {
+ border: {$styles.content.imgtext.borderThick}px solid {$styles.content.imgtext.borderColor};
+ padding: {$styles.content.imgtext.borderSpace}px {$styles.content.imgtext.borderSpace}px;
+ }
+
+ DIV.csc-textpic-imagewrap { padding: 0; }
+
+ DIV.csc-textpic IMG { border: none; }
+
+ /* DIV: This will place the images side by side */
+ DIV.csc-textpic DIV.csc-textpic-imagewrap DIV.csc-textpic-image { float: left; }
+
+ /* UL: This will place the images side by side */
+ DIV.csc-textpic DIV.csc-textpic-imagewrap UL { list-style: none; margin: 0; padding: 0; }
+ DIV.csc-textpic DIV.csc-textpic-imagewrap UL LI { float: left; margin: 0; padding: 0; }
+
+ /* DL: This will place the images side by side */
+ DIV.csc-textpic DIV.csc-textpic-imagewrap DL.csc-textpic-image { float: left; }
+ DIV.csc-textpic DIV.csc-textpic-imagewrap DL.csc-textpic-image DT { float: none; }
+ DIV.csc-textpic DIV.csc-textpic-imagewrap DL.csc-textpic-image DD { float: none; }
+ DIV.csc-textpic DIV.csc-textpic-imagewrap DL.csc-textpic-image DD IMG { border: none; } /* FE-Editing Icons */
+ DL.csc-textpic-image { margin: 0; }
+ DL.csc-textpic-image DT { margin: 0; display: inline; }
+ DL.csc-textpic-image DD { margin: 0; }
+
+ /* Clearer */
+ DIV.csc-textpic-clear { clear: both; }
+
+ /* Margins around images: */
+
+ /* Pictures on left, add margin on right */
+ DIV.csc-textpic-left DIV.csc-textpic-imagewrap .csc-textpic-image,
+ DIV.csc-textpic-intext-left-nowrap DIV.csc-textpic-imagewrap .csc-textpic-image,
+ DIV.csc-textpic-intext-left DIV.csc-textpic-imagewrap .csc-textpic-image {
+ display: inline; /* IE fix for double-margin bug */
+ margin-right: {$styles.content.imgtext.colSpace}px;
+ }
+
+ /* Pictures on right, add margin on left */
+ DIV.csc-textpic-right DIV.csc-textpic-imagewrap .csc-textpic-image,
+ DIV.csc-textpic-intext-right-nowrap DIV.csc-textpic-imagewrap .csc-textpic-image,
+ DIV.csc-textpic-intext-right DIV.csc-textpic-imagewrap .csc-textpic-image {
+ display: inline; /* IE fix for double-margin bug */
+ margin-left: {$styles.content.imgtext.colSpace}px;
+ }
+
+ /* Pictures centered, add margin on left */
+ DIV.csc-textpic-center DIV.csc-textpic-imagewrap .csc-textpic-image {
+ display: inline; /* IE fix for double-margin bug */
+ margin-left: {$styles.content.imgtext.colSpace}px;
+ }
+ DIV.csc-textpic DIV.csc-textpic-imagewrap .csc-textpic-image .csc-textpic-caption { margin: 0; }
+ DIV.csc-textpic DIV.csc-textpic-imagewrap .csc-textpic-image IMG { margin: 0; vertical-align:bottom; }
+
+ /* Space below each image (also in-between rows) */
+ DIV.csc-textpic DIV.csc-textpic-imagewrap .csc-textpic-image { margin-bottom: {$styles.content.imgtext.rowSpace}px; }
+ DIV.csc-textpic-equalheight DIV.csc-textpic-imagerow { margin-bottom: {$styles.content.imgtext.rowSpace}px; display: block; }
+ DIV.csc-textpic DIV.csc-textpic-imagerow { clear: both; }
+ DIV.csc-textpic DIV.csc-textpic-single-image IMG { margin-bottom: {$styles.content.imgtext.rowSpace}px; }
+
+ /* IE7 hack for margin between image rows */
+ *+html DIV.csc-textpic DIV.csc-textpic-imagerow .csc-textpic-image { margin-bottom: 0; }
+ *+html DIV.csc-textpic DIV.csc-textpic-imagerow { margin-bottom: {$styles.content.imgtext.rowSpace}px; }
+
+ /* No margins around the whole image-block */
+ DIV.csc-textpic DIV.csc-textpic-imagewrap .csc-textpic-firstcol { margin-left: 0px !important; }
+ DIV.csc-textpic DIV.csc-textpic-imagewrap .csc-textpic-lastcol { margin-right: 0px !important; }
+
+ /* Add margin from image-block to text (in case of "Text w/ images") */
+ DIV.csc-textpic-intext-left DIV.csc-textpic-imagewrap,
+ DIV.csc-textpic-intext-left-nowrap DIV.csc-textpic-imagewrap {
+ margin-right: {$styles.content.imgtext.textMargin}px !important;
+ }
+ DIV.csc-textpic-intext-right DIV.csc-textpic-imagewrap,
+ DIV.csc-textpic-intext-right-nowrap DIV.csc-textpic-imagewrap {
+ margin-left: {$styles.content.imgtext.textMargin}px !important;
+ }
+
+ /* Positioning of images: */
+
+ /* Above */
+ DIV.csc-textpic-above DIV.csc-textpic-text { clear: both; }
+
+ /* Center (above or below) */
+ DIV.csc-textpic-center { text-align: center; /* IE-hack */ }
+ DIV.csc-textpic-center DIV.csc-textpic-imagewrap { margin: 0 auto; }
+ DIV.csc-textpic-center DIV.csc-textpic-imagewrap .csc-textpic-image { text-align: left; /* Remove IE-hack */ }
+ DIV.csc-textpic-center DIV.csc-textpic-text { text-align: left; /* Remove IE-hack */ }
+
+ /* Right (above or below) */
+ DIV.csc-textpic-right DIV.csc-textpic-imagewrap { float: right; }
+ DIV.csc-textpic-right DIV.csc-textpic-text { clear: right; }
+
+ /* Left (above or below) */
+ DIV.csc-textpic-left DIV.csc-textpic-imagewrap { float: left; }
+ DIV.csc-textpic-left DIV.csc-textpic-text { clear: left; }
+
+ /* Left (in text) */
+ DIV.csc-textpic-intext-left DIV.csc-textpic-imagewrap { float: left; }
+
+ /* Right (in text) */
+ DIV.csc-textpic-intext-right DIV.csc-textpic-imagewrap { float: right; }
+
+ /* Right (in text, no wrap around) */
+ DIV.csc-textpic-intext-right-nowrap DIV.csc-textpic-imagewrap { float: right; clear: both; }
+ /* Hide from IE5-mac. Only IE-win sees this. \*/
+ * html DIV.csc-textpic-intext-right-nowrap .csc-textpic-text { height: 1%; }
+ /* End hide from IE5/mac */
+
+ /* Left (in text, no wrap around) */
+ DIV.csc-textpic-intext-left-nowrap DIV.csc-textpic-imagewrap { float: left; clear: both; }
+ /* Hide from IE5-mac. Only IE-win sees this. \*/
+ * html DIV.csc-textpic-intext-left-nowrap .csc-textpic-text,
+ * html .csc-textpic-intext-left ol,
+ * html .csc-textpic-intext-left ul { height: 1%; }
+ /* End hide from IE5/mac */
+
+ DIV.csc-textpic DIV.csc-textpic-imagerow-last { margin-bottom: 0; }
+
+ /* Browser fixes: */
+
+ /* Fix for unordered and ordered list with image "In text, left" */
+ .csc-textpic-intext-left ol, .csc-textpic-intext-left ul {padding-left: 40px; overflow: auto; }
+)
+
+# TYPO3 SVN ID: $Id$
+
--- /dev/null
+# This is the VCL configuration Varnish will automatically append to your VCL
+# file during compilation/loading. See the vcl(7) man page for details on syntax
+# and semantics.
+# New users are recommended to use the example.vcl file as a starting point.
+
+vcl 4.0;
+
+backend foo { .host = "192.168.1.1"; }
+
+probe blatti { .url = "foo"; }
+probe fooy {
+ .url = "beh";
+
+}
+
+acl foo {
+ "192.168.1.1";
+ "192.168.0.0"/24;
+ ! "192.168.0.1";
+}
+
+include "foo.vcl";
+
+import std;
+
+sub vcl_init {
+ new b = director.foo();
+}
+
+sub vcl_recv {
+ ban(req.url ~ "foo");
+ rollback();
+}
+sub vcl_recv {
+ if (req.method == "PRI") {
+ /* We do not support SPDY or HTTP/2.0 */
+ return (synth(405));
+ }
+ if (req.method != "GET" &&
+ req.method != "HEAD" &&
+ req.method != "PUT" &&
+ req.method != "POST" &&
+ req.method != "TRACE" &&
+ req.method != "OPTIONS" &&
+ req.method != "DELETE") {
+ /* Non-RFC2616 or CONNECT, which is weird. */
+ return (pipe);
+ }
+
+ if (req.method != "GET" && req.method != "HEAD") {
+ /* We only deal with GET and HEAD by default */
+ return (pass);
+ }
+ if (req.http.Authorization || req.http.Cookie) {
+ /* Not cacheable by default */
+ return (pass);
+ }
+ return (hash);
+}
+
+sub vcl_pipe {
+ # By default Connection: close is set on all piped requests, to stop
+ # connection reuse from sending future requests directly to the
+ # (potentially) wrong backend. If you do want this to happen, you can undo
+ # it here.
+ # unset bereq.http.connection;
+ return (pipe);
+}
+
+sub vcl_pass {
+ return (fetch);
+}
+
+sub vcl_hash {
+ hash_data(req.url);
+ if (req.http.host) {
+ hash_data(req.http.host);
+ } else {
+ hash_data(server.ip);
+ }
+ return (lookup);
+}
+
+sub vcl_purge {
+ return (synth(200, "Purged"));
+}
+
+sub vcl_hit {
+ if (obj.ttl >= 0s) {
+ // A pure unadulterated hit, deliver it
+ return (deliver);
+ }
+ if (obj.ttl + obj.grace > 0s) {
+ // Object is in grace, deliver it
+ // Automatically triggers a background fetch
+ return (deliver);
+ }
+ // fetch & deliver once we get the result
+ return (miss);
+}
+
+sub vcl_miss {
+ return (fetch);
+}
+
+sub vcl_deliver {
+ set resp.http.x-storage = storage.s0.free;
+ return (deliver);
+}
+
+/*
+ * We can come here "invisibly" with the following errors: 413, 417 & 503
+ */
+sub vcl_synth {
+ set resp.http.Content-Type = "text/html; charset=utf-8";
+ set resp.http.Retry-After = "5";
+ synthetic( {"<!DOCTYPE html>
+<html>
+ <head>
+ <title>"} + resp.status + " " + resp.reason + {"</title>
+ </head>
+ <body>
+ <h1>Error "} + resp.status + " " + resp.reason + {"</h1>
+ <p>"} + resp.reason + {"</p>
+ <h3>Guru Meditation:</h3>
+ <p>XID: "} + req.xid + {"</p>
+ <hr>
+ <p>Varnish cache server</p>
+ </body>
+</html>
+"} );
+ return (deliver);
+}
+
+#######################################################################
+# Backend Fetch
+
+sub vcl_backend_fetch {
+ return (fetch);
+}
+
+sub vcl_backend_response {
+ if (beresp.ttl <= 0s ||
+ beresp.http.Set-Cookie ||
+ beresp.http.Surrogate-control ~ "no-store" ||
+ (!beresp.http.Surrogate-Control &&
+ beresp.http.Cache-Control ~ "no-cache|no-store|private") ||
+ beresp.http.Vary == "*") {
+ /*
+ * Mark as "Hit-For-Pass" for the next 2 minutes
+ */
+ set beresp.ttl = 120s;
+ set beresp.uncacheable = true;
+ }
+ return (deliver);
+}
+
+sub vcl_backend_error {
+ set beresp.http.Content-Type = "text/html; charset=utf-8";
+ set beresp.http.Retry-After = "5";
+ synthetic( {"<!DOCTYPE html>
+<html>
+ <head>
+ <title>"} + beresp.status + " " + beresp.reason + {"</title>
+ </head>
+ <body>
+ <h1>Error "} + beresp.status + " " + beresp.reason + {"</h1>
+ <p>"} + beresp.reason + {"</p>
+ <h3>Guru Meditation:</h3>
+ <p>XID: "} + bereq.xid + {"</p>
+ <hr>
+ <p>Varnish cache server</p>
+ </body>
+</html>
+"} );
+ return (deliver);
+}
+
+#######################################################################
+# Housekeeping
+
+sub vcl_init {
+}
+
+sub vcl_fini {
+ return (ok);
+}
--- /dev/null
+.. -*- mode: rst -*-
+
+{+.. highlight:: python+}
+
+====================
+Write your own lexer
+====================
+
+If a lexer for your favorite language is missing in the Pygments package, you
+can easily write your own and extend Pygments.
+
+All you need can be found inside the :mod:`pygments.lexer` module. As you can
+read in the :doc:`API documentation <api>`, a lexer is a class that is
+initialized with some keyword arguments (the lexer options) and that provides a
+:meth:`.get_tokens_unprocessed()` method which is given a string or unicode
+object with the data to [-parse.-] {+lex.+}
+
+The :meth:`.get_tokens_unprocessed()` method must return an iterator or iterable
+containing tuples in the form ``(index, token, value)``. Normally you don't
+need to do this since there are [-numerous-] base lexers {+that do most of the work and that+}
+you can subclass.
+
+
+RegexLexer
+==========
+
+[-A very powerful (but quite easy to use)-]
+
+{+The+} lexer {+base class used by almost all of Pygments' lexers+} is the
+:class:`RegexLexer`. This
+[-lexer base-] class allows you to define lexing rules in terms of
+*regular expressions* for different *states*.
+
+States are groups of regular expressions that are matched against the input
+string at the *current position*. If one of these expressions matches, a
+corresponding action is performed [-(normally-] {+(such as+} yielding a token with a specific
+[-type),-]
+{+type, or changing state),+} the current position is set to where the last match
+ended and the matching process continues with the first regex of the current
+state.
+
+Lexer states are kept [-in-] {+on+} a [-state-] stack: each time a new state is entered, the new
+state is pushed onto the stack. The most basic lexers (like the `DiffLexer`)
+just need one state.
+
+Each state is defined as a list of tuples in the form (`regex`, `action`,
+`new_state`) where the last item is optional. In the most basic form, `action`
+is a token type (like `Name.Builtin`). That means: When `regex` matches, emit a
+token with the match text and type `tokentype` and push `new_state` on the state
+stack. If the new state is ``'#pop'``, the topmost state is popped from the
+stack instead. [-(To-] {+To+} pop more than one state, use ``'#pop:2'`` and so [-on.)-] {+on.+}
+``'#push'`` is a synonym for pushing the current state on the stack.
+
+The following example shows the `DiffLexer` from the builtin lexers. Note that
+it contains some additional attributes `name`, `aliases` and `filenames` which
+aren't required for a lexer. They are used by the builtin lexer lookup
+functions.
+
+[-.. sourcecode:: python-] {+::+}
+
+ from pygments.lexer import RegexLexer
+ from pygments.token import *
+
+ class DiffLexer(RegexLexer):
+ name = 'Diff'
+ aliases = ['diff']
+ filenames = ['*.diff']
+
+ tokens = {
+ 'root': [
+ (r' .*\n', Text),
+ (r'\+.*\n', Generic.Inserted),
+ (r'-.*\n', Generic.Deleted),
+ (r'@.*\n', Generic.Subheading),
+ (r'Index.*\n', Generic.Heading),
+ (r'=.*\n', Generic.Heading),
+ (r'.*\n', Text),
+ ]
+ }
+
+As you can see this lexer only uses one state. When the lexer starts scanning
+the text, it first checks if the current character is a space. If this is true
+it scans everything until newline and returns the [-parsed-] data as {+a+} `Text` [-token.-] {+token (which
+is the "no special highlighting" token).+}
+
+If this rule doesn't match, it checks if the current char is a plus sign. And
+so on.
+
+If no rule matches at the current position, the current char is emitted as an
+`Error` token that indicates a [-parsing-] {+lexing+} error, and the position is increased by
+[-1.-]
+{+one.+}
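The state mechanics described above can be sketched with a second state. This is a toy lexer (the class name `StringLexer` and its rules are invented for illustration, not part of Pygments): a double quote pushes a hypothetical ``'string'`` state, and the closing quote pops it again with ``'#pop'``.

```python
from pygments.lexer import RegexLexer
from pygments.token import String, Text


class StringLexer(RegexLexer):
    """Toy two-state lexer: double-quoted strings get their own state."""
    name = 'StringDemo'

    tokens = {
        'root': [
            (r'[^"]+', Text),
            (r'"', String, 'string'),   # push the 'string' state
        ],
        'string': [
            (r'[^"]+', String),
            (r'"', String, '#pop'),     # pop back to 'root'
        ],
    }
```

While the ``'string'`` state is on top of the stack, only its two rules are tried, so everything up to the closing quote is emitted as `String` regardless of what the ``'root'`` rules would say.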
+
+
+Adding and testing a new lexer
+==============================
+
+To make [-pygments-] {+Pygments+} aware of your new lexer, you have to perform the following
+steps:
+
+First, change to the current directory containing the [-pygments-] {+Pygments+} source code:
+
+.. [-sourcecode::-] {+code-block::+} console
+
+ $ cd .../pygments-main
+
+{+Select a matching module under ``pygments/lexers``, or create a new module for
+your lexer class.+}
+
+Next, make sure the lexer is known from outside of the module. All modules in
+the ``pygments.lexers`` package specify ``__all__``. For example, [-``other.py`` sets:
+
+.. sourcecode:: python-] {+``esoteric.py`` sets::+}
+
+ __all__ = ['BrainfuckLexer', 'BefungeLexer', ...]
+
+Simply add the name of your lexer class to this list.
+
+Finally the lexer can be made [-publically-] {+publicly+} known by rebuilding the lexer mapping:
+
+.. [-sourcecode::-] {+code-block::+} console
+
+ $ make mapfiles
+
+To test the new lexer, store an example file with the proper extension in
+``tests/examplefiles``. For example, to test your ``DiffLexer``, add a
+``tests/examplefiles/example.diff`` containing a sample diff output.
+
+Now you can use pygmentize to render your example to HTML:
+
+.. [-sourcecode::-] {+code-block::+} console
+
+ $ ./pygmentize -O full -f html -o /tmp/example.html tests/examplefiles/example.diff
+
+Note that this [-explicitely-] {+explicitly+} calls the ``pygmentize`` in the current directory
+by preceding it with ``./``. This ensures your modifications are used.
+Otherwise a possibly already installed, unmodified version without your new
+lexer would be called from the system search path (``$PATH``).
+
+To view the result, open ``/tmp/example.html`` in your browser.
+
+Once the example renders as expected, you should run the complete test suite:
+
+.. [-sourcecode::-] {+code-block::+} console
+
+ $ make test
+
+{+It also tests that your lexer fulfills the lexer API and certain invariants,
+such as that the concatenation of all token text is the same as the input text.+}
+
+
+Regex Flags
+===========
+
+You can either define regex flags {+locally+} in the regex (``r'(?x)foo bar'``) or
+{+globally+} by adding a `flags` attribute to your lexer class. If no attribute is
+defined, it defaults to `re.MULTILINE`. For more [-informations-] {+information+} about regular
+expression flags see the {+page about+} `regular expressions`_ [-help page-] in the [-python-] {+Python+}
+documentation.
+
+.. _regular expressions: [-http://docs.python.org/lib/re-syntax.html-] {+http://docs.python.org/library/re.html#regular-expression-syntax+}
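As a sketch of the class-level variant (the lexer and its rules are hypothetical, not taken from the Pygments distribution), a `flags` attribute can extend the default like this:

```python
import re

from pygments.lexer import RegexLexer
from pygments.token import Comment, Text


class CaseInsensitiveLexer(RegexLexer):
    """Toy lexer whose keyword matching ignores case."""
    name = 'CaseDemo'

    # Replaces the default of plain re.MULTILINE.
    flags = re.MULTILINE | re.IGNORECASE

    tokens = {
        'root': [
            (r'rem\b.*?$', Comment),  # matches "rem", "REM", "Rem", ...
            (r'[^\n]+', Text),
            (r'\n', Text),
        ],
    }
```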
+
+
+Scanning multiple tokens at once
+================================
+
+{+So far, the `action` element in the rule tuple of regex, action and state has
+been a single token type. Now we look at the first of several other possible
+values.+}
+
+Here is a more complex lexer that highlights INI files. INI files consist of
+sections, comments and [-key-] {+``key+} = [-value pairs:
+
+.. sourcecode:: python-] {+value`` pairs::+}
+
+ from pygments.lexer import RegexLexer, bygroups
+ from pygments.token import *
+
+ class IniLexer(RegexLexer):
+ name = 'INI'
+ aliases = ['ini', 'cfg']
+ filenames = ['*.ini', '*.cfg']
+
+ tokens = {
+ 'root': [
+ (r'\s+', Text),
+ (r';.*?$', Comment),
+ (r'\[.*?\]$', Keyword),
+ (r'(.*?)(\s*)(=)(\s*)(.*?)$',
+ bygroups(Name.Attribute, Text, Operator, Text, String))
+ ]
+ }
+
+The lexer first looks for whitespace, comments and section names. [-And later-] {+Later+} it
+looks for a line [-that looks like-] {+resembling+} a key, value pair, separated by an ``'='``
+sign and optional whitespace.
+
+The `bygroups` helper [-makes sure that-] {+yields+} each {+capturing+} group [-is yielded-] {+in the regex+} with a different
+token type. First the `Name.Attribute` token, then a `Text` token for the
+optional whitespace, after that [-a-] {+an+} `Operator` token for the equals sign. Then a
+`Text` token for the whitespace again. The rest of the line is returned as
+`String`.
+
+Note that for this to work, every part of the match must be inside a capturing
+group (a ``(...)``), and there must not be any nested capturing groups. If you
+nevertheless need a group, use a non-capturing group defined using this syntax:
+[-``r'(?:some|words|here)'``-]
+{+``(?:some|words|here)``+} (note the ``?:`` after the beginning parenthesis).
+
+If you find yourself needing a capturing group inside the regex which shouldn't
+be part of the output but is used in the regular expressions for backreferencing
+[-(eg:-] {+(e.g.+} ``r'(<(foo|bar)>)(.*?)(</\2>)'``), you can pass `None` to the bygroups
+function and [-it will skip-] that group will be skipped in the output.
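+As a minimal sketch of this (the class name and token choices here are
+invented for illustration, using the regex from above):

```python
from pygments.lexer import RegexLexer, bygroups
from pygments.token import Punctuation, String, Text


class TagLexer(RegexLexer):
    """Hypothetical lexer for <foo>...</foo> pairs using a backreference."""
    name = 'Tag Example'

    tokens = {
        'root': [
            # Group 2 exists only so that \2 can backreference it;
            # passing None to bygroups skips it in the output.
            (r'(<(foo|bar)>)(.*?)(</\2>)',
             bygroups(Punctuation, None, String, Punctuation)),
            (r'\s+', Text),
        ]
    }
```

+Lexing ``<foo>hi</foo>`` then yields the delimiters as `Punctuation` and the
+content as `String`, with nothing emitted for the skipped group.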
+
+
+Changing states
+===============
+
+Many lexers need multiple states to work as expected. For example, some
+languages allow multiline comments to be nested. Since this is a recursive
+pattern it's impossible to lex just using regular expressions.
+
+Here is [-the solution:
+
+.. sourcecode:: python-] {+a lexer that recognizes C++ style comments (multi-line with ``/* */``
+and single-line with ``//`` until end of line)::+}
+
+ from pygments.lexer import RegexLexer
+ from pygments.token import *
+
+ class [-ExampleLexer(RegexLexer):-] {+CppCommentLexer(RegexLexer):+}
+ name = 'Example Lexer with states'
+
+ tokens = {
+ 'root': [
+ (r'[^/]+', Text),
+ (r'/\*', Comment.Multiline, 'comment'),
+ (r'//.*?$', Comment.Singleline),
+ (r'/', Text)
+ ],
+ 'comment': [
+ (r'[^*/]', Comment.Multiline),
+ (r'/\*', Comment.Multiline, '#push'),
+ (r'\*/', Comment.Multiline, '#pop'),
+ (r'[*/]', Comment.Multiline)
+ ]
+ }
+
+This lexer starts lexing in the ``'root'`` state. It tries to match as much as
+possible until it finds a slash (``'/'``). If the next character after the slash
+is [-a star-] {+an asterisk+} (``'*'``) the `RegexLexer` sends those two characters to the
+output stream marked as `Comment.Multiline` and continues [-parsing-] {+lexing+} with the rules
+defined in the ``'comment'`` state.
+
+If there wasn't [-a star-] {+an asterisk+} after the slash, the `RegexLexer` checks if it's a
+[-singleline-]
+{+Singleline+} comment [-(eg:-] {+(i.e.+} followed by a second slash). If this also wasn't the
+case it must be a single [-slash-] {+slash, which is not a comment starter+} (the separate
+regex for a single slash must also be given, else the slash would be marked as
+an error token).
+
+Inside the ``'comment'`` state, we do the same thing again. Scan until the
+lexer finds a star or slash. If it's the opening of a multiline comment, push
+the ``'comment'`` state on the stack and continue scanning, again in the
+``'comment'`` state. Else, check if it's the end of the multiline comment. If
+yes, pop one state from the stack.
+
+Note: If you pop from an empty stack you'll get an `IndexError`. (There is an
+easy way to prevent this from happening: don't ``'#pop'`` in the root state).
+
+If the `RegexLexer` encounters a newline that is flagged as an error token, the
+stack is emptied and the lexer continues scanning in the ``'root'`` state. This
+[-helps producing-]
+{+can help produce+} error-tolerant highlighting for erroneous input, e.g. when a
+single-line string is not closed.
+
+
+Advanced state tricks
+=====================
+
+There are a few more things you can do with states:
+
+- You can push multiple states onto the stack if you give a tuple instead of a
+ simple string as the third item in a rule tuple. For example, if you want to
+ match a comment containing a directive, something [-like::-] {+like:
+
+ .. code-block:: text+}
+
+ /* <processing directive> rest of comment */
+
+ you can use this [-rule:
+
+ .. sourcecode:: python-] {+rule::+}
+
+ tokens = {
+ 'root': [
+ (r'/\* <', Comment, ('comment', 'directive')),
+ ...
+ ],
+ 'directive': [
+ (r'[^>]*', Comment.Directive),
+ (r'>', Comment, '#pop'),
+ ],
+ 'comment': [
+ (r'[^*]+', Comment),
+ (r'\*/', Comment, '#pop'),
+ (r'\*', Comment),
+ ]
+ }
+
+ When this encounters the above sample, first ``'comment'`` and ``'directive'``
+ are pushed onto the stack, then the lexer continues in the directive state
+ until it finds the closing ``>``, then it continues in the comment state until
+ the closing ``*/``. Then, both states are popped from the stack again and
+ lexing continues in the root state.
+
+ .. versionadded:: 0.9
+ The tuple can contain the special ``'#push'`` and ``'#pop'`` (but not
+ ``'#pop:n'``) directives.
+
+
+- You can include the rules of a state in the definition of another. This is
+ done by using `include` from [-`pygments.lexer`:
+
+ .. sourcecode:: python-] {+`pygments.lexer`::+}
+
+ from pygments.lexer import RegexLexer, bygroups, include
+ from pygments.token import *
+
+ class ExampleLexer(RegexLexer):
+ tokens = {
+ 'comments': [
+ (r'/\*.*?\*/', Comment),
+ (r'//.*?\n', Comment),
+ ],
+ 'root': [
+ include('comments'),
+ (r'(function )(\w+)( {)',
+ bygroups(Keyword, Name, Keyword), 'function'),
+ (r'.', Text),
+ ],
+ 'function': [
+ (r'[^}/]+', Text),
+ include('comments'),
+ (r'/', Text),
+ [-(r'}',-]
+ {+(r'\}',+} Keyword, '#pop'),
+ ]
+ }
+
+ This is a hypothetical lexer for a language that [-consist-] {+consists+} of functions and
+ comments. Because comments can occur at toplevel and in functions, we need
+ rules for comments in both states. As you can see, the `include` helper saves
+ repeating rules that occur more than once (in this example, the state
+ [-``'comment'``-] {+``'comments'``+} will never be entered by the lexer, as it's only there to be
+ included in ``'root'`` and ``'function'``).
+
+- Sometimes, you may want to "combine" a state from existing ones. This is
+ possible with the [-`combine`-] {+`combined`+} helper from `pygments.lexer`.
+
+ If you, instead of a new state, write ``combined('state1', 'state2')`` as the
+ third item of a rule tuple, a new anonymous state will be formed from state1
+ and state2 and if the rule matches, the lexer will enter this state.
+
+ This is not used very often, but can be helpful in some cases, such as the
+ `PythonLexer`'s string literal processing.
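+  For illustration, here is a hypothetical sketch of ``combined()`` in use;
+  the anonymous state entered on an opening quote merges escape handling with
+  plain string content:

```python
from pygments.lexer import RegexLexer, combined
from pygments.token import String, Text


class CombinedExampleLexer(RegexLexer):
    """Hypothetical lexer: an opening quote enters an anonymous state
    formed from the 'escapes' and 'strings' rule lists."""
    name = 'Combined Example'

    tokens = {
        'root': [
            (r'"', String, combined('escapes', 'strings')),
            (r'\s+', Text),
        ],
        'escapes': [
            (r'\\.', String.Escape),
        ],
        'strings': [
            (r'"', String, '#pop'),
            (r'[^"\\]+', String),
        ],
    }
```

+  Inside the combined state, backslash escapes are matched by the rules from
+  ``'escapes'`` and everything else by the rules from ``'strings'``.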
+
+- If you want your lexer to start lexing in a different state you can modify the
+ stack by [-overloading-] {+overriding+} the `get_tokens_unprocessed()` [-method:
+
+ .. sourcecode:: python-] {+method::+}
+
+ from pygments.lexer import RegexLexer
+
+ class [-MyLexer(RegexLexer):-] {+ExampleLexer(RegexLexer):+}
+ tokens = {...}
+
+ def get_tokens_unprocessed(self, [-text):
+ stack = ['root', 'otherstate']-] {+text, stack=('root', 'otherstate')):+}
+ for item in RegexLexer.get_tokens_unprocessed({+self,+} text, stack):
+ yield item
+
+ Some lexers like the `PhpLexer` use this to make the leading ``<?php``
+ preprocessor comments optional. Note that you can crash the lexer easily by
+ putting values into the stack that don't exist in the token map. Also
+ removing ``'root'`` from the stack can result in strange errors!
+
+- [-An-] {+In some lexers, a state should be popped if anything is encountered that isn't
+ matched by a rule in the state. You could use an+} empty regex at the end of [-a-]
+ {+the+} state list, [-combined with ``'#pop'``, can
+ act as-] {+but Pygments provides+} a [-return point-] {+more obvious way of spelling that:
+ ``default('#pop')`` is equivalent to ``('', Text, '#pop')``.
+
+ .. versionadded:: 2.0
+
+
+Subclassing lexers derived+} from {+RegexLexer
+==========================================
+
+.. versionadded:: 1.6
+
+Sometimes multiple languages are very similar, but should still be lexed by
+different lexer classes.
+
+When subclassing+} a {+lexer derived from RegexLexer, the ``tokens`` dictionaries
+defined in the parent and child class are merged. For example::
+
+ from pygments.lexer import RegexLexer, inherit
+ from pygments.token import *
+
+ class BaseLexer(RegexLexer):
+ tokens = {
+ 'root': [
+ ('[a-z]+', Name),
+ (r'/\*', Comment, 'comment'),
+ ('"', String, 'string'),
+ ([-'\s+'-] {+r'\s+'+}, Text),
+ ],
+ 'string': [
+ ('[^"]+', String),
+ ('"', String, '#pop'),
+ ],
+ 'comment': [
+ ...
+ ],
+ }
+
+ class DerivedLexer(BaseLexer):
+ tokens = {
+ 'root': [
+ ('[0-9]+', Number),
+ inherit,
+ ],
+ 'string': [
+ (r'[^"\\]+', String),
+ (r'\\.', String.Escape),
+ ('"', String, '#pop'),
+ ],
+ }
+
+The `BaseLexer` defines two states, lexing names and strings. The
+`DerivedLexer` defines its own tokens dictionary, which extends the definitions
+of the base lexer:
+
+* The "root"+} state {+has an additional rule and then the special object `inherit`,
+ which tells Pygments to insert the token definitions of the parent class at+}
+ that [-doesn't have a clear end marker.-] {+point.
+
+* The "string" state is replaced entirely, since there is [-not-] {+no+} `inherit` rule.
+
+* The "comment" state is inherited entirely.+}
+
+
+Using multiple lexers
+=====================
+
+Using multiple lexers for the same input can be tricky. One of the easiest
+combination techniques is shown here: You can replace the [-token type-] {+action+} entry in a rule
+tuple [-(the second item)-] with a lexer class. The matched text will then be lexed with that lexer,
+and the resulting tokens will be yielded.
+
+For example, look at this stripped-down HTML [-lexer:
+
+.. sourcecode:: python-] {+lexer::+}
+
+ import re
+
+ from pygments.lexer import RegexLexer, bygroups, using
+ from pygments.token import *
+ from [-pygments.lexers.web-] {+pygments.lexers.javascript+} import JavascriptLexer
+
+ class HtmlLexer(RegexLexer):
+ name = 'HTML'
+ aliases = ['html']
+ filenames = ['*.html', '*.htm']
+
+ flags = re.IGNORECASE | re.DOTALL
+ tokens = {
+ 'root': [
+ ('[^<&]+', Text),
+ ('&.*?;', Name.Entity),
+ (r'<\s*script\s*', Name.Tag, ('script-content', 'tag')),
+ (r'<\s*[a-zA-Z0-9:]+', Name.Tag, 'tag'),
+ (r'<\s*/\s*[a-zA-Z0-9:]+\s*>', Name.Tag),
+ ],
+ 'script-content': [
+ (r'(.+?)(<\s*/\s*script\s*>)',
+ bygroups(using(JavascriptLexer), Name.Tag),
+ '#pop'),
+ ]
+ }
+
+Here the content of a ``<script>`` tag is passed to a newly created instance of
+a `JavascriptLexer` and not processed by the `HtmlLexer`. This is done using
+the `using` helper that takes the other lexer class as its parameter.
+
+Note the combination of `bygroups` and `using`. This makes sure that the
+content up to the ``</script>`` end tag is processed by the `JavascriptLexer`,
+while the end tag is yielded as a normal token with the `Name.Tag` type.
+
+[-As an additional goodie, if the lexer class is replaced by `this` (imported from
+`pygments.lexer`), the "other" lexer will be the current one (because you cannot
+refer to the current class within the code that runs at class definition time).-]
+
+Also note the ``(r'<\s*script\s*', Name.Tag, ('script-content', 'tag'))`` rule.
+Here, two states are pushed onto the state stack, ``'script-content'`` and
+``'tag'``. That means that first ``'tag'`` is processed, which will [-parse-] {+lex+}
+attributes and the closing ``>``, then the ``'tag'`` state is popped and the
+next state on top of the stack will be ``'script-content'``.
+
+{+Since you cannot refer to the class currently being defined, use `this`
+(imported from `pygments.lexer`) to refer to the current lexer class, i.e.
+``using(this)``. This construct may seem unnecessary, but it is often the
+most obvious way of lexing arbitrary syntax between fixed delimiters without
+introducing deeply nested states.+}
+
+The `using()` helper has a special keyword argument, `state`, which works as
+follows: if given, the lexer to use initially is not in the ``"root"`` state,
+but in the state given by this argument. This [-*only* works-] {+does not work+} with [-a `RegexLexer`.-] {+advanced
+`RegexLexer` subclasses such as `ExtendedRegexLexer` (see below).+}
+
+Any other [-keywords-] {+keyword+} arguments passed to `using()` are added to the keyword
+arguments used to create the lexer.
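+As a sketch (all names here are invented), a lexer can re-lex embedded
+snippets of its own language with ``using(this)``:

```python
from pygments.lexer import RegexLexer, bygroups, this, using
from pygments.token import Comment, Keyword, Name, Text


class SnippetLexer(RegexLexer):
    """Hypothetical lexer where '>> ' lines are re-lexed with the
    same lexer class via using(this)."""
    name = 'using(this) Example'

    tokens = {
        'root': [
            # The middle group is handed back to this same lexer.
            (r'(>> )(.*?)(\n)', bygroups(Comment, using(this), Text)),
            (r'\b(?:if|else)\b', Keyword),
            (r'\w+', Name),
            (r'\s+', Text),
        ]
    }
```

+Here the text after ``>> `` is lexed again with `SnippetLexer` itself, so
+``if`` inside such a line still comes out as a `Keyword` token.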
+
+
+Delegating Lexer
+================
+
+Another approach for nested lexers is the `DelegatingLexer` which is for example
+used for the template engine lexers. It takes two lexers as arguments on
+initialisation: a `root_lexer` and a `language_lexer`.
+
+The input is processed as follows: First, the whole text is lexed with the
+`language_lexer`. All tokens yielded with [-a-] {+the special+} type of ``Other`` are
+then concatenated and given to the `root_lexer`. The language tokens of the
+`language_lexer` are then inserted into the `root_lexer`'s token stream at the
+appropriate positions.
+
+[-.. sourcecode:: python-] {+::+}
+
+ from pygments.lexer import DelegatingLexer
+ from pygments.lexers.web import HtmlLexer, PhpLexer
+
+ class HtmlPhpLexer(DelegatingLexer):
+ def __init__(self, **options):
+ super(HtmlPhpLexer, self).__init__(HtmlLexer, PhpLexer, **options)
+
+This procedure ensures that e.g. HTML with template tags in it is highlighted
+correctly even if the template tags are put into HTML tags or attributes.
+
+If you want to change the needle token ``Other`` to something else, you can give
+the lexer another token type as the third [-parameter:
+
+.. sourcecode:: python-] {+parameter::+}
+
+ DelegatingLexer.__init__(MyLexer, OtherLexer, Text, **options)
+
+
+Callbacks
+=========
+
+Sometimes the grammar of a language is so complex that a lexer would be unable
+to [-parse-] {+process+} it just by using regular expressions and stacks.
+
+For this, the `RegexLexer` allows callbacks to be given in rule tuples, instead
+of token types (`bygroups` and `using` are nothing else but preimplemented
+callbacks). The callback must be a function taking two arguments:
+
+* the lexer itself
+* the match object for the last matched rule
+
+The callback must then return an iterable of (or simply yield) ``(index,
+tokentype, value)`` tuples, which are then just passed through by
+`get_tokens_unprocessed()`. The ``index`` here is the position of the token in
+the input string, ``tokentype`` is the normal token type (like `Name.Builtin`),
+and ``value`` the associated part of the input string.
+
+You can see an example [-here:
+
+.. sourcecode:: python-] {+here::+}
+
+ from pygments.lexer import RegexLexer
+ from pygments.token import Generic
+
+ class HypotheticLexer(RegexLexer):
+
+ def headline_callback(lexer, match):
+ equal_signs = match.group(1)
+ text = match.group(2)
+ yield match.start(), Generic.Headline, equal_signs + text + equal_signs
+
+ tokens = {
+ 'root': [
+ (r'(=+)(.*?)(\1)', headline_callback)
+ ]
+ }
+
+If the regex for the `headline_callback` matches, the function is called with
+the match object. Note that after the callback is done, processing continues
+normally, that is, after the end of the previous match. The callback has no
+possibility to influence the position.
+
+There are not really any simple examples for lexer callbacks, but you can see
+them in action e.g. in the [-`compiled.py`_ source code-] {+`SMLLexer` class+} in [-the `CLexer` and
+`JavaLexer` classes.-] {+`ml.py`_.+}
+
+.. [-_compiled.py: http://bitbucket.org/birkenfeld/pygments-main/src/tip/pygments/lexers/compiled.py-] {+_ml.py: http://bitbucket.org/birkenfeld/pygments-main/src/tip/pygments/lexers/ml.py+}
+
+
+The ExtendedRegexLexer class
+============================
+
+The `RegexLexer`, even with callbacks, unfortunately isn't powerful enough for
+the funky syntax rules of [-some-] languages [-that will go unnamed,-] such as Ruby.
+
+But fear not; even then you don't have to abandon the regular expression
+[-approach. For-]
+{+approach:+} Pygments has a subclass of `RegexLexer`, the `ExtendedRegexLexer`.
+All features known from RegexLexers are available here too, and the tokens are
+specified in exactly the same way, *except* for one detail:
+
+The `get_tokens_unprocessed()` method holds its internal state data not as local
+variables, but in an instance of the `pygments.lexer.LexerContext` class, and
+that instance is passed to callbacks as a third argument. This means that you
+can modify the lexer state in callbacks.
+
+The `LexerContext` class has the following members:
+
+* `text` -- the input text
+* `pos` -- the current starting position that is used for matching regexes
+* `stack` -- a list containing the state stack
+* `end` -- the maximum position to which regexes are matched, this defaults to
+ the length of `text`
+
+Additionally, the `get_tokens_unprocessed()` method can be given a
+`LexerContext` instead of a string and will then process this context instead of
+creating a new one for the string argument.
+
+Note that because you can set the current position to anything in the callback,
+it won't [-be automatically be-] {+automatically be+} set by the caller after the callback is finished.
+For example, this is how the hypothetical lexer above would be written with the
+[-`ExtendedRegexLexer`:
+
+.. sourcecode:: python-]
+{+`ExtendedRegexLexer`::+}
+
+ from pygments.lexer import ExtendedRegexLexer
+ from pygments.token import Generic
+
+ class ExHypotheticLexer(ExtendedRegexLexer):
+
+ def headline_callback(lexer, match, ctx):
+ equal_signs = match.group(1)
+ text = match.group(2)
+ yield match.start(), Generic.Headline, equal_signs + text + equal_signs
+ ctx.pos = match.end()
+
+ tokens = {
+ 'root': [
+ (r'(=+)(.*?)(\1)', headline_callback)
+ ]
+ }
+
+This might sound confusing (and it [-can really-] {+really can+} be). But it is needed, and for an
+example look at the Ruby lexer in [-`agile.py`_.-] {+`ruby.py`_.+}
+
+.. [-_agile.py: https://bitbucket.org/birkenfeld/pygments-main/src/tip/pygments/lexers/agile.py
+
+
+Filtering-] {+_ruby.py: https://bitbucket.org/birkenfeld/pygments-main/src/tip/pygments/lexers/ruby.py
+
+
+Handling Lists of Keywords
+==========================
+
+For a relatively short list (hundreds of entries) you can construct an
+optimized regular expression directly using ``words()`` (for longer lists, see
+the next section). This function handles a few things for you automatically,
+including escaping metacharacters and working around Python's first-match
+rather than longest-match behavior in alternations. Feel free to put the lists
+themselves in ``pygments/lexers/_$lang_builtins.py`` (see examples there),
+preferably generated by code.
+
+An example of using ``words()`` is something like::
+
+ from pygments.lexer import RegexLexer, words
+ from pygments.token import Name
+
+ class MyLexer(RegexLexer):
+
+ tokens = {
+ 'root': [
+ (words(('else', 'elseif'), suffix=r'\b'), Name.Builtin),
+ (r'\w+', Name),
+ ],
+ }
+
+As you can see, you can add ``prefix`` and ``suffix`` parts to the constructed
+regex.
+
+
+Modifying+} Token Streams
+=======================
+
+Some languages ship a lot of builtin functions (for example PHP). The total
+amount of those functions differs from system to system because not everybody
+has every extension installed. In the case of PHP there are over 3000 builtin
+functions. That's an [-incredible huge amount-] {+incredibly large number+} of functions, much more than you
+[-can-]
+{+want to+} put into a regular expression.
+
+But because only `Name` tokens can be function names [-it's-] {+this is+} solvable by
+overriding the ``get_tokens_unprocessed()`` method. The following lexer
+subclasses the `PythonLexer` so that it highlights some additional names as
+pseudo [-keywords:
+
+.. sourcecode:: python-] {+keywords::+}
+
+ from [-pygments.lexers.agile-] {+pygments.lexers.python+} import PythonLexer
+ from pygments.token import Name, Keyword
+
+ class MyPythonLexer(PythonLexer):
+ EXTRA_KEYWORDS = [-['foo',-] {+set(('foo',+} 'bar', 'foobar', 'barfoo', 'spam', [-'eggs']-] {+'eggs'))+}
+
+ def get_tokens_unprocessed(self, text):
+ for index, token, value in PythonLexer.get_tokens_unprocessed(self, text):
+ if token is Name and value in self.EXTRA_KEYWORDS:
+ yield index, Keyword.Pseudo, value
+ else:
+ yield index, token, value
+
+The `PhpLexer` and `LuaLexer` use this method to resolve builtin functions.
+
+[-.. note:: Do not confuse this with the :doc:`filter <filters>` system.-]
python run.py [testfile ...]
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Pygments string assert utility
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
--- /dev/null
+# -*- coding: utf-8 -*-
--- /dev/null
+# -*- coding: utf-8 -*-
+from pygments.formatters import HtmlFormatter
+
+
+class HtmlFormatterWrapper(HtmlFormatter):
+ name = 'HtmlWrapper'
--- /dev/null
+# -*- coding: utf-8 -*-
+# pygments.lexers.python (as CustomLexer) for test_cmdline.py
+
+from pygments.lexers import PythonLexer
+
+
+class CustomLexer(PythonLexer):
+ name = 'PythonLexerWrapper'
+
+
+class LexerWrapper(CustomLexer):
+ name = 'PythonLexerWrapperWrapper'
Pygments basic API tests
~~~~~~~~~~~~~~~~~~~~~~~~
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
try:
inst = formatter(opt1="val1")
- except (ImportError, FontNotFound):
- raise support.SkipTest
+ except (ImportError, FontNotFound) as e:
+ raise support.SkipTest(e)
try:
inst.get_style_defs()
def verify(formatter):
try:
inst = formatter(encoding=None)
- except (ImportError, FontNotFound):
+ except (ImportError, FontNotFound) as e:
# some dependency or font not installed
- raise support.SkipTest
+ raise support.SkipTest(e)
if formatter.name != 'Raw tokens':
out = format(tokens, inst)
--- /dev/null
+# -*- coding: utf-8 -*-
+"""
+ BibTeX Test
+ ~~~~~~~~~~~
+
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
+ :license: BSD, see LICENSE for details.
+"""
+
+import textwrap
+import unittest
+
+from pygments.lexers import BibTeXLexer, BSTLexer
+from pygments.token import Token
+
+
+class BibTeXTest(unittest.TestCase):
+ def setUp(self):
+ self.lexer = BibTeXLexer()
+
+ def testPreamble(self):
+ data = u'@PREAMBLE{"% some LaTeX code here"}'
+ tokens = [
+ (Token.Name.Class, u'@PREAMBLE'),
+ (Token.Punctuation, u'{'),
+ (Token.String, u'"'),
+ (Token.String, u'% some LaTeX code here'),
+ (Token.String, u'"'),
+ (Token.Punctuation, u'}'),
+ (Token.Text, u'\n'),
+ ]
+ self.assertEqual(list(self.lexer.get_tokens(data)), tokens)
+
+ def testString(self):
+ data = u'@STRING(SCI = "Science")'
+ tokens = [
+ (Token.Name.Class, u'@STRING'),
+ (Token.Punctuation, u'('),
+ (Token.Name.Attribute, u'SCI'),
+ (Token.Text, u' '),
+ (Token.Punctuation, u'='),
+ (Token.Text, u' '),
+ (Token.String, u'"'),
+ (Token.String, u'Science'),
+ (Token.String, u'"'),
+ (Token.Punctuation, u')'),
+ (Token.Text, u'\n'),
+ ]
+ self.assertEqual(list(self.lexer.get_tokens(data)), tokens)
+
+ def testEntry(self):
+ data = u"""
+ This is a comment.
+
+ @ARTICLE{ruckenstein-diffusion,
+ author = "Liu, Hongquin" # and # "Ruckenstein, Eli",
+ year = 1997,
+ month = JAN,
+ pages = "888-895"
+ }
+ """
+
+ tokens = [
+ (Token.Comment, u'This is a comment.'),
+ (Token.Text, u'\n\n'),
+ (Token.Name.Class, u'@ARTICLE'),
+ (Token.Punctuation, u'{'),
+ (Token.Name.Label, u'ruckenstein-diffusion'),
+ (Token.Punctuation, u','),
+ (Token.Text, u'\n '),
+ (Token.Name.Attribute, u'author'),
+ (Token.Text, u' '),
+ (Token.Punctuation, u'='),
+ (Token.Text, u' '),
+ (Token.String, u'"'),
+ (Token.String, u'Liu, Hongquin'),
+ (Token.String, u'"'),
+ (Token.Text, u' '),
+ (Token.Punctuation, u'#'),
+ (Token.Text, u' '),
+ (Token.Name.Variable, u'and'),
+ (Token.Text, u' '),
+ (Token.Punctuation, u'#'),
+ (Token.Text, u' '),
+ (Token.String, u'"'),
+ (Token.String, u'Ruckenstein, Eli'),
+ (Token.String, u'"'),
+ (Token.Punctuation, u','),
+ (Token.Text, u'\n '),
+ (Token.Name.Attribute, u'year'),
+ (Token.Text, u' '),
+ (Token.Punctuation, u'='),
+ (Token.Text, u' '),
+ (Token.Number, u'1997'),
+ (Token.Punctuation, u','),
+ (Token.Text, u'\n '),
+ (Token.Name.Attribute, u'month'),
+ (Token.Text, u' '),
+ (Token.Punctuation, u'='),
+ (Token.Text, u' '),
+ (Token.Name.Variable, u'JAN'),
+ (Token.Punctuation, u','),
+ (Token.Text, u'\n '),
+ (Token.Name.Attribute, u'pages'),
+ (Token.Text, u' '),
+ (Token.Punctuation, u'='),
+ (Token.Text, u' '),
+ (Token.String, u'"'),
+ (Token.String, u'888-895'),
+ (Token.String, u'"'),
+ (Token.Text, u'\n'),
+ (Token.Punctuation, u'}'),
+ (Token.Text, u'\n'),
+ ]
+ self.assertEqual(list(self.lexer.get_tokens(textwrap.dedent(data))), tokens)
+
+ def testComment(self):
+ data = '@COMMENT{test}'
+ tokens = [
+ (Token.Comment, u'@COMMENT'),
+ (Token.Comment, u'{test}'),
+ (Token.Text, u'\n'),
+ ]
+ self.assertEqual(list(self.lexer.get_tokens(data)), tokens)
+
+ def testMissingBody(self):
+ data = '@ARTICLE xxx'
+ tokens = [
+ (Token.Name.Class, u'@ARTICLE'),
+ (Token.Text, u' '),
+ (Token.Error, u'x'),
+ (Token.Error, u'x'),
+ (Token.Error, u'x'),
+ (Token.Text, u'\n'),
+ ]
+ self.assertEqual(list(self.lexer.get_tokens(data)), tokens)
+
+ def testMismatchedBrace(self):
+ data = '@PREAMBLE(""}'
+ tokens = [
+ (Token.Name.Class, u'@PREAMBLE'),
+ (Token.Punctuation, u'('),
+ (Token.String, u'"'),
+ (Token.String, u'"'),
+ (Token.Error, u'}'),
+ (Token.Text, u'\n'),
+ ]
+ self.assertEqual(list(self.lexer.get_tokens(data)), tokens)
+
+
+class BSTTest(unittest.TestCase):
+ def setUp(self):
+ self.lexer = BSTLexer()
+
+ def testBasicBST(self):
+ data = """
+ % BibTeX standard bibliography style `plain'
+
+ INTEGERS { output.state before.all }
+
+ FUNCTION {sort.format.title}
+ { 't :=
+ "A " #2
+ "An " #3
+ "The " #4 t chop.word
+ chop.word
+ chop.word
+ sortify
+ #1 global.max$ substring$
+ }
+
+ ITERATE {call.type$}
+ """
+ tokens = [
+ (Token.Comment.SingleLine, "% BibTeX standard bibliography style `plain'"),
+ (Token.Text, u'\n\n'),
+ (Token.Keyword, u'INTEGERS'),
+ (Token.Text, u' '),
+ (Token.Punctuation, u'{'),
+ (Token.Text, u' '),
+ (Token.Name.Variable, u'output.state'),
+ (Token.Text, u' '),
+ (Token.Name.Variable, u'before.all'),
+ (Token.Text, u' '),
+ (Token.Punctuation, u'}'),
+ (Token.Text, u'\n\n'),
+ (Token.Keyword, u'FUNCTION'),
+ (Token.Text, u' '),
+ (Token.Punctuation, u'{'),
+ (Token.Name.Variable, u'sort.format.title'),
+ (Token.Punctuation, u'}'),
+ (Token.Text, u'\n'),
+ (Token.Punctuation, u'{'),
+ (Token.Text, u' '),
+ (Token.Name.Function, u"'t"),
+ (Token.Text, u' '),
+ (Token.Name.Variable, u':='),
+ (Token.Text, u'\n'),
+ (Token.Literal.String, u'"A "'),
+ (Token.Text, u' '),
+ (Token.Literal.Number, u'#2'),
+ (Token.Text, u'\n '),
+ (Token.Literal.String, u'"An "'),
+ (Token.Text, u' '),
+ (Token.Literal.Number, u'#3'),
+ (Token.Text, u'\n '),
+ (Token.Literal.String, u'"The "'),
+ (Token.Text, u' '),
+ (Token.Literal.Number, u'#4'),
+ (Token.Text, u' '),
+ (Token.Name.Variable, u't'),
+ (Token.Text, u' '),
+ (Token.Name.Variable, u'chop.word'),
+ (Token.Text, u'\n '),
+ (Token.Name.Variable, u'chop.word'),
+ (Token.Text, u'\n'),
+ (Token.Name.Variable, u'chop.word'),
+ (Token.Text, u'\n'),
+ (Token.Name.Variable, u'sortify'),
+ (Token.Text, u'\n'),
+ (Token.Literal.Number, u'#1'),
+ (Token.Text, u' '),
+ (Token.Name.Builtin, u'global.max$'),
+ (Token.Text, u' '),
+ (Token.Name.Builtin, u'substring$'),
+ (Token.Text, u'\n'),
+ (Token.Punctuation, u'}'),
+ (Token.Text, u'\n\n'),
+ (Token.Keyword, u'ITERATE'),
+ (Token.Text, u' '),
+ (Token.Punctuation, u'{'),
+ (Token.Name.Builtin, u'call.type$'),
+ (Token.Punctuation, u'}'),
+ (Token.Text, u'\n'),
+ ]
+ self.assertEqual(list(self.lexer.get_tokens(textwrap.dedent(data))), tokens)
Basic ColdfusionHtmlLexer Test
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Basic CLexer Test
~~~~~~~~~~~~~~~~~
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Command line test
~~~~~~~~~~~~~~~~~
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
finally:
os.unlink(name)
+ def test_load_from_file(self):
+ lexer_file = os.path.join(TESTDIR, 'support', 'python_lexer.py')
+ formatter_file = os.path.join(TESTDIR, 'support', 'html_formatter.py')
+
+ # By default, use CustomLexer
+ o = self.check_success('-l', lexer_file, '-f', 'html',
+ '-x', stdin=TESTCODE)
+ o = re.sub('<[^>]*>', '', o)
+ # rstrip is necessary since HTML inserts a \n after the last </div>
+ self.assertEqual(o.rstrip(), TESTCODE.rstrip())
+
+ # If user specifies a name, use it
+ o = self.check_success('-f', 'html', '-x', '-l',
+ lexer_file + ':LexerWrapper', stdin=TESTCODE)
+ o = re.sub('<[^>]*>', '', o)
+ # rstrip is necessary since HTML inserts a \n after the last </div>
+ self.assertEqual(o.rstrip(), TESTCODE.rstrip())
+
+ # Should also work for formatters
+ o = self.check_success('-lpython', '-f',
+ formatter_file + ':HtmlFormatterWrapper',
+ '-x', stdin=TESTCODE)
+ o = re.sub('<[^>]*>', '', o)
+ # rstrip is necessary since HTML inserts a \n after the last </div>
+ self.assertEqual(o.rstrip(), TESTCODE.rstrip())
+
def test_stream_opt(self):
o = self.check_success('-lpython', '-s', '-fterminal', stdin=TESTCODE)
o = re.sub(r'\x1b\[.*?m', '', o)
e = self.check_failure('-lfooo', TESTFILE)
self.assertTrue('Error: no lexer for alias' in e)
+ # cannot load .py file without load_from_file flag
+ e = self.check_failure('-l', 'nonexistent.py', TESTFILE)
+ self.assertTrue('Error: no lexer for alias' in e)
+
+ # lexer file is missing/unreadable
+ e = self.check_failure('-l', 'nonexistent.py',
+ '-x', TESTFILE)
+ self.assertTrue('Error: cannot read' in e)
+
+ # lexer file is malformed
+ e = self.check_failure('-l', 'support/empty.py',
+ '-x', TESTFILE)
+ self.assertTrue('Error: no valid CustomLexer class found' in e)
+
# formatter not found
e = self.check_failure('-lpython', '-ffoo', TESTFILE)
self.assertTrue('Error: no formatter found for name' in e)
e = self.check_failure('-ofoo.foo', TESTFILE)
self.assertTrue('Error: no formatter found for file name' in e)
+ # cannot load .py file without load_from_file flag
+ e = self.check_failure('-f', 'nonexistent.py', TESTFILE)
+ self.assertTrue('Error: no formatter found for name' in e)
+
+ # formatter file is missing/unreadable
+ e = self.check_failure('-f', 'nonexistent.py',
+ '-x', TESTFILE)
+ self.assertTrue('Error: cannot read' in e)
+
+ # formatter file is malformed
+ e = self.check_failure('-f', 'support/empty.py',
+ '-x', TESTFILE)
+ self.assertTrue('Error: no valid CustomFormatter class found' in e)
+
# output file not writable
e = self.check_failure('-o', os.path.join('nonexistent', 'dir', 'out.html'),
'-lpython', TESTFILE)
--- /dev/null
+# -*- coding: utf-8 -*-
+"""
+ CPP Tests
+ ~~~~~~~~~
+
+ :copyright: Copyright 2006-2016 by the Pygments team, see AUTHORS.
+ :license: BSD, see LICENSE for details.
+"""
+
+import unittest
+
+from pygments.lexers import CppLexer
+from pygments.token import Token
+
+
+class CppTest(unittest.TestCase):
+ def setUp(self):
+ self.lexer = CppLexer()
+
+ def testGoodComment(self):
+ fragment = u'/* foo */\n'
+ tokens = [
+ (Token.Comment.Multiline, u'/* foo */'),
+ (Token.Text, u'\n'),
+ ]
+ self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
+
+ def testOpenComment(self):
+ fragment = u'/* foo\n'
+ tokens = [
+ (Token.Comment.Multiline, u'/* foo\n'),
+ ]
+ self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
--- /dev/null
+# -*- coding: utf-8 -*-
+"""
+ Basic CrystalLexer Test
+ ~~~~~~~~~~~~~~~~~~~~~~~
+
+ :copyright: Copyright 2006-2016 by the Pygments team, see AUTHORS.
+ :license: BSD, see LICENSE for details.
+"""
+
+from __future__ import unicode_literals
+import unittest
+
+from pygments.token import Text, Comment, Operator, Keyword, Name, String, \
+ Number, Punctuation, Error
+from pygments.lexers import CrystalLexer
+
+
+class CrystalTest(unittest.TestCase):
+
+ def setUp(self):
+ self.lexer = CrystalLexer()
+ self.maxDiff = None
+
+ def testRangeSyntax1(self):
+ fragment = '1...3\n'
+ tokens = [
+ (Number.Integer, '1'),
+ (Operator, '...'),
+ (Number.Integer, '3'),
+ (Text, '\n'),
+ ]
+ self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
+
+ def testRangeSyntax2(self):
+ fragment = '1 .. 3\n'
+ tokens = [
+ (Number.Integer, '1'),
+ (Text, ' '),
+ (Operator, '..'),
+ (Text, ' '),
+ (Number.Integer, '3'),
+ (Text, '\n'),
+ ]
+ self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
+
+ def testInterpolationNestedCurly(self):
+ fragment = (
+ '"A#{ (3..5).group_by { |x| x/2}.map '
+ 'do |k,v| "#{k}" end.join }" + "Z"\n')
+ tokens = [
+ (String.Double, '"'),
+ (String.Double, 'A'),
+ (String.Interpol, '#{'),
+ (Text, ' '),
+ (Punctuation, '('),
+ (Number.Integer, '3'),
+ (Operator, '..'),
+ (Number.Integer, '5'),
+ (Punctuation, ')'),
+ (Operator, '.'),
+ (Name, 'group_by'),
+ (Text, ' '),
+ (String.Interpol, '{'),
+ (Text, ' '),
+ (Operator, '|'),
+ (Name, 'x'),
+ (Operator, '|'),
+ (Text, ' '),
+ (Name, 'x'),
+ (Operator, '/'),
+ (Number.Integer, '2'),
+ (String.Interpol, '}'),
+ (Operator, '.'),
+ (Name, 'map'),
+ (Text, ' '),
+ (Keyword, 'do'),
+ (Text, ' '),
+ (Operator, '|'),
+ (Name, 'k'),
+ (Punctuation, ','),
+ (Name, 'v'),
+ (Operator, '|'),
+ (Text, ' '),
+ (String.Double, '"'),
+ (String.Interpol, '#{'),
+ (Name, 'k'),
+ (String.Interpol, '}'),
+ (String.Double, '"'),
+ (Text, ' '),
+ (Keyword, 'end'),
+ (Operator, '.'),
+ (Name, 'join'),
+ (Text, ' '),
+ (String.Interpol, '}'),
+ (String.Double, '"'),
+ (Text, ' '),
+ (Operator, '+'),
+ (Text, ' '),
+ (String.Double, '"'),
+ (String.Double, 'Z'),
+ (String.Double, '"'),
+ (Text, '\n'),
+ ]
+ self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
+
+ def testOperatorMethods(self):
+ fragment = '([] of Int32).[]?(5)\n'
+ tokens = [
+ (Punctuation, '('),
+ (Operator, '['),
+ (Operator, ']'),
+ (Text, ' '),
+ (Keyword, 'of'),
+ (Text, ' '),
+ (Name.Builtin, 'Int32'),
+ (Punctuation, ')'),
+ (Operator, '.'),
+ (Name.Operator, '[]?'),
+ (Punctuation, '('),
+ (Number.Integer, '5'),
+ (Punctuation, ')'),
+ (Text, '\n')
+ ]
+ self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
+
+ def testArrayAccess(self):
+ fragment = '[5][5]?\n'
+ tokens = [
+ (Operator, '['),
+ (Number.Integer, '5'),
+ (Operator, ']'),
+ (Operator, '['),
+ (Number.Integer, '5'),
+ (Operator, ']?'),
+ (Text, '\n')
+ ]
+ self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
+
+ def testNumbers(self):
+ for kind, testset in [
+ (Number.Integer, '0 1 1_000_000 1u8 11231231231121312i64'),
+ (Number.Float, '0.0 1.0_f32 1_f32 0f64 1e+4 1e111 1_234.567_890'),
+ (Number.Bin, '0b1001_0110 0b0u8'),
+ (Number.Oct, '0o17 0o7_i32'),
+ (Number.Hex, '0xdeadBEEF'),
+ ]:
+ for fragment in testset.split():
+ self.assertEqual([(kind, fragment), (Text, '\n')],
+ list(self.lexer.get_tokens(fragment + '\n')))
+
+ for fragment in '01 0b2 0x129g2 0o12358'.split():
+ self.assertEqual(next(self.lexer.get_tokens(fragment + '\n'))[0],
+ Error)
+
+ def testChars(self):
+ for fragment in ["'a'", "'я'", "'\\u{1234}'", "'\n'"]:
+ self.assertEqual([(String.Char, fragment), (Text, '\n')],
+ list(self.lexer.get_tokens(fragment + '\n')))
+ self.assertEqual(next(self.lexer.get_tokens("'abc'"))[0], Error)
+
+ def testMacro(self):
+ fragment = (
+ 'def<=>(other : self) : Int\n'
+ '{%for field in %w(first_name middle_name last_name)%}\n'
+ 'cmp={{field.id}}<=>other.{{field.id}}\n'
+ 'return cmp if cmp!=0\n'
+ '{%end%}\n'
+ '0\n'
+ 'end\n')
+ tokens = [
+ (Keyword, 'def'),
+ (Name.Function, '<=>'),
+ (Punctuation, '('),
+ (Name, 'other'),
+ (Text, ' '),
+ (Punctuation, ':'),
+ (Text, ' '),
+ (Keyword.Pseudo, 'self'),
+ (Punctuation, ')'),
+ (Text, ' '),
+ (Punctuation, ':'),
+ (Text, ' '),
+ (Name.Builtin, 'Int'),
+ (Text, '\n'),
+ (String.Interpol, '{%'),
+ (Keyword, 'for'),
+ (Text, ' '),
+ (Name, 'field'),
+ (Text, ' '),
+ (Keyword, 'in'),
+ (Text, ' '),
+ (String.Other, '%w('),
+ (String.Other, 'first_name middle_name last_name'),
+ (String.Other, ')'),
+ (String.Interpol, '%}'),
+ (Text, '\n'),
+ (Name, 'cmp'),
+ (Operator, '='),
+ (String.Interpol, '{{'),
+ (Name, 'field'),
+ (Operator, '.'),
+ (Name, 'id'),
+ (String.Interpol, '}}'),
+ (Operator, '<=>'),
+ (Name, 'other'),
+ (Operator, '.'),
+ (String.Interpol, '{{'),
+ (Name, 'field'),
+ (Operator, '.'),
+ (Name, 'id'),
+ (String.Interpol, '}}'),
+ (Text, '\n'),
+ (Keyword, 'return'),
+ (Text, ' '),
+ (Name, 'cmp'),
+ (Text, ' '),
+ (Keyword, 'if'),
+ (Text, ' '),
+ (Name, 'cmp'),
+ (Operator, '!='),
+ (Number.Integer, '0'),
+ (Text, '\n'),
+ (String.Interpol, '{%'),
+ (Keyword, 'end'),
+ (String.Interpol, '%}'),
+ (Text, '\n'),
+ (Number.Integer, '0'),
+ (Text, '\n'),
+ (Keyword, 'end'),
+ (Text, '\n')
+ ]
+ self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
+
+ def testLib(self):
+ fragment = (
+ '@[Link("some")]\nlib LibSome\n'
+ '@[CallConvention("X86_StdCall")]\nfun foo="some.foo"(thing : Void*) : LibC::Int\n'
+ 'end\n')
+ tokens = [
+ (Operator, '@['),
+ (Name.Decorator, 'Link'),
+ (Punctuation, '('),
+ (String.Double, '"'),
+ (String.Double, 'some'),
+ (String.Double, '"'),
+ (Punctuation, ')'),
+ (Operator, ']'),
+ (Text, '\n'),
+ (Keyword, 'lib'),
+ (Text, ' '),
+ (Name.Namespace, 'LibSome'),
+ (Text, '\n'),
+ (Operator, '@['),
+ (Name.Decorator, 'CallConvention'),
+ (Punctuation, '('),
+ (String.Double, '"'),
+ (String.Double, 'X86_StdCall'),
+ (String.Double, '"'),
+ (Punctuation, ')'),
+ (Operator, ']'),
+ (Text, '\n'),
+ (Keyword, 'fun'),
+ (Text, ' '),
+ (Name.Function, 'foo'),
+ (Operator, '='),
+ (String.Double, '"'),
+ (String.Double, 'some.foo'),
+ (String.Double, '"'),
+ (Punctuation, '('),
+ (Name, 'thing'),
+ (Text, ' '),
+ (Punctuation, ':'),
+ (Text, ' '),
+ (Name.Builtin, 'Void'),
+ (Operator, '*'),
+ (Punctuation, ')'),
+ (Text, ' '),
+ (Punctuation, ':'),
+ (Text, ' '),
+ (Name, 'LibC'),
+ (Operator, '::'),
+ (Name.Builtin, 'Int'),
+ (Text, '\n'),
+ (Keyword, 'end'),
+ (Text, '\n')
+ ]
+ self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
+
+ def testEscapedBracestring(self):
+ fragment = 'str.gsub(%r{\\\\\\\\}, "/")\n'
+ tokens = [
+ (Name, 'str'),
+ (Operator, '.'),
+ (Name, 'gsub'),
+ (Punctuation, '('),
+ (String.Regex, '%r{'),
+ (String.Regex, '\\\\'),
+ (String.Regex, '\\\\'),
+ (String.Regex, '}'),
+ (Punctuation, ','),
+ (Text, ' '),
+ (String.Double, '"'),
+ (String.Double, '/'),
+ (String.Double, '"'),
+ (Punctuation, ')'),
+ (Text, '\n'),
+ ]
+ self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
--- /dev/null
+# -*- coding: utf-8 -*-
+"""
+ Data Tests
+ ~~~~~~~~~~
+
+ :copyright: Copyright 2006-2016 by the Pygments team, see AUTHORS.
+ :license: BSD, see LICENSE for details.
+"""
+
+import unittest
+
+from pygments.lexers import JsonLexer, JsonBareObjectLexer
+from pygments.token import Token
+
+
+class JsonTest(unittest.TestCase):
+ def setUp(self):
+ self.lexer = JsonLexer()
+
+ def testBasic(self):
+ fragment = u'{"foo": "bar", "foo2": [1, 2, 3]}\n'
+ tokens = [
+ (Token.Punctuation, u'{'),
+ (Token.Name.Tag, u'"foo"'),
+ (Token.Punctuation, u':'),
+ (Token.Text, u' '),
+ (Token.Literal.String.Double, u'"bar"'),
+ (Token.Punctuation, u','),
+ (Token.Text, u' '),
+ (Token.Name.Tag, u'"foo2"'),
+ (Token.Punctuation, u':'),
+ (Token.Text, u' '),
+ (Token.Punctuation, u'['),
+ (Token.Literal.Number.Integer, u'1'),
+ (Token.Punctuation, u','),
+ (Token.Text, u' '),
+ (Token.Literal.Number.Integer, u'2'),
+ (Token.Punctuation, u','),
+ (Token.Text, u' '),
+ (Token.Literal.Number.Integer, u'3'),
+ (Token.Punctuation, u']'),
+ (Token.Punctuation, u'}'),
+ (Token.Text, u'\n'),
+ ]
+ self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
+
+class JsonBareObjectTest(unittest.TestCase):
+ def setUp(self):
+ self.lexer = JsonBareObjectLexer()
+
+ def testBasic(self):
+ # This is the same as testBasic for JsonLexer above, except the
+ # enclosing curly braces are removed.
+ fragment = u'"foo": "bar", "foo2": [1, 2, 3]\n'
+ tokens = [
+ (Token.Name.Tag, u'"foo"'),
+ (Token.Punctuation, u':'),
+ (Token.Text, u' '),
+ (Token.Literal.String.Double, u'"bar"'),
+ (Token.Punctuation, u','),
+ (Token.Text, u' '),
+ (Token.Name.Tag, u'"foo2"'),
+ (Token.Punctuation, u':'),
+ (Token.Text, u' '),
+ (Token.Punctuation, u'['),
+ (Token.Literal.Number.Integer, u'1'),
+ (Token.Punctuation, u','),
+ (Token.Text, u' '),
+ (Token.Literal.Number.Integer, u'2'),
+ (Token.Punctuation, u','),
+ (Token.Text, u' '),
+ (Token.Literal.Number.Integer, u'3'),
+ (Token.Punctuation, u']'),
+ (Token.Text, u'\n'),
+ ]
+ self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
+
+ def testClosingCurly(self):
+ # This can be an Error token, but should not be a can't-pop-from-stack
+ # exception.
+ fragment = '}"a"\n'
+ tokens = [
+ (Token.Error, '}'),
+ (Token.Name.Tag, '"a"'),
+ (Token.Text, '\n'),
+ ]
+ self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
+
+ def testClosingCurlyInValue(self):
+ fragment = '"": ""}\n'
+ tokens = [
+ (Token.Name.Tag, '""'),
+ (Token.Punctuation, ':'),
+ (Token.Text, ' '),
+ (Token.Literal.String.Double, '""'),
+ (Token.Error, '}'),
+ (Token.Text, '\n'),
+ ]
+ self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
+
Pygments tests with example files
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
if not os.path.isfile(absfn):
continue
+ extension = os.getenv('TEST_EXT')
+ if extension and not absfn.endswith(extension):
+ continue
+
print(absfn)
with open(absfn, 'rb') as f:
code = f.read()
def check_lexer(lx, fn):
if os.name == 'java' and fn in BAD_FILES_FOR_JYTHON:
- raise support.SkipTest
+ raise support.SkipTest('%s is a known bad file on Jython' % fn)
absfn = os.path.join(TESTDIR, 'examplefiles', fn)
with open(absfn, 'rb') as fp:
text = fp.read()
முடி\n"""
tokens = [
(Token.Comment.Single,
- u'# (C) \u0bae\u0bc1\u0ba4\u0bcd\u0ba4\u0bc8\u0baf\u0bbe \u0b85\u0ba3\u0bcd\u0ba3\u0bbe\u0bae\u0bb2\u0bc8 2013, 2015\n'),
+ u'# (C) \u0bae\u0bc1\u0ba4\u0bcd\u0ba4\u0bc8\u0baf\u0bbe \u0b85'
+ u'\u0ba3\u0bcd\u0ba3\u0bbe\u0bae\u0bb2\u0bc8 2013, 2015\n'),
(Token.Keyword,u'நிரல்பாகம்'),
(Token.Text, u' '),
(Token.Name, u'gcd'),
Pygments HTML formatter tests
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Tests for inheritance in RegexLexer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Pygments IRC formatter tests
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Basic JavaLexer Test
~~~~~~~~~~~~~~~~~~~~
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
--- /dev/null
+# -*- coding: utf-8 -*-
+"""
+ Javascript tests
+ ~~~~~~~~~~~~~~~~
+
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
+ :license: BSD, see LICENSE for details.
+"""
+
+import unittest
+
+from pygments.lexers import CoffeeScriptLexer
+from pygments.token import Token
+
+COFFEE_SLASH_GOLDEN = [
+ # input_str, slashes_are_regex_here
+ (r'/\\/', True),
+ (r'/\\/i', True),
+ (r'/\//', True),
+ (r'/(\s)/', True),
+ ('/a{2,8}/', True),
+ ('/b*c?d+/', True),
+ ('/(capture-match)/', True),
+ ('/(?:do-not-capture-match)/', True),
+ ('/this|or|that/', True),
+ ('/[char-set]/', True),
+ ('/[^neg-char_st]/', True),
+ ('/^.*$/', True),
+ (r'/\n(\f)\0\1\d\b\cm\u1234/', True),
+ (r'/^.?([^/\\\n\w]*)a\1+$/.something(or_other) # something more complex', True),
+ ("foo = (str) ->\n /'|\"/.test str", True),
+ ('a = a / b / c', False),
+ ('a = a/b/c', False),
+ ('a = a/b/ c', False),
+ ('a = a /b/c', False),
+ ('a = 1 + /d/.test(a)', True),
+]
+
+def test_coffee_slashes():
+ for input_str, slashes_are_regex_here in COFFEE_SLASH_GOLDEN:
+ yield coffee_runner, input_str, slashes_are_regex_here
+
+def coffee_runner(input_str, slashes_are_regex_here):
+ lex = CoffeeScriptLexer()
+ output = list(lex.get_tokens(input_str))
+ print(output)
+ for t, s in output:
+ if '/' in s:
+ is_regex = t is Token.String.Regex
+ assert is_regex == slashes_are_regex_here, (t, s)
+
+class CoffeeTest(unittest.TestCase):
+ def setUp(self):
+ self.lexer = CoffeeScriptLexer()
+
+ def testMixedSlashes(self):
+ fragment = u'a?/foo/:1/2;\n'
+ tokens = [
+ (Token.Name.Other, u'a'),
+ (Token.Operator, u'?'),
+ (Token.Literal.String.Regex, u'/foo/'),
+ (Token.Operator, u':'),
+ (Token.Literal.Number.Integer, u'1'),
+ (Token.Operator, u'/'),
+ (Token.Literal.Number.Integer, u'2'),
+ (Token.Punctuation, u';'),
+ (Token.Text, u'\n'),
+ ]
+ self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
+
+ def testBewareInfiniteLoop(self):
+ # This demonstrates the case that the "This isn't really guarding"
+ # comment refers to.
+ fragment = '/a/x;\n'
+ tokens = [
+ (Token.Text, ''),
+ (Token.Operator, '/'),
+ (Token.Name.Other, 'a'),
+ (Token.Operator, '/'),
+ (Token.Name.Other, 'x'),
+ (Token.Punctuation, ';'),
+ (Token.Text, '\n'),
+ ]
+ self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
--- /dev/null
+# -*- coding: utf-8 -*-
+"""
+ Julia Tests
+ ~~~~~~~~~~~
+
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
+ :license: BSD, see LICENSE for details.
+"""
+
+import unittest
+
+from pygments.lexers import JuliaLexer
+from pygments.token import Token
+
+
+class JuliaTests(unittest.TestCase):
+ def setUp(self):
+ self.lexer = JuliaLexer()
+
+ def test_unicode(self):
+ """
+ Test that the unicode character √ in an expression is recognized
+ """
+ fragment = u's = \u221a((1/n) * sum(count .^ 2) - mu .^2)\n'
+ tokens = [
+ (Token.Name, u's'),
+ (Token.Text, u' '),
+ (Token.Operator, u'='),
+ (Token.Text, u' '),
+ (Token.Operator, u'\u221a'),
+ (Token.Punctuation, u'('),
+ (Token.Punctuation, u'('),
+ (Token.Literal.Number.Integer, u'1'),
+ (Token.Operator, u'/'),
+ (Token.Name, u'n'),
+ (Token.Punctuation, u')'),
+ (Token.Text, u' '),
+ (Token.Operator, u'*'),
+ (Token.Text, u' '),
+ (Token.Name, u'sum'),
+ (Token.Punctuation, u'('),
+ (Token.Name, u'count'),
+ (Token.Text, u' '),
+ (Token.Operator, u'.^'),
+ (Token.Text, u' '),
+ (Token.Literal.Number.Integer, u'2'),
+ (Token.Punctuation, u')'),
+ (Token.Text, u' '),
+ (Token.Operator, u'-'),
+ (Token.Text, u' '),
+ (Token.Name, u'mu'),
+ (Token.Text, u' '),
+ (Token.Operator, u'.^'),
+ (Token.Literal.Number.Integer, u'2'),
+ (Token.Punctuation, u')'),
+ (Token.Text, u'\n'),
+ ]
+ self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
Pygments LaTeX formatter tests
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
ret = po.wait()
output = po.stdout.read()
po.stdout.close()
- except OSError:
+ except OSError as e:
# latex not available
- raise support.SkipTest
+ raise support.SkipTest(e)
else:
if ret:
print(output)
Tests for other lexers
~~~~~~~~~~~~~~~~~~~~~~
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
import glob
from pygments.lexers import guess_lexer
from pygments.lexers.scripting import EasytrieveLexer, JclLexer, RexxLexer
+
def _exampleFilePath(filename):
return os.path.join(os.path.dirname(__file__), 'examplefiles', filename)
text = fp.read().decode('utf-8')
probability = lexer.analyse_text(text)
self.assertTrue(probability > 0,
- '%s must recognize %r' % (
- lexer.name, exampleFilePath))
+ '%s must recognize %r' % (
+ lexer.name, exampleFilePath))
guessedLexer = guess_lexer(text)
self.assertEqual(guessedLexer.name, lexer.name)
class EasyTrieveLexerTest(unittest.TestCase):
def testCanGuessFromText(self):
- self.assertLess(0, EasytrieveLexer.analyse_text('MACRO'))
- self.assertLess(0, EasytrieveLexer.analyse_text('\nMACRO'))
- self.assertLess(0, EasytrieveLexer.analyse_text(' \nMACRO'))
- self.assertLess(0, EasytrieveLexer.analyse_text(' \n MACRO'))
- self.assertLess(0, EasytrieveLexer.analyse_text('*\nMACRO'))
- self.assertLess(0, EasytrieveLexer.analyse_text(
+ self.assertTrue(EasytrieveLexer.analyse_text('MACRO'))
+ self.assertTrue(EasytrieveLexer.analyse_text('\nMACRO'))
+ self.assertTrue(EasytrieveLexer.analyse_text(' \nMACRO'))
+ self.assertTrue(EasytrieveLexer.analyse_text(' \n MACRO'))
+ self.assertTrue(EasytrieveLexer.analyse_text('*\nMACRO'))
+ self.assertTrue(EasytrieveLexer.analyse_text(
'*\n *\n\n \n*\n MACRO'))
class RexxLexerTest(unittest.TestCase):
def testCanGuessFromText(self):
- self.assertAlmostEqual(0.01,
- RexxLexer.analyse_text('/* */'))
+ self.assertAlmostEqual(0.01, RexxLexer.analyse_text('/* */'))
self.assertAlmostEqual(1.0,
- RexxLexer.analyse_text('''/* Rexx */
+ RexxLexer.analyse_text('''/* Rexx */
say "hello world"'''))
val = RexxLexer.analyse_text('/* */\n'
- 'hello:pRoceduRe\n'
- ' say "hello world"')
+ 'hello:pRoceduRe\n'
+ ' say "hello world"')
self.assertTrue(val > 0.5, val)
val = RexxLexer.analyse_text('''/* */
if 1 > 0 then do
--- /dev/null
+# -*- coding: utf-8 -*-
+"""
+ Tests for the vim modeline feature
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
+ :license: BSD, see LICENSE for details.
+"""
+
+from __future__ import print_function
+
+from pygments import modeline
+
+
+def test_lexer_classes():
+ def verify(buf):
+ assert modeline.get_filetype_from_buffer(buf) == 'python'
+
+ for buf in [
+ 'vi: ft=python' + '\n' * 8,
+ 'vi: ft=python' + '\n' * 8,
+ '\n\n\n\nvi=8: syntax=python' + '\n' * 8,
+ '\n' * 8 + 'ex: filetype=python',
+ '\n' * 8 + 'vim: some,other,syn=python\n\n\n\n'
+ ]:
+ yield verify, buf
Basic CLexer Test
~~~~~~~~~~~~~~~~~
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Pygments regex lexer tests
~~~~~~~~~~~~~~~~~~~~~~~~~~
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
import time
import unittest
-from pygments.token import String
+from pygments.token import Keyword, Name, String, Text
from pygments.lexers.perl import PerlLexer
def test_substitution_with_parenthesis(self):
self.assert_single_token(r's(aaa)', String.Regex)
self.assert_fast_tokenization('s(' + '\\'*999)
+
+ ### Namespaces/modules
+
+ def test_package_statement(self):
+ self.assert_tokens(['package', ' ', 'Foo'], [Keyword, Text, Name.Namespace])
+ self.assert_tokens(['package', ' ', 'Foo::Bar'], [Keyword, Text, Name.Namespace])
+
+ def test_use_statement(self):
+ self.assert_tokens(['use', ' ', 'Foo'], [Keyword, Text, Name.Namespace])
+ self.assert_tokens(['use', ' ', 'Foo::Bar'], [Keyword, Text, Name.Namespace])
+
+ def test_no_statement(self):
+ self.assert_tokens(['no', ' ', 'Foo'], [Keyword, Text, Name.Namespace])
+ self.assert_tokens(['no', ' ', 'Foo::Bar'], [Keyword, Text, Name.Namespace])
+
+ def test_require_statement(self):
+ self.assert_tokens(['require', ' ', 'Foo'], [Keyword, Text, Name.Namespace])
+ self.assert_tokens(['require', ' ', 'Foo::Bar'], [Keyword, Text, Name.Namespace])
+ self.assert_tokens(['require', ' ', '"Foo/Bar.pm"'], [Keyword, Text, String])
+
--- /dev/null
+# -*- coding: utf-8 -*-
+"""
+ PHP Tests
+ ~~~~~~~~~
+
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
+ :license: BSD, see LICENSE for details.
+"""
+
+import unittest
+
+from pygments.lexers import PhpLexer
+from pygments.token import Token
+
+
+class PhpTest(unittest.TestCase):
+ def setUp(self):
+ self.lexer = PhpLexer()
+
+ def testStringEscapingRun(self):
+ fragment = '<?php $x="{\\""; ?>\n'
+ tokens = [
+ (Token.Comment.Preproc, '<?php'),
+ (Token.Text, ' '),
+ (Token.Name.Variable, '$x'),
+ (Token.Operator, '='),
+ (Token.Literal.String.Double, '"'),
+ (Token.Literal.String.Double, '{'),
+ (Token.Literal.String.Escape, '\\"'),
+ (Token.Literal.String.Double, '"'),
+ (Token.Punctuation, ';'),
+ (Token.Text, ' '),
+ (Token.Comment.Preproc, '?>'),
+ (Token.Other, '\n'),
+ ]
+ self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
--- /dev/null
+# -*- coding: utf-8 -*-
+"""
+ Praat lexer tests
+ ~~~~~~~~~~~~~~~~~
+
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
+ :license: BSD, see LICENSE for details.
+"""
+
+import unittest
+
+from pygments.token import Token
+from pygments.lexers import PraatLexer
+
+class PraatTest(unittest.TestCase):
+
+ def setUp(self):
+ self.lexer = PraatLexer()
+ self.maxDiff = None
+
+ def testNumericAssignment(self):
+ fragment = u'var = -15e4\n'
+ tokens = [
+ (Token.Text, u'var'),
+ (Token.Text, u' '),
+ (Token.Operator, u'='),
+ (Token.Text, u' '),
+ (Token.Operator, u'-'),
+ (Token.Literal.Number, u'15e4'),
+ (Token.Text, u'\n'),
+ ]
+ self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
+
+ def testStringAssignment(self):
+ fragment = u'var$ = "foo"\n'
+ tokens = [
+ (Token.Text, u'var$'),
+ (Token.Text, u' '),
+ (Token.Operator, u'='),
+ (Token.Text, u' '),
+ (Token.Literal.String, u'"'),
+ (Token.Literal.String, u'foo'),
+ (Token.Literal.String, u'"'),
+ (Token.Text, u'\n'),
+ ]
+ self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
+
+ def testStringEscapedQuotes(self):
+ fragment = u'"it said ""foo"""\n'
+ tokens = [
+ (Token.Literal.String, u'"'),
+ (Token.Literal.String, u'it said '),
+ (Token.Literal.String, u'"'),
+ (Token.Literal.String, u'"'),
+ (Token.Literal.String, u'foo'),
+ (Token.Literal.String, u'"'),
+ (Token.Literal.String, u'"'),
+ (Token.Literal.String, u'"'),
+ (Token.Text, u'\n'),
+ ]
+ self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
+
+ def testFunctionCall(self):
+ fragment = u'selected("Sound", i+(a*b))\n'
+ tokens = [
+ (Token.Name.Function, u'selected'),
+ (Token.Punctuation, u'('),
+ (Token.Literal.String, u'"'),
+ (Token.Literal.String, u'Sound'),
+ (Token.Literal.String, u'"'),
+ (Token.Punctuation, u','),
+ (Token.Text, u' '),
+ (Token.Text, u'i'),
+ (Token.Operator, u'+'),
+ (Token.Text, u'('),
+ (Token.Text, u'a'),
+ (Token.Operator, u'*'),
+ (Token.Text, u'b'),
+ (Token.Text, u')'),
+ (Token.Punctuation, u')'),
+ (Token.Text, u'\n'),
+ ]
+ self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
+
+ def testBrokenUnquotedString(self):
+ fragment = u'printline string\n... \'interpolated\' string\n'
+ tokens = [
+ (Token.Keyword, u'printline'),
+ (Token.Text, u' '),
+ (Token.Literal.String, u'string'),
+ (Token.Text, u'\n'),
+ (Token.Punctuation, u'...'),
+ (Token.Text, u' '),
+ (Token.Literal.String.Interpol, u"'"),
+ (Token.Literal.String.Interpol, u'interpolated'),
+ (Token.Literal.String.Interpol, u"'"),
+ (Token.Text, u' '),
+ (Token.Literal.String, u'string'),
+ (Token.Text, u'\n'),
+ ]
+ self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
+
+ def testInlineIf(self):
+ fragment = u'var = if true == 1 then -1 else 0 fi'
+ tokens = [
+ (Token.Text, u'var'),
+ (Token.Text, u' '),
+ (Token.Operator, u'='),
+ (Token.Text, u' '),
+ (Token.Keyword, u'if'),
+ (Token.Text, u' '),
+ (Token.Text, u'true'),
+ (Token.Text, u' '),
+ (Token.Operator, u'=='),
+ (Token.Text, u' '),
+ (Token.Literal.Number, u'1'),
+ (Token.Text, u' '),
+ (Token.Keyword, u'then'),
+ (Token.Text, u' '),
+ (Token.Operator, u'-'),
+ (Token.Literal.Number, u'1'),
+ (Token.Text, u' '),
+ (Token.Keyword, u'else'),
+ (Token.Text, u' '),
+ (Token.Literal.Number, u'0'),
+ (Token.Text, u' '),
+ (Token.Keyword, u'fi'),
+ (Token.Text, u'\n'),
+ ]
+ self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
--- /dev/null
+# -*- coding: utf-8 -*-
+"""
+ Properties Tests
+ ~~~~~~~~~~~~~~~~
+
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
+ :license: BSD, see LICENSE for details.
+"""
+
+import unittest
+
+from pygments.lexers.configs import PropertiesLexer
+from pygments.token import Token
+
+
+class PropertiesTest(unittest.TestCase):
+ def setUp(self):
+ self.lexer = PropertiesLexer()
+
+ def test_comments(self):
+ """
+ Assert that lines led by either # or ! are recognized as comments
+ """
+ fragment = '! a comment\n# also a comment\n'
+ tokens = [
+ (Token.Comment, '! a comment'),
+ (Token.Text, '\n'),
+ (Token.Comment, '# also a comment'),
+ (Token.Text, '\n'),
+ ]
+ self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
+
+ def test_leading_whitespace_comments(self):
+ fragment = ' # comment\n'
+ tokens = [
+ (Token.Text, ' '),
+ (Token.Comment, '# comment'),
+ (Token.Text, '\n'),
+ ]
+ self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
+
+ def test_escaped_space_in_key(self):
+ fragment = 'key = value\n'
+ tokens = [
+ (Token.Name.Attribute, 'key'),
+ (Token.Text, ' '),
+ (Token.Operator, '='),
+ (Token.Text, ' '),
+ (Token.Literal.String, 'value'),
+ (Token.Text, '\n'),
+ ]
+ self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
+
+ def test_escaped_space_in_value(self):
+ fragment = 'key = doubleword\\ value\n'
+ tokens = [
+ (Token.Name.Attribute, 'key'),
+ (Token.Text, ' '),
+ (Token.Operator, '='),
+ (Token.Text, ' '),
+ (Token.Literal.String, 'doubleword\\ value'),
+ (Token.Text, '\n'),
+ ]
+ self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
+
+ def test_space_delimited_kv_pair(self):
+ fragment = 'key value\n'
+ tokens = [
+ (Token.Name.Attribute, 'key'),
+ (Token.Text, ' '),
+ (Token.Literal.String, 'value\n'),
+ ]
+ self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
+
+ def test_just_key(self):
+ fragment = 'justkey\n'
+ tokens = [
+ (Token.Name.Attribute, 'justkey'),
+ (Token.Text, '\n'),
+ ]
+ self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
+
+ def test_just_key_with_space(self):
+ fragment = 'just\\ key\n'
+ tokens = [
+ (Token.Name.Attribute, 'just\\ key'),
+ (Token.Text, '\n'),
+ ]
+ self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
--- /dev/null
+# -*- coding: utf-8 -*-
+"""
+ Python Tests
+ ~~~~~~~~~~~~
+
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
+ :license: BSD, see LICENSE for details.
+"""
+
+import unittest
+
+from pygments.lexers import PythonLexer, Python3Lexer
+from pygments.token import Token
+
+
+class PythonTest(unittest.TestCase):
+ def setUp(self):
+ self.lexer = PythonLexer()
+
+ def test_cls_builtin(self):
+ """
+ Tests that a cls token gets interpreted as a Token.Name.Builtin.Pseudo
+ """
+ fragment = 'class TestClass():\n @classmethod\n def hello(cls):\n pass\n'
+ tokens = [
+ (Token.Keyword, 'class'),
+ (Token.Text, ' '),
+ (Token.Name.Class, 'TestClass'),
+ (Token.Punctuation, '('),
+ (Token.Punctuation, ')'),
+ (Token.Punctuation, ':'),
+ (Token.Text, '\n'),
+ (Token.Text, ' '),
+ (Token.Name.Decorator, '@classmethod'),
+ (Token.Text, '\n'),
+ (Token.Text, ' '),
+ (Token.Keyword, 'def'),
+ (Token.Text, ' '),
+ (Token.Name.Function, 'hello'),
+ (Token.Punctuation, '('),
+ (Token.Name.Builtin.Pseudo, 'cls'),
+ (Token.Punctuation, ')'),
+ (Token.Punctuation, ':'),
+ (Token.Text, '\n'),
+ (Token.Text, ' '),
+ (Token.Keyword, 'pass'),
+ (Token.Text, '\n'),
+ ]
+ self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
+
+
+class Python3Test(unittest.TestCase):
+ def setUp(self):
+ self.lexer = Python3Lexer()
+
+ def testNeedsName(self):
+ """
+ Tests that '@' is recognized as an Operator
+ """
+ fragment = u'S = (H @ beta - r).T @ inv(H @ V @ H.T) @ (H @ beta - r)\n'
+ tokens = [
+ (Token.Name, u'S'),
+ (Token.Text, u' '),
+ (Token.Operator, u'='),
+ (Token.Text, u' '),
+ (Token.Punctuation, u'('),
+ (Token.Name, u'H'),
+ (Token.Text, u' '),
+ (Token.Operator, u'@'),
+ (Token.Text, u' '),
+ (Token.Name, u'beta'),
+ (Token.Text, u' '),
+ (Token.Operator, u'-'),
+ (Token.Text, u' '),
+ (Token.Name, u'r'),
+ (Token.Punctuation, u')'),
+ (Token.Operator, u'.'),
+ (Token.Name, u'T'),
+ (Token.Text, u' '),
+ (Token.Operator, u'@'),
+ (Token.Text, u' '),
+ (Token.Name, u'inv'),
+ (Token.Punctuation, u'('),
+ (Token.Name, u'H'),
+ (Token.Text, u' '),
+ (Token.Operator, u'@'),
+ (Token.Text, u' '),
+ (Token.Name, u'V'),
+ (Token.Text, u' '),
+ (Token.Operator, u'@'),
+ (Token.Text, u' '),
+ (Token.Name, u'H'),
+ (Token.Operator, u'.'),
+ (Token.Name, u'T'),
+ (Token.Punctuation, u')'),
+ (Token.Text, u' '),
+ (Token.Operator, u'@'),
+ (Token.Text, u' '),
+ (Token.Punctuation, u'('),
+ (Token.Name, u'H'),
+ (Token.Text, u' '),
+ (Token.Operator, u'@'),
+ (Token.Text, u' '),
+ (Token.Name, u'beta'),
+ (Token.Text, u' '),
+ (Token.Operator, u'-'),
+ (Token.Text, u' '),
+ (Token.Name, u'r'),
+ (Token.Punctuation, u')'),
+ (Token.Text, u'\n'),
+ ]
+ self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
Tests for QBasic
~~~~~~~~~~~~~~~~
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Pygments regex lexer tests
~~~~~~~~~~~~~~~~~~~~~~~~~~
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Tests for pygments.regexopt
~~~~~~~~~~~~~~~~~~~~~~~~~~~
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
random.randint(1, len(kwlist) - 1))
no_match = set(kwlist) - set(to_match)
rex = re.compile(regex_opt(to_match))
+ self.assertEqual(rex.groups, 1)
for w in to_match:
self.assertTrue(rex.match(w))
for w in no_match:
rex = re.compile(opt)
m = rex.match('abfoo')
self.assertEqual(5, m.end())
+
+ def test_different_length_grouping(self):
+ opt = regex_opt(('a', 'xyz'))
+ print(opt)
+ rex = re.compile(opt)
+ self.assertTrue(rex.match('a'))
+ self.assertTrue(rex.match('xyz'))
+ self.assertFalse(rex.match('b'))
+ self.assertEqual(1, rex.groups)
+
+ def test_same_length_grouping(self):
+ opt = regex_opt(('a', 'b'))
+ print(opt)
+ rex = re.compile(opt)
+ self.assertTrue(rex.match('a'))
+ self.assertTrue(rex.match('b'))
+ self.assertFalse(rex.match('x'))
+
+ self.assertEqual(1, rex.groups)
+ groups = rex.match('a').groups()
+ self.assertEqual(('a',), groups)
+
+ def test_same_length_suffix_grouping(self):
+ opt = regex_opt(('a', 'b'), suffix='(m)')
+ print(opt)
+ rex = re.compile(opt)
+ self.assertTrue(rex.match('am'))
+ self.assertTrue(rex.match('bm'))
+ self.assertFalse(rex.match('xm'))
+ self.assertFalse(rex.match('ax'))
+ self.assertEqual(2, rex.groups)
+ groups = rex.match('am').groups()
+ self.assertEqual(('a', 'm'), groups)
Pygments RTF formatter tests
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Basic RubyLexer Test
~~~~~~~~~~~~~~~~~~~~
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Basic Shell Tests
~~~~~~~~~~~~~~~~~
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
import unittest
from pygments.token import Token
-from pygments.lexers import BashLexer
+from pygments.lexers import BashLexer, BashSessionLexer
class BashTest(unittest.TestCase):
(Token.Text, u'\n'),
]
self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
+
+ def testArrayNums(self):
+ fragment = u'a=(1 2 3)\n'
+ tokens = [
+ (Token.Name.Variable, u'a'),
+ (Token.Operator, u'='),
+ (Token.Operator, u'('),
+ (Token.Literal.Number, u'1'),
+ (Token.Text, u' '),
+ (Token.Literal.Number, u'2'),
+ (Token.Text, u' '),
+ (Token.Literal.Number, u'3'),
+ (Token.Operator, u')'),
+ (Token.Text, u'\n'),
+ ]
+ self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
+
+ def testEndOfLineNums(self):
+ fragment = u'a=1\nb=2 # comment\n'
+ tokens = [
+ (Token.Name.Variable, u'a'),
+ (Token.Operator, u'='),
+ (Token.Literal.Number, u'1'),
+ (Token.Text, u'\n'),
+ (Token.Name.Variable, u'b'),
+ (Token.Operator, u'='),
+ (Token.Literal.Number, u'2'),
+ (Token.Text, u' '),
+ (Token.Comment.Single, u'# comment\n'),
+ ]
+ self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
+
+class BashSessionTest(unittest.TestCase):
+
+ def setUp(self):
+ self.lexer = BashSessionLexer()
+ self.maxDiff = None
+
+ def testNeedsName(self):
+ fragment = u'$ echo \\\nhi\nhi\n'
+ tokens = [
+ (Token.Text, u''),
+ (Token.Generic.Prompt, u'$'),
+ (Token.Text, u' '),
+ (Token.Name.Builtin, u'echo'),
+ (Token.Text, u' '),
+ (Token.Literal.String.Escape, u'\\\n'),
+ (Token.Text, u'hi'),
+ (Token.Text, u'\n'),
+ (Token.Generic.Output, u'hi\n'),
+ ]
+ self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
+
Basic SmartyLexer Test
~~~~~~~~~~~~~~~~~~~~~~
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
--- /dev/null
+# -*- coding: utf-8 -*-
+"""
+ Pygments SQL lexers tests
+ ~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ :copyright: Copyright 2006-2016 by the Pygments team, see AUTHORS.
+ :license: BSD, see LICENSE for details.
+"""
+import unittest
+
+from pygments.lexers.sql import TransactSqlLexer
+from pygments.token import Comment, Name, Number, Punctuation, Whitespace
+
+
+class TransactSqlLexerTest(unittest.TestCase):
+
+ def setUp(self):
+ self.lexer = TransactSqlLexer()
+
+ def _assertAreTokensOfType(self, examples, expected_token_type):
+ for test_number, example in enumerate(examples.split(), 1):
+ token_count = 0
+ for token_type, token_value in self.lexer.get_tokens(example):
+ if token_type != Whitespace:
+ token_count += 1
+ self.assertEqual(
+ token_type, expected_token_type,
+ 'token_type #%d for %s is %s but must be %s' %
+ (test_number, token_value, token_type, expected_token_type))
+ self.assertEqual(
+ token_count, 1,
+ '%s must yield exactly 1 token instead of %d' %
+ (example, token_count))
+
+ def _assertTokensMatch(self, text, expected_tokens_without_trailing_newline):
+ actual_tokens = tuple(self.lexer.get_tokens(text))
+ if (len(actual_tokens) >= 1) and (actual_tokens[-1] == (Whitespace, '\n')):
+ actual_tokens = tuple(actual_tokens[:-1])
+ self.assertEqual(
+ expected_tokens_without_trailing_newline, actual_tokens,
+ 'text must yield expected tokens: %s' % text)
+
+ def test_can_lex_float(self):
+ self._assertAreTokensOfType(
+ '1. 1.e1 .1 1.2 1.2e3 1.2e+3 1.2e-3 1e2', Number.Float)
+ self._assertTokensMatch(
+ '1e2.1e2',
+ ((Number.Float, '1e2'), (Number.Float, '.1e2'))
+ )
+
+ def test_can_reject_almost_float(self):
+ self._assertTokensMatch(
+ '.e1',
+ ((Punctuation, '.'), (Name, 'e1')))
+
+ def test_can_lex_integer(self):
+ self._assertAreTokensOfType(
+ '1 23 456', Number.Integer)
+
+ def test_can_lex_names(self):
+ self._assertAreTokensOfType(
+ u'thingy thingy123 _thingy _ _123 Ähnliches Müll #temp1 ##temp2', Name)
+
+ def test_can_lex_comments(self):
+ self._assertTokensMatch('--\n', ((Comment.Single, '--\n'),))
+ self._assertTokensMatch('/**/', (
+ (Comment.Multiline, '/*'), (Comment.Multiline, '*/')
+ ))
+ self._assertTokensMatch('/*/**/*/', (
+ (Comment.Multiline, '/*'),
+ (Comment.Multiline, '/*'),
+ (Comment.Multiline, '*/'),
+ (Comment.Multiline, '*/'),
+ ))
Pygments string assert utility tests
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Pygments terminal formatter tests
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
from pygments.util import StringIO
from pygments.lexers.sql import PlPgsqlLexer
-from pygments.formatters import TerminalFormatter
+from pygments.formatters import TerminalFormatter, Terminal256Formatter, \
+ HtmlFormatter, LatexFormatter
+
+from pygments.style import Style
+from pygments.token import Token
+from pygments.lexers import Python3Lexer
+from pygments import highlight
DEMO_TEXT = '''\
-- comment
ANSI_RE = re.compile(r'\x1b[\w\W]*?m')
+
def strip_ansi(x):
return ANSI_RE.sub('', x)
+
class TerminalFormatterTest(unittest.TestCase):
def test_reasonable_output(self):
out = StringIO()
for a, b in zip(DEMO_TEXT.splitlines(), plain.splitlines()):
self.assertTrue(a in b)
+
+
+class MyStyle(Style):
+ styles = {
+ Token.Comment: '#ansidarkgray',
+ Token.String: '#ansiblue bg:#ansidarkred',
+ Token.Number: '#ansigreen bg:#ansidarkgreen',
+ Token.Number.Hex: '#ansidarkgreen bg:#ansired',
+ }
+
+
+class Terminal256FormatterTest(unittest.TestCase):
+ code = '''
+# this should be a comment
+print("Hello World")
+async def function(a,b,c, *d, **kwarg:Bool)->Bool:
+ pass
+ return 123, 0xb3e3
+
+'''
+
+ def test_style_html(self):
+ style = HtmlFormatter(style=MyStyle).get_style_defs()
+ self.assertTrue('#555555' in style,
+ "ansigray for comment not html css style")
+
+ def test_others_work(self):
+ """check other formatters don't crash"""
+ highlight(self.code, Python3Lexer(), LatexFormatter(style=MyStyle))
+ highlight(self.code, Python3Lexer(), HtmlFormatter(style=MyStyle))
+
+ def test_256esc_seq(self):
+ """
+ test that the expected escape sequences are actually emitted when using #ansi<> color codes
+ """
+ def termtest(x):
+ return highlight(x, Python3Lexer(),
+ Terminal256Formatter(style=MyStyle))
+
+ self.assertTrue('32;41' in termtest('0x123'))
+ self.assertTrue('32;42' in termtest('123'))
+ self.assertTrue('30;01' in termtest('#comment'))
+ self.assertTrue('34;41' in termtest('"String"'))
Basic Tests for textfmts
~~~~~~~~~~~~~~~~~~~~~~~~
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Test suite for the token module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Test suite for the unistring module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Pygments tests for using()
~~~~~~~~~~~~~~~~~~~~~~~~~~
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Test suite for the util module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
+ :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
--- /dev/null
+# -*- coding: utf-8 -*-
+"""
+ Whiley Test
+ ~~~~~~~~~~~
+
+ :copyright: Copyright 2006-2016 by the Pygments team, see AUTHORS.
+ :license: BSD, see LICENSE for details.
+"""
+
+import unittest
+
+from pygments.lexers import WhileyLexer
+from pygments.token import Token
+
+
+class WhileyTest(unittest.TestCase):
+ def setUp(self):
+ self.lexer = WhileyLexer()
+
+ def testWhileyOperator(self):
+ fragment = u'123 \u2200 x\n'
+ tokens = [
+ (Token.Literal.Number.Integer, u'123'),
+ (Token.Text, u' '),
+ (Token.Operator, u'\u2200'),
+ (Token.Text, u' '),
+ (Token.Name, u'x'),
+ (Token.Text, u'\n'),
+ ]
+ self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
+++ /dev/null
-[tox]
-envlist = py26, py27, py33, py34
-[testenv]
-deps =
- nose
- coverage
-commands = python -d tests/run.py {posargs}