From: adam Date: Tue, 19 Mar 2013 18:13:46 +0000 (+0700) Subject: Working for Lua module doc X-Git-Tag: v1.2.12~296^2~72 X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=1ea57d8df592baa5f41f5df83944a2c67f45ebcb;p=platform%2Fupstream%2Fejdb.git Working for Lua module doc --- diff --git a/luaejdb/Makefile b/luaejdb/Makefile index 0366cff..ce5c85e 100644 --- a/luaejdb/Makefile +++ b/luaejdb/Makefile @@ -15,10 +15,10 @@ check-valgrind: build-dbg doc: rm -rf ./doc - luadoc -d ./doc ejdb.luadoc + lua ./tools/ldoc/ldoc.lua -d ./doc -c ./config.ld ejdb.luadoc clean: - - rm -rf *.so *.rock + - rm -f *.so *.rock ./ejdb/*.so - rm -rf ./doc - make -C ./test clean diff --git a/luaejdb/config.ld b/luaejdb/config.ld new file mode 100644 index 0000000..fbc6449 --- /dev/null +++ b/luaejdb/config.ld @@ -0,0 +1,3 @@ +format='markdown' +project = 'EJDB' +file = {'ejdb.luadoc'} \ No newline at end of file diff --git a/luaejdb/doc/index.html b/luaejdb/doc/index.html new file mode 100644 index 0000000..0fd9211 --- /dev/null +++ b/luaejdb/doc/index.html @@ -0,0 +1,951 @@ + + + + + Reference + + + + +
Module ejdb

The Lua binding of EJDB database.
http://ejdb.org

Functions
  open (path, mode)         - Opens EJDB database.
  close ()                  - Closes opened database.
  toOID (val)               - Converts string OID into BSON oid table.
  toDate (val)              - Converts os.time table (or number of seconds since epoch) into BSON_DATE.
  toDateNow ()              - Converts current time into BSON_DATE.
  toRegexp (re, opts)       - Builds BSON_REGEX value.
  toBinData (val)           - Converts Lua string into BSON_BINDATA value.
  toNull ()                 - Builds BSON_NULL value.
  toUndefined ()            - Builds BSON_UNDEFINED value.
  DB.toOID (val)            - Converts string OID into BSON oid table.
  DB.toDate (val)           - Converts os.time table or number of seconds into BSON_DATE.
  DB.toDateNow ()           - Converts current time into BSON_DATE.
  DB.toRegexp (re, opts)    - Builds BSON_REGEX value.
  DB.toBinData (val)        - Converts Lua string into BSON_BINDATA value.
  DB.toNull ()              - Builds BSON_NULL value.
  DB.toUndefined ()         - Builds BSON_UNDEFINED value.
  DB:save (cname, obj, ...) - Saves/updates specified JSON objects in the collection.
  DB:find (cname, q)        - Executes a query on a collection.
  Q:Eq (val)                - Field equality restriction.
  Q:ElemMatch (val)         - Element match construction.
  Q:Not (val)               - The $not negation for the val block.
  Q:Gt (val)                - Greater than (val > arg).
  Q:Gte (val)               - Greater than or equal (val >= arg).
  Q:Lt (val)                - Less than (val < arg).
  Q:Lte (val)               - Less than or equal (val <= arg).

Tables

  Q  - Query/JSON builder used to create EJDB queries or JSON objects with preserved key order (unlike plain Lua tables).
  DB - The database itself.
Functions

open (path, mode)

Opens EJDB database.

Parameters:
  path  {String}  Database main file
  mode  {String?} Database open mode flags:
                    w  Open as a writer
                    r  Open as a reader
                    c  Create db if it does not exist
                    t  Truncate existing db
                    s  Sync db after each transaction
                  Default open mode: rws

Returns:
  Database table

Usage:
  local db = ejdb.open("foodb", "wrc")
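A minimal round trip built only from the calls documented on this page; the database
name "testdb", the collection "parrots" and the field names are placeholders, and the
module is assumed to load as require("ejdb").

  local ejdb = require("ejdb")            -- assumed module name
  local db = ejdb.open("testdb", "rwc")   -- open as reader+writer, create if missing

  -- The collection is created on the first save; _id is autogenerated.
  db:save("parrots", { name = "Kesha", age = 3 })

  -- Fetch matching records back and walk the cursor iterator.
  local res = db:find("parrots", { name = "Kesha" })
  for vobj, idx in res() do
    print(idx, vobj.name, vobj.age)
  end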
close ()

Closes opened database.

toOID (val)

Converts string OID into BSON oid table.

Parameters:
  val  {String}  24 hex chars BSON_OID

toDate (val)

Converts os.time table (or number of seconds since epoch) into BSON_DATE.

Returns:
  BSON_DATE table.

Usage:
  ejdb.toDate({ year = 2013, month = 1, day = 1, hour = 0, sec = 1 })
  ejdb.toDate(1363705285431)

toDateNow ()

Converts current time into BSON_DATE.

toRegexp (re, opts)

Builds BSON_REGEX value.

Parameters:
  re    {String}  Regular expression
  opts  {String}  Regular expression flags

Returns:
  BSON_REGEX table value

toBinData (val)

Converts Lua string into BSON_BINDATA value.

Returns:
  BSON_BINDATA table value

toNull ()

Builds BSON_NULL value.

Returns:
  BSON_NULL table value

toUndefined ()

Builds BSON_UNDEFINED value.

Returns:
  BSON_UNDEFINED table value
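These helpers force a specific BSON type when building a document to save; db is the
handle from the open() example above, and the collection and field names are placeholders.

  db:save("events", {
    title   = "hatch day",
    when    = ejdb.toDateNow(),                        -- BSON_DATE, current time
    expires = ejdb.toDate(1363705285431),              -- BSON_DATE from a timestamp
    note    = ejdb.toNull(),                           -- explicit BSON_NULL
    blob    = ejdb.toBinData("\0\1\2\3"),              -- raw bytes as BSON_BINDATA
    owner   = ejdb.toOID("510f7fa91ad6270a00000000"),  -- reference to another record's _id
  })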
DB.toOID (val)

Converts string OID into BSON oid table. See: toOID

DB.toDate (val)

Converts os.time table or number of seconds into BSON_DATE. See: toDate

DB.toDateNow ()

Converts current time into BSON_DATE. See: toDateNow

DB.toRegexp (re, opts)

Builds BSON_REGEX value. See: toRegexp

DB.toBinData (val)

Converts Lua string into BSON_BINDATA value. See: toBinData

DB.toNull ()

Builds BSON_NULL value. See: toNull

DB.toUndefined ()

Builds BSON_UNDEFINED value. See: toUndefined
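The DB-level helpers make it easy to look a record up by its primary key without keeping
the module table around; whether the resulting oid table can be matched directly against
_id is inferred from the save()/OID description below, so treat this as a sketch.

  -- "510f7fa91ad6270a00000000" is the hex _id of a previously saved object.
  local res = db:find("parrots", { _id = db.toOID("510f7fa91ad6270a00000000") })
  if #res > 0 then
    local obj = res:object(1)
    print(obj.name)
  end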
DB:save (cname, obj, ...)

Saves/updates specified JSON objects in the collection.
If a collection with cname does not exist it will be created.
Each persistent object has a unique identifier (OID) placed in the _id property.
If a saved object does not have an _id it will be autogenerated.
To identify and update an object it should contain the _id property.

Parameters:
  cname  {String}  Name of collection.
  obj    Lua table or Q representing the JSON object.
  ...    If the last argument is true, the saved object will be merged with the one
         already persisted in the db.

Usage:
  db:save("parrots2", {foo = "bar"})
  db:save("parrots2", Q("foo", "bar"), true) -- merge option is on
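In practice updates are usually done by re-saving a previously fetched object, since the
fetched table still carries its _id; the collection and field names are placeholders, and
the assumption that object(i) returns the stored _id follows from the cursor description
under find() below.

  local res = db:find("parrots2", { name = "Kesha" })
  if #res > 0 then
    local parrot = res:object(1)
    parrot.age = (parrot.age or 0) + 1
    db:save("parrots2", parrot)                                 -- update in place via _id

    -- Merge mode: only the listed fields are folded into the stored record.
    db:save("parrots2", { _id = parrot._id, color = "green" }, true)
  end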
DB:find (cname, q)

Executes a query on a collection.

EJDB queries are inspired by MongoDB (mongodb.org) and follow the same philosophy.

Supported queries:
  - Simple matching of String OR Number OR Array value:
      {'fpath' : 'val', ...}
  - $not Negate operation:
      {'fpath' : {'$not' : val}}                  // Field not equal to val
      {'fpath' : {'$not' : {'$begin' : prefix}}}  // Field does not begin with prefix
  - $begin String starts with prefix:
      {'fpath' : {'$begin' : prefix}}
  - $gt, $gte (>, >=) and $lt, $lte (<, <=) for number types:
      {'fpath' : {'$gt' : number}, ...}
  - $bt Between for number types:
      {'fpath' : {'$bt' : [num1, num2]}}
  - $in String OR Number OR Array val matches a value in the specified array:
      {'fpath' : {'$in' : [val1, val2, val3]}}
  - $nin Not IN.
  - $strand String tokens OR String array val matches all tokens in the specified array:
      {'fpath' : {'$strand' : [val1, val2, val3]}}
  - $stror String tokens OR String array val matches any token in the specified array:
      {'fpath' : {'$stror' : [val1, val2, val3]}}
  - $exists Field existence matching:
      {'fpath' : {'$exists' : true|false}}
  - $icase Case insensitive string matching:
      {'fpath' : {'$icase' : 'val1'}}
      icase matching with the '$in' operation:
      {'name' : {'$icase' : {'$in' : ['HEllo', 'heLLo WorlD']}}}
      For case insensitive matching you can create a special type of string index.
  - $elemMatch Matches more than one component within an array element:
      { array: { $elemMatch: { value1 : 1, value2 : { $gt: 1 } } } }
      Restriction: only one $elemMatch is allowed in the context of one array field.
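Written from Lua, the operator objects above are just nested tables with string keys, or
the equivalent Q builder chain; Q is assumed to be exported by the module as ejdb.Q, and
the collection and field names are placeholders.

  -- Plain-table form: name starts with "Ke" and 1 < age < 10.
  local res = db:find("parrots", {
    name = { ["$begin"] = "Ke" },
    age  = { ["$gt"] = 1, ["$lt"] = 10 },
  })

  -- The same restriction with the query builder.
  local Q = require("ejdb").Q   -- assumed export; the docs refer to it simply as Q
  local res2 = db:find("parrots", Q():F("name"):Begin("Ke"):F("age"):Gt(1):Lt(10))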
Queries can also be used to update records:

  - $set Field set operation:
      {.., '$set' : {'field1' : val1, 'fieldN' : valN}}
  - $upsert Atomic upsert. If matching records are found it acts as a '$set' operation,
    otherwise a new record is inserted with the fields specified by the argument object:
      {.., '$upsert' : {'field1' : val1, 'fieldN' : valN}}
  - $inc Increment operation. Only number types are supported:
      {.., '$inc' : {'field1' : number, ..., 'fieldN' : number}}
  - $dropall In-place record removal operation:
      {.., '$dropall' : true}
  - $addToSet Atomically adds a value to the array only if it is not in the array already.
    If the containing array is missing it will be created:
      {.., '$addToSet' : {'fpath' : val1, 'fpathN' : valN, ...}}
  - $addToSetAll Batch version of $addToSet:
      {.., '$addToSetAll' : {'fpath' : [array of values to add], ...}}
  - $pull Atomically removes all occurrences of a value from a field, if the field is an array:
      {.., '$pull' : {'fpath' : val1, 'fpathN' : valN, ...}}
  - $pullAll Batch version of $pull:
      {.., '$pullAll' : {'fpath' : [array of values to remove], ...}}

Collection joins are supported in the following form:

  {..., $do : {fpath : {$join : 'collectionname'}} }

  where the 'fpath' value points to object OIDs from 'collectionname'. Its value can be
  an OID, a string representation of an OID, or an array of such pointers.
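For example, matching criteria and update operators go into the same query object; this
is a sketch with placeholder collection and field names, using the db handle from the
open() example above.

  -- Mark every parrot named "Kesha" as checked and bump its age.
  db:find("parrots", {
    name     = "Kesha",
    ["$set"] = { checked = true },
    ["$inc"] = { age = 1 },
  })

  -- Array maintenance on a "tags" field.
  db:find("parrots", { name = "Kesha", ["$addToSet"] = { tags = "friendly" } })
  db:find("parrots", { name = "Kesha", ["$pull"]     = { tags = "wild" } })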

NOTE: It is better to execute update queries with the $onlycount=true hint flag,
      or to use the special update() method, to avoid unnecessary data fetching.
NOTE: Negate operations ($not and $nin) do not use indexes,
      so they can be slow in comparison to other matching operations.
NOTE: Only one index can be used in a search query operation.
NOTE: If a callback is not provided this function will be synchronous.

QUERY HINTS (specified by the hints argument):
  - $max       Maximum number of records in the result set
  - $skip      Number of skipped results in the result set
  - $orderby   Sorting order of query fields
  - $onlycount true|false If true, only the count of matching records will be returned,
               without placing records in the result set
  - $fields    Set a subset of fetched fields.
               If a field is present in the $orderby clause it is forced to be included
               in the resulting records.
               Example:
               hints: {
                   "$orderby" : { // ORDER BY field1 ASC, field2 DESC
                       "field1" : 1,
                       "field2" : -1
                   },
                   "$fields" : { // SELECT ONLY {_id, field1, field2}
                       "field1" : 1,
                       "field2" : 1
                   }
               }
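The Q builder can carry these hints on the query itself; OrderBy, Skip, Max and Fields
appear in the builder method list further below, but the exact argument forms (and how a
separate hints table would be passed to find()) are assumptions in this sketch, with Q
obtained as in the earlier sketch.

  -- Order by name, skip the first 10 matches, return at most 100 records,
  -- and fetch only the name and age fields.
  local q = Q("likes", "toys")
              :OrderBy("name asc", "age desc")
              :Skip(10)
              :Max(100)
              :Fields("name", "age")
  local res = db:find("parrots", q)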

To traverse the selected records a cursor object (res) is returned:

  #res                        - length of result set
  res[i]                      - BSON representation of the object as a Lua string
  res:object(i)               - Lua table constructed from BSON data
  res:field(i, <field name>)  - Lua value of a field of the fetched BSON object
  res()                       - creates an iterator over pairs (obj, idx), where
                                obj is a Lua table constructed from BSON data and
                                idx is the index of the fetched object in the result set

Examples:

  for i = 1, #res do
    local ob = res:object(i)
    ...
  end

OR

  for i = 1, #res do
    res:field(i, "json field name")
    ...
  end

OR

  for vobj, idx in res() do
    -- vobj is a Lua table representation of the fetched JSON object
    vobj["json field name"]
    ...
  end

Parameters:
  cname  {String}   Name of collection
  q      {table|Q}  JSON query object

See also: Q
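A join sketch tying find() to the $do/$join form above: each event's owner field holds an
OID pointing into the parrots collection. The collection layout is hypothetical, and the
exact shape of the joined result is not spelled out on this page.

  local res = db:find("events", {
    ["$do"] = { owner = { ["$join"] = "parrots" } },
  })
  for vobj, idx in res() do
    -- owner is resolved from the "parrots" collection per the $join description.
    print(idx, vobj.title)
  end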
Q:Eq (val)

Field equality restriction:
  {fname : fval}

Usage:
  Q():F("fname"):Eq(<fval>)
  Q("fname", <fval>)

Q:ElemMatch (val)

Element match construction.
  - $elemMatch matches more than one component within an array element:
      { array: { $elemMatch: { value1 : 1, value2 : { $gt: 1 } } } }
    Restriction: only one $elemMatch is allowed in the context of one array field.

Q:Not (val)

The $not negation for the val block.

Usage:
  Q():Not(Q("foo", "bar")) => {"$not" : {"foo" : "bar"}}
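The same negation and element matching can be written with plain tables, following the
operator syntax documented under find(); field and collection names are placeholders.

  -- Records whose name does not start with "Ke".
  local res = db:find("parrots", {
    name = { ["$not"] = { ["$begin"] = "Ke" } },
  })

  -- Records with at least one toy that is a bell bigger than size 1.
  local res2 = db:find("parrots", {
    toys = { ["$elemMatch"] = { kind = "bell", size = { ["$gt"] = 1 } } },
  })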
Q:Gt (val)

Greater than (val > arg).

Usage:
  Q():F("age"):Gt(29) => {"age" : {"$gt" : 29}}

Q:Gte (val)

Greater than or equal (val >= arg).

Usage:
  Q():F("age"):Gte(29) => {"age" : {"$gte" : 29}}

Q:Lt (val)

Less than (val < arg).

Usage:
  Q():F("age"):Lt(29) => {"age" : {"$lt" : 29}}

Q:Lte (val)

Less than or equal (val <= arg).

Usage:
  Q():F("age"):Lte(29) => {"age" : {"$lte" : 29}}
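The comparison helpers chain on a single field selector, which is how ranges are
expressed (the pattern comes from the Q usage examples below); the plain-table equivalent
is shown for comparison, with Q and db as set up in the earlier sketches.

  -- 20 < age < 40
  local res  = db:find("parrots", Q():F("age"):Gt(20):Lt(40))
  local res2 = db:find("parrots", { age = { ["$gt"] = 20, ["$lt"] = 40 } })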
Tables

Q

Query/JSON builder, used to create EJDB queries or JSON objects with preserved
key order (unlike plain Lua tables).

See also: Q:Eq, Q:ElemMatch, Q:Not, Q:Gt, Q:Gte, Q:Lt, Q:Lte, Q:Icase, Q:Begin, Q:In,
Q:NotIn, Q:Bt, Q:StrAnd, Q:StrOr, Q:Inc, Q:Set, Q:AddToSet, Q:AddToSetAll, Q:Pull,
Q:PullAll, Q:Upsert, Q:DropAll, Q:Do, Q:Or, Q:Skip, Q:Max, Q:OrderBy, Q:Fields, Q:NotFields

Usage:
  Q("foo", "bar")
  Q("likes", "toys"):OrderBy("name asc", "age desc")
  Q("name", "Andy"):F("_id"):Eq("510f7fa91ad6270a00000000"):F("age"):Gt(20):Lt(40):F("score"):In({ 11, 22.12333, 1362835380447, db.toNull() }):Max(232)
  Q():Or(Q("foo", "bar"), Q("foo", "bar6")):OrderBy({ foo = 1 })

DB

The database itself.

generated by LDoc 1.3
+ + diff --git a/luaejdb/doc/ldoc.css b/luaejdb/doc/ldoc.css new file mode 100644 index 0000000..54f8d4c --- /dev/null +++ b/luaejdb/doc/ldoc.css @@ -0,0 +1,297 @@ +/* BEGIN RESET + +Copyright (c) 2010, Yahoo! Inc. All rights reserved. +Code licensed under the BSD License: +http://developer.yahoo.com/yui/license.html +version: 2.8.2r1 +*/ +html { + color: #000; + background: #FFF; +} +body,div,dl,dt,dd,ul,ol,li,h1,h2,h3,h4,h5,h6,pre,code,form,fieldset,legend,input,button,textarea,p,blockquote,th,td { + margin: 0; + padding: 0; +} +table { + border-collapse: collapse; + border-spacing: 0; +} +fieldset,img { + border: 0; +} +address,caption,cite,code,dfn,em,strong,th,var,optgroup { + font-style: inherit; + font-weight: inherit; +} +del,ins { + text-decoration: none; +} +li { + list-style: bullet; + margin-left: 20px; +} +caption,th { + text-align: left; +} +h1,h2,h3,h4,h5,h6 { + font-size: 100%; + font-weight: bold; +} +q:before,q:after { + content: ''; +} +abbr,acronym { + border: 0; + font-variant: normal; +} +sup { + vertical-align: baseline; +} +sub { + vertical-align: baseline; +} +legend { + color: #000; +} +input,button,textarea,select,optgroup,option { + font-family: inherit; + font-size: inherit; + font-style: inherit; + font-weight: inherit; +} +input,button,textarea,select {*font-size:100%; +} +/* END RESET */ + +body { + margin-left: 1em; + margin-right: 1em; + font-family: arial, helvetica, geneva, sans-serif; + background-color: #ffffff; margin: 0px; +} + +code, tt { font-family: monospace; } +span.parameter { font-family:monospace; } +span.parameter:after { content:":"; } +span.types:before { content:"("; } +span.types:after { content:")"; } +.type { font-weight: bold; font-style:italic } + +body, p, td, th { font-size: .95em; line-height: 1.2em;} + +p, ul { margin: 10px 0 0 0px;} + +strong { font-weight: bold;} + +em { font-style: italic;} + +h1 { + font-size: 1.5em; + margin: 0 0 20px 0; +} +h2, h3, h4 { margin: 15px 0 10px 0; } +h2 { font-size: 1.25em; } +h3 { font-size: 1.15em; } +h4 { font-size: 1.06em; } + +a:link { font-weight: bold; color: #004080; text-decoration: none; } +a:visited { font-weight: bold; color: #006699; text-decoration: none; } +a:link:hover { text-decoration: underline; } + +hr { + color:#cccccc; + background: #00007f; + height: 1px; +} + +blockquote { margin-left: 3em; } + +ul { list-style-type: disc; } + +p.name { + font-family: "Andale Mono", monospace; + padding-top: 1em; +} + +pre.example { + background-color: rgb(245, 245, 245); + border: 1px solid silver; + padding: 10px; + margin: 10px 0 10px 0; + font-family: "Andale Mono", monospace; + font-size: .85em; +} + +pre { + background-color: rgb(245, 245, 245); + border: 1px solid silver; + padding: 10px; + margin: 10px 0 10px 0; + overflow: auto; + font-family: "Andale Mono", monospace; +} + + +table.index { border: 1px #00007f; } +table.index td { text-align: left; vertical-align: top; } + +#container { + margin-left: 1em; + margin-right: 1em; + background-color: #f0f0f0; +} + +#product { + text-align: center; + border-bottom: 1px solid #cccccc; + background-color: #ffffff; +} + +#product big { + font-size: 2em; +} + +#main { + background-color: #f0f0f0; + border-left: 2px solid #cccccc; +} + +#navigation { + float: left; + width: 18em; + vertical-align: top; + background-color: #f0f0f0; + overflow: visible; +} + +#navigation h2 { + background-color:#e7e7e7; + font-size:1.1em; + color:#000000; + text-align: left; + padding:0.2em; + border-top:1px solid #dddddd; + border-bottom:1px solid #dddddd; 
+} + +#navigation ul +{ + font-size:1em; + list-style-type: none; + margin: 1px 1px 10px 1px; +} + +#navigation li { + text-indent: -1em; + display: block; + margin: 3px 0px 0px 22px; +} + +#navigation li li a { + margin: 0px 3px 0px -1em; +} + +#content { + margin-left: 18em; + padding: 1em; + width: 700px; + border-left: 2px solid #cccccc; + border-right: 2px solid #cccccc; + background-color: #ffffff; +} + +#about { + clear: both; + padding: 5px; + border-top: 2px solid #cccccc; + background-color: #ffffff; +} + +@media print { + body { + font: 12pt "Times New Roman", "TimeNR", Times, serif; + } + a { font-weight: bold; color: #004080; text-decoration: underline; } + + #main { + background-color: #ffffff; + border-left: 0px; + } + + #container { + margin-left: 2%; + margin-right: 2%; + background-color: #ffffff; + } + + #content { + padding: 1em; + background-color: #ffffff; + } + + #navigation { + display: none; + } + pre.example { + font-family: "Andale Mono", monospace; + font-size: 10pt; + page-break-inside: avoid; + } +} + +table.module_list { + border-width: 1px; + border-style: solid; + border-color: #cccccc; + border-collapse: collapse; +} +table.module_list td { + border-width: 1px; + padding: 3px; + border-style: solid; + border-color: #cccccc; +} +table.module_list td.name { background-color: #f0f0f0; ; min-width: 200px; } +table.module_list td.summary { width: 100%; } + + +table.function_list { + border-width: 1px; + border-style: solid; + border-color: #cccccc; + border-collapse: collapse; +} +table.function_list td { + border-width: 1px; + padding: 3px; + border-style: solid; + border-color: #cccccc; +} +table.function_list td.name { background-color: #f0f0f0; ; min-width: 200px; } +table.function_list td.summary { width: 100%; } + +dl.table dt, dl.function dt {border-top: 1px solid #ccc; padding-top: 1em;} +dl.table dd, dl.function dd {padding-bottom: 1em; margin: 10px 0 0 20px;} +dl.table h3, dl.function h3 {font-size: .95em;} + +/* stop sublists from having initial vertical space */ +ul ul { margin-top: 0px; } +ol ul { margin-top: 0px; } +ol ol { margin-top: 0px; } +ul ol { margin-top: 0px; } + +/* styles for prettification of source */ +pre .comment { color: #558817; } +pre .constant { color: #a8660d; } +pre .escape { color: #844631; } +pre .keyword { color: #2239a8; font-weight: bold; } +pre .library { color: #0e7c6b; } +pre .marker { color: #512b1e; background: #fedc56; font-weight: bold; } +pre .string { color: #a8660d; } +pre .number { color: #f8660d; } +pre .operator { color: #2239a8; font-weight: bold; } +pre .preprocessor, pre .prepro { color: #a33243; } +pre .global { color: #800080; } +pre .prompt { color: #558817; } +pre .url { color: #272fc2; text-decoration: underline; } diff --git a/luaejdb/ejdb.luadoc b/luaejdb/ejdb.luadoc index d9508dc..9841bf2 100644 --- a/luaejdb/ejdb.luadoc +++ b/luaejdb/ejdb.luadoc @@ -1,23 +1,357 @@ - --- The Lua binding of EJDB database.
-- http://ejdb.org + module("ejdb") -Q = {} -ejdb = {} +local ejdb = {} + +--- Query/JSON builder is used to create EJDB queries or JSON objects with +-- preserverd keys order (Unlike lua tables). +-- @class table +-- @name Q +-- +-- Examples: +-- @usage Q("foo", "bar") +-- @usage Q("likes", "toys"):OrderBy("name asc", "age desc") +-- @usage Q("name", "Andy"):F("_id"):Eq("510f7fa91ad6270a00000000"):F("age"):Gt(20):Lt(40):F("score"):In({ 11, 22.12333, 1362835380447, db.toNull() }):Max(232) +-- @usage Q():Or(Q("foo", "bar"), Q("foo", "bar6")):OrderBy({ foo = 1 }) +-- @see Q:Eq +-- @see Q:ElemMatch +-- @see Q:Not +-- @see Q:Gt +-- @see Q:Gte +-- @see Q:Lt +-- @see Q:Lte +-- @see Q:Icase +-- @see Q:Begin +-- @see Q:In +-- @see Q:NotIn +-- @see Q:Bt +-- @see Q:StrAnd +-- @see Q:StrOr +-- @see Q:Inc +-- @see Q:Set +-- @see Q:AddToSet +-- @see Q:AddToSetAll +-- @see Q:Pull +-- @see Q:PullAll +-- @see Q:Upsert +-- @see Q:Upsert +-- @see Q:DropAll +-- @see Q:Do +-- @see Q:Or +-- @see Q:Skip +-- @see Q:Skip +-- @see Q:Max +-- @see Q:OrderBy +-- @see Q:Fields +-- @see Q:Fields +-- @see Q:NotFields +-- +local Q = {} + +--- +-- Database itself. +-- @class table +-- @name DB +local DB = {} --- Opens EJDB database. -- @usage local db = ejdb.open("foodb", "wrc") -- @param path {String} Database main file -- @param mode {String?} Database open mode flags:
--- `w` Open as a writer
--- `r` Open as a reader
--- `c` Create db if it not exists
--- `t` Truncate existing db
--- `s` Sycn db after each transaction
--- Default open mode: `rws` +-- `w` Open as a writer
+-- `r` Open as a reader
+-- `c` Create db if it does not exist
+-- `t` Truncate existing db
+-- `s` Sync db after each transaction
+-- Default open mode: `rws` +-- @return Database table -- function ejdb.open(path, mode) end --- Closes opened database. -function ejdb.close() end \ No newline at end of file +function ejdb.close() end + +--- Converts string OID into BSON oid table. +-- @param val {String} 24 hex chars BSON_OID +function ejdb.toOID(val) end + +--- Converts os.time table (or number of seconds since epoch) into BSON_DATE. +-- @return BSON_DATE table. +-- @usage ejdb.toDate({ year = 2013, month = 1, day = 1, hour = 0, sec = 1 }) +-- @usage ejdb.toDate(1363705285431) +function ejdb.toDate(val) end + +--- Converts current time into BSON_DATE. +function ejdb.toDateNow() end + +--- Builds BSON_REGEX value +-- @param re {String} Regular expression +-- @param opts {String} Regular expression flags +-- @return BSON_REGEX table value +function ejdb.toRegexp(re, opts) end + +--- Converts lua string into BSON_BINDATA value +-- @return BSON_BINDATA table value +function ejdb.toBinData(val) end + +--- Builds BSON_NULL value +-- @return BSON_NULL table value +function ejdb.toNull() end + +--- Builds BSON_UNDEFINED value +-- @return BSON_UNDEFINED table value +function ejdb.toUndefined() end + +--- Converts string OID into BSON oid table. +-- @see ejdb.toOID +function DB.toOID(val) end + +--- Converts os.time table or number of secods integer into BSON_DATE. +-- @see ejdb.toDate +function DB.toDate(val) end + +--- Converts current time into BSON_DATE. +-- @see ejdb.toDateNow +function DB.toDateNow() end + +--- Builds BSON_REGEX value. +-- @see ejdb.toRegexp +function DB.toRegexp(re, opts) end + +--- Converts lua string into BSON_BINDATA value. +-- @see ejdb.toBinData +function DB.toBinData(val) end + +--- Builds BSON_NULL value. +-- @see ejdb.toNull +function DB.toNull() end + +--- Builds BSON_UNDEFINED value . +-- @see ejdb.toUndefined +function DB.toUndefined() end + +--- Save/update specified JSON objects in the collection. +-- If collection with `cname` does not exists it will be created. +-- Each persistent object has unique identifier (OID) placed in the `_id` property. +-- If a saved object does not have `_id` it will be autogenerated. +-- To identify and update object it should contains `_id` property. +-- @param cname {String} Name of collection. +-- @param obj Lua table or Q represents JSON object. +-- @param ... If last argument is True a saved object will be merged with who's +-- already persisted in db. +-- @usage dQ:save("parrots2", {foo = "bar"}) +-- @usage dQ:save("parrots2", Q("foo", "bar"), true) -- merge option is on +function DB:save(cname, obj, ...) end + +--- Execute query on collection. +-- +-- EJDB queries inspired by MongoDB (mongodb.org) and follows same philosophy. +-- - Supported queries: +-- - Simple matching of String OR Number OR Array value: +-- - {'fpath' : 'val', ...} +-- - $not Negate operation. 
+-- - {'fpath' : {'$not' : val}} //Field not equal to val +-- - {'fpath' : {'$not' : {'$begin' : prefix}}} //Field not begins with val +-- - $begin String starts with prefix +-- - {'fpath' : {'$begin' : prefix}} +-- - $gt, $gte (>, >=) and $lt, $lte for number types: +-- - {'fpath' : {'$gt' : number}, ...} +-- - $bt Between for number types: +-- - {'fpath' : {'$bt' : [num1, num2]}} +-- - $in String OR Number OR Array val matches to value in specified array: +-- - {'fpath' : {'$in' : [val1, val2, val3]}} +-- - $nin - Not IN +-- - $strand String tokens OR String array val matches all tokens in specified array: +-- - {'fpath' : {'$strand' : [val1, val2, val3]}} +-- - $stror String tokens OR String array val matches any token in specified array: +-- - {'fpath' : {'$stror' : [val1, val2, val3]}} +-- - $exists Field existence matching: +-- - {'fpath' : {'$exists' : true|false}} +-- - $icase Case insensitive string matching: +-- - {'fpath' : {'$icase' : 'val1'}} //icase matching +-- icase matching with '$in' operation: +-- - {'name' : {'$icase' : {'$in' : ['HEllo', 'heLLo WorlD']}}} +-- For case insensitive matching you can create special type of string index. +-- - $elemMatch The $elemMatch operator matches more than one component within an array element. +-- - { array: { $elemMatch: { value1 : 1, value2 : { $gt: 1 } } } } +-- Restriction: only one $elemMatch allowed in context of one array field. +-- +-- - Queries can be used to update records: +-- +-- $set Field set operation. +-- - {.., '$set' : {'field1' : val1, 'fieldN' : valN}} +-- $upsert Atomic upsert. If matching records are found it will be '$set' operation, +-- otherwise new record will be inserted +-- with fields specified by argment object. +-- - {.., '$upsert' : {'field1' : val1, 'fieldN' : valN}} +-- $inc Increment operation. Only number types are supported. +-- - {.., '$inc' : {'field1' : number, ..., 'field1' : number} +-- $dropall In-place record removal operation. +-- - {.., '$dropall' : true} +-- $addToSet Atomically adds value to the array only if its not in the array already. +-- If containing array is missing it will be created. +-- - {.., '$addToSet' : {'fpath' : val1, 'fpathN' : valN, ...}} +-- $addToSetAll Batch version if $addToSet +-- - {.., '$addToSetAll' : {'fpath' : [array of values to add], ...}} +-- $pull Atomically removes all occurrences of value from field, if field is an array. +-- - {.., '$pull' : {'fpath' : val1, 'fpathN' : valN, ...}} +-- $pullAll Batch version of $pull +-- - {.., '$pullAll' : {'fpath' : [array of values to remove], ...}} +-- +-- - Collection joins supported in the following form: +-- +-- {..., $do : {fpath : {$join : 'collectionname'}} } +-- Where 'fpath' value points to object's OIDs from 'collectionname'. Its value +-- can be OID, string representation of OID or array of this pointers. +-- +-- NOTE: It is better to execute update queries with `$onlycount=true` hint flag +-- or use the special `update()` method to avoid unnecessarily data fetching. +-- NOTE: Negate operations: $not and $nin not using indexes +-- so they can be slow in comparison to other matching operations. +-- NOTE: Only one index can be used in search query operation. +-- NOTE: If callback is not provided this function will be synchronous. +-- +-- QUERY HINTS (specified by `hints` argument): +-- - $max Maximum number in the result set +-- - $skip Number of skipped results in the result set +-- - $orderby Sorting order of query fields. 
+-- - $onlycount true|false If `true` only count of matching records will be returned +-- without placing records in result set. +-- - $fields Set subset of fetched fields. +-- If field presented in $orderby clause it will be forced to include in resulting records. +-- Example: +-- hints: { +-- "$orderby" : { //ORDER BY field1 ASC, field2 DESC +-- "field1" : 1, +-- "field2" : -1 +-- }, +-- "$fields" : { //SELECT ONLY {_id, field1, field2} +-- "field1" : 1, +-- "field2" : 1 +-- } +-- } +-- To traverse selected records cursor object is returned. +-- Cursor (res): +-- #res - length of result set +-- res[i] - BSON representations of object as lua string +-- res:object(i) - Lua table constructed from BSON data +-- res:field(i, ) - Lua value of fetched BSON object +-- res() - Creates iterator for pairs (obj, idx) +-- where obj - Lua table constructed from BSON data +-- idx - Index of fetched object in the result set +-- +-- Examples: +-- for i = 1, #res do +-- local ob = res:object(i) +-- ... +-- end +-- +-- OR +-- +-- for i = 1, #res do +-- res:field(i, "json field name") +-- ... +-- end +-- +-- OR +-- +-- for vobj, idx in res() do +-- -- vobj is a lua table representation of fetched json object +-- vobj["json field name"] +-- ... +-- end +-- +-- +-- @param cname {String} Name of collection +-- @param q {table|Q} JSON query object +-- @see Q +-- +function DB:find(cname, q, ...) end + + + +--- Field eq restriction. +-- {fname : fval} +-- @usage Q():F("fname"):Eq() +-- @usage Q("fname", ) +function Q:Eq(val) self:_setop(nil, val, nil, true) end + +--- Element match construction. +-- - $elemMatch The $elemMatch operator matches more than one component within an array element. +-- - { array: { $elemMatch: { value1 : 1, value2 : { $gt: 1 } } } } +-- Restriction: only one $elemMatch allowed in context of one array field. +function Q:ElemMatch(val) end + +--- The $not negatiation for `val` block +-- @usage Q():Not(Q("foo", "bar")) => {"$not" : {"foo" : "bar"}} +function Q:Not(val) end + +--- Greater than (val > arg) +-- @usage Q():F("age"):Gt(29) => {"age" : {"$gt" : 29}} +function Q:Gt(val) end + +--- Greater than or equal (val >= arg) +-- @usage Q():F("age"):Gt(29) => {"age" : {"$gte" : 29}} +function Q:Gte(val) end + +--- Lesser than (val < arg) +-- @usage Q():F("age"):Lt(29) => {"age" : {"$lt" : 29}} +function Q:Lt(val) end + +--- Lesser than or equal (val <= arg) +-- @usage Q():F("age"):Lt(29) => {"age" : {"$lte" : 29}} +function Q:Lte(val) end + +function Q:Icase(val) end + +function Q:Begin(val) end + +function Q:In(val) end + +function Q:NotIn(val) end + +function Q:Bt(val) end + +function Q:StrAnd(val) end + +function Q:StrOr(val) end + +function Q:Inc(val) end + +function Q:Set(val) end + +function Q:AddToSet(val) end + +function Q:AddToSetAll(val) end + +function Q:Pull(val) end + +function Q:PullAll(val) end + +function Q:Upsert(val) end + +function Q:DropAll() end + +function Q:Do(val) end + +function Q:Or(...) end + +function Q:Skip(val) end + +function Q:Max(val) end + +function Q:OrderBy(...) end + +function Q:Fields(...) end + +function Q:NotFields(...) end + + + + + + + + diff --git a/luaejdb/tools/ldoc/COPYRIGHT b/luaejdb/tools/ldoc/COPYRIGHT new file mode 100644 index 0000000..841f141 --- /dev/null +++ b/luaejdb/tools/ldoc/COPYRIGHT @@ -0,0 +1,22 @@ +LDoc License +----------- +Copyright (C) 2011 Steve Donovan. 
+ +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in +all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN +THE SOFTWARE. + diff --git a/luaejdb/tools/ldoc/config.ld b/luaejdb/tools/ldoc/config.ld new file mode 100644 index 0000000..322ba56 --- /dev/null +++ b/luaejdb/tools/ldoc/config.ld @@ -0,0 +1,7 @@ +project='LDoc' +title='LDoc documentation' +description='A Lua documentation tool' +format='discount' +file='ldoc.lua' +dir='out' +readme='docs/doc.md' diff --git a/luaejdb/tools/ldoc/docs/doc.md b/luaejdb/tools/ldoc/docs/doc.md new file mode 100644 index 0000000..e1a690f --- /dev/null +++ b/luaejdb/tools/ldoc/docs/doc.md @@ -0,0 +1,1092 @@ +# LDoc, a Lua Documentation Tool + +@lookup doc.md + +## Introduction + +LDoc is a second-generation documentation tool that can be used as a replacement for +[LuaDoc](http://keplerproject.github.com/luadoc/). It arose out of my need to document my +own projects and only depends on the [Penlight](https://github.com/stevedonovan/Penlight) +libraries. + +It is mostly compatible with LuaDoc, except that certain workarounds are no longer needed. +For instance, it is not so married to the idea that Lua modules should be defined using the +`module` function; this is not only a matter of taste since this has been deprecated in Lua +5.2. + +Otherwise, the output is very similar, which is no accident since the HTML templates are +based directly on LuaDoc. You can ship your own customized templates and style sheets with +your [own project](http://nilnor.github.com/textui/docs/), however. You have an option to +use Markdown to process the documentation, which means no ugly HTML is needed in doc +comments. C/C++ extension modules may be documented in a similar way, although function +names cannot be inferred from the code itself. + +LDoc can provide integrated documentation, with traditional function comments, any documents +in Markdown format, and specified source examples. Lua source in examples and the documents +will be prettified. + +Although there are a fair number of command-line options, the preferred route is to write a +`config.ld` configuration file in Lua format. By convention, if LDoc is simply invoked as +`ldoc .` it will read this file first. In this way, the aim is to make it very easy for +end-users to build your documentation using this simple command. + +## Commenting Conventions + +LDoc follows the conventions established by Javadoc and later by LuaDoc. + +Only 'doc comments' are parsed; these can be started with at least 3 hyphens, or by a empty +comment line with at least 3 hypens: + + --- summary. 
+ -- Description; this can extend over + -- several lines + + ----------------- + -- This will also do. + +You can also use Lua block comments: + + --[[-- + Summary. A description + ...; + ]] + +Any module or script must start with a doc comment; any other files are ignored and a +warning issued. The only exception is if the module starts with an explicit `module` +statement. + +All doc comments start with a summary sentence, that ends with a period or a question mark. +An optional description may follow. Normally the summary sentence will appear in the module +contents. + +After this descriptive text, there will typically be _tags_. These follow the convention +established by Javadoc and widely used in tools for other languages. + + --- foo explodes text. + -- It is a specialized splitting operation on a string. + -- @param text the string + -- @return a table of substrings + function foo (text) + .... + end + +There are also 'tparam' and 'treturn' which let you [specify a type](#Tag_Modifiers): + + -- @tparam string text the string + -- @treturn {string,...} a table of substrings + +There may be multiple 'param' tags, which should document each formal parameter of the +function. For Lua, there can also be multiple 'return' tags + + --- solvers for common equations. + module("solvers", package.seeall) + + --- solve a quadratic equation. + -- @param a first coeff + -- @param b second coeff + -- @param c third coeff + -- @return first root, or nil + -- @return second root, or imaginary root error + function solve (a,b,c) + local disc = b^2 - 4*a*c + if disc < 0 then + return nil,"imaginary roots" + else + disc = math.sqrt(disc) + return (-b + disc)/2*a, + (-b - disc)/2*a + end + end + + ... + +This is the common module style used in Lua 5.1, but it's increasingly common to see less +'magic' ways of creating modules in Lua. Since `module` is deprecated in Lua 5.2, any +future-proof documentation tool needs to handle these styles gracefully: + + --- a test module + -- @module test + + local test = {} + + --- first test. + function test.one() + ... + end + + ... + + return test + +Here the name of the module is explicitly given using the 'module' tag. If you leave this +out, then LDoc will infer the name of the module from the name of the file and its relative +location in the filesystem; this logic is also used for the `module(...)` idiom. (How this +works and when you need to provide extra information is discussed later.) + +It is common to use a local name for a module when declaring its contents. In this case the +'alias' tag can tell LDoc that these functions do belong to the module: + + --- another test. + -- @module test2 + -- @alias M + + local M = {} + + -- first test. + function M.one() + .. + end + + return M + +`M` and `_M` are used commonly enough that LDoc will recognize them as aliases +automatically, but 'alias' allows you to use any identifier. + +LDoc tries to deduce the function name and the formal parameter names from examining the +code after the doc comment. It also recognizes the 'unsugared' way of defining functions as +explicit assignment to a variable: + + --- second test. + M.two = function(...) ... end + +Apart from exported functions, a module usually contains local functions. By default, LDoc +does not include these in the documentation, but they can be enabled using the `--all` flag. +They can be documented just like 'public' functions: + + --- it's clear that boo is local from context. + local function boo(...) .. 
end + + local foo + + --- we need to give a hint here for foo + -- @local here + function foo(...) .. end + +Modules can of course export tables and other values. The classic way to document a table +looks like this: + + --- a useful table of constants + -- @field alpha first correction + -- @field beta second correction + -- @field gamma fudge factor + -- @table constants + +Here the kind of item is made explicit by the 'table' tag; tables have 'fields' in the same +way as functions have parameters. + +This can get tedious, so LDoc will attempt to extract table documentation from code: + + --- a useful table of constants + M.constants = { + alpha = 0.23, -- first correction + beta = 0.443, -- second correction + gamma = 0.01 -- fudge factor + } + +The rule followed here is `NAME = `. If LDoc can't work out the name and +type from the following code, then a warning will be issued, pointing to the file and +location. + +Another kind of module-level type is 'field', such as follows: + + --- module version. + M._VERSION = '0.5' + +That is, a module may contain exported functions, local functions, tables and fields. + +When the code analysis would lead to the wrong type, you can always be explicit. + + --- module contents. + -- @field _CONTENTS + M._CONTENTS = {constants=true,one=true,...} + +The order of tags is not important, but as always, consistency is useful. Tags like 'param' +and 'return' can be specified multiple times, whereas a type tag like 'function' can only +occur once in a comment. The basic rule is that a single doc comment can only document one +entity. + +By default, LDoc will process any file ending in '.lua' or '.luadoc' in a specified +directory; you may point it to a single file as well. A 'project' usually consists of many +modules in one or more _packages_. The generated `index.html` will point to the generated +documentation for each of these modules. + +If only one module or script is documented for a project, then the `index.html` generated +contains the documentation for that module, since an index pointing to one module would be +redundant. + +(If you want to document a script, there is a project-level type 'script' for that.) + +## See References + +The tag 'see' is used to reference other parts of the documentation, and 'usage' can provide +examples of use: + + --------- + -- split a string in two. + -- @param s the string + -- @param delim the delimiter (default space) + -- @return first part + -- @return second part + -- @usage local hello,world = split2("hello world") + -- @see split + funtion split2(s,delim) .. end + +Here it's assumed that 'split' is a function defined in the same module. If you wish to link +to a function in another module, then the reference has to be qualified. + +References to methods use a colon: `myclass:method`; this is for instance how you would +refer to members of a `@type` section. + +The example at `tests/complex` shows how @see references are interpreted: + + complex.util.parse + complex.convert.basic + complex.util + complex.display + complex + +You may of course use the full name of a module or function, but can omit the top-level +namespace - e.g. can refer to the module `util` and the function `display.display_that` +directly. Within a module, you can directly use a function name, e.g. in `display` you can +say `display_this`. + +What applies to functions also applies to any module-level item like tables. New +module-level items can be defined and they will work according to these rules. 
+ +If a reference is not found within the project, LDoc checks to see if it is a reference to a +Lua standard function or table, and links to the online Lua manual. So references like +'table.concat' are handled sensibly. + +References may be made inline using the @\{ref} syntax. This may appear anywhere in the +text, and is more flexible than @see. In particular, it provides one way to document the +type of a parameter or return value when that type has a particular structure: + + ------ + -- extract standard variables. + -- @param s the string + -- @return @\{stdvars} + function extract_std(s) ... end + + ------ + -- standard variables. + -- Use @\{extract_std} to parse a string containing variables, + -- and @\{pack_std} to make such a string. + -- @field length + -- @field duration + -- @field viscosity + -- @table stdvars + +@\{ref} is very useful for referencing your API from code samples and readme text. (I've had +to throw in a spurious backspace to stop expansion in this example.) + +The link text can be changed from the default by the extended syntax @\{ref|text}. + +You can also put references in backticks, like `\`stdvars\``. This is commonly used in +Markdown to indicate code, so it comes naturally when writing documents. It is controlled by +the configuration variable `backtick_references`; the default is `true` if you use Markdown +in your project, but can be specified explicitly in your `config.ld`. + +### Custom @see References + +It's useful to define how to handle references external to a project. For instance, in the +[luaposix](https://github.com/luaposix/luaposix) project we wanted to have `man` references +to the corresponding C function: + + ------------ + -- raise a signal on this process. + -- @see raise(3) + -- @int nsig + -- @return integer error cod + function raise (nsig) + end + +These see references always have this particular form, and the task is to turn them into +online references to the Linux manpages. So in `config.ld` we have: + + local upat = "http://www.kernel.org/doc/man-pages/online/pages/man%s/%s.%s.html" + + custom_see_handler('^(%a+)%((%d)%)$',function(name,section) + local url = upat:format(section,name,section) + local name = name .. '(' ..section..')' + return name, url + end) + +'^(%a+)%((%d)%)$' both matches the pattern and extracts the name and its section. THen it's +a simple matter of building up the appropriate URL. The function is expected to +return _link text_ and _link source_ and the patterns are checked before LDoc tries to resolve +project references. So it is best to make them match as exactly as possible. + +## Sections + +LDoc supports _explicit_ sections. By default, the sections correspond to the pre-existing +types in a module: 'Functions', 'Tables' and 'Fields' (There is another default section +'Local Functions' which only appears if LDoc is invoked with the `--all` flag.) But new +sections can be added; the first mechanism is when you define a new type (say 'macro') a new +section ('Macros') is created to contain these types. There is also a way to declare ad-hoc +sections using the `@section` tag. + +The need occurs when a module has a lot of functions that need to be put into logical +sections. + + --- File functions. + -- Useful utilities for opening foobar format files. + -- @section file + + --- open a file + ... + + --- read a file + ... + + --- Encoding operations. + -- Encoding foobar output in different ways. + -- @section encoding + + ... 
+ +A section doc-comment has the same structure as a normal doc-comment; the summary is used as +the new section title, and the description will be output at the start of the function +details for that section. + +In any case, sections appear under 'Contents' on the left-hand side. See the +[winapi](http://stevedonovan.github.com/winapi/api.html) documentation for an example of how +this looks. + +Arguably a module writer should not write such very long modules, but it is not the job of +the documentation tool to limit the programmer! + +A specialized kind of section is `type`: it is used for documenting classes. The functions +(or fields) within a type section are considered to be the methods of that class. + + --- A File class. + -- @type File + + .... + --- get the modification time. + -- @return standard time since epoch + function File:mtime() + ... + end + +(In an ideal world, we would use the word 'class' instead of 'type', but this would conflict +with the LuaDoc usage.) + +A section continues until the next section is found, `@section end`, or end of file. + +You can put items into an implicit section using the @within tag. This allows you to put +adjacent functions in different sections, so that you are not forced to order your code +in a particular way. + +Sometimes a module may logically span several files. There will be a master module with name +'foo' and other files which when required add functions to that module. If these files have +a @submodule tag, their contents will be placed in the master module documentation. However, +a current limitation is that the master module must be processed before the submodules. + +See the `tests/submodule` example for how this works in practice. + +## Differences from LuaDoc + +LDoc only does 'module' documentation, so the idea of 'files' is redundant. + +One added convenience is that it is easier to name entities: + + ------------ + -- a simple module. + -- (LuaDoc) + -- @class module + -- @name simple + + ------------ + -- a simple module. + -- (LDoc) + -- @module simple + +This is because type names (like 'function', 'module', 'table', etc) can function as tags. +LDoc also provides a means to add new types (e.g. 'macro') using a configuration file which +can be shipped with the source. If you become bored with typing 'param' repeatedly then you +can define an alias for it, such as 'p'. This can also be specified in the configuration file. + +LDoc will also work with C/C++ files, since extension writers clearly have the same +documentation needs as Lua module writers. + +LDoc allows you to attach a _type_ to a parameter or return value + + --- better mangler. + -- @tparam string name + -- @int max length + -- @treturn string mangled name + function strmangler(name,max) + ... + end + +`int` here is short for `tparam int` (see @{Tag_Modifiers}) + +It's common for types to be optional, or have different types, so the type can be like +'?int|string' which renders as '(int or string)', or '?int', which renders as +'(optional int)'. + +LDoc gives the documenter the option to use Markdown to parse the contents of comments. + +Since 1.3, LDoc allows the use of _colons_ instead of @. + + --- a simple function. + -- string name person's name + -- int: age age of person + -- treturn: ?nil|string + -- function check(name,age) + + + +## Adding new Tags + +LDoc tries to be faithful to LuaDoc, but provides some extensions. Aliases for tags can be +defined, and new types declared. + + --- zero function. 
Two new ldoc features here; item types + -- can be used directly as tags, and aliases for tags + -- can be defined in config.ld. + -- @function zero_fun + -- @p k1 first + -- @p k2 second + +Here an alias for 'param' has been defined. If a file `config.ld` is found in the source, +then it will be loaded as Lua data. For example, the configuration for the above module +provides a title and defines an alias for 'param': + + title = "testmod docs" + project = "testmod" + alias("p","param") + +Extra tag _types_ can be defined: + + new_type("macro","Macros") + +And then used as any other type: + + ----- + -- A useful macro. This is an example of a custom type. + -- @macro first_macro + -- @see second_function + +This will also create a new module section called 'Macros'. + +If your new type has arguments or fields, then specify the name: + + new_type("macro","Macros",false,"param") + +(The third argument means that this is not a _project level_ tag) + +Then you may say: + + ----- + -- A macro with arguments. + -- @macro second_macro + -- @param x the argument + +And the arguments will be displayed under the subsection 'param' + + +## Inferring more from Code + +The qualified name of a function will be inferred from any `function` keyword following the +doc comment. LDoc goes further with this kind of code analysis, however. + +Instead of: + + --- first table. + -- @table one + -- @field A alpha + -- @field B beta + M.one = { + A = 1, + B = 2; + } + +you can write: + + --- first table + -- @table one + M.one = { + A = 1, -- alpha + B = 2; -- beta + } + +Simularly, function parameter comments can be directly used: + + ------------ + -- third function. Can also provide parameter comments inline, + -- provided they follow this pattern. + function mod1.third_function( + alpha, -- correction A + beta, -- correction B + gamma -- factor C + ) + ... + end + +As always, explicit tags can override this behaviour if it is inappropriate. + +## Extension modules written in C + +LDoc can process C/C++ files: + + @plain + /*** + Create a table with given array and hash slots. + @function createtable + @param narr initial array slots, default 0 + @param nrec initial hash slots, default 0 + @return the new table + */ + static int l_createtable (lua_State *L) { + .... + +Both `/**` and `///` are recognized as starting a comment block. Otherwise, the tags are +processed in exactly the same way. It is necessary to specify that this is a function with a +given name, since this cannot be reliably be inferred from code. Such a file will need a +module comment, which is treated exactly as in Lua. + +An unknown extension can be associated with a language using a call like +`add_language_extension('lc','c')` in `config.ld`. (Currently the language can only be 'c' +or 'lua'.) + +See 'tests/examples/mylib.c' for the full example. + +## Basic Usage + +For example, to process all files in the 'lua' directory: + + $ ldoc lua + output written to docs/ + +Thereafter the `docs` directory will contain `index.html` which points to individual modules +in the `modules` subdirectory. The `--dir` flag can specify where the output is generated, +and will ensure that the directory exists. The output structure is like LuaDoc: there is an +`index.html` and the individual modules are in the `modules` subdirectory. This applies to +all project-level types, so that you can also get `scripts`, `examples` and `topics` +directories. + +If your modules use `module(...)` then the module name has to be deduced. 
If `ldoc` is run +from the root of the package, then this deduction does not need any help - e.g. if your +package was `foo` then `ldoc foo` will work as expected. If we were actually in the `foo` +directory then `ldoc -b .. .` will correctly deduce the module names. An example would be +generating documentation for LuaDoc itself: + + $ ldoc -b .. /path/to/luadoc + +Without the `-b` setting the base of the package to the _parent_ of the directory, implicit +modules like `luadoc.config` will be incorrectly placed in the global namespace. + +For new-style modules, that don't use `module()`, it is recommended that the module comment +has an explicit `@module PACKAGE.NAME`. If it does not, then `ldoc` will still attempt to +deduce the module name, but may need help with `--package/-b` as above. + +`format = 'markdown'` can be used in your `config.ld` and will be used to process summaries +and descriptions. This requires [markdown.lua](http://www.frykholm.se/files/markdown.lua) by +Niklas Frykholm to be installed (this can be most easily done with `luarocks install +markdown`.) A much faster alternative is +[lua-discount](http://asbradbury.org/projects/lua-discount/) which you can use by setting +`format` to 'discount' after installing using `luarocks install lua-discount`) The +[discount](http://www.pell.portland.or.us/~orc/Code/discount/) Markdown processor +additionally has more features than the pure Lua version, such as PHP-Extra style tables. +As a special case, LDoc will fall back to using `markdown.lua` if it cannot find `discount`. + +`format = 'markdown'` can be used in your `config.ld` and will be used to process summaries +and descriptions. This requires a markdown processor. +LDoc knows how to use: + + - [markdown.lua](http://www.frykholm.se/files/markdown.lua) a pure Lua processor by +Niklas Frykholm (this can be installed easily with `luarocks install markdown`.) + - [lua-discount](http://asbradbury.org/projects/lua-discount/), a faster alternative +(installed with `luarocks install lua-discount`). lua-discount uses the C +[discount](http://www.pell.portland.or.us/~orc/Code/discount/) Markdown processor which has +more features than the pure Lua version, such as PHP-Extra style tables. + - [lunamark](http://jgm.github.com/lunamark/), another pure Lua processor, faster than +markdown, and with extra features (`luarocks install lunamark`). + +You can request the processor you like with `format = 'markdown|discount|lunamark'`, and +LDoc will attempt to use it. If it can't find it, it will look for one of the other +markdown processors. If it can't find any markdown processer, it will fall back to text +processing. + + +A special case is if you simply say 'ldoc .'. Then there _must_ be a `config.ld` file +available in the directory, and it can specify the file: + + file = "mymod.lua" + title = "mymod documentation" + description = "mymod does some simple but useful things" + +`file` can of course point to a directory, just as with the `--file` option. This mode makes +it particularly easy for the user to build the documentation, by allowing you to specify +everything explicitly in the configuration. + +In `config.ld`, `file` may be a Lua table, containing file names or directories; if it has +an `exclude` field then that will be used to exclude files from the list, for example +`{'examples', exclude = {'examples/slow.lua'}}`. + + +## Processing Single Modules + +`--output` can be used to give the output file a different name. 
This is useful for the +special case when a single module file is specified. Here an index would be redundant, so +the single HTML file generated contains the module documentation. + + $ ldoc mylib.lua --> results in docs/index.html + $ ldoc --output mylib mylib.lua --> results in docs/mylib.html + $ ldoc --output mylib --dir html mylib.lua --> results in html/mylib.html + +The default sections used by LDoc are 'Functions', 'Tables' and 'Fields', corresponding to +the built-in types 'function', 'table' and 'field'. If `config.ld` contains something like +`new_type("macro","Macros")` then this adds a new section 'Macros' which contains items of +'macro' type - 'macro' is registered as a new valid tag name. The default template then +presents items under their corresponding section titles, in order of definition. + +## Getting Help about a Module + +There is an option to simply dump the results of parsing modules. Consider the C example +`tests/example/mylib.c': + + @plain + $ ldoc --dump mylib.c + ---- + module: mylib A sample C extension. + Demonstrates using ldoc's C/C++ support. Can either use /// or /*** */ etc. + + function createtable(narr, nrec) + Create a table with given array and hash slots. + narr initial array slots, default 0 + nrec initial hash slots, default 0 + + function solve(a, b, c) + Solve a quadratic equation. + a coefficient of x^2 + b coefficient of x + c constant + return {"first root","second root"} + +This is useful to quickly check for problems; here we see that `createable` did not have a +return tag. + +LDoc takes this idea of data dumping one step further. If used with the `-m` flag it will +look up an installed Lua module and parse it. If it has been marked up in LuaDoc-style then +you will get a handy summary of the contents: + + @plain + $ ldoc -m pl.pretty + ---- + module: pl.pretty Pretty-printing Lua tables. + * read(s) - read a string representation of a Lua table. + * write(tbl, space, not_clever) - Create a string representation of a Lua table. + + * dump(t, ...) - Dump a Lua table out to a file or stdout. + +You can specify a fully qualified function to get more information: + + @plain + $ ldoc -m pl.pretty.write + + function write(tbl, space, not_clever) + create a string representation of a Lua table. + tbl {table} Table to serialize to a string. + space {string} (optional) The indent to use. + Defaults to two spaces. + not_clever {bool} (optional) Use for plain output, e.g {['key']=1}. + Defaults to false. + +LDoc knows about the basic Lua libraries, so that it can be used as a handy console reference: + + @plain + $> ldoc -m assert + + function assert(v, message) + Issues an error when the value of its argument `v` is false (i.e., + nil or false); otherwise, returns all its arguments. + `message` is an error + message; when absent, it defaults to "assertion failed!" + v + message + +Thanks to Mitchell's [Textadept](http://foicica.com/textadept/) project, LDoc has a +set of `.luadoc` files for all the standard tables, plus +[LuaFileSystem](http://keplerproject.github.com/luafilesystem/) and +[LPeg](http://www.inf.puc-rio.br/~roberto/lpeg/lpeg.html). + + @plain + $> ldoc -m lfs.lock + + function lock(filehandle, mode, start, length) + Locks a file or a part of it. + This function works on open files; the file + handle should be specified as the first argument. The string mode could be + either r (for a read/shared lock) or w (for a write/exclusive lock). 
The + optional arguments start and length can be used to specify a starting point + and its length; both should be numbers. + Returns true if the operation was successful; in case of error, it returns + nil plus an error string. + filehandle + mode + start + length + +## Anatomy of a LDoc-generated Page + +[winapi](http://stevedonovan.github.com/winapi/api.html) can be used as a good example of a +module that uses extended LDoc features. + +The _navigation section_ down the left has several parts: + + - The project name ('project' in the config) + - A project description ('description') + - ''Contents'' of the current page + - ''Modules'' listing all the modules in this project + +Note that `description` will be passed through Markdown, if it has been specified for the +project. This gives you an opportunity to make lists of links, etc; any '##' headers will be +formatted like the other top-level items on the navigation bar. + +'Contents' is automatically generated. It will contain any explicit sections, if they have +been used. Otherwise you will get the usual categories: 'Functions', 'Tables' and 'Fields'. + +'Modules' will appear for any project providing Lua libraries; there may also be a 'Scripts' +section if the project contains Lua scripts. For example, +[LuaMacro](http://stevedonovan.github.com/LuaMacro/docs/api.html) has a driver script `luam` +in this section. The +[builtin](http://stevedonovan.github.com/LuaMacro/docs/modules/macro.builtin.html) module +only defines macros, which are defined as a _custom tag type_. + +The _content section_ on the right shows: + + - The module summary and description + - The contents summary, per section as above + - The detailed documentation for each item + +As before, the description can use Markdown. The summary contains the contents of each +section as a table, with links to the details. This is where the difference between an +item's summary and an item's description is important; the first will appear in the contents +summary. The item details show the item name and its summary again, followed by the +description. There are then sections for the following tags: 'param', 'usage', 'return' and +'see' in that order. (For tables, 'Fields' is used instead of 'Parameters' but internally +fields of a table are stored as the 'param' tag.) + +You can of course customize the default template, but there are some parameters that can +control what the template will generate. Setting `one` to `true` in your configuration file +will give a _one-column_ layout, which can be easier to use as a programming reference. You +can suppress the contents summary with `no_summary`. + +## Customizing the Page + +Setting `no_return_or_parms` to `true` will suppress the display of 'param' and 'return' +tags. This may appeal to programmers who dislike the traditional @tag soup xDoc style and +prefer to comment functions just with a description. This is particularly useful when using +Markdown in a stylized way to specify arguments: + + --------- + -- This extracts the shortest common substring from the strings _s1_ and _s2_ + function M.common_substring(s1,s2) + +Here I've chosen to italicise parameter names; the main thing is to be consistent. + +This style is close to the Python [documentation +standard](http://docs.python.org/library/array.html#module-array), especially when used with +`no_summary`. + +It is also very much how the Lua documentation is ordered. 
For instance, this configuration +file formats the built-in documentation for the Lua global functions in a way which is close +to the original: + + project = 'Lua' + description = 'Lua Standard Libraries' + file = {'ldoc/builtin',exclude = {'ldoc/builtin/globals.lua'}} + no_summary = true + no_return_or_parms = true + format = 'discount' + + +Generally, using Markdown gives you the opportunity to structure your documentation in any +way you want; particularly if using lua-discount and its [table +syntax](http://michelf.com/projects/php-markdown/extra/#table); the desired result can often +be achieved then by using a custom style sheet. + +## Examples + +It has been long known that documentation generated just from the source is not really +adequate to explain _how_ to use a library. People like reading narrative documentation, +and they like looking at examples. Previously I found myself dealing with source-generated +and writer-generated documentation using different tools, and having to match these up. + +LDoc allows for source examples to be included in the documentation. For example, see the +online documentation for [winapi](http://stevedonovan.github.com/winapi/api.html). The +function `utf8_expand` has a `@see` reference to 'testu.lua' and following that link gives +you a pretty-printed version of the code. + +The line in the `config.ld` that enables this is: + + examples = {'examples', exclude = {'examples/slow.lua'}} + +That is, all files in the `examples` folder are to be pretty-printed, except for `slow.lua` +which is meant to be called from one of the examples. The see-reference to `testu.lua` +resolves to 'examples/testu.lua.html'. + +Examples may link back to the API documentation, for instance the example `input.lua` has a +@\{spawn_process} inline reference. + +## Readme files + +Like all good Github projects, Winapi has a `readme.md`: + + readme = "readme.md" + +This goes under the 'Topics' global section; the 'Contents' of this document is generated +from the second-level (##) headings of the readme. + +Readme files are always processed with the current Markdown processor, but may also contain @\{} references back +to the documentation and to example files. Any symbols within backticks will be expanded as +references, if possible. As with doc comments, a link to a standard Lua function like +@\{os.execute} will work as well. Any code sections will be pretty-printed as Lua, unless +the first indented line is '@plain'. (See the source for this readme to see how it's used.) + +Another name for `readme` is `topics`, which is more descriptive. From LDoc 1.2, +`readme/topics` can be a list of documents. These act as a top-level table-of-contents for +your documentation. Currently, if you want them in a particular order, then use names like +`01-introduction.md` etc which sort appropriately. + +The first line of a document may be a Markdown `#` title. If so, then LDoc will regard the +next level as the subheadings, normally second-level `##`. But if the title is already +second-level, then third-level headings will be used `###`, and so forth. The implication is +that the first heading must be top-level relative to the headings that follow, and must +start at the first line. + +A reference like @\{string.upper} is unambiguous, and will refer to the online Lua manual. +In a project like Penlight, it can get tedious to have to write out fully qualified names +like @\{pl.utils.printf}. 
The first simplification is to use the `package` field to resolve unknown references, which in this case is 'pl'. (Previously we discussed how `package` is used to tell LDoc where the base package is in cases where the module author wishes to remain vague, but it does double-duty here.) A further level of simplification comes from the @lookup directive in documents, which must start at the first column on its own line. For instance, if I am talking about `pl.utils`, then I can say "@lookup utils" and thereafter references like @\{printf} will resolve correctly.

If you look at the source for this document, you will see a `@lookup doc.md` which allows direct references to sections like @{Readme_files|this}.

Remember that the default is for references in backticks to be resolved; unlike @ references, it is not an error if the reference cannot be found.

The _sections_ of a document (the second-level headings) are also references. This particular section can be referred to as @\{doc.md.Resolving_References_in_Documents} - the rule is that any non-alphabetic character is replaced by an underscore.


## Tag Modifiers

Any tag may have _tag modifiers_. For instance, you may say @\param[type=number] and this associates the modifier `type` with value `number` with this particular param tag. A shorthand can be introduced for this common case: "@tparam" followed by the type; in the same way @\treturn is defined.

This is useful for larger projects where you want to provide the argument and return value types for your API, in a structured way that can be easily extracted later. There is a useful function for creating new tags that can be used in `config.ld`:

    tparam_alias('string','string')

That is, "@string" will now have the same meaning as "@tparam string".

From 1.3, the following standard type aliases are predefined:

 * `string`
 * `number`
 * `int`
 * `bool` Lua 'boolean' type
 * `func` 'function' (using 'function' would conflict with the type)
 * `tab` 'table'
 * `thread`

The exact form of the type expression is not defined, but here is a suggested scheme:

    number                -- a plain type
    Bonzo                 -- a known type; a reference link will be generated
    {string,number}       -- a 'list' tuple, built from type expressions
    {A=string,N=number}   -- a 'struct' tuple, ditto
    {Bonzo,...}           -- an array of Bonzo objects
    {[string]=Bonzo,...}  -- a map of Bonzo objects with string keys
    Array(Bonzo)          -- (assuming that Array is a container)

Currently the `type` modifier is the only one known and used by LDoc when generating HTML output. However, any other modifiers are allowed and are available for use with your own templates or for extraction by your own tools.

The `alias` function within configuration files has been extended so that alias tags can be defined as a tag plus a set of modifiers. So `tparam` is defined as:

    alias('tparam',{'param',modifiers={type="$1"}})

As an extension, you're allowed to use '@param' tags in table definitions. This makes it possible to use a type alias like '@string' to describe fields, since they will expand to 'param'.

## Fields allowed in `config.ld`

These mostly have the same meaning as the corresponding parameters:

 - `file` a file or directory containing sources. In `config.ld` this can also be a table of files and directories.
 - `project` name of project, used as title in top left
 - `title` page title, default 'Reference'
 - `package` explicit base package name; also used for resolving references in documents
 - `all` show local functions, etc. as well in the docs
 - `format` markup processor, can be 'plain' (default), 'markdown' or 'discount'
 - `output` output name (default 'index')
 - `dir` directory for output files (default 'docs')
 - `ext` extension for output (default 'html')
 - `one` use a one-column layout
 - `style`, `template`: together these specify the directories for the style and the template. In `config.ld` they may also be `true`, meaning use the same directory as the configuration file.

These only appear in `config.ld`:

 - `description` a project description used under the project title
 - `examples` a directory or file: can be a table
 - `readme` name of readme file (to be processed with Markdown)
 - `no_return_or_parms` don't show parameters or return values in output
 - `backtick_references` whether references in backticks will be resolved
 - `manual_url` point to an alternative or local location for the Lua manual, e.g. 'file:///D:/dev/lua/projects/lua-5.1.4/doc/manual.html'
 - `no_summary` suppress the Contents summary

Available functions are:

 - `alias(a,tag)` provide an alias `a` for the tag `tag`, for instance `p` as short for `param`
 - `add_language_extension(ext,lang)` here `lang` may be either 'c' or 'lua', and `ext` is an extension to be recognized as this language
 - `add_section`
 - `new_type(tag,header,project_level)` used to add new tags, which are put in their own section `header`. They may be 'project level'.
 - `tparam_alias(name,type)` for instance, you may wish that `Object` means `@\tparam Object`.
 - `custom_see_handler(pattern,handler)`. If a reference matches `pattern`, then the extracted values will be passed to `handler`. It is expected to return link text and a suitable URI. (This match will happen before default processing.)

## Annotations and Searching for Tags

Annotations are special tags that can be used to keep track of internal development status. The known annotations are 'todo', 'fixme' and 'warning'. They may occur in regular function/table doc comments, or on their own anywhere in the code.

    --- Testing annotations
    -- @module annot1
    ...
    --- first function.
    -- @todo check if this works!
    function annot1.first ()
      if boo then

      end
      --- @fixme what about else?
    end

Although not currently rendered by the template as HTML, they can be extracted by the `--tags` command, which is given a comma-separated list of tags to list.

    @plain
    D:\dev\lua\LDoc\tests> ldoc --tags todo,fixme annot1.lua
    d:\dev\lua\ldoc\tests\annot1.lua:14: first: todo check if this works!
    d:\dev\lua\ldoc\tests\annot1.lua:19: first-fixme1: fixme what about else?


## Generating HTML

LDoc, like LuaDoc, generates output HTML using a template, in this case `ldoc_ltp.lua`. This is expanded by the powerful but simple preprocessor devised originally by [Rici Lake](http://lua-users.org/wiki/SlightlyLessSimpleLuaPreprocessor) which is now part of Penlight. There are two rules - any line starting with '#' is Lua code, which can also be embedded with '$(...)'.

    <h2>Contents</h2>
    <ul>
    # for kind,items in module.kinds() do
    <li>$(kind)</li>
    # end
    </ul>
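
If you want to see how these two rules behave outside of LDoc, the same preprocessor ships with Penlight as `pl.template`. The sketch below is only an illustration - the `kinds` list is made up, and the real `ldoc.ltp` template receives its data from LDoc itself - but it shows a '#' line driving a loop and a '$(...)' expression being expanded:

    local template = require 'pl.template'

    -- '#' starts a line of Lua code; '$(...)' embeds an expression
    local tmpl = [[
    <ul>
    # for _, kind in ipairs(kinds) do
    <li>$(kind)</li>
    # end
    </ul>
    ]]

    -- the second argument is the environment the template code runs in,
    -- so anything it calls (here ipairs) must be supplied explicitly
    local res = template.substitute(tmpl, { ipairs = ipairs,
                                            kinds = {'Functions', 'Tables', 'Fields'} })
    print(res)

The full `ldoc.ltp` is driven by exactly the same two rules, just over the real module data.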
+ +This is then styled with `ldoc.css`. Currently the template and stylesheet is very much +based on LuaDoc, so the results are mostly equivalent; the main change that the template has +been more generalized. The default location (indicated by '!') is the directory of `ldoc.lua`. + +You may customize how you generate your documentation by specifying an alternative style +sheet and/or template, which can be deployed with your project. The parameters are `--style` +and `--template`, which give the directories where `ldoc.css` and `ldoc.ltp` are to be +found. If `config.ld` contains these variables, they are interpreted slightly differently; +if they are true, then it means 'use the same directory as config.ld'; otherwise they must +be a valid directory relative to the ldoc invocation. + +An example of fully customized documentation is `tests/example/style`: this is what you +could call 'minimal Markdown style' where there is no attempt to tag things (except +emphasizing parameter names). The narrative alone _can_ to be sufficient, if it is written +appropriately. + +Of course, there's no reason why LDoc must always generate HTML. `--ext` defines what output +extension to use; this can also be set in the configuration file. So it's possible to write +a template that converts LDoc output to LaTex, for instance. The separation of processing +and presentation makes this kind of new application possible with LDoc. + +## Internal Data Representation + +The `--dump` flag gives a rough text output on the console. But there is a more +customizeable way to process the output data generated by LDoc, using the `--filter` +parameter. This is understood to be a fully qualified function (module + name). For example, +try + + $ ldoc --filter pl.pretty.dump mylib.c + +to see a raw dump of the data. (Simply using `dump` as the value here would be a shorthand +for `pl.pretty.dump`.) This is potentially very powerful, since you may write arbitrary Lua +code to extract the information you need from your project. + +For instance, a file `custom.lua` like this: + + return { + filter = function (t) + for _, mod in ipairs(t) do + print(mod.type,mod.name,mod.summary) + end + end + } + +Can be used like so: + + ~/LDoc/tests/example$ ldoc --filter custom.filter mylib.c + module mylib A sample C extension. + +The basic data structure is straightforward: it is an array of 'modules' (project-level +entities, including scripts) which each contain an `item` array (functions, tables and so +forth). + +For instance, to find all functions which don't have a @return tag: + + return { + filter = function (t) + for _, mod in ipairs(t) do + for _, item in ipairs(mod.items) do + if item.type == 'function' and not item.ret then + print(mod.name,item.name,mod.file,item.lineno) + end + end + end + end + } + +The internal naming is not always so consistent; `ret` corresponds to @return, and `params` +corresponds to @param. `item.params` is an array of the function parameters, in order; it +is also a map from these names to the individual descriptions of the parameters. + +`item.modifiers` is a table where the keys are the tags and the values are arrays of +modifier tables. The standard tag aliases `tparam` and `treturn` attach a `type` modifier +to their tags. + + diff --git a/luaejdb/tools/ldoc/ldoc.lua b/luaejdb/tools/ldoc/ldoc.lua new file mode 100644 index 0000000..fb209ab --- /dev/null +++ b/luaejdb/tools/ldoc/ldoc.lua @@ -0,0 +1,615 @@ +#!/usr/bin/env lua +--------------- +-- ## ldoc, a Lua documentation generator. 
+-- +-- Compatible with luadoc-style annotations, but providing +-- easier customization options. +-- +-- C/C++ support for Lua extensions is provided. +-- +-- Available from LuaRocks as 'ldoc' and as a [Zip file](http://stevedonovan.github.com/files/ldoc-1.3.0.zip) +-- +-- [Github Page](https://github.com/stevedonovan/ldoc) +-- +-- @author Steve Donovan +-- @copyright 2011 +-- @license MIT/X11 +-- @script ldoc + +local class = require 'pl.class' +local app = require 'pl.app' +local path = require 'pl.path' +local dir = require 'pl.dir' +local utils = require 'pl.utils' +local List = require 'pl.List' +local stringx = require 'pl.stringx' +local tablex = require 'pl.tablex' + + +local append = table.insert + +local lapp = require 'pl.lapp' + +-- so we can find our private modules +app.require_here() + +--- @usage +local usage = [[ +ldoc, a documentation generator for Lua, vs 1.3.1 + -d,--dir (default docs) output directory + -o,--output (default 'index') output name + -v,--verbose verbose + -a,--all show local functions, etc, in docs + -q,--quiet suppress output + -m,--module module docs as text + -s,--style (default !) directory for style sheet (ldoc.css) + -l,--template (default !) directory for template (ldoc.ltp) + -1,--one use one-column output layout + -p,--project (default ldoc) project name + -t,--title (default Reference) page title + -f,--format (default plain) formatting - can be markdown, discount or plain + -b,--package (default .) top-level package basename (needed for module(...)) + -x,--ext (default html) output file extension + -c,--config (default config.ld) configuration name + -i,--ignore ignore any 'no doc comment or no module' warnings + -D,--define (default none) set a flag to be used in config.ld + -C,--colon use colon style + -B,--boilerplate ignore first comment in source files + -M,--merge allow module merging + --dump debug output dump + --filter (default none) filter output as Lua data (e.g pl.pretty.dump) + --tags (default none) show all references to given tags, comma-separated + (string) source file or directory containing source + + `ldoc .` reads options from an `config.ld` file in same directory; + `ldoc -c path/to/myconfig.ld .` reads options from `path/to/myconfig.ld` +]] +local args = lapp(usage) +local lfs = require 'lfs' +local doc = require 'ldoc.doc' +local lang = require 'ldoc.lang' +local tools = require 'ldoc.tools' +local global = require 'ldoc.builtin.globals' +local markup = require 'ldoc.markup' +local parse = require 'ldoc.parse' +local KindMap = tools.KindMap +local Item,File,Module = doc.Item,doc.File,doc.Module +local quit = utils.quit + + +class.ModuleMap(KindMap) +local ModuleMap = ModuleMap + +function ModuleMap:_init () + self.klass = ModuleMap + self.fieldname = 'section' +end + +ModuleMap:add_kind('function','Functions','Parameters') +ModuleMap:add_kind('table','Tables','Fields') +ModuleMap:add_kind('field','Fields') +ModuleMap:add_kind('lfunction','Local Functions','Parameters') +ModuleMap:add_kind('annotation','Issues') + + +class.ProjectMap(KindMap) +ProjectMap.project_level = true + +function ProjectMap:_init () + self.klass = ProjectMap + self.fieldname = 'type' +end + +ProjectMap:add_kind('module','Modules') +ProjectMap:add_kind('script','Scripts') +ProjectMap:add_kind('topic','Topics') +ProjectMap:add_kind('example','Examples') + +local lua, cc = lang.lua, lang.cc + +local file_types = { + ['.lua'] = lua, + ['.ldoc'] = lua, + ['.luadoc'] = lua, + ['.c'] = cc, + ['.cpp'] = cc, + ['.cxx'] = cc, + ['.C'] = cc, + ['.mm'] = cc +} 
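-- Note: configuration files can register further extensions in this table
-- through ldoc.add_language_extension(), defined below.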
+ +------- ldoc external API ------------ + +-- the ldoc table represents the API available in `config.ld`. +local ldoc = {} +local add_language_extension + +local function override (field) + if ldoc[field] ~= nil then args[field] = ldoc[field] end +end + +-- aliases to existing tags can be defined. E.g. just 'p' for 'param' +function ldoc.alias (a,tag) + doc.add_alias(a,tag) +end + +-- standard aliases -- + +ldoc.alias('tparam',{'param',modifiers={type="$1"}}) +ldoc.alias('treturn',{'return',modifiers={type="$1"}}) +ldoc.alias('tfield',{'field',modifiers={type="$1"}}) + +function ldoc.tparam_alias (name,type) + type = type or name + ldoc.alias(name,{'param',modifiers={type=type}}) +end + +ldoc.tparam_alias 'string' +ldoc.tparam_alias 'number' +ldoc.tparam_alias 'int' +ldoc.tparam_alias 'bool' +ldoc.tparam_alias 'func' +ldoc.tparam_alias 'tab' +ldoc.tparam_alias 'thread' + +function ldoc.add_language_extension(ext, lang) + lang = (lang=='c' and cc) or (lang=='lua' and lua) or quit('unknown language') + if ext:sub(1,1) ~= '.' then ext = '.'..ext end + file_types[ext] = lang +end + +function ldoc.add_section (name, title, subname) + ModuleMap:add_kind(name,title,subname) +end + +-- new tags can be added, which can be on a project level. +function ldoc.new_type (tag, header, project_level,subfield) + doc.add_tag(tag,doc.TAG_TYPE,project_level) + if project_level then + ProjectMap:add_kind(tag,header,subfield) + else + ModuleMap:add_kind(tag,header,subfield) + end +end + +function ldoc.manual_url (url) + global.set_manual_url(url) +end + +function ldoc.custom_see_handler(pat, handler) + doc.add_custom_see_handler(pat, handler) +end + +local ldoc_contents = { + 'alias','add_language_extension','new_type','add_section', 'tparam_alias', + 'file','project','title','package','format','output','dir','ext', 'topics', + 'one','style','template','description','examples', 'pretty', + 'readme','all','manual_url', 'ignore', 'colon','boilerplate','merge', 'wrap', + 'no_return_or_parms','no_summary','full_description','backtick_references', 'custom_see_handler', +} +ldoc_contents = tablex.makeset(ldoc_contents) + +local function loadstr (ldoc,txt) + local chunk, err + local load + -- Penlight's Lua 5.2 compatibility has wobbled over the years... + if not rawget(_G,'loadin') then -- Penlight 0.9.5 + -- Penlight 0.9.7; no more global load() override + load = load or utils.load + chunk,err = load(txt,'config',nil,ldoc) + else + chunk,err = loadin(ldoc,txt) + end + return chunk, err +end + +-- any file called 'config.ld' found in the source tree will be +-- handled specially. It will be loaded using 'ldoc' as the environment. +local function read_ldoc_config (fname) + local directory = path.dirname(fname) + if directory == '' then + directory = '.' 
+ end + local chunk, err, ok + if args.filter == 'none' then + print('reading configuration from '..fname) + end + local txt,not_found = utils.readfile(fname) + if txt then + chunk, err = loadstr(ldoc,txt) + if chunk then + if args.define ~= 'none' then ldoc[args.define] = true end + ok,err = pcall(chunk) + end + end + if err then quit('error loading config file '..fname..': '..err) end + for k in pairs(ldoc) do + if not ldoc_contents[k] then + quit("this config file field/function is unrecognized: "..k) + end + end + return directory, not_found +end + +local quote = tools.quote +--- processing command line and preparing for output --- + +local F +local file_list = List() +File.list = file_list +local config_dir + + +local ldoc_dir = arg[0]:gsub('[^/\\]+$','') +local doc_path = ldoc_dir..'/ldoc/builtin/?.lua' + +-- ldoc -m is expecting a Lua package; this converts this to a file path +if args.module then + -- first check if we've been given a global Lua lib function + if args.file:match '^%a+$' and global.functions[args.file] then + args.file = 'global.'..args.file + end + local fullpath,mod,on_docpath = tools.lookup_existing_module_or_function (args.file, doc_path) + if not fullpath then + quit(mod) + else + args.file = fullpath + args.module = mod + end +end + +local abspath = tools.abspath + +-- a special case: 'ldoc .' can get all its parameters from config.ld +if args.file == '.' then + local err + config_dir,err = read_ldoc_config(args.config) + if err then quit("no "..quote(args.config).." found") end + local config_path = path.dirname(args.config) + if config_path ~= '' then + print('changing to directory',config_path) + lfs.chdir(config_path) + end + config_is_read = true + args.file = ldoc.file or '.' + if args.file == '.' then + args.file = lfs.currentdir() + elseif type(args.file) == 'table' then + for i,f in ipairs(args.file) do + args.file[i] = abspath(f) + print(args.file[i]) + end + else + args.file = abspath(args.file) + end +else + args.file = abspath(args.file) +end + +local source_dir = args.file +if type(source_dir) == 'table' then + source_dir = source_dir[1] +end +if type(source_dir) == 'string' and path.isfile(source_dir) then + source_dir = path.splitpath(source_dir) +end + +---------- specifying the package for inferring module names -------- +-- If you use module(...), or forget to explicitly use @module, then +-- ldoc has to infer the module name. There are three sensible values for +-- `args.package`: +-- +-- * '.' the actual source is in an immediate subdir of the path given +-- * '..' the path given points to the source directory +-- * 'NAME' explicitly give the base module package name +-- + +local function setup_package_base() + if ldoc.package then args.package = ldoc.package end + if args.package == '.' then + args.package = source_dir + elseif args.package == '..' then + args.package = path.splitpath(source_dir) + elseif not args.package:find '[\\/]' then + local subdir,dir = path.splitpath(source_dir) + if dir == args.package then + args.package = subdir + elseif path.isdir(path.join(source_dir,args.package)) then + args.package = source_dir + else + quit("args.package is not the name of the source directory") + end + end +end + + +--------- processing files --------------------- +-- ldoc may be given a file, or a directory. `args.file` may also be specified in config.ld +-- where it is a list of files or directories. If specified on the command-line, we have +-- to find an optional associated config.ld, if not already loaded. 
+ +if ldoc.ignore then args.ignore = true end + +local function process_file (f, flist) + local ext = path.extension(f) + local ftype = file_types[ext] + if ftype then + if args.verbose then print(path.basename(f)) end + local F,err = parse.file(f,ftype,args) + if err then + if F then + F:warning("internal LDoc error") + end + quit(err) + end + flist:append(F) + end +end + +local process_file_list = tools.process_file_list + +setup_package_base() + +override 'colon' +override 'merge' + +if type(args.file) == 'table' then + -- this can only be set from config file so we can assume it's already read + process_file_list(args.file,'*.*',process_file, file_list) + if #file_list == 0 then quit "no source files specified" end +elseif path.isdir(args.file) then + local files = List(dir.getallfiles(args.file,'*.*')) + -- use any configuration file we find, if not already specified + if not config_dir then + local config_files = files:filter(function(f) + return path.basename(f) == args.config + end) + if #config_files > 0 then + config_dir = read_ldoc_config(config_files[1]) + if #config_files > 1 then + print('warning: other config files found: '..config_files[2]) + end + end + end + for f in files:iter() do + process_file(f, file_list) + end + if #file_list == 0 then + quit(quote(args.file).." contained no source files") + end +elseif path.isfile(args.file) then + -- a single file may be accompanied by a config.ld in the same dir + if not config_dir then + config_dir = path.dirname(args.file) + if config_dir == '' then config_dir = '.' end + local config = path.join(config_dir,args.config) + if path.isfile(config) then + read_ldoc_config(config) + end + end + process_file(args.file, file_list) + if #file_list == 0 then quit "unsupported file extension" end +else + quit ("file or directory does not exist: "..quote(args.file)) +end + +-- create the function that renders text (descriptions and summaries) +-- (this also will initialize the code prettifier used) +override 'format' +override 'pretty' +ldoc.markup = markup.create(ldoc, args.format,args.pretty) + +------ 'Special' Project-level entities --------------------------------------- +-- Examples and Topics do not contain code to be processed for doc comments. +-- Instead, they are intended to be rendered nicely as-is, whether as pretty-lua +-- or as Markdown text. Treating them as 'modules' does stretch the meaning of +-- of the term, but allows them to be treated much as modules or scripts. +-- They define an item 'body' field (containing the file's text) and a 'postprocess' +-- field which is used later to convert them into HTML. They may contain @{ref}s. 
+ +local function add_special_project_entity (f,tags,process) + local F = File(f) + tags.name = path.basename(f) + local text = utils.readfile(f) + local item = F:new_item(tags,1) + if process then + text = process(F, text) + end + F:finish() + file_list:append(F) + item.body = text + return item, F +end + +if type(ldoc.examples) == 'string' then + ldoc.examples = {ldoc.examples} +end +if type(ldoc.examples) == 'table' then + local prettify = require 'ldoc.prettify' + + process_file_list (ldoc.examples, '*.lua', function(f) + local item = add_special_project_entity(f,{ + class = 'example', + }) + -- wrap prettify for this example so it knows which file to blame + -- if there's a problem + item.postprocess = function(code) return prettify.lua(f,code,0,true) end + end) +end + +ldoc.readme = ldoc.readme or ldoc.topics +if type(ldoc.readme) == 'string' then + ldoc.readme = {ldoc.readme} +end +if type(ldoc.readme) == 'table' then + process_file_list(ldoc.readme, '*.md', function(f) + local item, F = add_special_project_entity(f,{ + class = 'topic' + }, markup.add_sections) + -- add_sections above has created sections corresponding to the 2nd level + -- headers in the readme, which are attached to the File. So + -- we pass the File to the postprocesser, which will insert the section markers + -- and resolve inline @ references. + item.postprocess = function(txt) return ldoc.markup(txt,F) end + end) +end + +-- extract modules from the file objects, resolve references and sort appropriately --- + +local first_module +local project = ProjectMap() +local module_list = List() +module_list.by_name = {} + +local modcount = 0 + +for F in file_list:iter() do + for mod in F.modules:iter() do + if not first_module then first_module = mod end + if doc.code_tag(mod.type) then modcount = modcount + 1 end + module_list:append(mod) + module_list.by_name[mod.name] = mod + end +end + +for mod in module_list:iter() do + if not args.module then -- no point if we're just showing docs on the console + mod:resolve_references(module_list) + end + project:add(mod,module_list) +end + +-- the default is not to show local functions in the documentation. +if not args.all and not ldoc.all then + for mod in module_list:iter() do + mod:mask_locals() + end +end + +table.sort(module_list,function(m1,m2) + return m1.name < m2.name +end) + +ldoc.single = modcount == 1 and first_module or nil + + +-------- three ways to dump the object graph after processing ----- + +-- ldoc -m will give a quick & dirty dump of the module's documentation; +-- using -v will make it more verbose +if args.module then + if #module_list == 0 then quit("no modules found") end + if args.module == true then + file_list[1]:dump(args.verbose) + else + local fun = module_list[1].items.by_name[args.module] + if not fun then quit(quote(args.module).." is not part of "..quote(args.file)) end + fun:dump(true) + end + return +end + +-- ldoc --dump will do the same as -m, except for the currently specified files +if args.dump then + for mod in module_list:iter() do + mod:dump(true) + end + os.exit() +end +if args.tags ~= 'none' then + local tagset = {} + for t in stringx.split(args.tags,','):iter() do + tagset[t] = true + end + for mod in module_list:iter() do + mod:dump_tags(tagset) + end + os.exit() +end + +-- ldoc --filter mod.name will load the module `mod` and pass the object graph +-- to the function `name`. As a special case --filter dump will use pl.pretty.dump. 
+if args.filter ~= 'none' then + doc.filter_objects_through_function(args.filter, module_list) + os.exit() +end + +ldoc.css, ldoc.templ = 'ldoc.css','ldoc.ltp' + +local function style_dir (sname) + local style = ldoc[sname] + local dir + if style then + if style == true then + dir = config_dir + elseif type(style) == 'string' and path.isdir(style) then + dir = style + else + quit(quote(tostring(style)).." is not a directory") + end + args[sname] = dir + end +end + + +-- the directories for template and stylesheet can be specified +-- either by command-line '--template','--style' arguments or by 'template and +-- 'style' fields in config.ld. +-- The assumption here is that if these variables are simply true then the directory +-- containing config.ld contains a ldoc.css and a ldoc.ltp respectively. Otherwise +-- they must be a valid subdirectory. + +style_dir 'style' +style_dir 'template' + +-- can specify format, output, dir and ext in config.ld +override 'output' +override 'dir' +override 'ext' +override 'one' +override 'boilerplate' + +if not args.ext:find '^%.' then + args.ext = '.'..args.ext +end + +if args.one then + ldoc.css = 'ldoc_one.css' +end + +if args.style == '!' or args.template == '!' then + -- '!' here means 'use built-in templates' + local tmpdir = path.join(path.is_windows and os.getenv('TMP') or '/tmp','ldoc') + if not path.isdir(tmpdir) then + lfs.mkdir(tmpdir) + end + local function tmpwrite (name) + utils.writefile(path.join(tmpdir,name),require('ldoc.html.'..name:gsub('%.','_'))) + end + if args.style == '!' then + tmpwrite(ldoc.templ) + args.style = tmpdir + end + if args.template == '!' then + tmpwrite(ldoc.css) + args.template = tmpdir + end +end + +ldoc.log = print +ldoc.kinds = project +ldoc.modules = module_list +ldoc.title = ldoc.title or args.title +ldoc.project = ldoc.project or args.project +ldoc.package = args.package:match '%a+' and args.package or nil + +local html = require 'ldoc.html' + +html.generate_output(ldoc, args, project) + +if args.verbose then + print 'modules' + for k in pairs(module_list.by_name) do print(k) end +end + + diff --git a/luaejdb/tools/ldoc/ldoc/SciTE.properties b/luaejdb/tools/ldoc/ldoc/SciTE.properties new file mode 100644 index 0000000..788e228 --- /dev/null +++ b/luaejdb/tools/ldoc/ldoc/SciTE.properties @@ -0,0 +1,3 @@ +tabsize=3 +indent.size=3 +use.tabs=0 diff --git a/luaejdb/tools/ldoc/ldoc/builtin/coroutine.lua b/luaejdb/tools/ldoc/ldoc/builtin/coroutine.lua new file mode 100644 index 0000000..5bef560 --- /dev/null +++ b/luaejdb/tools/ldoc/ldoc/builtin/coroutine.lua @@ -0,0 +1,48 @@ +--- creating and controlling coroutines. + +module 'coroutine' + +--- +-- Creates a new coroutine, with body `f`. `f` must be a Lua +-- function. Returns this new coroutine, an object with type `"thread"`. +function coroutine.create(f) end + +--- +-- Starts or continues the execution of coroutine `co`. The first time +-- you resume a coroutine, it starts running its body. The values +-- ... are passed as the arguments to the body function. If the coroutine +-- has yielded, `resume` restarts it; the values ... are passed +-- as the results from the yield. +-- If the coroutine runs without any errors, `resume` returns true plus any +-- values passed to `yield` (if the coroutine yields) or any values returned +-- by the body function (if the coroutine terminates). If there is any error, +-- `resume` returns false plus the error message. +function coroutine.resume(co , ...) end + +--- +-- Returns the running coroutine. 
Or nil when called by the main thread. +function coroutine.running() end + +--- +-- Returns the status of coroutine `co`. Result is a string: `"running"`, if +-- the coroutine is running (that is, it called `status`); `"suspended"`, if +-- the coroutine is suspended in a call to `yield`, or if it has not started +-- running yet; `"normal"` if the coroutine is active but not running (that +-- is, it has resumed another coroutine); and `"dead"` if the coroutine has +-- finished its body function, or if it has stopped with an error. +function coroutine.status(co) end + +--- +-- Creates a new coroutine, with body `f`. `f` must be a Lua +-- function. Returns a function that resumes the coroutine each time it is +-- called. Any arguments passed to the function behave as the extra arguments to +-- `resume`. Returns the same values returned by `resume`, except the first +-- boolean. In case of error, propagates the error. +function coroutine.wrap(f) end + +--- +-- Suspends the execution of the calling coroutine. The coroutine cannot +-- be running a C function, a metamethod, or an iterator. Any arguments to +-- `yield` are passed as extra results to `resume`. +function coroutine.yield(...) end + diff --git a/luaejdb/tools/ldoc/ldoc/builtin/debug.lua b/luaejdb/tools/ldoc/ldoc/builtin/debug.lua new file mode 100644 index 0000000..f3ec9b2 --- /dev/null +++ b/luaejdb/tools/ldoc/ldoc/builtin/debug.lua @@ -0,0 +1,123 @@ +--- getting runtime debug information. + +module 'debug' + +--- +-- Enters an interactive mode with the user, running each string that +-- the user enters. Using simple commands and other debug facilities, +-- the user can inspect global and local variables, change their values, +-- evaluate expressions, and so on. A line containing only the word `cont` +-- finishes this function, so that the caller continues its execution. +-- Note that commands for `debug.debug` are not lexically nested within any +-- function, and so have no direct access to local variables. +function debug.debug() end + +--- +-- Returns the environment of object `o`. +function debug.getfenv(o) end + +--- +-- Returns the current hook settings of the thread, as three values: the +-- current hook function, the current hook mask, and the current hook count +-- (as set by the `debug.sethook` function). +function debug.gethook(thread) end + +--- +-- Returns a table with information about a function. You can give the +-- function directly, or you can give a number as the value of `function`, +-- which means the function running at level `function` of the call stack +-- of the given thread: level 0 is the current function (`getinfo` itself); +-- level 1 is the function that called `getinfo`; and so on. If `function` +-- is a number larger than the number of active functions, then `getinfo` +-- returns nil. +-- +-- `thread` and `what` are optional. +-- +-- The returned table can contain all the fields returned by `lua_getinfo`, +-- with the string `what` describing which fields to fill in. The default for +-- `what` is to get all information available, except the table of valid +-- lines. If present, the option '`f`' adds a field named `func` with +-- the function itself. If present, the option '`L`' adds a field named +-- `activelines` with the table of valid lines. 
+-- For instance, the expression `debug.getinfo(1,"n").name` returns a table +-- with a name for the current function, if a reasonable name can be found, +-- and the expression `debug.getinfo(print)` returns a table with all available +-- information about the `print` function. +function debug.getinfo(thread, function , what) end + +--- +-- This function returns the name and the value of the local variable with +-- index `local` of the function at level `level` of the stack. (The first +-- parameter or local variable has index 1, and so on, until the last active +-- local variable.) The function returns nil if there is no local variable +-- with the given index, and raises an error when called with a `level` out +-- of range. (You can call `debug.getinfo` to check whether the level is valid.) +-- Variable names starting with '`(`' (open parentheses) represent internal +-- variables (loop control variables, temporaries, and C function locals). +function debug.getlocal(thread, level, local) end + +--- +-- Returns the metatable of the given `object` or nil if it does not have +-- a metatable. +function debug.getmetatable(object) end + +--- +-- Returns the registry table (see §3.5). +function debug.getregistry() end + +--- +-- This function returns the name and the value of the upvalue with index +-- `up` of the function `func`. The function returns nil if there is no +-- upvalue with the given index. +function debug.getupvalue(func, up) end + +--- +-- Sets the environment of the given `object` to the given `table`. Returns +-- `object`. +function debug.setfenv(object, table) end + +--- +-- Sets the given function as a hook. The string `mask` and the number +-- `count` describe when the hook will be called. The string mask may have +-- the following characters, with the given meaning: +-- +-- * `"c"`: the hook is called every time Lua calls a function; +-- * `"r"`: the hook is called every time Lua returns from a function; +-- * `"l"`: the hook is called every time Lua enters a new line of code. +-- +-- With a `count` different from zero, the hook is called after every `count` +-- instructions. +-- +-- When called without arguments, `debug.sethook` turns off the hook. +-- +-- When the hook is called, its first parameter is a string describing +-- the event that has triggered its call: `"call"`, `"return"` (or `"tail +-- return"`, when simulating a return from a tail call), `"line"`, and +-- `"count"`. For line events, the hook also gets the new line number as its +-- second parameter. Inside a hook, you can call `getinfo` with level 2 to +-- get more information about the running function (level 0 is the `getinfo` +-- function, and level 1 is the hook function), unless the event is `"tail +-- return"`. In this case, Lua is only simulating the return, and a call to +-- `getinfo` will return invalid data. +function debug.sethook(thread, hook, mask , count) end + +--- +-- This function assigns the value `value` to the local variable with +-- index `local` of the function at level `level` of the stack. The function +-- returns nil if there is no local variable with the given index, and raises +-- an error when called with a `level` out of range. (You can call `getinfo` +-- to check whether the level is valid.) Otherwise, it returns the name of +-- the local variable. +function debug.setlocal(thread, level, local, value) end + +--- +-- Sets the metatable for the given `object` to the given `table` (which +-- can be nil). 
+function debug.setmetatable(object, table) end + +--- +-- This function assigns the value `value` to the upvalue with index `up` +-- of the function `func`. The function returns nil if there is no upvalue +-- with the given index. Otherwise, it returns the name of the upvalue. +function debug.setupvalue(func, up, value) end + diff --git a/luaejdb/tools/ldoc/ldoc/builtin/global.lua b/luaejdb/tools/ldoc/ldoc/builtin/global.lua new file mode 100644 index 0000000..66dd415 --- /dev/null +++ b/luaejdb/tools/ldoc/ldoc/builtin/global.lua @@ -0,0 +1,285 @@ +--- Lua global functions. + +module 'global' + +--- +-- Issues an error when the value of its argument `v` is false (i.e., +-- nil or false); otherwise, returns all its arguments. `message` is an error +-- message; when absent, it defaults to "assertion failed!" +function assert(v , message) end + +--- +-- This function is a generic interface to the garbage collector. It +-- performs different functions according to its first argument, `opt`: +-- +-- * "stop": stops the garbage collector. +-- * "restart": restarts the garbage collector. +-- * "collect": performs a full garbage-collection cycle. +-- * "count": returns the total memory in use by Lua (in Kbytes). +-- * "step": performs a garbage-collection step. The step "size" is controlled +-- by `arg` (larger values mean more steps) in a non-specified way. If you +-- want to control the step size you must experimentally tune the value of +-- * "arg". Returns true if the step finished a collection cycle. +-- * "setpause": sets `arg` as the new value for the *pause* of the collector +-- (see §2.10). Returns the previous value for *pause*. +-- * "setstepmul": sets `arg` as the new value for the *step multiplier* +-- of the collector (see §2.10). Returns the previous value for *step*. +-- +function collectgarbage(opt , arg) end + +--- +-- Opens the named file and executes its contents as a Lua chunk. When +-- called without arguments, +-- `dofile` executes the contents of the standard input (`stdin`). Returns +-- all values returned by the chunk. In case of errors, `dofile` propagates +-- the error to its caller (that is, `dofile` does not run in protected mode). +function dofile(filename) end + +--- +-- Terminates the last protected function called and returns `message` +-- as the error message. Function `error` never returns. +-- Usually, `error` adds some information about the error position at the +-- beginning of the message. The `level` argument specifies how to get the +-- error position. With level 1 (the default), the error position is where the +-- `error` function was called. Level 2 points the error to where the function +-- that called `error` was called; and so on. Passing a level 0 avoids the +-- addition of error position information to the message. +function error(message , level) end + +--- +-- A global variable (not a function) that holds the global environment +-- (that is, `_G._G = _G`). Lua itself does not use this variable; changing +-- its value does not affect any environment, nor vice-versa. (Use `setfenv` +-- to change environments.) +-- function _G end +-- * `_G._G`: _G._G + +--- +-- Returns the current environment in use by the function. +-- `f` can be a Lua function or a number that specifies the function at that +-- stack level: Level 1 is the function calling `getfenv`. If the given +-- function is not a Lua function, or if `f` is 0, `getfenv` returns the +-- global environment. The default for `f` is 1. 
+function getfenv(f) end + +--- +-- If `object` does not have a metatable, returns nil. Otherwise, if the +-- object's metatable has a `"__metatable"` field, returns the associated +-- value. Otherwise, returns the metatable of the given object. +function getmetatable(object) end + +--- +-- Returns three values: an iterator function, the table `t`, and 0, +-- so that the construction +-- for i,v in ipairs(t) do *body* end +-- will iterate over the pairs (`1,t[1]`), (`2,t[2]`), ..., up to the +-- first integer key absent from the table. +function ipairs(t) end + +--- +-- Loads a chunk using function `func` to get its pieces. Each call to +-- `func` must return a string that concatenates with previous results. A +-- return of an empty string, nil, or no value signals the end of the chunk. +-- If there are no errors, returns the compiled chunk as a function; otherwise, +-- returns nil plus the error message. The environment of the returned function +-- is the global environment. +-- `chunkname` is used as the chunk name for error messages and debug +-- information. When absent, it defaults to "`=(load)`". +function load(func , chunkname) end + +--- +-- Similar to `load`, but gets the chunk from file `filename` or from the +-- standard input, if no file name is given. +function loadfile(filename) end + +--- +-- Similar to `load`, but gets the chunk from the given string. +-- To load and run a given string, use the idiom +-- assert(loadstring(s))() +-- When absent, `chunkname` defaults to the given string. +function loadstring(string , chunkname) end + +--- +-- Allows a program to traverse all fields of a table. Its first argument is +-- a table and its second argument is an index in this table. `next` returns +-- the next index of the table and its associated value. +-- +-- When called with nil +-- as its second argument, `next` returns an initial index and its associated +-- value. When called with the last index, or with nil in an empty table, `next` +-- returns nil. +-- +-- If the second argument is absent, then it is interpreted as +-- nil. In particular, you can use `next(t)` to check whether a table is empty. +-- The order in which the indices are enumerated is not specified, *even for +-- numeric indices*. (To traverse a table in numeric order, use a numerical +-- for or the `ipairs` function.) +-- +-- The behavior of `next` is *undefined* if, during the traversal, you assign +-- any value to a non-existent field in the table. You may however modify +-- existing fields. In particular, you may clear existing fields. +function next(table , index) end + +--- +-- Returns three values: the `next` function, the table `t`, and nil, +-- so that the construction +-- for k,v in pairs(t) do *body* end +-- will iterate over all key–value pairs of table `t`. +-- See function `next` for the caveats of modifying the table during its +-- traversal. +function pairs(t) end + +--- +-- Calls function `f` with the given arguments in *protected mode*. This +-- means that any error inside `f` is not propagated; instead, `pcall` catches +-- the error and returns a status code. Its first result is the status code (a +-- boolean), which is true if the call succeeds without errors. In such case, +-- `pcall` also returns all results from the call, after this first result. In +-- case of any error, `pcall` returns false plus the error message. +function pcall(f, arg1, ...) 
end + +--- +-- Receives any number of arguments, and prints their values to `stdout`, +-- using the `tostring` function to convert them to strings. `print` is not +-- intended for formatted output, but only as a quick way to show a value, +-- typically for debugging. For formatted output, use `string.format`. +function print(...) end + +--- +-- Checks whether `v1` is equal to `v2`, without invoking any +-- metamethod. Returns a boolean. +function rawequal(v1, v2) end + +--- +-- Gets the real value of `table[index]`, without invoking any +-- metamethod. `table` must be a table; `index` may be any value. +function rawget(table, index) end + +--- +-- Sets the real value of `table[index]` to `value`, without invoking any +-- metamethod. `table` must be a table, `index` any value different from nil, +-- and `value` any Lua value. +-- This function returns `table`. +function rawset(table, index, value) end + +--- +-- If `index` is a number, returns all arguments after argument number +-- `index`. Otherwise, `index` must be the string `"#"`, and `select` returns +-- the total number of extra arguments it received. +function select(index, ...) end + +--- +-- Sets the environment to be used by the given function. `f` can be a Lua +-- function or a number that specifies the function at that stack level: Level +-- 1 is the function calling `setfenv`. `setfenv` returns the given function. +-- As a special case, when `f` is 0 `setfenv` changes the environment of the +-- running thread. In this case, `setfenv` returns no values. +function setfenv(f, table) end + +--- +-- Sets the metatable for the given table. (You cannot change the metatable +-- of other types from Lua, only from C.) If `metatable` is nil, removes the +-- metatable of the given table. If the original metatable has a `"__metatable"` +-- field, raises an error. +-- This function returns `table`. +function setmetatable(table, metatable) end + +--- +-- Tries to convert its argument to a number. If the argument is already +-- a number or a string convertible to a number, then `tonumber` returns this +-- number; otherwise, it returns nil. +-- An optional argument specifies the base to interpret the numeral. The base +-- may be any integer between 2 and 36, inclusive. In bases above 10, the +-- letter '`A`' (in either upper or lower case) represents 10, '`B`' represents +-- 11, and so forth, with '`Z`' representing 35. In base 10 (the default), +-- the number can have a decimal part, as well as an optional exponent part +-- (see §2.1). In other bases, only unsigned integers are accepted. +function tonumber(e , base) end + +--- +-- Receives an argument of any type and converts it to a string in a +-- reasonable format. For complete control of how numbers are converted, use +-- `string.format`. +-- If the metatable of `e` has a `"__tostring"` field, then `tostring` calls +-- the corresponding value with `e` as argument, and uses the result of the +-- call as its result. +function tostring(e) end + +--- +-- Returns the type of its only argument, coded as a string. The possible +-- results of this function are " +-- `nil`" (a string, not the value nil), "`number`", "`string`", "`boolean`", +-- "`table`", "`function`", "`thread`", and "`userdata`". +function type(v) end + +--- +-- Returns the elements from the given table. This function is equivalent to +-- return list[i], list[i+1], ..., list[j] +-- except that the above code can be written only for a fixed number of +-- elements. 
By default, `i` is 1 and `j` is the length of the list, as +-- defined by the length operator (see §2.5.5). +function unpack(list , i , j) end + +--- +-- A global variable (not a function) that holds a string containing the +-- current interpreter version. The current contents of this variable is +-- "`Lua 5.1`". +-- function _VERSION end +-- * `_G._VERSION`: _G._VERSION + +--- +-- This function is similar to `pcall`, except that you can set a new +-- error handler. +-- `xpcall` calls function `f` in protected mode, using `err` as the error +-- handler. Any error inside `f` is not propagated; instead, `xpcall` catches +-- the error, calls the `err` function with the original error object, and +-- returns a status code. Its first result is the status code (a boolean), +-- which is true if the call succeeds without errors. In this case, `xpcall` +-- also returns all results from the call, after this first result. In case +-- of any error, `xpcall` returns false plus the result from `err`. +function xpcall(f, err) end + +--- +-- Creates a module. If there is a table in `package.loaded[name]`, +-- this table is the module. Otherwise, if there is a global table `t` +-- with the given name, this table is the module. Otherwise creates a new +-- table `t` and sets it as the value of the global `name` and the value of +-- `package.loaded[name]`. This function also initializes `t._NAME` with the +-- given name, `t._M` with the module (`t` itself), and `t._PACKAGE` with the +-- package name (the full module name minus last component; see below). Finally, +-- `module` sets `t` as the new environment of the current function and the +-- new value of `package.loaded[name]`, so that `require` returns `t`. +-- If `name` is a compound name (that is, one with components separated by +-- dots), `module` creates (or reuses, if they already exist) tables for each +-- component. For instance, if `name` is `a.b.c`, then `module` stores the +-- module table in field `c` of field `b` of global `a`. +-- This function can receive optional *options* after the module name, where +-- each option is a function to be applied over the module. +function module(name , ...) end + +--- +-- Loads the given module. The function starts by looking into the +-- `package.loaded` table to determine whether `modname` is already +-- loaded. If it is, then `require` returns the value stored at +-- `package.loaded[modname]`. Otherwise, it tries to find a *loader* for +-- the module. +-- To find a loader, `require` is guided by the `package.loaders` array. By +-- changing this array, we can change how `require` looks for a module. The +-- following explanation is based on the default configuration for +-- `package.loaders`. +-- First `require` queries `package.preload[modname]`. If it has a value, +-- this value (which should be a function) is the loader. Otherwise `require` +-- searches for a Lua loader using the path stored in `package.path`. If +-- that also fails, it searches for a C loader using the path stored in +-- `package.cpath`. If that also fails, it tries an *all-in-one* loader (see +-- `package.loaders`). +-- Once a loader is found, `require` calls the loader with a single argument, +-- `modname`. If the loader returns any value, `require` assigns the returned +-- value to `package.loaded[modname]`. If the loader returns no value and +-- has not assigned any value to `package.loaded[modname]`, then `require` +-- assigns true to this entry. In any case, `require` returns the final value +-- of `package.loaded[modname]`. 
+-- If there is any error loading or running the module, or if it cannot find +-- any loader for the module, then `require` signals an error. +function require(modname) end + diff --git a/luaejdb/tools/ldoc/ldoc/builtin/globals.lua b/luaejdb/tools/ldoc/ldoc/builtin/globals.lua new file mode 100644 index 0000000..06dbdd5 --- /dev/null +++ b/luaejdb/tools/ldoc/ldoc/builtin/globals.lua @@ -0,0 +1,103 @@ +------- +-- global functions and tables +local tools = require 'ldoc.tools' +local globals = {} +local lua52 = _VERSION:match '5.2' + + +globals.functions = { + assert = true, + collectgarbage = true, + dofile = true, + getmetatable = true, + setmetatable = true, + pairs = true, + ipairs = true, + load = true, + loadfile = true, + loadstring = true, + next = true, + pcall = true, + print = true, + rawequal = true, + rawget = true, + rawset = true, + select = true, + tonumber = true, + tostring = true, + type = true, + xpcall = true, + module = true, + require = true, +} +local functions = globals.functions + +if not lua52 then + functions.setfenv = true + functions.getfenv = true + functions.unpack = true +else + functions.rawlen = true +end + +local manual, fun_ref + +function globals.set_manual_url(url) + manual = url .. '#' + fun_ref = manual..'pdf-' +end + +if lua52 then + globals.tables = { + io = '6.8', + package = '6.3', + math = '6.6', + os = '6.9', + string = '6.4', + table = '6.5', + coroutine = '6.2', + debug = '6.10' + } + globals.set_manual_url 'http://www.lua.org/manual/5.2/manual.html' +else + globals.tables = { + io = '5.7', + package = '5.3', + math = '5.6', + os = '5.8', + string = '5.4', + table = '5.5', + coroutine = '5.2', + debug = '5.9' + } + globals.set_manual_url 'http://www.lua.org/manual/5.1/manual.html' +end + +local tables = globals.tables + +local function function_ref (name) + return {href = fun_ref..name, label = name} +end + +local function module_ref (name) + return {href = manual..tables[name], label = name} +end + +function globals.lua_manual_ref (name) + local tbl,fname = tools.split_dotted_name(name) + if not tbl then -- plain symbol + if functions[name] then + return function_ref(name) + end + if tables[name] then + return module_ref(name) + end + else + if tables[tbl] then + return function_ref(name) + end + end + return nil +end + +return globals diff --git a/luaejdb/tools/ldoc/ldoc/builtin/io.lua b/luaejdb/tools/ldoc/ldoc/builtin/io.lua new file mode 100644 index 0000000..b658d4f --- /dev/null +++ b/luaejdb/tools/ldoc/ldoc/builtin/io.lua @@ -0,0 +1,157 @@ +--- Reading and Writing Files. + +module 'io' + +--- +-- Equivalent to `file:close()`. Without a `file`, closes the default +-- output file. +function io.close(file) end + +--- +-- Equivalent to `file:flush` over the default output file. +function io.flush() end + +--- +-- When called with a file name, it opens the named file (in text mode), +-- and sets its handle as the default input file. When called with a file +-- handle, it simply sets this file handle as the default input file. When +-- called without parameters, it returns the current default input file. +-- In case of errors this function raises the error, instead of returning an +-- error code. +function io.input(file) end + +--- +-- Opens the given file name in read mode and returns an iterator function +-- that, each time it is called, returns a new line from the file. Therefore, +-- the construction +-- for line in io.lines(filename) do *body* end +-- will iterate over all lines of the file. 
When the iterator function detects +-- the end of file, it returns nil (to finish the loop) and automatically +-- closes the file. +-- The call `io.lines()` (with no file name) is equivalent to +-- `io.input():lines()`; that is, it iterates over the lines of the default +-- input file. In this case it does not close the file when the loop ends. +function io.lines(filename) end + +--- +-- This function opens a file, in the mode specified in the string `mode`. It +-- returns a new file handle, or, in case of errors, nil plus an error message. +-- The `mode` string can be any of the following: +-- "r": read mode (the default); +-- "w": write mode; +-- "a": append mode; +-- "r+": update mode, all previous data is preserved; +-- "w+": update mode, all previous data is erased; +-- "a+": append update mode, previous data is preserved, writing is only +-- allowed at the end of file. +-- The `mode` string can also have a '`b`' at the end, which is needed in +-- some systems to open the file in binary mode. This string is exactly what +-- is used in the standard C function `fopen`. +function io.open(filename , mode) end + +--- +-- Similar to `io.input`, but operates over the default output file. +function io.output(file) end + +--- +-- Starts program `prog` in a separated process and returns a file handle +-- that you can use to read data from this program (if `mode` is `"r"`, +-- the default) or to write data to this program (if `mode` is `"w"`). +-- This function is system dependent and is not available on all platforms. +function io.popen(prog , mode) end + +--- +-- Equivalent to `io.input():read`. +function io.read(...) end + +-- * `io.stderr`: Standard error. +-- * `io.stdin`: Standard in. +-- * `io.stdout`: Standard out. + +--- +-- Returns a handle for a temporary file. This file is opened in update +-- mode and it is automatically removed when the program ends. +function io.tmpfile() end + +--- +-- Checks whether `obj` is a valid file handle. Returns the string `"file"` +-- if `obj` is an open file handle, `"closed file"` if `obj` is a closed file +-- handle, or nil if `obj` is not a file handle. +function io.type(obj) end + +--- +-- Equivalent to `io.output():write`. +function io.write(...) end + +--- +-- Closes `file`. Note that files are automatically closed when their +-- handles are garbage collected, but that takes an unpredictable amount of +-- time to happen. +function file:close() end + +--- +-- Saves any written data to `file`. +function file:flush() end + +--- +-- Returns an iterator function that, each time it is called, returns a +-- new line from the file. Therefore, the construction +-- for line in file:lines() do *body* end +-- will iterate over all lines of the file. (Unlike `io.lines`, this function +-- does not close the file when the loop ends.) +function file:lines() end + +--- +-- Reads the file `file`, according to the given formats, which specify +-- what to read. For each format, the function returns a string (or a number) +-- with the characters read, or nil if it cannot read data with the specified +-- format. When called without formats, it uses a default format that reads +-- the entire next line (see below). +-- The available formats are +-- "*n": reads a number; this is the only format that returns a number +-- instead of a string. +-- "*a": reads the whole file, starting at the current position. On end of +-- file, it returns the empty string. +-- "*l": reads the next line (skipping the end of line), returning nil on +-- end of file. This is the default format. 
+-- *number*: reads a string with up to this number of characters, returning +-- nil on end of file. If number is zero, it reads nothing and returns an +-- empty string, or nil on end of file. +function file:read(...) end + +--- +-- Sets and gets the file position, measured from the beginning of the +-- file, to the position given by `offset` plus a base specified by the string +-- `whence`, as follows: +-- "set": base is position 0 (beginning of the file); +-- "cur": base is current position; +-- "end": base is end of file; +-- In case of success, function `seek` returns the final file position, +-- measured in bytes from the beginning of the file. If this function fails, +-- it returns nil, plus a string describing the error. +-- The default value for `whence` is `"cur"`, and for `offset` is 0. Therefore, +-- the call `file:seek()` returns the current file position, without changing +-- it; the call `file:seek("set")` sets the position to the beginning of the +-- file (and returns 0); and the call `file:seek("end")` sets the position +-- to the end of the file, and returns its size. +function file:seek(whence , offset) end + +--- +-- Sets the buffering mode for an output file. There are three available +-- modes: +-- +-- * "no": no buffering; the result of any output operation appears immediately. +-- * "full": full buffering; output operation is performed only when the +-- buffer is full (or when you explicitly `flush` the file (see `io.flush`)). +-- * "line": line buffering; output is buffered until a newline is output or +-- there is any input from some special files (such as a terminal device). +-- For the last two cases, `size` specifies the size of the buffer, in +-- bytes. The default is an appropriate size. +function file:setvbuf(mode , size) end + +--- +-- Writes the value of each of its arguments to the `file`. The arguments +-- must be strings or numbers. To write other values, use `tostring` or +-- `string.format` before `write`. +function file:write(...) end + diff --git a/luaejdb/tools/ldoc/ldoc/builtin/lfs.lua b/luaejdb/tools/ldoc/ldoc/builtin/lfs.lua new file mode 100644 index 0000000..c48de0d --- /dev/null +++ b/luaejdb/tools/ldoc/ldoc/builtin/lfs.lua @@ -0,0 +1,122 @@ +--- File and Directory manipulation + +module 'lfs' + +--- +-- Returns a table with the file attributes corresponding to filepath (or nil +-- followed by an error message in case of error). If the second optional +-- argument is given, then only the value of the named attribute is returned +-- (this use is equivalent to lfs.attributes(filepath).aname, but the table is +-- not created and only one attribute is retrieved from the O.S.). The +-- attributes are described as follows; attribute mode is a string, all the +-- others are numbers, and the time related attributes use the same time +-- reference of os.time: +-- +-- - dev: on Unix systems, this represents the device that the inode resides on. +-- On Windows systems, represents the drive number of the disk containing +-- the file +-- - ino: on Unix systems, this represents the inode number. On Windows systems +-- this has no meaning +-- - mode: string representing the associated protection mode (the values could +-- be file, directory, link, socket, named pipe, char device, block +-- device or other) +-- - nlink: number of hard links to the file +-- - uid: user-id of owner (Unix only, always 0 on Windows) +-- - gid: group-id of owner (Unix only, always 0 on Windows) +-- - rdev: on Unix systems, represents the device type, for special file inodes. 
+-- On Windows systems represents the same as dev
+-- - access: time of last access
+-- - modification: time of last data modification
+-- - change: time of last file status change
+-- - size: file size, in bytes
+-- - blocks: blocks allocated for the file; (Unix only)
+-- - blksize: optimal file system I/O blocksize; (Unix only)
+-- This function uses stat internally, thus if the given filepath is a symbolic
+-- link, it is followed (if it points to another link the chain is followed
+-- recursively) and the information is about the file it refers to. To obtain
+-- information about the link itself, see function lfs.symlinkattributes.
+function lfs.attributes(filepath , aname) end
+
+---
+-- Changes the current working directory to the given path.
+-- Returns true in case of success or nil plus an error string.
+function lfs.chdir(path) end
+
+---
+-- Creates a lockfile (called lockfile.lfs) in path if it does not exist and
+-- returns the lock. If the lock already exists, checks whether it is stale,
+-- using the second parameter (the default for the second parameter is INT_MAX,
+-- which in practice means the lock will never be stale). To free the lock call
+-- lock:free().
+-- In case of any errors it returns nil and the error message. In particular,
+-- if the lock exists and is not stale it returns the "File exists" message.
+function lfs.lock_dir(path, seconds_stale) end
+
+---
+-- Returns a string with the current working directory or nil plus an error
+-- string.
+function lfs.currentdir() end
+
+---
+-- Lua iterator over the entries of a given directory. Each time the iterator is
+-- called with dir_obj it returns a directory entry's name as a string, or nil
+-- if there are no more entries. You can also iterate by calling `dir_obj:next()`,
+-- and explicitly close the directory before the iteration is finished with
+-- `dir_obj:close()`. Raises an error if path is not a directory.
+function lfs.dir(path) end
+
+---
+-- Locks a file or a part of it. This function works on open files; the file
+-- handle should be specified as the first argument. The string mode could be
+-- either r (for a read/shared lock) or w (for a write/exclusive lock). The
+-- optional arguments start and length can be used to specify a starting point
+-- and its length; both should be numbers.
+-- Returns true if the operation was successful; in case of error, it returns
+-- nil plus an error string.
+function lfs.lock(filehandle, mode, start, length) end
+
+---
+-- Creates a new directory. The argument is the name of the new directory.
+-- Returns true if the operation was successful; in case of error, it returns
+-- nil plus an error string.
+function lfs.mkdir(dirname) end
+
+---
+-- Removes an existing directory. The argument is the name of the directory.
+-- Returns true if the operation was successful; in case of error, it returns
+-- nil plus an error string.
+function lfs.rmdir(dirname) end
+
+---
+-- Sets the writing mode for a file. The mode string can be either binary or
+-- text. Returns the previous mode string for the file. This function is only
+-- available in Windows, so you may want to make sure that lfs.setmode exists
+-- before using it.
+function lfs.setmode(file, mode) end
+
+---
+-- Identical to lfs.attributes except that it obtains information about the link
+-- itself (not the file it refers to). This function is not available in Windows
+-- so you may want to make sure that lfs.symlinkattributes exists before using
+-- it.
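+--
+-- A short illustrative sketch (the link path used here is hypothetical):
+--
+--     if lfs.symlinkattributes then
+--       local mode = lfs.symlinkattributes("/tmp/mylink", "mode")
+--       print(mode)   --> e.g. "link"
+--     end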
+function lfs.symlinkattributes(filepath , aname) end + +--- +-- Set access and modification times of a file. This function is a bind to utime +-- function. The first argument is the filename, the second argument (atime) is +-- the access time, and the third argument (mtime) is the modification time. +-- Both times are provided in seconds (which should be generated with Lua +-- standard function os.time). If the modification time is omitted, the access +-- time provided is used; if both times are omitted, the current time is used. +-- Returns true if the operation was successful; in case of error, it returns +-- nil plus an error string. +function lfs.touch(filepath , atime , mtime) end + +--- +-- Unlocks a file or a part of it. This function works on open files; the file +-- handle should be specified as the first argument. The optional arguments +-- start and length can be used to specify a starting point and its length; both +-- should be numbers. +-- Returns true if the operation was successful; in case of error, it returns +-- nil plus an error string. +function lfs.unlock(filehandle, start, length) end diff --git a/luaejdb/tools/ldoc/ldoc/builtin/lpeg.lua b/luaejdb/tools/ldoc/ldoc/builtin/lpeg.lua new file mode 100644 index 0000000..4a740b3 --- /dev/null +++ b/luaejdb/tools/ldoc/ldoc/builtin/lpeg.lua @@ -0,0 +1,211 @@ +--- LPeg PEG pattern matching. + +module 'lpeg' + +--- +-- The matching function. It attempts to match the given pattern against the +-- subject string. If the match succeeds, returns the index in the subject of +-- the first character after the match, or the captured values (if the pattern +-- captured any value). +-- +-- An optional numeric argument init makes the match starts at that position in +-- the subject string. As usual in Lua libraries, a negative value counts from +-- the end. +-- +-- Unlike typical pattern-matching functions, match works only in anchored mode; +-- that is, it tries to match the pattern with a prefix of the given subject +-- string (at position init), not with an arbitrary substring of the subject. +-- So, if we want to find a pattern anywhere in a string, we must either write a +-- loop in Lua or write a pattern that matches anywhere. This second approach is +-- easy and quite efficient; see examples. +function lpeg.match(pattern, subject , init) end + +--- +-- If the given value is a pattern, returns the string "pattern". Otherwise +-- returns nil. +function lpeg.type(value) end + +--- +-- Returns a string with the running version of LPeg. +function lpeg.version() end + +--- +-- Sets the maximum size for the backtrack stack used by LPeg to track calls and +-- choices. Most well-written patterns need little backtrack levels and +-- therefore you seldom need to change this maximum; but a few useful patterns +-- may need more space. Before changing this maximum you should try to rewrite +-- your pattern to avoid the need for extra space. +function lpeg.setmaxstack(max) end + +--- +-- Converts the given value into a proper pattern, according to the following +-- rules: +-- * If the argument is a pattern, it is returned unmodified. +-- * If the argument is a string, it is translated to a pattern that matches +-- literally the string. +-- * If the argument is a non-negative number n, the result is a pattern that +-- matches exactly n characters. 
+-- * If the argument is a negative number -n, the result is a pattern that +-- succeeds only if the input string does not have n characters: lpeg.P(-n) +-- is equivalent to -lpeg.P(n) (see the unary minus operation). +-- * If the argument is a boolean, the result is a pattern that always +-- succeeds or always fails (according to the boolean value), without +-- consuming any input. +-- * If the argument is a table, it is interpreted as a grammar (see +-- Grammars). +-- * If the argument is a function, returns a pattern equivalent to a +-- match-time capture over the empty string. +function lpeg.P(value) end + +--- +-- Returns a pattern that matches any single character belonging to one of the +-- given ranges. Each range is a string xy of length 2, representing all +-- characters with code between the codes of x and y (both inclusive). +-- As an example, the pattern `lpeg.R("09")` matches any digit, and `lpeg.R("az", +-- "AZ")` matches any ASCII letter. +function lpeg.R({range}) end + +--- +-- Returns a pattern that matches any single character that appears in the given +-- string. (The S stands for Set.) +-- As an example, the pattern lpeg.S("+-*/") matches any arithmetic operator. +-- Note that, if s is a character (that is, a string of length 1), then +-- lpeg.P(s) is equivalent to lpeg.S(s) which is equivalent to lpeg.R(s..s). +-- Note also that both lpeg.S("") and lpeg.R() are patterns that always fail. +function lpeg.S(string) end + +--- +-- This operation creates a non-terminal (a variable) for a grammar. The created +-- non-terminal refers to the rule indexed by v in the enclosing grammar. (See +-- Grammars for details.) +function lpeg.V(v) end + +--- +-- Returns a table with patterns for matching some character classes according +-- to the current locale. The table has fields: +-- +-- * alnum +-- * alpha +-- * cntrl +-- * digit +-- * graph +-- * lower +-- * print +-- * punct +-- * space +-- * upper +-- * xdigit +-- +-- each one containing a +-- correspondent pattern. Each pattern matches any single character that belongs +-- to its class. +-- +-- If called with an argument table, then it creates those fields inside the +-- given table and returns that table. +function lpeg.locale(table) end + +--- +-- Creates a simple capture, which captures the substring of the subject that +-- matches patt. The captured value is a string. If patt has other captures, +-- their values are returned after this one. +function lpeg.C(patt) end + +--- +-- Creates an argument capture. This pattern matches the empty string and +-- produces the value given as the nth extra argument given in the call to +-- lpeg.match. +function lpeg.Carg(n) end + +--- +-- Creates a back capture. This pattern matches the empty string and produces +-- the values produced by the most recent group capture named name. +-- Most recent means the last complete outermost group capture with the given +-- name. A Complete capture means that the entire pattern corresponding to the +-- capture has matched. An Outermost capture means that the capture is not +-- inside another complete capture. +function lpeg.Cb(name) end + +--- +-- Creates a constant capture. This pattern matches the empty string and +-- produces all given values as its captured values. +function lpeg.Cc(...) end + +--- +-- Creates a fold capture. If patt produces a list of captures C1 C2 ... 
Cn, +-- this capture will produce the value func(...func(func(C1, C2), C3)..., Cn), +-- that is, it will fold (or accumulate, or reduce) the captures from patt using +-- function func. +-- +-- This capture assumes that patt should produce at least one capture with at +-- least one value (of any type), which becomes the initial value of an +-- accumulator. (If you need a specific initial value, you may prefix a constant +-- capture to patt.) For each subsequent capture LPeg calls func with this +-- accumulator as the first argument and all values produced by the capture as +-- extra arguments; the value returned by this call becomes the new value for +-- the accumulator. The final value of the accumulator becomes the captured +-- value. +-- +-- As an example, the following pattern matches a list of numbers separated by +-- commas and returns their addition: +-- +-- -- matches a numeral and captures its value +-- number = lpeg.R"09"^1 / tonumber +-- -- matches a list of numbers, captures their values +-- list = number * ("," * number)^0 +-- -- auxiliary function to add two numbers +-- function add (acc, newvalue) return acc + newvalue end +-- -- folds the list of numbers adding them +-- sum = lpeg.Cf(list, add) +-- -- example of use +-- print(sum:match("10,30,43")) --> 83 +-- +function lpeg.Cf(patt, func) end + +--- +-- Creates a group capture. It groups all values returned by patt into a single +-- capture. The group may be anonymous (if no name is given) or named with the +-- given name. +-- An anonymous group serves to join values from several captures into a single +-- capture. A named group has a different behavior. In most situations, a named +-- group returns no values at all. Its values are only relevant for a following +-- back capture or when used inside a table capture. +function lpeg.Cg(patt , name) end + +--- +-- Creates a position capture. It matches the empty string and captures the +-- position in the subject where the match occurs. The captured value is a +-- number. +function lpeg.Cp() end + +--- +-- Creates a substitution capture, which captures the substring of the subject +-- that matches patt, with substitutions. For any capture inside patt with a +-- value, the substring that matched the capture is replaced by the capture +-- value (which should be a string). The final captured value is the string +-- resulting from all replacements. +function lpeg.Cs(patt) end + +--- +-- Creates a table capture. This capture creates a table and puts all values +-- from all anonymous captures made by patt inside this table in successive +-- integer keys, starting at 1. Moreover, for each named capture group created +-- by patt, the first value of the group is put into the table with the group +-- name as its key. The captured value is only the table. +function lpeg.Ct(patt) end + +--- +-- Creates a match-time capture. Unlike all other captures, this one is +-- evaluated immediately when a match occurs. It forces the immediate evaluation +-- of all its nested captures and then calls function. +-- The given function gets as arguments the entire subject, the current position +-- (after the match of patt), plus any capture values produced by patt. +-- The first value returned by function defines how the match happens. If the +-- call returns a number, the match succeeds and the returned number becomes the +-- new current position. (Assuming a subject s and current position i, the +-- returned number must be in the range [i, len(s) + 1].) 
If the call returns +-- true, the match succeeds without consuming any input. (So, to return true is +-- equivalent to return i.) If the call returns false, nil, or no value, the +-- match fails. +-- Any extra values returned by the function become the values produced by the +-- capture. +function lpeg.Cmt(patt, function) end diff --git a/luaejdb/tools/ldoc/ldoc/builtin/math.lua b/luaejdb/tools/ldoc/ldoc/builtin/math.lua new file mode 100644 index 0000000..9dcedf4 --- /dev/null +++ b/luaejdb/tools/ldoc/ldoc/builtin/math.lua @@ -0,0 +1,142 @@ +--- standard mathematical functions. + +module 'math' + +--- +-- Returns the absolute value of `x`. +function math.abs(x) end + +--- +-- Returns the arc cosine of `x` (in radians). +function math.acos(x) end + +--- +-- Returns the arc sine of `x` (in radians). +function math.asin(x) end + +--- +-- Returns the arc tangent of `x` (in radians). +function math.atan(x) end + +--- +-- Returns the arc tangent of `y/x` (in radians), but uses the signs +-- of both parameters to find the quadrant of the result. (It also handles +-- correctly the case of `x` being zero.) +function math.atan2(y, x) end + +--- +-- Returns the smallest integer larger than or equal to `x`. +function math.ceil(x) end + +--- +-- Returns the cosine of `x` (assumed to be in radians). +function math.cos(x) end + +--- +-- Returns the hyperbolic cosine of `x`. +function math.cosh(x) end + +--- +-- Returns the angle `x` (given in radians) in degrees. +function math.deg(x) end + +--- +-- Returns the value *e^x*. +function math.exp(x) end + +--- +-- Returns the largest integer smaller than or equal to `x`. +function math.floor(x) end + +--- +-- Returns the remainder of the division of `x` by `y` that rounds the +-- quotient towards zero. +function math.fmod(x, y) end + +--- +-- Returns `m` and `e` such that *x = m2^e*, `e` is an integer and the +-- absolute value of `m` is in the range *[0.5, 1)* (or zero when `x` is zero). +function math.frexp(x) end + +--- +-- The value `HUGE_VAL`, a value larger than or equal to any other +-- numerical value. +-- function math.huge end +-- * `math.HUGE_VAL`: math.HUGE_VAL + +--- +-- Returns *m2^e* (`e` should be an integer). +function math.ldexp(m, e) end + +--- +-- Returns the natural logarithm of `x`. +function math.log(x) end + +--- +-- Returns the base-10 logarithm of `x`. +function math.log10(x) end + +--- +-- Returns the maximum value among its arguments. +function math.max(x, ...) end + +--- +-- Returns the minimum value among its arguments. +function math.min(x, ...) end + +--- +-- Returns two numbers, the integral part of `x` and the fractional part of +-- `x`. +function math.modf(x) end + +--- +-- The value of *pi*. +-- function math.pi end +-- * `math.pi`: math.pi + +--- +-- Returns *x^y*. (You can also use the expression `x^y` to compute this +-- value.) +function math.pow(x, y) end + +--- +-- Returns the angle `x` (given in degrees) in radians. +function math.rad(x) end + +--- +-- This function is an interface to the simple pseudo-random generator +-- function `rand` provided by ANSI C. (No guarantees can be given for its +-- statistical properties.) +-- When called without arguments, returns a uniform pseudo-random real +-- number in the range *[0,1)*. When called with an integer number `m`, +-- `math.random` returns a uniform pseudo-random integer in the range *[1, +-- m]*. When called with two integer numbers `m` and `n`, `math.random` +-- returns a uniform pseudo-random integer in the range *[m, n]*. 
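+--
+-- For illustration (actual values vary with the seed):
+--
+--     math.randomseed(os.time())
+--     print(math.random())        --> real number in [0,1)
+--     print(math.random(6))       --> integer in [1,6]
+--     print(math.random(10, 20))  --> integer in [10,20]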
+function math.random(m , n) end + +--- +-- Sets `x` as the "seed" for the pseudo-random generator: equal seeds +-- produce equal sequences of numbers. +function math.randomseed(x) end + +--- +-- Returns the sine of `x` (assumed to be in radians). +function math.sin(x) end + +--- +-- Returns the hyperbolic sine of `x`. +function math.sinh(x) end + +--- +-- Returns the square root of `x`. (You can also use the expression `x^0.5` +-- to compute this value.) +function math.sqrt(x) end + +--- +-- Returns the tangent of `x` (assumed to be in radians). +function math.tan(x) end + +--- +-- Returns the hyperbolic tangent of `x`. +function math.tanh(x) end + diff --git a/luaejdb/tools/ldoc/ldoc/builtin/os.lua b/luaejdb/tools/ldoc/ldoc/builtin/os.lua new file mode 100644 index 0000000..e5980fe --- /dev/null +++ b/luaejdb/tools/ldoc/ldoc/builtin/os.lua @@ -0,0 +1,110 @@ +--- Operating System facilities like date, time and program execution. + +module 'os' + +--- +-- Returns an approximation of the amount in seconds of CPU time used by +-- the program. +function os.clock() end + +--- +-- Returns a string or a table containing date and time, formatted according +-- to the given string `format`. +-- +-- If the `time` argument is present, this is the time to be formatted +-- (see the `os.time` function for a description of this value). Otherwise, +-- `date` formats the current time. +-- +-- If `format` starts with '`!`', then the date is formatted in Coordinated +-- Universal Time. After this optional character, if `format` is the string +-- "`*t`", then `date` returns a table with the following fields: +-- +-- * `year` (four digits) +-- * `month` (1--12) +-- * `day` (1--31) +-- * `hour` (0--23) +-- * `min` (0--59) +-- * `sec` (0--61) +-- * `wday` (weekday, Sunday is 1) +-- * `yday` (day of the year) +-- * `isdst` (daylight saving flag, a boolean). +-- +-- If `format` is not "`*t`", then `date` returns the date as a string, +-- formatted according to the same rules as the C function `strftime`. +-- When called without arguments, `date` returns a reasonable date and time +-- representation that depends on the host system and on the current locale +-- (that is, `os.date()` is equivalent to `os.date("%c")`). +function os.date(format , time) end + +--- +-- Returns the number of seconds from time `t1` to time `t2`. In POSIX, +-- Windows, and some other systems, this value is exactly `t2`*-*`t1`. +function os.difftime(t2, t1) end + +--- +-- This function is equivalent to the C function `system`. It passes +-- `command` to be executed by an operating system shell. It returns a status +-- code, which is system-dependent. If `command` is absent, then it returns +-- nonzero if a shell is available and zero otherwise. +function os.execute(command) end + +--- +-- Calls the C function `exit`, with an optional `code`, to terminate the +-- host program. The default value for `code` is the success code. +function os.exit(code) end + +--- +-- Returns the value of the process environment variable `varname`, or +-- nil if the variable is not defined. +function os.getenv(varname) end + +--- +-- Deletes the file or directory with the given name. Directories must be +-- empty to be removed. If this function fails, it returns nil, plus a string +-- describing the error. +function os.remove(filename) end + +--- +-- Renames file or directory named `oldname` to `newname`. If this function +-- fails, it returns nil, plus a string describing the error. 
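+--
+-- A minimal illustrative sketch (the file names are hypothetical):
+--
+--     local ok, err = os.rename("old.txt", "new.txt")
+--     if not ok then print("rename failed: " .. err) end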
+function os.rename(oldname, newname) end + +--- +-- Sets the current locale of the program. `locale` is a string specifying +-- a locale; `category` is an optional string describing which category to +-- change: `"all"`, `"collate"`, `"ctype"`, `"monetary"`, `"numeric"`, or +-- `"time"`; the default category is `"all"`. The function returns the name +-- of the new locale, or nil if the request cannot be honored. +-- If `locale` is the empty string, the current locale is set to an +-- implementation-defined native locale. If `locale` is the string "`C`", +-- the current locale is set to the standard C locale. +-- When called with nil as the first argument, this function only returns +-- the name of the current locale for the given category. +function os.setlocale(locale , category) end + +--- +-- Returns the current time when called without arguments, or a time +-- representing the date and time specified by the given table. This table +-- must have fields `year`, `month`, and `day`, and may have fields `hour`, +-- `min`, `sec`, and `isdst` (for a description of these fields, see the +-- `os.date` function). +-- The returned value is a number, whose meaning depends on your system. In +-- POSIX, Windows, and some other systems, this number counts the number +-- of seconds since some given start time (the "epoch"). In other systems, +-- the meaning is not specified, and the number returned by `time` can be +-- used only as an argument to `date` and `difftime`. +function os.time(table) end + +--- +-- Returns a string with a file name that can be used for a temporary +-- file. The file must be explicitly opened before its use and explicitly +-- removed when no longer needed. +-- On some systems (POSIX), this function also creates a file with that +-- name, to avoid security risks. (Someone else might create the file with +-- wrong permissions in the time between getting the name and creating the +-- file.) You still have to open the file to use it and to remove it (even +-- if you do not use it). +-- When possible, you may prefer to use `io.tmpfile`, which automatically +-- removes the file when the program ends. +function os.tmpname() end + diff --git a/luaejdb/tools/ldoc/ldoc/builtin/package.lua b/luaejdb/tools/ldoc/ldoc/builtin/package.lua new file mode 100644 index 0000000..d9a7237 --- /dev/null +++ b/luaejdb/tools/ldoc/ldoc/builtin/package.lua @@ -0,0 +1,95 @@ +--- controlling how `require` finds packages. + +module 'package' + +--- +-- The path used by `require` to search for a C loader. +-- Lua initializes the C path `package.cpath` in the same way it initializes +-- the Lua path `package.path`, using the environment variable `LUA_CPATH` +-- or a default path defined in `luaconf.h`. +-- function package.cpath end +-- * `package.cpath`: package.cpath + +--- +-- A table used by `require` to control which modules are already +-- loaded. When you require a module `modname` and `package.loaded[modname]` +-- is not false, `require` simply returns the value stored there. +-- function package.loaded end +-- * `package.loaded`: package.loaded + +--- +-- A table used by `require` to control how to load modules. +-- Each entry in this table is a *searcher function*. When looking for a module, +-- `require` calls each of these searchers in ascending order, with the module +-- name (the argument given to `require`) as its sole parameter. The function +-- can return another function (the module *loader*) or a string explaining +-- why it did not find that module (or nil if it has nothing to say). 
Lua +-- initializes this table with four functions. +-- The first searcher simply looks for a loader in the `package.preload` table. +-- The second searcher looks for a loader as a Lua library, using the path +-- stored at `package.path`. A path is a sequence of *templates* separated by +-- semicolons. For each template, the searcher will change each interrogation +-- mark in the template by `filename`, which is the module name with each dot +-- replaced by a "directory separator" (such as "`/`" in Unix); then it will +-- try to open the resulting file name. So, for instance, if the Lua path is +-- the string +-- "./?.lua;./?.lc;/usr/local/?/init.lua" +-- the search for a Lua file for module `foo` will try to open the files +-- `./foo.lua`, `./foo.lc`, and `/usr/local/foo/init.lua`, in that order. +-- The third searcher looks for a loader as a C library, using the path given +-- by the variable `package.cpath`. For instance, if the C path is the string +-- "./?.so;./?.dll;/usr/local/?/init.so" +-- the searcher for module `foo` will try to open the files `./foo.so`, +-- `./foo.dll`, and `/usr/local/foo/init.so`, in that order. Once it finds +-- a C library, this searcher first uses a dynamic link facility to link the +-- application with the library. Then it tries to find a C function inside the +-- library to be used as the loader. The name of this C function is the string +-- "`luaopen_`" concatenated with a copy of the module name where each dot +-- is replaced by an underscore. Moreover, if the module name has a hyphen, +-- its prefix up to (and including) the first hyphen is removed. For instance, +-- if the module name is `a.v1-b.c`, the function name will be `luaopen_b_c`. +-- The fourth searcher tries an *all-in-one loader*. It searches the C +-- path for a library for the root name of the given module. For instance, +-- when requiring `a.b.c`, it will search for a C library for `a`. If found, +-- it looks into it for an open function for the submodule; in our example, +-- that would be `luaopen_a_b_c`. With this facility, a package can pack +-- several C submodules into one single library, with each submodule keeping +-- its original open function. +-- function package.loaders end +-- * `package.loaders`: package.loaders + +--- +-- Dynamically links the host program with the C library `libname`. Inside +-- this library, looks for a function `funcname` and returns this function as a +-- C function. (So, `funcname` must follow the protocol (see `lua_CFunction`)). +-- This is a low-level function. It completely bypasses the package and module +-- system. Unlike `require`, it does not perform any path searching and does +-- not automatically adds extensions. `libname` must be the complete file name +-- of the C library, including if necessary a path and extension. `funcname` +-- must be the exact name exported by the C library (which may depend on the +-- C compiler and linker used). +-- This function is not supported by ANSI C. As such, it is only available +-- on some platforms (Windows, Linux, Mac OS X, Solaris, BSD, plus other Unix +-- systems that support the `dlfcn` standard). +function package.loadlib(libname, funcname) end + +--- +-- The path used by `require` to search for a Lua loader. +-- At start-up, Lua initializes this variable with the value of the environment +-- variable `LUA_PATH` or with a default path defined in `luaconf.h`, if +-- the environment variable is not defined. Any "`;;`" in the value of the +-- environment variable is replaced by the default path. 
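+--
+-- For illustration, extra search templates can be prepended at run time
+-- (the directory shown is hypothetical):
+--
+--     package.path = "./mylibs/?.lua;" .. package.path
+--     -- require "foo" will now also look for ./mylibs/foo.lua
+--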
+-- function package.path end +-- * `package.path`: package.path + +--- +-- A table to store loaders for specific modules (see `require`). +-- function package.preload end +-- * `package.preload`: package.preload + +--- +-- Sets a metatable for `module` with its `__index` field referring to the +-- global environment, so that this module inherits values from the global +-- environment. To be used as an option to function `module`. +function package.seeall(module) end + diff --git a/luaejdb/tools/ldoc/ldoc/builtin/string.lua b/luaejdb/tools/ldoc/ldoc/builtin/string.lua new file mode 100644 index 0000000..34c1610 --- /dev/null +++ b/luaejdb/tools/ldoc/ldoc/builtin/string.lua @@ -0,0 +1,172 @@ +--- string operations like searching and matching. + +module 'string' + +--- +-- Returns the internal numerical codes of the characters `s[i]`, `s[i+1]`, +-- ..., `s[j]`. The default value for `i` is 1; the default value for `j` +-- is `i`. +-- Note that numerical codes are not necessarily portable across platforms. +function string.byte(s , i , j) end + +--- +-- Receives zero or more integers. Returns a string with length equal to +-- the number of arguments, in which each character has the internal numerical +-- code equal to its corresponding argument. +-- Note that numerical codes are not necessarily portable across platforms. +function string.char(...) end + +--- +-- Returns a string containing a binary representation of the given +-- function, so that a later `loadstring` on this string returns a copy of +-- the function. `function` must be a Lua function without upvalues. +function string.dump(function) end + +--- +-- Looks for the first match of `pattern` in the string `s`. If it finds a +-- match, then `find` returns the indices of `s` where this occurrence starts +-- and ends; otherwise, it returns nil. A third, optional numerical argument +-- `init` specifies where to start the search; its default value is 1 and +-- can be negative. A value of true as a fourth, optional argument `plain` +-- turns off the pattern matching facilities, so the function does a plain +-- "find substring" operation, with no characters in `pattern` being considered +-- "magic". Note that if `plain` is given, then `init` must be given as well. +-- If the pattern has captures, then in a successful match the captured values +-- are also returned, after the two indices. +function string.find(s, pattern , init , plain) end + +--- +-- Returns a formatted version of its variable number of arguments following +-- the description given in its first argument (which must be a string). The +-- format string follows the same rules as the `printf` family of standard C +-- functions. The only differences are that the options/modifiers `*`, `l`, +-- `L`, `n`, `p`, and `h` are not supported and that there is an extra option, +-- `q`. The `q` option formats a string in a form suitable to be safely read +-- back by the Lua interpreter: the string is written between double quotes, +-- and all double quotes, newlines, embedded zeros, and backslashes in the +-- string are correctly escaped when written. For instance, the call +-- +-- string.format('%q', 'a string with "quotes" and \n new line') +-- +-- will produce the string: +-- +-- "a string with \"quotes\" and \ +-- new line" +-- +-- The options `c`, `d`, `E`, `e`, `f`, `g`, `G`, `i`, `o`, `u`, `X`, and +-- `x` all expect a number as argument, whereas `q` and `s` expect a string. 
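+-- For instance (an illustrative call, with made-up values):
+--
+--     string.format("%s scored %5.2f%%", "Ana", 97.5)
+--       --> "Ana scored 97.50%"
+--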
+-- This function does not accept string values containing embedded zeros, +-- except as arguments to the `q` option. +function string.format(formatstring, ...) end + +--- +-- Returns an iterator function that, each time it is called, returns the +-- next captures from `pattern` over string `s`. If `pattern` specifies no +-- captures, then the whole match is produced in each call. +-- As an example, the following loop +-- +-- s = "hello world from Lua" +-- for w in string.gmatch(s, "%a+") do +-- print(w) +-- end +-- +-- will iterate over all the words from string `s`, printing one per line. The +-- next example collects all pairs `key=value` from the given string into +-- a table: +-- +-- t = {} +-- s = "from=world, to=Lua" +-- for k, v in string.gmatch(s, "(%w+)=(%w+)") do +-- t[k] = v +-- end +-- +-- For this function, a '`^`' at the start of a pattern does not work as an +-- anchor, as this would prevent the iteration. +function string.gmatch(s, pattern) end + +--- +-- Returns a copy of `s` in which all (or the first `n`, if given) +-- occurrences of the `pattern` have been replaced by a replacement string +-- specified by `repl`, which can be a string, a table, or a function. `gsub` +-- also returns, as its second value, the total number of matches that occurred. +-- +-- If `repl` is a string, then its value is used for replacement. The character +-- `%` works as an escape character: any sequence in `repl` of the form `%n`, +-- with *n* between 1 and 9, stands for the value of the *n*-th captured +-- substring (see below). The sequence `%0` stands for the whole match. The +-- sequence `%%` stands for a single `%`. +-- +-- If `repl` is a table, then the table is queried for every match, using +-- the first capture as the key; if the pattern specifies no captures, then +-- the whole match is used as the key. +-- +-- If `repl` is a function, then this function is called every time a match +-- occurs, with all captured substrings passed as arguments, in order; if +-- the pattern specifies no captures, then the whole match is passed as a +-- sole argument. +-- +-- If the value returned by the table query or by the function call is a +-- string or a number, then it is used as the replacement string; otherwise, +-- if it is false or nil, then there is no replacement (that is, the original +-- match is kept in the string). +-- +-- Here are some examples: +-- x = string.gsub("hello world", "(%w+)", "%1 %1") +-- --> x="hello hello world world" +-- x = string.gsub("hello world", "%w+", "%0 %0", 1) +-- --> x="hello hello world" +-- x = string.gsub("hello world from Lua", "(%w+)%s*(%w+)", "%2 %1") +-- --> x="world hello Lua from" +-- x = string.gsub("home = $HOME, user = $USER", "%$(%w+)", os.getenv) +-- --> x="home = /home/roberto, user = roberto" +-- x = string.gsub("4+5 = $return 4+5$", "%$(.-)%$", function (s) +-- return loadstring(s)() +-- end) +-- --> x="4+5 = 9" +-- local t = {name="lua", version="5.1"} +-- x = string.gsub("$name-$version.tar.gz", "%$(%w+)", t) +-- --> x="lua-5.1.tar.gz" +function string.gsub(s, pattern, repl , n) end + +--- +-- Receives a string and returns its length. The empty string `""` has +-- length 0. Embedded zeros are counted, so `"a\000bc\000"` has length 5. +function string.len(s) end + +--- +-- Receives a string and returns a copy of this string with all uppercase +-- letters changed to lowercase. All other characters are left unchanged. The +-- definition of what an uppercase letter is depends on the current locale. 
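+--
+-- For example:
+--
+--     string.lower("Hello, World!")   --> "hello, world!"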
+function string.lower(s) end + +--- +-- Looks for the first *match* of `pattern` in the string `s`. If it +-- finds one, then `match` returns the captures from the pattern; otherwise +-- it returns nil. If `pattern` specifies no captures, then the whole match +-- is returned. A third, optional numerical argument `init` specifies where +-- to start the search; its default value is 1 and can be negative. +function string.match(s, pattern , init) end + +--- +-- Returns a string that is the concatenation of `n` copies of the string +-- `s`. +function string.rep(s, n) end + +--- +-- Returns a string that is the string `s` reversed. +function string.reverse(s) end + +--- +-- Returns the substring of `s` that starts at `i` and continues until +-- `j`; `i` and `j` can be negative. If `j` is absent, then it is assumed to +-- be equal to -1 (which is the same as the string length). In particular, +-- the call `string.sub(s,1,j)` returns a prefix of `s` with length `j`, and +-- `string.sub(s, -i)` returns a suffix of `s` with length `i`. +function string.sub(s, i , j) end + +--- +-- Receives a string and returns a copy of this string with all lowercase +-- letters changed to uppercase. All other characters are left unchanged. The +-- definition of what a lowercase letter is depends on the current locale. +function string.upper(s) end + diff --git a/luaejdb/tools/ldoc/ldoc/builtin/table.lua b/luaejdb/tools/ldoc/ldoc/builtin/table.lua new file mode 100644 index 0000000..d65b6d9 --- /dev/null +++ b/luaejdb/tools/ldoc/ldoc/builtin/table.lua @@ -0,0 +1,41 @@ +--- manipulating Lua tables. + +module 'table' + +--- +-- Given an array where all elements are strings or numbers, returns +-- `table[i]..sep..table[i+1] ... sep..table[j]`. The default value for +-- `sep` is the empty string, the default for `i` is 1, and the default for +-- `j` is the length of the table. If `i` is greater than `j`, returns the +-- empty string. +function table.concat(table , sep , i , j) end + +--- +-- Inserts element `value` at position `pos` in `table`, shifting up +-- other elements to open space, if necessary. The default value for `pos` is +-- `n+1`, where `n` is the length of the table (see §2.5.5), so that a call +-- `table.insert(t,x)` inserts `x` at the end of table `t`. +function table.insert(table, pos, value) end + +--- +-- Returns the largest positive numerical index of the given table, or +-- zero if the table has no positive numerical indices. (To do its job this +-- function does a linear traversal of the whole table.) +function table.maxn(table) end + +--- +-- Removes from `table` the element at position `pos`, shifting down other +-- elements to close the space, if necessary. Returns the value of the removed +-- element. The default value for `pos` is `n`, where `n` is the length of the +-- table, so that a call `table.remove(t)` removes the last element of table +-- `t`. +function table.remove(table , pos) end + +--- +-- Sorts table elements in a given order, +-- *in-place*, from `table[1]` to `table[n]`, where `n` is the length of the +-- table. If `comp` is given, then it must be a function that receives two +-- table elements, and returns true when the first is less than the second +-- (so that `not comp(a[i+1],a[i])` will be true after the sort). If `comp` +-- is not given, then the '<' operator will be used. 
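+--
+-- A small illustrative example (descending order via a custom `comp`):
+--
+--     local t = { 3, 1, 2 }
+--     table.sort(t, function (a, b) return a > b end)
+--     -- t is now { 3, 2, 1 }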
+function table.sort(table , comp) end diff --git a/luaejdb/tools/ldoc/ldoc/config.ld b/luaejdb/tools/ldoc/ldoc/config.ld new file mode 100644 index 0000000..9c220ba --- /dev/null +++ b/luaejdb/tools/ldoc/ldoc/config.ld @@ -0,0 +1,11 @@ +project = 'Lua' +description = 'Lua Standard Libraries' +full_description = [[ +These are the built-in libraries of Lua 5.1 + +Plus documentation for lpeg and luafilesystem. +]] +file = {'builtin',exclude = {'builtin/globals.lua'}} +no_summary = true +no_return_or_parms = true +format = 'discount' diff --git a/luaejdb/tools/ldoc/ldoc/doc.lua b/luaejdb/tools/ldoc/ldoc/doc.lua new file mode 100644 index 0000000..46b381c --- /dev/null +++ b/luaejdb/tools/ldoc/ldoc/doc.lua @@ -0,0 +1,967 @@ +------ +-- Defining the ldoc document model. + + +local class = require 'pl.class' +local utils = require 'pl.utils' +local List = require 'pl.List' +local Map = require 'pl.Map' + +local doc = {} +local global = require 'ldoc.builtin.globals' +local tools = require 'ldoc.tools' +local split_dotted_name = tools.split_dotted_name + +local TAG_MULTI,TAG_ID,TAG_SINGLE,TAG_TYPE,TAG_FLAG,TAG_MULTI_LINE = 'M','id','S','T','N','ML' + +-- these are the basic tags known to ldoc. They come in several varieties: +-- - 'M' tags with multiple values like 'param' (TAG_MULTI) +-- - 'id' tags which are identifiers, like 'name' (TAG_ID) +-- - 'S' tags with a single value, like 'release' (TAG_SINGLE) +-- - 'N' tags which have no associated value, like 'local` (TAG_FLAG) +-- - 'T' tags which represent a type, like 'function' (TAG_TYPE) +local known_tags = { + param = 'M', see = 'M', usage = 'ML', ['return'] = 'M', field = 'M', author='M'; + class = 'id', name = 'id', pragma = 'id', alias = 'id', within = 'id', + copyright = 'S', summary = 'S', description = 'S', release = 'S', license = 'S', + fixme = 'S', todo = 'S', warning = 'S', raise = 'S', + ['local'] = 'N', export = 'N', private = 'N', constructor = 'N', static = 'N'; + -- project-level + module = 'T', script = 'T', example = 'T', topic = 'T', submodule='T', + -- module-level + ['function'] = 'T', lfunction = 'T', table = 'T', section = 'T', type = 'T', + annotation = 'T', factory = 'T'; + +} +known_tags._alias = {} +known_tags._project_level = { + module = true, + script = true, + example = true, + topic = true, + submodule = true; +} + +known_tags._code_types = { + module = true, + script = true +} + +known_tags._module_info = { + 'copyright','release','license','author' +} + +local see_reference_handlers = {} + + +doc.TAG_MULTI,doc.TAG_ID,doc.TAG_SINGLE,doc.TAG_TYPE,doc.TAG_FLAG = + TAG_MULTI,TAG_ID,TAG_SINGLE,TAG_TYPE,TAG_FLAG + +-- add a new tag. +function doc.add_tag(tag,type,project_level) + if not known_tags[tag] then + known_tags[tag] = type + known_tags._project_level[tag] = project_level + end +end + +function doc.add_custom_see_handler(pat,action) + see_reference_handlers[pat] = action +end + +-- add an alias to an existing tag (exposed through ldoc API) +function doc.add_alias (a,tag) + known_tags._alias[a] = tag +end + +-- get the tag alias value, if it exists. +function doc.get_alias(tag) + return known_tags._alias[tag] +end + +-- is it a'project level' tag, such as 'module' or 'script'? +function doc.project_level(tag) + return known_tags._project_level[tag] +end + +-- is it a project level tag containing code? +function doc.code_tag (tag) + return known_tags._code_types[tag] +end + +-- is it a section tag? 
+function doc.section_tag (tag) + return tag == 'section' or doc.class_tag(tag) +end + +-- is it a class tag, like 'type' or 'factory'? +function doc.class_tag (tag) + return tag == 'type' or tag == 'factory' +end + +function doc.module_info_tags () + return List.iter(known_tags._module_info) +end + + +-- annotation tags can appear anywhere in the code and may contain of these tags: +known_tags._annotation_tags = { + fixme = true, todo = true, warning = true +} + +local acount = 1 + +function doc.expand_annotation_item (tags, last_item) + if tags.summary ~= '' then return false end + for tag, value in pairs(tags) do + if known_tags._annotation_tags[tag] then + tags.class = 'annotation' + tags.summary = value + local item_name = last_item and last_item.tags.name or '?' + tags.name = item_name..'-'..tag..acount + acount = acount + 1 + return true + end + end +end + +-- we process each file, resulting in a File object, which has a list of Item objects. +-- Items can be modules, scripts ('project level') or functions, tables, etc. +-- (In the code 'module' refers to any project level tag.) +-- When the File object is finalized, we specialize some items as modules which +-- are 'container' types containing functions and tables, etc. + +local File = class() +local Item = class() +local Module = class(Item) -- a specialized kind of Item + +doc.File = File +doc.Item = Item +doc.Module = Module + +function File:_init(filename) + self.filename = filename + self.items = List() + self.modules = List() + self.sections = List() +end + +function File:new_item(tags,line) + local item = Item(tags,self,line or 1) + self.items:append(item) + return item +end + +function File:export_item (name) + for item in self.items:iter() do + local tags = item.tags + if tags.name == name then + if tags['local'] then + tags['local'] = nil + end + return + end + end + -- warn if any of these guys are not found, indicating no + -- documentation was given. 
+ self:warning('no docs '..tools.quote(name)) +end + + +local function has_prefix (name,prefix) + local i1,i2 = name:find(prefix) + return i1 == 1 and i2 == #prefix +end + +local function mod_section_type (this_mod) + return this_mod and this_mod.section and this_mod.section.type +end + +local function find_module_in_files (name) + for f in File.list:iter() do + for m in f.modules:iter() do + if m.name == name then + return m,f.filename + end + end + end +end + +function File:finish() + local this_mod + local items = self.items + local tagged_inside + local function add_section (item, display_name) + display_name = display_name or item.display_name + this_mod.section = item + this_mod.kinds:add_kind(display_name,display_name,nil,item) + this_mod.sections:append(item) + this_mod.sections.by_name[display_name:gsub('%A','_')] = item + end + for item in items:iter() do + if mod_section_type(this_mod) == 'factory' and item.tags then + local klass = '@{'..this_mod.section.name..'}' + -- Factory constructors return the object type, and methods all have implicit self argument + if item.tags.constructor and not item.tags['return'] then + item.tags['return'] = List{klass} + elseif item.tags.param then + item.tags.param:put('self '..klass) + end + end + item:finish() + if doc.project_level(item.type) then + this_mod = item + local package,mname,submodule + if item.type == 'module' then + -- if name is 'package.mod', then mod_name is 'mod' + package,mname = split_dotted_name(this_mod.name) + if self.args.merge then + local mod,mf = find_module_in_files(item.name) + if mod then + print('found master module',mf) + this_mod = mod + submodule = true + end + end + elseif item.type == 'submodule' then + local mf + submodule = true + this_mod,mf = find_module_in_files(item.name) + if this_mod == nil then + self:error("'"..item.name.."' not found for submodule") + end + tagged_inside = tools.this_module_name(self.base,self.filename)..' Functions' + this_mod.kinds:add_kind(tagged_inside, tagged_inside) + end + if not package then + mname = this_mod.name + package = '' + end + if not submodule then + this_mod.package = package + this_mod.mod_name = mname + this_mod.kinds = ModuleMap() -- the iterator over the module contents + self.modules:append(this_mod) + end + elseif doc.section_tag(item.type) then + local display_name = item.name + if display_name == 'end' then + this_mod.section = nil + else + local summary = item.summary:gsub('%.$','') + if doc.class_tag(item.type) then + display_name = 'Class '..item.name + item.module = this_mod + this_mod.items.by_name[item.name] = item + else + display_name = summary + end + item.display_name = display_name + add_section(item) + end + else + local to_be_removed + -- add the item to the module's item list + if this_mod then + -- new-style modules will have qualified names like 'mod.foo' + local mod,fname = split_dotted_name(item.name) + -- warning for inferred unqualified names in new style modules + -- (retired until we handle methods like Set:unset() properly) + if not mod and not this_mod.old_style and item.inferred then + --item:warning(item.name .. ' is declared in global scope') + end + -- the function may be qualified with a module alias... 
+ local alias = this_mod.tags.alias + if (alias and mod == alias) or mod == 'M' or mod == '_M' then + mod = this_mod.mod_name + end + -- if that's the mod_name, then we want to only use 'foo' + if mod == this_mod.mod_name and this_mod.tags.pragma ~= 'nostrip' then + item.name = fname + end + + local enclosing_section + if tagged_inside then + item.tags.within = tagged_inside + end + if item.tags.within then + local name = item.tags.within + this_mod.kinds:add_kind(name, name) + enclosing_section = this_mod.section + this_mod.section = nil + end + + -- right, this item was within a section or a 'class' + local section_description + if this_mod.section then + local this_section = this_mod.section + item.section = this_section.display_name + -- if it was a class, then if the name is unqualified then it becomes + -- 'Class:foo' (unless flagged as being a constructor, static or not a function) + local stype = this_section.type + if doc.class_tag(stype) then + if not item.name:match '[:%.]' then -- not qualified + local class = this_section.name + local static = item.tags.constructor or item.tags.static or item.type ~= 'function' + item.name = class..(not static and ':' or '.')..item.name + end + if stype == 'factory' then + if item.tags.private then to_be_removed = true + elseif item.type == 'lfunction' then + item.type = 'function' + end + if item.tags.constructor then + item.section = item.type + end + end + end + section_description = this_section.summary..' '..this_section.description + elseif item.tags.within then + section_description = item.tags.within + item.section = section_description + else -- otherwise, just goes into the default sections (Functions,Tables,etc) + item.section = item.type + end + + item.module = this_mod + if not to_be_removed then + local these_items = this_mod.items + these_items.by_name[item.name] = item + these_items:append(item) + this_mod.kinds:add(item,these_items,section_description) + end + + -- restore current section after a 'within' + if enclosing_section then this_mod.section = enclosing_section end + + else + -- must be a free-standing function (sometimes a problem...) + end + end + end +end + +-- some serious hackery. We force sections into this 'module', +-- and ensure that there is a dummy item so that the section +-- is not empty. + +function File:add_document_section(title) + local section = title:gsub('%A','_') + self:new_item { + name = section, + class = 'section', + summary = title + } + self:new_item { + name = 'dumbo', + class = 'function', + } + return section +end + +function Item:_init(tags,file,line) + self.file = file + self.lineno = line + self.summary = tags.summary + self.description = tags.description + tags.summary = nil + tags.description = nil + self.tags = {} + self.formal_args = tags.formal_args + tags.formal_args = nil + local iter = tags.iter + if not iter then + iter = Map.iter + end + for tag in iter(tags) do + self:set_tag(tag,tags[tag]) + end +end + +function Item:add_to_description (rest) + if type(rest) == 'string' then + self.description = (self.description or '') .. rest + end +end + +function Item:set_tag (tag,value) + local ttype = known_tags[tag] + + if ttype == TAG_MULTI or ttype == TAG_MULTI_LINE then -- value is always a List! 
+ if getmetatable(value) ~= List then + value = List{value} + end + if ttype ~= TAG_MULTI_LINE then + local last = value[#value] + if type(last) == 'string' and last:match '\n' then + local line,rest = last:match('([^\n]+)(.*)') + value[#value] = line + self:add_to_description(rest) + end + end + self.tags[tag] = value + elseif ttype == TAG_ID then + local modifiers + if type(value) == 'table' then + if value.append then -- it was a List! + -- such tags are _not_ multiple, e.g. name + self:error("'"..tag.."' cannot have multiple values") + end + value = value[1] + modifiers = value.modifiers + end + local id, rest = tools.extract_identifier(value) + self.tags[tag] = id + self:add_to_description(rest) + elseif ttype == TAG_SINGLE then + self.tags[tag] = value + elseif ttype == TAG_FLAG then + self.tags[tag] = true + self:add_to_description(value) + else + Item.warning(self,"unknown tag: '"..tag.."' "..tostring(ttype)) + end +end + +-- preliminary processing of tags. We check for any aliases, and for tags +-- which represent types. This implements the shortcut notation. +function Item.check_tag(tags,tag, value, modifiers) + local alias = doc.get_alias(tag) + if alias then + if type(alias) == 'string' then + tag = alias + else + local avalue,amod + tag, avalue, amod = alias[1],alias.value,alias.modifiers + if avalue then value = avalue..' '..value end + if amod then + modifiers = modifiers or {} + for m,v in pairs(amod) do + local idx = v:match('^%$(%d+)') + if idx then + v, value = value:match('(%S+)(.*)') + end + modifiers[m] = v + end + end + end + end + local ttype = known_tags[tag] + if ttype == TAG_TYPE then + tags:add('class',tag) + tag = 'name' + end + return tag, value, modifiers +end + +-- any tag (except name and classs) may have associated modifiers, +-- in the form @tag[m1,...] where m1 is either name1=value1 or name1. +-- At this stage, these are encoded +-- in the tag value table and need to be extracted. + +local function extract_value_modifier (p) + if type(p)~='table' then + return p, { } + else + return p[1], p.modifiers or { } + end +end + +local function extract_tag_modifiers (tags) + local modifiers, mods = {} + for tag, value in pairs(tags) do + if type(value)=='table' and value.append then -- i.e. it is a List! + local tmods = {} + for i, v in ipairs(value) do + v, mods = extract_value_modifier(v) + tmods[i] = mods + value[i] = v + end + modifiers[tag] = tmods + else + value, mods = extract_value_modifier(value) + modifiers[tag] = mods + tags[tag] = value + end + end + return modifiers +end + +local function read_del (tags,name) + local ret = tags[name] + tags[name] = nil + return ret +end + +local build_arg_list, split_iden -- forward declaration + + +function Item:finish() + local tags = self.tags + local quote = tools.quote + self.name = read_del(tags,'name') + self.type = read_del(tags,'class') + self.modifiers = extract_tag_modifiers(tags) + self.usage = read_del(tags,'usage') + -- see tags are multiple, but they may also be comma-separated + if tags.see then + tags.see = tools.expand_comma_list(read_del(tags,'see')) + end + if doc.project_level(self.type) then + -- we are a module, so become one! + self.items = List() + self.sections = List() + self.items.by_name = {} + self.sections.by_name = {} + setmetatable(self,Module) + elseif not doc.section_tag(self.type) then + -- params are either a function's arguments, or a table's fields, etc. 
+ if self.type == 'function' then + self.parameter = 'param' + self.ret = read_del(tags,'return') + self.raise = read_del(tags,'raise') + if tags['local'] then + self.type = 'lfunction' + end + else + self.parameter = 'field' + end + local field = self.parameter + local params = read_del(tags,field) + -- use of macros like @string (which is short for '@tparam string') + -- can lead to param tags associated with a table. + if self.parameter == 'field' and tags.param then + local tparams = read_del(tags,'param') + if params then + params:extend(tparams) + List(self.modifiers.field):extend(self.modifiers.param) + else + params = tparams + self.modifiers.field = self.modifiers.param + end + end + local param_names, comments = List(), List() + if params then + for line in params:iter() do + local name, comment = line:match('%s*([%w_%.:]+)(.*)') + if not name then + self:error("bad param name format '"..line.."'. Are you missing a parameter name?") + end + param_names:append(name) + comments:append(comment) + end + end + self.modifiers['return'] = self.modifiers['return'] or List() + self.modifiers[field] = self.modifiers[field] or List() + -- we use the formal arguments (if available) as the authoritative list. + -- If there are both params and formal args, then they must match; + -- (A formal argument of ... may match any number of params at the end, however.) + -- If there are formal args and no params, we see if the args have any suitable comments. + -- Params may have subfields. + local fargs, formal = self.formal_args + if fargs then + if #param_names == 0 then + --docs may be embedded in argument comments; in either case, use formal arg names + formal = List() + if fargs.return_comment then + local retc = self:parse_argument_comment(fargs.return_comment,'return') + self.ret = List{retc} + end + for i, name in ipairs(fargs) do + formal:append(name) + comments:append(self:parse_argument_comment(fargs.comments[name],self.parameter)) + end + elseif #fargs > 0 then + local varargs = fargs[#fargs] == '...' + if varargs then table.remove(fargs) end + local k = 0 + for _,pname in ipairs(param_names) do + local _,field = split_iden(pname) + if not field then + k = k + 1 + if k > #fargs then + if not varargs then + self:warning("extra param with no formal argument: "..quote(pname)) + end + elseif pname ~= fargs[k] then + self:warning("param and formal argument name mismatch: "..quote(pname).." "..quote(fargs[k])) + end + end + end + if k < #fargs then + for i = k+1,#fargs do + if fargs[i] ~= '...' 
then + self:warning("undocumented formal argument: "..quote(fargs[i])) + end + end + end + end + end + + -- the comments are associated with each parameter by + -- adding name-value pairs to the params list (this is + -- also done for any associated modifiers) + -- (At this point we patch up any subparameter references) + local pmods = self.modifiers[field] + local params, fields = List() + local original_names = formal and formal or param_names + local names = List() + self.subparams = {} + for i,name in ipairs(original_names) do + local pname,field = split_iden(name) + if field then + if not fields then + fields = List() + self.subparams[pname] = fields + end + fields:append(name) + else + names:append(name) + params:append(name) + fields = nil + end + + params[name] = comments[i] + if pmods then + pmods[name] = pmods[i] + end + end + self.params = params + self.args = build_arg_list (names,pmods) + end +end + +-- ldoc allows comments in the formal arg list to be used, if they aren't specified with @param +-- Further, these comments may start with a type followed by a colon, and are then equivalent +-- to a @tparam +function Item:parse_argument_comment (comment,field) + if comment then + comment = comment:gsub('^%-+%s*','') + local type,rest = comment:match '([^:]+):(.*)' + if type then + self.modifiers[field]:append {type = type} + comment = rest + end + end + return comment or '' +end + +function split_iden (name) + if name == '...' then return name end + local pname,field = name:match('(.-)%.(.+)') + if not pname then + return name + else + return pname,field + end +end + +function build_arg_list (names,pmods) + -- build up the string representation of the argument list, + -- using any opt and optchain modifiers if present. + -- For instance, '(a [, b])' if b is marked as optional + -- with @param[opt] b + local buffer, npending = { }, 0 + local function acc(x) table.insert(buffer, x) end + for i = 1, #names do + local m = pmods and pmods[i] + local opt + if m then + if not m.optchain then + acc ((']'):rep(npending)) + npending=0 + end + opt = m.opt or m.optchain + if opt then + acc(' [') + npending=npending+1 + end + end + if i>1 then acc (', ') end + acc(names[i]) + if opt and opt ~= true then acc('='..opt) end + end + acc ((']'):rep(npending)) + return '('..table.concat(buffer)..')' +end + +function Item:type_of_param(p) + local mods = self.modifiers[self.parameter] + if not mods then return '' end + local mparam = mods[p] + return mparam and mparam.type or '' +end + +function Item:type_of_ret(idx) + local rparam = self.modifiers['return'][idx] + return rparam and rparam.type or '' +end + +function Item:subparam(p) + if self.subparams[p] then + return self.subparams[p],p + else + return {p},nil + end +end + +function Item:display_name_of(p) + local pname,field = split_iden(p) + if field then + return field + else + return pname + end +end + + +function Item:warning(msg) + local file = self.file and self.file.filename + if type(file) == 'table' then require 'pl.pretty'.dump(file); file = '?' end + file = file or '?' + io.stderr:write(file,':',self.lineno or '1',': ',self.name or '?',': ',msg,'\n') + return nil +end + +function Item:error(msg) + self:warning(msg) + os.exit(1) +end + +Module.warning, Module.error = Item.warning, Item.error + + +-------- Resolving References ----------------- + +function Module:hunt_for_reference (packmod, modules) + local mod_ref + local package = self.package or '' + repeat -- same package? 
+ local nmod = package..'.'..packmod + mod_ref = modules.by_name[nmod] + if mod_ref then break end -- cool + package = split_dotted_name(package) + until not package + return mod_ref +end + +local err = io.stderr + +local function custom_see_references (s) + for pat, action in pairs(see_reference_handlers) do + if s:match(pat) then + local label, href = action(s:match(pat)) + if not label then print('custom rule failed',s,pat,href) end + return {href = href, label = label} + end + end +end + +local function reference (s, mod_ref, item_ref) + local name = item_ref and item_ref.name or '' + -- this is deeply hacky; classes have 'Class ' prepended. + if item_ref and doc.class_tag(item_ref.type) then + name = 'Class_'..name + end + return {mod = mod_ref, name = name, label=s} +end + +function Module:process_see_reference (s,modules) + local mod_ref,fun_ref,name,packmod + local ref = custom_see_references(s) + if ref then return ref end + if not s:match '^[%w_%.%:%-]+$' or not s:match '[%w_]$' then + return nil, "malformed see reference: '"..s..'"' + end + -- is this a fully qualified module name? + local mod_ref = modules.by_name[s] + if mod_ref then return reference(s, mod_ref,nil) end + -- module reference? + mod_ref = self:hunt_for_reference(s, modules) + if mod_ref then return mod_ref end + -- method reference? (These are of form CLASS.NAME) + fun_ref = self.items.by_name[s] + if fun_ref then return reference(s,self,fun_ref) end + -- otherwise, start splitting! + local packmod,name = split_dotted_name(s) -- e.g. 'pl.utils','split' + if packmod then -- qualified name + mod_ref = modules.by_name[packmod] -- fully qualified mod name? + if not mod_ref then + mod_ref = self:hunt_for_reference(packmod, modules) + if not mod_ref then + local ref = global.lua_manual_ref(s) + if ref then return ref end + return nil,"module not found: "..packmod + end + end + fun_ref = mod_ref.items.by_name[name] + if fun_ref then + return reference(s,mod_ref,fun_ref) + else + fun_ref = mod_ref.sections.by_name[name] + if not fun_ref then + return nil,"function or section not found: "..s.." in "..mod_ref.name + else + return reference(fun_ref.name:gsub('_',' '),mod_ref,fun_ref) + end + end + else -- plain jane name; module in this package, function in this module + mod_ref = modules.by_name[self.package..'.'..s] + if mod_ref then return reference(s, mod_ref,nil) end + fun_ref = self.items.by_name[s] + if fun_ref then return reference(s, self,fun_ref) + else + local ref = global.lua_manual_ref (s) + if ref then return ref end + return nil, "function not found: "..s.." in this module" + end + end +end + +-- resolving @see references. A word may be either a function in this module, +-- or a module in this package. A MOD.NAME reference is within this package. +-- Otherwise, the full qualified name must be used. +-- First, check whether it is already a fully qualified module name. +-- Then split it and see if the module part is a qualified module +-- and try look up the name part in that module. +-- If this isn't successful then try prepending the current package to the reference, +-- and try to to resolve this. 
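+-- For instance, from inside a module 'mypack.utils' (all names here are illustrative):
+--   @see quote              -- an item defined in this module
+--   @see mypack.tablex      -- another module in the same package
+--   @see mypack.tablex.copy -- MOD.NAME form: item 'copy' of module 'mypack.tablex'
+--   @see string.format      -- otherwise we fall back to a Lua manual reference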
+function Module:resolve_references(modules) + local found = List() + for item in self.items:iter() do + local see = item.tags.see + if see then -- this guy has @see references + item.see = List() + for s in see:iter() do + local href, err = self:process_see_reference(s,modules) + if href then + item.see:append (href) + found:append{item,s} + elseif err then + item:warning(err) + end + end + end + end + -- mark as found, so we don't waste time re-searching + for f in found:iter() do + f[1].tags.see:remove_value(f[2]) + end +end + +-- suppress the display of local functions and annotations. +-- This is just a placeholder hack until we have a more general scheme +-- for indicating 'private' content of a module. +function Module:mask_locals () + self.kinds['Local Functions'] = nil + self.kinds['Annotations'] = nil +end + +function Item:dump_tags (taglist) + for tag, value in pairs(self.tags) do + if not taglist or taglist[tag] then + Item.warning(self,tag..' '..tostring(value)) + end + end +end + +function Module:dump_tags (taglist) + Item.dump_tags(self,taglist) + for item in self.items:iter() do + item:dump_tags(taglist) + end +end + +--------- dumping out modules and items ------------- + +local function dump_tags (tags) + if next(tags) then + print 'tags:' + for tag, value in pairs(tags) do + print('\t',tag,value) + end + end +end + +function Module:dump(verbose) + if self.type ~= 'module' then return end + print '----' + print(self.type..':',self.name,self.summary) + if self.description then print(self.description) end + dump_tags (self.tags) + for item in self.items:iter() do + item:dump(verbose) + end +end + +-- make a text dump of the contents of this File object. +-- The level of detail is controlled by the 'verbose' parameter. +-- Primarily intended as a debugging tool. +function File:dump(verbose) + for mod in self.modules:iter() do + mod:dump(verbose) + end +end + +function Item:dump(verbose) + local tags = self.tags + local name = self.name + if self.type == 'function' then + name = name .. self.args + end + if verbose then + print() + print(self.type,name) + print(self.summary) + if self.description and self.description:match '%S' then + print 'description:' + print(self.description) + end + if #self.params > 0 then + print 'parameters:' + for _,p in ipairs(self.params) do + print('',p,self.params[p]) + end + end + if self.ret and #self.ret > 0 then + print 'returns:' + for _,r in ipairs(self.ret) do + print('',r) + end + end + dump_tags(self.tags) + else + print('* '..name..' 
- '..self.summary) + end +end + +function doc.filter_objects_through_function(filter, module_list) + local quit, quote = utils.quit, tools.quote + if filter == 'dump' then filter = 'pl.pretty.dump' end + local mod,name = tools.split_dotted_name(filter) + local ok,P = pcall(require,mod) + if not ok then quit("cannot find module "..quote(mod)) end + local ok,f = pcall(function() return P[name] end) + if not ok or type(f) ~= 'function' then quit("dump module: no function "..quote(name)) end + + -- clean up some redundant and cyclical references-- + module_list.by_name = nil + for mod in module_list:iter() do + mod.kinds = nil + mod.file = mod.file.filename + for item in mod.items:iter() do + item.module = nil + item.file = nil + item.formal_args = nil + item.tags['return'] = nil + item.see = nil + end + mod.items.by_name = nil + end + + local ok,err = pcall(f,module_list) + if not ok then quit("dump failed: "..err) end +end + +return doc + diff --git a/luaejdb/tools/ldoc/ldoc/html.lua b/luaejdb/tools/ldoc/ldoc/html.lua new file mode 100644 index 0000000..1cd3ddd --- /dev/null +++ b/luaejdb/tools/ldoc/ldoc/html.lua @@ -0,0 +1,247 @@ +------ generating HTML output --------- +-- Although this can be generalized for outputting any format, since the template +-- is language-agnostic, this implementation concentrates on HTML. +-- This does the actual generation of HTML, and provides support functions in the ldoc +-- table for the template +-- +-- A fair amount of the complexity comes from operating in two basic modes; first, where +-- there is a number of modules (classic LuaDoc) or otherwise, where there is only one +-- module and the index contains the documentation for that module. +-- +-- Like LuaDoc, LDoc puts similar kinds of documentation files in their own directories. +-- So module docs go into 'modules/', scripts go into 'scripts/', and so forth. LDoc +-- generalizes the idea of these project-level categories and in fact custom categories +-- can be created (refered to as 'kinds' in the code) + +local List = require 'pl.List' +local utils = require 'pl.utils' +local path = require 'pl.path' +local stringx = require 'pl.stringx' +local template = require 'pl.template' +local tools = require 'ldoc.tools' +local markup = require 'ldoc.markup' +local prettify = require 'ldoc.prettify' +local doc = require 'ldoc.doc' +local html = {} + + +local quit = utils.quit + +local function cleanup_whitespaces(text) + local lines = stringx.splitlines(text) + for i = 1, #lines do + lines[i] = stringx.rstrip(lines[i]) + end + lines[#lines + 1] = "" -- Little trick: file should end with newline + return table.concat(lines, "\n") +end + +local function get_module_info(m) + local info = {} + for tag in doc.module_info_tags() do + local val = m.tags[tag] + if type(val)=='table' then + val = table.concat(val,',') + end + tag = stringx.title(tag) + info[tag] = val + end + if next(info) then + return info + end +end + +local escape_table = { ["'"] = "'", ["\""] = """, ["<"] = "<", [">"] = ">", ["&"] = "&" } + +function html.generate_output(ldoc, args, project) + local check_directory, check_file, writefile = tools.check_directory, tools.check_file, tools.writefile + + function ldoc.escape(str) + return (str:gsub("['&<>\"]", escape_table)) + end + + function ldoc.prettify(str) + return prettify.code('lua','usage',str,0,false) + end + + -- Item descriptions come from combining the summary and description fields + function ldoc.descript(item) + return (item.summary or '?')..' 
'..(item.description or '') + end + + -- this generates the internal module/function references + function ldoc.href(see) + if see.href then -- explict reference, e.g. to Lua manual + return see.href + else + return ldoc.ref_to_module(see.mod)..'#'..see.name + end + end + + -- this is either called from the 'root' (index or single module) or + -- from the 'modules' etc directories. If we are in one of those directories, + -- then linking to another kind is `../kind/name`; to the same kind is just `name`. + -- If we are in the root, then it is `kind/name`. + function ldoc.ref_to_module (mod) + local base = "" -- default: same directory + mod = mod or ldoc.module + local kind, module = mod.kind, ldoc.module + local name = mod.name -- default: name of module + if not ldoc.single then + if module then -- we are in kind/ + if module.type ~= type then -- cross ref to ../kind/ + base = "../"..kind.."/" + end + else -- we are in root: index + base = kind..'/' + end + else -- single module + if mod == ldoc.single then + name = ldoc.output + if not ldoc.root then base = '../' end + elseif ldoc.root then -- ref to other kinds (like examples) + base = kind..'/' + else + if module.type ~= type then -- cross ref to ../kind/ + base = "../"..kind.."/" + end + end + end + return base..name..'.html' + end + + function ldoc.use_li(ls) + if #ls > 1 then return '
  • ','
  • ' else return '','' end + end + + function ldoc.display_name(item) + local name = item.display_name or item.name + if item.type == 'function' or item.type == 'lfunction' then return name..' '..item.args + else return name end + end + + function ldoc.no_spaces(s) return (s:gsub('%A','_')) end + + function ldoc.titlecase(s) + return (s:gsub('(%a)(%a*)',function(f,r) + return f:upper()..r + end)) + end + + function ldoc.is_list (t) + return type(t) == 'table' and t.append + end + + function ldoc.typename (tp) + if not tp or tp == '' then return '' end + local optional + -- ? is short for ?nil| + if tp:match("^%?") and not tp:match '|' then + tp = '?|'..tp:sub(2) + end + local tp2 = tp:match("%?|?(.*)") + if tp2 then + optional = true + tp = tp2 + end + local types = {} + for name in tp:gmatch("[^|]+") do + local ref,err = markup.process_reference(name) + if ref then + types[#types+1] = ('%s'):format(ldoc.href(ref),ref.label or name) + else + types[#types+1] = ''..name..'' + end + end + local names = table.concat(types, ", ", 1, math.max(#types-1, 1)) + if #types > 1 then names = names.." or "..types[#types] end + if optional then + if names ~= '' then + if #types == 1 then names = "optional "..names end + else + names = "optional" + end + end + return names + end + + local module_template,err = utils.readfile (path.join(args.template,ldoc.templ)) + if not module_template then + quit("template not found at '"..args.template.."' Use -l to specify directory containing ldoc.ltp") + end + + local css = ldoc.css + ldoc.output = args.output + ldoc.ipairs = ipairs + ldoc.pairs = pairs + ldoc.print = print + + -- in single mode there is one module and the 'index' is the + -- documentation for that module. + ldoc.module = ldoc.single + if ldoc.single and args.one then + ldoc.kinds_allowed = {module = true, topic = true} + end + ldoc.root = true + if ldoc.module then + ldoc.module.info = get_module_info(ldoc.module) + end + local out,err = template.substitute(module_template,{ + ldoc = ldoc, + module = ldoc.module, + }) + ldoc.root = false + if not out then quit("template failed: "..err) end + + check_directory(args.dir) -- make sure output directory is ok + + args.dir = args.dir .. path.sep + + check_file(args.dir..css, path.join(args.style,css)) -- has CSS been copied? + + -- write out the module index + out = cleanup_whitespaces(out) + writefile(args.dir..args.output..args.ext,out) + + -- in single mode, we exclude any modules since the module has been done; + -- this step is then only for putting out any examples or topics + local mods = List() + for kind, modules in project() do + local lkind = kind:lower() + if not ldoc.single or ldoc.single and lkind ~= 'modules' then + mods:append {kind, lkind, modules} + end + end + + -- write out the per-module documentation + -- note that we reset the internal ordering of the 'kinds' so that + -- e.g. when reading a topic the other Topics will be listed first. 
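+   -- To illustrate the layout written out below (file names are hypothetical):
+   --    doc/index.html          -- the project index, or the single module's page
+   --    doc/ldoc.css
+   --    doc/modules/mymod.html  -- one page per module
+   --    doc/examples/...        -- other kinds each get their own directory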
+ ldoc.css = '../'..css + for m in mods:iter() do + local kind, lkind, modules = unpack(m) + check_directory(args.dir..lkind) + project:put_kind_first(kind) + for m in modules() do + ldoc.module = m + ldoc.body = m.body + m.info = get_module_info(m) + if ldoc.body and m.postprocess then + ldoc.body = m.postprocess(ldoc.body) + end + out,err = template.substitute(module_template,{ + module=m, + ldoc = ldoc + }) + if not out then + quit('template failed for '..m.name..': '..err) + else + out = cleanup_whitespaces(out) + writefile(args.dir..lkind..'/'..m.name..args.ext,out) + end + end + end + if not args.quiet then print('output written to '..tools.abspath(args.dir)) end +end + +return html + diff --git a/luaejdb/tools/ldoc/ldoc/html/ldoc_css.lua b/luaejdb/tools/ldoc/ldoc/html/ldoc_css.lua new file mode 100644 index 0000000..4470045 --- /dev/null +++ b/luaejdb/tools/ldoc/ldoc/html/ldoc_css.lua @@ -0,0 +1,301 @@ +return [==[ +/* BEGIN RESET + +Copyright (c) 2010, Yahoo! Inc. All rights reserved. +Code licensed under the BSD License: +http://developer.yahoo.com/yui/license.html +version: 2.8.2r1 +*/ +html { + color: #000; + background: #FFF; +} +body,div,dl,dt,dd,ul,ol,li,h1,h2,h3,h4,h5,h6,pre,code,form,fieldset,legend,input,button,textarea,p,blockquote,th,td { + margin: 0; + padding: 0; +} +table { + border-collapse: collapse; + border-spacing: 0; +} +fieldset,img { + border: 0; +} +address,caption,cite,code,dfn,em,strong,th,var,optgroup { + font-style: inherit; + font-weight: inherit; +} +del,ins { + text-decoration: none; +} +li { + list-style: bullet; + margin-left: 20px; +} +caption,th { + text-align: left; +} +h1,h2,h3,h4,h5,h6 { + font-size: 100%; + font-weight: bold; +} +q:before,q:after { + content: ''; +} +abbr,acronym { + border: 0; + font-variant: normal; +} +sup { + vertical-align: baseline; +} +sub { + vertical-align: baseline; +} +legend { + color: #000; +} +input,button,textarea,select,optgroup,option { + font-family: inherit; + font-size: inherit; + font-style: inherit; + font-weight: inherit; +} +input,button,textarea,select {*font-size:100%; +} +/* END RESET */ + +body { + margin-left: 1em; + margin-right: 1em; + font-family: arial, helvetica, geneva, sans-serif; + background-color: #ffffff; margin: 0px; +} + +code, tt { font-family: monospace; } +span.parameter { font-family:monospace; } +span.parameter:after { content:":"; } +span.types:before { content:"("; } +span.types:after { content:")"; } +.type { font-weight: bold; font-style:italic } + +body, p, td, th { font-size: .95em; line-height: 1.2em;} + +p, ul { margin: 10px 0 0 0px;} + +strong { font-weight: bold;} + +em { font-style: italic;} + +h1 { + font-size: 1.5em; + margin: 0 0 20px 0; +} +h2, h3, h4 { margin: 15px 0 10px 0; } +h2 { font-size: 1.25em; } +h3 { font-size: 1.15em; } +h4 { font-size: 1.06em; } + +a:link { font-weight: bold; color: #004080; text-decoration: none; } +a:visited { font-weight: bold; color: #006699; text-decoration: none; } +a:link:hover { text-decoration: underline; } + +hr { + color:#cccccc; + background: #00007f; + height: 1px; +} + +blockquote { margin-left: 3em; } + +ul { list-style-type: disc; } + +p.name { + font-family: "Andale Mono", monospace; + padding-top: 1em; +} + +pre.example { + background-color: rgb(245, 245, 245); + border: 1px solid silver; + padding: 10px; + margin: 10px 0 10px 0; + font-family: "Andale Mono", monospace; + font-size: .85em; +} + +pre { + background-color: rgb(245, 245, 245); + border: 1px solid silver; + padding: 10px; + margin: 10px 0 10px 0; + overflow: 
auto; + font-family: "Andale Mono", monospace; +} + + +table.index { border: 1px #00007f; } +table.index td { text-align: left; vertical-align: top; } + +#container { + margin-left: 1em; + margin-right: 1em; + background-color: #f0f0f0; +} + +#product { + text-align: center; + border-bottom: 1px solid #cccccc; + background-color: #ffffff; +} + +#product big { + font-size: 2em; +} + +#main { + background-color: #f0f0f0; + border-left: 2px solid #cccccc; +} + +#navigation { + float: left; + width: 18em; + vertical-align: top; + background-color: #f0f0f0; + overflow: visible; +} + +#navigation h2 { + background-color:#e7e7e7; + font-size:1.1em; + color:#000000; + text-align: left; + padding:0.2em; + border-top:1px solid #dddddd; + border-bottom:1px solid #dddddd; +} + +#navigation ul +{ + font-size:1em; + list-style-type: none; + margin: 1px 1px 10px 1px; +} + +#navigation li { + text-indent: -1em; + display: block; + margin: 3px 0px 0px 22px; +} + +#navigation li li a { + margin: 0px 3px 0px -1em; +} + +#content { + margin-left: 18em; + padding: 1em; + width: 700px; + border-left: 2px solid #cccccc; + border-right: 2px solid #cccccc; + background-color: #ffffff; +} + +#about { + clear: both; + padding: 5px; + border-top: 2px solid #cccccc; + background-color: #ffffff; +} + +@media print { + body { + font: 12pt "Times New Roman", "TimeNR", Times, serif; + } + a { font-weight: bold; color: #004080; text-decoration: underline; } + + #main { + background-color: #ffffff; + border-left: 0px; + } + + #container { + margin-left: 2%; + margin-right: 2%; + background-color: #ffffff; + } + + #content { + padding: 1em; + background-color: #ffffff; + } + + #navigation { + display: none; + } + pre.example { + font-family: "Andale Mono", monospace; + font-size: 10pt; + page-break-inside: avoid; + } +} + +table.module_list { + border-width: 1px; + border-style: solid; + border-color: #cccccc; + border-collapse: collapse; +} +table.module_list td { + border-width: 1px; + padding: 3px; + border-style: solid; + border-color: #cccccc; +} +table.module_list td.name { background-color: #f0f0f0; ; min-width: 200px; } +table.module_list td.summary { width: 100%; } + + +table.function_list { + border-width: 1px; + border-style: solid; + border-color: #cccccc; + border-collapse: collapse; +} +table.function_list td { + border-width: 1px; + padding: 3px; + border-style: solid; + border-color: #cccccc; +} +table.function_list td.name { background-color: #f0f0f0; ; min-width: 200px; } +table.function_list td.summary { width: 100%; } + +dl.table dt, dl.function dt {border-top: 1px solid #ccc; padding-top: 1em;} +dl.table dd, dl.function dd {padding-bottom: 1em; margin: 10px 0 0 20px;} +dl.table h3, dl.function h3 {font-size: .95em;} + +/* stop sublists from having initial vertical space */ +ul ul { margin-top: 0px; } +ol ul { margin-top: 0px; } +ol ol { margin-top: 0px; } +ul ol { margin-top: 0px; } + +/* styles for prettification of source */ +pre .comment { color: #558817; } +pre .constant { color: #a8660d; } +pre .escape { color: #844631; } +pre .keyword { color: #2239a8; font-weight: bold; } +pre .library { color: #0e7c6b; } +pre .marker { color: #512b1e; background: #fedc56; font-weight: bold; } +pre .string { color: #a8660d; } +pre .number { color: #f8660d; } +pre .operator { color: #2239a8; font-weight: bold; } +pre .preprocessor, pre .prepro { color: #a33243; } +pre .global { color: #800080; } +pre .prompt { color: #558817; } +pre .url { color: #272fc2; text-decoration: underline; } +]==] + + diff --git 
a/luaejdb/tools/ldoc/ldoc/html/ldoc_ltp.lua b/luaejdb/tools/ldoc/ldoc/html/ldoc_ltp.lua new file mode 100644 index 0000000..60f2d72 --- /dev/null +++ b/luaejdb/tools/ldoc/ldoc/html/ldoc_ltp.lua @@ -0,0 +1,258 @@ +return [==[ + + + + + $(ldoc.title) + + + + +
    + +
    + +
    +
    +
    + + +
    + +# local no_spaces = ldoc.no_spaces +# local use_li = ldoc.use_li +# local display_name = ldoc.display_name +# local iter = ldoc.modules.iter +# local M = ldoc.markup +# local nowrap = ldoc.wrap and '' or 'nowrap' + + + + + +
    + +#if module then +

    $(ldoc.titlecase(module.type)) $(module.name)

    +# end + +# if ldoc.body then -- verbatim HTML as contents; 'non-code' entries + $(ldoc.body) +# elseif module then -- module documentation +

    $(M(module.summary,module))

    +

    $(M(module.description,module))

    +# if module.usage then +# local li,il = use_li(module.usage) +

    Usage:

    +
      +# for usage in iter(module.usage) do + $(li)
      $(ldoc.escape(usage))
      $(il) +# end -- for +
    +# end -- if usage +# if module.info then +

    Info:

    +
      +# for tag, value in ldoc.pairs(module.info) do +
    • $(tag): $(value)
    • +# end +
    +# end -- if module.info + + +# if not ldoc.no_summary then +# -- bang out the tables of item types for this module (e.g Functions, Tables, etc) +# for kind,items in module.kinds() do +

    $(kind)

    + +# for item in items() do + + + + +# end -- for items +
    $(display_name(item))$(M(item.summary,item))
    +#end -- for kinds + +
    +
    + +#end -- if not no_summary + +# --- currently works for both Functions and Tables. The params field either contains +# --- function parameters or table fields. +# local show_return = not ldoc.no_return_or_parms +# local show_parms = show_return +# for kind, items in module.kinds() do +# local kitem = module.kinds:get_item(kind) +

    $(kind)

    +#-- $(M(module.kinds:get_section_description(kind),nil)) +# if kitem then + $(M(ldoc.descript(kitem),kitem)) +# if kitem.usage then +

    Usage:

    +
    $(ldoc.prettify(kitem.usage[1]))
    +# end +# end +
    +# for item in items() do +
    + + $(display_name(item)) +
    +
    + $(M(ldoc.descript(item),item)) + +# if show_parms and item.params and #item.params > 0 then +

    $(module.kinds:type_of(item).subnames):

    +
      +# for parm in iter(item.params) do +# local param,sublist = item:subparam(parm) +# if sublist then +
    • $(sublist)$(M(item.params[sublist],item)) +
        +# end +# for p in iter(param) do +# local name,tp = item:display_name_of(p), ldoc.typename(item:type_of_param(p)) +
      • $(name) +# if tp ~= '' then + $(tp) +# end + $(M(item.params[p],item))
      • +# end +# if sublist then +
      +# end +# end -- for +
    +# end -- if params + +# if show_return and item.ret then +# local li,il = use_li(item.ret) +

    Returns:

    +
      +# for i,r in ldoc.ipairs(item.ret) do + $(li) +# local tp = ldoc.typename(item:type_of_ret(i)) +# if tp ~= '' then + $(tp) +# end + $(M(r,item))$(il) +# end -- for +
    +# end -- if returns + +# if show_return and item.raise then +

    Raises:

    + $(M(item.raise,item)) +# end + +# if item.see then +# local li,il = use_li(item.see) +

    see also:

    +
      +# for see in iter(item.see) do + $(li)$(see.label)$(il) +# end -- for +
    +# end -- if see + +# if item.usage then +# local li,il = use_li(item.usage) +

    Usage:

    +
      +# for usage in iter(item.usage) do + $(li)
      $(ldoc.prettify(usage))
      $(il) +# end -- for +
    +# end -- if usage + +
    +# end -- for items +
    +# end -- for kinds + +# else -- if module; project-level contents + +# if ldoc.description then +

    $(M(ldoc.description,nil))

    +# end +# if ldoc.full_description then +

    $(M(ldoc.full_description,nil))

    +# end + +# for kind, mods in ldoc.kinds() do +

    $(kind)

    +# kind = kind:lower() + +# for m in mods() do + + + + +# end -- for modules +
    $(m.name)$(M(m.summary,m))
    +# end -- for kinds +# end -- if module + +
    +
    +
    +generated by LDoc 1.3 +
    +
    + + +]==] + diff --git a/luaejdb/tools/ldoc/ldoc/html/ldoc_one_css.lua b/luaejdb/tools/ldoc/ldoc/html/ldoc_one_css.lua new file mode 100644 index 0000000..8735ee4 --- /dev/null +++ b/luaejdb/tools/ldoc/ldoc/html/ldoc_one_css.lua @@ -0,0 +1,277 @@ +return [==[ +/* BEGIN RESET + +Copyright (c) 2010, Yahoo! Inc. All rights reserved. +Code licensed under the BSD License: +http://developer.yahoo.com/yui/license.html +version: 2.8.2r1 +*/ +html { + color: #000; + background: #FFF; +} +body,div,dl,dt,dd,ul,ol,li,h1,h2,h3,h4,h5,h6,pre,code,form,fieldset,legend,input,button,textarea,p,blockquote,th,td { + margin: 0; + padding: 0; +} +table { + border-collapse: collapse; + border-spacing: 0; +} +fieldset,img { + border: 0; +} +address,caption,cite,code,dfn,em,strong,th,var,optgroup { + font-style: inherit; + font-weight: inherit; +} +del,ins { + text-decoration: none; +} +li { + list-style: bullet; + margin-left: 20px; +} +caption,th { + text-align: left; +} +h1,h2,h3,h4,h5,h6 { + font-size: 100%; + font-weight: bold; +} +q:before,q:after { + content: ''; +} +abbr,acronym { + border: 0; + font-variant: normal; +} +sup { + vertical-align: baseline; +} +sub { + vertical-align: baseline; +} +legend { + color: #000; +} +input,button,textarea,select,optgroup,option { + font-family: inherit; + font-size: inherit; + font-style: inherit; + font-weight: inherit; +} +input,button,textarea,select {*font-size:100%; +} +/* END RESET */ + +body { + margin-left: 1em; + margin-right: 1em; + font-family: arial, helvetica, geneva, sans-serif; + background-color: #ffffff; margin: 0px; +} + +code, tt { font-family: monospace; } + +body, p, td, th { font-size: .95em; line-height: 1.2em;} + +p, ul { margin: 10px 0 0 10px;} + +strong { font-weight: bold;} + +em { font-style: italic;} + +h1 { + font-size: 1.5em; + margin: 0 0 20px 0; +} +h2, h3, h4 { margin: 15px 0 10px 0; } +h2 { font-size: 1.25em; } +h3 { font-size: 1.15em; } +h4 { font-size: 1.06em; } + +a:link { font-weight: bold; color: #004080; text-decoration: none; } +a:visited { font-weight: bold; color: #2808FF; text-decoration: none; } +a:link:hover { text-decoration: underline; } + +hr { + color:#cccccc; + background: #00007f; + height: 1px; +} + +blockquote { margin-left: 3em; } + +ul { list-style-type: disc; } + +p.name { + font-family: "Andale Mono", monospace; + padding-top: 1em; +} + +pre.example { + background-color: rgb(245, 245, 245); + border: 1px solid silver; + padding: 10px; + margin: 10px 0 10px 0; + font-family: "Andale Mono", monospace; + font-size: .85em; +} + +pre { + background-color: rgb(245, 245, 245); + border: 1px solid silver; + padding: 10px; + margin: 10px 0 10px 0; + font-family: "Andale Mono", monospace; +} + + +table.index { border: 1px #00007f; } +table.index td { text-align: left; vertical-align: top; } + +#container { + margin-left: 1em; + margin-right: 1em; + background-color: #f5f5f5; +} + +#product { + text-align: center; + border-bottom: 1px solid #cccccc; + background-color: #ffffff; +} + +#product big { + font-size: 2em; +} + +#main { + border-left: 2px solid #cccccc; + border-right: 2px solid #cccccc; +} + +#navigation { + float: top; + vertical-align: top; + background-color: #f5f5f5; + overflow: visible; +} + +#navigation h2 { + background-color:#e7e7e7; + font-size:1.1em; + color:#000000; + text-align: left; + padding:0.2em; + border-top:1px solid #dddddd; + border-bottom:1px solid #dddddd; +} + +#navigation ul +{ + font-size:1em; + list-style-type: none; + margin: 1px 1px 10px 1px; +} + +#navigation li { + 
text-indent: -1em; + display: block; + margin: 3px 0px 0px 22px; +} + +#navigation li li a { + margin: 0px 3px 0px -1em; +} + +#content { + padding: 1em; + background-color: #ffffff; +} + +#about { + clear: both; + padding: 5px; + border-top: 2px solid #cccccc; + background-color: #ffffff; +} + +@media print { + body { + font: 12pt "Times New Roman", "TimeNR", Times, serif; + } + a { font-weight: bold; color: #004080; text-decoration: underline; } + + #main { + background-color: #ffffff; + border-left: 0px; + } + + #container { + margin-left: 2%; + margin-right: 2%; + background-color: #ffffff; + } + + #content { + padding: 1em; + background-color: #ffffff; + } + + #navigation { + display: none; + } + pre.example { + font-family: "Andale Mono", monospace; + font-size: 10pt; + page-break-inside: avoid; + } +} + +table.module_list { + border-width: 1px; + border-style: solid; + border-color: #cccccc; + border-collapse: collapse; +} +table.module_list td { + border-width: 1px; + padding: 3px; + border-style: solid; + border-color: #cccccc; +} +table.module_list td.name { background-color: #f5f5f5; } +table.module_list td.summary { width: 100%; } + + +table.function_list { + border-width: 1px; + border-style: solid; + border-color: #cccccc; + border-collapse: collapse; +} +table.function_list td { + border-width: 1px; + padding: 3px; + border-style: solid; + border-color: #cccccc; +} +table.function_list td.name { + background-color: #f5f5f5; + white-space: normal; /* voids the "nowrap" in HTML */ +} +table.function_list td.summary { width: 100%; } + +dl.table dt, dl.function dt {border-top: 1px solid #ccc; padding-top: 1em;} +dl.table dd, dl.function dd {padding-bottom: 1em; margin: 10px 0 0 20px;} +dl.table h3, dl.function h3 {font-size: .95em;} + +/* styles for prettification of source */ +.keyword {font-weight: bold; color: #6666AA; } +.number { color: #AA6666; } +.string { color: #8888AA; } +.comment { color: #666600; } +.prepro { color: #006666; } +]==] diff --git a/luaejdb/tools/ldoc/ldoc/lang.lua b/luaejdb/tools/ldoc/ldoc/lang.lua new file mode 100644 index 0000000..e02d02a --- /dev/null +++ b/luaejdb/tools/ldoc/ldoc/lang.lua @@ -0,0 +1,265 @@ +------------ +-- Language-dependent parsing of code. +-- This encapsulates the different strategies needed for parsing C and Lua +-- source code. 
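+--
+-- For reference, the doc-comment openers recognised by the classes below are,
+-- for Lua:
+--    --- a one-line doc comment
+--    --[[-- a block doc comment ... ]]
+-- and for C/C++:
+--    /// a one-line doc comment
+--    /** a block doc comment ... */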
+ +local class = require 'pl.class' +local utils = require 'pl.utils' +local tools = require 'ldoc.tools' +local lexer = require 'ldoc.lexer' +local quit = utils.quit +local tnext = lexer.skipws + + +local Lang = class() + +function Lang:trim_comment (s) + return s:gsub(self.line_comment,'') +end + +function Lang:start_comment (v) + local line = v:match (self.start_comment_) + if line and self.end_comment_ and v:match (self.end_comment_) then + return nil + end + local block = v:match(self.block_comment) + return line or block, block +end + +function Lang:empty_comment (v) + return v:match(self.empty_comment_) +end + +function Lang:grab_block_comment(v,tok) + v = v:gsub(self.block_comment,'') + return tools.grab_block_comment(v,tok,self.end_comment) +end + +function Lang:find_module(tok,t,v) + return '...',t,v +end + +function Lang:item_follows(t,v) + return false +end + +function Lang:finalize() + self.empty_comment_ = self.start_comment_..'%s*$' +end + +function Lang:search_for_token (tok,type,value,t,v) + while t and not (t == type and v == value) do + if t == 'comment' and self:start_comment(v) then return nil,t,v end + t,v = tnext(tok) + end + return t ~= nil,t,v +end + +function Lang:parse_extra (tags,tok) +end + +function Lang:is_module_modifier () + return false +end + +function Lang:parse_module_modifier (tags, tok) + return nil, "@usage or @exports deduction not implemented for this language" +end + + +local Lua = class(Lang) + +function Lua:_init() + self.line_comment = '^%-%-+' -- used for stripping + self.start_comment_ = '^%-%-%-+' -- used for doc comment line start + self.block_comment = '^%-%-%[=*%[%-+' -- used for block doc comments + self.end_comment_ = '[^%-]%-%-+\n$' ---- exclude --- this kind of comment --- + self:finalize() +end + +function Lua.lexer(fname) + local f,e = io.open(fname) + if not f then quit(e) end + return lexer.lua(f,{}),f +end + +function Lua:grab_block_comment(v,tok) + local equals = v:match('^%-%-%[(=*)%[') + v = v:gsub(self.block_comment,'') + return tools.grab_block_comment(v,tok,'%]'..equals..'%]') +end + + +function Lua:parse_module_call(tok,t,v) + t,v = tnext(tok) + if t == '(' then t,v = tnext(tok) end + if t == 'string' then -- explicit name, cool + return v,t,v + elseif t == '...' then -- we have to guess! + return '...',t,v + end +end + +-- If a module name was not provided, then we look for an explicit module() +-- call. However, we should not try too hard; if we hit a doc comment then +-- we should go back and process it. Likewise, module(...) also means +-- that we must infer the module name. 
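+-- For example, both of these module headers are handled by parse_module_call
+-- above (the name "mymod" is purely illustrative):
+--    module("mymod", package.seeall)  --> explicit name "mymod"
+--    module(...)                      --> the name has to be inferred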
+function Lua:find_module(tok,t,v) + local res + res,t,v = self:search_for_token(tok,'iden','module',t,v) + if not res then return nil,t,v end + return self:parse_module_call(tok,t,v) +end + +local function parse_lua_parameters (tags,tok) + tags.formal_args = tools.get_parameters(tok) + tags:add('class','function') +end + +local function parse_lua_function_header (tags,tok) + if not tags.name then + tags:add('name',tools.get_fun_name(tok)) + end + if not tags.name then return 'function has no name' end + parse_lua_parameters(tags,tok) +end + +local function parse_lua_table (tags,tok) + tags.formal_args = tools.get_parameters(tok,'}',function(s) + return s == ',' or s == ';' + end) +end + +--------------- function and variable inferrence ----------- +-- After a doc comment, there may be a local followed by: +-- [1] (l)function: function NAME +-- [2] (l)function: NAME = function +-- [3] table: NAME = { +-- [4] field: NAME = (this is a module-level field) +-- +-- Depending on the case successfully detected, returns a function which +-- will be called later to fill in inferred item tags +function Lua:item_follows(t,v,tok) + local parser, case + local is_local = t == 'keyword' and v == 'local' + if is_local then t,v = tnext(tok) end + if t == 'keyword' and v == 'function' then -- case [1] + case = 1 + parser = parse_lua_function_header + elseif t == 'iden' then + local name,t,v = tools.get_fun_name(tok,v) + if t ~= '=' then return nil end -- probably invalid code... + t,v = tnext(tok) + if t == 'keyword' and v == 'function' then -- case [2] + tnext(tok) -- skip '(' + case = 2 + parser = function(tags,tok) + tags:add('name',name) + parse_lua_parameters(tags,tok) + end + elseif t == '{' then -- case [3] + case = 3 + parser = function(tags,tok) + tags:add('class','table') + tags:add('name',name) + parse_lua_table (tags,tok) + end + else -- case [4] + case = 4 + parser = function(tags) + tags:add('class','field') + tags:add('name',name) + end + end + elseif t == 'keyword' and v == 'return' then + t, v = tnext(tok) + if t == 'keyword' and v == 'function' then + -- return function(a, b, c) + tnext(tok) -- skip '(' + case = 2 + parser = parse_lua_parameters + elseif t == '{' then + -- return {...} + case = 5 + parser = function(tags,tok) + tags:add('class','table') + parse_lua_table(tags,tok) + end + else + return nil + end + end + return parser, is_local, case +end + + +-- we only call the function returned by the item_follows above if there +-- is not already a name and a type. +-- Otherwise, this is called. Currrently only tries to fill in the fields +-- of a table from a table definition as identified above +function Lua:parse_extra (tags,tok,case) + if tags.class == 'table' and not tags.field and case == 3 then + parse_lua_table(tags,tok) + end +end + +-- For Lua, a --- @usage comment means that a long +-- string containing the usage follows, which we +-- use to update the module usage tag. Likewise, the @export +-- tag alone in a doc comment refers to the following returned +-- Lua table of functions + + +function Lua:is_module_modifier (tags) + return tags.summary == '' and (tags.usage or tags.export) +end + +-- Allow for private name convention. +function Lua:is_private_var (name) + return name:match '^_' or name:match '_$' +end + +function Lua:parse_module_modifier (tags, tok, F) + if tags.usage then + if tags.class ~= 'field' then return nil,"cannot deduce @usage" end + local t1= tnext(tok) + if t1 ~= '[' then return nil, t1..' 
'..': not a long string' end + local t, v = tools.grab_block_comment('',tok,'%]%]') + return true, v, 'usage' + elseif tags.export then + if tags.class ~= 'table' then return nil, "cannot deduce @export" end + for f in tags.formal_args:iter() do + if not self:is_private_var(f) then + F:export_item(f) + end + end + return true + end +end + + +-- note a difference here: we scan C/C++ code in full-text mode, not line by line. +-- This is because we can't detect multiline comments in line mode + +local CC = class(Lang) + +function CC:_init() + self.line_comment = '^//+' + self.start_comment_ = '^///+' + self.block_comment = '^/%*%*+' + self:finalize() +end + +function CC.lexer(f) + local err + f,err = utils.readfile(f) + if not f then quit(err) end + return lexer.cpp(f,{}) +end + +function CC:grab_block_comment(v,tok) + v = v:gsub(self.block_comment,'') + return 'comment',v:sub(1,-3) +end + +return { lua = Lua(), cc = CC() } diff --git a/luaejdb/tools/ldoc/ldoc/lexer.lua b/luaejdb/tools/ldoc/ldoc/lexer.lua new file mode 100644 index 0000000..8f6bbe0 --- /dev/null +++ b/luaejdb/tools/ldoc/ldoc/lexer.lua @@ -0,0 +1,461 @@ +--- Lexical scanner for creating a sequence of tokens from text.
+-- lexer.scan(s) returns an iterator over all tokens found in the +-- string s. This iterator returns two values, a token type string +-- (such as 'string' for quoted string, 'iden' for identifier) and the value of the +-- token. +--

    +-- Versions specialized for Lua and C are available; these also handle block comments +-- and classify keywords as 'keyword' tokens. For example: +--

    +-- > s = 'for i=1,n do'
    +-- > for t,v in lexer.lua(s)  do print(t,v) end
    +-- keyword for
    +-- iden    i
    +-- =       =
    +-- number  1
    +-- ,       ,
    +-- iden    n
    +-- keyword do
    +-- 
    +-- See the Guide for further discussion
    +-- @class module +-- @name pl.lexer + +local strfind = string.find +local strsub = string.sub +local append = table.insert +--[[ +module ('pl.lexer',utils._module) +]] + +local function assert_arg(idx,val,tp) + if type(val) ~= tp then + error("argument "..idx.." must be "..tp, 2) + end +end + +local lexer = {} + +local NUMBER1 = '^[%+%-]?%d+%.?%d*[eE][%+%-]?%d+' +local NUMBER2 = '^[%+%-]?%d+%.?%d*' +local NUMBER3 = '^0x[%da-fA-F]+' +local NUMBER4 = '^%d+%.?%d*[eE][%+%-]?%d+' +local NUMBER5 = '^%d+%.?%d*' +local IDEN = '^[%a_][%w_]*' +local WSPACE = '^%s+' +local STRING1 = [[^'.-[^\\]']] +local STRING2 = [[^".-[^\\]"]] +local STRING3 = "^((['\"])%2)" -- empty string +local PREPRO = '^#.-[^\\]\n' + +local plain_matches,lua_matches,cpp_matches,lua_keyword,cpp_keyword + +local function tdump(tok) + return tok,tok +end + +local function ndump(tok,options) + if options and options.number then + tok = tonumber(tok) + end + return "number",tok +end + +-- regular strings, single or double quotes; usually we want them +-- without the quotes +local function sdump(tok,options) + if options and options.string then + tok = tok:sub(2,-2) + end + return "string",tok +end + +-- long Lua strings need extra work to get rid of the quotes +local function sdump_l(tok,options) + if options and options.string then + tok = tok:sub(3,-3) + end + return "string",tok +end + +local function chdump(tok,options) + if options and options.string then + tok = tok:sub(2,-2) + end + return "char",tok +end + +local function cdump(tok) + return 'comment',tok +end + +local function wsdump (tok) + return "space",tok +end + +local function pdump (tok) + return 'prepro',tok +end + +local function plain_vdump(tok) + return "iden",tok +end + +local function lua_vdump(tok) + if lua_keyword[tok] then + return "keyword",tok + else + return "iden",tok + end +end + +local function cpp_vdump(tok) + if cpp_keyword[tok] then + return "keyword",tok + else + return "iden",tok + end +end + +local function count_lines(line, text) + local index, limit = 1, #text + while index <= limit do + local start, stop = text:find('\r\n', index, true) + if not start then + start, stop = text:find('[\r\n\f]', index) + if not start then break end + end + index = stop + 1 + line = line + 1 + end + return line +end + +local multiline = { comment = true, space = true } + + +--- create a plain token iterator from a string or file-like object. +-- @param s the string +-- @param matches an optional match table (set of pattern-action pairs) +-- @param filter a table of token types to exclude, by default {space=true} +-- @param options a table of options; by default, {number=true,string=true}, +-- which means convert numbers and strip string quotes. 
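+-- A minimal usage sketch (default filter, so whitespace tokens are skipped):
+--    for t,v in lexer.scan('x = 1') do print(t,v) end  --> iden x,  = =,  number 1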
+function lexer.scan (s,matches,filter,options) + --assert_arg(1,s,'string') + local file = type(s) ~= 'string' and s + filter = filter or {space=true} + options = options or {number=true,string=true} + if filter then + if filter.space then filter[wsdump] = true end + if filter.comments then + filter[cdump] = true + end + end + if not matches then + if not plain_matches then + plain_matches = { + {WSPACE,wsdump}, + {NUMBER3,ndump}, + {IDEN,plain_vdump}, + {NUMBER1,ndump}, + {NUMBER2,ndump}, + {STRING3,sdump}, + {STRING1,sdump}, + {STRING2,sdump}, + {'^.',tdump} + } + end + matches = plain_matches + end + local i1,i2,idx,res1,res2,tok,pat,fun,capt + local line = 1 + if file then + s = file:read() + if not s then return nil end -- empty file + s = s ..'\n' + end + local sz = #s + local idx = 1 + if sz == 0 then return nil end -- empty file + + local res = {} + local mt = {} + mt.__index = mt + setmetatable(res,mt) + + function mt.lineno() return line end + + function mt.getline() + if idx < sz then + tok = strsub(s,idx,-2) + idx = sz + 1 + line = line + 1 + return tok + else + idx = sz + 1 + line = line + 1 + return file:read() + end + end + + function mt.next (tok) + local t,v = tok() + while t == 'space' do + t,v = tok() + end + return t,v + end + + function mt.__call () + if not s then return end + while true do + for _,m in ipairs(matches) do + pat,fun = m[1],m[2] + if fun == nil then error("no match for "..pat) end + i1,i2 = strfind(s,pat,idx) + if i1 then + tok = strsub(s,i1,i2) + idx = i2 + 1 + if not (filter and filter[fun]) then + lexer.finished = idx > sz + local t,v = fun(tok,options) + if not file and multiline[t] then + line = count_lines(line,v) + end + return t,v + end + end + end + if idx > sz then + if file then + line = line + 1 + s = file:read() + if not s then return end + s = s .. '\n' + idx ,sz = 1,#s + else + return + end + end + end + end + return res +end + +--- get everything in a stream upto a newline. +-- @param tok a token stream +-- @return a string +function lexer.getline (tok) + return tok:getline() +end + +--- get current line number.
    +-- Only available if the input source is a file-like object. +-- @param tok a token stream +-- @return the line number and current column +function lexer.lineno (tok) + return tok:lineno() +end + +--- get the Lua keywords as a set-like table. +-- So res["and"] etc would be true. +-- @return a table +function lexer.get_keywords () + if not lua_keyword then + lua_keyword = { + ["and"] = true, ["break"] = true, ["do"] = true, + ["else"] = true, ["elseif"] = true, ["end"] = true, + ["false"] = true, ["for"] = true, ["function"] = true, + ["if"] = true, ["in"] = true, ["local"] = true, ["nil"] = true, + ["not"] = true, ["or"] = true, ["repeat"] = true, + ["return"] = true, ["then"] = true, ["true"] = true, + ["until"] = true, ["while"] = true + } + end + return lua_keyword +end + + +--- create a Lua token iterator from a string or file-like object. +-- Will return the token type and value. +-- @param s the string +-- @param filter a table of token types to exclude, by default {space=true,comments=true} +-- @param options a table of options; by default, {number=true,string=true}, +-- which means convert numbers and strip string quotes. +function lexer.lua(s,filter,options) + filter = filter or {space=true,comments=true} + lexer.get_keywords() + if not lua_matches then + lua_matches = { + {WSPACE,wsdump}, + {NUMBER3,ndump}, + {IDEN,lua_vdump}, + {NUMBER4,ndump}, + {NUMBER5,ndump}, + {STRING3,sdump}, + {STRING1,sdump}, + {STRING2,sdump}, + {'^%-%-%[(=*)%[.-%]%1%]',cdump}, + {'^%-%-.-\n',cdump}, + {'^%[(=*)%[.-%]%1%]',sdump_l}, + {'^==',tdump}, + {'^~=',tdump}, + {'^<=',tdump}, + {'^>=',tdump}, + {'^%.%.%.',tdump}, + {'^%.%.',tdump}, + {'^.',tdump} + } + end + return lexer.scan(s,lua_matches,filter,options) +end + +--- create a C/C++ token iterator from a string or file-like object. +-- Will return the token type type and value. +-- @param s the string +-- @param filter a table of token types to exclude, by default {space=true,comments=true} +-- @param options a table of options; by default, {number=true,string=true}, +-- which means convert numbers and strip string quotes. 
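+-- A small sketch; note that, unlike lexer.lua, the default filter here only drops
+-- comments, so pass {space=true} if whitespace tokens are not wanted:
+--    for t,v in lexer.cpp('int n = 1;', {space=true}) do io.write(t,' ') end  --> keyword iden = number ;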
+function lexer.cpp(s,filter,options) + filter = filter or {comments=true} + if not cpp_keyword then + cpp_keyword = { + ["class"] = true, ["break"] = true, ["do"] = true, ["sizeof"] = true, + ["else"] = true, ["continue"] = true, ["struct"] = true, + ["false"] = true, ["for"] = true, ["public"] = true, ["void"] = true, + ["private"] = true, ["protected"] = true, ["goto"] = true, + ["if"] = true, ["static"] = true, ["const"] = true, ["typedef"] = true, + ["enum"] = true, ["char"] = true, ["int"] = true, ["bool"] = true, + ["long"] = true, ["float"] = true, ["true"] = true, ["delete"] = true, + ["double"] = true, ["while"] = true, ["new"] = true, + ["namespace"] = true, ["try"] = true, ["catch"] = true, + ["switch"] = true, ["case"] = true, ["extern"] = true, + ["return"] = true,["default"] = true,['unsigned'] = true,['signed'] = true, + ["union"] = true, ["volatile"] = true, ["register"] = true,["short"] = true, + } + end + if not cpp_matches then + cpp_matches = { + {WSPACE,wsdump}, + {PREPRO,pdump}, + {NUMBER3,ndump}, + {IDEN,cpp_vdump}, + {NUMBER4,ndump}, + {NUMBER5,ndump}, + {STRING3,sdump}, + {STRING1,chdump}, + {STRING2,sdump}, + {'^//.-\n',cdump}, + {'^/%*.-%*/',cdump}, + {'^==',tdump}, + {'^!=',tdump}, + {'^<=',tdump}, + {'^>=',tdump}, + {'^->',tdump}, + {'^&&',tdump}, + {'^||',tdump}, + {'^%+%+',tdump}, + {'^%-%-',tdump}, + {'^%+=',tdump}, + {'^%-=',tdump}, + {'^%*=',tdump}, + {'^/=',tdump}, + {'^|=',tdump}, + {'^%^=',tdump}, + {'^::',tdump}, + {'^.',tdump} + } + end + return lexer.scan(s,cpp_matches,filter,options) +end + +--- get a list of parameters separated by a delimiter from a stream. +-- @param tok the token stream +-- @param endtoken end of list (default ')'). Can be '\n' +-- @param delim separator (default ',') +-- @return a list of token lists. +function lexer.get_separated_list(tok,endtoken,delim) + endtoken = endtoken or ')' + delim = delim or ',' + local function tappend (tl,t,val) + val = val or t + append(tl,{t,val}) + end + local is_end + if endtoken == '\n' then + is_end = function(t,val) + return t == 'space' and val:find '\n' + end + else + is_end = function (t) + return t == endtoken + end + end + local is_delim + if type(delim) == 'function' then + is_delim = delim + else + is_delim = function(t) + return t == delim + end + end + local parm_values = {} + local level = 1 -- used to count ( and ) + local tl = {} + local token,value + while true do + token,value=tok() + if not token then return nil,'EOS' end -- end of stream is an error! + if is_end(token,value) and level == 1 then + if next(tl) then + append(parm_values,tl) + end + break + elseif token == '(' then + level = level + 1 + tappend(tl,'(') + elseif token == ')' then + level = level - 1 + if level == 0 then -- finished with parm list + append(parm_values,tl) + break + else + tappend(tl,')') + end + elseif level == 1 and is_delim(token) then + append(parm_values,tl) -- a new parm + tl = {} + else + tappend(tl,token,value) + end + end + return parm_values,{token,value} +end + +--- get the next non-space token from the stream. +-- @param tok the token stream. +function lexer.skipws (tok) + return tok:next() +end + +local skipws = lexer.skipws + +--- get the next token, which must be of the expected type. +-- Throws an error if this type does not match! 
+-- @param tok the token stream +-- @param expected_type the token type +-- @param no_skip_ws whether we should skip whitespace +function lexer.expecting (tok,expected_type,no_skip_ws) + assert_arg(1,tok,'function') + assert_arg(2,expected_type,'string') + local t,v + if no_skip_ws then + t,v = tok() + else + t,v = skipws(tok) + end + if t ~= expected_type then error ("expecting "..expected_type,2) end + return v +end + +return lexer diff --git a/luaejdb/tools/ldoc/ldoc/markdown.lua b/luaejdb/tools/ldoc/ldoc/markdown.lua new file mode 100644 index 0000000..b16d43b --- /dev/null +++ b/luaejdb/tools/ldoc/ldoc/markdown.lua @@ -0,0 +1,1359 @@ +#!/usr/bin/env lua + +--[[ +# markdown.lua -- version 0.32 + + + +**Author:** Niklas Frykholm, +**Date:** 31 May 2008 + +This is an implementation of the popular text markup language Markdown in pure Lua. +Markdown can convert documents written in a simple and easy to read text format +to well-formatted HTML. For a more thourough description of Markdown and the Markdown +syntax, see . + +The original Markdown source is written in Perl and makes heavy use of advanced +regular expression techniques (such as negative look-ahead, etc) which are not available +in Lua's simple regex engine. Therefore this Lua port has been rewritten from the ground +up. It is probably not completely bug free. If you notice any bugs, please report them to +me. A unit test that exposes the error is helpful. + +## Usage + + require "markdown" + markdown(source) + +``markdown.lua`` exposes a single global function named ``markdown(s)`` which applies the +Markdown transformation to the specified string. + +``markdown.lua`` can also be used directly from the command line: + + lua markdown.lua test.md + +Creates a file ``test.html`` with the converted content of ``test.md``. Run: + + lua markdown.lua -h + +For a description of the command-line options. + +``markdown.lua`` uses the same license as Lua, the MIT license. + +## License + +Copyright © 2008 Niklas Frykholm. + +Permission is hereby granted, free of charge, to any person obtaining a copy of this +software and associated documentation files (the "Software"), to deal in the Software +without restriction, including without limitation the rights to use, copy, modify, merge, +publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons +to whom the Software is furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all copies +or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN +THE SOFTWARE. + +## Version history + +- **0.32** -- 31 May 2008 + - Fix for links containing brackets +- **0.31** -- 1 Mar 2008 + - Fix for link definitions followed by spaces +- **0.30** -- 25 Feb 2008 + - Consistent behavior with Markdown when the same link reference is reused +- **0.29** -- 24 Feb 2008 + - Fix for
<pre> blocks with spaces in them
    +-	**0.28** -- 18 Feb 2008
    +	-	Fix for link encoding
    +-	**0.27** -- 14 Feb 2008
    +	-	Fix for link database links with ()
    +-	**0.26** -- 06 Feb 2008
    +	-	Fix for nested italic and bold markers
    +-	**0.25** -- 24 Jan 2008
    +	-	Fix for encoding of naked <
    +-	**0.24** -- 21 Jan 2008
    +	-	Fix for link behavior.
    +-	**0.23** -- 10 Jan 2008
    +	-	Fix for a regression bug in longer expressions in italic or bold.
    +-	**0.22** -- 27 Dec 2007
    +	-	Fix for crash when processing blocks with a percent sign in them.
    +-	**0.21** -- 27 Dec 2007
    +	- 	Fix for combined strong and emphasis tags
    +-	**0.20** -- 13 Oct 2007
    +	-	Fix for < as well in image titles, now matches Dingus behavior
    +-	**0.19** -- 28 Sep 2007
    +	-	Fix for quotation marks " and ampersands & in link and image titles.
    +-	**0.18** -- 28 Jul 2007
    +	-	Does not crash on unmatched tags (behaves like standard markdown)
    +-	**0.17** -- 12 Apr 2007
    +	-	Fix for links with %20 in them.
    +-	**0.16** -- 12 Apr 2007
    +	-	Do not require arg global to exist.
    +-	**0.15** -- 28 Aug 2006
    +	-	Better handling of links with underscores in them.
    +-	**0.14** -- 22 Aug 2006
    +	-	Bug for *`foo()`*
    +-	**0.13** -- 12 Aug 2006
    +	-	Added -l option for including stylesheet inline in document.
    +	-	Fixed bug in -s flag.
    +	-	Fixed emphasis bug.
    +-	**0.12** -- 15 May 2006
    +	-	Fixed several bugs to comply with MarkdownTest 1.0 
    +-	**0.11** -- 12 May 2006
    +	-	Fixed bug for escaping `*` and `_` inside code spans.
    +	-	Added license terms.
    +	-	Changed join() to table.concat().
    +-	**0.10** -- 3 May 2006
    +	-	Initial public release.
    +
    +// Niklas
    +]]
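
For orientation while reading the patch, a minimal usage sketch of the module documented in the header above (illustrative only, not part of the diff; the module installs a global markdown function as the header describes):

    require "markdown"
    -- Convert a Markdown string into an HTML fragment.
    local html = markdown("# Title\n\nSome *emphasised* text.")
    -- Expect an <h1> heading followed by a <p> paragraph containing <em> markup.
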
    +
    +
    +-- Set up a table for holding local functions to avoid polluting the global namespace
    +local M = {}
    +local MT = {__index = _G}
    +setmetatable(M, MT)
    +setfenv(1, M)
    +
    +----------------------------------------------------------------------
    +-- Utility functions
    +----------------------------------------------------------------------
    +
    +-- Locks table t from changes, writes an error if someone attempts to change the table.
+-- This is useful for detecting variables that have "accidentally" been made global. Something
    +-- I tend to do all too much.
    +function lock(t)
    +	function lock_new_index(t, k, v)
    +		error("module has been locked -- " .. k .. " must be declared local", 2)
    +	end
    +
    +	local mt = {__newindex = lock_new_index}
    +	if getmetatable(t) then mt.__index = getmetatable(t).__index end
    +	setmetatable(t, mt)
    +end
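
A quick illustration of what lock protects against (a sketch, not part of the diff):

    local t = {}
    lock(t)
    -- Assigning a key that does not already exist in t now raises:
    --   "module has been locked -- x must be declared local"
    -- t.x = 1
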
    +
    +-- Returns the result of mapping the values in table t through the function f
    +function map(t, f)
    +	local out = {}
    +	for k,v in pairs(t) do out[k] = f(v,k) end
    +	return out
    +end
    +
    +-- The identity function, useful as a placeholder.
    +function identity(text) return text end
    +
    +-- Functional style if statement. (NOTE: no short circuit evaluation)
    +function iff(t, a, b) if t then return a else return b end end
    +
    +-- Splits the text into an array of separate lines.
    +function split(text, sep)
    +	sep = sep or "\n"
    +	local lines = {}
    +	local pos = 1
    +	while true do
    +		local b,e = text:find(sep, pos)
    +		if not b then table.insert(lines, text:sub(pos)) break end
    +		table.insert(lines, text:sub(pos, b-1))
    +		pos = e + 1
    +	end
    +	return lines
    +end
    +
    +-- Converts tabs to spaces
    +function detab(text)
    +	local tab_width = 4
    +	local function rep(match)
    +		local spaces = -match:len()
    +		while spaces<1 do spaces = spaces + tab_width end
    +		return match .. string.rep(" ", spaces)
    +	end
    +	text = text:gsub("([^\n]-)\t", rep)
    +	return text
    +end
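
Two worked examples of the padding rule (a sketch, not part of the diff): detab pads each tab out to the next multiple of tab_width (4).

    assert(detab("a\tb") == "a   b")         -- "a" plus three spaces reaches the 4-column stop
    assert(detab("abcd\te") == "abcd    e")  -- already at a stop, so a full four spaces are added
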
    +
    +-- Applies string.find for every pattern in the list and returns the first match
    +function find_first(s, patterns, index)
    +	local res = {}
    +	for _,p in ipairs(patterns) do
    +		local match = {s:find(p, index)}
    +		if #match>0 and (#res==0 or match[1] < res[1]) then res = match end
    +	end
    +	return unpack(res)
    +end
    +
    +-- If a replacement array is specified, the range [start, stop] in the array is replaced
    +-- with the replacement array and the resulting array is returned. Without a replacement
    +-- array the section of the array between start and stop is returned.
    +function splice(array, start, stop, replacement)
    +	if replacement then
    +		local n = stop - start + 1
    +		while n > 0 do
    +			table.remove(array, start)
    +			n = n - 1
    +		end
    +		for i,v in ipairs(replacement) do
    +			table.insert(array, start, v)
    +		end
    +		return array
    +	else
    +		local res = {}
    +		for i = start,stop do
    +			table.insert(res, array[i])
    +		end
    +		return res
    +	end
    +end
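
Both modes in one short sketch (not part of the diff):

    -- Extract: without a replacement array, the slice [start, stop] is returned.
    assert(table.concat(splice({10, 20, 30, 40}, 2, 3), ",") == "20,30")
    -- Replace: with a replacement array, the slice is swapped out and the array returned.
    assert(table.concat(splice({10, 20, 30, 40}, 2, 3, {99}), ",") == "10,99,40")
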
    +
    +-- Outdents the text one step.
    +function outdent(text)
    +	text = "\n" .. text
    +	text = text:gsub("\n  ? ? ?", "\n")
    +	text = text:sub(2)
    +	return text
    +end
    +
    +-- Indents the text one step.
    +function indent(text)
    +	text = text:gsub("\n", "\n    ")
    +	return text
    +end
    +
    +-- Does a simple tokenization of html data. Returns the data as a list of tokens. 
    +-- Each token is a table with a type field (which is either "tag" or "text") and
    +-- a text field (which contains the original token data).
    +function tokenize_html(html)
    +	local tokens = {}
    +	local pos = 1
    +	while true do
+		local start = find_first(html, {"<!%-%-", "<[%a/!%?]"}, pos)
+		if not start then
+			table.insert(tokens, {type="text", text=html:sub(pos)})
+			break
+		end
+		if start ~= pos then table.insert(tokens, {type="text", text = html:sub(pos, start-1)}) end
+
+		local _, stop
+		if html:match("^<!%-%-", start) then
+			_,stop = html:find("%-%->", start)
    +		elseif html:match("^<%?", start) then
    +			_,stop = html:find("?>", start)
    +		else
    +			_,stop = html:find("%b<>", start)
    +		end
    +		if not stop then
    +			-- error("Could not match html tag " .. html:sub(start,start+30)) 
    +		 	table.insert(tokens, {type="text", text=html:sub(start, start)})
    +			pos = start + 1
    +		else
    +			table.insert(tokens, {type="tag", text=html:sub(start, stop)})
    +			pos = stop + 1
    +		end
    +	end
    +	return tokens
    +end
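
For example (a sketch, not part of the diff), mixed content is split into alternating text and tag tokens:

    local tokens = tokenize_html("before <b>bold</b> after")
    -- Expected: text "before ", tag "<b>", text "bold", tag "</b>", text " after"
    assert(tokens[2].type == "tag" and tokens[2].text == "<b>")
    assert(tokens[3].type == "text" and tokens[3].text == "bold")
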
    +
    +----------------------------------------------------------------------
    +-- Hash
    +----------------------------------------------------------------------
    +
    +-- This is used to "hash" data into alphanumeric strings that are unique
+-- in the document. (Note that this is not a cryptographic hash; the hash
    +-- function is not one-way.) The hash procedure is used to protect parts
    +-- of the document from further processing.
    +
    +local HASH = {
    +	-- Has the hash been inited.
    +	inited = false,
    +	
    +	-- The unique string prepended to all hash values. This is to ensure
+	-- that hash values do not accidentally coincide with an actual existing
    +	-- string in the document.
    +	identifier = "",
    +	
    +	-- Counter that counts up for each new hash instance.
    +	counter = 0,
    +	
    +	-- Hash table.
    +	table = {}
    +}
    +
    +-- Inits hashing. Creates a hash_identifier that doesn't occur anywhere
    +-- in the text.
    +function init_hash(text)
    +	HASH.inited = true
    +	HASH.identifier = ""
    +	HASH.counter = 0
    +	HASH.table = {}
    +	
    +	local s = "HASH"
    +	local counter = 0
    +	local id
    +	while true do
    +		id  = s .. counter
    +		if not text:find(id, 1, true) then break end
    +		counter = counter + 1
    +	end
    +	HASH.identifier = id
    +end
    +
    +-- Returns the hashed value for s.
    +function hash(s)
    +	assert(HASH.inited)
    +	if not HASH.table[s] then
    +		HASH.counter = HASH.counter + 1
    +		local id = HASH.identifier .. HASH.counter .. "X"
    +		HASH.table[s] = id
    +	end
    +	return HASH.table[s]
    +end
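
How the two functions cooperate (a sketch, not part of the diff; assumes the input text does not already contain the string "HASH0"):

    init_hash("some document text")    -- identifier becomes "HASH0"
    assert(hash("<b>") == "HASH01X")   -- first distinct string gets counter 1
    assert(hash("<b>") == "HASH01X")   -- same input, same nonce
    assert(hash("<i>") == "HASH02X")   -- a new string gets the next counter value
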
    +
    +----------------------------------------------------------------------
    +-- Protection
    +----------------------------------------------------------------------
    +
    +-- The protection module is used to "protect" parts of a document
    +-- so that they are not modified by subsequent processing steps. 
    +-- Protected parts are saved in a table for later unprotection
    +
    +-- Protection data
    +local PD = {
    +	-- Saved blocks that have been converted
    +	blocks = {},
    +
    +	-- Block level tags that will be protected
    +	tags = {"p", "div", "h1", "h2", "h3", "h4", "h5", "h6", "blockquote",
    +	"pre", "table", "dl", "ol", "ul", "script", "noscript", "form", "fieldset",
    +	"iframe", "math", "ins", "del"}
    +}
    +
    +-- Pattern for matching a block tag that begins and ends in the leftmost
    +-- column and may contain indented subtags, i.e.
+-- <div>
+--     A nested block.
+--     <div>
+--         Nested data.
+--     </div>
+-- </div>
    +function block_pattern(tag) + return "\n<" .. tag .. ".-\n[ \t]*\n" +end + +-- Pattern for matching a block tag that begins and ends with a newline +function line_pattern(tag) + return "\n<" .. tag .. ".-[ \t]*\n" +end + +-- Protects the range of characters from start to stop in the text and +-- returns the protected string. +function protect_range(text, start, stop) + local s = text:sub(start, stop) + local h = hash(s) + PD.blocks[h] = s + text = text:sub(1,start) .. h .. text:sub(stop) + return text +end + +-- Protect every part of the text that matches any of the patterns. The first +-- matching pattern is protected first, etc. +function protect_matches(text, patterns) + while true do + local start, stop = find_first(text, patterns) + if not start then break end + text = protect_range(text, start, stop) + end + return text +end + +-- Protects blocklevel tags in the specified text +function protect(text) + -- First protect potentially nested block tags + text = protect_matches(text, map(PD.tags, block_pattern)) + -- Then protect block tags at the line level. + text = protect_matches(text, map(PD.tags, line_pattern)) + -- Protect
    and comment tags + text = protect_matches(text, {"\n]->[ \t]*\n"}) + text = protect_matches(text, {"\n[ \t]*\n"}) + return text +end + +-- Returns true if the string s is a hash resulting from protection +function is_protected(s) + return PD.blocks[s] +end + +-- Unprotects the specified text by expanding all the nonces +function unprotect(text) + for k,v in pairs(PD.blocks) do + v = v:gsub("%%", "%%%%") + text = text:gsub(k, v) + end + return text +end + + +---------------------------------------------------------------------- +-- Block transform +---------------------------------------------------------------------- + +-- The block transform functions transform the text on the block level. +-- They work with the text as an array of lines rather than as individual +-- characters. + +-- Returns true if the line is a ruler of (char) characters. +-- The line must contain at least three char characters and contain only spaces and +-- char characters. +function is_ruler_of(line, char) + if not line:match("^[ %" .. char .. "]*$") then return false end + if not line:match("%" .. char .. ".*%" .. char .. ".*%" .. char) then return false end + return true +end + +-- Identifies the block level formatting present in the line +function classify(line) + local info = {line = line, text = line} + + if line:match("^ ") then + info.type = "indented" + info.outdented = line:sub(5) + return info + end + + for _,c in ipairs({'*', '-', '_', '='}) do + if is_ruler_of(line, c) then + info.type = "ruler" + info.ruler_char = c + return info + end + end + + if line == "" then + info.type = "blank" + return info + end + + if line:match("^(#+)[ \t]*(.-)[ \t]*#*[ \t]*$") then + local m1, m2 = line:match("^(#+)[ \t]*(.-)[ \t]*#*[ \t]*$") + info.type = "header" + info.level = m1:len() + info.text = m2 + return info + end + + if line:match("^ ? ? ?(%d+)%.[ \t]+(.+)") then + local number, text = line:match("^ ? ? ?(%d+)%.[ \t]+(.+)") + info.type = "list_item" + info.list_type = "numeric" + info.number = 0 + number + info.text = text + return info + end + + if line:match("^ ? ? ?([%*%+%-])[ \t]+(.+)") then + local bullet, text = line:match("^ ? ? ?([%*%+%-])[ \t]+(.+)") + info.type = "list_item" + info.list_type = "bullet" + info.bullet = bullet + info.text= text + return info + end + + if line:match("^>[ \t]?(.*)") then + info.type = "blockquote" + info.text = line:match("^>[ \t]?(.*)") + return info + end + + if is_protected(line) then + info.type = "raw" + info.html = unprotect(line) + return info + end + + info.type = "normal" + return info +end + +-- Find headers constisting of a normal line followed by a ruler and converts them to +-- header entries. 
+function headers(array) + local i = 1 + while i <= #array - 1 do + if array[i].type == "normal" and array[i+1].type == "ruler" and + (array[i+1].ruler_char == "-" or array[i+1].ruler_char == "=") then + local info = {line = array[i].line} + info.text = info.line + info.type = "header" + info.level = iff(array[i+1].ruler_char == "=", 1, 2) + table.remove(array, i+1) + array[i] = info + end + i = i + 1 + end + return array +end + +-- Find list blocks and convert them to protected data blocks +function lists(array, sublist) + local function process_list(arr) + local function any_blanks(arr) + for i = 1, #arr do + if arr[i].type == "blank" then return true end + end + return false + end + + local function split_list_items(arr) + local acc = {arr[1]} + local res = {} + for i=2,#arr do + if arr[i].type == "list_item" then + table.insert(res, acc) + acc = {arr[i]} + else + table.insert(acc, arr[i]) + end + end + table.insert(res, acc) + return res + end + + local function process_list_item(lines, block) + while lines[#lines].type == "blank" do + table.remove(lines) + end + + local itemtext = lines[1].text + for i=2,#lines do + itemtext = itemtext .. "\n" .. outdent(lines[i].line) + end + if block then + itemtext = block_transform(itemtext, true) + if not itemtext:find("
<pre>") then itemtext = indent(itemtext) end
    +				return "    
  • " .. itemtext .. "
  • " + else + local lines = split(itemtext) + lines = map(lines, classify) + lines = lists(lines, true) + lines = blocks_to_html(lines, true) + itemtext = table.concat(lines, "\n") + if not itemtext:find("
<pre>") then itemtext = indent(itemtext) end
    +				return "    
  • " .. itemtext .. "
  • " + end + end + + local block_list = any_blanks(arr) + local items = split_list_items(arr) + local out = "" + for _, item in ipairs(items) do + out = out .. process_list_item(item, block_list) .. "\n" + end + if arr[1].list_type == "numeric" then + return "
<ol>\n" .. out .. "</ol>"
+		else
+			return "<ul>\n" .. out .. "</ul>
    " + end + end + + -- Finds the range of lines composing the first list in the array. A list + -- starts with (^ list_item) or (blank list_item) and ends with + -- (blank* $) or (blank normal). + -- + -- A sublist can start with just (list_item) does not need a blank... + local function find_list(array, sublist) + local function find_list_start(array, sublist) + if array[1].type == "list_item" then return 1 end + if sublist then + for i = 1,#array do + if array[i].type == "list_item" then return i end + end + else + for i = 1, #array-1 do + if array[i].type == "blank" and array[i+1].type == "list_item" then + return i+1 + end + end + end + return nil + end + local function find_list_end(array, start) + local pos = #array + for i = start, #array-1 do + if array[i].type == "blank" and array[i+1].type ~= "list_item" + and array[i+1].type ~= "indented" and array[i+1].type ~= "blank" then + pos = i-1 + break + end + end + while pos > start and array[pos].type == "blank" do + pos = pos - 1 + end + return pos + end + + local start = find_list_start(array, sublist) + if not start then return nil end + return start, find_list_end(array, start) + end + + while true do + local start, stop = find_list(array, sublist) + if not start then break end + local text = process_list(splice(array, start, stop)) + local info = { + line = text, + type = "raw", + html = text + } + array = splice(array, start, stop, {info}) + end + + -- Convert any remaining list items to normal + for _,line in ipairs(array) do + if line.type == "list_item" then line.type = "normal" end + end + + return array +end + +-- Find and convert blockquote markers. +function blockquotes(lines) + local function find_blockquote(lines) + local start + for i,line in ipairs(lines) do + if line.type == "blockquote" then + start = i + break + end + end + if not start then return nil end + + local stop = #lines + for i = start+1, #lines do + if lines[i].type == "blank" or lines[i].type == "blockquote" then + elseif lines[i].type == "normal" then + if lines[i-1].type == "blank" then stop = i-1 break end + else + stop = i-1 break + end + end + while lines[stop].type == "blank" do stop = stop - 1 end + return start, stop + end + + local function process_blockquote(lines) + local raw = lines[1].text + for i = 2,#lines do + raw = raw .. "\n" .. lines[i].text + end + local bt = block_transform(raw) + if not bt:find("
<pre>") then bt = indent(bt) end
+		return "<blockquote>\n" .. bt ..
+			"\n</blockquote>
    " + end + + while true do + local start, stop = find_blockquote(lines) + if not start then break end + local text = process_blockquote(splice(lines, start, stop)) + local info = { + line = text, + type = "raw", + html = text + } + lines = splice(lines, start, stop, {info}) + end + return lines +end + +-- Find and convert codeblocks. +function codeblocks(lines) + local function find_codeblock(lines) + local start + for i,line in ipairs(lines) do + if line.type == "indented" then start = i break end + end + if not start then return nil end + + local stop = #lines + for i = start+1, #lines do + if lines[i].type ~= "indented" and lines[i].type ~= "blank" then + stop = i-1 + break + end + end + while lines[stop].type == "blank" do stop = stop - 1 end + return start, stop + end + + local function process_codeblock(lines) + local raw = detab(encode_code(outdent(lines[1].line))) + for i = 2,#lines do + raw = raw .. "\n" .. detab(encode_code(outdent(lines[i].line))) + end + return "
<pre><code>" .. raw .. "\n</code></pre>
    " + end + + while true do + local start, stop = find_codeblock(lines) + if not start then break end + local text = process_codeblock(splice(lines, start, stop)) + local info = { + line = text, + type = "raw", + html = text + } + lines = splice(lines, start, stop, {info}) + end + return lines +end + +-- Convert lines to html code +function blocks_to_html(lines, no_paragraphs) + local out = {} + local i = 1 + while i <= #lines do + local line = lines[i] + if line.type == "ruler" then + table.insert(out, "
    ") + elseif line.type == "raw" then + table.insert(out, line.html) + elseif line.type == "normal" then + local s = line.line + + while i+1 <= #lines and lines[i+1].type == "normal" do + i = i + 1 + s = s .. "\n" .. lines[i].line + end + + if no_paragraphs then + table.insert(out, span_transform(s)) + else + table.insert(out, "
<p>" .. span_transform(s) .. "</p>
    ") + end + elseif line.type == "header" then + local s = "" .. span_transform(line.text) .. "" + table.insert(out, s) + else + table.insert(out, line.line) + end + i = i + 1 + end + return out +end + +-- Perform all the block level transforms +function block_transform(text, sublist) + local lines = split(text) + lines = map(lines, classify) + lines = headers(lines) + lines = lists(lines, sublist) + lines = codeblocks(lines) + lines = blockquotes(lines) + lines = blocks_to_html(lines) + local text = table.concat(lines, "\n") + return text +end + +-- Debug function for printing a line array to see the result +-- of partial transforms. +function print_lines(lines) + for i, line in ipairs(lines) do + print(i, line.type, line.text or line.line) + end +end + +---------------------------------------------------------------------- +-- Span transform +---------------------------------------------------------------------- + +-- Functions for transforming the text at the span level. + +-- These characters may need to be escaped because they have a special +-- meaning in markdown. +escape_chars = "'\\`*_{}[]()>#+-.!'" +escape_table = {} + +function init_escape_table() + escape_table = {} + for i = 1,#escape_chars do + local c = escape_chars:sub(i,i) + escape_table[c] = hash(c) + end +end + +-- Adds a new escape to the escape table. +function add_escape(text) + if not escape_table[text] then + escape_table[text] = hash(text) + end + return escape_table[text] +end + +-- Escape characters that should not be disturbed by markdown. +function escape_special_chars(text) + local tokens = tokenize_html(text) + + local out = "" + for _, token in ipairs(tokens) do + local t = token.text + if token.type == "tag" then + -- In tags, encode * and _ so they don't conflict with their use in markdown. + t = t:gsub("%*", escape_table["*"]) + t = t:gsub("%_", escape_table["_"]) + else + t = encode_backslash_escapes(t) + end + out = out .. t + end + return out +end + +-- Encode backspace-escaped characters in the markdown source. +function encode_backslash_escapes(t) + for i=1,escape_chars:len() do + local c = escape_chars:sub(i,i) + t = t:gsub("\\%" .. c, escape_table[c]) + end + return t +end + +-- Unescape characters that have been encoded. +function unescape_special_chars(t) + local tin = t + for k,v in pairs(escape_table) do + k = k:gsub("%%", "%%%%") + t = t:gsub(v,k) + end + if t ~= tin then t = unescape_special_chars(t) end + return t +end + +-- Encode/escape certain characters inside Markdown code runs. +-- The point is that in code, these characters are literals, +-- and lose their special Markdown meanings. +function encode_code(s) + s = s:gsub("%&", "&") + s = s:gsub("<", "<") + s = s:gsub(">", ">") + for k,v in pairs(escape_table) do + s = s:gsub("%"..k, v) + end + return s +end + +-- Handle backtick blocks. +function code_spans(s) + s = s:gsub("\\\\", escape_table["\\"]) + s = s:gsub("\\`", escape_table["`"]) + + local pos = 1 + while true do + local start, stop = s:find("`+", pos) + if not start then return s end + local count = stop - start + 1 + -- Find a matching numbert of backticks + local estart, estop = s:find(string.rep("`", count), stop+1) + local brstart = s:find("\n", stop+1) + if estart and (not brstart or estart < brstart) then + local code = s:sub(stop+1, estart-1) + code = code:gsub("^[ \t]+", "") + code = code:gsub("[ \t]+$", "") + code = code:gsub(escape_table["\\"], escape_table["\\"] .. escape_table["\\"]) + code = code:gsub(escape_table["`"], escape_table["\\"] .. 
escape_table["`"]) + code = "" .. encode_code(code) .. "" + code = add_escape(code) + s = s:sub(1, start-1) .. code .. s:sub(estop+1) + pos = start + code:len() + else + pos = stop + 1 + end + end + return s +end + +-- Encode alt text... enodes &, and ". +function encode_alt(s) + if not s then return s end + s = s:gsub('&', '&') + s = s:gsub('"', '"') + s = s:gsub('<', '<') + return s +end + +-- Handle image references +function images(text) + local function reference_link(alt, id) + alt = encode_alt(alt:match("%b[]"):sub(2,-2)) + id = id:match("%[(.*)%]"):lower() + if id == "" then id = text:lower() end + link_database[id] = link_database[id] or {} + if not link_database[id].url then return nil end + local url = link_database[id].url or id + url = encode_alt(url) + local title = encode_alt(link_database[id].title) + if title then title = " title=\"" .. title .. "\"" else title = "" end + return add_escape ('' .. alt .. '") + end + + local function inline_link(alt, link) + alt = encode_alt(alt:match("%b[]"):sub(2,-2)) + local url, title = link:match("%(?[ \t]*['\"](.+)['\"]") + url = url or link:match("%(?%)") + url = encode_alt(url) + title = encode_alt(title) + if title then + return add_escape('' .. alt .. '') + else + return add_escape('' .. alt .. '') + end + end + + text = text:gsub("!(%b[])[ \t]*\n?[ \t]*(%b[])", reference_link) + text = text:gsub("!(%b[])(%b())", inline_link) + return text +end + +-- Handle anchor references +function anchors(text) + local function reference_link(text, id) + text = text:match("%b[]"):sub(2,-2) + id = id:match("%b[]"):sub(2,-2):lower() + if id == "" then id = text:lower() end + link_database[id] = link_database[id] or {} + if not link_database[id].url then return nil end + local url = link_database[id].url or id + url = encode_alt(url) + local title = encode_alt(link_database[id].title) + if title then title = " title=\"" .. title .. "\"" else title = "" end + return add_escape("") .. text .. add_escape("") + end + + local function inline_link(text, link) + text = text:match("%b[]"):sub(2,-2) + local url, title = link:match("%(?[ \t]*['\"](.+)['\"]") + title = encode_alt(title) + url = url or link:match("%(?%)") or "" + url = encode_alt(url) + if title then + return add_escape("") .. text .. "" + else + return add_escape("") .. text .. add_escape("") + end + end + + text = text:gsub("(%b[])[ \t]*\n?[ \t]*(%b[])", reference_link) + text = text:gsub("(%b[])(%b())", inline_link) + return text +end + +-- Handle auto links, i.e. . +function auto_links(text) + local function link(s) + return add_escape("") .. s .. "" + end + -- Encode chars as a mix of dec and hex entitites to (perhaps) fool + -- spambots. + local function encode_email_address(s) + -- Use a deterministic encoding to make unit testing possible. + -- Code 45% hex, 45% dec, 10% plain. + local hex = {code = function(c) return "&#x" .. string.format("%x", c:byte()) .. ";" end, count = 1, rate = 0.45} + local dec = {code = function(c) return "&#" .. c:byte() .. 
";" end, count = 0, rate = 0.45} + local plain = {code = function(c) return c end, count = 0, rate = 0.1} + local codes = {hex, dec, plain} + local function swap(t,k1,k2) local temp = t[k2] t[k2] = t[k1] t[k1] = temp end + + local out = "" + for i = 1,s:len() do + for _,code in ipairs(codes) do code.count = code.count + code.rate end + if codes[1].count < codes[2].count then swap(codes,1,2) end + if codes[2].count < codes[3].count then swap(codes,2,3) end + if codes[1].count < codes[2].count then swap(codes,1,2) end + + local code = codes[1] + local c = s:sub(i,i) + -- Force encoding of "@" to make email address more invisible. + if c == "@" and code == plain then code = codes[2] end + out = out .. code.code(c) + code.count = code.count - 1 + end + return out + end + local function mail(s) + s = unescape_special_chars(s) + local address = encode_email_address("mailto:" .. s) + local text = encode_email_address(s) + return add_escape("") .. text .. "" + end + -- links + text = text:gsub("<(https?:[^'\">%s]+)>", link) + text = text:gsub("<(ftp:[^'\">%s]+)>", link) + + -- mail + text = text:gsub("%s]+)>", mail) + text = text:gsub("<([-.%w]+%@[-.%w]+)>", mail) + return text +end + +-- Encode free standing amps (&) and angles (<)... note that this does not +-- encode free >. +function amps_and_angles(s) + -- encode amps not part of &..; expression + local pos = 1 + while true do + local amp = s:find("&", pos) + if not amp then break end + local semi = s:find(";", amp+1) + local stop = s:find("[ \t\n&]", amp+1) + if not semi or (stop and stop < semi) or (semi - amp) > 15 then + s = s:sub(1,amp-1) .. "&" .. s:sub(amp+1) + pos = amp+1 + else + pos = amp+1 + end + end + + -- encode naked <'s + s = s:gsub("<([^a-zA-Z/?$!])", "<%1") + s = s:gsub("<$", "<") + + -- what about >, nothing done in the original markdown source to handle them + return s +end + +-- Handles emphasis markers (* and _) in the text. +function emphasis(text) + for _, s in ipairs {"%*%*", "%_%_"} do + text = text:gsub(s .. "([^%s][%*%_]?)" .. s, "%1") + text = text:gsub(s .. "([^%s][^<>]-[^%s][%*%_]?)" .. s, "%1") + end + for _, s in ipairs {"%*", "%_"} do + text = text:gsub(s .. "([^%s_])" .. s, "%1") + text = text:gsub(s .. "([^%s_])" .. s, "%1") + text = text:gsub(s .. "([^%s_][^<>_]-[^%s_])" .. s, "%1") + text = text:gsub(s .. "([^<>_]-[^<>_]-[^<>_]-)" .. s, "%1") + end + return text +end + +-- Handles line break markers in the text. +function line_breaks(text) + return text:gsub(" +\n", "
    \n") +end + +-- Perform all span level transforms. +function span_transform(text) + text = code_spans(text) + text = escape_special_chars(text) + text = images(text) + text = anchors(text) + text = auto_links(text) + text = amps_and_angles(text) + text = emphasis(text) + text = line_breaks(text) + return text +end + +---------------------------------------------------------------------- +-- Markdown +---------------------------------------------------------------------- + +-- Cleanup the text by normalizing some possible variations to make further +-- processing easier. +function cleanup(text) + -- Standardize line endings + text = text:gsub("\r\n", "\n") -- DOS to UNIX + text = text:gsub("\r", "\n") -- Mac to UNIX + + -- Convert all tabs to spaces + text = detab(text) + + -- Strip lines with only spaces and tabs + while true do + local subs + text, subs = text:gsub("\n[ \t]+\n", "\n\n") + if subs == 0 then break end + end + + return "\n" .. text .. "\n" +end + +-- Strips link definitions from the text and stores the data in a lookup table. +function strip_link_definitions(text) + local linkdb = {} + + local function link_def(id, url, title) + id = id:match("%[(.+)%]"):lower() + linkdb[id] = linkdb[id] or {} + linkdb[id].url = url or linkdb[id].url + linkdb[id].title = title or linkdb[id].title + return "" + end + + local def_no_title = "\n ? ? ?(%b[]):[ \t]*\n?[ \t]*]+)>?[ \t]*" + local def_title1 = def_no_title .. "[ \t]+\n?[ \t]*[\"'(]([^\n]+)[\"')][ \t]*" + local def_title2 = def_no_title .. "[ \t]*\n[ \t]*[\"'(]([^\n]+)[\"')][ \t]*" + local def_title3 = def_no_title .. "[ \t]*\n?[ \t]+[\"'(]([^\n]+)[\"')][ \t]*" + + text = text:gsub(def_title1, link_def) + text = text:gsub(def_title2, link_def) + text = text:gsub(def_title3, link_def) + text = text:gsub(def_no_title, link_def) + return text, linkdb +end + +link_database = {} + +-- Main markdown processing function +function markdown(text) + init_hash(text) + init_escape_table() + + text = cleanup(text) + text = protect(text) + text, link_database = strip_link_definitions(text) + text = block_transform(text) + text = unescape_special_chars(text) + return text +end + +---------------------------------------------------------------------- +-- End of module +---------------------------------------------------------------------- + +setfenv(1, _G) +M.lock(M) + +-- Expose markdown function to the world +markdown = M.markdown + +-- Class for parsing command-line options +local OptionParser = {} +OptionParser.__index = OptionParser + +-- Creates a new option parser +function OptionParser:new() + local o = {short = {}, long = {}} + setmetatable(o, self) + return o +end + +-- Calls f() whenever a flag with specified short and long name is encountered +function OptionParser:flag(short, long, f) + local info = {type = "flag", f = f} + if short then self.short[short] = info end + if long then self.long[long] = info end +end + +-- Calls f(param) whenever a parameter flag with specified short and long name is encountered +function OptionParser:param(short, long, f) + local info = {type = "param", f = f} + if short then self.short[short] = info end + if long then self.long[long] = info end +end + +-- Calls f(v) for each non-flag argument +function OptionParser:arg(f) + self.arg = f +end + +-- Runs the option parser for the specified set of arguments. Returns true if all arguments +-- where successfully parsed and false otherwise. 
+function OptionParser:run(args) + local pos = 1 + while pos <= #args do + local arg = args[pos] + if arg == "--" then + for i=pos+1,#args do + if self.arg then self.arg(args[i]) end + return true + end + end + if arg:match("^%-%-") then + local info = self.long[arg:sub(3)] + if not info then print("Unknown flag: " .. arg) return false end + if info.type == "flag" then + info.f() + pos = pos + 1 + else + param = args[pos+1] + if not param then print("No parameter for flag: " .. arg) return false end + info.f(param) + pos = pos+2 + end + elseif arg:match("^%-") then + for i=2,arg:len() do + local c = arg:sub(i,i) + local info = self.short[c] + if not info then print("Unknown flag: -" .. c) return false end + if info.type == "flag" then + info.f() + else + if i == arg:len() then + param = args[pos+1] + if not param then print("No parameter for flag: -" .. c) return false end + info.f(param) + pos = pos + 1 + else + param = arg:sub(i+1) + info.f(param) + end + break + end + end + pos = pos + 1 + else + if self.arg then self.arg(arg) end + pos = pos + 1 + end + end + return true +end + +-- Handles the case when markdown is run from the command line +local function run_command_line(arg) + -- Generate output for input s given options + local function run(s, options) + s = markdown(s) + if not options.wrap_header then return s end + local header = "" + if options.header then + local f = io.open(options.header) or error("Could not open file: " .. options.header) + header = f:read("*a") + f:close() + else + header = [[ + + + + + TITLE + + + +]] + local title = options.title or s:match("
<h1>(.-)</h1>") or s:match("<h2>(.-)</h2>") or
+				s:match("<h3>(.-)</h3>
    ") or "Untitled" + header = header:gsub("TITLE", title) + if options.inline_style then + local style = "" + local f = io.open(options.stylesheet) + if f then + style = f:read("*a") f:close() + else + error("Could not include style sheet " .. options.stylesheet .. ": File not found") + end + header = header:gsub('', + "") + else + header = header:gsub("STYLESHEET", options.stylesheet) + end + header = header:gsub("CHARSET", options.charset) + end + local footer = "" + if options.footer then + local f = io.open(options.footer) or error("Could not open file: " .. options.footer) + footer = f:read("*a") + f:close() + end + return header .. s .. footer + end + + -- Generate output path name from input path name given options. + local function outpath(path, options) + if options.append then return path .. ".html" end + local m = path:match("^(.+%.html)[^/\\]+$") if m then return m end + m = path:match("^(.+%.)[^/\\]*$") if m and path ~= m .. "html" then return m .. "html" end + return path .. ".html" + end + + -- Default commandline options + local options = { + wrap_header = true, + header = nil, + footer = nil, + charset = "utf-8", + title = nil, + stylesheet = "default.css", + inline_style = false + } + local help = [[ +Usage: markdown.lua [OPTION] [FILE] +Runs the markdown text markup to HTML converter on each file specified on the +command line. If no files are specified, runs on standard input. + +No header: + -n, --no-wrap Don't wrap the output in ... tags. +Custom header: + -e, --header FILE Use content of FILE for header. + -f, --footer FILE Use content of FILE for footer. +Generated header: + -c, --charset SET Specifies charset (default utf-8). + -i, --title TITLE Specifies title (default from first
<h1>
    tag). + -s, --style STYLE Specifies style sheet file (default default.css). + -l, --inline-style Include the style sheet file inline in the header. +Generated files: + -a, --append Append .html extension (instead of replacing). +Other options: + -h, --help Print this help text. + -t, --test Run the unit tests. +]] + + local run_stdin = true + local op = OptionParser:new() + op:flag("n", "no-wrap", function () options.wrap_header = false end) + op:param("e", "header", function (x) options.header = x end) + op:param("f", "footer", function (x) options.footer = x end) + op:param("c", "charset", function (x) options.charset = x end) + op:param("i", "title", function(x) options.title = x end) + op:param("s", "style", function(x) options.stylesheet = x end) + op:flag("l", "inline-style", function(x) options.inline_style = true end) + op:flag("a", "append", function() options.append = true end) + op:flag("t", "test", function() + local n = arg[0]:gsub("markdown.lua", "markdown-tests.lua") + local f = io.open(n) + if f then + f:close() dofile(n) + else + error("Cannot find markdown-tests.lua") + end + run_stdin = false + end) + op:flag("h", "help", function() print(help) run_stdin = false end) + op:arg(function(path) + local file = io.open(path) or error("Could not open file: " .. path) + local s = file:read("*a") + file:close() + s = run(s, options) + file = io.open(outpath(path, options), "w") or error("Could not open output file: " .. outpath(path, options)) + file:write(s) + file:close() + run_stdin = false + end + ) + + if not op:run(arg) then + print(help) + run_stdin = false + end + + if run_stdin then + local s = io.read("*a") + s = run(s, options) + io.write(s) + end +end + +-- If we are being run from the command-line, act accordingly +if arg and arg[0]:find("markdown%.lua$") then + run_command_line(arg) +else + return markdown +end \ No newline at end of file diff --git a/luaejdb/tools/ldoc/ldoc/markup.lua b/luaejdb/tools/ldoc/ldoc/markup.lua new file mode 100644 index 0000000..0ea7bd5 --- /dev/null +++ b/luaejdb/tools/ldoc/ldoc/markup.lua @@ -0,0 +1,324 @@ +-------------- +-- Handling markup transformation. +-- Currently just does Markdown, but this is intended to +-- be the general module for managing other formats as well. + +local doc = require 'ldoc.doc' +local utils = require 'pl.utils' +local stringx = require 'pl.stringx' +local prettify = require 'ldoc.prettify' +local quit, concat, lstrip = utils.quit, table.concat, stringx.lstrip +local markup = {} + +local backtick_references + +-- inline use same lookup as @see +local function resolve_inline_references (ldoc, txt, item, plain) + local res = (txt:gsub('@{([^}]-)}',function (name) + local qname,label = utils.splitv(name,'%s*|') + if not qname then + qname = name + end + local ref,err = markup.process_reference(qname) + if not ref then + err = err .. ' ' .. qname + if item then item:warning(err) + else + io.stderr:write('nofile error: ',err,'\n') + end + return '???' 
+ end + if not label then + label = ref.label + end + if not plain and label then -- a nastiness with markdown.lua and underscores + label = label:gsub('_','\\_') + end + local html = ldoc.href(ref) or '#' + label = label or '?que' + local res = ('%s'):format(html,label) + return res + end)) + if backtick_references then + res = res:gsub('`([^`]+)`',function(name) + local ref,err = markup.process_reference(name) + if ref then + return ('%s '):format(ldoc.href(ref),name) + else + return '`'..name..'`' + end + end) + end + return res +end + +-- for readme text, the idea here is to create module sections at ## so that +-- they can appear in the contents list as a ToC. +function markup.add_sections(F, txt) + local sections, L, first = {}, 1, true + local title_pat_end, title_pat = '[^#]%s*(.+)' + for line in stringx.lines(txt) do + if first then + local level,header = line:match '^(#+)%s*(.+)' + if level then + level = level .. '#' + else + level = '##' + end + title_pat = '^'..level..title_pat_end + first = false + end + local title = line:match (title_pat) + if title then + -- Markdown does allow this pattern + title = title:gsub('%s*#+$','') + sections[L] = F:add_document_section(title) + end + L = L + 1 + end + F.sections = sections + return txt +end + +local function indent_line (line) + line = line:gsub('\t',' ') -- support for barbarians ;) + local indent = #line:match '^%s*' + return indent,line +end + +local function non_blank (line) + return line:find '%S' +end + +local global_context, local_context + +-- before we pass Markdown documents to markdown/discount, we need to do three things: +-- - resolve any @{refs} and (optionally) `refs` +-- - any @lookup directives that set local context for ref lookup +-- - insert any section ids which were generated by add_sections above +-- - prettify any code blocks + +local function process_multiline_markdown(ldoc, txt, F) + local res, L, append = {}, 0, table.insert + local filename = F.filename + local err_item = { + warning = function (self,msg) + io.stderr:write(filename..':'..L..': '..msg,'\n') + end + } + local get = stringx.lines(txt) + local getline = function() + L = L + 1 + return get() + end + local function pretty_code (code, lang) + code = concat(code,'\n') + if code ~= '' then + local err + code, err = prettify.code(lang,filename,code..'\n',L,false) + append(res,'
<pre>')
+         append(res, code)
+         append(res,'</pre>
    ') + else + append(res,code) + end + end + local indent,start_indent + local_context = nil + local line = getline() + while line do + local name = line:match '^@lookup%s+(%S+)' + if name then + local_context = name .. '.' + line = getline() + end + local fence = line:match '^```(.*)' + if fence then + local plain = fence=='' + line = getline() + local code = {} + while not line:match '^```' do + if not plain then + append(code, line) + else + append(res, ' '..line) + end + line = getline() + end + pretty_code (code,fence) + line = getline() -- skip fence + end + indent, line = indent_line(line) + if indent >= 4 then -- indented code block + local code = {} + local plain + while indent >= 4 or not non_blank(line) do + if not start_indent then + start_indent = indent + if line:match '^%s*@plain%s*$' then + plain = true + line = getline() + end + end + if not plain then + append(code,line:sub(start_indent)) + else + append(res,line) + end + line = getline() + if line == nil then break end + indent, line = indent_line(line) + end + start_indent = nil + if #code > 1 then table.remove(code) end + pretty_code (code,'lua') + else + local section = F.sections[L] + if section then + append(res,(''):format(section)) + end + line = resolve_inline_references(ldoc, line, err_item) + append(res,line) + line = getline() + end + end + res = concat(res,'\n') + return res +end + + +-- Handle markdown formatters +-- Try to get the one the user has asked for, but if it's not available, +-- try all the others we know about. If they don't work, fall back to text. + +local function generic_formatter(format) + local ok, f = pcall(require, format) + return ok and f +end + + +local formatters = +{ + markdown = function(format) + local ok, markdown = pcall(require, 'markdown') + if not ok then + print('format: using built-in markdown') + ok, markdown = pcall(require, 'ldoc.markdown') + end + return ok and markdown + end, + discount = generic_formatter, + lunamark = function(format) + local ok, lunamark = pcall(require, format) + if ok then + local writer = lunamark.writer.html.new() + local parse = lunamark.reader.markdown.new(writer, + { smart = true }) + return function(text) return parse(text) end + end + end +} + + +local function get_formatter(format) + local formatter = (formatters[format] or generic_formatter)(format) + if formatter then return formatter end + + for name, f in pairs(formatters) do + formatter = f(name) + if formatter then + print('format: '..format..' not found, using '..name) + return formatter + end + end +end + + +local function text_processor(ldoc) + return function(txt,item) + if txt == nil then return '' end + -- hack to separate paragraphs with blank lines + txt = txt:gsub('\n\n','\n

    ') + return resolve_inline_references(ldoc, txt, item, true) + end +end + + +local function markdown_processor(ldoc, formatter) + return function (txt,item) + if txt == nil then return '' end + if utils.is_type(item,doc.File) then + txt = process_multiline_markdown(ldoc, txt, item) + else + txt = resolve_inline_references(ldoc, txt, item) + end + txt = formatter(txt) + -- We will add our own paragraph tags, if needed. + return (txt:gsub('^%s*
<p>(.+)</p>
    %s*$','%1')) + end +end + + +local function get_processor(ldoc, format) + if format == 'plain' then return text_processor(ldoc) end + + local formatter = get_formatter(format) + if formatter then + markup.plain = false + return markdown_processor(ldoc, formatter) + end + + print('format: '..format..' not found, falling back to text') + return text_processor(ldoc) +end + + +function markup.create (ldoc, format, pretty) + local processor + markup.plain = true + backtick_references = ldoc.backtick_references + global_context = ldoc.package and ldoc.package .. '.' + prettify.set_prettifier(pretty) + + markup.process_reference = function(name) + if local_context == 'none.' and not name:match '%.' then + return nil,'not found' + end + local mod = ldoc.single or ldoc.module or ldoc.modules[1] + local ref,err = mod:process_see_reference(name, ldoc.modules) + if ref then return ref end + if global_context then + local qname = global_context .. name + ref = mod:process_see_reference(qname, ldoc.modules) + if ref then return ref end + end + if local_context then + local qname = local_context .. name + ref = mod:process_see_reference(qname, ldoc.modules) + if ref then return ref end + end + -- note that we'll return the original error! + return ref,err + end + + markup.href = function(ref) + return ldoc.href(ref) + end + + processor = get_processor(ldoc, format) + if not markup.plain and backtick_references == nil then + backtick_references = true + end + + markup.resolve_inline_references = function(txt, errfn) + return resolve_inline_references(ldoc, txt, errfn, markup.plain) + end + markup.processor = processor + prettify.resolve_inline_references = function(txt, errfn) + return resolve_inline_references(ldoc, txt, errfn, true) + end + return processor +end + + +return markup diff --git a/luaejdb/tools/ldoc/ldoc/parse.lua b/luaejdb/tools/ldoc/ldoc/parse.lua new file mode 100644 index 0000000..dded82c --- /dev/null +++ b/luaejdb/tools/ldoc/ldoc/parse.lua @@ -0,0 +1,365 @@ +-- parsing code for doc comments + +local List = require 'pl.List' +local Map = require 'pl.Map' +local stringio = require 'pl.stringio' +local lexer = require 'ldoc.lexer' +local tools = require 'ldoc.tools' +local doc = require 'ldoc.doc' +local Item,File = doc.Item,doc.File + +------ Parsing the Source -------------- +-- This uses the lexer from PL, but it should be possible to use Peter Odding's +-- excellent Lpeg based lexer instead. + +local parse = {} + +local tnext, append = lexer.skipws, table.insert + +-- a pattern particular to LuaDoc tag lines: the line must begin with @TAG, +-- followed by the value, which may extend over several lines. 
+local luadoc_tag = '^%s*@(%a+)' +local luadoc_tag_value = luadoc_tag..'(.*)' +local luadoc_tag_mod_and_value = luadoc_tag..'%[(.*)%](.*)' + +-- assumes that the doc comment consists of distinct tag lines +local function parse_at_tags(text) + local lines = stringio.lines(text) + local preamble, line = tools.grab_while_not(lines,luadoc_tag) + local tag_items = {} + local follows + while line do + local tag, mod_string, rest = line :match(luadoc_tag_mod_and_value) + if not tag then tag, rest = line :match (luadoc_tag_value) end + local modifiers + if mod_string then + modifiers = { } + for x in mod_string :gmatch "[^,]+" do + local k, v = x :match "^([^=]+)=(.*)$" + if not k then k, v = x, true end -- wuz x, x + modifiers[k] = v + end + end + -- follows: end of current tag + -- line: beginning of next tag (for next iteration) + follows, line = tools.grab_while_not(lines,luadoc_tag) + append(tag_items,{tag, rest .. '\n' .. follows, modifiers}) + end + return preamble,tag_items +end + +--local colon_tag = '%s*(%a+):%s' +local colon_tag = '%s*(%S-):%s' +local colon_tag_value = colon_tag..'(.*)' + +local function parse_colon_tags (text) + local lines = stringio.lines(text) + local preamble, line = tools.grab_while_not(lines,colon_tag) + local tag_items, follows = {} + while line do + local tag, rest = line:match(colon_tag_value) + follows, line = tools.grab_while_not(lines,colon_tag) + local value = rest .. '\n' .. follows + if tag:match '^[%?!]' then + tag = tag:gsub('^!','') + value = tag .. ' ' .. value + tag = 'tparam' + end + append(tag_items,{tag, value}) + end + return preamble,tag_items +end + +local Tags = {} +Tags.__index = Tags + +function Tags.new (t) + t._order = List() + return setmetatable(t,Tags) +end + +function Tags:add (tag,value) + self[tag] = value + --print('adding',tag,value) + self._order:append(tag) +end + +function Tags:iter () + return self._order:iter() +end + +-- This takes the collected comment block, and uses the docstyle to +-- extract tags and values. Assume that the summary ends in a period or a question +-- mark, and everything else in the preamble is the description. +-- If a tag appears more than once, then its value becomes a list of strings. +-- Alias substitution and @TYPE NAME shortcutting is handled by Item.check_tag +local function extract_tags (s,args) + local preamble,tag_items + if s:match '^%s*$' then return {} end + if args.colon then --and s:match ':%s' and not s:match '@%a' then + preamble,tag_items = parse_colon_tags(s) + else + preamble,tag_items = parse_at_tags(s) + end + local strip = tools.strip + local summary, description = preamble:match('^(.-[%.?])(%s.+)') + if not summary then + -- perhaps the first sentence did not have a . or ? terminating it. + -- Then try split at linefeed + summary, description = preamble:match('^(.-\n\n)(.+)') + if not summary then + summary = preamble + end + end -- and strip(description) ? + local tags = Tags.new{summary=summary and strip(summary) or '',description=description or ''} + for _,item in ipairs(tag_items) do + local tag, value, modifiers = Item.check_tag(tags,unpack(item)) + -- treat multiline values more gently.. 
+ if not value:match '\n[^\n]+\n' then + value = strip(value) + end + + if modifiers then value = { value, modifiers=modifiers } end + local old_value = tags[tag] + + if not old_value then -- first element + tags:add(tag,value) + elseif type(old_value)=='table' and old_value.append then -- append to existing list + old_value :append (value) + else -- upgrade string->list + tags:add(tag,List{old_value, value}) + end + end + return tags --Map(tags) +end + +local _xpcall = xpcall +if true then + _xpcall = function(f) return true, f() end +end + + + +-- parses a Lua or C file, looking for ldoc comments. These are like LuaDoc comments; +-- they start with multiple '-'. (Block commments are allowed) +-- If they don't define a name tag, then by default +-- it is assumed that a function definition follows. If it is the first comment +-- encountered, then ldoc looks for a call to module() to find the name of the +-- module if there isn't an explicit module name specified. + +local function parse_file(fname, lang, package, args) + local line,f = 1 + local F = File(fname) + local module_found, first_comment = false,true + local current_item, module_item + + F.args = args + + F.base = package + + local tok,f = lang.lexer(fname) + if not tok then return nil end + + local function lineno () + return tok:lineno() + end + + local function filename () return fname end + + function F:warning (msg,kind,line) + kind = kind or 'warning' + line = line or lineno() + io.stderr:write(fname..':'..line..': '..msg,'\n') + end + + function F:error (msg) + self:warning(msg,'error') + io.stderr:write('LDoc error\n') + os.exit(1) + end + + local function add_module(tags,module_found,old_style) + tags:add('name',module_found) + tags:add('class','module') + local item = F:new_item(tags,lineno()) + item.old_style = old_style + module_item = item + end + + local mod + local t,v = tnext(tok) + -- with some coding styles first comment is standard boilerplate; option to ignore this. + if args.boilerplate and t == 'comment' then + t,v = tnext(tok) + end + if t == '#' then -- skip Lua shebang line, if present + while t and t ~= 'comment' do t,v = tnext(tok) end + if t == nil then + F:warning('empty file') + return nil + end + end + if lang.parse_module_call and t ~= 'comment'then + while t and not (t == 'iden' and v == 'module') do + t,v = tnext(tok) + end + if not t then + if not args.ignore then + F:warning("no module() call found; no initial doc comment") + end + --return nil + else + mod,t,v = lang:parse_module_call(tok,t,v) + if mod ~= '...' 
then + add_module({summary='(no description)'},mod,true) + first_comment = false + module_found = true + end + end + end + local ok, err = xpcall(function() + while t do + if t == 'comment' then + local comment = {} + local ldoc_comment,block = lang:start_comment(v) + + if ldoc_comment and block then + t,v = lang:grab_block_comment(v,tok) + end + + if lang:empty_comment(v) then -- ignore rest of empty start comments + t,v = tok() + end + + while t and t == 'comment' do + v = lang:trim_comment(v) + append(comment,v) + t,v = tok() + if t == 'space' and not v:match '\n' then + t,v = tok() + end + end + + if t == 'space' then t,v = tnext(tok) end + + local item_follows, tags, is_local, case + if ldoc_comment then + comment = table.concat(comment) + + if first_comment then + first_comment = false + else + item_follows, is_local, case = lang:item_follows(t,v,tok) + end + if item_follows or comment:find '@' or comment:find ': ' then + tags = extract_tags(comment,args) + -- explicitly named @module (which is recommended) + if doc.project_level(tags.class) then + module_found = tags.name + -- might be a module returning a single function! + if tags.param or tags['return'] then + local parms, ret, summ = tags.param, tags['return'],tags.summary + tags.param = nil + tags['return'] = nil + tags.summary = nil + add_module(tags,tags.name,false) + tags = { + summary = summ, + name = 'returns...', + class = 'function', + ['return'] = ret, + param = parms + } + end + end + doc.expand_annotation_item(tags,current_item) + -- if the item has an explicit name or defined meaning + -- then don't continue to do any code analysis! + if tags.name then + if not tags.class then + F:warning("no type specified, assuming function: '"..tags.name.."'") + tags:add('class','function') + end + item_follows, is_local = false, false + elseif lang:is_module_modifier (tags) then + if not item_follows then + F:warning("@usage or @export followed by unknown code") + break + end + item_follows(tags,tok) + local res, value, tagname = lang:parse_module_modifier(tags,tok,F) + if not res then F:warning(value); break + else + if tagname then + module_item:set_tag(tagname,value) + end + -- don't continue to make an item! + ldoc_comment = false + end + end + end + end + -- some hackery necessary to find the module() call + if not module_found and ldoc_comment then + local old_style + module_found,t,v = lang:find_module(tok,t,v) + -- right, we can add the module object ... + old_style = module_found ~= nil + if not module_found or module_found == '...' then + -- we have to guess the module name + module_found = tools.this_module_name(package,fname) + end + if not tags then tags = extract_tags(comment,args) end + add_module(tags,module_found,old_style) + tags = nil + if not t then + F:warning(fname,' contains no items\n','warning',1) + break; + end -- run out of file! 
+ -- if we did bump into a doc comment, then we can continue parsing it + end + + -- end of a block of document comments + if ldoc_comment and tags then + local line = t ~= nil and lineno() + if t ~= nil then + if item_follows then -- parse the item definition + local err = item_follows(tags,tok) + if err then F:error(err) end + else + lang:parse_extra(tags,tok,case) + end + end + if is_local or tags['local'] then + tags['local'] = true + end + if tags.name then + current_item = F:new_item(tags,line) + current_item.inferred = item_follows ~= nil + if doc.project_level(tags.class) then + if module_item then + F:error("Module already declared!") + end + module_item = current_item + end + end + if not t then break end + end + end + if t ~= 'comment' then t,v = tok() end + end + end,debug.traceback) + if not ok then return F, err end + if f then f:close() end + return F +end + +function parse.file(name,lang, args) + local F,err = parse_file(name,lang,args.package,args) + if err or not F then return F,err end + local ok,err = xpcall(function() F:finish() end,debug.traceback) + if not ok then return F,err end + return F +end + +return parse diff --git a/luaejdb/tools/ldoc/ldoc/prettify.lua b/luaejdb/tools/ldoc/ldoc/prettify.lua new file mode 100644 index 0000000..17a9edf --- /dev/null +++ b/luaejdb/tools/ldoc/ldoc/prettify.lua @@ -0,0 +1,103 @@ +-- Making Lua source code look pretty. +-- A simple scanner based prettifier, which scans comments for @{ref} and code +-- for known modules and functions. +-- A module reference to an example `test-fun.lua` would look like +-- `@{example:test-fun}`. +local List = require 'pl.List' +local lexer = require 'ldoc.lexer' +local globals = require 'ldoc.builtin.globals' +local tnext = lexer.skipws +local prettify = {} + +local escaped_chars = { + ['&'] = '&amp;', + ['<'] = '&lt;', + ['>'] = '&gt;', +} +local escape_pat = '[&<>]' + +local function escape(str) + return (str:gsub(escape_pat,escaped_chars)) +end + +local function span(t,val) + return ('<span class="%s">%s</span>'):format(t,val) +end + +local spans = {keyword=true,number=true,string=true,comment=true,global=true} + +function prettify.lua (fname, code, initial_lineno, pre) + local res = List() + if pre then + res:append '
<pre>\n'
    +   end
    +   initial_lineno = initial_lineno or 0
    +
    +   local tok = lexer.lua(code,{},{})
    +   local error_reporter = {
    +      warning = function (self,msg)
    +         io.stderr:write(fname..':'..tok:lineno()+initial_lineno..': '..msg,'\n')
    +      end
    +   }
    +   local t,val = tok()
    +   if not t then return nil,"empty file" end
    +   while t do
    +      val = escape(val)
    +      if globals.functions[val] or globals.tables[val] then
    +         t = 'global'
    +      end
    +      if spans[t] then
    +         if t == 'comment' then -- may contain @{ref}
    +            val = prettify.resolve_inline_references(val,error_reporter)
    +         end
    +         res:append(span(t,val))
    +      else
    +         res:append(val)
    +      end
    +      t,val = tok()
    +   end
    +   local last = res[#res]
    +   if last:match '\n$' then
    +      res[#res] = last:gsub('\n+','')
    +   end
    +   if pre then
    +      res:append '
</pre>\n' + end + return res:join () +end + +local lxsh + +local lxsh_highlighers = {bib=true,c=true,lua=true,sh=true} + +function prettify.code (lang,fname,code,initial_lineno,pre) + if not lxsh then + return prettify.lua (fname, code, initial_lineno, pre) + else + if not lxsh_highlighers[lang] then + lang = 'lua' + end + code = lxsh.highlighters[lang](code, { + formatter = lxsh.formatters.html, + external = true + }) + if not pre then + code = code:gsub("^<pre.->(.*)</pre>
    $", '%1') + end + return code + end +end + +function prettify.set_prettifier (pretty) + local ok + if pretty == 'lxsh' then + ok,lxsh = pcall(require,'lxsh') + if not ok then + print('pretty: '..pretty..' not found, using built-in Lua') + lxsh = nil + end + end +end + +return prettify + diff --git a/luaejdb/tools/ldoc/ldoc/tools.lua b/luaejdb/tools/ldoc/ldoc/tools.lua new file mode 100644 index 0000000..34d014e --- /dev/null +++ b/luaejdb/tools/ldoc/ldoc/tools.lua @@ -0,0 +1,466 @@ +--------- +-- General utility functions for ldoc +-- @module tools + +local class = require 'pl.class' +local List = require 'pl.List' +local path = require 'pl.path' +local utils = require 'pl.utils' +local tablex = require 'pl.tablex' +local stringx = require 'pl.stringx' +local dir = require 'pl.dir' +local tools = {} +local M = tools +local append = table.insert +local lexer = require 'ldoc.lexer' +local quit = utils.quit +local lfs = require 'lfs' + +-- this constructs an iterator over a list of objects which returns only +-- those objects where a field has a certain value. It's used to iterate +-- only over functions or tables, etc. +-- (something rather similar exists in LuaDoc) +function M.type_iterator (list,field,value) + return function() + local i = 1 + return function() + local val = list[i] + while val and val[field] ~= value do + i = i + 1 + val = list[i] + end + i = i + 1 + if val then return val end + end + end +end + +-- KindMap is used to iterate over a set of categories, called _kinds_, +-- and the associated iterator over all items in that category. +-- For instance, a module contains functions, tables, etc and we will +-- want to iterate over these categories in a specified order: +-- +-- for kind, items in module.kinds() do +-- print('kind',kind) +-- for item in items() do print(item.name) end +-- end +-- +-- The kind is typically used as a label or a Title, so for type 'function' the +-- kind is 'Functions' and so on. + +local KindMap = class() +M.KindMap = KindMap + +-- calling a KindMap returns an iterator. This returns the kind, the iterator +-- over the items of that type, and the actual type tag value. +function KindMap:__call () + local i = 1 + local klass = self.klass + return function() + local kind = klass.kinds[i] + if not kind then return nil end -- no more kinds + while not self[kind] do + i = i + 1 + kind = klass.kinds[i] + if not kind then return nil end + end + i = i + 1 + local type = klass.types_by_kind [kind].type + return kind, self[kind], type + end +end + +function KindMap:put_kind_first (kind) + -- find this kind in our kind list + local kinds = self.klass.kinds,kind + local idx = tablex.find(kinds,kind) + -- and swop with the start! + if idx then + kinds[1],kinds[idx] = kinds[idx],kinds[1] + end +end + +function KindMap:type_of (item) + local klass = self.klass + local kind = klass.types_by_tag[item.type] + return klass.types_by_kind [kind] +end + +function KindMap:get_section_description (kind) + return self.klass.descriptions[kind] +end + +function KindMap:get_item (kind) + return self.klass.items_by_kind[kind] +end + +-- called for each new item. It does not actually create separate lists, +-- (although that would not break the interface) but creates iterators +-- for that item type if not already created. 
+function KindMap:add (item,items,description) + local group = item[self.fieldname] -- which wd be item's type or section + local kname = self.klass.types_by_tag[group] -- the kind name + if not self[kname] then + self[kname] = M.type_iterator (items,self.fieldname,group) + self.klass.descriptions[kname] = description + end + item.kind = kname:lower() +end + +-- KindMap has a 'class constructor' which is used to modify +-- any new base class. +function KindMap._class_init (klass) + klass.kinds = {} -- list in correct order of kinds + klass.types_by_tag = {} -- indexed by tag + klass.types_by_kind = {} -- indexed by kind + klass.descriptions = {} -- optional description for each kind + klass.items_by_kind = {} -- some kinds are items +end + + +function KindMap.add_kind (klass,tag,kind,subnames,item) + if not klass.types_by_kind[kind] then + klass.types_by_tag[tag] = kind + klass.types_by_kind[kind] = {type=tag,subnames=subnames} + if item then + klass.items_by_kind[kind] = item + end + append(klass.kinds,kind) + end +end + + +----- some useful utility functions ------ + +function M.module_basepath() + local lpath = List.split(package.path,';') + for p in lpath:iter() do + local p = path.dirname(p) + if path.isabs(p) then + return p + end + end +end + +-- split a qualified name into the module part and the name part, +-- e.g 'pl.utils.split' becomes 'pl.utils' and 'split' +function M.split_dotted_name (s) + local s1,s2 = path.splitext(s) + if s2=='' then return nil + else return s1,s2:sub(2) + end +end + +-- expand lists of possibly qualified identifiers +-- given something like {'one , two.2','three.drei.drie)'} +-- it will output {"one","two.2","three.drei.drie"} +function M.expand_comma_list (ls) + local new_ls = List() + for s in ls:iter() do + s = s:gsub('[^%.:%-%w_]*$','') + if s:find ',' then + new_ls:extend(List.split(s,'%s*,%s*')) + else + new_ls:append(s) + end + end + return new_ls +end + +-- grab lines from a line iterator `iter` until the line matches the pattern. +-- Returns the joined lines and the line, which may be nil if we run out of +-- lines. +function M.grab_while_not(iter,pattern) + local line = iter() + local res = {} + while line and not line:match(pattern) do + append(res,line) + line = iter() + end + res = table.concat(res,'\n') + return res,line +end + + +function M.extract_identifier (value) + return value:match('([%.:%-_%w]+)(.*)$') +end + +function M.strip (s) + return s:gsub('^%s+',''):gsub('%s+$','') +end + +function M.check_directory(d) + if not path.isdir(d) then + lfs.mkdir(d) + end +end + +function M.check_file (f,original) + if not path.exists(f) or path.getmtime(original) > path.getmtime(f) then + local text,err = utils.readfile(original) + if text then + text,err = utils.writefile(f,text) + end + if err then + quit("Could not copy "..original.." to "..f) + end + end +end + +function M.writefile(name,text) + local ok,err = utils.writefile(name,text) + if err then quit(err) end +end + +function M.name_of (lpath) + local ext + lpath,ext = path.splitext(lpath) + return lpath +end + +function M.this_module_name (basename,fname) + local ext + if basename == '' then + return M.name_of(fname) + end + basename = path.abspath(basename) + if basename:sub(-1,-1) ~= path.sep then + basename = basename..path.sep + end + local lpath,cnt = fname:gsub('^'..utils.escape(basename),'') + --print('deduce',lpath,cnt,basename) + if cnt ~= 1 then quit("module(...) name deduction failed: base "..basename.." 
"..fname) end + lpath = lpath:gsub(path.sep,'.') + return (M.name_of(lpath):gsub('%.init$','')) +end + +function M.find_existing_module (name, dname, searchfn) + local fullpath,lua = searchfn(name) + local mod = true + if not fullpath then -- maybe it's a function reference? + -- try again with the module part + local mpath,fname = M.split_dotted_name(name) + if mpath then + fullpath,lua = searchfn(mpath) + else + fullpath = nil + end + if not fullpath then + return nil, "module or function '"..dname.."' not found on module path" + else + mod = fname + end + end + if not lua then return nil, "module '"..name.."' is a binary extension" end + return fullpath, mod +end + +function M.lookup_existing_module_or_function (name, docpath) + -- first look up on the Lua module path + local on_docpath + local fullpath, mod = M.find_existing_module(name,name,path.package_path) + -- no go; but see if we can find it on the doc path + if not fullpath then + fullpath, mod = M.find_existing_module("ldoc.builtin." .. name,name,path.package_path) + on_docpath = true +--~ fullpath, mod = M.find_existing_module(name, function(name) +--~ local fpath = package.searchpath(name,docpath) +--~ return fpath,true -- result must always be 'lua'! +--~ end) + end + return fullpath, mod, on_docpath -- `mod` can be the error message +end + + +--------- lexer tools ----- + +local tnext = lexer.skipws + +local function type_of (tok) return tok and tok[1] or 'end' end +local function value_of (tok) return tok[2] end + +-- This parses Lua formal argument lists. It will return a list of argument +-- names, which also has a comments field, which will contain any commments +-- following the arguments. ldoc will use these in addition to explicit +-- param tags. + +function M.get_parameters (tok,endtoken,delim) + tok = M.space_skip_getter(tok) + local args = List() + args.comments = {} + local ltl = lexer.get_separated_list(tok,endtoken,delim) + + if not ltl or not ltl[1] or #ltl[1] == 0 then return args end -- no arguments + + local function strip_comment (text) + return text:match("%s*%-%-+%s*(.*)") + end + + local function set_comment (idx,tok) + local text = stringx.rstrip(value_of(tok)) + text = strip_comment(text) + local arg = args[idx] + local current_comment = args.comments[arg] + if current_comment then + text = current_comment .. " " .. text + end + args.comments[arg] = text + end + + for i = 1,#ltl do + local tl = ltl[i] -- token list for argument + if #tl > 0 then + local j = 1 + if type_of(tl[1]) == 'comment' then + -- the comments for the i-1 th arg are in the i th arg... + if i > 1 then + while type_of(tl[j]) == 'comment' do + set_comment(i-1,tl[j]) + j = j + 1 + end + else -- first comment however is for the function return comment! + args.return_comment = strip_comment(value_of(tl[i])) + j = j + 1 + end + if #tl > 1 then + args:append(value_of(tl[j])) + end + else + args:append(value_of(tl[1])) + end + if i == #ltl and #tl > 1 then + while j <= #tl and type_of(tl[j]) ~= 'comment' do + j = j + 1 + end + if j > #tl then break end -- was no comments! + while type_of(tl[j]) == 'comment' do + set_comment(i,tl[j]) + j = j + 1 + end + end + else + return nil,"empty argument" + end + end + + ----[[ + -- we had argument comments + -- but the last one may be outside the parens! (Geoff style) + -- (only try this stunt if it's a function parameter list!) 
+ if (not endtoken or endtoken == ')') and (#args > 0 or next(args.comments)) then + local n = #args + local last_arg = args[n] + if not args.comments[last_arg] then + while true do + local t = {tok()} + if type_of(t) == 'comment' then + set_comment(n,t) + else + break + end + end + end + end + --]] + return args +end + +-- parse a Lua identifier - contains names separated by . and :. +function M.get_fun_name (tok,first) + local res = {} + local t,name,sep + if not first then + t,name = tnext(tok) + else + t,name = 'iden',first + end + if t ~= 'iden' then return nil end + t,sep = tnext(tok) + while sep == '.' or sep == ':' do + append(res,name) + append(res,sep) + t,name = tnext(tok) + t,sep = tnext(tok) + end + append(res,name) + return table.concat(res),t,sep +end + +-- space-skipping version of token iterator +function M.space_skip_getter(tok) + return function () + local t,v = tok() + while t and t == 'space' do + t,v = tok() + end + return t,v + end +end + +function M.quote (s) + return "'"..s.."'" +end + +-- The PL Lua lexer does not do block comments +-- when used in line-grabbing mode, so this function grabs each line +-- until we meet the end of the comment +function M.grab_block_comment (v,tok,patt) + local res = {v} + repeat + v = lexer.getline(tok) + if v:match (patt) then break end + append(res,v) + append(res,'\n') + until false + res = table.concat(res) + --print(res) + return 'comment',res +end + +local prel = path.normcase('/[^/]-/%.%.') + + +function M.abspath (f) + local count + local res = path.normcase(path.abspath(f)) + while true do + res,count = res:gsub(prel,'') + if count == 0 then break end + end + return res +end + +function M.process_file_list (list, mask, operation, ...) + local exclude_list = list.exclude and M.files_from_list(list.exclude, mask) + local function process (f,...) + f = M.abspath(f) + if not exclude_list or exclude_list and exclude_list:index(f) == nil then + operation(f, ...) + end + end + for _,f in ipairs(list) do + if path.isdir(f) then + local files = List(dir.getallfiles(f,mask)) + for f in files:iter() do + process(f,...) + end + elseif path.isfile(f) then + process(f,...) + else + quit("file or directory does not exist: "..M.quote(f)) + end + end +end + +function M.files_from_list (list, mask) + local excl = List() + M.process_file_list (list, mask, function(f) + excl:append(f) + end) + return excl +end + + + +return tools diff --git a/luaejdb/tools/ldoc/readme.md b/luaejdb/tools/ldoc/readme.md new file mode 100644 index 0000000..3d6c378 --- /dev/null +++ b/luaejdb/tools/ldoc/readme.md @@ -0,0 +1,57 @@ +# LDoc - A Lua Documentation Tool + +Copyright (C) 2011-2012 Steve Donovan. + +## Rationale + +This project grew out of the documentation needs of +[Penlight](https://github.com/stevedonovan/Penlight) (and not always getting satisfaction +with LuaDoc) and depends on Penlight itself.(This allowed me to _not_ write a lot of code.) + +The [API documentation](http://stevedonovan.github.com/Penlight/api/index.html) of Penlight +is an example of a project using plain LuaDoc markup processed using LDoc. + +LDoc is intended to be compatible with [LuaDoc](http://luadoc.luaforge.net/manual.htm) and +thus follows the pattern set by the various *Doc tools: + + --- Summary ends with a period. + -- Some description, can be over several lines. 
+ -- @param p1 first parameter + -- @param p2 second parameter + -- @return a string value + -- @see second_fun + function mod1.first_fun(p1,p2) + end + +Tags such as `see` and `usage` are supported, and generally the names of functions and +modules can be inferred from the code. + +LDoc is designed to give better diagnostics: if a `@see` reference cannot be found, then the +line number of the reference is given. LDoc knows about modules which do not use `module()` +- this is important since this function has become deprecated in Lua 5.2. And you can avoid +having to embed HTML in comments by using Markdown. + +LDoc will also work with Lua C extension code, and provides some convenient shortcuts. + +An example showing the support for named sections and 'classes' is the [Winapi +documentation](http://stevedonovan.github.com/winapi/api.html); this is generated from +[winapi.l.c](https://github.com/stevedonovan/winapi/blob/master/winapi.l.c). + +## Installation + +This is straightforward; the only external dependency is +[Penlight](https://github.com/stevedonovan/Penlight), which in turn needs +[LuaFileSystem](http://keplerproject.github.com/luafilesystem/). These are already present +in Lua for Windows, and Penlight is also available through LuaRocks as `luarocks install +penlight`. + +Unpack the sources somewhere and make an alias to `ldoc.lua` on your path. That is, either +an executable script called 'ldoc' like so: + + lua /path/to/ldoc/ldoc.lua $* + +Or a batch file called 'ldoc.bat': + + @echo off + lua \path\to\ldoc\ldoc.lua %* + diff --git a/node/ejdb.js b/node/ejdb.js index a800004..c58f935 100644 --- a/node/ejdb.js +++ b/node/ejdb.js @@ -175,6 +175,9 @@ EJDB.prototype.dropCollection = function(cname, prune, cb) { * * @param {String} cname Name of collection. * @param {Array|Object} jsarr Single JSON object or array of JSON objects to save + * @param {Object?} opts Optional options obj. + * If opts.merge == true the saved object will be merged with the one + * already persisted in the db. + * @param {Function} [cb] Callback function with arguments: (error, {Array} of OIDs for saved objects) + * @return {Array} of OIDs of saved objects in synchronous mode otherwise returns {undefined}. */ @@ -388,7 +391,7 @@ function parseQueryArgs(args) { * - find(cname, qobj, [cb]) * - find(cname, qobj, hints, [cb]) * - find(cname, qobj, qobjarr, [cb]) - * - find(cname, qobj, qobjarr, hints, [cb]) + * - find(cname, qobj, qobjarr, hints, [cb]) * * @param {String} cname Name of collection * @param {Object} qobj Main JSON query object
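
The two hunks above document the node binding's `save()` options (the new `opts.merge` flag) and the accepted `find()` call forms. As a quick orientation, here is a minimal usage sketch, not part of the patch itself: the database name "zoo", the collection "parrots", the field names and the `$orderby` hint are illustrative assumptions, and `EJDB.DEFAULT_OPEN_MODE` is assumed to be the default open-mode constant exported by the binding.

    var EJDB = require("ejdb");
    // Illustrative names below ("zoo", "parrots", "likes") are not taken from the patch.
    var jb = EJDB.open("zoo", EJDB.DEFAULT_OPEN_MODE);

    // save(cname, jsarr, opts, cb): with {merge: true} the stored record is
    // merged with the object already persisted under the same _id.
    jb.save("parrots", {name: "Grenny", likes: ["night"]}, {merge: true}, function(err, oids) {
        if (err) throw err;
        // find(cname, qobj, hints, cb): a query object plus a hints object
        jb.find("parrots", {likes: "night"}, {$orderby: {name: 1}}, function(err, cursor, count) {
            if (err) throw err;
            while (cursor.next()) {
                console.log(cursor.field("name"));
            }
            jb.close();
        });
    });

Called without the trailing callback, `save()` runs synchronously and returns the array of OIDs directly, as the JSDoc above states.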