--- /dev/null
+{
+ "git": {
+ "sha1": "e5224ed68ce5d3e8d69e39c0b67e4791fb8c17a7"
+ },
+ "path_in_vcs": "crates/data"
+}
\ No newline at end of file
--- /dev/null
+# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO
+#
+# When uploading crates to the registry Cargo will automatically
+# "normalize" Cargo.toml files for maximal compatibility
+# with all versions of Cargo and also rewrite `path` dependencies
+# to registry (e.g., crates.io) dependencies.
+#
+# If you are reading this file be aware that the original Cargo.toml
+# will likely look very different (and much more reasonable).
+# See Cargo.toml.orig for the original contents.
+
+[package]
+edition = "2021"
+rust-version = "1.60.0"
+name = "toml-test-data"
+version = "1.3.0"
+include = [
+ "src/**/*",
+ "Cargo.toml",
+ "LICENSE*",
+ "README.md",
+ "examples/**/*",
+ "assets/toml-test/tests",
+]
+description = "TOML test cases"
+readme = "README.md"
+keywords = [
+ "development",
+ "toml",
+]
+categories = [
+ "development-tools:testing",
+ "text-processing",
+ "encoding",
+]
+license = "MIT OR Apache-2.0"
+repository = "https://github.com/epage/toml-test-rs"
+
+[dependencies.include_dir]
+version = "0.7.0"
--- /dev/null
+[package]
+name = "toml-test-data"
+version = "1.3.0"
+description = "TOML test cases"
+license = "MIT OR Apache-2.0"
+repository = "https://github.com/epage/toml-test-rs"
+readme = "README.md"
+categories = ["development-tools:testing", "text-processing", "encoding"]
+keywords = ["development", "toml"]
+edition = "2021"
+rust-version = "1.60.0" # MSRV
+include = [
+ "src/**/*",
+ "Cargo.toml",
+ "LICENSE*",
+ "README.md",
+ "examples/**/*",
+ "assets/toml-test/tests"
+]
+
+[dependencies]
+include_dir = "0.7.0"
--- /dev/null
+# toml-test-data
+
+> **TOML test cases**
+
+[][Documentation]
+
+[](https://crates.io/crates/toml-test)
+
+Dual-licensed under [MIT](LICENSE-MIT) or [Apache 2.0](LICENSE-APACHE)
+
+## About
+
+[toml-test](https://github.com/BurntSushi/toml-test) is a language-agnostic
+test suite for verifying the spec compliance of TOML parsers. This crate
+distributes all of its test cases.
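+
+Since the test data is bundled via [`include_dir`](https://crates.io/crates/include_dir),
+a consumer can embed and walk the cases directly. The sketch below is illustrative
+only and is not this crate's public API; the asset path is an assumption taken from
+the `include` list in `Cargo.toml`:
+
+```rust
+// Illustrative sketch: embed the toml-test cases and walk the "invalid" ones.
+// The directory path is an assumption, not a documented interface.
+use include_dir::{include_dir, Dir, DirEntry, File};
+
+static TESTS: Dir<'_> = include_dir!("$CARGO_MANIFEST_DIR/assets/toml-test/tests");
+
+fn walk<'a>(dir: &'a Dir<'a>, out: &mut Vec<&'a File<'a>>) {
+    for entry in dir.entries() {
+        match entry {
+            DirEntry::Dir(d) => walk(d, out),
+            DirEntry::File(f) => out.push(f),
+        }
+    }
+}
+
+fn main() {
+    let mut files = Vec::new();
+    walk(TESTS.get_dir("invalid").expect("invalid/ exists"), &mut files);
+    for file in files {
+        // Each file is an intentionally invalid TOML document; feed it to the
+        // parser under test and assert that parsing fails.
+        println!("{}", file.path().display());
+    }
+}
+```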
+
+[Documentation]: https://docs.rs/toml-test
--- /dev/null
+`toml-test` is a language-agnostic test suite to verify the correctness of
+[TOML][t] parsers and writers.
+
+Tests are divided into two groups: "invalid" and "valid". A decoder or encoder
+passes an "invalid" test by rejecting it, and passes a "valid" test by accepting
+it and outputting precisely what is expected. The output format is JSON,
+described below.
+
+Both encoders and decoders share the valid tests, except that an encoder accepts
+JSON and outputs TOML rather than the reverse. The TOML an encoder produces is
+read back with a blessed decoder and compared against the expected data. Encoders
+have their own set of invalid tests in the invalid-encoder directory. The JSON
+given to a TOML encoder is in the same format as the JSON that a TOML decoder
+should output.
+
+Compatible with TOML version [v1.0.0][v1].
+
+[t]: https://toml.io
+[v1]: https://toml.io/en/v1.0.0
+
+Installation
+------------
+There are binaries on the [release page][r]; these are statically compiled and
+should run in most environments. It's recommended that you use a binary, or a
+tagged release if you build from source, especially in CI environments. This
+prevents your tests from breaking when the tests in this tool change.
+
+To compile from source you will need Go 1.16 or newer (older versions will *not*
+work):
+
+ $ git clone https://github.com/BurntSushi/toml-test.git
+ $ cd toml-test
+ $ go build ./cmd/toml-test
+
+This will build a `./toml-test` binary.
+
+[r]: https://github.com/BurntSushi/toml-test/releases
+
+Usage
+-----
+`toml-test` accepts an encoder or decoder as the first positional argument, for
+example:
+
+ $ toml-test my-toml-decoder
+ $ toml-test my-toml-encoder -encoder
+
+The `-encoder` flag is used to signal that this is an encoder rather than a
+decoder.
+
+For example, to run the tests against the Go TOML library:
+
+ # Install my parser
+ $ go install github.com/BurntSushi/toml/cmd/toml-test-decoder@master
+ $ go install github.com/BurntSushi/toml/cmd/toml-test-encoder@master
+
+ $ toml-test toml-test-decoder
+    toml-test [toml-test-decoder]: using embedded tests: 278 passed
+
+ $ toml-test -encoder toml-test-encoder
+    toml-test [toml-test-encoder]: using embedded tests: 94 passed, 0 failed
+
+The default is to use the tests compiled into the binary; you can use `-testdir`
+to load tests from the filesystem. You can use `-run [name]` or `-skip [name]`
+to run or skip specific tests. Both flags can be given more than once and accept
+glob patterns: `-run 'valid/string/*'`.
+
+See `toml-test -help` for detailed usage.
+
+### Implementing a decoder
+For your decoder to be compatible with `toml-test` it **must** satisfy the
+expected interface:
+
+- Your decoder **must** accept TOML data on `stdin`.
+- If the TOML data is invalid, your decoder **must** return with a non-zero
+ exit code, indicating an error.
+- If the TOML data is valid, your decoder **must** output a JSON encoding of
+ that data on `stdout` and return with a zero exit code, indicating success.
+
+An example in pseudocode:
+
+ toml_data = read_stdin()
+
+ parsed_toml = decode_toml(toml_data)
+
+ if error_parsing_toml():
+ print_error_to_stderr()
+ exit(1)
+
+ print_as_tagged_json(parsed_toml)
+ exit(0)
+
+Details on the tagged JSON format are explained below in "JSON encoding".
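+
+A minimal decoder harness in Rust might look like the sketch below. This is an
+illustration, not a reference implementation: it assumes the `toml` and
+`serde_json` crates and, for brevity, tags every date-time value as `datetime`
+rather than distinguishing the four date-time types:
+
+```rust
+// Sketch of a decoder harness: TOML on stdin -> tagged JSON on stdout.
+use std::io::Read;
+
+fn tag(v: &toml::Value) -> serde_json::Value {
+    use serde_json::json;
+    match v {
+        // Tables and arrays map to plain JSON objects and arrays.
+        toml::Value::Table(t) => serde_json::Value::Object(
+            t.iter().map(|(k, v)| (k.clone(), tag(v))).collect(),
+        ),
+        toml::Value::Array(a) => serde_json::Value::Array(a.iter().map(tag).collect()),
+        // Every other value becomes a {"type": ..., "value": ...} object,
+        // with the value always rendered as a string.
+        toml::Value::String(s) => json!({"type": "string", "value": s}),
+        toml::Value::Integer(i) => json!({"type": "integer", "value": i.to_string()}),
+        toml::Value::Float(f) => json!({"type": "float", "value": f.to_string()}),
+        toml::Value::Boolean(b) => json!({"type": "bool", "value": b.to_string()}),
+        // Simplification: a real harness must emit datetime-local, date-local
+        // and time-local for the corresponding TOML values.
+        toml::Value::Datetime(d) => json!({"type": "datetime", "value": d.to_string()}),
+    }
+}
+
+fn main() {
+    let mut input = String::new();
+    std::io::stdin().read_to_string(&mut input).expect("read stdin");
+    match input.parse::<toml::Value>() {
+        Ok(doc) => println!("{}", tag(&doc)), // implicit zero exit code on success
+        Err(err) => {
+            eprintln!("{err}");
+            std::process::exit(1); // non-zero exit code on invalid TOML
+        }
+    }
+}
+```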
+
+### Implementing an encoder
+For your encoder to be compatible with `toml-test`, it **must** satisfy the
+expected interface:
+
+- Your encoder **must** accept JSON data on `stdin`.
+- If the JSON data cannot be converted to a valid TOML representation, your
+ encoder **must** return with a non-zero exit code, indicating an error.
+- If the JSON data can be converted to a valid TOML representation, your encoder
+ **must** output a TOML encoding of that data on `stdout` and return with a
+ zero exit code, indicating success.
+
+An example in pseudocode:
+
+ json_data = read_stdin()
+
+ parsed_json_with_tags = decode_json(json_data)
+
+ if error_parsing_json():
+ print_error_to_stderr()
+ exit(1)
+
+ print_as_toml(parsed_json_with_tags)
+ exit(0)
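+
+The conversion in the other direction is symmetric: strip the `"type"`/`"value"`
+tags and rebuild a TOML document. Below is a hedged sketch of that untagging
+step, again assuming the `toml` and `serde_json` crates; the surrounding
+stdin/stdout handling mirrors the decoder sketch above:
+
+```rust
+// Sketch: turn tagged JSON back into a toml::Value for serialization.
+fn untag(v: &serde_json::Value) -> toml::Value {
+    match v {
+        // A two-key object with "type" and "value" is a tagged scalar.
+        serde_json::Value::Object(o)
+            if o.len() == 2 && o.contains_key("type") && o.contains_key("value") =>
+        {
+            let ty = o.get("type").and_then(|t| t.as_str()).expect("type is a string");
+            let val = o.get("value").and_then(|t| t.as_str()).expect("value is a string");
+            match ty {
+                "string" => toml::Value::String(val.to_owned()),
+                "integer" => toml::Value::Integer(val.parse().expect("integer")),
+                "float" => toml::Value::Float(val.parse().expect("float")),
+                "bool" => toml::Value::Boolean(val.parse().expect("bool")),
+                // Simplification: all four date-time types are parsed the same way.
+                _ => toml::Value::Datetime(val.parse().expect("datetime")),
+            }
+        }
+        // Anything else is a table or an array of further tagged values.
+        serde_json::Value::Object(o) => toml::Value::Table(
+            o.iter().map(|(k, v)| (k.clone(), untag(v))).collect(),
+        ),
+        serde_json::Value::Array(a) => toml::Value::Array(a.iter().map(untag).collect()),
+        other => panic!("tagged JSON only contains objects, arrays and strings: {other}"),
+    }
+}
+```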
+
+JSON encoding
+-------------
+The following JSON encoding applies equally to both encoders and decoders:
+
+- TOML tables correspond to JSON objects.
+- TOML table arrays correspond to JSON arrays.
+- TOML values correspond to a special JSON object of the form:
+ `{"type": "{TOML_TYPE}", "value": "{TOML_VALUE}"}`
+
+In the above, `TOML_TYPE` may be one of:
+
+- string
+- integer
+- float
+- bool
+- datetime
+- datetime-local
+- date-local
+- time-local
+
+`TOML_VALUE` is always a JSON string.
+
+Empty hashes correspond to empty JSON objects (`{}`) and empty arrays correspond
+to empty JSON arrays (`[]`).
+
+Offset datetimes should be encoded in RFC 3339; Local datetimes should be
+encoded following RFC 3339 without the offset part. Local dates should be
+encoded as the date part of RFC 3339 and Local times as the time part.
+
+Examples:
+
+ TOML JSON
+
+    a = 42                          {"a": {"type": "integer", "value": "42"}}
+
+<!-- -->
+
+ [tbl] {"tbl": {
+    a = 42                 "a": {"type": "integer", "value": "42"}
+ }}
+
+<!-- -->
+
+ a = ["a", 2] {"a": [
+                           {"type": "string", "value": "a"},
+                           {"type": "integer", "value": "2"}
+ ]}
+
+Or a more complex example:
+
+```toml
+best-day-ever = 1987-07-05T17:45:00Z
+
+[numtheory]
+boring = false
+perfection = [6, 28, 496]
+```
+
+And the JSON encoding expected by `toml-test` is:
+
+```json
+{
+ "best-day-ever": {"type": "datetime", "value": "1987-07-05T17:45:00Z"},
+ "numtheory": {
+ "boring": {"type": "bool", "value": "false"},
+ "perfection": [
+ {"type": "integer", "value": "6"},
+ {"type": "integer", "value": "28"},
+ {"type": "integer", "value": "496"}
+ ]
+ }
+}
+```
+
+Note that the only JSON values ever used are objects, arrays and strings.
+
+An example implementation can be found in BurntSushi/toml:
+
+- [Add tags](https://github.com/BurntSushi/toml/blob/master/internal/tag/add.go)
+- [Remove tags](https://github.com/BurntSushi/toml/blob/master/internal/tag/rm.go)
+
+Implementation-defined behaviour
+--------------------------------
+This only tests behaviour that should be true for every encoder implementing
+TOML; a few things are left up to implementations, and are not tested here.
+
+- Millisecond precision (3 digits) is required for datetimes and times; further
+  precision is implementation-specific, and any precision greater than what an
+  implementation supports must be truncated (not rounded).
+
+  This tests only millisecond precision, and not any further precision or the
+  truncation of it.
+
+Assumptions of Truth
+--------------------
+The following are taken as ground truths by `toml-test`:
+
+- All tests classified as `invalid` **are** invalid.
+- All tests classified as `valid` **are** valid.
+- All expected outputs in `valid/test-name.json` are exactly correct.
+- The Go standard library package `encoding/json` decodes JSON correctly.
+- When testing encoders the
+ [BurntSushi/toml](https://github.com/BurntSushi/toml) TOML decoder is assumed
+ to be correct. (Note that this assumption is not made when testing decoders!)
+
+Of particular note is that **no TOML decoder** is taken as ground truth when
+testing decoders. This means that most changes to the spec will only require an
+update of the tests in `toml-test`. (Bigger changes may require an adjustment of
+how two things are considered equal. Particularly if a new type of data is
+added.) Obviously, this advantage does not apply to testing TOML encoders since
+there must exist a TOML decoder that conforms to the specification in order to
+read the output of a TOML encoder.
+
+Adding tests
+------------
+`toml-test` was designed so that tests can be easily added and removed. As
+mentioned above, tests are split into two groups: invalid and valid tests.
+
+Invalid tests **only check if a decoder rejects invalid TOML data**. Or, in the
+case of testing encoders, invalid tests **only check if an encoder rejects an
+invalid representation of TOML** (e.g., a heterogeneous array). Therefore, all
+invalid tests should try to **test one thing and one thing only**. Invalid tests
+should be named after the fault they are trying to expose. Invalid tests for
+decoders are in the `tests/invalid` directory while invalid tests for encoders
+are in the `tests/invalid-encoder` directory.
+
+Valid tests check that a decoder accepts valid TOML data **and** that the parser
+has the correct representation of the TOML data. Therefore, valid tests need a
+JSON encoding in addition to the TOML data. The tests should be small enough
+that writing the JSON encoding by hand will not give you brain damage. The exact
+reverse is true when testing encoders.
+
+A valid test without either a `.json` or `.toml` file will automatically fail.
+
+If you have tests that you'd like to add, please submit a pull request.
+
+Why JSON?
+---------
+In order for a language-agnostic test suite to work, we need some kind of data
+exchange format. TOML cannot be used, as it would imply that a particular parser
+has a blessing of correctness.
+
+My decision to use JSON was not a careful one. It was based on expediency. The
+Go standard library has an excellent `encoding/json` package built in, which
+made it easy to compare JSON data.
+
+The problem with JSON is that the types in TOML are not in one-to-one
+correspondence with JSON. This is why every TOML value represented in JSON is
+tagged with a type annotation, as described above.
+
+YAML may be closer in correspondence with TOML, but I don't believe we should
+rely on that correspondence. Making things explicit with JSON means that writing
+tests is a little more cumbersome, but it also reduces the number of assumptions
+we need to make.
--- /dev/null
+*.toml -text
--- /dev/null
+array = [1,,2]
--- /dev/null
+array = [1,2,,]
+
--- /dev/null
+a = [{ b = 1 }]
+
+# Cannot extend tables within static arrays
+# https://github.com/toml-lang/toml/issues/908
+[a.c]
+foo = 1
--- /dev/null
+wrong = [ 1 2 3 ]
--- /dev/null
+x = [{ key = 42 #
--- /dev/null
+x = [{ key = 42
--- /dev/null
+long_array = [ 1, 2, 3
--- /dev/null
+# INVALID TOML DOC
+fruit = []
+
+[[fruit]] # Not allowed
--- /dev/null
+# INVALID TOML DOC
+[[fruit]]
+ name = "apple"
+
+ [[fruit.variety]]
+ name = "red delicious"
+
+ # This table conflicts with the previous table
+ [fruit.variety]
+ name = "granny smith"
--- /dev/null
+array = [
+ "Is there life after an array separator?", No
+ "Entry"
+]
--- /dev/null
+array = [
+ "Is there life before an array separator?" No,
+ "Entry"
+]
--- /dev/null
+array = [
+ "Entry 1",
+ I don't belong,
+ "Entry 2",
+]
--- /dev/null
+a = falsify
--- /dev/null
+a = truthy
--- /dev/null
+valid = False
--- /dev/null
+a = falsey
--- /dev/null
+# The following line contains a single carriage return control character\r
+\r
\ No newline at end of file
--- /dev/null
+bare-formfeed = \f
--- /dev/null
+bare-vertical-tab = \v
--- /dev/null
+comment-cr = "Carriage return in comment" # \ra=1
--- /dev/null
+comment-del = "0x7f" # \7f
--- /dev/null
+comment-lf = "ctrl-P" # \10
--- /dev/null
+comment-us = "ctrl-_" # \1f
--- /dev/null
+# "\x.." sequences are replaced with literal control characters.
+
+comment-null = "null" # \x00
+comment-lf = "ctrl-P" # \x10
+comment-us = "ctrl-_" # \x1f
+comment-del = "0x7f" # \x7f
+comment-cr = "Carriage return in comment" # \x0da=1
+
+string-null = "null\x00"
+string-lf = "null\x10"
+string-us = "null\x1f"
+string-del = "null\x7f"
+
+rawstring-null = 'null\x00'
+rawstring-lf = 'null\x10'
+rawstring-us = 'null\x1f'
+rawstring-del = 'null\x7f'
+
+multi-null = """null\x00"""
+multi-lf = """null\x10"""
+multi-us = """null\x1f"""
+multi-del = """null\x7f"""
+
+rawmulti-null = '''null\x00'''
+rawmulti-lf = '''null\x10'''
+rawmulti-us = '''null\x1f'''
+rawmulti-del = '''null\x7f'''
+
+string-bs = "backspace\x08"
+
+bare-null = "some value" \x00
+bare-formfeed = \x0c
+bare-vertical-tab = \x0b
--- /dev/null
+multi-del = """null\7f"""
--- /dev/null
+multi-lf = """null\10"""
--- /dev/null
+multi-us = """null\1f"""
--- /dev/null
+rawmulti-del = '''null\7f'''
--- /dev/null
+rawmulti-lf = '''null\10'''
--- /dev/null
+rawmulti-us = '''null\1f'''
--- /dev/null
+rawstring-del = 'null\7f'
--- /dev/null
+rawstring-lf = 'null\10'
--- /dev/null
+rawstring-us = 'null\1f'
--- /dev/null
+string-bs = "backspace\b"
--- /dev/null
+string-del = "null\7f"
--- /dev/null
+string-lf = "null\10"
--- /dev/null
+string-us = "null\1f"
--- /dev/null
+# time-hour = 2DIGIT ; 00-23
+d = 2006-01-01T24:00:00-00:00
--- /dev/null
+# date-mday = 2DIGIT ; 01-28, 01-29, 01-30, 01-31 based on
+# ; month/year
+d = 2006-01-32T00:00:00-00:00
--- /dev/null
+# date-mday = 2DIGIT ; 01-28, 01-29, 01-30, 01-31 based on
+# ; month/year
+d = 2006-01-00T00:00:00-00:00
--- /dev/null
+# time-minute = 2DIGIT ; 00-59
+d = 2006-01-01T00:60:00-00:00
--- /dev/null
+# date-month = 2DIGIT ; 01-12
+d = 2006-13-01T00:00:00-00:00
--- /dev/null
+# date-month = 2DIGIT ; 01-12
+d = 2007-00-01T00:00:00-00:00
--- /dev/null
+# Day "5" instead of "05"; the leading zero is required.
+with-milli = 1987-07-5T17:45:00.12Z
--- /dev/null
+# Month "7" instead of "07"; the leading zero is required.
+no-leads = 1987-7-05T17:45:00Z
--- /dev/null
+# No seconds in time.
+no-secs = 1987-07-05T17:45Z
--- /dev/null
+# No "t" or "T" between the date and time.
+no-t = 1987-07-0517:45:00Z
--- /dev/null
+# time-second = 2DIGIT ; 00-58, 00-59, 00-60 based on leap second
+# ; rules
+d = 2006-01-01T00:00:61-00:00
--- /dev/null
+# Leading 0 is always required.
+d = 01:32:0
--- /dev/null
+# Leading 0 is always required.
+d = 1:32:00
--- /dev/null
+# Date cannot end with trailing T
+d = 2006-01-30T
--- /dev/null
+# There is a 0xda byte after the quotes, and no EOL at the end of the file.
+#
+# This is a bit of an edge case: This indicates there should be two bytes
+# (0b1101_1010) but there is no byte to follow because it's the end of the file.
+x = """"""Ú
\ No newline at end of file
--- /dev/null
+# The following line contains an invalid UTF-8 sequence.
+bad = '''Ã'''
--- /dev/null
+# The following line contains an invalid UTF-8 sequence.
+bad = """Ã"""
--- /dev/null
+# The following line contains an invalid UTF-8 sequence.
+bad = 'Ã'
--- /dev/null
+# The following line contains an invalid UTF-8 sequence.
+bad = "Ã"
--- /dev/null
+bom-not-at-start ÿý
--- /dev/null
+bom-not-at-start= ÿý
--- /dev/null
+double-point-1 = 0..1
--- /dev/null
+double-point-2 = 0.1.2
--- /dev/null
+exp-double-e-1 = 1ee2
--- /dev/null
+exp-double-e-2 = 1e2e3
--- /dev/null
+exp-double-us = 1e__23
--- /dev/null
+exp-leading-us = 1e_23
--- /dev/null
+exp-point-1 = 1e2.3
--- /dev/null
+exp-point-2 = 1.e2
--- /dev/null
+exp-trailing-us = 1e_23_
--- /dev/null
+leading-zero = 03.14
+leading-zero-neg = -03.14
+leading-zero-plus = +03.14
+
+leading-point = .12345
+leading-point-neg = -.12345
+leading-point-plus = +.12345
+
+trailing-point = 1.
+trailing-point-min = -1.
+trailing-point-plus = +1.
+
+trailing-us = 1.2_
+leading-us = _1.2
+us-before-point = 1_.2
+us-after-point = 1._2
+
+double-point-1 = 0..1
+double-point-2 = 0.1.2
+
+exp-point-1 = 1e2.3
+exp-point-2 = 1.e2
+
+exp-double-e-1 = 1ee2
+exp-double-e-2 = 1e2e3
+
+exp-leading-us = 1e_23
+exp-trailing-us = 1e_23_
+exp-double-us = 1e__23
+
+inf-incomplete-1 = in
+inf-incomplete-2 = +in
+inf-incomplete-3 = -in
+
+nan-incomplete-1 = na
+nan-incomplete-2 = +na
+nan-incomplete-3 = -na
+
+nan_underscore = na_n
+inf_underscore = in_f
--- /dev/null
+inf-incomplete-1 = in
--- /dev/null
+inf-incomplete-2 = +in
--- /dev/null
+inf-incomplete-3 = -in
--- /dev/null
+inf_underscore = in_f
--- /dev/null
+leading-point-neg = -.12345
--- /dev/null
+leading-point-plus = +.12345
--- /dev/null
+leading-point = .12345
--- /dev/null
+leading-us = _1.2
--- /dev/null
+leading-zero-neg = -03.14
--- /dev/null
+leading-zero-plus = +03.14
--- /dev/null
+leading-zero = 03.14
--- /dev/null
+nan-incomplete-1 = na
--- /dev/null
+nan-incomplete-2 = +na
--- /dev/null
+nan-incomplete-3 = -na
--- /dev/null
+nan_underscore = na_n
--- /dev/null
+trailing-point-min = -1.
--- /dev/null
+trailing-point-plus = +1.
--- /dev/null
+trailing-point = 1.
--- /dev/null
+# trailing underscore in integer part is not allowed
+trailing-us-exp = 1_e2
+# trailing underscore in float part is not allowed
+trailing-us-exp2 = 1.2_e2
--- /dev/null
+trailing-us = 1.2_
--- /dev/null
+us-after-point = 1._2
--- /dev/null
+us-before-point = 1_.2
--- /dev/null
+a={}
+# Inline tables are immutable and can't be extended
+[a.b]
--- /dev/null
+t = {x=3,,y=4}
--- /dev/null
+# Duplicate keys within an inline table are invalid
+a={b=1, b=2}
--- /dev/null
+# No newlines are allowed between the curly braces unless they are valid within
+# a value.
+simple = { a = 1
+}
--- /dev/null
+t = {a=1,
+b=2}
--- /dev/null
+t = {a=1
+,b=2}
--- /dev/null
+json_like = {
+ first = "Tom",
+ last = "Preston-Werner"
+}
--- /dev/null
+t = {x = 3 y = 4}
--- /dev/null
+a.b=0
+# Since table "a" is already defined, it can't be replaced by an inline table.
+a={}
--- /dev/null
+# A terminating comma (also called trailing comma) is not permitted after the
+# last key/value pair in an inline table
+abc = { abc = 123, }
--- /dev/null
+capital-bin = 0B0
--- /dev/null
+capital-hex = 0X1
--- /dev/null
+capital-oct = 0O0
--- /dev/null
+double-sign-nex = --99
--- /dev/null
+double-sign-plus = ++99
--- /dev/null
+double-us = 1__23
--- /dev/null
+incomplete-bin = 0b
--- /dev/null
+incomplete-hex = 0x
--- /dev/null
+incomplete-oct = 0o
--- /dev/null
+leading-zero-1 = 01
+leading-zero-2 = 00
+leading-zero-3 = 0_0
+leading-zero-sign-1 = -01
+leading-zero-sign-2 = +01
+leading-zero-sign-3 = +0_1
+
+double-sign-plus = ++99
+double-sign-nex = --99
+
+negative-hex = -0xff
+negative-bin = -0b11010110
+negative-oct = -0o99
+
+positive-hex = +0xff
+positive-bin = +0b11010110
+positive-oct = +0o99
+
+trailing-us = 123_
+leading-us = _123
+double-us = 1__23
+
+us-after-hex = 0x_1
+us-after-oct = 0o_1
+us-after-bin = 0b_1
+
+trailing-us-hex = 0x1_
+trailing-us-oct = 0o1_
+trailing-us-bin = 0b1_
+
+leading-us-hex = _0o1
+leading-us-oct = _0o1
+leading-us-bin = _0o1
+
+invalid-hex = 0xaafz
+invalid-oct = 0o778
+invalid-bin = 0b0012
+
+capital-hex = 0X1
+capital-oct = 0O0
+capital-bin = 0B0
--- /dev/null
+invalid-bin = 0b0012
--- /dev/null
+invalid-hex = 0xaafz
--- /dev/null
+invalid-oct = 0o778
--- /dev/null
+leading-us-bin = _0o1
--- /dev/null
+leading-us-hex = _0o1
--- /dev/null
+leading-us-oct = _0o1
--- /dev/null
+leading-us = _123
--- /dev/null
+leading-zero-1 = 01
--- /dev/null
+leading-zero-2 = 00
--- /dev/null
+leading-zero-3 = 0_0
--- /dev/null
+leading-zero-sign-1 = -01
--- /dev/null
+leading-zero-sign-2 = +01
--- /dev/null
+leading-zero-sign-3 = +0_1
--- /dev/null
+negative-bin = -0b11010110
--- /dev/null
+negative-hex = -0xff
--- /dev/null
+negative-oct = -0o99
--- /dev/null
+positive-bin = +0b11010110
--- /dev/null
+positive-hex = +0xff
--- /dev/null
+positive-oct = +0o99
--- /dev/null
+answer = 42 the ultimate answer?
--- /dev/null
+trailing-us-bin = 0b1_
--- /dev/null
+trailing-us-hex = 0x1_
--- /dev/null
+trailing-us-oct = 0o1_
--- /dev/null
+trailing-us = 123_
--- /dev/null
+us-after-bin = 0b_1
--- /dev/null
+us-after-hex = 0x_1
--- /dev/null
+us-after-oct = 0o_1
--- /dev/null
+[[agencies]] owner = "S Cjelli"
--- /dev/null
+[error] this = "should not be here"
--- /dev/null
+first = "Tom" last = "Preston-Werner" # INVALID
--- /dev/null
+bare!key = 123
--- /dev/null
+# Defined a.b as int
+a.b = 1
+# Tries to access it as table: error
+a.b.c = 2
--- /dev/null
+dupe = false
+dupe = true
--- /dev/null
+# DO NOT DO THIS
+name = "Tom"
+name = "Pradyun"
--- /dev/null
+\u00c0 = "latin capital letter A with grave"
--- /dev/null
+"""long
+key""" = 1
--- /dev/null
+barekey
+ = 123
--- /dev/null
+a = 1 b = 2
--- /dev/null
+partial"quoted" = 5
--- /dev/null
+μ = "greek small letter mu"
--- /dev/null
+[a]
+[xyz = 5
+[b]
--- /dev/null
+[product]
+type = { name = "Nail" }
+type.edible = false # INVALID
--- /dev/null
+[product]
+type.name = "Nail"
+type = { edible = false } # INVALID
--- /dev/null
+key = # INVALID
--- /dev/null
+= "no key name" # INVALID
+"" = "blank" # VALID but discouraged
+'' = 'blank' # VALID but discouraged
--- /dev/null
+str4 = """Here are two quotation marks: "". Simple enough."""
+str5 = """Here are three quotation marks: """.""" # INVALID
+str5 = """Here are three quotation marks: ""\"."""
+str6 = """Here are fifteen quotation marks: ""\"""\"""\"""\"""\"."""
+
+# "This," she said, "is just a pointless statement."
+str7 = """"This," she said, "is just a pointless statement.""""
--- /dev/null
+quot15 = '''Here are fifteen quotation marks: """""""""""""""'''
+
+apos15 = '''Here are fifteen apostrophes: '''''''''''''''''' # INVALID
+apos15 = "Here are fifteen apostrophes: '''''''''''''''"
+
+# 'That,' she said, 'is still pointless.'
+str = ''''That,' she said, 'is still pointless.''''
--- /dev/null
+[fruit]
+apple.color = "red"
+apple.taste.sweet = true
+
+[fruit.apple] # INVALID
+# [fruit.apple.taste] # INVALID
+
+[fruit.apple.texture] # you can add sub-tables
+smooth = true
--- /dev/null
+[fruit]
+apple.color = "red"
+apple.taste.sweet = true
+
+# [fruit.apple] # INVALID
+[fruit.apple.taste] # INVALID
+
+[fruit.apple.texture] # you can add sub-tables
+smooth = true
--- /dev/null
+naughty = "\xAg"
--- /dev/null
+invalid-codepoint = "This string contains a non scalar unicode codepoint \uD801"
--- /dev/null
+no_concat = "first" "second"
--- /dev/null
+invalid-escape = "This string has a bad \a escape character."
--- /dev/null
+invalid-escape = "This string has a bad \ escape character."
+
--- /dev/null
+bad-hex-esc-1 = "\x0g"
--- /dev/null
+bad-hex-esc-2 = "\xG0"
--- /dev/null
+bad-hex-esc-3 = "\x"
--- /dev/null
+bad-hex-esc-4 = "\x 50"
--- /dev/null
+bad-hex-esc-5 = "\x 50"
--- /dev/null
+bad-hex-esc-1 = "\x0g"
+bad-hex-esc-2 = "\xG0"
+bad-hex-esc-3 = "\x"
+bad-hex-esc-4 = "\x 50"
--- /dev/null
+multi = "first line
+second line"
--- /dev/null
+invalid-escape = "This string has a bad \/ escape character."
--- /dev/null
+str = "val\ue"
--- /dev/null
+str = "val\Ux"
--- /dev/null
+str = "val\U0000000"
+
--- /dev/null
+str = "val\U0000"
--- /dev/null
+str = "val\Ugggggggg"
--- /dev/null
+answer = "\x33"
--- /dev/null
+a = """\UFFFFFFFF"""
--- /dev/null
+a = """\U00D80000"""
--- /dev/null
+str5 = """Here are three quotation marks: """."""
--- /dev/null
+a = """\@"""
--- /dev/null
+a = "\UFFFFFFFF"
--- /dev/null
+a = "\U00D80000"
--- /dev/null
+a = '''6 apostrophes: ''''''
+
--- /dev/null
+a = '''15 apostrophes: ''''''''''''''''''
--- /dev/null
+name = value
--- /dev/null
+k = """t\a"""
+
--- /dev/null
+# \<Space> is not a valid escape.
+k = """t\ t"""
--- /dev/null
+# \<Space> is not a valid escape.
+k = """t\ """
+
--- /dev/null
+a = """
+ foo \ \n
+ bar"""
--- /dev/null
+invalid = """
+ this will fail
--- /dev/null
+a = """6 quotes: """"""
--- /dev/null
+no-ending-quote = "One time, at band camp
--- /dev/null
+string = "Is there life after strings?" No.
--- /dev/null
+bad-ending-quote = "double and single'
--- /dev/null
+# First a.b.c defines a table: a.b.c = {z=9}
+#
+# Then we define a.b.c.t = "str" to add a str to the above table, making it:
+#
+# a.b.c = {z=9, t="..."}
+#
+# While this makes sense, logically, it was decided this is not valid TOML as
+# it's too confusing/convoluted.
+#
+# See: https://github.com/toml-lang/toml/issues/846
+# https://github.com/toml-lang/toml/pull/859
+
+[a.b.c]
+ z = 9
+
+[a]
+ b.c.t = "Using dotted keys to add to [a.b.c] after explicitly defining it above is not allowed"
--- /dev/null
+# This is the same issue as in injection-1.toml, except that nests one level
+# deeper. See that file for a more complete description.
+
+[a.b.c.d]
+ z = 9
+
+[a]
+ b.c.d.k.t = "Using dotted keys to add to [a.b.c.d] after explicitly defining it above is not allowed"
--- /dev/null
+[[]]
+name = "Born to Run"
--- /dev/null
+# This test is a bit tricky. It should fail because the first use of
+# `[[albums.songs]]` without first declaring `albums` implies that `albums`
+# must be a table. The alternative would be quite weird. Namely, it wouldn't
+# comply with the TOML spec: "Each double-bracketed sub-table will belong to
+# the most *recently* defined table element *above* it."
+#
+# This is in contrast to the *valid* test, table-array-implicit where
+# `[[albums.songs]]` works by itself, so long as `[[albums]]` isn't declared
+# later. (Although, `[albums]` could be.)
+[[albums.songs]]
+name = "Glory Days"
+
+[[albums]]
+name = "Born in the USA"
--- /dev/null
+[[albums]
+name = "Born to Run"
--- /dev/null
+[fruit]
+apple.color = "red"
+
+[fruit.apple] # INVALID
--- /dev/null
+[fruit]
+apple.taste.sweet = true
+
+[fruit.apple.taste] # INVALID
--- /dev/null
+[fruit]
+type = "apple"
+
+[fruit.type]
+apple = "yes"
--- /dev/null
+[tbl]
+[[tbl]]
--- /dev/null
+[[tbl]]
+[tbl]
--- /dev/null
+[a]
+b = 1
+
+[a]
+c = 2
--- /dev/null
+[naughty..naughty]
--- /dev/null
+[name=bad]
--- /dev/null
+[ [table]]
--- /dev/null
+[a]b]
+zyx = 42
--- /dev/null
+[a[b]
+zyx = 42
--- /dev/null
+["where will it end]
+name = value
--- /dev/null
+# Define b as int, and try to use it as a table: error
+[a]
+b = 1
+
+[a.b]
+c = 2
--- /dev/null
+[[table] ]
--- /dev/null
+[error] this shouldn't be here
--- /dev/null
+[invalid key]
--- /dev/null
+[key#group]
+answer = 42
--- /dev/null
+{
+ "comments": [
+ {
+ "type": "integer",
+ "value": "1"
+ },
+ {
+ "type": "integer",
+ "value": "2"
+ }
+ ],
+ "dates": [
+ {
+ "type": "datetime",
+ "value": "1987-07-05T17:45:00Z"
+ },
+ {
+ "type": "datetime",
+ "value": "1979-05-27T07:32:00Z"
+ },
+ {
+ "type": "datetime",
+ "value": "2006-06-01T11:00:00Z"
+ }
+ ],
+ "floats": [
+ {
+ "type": "float",
+ "value": "1.1"
+ },
+ {
+ "type": "float",
+ "value": "2.1"
+ },
+ {
+ "type": "float",
+ "value": "3.1"
+ }
+ ],
+ "ints": [
+ {
+ "type": "integer",
+ "value": "1"
+ },
+ {
+ "type": "integer",
+ "value": "2"
+ },
+ {
+ "type": "integer",
+ "value": "3"
+ }
+ ],
+ "strings": [
+ {
+ "type": "string",
+ "value": "a"
+ },
+ {
+ "type": "string",
+ "value": "b"
+ },
+ {
+ "type": "string",
+ "value": "c"
+ }
+ ]
+}
--- /dev/null
+ints = [1, 2, 3, ]
+floats = [1.1, 2.1, 3.1]
+strings = ["a", "b", "c"]
+dates = [
+ 1987-07-05T17:45:00Z,
+ 1979-05-27T07:32:00Z,
+ 2006-06-01T11:00:00Z,
+]
+comments = [
+ 1,
+ 2, #this is ok
+]
--- /dev/null
+{
+ "a": [
+ {
+ "type": "bool",
+ "value": "true"
+ },
+ {
+ "type": "bool",
+ "value": "false"
+ }
+ ]
+}
--- /dev/null
+a = [true, false]
--- /dev/null
+{
+ "thevoid": [
+ [
+ [
+ [
+ []
+ ]
+ ]
+ ]
+ ]
+}
--- /dev/null
+thevoid = [[[[[]]]]]
--- /dev/null
+{
+ "mixed": [
+ [
+ {
+ "type": "integer",
+ "value": "1"
+ },
+ {
+ "type": "integer",
+ "value": "2"
+ }
+ ],
+ [
+ {
+ "type": "string",
+ "value": "a"
+ },
+ {
+ "type": "string",
+ "value": "b"
+ }
+ ],
+ [
+ {
+ "type": "float",
+ "value": "1.1"
+ },
+ {
+ "type": "float",
+ "value": "2.1"
+ }
+ ]
+ ]
+}
--- /dev/null
+mixed = [[1, 2], ["a", "b"], [1.1, 2.1]]
--- /dev/null
+{
+ "arrays-and-ints": [
+ {
+ "type": "integer",
+ "value": "1"
+ },
+ [
+ {
+ "type": "string",
+ "value": "Arrays are not integers."
+ }
+ ]
+ ]
+}
--- /dev/null
+arrays-and-ints = [1, ["Arrays are not integers."]]
--- /dev/null
+{
+ "ints-and-floats": [
+ {
+ "type": "integer",
+ "value": "1"
+ },
+ {
+ "type": "float",
+ "value": "1.1"
+ }
+ ]
+}
--- /dev/null
+ints-and-floats = [1, 1.1]
--- /dev/null
+{
+ "strings-and-ints": [
+ {
+ "type": "string",
+ "value": "hi"
+ },
+ {
+ "type": "integer",
+ "value": "42"
+ }
+ ]
+}
--- /dev/null
+strings-and-ints = ["hi", 42]
--- /dev/null
+{
+ "contributors": [
+ {
+ "type": "string",
+ "value": "Foo Bar \u003cfoo@example.com\u003e"
+ },
+ {
+ "email": {
+ "type": "string",
+ "value": "bazqux@example.com"
+ },
+ "name": {
+ "type": "string",
+ "value": "Baz Qux"
+ },
+ "url": {
+ "type": "string",
+ "value": "https://example.com/bazqux"
+ }
+ }
+ ],
+ "mixed": [
+ {
+ "k": {
+ "type": "string",
+ "value": "a"
+ }
+ },
+ {
+ "type": "string",
+ "value": "b"
+ },
+ {
+ "type": "integer",
+ "value": "1"
+ }
+ ]
+}
--- /dev/null
+contributors = [
+ "Foo Bar <foo@example.com>",
+ { name = "Baz Qux", email = "bazqux@example.com", url = "https://example.com/bazqux" }
+]
+
+# Start with a table as the first element. This tests a case that some libraries
+# might have where they will check if the first entry is a table/map/hash/assoc
+# array and then encode it as a table array. This was a reasonable thing to do
+# before TOML 1.0 since arrays could only contain one type, but now it's no
+# longer.
+mixed = [{k="a"}, "b", 1]
--- /dev/null
+{
+ "nest": [
+ [
+ [
+ {
+ "type": "string",
+ "value": "a"
+ }
+ ],
+ [
+ {
+ "type": "integer",
+ "value": "1"
+ },
+ {
+ "type": "integer",
+ "value": "2"
+ },
+ [
+ {
+ "type": "integer",
+ "value": "3"
+ }
+ ]
+ ]
+ ]
+ ]
+}
--- /dev/null
+nest = [
+ [
+ ["a"],
+ [1, 2, [3]]
+ ]
+]
--- /dev/null
+{
+ "a": [
+ {
+ "b": {}
+ }
+ ]
+}
--- /dev/null
+a = [ { b = {} } ]
--- /dev/null
+{
+ "nest": [
+ [
+ {
+ "type": "string",
+ "value": "a"
+ }
+ ],
+ [
+ {
+ "type": "string",
+ "value": "b"
+ }
+ ]
+ ]
+}
--- /dev/null
+nest = [["a"], ["b"]]
--- /dev/null
+{
+ "ints": [
+ {
+ "type": "integer",
+ "value": "1"
+ },
+ {
+ "type": "integer",
+ "value": "2"
+ },
+ {
+ "type": "integer",
+ "value": "3"
+ }
+ ]
+}
--- /dev/null
+ints = [1,2,3]
--- /dev/null
+{
+ "title": [
+ {
+ "type": "string",
+ "value": " \", "
+ }
+ ]
+}
--- /dev/null
+title = [ " \", ",]
--- /dev/null
+{
+ "title": [
+ {
+ "type": "string",
+ "value": "Client: \"XXXX\", Job: XXXX"
+ },
+ {
+ "type": "string",
+ "value": "Code: XXXX"
+ }
+ ]
+}
--- /dev/null
+title = [
+"Client: \"XXXX\", Job: XXXX",
+"Code: XXXX"
+]
--- /dev/null
+{
+ "title": [
+ {
+ "type": "string",
+ "value": "Client: XXXX, Job: XXXX"
+ },
+ {
+ "type": "string",
+ "value": "Code: XXXX"
+ }
+ ]
+}
--- /dev/null
+title = [
+"Client: XXXX, Job: XXXX",
+"Code: XXXX"
+]
--- /dev/null
+{
+ "string_array": [
+ {
+ "type": "string",
+ "value": "all"
+ },
+ {
+ "type": "string",
+ "value": "strings"
+ },
+ {
+ "type": "string",
+ "value": "are the same"
+ },
+ {
+ "type": "string",
+ "value": "type"
+ }
+ ]
+}
--- /dev/null
+string_array = [ "all", 'strings', """are the same""", '''type''']
--- /dev/null
+{
+ "foo": [
+ {
+ "bar": {
+ "type": "string",
+ "value": "\"{{baz}}\""
+ }
+ }
+ ]
+}
--- /dev/null
+foo = [ { bar="\"{{baz}}\""} ]
--- /dev/null
+{
+ "f": {
+ "type": "bool",
+ "value": "false"
+ },
+ "t": {
+ "type": "bool",
+ "value": "true"
+ }
+}
--- /dev/null
+t = true
+f = false
--- /dev/null
+{
+ "key": {
+ "type": "string",
+ "value": "value"
+ }
+}
--- /dev/null
+# This is a full-line comment
+key = "value" # This is a comment at the end of a line
--- /dev/null
+{
+ "key": {
+ "type": "string",
+ "value": "value"
+ }
+}
--- /dev/null
+# This is a full-line comment
+key = "value" # This is a comment at the end of a line
--- /dev/null
+{
+ "group": {
+ "answer": {
+ "type": "integer",
+ "value": "42"
+ },
+ "dt": {
+ "type": "datetime",
+ "value": "1979-05-27T07:32:12-07:00"
+ },
+ "d": {
+ "type": "date-local",
+ "value": "1979-05-27"
+ },
+ "more": [
+ {
+ "type": "integer",
+ "value": "42"
+ },
+ {
+ "type": "integer",
+ "value": "42"
+ }
+ ]
+ }
+}
--- /dev/null
+# Top comment.
+ # Top comment.
+# Top comment.
+
+# [no-extraneous-groups-please]
+
+[group] # Comment
+answer = 42 # Comment
+# no-extraneous-keys-please = 999
+# Inbetween comment.
+more = [ # Comment
+ # What about multiple # comments?
+ # Can you handle it?
+ #
+ # Evil.
+# Evil.
+ 42, 42, # Comments within arrays are fun.
+ # What about multiple # comments?
+ # Can you handle it?
+ #
+ # Evil.
+# Evil.
+# ] Did I fool you?
+] # Hopefully not.
+
+# Make sure the space between the datetime and "#" isn't lexed.
+dt = 1979-05-27T07:32:12-07:00 # c
+d = 1979-05-27 # Comment
--- /dev/null
+# single comment without any eol characters
\ No newline at end of file
--- /dev/null
+{
+ "hash#tag": {
+ "#!": {
+ "type": "string",
+ "value": "hash bang"
+ },
+ "arr3": [
+ {
+ "type": "string",
+ "value": "#"
+ },
+ {
+ "type": "string",
+ "value": "#"
+ },
+ {
+ "type": "string",
+ "value": "###"
+ }
+ ],
+ "arr4": [
+ {
+ "type": "integer",
+ "value": "1"
+ },
+ {
+ "type": "integer",
+ "value": "2"
+ },
+ {
+ "type": "integer",
+ "value": "3"
+ },
+ {
+ "type": "integer",
+ "value": "4"
+ }
+ ],
+ "arr5": [
+ [
+ [
+ [
+ [
+ {
+ "type": "string",
+ "value": "#"
+ }
+ ]
+ ]
+ ]
+ ]
+ ],
+ "tbl1": {
+ "#": {
+ "type": "string",
+ "value": "}#"
+ }
+ }
+ },
+ "section": {
+ "8": {
+ "type": "string",
+ "value": "eight"
+ },
+ "eleven": {
+ "type": "float",
+ "value": "11.1"
+ },
+ "five": {
+ "type": "float",
+ "value": "5.5"
+ },
+ "four": {
+ "type": "string",
+ "value": "# no comment\n# nor this\n#also not comment"
+ },
+ "one": {
+ "type": "string",
+ "value": "11"
+ },
+ "six": {
+ "type": "integer",
+ "value": "6"
+ },
+ "ten": {
+ "type": "float",
+ "value": "1000.0"
+ },
+ "three": {
+ "type": "string",
+ "value": "#"
+ },
+ "two": {
+ "type": "string",
+ "value": "22#"
+ }
+ }
+}
--- /dev/null
+[section]#attached comment
+#[notsection]
+one = "11"#cmt
+two = "22#"
+three = '#'
+
+four = """# no comment
+# nor this
+#also not comment"""#is_comment
+
+five = 5.5#66
+six = 6#7
+8 = "eight"
+#nine = 99
+ten = 10e2#1
+eleven = 1.11e1#23
+
+["hash#tag"]
+"#!" = "hash bang"
+arr3 = [ "#", '#', """###""" ]
+arr4 = [ 1,# 9, 9,
+2#,9
+,#9
+3#]
+,4]
+arr5 = [[[[#["#"],
+["#"]]]]#]
+]
+tbl1 = { "#" = '}#'}#}}
+
+
--- /dev/null
+{
+ "lower": {
+ "type": "datetime",
+ "value": "1987-07-05T17:45:00Z"
+ },
+ "space": {
+ "type": "datetime",
+ "value": "1987-07-05T17:45:00Z"
+ }
+}
--- /dev/null
+space = 1987-07-05 17:45:00Z
+lower = 1987-07-05t17:45:00z
--- /dev/null
+{
+ "bestdayever": {
+ "type": "date-local",
+ "value": "1987-07-05"
+ }
+}
--- /dev/null
+bestdayever = 1987-07-05
--- /dev/null
+{
+ "besttimeever": {
+ "type": "time-local",
+ "value": "17:45:00"
+ },
+ "milliseconds": {
+ "type": "time-local",
+ "value": "10:32:00.555"
+ }
+}
--- /dev/null
+besttimeever = 17:45:00
+milliseconds = 10:32:00.555
--- /dev/null
+{
+ "local": {
+ "type": "datetime-local",
+ "value": "1987-07-05T17:45:00"
+ },
+ "milli": {
+ "type": "datetime-local",
+ "value": "1977-12-21T10:32:00.555"
+ },
+ "space": {
+ "type": "datetime-local",
+ "value": "1987-07-05T17:45:00"
+ }
+}
--- /dev/null
+local = 1987-07-05T17:45:00
+milli = 1977-12-21T10:32:00.555
+space = 1987-07-05 17:45:00
--- /dev/null
+{
+ "utc1": {
+ "type": "datetime",
+ "value": "1987-07-05T17:45:56.1234Z"
+ },
+ "utc2": {
+ "type": "datetime",
+ "value": "1987-07-05T17:45:56.6000Z"
+ },
+ "wita1": {
+ "type": "datetime",
+ "value": "1987-07-05T17:45:56.1234+08:00"
+ },
+ "wita2": {
+ "type": "datetime",
+ "value": "1987-07-05T17:45:56.6000+08:00"
+ }
+}
--- /dev/null
+utc1 = 1987-07-05T17:45:56.1234Z
+utc2 = 1987-07-05T17:45:56.6Z
+wita1 = 1987-07-05T17:45:56.1234+08:00
+wita2 = 1987-07-05T17:45:56.6+08:00
--- /dev/null
+{
+ "without-seconds-1": {
+ "type": "time-local",
+ "value": "13:37:00"
+ },
+ "without-seconds-2": {
+ "type": "datetime",
+ "value": "1979-05-27T07:32:00Z"
+ },
+ "without-seconds-3": {
+ "type": "datetime",
+ "value": "1979-05-27T07:32:00-07:00"
+ },
+ "without-seconds-4": {
+ "type": "datetime-local",
+ "value": "1979-05-27T07:32:00"
+ }
+}
--- /dev/null
+# Seconds are optional in date-time and time.
+without-seconds-1 = 13:37
+without-seconds-2 = 1979-05-27 07:32Z
+without-seconds-3 = 1979-05-27 07:32-07:00
+without-seconds-4 = 1979-05-27T07:32
--- /dev/null
+{
+ "nzdt": {
+ "type": "datetime",
+ "value": "1987-07-05T17:45:56+13:00"
+ },
+ "nzst": {
+ "type": "datetime",
+ "value": "1987-07-05T17:45:56+12:00"
+ },
+ "pdt": {
+ "type": "datetime",
+ "value": "1987-07-05T17:45:56-05:00"
+ },
+ "utc": {
+ "type": "datetime",
+ "value": "1987-07-05T17:45:56Z"
+ }
+}
--- /dev/null
+utc = 1987-07-05T17:45:56Z
+pdt = 1987-07-05T17:45:56-05:00
+nzst = 1987-07-05T17:45:56+12:00
+nzdt = 1987-07-05T17:45:56+13:00 # DST
--- /dev/null
+{
+ "best-day-ever": {
+ "type": "datetime",
+ "value": "1987-07-05T17:45:00Z"
+ },
+ "numtheory": {
+ "boring": {
+ "type": "bool",
+ "value": "false"
+ },
+ "perfection": [
+ {
+ "type": "integer",
+ "value": "6"
+ },
+ {
+ "type": "integer",
+ "value": "28"
+ },
+ {
+ "type": "integer",
+ "value": "496"
+ }
+ ]
+ }
+}
--- /dev/null
+best-day-ever = 1987-07-05T17:45:00Z
+
+[numtheory]
+boring = false
+perfection = [6, 28, 496]
--- /dev/null
+{
+ "lower": {
+ "type": "float",
+ "value": "300.0"
+ },
+ "minustenth": {
+ "type": "float",
+ "value": "-0.1"
+ },
+ "neg": {
+ "type": "float",
+ "value": "0.03"
+ },
+ "pointlower": {
+ "type": "float",
+ "value": "310.0"
+ },
+ "pointupper": {
+ "type": "float",
+ "value": "310.0"
+ },
+ "pos": {
+ "type": "float",
+ "value": "300.0"
+ },
+ "upper": {
+ "type": "float",
+ "value": "300.0"
+ },
+ "zero": {
+ "type": "float",
+ "value": "3.0"
+ }
+}
--- /dev/null
+lower = 3e2
+upper = 3E2
+neg = 3e-2
+pos = 3E+2
+zero = 3e0
+pointlower = 3.1e2
+pointupper = 3.1E2
+minustenth = -1E-1
--- /dev/null
+{
+ "negpi": {
+ "type": "float",
+ "value": "-3.14"
+ },
+ "pi": {
+ "type": "float",
+ "value": "3.14"
+ },
+ "pospi": {
+ "type": "float",
+ "value": "3.14"
+ },
+ "zero-intpart": {
+ "type": "float",
+ "value": "0.123"
+ }
+}
--- /dev/null
+pi = 3.14
+pospi = +3.14
+negpi = -3.14
+zero-intpart = 0.123
--- /dev/null
+{
+ "infinity": {
+ "type": "float",
+ "value": "inf"
+ },
+ "infinity_neg": {
+ "type": "float",
+ "value": "-inf"
+ },
+ "infinity_plus": {
+ "type": "float",
+ "value": "+inf"
+ },
+ "nan": {
+ "type": "float",
+ "value": "nan"
+ },
+ "nan_neg": {
+ "type": "float",
+ "value": "nan"
+ },
+ "nan_plus": {
+ "type": "float",
+ "value": "nan"
+ }
+}
--- /dev/null
+# We don't encode +nan and -nan back with the signs; many languages don't
+# support a sign on NaN (it doesn't really make much sense).
+nan = nan
+nan_neg = -nan
+nan_plus = +nan
+infinity = inf
+infinity_neg = -inf
+infinity_plus = +inf
--- /dev/null
+{
+ "longpi": {
+ "type": "float",
+ "value": "3.141592653589793"
+ },
+ "neglongpi": {
+ "type": "float",
+ "value": "-3.141592653589793"
+ }
+}
--- /dev/null
+longpi = 3.141592653589793
+neglongpi = -3.141592653589793
--- /dev/null
+{
+ "after": {
+ "type": "float",
+ "value": "3141.5927"
+ },
+ "before": {
+ "type": "float",
+ "value": "3141.5927"
+ },
+ "exponent": {
+ "type": "float",
+ "value": "3.0e14"
+ }
+}
--- /dev/null
+before = 3_141.5927
+after = 3141.592_7
+exponent = 3e1_4
--- /dev/null
+{
+ "zero": {
+ "type": "float",
+ "value": "0"
+ },
+ "signed-pos": {
+ "type": "float",
+ "value": "0"
+ },
+ "signed-neg": {
+ "type": "float",
+ "value": "0"
+ },
+ "exponent": {
+ "type": "float",
+ "value": "0"
+ },
+ "exponent-two-0": {
+ "type": "float",
+ "value": "0"
+ },
+ "exponent-signed-pos": {
+ "type": "float",
+ "value": "0"
+ },
+ "exponent-signed-neg": {
+ "type": "float",
+ "value": "0"
+ }
+}
--- /dev/null
+zero = 0.0
+signed-pos = +0.0
+signed-neg = -0.0
+exponent = 0e0
+exponent-two-0 = 0e00
+exponent-signed-pos = +0e0
+exponent-signed-neg = -0e0
--- /dev/null
+{
+ "a": {
+ "b": {
+ "c": {
+ "answer": {
+ "type": "integer",
+ "value": "42"
+ }
+ }
+ },
+ "better": {
+ "type": "integer",
+ "value": "43"
+ }
+ }
+}
--- /dev/null
+[a.b.c]
+answer = 42
+
+[a]
+better = 43
--- /dev/null
+{
+ "a": {
+ "b": {
+ "c": {
+ "answer": {
+ "type": "integer",
+ "value": "42"
+ }
+ }
+ },
+ "better": {
+ "type": "integer",
+ "value": "43"
+ }
+ }
+}
--- /dev/null
+[a]
+better = 43
+
+[a.b.c]
+answer = 42
--- /dev/null
+{
+ "a": {
+ "b": {
+ "c": {
+ "answer": {
+ "type": "integer",
+ "value": "42"
+ }
+ }
+ }
+ }
+}
--- /dev/null
+[a.b.c]
+answer = 42
--- /dev/null
+{
+ "people": [
+ {
+ "first_name": {
+ "type": "string",
+ "value": "Bruce"
+ },
+ "last_name": {
+ "type": "string",
+ "value": "Springsteen"
+ }
+ },
+ {
+ "first_name": {
+ "type": "string",
+ "value": "Eric"
+ },
+ "last_name": {
+ "type": "string",
+ "value": "Clapton"
+ }
+ },
+ {
+ "first_name": {
+ "type": "string",
+ "value": "Bob"
+ },
+ "last_name": {
+ "type": "string",
+ "value": "Seger"
+ }
+ }
+ ]
+}
--- /dev/null
+people = [{first_name = "Bruce", last_name = "Springsteen"},
+ {first_name = "Eric", last_name = "Clapton"},
+ {first_name = "Bob", last_name = "Seger"}]
--- /dev/null
+{
+ "a": {
+ "a": {
+ "type": "bool",
+ "value": "true"
+ },
+ "b": {
+ "type": "bool",
+ "value": "false"
+ }
+ }
+}
--- /dev/null
+a = {a = true, b = false}
--- /dev/null
+{
+ "empty1": {},
+ "empty2": {},
+ "empty_in_array": [
+ {
+ "not_empty": {
+ "type": "integer",
+ "value": "1"
+ }
+ },
+ {}
+ ],
+ "empty_in_array2": [
+ {},
+ {
+ "not_empty": {
+ "type": "integer",
+ "value": "1"
+ }
+ }
+ ],
+ "many_empty": [
+ {},
+ {},
+ {}
+ ],
+ "nested_empty": {
+ "empty": {}
+ }
+}
--- /dev/null
+empty1 = {}
+empty2 = { }
+empty_in_array = [ { not_empty = 1 }, {} ]
+empty_in_array2 = [{},{not_empty=1}]
+many_empty = [{},{},{}]
+nested_empty = {"empty"={}}
--- /dev/null
+{
+ "black": {
+ "allow_prereleases": {
+ "type": "bool",
+ "value": "true"
+ },
+ "python": {
+ "type": "string",
+ "value": "\u003e3.6"
+ },
+ "version": {
+ "type": "string",
+ "value": "\u003e=18.9b0"
+ }
+ }
+}
--- /dev/null
+black = { python=">3.6", version=">=18.9b0", allow_prereleases=true }
--- /dev/null
+{
+ "name": {
+ "first": {
+ "type": "string",
+ "value": "Tom"
+ },
+ "last": {
+ "type": "string",
+ "value": "Preston-Werner"
+ }
+ },
+ "point": {
+ "x": {
+ "type": "integer",
+ "value": "1"
+ },
+ "y": {
+ "type": "integer",
+ "value": "2"
+ }
+ },
+ "simple": {
+ "a": {
+ "type": "integer",
+ "value": "1"
+ }
+ },
+ "str-key": {
+ "a": {
+ "type": "integer",
+ "value": "1"
+ }
+ },
+ "table-array": [
+ {
+ "a": {
+ "type": "integer",
+ "value": "1"
+ }
+ },
+ {
+ "b": {
+ "type": "integer",
+ "value": "2"
+ }
+ }
+ ]
+}
--- /dev/null
+name = { first = "Tom", last = "Preston-Werner" }
+point = { x = 1, y = 2 }
+simple = { a = 1 }
+str-key = { "a" = 1 }
+table-array = [{ "a" = 1 }, { "b" = 2 }]
--- /dev/null
+{
+ "a": {
+ "a": {
+ "b": {
+ "type": "integer",
+ "value": "1"
+ }
+ }
+ },
+ "arr": [
+ {
+ "T": {
+ "a": {
+ "b": {
+ "type": "integer",
+ "value": "1"
+ }
+ }
+ },
+ "t": {
+ "a": {
+ "b": {
+ "type": "integer",
+ "value": "1"
+ }
+ }
+ }
+ },
+ {
+ "T": {
+ "a": {
+ "b": {
+ "type": "integer",
+ "value": "2"
+ }
+ }
+ },
+ "t": {
+ "a": {
+ "b": {
+ "type": "integer",
+ "value": "2"
+ }
+ }
+ }
+ }
+ ],
+ "b": {
+ "a": {
+ "b": {
+ "type": "integer",
+ "value": "1"
+ }
+ }
+ },
+ "c": {
+ "a": {
+ "b": {
+ "type": "integer",
+ "value": "1"
+ }
+ }
+ },
+ "d": {
+ "a": {
+ "b": {
+ "type": "integer",
+ "value": "1"
+ }
+ }
+ },
+ "e": {
+ "a": {
+ "b": {
+ "type": "integer",
+ "value": "1"
+ }
+ }
+ },
+ "inline": {
+ "a": {
+ "b": {
+ "type": "integer",
+ "value": "42"
+ }
+ }
+ },
+ "many": {
+ "dots": {
+ "here": {
+ "dot": {
+ "dot": {
+ "dot": {
+ "a": {
+ "b": {
+ "c": {
+ "type": "integer",
+ "value": "1"
+ },
+ "d": {
+ "type": "integer",
+ "value": "2"
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ },
+ "tbl": {
+ "a": {
+ "b": {
+ "c": {
+ "d": {
+ "e": {
+ "type": "integer",
+ "value": "1"
+ }
+ }
+ }
+ }
+ },
+ "x": {
+ "a": {
+ "b": {
+ "c": {
+ "d": {
+ "e": {
+ "type": "integer",
+ "value": "1"
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+}
--- /dev/null
+inline = {a.b = 42}
+
+many.dots.here.dot.dot.dot = {a.b.c = 1, a.b.d = 2}
+
+a = { a.b = 1 }
+b = { "a"."b" = 1 }
+c = { a . b = 1 }
+d = { 'a' . "b" = 1 }
+e = {a.b=1}
+
+[tbl]
+a.b.c = {d.e=1}
+
+[tbl.x]
+a.b.c = {d.e=1}
+
+[[arr]]
+t = {a.b=1}
+T = {a.b=1}
+
+[[arr]]
+t = {a.b=2}
+T = {a.b=2}
--- /dev/null
+{
+ "tbl_multiline": {
+ "a": {
+ "type": "integer",
+ "value": "1"
+ },
+ "b": {
+ "type": "string",
+ "value": "multiline\n"
+ },
+ "c": {
+ "type": "string",
+ "value": "and yet\nanother line"
+ },
+ "d": {
+ "type": "integer",
+ "value": "4"
+ }
+ }
+}
--- /dev/null
+tbl_multiline = { a = 1, b = """
+multiline
+""", c = """and yet
+another line""", d = 4 }
--- /dev/null
+{
+ "arr_arr_tbl_empty": [
+ [
+ {}
+ ]
+ ],
+ "arr_arr_tbl_val": [
+ [
+ {
+ "one": {
+ "type": "integer",
+ "value": "1"
+ }
+ }
+ ]
+ ],
+ "arr_arr_tbls": [
+ [
+ {
+ "one": {
+ "type": "integer",
+ "value": "1"
+ }
+ },
+ {
+ "two": {
+ "type": "integer",
+ "value": "2"
+ }
+ }
+ ]
+ ],
+ "arr_tbl_tbl": [
+ {
+ "tbl": {
+ "one": {
+ "type": "integer",
+ "value": "1"
+ }
+ }
+ }
+ ],
+ "tbl_arr_tbl": {
+ "arr_tbl": [
+ {
+ "one": {
+ "type": "integer",
+ "value": "1"
+ }
+ }
+ ]
+ },
+ "tbl_tbl_empty": {
+ "tbl_0": {}
+ },
+ "tbl_tbl_val": {
+ "tbl_1": {
+ "one": {
+ "type": "integer",
+ "value": "1"
+ }
+ }
+ }
+}
--- /dev/null
+tbl_tbl_empty = { tbl_0 = {} }
+tbl_tbl_val = { tbl_1 = { one = 1 } }
+tbl_arr_tbl = { arr_tbl = [ { one = 1 } ] }
+arr_tbl_tbl = [ { tbl = { one = 1 } } ]
+
+# Array-of-array-of-table is interesting because it can only
+# be represented in inline form.
+arr_arr_tbl_empty = [ [ {} ] ]
+arr_arr_tbl_val = [ [ { one = 1 } ] ]
+arr_arr_tbls = [ [ { one = 1 }, { two = 2 } ] ]
--- /dev/null
+{
+ "tbl-1": {
+ "1": {
+ "type": "integer",
+ "value": "2"
+ },
+ "arr": [
+ {
+ "type": "integer",
+ "value": "1"
+ },
+ {
+ "type": "integer",
+ "value": "2"
+ },
+ {
+ "type": "integer",
+ "value": "3"
+ }
+ ],
+ "hello": {
+ "type": "string",
+ "value": "world"
+ },
+ "tbl": {
+ "k": {
+ "type": "integer",
+ "value": "1"
+ }
+ }
+ },
+ "tbl-2": {
+ "k": {
+ "type": "string",
+ "value": "\tHello\n\t"
+ }
+ },
+ "trailing-comma-1": {
+ "c": {
+ "type": "integer",
+ "value": "1"
+ }
+ },
+ "trailing-comma-2": {
+ "c": {
+ "type": "integer",
+ "value": "1"
+ }
+ }
+}
--- /dev/null
+# TOML 1.1 supports newlines in inline tables and trailing commas.
+
+trailing-comma-1 = {
+ c = 1,
+}
+trailing-comma-2 = { c = 1, }
+
+tbl-1 = {
+ hello = "world",
+ 1 = 2,
+ arr = [1,
+ 2,
+ 3,
+ ],
+ tbl = {
+ k = 1,
+ }
+}
+
+tbl-2 = {
+ k = """
+ Hello
+ """
+}
--- /dev/null
+{
+ "answer": {
+ "type": "integer",
+ "value": "42"
+ },
+ "neganswer": {
+ "type": "integer",
+ "value": "-42"
+ },
+ "posanswer": {
+ "type": "integer",
+ "value": "42"
+ },
+ "zero": {
+ "type": "integer",
+ "value": "0"
+ }
+}
--- /dev/null
+answer = 42
+posanswer = +42
+neganswer = -42
+zero = 0
--- /dev/null
+{
+ "bin1": {
+ "type": "integer",
+ "value": "214"
+ },
+ "bin2": {
+ "type": "integer",
+ "value": "5"
+ },
+ "hex1": {
+ "type": "integer",
+ "value": "3735928559"
+ },
+ "hex2": {
+ "type": "integer",
+ "value": "3735928559"
+ },
+ "hex3": {
+ "type": "integer",
+ "value": "3735928559"
+ },
+ "hex4": {
+ "type": "integer",
+ "value": "2439"
+ },
+ "oct1": {
+ "type": "integer",
+ "value": "342391"
+ },
+ "oct2": {
+ "type": "integer",
+ "value": "493"
+ },
+ "oct3": {
+ "type": "integer",
+ "value": "501"
+ }
+}
--- /dev/null
+bin1 = 0b11010110
+bin2 = 0b1_0_1
+
+oct1 = 0o01234567
+oct2 = 0o755
+oct3 = 0o7_6_5
+
+hex1 = 0xDEADBEEF
+hex2 = 0xdeadbeef
+hex3 = 0xdead_beef
+hex4 = 0x00987
--- /dev/null
+{
+ "int64-max": {
+ "type": "integer",
+ "value": "9223372036854775807"
+ },
+ "int64-max-neg": {
+ "type": "integer",
+ "value": "-9223372036854775808"
+ }
+}
--- /dev/null
+int64-max = 9223372036854775807
+int64-max-neg = -9223372036854775808
--- /dev/null
+{
+ "kilo": {
+ "type": "integer",
+ "value": "1000"
+ },
+ "x": {
+ "type": "integer",
+ "value": "1111"
+ }
+}
--- /dev/null
+kilo = 1_000
+x = 1_1_1_1
--- /dev/null
+{
+ "a2": {
+ "type": "integer",
+ "value": "0"
+ },
+ "a3": {
+ "type": "integer",
+ "value": "0"
+ },
+ "b1": {
+ "type": "integer",
+ "value": "0"
+ },
+ "b2": {
+ "type": "integer",
+ "value": "0"
+ },
+ "b3": {
+ "type": "integer",
+ "value": "0"
+ },
+ "d1": {
+ "type": "integer",
+ "value": "0"
+ },
+ "d2": {
+ "type": "integer",
+ "value": "0"
+ },
+ "d3": {
+ "type": "integer",
+ "value": "0"
+ },
+ "h1": {
+ "type": "integer",
+ "value": "0"
+ },
+ "h2": {
+ "type": "integer",
+ "value": "0"
+ },
+ "h3": {
+ "type": "integer",
+ "value": "0"
+ },
+ "o1": {
+ "type": "integer",
+ "value": "0"
+ }
+}
--- /dev/null
+d1 = 0
+d2 = +0
+d3 = -0
+
+h1 = 0x0
+h2 = 0x00
+h3 = 0x00000
+
+o1 = 0o0
+a2 = 0o00
+a3 = 0o00000
+
+b1 = 0b0
+b2 = 0b00
+b3 = 0b00000
--- /dev/null
+{
+ "000111": {
+ "type": "string",
+ "value": "leading"
+ },
+ "10e3": {
+ "type": "string",
+ "value": "false float"
+ },
+ "123": {
+ "type": "string",
+ "value": "num"
+ },
+ "2018_10": {
+ "001": {
+ "type": "integer",
+ "value": "1"
+ }
+ },
+ "34-11": {
+ "type": "integer",
+ "value": "23"
+ },
+ "a-a-a": {
+ "_": {
+ "type": "bool",
+ "value": "false"
+ }
+ },
+ "alpha": {
+ "type": "string",
+ "value": "a"
+ },
+ "one1two2": {
+ "type": "string",
+ "value": "mixed"
+ },
+ "under_score": {
+ "type": "string",
+ "value": "___"
+ },
+ "with-dash": {
+ "type": "string",
+ "value": "dashed"
+ }
+}
--- /dev/null
+alpha = "a"
+123 = "num"
+000111 = "leading"
+10e3 = "false float"
+one1two2 = "mixed"
+with-dash = "dashed"
+under_score = "___"
+34-11 = 23
+
+[2018_10]
+001 = 1
+
+[a-a-a]
+_ = false
--- /dev/null
+{
+ "Section": {
+ "M": {
+ "type": "string",
+ "value": "latin letter M"
+ },
+ "name": {
+ "type": "string",
+ "value": "different section!!"
+ },
+ "Μ": {
+ "type": "string",
+ "value": "greek capital letter MU"
+ },
+ "μ": {
+ "type": "string",
+ "value": "greek small letter mu"
+ }
+ },
+ "sectioN": {
+ "type": "string",
+ "value": "NN"
+ },
+ "section": {
+ "NAME": {
+ "type": "string",
+ "value": "upper"
+ },
+ "Name": {
+ "type": "string",
+ "value": "capitalized"
+ },
+ "name": {
+ "type": "string",
+ "value": "lower"
+ }
+ }
+}
--- /dev/null
+sectioN = "NN"
+
+[section]
+name = "lower"
+NAME = "upper"
+Name = "capitalized"
+
+[Section]
+name = "different section!!"
+"μ" = "greek small letter mu"
+"Μ" = "greek capital letter MU"
+M = "latin letter M"
+
--- /dev/null
+{
+ "a": {
+ "few": {
+ "dots": {
+ "polka": {
+ "dance-with": {
+ "type": "string",
+ "value": "Dot"
+ },
+ "dot": {
+ "type": "string",
+ "value": "again?"
+ }
+ }
+ }
+ }
+ },
+ "arr": [
+ {
+ "a": {
+ "b": {
+ "c": {
+ "type": "integer",
+ "value": "1"
+ },
+ "d": {
+ "type": "integer",
+ "value": "2"
+ }
+ }
+ }
+ },
+ {
+ "a": {
+ "b": {
+ "c": {
+ "type": "integer",
+ "value": "3"
+ },
+ "d": {
+ "type": "integer",
+ "value": "4"
+ }
+ }
+ }
+ }
+ ],
+ "count": {
+ "a": {
+ "type": "integer",
+ "value": "1"
+ },
+ "b": {
+ "type": "integer",
+ "value": "2"
+ },
+ "c": {
+ "type": "integer",
+ "value": "3"
+ },
+ "d": {
+ "type": "integer",
+ "value": "4"
+ },
+ "e": {
+ "type": "integer",
+ "value": "5"
+ },
+ "f": {
+ "type": "integer",
+ "value": "6"
+ },
+ "g": {
+ "type": "integer",
+ "value": "7"
+ },
+ "h": {
+ "type": "integer",
+ "value": "8"
+ },
+ "i": {
+ "type": "integer",
+ "value": "9"
+ },
+ "j": {
+ "type": "integer",
+ "value": "10"
+ },
+ "k": {
+ "type": "integer",
+ "value": "11"
+ },
+ "l": {
+ "type": "integer",
+ "value": "12"
+ }
+ },
+ "many": {
+ "dots": {
+ "here": {
+ "dot": {
+ "dot": {
+ "dot": {
+ "type": "integer",
+ "value": "42"
+ }
+ }
+ }
+ }
+ }
+ },
+ "name": {
+ "first": {
+ "type": "string",
+ "value": "Arthur"
+ },
+ "last": {
+ "type": "string",
+ "value": "Dent"
+ }
+ },
+ "tbl": {
+ "a": {
+ "b": {
+ "c": {
+ "type": "float",
+ "value": "42.666"
+ }
+ }
+ }
+ }
+}
--- /dev/null
+# Note: this file contains literal tab characters.
+
+name.first = "Arthur"
+"name".'last' = "Dent"
+
+many.dots.here.dot.dot.dot = 42
+
+# Spaces are ignored, and key parts can be quoted.
+count.a = 1
+count . b = 2
+"count"."c" = 3
+"count" . "d" = 4
+'count'.'e' = 5
+'count' . 'f' = 6
+"count".'g' = 7
+"count" . 'h' = 8
+count.'i' = 9
+count . 'j' = 10
+"count".k = 11
+"count" . l = 12
+
+[tbl]
+a.b.c = 42.666
+
+[a.few.dots]
+polka.dot = "again?"
+polka.dance-with = "Dot"
+
+[[arr]]
+a.b.c=1
+a.b.d=2
+
+[[arr]]
+a.b.c=3
+a.b.d=4
--- /dev/null
+{
+ "": {
+ "type": "string",
+ "value": "blank"
+ }
+}
--- /dev/null
+"" = "blank"
--- /dev/null
+{
+ "answer": {
+ "type": "integer",
+ "value": "42"
+ }
+}
--- /dev/null
+{
+ "\n": {
+ "type": "string",
+ "value": "newline"
+ },
+ "\"": {
+ "type": "string",
+ "value": "just a quote"
+ },
+ "\"quoted\"": {
+ "quote": {
+ "type": "bool",
+ "value": "true"
+ }
+ },
+ "a.b": {
+ "À": {}
+ },
+ "backsp\u0008\u0008": {},
+ "À": {
+ "type": "string",
+ "value": "latin capital letter A with grave"
+ }
+}
--- /dev/null
+"\n" = "newline"
+"\u00c0" = "latin capital letter A with grave"
+"\"" = "just a quote"
+
+["backsp\b\b"]
+
+["\"quoted\""]
+quote = true
+
+["a.b"."\u00c0"]
--- /dev/null
+{
+ "1": {
+ "2": {
+ "type": "integer",
+ "value": "3"
+ }
+ }
+}
--- /dev/null
+{
+ "1": {
+ "type": "integer",
+ "value": "1"
+ }
+}
--- /dev/null
+{
+ "plain": {
+ "type": "integer",
+ "value": "1"
+ },
+ "plain_table": {
+ "plain": {
+ "type": "integer",
+ "value": "3"
+ },
+ "with.dot": {
+ "type": "integer",
+ "value": "4"
+ }
+ },
+ "table": {
+ "withdot": {
+ "key.with.dots": {
+ "type": "integer",
+ "value": "6"
+ },
+ "plain": {
+ "type": "integer",
+ "value": "5"
+ }
+ }
+ },
+ "with.dot": {
+ "type": "integer",
+ "value": "2"
+ }
+}
--- /dev/null
+plain = 1
+"with.dot" = 2
+
+[plain_table]
+plain = 3
+"with.dot" = 4
+
+[table.withdot]
+plain = 5
+"key.with.dots" = 6
--- /dev/null
+{
+ " c d ": {
+ "type": "integer",
+ "value": "2"
+ },
+ " tbl ": {
+ "\ttab\ttab\t": {
+ "type": "string",
+ "value": "tab"
+ }
+ },
+ "a b": {
+ "type": "integer",
+ "value": "1"
+ }
+}
--- /dev/null
+# Keep whitespace inside quoted keys at all positions.
+"a b" = 1
+" c d " = 2
+
+[ " tbl " ]
+"\ttab\ttab\t" = "tab"
--- /dev/null
+{
+ "=~!@$^\u0026*()_+-`1234567890[]|/?\u003e\u003c.,;:'=": {
+ "type": "integer",
+ "value": "1"
+ }
+}
--- /dev/null
+"=~!@$^&*()_+-`1234567890[]|/?><.,;:'=" = 1
--- /dev/null
+{
+ "false": {
+ "type": "bool",
+ "value": "false"
+ },
+ "inf": {
+ "type": "integer",
+ "value": "100000000"
+ },
+ "nan": {
+ "type": "string",
+ "value": "ceci n'est pas un nombre"
+ },
+ "true": {
+ "type": "integer",
+ "value": "1"
+ }
+}
--- /dev/null
+false = false
+true = 1
+inf = 100000000
+nan = "ceci n'est pas un nombre"
+
--- /dev/null
+{
+ "newline": {
+ "type": "string",
+ "value": "crlf"
+ },
+ "os": {
+ "type": "string",
+ "value": "DOS"
+ }
+}
--- /dev/null
+os = "DOS"\r
+newline = "crlf"\r
--- /dev/null
+{
+ "newline": {
+ "type": "string",
+ "value": "lf"
+ },
+ "os": {
+ "type": "string",
+ "value": "unix"
+ }
+}
--- /dev/null
+os = "unix"
+newline = "lf"
--- /dev/null
+{
+ "clients": {
+ "data": [
+ [
+ {
+ "type": "string",
+ "value": "gamma"
+ },
+ {
+ "type": "string",
+ "value": "delta"
+ }
+ ],
+ [
+ {
+ "type": "integer",
+ "value": "1"
+ },
+ {
+ "type": "integer",
+ "value": "2"
+ }
+ ]
+ ],
+ "hosts": [
+ {
+ "type": "string",
+ "value": "alpha"
+ },
+ {
+ "type": "string",
+ "value": "omega"
+ }
+ ]
+ },
+ "database": {
+ "connection_max": {
+ "type": "integer",
+ "value": "5000"
+ },
+ "enabled": {
+ "type": "bool",
+ "value": "true"
+ },
+ "ports": [
+ {
+ "type": "integer",
+ "value": "8001"
+ },
+ {
+ "type": "integer",
+ "value": "8001"
+ },
+ {
+ "type": "integer",
+ "value": "8002"
+ }
+ ],
+ "server": {
+ "type": "string",
+ "value": "192.168.1.1"
+ }
+ },
+ "owner": {
+ "dob": {
+ "type": "datetime",
+ "value": "1979-05-27T07:32:00-08:00"
+ },
+ "name": {
+ "type": "string",
+ "value": "Lance Uppercut"
+ }
+ },
+ "servers": {
+ "alpha": {
+ "dc": {
+ "type": "string",
+ "value": "eqdc10"
+ },
+ "ip": {
+ "type": "string",
+ "value": "10.0.0.1"
+ }
+ },
+ "beta": {
+ "dc": {
+ "type": "string",
+ "value": "eqdc10"
+ },
+ "ip": {
+ "type": "string",
+ "value": "10.0.0.2"
+ }
+ }
+ },
+ "title": {
+ "type": "string",
+ "value": "TOML Example"
+ }
+}
--- /dev/null
+#Useless spaces eliminated.
+title="TOML Example"
+[owner]
+name="Lance Uppercut"
+dob=1979-05-27T07:32:00-08:00#First class dates
+[database]
+server="192.168.1.1"
+ports=[8001,8001,8002]
+connection_max=5000
+enabled=true
+[servers]
+[servers.alpha]
+ip="10.0.0.1"
+dc="eqdc10"
+[servers.beta]
+ip="10.0.0.2"
+dc="eqdc10"
+[clients]
+data=[["gamma","delta"],[1,2]]
+hosts=[
+"alpha",
+"omega"
+]
--- /dev/null
+{
+ "clients": {
+ "data": [
+ [
+ {
+ "type": "string",
+ "value": "gamma"
+ },
+ {
+ "type": "string",
+ "value": "delta"
+ }
+ ],
+ [
+ {
+ "type": "integer",
+ "value": "1"
+ },
+ {
+ "type": "integer",
+ "value": "2"
+ }
+ ]
+ ],
+ "hosts": [
+ {
+ "type": "string",
+ "value": "alpha"
+ },
+ {
+ "type": "string",
+ "value": "omega"
+ }
+ ]
+ },
+ "database": {
+ "connection_max": {
+ "type": "integer",
+ "value": "5000"
+ },
+ "enabled": {
+ "type": "bool",
+ "value": "true"
+ },
+ "ports": [
+ {
+ "type": "integer",
+ "value": "8001"
+ },
+ {
+ "type": "integer",
+ "value": "8001"
+ },
+ {
+ "type": "integer",
+ "value": "8002"
+ }
+ ],
+ "server": {
+ "type": "string",
+ "value": "192.168.1.1"
+ }
+ },
+ "owner": {
+ "dob": {
+ "type": "datetime",
+ "value": "1979-05-27T07:32:00-08:00"
+ },
+ "name": {
+ "type": "string",
+ "value": "Lance Uppercut"
+ }
+ },
+ "servers": {
+ "alpha": {
+ "dc": {
+ "type": "string",
+ "value": "eqdc10"
+ },
+ "ip": {
+ "type": "string",
+ "value": "10.0.0.1"
+ }
+ },
+ "beta": {
+ "dc": {
+ "type": "string",
+ "value": "eqdc10"
+ },
+ "ip": {
+ "type": "string",
+ "value": "10.0.0.2"
+ }
+ }
+ },
+ "title": {
+ "type": "string",
+ "value": "TOML Example"
+ }
+}
--- /dev/null
+# This is a TOML document. Boom.
+
+title = "TOML Example"
+
+[owner]
+name = "Lance Uppercut"
+dob = 1979-05-27T07:32:00-08:00 # First class dates? Why not?
+
+[database]
+server = "192.168.1.1"
+ports = [ 8001, 8001, 8002 ]
+connection_max = 5000
+enabled = true
+
+[servers]
+
+ # You can indent as you please. Tabs or spaces. TOML don't care.
+ [servers.alpha]
+ ip = "10.0.0.1"
+ dc = "eqdc10"
+
+ [servers.beta]
+ ip = "10.0.0.2"
+ dc = "eqdc10"
+
+[clients]
+data = [ ["gamma", "delta"], [1, 2] ]
+
+# Line breaks are OK when inside arrays
+hosts = [
+ "alpha",
+ "omega"
+]
--- /dev/null
+{
+ "colors": [
+ {
+ "type": "string",
+ "value": "red"
+ },
+ {
+ "type": "string",
+ "value": "yellow"
+ },
+ {
+ "type": "string",
+ "value": "green"
+ }
+ ],
+ "contributors": [
+ {
+ "type": "string",
+ "value": "Foo Bar \u003cfoo@example.com\u003e"
+ },
+ {
+ "email": {
+ "type": "string",
+ "value": "bazqux@example.com"
+ },
+ "name": {
+ "type": "string",
+ "value": "Baz Qux"
+ },
+ "url": {
+ "type": "string",
+ "value": "https://example.com/bazqux"
+ }
+ }
+ ],
+ "integers": [
+ {
+ "type": "integer",
+ "value": "1"
+ },
+ {
+ "type": "integer",
+ "value": "2"
+ },
+ {
+ "type": "integer",
+ "value": "3"
+ }
+ ],
+ "nested_arrays_of_ints": [
+ [
+ {
+ "type": "integer",
+ "value": "1"
+ },
+ {
+ "type": "integer",
+ "value": "2"
+ }
+ ],
+ [
+ {
+ "type": "integer",
+ "value": "3"
+ },
+ {
+ "type": "integer",
+ "value": "4"
+ },
+ {
+ "type": "integer",
+ "value": "5"
+ }
+ ]
+ ],
+ "nested_mixed_array": [
+ [
+ {
+ "type": "integer",
+ "value": "1"
+ },
+ {
+ "type": "integer",
+ "value": "2"
+ }
+ ],
+ [
+ {
+ "type": "string",
+ "value": "a"
+ },
+ {
+ "type": "string",
+ "value": "b"
+ },
+ {
+ "type": "string",
+ "value": "c"
+ }
+ ]
+ ],
+ "numbers": [
+ {
+ "type": "float",
+ "value": "0.1"
+ },
+ {
+ "type": "float",
+ "value": "0.2"
+ },
+ {
+ "type": "float",
+ "value": "0.5"
+ },
+ {
+ "type": "integer",
+ "value": "1"
+ },
+ {
+ "type": "integer",
+ "value": "2"
+ },
+ {
+ "type": "integer",
+ "value": "5"
+ }
+ ],
+ "string_array": [
+ {
+ "type": "string",
+ "value": "all"
+ },
+ {
+ "type": "string",
+ "value": "strings"
+ },
+ {
+ "type": "string",
+ "value": "are the same"
+ },
+ {
+ "type": "string",
+ "value": "type"
+ }
+ ]
+}
--- /dev/null
+integers = [ 1, 2, 3 ]
+colors = [ "red", "yellow", "green" ]
+nested_arrays_of_ints = [ [ 1, 2 ], [3, 4, 5] ]
+nested_mixed_array = [ [ 1, 2 ], ["a", "b", "c"] ]
+string_array = [ "all", 'strings', """are the same""", '''type''' ]
+
+# Mixed-type arrays are allowed
+numbers = [ 0.1, 0.2, 0.5, 1, 2, 5 ]
+contributors = [
+ "Foo Bar <foo@example.com>",
+ { name = "Baz Qux", email = "bazqux@example.com", url = "https://example.com/bazqux" }
+]
--- /dev/null
+{
+ "integers2": [
+ {
+ "type": "integer",
+ "value": "1"
+ },
+ {
+ "type": "integer",
+ "value": "2"
+ },
+ {
+ "type": "integer",
+ "value": "3"
+ }
+ ],
+ "integers3": [
+ {
+ "type": "integer",
+ "value": "1"
+ },
+ {
+ "type": "integer",
+ "value": "2"
+ }
+ ]
+}
--- /dev/null
+integers2 = [
+ 1, 2, 3
+]
+
+integers3 = [
+ 1,
+ 2, # this is ok
+]
--- /dev/null
+{
+ "products": [
+ {
+ "name": {
+ "type": "string",
+ "value": "Hammer"
+ },
+ "sku": {
+ "type": "integer",
+ "value": "738594937"
+ }
+ },
+ {},
+ {
+ "color": {
+ "type": "string",
+ "value": "gray"
+ },
+ "name": {
+ "type": "string",
+ "value": "Nail"
+ },
+ "sku": {
+ "type": "integer",
+ "value": "284758393"
+ }
+ }
+ ]
+}
--- /dev/null
+[[products]]
+name = "Hammer"
+sku = 738594937
+
+[[products]] # empty table within the array
+
+[[products]]
+name = "Nail"
+sku = 284758393
+
+color = "gray"
--- /dev/null
+{
+ "fruits": [
+ {
+ "name": {
+ "type": "string",
+ "value": "apple"
+ },
+ "physical": {
+ "color": {
+ "type": "string",
+ "value": "red"
+ },
+ "shape": {
+ "type": "string",
+ "value": "round"
+ }
+ },
+ "varieties": [
+ {
+ "name": {
+ "type": "string",
+ "value": "red delicious"
+ }
+ },
+ {
+ "name": {
+ "type": "string",
+ "value": "granny smith"
+ }
+ }
+ ]
+ },
+ {
+ "name": {
+ "type": "string",
+ "value": "banana"
+ },
+ "varieties": [
+ {
+ "name": {
+ "type": "string",
+ "value": "plantain"
+ }
+ }
+ ]
+ }
+ ]
+}
--- /dev/null
+[[fruits]]
+name = "apple"
+
+[fruits.physical] # subtable
+color = "red"
+shape = "round"
+
+[[fruits.varieties]] # nested array of tables
+name = "red delicious"
+
+[[fruits.varieties]]
+name = "granny smith"
+
+
+[[fruits]]
+name = "banana"
+
+[[fruits.varieties]]
+name = "plantain"
--- /dev/null
+{
+ "points": [
+ {
+ "x": {
+ "type": "integer",
+ "value": "1"
+ },
+ "y": {
+ "type": "integer",
+ "value": "2"
+ },
+ "z": {
+ "type": "integer",
+ "value": "3"
+ }
+ },
+ {
+ "x": {
+ "type": "integer",
+ "value": "7"
+ },
+ "y": {
+ "type": "integer",
+ "value": "8"
+ },
+ "z": {
+ "type": "integer",
+ "value": "9"
+ }
+ },
+ {
+ "x": {
+ "type": "integer",
+ "value": "2"
+ },
+ "y": {
+ "type": "integer",
+ "value": "4"
+ },
+ "z": {
+ "type": "integer",
+ "value": "8"
+ }
+ }
+ ]
+}
--- /dev/null
+points = [ { x = 1, y = 2, z = 3 },
+ { x = 7, y = 8, z = 9 },
+ { x = 2, y = 4, z = 8 } ]
--- /dev/null
+{
+ "bool1": {
+ "type": "bool",
+ "value": "true"
+ },
+ "bool2": {
+ "type": "bool",
+ "value": "false"
+ }
+}
--- /dev/null
+bool1 = true
+bool2 = false
--- /dev/null
+{
+ "another": {
+ "type": "string",
+ "value": "# This is not a comment"
+ },
+ "key": {
+ "type": "string",
+ "value": "value"
+ }
+}
--- /dev/null
+# This is a full-line comment
+key = "value" # This is a comment at the end of a line
+another = "# This is not a comment"
--- /dev/null
+{
+ "flt1": {
+ "type": "float",
+ "value": "1"
+ },
+ "flt2": {
+ "type": "float",
+ "value": "3.1415"
+ },
+ "flt3": {
+ "type": "float",
+ "value": "-0.01"
+ },
+ "flt4": {
+ "type": "float",
+ "value": "5e+22"
+ },
+ "flt5": {
+ "type": "float",
+ "value": "1e+06"
+ },
+ "flt6": {
+ "type": "float",
+ "value": "-0.02"
+ },
+ "flt7": {
+ "type": "float",
+ "value": "6.626e-34"
+ }
+}
--- /dev/null
+# fractional
+flt1 = +1.0
+flt2 = 3.1415
+flt3 = -0.01
+
+# exponent
+flt4 = 5e+22
+flt5 = 1e06
+flt6 = -2E-2
+
+# both
+flt7 = 6.626e-34
--- /dev/null
+{
+ "flt8": {
+ "type": "float",
+ "value": "224617.445991228"
+ }
+}
--- /dev/null
+flt8 = 224_617.445_991_228
--- /dev/null
+{
+ "sf1": {
+ "type": "float",
+ "value": "+Inf"
+ },
+ "sf2": {
+ "type": "float",
+ "value": "+Inf"
+ },
+ "sf3": {
+ "type": "float",
+ "value": "-Inf"
+ },
+ "sf4": {
+ "type": "float",
+ "value": "nan"
+ },
+ "sf5": {
+ "type": "float",
+ "value": "nan"
+ },
+ "sf6": {
+ "type": "float",
+ "value": "nan"
+ }
+}
--- /dev/null
+# infinity
+sf1 = inf # positive infinity
+sf2 = +inf # positive infinity
+sf3 = -inf # negative infinity
+
+# not a number
+sf4 = nan # actual sNaN/qNaN encoding is implementation-specific
+sf5 = +nan # same as `nan`
+sf6 = -nan # valid, actual encoding is implementation-specific
--- /dev/null
+{
+ "animal": {
+ "type": {
+ "name": {
+ "type": "string",
+ "value": "pug"
+ }
+ }
+ },
+ "name": {
+ "first": {
+ "type": "string",
+ "value": "Tom"
+ },
+ "last": {
+ "type": "string",
+ "value": "Preston-Werner"
+ }
+ },
+ "point": {
+ "x": {
+ "type": "integer",
+ "value": "1"
+ },
+ "y": {
+ "type": "integer",
+ "value": "2"
+ }
+ }
+}
--- /dev/null
+name = { first = "Tom", last = "Preston-Werner" }
+point = { x = 1, y = 2 }
+animal = { type.name = "pug" }
--- /dev/null
+{
+ "animal": {
+ "type": {
+ "name": {
+ "type": "string",
+ "value": "pug"
+ }
+ }
+ },
+ "name": {
+ "first": {
+ "type": "string",
+ "value": "Tom"
+ },
+ "last": {
+ "type": "string",
+ "value": "Preston-Werner"
+ }
+ },
+ "point": {
+ "x": {
+ "type": "integer",
+ "value": "1"
+ },
+ "y": {
+ "type": "integer",
+ "value": "2"
+ }
+ }
+}
--- /dev/null
+[name]
+first = "Tom"
+last = "Preston-Werner"
+
+[point]
+x = 1
+y = 2
+
+[animal]
+type.name = "pug"
--- /dev/null
+{
+ "product": {
+ "type": {
+ "name": {
+ "type": "string",
+ "value": "Nail"
+ }
+ }
+ }
+}
--- /dev/null
+[product]
+type = { name = "Nail" }
+# type.edible = false # INVALID
--- /dev/null
+{
+ "product": {
+ "type": {
+ "name": {
+ "type": "string",
+ "value": "Nail"
+ }
+ }
+ }
+}
--- /dev/null
+[product]
+type.name = "Nail"
+# type = { edible = false } # INVALID
--- /dev/null
+{
+ "int1": {
+ "type": "integer",
+ "value": "99"
+ },
+ "int2": {
+ "type": "integer",
+ "value": "42"
+ },
+ "int3": {
+ "type": "integer",
+ "value": "0"
+ },
+ "int4": {
+ "type": "integer",
+ "value": "-17"
+ }
+}
--- /dev/null
+int1 = +99
+int2 = 42
+int3 = 0
+int4 = -17
--- /dev/null
+{
+ "int5": {
+ "type": "integer",
+ "value": "1000"
+ },
+ "int6": {
+ "type": "integer",
+ "value": "5349221"
+ },
+ "int7": {
+ "type": "integer",
+ "value": "5349221"
+ },
+ "int8": {
+ "type": "integer",
+ "value": "12345"
+ }
+}
--- /dev/null
+int5 = 1_000
+int6 = 5_349_221
+int7 = 53_49_221 # Indian number system grouping
+int8 = 1_2_3_4_5 # VALID but discouraged
--- /dev/null
+{
+ "bin1": {
+ "type": "integer",
+ "value": "214"
+ },
+ "hex1": {
+ "type": "integer",
+ "value": "3735928559"
+ },
+ "hex2": {
+ "type": "integer",
+ "value": "3735928559"
+ },
+ "hex3": {
+ "type": "integer",
+ "value": "3735928559"
+ },
+ "oct1": {
+ "type": "integer",
+ "value": "342391"
+ },
+ "oct2": {
+ "type": "integer",
+ "value": "493"
+ }
+}
--- /dev/null
+# hexadecimal with prefix `0x`
+hex1 = 0xDEADBEEF
+hex2 = 0xdeadbeef
+hex3 = 0xdead_beef
+
+# octal with prefix `0o`
+oct1 = 0o01234567
+oct2 = 0o755 # useful for Unix file permissions
+
+# binary with prefix `0b`
+bin1 = 0b11010110
--- /dev/null
+{
+ "key": {
+ "type": "string",
+ "value": "value"
+ }
+}
--- /dev/null
+key = "value"
--- /dev/null
+{
+ "1234": {
+ "type": "string",
+ "value": "value"
+ },
+ "bare-key": {
+ "type": "string",
+ "value": "value"
+ },
+ "bare_key": {
+ "type": "string",
+ "value": "value"
+ },
+ "key": {
+ "type": "string",
+ "value": "value"
+ }
+}
--- /dev/null
+key = "value"
+bare_key = "value"
+bare-key = "value"
+1234 = "value"
--- /dev/null
+{
+ "127.0.0.1": {
+ "type": "string",
+ "value": "value"
+ },
+ "character encoding": {
+ "type": "string",
+ "value": "value"
+ },
+ "key2": {
+ "type": "string",
+ "value": "value"
+ },
+ "quoted \"value\"": {
+ "type": "string",
+ "value": "value"
+ },
+ "ʎǝʞ": {
+ "type": "string",
+ "value": "value"
+ }
+}
--- /dev/null
+"127.0.0.1" = "value"
+"character encoding" = "value"
+"ʎǝʞ" = "value"
+'key2' = "value"
+'quoted "value"' = "value"
--- /dev/null
+{
+ "name": {
+ "type": "string",
+ "value": "Orange"
+ },
+ "physical": {
+ "color": {
+ "type": "string",
+ "value": "orange"
+ },
+ "shape": {
+ "type": "string",
+ "value": "round"
+ }
+ },
+ "site": {
+ "google.com": {
+ "type": "bool",
+ "value": "true"
+ }
+ }
+}
--- /dev/null
+name = "Orange"
+physical.color = "orange"
+physical.shape = "round"
+site."google.com" = true
--- /dev/null
+{
+ "fruit": {
+ "color": {
+ "type": "string",
+ "value": "yellow"
+ },
+ "flavor": {
+ "type": "string",
+ "value": "banana"
+ },
+ "name": {
+ "type": "string",
+ "value": "banana"
+ }
+ }
+}
--- /dev/null
+fruit.name = "banana" # this is best practice
+fruit. color = "yellow" # same as fruit.color
+fruit . flavor = "banana" # same as fruit.flavor
--- /dev/null
+{
+ "apple": {
+ "color": {
+ "type": "string",
+ "value": "red"
+ },
+ "skin": {
+ "type": "string",
+ "value": "thin"
+ },
+ "type": {
+ "type": "string",
+ "value": "fruit"
+ }
+ },
+ "orange": {
+ "color": {
+ "type": "string",
+ "value": "orange"
+ },
+ "skin": {
+ "type": "string",
+ "value": "thick"
+ },
+ "type": {
+ "type": "string",
+ "value": "fruit"
+ }
+ }
+}
--- /dev/null
+# VALID BUT DISCOURAGED
+
+apple.type = "fruit"
+orange.type = "fruit"
+
+apple.skin = "thin"
+orange.skin = "thick"
+
+apple.color = "red"
+orange.color = "orange"
--- /dev/null
+{
+ "apple": {
+ "color": {
+ "type": "string",
+ "value": "red"
+ },
+ "skin": {
+ "type": "string",
+ "value": "thin"
+ },
+ "type": {
+ "type": "string",
+ "value": "fruit"
+ }
+ },
+ "orange": {
+ "color": {
+ "type": "string",
+ "value": "orange"
+ },
+ "skin": {
+ "type": "string",
+ "value": "thick"
+ },
+ "type": {
+ "type": "string",
+ "value": "fruit"
+ }
+ }
+}
--- /dev/null
+# RECOMMENDED
+
+apple.type = "fruit"
+apple.skin = "thin"
+apple.color = "red"
+
+orange.type = "fruit"
+orange.skin = "thick"
+orange.color = "orange"
--- /dev/null
+{
+ "3": {
+ "14159": {
+ "type": "string",
+ "value": "pi"
+ }
+ }
+}
--- /dev/null
+3.14159 = "pi"
--- /dev/null
+{
+ "ld1": {
+ "type": "date-local",
+ "value": "1979-05-27"
+ }
+}
--- /dev/null
+ld1 = 1979-05-27
--- /dev/null
+{
+ "ldt1": {
+ "type": "datetime-local",
+ "value": "1979-05-27T07:32:00"
+ },
+ "ldt2": {
+ "type": "datetime-local",
+ "value": "1979-05-27T00:32:00.999999"
+ }
+}
--- /dev/null
+ldt1 = 1979-05-27T07:32:00
+ldt2 = 1979-05-27T00:32:00.999999
--- /dev/null
+{
+ "lt1": {
+ "type": "time-local",
+ "value": "07:32:00"
+ },
+ "lt2": {
+ "type": "time-local",
+ "value": "00:32:00.999999"
+ }
+}
--- /dev/null
+lt1 = 07:32:00
+lt2 = 00:32:00.999999
--- /dev/null
+{
+ "odt1": {
+ "type": "datetime",
+ "value": "1979-05-27T07:32:00Z"
+ },
+ "odt2": {
+ "type": "datetime",
+ "value": "1979-05-27T00:32:00-07:00"
+ },
+ "odt3": {
+ "type": "datetime",
+ "value": "1979-05-27T00:32:00.999999-07:00"
+ }
+}
--- /dev/null
+odt1 = 1979-05-27T07:32:00Z
+odt2 = 1979-05-27T00:32:00-07:00
+odt3 = 1979-05-27T00:32:00.999999-07:00
--- /dev/null
+{
+ "odt4": {
+ "type": "datetime",
+ "value": "1979-05-27T07:32:00Z"
+ }
+}
--- /dev/null
+odt4 = 1979-05-27 07:32:00Z
--- /dev/null
+{
+ "str": {
+ "type": "string",
+ "value": "I'm a string. \"You can quote me\". Name\tJosé\nLocation\tSF."
+ }
+}
--- /dev/null
+str = "I'm a string. \"You can quote me\". Name\tJos\u00E9\nLocation\tSF."
--- /dev/null
+{
+ "str1": {
+ "type": "string",
+ "value": "Roses are red\nViolets are blue"
+ }
+}
--- /dev/null
+str1 = """
+Roses are red
+Violets are blue"""
--- /dev/null
+{
+ "str2": {
+ "type": "string",
+ "value": "Roses are red\nViolets are blue"
+ },
+ "str3": {
+ "type": "string",
+ "value": "Roses are red\r\nViolets are blue"
+ }
+}
--- /dev/null
+# On a Unix system, the above multi-line string will most likely be the same as:
+str2 = "Roses are red\nViolets are blue"
+
+# On a Windows system, it will most likely be equivalent to:
+str3 = "Roses are red\r\nViolets are blue"
--- /dev/null
+{
+ "str1": {
+ "type": "string",
+ "value": "The quick brown fox jumps over the lazy dog."
+ },
+ "str2": {
+ "type": "string",
+ "value": "The quick brown fox jumps over the lazy dog."
+ },
+ "str3": {
+ "type": "string",
+ "value": "The quick brown fox jumps over the lazy dog."
+ }
+}
--- /dev/null
+# The following strings are byte-for-byte equivalent:
+str1 = "The quick brown fox jumps over the lazy dog."
+
+str2 = """
+The quick brown \
+
+
+ fox jumps over \
+ the lazy dog."""
+
+str3 = """\
+ The quick brown \
+ fox jumps over \
+ the lazy dog.\
+ """
--- /dev/null
+{
+ "str4": {
+ "type": "string",
+ "value": "Here are two quotation marks: \"\". Simple enough."
+ },
+ "str5": {
+ "type": "string",
+ "value": "Here are three quotation marks: \"\"\"."
+ },
+ "str6": {
+ "type": "string",
+ "value": "Here are fifteen quotation marks: \"\"\"\"\"\"\"\"\"\"\"\"\"\"\"."
+ },
+ "str7": {
+ "type": "string",
+ "value": "\"This,\" she said, \"is just a pointless statement.\""
+ }
+}
--- /dev/null
+str4 = """Here are two quotation marks: "". Simple enough."""
+# str5 = """Here are three quotation marks: """.""" # INVALID
+str5 = """Here are three quotation marks: ""\"."""
+str6 = """Here are fifteen quotation marks: ""\"""\"""\"""\"""\"."""
+
+# "This," she said, "is just a pointless statement."
+str7 = """"This," she said, "is just a pointless statement.""""
--- /dev/null
+{
+ "quoted": {
+ "type": "string",
+ "value": "Tom \"Dubs\" Preston-Werner"
+ },
+ "regex": {
+ "type": "string",
+ "value": "\u003c\\i\\c*\\s*\u003e"
+ },
+ "winpath": {
+ "type": "string",
+ "value": "C:\\Users\\nodejs\\templates"
+ },
+ "winpath2": {
+ "type": "string",
+ "value": "\\\\ServerX\\admin$\\system32\\"
+ }
+}
--- /dev/null
+# What you see is what you get.
+winpath = 'C:\Users\nodejs\templates'
+winpath2 = '\\ServerX\admin$\system32\'
+quoted = 'Tom "Dubs" Preston-Werner'
+regex = '<\i\c*\s*>'
--- /dev/null
+{
+ "lines": {
+ "type": "string",
+ "value": "The first newline is\ntrimmed in raw strings.\n All other whitespace\n is preserved.\n"
+ },
+ "regex2": {
+ "type": "string",
+ "value": "I [dw]on't need \\d{2} apples"
+ }
+}
--- /dev/null
+regex2 = '''I [dw]on't need \d{2} apples'''
+lines = '''
+The first newline is
+trimmed in raw strings.
+ All other whitespace
+ is preserved.
+'''
--- /dev/null
+{
+ "apos15": {
+ "type": "string",
+ "value": "Here are fifteen apostrophes: '''''''''''''''"
+ },
+ "quot15": {
+ "type": "string",
+ "value": "Here are fifteen quotation marks: \"\"\"\"\"\"\"\"\"\"\"\"\"\"\""
+ },
+ "str": {
+ "type": "string",
+ "value": "'That,' she said, 'is still pointless.'"
+ }
+}
--- /dev/null
+quot15 = '''Here are fifteen quotation marks: """""""""""""""'''
+
+# apos15 = '''Here are fifteen apostrophes: '''''''''''''''''' # INVALID
+apos15 = "Here are fifteen apostrophes: '''''''''''''''"
+
+# 'That,' she said, 'is still pointless.'
+str = ''''That,' she said, 'is still pointless.''''
--- /dev/null
+{
+ "table": {}
+}
--- /dev/null
+{
+ "table-1": {
+ "key1": {
+ "type": "string",
+ "value": "some string"
+ },
+ "key2": {
+ "type": "integer",
+ "value": "123"
+ }
+ },
+ "table-2": {
+ "key1": {
+ "type": "string",
+ "value": "another string"
+ },
+ "key2": {
+ "type": "integer",
+ "value": "456"
+ }
+ }
+}
--- /dev/null
+[table-1]
+key1 = "some string"
+key2 = 123
+
+[table-2]
+key1 = "another string"
+key2 = 456
--- /dev/null
+{
+ "dog": {
+ "tater.man": {
+ "type": {
+ "name": {
+ "type": "string",
+ "value": "pug"
+ }
+ }
+ }
+ }
+}
--- /dev/null
+[dog."tater.man"]
+type.name = "pug"
--- /dev/null
+{
+ "a": {
+ "b": {
+ "c": {}
+ }
+ },
+ "d": {
+ "e": {
+ "f": {}
+ }
+ },
+ "g": {
+ "h": {
+ "i": {}
+ }
+ },
+ "j": {
+ "ʞ": {
+ "l": {}
+ }
+ }
+}
--- /dev/null
+[a.b.c] # this is best practice
+[ d.e.f ] # same as [d.e.f]
+[ g . h . i ] # same as [g.h.i]
+[ j . "ʞ" . 'l' ] # same as [j."ʞ".'l']
--- /dev/null
+{
+ "x": {
+ "y": {
+ "z": {
+ "w": {}
+ }
+ }
+ }
+}
--- /dev/null
+# [x] you
+# [x.y] don't
+# [x.y.z] need these
+[x.y.z.w] # for this to work
+
+[x] # defining a super-table afterward is ok
--- /dev/null
+{
+ "animal": {},
+ "fruit": {
+ "apple": {},
+ "orange": {}
+ }
+}
--- /dev/null
+# VALID BUT DISCOURAGED
+[fruit.apple]
+[animal]
+[fruit.orange]
--- /dev/null
+{
+ "animal": {},
+ "fruit": {
+ "apple": {},
+ "orange": {}
+ }
+}
--- /dev/null
+# RECOMMENDED
+[fruit.apple]
+[fruit.orange]
+[animal]
--- /dev/null
+{
+ "breed": {
+ "type": "string",
+ "value": "pug"
+ },
+ "name": {
+ "type": "string",
+ "value": "Fido"
+ },
+ "owner": {
+ "member_since": {
+ "type": "date-local",
+ "value": "1999-08-04"
+ },
+ "name": {
+ "type": "string",
+ "value": "Regina Dogman"
+ }
+ }
+}
--- /dev/null
+# Top-level table begins.
+name = "Fido"
+breed = "pug"
+
+# Top-level table ends.
+[owner]
+name = "Regina Dogman"
+member_since = 1999-08-04
--- /dev/null
+{
+ "fruit": {
+ "apple": {
+ "color": {
+ "type": "string",
+ "value": "red"
+ },
+ "taste": {
+ "sweet": {
+ "type": "bool",
+ "value": "true"
+ }
+ }
+ }
+ }
+}
--- /dev/null
+fruit.apple.color = "red"
+# Defines a table named fruit
+# Defines a table named fruit.apple
+
+fruit.apple.taste.sweet = true
+# Defines a table named fruit.apple.taste
+# fruit and fruit.apple were already created
--- /dev/null
+{
+ "fruit": {
+ "apple": {
+ "color": {
+ "type": "string",
+ "value": "red"
+ },
+ "taste": {
+ "sweet": {
+ "type": "bool",
+ "value": "true"
+ }
+ },
+ "texture": {
+ "smooth": {
+ "type": "bool",
+ "value": "true"
+ }
+ }
+ }
+ }
+}
--- /dev/null
+[fruit]
+apple.color = "red"
+apple.taste.sweet = true
+
+# [fruit.apple] # INVALID
+# [fruit.apple.taste] # INVALID
+
+[fruit.apple.texture] # you can add sub-tables
+smooth = true
--- /dev/null
+{
+ "test": {
+ "type": "string",
+ "value": "\"one\""
+ }
+}
--- /dev/null
+test = "\"one\""
--- /dev/null
+{
+ "answer": {
+ "type": "string",
+ "value": ""
+ }
+}
--- /dev/null
+answer = ""
--- /dev/null
+{
+ "esc": {
+ "type": "string",
+ "value": "\u001b There is no escape! \u001b"
+ }
+}
--- /dev/null
+esc = "\e There is no escape! \e"
--- /dev/null
+{
+ "end_esc": {
+ "type": "string",
+ "value": "String does not end here\" but ends here\\"
+ },
+ "lit_end_esc": {
+ "type": "string",
+ "value": "String ends here\\"
+ },
+ "lit_multiline_end": {
+ "type": "string",
+ "value": "There is no escape\\"
+ },
+ "lit_multiline_not_unicode": {
+ "type": "string",
+ "value": "\\u007f"
+ },
+ "multiline_end_esc": {
+ "type": "string",
+ "value": "When will it end? \"\"\"...\"\"\" should be here\""
+ },
+ "multiline_not_unicode": {
+ "type": "string",
+ "value": "\\u0041"
+ },
+ "multiline_unicode": {
+ "type": "string",
+ "value": " "
+ }
+}
--- /dev/null
+end_esc = "String does not end here\" but ends here\\"
+lit_end_esc = 'String ends here\'
+
+multiline_unicode = """
+\u00a0"""
+
+multiline_not_unicode = """
+\\u0041"""
+
+multiline_end_esc = """When will it end? \"""...""\" should be here\""""
+
+lit_multiline_not_unicode = '''
+\u007f'''
+
+lit_multiline_end = '''There is no escape\'''
--- /dev/null
+{
+ "answer": {
+ "type": "string",
+ "value": "\\x64"
+ }
+}
--- /dev/null
+answer = "\\x64"
--- /dev/null
+{
+ "backslash": {
+ "type": "string",
+ "value": "This string has a \\ backslash character."
+ },
+ "backspace": {
+ "type": "string",
+ "value": "This string has a \u0008 backspace character."
+ },
+ "carriage": {
+ "type": "string",
+ "value": "This string has a \r carriage return character."
+ },
+ "delete": {
+ "type": "string",
+ "value": "This string has a \7f delete control code."
+ },
+ "formfeed": {
+ "type": "string",
+ "value": "This string has a \u000c form feed character."
+ },
+ "newline": {
+ "type": "string",
+ "value": "This string has a \n new line character."
+ },
+ "notunicode1": {
+ "type": "string",
+ "value": "This string does not have a unicode \\u escape."
+ },
+ "notunicode2": {
+ "type": "string",
+ "value": "This string does not have a unicode \\u escape."
+ },
+ "notunicode3": {
+ "type": "string",
+ "value": "This string does not have a unicode \\u0075 escape."
+ },
+ "notunicode4": {
+ "type": "string",
+ "value": "This string does not have a unicode \\u escape."
+ },
+ "quote": {
+ "type": "string",
+ "value": "This string has a \" quote character."
+ },
+ "tab": {
+ "type": "string",
+ "value": "This string has a \t tab character."
+ },
+ "unitseparator": {
+ "type": "string",
+ "value": "This string has a \u001f unit separator control code."
+ }
+}
--- /dev/null
+backspace = "This string has a \b backspace character."
+tab = "This string has a \t tab character."
+newline = "This string has a \n new line character."
+formfeed = "This string has a \f form feed character."
+carriage = "This string has a \r carriage return character."
+quote = "This string has a \" quote character."
+backslash = "This string has a \\ backslash character."
+notunicode1 = "This string does not have a unicode \\u escape."
+notunicode2 = "This string does not have a unicode \u005Cu escape."
+notunicode3 = "This string does not have a unicode \\u0075 escape."
+notunicode4 = "This string does not have a unicode \\\u0075 escape."
+delete = "This string has a \u007F delete control code."
+unitseparator = "This string has a \u001F unit separator control code."
--- /dev/null
+{
+ "bs": {
+ "type": "string",
+ "value": "\7f"
+ },
+ "hello": {
+ "type": "string",
+ "value": "hello\n"
+ },
+ "higher-than-127": {
+ "type": "string",
+ "value": "Sørmirbæren"
+ },
+ "literal": {
+ "type": "string",
+ "value": "\\x20 \\x09 \\x0d\\x0a"
+ },
+ "multiline": {
+ "type": "string",
+ "value": " \t \u001b \r\n\n\7f\n\u0000\nhello\n\nSørmirbæren\n"
+ },
+ "multiline-literal": {
+ "type": "string",
+ "value": "\\x20 \\x09 \\x0d\\x0a\n"
+ },
+ "nul": {
+ "type": "string",
+ "value": "\u0000"
+ },
+ "whitespace": {
+ "type": "string",
+ "value": " \t \u001b \r\n"
+ }
+}
--- /dev/null
+# \x for the first 255 codepoints
+
+whitespace = "\x20 \x09 \x1b \x0d\x0a"
+bs = "\x7f"
+nul = "\x00"
+hello = "\x68\x65\x6c\x6c\x6f\x0a"
+higher-than-127 = "S\xf8rmirb\xe6ren"
+
+multiline = """
+\x20 \x09 \x1b \x0d\x0a
+\x7f
+\x00
+\x68\x65\x6c\x6c\x6f\x0a
+\x53\xF8\x72\x6D\x69\x72\x62\xE6\x72\x65\x6E
+"""
+
+# Not inside literals.
+literal = '\x20 \x09 \x0d\x0a'
+multiline-literal = '''
+\x20 \x09 \x0d\x0a
+'''
--- /dev/null
+{
+ "0": {
+ "type": "string",
+ "value": ""
+ }
+}
--- /dev/null
+# The following line should be an unescaped backslash followed by a Windows\r
+# newline sequence ("\r\n")\r
+0="""\\r
+"""\r
--- /dev/null
+{
+ "escaped": {
+ "type": "string",
+ "value": "lol\"\"\""
+ },
+ "lit_one": {
+ "type": "string",
+ "value": "'one quote'"
+ },
+ "lit_one_space": {
+ "type": "string",
+ "value": " 'one quote' "
+ },
+ "lit_two": {
+ "type": "string",
+ "value": "''two quotes''"
+ },
+ "lit_two_space": {
+ "type": "string",
+ "value": " ''two quotes'' "
+ },
+ "mismatch1": {
+ "type": "string",
+ "value": "aaa'''bbb"
+ },
+ "mismatch2": {
+ "type": "string",
+ "value": "aaa\"\"\"bbb"
+ },
+ "one": {
+ "type": "string",
+ "value": "\"one quote\""
+ },
+ "one_space": {
+ "type": "string",
+ "value": " \"one quote\" "
+ },
+ "two": {
+ "type": "string",
+ "value": "\"\"two quotes\"\""
+ },
+ "two_space": {
+ "type": "string",
+ "value": " \"\"two quotes\"\" "
+ }
+}
--- /dev/null
+# Make sure that quotes inside multiline strings are allowed, including right
+# after the opening '''/""" and before the closing '''/"""
+
+lit_one = ''''one quote''''
+lit_two = '''''two quotes'''''
+lit_one_space = ''' 'one quote' '''
+lit_two_space = ''' ''two quotes'' '''
+
+one = """"one quote""""
+two = """""two quotes"""""
+one_space = """ "one quote" """
+two_space = """ ""two quotes"" """
+
+mismatch1 = """aaa'''bbb"""
+mismatch2 = '''aaa"""bbb'''
+
+# Three opening """, then one escaped ", then two "" (allowed), and then three
+# closing """
+escaped = """lol\""""""
--- /dev/null
+{
+ "equivalent_one": {
+ "type": "string",
+ "value": "The quick brown fox jumps over the lazy dog."
+ },
+ "equivalent_three": {
+ "type": "string",
+ "value": "The quick brown fox jumps over the lazy dog."
+ },
+ "equivalent_two": {
+ "type": "string",
+ "value": "The quick brown fox jumps over the lazy dog."
+ },
+ "escape-bs-1": {
+ "type": "string",
+ "value": "a \\\nb"
+ },
+ "escape-bs-2": {
+ "type": "string",
+ "value": "a \\b"
+ },
+ "escape-bs-3": {
+ "type": "string",
+ "value": "a \\\\\n b"
+ },
+ "keep-ws-before": {
+ "type": "string",
+ "value": "a \tb"
+ },
+ "multiline_empty_four": {
+ "type": "string",
+ "value": ""
+ },
+ "multiline_empty_one": {
+ "type": "string",
+ "value": ""
+ },
+ "multiline_empty_three": {
+ "type": "string",
+ "value": ""
+ },
+ "multiline_empty_two": {
+ "type": "string",
+ "value": ""
+ },
+ "no-space": {
+ "type": "string",
+ "value": "ab"
+ },
+ "whitespace-after-bs": {
+ "type": "string",
+ "value": "The quick brown fox jumps over the lazy dog."
+ }
+}
--- /dev/null
+# NOTE: this file includes some literal tab characters.
+
+multiline_empty_one = """"""
+
+# A newline immediately following the opening delimiter will be trimmed.
+multiline_empty_two = """
+"""
+
+# \ at the end of a line trims newlines as well; note that the last \ is
+# followed by two spaces, which are ignored.
+multiline_empty_three = """\
+ """
+multiline_empty_four = """\
+ \
+ \
+ """
+
+equivalent_one = "The quick brown fox jumps over the lazy dog."
+equivalent_two = """
+The quick brown \
+
+
+ fox jumps over \
+ the lazy dog."""
+
+equivalent_three = """\
+ The quick brown \
+ fox jumps over \
+ the lazy dog.\
+ """
+
+whitespace-after-bs = """\
+ The quick brown \
+ fox jumps over \
+ the lazy dog.\
+ """
+
+no-space = """a\
+ b"""
+
+# Has tab character.
+keep-ws-before = """a \
+ b"""
+
+escape-bs-1 = """a \\
+b"""
+
+escape-bs-2 = """a \\\
+b"""
+
+escape-bs-3 = """a \\\\
+ b"""
--- /dev/null
+{
+ "lit_nl_end": {
+ "type": "string",
+ "value": "value\\n"
+ },
+ "lit_nl_mid": {
+ "type": "string",
+ "value": "val\\nue"
+ },
+ "lit_nl_uni": {
+ "type": "string",
+ "value": "val\\ue"
+ },
+ "nl_end": {
+ "type": "string",
+ "value": "value\n"
+ },
+ "nl_mid": {
+ "type": "string",
+ "value": "val\nue"
+ }
+}
--- /dev/null
+nl_mid = "val\nue"
+nl_end = """value\n"""
+
+lit_nl_end = '''value\n'''
+lit_nl_mid = 'val\nue'
+lit_nl_uni = 'val\ue'
--- /dev/null
+{
+ "firstnl": {
+ "type": "string",
+ "value": "This string has a ' quote character."
+ },
+ "multiline": {
+ "type": "string",
+ "value": "This string\nhas ' a quote character\nand more than\none newline\nin it."
+ },
+ "oneline": {
+ "type": "string",
+ "value": "This string has a ' quote character."
+ },
+ "multiline_with_tab": {
+ "type": "string",
+ "value": "First line\n\t Followed by a tab"
+ }
+}
--- /dev/null
+# Single ' should be allowed.
+oneline = '''This string has a ' quote character.'''
+
+# A newline immediately following the opening delimiter will be trimmed.
+firstnl = '''
+This string has a ' quote character.'''
+
+# All other whitespace and newline characters remain intact.
+multiline = '''
+This string
+has ' a quote character
+and more than
+one newline
+in it.'''
+
+# Tab character in literal string does not need to be escaped
+multiline_with_tab = '''First line
+ Followed by a tab'''
--- /dev/null
+{
+ "backslash": {
+ "type": "string",
+ "value": "This string has a \\\\ backslash character."
+ },
+ "backspace": {
+ "type": "string",
+ "value": "This string has a \\b backspace character."
+ },
+ "carriage": {
+ "type": "string",
+ "value": "This string has a \\r carriage return character."
+ },
+ "formfeed": {
+ "type": "string",
+ "value": "This string has a \\f form feed character."
+ },
+ "newline": {
+ "type": "string",
+ "value": "This string has a \\n new line character."
+ },
+ "slash": {
+ "type": "string",
+ "value": "This string has a \\/ slash character."
+ },
+ "tab": {
+ "type": "string",
+ "value": "This string has a \\t tab character."
+ },
+ "unescaped_tab": {
+ "type": "string",
+ "value": "This string has an \t unescaped tab character."
+ }
+}
--- /dev/null
+backspace = 'This string has a \b backspace character.'
+tab = 'This string has a \t tab character.'
+unescaped_tab = 'This string has an unescaped tab character.'
+newline = 'This string has a \n new line character.'
+formfeed = 'This string has a \f form feed character.'
+carriage = 'This string has a \r carriage return character.'
+slash = 'This string has a \/ slash character.'
+backslash = 'This string has a \\ backslash character.'
--- /dev/null
+{
+ "answer": {
+ "type": "string",
+ "value": "You are not drinking enough whisky."
+ }
+}
--- /dev/null
+answer = "You are not drinking enough whisky."
--- /dev/null
+{
+ "answer4": {
+ "type": "string",
+ "value": "δ"
+ },
+ "answer8": {
+ "type": "string",
+ "value": "δ"
+ }
+}
--- /dev/null
+answer4 = "\u03B4"
+answer8 = "\U000003B4"
--- /dev/null
+{
+ "answer": {
+ "type": "string",
+ "value": "δ"
+ }
+}
--- /dev/null
+answer = "δ"
--- /dev/null
+{
+ "pound": {
+ "type": "string",
+ "value": "We see no # comments here."
+ },
+ "poundcomment": {
+ "type": "string",
+ "value": "But there are # some comments here."
+ }
+}
--- /dev/null
+pound = "We see no # comments here."
+poundcomment = "But there are # some comments here." # Did I # mess you up?
--- /dev/null
+{
+ "albums": {
+ "songs": [
+ {
+ "name": {
+ "type": "string",
+ "value": "Glory Days"
+ }
+ }
+ ]
+ }
+}
--- /dev/null
+[[albums.songs]]
+name = "Glory Days"
--- /dev/null
+{
+ "people": [
+ {
+ "first_name": {
+ "type": "string",
+ "value": "Bruce"
+ },
+ "last_name": {
+ "type": "string",
+ "value": "Springsteen"
+ }
+ },
+ {
+ "first_name": {
+ "type": "string",
+ "value": "Eric"
+ },
+ "last_name": {
+ "type": "string",
+ "value": "Clapton"
+ }
+ },
+ {
+ "first_name": {
+ "type": "string",
+ "value": "Bob"
+ },
+ "last_name": {
+ "type": "string",
+ "value": "Seger"
+ }
+ }
+ ]
+}
--- /dev/null
+[[people]]
+first_name = "Bruce"
+last_name = "Springsteen"
+
+[[people]]
+first_name = "Eric"
+last_name = "Clapton"
+
+[[people]]
+first_name = "Bob"
+last_name = "Seger"
--- /dev/null
+{
+ "albums": [
+ {
+ "name": {
+ "type": "string",
+ "value": "Born to Run"
+ },
+ "songs": [
+ {
+ "name": {
+ "type": "string",
+ "value": "Jungleland"
+ }
+ },
+ {
+ "name": {
+ "type": "string",
+ "value": "Meeting Across the River"
+ }
+ }
+ ]
+ },
+ {
+ "name": {
+ "type": "string",
+ "value": "Born in the USA"
+ },
+ "songs": [
+ {
+ "name": {
+ "type": "string",
+ "value": "Glory Days"
+ }
+ },
+ {
+ "name": {
+ "type": "string",
+ "value": "Dancing in the Dark"
+ }
+ }
+ ]
+ }
+ ]
+}
--- /dev/null
+[[albums]]
+name = "Born to Run"
+
+ [[albums.songs]]
+ name = "Jungleland"
+
+ [[albums.songs]]
+ name = "Meeting Across the River"
+
+[[albums]]
+name = "Born in the USA"
+
+ [[albums.songs]]
+ name = "Glory Days"
+
+ [[albums.songs]]
+ name = "Dancing in the Dark"
--- /dev/null
+{
+ "people": [
+ {
+ "first_name": {
+ "type": "string",
+ "value": "Bruce"
+ },
+ "last_name": {
+ "type": "string",
+ "value": "Springsteen"
+ }
+ }
+ ]
+}
--- /dev/null
+[[people]]
+first_name = "Bruce"
+last_name = "Springsteen"
--- /dev/null
+{
+ "a": [
+ {
+ "b": [
+ {
+ "c": {
+ "d": {
+ "type": "string",
+ "value": "val0"
+ }
+ }
+ },
+ {
+ "c": {
+ "d": {
+ "type": "string",
+ "value": "val1"
+ }
+ }
+ }
+ ]
+ }
+ ]
+}
--- /dev/null
+[[a]]
+ [[a.b]]
+ [a.b.c]
+ d = "val0"
+ [[a.b]]
+ [a.b.c]
+ d = "val1"
--- /dev/null
+{
+ "a": {}
+}
--- /dev/null
+{
+ "true": {},
+ "false": {},
+ "inf": {},
+ "nan": {}
+}
--- /dev/null
+[true]
+
+[false]
+
+[inf]
+
+[nan]
+
+
--- /dev/null
+{
+ "a": {
+ " x ": {},
+ "b": {
+ "c": {}
+ },
+ "b.c": {},
+ "d.e": {}
+ },
+ "d": {
+ "e": {
+ "f": {}
+ }
+ },
+ "g": {
+ "h": {
+ "i": {}
+ }
+ },
+ "j": {
+ "ʞ": {
+ "l": {}
+ }
+ },
+ "x": {
+ "1": {
+ "2": {}
+ }
+ }
+}
--- /dev/null
+[a.b.c]
+[a."b.c"]
+[a.'d.e']
+[a.' x ']
+[ d.e.f ]
+[ g . h . i ]
+[ j . "ʞ" . 'l' ]
+
+[x.1.2]
--- /dev/null
+{
+ "table": {}
+}
--- /dev/null
+{
+ "a": {
+ "b": {}
+ }
+}
--- /dev/null
+[a]
+[a.b]
--- /dev/null
+{
+ "a": {
+ "extend": {
+ "key": {
+ "type": "integer",
+ "value": "2"
+ },
+ "more": {
+ "key": {
+ "type": "integer",
+ "value": "3"
+ }
+ }
+ },
+ "key": {
+ "type": "integer",
+ "value": "1"
+ }
+ }
+}
--- /dev/null
+[a]
+key = 1
+
+# a.extend is a key inside the "a" table.
+[a.extend]
+key = 2
+
+[a.extend.more]
+key = 3
--- /dev/null
+{
+ "valid key": {}
+}
--- /dev/null
+["valid key"]
--- /dev/null
+{
+ "a": {
+ "\"b\"": {
+ "c": {
+ "answer": {
+ "type": "integer",
+ "value": "42"
+ }
+ }
+ }
+ }
+}
--- /dev/null
+['a']
+[a.'"b"']
+[a.'"b"'.c]
+answer = 42
--- /dev/null
+{
+ "key#group": {
+ "answer": {
+ "type": "integer",
+ "value": "42"
+ }
+ }
+}
--- /dev/null
+["key#group"]
+answer = 42
--- /dev/null
+{
+ "a": {
+ "b": {
+ "c": {
+ "answer": {
+ "type": "integer",
+ "value": "42"
+ }
+ }
+ }
+ }
+}
--- /dev/null
+['a']
+[a.'b']
+[a.'b'.c]
+answer = 42
--- /dev/null
+{
+ "x": {
+ "y": {
+ "z": {
+ "w": {}
+ }
+ }
+ }
+}
--- /dev/null
+# [x] you
+# [x.y] don't
+# [x.y.z] need these
+[x.y.z.w] # for this to work
+[x] # defining a super-table afterwards is ok
--- /dev/null
+const TESTS_DIR: include_dir::Dir =
+ include_dir::include_dir!("$CARGO_MANIFEST_DIR/assets/toml-test/tests");
+
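+/// A case from the `valid` suite: a TOML fixture (`*.toml`) paired with its
+/// expected JSON form (`*.json`).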
+#[derive(Debug, Copy, Clone, PartialEq, Eq)]
+pub struct Valid<'a> {
+ pub name: &'a std::path::Path,
+ pub fixture: &'a [u8],
+ pub expected: &'a [u8],
+}
+
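+/// Iterates over every valid case under `assets/toml-test/tests/valid`,
+/// including the single level of sub-directories.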
+pub fn valid() -> impl Iterator<Item = Valid<'static>> {
+ let valid_dir = TESTS_DIR.get_dir("valid").unwrap();
+ valid_files(valid_dir).chain(valid_dir.dirs().flat_map(|d| {
+ assert_eq!(d.dirs().count(), 0);
+ valid_files(d)
+ }))
+}
+
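+// For each `*.toml` file in `dir`, find the sibling `*.json` file that shares
+// its file stem; that JSON file holds the expected decoding of the fixture.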
+fn valid_files<'d>(
+ dir: &'d include_dir::Dir<'static>,
+) -> impl Iterator<Item = Valid<'static>> + 'd {
+ dir.files()
+ .filter(|f| f.path().extension().unwrap_or_default() == "toml")
+ .map(move |f| {
+ let t = f;
+ let j = dir
+ .files()
+ .find(|f| {
+ f.path().parent() == t.path().parent()
+ && f.path().file_stem() == t.path().file_stem()
+ && f.path().extension().unwrap() == "json"
+ })
+ .unwrap();
+ let name = t.path();
+ let fixture = t.contents();
+ let expected = j.contents();
+ Valid {
+ name,
+ fixture,
+ expected,
+ }
+ })
+}
+
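+/// A case from the `invalid` suite: a TOML fixture with no expected output.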
+#[derive(Debug, Copy, Clone, PartialEq, Eq)]
+pub struct Invalid<'a> {
+ pub name: &'a std::path::Path,
+ pub fixture: &'a [u8],
+}
+
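+/// Iterates over every invalid case under `assets/toml-test/tests/invalid`.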
+pub fn invalid() -> impl Iterator<Item = Invalid<'static>> {
+ let invalid_dir = TESTS_DIR.get_dir("invalid").unwrap();
+ assert_eq!(invalid_dir.files().count(), 0);
+ invalid_dir.dirs().flat_map(|d| {
+ assert_eq!(d.dirs().count(), 0);
+        d.files().map(|f| {
+            let name = f.path();
+            let fixture = f.contents();
+            Invalid { name, fixture }
+        })
+ })
+ })
+}
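+
+#[cfg(test)]
+mod smoke {
+    // Illustrative sketch only (an editorial addition, not upstream code):
+    // confirm that the embedded suite is discoverable through both iterators.
+    #[test]
+    fn suite_is_discoverable() {
+        assert!(super::valid().count() > 0);
+        assert!(super::invalid().count() > 0);
+    }
+}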