--- /dev/null
+-*- coding: utf-8 -*-
+
+Written by:
+
+ David Lutterkort <lutter@redhat.com>
+
+Committers:
+
+ Matthew Booth <mbooth@redhat.com>
+ Michael Chapman <mike@very.puzzling.org>
+ Dominic Cleal <dcleal@redhat.com>
+ Francis Giraldeau <francis.giraldeau@revolutionlinux.com>
+ Raphaël Pinson <raphael.pinson@camptocamp.com>
+
+Contributions by:
+
+ Jasper Lievisse Adriaanse <jasper@humppa.nl>
+ Partha Aji <paji@redhat.com>
+ Erik B. Andersen <erik.b.andersen@gmail.com>
+ Sebastien Aperghis-Tramoni <sebastien@aperghis.net>
+ Mathieu Arnold <mat@FreeBSD.org>
+ Sergio Ballestrero
+ Sylvain Baubeau <bob@glumol.com>
+ Oliver Beattie <oliver@obeattie com>
+ Tim Bishop <tim@bishnet.net>
+ Anders F Björklund <afb@users.sourceforge.net>
+ Jurjen Bokma <j.bokma@rug.nl>
+ Aurelien Bompard <aurelien@bompard.org>
+ Joey Boggs <jboggs@redhat.com>
+ Lorenzo Dalrio <lorenzo.dalrio@gmail.com>
+ Francois Deppierraz <francois.deppierraz@camptocamp.com>
+ Luc Didry <luc@didry.org>
+ Dominique Dumont <dominique.dumont@hp.com>
+ Craig Dunn <craig@craigdunn.org>
+ Free Ekanayaka <free@64studio.com>
+ Michal Filka <michal.filka@suse.cz>
+ Freakin <brendanaye@gmail.com>
+ Marc Fournier <marc.fournier@camptocamp.com>
+ Davide Guerri <davide.guerri@gmail.com>
+ Andy Grimm <agrimm@redhat.com>
+ Travis Groth <tgroth@gmail.com>
+ Adam Helms <helms.adam@gmail.com>
+ Harald Hoyer <harald@redhat.com>
+ Shannon Hughes <shughes@redhat.com>
+ Richard W.M. Jones <rjones@redhat.com>
+ Simon Josi <josi@puzzle.ch>
+ Bryan Kearney <bkearney@redhat.com>
+ Jason Kincl <jkincl@gmail.com>
+ Andrew Colin Kissa <andrew@topdog.za.net>
+ Francois Lebel <francoislebel@gmail.com>
+ Frédéric Lespez <frederic.lespez@free.fr>
+ Miroslav Lichvar <mlichvar@redhat.com>
+ Jasper Lievisse Adriaanse <jasper@humppa.nl>
+ Tom Limoncelli <tal@whatexit.org>
+ Erinn Looney-Triggs <erinn.looneytriggs@gmail.com>
+ Duncan Mac-Vicar P. <dmacvicar@suse.de>
+ Jeroen van Meeuwen <vanmeeuwen@kolabsys.com>
+ Jim Meyering <meyering@redhat.com>
+ Sean Millichamp <sean@bruenor.org>
+ Craig Miskell <craig@stroppykitten.com>
+ Michael Moll <kvedulv@kvedulv.de>
+ Tim Mooney <Tim.Mooney@ndsu.edu>
+ Joel Nimety <jnimety@perimeterusa.com>
+ Matthaus Owens <matthaus@puppetlabs.com>
+ Matt Palmer <matt@anchor.net.au>
+ Bill Pemberton <wfp5p@virginia.edu>
+ Dan Prince <dprince@redhat.com>
+ Alan Pevec <apevec@redhat.com>
+ Brett Porter <brett@apache.org>
+ Robin Lee Powell <rlpowell@digitalkingdom.org>
+ Michael Pimmer <blubb@fonfon.at>
+ Branan Purvine-Riley <branan@puppetlabs.com>
+ Andrew Replogle <areplogl@redhat.com>
+ Pat Riehecky <riehecky@fnal.gov>
+ Lubomir Rintel <lubo.rintel@gooddata.com>
+ Roman Rakus <rrakus@redhat.com>
+ David Salmen <dsalmen@dsalmen.com>
+ Carlos Sanchez <csanchez@maestrodev.com>
+ Satoru SATOH <satoru.satoh@gmail.com>
+ Nicolas Valcárcel Scerpella <nvalcarcel@ubuntu.com>
+ Gonzalo Servat <gservat@gmail.com>
+ Nahum Shalman <nshalman elys com>
+ Borislav Stoichkov <borislav.stoichkov@gmail.com>
+ Tim Stoop <tim.stoop@gmail.com>
+ Laine Stump <laine@laine.org>
+ Jiri Suchomel <jsuchome@suse.cz>
+ Ivana Hutarova Varekova <varekova@redhat.com>
+ Simon Vocella <voxsim@gmail.com>
+ Frederik Wagner <wagner@lrz.de>
+ Dean Wilson <dwilson@blueowl.it>
+ Igor Pashev <pashev.igor@gmail.com>
+ Micah Anderson <micah@riseup.net>
+ Domen Kožar <domen@dev.si>
+ Filip Andres <filip.andres@gooddata.com>
+ Josh Kayse <jokajak@gmail.com>
+ Jacob M. McCann <jacob.m.mccann@usps.gov>
+ Danny Yates <danny@codeaholics.org>
+ Terence Haddock <thaddock@tripi.com>
+ Athir Nuaimi <athir@nuaimi.com>
+ Ian Berry <iberry@barracuda.com>
+ Gabriel de Perthuis <g2p.code+augeas@gmail.com>
+ Brian Harrington <bharrington@redhat.com>
+ Mathieu Alorent <malorent@kumy.net>
+ Rob Tucker <rtucker@mozilla.com>
+ Stephen P. Schaefer
+ Pascal Lalonde <plalonde@google.com>
+ Tom Hendrikx <tom@whyscream.net>
+ Yanis Guenane <yguenane@gmail.com>
+ Esteve Fernandez <esteve.fernandez@gmail.com>
+ Dietmar Kling <baldur@email.de>
+ Michael Haslgrübler <work-michael@haslgruebler.eu>
+ Andrew N Golovkov <a.golovkov@dwteam.ru>
+ Matteo Cerutti <matteo.cerutti@hotmail.co.uk>
+ Tomas Hoger <thoger@redhat.com>
+ Tomas Klouda <tomas.klouda@gooddata.com>
+ Kaarle Ritvanen <kaarle.ritvanen@datakunkku.fi>
+ François Maillard <fmaillard@gmail.com>
+ Mykola Nikishov <mn@mn.com.ua>
+ Robert Drake <rdrake@direcpath.com>
+ Simon Séhier <simon.sehier@camptocamp.com>
+ Vincent Brillault <vincent.brillault@cern.ch>
+ Mike Latimer <mlatimer@suse.com>
+ Lorenzo M. Catucci <lorenzo@sancho.ccd.uniroma2.it>
+ Joel Loudermilk <joel@loudermilk.org>
+ Frank Grötzner <frank@unforgotten.de>
+ Pino Toscano <ptoscano@redhat.com>
+ Geoffrey Gardella <gardella@gmail.com>
+ Matt Dainty <matt@bodgit-n-scarper.com>
+ Jan Doleschal <jan.doleschal@lgl.bwl.de>
+ Joe Topjian <joe@topjian.net>
+ Julien Pivotto <roidelapluie@inuits.eu>
+ Gregory Smith <gasmith@nutanix.com>
+ Justin Akers <justin.akers@opengear.com>
+ Oliver Mangold <o.mangold@gmail.com>
+ Geoff Williams <geoff.williams@puppetlabs.com>
+ Florian Chazal <florianchazal@gmail.com>
+ Dimitar Dimitrov <mitkofr@yahoo.fr>
+ Cédric Bosdonnat <cedric.bosdonnat@free.fr>
+ Christoph Maser <cmaser@gmx.de>
+ Chris Reeves <chris.reeves@york.ac.uk>
+ Gerlof Fokkema <gerlof.fokkema@gmail.com>
+ Daniel Trebbien <dtrebbien@gmail.com>
+ Robert Moucha <robert.moucha@gooddata.com>
+ Craig Miskell <craig@catalyst.net.nz>
+ Anton Baranov <abaranov@linuxfoundation.org>
+ Josef Reidinger <jreidinger@suse.cz>
+ James Valleroy <jvalleroy@mailbox.org>
+ Pavel Chechetin <pchechetin@mirantis.com>
+ Pedro Valero Mejia <pedro.valero.mejia@gmail.com>
+ David Farrell <davidpfarrell+github@gmail.com>
+ Nathan Ward <nward@braintrust.co.nz>
+ Xavier Mol <xavier.mol@kit.edu>
+ Nicolas Gif <nicolas.gif@free.fr>
+ Jason A. Smith <smithj4@bnl.gov>
+ George Hansper <george@hansper.id.au>
+ Heston Snodgrass <heston.snodgrass@puppet.com>
-$Id: COPYING,v 1.3 2006-10-26 16:20:28 eggert Exp $
-The files in here are mostly copyright (C) Free Software Foundation, and
-are under assorted licenses. Mostly, but not entirely, GPL.
-
-Many modules are provided dual-license, either GPL or LGPL at your
-option. The headers of files in the lib directory (e.g., lib/error.c)
-state GPL for convenience, since the bulk of current gnulib users are
-GPL'd programs. But the files in the modules directory (e.g.,
-modules/error) state the true license of each file, and when you use
-'gnulib-tool --lgpl --import <modules>', gnulib-tool either rewrites
-the files to have an LGPL header as part of copying them from gnulib
-to your project directory, or fails because the modules you requested
-were not licensed under LGPL.
-
-Some of the source files in lib/ have different licenses. Also, the
-copy of maintain.texi in doc/ has a verbatim-copying license, and
-doc/standards.texi and make-stds.texi are GFDL. Most (but not all)
-m4/*.m4 files have nearly unlimited licenses.
+
+ GNU LESSER GENERAL PUBLIC LICENSE
+ Version 2.1, February 1999
+
+ Copyright (C) 1991, 1999 Free Software Foundation, Inc.
+ 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+[This is the first released version of the Lesser GPL. It also counts
+ as the successor of the GNU Library Public License, version 2, hence
+ the version number 2.1.]
+
+ Preamble
+
+ The licenses for most software are designed to take away your
+freedom to share and change it. By contrast, the GNU General Public
+Licenses are intended to guarantee your freedom to share and change
+free software--to make sure the software is free for all its users.
+
+ This license, the Lesser General Public License, applies to some
+specially designated software packages--typically libraries--of the
+Free Software Foundation and other authors who decide to use it. You
+can use it too, but we suggest you first think carefully about whether
+this license or the ordinary General Public License is the better
+strategy to use in any particular case, based on the explanations
+below.
+
+ When we speak of free software, we are referring to freedom of use,
+not price. Our General Public Licenses are designed to make sure that
+you have the freedom to distribute copies of free software (and charge
+for this service if you wish); that you receive source code or can get
+it if you want it; that you can change the software and use pieces of
+it in new free programs; and that you are informed that you can do
+these things.
+
+ To protect your rights, we need to make restrictions that forbid
+distributors to deny you these rights or to ask you to surrender these
+rights. These restrictions translate to certain responsibilities for
+you if you distribute copies of the library or if you modify it.
+
+ For example, if you distribute copies of the library, whether gratis
+or for a fee, you must give the recipients all the rights that we gave
+you. You must make sure that they, too, receive or can get the source
+code. If you link other code with the library, you must provide
+complete object files to the recipients, so that they can relink them
+with the library after making changes to the library and recompiling
+it. And you must show them these terms so they know their rights.
+
+ We protect your rights with a two-step method: (1) we copyright the
+library, and (2) we offer you this license, which gives you legal
+permission to copy, distribute and/or modify the library.
+
+ To protect each distributor, we want to make it very clear that
+there is no warranty for the free library. Also, if the library is
+modified by someone else and passed on, the recipients should know
+that what they have is not the original version, so that the original
+author's reputation will not be affected by problems that might be
+introduced by others.
+ Finally, software patents pose a constant threat to the existence of
+any free program. We wish to make sure that a company cannot
+effectively restrict the users of a free program by obtaining a
+restrictive license from a patent holder. Therefore, we insist that
+any patent license obtained for a version of the library must be
+consistent with the full freedom of use specified in this license.
+
+ Most GNU software, including some libraries, is covered by the
+ordinary GNU General Public License. This license, the GNU Lesser
+General Public License, applies to certain designated libraries, and
+is quite different from the ordinary General Public License. We use
+this license for certain libraries in order to permit linking those
+libraries into non-free programs.
+
+ When a program is linked with a library, whether statically or using
+a shared library, the combination of the two is legally speaking a
+combined work, a derivative of the original library. The ordinary
+General Public License therefore permits such linking only if the
+entire combination fits its criteria of freedom. The Lesser General
+Public License permits more lax criteria for linking other code with
+the library.
+
+ We call this license the "Lesser" General Public License because it
+does Less to protect the user's freedom than the ordinary General
+Public License. It also provides other free software developers Less
+of an advantage over competing non-free programs. These disadvantages
+are the reason we use the ordinary General Public License for many
+libraries. However, the Lesser license provides advantages in certain
+special circumstances.
+
+ For example, on rare occasions, there may be a special need to
+encourage the widest possible use of a certain library, so that it
+becomes a de-facto standard. To achieve this, non-free programs must
+be allowed to use the library. A more frequent case is that a free
+library does the same job as widely used non-free libraries. In this
+case, there is little to gain by limiting the free library to free
+software only, so we use the Lesser General Public License.
+
+ In other cases, permission to use a particular library in non-free
+programs enables a greater number of people to use a large body of
+free software. For example, permission to use the GNU C Library in
+non-free programs enables many more people to use the whole GNU
+operating system, as well as its variant, the GNU/Linux operating
+system.
+
+ Although the Lesser General Public License is Less protective of the
+users' freedom, it does ensure that the user of a program that is
+linked with the Library has the freedom and the wherewithal to run
+that program using a modified version of the Library.
+
+ The precise terms and conditions for copying, distribution and
+modification follow. Pay close attention to the difference between a
+"work based on the library" and a "work that uses the library". The
+former contains code derived from the library, whereas the latter must
+be combined with the library in order to run.
+ GNU LESSER GENERAL PUBLIC LICENSE
+ TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
+
+ 0. This License Agreement applies to any software library or other
+program which contains a notice placed by the copyright holder or
+other authorized party saying it may be distributed under the terms of
+this Lesser General Public License (also called "this License").
+Each licensee is addressed as "you".
+
+ A "library" means a collection of software functions and/or data
+prepared so as to be conveniently linked with application programs
+(which use some of those functions and data) to form executables.
+
+ The "Library", below, refers to any such software library or work
+which has been distributed under these terms. A "work based on the
+Library" means either the Library or any derivative work under
+copyright law: that is to say, a work containing the Library or a
+portion of it, either verbatim or with modifications and/or translated
+straightforwardly into another language. (Hereinafter, translation is
+included without limitation in the term "modification".)
+
+ "Source code" for a work means the preferred form of the work for
+making modifications to it. For a library, complete source code means
+all the source code for all modules it contains, plus any associated
+interface definition files, plus the scripts used to control
+compilation and installation of the library.
+
+ Activities other than copying, distribution and modification are not
+covered by this License; they are outside its scope. The act of
+running a program using the Library is not restricted, and output from
+such a program is covered only if its contents constitute a work based
+on the Library (independent of the use of the Library in a tool for
+writing it). Whether that is true depends on what the Library does
+and what the program that uses the Library does.
+
+ 1. You may copy and distribute verbatim copies of the Library's
+complete source code as you receive it, in any medium, provided that
+you conspicuously and appropriately publish on each copy an
+appropriate copyright notice and disclaimer of warranty; keep intact
+all the notices that refer to this License and to the absence of any
+warranty; and distribute a copy of this License along with the
+Library.
+
+ You may charge a fee for the physical act of transferring a copy,
+and you may at your option offer warranty protection in exchange for a
+fee.
+ 2. You may modify your copy or copies of the Library or any portion
+of it, thus forming a work based on the Library, and copy and
+distribute such modifications or work under the terms of Section 1
+above, provided that you also meet all of these conditions:
+
+ a) The modified work must itself be a software library.
+
+ b) You must cause the files modified to carry prominent notices
+ stating that you changed the files and the date of any change.
+
+ c) You must cause the whole of the work to be licensed at no
+ charge to all third parties under the terms of this License.
+
+ d) If a facility in the modified Library refers to a function or a
+ table of data to be supplied by an application program that uses
+ the facility, other than as an argument passed when the facility
+ is invoked, then you must make a good faith effort to ensure that,
+ in the event an application does not supply such function or
+ table, the facility still operates, and performs whatever part of
+ its purpose remains meaningful.
+
+ (For example, a function in a library to compute square roots has
+ a purpose that is entirely well-defined independent of the
+ application. Therefore, Subsection 2d requires that any
+ application-supplied function or table used by this function must
+ be optional: if the application does not supply it, the square
+ root function must still compute square roots.)
+
+These requirements apply to the modified work as a whole. If
+identifiable sections of that work are not derived from the Library,
+and can be reasonably considered independent and separate works in
+themselves, then this License, and its terms, do not apply to those
+sections when you distribute them as separate works. But when you
+distribute the same sections as part of a whole which is a work based
+on the Library, the distribution of the whole must be on the terms of
+this License, whose permissions for other licensees extend to the
+entire whole, and thus to each and every part regardless of who wrote
+it.
+
+Thus, it is not the intent of this section to claim rights or contest
+your rights to work written entirely by you; rather, the intent is to
+exercise the right to control the distribution of derivative or
+collective works based on the Library.
+
+In addition, mere aggregation of another work not based on the Library
+with the Library (or with a work based on the Library) on a volume of
+a storage or distribution medium does not bring the other work under
+the scope of this License.
+
+ 3. You may opt to apply the terms of the ordinary GNU General Public
+License instead of this License to a given copy of the Library. To do
+this, you must alter all the notices that refer to this License, so
+that they refer to the ordinary GNU General Public License, version 2,
+instead of to this License. (If a newer version than version 2 of the
+ordinary GNU General Public License has appeared, then you can specify
+that version instead if you wish.) Do not make any other change in
+these notices.
+ Once this change is made in a given copy, it is irreversible for
+that copy, so the ordinary GNU General Public License applies to all
+subsequent copies and derivative works made from that copy.
+
+ This option is useful when you wish to copy part of the code of
+the Library into a program that is not a library.
+
+ 4. You may copy and distribute the Library (or a portion or
+derivative of it, under Section 2) in object code or executable form
+under the terms of Sections 1 and 2 above provided that you accompany
+it with the complete corresponding machine-readable source code, which
+must be distributed under the terms of Sections 1 and 2 above on a
+medium customarily used for software interchange.
+
+ If distribution of object code is made by offering access to copy
+from a designated place, then offering equivalent access to copy the
+source code from the same place satisfies the requirement to
+distribute the source code, even though third parties are not
+compelled to copy the source along with the object code.
+
+ 5. A program that contains no derivative of any portion of the
+Library, but is designed to work with the Library by being compiled or
+linked with it, is called a "work that uses the Library". Such a
+work, in isolation, is not a derivative work of the Library, and
+therefore falls outside the scope of this License.
+
+ However, linking a "work that uses the Library" with the Library
+creates an executable that is a derivative of the Library (because it
+contains portions of the Library), rather than a "work that uses the
+library". The executable is therefore covered by this License.
+Section 6 states terms for distribution of such executables.
+
+ When a "work that uses the Library" uses material from a header file
+that is part of the Library, the object code for the work may be a
+derivative work of the Library even though the source code is not.
+Whether this is true is especially significant if the work can be
+linked without the Library, or if the work is itself a library. The
+threshold for this to be true is not precisely defined by law.
+
+ If such an object file uses only numerical parameters, data
+structure layouts and accessors, and small macros and small inline
+functions (ten lines or less in length), then the use of the object
+file is unrestricted, regardless of whether it is legally a derivative
+work. (Executables containing this object code plus portions of the
+Library will still fall under Section 6.)
+
+ Otherwise, if the work is a derivative of the Library, you may
+distribute the object code for the work under the terms of Section 6.
+Any executables containing that work also fall under Section 6,
+whether or not they are linked directly with the Library itself.
+ 6. As an exception to the Sections above, you may also combine or
+link a "work that uses the Library" with the Library to produce a
+work containing portions of the Library, and distribute that work
+under terms of your choice, provided that the terms permit
+modification of the work for the customer's own use and reverse
+engineering for debugging such modifications.
+
+ You must give prominent notice with each copy of the work that the
+Library is used in it and that the Library and its use are covered by
+this License. You must supply a copy of this License. If the work
+during execution displays copyright notices, you must include the
+copyright notice for the Library among them, as well as a reference
+directing the user to the copy of this License. Also, you must do one
+of these things:
+
+ a) Accompany the work with the complete corresponding
+ machine-readable source code for the Library including whatever
+ changes were used in the work (which must be distributed under
+ Sections 1 and 2 above); and, if the work is an executable linked
+ with the Library, with the complete machine-readable "work that
+ uses the Library", as object code and/or source code, so that the
+ user can modify the Library and then relink to produce a modified
+ executable containing the modified Library. (It is understood
+ that the user who changes the contents of definitions files in the
+ Library will not necessarily be able to recompile the application
+ to use the modified definitions.)
+
+ b) Use a suitable shared library mechanism for linking with the
+ Library. A suitable mechanism is one that (1) uses at run time a
+ copy of the library already present on the user's computer system,
+ rather than copying library functions into the executable, and (2)
+ will operate properly with a modified version of the library, if
+ the user installs one, as long as the modified version is
+ interface-compatible with the version that the work was made with.
+
+ c) Accompany the work with a written offer, valid for at least
+ three years, to give the same user the materials specified in
+ Subsection 6a, above, for a charge no more than the cost of
+ performing this distribution.
+
+ d) If distribution of the work is made by offering access to copy
+ from a designated place, offer equivalent access to copy the above
+ specified materials from the same place.
+
+ e) Verify that the user has already received a copy of these
+ materials or that you have already sent this user a copy.
+
+ For an executable, the required form of the "work that uses the
+Library" must include any data and utility programs needed for
+reproducing the executable from it. However, as a special exception,
+the materials to be distributed need not include anything that is
+normally distributed (in either source or binary form) with the major
+components (compiler, kernel, and so on) of the operating system on
+which the executable runs, unless that component itself accompanies
+the executable.
+
+ It may happen that this requirement contradicts the license
+restrictions of other proprietary libraries that do not normally
+accompany the operating system. Such a contradiction means you cannot
+use both them and the Library together in an executable that you
+distribute.
+ 7. You may place library facilities that are a work based on the
+Library side-by-side in a single library together with other library
+facilities not covered by this License, and distribute such a combined
+library, provided that the separate distribution of the work based on
+the Library and of the other library facilities is otherwise
+permitted, and provided that you do these two things:
+
+ a) Accompany the combined library with a copy of the same work
+ based on the Library, uncombined with any other library
+ facilities. This must be distributed under the terms of the
+ Sections above.
+
+ b) Give prominent notice with the combined library of the fact
+ that part of it is a work based on the Library, and explaining
+ where to find the accompanying uncombined form of the same work.
+
+ 8. You may not copy, modify, sublicense, link with, or distribute
+the Library except as expressly provided under this License. Any
+attempt otherwise to copy, modify, sublicense, link with, or
+distribute the Library is void, and will automatically terminate your
+rights under this License. However, parties who have received copies,
+or rights, from you under this License will not have their licenses
+terminated so long as such parties remain in full compliance.
+
+ 9. You are not required to accept this License, since you have not
+signed it. However, nothing else grants you permission to modify or
+distribute the Library or its derivative works. These actions are
+prohibited by law if you do not accept this License. Therefore, by
+modifying or distributing the Library (or any work based on the
+Library), you indicate your acceptance of this License to do so, and
+all its terms and conditions for copying, distributing or modifying
+the Library or works based on it.
+
+ 10. Each time you redistribute the Library (or any work based on the
+Library), the recipient automatically receives a license from the
+original licensor to copy, distribute, link with or modify the Library
+subject to these terms and conditions. You may not impose any further
+restrictions on the recipients' exercise of the rights granted herein.
+You are not responsible for enforcing compliance by third parties with
+this License.
+ 11. If, as a consequence of a court judgment or allegation of patent
+infringement or for any other reason (not limited to patent issues),
+conditions are imposed on you (whether by court order, agreement or
+otherwise) that contradict the conditions of this License, they do not
+excuse you from the conditions of this License. If you cannot
+distribute so as to satisfy simultaneously your obligations under this
+License and any other pertinent obligations, then as a consequence you
+may not distribute the Library at all. For example, if a patent
+license would not permit royalty-free redistribution of the Library by
+all those who receive copies directly or indirectly through you, then
+the only way you could satisfy both it and this License would be to
+refrain entirely from distribution of the Library.
+
+If any portion of this section is held invalid or unenforceable under
+any particular circumstance, the balance of the section is intended to
+apply, and the section as a whole is intended to apply in other
+circumstances.
+
+It is not the purpose of this section to induce you to infringe any
+patents or other property right claims or to contest validity of any
+such claims; this section has the sole purpose of protecting the
+integrity of the free software distribution system which is
+implemented by public license practices. Many people have made
+generous contributions to the wide range of software distributed
+through that system in reliance on consistent application of that
+system; it is up to the author/donor to decide if he or she is willing
+to distribute software through any other system and a licensee cannot
+impose that choice.
+
+This section is intended to make thoroughly clear what is believed to
+be a consequence of the rest of this License.
+
+ 12. If the distribution and/or use of the Library is restricted in
+certain countries either by patents or by copyrighted interfaces, the
+original copyright holder who places the Library under this License
+may add an explicit geographical distribution limitation excluding those
+countries, so that distribution is permitted only in or among
+countries not thus excluded. In such case, this License incorporates
+the limitation as if written in the body of this License.
+
+ 13. The Free Software Foundation may publish revised and/or new
+versions of the Lesser General Public License from time to time.
+Such new versions will be similar in spirit to the present version,
+but may differ in detail to address new problems or concerns.
+
+Each version is given a distinguishing version number. If the Library
+specifies a version number of this License which applies to it and
+"any later version", you have the option of following the terms and
+conditions either of that version or of any later version published by
+the Free Software Foundation. If the Library does not specify a
+license version number, you may choose any version ever published by
+the Free Software Foundation.
+ 14. If you wish to incorporate parts of the Library into other free
+programs whose distribution conditions are incompatible with these,
+write to the author to ask for permission. For software which is
+copyrighted by the Free Software Foundation, write to the Free
+Software Foundation; we sometimes make exceptions for this. Our
+decision will be guided by the two goals of preserving the free status
+of all derivatives of our free software and of promoting the sharing
+and reuse of software generally.
+
+ NO WARRANTY
+
+ 15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO
+WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW.
+EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR
+OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY
+KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE
+LIBRARY IS WITH YOU. SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME
+THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
+
+ 16. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN
+WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY
+AND/OR REDISTRIBUTE THE LIBRARY AS PERMITTED ABOVE, BE LIABLE TO YOU
+FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR
+CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE
+LIBRARY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING
+RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A
+FAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF
+SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH
+DAMAGES.
+
+ END OF TERMS AND CONDITIONS
+ How to Apply These Terms to Your New Libraries
+
+ If you develop a new library, and you want it to be of the greatest
+possible use to the public, we recommend making it free software that
+everyone can redistribute and change. You can do so by permitting
+redistribution under these terms (or, alternatively, under the terms
+of the ordinary General Public License).
+
+ To apply these terms, attach the following notices to the library.
+It is safest to attach them to the start of each source file to most
+effectively convey the exclusion of warranty; and each file should
+have at least the "copyright" line and a pointer to where the full
+notice is found.
+
+
+ <one line to give the library's name and a brief idea of what it does.>
+ Copyright (C) <year> <name of author>
+
+ This library is free software; you can redistribute it and/or
+ modify it under the terms of the GNU Lesser General Public
+ License as published by the Free Software Foundation; either
+ version 2.1 of the License, or (at your option) any later version.
+
+ This library is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ Lesser General Public License for more details.
+
+ You should have received a copy of the GNU Lesser General Public
+ License along with this library; if not, write to the Free Software
+ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+Also add information on how to contact you by electronic and paper mail.
+
+You should also get your employer (if you work as a programmer) or
+your school, if any, to sign a "copyright disclaimer" for the library,
+if necessary. Here is a sample; alter the names:
+
+ Yoyodyne, Inc., hereby disclaims all copyright interest in the
+ library `Frob' (a library for tweaking knobs) written by James
+ Random Hacker.
+
+ <signature of Ty Coon>, 1 April 1990
+ Ty Coon, President of Vice
+
+That's all there is to it!
+
+
--- /dev/null
+This file explains some details about developing the Augeas C library.
+
+# Check out the sources
+
+The sources are in a git repo; to check it out run
+
+```
+ git clone git://github.com/hercules-team/augeas
+```
+
+# Building from git
+
+Besides the usual build tools (gcc, autoconf, automake etc.) you need the
+following tools and libraries to build Augeas:
+
+* Bison
+* Flex
+* readline-devel
+* libxml2-devel
+* libselinux-devel (optional)
+
+Augeas uses gnulib, so you need a gnulib checkout. The build scripts
+can create one for you behind the scenes; if you already have a gnulib
+checkout, you can pass its location to autogen.sh with the
+--gnulib-srcdir option.
+
+At its simplest, you build Augeas from git by running the following
+commands in the toplevel directory of your Augeas checkout:
+
+```
+./autogen.sh [--gnulib-srcdir=$GNULIB_CHECKOUT]
+make && make check && make install
+```
+
+It is recommended, though, to turn on a few development features when
+building, in particular stricter compiler warnings and some debug
+logging. You can pass these options either to autogen.sh or to
+configure. You would then run autogen like this:
+
+```
+./autogen.sh --enable-compile-warnings=error --enable-debug=yes
+```
+
+# Running augtool
+
+The script ./src/try can be used to run ./src/augtool against a fresh
+filesystem root. It copies the files from tests/root/ to build/try/ and
+starts augtool against that root, using the lenses from lenses/ (and none
+of the ones that might be installed on your system).
+
+The script can be used as follows; OPTS are options that are passed
+to augtool verbatim:
+
+* `./src/try OPTS`: run the commands from `build/augcmds.txt`
+* `./src/try cli OPTS`: start an interactive session with augtool
+* `./src/try gdb`: start gdb and set it up from `build/gdbcmds.txt` for
+ debugging augtool with commands in `build/augcmds.txt`
+* `./src/try valgrind`: run the commands from `build/augcmds.txt` through
+ augtool under valgrind to check for memory leaks
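+
+The commands in `build/augcmds.txt` are plain augtool input. As a minimal
+sketch (hypothetical file contents; adjust the paths to whatever files
+exist under `tests/root/`):
+
+```
+print /augeas/version
+print /files/etc/hosts
+```
+
+Running `./src/try` then executes these commands against the copied tree
+under `build/try/`.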
+
+Furthermore, the test suite invoked with `make check` includes a test
+called `test-get.sh`, which ensures that reading the files in
+`tests/root/` with `augtool` does not lead to any errors. (It does not
+verify the parsed syntax tree, however; you'll have to extend the
+individual lens tests under `lenses/tests/` for that.)
+
+# Platform specific notes
+
+## Mac OS X
+
+OS X comes with a crippled reimplementation of readline, `libedit`; while
+Augeas will build against `libedit`, you can get full readline
+functionality by installing the `readline` package from
+[Homebrew](http://brew.sh/) and setting the following:
+```sh
+export CPPFLAGS=-I/usr/local/opt/readline/include
+export LDFLAGS=-L/usr/local/opt/readline/lib
+```
--- /dev/null
+Installation Instructions
+*************************
+
+Copyright (C) 1994, 1995, 1996, 1999, 2000, 2001, 2002, 2004, 2005,
+2006 Free Software Foundation, Inc.
+
+This file is free documentation; the Free Software Foundation gives
+unlimited permission to copy, distribute and modify it.
+
+Basic Installation
+==================
+
+Briefly, the shell commands `./configure; make; make install' should
+configure, build, and install this package. The following
+more-detailed instructions are generic; see the `README' file for
+instructions specific to this package.
+
+ The `configure' shell script attempts to guess correct values for
+various system-dependent variables used during compilation. It uses
+those values to create a `Makefile' in each directory of the package.
+It may also create one or more `.h' files containing system-dependent
+definitions. Finally, it creates a shell script `config.status' that
+you can run in the future to recreate the current configuration, and a
+file `config.log' containing compiler output (useful mainly for
+debugging `configure').
+
+ It can also use an optional file (typically called `config.cache'
+and enabled with `--cache-file=config.cache' or simply `-C') that saves
+the results of its tests to speed up reconfiguring. Caching is
+disabled by default to prevent problems with accidental use of stale
+cache files.
+
+ If you need to do unusual things to compile the package, please try
+to figure out how `configure' could check whether to do them, and mail
+diffs or instructions to the address given in the `README' so they can
+be considered for the next release. If you are using the cache, and at
+some point `config.cache' contains results you don't want to keep, you
+may remove or edit it.
+
+ The file `configure.ac' (or `configure.in') is used to create
+`configure' by a program called `autoconf'. You need `configure.ac' if
+you want to change it or regenerate `configure' using a newer version
+of `autoconf'.
+
+The simplest way to compile this package is:
+
+ 1. `cd' to the directory containing the package's source code and type
+ `./configure' to configure the package for your system.
+
+ Running `configure' might take a while. While running, it prints
+ some messages telling which features it is checking for.
+
+ 2. Type `make' to compile the package.
+
+ 3. Optionally, type `make check' to run any self-tests that come with
+ the package.
+
+ 4. Type `make install' to install the programs and any data files and
+ documentation.
+
+ 5. You can remove the program binaries and object files from the
+ source code directory by typing `make clean'. To also remove the
+ files that `configure' created (so you can compile the package for
+ a different kind of computer), type `make distclean'. There is
+ also a `make maintainer-clean' target, but that is intended mainly
+ for the package's developers. If you use it, you may have to get
+ all sorts of other programs in order to regenerate files that came
+ with the distribution.
+
+Compilers and Options
+=====================
+
+Some systems require unusual options for compilation or linking that the
+`configure' script does not know about. Run `./configure --help' for
+details on some of the pertinent environment variables.
+
+ You can give `configure' initial values for configuration parameters
+by setting variables in the command line or in the environment. Here
+is an example:
+
+ ./configure CC=c99 CFLAGS=-g LIBS=-lposix
+
+ *Note Defining Variables::, for more details.
+
+Compiling For Multiple Architectures
+====================================
+
+You can compile the package for more than one kind of computer at the
+same time, by placing the object files for each architecture in their
+own directory. To do this, you can use GNU `make'. `cd' to the
+directory where you want the object files and executables to go and run
+the `configure' script. `configure' automatically checks for the
+source code in the directory that `configure' is in and in `..'.
+
+ With a non-GNU `make', it is safer to compile the package for one
+architecture at a time in the source code directory. After you have
+installed the package for one architecture, use `make distclean' before
+reconfiguring for another architecture.
+
+Installation Names
+==================
+
+By default, `make install' installs the package's commands under
+`/usr/local/bin', include files under `/usr/local/include', etc. You
+can specify an installation prefix other than `/usr/local' by giving
+`configure' the option `--prefix=PREFIX'.
+
+ You can specify separate installation prefixes for
+architecture-specific files and architecture-independent files. If you
+pass the option `--exec-prefix=PREFIX' to `configure', the package uses
+PREFIX as the prefix for installing programs and libraries.
+Documentation and other data files still use the regular prefix.
+
+ In addition, if you use an unusual directory layout you can give
+options like `--bindir=DIR' to specify different values for particular
+kinds of files. Run `configure --help' for a list of the directories
+you can set and what kinds of files go in them.
+
+ If the package supports it, you can cause programs to be installed
+with an extra prefix or suffix on their names by giving `configure' the
+option `--program-prefix=PREFIX' or `--program-suffix=SUFFIX'.
+
+Optional Features
+=================
+
+Some packages pay attention to `--enable-FEATURE' options to
+`configure', where FEATURE indicates an optional part of the package.
+They may also pay attention to `--with-PACKAGE' options, where PACKAGE
+is something like `gnu-as' or `x' (for the X Window System). The
+`README' should mention any `--enable-' and `--with-' options that the
+package recognizes.
+
+ For packages that use the X Window System, `configure' can usually
+find the X include and library files automatically, but if it doesn't,
+you can use the `configure' options `--x-includes=DIR' and
+`--x-libraries=DIR' to specify their locations.
+
+Specifying the System Type
+==========================
+
+There may be some features `configure' cannot figure out automatically,
+but needs to determine by the type of machine the package will run on.
+Usually, assuming the package is built to be run on the _same_
+architectures, `configure' can figure that out, but if it prints a
+message saying it cannot guess the machine type, give it the
+`--build=TYPE' option. TYPE can either be a short name for the system
+type, such as `sun4', or a canonical name which has the form:
+
+ CPU-COMPANY-SYSTEM
+
+where SYSTEM can have one of these forms:
+
+ OS KERNEL-OS
+
+ See the file `config.sub' for the possible values of each field. If
+`config.sub' isn't included in this package, then this package doesn't
+need to know the machine type.
+
+ If you are _building_ compiler tools for cross-compiling, you should
+use the option `--target=TYPE' to select the type of system they will
+produce code for.
+
+ If you want to _use_ a cross compiler, that generates code for a
+platform different from the build platform, you should specify the
+"host" platform (i.e., that on which the generated programs will
+eventually be run) with `--host=TYPE'.
+
+Sharing Defaults
+================
+
+If you want to set default values for `configure' scripts to share, you
+can create a site shell script called `config.site' that gives default
+values for variables like `CC', `cache_file', and `prefix'.
+`configure' looks for `PREFIX/share/config.site' if it exists, then
+`PREFIX/etc/config.site' if it exists. Or, you can set the
+`CONFIG_SITE' environment variable to the location of the site script.
+A warning: not all `configure' scripts look for a site script.
+
+Defining Variables
+==================
+
+Variables not defined in a site shell script can be set in the
+environment passed to `configure'. However, some packages may run
+configure again during the build, and the customized values of these
+variables may be lost. In order to avoid this problem, you should set
+them in the `configure' command line, using `VAR=value'. For example:
+
+ ./configure CC=/usr/local2/bin/gcc
+
+causes the specified `gcc' to be used as the C compiler (unless it is
+overridden in the site shell script).
+
+Unfortunately, this technique does not work for `CONFIG_SHELL' due to
+an Autoconf bug. Until the bug is fixed you can use this workaround:
+
+ CONFIG_SHELL=/bin/bash /bin/bash ./configure CONFIG_SHELL=/bin/bash
+
+`configure' Invocation
+======================
+
+`configure' recognizes the following options to control how it operates.
+
+`--help'
+`-h'
+ Print a summary of the options to `configure', and exit.
+
+`--version'
+`-V'
+ Print the version of Autoconf used to generate the `configure'
+ script, and exit.
+
+`--cache-file=FILE'
+ Enable the cache: use and save the results of the tests in FILE,
+ traditionally `config.cache'. FILE defaults to `/dev/null' to
+ disable caching.
+
+`--config-cache'
+`-C'
+ Alias for `--cache-file=config.cache'.
+
+`--quiet'
+`--silent'
+`-q'
+ Do not print messages saying which checks are being made. To
+ suppress all normal output, redirect it to `/dev/null' (any error
+ messages will still be shown).
+
+`--srcdir=DIR'
+ Look for the package's source code in directory DIR. Usually
+ `configure' can determine that directory automatically.
+
+`configure' also accepts some other, not widely useful, options. Run
+`configure --help' for more details.
+
--- /dev/null
+SUBDIRS = gnulib/lib src
+if ENABLE_GNULIB_TESTS
+SUBDIRS += gnulib/tests
+endif
+SUBDIRS += tests man doc examples
+
+ACLOCAL_AMFLAGS = -I gnulib/m4
+
+lensdir=$(datadir)/augeas/lenses/dist
+lenstestdir=$(datadir)/augeas/lenses/dist/tests
+
+dist_lens_DATA=$(wildcard lenses/*.aug)
+dist_lenstest_DATA=$(wildcard lenses/tests/*.aug)
+
+EXTRA_DIST=augeas.spec build/ac-aux/move-if-change Makefile.am HACKING.md
+
+pkgconfigdir = $(libdir)/pkgconfig
+pkgconfig_DATA = augeas.pc
+
+distclean-local:
+ -find $(top_builddir)/build/* -maxdepth 0 -not -name ac-aux | xargs rm -rf
+
+ChangeLog:
+ if test -d $(top_srcdir)/.git; then \
+ $(top_srcdir)/build/ac-aux/gitlog-to-changelog > $@; \
+ fi
+
+dist: ChangeLog
+
+.PHONY: ChangeLog
--- /dev/null
+# -*- Makefile-automake -*-
+#
+# Support for running programs with failmalloc preloaded. Include in other
+# automake files and make sure the following variables are set:
+#
+# FAILMALLOC_START - number of first FAILMALLOC_INTERVAL
+# FAILMALLOC_REP - how often to repeat with increasing FAILMALLOC_INTERVAL
+# FAILMALLOC_PROG - the program to run with libfailmalloc preloaded
+
+if WITH_FAILMALLOC
+failmalloc: failmalloc-run
+else
+failmalloc: failmalloc-error
+endif
+
+failmalloc-run: $(FAILMALLOC_PROG)
+ @(echo "Running $(FAILMALLOC_PROG) with failmalloc"; \
+ for i in $$(seq $(FAILMALLOC_START) $$(expr $(FAILMALLOC_START) + $(FAILMALLOC_REP) - 1)) ; do \
+ resp=$$(libtool --mode=execute env LD_PRELOAD=$(LIBFAILMALLOC) FAILMALLOC_INTERVAL=$$i $(FAILMALLOC_PROG)); \
+ status=$$?; \
+ if [ $$status -ne 0 -a $$status -ne 2 ] ; then \
+ printf "%5d FAIL %3d %s\n" $$i $$status "$$resp" ; \
+ elif [ x$(V) = x1 -o $$(( $$i % 100 )) -eq 0 ] ; then \
+ printf "%5d PASS %s\n" $$i "$$resp" ; \
+ fi \
+ done)
+
+failmalloc-error:
+ @(echo "You need to turn on failmalloc support with --with-failmalloc"; \
+ exit 1)
--- /dev/null
+# -*- makefile -*-
+
+# Targets useful for maintenance, making releases, etc. Some of them
+# depend on very specific local setups.
+
+include Makefile
+
+rpmbuild_dir=/data/rpmbuild/$(PACKAGE_NAME)-$(PACKAGE_VERSION)
+rpb_spec=$(rpmbuild_dir)/augeas.spec
+release_dir=weave:/var/www/sites/download.augeas.net/
+
+tarball=$(PACKAGE_NAME)-$(PACKAGE_VERSION).tar.gz
+
+# This only works with the way I have set up my .rpmmacros
+build-rpm:
+ test -d $(rpmbuild_dir) || mkdir $(rpmbuild_dir)
+ rm -f $(rpmbuild_dir)/$(tarball) $(rpb_spec)
+ ln -sf $(abs_top_srcdir)/$(tarball) $(rpmbuild_dir)
+ ln -sf $(abs_top_srcdir)/augeas.spec $(rpmbuild_dir)
+ rpmbuild -ba $(rpmbuild_dir)/augeas.spec
+
+upload:
+ @gpg -q --batch --verify $(tarball).sig > /dev/null 2>&1 || \
+ gpg --output $(tarball).sig --detach-sig $(tarball); \
+ rsync -v $(tarball) $(tarball).sig $(release_dir); \
+ git push --tags
+
+tag-release:
+ @git tag -s release-$(VERSION)
+
+# Print all the debug categories in use
+debug-categories:
+ @fgrep 'debugging("' src/*.c | sed -r -e 's/^.*debugging\("([^"]+)"\).*$$/\1/' | sort -u
+
+# This is how I run autogen.sh locally
+autogen:
+ ./autogen.sh CFLAGS=-g --prefix=/data/share/ --gnulib-srcdir=${HOME}/code/gnulib/ --enable-compile-warnings=error --enable-debug=yes
+
+.PHONY: build-rpm
-Important notes
----------------
-
-Date Modules Changes
-
-2015-04-24 acl This module no longer defines file_has_acl.
- Use the new file-has-acl module for that.
- Using only the latter module makes for fewer
- link-time dependencies on GNU/Linux.
-
-2015-04-15 acl If your project only uses the file_has_acl()
- detection routine, then the requirements are
- potentially reduced by using $LIB_HAS_ACL rather
- than $LIB_ACL.
-
-2013-04-24 gettext If your project uses 'gettextize --intl' it is now
- your responsibility to put -I$(top_builddir)/intl
- into the Makefile.am for gnulib.
-
-2012-06-27 elisp-comp The module 'elisp-comp' is removed; the script is
- not independently useful outside of automake.
-
-2012-06-21 gnulib-tool The option --with-tests is now implied by the
- options --create-testdir, --test,
- --create-megatestdir, --megatest.
-
-2012-01-07 quotearg In the C locale, the function will no longer use
- the grave accent character to begin a quoted
- string (`like this'). It will use apostrophes
- 'like these' or, in Unicode locales, single quotes
- ‘like these’. You may want to adjust any error
- messages that hard code the quoting characters.
-
-2010-09-04 gnulib-tool The option '--import' is no longer cumulative; it
- now expects the complete list of modules and other
- options on the command line. If you want to
- augment (not set) the list of modules, use the
- new option '--add-import' instead of '--import'.
-
-User visible incompatible changes
----------------------------------
-
-Date Modules Changes
-
-2016-09-05 progname This module is deprecated. Please switch to the
- 'getprogname' module and its getprogname()
- function to obtain the name of the current program.
- Note that there is no longer any need to export a
- 'const char *program_name' variable.
- Currently there is no replacement for
- set_program_name().
-
-2016-08-17 stdbool This no longer supports _Bool for C++.
- Programs intended to be portable to C++
- compilers should use plain 'bool' instead.
-
-2016-04-12 intprops The following macros were removed:
- TYPE_TWOS_COMPLEMENT TYPE_ONES_COMPLEMENT
- TYPE_SIGNED_MAGNITUDE
-
-2015-09-25 c-ctype The following macros were removed:
- C_CTYPE_CONSECUTIVE_DIGITS
- C_CTYPE_CONSECUTIVE_LOWERCASE
- C_CTYPE_CONSECUTIVE_UPPERCASE
-
-2015-09-22 savewd SAVEWD_CHDIR_READABLE constant removed.
-
-2015-07-24 fprintftime Exported functions' time zone arguments are now of
- strftime type timezone_t (with NULL denoting UTC) instead of
- type int (with nonzero denoting UTC). These
- modules now depend on time_rz.
-
-2015-04-03 hash hash_insert0 function removed (deprecated in 2011).
-
-2014-10-29 obstack The obstack functions are no longer limited to
- int sizes; size values are now of type size_t.
- This changes both the ABI and the API.
- obstack_blank no longer accepts a negative size to
- shrink the current object; callers must now use
- obstack_blank_fast with a "negative" (actually,
- large positive) size for that.
-
-2014-02-23 diffseq The members too_expensive, lo_minimal and hi_minimal
- were removed from public structures, and the
- find_minimal argument was removed from diag
- and compareseq.
-
-2014-02-11 savedir The savedir and streamsavedir functions have a
- new argument specifying how to sort the result.
- The fdsavedir function is removed.
-
-2013-05-04 gnulib-tool CVS checkout of gnulib are no longer supported.
-
-2013-02-08 careadlinkat This module no longer provides the careadlinkatcwd
- function.
-
-2012-06-26 getopt-posix This module no longer guarantees that option
- processing is resettable. If your code uses
- 'optreset' or 'optind = 0;', rewrite it to make
- only one pass over the argument array.
-
-2012-02-24 streq This module no longer provides the STREQ macro.
- Use STREQ_OPT instead.
-
-2012-01-10 ignore-value This module no longer provides the ignore_ptr
- function. It was deprecated a year ago, but existed
- so briefly before then that it never came into use.
- Now, the ignore_value function does its job.
-
-2011-11-18 hash This module deprecates the hash_insert0 function
- using gcc's "deprecated" attribute. Use the better-
- named hash_insert_if_absent equivalent.
-
-2011-11-04 openat This module no longer provides the mkdirat()
- function. If you need this function, you now need
- to request the 'mkdirat' module.
-
-2011-11-04 openat This module no longer provides the fstatat()
- function. If you need this function, you now need
- to request the 'fstatat' module.
-
-2011-11-03 openat This module no longer provides the unlinkat()
- function. If you need this function, you now need
- to request the 'unlinkat' module.
-
-2011-11-02 openat This module no longer provides the fchmodat()
- function. If you need this function, you now need
- to request the 'fchmodat' module.
-
-2011-11-01 alignof This module no longer provides the alignof() macro.
- Use either alignof_slot() or alignof_type() instead.
-
-2011-11-01 openat This module no longer provides the fchownat()
- function. If you need this function, you now need
- to request the 'fchownat' module.
-
-2011-10-03 poll The link requirements of this module are changed
- from empty to $(LIB_POLL).
-
-2011-09-25 sys_stat This module no longer provides the fstat()
- function. If you need this function, you now need
- to request the 'fstat' module.
-
-2011-09-23 signal This module is renamed to 'signal-h'.
-
-2011-09-22 select The link requirements of this module are changed
- from $(LIBSOCKET) to $(LIB_SELECT).
-
-2011-09-12 fchdir This module no longer overrides the functions
- opendir() and closedir(), unless the modules
- 'opendir' and 'closedir' are in use, respectively.
- If you use opendir(), please use module 'opendir'.
- If you use closedir(), please use module 'closedir'.
-
-2011-08-04 pathmax The header file "pathmax.h" no longer defines
- PATH_MAX on GNU/Hurd. Please use one of the methods
- listed in pathmax.h to ensure your package is
- portable to GNU/Hurd.
-
-2011-07-24 close This module no longer pulls in the 'fclose' module.
- If your code creates a socket descriptor using
- socket() or accept(), then a FILE stream referring
- to it using fdopen(), then in order to close this
- stream, you need the 'fclose' module.
-
-2011-07-12 arg-nonnull Renamed to snippet/arg-nonnull.
- c++defs Renamed to snippet/c++defs.
- link-warning Renamed to snippet/link-warning.
- unused-parameter Renamed to snippet/unused-parameter.
- warn-on-use Renamed to snippet/warn-on-use.
-
-2011-06-15 verify verify_true (V) is deprecated; please use
- verify_expr (V, 1) instead.
-
-2011-06-05 ansi-c++-opt When a C++ compiler is not found, the variable CXX
- is now set to "no", not to ":".
-
-2011-05-11 group-member The include file is changed from "group-member.h"
- to <unistd.h>.
-
-2011-05-02 exit The module is removed. It was deprecated
- on 2010-03-05. Use 'stdlib' directly instead.
-
-2011-04-27 mgetgroups The 'xgetgroups' function has been split into
- a new 'xgetgroups' module.
-
-2011-04-27 save-cwd This module pulls in fewer dependencies by
- default; to retain robust handling of directories
- with an absolute name longer than PATH_MAX, you
- must now explicitly include the 'getcwd' module.
-
-2011-04-19 close-hook This module has been renamed to 'fd-hook' and
- generalized.
-
-2011-03-08 regex-quote The last argument is no longer an 'int cflags'
- but instead a pointer to a previously constructed
- 'struct regex_quote_spec'.
-
-2011-02-25 dirname These modules no longer put #defines for the
- dirname-lgpl following symbols into <config.h>: ISSLASH,
- backupfile FILE_SYSTEM_ACCEPTS_DRIVE_LETTER_PREFIX,
- lstat FILE_SYSTEM_BACKSLASH_IS_FILE_NAME_SEPARATOR,
- openat FILE_SYSTEM_DRIVE_PREFIX_CAN_BE_RELATIVE.
- remove Applications that need ISSLASH can include the new
- rmdir header dosname.h.
- savewd
- stat
- unlink
-
-2011-02-14 getloadavg This module no longer #defines C_GETLOADAVG or
- HAVE_GETLOADAVG, as the application no longer needs
- to worry about how getloadavg is defined. It no
- longer defines the obsolete symbol NLIST_NAME_UNION
- (which should have been internal to the module
- anyway). Also, support for setgid use has been
- removed, as nobody seems to be using it; thus
- GETLOADAVG_PRIVILEGED is no longer #defined and
- KMEM_GROUP and NEED_SETGID are no longer
- substituted for.
-
-2011-02-08 stdlib Unless the random_r module is also used, this
- module no longer guarantees that the following are
- defined: struct random_data, RAND_MAX, random_r,
- srandom_r, initstate_r, setstate_r.
-
-2011-02-08 wctype-h This module no longer provides the iswblank()
- function. If you need this function, you now need
- to request the 'iswblank' module.
-
-2011-02-07 wctype This module is renamed to wctype-h.
-
-2011-01-18 multiarch This no longer #defines AA_APPLE_UNIVERSAL_BUILD;
- instead, use the shell var APPLE_UNIVERSAL_BUILD.
-
-2010-12-10 pipe This module is renamed to spawn-pipe. The include
- file is renamed to "spawn-pipe.h".
-
-2010-10-05 getdate This module is deprecated. Please use the new
- parse-datetime module for the replacement
- function parse_datetime(), or help us write
- getdate-posix for getdate(). Also, the header
- "getdate.h" has been renamed "parse-datetime.h",
- and doc/getdate.texi to doc/parse-datetime.texi.
-
-2010-09-29 sys_wait This module no longer provides the waitpid()
- function. If you need this function, you now need
- to request the 'waitpid' module.
-
-2010-09-17 utimens The function gl_futimens is removed, and its
- signature has been migrated to fdutimens. Callers
- of gl_futimens should change function name, and
- callers of fdutimens should swap parameter order.
-
-2010-09-17 fdutimensat This function has a new signature: the fd now comes
- first instead of the dir/name pair, and a new
- atflag parameter is added at the end. Old code
- should rearrange parameters, and pass 0 for atflag.
-
-2010-09-13 regex The module is not guaranteeing anymore support for
- 64-bit regoff_t on 64-bit systems. The size of
- regoff_t will always be 32-bit unless the program
- is being configured --with-included-regex. This
- may change again in the future once glibc provides
- this feature as well.
-
-2010-09-12 savedir The fdsavedir function is now deprecated.
-
-2010-09-10 fcntl-h This module now defaults O_CLOEXEC to 0, and
- it defaults O_EXEC and O_SEARCH to O_RDONLY.
- Use "#if O_CLOEXEC" instead of "#ifdef O_CLOEXEC".
-
-2010-08-28 realloc This module is deprecated. Use 'realloc-gnu'
- instead. It will be removed 2012-01-01.
-
-2010-08-28 calloc This module is deprecated. Use 'calloc-gnu'
- instead. It will be removed 2012-01-01.
-
-2010-08-28 malloc This module is deprecated. Use 'malloc-gnu'
- instead. It will be removed 2012-01-01.
-
-2010-08-14 memxfrm This module is renamed to amemxfrm. The include
- file is renamed to "amemxfrm.h". The function is
- renamed to amemxfrm.
-
-2010-08-09 symlinkat This module now only provides symlinkat; use the
- new module 'readlinkat' if needed.
-
-2010-07-31 ansi-c++-opt If Autoconf >= 2.66 is used, the 'configure'
- option is now called --disable-c++ rather than
- --disable-cxx.
-
-2010-04-02 maintainer-makefile
- The macro _prohibit_regexp has been revamped into
- a new macro _sc_search_regexp; custom syntax
- checks in your cfg.mk will need to be rewritten.
-
-2010-03-28 lib-ignore This module now provides a variable
- IGNORE_UNUSED_LIBRARIES_CFLAGS that you should
- add to LDFLAGS (when linking C programs only) or
- CFLAGS yourself. It is no longer added to LDFLAGS
- automatically.
-
-2010-03-18 pty This module now only declares the pty.h header.
- Use the new modules 'forkpty' or 'openpty' to
- get the functions that were previously provided.
-
-2010-03-05 exit This module is deprecated, use 'stdlib' directly
- instead. It will be removed 2011-01-01.
-
-2009-12-13 sublist The module does not define functions any more that
- call xalloc_die() in out-of-memory situations. Use
- module 'xsublist' and include file "gl_xsublist.h"
- instead.
-
-2009-12-13 list The module does not define functions any more that
- call xalloc_die() in out-of-memory situations.
- Use module 'xlist' and include file "gl_xlist.h"
- instead.
-
-2009-12-13 oset The module does not define functions any more that
- call xalloc_die() in out-of-memory situations.
- Use module 'xoset' and include file "gl_xoset.h"
- instead.
-
-2009-12-10 * Most source code files have been converted to
- indentation by spaces (rather than tabs). Patches
- of gnulib source code needs to be updated.
-
-2009-12-09 link-warning The Makefile rules that use $(LINK_WARNING_H) now
- must contain an explicit dependency on
- $(LINK_WARNING_H).
-
-2009-11-12 getgroups These functions now use a signature of gid_t,
- getugroups rather than GETGROUPS_T. This probably has no
- effect except on very old platforms.
-
-2009-11-04 tempname The gen_tempname function takes an additional
- 'suffixlen' argument. You can safely pass 0.
-
-2009-11-04 nproc The num_processors function now takes an argument.
-
-2009-11-02 inet_pton The use of this module now requires linking with
- $(INET_PTON_LIB).
-
-2009-11-02 inet_ntop The use of this module now requires linking with
- $(INET_NTOP_LIB).
-
-2009-10-10 utimens The use of this module now requires linking with
- $(LIB_CLOCK_GETTIME).
-
-2009-09-16 canonicalize-lgpl
- The include file is changed from "canonicalize.h"
- to <stdlib.h>.
-
-2009-09-04 link-follow The macro LINK_FOLLOWS_SYMLINK is now tri-state,
- rather than only defined to 1.
-
-2009-09-03 openat The include files are standardized to POSIX 2008.
- For openat, include <fcntl.h>; for
- fchmodat, fstatat, and mkdirat, include
- <sys/stat.h>; for fchownat and unlinkat,
- include <unistd.h>. For all other
- functions provided by this module,
- continue to include "openat.h".
-
-2009-08-30 striconveh The functions mem_cd_iconveh and str_cd_iconveh
- now take an 'iconveh_t *' argument instead of three
- iconv_t arguments.
-
-2009-08-23 tempname The gen_tempname function takes an additional
- 'flags' argument. You can safely pass 0.
-
-2009-08-12 getopt This module is deprecated. Please choose among
- getopt-posix and getopt-gnu. getopt-gnu provides
- "long options" and "options with optional
- arguments", getopt-posix doesn't.
-
-2009-06-25 fpurge The include file is changed from "fpurge.h" to
- <stdio.h>.
-
-2009-04-26 modules/uniconv/u8-conv-from-enc
- modules/uniconv/u16-conv-from-enc
- modules/uniconv/u32-conv-from-enc
- The calling convention of the functions
- u*_conv_from_encoding is changed.
-
-2009-04-26 modules/uniconv/u8-conv-to-enc
- modules/uniconv/u16-conv-to-enc
- modules/uniconv/u32-conv-to-enc
- The calling convention of the functions
- u*_conv_to_encoding is changed.
-
-2009-04-24 maintainer-makefile
- The maint.mk file was copied from
- coreutils, and the old
- coverage/gettext/indent rules were
- re-added. If you used 'make syntax-check'
- this will add several new checks. If some
- new check is annoying, add its name to
- 'local-checks-to-skip' in your cfg.mk.
-
-2009-04-01 visibility Renamed to lib-symbol-visibility.
-
-2009-04-01 ld-version-script Renamed to lib-symbol-versions.
-
-2009-03-20 close The substituted variable LIB_CLOSE is removed.
-
-2009-03-05 filevercmp Move hidden files up in ordering.
-
-2009-01-22 c-strtod This function no longer calls xalloc_die(). If
- c-strtold you want to exit the program in case of out-of-
- memory, the calling function needs to arrange
- for it, like this:
- errno = 0;
- val = c_strtod (...);
- if (val == 0 && errno == ENOMEM)
- xalloc_die ();
-
-2009-01-17 relocatable-prog In the Makefile.am or Makefile.in, you now also
- need to set RELOCATABLE_STRIP = :.
-
-2008-12-22 getaddrinfo When using this module, you now need to link with
- canon-host $(GETADDRINFO_LIB).
-
-2008-12-21 mbiter The header files "mbiter.h", "mbuiter.h",
- mbuiter "mbfile.h" can now be included without checking
- mbfile HAVE_MBRTOWC. The macro HAVE_MBRTOWC will no
- longer be defined by these modules in a year. If
- you want to continue to use it, you need to invoke
- AC_FUNC_MBRTOWC yourself.
-
-2008-11-11 warnings This module subsumes the file m4/warning.m4 which
- was removed.
-
-2008-10-20 lstat The include file is changed from "lstat.h" to
- <sys/stat.h>.
-
-2008-10-20 getaddrinfo The include file is changed from "getaddrinfo.h"
- to <netdb.h>.
-
-2008-10-19 isnanf The include file is changed from "isnanf.h" to
- <math.h>.
- isnand The include file is changed from "isnand.h" to
- <math.h>.
- isnanl The include file is changed from "isnanl.h" to
- <math.h>.
-
-2008-10-18 lchmod The include file is changed from "lchmod.h" to
- <sys/stat.h>.
-
-2008-10-18 dirfd The include file is changed from "dirfd.h" to
- <dirent.h>.
-
-2008-10-18 euidaccess The include file is changed from "euidaccess.h"
- to <unistd.h>.
-
-2008-10-18 getdomainname The include file is changed from "getdomainname.h"
- to <unistd.h>.
-
-2008-09-28 sockets When using this module, you now need to link with
- $(LIBSOCKET).
-
-2008-09-24 sys_select The limitation on 'select', introduced 2008-09-23,
- was removed. sys_select now includes a select
- wrapper for Winsock. The wrapper expects socket
- and file descriptors to be compatible as arranged
- by the sys_socket module on MinGW.
-
-2008-09-23 sys_socket Under Windows (MinGW), the module now adds
- wrappers around Winsock functions, so that
- socket descriptors are now compatible with
- file descriptors. In general, this change
- will simply improve your code's portability
- between POSIX platforms and Windows. In
- particular, you will be able to use ioctl and
- close instead of ioctlsocket and closesocket,
- and test errno instead of WSAGetLastError ().
- On the other hand, you have to audit your code to
- remove usage of these Winsock-specific functions.
-
- This change does not remove the need to call
- the gl_sockets_startup function from the sockets
- gnulib module. Also, for now select is disabled
- when you include the sys_socket module; while
- the functionality will be restored soon, for
- efficiency it is suggested to use the poll
- system call and gnulib module instead.
-
-2008-09-13 EOVERFLOW The module is removed. Use module errno instead.
-
-2008-09-01 filename The module does not define the function
- concatenated_filename any more. To get an
- equivalent function, use function
- xconcatenated_filename from module
- 'xconcat-filename'.
-
-2008-08-31 havelib On Solaris, when searching for 64-bit mode
- libraries the directory $prefix/lib is now ignored.
- Instead the directory $prefix/lib/64 is searched.
- You may need to create a symbolic link for
- $prefix/lib/64 if you have 64-bit libraries
- installed in $prefix/lib.
-
-2008-08-19 strverscmp The include file is changed from "strverscmp.h"
- to <string.h>.
-
-2008-08-14 lock The include file is changed from "lock.h"
- to "glthread/lock.h".
- tls The include file is changed from "tls.h"
- to "glthread/tls.h".
-
-2008-07-17 c-stack The module now requires the addition of
- $(LIBCSTACK) or $(LTLIBCSTACK) in Makefile.am,
- since it may depend on linking with libsigsegv.
-
-2008-07-07 isnanf-nolibm The include file is changed from "isnanf.h"
- to "isnanf-nolibm.h".
- isnand-nolibm The include file is changed from "isnand.h"
- to "isnand-nolibm.h".
-
-2008-06-10 execute The execute function takes an additional termsigp
- argument. Passing termsigp = NULL is ok.
- wait-process The wait_subprocess function takes an additional
- termsigp argument. Passing termsigp = NULL is ok.
-
-2008-05-10 linebreak The module is split into several modules unilbrk/*.
- The include file is changed from "linebreak.h" to
- "unilbrk.h". Two functions are renamed:
- mbs_possible_linebreaks -> ulc_possible_linebreaks
- mbs_width_linebreaks -> ulc_width_linebreaks
-
-2008-04-28 rpmatch The include file is now <stdlib.h>.
-
-2008-04-28 inet_ntop The include file is changed from "inet_ntop.h"
- to <arpa/inet.h>.
-
-2008-04-28 inet_pton The include file is changed from "inet_pton.h"
- to <arpa/inet.h>.
-
-2008-03-06 freadahead The return value's computation has changed. It
- now increases by 1 after ungetc.
-
-2008-01-26 isnan-nolibm The module name is changed from isnan-nolibm to
- isnand-nolibm. The include file is changed from
- "isnan.h" to "isnand.h". The function that it
- defines is changed from isnan() to isnand().
-
-2008-01-14 strcasestr This module now replaces worst-case inefficient
- implementations; clients that use controlled
- needles and thus do not care about worst-case
- efficiency should use the new strcasestr-simple
- module instead for smaller code size.
-
-2008-01-09 alloca-opt Now defines HAVE_ALLOCA_H only when the system
- supplies an <alloca.h>. Gnulib-using code is now
- expected to include <alloca.h> unconditionally.
- Non-gnulib-using code can continue to include
- <alloca.h> only if HAVE_ALLOCA_H is defined.
-
-2008-01-08 memmem This module now replaces worst-case inefficient
- implementations; clients that use controlled
- needles and thus do not care about worst-case
- efficiency should use the new memmem-simple
- module instead for smaller code size.
-
-2007-12-24 setenv The include file is changed from "setenv.h" to
- <stdlib.h>. Also, the unsetenv function is no
- longer declared in this module; use the 'unsetenv'
- module if you need it.
-
-2007-12-03 getpagesize The include file is changed from "getpagesize.h"
- to <unistd.h>.
-
-2007-12-03 strcase The include file is changed from <string.h> to
- <strings.h>.
-
-2007-10-07 most modules The license for most modules has changed from
- GPLv2+ to GPLv3+, and from LGPLv2+ to LGPLv3+.
- A few modules are still under LGPLv2+; see the
- module description for the applicable license.
-
-2007-09-01 linebreak "linebreak.h" no longer declares the functions
- locale_charset, uc_width, u{8,16,32}_width. Use
- "uniwidth.h" to get these functions declared.
-
-2007-08-28 areadlink-with-size
- Renamed from mreadlink-with-size.
- Function renamed: mreadlink_with_size ->
- areadlink_with_size.
-
-2007-08-22 getdelim, getline
- The include file is changed from "getdelim.h"
- and "getline.h" to the POSIX 200x <stdio.h>.
-
-2007-08-18 idcache Now provides prototypes in "idcache.h".
-
-2007-08-10 xstrtol The STRTOL_FATAL_ERROR macro is removed.
- Use the new xstrtol_fatal function instead.
-
-2007-08-04 human The function human_options no longer reports an
- error to standard error; that is now the
- caller's responsibility. It returns an
- error code of type enum strtol_error
- instead of the integer option value, and stores
- the option value via a new int * argument.
- xstrtol The first two arguments of STRTOL_FATAL_ERROR
- are now an option name and option argument
- instead of an option argument and a type string,
- STRTOL_FAIL_WARN is removed.
-
-2007-07-14 gpl, lgpl New Texinfo versions with no sectioning commands.
-
-2007-07-10 version-etc Output now mentions GPLv3+, not GPLv2+. Use
- gnulib-tool --local-dir to override this.
-
-2007-07-07 wcwidth The include file is changed from "wcwidth.h" to
- <wchar.h>.
-
-2007-07-02 gpl, lgpl Renamed to gpl-2.0 and lgpl-2.1 respectively.
- (There is also a new module gpl-3.0.)
-
-2007-06-16 lchown The include file is changed from "lchown.h" to
- <unistd.h>.
-
-2007-06-09 xallocsa Renamed to xmalloca. The include file "xallocsa.h"
- was renamed to "xmalloca.h". The function was
- renamed:
- xallocsa -> xmalloca
-
-2007-06-09 allocsa Renamed to malloca. The include file "allocsa.h"
- was renamed to "malloca.h". The function-like
- macros were renamed:
- allocsa -> malloca
- freesa -> freea
-
-2007-05-20 utimens Renamed futimens to gl_futimens, to avoid
- conflict with the glibc-2.6-introduced function
- that has a different signature.
-
-2007-05-01 sigprocmask The module now depends on signal, so replace
- #include "sigprocmask.h"
- with
- #include <signal.h>
-
-2007-04-06 gettext The macro HAVE_LONG_DOUBLE is no longer set.
- You can replace all its uses with 1, i.e. assume
- 'long double' as a type exists.
-
-2007-04-01 arcfour Renamed to crypto/arcfour.
- arctwo Renamed to crypto/arctwo.
- des Renamed to crypto/des.
- gc Renamed to crypto/gc.
- gc-arcfour Renamed to crypto/gc-arcfour.
- gc-arctwo Renamed to crypto/gc-arctwo.
- gc-des Renamed to crypto/gc-des.
- gc-hmac-md5 Renamed to crypto/gc-hmac-md5.
- gc-hmac-sha1 Renamed to crypto/gc-hmac-sha1.
- gc-md2 Renamed to crypto/gc-md2.
- gc-md4 Renamed to crypto/gc-md4.
- gc-md5 Renamed to crypto/gc-md5.
- gc-pbkdf2-sha1 Renamed to crypto/gc-pbkdf2-sha1.
- gc-random Renamed to crypto/gc-random.
- gc-rijndael Renamed to crypto/gc-rijndael.
- gc-sha1 Renamed to crypto/gc-sha1.
- hmac-md5 Renamed to crypto/hmac-md5.
- hmac-sha1 Renamed to crypto/hmac-sha1.
- md2 Renamed to crypto/md2.
- md4 Renamed to crypto/md4.
- md5 Renamed to crypto/md5.
- rijndael Renamed to crypto/rijndael.
- sha1 Renamed to crypto/sha1.
-
-2007-03-27 vasprintf The module now depends on stdio, so replace
- #include "vasprintf.h"
- with
- #include <stdio.h>
-
-2007-03-24 tsearch The include file is changed from "tsearch.h" to
- <search.h>.
-
-2007-03-24 utf8-ucs4 The include file is changed from "utf8-ucs4.h"
- to "unistr.h".
- utf8-ucs4-unsafe The include file is changed from
- "utf8-ucs4-unsafe.h" to "unistr.h".
- utf16-ucs4 The include file is changed from "utf16-ucs4.h"
- to "unistr.h".
- utf16-ucs4-unsafe The include file is changed from
- "utf16-ucs4-unsafe.h" to "unistr.h".
- ucs4-utf8 The include file is changed from "ucs4-utf8.h"
- to "unistr.h".
- ucs4-utf16 The include file is changed from "ucs4-utf16.h"
- to "unistr.h".
-
-2007-03-19 iconvme The module is removed. Use module striconv instead:
- iconv_string -> str_iconv
- iconv_alloc -> str_cd_iconv (with reversed
- arguments)
-
-2007-03-15 list The functions gl_list_create_empty and
- array-list gl_list_create now take an extra fourth argument.
- carray-list You can pass NULL.
- linked-list
- linkedhash-list
- avltree-list
- rbtree-list
- avltreehash-list
- rbtreehash-list
-
-2007-03-15 oset The function gl_oset_create_empty now takes a
- array-oset third argument. You can pass NULL.
- avltree-oset
- rbtree-oset
-
-2007-03-12 des The types and functions in lib/des.h have been
- gc-des renamed:
-
- des_ctx -> gl_des_ctx, tripledes_ctx -> gl_3des_ctx,
- des_is_weak_key -> gl_des_is_weak_key,
- des_setkey -> gl_des_setkey,
- des_makekey -> gl_des_makekey,
- des_ecb_crypt -> gl_des_ecb_crypt,
- des_ecb_encrypt -> gl_des_ecb_encrypt,
- des_ecb_decrypt -> gl_des_ecb_decrypt,
- tripledes_set2keys -> gl_3des_set2keys,
- tripledes_set3keys -> gl_3des_set3keys,
- tripledes_makekey -> gl_3des_makekey,
- tripledes_ecb_crypt -> gl_3des_ecb_crypt.
-
- Also consider using the "gc-des" module instead of
- using the "des" module directly.
-
-2007-02-28 xreadlink The module xreadlink was renamed to
- xreadlink-with-size. The function was renamed:
- xreadlink -> xreadlink_with_size.
-
-2007-02-18 exit The modules now depend on stdlib, so replace
- mkdtemp #include "exit.h"
- mkstemp #include "mkdtemp.h"
- #include "mkstemp.h"
- with
- #include <stdlib.h>
-
-2007-01-26 strdup The module now depends on string, so replace
- #include "strdup.h"
- with
- #include <string.h>
-
-# This is for Emacs.
-# Local Variables:
-# coding: utf-8
-# indent-tabs-mode: nil
-# whitespace-check-buffer-indent: nil
-# End:
+1.14.1 - 2023-06-27
+ - General changes/additions
+ * internal.c: update #if to only use GNU-specific strerror_r() when __GLIBC__ is defined (#791) Dimitry Andric
+ * augeas.c: Fix bug from PR#691 where the nodes of a newly created file are lost upon a subsequent load operation (#810) George Hansper
+ * HACKING.md: describe testing (#796) Laszlo Ersek
+ * Add GitHub Actions (#714) Raphaël Pinson
+ * augprint.c: remove `#include <malloc.h>`, add `#include <libgen.h>` (#792) Ruoyu Zhong
+
+ - Lens changes/additions
+ * TOML: support trailing commas in arrays (#809) Bao
+ * Tmpfiles: allow '=', '~', '^' for letter types, allow ":" as prefix for the mode (#805) Pino Toscano
+ * Sshd: Add keyword PubkeyAcceptedAlgorithms as comma-separated list of items (#806) Dave Re
+ * Cmdline: Allow whitespace at the end of the kernel command line (#798) rwmjones
+
+1.14.0 - 2022-12-07
+ - General changes/additions
+ * Update submodule gnulib to 2f7479a16a3395f1429c7795f10c5d19b9b4453e (#781)
+ * Add bash-completion for augtool, augmatch, augprint (#783) George Hansper
+ * Fix: Allow values to contain arbitrary unbalanced square brackets (#782) George Hansper
+ * Add package bash to build stage in Dockerfile (#776) George Hansper
+ * Add augprint tool for creating idempotent augtool scripts (#752) George Hansper
+ * Replace deprecated 'security_context_t' with 'char *' (#747) Leo-Schmit
+ * src/syntax.c: Fix whitespace which confuses static checkers (#725) rwmjones
+ * README.md: Add oss-fuzz status badge (#702) Sergey Nizovtsev
+ * Package augmatch, too (#688) oleksandriegorov
+ * Add Github workflow to create releases with complete source tarballs (#744) Hilko Bengen
+
+ - Lens changes/additions
+ * Resolv: add option trust-ad (#784) George Hansper
+ * Sos: new lens for /etc/sos/sos.conf (based on IniFile)
+ (#779) George Hansper
+ * Pg_Hba: unquoted auth-method may contain hyphens (#777) George Hansper
+ * Sysctl: Allow keys to contain * and : and / characters (#755) M Filka
+ * Semanage: Fix parsing of ignoredirs (#758) Richard W M Jones
+ * Systemd: allow empty quoted environment variable values
+ (#757) Michal Vasko
+ * Systemd: allow values starting with whitespaces for Exec* and
+ Environment service entries. (#757) Michal Vasko
+ * Toml: workaround to allow writing toml files (#742) Richard
+ * Kdump: parse "auto_reset_crashkernel" (#754) Laszlo Ersek
+ * Keepalived: add parameters notify_stop and notify_deleted (#749) Adam Bambuch
+ * Chrony: add new directives and options (#745) Miroslav Lichvar
+ * Redis: Allow redis lens to set 'SAVE ""' as a valid option
+ (#738) Mitch Hagstrand
+ * ClamAV: update ClamAV lens to autoload /etc/clamav/*.conf
+ (#748) Guillaume Ross
+ * AuthselectPam: new lens for /etc/authselect/custom/*/*-auth and
+ /etc/authselect/custom/*/postlogin (#743) Heston Snodgrass
+ * Sshd: Parse GSSAPIKexAlgorithms PubkeyAcceptedKeyTypes CASignatureAlgorithms
+ as comma-separated lists instead of simple strings
+ (#721) Edward Garbade
+ * Yum: Add additional unit tests (#677) Pat Riehecky
+ * Cockpit: new lens for /etc/cockpit/cockpit.conf (#675) Pat Riehecky
+
+1.13.0 - 2021-10-15
+ - General changes/additions
+ * Add Dockerfile (Nicolas Gif) (Issue #650)
+ * augtool: Improved readline integration to handle quoting issues
+ (Pino Toscano)
+ * typechecker: Allow including '/' in keys and labels. Thanks to
+ felixdoerre for pointing out that this restriction was
+ unnecessary. See issue #668 for the discussion.
+ * Add function modified() to select nodes which are marked as dirty
+ (George Hansper) (Issue #691)
+ * Add CLI command 'preview' and API 'aug_preview' to preview file contents
+ (George Hansper) (#690)
+ * Add "else" operator to augeas path-filter expressions (priority selector)
+ (George Hansper) (#692)
+ * Add new axis 'seq' to allow /path/seq::*[expr] to match and create numeric
+ nodes, as idempotent alternative to /path/*[expr] (George Hansper) (#706)
+
+ - Lens changes/additions
+ * Authinfo2: new lens to parse Authinfo2 format (Nicolas Gif) (Issue #649)
+ * Chrony: add new options (Miroslav Lichvar) (Issue #698)
+ * Cmdline: New lens to parse /proc/cmdline (Thomas Weißschuh)
+ * Crypttab: support UUID in device and / in opt (Raphaël Pinson) (#713)
+ * Fail2ban: new lens to parse Fail2ban format (Nicolas Gif) (Issue #651)
+ * Grub: support '+' in kernel command line option names
+ (Pino Toscano) (Issue #647)
+ * Krb5: handle [plugins] subsection (Pino Toscano) (Issue #663)
+ * Limits: support colons in the domain pattern of the limits lens
+ (Xavier Mol) (Issue #645)
+ * Logrotate: add hourly schedule (Jason A. Smith) (Issue #655)
+ * Mke2fs: parse more common entries between [defaults] and the tags
+ in [fs_types], fix the type of few entries, handle the [options]
+ stanza (Pino Toscano) (Issue #642)
+ support quoted values (Pino Toscano) (Issue #661)
+ * NetworkManager: allow # in values (mfilka) (#723)
+ * Opendkim: update to match current conffile format (Issue #644)
+ * Postfix_Master: Allow unix-dgram as type (Issue #635)
+ * Postfix_transport: Allow underscore (Anton Baranov) (Issue #678)
+ * Postgresql: Allow hyphen '-' in values that don't require quotes
+ (Marcin Barczyński) (Issues #700 #701)
+ * Properties: Allow "/" in property names (felixdoerre) (Issue #680)
+ * Redis: add incl path /etc/redis.conf (Raphaël Pinson) (#726)
+ support "replicaof" (Raphaël Pinson) (#727)
+ fix support for "sentinel" (Raphaël Pinson) (#728)
+ * Resolv: Support new options (Trevor Vaughan) (Issues #707 #708)
+ * Rsyslog: support multiple actions in filters and selectors (Issue #653)
+ * Shellvars: exclude more tcsh profile scripts (Pino Toscano) (Issue #627)
+ * Simplevars: add ocsinventory-agent.cfg (Pat Riehecky) (Issue #637)
+ * Sudoers: support new @include/@includedir directives
+ (Pino Toscano) (Issue #693)
+ * Sudoers: Allow AD groups (luchihoratiu) (Issue #696)
+ Support negative integers (Ando David Roots) (#724)
+ * Ssh: add Match keyword support (granquet) (Issue #695)
+ * Sshd: support quotes in Match conditions (Issue #739)
+ * Systemd: fix parsing of envvars with spaces (Pino Toscano) (#659)
+ Add incl paths according to 'systemd.network(5)' (chruetli) (#683)
+ * Tinc: new lens for Tinc VPN configuration files (Thomas Weißschuh) (#718)
+ * Toml: support arrays (norec) in inline tables (Raphaël Pinson) (#703)
+ * Tmpfiles: improvements to the types specification
+ (Pino Toscano) (Issue #694)
+
+1.12.0 - 2019-04-13
+ - General changes/additions
+ * update gnulib to 91584ed6
+ - Lens changes/additions
+ * Anaconda: new lens to process /etc/sysconfig/anaconda instead of
+ Shellvars (Pino Toscano) (Issue #597)
+ * DevfsRules: add lens for FreeBSD devfs.rules files
+ * Dovecot: permit ! in block titles (Nathan Ward) (Issue #599)
+ * Hostname: Allow creation of hostname when file is missing
+ (David Farrell) (Issue #606)
+ * Krb5: add more pkinit_* options (Issue #603)
+ * Logrotate: fix missing recognition of double quoted filenames (Issue #611)
+ * Multipath: accept values enclosed in quotes (Issue #583)
+ * Nginx: support unix sockets as server address (Issue #618)
+ * Nsswitch: add merge action (Issue #609)
+ * Pam: accept continuation lines (Issue #590)
+ * Puppetfile: allow symbols as (optional) values (Issue #619)
+ allow comments in entries (Issue #620)
+ * Rsyslog: support dynamic file paths (Issue #622)
+ treat #!/+/- as comment (arnolda, PR #595)
+ * Syslog: accept 'include' directive (Issue #486)
+ * Semanage: new lens to process /etc/selinux/semanage.conf instead of
+ Simplevars (Pino Toscano) (Issue #594)
+ * Shellvars: allow and/or in @if conditions (#582)
+ accept functions wrapped in round brackets,
+ accept variables with a dash in their name,
+ exclude csh/tcsh profile scripts (Pino Toscano) (Issue #600)
+ accept variable as command (Issue #601)
+ * Ssh: accept RekeyLimit (Issue #605)
+ * Sshd: accept '=' to separate option names from their values
+ (Emil Dragu, #587)
+ * Sudoers: support 'always_query_group_plugin' flag (Steve Traylen, #588)
+ * Strongswan: parse lists. This is a backwards-incompatible change
+ since list entries that were parsed into a single string
+ are now split into a list of entries (Kaarle Ritvanen)
+ * Toml: new lens to parse .toml files (PR #91)
+ * Xorg: accept empty values for options (arnolda, PR #596)
+
+1.11.0 - 2018-08-24
+ - General changes/additions
+ * augmatch: add a --quiet option; make the exit status useful to tell
+ whether there was a match or not
+ * Drastically reduce the amount of memory needed to evaluate complex
+ path expressions against large files (Issue #569)
+ * Fix a segfault on OSX when 'augmatch' is run without any
+ arguments (Issue #556)
+ - API changes
+ * aug_source did not in fact return the source file; it always
+ returned NULL. That has been fixed.
+ - Lens changes/additions
+ * Chrony: add new options supported in chrony 3.2 and 3.3 (Miroslav Lichvar)
+ * Dhclient: fix parsing of append/prepend and similar directives
+ (John Morrissey)
+ * Fstab: allow leading whitespace in mount entry lines
+ (Pino Toscano) (Issue #544)
+ * Grub: tolerate some invalid entries. Those invalid entries
+ get mapped to '#error' nodes
+ * Httpd: accept comments with whitespace right after a tag
+ opening a section (Issue #577)
+ * Json: allow escaped slashes in strings (Issue #557)
+ * Multipath: accept regular expressions for devnode, wwid, and property
+ in blacklist and blacklist_exceptions sections (Issue #564)
+ * Nginx: parse /etc/nginx/sites-enabled (plumbeo)
+ allow semicolons inside double quoted strings in
+ simple directives, and allow simple directives without
+ an argument (Issue #566)
+ * Redis: accept the 'bind' statement with multiple IP addresses
+ (yannh) (Issue #194)
+ * Rsyslog: support include() directive introduced in rsyslog 8.33
+ * Strongswan: new lens (Kaarle Ritvanen)
+ * Systemd: do not try to treat *.d or *.wants directories as
+ configuration files (Issue #548)
+
+1.10.1 - 2018-01-29
+ - API changes
+ * Fix a symbol versioning mistake in libfa that unnecessarily broke ABI
+
+1.10.0 - 2018-01-25
+ DO NOT USE THIS RELEASE, USE 1.10.1 INSTEAD
+
+ - General changes/additions
+ * New CLI utility 'augmatch' to print the tree for a file and select
+ some of its contents
+ * New command 'count' in augtool
+ * New function 'not(bool) -> bool' for path expressions
+ * The path expression 'label[. = "value"]' can now be written more
+ concisely as 'label["value"]'
+ - API changes
+ * libfa has now a function fa_json to export an FA as a JSON file, and
+ fa_state_* functions that make it possible to iterate over the FA's
+ states and transitions. (Pedro Valero Mejia)
+ * Add functions aug_ns_label, aug_ns_value, aug_ns_count, and
+ aug_ns_path to get the label (with index), the value, the number of
+ nodes, and the fully qualified path for nodes stored in a nodeset in
+ a variable efficiently
+ - Lens changes/additions
+ * Grubenv: new lens to process /boot/grub/grubenv (omgold)
+ * Httpd: also read files from /etc/httpd/conf.modules.d/*.conf
+ (Tomas Meszaros) (Issue #537)
+ * Nsswitch: allow comments at the end of a line (Philip Hahn) (Issue #517)
+ * Ntp: accept 'ntpsigndsocket' statement (Philip Hahn) (Issue #516)
+ * Properties: accept empty comments with DOS line endings (Issue #161)
+ * Rancid: new lens for RANCiD router databases (Matt Dainty)
+ * Resolv: accept empty comments with DOS line endings (Issue #161)
+ * Systemd: also process /etc/systemd/logind.conf (Pat Riehecky)
+ * YAML: process a document that is just a sequence (John Vandenberg)
+
+1.9.0 - 2017-10-06
+ - General changes/additions
+ * several improvements to the error messages when transforming a tree
+ back to text fails. They now make it clearer what part of the tree
+ was problematic, and what the tree should have looked like.
+ * Fixed the pkg-config file, which should now be usable
+ * Fix handling of backslash-escaping in strings and regular expressions
+ in the lens language. We used to handle constructs like "\\" and
+ /\\\\/ incorrectly. (Issue #495)
+ * do not unescape the default value of a del on create; otherwise we are
+ double unescaping these strings (Issue #507)
+ * remove tempfile when saving files because destination is not writable
+ (Issue #479)
+ * span information is now updated on save (Issue #467)
+ * fix lots of warnings generated by gcc 7.1
+ * Various changes to reduce bashisms in tests and make them run on
+ FreeBSD (Romain Tartière)
+ * Fix building on Solaris (Shawn Ferry)
+ - API changes
+ * add function aug_ns_attr to allow iterating through a nodeset
+ quickly. See examples/dump.c for an example of how to use them
+ instead of aug_get, aug_label etc. and for a way to measure
+ performance gains.
+ - Lens changes/additions
+ * Ceph: new lens for /etc/ceph/ceph.conf
+ * Cgconfig: accept fperm & dperm in admin & task (Pino Toscano)
+ * Dovecot: also load files from /usr/local/etc (Roy Hubbard)
+ * Exports: relax the rules for the path at the beginning of a line so
+ that double-quoted paths are legal, too
+ * Getcap: new lens to parse generic termcap-style capability databases
+ (Matt Dainty)
+ * Grub: accept toplevel 'boot' entry (Pino Toscano)
+ * Httpd: handle empty comments with a continuation line (Issue #423);
+ handle '>""' in a directive properly (Issue #429); make space between
+ quoted arguments optional (Issue #435); accept quoted strings as part
+ of bare arguments (Issue #470)
+ * Nginx: load files from sites-available directory (Omer Katz) (Issue #471)
+ * Nslcd: new lens for nss-pam-ldapd config (Jose Plana)
+ * Oz: new lens for /etc/oz/oz.cnf
+ * postfix lenses: also load files from /usr/local/etc (Roy Hubbard)
+ * Properties: accept DOS line endings (Issue #468)
+ * Rtadvd: new lens to parse the rtadvd configuration file (Matt Dainty)
+ * Rsyslog: load files from /etc/rsyslog.d (Doug Wilson) (Issue #475);
+ allow spaces before the # starting a comment; allow comments inside
+ config statements like 'module'
+ * Shellvars: load FreeBSD's /etc/rc.conf.d (Roy Hubbard)
+ * Ssh: accept '=' to separate keyword from arguments
+ * Sshd: split HostKeyAlgorithms into list of values; recognize quoted
+ group names with spaces in them (Issue #477)
+ * Sudoers: recognize "match_group_by_gid" (Luigi Toscano) (Issue #482)
+ * Syslog: allow spaces before the # starting a comment
+ * Termcap: new lens to parse termcap capability databases (Matt Dainty)
+ * Vsftpd: accept seccomp_sandbox (Denys Stroebel)
+ * Xymon: accept 'group-sorted' directive (Issue #462)
+
+1.8.1 - 2017-08-17
+ - General changes/addition
+ * Fix error in handling escaped whitespace at the end of path expressions
+ (addresses CVE-2017-7555)
+
+1.8.0 - 2017-03-20
+ - General changes/additions
+ * augtool: add a 'source' command exposing the aug_source API call
+ * augtool: add a 'context' command to make changing into a node more
+ discoverable
+ * augtool: add an 'info' command to print important information
+ * augtool: dramatically reduce memory consumption when all lenses are
+ loaded by more aggressively releasing temporary data structures. On
+ my machine, maximum memory usage of 'augtool -L' drops from roughly
+ 90MB to about 20MB. This will not change the amount of memory used
+ when only specific lenses are used, only the default behavior of
+ loading all lenses, i.e., when -A is not passed.
+ * make building augtool statically possible (Jörg Krause)
+ * split aug_to_xml into its own source file, so that statically linking
+ against libaugeas.a doesn't require also linking against libXml2 and
+ its dependencies, provided aug_to_xml is not needed.
+ - API changes
+ * add aug_source to find the source file for a particular node
+ * reduce memory consumption when AUG_NO_MODL_AUTOLOAD is _not_ passed;
+ exact same details as described above for augtool
+ - Lens changes/additions
+ * Chrony: allow floating point numbers (Miroslav Lichvar)
+ add new directives from chrony 3.0 and 3.1 (Miroslav Lichvar)
+ * Krb5: support include/includedir directives (Jason Smith) (Issue #430)
+ support realms that start with numbers (Dustin Wheeler) (Issue #437)
+ * Multipath: update to multipath-0.4.9-99.el7 (Xavier Mol)
+ * Php: also look for FPM files in /etc/php/*/fpm/pool.d (Daniel Dico)
+ * Postfix_virtual: allow underscores in e-mail addresses (Jason Lingohr)
+ (Issue #439)
+ * Radicale: new lens for config of http://radicale.org/ (James Valleroy)
+ * Rsyslog: support multiple options in module statements (Craig Miskell)
+ * Ssh: also look for files in /etc/ssh/ssh_config.d (Ian Mortimer)
+ * Tmpfiles: parse 'q'/'Q' modes, parse two-character arguments,
+ parse three-digit file modes
+ * Xml: support external entity declarations in the doctype (Issue #142)
+ * Yum: also read DNF files from /etc/dnf (Pat Riehecky) (Issue #434)
+
+1.7.0 - 2016-11-08
+ - General changes/additions
+ * allow multiple transforms handling the same file as long as they
+ also use the same lens (reported by Rich Jones)
+ * fix a use-after-free in recursive lenses when spans are
+ enabled (Issue #397)
+ * fix an illegal memory access during put that can be triggered by a
+ lens of the form 'del ... | l1 . l2' when the put has to jump
+ branches in the union (Issue #398)
+ * a large number of fixes based on Coverity scanning and running with
+ gcc's address sanitizer. None of the issues uncovered would have lead
+ to particularly significant leaks (they were all on the order 100-200
+ bytes) and often hard to trigger, but we now have proof that at least
+ while running tests there are no leaks at all.
+ See https://github.com/hercules-team/augeas/pull/405 for details.
+ * The type checker now checks regexes that are involved in
+ expressions. For example, it used to be possible to write 'let rx =
+ /a/ | /b)/' and not get an error from the syntax checker, even though
+ 'let rx = /b)/' would result in an error. Such constructs are now
+ checked properly. This new check might lead to errors in existing
+ lenses, requiring that they be fixed.
+ - Lens changes/additions
+ * Cron_User: New lens to handle user crontab files in /var/spool/cron
+ * Csv: fix failure to load lens on OpenBSD (Issue #396)
+ * Grub: also look for UEFI grub files in /boot/efi/EFI/*/grub.conf
+ (Rich Jones)
+ * Opendkim: new lens for /etc/opendkim.conf (Craig Miskell)
+ * Php: look for php.ini where Ubuntu 16.04 puts it, too (Michael Wodniok)
+ * Splunk: support Splunk Universal Forwarder and underscore-prefixed
+ keys for 6.x (Jason Antman)
+
+1.6.0 - 2016-08-05
+ - General changes/additions
+ * augtool: add --load-file option, and corresponding load-file command
+ to load individual files based on the autoload information in lenses
+ * path expressions: numbers in path expressions are now 64 bit integers
+ rather than whatever the C compiler decided 'int' would be
+ - API changes
+ * add aug_load_file to load individual files, bug #135
+ - Lens changes/additions
+ * Httpd: follow line continuations in comments
+ * Nginx: look for nginx.conf in /usr/local/etc, too (Omer Katz)
+ * Ntp: allow 'pool' (Craig Miskell) (Issue #378);
+ fix 'restrict' to also allow -4, and fix the
+ save/store ability (Josef Reidinger) (Issue #386)
+ * Pam: use spaces instead of tabs as the separator in new entries
+ (Loren Gordon) (Issue #236)
+ * Postfix_Passwordmap: New lens to parse Postfix password maps
+ (Anton Baranov) (Issue #380)
+ * Rsyslog: Support for rsyslog RainerScript syntax
+ (Craig Miskell) (Issue #379)
+ * Shellvars: Load /etc/lbu/lbu.conf, the config for Alpine's Local
+ Backup Utility (Kaarle Ritvanen)
+ Load /etc/profile, /etc/profile.d/*, and /etc/byobu
+ * Vsftpd: Add allow_writeable_chroot boolean option
+ (Robert Moucha) (Issue #376)
+
+1.5.0 - 2016-05-11
+ - General changes/additions
+ * augtool: new --timing option that prints how long each operation
+ took
+ * augtool: print brief help message when incorrect options are given rather
+ than dumping all help text
+ * Path expressions: optimize performance of evaluating certain
+ expressions
+ * lots of safety improvements in libfa to avoid using uninitialized
+ values and the like (Daniel Trebbien)
+ * tolerate building against OS X's libedit (Issue #256)
+ - API changes
+ * aug_match: fix a bug where expressions like /foo/*[2] would match a
+ hidden node and pretend there was no match at all. We now make sure
+ we never match a hidden node. Thanks to Xavier Mol for reporting the
+ problem.
+ * aug_get: make sure we set *value to NULL, even if the provided path is
+ invalid (Issue #372)
+ * aug_rm: fix segfault when deleting a tree and one of its ancestors
+ (Issue #319)
+ * aug_save: fix segfault when trying to save an invalid subtree. A
+ routine that was generating details for the error message overflowed
+ a buffer it had created (Issue #349)
+ - Lens changes/additions
+ * AptConf: support hash comments
+ * AptSources: support options (Issue #295),
+ support brackets with spaces in URI (GH #296),
+ rename test file to test_aptsources.aug
+ * Chrony: allow signed numbers and indentation, fix stray EOL entry,
+ disallow comment on EOL, add many missing directives and
+ options (Miroslav Lichvar, RHBZ#1213281)
+ add new directives and options that were added in
+ chrony-2.2 and chrony-2.3 and improve parsing of
+ access configuration (Miroslav Lichvar, Issue #348)
+ add new options for chrony-2.4 (Miroslav Lichvar)
+ * Dhclient: avoid put ambiguity for node without value (Issue #294)
+ * Group: support NIS map, support an overridden and disabled password,
+ i.e. `+:*::` (Matt Dainty) (Issue #258)
+ * Host_Conf: support spaces between list items (Cedric Bosdonnat, Issue #358)
+ * Httpd: add paths to SLES vhosts
+ (Jan Doleschal) (Issue #268)
+ parse backslashes in directive arguments (Issue #307)
+ parse mismatching case of opening/closing tags
+ parse multiple ending section tags on one line
+ parse wordlists in braces in SSLRequire directives
+ parse directive args starting with double quote (Issue #330)
+ parse directive args containing quotes
+ support perl directives (Issue #327)
+ parse line breaks/continuations in section arguments
+ parse escaped spaces in directive/section arguments
+ parse backslashes at the start of directive args (Issue #324)
+ * Inputrc: support $else (Cedric Bosdonnat, Issue #359)
+ * Interfaces: add support for source-directory (Issue #306)
+ * Json: add comments support, refactor,
+ allow escaped quotes and backslashes
+ * Keepalived: fix space/tag alignments and hanging spaces,
+ add vrrp_mcast_group4 and vrrp_mcast_group6,
+ add more vrrp_instance flags,
+ add mcast/unicast_src_ip and unicast_peer,
+ add missing garp options,
+ add vrrp_script options,
+ expand vrrp_sync_group block,
+ allow notify option
+ (Joe Topjian) (Issue #266)
+ * Known_Hosts: refactoring and description fixed
+ * Logrotate: support dateyesterday option (Chris Reeves) (GH #367, #368)
+ * MasterPasswd: new lens to parse /etc/master.passwd
+ (Matt Dainty) (Issue #258)
+ * Multipath: add various missing keywords (Olivier Mangold) (Issue #289)
+ * MySQL: include /etc/my.cnf.d/*.cnf (Issue #353)
+ * Nginx: improve typechecking of lens,
+ allow masks in IP keys and IPv6 (Issue #260)
+ add @server simple nodes (Issue #335)
+ * Ntp: add support for basic interface syntax
+ * OpenShift_Quickstarts: Use Json.lns
+ * OpenVPN: add all options available in OpenVPN 2.3o
+ (Justin Akers) (Issue #278)
+ * Puppetfile: name separator is not mandatory
+ add support for moduledir (Christoph Maser)
+ * Rabbitmq: remove space in option name,
+ add support for cluster_partitioning_handling,
+ add missing simple options (Joe Topjian) (Issue #264)
+ * Reprepro_Uploaders: add support for distribution field
+ (Mathieu Alorent) (Issue #277),
+ add support for groups (Issue #283)
+ * Rhsm: new lens to parse subscription-manager's /etc/rhsm/rhsm.conf
+ * Rsyslog: improve property filter parsing,
+ treat whitespace after commas as optional;
+ recognize '~' as a valid syslog action (discard)
+ (Gregory Smith) (Issue #282),
+ add support for redirecting output to named pipes
+ (Gerlof Fokkema) (Issue #366)
+ * Shellvars: allow partial quoting, mixing multiple styles
+ (Kaarle Ritvanen) (Issue #183);
+ allow wrapping builtin argument to multiple lines
+ (Kaarle Ritvanen) (Issue #184);
+ support ;; on same line with multiple commands
+ (Kaarle Ritvanen) (Issue #185);
+ allow line wrapping and improve quoting support
+ (Kaarle Ritvanen) (Issue #187);
+ accept [] and [[]] builtins (Issue #188);
+ allow && and || constructs after condition
+ (Kaarle Ritvanen) (Issue #265);
+ add pattern nodes in case entries
+ (BREAKING CHANGE: case entry values are now in a
+ @pattern subnode) (Kaarle Ritvanen) (Issue #265)
+ add eval builtin support;
+ add alias builtin support;
+ allow (almost) any command;
+ allow && and || after commands (Issue #215);
+ allow wrapping command sequences
+ (Kaarle Ritvanen) (Issue #333);
+ allow command-specific environment variable
+ (Kaarle Ritvanen) (Issue #332);
+ support subshells (Issue #339)
+ allow newlines at start of functions
+ allow newlines after actions
+ support comments after function name (Issue #339)
+ exclude SuSEfirewall2 (Cedric Bosdonnat, Issue #357)
+ * Simplelines: parse OpenBSD's hostname.if(5)
+ files (Jasper Lievisse Adriaanse) (Issue #252)
+ * Smbusers: add support for ; comments
+ * Spacevars: support flags (Issue #279)
+ * Ssh: add support for HostKeyAlgorithms, KexAlgorithms
+ and PubkeyAcceptedKeyTypes (Oliver Mangold) (Issue #290),
+ add support for GlobalKnownHostsFile (Issue #316)
+ * Star: New lens to parse /etc/default/star
+ * Sudoers: support for negated command alias
+ (Geoff Williams) (Issue #262)
+ * Syslog: recognize '~' as a valid syslog action (discard)
+ (Gregory Smith) (Issue #282)
+ * Tmpfiles: new lens to parse systemd's tempfiles.d configuration
+ files (Julien Pivotto) (Issue #269)
+ * Trapperkeeper: new lens for Puppet server configuration files
+ * Util: add comment_c_style_or_hash lens
+ add empty_any lens
+ * Vsftpd: add isolate and isolate_network options
+ (Florian Chazal) (Issue #334)
+ * Xml: allow empty document (Issue #255)
+ * YAML: new lens (subset) (Dimitar Dimitrov) (Issue #338)
+
+1.4.0 - 2015-05-22
+ - General changes/additions
+ * add an aug_escape_name call to sanitize strings for use in path
+ expressions. There are a few characters that are special in path
+ expressions. This function makes it possible to have them all escaped
+ so that the resulting string can be used in a path expression and is
+ guaranteed to only match a node with exactly that name
+ * paths generated by Augeas are now properly escaped so that, e.g., the
+ strings returned by aug_match can always be fed to aug_get, even if
+ they contain special characters
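+
+ For example (a sketch; the node label is invented to contain a path
+ metacharacter), matches now come back escaped and can be reused
+ directly:
+
+ ```
+ augtool> match /files/etc/example/*
+ /files/etc/example/weird\[label\] = value
+ augtool> get /files/etc/example/weird\[label\]
+ /files/etc/example/weird\[label\] = value
+ ```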
+ * augtool: correctly record history when reading commands from a file
+ and then switching to interactive mode (Robert Drake)
+ * augtool: new command 'errors' that pretty-prints /augeas//error
+ messages; improve the information provided with 'short iteration'
+ errors
+ * fix segfault when saving to a file that was not writable (Issue #178)
+ * augtool: on interrupt (Ctrl-C), cancel current line instead of
+ exiting (Jeremy Lin)
+ * updated parser.y to work with Bison 3.0.2
+ * fix put-symlink-augsave test to run on Solaris (Geoffrey Gardella,
+ issue #242)
+ - Lens changes/additions
+ * AFS_Cellalias: new lens (Pat Riehecky)
+ * Authorized_keys: allow double quotes in option values (Issue #135)
+ * Chrony: fix typo in log flag 'measurements' (Pat Riehecky)
+ * Clamav: new lens (Andrew Colin Kissa)
+ * Dns_Zone: New lens to parse DNS zone files (Kaarle Ritvanen)
+ * Dnsmasq: Parse the structure of the 'address' and 'server' options
+ (incompatible change) (Kaarle Ritvanen)
+ * Erlang: parse kernel app config, handle empty lists (RHBZ#1175546)
+ * Exports: support brackets in machine names (Vincent Desjardins)
+ * Grub: support password stanza inside boot/title section (Issue #229)
+ * Httpd: handle eol after opening tag (Issue #220); fix type checking
+ issue (Issue #223)
+ * Iscsid: new lens (Joey Boggs and Pat Riehecky) (Issue #174)
+ * Jaas: several improvements to cover more valid syntax (Steve Shipway)
+ * Known_Hosts: handle aliases for the host name
+ * Krb5: support keyword krb524_server; allow realm names starting
+ with lower-case characters (Jurjen Bokma)
+ * Limits: allow comments at end of line (timdeluxe)
+ * Logrotate: support 'dateformat' directive (Issue #217)
+ support 'maxsize' directive (RHBZ#1213292)
+ do not require a space before an opening '{' (Issue #123)
+ * Mailscanner: new lens (Andrew Colin Kissa)
+ * Mailscanner_Rules: new lens for MailScanner rules (Andrew Colin Kissa)
+ * NagiosCfg: default to no spaces around equal (Issue #177)
+ * Nginx: significantly reworked, now parses entire Nginx stock
+ config successfully (Issue #179)
+ * Pagekite: more fine-grained control of service_on entries; instead of
+ 'source' and 'destination', parse into protocol, kitename,
+ backend_host, backend_port, and secret (Michael Pimmer)
+ (incompatible change)
+ * Passwd: support nis [+-]username syntax (Borislav Stoichkov); fix
+ @nisdefault on OpenBSD (Matt Dainty)
+ * Pgbouncer: new lens for the pgbouncer connection pooler (Andrew
+ Colin Kissa)
+ * Postfix_sasl_smtpd: new lens contributed by larsen0815 (Issue #182)
+ * Postgresql: look for postgresql.conf in paths used on Red Hat based
+ distros (Haotian Liu)
+ * Puppetfile: new lens to parse librarian-puppet's Puppetfile
+ * Pylonspaste: new lens for Pylon's paste init configuration files
+ (Andrew Colin Kissa)
+ * PythonPaste: parse "set" keyword for default overrides (RHBZ#1175545)
+ * Shadow: allow NIS entries (Borislav Stoichkov)
+ * Shellvars: case: support ;; on same line with multiple commands
+ (Kaarle Ritvanen); make insertion at the beginning of a
+ file that starts with blank lines work; the new lens will
+ remove blank lines from the beginning of a file as soon as
+ lines are added to or removed from it (GH issue #202);
+ handle associative arrays; add /etc/periodic.conf for
+ FreeBSD (Michael Moll)
+ * Shellvars_list: support double-quoted continued lines
+ * Sudoers: allow '+' in user/groupnames (Andreas Grüninger)
+ * Sysctl: add /boot/loader.conf for FreeBSD (Michael Moll)
+ * Sysconfig: handle leading whitespace at beginning of a line,
+ RHBZ#761246
+
+1.3.0 - 2014-11-07
+ - General changes/additions
+ * Add missing cp entry in manpage (GH issue #78)
+ * Add seq to vim syntax highlight (Robert Drake)
+ * Update augtool.1 man page with new commands and --span, RHBZ#1100077
+ * augtool autocomplete includes command aliases, RHBZ#1100184
+ * Remove unused "filename" argument from dump-xml command, RHBZ#1100106
+ * aug_save returns non-zero result when unable to delete files,
+ RHBZ#1091143
+ - Lens changes/additions
+ * Aliases: permit missing whitespace between colon and recipients
+ * AptPreferences: Support spaces in origin fields
+ * Cgconfig: handle additional valid controllers (Andy Grimm)
+ * Chrony: New lens to parse /etc/chrony.conf (Pat Riehecky)
+ * CPanel: New lens to parse cpanel.config files
+ * Desktop: Allow @ in keys (GH issue #92)
+ * Device_map: Parse all device.map files under /boot (Mike Latimer)
+ * Dhclient: Add support for option modifiers (Robert Drake,
+ GH issue #95)
+ Parse hash statements with dhcp-eval strings
+ * Dhcpd: stmt_string quoted blocks no longer store quote marks
+ (incompatible change),
+ many changes to support more record types (Robert Drake)
+ * Group: NIS support (KaMichael)
+ * Grub: handle "foreground" option, RHBZ#1059383 (Miguel Armas)
+ * Gshadow: New lens (Lorenzo Catucci)
+ * Httpd: Allow eol comments after section tags
+ Allow continued lines inside quoted value (GH issue #104)
+ Allow comparison operators in tags (GH issue #154)
+ * IPRoute2: handle "/" in protocol name, swap ID and name fields
+ (incompatible change), RHBZ#1063968,
+ handle hex IDs and hyphens, as present in
+ rt_dsfield, RHBZ#1063961
+ * Iptables: parse /etc/sysconfig/iptables.save, RHBZ#1144651
+ * Kdump: parse new options, permit EOL comments, refactor, RHBZ#1139298
+ * Keepalived: Add more virtual/real server settings and checks, RHBZ#1064388
+ * Known_Hosts: New lens for SSH known hosts files
+ * Krb5: permit braces in values when not in sub-section, RHBZ#1066419
+ * Ldso: handle "hwcap" lines (GH issue #100)
+ * Lvm: support negative numbers, parse /etc/lvm/lvm.conf (Pino Toscano)
+ * Multipath: add support for rr_min_io_rq (Joel Loudermilk)
+ * NagiosConfig and NagiosObjects: Fix documentation (Simon Sehier)
+ * NetworkManager: Use the Quote module, support # in values (no eol comments)
+ * OpenVPN: Add support for fragment, mssfix, and script-security
+ (Frank Grötzner)
+ * Pagekite: New lens (Michael Pimmer)
+ * Pam: Add partial support for arguments enclosed in [] (Vincent Brillault)
+ * Passwd: Refactor lens (Lorenzo Catucci)
+ * Redis: Allow empty quoted values (GH issue #115)
+ * Rmt: New lens to parse /etc/default/rmt, RHBZ#1100549
+ * Rsyslog: support complex $template lines, property filters and file
+ actions with templates, RHBZ#1083016
+ * Services: permit colons in service name, RHBZ#1121263
+ * Shadow: New lens (Lorenzo Catucci)
+ * Shellvars: Handle case statements with same-line ';;', RHBZ#1033799
+ Allow any kind of quoted values in block
+ conditions (GH issue #118)
+ Support $(( .. )) arithmetic expansion in variable
+ assignment, RHBZ#1100550
+ * Simplevars: Support flags and empty values
+ * Sshd: Allow all types of entries in Match groups (GH issue #75)
+ * Sssd: Allow ; for comments
+ * Squid: Support configuration files for squid 3 (Mykola Nikishov)
+ * Sudoers: Allow quoted strings in default str/bool params (Nick Piacentine)
+ * Syslog: Support "# !" style comments (Robert Drake, GH issue #65)
+ Permit IPv6 loghost addresses, RHBZ#1129388
+ * Systemd: Allow quoted Environment key=value pairs, RHBZ#1100547
+ Parse /etc/sysconfig/*.systemd, RHBZ#1083022
+ Parse semicolons inside entry values, RHBZ#1139498
+ * Tuned: New lens for /etc/tuned/tuned-main.conf (Pat Riehecky)
+ * UpdateDB: New lens to parse /etc/updatedb.conf
+ (incompatible change as this file used to be processed with
+ Simplevars)
+ * Xml: Allow backslash in #attribute values (GH issue #145)
+ Parse CDATA elements (GH issue #80)
+ * Xymon_Alerting: refactor lens (GH issue #89)
+
+1.2.0 - 2014-01-27
+ - API changes
+ * Add aug_cp and the cp and copy commands
+ * aug_to_xml now includes span information in the XML dump
+ - General changes/additions
+ * Fix documentation link in c_api NaturalDocs menu
+ * Fix NaturalDocs documentation for various lenses
+ * src/transform.c (filter_matches): wrap fnmatch to ensure that an incl
+ pattern containing "//" matches file paths, RHBZ#1031084
+ * Correct locations table for transform_save() (Tomas Hoger)
+ * Corrections for CVE-2012-0786 tests (Tomas Hoger)
+ * Fix umask handling when creating new files, RHBZ#1034261
+ - Lens changes/additions
+ * Access: support DOMAIN\user syntax for users and groups, bug #353
+ * Authorized_Keys: Allow 'ssh-ed25519' as a valid authorized_key
+ type (Jasper Lievisse Adriaanse)
+ * Automounter: Handle hostnames with dashes in them, GH issue #27
+ * Build: Add combinatorics group
+ * Cyrus_Imapd: Create new entries without space before separator,
+ RHBZ#1014974 (Dietmar Kling)
+ * Desktop: Support square brackets in keys
+ * Dhclient: Add dhclient.conf path for Debian/Ubuntu (Esteve Fernandez)
+ * Dhcpd: Support conditionals, GH issue #34
+ Support a wider variety of allow/deny statements, including
+ booting and bootp (Yanis Guenane)
+ Support a wider variety of DHCP allow/deny/ignore statements
+ (Yanis Guenane)
+ * Dovecot: Various enhancements and bug fixes (Michael Haslgrübler):
+ add mailbox to block_names, fix for block_args in quotes,
+ fix for block's brackets upon write,
+ fix broken tests for mailbox,
+ fix indentation,
+ add test case for block_args with "",
+ fix broken indentation,
+ use Quote module
+ * Exports: Permit colons for IPv6 client addresses, bug #366
+ * Grub: Support the 'setkey' and 'lock' directives
+ NFC fix whitespace errors
+ Handle makeactive menu command, bug #340
+ Add 'verbose' option, GH issue #73
+ * Interfaces: Add in support for the source stanza in
+ /etc/network/interfaces files
+ Map bond-slaves and bridge-ports to arrays (incompatible
+ change) (Kaarle Ritvanen)
+ Add /etc/network/interfaces.d/* support
+ Allow numeric characters in stanza options (Pascal Lalonde)
+ * Koji: New lens to parse Koji configs (Pat Riehecky)
+ * MongoDBServer: Accept quoted values (Tomas Klouda)
+ * NagiosCfg: Do not try to parse /etc/nagios/nrpe.cfg anymore, GH issue #43
+ /etc/nagios/nrpe.cfg is parsed by Nrpe (Yanis Guenane)
+ * Nagiosobjects: Add support for optional spaces and indents
+ and whole-line comments (Sean Millichamp)
+ * OpenVPN: Support daemon, client-config-dir, route, and management
+ directives (Freakin https://github.com/Freakin)
+ * PHP: allow php-fpm syntax in keys, GH issue #35
+ * Postfix_Main: Handle stray whitespace at end of multiline lines, bug #348
+ * Postfix_virtual: allow '+' and '=' in email addresses (Tom Hendrikx)
+ * Properties: support multiline starting with an empty string, GH issue #19
+ * Samba: Permit asterisk in key name, bug #354
+ * Shellvars: Read /etc/firewalld/firewalld.conf, bug #363
+ Support all types of quoted strings in arrays, bug #357
+ Exclude /etc/sysconfig/ip*tables.save files
+ * Shellvars, Sysconfig: map "bare" export and unset lines to seq numbered
+ nodes to handle multiple variables (incompatible change), RHBZ#1033795
+ * Shellvars_list: Handle backtick variable assignments, bug #368
+ Allow end-of-line comments, bug #342
+ * Simplevars: Add /etc/selinux/semanage.conf
+ * Slapd: use smart quotes for database entries; rename by/what to by/access;
+ allow access to be absent as per official docs (incompatible change)
+ * Sshd: Indent Match entries by 2 spaces by default
+ Support Ciphers and KexAlgorithms groups, GH issue #69
+ Let all special keys be case-insensitive
+ * Sudoers: Permit underscores in group names, bug #370 (Matteo Cerutti)
+ Allow uppercase characters in user names, bug #376
+ * Sysconfig: Permit empty comments after comment lines, RHBZ#1043636
+ * Sysconfig_Route: New lens for RedHat's route configs
+ * Syslog: Accept UDP(@) and TCP(@@) protocol, bug #364 (Yanis Guenane)
+ * Xymon_Alerting: New lens for Xymon alerting files (François Maillard)
+ * Yum: Add yum-cron*.conf files (Pat Riehecky)
+ Include only *.repo files from yum.repos.d (Andrew N Golovkov)
+ Permit spaces after equals sign in list options, GH issue #45
+ Split excludes as lists, bug #275
+
+1.1.0 - 2013-06-14
+ - General changes/additions
+ * Handle files with special characters in their name, bug #343
+ * Fix type error in composition ('f; g') of functions, bug #328
+ * Improve detection of version script; make build work on Illumos with
+ GNU ld (Igor Pashev)
+ * augparse: add --trace option to print filenames of all modules being
+ loaded
+ * Various lens documentation improvements (Jasper Lievisse Adriaanse)
+ - Lens changes/additions
+ * ActiveMQ_*: new lens for ActiveMQ/JBoss A-MQ (Brian Harrington)
+ * AptCacherNGSecurity: new lens for /etc/apt-cacher-ng/security.conf
+ (Erik Anderson)
+ * Automaster: accept spaces between options
+ * BBHosts: support more flags and downtime feature (Mathieu Alorent)
+ * Bootconf: new lens for OpenBSD's /etc/boot.conf (Jasper Lievisse Adriaanse)
+ * Desktop: Support dos eol
+ * Dhclient: read /etc/dhclient.conf used in OpenBSD (Jasper Lievisse Adriaanse)
+ * Dovecot: New lens for dovecot configurations (Serge Smetana)
+ * Fai_Diskconfig: Optimize some regexps
+ * Fonts: exclude all README files (Jasper Lievisse Adriaanse)
+ * Inetd: support IPv6 addresses, bug #320
+ * IniFile: Add lns_loose and lns_loose_multiline definitions
+ Support smart quotes
+ Warning: Smart quotes support means users should not add
+ escaped double quotes themselves. Tests need to be fixed
+ also.
+ Use standard Util.comment_generic and Util.empty_generic
+ Warning: Existing lens tests must be adapted to use standard
+ comments and empty lines
+ Allow spaces in entry_multiline* values
+ Add entry_generic and entry_multiline_generic
+ Add empty_generic and empty_noindent
+ Let multiline values begin with a single newline
+ Support dos eol
+ Warning: Support for dos eol means existing lenses usually
+ need to be adapted to exclude \r as well as \n.
+ * IPRoute2: Support for iproute2 files (Davide Guerri)
+ * JaaS: lens for the Java Authentication and Authorization Service
+ (Simon Vocella)
+ * JettyRealm: new lens for jetty-realm.properties (Brian Harrington)
+ * JMXAccess, JMXPassword: new lenses for ActiveMQ's JMX files
+ (Brian Harrington)
+ * Krb5: Use standard comments and empty lines
+ Support dos eol
+ Improve performance
+ Accept pkinit_anchors (Andrew Anderson)
+ * Lightdm: Use standard comments and empty lines
+ * LVM: New lens for LVM metadata (Gabriel)
+ * Mdadm_conf: optimize some regexps
+ * MongoDBServer: new lens (Brian Harrington)
+ * Monit: also load /etc/monitrc (Jasper Lievisse Adriaanse)
+ * MySQL: Use standard comments and empty lines
+ Support dos eol
+ * NagiosCfg: handle Icinga and resources.cfg (Jasper Lievisse Adriaanse)
+ * Nrpe: accept any config option rather than predefined list (Gonzalo
+ Servat); optimize some regexps
+ * Ntpd: new lens for OpenNTPD config (Jasper Lievisse Adriaanse)
+ * Odbc: Use standard comments and empty lines
+ * Openshift_*: new lenses for Openshift support (Brian Harrington)
+ * Quote: allow multiple spaces in quote_spaces; improve docs
+ * Passwd: allow period in user names in spec, bug #337; allow overrides
+ in nisentry
+ * PHP: Support smart quotes
+ Use standard comments and empty lines
+ Load /etc/php*/fpm/pool.d/*.conf (Enrico Stahn)
+ * Postfix_master: allow [] in words, bug #345
+ * Resolv: support 'lookup' and 'family' key words, bug #320
+ (Jasper Lievisse Adriaanse)
+ * Rsyslog: support :omusrmsg: list of users in actions
+ * RX: add CR to RX.space_in
+ * Samba: Use standard comments and empty lines
+ Support dos eol
+ * Schroot: Support smart quotes
+ * Services: support port ranges (Branan Purvine-Riley)
+ * Shellvars: optimize some regexps; reinstate /etc/sysconfig/network,
+ fixes bug #330, RHBZ#904222, RHBZ#920609; parse /etc/rc.conf.local
+ from OpenBSD
+ * Sip_Conf: New lens for sip.conf configurations (Rob Tucker)
+ * Splunk: new lens (Tim Brigham)
+ * Subversion: Support smart quotes
+ Use standard comments and empty lines
+ Use IniFile.entry_multiline_generic
+ Use IniFile.empty_noindent
+ Support dos eol
+ * Sudoers: allow user aliases in specs
+ * Sysctl: exclude README file
+ * Systemd: Support smart quotes; allow backslashes in values
+ * Xinetd: handle missing values in list, bug #307
+ * Xorg: allow 'Screen' in Device section, bug #344
+ * Yum: Support dos eol, optimize some regexps
+
+1.0.0 - 2012-12-21
+ - General changes/additions
+ * fix missing requirement on libxml2 in pkg-config
+ * do not replace pathin with '/*' unless the length is 0
+ or pathin is '/', bug #239
+ * create context path if it doesn't exist
+ * add missing argument to escape() to fix build on solaris, bug #242
+ * fix fatest linking with libfa
+ * don't use variables uninitialized upon error (Jim Meyering)
+ * bootstrap: add strchrnul gnulib module (for Solaris)
+ * remove Linux-isms so tests can run on Solaris
+ * re-open rl_outstream/stdout only when stdout isn't a tty
+ (fixes -e -i); use /dev/tty instead of /dev/stdout when re-opening
+ to prevent permission errors, bug #241
+ * take root into account for excludes, bug #252
+ * fix different errors for parse and put failure
+ * fix various memory leaks
+ * add leak test
+ * allocate exception instead of static const value
+ * improve aug_srun quoting to permit concatenation and better detect
+ bad quoting
+ * rename echo to echo_commands to fix differing types reported
+ with Solaris linker (Tim Mooney), bug #262
+ * fix excl filters that only specify a filename or wildcard
+ * make sure reloading discards changes after save with mode 'newfile'
+ * remove loop that added a second iteration around children of /files,
+ causing multiple saves in newfile and noop modes when editing under
+ /files/boot, bug #264
+ * support \t and \n in aug_srun tokens, bug #265
+ * compile_exp: don't return an uninitialized pointer upon failure
+ (Jim Meyering)
+ * include 'extern "C"' wrapper for C++, bug #272 (Igor Pashev)
+ * src/try: don't overwrite gdbcmds.txt if it exists
+ * fix behavior of set with empty strings
+ * allow running individual tests with test-run
+ * test-augtool.sh: escape all possible regular expressions before
+ they are sent to sed (Micah Anderson)
+ * add new print_tree primitive
+ * fix bad memory access in regexp.c
+ * case-insensitive regexps: fix a problem with number of groups
+ * prevent symlink attacks via .augnew during saving,
+ RedHat bug #772257, CVE-2012-0786
+ * prevent cross-mountpoint attacks via .augsave during saving,
+ RedHat bug #772261, CVE-2012-0787
+ * add bundled (gnulib) provides in augeas.spec.in, RedHat bug #821745
+ * make Travis CI builds
+ * src/transform.c (xread_file): catch failed fopen, e.g. EACCES
+ * src/augrun.c (cmd_retrieve_help): tidy line wrapping
+ * make get_square case insensitive on the ending key
+ * escape double quotes when dumping regexp
+ * use constants for "lens", "incl" and "excl"
+ * src/transform.c (filter_generate): remove duplicate variable assignment
+ * src/jmt.c (parse_add_item): ensure return is defined on goto error
+ * src/transform.c (transform_save): chmod after creating new files to
+ permissions implied by the umask
+ * ignore eclipse settings directory
+ * fix memory leak in dbg_visit
+ * build AST while visiting the parse v2
+ * rewrite square lens to be more generic, allowing e.g. square quoting
+ * tests/modules/fail_shadow_union.aug: fix unintended test failure
+ * src/syntax.c (compile_test): print which test failed when missing
+ exception
+ * libfa (fa_enumerate): new function
+ * use precise ctype of a square lens if it is indeed regular
+ * square: properly handle first lens matching empty string
+ * square lens: correctly process skeletons during put
+ * src/pathx.c: disallow ',' in names in path expressions
+ * src/pathx.c: match functions by name and arity
+ * src/pathx.c: pass the number of actual arguments to the func
+ implementation
+ * correctly parse escaped string literals in vim syntax file (Domen Kožar)
+ - API changes/additions
+ * add aug_text_store to write string to tree
+ * add aug_text_retrieve to turn tree into text
+ * add aug_rename to rename node labels without moving them in the tree
+ * add aug_transform to allow specifying transforms
+ * add aug_label to retrieve the label from a path
+ - Augtool/aug_srun changes/additions
+ * add "touch" command to create node if it doesn't exist, bug #276
+ * make <VALUE> argument to "set" and "setm" optional, bug #276
+ * add "text_store" and "text_retrieve" commands
+ * add "rename" command
+ * add "transform" command and "-t|--transform" option
+ * add "label" command
+ * arrange commands in groups for better help
+ * man/augtool.pod: update mentions of default load path
+ * fix exit code when using autosave
+ * output errors when sending a command as argument
+ * honor --echo when sending a command as argument
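+
+ A brief sketch of some of the new commands (paths and the lens name
+ are illustrative):
+
+ ```
+ augtool> touch /files/etc/example/node      # create only if missing
+ augtool> set /files/etc/example/node        # <VALUE> is now optional
+ augtool> rename /files/etc/example/node newname
+ augtool> text_store Simplelines.lns /input /parsed
+ ```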
+ - XPath changes/additions
+ * add support for an 'i' flag in regexp builtin function
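+
+ For instance (a sketch; the node names are illustrative):
+
+ ```
+ augtool> match /files/etc/hosts/*/canonical[. =~ regexp("LOCALHOST", "i")]
+ ```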
+ - Lens changes/additions
+ * Aliases: commands can be fully enclosed in quotes, bug #229
+ * Anacron: new lens for /etc/anacrontab
+ * Apt_Update_Manager: new lens for /etc/update-manager
+ * AptPreferences: #comments are accepted within entries
+ * AuthorizedKeys: new lens for SSH's authorized_keys
+ * AutoMaster: new lens for auto.master files
+ * AutoMounter: new lens for automounter maps (/etc/auto.*)
+ * Avahi: new lens for /etc/avahi/avahi-daemon.conf (Athir Nuaimi)
+ * Build: add blocks
+ * Cachefilesd: new lens for /etc/cachefilesd.conf (Pat Riehecky)
+ * Carbon: new lens for /etc/carbon files (Marc Fournier)
+ * Cgconfig: add space between group and id (Filip Andres)
+ * Channels: new lens for channels.conf
+ * Collectd: new lens for /etc/collectd.conf
+ * Cron: exclude cron allow/deny files;
+ optimize typechecking;
+ records can be prefixed by '-' (Michal Filka)
+ * CronAllow: new lens for cron/at allow/deny files
+ * Cups: new lens for Cups files
+ * Cyrus_Imapd: new lens for /etc/imapd.conf, bug #296 (Jeroen van Meeuwen)
+ * Debctrl: fixed package paragraph keywords, allow variables
+ for version numbers in dependency lists,
+ allow DM-Upload-Allowed keyword, Debian bug #650887;
+ allow control extensions for Python packages, bug #267
+ * Dhcpd: fix primary statement arguments, bug #293;
+ use the Quote module to manage quoted values;
+ force double quotes for filename attribute, bug #311
+ * Dput: use Sys.getenv("HOME")
+ * Erlang: new generic lens to build Erlang config lenses
+ * Fonts: new lens for /etc/fonts files
+ * Fstab: handle options with empty values ("password=");
+ make options field optional;
+ allow end-of-line comment
+ * Fuse: new lens for fuse.conf
+ * Gdm: include /etc/gdm/custom.conf
+ * Grub: parse "password --encrypted" properly, bug #250;
+ optimize typechecking;
+ add /boot/grub/grub.conf to transform (Josh Kayse)
+ * GtkBookmarks: new lens for $HOME/.gtk-bookmarks
+ * Hosts_Access: add netmask;
+ permit more client list formats
+ (whitespace separated lists, @netgroups,
+ IPv6 hosts, inc. zone indices,
+ paths to lists of clients, wildcards,
+ hosts_options), bug #256
+ * Htpasswd: new lens for htpasswd/rsyncd.secret files (Marc Fournier)
+ * Httpd: support DOS eol
+ * IniFile: allow # and ; in quoted values, bug #243;
+ add entry_list and entry_list_nocomment
+ * Inputrc: new lens for /etc/inputrc
+ * Iptables: test that blank lines are accepted (Terence Haddock)
+ * Json: allow JSON number literals to be followed by whitespace;
+ correctly parse empty object and arrays (Lubomir Rintel)
+ * Keepalived: various improvements, optimize typechecking
+ * Krb5: handle host{} sections in v4_name_convert;
+ support ticket_lifetime;
+ handle multiple arguments to *_enctypes (Pat Riehecky);
+ better whitespace and semicolon comment support
+ * Ldif: new lens to read LDIF files per RFC2849
+ * Ldso: new lens for ld.so.conf files
+ * Lightdm: new lens for /etc/lightdm/*.conf, bug #302 (David Salmen)
+ * Logrotate: rewrite with Build, Rx, and Sep;
+ add su logrotate.conf option (Luc Didry);
+ accept integers prefixed by a sign (Michal Filka)
+ * Logwatch: new lens for /etc/logwatch/conf/logwatch.conf (Francois Lebel)
+ * Mcollective: new lens for Mcollective files (Marc Fournier)
+ * Memcached: new lens for /etc/memcached.conf (Marc Fournier)
+ * Mdadm_conf: include /etc/mdadm/mdadm.conf
+ * Mke2fs: add support for default_mntopts, enable_periodic_fsck,
+ and auto_64-bit_support
+ * Modprobe: support softdep command, Debian bug #641813;
+ allow spaces around '=' in option, RedHat bug #826752;
+ support multiline split commands, Ubuntu bug #1054306;
+ revert inner lens name change, fixes Modules_conf
+ * Modules: define own entry regexp as referenced Modprobe inner lens
+ doesn't match file format
+ * Multipath: allow devices to override defaults, bug #278 (Jacob M. McCann)
+ * NagiosCfg: support syntax for commands.cfg and resource.cfg
+ * Netmask: new lens for /etc/inet/netmasks on Solaris
+ * NetworkManager: new lens for NetworkManager files
+ * Networks: handle multiple missing network octets,
+ fix sequencing of aliases
+ * Nginx: new lens for /etc/nginx/nginx.conf (Ian Berry)
+ * Nsswitch: add passwd_compat, group_compat and shadow_compat
+ GNU extensions (Travis Groth);
+ remove long list of databases, match by regexp
+ * Ntp: allow deprecated 'authenticate' setting;
+ add tos directive, bug #297 (Jacob M. McCann)
+ * OpenVPN: use the Quote module to manage quoted values
+ * Pam: allow uppercase chars in 'types', remove /etc/pam.conf from filter;
+ ignore allow.pamlist;
+ exclude /etc/pam.d/README, bug #255
+ * PamConf: new lens for /etc/pam.conf
+ * Passwd: allow asterisk in password field, bug #255
+ * Pg_Hba: support multiple options, bug #313;
+ add a path to pg_hba.aug, bug #281 (Marc Fournier)
+ * Php: support include() statements
+ * Phpvars: map arrays with @arraykey subnodes to make working paths;
+ support classes and public/var values, bug #299 (aheahe)
+ * Postfix_Transport: new lens for Postfix transport files;
+ allow host:port and [host]:port syntaxes, bug #303
+ * Postfix_Virtual: new lens for Postfix virtual files
+ * Postgresql: new lens for postgresql.conf;
+ properly support quotes, bug #317
+ * Properties: improve handling of whitespace, empty props, and underscores
+ in keys (Brett Porter, Carlos Sanchez)
+ * Protocols: new lens for /etc/protocols
+ * Puppet: add /usr/local/etc/puppet paths (Tim Bishop)
+ * Puppet_Auth: new lens for /etc/puppet/auth.conf
+ * PuppetFileserver: add /usr/local/etc/puppet paths (Tim Bishop)
+ * PythonPaste: new lens for Python Paste configs (Dan Prince)
+ * Qpid: new lens to read Apache Qpid daemon/client configs (Andrew Replogle)
+ * Quote: new generic lens to manage quoted values using square lenses
+ * Rabbitmq: new lens for /etc/rabbitmq/rabbitmq.config
+ * Redis: new lens for /etc/redis/redis.conf (Marc Fournier)
+ * Resolv: add in single-request-reopen (Erinn Looney-Triggs)
+ * Rsyslog: new lens for rsyslog files
+ * Rx: add continuous lines (cl, cl_or_space, cl_or_opt_space)
+ * Sep: add space_equal;
+ add continuous lines (cl_or_space, cl_or_opt_space)
+ * Shellvars: support @return;
+ allow multiple elif statements;
+ parse functions;
+ add more includes;
+ autoload some SuSe and RHN specific files (Duncan Mac-Vicar P);
+ add BSD's /etc/rc.conf, bug #255;
+ remove non-shell files, up2date now has a lens,
+ move updatedb.conf to Simplevars;
+ include /etc/{default,sysconfig}/* and /etc/selinux/config;
+ add systemd's /etc/os-release file;
+ exclude bootloader from shellvars (Duncan Mac-Vicar P);
+ handle bash's implicit concatenation of quoted strings
+ (Michal Filka);
+ exclude /etc/default/whoopsie;
+ fix ambiguity by making semi-colons illegal in bquot
+ and arrays;
+ add lns_norec to check for ambiguities;
+ allow newlines in quoted values;
+ allow semi-colons in bquot and dollar_assign;
+ make end-of-line comments begin with a space;
+ allow double backquoted values;
+ support matching keys in var_action, bug #290;
+ fix empty lines after comments;
+ add shift and exit builtins, with optional args;
+ allow double quotes around variables in case statements;
+ fix empty comments;
+ add locale.conf, vconsole.conf systemd configs,
+ RedHat bug #881841
+ * Shells: permit same-line comments
+ * Simplelines: new lens for simple lines files
+ * Simplevars: new lens for simple key/value, non shellvars files
+ * Smbusers: new lens for Samba's smbusers
+ * Sssd: new lens for sssd.conf (Erinn Looney-Triggs)
+ * Ssh: use Sys.getenv('HOME') in filter instead of ~ since it's not
+ expanded (Luc Didry)
+ * Sshd: permit hyphens in subsystem names
+ * Subversion: new lens for /etc/subversion files
+ * Sudoers: optimize typechecking;
+ allow = in commands (but force ! or / as first character
+ if not an alias);
+ allow commands without full path if they begin with a lowercase
+ letter;
+ allow "!" as a type of Defaults entry, Debian bug #650079;
+ allow quoted strings in Defaults parameters, bug #263
+ * Sysconfig: handle end of line comments and semicolons; strip quotes,
+ RedHat bug #761246
+ * Sysctl: include /etc/sysctl.d files
+ * Syslog: allow capital letters in tokens
+ * Systemd: new lens to parse systemd unit files
+ * Thttpd: new lens for /etc/thttpd/thttpd.conf (Marc Fournier)
+ * Up2date: new lens for /etc/sysconfig/rhn/up2date
+ * Util: add comment_noindent; add delim; add doseol;
+ support DOS eols in various places;
+ add *.bak and *.old to stdexcl, to match files in /etc/sysconfig
+ * Vfstab: new lens for /etc/vfstab config on Solaris
+ * Vmware_Config: new lens for /etc/vmware/config
+ * Vsftpd: add require_ssl_reuse option (Danny Yates)
+ * Xinetd: rewrite with Build, Sep, and Rx;
+ make attribute names case-insensitive (Michal Filka)
+ * Xml: support single _and_ double quoted attribute values,
+ RedHat bug #799885, bug #258
+ * Xymon: new lens for Xymon config files, bug #266 (Jason Kincl)
+ * Yum: rebase on IniFile, support for comments, bug #217
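+ Many of the new lenses above are built from the Build, Rx, and Sep
+ modules; a minimal, hypothetical key/value lens in that style (the
+ module name and entry format are illustrative, not from a shipped lens):
+
+   module Example =
+     let entry = [ key Rx.word . Sep.space_equal . store Rx.space_in . Util.eol ]
+     let lns   = (Util.comment | Util.empty | entry)*
+     test lns get "answer = 42\n" = { "answer" = "42" }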
+
+0.10.0 - 2011-12-02
+ - support relative paths by taking them relative to the value
+ of /augeas/context in all API functions where paths are used
+ - add aug_to_xml to API: transform tree(s) into XML, exposed as dump-xml
+ in aug_srun and augtool. Introduces dependency on libxml2
+ - fix regular expression escaping. Previously, /[\/]/ matched either a
+ backslash or a slash. Now it only matches a slash
+ - path expressions: add function 'int' to convert a node value (string)
+ to an integer
+ - path expressions: make sure the regexp produced by empty nodesets from
+ regexp() and glob() matches nothing, rather than the empty word
+ - fix --autosave when running single command from command line, BZ 743023
+ - aug_srun: support 'insert' and 'move' as aliases for 'ins' and 'mv'
+ - aug_srun: allow escaping of spaces, quotes and brackets with \
+ - aug_init: accept AUG_NO_ERR_CLOSE flag; return augeas handle even when
+ initialization fails so that the caller gets some details about why
+ initialization failed
+ - aug_srun: tolerate trailing white space in commands
+ - much improved, expanded documentation of many lenses
+ - always interpret lens filter paths as absolute, bug #238
+ - fix bug in libfa that would incorrectly calculate the difference of a
+ case sensitive and a case insensitive regexp (/[a-zA-Z]+/ - /word/i
+ would match 'worD')
+ - new builtin 'regexp_match' for .aug files to make testing regexp
+ matching easier during development
+ - fix 'span' command, bug #220
+ - Lens changes/additions
+ * Access: parse user@host and (group) in users field; field separator
+ need not be surrounded by spaces
+ * Aliases: allow spaces before colons
+ * Aptconf: new lens for /etc/apt/apt.conf
+ * Aptpreferences: support origin entries
+ * Backuppchosts: new lens for /etc/backuppc/hosts, bug 233 (Adam Helms)
+ * Bbhosts: various fixes
+ * Cgconfig: id allowed too many characters
+ * Cron: variables aren't set like shellvars; semicolons are allowed in
+ email addresses; fix parsing of numeric fields, where upper case
+ chars were previously allowed; support ranges in time specs
+ * Desktop: new lens for .desktop files
+ * Dhcpd: slashes must be double-quoted; add Red Hat's dhcpd.conf
+ locations
+ * Exports: allow empty options
+ * Fai_diskconfig: new lens for FAI disk_config files
+ * Fstab: allow ',' in file names, BZ 751342
+ * Host_access: new lens for /etc/hosts.{allow,deny}
+ * Host_conf: new lens for /etc/host.conf
+ * Hostname: new lens for /etc/hostname
+ * Hosts: also load /etc/mailname by default
+ * Iptables: allow digits in ipt_match keys, bug #224
+ * Json: fix whitespace handling, removing some context-free ambiguities
+ * Kdump: new lens for /etc/kdump.conf (Roman Rakus)
+ * Keepalived: support many more flags, fields and blocks
+ * Krb5: support [pam] section, bug #225
+ * Logrotate: be more tolerant of whitespace in odd places
+ * Mdadm_conf: new lens for /etc/mdadm.conf
+ * Modprobe: Parse commands in install/remove stanzas (this introduces a
+ backwards incompatibility); Drop support for include as it is not
+ documented in manpages and no unit tests are shipped.
+ * Modules: new lens for /etc/modules
+ * Multipath: add support for several options in defaults section, bug #207
+ * Mysql: includedir statements are not part of sections; support
+ \!include; allow indentation of entries and flags
+ * Networks: new lens for /etc/networks
+ * Nrpe: allow '=' in commands, bug #218 (Marc Fournier)
+ * Php: allow indented entries
+ * Phpvars: allow double quotes in variable names; accept case
+ insensitive PHP tags; accept 'include_once'; allow empty lines at
+ EOF; support define() and bash-style and end-of-line comments
+ * Postfix_master: allow a lot more chars in words/commands, including
+ commas
+ * PuppetFileserver: support same-line comments and trailing whitespace,
+ bug #214
+ * Reprepro_uploaders: new lens for reprepro's uploaders files
+ * Resolv: permit end-of-line comments
+ * Schroot: new lens for /etc/schroot/schroot.conf
+ * Shellvars: greatly expand shell syntax understood; support various
+ syntactic constructs like if/then/elif/else, for, while, until, case,
+ and select; load /etc/blkid.conf by default
+ * Spacevars: add toplevel lens 'lns' for consistency
+ * Ssh: new lens for ssh_config (Jiri Suchomel)
+ * Stunnel: new lens for /etc/stunnel/stunnel.conf (Oliver Beattie)
+ * Sudoers: support more parameter flags/options, bug #143
+ * Xendconfsxp: lens for Xen configuration (Tom Limoncelli)
+ * Xinetd: allow spaces after '{'
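+ A short augtool session sketching two of the changes above: relative
+ paths resolved against /augeas/context, and dump-xml (paths are
+ illustrative; output omitted):
+
+   augtool> set /augeas/context /files/etc/hosts
+   augtool> get 1/ipaddr
+   augtool> dump-xml 1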
+
+0.9.0 - 2011-07-25
+ - augtool: keep history in ~/.augeas/history
+ - add aug_srun API function; this makes it possible to run a sequence of
+ commands through the API
+ - aug_mv: report error AUG_EMVDESC on attempts to move a node into one of
+ its descendants
+ - path expressions: allow whitespace inside names, making '/files/etc/foo
+ bar/baz' a legal path, but parse [expr1 or expr2] and [expr1 and expr2]
+ as the logical and/or of expr1 and expr2
+ - path expressions: interpret escape sequences in regexps; since '.' does
+ not match newlines, it has to be possible to write '.|\n' to match any
+ character
+ - path expressions: allow concatenating strings and regexps; add
+ comparison operator '!~'; add function 'glob'; allow passing a nodeset
+ to function 'regexp'
+ - store the names of the functions available in path expressions under
+ /augeas/version
+ - fix several smaller memory leaks
+ - Lens changes/additions
+ * Aliases: allow spaces and commas in aliases (Mathieu Arnold)
+ * Grub: allow "bootfs" Solaris/ZFS extension for dataset name, bug #201
+ (Dominic Cleal); allow kernel path starting with a BIOS device,
+ bug #199
+ * Inifile: allow multiline values
+ * Php: include files from Zend community edition, bug #210
+ * Properties: new lens for Java properties files, bug #194 (Craig Dunn)
+ * Spacevars: autoload two ldap files, bug #202 (John Morrissey)
+ * Sudoers: support users:groups format in a Runas_Spec line, bug #211;
+ add CSW paths (Dominic Cleal)
+ * Util: allow comment_or_eol to match whitespace-only comments,
+ bug #205 (Dominic Cleal)
+ * Xorg: accept InputClass section; autoload from /etc/X11/xorg.conf.d,
+ bug #197
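+ Sketches of the new path-expression syntax; the first line relies on
+ whitespace now being allowed inside names (paths and patterns are
+ illustrative only):
+
+   augtool> get /files/etc/foo bar/baz
+   augtool> match /files/etc/hosts/*/canonical[ . =~ glob("*.example.com") ]
+   augtool> match /files/etc/hosts/*/canonical[ . !~ regexp("localhost.*") ]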
+
+0.8.1 - 2011-04-15
+ - augtool: respect autosave flag in oneshot mode, bug #193; fix segfault
+ caused by unmatched bracket in path expression, bug #186
+ - eliminate a global variable in the lexer, fixes BZ 690286
+ - replace an erroneous assert(0) with a proper error message when none of
+ the alternatives in a union match during saving, bug #183
+ - improve AIX support
+ - Lens changes/additions
+ * Access: support the format @netgroup@@nisdomain, bug #190
+ * Fstab: fix parsing of SELinux labels in the fscontext option (Matt Booth)
+ * Grub: support 'device' directive for UEFI boot, bug #189; support
+ 'configfile' and 'background' (Onur Küçük)
+ * Httpd: handle continuation lines (Bill Pemberton); autoload
+ httpd.conf on Fedora/RHEL, BZ 688149; fix support for single-quoted
+ strings
+ * Iptables: support --tcp-flags, bug #157; allow blank and comment
+ lines anywhere
+ * Mysql: include /etc/my.cnf used on Fedora/RHEL, BZ 688053
+ * NagiosCfg: parse setting multiple values on one line (Sebastien Aperghis)
+ * NagiosObjects: process /etc/nagios3/objects/*.cfg (Sebastien Aperghis)
+ * Nsswitch: support 'sudoers' as a database, bug #187
+ * Shellvars: autoload /etc/rc.conf used in FreeBSD (Rich Jones)
+ * Sudoers: support '#include' and '#includedir', bug #188
+ * Yum: exclude /etc/yum/pluginconf.d/versionlock.list (Bill Pemberton)
+
+0.8.0 - 2011-02-22
+ - add new 'square' lens combinator
+ - add new aug_span API function
+ - augtool: short options for --nostdinc, --noload, and --noautoload
+ - augtool: read commands from tty after executing file with --interactive
+ - augtool: add --autosave option
+ - augtool: add --span option to load nodes' span
+ - augtool: add span command to get the node's span according to the input
+ file
+ - augtool: really be quiet when we shouldn't be echoing
+ - fix segfault in get.c with L_MAYBE lens; bug #180
+ - fix segfault when a path expression called regexp() with an invalid
+ regexp; bug #168
+ - improved vim syntax file
+ - replace augtest by test-augtool.sh to obviate the need for Ruby to run
+ tests
+ - use sys_wait module from gnulib; bug #164
+ - Lens changes/additions
+ * Access: new lens for /etc/security/access.conf (Lorenzo Dalrio)
+ * Crypttab: new lens for /etc/crypttab (Frederic Lespez)
+ * Dhcpd: new lens
+ * Exports: accept hostnames with dashes; bug #169 (Sergio Ballestrero)
+ * Grub: add various Solaris extensions (Dominic Cleal); support "map"
+ entries, bug #148
+ * Httpd: new lens for Apache config
+ * Inifile: new lens indented_title_label
+ * Interfaces: allow indentation for "iface" entries; bug #182
+ * Mysql: change default comment delimiter from ';' to '#'; bug #181
+ * Nsswitch: accept various add'l databases; bug #171
+ * PuppetFileserver: new lens for Puppet's fileserver.conf (Frederic Lespez)
+ * Resolv: allow comments starting with ';'; bug #173 (erinn)
+ * Shellvars: autoload various snmpd config files; bug #170 (erinn)
+ * Solaris_system: new lens for /etc/system on Solaris (Dominic Cleal)
+ * Util (comment_c_style, empty_generic, empty_c_style): new lenses
+ * Xml: generic lens to process XML files
+ * Xorg: make "position" in "screen" optional; allow "Extensions"
+ section; bug #175 (omzkk)
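+ The new span support can be exercised from augtool (the path is
+ illustrative; the exact output format is not shown here):
+
+   $ augtool --span
+   augtool> span /files/etc/hosts/1/ipaddr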
+
+0.7.4 - 2010-11-19
+ - augtool: new clearm command to parallel setm
+ - augtool: add --file option
+ - Fix SEGV under gcc 4.5, caused by difficulties of the gcc optimizer
+ handling bitfields (bug #149; rhbz #651992)
+ - Preserve parse errors under /augeas//error: commit 5ee81630, released
+ in 0.7.3, introduced a regression that would cause the loss of parse
+ errors; bug #138
+ - Avoid losing already parsed nodes under certain circumstances; bug #144
+ - Properly record the new mtime of a saved file; previously the mtime in
+ the tree was reset to 0 when a file was saved, causing unnecessary file
+ reloads
+ - fix a SEGV when using L_MAYBE in recursive lens; bug #136
+ - Incompatible lens changes
+ * Fstab: parse option values
+ * Squid: various improvements, see bug #46;
+ * Xinetd: map service names differently
+ - Lens changes/additions
+ * Aptsources: map comments properly, allow indented lines; bug #151
+ * Grub: add indomU setting for Debian. Allow '=' as separator in title;
+ bug #150
+ * Fstab: also process /etc/mtab
+ * Inetd: support rpc services
+ * Iptables: allow underscore in chain names
+ * Keepalived: new lens for /etc/keepalived/keepalived.conf
+ * Krb5: allow digits in realm names; bug #139
+ * Login_defs: new lens for /etc/login.defs (Erinn Looney-Triggs)
+ * Mke2fs: new lens for /etc/mke2fs.conf
+ * Nrpe: new lens for Nagios nrpe (Marc Fournier)
+ * Nsswitch: new lens for /etc/nsswitch.conf
+ * Odbc: new lens for /etc/odbc.ini (Marc Fournier)
+ * Pg_hba: New lens; bug #140 (Aurelien Bompard). Add system path on
+ Debian; bug #154 (Marc Fournier)
+ * Postfix_master: parse arguments in double quotes; bug #69
+ * Resolv: new lens for /etc/resolv.conf
+ * Shells: new lens for /etc/shells
+ * Shellvars: parse ulimit builtin
+ * Sudoers: load file from /usr/local/etc (Mathieu Arnold). Allow
+ 'visiblepw' parameter flag; bug #143. Read files from /etc/sudoers.d
+ * Syslog: new lens for /etc/syslog.conf (Mathieu Arnold)
+ * Util: exclude dpkg backup files; bug #153 (Marc Fournier)
+ * Yum: accept continuation lines for gpgkey; bug #132
+
+0.7.3 - 2010-08-06
+ - aug_load: only reparse files that have actually changed; greatly speeds
+ up reloading
+ - record all variables in /augeas/variables, regardless of whether they
+ were defined with aug_defvar or aug_defnode; make sure
+ /augeas/variables always exists
+ - redefine all variables (by reevaluating their corresponding
+ expressions) after a aug_load. This makes variables 'sticky' across
+ loads
+ - fix behavior of aug_defnode to not fail when the expression evaluates
+ to a nonempty node set
+ - make gnulib a git submodule so that we record the gnulib commit off
+ which we are based
+ - allow 'let rec' with non-recursive RHS
+ - fix memory corruption when reloading a tree into which a variable
+ defined by defnode points (BZ 613967)
+ - plug a few small memory leaks, and some segfaults
+ - Lens changes/additions
+ * Device_map: new lens for grub's device.map (Matt Booth)
+ * Limits: also look for files in /etc/security/limits.d
+ * Mysql: new lens (Tim Stoop)
+ * Shellvars: read /etc/sysconfig/suseconfig (Frederik Wagner)
+ * Sudoers: allow escaped spaces in user/group names (Raphael Pinson)
+ * Sysconfig: lens for the shell subdialect used in /etc/sysconfig; lens
+ strips quotes automatically
+
+0.7.2 - 2010-06-22
+ - new API call aug_setm to set/create multiple nodes simultaneously
+ - record expression used in a defvar underneath /augeas/variables
+ - Lens changes/additions
+ * Group: add test for disabled account (Raphael Pinson)
+ * Grub: handle comments within a boot stanza
+ * Iptables: also look for /etc/iptables-save (Nicolas Valcarcel)
+ * Modules_conf: new lens for /etc/modules.conf (Matt Booth)
+ * Securetty: added handling of empty lines/comments (Frederik Wagner)
+ * Shellvars: added SuSE sysconfig puppet files (Frederik Wagner),
+ process /etc/environment (seph)
+ * Shellvars_list: Shellvars-like lens that treats strings of
+ space-separated words as lists (Frederik Wagner)
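+ The setm command built on aug_setm changes multiple nodes in one
+ call; a sketch with an illustrative path and value:
+
+   augtool> setm /files/etc/hosts/* ipaddr 127.0.0.1
+
+ which sets the ipaddr child of every matching entry at once.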
+
+0.7.1 - 2010-04-21
+ - new primitive lens 'value' to set value of a node to a constant,
+ similar to 'label' for the key (see http://augeas.net/docs/lenses.html)
+ - new builtins for printing and getting the types of a lens (see
+ http://augeas.net/docs/builtins.html)
+ - add unit type to lens language; allow '_' as an identifier in let's to
+ force evaluation for side effect only
+ - Various fixes for Solaris. Augeas now builds cleanly on Solaris 5.10,
+ and most of the tests pass. The three tests that fail all fail because
+ the test scripts have Linux idiosyncrasies. This needs to be addressed
+ in a future release. Much thanks to Dagobert Michelsen and the OpenCSW
+ project (http://www.opencsw.org/) for providing me with access to their
+ build farm.
+ - fix crash when recursive lens was used in a nonrecursive lens (bug #100)
+ - context free parser/recursive lenses: handle 'l?' properly (bug #119);
+ distinguish between successful parse and parse with an error at end of
+ input; do caller filtering to avoid spurious ambiguous parses with
+ grammars containing epsilon productions
+ - aug_get: return -1 when multiple nodes match (bug #121)
+ - much better error message when iteration stops prematurely during
+ put/create than the dreaded 'Short iteration'
+ - augtool: ignore empty lines from stdin; report error when get fails
+ - fix memory leak in file_info (transform.c); this was leaking a file
+ name every time we loaded a file (Laine Stump)
+ - nicer error message when typechecker spots ambiguity in atype
+ - libfa: handle '(a|)' and 'r{min,}' properly
+ - locale independence: handle a literal '|' properly on systems that lack
+ uselocale
+ - bootstrap: pull in isblank explicitly (needed on Solaris)
+ - src/lens.c (lns_check_rec): fix refcounting mistake on error path (bug #120)
+ - fix SEGV when loading empty files
+ - improvements in handling some OOM's
+ - Lens changes/additions
+ * Approx: lens and test for the approx proxy server (Tim Stoop)
+ * Cgconfig: lens and tests for libcgroup config (Ivana Hutarova Varekova)
+ * Cgrules: new lens and test (Ivana Hutarova Varekova)
+ * Cobblermodules: lens + tests for cobbler's modules.conf (Shannon Hughes)
+ * Debctrl: new lens and test (Dominique Dumont)
+ * Dput: add 'allow_dcut' parameter (bug #105) (Raphael Pinson)
+ * Dhclient: add rfc code parsing (bug #107) (Raphael Pinson)
+ * Group: handle disabled passwords
+ * Grub: support empty kernel parameters, Suse includes (Frederik Wagner)
+ * Inittab: allow ':' in the process field (bug #109)
+ * Logrotate: tolerate whitespace at the end of a line (bug #101); files
+ can be separated by newlines (bug #104) (Raphael Pinson)
+ * Modprobe: Suse includes (Frederik Wagner)
+ * NagiosCfg: lens and test for /etc/nagios3/nagios.cfg (Tim Stoop)
+ * Ntp: add 'tinker' directive (bug #103)
+ * Passwd: parse NIS entries on Solaris
+ * Securetty: new lens and test for /etc/securetty (Simon Josi)
+ * Shellvars: handle a bare 'export VAR'; Suse includes (Frederik
+ Wagner); allow spaces after/before opening/closing parens for arrays
+ * Sshd: allow optional arguments in subsystem commands (Matt Palmer)
+ * Sudoers: allow del_negate even if no negate_node is found (bug #106)
+ (Raphael Pinson); accept 'secure_path' (BZ 566134) (Stuart
+ Sears)
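+ A sketch of the new 'value' primitive (the lens and input are
+ illustrative, not from a shipped module):
+
+   (* map the bare word "on" to { "state" = "enabled" } *)
+   let state = [ label "state" . del "on" "on" . value "enabled" ]
+   test state get "on" = { "state" = "enabled" }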
+
+0.7.0 - 2010-01-14
+ - Support for context-free lenses via the 'let rec' keyword. The syntax
+ is experimental, though the feature is here to stay. See
+ lenses/json.aug for an example of what's possible with that.
+ - Support for case-insensitive regular expressions. Simply append 'i' to
+ a regexp literal to make it case-insensitive, e.g. /hello/i will match
+ all variations of hello, regardless of case.
+ - Major revamp of augtool. In particular, path expressions don't need to
+ be quoted anymore. The online help has been greatly improved.
+ - Check during load/save that each file is only matched by one transform
+ under /augeas/load. If there are multiple transforms for a file, the
+ file is skipped.
+ - New error codes AUG_ENOLENS and AUG_EMXFM
+ - Do not choke on non-existing lens during save
+ - Change the metadata for files under /augeas/files slightly: the node
+ /augeas/files/$PATH/lens now has the name of the lens used to load the
+ file; the source location of that lens has moved to
+ /augeas/files/$PATH/lens/info
+ - New public functions fa_nocase, fa_is_nocase, and fa_expand_nocase in
+ libfa
+ - Various smaller bug fixes, performance improvements and improved error
+ messages
+ - Lens changes/additions
+ * Cobblersettings: new lens and test (Bryan Kearney)
+ * Iptables: allow quoted strings as arguments; handle both negation
+ syntaxes
+ * Json: lens and tests for generic Json files
+ * Lokkit: allow '-' in arguments
+ * Samba: accept entry keys with ':' (Partha Aji)
+ * Shellvars: allow arrays that span multiple lines
+ * Xinetd (name): fix bad '-' in character class
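+ Sketches of the two new language features (both lenses are
+ illustrative, not shipped; see lenses/json.aug for a real recursive
+ lens):
+
+   let greeting = /hello/i    (* case-insensitive: Hello, HELLO, ... *)
+   let rec tree = [ key /[a-z]+/ . del "{" "{" . tree* . del "}" "}" ]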
+
+0.6.0 - 2009-11-30
+ - Add error reporting API (aug_error and related calls); use to report
+ error details in a variety of places
+ - Path expressions: add regexp matching; add operator '|' to form union
+ of nodesets (ticket #89)
+ - Tolerate non-C locales from the environment (ticket #35); it is no
+ longer necessary to set the locale to C from the outside
+ - use stpcpy/stpncpy from gnulib (needed for building on Solaris)
+ - Properly check regexp literals for syntax errors (ticket #93)
+ - Distribute and install vim syntax files (ticket #97)
+ - many more bugfixes
+ - Lens changes/additions
+ * Apt_preferences: support version pin; filter out empty lines (Matt
+ Palmer)
+ * Cron: variables can contain '_' etc. (ticket #94)
+ * Ethers: new lens for /etc/ethers (Satoru SATOH)
+ * Fstab: allow '#' in spec (ticket #95)
+ * Group: allow empty password field (ticket #95)
+ * Inittab: parse end-of-line comments into a #comment
+ * Krb5: support kdc section; add v4_name_convert subsection to
+ libdefaults (ticket #95)
+ * Lokkit: add missing eol to forward_port; make argument for --trust
+ more permissive
+ * Pam: allow '-' before type
+ * Postfix_access: new lens for /etc/postfix/access (Partha Aji)
+ * Rx: allow '!' in device_name
+ * Sudoers: allow certain backslash-quoted characters in a command (Matt
+ Palmer)
+ * Wine: new lens to read Windows registry files
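+ The new path-expression features in a sketched augtool session (paths
+ and patterns are illustrative):
+
+   augtool> match /files/etc/hosts/*[ipaddr =~ regexp("127\..*")]
+   augtool> match /files/etc/passwd/root | /files/etc/group/root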
+
+0.5.3 - 2009-09-14
+ - Match trees on label + value, not just label; see
+ tests/modules/pass_strip_quotes.aug for how that enables stripping
+ quotes
+ - Do not trip over symlinks to files on a different device during save;
+ fixes problems with writing to /etc/grub.conf on Fedora/RHEL
+ - API (defnode): always add the newly created node into the resulting
+ nodeset
+ - Add preceding-sibling and following-sibling axes to path expressions
+ - augtool, augparse: add --version option (bug #88)
+ - Change file info recorded under /augeas/files/FILE/*: remove lens/id
+ and move lens/info to lens
+ - Properly record new files under /augeas/files (bug #78)
+ - aug_load: clean up variables to avoid dangling references (bug #79)
+ - Make Augeas work on AIX
+ - Ignore anything but regular files when globbing
+ - Add 'clear' function to language for use in unit tests
+ - typechecker: print example trees in tree format
+ - libfa: properly support regexps with embedded NUL's
+ - Lens changes/additions
+ * Xorg: revamped, fixes various parse failures (Matt Booth)
+ * Inetd: new lens and test (Matt Palmer)
+ * Multipath: new lens and test
+ * Slapd: also read /etc/openldap/slapd.conf (bug #85)
+
+0.5.2 - 2009-07-13
+ - Make Augeas work on Mac OS X (bug #66) (Anders Bjoerklund)
+ - reduce symbols exported from libfa with linker script
+ - add --echo option to augtool
+ - require Automake 1.11 (Jim Meyering)
+ - avoid spurious save attempts for freshly read files
+ - Lens changes/additions
+ * Inittab: schema change: use 'id' field as name of subtree for a line,
+ instead of a generated number. Map comments as '#comment' (Matt Palmer)
+ * Logrotate: make owner/group in create statement optional, allow
+ filenames to be indented
+ * Ntp: allow additional options for server etc. (bug #72)
+ * Shellvars: allow backticks as quote characters (bug #74)
+ * Yum: also read files in /etc/yum/pluginconf.d (Marc Fournier)
+
+0.5.1 - 2009-06-09
+ - augeas.h: flag AUG_NO_MODL_AUTOLOAD suppresses initial loading
+ of modules; exposed as --noautoload in augtool
+ - augtool: don't prompt when input is not from tty (Raphael Pinson)
+ - augparse: add --notypecheck option
+ - path expressions: allow things like '/foo and /bar[3]' in predicates
+ - Lens changes/additions
+ * Aliases: map comments as #comment (Raphael Pinson)
+ * Build, Rx, Sep: new utility modules (Raphael Pinson)
+ * Cron: new lens (Raphael Pinson)
+ * Dnsmasq: process files in /etc/dnsmasq.d/* (ticket #65)
+ * Grub: parse kernel and module args into separate nodes; parse
+ arguments for 'serial', 'terminal', and 'chainloader'; allow
+ optional argument for 'savedefault'
+ * Interfaces: make compliant with actual Debian spec (Matt Palmer)
+ * Iptables: relax regexp for chain names; allow comment lines mixed
+ in with chains and rules (ticket #51)
+ * Logrotate: allow '=' as separator (ticket #61); make newline at end
+ of scriptlet optional
+ * Modprobe: handle comments at end of line
+ * Ntp: parse fudge record (Raphael Pinson); parse all directives in
+ default Fedora ntp.conf; process 'broadcastdelay', 'leapfile',
+ and enable/disable flags (ticket #62)
+ * Pbuilder: new lens for Debian's personal builder (Raphael Pinson)
+ * Php: add default path on Fedora/RHEL (Marc Fournier)
+ * Squid: handle indented entries (Raphael Pinson)
+ * Shellvars: map 'export' and 'unset'; map comments as #comment
+ (Raphael Pinson)
+ * Sudoers: allow backslashes inside values (ticket #60) (Raphael Pinson)
+ * Vsftpd: map comments as #comment; handle empty lines; find
+ vsftpd.conf on Fedora/RHEL
+ * Xinetd: map comments as #comment (Raphael Pinson)
+
+0.5.0 - 2009-03-27
+ - Clean up interface for libfa; the interface is now considered stable
+ - New aug_load API call; allows controlling which files to load by
+ modifying /augeas/load and then calling aug_load; on startup, the
+ transforms marked with autoload are reported under /augeas/load
+ - New flag AUG_NO_LOAD for aug_init to keep it from loading files on
+ startup; add --noload option to augtool
+ - New API calls aug_defvar and aug_defnode to define variables for
+ path expressions; exposed as 'defvar' and 'defnode' in augtool
+ - Lenses distributed with Augeas are now installed in
+ /usr/share/augeas/lenses/dist, which is searched after
+ /usr/share/augeas/lenses, so that lenses installed by other packages
+ take precedence
+ - New program examples/fadot to draw various finite automata (Francis
+ Giraldeau)
+ - Report line number and character offset in the tree when parsing a
+ file with a lens fails
+ - Fix error in propagation of dirty flag, which could lead to only
+ parts of a tree being saved when multiple files were modified
+ - Flush files to disk before moving them
+ - Fix a number of memory corruptions in the XPath evaluator
+ - Several performance improvements in libfa
+ - Lens changes/additions
+ * Grub: process embedded comments for update-grub (Raphael Pinson)
+ * Iptables: new lens for /etc/sysconfig/iptables
+ * Krb5: new lens for /etc/krb5.conf
+ * Limits: map domain as value of 'domain' node, not as label
+ (Raphael Pinson)
+ * Lokkit: new lens for /etc/sysconfig/system-config-firewall
+ * Modprobe: new lens for /etc/modprobe.d/*
+ * Sudoers: more finegrained parsing (ticket #48) (Raphael Pinson)
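+ With the new load machinery, the set of files to parse can be
+ controlled before the first load (the transform name is illustrative):
+
+   $ augtool --noload
+   augtool> rm /augeas/load/Fstab
+   augtool> load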
+
+0.4.2 - 2009-03-09
+ - Do not delete files that had an error upon parsing
+ - For Fedora/EPEL RPM's, BuildRequire libselinux-devel (bug #26)
+ - Bug fixes in path expressions
+ * for numbers, the meaning of '<' and '<=' was reversed
+ - Always create an entry /files in aug_init
+ - New builtin 'Sys' module with functions 'getenv' and 'read_file',
+ the latter reads the contents of a file into a string
+ - Lens changes/additions
+ * Postfix_main: handle continuation lines
+ * Bbhosts, Hosts, Logrotate, Sudoers: label comment nodes as '#comment'
+ * Sshd: map comments as '#comment' nodes
+ * Squid: add all keywords from squid 2.7 and 3 (Francois Deppierraz)
+ * Logrotate: process unit suffixes for 'size' and 'minsize'
+
+0.4.1 - 2009-03-02
+ - Remove files when their entire subtree under /files is deleted
+ - Various bug fixes and syntax enhancements for path expressions
+ (see tests/xpath.tests for details)
+ - Evaluate path expressions with multiple predicates correctly
+ - Fix incorrect setting of /augeas/events/saved
+ - Major cleanup of matching during get; drastically improves
+ performance for very large (on the order of 10k lines) config files
+ - Small performance improvement in the typechecker
+ - Reject invalid character sets like [x-u] during typecheck
+ - Build with compile warnings set to 'maximum' instead of 'error', so
+ that builds on platforms with broken headers will work out of the box
+ - Lens changes/additions
+ * Util.stdexcl now excludes .augsave and .augnew files
+ * Logrotate: allow 'yearly' schedule, spaces around braces
+ * Ntp: fix so that it processes ntp.conf on Fedora 10
+ * Services: lens for /etc/services (Raphael Pinson)
+ * Xorg: new lens and tests (Raphael Pinson)
+
+0.4.0 - 2009-02-06
+ - Much improved and expanded support for path expressions in the public
+ API. See doc/xpath.txt and tests/xpath.tests for details.
+ - Solaris support: builds at least on OpenSolaris 2008.11
+ - Lens changes/additions
+ * Grub: support color and savedefault
+ * DarkIce: new lens for http://darkice.tyrell.hu/ (Free Ekanayaka)
+
+0.3.6 - 2009-01-26
+ - report version in /augeas/version, report legal save modes in
+ /augeas/version/save/mode for feature tests/version checking
+ - dynamically change behavior of aug_save; add noop save mode
+ (Bryan Kearney)
+ - plug memory leak, more portable SELinux test (Jim Meyering)
+ - fix bz #478619 - do not use abspath (Arnaud Gomes-do-Vale)
+ - fix segfault when branch in a union does not have a ktype
+ - Lens changes/additions
+ * Dpkg: new lens for Debian's dpkg.cfg (Robin Lee Powell)
+ * Limits: new lens for /etc/security/limits.conf (Free Ekanayaka)
+ * Soma: new lens for http://www.somasuite.org/ config
+ (Free Ekanayaka)
+ * Php, Gdm: fix minor regexp error (Marc Fournier)
+ expand filter for Php config files (Robin Lee Powell)
+ * Phpvars: whitespace fixes (Free Ekanayaka)
+ * Puppet: accept indented puppet.conf (ticket #25)
+
+0.3.5 - 2008-12-23
+ - add an option to rewrite files by overwriting their contents instead of
+ putting the new file in place atomically with rename(2); file contents
+ are only copied after rename fails with EXDEV or EBUSY, and only if the
+ node /augeas/save/copy_if_rename_fails exists (fix #32)
+ - saving of backup (.augsave) files now works even if the original and
+ backup files are on different devices
+ - major refactoring of how path expressions are handled internally. Fixes
+ a number of bugs and oddities (e.g. tickets #7 and #23)
+ - fix a bug in fa_as_regexp: a '.' wasn't escaped, ultimately leading to
+ spurious errors from the typechecker
+ - Lens changes/additions
+ * Group: process /etc/group (Free Ekanayaka)
+ * Passwd: process /etc/passwd (Free Ekanayaka)
+ * Phpvars: process files that set PHP variables, in particular
+ /etc/squirrelmail/config.php (Free Ekanayaka)
+ * Rsyncd: process /etc/rsyncd.conf (Marc Fournier)
+ * Shellvars: process /etc/arno-iptables-firewall/debconf.cfg and
+ /etc/cron-apt/config (Free Ekanayaka), load /etc/sysconfig/sendmail
+ * Postfix: process postfix's main.cf and master.cf (Free Ekanayaka)
+ * Squid: new lens for squid.conf (Free Ekanayaka)
+ * Webmin: new lens (Free Ekanayaka)
+ * Xinetd: make sure equal sign is surrounded by spaces (#30)
+ * Sshd: change the structure of Condition subtrees (Dominique Dumont)
+
+0.3.4 - 2008-11-05
+ - fix saving of backup files; in 0.3.3, when AUG_SAVE_BACKUP was passed
+ to aug_init, aug_save would always fail
+
+0.3.3 - 2008-10-24
+ - restore the behavior of aug_save; in 0.3.2, aug_save broke API by
+ returning the number of files changed on success instead of 0
+
+0.3.2 - 2008-10-21
+ - saving now reports which files were actually changed in
+ /augeas/events/saved; aug_save also returns the number of files
+ that were changed
+ - preserve file owner, permissions and SELinux context when changing a file.
+ - make saving idempotent, i.e. when a change to the tree does not result
+ in changes to the actual file's content, do not touch the original file
+ - report an error if there are nodes in the tree with a label that
+ is not allowed by the lens
+ - quietly append a newline to files that do not have one
+ - generate lens documentation using NaturalDocs and publish those
+ on the Augeas website (Raphael Pinson)
+ - Lens changes/additions
+ * Grub: support the 'password' directive (Joel Nimety)
+ * Grub: support 'serial' and 'terminal' directives (Sean E. Millichamp)
+ * Samba: change default indentation and separators (Free Ekanayaka)
+ * Logrotate: process tabooext, add dateext flag (Sean E. Millichamp)
+ * Sshd: Cleaner handling of 'Match' blocks (Dominique Dumont)
+ * Monit: new lens (Free Ekanayaka)
+ * Ldap: merge with Spacevars (Free Ekanayaka)
+ * Shellvars: support /etc/default (Free Ekanayaka)
+ * Shellvars: handle space at the end of a line
+
+0.3.1 - 2008-09-04
+ - Major performance improvement when processing huge files, reducing some
+ O(n^2) behavior to O(n) behavior. It's now entirely feasible to
+ manipulate, for example, /etc/hosts files with 65k lines
+ - Handle character escapes '\x' in regular expressions in compliance with
+ Posix ERE
+ - aug_mv: fix bug when moving at the root level
+ - Fix endless loop when using a mixed-case module name like MyMod.lns
+ - Typecheck del lens: for 'del RE STR', STR must match RE
+ - Properly typecheck the '?' operator, especially the atype; also allow
+ '?' to be applied to lenses that contain only 'store', and do not
+ produce tree nodes.
+ - Many new/improved lenses
+ * many lenses now map comments as '#comment' nodes instead of just
+ deleting them
+ * Sudoers: added (Raphael Pinson)
+ * Hosts: map comments into tree, handle whitespace and comments
+ at the end of a line (Kjetil Homme)
+ * Xinetd: allow indented comments and spaces around "}" (Raphael Pinson)
+ * Pam: allow comments at the end of lines and leading spaces
+ (Raphael Pinson)
+ * Fstab: map comments and support empty lines (Raphael Pinson)
+ * Inifile: major revamp (Raphael Pinson)
+ * Puppet: new lens for /etc/puppet.conf (Raphael Pinson)
+ * Shellvars: handle quoted strings and arrays (Nahum Shalman)
+ * Php: map entries outside of sections to a '.anon' section
+ (Raphael Pinson)
+ * Ldap: new lens for /etc/ldap.conf (Free Ekanayaka)
+ * Dput: add allowed_distributions entry (Free Ekanayaka)
+ * OpenVPN: new lens for /etc/openvpn/{client,server}.conf (Raphael Pinson)
+ * Dhclient: new lens for /etc/dhcp3/dhclient.conf (Free Ekanayaka)
+ * Samba: new lens for /etc/samba/smb.conf (Free Ekanayaka)
+ * Slapd: new lens for /etc/ldap/slapd.conf (Free Ekanayaka)
+ * Dnsmasq: new lens for /etc/dnsmasq.conf (Free Ekanayaka)
+ * Sysctl: new lens for /etc/sysctl.conf (Sean Millichamp)
+
+0.3.0 - 2008-08-07
+ - Add aug_mv call to public API
+ - Do not clobber symlinks, instead write new files to target of symlink
+ - Fail 'put' when tree has invalid entries
+ - Set exit status of augtool
+ - Avoid picking special characters, in particular '\0', in examples (libfa)
+ - Store system errors, using strerror, in the tree during writing of files
+ - New lenses
+ * Generic inifile module (Raphael Pinson)
+ * logrotate (Raphael Pinson)
+ * /etc/ntp.conf (Raphael Pinson)
+ * /etc/apt/preferences (Raphael Pinson)
+ * bbhosts for Big Brother [http://www.bb4.org/] (Raphael Pinson)
+ * php.ini (Raphael Pinson)
+
+0.2.2 - 2008-07-18
+ - Fix segfault in store.put on NULL values
+ - Properly move default lens dir with DATADIR (Jim Meyering)
+ - Fix 'short iteration' error on get/parse of empty string; this bug
+ made it impossible to save into a new file
+ - Add 'insa' and 'insb' primitives to allow insertion from
+ put unit tests
+ - aug_insert: handle insertion before first child properly
+ - New lenses
+ * /etc/exports: NFS exports
+ * /etc/dput.cf: Debian's dput (Raphael Pinson)
+ * /etc/aliases: don't require whitespace after comma (Greg Swift)
+
+0.2.1 - 2008-07-01
+ - Address some compilation issues found on Ubuntu/Debian unstable
+ - Fix segfault when aug_init/close are called multiple times
+ - Man page for augparse
+ - New lenses
+ * /etc/sysconfig/selinux
+ * Bugfixes for grub.conf
+
+0.2.0 - 2008-06-05
+ - Augeas is now much more portable
+ * Pull in gnulib on non-glibc systems
+ * Augeas now builds and runs on FreeBSD (possibly others, too)
+ - Various fixes for memory corruption and the like
+ (Jim Meyering, James Antill)
+ - New lenses
+ * vsftpd.conf
+ * various bugfixes in existing lenses
+
+0.1.1 - 2008-05-16
+ - Add subtraction of regexps to the language, for example
+ let re = /[a-z]+/ - /(Allow|Deny)Users/
+ - Report errors during get/put in the tree; added subnodes to
+ /augeas/files/PATH/error for that purpose
+ - Many many bugfixes:
+ * plugged all known memory leaks
+ * fixed typecheck for lens union (l1 | l2) which was plain wrong
+ * reduce overall memory usage by releasing unused compiled regexps
+ * further performance improvements in libfa
+ * check that values match the regexps in STORE when saving
+ - libfa can now convert an automaton back to a regular expression
+ (FA_AS_REGEXP)
+ - New lenses
+ * /etc/fstab
+ * /etc/xinetd.conf and /etc/xinetd.d/*
+
+0.1.0 - 2008-05-01
+ - Various changes to public API:
+ * Remove aug_exists from public API, and merge functionality into aug_get
+ * Do not hide pointer behind typedef; instead Augeas 'handle' type is now
+ struct augeas, typedef'd to augeas (Jim Meyering)
+ * Const-correctness of public API, return error indication
+ from aug_print (Jim Meyering)
+ * Make buildable on Debian Etch (remove -fstack-protector from compiler
+ switches)
+ - Public API is now stable, and existing calls will be supported without
+ further changes
+ - New schema:
+ * /etc/sysconfig/network-scripts/ifcfg-* (Alan Pevec)
+ * Assorted other files from /etc/sysconfig (the ones that just set
+ shell variables)
+ * /etc/apt/sources.list and /etc/apt/sources.list.d/* (Dean Wilson)
+ - Man page for augtool (Dean Wilson)
+
+0.0.8 - 2008-04-16
+ - Complete rewrite of the language for schema descriptions
+
+0.0.7 - 2008-03-14
+ - Typecheck lenses; in particular, discover and complain about ambiguous
+ concatenation and iteration
+ - Enable typechecking for augparse by default, and for augtool via the
+ '-c' flag
+ - Fixed lens definitions in spec/ to pass typechecking. They contained
+ quite a few stupid and subtle problems
+ - Greatly improved libfa performance to make typechecking reasonably
+ fast. Typechecking cmfm.aug went from more than two hours to under two
+ seconds
+
+0.0.6 - 2008-03-05
+ - Make it possible to overwrite files when saving with and without
+ backups
+ - Take the filesystem root as an optional argument to aug_init
+ - Expose these two things as command line options in augtool
+
+0.0.5 - 2008-03-05
+ - Changed public API to contain explicit reference to augeas_t
+ structure. This makes it easier to write threadsafe code using Augeas
+ - Added libfa, finite automata library, though it's not yet used by
+ Augeas
+
+0.0.4 - 2008-02-25
+ - package as RPM and make sure Augeas can be built on Fedora/RHEL
+
+0.0.3 - 2008-02-25
+ - further rework; file processing now resembles Boomerang lenses much
+ more closely
+ - major revamp of the internal tree representation (ordered tree where
+ multiple children can have the same label, including NULL labels)
+ - move away from LL(1) parsing in favor of regular languages, since they
+ enable much better ahead-of-time checks (which are not implemented yet)
+
+0.0.2 - 2008-01-29:
+ - completely reworked
+ - processing of files is now based on a textual description of the
+ structure of the files (basically a LL(1) grammar)
+
+0.0.1 - 2007-12-01:
+ - First release.
+ - Public API and basic tree data structure.
+ - Record scanning works.
+ - Providers for pam.d, inittab and /etc/hosts
+ - Simple tests and test driver
+++ /dev/null
-Please see doc/gnulib-readme.texi for basic information about Gnulib.
--- /dev/null
+README.md
\ No newline at end of file
--- /dev/null
+[![Build Status](https://travis-ci.org/hercules-team/augeas.svg?branch=master)](https://travis-ci.org/hercules-team/augeas)
+[![Fuzzing Status](https://oss-fuzz-build-logs.storage.googleapis.com/badges/augeas.svg)](https://bugs.chromium.org/p/oss-fuzz/issues/list?sort=-opened&can=1&q=proj:augeas)
+
+Introduction
+------------
+
+ Augeas is a library and command line tool that focuses on the most basic
+ problem in handling Linux configurations programmatically: editing actual
+ configuration files in a controlled manner.
+
+ To that end, Augeas exposes a tree of all configuration settings (well,
+ all the ones it knows about) and a simple local API for manipulating the
+ tree. Augeas then modifies underlying configuration files according to
+ the changes that have been made to the tree; it does as little modeling
+ of configurations as possible, and focuses exclusively on transforming
+ the tree-oriented syntax of its public API to the myriad syntaxes of
+ individual configuration files.
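+
+ As a rough sketch of what this looks like in practice (assuming augtool
+ and the stock lenses are installed, and that /etc/hosts exists), an
+ interactive session might go:
+
+     $ augtool
+     augtool> print /files/etc/hosts/1
+     augtool> set /files/etc/hosts/1/alias myhost
+     augtool> save
+
+ The 'print', 'set' and 'save' commands correspond directly to the
+ aug_get/aug_set/aug_save calls in the public API.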
+
+ This focus on editing sets Augeas apart from any other configuration tool
+ I know of. Hopefully, Augeas will form a more solid foundation on which
+ these tools can be built; with a clean, simple API these tools should
+ be able to focus more on their core concerns and less on the mechanics
+ of running sed, grep, awk, etc. to tweak a config file.
+
+ If all you need is a tool to edit configuration files, you only need to
+ concern yourself with the handful of public API calls that Augeas exposes
+ (or their equivalent language bindings). However, to teach Augeas about a
+ new file format, you need to describe that file format in Augeas's domain
+ specific language (a very small subset of ML). Documentation for that
+ language can be found on the Augeas website at http://augeas.net/. If you
+ do that, please contribute the description if at all possible, or include
+ it in the distribution of your software - all you need to do for that is
+ add a couple of text files, there is no need to change existing
+ code. Ultimately, Augeas should describe all config files commonly found
+ on a Linux system.
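+
+ To give a flavor of the language, a minimal, hypothetical lens for files
+ consisting of simple 'name=value' lines might look like this (the module
+ name, file path and regexps here are illustrative, not part of the
+ distributed lenses):
+
+     module Demo =
+       autoload xfm
+
+       (* one 'name=value' line becomes one tree node *)
+       let eol   = del /\n/ "\n"
+       let entry = [ key /[a-z]+/ . del /[ ]*=[ ]*/ "=" . store /[^ \n]*/ . eol ]
+       let lns   = entry*
+
+       let xfm = transform lns (incl "/etc/demo.conf")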
+
+Non-goals
+---------
+
+Augeas is as much defined by the things it does _not_ try to accomplish
+as by its goals:
+
+* No abstraction from native config format, i.e. the organization of
+ the tree mirrors closely how the native config files are organized
+* No cross-platform abstraction - what is logically the same value may
+ live in different places in the tree on different
+ distributions. Dealing with that should be left to a higher-level
+ tool
+* No remote management support. Augeas is a local API; other ways of
+ accessing Augeas should be built on top of it
+* No (or very little) modelling. Augeas is focused on syntax
+ transformation, not on any higher-level understanding of
+ configuration.
+
+The above non-goals are of course important concerns in
+practice. Historically though, too many config mgmt projects have failed
+because they set their sights too high and tried to address syntax
+transformation, modelling, remote support, and scalable management all in
+one. That leads to a lack of focus, and to addressing each of those goals
+unsatisfactorily.
+
+Building
+--------
+
+These instructions apply to building a released tarball. If you want to
+build from a git checkout, see the file HACKING.
+
+See the generic instructions in INSTALL. Generally,
+
+ ./configure
+ make && make install
+
+should be all that is needed.
+
+You need to have readline-devel installed. On systems that support
+SELinux, you should also install libselinux-devel.
+
+Documentation
+-------------
+
+Documentation can be found on the Augeas website, http://augeas.net/. The
+site also contains information on how to get in touch, what you can do to
+help, etc.
+
+License
+-------
+
+Augeas is released under the [Lesser General Public License, Version 2.1](http://www.gnu.org/licenses/lgpl-2.1.html).
+See the file COPYING for details.
--- /dev/null
+dnl
+dnl Taken from libvirt/acinclude.m4
+dnl
+dnl We've added:
+dnl -Wextra -Wshadow -Wcast-align -Wwrite-strings -Waggregate-return -Wstrict-prototypes -Winline -Wredundant-decls
+dnl We've removed
+dnl CFLAGS="$realsave_CFLAGS"
+dnl to avoid clobbering user-specified CFLAGS
+dnl
+AC_DEFUN([AUGEAS_COMPILE_WARNINGS],[
+ dnl ******************************
+ dnl More compiler warnings
+ dnl ******************************
+
+ AC_ARG_ENABLE(compile-warnings,
+ AC_HELP_STRING([--enable-compile-warnings=@<:@no/minimum/yes/maximum/error@:>@],
+ [Turn on compiler warnings]),,
+ [enable_compile_warnings="m4_default([$1],[maximum])"])
+
+ warnCFLAGS=
+
+ common_flags="-fexceptions -fasynchronous-unwind-tables"
+
+ case "$enable_compile_warnings" in
+ no)
+ try_compiler_flags=""
+ ;;
+ minimum)
+ try_compiler_flags="-Wall -Wformat -Wformat-security $common_flags"
+ ;;
+ yes)
+ try_compiler_flags="-Wall -Wformat -Wformat-security -Wmissing-prototypes $common_flags"
+ ;;
+ maximum|error)
+ try_compiler_flags="-Wall -Wformat -Wformat-security -Wmissing-prototypes -Wnested-externs -Wpointer-arith"
+ try_compiler_flags="$try_compiler_flags -Wextra -Wshadow -Wcast-align -Wwrite-strings -Waggregate-return"
+ try_compiler_flags="$try_compiler_flags -Wstrict-prototypes -Winline -Wredundant-decls -Wno-sign-compare"
+ try_compiler_flags="$try_compiler_flags $common_flags"
+ if test "$enable_compile_warnings" = "error" ; then
+ try_compiler_flags="$try_compiler_flags -Werror"
+ fi
+ ;;
+ *)
+ AC_MSG_ERROR(Unknown argument '$enable_compile_warnings' to --enable-compile-warnings)
+ ;;
+ esac
+
+ AH_VERBATIM([FORTIFY_SOURCE],
+ [/* Enable compile-time and run-time bounds-checking, and some warnings,
+ without upsetting newer glibc. */
+ #if !defined _FORTIFY_SOURCE && defined __OPTIMIZE__ && __OPTIMIZE__
+ # define _FORTIFY_SOURCE 2
+ #endif
+ ])
+
+ compiler_flags=
+ for option in $try_compiler_flags; do
+ SAVE_CFLAGS="$CFLAGS"
+ CFLAGS="$CFLAGS $option"
+ AC_MSG_CHECKING([whether gcc understands $option])
+ AC_TRY_LINK([], [],
+ has_option=yes,
+ has_option=no,)
+ CFLAGS="$SAVE_CFLAGS"
+ AC_MSG_RESULT($has_option)
+ if test $has_option = yes; then
+ compiler_flags="$compiler_flags $option"
+ fi
+ unset has_option
+ unset SAVE_CFLAGS
+ done
+ unset option
+ unset try_compiler_flags
+
+ AC_ARG_ENABLE(iso-c,
+ AC_HELP_STRING([--enable-iso-c],
+ [Try to warn if code is not ISO C ]),,
+ [enable_iso_c=no])
+
+ AC_MSG_CHECKING(what language compliance flags to pass to the C compiler)
+ complCFLAGS=
+ if test "x$enable_iso_c" != "xno"; then
+ if test "x$GCC" = "xyes"; then
+ case " $CFLAGS " in
+ *[\ \ ]-ansi[\ \ ]*) ;;
+ *) complCFLAGS="$complCFLAGS -ansi" ;;
+ esac
+ case " $CFLAGS " in
+ *[\ \ ]-pedantic[\ \ ]*) ;;
+ *) complCFLAGS="$complCFLAGS -pedantic" ;;
+ esac
+ fi
+ fi
+ AC_MSG_RESULT($complCFLAGS)
+
+ WARN_CFLAGS="$compiler_flags $complCFLAGS"
+ AC_SUBST(WARN_CFLAGS)
+])
+
+dnl
+dnl Determine readline linker flags in a way that works on RHEL 5
+dnl Check for rl_completion_matches (missing on OS/X)
+dnl
+AC_DEFUN([AUGEAS_CHECK_READLINE], [
+ AC_CHECK_HEADERS([readline/readline.h])
+
+ # Check for readline.
+ AC_CHECK_LIB(readline, readline,
+ [use_readline=yes; READLINE_LIBS=-lreadline],
+ [use_readline=no])
+
+ # If the above test failed, it may simply be that -lreadline requires
+ # some termcap-related code, e.g., from one of the following libraries.
+ # See if adding one of them to LIBS helps.
+ if test $use_readline = no; then
+ saved_libs=$LIBS
+ LIBS=
+ AC_SEARCH_LIBS(tgetent, ncurses curses termcap termlib)
+ case $LIBS in
+ no*) ;; # handle "no" and "none required"
+ *) # anything else is a -lLIBRARY
+ # Now, check for -lreadline again, also using $LIBS.
+ # Note: this time we use a different function, so that
+ # we don't get a cached "no" result.
+ AC_CHECK_LIB(readline, rl_initialize,
+ [use_readline=yes
+ READLINE_LIBS="-lreadline $LIBS"],,
+ [$LIBS])
+ ;;
+ esac
+ test $use_readline = no &&
+ AC_MSG_WARN([readline library not found])
+ LIBS=$saved_libs
+ fi
+
+ if test $use_readline = no; then
+ AC_MSG_ERROR(Could not find a working readline library (see config.log for details).)
+ fi
+
+ AC_SUBST(READLINE_LIBS)
+
+ if test $use_readline = yes; then
+ saved_libs=$LIBS
+ LIBS=$READLINE_LIBS
+ AC_CHECK_FUNCS([rl_completion_matches rl_crlf rl_replace_line])
+ LIBS=$saved_libs
+ fi
+])
--- /dev/null
+prefix=@prefix@
+exec_prefix=@exec_prefix@
+libdir=@libdir@
+includedir=@includedir@
+
+Name: augeas
+Version: @VERSION@
+Description: Augeas configuration editing library
+Requires.private: libxml-2.0 @PC_SELINUX@
+Libs: -L${libdir} -laugeas
+Libs.private: -lfa
+Cflags: -I${includedir}
--- /dev/null
+Name: augeas
+Version: @VERSION@
+Release: 1%{?dist}
+Summary: A library for changing configuration files
+
+Group: System Environment/Libraries
+License: LGPLv2+
+URL: http://augeas.net/
+Source0: http://download.augeas.net/%{name}-%{version}.tar.gz
+
+BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n)
+
+BuildRequires: readline-devel libselinux-devel libxml2-devel
+Requires: %{name}-libs = %{version}-%{release}
+
+%description
+A library for programmatically editing configuration files. Augeas parses
+configuration files into a tree structure, which it exposes through its
+public API. Changes made through the API are written back to the initially
+read files.
+
+The transformation works very hard to preserve comments and formatting
+details. It is controlled by ``lens'' definitions that describe the file
+format and the transformation into a tree.
+
+%package devel
+Summary: Development files for %{name}
+Group: Development/Libraries
+Requires: %{name}-libs = %{version}-%{release}
+Requires: pkgconfig
+
+%description devel
+The %{name}-devel package contains libraries and header files for
+developing applications that use %{name}.
+
+
+%package libs
+Summary: Libraries for %{name}
+Group: System Environment/Libraries
+
+%description libs
+The libraries for %{name}.
+
+Augeas is a library for programmatically editing configuration files. It parses
+configuration files into a tree structure, which it exposes through its
+public API. Changes made through the API are written back to the initially
+read files.
+
+%package static
+Summary: Static libraries for %{name}
+Group: Development/Libraries
+Requires: %{name}-devel = %{version}-%{release}
+
+%description static
+The %{name}-static package contains static libraries needed to produce
+static builds using %{name}.
+
+
+
+%prep
+%setup -q
+
+%build
+%configure \
+%ifarch riscv64
+ --disable-gnulib-tests \
+%endif
+ --enable-static
+make %{?_smp_mflags}
+
+%check
+# Disable test-preserve.sh SELinux testing. This fails when run under mock due
+# to differing SELinux labelling.
+export SKIP_TEST_PRESERVE_SELINUX=1
+
+make %{?_smp_mflags} check || {
+ echo '===== tests/test-suite.log ====='
+ cat tests/test-suite.log
+ exit 1
+}
+
+%install
+rm -rf $RPM_BUILD_ROOT
+make install DESTDIR=$RPM_BUILD_ROOT INSTALL="%{__install} -p"
+find $RPM_BUILD_ROOT -name '*.la' -exec rm -f {} ';'
+
+# The tests/ subdirectory contains lenses used only for testing, and
+# so it shouldn't be packaged.
+rm -r $RPM_BUILD_ROOT%{_datadir}/augeas/lenses/dist/tests
+
+%clean
+rm -rf $RPM_BUILD_ROOT
+
+%post libs -p /sbin/ldconfig
+
+%postun libs -p /sbin/ldconfig
+
+%files
+%defattr(-,root,root,-)
+%{_bindir}/augtool
+%{_bindir}/augparse
+%{_bindir}/augmatch
+%{_bindir}/augprint
+%{_bindir}/fadot
+%doc %{_mandir}/man1/*
+%{_datadir}/vim/vimfiles/syntax/augeas.vim
+%{_datadir}/vim/vimfiles/ftdetect/augeas.vim
+%{bash_completions_dir}/*
+
+%files libs
+%defattr(-,root,root,-)
+# _datadir/augeas and _datadir/augeas/lenses are owned
+# by filesystem.
+%{_datadir}/augeas/lenses/dist
+%{_libdir}/*.so.*
+%doc AUTHORS COPYING NEWS
+
+%files devel
+%defattr(-,root,root,-)
+%doc
+%{_includedir}/*
+%{_libdir}/*.so
+%{_libdir}/pkgconfig/augeas.pc
+
+%files static
+%defattr(-,root,root,-)
+%{_libdir}/libaugeas.a
+%{_libdir}/libfa.a
+
+%changelog
+* Wed Dec 7 2022 George Hansper <george@hansper.id.au> - 1.14.0
+- add augprint, fix bash-completion filenames
+
+* Sun Nov 20 2022 George Hansper <george@hansper.id.au> - 1.14.0
+- add bash completions
+
+* Fri Mar 17 2017 David Lutterkort <lutter@watzmann.net> - 1.8.0-1
+- add static subpackage
+
+* Fri Feb 10 2017 Fedora Release Engineering <releng@fedoraproject.org> - 1.7.0-4
+- Rebuilt for https://fedoraproject.org/wiki/Fedora_26_Mass_Rebuild
+
+* Thu Jan 12 2017 Igor Gnatenko <ignatenko@redhat.com> - 1.7.0-3
+- Rebuild for readline 7.x
+
+* Sat Nov 12 2016 Richard W.M. Jones <rjones@redhat.com> - 1.7.0-2
+- riscv64: Disable gnulib tests on riscv64 architecture.
+
+* Wed Nov 09 2016 Dominic Cleal <dominic@cleal.org> - 1.7.0-1
+- Update to 1.7.0
+
+* Mon Aug 08 2016 Dominic Cleal <dominic@cleal.org> - 1.6.0-1
+- Update to 1.6.0
+
+* Thu May 12 2016 Dominic Cleal <dominic@cleal.org> - 1.5.0-1
+- Update to 1.5.0
+
+* Wed Feb 03 2016 Fedora Release Engineering <releng@fedoraproject.org> - 1.4.0-3
+- Rebuilt for https://fedoraproject.org/wiki/Fedora_24_Mass_Rebuild
+
+* Wed Jun 17 2015 Fedora Release Engineering <rel-eng@lists.fedoraproject.org> - 1.4.0-2
+- Rebuilt for https://fedoraproject.org/wiki/Fedora_23_Mass_Rebuild
+
+* Tue Jun 02 2015 Dominic Cleal <dcleal@redhat.com> - 1.4.0-1
+- Update to 1.4.0
+
+* Sat Nov 08 2014 Dominic Cleal <dcleal@redhat.com> - 1.3.0-1
+- Update to 1.3.0; remove all patches
+
+* Fri Aug 15 2014 Fedora Release Engineering <rel-eng@lists.fedoraproject.org> - 1.2.0-4
+- Rebuilt for https://fedoraproject.org/wiki/Fedora_21_22_Mass_Rebuild
+
+* Sat Jun 07 2014 Fedora Release Engineering <rel-eng@lists.fedoraproject.org> - 1.2.0-3
+- Rebuilt for https://fedoraproject.org/wiki/Fedora_21_Mass_Rebuild
+
+* Mon Mar 31 2014 Dominic Cleal <dcleal@redhat.com> - 1.2.0-2
+- Add patch for Krb5, parse braces in values (RHBZ#1079444)
+
+* Wed Feb 12 2014 Dominic Cleal <dcleal@redhat.com> - 1.2.0-1
+- Update to 1.2.0, add check section
+- Update source URL to download.augeas.net (RHBZ#996032)
+
+* Sat Aug 03 2013 Fedora Release Engineering <rel-eng@lists.fedoraproject.org> - 1.1.0-2
+- Rebuilt for https://fedoraproject.org/wiki/Fedora_20_Mass_Rebuild
+
+* Wed Jun 19 2013 David Lutterkort <lutter@redhat.com> - 1.1.0-1
+- Update to 1.1.0; remove all patches
+
+* Tue Jun 18 2013 Richard W.M. Jones <rjones@redhat.com> - 1.0.0-4
+- Fix /etc/sysconfig/network (RHBZ#904222).
+
+* Wed Jun 5 2013 Richard W.M. Jones <rjones@redhat.com> - 1.0.0-3
+- Don't package lenses in tests/ subdirectory.
+
+* Wed Feb 13 2013 Fedora Release Engineering <rel-eng@lists.fedoraproject.org> - 1.0.0-2
+- Rebuilt for https://fedoraproject.org/wiki/Fedora_19_Mass_Rebuild
+
+* Fri Jan 4 2013 David Lutterkort <lutter@redhat.com> - 1.0.0-1
+- New version; remove all patches
+
+* Wed Jul 18 2012 Fedora Release Engineering <rel-eng@lists.fedoraproject.org> - 0.10.0-4
+- Rebuilt for https://fedoraproject.org/wiki/Fedora_18_Mass_Rebuild
+
+* Tue Jan 10 2012 David Lutterkort <lutter@redhat.com> - 0.10.0-3
+- Add patches for bugs 247 and 248 (JSON lens)
+
+* Sat Dec 3 2011 Richard W.M. Jones <rjones@redhat.com> - 0.10.0-2
+- Add patch to resolve missing libxml2 requirement in augeas.pc.
+
+* Fri Dec 2 2011 David Lutterkort <lutter@redhat.com> - 0.10.0-1
+- New version
+
+* Mon Jul 25 2011 David Lutterkort <lutter@redhat.com> - 0.9.0-1
+- New version; removed patch pathx-whitespace-ea010d8
+
+* Tue May 3 2011 David Lutterkort <lutter@redhat.com> - 0.8.1-2
+- Add patch pathx-whitespace-ea010d8.patch to fix BZ 700608
+
+* Fri Apr 15 2011 David Lutterkort <lutter@redhat.com> - 0.8.1-1
+- New version
+
+* Wed Feb 23 2011 David Lutterkort <lutter@redhat.com> - 0.8.0-1
+- New version
+
+* Mon Feb 07 2011 Fedora Release Engineering <rel-eng@lists.fedoraproject.org> - 0.7.4-2
+- Rebuilt for https://fedoraproject.org/wiki/Fedora_15_Mass_Rebuild
+
+* Mon Nov 22 2010 Matthew Booth <mbooth@redhat.com> - 0.7.4-1
+- Update to version 0.7.4
+
+* Thu Nov 18 2010 Richard W.M. Jones <rjones@redhat.com> - 0.7.3-2
+- Upstream patch proposed to fix GCC optimization bug (RHBZ#651992).
+
+* Fri Aug 6 2010 David Lutterkort <lutter@redhat.com> - 0.7.3-1
+- Remove upstream patches
+
+* Tue Jun 29 2010 David Lutterkort <lutter@redhat.com> - 0.7.2-2
+- Patches based on upstream fix for BZ 600141
+
+* Tue Jun 22 2010 David Lutterkort <lutter@redhat.com> - 0.7.2-1
+- Fix ownership of /usr/share/augeas. BZ 569393
+
+* Wed Apr 21 2010 David Lutterkort <lutter@redhat.com> - 0.7.1-1
+- New version
+
+* Thu Jan 14 2010 David Lutterkort <lutter@redhat.com> - 0.7.0-1
+- Remove patch vim-ftdetect-syntax.patch. It's upstream
+
+* Tue Dec 15 2009 David Lutterkort <lutter@redhat.com> - 0.6.0-2
+- Fix ftdetect file for vim
+
+* Mon Nov 30 2009 David Lutterkort <lutter@redhat.com> - 0.6.0-1
+- Install vim syntax files
+
+* Mon Sep 14 2009 David Lutterkort <lutter@redhat.com> - 0.5.3-1
+- Remove separate xorg.aug, included in upstream source
+
+* Tue Aug 25 2009 Matthew Booth <mbooth@redhat.com> - 0.5.2-3
+- Include new xorg lens from upstream
+
+* Fri Jul 24 2009 Fedora Release Engineering <rel-eng@lists.fedoraproject.org> - 0.5.2-2
+- Rebuilt for https://fedoraproject.org/wiki/Fedora_12_Mass_Rebuild
+
+* Mon Jul 13 2009 David Lutterkort <lutter@redhat.com> - 0.5.2-1
+- New version
+
+* Fri Jun 5 2009 David Lutterkort <lutter@redhat.com> - 0.5.1-1
+- Install fadot
+
+* Fri Mar 27 2009 David Lutterkort <lutter@redhat.com> - 0.5.0-2
+- fadot isn't being installed just yet
+
+* Tue Mar 24 2009 David Lutterkort <lutter@redhat.com> - 0.5.0-1
+- New program /usr/bin/fadot
+
+* Mon Mar 9 2009 David Lutterkort <lutter@redhat.com> - 0.4.2-1
+- New version
+
+* Fri Feb 27 2009 David Lutterkort <lutter@redhat.com> - 0.4.1-1
+- New version
+
+* Fri Feb 6 2009 David Lutterkort <lutter@redhat.com> - 0.4.0-1
+- New version
+
+* Mon Jan 26 2009 David Lutterkort <lutter@redhat.com> - 0.3.6-1
+- New version
+
+* Tue Dec 23 2008 David Lutterkort <lutter@redhat.com> - 0.3.5-1
+- New version
+
+* Mon Feb 25 2008 David Lutterkort <dlutter@redhat.com> - 0.0.4-1
+- Initial specfile
--- /dev/null
+#!/usr/bin/env bash
+# Run this to generate all the initial makefiles, etc.
+
+usage() {
+ echo >&2 "\
+Usage: $0 [OPTION]...
+Generate makefiles and other infrastructure needed for building
+
+Options:
+ --gnulib-srcdir=DIRNAME Specify the local directory where gnulib
+ sources reside. Use this if you already
+ have gnulib sources on your machine, and
+ do not want to waste your bandwidth downloading
+ them again.
+ --help Print this message
+ any other option Pass to the 'configure' script verbatim
+
+Running without arguments will suffice in most cases.
+"
+}
+
+BUILD_AUX=build/ac-aux
+GNULIB_DIR=gnulib
+
+set -e
+srcdir=`dirname $0`
+test -z "$srcdir" && srcdir=.
+
+THEDIR=`pwd`
+cd $srcdir
+
+# Split out options for bootstrap and for configure
+declare -a CF_ARGS
+for option
+do
+ case $option in
+ --help)
+ usage
+ exit;;
+ --gnulib-srcdir=*)
+ GNULIB_SRCDIR=$option;;
+ *)
+ CF_ARGS[${#CF_ARGS[@]}]=$option;;
+ esac
+done
+
+#Check for OSX
+case `uname -s` in
+Darwin) LIBTOOLIZE=glibtoolize;;
+*) LIBTOOLIZE=libtoolize;;
+esac
+
+
+DIE=0
+
+(autoconf --version) < /dev/null > /dev/null 2>&1 || {
+ echo
+ echo "You must have autoconf installed to compile augeas."
+ echo "Download the appropriate package for your distribution,"
+ echo "or see http://www.gnu.org/software/autoconf"
+ DIE=1
+}
+
+(automake --version) < /dev/null > /dev/null 2>&1 || {
+ echo
+ DIE=1
+ echo "You must have automake installed to compile augeas."
+ echo "Download the appropriate package for your distribution,"
+ echo "or see http://www.gnu.org/software/automake"
+}
+
+if test "$DIE" -eq 1; then
+ exit 1
+fi
+
+if test -z "${CF_ARGS[*]}"; then
+ echo "I am going to run ./configure with --enable-warnings - if you "
+ echo "wish to pass any extra arguments to it, please specify them on "
+ echo "the $0 command line."
+fi
+
+mkdir -p $BUILD_AUX
+
+$LIBTOOLIZE --copy --force
+./bootstrap $GNULIB_SRCDIR
+aclocal -I gnulib/m4
+autoheader
+automake --add-missing
+autoconf
+
+cd $THEDIR
+
+if test x$OBJ_DIR != x; then
+ mkdir -p "$OBJ_DIR"
+ cd "$OBJ_DIR"
+fi
+
+$srcdir/configure "${CF_ARGS[@]}" && {
+ echo
+ echo "Now type 'make' to compile augeas."
+}
--- /dev/null
+#!/bin/sh
+
+usage() {
+ echo >&2 "\
+Usage: $0 [OPTION]...
+Bootstrap this package from the checked-out sources.
+
+Options:
+ --gnulib-srcdir=DIRNAME Specify the local directory where gnulib
+ sources reside. Use this if you already
+ have gnulib sources on your machine, and
+ do not want to waste your bandwidth downloading
+ them again.
+
+If the file bootstrap.conf exists in the current working directory, its
+contents are read as shell variables to configure the bootstrap.
+
+Running without arguments will suffice in most cases.
+"
+}
+
+for option
+do
+ case $option in
+ --help)
+ usage
+ exit;;
+ --gnulib-srcdir=*)
+ GNULIB_SRCDIR=${option#--gnulib-srcdir=};;
+ *)
+ echo >&2 "$0: $option: unknown option"
+ exit 1;;
+ esac
+done
+
+# Get gnulib files.
+
+case ${GNULIB_SRCDIR--} in
+-)
+ echo "$0: getting gnulib files..."
+ git submodule init || exit $?
+ git submodule update || exit $?
+ GNULIB_SRCDIR=.gnulib
+ ;;
+*)
+ # Redirect the gnulib submodule to the directory on the command line
+ # if possible.
+ if test -d "$GNULIB_SRCDIR"/.git && \
+ git config --file .gitmodules submodule.gnulib.url >/dev/null; then
+ git submodule init
+    GNULIB_SRCDIR=`cd "$GNULIB_SRCDIR" && pwd`
+    git config --replace-all submodule.gnulib.url "$GNULIB_SRCDIR"
+ echo "$0: getting gnulib files..."
+ git submodule update || exit $?
+ GNULIB_SRCDIR=.gnulib
+ else
+ echo >&2 "$0: invalid gnulib srcdir: $GNULIB_SRCDIR"
+ exit 1
+ fi
+ ;;
+esac
+
+gnulib_tool=$GNULIB_SRCDIR/gnulib-tool
+<$gnulib_tool || exit
+
+modules='
+argz
+fnmatch
+getline
+getopt-gnu
+gitlog-to-changelog
+canonicalize-lgpl
+isblank
+locale
+mkstemp
+regex
+safe-alloc
+selinux-h
+stpcpy
+stpncpy
+strchrnul
+strndup
+sys_wait
+vasprintf
+'
+
+# Tell gnulib to:
+# require LGPLv2+
+# put *.m4 files in new gnulib/m4/ dir
+# put *.[ch] files in new gnulib/lib/ dir.
+
+$gnulib_tool \
+ --lgpl=2 \
+ --with-tests \
+ --m4-base=gnulib/m4 \
+ --source-base=gnulib/lib \
+ --tests-base=gnulib/tests \
+ --aux-dir=build/ac-aux \
+ --libtool \
+ --quiet \
+ --import $modules
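+
+# After a successful import, the tree should contain (roughly):
+#   gnulib/m4/     - autoconf macros for the modules listed above
+#   gnulib/lib/    - portability sources (*.c, *.h)
+#   gnulib/tests/  - the modules' self-tests
+#   build/ac-aux/  - auxiliary build scripts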
--- /dev/null
+#!/bin/sh
+# Like mv $1 $2, but if the files are the same, just delete $1.
+# Status is zero if successful, nonzero otherwise.
+
+VERSION='2011-01-28 20:09'; # UTC
+# The definition above must lie within the first 8 lines in order
+# for the Emacs time-stamp write hook (at end) to update it.
+# If you change this file with Emacs, please let the write hook
+# do its job. Otherwise, update this string manually.
+
+# Copyright (C) 2002-2007, 2009-2012 Free Software Foundation, Inc.
+
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+usage="usage: $0 SOURCE DEST"
+
+help="$usage
+ or: $0 OPTION
+If SOURCE is different from DEST, then move it to DEST; else remove SOURCE.
+
+ --help display this help and exit
+ --version output version information and exit
+
+The variable CMPPROG can be used to specify an alternative to \`cmp'.
+
+Report bugs to <bug-gnulib@gnu.org>."
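+
+# Illustrative use in a Makefile rule: regenerate into a temporary file, then
+# install it only if it actually changed, so that targets depending on it are
+# not needlessly rebuilt (`gen-config' is a stand-in for whatever produces
+# the file):
+#   gen-config > config.h.tmp && ./move-if-change config.h.tmp config.h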
+
+version=`expr "$VERSION" : '\([^ ]*\)'`
+version="move-if-change (gnulib) $version
+Copyright (C) 2011 Free Software Foundation, Inc.
+License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
+This is free software: you are free to change and redistribute it.
+There is NO WARRANTY, to the extent permitted by law."
+
+cmpprog=${CMPPROG-cmp}
+
+for arg
+do
+ case $arg in
+ --help | --hel | --he | --h)
+ exec echo "$help" ;;
+ --version | --versio | --versi | --vers | --ver | --ve | --v)
+ exec echo "$version" ;;
+ --)
+ shift
+ break ;;
+ -*)
+ echo "$0: invalid option: $arg" >&2
+ exit 1 ;;
+ *)
+ break ;;
+ esac
+done
+
+test $# -eq 2 || { echo "$0: $usage" >&2; exit 1; }
+
+if test -r "$2" && $cmpprog -- "$1" "$2" >/dev/null; then
+ rm -f -- "$1"
+else
+ if mv -f -- "$1" "$2"; then :; else
+ # Ignore failure due to a concurrent move-if-change.
+ test -r "$2" && $cmpprog -- "$1" "$2" >/dev/null && rm -f -- "$1"
+ fi
+fi
+
+## Local Variables:
+## eval: (add-hook 'write-file-hooks 'time-stamp)
+## time-stamp-start: "VERSION='"
+## time-stamp-format: "%:y-%02m-%02d %02H:%02M"
+## time-stamp-time-zone: "UTC"
+## time-stamp-end: "'; # UTC"
+## End:
--- /dev/null
+AC_INIT(augeas, 1.14.1)
+AC_CONFIG_SRCDIR([src/augeas.c])
+AC_CONFIG_AUX_DIR([build/ac-aux])
+AM_CONFIG_HEADER([config.h])
+AM_INIT_AUTOMAKE([-Wno-portability color-tests parallel-tests])
+AM_SILENT_RULES([yes]) # make --enable-silent-rules the default.
+
+
+dnl Check for NaturalDocs
+AC_PATH_PROGS([ND_PROG], [naturaldocs NaturalDocs], missing)
+AM_CONDITIONAL([ND_ENABLED], [test "x$ND_PROG" != "xmissing"])
+
+dnl NaturalDocs output format, defaults to HTML
+ND_FORMAT=HTML
+AC_ARG_WITH([naturaldocs-output],
+ [AS_HELP_STRING([--with-naturaldocs-output=FORMAT],
+ [format of NaturalDocs output (possible values: HTML/FramedHTML, default: HTML)])],
+ [
+ if test "x$ND_PROG" = "xmissing"; then
+ AC_MSG_ERROR([NaturalDocs was not found on your path; there's no point in setting the output format])
+ fi
+ case $withval in
+ HTML|FramedHTML)
+ ND_FORMAT=$withval
+ ;;
+ *)
+      AC_MSG_ERROR([$withval is not a supported output format for NaturalDocs])
+ ;;
+ esac
+ ])
+AC_SUBST(ND_FORMAT)
+
+
+dnl Check for pdflatex
+PDFDOCS=""
+AC_ARG_WITH([pdfdocs],
+ [AS_HELP_STRING([--with-pdfdocs],
+ [whether to use pdflatex to build PDF docs])],
+ [AC_PATH_PROG(PDFLATEX, pdflatex, no)
+ if test "x$PDFLATEX" = "xno"; then
+     AC_MSG_ERROR([You asked to use pdflatex but it could not be found])
+ else
+ PDFDOCS="pdfdocs"
+ fi
+ ])
+AC_SUBST(PDFLATEX)
+AC_SUBST(PDFDOCS)
+
+dnl Support for memory tests with failmalloc
+AC_ARG_WITH([failmalloc],
+ [AS_HELP_STRING([--with-failmalloc=FAILMALLOC],
+ [enable failmalloc test targets and use the failmalloc library FAILMALLOC])],
+ [AC_SUBST([LIBFAILMALLOC], ["$with_failmalloc"])],
+ [with_failmalloc=no])
+
+AM_CONDITIONAL([WITH_FAILMALLOC], [test x$with_failmalloc != xno])
+
+dnl --enable-debug=(yes|no)
+AC_ARG_ENABLE([debug],
+  [AS_HELP_STRING([--enable-debug=no/yes],
+ [enable debugging output])],[],[enable_debug=yes])
+AM_CONDITIONAL([ENABLE_DEBUG], test x"$enable_debug" = x"yes")
+if test x"$enable_debug" = x"yes"; then
+ AC_DEFINE([ENABLE_DEBUG], [1], [whether debugging is enabled])
+fi
+
+dnl Version info in libtool's notation
+AC_SUBST([LIBAUGEAS_VERSION_INFO], [25:0:25])
+AC_SUBST([LIBFA_VERSION_INFO], [6:3:5])
+
+AC_GNU_SOURCE
+
+AC_PROG_CC
+gl_EARLY
+AC_SYS_LARGEFILE
+
+dnl gl_INIT uses m4_foreach_w, yet that is not defined in autoconf-2.59.
+dnl In order to accommodate developers with such old tools, here's a
+dnl replacement definition.
+m4_ifndef([m4_foreach_w],
+ [m4_define([m4_foreach_w],
+ [m4_foreach([$1], m4_split(m4_normalize([$2]), [ ]), [$3])])])
+
+AC_PROG_LIBTOOL
+AC_PROG_YACC
+AC_PROG_LEX
+
+AUGEAS_COMPILE_WARNINGS(maximum)
+
+## Compiler flags to be used everywhere
+AUGEAS_CFLAGS=-std=gnu99
+AC_SUBST(AUGEAS_CFLAGS)
+
+AUGEAS_CHECK_READLINE
+AC_CHECK_FUNCS([open_memstream uselocale])
+
+AC_MSG_CHECKING([how to pass version script to the linker ($LD)])
+VERSION_SCRIPT_FLAGS=none
+if $LD --help 2>&1 | grep "version-script" >/dev/null 2>/dev/null; then
+ VERSION_SCRIPT_FLAGS=-Wl,--version-script=
+ # Solaris needs gnu-version-script-compat to use version-script
+ if test x"$host_os" = x"solaris2.11"; then
+ VERSION_SCRIPT_FLAGS="-z gnu-version-script-compat,${VERSION_SCRIPT_FLAGS}"
+ fi
+elif $LD --help 2>&1 | grep "M mapfile" >/dev/null 2>/dev/null; then
+ VERSION_SCRIPT_FLAGS="-Wl,-M -Wl,"
+fi
+AC_MSG_RESULT([$VERSION_SCRIPT_FLAGS])
+AC_SUBST(VERSION_SCRIPT_FLAGS)
+AM_CONDITIONAL([USE_VERSION_SCRIPT], [test "$VERSION_SCRIPT_FLAGS" != none])
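+
+dnl A version script passed via these flags is a plain text file mapping
+dnl symbols to version nodes; a minimal, purely illustrative example:
+dnl   AUGEAS_0.1.0 {
+dnl     global: aug_init; aug_get;
+dnl     local: *;
+dnl   };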
+
+gl_INIT
+
+dnl Should we run the gnulib tests?
+AC_MSG_CHECKING([if we should run the GNUlib tests])
+AC_ARG_ENABLE([gnulib-tests],
+ [AS_HELP_STRING([--disable-gnulib-tests],
+ [disable running GNU Portability library tests @<:@default=yes@:>@])],
+ [ENABLE_GNULIB_TESTS="$enableval"],
+ [ENABLE_GNULIB_TESTS=yes])
+AM_CONDITIONAL([ENABLE_GNULIB_TESTS],[test "x$ENABLE_GNULIB_TESTS" = "xyes"])
+AC_MSG_RESULT([$ENABLE_GNULIB_TESTS])
+
+dnl set PC_SELINUX for use by augeas.pc.in
+PC_SELINUX=$(echo $LIB_SELINUX | sed -e 's/-l/lib/')
+AC_SUBST([PC_SELINUX])
+
+PKG_PROG_PKG_CONFIG
+PKG_CHECK_MODULES([LIBXML], [libxml-2.0])
+
+AC_CHECK_FUNCS([strerror_r fsync])
+
+AC_OUTPUT(Makefile \
+ gnulib/lib/Makefile \
+ gnulib/tests/Makefile \
+ src/Makefile \
+ man/Makefile \
+ tests/Makefile \
+ examples/Makefile \
+ doc/Makefile \
+ doc/naturaldocs/Makefile \
+ augeas.pc augeas.spec)
+
+# Bash completion ...
+PKG_CHECK_VAR(bashcompdir, [bash-completion], [completionsdir], ,
+ bashcompdir="${sysconfdir}/bash_completion.d")
+AC_SUBST(bashcompdir)
+++ /dev/null
-gnulib.aux
-gnulib.cn
-gnulib.cp
-gnulib.cps
-gnulib.dvi
-gnulib.fn
-gnulib.ky
-gnulib.log
-gnulib.pg
-gnulib.toc
-gnulib.tp
-gnulib.vr
-gnulib.vrs
-gnulib.info
-gnulib.info-1
-gnulib.info-2
-gnulib.info-3
-gnulib.info-4
-gnulib.info-5
-gnulib.info-6
-gnulib.html
-gnulib.pdf
-regex.info
-updated-stamp
--- /dev/null
+
+SUBDIRS = naturaldocs
+
+EXTRA_DIST = lenses.tex unambig.tex syntax/augeas.vim ftdetect/augeas.vim bcprules.sty xpath.txt
+
+vimdir = $(datadir)/vim/vimfiles
+nobase_vim_DATA = syntax/augeas.vim ftdetect/augeas.vim
+
+# PDF targets
+PDFTARGETS=lenses.pdf unambig.pdf
+
+all-local: $(PDFDOCS)
+
+pdfdocs: $(PDFTARGETS)
+%.pdf: %.tex
+ $(PDFLATEX) $<
+
+clean-local:
+ rm -f *.pdf *.aux *.log
--- /dev/null
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+%%% %%%
+%%% BCP's latex tricks for typesetting inference rules %%%
+%%% %%%
+%%% Version 1.4 %%%
+%%% %%%
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+
+%%%
+%%% This package supports two styles of rules: named and unnamed.
+%%% Unnamed rules are centered on the page. Named rules are set so
+%%% that a series of them will have the rules centered in a vertical
+%%% column taking most of the page and the labels right-justified.
+%%% When a label would overlap its rule, the label is moved down.
+%%%
+%%% The width of the column of labels can be varied using a command of the
+%%% form
+%%%
+%%% \typicallabel{T-Arrow-I}
+%%%
+%%% The default setting is:
+%%%
+%%% \typicallabel{}
+%%%
+%%% In other words, the column of rules takes up the whole width of
+%%% the page: rules are centered on the centerline of the text, and no
+%%% extra space is left for the labels.
+%%%
+%%% The minimum distance between a rule and its label can be altered by a
+%%% command of the form
+%%%
+%%% \setlength{\labelminsep}{0.5em}
+%%%
+%%% (This is the default value.)
+%%%
+%%% Examples:
+%%%
+%%% An axiom with a label in the right-hand column:
+%%%
+%%% \infax[The name]{x - x = 0}
+%%%
+%%% An inference rule with a name:
+%%%
+%%% \infrule[Another name]
+%%% {\mbox{false}}
+%%% {x - x = 1}
+%%%
+%%% A rule with multiple premises on the same line:
+%%%
+%%% \infrule[Wide premises]
+%%% {x > 0 \andalso y > 0 \andalso z > 0}
+%%% {x + y + z > 0}
+%%%
+%%% A rule with several lines of premises:
+%%%
+%%% \infrule[Long premises]
+%%% {x > 0 \\ y > 0 \\ z > 0}
+%%% {x + y + z > 0}
+%%%
+%%% A rule without a name, but centered on the same vertical line as rules
+%%% and axioms with names:
+%%%
+%%% \infrule[]
+%%% {x - y = 5}
+%%% {y - x = -5}
+%%%
+%%% A rule without a name, centered on the page:
+%%%
+%%% \infrule
+%%% {x = 5}
+%%% {x - 1 > 0}
+%%%
+%%%
+%%% Setting the flag \indexrulestrue causes an index entry to be
+%%% generated for each named rule.
+%%%
+%%% Setting the flag \suppressrulenamestrue causes the names of all rules
+%%% to be left blank
+%%%
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+
+%%% A switch controlling the sizes of rule names
+\newif\ifsmallrulenames \smallrulenamesfalse
+\newcommand{\smallrulenames}{\smallrulenamestrue}
+\newcommand{\choosernsize}[2]{\ifsmallrulenames#1\else#2\fi}
+
+%%% The font for setting inference rule names
+\newcommand{\rn}[1]{%
+ \ifmmode
+ \mathchoice
+ {\mbox{\choosernsize{\small}{}\sc #1}}
+ {\mbox{\choosernsize{\small}{}\sc #1}}
+ {\mbox{\choosernsize{\tiny}{\small}\sc #1}}
+ {\mbox{\choosernsize{\tiny}{\tiny}\uppercase{#1}}}%
+ \else
+ \hbox{\choosernsize{\small}{}\sc #1}%
+ \fi}
+
+\newif\ifsuppressrulenames
+\suppressrulenamesfalse
+
+\newif\ifbcprulessavespace
+\bcprulessavespacefalse
+
+\newif\ifbcprulestwocol
+\bcprulestwocolfalse
+
+%%% How to display a rule's name to the right of the rule
+\newcommand{\inflabel}[1]{%
+ \ifsuppressrulenames\else
+ \def\lab{#1}%
+ \ifx\lab\empty
+ \relax
+ \else
+ (\rn{\lab})%
+ \fi\fi
+}
+
+%%% Amount of extra space to add before and after a rule
+\newlength{\afterruleskip}
+\setlength{\afterruleskip}{\bigskipamount}
+
+%%% Minimum distance between a rule and its label
+\newlength{\labelminsep}
+\setlength{\labelminsep}{0.2em}
+
+%%% The ``typical'' width of the column of labels: labels are allowed
+%%% to project further to the left if necessary; the rules will be
+%%% centered in a column of width \linewidth - \labelcolwidth
+\newdimen\labelcolwidth
+
+%%% Set the label column width by providing a ``typical'' label --
+%%% i.e. a label of average length
+\newcommand{\typicallabel}[1]{
+ \setbox \@tempboxa \hbox{\inflabel{#1}}
+ \labelcolwidth \wd\@tempboxa
+ }
+\typicallabel{}
+
+%%% A flag controlling generation of index entries
+\newif \ifindexrules \indexrulesfalse
+
+%%% Allocate some temporary registers
+\newbox\@labelbox
+\newbox\rulebox
+\newdimen\ruledim
+\newdimen\labeldim
+
+%%% Put a rule and its label on the same line if this can be done
+%%% without overlapping them; otherwise, put the label on the next
+%%% line. Put a small amount of vertical space above and below.
+\newcommand{\layoutruleverbose}[2]%
+ {\unvbox\voidb@x % to make sure we're in vmode
+ \addvspace{\afterruleskip}%
+
+ \setbox \rulebox \hbox{$\displaystyle #2$}
+
+ \setbox \@labelbox \hbox{#1}
+ \ruledim \wd \rulebox
+ \labeldim \wd \@labelbox
+
+ %%% Will it all fit comfortably on one line?
+ \@tempdima \linewidth
+ \advance \@tempdima -\labelcolwidth
+ \ifdim \@tempdima < \ruledim
+ \@tempdima \ruledim
+ \else
+ \advance \@tempdima by \ruledim
+ \divide \@tempdima by 2
+ \fi
+ \advance \@tempdima by \labelminsep
+ \advance \@tempdima by \labeldim
+ \ifdim \@tempdima < \linewidth
+ % Yes, everything fits on a line
+ \@tempdima \linewidth
+ \advance \@tempdima -\labelcolwidth
+ \hbox to \linewidth{%
+ \hbox to \@tempdima{%
+ \hfil
+ \box\rulebox
+ \hfil}%
+ \hfill
+ \hbox to 0pt{\hss\box\@labelbox}%
+ }%
+ \else
+ %
+ % Will it all fit _UN_comfortably on one line?
+ \@tempdima 0pt
+ \advance \@tempdima by \ruledim
+ \advance \@tempdima by \labelminsep
+ \advance \@tempdima by \labeldim
+ \ifdim \@tempdima < \linewidth
+ % Yes, everything fits, but not centered
+ \hbox to \linewidth{%
+ \hfil
+ \box\rulebox
+ \hskip \labelminsep
+ \box\@labelbox}%
+ \else
+ %
+ % Better put the label on the next line
+ \@tempdima \linewidth
+ \advance \@tempdima -\labelcolwidth
+ \hbox to \linewidth{%
+ \hbox to \@tempdima{%
+ \hfil
+ \box\rulebox
+ \hfil}
+ \hfil}%
+ \penalty10000
+ \hbox to \linewidth{%
+ \hfil
+ \box\@labelbox}%
+ \fi\fi
+
+ \addvspace{\afterruleskip}%
+ \@doendpe % use LaTeX's trick of inhibiting paragraph indent for
+ % text immediately following a rule
+ \ignorespaces
+ }
+
+% Simplified form for when there is no label
+\newcommand{\layoutrulenolabel}[1]%
+ {\unvbox\voidb@x % to make sure we're in vmode
+ \addvspace{\afterruleskip}%
+
+ \setbox \rulebox \hbox{$\displaystyle #1$}
+
+ \@tempdima \linewidth
+ \advance \@tempdima -\labelcolwidth
+ \hbox to \@tempdima{%
+ \hfil
+ \box\rulebox
+ \hfil}%
+
+ \addvspace{\afterruleskip}%
+ \@doendpe % use LaTeX's trick of inhibiting paragraph indent for
+ % text immediately following a rule
+ \ignorespaces
+ }
+
+% Alternate form, for when we need to save space
+%\newcommand{\layoutruleterse}[2]%
+% {\noindent
+% \parbox[b]{0.5\linewidth}{\layoutruleverbose{#1}{#2}}}
+
+\newcommand{\layoutruleterse}[2]%
+ {\setbox \rulebox \hbox{$\displaystyle #2$}
+ \noindent
+ \parbox[b]{0.5\linewidth}
+ {\vspace*{0.4em} \hfill\box\rulebox\hfill~}
+ }
+
+%%% Select low-level layout driver based on \bcprulessavespace flag
+\newcommand{\layoutrule}[2]{%
+ \ifbcprulessavespace
+ \layoutruleterse{#1}{#2}
+ \else
+ \layoutruleverbose{#1}{#2}
+ \fi
+}
+
+%%% Highlighting for new versions of rules
+\newif\ifnewrule \newrulefalse
+\newcommand{\setrulebody}[1]{%
+ \ifnewrule
+ \@ifundefined{HIGHLIGHT}{%
+ \fbox{\ensuremath{#1}}%
+ }{%
+ \HIGHLIGHT{#1}}%
+ \else
+ #1
+ \fi
+}
+
+%%% Commands for setting axioms and rules
+\newcommand{\typesetax}[1]{%
+ \setrulebody{%
+ \begin{array}{@{}c@{}}#1\end{array}}}
+\newcommand{\typesetrule}[2]{%
+ \setrulebody{%
+ \frac{\begin{array}{@{}c@{}}#1\end{array}}%
+ {\begin{array}{@{}c@{}}#2\end{array}}}}
+
+%%% Indexing
+\newcommand{\ruleindexprefix}[1]{%
+ \gdef\ruleindexprefixstring{#1}}
+\ruleindexprefix{}
+\newcommand{\maybeindex}[1]{%
+ \ifindexrules
+ \index{\ruleindexprefixstring#1@\rn{#1}}%
+ \fi}
+
+%%% Setting axioms, with or without names
+\def\infax{\@ifnextchar[{\@infaxy}{\@infaxx}}
+\def\@infaxx#1{%
+ \ifbcprulessavespace $\typesetax{#1}$%
+ \else \layoutrulenolabel{\typesetax{#1}}%
+ \fi\newrulefalse\ignorespaces}
+\def\@infaxy[#1]{\maybeindex{#1}\@infax{\inflabel{#1}}}
+\def\@infax#1#2{\layoutrule{#1}{\typesetax{#2}}\ignorespaces}
+
+%%% Setting rules, with or without names
+\def\infrule{\@ifnextchar[{\@infruley}{\@infrulex}}
+\def\@infrulex#1#2{%
+ \ifbcprulessavespace $\typesetrule{#1}{#2}$%
+ \else \layoutrulenolabel{\typesetrule{#1}{#2}}%
+ \fi\newrulefalse\ignorespaces}
+\def\@infruley[#1]{\maybeindex{#1}\@infrule{\inflabel{#1}}}
+\def\@infrule#1#2#3{\layoutrule{#1}{\typesetrule{#2}{#3}}\ignorespaces}
+
+%%% Miscellaneous helpful definitions
+\newcommand{\andalso}{\quad\quad}
+
+% Abbreviations
+\newcommand{\infabbrev}[2]{\infax{#1 \quad\eqdef\quad #2}}
+
--- /dev/null
+The files tests/cutest.[ch] contain a modified version of CuTest, written
+by Asim Jalis. The original code can be found at
+http://sourceforge.net/projects/cutest/
+
+What follows is the license under which CuTest is distributed, as retrieved
+on 2008-02-29 from
+http://cutest.cvs.sourceforge.net/cutest/cutest/license.txt?revision=1.3
+
+NOTE
+
+The license is based on the zlib/libpng license. For more details see
+http://www.opensource.org/licenses/zlib-license.html. The intent of the
+license is to:
+
+- keep the license as simple as possible
+- encourage the use of CuTest in both free and commercial applications
+ and libraries
+- keep the source code together
+- give credit to the CuTest contributors for their work
+
+If you ship CuTest in source form with your source distribution, the
+following license document must be included with it in unaltered form.
+If you find CuTest useful we would like to hear about it.
+
+LICENSE
+
+Copyright (c) 2003 Asim Jalis
+
+This software is provided 'as-is', without any express or implied
+warranty. In no event will the authors be held liable for any damages
+arising from the use of this software.
+
+Permission is granted to anyone to use this software for any purpose,
+including commercial applications, and to alter it and redistribute it
+freely, subject to the following restrictions:
+
+1. The origin of this software must not be misrepresented; you must not
+claim that you wrote the original software. If you use this software in
+a product, an acknowledgment in the product documentation would be
+appreciated but is not required.
+
+2. Altered source versions must be plainly marked as such, and must not
+be misrepresented as being the original software.
+
+3. This notice may not be removed or altered from any source
+distribution.
--- /dev/null
+/etc/
+|-- DIR_COLORS
+|-- DIR_COLORS.xterm
+ records: line,
+      format: kw ' '+ value, fixed kw + user extensions
+ comments: #.*$
+|-- acpi
+| |-- actions/*
+| `-- events/*
+ shell scripts
+|-- adjtime
+ ???
+|-- aliases
+ records: line
+ format: alias ':' redir
+ comments: #.*$
+|-- anacrontab
+ format: crontab
+|-- apt
+ ???
+|-- at.deny
+ ???
+|-- auto.master
+ records: line
+ format: key [ options ] location
+ comments: #.*$
+|-- auto.misc
+ automounter map
+|-- autofs_ldap_auth.conf
+ XML
+|-- avahi
+ ??? some XML, some shell scripts
+|-- blkid
+| |-- blkid.tab
+ XML
+|-- cron.d/*
+ crontab with users
+|-- cron.daily
+ shell scripts
+|-- cron.deny
+ ???
+|-- cron.hourly
+ shell scripts
+|-- cron.monthly
+ shell scripts
+|-- cron.weekly
+ shell scripts
+|-- crontab
+ crontab with users
+|-- cups
+ custom
+|-- dbus-1
+ XML
+|-- default
+ sysconfig
+|-- depmod.d/*
+ records: line,
+      format: kw ' '+ value, fixed kw + user extensions
+ comments: #.*$
+|-- dev.d
+ ???
+|-- diskdump
+ ???
+|-- dnsmasq.conf
+ records: line
+ kw [ '=' value ]
+ comments: #.*$
+|-- dnsmasq.d/*
+ ???
+|-- environment
+ ???
+|-- exports
+ records: line w/ continuation
+ export_point ( ' ' client [ '(' option ( ',' option ) * ')' ] )
+ comments: #.*$
+|-- fedora-release
+ freeform
+|-- fstab
+ records: line
+ device ' \t'+ mount_point ' \t'+ fs [ option ( ',' option ) * ] freq passno
+ comments: ^#.*$
+|-- group
+ POSIX
+|-- grub.conf -> ../boot/grub/grub.conf
+ records: line
+ format: kw ' '+ value
+ comments: #.*$
+|-- gshadow
+|-- hal
+ ???
+|-- host.conf
+|-- hosts
+ records: line
+ format: ip_addr ' \t'+ (hostname [ ' \t'+ alias ] * )
+ comments: #.*$
+|-- hosts.allow
+|-- hosts.deny
+ ???
+ records: line w/ continuation, ordered
+ format: daemon_list ':' client_list [ ':' shell_command ]
+ comments: #.*$
+|-- htdig
+| `-- htdig.conf.rpmsave
+|-- httpd
+ custom
+|-- idmapd.conf
+ ini-style
+|-- initlog.conf
+ records: line
+ format: kw ' '+ value
+ comments: ^#.*$
+|-- inittab
+ records: line
+ format: id ':' runlevels ':' action ':' process ':'
+ comments: ^#.*$
+|-- iscsi
+ ???
+|-- issue
+|-- issue.net
+ freeform
+|-- jwhois.conf
+ custom (libconfuse ?)
+|-- kdump.conf
+ ???
+|-- koji.conf
+ inistyle
+|-- krb5.conf
+    inistyle with { } subsections
+|-- ld.so.conf
+ ???
+|-- ldap.conf
+ records: line
+ format: kw ' '+ value
+ comments: #.*$
+|-- libaudit.conf
+ records: line
+ format: kw ' '+ '=' ' '+ value
+ comments: #.*$
+|-- libuser.conf
+ inistyle
+|-- libvirt
+ XML
+|-- localtime
+ opaque
+|-- login.defs
+ records: line
+ format: kw ' '+ value
+ comments: #.*$
+|-- logrotate.conf
+|-- logrotate.d
+ custom
+|-- logwatch
+ ???
+|-- mail.rc
+ custom
+|-- mailcap
+ records: line
+ format: kw ' '+ '=' ' '+ value
+ comments: #.*$
+|-- makedev.d
+ ???
+|-- man.config
+ records: line,
+      format: kw ' '+ value, fixed kw + user extensions
+ comments: #.*$
+|-- mime.types
+|-- minicom.users
+|-- mke2fs.conf
+ custom, like krb5.conf
+|-- mock/*.cfg
+ python script
+|-- modprobe.conf
+|-- modprobe.d/*
+ custom
+|-- multipath.conf
+ custom, libconfuse ?
+|-- netconfig
+ line records
+|-- nscd.conf
+ custom
+|-- nsswitch.conf
+|-- ntp
+ custom
+|-- ntp.conf
+ custom
+|-- openldap
+ ???
+|-- opt
+|-- pam.d
+ custom
+|-- pam_pkcs11
+| |-- pam_pkcs11.conf
+| `-- pkcs11_eventmgr.conf
+ libconfuse ?
+|-- passwd
+    POSIX
+|-- postfix
+ ???
+|-- prelink.conf
+ custom
+|-- puppet
+| |-- puppet.conf
+ Ini-style
+|-- quotagrpadmins
+ records: line
+ format: alias ':' redir
+ comments: #.*$
+|-- quotatab
+|-- racoon
+| `-- racoon.conf
+ custom
+|-- readahead.conf
+ sysconfig
+|-- readahead.d
+ path lists
+|-- resolv.conf
+ custom
+|-- rpm
+ random rpm macros (kw ' ' value style)
+|-- rwtab
+|-- rwtab.d
+|-- sasl2
+|-- scsi_id.config
+|-- securetty
+ list of ttys
+|-- security
+ ???
+|-- selinux
+ ???
+|-- sensors.conf
+ custom
+|-- sestatus.conf
+ inistyle
+|-- shadow
+ POSIX
+|-- shells
+ list of shells
+|-- smartd.conf
+ custom
+|-- ssh
+ key/value pairs
+|-- statetab
+|-- statetab.d
+ list of paths
+|-- sudoers
+ custom
+|-- sysconfig
+ sysconfig
+|-- sysctl.conf
+ sysconfig, with special meaning for keys
+|-- syslog.conf
+ key/value, space separated
+|-- udev
+ custom
+|-- updatedb.conf
+ sysconfig
+|-- xinetd.d/*
+ custom
+|-- yum
+|-- yum.conf
+`-- yum.repos.d/*
+ inistyle with includes
--- /dev/null
+These examples show what will hopefully be possible with augeas one day.
+For now, they are more of a guide to writing the public API and augtool.
+
+1) Change default runlevel
+
+ p = match("/system/config/inittab/*/action", "initdefault")
+ set( join(dirname(p), "runlevels"), "3")
+
+ [works]
+
+2) yum
+
+ - enable a repo
+ set("/system/config/yum/myrepo/enabled", "1")
+
+ - turn off mirrorlist and set baseurl
+ rm("/system/config/yum/myrepo/mirrorlist")
+ set("/system/config/yum/myrepo/baseurl", "http://yum.example.com/repo")
+
+ [not implemented, needs ini scanner]
+
+3) add entry to hosts
+ set("/system/config/hosts/i0/ipaddr", "192.168.1.1")
+ set("/system/config/hosts/i0/canonical", "pigiron.example.com")
+ set("/system/config/hosts/i0/aliases", "pigiron pigiron.example")
+
+ [works]
+
+4) add to fstab [tree structure will probably look different]
+ p = "/system/config/fstab"
+ rm (p + "/dev.vg00.local/local")
+ set (p + "/dev.vg00.local/local/device", "/dev/vg00/local")
+ set (p + "/dev.vg00.local/local/mountpoint", "/local")
+ set (p + "/dev.vg00.local/local/fs", "ext3")
+ set (p + "/dev.vg00.local/local/options", "defaults")
+ set (p + "/dev.vg00.local/local/freq", "1")
+ set (p + "/dev.vg00.local/local/pass", "2")
+
+ [requires fstab provider, easy, based on record scanner]
+
+5) Mount /dev/sda1 on /exports/foo rather than /foo
+ p = "/system/config/fstab"
+ move entries from p + "/dev.sda1/foo" to p + "/dev.sda1/exports.foo"
+ set(p + "/dev.sda1/exports.foo/mountpoint", "/foo")
+
+ [same as above]
+
+6) Change gateway on all static interfaces
+ for p in find("/system/config/sysconfig/interfaces/**/gateway", "192.168.0.1")
+ set(p, "192.168.0.254")
+
+ [requires key/value scanner]
+
+7) Setup a static interface from scratch
+ p = "/system/config/sysconfig/interfaces/lo"
+ rm (p)
+ set (p + "/device", "lo")
+ set (p + "/ipaddr", "127.0.0.1")
+ ...
+
+ [requires key/value scanner]
+
+8) Turn on SELinux
+ set ("/system/config/sysconfig/selinux", "enforcing")
+
+ [requires key/value scanner]
--- /dev/null
+au BufNewFile,BufRead *.aug set filetype=augeas
--- /dev/null
+\documentclass[12pt,fleqn]{amsart}
+
+\usepackage{amsmath}
+\usepackage{xspace}
+\usepackage{amssymb}
+\usepackage{bcprules}
+
+\newcommand{\ensmath}[1]{\ensuremath{#1}\xspace}
+
+\newcommand{\opnam}[1]{\ensmath{\operatorname{\mathit{#1}}}}
+\newcommand{\nget}{\opnam{get}}
+\newcommand{\nput}{\opnam{put}}
+\newcommand{\nparse}{\opnam{parse}}
+\newcommand{\ncreate}{\opnam{create}}
+\newcommand{\nkey}{\opnam{key}}
+\newcommand{\lget}[1]{\opnam{get}{#1}}
+\newcommand{\lparse}[1]{\nparse{#1}}
+\newcommand{\lput}[2]{\opnam{put}{#1}\,{(#2)}}
+\newcommand{\lcreate}[2]{\opnam{create}{#1}\,{(#2)}}
+\newcommand{\lkey}[1]{\nkey{#1}}
+
+\newcommand{\suff}{\ensmath{\operatorname{suff}}}
+\newcommand{\pref}{\ensmath{\operatorname{pref}}}
+\newcommand{\lenstype}[3][K]{\ensmath{{#2}\stackrel{#1}{\Longleftrightarrow}{#3}}}
+\newcommand{\tree}[1]{\ensmath{[#1]}}
+\newcommand{\niltree}{\ensmath{[]}}
+\newcommand{\nildict}{\ensmath{\{\!\}}}
+\newcommand{\Regexp}{\ensmath{\mathcal R}}
+\newcommand{\reglang}[1]{\ensmath{[\![{#1}]\!]}}
+\newcommand{\lens}[1]{\opnam{#1}}
+\newcommand{\eps}{\ensmath{\epsilon}}
+\newcommand{\conc}[2]{\ensmath{#1\cdot #2}}
+\newcommand{\uaconc}[2]{\ensmath{#1\cdot^{!} #2}}
+\newcommand{\xconc}[2]{\ensmath{#1\odot #2}}
+\newcommand{\dconc}[2]{\ensmath{#1\oplus #2}}
+\newcommand{\alt}[2]{\ensmath{#1\,|\,#2}}
+\newcommand{\cstar}[1]{\ensmath{#1^*}}
+\newcommand{\uastar}[1]{\ensmath{#1^{!*}}}
+\newcommand{\Trees}{\ensmath{\mathcal T}}
+\newcommand{\Words}{\ensmath{\Sigma^*_{\rhd}}}
+\newcommand{\tmap}[2]{\ensmath{#1\mapsto #2}}
+\newcommand{\tmapv}[3]{\ensmath{#1 = #2\mapsto #3}}
+\newcommand{\tmaptt}[2]{\ensmath{{\mathtt #1}\mapsto {\mathtt #2}}}
+\newcommand{\dom}[1]{\ensmath{\mathrm{dom}(#1)}}
+\newcommand{\List}[1]{\ensmath{\mathtt{List(#1)}}}
+\newcommand{\redigits}{\ensmath{\mathtt{D}}}
+\newcommand{\Skel}{\ensmath{\mathcal{S}}}
+\newcommand{\Powerset}{\ensmath{\mathcal{P}}}
+\newcommand{\Keys}{\ensmath{\mathcal{K}}}
+\newcommand{\key}[1]{\ensmath{\kappa(#1)}}
+\newcommand{\lto}{\ensmath{\longrightarrow}}
+\newcommand{\Dictlangs}{\ensmath{\List{\Powerset(\Skel)}}}
+\newcommand{\lookup}{\opnam{lookup}}
+
+\newtheorem{theorem}{Theorem}
+\newtheorem{lemma}[theorem]{Lemma}
+{\theoremstyle{definition}
+ \newtheorem{defn}{Definition}
+ \newtheorem*{remark}{Remark}
+ \newtheorem*{example}{Example}}
+
+\begin{document}
+
+\section{Trees}
+We use trees as the abstract view for our lenses; since the concrete data
+structure, strings, is ordered, and to support some properties of lenses
+that seem sensible intuitively, the trees differ from garden-variety trees
+in a number of ways:
+\begin{itemize}
+  \item a tree node consists of three pieces of data: a \emph{label}, a
+    \emph{value} and an ordered list of \emph{children}, each of which is
+    itself a tree.
+ \item the labels for tree nodes are either words not containing a slash
+ {\tt /} or the special symbol $\rhd$; in the implementation $\rhd$
+ corresponds to {\tt NULL}. The latter is used to indicate
+ that an entry in the tree corresponds to text that was deleted.
+ \item the children of a tree node form a list of subtrees, i.e. are
+ ordered. In addition, several subtrees in such a list may use the same
+ label. This makes it possible to accommodate concrete files where
+ entries that are logically connected are stored scattered between
+ unrelated entries like the {\tt AcceptEnv} entries in {\tt
+ sshd\_config}.
+\end{itemize}
+
+We write $\tmapv{k}{v}{t}$ for a tree node with label $k$, value $v$ and
+children $t$. If it is clear from the context, or unimportant, $v$ will
+often be omitted.
+
+\subsection{Tree labels}
+We take the tree labels from the set of \emph{path components} $\Keys =
+(\Sigma \setminus \{ {\mathtt /} \})^+ \cup \{ \rhd \}$, that is,
+a tree label is either a word not containing a slash or the special symbol
+$\rhd$. For tree labels, we define a partial concatenation operator
+$\odot$, as
+\begin{equation*}
+ \xconc{k_1}{k_2} = \begin{cases}
+ k_1 & \text{if } k_2 = \rhd\\
+ k_2 & \text{if } k_1 = \rhd\\
+ \text{undefined} & \text{otherwise}
+ \end{cases}
+\end{equation*}
+
+Defining tree labels in this way (1) guarantees that there is a
+one--to--one correspondence between a tree label and the word it came from
+in the concrete text and (2) avoids any pain in splitting tree labels in
+the \nput direction.
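+
+For instance, $\xconc{\mathtt{foo}}{\rhd} = \mathtt{foo}$ and
+$\xconc{\rhd}{\mathtt{foo}} = \mathtt{foo}$, while
+$\xconc{\mathtt{foo}}{\mathtt{bar}}$ is undefined: at most one of the two
+labels may carry concrete text.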
+
+\subsection{Trees}
+The set of \emph{ordered trees} \Trees over $\Words$ is recursively
+defined as
+\begin{itemize}
+\item The empty tree $\rhd$
+\item For any words $k\in\Keys$, $v\in\Words$ and any tree $t\in\Trees$,
+ $\tree{\tmapv{k}{v}{t}}$ is in \Trees
+\item For any $n$ and trees $\tree{\tmapv{k_i}{v_i}{t_i}} \in \Trees$,
+ the list
+ $[\tmapv{k_1}{v_1}{t_1};\tmapv{k_2}{v_2}{t_2};\ldots;\tmapv{k_n}{v_n}{t_n}]$
+ is in \Trees
+\end{itemize}
+
+Note that this allows the same key to be used multiple times in a tree; for
+example, $[\tmaptt{a}{x}; \tmaptt{a}{x}]$ is a valid tree and
+different from $[\tmaptt{a}{x}]$.
+
+The domain of a tree $\dom{t}$ is the list of all its labels, i.e. an
+element of $\List{\Keys}$; for a tree $t =
+\tree{\tmap{k_1}{t_1};\ldots;\tmap{k_n}{t_n}}$, $\dom{t} =
+ [k_1;\ldots;k_n].$
+
+%% Why ?
+%A tree $t$ can also be viewed as a total map from \Words to
+%$\List{\Trees\cup\Words}$, defined as
+%\begin{equation*}
+% t(k) =
+% \begin{cases}
+% v & \text{for } t = \tree{v}\\
+% [t_1] :: t_2(k) & \text{for } t = t' :: t_2
+% \text{ and } t' = \tree{\tmap{k}{t_1}}\\
+% t_2(k) & \text{for } t = t' :: t_2
+% \text{ and } t' = \tree{\tmap{k'}{t_1}} \text{with } k\neq k'
+% \end{cases}
+%\end{equation*}
+%
+%Note that this definition implies that for a tree $t = \tree{\tmap{k}{v}}$
+%with $k,v\in\Words$, $t(k)=[v].$ For the tree $t_{\mathtt{aa}} = [\tmaptt{a}{x};
+% \tmaptt{a}{x}]$, $t_{\mathtt{aa}}(\mathtt a) = [\mathtt x; \mathtt x]$.
+
+The concatenation of trees $\conc{t_1}{t_2}$ is simply list concatenation.
+
+For sets $K\subset\Keys$ and $T\subset\Trees$, $\tree{\tmap{K}{T}}$
+denotes the set of all trees $t = \tree{\tmap{k}{t'}}$ with $k\in K$, $t'
+\in T$.
+
+\subsection{Concatenation and iteration}
+For a tree $t\in \Trees$, we define its underlying \emph{key language}
+$\key{t}$ by
+\begin{equation*}
+ \key{t} = \begin{cases}
+ {\tt /} & \text{if } t = \rhd \text{ or } t = \tree{\tmap{\rhd}{t_1}}\\
+ k \cdot {\mathtt /} & \text{if } t = \tree{\tmap{k}{t_1}}\\
+ k\cdot {\mathtt /}\cdot \key{t_2} & \text{for } t = \tree{\tmap{k}{t_1};t_2}
+ \end{cases}
+\end{equation*}
+where $\conc{k_1}{k_2}$ is normal string concatenation. The key language of
+a set of trees $\key{T}$ is defined as $\{\key{t} \mid t \in T\}$.
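Under the same illustrative list-of-triples encoding of trees (an assumption of this sketch, not part of the formalism), the key language of a single tree can be computed as:

```python
def key_of(tree):
    """Concatenate each top-level label followed by '/'.

    The empty tree contributes just '/', and an anonymous (None) label
    contributes '/' with no key in front of it.
    """
    if not tree:
        return "/"
    return "".join(("" if key is None else key) + "/"
                   for key, _value, _children in tree)

print(key_of([]))                                # '/'
print(key_of([("a", "x", []), ("b", "y", [])]))  # 'a/b/'
```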
+
+In analogy to languages, we call two tree sets $T_1, T_2 \subseteq \Trees$
+\emph{unambiguously concatenable} if the key languages $\key{T_1}$ and
+$\key{T_2}$ are unambiguously concatenable. A tree set $T\subseteq\Trees$ is
+\emph{unambiguously iterable} if the underlying key language $\key{T}$ is
+unambiguously iterable.
+
+\subsection{Public tree operations}
+We need the public API to support the following operations, where
+$P\subseteq \Sigma^*$ is the set of paths.
+
+\begin{itemize}
+ \item $\opnam{lookup}(p, t): P \times \Trees \lto \Trees$ finds the tree
+ with path $p$
+ \item $\opnam{assign}(p, v, t): P \times \Sigma^* \times \Trees \lto
+ \Trees$ assigns the value $v$ to the tree node $p$
+ \item $\opnam{remove}(p, t): P \times \Trees \lto \Trees$ removes the
+ subtree denoted by $p$
+ \item $\opnam{get}(p, t): P \times \Trees \lto \Sigma^*$ looks up the
+ value associated with $p$
+ \item $\opnam{ls}(p, t): P \times \Trees \lto \List{\Trees}$ lists all
+ the subtrees underneath $p$
+\end{itemize}
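A minimal sketch of $\opnam{lookup}$ and $\opnam{get}$ over the illustrative list-of-triples tree encoding; the `'/'`-separated path syntax and the helper names are assumptions of this sketch:

```python
def lookup(path, tree):
    """Follow a '/'-separated path of keys; return the matching
    (key, value, children) node, or None if some step does not match."""
    node = None
    for step in path.split("/"):
        for entry in tree:
            if entry[0] == step:
                node, tree = entry, entry[2]
                break
        else:
            return None
    return node

def get(path, tree):
    """The value stored at path, or None."""
    node = lookup(path, tree)
    return None if node is None else node[1]

t = [("etc", None, [("hosts", None, [("1", "127.0.0.1", [])])])]
print(get("etc/hosts/1", t))  # 127.0.0.1
print(get("etc/passwd", t))   # None
```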
+
+\section{Lenses}
+
+Lenses map between strings in the regular language $C$ and trees
+$T\subseteq\Trees$. They can also produce keys from a regular language $K$;
+these keys are used by the $\lens{subtree}$ lens to construct new
+trees.
+
+A lens $l$ consists of the functions $\nget$, $\nput$, $\ncreate$, and
+$\nparse$.
+
+Lenses here are written as $l:\lenstype[K,S,L]{C}{T}$ where $K$ and $C$ are
+regular languages and $T\subseteq\Trees$. The skeletons $S\subseteq\Skel$
+and dictionary type specifications $L$ are as for Boomerang (really~??).
+Intuitively, the notation says that $l$ is a lens that takes strings from
+$C$ and transforms them into trees in $T$. Generally,
+
+\infrule{C\subseteq\Words \andalso K\subseteq\List{\Keys}
+ \andalso T\subseteq\Trees
+ \andalso S\subseteq\Skel \andalso L \in \Dictlangs}
+ {\lens{l} \in \lenstype[K,S,L]{C}{T}}
+
+\begin{align*}
+ \nget &\in C \lto T\\
+ \nparse &\in C \lto K \times S \times D(L)\\
+ \nput &\in T \lto K \times S \times D(L) \lto C \times D(L)\\
+ \ncreate &\in T \lto K \times D(L) \lto C \times D(L)
+\end{align*}
+
+\subsection{const}
+
+The lens $\lens{const}\,E\,t\,u$ maps words matching $E$ in the \nget
+direction to a fixed tree $t$ and maps that fixed tree $t$ back in the
+\nput direction. When text needs to be created from $t$, it produces the
+default word $u$.
+
+\infrule{E\in\Regexp \andalso t\in\Trees \andalso u\in\reglang{E} \andalso L \in \Dictlangs}
+ {\lens{const} E\,t\,u \in \lenstype[\rhd,\reglang{E},L]{\reglang{E}}{\{t\}}}
+
+\begin{align*}
+ \lget{c} &= t\\
+ \lparse{c} &= \rhd, c, \nildict\\
+ \lput{t}{k,s,d} &= s, d\\
+ \lcreate{t}{k,d} &= u, d
+\end{align*}
+
+The \lens{del} lens is syntactic sugar: $\lens{del} E\,u = \lens{const}
+E\,\niltree\,u$.
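A Python sketch of the four component functions of $\lens{const}$; the closure-based packaging is an assumption, only the semantics mirror the rules above. $\lens{del}$ then falls out as $\lens{const}$ with the empty tree:

```python
import re

def const(pattern, tree, default):
    """const E t u: get maps any match of E to the fixed tree t; parse keeps
    the matched text as the skeleton; put restores that skeleton; create
    emits the default word u."""
    def get(c):
        assert re.fullmatch(pattern, c), "input must match E"
        return tree
    def parse(c):
        return None, c, {}       # key rhd, skeleton = the original text
    def put(t, key, skel, d):
        return skel, d           # the old text survives unchanged
    def create(t, key, d):
        return default, d        # no old text: fall back to the default
    return get, parse, put, create

# del E u = const E [] u: text that maps to the empty tree
get, parse, put, create = const(r"#.*", [], "# comment")
_key, skel, d = parse("# old text")
print(put([], None, skel, d)[0])   # '# old text'
print(create([], None, d)[0])      # '# comment'
```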
+
+\subsection{copy}
+
+Copies a word into a leaf.
+
+\infrule{E\in\Regexp \andalso L\in\Dictlangs}
+ {\lens{copy}\in\lenstype[\rhd,\reglang{E},L]{\reglang{E}}{\tree{\reglang{E}}}}
+
+\begin{align*}
+ \lget{c} &= \tree{c}\\
+ \lparse{c} &= \rhd, c, \nildict\\
+ \lput{\tree{v}}{k,s,d} &= v, d\\
+ \lcreate{\tree{v}}{k,d} &= v, d
+\end{align*}
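In the same illustrative encoding, $\lens{copy}$ wraps the matched word in a leaf on the way in and reads it back out on the way out, so edits to the leaf reach the concrete text (the function names are assumptions of this sketch):

```python
def copy_get(c):
    return [(None, c, [])]            # a leaf tree holding the word itself

def copy_put(tree, key, skel, d):
    (_key, value, _children), = tree  # exactly one leaf expected
    return value, d

t = copy_get("127.0.0.1")
print(copy_put(t, None, "127.0.0.1", {})[0])       # '127.0.0.1' (round trip)
edited = [(None, "192.168.0.1", [])]               # the user edited the leaf
print(copy_put(edited, None, "127.0.0.1", {})[0])  # '192.168.0.1'
```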
+
+\subsection{seq}
+
+Gets the next value from a sequence as the key. We assume a generator
+$\mathtt{nextval}: \Sigma^* \to \mathbb{N}$ that, for each word $w$,
+returns successive numbers on each invocation. \redigits is the regular
+expression {\tt [0-9]+} that matches nonnegative integers.
+
+\infrule{w\in\Sigma^* \andalso L\in\Dictlangs \andalso n = \mathtt{nextval}(w)}
+ {\lens{seq} w\in\lenstype[\reglang{\redigits},\eps,L]{\eps}{\niltree}}
+
+\begin{align*}
+ \lget{\eps} &= \niltree\\
+ \lparse{\eps} &= n, \eps, \nildict\\
+ \lput{\niltree}{k,\eps,d} &= \eps, d\\
+ \lcreate{\niltree}{k,d} &= \eps,d
+\end{align*}
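The assumed generator $\mathtt{nextval}$ can be sketched with one counter per word; the per-word scoping is an assumption consistent with $\lens{seq}$ taking a word $w$:

```python
import itertools

_counters = {}

def nextval(w):
    """Return 1, 2, 3, ... on successive invocations with the same word w."""
    return next(_counters.setdefault(w, itertools.count(1)))

print(nextval("entry"), nextval("entry"), nextval("other"))  # 1 2 1
```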
+
+\subsection{label}
+
+Uses the fixed word $w$ as the tree label.
+
+\infrule{w\in\Sigma^* \andalso L\in\Dictlangs}
+ {\lens{label} w\in\lenstype[w,\eps,L]{\eps}{\niltree}}
+
+\begin{align*}
+ \lget{\eps} &= \niltree\\
+ \lparse{\eps} &= w,\eps,\nildict\\
+ \lput{\niltree}{w,\eps,d} &= \eps,d\\
+ \lcreate{\niltree}{k,d} &= \eps,d
+\end{align*}
+
+\subsection{key}
+
+Uses the parsed word as the tree label.
+
+\infrule{E\in\Regexp \andalso L\in\Dictlangs}
+ {\lens{key} E\in\lenstype[\reglang{E},\eps,L]{\reglang{E}}{\niltree}}
+
+\begin{align*}
+ \lget{c} &= \niltree\\
+ \lparse{c} &= c,\eps,\nildict\\
+ \lput{\niltree}{c,\eps,d} &= c,d\\
+ \lcreate{\niltree}{c, d} &= c,d
+\end{align*}
+
+\subsection{subtree}
+
+The subtree combinator $[l]$ constructs a subtree from $l$.
+
+\infrule{l\in\lenstype[K,S,L]{C}{T}}
+ {[l]\in\lenstype[\rhd,\square,S::L]{C}{\tree{\tmap{K}{T}}}}
+
+\begin{align*}
+ \lget{c} &= \tree{\tmap{l.\lkey{c}}{l.\lget{c}}}\\
+ \lparse{c} &= \rhd, \square, \{\tmap{l.\lkey{c}}{[l.\lparse{c}]}\}\\
+ \lput{\tree{\tmap{k}{t}}}{k',\square,d} &=
+ \begin{cases}
+ \pi_1\left(l.\lput{t}{k,\bar{s},\bar{d}}\right),d' & \text{if } (\bar{k}, \bar{s}, \bar{d}), d' = \lookup(k, d)\\
+ \pi_1\left(l.\lcreate{t}{k,\nildict}\right), d & \text{if } \lookup(k,
+ d) \text{ undefined}
+ \end{cases}\\
+ \lcreate{\tree{\tmap{k}{t}}}{k',d} &= \lput{\tree{\tmap{k}{t}}}{k',\square,d}
+\end{align*}
+
+We store a triple $(k,s,d)$ in dictionaries, but we don't use the stored
+key $k$.
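A sketch of the dictionary-driven \nput of $[l]$: the dictionary maps keys to lists of stored entries, and \nput consumes a stored entry for the tree's key if one exists, falling back to \ncreate otherwise. The toy `line` lens and all names are assumptions of this sketch:

```python
def subtree_put(lens, key, tree, dct):
    """Reuse text previously parsed for this key; create from scratch
    when the key is new."""
    entries = dct.get(key)
    if entries:
        skel, sub = entries.pop(0)           # stored (skeleton, sub-dict)
        c, _ = lens["put"](tree, key, skel, sub)
    else:
        c, _ = lens["create"](tree, key, {})  # new key: no old text
    return c, dct

# Toy lens for 'key=value' lines; the skeleton keeps the old trailing text.
line = {
    "put":    lambda t, k, s, d: (k + "=" + t + s, d),
    "create": lambda t, k, d: (k + "=" + t + "\n", d),
}
d = {"Port": [(" # ssh\n", {})]}
print(subtree_put(line, "Port", "2222", d)[0])  # 'Port=2222 # ssh\n'
print(subtree_put(line, "User", "root", d)[0])  # 'User=root\n'
```

This is what lets reordered entries keep their original formatting: the stored text is found by key, not by position.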
+
+\subsection{concat}
+
+The concat combinator $\conc{l_1}{l_2}$ joins two trees.
+
+\infrule{l_1\in\lenstype[K_1,S_1,L]{C_1}{T_1}
+ \andalso l_2\in\lenstype[K_2,S_2,L]{C_2}{T_2}
+ \andalso \uaconc{C_1}{C_2}
+ \andalso \uaconc{\key{T_1}}{\key{T_2}}}
+ {\conc{l_1}{l_2}\in\lenstype[\conc{K_1}{K_2},S_1\times S_2, L]{\conc{C_1}{C_2}}{\conc{T_1}{T_2}}}
+
+\begin{align*}
+ \lget{(\conc{c_1}{c_2})} &= \conc{(l_1.\lget{c_1})}{(l_2.\lget{c_2})}\\
+  \lparse{\conc{c_1}{c_2}} &= \xconc{k_1}{k_2}, (s_1,s_2), \dconc{d_1}{d_2} \\
+  & \text{where } k_i, s_i, d_i = l_i.\lparse{c_i} \\
+ \lput{\conc{t_1}{t_2}}{k, (s_1,s_2), d_1} &= \conc{c_1}{c_2}, d_3 \\
+ & \text{where } c_i, d_{i+1} = l_i.\lput{t_i}{k, s_i, d_i} \\
+ \lcreate{\conc{t_1}{t_2}}{k, d_1} &= \conc{c_1}{c_2}, d_3\\
+ & \text{where } c_i, d_{i+1} = l_i.\lcreate{t_i}{k, d_i}
+\end{align*}
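A minimal sketch of \nput for concatenation: split the tree in two, run each sub-lens on its half, and thread the dictionary left to right so entries consumed by $l_1$ are no longer available to $l_2$. The tuple packaging and the toy lenses are assumptions:

```python
def concat_put(l1, l2, halves, key, skels, d):
    """Put for l1 . l2: the dictionary returned by l1 is fed into l2."""
    (t1, t2), (s1, s2) = halves, skels
    c1, d = l1["put"](t1, key, s1, d)
    c2, d = l2["put"](t2, key, s2, d)
    return c1 + c2, d

upper = {"put": lambda t, k, s, d: (t.upper(), d)}
plain = {"put": lambda t, k, s, d: (t, d)}
print(concat_put(upper, plain, ("ab", "cd"), None, ("", ""), {})[0])  # 'ABcd'
```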
+
+\subsection{union}
+
+The union combinator $\alt{l_1}{l_2}$ chooses between $l_1$ and $l_2$,
+depending on which language the input belongs to.
+
+\infrule{l_i\in\lenstype[K_i,S_i,L]{C_i}{T_i}\text{ for } i=1,2
+ \andalso C_1 \cap C_2 = \emptyset
+ \andalso S_1 \cap S_2 = \emptyset
+ \andalso \key{T_1} \cap \key{T_2} = \emptyset}
+ {\alt{l_1}{l_2}\in\lenstype[K_1\cup K_2, S_1\cup S_2, L]{C_1 \cup C_2}{T_1 \cup T_2}}
+
+\begin{align*}
+ \lget{c} &=
+ \begin{cases}
+ l_1.\lget{c} & \text{if } c \in C_1\\
+ l_2.\lget{c} & \text{if } c \in C_2\\
+ \end{cases}
+ \\
+ \lparse{c} &=
+ \begin{cases}
+ l_1.\lparse{c} & \text{if } c \in C_1\\
+ l_2.\lparse{c} & \text{if } c \in C_2\\
+ \end{cases}
+ \\
+ \lput{t}{k, s, d} &=
+ \begin{cases}
+ l_1.\lput{t}{k, s, d} & \text{if } t, s \in T_1 \times S_1 \\
+ l_2.\lput{t}{k, s, d} & \text{if } t, s \in T_2 \times S_2 \\
+ l_1.\lcreate{t}{k, d} & \text{if } t, s \in (T_1\setminus T_2)
+ \times S_2\\
+ l_2.\lcreate{t}{k, d} & \text{if } t, s \in (T_2\setminus T_1)
+ \times S_1
+ \end{cases}\\
+ \lcreate{t}{k, d} &=
+ \begin{cases}
+ l_1.\lcreate{t}{k, d} & \text{if } t\in T_1\\
+ l_2.\lcreate{t}{k, d} & \text{if } t\in T_2\setminus T_1
+ \end{cases}
+\end{align*}
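The interesting part of union's \nput is the case analysis: when tree and skeleton come from the same branch, the old text can be reused; when they disagree, the entry has switched branches and \nput must fall back to \ncreate, as in the third and fourth cases above. A sketch with two tiny const-style lenses (the dict packaging and the `tag` field used to identify the originating branch are assumptions):

```python
def const(tag, tree, default):
    """A tiny const-style lens; parse stamps the skeleton with a branch tag."""
    return {
        "tree": tree,
        "tag": tag,
        "parse": lambda c: (None, ("S", tag, c), {}),
        "put": lambda t, k, s, d: (s[2], d),       # reuse the old text
        "create": lambda t, k, d: (default, d),    # emit the default word
    }

def union_put(l1, l2, t, k, s, d):
    in_t1, in_s1 = (t == l1["tree"]), (s[1] == l1["tag"])
    if in_t1 == in_s1:              # tree and skeleton agree on the branch
        return (l1 if in_t1 else l2)["put"](t, k, s, d)
    # Branch switched: the old concrete text belongs to the other lens,
    # so fall back to create.
    return (l1 if in_t1 else l2)["create"](t, k, d)

yes = const("yes|on", [("v", "true", [])], "yes")
no  = const("no|off", [("v", "false", [])], "no")
_, s, d = yes["parse"]("on")
print(union_put(yes, no, [("v", "true", [])], None, s, d)[0])   # 'on'
print(union_put(yes, no, [("v", "false", [])], None, s, d)[0])  # 'no'
```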
+
+\subsection{star}
+
+The star combinator $\cstar{l}$ iterates $l$ over a sequence of matches.
+
+\infrule{l\in\lenstype[K,S,L]{C}{T} \andalso \uastar{C} \andalso \uastar{\key{T}}}
+ {\cstar{l}\in\lenstype[\cstar{K},\List{S},L]{\cstar{C}}{\cstar{T}}}
+
+\begin{align*}
+ \lget{c_1\cdots c_n} &= (l.\lget{c_1}) \cdots (l.\lget{c_n})\\
+ \lparse{c_1\cdots c_n} &= k_1\odot\ldots\odot k_n, [s_1;\ldots;s_n],
+ d_1\oplus\ldots\oplus d_n\\
+ & \text{where } k_i,s_i,d_i = l.\lparse{c_i}\\
+ \lput{t_1\cdots t_n}{k, [s_1;\cdots;s_m], d_1} &= (c_1 \cdots c_n), d_{n+1}\\
+ \text{where } &c_i, d_{i+1} =
+ \begin{cases}
+ l.\lput{t_i}{k, s_i, d_i} & \text{for } 1 \leq i \leq \min(m,n)\\
+      l.\lcreate{t_i}{k, d_i} & \text{for } m+1 \leq i \leq n
+ \end{cases}\\
+ \lcreate{t_1 \cdots t_n}{k, d_1} &= (c_1\cdots c_n), d_{n+1}\\
+ & \text{where } c_i, d_{i+1} = l.\lcreate{t_i}{k, d_i}
+\end{align*}
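The \nput case above reuses stored skeletons while they last and creates fresh text for any trees beyond them; extra skeletons are simply dropped. A sketch with a toy item lens whose skeleton is the old text (all names here are assumptions):

```python
def star_put(lens, trees, key, skels, d):
    """Put for l*: reuse a stored skeleton per tree while skeletons remain,
    then fall back to create for the extra trees."""
    out = []
    for i, t in enumerate(trees):
        if i < len(skels):
            c, d = lens["put"](t, key, skels[i], d)
        else:
            c, d = lens["create"](t, key, d)
        out.append(c)
    return "".join(out), d

# put keeps the original separator from the skeleton; create uses a default.
item = {
    "put":    lambda t, k, s, d: (t + s[-1], d),
    "create": lambda t, k, d: (t + ";", d),
}
skels = ["a,", "b,"]                   # skeletons parsed from 'a,b,'
print(star_put(item, ["A", "B", "C"], None, skels, {})[0])  # 'A,B,C;'
```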
+
+We want reordering and insertion in the middle to be reflected: if
+$\lget{\conc{c_1}{c_2}} = \conc{t_1}{t_2}$, we want
+$\lput{(\conc{t_2}{t_1})}{\conc{c_1}{c_2}} =
+\conc{l.\lput{t_2}{c_2}}{l.\lput{t_1}{c_1}}$. This can only happen if the
+information to be reordered is in subtrees. In particular, comment lines
+need to become their own subtree, with some support from the language to
+create `hidden' entries. The simplest approach is to allow {\tt NULL} as
+the key for a subtree and to ignore such tree entries in the public API.
+
+We need to split a tree $t\in T$ into subtrees according to~?? Keeping a
+fake `slot' $\rhd$ around for text that did not produce a tree should help
+with that.
+
+For $K^*$ to make any sense, we must have $\rhd\in K$, and the application
+of $(l^*).\nparse$ must return $\rhd$ for all except at most one application.
+
+\section{Regular expressions and languages}
+
+For type checking, we need to compute the following properties of regular
+languages $R$, $R_1$, $R_2$:
+\begin{itemize}
+\item decide unambiguous concatenation $\uaconc{R_1}{R_2}$ and compute
+ $\conc{R_1}{R_2}$
+\item decide unambiguous iteration $\uastar{R}$ and compute $R^*$
+\item disjointness $R_1 \cap R_2 = \emptyset$ (we don't need general
+ intersection, though I don't know of a quicker way to decide
+ disjointness)
+\item compute the regular language $R = \reglang{E}$ for a regular
+ expression $E\in\Regexp$
+\end{itemize}
+\end{document}
+
+%% Local Variables:
+%% TeX-parse-self: t
+%% TeX-auto-save: t
+%% TeX-master: "lenses.tex"
+%% compile-command: "pdflatex -file-line-error -halt-on-error lenses.tex"
+%% End:
--- /dev/null
+
+EXTRA_DIST = $(wildcard conf/Augeas.css conf/c_api/*.txt) \
+ $(wildcard conf/lenses/*.txt) \
+ Modules/NaturalDocs/Languages/Augeas.pm
+
+ND_CONF=$(srcdir)/conf
+ND_OUTPUT=output
+ND_STYLE=../Augeas
+
+ND_PERL5LIB=$(abs_srcdir)/Modules
+ND_PERL5OPT='-MNaturalDocs::Languages::Augeas'
+
+if ND_ENABLED
+all-local: NaturalDocs
+endif
+
+NaturalDocs: NDLenses NDCAPI
+
+env:
+ echo LIB $(ND_PERL5LIB)
+ echo OPT $(ND_PERL5OPT)
+ test -n "$$PERL5OPT" && ND_PERL5OPT="$(ND_PERL5OPT) $$PERL5OPT" || ND_PERL5OPT=$(ND_PERL5OPT); \
+ test -n "$$PERL5LIB" && ND_PERL5LIB="$(ND_PERL5LIB):$$PERL5LIB" || ND_PERL5LIB=$(ND_PERL5LIB); \
+ PERL5LIB=$$ND_PERL5LIB PERL5OPT=$$ND_PERL5OPT env | grep PERL
+
+NDLenses: NDConf
+ @mkdir -p $(ND_OUTPUT)/lenses
+ @(echo "Format lens documentation"; \
+ test -n "$$PERL5OPT" && ND_PERL5OPT="$(ND_PERL5OPT) $$PERL5OPT" || ND_PERL5OPT=$(ND_PERL5OPT); \
+ test -n "$$PERL5LIB" && ND_PERL5LIB="$(ND_PERL5LIB):$$PERL5LIB" || ND_PERL5LIB=$(ND_PERL5LIB); \
+ PERL5LIB=$$ND_PERL5LIB PERL5OPT=$$ND_PERL5OPT \
+ $(ND_PROG) -p conf/lenses \
+ -i $(top_srcdir)/lenses \
+ -o $(ND_FORMAT) $(ND_OUTPUT)/lenses \
+ -s $(ND_STYLE))
+
+NDCAPI: NDConf
+ @mkdir -p $(ND_OUTPUT)/c_api
+ @(echo "Format C API documentation"; \
+ test -n "$$PERL5OPT" && ND_PERL5OPT="$(ND_PERL5OPT) $$PERL5OPT" || ND_PERL5OPT=$(ND_PERL5OPT); \
+ test -n "$$PERL5LIB" && ND_PERL5LIB="$(ND_PERL5LIB):$$PERL5LIB" || ND_PERL5LIB=$(ND_PERL5LIB); \
+ $(ND_PROG) -p conf/c_api \
+ -i $(top_srcdir)/src \
+ -o $(ND_FORMAT) $(ND_OUTPUT)/c_api \
+ -s $(ND_STYLE))
+
+NDConf:
+	@(if test ! -d conf; then \
+	    cp -pr $(ND_CONF) conf; \
+	  fi)
+
+clean-local:
+ rm -rf output conf/Data
+ rm -rf $(ND_CONF)/c_api/Data $(ND_CONF)/lenses/Data
--- /dev/null
+###############################################################################
+#
+# Class: NaturalDocs::Languages::Augeas
+#
+###############################################################################
+#
+# A subclass to handle the Augeas language.
+#
+###############################################################################
+
+# This file is part of Natural Docs, which is Copyright (C) 2003-2008 Greg Valure
+# Natural Docs is licensed under the GPL
+
+use strict;
+use integer;
+
+package NaturalDocs::Languages::Augeas;
+
+use base 'NaturalDocs::Languages::Simple';
+
+
+my $pastFirstLet;
+my $pastFirstTest;
+
+
+sub OnCode {
+ my ($self, @params) = @_;
+
+ $pastFirstLet = 0;
+ $pastFirstTest = 0;
+
+ return $self->SUPER::OnCode(@params);
+};
+
+
+# Override NormalizePrototype and ParsePrototype
+sub NormalizePrototype {
+ my ($self, $prototype) = @_;
+ return $prototype;
+}
+
+sub ParsePrototype {
+ my ($self, $type, $prototype) = @_;
+
+ my $object = NaturalDocs::Languages::Prototype->New($prototype);
+ return $object;
+}
+
+
+
+#
+# Function: OnPrototypeEnd
+#
+# Augeas' binding syntax is shown below.
+#
+# > let [name] = [expression]
+# > test [expression] = [result]
+#
+# The keywords "let" and "test" act as prototype enders. We need to ignore
+# the first occurrence of either keyword, since it begins the binding
+# itself, and any later occurrence that directly follows an "=" or an
+# "in", since those belong to the expression on the right-hand side.
+#
+# Parameters:
+#
+# type - The <TopicType> of the prototype.
+# prototypeRef - A reference to the prototype so far, minus the ender in dispute.
+# ender - The ender symbol.
+#
+# Returns:
+#
+# ENDER_ACCEPT - The ender is accepted and the prototype is finished.
+# ENDER_IGNORE - The ender is rejected and parsing should continue. Note that the prototype will be rejected as a whole
+# if all enders are ignored before reaching the end of the code.
+# ENDER_ACCEPT_AND_CONTINUE - The ender is accepted so the prototype may stand as is. However, the prototype might
+# also continue on so continue parsing. If there is no accepted ender between here and
+# the end of the code this version will be accepted instead.
+# ENDER_REVERT_TO_ACCEPTED - The expedition from ENDER_ACCEPT_AND_CONTINUE failed. Use the last accepted
+# version and end parsing.
+#
+sub OnPrototypeEnd {
+ my ($self, $type, $prototypeRef, $ender) = @_;
+
+ if ($ender eq "\n") {
+ return ::ENDER_ACCEPT_AND_CONTINUE();
+ } elsif ( ($type eq "augeasvariable" || $type eq "augeaslens" || $type eq "augeastest") &&
+ $ender eq "let" &&
+ (!$pastFirstLet || $$prototypeRef =~ /\=[ \t\r\n]*$/
+ || $$prototypeRef =~ /in[ \t\r\n]+$/) ) {
+ $pastFirstLet = 1;
+ return ::ENDER_IGNORE();
+ } elsif ( ($type eq "augeasvariable" || $type eq "augeaslens" || $type eq "augeastest") &&
+ $ender eq "test" &&
+ (!$pastFirstTest || $$prototypeRef =~ /\=[ \t\r\n]*$/
+ || $$prototypeRef =~ /in[ \t\r\n]+$/) ) {
+ $pastFirstTest = 1;
+ return ::ENDER_IGNORE();
+ } else {
+ return ::ENDER_ACCEPT();
+ };
+};
+
+
+1;
--- /dev/null
+/*
+ IMPORTANT: If you're editing this file in the output directory of one of
+ your projects, your changes will be overwritten the next time you run
+ Natural Docs. Instead, copy this file to your project directory, make your
+ changes, and you can use it with -s. Even better would be to make a CSS
+ file in your project directory with only your changes, which you can then
+ use with -s [original style] [your changes].
+
+ On the other hand, if you're editing this file in the Natural Docs styles
+ directory, the changes will automatically be applied to all your projects
+ that use this style the next time Natural Docs is run on them.
+
+ This file is part of Natural Docs, which is Copyright (C) 2003-2008 Greg Valure
+ Natural Docs is licensed under the GPL
+*/
+
+body {
+ background: white;
+ background: url('http://augeas.net/styles/augeas-logo.png');
+ background-repeat: no-repeat;
+ background-position: 3px 3px;
+ color: black;
+ font: 10pt Verdana, Arial, sans-serif;
+ margin: 0; padding: 0;
+ }
+
+.ContentPage,
+.IndexPage,
+.FramedMenuPage {
+ background-color: white;
+ }
+.FramedContentPage,
+.FramedIndexPage,
+.FramedSearchResultsPage,
+.PopupSearchResultsPage {
+    background-color: white;
+ }
+
+
+a:link,
+a:visited { color: #900000; text-decoration: none }
+a:hover { color: #900000; text-decoration: underline }
+a:active { color: #FF0000; text-decoration: underline }
+
+td {
+ vertical-align: top }
+
+img { border: 0; }
+
+
+/*
+ Comment out this line to use web-style paragraphs (blank line between
+ paragraphs, no indent) instead of print-style paragraphs (no blank line,
+ indented.)
+*/
+p {
+ text-indent: 5ex; margin: 0 }
+
+
+/* Opera doesn't break with just wbr, but will if you add this. */
+.Opera wbr:after {
+ content: "\00200B";
+ }
+
+
+/* Blockquotes are used as containers for things that may need to scroll. */
+blockquote {
+ padding: 0;
+ margin: 0;
+ overflow: auto;
+ margin-left: 5ex;
+ }
+
+
+.Firefox1 blockquote {
+ padding-bottom: .5em;
+ }
+
+/* Turn off scrolling when printing. */
+@media print {
+ blockquote {
+ overflow: visible;
+ }
+ .IE blockquote {
+ width: auto;
+ }
+ }
+
+
+
+#Menu {
+ float: left;
+ padding-top: 100px;
+ left: 0px;
+ margin-bottom: 1em;
+ margin-left:10px;
+ }
+#Menu a {
+ color:black;
+ list-style: none;
+ padding: 0px;
+ font-weight: bold;
+ }
+.ContentPage #Menu,
+.IndexPage #Menu {
+ position: absolute;
+ top: 0;
+ left: 0;
+ width: 200px;
+ overflow: hidden;
+ }
+.ContentPage .Firefox #Menu,
+.IndexPage .Firefox #Menu {
+ width: 27ex;
+ }
+
+
+ .MTitle {
+ font-size: 16pt; font-weight: bold; font-variant: small-caps;
+ text-align: center;
+ padding: 5px 10px 15px 10px;
+ border-bottom: 1px dotted #000000;
+ color: #8d4a2c;
+ margin-bottom: 15px }
+
+ .MSubTitle {
+ font-size: 9pt; font-weight: normal; font-variant: normal;
+ margin-top: 1ex; margin-bottom: 5px }
+
+
+ .MEntry { }
+
+ .MEntry a:link,
+ .MEntry a:hover,
+ .MEntry a:visited { margin-right: 0 }
+ .MEntry a:active { margin-right: 0 }
+
+
+ .MGroup {
+ font-variant: small-caps; font-weight: bold;
+ margin: 1em 0 1em 10px;
+ }
+
+ .MGroupContent {
+ font-variant: normal; font-weight: normal }
+
+ .MGroup a:link,
+ .MGroup a:hover,
+ .MGroup a:visited { margin-right: 10px }
+ .MGroup a:active { margin-right: 10px }
+
+
+ .MFile,
+ .MText,
+ .MLink,
+ .MIndex {
+ padding: 5px 10px 7px 20px;
+ margin: .1em 0 .1em 0;
+ background: #c9e0a7;
+ }
+
+ .MText { }
+
+ .MLink { }
+
+ #MSelected {
+ color: #000000; background-color: #FFFFFF;
+ /* Replace padding with border. */
+ background: #65a703;
+ font-weight: bold;
+ }
+
+
+ #MSearchPanel {
+ padding: 0px 6px;
+ margin: .25em 0;
+ }
+
+
+ #MSearchField {
+ color: #606060;
+ background-color: #E8E8E8;
+ border: none;
+ padding: 2px 4px;
+ width: 100%;
+ }
+ /* Only Opera gets it right. */
+ .Firefox #MSearchField,
+ .IE #MSearchField,
+ .Safari #MSearchField {
+ width: 94%;
+ }
+ .Opera9 #MSearchField,
+ .Konqueror #MSearchField {
+ width: 97%;
+ }
+ .FramedMenuPage .Firefox #MSearchField,
+ .FramedMenuPage .Safari #MSearchField,
+ .FramedMenuPage .Konqueror #MSearchField {
+ width: 98%;
+ }
+
+    /* Firefox doesn't do this right in frames without #MSearchPanel added on.
+        Its presence doesn't hurt any other browsers. */
+ #MSearchPanel.MSearchPanelInactive:hover #MSearchField {
+ background-color: #FFFFFF;
+ border: 1px solid #C0C0C0;
+ padding: 1px 3px;
+ }
+ .MSearchPanelActive #MSearchField {
+ background-color: #FFFFFF;
+ border: 1px solid #C0C0C0;
+ font-style: normal;
+ padding: 1px 3px;
+ }
+
+ #MSearchType {
+ visibility: hidden;
+ font: 8pt Verdana, sans-serif;
+ width: 98%;
+ padding: 0;
+ border: 1px solid #C0C0C0;
+ }
+ .MSearchPanelActive #MSearchType,
+ /* As mentioned above, Firefox doesn't do this right in frames without #MSearchPanel added on. */
+ #MSearchPanel.MSearchPanelInactive:hover #MSearchType,
+ #MSearchType:focus {
+ visibility: visible;
+ color: #606060;
+ }
+ #MSearchType option#MSearchEverything {
+ font-weight: bold;
+ }
+
+ .Opera8 .MSearchPanelInactive:hover,
+ .Opera8 .MSearchPanelActive {
+ margin-left: -1px;
+ }
+
+
+ iframe#MSearchResults {
+ width: 60ex;
+ height: 15em;
+ }
+ #MSearchResultsWindow {
+ display: none;
+ position: absolute;
+ left: 0; top: 0;
+ border: 1px solid #000000;
+ background-color: #E8E8E8;
+ }
+ #MSearchResultsWindowClose {
+ font-weight: bold;
+ font-size: 8pt;
+ display: block;
+ padding: 2px 5px;
+ }
+ #MSearchResultsWindowClose:link,
+ #MSearchResultsWindowClose:visited {
+ color: #000000;
+ text-decoration: none;
+ }
+ #MSearchResultsWindowClose:active,
+ #MSearchResultsWindowClose:hover {
+ color: #800000;
+ text-decoration: none;
+ background-color: #F4F4F4;
+ }
+
+
+
+
+#Content {
+ margin-top: 100px;
+ padding-bottom: 15px;
+ }
+
+.ContentPage #Content {
+ background-color: #FFFFFF;
+    font-size: 9pt;  /* So the ex-based margin below lines up with the menu's width. */
+ margin-left: 34ex;
+ }
+.ContentPage .Firefox #Content {
+ margin-left: 34ex;
+ }
+
+
+
+ .CTopic {
+ font-size: 10pt;
+ margin-bottom: 3em;
+ }
+
+
+ .CTitle {
+ font-size: 12pt; font-weight: bold;
+ border-width: 0 0 1px 0; border-style: solid; border-color: #A0A0A0;
+ margin: 0 15px .5em 15px }
+
+ .CGroup .CTitle {
+ font-size: 16pt; font-variant: small-caps;
+ padding-left: 15px; padding-right: 15px;
+ border-width: 0 0 2px 0; border-color: #000000;
+ margin-left: 0; margin-right: 0; color: #8d4a2c }
+
+ .CClass .CTitle,
+ .CInterface .CTitle,
+ .CDatabase .CTitle,
+ .CDatabaseTable .CTitle,
+ .CSection .CTitle {
+ font-size: 18pt;
+ font-weight: bold;
+ color: #8d4a2c;
+ padding: 10px 15px 10px 15px;
+ border-width: 2px 0; border-color: #000000;
+ margin-left: 0; margin-right: 0 }
+
+ #MainTopic .CTitle {
+ font-size: 20pt;
+ font-weight: bold;
+ color: #8d4a2c;
+ padding: 10px 15px 10px 15px;
+ border-width: 0 0 3px 0; border-color: #000000;
+ margin-left: 0; margin-right: 0 }
+
+ .CBody {
+ margin-left: 0; margin-right: 15px }
+
+
+ .CToolTip {
+ position: absolute; visibility: hidden;
+ left: 0; top: 0;
+ background-color: #FFFFE0;
+ padding: 5px;
+ border-width: 1px 2px 2px 1px; border-style: solid; border-color: #000000;
+ font-size: 8pt;
+ }
+
+ .Opera .CToolTip {
+ max-width: 98%;
+ }
+
+ /* Scrollbars would be useless. */
+ .CToolTip blockquote {
+ overflow: hidden;
+ }
+ .IE6 .CToolTip blockquote {
+ overflow: visible;
+ }
+
+ .CHeading {
+ font-weight: bold; font-size: 10pt;
+ margin: 1.5em 0 .5em 0;
+ }
+
+ .CBody pre {
+ font: 10pt "Courier New", Courier, monospace;
+ margin: 1em 0;
+ }
+
+ .CBody ul {
+ /* I don't know why CBody's margin doesn't apply, but it's consistent across browsers so whatever.
+ Reapply it here as padding. */
+ padding-left: 15px; padding-right: 15px;
+ margin: .5em 5ex .5em 5ex;
+ }
+
+ .CDescriptionList {
+ margin: .5em 5ex 0 5ex }
+
+ .CDLEntry {
+ font: 10pt "Courier New", Courier, monospace; color: #808080;
+ padding-bottom: .25em;
+ white-space: nowrap }
+
+ .CDLDescription {
+ font-size: 10pt; /* For browsers that don't inherit correctly, like Opera 5. */
+ padding-bottom: .5em; padding-left: 5ex }
+
+
+ .CTopic img {
+ text-align: center;
+ display: block;
+ margin: 1em auto;
+ }
+ .CImageCaption {
+ font-variant: small-caps;
+ font-size: 8pt;
+ color: #808080;
+ text-align: center;
+ position: relative;
+ top: 1em;
+ }
+
+ .CImageLink {
+ color: #808080;
+ }
+ a.CImageLink:link,
+ a.CImageLink:visited,
+ a.CImageLink:hover { color: #808080 }
+
+
+
+
+
+.Prototype {
+ font: 10pt "Courier New", Courier, monospace;
+ padding: 5px 3ex;
+ border: 1px solid #65A703;
+ margin: 0 5ex 1.5em 0;
+ background: #EBF8D5;
+ }
+
+ .Prototype td {
+ font-size: 10pt;
+ }
+
+ .PDefaultValue,
+ .PDefaultValuePrefix,
+ .PTypePrefix {
+ color: #8F8F8F;
+ }
+ .PTypePrefix {
+ text-align: right;
+ }
+ .PAfterParameters {
+ vertical-align: bottom;
+ }
+
+ .IE .Prototype table {
+ padding: 0;
+ }
+
+ .CFunction .Prototype {
+ background-color: #F4F4F4; border-color: #D0D0D0 }
+ .CProperty .Prototype {
+ background-color: #F4F4FF; border-color: #C0C0E8 }
+ .CVariable .Prototype {
+ background-color: #FFFFF0; border-color: #E0E0A0 }
+
+ .CClass .Prototype {
+ border-width: 1px 2px 2px 1px; border-style: solid; border-color: #A0A0A0;
+ background-color: #F4F4F4;
+ }
+ .CInterface .Prototype {
+ border-width: 1px 2px 2px 1px; border-style: solid; border-color: #A0A0D0;
+ background-color: #F4F4FF;
+ }
+
+ .CDatabaseIndex .Prototype,
+ .CConstant .Prototype {
+ background-color: #D0D0D0; border-color: #000000 }
+ .CType .Prototype,
+ .CEnumeration .Prototype {
+ background-color: #FAF0F0; border-color: #E0B0B0;
+ }
+ .CDatabaseTrigger .Prototype,
+ .CEvent .Prototype,
+ .CDelegate .Prototype {
+ background-color: #F0FCF0; border-color: #B8E4B8 }
+
+ .CToolTip .Prototype {
+ margin: 0 0 .5em 0;
+ white-space: nowrap;
+ }
+
+
+
+
+
+.Summary {
+ margin: 1.5em 5ex 0 5ex }
+
+ .STitle {
+ font-size: 12pt; font-weight: bold;
+ margin-bottom: .5em;
+ color: #8d4a2c; }
+
+
+ .SBorder {
+ background-color: #FFFFF0;
+ padding: 15px;
+ border: 1px solid #C0C060 }
+
+ /* In a frame IE 6 will make them too long unless you set the width to 100%. Without frames it will be correct without a width
+ or slightly too long (but not enough to scroll) with a width. This arbitrary weirdness simply astounds me. IE 7 has the same
+ problem with frames, haven't tested it without. */
+ .FramedContentPage .IE .SBorder {
+ width: 100% }
+
+ /* A treat for Mozilla users. Blatantly non-standard. Will be replaced with CSS 3 attributes when finalized/supported. */
+ .Firefox .SBorder {
+ -moz-border-radius: 20px }
+
+
+ .STable {
+ font-size: 9pt; width: 100% }
+
+ .SEntry {
+ width: 30% }
+ .SDescription {
+ width: 70% }
+
+
+ .SMarked {
+ background-color: #F8F8D8 }
+
+ .SDescription { padding-left: 2ex }
+ .SIndent1 .SEntry { padding-left: 1.5ex } .SIndent1 .SDescription { padding-left: 3.5ex }
+ .SIndent2 .SEntry { padding-left: 3.0ex } .SIndent2 .SDescription { padding-left: 5.0ex }
+ .SIndent3 .SEntry { padding-left: 4.5ex } .SIndent3 .SDescription { padding-left: 6.5ex }
+ .SIndent4 .SEntry { padding-left: 6.0ex } .SIndent4 .SDescription { padding-left: 8.0ex }
+ .SIndent5 .SEntry { padding-left: 7.5ex } .SIndent5 .SDescription { padding-left: 9.5ex }
+
+ .SDescription a { color: #800000}
+ .SDescription a:active { color: #A00000 }
+
+ .SGroup td {
+ padding-top: .5em; padding-bottom: .25em }
+
+ .SGroup .SEntry {
+ font-weight: bold; font-variant: small-caps }
+
+ .SGroup .SEntry a { color: #800000 }
+ .SGroup .SEntry a:active { color: #F00000 }
+
+
+ .SMain td,
+ .SClass td,
+ .SDatabase td,
+ .SDatabaseTable td,
+ .SSection td {
+ font-size: 10pt;
+ padding-bottom: .25em }
+
+ .SClass td,
+ .SDatabase td,
+ .SDatabaseTable td,
+ .SSection td {
+ padding-top: 1em }
+
+ .SMain .SEntry,
+ .SClass .SEntry,
+ .SDatabase .SEntry,
+ .SDatabaseTable .SEntry,
+ .SSection .SEntry {
+ font-weight: bold;
+ }
+
+ .SMain .SEntry a,
+ .SClass .SEntry a,
+ .SDatabase .SEntry a,
+ .SDatabaseTable .SEntry a,
+ .SSection .SEntry a { color: #000000 }
+
+ .SMain .SEntry a:active,
+ .SClass .SEntry a:active,
+ .SDatabase .SEntry a:active,
+ .SDatabaseTable .SEntry a:active,
+ .SSection .SEntry a:active { color: #A00000 }
+
+
+
+
+
+.ClassHierarchy {
+ margin: 0 15px 1em 15px }
+
+ .CHEntry {
+ border-width: 1px 2px 2px 1px; border-style: solid; border-color: #A0A0A0;
+ margin-bottom: 3px;
+ padding: 2px 2ex;
+ font-size: 10pt;
+ background-color: #F4F4F4; color: #606060;
+ }
+
+ .Firefox .CHEntry {
+ -moz-border-radius: 4px;
+ }
+
+ .CHCurrent .CHEntry {
+ font-weight: bold;
+ border-color: #000000;
+ color: #000000;
+ }
+
+ .CHChildNote .CHEntry {
+ font-style: italic;
+ font-size: 8pt;
+ }
+
+ .CHIndent {
+ margin-left: 3ex;
+ }
+
+ .CHEntry a:link,
+ .CHEntry a:visited,
+ .CHEntry a:hover {
+ color: #606060;
+ }
+ .CHEntry a:active {
+ color: #800000;
+ }
+
+
+
+
+
+#Index {
+ margin-top: 100px;
+ margin-left: 30px;
+ }
+
+/* As opposed to .PopupSearchResultsPage #Index */
+.IndexPage #Index,
+.FramedIndexPage #Index,
+.FramedSearchResultsPage #Index {
+ padding: 15px;
+ }
+
+.IndexPage #Index {
+ font-size: 9pt; /* To make 27ex match the menu's 27ex. */
+ margin-left: 27ex;
+ }
+
+
+ .IPageTitle {
+ font-size: 20pt; font-weight: bold;
+ color: #8d4a2c;
+ padding: 10px 15px 10px 15px;
+ margin: -15px -15px 0 -15px }
+
+ .FramedSearchResultsPage .IPageTitle {
+ margin-bottom: 15px;
+ }
+
+ .INavigationBar {
+ font-size: 10pt;
+ text-align: center;
+ background-color: #FFFFF0;
+ padding: 5px;
+ border-bottom: solid 1px black;
+ margin: 0 -15px 15px -15px;
+ margin-left: 20px;
+ }
+
+ .INavigationBar a {
+ font-weight: bold }
+
+ .IHeading {
+ font-size: 16pt; font-weight: bold;
+ padding: 2.5em 0 .5em 0;
+ text-align: center;
+ width: 3.5ex;
+ }
+ #IFirstHeading {
+ padding-top: 0;
+ }
+
+ .IEntry {
+ font-size: 10pt;
+ padding-left: 1ex;
+ }
+ .PopupSearchResultsPage .IEntry {
+ font-size: 8pt;
+ padding: 1px 5px;
+ }
+ .PopupSearchResultsPage .Opera9 .IEntry,
+ .FramedSearchResultsPage .Opera9 .IEntry {
+ text-align: left;
+ }
+ .FramedSearchResultsPage .IEntry {
+ padding: 0;
+ }
+
+ .ISubIndex {
+ padding-left: 3ex; padding-bottom: .5em }
+ .PopupSearchResultsPage .ISubIndex {
+ display: none;
+ }
+
+ /* While it may cause some entries to look like links when they aren't, I found it's much easier to read the
+ index if everything's the same color. */
+ .ISymbol {
+ font-weight: bold; color: #900000 }
+
+ .IndexPage .ISymbolPrefix,
+ .FramedIndexPage .ISymbolPrefix {
+ font-size: 10pt;
+ text-align: right;
+ color: #C47C7C;
+ background-color: #F8F8F8;
+ border-right: 3px solid #E0E0E0;
+ border-left: 1px solid #E0E0E0;
+ padding: 0 1px 0 2px;
+ }
+ .PopupSearchResultsPage .ISymbolPrefix,
+ .FramedSearchResultsPage .ISymbolPrefix {
+ color: #900000;
+ }
+ .PopupSearchResultsPage .ISymbolPrefix {
+ font-size: 8pt;
+ }
+
+ .IndexPage #IFirstSymbolPrefix,
+ .FramedIndexPage #IFirstSymbolPrefix {
+ border-top: 1px solid #E0E0E0;
+ }
+ .IndexPage #ILastSymbolPrefix,
+ .FramedIndexPage #ILastSymbolPrefix {
+ border-bottom: 1px solid #E0E0E0;
+ }
+ .IndexPage #IOnlySymbolPrefix,
+ .FramedIndexPage #IOnlySymbolPrefix {
+ border-top: 1px solid #E0E0E0;
+ border-bottom: 1px solid #E0E0E0;
+ }
+
+ a.IParent,
+ a.IFile {
+ display: block;
+ }
+
+ .PopupSearchResultsPage .SRStatus {
+ padding: 2px 5px;
+ font-size: 8pt;
+ font-style: italic;
+ }
+ .FramedSearchResultsPage .SRStatus {
+ font-size: 10pt;
+ font-style: italic;
+ }
+
+ .SRResult {
+ display: none;
+ }
+
+
+#Footer {
+ font-size: 8pt;
+ color: #989898;
+ text-align: right;
+ }
+
+#Footer p {
+ text-indent: 0;
+ margin-bottom: .5em;
+ }
+
+.ContentPage #Footer,
+.IndexPage #Footer {
+ text-align: right;
+ margin: 2px;
+ }
+
+.FramedMenuPage #Footer {
+ text-align: center;
+ margin: 5em 10px 10px 10px;
+ padding-top: 1em;
+ border-top: 1px solid #C8C8C8;
+ }
+
+ #Footer a:link,
+ #Footer a:hover,
+ #Footer a:visited { color: #989898 }
+ #Footer a:active { color: #A00000 }
+
+.CAugeasLens td {
+ white-space: pre !important;
+}
+
+.CAugeasVariable td {
+ white-space: pre !important;
+}
+
+.CAugeasModule td {
+ white-space: pre !important;
+}
+
+.CAugeasTest td {
+ white-space: pre !important;
+}
--- /dev/null
+Format: 1.52
+
+# This is the Natural Docs languages file for this project. If you change
+# anything here, it will apply to THIS PROJECT ONLY. If you'd like to change
+# something for all your projects, edit the Languages.txt in Natural Docs'
+# Config directory instead.
+
+
+# You can prevent certain file extensions from being scanned like this:
+# Ignore Extensions: [extension] [extension] ...
+
+
+#-------------------------------------------------------------------------------
+# SYNTAX:
+#
+# Unlike other Natural Docs configuration files, in this file all comments
+# MUST be alone on a line. Some languages deal with the # character, so you
+# cannot put comments on the same line as content.
+#
+# Also, all lists are separated with spaces, not commas, again because some
+# languages may need to use them.
+#
+# Language: [name]
+# Alter Language: [name]
+# Defines a new language or alters an existing one. Its name can use any
+# characters. If any of the properties below have an add/replace form, you
+# must use that when using Alter Language.
+#
+# The language Shebang Script is special. Its entry is only used for
+# extensions, and files with those extensions have their shebang (#!) lines
+# read to determine the real language of the file. Extensionless files are
+# always treated this way.
+#
+# The language Text File is also special. It's treated as one big comment
+# so you can put Natural Docs content in them without special symbols. Also,
+# if you don't specify a package separator, ignored prefixes, or enum value
+# behavior, it will copy those settings from the language that is used most
+# in the source tree.
+#
+# Extensions: [extension] [extension] ...
+# [Add/Replace] Extensions: [extension] [extension] ...
+# Defines the file extensions of the language's source files. You can
+# redefine extensions found in the main languages file. You can use * to
+# mean any undefined extension.
+#
+# Shebang Strings: [string] [string] ...
+# [Add/Replace] Shebang Strings: [string] [string] ...
+# Defines a list of strings that can appear in the shebang (#!) line to
+# designate that it's part of the language. You can redefine strings found
+# in the main languages file.
+#
+# Ignore Prefixes in Index: [prefix] [prefix] ...
+# [Add/Replace] Ignored Prefixes in Index: [prefix] [prefix] ...
+#
+# Ignore [Topic Type] Prefixes in Index: [prefix] [prefix] ...
+# [Add/Replace] Ignored [Topic Type] Prefixes in Index: [prefix] [prefix] ...
+# Specifies prefixes that should be ignored when sorting symbols in an
+# index. Can be specified in general or for a specific topic type.
+#
+#------------------------------------------------------------------------------
+# For basic language support only:
+#
+# Line Comments: [symbol] [symbol] ...
+# Defines a space-separated list of symbols that are used for line comments,
+# if any.
+#
+# Block Comments: [opening sym] [closing sym] [opening sym] [closing sym] ...
+# Defines a space-separated list of symbol pairs that are used for block
+# comments, if any.
+#
+# Package Separator: [symbol]
+# Defines the default package separator symbol. The default is a dot.
+#
+# [Topic Type] Prototype Enders: [symbol] [symbol] ...
+# When defined, Natural Docs will attempt to get a prototype from the code
+# immediately following the topic type. It stops when it reaches one of
+# these symbols. Use \n for line breaks.
+#
+# Line Extender: [symbol]
+# Defines the symbol that allows a prototype to span multiple lines if
+# normally a line break would end it.
+#
+# Enum Values: [global|under type|under parent]
+# Defines how enum values are referenced. The default is global.
+# global - Values are always global, referenced as 'value'.
+# under type - Values are under the enum type, referenced as
+# 'package.enum.value'.
+# under parent - Values are under the enum's parent, referenced as
+# 'package.value'.
+#
+# Perl Package: [perl package]
+# Specifies the Perl package used to fine-tune the language behavior in ways
+# too complex to do in this file.
+#
+#------------------------------------------------------------------------------
+# For full language support only:
+#
+# Full Language Support: [perl package]
+# Specifies the Perl package that has the parsing routines necessary for full
+# language support.
+#
+#-------------------------------------------------------------------------------
+
+# The following languages are defined in the main file, if you'd like to alter
+# them:
+#
+# Text File, Shebang Script, C/C++, C#, Java, JavaScript, Perl, Python,
+# PHP, SQL, Visual Basic, Pascal, Assembly, Ada, Tcl, Ruby, Makefile,
+# ActionScript, ColdFusion, R, Fortran
+
+# If you add a language that you think would be useful to other developers
+# and should be included in Natural Docs by default, please e-mail it to
+# languages [at] naturaldocs [dot] org.
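+
+
+# As a commented-out illustration only (this block has no effect on the
+# project), a minimal basic-support language definition might look like the
+# following; the language name, extension, and symbols are hypothetical:
+#
+# Language: Example Lang
+#
+#    Extensions: exl
+#    Line Comments: //
+#    Block Comments: /* */
+#    Function Prototype Enders: ; {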
--- /dev/null
+Format: 1.52
+
+
+Title: Augeas Documentation
+SubTitle: C API
+
+# You can add a footer to your documentation like this:
+# Footer: [text]
+# If you want to add a copyright notice, this would be the place to do it.
+
+# You can add a timestamp to your documentation like one of these:
+# Timestamp: Generated on month day, year
+# Timestamp: Updated mm/dd/yyyy
+# Timestamp: Last updated mon day
+#
+# m - One or two digit month. January is "1"
+# mm - Always two digit month. January is "01"
+# mon - Short month word. January is "Jan"
+# month - Long month word. January is "January"
+# d - One or two digit day. 1 is "1"
+# dd - Always two digit day. 1 is "01"
+# day - Day with letter extension. 1 is "1st"
+# yy - Two digit year. 2006 is "06"
+# yyyy - Four digit year. 2006 is "2006"
+# year - Four digit year. 2006 is "2006"
+
+
+# --------------------------------------------------------------------------
+#
+# Cut and paste the lines below to change the order in which your files
+# appear on the menu. Don't worry about adding or removing files, Natural
+# Docs will take care of that.
+#
+# You can further organize the menu by grouping the entries. Add a
+# "Group: [name] {" line to start a group, and add a "}" to end it.
+#
+# You can add text and web links to the menu by adding "Text: [text]" and
+# "Link: [name] ([URL])" lines, respectively.
+#
+# The formatting and comments are auto-generated, so don't worry about
+# neatness when editing the file. Natural Docs will clean it up the next
+# time it is run. When working with groups, just deal with the braces and
+# forget about the indentation and comments.
+#
+# --------------------------------------------------------------------------
+
+
+Group: Main Links {
+
+ Link: Main (/index.html)
+ Link: Documentation (/docs/index.html)
+ } # Group: Main Links
+
+File: Public API (no auto-title, augeas.h)
+File: Internal API (no auto-title, internal.h)
+
+Group: Index {
+
+ Index: Everything
+ Class Index: Classes
+ Function Index: Functions
+ File Index: Files
+ Macro Index: Macros
+ Type Index: Types
+ } # Group: Index
+
--- /dev/null
+Format: 1.52
+
+# This is the Natural Docs topics file for this project. If you change anything
+# here, it will apply to THIS PROJECT ONLY. If you'd like to change something
+# for all your projects, edit the Topics.txt in Natural Docs' Config directory
+# instead.
+
+
+# If you'd like to prevent keywords from being recognized by Natural Docs, you
+# can do it like this:
+# Ignore Keywords: [keyword], [keyword], ...
+#
+# Or you can use the list syntax like how they are defined:
+# Ignore Keywords:
+# [keyword]
+# [keyword], [plural keyword]
+# ...
+
+
+#-------------------------------------------------------------------------------
+# SYNTAX:
+#
+# Topic Type: [name]
+# Alter Topic Type: [name]
+# Creates a new topic type or alters one from the main file. Each type gets
+# its own index and behavior settings. Its name can have letters, numbers,
+# spaces, and these characters: - / . '
+#
+# Plural: [name]
+# Sets the plural name of the topic type, if different.
+#
+# Keywords:
+# [keyword]
+# [keyword], [plural keyword]
+# ...
+# Defines or adds to the list of keywords for the topic type. They may only
+# contain letters, numbers, and spaces and are not case sensitive. Plural
+# keywords are used for list topics. You can redefine keywords found in the
+# main topics file.
+#
+# Index: [yes|no]
+# Whether the topics get their own index. Defaults to yes. Everything is
+# included in the general index regardless of this setting.
+#
+# Scope: [normal|start|end|always global]
+# How the topics affect scope. Defaults to normal.
+# normal - Topics stay within the current scope.
+# start - Topics start a new scope for all the topics beneath it,
+# like class topics.
+# end - Topics reset the scope back to global for all the topics
+# beneath it.
+# always global - Topics are defined as global, but do not change the scope
+# for any other topics.
+#
+# Class Hierarchy: [yes|no]
+# Whether the topics are part of the class hierarchy. Defaults to no.
+#
+# Page Title If First: [yes|no]
+# Whether the topic's title becomes the page title if it's the first one in
+# a file. Defaults to no.
+#
+# Break Lists: [yes|no]
+# Whether list topics should be broken into individual topics in the output.
+# Defaults to no.
+#
+# Can Group With: [type], [type], ...
+# Defines a list of topic types that this one can possibly be grouped with.
+# Defaults to none.
+#-------------------------------------------------------------------------------
+
+# The following topics are defined in the main file, if you'd like to alter
+# their behavior or add keywords:
+#
+# Generic, Class, Interface, Section, File, Group, Function, Variable,
+# Property, Type, Constant, Enumeration, Event, Delegate, Macro,
+# Database, Database Table, Database View, Database Index, Database
+# Cursor, Database Trigger, Cookie, Build Target
+
+# If you add something that you think would be useful to other developers
+# and should be included in Natural Docs by default, please e-mail it to
+# topics [at] naturaldocs [dot] org.
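+
+
+# As a commented-out illustration only (no effect on this project), a custom
+# topic type could be defined like the following; the type name and keywords
+# are hypothetical:
+#
+# Topic Type: Recipe
+#    Plural: Recipes
+#    Index: yes
+#    Scope: normal
+#
+#    Keywords:
+#       recipe, recipes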
--- /dev/null
+Format: 1.52
+
+# This is the Natural Docs languages file for this project. If you change
+# anything here, it will apply to THIS PROJECT ONLY. If you'd like to change
+# something for all your projects, edit the Languages.txt in Natural Docs'
+# Config directory instead.
+
+
+# You can prevent certain file extensions from being scanned like this:
+# Ignore Extensions: [extension] [extension] ...
+
+
+#-------------------------------------------------------------------------------
+# SYNTAX:
+#
+# Unlike other Natural Docs configuration files, in this file all comments
+# MUST be alone on a line. Some languages deal with the # character, so you
+# cannot put comments on the same line as content.
+#
+# Also, all lists are separated with spaces, not commas, again because some
+# languages may need to use them.
+#
+# Language: [name]
+# Alter Language: [name]
+# Defines a new language or alters an existing one. Its name can use any
+# characters. If any of the properties below have an add/replace form, you
+# must use that when using Alter Language.
+#
+# The language Shebang Script is special. Its entry is only used for
+# extensions, and files with those extensions have their shebang (#!) lines
+# read to determine the real language of the file. Extensionless files are
+# always treated this way.
+#
+# The language Text File is also special. It's treated as one big comment
+# so you can put Natural Docs content in them without special symbols. Also,
+# if you don't specify a package separator, ignored prefixes, or enum value
+# behavior, it will copy those settings from the language that is used most
+# in the source tree.
+#
+# Extensions: [extension] [extension] ...
+# [Add/Replace] Extensions: [extension] [extension] ...
+# Defines the file extensions of the language's source files. You can
+# redefine extensions found in the main languages file. You can use * to
+# mean any undefined extension.
+#
+# Shebang Strings: [string] [string] ...
+# [Add/Replace] Shebang Strings: [string] [string] ...
+# Defines a list of strings that can appear in the shebang (#!) line to
+# designate that it's part of the language. You can redefine strings found
+# in the main languages file.
+#
+# Ignore Prefixes in Index: [prefix] [prefix] ...
+# [Add/Replace] Ignored Prefixes in Index: [prefix] [prefix] ...
+#
+# Ignore [Topic Type] Prefixes in Index: [prefix] [prefix] ...
+# [Add/Replace] Ignored [Topic Type] Prefixes in Index: [prefix] [prefix] ...
+# Specifies prefixes that should be ignored when sorting symbols in an
+# index. Can be specified in general or for a specific topic type.
+#
+#------------------------------------------------------------------------------
+# For basic language support only:
+#
+# Line Comments: [symbol] [symbol] ...
+# Defines a space-separated list of symbols that are used for line comments,
+# if any.
+#
+# Block Comments: [opening sym] [closing sym] [opening sym] [closing sym] ...
+# Defines a space-separated list of symbol pairs that are used for block
+# comments, if any.
+#
+# Package Separator: [symbol]
+# Defines the default package separator symbol. The default is a dot.
+#
+# [Topic Type] Prototype Enders: [symbol] [symbol] ...
+# When defined, Natural Docs will attempt to get a prototype from the code
+# immediately following the topic type. It stops when it reaches one of
+# these symbols. Use \n for line breaks.
+#
+# Line Extender: [symbol]
+# Defines the symbol that allows a prototype to span multiple lines if
+# normally a line break would end it.
+#
+# Enum Values: [global|under type|under parent]
+# Defines how enum values are referenced. The default is global.
+# global - Values are always global, referenced as 'value'.
+# under type - Values are under the enum type, referenced as
+# 'package.enum.value'.
+# under parent - Values are under the enum's parent, referenced as
+# 'package.value'.
+#
+# Perl Package: [perl package]
+# Specifies the Perl package used to fine-tune the language behavior in ways
+# too complex to do in this file.
+#
+#------------------------------------------------------------------------------
+# For full language support only:
+#
+# Full Language Support: [perl package]
+# Specifies the Perl package that has the parsing routines necessary for full
+# language support.
+#
+#-------------------------------------------------------------------------------
+
+# The following languages are defined in the main file, if you'd like to alter
+# them:
+#
+# Text File, Shebang Script, C/C++, C#, Java, JavaScript, Perl, Python,
+# PHP, SQL, Visual Basic, Pascal, Assembly, Ada, Tcl, Ruby, Makefile,
+# ActionScript, ColdFusion, R, Fortran
+
+# If you add a language that you think would be useful to other developers
+# and should be included in Natural Docs by default, please e-mail it to
+# languages [at] naturaldocs [dot] org.
+
+
+Language: Augeas
+
+ Extension: aug
+ Block Comment: (* *)
+ Augeas Test Prototype Enders: \n let test module filter
+ Augeas Lens Prototype Enders: \n let test module filter
+ Augeas Variable Prototype Enders: \n let test module filter
+ Perl Package: NaturalDocs::Languages::Augeas
--- /dev/null
+Format: 1.52
+
+
+Title: Augeas Documentation
+SubTitle: Modules
+
+# You can add a footer to your documentation like this:
+# Footer: [text]
+# If you want to add a copyright notice, this would be the place to do it.
+
+# You can add a timestamp to your documentation like one of these:
+# Timestamp: Generated on month day, year
+# Timestamp: Updated mm/dd/yyyy
+# Timestamp: Last updated mon day
+#
+# m - One or two digit month. January is "1"
+# mm - Always two digit month. January is "01"
+# mon - Short month word. January is "Jan"
+# month - Long month word. January is "January"
+# d - One or two digit day. 1 is "1"
+# dd - Always two digit day. 1 is "01"
+# day - Day with letter extension. 1 is "1st"
+# yy - Two digit year. 2006 is "06"
+# yyyy - Four digit year. 2006 is "2006"
+# year - Four digit year. 2006 is "2006"
+
+# These are indexes you deleted, so Natural Docs will not add them again
+# unless you remove them from this line.
+
+Don't Index: Properties
+
+
+# --------------------------------------------------------------------------
+#
+# Cut and paste the lines below to change the order in which your files
+# appear on the menu. Don't worry about adding or removing files, Natural
+# Docs will take care of that.
+#
+# You can further organize the menu by grouping the entries. Add a
+# "Group: [name] {" line to start a group, and add a "}" to end it.
+#
+# You can add text and web links to the menu by adding "Text: [text]" and
+# "Link: [name] ([URL])" lines, respectively.
+#
+# The formatting and comments are auto-generated, so don't worry about
+# neatness when editing the file. Natural Docs will clean it up the next
+# time it is run. When working with groups, just deal with the braces and
+# forget about the indentation and comments.
+#
+# --------------------------------------------------------------------------
+
+
+Group: Main Site {
+
+ Link: Main (/index.html)
+ Link: Documentation (/docs/index.html)
+ } # Group: Main Site
+
+Group: Specific Modules {
+
+ File: Access (access.aug)
+ File: ActiveMQ_Conf (activemq_conf.aug)
+ File: ActiveMQ_XML (activemq_xml.aug)
+ File: AFS_cellalias (afs_cellalias.aug)
+ File: Aliases (aliases.aug)
+ File: Anacron (anacron.aug)
+ File: Approx (approx.aug)
+ File: Apt_Update_Manager (apt_update_manager.aug)
+ File: AptCacherNGSecurity (aptcacherngsecurity.aug)
+ File: AptConf (aptconf.aug)
+ File: AptPreferences (aptpreferences.aug)
+ File: Aptsources (aptsources.aug)
+ File: Authorized_Keys (authorized_keys.aug)
+ File: Automaster (automaster.aug)
+ File: Automounter (automounter.aug)
+ File: Avahi (avahi.aug)
+ File: BackupPCHosts (backuppchosts.aug)
+ File: BootConf (bootconf.aug)
+ File: Cachefilesd (cachefilesd.aug)
+ File: Carbon (carbon.aug)
+ File: Cgconfig (no auto-title, cgconfig.aug)
+ File: Cgrules (no auto-title, cgrules.aug)
+ File: Channels (channels.aug)
+ File: Chrony (chrony.aug)
+ File: Collectd (collectd.aug)
+ File: CPanel (cpanel.aug)
+ File: Cron (cron.aug)
+ File: Crypttab (crypttab.aug)
+ File: CSV (csv.aug)
+ File: Cups (cups.aug)
+ File: Debctrl (no auto-title, debctrl.aug)
+ File: Desktop (desktop.aug)
+ File: Dhcpd (dhcpd.aug)
+ File: Dovecot (dovecot.aug)
+ File: Dpkg (dpkg.aug)
+ File: Exports (exports.aug)
+ File: FAI_DiskConfig (fai_diskconfig.aug)
+ File: Fonts (fonts.aug)
+ File: Fuse (fuse.aug)
+ File: Gshadow (gshadow.aug)
+ File: Grub (grub.aug)
+ File: GtkBookmarks (gtkbookmarks.aug)
+ File: Host_Conf (host_conf.aug)
+ File: Hostname (hostname.aug)
+ File: Hosts_Access (hosts_access.aug)
+ File: Htpasswd (htpasswd.aug)
+ File: Inputrc (inputrc.aug)
+ File: JettyRealm (jettyrealm.aug)
+ File: JMXAccess (jmxaccess.aug)
+ File: JMXPassword (jmxpassword.aug)
+ File: Iscsid (iscsid.aug)
+ File: Iptables (iptables.aug)
+ File: Kdump (kdump.aug)
+ File: Keepalived (keepalived.aug)
+ File: Known_Hosts (known_hosts.aug)
+ File: Koji (koji.aug)
+ File: Ldif (ldif.aug)
+ File: Ldso (no auto-title, ldso.aug)
+ File: Lightdm (lightdm.aug)
+ File: Login_defs (login_defs.aug)
+ File: Lokkit (lokkit.aug)
+ File: LVM (lvm.aug)
+ File: MCollective (mcollective.aug)
+ File: Memcached (memcached.aug)
+ File: Mke2fs (mke2fs.aug)
+ File: Modprobe (modprobe.aug)
+ File: MongoDBServer (mongodbserver.aug)
+ File: Modules (modules.aug)
+ File: Modules_conf (modules_conf.aug)
+ File: NagiosCfg (no auto-title, nagioscfg.aug)
+ File: NagiosObjects (nagiosobjects.aug)
+ File: Netmasks (netmasks.aug)
+ File: NetworkManager (networkmanager.aug)
+ File: Networks (networks.aug)
+ File: Nginx (nginx.aug)
+ File: Nrpe (nrpe.aug)
+ File: Nslcd (nslcd.aug)
+ File: Nsswitch (nsswitch.aug)
+ File: Ntpd (ntpd.aug)
+ File: OpenShift_Config (openshift_config.aug)
+ File: OpenShift_Http (openshift_http.aug)
+ File: OpenShift_Quickstarts (openshift_quickstarts.aug)
+ File: Pagekite (pagekite.aug)
+ File: Pam (pam.aug)
+ File: PamConf (pamconf.aug)
+ File: Passwd (passwd.aug)
+ File: Pbuilder (pbuilder.aug)
+ File: Pg_Hba (pg_hba.aug)
+ File: Postfix_Passwordmap (postfix_passwordmap.aug)
+ File: Postfix_Transport (postfix_transport.aug)
+ File: Postfix_Virtual (postfix_virtual.aug)
+ File: Postgresql (postgresql.aug)
+ File: Protocols (protocols.aug)
+ File: Puppetfile (no auto-title, puppetfile.aug)
+ File: PuppetFileserver (no auto-title, puppetfileserver.aug)
+ File: Puppet_Auth (no auto-title, puppet_auth.aug)
+ File: Qpid (qpid.aug)
+ File: Rabbitmq (rabbitmq.aug)
+ File: Redis (redis.aug)
+ File: Reprepro_Uploaders (reprepro_uploaders.aug)
+ File: Resolv (resolv.aug)
+ File: Rhsm (rhsm.aug)
+ File: Rmt (rmt.aug)
+ File: Rsyslog (rsyslog.aug)
+ File: Schroot (schroot.aug)
+ File: Services (services.aug)
+ File: Shadow (shadow.aug)
+ File: Shells (shells.aug)
+ File: Shellvars (shellvars.aug)
+ File: Simplelines (simplelines.aug)
+ File: Simplevars (simplevars.aug)
+ File: Sip_Conf (sip_conf.aug)
+ File: SmbUsers (smbusers.aug)
+ File: Splunk (splunk.aug)
+ File: Solaris_System (solaris_system.aug)
+ File: Ssh (ssh.aug)
+ File: Sshd (sshd.aug)
+ File: Sssd (no auto-title, sssd.aug)
+ File: Star (star.aug)
+ File: Subversion (subversion.aug)
+ File: Sudoers (sudoers.aug)
+ File: Sysconfig_Route (sysconfig_route.aug)
+ File: Sysctl (sysctl.aug)
+ File: Syslog (syslog.aug)
+ File: Systemd (systemd.aug)
+ File: Thttpd (thttpd.aug)
+ File: Tmpfiles (tmpfiles.aug)
+ File: Tuned (tuned.aug)
+ File: Up2date (up2date.aug)
+ File: UpdateDB (updatedb.aug)
+ File: VMware_Config (vmware_config.aug)
+ File: Vfstab (vfstab.aug)
+ File: Xinetd (xinetd.aug)
+ File: Xorg (xorg.aug)
+ File: ClamAV (clamav.aug)
+ File: Dns_Zone (dns_zone.aug)
+ File: Mailscanner (mailscanner.aug)
+ File: Mailscanner_rules (mailscanner_rules.aug)
+ File: MasterPasswd (masterpasswd.aug)
+ File: Pgbouncer (pgbouncer.aug)
+ File: PylonsPaste (pylonspaste.aug)
+ File: Trapperkeeper (trapperkeeper.aug)
+ File: Xymon_Alerting (xymon_alerting.aug)
+ File: Yaml (yaml.aug)
+ File: Cron_User (cron_user.aug)
+ File: Oz (oz.aug)
+ File: Getcap (getcap.aug)
+ File: Rancid (rancid.aug)
+ File: Rtadvd (rtadvd.aug)
+ File: Strongswan (strongswan.aug)
+ File: Termcap (termcap.aug)
+ File: Anaconda (anaconda.aug)
+ File: Semanage (semanage.aug)
+ File: Toml (toml.aug)
+ } # Group: Specific Modules
+
+Group: Generic Modules {
+
+ File: Build (build.aug)
+ File: Erlang (erlang.aug)
+ File: IniFile (inifile.aug)
+ File: Quote (quote.aug)
+ File: Rx (rx.aug)
+ File: Sep (sep.aug)
+ File: Util (util.aug)
+ } # Group: Generic Modules
+
+Group: Tests and Examples {
+
+ File: Test_Access (tests/test_access.aug)
+ File: Test_ActiveMQ_Conf (tests/test_activemq_conf.aug)
+ File: Test_ActiveMQ_XML (tests/test_activemq_xml.aug)
+ File: Test_AFS_cellalias (tests/test_afs_cellalias.aug)
+ File: Test_Aliases (tests/test_aliases.aug)
+ File: Test_Anacron (tests/test_anacron.aug)
+ File: Test_Approx (tests/test_approx.aug)
+ File: Test_Apt_Update_Manager (tests/test_apt_update_manager.aug)
+ File: Test_Authorized_Keys (tests/test_authorized_keys.aug)
+ File: Test_BootConf (tests/test_bootconf.aug)
+ File: Test_Build (tests/test_build.aug)
+ File: Test_Carbon (tests/test_carbon.aug)
+ File: Test_Channels (tests/test_channels.aug)
+ File: Test_Chrony (tests/test_chrony.aug)
+ File: Test_Collectd (tests/test_collectd.aug)
+ File: Test_CPanel (tests/test_cpanel.aug)
+ File: Test_CSV (no auto-title, tests/test_csv.aug)
+ File: Test_Cups (tests/test_cups.aug)
+ File: Test_Dovecot (tests/test_dovecot.aug)
+ File: Test_Erlang (tests/test_erlang.aug)
+ File: Test_FAI_DiskConfig (tests/test_fai_diskconfig.aug)
+ File: Test_Fonts (tests/test_fonts.aug)
+ File: Test_Fuse (tests/test_fuse.aug)
+ File: Test_GtkBookmarks (tests/test_gtkbookmarks.aug)
+ File: Test_Htpasswd (tests/test_htpasswd.aug)
+ File: Test_IniFile (tests/test_inifile.aug)
+ File: Test_Inputrc (tests/test_inputrc.aug)
+ File: Test_Iscsid (tests/test_iscsid.aug)
+ File: Test_JettyRealm (tests/test_jettyrealm.aug)
+ File: Test_JMXAccess (tests/test_jmxaccess.aug)
+ File: Test_JMXPassword (tests/test_jmxpassword.aug)
+ File: Test_Kdump (tests/test_kdump.aug)
+ File: Test_Keepalived (tests/test_keepalived.aug)
+ File: Test_Known_Hosts (tests/test_known_hosts.aug)
+ File: Test_Koji (tests/test_koji.aug)
+ File: Test_Ldso (tests/test_ldso.aug)
+ File: Test_Lightdm (tests/test_lightdm.aug)
+ File: Test_LVM (tests/test_lvm.aug)
+ File: Test_MCollective (tests/test_mcollective.aug)
+ File: Test_Memcached (tests/test_memcached.aug)
+ File: Test_MongoDBServer (tests/test_mongodbserver.aug)
+ File: Test_NagiosCfg (tests/test_nagioscfg.aug)
+ File: Test_NetworkManager (tests/test_networkmanager.aug)
+ File: Test_Nginx (tests/test_nginx.aug)
+ File: Test_Nslcd (tests/test_nslcd.aug)
+ File: Test_Ntpd (tests/test_ntpd.aug)
+ File: Test_OpenShift_Config (tests/test_openshift_config.aug)
+ File: Test_OpenShift_Http (tests/test_openshift_http.aug)
+ File: Test_OpenShift_Quickstarts (tests/test_openshift_quickstarts.aug)
+ File: Test_Postfix_Passwordmap (tests/test_postfix_passwordmap.aug)
+ File: Test_Postfix_Transport (tests/test_postfix_transport.aug)
+ File: Test_Postfix_Virtual (tests/test_postfix_virtual.aug)
+ File: Test_Postgresql (tests/test_postgresql.aug)
+ File: Test_Protocols (tests/test_protocols.aug)
+ File: Test_Puppet_Auth (tests/test_puppet_auth.aug)
+ File: Test_Puppetfile (tests/test_puppetfile.aug)
+ File: Test_Qpid (tests/test_qpid.aug)
+ File: Test_Quote (tests/test_quote.aug)
+ File: Test_Rabbitmq (tests/test_rabbitmq.aug)
+ File: Test_Redis (tests/test_redis.aug)
+ File: Test_Reprepro_Uploaders (tests/test_reprepro_uploaders.aug)
+ File: Test_Rhsm (tests/test_rhsm.aug)
+ File: Test_Rsyslog (tests/test_rsyslog.aug)
+ File: Test_Simplelines (tests/test_simplelines.aug)
+ File: Test_Simplevars (tests/test_simplevars.aug)
+ File: Test_SmbUsers (tests/test_smbusers.aug)
+ File: Test_Subversion (tests/test_subversion.aug)
+ File: Test_Sysconfig_Route (tests/test_sysconfig_route.aug)
+ File: Test_Sysctl (tests/test_sysctl.aug)
+ File: Test_Systemd (tests/test_systemd.aug)
+ File: Test_Thttpd (tests/test_thttpd.aug)
+ File: Test_Tmpfiles (tests/test_tmpfiles.aug)
+ File: Test_Up2date (tests/test_up2date.aug)
+ File: Test_UpdateDB (tests/test_updatedb.aug)
+ File: Test_VMware_Config (tests/test_vmware_config.aug)
+ File: Test_Xml (tests/test_xml.aug)
+ File: Test_Yum (tests/test_yum.aug)
+ File: Test_login_defs (tests/test_login_defs.aug)
+ File: Test_sssd (tests/test_sssd.aug)
+ File: Test_sudoers (no auto-title, tests/test_sudoers.aug)
+ File: Test_ssh (tests/test_ssh.aug)
+ File: Test_sshd (tests/test_sshd.aug)
+ File: AptPreferences.pin (tests/test_aptpreferences.aug)
+ File: Desktop.lns (tests/test_desktop.aug)
+ File: Interfaces.lns (tests/test_interfaces.aug)
+ File: test_httpd.aug (tests/test_httpd.aug)
+ File: test_shellvars.aug (tests/test_shellvars.aug)
+ File: test_shellvars_list.aug (tests/test_shellvars_list.aug)
+ File: test_slapd.aug (tests/test_slapd.aug)
+ File: test_trapperkeeper.aug (tests/test_trapperkeeper.aug)
+ File: Test_Xymon_Alerting (tests/test_xymon_alerting.aug)
+ File: Test_Anaconda (tests/test_anaconda.aug)
+ File: Test_Semanage (tests/test_semanage.aug)
+ File: test_toml.aug (tests/test_toml.aug)
+ } # Group: Tests and Examples
+
+Group: Index {
+
+ Augeas Lens Index: Lenses
+ Augeas Module Index: Modules
+ Augeas Variable Index: Variables
+ Augeas Test Index: Tests
+ Index: Everything
+ File Index: Files
+ Variable Index: Variables
+ } # Group: Index
+
--- /dev/null
+Format: 1.4
+
+
+Title: Augeas Documentation
+SubTitle: Modules
+
+# You can add a footer to your documentation like this:
+# Footer: [text]
+# If you want to add a copyright notice, this would be the place to do it.
+
+# You can add a timestamp to your documentation like one of these:
+# Timestamp: Generated on month day, year
+# Timestamp: Updated mm/dd/yyyy
+# Timestamp: Last updated mon day
+#
+# m - One or two digit month. January is "1"
+# mm - Always two digit month. January is "01"
+# mon - Short month word. January is "Jan"
+# month - Long month word. January is "January"
+# d - One or two digit day. 1 is "1"
+# dd - Always two digit day. 1 is "01"
+# day - Day with letter extension. 1 is "1st"
+# yy - Two digit year. 2006 is "06"
+# yyyy - Four digit year. 2006 is "2006"
+# year - Four digit year. 2006 is "2006"
+
+
+# --------------------------------------------------------------------------
+#
+# Cut and paste the lines below to change the order in which your files
+# appear on the menu. Don't worry about adding or removing files, Natural
+# Docs will take care of that.
+#
+# You can further organize the menu by grouping the entries. Add a
+# "Group: [name] {" line to start a group, and add a "}" to end it.
+#
+# You can add text and web links to the menu by adding "Text: [text]" and
+# "Link: [name] ([URL])" lines, respectively.
+#
+# The formatting and comments are auto-generated, so don't worry about
+# neatness when editing the file. Natural Docs will clean it up the next
+# time it is run. When working with groups, just deal with the braces and
+# forget about the indentation and comments.
+#
+# --------------------------------------------------------------------------
+
+
+Group: Main Site {
+
+ Link: Main (/index.html)
+ Link: Documentation (/docs/index.html)
+ } # Group: Main Site
+
+Group: Specific Modules {
+
+ File: Aliases (aliases.aug)
+ File: Approx (approx.aug)
+ File: AptPreferences (aptpreferences.aug)
+ File: Aptsources (aptsources.aug)
+ File: BBhosts (bbhosts.aug)
+ File: Cgconfig (cgconfig.aug)
+ File: Cgrules (cgrules.aug)
+ File: CobblerModules (cobblermodules.aug)
+ File: CobblerSettings (cobblersettings.aug)
+ File: Cron (cron.aug)
+ File: Darkice (darkice.aug)
+ File: Debctrl (debctrl.aug)
+ File: Device_map (device_map.aug)
+ File: Dhclient (dhclient.aug)
+ File: Dnsmasq (dnsmasq.aug)
+ File: Dpkg (dpkg.aug)
+ File: Dput (dput.aug)
+ File: Ethers (ethers.aug)
+ File: Exports (exports.aug)
+ File: Fstab (fstab.aug)
+ File: Gdm (gdm.aug)
+ File: Group (group.aug)
+ File: Grub (grub.aug)
+ File: Inetd (inetd.aug)
+ File: Inittab (inittab.aug)
+ File: Interfaces (interfaces.aug)
+ File: Iptables (iptables.aug)
+ File: Json (json.aug)
+ File: Keepalived (keepalived.aug)
+ File: Krb5 (krb5.aug)
+ File: Limits (limits.aug)
+ File: Login_defs (login_defs.aug)
+ File: Logrotate (logrotate.aug)
+ File: Lokkit (lokkit.aug)
+ File: Mke2fs (mke2fs.aug)
+ File: Modprobe (modprobe.aug)
+ File: Modules_conf (modules_conf.aug)
+ File: Monit (monit.aug)
+ File: Multipath (multipath.aug)
+ File: MySQL (mysql.aug)
+ File: NagiosCfg (nagioscfg.aug)
+ File: Nrpe (nrpe.aug)
+ File: Nsswitch (nsswitch.aug)
+ File: Ntp (ntp.aug)
+ File: Odbc (odbc.aug)
+ File: OpenVPN (openvpn.aug)
+ File: Pam (pam.aug)
+ File: Passwd (passwd.aug)
+ File: Pbuilder (pbuilder.aug)
+ File: Pg_Hba (pg_hba.aug)
+ File: PHP (php.aug)
+ File: Phpvars (phpvars.aug)
+ File: Postfix_Access (postfix_access.aug)
+ File: Postfix_Main (postfix_main.aug)
+ File: Postfix_Master (postfix_master.aug)
+ File: Puppet (puppet.aug)
+ File: Resolv (resolv.aug)
+ File: Rsyncd (rsyncd.aug)
+ File: Samba (samba.aug)
+ File: Securetty (securetty.aug)
+ File: Services (services.aug)
+ File: Shells (shells.aug)
+ File: Shellvars_list (shellvars_list.aug)
+ File: Shellvars (shellvars.aug)
+ File: Slapd (slapd.aug)
+ File: Soma (soma.aug)
+ File: Spacevars (spacevars.aug)
+ File: Squid (squid.aug)
+ File: Sshd (sshd.aug)
+ File: Sudoers (sudoers.aug)
+ File: Sysconfig (sysconfig.aug)
+ File: Sysctl (sysctl.aug)
+ File: Syslog (syslog.aug)
+ File: Vsftpd (vsftpd.aug)
+ File: Webmin (webmin.aug)
+ File: Wine (wine.aug)
+ File: Xinetd (xinetd.aug)
+ File: Xorg (xorg.aug)
+ File: Yum (yum.aug)
+ } # Group: Specific Modules
+
+Group: Generic Modules {
+
+ File: Build (build.aug)
+ File: IniFile (inifile.aug)
+ File: Rx (rx.aug)
+ File: Sep (sep.aug)
+ File: Util (util.aug)
+ } # Group: Generic Modules
+
+Group: Index {
+
+ Augeas Lens Index: Lenses
+ Augeas Module Index: Modules
+ Augeas Variable Index: Variables
+ Index: Everything
+ } # Group: Index
+
--- /dev/null
+Format: 1.52
+
+# This is the Natural Docs topics file for this project. If you change anything
+# here, it will apply to THIS PROJECT ONLY. If you'd like to change something
+# for all your projects, edit the Topics.txt in Natural Docs' Config directory
+# instead.
+
+
+Ignore Keywords:
+ class
+
+
+#-------------------------------------------------------------------------------
+# SYNTAX:
+#
+# Topic Type: [name]
+# Alter Topic Type: [name]
+# Creates a new topic type or alters one from the main file. Each type gets
+# its own index and behavior settings. Its name can have letters, numbers,
+# spaces, and these characters: - / . '
+#
+# Plural: [name]
+# Sets the plural name of the topic type, if different.
+#
+# Keywords:
+# [keyword]
+# [keyword], [plural keyword]
+# ...
+# Defines or adds to the list of keywords for the topic type. They may only
+# contain letters, numbers, and spaces and are not case sensitive. Plural
+# keywords are used for list topics. You can redefine keywords found in the
+# main topics file.
+#
+# Index: [yes|no]
+# Whether the topics get their own index. Defaults to yes. Everything is
+# included in the general index regardless of this setting.
+#
+# Scope: [normal|start|end|always global]
+#    How the topics affect scope.  Defaults to normal.
+# normal - Topics stay within the current scope.
+# start - Topics start a new scope for all the topics beneath it,
+# like class topics.
+# end - Topics reset the scope back to global for all the topics
+# beneath it.
+# always global - Topics are defined as global, but do not change the scope
+# for any other topics.
+#
+# Class Hierarchy: [yes|no]
+# Whether the topics are part of the class hierarchy. Defaults to no.
+#
+# Page Title If First: [yes|no]
+# Whether the topic's title becomes the page title if it's the first one in
+# a file. Defaults to no.
+#
+# Break Lists: [yes|no]
+# Whether list topics should be broken into individual topics in the output.
+# Defaults to no.
+#
+# Can Group With: [type], [type], ...
+# Defines a list of topic types that this one can possibly be grouped with.
+# Defaults to none.
+#-------------------------------------------------------------------------------
+
+# The following topics are defined in the main file, if you'd like to alter
+# their behavior or add keywords:
+#
+# Generic, Class, Interface, Section, File, Group, Function, Variable,
+# Property, Type, Constant, Enumeration, Event, Delegate, Macro,
+# Database, Database Table, Database View, Database Index, Database
+# Cursor, Database Trigger, Cookie, Build Target
+
+# If you add something that you think would be useful to other developers
+# and should be included in Natural Docs by default, please e-mail it to
+# topics [at] naturaldocs [dot] org.
+
+
+Topic Type: Augeas Lens
+
+ Plural: Augeas Lenses
+ Keywords:
+ view, views
+
+
+Topic Type: Augeas Variable
+
+ Plural: Augeas Variables
+ Keywords:
+ variable, variables
+
+
+Topic Type: Augeas Test
+
+ Plural: Augeas Tests
+ Keywords:
+ test, tests
+
+
+Topic Type: Augeas Module
+
+ Plural: Augeas Modules
+ Scope: Start
+ Page Title If First: Yes
+
+ Keywords:
+ module, modules
--- /dev/null
+" Vim syntax file
+" Language: Augeas
+" Version: 1.0
+" $Id$
+" Maintainer: Bruno Cornec <bruno@project-builder.org>
+" Contributors:
+" Raphaël Pinson <raphink@gmail.com>
+
+" For version 5.x: Clear all syntax items
+" For version 6.x: Quit when a syntax file was already loaded
+if version < 600
+ syntax clear
+elseif exists("b:current_syntax")
+ finish
+endif
+
+
+syn case ignore
+syn sync lines=250
+
+syn keyword augeasStatement module let incl transform autoload in rec excl
+syn keyword augeasTestStatement test get after put insa insb set rm
+syn keyword augeasTodo contained TODO FIXME XXX DEBUG NOTE
+
+if exists("augeas_symbol_operator")
+ syn match augeasSymbolOperator "[+\-/*=]"
+ syn match augeasSymbolOperator "[<>]=\="
+ syn match augeasSymbolOperator "<>"
+ syn match augeasSymbolOperator ":="
+ syn match augeasSymbolOperator "[()]"
+ syn match augeasSymbolOperator "\.\."
+ syn match augeasSymbolOperator "[\^.]"
+ syn match augeasMatrixDelimiter "[][]"
+ "if you prefer you can highlight the range
+ "syn match augeasMatrixDelimiter "[\d\+\.\.\d\+]"
+endif
+
+if exists("augeas_no_tabs")
+ syn match augeasShowTab "\t"
+endif
+
+syn region augeasComment start="(\*" end="\*)" contains=augeasTodo,augeasSpaceError
+
+
+if !exists("augeas_no_functions")
+ " functions
+ syn keyword augeasLabel del key store label value square seq
+ syn keyword augeasFunction Util Build Rx Sep Quote
+endif
+
+syn region augeasRegexp start="/" end="[^\\]/"
+syn region augeasString start=+"+ end=+"+ skip=+\\"+
+
+
+" Define the default highlighting.
+" For version 5.7 and earlier: only when not done already
+" For version 5.8 and later: only when an item doesn't have highlighting yet
+if version >= 508 || !exists("did_augeas_syn_inits")
+ if version < 508
+ let did_augeas_syn_inits = 1
+ command -nargs=+ HiLink hi link <args>
+ else
+ command -nargs=+ HiLink hi def link <args>
+ endif
+
+ HiLink augeasAcces augeasStatement
+ HiLink augeasBoolean Boolean
+ HiLink augeasComment Comment
+ HiLink augeasConditional Conditional
+ HiLink augeasConstant Constant
+ HiLink augeasDelimiter Identifier
+ HiLink augeasDirective augeasStatement
+ HiLink augeasException Exception
+ HiLink augeasFloat Float
+ HiLink augeasFunction Function
+ HiLink augeasLabel Label
+ HiLink augeasMatrixDelimiter Identifier
+ HiLink augeasModifier Type
+ HiLink augeasNumber Number
+ HiLink augeasOperator Operator
+ HiLink augeasPredefined augeasStatement
+ HiLink augeasPreProc PreProc
+ HiLink augeasRepeat Repeat
+ HiLink augeasSpaceError Error
+ HiLink augeasStatement Statement
+ HiLink augeasString String
+ HiLink augeasStringEscape Special
+ HiLink augeasStringEscapeGPC Special
+ HiLink augeasRegexp Special
+ HiLink augeasStringError Error
+ HiLink augeasStruct augeasStatement
+ HiLink augeasSymbolOperator augeasOperator
+ HiLink augeasTestStatement augeasStatement
+ HiLink augeasTodo Todo
+ HiLink augeasType Type
+ HiLink augeasUnclassified augeasStatement
+ " HiLink augeasAsm Assembler
+ HiLink augeasError Error
+ HiLink augeasAsmKey augeasStatement
+ HiLink augeasShowTab Error
+
+ delcommand HiLink
+endif
+
+
+let b:current_syntax = "augeas"
+
+" vim: ts=8 sw=2
--- /dev/null
+\documentclass{amsart}
+
+\usepackage{amsmath}
+\usepackage{xspace}
+\usepackage{amssymb}
+\usepackage{bcprules}
+
+\newcommand{\ensmath}[1]{\ensuremath{#1}\xspace}
+
+\newcommand{\opnam}[1]{\ensmath{\operatorname{\mathit{#1}}}}
+\newcommand{\nget}{\opnam{get}}
+\newcommand{\nput}{\opnam{put}}
+\newcommand{\ncreate}{\opnam{create}}
+\newcommand{\nkey}{\opnam{key}}
+\newcommand{\lget}[1]{\opnam{get}{#1}}
+\newcommand{\lput}[3]{\opnam{put}{#1}\,{#2}\,{#3}}
+\newcommand{\lcreate}[2]{\opnam{create}{#1}\,{#2}}
+\newcommand{\lkey}[1]{\nkey{#1}}
+
+\newcommand{\suff}{\ensmath{\operatorname{suff}}}
+\newcommand{\pref}{\ensmath{\operatorname{pref}}}
+\newcommand{\lenstype}[3][K]{\ensmath{{#2}\stackrel{#1}{\Longleftrightarrow}{#3}}}
+\newcommand{\tree}[1]{\ensmath{[#1]}}
+\newcommand{\niltree}{\ensmath{[]}}
+\newcommand{\Regexp}{\ensmath{\mathcal R}}
+\newcommand{\reglang}[1]{\ensmath{[\![{#1}]\!]}}
+\newcommand{\lens}[1]{\opnam{#1}}
+\newcommand{\eps}{\ensmath{\epsilon}}
+\newcommand{\conc}[2]{\ensmath{#1\cdot #2}}
+\newcommand{\uaconc}[2]{\ensmath{#1\cdot^{!} #2}}
+\newcommand{\xconc}[2]{\ensmath{#1\odot #2}}
+\newcommand{\alt}[2]{\ensmath{#1\,|\,#2}}
+\newcommand{\cstar}[1]{\ensmath{#1^*}}
+\newcommand{\uastar}[1]{\ensmath{#1^{!*}}}
+\newcommand{\Trees}{\ensmath{\mathcal T}}
+\newcommand{\Words}{\ensmath{\Sigma^*}}
+\newcommand{\Wordsrhd}{\ensmath{\Words\cup\rhd}}
+\newcommand{\tmap}[2]{\ensmath{#1\mapsto #2}}
+\newcommand{\tmaptt}[2]{\ensmath{{\mathtt #1}\mapsto {\mathtt #2}}}
+\newcommand{\dom}[1]{\ensmath{\mathrm{dom}(#1)}}
+\newcommand{\List}{\ensmath{\mathtt{List}}}
+\newcommand{\redigits}{\ensmath{\mathtt{D}}}
+
+\newcommand{\noeps}[1]{\ensmath{{#1}_{\setminus \epsilon}}}
+
+\newtheorem{theorem}{Theorem}
+\newtheorem{lemma}[theorem]{Lemma}
+{\theoremstyle{definition}
+ \newtheorem{defn}{Definition}
+ \newtheorem*{remark}{Remark}
+ \newtheorem*{example}{Example}}
+
+\begin{document}
+
+\section*{Ambiguity}
+
+\begin{defn}
+ For a language $L$ over $\Sigma$, define
+ \begin{equation*}
+ \begin{split}
+ \suff(L) = \{ p \in \Sigma^+ | \exists u \in L: up \in L \}\\
+ \pref(L) = \{ p \in \Sigma^+ | \exists v \in L: pv \in L \}
+ \end{split}
+ \end{equation*}
+\end{defn}
+Note that $\suff(L)$ and $\pref(L)$ do not contain $\eps$. Note also that
+we only take prefixes and suffixes that combine with another word in $L$
+to form a word in $L$, not with arbitrary words from $\Sigma^*$.
+
+Up to the empty word, the set $\suff(L)$ is the left quotient $L^{-1}L$ of
+$L$ by itself. Similarly, $\pref(L) = LL^{-1}$ up to $\eps$.
+
+\begin{defn}
+Two languages $L_1$ and $L_2$ are \emph{unambiguously concatenable},
+written \uaconc{L_1}{L_2}, iff for every $u_1, v_1 \in L_1$ and $u_2, v_2
+\in L_2$, if $u_1u_2=v_1v_2$ then $u_1=v_1$ and $u_2=v_2$.
+\end{defn}
+
+\begin{lemma}
+ The languages $L_1$ and $L_2$ are unambiguously concatenable if and only
+ if $\suff(L_1) \cap \pref(L_2) = \emptyset$.
+\end{lemma}
+\begin{proof}
+Suppose $L_1$ and $L_2$ are not unambiguously concatenable, i.e.\ there
+exist $u_1$, $v_1$ in $L_1$ and $u_2$, $v_2$ in $L_2$ such that
+$u_1u_2=v_1v_2$ and $u_1\neq v_1$, and therefore $u_2 \neq v_2$. Assume
+without loss of generality that $u_1$ is strictly shorter than $v_1$, so
+that $v_1 = u_1 p$ with $p \neq \eps$. Then $v_1v_2 = u_1pv_2$, hence
+$u_2 = pv_2$ and $p \in \suff(L_1) \cap \pref(L_2)$.
+
+Conversely, let $p \in \suff(L_1) \cap \pref(L_2)$. This implies that
+there are $u\in L_1$ and $v\in L_2$ such that $up\in L_1$ and $pv\in
+L_2$. The word $upv$ can be split as $\conc{up}{v}$ and as $\conc{u}{pv}$
+and therefore $L_1$ and $L_2$ are not unambiguously concatenable.
+\end{proof}
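+
+A small example illustrates the lemma.
+\begin{example}
+Let $L_1 = (a|ab)$ and $L_2 = (b|bb)$. Then $\suff(L_1) = \{b\} =
+\pref(L_2)$, and indeed the word $abb$ can be split both as \conc{a}{bb}
+and as \conc{ab}{b}: the two languages are not unambiguously
+concatenable.
+\end{example}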
+
+To ease notation we write $\noeps{L}$ for $L\setminus \{
+\epsilon \}$.
+
+\begin{defn}
+A language $L$ is \emph{unambiguously iterable}, written \uastar{L}, iff
+for every $u_1,\dots,u_m, v_1,\dots,v_n \in \noeps{L}$ with $u_1\cdots u_m
+= v_1 \cdots v_n$, $m=n$ and $u_i = v_i$.
+\end{defn}
+
+It is very important that we exclude $\epsilon$ in the definition of
+unambiguous iteration, since otherwise \emph{any} language $L$ that
+contains $\epsilon$ is trivially not unambiguously iterable, even though that
+presents no problems for our purposes, since we never split $\epsilon$ into
+more than one word.
+
+\begin{lemma}
+The language $L$ is unambiguously iterable if and only if $\noeps{L}$ and
+$\noeps{\cstar{L}}$ are unambiguously concatenable.
+\end{lemma}
+\begin{proof}
+Let $L$ be unambiguously iterable, and assume that there is $u_1v_1 =
+u_2v_2$ with $u_i \in \noeps{L}$ and $v_i \in \noeps{\cstar{L}}$ and $u_1
+\neq u_2$. This contradicts $\uastar{L}$ since $\noeps{L}\cdot
+\noeps{\cstar{L}}\subseteq \cstar{L}$.
+
+Let $\uaconc{\noeps{L}}{\noeps{\cstar{L}}}$ and assume there are
+$u_1,\dots,u_m, v_1,\dots,v_n \in \noeps{L}$ with $u_1\cdots u_m = v_1
+\cdots v_n$, but there is an $i \leq \min(m,n)$ with $u_i \neq v_i$. We can
+assume that $u_1 \neq v_1$\footnote{if $u_1 = v_1$, we simply use
+ $u_2\cdots u_m$ and $v_2 \cdots v_n$ for this argument}, and therefore
+$\min(m,n)\geq 2$. Since $u_1$ and $v_1$ are in $\noeps{L}$, we have an
+ambiguous split of a word from $\noeps{L}\cdot\noeps{\cstar{L}}$,
+contradicting the initial assumption.
+\end{proof}
+
+\begin{example}
+Using regular expression notation, the language $L=(a|b|c|abc)$ is
+unambiguously concatenable with itself, but not unambiguously
+iterable. $\cstar{L} = (a|b|c)^*$ and the word $abcabc$ can be split into
+$abc\cdot abc$ and $a\cdot bcabc$. This example shows that $\uaconc{L}{L}$
+does not imply $\uastar{L}$, but as the previous lemma showed,
+$\uaconc{\noeps{L}}{\noeps{\cstar{L}}}$ does.
+\end{example}
+
+Given $P = \suff(L_1) \cap \pref(L_2)$, how does one generate a word that
+is ambiguous? The proof of the first lemma suggests a way: pick $p\in P$
+together with $u$ and $v$ such that $up\in L_1$ and $pv\in L_2$; then
+$upv$ has two distinct splits.
+\end{document}
+
+%% TeX-parse-self: t
+%% TeX-auto-save: t
+
+%% Local Variables:
+%% TeX-master: "unambig.tex"
+%% compile-command: "pdflatex -file-line-error -halt-on-error unambig.tex"
+%% End:
--- /dev/null
+Path expressions
+================
+
+In the public API, especially for aug_match and aug_get, tree nodes are
+identified with a powerful syntax modelled on the XPath syntax for XML
+documents.
+
+In the simplest case, a path expression just lists a path to some node in
+the tree, for example,
+
+ /files/etc/hosts/1/ipaddr
+
+If multiple nodes have the same label, one of them can be picked out by
+providing its position, either counting from the first such node (at
+position 1) or counting from the end. For example, the second alias of the
+first host entry is
+
+ /files/etc/hosts/1/alias[2]
+
+and the penultimate alias is
+
+ /files/etc/hosts/1/alias[last() - 1]
+
+For /etc/hosts, each entry can be thought of as a primary key (the ipaddr)
+and additional attributes relating to that primary key, namely the
+canonical host name and its aliases. It is therefore natural to refer to
+host entries by their ipaddr, not by the synthetic sequence number in their
+path. The canonical name of the host entry with ipaddr 127.0.0.1 can be
+found with
+
+ /files/etc/hosts/*[ipaddr = "127.0.0.1"]/canonical
+
+or, equivalently, with
+
+ /files/etc/hosts/*/canonical[../ipaddr = "127.0.0.1"]
+
+The canonical names of all hosts that have at least one alias:
+
+ /files/etc/hosts/*/canonical[../alias]
+
+It is also possible to search bigger parts of the tree by using '//'. For
+example, all nodes called 'ipaddr' underneath /files/etc can be found with
+
+ /files/etc//ipaddr
+
+This is handy for finding errors reported by Augeas underneath /augeas:
+
+ /augeas//error
+
+A lazy way to find localhost is
+
+ /files/etc//ipaddr[. = '127.0.0.1']
+
+The vardir entry in the main section of puppet.conf is at
+
+ /files/etc/puppet/puppet.conf/section[. = "main"]/vardir
+
+All pam entries that use the system-auth module:
+
+ /files/etc/pam.d/*[.//module = "system-auth"]
+
+More examples can be found in tests/xpath.tests.
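
The examples above rely on two mechanisms: siblings that share a label are
told apart by 1-based position, and predicates filter nodes by the values
of their children. The following sketch models just those two rules in
plain Python; the tree shape and helper names are invented for
illustration, and real programs would go through the Augeas API instead:

```python
# A node is (label, value, children); children is an ordered list, so
# several siblings may share a label, like the aliases of a host entry.
host1 = ("1", None, [
    ("ipaddr",    "127.0.0.1", []),
    ("canonical", "localhost", []),
    ("alias",     "localhost.localdomain", []),
    ("alias",     "lo", []),
])

def children(node, label):
    """All children of node carrying the given label, in document order."""
    return [c for c in node[2] if c[0] == label]

# alias[2] and alias[last() - 1] select by 1-based position:
aliases = children(host1, "alias")
print(aliases[2 - 1][1])    # alias[2]          -> lo
print(aliases[-2][1])       # alias[last() - 1] -> localhost.localdomain

# *[ipaddr = "127.0.0.1"] keeps the nodes with a matching child value:
def with_child(nodes, label, value):
    return [n for n in nodes if any(c[1] == value for c in children(n, label))]

match = with_child([host1], "ipaddr", "127.0.0.1")
print(children(match[0], "canonical")[0][1])     # -> localhost
```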
+
+One further extension that might be useful is to add boolean operators for
+predicates, so that we can write
+
+ /files/etc/hosts/ipaddr[../alias = 'localhost' or ../canonical = 'localhost']
+
+Grammar for path expressions
+============================
+
+Formally, path expressions are generated by this grammar. The grammar uses
+nonterminals from the XPath 1.0 grammar to point out the similarities
+between XPath and Augeas path expressions.
+
+Unfortunately, the production for PathExpr is ambiguous, since Augeas is
+too liberal in what it allows as labels for tree nodes: the expression '42'
+can either be the number 42 (a PrimaryExpr) or the RelativeLocationPath
+'child::42'. The reason for this ambiguity is that we allow node names like
+'42' in the tree; rather than forbid them, we resolve the ambiguity by
+always parsing '42' as a number, and requiring that the user write the
+RelativeLocationPath in a different form, e.g. 'child::42' or './42'.
+
+LocationPath ::= RelativeLocationPath | AbsoluteLocationPath
+
+AbsoluteLocationPath ::= '/' RelativeLocationPath?
+ | AbbreviatedAbsoluteLocationPath
+AbbreviatedAbsoluteLocationPath ::= '//' RelativeLocationPath
+
+RelativeLocationPath ::= Step
+ | RelativeLocationPath '/' Step
+ | AbbreviatedRelativeLocationPath
+AbbreviatedRelativeLocationPath ::= RelativeLocationPath '//' Step
+
+Step ::= AxisSpecifier NameTest Predicate* | '.' | '..'
+AxisSpecifier ::= AxisName '::' | <epsilon>
+AxisName ::= 'ancestor'
+ | 'ancestor-or-self'
+ | 'child'
+ | 'descendant'
+ | 'descendant-or-self'
+ | 'parent'
+ | 'self'
+ | 'root'
+NameTest ::= '*' | Name
+Predicate ::= "[" Expr "]" *
+
+PathExpr ::= LocationPath
+ | FilterExpr
+ | FilterExpr '/' RelativeLocationPath
+ | FilterExpr '//' RelativeLocationPath
+
+FilterExpr ::= PrimaryExpr Predicate
+
+PrimaryExpr ::= Literal
+ | Number
+ | FunctionCall
+ | VariableReference
+ | '(' Expr ')'
+
+FunctionCall ::= Name '(' ( Expr ( ',' Expr )* )? ')'
+
+Expr ::= EqualityExpr
+EqualityExpr ::= RelationalExpr (EqualityOp RelationalExpr)? | ReMatchExpr
+ReMatchExpr ::= RelationalExpr MatchOp RelationalExpr
+MatchOp ::= "=~" | "!~"
+EqualityOp ::= "=" | "!="
+RelationalExpr ::= AdditiveExpr (RelationalOp AdditiveExpr)?
+RelationalOp ::= ">" | "<" | ">=" | "<="
+AdditiveExpr ::= MultiplicativeExpr (AdditiveOp MultiplicativeExpr)*
+AdditiveOp ::= '+' | '-'
+MultiplicativeExpr ::= UnionExpr ('*' UnionExpr)*
+UnionExpr ::= PathExpr ("|" PathExpr)*
+
+Literal ::= '"' /[^"]* / '"' | "'" /[^']* / "'"
+Number ::= /[0-9]+/
+/* Names can contain whitespace in the interior */
+NameNoWS ::= [^][|/\= \t\n] | \\.
+NameWS ::= [^][|/\=] | \\.
+Name ::= NameNoWS NameWS* NameNoWS | NameNoWS
+
+VariableReference ::= '$' /[a-zA-Z_][a-zA-Z0-9_]*/
+
+Additional stuff
+================
+
+Just for reference, not really interesting as documentation
+
+XPath 1.0 (from http://www.w3.org/TR/xpath)
+-------------------------------------------
+
+[ 1] LocationPath ::= RelativeLocationPath | AbsoluteLocationPath
+[ 2] AbsoluteLocationPath ::= '/' RelativeLocationPath?
+ | AbbreviatedAbsoluteLocationPath
+[ 3] RelativeLocationPath ::= Step
+ | RelativeLocationPath '/' Step
+ | AbbreviatedRelativeLocationPath
+[ 4] Step ::= AxisSpecifier NodeTest Predicate*
+ | AbbreviatedStep
+[ 5] AxisSpecifier ::= AxisName '::'
+ | AbbreviatedAxisSpecifier
+[ 6] AxisName ::= 'ancestor'
+ | 'ancestor-or-self'
+ | 'attribute'
+ | 'child'
+ | 'descendant'
+ | 'descendant-or-self'
+ | 'following'
+ | 'following-sibling'
+ | 'namespace'
+ | 'parent'
+ | 'preceding'
+ | 'preceding-sibling'
+ | 'self'
+[ 7] NodeTest ::= NameTest
+ | NodeType '(' ')'
+ | 'processing-instruction' '(' Literal ')'
+[ 8] Predicate ::= '[' PredicateExpr ']'
+[ 9] PredicateExpr ::= Expr
+[10] AbbreviatedAbsoluteLocationPath ::= '//' RelativeLocationPath
+[11] AbbreviatedRelativeLocationPath ::= RelativeLocationPath '//' Step
+[12] AbbreviatedStep ::= '.' | '..'
+[13] AbbreviatedAxisSpecifier ::= '@'?
+[14] Expr ::= OrExpr
+[15] PrimaryExpr ::= VariableReference
+ | '(' Expr ')'
+ | Literal
+ | Number
+ | FunctionCall
+[16] FunctionCall ::= FunctionName '(' ( Argument ( ',' Argument )* )? ')'
+[17] Argument ::= Expr
+[18] UnionExpr ::= PathExpr
+ | UnionExpr '|' PathExpr
+[19] PathExpr ::= LocationPath
+ | FilterExpr
+ | FilterExpr '/' RelativeLocationPath
+ | FilterExpr '//' RelativeLocationPath
+[20] FilterExpr ::= PrimaryExpr
+ | FilterExpr Predicate
+[21] OrExpr ::= AndExpr | OrExpr 'or' AndExpr
+[22] AndExpr ::= EqualityExpr | AndExpr 'and' EqualityExpr
+[23] EqualityExpr ::= RelationalExpr
+ | EqualityExpr '=' RelationalExpr
+ | EqualityExpr '!=' RelationalExpr
+[24] RelationalExpr ::= AdditiveExpr
+ | RelationalExpr '<' AdditiveExpr
+ | RelationalExpr '>' AdditiveExpr
+ | RelationalExpr '<=' AdditiveExpr
+ | RelationalExpr '>=' AdditiveExpr
+
+[25] AdditiveExpr ::= MultiplicativeExpr
+ | AdditiveExpr '+' MultiplicativeExpr
+ | AdditiveExpr '-' MultiplicativeExpr
+[26] MultiplicativeExpr ::= UnaryExpr
+ | MultiplicativeExpr MultiplyOperator UnaryExpr
+ | MultiplicativeExpr 'div' UnaryExpr
+ | MultiplicativeExpr 'mod' UnaryExpr
+[27] UnaryExpr ::= UnionExpr
+ | '-' UnaryExpr
+[28] ExprToken ::= '(' | ')' | '[' | ']' | '.' | '..' | '@' | ',' | '::'
+ | NameTest
+ | NodeType
+ | Operator
+ | FunctionName
+ | AxisName
+ | Literal
+ | Number
+ | VariableReference
+[29] Literal ::= '"' [^"]* '"'
+ | "'" [^']* "'"
+[30] Number ::= Digits ('.' Digits?)? | '.' Digits
+[31] Digits ::= [0-9]+
+[32] Operator ::= OperatorName
+ | MultiplyOperator
+ | '/' | '//' | '|' | '+' | '-' | '=' | '!=' | '<' | '<=' | '>' | '>='
+[33] OperatorName ::= 'and' | 'or' | 'mod' | 'div'
+[34] MultiplyOperator ::= '*'
+[35] FunctionName ::= QName - NodeType
+[36] VariableReference ::= '$' QName
+[37] NameTest ::= '*' | NCName ':' '*' | QName
+[38] NodeType ::= 'comment' | 'text'
+ | 'processing-instruction'
+ | 'node'
+[39] ExprWhitespace ::= (#x20 | #x9 | #xD | #xA)+
+
+Useful subset
+-------------
+
+Big swath of XPath 1.0 that might be interesting for Augeas
+
+start symbol [14] Expr
+
+[14] Expr ::= OrExpr
+[21] OrExpr ::= AndExpr ('or' AndExpr)*
+[22] AndExpr ::= EqualityExpr ('and' EqualityExpr)*
+[23] EqualityExpr ::= RelationalExpr
+ | EqualityExpr '=' RelationalExpr
+ | EqualityExpr '!=' RelationalExpr
+[24] RelationalExpr ::= AdditiveExpr
+ | RelationalExpr '<' AdditiveExpr
+ | RelationalExpr '>' AdditiveExpr
+ | RelationalExpr '<=' AdditiveExpr
+ | RelationalExpr '>=' AdditiveExpr
+[25] AdditiveExpr ::= MultiplicativeExpr
+ | AdditiveExpr '+' MultiplicativeExpr
+ | AdditiveExpr '-' MultiplicativeExpr
+[26] MultiplicativeExpr ::= UnaryExpr
+ | MultiplicativeExpr MultiplyOperator UnaryExpr
+[27] UnaryExpr ::= UnionExpr | '-' UnaryExpr
+[18] UnionExpr ::= PathExpr ('|' PathExpr)*
+
+[19] PathExpr ::= LocationPath
+ | FilterExpr
+ | FilterExpr '/' RelativeLocationPath
+ | FilterExpr '//' RelativeLocationPath
+
+[ 1] LocationPath ::= RelativeLocationPath | AbsoluteLocationPath
+[ 3] RelativeLocationPath ::= Step
+ | RelativeLocationPath '/' Step
+ | AbbreviatedRelativeLocationPath
+[11] AbbreviatedRelativeLocationPath ::= RelativeLocationPath '//' Step
+[ 2] AbsoluteLocationPath ::= '/' RelativeLocationPath?
+ | AbbreviatedAbsoluteLocationPath
+[10] AbbreviatedAbsoluteLocationPath ::= '//' RelativeLocationPath
+
+[ 4] Step ::= AxisSpecifier NameTest Predicate* | '.' | '..'
+[ 5] AxisSpecifier ::= AxisName '::' | <epsilon>
+
+[ 6] AxisName ::= 'ancestor'
+ | 'ancestor-or-self'
+ | 'attribute'
+ | 'child'
+ | 'descendant'
+ | 'descendant-or-self'
+ | 'following'
+ | 'following-sibling'
+ | 'namespace'
+ | 'parent'
+ | 'preceding'
+ | 'preceding-sibling'
+ | 'self'
+
+[ 8] Predicate ::= '[' Expr ']'
+
+[20] FilterExpr ::= PrimaryExpr Predicate*
+[15] PrimaryExpr ::= '(' Expr ')'
+ | Literal
+ | Number
+ | FunctionCall
+[16] FunctionCall ::= FunctionName '(' ( Argument ( ',' Argument )* )? ')'
+[17] Argument ::= Expr
+
+Lexical structure
+
+[28] ExprToken ::= '(' | ')' | '[' | ']' | '.' | '..' | '@' | ',' | '::'
+ | NameTest
+ | NodeType
+ | Operator
+ | FunctionName
+ | AxisName
+ | Literal
+ | Number
+ | VariableReference
+[29] Literal ::= '"' [^"]* '"'
+ | "'" [^']* "'"
+[30] Number ::= Digits ('.' Digits?)? | '.' Digits
+[31] Digits ::= [0-9]+
+[32] Operator ::= OperatorName
+ | MultiplyOperator
+ | '/' | '//' | '|' | '+' | '-' | '=' | '!=' | '<' | '<=' | '>' | '>='
+[33] OperatorName ::= 'and' | 'or' | 'mod' | 'div'
+[34] MultiplyOperator ::= '*'
+[35] FunctionName ::= QName - NodeType
+[36] VariableReference ::= '$' QName
+[37] NameTest ::= '*' | QName
+[38] NodeType ::= 'comment' | 'text'
+ | 'processing-instruction'
+ | 'node'
+[39] ExprWhitespace ::= (#x20 | #x9 | #xD | #xA)+
--- /dev/null
+FROM alpine AS build
+
+RUN apk add --no-cache --virtual=.build gcc libc-dev make bison flex readline-dev libxml2-dev git automake autoconf libtool pkgconf coreutils bash
+COPY . /augeas
+RUN set -ex ; \
+    cd /augeas \
+ && ./autogen.sh --prefix=/opt/augeas \
+ && make \
+ && make install
+
+FROM alpine
+RUN apk add --no-cache libgcc libxml2 readline bash
+COPY --from=build /opt/augeas /opt/augeas
+ENV PATH=$PATH:/opt/augeas/bin
+RUN set -x ; \
+ cd /opt/augeas/bin \
+ && for TOOL in augcheck auggrep augload augloadone augparsediff augsed ; \
+ do \
+ wget https://raw.githubusercontent.com/raphink/augeas-sandbox/master/"$TOOL" ; \
+ chmod +x "$TOOL" ; \
+ done
+CMD augtool
--- /dev/null
+Building the image
+------------------
+
+First clone the Augeas repository, then build the image:
+
+```shell
+git clone https://github.com/hercules-team/augeas.git
+cd augeas
+docker build -t augeas -f docker/Dockerfile .
+```
+
+Using the image
+---------------
+
+Here is an example that prints the SSH configuration:
+
+```shell
+docker container run -ti --rm -v /etc/ssh/:/etc/ssh augeas augtool print /files/etc/ssh
+```
--- /dev/null
+
+AM_CFLAGS = @AUGEAS_CFLAGS@ @WARN_CFLAGS@ @LIBXML_CFLAGS@ \
+ -I $(top_srcdir)/src
+
+bin_PROGRAMS = fadot
+noinst_PROGRAMS = dump
+
+fadot_SOURCES = fadot.c
+fadot_LDADD = $(top_builddir)/src/libfa.la
+
+dump_SOURCES = dump.c
+dump_LDADD = $(top_builddir)/src/libaugeas.la $(top_builddir)/src/libfa.la
--- /dev/null
+module Cont =
+
+ (* An experiment in handling empty lines and comments in httpd.conf and
+ * similar. What makes this challenging is that httpd.conf allows
+ * continuation lines; the markers for continuation lines (backslash +
+   * newline) need to be treated like any other whitespace. *)
+
+ (* The continuation sequence that indicates that we should consider the
+ * next line part of the current line *)
+ let cont = /\\\\\r?\n/
+
+ (* Whitespace within a line: space, tab, and the continuation sequence *)
+ let ws = /[ \t]/ | cont
+
+ (* Any possible character - '.' does not match \n *)
+ let any = /(.|\n)/
+
+ (* Newline sequence - both for Unix and DOS newlines *)
+ let nl = /\r?\n/
+
+ (* Whitespace at the end of a line *)
+ let eol = del (ws* . nl) "\n"
+
+ (* A complete line that is either just whitespace or a comment that only
+ * contains whitespace *)
+ let empty = [ del (ws* . /#?/ . ws* . nl) "\n" ]
+
+ (* A comment that is not just whitespace. We define it in terms of the
+ * things that are not allowed as part of such a comment:
+ * 1) Starts with whitespace or newline
+ * 2) Ends with whitespace, a backslash or \r
+ * 3) Unescaped newlines
+ *)
+ let comment =
+ let comment_start = del (ws* . "#" . ws* ) "# " in
+ let unesc_eol = /[^\]/ . nl in
+ (* As a complement, the above criteria can be written as
+ let line = any* - ((ws|nl) . any*
+ | any* . (ws|/[\r\\]/)
+ | any* . unesc_eol . any* )? in
+ * Printing this out with 'print_regexp line' and simplifying it while
+ * checking for equality with the ruby-fa bindings, we can write this
+ * as follows: *)
+ let w = /[^\t\n\r \\]/ in
+ let r = /[\r\\]/ in
+ let s = /[\t\r ]/ in
+ let b = "\\\\" in
+ let t = /[\t\n\r ]/ in
+ let line = ((r . s* . w|w|r) . (s|w)* . (b . (t? . (s|w)* ))*|(r.s* )?).w.(s*.w)* in
+ [ label "#comment" . comment_start . store line . eol ]
+
+ let lns = (comment|empty)*
+
+ test [eol] get " \n" = { }
+ test [eol] get " \t\n" = { }
+ test [eol] get "\\\n\n" = { }
+
+
+ test lns get "# \\\r\n \t \\\n\n" = { }
+ test lns get "# x\n" = { "#comment" = "x" }
+ test lns get "# x\\\n\n" = { "#comment" = "x" }
+ test lns get "# \\\r\n \tx \\\n\n" = { "#comment" = "x" }
+ test lns get " \t\\\n# x\n" = { "#comment" = "x" }
+ test lns get "# word\\\n word \n" = { "#comment" = "word\\\n word" }
+ (* Not valid as it is an incomplete 'line' *)
+ test lns get "# x\\\n" = *
--- /dev/null
+/*
+ * dump.c:
+ *
+ * Copyright (C) 2009 Red Hat Inc.
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: David Lutterkort <lutter@redhat.com>
+ */
+
+/*
+ * Example program for dumping (part of) the Augeas tree.
+ *
+ * Run it as 'dump [-n] [pattern] > /tmp/out.txt' to dump all the nodes
+ * matching PATTERN. The PATTERN '/files//descendant::*' dumps all nodes
+ * from files, and '//descendant::*' dumps absolutely everything. If -n is
+ * passed, uses a variable and aug_ns_* functions. Without -n, uses
+ * aug_match and straight aug_get/aug_label/aug_source.
+ *
+ * You might have to set AUGEAS_ROOT and AUGEAS_LENS_LIB to point at the
+ * right things. For example, to run dump against the tree that the Augeas
+ * tests use, and to use the lenses in the checkout, run it as
+ * AUGEAS_ROOT=$PWD/tests/root AUGEAS_LENS_LIB=$PWD/lenses \
+ * dump '//descendant::*'
+ *
+ */
+
+#include <augeas.h>
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <sys/time.h>
+
+/*
+ * Print out information for all nodes matching PATH using aug_match and
+ * then aug_get etc. on each of the matches.
+ */
+static void dump_match(struct augeas *aug, const char *path) {
+ char **matches;
+ int nmatches;
+ int i;
+
+ nmatches = aug_match(aug, path, &matches);
+ if (nmatches < 0) {
+ fprintf(stderr, "aug_match for '%s' failed\n", path);
+ fprintf(stderr, "error: %s\n", aug_error_message(aug));
+ exit(1);
+ }
+
+ fprintf(stderr, "iterating matches\n");
+ fprintf(stderr, "%d matches for %s\n", nmatches, path);
+
+ for (i=0; i < nmatches; i++) {
+ const char *value, *label;
+ char *file;
+
+ aug_get(aug, matches[i], &value);
+ aug_label(aug, matches[i], &label);
+ aug_source(aug, matches[i], &file);
+
+ printf("%s: %s %s %s\n", matches[i], label, value, file);
+ free(file);
+ free(matches[i]);
+ }
+ free(matches);
+}
+
+/*
+ * Print out information for all nodes matching PATH using aug_ns_*
+ * functions
+ */
+static void dump_var(struct augeas *aug, const char *path) {
+ int nmatches;
+ int i;
+
+ /* Define the variable 'matches' to hold all the nodes we are
+ interested in */
+ aug_defvar(aug, "matches", path);
+
+ /* Count how many nodes we have */
+ nmatches = aug_match(aug, "$matches", NULL);
+ if (nmatches < 0) {
+ fprintf(stderr, "aug_match for '%s' failed\n", path);
+ fprintf(stderr, "error: %s\n", aug_error_message(aug));
+ exit(1);
+ }
+
+ fprintf(stderr, "using var and aug_ns_*\n");
+ fprintf(stderr, "%d matches for %s\n", nmatches, path);
+
+ for (i=0; i < nmatches; i++) {
+ const char *value, *label;
+ char *file = NULL;
+
+ /* Get information about the ith node, equivalent to calling
+ * aug_get etc. for "$matches[i]" but much more efficient internally
+ */
+ aug_ns_attr(aug, "matches", i, &value, &label, &file);
+
+ printf("%d: %s %s %s\n", i, label, value, file);
+ free(file);
+ }
+}
+
+static void print_time_taken(const struct timeval *start,
+ const struct timeval *stop) {
+ time_t elapsed = (stop->tv_sec - start->tv_sec)*1000
+ + (stop->tv_usec - start->tv_usec)/1000;
+ fprintf(stderr, "time: %ld ms\n", elapsed);
+}
+
+int main(int argc, char **argv) {
+ int opt;
+ int use_var = 0;
+ const char *pattern = "/files//*";
+ struct timeval stop, start;
+
+ while ((opt = getopt(argc, argv, "n")) != -1) {
+ switch (opt) {
+ case 'n':
+ use_var = 1;
+ break;
+ default:
+ fprintf(stderr, "Usage: %s [-n] [pattern]\n", argv[0]);
+ fprintf(stderr, " without '-n', iterate matches\n");
+ fprintf(stderr, " with '-n', use a variable and aug_ns_*\n");
+ exit(EXIT_FAILURE);
+ break;
+ }
+ }
+
+ struct augeas *aug = aug_init(NULL, NULL, 0);
+
+ if (optind < argc)
+ pattern = argv[optind];
+
+ gettimeofday(&start, NULL);
+ if (use_var) {
+ dump_var(aug, pattern);
+ } else {
+ dump_match(aug, pattern);
+ }
+ gettimeofday(&stop, NULL);
+ print_time_taken(&start, &stop);
+ return 0;
+}
--- /dev/null
+#!/bin/sh
+# create some examples of operations on finite automaton
+
+export PATH=./:$PATH
+
+dest=/tmp/fadot-examples
+mkdir -p $dest
+
+fadot -f $dest/sample.dot -o show "[a-z]*"
+fadot -f $dest/concat.dot -o concat "[a-b]" "[b-c]"
+fadot -f $dest/union.dot -o union "[a-b]" "[b-c]"
+fadot -f $dest/intersect.dot -o intersect "[a-b]" "[b-c]"
+fadot -f $dest/complement.dot -o complement "[a-z]"
+fadot -f $dest/minus.dot -o minus "[a-z]" "[a-c]"
+
+
+for i in $dest/*.dot; do
+    dot -Tpng -o "${i%.dot}.png" "$i"
+done
+
+echo "Example compilation complete. Results are available in directory $dest"
--- /dev/null
+/*
+ * fadot.c: example usage of finite automata library
+ *
+ * Copyright (C) 2009, Francis Giraldeau
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: Francis Giraldeau <francis.giraldeau@usherbrooke.ca>
+ */
+
+/*
+ * The purpose of this example is to show the usage of libfa
+ */
+
+#include <stdio.h>
+#include <unistd.h>
+#include <ctype.h>
+#include <string.h>
+#include <stdlib.h>
+#include <limits.h>
+
+#include <fa.h>
+
+#define UCHAR_NUM (UCHAR_MAX+1)
+
+const char *progname;
+
+static const char *const escape_chars = "\"\a\b\t\n\v\f\r\\";
+static const char *const escape_names = "\"abtnvfr\\";
+
+__attribute__((noreturn))
+static void usage(void) {
+ fprintf(stderr, "\nUsage: %s [OPTIONS] REGEXP...\n", progname);
+ fprintf(stderr, "\nCompile the REGEXPs and apply the operation to them. By default, just print\n");
+ fprintf(stderr, "the minimized regexp.\n");
+ fprintf(stderr, "\nOptions:\n\n");
+ fprintf(stderr, " -o OPERATION one of: show concat union intersect json\n");
+ fprintf(stderr, " complement minus example print\n");
+ fprintf(stderr, " -f DOT_FILE Path of output .dot file\n");
+ fprintf(stderr, " -n do not minimize resulting finite automaton\n");
+
+ exit(EXIT_FAILURE);
+}
+
+int main (int argc, char **argv) {
+
+ opterr = 0;
+
+ int reduce = 1;
+ char *file_output = NULL;
+ const char *operation = NULL;
+ FILE *fd;
+ int c;
+ int nb_regexp = 0;
+
+ progname = argv[0];
+
+ while ((c = getopt (argc, argv, "nhf:o:")) != -1)
+ switch (c)
+ {
+ case 'n':
+ reduce = 0;
+ break;
+ case 'f':
+ file_output = optarg;
+ break;
+ case 'h':
+ usage();
+ break;
+ case 'o':
+ operation = optarg;
+ break;
+ case '?':
+ if (optopt == 'o' || optopt == 'f')
+ fprintf (stderr, "Option -%c requires an argument.\n", optopt);
+ else if (isprint (optopt))
+ fprintf (stderr, "Unknown option `-%c'.\n", optopt);
+ else
+ fprintf (stderr,
+ "Unknown option character `\\x%x'.\n",
+ optopt);
+ usage();
+ break;
+ default:
+ usage();
+ break;
+ }
+
+ //printf ("reduce = %d, file_output = %s, operation = %s\n",
+ // reduce, file_output, operation);
+
+ if (operation == NULL)
+ operation = "show";
+
+ for (int i = optind; i < argc; i++) {
+ nb_regexp++;
+ }
+
+ if (nb_regexp == 0) {
+ fprintf(stderr, "Please specify a regexp to process.\n");
+ usage();
+ }
+
+ struct fa* fa_result = NULL;
+
+ if (!strcmp(operation,"show")) {
+ fa_compile(argv[optind], strlen(argv[optind]), &fa_result);
+ } else if (!strcmp(operation,"concat")) {
+
+ if (nb_regexp < 2) {
+ fprintf(stderr, "Please specify 2 or more regexps to concat\n");
+ return 1;
+ }
+
+ fa_result = fa_make_basic(FA_EPSILON);
+ struct fa* fa_tmp;
+ for (int i = optind; i < argc; i++) {
+ fa_compile(argv[i], strlen(argv[i]), &fa_tmp);
+ fa_result = fa_concat(fa_result, fa_tmp);
+ }
+
+ } else if (!strcmp(operation, "union")) {
+
+ if (nb_regexp < 2) {
+ fprintf(stderr, "Please specify 2 or more regexps to union\n");
+ return 1;
+ }
+
+ fa_result = fa_make_basic(FA_EMPTY);
+
+ struct fa* fa_tmp;
+ for (int i = optind; i < argc; i++) {
+ fa_compile(argv[i], strlen(argv[i]), &fa_tmp);
+ fa_result = fa_union(fa_result, fa_tmp);
+ }
+
+ } else if (!strcmp(operation, "intersect")) {
+
+ if (nb_regexp < 2) {
+ fprintf(stderr, "Please specify 2 or more regexps to intersect\n");
+ return 1;
+ }
+
+ fa_compile(argv[optind], strlen(argv[optind]), &fa_result);
+ struct fa* fa_tmp;
+
+ for (int i = optind+1; i < argc; i++) {
+ fa_compile(argv[i], strlen(argv[i]), &fa_tmp);
+ fa_result = fa_intersect(fa_result, fa_tmp);
+ }
+
+ } else if (!strcmp(operation, "complement")) {
+
+ if (nb_regexp >= 2) {
+ fprintf(stderr, "Please specify one regexp to complement\n");
+ return 1;
+ }
+
+ fa_compile(argv[optind], strlen(argv[optind]), &fa_result);
+ fa_result = fa_complement(fa_result);
+
+ } else if (!strcmp(operation, "minus")) {
+
+ if (nb_regexp != 2) {
+ fprintf(stderr, "Please specify 2 regexps for operation minus\n");
+ return 1;
+ }
+
+ struct fa* fa_tmp1;
+ struct fa* fa_tmp2;
+ fa_compile(argv[optind], strlen(argv[optind]), &fa_tmp1);
+ fa_compile(argv[optind+1], strlen(argv[optind+1]), &fa_tmp2);
+ fa_result = fa_minus(fa_tmp1, fa_tmp2);
+
+ } else if (!strcmp(operation, "example")) {
+
+ if (nb_regexp != 1) {
+ fprintf(stderr, "Please specify one regexp for operation example\n");
+ return 1;
+ }
+
+ char* word = NULL;
+ size_t word_len = 0;
+ fa_compile(argv[optind], strlen(argv[optind]), &fa_result);
+ fa_example(fa_result, &word, &word_len);
+ printf("Example word = %s\n", word);
+
+ } else if (!strcmp(operation, "json")) {
+ if (nb_regexp != 1) {
+ fprintf(stderr, "Please specify one regexp for operation json\n");
+ return 1;
+ }
+
+ fa_compile(argv[optind], strlen(argv[optind]), &fa_result);
+
+ if (reduce) {
+ fa_minimize(fa_result);
+ }
+
+ if (file_output != NULL) {
+ if ((fd = fopen(file_output, "w")) == NULL) {
+ fprintf(stderr, "Error while opening file %s \n", file_output);
+ return 1;
+ }
+
+ fa_json(fd, fa_result);
+ fclose(fd);
+ } else {
+ fa_json(stdout, fa_result);
+ }
+
+ return 0;
+
+ } else if (!strcmp(operation, "print")) {
+ if (nb_regexp != 1) {
+ fprintf(stderr, "Please specify one regexp for operation print\n");
+ return 1;
+ }
+
+ fa_compile(argv[optind], strlen(argv[optind]), &fa_result);
+
+ if (reduce) {
+ fa_minimize(fa_result);
+ }
+
+ struct state *st, *st2;
+ uint32_t num_trans, i;
+ unsigned char begin, end;
+
+ st = fa_state_initial(fa_result);
+
+ printf("%s. Initial state: %p", fa_is_deterministic(fa_result) ? "DFA" : "NFA", st);
+
+ while (st != NULL) {
+ num_trans = fa_state_num_trans(st);
+ printf("\nFrom state %p (final = %s):\n", st, fa_state_is_accepting(st) == 1 ? "true" : "false");
+ for (i = 0; i < num_trans; i++) {
+ if (fa_state_trans(st, i, &st2, &begin, &end) < 0) {
+ fprintf(stderr, "An error occurred while reading a transition.\n");
+ }
+ if (begin == end)
+ printf(" to: %p, label: %d\n", st2, begin);
+ else
+ printf(" to: %p, label: %d-%d\n", st2, begin, end);
+ }
+ st = fa_state_next(st);
+ }
+
+ return 0;
+
+ }
+
+ if (reduce) {
+ fa_minimize(fa_result);
+ }
+
+ if (file_output != NULL) {
+ if ((fd = fopen(file_output, "w")) == NULL) {
+ fprintf(stderr, "Error while opening file %s \n", file_output);
+ return 1;
+ }
+
+ fa_dot(fd, fa_result);
+ fclose(fd);
+ } else {
+ int r;
+ char *rx;
+ size_t rx_len;
+
+ r = fa_as_regexp(fa_result, &rx, &rx_len);
+ if (r < 0) {
+ fprintf(stderr, "Converting FA to regexp failed\n");
+ return 1;
+ }
+
+ for (size_t i=0; i < rx_len; i++) {
+ char *p;
+ if (rx[i] && ((p = strchr(escape_chars, rx[i])) != NULL)) {
+ printf("\\%c", escape_names[p - escape_chars]);
+ } else if (! isprint(rx[i])) {
+ printf("\\%03o", (unsigned char) rx[i]);
+ } else {
+ putchar(rx[i]);
+ }
+ }
+ putchar('\n');
+ free(rx);
+ }
+
+ return 0;
+}
--- /dev/null
+(*
+Module: Access
+ Parses /etc/security/access.conf
+
+Author: Lorenzo Dalrio <lorenzo.dalrio@gmail.com>
+
+About: Reference
+ Some examples of valid entries can be found in access.conf or "man access.conf"
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ Sample usage of this lens in augtool
+
+ * Add a rule to permit login of all users from local sources (tty's, X, cron)
+ > set /files/etc/security/access.conf/access[last()+1] +
+ > set /files/etc/security/access.conf/access[last()]/user ALL
+ > set /files/etc/security/access.conf/access[last()]/origin LOCAL
+
+About: Configuration files
+ This lens applies to /etc/security/access.conf. See <filter>.
+
+About: Examples
+ The <Test_Access> file contains various examples and tests.
+*)
+module Access =
+ autoload xfm
+
+(* Group: Comments and empty lines *)
+(* Variable: comment *)
+let comment = Util.comment
+(* Variable: empty *)
+let empty = Util.empty
+
+(* Group: Useful primitives *)
+(* Variable: colon
+ * this is the standard field separator " : "
+ *)
+let colon = del (Rx.opt_space . ":" . Rx.opt_space) " : "
+
+
+(************************************************************************
+ * Group: ENTRY LINE
+ *************************************************************************)
+(* View: access
+ * Allow (+) or deny (-) access
+ *)
+let access = label "access" . store /[+-]/
+
+(* Variable: identifier_re
+ Regex for user/group identifiers *)
+let identifier_re = /[A-Za-z0-9_.\\-]+/
+
+(* Variable: user_re
+ * Regex for user/netgroup fields
+ *)
+let user_re = identifier_re - /[Ee][Xx][Cc][Ee][Pp][Tt]/
+
+(* View: user
+ * user can be a username, username@hostname or a group
+ *)
+let user = [ label "user"
+ . ( store user_re
+ | store Rx.word . Util.del_str "@"
+ . [ label "host" . store Rx.word ] ) ]
+
+(* View: group
+ * Format is (GROUP)
+ *)
+let group = [ label "group"
+ . Util.del_str "(" . store identifier_re . Util.del_str ")" ]
+
+(* View: netgroup
+ * Format is @NETGROUP[@@NISDOMAIN]
+ *)
+let netgroup =
+ [ label "netgroup" . Util.del_str "@" . store user_re
+ . [ label "nisdomain" . Util.del_str "@@" . store Rx.word ]? ]
+
+(* View: user_list
+ * A list of users or netgroups to apply the rule to
+ *)
+let user_list = Build.opt_list (user|group|netgroup) Sep.space
+
+(* View: origin_list
+ * origin_list can be a single ipaddr/originname/domain/fqdn or a list of those values
+ *)
+let origin_list =
+ let origin_re = Rx.no_spaces - /[Ee][Xx][Cc][Ee][Pp][Tt]/
+ in Build.opt_list [ label "origin" . store origin_re ] Sep.space
+
+(* View: except
+ * The except operator makes it possible to write very compact rules.
+ *)
+let except (lns:lens) = [ label "except" . Sep.space
+ . del /[Ee][Xx][Cc][Ee][Pp][Tt]/ "EXCEPT"
+ . Sep.space . lns ]
+
+(* View: entry
+ * A valid entry line
+ * Definition:
+ * > entry ::= access ':' user ':' origin_list
+ *)
+let entry = [ access . colon
+ . user_list
+ . (except user_list)?
+ . colon
+ . origin_list
+ . (except origin_list)?
+ . Util.eol ]
+
+(************************************************************************
+ * Group: LENS & FILTER
+ *************************************************************************)
+(* View: lns
+ The access.conf lens, any amount of
+ * <empty> lines
+ * <comments>
+ * <entry>
+*)
+let lns = (comment|empty|entry) *
+
+(* Variable: filter *)
+let filter = incl "/etc/security/access.conf"
+
+(* xfm *)
+let xfm = transform lns filter
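+
+(* An illustrative check, not part of the upstream test suite: one permit
+   rule as parsed by <entry> above. The expected tree follows from the
+   <access>, <user> and <origin_list> views. *)
+test lns get "+ : ALL : LOCAL\n" =
+  { "access" = "+"
+    { "user" = "ALL" }
+    { "origin" = "LOCAL" } }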
--- /dev/null
+(*
+Module: ActiveMQ_Conf
+ ActiveMQ / FuseMQ conf module for Augeas
+
+Author: Brian Redbeard <redbeard@dead-city.org>
+
+About: Reference
+ This lens ensures that conf files included in ActiveMQ / FuseMQ are properly
+ handled by Augeas.
+
+About: License
+ This file is licensed under the LGPL License.
+
+About: Lens Usage
+ Sample usage of this lens in augtool:
+
+ * Get your current setup
+ > print /files/etc/activemq.conf
+ ...
+
+ * Change ActiveMQ Home
+ > set /files/etc/activemq.conf/ACTIVEMQ_HOME /usr/share/activemq
+
+ Saving your file:
+
+ > save
+
+About: Configuration files
+ This lens applies to relevant conf files located in /etc/activemq/ and
+ the file /etc/activemq.conf. See <filter>.
+
+*)
+
+module ActiveMQ_Conf =
+ autoload xfm
+
+(* Variable: blank_val *)
+let blank_val = del /^\z/
+
+(* View: entry *)
+let entry =
+ Build.key_value_line Rx.word Sep.space_equal Quote.any_opt
+
+(* View: empty_entry *)
+let empty_entry = Build.key_value_line Rx.word Sep.equal Quote.dquote_opt_nil
+
+(* View: lns *)
+let lns = (Util.empty | Util.comment | entry | empty_entry )*
+
+(* Variable: filter *)
+let filter = incl "/etc/activemq.conf"
+ . incl "/etc/activemq/*"
+ . excl "/etc/activemq/*.xml"
+ . excl "/etc/activemq/jmx.*"
+ . excl "/etc/activemq/jetty-realm.properties"
+ . excl "/etc/activemq/*.ts"
+ . excl "/etc/activemq/*.ks"
+ . excl "/etc/activemq/*.cert"
+ . Util.stdexcl
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: ActiveMQ_XML
+ ActiveMQ / FuseMQ XML module for Augeas
+
+Author: Brian Redbeard <redbeard@dead-city.org>
+
+About: Reference
+ This lens ensures that XML files included in ActiveMQ / FuseMQ are properly
+ handled by Augeas.
+
+About: License
+ This file is licensed under the LGPL License.
+
+About: Lens Usage
+ Sample usage of this lens in augtool:
+
+ * Get your current setup
+ > print /files/etc/activemq/activemq.xml
+ ...
+
+ * Change the broker name
+ > set /files/etc/activemq/activemq.xml/beans/broker/#attribute/brokerName localhost
+
+ Saving your file:
+
+ > save
+
+About: Configuration files
+ This lens applies to relevant XML files located in /etc/activemq/ . See <filter>.
+
+*)
+
+module ActiveMQ_XML =
+ autoload xfm
+
+let lns = Xml.lns
+
+let filter = (incl "/etc/activemq/*.xml")
+ . Util.stdexcl
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: AFS_cellalias
+ Parses AFS configuration file CellAlias
+
+Author: Pat Riehecky <riehecky@fnal.gov>
+
+About: Reference
+ This lens is targeted at the OpenAFS CellAlias file
+
+About: Lens Usage
+ Sample usage of this lens in augtool
+
+ * Add a CellAlias for fnal.gov/files to fnal-files
+ > set /files/usr/vice/etc/CellAlias/target[99] fnal.gov/files
+ > set /files/usr/vice/etc/CellAlias/target[99]/linkname fnal-files
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+*)
+
+module AFS_cellalias =
+ autoload xfm
+
+ (************************************************************************
+ * Group: USEFUL PRIMITIVES
+ *************************************************************************)
+
+ (* Group: Comments and empty lines *)
+
+ (* View: eol *)
+ let eol = Util.eol
+ (* View: comment *)
+ let comment = Util.comment
+ (* View: empty *)
+ let empty = Util.empty
+
+ (* Group: separators *)
+
+ (* View: space
+ * Separation between key and value
+ *)
+ let space = Util.del_ws_spc
+ let target = /[^ \t\n#]+/
+ let linkname = Rx.word
+
+ (************************************************************************
+ * Group: ENTRIES
+ *************************************************************************)
+
+ (* View: entry *)
+ let entry = [ label "target" . store target . space . [ label "linkname" . store linkname . eol ] ]
+
+ (* View: lns *)
+ let lns = (empty | comment | entry)*
+
+ let xfm = transform lns (incl "/usr/vice/etc/CellAlias")
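+
+ (* An illustrative check, not part of the upstream test suite: one alias
+    mapping as parsed by <entry> above. *)
+ test lns get "fnal.gov/files fnal-files\n" =
+   { "target" = "fnal.gov/files"
+     { "linkname" = "fnal-files" } }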
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+(*
+Module: Aliases
+ Parses /etc/aliases
+
+Author: David Lutterkort <lutter@redhat.com>
+
+About: Reference
+ This lens tries to keep as close as possible to `man 5 aliases` where possible.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ See <lns>.
+
+About: Configuration files
+ This lens applies to /etc/aliases.
+
+About: Examples
+ The <Test_Aliases> file contains various examples and tests.
+*)
+
+module Aliases =
+ autoload xfm
+
+ (************************************************************************
+ * Group: USEFUL PRIMITIVES
+ *************************************************************************)
+
+ (* Group: basic tokens *)
+
+ (* Variable: word *)
+ let word = /[^|", \t\n]+/
+ (* Variable: name *)
+ let name = /([^ \t\n#:|@]+|"[^"|\n]*")/ (* " make emacs calm down *)
+
+ (* Variable: command
+ * a command can contain spaces if enclosed in double quotes; the case
+ * without spaces is handled by <word>
+ *)
+ let command = /(\|([^", \t\n]+|"[^"\n]+"))|("\|[^"\n]+")/
+
+ (* Group: Comments and empty lines *)
+
+ (* View: eol *)
+ let eol = Util.eol
+ (* View: comment *)
+ let comment = Util.comment
+ (* View: empty *)
+ let empty = Util.empty
+
+ (* Group: separators *)
+ (* View: colon
+ * Separation between the alias and its destinations
+ *)
+ let colon = del /[ \t]*:[ \t]*/ ":\t"
+ (* View: comma
+ * Separation between multiple destinations
+ *)
+ let comma = del /[ \t]*,[ \t]*(\n[ \t]+)?/ ", "
+
+ (* Group: alias *)
+
+ (* View: destination
+ * Can be either a word (no spaces included) or a command with spaces
+ *)
+ let destination = ( word | command )
+
+ (* View: value_list
+ * List of destinations
+ *)
+ let value_list = Build.opt_list ([ label "value" . store destination]) comma
+
+ (* View: alias
+ * a name with one or more destinations
+ *)
+ let alias = [ seq "alias" .
+ [ label "name" . store name ] . colon .
+ value_list
+ ] . eol
+
+ (* View: lns *)
+ let lns = (comment | empty | alias)*
+
+ let xfm = transform lns (incl "/etc/aliases")
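+
+ (* An illustrative check, not part of the upstream test suite: one alias
+    with two destinations, as parsed by <alias> above. Note that <name>
+    forbids "@" but a destination <word> allows it. *)
+ test lns get "postmaster: root, admin@example.com\n" =
+   { "1"
+     { "name" = "postmaster" }
+     { "value" = "root" }
+     { "value" = "admin@example.com" } }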
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+(*
+Module: Anaconda
+ Parses Anaconda's user interaction configuration files.
+
+Author: Pino Toscano <ptoscano@redhat.com>
+
+About: Reference
+ https://anaconda-installer.readthedocs.io/en/latest/user-interaction-config-file-spec.html
+
+About: Configuration file
+ This lens applies to /etc/sysconfig/anaconda.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+*)
+module Anaconda =
+autoload xfm
+
+let comment = IniFile.comment "#" "#"
+let sep = IniFile.sep "=" "="
+
+let entry = IniFile.entry IniFile.entry_re sep comment
+let title = IniFile.title IniFile.record_re
+let record = IniFile.record title entry
+
+let lns = IniFile.lns record comment
+
+let filter = incl "/etc/sysconfig/anaconda"
+
+let xfm = transform lns filter
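+
+(* An illustrative check, not from the upstream test suite: one section
+   with a single key, using the <IniFile> building blocks above. The key
+   name is a hypothetical example, not taken from the Anaconda spec. *)
+test lns get "[General]\npost_install_tools_disabled=0\n" =
+  { "General"
+    { "post_install_tools_disabled" = "0" } }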
--- /dev/null
+(*
+Module: Anacron
+ Parses /etc/anacrontab
+
+Author: Raphael Pinson <raphink@gmail.com>
+
+About: Reference
+ This lens tries to keep as close as possible to `man 5 anacrontab` where
+ possible.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+
+About: Configuration files
+ This lens applies to /etc/anacrontab. See <filter>.
+
+About: Examples
+ The <Test_Anacron> file contains various examples and tests.
+*)
+
+module Anacron =
+ autoload xfm
+
+(************************************************************************
+ * Group: ENTRIES
+ *************************************************************************)
+
+
+(************************************************************************
+ * View: shellvar
+ * A shell variable in crontab
+ *************************************************************************)
+
+let shellvar = Cron.shellvar
+
+
+(* View: period *)
+let period = [ label "period" . store Rx.integer ]
+
+(* Variable: period_name_re
+ The valid values for <period_name>. Currently only "monthly" *)
+let period_name_re = "monthly"
+
+(************************************************************************
+ * View: period_name
+ * In the format "@keyword"
+ *************************************************************************)
+let period_name = [ label "period_name" . Util.del_str "@"
+ . store period_name_re ]
+
+(************************************************************************
+ * View: delay
+ * The delay for an <entry>
+ *************************************************************************)
+let delay = [ label "delay" . store Rx.integer ]
+
+(************************************************************************
+ * View: job_identifier
+ * The job_identifier for an <entry>
+ *************************************************************************)
+let job_identifier = [ label "job-identifier" . store Rx.word ]
+
+(************************************************************************
+ * View: entry
+ * An anacrontab entry
+ *************************************************************************)
+
+let entry = [ label "entry" . Util.indent
+ . ( period | period_name )
+ . Sep.space . delay
+ . Sep.space . job_identifier
+ . Sep.space . store Rx.space_in . Util.eol ]
+
+
+(*
+ * View: lns
+ * The anacron lens
+ *)
+let lns = ( Util.empty | Util.comment | shellvar | entry )*
+
+
+(* Variable: filter *)
+let filter = incl "/etc/anacrontab"
+
+let xfm = transform lns filter
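+
+(* An illustrative check, not part of the upstream test suite: a @monthly
+   job as parsed by <entry> above; the command is stored as the value of
+   the "entry" node. *)
+test lns get "@monthly 15 backup /usr/local/bin/backup\n" =
+  { "entry" = "/usr/local/bin/backup"
+    { "period_name" = "monthly" }
+    { "delay" = "15" }
+    { "job-identifier" = "backup" } }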
--- /dev/null
+(*
+Module: Approx
+ Parses /etc/approx/approx.conf
+
+Author: David Lutterkort <lutter@redhat.com>
+
+About: Reference
+ This lens tries to keep as close as possible to `man 5 approx.conf` where possible.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ See <lns>.
+
+About: Configuration files
+ This lens applies to /etc/approx/approx.conf.
+
+About: Examples
+ The <Test_Approx> file contains various examples and tests.
+*)
+
+module Approx =
+ autoload xfm
+
+ (* Variable: eol
+ An <Util.eol> *)
+ let eol = Util.eol
+
+ (* Variable: indent
+ An <Util.indent> *)
+ let indent = Util.indent
+
+ (* Variable: key_re *)
+ let key_re = /\$?[A-Za-z0-9_.-]+/
+
+ (* Variable: sep *)
+ let sep = /[ \t]+/
+
+ (* Variable: value_re *)
+ let value_re = /[^ \t\n](.*[^ \t\n])?/
+
+ (* View: comment *)
+ let comment = [ indent . label "#comment" . del /[#;][ \t]*/ "# "
+ . store /([^ \t\n].*[^ \t\n]|[^ \t\n])/ . eol ]
+
+ (* View: empty
+ An <Util.empty> *)
+ let empty = Util.empty
+
+ (* View: kv *)
+ let kv = [ indent . key key_re . del sep " " . store value_re . eol ]
+
+ (* View: lns *)
+ let lns = (empty | comment | kv) *
+
+ (* View: filter *)
+ let filter = incl "/etc/approx/approx.conf"
+ let xfm = transform lns filter
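+
+ (* An illustrative check, not part of the upstream test suite: one
+    key/value pair as parsed by <kv> above; <key_re> permits the
+    leading "$". *)
+ test lns get "$interval 60\n" =
+   { "$interval" = "60" }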
--- /dev/null
+(*
+Module: Apt_Update_Manager
+ Parses files in /etc/update-manager
+
+Author: Raphael Pinson <raphael.pinson@camptocamp.com>
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Configuration files
+ This lens applies to files in /etc/update-manager. See <filter>.
+
+About: Examples
+ The <Test_Apt_Update_Manager> file contains various examples and tests.
+*)
+module Apt_Update_Manager =
+
+autoload xfm
+
+(* View: comment *)
+let comment = IniFile.comment IniFile.comment_re IniFile.comment_default
+
+(* View: sep *)
+let sep = IniFile.sep IniFile.sep_re IniFile.sep_default
+
+(* View: title *)
+let title = IniFile.title Rx.word
+
+(* View: entry *)
+let entry = IniFile.entry Rx.word sep comment
+
+(* View: record *)
+let record = IniFile.record title entry
+
+(* View: lns *)
+let lns = IniFile.lns record comment
+
+(* Variable: filter *)
+let filter = incl "/etc/update-manager/meta-release"
+ . incl "/etc/update-manager/release-upgrades"
+ . incl "/etc/update-manager/release-upgrades.d/*"
+ . Util.stdexcl
+
+let xfm = transform lns filter
--- /dev/null
+(* Module: AptCacherNGSecurity
+
+ Lens for config files like the one found in
+ /etc/apt-cacher-ng/security.conf
+
+
+ About: License
+ Copyright 2013 Erik B. Andersen; this file is licensed under the LGPL v2+.
+*)
+module AptCacherNGSecurity =
+ autoload xfm
+
+ (* Define a Username/PW pair *)
+ let authpair = [ key /[^ \t:\/]*/ . del /:/ ":" . store /[^: \t\n]*/ ]
+
+ (* Define a record. So far as I can tell, the only auth level supported is Admin *)
+ let record = [ key "AdminAuth". del /[ \t]*:[ \t]*/ ": ". authpair . Util.del_str "\n"]
+
+ (* Define the basic lens *)
+ let lns = ( record | Util.empty | Util.comment )*
+
+ let filter = incl "/etc/apt-cacher-ng/security.conf"
+ . Util.stdexcl
+
+ let xfm = transform lns filter
--- /dev/null
+(*
+Module: AptConf
+ Parses /etc/apt/apt.conf and /etc/apt/apt.conf.d/*
+
+Author: Raphael Pinson <raphink@gmail.com>
+
+About: Reference
+ This lens tries to keep as close as possible to `man 5 apt.conf`
+where possible.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Configuration files
+ This lens applies to /etc/apt/apt.conf and /etc/apt/apt.conf.d/*.
+See <filter>.
+*)
+
+
+module AptConf =
+ autoload xfm
+
+(************************************************************************
+ * Group: USEFUL PRIMITIVES
+ *************************************************************************)
+
+(* View: eol
+ An <Util.eol> end of line *)
+let eol = Util.eol
+
+(* View: empty
+ A C-style empty line *)
+let empty = Util.empty_any
+
+(* View: indent
+ An indentation *)
+let indent = Util.indent
+
+(* View: comment_simple
+ A one-line comment, C-style *)
+let comment_simple = Util.comment_c_style_or_hash
+
+(* View: comment_multi
+ A multiline comment, C-style *)
+let comment_multi = Util.comment_multiline
+
+(* View: comment
+ A comment, either <comment_simple> or <comment_multi> *)
+let comment = comment_simple | comment_multi
+
+
+(************************************************************************
+ * Group: ENTRIES
+ *************************************************************************)
+
+(* View: name_re
+ Regex for entry names *)
+let name_re = /[A-Za-z][A-Za-z-]*/
+
+(* View: name_re_colons
+ Regex for entry names with colons *)
+let name_re_colons = /[A-Za-z][A-Za-z:-]*/
+
+
+(* View: entry
+ An apt.conf entry, recursive
+
+ WARNING:
+ This lens exploits a put ambiguity
+ since apt.conf allows for both
+ APT { Clean-Installed { "true" } }
+ and APT::Clean-Installed "true";
+ but we're choosing to map them the same way
+
+ The recursive lens doesn't seem
+ to care and defaults to the first
+ item in the union.
+
+ This is why the APT { Clean-Installed { "true"; } }
+ form is listed first, since it supports
+ all subnodes (which Dpkg::Conf doesn't).
+
+ Exchanging these two expressions in the union
+ makes tests fail since the tree cannot
+ be mapped back.
+
+ This situation results in existing
+ configuration being modified when the
+ associated tree is modified. For example,
+ changing the value of
+ APT::Clean-Installed "true"; to "false"
+ results in
+ APT { Clean-Installed "false"; }
+ (see unit tests)
+ *)
+let rec entry_noeol =
+ let value =
+ Util.del_str "\"" . store /[^"\n]+/
+ . del /";?/ "\";" in
+ let opt_eol = del /[ \t\n]*/ "\n" in
+ let long_eol = del /[ \t]*\n+/ "\n" in
+ let list_elem = [ opt_eol . label "@elem" . value ] in
+ let eol_comment = del /([ \t\n]*\n)?/ "" . comment in
+ [ key name_re . Sep.space . value ]
+ | [ key name_re . del /[ \t\n]*\{/ " {" .
+ ( (opt_eol . entry_noeol) |
+ list_elem |
+ eol_comment
+ )* .
+ del /[ \t\n]*\};?/ "\n};" ]
+ | [ key name_re . Util.del_str "::" . entry_noeol ]
+
+let entry = indent . entry_noeol . eol
+
+
+(* View: include
+ A file inclusion
+ /!\ The manpage is not clear on the syntax *)
+let include =
+ [ indent . key "#include" . Sep.space
+ . store Rx.fspath . eol ]
+
+
+(* View: clear
+ A list of variables to clear
+ /!\ The manpage is not clear on the syntax *)
+let clear =
+ let name = [ label "name" . store name_re_colons ] in
+ [ indent . key "#clear" . Sep.space
+ . Build.opt_list name Sep.space
+ . eol ]
+
+
+(************************************************************************
+ * Group: LENS AND FILTER
+ *************************************************************************)
+
+(* View: lns
+ The apt.conf lens *)
+let lns = (empty|comment|entry|include|clear)*
+
+
+(* View: filter *)
+let filter = incl "/etc/apt/apt.conf"
+ . incl "/etc/apt/apt.conf.d/*"
+ . Util.stdexcl
+
+let xfm = transform lns filter
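+
+(* Illustrative checks, not part of the upstream test suite: the two
+   equivalent forms discussed in the WARNING above map to the same tree. *)
+test lns get "APT::Clean-Installed \"true\";\n" =
+  { "APT" { "Clean-Installed" = "true" } }
+test lns get "APT { Clean-Installed \"true\"; };\n" =
+  { "APT" { "Clean-Installed" = "true" } }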
--- /dev/null
+(*
+Module: AptPreferences
+ Apt/preferences module for Augeas
+
+Author: Raphael Pinson <raphael.pinson@camptocamp.com>
+*)
+
+module AptPreferences =
+autoload xfm
+
+(************************************************************************
+ * Group: Entries
+ ************************************************************************)
+
+(* View: colon *)
+let colon = del /:[ \t]*/ ": "
+
+(* View: pin_gen
+ A generic pin
+
+ Parameters:
+ lbl:string - the label *)
+let pin_gen (lbl:string) = store lbl
+ . [ label lbl . Sep.space . store Rx.no_spaces ]
+
+(* View: pin_keys *)
+let pin_keys =
+ let space_in = store /[^, \r\t\n][^,\n]*[^, \r\t\n]|[^, \t\n\r]/
+ in Build.key_value /[aclnov]/ Sep.equal space_in
+
+(* View: pin_options *)
+let pin_options =
+ let comma = Util.delim ","
+ in store "release" . Sep.space
+ . Build.opt_list pin_keys comma
+
+(* View: version_pin *)
+let version_pin = pin_gen "version"
+
+(* View: origin_pin *)
+let origin_pin = pin_gen "origin"
+
+(* View: pin *)
+let pin =
+ let pin_value = pin_options | version_pin | origin_pin
+ in Build.key_value_line "Pin" colon pin_value
+
+(* View: entries *)
+let entries = Build.key_value_line ("Explanation"|"Package"|"Pin-Priority")
+ colon (store Rx.space_in)
+ | pin
+ | Util.comment
+
+(* View: record *)
+let record = [ seq "record" . entries+ ]
+
+(************************************************************************
+ * Group: Lens
+ ************************************************************************)
+
+(* View: lns *)
+let lns = Util.empty* . (Build.opt_list record Util.eol+ . Util.empty*)?
+
+(* View: filter *)
+let filter = incl "/etc/apt/preferences"
+ . incl "/etc/apt/preferences.d/*"
+ . Util.stdexcl
+
+let xfm = transform lns filter
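+
+(* An illustrative check, not part of the upstream test suite: one pinning
+   record as parsed by <record> above; <pin_options> stores "release" as
+   the value of the "Pin" node and the keys as its children. *)
+test lns get "Package: apt\nPin: release o=Debian,a=stable\nPin-Priority: 990\n" =
+  { "1"
+    { "Package" = "apt" }
+    { "Pin" = "release"
+      { "o" = "Debian" }
+      { "a" = "stable" } }
+    { "Pin-Priority" = "990" } }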
--- /dev/null
+(*
+Module: Aptsources
+ Parsing /etc/apt/sources.list
+*)
+
+module Aptsources =
+ autoload xfm
+
+(************************************************************************
+ * Group: Utility variables/functions
+ ************************************************************************)
+ (* View: sep_ws *)
+ let sep_ws = Sep.space
+
+ (* View: eol *)
+ let eol = Util.del_str "\n"
+
+ (* View: comment *)
+ let comment = Util.comment
+ (* View: empty *)
+ let empty = Util.empty
+
+ (* View: word *)
+ let word = /[^][# \n\t]+/
+
+ (* View: uri *)
+ let uri =
+ let protocol = /[a-z+]+:/
+ in let path = /\/[^] \t]*/
+ in let path_brack = /\[[^]]+\]\/?/
+ in protocol? . path
+ | protocol . path_brack
+
+(************************************************************************
+ * Group: Keywords
+ ************************************************************************)
+ (* View: record *)
+ let record =
+ let option_sep = [ label "operation" . store /[+-]/]? . Sep.equal
+ in let option = Build.key_value /arch|trusted/ option_sep (store Rx.word)
+ in let options = [ label "options"
+ . Util.del_str "[" . Sep.opt_space
+ . Build.opt_list option Sep.space
+ . Sep.opt_space . Util.del_str "]"
+ . sep_ws ]
+ in [ Util.indent . seq "source"
+ . [ label "type" . store word ] . sep_ws
+ . options?
+ . [ label "uri" . store uri ] . sep_ws
+ . [ label "distribution" . store word ]
+ . [ label "component" . sep_ws . store word ]*
+ . del /[ \t]*(#.*)?/ ""
+ . eol ]
+
+(************************************************************************
+ * Group: Lens
+ ************************************************************************)
+ (* View: lns *)
+ let lns = ( comment | empty | record ) *
+
+ (* View: filter *)
+ let filter = (incl "/etc/apt/sources.list")
+ . (incl "/etc/apt/sources.list.d/*")
+ . Util.stdexcl
+
+ let xfm = transform lns filter
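+
+ (* An illustrative check, not part of the upstream test suite: a typical
+    deb line as parsed by <record> above. *)
+ test lns get "deb http://deb.debian.org/debian bookworm main contrib\n" =
+   { "1"
+     { "type" = "deb" }
+     { "uri" = "http://deb.debian.org/debian" }
+     { "distribution" = "bookworm" }
+     { "component" = "main" }
+     { "component" = "contrib" } }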
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+(* Authinfo2 module for Augeas *)
+(* Author: Nicolas Gif <ngf18490@pm.me> *)
+(* Heavily based on DPUT module by Raphael Pinson *)
+(* <raphink@gmail.com> *)
+(* *)
+
+module Authinfo2 =
+ autoload xfm
+
+(************************************************************************
+ * INI File settings
+ *************************************************************************)
+let comment = IniFile.comment IniFile.comment_re "#"
+
+let sep = IniFile.sep IniFile.sep_re ":"
+
+
+(************************************************************************
+ * ENTRY
+ *************************************************************************)
+let entry =
+ IniFile.entry_generic_nocomment (key IniFile.entry_re) sep IniFile.comment_re comment
+
+
+(************************************************************************
+ * TITLE & RECORD
+ *************************************************************************)
+let title = IniFile.title IniFile.record_re
+let record = IniFile.record title entry
+
+
+(************************************************************************
+ * LENS & FILTER
+ *************************************************************************)
+let lns = IniFile.lns record comment
+
+let filter = (incl (Sys.getenv("HOME") . "/.s3ql/authinfo2"))
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Authorized_Keys
+ Parses SSH authorized_keys
+
+Author: Raphael Pinson <raphael.pinson@camptocamp.com>
+
+About: Reference
+ This lens tries to keep as close as possible to `man 5 authorized_keys` where possible.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Configuration files
+ This lens applies to SSH authorized_keys. See <filter>.
+
+About: Examples
+ The <Test_Authorized_Keys> file contains various examples and tests.
+*)
+
+
+module Authorized_Keys =
+
+autoload xfm
+
+(* View: option
+ A key option *)
+let option =
+ let kv_re = "command" | "environment" | "from"
+ | "permitopen" | "principals" | "tunnel"
+ in let flag_re = "cert-authority" | "no-agent-forwarding"
+ | "no-port-forwarding" | "no-pty" | "no-user-rc"
+ | "no-X11-forwarding"
+ in let option_value = Util.del_str "\""
+ . store /((\\\\")?[^\\\n"]*)+/
+ . Util.del_str "\""
+ in Build.key_value kv_re Sep.equal option_value
+ | Build.flag flag_re
+
+(* View: key_options
+ A list of key <option>s *)
+let key_options = [ label "options" . Build.opt_list option Sep.comma ]
+
+(* View: key_type *)
+let key_type =
+ let key_type_re = /ecdsa-sha2-nistp[0-9]+/ | /ssh-[a-z0-9]+/
+ in [ label "type" . store key_type_re ]
+
+(* View: key_comment *)
+let key_comment = [ label "comment" . store Rx.space_in ]
+
+(* View: authorized_key *)
+let authorized_key =
+ [ label "key"
+ . (key_options . Sep.space)?
+ . key_type . Sep.space
+ . store Rx.no_spaces
+ . (Sep.space . key_comment)?
+ . Util.eol ]
+
+(* View: lns
+ The authorized_keys lens
+*)
+let lns = ( Util.empty | Util.comment | authorized_key)*
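+
+(* Example (sketch, not part of the original lens): parsing one key entry.
+   The key material and comment below are illustrative; "= ?" simply
+   prints the resulting tree when run with augparse. *)
+test lns get "ssh-rsa AAAAB3NzaC1yc2E user@example.com\n" = ?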
+
+(* Variable: filter *)
+let filter = incl (Sys.getenv("HOME") . "/.ssh/authorized_keys")
+
+let xfm = transform lns filter
+
--- /dev/null
+(*
+Module: AuthselectPam
+ Parses /etc/authselect/custom/*/*-auth and
+ /etc/authselect/custom/*/postlogin files
+
+Author: Heston Snodgrass <heston.snodgrass@puppet.com> based on pam.aug by David Lutterkort <lutter@redhat.com>
+
+About: Reference
+ This lens tries to keep as close as possible to `man pam.conf`. It
+ supports the authselect templating syntax as
+ can be found in `man authselect-profiles`.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+  To be documented
+
+About: Configuration files
+ This lens also autoloads /etc/authselect/custom/*/*-auth and
+ /etc/authselect/custom/*/postlogin because these files are PAM template
+ files on machines that have authselect custom profiles.
+*)
+module AuthselectPam =
+ autoload xfm
+
+  (* Pam.space does not work for certain parts of the authselect syntax, so we need our own whitespace lens *)
+ let reg_ws = del /([ \t])/ " "
+
+  (* This is close to the same as argument from pam.aug, but curly braces are accounted for *)
+ let argument = /(\[[^]{}#\n]+\]|[^[{#\n \t\\][^#\n \t\\]*)/
+
+ (* The various types of conditional statements that can exist in authselect PAM files *)
+ let authselect_conditional_type = /(continue if|stop if|include if|exclude if|imply|if)/
+
+ (* Basic logical operators supported by authselect templates *)
+ let authselect_logic_stmt = [ reg_ws . key /(and|or|not)/ ]
+
+ (* authselect features inside conditional templates *)
+ let authselect_feature = [ label "feature" . Quote.do_dquote (store /([a-z0-9-]+)/) ]
+
+ (* authselect templates can substitute text if a condition is met. *)
+  (* The syntax for this is `<conditional>:<what to sub on true>|<what to sub on false>` *)
+ (* Both result forms are optional *)
+ let authselect_on_true = [ label "on_true" . Util.del_str ":" . store /([^#{}:|\n\\]+)/ ]
+ let authselect_on_false = [ label "on_false" . Util.del_str "|" . store /([^#{}:|\n\\]+)/ ]
+
+ (* Features in conditionals can be grouped together so that logical operations can be resolved for the entire group *)
+ let authselect_feature_group = [ label "feature_group" . Util.del_str "(" .
+ authselect_feature . authselect_logic_stmt .
+ reg_ws . authselect_feature . (authselect_logic_stmt . reg_ws . authselect_feature)* .
+ Util.del_str ")" ]
+
+ (* Represents a single, full authselect conditional template *)
+ let authselect_conditional = [ Pam.space .
+ Util.del_str "{" .
+ label "authselect_conditional" . store authselect_conditional_type .
+ authselect_logic_stmt* .
+ ( reg_ws . authselect_feature | reg_ws . authselect_feature_group) .
+ authselect_on_true? .
+ authselect_on_false? .
+ Util.del_str "}" ]
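+
+  (* Example (sketch, not part of the original lens): a conditional as it
+     might appear in an authselect PAM template. The feature name and the
+     substitution values are illustrative; "= ?" prints the parsed tree
+     when run with augparse. *)
+  test authselect_conditional get " {if \"with-smartcard\":try_cert_auth|forward_pass}" = ?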
+
+ (* Shared with PamConf *)
+ let record = [ label "optional" . del "-" "-" ]? .
+ [ label "type" . store Pam.types ] .
+ Pam.space .
+ [ label "control" . store Pam.control] .
+ Pam.space .
+ [ label "module" . store Pam.word ] .
+ (authselect_conditional | [ Pam.space . label "argument" . store argument ])* .
+ Pam.comment_or_eol
+
+ let record_svc = [ seq "record" . Pam.indent . record ]
+
+ let lns = ( Pam.empty | Pam.comment | Pam.include | record_svc ) *
+
+ let filter = incl "/etc/authselect/custom/*/*-auth"
+ . incl "/etc/authselect/custom/*/postlogin"
+ . Util.stdexcl
+
+ let xfm = transform lns filter
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+(*
+Module: Automaster
+ Parses autofs' auto.master files
+
+Author: Dominic Cleal <dcleal@redhat.com>
+
+About: Reference
+ See auto.master(5)
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Configuration files
+ This lens applies to /etc/auto.master, auto_master and /etc/auto.master.d/*
+ files.
+
+About: Examples
+ The <Test_Automaster> file contains various examples and tests.
+*)
+
+module Automaster =
+autoload xfm
+
+(************************************************************************
+ * Group: USEFUL PRIMITIVES
+ *************************************************************************)
+
+(* View: eol *)
+let eol = Util.eol
+
+(* View: empty *)
+let empty = Util.empty
+
+(* View: comment *)
+let comment = Util.comment
+
+(* View: mount *)
+let mount = /[^+ \t\n#]+/
+
+(* View: type
+ yp, file, dir, etc., but not ldap *)
+let type = Rx.word - /ldap/
+
+(* View: format
+ sun, hesiod *)
+let format = Rx.word
+
+(* View: name *)
+let name = /[^: \t\n]+/
+
+(* View: host *)
+let host = /[^:# \n\t]+/
+
+(* View: dn *)
+let dn = /[^:# \n\t]+/
+
+(* An option label can't contain comma, comment, equals, or space *)
+let optlabel = /[^,#= \n\t]+/
+let spec = /[^,# \n\t][^ \n\t]*/
+
+(* View: optsep *)
+let optsep = del /[ \t,]+/ ","
+
+(************************************************************************
+ * Group: ENTRIES
+ *************************************************************************)
+
+(* View: map_format *)
+let map_format = [ label "format" . store format ]
+
+(* View: map_type *)
+let map_type = [ label "type" . store type ]
+
+(* View: map_name *)
+let map_name = [ label "map" . store name ]
+
+(* View: map_generic
+ Used for all except LDAP maps which are parsed further *)
+let map_generic = ( map_type . ( Sep.comma . map_format )? . Sep.colon )?
+ . map_name
+
+(* View: map_ldap_name
+ Split up host:dc=foo into host/map nodes *)
+let map_ldap_name = ( [ label "host" . store host ] . Sep.colon )?
+ . [ label "map" . store dn ]
+
+(* View: map_ldap *)
+let map_ldap = [ label "type" . store "ldap" ]
+ . ( Sep.comma . map_format )? . Sep.colon
+ . map_ldap_name
+
+(* View: comma_spc_sep_list
+ Parses options either for filesystems or autofs *)
+let comma_spc_sep_list (l:string) =
+ let value = [ label "value" . Util.del_str "=" . store Rx.neg1 ] in
+ let lns = [ label l . store optlabel . value? ] in
+ Build.opt_list lns optsep
+
+(* View: map_mount
+ Mountpoint and whitespace, followed by the map info *)
+let map_mount = [ seq "map" . store mount . Util.del_ws_tab
+ . ( map_generic | map_ldap )
+ . ( Util.del_ws_spc . comma_spc_sep_list "opt" )?
+ . Util.eol ]
+
+(* map_master
+ "+" to include more master entries and optional whitespace *)
+let map_master = [ seq "map" . store "+" . Util.del_opt_ws ""
+ . ( map_generic | map_ldap )
+ . ( Util.del_ws_spc . comma_spc_sep_list "opt" )?
+ . Util.eol ]
+
+(* View: lns *)
+let lns = ( empty | comment | map_mount | map_master ) *
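+
+(* Example (sketch, not part of the original lens): a typical auto.master
+   line. The mount point, map path and option are illustrative; "= ?"
+   prints the parsed tree when run with augparse. *)
+test lns get "/home /etc/auto.home --timeout=60\n" = ?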
+
+(* Variable: filter *)
+let filter = incl "/etc/auto.master"
+ . incl "/etc/auto_master"
+ . incl "/etc/auto.master.d/*"
+ . Util.stdexcl
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Automounter
+ Parses automounter file based maps
+
+Author: Dominic Cleal <dcleal@redhat.com>
+
+About: Reference
+ See autofs(5)
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Configuration files
+ This lens applies to /etc/auto.*, auto_*, excluding known scripts.
+
+About: Examples
+ The <Test_Automounter> file contains various examples and tests.
+*)
+
+module Automounter =
+autoload xfm
+
+(************************************************************************
+ * Group: USEFUL PRIMITIVES
+ *************************************************************************)
+
+(* View: eol *)
+let eol = Util.eol
+
+(* View: empty *)
+let empty = Util.empty
+
+(* View: comment *)
+let comment = Util.comment
+
+(* View: path *)
+let path = /[^-+#: \t\n][^#: \t\n]*/
+
+(* View: hostname *)
+let hostname = /[^-:#\(\), \n\t][^:#\(\), \n\t]*/
+
+(* An option label can't contain comma, comment, equals, or space *)
+let optlabel = /[^,#:\(\)= \n\t]+/
+let spec = /[^,#:\(\)= \n\t][^ \n\t]*/
+
+(* View: weight *)
+let weight = Rx.integer
+
+(* View: map_name *)
+let map_name = /[^: \t\n]+/
+
+(* View: entry_multimount_sep
+ Separator for multimount entries, permits line spanning with "\" *)
+let entry_multimount_sep = del /[ \t]+(\\\\[ \t]*\n[ \t]+)?/ " "
+
+(************************************************************************
+ * Group: ENTRIES
+ *************************************************************************)
+
+(* View: entry_mkey
+ Key for a map entry *)
+let entry_mkey = store path
+
+(* View: entry_path
+ Path component of an entry location *)
+let entry_path = [ label "path" . store path ]
+
+(* View: entry_host
+ Host component with optional weight of an entry location *)
+let entry_host = [ label "host" . store hostname
+ . ( Util.del_str "(" . [ label "weight"
+ . store weight ] . Util.del_str ")" )? ]
+
+(* View: comma_sep_list
+ Parses options for filesystems *)
+let comma_sep_list (l:string) =
+ let value = [ label "value" . Util.del_str "=" . store Rx.neg1 ] in
+ let lns = [ label l . store optlabel . value? ] in
+ Build.opt_list lns Sep.comma
+
+(* View: entry_options *)
+let entry_options = Util.del_str "-" . comma_sep_list "opt" . Util.del_ws_tab
+
+(* View: entry_location
+ A single location with one or more hosts, and one path *)
+let entry_location = ( entry_host . ( Sep.comma . entry_host )* )?
+ . Sep.colon . entry_path
+
+(* View: entry_locations
+ Multiple locations (each with one or more hosts), separated by spaces *)
+let entry_locations = [ label "location" . counter "location"
+ . [ seq "location" . entry_location ]
+ . ( [ Util.del_ws_spc . seq "location" . entry_location ] )* ]
+
+(* View: entry_multimount
+ Parses one of many mountpoints given for a multimount line *)
+let entry_multimount = entry_mkey . Util.del_ws_tab . entry_options? . entry_locations
+
+(* View: entry_multimounts
+ Parses multiple mountpoints given on an entry line *)
+let entry_multimounts = [ label "mount" . counter "mount"
+ . [ seq "mount" . entry_multimount ]
+ . ( [ entry_multimount_sep . seq "mount" . entry_multimount ] )* ]
+
+(* View: entry
+ A single map entry from start to finish, including multi-mounts *)
+let entry = [ seq "entry" . entry_mkey . Util.del_ws_tab . entry_options?
+ . ( entry_locations | entry_multimounts ) . Util.eol ]
+
+(* View: include
+ An include line starting with a "+" and a map name *)
+let include = [ seq "entry" . store "+" . Util.del_opt_ws ""
+ . [ label "map" . store map_name ] . Util.eol ]
+
+(* View: lns *)
+let lns = ( empty | comment | entry | include ) *
+
+(* Variable: filter
+ Exclude scripts/executable maps from here *)
+let filter = incl "/etc/auto.*"
+ . incl "/etc/auto_*"
+ . excl "/etc/auto.master"
+ . excl "/etc/auto_master"
+ . excl "/etc/auto.net"
+ . excl "/etc/auto.smb"
+ . Util.stdexcl
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Avahi
+ Avahi module for Augeas
+
+ Author: Athir Nuaimi <athir@nuaimi.com>
+
+ avahi-daemon.conf is a standard INI File.
+*)
+
+module Avahi =
+ autoload xfm
+
+(************************************************************************
+ * Group: INI File settings
+ * avahi-daemon.conf only supports "#" as comment and "=" as separator
+ *************************************************************************)
+(* View: comment *)
+let comment = IniFile.comment "#" "#"
+(* View: sep *)
+let sep = IniFile.sep "=" "="
+
+(************************************************************************
+ * Group: Entry
+ *************************************************************************)
+(* View: entry *)
+let entry = IniFile.indented_entry IniFile.entry_re sep comment
+
+(************************************************************************
+ * Group: Record
+ *************************************************************************)
+(* View: title *)
+let title = IniFile.indented_title IniFile.record_re
+(* View: record *)
+let record = IniFile.record title entry
+
+(************************************************************************
+ * Group: Lens and filter
+ *************************************************************************)
+(* View: lns *)
+let lns = IniFile.lns record comment
+
+(* View: filter *)
+let filter = (incl "/etc/avahi/avahi-daemon.conf")
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: BackupPCHosts
+ Parses /etc/backuppc/hosts
+
+About: Reference
+ This lens tries to keep as close as possible to `man backuppc`.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Configuration files
+ This lens applies to /etc/backuppc/hosts. See <filter>.
+*)
+
+module BackupPCHosts =
+autoload xfm
+
+(* View: word *)
+let word = /[^#, \n\t\/]+/
+
+(* View: record *)
+let record =
+ let moreusers = Build.opt_list [ label "moreusers" . store word ] Sep.comma in
+ [ seq "host"
+ . [ label "host" . store word ] . Util.del_ws_tab
+ . [ label "dhcp" . store word ] . Util.del_ws_tab
+ . [ label "user" . store word ]
+ . (Util.del_ws_tab . moreusers)?
+ . (Util.comment|Util.eol) ]
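+
+(* Example (sketch, not part of the original lens): one host line with a
+   main user and two additional users; all names are illustrative. *)
+test record get "pc1\t0\tcraig\tjill,bob\n" = ?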
+
+(* View: lns *)
+let lns = ( Util.empty | Util.comment | record ) *
+
+(* View: filter *)
+let filter = incl "/etc/backuppc/hosts"
+
+let xfm = transform lns filter
--- /dev/null
+(* BB-hosts module for Augeas *)
+(* Author: Raphael Pinson <raphink@gmail.com> *)
+(* *)
+(* Supported : *)
+(* *)
+(* Todo : *)
+(* *)
+
+module BBhosts =
+ autoload xfm
+
+ (* Define useful shortcuts *)
+
+ let eol = Util.eol
+ let eol_no_spc = Util.del_str "\n"
+ let sep_spc = Sep.space
+ let sep_opt_spc = Sep.opt_space
+ let word = store /[^|;# \n\t]+/
+ let value_to_eol = store /[^ \t][^\n]+/
+ let ip = store Rx.ipv4
+ let url = store /https?:[^# \n\t]+/
+ let word_cont = store /[^;# \n\t]+/
+
+ (* Define comments and empty lines *)
+ let comment = Util.comment
+ let empty = Util.empty
+
+
+ (* Define host *)
+ let host_ip = [ label "ip" . ip ]
+ let host_fqdn = [ label "fqdn" . sep_spc . word ]
+
+ let host_test_url = [ label "url" . url ]
+ let host_test_cont (kw:string) = [ store /!?/ . key kw .
+ (Util.del_str ";" .
+ [ label "url" . word_cont ] .
+ (Util.del_str ";" . [ label "keyword" . word ])?
+ )?
+ ]
+
+ (* DOWNTIME=[columns:]day:starttime:endtime:cause[,day:starttime:endtime:cause] *)
+ let host_test_downtime =
+ let probe = [ label "probe" . store (Rx.word | "*") ]
+ in let probes = Build.opt_list probe Sep.comma
+ in let day = [ label "day" . store (Rx.word | "*") ]
+ in let starttime = [ label "starttime" . store Rx.integer ]
+ in let endtime = [ label "endtime" . store Rx.integer ]
+ in let cause = [ label "cause" . Util.del_str "\"" . store /[^"]*/ . Util.del_str "\"" ]
+ in [ key "DOWNTIME" . Sep.equal
+ . (probes . Sep.colon)?
+ . day . Sep.colon
+ . starttime . Sep.colon
+ . endtime . Sep.colon
+ . cause
+ ]
+
+ let host_test_flag_value = [ label "value" . Util.del_str ":"
+ . store Rx.word ]
+
+ let host_test_flag (kw:regexp) = [ store /!?/ . key kw
+ . host_test_flag_value? ]
+
+ let host_test = host_test_cont "cont"
+ | host_test_cont "contInsecure"
+ | host_test_cont "dns"
+ | host_test_flag "BBDISPLAY"
+ | host_test_flag "BBNET"
+ | host_test_flag "BBPAGER"
+ | host_test_flag "CDB"
+ | host_test_flag "GTM"
+ | host_test_flag "XYMON"
+ | host_test_flag "ajp13"
+ | host_test_flag "bbd"
+ | host_test_flag "clamd"
+ | host_test_flag "cupsd"
+ | host_test_flag "front"
+ | host_test_flag /ftps?/
+ | host_test_flag /imap[2-4s]?/
+ | host_test_flag /ldaps?/
+ | host_test_flag /nntps?/
+ | host_test_flag "noconn"
+ | host_test_flag "nocont"
+ | host_test_flag "noping"
+ | host_test_flag "notrends"
+ | host_test_flag "oratns"
+ | host_test_flag /pop-?[2-3]?s?/
+ | host_test_flag "qmqp"
+ | host_test_flag "qmtp"
+ | host_test_flag "rsync"
+ | host_test_flag /smtps?/
+ | host_test_flag "spamd"
+ | host_test_flag /ssh[1-2]?/
+ | host_test_flag /telnets?/
+ | host_test_flag "vnc"
+ | host_test_url
+ | host_test_downtime
+
+ let host_test_list = Build.opt_list host_test sep_spc
+
+ let host_opts = [ label "probes" . sep_spc . Util.del_str "#" . (sep_opt_spc . host_test_list)? ]
+
+ let host = [ label "host" . host_ip . host_fqdn . host_opts . eol ]
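+
+  (* Example (sketch, not part of the original lens): one host with an
+     HTTP check and an ssh probe; the address and URL are illustrative.
+     "= ?" prints the parsed tree when run with augparse. *)
+  test host get "10.0.0.1 www.example.com # http://www.example.com/ ssh\n" = ?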
+
+ (* Define group-compress and group-only *)
+ let group_compress = [ key /group(-compress)?/ . (sep_spc . value_to_eol)? . eol_no_spc .
+ ( comment | empty | host)*
+ ]
+
+ let group_only_col = [ label "col" . word ]
+ let group_only_cols = sep_spc . group_only_col . ( Util.del_str "|" . group_only_col )*
+ let group_only = [ key "group-only" . group_only_cols . sep_spc . value_to_eol . eol_no_spc .
+ ( comment | empty | host)*
+ ]
+
+
+ (* Define page *)
+ let page_title = [ label "title" . sep_spc . value_to_eol ]
+ let page = [ key "page" . sep_spc . word . page_title? . eol_no_spc .
+ ( comment | empty | host )* . ( group_compress | group_only )*
+ ]
+
+
+ (* Define lens *)
+
+ let lns = (comment | empty)* . page*
+
+ let filter = incl "/etc/bb/bb-hosts"
+
+ let xfm = transform lns filter
+
--- /dev/null
+(*
+Module: BootConf
+ Parses (Open)BSD-style /etc/boot.conf
+
+Author: Jasper Lievisse Adriaanse <jasper@jasper.la>
+
+About: Reference
+ This lens is used to parse the second-stage bootstrap configuration
+ file, /etc/boot.conf, as found on OpenBSD. The format is largely
+ machine-independent (MI), with machine-dependent (MD) parts included:
+ http://www.openbsd.org/cgi-bin/man.cgi?query=boot.conf&arch=i386
+
+About: Usage Example
+ To be documented
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Configuration files
+ This lens applies to /etc/boot.conf.
+See <filter>.
+*)
+
+module BootConf =
+autoload xfm
+
+(************************************************************************
+ * Utility variables/functions
+ ************************************************************************)
+
+(* View: comment *)
+let comment = Util.comment
+(* View: empty *)
+let empty = Util.empty
+(* View: eol *)
+let eol = Util.eol
+(* View: fspath *)
+let fspath = Rx.fspath
+(* View: space *)
+let space = Sep.space
+(* View: word *)
+let word = Rx.word
+
+(************************************************************************
+ * View: key_opt_value_line
+ * A subnode with a keyword, an optional part consisting of a separator
+ * and a storing lens, and an end of line
+ *
+ * Parameters:
+ * kw:regexp - the pattern to match as key
+ * sto:lens - the storing lens
+ ************************************************************************)
+let key_opt_value_line (kw:regexp) (sto:lens) =
+ [ key kw . (space . sto)? . eol ]
+
+(************************************************************************
+ * Commands
+ ************************************************************************)
+
+(* View: single_command
+ single command such as 'help' or 'time' *)
+let single_command =
+ let line_re = /help|time|reboot/
+ in [ Util.indent . key line_re . eol ]
+
+(* View: ls
+ ls [directory] *)
+let ls = Build.key_value_line
+ "ls" space (store fspath)
+
+let set_cmd = "addr"
+ | "debug"
+ | "device"
+ | "howto"
+ | "image"
+ | "timeout"
+ | "tty"
+
+(* View: set
+ set [varname [value]] *)
+let set = Build.key_value
+ "set" space
+ (key_opt_value_line set_cmd (store Rx.space_in))
+
+(* View: stty
+ stty [device [speed]] *)
+let stty =
+ let device = [ label "device" . store fspath ]
+ in let speed = [ label "speed" . store Rx.integer ]
+ in key_opt_value_line "stty" (device . (space . speed)?)
+
+(* View: echo
+ echo [args] *)
+let echo = Build.key_value_line
+ "echo" space (store word)
+
+(* View: boot
+ boot [image [-acds]]
+ XXX: the last arguments are not always needed, so make them optional *)
+let boot =
+ let image = [ label "image" . store fspath ]
+ in let arg = [ label "arg" . store word ]
+ in Build.key_value_line "boot" space (image . space . arg)
+
+(* View: machine
+ machine [command] *)
+let machine =
+ let machine_entry = Build.key_value ("comaddr"|"memory")
+ space (store word)
+ | Build.flag ("diskinfo"|"regs")
+ in Build.key_value_line
+ "machine" space
+ (Build.opt_list
+ machine_entry
+ space)
+
+(************************************************************************
+ * Lens
+ ************************************************************************)
+
+(* View: command *)
+let command = boot | echo | ls | machine | set | stty
+
+(* View: lns *)
+let lns = ( empty | comment | command | single_command )*
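+
+(* Example (sketch, not part of the original lens): a boot command with a
+   kernel image and one argument; both values are illustrative. *)
+test lns get "boot /bsd -s\n" = ?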
+
+(* Variable: filter *)
+let filter = (incl "/etc/boot.conf")
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Build
+ Generic functions to build lenses
+
+Author: Raphael Pinson <raphink@gmail.com>
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Reference
+ This file provides generic functions to build Augeas lenses
+*)
+
+
+module Build =
+
+let eol = Util.eol
+
+(************************************************************************
+ * Group: GENERIC CONSTRUCTIONS
+ ************************************************************************)
+
+(************************************************************************
+ * View: brackets
+ * Put a lens inside brackets
+ *
+ * Parameters:
+ * l:lens - the left bracket lens
+ * r: lens - the right bracket lens
+ * lns:lens - the lens to put inside brackets
+ ************************************************************************)
+let brackets (l:lens) (r:lens) (lns:lens) = l . lns . r
+
+
+(************************************************************************
+ * Group: LIST CONSTRUCTIONS
+ ************************************************************************)
+
+(************************************************************************
+ * View: list
+ * Build a list of identical lenses separated with a given separator
+ * (at least 2 elements)
+ *
+ * Parameters:
+ * lns:lens - the lens to repeat in the list
+ * sep:lens - the separator lens, which can be taken from the <Sep> module
+ ************************************************************************)
+let list (lns:lens) (sep:lens) = lns . ( sep . lns )+
+
+
+(************************************************************************
+ * View: opt_list
+ * Same as <list>, but there might be only one element in the list
+ *
+ * Parameters:
+ * lns:lens - the lens to repeat in the list
+ * sep:lens - the separator lens, which can be taken from the <Sep> module
+ ************************************************************************)
+let opt_list (lns:lens) (sep:lens) = lns . ( sep . lns )*
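+
+(************************************************************************
+ * Example (sketch, not part of the original module): <opt_list> applied
+ * to a word-valued item lens with a comma separator; "= ?" prints the
+ * parsed tree when run with augparse.
+ ************************************************************************)
+test (opt_list [ label "item" . store Rx.word ] Sep.comma) get "a,b,c" = ?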
+
+
+(************************************************************************
+ * Group: LABEL OPERATIONS
+ ************************************************************************)
+
+(************************************************************************
+ * View: xchg
+ * Replace a pattern with a different label in the tree,
+ * thus emulating a key while allowing the label to differ
+ * from the matched keyword
+ *
+ * Parameters:
+ * m:regexp - the pattern to match
+ * d:string - the default value when a node is created
+ * l:string - the label to apply for such nodes
+ ************************************************************************)
+let xchg (m:regexp) (d:string) (l:string) = del m d . label l
+
+(************************************************************************
+ * View: xchgs
+ * Same as <xchg>, but the pattern is the default string
+ *
+ * Parameters:
+ * m:string - the string to replace, also used as default
+ * l:string - the label to apply for such nodes
+ ************************************************************************)
+let xchgs (m:string) (l:string) = xchg m m l
+
+
+(************************************************************************
+ * Group: SUBNODE CONSTRUCTIONS
+ ************************************************************************)
+
+(************************************************************************
+ * View: key_value_line
+ * A subnode with a keyword, a separator and a storing lens,
+ * and an end of line
+ *
+ * Parameters:
+ * kw:regexp - the pattern to match as key
+ * sep:lens - the separator lens, which can be taken from the <Sep> module
+ * sto:lens - the storing lens
+ ************************************************************************)
+let key_value_line (kw:regexp) (sep:lens) (sto:lens) =
+ [ key kw . sep . sto . eol ]
+
+(************************************************************************
+ * View: key_value_line_comment
+ * Same as <key_value_line>, but allows a comment at the end of the line
+ *
+ * Parameters:
+ * kw:regexp - the pattern to match as key
+ * sep:lens - the separator lens, which can be taken from the <Sep> module
+ * sto:lens - the storing lens
+ * comment:lens - the comment lens, which can be taken from <Util>
+ ************************************************************************)
+let key_value_line_comment (kw:regexp) (sep:lens) (sto:lens) (comment:lens) =
+ [ key kw . sep . sto . (eol|comment) ]
+
+(************************************************************************
+ * View: key_value
+ * Same as <key_value_line>, but does not end with an end of line
+ *
+ * Parameters:
+ * kw:regexp - the pattern to match as key
+ * sep:lens - the separator lens, which can be taken from the <Sep> module
+ * sto:lens - the storing lens
+ ************************************************************************)
+let key_value (kw: regexp) (sep:lens) (sto:lens) =
+ [ key kw . sep . sto ]
+
+(************************************************************************
+ * View: key_ws_value
+ *
+ * Store a key/value pair where key and value are separated by whitespace
+ * and the value goes to the end of the line. Leading and trailing
+ * whitespace is stripped from the value. The end of line is consumed by
+ * this lens
+ *
+ * Parameters:
+ * kw:regexp - the pattern to match as key
+ ************************************************************************)
+let key_ws_value (kw:regexp) =
+ key_value_line kw Util.del_ws_spc (store Rx.space_in)
+
+(************************************************************************
+ * View: flag
+ * A simple flag subnode, consisting of a single key
+ *
+ * Parameters:
+ * kw:regexp - the pattern to match as key
+ ************************************************************************)
+let flag (kw:regexp) = [ key kw ]
+
+(************************************************************************
+ * View: flag_line
+ * A simple flag line, consisting of a single key
+ *
+ * Parameters:
+ * kw:regexp - the pattern to match as key
+ ************************************************************************)
+let flag_line (kw:regexp) = [ key kw . eol ]
+
+
+(************************************************************************
+ * Group: BLOCK CONSTRUCTIONS
+ ************************************************************************)
+
+(************************************************************************
+ * View: block_generic
+ * A block enclosed in brackets
+ *
+ * Parameters:
+ * entry:lens - the entry to be stored inside the block.
+ * This entry should include <Util.empty>
+ * or its equivalent if necessary.
+ * entry_noindent:lens - the entry to be stored inside the block,
+ * without indentation.
+ * This entry should not include <Util.empty>
+ * entry_noeol:lens - the entry to be stored inside the block,
+ * without eol.
+ * This entry should not include <Util.empty>
+ * entry_noindent_noeol:lens - the entry to be stored inside the block,
+ * without indentation or eol.
+ * This entry should not include <Util.empty>
+ * comment:lens - the comment lens used in the block
+ * comment_noindent:lens - the comment lens used in the block,
+ * without indentation.
+ * ldelim_re:regexp - regexp for the left delimiter
+ * rdelim_re:regexp - regexp for the right delimiter
+ * ldelim_default:string - default value for the left delimiter
+ * rdelim_default:string - default value for the right delimiter
+ ************************************************************************)
+let block_generic
+ (entry:lens) (entry_noindent:lens)
+ (entry_noeol:lens) (entry_noindent_noeol:lens)
+ (comment:lens) (comment_noindent:lens)
+ (ldelim_re:regexp) (rdelim_re:regexp)
+ (ldelim_default:string) (rdelim_default:string) =
+ let block_single = entry_noindent_noeol | comment_noindent
+ in let block_start = entry_noindent | comment_noindent
+ in let block_middle = (entry | comment)*
+ in let block_end = entry_noeol | comment
+ in del ldelim_re ldelim_default
+ . ( ( block_start . block_middle . block_end )
+ | block_single )
+ . del rdelim_re rdelim_default
+
+(************************************************************************
+ * View: block_setdelim
+ * A block enclosed in brackets
+ *
+ * Parameters:
+ * entry:lens - the entry to be stored inside the block.
+ * This entry should not include <Util.empty>,
+ * <Util.comment> or <Util.comment_noindent>,
+ * should not be indented or finish with an eol.
+ * ldelim_re:regexp - regexp for the left delimiter
+ * rdelim_re:regexp - regexp for the right delimiter
+ * ldelim_default:string - default value for the left delimiter
+ * rdelim_default:string - default value for the right delimiter
+ ************************************************************************)
+let block_setdelim (entry:lens)
+ (ldelim_re:regexp)
+ (rdelim_re:regexp)
+ (ldelim_default:string)
+ (rdelim_default:string) =
+ block_generic (Util.empty | Util.indent . entry . eol)
+ (entry . eol) (Util.indent . entry) entry
+ Util.comment Util.comment_noindent
+ ldelim_re rdelim_re
+ ldelim_default rdelim_default
+
+(* Variable: block_ldelim_re *)
+let block_ldelim_re = /[ \t\n]+\{[ \t\n]*/
+
+(* Variable: block_rdelim_re *)
+let block_rdelim_re = /[ \t\n]*\}/
+
+(* Variable: block_ldelim_default *)
+let block_ldelim_default = " {\n"
+
+(* Variable: block_rdelim_default *)
+let block_rdelim_default = "}"
+
+(************************************************************************
+ * View: block
+ * A block enclosed in brackets
+ *
+ * Parameters:
+ * entry:lens - the entry to be stored inside the block.
+ * This entry should not include <Util.empty>,
+ * <Util.comment> or <Util.comment_noindent>,
+ * should not be indented or finish with an eol.
+ ************************************************************************)
+let block (entry:lens) = block_setdelim entry
+ block_ldelim_re block_rdelim_re
+ block_ldelim_default block_rdelim_default
+
+(* Variable: block_ldelim_newlines_re *)
+let block_ldelim_newlines_re = /[ \t\n]*\{([ \t\n]*\n)?/
+
+(* Variable: block_rdelim_newlines_re *)
+let block_rdelim_newlines_re = /[ \t]*\}/
+
+(* Variable: block_ldelim_newlines_default *)
+let block_ldelim_newlines_default = "\n{\n"
+
+(* Variable: block_rdelim_newlines_default *)
+let block_rdelim_newlines_default = "}"
+
+(************************************************************************
+ * View: block_newlines
+ * A block enclosed in brackets, with newlines forced
+ * and indentation defaulting to a tab.
+ *
+ * Parameters:
+ * entry:lens - the entry to be stored inside the block.
+ * This entry should not include <Util.empty>,
+ * <Util.comment> or <Util.comment_noindent>,
+ * should be indented and finish with an eol.
+ * comment:lens - the lens for comments allowed inside the block
+ ************************************************************************)
+let block_newlines (entry:lens) (comment:lens) =
+ del block_ldelim_newlines_re block_ldelim_newlines_default
+ . ((entry | comment) . (Util.empty | entry | comment)*)?
+ . del block_rdelim_newlines_re block_rdelim_newlines_default
+
+(************************************************************************
+ * View: block_newlines_spc
+ * A block enclosed in brackets, with newlines forced
+ * and indentation defaulting to a tab. The opening brace
+ * must be preceded by whitespace.
+ *
+ * Parameters:
+ * entry:lens - the entry to be stored inside the block.
+ * This entry should not include <Util.empty>,
+ * <Util.comment> or <Util.comment_noindent>,
+ * should be indented and finish with an eol.
+ * comment:lens - the lens for comments allowed inside the block
+ ************************************************************************)
+let block_newlines_spc (entry:lens) (comment:lens) =
+ del (/[ \t\n]/ . block_ldelim_newlines_re) block_ldelim_newlines_default
+ . ((entry | comment) . (Util.empty | entry | comment)*)?
+ . del block_rdelim_newlines_re block_rdelim_newlines_default
+
+(************************************************************************
+ * View: named_block
+ * A named <block> enclosed in brackets
+ *
+ * Parameters:
+ * kw:regexp - the regexp for the block name
+ * entry:lens - the entry to be stored inside the block
+ * this entry should not include <Util.empty>
+ ************************************************************************)
+let named_block (kw:regexp) (entry:lens) = [ key kw . block entry . eol ]
+
+
+(************************************************************************
+ * Group: COMBINATORICS
+ ************************************************************************)
+
+(************************************************************************
+ * View: combine_two_ord
+ * Combine two lenses, ensuring first lens is first
+ *
+ * Parameters:
+ * a:lens - the first lens
+ * b:lens - the second lens
+ ************************************************************************)
+let combine_two_ord (a:lens) (b:lens) = a . b
+
+(************************************************************************
+ * View: combine_two
+ * Combine two lenses
+ *
+ * Parameters:
+ * a:lens - the first lens
+ * b:lens - the second lens
+ ************************************************************************)
+let combine_two (a:lens) (b:lens) =
+ combine_two_ord a b | combine_two_ord b a
+
+(************************************************************************
+ * View: combine_two_opt_ord
+ * Combine two lenses optionally, ensuring first lens is first
+ * (a, and optionally b)
+ *
+ * Parameters:
+ * a:lens - the first lens
+ * b:lens - the second lens
+ ************************************************************************)
+let combine_two_opt_ord (a:lens) (b:lens) = a . b?
+
+(************************************************************************
+ * View: combine_two_opt
+ * Combine two lenses optionally
+ * (either a, b, or both, in any order)
+ *
+ * Parameters:
+ * a:lens - the first lens
+ * b:lens - the second lens
+ ************************************************************************)
+let combine_two_opt (a:lens) (b:lens) =
+ combine_two_opt_ord a b | combine_two_opt_ord b a
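+
+(* For illustration (editor's sketch with hypothetical atoms, not part of
+   the original module):
+   (start code)
+   let a = [ key "a" ]
+   let b = [ key "b" ]
+   (end code)
+   (combine_two a b) accepts exactly "ab" or "ba", while
+   (combine_two_opt a b) also accepts "a" or "b" alone. *)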
+
+(************************************************************************
+ * View: combine_three_ord
+ * Combine three lenses, ensuring first lens is first
+ * (a followed by b and c, in either order)
+ *
+ * Parameters:
+ * a:lens - the first lens
+ * b:lens - the second lens
+ * c:lens - the third lens
+ ************************************************************************)
+let combine_three_ord (a:lens) (b:lens) (c:lens) =
+ combine_two_ord a (combine_two b c)
+
+(************************************************************************
+ * View: combine_three
+ * Combine three lenses
+ *
+ * Parameters:
+ * a:lens - the first lens
+ * b:lens - the second lens
+ * c:lens - the third lens
+ ************************************************************************)
+let combine_three (a:lens) (b:lens) (c:lens) =
+ combine_three_ord a b c
+ | combine_three_ord b a c
+ | combine_three_ord c b a
+
+
+(************************************************************************
+ * View: combine_three_opt_ord
+ * Combine three lenses optionally, ensuring first lens is first
+ * (a, optionally followed by b and/or c, in any order)
+ *
+ * Parameters:
+ * a:lens - the first lens
+ * b:lens - the second lens
+ * c:lens - the third lens
+ ************************************************************************)
+let combine_three_opt_ord (a:lens) (b:lens) (c:lens) =
+ combine_two_opt_ord a (combine_two_opt b c)
+
+(************************************************************************
+ * View: combine_three_opt
+ * Combine three lenses optionally
+ * (any one, two, or all three of a, b and c, in any order)
+ *
+ * Parameters:
+ * a:lens - the first lens
+ * b:lens - the second lens
+ * c:lens - the third lens
+ ************************************************************************)
+let combine_three_opt (a:lens) (b:lens) (c:lens) =
+ combine_three_opt_ord a b c
+ | combine_three_opt_ord b a c
+ | combine_three_opt_ord c b a
--- /dev/null
+(*
+Module: Cachefilesd
+ Parses /etc/cachefilesd.conf
+
+Author: Pat Riehecky <riehecky@fnal.gov>
+
+About: Reference
+ This lens tries to keep as close as possible to `man 5 cachefilesd.conf`.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ See <lns>.
+
+About: Configuration files
+ This lens applies to /etc/cachefilesd.conf.
+
+About: Examples
+ The <Test_Cachefilesd> file contains various examples and tests.
+*)
+
+module Cachefilesd =
+ autoload xfm
+
+ (************************************************************************
+ * Group: USEFUL PRIMITIVES
+ *************************************************************************)
+
+ (* Group: Comments and empty lines *)
+
+ (* View: eol *)
+ let eol = Util.eol
+ (* View: comment *)
+ let comment = Util.comment
+ (* View: empty *)
+ let empty = Util.empty
+
+ (* Group: separators *)
+
+ (* View: space
+ * Separation between key and value
+ *)
+ let space = Util.del_ws_spc
+
+ (* View: colon
+ * Separation between selinux attributes
+ *)
+ let colon = Sep.colon
+
+ (* Group: entries *)
+
+ (* View: entry_key
+ * The key for an entry in the config file
+ *)
+ let entry_key = Rx.word
+
+ (* View: entry_value
+ * The value for an entry may contain all sorts of things
+ *)
+ let entry_value = /[A-Za-z0-9_.-:%]+/
+
+ (* View: nocull
+ * The nocull key has different syntax than the rest
+ *)
+ let nocull = /nocull/i
+
+ (* Group: config *)
+
+ (* View: cacheconfig
+ * This is a simple "key value" setup
+ *)
+ let cacheconfig = [ key (entry_key - nocull) . space
+ . store entry_value . eol ]
+
+ (* View: nocull_entry
+ * The nocull key is either present (and therefore active)
+ * or absent (and therefore not active)
+ *)
+ let nocull_entry = [ key nocull . eol ]
+
+ (* View: lns *)
+ let lns = (empty | comment | cacheconfig | nocull_entry)*
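+
+  (* Illustrative check (editor's sketch, not part of the upstream test
+     suite; the path below is an example value):
+     (start code)
+     test Cachefilesd.lns get "dir /var/cache/fscache\nnocull\n" =
+       { "dir" = "/var/cache/fscache" }
+       { "nocull" }
+     (end code)
+  *)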
+
+ let xfm = transform lns (incl "/etc/cachefilesd.conf")
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+(*
+Module: Carbon
+ Parses Carbon's configuration files
+
+Author: Marc Fournier <marc.fournier@camptocamp.com>
+
+About: Reference
+ This lens is based on the conf/*.conf.example files from the Carbon
+ package.
+
+About: Configuration files
+ This lens applies to most files in /etc/carbon/. See <filter>.
+ NB: whitelist.conf and blacklist.conf use a different syntax. This lens
+ doesn't support them.
+
+About: Usage Example
+(start code)
+ $ augtool
+ augtool> ls /files/etc/carbon/carbon.conf/
+ cache/ = (none)
+ relay/ = (none)
+ aggregator/ = (none)
+
+ augtool> get /files/etc/carbon/carbon.conf/cache/ENABLE_UDP_LISTENER
+ /files/etc/carbon/carbon.conf/cache/ENABLE_UDP_LISTENER = False
+
+ augtool> set /files/etc/carbon/carbon.conf/cache/ENABLE_UDP_LISTENER True
+ augtool> save
+ Saved 1 file(s)
+(end code)
+ The <Test_Carbon> file also contains various examples.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+*)
+module Carbon =
+autoload xfm
+
+let comment = IniFile.comment "#" "#"
+let sep = IniFile.sep "=" "="
+
+let entry = IniFile.entry IniFile.entry_re sep comment
+let title = IniFile.title IniFile.record_re
+let record = IniFile.record title entry
+
+let lns = IniFile.lns record comment
+
+let filter = incl "/etc/carbon/carbon.conf"
+ . incl "/etc/carbon/relay-rules.conf"
+ . incl "/etc/carbon/rewrite-rules.conf"
+ . incl "/etc/carbon/storage-aggregation.conf"
+ . incl "/etc/carbon/storage-schemas.conf"
+
+let xfm = transform lns filter
--- /dev/null
+(* Ceph module for Augeas
+ Author: Pavel Chechetin <pchechetin@mirantis.com>
+
+ ceph.conf is a standard INI file that allows whitespace in section titles.
+*)
+
+
+module Ceph =
+ autoload xfm
+
+let comment = IniFile.comment IniFile.comment_re IniFile.comment_default
+let sep = IniFile.sep IniFile.sep_re IniFile.sep_default
+
+let entry_re = /[A-Za-z0-9_.-][A-Za-z0-9 _.-]*[A-Za-z0-9_.-]/
+
+let entry = IniFile.indented_entry entry_re sep comment
+
+let title = IniFile.indented_title IniFile.record_re
+let record = IniFile.record title entry
+
+let lns = IniFile.lns record comment
+
+let filter = (incl "/etc/ceph/ceph.conf")
+ . (incl (Sys.getenv("HOME") . "/.ceph/config"))
+
+let xfm = transform lns filter
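+
+(* Illustrative check (editor's sketch, not from the upstream test suite):
+   (start code)
+   test Ceph.lns get "[global]\nauth cluster required = cephx\n" =
+     { "global" { "auth cluster required" = "cephx" } }
+   (end code)
+*)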
--- /dev/null
+(*
+Module: Cgconfig
+ Parses /etc/cgconfig.conf
+
+Author:
+ Ivana Hutarova Varekova <varekova@redhat.com>
+ Raphael Pinson <raphink@gmail.com>
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ Sample usage of this lens in augtool
+ * print all mounted cgroups
+ print /files/etc/cgconfig.conf/mount
+
+About: Configuration files
+ This lens applies to /etc/cgconfig.conf. See <filter>.
+ *)
+
+module Cgconfig =
+ autoload xfm
+
+ let indent = Util.indent
+ let eol = Util.eol
+ let comment = Util.comment
+ let empty = Util.empty
+
+ let id = /[a-zA-Z0-9_\/.-]+/
+ let name = /[^#= \n\t{}\/]+/
+ let cont_name = /(cpuacct|cpu|devices|ns|cpuset|memory|freezer|net_cls|blkio|hugetlb|perf_event)/
+ let role_name = /(admin|task)/
+ let id_name = /(uid|gid|fperm|dperm)/
+ let address = /[^#; \n\t{}]+/
+ let qaddress = address|/"[^#;"\n\t{}]+"/
+
+ let lbracket = del /[ \t\n]*\{/ " {"
+ let rbracket = del /[ \t]*\}/ "}"
+ let eq = indent . Util.del_str "=" . indent
+
+(******************************************
+ * Function to deal with abc=def; entries
+ ******************************************)
+
+ let key_value (key_rx:regexp) (val_rx:regexp) =
+ [ indent . key key_rx . eq . store val_rx
+ . indent . Util.del_str ";" ]
+
+ (* Function to deal with bracketed entries *)
+ let brack_entry_base (lnsa:lens) (lnsb:lens) =
+ [ indent . lnsa . lbracket . lnsb . rbracket ]
+
+ let brack_entry_key (kw:regexp) (lns:lens) =
+ let lnsa = key kw in
+ brack_entry_base lnsa lns
+
+ let brack_entry (kw:regexp) (lns:lens) =
+ let full_lns = (lns | comment | empty)* in
+ brack_entry_key kw full_lns
+
+(******************************************
+ * control groups
+ ******************************************)
+
+ let permission_setting = key_value id_name address
+
+(* task setting *)
+ let t_info = brack_entry "task" permission_setting
+
+(* admin setting *)
+ let a_info = brack_entry "admin" permission_setting
+
+(* permissions setting *)
+ let perm_info =
+ let ce = (comment|empty)* in
+ let perm_info_lns = ce .
+ ((t_info . ce . (a_info . ce)?)
+ |(a_info . ce . (t_info . ce)?))? in
+ brack_entry_key "perm" perm_info_lns
+
+ let variable_setting = key_value name qaddress
+
+(* controllers setting *)
+ let controller_info =
+ let lnsa = label "controller" . store cont_name in
+ let lnsb = ( variable_setting | comment | empty ) * in
+ brack_entry_base lnsa lnsb
+
+(* group { ... } *)
+ let group_data =
+ let lnsa = key "group" . Util.del_ws_spc . store id in
+ let lnsb = ( perm_info | controller_info | comment | empty )* in
+ brack_entry_base lnsa lnsb
+
+
+(*************************************************
+ * mount point
+ *************************************************)
+
+(* controller = mount_point; *)
+ let mount_point = key_value name address
+
+(* mount { .... } *)
+ let mount_data = brack_entry "mount" mount_point
+
+
+(****************************************************
+ * namespace
+ ****************************************************)
+
+(* controller = cgroup; *)
+ let namespace_instance = key_value name address
+
+
+(* namespace { .... } *)
+ let namespace = brack_entry "namespace" namespace_instance
+
+ let lns = ( comment | empty | mount_data | group_data | namespace )*
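+
+ (* Illustrative check (editor's sketch, not from the upstream test suite;
+    the controller and mount point below are example values):
+    (start code)
+    test Cgconfig.lns get "mount { cpuacct = /cgroup/cpuacct; }" =
+      { "mount" { "cpuacct" = "/cgroup/cpuacct" } }
+    (end code)
+ *)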
+
+ let xfm = transform lns (incl "/etc/cgconfig.conf")
--- /dev/null
+(*
+Module: Cgrules
+ Parses /etc/cgrules.conf
+
+Author:
+ Raphael Pinson <raphink@gmail.com>
+ Ivana Hutarova Varekova <varekova@redhat.com>
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ Sample usage of this lens in augtool:
+ * print all rules
+ print /files/etc/cgrules.conf
+
+About: Configuration files
+ This lens applies to /etc/cgrules.conf. See <filter>.
+ *)
+
+module Cgrules =
+ autoload xfm
+
+(************************************************************************
+ * Group: USEFUL PRIMITIVES
+ *************************************************************************)
+
+(* Group: Separators *)
+(* Variable: ws *)
+ let ws = del /[ \t]+/ " "
+
+(* Group: Comments and empty lines *)
+(* Variable: eol *)
+ let eol = Util.eol
+
+(* Variable: comment *)
+ let comment = Util.comment
+
+(* Variable: empty *)
+ let empty = Util.empty
+
+(* Group: Generic primitive definitions *)
+(* Variable: name *)
+ let name = /[^@%# \t\n][^ \t\n]*/
+(* Variable: ctrl_key *)
+ let ctrl_key = /[^ \t\n\/]+/
+(* Variable: ctrl_value *)
+ let ctrl_value = /[^ \t\n]+/
+
+(************************************************************************
+ * Group: CONTROLLER
+ *************************************************************************)
+
+(* Variable: controller *)
+let controller = ws . [ key ctrl_key . ws . store ctrl_value ]
+
+let more_controller = Util.del_str "%" . controller . eol
+
+(************************************************************************
+ * Group: RECORDS
+ *************************************************************************)
+
+let generic_record (lbl:string) (lns:lens) =
+ [ label lbl . lns
+ . controller . eol
+ . more_controller* ]
+
+(* Variable: user_record *)
+let user_record = generic_record "user" (store name)
+
+(* Variable: group_record *)
+let group_record = generic_record "group" (Util.del_str "@" . store name)
+
+(************************************************************************
+ * Group: LENS & FILTER
+ *************************************************************************)
+
+(* View: lns
+ The main lens, any amount of
+ * <empty> lines
+ * <comment>
+ * <user_record>
+ * <group_record>
+*)
+let lns = ( empty | comment | user_record | group_record )*
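+
+(* Illustrative check (editor's sketch, not from the upstream test suite;
+   the user and destination below are example values):
+   (start code)
+   test Cgrules.lns get "peter cpu test1/\n" =
+     { "user" = "peter" { "cpu" = "test1/" } }
+   (end code)
+*)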
+
+let xfm = transform lns (incl "/etc/cgrules.conf")
--- /dev/null
+(*
+Module: Channels
+ Parses channels.conf files
+
+Author: Raphael Pinson <raphink@gmail.com>
+
+About: Reference
+ See http://linuxtv.org/vdrwiki/index.php/Syntax_of_channels.conf
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Configuration files
+ This lens applies to channels.conf files.
+
+About: Examples
+ The <Test_Channels> file contains various examples and tests.
+*)
+
+module Channels =
+
+(************************************************************************
+ * Group: USEFUL PRIMITIVES
+ *************************************************************************)
+
+(* View: eol *)
+let eol = Util.eol
+
+(* View: comment *)
+let comment = Util.comment_generic /;[ \t]*/ "; "
+
+
+(* View: equal *)
+let equal = Sep.equal
+
+(* View: colon *)
+let colon = Sep.colon
+
+(* View: comma *)
+let comma = Sep.comma
+
+(* View: semicol *)
+let semicol = Util.del_str ";"
+
+(* View: plus *)
+let plus = Util.del_str "+"
+
+(* View: arroba *)
+let arroba = Util.del_str "@"
+
+(* View: no_colon *)
+let no_colon = /[^: \t\n][^:\n]*[^: \t\n]|[^:\n]/
+
+(* View: no_semicolon *)
+let no_semicolon = /[^;\n]+/
+
+
+(************************************************************************
+ * Group: FUNCTIONS
+ *************************************************************************)
+
+(* View: field
+ A generic field *)
+let field (name:string) (sto:regexp) = [ label name . store sto ]
+
+(* View: field_no_colon
+ A <field> storing <no_colon> *)
+let field_no_colon (name:string) = field name no_colon
+
+(* View: field_int
+ A <field> storing <Rx.integer> *)
+let field_int (name:string) = field name Rx.integer
+
+(* View: field_word
+ A <field> storing <Rx.word> *)
+let field_word (name:string) = field name Rx.word
+
+
+(************************************************************************
+ * Group: ENTRIES
+ *************************************************************************)
+
+(* View: vpid *)
+let vpid =
+ let codec =
+ [ equal . label "codec" . store Rx.integer ]
+ in let vpid_entry (lbl:string) =
+ [ label lbl . store Rx.integer . codec? ]
+ in vpid_entry "vpid"
+ . ( plus . vpid_entry "vpid_pcr" )?
+
+
+(* View: langs *)
+let langs =
+ let lang =
+ [ label "lang" . store Rx.word ]
+ in Build.opt_list lang plus
+
+
+(* View: apid *)
+let apid =
+ let codec =
+ [ arroba . label "codec" . store Rx.integer ]
+ in let options =
+ equal . ( (langs . codec?) | codec )
+ in let apid_entry (lbl:string) =
+ [ label lbl . store Rx.integer . options? ]
+ in Build.opt_list (apid_entry "apid") comma
+ . ( semicol
+ . Build.opt_list (apid_entry "apid_dolby") comma )?
+
+(* View: tpid *)
+let tpid =
+ let tpid_bylang =
+ [ label "tpid_bylang" . store Rx.integer
+ . (equal . langs)? ]
+ in field_int "tpid"
+ . ( semicol . Build.opt_list tpid_bylang comma )?
+
+(* View: caid *)
+let caid =
+ let caid_entry =
+ [ label "caid" . store Rx.word ]
+ in Build.opt_list caid_entry comma
+
+(* View: entry *)
+let entry = [ label "entry" . store no_semicolon
+ . (semicol . field_no_colon "provider")? . colon
+ . field_int "frequency" . colon
+ . field_word "parameter" . colon
+ . field_word "signal_source" . colon
+ . field_int "symbol_rate" . colon
+ . vpid . colon
+ . apid . colon
+ . tpid . colon
+ . caid . colon
+ . field_int "sid" . colon
+ . field_int "nid" . colon
+ . field_int "tid" . colon
+ . field_int "rid" . eol ]
+
+(* View: entry_or_comment *)
+let entry_or_comment = entry | comment
+
+(* View: group *)
+let group =
+ [ Util.del_str ":" . label "group"
+ . store no_colon . eol
+ . entry_or_comment* ]
+
+(* View: lns *)
+let lns = entry_or_comment* . group*
--- /dev/null
+(*
+Module: Chrony
+ Parses the chrony config file
+
+Author: Pat Riehecky <riehecky@fnal.gov>
+
+About: Reference
+ This lens tries to keep as close as possible to chrony config syntax
+
+ See http://chrony.tuxfamily.org/manual.html#Configuration-file
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Configuration files
+ This lens applies to /etc/chrony.conf
+
+ See <filter>.
+*)
+
+module Chrony =
+ autoload xfm
+
+(************************************************************************
+ * Group: Import provided expressions
+ ************************************************************************)
+ (* View: empty *)
+ let empty = Util.empty
+
+ (* View: eol *)
+ let eol = Util.eol
+
+ (* View: space *)
+ let space = Sep.space
+
+ (* Variable: email_addr *)
+ let email_addr = Rx.email_addr
+
+ (* Variable: word *)
+ let word = Rx.word
+
+ (* Variable: integer *)
+ let integer = Rx.relinteger
+
+ (* Variable: decimal *)
+ let decimal = Rx.reldecimal
+
+ (* Variable: ip *)
+ let ip = Rx.ip
+
+ (* Variable: path *)
+ let path = Rx.fspath
+
+(************************************************************************
+ * Group: Create required expressions
+ ************************************************************************)
+ (* Variable: hex *)
+ let hex = /[0-9a-fA-F]+/
+
+ (* Variable: number *)
+ let number = integer | decimal | decimal . /[eE]/ . integer | hex
+
+ (* Variable: address_re *)
+ let address_re = ip | Rx.hostname
+
+ (*
+ View: comment
+ from 4.2.1 of the upstream doc
+ Chrony comments start with: ! ; # or % and must be on their own line
+ *)
+ let comment = Util.comment_generic /[ \t]*[!;#%][ \t]*/ "# "
+
+ (* Variable: no_space
+ No spaces or comment characters
+ *)
+ let no_space = /[^ \t\r\n!;#%]+/
+
+ (* Variable: cmd_options
+ Server/Peer/Pool options with values
+ *)
+ let cmd_options = "asymmetry"
+ | "certset"
+ | "extfield"
+ | "filter"
+ | "key"
+ | /maxdelay((dev)?ratio)?/
+ | /(min|max)poll/
+ | /(min|max)samples/
+ | "maxsources"
+ | "mindelay"
+ | "offset"
+ | "polltarget"
+ | "port"
+ | "presend"
+ | "version"
+
+ (* Variable: cmd_flags
+ Server/Peer/Pool options without values
+ *)
+ let cmd_flags = "auto_offline"|"iburst"|"noselect"|"offline"|"prefer"
+ |"copy"|"require"|"trust"|"xleave"|"burst"|"nts"
+
+ (* Variable: ntp_source
+ Server/Peer/Pool key names
+ *)
+ let ntp_source = "server"|"peer"|"pool"
+
+ (* Variable: allowdeny_types
+ Key names for access configuration
+ *)
+ let allowdeny_types = "allow"|"deny"|"cmdallow"|"cmddeny"
+
+ (* Variable: hwtimestamp_options
+ HW timestamping options with values
+ *)
+ let hwtimestamp_options = "minpoll"|"precision"|"rxcomp"|"txcomp"
+ |"minsamples"|"maxsamples"|"rxfilter"
+
+ (* Variable: hwtimestamp_flags
+ HW timestamping options without values
+ *)
+ let hwtimestamp_flags = "nocrossts"
+
+ (* Variable: local_options
+ local options with values
+ *)
+ let local_options = "stratum"|"distance"
+
+ (* Variable: local_flags
+ local options without values
+ *)
+ let local_flags = "orphan"
+
+ (* Variable: ratelimit_options
+ Rate limiting options with values
+ *)
+ let ratelimit_options = "interval"|"burst"|"leak"
+
+ (* Variable: refclock_options
+ refclock options with values
+ *)
+ let refclock_options = "refid"|"lock"|"poll"|"dpoll"|"filter"|"rate"
+ |"minsamples"|"maxsamples"|"offset"|"delay"
+ |"precision"|"maxdispersion"|"stratum"|"width"
+
+ (* Variable: refclock_flags
+ refclock options without values
+ *)
+ let refclock_flags = "noselect"|"pps"|"prefer"|"require"|"tai"|"trust"
+
+ (* Variable: flags
+ Options without values
+ *)
+ let flags = "dumponexit"
+ | "generatecommandkey"
+ | "lock_all"
+ | "manual"
+ | "noclientlog"
+ | "nosystemcert"
+ | "rtconutc"
+ | "rtcsync"
+
+ (* Variable: log_flags
+ log has a specific options list
+ *)
+ let log_flags = "measurements"|"rawmeasurements"|"refclocks"|"rtc"
+ |"statistics"|"tempcomp"|"tracking"
+
+ (* Variable: simple_keys
+ Options with single values
+ *)
+ let simple_keys = "acquisitionport" | "authselectmode" | "bindacqaddress"
+ | "bindaddress" | "bindcmdaddress" | "bindacqdevice"
+ | "bindcmddevice" | "binddevice" | "clientloglimit"
+ | "clockprecision" | "combinelimit" | "commandkey"
+ | "cmdport" | "corrtimeratio" | "driftfile"
+ | "dscp"
+ | "dumpdir" | "hwclockfile" | "include" | "keyfile"
+ | "leapsecmode" | "leapsectz" | "linux_freq_scale"
+ | "linux_hz" | "logbanner" | "logchange" | "logdir"
+ | "maxclockerror" | "maxdistance" | "maxdrift"
+ | "maxjitter" | "maxsamples" | "maxslewrate"
+ | "maxntsconnections"
+ | "maxupdateskew" | "minsamples" | "minsources"
+ | "nocerttimecheck" | "ntsdumpdir" | "ntsntpserver"
+ | "ntsport" | "ntsprocesses" | "ntsrefresh" | "ntsrotate"
+ | "ntsservercert" | "ntsserverkey" | "ntstrustedcerts"
+ | "ntpsigndsocket" | "pidfile" | "ptpport"
+ | "port" | "reselectdist" | "rtcautotrim" | "rtcdevice"
+ | "rtcfile" | "sched_priority" | "stratumweight" | "user"
+
+(************************************************************************
+ * Group: Make some sub-lenses for use in later lenses
+ ************************************************************************)
+ (* View: host_flags *)
+ let host_flags = [ space . key cmd_flags ]
+ (* View: host_options *)
+ let host_options = [ space . key cmd_options . space . store number ]
+ (* View: log_flag_list *)
+ let log_flag_list = [ space . key log_flags ]
+ (* View: store_address *)
+ let store_address = [ label "address" . store address_re ]
+
+(************************************************************************
+ * Group: Lenses for parsing out sections
+ ************************************************************************)
+ (* View: all_flags
+ options without any arguments
+ *)
+ let all_flags = [ Util.indent . key flags . eol ]
+
+ (* View: kv
+ options with only one arg can be directly mapped to key = value
+ *)
+ let kv = [ Util.indent . key simple_keys . space . (store no_space) . eol ]
+
+ (* Property: Options with multiple values
+
+ Each of these gets their own parsing block
+ - server|peer|pool <address> <options>
+ - allow|deny|cmdallow|cmddeny [all] [<address[/subnet]>]
+ - log <options>
+ - broadcast <interval> <address> <optional port>
+ - fallbackdrift <min> <max>
+ - hwtimestamp <interface> <options>
+ - initstepslew <threshold> <addr> <optional extra addrs>
+ - local <options>
+ - mailonchange <emailaddress> <threshold>
+ - makestep <threshold> <limit>
+ - maxchange <threshold> <delay> <limit>
+ - ratelimit|cmdratelimit|ntsratelimit <options>
+ - refclock <driver> <parameter> <options>
+ - smoothtime <maxfreq> <maxwander> <options>
+ - tempcomp <sensorfile> <interval> (<t0> <k0> <k1> <k2> | <pointfile> )
+ - confdir|sourcedir <directories>
+ *)
+
+ (* View: host_list
+ Find all NTP sources and their flags/options
+ *)
+ let host_list = [ Util.indent . key ntp_source
+ . space . store address_re
+ . ( host_flags | host_options )*
+ . eol ]
+
+ (* View: allowdeny
+ allow/deny/cmdallow/cmddeny has a specific syntax
+ *)
+ let allowdeny = [ Util.indent . key allowdeny_types
+ . [ space . key "all" ]?
+ . ( space . store ( no_space - "all" ) )?
+ . eol ]
+
+ (* View: log_list
+ log has a specific options list
+ *)
+ let log_list = [ Util.indent . key "log" . log_flag_list+ . eol ]
+
+ (* View: bcast
+ broadcast has specific syntax
+ *)
+ let bcast = [ Util.indent . key "broadcast"
+ . space . [ label "interval" . store integer ]
+ . space . store_address
+ . ( space . [ label "port" . store integer ] )?
+ . eol ]
+
+ (* View: dir_list
+ confdir and sourcedir have specific syntax
+ *)
+ let dir_list = [ Util.indent . key /(conf|source)dir/
+ . [ label "directory" . space . store no_space ]+
+ . eol ]
+
+ (* View: fdrift
+ fallbackdrift has specific syntax
+ *)
+ let fdrift = [ Util.indent . key "fallbackdrift"
+ . space . [ label "min" . store integer ]
+ . space . [ label "max" . store integer ]
+ . eol ]
+
+ (* View: hwtimestamp
+ hwtimestamp has specific syntax
+ *)
+ let hwtimestamp = [ Util.indent . key "hwtimestamp"
+ . space . [ label "interface" . store no_space ]
+ . ( space . ( [ key hwtimestamp_flags ]
+ | [ key hwtimestamp_options . space
+ . store no_space ] )
+ )*
+ . eol ]
+
+ (* View: istepslew
+ initstepslew has specific syntax
+ *)
+ let istepslew = [ Util.indent . key "initstepslew"
+ . space . [ label "threshold" . store number ]
+ . ( space . store_address )+
+ . eol ]
+
+ (* View: local
+ local has specific syntax
+ *)
+ let local = [ Util.indent . key "local"
+ . ( space . ( [ key local_flags ]
+ | [ key local_options . space . store no_space ] )
+ )*
+ . eol ]
+
+ (* View: email
+ mailonchange has specific syntax
+ *)
+ let email = [ Util.indent . key "mailonchange" . space
+ . [ label "emailaddress" . store email_addr ]
+ . space
+ . [ label "threshold" . store number ]
+ . eol ]
+
+ (* View: makestep
+ makestep has specific syntax
+ *)
+ let makestep = [ Util.indent . key "makestep"
+ . space
+ . [ label "threshold" . store number ]
+ . space
+ . [ label "limit" . store integer ]
+ . eol ]
+
+ (* View: maxchange
+ maxchange has specific syntax
+ *)
+ let maxchange = [ Util.indent . key "maxchange"
+ . space
+ . [ label "threshold" . store number ]
+ . space
+ . [ label "delay" . store integer ]
+ . space
+ . [ label "limit" . store integer ]
+ . eol ]
+
+ (* View: ratelimit
+ ratelimit/cmdratelimit has specific syntax
+ *)
+ let ratelimit = [ Util.indent . key /(cmd|nts)?ratelimit/
+ . [ space . key ratelimit_options
+ . space . store no_space ]*
+ . eol ]
+
+ (* View: refclock
+ refclock has specific syntax
+ *)
+ let refclock = [ Util.indent . key "refclock"
+ . space
+ . [ label "driver" . store word ]
+ . space
+ . [ label "parameter" . store no_space ]
+ . ( space . ( [ key refclock_flags ]
+ | [ key refclock_options . space . store no_space ] )
+ )*
+ . eol ]
+
+ (* View: smoothtime
+ smoothtime has specific syntax
+ *)
+ let smoothtime = [ Util.indent . key "smoothtime"
+ . space
+ . [ label "maxfreq" . store number ]
+ . space
+ . [ label "maxwander" . store number ]
+ . ( space . [ key "leaponly" ] )?
+ . eol ]
+
+ (* View: tempcomp
+ tempcomp has specific syntax
+ *)
+ let tempcomp = [ Util.indent . key "tempcomp"
+ . space
+ . [ label "sensorfile" . store path ]
+ . space
+ . [ label "interval" . store number ]
+ . space
+ . ( [ label "t0" . store number ] . space
+ . [ label "k0" . store number ] . space
+ . [ label "k1" . store number ] . space
+ . [ label "k2" . store number ]
+ | [ label "pointfile" . store path ] )
+ . eol ]
+
+(************************************************************************
+ * Group: Final lens summary
+ ************************************************************************)
+(* View: settings
+ * All supported chrony settings
+ *)
+let settings = host_list | allowdeny | log_list | bcast | fdrift | istepslew
+ | local | email | makestep | maxchange | refclock | smoothtime
+ | dir_list | hwtimestamp | ratelimit | tempcomp | kv | all_flags
+
+(*
+ * View: lns
+ * The chrony lens
+ *)
+let lns = ( empty | comment | settings )*
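+
+(* Illustrative check (editor's sketch, not from the upstream test suite;
+   the server name below is an example value):
+   (start code)
+   test Chrony.lns get "server ntp1.example.com iburst\n" =
+     { "server" = "ntp1.example.com" { "iburst" } }
+   (end code)
+*)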
+
+(* View: filter
+ * The files parsed by default
+ *)
+let filter = incl "/etc/chrony.conf"
+
+let xfm = transform lns filter
+
--- /dev/null
+(*
+Module: ClamAV
+ Parses ClamAV clamd and freshclam configuration files.
+
+Author: Andrew Colin Kissa <andrew@topdog.za.net>
+ Baruwa Enterprise Edition http://www.baruwa.com
+
+About: License
+ This file is licensed under the LGPL v2+.
+
+About: Configuration files
+ This lens applies to /etc/clamd.conf, /etc/freshclam.conf and files in
+ /etc/clamd.d. See <filter>.
+*)
+
+module Clamav =
+autoload xfm
+
+(************************************************************************
+ * Group: USEFUL PRIMITIVES
+ ************************************************************************)
+
+let word = /[A-Za-z][A-Za-z0-9]+/
+
+let comment = Util.comment
+
+let some_value = Sep.space . store Rx.space_in
+
+(************************************************************************
+ * Group: Entry
+ ************************************************************************)
+
+let example_entry = [ key "Example" . Util.eol ]
+
+let clamd_entry = [ key word . some_value . Util.eol ]
+
+(******************************************************************
+ * Group: LENS AND FILTER
+ ******************************************************************)
+
+(************************************************************************
+ * View: Lns
+ ************************************************************************)
+
+let lns = (Util.empty | example_entry | clamd_entry | comment )*
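+
+(* Illustrative check (editor's sketch, not from the upstream test suite;
+   the option and path below are example values):
+   (start code)
+   test Clamav.lns get "LogFile /var/log/clamd.log\n" =
+     { "LogFile" = "/var/log/clamd.log" }
+   (end code)
+*)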
+
+(* Variable: filter *)
+let filter = (incl "/etc/clamd.conf")
+ . (incl "/etc/freshclam.conf")
+ . (incl "/etc/clamd.d/*.conf")
+ . (incl "/etc/clamav/*.conf")
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Cmdline
+ Parses /proc/cmdline and /etc/kernel/cmdline
+
+Author: Thomas Weißschuh <thomas.weissschuh@amadeus.com>
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+*)
+
+module Cmdline =
+ autoload xfm
+
+let entry = [ key Rx.word . Util.del_str "=" . store Rx.no_spaces ] | [ key Rx.word ]
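+
+(* Illustrative example (not from the original source): a cmdline such as
+ *   ro root=/dev/sda1 quiet
+ * yields a "ro" node with no value, a "root" node storing "/dev/sda1",
+ * and a "quiet" node with no value.
+ *)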
+
+let lns = (Build.opt_list entry Sep.space)? . del /[ \t]*\n?/ ""
+
+let filter = incl "/etc/kernel/cmdline"
+ . incl "/proc/cmdline"
+
+let xfm = transform lns filter
--- /dev/null
+
+module CobblerModules =
+ autoload xfm
+
+let comment = IniFile.comment "#" "#"
+let sep = IniFile.sep "=" "="
+
+let entry = IniFile.entry IniFile.entry_re sep comment
+
+let title = IniFile.indented_title IniFile.record_re
+let record = IniFile.record title entry
+
+let lns = IniFile.lns record comment
+
+let filter = (incl "/etc/cobbler/modules.conf")
+
+let xfm = transform lns filter
--- /dev/null
+(*
+ Parse the /etc/cobbler/settings file which is in
+ YAML 1.0 format.
+
+ The lens can handle the following constructs
+ * key: value
+ * key: "value"
+ * key: 'value'
+ * key: [value1, value2]
+ * key:
+ - value1
+ - value2
+ * key:
+ key2: value1
+ key3: value2
+
+ Author: Bryan Kearney
+
+ About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+*)
+module CobblerSettings =
+ autoload xfm
+
+ let kw = /[a-zA-Z0-9_]+/
+ (* TODO Would be better if this stripped off the "" and '' characters *)
+ let kv = /([^]['", \t\n#:@-]+|"[^"\n]*"|'[^'\n]*')/
+
+ let lbr = del /\[/ "["
+ let rbr = del /\]/ "]"
+ let colon = del /[ \t]*:[ \t]*/ ": "
+ let dash = del /-[ \t]*/ "- "
+ (* let comma = del /,[ \t]*(\n[ \t]+)?/ ", " *)
+ let comma = del /[ \t]*,[ \t]*/ ", "
+
+ let eol_only = del /\n/ "\n"
+
+ (* TODO Would be better to make items a child of a document *)
+ let docmarker = /-{3}/
+
+ let eol = Util.eol
+ let comment = Util.comment
+ let empty = Util.empty
+ let indent = del /[ \t]+/ "\t"
+ let ws = del /[ \t]*/ " "
+
+ let value_list = Build.opt_list [label "item" . store kv] comma
+ let setting = [key kw . colon . store kv] . eol
+ let simple_setting_suffix = store kv . eol
+ let setting_list_suffix = [label "sequence" . lbr . ws . (value_list . ws)? . rbr ] . eol
+ let indented_setting_list_suffix = eol_only . (indent . setting)+
+ let indented_list_suffix = [label "list" . eol_only . ([ label "value" . indent . dash . store kv] . eol)+]
+
+ (* Break out setting because of a current bug in augeas *)
+ let nested_setting = [key kw . colon . (
+ (* simple_setting_suffix | *)
+ setting_list_suffix |
+ indented_setting_list_suffix |
+ indented_list_suffix
+ )
+ ]
+
+ let document = [label "---" . store docmarker] . eol
+
+ let lns = (document | comment | empty | setting | nested_setting )*
+
+ let xfm = transform lns (incl "/etc/cobbler/settings")
+
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+(*
+Module: Cockpit
+ Cockpit module for Augeas
+
+ Author: Pat Riehecky <riehecky@fnal.gov>
+
+About: Configuration files
+ cockpit.conf is a standard INI File.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+*)
+
+module Cockpit =
+ autoload xfm
+
+(************************************************************************
+ * INI File settings
+ *************************************************************************)
+
+let comment = IniFile.comment "#" "#"
+let sep = IniFile.sep "=" "="
+let empty = Util.empty
+let eol = IniFile.eol
+
+(************************************************************************
+ * ENTRY
+ *************************************************************************)
+
+let list_entry (list_key:string) =
+ let list_value = store /[^# \t\r\n,][^ \t\r\n,]*[^# \t\r\n,]|[^# \t\r\n,]/ in
+ let list_sep = del /([ \t]*(,[ \t]*|\r?\n[ \t]+))|[ \t]+/ " " in
+ [ key list_key . sep . Sep.opt_space . list_value ]
+ . (list_sep . Build.opt_list [ label list_key . list_value ] list_sep)?
+ . eol
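+
+(* Illustrative example (not from the original source): list_entry splits
+ * a comma- or whitespace-separated value into repeated nodes, e.g.
+ *   Origins = https://example.com wss://example.com
+ * becomes two "Origins" nodes, one per value.
+ *)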
+
+let entry_re = IniFile.entry_re - ("Origins" | "Fatal")
+
+let entry = IniFile.entry entry_re sep comment
+ | empty
+
+let entries =
+ let list_entry_elem (k:string) = list_entry k . entry*
+ in entry*
+ | entry* . Build.combine_two_opt
+ (list_entry_elem "Origins")
+ (list_entry_elem "Fatal")
+
+
+(************************************************************************
+ * TITLE
+ *************************************************************************)
+let title = IniFile.title IniFile.record_re
+let record = [ title . entries ]
+
+
+(************************************************************************
+ * LENS & FILTER
+ *************************************************************************)
+let lns = (empty | comment)* . record*
+
+let filter = (incl "/etc/cockpit/cockpit.conf")
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Collectd
+ Parses collectd configuration files
+
+Author: Raphael Pinson <raphink@gmail.com>
+
+About: Reference
+ This lens tries to keep as close as possible to `man 5 collectd.conf`.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Configuration files
+ This lens applies to collectd configuration files. See <filter>.
+
+About: Examples
+ The <Test_Collectd> file contains various examples and tests.
+*)
+
+module Collectd =
+
+autoload xfm
+
+(* View: lns
+ Collectd configuration files essentially follow the Httpd syntax *)
+let lns = Httpd.lns
+
+(* Variable: filter *)
+let filter = incl "/etc/collectd.conf"
+ . incl "/etc/collectd/*.conf"
+ . incl "/usr/share/doc/collectd/examples/collection3/etc/collection.conf"
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: CPanel
+ Parses cpanel.config
+
+Author: Raphael Pinson <raphink@gmail.com>
+
+About: Reference
+ This lens parses cpanel.config files
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Configuration files
+ This lens applies to cpanel.config files. See <filter>.
+
+About: Examples
+ The <Test_CPanel> file contains various examples and tests.
+*)
+module CPanel =
+
+autoload xfm
+
+(* View: kv
+ A key-value pair, supporting flags and empty values *)
+let kv = [ key /[A-Za-z0-9:_.-]+/
+ . (Sep.equal . store (Rx.space_in?))?
+ . Util.eol ]
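+
+(* Illustrative example (not from the original source; key names are
+ * hypothetical): kv accepts flags and empty values, e.g.
+ *   skipsomefeature=1
+ *   somelist=
+ *   someflag
+ *)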
+
+(* View: lns
+ The <CPanel> lens *)
+let lns = (Util.comment | Util.empty | kv)*
+
+(* View: filter *)
+let filter = incl "/var/cpanel/cpanel.config"
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Cron
+ Parses /etc/cron.d/*, /etc/crontab
+
+Author: Raphael Pinson <raphink@gmail.com>
+
+About: Reference
+ This lens tries to keep as close as possible to `man 5 crontab`.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ Sample usage of this lens in augtool
+
+ * Get the entry that launches '/usr/bin/ls'
+ > match '/files/etc/crontab/entry[. = "/usr/bin/ls"]'
+
+About: Configuration files
+ This lens applies to /etc/cron.d/* and /etc/crontab. See <filter>.
+*)
+
+module Cron =
+ autoload xfm
+
+(************************************************************************
+ * Group: USEFUL PRIMITIVES
+ *************************************************************************)
+
+(* Group: Generic primitives *)
+
+(* Variable: eol *)
+let eol = Util.eol
+
+(* Variable: indent *)
+let indent = Util.indent
+
+(* Variable: comment *)
+let comment = Util.comment
+
+(* Variable: empty *)
+let empty = Util.empty
+
+(* Variable: num *)
+let num = /[0-9*][0-9\/,*-]*/
+
+(* Variable: alpha *)
+let alpha = /[A-Za-z]{3}/
+
+(* Variable: alphanum *)
+let alphanum = (num|alpha) . ("-" . (num|alpha))?
+
+(* Variable: entry_prefix *)
+let entry_prefix = /-/
+
+(* Group: Separators *)
+
+(* Variable: sep_spc *)
+let sep_spc = Util.del_ws_spc
+
+(* Variable: sep_eq *)
+let sep_eq = Util.del_str "="
+
+
+
+(************************************************************************
+ * Group: ENTRIES
+ *************************************************************************)
+
+
+(************************************************************************
+ * View: shellvar
+ * A shell variable in crontab
+ *************************************************************************)
+
+let shellvar =
+ let key_re = /[A-Za-z1-9_-]+(\[[0-9]+\])?/ - "entry" in
+ let sto_to_eol = store /[^\n]*[^ \t\n]/ in
+ [ key key_re . sep_eq . sto_to_eol . eol ]
+
+(* View: prefix
+ * The optional "-" prefix of an entry *)
+let prefix = [ label "prefix" . store entry_prefix ]
+
+(* View: minute *)
+let minute = [ label "minute" . store num ]
+
+(* View: hour *)
+let hour = [ label "hour" . store num ]
+
+(* View: dayofmonth *)
+let dayofmonth = [ label "dayofmonth" . store num ]
+
+(* View: month *)
+let month = [ label "month" . store alphanum ]
+
+(* View: dayofweek *)
+let dayofweek = [ label "dayofweek" . store alphanum ]
+
+
+(* View: user *)
+let user = [ label "user" . store Rx.word ]
+
+
+(************************************************************************
+ * View: time
+ * Time in the format "minute hour dayofmonth month dayofweek"
+ *************************************************************************)
+let time = [ label "time" .
+ minute . sep_spc . hour . sep_spc . dayofmonth
+ . sep_spc . month . sep_spc . dayofweek ]
+
+(* Variable: schedule_re
+ * The valid values for schedules *)
+let schedule_re = "reboot" | "yearly" | "annually" | "monthly"
+ | "weekly" | "daily" | "midnight" | "hourly"
+
+(************************************************************************
+ * View: schedule
+ * Time in the format "@keyword"
+ *************************************************************************)
+let schedule = [ label "schedule" . Util.del_str "@"
+ . store schedule_re ]
+
+
+(************************************************************************
+ * View: entry
+ * A crontab entry
+ *************************************************************************)
+
+let entry = [ label "entry" . indent
+ . prefix?
+ . ( time | schedule )
+ . sep_spc . user
+ . sep_spc . store Rx.space_in . eol ]
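+
+(* Illustrative example (not from the original source): entry matches
+ * both time-based and @-keyword schedules, e.g.
+ *   30 4 * * 0	root	/usr/bin/ls
+ *   @daily	root	/usr/bin/ls
+ *)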
+
+
+(*
+ * View: lns
+ * The cron lens
+ *)
+let lns = ( empty | comment | shellvar | entry )*
+
+
+(* Variable: filter *)
+let filter =
+ incl "/etc/cron.d/*" .
+ incl "/etc/crontab" .
+ incl "/etc/crontabs/*" .
+ excl "/etc/cron.d/at.allow" .
+ excl "/etc/cron.d/at.deny" .
+ excl "/etc/cron.d/cron.allow" .
+ excl "/etc/cron.d/cron.deny" .
+ Util.stdexcl
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Cron_User
+ Parses /var/spool/cron/*
+
+Author: David Lutterkort <lutter@watzmann.net>
+
+About: Reference
+ This lens parses the user crontab files in /var/spool/cron. It produces
+ almost the same tree as the Cron.lns, except that it never contains a user
+ field.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ Sample usage of this lens in augtool
+
+ * Get the entry that launches '/usr/bin/ls'
+ > match '/files/var/spool/cron/foo/entry[. = "/usr/bin/ls"]'
+
+About: Configuration files
+ This lens applies to /var/spool/cron/*. See <filter>.
+ *)
+module Cron_User =
+ autoload xfm
+
+(************************************************************************
+ * View: entry
+ * A crontab entry for a user's crontab
+ *************************************************************************)
+let entry = [ label "entry" . Cron.indent
+ . Cron.prefix?
+ . ( Cron.time | Cron.schedule )
+ . Cron.sep_spc . store Rx.space_in . Cron.eol ]
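+
+(* Illustrative example (not from the original source): unlike Cron.lns,
+ * there is no user field, so the command follows the time fields
+ * directly, e.g.
+ *   0 2 * * *	/usr/bin/ls
+ *)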
+
+(*
+ * View: lns
+ * The cron_user lens. Almost identical to Cron.lns
+ *)
+let lns = ( Cron.empty | Cron.comment | Cron.shellvar | entry )*
+
+let filter =
+ incl "/var/spool/cron/*" .
+ Util.stdexcl
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Crypttab
+ Parses /etc/crypttab from the cryptsetup package.
+
+Author: Frédéric Lespez <frederic.lespez@free.fr>
+
+About: Reference
+ This lens tries to keep as close as possible to `man crypttab`.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ Sample usage of this lens in augtool
+
+ * Create a new entry for an encrypted block device
+ > ins 01 after /files/etc/crypttab/*[last()]
+ > set /files/etc/crypttab/01/target crypt_sda2
+ > set /files/etc/crypttab/01/device /dev/sda2
+ > set /files/etc/crypttab/01/password /dev/random
+ > set /files/etc/crypttab/01/opt swap
+ * Print the entry applying to the "/dev/sda2" device
+ > print /files/etc/crypttab/01
+ * Remove the entry applying to the "/dev/sda2" device
+ > rm /files/etc/crypttab/*[device="/dev/sda2"]
+
+About: Configuration files
+ This lens applies to /etc/crypttab. See <filter>.
+*)
+
+module Crypttab =
+ autoload xfm
+
+ (************************************************************************
+ * Group: USEFUL PRIMITIVES
+ *************************************************************************)
+
+ (* Group: Separators *)
+
+ (* Variable: sep_tab *)
+ let sep_tab = Sep.tab
+
+ (* Variable: comma *)
+ let comma = Sep.comma
+
+ (* Group: Generic primitives *)
+
+ (* Variable: eol *)
+ let eol = Util.eol
+
+ (* Variable: comment *)
+ let comment = Util.comment
+
+ (* Variable: empty *)
+ let empty = Util.empty
+
+ (* Variable: word *)
+ let word = Rx.word
+
+ (* Variable: optval *)
+ let optval = /[A-Za-z0-9\/_.:-]+/
+
+ (* Variable: target *)
+ let target = Rx.device_name
+
+ (* Variable: fspath *)
+ let fspath = Rx.fspath
+
+ (* Variable: uuid *)
+ let uuid = /UUID=[0-9a-f-]+/
+
+ (************************************************************************
+ * Group: ENTRIES
+ *************************************************************************)
+
+ (************************************************************************
+ * View: comma_sep_list
+ * A comma separated list of options (opt=value or opt)
+ *************************************************************************)
+ let comma_sep_list (l:string) =
+ let value = [ label "value" . Util.del_str "=" . store optval ] in
+ let lns = [ label l . store word . value? ] in
+ Build.opt_list lns comma
+
+ (************************************************************************
+ * View: record
+ * A crypttab record
+ *************************************************************************)
+
+ let record = [ seq "entry" .
+ [ label "target" . store target ] . sep_tab .
+ [ label "device" . store (fspath|uuid) ] .
+ (sep_tab . [ label "password" . store fspath ] .
+ ( sep_tab . comma_sep_list "opt")? )?
+ . eol ]
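+
+ (* Illustrative example (not from the original source): a tab-separated
+  * record such as
+  *   crypt_sda2	/dev/sda2	/dev/random	swap,cipher=aes-cbc-essiv:sha256
+  * yields target, device and password nodes plus one opt node per option.
+  *)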
+
+ (*
+ * View: lns
+ * The crypttab lens
+ *)
+ let lns = ( empty | comment | record ) *
+
+ (* Variable: filter *)
+ let filter = (incl "/etc/crypttab")
+
+ let xfm = transform lns filter
+
+(* coding: utf-8 *)
--- /dev/null
+(*
+Module: CSV
+ Generic CSV lens collection
+
+Author: Raphael Pinson <raphael.pinson@camptocamp.com>
+
+About: Reference
+ https://tools.ietf.org/html/rfc4180
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Configuration files
+
+About: Examples
+ The <Test_CSV> file contains various examples and tests.
+
+Caveats:
+ No support for files without an ending CRLF
+*)
+module CSV =
+
+(* View: eol *)
+let eol = Util.del_str "\n"
+
+(* View: comment *)
+let comment = Util.comment
+ | [ del /#[ \t]*\r?\n/ "#\n" ]
+
+(* View: entry
+ An entry of fields, quoted or not *)
+let entry (sep_str:string) =
+ let field = [ seq "field" . store (/[^"#\r\n]/ - sep_str)* ]
+ | [ seq "field" . store /("[^"#]*")+/ ]
+ in let sep = Util.del_str sep_str
+ in [ seq "entry" . counter "field" . Build.opt_list field sep . eol ]
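+
+(* Illustrative example (not from the original source): with the comma
+ * lens, a line such as
+ *   a,b,"c,d"
+ * parses into one entry with three field nodes; the quoted field keeps
+ * its embedded comma.
+ *)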
+
+(* View: lns
+ The generic lens, taking the separator as a parameter *)
+let lns_generic (sep:string) = (comment | entry sep)*
+
+(* View: lns
+ The comma-separated value lens *)
+let lns = lns_generic ","
+
+(* View: lns_semicol
+ A semicolon-separated value lens *)
+let lns_semicol = lns_generic ";"
--- /dev/null
+(*
+Module: Cups
+ Parses cups configuration files
+
+Author: Raphael Pinson <raphink@gmail.com>
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Examples
+ The <Test_Cups> file contains various examples and tests.
+*)
+
+module Cups =
+
+autoload xfm
+
+(* View: lns *)
+let lns = Httpd.lns
+
+(* Variable: filter *)
+let filter = incl "/etc/cups/*.conf"
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Cyrus_Imapd module for Augeas
+
+Author: Free Ekanayaka <free@64studio.com>
+*)
+
+module Cyrus_Imapd =
+
+ autoload xfm
+
+(************************************************************************
+ * USEFUL PRIMITIVES
+ *************************************************************************)
+
+let eol = Util.eol
+let indent = del /[ \t]*(\n[ \t]+)?/ " "
+let comment = Util.comment
+let empty = Util.empty
+let eq = del /[ \t]*:/ ":"
+let word = /[A-Za-z0-9_.-]+/
+
+(* The value of a parameter, after the ':' separator. Lines may be
+ * continued by starting continuation lines with spaces.
+ * The definition needs to make sure we don't add indented comment lines
+ * into values *)
+let value =
+ let chr = /[^# \t\n]/ in
+ let any = /.*/ in
+ let line = (chr . any* . chr | chr) in
+ let lines = line . (/\n[ \t]+/ . line)* in
+ store lines
+
+(************************************************************************
+ * ENTRIES
+ *************************************************************************)
+
+let entry = [ key word . eq . (indent . value)? . eol ]
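+
+(* Illustrative example (not from the original source): entry matches
+ * imapd.conf lines such as
+ *   admins: cyrus
+ *   configdirectory: /var/lib/imap
+ *)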
+
+(************************************************************************
+ * LENS
+ *************************************************************************)
+
+let lns = (comment|empty|entry) *
+
+let filter = (incl "/etc/imapd.conf")
+ . (incl "/etc/imap/*.conf")
+ . Util.stdexcl
+
+let xfm = transform lns filter
--- /dev/null
+(* Darkice module for Augeas
+ Author: Free Ekanayaka <free@64studio.com>
+
+ Reference: man 5 darkice.cfg
+*)
+
+
+module Darkice =
+ autoload xfm
+
+
+(************************************************************************
+ * INI File settings
+ *************************************************************************)
+let comment = IniFile.comment IniFile.comment_re IniFile.comment_default
+
+let sep = IniFile.sep IniFile.sep_re IniFile.sep_default
+
+let entry_re = ( /[A-Za-z0-9][A-Za-z0-9._-]*/ )
+let entry = IniFile.entry entry_re sep comment
+
+let title = IniFile.title_label "target" IniFile.record_label_re
+let record = IniFile.record title entry
+
+let lns = IniFile.lns record comment
+
+let filter = (incl "/etc/darkice.cfg")
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: debctrl
+ Parses ./debian/control
+
+Author:
+ Dominique Dumont domi.dumont@free.fr or dominique.dumont@hp.com
+
+About: Reference
+ http://augeas.net/page/Create_a_lens_from_bottom_to_top
+ http://www.debian.org/doc/debian-policy/ch-controlfields.html
+
+About: License
+ This file is licensed under the LGPL v2+.
+
+About: Lens Usage
+ Since the control file is not a system configuration file, you will have
+ to use augtool's -r option to point to the 'debian' directory.
+
+ Run augtool:
+ $ augtool -r debian
+
+ Sample usage of this lens in augtool:
+
+ * Get the value stored in control file
+ > print /files/control
+ ...
+
+ Saving your file:
+
+ > save
+
+
+*)
+
+module Debctrl =
+ autoload xfm
+
+let eol = Util.eol
+let del_ws_spc = del /[\t ]*/ " "
+let hardeol = del /\n/ "\n"
+let del_opt_ws = del /[\t ]*/ ""
+let colon = del /:[ \t]*/ ": "
+
+let simple_entry (k:regexp) =
+ let value = store /[^ \t][^\n]+/ in
+ [ key k . colon . value . hardeol ]
+
+let cont_line = del /\n[ \t]+/ "\n "
+let comma = del /,[ \t]*/ ", "
+
+let sep_comma_with_nl = del /[ \t\n]*,[ \t\n]*/ ",\n "
+ (*= del_opt_ws . cont_line* . comma . cont_line**)
+
+let email = store ( /([A-Za-z]+ )+<[^\n>]+>/ | /[^\n,\t<> ]+/ )
+
+let multi_line_array_entry (k:regexp) (v:lens) =
+ [ key k . colon . [ counter "array" . seq "array" . v ] .
+ [ seq "array" . sep_comma_with_nl . v ]* . hardeol ]
+
+(* dependency stuff *)
+
+let version_depends =
+ [ label "version"
+ . [ del / *\( */ " ( " . label "relation" . store /[<>=]+/ ]
+ . [ del_ws_spc . label "number"
+ . store ( /[a-zA-Z0-9_.-]+/ | /\$\{[a-zA-Z0-9:]+\}/ )
+ . del / *\)/ " )" ]
+ ]
+
+let arch_depends =
+ [ label "arch"
+ . [ del / *\[ */ " [ " . label "prefix" . store /!?/ ]
+ . [ label "name" . store /[a-zA-Z0-9_.-]+/ . del / *\]/ " ]" ] ]
+
+
+let package_depends
+ = [ key ( /[a-zA-Z0-9_-]+/ | /\$\{[a-zA-Z0-9:]+\}/ )
+ . ( version_depends | arch_depends ) * ]
+
+
+let dependency = [ label "or" . package_depends ]
+ . [ label "or" . del / *\| */ " | "
+ . package_depends ] *
+
+let dependency_list (field:regexp) =
+ [ key field . colon . [ label "and" . dependency ]
+ . [ label "and" . sep_comma_with_nl . dependency ]*
+ . eol ]
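+
+(* Illustrative example (not from the original source): dependency_list
+ * parses fields such as
+ *   Build-Depends: debhelper (>= 7), libssl-dev [!hurd-i386]
+ * splitting on "," into "and" nodes and on "|" into "or" nodes, with
+ * version and arch constraints stored under each package.
+ *)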
+
+(* source package *)
+let uploaders =
+ multi_line_array_entry /Uploaders/ email
+
+let simple_src_keyword = "Source" | "Section" | "Priority"
+ | "Standards\-Version" | "Homepage" | /Vcs\-Svn/ | /Vcs\-Browser/
+ | "Maintainer" | "DM-Upload-Allowed" | /XS?-Python-Version/
+let depend_src_keywords = /Build\-Depends/ | /Build\-Depends\-Indep/
+
+let src_entries = ( simple_entry simple_src_keyword
+ | uploaders
+ | dependency_list depend_src_keywords ) *
+
+
+(* package paragraph *)
+let multi_line_entry (k:string) =
+ let line = /.*[^ \t\n].*/ in
+ [ label k . del / / " " . store line . hardeol ] *
+
+
+let description
+ = [ key "Description" . colon
+ . [ label "summary" . store /[a-zA-Z][^\n]+/ . hardeol ]
+ . multi_line_entry "text" ]
+
+
+(* binary package *)
+let simple_bin_keywords = "Package" | "Architecture" | "Section"
+ | "Priority" | "Essential" | "Homepage" | "XB-Python-Version"
+let depend_bin_keywords = "Depends" | "Recommends" | "Suggests" | "Provides"
+
+let bin_entries = ( simple_entry simple_bin_keywords
+ | dependency_list depend_bin_keywords
+ ) + . description
+
+(* The whole file *)
+let lns = [ label "srcpkg" . src_entries ]
+ . [ label "binpkg" . hardeol+ . bin_entries ]+
+ . eol*
+
+(* lens must be used with AUG_ROOT set to debian package source directory *)
+let xfm = transform lns (incl "/control")
--- /dev/null
+(*
+Module: Desktop
+ Desktop module for Augeas (.desktop files)
+
+Author: Raphael Pinson <raphink@gmail.com>
+
+About: Lens Usage
+ This lens is made to provide a lens for .desktop files for augeas
+
+Reference: Freedesktop.org
+ http://standards.freedesktop.org/desktop-entry-spec/desktop-entry-spec-latest.html
+
+About: License
+ This file is licenced under the LGPL v2+, like the rest of Augeas.
+*)
+
+
+module Desktop =
+(* We don't load this lens by default,
+   since a lot of desktop files contain unicode characters
+   which we can't parse *)
+(* autoload xfm *)
+
+(* Comments can be only of # type *)
+let comment = IniFile.comment "#" "#"
+
+
+(* TITLE
+* These represent sections of a desktop file
+* Example : [DesktopEntry]
+*)
+
+let title = IniFile.title IniFile.record_re
+
+let sep = IniFile.sep "=" "="
+
+let setting = /[A-Za-z0-9_.-]+([][@A-Za-z0-9_.-]+)?/
+
+(* Variable: sto_to_comment
+Store until comment *)
+let sto_to_comment = Sep.opt_space . store /[^# \t\r\n][^#\r\n]*[^# \t\r\n]|[^# \t\r\n]/
+
+(* Entries can have comments at their end, so the entry lens accounts for them *)
+let entry = [ key setting . sep . sto_to_comment? . (comment|IniFile.eol) ] | comment
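+
+(* Illustrative example (not from the original source): entry matches
+ * localized keys and trailing comments, e.g.
+ *   Name=Firefox
+ *   Name[fr]=Firefox # inline comment
+ *)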
+
+let record = IniFile.record title entry
+
+let lns = IniFile.lns record comment
+
+let filter = ( incl "/usr/share/applications/*.desktop"
+ . incl "/usr/share/applications/screensavers/*.desktop" )
+
+let xfm = transform lns filter
--- /dev/null
+module DevfsRules =
+
+ autoload xfm
+
+ let comment = IniFile.comment IniFile.comment_re "#"
+
+ let eol = Util.eol
+
+ let line_re = /[^][#; \t\n][^#;\n]*[^#; \t\n]/
+ let entry = [ seq "entry" . store line_re . (eol | comment) ]
+
+ let title = Util.del_str "["
+ . key Rx.word . [ label "id" . Sep.equal . store Rx.integer ]
+ . Util.del_str "]" . eol
+ . counter "entry"
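+
+  (* Illustrative example (not from the original source): title matches
+   * section headers such as
+   *   [localrules=10]
+   * with "localrules" as the key and 10 stored under "id".
+   *)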
+
+ let record = IniFile.record title (entry | comment)
+
+ let lns = IniFile.lns record comment
+
+ let filter = incl "/etc/defaults/devfs.rules"
+ . incl "/etc/devfs.rules"
+
+ let xfm = transform lns filter
--- /dev/null
+(* Parsing grub's device.map *)
+
+module Device_map =
+ autoload xfm
+
+ let sep_tab = Sep.tab
+ let eol = Util.eol
+ let fspath = Rx.fspath
+ let del_str = Util.del_str
+
+ let comment = Util.comment
+ let empty = Util.empty
+
+ let dev_name = /(h|f|c)d[0-9]+(,[0-9a-zA-Z]+){0,2}/
+ let dev_hex = Rx.hex
+ let dev_dec = /[0-9]+/
+
+ let device = del_str "(" . key ( dev_name | dev_hex | dev_dec ) . del_str ")"
+
+ let map = [ device . sep_tab . store fspath . eol ]
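+
+  (* Illustrative example (not from the original source): map matches
+   * lines such as
+   *   (hd0)	/dev/sda
+   * with "hd0" as the key and the path stored as its value.
+   *)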
+
+ let lns = ( empty | comment | map ) *
+
+ let xfm = transform lns (incl "/boot/*/device.map")
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+(* Dhclient module for Augeas
+ Author: Free Ekanayaka <free@64studio.com>
+
+ Reference: man dhclient.conf
+ The only difference with the reference syntax is that this lens assumes
+ that statements end with a new line, while the reference syntax allows
+ new statements to be started right after the trailing ";" of the
+ previous statement. This should not be a problem in real-life
+ configuration files as statements get usually split across several
+ lines, rather than merged in a single one.
+
+*)
+
+module Dhclient =
+
+ autoload xfm
+
+(************************************************************************
+ * USEFUL PRIMITIVES
+ *************************************************************************)
+
+let eol = Util.eol
+let comment = Util.comment
+let comment_or_eol = Util.comment_or_eol
+let empty = Util.empty
+
+(* Define separators *)
+let sep_spc = del /[ \t\n]+/ " "
+let sep_scl = del /[ \t]*;/ ";"
+let sep_obr = del /[ \t\n]*\{[ \t\n]*/ " {\n"
+let sep_cbr = del /[ \t\n]*\}/ " }"
+let sep_com = del /[ \t\n]*,[ \t\n]*/ ","
+let sep_slh = del "\/" "/"
+let sep_col = del ":" ":"
+let sep_eq = del /[ \t]*=[ \t]*/ "="
+
+(* Define basic types *)
+let word = /[A-Za-z0-9_.-]+(\[[0-9]+\])?/
+
+(* Define fields *)
+
+(* TODO: there could be a " " in the middle of a value ... *)
+let sto_to_spc = store /[^\\#,;{}" \t\n]+|"[^\\#"\n]+"/
+let sto_to_spc_noeval = store /[^=\\#,;{}" \t\n]|[^=\\#,;{}" \t\n][^\\#,;{}" \t\n]*|"[^\\#"\n]+"/
+let sto_to_scl = store /[^ \t\n][^;\n]+[^ \t]|[^ \t;\n]+/
+let rfc_code = [ key "code" . sep_spc . store word ]
+ . sep_eq
+ . [ label "value" . sto_to_scl ]
+let eval = [ label "#eval" . Sep.equal . sep_spc . sto_to_scl ]
+let sto_number = store /[0-9][0-9]*/
+
+(************************************************************************
+ * SIMPLE STATEMENTS
+ *************************************************************************)
+
+let stmt_simple_re = "timeout"
+ | "retry"
+ | "select-timeout"
+ | "reboot"
+ | "backoff-cutoff"
+ | "initial-interval"
+ | "do-forward-updates"
+ | "reject"
+
+let stmt_simple = [ key stmt_simple_re
+ . sep_spc
+ . sto_to_spc
+ . sep_scl
+ . comment_or_eol ]
+
+
+(************************************************************************
+ * ARRAY STATEMENTS
+ *************************************************************************)
+
+(* TODO: the array could also be empty, like in the request statement *)
+let stmt_array_re = "media"
+ | "request"
+ | "require"
+
+let stmt_array = [ key stmt_array_re
+ . sep_spc
+ . counter "stmt_array"
+ . [ seq "stmt_array" . sto_to_spc ]
+ . [ sep_com . seq "stmt_array" . sto_to_spc ]*
+ . sep_scl . comment_or_eol ]
+
+(************************************************************************
+ * HASH STATEMENTS
+ *************************************************************************)
+
+
+let stmt_hash_re = "send"
+ | "option"
+
+let stmt_args = ( [ key word . sep_spc . sto_to_spc_noeval ]
+ | [ key word . sep_spc . (rfc_code|eval) ] )
+ . sep_scl
+ . comment_or_eol
+
+let stmt_hash = [ key stmt_hash_re
+ . sep_spc
+ . stmt_args ]
+
+let stmt_opt_mod_re = "append"
+ | "prepend"
+ | "default"
+ | "supersede"
+
+let stmt_opt_mod = [ key stmt_opt_mod_re
+ . sep_spc
+ . stmt_args ]
+
+(************************************************************************
+ * BLOCK STATEMENTS
+ *************************************************************************)
+
+let stmt_block_re = "interface"
+ | "lease"
+ | "alias"
+
+let stmt_block_opt_re = "interface"
+ | "script"
+ | "bootp"
+ | "fixed-address"
+ | "filename"
+ | "server-name"
+ | "medium"
+ | "vendor option space"
+
+(* TODO: some options could take no argument like bootp *)
+let stmt_block_opt = [ key stmt_block_opt_re
+ . sep_spc
+ . sto_to_spc
+ . sep_scl
+ . comment_or_eol ]
+
+let stmt_block_date_re
+ = "renew"
+ | "rebind"
+ | "expire"
+
+let stmt_block_date = [ key stmt_block_date_re
+ . [ sep_spc . label "weekday" . sto_number ]
+ . [ sep_spc . label "year" . sto_number ]
+ . [ sep_slh . label "month" . sto_number ]
+ . [ sep_slh . label "day" . sto_number ]
+ . [ sep_spc . label "hour" . sto_number ]
+ . [ sep_col . label "minute" . sto_number ]
+ . [ sep_col . label "second" . sto_number ]
+ . sep_scl
+ . comment_or_eol ]
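+
+(* Illustrative example (not from the original source): stmt_block_date
+ * matches lease timestamps such as
+ *   renew 2 2022/1/12 00:00:01;
+ * splitting into weekday, year, month, day, hour, minute and second.
+ *)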
+
+let stmt_block_arg = sep_spc . sto_to_spc
+
+let stmt_block_entry = sep_spc
+ . ( stmt_array
+ | stmt_hash
+ | stmt_opt_mod
+ | stmt_block_opt
+ | stmt_block_date )
+
+let stmt_block = [ key stmt_block_re
+ . stmt_block_arg?
+ . sep_obr
+ . stmt_block_entry+
+ . sep_cbr
+ . comment_or_eol ]
+
+(************************************************************************
+ * LENS & FILTER
+ *************************************************************************)
+
+let statement = (stmt_simple|stmt_opt_mod|stmt_array|stmt_hash|stmt_block)
+
+let lns = ( empty
+ | comment
+ | statement )*
+
+let filter = incl "/etc/dhcp3/dhclient.conf"
+ . incl "/etc/dhcp/dhclient.conf"
+ . incl "/etc/dhclient.conf"
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Dhcpd
+ BIND dhcp 3 server configuration module for Augeas
+
+Author: Francis Giraldeau <francis.giraldeau@usherbrooke.ca>
+
+About: Reference
+ Reference: manual of dhcpd.conf and dhcp-eval
+ Follow dhclient module for tree structure
+
+About: License
+ This file is licensed under the GPL.
+
+About: Lens Usage
+ Sample usage of this lens in augtool
+
+ Directive without argument.
+ Set this dhcpd server authoritative on the domain.
+ > clear /files/etc/dhcp3/dhcpd.conf/authoritative
+
+ Directives with integer or string argument.
+ Set max-lease-time to one hour:
+ > set /files/etc/dhcp3/dhcpd.conf/max-lease-time 3600
+
+ Options are declared as a list, even for single values.
+ Set the domain of the network:
+ > set /files/etc/dhcp3/dhcpd.conf/option/domain-name/arg example.org
+ Set two name server:
+ > set /files/etc/dhcp3/dhcpd.conf/option/domain-name-servers/arg[1] foo.example.org
+ > set /files/etc/dhcp3/dhcpd.conf/option/domain-name-servers/arg[2] bar.example.org
+
+ Create the subnet 172.16.0.1 with 10 addresses:
+ > clear /files/etc/dhcp3/dhcpd.conf/subnet[last() + 1]
+ > set /files/etc/dhcp3/dhcpd.conf/subnet[last()]/network 172.16.0.0
+ > set /files/etc/dhcp3/dhcpd.conf/subnet[last()]/netmask 255.255.255.0
+ > set /files/etc/dhcp3/dhcpd.conf/subnet[last()]/range/from 172.16.0.10
+ > set /files/etc/dhcp3/dhcpd.conf/subnet[last()]/range/to 172.16.0.20
+
+ Create a new group "foo" with one static host. Nodes type and address are ordered.
+ > ins group after /files/etc/dhcp3/dhcpd.conf/subnet[network='172.16.0.0']/*[last()]
+ > set /files/etc/dhcp3/dhcpd.conf/subnet[network='172.16.0.0']/group[last()]/host foo
+ > set /files/etc/dhcp3/dhcpd.conf/subnet[network='172.16.0.0']/group[host='foo']/host/hardware/type "ethernet"
+ > set /files/etc/dhcp3/dhcpd.conf/subnet[network='172.16.0.0']/group[host='foo']/host/hardware/address "00:00:00:aa:bb:cc"
+ > set /files/etc/dhcp3/dhcpd.conf/subnet[network='172.16.0.0']/group[host='foo']/host/fixed-address 172.16.0.100
+
+About: Configuration files
+ This lens applies to /etc/dhcp3/dhcpd.conf, /etc/dhcp/dhcpd.conf and /etc/dhcpd.conf. See <filter>.
+*)
+
+module Dhcpd =
+
+autoload xfm
+
+(************************************************************************
+ * USEFUL PRIMITIVES
+ *************************************************************************)
+let dels (s:string) = del s s
+let eol = Util.eol
+let comment = Util.comment
+let empty = Util.empty
+let indent = Util.indent
+let eos = comment?
+
+(* Define separators *)
+let sep_spc = del /[ \t]+/ " "
+let sep_osp = del /[ \t]*/ ""
+let sep_scl = del /[ \t]*;([ \t]*\n)*/ ";\n"
+let sep_obr = del /[ \t\n]*\{([ \t]*\n)*/ " {\n"
+let sep_cbr = del /[ \t]*\}([ \t]*\n)*/ "}\n"
+let sep_com = del /[ \t\n]*,[ \t\n]*/ ", "
+let sep_slh = del "\/" "/"
+let sep_col = del ":" ":"
+let sep_eq = del /[ \t\n]*=[ \t\n]*/ "="
+let scl = del ";" ";"
+
+(* Define basic types *)
+let word = /[A-Za-z0-9_.-]+(\[[0-9]+\])?/
+let ip = Rx.ipv4
+
+(* Define fields *)
+
+(* adapted from sysconfig.aug *)
+ (* Chars allowed in a bare string *)
+ let bchar = /[^ \t\n"'\\{}#,()\/]|\\\\./
+ let qchar = /["']/ (* " *)
+
+ (* We split the handling of right hand sides into a few cases:
+ * bare - strings that contain no spaces, optionally enclosed in
+ * single or double quotes
+ * quote - strings that contain at least one space, apostrophe or
+ * slash, which must be enclosed in quotes
+ * dquote - bare strings that are always enclosed in double quotes
+ *)
+ let bare = del qchar? "" . store (bchar+) . del qchar? ""
+ let quote = Quote.do_quote (store (bchar* . /[ \t'\/]/ . bchar*)+)
+ let dquote = Quote.do_dquote (store (bchar+))
+ (* these two are for special cases. bare_to_scl is for any bareword that is
+ * space or semicolon terminated. dquote_any allows almost any character in
+ * between the quotes. *)
+ let bare_to_scl = Quote.do_dquote_opt (store /[^" \t\n;]+/)
+ let dquote_any = Quote.do_dquote (store /[^"\n]*[ \t]+[^"\n]*/)
+
+let sto_to_spc = store /[^\\#,;\{\}" \t\n]+|"[^\\#"\n]+"/
+let sto_to_scl = store /[^ \t;][^;\n=]+[^ \t;]|[^ \t;=]+/
+
+let sto_number = store /[0-9]+/
+
+(************************************************************************
+ * NO ARG STATEMENTS
+ *************************************************************************)
+
+let stmt_noarg_re = "authoritative"
+ | "primary"
+ | "secondary"
+
+let stmt_noarg = [ indent
+ . key stmt_noarg_re
+ . sep_scl
+ . eos ]
+
+(************************************************************************
+ * INT ARG STATEMENTS
+ *************************************************************************)
+
+let stmt_integer_re = "default-lease-time"
+ | "max-lease-time"
+ | "min-lease-time"
+ | /lease[ ]+limit/
+ | "port"
+ | /peer[ ]+port/
+ | "max-response-delay"
+ | "max-unacked-updates"
+ | "mclt"
+ | "split"
+ | /load[ ]+balance[ ]+max[ ]+seconds/
+ | "max-lease-misbalance"
+ | "max-lease-ownership"
+ | "min-balance"
+ | "max-balance"
+ | "adaptive-lease-time-threshold"
+ | "dynamic-bootp-lease-length"
+ | "local-port"
+ | "min-sec"
+ | "omapi-port"
+ | "ping-timeout"
+ | "remote-port"
+
+let stmt_integer = [ indent
+ . key stmt_integer_re
+ . sep_spc
+ . sto_number
+ . sep_scl
+ . eos ]
+
+(************************************************************************
+ * STRING ARG STATEMENTS
+ *************************************************************************)
+
+let stmt_string_re = "ddns-update-style"
+ | "ddns-updates"
+ | "ddns-hostname"
+ | "ddns-domainname"
+ | "ddns-rev-domainname"
+ | "log-facility"
+ | "server-name"
+ | "fixed-address"
+ | /failover[ ]+peer/
+ | "use-host-decl-names"
+ | "next-server"
+ | "address"
+ | /peer[ ]+address/
+ | "type"
+ | "file"
+ | "algorithm"
+ | "secret"
+ | "key"
+ | "include"
+ | "hba"
+ | "boot-unknown-clients"
+ | "db-time-format"
+ | "do-forward-updates"
+ | "dynamic-bootp-lease-cutoff"
+ | "get-lease-hostnames"
+ | "infinite-is-reserved"
+ | "lease-file-name"
+ | "local-address"
+ | "one-lease-per-client"
+ | "pid-file-name"
+ | "ping-check"
+ | "server-identifier"
+ | "site-option-space"
+ | "stash-agent-options"
+ | "update-conflict-detection"
+ | "update-optimization"
+ | "update-static-leases"
+ | "use-host-decl-names"
+ | "use-lease-addr-for-default-route"
+ | "vendor-option-space"
+ | "primary"
+ | "omapi-key"
+
+let stmt_string_tpl (kw:regexp) (l:lens) = [ indent
+ . key kw
+ . sep_spc
+ . l
+ . sep_scl
+ . eos ]
+
+let stmt_string = stmt_string_tpl stmt_string_re bare
+ | stmt_string_tpl stmt_string_re quote
+ | stmt_string_tpl "filename" dquote
+
+(************************************************************************
+ * RANGE STATEMENTS
+ *************************************************************************)
+
+let stmt_range = [ indent
+ . key "range"
+ . sep_spc
+ . [ label "flag" . store /dynamic-bootp/ . sep_spc ]?
+ . [ label "from" . store ip . sep_spc ]?
+ . [ label "to" . store ip ]
+ . sep_scl
+ . eos ]
+
+(************************************************************************
+ * HARDWARE STATEMENTS
+ *************************************************************************)
+
+let stmt_hardware = [ indent
+ . key "hardware"
+ . sep_spc
+ . [ label "type" . store /ethernet|tokenring|fddi/ ]
+ . sep_spc
+ . [ label "address" . store /[a-fA-F0-9:-]+/ ]
+ . sep_scl
+ . eos ]
+
+(************************************************************************
+ * SET STATEMENTS
+ *************************************************************************)
+let stmt_set = [ indent
+ . key "set"
+ . sep_spc
+ . store word
+ . sep_spc
+ . Sep.equal
+ . sep_spc
+ . [ label "value" . sto_to_scl ]
+ . sep_scl
+ . eos ]
+
+(************************************************************************
+ * OPTION STATEMENTS
+ *************************************************************************)
+(* The general case is considering options as a list *)
+
+
+let stmt_option_value = /((array of[ \t]+)?(((un)?signed[ \t]+)?integer (8|16|32)|string|ip6?-address|boolean|domain-list|text)|encapsulate [A-Za-z0-9_.-]+)/
+
+let stmt_option_list = ([ label "arg" . bare ] | [ label "arg" . quote ])
+ . ( sep_com . ([ label "arg" . bare ] | [ label "arg" . quote ]))*
+
+let del_trail_spc = del /[ \t\n]*/ ""
+
+let stmt_record = counter "record" . Util.del_str "{"
+ . sep_spc
+ . ([seq "record" . store stmt_option_value . sep_com]*
+ . [seq "record" . store stmt_option_value . del_trail_spc])?
+ . Util.del_str "}"
+
+let stmt_option_code = [ label "label" . store word . sep_spc ]
+ . [ key "code" . sep_spc . store word ]
+ . sep_eq
+ . ([ label "type" . store stmt_option_value ]
+ |[ label "record" . stmt_record ])
+
+let stmt_option_basic = [ key word . sep_spc . stmt_option_list ]
+let stmt_option_extra = [ key word . sep_spc . store /true|false/ . sep_spc . stmt_option_list ]
+
+let stmt_option_body = stmt_option_basic | stmt_option_extra
+
+let stmt_option1 = [ indent
+ . key "option"
+ . sep_spc
+ . stmt_option_body
+ . sep_scl
+ . eos ]
+
+let stmt_option2 = [ indent
+ . dels "option" . label "rfc-code"
+ . sep_spc
+ . stmt_option_code
+ . sep_scl
+ . eos ]
+
+let stmt_option = stmt_option1 | stmt_option2
+
+(************************************************************************
+ * SUBCLASS STATEMENTS
+ *************************************************************************)
+(* This statement is not well documented in the dhcpd.conf manual;
+ we support only the basic use case *)
+
+let stmt_subclass = [ indent . key "subclass" . sep_spc
+ . ( [ label "name" . bare_to_scl ]|[ label "name" . dquote_any ] )
+ . sep_spc
+ . ( [ label "value" . bare_to_scl ]|[ label "value" . dquote_any ] )
+ . sep_scl
+ . eos ]
+
+
+(************************************************************************
+ * ALLOW/DENY STATEMENTS
+ *************************************************************************)
+(* We have to use a special key for "allow/deny members of" statements
+ to avoid ambiguity in the put direction *)
+
+let allow_deny_re = /unknown(-|[ ]+)clients/
+ | /known(-|[ ]+)clients/
+ | /all[ ]+clients/
+ | /dynamic[ ]+bootp[ ]+clients/
+ | /authenticated[ ]+clients/
+ | /unauthenticated[ ]+clients/
+ | "bootp"
+ | "booting"
+ | "duplicates"
+ | "declines"
+ | "client-updates"
+ | "leasequery"
+
+let stmt_secu_re = "allow"
+ | "deny"
+
+let del_allow = del /allow[ ]+members[ ]+of/ "allow members of"
+let del_deny = del /deny[ \t]+members[ \t]+of/ "deny members of"
+
+(* bare_to_scl matches anything but whitespace, quote marks or semicolons.
+ * Technically this should be locked down to mostly alphanumerics, but the
+ * idea right now is just to make things work. Ideally dquote_space would
+ * be used instead, but it doesn't cope with semicolon termination.
+ *)
+let stmt_secu_tpl (l:lens) (s:string) =
+ [ indent . l . sep_spc . label s . bare_to_scl . sep_scl . eos ] |
+ [ indent . l . sep_spc . label s . dquote_any . sep_scl . eos ]
+
+
+let stmt_secu = [ indent . key stmt_secu_re . sep_spc .
+ store allow_deny_re . sep_scl . eos ] |
+ stmt_secu_tpl del_allow "allow-members-of" |
+ stmt_secu_tpl del_deny "deny-members-of"
+
+(************************************************************************
+ * MATCH STATEMENTS
+ *************************************************************************)
+
+let sto_com = /[^ \t\n,\(\)][^,\(\)]*[^ \t\n,\(\)]|[^ \t\n,\(\)]+/ | word . /[ \t]*\([^)]*\)/
+(* this is already the most complicated part of this module and it's about to
+ * get worse. match statements can be way more complicated than this
+ *
+ * examples:
+ * using or:
+ * match if ((option vendor-class-identifier="Banana Bready") or (option vendor-class-identifier="Cherry Sunfire"));
+ * unneeded parenthesis:
+ * match if (option vendor-class-identifier="Hello");
+ *
+ * and of course the fact that the above two rules used one of infinitely
+ * many potential options instead of a builtin function.
+ *)
+(* sto_com doesn't support quoted strings as arguments. It also doesn't
+ support single arguments (it needs to match a comma), and will need to
+ be updated for lcase, ucase and log to be workable.
+
+ It also doesn't support empty argument lists, so gethostbyname() doesn't
+ work.
+
+ option and config-option are considered operators. They should be matched
+ in stmt_entry but also be available under "match if" and "if" conditionals.
+ leased-address and host-decl-name both take no args and return a value; we
+ might need to treat them as variable names in the parser.
+
+ Things like this may be near-impossible to parse even with recursion,
+ because we have no way of knowing when or if a subfunction takes arguments:
+ set ClientMac = binary-to-ascii(16, 8, ":", substring(hardware, 1, 6));
+
+ Even if we could parse it, expressions can get arbitrarily complicated:
+ binary-to-ascii(16, 8, ":", substring(hardware, 1, 6) and substring(hardware, 2, 3));
+
+ So at some point we may need to give up and tell people to put weird
+ stuff in an include file that Augeas doesn't parse.
+
+ The other option is to change the API to not parse the if statement at
+ all, and just pull in the conditional as a string.
+ *)
+
+let fct_re = "substring" | "binary-to-ascii" | "suffix" | "lcase" | "ucase"
+ | "gethostbyname" | "packet"
+ | "concat" | "reverse" | "encode-int"
+ | "extract-int" | "lease-time" | "client-state" | "exists" | "known" | "static"
+ | "pick-first-value" | "log" | "execute"
+
+(* "not" needs to be handled separately because it negates whatever comes next *)
+let op_re = "~="|"="|"~~"|"and"|"or"
+
+let fct_args = [ label "args" . dels "(" . sep_osp .
+ ([ label "arg" . store sto_com ] . [ label "arg" . sep_com . store sto_com ]+) .
+ sep_osp . dels ")" ]
+
+let stmt_match_ifopt = [ dels "if" . sep_spc . key "option" . sep_spc . store word .
+ sep_eq . ([ label "value" . bare_to_scl ]|[ label "value" . dquote_any ]) ]
+
+let stmt_match_func = [ store fct_re . sep_osp . label "function" . fct_args ] .
+ sep_eq . ([ label "value" . bare_to_scl ]|[ label "value" . dquote_any ])
+
+let stmt_match_pfv = [ label "function" . store "pick-first-value" . sep_spc .
+ dels "(" . sep_osp .
+ [ label "args" .
+ [ label "arg" . store sto_com ] .
+ [ sep_com . label "arg" . store sto_com ]+ ] .
+ dels ")" ]
+
+let stmt_match_tpl (l:lens) = [ indent . key "match" . sep_spc . l . sep_scl . eos ]
+
+let stmt_match = stmt_match_tpl (dels "if" . sep_spc . stmt_match_func | stmt_match_pfv | stmt_match_ifopt)
+
+(************************************************************************
+ * BLOCK STATEMENTS
+ *************************************************************************)
+(* Blocks don't support comments after the closing bracket *)
+
+let stmt_entry = stmt_secu
+ | stmt_option
+ | stmt_hardware
+ | stmt_range
+ | stmt_string
+ | stmt_integer
+ | stmt_noarg
+ | stmt_match
+ | stmt_subclass
+ | stmt_set
+ | empty
+ | comment
+
+let stmt_block_noarg_re = "pool" | "group"
+
+let stmt_block_noarg (body:lens)
+ = [ indent
+ . key stmt_block_noarg_re
+ . sep_obr
+ . body*
+ . sep_cbr ]
+
+let stmt_block_arg_re = "host"
+ | "class"
+ | "shared-network"
+ | /failover[ ]+peer/
+ | "zone"
+ | "group"
+ | "on"
+
+let stmt_block_arg (body:lens)
+ = ([ indent . key stmt_block_arg_re . sep_spc . dquote_any . sep_obr . body* . sep_cbr ]
+ |[ indent . key stmt_block_arg_re . sep_spc . bare_to_scl . sep_obr . body* . sep_cbr ]
+ |[ indent . del /key/ "key" . label "key_block" . sep_spc . dquote_any . sep_obr . body* . sep_cbr . del /(;([ \t]*\n)*)?/ "" ]
+ |[ indent . del /key/ "key" . label "key_block" . sep_spc . bare_to_scl . sep_obr . body* . sep_cbr . del /(;([ \t]*\n)*)?/ "" ])
+
+let stmt_block_subnet (body:lens)
+ = [ indent
+ . key "subnet"
+ . sep_spc
+ . [ label "network" . store ip ]
+ . sep_spc
+ . [ key "netmask" . sep_spc . store ip ]
+ . sep_obr
+ . body*
+ . sep_cbr ]
+
+let conditional (body:lens) =
+ let condition = /[^{ \r\t\n][^{\n]*[^{ \r\t\n]|[^{ \t\n\r]/
+ in let elsif = [ indent
+ . Build.xchgs "elsif" "@elsif"
+ . sep_spc
+ . store condition
+ . sep_obr
+ . body*
+ . sep_cbr ]
+ in let else = [ indent
+ . Build.xchgs "else" "@else"
+ . sep_obr
+ . body*
+ . sep_cbr ]
+ in [ indent
+ . Build.xchgs "if" "@if"
+ . sep_spc
+ . store condition
+ . sep_obr
+ . body*
+ . sep_cbr
+ . elsif*
+ . else? ]
+
+
+let all_block (body:lens) =
+ let lns1 = stmt_block_subnet body in
+ let lns2 = stmt_block_arg body in
+ let lns3 = stmt_block_noarg body in
+ let lns4 = conditional body in
+ (lns1 | lns2 | lns3 | lns4 | stmt_entry)
+
+let rec lns_staging = stmt_entry|all_block lns_staging
+let lns = (lns_staging)*
+
+let filter = incl "/etc/dhcp3/dhcpd.conf"
+ . incl "/etc/dhcp/dhcpd.conf"
+ . incl "/etc/dhcpd.conf"
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Dns_Zone
+ Lens for parsing DNS zone files
+
+Authors:
+ Kaarle Ritvanen <kaarle.ritvanen@datakunkku.fi>
+
+About: Reference
+ RFC 1035, RFC 2782, RFC 3403
+
+About: License
+ This file is licensed under the LGPL v2+
+*)
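+
+(* About: Lens Usage
+ A hedged sketch of typical input (record names and addresses below are
+ illustrative, not taken from any real zone):
+
+ (start code)
+ $TTL 86400
+ @ IN SOA ns.example.com. admin.example.com. (
+ 2013020101 ; serial
+ 3600 ; refresh
+ 900 ; retry
+ 1209600 ; expiry
+ 1800 ) ; minimum
+ IN NS ns.example.com.
+ www IN A 192.0.2.10
+ (end code)
+
+ Records are grouped by owner name; for instance, the A record above
+ ends up under a "www" node with numbered children holding "type" and
+ "rdata".
+*)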
+
+module Dns_Zone =
+
+autoload xfm
+
+let eol = del /([ \t\n]*(;[^\n]*)?\n)+/ "\n"
+let opt_eol = del /([ \t\n]*(;[^\n]*)?\n)*/ ""
+
+let ws = del /[ \t]+|(([ \t\n]*;[^\n]*)?\n)+[ \t]*/ " "
+let opt_ws = del /(([ \t\n]*;[^\n]*)?\n)*[ \t]*/ ""
+
+let token = /([^ \t\n";()\\]|\\\\.)+|"([^"\\]|\\\\.)*"/
+
+
+let control = [ key /\$[^ \t\n\/]+/
+ . Util.del_ws_tab
+ . store token
+ . eol ]
+
+
+let labeled_token (lbl:string) (re:regexp) (sep:lens) =
+ [ label lbl . store re . sep ]
+
+let regexp_token (lbl:string) (re:regexp) =
+ labeled_token lbl re Util.del_ws_tab
+
+let type_token (re:regexp) = regexp_token "type" re
+
+let simple_token (lbl:string) = regexp_token lbl token
+
+let enclosed_token (lbl:string) = labeled_token lbl token ws
+
+let last_token (lbl:string) = labeled_token lbl token eol
+
+
+let class_re = /IN/
+
+let ttl = regexp_token "ttl" /[0-9]+[DHMWdhmw]?/
+let class = regexp_token "class" class_re
+
+let rr =
+ let simple_type = /[A-Z]+/ - class_re - /MX|NAPTR|SOA|SRV/
+ in type_token simple_type . last_token "rdata"
+
+
+let mx = type_token "MX"
+ . simple_token "priority"
+ . last_token "exchange"
+
+let naptr = type_token "NAPTR"
+ . simple_token "order"
+ . simple_token "preference"
+ . simple_token "flags"
+ . simple_token "service"
+ . simple_token "regexp"
+ . last_token "replacement"
+
+let soa = type_token "SOA"
+ . simple_token "mname"
+ . simple_token "rname"
+ . Util.del_str "("
+ . opt_ws
+ . enclosed_token "serial"
+ . enclosed_token "refresh"
+ . enclosed_token "retry"
+ . enclosed_token "expiry"
+ . labeled_token "minimum" token opt_ws
+ . Util.del_str ")"
+ . eol
+
+let srv = type_token "SRV"
+ . simple_token "priority"
+ . simple_token "weight"
+ . simple_token "port"
+ . last_token "target"
+
+
+let record = seq "owner"
+ . ((ttl? . class?) | (class . ttl))
+ . (rr|mx|naptr|soa|srv)
+let ws_record = [ Util.del_ws_tab . record ]
+let records (k:regexp) = [ key k . counter "owner" . ws_record+ ]
+
+let any_record_block = records /[^ \t\n;\/$][^ \t\n;\/]*/
+let non_root_records = records /@[^ \t\n;\/]+|[^ \t\n;\/$@][^ \t\n;\/]*/
+
+let root_records = [ del /@?/ "@"
+ . Util.del_ws_tab
+ . label "@"
+ . counter "owner"
+ . [ record ]
+ . ws_record* ]
+
+let lns = opt_eol
+ . control*
+ . ( (root_records|non_root_records)
+ . (control|any_record_block)* )?
+
+let filter = incl "/var/bind/pri/*.zone"
+let xfm = transform Dns_Zone.lns filter
--- /dev/null
+(* Dnsmasq module for Augeas
+ Author: Free Ekanayaka <free@64studio.com>
+
+ Reference: man dnsmasq (8)
+
+ "Format is one option per line, legal options are the same
+ as the long options legal on the command line. See
+ "/usr/sbin/dnsmasq --help" or "man 8 dnsmasq" for details."
+
+*)
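+
+(* A hedged sketch of typical input (option names from dnsmasq(8),
+ addresses illustrative):
+
+ (start code)
+ domain-needed
+ cache-size=1000
+ server=/example.org/192.0.2.1
+ address=/ads.example.com/127.0.0.1
+ (end code)
+
+ "server" and "address" lines get structured trees with "domain"
+ subnodes; all other options are plain key/value entries.
+*)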
+
+module Dnsmasq =
+
+ autoload xfm
+
+(************************************************************************
+ * USEFUL PRIMITIVES
+ *************************************************************************)
+
+let eol = Util.eol
+let spc = Util.del_ws_spc
+let comment = Util.comment
+let empty = Util.empty
+
+let sep_eq = Sep.equal
+let sto_to_eol = store /([^ \t\n].*[^ \t\n]|[^ \t\n])/
+
+let slash = Util.del_str "/"
+let sto_no_slash = store /([^\/ \t\n]+)/
+let domains = slash . [ label "domain" . sto_no_slash . slash ]+
+
+(************************************************************************
+ * SIMPLE ENTRIES
+ *************************************************************************)
+
+let entry_re = Rx.word - /(address|server)/
+let entry = [ key entry_re . (sep_eq . sto_to_eol)? . eol ]
+
+(************************************************************************
+ * STRUCTURED ENTRIES
+ *************************************************************************)
+
+let address = [ key "address" . sep_eq . domains . sto_no_slash . eol ]
+
+let server =
+ let port = [ Build.xchgs "#" "port" . store Rx.integer ]
+ in let source = [ Build.xchgs "@" "source" . store /[^#\/ \t\n]+/ . port? ]
+ in let srv_spec = store /(#|([^#@\/ \t\n]+))/ . port? . source?
+ in [ key "server" . sep_eq . domains? . srv_spec? . eol ]
+
+(************************************************************************
+ * LENS
+ *************************************************************************)
+
+let lns = (comment|empty|address|server|entry) *
+
+let filter = incl "/etc/dnsmasq.conf"
+ . incl "/etc/dnsmasq.d/*"
+ . excl ".*"
+ . Util.stdexcl
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Dovecot
+ Parses dovecot configuration files.
+
+Author: Serge Smetana <serge.smetana@gmail.com>
+ Acunote http://www.acunote.com
+ Pluron, Inc. http://pluron.com
+
+About: License
+ This file is licensed under the LGPL v2+.
+
+About: Configuration files
+ This lens applies to /etc/dovecot/dovecot.conf and files in
+ /etc/dovecot/conf.d/. See <filter>.
+
+About: Examples
+ The <Test_Dovecot> file contains various examples and tests.
+
+About: TODO
+ Support for multiline values like queries in dict-sql.conf
+*)
+
+module Dovecot =
+
+ autoload xfm
+
+(******************************************************************
+ * Group: USEFUL PRIMITIVES
+ ******************************************************************)
+
+(* View: indent *)
+let indent = Util.indent
+
+(* View: eol *)
+let eol = Util.eol
+
+(* View: empty
+Map empty lines. *)
+let empty = Util.empty
+
+(* View: comment
+Map comments in "#comment" nodes. *)
+let comment = Util.comment
+
+(* View: eq *)
+let eq = del /[ \t]*=/ " ="
+
+(* Variable: any *)
+let any = Rx.no_spaces
+
+(* Variable: value
+Match any value after " =".
+Should not start or end with spaces; it may contain spaces inside. *)
+let value = any . (Rx.space . any)*
+
+(* View: command_start *)
+let command_start = Util.del_str "!"
+
+
+(******************************************************************
+ * Group: ENTRIES
+ ******************************************************************)
+
+(* Variable: commands *)
+let commands = /include|include_try/
+
+(* Variable: block_names *)
+let block_names = /dict|userdb|passdb|protocol|service|plugin|namespace|map|fields|unix_listener|fifo_listener|inet_listener/
+
+(* Variable: keys
+Match any possible key except commands and block names. *)
+let keys = Rx.word - (commands | block_names)
+
+(* View: entry
+Map simple "key = value" entries including "key =" entries with empty value. *)
+let entry = [ indent . key keys. eq . (Sep.opt_space . store value)? . eol ]
+
+(* View: command
+Map commands started with "!". *)
+let command = [ command_start . key commands . Sep.space . store Rx.fspath . eol ]
+
+(*
+View: dquote_spaces
+ Make double quotes mandatory if value contains spaces,
+ and optional if value doesn't contain spaces.
+
+Based on Quote.dquote_spaces
+
+Parameters:
+ lns1:lens - the lens before
+ lns2:lens - the lens after
+*)
+let dquote_spaces (lns1:lens) (lns2:lens) =
+ (* bare has no spaces, and is optionally quoted *)
+ let bare = Quote.do_dquote_opt (store /[^" \t\n]+/)
+ (* quoted has at least one space, and must be quoted *)
+ in let quoted = Quote.do_dquote (store /[^"\n]*[ \t]+[^"\n]*/)
+ in [ lns1 . bare . lns2 ] | [ lns1 . quoted . lns2 ]
+
+let mailbox = indent
+ . dquote_spaces
+ (key /mailbox/ . Sep.space)
+ (Build.block_newlines_spc entry comment . eol)
+
+let block_ldelim_newlines_re = /[ \t]+\{([ \t\n]*\n)?/
+
+let block_newlines (entry:lens) (comment:lens) =
+ let indent = del Rx.opt_space "\t"
+ in del block_ldelim_newlines_re Build.block_ldelim_default
+ . ((entry | comment) . (Util.empty | entry | comment)*)?
+ . del Build.block_rdelim_newlines_re Build.block_rdelim_newlines_default
+
+(* View: block
+Map block enclosed in brackets recursively.
+Block may be indented and have optional argument.
+Block body may have entries, comments, empty lines, and nested blocks recursively. *)
+let rec block = [ indent . key block_names . (Sep.space . Quote.do_dquote_opt (store /!?[\/A-Za-z0-9_-]+/))? . block_newlines (entry|block|mailbox) comment . eol ]
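+
+(* A hedged sketch of input handled by <block> (service and listener
+ names are illustrative):
+
+ (start code)
+ service imap-login {
+ inet_listener imap {
+ port = 143
+ }
+ }
+ (end code)
+*)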
+
+
+(******************************************************************
+ * Group: LENS AND FILTER
+ ******************************************************************)
+
+(* View: lns
+The Dovecot lens *)
+let lns = (comment|empty|entry|command|block)*
+
+(* Variable: filter *)
+let filter = incl "/etc/dovecot/dovecot.conf"
+ . (incl "/etc/dovecot/conf.d/*.conf")
+ . incl "/usr/local/etc/dovecot/dovecot.conf"
+ . (incl "/usr/local/etc/dovecot/conf.d/*.conf")
+ . Util.stdexcl
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Dpkg
+ Parses /etc/dpkg/dpkg.cfg
+
+Author: Robin Lee Powell <rlpowell@digitalkingdom.org>
+
+About: License
+ This file, and the attendant test_dpkg.aug, are explicitly
+ placed in the public domain.
+
+About: Description
+ dpkg.cfg is a simple list of options, the same ones as the
+ command line options, with or without a value.
+
+ The tree is a list of either comments or option/value pairs by
+ name. Use "set" to set an option with a value, and "clear" for a
+ bare option.
+
+About: Usage Example
+
+(start code)
+ $ augtool -n
+ augtool> ls /files/etc/dpkg/dpkg.cfg
+ #comment[1] = dpkg configuration file
+ #comment[2] = This file can contain default options for dpkg. All command-line
+ #comment[3] = options are allowed. Values can be specified by putting them after
+ #comment[4] = the option, separated by whitespace and/or an `=' sign.
+ #comment[5] = Do not enable debsig-verify by default; since the distribution is not using
+ #comment[6] = embedded signatures, debsig-verify would reject all packages.
+ no-debsig = (none)
+ #comment[7] = Log status changes and actions to a file.
+ log = /var/log/dpkg.log
+ augtool> get /files/etc/dpkg/dpkg.cfg/no-debsig
+ /files/etc/dpkg/dpkg.cfg/no-debsig (none)
+ augtool> get /files/etc/dpkg/dpkg.cfg/log
+ /files/etc/dpkg/dpkg.cfg/log = /var/log/dpkg.log
+ augtool> clear /files/etc/dpkg/dpkg.cfg/testopt
+ augtool> set /files/etc/dpkg/dpkg.cfg/testopt2 test
+ augtool> save
+ Saved 1 file(s)
+ augtool>
+ $ cat /etc/dpkg/dpkg.cfg.augnew
+ # dpkg configuration file
+ #
+ # This file can contain default options for dpkg. All command-line
+ # options are allowed. Values can be specified by putting them after
+ # the option, separated by whitespace and/or an `=' sign.
+ #
+
+ # Do not enable debsig-verify by default; since the distribution is not using
+ # embedded signatures, debsig-verify would reject all packages.
+ no-debsig
+
+ # Log status changes and actions to a file.
+ log /var/log/dpkg.log
+ testopt
+ testopt2 test
+(end code)
+
+*)
+
+module Dpkg =
+ autoload xfm
+
+ let sep_tab = Util.del_ws_tab
+ let sep_spc = Util.del_ws_spc
+ let eol = del /[ \t]*\n/ "\n"
+
+ let comment = Util.comment
+ let empty = Util.empty
+
+ let word = /[^,# \n\t]+/
+ let keyword = /[^,# \n\t\/]+/
+
+ (* View: record
+ Keyword, followed by optional whitespace and value, followed
+ by EOL.
+
+ The actual file specification doesn't require EOL, but the
+ likelihood of the file not having one is pretty slim, and
+ this way things we add have EOL.
+ *)
+
+ let record = [ key keyword . (sep_spc . store word)? . eol ]
+
+ (* View: lns
+ Any number of empty lines, comments, and records.
+ *)
+ let lns = ( empty | comment | record ) *
+
+ let xfm = transform lns (incl "/etc/dpkg/dpkg.cfg")
--- /dev/null
+(* Dput module for Augeas
+ Author: Raphael Pinson <raphink@gmail.com>
+
+
+ Reference: dput uses Python's ConfigParser:
+ http://docs.python.org/lib/module-ConfigParser.html
+*)
+
+
+module Dput =
+ autoload xfm
+
+
+(************************************************************************
+ * INI File settings
+ *************************************************************************)
+let comment = IniFile.comment IniFile.comment_re IniFile.comment_default
+
+let sep = IniFile.sep IniFile.sep_re IniFile.sep_default
+
+
+let setting = "allow_dcut"
+ | "allow_non-us_software"
+ | "allow_unsigned_uploads"
+ | "check_version"
+ | "default_host_main"
+ | "default_host_non-us"
+ | "fqdn"
+ | "hash"
+ | "incoming"
+ | "login"
+ | "method"
+ | "passive_ftp"
+ | "post_upload_command"
+ | "pre_upload_command"
+ | "progress_indicator"
+ | "run_dinstall"
+ | "run_lintian"
+ | "scp_compress"
+ | "ssh_config_options"
+ | "allowed_distributions"
+
+(************************************************************************
+ * "name: value" entries, with continuations in the style of RFC 822;
+ * "name=value" is also accepted
+ * leading whitespace is removed from values
+ *************************************************************************)
+let entry = IniFile.entry setting sep comment
+
+
+(************************************************************************
+ * sections, led by a "[section]" header
+ * We can't use titles as node names here since they could contain "/"
+ * We remove #comment from possible keys
+ * since it is used as label for comments
+ * We also remove / as first character
+ * because augeas doesn't like '/' keys (although it is legal in INI Files)
+ *************************************************************************)
+let title = IniFile.title_label "target" IniFile.record_label_re
+let record = IniFile.record title entry
+
+let lns = IniFile.lns record comment
+
+let filter = (incl "/etc/dput.cf")
+ . (incl (Sys.getenv("HOME") . "/.dput.cf"))
+
+let xfm = transform lns filter
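+
+(* A hedged sketch of a target section this lens parses (host values
+ are illustrative):
+
+ (start code)
+ [ubuntu]
+ fqdn = upload.ubuntu.com
+ method = ftp
+ incoming = /
+ (end code)
+*)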
+
--- /dev/null
+(*
+Module: Erlang
+ Parses Erlang configuration files
+
+Author: Raphael Pinson <raphael.pinson@camptocamp.com>
+
+About: Reference
+ This lens tries to keep as close as possible to `http://www.erlang.org/doc/man/config.html`.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Configuration files
+ This lens applies to Erlang configuration files. See <filter>.
+
+About: Examples
+ The <Test_Erlang> file contains various examples and tests.
+*)
+module Erlang =
+
+
+(* Group: Spacing Functions *)
+
+(* View: space *)
+let space = del /[ \t\n]*/ ""
+
+(* View: lspace
+ Add spaces to the left of char *)
+let lspace (char:string) = del (/[ \t\n]*/ . char) char
+
+(* View: rspace
+ Add spaces to the right of char *)
+let rspace (char:string) = del (char . /[ \t\n]*/ ) char
+
+(* View: lrspace
+ Add spaces to the left or right of char *)
+let lrspace (char:string) = del (/[ \t\n]*/ . char . /[ \t\n]*/ ) char
+
+
+(* Group: Separators *)
+
+(* Variable: lbrace
+ Left curly brace *)
+let lbrace = "{"
+
+(* Variable: rbrace
+ Right curly brace *)
+let rbrace = "}"
+
+(* Variable: lbrack
+ Left square bracket *)
+let lbrack = "["
+
+(* Variable: rbrack
+ Right square bracket *)
+let rbrack = "]"
+
+(* Variable: lglob
+ Left glob separator *)
+let lglob = "<<\""
+
+(* Variable: rglob
+ Right glob separator *)
+let rglob = "\">>"
+
+(* Variable: comma *)
+let comma = ","
+
+
+(* Group: Value types *)
+
+(* View: opt_list
+ An optional list of elements, in square brackets *)
+let opt_list (lns:lens) = rspace lbrack
+ . (Build.opt_list lns (lrspace comma) . space)?
+ . Util.del_str rbrack
+
+(* View: integer
+ Store a <Rx.integer> *)
+let integer = store Rx.integer
+
+(* View: decimal
+ Store a decimal value *)
+let decimal = store /[0-9]+(\.[0-9]+)?/
+
+(* View: quoted
+ Store a quoted value *)
+let quoted = Quote.do_quote (store /[^,\n}{]+/)
+
+(* View: bare
+ Store a bare <Rx.word> *)
+let bare = store Rx.word
+
+(* View: boolean
+ Store a boolean value *)
+let boolean = store /true|false/
+
+(* View: path
+ Store a path (<quoted>) *)
+let path = quoted
+
+(* View: glob
+ Store a glob *)
+let glob = Util.del_str lglob . store /[^\n"]+/ . Util.del_str rglob
+
+(* View: make_value
+ Make a "value" subnode for arrays/tuples *)
+let make_value (lns:lens) = [ label "value" . lns ]
+
+
+(* Group: Store types *)
+
+(* View: value
+ A single value *)
+let value (kw:regexp) (sto:lens) =
+ [ rspace lbrace
+ . key kw
+ . lrspace comma
+ . sto
+ . lspace rbrace ]
+
+(* View: tuple
+ A tuple of values *)
+let tuple (one:lens) (two:lens) =
+ [ rspace lbrace
+ . label "tuple"
+ . [ label "value" . one ]
+ . lrspace comma
+ . [ label "value" . two ]
+ . lspace rbrace ]
+
+(* View: tuple3
+ A tuple of 3 values *)
+let tuple3 (one:lens) (two:lens) (three:lens) =
+ [ rspace lbrace
+ . label "tuple"
+ . [ label "value" . one ]
+ . lrspace comma
+ . [ label "value" . two ]
+ . lrspace comma
+ . [ label "value" . three ]
+ . lspace rbrace ]
+
+(* View: list
+ A list of lenses *)
+let list (kw:regexp) (lns:lens) =
+ [ rspace lbrace
+ . key kw
+ . lrspace comma
+ . opt_list lns
+ . lspace rbrace ]
+
+(* View: value_list
+ A <list> of seq entries *)
+let value_list (kw:regexp) (sto:lens) =
+ list kw (make_value sto)
+
+(* View: application *)
+let application (name:regexp) (parameter:lens) =
+ list name parameter
+
+(* View: kernel_parameters
+ Config parameters accepted for kernel app *)
+let kernel_parameters =
+ value "browser_cmd" path
+ | value "dist_auto_connect" (store /never|once/)
+ | value "error_logger" (store /tty|false|silent/)
+ | value "net_setuptime" integer
+ | value "net_ticktime" integer
+ | value "shutdown_timeout" integer
+ | value "sync_nodes_timeout" integer
+ | value "start_dist_ac" boolean
+ | value "start_boot_server" boolean
+ | value "start_disk_log" boolean
+ | value "start_pg2" boolean
+ | value "start_timer" boolean
+
+(* View: kernel
+ Core Erlang kernel app configuration *)
+let kernel = application "kernel" kernel_parameters
+
+(* View: comment *)
+let comment = Util.comment_generic /%[ \t]*/ "% "
+
+(* View: config
+ A top-level config *)
+let config (app:lens) =
+ (Util.empty | comment)*
+ . rspace lbrack
+ . Build.opt_list (kernel | app) (lrspace comma)
+ . lrspace rbrack
+ . Util.del_str "." . Util.eol
+ . (Util.empty | comment)*
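+
+(* About: Usage sketch
+   An illustrative example (app and values are made up) of a config
+   file in the format this lens parses, with the mandatory closing dot:
+
+   (start code)
+   [{kernel, [{start_timer, true}, {net_ticktime, 60}]}].
+   (end code)
+
+   This maps to a "kernel" node containing a "start_timer" node with
+   value "true" and a "net_ticktime" node with value "60". *)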
--- /dev/null
+(* Parsing /etc/ethers *)
+
+module Ethers =
+ autoload xfm
+
+ let sep_tab = Util.del_ws_tab
+
+ let eol = del /[ \t]*\n/ "\n"
+ let indent = del /[ \t]*/ ""
+
+ let comment = Util.comment
+ let empty = [ del /[ \t]*#?[ \t]*\n/ "\n" ]
+
+ let word = /[^# \n\t]+/
+ let address =
+ let hex = /[0-9a-fA-F][0-9a-fA-F]?/ in
+ hex . ":" . hex . ":" . hex . ":" . hex . ":" . hex . ":" . hex
+
+ let record = [ seq "ether" . indent .
+ [ label "mac" . store address ] . sep_tab .
+ [ label "ip" . store word ] . eol ]
+
+ let lns = ( empty | comment | record ) *
+
+ let xfm = transform lns (incl "/etc/ethers")
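+
+(* An illustrative example (addresses are made up): the line
+ *   00:11:22:33:44:55 host.example.com
+ * maps to a numbered entry with a "mac" child ("00:11:22:33:44:55")
+ * and an "ip" child ("host.example.com"). *)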
--- /dev/null
+(* Lens for Linux syntax of NFS exports(5) *)
+
+(*
+Module: Exports
+ Parses /etc/exports
+
+Author: David Lutterkort <lutter@redhat.com>
+
+About: Description
+ /etc/exports contains lines associating a directory with one or
+ more hosts, and NFS options for each host.
+
+About: Usage Example
+
+(start code)
+
+ $ augtool
+ augtool> ls /files/etc/exports/
+ comment[1] = /etc/exports: the access control list for filesystems which may be exported
+ comment[2] = to NFS clients. See exports(5).
+ comment[3] = sample /etc/exports file
+ dir[1]/ = /
+ dir[2]/ = /projects
+ dir[3]/ = /usr
+ dir[4]/ = /home/joe
+
+
+ augtool> ls /files/etc/exports/dir[1]
+ client[1]/ = master
+ client[2]/ = trusty
+(end code)
+
+The corresponding line in the file is:
+
+(start code)
+ / master(rw) trusty(rw,no_root_squash)
+(end code)
+
+ Digging further:
+
+(start code)
+ augtool> ls /files/etc/exports/dir[1]/client[1]
+ option = rw
+(end code)
+
+ To add a new entry, you'd do something like this:
+
+(start code)
+ augtool> set /files/etc/exports/dir[10000] /foo
+ augtool> set /files/etc/exports/dir[last()]/client[1] weeble
+ augtool> set /files/etc/exports/dir[last()]/client[1]/option[1] ro
+ augtool> set /files/etc/exports/dir[last()]/client[1]/option[2] all_squash
+ augtool> save
+ Saved 1 file(s)
+(end code)
+
+ Which creates the line:
+
+(start code)
+ /foo weeble(ro,all_squash)
+(end code)
+
+About: Limitations
+ This lens cannot handle options without a host, as with the last
+ example line in "man 5 exports":
+
+ /pub (ro,insecure,all_squash)
+
+ In this case, though, you can just do:
+
+ /pub *(ro,insecure,all_squash)
+
+ It also can't handle whitespace before the directory name.
+*)
+
+module Exports =
+ autoload xfm
+
+ let client_re = /[][a-zA-Z0-9.@*?\/:-]+/
+
+ let eol = Util.eol
+ let lbracket = Util.del_str "("
+ let rbracket = Util.del_str ")"
+ let sep_com = Sep.comma
+ let sep_spc = Sep.space
+
+ let option = [ label "option" . store /[^,)]*/ ]
+
+ let client = [ label "client" . store client_re .
+ ( Build.brackets lbracket rbracket
+ ( Build.opt_list option sep_com ) )? ]
+
+ let entry = [ label "dir" . store /[^ \t\n#]*/
+ . sep_spc . Build.opt_list client sep_spc . eol ]
+
+ let lns = (Util.empty | Util.comment | entry)*
+
+ let xfm = transform lns (incl "/etc/exports")
--- /dev/null
+(*
+Module: FAI_DiskConfig
+ Parses disk_config files for FAI
+
+Author: Raphael Pinson <raphink@gmail.com>
+
+About: Reference
+ This lens tries to stay as close as possible to the FAI wiki:
+ http://wiki.fai-project.org/wiki/Setup-storage#New_configuration_file_syntax
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Examples
+ The <Test_FAI_DiskConfig> file contains various examples and tests.
+*)
+
+module FAI_DiskConfig =
+
+(* autoload xfm *)
+
+(************************************************************************
+ * Group: USEFUL PRIMITIVES
+ *************************************************************************)
+
+(* Group: Generic primitives *)
+(* Variable: eol *)
+let eol = Util.eol
+
+(* Variable: space *)
+let space = Sep.space
+
+(* Variable: empty *)
+let empty = Util.empty
+
+(* Variable: comment *)
+let comment = Util.comment
+
+(* Variable: tag
+ A generic tag beginning with a colon *)
+let tag (re:regexp) = [ Util.del_str ":" . key re ]
+
+(* Variable: generic_opt
+ A generic key/value option *)
+let generic_opt (type:string) (kw:regexp) =
+ [ key type . Util.del_str ":" . store kw ]
+
+(* Variable: generic_opt_list
+ A generic key/list option *)
+let generic_opt_list (type:string) (kw:regexp) =
+ [ key type . Util.del_str ":" . counter "locallist"
+ . Build.opt_list [seq "locallist" . store kw] Sep.comma ]
+
+
+(************************************************************************
+ * Group: RECORDS
+ *************************************************************************)
+
+
+(* Group: volume *)
+
+(* Variable: mountpoint_kw *)
+let mountpoint_kw = "-" (* do not mount *)
+ | "swap" (* swap space *)
+ (* fully qualified path; if :encrypt is given, the partition
+ * will be encrypted, the key is generated automatically *)
+ | /\/[^: \t\n]*/
+
+(* Variable: encrypt
+ encrypt tag *)
+let encrypt = tag "encrypt"
+
+(* Variable: mountpoint *)
+let mountpoint = [ label "mountpoint" . store mountpoint_kw
+ (* encrypt is only for the fspath, but we parse it anyway *)
+ . encrypt?]
+
+(* Variable: resize
+ resize tag *)
+let resize = tag "resize"
+
+(* Variable: size_kw
+ Regexps for size *)
+let size_kw = /[0-9]+[kMGTP%]?(-([0-9]+[kMGTP%]?)?)?/
+ | /-[0-9]+[kMGTP%]?/
+
+(* Variable: size *)
+let size = [ label "size" . store size_kw . resize? ]
+
+(* Variable: filesystem_kw
+ Regexps for filesystem *)
+let filesystem_kw = "-"
+ | "swap"
+ (* NOTE: Tightening this regexp would improve performance *)
+ | (Rx.no_spaces - ("-" | "swap")) (* mkfs.xxx must exist *)
+
+(* Variable: filesystem *)
+let filesystem = [ label "filesystem" . store filesystem_kw ]
+
+
+(* Variable: mount_option_value *)
+let mount_option_value = [ label "value" . Util.del_str "="
+ . store /[^,= \t\n]+/ ]
+
+(* Variable: mount_option
+ Counting options *)
+let mount_option = [ seq "mount_option"
+ . store /[^,= \t\n]+/
+ . mount_option_value? ]
+
+(* Variable: mount_options
+ An array of <mount_option>s *)
+let mount_options = [ label "mount_options"
+ . counter "mount_option"
+ . Build.opt_list mount_option Sep.comma ]
+
+(* Variable: fs_option *)
+let fs_option =
+ [ key /createopts|tuneopts/
+ . Util.del_str "=\"" . store /[^"\n]*/ . Util.del_str "\"" ]
+
+(* Variable: fs_options
+ An array of <fs_option>s *)
+let fs_options =
+ (* options to append to mkfs.xxx and to the filesystem-specific
+ * tuning tool *)
+ [ label "fs_options" . Build.opt_list fs_option Sep.space ]
+
+(* Variable: volume_full *)
+let volume_full (type:lens) (third_field:lens) =
+ [ type . space
+ . mountpoint . space
+ (* The third field changes depending on types *)
+ . third_field . space
+ . filesystem . space
+ . mount_options
+ . (space . fs_options)?
+ . eol ]
+
+(* Variable: name
+ LVM volume group name *)
+let name = [ label "name" . store /[^\/ \t\n]+/ ]
+
+(* Variable: partition
+ An optional partition number for <disk> *)
+let partition = [ label "partition" . Util.del_str "." . store /[0-9]+/ ]
+
+(* Variable: disk *)
+let disk = [ label "disk" . store /[^., \t\n]+/ . partition? ]
+
+(* Variable: vg_option
+ An option for <volume_vg> *)
+let vg_option =
+ [ key "pvcreateopts"
+ . Util.del_str "=\"" . store /[^"\n]*/ . Util.del_str "\"" ]
+
+(* Variable: volume_vg *)
+let volume_vg = [ key "vg"
+ . space . name
+ . space . disk
+ . (space . vg_option)?
+ . eol ]
+
+(* Variable: spare_missing *)
+let spare_missing = tag /spare|missing/
+
+(* Variable: disk_with_opt
+ A <disk> with a spare/missing option for raids *)
+let disk_with_opt = [ label "disk" . store /[^:., \t\n]+/ . partition?
+ . spare_missing* ]
+
+(* Variable: disk_list
+ A list of <disk_with_opt>s *)
+let disk_list = Build.opt_list disk_with_opt Sep.comma
+
+(* Variable: type_label_lv *)
+let type_label_lv = label "lv"
+ . [ label "vg" . store (/[^# \t\n-]+/ - "raw") ]
+ . Util.del_str "-"
+ . [ label "name" . store /[^ \t\n]+/ ]
+
+(* Variable: volume_tmpfs *)
+let volume_tmpfs =
+ [ key "tmpfs" . space
+ . mountpoint . space
+ . size . space
+ . mount_options
+ . (space . fs_options)?
+ . eol ]
+
+(* Variable: volume_lvm *)
+let volume_lvm = volume_full type_label_lv size (* lvm logical volume: vg name and lv name *)
+ | volume_vg
+
+(* Variable: volume_raid *)
+let volume_raid = volume_full (key /raid[0156]/) disk_list (* raid level *)
+
+(* Variable: device *)
+let device = [ label "device" . store Rx.fspath ]
+
+(* Variable: volume_cryptsetup *)
+let volume_cryptsetup = volume_full (key ("swap"|"tmp"|"luks")) device
+
+(* Variable: volume *)
+let volume = volume_full (key "primary") size (* for physical disks only *)
+ | volume_full (key "logical") size (* for physical disks only *)
+ | volume_full (key "raw-disk") size
+
+(* Variable: volume_or_comment
+ A succession of <volume>s and <comment>s *)
+let volume_or_comment (vol:lens) =
+ (vol|empty|comment)* . vol
+
+(* Variable: disk_config_entry *)
+let disk_config_entry (kw:regexp) (opt:lens) (vol:lens) =
+ [ key "disk_config" . space . store kw
+ . (space . opt)* . eol
+ . (volume_or_comment vol)? ]
+
+(* Variable: lvmoption *)
+let lvmoption =
+ (* preserve partitions -- always *)
+ generic_opt "preserve_always" /[^\/, \t\n-]+-[^\/, \t\n-]+(,[^\/, \t\n-]+-[^\/, \t\n-]+)*/
+ (* preserve partitions -- unless the system is installed
+ * for the first time *)
+ | generic_opt "preserve_reinstall" /[^\/, \t\n-]+-[^\/, \t\n-]+(,[^\/, \t\n-]+-[^\/, \t\n-]+)*/
+ (* attempt to resize partitions *)
+ | generic_opt "resize" /[^\/, \t\n-]+-[^\/, \t\n-]+(,[^\/, \t\n-]+-[^\/, \t\n-]+)*/
+ (* when creating the fstab, the key used for defining the device
+ * may be the device (/dev/xxx), a label given using -L, or the uuid *)
+ | generic_opt "fstabkey" /device|label|uuid/
+
+(* Variable: raidoption *)
+let raidoption =
+ (* preserve partitions -- always *)
+ generic_opt_list "preserve_always" (Rx.integer | "all")
+ (* preserve partitions -- unless the system is installed
+ * for the first time *)
+ | generic_opt_list "preserve_reinstall" Rx.integer
+ (* when creating the fstab, the key used for defining the device
+ * may be the device (/dev/xxx), a label given using -L, or the uuid *)
+ | generic_opt "fstabkey" /device|label|uuid/
+
+(* Variable: option *)
+let option =
+ (* preserve partitions -- always *)
+ generic_opt_list "preserve_always" (Rx.integer | "all")
+ (* preserve partitions -- unless the system is installed
+ for the first time *)
+ | generic_opt_list "preserve_reinstall" Rx.integer
+ (* attempt to resize partitions *)
+ | generic_opt_list "resize" Rx.integer
+ (* write a disklabel - default is msdos *)
+ | generic_opt "disklabel" /msdos|gpt/
+ (* mark a partition bootable, default is / *)
+ | generic_opt "bootable" Rx.integer
+ (* do not assume the disk to be a physical device, use with xen *)
+ | [ key "virtual" ]
+ (* when creating the fstab, the key used for defining the device
+ * may be the device (/dev/xxx), a label given using -L, or the uuid *)
+ | generic_opt "fstabkey" /device|label|uuid/
+ | generic_opt_list "always_format" Rx.integer
+ | generic_opt "sameas" Rx.fspath
+
+let cryptoption =
+ [ key "randinit" ]
+
+(* Variable: disk_config *)
+let disk_config =
+ let excludes = "lvm" | "raid" | "end" | /disk[0-9]+/
+ | "cryptsetup" | "tmpfs" in
+ let other_label = Rx.fspath - excludes in
+ disk_config_entry "lvm" lvmoption volume_lvm
+ | disk_config_entry "raid" raidoption volume_raid
+ | disk_config_entry "tmpfs" option volume_tmpfs
+ | disk_config_entry "end" option volume (* there shouldn't be an option here *)
+ | disk_config_entry /disk[0-9]+/ option volume
+ | disk_config_entry "cryptsetup" cryptoption volume_cryptsetup
+ | disk_config_entry other_label option volume
+
+(* Variable: lns
+ The disk_config lens *)
+let lns = (disk_config|comment|empty)*
+
+
+(* let xfm = transform lns Util.stdexcl *)
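+
+(* About: Usage sketch
+   An illustrative disk_config fragment in the syntax this lens
+   parses (see <Test_FAI_DiskConfig> for the real tests):
+
+   (start code)
+   disk_config disk1 bootable:1
+   primary / 20G ext3 rw
+   primary swap 2G swap sw
+   (end code)
+*)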
--- /dev/null
+(* Fail2ban module for Augeas *)
+(* Author: Nicolas Gif <ngf18490@pm.me> *)
+(* Heavily based on DPUT module by Raphael Pinson *)
+(* <raphink@gmail.com> *)
+(* *)
+
+module Fail2ban =
+ autoload xfm
+
+
+(************************************************************************
+ * INI File settings
+ *************************************************************************)
+let comment = IniFile.comment IniFile.comment_re IniFile.comment_default
+
+let sep = IniFile.sep IniFile.sep_re IniFile.sep_default
+
+
+(************************************************************************
+ * "name: value" entries, with continuations in the style of RFC 822;
+ * "name=value" is also accepted;
+ * leading whitespace is removed from values
+ *************************************************************************)
+let entry = IniFile.entry IniFile.entry_re sep comment
+
+
+(************************************************************************
+ * sections, led by a "[section]" header
+ * We can't use titles as node names here since they could contain "/"
+ * We remove #comment from possible keys
+ * since it is used as label for comments
+ * We also remove / as first character
+ * because augeas doesn't like '/' keys (although it is legal in INI Files)
+ *************************************************************************)
+let title = IniFile.title IniFile.record_re
+let record = IniFile.record title entry
+
+let lns = IniFile.lns record comment
+
+let filter = (incl "/etc/fail2ban/fail2ban.conf")
+ . (incl "/etc/fail2ban/jail.conf")
+ . (incl "/etc/fail2ban/jail.local")
+ . (incl "/etc/fail2ban/fail2ban.d/*.conf")
+ . (incl "/etc/fail2ban/jail.d/*.conf")
+
+let xfm = transform lns filter
+
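+(* An illustrative example (jail name is made up): in
+ *   [sshd]
+ *   enabled = true
+ * the lens produces a section node for "sshd" containing an
+ * "enabled" entry with value "true". *)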
--- /dev/null
+(*
+Module: Fonts
+ Parses the /etc/fonts directory
+
+Author: Raphael Pinson <raphink@gmail.com>
+
+About: Reference
+ This lens tries to stay as close as possible to `man 5 fonts-conf`.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Configuration files
+ This lens applies to files in the /etc/fonts directory. See <filter>.
+
+About: Examples
+ The <Test_Fonts> file contains various examples and tests.
+*)
+
+module Fonts =
+
+autoload xfm
+
+(* View: lns *)
+let lns = Xml.lns
+
+(* Variable: filter *)
+let filter = incl "/etc/fonts/fonts.conf"
+ . incl "/etc/fonts/conf.avail/*"
+ . incl "/etc/fonts/conf.d/*"
+ . excl "/etc/fonts/*/README"
+ . Util.stdexcl
+
+let xfm = transform lns filter
--- /dev/null
+(* Parsing /etc/fstab *)
+
+module Fstab =
+ autoload xfm
+
+ let sep_tab = Sep.tab
+ let sep_spc = Sep.space
+ let comma = Sep.comma
+ let eol = Util.eol
+
+ let comment = Util.comment
+ let empty = Util.empty
+
+ let file = /[^# \t\n]+/
+
+ (* An option label can't contain comma, comment, equals, or space *)
+ let optlabel = /[^,#= \n\t]+/
+ let spec = /[^,# \n\t][^ \n\t]*/
+
+ let comma_sep_list (l:string) =
+ let value = [ label "value" . Util.del_str "=" . ( store Rx.neg1 )? ] in
+ let lns = [ label l . store optlabel . value? ] in
+ Build.opt_list lns comma
+
+ let record = [ seq "mntent" .
+ Util.indent .
+ [ label "spec" . store spec ] . sep_tab .
+ [ label "file" . store file ] . sep_tab .
+ comma_sep_list "vfstype" .
+ (sep_tab . comma_sep_list "opt" .
+ (sep_tab . [ label "dump" . store /[0-9]+/ ] .
+ ( sep_spc . [ label "passno" . store /[0-9]+/ ])? )? )?
+ . Util.comment_or_eol ]
+
+ let lns = ( empty | comment | record ) *
+ let filter = incl "/etc/fstab"
+ . incl "/etc/mtab"
+
+ let xfm = transform lns filter
+
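+(* An illustrative example: the line
+ *   /dev/sda1 / ext4 defaults,noatime 1 1
+ * maps to a numbered entry with children "spec" ("/dev/sda1"),
+ * "file" ("/"), "vfstype" ("ext4"), two "opt" nodes ("defaults"
+ * and "noatime"), "dump" ("1") and "passno" ("1"). *)
+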
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+(*
+Module: Fuse
+ Parses /etc/fuse.conf
+
+Author: Raphael Pinson <raphink@gmail.com>
+
+About: Reference
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Configuration files
+ This lens applies to /etc/fuse.conf. See <filter>.
+
+About: Examples
+ The <Test_Fuse> file contains various examples and tests.
+*)
+
+
+module Fuse =
+autoload xfm
+
+(* Variable: equal *)
+let equal = del /[ \t]*=[ \t]*/ " = "
+
+(* View: mount_max *)
+let mount_max = Build.key_value_line "mount_max" equal (store Rx.integer)
+
+(* View: user_allow_other *)
+let user_allow_other = Build.flag_line "user_allow_other"
+
+
+(* View: lns
+ The fuse.conf lens
+*)
+let lns = ( Util.empty | Util.comment | mount_max | user_allow_other )*
+
+(* Variable: filter *)
+let filter = incl "/etc/fuse.conf"
+
+let xfm = transform lns filter
+
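+(* An illustrative example: the line
+ *   mount_max = 1000
+ * maps to a "mount_max" node with value "1000", while a bare
+ * "user_allow_other" line becomes a "user_allow_other" flag node. *)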
--- /dev/null
+(* Gdm module for Augeas *)
+(* Author: Free Ekanayaka <freek@64studio.com> *)
+(* *)
+
+module Gdm =
+ autoload xfm
+
+(************************************************************************
+ * INI File settings
+ *************************************************************************)
+
+let comment = IniFile.comment IniFile.comment_re IniFile.comment_default
+let sep = IniFile.sep IniFile.sep_re IniFile.sep_default
+let empty = IniFile.empty
+
+
+(************************************************************************
+ * ENTRY
+ * Entry keywords can be bare digits as well (the [server] section)
+ *************************************************************************)
+let entry_re = ( /[A-Za-z0-9][A-Za-z0-9._-]*/ )
+let entry = IniFile.entry entry_re sep comment
+
+
+(************************************************************************
+ * TITLE
+ *
+ * We exclude ".anon" from section titles because entries outside
+ * of sections are grouped under a ".anon" node, whose label would
+ * otherwise conflict with a section of the same name
+ *************************************************************************)
+let title = IniFile.title ( IniFile.record_re - ".anon" )
+let record = IniFile.record title entry
+
+let record_anon = [ label ".anon" . ( entry | empty )+ ]
+
+
+(************************************************************************
+ * LENS & FILTER
+ * There can be entries before any section
+ * IniFile.entry includes comment management, so we just pass entry to lns
+ *************************************************************************)
+let lns = record_anon? . record*
+
+let filter = (incl "/etc/gdm/gdm.conf*")
+ . (incl "/etc/gdm/custom.conf")
+ . Util.stdexcl
+
+let xfm = transform lns filter
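+
+(* An illustrative example (values are made up): in
+ *   Greeter=gdmlogin
+ *   [daemon]
+ *   AutomaticLoginEnable=true
+ * the leading entry lands under the ".anon" node, while
+ * "AutomaticLoginEnable" becomes an entry of the "daemon" section. *)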
--- /dev/null
+(*
+Module: Getcap
+ Parses generic termcap-style capability databases
+
+Author: Matt Dainty <matt@bodgit-n-scarper.com>
+
+About: Reference
+ - man 3 getcap
+ - man 5 login.conf
+ - man 5 printcap
+
+Each line represents a record consisting of a number of ':'-separated fields,
+the first of which is the name or identifier for the record. The name can
+optionally be split by '|' and each subsequent value is considered an alias
+of the first. Records can be split across multiple lines with '\'.
+
+See also the Rtadvd and Termcap modules which contain slightly more specific
+grammars.
+
+*)
+
+module Getcap =
+ autoload xfm
+
+ (* Comments cannot have any leading characters *)
+ let comment = Util.comment_generic /#[ \t]*/ "# "
+
+ let nfield = /[^#:\\\\\t\n|][^:\\\\\t\n|]*/
+
+ (* field must not contain ':' *)
+ let cfield = /[a-zA-Z0-9-]+([%^$#\\]?@|[%^$#\\=]([^:\\\\^]|\\\\[0-7]{1,3}|\\\\[bBcCeEfFnNrRtT\\^]|\^.)*)?/
+
+ let csep = del /:([ \t]*\\\\\n[ \t]*:)?/ ":\\\n\t:"
+ let nsep = Util.del_str "|"
+ let name = [ label "name" . store nfield ]
+ let capability (re:regexp) = [ label "capability" . store re ]
+ let record (re:regexp) = [ label "record" . name . ( nsep . name )* . ( csep . capability re )* . Sep.colon . Util.eol ]
+
+ let lns = ( Util.empty | comment | record cfield )*
+
+ let filter = incl "/etc/login.conf"
+ . incl "/etc/printcap"
+ . Util.stdexcl
+
+ let xfm = transform lns filter
+
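+(* An illustrative printcap-style record (fields are made up):
+ *
+ *   lp|local line printer:\
+ *       :lp=/dev/lp0:sh:
+ *
+ * parses into a "record" node with two "name" children and one
+ * "capability" child each for "lp=/dev/lp0" and "sh". *)
+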
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+(* Group module for Augeas
+ Author: Free Ekanayaka <free@64studio.com>
+
+ Reference: man 5 group
+
+*)
+
+module Group =
+
+ autoload xfm
+
+(************************************************************************
+ * USEFUL PRIMITIVES
+ *************************************************************************)
+
+let eol = Util.eol
+let comment = Util.comment
+let empty = Util.empty
+let dels = Util.del_str
+
+let colon = Sep.colon
+let comma = Sep.comma
+
+let sto_to_spc = store Rx.space_in
+let sto_to_col = Passwd.sto_to_col
+
+let word = Rx.word
+let password = /[A-Za-z0-9_.!*-]*/
+let integer = Rx.integer
+
+(************************************************************************
+ * ENTRIES
+ *************************************************************************)
+
+let user = [ label "user" . store word ]
+let user_list = Build.opt_list user comma
+let params = [ label "password" . store password . colon ]
+ . [ label "gid" . store integer . colon ]
+ . user_list?
+let entry = Build.key_value_line word colon params
+
+let nisdefault =
+ let overrides =
+ colon
+ . [ label "password" . store password? . colon ]
+ . [ label "gid" . store integer? . colon ]
+ . user_list? in
+ [ dels "+" . label "@nisdefault" . overrides? . eol ]
+
+
+(************************************************************************
+ * LENS
+ *************************************************************************)
+
+let lns = (comment|empty|entry|nisdefault) *
+
+let filter = incl "/etc/group"
+
+let xfm = transform lns filter
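+
+(* An illustrative example (names are made up): the line
+ *   wheel:x:10:root,joe
+ * maps to a "wheel" node with children "password" ("x"),
+ * "gid" ("10") and two "user" entries ("root" and "joe"). *)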
--- /dev/null
+(*
+Module: Grub
+ Parses grub configuration
+
+Author: David Lutterkort <lutter@redhat.com>
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+*)
+
+module Grub =
+ autoload xfm
+
+ (* This only covers the most basic grub directives. Needs to be *)
+ (* expanded to cover more (and more esoteric) directives *)
+ (* It is good enough to handle the grub.conf on my Fedora 8 box *)
+
+
+(************************************************************************
+ * Group: USEFUL PRIMITIVES
+ *************************************************************************)
+
+ (* View: value_to_eol *)
+ let value_to_eol = store /[^= \t\n][^\n]*[^= \t\n]|[^= \t\n]/
+
+ (* View: eol *)
+ let eol = Util.eol
+
+ (* View: spc *)
+ let spc = Util.del_ws_spc
+
+ (* View: opt_ws *)
+ let opt_ws = Util.del_opt_ws ""
+
+ (* View: dels *)
+ let dels (s:string) = Util.del_str s
+
+ (* View: eq *)
+ let eq = dels "="
+
+ (* View: switch *)
+ let switch (n:regexp) = dels "--" . key n
+
+ (* View: switch_arg *)
+ let switch_arg (n:regexp) = switch n . eq . store Rx.no_spaces
+
+ (* View: value_sep *)
+ let value_sep (dflt:string) = del /[ \t]*[ \t=][ \t]*/ dflt
+
+ (* View: comment_re *)
+ let comment_re = /([^ \t\n].*[^ \t\n]|[^ \t\n])/
+ - /# ## (Start|End) Default Options ##/
+
+ (* View: comment *)
+ let comment =
+ [ Util.indent . label "#comment" . del /#[ \t]*/ "# "
+ . store comment_re . eol ]
+
+ (* View: empty *)
+ let empty = Util.empty
+
+(************************************************************************
+ * Group: USEFUL FUNCTIONS
+ *************************************************************************)
+
+ (* View: command *)
+ let command (kw:regexp) (indent:string) =
+ Util.del_opt_ws indent . key kw
+
+ (* View: kw_arg *)
+ let kw_arg (kw:regexp) (indent:string) (dflt_sep:string) =
+ [ command kw indent . value_sep dflt_sep . value_to_eol . eol ]
+
+ (* View: kw_boot_arg *)
+ let kw_boot_arg (kw:regexp) = kw_arg kw "\t" " "
+
+ (* View: kw_menu_arg *)
+ let kw_menu_arg (kw:regexp) = kw_arg kw "" " "
+
+ (* View: password_arg *)
+ let password_arg = [ command "password" "" .
+ (spc . [ switch "md5" ])? .
+ (spc . [ switch "encrypted" ])? .
+ spc . store (/[^ \t\n]+/ - /--[^ \t\n]+/) .
+ (spc . [ label "file" . store /[^ \t\n]+/ ])? .
+ eol ]
+
+ (* View: kw_pres *)
+ let kw_pres (kw:string) = [ opt_ws . key kw . eol ]
+
+ (* View: error
+ * Parse a line that looks almost like a valid setting, but isn't,
+ * into an '#error' node. Any line that starts with letters, but not
+ * anything matching kw, is considered an error line.
+ *
+ * Parameters:
+ * kw:regexp - the valid keywords that are _not_ considered an
+ * error
+ *)
+ let error (kw:regexp) =
+ let not_kw = /[a-zA-Z]+/ - kw in
+ [ label "#error" . Util.del_opt_ws "\t"
+ . store (not_kw . /([^a-zA-Z\n].*[^ \t\n])?/) . eol ]
+
+
+(************************************************************************
+ * Group: BOOT ENTRIES
+ *************************************************************************)
+
+ (* View: device
+ * This is a shell-only directive in upstream grub; the grub versions
+ * in at least Fedora/RHEL use this to find devices for UEFI boot *)
+ let device =
+ [ command "device" "" . Sep.space . store /\([A-Za-z0-9_.-]+\)/ . spc .
+ [ label "file" . value_to_eol ] . Util.eol ]
+
+ (* View: color *)
+ let color =
+ (* Should we nail it down to exactly the color names that *)
+ (* grub supports ? *)
+ let color_name = store /[A-Za-z-]+/ in
+ let color_spec =
+ [ label "foreground" . color_name] .
+ dels "/" .
+ [ label "background" . color_name ] in
+ [ opt_ws . key "color" .
+ spc . [ label "normal" . color_spec ] .
+ (spc . [ label "highlight" . color_spec ])? .
+ eol ]
+
+ (* View: serial *)
+ let serial =
+ [ command "serial" "" .
+ [ spc . switch_arg /unit|port|speed|word|parity|stop|device/ ]* .
+ eol ]
+
+ (* View: terminal *)
+ let terminal =
+ [ command "terminal" "" .
+ ([ spc . switch /dumb|no-echo|no-edit|silent/ ]
+ |[ spc . switch_arg /timeout|lines/ ])* .
+ [ spc . key /console|serial|hercules/ ]* . eol ]
+
+ (* View: setkey *)
+ let setkey = [ command "setkey" "" .
+ ( spc . [ label "to" . store Rx.no_spaces ] .
+ spc . [ label "from" . store Rx.no_spaces ] )? .
+ eol ]
+
+ (* View: menu_entry *)
+ let menu_entry = kw_menu_arg "default"
+ | kw_menu_arg "fallback"
+ | kw_pres "hiddenmenu"
+ | kw_menu_arg "timeout"
+ | kw_menu_arg "splashimage"
+ | kw_menu_arg "gfxmenu"
+ | kw_menu_arg "foreground"
+ | kw_menu_arg "background"
+ | kw_menu_arg "verbose"
+ | kw_menu_arg "boot" (* only for CLI, ignored in conf *)
+ | serial
+ | terminal
+ | password_arg
+ | color
+ | device
+ | setkey
+
+ (* View: menu_error
+ * Accept lines not matching menu_entry and stuff them into
+ * '#error' nodes
+ *)
+ let menu_error =
+ let kw = /default|fallback|hiddenmenu|timeout|splashimage|gfxmenu/
+ |/foreground|background|verbose|boot|password|title/
+ |/serial|setkey|terminal|color|device/ in
+ error kw
+
+ (* View: menu_setting
+ * a valid menu setting or a line that looks like one but is an #error
+ *)
+ let menu_setting = menu_entry | menu_error
+
+ (* View: title *)
+ let title = del /title[ \t=]+/ "title " . value_to_eol . eol
+
+ (* View: multiboot_arg
+ * Permits a second form for Solaris multiboot kernels that
+ * take a path (with a slash) as their first arg, e.g.
+ * /boot/multiboot kernel/unix another=arg *)
+ let multiboot_arg = [ label "@path" .
+ store (Rx.word . "/" . Rx.no_spaces) ]
+
+ (* View: kernel_args
+ Parse the file name and args on a kernel or module line. *)
+ let kernel_args =
+ let arg = /[A-Za-z0-9_.$\+-]+/ - /type|no-mem-option/ in
+ store /(\([a-z0-9,]+\))?\/[^ \t\n]*/ .
+ (spc . multiboot_arg)? .
+ (spc . [ key arg . (eq . store /([^ \t\n])*/)?])* . eol
+
+ (* View: module_line
+ Solaris extension adds module$ and kernel$ for variable interpolation *)
+ let module_line =
+ [ command /module\$?/ "\t" . spc . kernel_args ]
+
+ (* View: map_line *)
+ let map_line =
+ [ command "map" "\t" . spc .
+ [ label "from" . store /[()A-Za-z0-9]+/ ] . spc .
+ [ label "to" . store /[()A-Za-z0-9]+/ ] . eol ]
+
+ (* View: kernel *)
+ let kernel =
+ [ command /kernel\$?/ "\t" .
+ (spc .
+ ([switch "type" . eq . store /[a-z]+/]
+ |[switch "no-mem-option"]))* .
+ spc . kernel_args ]
+
+ (* View: chainloader *)
+ let chainloader =
+ [ command "chainloader" "\t" .
+ [ spc . switch "force" ]? . spc . store Rx.no_spaces . eol ]
+
+ (* View: savedefault *)
+ let savedefault =
+ [ command "savedefault" "\t" . (spc . store Rx.integer)? . eol ]
+
+ (* View: configfile *)
+ let configfile =
+ [ command "configfile" "\t" . spc . store Rx.no_spaces . eol ]
+
+ (* View: boot_entry
+ <boot> entries *)
+ let boot_entry =
+ let boot_arg_re = "root" | "initrd" | "rootnoverify" | "uuid"
+ | "findroot" | "bootfs" (* Solaris extensions *)
+ in kw_boot_arg boot_arg_re
+ | kernel
+ | chainloader
+ | kw_pres "quiet" (* Seems to be an Ubuntu extension *)
+ | savedefault
+ | configfile
+ | module_line
+ | map_line
+ | kw_pres "lock"
+ | kw_pres "makeactive"
+ | password_arg
+
+ (* View: boot_error
+ * Accept lines not matching boot_entry and stuff them into
+ * '#error' nodes
+ *)
+ let boot_error =
+ let kw = /lock|uuid|password|root|initrd|rootnoverify|findroot|bootfs/
+ |/configfile|chainloader|title|boot|quiet|kernel|module/
+ |/makeactive|savedefault|map/ in
+ error kw
+
+ (* View: boot_setting
+ * a valid boot setting or a line that looks like one but is an #error
+ *)
+ let boot_setting = boot_entry | boot_error
+
+ (* View: boot *)
+ let boot =
+ let line = ((boot_setting|comment)* . boot_setting)? in
+ [ label "title" . title . line ]
+
+(************************************************************************
+ * Group: DEBIAN-SPECIFIC SECTIONS
+ *************************************************************************)
+
+ (* View: debian_header
+ Header for a <debian>-specific section *)
+ let debian_header = "## ## Start Default Options ##\n"
+
+ (* View: debian_footer
+ Footer for a <debian>-specific section *)
+ let debian_footer = "## ## End Default Options ##\n"
+
+ (* View: debian_comment_re *)
+ let debian_comment_re = /([^ \t\n].*[^ \t\n]|[^ \t\n])/
+ - "## End Default Options ##"
+
+ (* View: debian_comment
+ A comment entry inside a <debian>-specific section *)
+ let debian_comment =
+ [ Util.indent . label "#comment" . del /##[ \t]*/ "## "
+ . store debian_comment_re . eol ]
+
+ (* View: debian_setting_re *)
+ let debian_setting_re = "kopt"
+ | "groot"
+ | "alternative"
+ | "lockalternative"
+ | "defoptions"
+ | "lockold"
+ | "xenhopt"
+ | "xenkopt"
+ | "altoptions"
+ | "howmany"
+ | "memtest86"
+ | "updatedefaultentry"
+ | "savedefault"
+ | "indomU"
+
+ (* View: debian_entry *)
+ let debian_entry = [ Util.del_str "#" . Util.indent
+ . key debian_setting_re . del /[ \t]*=/ "="
+ . value_to_eol? . eol ]
+
+ (* View: debian
+ A debian-specific section, made of <debian_entry> lines *)
+ let debian = [ label "debian"
+ . del debian_header debian_header
+ . (debian_comment|empty|debian_entry)*
+ . del debian_footer debian_footer ]
+
+(************************************************************************
+ * Group: LENS AND FILTER
+ *************************************************************************)
+
+ (* View: lns *)
+ let lns = (comment | empty | menu_setting | debian)*
+ . (boot . (comment | empty | boot)*)?
+
+ (* View: filter *)
+ let filter = incl "/boot/grub/grub.conf"
+ . incl "/boot/grub/menu.lst"
+ . incl "/etc/grub.conf"
+ . incl "/boot/efi/EFI/*/grub.conf"
+
+ let xfm = transform lns filter
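+
+ (* An illustrative menu.lst fragment (paths are made up) in the
+  * syntax this lens parses:
+  *
+  *   default=0
+  *   timeout=5
+  *   title Fedora
+  *           root (hd0,0)
+  *           kernel /vmlinuz-2.6.25 ro root=/dev/sda1
+  *           initrd /initrd-2.6.25.img
+  *
+  * "title" holds the boot entry; "kernel" stores the image path,
+  * with one child node per kernel argument. *)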
--- /dev/null
+(* Parsing /boot/grub/grubenv *)
+
+module GrubEnv =
+ autoload xfm
+
+ let eol = Util.del_str "\n"
+
+ let comment = Util.comment
+ let eq = Util.del_str "="
+ let value = /[^\\\n]*(\\\\(\\\\|\n)[^\\\n]*)*/
+
+ let word = /[A-Za-z_][A-Za-z0-9_]*/
+ let record = [ seq "target" .
+ [ label "name" . store word ] . eq .
+ [ label "value" . store value ] . eol ]
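+
+ (* A hedged usage sketch (the grubenv entry below is illustrative, not
+    taken from a real system); each record becomes a numbered node with
+    "name" and "value" children:
+
+    (start code)
+    $ cat /boot/grub2/grubenv
+    saved_entry=0
+
+    augtool> print /files/boot/grub2/grubenv
+    /files/boot/grub2/grubenv
+    /files/boot/grub2/grubenv/1
+    /files/boot/grub2/grubenv/1/name = "saved_entry"
+    /files/boot/grub2/grubenv/1/value = "0"
+    (end code) *)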
+
+ let lns = ( comment | record ) *
+
+ let xfm = transform lns (incl "/boot/grub/grubenv" . incl "/boot/grub2/grubenv")
--- /dev/null
+(*
+ Module: Gshadow
+ Parses /etc/gshadow
+
+ Author: Lorenzo M. Catucci <catucci@ccd.uniroma2.it>
+
+ Original Author: Free Ekanayaka <free@64studio.com>
+
+ About: Reference
+ - man 5 gshadow
+
+ About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+ About:
+
+ Each line in the gshadow files represents the additional shadow-defined
+ attributes for the corresponding group, as defined in the group file.
+
+*)
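+
+(* About: Usage Example
+   A hedged sketch of the mapping (the entry below is illustrative, not
+   from a real system):
+
+   (start code)
+   $ grep wheel /etc/gshadow
+   wheel:!::alice,bob
+
+   augtool> print /files/etc/gshadow/wheel
+   /files/etc/gshadow/wheel
+   /files/etc/gshadow/wheel/password = "!"
+   /files/etc/gshadow/wheel/member[1] = "alice"
+   /files/etc/gshadow/wheel/member[2] = "bob"
+   (end code)
+*)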
+
+module Gshadow =
+
+ autoload xfm
+
+(************************************************************************
+ * USEFUL PRIMITIVES
+ *************************************************************************)
+
+let eol = Util.eol
+let comment = Util.comment
+let empty = Util.empty
+
+let colon = Sep.colon
+let comma = Sep.comma
+
+let sto_to_spc = store Rx.space_in
+
+let word = Rx.word
+let password = /[A-Za-z0-9_.!*-]*/
+let integer = Rx.integer
+
+(************************************************************************
+ * Group: ENTRIES
+ *************************************************************************)
+
+(* View: member *)
+let member = [ label "member" . store word ]
+(* View: member_list
+ the member list is a comma separated list of
+ users allowed to chgrp to the group without
+ being prompted for the group's password *)
+let member_list = Build.opt_list member comma
+
+(* View: admin *)
+let admin = [ label "admin" . store word ]
+(* View: admin_list
+ the admin_list is a comma separated list of
+ users allowed to change the group's password
+ and the member_list *)
+let admin_list = Build.opt_list admin comma
+
+(* View: params *)
+let params = [ label "password" . store password . colon ]
+ . admin_list? . colon
+ . member_list?
+
+let entry = Build.key_value_line word colon params
+
+(************************************************************************
+ * LENS
+ *************************************************************************)
+
+let lns = (comment|empty|entry) *
+
+let filter
+ = incl "/etc/gshadow"
+ . Util.stdexcl
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: GtkBookmarks
+ Parses $HOME/.gtk-bookmarks
+
+Author: Raphael Pinson <raphael.pinson@camptocamp.com>
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Configuration files
+ This lens applies to $HOME/.gtk-bookmarks. See <filter>.
+
+About: Examples
+ The <Test_GtkBookmarks> file contains various examples and tests.
+*)
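+
+(* About: Usage Example
+   A hedged sketch (the path and bookmark are illustrative; the actual
+   file location depends on $HOME):
+
+   (start code)
+   $ cat ~/.gtk-bookmarks
+   file:///home/user/Documents Docs
+
+   augtool> print /files/home/user/.gtk-bookmarks
+   /files/home/user/.gtk-bookmarks
+   /files/home/user/.gtk-bookmarks/bookmark = "file:///home/user/Documents"
+   /files/home/user/.gtk-bookmarks/bookmark/label = "Docs"
+   (end code)
+*)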
+module GtkBookmarks =
+
+autoload xfm
+
+(* View: empty
+ Comments are not allowed, not even empty ones *)
+let empty = Util.empty_generic Rx.opt_space
+
+(* View: entry *)
+let entry = [ label "bookmark" . store Rx.no_spaces
+ . (Sep.space . [ label "label" . store Rx.space_in ])?
+ . Util.eol ]
+
+(* View: lns *)
+let lns = (empty | entry)*
+
+(* View: xfm *)
+let xfm = transform lns (incl (Sys.getenv("HOME") . "/.gtk-bookmarks"))
--- /dev/null
+(*
+Module: Host_Conf
+ Parses /etc/host.conf
+
+Author: Raphael Pinson <raphink@gmail.com>
+
+About: Reference
+ This lens tries to keep as close as possible to `man 5 host.conf`.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Configuration files
+ This lens applies to /etc/host.conf. See <filter>.
+*)
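+
+(* About: Usage Example
+   A hedged sketch (the settings below are illustrative):
+
+   (start code)
+   $ cat /etc/host.conf
+   order hosts, bind
+   multi on
+
+   augtool> print /files/etc/host.conf
+   /files/etc/host.conf
+   /files/etc/host.conf/order
+   /files/etc/host.conf/order/1 = "hosts"
+   /files/etc/host.conf/order/2 = "bind"
+   /files/etc/host.conf/multi = "on"
+   (end code)
+*)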
+
+module Host_Conf =
+
+autoload xfm
+
+(************************************************************************
+ * Group: ENTRY TYPES
+ *************************************************************************)
+
+(* View: sto_bool
+ Store a boolean value *)
+let sto_bool = store ("on"|"off")
+
+(* View: sto_bool_warn
+ Store a boolean or warn/nowarn value *)
+let sto_bool_warn = store ("on"|"off"|"warn"|"nowarn")
+
+(* View: bool
+ A boolean switch *)
+let bool (kw:regexp) = Build.key_value_line kw Sep.space sto_bool
+
+(* View: bool_warn
+ A boolean switch with extended values *)
+let bool_warn (kw:regexp) = Build.key_value_line kw Sep.space sto_bool_warn
+
+(* View: list
+ A list of items *)
+let list (kw:regexp) (elem:string) =
+ let list_elems = Build.opt_list [seq elem . store Rx.word] (Sep.comma . Sep.opt_space) in
+ Build.key_value_line kw Sep.space list_elems
+
+(* View: trim *)
+let trim =
+ let trim_list = Build.opt_list [seq "trim" . store Rx.word] (del /[:;,]/ ":") in
+ Build.key_value_line "trim" Sep.space trim_list
+
+(* View: entry *)
+let entry = bool ("multi"|"nospoof"|"spoofalert"|"reorder")
+ | bool_warn "spoof"
+ | list "order" "order"
+ | trim
+
+(************************************************************************
+ * Group: LENS AND FILTER
+ *************************************************************************)
+
+(* View: lns *)
+let lns = ( Util.empty | Util.comment | entry )*
+
+(* View: filter *)
+let filter = incl "/etc/host.conf"
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Hostname
+ Parses /etc/hostname and /etc/mailname
+
+Author: Raphael Pinson <raphink@gmail.com>
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+*)
+
+
+module Hostname =
+autoload xfm
+
+(* View: lns *)
+let lns = [ label "hostname" . store Rx.word . Util.eol ] | Util.empty
+
+(* View: filter *)
+let filter = incl "/etc/hostname"
+ . incl "/etc/mailname"
+
+let xfm = transform lns filter
--- /dev/null
+(* Parsing /etc/hosts *)
+
+module Hosts =
+ autoload xfm
+
+ let word = /[^# \n\t]+/
+ let record = [ seq "host" . Util.indent .
+ [ label "ipaddr" . store word ] . Sep.tab .
+ [ label "canonical" . store word ] .
+ [ label "alias" . Sep.space . store word ]*
+ . Util.comment_or_eol ]
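+
+ (* A hedged example of the mapping (the entry below is illustrative);
+    each host line becomes a numbered record:
+
+    (start code)
+    $ grep router /etc/hosts
+    192.168.0.1	router.example.com router
+
+    augtool> print /files/etc/hosts/1
+    /files/etc/hosts/1
+    /files/etc/hosts/1/ipaddr = "192.168.0.1"
+    /files/etc/hosts/1/canonical = "router.example.com"
+    /files/etc/hosts/1/alias = "router"
+    (end code) *)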
+
+ let lns = ( Util.empty | Util.comment | record ) *
+
+ let xfm = transform lns (incl "/etc/hosts")
--- /dev/null
+(*
+Module: Hosts_Access
+ Parses /etc/hosts.{allow,deny}
+
+Author: Raphael Pinson <raphink@gmail.com>
+
+About: Reference
+ This lens tries to keep as close as possible to `man 5 hosts_access` and `man 5 hosts_options`.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Configuration files
+ This lens applies to /etc/hosts.{allow,deny}. See <filter>.
+*)
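+
+(* About: Usage Example
+   A hedged sketch (the rule below is illustrative):
+
+   (start code)
+   $ cat /etc/hosts.allow
+   sshd : 192.168.0. EXCEPT 192.168.0.1
+
+   augtool> print /files/etc/hosts.allow/1
+   /files/etc/hosts.allow/1
+   /files/etc/hosts.allow/1/process = "sshd"
+   /files/etc/hosts.allow/1/client = "192.168.0."
+   /files/etc/hosts.allow/1/except
+   /files/etc/hosts.allow/1/except/client = "192.168.0.1"
+   (end code)
+*)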
+
+module Hosts_Access =
+
+autoload xfm
+
+(************************************************************************
+ * Group: USEFUL PRIMITIVES
+ *************************************************************************)
+
+(* View: colon *)
+let colon = del /[ \t]*(\\\\[ \t]*\n[ \t]+)?:[ \t]*(\\\\[ \t]*\n[ \t]+)?/ ": "
+
+(* Variable: comma_sep *)
+let comma_sep = /([ \t]|(\\\\\n))*,([ \t]|(\\\\\n))*/
+
+(* Variable: ws_sep *)
+let ws_sep = / +/
+
+(* View: list_sep *)
+let list_sep = del ( comma_sep | ws_sep ) ", "
+
+(* View: list_item *)
+let list_item = store ( Rx.word - /EXCEPT/i )
+
+(* View: client_host_item
+ Allows @ for netgroups, supports [ipv6] syntax *)
+let client_host_item =
+ let client_hostname_rx = /[A-Za-z0-9_.@?*-][A-Za-z0-9_.?*-]*/ in
+ let client_ipv6_rx = "[" . /[A-Za-z0-9:?*%]+/ . "]" in
+ let client_host_rx = client_hostname_rx | client_ipv6_rx in
+ let netmask = [ Util.del_str "/" . label "netmask" . store Rx.word ] in
+ store ( client_host_rx - /EXCEPT/i ) . netmask?
+
+(* View: client_file_item *)
+let client_file_item =
+ let client_file_rx = /\/[^ \t\n,:]+/ in
+ store ( client_file_rx - /EXCEPT/i )
+
+(* Variable: option_kw
+ Since either an option or a shell command can be given, use an explicit list
+ of known options to avoid misinterpreting a command as an option *)
+let option_kw = "severity"
+ | "spawn"
+ | "twist"
+ | "keepalive"
+ | "linger"
+ | "rfc931"
+ | "banners"
+ | "nice"
+ | "setenv"
+ | "umask"
+ | "user"
+ | /allow/i
+ | /deny/i
+
+(* Variable: shell_command_rx *)
+let shell_command_rx = /[^ \t\n:][^\n]*[^ \t\n]|[^ \t\n:\\\\]/
+ - ( option_kw . /.*/ )
+
+(* View: sto_to_colon
+ Allows escaped colon sequences *)
+let sto_to_colon = store /[^ \t\n:=][^\n:]*((\\\\:|\\\\[ \t]*\n[ \t]+)[^\n:]*)*[^ \\\t\n:]|[^ \t\n:\\\\]/
+
+(* View: except
+ * The except operator makes it possible to write very compact rules.
+ *)
+let except (lns:lens) = [ label "except" . Sep.space
+ . del /except/i "EXCEPT"
+ . Sep.space . lns ]
+
+(************************************************************************
+ * Group: ENTRY TYPES
+ *************************************************************************)
+
+(* View: daemon *)
+let daemon =
+ let host = [ label "host"
+ . Util.del_str "@"
+ . list_item ] in
+ [ label "process"
+ . list_item
+ . host? ]
+
+(* View: daemon_list
+ A list of <daemon>s *)
+let daemon_list = Build.opt_list daemon list_sep
+
+(* View: client *)
+let client =
+ let user = [ label "user"
+ . list_item
+ . Util.del_str "@" ] in
+ [ label "client"
+ . user?
+ . client_host_item ]
+
+(* View: client_file *)
+let client_file = [ label "file" . client_file_item ]
+
+(* View: client_list
+ A list of <client>s *)
+let client_list = Build.opt_list ( client | client_file ) list_sep
+
+(* View: option
+ Optional extensions defined in hosts_options(5) *)
+let option = [ key option_kw
+ . ( del /([ \t]*=[ \t]*|[ \t]+)/ " " . sto_to_colon )? ]
+
+(* View: shell_command *)
+let shell_command = [ label "shell_command"
+ . store shell_command_rx ]
+
+(* View: entry *)
+let entry = [ seq "line"
+ . daemon_list
+ . (except daemon_list)?
+ . colon
+ . client_list
+ . (except client_list)?
+ . ( (colon . option)+ | (colon . shell_command)? )
+ . Util.eol ]
+
+(************************************************************************
+ * Group: LENS AND FILTER
+ *************************************************************************)
+
+(* View: lns *)
+let lns = (Util.empty | Util.comment | entry)*
+
+(* View: filter *)
+let filter = incl "/etc/hosts.allow"
+ . incl "/etc/hosts.deny"
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Htpasswd
+ Parses htpasswd and rsyncd.secrets files
+
+Author: Marc Fournier <marc.fournier@camptocamp.com>
+
+About: Reference
+ This lens is based on examples in htpasswd(1) and rsyncd.conf(5)
+
+About: Usage Example
+(start code)
+ augtool> set /augeas/load/Htpasswd/lens "Htpasswd.lns"
+ augtool> set /augeas/load/Htpasswd/incl "/var/www/.htpasswd"
+ augtool> load
+
+ augtool> get /files/var/www/.htpasswd/foo
+ /files/var/www/.htpasswd/foo = $apr1$e2WS6ARQ$lYhqy9CLmwlxR/07TLR46.
+
+ augtool> set /files/var/www/.htpasswd/foo bar
+ augtool> save
+ Saved 1 file(s)
+
+ $ cat /var/www/.htpasswd
+ foo:bar
+(end code)
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+*)
+
+module Htpasswd =
+autoload xfm
+
+let entry = Build.key_value_line Rx.word Sep.colon (store Rx.space_in)
+let lns = (Util.empty | Util.comment | entry)*
+
+let filter = incl "/etc/httpd/htpasswd"
+ . incl "/etc/apache2/htpasswd"
+ . incl "/etc/rsyncd.secrets"
+
+let xfm = transform lns filter
--- /dev/null
+(* Apache HTTPD lens for Augeas
+
+Authors:
+ David Lutterkort <lutter@redhat.com>
+ Francis Giraldeau <francis.giraldeau@usherbrooke.ca>
+ Raphael Pinson <raphink@gmail.com>
+
+About: Reference
+ Online Apache configuration manual: http://httpd.apache.org/docs/trunk/
+
+About: License
+ This file is licensed under the LGPL v2+.
+
+About: Lens Usage
+ Sample usage of this lens in augtool
+
+ Apache configuration is represented by two main structures, nested sections
+ and directives. Section names are used as node labels, while directive
+ names are kept as node values. Sections and directives can have positional
+ arguments, stored in the values of "arg" nodes. Arguments of sections must
+ be the first children of the section node.
+
+ This lens doesn't support automatic string quoting; strings containing
+ spaces must therefore be quoted.
+
+ Create a new VirtualHost section with one directive:
+ > clear /files/etc/apache2/sites-available/foo/VirtualHost
+ > set /files/etc/apache2/sites-available/foo/VirtualHost/arg "172.16.0.1:80"
+ > set /files/etc/apache2/sites-available/foo/VirtualHost/directive "ServerAdmin"
+ > set /files/etc/apache2/sites-available/foo/VirtualHost/*[self::directive="ServerAdmin"]/arg "admin@example.com"
+
+About: Configuration files
+ This lens applies to files in /etc/httpd and /etc/apache2. See <filter>.
+
+*)
+
+
+module Httpd =
+
+autoload xfm
+
+(******************************************************************
+ * Utilities lens
+ *****************************************************************)
+let dels (s:string) = del s s
+
+(* The continuation sequence that indicates that we should consider the
+ * next line part of the current line *)
+let cont = /\\\\\r?\n/
+
+(* Whitespace within a line: space, tab, and the continuation sequence *)
+let ws = /[ \t]/ | cont
+
+(* Any possible character - '.' does not match \n *)
+let any = /(.|\n)/
+
+(* Any character preceded by a backslash *)
+let esc_any = /\\\\(.|\n)/
+
+(* Newline sequence - both for Unix and DOS newlines *)
+let nl = /\r?\n/
+
+(* Whitespace at the end of a line *)
+let eol = del (ws* . nl) "\n"
+
+(* deal with continuation lines *)
+let sep_spc = del ws+ " "
+let sep_osp = del ws* ""
+let sep_eq = del (ws* . "=" . ws*) "="
+
+let nmtoken = /[a-zA-Z:_][a-zA-Z0-9:_.-]*/
+let word = /[a-z][a-z0-9._-]*/i
+
+(* A complete line that is either just whitespace or a comment that only
+ * contains whitespace *)
+let empty = [ del (ws* . /#?/ . ws* . nl) "\n" ]
+
+let indent = Util.indent
+
+(* A comment that is not just whitespace. We define it in terms of the
+ * things that are not allowed as part of such a comment:
+ * 1) Starts with whitespace
+ * 2) Ends with whitespace, a backslash or \r
+ * 3) Unescaped newlines
+ *)
+let comment =
+ let comment_start = del (ws* . "#" . ws* ) "# " in
+ let unesc_eol = /[^\]?/ . nl in
+ let w = /[^\t\n\r \\]/ in
+ let r = /[\r\\]/ in
+ let s = /[\t\r ]/ in
+ (*
+ * we'd like to write
+ * let b = /\\\\/ in
+ * let t = /[\t\n\r ]/ in
+ * let x = b . (t? . (s|w)* ) in
+ * but the definition of b depends on commit 244c0edd in 1.9.0 and
+ * would make the lens unusable with versions before 1.9.0. So we write
+ * x out which works in older versions, too
+ *)
+ let x = /\\\\[\t\n\r ]?[^\n\\]*/ in
+ let line = ((r . s* . w|w|r) . (s|w)* . x*|(r.s* )?).w.(s*.w)* in
+ [ label "#comment" . comment_start . store line . eol ]
+
+(* borrowed from shellvars.aug *)
+let char_arg_sec = /([^\\ '"\t\r\n>]|[^ '"\t\r\n>]+[^\\ \t\r\n>])|\\\\"|\\\\'|\\\\ /
+let char_arg_wl = /([^\\ '"},\t\r\n]|[^ '"},\t\r\n]+[^\\ '"},\t\r\n])/
+
+let dquot =
+ let no_dquot = /[^"\\\r\n]/
+ in /"/ . (no_dquot|esc_any)* . /"/
+let dquot_msg =
+ let no_dquot = /([^ \t"\\\r\n]|[^"\\\r\n]+[^ \t"\\\r\n])/
+ in /"/ . (no_dquot|esc_any)* . no_dquot
+
+let squot =
+ let no_squot = /[^'\\\r\n]/
+ in /'/ . (no_squot|esc_any)* . /'/
+let comp = /[<>=]?=/
+
+(******************************************************************
+ * Attributes
+ *****************************************************************)
+
+(* The arguments for a directive come in two flavors: quoted with single or
+ * double quotes, or bare. Bare arguments may not start with a single or
+ * double quote; since we also treat "word lists" special, i.e. lists
+ * enclosed in curly braces, bare arguments may not start with those,
+ * either.
+ *
+ * Bare arguments may not contain unescaped spaces, but we allow escaping
+ * with '\\'. Quoted arguments can contain anything, though the quote must
+ * be escaped with '\\'.
+ *)
+let bare = /([^{"' \t\n\r]|\\\\.)([^ \t\n\r]|\\\\.)*[^ \t\n\r\\]|[^{"' \t\n\r\\]/
+
+let arg_quoted = [ label "arg" . store (dquot|squot) ]
+let arg_bare = [ label "arg" . store bare ]
+
+(* message argument starts with " but ends at EOL *)
+let arg_dir_msg = [ label "arg" . store dquot_msg ]
+let arg_wl = [ label "arg" . store (char_arg_wl+|dquot|squot) ]
+
+(* comma-separated wordlist as permitted in the SSLRequire directive *)
+let arg_wordlist =
+ let wl_start = dels "{" in
+ let wl_end = dels "}" in
+ let wl_sep = del /[ \t]*,[ \t]*/ ", "
+ in [ label "wordlist" . wl_start . arg_wl . (wl_sep . arg_wl)* . wl_end ]
+
+let argv (l:lens) = l . (sep_spc . l)*
+
+(* the arguments of a directive. We use this once we have parsed the name
+ * of the directive, and the space right after it. When dir_args is used,
+ * we also know that we have at least one argument. We need to be careful
+ * with the spacing between arguments: quoted arguments and word lists do
+ * not need to have space between them, but bare arguments do.
+ *
+ * Apache apparently is also happy if the last argument starts with a double
+ * quote, but has no corresponding closing double quote, which is what
+ * arg_dir_msg handles
+ *)
+let dir_args =
+ let arg_nospc = arg_quoted|arg_wordlist in
+ (arg_bare . sep_spc | arg_nospc . sep_osp)* . (arg_bare|arg_nospc|arg_dir_msg)
+
+let directive =
+ [ indent . label "directive" . store word . (sep_spc . dir_args)? . eol ]
+
+let arg_sec = [ label "arg" . store (char_arg_sec+|comp|dquot|squot) ]
+
+let section (body:lens) =
+ (* opt_eol includes empty lines *)
+ let opt_eol = del /([ \t]*#?[ \t]*\r?\n)*/ "\n" in
+ let inner = (sep_spc . argv arg_sec)? . sep_osp .
+ dels ">" . opt_eol . ((body|comment) . (body|empty|comment)*)? .
+ indent . dels "</" in
+ let kword = key (word - /perl/i) in
+ let dword = del (word - /perl/i) "a" in
+ [ indent . dels "<" . square kword inner dword . del />[ \t\n\r]*/ ">\n" ]
+
+let perl_section = [ indent . label "Perl" . del /<perl>/i "<Perl>"
+ . store /[^<]*/
+ . del /<\/perl>/i "</Perl>" . eol ]
+
+
+let rec content = section (content|directive)
+ | perl_section
+
+let lns = (content|directive|comment|empty)*
+
+let filter = (incl "/etc/apache2/apache2.conf") .
+ (incl "/etc/apache2/httpd.conf") .
+ (incl "/etc/apache2/ports.conf") .
+ (incl "/etc/apache2/conf.d/*") .
+ (incl "/etc/apache2/conf-available/*.conf") .
+ (incl "/etc/apache2/mods-available/*") .
+ (incl "/etc/apache2/sites-available/*") .
+ (incl "/etc/apache2/vhosts.d/*.conf") .
+ (incl "/etc/httpd/conf.d/*.conf") .
+ (incl "/etc/httpd/httpd.conf") .
+ (incl "/etc/httpd/conf/httpd.conf") .
+ (incl "/etc/httpd/conf.modules.d/*.conf") .
+ Util.stdexcl
+
+let xfm = transform lns filter
--- /dev/null
+(* inetd.conf lens definition for Augeas
+ Author: Matt Palmer <mpalmer@hezmatt.org>
+
+ Copyright (C) 2009 Matt Palmer, All Rights Reserved
+
+ This program is free software: you can redistribute it and/or modify it
+ under the terms of the GNU Lesser General Public License version 2.1 as
+ published by the Free Software Foundation.
+
+ This program is distributed in the hope that it will be useful, but
+ WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser
+ General Public License for more details.
+
+ You should have received a copy of the GNU Lesser General Public License
+ along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+This lens parses /etc/inetd.conf. The current format is based on the
+syntax documented in the inetd manpage shipped with Debian's openbsd-inetd
+package version 0.20080125-2. Apologies if your inetd.conf doesn't follow
+the same format.
+
+Each top-level entry is a "service" node whose value is the service name
+(the first column in the service definition, which is the name or number of
+the port that the service should listen on). The attributes for the service
+all sit under that node. In regular Augeas style, the order of the
+attributes matters, and attempts to set things in a different order will
+fail miserably. The defined attribute names (and the order in which they
+must appear) are as follows (with mandatory attributes indicated by [*]):
+
+address -- a sequence of IP addresses or hostnames on which this service
+ should listen.
+
+socket[*] -- The type of the socket that will be created (either stream or
+ dgram, although the lens doesn't constrain the possibilities here)
+
+protocol[*] -- The socket protocol. I believe that the usual possibilities
+ are "tcp", "udp", or "unix", but no restriction is made on what you
+ can actually put here.
+
+sndbuf -- Specify a non-default size for the send buffer of the connection.
+
+rcvbuf -- Specify a non-default size for the receive buffer of the connection.
+
+wait[*] -- Whether to wait for new connections ("wait"), or just terminate
+ immediately ("nowait").
+
+max -- The maximum number of times that a service can be invoked in one minute.
+
+user[*] -- The user to run the service as.
+
+group -- A group to set the running service to, rather than the primary
+ group of the previously specified user.
+
+command[*] -- What program to run.
+
+arguments -- A sequence of arguments to pass to the command.
+
+In addition to this straightforward tree, inetd has the ability to set
+"default" listen addresses; this is a little-used feature which nonetheless
+comes in handy sometimes. The key for entries of this type is "address",
+and the subtree should be a sequence of addresses. "*" can always be used
+to return to the default behaviour of listening on INADDR_ANY.
+
+*)
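+
+(* A hedged example of the resulting tree (the service line below is
+   illustrative):
+
+   (start code)
+   $ grep daytime /etc/inetd.conf
+   daytime stream tcp nowait root internal
+
+   augtool> print /files/etc/inetd.conf/service
+   /files/etc/inetd.conf/service = "daytime"
+   /files/etc/inetd.conf/service/socket = "stream"
+   /files/etc/inetd.conf/service/protocol = "tcp"
+   /files/etc/inetd.conf/service/wait = "nowait"
+   /files/etc/inetd.conf/service/user = "root"
+   /files/etc/inetd.conf/service/command = "internal"
+   (end code)
+*)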
+
+module Inetd =
+ autoload xfm
+
+ (***************************
+ * PRIMITIVES
+ ***************************)
+
+ (* Whitespace separators *)
+ let wsp = del /[ \t]+/ " "
+ let sep = del /[ \t]+/ " "
+ let owsp(t:string) = del /[ \t]*/ t
+
+ (* It's the end of the line as we know it... doo, doo, dooooo *)
+ let eol = Util.eol
+
+ (* In the beginning, the earth was without form, and void *)
+ let empty = Util.empty
+
+ let comment = Util.comment
+
+ let del_str = Util.del_str
+
+ let address = [ seq "addrseq" . store /([a-zA-Z0-9.-]+|\[[A-Za-z0-9:?*%]+\]|\*)/ ]
+ let address_list = ( counter "addrseq" . (address . del_str ",")* . address )
+
+ let argument = [ seq "argseq" . store /[^ \t\n]+/ ]
+ let argument_list = ( counter "argseq" . [ label "arguments" . (argument . wsp)* . argument ] )
+
+ (***************************
+ * ELEMENTS
+ ***************************)
+
+ let service (l:string) = ( label l . [label "address" . address_list . del_str ":" ]? . store /[^ \t\n\/:#]+/ )
+
+ let socket = [ label "socket" . store /[^ \t\n#]+/ ]
+
+ let protocol = ( [ label "protocol" . store /[^ \t\n,#]+/ ]
+ . [ del_str "," . key /sndbuf/ . del_str "=" . store /[^ \t\n,]+/ ]?
+ . [ del_str "," . key /rcvbuf/ . del_str "=" . store /[^ \t\n,]+/ ]?
+ )
+
+ let wait = ( [ label "wait" . store /(wait|nowait)/ ]
+ . [ del_str "." . label "max" . store /[0-9]+/ ]?
+ )
+
+ let usergroup = ( [ label "user" . store /[^ \t\n:.]+/ ]
+ . [ del /[:.]/ ":" . label "group" . store /[^ \t\n:.]+/ ]?
+ )
+
+ let command = ( [ label "command" . store /[^ \t\n]+/ ]
+ . (wsp . argument_list)?
+ )
+
+ (***************************
+ * SERVICE LINES
+ ***************************)
+
+ let service_line = [ service "service"
+ . sep
+ . socket
+ . sep
+ . protocol
+ . sep
+ . wait
+ . sep
+ . usergroup
+ . sep
+ . command
+ . eol
+ ]
+
+
+ (***************************
+ * RPC LINES
+ ***************************)
+
+ let rpc_service = service "rpc_service" . Util.del_str "/"
+ . [ label "version" . store Rx.integer ]
+
+ let rpc_endpoint = [ label "endpoint-type" . store Rx.word ]
+ let rpc_protocol = Util.del_str "rpc/"
+ . (Build.opt_list
+ [label "protocol" . store /[^ \t\n,#]+/ ]
+ Sep.comma)
+
+ let rpc_line = [ rpc_service
+ . sep
+ . rpc_endpoint
+ . sep
+ . rpc_protocol
+ . sep
+ . wait
+ . sep
+ . usergroup
+ . sep
+ . command
+ . eol
+ ]
+
+
+ (***************************
+ * DEFAULT LISTEN ADDRESSES
+ ***************************)
+
+ let default_listen_address = [ label "address"
+ . address_list
+ . del_str ":"
+ . eol
+ ]
+
+ (***********************
+ * LENS / FILTER
+ ***********************)
+
+ let lns = (comment|empty|service_line|rpc_line|default_listen_address)*
+
+ let filter = incl "/etc/inetd.conf"
+
+ let xfm = transform lns filter
--- /dev/null
+(*
+Module: IniFile
+ Generic module to create INI files lenses
+
+Author: Raphael Pinson <raphink@gmail.com>
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: TODO
+ Things to add in the future
+ - Support double quotes in value
+
+About: Lens usage
+ This lens provides generic primitives for constructing INI File lenses.
+ See <Puppet>, <PHP>, <MySQL> or <Dput> for examples of real life lenses using it.
+
+About: Examples
+ The <Test_IniFile> file contains various examples and tests.
+*)
+
+module IniFile =
+
+
+(************************************************************************
+ * Group: USEFUL PRIMITIVES
+ *************************************************************************)
+
+(* Group: Internal primitives *)
+
+(*
+Variable: eol
+ End of line, inherited from <Util.doseol>
+*)
+let eol = Util.doseol
+
+
+(* Group: Separators *)
+
+
+
+(*
+Variable: sep
+ Generic separator
+
+ Parameters:
+ pat:regexp - the pattern to delete
+ default:string - the default string to use
+*)
+let sep (pat:regexp) (default:string)
+ = Sep.opt_space . del pat default
+
+(*
+Variable: sep_noindent
+ Generic separator, no indentation
+
+ Parameters:
+ pat:regexp - the pattern to delete
+ default:string - the default string to use
+*)
+let sep_noindent (pat:regexp) (default:string)
+ = del pat default
+
+(*
+Variable: sep_re
+ The default regexp for a separator
+*)
+
+let sep_re = /[=:]/
+
+(*
+Variable: sep_default
+ The default separator value
+*)
+let sep_default = "="
+
+
+(* Group: Stores *)
+
+
+(*
+Variable: sto_to_eol
+ Store until end of line
+*)
+let sto_to_eol = Sep.opt_space . store Rx.space_in
+
+(*
+Variable: to_comment_re
+ Regex until comment
+*)
+let to_comment_re = /[^";# \t\n][^";#\n]*[^";# \t\n]|[^";# \t\n]/
+
+(*
+Variable: sto_to_comment
+ Store until comment
+*)
+let sto_to_comment = Sep.opt_space . store to_comment_re
+
+(*
+Variable: sto_multiline
+ Store multiline values
+*)
+let sto_multiline = Sep.opt_space
+ . store (to_comment_re
+ . (/[ \t]*\n/ . Rx.space . to_comment_re)*)
+
+(*
+Variable: sto_multiline_nocomment
+ Store multiline values without an end-of-line comment
+*)
+let sto_multiline_nocomment = Sep.opt_space
+ . store (Rx.space_in . (/[ \t]*\n/ . Rx.space . Rx.space_in)*)
+
+
+(* Group: Define comment and defaults *)
+
+(*
+View: comment_noindent
+ Map comments into "#comment" nodes,
+ no indentation allowed
+
+ Parameters:
+ pat:regexp - pattern to delete before commented data
+ default:string - default pattern before commented data
+
+ Sample Usage:
+ (start code)
+ let comment = IniFile.comment_noindent "#" "#"
+ let comment = IniFile.comment_noindent IniFile.comment_re IniFile.comment_default
+ (end code)
+*)
+let comment_noindent (pat:regexp) (default:string) =
+ Util.comment_generic_seteol (pat . Rx.opt_space) default eol
+
+(*
+View: comment
+ Map comments into "#comment" nodes
+
+ Parameters:
+ pat:regexp - pattern to delete before commented data
+ default:string - default pattern before commented data
+
+ Sample Usage:
+ (start code)
+ let comment = IniFile.comment "#" "#"
+ let comment = IniFile.comment IniFile.comment_re IniFile.comment_default
+ (end code)
+*)
+let comment (pat:regexp) (default:string) =
+ Util.comment_generic_seteol (Rx.opt_space . pat . Rx.opt_space) default eol
+
+(*
+Variable: comment_re
+ Default regexp for <comment> pattern
+*)
+
+let comment_re = /[;#]/
+
+(*
+Variable: comment_default
+ Default value for <comment> pattern
+*)
+let comment_default = ";"
+
+(*
+View: empty_generic
+ Empty line, including empty comments
+
+ Parameters:
+ indent:regexp - the indentation regexp
+ comment_re:regexp - the comment separator regexp
+*)
+let empty_generic (indent:regexp) (comment_re:regexp) =
+ Util.empty_generic_dos (indent . comment_re? . Rx.opt_space)
+
+(*
+View: empty
+ Empty line
+*)
+let empty = empty_generic Rx.opt_space comment_re
+
+(*
+View: empty_noindent
+ Empty line, without indentation
+*)
+let empty_noindent = empty_generic "" comment_re
+
+
+(************************************************************************
+ * Group: ENTRY
+ *************************************************************************)
+
+(* Group: entry includes comments *)
+
+(*
+View: entry_generic_nocomment
+ A very generic INI File entry, not including comments.
+ It allows setting the key lens (to set indentation
+ or subnodes linked to the key) as well as the comment
+ separator regexp, used to tune the store regexps.
+
+ Parameters:
+ kw:lens - lens to match the key, including optional indentation
+ sep:lens - lens to use as key/value separator
+ comment_re:regexp - comment separator regexp
+ comment:lens - lens to use as comment
+
+ Sample Usage:
+ > let entry = IniFile.entry_generic_nocomment (key "setting") sep IniFile.comment_re comment
+*)
+let entry_generic_nocomment (kw:lens) (sep:lens)
+ (comment_re:regexp) (comment:lens) =
+ let bare_re_noquot = (/[^" \t\r\n]/ - comment_re)
+ in let bare_re = (/[^\r\n]/ - comment_re)+
+ in let no_quot = /[^"\r\n]*/
+ in let bare = Quote.do_dquote_opt_nil (store (bare_re_noquot . (bare_re* . bare_re_noquot)?))
+ in let quoted = Quote.do_dquote (store (no_quot . comment_re+ . no_quot))
+ in [ kw . sep . (Sep.opt_space . bare)? . (comment|eol) ]
+ | [ kw . sep . Sep.opt_space . quoted . (comment|eol) ]
+
+(*
+View: entry_generic
+ A very generic INI File entry.
+ It allows setting the key lens (to set indentation
+ or subnodes linked to the key) as well as the comment
+ separator regexp, used to tune the store regexps.
+
+ Parameters:
+ kw:lens - lens to match the key, including optional indentation
+ sep:lens - lens to use as key/value separator
+ comment_re:regexp - comment separator regexp
+ comment:lens - lens to use as comment
+
+ Sample Usage:
+ > let entry = IniFile.entry_generic (key "setting") sep IniFile.comment_re comment
+*)
+let entry_generic (kw:lens) (sep:lens) (comment_re:regexp) (comment:lens) =
+ entry_generic_nocomment kw sep comment_re comment | comment
+
+(*
+View: entry
+ Generic INI File entry
+
+ Parameters:
+ kw:regexp - keyword regexp for the label
+ sep:lens - lens to use as key/value separator
+ comment:lens - lens to use as comment
+
+ Sample Usage:
+ > let entry = IniFile.entry setting sep comment
+*)
+let entry (kw:regexp) (sep:lens) (comment:lens) =
+ entry_generic (key kw) sep comment_re comment
+
+(*
+View: indented_entry
+ Generic INI File entry that might be indented with an arbitrary
+ amount of whitespace
+
+ Parameters:
+ kw:regexp - keyword regexp for the label
+ sep:lens - lens to use as key/value separator
+ comment:lens - lens to use as comment
+
+ Sample Usage:
+ > let entry = IniFile.indented_entry setting sep comment
+*)
+let indented_entry (kw:regexp) (sep:lens) (comment:lens) =
+ entry_generic (Util.indent . key kw) sep comment_re comment
+
+(*
+View: entry_multiline_generic
+ A very generic multiline INI File entry.
+ It allows setting the key lens (to set indentation
+ or subnodes linked to the key) as well as the comment
+ separator regexp, used to tune the store regexps.
+
+ Parameters:
+ kw:lens - lens to match the key, including optional indentation
+ sep:lens - lens to use as key/value separator
+ comment_re:regexp - comment separator regexp
+ comment:lens - lens to use as comment
+ eol:lens - lens for end of line
+
+ Sample Usage:
+ > let entry = IniFile.entry_multiline_generic (key "setting") sep IniFile.comment_re comment comment_or_eol
+*)
+let entry_multiline_generic (kw:lens) (sep:lens) (comment_re:regexp)
+ (comment:lens) (eol:lens) =
+ let newline = /\r?\n[ \t]+/
+ in let bare =
+ let word_re_noquot = (/[^" \t\r\n]/ - comment_re)+
+ in let word_re = (/[^\r\n]/ - comment_re)+
+ in let base_re = (word_re_noquot . (word_re* . word_re_noquot)?)
+ in let sto_re = base_re . (newline . base_re)*
+ | (newline . base_re)+
+ in Quote.do_dquote_opt_nil (store sto_re)
+ in let quoted =
+ let no_quot = /[^"\r\n]*/
+ in let base_re = (no_quot . comment_re+ . no_quot)
+ in let sto_re = base_re . (newline . base_re)*
+ | (newline . base_re)+
+ in Quote.do_dquote (store sto_re)
+ in [ kw . sep . (Sep.opt_space . bare)? . eol ]
+ | [ kw . sep . Sep.opt_space . quoted . eol ]
+ | comment
+
+
+(*
+View: entry_multiline
+ Generic multiline INI File entry
+
+ Parameters:
+ kw:regexp - keyword regexp for the label
+ sep:lens - lens to use as key/value separator
+ comment:lens - lens to use as comment
+*)
+let entry_multiline (kw:regexp) (sep:lens) (comment:lens) =
+ entry_multiline_generic (key kw) sep comment_re comment (comment|eol)
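+
+(* A hedged illustration (not part of the original file): with
+   > let entry = IniFile.entry_multiline "setting" sep comment
+   a value continued on a whitespace-indented line, such as
+   >   setting = val1
+   >       val2
+   is folded into a single "setting" node whose stored value spans
+   both lines, including the newline and indentation. *)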
+
+(*
+View: entry_multiline_nocomment
+ Generic multiline INI File entry without an end-of-line comment
+
+ Parameters:
+ kw:regexp - keyword regexp for the label
+ sep:lens - lens to use as key/value separator
+ comment:lens - lens to use as comment
+*)
+let entry_multiline_nocomment (kw:regexp) (sep:lens) (comment:lens) =
+ entry_multiline_generic (key kw) sep comment_re comment eol
+
+(*
+View: entry_list
+ Generic INI File list entry
+
+ Parameters:
+ kw:regexp - keyword regexp for the label
+ sep:lens - lens to use as key/value separator
+ sto:regexp - store regexp for the values
+ list_sep:lens - lens to use as list separator
+ comment:lens - lens to use as comment
+*)
+let entry_list (kw:regexp) (sep:lens) (sto:regexp) (list_sep:lens) (comment:lens) =
+ let list = counter "elem"
+ . Build.opt_list [ seq "elem" . store sto ] list_sep
+ in Build.key_value_line_comment kw sep (Sep.opt_space . list) comment
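+
+(* A hedged example (assumed lens names, not from the original file): with
+   > let entry = IniFile.entry_list "allow" sep Rx.word Sep.comma comment
+   the line "allow = a,b" maps each list element to a numbered
+   sequence node:
+   > { "allow" { "1" = "a" } { "2" = "b" } } *)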
+
+(*
+View: entry_list_nocomment
+ Generic INI File list entry without an end-of-line comment
+
+ Parameters:
+ kw:regexp - keyword regexp for the label
+ sep:lens - lens to use as key/value separator
+ sto:regexp - store regexp for the values
+ list_sep:lens - lens to use as list separator
+*)
+let entry_list_nocomment (kw:regexp) (sep:lens) (sto:regexp) (list_sep:lens) =
+ let list = counter "elem"
+ . Build.opt_list [ seq "elem" . store sto ] list_sep
+ in Build.key_value_line kw sep (Sep.opt_space . list)
+
+(*
+Variable: entry_re
+ Default regexp for <entry> keyword
+*)
+let entry_re = ( /[A-Za-z][A-Za-z0-9._-]*/ )
+
+
+(************************************************************************
+ * Group: RECORD
+ *************************************************************************)
+
+(* Group: Title definition *)
+
+(*
+View: title
+ Title for <record>. This maps the title of a record as a node in the abstract tree.
+
+ Parameters:
+ kw:regexp - keyword regexp for the label
+
+ Sample Usage:
+ > let title = IniFile.title IniFile.record_re
+*)
+let title (kw:regexp)
+ = Util.del_str "[" . key kw
+ . Util.del_str "]". eol
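+
+(* For instance, with the default <record_re>, the line "[section1]"
+   maps to a node labelled "section1", under which the record's
+   entries are stored. *)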
+
+(*
+View: indented_title
+ Title for <record>. This maps the title of a record as a node in the abstract tree. The title may be indented with arbitrary amounts of whitespace.
+
+ Parameters:
+ kw:regexp - keyword regexp for the label
+
+ Sample Usage:
+ > let title = IniFile.indented_title IniFile.record_re
+*)
+let indented_title (kw:regexp)
+ = Util.indent . title kw
+
+(*
+View: title_label
+ Title for <record>. This maps the title of a record as a value in the abstract tree.
+
+ Parameters:
+ name:string - name for the title label
+ kw:regexp - keyword regexp for the label
+
+ Sample Usage:
+ > let title = IniFile.title_label "target" IniFile.record_label_re
+*)
+let title_label (name:string) (kw:regexp)
+ = label name
+ . Util.del_str "[" . store kw
+ . Util.del_str "]". eol
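+
+(* For instance, "title_label "target" record_label_re" maps the line
+   "[section1]" to a node labelled "target" with value "section1",
+   i.e. { "target" = "section1" }. *)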
+
+(*
+View: indented_title_label
+ Title for <record>. This maps the title of a record as a value in the abstract tree. The title may be indented with arbitrary amounts of whitespace.
+
+ Parameters:
+ name:string - name for the title label
+ kw:regexp - keyword regexp for the label
+
+ Sample Usage:
+ > let title = IniFile.indented_title_label "target" IniFile.record_label_re
+*)
+let indented_title_label (name:string) (kw:regexp)
+ = Util.indent . title_label name kw
+
+
+(*
+Variable: record_re
+ Default regexp for <title> keyword pattern
+*)
+let record_re = ( /[^]\r\n\/]+/ - /#comment/ )
+
+(*
+Variable: record_label_re
+ Default regexp for <title_label> keyword pattern
+*)
+let record_label_re = /[^]\r\n]+/
+
+
+(* Group: Record definition *)
+
+(*
+View: record_noempty
+ INI File Record with no empty lines allowed.
+
+ Parameters:
+ title:lens - lens to use for title. Use either <title> or <title_label>.
+ entry:lens - lens to use for entries in the record. See <entry>.
+*)
+let record_noempty (title:lens) (entry:lens)
+ = [ title
+ . entry* ]
+
+(*
+View: record
+ Generic INI File record
+
+ Parameters:
+ title:lens - lens to use for title. Use either <title> or <title_label>.
+ entry:lens - lens to use for entries in the record. See <entry>.
+
+ Sample Usage:
+ > let record = IniFile.record title entry
+*)
+let record (title:lens) (entry:lens)
+ = record_noempty title ( entry | empty )
+
+
+(************************************************************************
+ * Group: GENERIC LENSES
+ *************************************************************************)
+
+
+(*
+
+Group: Lens definition
+
+View: lns_noempty
+ Generic INI File lens with no empty lines
+
+ Parameters:
+ record:lens - record lens to use. See <record_noempty>.
+ comment:lens - comment lens to use. See <comment>.
+
+ Sample Usage:
+ > let lns = IniFile.lns_noempty record comment
+*)
+let lns_noempty (record:lens) (comment:lens)
+ = comment* . record*
+
+(*
+View: lns
+ Generic INI File lens
+
+ Parameters:
+ record:lens - record lens to use. See <record>.
+ comment:lens - comment lens to use. See <comment>.
+
+ Sample Usage:
+ > let lns = IniFile.lns record comment
+*)
+let lns (record:lens) (comment:lens)
+ = lns_noempty record (comment|empty)
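+
+(* A hedged example (not from the original file): with suitable <record>
+   and <comment> lenses, an input such as
+   >   ; a comment
+   >   [section1]
+   >   key1 = value1
+   would typically map to
+   > { "#comment" = "a comment" }
+   > { "section1" { "key1" = "value1" } }
+*)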
+
+
+(************************************************************************
+ * Group: READY-TO-USE LENSES
+ *************************************************************************)
+
+let record_anon (entry:lens) = [ label "section" . value ".anon" . ( entry | empty )+ ]
+
+(*
+View: lns_loose
+ A loose, ready-to-use lens, featuring:
+ - sections as values (to allow '/' in names)
+ - support for empty lines and comments
+ - [#;] as comment markers, defaulting to ";"
+ - a ".anon" section for entries preceding any section title
+ - no support for multiline values
+ - optionally indented titles
+ - optionally indented entries
+*)
+let lns_loose =
+ let l_comment = comment comment_re comment_default
+ in let l_sep = sep sep_re sep_default
+ in let l_entry = indented_entry entry_re l_sep l_comment
+ in let l_title = indented_title_label "section" (record_label_re - ".anon")
+ in let l_record = record l_title l_entry
+ in (record_anon l_entry)? . l_record*
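+
+(* A hedged sketch of the resulting tree (not from the original file):
+   entries appearing before the first section title are collected
+   under an anonymous section, e.g.
+   >   key0 = value0
+   >   [section1]
+   >   key1 = value1
+   maps to
+   > { "section" = ".anon" { "key0" = "value0" } }
+   > { "section" = "section1" { "key1" = "value1" } }
+*)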
+
+(*
+View: lns_loose_multiline
+ A loose, ready-to-use lens, featuring:
+ - sections as values (to allow '/' in names)
+ - support for empty lines and comments
+ - [#;] as comment markers, defaulting to ";"
+ - a ".anon" section for entries preceding any section title
+ - support for multiline values
+*)
+let lns_loose_multiline =
+ let l_comment = comment comment_re comment_default
+ in let l_sep = sep sep_re sep_default
+ in let l_entry = entry_multiline entry_re l_sep l_comment
+ in let l_title = title_label "section" (record_label_re - ".anon")
+ in let l_record = record l_title l_entry
+ in (record_anon l_entry)? . l_record*
+
--- /dev/null
+(* Parsing /etc/inittab *)
+module Inittab =
+ autoload xfm
+
+ let sep = Util.del_str ":"
+ let eol = Util.del_str "\n"
+
+ let id = /[^\/#:\n]{1,4}/
+ let value = /[^#:\n]*/
+
+ let comment = Util.comment|Util.empty
+
+ let record =
+ let field (name:string) = [ label name . store value ] in
+ let process = [ label "process" . store /[^#\n]*/ ] in
+ let eolcomment =
+ [ label "#comment" . del /#[ \t]*/ "# "
+ . store /([^ \t\n].*[^ \t\n]|[^ \t\n]?)/ ] in
+ [ key id . sep .
+ field "runlevels" . sep .
+ field "action" . sep .
+ process . eolcomment? . eol ]
+
+ let lns = ( comment | record ) *
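+
+ (* A hedged example (not from the original tests): the line
+    >  1:2345:respawn:/sbin/mingetty tty1
+    would map to
+    > { "1" { "runlevels" = "2345" }
+    >       { "action" = "respawn" }
+    >       { "process" = "/sbin/mingetty tty1" } }
+ *)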
+
+ let xfm = transform lns (incl "/etc/inittab")
+
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+(*
+Module: Inputrc
+ Parses /etc/inputrc
+
+Author: Raphael Pinson <raphink@gmail.com>
+
+About: Reference
+ This lens tries to stay as close as possible to `man 3 readline`.
+
+About: License
+ This file is licenced under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Configuration files
+ This lens applies to /etc/inputrc. See <filter>.
+
+About: Examples
+ The <Test_Inputrc> file contains various examples and tests.
+*)
+
+module Inputrc =
+
+autoload xfm
+
+(* View: entry
+ An inputrc mapping entry *)
+let entry =
+ let mapping = [ label "mapping" . store /[A-Za-z0-9_."\*\/+\,\\-]+/ ]
+ in [ label "entry"
+ . Util.del_str "\"" . store /[^" \t\n]+/
+ . Util.del_str "\":" . Sep.space
+ . mapping
+ . Util.eol ]
+
+(* View: variable
+ An inputrc variable declaration *)
+let variable = [ Util.del_str "set" . Sep.space
+ . key (Rx.word - "entry") . Sep.space
+ . store Rx.word . Util.eol ]
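+
+(* A hedged example (not from the original file): the line
+   "set editing-mode vi" would map to { "editing-mode" = "vi" },
+   while the mapping entry "\C-u": universal-argument would map to
+   { "entry" = "\C-u" { "mapping" = "universal-argument" } }. *)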
+
+(* View: condition
+ An "if" declaration, recursive *)
+let rec condition = [ Util.del_str "$if" . label "@if"
+ . Sep.space . store Rx.space_in . Util.eol
+ . (Util.empty | Util.comment | condition | variable | entry)*
+ . [ Util.del_str "$else" . label "@else" . Util.eol
+ . (Util.empty | Util.comment | condition | variable | entry)* ] ?
+ . Util.del_str "$endif" . Util.eol ]
+
+(* View: lns
+ The inputrc lens *)
+let lns = (Util.empty | Util.comment | condition | variable | entry)*
+
+(* Variable: filter *)
+let filter = incl "/etc/inputrc"
+
+let xfm = transform lns filter
--- /dev/null
+(* Interfaces module for Augeas
+ Author: Free Ekanayaka <free@64studio.com>
+
+ Reference: man interfaces
+
+*)
+
+module Interfaces =
+ autoload xfm
+
+(************************************************************************
+ * USEFUL PRIMITIVES
+ *************************************************************************)
+
+let eol = Util.eol
+
+(* Define separators *)
+
+(* a line can be extended across multiple lines by making the last *)
+(* character a backslash *)
+let sep_spc = del /([ \t]+|[ \t]*\\\\\n[ \t]*)/ " "
+
+(* Define fields *)
+let sto_to_eol = store /([^\\ \t\n].*[^\\ \t\n]|[^\\ \t\n])/ . eol
+let sto_to_spc = store /[^\\ \t\n]+/
+
+(* Define comments and empty lines *)
+
+(* note that the comment definition from Util does not support *)
+(* splitting lines with a backslash *)
+let comment = Util.comment
+
+let empty = Util.empty
+
+(* Define tree stanza_ids *)
+let stanza_id (t:string) = key t . sep_spc . sto_to_spc
+let stanza_param (l:string) = [ sep_spc . label l . sto_to_spc ]
+
+(* Define reserved words and multi-value options *)
+let stanza_word =
+ /(source(-directory)?|iface|auto|allow-[a-z-]+|mapping|bond-slaves|bridge-ports)/
+
+(* Define stanza option indentation *)
+let stanza_indent = del /[ \t]*/ " "
+
+(* Define additional lines for multi-line stanzas *)
+let stanza_option = [ stanza_indent
+ . key ( /[a-z0-9_-]+/ - stanza_word )
+ . sep_spc
+ . sto_to_eol ]
+
+(* Define space-separated array *)
+let array (r:regexp) (t:string) = del r t . label t . counter t
+ . [ sep_spc . seq t . sto_to_spc ]+
+
+(************************************************************************
+ * AUTO
+ *************************************************************************)
+
+let auto = [ array /(allow-)?auto/ "auto" . eol ]
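+
+(* A hedged example (not from the original file): "auto eth0 eth1"
+   would map to
+   > { "auto" { "1" = "eth0" } { "2" = "eth1" } }
+*)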
+
+(************************************************************************
+ * GENERIC ALLOW
+ *************************************************************************)
+
+let allow = [ key ( /allow-[a-z-]+/ - "allow-auto" )
+ . counter "allow_seq"
+ . [ sep_spc . seq "allow_seq" . sto_to_spc ]+
+ . eol ]
+
+(************************************************************************
+ * MAPPING
+ *************************************************************************)
+
+let mapping = [ stanza_id "mapping"
+ . eol
+ . (stanza_option|comment|empty)+ ]
+
+(************************************************************************
+ * IFACE
+ *************************************************************************)
+
+let multi_option (t:string) = [ stanza_indent . array t t . eol ]
+
+let iface = [ Util.indent
+ . stanza_id "iface"
+ . stanza_param "family"
+ . stanza_param "method"
+ . eol
+ . ( stanza_option
+ | multi_option "bond-slaves"
+ | multi_option "bridge-ports"
+ | comment
+ | empty )* ]
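+
+(* A hedged example (not from the original file): the stanza
+   >  iface eth0 inet static
+   >      address 192.168.1.2
+   would map to
+   > { "iface" = "eth0" { "family" = "inet" } { "method" = "static" }
+   >                    { "address" = "192.168.1.2" } }
+*)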
+
+(************************************************************************
+ * SOURCE
+ *************************************************************************)
+
+let source = [ key "source" . sep_spc . sto_to_eol ]
+
+(************************************************************************
+ * SOURCE-DIRECTORY
+ *************************************************************************)
+
+let source_directory = [ key "source-directory" . sep_spc . sto_to_eol ]
+
+(************************************************************************
+ * STANZAS
+ *************************************************************************)
+
+(* The auto and hotplug stanzas always consist of one line only, while
+ iface and mapping can span multiple lines. Comment nodes are
+ inserted in the tree as direct children of the root node only when they
+ come after an auto or hotplug stanza; otherwise they are considered part
+ of an iface or mapping block *)
+
+let stanza_single = (auto|allow|source|source_directory) . (comment|empty)*
+let stanza_multi = iface|mapping
+
+(************************************************************************
+ * LENS & FILTER
+ *************************************************************************)
+
+ let lns = (comment|empty)* . (stanza_multi | stanza_single)*
+
+ let filter = (incl "/etc/network/interfaces")
+ . (incl "/etc/network/interfaces.d/*")
+ . Util.stdexcl
+
+ let xfm = transform lns filter
--- /dev/null
+module IPRoute2 =
+ autoload xfm
+
+ let empty = [ del /[ \t]*#?[ \t]*\n/ "\n" ]
+ let id = Rx.hex | Rx.integer
+ let record = [ key id . del /[ \t]+/ "\t" . store /[a-zA-Z0-9\/-]+/ . Util.comment_or_eol ]
+
+ let lns = ( empty | Util.comment | record ) *
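+
+ (* A hedged example (not from the original file): in
+    /etc/iproute2/rt_tables, the line "255 local" would map to
+    { "255" = "local" }. *)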
+
+ let xfm = transform lns (incl "/etc/iproute2/*" . Util.stdexcl)
--- /dev/null
+module Iptables =
+ autoload xfm
+
+(*
+Module: Iptables
+ Parse the iptables file format as produced by iptables-save. The
+ resulting tree is fairly simple; in particular a rule is simply
+ a long list of options/switches and their values (if any)
+
+ This lens should be considered experimental
+*)
+
+let comment = Util.comment
+let empty = Util.empty
+let eol = Util.eol
+let spc = Util.del_ws_spc
+let dels = Util.del_str
+
+let chain_name = store /[A-Za-z0-9_-]+/
+let chain =
+ let policy = [ label "policy" . store /ACCEPT|DROP|REJECT|-/ ] in
+ let counters_eol = del /[ \t]*(\[[0-9:]+\])?[ \t]*\n/ "\n" in
+ [ label "chain" .
+ dels ":" . chain_name . spc . policy . counters_eol ]
+
+let param (long:string) (short:string) =
+ [ label long .
+ spc . del (/--/ . long | /-/ . short) ("-" . short) . spc .
+ store /(![ \t]*)?[^ \t\n!-][^ \t\n]*/ ]
+
+(* A negatable parameter, which can be written either as
+ ! --param arg
+ or
+ --param ! arg
+*)
+let neg_param (long:string) (short:string) =
+ [ label long .
+ [ spc . dels "!" . label "not" ]? .
+ spc . del (/--/ . long | /-/ . short) ("-" . short) . spc .
+ store /(![ \t]*)?[^ \t\n!-][^ \t\n]*/ ]
+
+let tcp_flags =
+ let flags = /SYN|ACK|FIN|RST|URG|PSH|ALL|NONE/ in
+ let flag_list (name:string) =
+ Build.opt_list [label name . store flags] (dels ",") in
+ [ label "tcp-flags" .
+ spc . dels "--tcp-flags" .
+ spc . flag_list "mask" . spc . flag_list "set" ]
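+
+(* A hedged example (not from the original file):
+   "--tcp-flags SYN,ACK SYN" would map to
+   > { "tcp-flags" { "mask" = "SYN" } { "mask" = "ACK" }
+   >               { "set" = "SYN" } }
+*)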
+
+(* misses --set-counters *)
+let ipt_match =
+ let any_key = /[a-zA-Z-][a-zA-Z0-9-]+/ -
+ /protocol|source|destination|jump|goto|in-interface|out-interface|fragment|match|tcp-flags/ in
+ let any_val = /([^" \t\n!-][^ \t\n]*)|"([^"\\\n]|\\\\.)*"/ in
+ let any_param =
+ [ [ spc . dels "!" . label "not" ]? .
+ spc . dels "--" . key any_key . (spc . store any_val)? ] in
+ (neg_param "protocol" "p"
+ |neg_param "source" "s"
+ |neg_param "destination" "d"
+ |param "jump" "j"
+ |param "goto" "g"
+ |neg_param "in-interface" "i"
+ |neg_param "out-interface" "o"
+ |neg_param "fragment" "f"
+ |param "match" "m"
+ |tcp_flags
+ |any_param)*
+
+let chain_action (n:string) (o:string) =
+ [ label n .
+ del (/--/ . n | o) o .
+ spc . chain_name . ipt_match . eol ]
+
+let table_rule = chain_action "append" "-A"
+ | chain_action "insert" "-I"
+ | empty
+
+
+let table = [ del /\*/ "*" . label "table" . store /[a-z]+/ . eol .
+ (chain|comment|table_rule)* .
+ dels "COMMIT" . eol ]
+
+let lns = (comment|empty|table)*
+let xfm = transform lns (incl "/etc/sysconfig/iptables"
+ . incl "/etc/sysconfig/iptables.save"
+ . incl "/etc/iptables-save")
--- /dev/null
+(*
+Module: Iscsid
+Parses iscsid configuration file
+Author: Joey Boggs <jboggs@redhat.com>
+About: Reference
+This lens is targeted at /etc/iscsi/iscsid.conf
+*)
+module Iscsid =
+ autoload xfm
+
+ let filter = incl "/etc/iscsi/iscsid.conf"
+
+ let eol = Util.eol
+ let indent = Util.indent
+ let key_re = /[][A-Za-z0-9_.-]+/
+ let eq = del /[ \t]*=[ \t]*/ " = "
+ let value_re = /[^ \t\n](.*[^ \t\n])?/
+
+ let comment = [ indent . label "#comment" . del /[#;][ \t]*/ "# "
+ . store /([^ \t\n].*[^ \t\n]|[^ \t\n])/ . eol ]
+
+ let empty = Util.empty
+
+ let kv = [ indent . key key_re . eq . store value_re . eol ]
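+
+ (* A hedged example (not from the original file): the line
+    "node.startup = automatic" would map to
+    { "node.startup" = "automatic" }. *)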
+
+ let lns = (empty | comment | kv) *
+
+ let xfm = transform lns filter
--- /dev/null
+(* Module Jaas *)
+(* Original Author: Simon Vocella <voxsim@gmail.com> *)
+(* Updated by: Steve Shipway <steve@steveshipway.org> *)
+(* Changes: allow comments within Modules, allow optionless flags, *)
+(* allow options without linebreaks, allow naked true/false options *)
+(* Trailing ';' terminator should not be included in option value *)
+(* Note: requires latest Util.aug for multiline comments to work *)
+
+module Jaas =
+
+autoload xfm
+
+let space_equal = del (/[ \t]*/ . "=" . /[ \t]*/) (" = ")
+let lbrace = del (/[ \t\n]*\{[ \t]*\n/) " {\n"
+let rbrace = del (/[ \t]*}[ \t]*;/) " };"
+let word = /[A-Za-z0-9_.-]+/
+let wsnl = del (/[ \t\n]+/) ("\n")
+let endflag = del ( /[ \t]*;/ ) ( ";" )
+
+let value_re =
+ let value_squote = /'[^\n']*'/
+ in let value_dquote = /"[^\n"]*"/
+ in let value_tf = /(true|false)/
+ in value_squote | value_dquote | value_tf
+
+let moduleOption = [ wsnl . key word . space_equal . (store value_re) ]
+let moduleSuffix = ( moduleOption | Util.eol . Util.comment_c_style | Util.comment_multiline )
+let flag = [ Util.del_ws_spc . label "flag" . (store word) . moduleSuffix* . endflag ]
+let loginModuleClass = [( Util.del_opt_ws "" . label "loginModuleClass" . (store word) . flag ) ]
+
+let content = (Util.empty | Util.comment_c_style | Util.comment_multiline | loginModuleClass)*
+let loginModule = [Util.del_opt_ws "" . label "login" . (store word . lbrace) . (content . rbrace)]
+
+let lns = (Util.empty | Util.comment_c_style | Util.comment_multiline | loginModule)*
+let filter = incl "/opt/shibboleth-idp/conf/login.config"
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: JettyRealm
+ JettyRealm Properties for Augeas
+
+Author: Brian Redbeard <redbeard@dead-city.org>
+
+About: Reference
+ This lens ensures that properties files for JettyRealms are properly
+ handled by Augeas.
+
+About: License
+ This file is licensed under the LGPL License.
+
+About: Lens Usage
+ Sample usage of this lens in augtool:
+
+ * Create a new user
+ > ins user after /files/etc/activemq/jetty-realm.properties/user
+ > set /files/etc/activemq/jetty-realm.properties/user[last()]/username redbeard
+ > set /files/etc/activemq/jetty-realm.properties/user[last()]/password testing
+ > set /files/etc/activemq/jetty-realm.properties/user[last()]/realm admin
+ ...
+
+ * Delete the user named sample_user
+ > rm /files/etc/activemq/jetty-realm.properties/user[*][username = "sample_user"]
+
+ Saving your file:
+
+ > save
+
+About: Configuration files
+ This lens applies to jetty-realm.properties files. See <filter>.
+*)
+
+module JettyRealm =
+ autoload xfm
+
+
+(* View: comma_sep *)
+let comma_sep = del /,[ \t]*/ ", "
+
+(* View: realm_entry *)
+let realm_entry = [ label "user" .
+ [ label "username" . store Rx.word ] . del /[ \t]*:[ \t]*/ ": " .
+ [ label "password" . store Rx.word ] .
+ [ label "realm" . comma_sep . store Rx.word ]* .
+ Util.eol ]
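+
+(* A hedged example (not from the original file): the line
+   "admin: secret, adminrealm" would map to
+   > { "user" { "username" = "admin" } { "password" = "secret" }
+   >          { "realm" = "adminrealm" } }
+*)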
+
+(* View: lns *)
+let lns = ( Util.comment | Util.empty | realm_entry )*
+
+
+(* Variable: filter *)
+let filter = incl "/etc/activemq/jetty-realm.properties"
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: JMXAccess
+ JMXAccess module for Augeas
+
+Author: Brian Redbeard <redbeard@dead-city.org>
+
+
+About: Reference
+ This lens ensures that files included in JMXAccess are properly
+ handled by Augeas.
+
+About: License
+ This file is licensed under the LGPL License.
+
+About: Lens Usage
+ Sample usage of this lens in augtool:
+
+ * Create a new user
+ > ins user after /files/etc/activemq/jmx.access/user[last()]
+ > set /files/etc/activemq/jmx.access/user[last()]/username redbeard
+ > set /files/etc/activemq/jmx.access/user[last()]/access readonly
+ ...
+
+ * Delete the user named sample_user
+ > rm /files/etc/activemq/jmx.access/user[*][username = "sample_user"]
+
+ Saving your file:
+
+ > save
+
+About: Configuration files
+ This lens applies to relevant conf files located in /etc/activemq/
+ The following views correspond to the related files:
+ * access_entry:
+ /etc/activemq/jmx.access
+ See <filter>.
+
+
+*)
+
+module JMXAccess =
+ autoload xfm
+
+(* View: access_entry *)
+let access_entry = [ label "user" .
+ [ label "username" . store Rx.word ] . Sep.space .
+ [ label "access" . store /(readonly|readwrite)/i ] . Util.eol ]
+
+
+
+(* View: lns *)
+let lns = ( Util.comment | Util.empty | access_entry )*
+
+
+(* Variable: filter *)
+let filter = incl "/etc/activemq/jmx.access"
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: JMXPassword
+ JMXPassword for Augeas
+
+Author: Brian Redbeard <redbeard@dead-city.org>
+
+
+About: Reference
+ This lens ensures that files included in JMXPassword are properly
+ handled by Augeas.
+
+About: License
+ This file is licensed under the LGPL License.
+
+About: Lens Usage
+ Sample usage of this lens in augtool:
+
+ * Create a new user
+ > ins user after /files/etc/activemq/jmx.password
+ > set /files/etc/activemq/jmx.password/user[last()]/username redbeard
+ > set /files/etc/activemq/jmx.password/user[last()]/password testing
+ ...
+
+ * Delete the user named sample_user
+ > rm /files/etc/activemq/jmx.password/user[*][username = "sample_user"]
+
+ Saving your file:
+
+ > save
+
+About: Configuration files
+ This lens applies to relevant conf files located in /etc/activemq/
+ The following views correspond to the related files:
+ * pass_entry:
+ /etc/activemq/jmx.password
+ See <filter>.
+
+
+*)
+
+module JMXPassword =
+ autoload xfm
+
+(* View: pass_entry *)
+let pass_entry = [ label "user" .
+ [ label "username" . store Rx.word ] . Sep.space .
+ [ label "password" . store Rx.no_spaces ] . Util.eol ]
+
+(* View: lns *)
+let lns = ( Util.comment | Util.empty | pass_entry )*
+
+
+(* Variable: filter *)
+let filter = incl "/etc/activemq/jmx.password"
+
+let xfm = transform lns filter
--- /dev/null
+module Json =
+
+(* A generic lens for Json files *)
+(* Based on the following grammar from http://www.json.org/ *)
+(* Object ::= '{' Members? '}' *)
+(* Members ::= Pair+ *)
+(* Pair ::= String ':' Value *)
+(* Array ::= '[' Elements ']' *)
+(* Elements ::= Value ( "," Value )* *)
+(* Value ::= String | Number | Object | Array | "true" | "false" | "null" *)
+(* String ::= "\"" Char* "\"" *)
+(* Number ::= /-?[0-9]+(\.[0-9]+)?([eE][+-]?[0-9]+)?/ *)
+
+
+let ws = del /[ \t\n]*/ ""
+let comment = Util.empty_c_style | Util.comment_c_style | Util.comment_multiline
+let comments = comment* . Sep.opt_space
+
+let comma = Util.del_str "," . comments
+let colon = Util.del_str ":" . comments
+let lbrace = Util.del_str "{" . comments
+let rbrace = Util.del_str "}"
+let lbrack = Util.del_str "[" . comments
+let rbrack = Util.del_str "]"
+
+(* This follows the definition of 'string' at https://www.json.org/
+ It's a little wider than what's allowed there as it would accept
+ nonsensical \u escapes *)
+let str_store = Quote.dquote . store /([^\\"]|\\\\["\/bfnrtu\\])*/ . Quote.dquote
+
+let number = [ label "number" . store /-?[0-9]+(\.[0-9]+)?([eE][+-]?[0-9]+)?/
+ . comments ]
+let str = [ label "string" . str_store . comments ]
+
+let const (r:regexp) = [ label "const" . store r . comments ]
+
+let fix_value (value:lens) =
+ let array = [ label "array" . lbrack
+ . ( ( Build.opt_list value comma . rbrack . comments )
+ | (rbrack . ws) ) ]
+ in let pair = [ label "entry" . str_store . ws . colon . value ]
+ in let obj = [ label "dict" . lbrace
+ . ( ( Build.opt_list pair comma. rbrace . comments )
+ | (rbrace . ws ) ) ]
+ in (str | number | obj | array | const /true|false|null/)
+
+(* Process arbitrarily deeply nested JSON objects *)
+let rec rlns = fix_value rlns
+
+let lns = comments . rlns
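+
+(* A hedged example (not from the original file): the document
+   {"key": "value", "n": 1} would map to
+   > { "dict" { "entry" = "key" { "string" = "value" } }
+   >          { "entry" = "n" { "number" = "1" } } }
+*)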
--- /dev/null
+(*
+Module: Kdump
+ Parses /etc/kdump.conf
+
+Author: Roman Rakus <rrakus@redhat.com>
+
+About: References
+ manual page kdump.conf(5)
+
+About: License
+ This file is licenced under the LGPL v2+, like the rest of Augeas.
+
+About: Configuration files
+ This lens applies to /etc/kdump.conf. See <filter>.
+*)
+
+module Kdump =
+ autoload xfm
+
+(************************************************************************
+ * Group: USEFUL PRIMITIVES
+ *************************************************************************)
+
+let empty = Util.empty
+let comment = Util.comment
+let value_to_eol = store /[^ \t\n#][^\n#]*[^ \t\n#]|[^ \t\n#]/
+let int_to_eol = store Rx.integer
+let yn_to_eol = store ("yes" | "no")
+let delimiter = Util.del_ws_spc
+let eol = Util.eol
+let value_to_spc = store Rx.neg1
+let key_to_space = key /[A-Za-z0-9_.\$-]+/
+let eq = Sep.equal
+
+(************************************************************************
+ * Group: ENTRY TYPES
+ *************************************************************************)
+
+let list (kw:string) = counter kw
+ . Build.key_value_line_comment kw delimiter
+ (Build.opt_list [ seq kw . value_to_spc ] delimiter)
+ comment
+
+let mdl_key_value = [ delimiter . key_to_space . ( eq . value_to_spc)? ]
+let mdl_options = [ key_to_space . mdl_key_value+ ]
+let mod_options = [ key "options" . delimiter . mdl_options . (comment|eol) ]
+
+(************************************************************************
+ * Group: ENTRIES
+ *************************************************************************)
+
+(* Taken from mount(8) *)
+let fs_types = "adfs" | "affs" | "autofs" | "cifs" | "coda" | "coherent"
+ | "cramfs" | "debugfs" | "devpts" | "efs" | "ext" | "ext2"
+ | "ext3" | "ext4" | "hfs" | "hfsplus" | "hpfs" | "iso9660"
+ | "jfs" | "minix" | "msdos" | "ncpfs" | "nfs" | "nfs4" | "ntfs"
+ | "proc" | "qnx4" | "ramfs" | "reiserfs" | "romfs" | "squashfs"
+ | "smbfs" | "sysv" | "tmpfs" | "ubifs" | "udf" | "ufs" | "umsdos"
+ | "usbfs" | "vfat" | "xenix" | "xfs" | "xiafs"
+
+let simple_kws = "raw" | "net" | "path" | "core_collector" | "kdump_post"
+ | "kdump_pre" | "default" | "ssh" | "sshkey" | "dracut_args"
+ | "fence_kdump_args"
+
+let int_kws = "force_rebuild" | "override_resettable" | "debug_mem_level"
+ | "link_delay" | "disk_timeout"
+
+let yn_kws = "auto_reset_crashkernel"
+
+let option = Build.key_value_line_comment ( simple_kws | fs_types )
+ delimiter value_to_eol comment
+ | Build.key_value_line_comment int_kws delimiter int_to_eol comment
+ | Build.key_value_line_comment yn_kws delimiter yn_to_eol comment
+ | list "extra_bins"
+ | list "extra_modules"
+ | list "blacklist"
+ | list "fence_kdump_nodes"
+ | mod_options
+
+(* View: lns
+ The options lens
+*)
+let lns = ( empty | comment | option )*
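+
+(* A hedged example (not from the original file): the lines
+   >  path /var/crash
+   >  ext4 /dev/sda3
+   would map to
+   > { "path" = "/var/crash" }
+   > { "ext4" = "/dev/sda3" }
+*)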
+
+let filter = incl "/etc/kdump.conf"
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Keepalived
+ Parses /etc/keepalived/keepalived.conf
+
+Author: Raphael Pinson <raphink@gmail.com>
+
+About: Reference
+ This lens tries to stay as close as possible to `man 5 keepalived.conf`.
+
+About: License
+ This file is licenced under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Configuration files
+ This lens applies to /etc/keepalived/keepalived.conf. See <filter>.
+
+About: Examples
+ The <Test_Keepalived> file contains various examples and tests.
+*)
+
+
+module Keepalived =
+ autoload xfm
+
+(************************************************************************
+ * Group: USEFUL PRIMITIVES
+ *************************************************************************)
+
+(* Group: Comments and empty lines *)
+
+(* View: indent *)
+let indent = Util.indent
+
+(* View: eol *)
+let eol = Util.eol
+
+(* View: opt_eol *)
+let opt_eol = del /[ \t]*\n?/ " "
+
+(* View: sep_spc *)
+let sep_spc = Sep.space
+
+(* View: comment
+Map comments in "#comment" nodes *)
+let comment = Util.comment_generic /[ \t]*[#!][ \t]*/ "# "
+
+(* View: comment_eol
+Map comments at eol *)
+let comment_eol = Util.comment_generic /[ \t]*[#!][ \t]*/ " # "
+
+(* View: comment_or_eol
+A <comment_eol> or <eol> *)
+let comment_or_eol = comment_eol | (del /[ \t]*[#!]?\n/ "\n")
+
+(* View: empty
+Map empty lines *)
+let empty = Util.empty
+
+(* View: sto_email_addr *)
+let sto_email_addr = store Rx.email_addr
+
+(* Variable: word *)
+let word = Rx.word
+
+(* Variable: word_slash *)
+let word_slash = word | "/"
+
+(* View: sto_word *)
+let sto_word = store word
+
+(* View: sto_num *)
+let sto_num = store Rx.relinteger
+
+(* View: sto_ipv6 *)
+let sto_ipv6 = store Rx.ipv6
+
+(* View: sto_to_eol *)
+let sto_to_eol = store /[^#! \t\n][^#!\n]*[^#! \t\n]|[^#! \t\n]/
+
+(* View: field *)
+let field (kw:regexp) (sto:lens) = indent . Build.key_value_line_comment kw sep_spc sto comment_eol
+
+(* View: flag
+A single word *)
+let flag (kw:regexp) = [ indent . key kw . comment_or_eol ]
+
+(* View: ip_port
+ An IP <space> port pair *)
+let ip_port = [ label "ip" . sto_word ] . sep_spc . [ label "port" . sto_num ]
+
+(* View: lens_block
+A generic block with a title lens.
+The definition is very similar to Build.block_newlines
+but uses a different type of <comment>. *)
+let lens_block (title:lens) (sto:lens) =
+ [ indent . title
+ . Build.block_newlines sto comment . eol ]
+
+(* View: block
+A simple block with just a block title *)
+let block (kw:regexp) (sto:lens) = lens_block (key kw) sto
+
+(* View: named_block
+A block with a block title and name *)
+let named_block (kw:string) (sto:lens) = lens_block (key kw . sep_spc . sto_word) sto
+
+(* View: named_block_arg_title
+A title lens for named_block_arg *)
+let named_block_arg_title (kw:string) (name:string) (arg:string) =
+ key kw . sep_spc
+ . [ label name . sto_word ]
+ . sep_spc
+ . [ label arg . sto_word ]
+
+(* View: named_block_arg
+A block with a block title, a name and an argument *)
+let named_block_arg (kw:string) (name:string) (arg:string) (sto:lens) =
+ lens_block (named_block_arg_title kw name arg) sto
+
+
+(************************************************************************
+ * Group: GLOBAL CONFIGURATION
+ *************************************************************************)
+
+(* View: email
+A simple email address entry *)
+let email = [ indent . label "email" . sto_email_addr . comment_or_eol ]
+
+(* View: global_defs_field
+Possible fields in the global_defs block *)
+let global_defs_field =
+ let word_re = "smtp_server"|"lvs_id"|"router_id"|"vrrp_mcast_group4"
+ in let ipv6_re = "vrrp_mcast_group6"
+ in let num_re = "smtp_connect_timeout"
+ in block "notification_email" email
+ | field "notification_email_from" sto_email_addr
+ | field word_re sto_word
+ | field num_re sto_num
+ | field ipv6_re sto_ipv6
+
+(* View: global_defs
+A global_defs block *)
+let global_defs = block "global_defs" global_defs_field
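+
+(* A hedged example (not from the original file): the block
+   >  global_defs {
+   >    notification_email {
+   >      admin@example.com
+   >    }
+   >    router_id LVS_DEVEL
+   >  }
+   would map to
+   > { "global_defs"
+   >   { "notification_email" { "email" = "admin@example.com" } }
+   >   { "router_id" = "LVS_DEVEL" } }
+*)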
+
+(* View: prefixlen
+A prefix for IP addresses *)
+let prefixlen = [ label "prefixlen" . Util.del_str "/" . sto_num ]
+
+(* View: ipaddr
+An IP address or range with an optional mask *)
+let ipaddr = label "ipaddr" . store /[0-9.-]+/ . prefixlen?
+
+(* View: ipdev
+A device for IP addresses *)
+let ipdev = [ key "dev" . sep_spc . sto_word ]
+
+(* View: static_ipaddress_field
+The whole string is fed to ip addr add.
+The string can be truncated at any point, letting ip addr add use defaults for the rest.
+To be refined with fields according to `ip addr help`.
+*)
+let static_ipaddress_field = [ indent . ipaddr
+ . (sep_spc . ipdev)?
+ . comment_or_eol ]
+
+(* View: static_routes_field
+src $SRC_IP to $DST_IP dev $SRC_DEVICE
+*)
+let static_routes_field = [ indent . label "route"
+ . [ key "src" . sto_word ] . sep_spc
+ . [ key "to" . sto_word ] . sep_spc
+ . [ key "dev" . sto_word ] . comment_or_eol ]
+
+(* View: static_routes *)
+let static_routes = block "static_ipaddress" static_ipaddress_field
+ | block "static_routes" static_routes_field
+
+
+(* View: global_conf
+A global configuration entry *)
+let global_conf = global_defs | static_routes
+
+
+(************************************************************************
+ * Group: VRRP CONFIGURATION
+ *************************************************************************)
+
+(*View: vrrp_sync_group_field *)
+let vrrp_sync_group_field =
+ let to_eol_re = /notify(_master|_backup|_fault|_stop|_deleted)?/
+ in let flag_re = "smtp_alert"
+ in field to_eol_re sto_to_eol
+ | flag flag_re
+ | block "group" [ indent . key word . comment_or_eol ]
+
+(* View: vrrp_sync_group *)
+let vrrp_sync_group = named_block "vrrp_sync_group" vrrp_sync_group_field
+
+(* View: vrrp_instance_field *)
+let vrrp_instance_field =
+ let word_re = "state" | "interface" | "lvs_sync_daemon_interface"
+ in let num_re = "virtual_router_id" | "priority" | "advert_int" | /garp_master_(delay|repeat|refresh|refresh_repeat)/
+ in let to_eol_re = /notify(_master|_backup|_fault|_stop|_deleted)?/ | /(mcast|unicast)_src_ip/
+ in let flag_re = "smtp_alert" | "nopreempt" | "ha_suspend" | "debug" | "use_vmac" | "vmac_xmit_base" | "native_ipv6" | "dont_track_primary" | "preempt_delay"
+ in field word_re sto_word
+ | field num_re sto_num
+ | field to_eol_re sto_to_eol
+ | flag flag_re
+ | block "authentication" (
+ field /auth_(type|pass)/ sto_word
+ )
+ | block "virtual_ipaddress" static_ipaddress_field
+ | block /track_(interface|script)/ ( flag word )
+ | block "unicast_peer" static_ipaddress_field
+
+(* View: vrrp_instance *)
+let vrrp_instance = named_block "vrrp_instance" vrrp_instance_field
+
+(* View: vrrp_script_field *)
+let vrrp_script_field =
+ let num_re = "interval" | "weight" | "fall" | "rise"
+ in let to_eol_re = "script"
+ in field to_eol_re sto_to_eol
+ | field num_re sto_num
+
+(* View: vrrp_script *)
+let vrrp_script = named_block "vrrp_script" vrrp_script_field
+
+
+(* View: vrrpd_conf
+contains subblocks of VRRP synchronization group(s) and VRRP instance(s) *)
+let vrrpd_conf = vrrp_sync_group | vrrp_instance | vrrp_script
+
+
+(************************************************************************
+ * Group: REAL SERVER CHECKS CONFIGURATION
+ *************************************************************************)
+
+(* View: tcp_check_field *)
+let tcp_check_field =
+ let word_re = "bindto"
+ in let num_re = /connect_(timeout|port)/
+ in field word_re sto_word
+ | field num_re sto_num
+
+(* View: misc_check_field *)
+let misc_check_field =
+ let flag_re = "misc_dynamic"
+ in let num_re = "misc_timeout"
+ in let to_eol_re = "misc_path"
+ in field num_re sto_num
+ | flag flag_re
+ | field to_eol_re sto_to_eol
+
+(* View: smtp_host_check_field *)
+let smtp_host_check_field =
+ let word_re = "connect_ip" | "bindto"
+ in let num_re = "connect_port"
+ in field word_re sto_word
+ | field num_re sto_num
+
+(* View: smtp_check_field *)
+let smtp_check_field =
+ let word_re = "connect_ip" | "bindto"
+ in let num_re = "connect_timeout" | "retry" | "delay_before_retry"
+ in let to_eol_re = "helo_name"
+ in field word_re sto_word
+ | field num_re sto_num
+ | field to_eol_re sto_to_eol
+ | block "host" smtp_host_check_field
+
+(* View: http_url_check_field *)
+let http_url_check_field =
+ let word_re = "digest"
+ in let num_re = "status_code"
+ in let to_eol_re = "path"
+ in field word_re sto_word
+ | field num_re sto_num
+ | field to_eol_re sto_to_eol
+
+(* View: http_check_field *)
+let http_check_field =
+ let num_re = /connect_(timeout|port)/ | "nb_get_retry" | "delay_before_retry"
+ in field num_re sto_num
+ | block "url" http_url_check_field
+
+(* View: real_server_field *)
+let real_server_field =
+ let num_re = "weight"
+ in let flag_re = "inhibit_on_failure"
+ in let to_eol_re = /notify_(up|down)/
+ in field num_re sto_num
+ | flag flag_re
+ | field to_eol_re sto_to_eol
+ | block "TCP_CHECK" tcp_check_field
+ | block "MISC_CHECK" misc_check_field
+ | block "SMTP_CHECK" smtp_check_field
+ | block /(HTTP|SSL)_GET/ http_check_field
+
+(************************************************************************
+ * Group: LVS CONFIGURATION
+ *************************************************************************)
+
+(* View: virtual_server_field *)
+let virtual_server_field =
+ let num_re = "delay_loop" | "persistence_timeout" | "quorum" | "hysteresis"
+ in let word_re = /lb_(algo|kind)/ | "nat_mask" | "protocol" | "persistence_granularity"
+ | "virtualhost"
+ in let flag_re = "ops" | "ha_suspend" | "alpha" | "omega"
+ in let to_eol_re = /quorum_(up|down)/
+ in let ip_port_re = "sorry_server"
+ in field num_re sto_num
+ | field word_re sto_word
+ | flag flag_re
+ | field to_eol_re sto_to_eol
+ | field ip_port_re ip_port
+ | named_block_arg "real_server" "ip" "port" real_server_field
+
+(* View: virtual_server *)
+let virtual_server = named_block_arg "virtual_server" "ip" "port" virtual_server_field
+
+(* View: virtual_server_group_field *)
+let virtual_server_group_field = [ indent . label "vip"
+ . [ ipaddr ]
+ . sep_spc
+ . [ label "port" . sto_num ]
+ . comment_or_eol ]
+
+(* View: virtual_server_group *)
+let virtual_server_group = named_block "virtual_server_group" virtual_server_group_field
+
+(* View: lvs_conf
+contains subblocks of Virtual server group(s) and Virtual server(s) *)
+let lvs_conf = virtual_server | virtual_server_group
+
+
+(* View: lns
+ The keepalived lens
+*)
+let lns = ( empty | comment | global_conf | vrrpd_conf | lvs_conf )*
+
+(* Variable: filter *)
+let filter = incl "/etc/keepalived/keepalived.conf"
+
+let xfm = transform lns filter
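+
+(* A hedged round-trip sketch in the style of the project's Test_* files;
+   the sample block and its keyword value are illustrative only. *)
+test lns get "global_defs {\n  router_id lvs1\n}\n" =
+  { "global_defs"
+    { "router_id" = "lvs1" } }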
+
--- /dev/null
+(*
+Module: Known_Hosts
+ Parses SSH known_hosts files
+
+Author: Raphaël Pinson <raphink@gmail.com>
+
+About: Reference
+ This lens manages OpenSSH's known_hosts files. See `man 8 sshd` for reference.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ Sample usage of this lens in augtool:
+
+ * Get a key by name from ssh_known_hosts
+ > print /files/etc/ssh/ssh_known_hosts/*[.="foo.example.com"]
+ ...
+
+ * Change a host's key
+ > set /files/etc/ssh/ssh_known_hosts/*[.="foo.example.com"]/key "newkey"
+
+About: Configuration files
+ This lens applies to SSH known_hosts files. See <filter>.
+
+*)
+
+module Known_Hosts =
+
+autoload xfm
+
+
+(* View: marker
+ The marker is optional, but if it is present then it must be one of
+ “@cert-authority”, to indicate that the line contains a certification
+ authority (CA) key, or “@revoked”, to indicate that the key contained
+ on the line is revoked and must not ever be accepted.
+ Only one marker should be used on a key line.
+*)
+let marker = [ key /@(revoked|cert-authority)/ . Sep.space ]
+
+
+(* View: type
+ The type of the key, e.g. “ssh-rsa”, “ecdsa-sha2-nistp256”
+ or “ssh-ed25519”.
+*)
+let type = [ label "type" . store Rx.neg1 ]
+
+
+(* View: entry
+ A known_hosts entry *)
+let entry =
+ let alias = [ label "alias" . store Rx.neg1 ]
+ in let key = [ label "key" . store Rx.neg1 ]
+ in [ Util.indent . seq "entry" . marker?
+ . store Rx.neg1
+ . (Sep.comma . Build.opt_list alias Sep.comma)?
+ . Sep.space . type . Sep.space . key
+ . Util.comment_or_eol ]
+
+(* View: lns
+ The known_hosts lens *)
+let lns = (Util.empty | Util.comment | entry)*
+
+(* Variable: filter *)
+let filter = incl "/etc/ssh/ssh_known_hosts"
+ . incl (Sys.getenv("HOME") . "/.ssh/known_hosts")
+
+let xfm = transform lns filter
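+
+(* A hedged usage sketch, mirroring the style of the Test_* files;
+   the hostname and the (truncated) key material are illustrative only. *)
+test lns get "foo.example.com ssh-rsa AAAA123=\n" =
+  { "1" = "foo.example.com"
+    { "type" = "ssh-rsa" }
+    { "key" = "AAAA123=" } }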
--- /dev/null
+(*
+Module: Koji
+ Parses koji config files
+
+Author: Pat Riehecky <riehecky@fnal.gov>
+
+About: Reference
+ This lens tries to keep as close as possible to koji config syntax
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Configuration files
+ This lens applies to:
+ /etc/koji.conf
+ /etc/kojid/kojid.conf
+ /etc/koji-hub/hub.conf
+ /etc/kojira/kojira.conf
+ /etc/kojiweb/web.conf
+ /etc/koji-shadow/koji-shadow.conf
+
+ See <filter>.
+*)
+
+module Koji =
+ autoload xfm
+
+let lns = IniFile.lns_loose_multiline
+
+let filter = incl "/etc/koji.conf"
+ . incl "/etc/kojid/kojid.conf"
+ . incl "/etc/koji-hub/hub.conf"
+ . incl "/etc/kojira/kojira.conf"
+ . incl "/etc/kojiweb/web.conf"
+ . incl "/etc/koji-shadow/koji-shadow.conf"
+
+let xfm = transform lns filter
--- /dev/null
+module Krb5 =
+
+autoload xfm
+
+let comment = IniFile.comment IniFile.comment_re "#"
+let empty = IniFile.empty
+let eol = IniFile.eol
+let dels = Util.del_str
+
+let indent = del /[ \t]*/ ""
+let comma_or_space_sep = del /[ \t,]{1,}/ " "
+let eq = del /[ \t]*=[ \t]*/ " = "
+let eq_openbr = del /[ \t]*=[ \t\n]*\{[ \t]*\n/ " = {\n"
+let closebr = del /[ \t]*\}/ "}"
+
+(* These two regexps for realms and apps are not entirely true
+ - strictly speaking, there's no requirement that a realm is all upper case
+ and an application only uses lowercase. But it's what's used in practice.
+
+ Without that distinction we couldn't distinguish between applications
+ and realms in the [appdefaults] section.
+*)
+
+let include_re = /include(dir)?/
+let realm_re = /[A-Z0-9][.a-zA-Z0-9-]*/
+let realm_anycase_re = /[A-Za-z0-9][.a-zA-Z0-9-]*/
+let app_re = /[a-z][a-zA-Z0-9_]*/
+let name_re = /[.a-zA-Z0-9_-]+/ - include_re
+
+let value_br = store /[^;# \t\r\n{}]+/
+let value = store /[^;# \t\r\n]+/
+let entry (kw:regexp) (sep:lens) (value:lens) (comment:lens)
+ = [ indent . key kw . sep . value . (comment|eol) ] | comment
+
+let subsec_entry (kw:regexp) (sep:lens) (comment:lens)
+ = ( entry kw sep value_br comment ) | empty
+
+let simple_section (n:string) (k:regexp) =
+ let title = IniFile.indented_title n in
+ let entry = entry k eq value comment in
+ IniFile.record title entry
+
+let record (t:string) (e:lens) =
+ let title = IniFile.indented_title t in
+ IniFile.record title e
+
+let v4_name_convert (subsec:lens) = [ indent . key "v4_name_convert" .
+ eq_openbr . subsec* . closebr . eol ]
+
+(*
+ For the enctypes this appears to be a list of the valid entries:
+ c4-hmac arcfour-hmac aes128-cts rc4-hmac
+ arcfour-hmac-md5 des3-cbc-sha1 des-cbc-md5 des-cbc-crc
+*)
+let enctype_re = /[a-zA-Z0-9-]{3,}/
+let enctypes = /permitted_enctypes|default_tgs_enctypes|default_tkt_enctypes/i
+
+(* An #eol label prevents ambiguity between "k = v1 v2" and "k = v1\n k = v2" *)
+let enctype_list (nr:regexp) (ns:string) =
+ indent . del nr ns . eq
+ . Build.opt_list [ label ns . store enctype_re ] comma_or_space_sep
+ . (comment|eol) . [ label "#eol" ]
+
+let libdefaults =
+ let option = entry (name_re - ("v4_name_convert" |enctypes)) eq value comment in
+ let enctype_lists = enctype_list /permitted_enctypes/i "permitted_enctypes"
+ | enctype_list /default_tgs_enctypes/i "default_tgs_enctypes"
+ | enctype_list /default_tkt_enctypes/i "default_tkt_enctypes" in
+ let subsec = [ indent . key /host|plain/ . eq_openbr .
+ (subsec_entry name_re eq comment)* . closebr . eol ] in
+ record "libdefaults" (option|enctype_lists|v4_name_convert subsec)
+
+let login =
+ let keys = /krb[45]_get_tickets|krb4_convert|krb_run_aklog/
+ |/aklog_path|accept_passwd/ in
+ simple_section "login" keys
+
+let appdefaults =
+ let option = entry (name_re - ("realm" | "application")) eq value_br comment in
+ let realm = [ indent . label "realm" . store realm_re .
+ eq_openbr . (option|empty)* . closebr . eol ] in
+ let app = [ indent . label "application" . store app_re .
+ eq_openbr . (realm|option|empty)* . closebr . eol] in
+ record "appdefaults" (option|realm|app)
+
+let realms =
+ let simple_option = /kdc|admin_server|database_module|default_domain/
+ |/v4_realm|auth_to_local(_names)?|master_kdc|kpasswd_server/
+ |/ticket_lifetime|pkinit_(anchors|identities|identity|pool)/
+ |/krb524_server/ in
+ let subsec_option = /v4_instance_convert/ in
+ let option = subsec_entry simple_option eq comment in
+ let subsec = [ indent . key subsec_option . eq_openbr .
+ (subsec_entry name_re eq comment)* . closebr . eol ] in
+ let v4subsec = [ indent . key /host|plain/ . eq_openbr .
+ (subsec_entry name_re eq comment)* . closebr . eol ] in
+ let realm = [ indent . label "realm" . store realm_anycase_re .
+ eq_openbr . (option|subsec|(v4_name_convert v4subsec))* .
+ closebr . eol ] in
+ record "realms" (realm|comment)
+
+let domain_realm =
+ simple_section "domain_realm" name_re
+
+let logging =
+ let keys = /kdc|admin_server|default/ in
+ let xchg (m:regexp) (d:string) (l:string) =
+ del m d . label l in
+ let xchgs (m:string) (l:string) = xchg m m l in
+ let dest =
+ [ xchg /FILE[=:]/ "FILE=" "file" . value ]
+ |[ xchgs "STDERR" "stderr" ]
+ |[ xchgs "CONSOLE" "console" ]
+ |[ xchgs "DEVICE=" "device" . value ]
+ |[ xchgs "SYSLOG" "syslog" .
+ ([ xchgs ":" "severity" . store /[A-Za-z0-9]+/ ].
+ [ xchgs ":" "facility" . store /[A-Za-z0-9]+/ ]?)? ] in
+ let entry = [ indent . key keys . eq . dest . (comment|eol) ] | comment in
+ record "logging" entry
+
+let capaths =
+ let realm = [ indent . key realm_re .
+ eq_openbr .
+ (entry realm_re eq value_br comment)* . closebr . eol ] in
+ record "capaths" (realm|comment)
+
+let dbdefaults =
+ let keys = /database_module|ldap_kerberos_container_dn|ldap_kdc_dn/
+ |/ldap_kadmind_dn|ldap_service_password_file|ldap_servers/
+ |/ldap_conns_per_server/ in
+ simple_section "dbdefaults" keys
+
+let dbmodules =
+ let subsec_key = /database_name|db_library|disable_last_success/
+ |/disable_lockout|ldap_conns_per_server|ldap_(kdc|kadmind)_dn/
+ |/ldap_(kdc|kadmind)_sasl_mech|ldap_(kdc|kadmind)_sasl_authcid/
+ |/ldap_(kdc|kadmind)_sasl_authzid|ldap_(kdc|kadmind)_sasl_realm/
+ |/ldap_kerberos_container_dn|ldap_servers/
+ |/ldap_service_password_file|mapsize|max_readers|nosync/
+ |/unlockiter/ in
+ let subsec_option = subsec_entry subsec_key eq comment in
+ let key = /db_module_dir/ in
+ let option = entry key eq value comment in
+ let realm = [ indent . label "realm" . store realm_re .
+ eq_openbr . (subsec_option)* . closebr . eol ] in
+ record "dbmodules" (option|realm)
+
+(* This section is not documented in the krb5.conf manpage,
+ but the Fermi example uses it. *)
+let instance_mapping =
+ let value = dels "\"" . store /[^;# \t\r\n{}]*/ . dels "\"" in
+ let map_node = label "mapping" . store /[a-zA-Z0-9\/*]+/ in
+ let mapping = [ indent . map_node . eq .
+ [ label "value" . value ] . (comment|eol) ] in
+ let instance = [ indent . key name_re .
+ eq_openbr . (mapping|comment)* . closebr . eol ] in
+ record "instancemapping" instance
+
+let kdc =
+ simple_section "kdc" /profile/
+
+let pam =
+ simple_section "pam" name_re
+
+let plugins =
+ let interface_option = subsec_entry name_re eq comment in
+ let interface = [ indent . key name_re .
+ eq_openbr . (interface_option)* . closebr . eol ] in
+ record "plugins" (interface|comment)
+
+let includes = Build.key_value_line include_re Sep.space (store Rx.fspath)
+let include_lines = includes . (comment|empty)*
+
+let lns = (comment|empty)* .
+ (libdefaults|login|appdefaults|realms|domain_realm
+ |logging|capaths|dbdefaults|dbmodules|instance_mapping|kdc|pam|include_lines
+ |plugins)*
+
+let filter = (incl "/etc/krb5.conf.d/*.conf")
+ . (incl "/etc/krb5.conf")
+
+let xfm = transform lns filter
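+
+(* A hedged round-trip sketch; the section and value are illustrative. *)
+test lns get "[login]\n\tkrb4_convert = true\n" =
+  { "login"
+    { "krb4_convert" = "true" } }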
--- /dev/null
+(*
+Module: Ldif
+ Parses the LDAP Data Interchange Format (LDIF)
+
+Author: Dominic Cleal <dcleal@redhat.com>
+
+About: Reference
+ This lens tries to keep as close as possible to RFC2849
+ <http://tools.ietf.org/html/rfc2849>
+ and OpenLDAP's ldif(5)
+
+About: License
+ This file is licensed under the LGPLv2+, like the rest of Augeas.
+*)
+
+module Ldif =
+autoload xfm
+
+(************************************************************************
+ * Group: USEFUL PRIMITIVES
+ ************************************************************************)
+
+(* View: comment *)
+let comment = Util.comment_generic /#[ \t]*/ "# "
+
+(* View: empty
+ Map empty lines, including empty comments *)
+let empty = [ del /#?[ \t]*\n/ "\n" ]
+
+(* View: eol
+ Only eol, don't include whitespace *)
+let eol = Util.del_str "\n"
+
+(* View: sep_colon
+ The separator for attributes and values *)
+let sep_colon = del /:[ \t]*/ ": "
+
+(* View: sep_base64
+ The separator for attributes and base64 encoded values *)
+let sep_base64 = del /::[ \t]*/ ":: "
+
+(* View: sep_url
+ The separator for attributes and URL-sourced values *)
+let sep_url = del /:<[ \t]*/ ":< "
+
+(* Variable: ldapoid_re
+ Format of an LDAP OID from RFC 2251 *)
+let ldapoid_re = /[0-9][0-9\.]*/
+
+(* View: sep_modspec
+ Separator between modify operations *)
+let sep_modspec = Util.del_str "-" . eol
+
+(************************************************************************
+ * Group: BASIC ATTRIBUTES
+ ************************************************************************)
+
+(* Different types of values, all permitting continuation where the next line
+ begins with whitespace *)
+let attr_safe_string =
+ let line = /[^ \t\n:<][^\n]*/
+ in let lines = line . (/\n[ \t]+[^ \t\n][^\n]*/)*
+ in sep_colon . store lines
+
+let attr_base64_string =
+ let line = /[a-zA-Z0-9=+]+/
+ in let lines = line . (/\n[ \t]+/ . line)*
+ in sep_base64 . [ label "@base64" . store lines ]
+
+let attr_url_string =
+ let line = /[^ \t\n][^\n]*/
+ in let lines = line . (/\n[ \t]+/ . line)*
+ in sep_url . [ label "@url" . store lines ]
+
+let attr_intflag = sep_colon . store /0|1/
+
+(* View: attr_version
+ version-spec = "version:" FILL version-number *)
+let attr_version = Build.key_value_line "version" sep_colon (store /[0-9]+/)
+
+(* View: attr_dn
+ dn-spec = "dn:" (FILL distinguishedName /
+ ":" FILL base64-distinguishedName) *)
+let attr_dn = del /dn/i "dn"
+ . ( attr_safe_string | attr_base64_string )
+ . eol
+
+(* View: attr_type
+ AttributeType = ldap-oid / (ALPHA *(attr-type-chars)) *)
+let attr_type = ldapoid_re | /[a-zA-Z][a-zA-Z0-9-]*/
+ - /dn/i
+ - /changeType/i
+ - /include/i
+
+(* View: attr_option
+ options = option / (option ";" options) *)
+let attr_option = Util.del_str ";"
+ . [ label "@option" . store /[a-zA-Z0-9-]+/ ]
+
+(* View: attr_description
+ Attribute name, possibly with options *)
+let attr_description = key attr_type . attr_option*
+
+(* View: attr_val_spec
+ Generic attribute with a value *)
+let attr_val_spec = [ attr_description
+ . ( attr_safe_string
+ | attr_base64_string
+ | attr_url_string )
+ . eol ]
+
+(* View: attr_changetype
+ Parameters:
+ t:regexp - value of changeType *)
+let attr_changetype (t:regexp) =
+ key /changeType/i . sep_colon . store t . eol
+
+(* View: attr_modspec *)
+let attr_modspec = key /add|delete|replace/ . sep_colon . store attr_type
+ . attr_option* . eol
+
+(* View: attr_dn_value
+ Parses an attribute line with a DN on the RHS
+ Parameters:
+ k:regexp - match attribute name as key *)
+let attr_dn_value (k:regexp) =
+ [ key k . ( attr_safe_string | attr_base64_string ) . eol ]
+
+(* View: sep_line *)
+let sep_line = empty | comment
+
+(* View: attr_include
+ OpenLDAP extension, must be separated by blank lines *)
+let attr_include = eol . [ key "include" . sep_colon
+ . store /[^ \t\n][^\n]*/ . eol . comment* . eol ]
+
+(* View: sep_record *)
+let sep_record = ( sep_line | attr_include )*
+
+(************************************************************************
+ * Group: LDIF CONTENT RECORDS
+ ************************************************************************)
+
+(* View: ldif_attrval_record
+ ldif-attrval-record = dn-spec SEP 1*attrval-spec *)
+let ldif_attrval_record = [ seq "record"
+ . attr_dn
+ . ( sep_line* . attr_val_spec )+ ]
+
+(* View: ldif_content
+ ldif-content = version-spec 1*(1*SEP ldif-attrval-record) *)
+let ldif_content = [ label "@content"
+ . ( sep_record . attr_version )?
+ . ( sep_record . ldif_attrval_record )+
+ . sep_record ]
+
+(************************************************************************
+ * Group: LDIF CHANGE RECORDS
+ ************************************************************************)
+
+(* View: change_add
+ change-add = "add" SEP 1*attrval-spec *)
+let change_add = [ attr_changetype "add" ] . ( sep_line* . attr_val_spec )+
+
+(* View: change_delete
+ change-delete = "delete" *)
+let change_delete = [ attr_changetype "delete" ]
+
+(* View: change_modspec
+ change-modspec = add/delete/replace: AttributeDesc SEP *attrval-spec "-" *)
+let change_modspec = attr_modspec . ( sep_line* . attr_val_spec )*
+
+(* View: change_modify
+ change-modify = "modify" SEP *mod-spec *)
+let change_modify = [ attr_changetype "modify" ]
+ . ( sep_line* . [ change_modspec
+ . sep_line* . sep_modspec ] )+
+
+(* View: change_modrdn
+ ("modrdn" / "moddn") SEP newrdn/newsuperior/deleteoldrdn *)
+let change_modrdn =
+ let attr_deleteoldrdn = [ key "deleteoldrdn" . attr_intflag . eol ]
+ in let attrs_modrdn = attr_dn_value "newrdn"
+ | attr_dn_value "newsuperior"
+ | attr_deleteoldrdn
+ in [ attr_changetype /modr?dn/ ]
+ . ( sep_line | attrs_modrdn )* . attrs_modrdn
+
+(* View: change_record
+ changerecord = "changetype:" FILL (changeadd/delete/modify/moddn) *)
+let change_record = ( change_add | change_delete | change_modify
+ | change_modrdn)
+
+(* View: change_control
+ "control:" FILL ldap-oid 0*1(1*SPACE ("true" / "false")) 0*1(value-spec) *)
+let change_control =
+ let attr_criticality = [ Util.del_ws_spc . label "criticality"
+ . store /true|false/ ]
+ in let attr_ctrlvalue = [ label "value" . (attr_safe_string
+ | attr_base64_string
+ | attr_url_string ) ]
+ in [ key "control" . sep_colon . store ldapoid_re
+ . attr_criticality? . attr_ctrlvalue? . eol ]
+
+(* View: ldif_change_record
+ ldif-change-record = dn-spec SEP *control changerecord *)
+let ldif_change_record = [ seq "record" . attr_dn
+ . ( ( sep_line | change_control )* . change_control )?
+ . sep_line* . change_record ]
+
+(* View: ldif_changes
+ ldif-changes = version-spec 1*(1*SEP ldif-change-record) *)
+let ldif_changes = [ label "@changes"
+ . ( sep_record . attr_version )?
+ . ( sep_record . ldif_change_record )+
+ . sep_record ]
+
+(************************************************************************
+ * Group: LENS
+ ************************************************************************)
+
+(* View: lns *)
+let lns = sep_record | ldif_content | ldif_changes
+
+let filter = incl "/etc/openldap/schema/*.ldif"
+
+let xfm = transform lns filter
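+
+(* A hedged content-record sketch; the DN and attribute are illustrative. *)
+test lns get "dn: cn=foo,dc=example,dc=com\ncn: foo\n" =
+  { "@content"
+    { "1" = "cn=foo,dc=example,dc=com"
+      { "cn" = "foo" } } }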
--- /dev/null
+(*
+Module: LdSo
+ Parses /etc/ld.so.conf and /etc/ld.so.conf.d/*
+
+Author: Raphael Pinson <raphink@gmail.com>
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Configuration files
+ This lens applies to /etc/ld.so.conf and /etc/ld.so.conf.d/*. See <filter>.
+
+About: Examples
+ The <Test_Ldso> file contains various examples and tests.
+*)
+
+module LdSo =
+
+autoload xfm
+
+(* View: path *)
+let path = [ label "path" . store /[^# \t\n][^ \t\n]*/ . Util.eol ]
+
+(* View: include *)
+let include = Build.key_value_line "include" Sep.space (store Rx.fspath)
+
+(* View: hwcap *)
+let hwcap =
+ let hwcap_val = [ label "bit" . store Rx.integer ] . Sep.space .
+ [ label "name" . store Rx.word ]
+ in Build.key_value_line "hwcap" Sep.space hwcap_val
+
+(* View: lns *)
+let lns = (Util.empty | Util.comment | path | include | hwcap)*
+
+(* Variable: filter *)
+let filter = incl "/etc/ld.so.conf"
+ . incl "/etc/ld.so.conf.d/*"
+ . Util.stdexcl
+
+let xfm = transform lns filter
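+
+(* A hedged round-trip sketch; the paths are illustrative only. *)
+test lns get "include /etc/ld.so.conf.d/*.conf\n/usr/local/lib\n" =
+  { "include" = "/etc/ld.so.conf.d/*.conf" }
+  { "path" = "/usr/local/lib" }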
--- /dev/null
+(*
+Module: Lightdm
+ Lightdm module for Augeas, which parses /etc/lightdm/*.conf files in
+ standard INI file format.
+
+Author: David Salmen <dsalmen@dsalmen.com>
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Configuration files
+ This lens applies to /etc/lightdm/*.conf. See <filter>.
+
+About: Tests
+ The tests/test_lightdm.aug file contains unit tests.
+*)
+
+module Lightdm =
+ autoload xfm
+
+(************************************************************************
+ * INI File settings
+ *
+ * lightdm.conf only supports "#" as comment marker and "=" as separator
+ *************************************************************************)
+let comment = IniFile.comment "#" "#"
+let sep = IniFile.sep "=" "="
+
+
+(************************************************************************
+ * ENTRY
+ * lightdm.conf uses standard INI File entries
+ *************************************************************************)
+let entry = IniFile.indented_entry IniFile.entry_re sep comment
+
+
+(************************************************************************
+ * RECORD
+ * lightdm.conf uses standard INI File records
+ *************************************************************************)
+let title = IniFile.indented_title IniFile.record_re
+let record = IniFile.record title entry
+
+
+(************************************************************************
+ * LENS & FILTER
+ * lightdm.conf uses standard INI File records
+ *************************************************************************)
+let lns = IniFile.lns record comment
+
+let filter = (incl "/etc/lightdm/*.conf")
+
+let xfm = transform lns filter
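+
+(* A hedged round-trip sketch; the section and key/value are illustrative. *)
+test lns get "[SeatDefaults]\ngreeter-session=example-greeter\n" =
+  { "SeatDefaults"
+    { "greeter-session" = "example-greeter" } }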
--- /dev/null
+(* Limits module for Augeas
+ Author: Free Ekanayaka <free@64studio.com>
+
+ Reference: /etc/security/limits.conf
+
+*)
+
+module Limits =
+
+ autoload xfm
+
+(************************************************************************
+ * USEFUL PRIMITIVES
+ *************************************************************************)
+
+let eol = Util.eol
+let comment_or_eol = Util.comment_or_eol
+let spc = Util.del_ws_spc
+let comment = Util.comment
+let empty = Util.empty
+
+let sto_to_eol = store /([^ \t\n].*[^ \t\n]|[^ \t\n])/
+
+(************************************************************************
+ * ENTRIES
+ *************************************************************************)
+
+let domain = label "domain" . store /[%@]?[A-Za-z0-9_.:-]+|\*/
+
+let type_re = "soft"
+ | "hard"
+ | "-"
+let type = [ label "type" . store type_re ]
+
+let item_re = "core"
+ | "data"
+ | "fsize"
+ | "memlock"
+ | "nofile"
+ | "rss"
+ | "stack"
+ | "cpu"
+ | "nproc"
+ | "as"
+ | "maxlogins"
+ | "maxsyslogins"
+ | "priority"
+ | "locks"
+ | "sigpending"
+ | "msgqueue"
+ | "nice"
+ | "rtprio"
+ | "chroot"
+let item = [ label "item" . store item_re ]
+
+let value = [ label "value" . store /[A-Za-z0-9_.\/-]+/ ]
+let entry = [ domain . spc
+ . type . spc
+ . item . spc
+ . value . comment_or_eol ]
+
+(************************************************************************
+ * LENS
+ *************************************************************************)
+
+let lns = (comment|empty|entry) *
+
+let filter = incl "/etc/security/limits.conf"
+ . incl "/etc/security/limits.d/*.conf"
+
+let xfm = transform lns filter
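+
+(* A hedged round-trip sketch; the limit values are illustrative only. *)
+test lns get "* soft nofile 1024\n" =
+  { "domain" = "*"
+    { "type" = "soft" }
+    { "item" = "nofile" }
+    { "value" = "1024" } }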
--- /dev/null
+(*
+Module: Login_defs
+ Lens for login.defs
+
+Author: Erinn Looney-Triggs
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Configuration files
+ This lens applies to /etc/login.defs. See <filter>.
+*)
+module Login_defs =
+autoload xfm
+
+(* View: record
+ A login.defs record *)
+let record =
+ let value = store /[^ \t\n]+([ \t]+[^ \t\n]+)*/ in
+ [ key Rx.word . Sep.space . value . Util.eol ]
+
+(* View: lns
+ The login.defs lens *)
+let lns = (record | Util.comment | Util.empty) *
+
+(* View: filter *)
+let filter = incl "/etc/login.defs"
+
+let xfm = transform lns filter
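+
+(* A hedged round-trip sketch; the key and value are illustrative only. *)
+test lns get "UMASK 022\n" = { "UMASK" = "022" }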
--- /dev/null
+(* Logrotate module for Augeas *)
+(* Author: Raphael Pinson <raphink@gmail.com> *)
+(* Patches from: *)
+(* Sean Millichamp <sean@bruenor.org> *)
+(* *)
+(* Supported : *)
+(* - defaults *)
+(* - rules *)
+(* - (pre|post)rotate entries *)
+(* *)
+(* Todo : *)
+(* *)
+
+module Logrotate =
+ autoload xfm
+
+ let sep_spc = Sep.space
+ let sep_val = del /[ \t]*=[ \t]*|[ \t]+/ " "
+ let eol = Util.eol
+ let num = Rx.relinteger
+ let word = /[^,#= \n\t{}]+/
+ let filename = Quote.do_quote_opt (store /\/[^"',#= \n\t{}]+/)
+ let size = num . /[kMG]?/
+
+ let indent = del Rx.opt_space "\t"
+
+ (* Define comments and empty lines *)
+ let comment = Util.comment
+ let empty = Util.empty
+
+
+ (* Useful functions *)
+
+ let list_item = [ sep_spc . key /[^\/+,# \n\t{}]+/ ]
+ let select_to_eol (kw:string) (select:regexp) = [ label kw . store select ]
+ let value_to_eol (kw:string) (value:regexp) = Build.key_value kw sep_val (store value)
+ let flag_to_eol (kw:string) = Build.flag kw
+ let list_to_eol (kw:string) = [ key kw . list_item+ ]
+
+
+ (* Defaults *)
+
+ let create =
+ let mode = sep_spc . [ label "mode" . store num ] in
+ let owner = sep_spc . [ label "owner" . store word ] in
+ let group = sep_spc . [ label "group" . store word ] in
+ [ key "create" .
+ ( mode | mode . owner | mode . owner . group )? ]
+
+ let su =
+ let owner = sep_spc . [ label "owner" . store word ] in
+ let group = sep_spc . [ label "group" . store word ] in
+ [ key "su" .
+ ( owner | owner . group )? ]
+
+ let tabooext = [ key "tabooext" . ( sep_spc . store /\+/ )? . list_item+ ]
+
+ let attrs = select_to_eol "schedule" /(hourly|daily|weekly|monthly|yearly)/
+ | value_to_eol "rotate" num
+ | create
+ | flag_to_eol "nocreate"
+ | su
+ | value_to_eol "include" word
+ | select_to_eol "missingok" /(no)?missingok/
+ | select_to_eol "compress" /(no)?compress/
+ | select_to_eol "delaycompress" /(no)?delaycompress/
+ | select_to_eol "ifempty" /(not)?ifempty/
+ | select_to_eol "sharedscripts" /(no)?sharedscripts/
+ | value_to_eol "size" size
+ | tabooext
+ | value_to_eol "olddir" word
+ | flag_to_eol "noolddir"
+ | value_to_eol "mail" word
+ | flag_to_eol "mailfirst"
+ | flag_to_eol "maillast"
+ | flag_to_eol "nomail"
+ | value_to_eol "errors" word
+ | value_to_eol "extension" word
+ | select_to_eol "dateext" /(no)?dateext/
+ | value_to_eol "dateformat" word
+ | flag_to_eol "dateyesterday"
+ | value_to_eol "compresscmd" word
+ | value_to_eol "uncompresscmd" word
+ | value_to_eol "compressext" word
+ | list_to_eol "compressoptions"
+ | select_to_eol "copy" /(no)?copy/
+ | select_to_eol "copytruncate" /(no)?copytruncate/
+ | value_to_eol "maxage" num
+ | value_to_eol "minsize" size
+ | value_to_eol "maxsize" size
+ | select_to_eol "shred" /(no)?shred/
+ | value_to_eol "shredcycles" num
+ | value_to_eol "start" num
+
+ (* Define hooks *)
+
+
+ let hook_lines =
+ let line_re = /.*/ - /[ \t]*endscript[ \t]*/ in
+ store ( line_re . ("\n" . line_re)* )? . Util.del_str "\n"
+
+ let hooks =
+ let hook_names = /(pre|post)rotate|(first|last)action/ in
+ [ key hook_names . eol .
+ hook_lines? .
+ del /[ \t]*endscript/ "\tendscript" ]
+
+ (* Define rule *)
+
+ let body = Build.block_newlines
+ (indent . (attrs | hooks) . eol)
+ Util.comment
+
+ let rule =
+ let filename_entry = [ label "file" . filename ] in
+ let filename_sep = del /[ \t\n]+/ " " in
+ let filenames = Build.opt_list filename_entry filename_sep in
+ [ label "rule" . Util.indent . filenames . body . eol ]
+
+ let lns = ( comment | empty | (attrs . eol) | rule )*
+
+ let filter = incl "/etc/logrotate.d/*"
+ . incl "/etc/logrotate.conf"
+ . Util.stdexcl
+
+ let xfm = transform lns filter
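+
+(* As a usage sketch: the rule lens maps each logrotate stanza to a "rule"
+node with one "file" child per log path and one child per directive. This
+test is mine, not from the module's test suite, and the expected tree is my
+reading of the lens (it assumes the filename primitive defined earlier in
+the module): *)
+
+test Logrotate.lns get "/var/log/foo {\n\trotate 4\n}\n" =
+  { "rule"
+    { "file" = "/var/log/foo" }
+    { "rotate" = "4" } }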
--- /dev/null
+(* Logwatch module for Augeas
+ Author: Francois Lebel <francois@flebel.com>
+ Based on the dnsmasq lens written by Free Ekanayaka.
+
+ Reference: man logwatch (8)
+
+ "Format is one option per line, legal options are the same
+ as the long options legal on the command line. See
+ "logwatch.pl --help" or "man 8 logwatch" for details."
+
+*)
+
+module Logwatch =
+
+ autoload xfm
+
+(************************************************************************
+ * USEFUL PRIMITIVES
+ *************************************************************************)
+
+let eol = Util.eol
+let spc = Util.del_ws_spc
+let comment = Util.comment
+let empty = Util.empty
+
+let sep_eq = del / = / " = "
+let sto_to_eol = store /([^ \t\n].*[^ \t\n]|[^ \t\n])/
+
+(************************************************************************
+ * ENTRIES
+ *************************************************************************)
+
+let entry_re = /[A-Za-z0-9._-]+/
+let entry = [ key entry_re . sep_eq . sto_to_eol . eol ]
+
+(************************************************************************
+ * LENS
+ *************************************************************************)
+
+let lns = (comment|empty|entry) *
+
+let filter = incl "/etc/logwatch/conf/logwatch.conf"
+ . excl ".*"
+ . Util.stdexcl
+
+let xfm = transform lns filter
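+
+(* A hedged sanity check (not from the original test suite; the expected
+tree is my reading of the lens). Note that sep_eq requires exactly " = "
+around the equals sign: *)
+
+test Logwatch.lns get "Range = yesterday\nDetail = High\n" =
+  { "Range" = "yesterday" }
+  { "Detail" = "High" }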
--- /dev/null
+module Lokkit =
+ autoload xfm
+
+(* Module: Lokkit
+ Parse the config file for lokkit from system-config-firewall
+*)
+
+let comment = Util.comment
+let empty = Util.empty
+let eol = Util.eol
+let spc = Util.del_ws_spc
+let dels = Util.del_str
+
+let eq = del /[ \t=]+/ "="
+let token = store /[a-zA-Z0-9][a-zA-Z0-9-]*/
+
+let long_opt (n:regexp) =
+ [ dels "--" . key n . eq . token . eol ]
+
+let flag (n:regexp) =
+ [ dels "--" . key n . eol ]
+
+let option (l:string) (s:string) =
+ del ("--" . l | "-" . s) ("--" . l) . label l . eq
+
+let opt (l:string) (s:string) =
+ [ option l s . token . eol ]
+
+(* trust directive
+ -t <interface>, --trust=<interface>
+*)
+let trust =
+ [ option "trust" "t" . store Rx.device_name . eol ]
+
+(* port directive
+ -p <port>[-<port>]:<protocol>, --port=<port>[-<port>]:<protocol>
+*)
+let port =
+ let portnum = store /[0-9]+/ in
+ [ option "port" "p" .
+ [ label "start" . portnum ] .
+ (dels "-" . [ label "end" . portnum])? .
+ dels ":" . [ label "protocol" . token ] . eol ]
+
+(* custom_rules directive
+ --custom-rules=[<type>:][<table>:]<filename>
+*)
+let custom_rules =
+ let types = store /ipv4|ipv6/ in
+ let tables = store /mangle|nat|filter/ in
+ let filename = store /[^ \t\n:=][^ \t\n:]*/ in
+ [ dels "--custom-rules" . label "custom-rules" . eq .
+ [ label "type" . types . dels ":" ]? .
+ [ label "table" . tables . dels ":"]? .
+ filename . eol ]
+
+(* forward_port directive
+ --forward-port=if=<interface>:port=<port>:proto=<protocol>[:toport=<destination port>][:toaddr=<destination address>]
+*)
+let forward_port =
+ let elem (n:string) (v:lens) =
+ [ key n . eq . v ] in
+ let ipaddr = store /[0-9.]+/ in
+ let colon = dels ":" in
+ [ dels "--forward-port" . label "forward-port" . eq .
+ elem "if" token . colon .
+ elem "port" token . colon .
+ elem "proto" token .
+ (colon . elem "toport" token)? .
+ (colon . elem "toaddr" ipaddr)? . eol ]
+
+let entry =
+ long_opt /selinux|selinuxtype|addmodule|removemodule|block-icmp/
+ |flag /enabled|disabled/
+ |opt "service" "s"
+ |port
+ |trust
+ |opt "masq" "m"
+ |custom_rules
+ |forward_port
+
+let lns = (comment|empty|entry)*
+
+let xfm = transform lns (incl "/etc/sysconfig/system-config-firewall")
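+
+(* A usage sketch of the three entry styles (flag, structured port option,
+short/long option). This test is mine, not from the module's test suite,
+and the trees reflect my reading of the lenses above: *)
+
+test Lokkit.lns get "--enabled\n--port=22:tcp\n-s ssh\n" =
+  { "enabled" }
+  { "port"
+    { "start" = "22" }
+    { "protocol" = "tcp" } }
+  { "service" = "ssh" }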
--- /dev/null
+(*
+Module: LVM
+ Parses LVM metadata.
+
+Author: Gabriel de Perthuis <g2p.code+augeas@gmail.com>
+
+About: License
+ This file is licensed under the LGPL v2+.
+
+About: Configuration files
+ This lens applies to files in /etc/lvm/backup and /etc/lvm/archive.
+
+About: Examples
+ The <Test_LVM> file contains various examples and tests.
+*)
+
+module LVM =
+ autoload xfm
+
+ (* See lvm2/libdm/libdm-config.c for tokenisation;
+ * libdm uses a blacklist but I prefer the safer whitelist approach. *)
+ (* View: identifier
+ * The left hand side of a definition *)
+ let identifier = /[a-zA-Z0-9_-]+/
+
+ (* strings can contain backslash-escaped dquotes, but I don't know
+ * how to get the message across to augeas *)
+ let str = [label "str". Quote.do_dquote (store /([^\"]|\\\\.)*/)]
+ let int = [label "int". store Rx.relinteger]
+ (* View: flat_literal
+ * A literal without structure *)
+ let flat_literal = int|str
+
+ (* allow multiline and mixed int/str, used for raids and stripes *)
+ (* View: list
+ * A list containing flat literals *)
+ let list = [
+ label "list" . counter "list"
+ . del /\[[ \t\n]*/ "["
+ .([seq "list". flat_literal . del /,[ \t\n]*/ ", "]*
+ . [seq "list". flat_literal . del /[ \t\n]*/ ""])?
+ . Util.del_str "]"]
+
+ (* View: val
+ * Any value that appears on the right hand side of an assignment *)
+ let val = flat_literal | list
+
+ (* View: nondef
+ * A line that doesn't contain a statement *)
+ let nondef =
+ Util.empty
+ | Util.comment
+
+ (* Build.block couldn't be reused, because of recursion and
+ * a different philosophy of whitespace handling. *)
+ (* View: def
+ * An assignment, or a block containing definitions *)
+ let rec def = [
+ Util.indent . key identifier . (
+ del /[ \t]*\{\n/ " {\n"
+ .[label "dict".(nondef | def)*]
+ . Util.indent . Util.del_str "}\n"
+ |Sep.space_equal . val . Util.comment_or_eol)]
+
+ (* View: lns
+ * The main lens *)
+ let lns = (nondef | def)*
+
+ let filter =
+ incl "/etc/lvm/archive/*.vg"
+ . incl "/etc/lvm/backup/*"
+ . incl "/etc/lvm/lvm.conf"
+ . Util.stdexcl
+
+ let xfm = transform lns filter
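+
+  (* A minimal sketch of how values are typed in the tree: literals become
+  an "int" or "str" child under the key node. This test is mine, not from
+  <Test_LVM>, and the tree is my reading of the def lens: *)
+
+  test LVM.lns get "version = 1\n" =
+    { "version" { "int" = "1" } }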
--- /dev/null
+(*
+Module: Mailscanner
+ Parses MailScanner configuration files.
+
+Author: Andrew Colin Kissa <andrew@topdog.za.net>
+ Baruwa Enterprise Edition http://www.baruwa.com
+
+About: License
+ This file is licensed under the LGPL v2+.
+
+About: Configuration files
+ This lens applies to /etc/MailScanner/MailScanner.conf and files in
+ /etc/MailScanner/conf.d/. See <filter>.
+*)
+
+module Mailscanner =
+autoload xfm
+
+(************************************************************************
+ * Group: USEFUL PRIMITIVES
+ *************************************************************************)
+let comment = Util.comment
+
+let empty = Util.empty
+
+let space = Sep.space
+
+let eol = Util.eol
+
+let non_eq = /[^ =\t\r\n]+/
+
+let non_space = /[^# \t\n]/
+
+let any = /.*/
+
+let word = /[A-Za-z%][ :<>%A-Za-z0-9_.-]+[A-Za-z%2]/
+
+let include_kw = /include/
+
+let keys = word - include_kw
+
+let eq = del /[ \t]*=/ " ="
+
+let indent = del /[ \t]*(\n[ \t]+)?/ " "
+
+let line_value = store (non_space . any . non_space | non_space)
+
+(************************************************************************
+ * Group: Entries
+ *************************************************************************)
+
+let include_line = Build.key_value_line include_kw space (store non_eq)
+
+let normal_line = [ key keys . eq . (indent . line_value)? . eol ]
+
+(************************************************************************
+ * Group: Lns and Filter
+ *************************************************************************)
+
+let lns = (empty|include_line|normal_line|comment) *
+
+let filter = (incl "/etc/MailScanner/MailScanner.conf")
+ . (incl "/etc/MailScanner/conf.d/*.conf")
+
+let xfm = transform lns filter
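+
+(* A hedged example (mine, not from the module's test suite): keys may
+contain spaces, so a whole option name becomes the node label: *)
+
+test Mailscanner.lns get "Max Children = 5\n" =
+  { "Max Children" = "5" }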
--- /dev/null
+(*
+Module: Mailscanner_rules
+ Parses MailScanner rules files.
+
+Author: Andrew Colin Kissa <andrew@topdog.za.net>
+ Baruwa Enterprise Edition http://www.baruwa.com
+
+About: License
+ This file is licensed under the LGPL v2+.
+
+About: Configuration files
+ This lens applies to MailScanner rules files
+ The format is described below:
+
+ # NOTE: Fields are separated by TAB characters --- Important!
+ #
+ # Syntax is allow/deny/deny+delete/rename/rename to replacement-text/email-addresses,
+ # then regular expression,
+ # then log text,
+ # then user report text.
+ # The "email-addresses" can be a space or comma-separated list of email
+ # addresses. If the rule hits, the message will be sent to these address(es)
+ # instead of the original recipients.
+
+ # If a rule is a "rename" rule, then the attachment filename will be renamed
+ # according to the "Default Rename Pattern" setting in MailScanner.conf.
+ # If a rule is a "rename" rule and the "to replacement-text" is supplied, then
+ # the text matched by the regular expression in the 2nd field of the line
+ # will be replaced with the "replacement-text" string.
+ # For example, the rule
+ # rename to .ppt \.pps$ Renamed .pps to .ppt Renamed .pps to .ppt
+ # will find all filenames ending in ".pps" and rename them so they end in
+ # ".ppt" instead.
+*)
+
+module Mailscanner_Rules =
+autoload xfm
+
+(************************************************************************
+ * USEFUL PRIMITIVES
+ *************************************************************************)
+
+let eol = del /\n/ "\n"
+let ws = del /[\t]+/ "\t"
+let comment = Util.comment
+let empty = Util.empty
+let action = /allow|deny|deny\+delete|rename|rename[ ]+to[ ]+[^# \t\n]+|([A-Za-z0-9_+.-]+@[A-Za-z0-9_.-]+[, ]?)+/
+let non_space = /[^# \t\n]+/
+let non_tab = /[^\t\n]+/
+
+let field (l:string) (r:regexp)
+ = [ label l . store r ]
+
+(************************************************************************
+ * ENTRIES
+ *************************************************************************)
+
+let entry = [ seq "rule" . field "action" action
+ . ws . field "regex" non_tab
+ . ws . field "log-text" non_tab
+ . ws . field "user-report" non_tab
+ . eol ]
+
+(************************************************************************
+ * LENS
+ *************************************************************************)
+
+let lns = (comment|empty|entry)*
+
+let filter = (incl "/etc/MailScanner/filename.rules.conf")
+ . (incl "/etc/MailScanner/filetype.rules.conf")
+ . (incl "/etc/MailScanner/archives.filename.rules.conf")
+ . (incl "/etc/MailScanner/archives.filetype.rules.conf")
+
+let xfm = transform lns filter
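+
+(* A usage sketch for the tab-separated rule format (this test is mine,
+not from the original test suite; rules are numbered by the "rule"
+sequence, so the first rule appears under node "1"): *)
+
+test Mailscanner_Rules.lns get "deny\t\\.exe$\tNo executables\tNo executables allowed\n" =
+  { "1"
+    { "action" = "deny" }
+    { "regex" = "\\.exe$" }
+    { "log-text" = "No executables" }
+    { "user-report" = "No executables allowed" } }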
--- /dev/null
+(*
+ Module: MasterPasswd
+ Parses /etc/master.passwd
+
+ Author: Matt Dainty <matt@bodgit-n-scarper.com>
+
+ About: Reference
+ - man 5 master.passwd
+
+ Each line in the master.passwd file represents a single user record, whose
+ colon-separated attributes correspond to the members of the passwd struct
+
+*)
+
+module MasterPasswd =
+
+ autoload xfm
+
+(************************************************************************
+ * Group: USEFUL PRIMITIVES
+ *************************************************************************)
+
+(* Group: Comments and empty lines *)
+
+let eol = Util.eol
+let comment = Util.comment
+let empty = Util.empty
+let dels = Util.del_str
+
+let word = Rx.word
+let integer = Rx.integer
+
+let colon = Sep.colon
+
+let sto_to_eol = Passwd.sto_to_eol
+let sto_to_col = Passwd.sto_to_col
+(* Store an empty string if nothing matches *)
+let sto_to_col_or_empty = Passwd.sto_to_col_or_empty
+
+(************************************************************************
+ * Group: ENTRIES
+ *************************************************************************)
+
+let username = /[_.A-Za-z0-9][-_.A-Za-z0-9]*\$?/
+
+(* View: password
+ pw_passwd *)
+let password = [ label "password" . sto_to_col? . colon ]
+
+(* View: uid
+ pw_uid *)
+let uid = [ label "uid" . store integer . colon ]
+
+(* View: gid
+ pw_gid *)
+let gid = [ label "gid" . store integer . colon ]
+
+(* View: class
+ pw_class *)
+let class = [ label "class" . sto_to_col? . colon ]
+
+(* View: change
+ pw_change *)
+let change_date = [ label "change_date" . store integer? . colon ]
+
+(* View: expire
+ pw_expire *)
+let expire_date = [ label "expire_date" . store integer? . colon ]
+
+(* View: name
+ pw_gecos; the user's full name *)
+let name = [ label "name" . sto_to_col? . colon ]
+
+(* View: home
+ pw_dir *)
+let home = [ label "home" . sto_to_col? . colon ]
+
+(* View: shell
+ pw_shell *)
+let shell = [ label "shell" . sto_to_eol? ]
+
+(* View: entry
+ struct passwd *)
+let entry = [ key username
+ . colon
+ . password
+ . uid
+ . gid
+ . class
+ . change_date
+ . expire_date
+ . name
+ . home
+ . shell
+ . eol ]
+
+(* NIS entries *)
+let niscommon = [ label "password" . sto_to_col ]? . colon
+ . [ label "uid" . store integer ]? . colon
+ . [ label "gid" . store integer ]? . colon
+ . [ label "class" . sto_to_col ]? . colon
+ . [ label "change_date" . store integer ]? . colon
+ . [ label "expire_date" . store integer ]? . colon
+ . [ label "name" . sto_to_col ]? . colon
+ . [ label "home" . sto_to_col ]? . colon
+ . [ label "shell" . sto_to_eol ]?
+
+let nisentry =
+ let overrides =
+ colon
+ . niscommon in
+ [ dels "+@" . label "@nis" . store username . overrides . eol ]
+
+let nisuserplus =
+ let overrides =
+ colon
+ . niscommon in
+ [ dels "+" . label "@+nisuser" . store username . overrides . eol ]
+
+let nisuserminus =
+ let overrides =
+ colon
+ . niscommon in
+ [ dels "-" . label "@-nisuser" . store username . overrides . eol ]
+
+let nisdefault =
+ let overrides =
+ colon
+ . [ label "password" . sto_to_col_or_empty . colon ]
+ . [ label "uid" . store integer? . colon ]
+ . [ label "gid" . store integer? . colon ]
+ . [ label "class" . sto_to_col? . colon ]
+ . [ label "change_date" . store integer? . colon ]
+ . [ label "expire_date" . store integer? . colon ]
+ . [ label "name" . sto_to_col? . colon ]
+ . [ label "home" . sto_to_col? . colon ]
+ . [ label "shell" . sto_to_eol? ] in
+ [ dels "+" . label "@nisdefault" . overrides? . eol ]
+
+(************************************************************************
+ * LENS
+ *************************************************************************)
+
+let lns = (comment|empty|entry|nisentry|nisdefault|nisuserplus|nisuserminus) *
+
+let filter = incl "/etc/master.passwd"
+
+let xfm = transform lns filter
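+
+(* A hedged example of a full user record (mine, not from the module's test
+suite; it assumes the sto_to_* primitives inherited from Passwd accept
+spaces and "&" in fields): *)
+
+test MasterPasswd.lns get "root:*:0:0:daemon:0:0:Charlie &:/root:/bin/csh\n" =
+  { "root"
+    { "password" = "*" }
+    { "uid" = "0" }
+    { "gid" = "0" }
+    { "class" = "daemon" }
+    { "change_date" = "0" }
+    { "expire_date" = "0" }
+    { "name" = "Charlie &" }
+    { "home" = "/root" }
+    { "shell" = "/bin/csh" } }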
--- /dev/null
+(*
+Module: MCollective
+ Parses MCollective's configuration files
+
+Author: Marc Fournier <marc.fournier@camptocamp.com>
+
+About: Reference
+ This lens is based on MCollective's default client.cfg and server.cfg.
+
+About: Usage Example
+(start code)
+ augtool> get /files/etc/mcollective/client.cfg/plugin.psk
+ /files/etc/mcollective/client.cfg/plugin.psk = unset
+
+ augtool> ls /files/etc/mcollective/client.cfg/
+ topicprefix = /topic/
+ main_collective = mcollective
+ collectives = mcollective
+ [...]
+
+ augtool> set /files/etc/mcollective/client.cfg/plugin.stomp.password example123
+ augtool> save
+ Saved 1 file(s)
+(end code)
+ The <Test_MCollective> file also contains various examples.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+*)
+
+module MCollective =
+autoload xfm
+
+let lns = Simplevars.lns
+
+let filter = incl "/etc/mcollective/client.cfg"
+ . incl "/etc/mcollective/server.cfg"
+ . incl "/etc/puppetlabs/mcollective/client.cfg"
+ . incl "/etc/puppetlabs/mcollective/server.cfg"
+
+let xfm = transform lns filter
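+
+(* Since the whole lens is delegated to Simplevars, entries map directly to
+key/value nodes. A sketch (mine, not from <Test_MCollective>): *)
+
+test MCollective.lns get "main_collective = mcollective\n" =
+  { "main_collective" = "mcollective" }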
--- /dev/null
+(******************************************************************************
+Mdadm_conf module for Augeas
+
+Author: Matthew Booth <mbooth@redhat.com>
+
+Copyright (C):
+ 2011 Red Hat Inc.
+
+Reference:
+ mdadm(5)
+ config.c and policy.c from mdadm-3.2.2
+
+License:
+ This file is licensed under the LGPL v2+.
+
+This is a lens for /etc/mdadm.conf. It aims to parse every valid configuration
+file as of version 3.2.2, and many invalid ones too. This last point is a
+feature, not a bug! mdadm will generate warnings for invalid configuration that
+does not prevent correct operation of the tool. Wherever possible, we try to
+allow for this behaviour.
+
+Keywords in mdadm.conf are matched with a case-insensitive prefix match of at
+least 3 characters. Keys in key/value pairs are also matched case-insensitively,
+but require a full match. The exception is POLICY and PART-POLICY, where keys
+are matched case-sensitively.
+
+N.B. We can't use case-insensitive regular expressions in most places due to bug
+#147.
+*******************************************************************************)
+
+module Mdadm_conf =
+
+ autoload xfm
+
+
+(******************************************************************************
+ * PRIMITIVES
+ ******************************************************************************)
+
+let eol = Util.comment_or_eol
+let comment = Util.comment
+let empty = Util.empty
+let value = /[^ \t\n#]+/
+let value_no_eq = /[^ \t\n#=]+/
+let value_no_eq_sl = /[^ \t\n#=\/]+/
+
+let continuation = /\n[ \t]+/
+let space = /[ \t]+/
+let value_sep = ( del ( continuation | space . continuation? ) " "
+ | comment . del space " " )
+
+(* We parse specific keys rather than having a catch-all owing to the varying
+case of the syntax. This means the user can rely on 'array/uuid' rather than
+additionally testing for 'array/UUID'.
+
+It would be good to have an additional catchall, but I haven't been able to make
+that work.
+*)
+let keyvalue (r:regexp) (lc:string) (uc:string) =
+ [ del ( r . /=/ ) ( uc . "=" ) . label lc . store value ]
+
+let simplevalue (r:regexp) (lc:string) (uc:string) =
+ [ del r uc . label lc
+ . ( value_sep . [ label "value" . store value ] )* . eol ]
+
+
+(******************************************************************************
+ * DEVICES
+ ******************************************************************************)
+
+let dev_re = /dev(i(ce?)?)?/i
+
+let dev_containers_re = /containers/i
+let dev_partitions_re = /partitions/i
+
+let dev_containers = [ del dev_containers_re "containers" . label "containers" ]
+let dev_partitions = [ del dev_partitions_re "partitions" . label "partitions" ]
+let dev_device = [ label "name". store ( value - (dev_containers_re | dev_partitions_re)) ]
+
+(* Strictly there must be at least one device, but we err on the side of
+accepting none rather than failing to parse. *)
+let dev_devices = ( value_sep . ( dev_containers
+ | dev_partitions
+ | dev_device ) )*
+
+let device = [ del dev_re "DEVICE" . label "device" . dev_devices . eol ]
+
+
+(******************************************************************************
+ * ARRAY
+ ******************************************************************************)
+
+let array_re = /arr(ay?)?/i
+
+let arr_auto_re = /auto/i
+let arr_bitmap_re = /bitmap/i
+let arr_container_re = /container/i
+let arr_devices_re = /devices/i
+let arr_disks_re = /disks/i (* Undocumented *)
+let arr_level_re = /level/i
+let arr_member_re = /member/i
+let arr_metadata_re = /metadata/i
+let arr_name_re = /name/i
+let arr_num_devices_re = /num-devices/i
+let arr_spare_group_re = /spare-group/i
+let arr_spares_re = /spares/i
+let arr_super_minor_re = /super-minor/i
+let arr_uuid_re = /uuid/i
+
+let arr_devicename = [ store value_no_eq . label "devicename" ]
+
+let arr_auto = keyvalue arr_auto_re "auto" "AUTO"
+let arr_bitmap = keyvalue arr_bitmap_re "bitmap" "BITMAP"
+let arr_container = keyvalue arr_container_re "container" "CONTAINER"
+let arr_devices = keyvalue arr_devices_re "devices" "DEVICES"
+let arr_disks = keyvalue arr_disks_re "disks" "DISKS"
+let arr_level = keyvalue arr_level_re "level" "LEVEL"
+let arr_member = keyvalue arr_member_re "member" "MEMBER"
+let arr_metadata = keyvalue arr_metadata_re "metadata" "METADATA"
+let arr_name = keyvalue arr_name_re "name" "NAME"
+let arr_num_devices = keyvalue arr_num_devices_re "num-devices" "NUM-DEVICES"
+let arr_spare_group = keyvalue arr_spare_group_re "spare-group" "SPARE-GROUP"
+let arr_spares = keyvalue arr_spares_re "spares" "SPARES"
+let arr_super_minor = keyvalue arr_super_minor_re "super-minor" "SUPER-MINOR"
+let arr_uuid = keyvalue arr_uuid_re "uuid" "UUID"
+
+let arr_options = ( value_sep . ( arr_devicename
+ | arr_auto
+ | arr_bitmap
+ | arr_container
+ | arr_devices
+ | arr_disks
+ | arr_level
+ | arr_member
+ | arr_metadata
+ | arr_name
+ | arr_num_devices
+ | arr_spare_group
+ | arr_spares
+ | arr_super_minor
+ | arr_uuid ) )*
+
+let array = [ del array_re "ARRAY" . label "array" . arr_options . eol ]
+
+
+(******************************************************************************
+ * MAILADDR
+ ******************************************************************************)
+
+let mailaddr_re = /mai(l(a(d(dr?)?)?)?)?/i
+
+(* We intentionally allow multiple mailaddr values here, even though this is
+invalid and would produce a warning. This is better than not parsing the file.
+*)
+let mailaddr = simplevalue mailaddr_re "mailaddr" "MAILADDR"
+
+
+(******************************************************************************
+ * MAILFROM
+ ******************************************************************************)
+
+(* N.B. MAILFROM can only be abbreviated to 5 characters *)
+let mailfrom_re = /mailf(r(om?)?)?/i
+
+let mailfrom = [ del mailfrom_re "MAILFROM" . label "mailfrom"
+ . ( value_sep . [ label "value" . store value ] )* . eol ]
+
+
+(******************************************************************************
+ * PROGRAM
+ ******************************************************************************)
+
+let program_re = /pro(g(r(am?)?)?)?/i
+
+let program = simplevalue program_re "program" "PROGRAM"
+
+
+(******************************************************************************
+ * CREATE
+ ******************************************************************************)
+
+let create_re = /cre(a(te?)?)?/i
+
+let cre_auto_re = /auto/i
+let cre_owner_re = /owner/i
+let cre_group_re = /group/i
+let cre_mode_re = /mode/i
+let cre_metadata_re = /metadata/i
+let cre_symlinks_re = /symlinks/i
+
+let cre_auto = keyvalue cre_auto_re "auto" "AUTO"
+let cre_group = keyvalue cre_group_re "group" "GROUP"
+let cre_metadata = keyvalue cre_metadata_re "metadata" "METADATA"
+let cre_mode = keyvalue cre_mode_re "mode" "MODE"
+let cre_owner = keyvalue cre_owner_re "owner" "OWNER"
+let cre_symlinks = keyvalue cre_symlinks_re "symlinks" "SYMLINKS"
+
+let cre_options = ( value_sep . ( arr_auto
+ | cre_owner
+ | cre_group
+ | cre_mode
+ | cre_metadata
+ | cre_symlinks) )*
+
+let create = [ del create_re "CREATE" . label "create" . cre_options . eol ]
+
+
+(******************************************************************************
+ * HOMEHOST
+ ******************************************************************************)
+
+let homehost_re = /hom(e(h(o(st?)?)?)?)?/i
+
+let homehost = simplevalue homehost_re "homehost" "HOMEHOST"
+
+
+(******************************************************************************
+ * AUTO
+ ******************************************************************************)
+
+let auto_re = /auto?/i
+
+let aut_plus = [ key "+" . store value ]
+let aut_minus = [ key "-" . store value ]
+let aut_homehost = [ del /homehost/i "homehost" . label "homehost" ]
+
+let aut_list = ( value_sep . ( aut_plus | aut_minus | aut_homehost ) )*
+
+let auto = [ del auto_re "AUTO" . label "auto" . aut_list . eol ]
+
+
+(******************************************************************************
+ * POLICY and PART-POLICY
+ ******************************************************************************)
+
+(* PART-POLICY is undocumented. A cursory inspection of the parsing code
+suggests it's parsed the same way as POLICY, but treated slightly differently
+thereafter. *)
+
+let policy_re = /pol(i(cy?)?)?/i
+let part_policy_re = /par(t(-(p(o(l(i(cy?)?)?)?)?)?)?)?/i
+
+(* Unlike everything else, policy keys are matched case-sensitively. This means we
+don't have to mess around with explicit option matching, as the match string is
+fixed for a working configuration. *)
+
+let pol_option (act:string) =
+ [ del ( act . "=" ) ( act . "=" ) . label act . store value ]
+
+let pol_options = ( value_sep . [ key value_no_eq_sl . del "=" "="
+ . store value ] )*
+
+let policy = [ del policy_re "POLICY" . label "policy"
+ . pol_options . eol ]
+let part_policy = [ del part_policy_re "PART-POLICY" . label "part-policy"
+ . pol_options . eol ]
+
+
+(******************************************************************************
+ * LENS
+ ******************************************************************************)
+
+let lns = (comment
+ | empty
+ | device
+ | array
+ | mailaddr
+ | mailfrom
+ | program
+ | create
+ | homehost
+ | auto
+ | policy
+ | part_policy )*
+
+let filter = incl "/etc/mdadm.conf"
+ . incl "/etc/mdadm/mdadm.conf"
+
+let xfm = transform lns filter
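+
+(* A usage sketch for an ARRAY line (this test is mine, not from the
+module's test suite; the lowercase node labels show the case
+normalisation described above): *)
+
+test Mdadm_conf.lns get "ARRAY /dev/md0 level=raid1 num-devices=2 UUID=3aaa0122:29827cfa\n" =
+  { "array"
+    { "devicename" = "/dev/md0" }
+    { "level" = "raid1" }
+    { "num-devices" = "2" }
+    { "uuid" = "3aaa0122:29827cfa" } }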
--- /dev/null
+(*
+Module: Memcached
+ Parses Memcached's configuration files
+
+Author: Marc Fournier <marc.fournier@camptocamp.com>
+
+About: Reference
+ This lens is based on Memcached's default memcached.conf file.
+
+About: Usage Example
+(start code)
+ augtool> get /files/etc/memcached.conf/u
+ /files/etc/memcached.conf/u = nobody
+
+ augtool> set /files/etc/memcached.conf/m 128
+ augtool> save
+ Saved 1 file(s)
+(end code)
+ The <Test_Memcached> file also contains various examples.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+*)
+
+module Memcached =
+autoload xfm
+
+let comment = Util.comment
+let comment_eol = Util.comment_generic /[#][ \t]*/ "# "
+let option = /[a-zA-Z]/
+let val = /[^# \n\t]+/
+let empty = Util.empty
+let eol = Util.del_str "\n"
+
+let entry = [ Util.del_str "-" . key option
+ . ( Util.del_ws_spc . (store val) )?
+ . del /[ \t]*/ "" . (eol|comment_eol) ]
+
+let logfile = Build.key_value_line_comment
+ "logfile" Sep.space (store val) comment
+
+let lns = ( entry | logfile | comment | empty )*
+
+let filter = incl "/etc/memcached.conf"
+ . incl "/etc/memcachedb.conf"
+
+let xfm = transform lns filter
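+
+(* A hedged sketch of the single-letter option format (mine, not from
+<Test_Memcached>): *)
+
+test Memcached.lns get "-m 64\n-p 11211\n" =
+  { "m" = "64" }
+  { "p" = "11211" }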
--- /dev/null
+(*
+Module: Mke2fs
+ Parses /etc/mke2fs.conf
+
+Author: Raphael Pinson <raphink@gmail.com>
+
+About: Reference
+  This lens tries to keep as close as possible to `man 5 mke2fs.conf`.
+
+About: License
+  This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Configuration files
+ This lens applies to /etc/mke2fs.conf. See <filter>.
+*)
+
+
+module Mke2fs =
+ autoload xfm
+
+(************************************************************************
+ * Group: USEFUL PRIMITIVES
+ *************************************************************************)
+
+(* View: comment *)
+let comment = IniFile.comment IniFile.comment_re IniFile.comment_default
+
+(* View: sep *)
+let sep = IniFile.sep /=[ \t]*/ "="
+
+(* View: empty *)
+let empty = IniFile.empty
+
+(* View: boolean
+ The configuration parser of e2fsprogs recognizes different values
+   for booleans, so we list all of the recognized values *)
+let boolean = ("y"|"yes"|"true"|"t"|"1"|"on"|"n"|"no"|"false"|"nil"|"0"|"off")
+
+(* View: fspath *)
+let fspath = /[^ \t\n"]+/
+
+
+(************************************************************************
+ * Group: RECORD TYPES
+ *************************************************************************)
+
+
+(* View: entry
+   A generic key/value entry whose value is parsed by the given lens *)
+let entry (kw:regexp) (lns:lens) = Build.key_value_line kw sep lns
+
+
+(* View: list_sto
+ A list of values with given lens *)
+let list_sto (kw:regexp) (lns:lens) =
+ entry kw (Quote.do_dquote_opt_nil (Build.opt_list [lns] Sep.comma))
+
+(* View: entry_sto
+ Store a regexp as entry value *)
+let entry_sto (kw:regexp) (val:regexp) =
+ entry kw (Quote.do_dquote_opt_nil (store val))
+ | entry kw (Util.del_str "\"\"")
+
+
+(************************************************************************
+ * Group: COMMON ENTRIES
+ *************************************************************************)
+
+(* View: common_entries_list
+ Entries with a list value *)
+let common_entries_list = ("base_features"|"default_features"|"default_mntopts")
+
+(* View: common_entries_int
+ Entries with an integer value *)
+let common_entries_int = ("cluster_size"|"flex_bg_size"|"force_undo"
+ |"inode_ratio"|"inode_size"|"num_backup_sb")
+
+(* View: common_entries_bool
+ Entries with a boolean value *)
+let common_entries_bool = ("auto_64-bit_support"|"discard"
+ |"enable_periodic_fsck"|"lazy_itable_init"
+ |"lazy_journal_init"|"packed_meta_blocks")
+
+(* View: common_entries_string
+ Entries with a string value *)
+let common_entries_string = ("encoding"|"journal_location")
+
+(* View: common_entries_double
+ Entries with a double value *)
+let common_entries_double = ("reserved_ratio")
+
+(* View: common_entry
+ Entries shared between <defaults> and <fs_types> sections *)
+let common_entry = list_sto common_entries_list (key Rx.word)
+ | entry_sto common_entries_int Rx.integer
+ | entry_sto common_entries_bool boolean
+ | entry_sto common_entries_string Rx.word
+ | entry_sto common_entries_double Rx.decimal
+ | entry_sto "blocksize" ("-"? . Rx.integer)
+ | entry_sto "hash_alg" ("legacy"|"half_md4"|"tea")
+ | entry_sto "errors" ("continue"|"remount-ro"|"panic")
+ | list_sto "features"
+ ([del /\^/ "^" . label "disable"]?
+ . key Rx.word)
+ | list_sto "options"
+ (key Rx.word . Util.del_str "="
+ . store Rx.word)
+
+(************************************************************************
+ * Group: DEFAULTS SECTION
+ *************************************************************************)
+
+(* View: defaults_entry
+ Possible entries under the <defaults> section *)
+let defaults_entry = entry_sto "fs_type" Rx.word
+ | entry_sto "undo_dir" fspath
+
+(* View: defaults_title
+ Title for the <defaults> section *)
+let defaults_title = IniFile.title "defaults"
+
+(* View: defaults
+ A defaults section *)
+let defaults = IniFile.record defaults_title
+ ((Util.indent . (defaults_entry|common_entry)) | comment)
+
+
+(************************************************************************
+ * Group: FS_TYPES SECTION
+ *************************************************************************)
+
+(* View: fs_types_record
+ Fs group records under the <fs_types> section *)
+let fs_types_record = [ label "filesystem"
+ . Util.indent . store Rx.word
+ . del /[ \t]*=[ \t]*\{[ \t]*\n/ " = {\n"
+ . ((Util.indent . common_entry) | empty | comment)*
+ . del /[ \t]*\}[ \t]*\n/ " }\n" ]
+
+(* View: fs_types_title
+ Title for the <fs_types> section *)
+let fs_types_title = IniFile.title "fs_types"
+
+(* View: fs_types
+ A fs_types section *)
+let fs_types = IniFile.record fs_types_title
+ (fs_types_record | comment)
+
+
+(************************************************************************
+ * Group: OPTIONS SECTION
+ *************************************************************************)
+
+(* View: options_entries_int
+ Entries with an integer value *)
+let options_entries_int = ("proceed_delay"|"sync_kludge")
+
+(* View: options_entries_bool
+ Entries with a boolean value *)
+let options_entries_bool = ("old_bitmaps")
+
+(* View: options_entry
+ Possible entries under the <options> section *)
+let options_entry = entry_sto options_entries_int Rx.integer
+ | entry_sto options_entries_bool boolean
+
+(* View: options_title
+ Title for the <options> section *)
+let options_title = IniFile.title "options"
+
+(* View: options
+   An options section *)
+let options = IniFile.record options_title
+ ((Util.indent . options_entry) | comment)
+
+
+(************************************************************************
+ * Group: LENS AND FILTER
+ *************************************************************************)
+
+(* View: lns
+ The mke2fs lens
+*)
+let lns = (empty|comment)* . (defaults|fs_types|options)*
+
+(* Variable: filter *)
+let filter = incl "/etc/mke2fs.conf"
+
+let xfm = transform lns filter
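+
+(* A minimal usage sketch (this test is mine, not from the module's test
+suite): section titles become the parent node, entries its children: *)
+
+test Mke2fs.lns get "[defaults]\nblocksize = 4096\n" =
+  { "defaults"
+    { "blocksize" = "4096" } }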
+
+
--- /dev/null
+(*
+Module: Modprobe
+ Parses /etc/modprobe.conf and /etc/modprobe.d/*
+
+Original Author: David Lutterkort <lutter@redhat.com>
+
+Author: Raphael Pinson <raphink@gmail.com>
+
+About: Reference
+  This lens tries to keep as close as possible to `man 5 modprobe.conf`.
+
+About: License
+ This file is licenced under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
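+
+ In the meantime, a sketch of a possible augtool session (the file and
+ module names are made up for illustration):
+
+ > set /files/etc/modprobe.d/dummy.conf/alias dummy0
+ > set /files/etc/modprobe.d/dummy.conf/alias/modulename dummy
+ > save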
+
+About: Configuration files
+ This lens applies to /etc/modprobe.conf and /etc/modprobe.d/*. See <filter>.
+*)
+
+module Modprobe =
+autoload xfm
+
+(************************************************************************
+ * Group: USEFUL PRIMITIVES
+ *************************************************************************)
+
+(* View: comment *)
+let comment = Util.comment
+
+(* View: empty *)
+let empty = Util.empty
+
+(* View: sep_space *)
+let sep_space = del /([ \t]|(\\\\\n))+/ " "
+
+(* View: sto_no_spaces *)
+let sto_no_spaces = store /[^# \t\n\\\\]+/
+
+(* View: sto_no_colons *)
+let sto_no_colons = store /[^:# \t\n\\\\]+/
+
+(* View: sto_to_eol *)
+let sto_to_eol = store /(([^# \t\n\\\\][^#\n\\\\]*[ \t]*\\\\[ \t]*\n[ \t]*)*([^# \t\n\\\\][^#\n\\\\]*[^# \t\n\\\\]|[^# \t\n\\\\])|[^# \t\n\\\\])/
+
+(* View: alias *)
+let alias =
+ let modulename = [ label "modulename" . sto_no_spaces ] in
+ Build.key_value_line_comment "alias" sep_space
+ (sto_no_spaces . sep_space . modulename)
+ comment
+
+(************************************************************************
+ * Group: ENTRY TYPES
+ *************************************************************************)
+
+(* View: options *)
+let options =
+ let opt_value = /[^#" \t\n\\\\]+|"[^#"\n\\\\]*"/ in
+ let option = [ key Rx.word . (del /[ \t]*=[ \t]*/ "=" . store opt_value)? ] in
+ [ key "options" . sep_space . sto_no_spaces
+ . (sep_space . option)* . Util.comment_or_eol ]
+
+(* View: kv_line_command *)
+let kv_line_command (kw:regexp) =
+ let command = [ label "command" . sto_to_eol ] in
+ [ key kw . sep_space . sto_no_spaces
+ . sep_space . command . Util.comment_or_eol ]
+
+(* View: blacklist *)
+let blacklist = Build.key_value_line_comment "blacklist" sep_space
+ sto_no_spaces
+ comment
+
+(* View: config *)
+let config = Build.key_value_line_comment "config" sep_space
+ (store /binary_indexes|yes|no/)
+ comment
+
+(* View: softdep *)
+let softdep =
+ let premod = [ label "pre" . sep_space . sto_no_colons ] in
+ let pre = sep_space . Util.del_str "pre:" . premod+ in
+ let postmod = [ label "post" . sep_space . sto_no_colons ] in
+ let post = sep_space . Util.del_str "post:" . postmod+ in
+ [ key "softdep" . sep_space . sto_no_colons . pre? . post?
+ . Util.comment_or_eol ]
+
+(* View: entry *)
+let entry = alias
+ | options
+ | kv_line_command /install|remove/
+ | blacklist
+ | config
+ | softdep
+
+(************************************************************************
+ * Group: LENS AND FILTER
+ *************************************************************************)
+
+(* View: lns *)
+let lns = (comment|empty|entry)*
+
+(* View: filter *)
+let filter = (incl "/etc/modprobe.conf") .
+ (incl "/etc/modprobe.d/*").
+ (incl "/etc/modprobe.conf.local").
+ Util.stdexcl
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Modules
+ Parses /etc/modules
+
+About: Reference
+ This lens tries to keep as close as possible to `man 5 modules`.
+
+About: License
+ This file is licenced under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
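+
+ In the meantime, a sketch of a possible augtool session (the module name
+ and arguments are made up for illustration):
+
+ > set /files/etc/modules/loop max_loop=64
+ > save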
+
+About: Configuration files
+ This lens applies to /etc/modules. See <filter>.
+*)
+module Modules =
+autoload xfm
+
+(* View: word *)
+let word = /[^#, \n\t\/]+/
+
+(* View: sto_line *)
+let sto_line = store /[^# \t\n].*[^ \t\n]|[^# \t\n]/
+
+(* View: record *)
+let record = [ key word . (Util.del_ws_tab . sto_line)? . Util.eol ]
+
+(* View: lns *)
+let lns = ( Util.empty | Util.comment | record ) *
+
+(* View: filter *)
+let filter = incl "/etc/modules"
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Modules_conf
+ Parses /etc/modules.conf and /etc/conf.modules
+
+ Based on the similar Modprobe lens
+
+ Not all directives listed in modules.conf(5) are currently supported.
+*)
+module Modules_conf =
+autoload xfm
+
+let comment = Util.comment
+let empty = Util.empty
+let eol = Util.eol | Util.comment
+
+(* Basic file structure is the same as modprobe.conf *)
+let sto_to_eol = Modprobe.sto_to_eol
+let sep_space = Modprobe.sep_space
+
+let path = [ key "path" . Util.del_str "=" . sto_to_eol . eol ]
+let keep = [ key "keep" . eol ]
+let probeall = Build.key_value_line_comment "probeall" sep_space
+ sto_to_eol
+ comment
+
+let entry =
+ Modprobe.alias
+ | Modprobe.options
+ | Modprobe.kv_line_command /install|pre-install|post-install/
+ | Modprobe.kv_line_command /remove|pre-remove|post-remove/
+ | keep
+ | path
+ | probeall
+
+
+let lns = (comment|empty|entry)*
+
+let filter = (incl "/etc/modules.conf") .
+ (incl "/etc/conf.modules")
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: MongoDBServer
+ Parses /etc/mongodb.conf
+
+Author: Brian Redbeard <redbeard@dead-city.org>
+
+About: Reference
+ For information on configuration options available to mongod reference one
+ of the following resources:
+ * The MongoDB Manual - <http://docs.mongodb.org/manual/>
+ * The current options available for your operating system via:
+ > man mongod
+
+About: License
+ This file is licenced under the LGPL v2+, conforming to the other components
+ of Augeas.
+
+About: Lens Usage
+ Sample usage of this lens in augtool:
+
+ * Get your current setup
+ > print /files/etc/mongodb.conf
+ ...
+
+ * Change MongoDB port
+ > set /files/etc/mongodb.conf/port 27117
+
+ Saving your file:
+
+ > save
+
+About: Configuration files
+ This lens applies to /etc/mongodb.conf. See <filter>.
+
+About: Examples
+ The <Test_MongoDBServer> file contains various examples and tests.
+*)
+module MongoDBServer =
+
+autoload xfm
+
+(* View: entry *)
+let entry =
+ Build.key_value_line Rx.word Sep.space_equal (store Rx.space_in)
+
+(* View: lns *)
+let lns = (Util.empty | Util.comment | entry)*
+
+
+(* Variable: filter *)
+let filter = incl "/etc/mongodb.conf"
+
+let xfm = transform lns filter
--- /dev/null
+(* Monit module for Augeas
+ Author: Free Ekanayaka <free@64studio.com>
+
+ Reference: man monit (1), section "HOW TO MONITOR"
+
+ "A monit control file consists of a series of service entries and
+ global option statements in a free-format, token-oriented syntax.
+
+ Comments begin with a # and extend through the end of the line. There
+ are three kinds of tokens in the control file: grammar keywords, numbers
+ and strings. On a semantic level, the control file consists of three
+ types of statements:
+
+ 1. Global set-statements
+ A global set-statement starts with the keyword set and the item to
+ configure.
+
+ 2. Global include-statement
+ The include statement consists of the keyword include and a glob
+ string.
+
+ 3. One or more service entry statements.
+ A service entry starts with the keyword check followed by the
+ service type"
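+
+ For illustration only (not taken from the man page), a minimal control
+ file combining the three statement types could look like:
+
+ set daemon 120
+ include /etc/monit.d/*
+ check process sshd with pidfile /var/run/sshd.pid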
+
+*)
+
+module Monit =
+
+ autoload xfm
+
+(************************************************************************
+ * USEFUL PRIMITIVES
+ *************************************************************************)
+
+let spc = Sep.space
+let comment = Util.comment
+let empty = Util.empty
+
+let sto_to_spc = store Rx.space_in
+let sto_no_spc = store Rx.no_spaces
+
+let word = Rx.word
+let value = Build.key_value_line word spc sto_to_spc
+
+(************************************************************************
+ * ENTRIES
+ *************************************************************************)
+
+(* set statement *)
+let set = Build.key_value "set" spc value
+
+(* include statement *)
+let include = Build.key_value_line "include" spc sto_to_spc
+
+(* service statement *)
+let service = Build.key_value "check" spc (Build.list value spc)
+
+let entry = (set|include|service)
+
+(************************************************************************
+ * LENS
+ *************************************************************************)
+
+let lns = (comment|empty|entry) *
+
+let filter = incl "/etc/monit/monitrc"
+ . incl "/etc/monitrc"
+
+let xfm = transform lns filter
--- /dev/null
+(* Process /etc/multipath.conf *)
+(* The lens is based on the multipath.conf(5) man page *)
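+(* For illustration, a sketch of a possible augtool session (the device
+   pattern and values are made up):
+     > set /files/etc/multipath.conf/defaults/user_friendly_names yes
+     > set /files/etc/multipath.conf/blacklist/devnode "^sd[a-z]"
+     > save
+*)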
+module Multipath =
+
+autoload xfm
+
+let comment = Util.comment
+let empty = Util.empty
+let dels = Util.del_str
+let eol = Util.eol
+
+let ws = del /[ \t]+/ " "
+let indent = del /[ \t]*/ ""
+(* We require that braces are always followed by a newline *)
+let obr = del /\{([ \t]*)\n/ "{\n"
+let cbr = del /[ \t]*}[ \t]*\n/ "}\n"
+
+(* Like Rx.fspath, but we disallow quotes at the beginning or end *)
+let fspath = /[^" \t\n]|[^" \t\n][^ \t\n]*[^" \t\n]/
+
+let ikey (k:regexp) = indent . key k
+
+let section (n:regexp) (b:lens) =
+ [ ikey n . ws . obr . (b|empty|comment)* . cbr ]
+
+let kv (k:regexp) (v:regexp) =
+ [ ikey k . ws . del /"?/ "" . store v . del /"?/ "" . eol ]
+
+(* FIXME: it would be much more concise to write *)
+(* [ key k . ws . (bare | quoted) ] *)
+(* but the typechecker trips over that *)
+let qstr (k:regexp) =
+ let delq = del /['"]/ "\"" in
+ let bare = del /["']?/ "" . store /[^"' \t\n]+/ . del /["']?/ "" in
+ let quoted = delq . store /.*[ \t].*/ . delq in
+ [ ikey k . ws . bare . eol ]
+ |[ ikey k . ws . quoted . eol ]
+
+(* Settings that can be changed in various places *)
+let common_setting =
+ qstr "path_selector"
+ |kv "path_grouping_policy" /failover|multibus|group_by_(serial|prio|node_name)/
+ |kv "path_checker" /tur|emc_clariion|hp_sw|rdac|directio|rdb|readsector0/
+ |kv "prio" /const|emc|alua|ontap|rdac|hp_sw|hds|random|weightedpath/
+ |qstr "prio_args"
+ |kv "failback" (Rx.integer | /immediate|manual|followover/)
+ |kv "rr_weight" /priorities|uniform/
+ |kv "flush_on_last_del" /yes|no/
+ |kv "user_friendly_names" /yes|no/
+ |kv "no_path_retry" (Rx.integer | /fail|queue/)
+ |kv /rr_min_io(_q)?/ Rx.integer
+ |qstr "features"
+ |kv "reservation_key" Rx.word
+ |kv "deferred_remove" /yes|no/
+ |kv "delay_watch_checks" (Rx.integer | "no")
+ |kv "delay_wait_checks" (Rx.integer | "no")
+ |kv "skip_kpartx" /yes|no/
+ (* Deprecated settings for backwards compatibility *)
+ |qstr /(getuid|prio)_callout/
+ (* Settings not documented in `man multipath.conf` *)
+ |kv /rr_min_io_rq/ Rx.integer
+ |kv "udev_dir" fspath
+ |qstr "selector"
+ |kv "async_timeout" Rx.integer
+ |kv "pg_timeout" Rx.word
+ |kv "h_on_last_deleassign_maps" /yes|no/
+ |qstr "uid_attribute"
+ |kv "hwtable_regex_match" /yes|no|on|off/
+ |kv "reload_readwrite" /yes|no/
+
+let default_setting =
+ common_setting
+ |kv "polling_interval" Rx.integer
+ |kv "max_polling_interval" Rx.integer
+ |kv "multipath_dir" fspath
+ |kv "find_multipaths" /yes|no/
+ |kv "verbosity" /[0-6]/
+ |kv "reassign_maps" /yes|no/
+ |kv "uid_attrribute" Rx.word
+ |kv "max_fds" (Rx.integer|"max")
+ |kv "checker_timeout" Rx.integer
+ |kv "fast_io_fail_tmo" (Rx.integer|"off")
+ |kv "dev_loss_tmo" (Rx.integer|"infinity")
+ |kv "queue_without_daemon" /yes|no/
+ |kv "bindings_file" fspath
+ |kv "wwids_file" fspath
+ |kv "log_checker_err" /once|always/
+ |kv "retain_attached_hw_handler" /yes|no/
+ |kv "detect_prio" /yes|no/
+ |kv "hw_str_match" /yes|no/
+ |kv "force_sync" /yes|no/
+ |kv "config_dir" fspath
+ |kv "missing_uev_wait_timeout" Rx.integer
+ |kv "ignore_new_boot_devs" /yes|no/
+ |kv "retrigger_tries" Rx.integer
+ |kv "retrigger_delay" Rx.integer
+ |kv "new_bindings_in_boot" /yes|no/
+
+(* A device subsection *)
+let device =
+ let setting =
+ qstr /vendor|product|product_blacklist|hardware_handler|alias_prefix/
+ |default_setting in
+ section "device" setting
+
+(* The defaults section *)
+let defaults =
+ section "defaults" default_setting
+
+(* The blacklist and blacklist_exceptions sections *)
+let blacklist =
+ let setting =
+ qstr /devnode|wwid|property/
+ |device in
+ section /blacklist(_exceptions)?/ setting
+
+(* A multipath subsection *)
+let multipath =
+ let setting =
+ kv "wwid" (Rx.word|"*")
+ |qstr "alias"
+ |common_setting in
+ section "multipath" setting
+
+(* The multipaths section *)
+let multipaths =
+ section "multipaths" multipath
+
+(* The devices section *)
+let devices =
+ section "devices" device
+
+let lns = (comment|empty|defaults|blacklist|devices|multipaths)*
+
+let xfm = transform lns (incl "/etc/multipath.conf" .
+ incl "/etc/multipath/conf.d/*.conf")
--- /dev/null
+(* MySQL module for Augeas *)
+(* Author: Tim Stoop <tim@kumina.nl> *)
+(* Heavily based on php.aug by Raphael Pinson *)
+(* <raphink@gmail.com> *)
+(* *)
+
+module MySQL =
+ autoload xfm
+
+(************************************************************************
+ * INI File settings
+ *************************************************************************)
+let comment = IniFile.comment IniFile.comment_re "#"
+
+let sep = IniFile.sep IniFile.sep_re IniFile.sep_default
+
+let entry =
+ let bare = Quote.do_dquote_opt_nil (store /[^#;" \t\r\n]+([ \t]+[^#;" \t\r\n]+)*/)
+ in let quoted = Quote.do_dquote (store /[^"\r\n]*[#;]+[^"\r\n]*/)
+ in [ Util.indent . key IniFile.entry_re . sep . Sep.opt_space . bare . (comment|IniFile.eol) ]
+ | [ Util.indent . key IniFile.entry_re . sep . Sep.opt_space . quoted . (comment|IniFile.eol) ]
+ | [ Util.indent . key IniFile.entry_re . store // . (comment|IniFile.eol) ]
+ | comment
+
+(************************************************************************
+ * sections, led by a "[section]" header
+ * We can't use titles as node names here since they could contain "/"
+ * We remove #comment from possible keys
+ * since it is used as label for comments
+ * We also remove / as first character
+ * because augeas doesn't like '/' keys (although it is legal in INI Files)
+ *************************************************************************)
+let title = IniFile.indented_title_label "target" IniFile.record_label_re
+let record = IniFile.record title entry
+
+let includedir = Build.key_value_line /!include(dir)?/ Sep.space (store Rx.fspath)
+ . (comment|IniFile.empty)*
+
+let lns = (comment|IniFile.empty)* . (record|includedir)*
+
+let filter = (incl "/etc/mysql/my.cnf")
+ . (incl "/etc/mysql/conf.d/*.cnf")
+ . (incl "/etc/my.cnf")
+ . (incl "/etc/my.cnf.d/*.cnf")
+
+let xfm = transform lns filter
+
--- /dev/null
+(*
+Module: NagiosCfg
+ Parses /etc/{nagios{3,},icinga}/*.cfg
+
+Authors: Sebastien Aperghis-Tramoni <sebastien@aperghis.net>
+ Raphaël Pinson <raphink@gmail.com>
+
+About: License
+ This file is licenced under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
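+
+ In the meantime, a sketch of a possible augtool session (the option name
+ and value are made up for illustration):
+
+ > set /files/etc/nagios/nagios.cfg/check_external_commands 1
+ > save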
+
+About: Configuration files
+ This lens applies to /etc/{nagios{3,},icinga}/*.cfg. See <filter>.
+*)
+
+module NagiosCfg =
+autoload xfm
+
+(************************************************************************
+ * Group: Utility variables/functions
+ ************************************************************************)
+(* View: param_def
+ define a field *)
+let param_def =
+ let space_in = /[^ \t\n][^\n=]*[^ \t\n]|[^ \t\n]/
+ in key /[A-Za-z0-9_]+/
+ . Sep.opt_space . Sep.equal . Sep.opt_space
+ . store space_in
+
+(* View: macro_def
+ Macro line, as used in resource.cfg *)
+let macro_def =
+ let macro = /\$[A-Za-z0-9]+\$/
+ in let macro_decl = Rx.word | Rx.fspath
+ in key macro . Sep.space_equal . store macro_decl
+
+(************************************************************************
+ * Group: Entries
+ ************************************************************************)
+(* View: param
+ Params can have sub params *)
+let param =
+ [ Util.indent . param_def
+ . [ Sep.space . param_def ]*
+ . Util.eol ]
+
+(* View: macro *)
+let macro = [ Util.indent . macro_def . Util.eol ]
+
+(************************************************************************
+ * Group: Lens
+ ************************************************************************)
+(* View: entry
+ Define the accepted entries, such as param for regular configuration
+ files, and macro for resources.cfg .*)
+let entry = param
+ | macro
+
+(* View: lns
+ main structure *)
+let lns = ( Util.empty | Util.comment | entry )*
+
+(* View: filter *)
+let filter = incl "/etc/nagios3/*.cfg"
+ . incl "/etc/nagios/*.cfg"
+ . incl "/etc/icinga/*.cfg"
+ . excl "/etc/nagios3/commands.cfg"
+ . excl "/etc/nagios/commands.cfg"
+ . excl "/etc/nagios/nrpe.cfg"
+ . incl "/etc/icinga/commands.cfg"
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: NagiosObjects
+ Parses /etc/{nagios{3,},icinga}/objects/*.cfg
+
+Authors: Sebastien Aperghis-Tramoni <sebastien@aperghis.net>
+ Raphaël Pinson <raphink@gmail.com>
+
+About: License
+ This file is licenced under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
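+
+ In the meantime, for illustration (the host name and address are made
+ up), an object definition this lens parses looks like:
+
+ define host {
+     host_name  web01
+     address    192.0.2.10
+ }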
+
+About: Configuration files
+
+ This lens applies to /etc/{nagios{3,},icinga}/objects/*.cfg. See <filter>.
+*)
+
+module NagiosObjects =
+ autoload xfm
+
+ (* basic atoms *)
+ let eol = Util.eol
+ let ws = Sep.space
+
+ let keyword = key /[A-Za-z0-9_]+/
+
+ (* optional, but preferred, whitespace *)
+ let opt_ws = del Rx.opt_space " "
+
+ (* define an empty line *)
+ let empty = Util.empty
+
+ (* define a comment *)
+ let comment = Util.comment_generic /[ \t]*[#;][ \t]*/ "# "
+
+ (* define a field *)
+ let object_field =
+ let field_name = keyword in
+ let field_value = store Rx.space_in in
+ [ Util.indent . field_name . ws
+ . field_value . eol ]
+
+ (* define an object *)
+ let object_def =
+ let object_type = keyword in
+ [ Util.indent
+ . Util.del_str "define" . ws
+ . object_type . opt_ws
+ . Util.del_str "{" . eol
+ . ( empty | comment | object_field )*
+ . Util.indent . Util.del_str "}" . eol ]
+
+ (* main structure *)
+ let lns = ( empty | comment | object_def )*
+
+ let filter = incl "/etc/nagios3/objects/*.cfg"
+ . incl "/etc/nagios/objects/*.cfg"
+ . incl "/etc/icinga/objects/*.cfg"
+
+ let xfm = transform lns filter
+
--- /dev/null
+(*
+Module: Netmasks
+ Parses /etc/inet/netmasks on Solaris
+
+Author: Dominic Cleal <dcleal@redhat.com>
+
+About: Reference
+ This lens tries to keep as close as possible to `man 4 netmasks`.
+
+About: Licence
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
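+ A sketch of a possible augtool session (the addresses are made up for
+ illustration):
+
+ > set /files/etc/inet/netmasks/1/network 192.168.1.0
+ > set /files/etc/inet/netmasks/1/netmask 255.255.255.0
+ > save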
+
+About: Configuration files
+ This lens applies to /etc/netmasks and /etc/inet/netmasks. See <filter>.
+*)
+
+module Netmasks =
+autoload xfm
+
+(************************************************************************
+ * Group: USEFUL PRIMITIVES
+ ************************************************************************)
+
+(* View: comment *)
+let comment = Util.comment
+
+(* View: comment_or_eol *)
+let comment_or_eol = Util.comment_or_eol
+
+(* View: indent *)
+let indent = Util.indent
+
+(* View: empty *)
+let empty = Util.empty
+
+(* View: sep
+ The separator for network/mask entries *)
+let sep = Util.del_ws_tab
+
+(************************************************************************
+ * Group: ENTRIES
+ ************************************************************************)
+
+(* View: entry
+ Network / netmask line *)
+let entry = [ seq "network" . indent .
+ [ label "network" . store Rx.ipv4 ] . sep .
+ [ label "netmask" . store Rx.ipv4 ] . comment_or_eol ]
+
+(************************************************************************
+ * Group: LENS
+ ************************************************************************)
+
+(* View: lns *)
+let lns = ( empty
+ | comment
+ | entry )*
+
+(* Variable: filter *)
+let filter = (incl "/etc/netmasks"
+ . incl "/etc/inet/netmasks")
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: NetworkManager
+ Parses /etc/NetworkManager/system-connections/* files which are GLib
+ key-value setting files.
+
+Author: Raphael Pinson <raphael.pinson@camptocamp.com>
+
+About: License
+ This file is licenced under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
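+
+ In the meantime, a sketch of a possible augtool session (the connection
+ file, section and key names are made up for illustration):
+
+ > set /files/etc/NetworkManager/system-connections/wifi0/connection/id wifi0
+ > set /files/etc/NetworkManager/system-connections/wifi0/ipv4/method auto
+ > save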
+
+About: Configuration files
+ This lens applies to /etc/NetworkManager/system-connections/*. See <filter>.
+
+About: Examples
+ The <Test_NetworkManager> file contains various examples and tests.
+*)
+
+module NetworkManager =
+autoload xfm
+
+(************************************************************************
+ * INI File settings
+ *
+ * GLib only supports "#" as comment marker and "=" as separator
+ *************************************************************************)
+let comment = IniFile.comment "#" "#"
+let sep = Sep.equal
+let eol = Util.eol
+
+(************************************************************************
+ * ENTRY
+ * GLib entries can contain semicolons, entry names can contain spaces and
+ * brackets
+ *
+ * At least the entry for a WPA-PSK definition can contain any printable
+ * ASCII character, including '#' and ' '. For this reason, comments
+ * following an entry cannot be supported.
+ *************************************************************************)
+(* Variable: entry_re *)
+let entry_re = /[A-Za-z][A-Za-z0-9:._\(\) \t-]+/
+
+(* Lens: entry *)
+let entry = [ key entry_re . sep
+ . IniFile.sto_to_eol? . eol ]
+ | comment
+
+(************************************************************************
+ * RECORD
+ * GLib uses standard INI File records
+ *************************************************************************)
+let title = IniFile.indented_title IniFile.record_re
+let record = IniFile.record title entry
+
+
+(************************************************************************
+ * LENS & FILTER
+ * GLib uses standard INI File records
+ *************************************************************************)
+let lns = IniFile.lns record comment
+
+(* Variable: filter *)
+let filter = incl "/etc/NetworkManager/system-connections/*"
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Networks
+ Parses /etc/networks
+
+Author: Raphael Pinson <raphink@gmail.com>
+
+About: Reference
+ This lens tries to keep as close as possible to `man 5 networks`.
+
+About: License
+ This file is licenced under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
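+
+ In the meantime, a sketch of a possible augtool session (the network
+ name and number are made up for illustration):
+
+ > set /files/etc/networks/1/name localnet
+ > set /files/etc/networks/1/number 192.168.1
+ > save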
+
+About: Configuration files
+ This lens applies to /etc/networks. See <filter>.
+*)
+
+module Networks =
+
+autoload xfm
+
+(* View: ipv4
+ A network IP, trailing .0 may be omitted *)
+let ipv4 =
+ let dot = "." in
+ let digits = /(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)/ in
+ digits . (dot . digits . (dot . digits . (dot . digits)?)?)?
+
+(*View: entry *)
+let entry =
+ let alias = [ seq "alias" . store Rx.word ] in
+ [ seq "network" . counter "alias"
+ . [ label "name" . store Rx.word ]
+ . [ Sep.space . label "number" . store ipv4 ]
+ . [ Sep.space . label "aliases" . Build.opt_list alias Sep.space ]?
+ . (Util.eol|Util.comment) ]
+
+(* View: lns *)
+let lns = ( Util.empty | Util.comment | entry )*
+
+(* View: filter *)
+let filter = incl "/etc/networks"
+
+let xfm = transform lns filter
--- /dev/null
+(* Module: Nginx
+ Nginx module for Augeas
+
+Authors: Ian Berry <iberry@barracuda.com>
+ Raphael Pinson <raphael.pinson@camptocamp.com>
+
+About: Reference
+
+ This module was built to support a limited subset of nginx
+ configuration syntax. It works fine with simple blocks and
+ field/value lines.
+
+About: License
+ This file is licenced under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
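+
+ In the meantime, a sketch of a possible augtool session (the values are
+ made up for illustration):
+
+ > set /files/etc/nginx/nginx.conf/http/server/listen 8080
+ > set /files/etc/nginx/nginx.conf/http/server/server_name example.com
+ > save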
+
+About: Configuration files
+ This lens applies to /etc/nginx/nginx.conf. See <filter>.
+
+About: Examples
+ The <Test_Nginx> file contains various examples and tests.
+
+About: TODO
+ * Convert statement keywords to a regex
+ * Support more advanced block syntax (location)
+*)
+
+module Nginx =
+
+autoload xfm
+
+(* Variable: word *)
+let word = /[A-Za-z0-9_.:-]+/
+
+(* Variable: block_re
+ The keywords reserved for block entries *)
+let block_re = "http" | "events" | "server" | "mail" | "stream"
+
+(* All block keywords, including the ones we treat specially *)
+let block_re_all = block_re | "if" | "location" | "geo" | "map"
+ | "split_clients" | "upstream"
+
+(* View: simple
+ A simple entry *)
+let simple =
+ let kw = word - block_re_all
+ in let mask = [ label "mask" . Util.del_str "/" . store Rx.integer ]
+ in let sto = store /[^ \t\n;#]([^";#]|"[^"]*\")*/
+ in [ Util.indent .
+ key kw . mask? .
+ (Sep.space . sto)? . Sep.semicolon .
+ (Util.eol|Util.comment_eol) ]
+
+(* View: server
+ A simple server entry *)
+let server =
+ let address = /[A-Za-z0-9_.:\/-]+/
+ in [ Util.indent . label "@server" . Util.del_str "server"
+ . [ Sep.space . label "@address" . store address ]
+ . [ Sep.space . key word . (Sep.equal . store word)? ]*
+ . Sep.semicolon
+ . (Util.eol|Util.comment_eol) ]
+
+let arg (name:string) (rx:regexp) =
+ [ label name . Sep.space . store rx ]
+
+(* Match any argument (as much as possible) *)
+let any_rx =
+ let bare_rx = /[^" \t\n{][^ \t\n{]*/ in
+ let dquote_rx = /"([^\"]|\\.)*"/ in
+ bare_rx | dquote_rx
+
+let any_arg (name:string) = arg name any_rx
+
+(* 'if' conditions are enclosed in matching parens which we can't match
+ precisely with a regular expression. Instead, we gobble up anything that
+ doesn't contain an opening brace. That can of course lead to trouble if
+ a condition actually contains an opening brace *)
+let block_if = key "if"
+ . arg "#cond" /\(([^ \t\n{]|[ \t\n][^{])*\)/
+
+let block_location = key "location"
+ . (arg "#comp" /=|~|~\*|\^~/)?
+ . any_arg "#uri"
+
+let block_geo = key "geo"
+ . (any_arg "#address")?
+ . any_arg "#geo"
+
+let block_map = key "map"
+ . any_arg "#source"
+ . any_arg "#variable"
+
+let block_split_clients = key "split_clients"
+ . any_arg "#string"
+ . any_arg "#variable"
+
+let block_upstream = key "upstream"
+ . any_arg "#name"
+
+let block_head = key block_re
+ | block_if
+ | block_location
+ | block_geo
+ | block_map
+ | block_split_clients
+ | block_upstream
+
+(* View: block
+ A block containing <simple> entries *)
+let block (entry : lens) =
+ [ Util.indent . block_head
+ . Build.block_newlines entry Util.comment
+ . Util.eol ]
+
+let rec directive = simple | server | block directive
+
+(* View: lns *)
+let lns = ( Util.comment | Util.empty | directive )*
+
+(* Variable: filter *)
+let filter = incl "/etc/nginx/nginx.conf"
+ . incl "/etc/nginx/conf.d/*.conf"
+ . incl "/etc/nginx/sites-available/*"
+ . incl "/etc/nginx/sites-enabled/*"
+ . incl "/usr/portage/www-servers/nginx/files/nginx.conf"
+ . incl "/usr/local/etc/nginx/nginx.conf"
+ . incl "/usr/local/etc/nginx/conf.d/*.conf"
+ . incl "/usr/local/etc/nginx/sites-available/*"
+ . incl "/usr/local/etc/nginx/sites-enabled/*"
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Nrpe
+ Parses nagios-nrpe configuration files.
+
+Author: Marc Fournier <marc.fournier@camptocamp.com>
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+*)
+
+module Nrpe =
+ autoload xfm
+
+
+let eol = Util.eol
+let eq = Sep.equal
+
+(* View: word *)
+let word = /[^=\n\t ]+/
+
+(* View: item_re *)
+let item_re = /[^#=\n\t\/ ]+/ - (/command\[[^]\/\n]+\]/ | "include" | "include_dir")
+
+(* View: command
+ nrpe.cfg usually has many entries defining commands to run
+
+ > command[check_foo]=/path/to/nagios/plugin -w 123 -c 456
+ > command[check_bar]=/path/to/another/nagios/plugin --option
+*)
+let command =
+ let obrkt = del /\[/ "[" in
+ let cbrkt = del /\]/ "]" in
+ [ key "command" .
+ [ obrkt . key /[^]\/\n]+/ . cbrkt . eq
+ . store /[^\n]+/ . del /\n/ "\n" ]
+ ]
+
+
+(* View: item
+ regular entries
+
+ > allow_bash_command_substitution=0
+*)
+let item = [ key item_re . eq . store word . eol ]
+
+(* View: include
+ An include entry.
+
+ nrpe.cfg can include more than one file or directory of files
+
+ > include=/path/to/file1.cfg
+ > include=/path/to/file2.cfg
+*)
+let include = [ key "include" .
+ [ label "file" . eq . store word . eol ]
+]
+
+(* View: include_dir
+ > include_dir=/path/to/dir/
+*)
+let include_dir = [ key "include_dir" .
+ [ label "dir" . eq . store word . eol ]
+]
+
+
+(* View: comment
+ Nrpe comments must start at the beginning of a line *)
+let comment = Util.comment_generic /#[ \t]*/ "# "
+
+(* blank lines and empty comments *)
+let empty = Util.empty
+
+(* View: lns
+ The Nrpe lens *)
+let lns = ( command | include | include_dir | item | comment | empty ) *
+
+(* View: filter
+ File filter *)
+let filter = incl "/etc/nrpe.cfg" .
+ incl "/etc/nagios/nrpe.cfg"
+
+let xfm = transform lns (filter)
+
--- /dev/null
+(*
+Module: Nslcd
+ Parses /etc/nslcd.conf
+
+Author: Jose Plana <jplana@gmail.com>
+
+About: Reference
+ This lens tries to keep as close as possible to `man 5 nslcd.conf`.
+
+About: License
+ This file is licenced under the LGPL v2+, like the rest of Augeas.
+
+
+About: Lens Usage
+
+ Sample usage of this lens in augtool:
+
+ * get uid
+ > get /files/etc/nslcd.conf/threads
+
+ * set ldap URI
+ > set /files/etc/nslcd.conf/uri "ldaps://x.y.z"
+
+ * get cache values
+ > get /files/etc/nslcd.conf/cache
+
+ * change syslog level to debug
+ > set /files/etc/nslcd.conf/log "syslog debug"
+
+ * add/change filter for the passwd map
+ > set /files/etc/nslcd.conf/filter/passwd "(objectClass=posixGroup)"
+
+ * change the default search scope
+ > set /files/etc/nslcd.conf/scope[count( * )] "subtree"
+
+ * get the default search scope
+ > get /files/etc/nslcd.conf/scope[count( * )]
+
+ * add/set a scope search value for a specific (host) map
+ > set /files/etc/nslcd.conf/scope[host]/host "subtree"
+
+ * get all default base search
+ > match /files/etc/nslcd.conf/base[count( * ) = 0]
+
+ * get the 3rd base search default value
+ > get /files/etc/nslcd.conf/base[3]
+
+ * add a new base search default value
+ > set /files/etc/nslcd.conf/base[last()+1] "dc=example,dc=com"
+
+ * change a base search default value to a new base value
+ > set /files/etc/nslcd.conf/base[self::* = "dc=example,dc=com"] "dc=test,dc=com"
+
+ * add/change a base search for a specific map (hosts)
+ > set /files/etc/nslcd.conf/base[hosts]/hosts "dc=hosts,dc=example,dc=com"
+
+ * add a base search for a specific map (passwd)
+ > set /files/etc/nslcd.conf/base[last()+1]/passwd "dc=users,dc=example,dc=com"
+
+ * remove all base search value for a map (rpc)
+ > rm /files/etc/nslcd.conf/base/rpc
+
+ * remove a specific search base value for a map (passwd)
+ > rm /files/etc/nslcd.conf/base/passwd[self::* = "dc=users,dc=example,dc=com"]
+
+ * get an attribute mapping value for a map
+ > get /files/etc/nslcd.conf/map/passwd/homeDirectory
+
+ * get all attribute values for a map
+ > match /files/etc/nslcd.conf/map/passwd/*
+
+ * set a specific attribute for a map
+ > set /files/etc/nslcd.conf/map/passwd/homeDirectory "\"${homeDirectory:-/home/$uid}\""
+
+ * add/change a specific attribute for a map (a map that might not be defined before)
+ > set /files/etc/nslcd.conf/map[shadow/userPassword]/shadow/userPassword "*"
+
+ * remove an attribute for a specific map
+ > rm /files/etc/nslcd.conf/map/shadow/userPassword
+
+ * remove all attributes for a specific map
+ > rm /files/etc/nslcd.conf/map/passwd/*
+
+About: Configuration files
+ This lens applies to /etc/nslcd.conf. See <filter>.
+
+About: Examples
+ The <Test_Nslcd> file contains various examples and tests.
+*)
+
+module Nslcd =
+autoload xfm
+
+
+(************************************************************************
+ * Group: USEFUL PRIMITIVES
+ *************************************************************************)
+
+
+(* Group: Comments and empty lines *)
+
+(* View: eol *)
+let eol = Util.eol
+(* View: empty *)
+let empty = Util.empty
+(* View: spc *)
+let spc = Util.del_ws_spc
+(* View: comma *)
+let comma = Sep.comma
+(* View: comment *)
+let comment = Util.comment
+(* View: do_dquote *)
+let do_dquote = Quote.do_dquote
+(* View: opt_list *)
+let opt_list = Build.opt_list
+
+(* Group: Ldap related values
+Values that need to be parsed.
+*)
+
+(* Variable: ldap_rdn *)
+let ldap_rdn = /[A-Za-z][A-Za-z]+=[A-Za-z0-9_.-]+/
+(* Variable: ldap_dn *)
+let ldap_dn = ldap_rdn . (/(,)?/ . ldap_rdn)*
+(* Variable: ldap_filter *)
+let ldap_filter = /\(.*\)/
+(* Variable: ldap_scope *)
+let ldap_scope = /sub(tree)?|one(level)?|base/
+(* Variable: map_names *)
+let map_names = /alias(es)?/
+ | /ether(s)?/
+ | /group/
+ | /host(s)?/
+ | /netgroup/
+ | /network(s)?/
+ | /passwd/
+ | /protocol(s)?/
+ | /rpc/
+ | /service(s)?/
+ | /shadow/
+(* Variable: key_name *)
+let key_name = /[^ #\n\t\/][^ #\n\t\/]+/
+
+
+(************************************************************************
+ * Group: CONFIGURATION ENTRIES
+ *************************************************************************)
+
+(* Group: Generic definitions *)
+
+(* View: simple_entry
+The simplest configuration option: a key, whitespace and a value, as in `gid id`.
+*)
+let simple_entry (kw:string) = Build.key_ws_value kw
+
+(* View: simple_entry_quoted_value
+Simple entry with quoted value
+*)
+let simple_entry_quoted_value (kw:string) = Build.key_value_line kw spc (do_dquote (store /.*/))
+
+(* View: simple_entry_opt_list_value
+Simple entry whose value is a list separated by the given separator
+*)
+let simple_entry_opt_list_value (kw:string) (lsep:lens) = Build.key_value_line kw spc (opt_list [ seq kw . store /[^, \t\n\r]+/ ] (lsep))
+(* View: key_value_line_regexp
+A simple configuration option but specifying the regex for the value.
+*)
+let key_value_line_regexp (kw:string) (sto:regexp) = Build.key_value_line kw spc (store sto)
+
+(* View: mapped_entry
+A mapped configuration as in `filter MAP option`.
+*)
+let mapped_entry (kw:string) (sto:regexp) = [ key kw . spc
+ . Build.key_value_line map_names spc (store sto)
+ ]
+(* View: key_value_line_regexp_opt_map
+A mapped configuration where the MAP value is optional, as in `scope [MAP] value`.
+*)
+let key_value_line_regexp_opt_map (kw:string) (sto:regexp) =
+ ( key_value_line_regexp kw sto | mapped_entry kw sto )
+
+(* View: map_entry
+A map entry as in `map MAP ATTRIBUTE NEWATTRIBUTE`.
+*)
+let map_entry = [ key "map" . spc
+ . [ key map_names . spc
+ . [ key key_name . spc . store Rx.no_spaces ]
+                  ] . eol
+ ]
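+
+(* A quick illustration (a sketch; the canonical tests live in <Test_Nslcd>):
+   a map line parses into nested map/MAP/ATTRIBUTE nodes. *)
+test map_entry get "map passwd uid sAMAccountName\n" =
+  { "map" { "passwd" { "uid" = "sAMAccountName" } } }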
+
+(* Group: Option definitions *)
+
+(* View: Base entry *)
+let base_entry = key_value_line_regexp_opt_map "base" ldap_dn
+
+(* View: Scope entry *)
+let scope_entry = key_value_line_regexp_opt_map "scope" ldap_scope
+
+(* View: Filter entry *)
+let filter_entry = mapped_entry "filter" ldap_filter
+
+(* View: entries
+All the combined entries.
+*)
+let entries = map_entry
+ | base_entry
+ | scope_entry
+ | filter_entry
+ | simple_entry "threads"
+ | simple_entry "uid"
+ | simple_entry "gid"
+ | simple_entry_opt_list_value "uri" spc
+ | simple_entry "ldap_version"
+ | simple_entry "binddn"
+ | simple_entry "bindpw"
+ | simple_entry "rootpwmoddn"
+ | simple_entry "rootpwmodpw"
+ | simple_entry "sasl_mech"
+ | simple_entry "sasl_realm"
+ | simple_entry "sasl_authcid"
+ | simple_entry "sasl_authzid"
+ | simple_entry "sasl_secprops"
+ | simple_entry "sasl_canonicalize"
+ | simple_entry "krb5_ccname"
+ | simple_entry "deref"
+ | simple_entry "referrals"
+ | simple_entry "bind_timelimit"
+ | simple_entry "timelimit"
+ | simple_entry "idle_timelimit"
+ | simple_entry "reconnect_sleeptime"
+ | simple_entry "reconnect_retrytime"
+ | simple_entry "ssl"
+ | simple_entry "tls_reqcert"
+ | simple_entry "tls_cacertdir"
+ | simple_entry "tls_cacertfile"
+ | simple_entry "tls_randfile"
+ | simple_entry "tls_ciphers"
+ | simple_entry "tls_cert"
+ | simple_entry "tls_key"
+ | simple_entry "pagesize"
+ | simple_entry_opt_list_value "nss_initgroups_ignoreusers" comma
+ | simple_entry "nss_min_uid"
+ | simple_entry "nss_nested_groups"
+ | simple_entry "nss_getgrent_skipmembers"
+ | simple_entry "nss_disable_enumeration"
+ | simple_entry "validnames"
+ | simple_entry "ignorecase"
+ | simple_entry "pam_authz_search"
+ | simple_entry_quoted_value "pam_password_prohibit_message"
+ | simple_entry "reconnect_invalidate"
+ | simple_entry "cache"
+ | simple_entry "log"
+ | simple_entry "pam_authc_ppolicy"
+
+(* View: lns *)
+let lns = (entries|empty|comment)+
+
+(* View: filter *)
+let filter = incl "/etc/nslcd.conf"
+ . Util.stdexcl
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Nsswitch
+ Parses /etc/nsswitch.conf
+
+Author: Raphael Pinson <raphink@gmail.com>
+
+About: Reference
+ This lens tries to keep as close as possible to `man nsswitch.conf`.
+
+About: Licence
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+
+About: Configuration files
+ This lens applies to /etc/nsswitch.conf. See <filter>.
+*)
+
+module Nsswitch =
+autoload xfm
+
+(************************************************************************
+ * Group: USEFUL PRIMITIVES
+ *************************************************************************)
+
+(* View: comment *)
+let comment = Util.comment
+
+(* View: empty *)
+let empty = Util.empty
+
+(* View: sep_colon
+ The separator for database entries *)
+let sep_colon = del /:[ \t]*/ ": "
+
+(* View: database_kw
+ The database specification like `passwd', `shadow', or `hosts' *)
+let database_kw = Rx.word
+
+(* View: service
+ The service specification like `files', `db', or `nis' *)
+let service = [ label "service" . store Rx.word ]
+
+(* View: reaction
+ The reaction on lookup result like `[NOTFOUND=return]'
+ TODO: Use case-insensitive regexps when ticket #147 is fixed.
+*)
+let reaction =
+ let status_kw = /[Ss][Uu][Cc][Cc][Ee][Ss][Ss]/
+ | /[Nn][Oo][Tt][Ff][Oo][Uu][Nn][Dd]/
+ | /[Uu][Nn][Aa][Vv][Aa][Ii][Ll]/
+ | /[Tt][Rr][Yy][Aa][Gg][Aa][Ii][Nn]/
+ in let action_kw = /[Rr][Ee][Tt][Uu][Rr][Nn]/
+ | /[Cc][Oo][Nn][Tt][Ii][Nn][Uu][Ee]/
+ | /[Mm][Ee][Rr][Gg][Ee]/
+ in let negate = [ Util.del_str "!" . label "negate" ]
+ in let reaction_entry = [ label "status" . negate?
+ . store status_kw
+ . Util.del_str "="
+ . [ label "action" . store action_kw ] ]
+ in Util.del_str "["
+ . [ label "reaction"
+ . (Build.opt_list reaction_entry Sep.space) ]
+ . Util.del_str "]"
+
+(* View: database *)
+let database =
+ [ label "database" . store database_kw
+ . sep_colon
+ . (Build.opt_list
+ (service|reaction)
+ Sep.space)
+ . Util.comment_or_eol ]
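+
+(* A quick illustration (a sketch; canonical tests belong in a separate test
+   module): a line with a reaction parses into service and reaction nodes. *)
+test database get "hosts: files [NOTFOUND=return] dns\n" =
+  { "database" = "hosts"
+    { "service" = "files" }
+    { "reaction"
+      { "status" = "NOTFOUND"
+        { "action" = "return" } } }
+    { "service" = "dns" } }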
+
+(* View: lns *)
+let lns = ( empty | comment | database )*
+
+(* Variable: filter *)
+let filter = (incl "/etc/nsswitch.conf")
+
+let xfm = transform lns filter
--- /dev/null
+(* NTP module for Augeas *)
+(* Author: Raphael Pinson <raphink@gmail.com> *)
+(* *)
+(* Status: basic settings supported *)
+
+module Ntp =
+ autoload xfm
+
+
+ (* Define useful shortcuts *)
+
+ let eol = del /[ \t]*/ "" . [ label "#comment" . store /#.*/]?
+ . Util.del_str "\n"
+ let sep_spc = Util.del_ws_spc
+ let word = /[^,# \n\t]+/
+ let num = /[0-9]+/
+
+
+ (* define comments and empty lines *)
+ let comment = [ label "#comment" . del /#[ \t]*/ "#" .
+ store /([^ \t\n][^\n]*)?/ . del "\n" "\n" ]
+ let empty = [ del /[ \t]*\n/ "\n" ]
+
+
+ let kv (k:regexp) (v:regexp) =
+ [ key k . sep_spc. store v . eol ]
+
+ (* Define generic record *)
+ let record (kw:regexp) (value:lens) =
+ [ key kw . sep_spc . store word . value . eol ]
+
+ (* Define a command record; see confopt.html#cfg in the ntp docs *)
+ let command_record =
+ let opt = [ sep_spc . key /minpoll|maxpoll|ttl|version|key/ .
+ sep_spc . store word ]
+ | [ sep_spc . key (/autokey|burst|iburst|noselect|preempt/ |
+ /prefer|true|dynamic/) ] in
+ let cmd = /pool|server|peer|broadcast|manycastclient/
+ | /multicastclient|manycastserver/ in
+ record cmd opt*
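+
+  (* A quick illustration (a sketch; not part of the canonical test suite):
+     a server line with the iburst option. *)
+  test command_record get "server ntp.example.org iburst\n" =
+    { "server" = "ntp.example.org" { "iburst" } }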
+
+ let broadcastclient =
+ [ key "broadcastclient" . [ sep_spc . key "novolley" ]? . eol ]
+
+ (* Define a fudge record *)
+ let fudge_opt_re = "refid" | "stratum"
+ let fudge_opt = [ sep_spc . key fudge_opt_re . sep_spc . store word ]
+ let fudge_record = record "fudge" fudge_opt?
+
+ (* Define simple settings, see miscopt.html in ntp docs *)
+ let flags =
+ let flags_re = /auth|bclient|calibrate|kernel|monitor|ntp|pps|stats/ in
+ let flag = [ label "flag" . store flags_re ] in
+ [ key /enable|disable/ . (sep_spc . flag)* . eol ]
+
+ let simple_setting (k:regexp) = kv k word
+
+ (* Still incomplete, misses logconfig, phone, setvar, tos,
+ trap, ttl *)
+ let simple_settings =
+ kv "broadcastdelay" Rx.decimal
+ | flags
+ | simple_setting /driftfile|leapfile|logfile|includefile/
+ | simple_setting "statsdir"
+ | simple_setting "ntpsigndsocket"
+
+ (* Misc commands, see miscopt.html in ntp docs *)
+
+ (* Define restrict *)
+ let restrict_record =
+ let ip6_restrict = [ label "ipv6" . sep_spc . Util.del_str "-6" ] in
+ let ip4_restrict = [ label "ipv4" . sep_spc . Util.del_str "-4" ] in
+ let action = [ label "action" . sep_spc . store /[^,# \n\t-][^,# \n\t]*/ ] in
+ [ key "restrict" . (ip6_restrict | ip4_restrict)? . sep_spc . store /[^,# \n\t-][^,# \n\t]*/ . action* . eol ]
+
+ (* Define statistics *)
+ let statistics_flag (kw:string) = [ sep_spc . key kw ]
+
+ let statistics_opts = statistics_flag "loopstats"
+ | statistics_flag "peerstats"
+ | statistics_flag "clockstats"
+ | statistics_flag "rawstats"
+
+ let statistics_record = [ key "statistics" . statistics_opts* . eol ]
+
+
+ (* Define filegen *)
+ let filegen = del /filegen[ \t]+/ "filegen " . store word
+ let filegen_opt (kw:string) = [ sep_spc . key kw . sep_spc . store word ]
+ (* let filegen_flag (kw:string) = [ label kw . sep_spc . store word ] *)
+ let filegen_select (kw:string) (select:regexp) = [ label kw . sep_spc . store select ]
+
+ let filegen_opts = filegen_opt "file"
+ | filegen_opt "type"
+ | filegen_select "enable" /(en|dis)able/
+ | filegen_select "link" /(no)?link/
+
+ let filegen_record = [ label "filegen" . filegen . filegen_opts* . eol ]
+
+ (* Authentication commands, see authopt.html#cmd; incomplete *)
+ let auth_command =
+ [ key /controlkey|keys|keysdir|requestkey|authenticate/ .
+ sep_spc . store word . eol ]
+ | [ key /autokey|revoke/ . [sep_spc . store word]? . eol ]
+ | [ key /trustedkey/ . [ sep_spc . label "key" . store word ]+ . eol ]
+
+ (* tinker [step step | panic panic | dispersion dispersion |
+ stepout stepout | minpoll minpoll | allan allan | huffpuff huffpuff] *)
+ let tinker =
+ let arg_names = /step|panic|dispersion|stepout|minpoll|allan|huffpuff/ in
+ let arg = [ key arg_names . sep_spc . store Rx.decimal ] in
+ [ key "tinker" . (sep_spc . arg)* . eol ]
+
+ (* tos [beacon beacon | ceiling ceiling | cohort {0 | 1} |
+ floor floor | maxclock maxclock | maxdist maxdist |
+ minclock minclock | mindist mindist | minsane minsane |
+ orphan stratum | orphanwait delay] *)
+
+ let tos =
+     let arg_names = /beacon|ceiling|cohort|floor|maxclock|maxdist/
+                   | /minclock|mindist|minsane|orphan|orphanwait/ in
+ let arg = [ key arg_names . sep_spc . store Rx.decimal ] in
+ [ key "tos" . (sep_spc . arg)* . eol ]
+
+ let interface =
+ let action = [ label "action" . store /listen|ignore|drop/ ]
+ in let addresses = [ label "addresses" . store Rx.word ]
+ in [ key "interface" . sep_spc . action . sep_spc . addresses . eol ]
+
+ (* Define lens *)
+
+ let lns = ( comment | empty | command_record | fudge_record
+ | restrict_record | simple_settings | statistics_record
+ | filegen_record | broadcastclient
+ | auth_command | tinker | tos | interface)*
+
+ let filter = (incl "/etc/ntp.conf")
+
+ let xfm = transform lns filter
--- /dev/null
+(*
+Module: Ntpd
+ Parses OpenNTPD's ntpd.conf
+
+Author: Jasper Lievisse Adriaanse <jasper@jasper.la>
+
+About: Reference
+ This lens is used to parse OpenNTPD's configuration file, ntpd.conf.
+ http://openntpd.org/
+
+About: Usage Example
+ To be documented
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Configuration files
+ This lens applies to /etc/ntpd.conf. See <filter>.
+*)
+
+module Ntpd =
+autoload xfm
+
+(************************************************************************
+ * Group: Utility variables/functions
+ ************************************************************************)
+
+(* View: comment *)
+let comment = Util.comment
+(* View: empty *)
+let empty = Util.empty
+(* View: eol *)
+let eol = Util.eol
+(* View: space *)
+let space = Sep.space
+(* View: word *)
+let word = Rx.word
+(* View: device_re *)
+let device_re = Rx.device_name | /\*/
+
+(* View: address_re *)
+let address_re = Rx.ip | /\*/ | Rx.hostname
+
+(* View: stratum_re
+ value between 1 and 15 *)
+let stratum_re = /1[0-5]|[1-9]/
+
+(* View: refid_re
+   string of at most 5 characters *)
+let refid_re = /[A-Za-z0-9_.-]{1,5}/
+
+(* View: weight_re
+ value between 1 and 10 *)
+let weight_re = /10|[1-9]/
+
+(* View: rtable_re
+ 0 - RT_TABLE_MAX *)
+let rtable_re = Rx.byte
+
+(* View: correction_re
+ should actually only match between -127000000 and 127000000 *)
+let correction_re = Rx.relinteger_noplus
+
+(************************************************************************
+ * View: key_opt_rtable_line
+ * A subnode with a keyword, an optional routing table id and an end
+ * of line.
+ *
+ * Parameters:
+ * kw:regexp - the pattern to match as key
+ * sto:lens - the storing lens
+ ************************************************************************)
+let key_opt_rtable_line (kw:regexp) (sto:lens) =
+ let rtable = [ Util.del_str "rtable" . space . label "rtable"
+ . store rtable_re ]
+ in [ key kw . space . sto . (space . rtable)? . eol ]
+
+(************************************************************************
+ * View: key_opt_weight_rtable_line
+ * A subnode with a keyword, an optional routing table id, an optional
+ * weight-value and an end of line.
+ *
+ * Parameters:
+ * kw:regexp - the pattern to match as key
+ * sto:lens - the storing lens
+ ************************************************************************)
+let key_opt_weight_rtable_line (kw:regexp) (sto:lens) =
+ let rtable = [ Util.del_str "rtable" . space . label "rtable" . store rtable_re ]
+ in let weight = [ Util.del_str "weight" . space . label "weight"
+ . store weight_re ]
+ in [ key kw . space . sto . (space . weight)? . (space . rtable)? . eol ]
+
+(************************************************************************
+ * View: opt_value
+ * A subnode for optional values.
+ *
+ * Parameters:
+ * s:string - the option name and subtree label
+ * r:regexp - the pattern to match as store
+ ************************************************************************)
+let opt_value (s:string) (r:regexp) =
+ Build.key_value s space (store r)
+
+(************************************************************************
+ * Group: Keywords
+ ************************************************************************)
+
+(* View: listen
+ listen on address [rtable table-id] *)
+let listen =
+ let addr = [ label "address" . store address_re ]
+ in key_opt_rtable_line "listen on" addr
+
+(* View: server
+ server address [weight weight-value] [rtable table-id] *)
+let server =
+ let addr = [ label "address" . store address_re ]
+ in key_opt_weight_rtable_line "server" addr
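+
+(* A quick illustration (a sketch; not part of the canonical test suite):
+   a server line with optional weight and rtable values. *)
+test server get "server ntp.example.org weight 5 rtable 4\n" =
+  { "server"
+    { "address" = "ntp.example.org" }
+    { "weight" = "5" }
+    { "rtable" = "4" } }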
+
+(* View: servers
+ servers address [weight weight-value] [rtable table-id] *)
+let servers =
+ let addr = [ label "address" . store address_re ]
+ in key_opt_weight_rtable_line "servers" addr
+
+(* View: sensor
+ sensor device [correction microseconds] [weight weight-value] [refid
+ string] [stratum stratum-value] *)
+let sensor =
+ let device = [ label "device" . store device_re ]
+ in let correction = opt_value "correction" correction_re
+ in let weight = opt_value "weight" weight_re
+ in let refid = opt_value "refid" refid_re
+ in let stratum = opt_value "stratum" stratum_re
+ in [ key "sensor" . space . device
+ . (space . correction)?
+ . (space . weight)?
+ . (space . refid)?
+ . (space . stratum)?
+ . eol ]
+
+(************************************************************************
+ * Group: Lens
+ ************************************************************************)
+
+(* View: keyword *)
+let keyword = listen | server | servers | sensor
+
+(* View: lns *)
+let lns = ( empty | comment | keyword )*
+
+(* View: filter *)
+let filter = (incl "/etc/ntpd.conf")
+
+let xfm = transform lns filter
--- /dev/null
+(*
+ ODBC lens for Augeas
+ Author: Marc Fournier <marc.fournier@camptocamp.com>
+
+ odbc.ini and odbcinst.ini are standard INI files.
+*)
+
+
+module Odbc =
+ autoload xfm
+
+(************************************************************************
+ * INI File settings
+ * odbc.ini only supports "#" as comment and "=" as separator
+ ************************************************************************)
+let comment = IniFile.comment "#" "#"
+let sep = IniFile.sep "=" "="
+
+
+(************************************************************************
+ * ENTRY
+ * odbc.ini uses standard INI File entries
+ ************************************************************************)
+let entry = IniFile.indented_entry IniFile.entry_re sep comment
+
+
+(************************************************************************
+ * RECORD
+ * odbc.ini uses standard INI File records
+ ************************************************************************)
+let title = IniFile.indented_title IniFile.record_re
+let record = IniFile.record title entry
+
+
+(************************************************************************
+ * LENS & FILTER
+ ************************************************************************)
+let lns = IniFile.lns record comment
+
+let filter = incl "/etc/odbc.ini"
+ . incl "/etc/odbcinst.ini"
+
+let xfm = transform lns filter
--- /dev/null
+module Opendkim =
+ autoload xfm
+
+  (* IniFile.comment is saner than Util.comment regarding spacing after the # *)
+  let comment = IniFile.comment "#" "#"
+ let eol = Util.eol
+ let empty = Util.empty
+
+  (*
+  The Dataset spec is so broad as to encompass any string (particularly the
+  degenerate 'single literal string' case of a comma-separated list with
+  only one item). So treat them all as 'String' types; it is up to the user
+  to format them correctly. Given that many of the variants include file
+  paths etc., it is impossible to validate them for 'correctness' anyway.
+  *)
+ let stringkv_rx = /ADSPAction|AuthservID|AutoRestartRate|BaseDirectory/
+ | /BogusKey|BogusPolicy|Canonicalization|ChangeRootDirectory/
+ | /DiagnosticDirectory|FinalPolicyScript|IdentityHeader|Include|KeyFile/
+ | /LDAPAuthMechanism|LDAPAuthName|LDAPAuthRealm|LDAPAuthUser/
+ | /LDAPBindPassword|LDAPBindUser|Minimum|Mode|MTACommand|Nameservers/
+ | /On-BadSignature|On-Default|On-DNSError|On-InternalError|On-KeyNotFound/
+ | /On-NoSignature|On-PolicyError|On-Security|On-SignatureError|PidFile/
+ | /ReplaceRules|ReportAddress|ReportBccAddress|ResolverConfiguration/
+ | /ScreenPolicyScript|SelectCanonicalizationHeader|Selector|SelectorHeader/
+ | /SenderMacro|SetupPolicyScript|SignatureAlgorithm|SMTPURI|Socket/
+ | /StatisticsName|StatisticsPrefix|SyslogFacility|TemporaryDirectory/
+ | /TestPublicKeys|TrustAnchorFile|UnprotectedKey|UnprotectedPolicy|UserID/
+ | /VBR-Certifiers|VBR-PurgeFields|VBR-TrustedCertifiers|VBR-Type/
+ | /BodyLengthDB|Domain|DontSignMailTo|ExemptDomains|ExternalIgnoreList/
+ | /InternalHosts|KeyTable|LocalADSP|MacroList|MTA|MustBeSigned|OmitHeaders/
+ | /OversignHeaders|PeerList|POPDBFile|RemoveARFrom|ResignMailTo/
+ | /SenderHeaders|SignHeaders|SigningTable|TrustSignaturesFrom/
+ let stringkv = key stringkv_rx .
+ del /[ \t]+/ " " . store /[0-9a-zA-Z\/][^ \t\n#]+/ . eol
+
+ let integerkv_rx = /AutoRestartCount|ClockDrift|DNSTimeout/
+ | /LDAPKeepaliveIdle|LDAPKeepaliveInterval|LDAPKeepaliveProbes|LDAPTimeout/
+ | /MaximumHeaders|MaximumSignaturesToVerify|MaximumSignedBytes|MilterDebug/
+ | /MinimumKeyBits|SignatureTTL|UMask/
+ let integerkv = key integerkv_rx .
+ del /[ \t]+/ " " . store /[0-9]+/ . eol
+
+ let booleankv_rx = /AddAllSignatureResults|ADSPNoSuchDomain/
+ | /AllowSHA1Only|AlwaysAddARHeader|AuthservIDWithJobID|AutoRestart/
+ | /Background|CaptureUnknownErrors|Diagnostics|DisableADSP/
+ | /DisableCryptoInit|DNSConnect|FixCRLF|IdentityHeaderRemove/
+ | /LDAPDisableCache|LDAPSoftStart|LDAPUseTLS|MultipleSignatures|NoHeaderB/
+ | /Quarantine|QueryCache|RemoveARAll|RemoveOldSignatures|ResolverTracing/
+ | /SelectorHeaderRemove|SendADSPReports|SendReports|SoftwareHeader/
+ | /StrictHeaders|StrictTestMode|SubDomains|Syslog|SyslogSuccess/
+ | /VBR-TrustedCertifiersOnly|WeakSyntaxChecks|LogWhy/
+ let booleankv = key booleankv_rx .
+ del /[ \t]+/ " " . store /([Tt]rue|[Ff]alse|[Yy]es|[Nn]o|1|0)/ . eol
+
+ let entry = [ integerkv ] | [ booleankv ] | [ stringkv ]
+
+ let lns = (comment | empty | entry)*
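+
+  (* A quick illustration (a sketch; not part of a canonical test suite):
+     one boolean and one integer entry. *)
+  test lns get "Syslog yes\nUMask 002\n" =
+    { "Syslog" = "yes" }
+    { "UMask" = "002" }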
+
+ let xfm = transform lns (incl "/etc/opendkim.conf")
+
--- /dev/null
+(*
+Module: OpenShift_Config
+ Parses
+ - /etc/openshift/broker.conf
+ - /etc/openshift/broker-dev.conf
+ - /etc/openshift/console.conf
+ - /etc/openshift/console-dev.conf
+ - /etc/openshift/node.conf
+ - /etc/openshift/plugins.d/*.conf
+
+Author: Brian Redbeard <redbeard@dead-city.org>
+
+About: License
+ This file is licensed under the LGPL v2+, conforming to the other components
+ of Augeas.
+
+About: Lens Usage
+ Sample usage of this lens in augtool:
+
+ * Get your current setup
+ > print /files/etc/openshift
+ ...
+
+ * Change OpenShift domain
+ > set /files/etc/openshift/broker.conf/CLOUD_DOMAIN ose.example.com
+
+ Saving your file:
+
+ > save
+
+About: Configuration files
+ /etc/openshift/broker.conf - Configuration file for an OpenShift Broker
+ running in production mode.
+ /etc/openshift/broker-dev.conf - Configuration file for an OpenShift
+ Broker running in development mode.
+ /etc/openshift/console.conf - Configuration file for an OpenShift
+ console running in production mode.
+ /etc/openshift/console-dev.conf - Configuration file for an OpenShift
+ console running in development mode.
+ /etc/openshift/node.conf - Configuration file for an OpenShift node
+ /etc/openshift/plugins.d/*.conf - Configuration files for OpenShift
+ plugins (e.g. mcollective configuration, remote auth, DNS updates)
+
+About: Examples
+ The <Test_OpenShift_Config> file contains various examples and tests.
+*)
+module OpenShift_Config =
+ autoload xfm
+
+(* Variable: blank_val *)
+let blank_val = del /["']{2}/ "\"\""
+
+(* View: primary_entry *)
+let primary_entry = Build.key_value_line Rx.word Sep.equal Quote.any_opt
+
+(* View: empty_entry *)
+let empty_entry = Build.key_value_line Rx.word Sep.equal blank_val
+
+(* View: lns *)
+let lns = (Util.empty | Util.comment | primary_entry | empty_entry )*
+
+(* Variable: filter *)
+let filter = incl "/etc/openshift/broker.conf"
+ . incl "/etc/openshift/broker-dev.conf"
+ . incl "/etc/openshift/console.conf"
+ . incl "/etc/openshift/resource_limits.conf"
+ . incl "/etc/openshift/console-dev.conf"
+ . incl "/etc/openshift/node.conf"
+ . incl "/etc/openshift/plugins.d/*.conf"
+ . incl "/var/www/openshift/broker/conf/broker.conf"
+ . incl "/var/www/openshift/broker/conf/plugins.d/*.conf"
+ . Util.stdexcl
+
+let xfm = transform lns filter
+(* vim: set ts=4 expandtab sw=4: *)
--- /dev/null
+(*
+Module: OpenShift_Http
+ Parses HTTPD related files specific to openshift
+
+Author: Brian Redbeard <redbeard@dead-city.org>
+
+About: License
+ This file is licensed under the LGPL v2+, conforming to the other components
+ of Augeas.
+
+About: Lens Usage
+ Sample usage of this lens in augtool:
+
+ * Get your current setup
+ > print /files/var/www/openshift
+
+About: Examples
+ The <Test_OpenShift_Http> file contains various examples and tests.
+*)
+module OpenShift_Http =
+ autoload xfm
+
+(* Variable: filter *)
+let filter = incl "/var/www/openshift/console/httpd/httpd.conf"
+ . incl "/var/www/openshift/console/httpd/conf.d/*.conf"
+ . incl "/var/www/openshift/broker/httpd/conf.d/*.conf"
+ . incl "/var/www/openshift/broker/httpd/httpd.conf"
+ . incl "/var/www/openshift/console/httpd/console.conf"
+ . incl "/var/www/openshift/broker/httpd/broker.conf"
+ . Util.stdexcl
+
+(* View: lns *)
+let lns = Httpd.lns
+
+let xfm = transform lns filter
+(* vim: set ts=4 expandtab sw=4: *)
--- /dev/null
+(*
+Module: OpenShift_Quickstarts
+ Parses
+ - /etc/openshift/quickstarts.json
+
+Author: Brian Redbeard <redbeard@dead-city.org>
+
+About: License
+ This file is licensed under the LGPL v2+, conforming to the other components
+ of Augeas.
+
+About: Lens Usage
+ Sample usage of this lens in augtool:
+
+ * Get your current setup
+ > print /files/etc/openshift/quickstarts.json
+ ...
+
+ * Delete the quickstart named WordPress
+ > rm /files/etc/openshift/quickstarts.json/array/dict[*]/entry/dict/entry[*][string = 'WordPress']/../../../
+
+ Saving your file:
+
+ > save
+
+About: Configuration files
+ /etc/openshift/quickstarts.json - Quickstarts available via the
+ OpenShift Console.
+
+About: Examples
+ The <Test_OpenShift_Quickstarts> file contains various examples and tests.
+*)
+module OpenShift_Quickstarts =
+ autoload xfm
+
+(* View: lns *)
+let lns = Json.lns
+
+(* Variable: filter *)
+let filter = incl "/etc/openshift/quickstarts.json"
+
+let xfm = transform lns filter
+(* vim: set ts=4 expandtab sw=4: *)
--- /dev/null
+(* OpenVPN module for Augeas
+ Author: Raphael Pinson <raphink@gmail.com>
+ Author: Justin Akers <dafugg@gmail.com>
+
+ Reference: http://openvpn.net/index.php/documentation/howto.html
+ Reference: https://community.openvpn.net/openvpn/wiki/Openvpn23ManPage
+
+ TODO: Inline file support
+*)
+
+
+module OpenVPN =
+ autoload xfm
+
+(************************************************************************
+ * USEFUL PRIMITIVES
+ *************************************************************************)
+
+let eol = Util.eol
+let indent = Util.indent
+
+(* Define separators *)
+let sep = Util.del_ws_spc
+
+(* Define value regexps.
+   A custom, simplified IPv6 regexp is used instead of Rx.ipv6 because the
+   Augeas Travis instances are limited to 2GB of memory. Using 'ipv6_re = Rx.ipv6'
+   consumes an extra 2GB of memory, and the test is then OOM-killed.
+*)
+let ipv6_re = /[0-9A-Fa-f:]+/
+let ipv4_re = Rx.ipv4
+let ip_re = ipv4_re|ipv6_re
+let num_re = Rx.integer
+let fn_re = /[^#; \t\n][^#;\n]*[^#; \t\n]|[^#; \t\n]/
+let fn_safe_re = /[^#; \t\r\n]+/
+let an_re = /[a-z][a-z0-9_-]*/
+let hn_re = Rx.hostname
+let port_re = /[0-9]+/
+let host_re = ip_re|hn_re
+let proto_re = /(tcp|udp)/
+let proto_ext_re = /(udp|tcp-client|tcp-server)/
+let alg_re = /(none|[A-Za-z][A-Za-z0-9-]+)/
+let ipv6_bits_re = ipv6_re . /\/[0-9]+/
+
+(* Define store aliases *)
+let ip = store ip_re
+let num = store num_re
+let filename = store fn_re
+let filename_safe = store fn_safe_re
+let hostname = store hn_re
+let sto_to_dquote = store /[^"\n]+/ (* " Emacs, relax *)
+let port = store port_re
+let host = store host_re
+let proto = store proto_re
+let proto_ext = store proto_ext_re
+
+(* define comments and empty lines *)
+let comment = Util.comment_generic /[ \t]*[;#][ \t]*/ "# "
+let comment_or_eol = eol | Util.comment_generic /[ \t]*[;#][ \t]*/ " # "
+
+let empty = Util.empty
+
+
+(************************************************************************
+ * SINGLE VALUES
+ *
+ * - local => IP|hostname
+ * - port => num
+ * - proto => udp|tcp-client|tcp-server
+ * - proto-force => udp|tcp
+ * - mode => p2p|server
+ * - dev => (tun|tap)\d*
+ * - dev-node => filename
+ * - ca => filename
+ * - config => filename
+ * - cert => filename
+ * - key => filename
+ * - dh => filename
+ * - ifconfig-pool-persist => filename
+ * - learn-address => filename
+ * - cipher => [A-Z0-9-]+
+ * - max-clients => num
+ * - user => alphanum
+ * - group => alphanum
+ * - status => filename
+ * - log => filename
+ * - log-append => filename
+ * - client-config-dir => filename
+ * - verb => num
+ * - mute => num
+ * - fragment => num
+ * - mssfix => num
+ * - connect-retry => num
+ * - connect-retry-max => num
+ * - connect-timeout => num
+ * - http-proxy-timeout => num
+ * - max-routes => num
+ * - ns-cert-type => "server"
+ * - resolv-retry => "infinite"
+ * - script-security => [0-3] (execve|system)?
+ * - ipchange => command
+ * - topology => type
+ *************************************************************************)
+
+let single_host = "local" | "tls-remote"
+let single_ip = "lladdr"
+let single_ipv6_bits = "iroute-ipv6"
+ | "server-ipv6"
+ | "ifconfig-ipv6-pool"
+let single_num = "port"
+ | "max-clients"
+ | "verb"
+ | "mute"
+ | "fragment"
+ | "mssfix"
+ | "connect-retry"
+ | "connect-retry-max"
+ | "connect-timeout"
+ | "http-proxy-timeout"
+ | "resolv-retry"
+ | "lport"
+ | "rport"
+ | "max-routes"
+ | "max-routes-per-client"
+ | "route-metric"
+ | "tun-mtu"
+ | "tun-mtu-extra"
+ | "shaper"
+ | "ping"
+ | "ping-exit"
+ | "ping-restart"
+ | "sndbuf"
+ | "rcvbuf"
+ | "txqueuelen"
+ | "link-mtu"
+ | "nice"
+ | "management-log-cache"
+ | "bcast-buffers"
+ | "tcp-queue-limit"
+ | "server-poll-timeout"
+ | "keysize"
+ | "pkcs11-pin-cache"
+ | "tls-timeout"
+ | "reneg-bytes"
+ | "reneg-pkts"
+ | "reneg-sec"
+ | "hand-window"
+ | "tran-window"
+let single_fn = "ca"
+ | "cert"
+ | "extra-certs"
+ | "config"
+ | "key"
+ | "dh"
+ | "log"
+ | "log-append"
+ | "client-config-dir"
+ | "dev-node"
+ | "cd"
+ | "chroot"
+ | "writepid"
+ | "tmp-dir"
+ | "replay-persist"
+ | "capath"
+ | "pkcs12"
+ | "pkcs11-id"
+ | "askpass"
+ | "tls-export-cert"
+ | "x509-track"
+let single_an = "user"
+ | "group"
+ | "management-client-user"
+ | "management-client-group"
+let single_cmd = "ipchange"
+ | "iproute"
+ | "route-up"
+ | "route-pre-down"
+ | "mark"
+ | "up"
+ | "down"
+ | "setcon"
+ | "echo"
+ | "client-connect"
+ | "client-disconnect"
+ | "learn-address"
+ | "tls-verify"
+
+let single_entry (kw:regexp) (re:regexp)
+ = [ key kw . sep . store re . comment_or_eol ]
+
+let single_opt_entry (kw:regexp) (re:regexp)
+    = [ key kw . (sep . store re)? . comment_or_eol ]
+
+let single = single_entry single_num num_re
+ | single_entry single_fn fn_re
+ | single_entry single_an an_re
+ | single_entry single_host host_re
+ | single_entry single_ip ip_re
+ | single_entry single_ipv6_bits ipv6_bits_re
+ | single_entry single_cmd fn_re
+ | single_entry "proto" proto_ext_re
+ | single_entry "proto-force" proto_re
+ | single_entry "mode" /(p2p|server)/
+ | single_entry "dev" /(tun|tap)[0-9]*|null/
+ | single_entry "dev-type" /(tun|tap)/
+ | single_entry "topology" /(net30|p2p|subnet)/
+ | single_entry "cipher" alg_re
+ | single_entry "auth" alg_re
+ | single_entry "resolv-retry" "infinite"
+ | single_entry "script-security" /[0-3]( execve| system)?/
+ | single_entry "route-gateway" (host_re|/dhcp/)
+ | single_entry "mtu-disc" /(no|maybe|yes)/
+ | single_entry "remap-usr1" /SIG(HUP|TERM)/
+ | single_entry "socket-flags" /(TCP_NODELAY)/
+ | single_entry "auth-retry" /(none|nointeract|interact)/
+ | single_entry "tls-version-max" Rx.decimal
+ | single_entry "verify-hash" /([A-Za-z0-9]{2}:)+[A-Za-z0-9]{2}/
+ | single_entry "pkcs11-cert-private" /[01]/
+ | single_entry "pkcs11-protected-authentication" /[01]/
+ | single_entry "pkcs11-private-mode" /[A-Za-z0-9]+/
+ | single_entry "key-method" /[12]/
+ | single_entry "ns-cert-type" /(client|server)/
+ | single_entry "remote-cert-tls" /(client|server)/
+
+let single_opt = single_opt_entry "comp-lzo" /(yes|no|adaptive)/
+ | single_opt_entry "syslog" fn_re
+ | single_opt_entry "daemon" fn_re
+ | single_opt_entry "auth-user-pass" fn_re
+ | single_opt_entry "explicit-exit-notify" num_re
+ | single_opt_entry "engine" fn_re
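+
+(* A quick illustration (a sketch; not part of the canonical test suite):
+   simple single-value entries. *)
+test single get "port 1194\n" = { "port" = "1194" }
+test single get "proto udp\n" = { "proto" = "udp" }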
+
+(************************************************************************
+ * DOUBLE VALUES
+ *************************************************************************)
+
+let double_entry (kw:regexp) (a:string) (aval:regexp) (b:string) (bval:regexp)
+ = [ key kw
+ . sep . [ label a . store aval ]
+ . sep . [ label b . store bval ]
+ . comment_or_eol
+ ]
+
+let double_secopt_entry (kw:regexp) (a:string) (aval:regexp) (b:string) (bval:regexp)
+ = [ key kw
+ . sep . [ label a . store aval ]
+ . (sep . [ label b . store bval ])?
+ . comment_or_eol
+ ]
+
+
+let double = double_entry "keepalive" "ping" num_re "timeout" num_re
+ | double_entry "hash-size" "real" num_re "virtual" num_re
+ | double_entry "ifconfig" "local" ip_re "remote" ip_re
+ | double_entry "connect-freq" "num" num_re "sec" num_re
+ | double_entry "verify-x509-name" "name" hn_re "type"
+ /(subject|name|name-prefix)/
+ | double_entry "ifconfig-ipv6" "address" ipv6_bits_re "remote" ipv6_re
+ | double_entry "ifconfig-ipv6-push" "address" ipv6_bits_re "remote" ipv6_re
+ | double_secopt_entry "iroute" "local" ip_re "netmask" ip_re
+ | double_secopt_entry "stale-routes-check" "age" num_re "interval" num_re
+ | double_secopt_entry "ifconfig-pool-persist"
+ "file" fn_safe_re "seconds" num_re
+ | double_secopt_entry "secret" "file" fn_safe_re "direction" /[01]/
+ | double_secopt_entry "prng" "algorithm" alg_re "nsl" num_re
+ | double_secopt_entry "replay-window" "window-size" num_re "seconds" num_re
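+
+(* Illustrative examples (assumptions, not verified tests): a line such as
+ *   keepalive 10 60
+ * should map to the tree
+ *   { "keepalive" { "ping" = "10" } { "timeout" = "60" } }
+ * while "ifconfig 10.8.0.1 10.8.0.2" should map to
+ *   { "ifconfig" { "local" = "10.8.0.1" } { "remote" = "10.8.0.2" } } *)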
+
+
+(************************************************************************
+ * FLAGS
+ *************************************************************************)
+
+let flag_words = "client-to-client"
+ | "duplicate-cn"
+ | "persist-key"
+ | "persist-tun"
+ | "client"
+ | "remote-random"
+ | "nobind"
+ | "mute-replay-warnings"
+ | "http-proxy-retry"
+ | "socks-proxy-retry"
+ | "remote-random-hostname"
+ | "show-proxy-settings"
+ | "float"
+ | "bind"
+ | "tun-ipv6"
+ | "ifconfig-noexec"
+ | "ifconfig-nowarn"
+ | "route-noexec"
+ | "route-nopull"
+ | "allow-pull-fqdn"
+ | "mtu-test"
+ | "ping-timer-rem"
+ | "persist-local-ip"
+ | "persist-remote-ip"
+ | "mlock"
+ | "up-delay"
+ | "down-pre"
+ | "up-restart"
+ | "disable-occ"
+ | "errors-to-stderr"
+ | "passtos"
+ | "suppress-timestamps"
+ | "fast-io"
+ | "multihome"
+ | "comp-noadapt"
+ | "management-client"
+ | "management-query-passwords"
+ | "management-query-proxy"
+ | "management-query-remote"
+ | "management-forget-disconnect"
+ | "management-hold"
+ | "management-signal"
+ | "management-up-down"
+ | "management-client-auth"
+ | "management-client-pf"
+ | "push-reset"
+ | "push-peer-info"
+ | "disable"
+ | "ifconfig-pool-linear"
+ | "ccd-exclusive"
+ | "tcp-nodelay"
+ | "opt-verify"
+ | "auth-user-pass-optional"
+ | "client-cert-not-required"
+ | "username-as-common-name"
+ | "pull"
+ | "key-direction"
+ | "no-replay"
+ | "no-iv"
+ | "use-prediction-resistance"
+ | "test-crypto"
+ | "tls-server"
+ | "tls-client"
+ | "pkcs11-id-management"
+ | "single-session"
+ | "tls-exit"
+ | "auth-nocache"
+ | "show-ciphers"
+ | "show-digests"
+ | "show-tls"
+ | "show-engines"
+ | "genkey"
+ | "mktun"
+ | "rmtun"
+
+
+let flag_entry (kw:regexp)
+ = [ key kw . comment_or_eol ]
+
+let flag = flag_entry flag_words
+
+
+(************************************************************************
+ * OTHER FIELDS
+ *
+ * - server => IP IP [nopool]
+ * - server-bridge => IP IP IP IP
+ * - route => host host [host [num]]
+ * - push => "string"
+ * - tls-auth => filename [01]
+ * - remote => hostname/IP [num] [(tcp|udp)]
+ * - management => IP num filename
+ * - http-proxy => host port [filename|keyword] [method]
+ * - http-proxy-option => (VERSION decimal|AGENT string)
+ * ...
+ * and many others
+ *
+ *************************************************************************)
+
+let server = [ key "server"
+ . sep . [ label "address" . ip ]
+ . sep . [ label "netmask" . ip ]
+ . (sep . [ key "nopool" ]) ?
+ . comment_or_eol
+ ]
+
+let server_bridge =
+ let ip_params = [ label "address" . ip ] . sep
+ . [ label "netmask" . ip ] . sep
+ . [ label "start" . ip ] . sep
+ . [ label "end" . ip ] in
+ [ key "server-bridge"
+ . sep . (ip_params|store /(nogw)/)
+ . comment_or_eol
+ ]
+
+let route =
+ let route_net_kw = store (/(vpn_gateway|net_gateway|remote_host)/|host_re) in
+ [ key "route" . sep
+ . [ label "address" . route_net_kw ]
+ . (sep . [ label "netmask" . store (ip_re|/default/) ]
+ . (sep . [ label "gateway" . route_net_kw ]
+ . (sep . [ label "metric" . store (/default/|num_re)] )?
+ )?
+ )?
+ . comment_or_eol
+ ]
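+
+(* Illustrative example (assumption, not a verified test): the line
+ *   route 10.0.0.0 255.0.0.0 vpn_gateway 10
+ * should map to
+ *   { "route" { "address" = "10.0.0.0" } { "netmask" = "255.0.0.0" }
+ *             { "gateway" = "vpn_gateway" } { "metric" = "10" } } *)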
+
+let route_ipv6 =
+ let route_net_re = /(vpn_gateway|net_gateway|remote_host)/ in
+ [ key "route-ipv6" . sep
+ . [ label "network" . store (route_net_re|ipv6_bits_re) ]
+ . (sep . [ label "gateway" . store (route_net_re|ipv6_re) ]
+ . (sep . [ label "metric" . store (/default/|num_re)] )?
+ )?
+ . comment_or_eol
+ ]
+
+let push = [ key "push" . sep
+ . Quote.do_dquote sto_to_dquote
+ . comment_or_eol
+ ]
+
+let tls_auth = [ key "tls-auth" . sep
+ . [ label "key" . filename ] . sep
+ . [ label "is_client" . store /[01]/ ] . comment_or_eol
+ ]
+
+let remote = [ key "remote" . sep
+ . [ label "server" . host ]
+ . (sep . [label "port" . port]
+ . (sep . [label "proto" . proto]) ? ) ?
+ . comment_or_eol
+ ]
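+
+(* Illustrative example (assumption, not a verified test): the line
+ *   remote vpn.example.com 1194 udp
+ * should map to
+ *   { "remote" { "server" = "vpn.example.com" } { "port" = "1194" }
+ *              { "proto" = "udp" } }
+ * where vpn.example.com is a placeholder hostname *)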
+
+let http_proxy =
+ let auth_method_re = /(none|basic|ntlm)/ in
+ let auth_method = store auth_method_re in
+ [ key "http-proxy"
+ . sep . [ label "server" . host ]
+ . sep . [ label "port" . port ]
+ . (sep . [ label "auth" . filename_safe ]
+ . (sep . [ label "auth-method" . auth_method ]) ? )?
+ . comment_or_eol
+ ]
+
+let http_proxy_option = [ key "http-proxy-option"
+ . sep . [ label "option" . store /(VERSION|AGENT)/ ]
+ . sep . [ label "value" . filename ]
+ . comment_or_eol
+ ]
+
+let socks_proxy = [ key "socks-proxy"
+ . sep . [ label "server" . host ]
+ . (sep . [ label "port" . port ]
+ . (sep . [ label "auth" . filename_safe ])? )?
+ . comment_or_eol
+ ]
+
+let port_share = [ key "port-share"
+ . sep . [ label "host" . host ]
+ . sep . [ label "port" . port ]
+ . (sep . [ label "dir" . filename ])?
+ . comment_or_eol
+ ]
+
+let route_delay = [ key "route-delay"
+ . (sep . [ label "seconds" . num ]
+ . (sep . [ label "win-seconds" . num ] ) ?
+ )?
+ . comment_or_eol
+ ]
+
+let inetd = [ key "inetd"
+ . (sep . [label "mode" . store /(wait|nowait)/ ]
+ . (sep . [ label "progname" . filename ] ) ?
+ )?
+ . comment_or_eol
+ ]
+
+let inactive = [ key "inactive"
+ . sep . [ label "seconds" . num ]
+ . (sep . [ label "bytes" . num ] ) ?
+ . comment_or_eol
+ ]
+
+let client_nat = [ key "client-nat"
+ . sep . [ label "type" . store /(snat|dnat)/ ]
+ . sep . [ label "network" . ip ]
+ . sep . [ label "netmask" . ip ]
+ . sep . [ label "alias" . ip ]
+ . comment_or_eol
+ ]
+
+let status = [ key "status"
+ . sep . [ label "file" . filename_safe ]
+ . (sep . [ label "repeat-seconds" . num ]) ?
+ . comment_or_eol
+ ]
+
+let plugin = [ key "plugin"
+ . sep . [ label "file" . filename_safe ]
+ . (sep . [ label "init-string" . filename ]) ?
+ . comment_or_eol
+ ]
+
+let management = [ key "management" . sep
+ . [ label "server" . ip ]
+ . sep . [ label "port" . port ]
+ . (sep . [ label "pwfile" . filename ] ) ?
+ . comment_or_eol
+ ]
+
+let auth_user_pass_verify = [ key "auth-user-pass-verify"
+ . sep . [ Quote.quote_spaces (label "command") ]
+ . sep . [ label "method" . store /via-(env|file)/ ]
+ . comment_or_eol
+ ]
+
+let static_challenge = [ key "static-challenge"
+ . sep . [ Quote.quote_spaces (label "text") ]
+ . sep . [ label "echo" . store /[01]/ ]
+ . comment_or_eol
+ ]
+
+let cryptoapicert = [ key "cryptoapicert" . sep . Quote.dquote
+ . [ key /[A-Z]+/ . Sep.colon . store /[A-Za-z _-]+/ ]
+ . Quote.dquote . comment_or_eol
+ ]
+
+let setenv =
+ let envvar = /[^#;\/ \t\n][A-Za-z0-9_-]+/ in
+ [ key ("setenv"|"setenv-safe")
+ . sep . [ key envvar . sep . store fn_re ]
+ . comment_or_eol
+ ]
+
+let redirect =
+ let redirect_flag = /(local|autolocal|def1|bypass-dhcp|bypass-dns|block-local)/ in
+ let redirect_key = "redirect-gateway" | "redirect-private" in
+ [ key redirect_key
+ . (sep . [ label "flag" . store redirect_flag ] ) +
+ . comment_or_eol
+ ]
+
+let tls_cipher =
+ let ciphername = /[A-Za-z0-9!_-]+/ in
+ [ key "tls-cipher" . sep
+ . [label "cipher" . store ciphername]
+ . (Sep.colon . [label "cipher" . store ciphername])*
+ . comment_or_eol
+ ]
+
+let remote_cert_ku =
+ let usage = [label "usage" . store /[A-Za-z0-9]{1,2}/] in
+ [ key "remote-cert-ku" . sep . usage . (sep . usage)* . comment_or_eol ]
+
+(* FIXME: Surely there's a nicer way to do this *)
+let remote_cert_eku =
+ let oid = [label "oid" . store /[0-9]+\.([0-9]+\.)*[0-9]+/] in
+ let symbolic = [Quote.do_quote_opt
+ (label "symbol" . store /[A-Za-z0-9][A-Za-z0-9 _-]*[A-Za-z0-9]/)] in
+ [ key "remote-cert-eku" . sep . (oid|symbolic) . comment_or_eol ]
+
+let status_version = [ key "status-version"
+ . (sep . num) ?
+ . comment_or_eol
+ ]
+
+let ifconfig_pool = [ key "ifconfig-pool"
+ . sep . [ label "start" . ip ]
+ . sep . [ label "end" . ip ]
+ . (sep . [ label "netmask" . ip ])?
+ . comment_or_eol
+ ]
+
+let ifconfig_push = [ key "ifconfig-push"
+ . sep . [ label "local" . ip ]
+ . sep . [ label "remote-netmask" . ip ]
+ . (sep . [ label "alias" . store /[A-Za-z0-9_-]+/ ] )?
+ . comment_or_eol
+ ]
+
+let ignore_unknown_option = [ key "ignore-unknown-option"
+ . (sep . [ label "opt" . store /[A-Za-z0-9_-]+/ ] ) +
+ . comment_or_eol
+ ]
+
+let tls_version_min = [ key "tls-version-min"
+ . sep . store Rx.decimal
+ . (sep . [ key "or-highest" ]) ?
+ . comment_or_eol
+ ]
+
+let crl_verify = [ key "crl-verify"
+ . sep . filename_safe
+ . (sep . [ key "dir" ]) ?
+ . comment_or_eol
+ ]
+
+let x509_username_field =
+ let fieldname = /[A-Za-z0-9_-]+/ in
+ let extfield = ([key /ext/ . Sep.colon . store fieldname]) in
+ let subjfield = ([label "subj" . store fieldname]) in
+ [ key "x509-username-field"
+ . sep . (extfield|subjfield)
+ . comment_or_eol
+ ]
+
+let other = server
+ | server_bridge
+ | route
+ | push
+ | tls_auth
+ | remote
+ | http_proxy
+ | http_proxy_option
+ | socks_proxy
+ | management
+ | route_delay
+ | client_nat
+ | redirect
+ | inactive
+ | setenv
+ | inetd
+ | status
+ | status_version
+ | plugin
+ | ifconfig_pool
+ | ifconfig_push
+ | ignore_unknown_option
+ | auth_user_pass_verify
+ | port_share
+ | static_challenge
+ | tls_version_min
+ | tls_cipher
+ | cryptoapicert
+ | x509_username_field
+ | remote_cert_ku
+ | remote_cert_eku
+ | crl_verify
+ | route_ipv6
+
+
+(************************************************************************
+ * LENS & FILTER
+ *************************************************************************)
+
+let lns = ( comment | empty | single | single_opt | double | flag | other )*
+
+let filter = (incl "/etc/openvpn/client.conf")
+ . (incl "/etc/openvpn/server.conf")
+
+let xfm = transform lns filter
+
+
+
--- /dev/null
+(*
+Module: Oz
+ Oz module for Augeas
+
+ Author: Pat Riehecky <riehecky@fnal.gov>
+
+ oz.cfg is a standard INI File.
+*)
+
+module Oz =
+ autoload xfm
+
+(************************************************************************
+ * Group: INI File settings
+ * oz.cfg only supports "#" as comment and "=" as separator
+ *************************************************************************)
+(* View: comment *)
+let comment = IniFile.comment "#" "#"
+(* View: sep *)
+let sep = IniFile.sep "=" "="
+
+(************************************************************************
+ * Group: Entry
+ *************************************************************************)
+(* View: entry *)
+let entry = IniFile.indented_entry IniFile.entry_re sep comment
+
+(************************************************************************
+ * Group: Record
+ *************************************************************************)
+(* View: title *)
+let title = IniFile.indented_title IniFile.record_re
+(* View: record *)
+let record = IniFile.record title entry
+
+(************************************************************************
+ * Group: Lens and filter
+ *************************************************************************)
+(* View: lns *)
+let lns = IniFile.lns record comment
+
+(* View: filter *)
+let filter = (incl "/etc/oz/oz.cfg")
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Pagekite
+ Parses /etc/pagekite.d/
+
+Author: Michael Pimmer <blubb@fonfon.at>
+
+About: License
+ This file is licenced under the LGPL v2+, like the rest of Augeas.
+*)
+
+module Pagekite =
+autoload xfm
+
+
+(* Variables *)
+let equals = del /[ \t]*=[ \t]*/ "="
+let neg2 = /[^# \n\t]+/
+let neg3 = /[^# \:\n\t]+/
+let eol = del /\n/ "\n"
+(* Match everything from here to eol, cropping whitespace at both ends *)
+let to_eol = /[^ \t\n](.*[^ \t\n])?/
+
+(* A key followed by comma-separated values
+ k: name of the key
+ key_sep: separator between key and values
+ value_sep: separator between values
+ sto: store for values
+*)
+let key_csv_line (k:string) (key_sep:lens) (value_sep:lens) (sto:lens) =
+ [ key k . key_sep . [ seq k . sto ] .
+ [ seq k . value_sep . sto ]* . Util.eol ]
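+
+(* Illustrative example (assumption, not a verified test): with k = "ports",
+ * a line such as "ports = 80,443" should map to
+ *   { "ports" { "1" = "80" } { "2" = "443" } } *)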
+
+(* entries for pagekite.d/10_account.rc *)
+let domain = [ key "domain" . equals . store neg2 . Util.comment_or_eol ]
+let frontend = Build.key_value_line ("frontend" | "frontends")
+ equals (store Rx.neg1)
+let host = Build.key_value_line "host" equals (store Rx.ip)
+let ports = key_csv_line "ports" equals Sep.comma (store Rx.integer)
+let protos = key_csv_line "protos" equals Sep.comma (store Rx.word)
+
+(* entries for pagekite.d/20_frontends.rc *)
+let kitesecret = Build.key_value_line "kitesecret" equals (store Rx.space_in)
+let kv_frontend = Build.key_value_line ( "kitename" | "fe_certname" |
+ "ca_certs" | "tls_endpoint" )
+ equals (store Rx.neg1)
+
+(* entries for services like 80_httpd.rc *)
+let service_colon = del /[ \t]*:[ \t]*/ " : "
+let service_on = [ key "service_on" . [ seq "service_on" . equals .
+ [ label "protocol" . store neg3 ] . service_colon .
+ [ label "kitename" . (store neg3) ] . service_colon .
+ [ label "backend_host" . (store neg3) ] . service_colon .
+ [ label "backend_port" . (store neg3) ] . service_colon . (
+ [ label "secret" . (store Rx.no_spaces) . Util.eol ] | eol
+ ) ] ]
+
+let service_cfg = [ key "service_cfg" . equals . store to_eol . eol ]
+
+let flags = ( "defaults" | "isfrontend" | "abort_not_configured" | "insecure" )
+
+let entries = Build.flag_line flags
+ | domain
+ | frontend
+ | host
+ | ports
+ | protos
+ | kv_frontend
+ | kitesecret
+ | service_on
+ | service_cfg
+
+let lns = ( entries | Util.empty | Util.comment )*
+
+(* View: filter *)
+let filter = incl "/etc/pagekite.d/*.rc"
+ . Util.stdexcl
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Pam
+ Parses /etc/pam.conf and /etc/pam.d/* service files
+
+Author: David Lutterkort <lutter@redhat.com>
+
+About: Reference
+ This lens tries to keep as close as possible to `man pam.conf`.
+
+About: Licence
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+
+About: Configuration files
+ This lens autoloads /etc/pam.d/* for service specific files. See <filter>.
+ It provides a lens for /etc/pam.conf, which is used in the PamConf module.
+*)
+module Pam =
+ autoload xfm
+
+ let eol = Util.eol
+ let indent = Util.indent
+ let space = del /([ \t]|\\\\\n)+/ " "
+
+ (* For the control syntax of [key=value ..] we could split the key value *)
+ (* pairs into an array and generate a subtree control/N/KEY = VALUE *)
+ (* If the [...] syntax is not used, the valid control values are      *)
+ (* required|requisite|optional|sufficient|include|substack.           *)
+ (* We allow more than that because the list is not case sensitive     *)
+ (* and to be more lenient with typos                                  *)
+ let control = /(\[[^]#\n]*\]|[a-zA-Z]+)/
+ let word = /([^# \t\n\\]|\\\\.)+/
+ (* Allowed types *)
+ let types = /(auth|session|account|password)/i
+
+ (* This isn't entirely right: arguments enclosed in [ .. ] can contain *)
+ (* a ']' if escaped with a '\' and can be on multiple lines ('\') *)
+ let argument = /(\[[^]#\n]+\]|[^[#\n \t\\][^#\n \t\\]*)/
+
+ let comment = Util.comment
+ let comment_or_eol = Util.comment_or_eol
+ let empty = Util.empty
+
+
+ (* Not mentioned in the man page, but Debian uses the syntax *)
+ (* @include module *)
+ (* quite a bit *)
+ let include = [ indent . Util.del_str "@" . key "include" .
+ space . store word . eol ]
+
+ (* Shared with PamConf *)
+ let record = [ label "optional" . del "-" "-" ]? .
+ [ label "type" . store types ] .
+ space .
+ [ label "control" . store control] .
+ space .
+ [ label "module" . store word ] .
+ [ space . label "argument" . store argument ]* .
+ comment_or_eol
+
+ let record_svc = [ seq "record" . indent . record ]
+
+ let lns = ( empty | comment | include | record_svc ) *
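+
+ (* Illustrative example (assumption, not a verified test): the line
+  *   session  required  pam_unix.so nullok
+  * should map to
+  *   { "1" { "type" = "session" } { "control" = "required" }
+  *         { "module" = "pam_unix.so" } { "argument" = "nullok" } } *)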
+
+ let filter = incl "/etc/pam.d/*"
+ . excl "/etc/pam.d/allow.pamlist"
+ . excl "/etc/pam.d/README"
+ . Util.stdexcl
+
+ let xfm = transform lns filter
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+(*
+Module: PamConf
+ Parses /etc/pam.conf files
+
+Author: Dominic Cleal <dcleal@redhat.com>
+
+About: Reference
+ This lens tries to keep as close as possible to `man pam.conf`.
+
+About: Licence
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+
+About: Configuration files
+ This lens applies to /etc/pam.conf. See <filter>.
+*)
+module PamConf =
+ autoload xfm
+
+(************************************************************************
+ * Group: USEFUL PRIMITIVES
+ *************************************************************************)
+
+let indent = Util.indent
+
+let comment = Util.comment
+
+let empty = Util.empty
+
+let include = Pam.include
+
+let service = Rx.word
+
+(************************************************************************
+ * Group: LENSES
+ *************************************************************************)
+
+let record = [ seq "record" . indent .
+ [ label "service" . store service ] .
+ Sep.space .
+ Pam.record ]
+
+let lns = ( empty | comment | include | record ) *
+
+let filter = incl "/etc/pam.conf"
+
+let xfm = transform lns filter
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+(*
+ Module: Passwd
+ Parses /etc/passwd
+
+ Author: Free Ekanayaka <free@64studio.com>
+
+ About: Reference
+ - man 5 passwd
+ - man 3 getpwnam
+
+ Each line in the unix passwd file represents a single user record, whose
+ colon-separated attributes correspond to the members of the passwd struct.
+
+*)
+
+module Passwd =
+
+ autoload xfm
+
+(************************************************************************
+ * Group: USEFUL PRIMITIVES
+ *************************************************************************)
+
+(* Group: Comments and empty lines *)
+
+let eol = Util.eol
+let comment = Util.comment
+let empty = Util.empty
+let dels = Util.del_str
+
+let word = Rx.word
+let integer = Rx.integer
+
+let colon = Sep.colon
+
+let sto_to_eol = store Rx.space_in
+let sto_to_col = store /[^:\r\n]+/
+(* Store an empty string if nothing matches *)
+let sto_to_col_or_empty = store /[^:\r\n]*/
+
+(************************************************************************
+ * Group: ENTRIES
+ *************************************************************************)
+
+let username = /[_.A-Za-z0-9][-_.A-Za-z0-9]*\$?/
+
+(* View: password
+ pw_passwd *)
+let password = [ label "password" . sto_to_col? . colon ]
+
+(* View: uid
+ pw_uid *)
+let uid = [ label "uid" . store integer . colon ]
+
+(* View: gid
+ pw_gid *)
+let gid = [ label "gid" . store integer . colon ]
+
+(* View: name
+ pw_gecos; the user's full name *)
+let name = [ label "name" . sto_to_col? . colon ]
+
+(* View: home
+ pw_dir *)
+let home = [ label "home" . sto_to_col? . colon ]
+
+(* View: shell
+ pw_shell *)
+let shell = [ label "shell" . sto_to_eol? ]
+
+(* View: entry
+ struct passwd *)
+let entry = [ key username
+ . colon
+ . password
+ . uid
+ . gid
+ . name
+ . home
+ . shell
+ . eol ]
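+
+(* Illustrative example (assumption, not a verified test): the line
+ * "root:x:0:0:root:/root:/bin/bash" should map to
+ *   { "root" { "password" = "x" } { "uid" = "0" } { "gid" = "0" }
+ *            { "name" = "root" } { "home" = "/root" }
+ *            { "shell" = "/bin/bash" } } *)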
+
+(* NIS entries *)
+let niscommon = [ label "password" . sto_to_col ]? . colon
+ . [ label "uid" . store integer ]? . colon
+ . [ label "gid" . store integer ]? . colon
+ . [ label "name" . sto_to_col ]? . colon
+ . [ label "home" . sto_to_col ]? . colon
+ . [ label "shell" . sto_to_eol ]?
+
+let nisentry =
+ let overrides =
+ colon
+ . niscommon in
+ [ dels "+@" . label "@nis" . store username . overrides . eol ]
+
+let nisuserplus =
+ let overrides =
+ colon
+ . niscommon in
+ [ dels "+" . label "@+nisuser" . store username . overrides . eol ]
+
+let nisuserminus =
+ let overrides =
+ colon
+ . niscommon in
+ [ dels "-" . label "@-nisuser" . store username . overrides . eol ]
+
+let nisdefault =
+ let overrides =
+ colon
+ . [ label "password" . sto_to_col_or_empty . colon ]
+ . [ label "uid" . store integer? . colon ]
+ . [ label "gid" . store integer? . colon ]
+ . [ label "name" . sto_to_col? . colon ]
+ . [ label "home" . sto_to_col? . colon ]
+ . [ label "shell" . sto_to_eol? ] in
+ [ dels "+" . label "@nisdefault" . overrides? . eol ]
+
+(************************************************************************
+ * LENS
+ *************************************************************************)
+
+let lns = (comment|empty|entry|nisentry|nisdefault|nisuserplus|nisuserminus) *
+
+let filter = incl "/etc/passwd"
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Pbuilder
+ Parses /etc/pbuilderrc, /etc/pbuilder/pbuilderrc
+
+Author: Raphael Pinson <raphink@gmail.com>
+
+About: Reference
+ Pbuilderrc is a standard shellvars file.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Configuration files
+ This lens applies to /etc/pbuilderrc and /etc/pbuilder/pbuilderrc.
+ See <filter>.
+*)
+
+module Pbuilder =
+
+autoload xfm
+
+(* View: filter
+ The pbuilder conffiles *)
+let filter = incl "/etc/pbuilder/pbuilderrc"
+ . incl "/etc/pbuilderrc"
+
+(* View: lns
+ The pbuilder lens *)
+let lns = Shellvars.lns
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Pg_Hba
+ Parses PostgreSQL's pg_hba.conf
+
+Author: Aurelien Bompard <aurelien@bompard.org>
+
+About: Reference
+ The file format is described in PostgreSQL's documentation:
+ http://www.postgresql.org/docs/current/static/auth-pg-hba-conf.html
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Configuration files
+ This lens applies to pg_hba.conf. See <filter> for exact locations.
+*)
+
+
+module Pg_Hba =
+ autoload xfm
+
+ (* Group: Generic primitives *)
+
+ let eol = Util.eol
+ let word = Rx.neg1
+ (* Variable: ipaddr
+ CIDR or ip+netmask *)
+ let ipaddr = /[0-9a-fA-F:.]+(\/[0-9]+|[ \t]+[0-9.]+)/
+ (* Variable: hostname
+ Hostname, FQDN or part of an FQDN possibly
+ starting with a dot. Taken from the syslog lens. *)
+ let hostname = /\.?[a-zA-Z0-9]([a-zA-Z0-9-]*[a-zA-Z0-9])?(\.[a-zA-Z0-9]([a-zA-Z0-9-]*[a-zA-Z0-9])?)*/
+
+ let comma_sep_list (l:string) =
+ let lns = [ label l . store word ] in
+ Build.opt_list lns Sep.comma
+
+ (* Group: Columns definitions *)
+
+ (* View: ipaddr_or_hostname *)
+ let ipaddr_or_hostname = ipaddr | hostname
+ (* View: database
+ TODO: support for quoted strings *)
+ let database = comma_sep_list "database"
+ (* View: user
+ TODO: support for quoted strings *)
+ let user = comma_sep_list "user"
+ (* View: address *)
+ let address = [ label "address" . store ipaddr_or_hostname ]
+ (* View: option
+ part of <method> *)
+ let option =
+ let value_start = label "value" . Sep.equal
+ in [ label "option" . store Rx.word
+ . (Quote.quote_spaces value_start)? ]
+
+ (* View: method
+ can contain an <option> *)
+ let method = [ label "method" . store /[A-Za-z][A-Za-z0-9-]+/
+ . ( Sep.tab . option )* ]
+
+ (* Group: Records definitions *)
+
+ (* View: record_local
+ when type is "local", there is no "address" field *)
+ let record_local = [ label "type" . store "local" ] . Sep.tab .
+ database . Sep.tab . user . Sep.tab . method
+
+ (* Variable: remtypes
+ non-local connection types *)
+ let remtypes = "host" | "hostssl" | "hostnossl"
+
+ (* View: record_remote *)
+ let record_remote = [ label "type" . store remtypes ] . Sep.tab .
+ database . Sep.tab . user . Sep.tab .
+ address . Sep.tab . method
+
+ (* View: record
+ A sequence of <record_local> or <record_remote> entries *)
+ let record = [ seq "entries" . (record_local | record_remote) . eol ]
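+
+ (* Illustrative example (assumption, not a verified test): a tab-separated
+  * line "local	all	all	trust" should map to
+  *   { "1" { "type" = "local" } { "database" = "all" }
+  *         { "user" = "all" } { "method" = "trust" } } *)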
+
+ (* View: filter
+ The pg_hba.conf conf file *)
+ let filter = (incl "/var/lib/pgsql/data/pg_hba.conf" .
+ incl "/var/lib/pgsql/*/data/pg_hba.conf" .
+ incl "/var/lib/postgresql/*/data/pg_hba.conf" .
+ incl "/etc/postgresql/*/*/pg_hba.conf" )
+
+ (* View: lns
+ The pg_hba.conf lens *)
+ let lns = ( record | Util.comment | Util.empty ) *
+
+ let xfm = transform lns filter
--- /dev/null
+(*
+Module: Pgbouncer
+ Parses Pgbouncer ini configuration files.
+
+Author: Andrew Colin Kissa <andrew@topdog.za.net>
+ Baruwa Enterprise Edition http://www.baruwa.com
+
+About: License
+ This file is licensed under the LGPL v2+.
+
+About: Configuration files
+ This lens applies to /etc/pgbouncer.ini. See <filter>.
+
+About: TODO
+ Create a tree for the database options
+*)
+
+module Pgbouncer =
+autoload xfm
+
+(************************************************************************
+ * Group: USEFUL PRIMITIVES
+ ************************************************************************)
+
+let comment = IniFile.comment IniFile.comment_re IniFile.comment_default
+
+let sep = IniFile.sep "=" "="
+
+let eol = Util.eol
+
+let entry_re = ( /[A-Za-z][:#A-Za-z0-9._-]+|\*/)
+
+(************************************************************************
+ * Group: ENTRY
+ *************************************************************************)
+
+let non_db_line = [ key entry_re . sep . IniFile.sto_to_eol? . eol ]
+
+let entry = non_db_line|comment
+
+let title = IniFile.title IniFile.record_re
+
+let record = IniFile.record title entry
+
+(******************************************************************
+ * Group: LENS AND FILTER
+ ******************************************************************)
+
+let lns = IniFile.lns record comment
+
+(* Variable: filter *)
+let filter = incl "/etc/pgbouncer.ini"
+
+let xfm = transform lns filter
+
--- /dev/null
+(* PHP module for Augeas *)
+(* Author: Raphael Pinson <raphink@gmail.com> *)
+(* *)
+
+module PHP =
+ autoload xfm
+
+(************************************************************************
+ * INI File settings
+ *************************************************************************)
+
+let comment = IniFile.comment IniFile.comment_re IniFile.comment_default
+let sep = IniFile.sep IniFile.sep_re IniFile.sep_default
+let empty = IniFile.empty
+
+
+(************************************************************************
+ * ENTRY
+ *
+ * We have to remove the keyword "section" from possible entry keywords
+ * otherwise it would lead to an ambiguity with the "section" label
+ * since PHP allows entries outside of sections.
+ *************************************************************************)
+let entry =
+ let word = IniFile.entry_re
+ in let entry_re = word . ( "[" . word . "]" )?
+ in IniFile.indented_entry entry_re sep comment
+
+
+(************************************************************************
+ * TITLE
+ *
+ * We subtract ".anon" from IniFile.record_re because there can be
+ * entries outside of sections, grouped under a ".anon" node, whose
+ * label would otherwise conflict with section names
+ *************************************************************************)
+let title = IniFile.title ( IniFile.record_re - ".anon" )
+let record = IniFile.record title entry
+
+let record_anon = [ label ".anon" . ( entry | empty )+ ]
+
+
+(************************************************************************
+ * LENS & FILTER
+ * There can be entries before any section
+ * IniFile.entry includes comment management, so we just pass entry to lns
+ *************************************************************************)
+let lns = record_anon? . record*
+
+let filter = (incl "/etc/php*/*/*.ini")
+ . (incl "/etc/php/*/*/*.ini")
+ . (incl "/etc/php.ini")
+ . (incl "/etc/php.d/*.ini")
+ (* PHPFPM Support *)
+ . (incl "/etc/php*/fpm/pool.d/*.conf")
+ . (incl "/etc/php/*/fpm/pool.d/*.conf")
+ (* Zend Community edition *)
+ . (incl "/usr/local/zend/etc/php.ini")
+ . (incl "/usr/local/zend/etc/conf.d/*.ini")
+ . Util.stdexcl
+
+let xfm = transform lns filter
--- /dev/null
+(* Phpvars module for Augeas
+ Author: Free Ekanayaka <free@64studio.com>
+
+ Reference: PHP syntax
+
+*)
+
+module Phpvars =
+
+ autoload xfm
+
+(************************************************************************
+ * USEFUL PRIMITIVES
+ *************************************************************************)
+
+let eol = Util.eol
+let empty = Util.empty_c_style
+
+let open_php = del /<\?(php)?[ \t]*\n/i "<?php\n"
+let close_php = del /([ \t]*(php)?\?>\n[ \t\n]*)?/i "php?>\n"
+let sep_eq = del /[ \t\n]*=[ \t\n]*/ " = "
+let sep_opt_spc = Sep.opt_space
+let sep_spc = Sep.space
+let sep_dollar = del /\$/ "$"
+let sep_scl = del /[ \t]*;/ ";"
+
+let chr_blank = /[ \t]/
+let chr_nblank = /[^ \t\n]/
+let chr_any = /./
+let chr_star = /\*/
+let chr_nstar = /[^* \t\n]/
+let chr_slash = /\//
+let chr_nslash = /[^\/ \t\n]/
+let chr_variable = /\$[A-Za-z0-9'"_:-]+/
+
+let sto_to_scl = store (/([^ \t\n].*[^ \t\n;]|[^ \t\n;])/ - /.*;[ \t]*(\/\/|#).*/) (* " *)
+let sto_to_eol = store /([^ \t\n].*[^ \t\n]|[^ \t\n])/
+
+(************************************************************************
+ * COMMENTS
+ *************************************************************************)
+
+(* Both c-style and shell-style comments are valid
+ Default to c-style *)
+let comment_one_line = Util.comment_generic /[ \t]*(\/\/|#)[ \t]*/ "// "
+
+let comment_eol = Util.comment_generic /[ \t]*(\/\/|#)[ \t]*/ " // "
+
+let comment = Util.comment_multiline | comment_one_line
+
+let eol_or_comment = eol | comment_eol
+
+
+(************************************************************************
+ * ENTRIES
+ *************************************************************************)
+
+let simple_line (kw:regexp) (lns:lens) = [ key kw
+ . lns
+ . sep_scl
+ . eol_or_comment ]
+
+let global = simple_line "global" (sep_opt_spc . sep_dollar . sto_to_scl)
+
+let assignment =
+ let arraykey = [ label "@arraykey" . store /\[[][A-Za-z0-9'"_:-]+\]/ ] in (* " *)
+ simple_line chr_variable (arraykey? . (sep_eq . sto_to_scl))
+
+let variable = Util.indent . assignment
+
+let classvariable =
+ Util.indent . del /(public|var)/ "public" . Util.del_ws_spc . assignment
+
+let include = simple_line "@include" (sep_opt_spc . sto_to_scl)
+
+let generic_function (kw:regexp) (lns:lens) =
+ let lbracket = del /[ \t]*\([ \t]*/ "(" in
+ let rbracket = del /[ \t]*\)/ ")" in
+ simple_line kw (lbracket . lns . rbracket)
+
+let define =
+ let variable_re = /[A-Za-z0-9'_:-]+/ in
+ let quote = del /["']/ "'" in
+ let sep_comma = del /["'][ \t]*,[ \t]*/ "', " in
+ let sto_to_rbracket = store (/[^ \t\n][^\n]*[^ \t\n\)]|[^ \t\n\)]/
+ - /.*;[ \t]*(\/\/|#).*/) in
+ generic_function "define" (quote . store variable_re . sep_comma
+ . [ label "value" . sto_to_rbracket ])
+
+let simple_function (kw:regexp) =
+ let sto_to_rbracket = store (/[^ \t\n][^\n]*[^ \t\n\)]|[^ \t\n\)]/
+ - /.*;[ \t]*(\/\/|#).*/) in
+ generic_function kw sto_to_rbracket
+
+let entry = Util.indent
+ . ( global
+ | include
+ | define
+ | simple_function "include"
+ | simple_function "include_once"
+ | simple_function "echo" )
+
+
+let class =
+ let classname = key /[A-Za-z0-9'"_:-]+/ in (* " *)
+ del /class[ \t]+/ "class " .
+ [ classname . Util.del_ws_spc . del "{" "{" .
+ (empty|comment|entry|classvariable)*
+ ] . del "}" "}"
+
+(************************************************************************
+ * LENS
+ *************************************************************************)
+
+let lns = open_php . (empty|comment|entry|class|variable)* . close_php
+
+let filter = incl "/etc/squirrelmail/config.php"
+
+let xfm = transform lns filter
--- /dev/null
+(* Parsing /etc/postfix/access *)
+
+module Postfix_Access =
+ autoload xfm
+
+ let sep_tab = Util.del_ws_tab
+ let sep_spc = Util.del_ws_spc
+
+ let eol = del /[ \t]*\n/ "\n"
+ let indent = del /[ \t]*/ ""
+
+ let comment = Util.comment
+ let empty = Util.empty
+
+ let char = /[^# \n\t]/
+ let text =
+ let cont = /\n[ \t]+/ in
+ let any = /[^#\n]/ in
+ char | (char . (any | cont)* . char)
+
+ let word = char+
+ let record = [ seq "spec" .
+ [ label "pattern" . store word ] . sep_tab .
+ [ label "action" . store word ] .
+ [ label "parameters" . sep_spc . store text ]? . eol ]
+
+ let lns = ( empty | comment | record )*
+
+ let xfm = transform lns (incl "/etc/postfix/access" . incl "/usr/local/etc/postfix/access")
--- /dev/null
+(* Postfix_Main module for Augeas
+ Author: Free Ekanayaka <free@64studio.com>
+
+ Reference:
+
+
+*)
+
+module Postfix_Main =
+
+ autoload xfm
+
+(************************************************************************
+ * USEFUL PRIMITIVES
+ *************************************************************************)
+
+let eol = Util.eol
+let indent = del /[ \t]*(\n[ \t]+)?/ " "
+let comment = Util.comment
+let empty = Util.empty
+let eq = del /[ \t]*=/ " ="
+let word = /[A-Za-z0-9_.-]+/
+
+(* The value of a parameter, after the '=' sign. Postfix allows lines to
+ * be continued by starting the continuation line with whitespace.
+ * The definition needs to make sure we don't add indented comment lines
+ * into values *)
+let value =
+ let chr = /[^# \t\n]/ in
+ let any = /.*/ in
+ let line = (chr . any* . chr | chr) in
+ let lines = line . (/[ \t]*\n[ \t]+/ . line)* in
+ store lines
+
+(************************************************************************
+ * ENTRIES
+ *************************************************************************)
+
+let entry = [ key word . eq . (indent . value)? . eol ]
+
+(************************************************************************
+ * LENS
+ *************************************************************************)
+
+let lns = (comment|empty|entry) *
+
+let filter = incl "/etc/postfix/main.cf"
+ . incl "/usr/local/etc/postfix/main.cf"
+
+let xfm = transform lns filter
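+(* Usage sketch, not part of the original module; the sample parameter is
+   hypothetical:
+
+     test Postfix_Main.lns get "myhostname = mail.example.com\n" =
+       { "myhostname" = "mail.example.com" }
+*)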
--- /dev/null
+(* Postfix_Master module for Augeas
+ Author: Free Ekanayaka <free@64studio.com>
+
+ Reference:
+
+*)
+
+module Postfix_Master =
+
+ autoload xfm
+
+(************************************************************************
+ * USEFUL PRIMITIVES
+ *************************************************************************)
+
+let eol = Util.eol
+let ws = del /[ \t\n]+/ " "
+let comment = Util.comment
+let empty = Util.empty
+
+let word = /[A-Za-z0-9_.:-]+/
+let words =
+ let char_start = /[A-Za-z0-9$!(){}=_.,:@-]/
+ in let char_end = char_start | /[]["\/]/
+ in let char_middle = char_end | " "
+ in char_start . char_middle* . char_end
+
+let bool = /y|n|-/
+let integer = /([0-9]+|-)\??/
+let command = words . (/[ \t]*\n[ \t]+/ . words)*
+
+let field (l:string) (r:regexp)
+ = [ label l . store r ]
+
+(************************************************************************
+ * ENTRIES
+ *************************************************************************)
+
+let entry = [ key word . ws
+ . field "type" /inet|unix(-dgram)?|fifo|pass/ . ws
+ . field "private" bool . ws
+ . field "unprivileged" bool . ws
+ . field "chroot" bool . ws
+ . field "wakeup" integer . ws
+ . field "limit" integer . ws
+ . field "command" command
+ . eol ]
+
+(************************************************************************
+ * LENS
+ *************************************************************************)
+
+let lns = (comment|empty|entry) *
+
+let filter = incl "/etc/postfix/master.cf"
+ . incl "/usr/local/etc/postfix/master.cf"
+
+let xfm = transform lns filter
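+(* Usage sketch, not part of the original module: how a typical master.cf
+   service line maps to the tree.
+
+     test Postfix_Master.lns get "smtp inet n - n - - smtpd\n" =
+       { "smtp"
+         { "type" = "inet" }
+         { "private" = "n" }
+         { "unprivileged" = "-" }
+         { "chroot" = "n" }
+         { "wakeup" = "-" }
+         { "limit" = "-" }
+         { "command" = "smtpd" } }
+*)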
--- /dev/null
+(*
+Module: Postfix_Passwordmap
+ Parses /etc/postfix/*passwd
+
+Author: Anton Baranov <abaranov@linuxfoundation.org>
+
+About: Reference
+ This lens tries to keep as close as possible to `man 5 postconf` and
+ http://www.postfix.org/SASL_README.html#client_sasl_enable where possible.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Configuration files
+ This lens applies to /etc/postfix/*passwd. See <filter>.
+
+About: Examples
+ The <Test_Postfix_Passwordmap> file contains various examples and tests.
+*)
+
+module Postfix_Passwordmap =
+
+autoload xfm
+
+(* View: space_or_eol *)
+let space_or_eol = del /([ \t]*\n)?[ \t]+/ " "
+
+(* View: word *)
+let word = store /[A-Za-z0-9@_\+\*.-]+/
+
+(* View: colon *)
+let colon = Sep.colon
+
+(* View: username *)
+let username = [ label "username" . word ]
+
+(* View: password *)
+let password = [ label "password" . (store Rx.space_in)? ]
+
+(* View: record *)
+let record = [ label "pattern" . store /\[?[A-Za-z0-9@\*.-]+\]?(:?[A-Za-z0-9]*)*/
+ . space_or_eol . username . colon . password
+ . Util.eol ]
+
+(* View: lns *)
+let lns = (Util.empty | Util.comment | record)*
+
+(* Variable: filter *)
+let filter = incl "/etc/postfix/*passwd"
+ . incl "/usr/local/etc/postfix/*passwd"
+
+let xfm = transform lns filter
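+(* Usage sketch, not part of the original module; the credentials are
+   hypothetical:
+
+     test Postfix_Passwordmap.lns get "example.com user:secret\n" =
+       { "pattern" = "example.com"
+         { "username" = "user" }
+         { "password" = "secret" } }
+*)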
--- /dev/null
+module Postfix_sasl_smtpd =
+ autoload xfm
+
+ let eol = Util.eol
+ let colon = del /:[ \t]*/ ": "
+ let value_to_eol = store Rx.space_in
+
+ let simple_entry (kw:string) = [ key kw . colon . value_to_eol . eol ]
+
+ let entries = simple_entry "pwcheck_method"
+ | simple_entry "auxprop_plugin"
+ | simple_entry "saslauthd_path"
+ | simple_entry "mech_list"
+ | simple_entry "sql_engine"
+ | simple_entry "log_level"
+ | simple_entry "auto_transition"
+
+ let lns = entries+
+
+ let filter = incl "/etc/postfix/sasl/smtpd.conf"
+ . incl "/usr/local/etc/postfix/sasl/smtpd.conf"
+
+ let xfm = transform lns filter
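+ (* Usage sketch, not part of the original module:
+
+      test Postfix_sasl_smtpd.lns get "pwcheck_method: saslauthd\n" =
+        { "pwcheck_method" = "saslauthd" }
+ *)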
--- /dev/null
+(*
+Module: Postfix_Transport
+ Parses /etc/postfix/transport
+
+Author: Raphael Pinson <raphael.pinson@camptocamp.com>
+
+About: Reference
+ This lens tries to keep as close as possible to `man 5 transport` where possible.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Configuration files
+ This lens applies to /etc/postfix/transport. See <filter>.
+
+About: Examples
+ The <Test_Postfix_Transport> file contains various examples and tests.
+*)
+
+module Postfix_Transport =
+
+autoload xfm
+
+(* View: space_or_eol *)
+let space_or_eol = del /([ \t]*\n)?[ \t]+/ " "
+
+(* View: colon *)
+let colon = Sep.colon
+
+(* View: nexthop *)
+let nexthop =
+ let host_re = "[" . Rx.word . "]" | /[A-Za-z]([^\n]*[^ \t\n])?/
+ in [ label "nexthop" . (store host_re)? ]
+
+(* View: transport *)
+let transport = [ label "transport" . (store Rx.word)? ]
+ . colon . nexthop
+
+(* View: nexthop_smtp *)
+let nexthop_smtp =
+ let host_re = "[" . Rx.word . "]" | Rx.word
+ in [ label "host" . store host_re ]
+ . colon
+ . [ label "port" . store Rx.integer ]
+
+(* View: record *)
+let record = [ label "pattern" . store /[A-Za-z0-9@\*._-]+/
+ . space_or_eol . (transport | nexthop_smtp)
+ . Util.eol ]
+
+(* View: lns *)
+let lns = (Util.empty | Util.comment | record)*
+
+(* Variable: filter *)
+let filter = incl "/etc/postfix/transport"
+ . incl "/usr/local/etc/postfix/transport"
+
+let xfm = transform lns filter
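+(* Usage sketch, not part of the original module; the domain and relay
+   host are hypothetical:
+
+     test Postfix_Transport.lns get "example.com smtp:relay.example.com\n" =
+       { "pattern" = "example.com"
+         { "transport" = "smtp" }
+         { "nexthop" = "relay.example.com" } }
+*)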
--- /dev/null
+(*
+Module: Postfix_Virtual
+ Parses /etc/postfix/virtual
+
+Author: Raphael Pinson <raphael.pinson@camptocamp.com>
+
+About: Reference
+ This lens tries to keep as close as possible to `man 5 virtual` where possible.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Configuration files
+ This lens applies to /etc/postfix/virtual. See <filter>.
+
+About: Examples
+ The <Test_Postfix_Virtual> file contains various examples and tests.
+*)
+
+module Postfix_Virtual =
+
+autoload xfm
+
+(* Variable: space_or_eol_re *)
+let space_or_eol_re = /([ \t]*\n)?[ \t]+/
+
+(* View: space_or_eol *)
+let space_or_eol (sep:regexp) (default:string) =
+ del (space_or_eol_re? . sep . space_or_eol_re?) default
+
+(* View: word *)
+let word = store /[A-Za-z0-9@\*.+=_-]+/
+
+(* View: comma *)
+let comma = space_or_eol "," ", "
+
+(* View: destination *)
+let destination = [ label "destination" . word ]
+
+(* View: record *)
+let record =
+ let destinations = Build.opt_list destination comma
+ in [ label "pattern" . word
+ . space_or_eol Rx.space " " . destinations
+ . Util.eol ]
+
+(* View: lns *)
+let lns = (Util.empty | Util.comment | record)*
+
+(* Variable: filter *)
+let filter = incl "/etc/postfix/virtual"
+ . incl "/usr/local/etc/postfix/virtual"
+
+let xfm = transform lns filter
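+(* Usage sketch, not part of the original module; the addresses are
+   hypothetical:
+
+     test Postfix_Virtual.lns get "postmaster@example.com root\n" =
+       { "pattern" = "postmaster@example.com"
+         { "destination" = "root" } }
+*)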
--- /dev/null
+(*
+Module: Postgresql
+ Parses postgresql.conf
+
+Author: Raphael Pinson <raphink@gmail.com>
+
+About: Reference
+ http://www.postgresql.org/docs/current/static/config-setting.html
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Configuration files
+ This lens applies to postgresql.conf. See <filter>.
+
+About: Examples
+ The <Test_Postgresql> file contains various examples and tests.
+*)
+
+
+module Postgresql =
+ autoload xfm
+
+(* View: sep
+ Key and values are separated
+ by either spaces or an equal sign *)
+let sep = del /([ \t]+)|([ \t]*=[ \t]*)/ " = "
+
+(* Variable: word_opt_quot_re
+ Strings that don't require quotes *)
+let word_opt_quot_re = /[A-Za-z][A-Za-z0-9_-]*/
+
+(* View: word_opt_quot
+ Storing a <word_opt_quot_re>, with or without quotes *)
+let word_opt_quot = Quote.do_squote_opt (store word_opt_quot_re)
+
+(* Variable: number_re
+ A relative decimal number, optionally with unit *)
+let number_re = Rx.reldecimal . /[kMG]?B|[m]?s|min|h|d/?
+
+(* View: number
+ Storing <number_re>, with or without quotes *)
+let number = Quote.do_squote_opt (store number_re)
+
+(* View: word_quot
+ Anything other than <word_opt_quot> or <number>
+ Quotes are mandatory *)
+let word_quot =
+ let esc_squot = /\\\\'/
+ in let no_quot = /[^#'\n]/
+ in let forbidden = word_opt_quot_re | number_re
+ in let value = (no_quot|esc_squot)* - forbidden
+ in Quote.do_squote (store value)
+
+(* View: entry_gen
+ Builder to construct entries *)
+let entry_gen (lns:lens) =
+ Util.indent . Build.key_value_line_comment Rx.word sep lns Util.comment_eol
+
+(* View: entry *)
+let entry = entry_gen number
+ | entry_gen word_opt_quot
+ | entry_gen word_quot (* anything else *)
+
+(* View: lns *)
+let lns = (Util.empty | Util.comment | entry)*
+
+(* Variable: filter *)
+let filter = (incl "/var/lib/pgsql/data/postgresql.conf" .
+ incl "/var/lib/pgsql/*/data/postgresql.conf" .
+ incl "/var/lib/postgresql/*/data/postgresql.conf" .
+ incl "/etc/postgresql/*/*/postgresql.conf" )
+
+let xfm = transform lns filter
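+(* Usage sketch, not part of the original module; the setting is
+   hypothetical. Numeric values (optionally with a unit) parse via
+   <number>, bare words via <word_opt_quot>, everything else via
+   <word_quot>.
+
+     test Postgresql.lns get "max_connections = 100\n" =
+       { "max_connections" = "100" }
+*)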
+
--- /dev/null
+(* Augeas module for editing Java properties files
+ Author: Craig Dunn <craig@craigdunn.org>
+
+ Limitations:
+ - doesn't support \ alone on a line
+ - values are not unescaped
+ - multi-line properties are broken down by line, and can't be replaced with a single line
+
+ See format info: http://docs.oracle.com/javase/6/docs/api/java/util/Properties.html#load(java.io.Reader)
+*)
+
+
+module Properties =
+ (* Define some basic primitives *)
+ let empty = Util.empty_generic_dos /[ \t]*[#!]?[ \t]*/
+ let eol = Util.doseol
+ let hard_eol = del /\r?\n/ "\n"
+ let sepch = del /([ \t]*(=|:)|[ \t])/ "="
+ let sepspc = del /[ \t]/ " "
+ let sepch_ns = del /[ \t]*(=|:)/ "="
+ let sepch_opt = del /[ \t]*(=|:)?[ \t]*/ "="
+ let value_to_eol_ws = store /(:|=)[^\r\n]*[^ \t\r\n\\]/
+ let value_to_bs_ws = store /(:|=)[^\n]*[^\\\n]/
+ let value_to_eol = store /([^ \t\n:=][^\n]*[^ \t\r\n\\]|[^ \t\r\n\\:=])/
+ let value_to_bs = store /([^ \t\n:=][^\n]*[^\\\n]|[^ \t\n\\:=])/
+ let indent = Util.indent
+ let backslash = del /[\\][ \t]*\n/ "\\\n"
+ let opt_backslash = del /([\\][ \t]*\n)?/ ""
+ let entry = /([^ \t\r\n:=!#\\]|[\\]:|[\\]=|[\\][\t ]|[\\][^\/\r\n])+/
+
+ let multi_line_entry =
+ [ indent . value_to_bs? . backslash ] .
+ [ indent . value_to_bs . backslash ] * .
+ [ indent . value_to_eol . eol ] . value " < multi > "
+
+ let multi_line_entry_ws =
+ opt_backslash .
+ [ indent . value_to_bs_ws . backslash ] + .
+ [ indent . value_to_eol . eol ] . value " < multi_ws > "
+
+ (* define comments and properties*)
+ let bang_comment = [ label "!comment" . del /[ \t]*![ \t]*/ "! " . store /([^ \t\n].*[^ \t\r\n]|[^ \t\r\n])/ . eol ]
+ let comment = ( Util.comment | bang_comment )
+ let property = [ indent . key entry . sepch . ( multi_line_entry | indent . value_to_eol . eol ) ]
+ let property_ws = [ indent . key entry . sepch_ns . ( multi_line_entry_ws | indent . value_to_eol_ws . eol ) ]
+ let empty_property = [ indent . key entry . sepch_opt . hard_eol ]
+ let empty_key = [ sepch_ns . ( multi_line_entry | indent . value_to_eol . eol ) ]
+
+ (* setup our lens and filter*)
+ let lns = ( empty | comment | property_ws | property | empty_property | empty_key ) *
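+ (* Usage sketch, not part of the original module; the property is
+    hypothetical:
+
+      test Properties.lns get "user.home = /home/augeas\n" =
+        { "user.home" = "/home/augeas" }
+ *)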
--- /dev/null
+(*
+Module: Protocols
+ Parses /etc/protocols
+
+Author: Raphael Pinson <raphink@gmail.com>
+
+About: Reference
+ This lens tries to keep as close as possible to `man 5 protocols` where possible.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Configuration files
+ This lens applies to /etc/protocols. See <filter>.
+
+About: Examples
+ The <Test_Protocols> file contains various examples and tests.
+*)
+
+
+module Protocols =
+
+autoload xfm
+
+let protoname = /[^# \t\n]+/
+
+(* View: entry *)
+let entry =
+ let protocol = [ label "protocol" . store protoname ]
+ in let number = [ label "number" . store Rx.integer ]
+ in let alias = [ label "alias" . store protoname ]
+ in [ seq "protocol" . protocol
+ . Sep.space . number
+ . (Sep.space . Build.opt_list alias Sep.space)?
+ . Util.comment_or_eol ]
+
+(* View: lns
+ The protocols lens *)
+let lns = (Util.empty | Util.comment | entry)*
+
+(* Variable: filter *)
+let filter = incl "/etc/protocols"
+
+let xfm = transform lns filter
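+(* Usage sketch, not part of the original module, using a typical
+   /etc/protocols line:
+
+     test Protocols.lns get "ip 0 IP\n" =
+       { "1"
+         { "protocol" = "ip" }
+         { "number" = "0" }
+         { "alias" = "IP" } }
+*)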
--- /dev/null
+(* Puppet module for Augeas
+ Author: Raphael Pinson <raphink@gmail.com>
+
+ puppet.conf is a standard INI File.
+*)
+
+
+module Puppet =
+ autoload xfm
+
+(************************************************************************
+ * INI File settings
+ *
+ * puppet.conf only supports "#" as commentary and "=" as separator
+ *************************************************************************)
+let comment = IniFile.comment "#" "#"
+let sep = IniFile.sep "=" "="
+
+
+(************************************************************************
+ * ENTRY
+ * puppet.conf uses standard INI File entries
+ *************************************************************************)
+let entry = IniFile.indented_entry IniFile.entry_re sep comment
+
+
+(************************************************************************
+ * RECORD
+ * puppet.conf uses standard INI File records
+ *************************************************************************)
+let title = IniFile.indented_title IniFile.record_re
+let record = IniFile.record title entry
+
+
+(************************************************************************
+ * LENS & FILTER
+ * puppet.conf uses standard INI File records
+ *************************************************************************)
+let lns = IniFile.lns record comment
+
+let filter = (incl "/etc/puppet/puppet.conf"
+ .incl "/usr/local/etc/puppet/puppet.conf"
+ .incl "/etc/puppetlabs/puppet/puppet.conf")
+
+let xfm = transform lns filter
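+(* Usage sketch, not part of the original module; the setting is
+   hypothetical:
+
+     test Puppet.lns get "[main]\nlogdir = /var/log/puppet\n" =
+       { "main"
+         { "logdir" = "/var/log/puppet" } }
+*)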
--- /dev/null
+(*
+Module: Puppet_Auth
+ Parses /etc/puppet/auth.conf
+
+Author: Raphael Pinson <raphael.pinson@camptocamp.com>
+
+About: Reference
+ This lens tries to keep as close as possible to `http://docs.puppetlabs.com/guides/rest_auth_conf.html` where possible.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Configuration files
+ This lens applies to /etc/puppet/auth.conf. See <filter>.
+
+About: Examples
+ The <Test_Puppet_Auth> file contains various examples and tests.
+*)
+
+module Puppet_Auth =
+
+autoload xfm
+
+(* View: list
+ A list of values *)
+let list (kw:string) (val:regexp) =
+ let item = [ seq kw . store val ]
+ in let comma = del /[ \t]*,[ \t]*/ ", "
+ in [ Util.indent . key kw . Sep.space
+ . Build.opt_list item comma . Util.comment_or_eol ]
+
+(* View: auth
+ An authentication stanza *)
+let auth =
+ [ Util.indent . Build.xchg /auth(enticated)?/ "auth" "auth"
+ . Sep.space . store /yes|no|on|off|any/ . Util.comment_or_eol ]
+
+(* View: reset_counters *)
+let reset_counters =
+ counter "environment" . counter "method"
+ . counter "allow" . counter "allow_ip"
+
+(* View: setting *)
+let setting = list "environment" Rx.word
+ | list "method" /find|search|save|destroy/
+ | list "allow" /[^# \t\n,][^#\n,]*[^# \t\n,]|[^# \t\n,]/
+ | list "allow_ip" /[A-Za-z0-9.:\/]+/
+ | auth
+
+(* View: record *)
+let record =
+ let operator = [ label "operator" . store "~" ]
+ in [ Util.indent . key "path"
+ . (Sep.space . operator)?
+ . Sep.space . store /[^~# \t\n][^#\n]*[^# \t\n]|[^~# \t\n]/ . Util.eol
+ . reset_counters
+ . (Util.empty | Util.comment | setting)*
+ . setting ]
+
+(* View: lns *)
+let lns = (Util.empty | Util.comment | record)*
+
+(* Variable: filter *)
+let filter = (incl "/etc/puppet/auth.conf"
+ .incl "/usr/local/etc/puppet/auth.conf"
+ .incl "/etc/puppetlabs/puppet/auth.conf")
+
+let xfm = transform lns filter
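+(* Usage sketch, not part of the original module; the path and ACL are
+   hypothetical. Note that a <record> must end with at least one
+   <setting>:
+
+     test Puppet_Auth.lns get "path /certificates\nallow *\n" =
+       { "path" = "/certificates"
+         { "allow"
+           { "1" = "*" } } }
+*)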
--- /dev/null
+(*
+Module: Puppetfile
+ Parses librarian-puppet's Puppetfile format
+
+Author: Raphael Pinson <raphael.pinson@camptocamp.com>
+
+About: Reference
+ See https://github.com/rodjek/librarian-puppet
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Configuration files
+ This lens applies to Puppetfiles.
+
+About: Examples
+ The <Test_Puppetfile> file contains various examples and tests.
+*)
+
+module Puppetfile =
+
+(* View: comma
+ a comma, optionally preceded or followed by spaces or newlines *)
+let comma = del /[ \t\n]*,[ \t\n]*/ ", "
+let comma_nospace = del /[ \t\n]*,/ ","
+
+let comment_or_eol = Util.eol | Util.comment_eol
+let quote_to_comment_or_eol = Quote.do_quote (store /[^#\n]*/) . comment_or_eol
+
+(* View: moduledir
+ The moduledir setting specifies where modules from the Puppetfile will be installed *)
+let moduledir = [ Util.indent . key "moduledir" . Sep.space
+ . quote_to_comment_or_eol ]
+
+(* View: forge
+ a forge entry *)
+let forge = [ Util.indent . key "forge" . Sep.space
+ . quote_to_comment_or_eol ]
+
+(* View: metadata
+ a metadata entry *)
+let metadata = [ Util.indent . key "metadata" . comment_or_eol ]
+
+(* View: mod
+ a module entry, with optional version and options *)
+let mod =
+ let mod_name = Quote.do_quote (store ((Rx.word . /[\/-]/)? . Rx.word))
+ in let version = [ label "@version" . Quote.do_quote (store /[^#:\n]+/) . Util.comment_eol? ]
+ in let sto_opt_val = store /[^#"', \t\n][^#"',\n]*[^#"', \t\n]|[^#"', \t\n]/
+ in let opt = [
+ Util.del_str ":" . key Rx.word
+ . (del /[ \t]*=>[ \t]*/ " => " . Quote.do_quote_opt sto_opt_val)?
+ ]
+ in let opt_eol = del /([ \t\n]*\n)?/ ""
+ in let opt_space_or_eol = del /[ \t\n]*/ " "
+ in let comma_opt_eol_comment = comma_nospace . (opt_eol . Util.comment_eol)*
+ . opt_space_or_eol
+ in let opts = Build.opt_list opt comma_opt_eol_comment
+ in [ Util.indent . Util.del_str "mod" . seq "mod" . Sep.space . mod_name
+ . (comma_opt_eol_comment . version)?
+ . (comma_opt_eol_comment . opts . Util.comment_eol?)?
+ . Util.eol ]
+
+(* View: lns
+ the Puppetfile lens *)
+let lns = (Util.empty | Util.comment | forge | metadata | mod | moduledir )*
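+(* Usage sketch, not part of the original module; the module name and
+   version are hypothetical:
+
+     test Puppetfile.lns get "mod 'puppetlabs/stdlib', '4.1.0'\n" =
+       { "1" = "puppetlabs/stdlib"
+         { "@version" = "4.1.0" } }
+*)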
--- /dev/null
+(* -*- coding: utf-8 -*-
+Module: PuppetFileserver
+ Parses /etc/puppet/fileserver.conf used by the puppetmasterd daemon.
+
+Author: Frédéric Lespez <frederic.lespez@free.fr>
+
+About: Reference
+ This lens tries to keep as close as possible to puppet documentation
+ for this file:
+ http://docs.puppetlabs.com/guides/file_serving.html
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ Sample usage of this lens in augtool
+
+ * Create a new mount point
+ > ins test_mount after /files/etc/puppet/fileserver.conf/*[last()]
+ > defvar test_mount /files/etc/puppet/fileserver.conf/test_mount
+ > set $test_mount/path /etc/puppet/files
+ > set $test_mount/allow *.example.com
+ > ins allow after $test_mount/*[last()]
+ > set $test_mount/allow[last()] server.domain.com
+ > set $test_mount/deny dangerous.server.com
+ * List the definition of a mount point
+ > print /files/etc/puppet/fileserver.conf/files
+ * Remove a mount point
+ > rm /files/etc/puppet/fileserver.conf/test_mount
+
+About: Configuration files
+ This lens applies to /etc/puppet/fileserver.conf. See <filter>.
+*)
+
+
+module PuppetFileserver =
+ autoload xfm
+
+(************************************************************************
+ * Group: USEFUL PRIMITIVES
+ *************************************************************************)
+
+(* Group: INI File settings *)
+
+(* Variable: eol *)
+let eol = IniFile.eol
+
+(* Variable: sep
+ Only treat one space as the sep, extras are stripped by IniFile *)
+let sep = Util.del_str " "
+
+(*
+Variable: comment
+ Only supports "#" as commentary
+*)
+let comment = IniFile.comment "#" "#"
+
+(*
+Variable: entry_re
+ Regexp for possible <entry> keyword (path, allow, deny)
+*)
+let entry_re = /path|allow|deny/
+
+
+(************************************************************************
+ * Group: ENTRY
+ *************************************************************************)
+
+(*
+View: entry
+ - It might be indented with an arbitrary amount of whitespace
+ - It does not have any separator between keywords and their values
+ - It only accepts the following keywords (path, allow, deny)
+*)
+let entry = IniFile.indented_entry entry_re sep comment
+
+
+(************************************************************************
+ * Group: RECORD
+ *************************************************************************)
+
+(* Group: Title definition *)
+
+(*
+View: title
+ Uses standard INI File title
+*)
+let title = IniFile.indented_title IniFile.record_re
+
+(*
+View: record
+ Uses standard INI File record
+*)
+let record = IniFile.record title entry
+
+
+(************************************************************************
+ * Group: LENS
+ *************************************************************************)
+
+(*
+View: lns
+ Uses standard INI File lens
+*)
+let lns = IniFile.lns record comment
+
+(* Variable: filter *)
+let filter = (incl "/etc/puppet/fileserver.conf"
+ .incl "/usr/local/etc/puppet/fileserver.conf"
+ .incl "/etc/puppetlabs/puppet/fileserver.conf")
+
+let xfm = transform lns filter
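+(* Usage sketch, not part of the original module; the mount point is
+   hypothetical:
+
+     test PuppetFileserver.lns get "[files]\n  path /etc/puppet/files\n  allow *.example.com\n" =
+       { "files"
+         { "path" = "/etc/puppet/files" }
+         { "allow" = "*.example.com" } }
+*)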
--- /dev/null
+(*
+Module: PylonsPaste
+ Parses Pylons Paste ini configuration files.
+
+Author: Andrew Colin Kissa <andrew@topdog.za.net>
+ Baruwa Enterprise Edition http://www.baruwa.com
+
+About: License
+ This file is licensed under the LGPL v2+.
+
+About: Configuration files
+ This lens applies to /etc/baruwa. See <filter>.
+*)
+
+module Pylonspaste =
+autoload xfm
+
+(************************************************************************
+ * Group: USEFUL PRIMITIVES
+ *************************************************************************)
+
+let comment = IniFile.comment IniFile.comment_re IniFile.comment_default
+
+let sep = IniFile.sep "=" "="
+
+let eol = Util.eol
+
+let optspace = del /\n/ "\n"
+
+let entry_re = ( /[A-Za-z][:#A-Za-z0-9._-]+/)
+
+let plugin_re = /[A-Za-z][;:#A-Za-z0-9._-]+/
+
+let plugins_kw = /plugins/
+
+let debug_kw = /debug/
+
+let normal_opts = entry_re - (debug_kw|plugins_kw)
+
+let del_opt_ws = del /[\t ]*/ ""
+
+let new_ln_sep = optspace . del_opt_ws . store plugin_re
+
+let plugins_multiline = sep . counter "items" . [ seq "items" . new_ln_sep]*
+
+let sto_multiline = optspace . Sep.opt_space . store (Rx.space_in . (/[ \t]*\n/ . Rx.space . Rx.space_in)*)
+
+(************************************************************************
+ * Group: ENTRY
+ *************************************************************************)
+
+let set_option = Util.del_str "set "
+let no_inline_comment_entry (kw:regexp) (sep:lens) (comment:lens) =
+ [ set_option . key debug_kw . sep . IniFile.sto_to_eol . eol ]
+ | [ key plugins_kw . plugins_multiline . eol]
+ | [ key kw . sep . IniFile.sto_to_eol? . eol ]
+ | comment
+
+let entry = no_inline_comment_entry normal_opts sep comment
+
+(************************************************************************
+ * RECORD
+ *************************************************************************)
+
+let title = IniFile.title IniFile.record_re
+
+let record = IniFile.record title entry
+
+(************************************************************************
+ * Group: LENS & FILTER
+ *************************************************************************)
+
+let lns = IniFile.lns record comment
+
+let filter = incl "/etc/baruwa/*.ini"
+
+let xfm = transform lns filter
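+(* Usage sketch, not part of the original module; the section and option
+   are hypothetical:
+
+     test Pylonspaste.lns get "[main]\nsession.key = baruwa\n" =
+       { "main"
+         { "session.key" = "baruwa" } }
+*)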
+
--- /dev/null
+(* Python paste config file lens for Augeas
+ Author: Dan Prince <dprince@redhat.com>
+*)
+module PythonPaste =
+ autoload xfm
+
+(************************************************************************
+ * INI File settings
+ *************************************************************************)
+
+let comment = IniFile.comment "#" "#"
+
+let sep = IniFile.sep "=" "="
+
+let eol = Util.eol
+
+(************************************************************************
+ * ENTRY
+ *************************************************************************)
+
+let url_entry = /\/[\/A-Za-z0-9._-]* ?[:|=] [A-Za-z0-9._-]+/
+
+let set_kw = [ Util.del_str "set" . Util.del_ws_spc . label "@set" ]
+
+let no_inline_comment_entry (kw:regexp) (sep:lens) (comment:lens)
+ = [ set_kw? . key kw . sep . IniFile.sto_to_eol? . eol ]
+ | comment
+ | [ seq "urls" . store url_entry . eol ]
+
+let entry_re = ( /[A-Za-z][:#A-Za-z0-9._-]+/ )
+
+let entry = no_inline_comment_entry entry_re sep comment
+
+(************************************************************************
+ * RECORD
+ *************************************************************************)
+
+let title = IniFile.title IniFile.record_re
+
+let record = IniFile.record title entry
+
+(************************************************************************
+ * LENS & FILTER
+ *************************************************************************)
+let lns = IniFile.lns record comment
+
+let filter = ((incl "/etc/glance/*.ini")
+ . (incl "/etc/keystone/keystone.conf")
+ . (incl "/etc/nova/api-paste.ini")
+ . (incl "/etc/swift/swift.conf")
+ . (incl "/etc/swift/proxy-server.conf")
+ . (incl "/etc/swift/account-server/*.conf")
+ . (incl "/etc/swift/container-server/*.conf")
+ . (incl "/etc/swift/object-server/*.conf"))
+
+let xfm = transform lns filter
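+(* Usage sketch, not part of the original module; the sample paste
+   section is hypothetical:
+
+     test PythonPaste.lns get "[app:main]\nuse = egg:swift#proxy\n" =
+       { "app:main"
+         { "use" = "egg:swift#proxy" } }
+*)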
--- /dev/null
+(*
+Module: Qpid
+ Parses Apache Qpid daemon/client configuration files
+
+Author: Andrew Replogle <areplogl@redhat.com>
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Examples
+ The <Test_Qpid> file contains various examples and tests.
+*)
+
+module Qpid =
+
+autoload xfm
+
+(* View: entry *)
+let entry = Build.key_value_line Rx.word Sep.equal
+ (store Rx.space_in)
+
+(* View: lns *)
+let lns = (Util.empty | Util.comment | entry)*
+
+(* Variable: filter *)
+let filter = incl "/etc/qpidd.conf"
+ . incl "/etc/qpid/qpidc.conf"
+
+let xfm = transform lns filter
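+(* Usage sketch, not part of the original module; the option is
+   hypothetical:
+
+     test Qpid.lns get "port=5672\n" =
+       { "port" = "5672" }
+*)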
--- /dev/null
+(*
+Module: Quote
+ Generic module providing useful primitives for quoting
+
+Author: Raphael Pinson <raphael.pinson@camptocamp.com>
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ This is a generic module which doesn't apply to files directly.
+ You can use its definitions to build lenses that require quoted values.
+ It provides several levels of definitions, allowing you to define more or less fine-grained quoted values:
+
+ - the quote separators are separators that are useful to define quoted values;
+ - the quoting functions are useful wrappers to easily enclose a lens in various kinds of quotes (single, double, any, optional or not);
+ - the quoted values definitions are common quoted patterns. They use the quoting functions in order to provide useful shortcuts for commonly met needs. In particular, the <quote_spaces> (and similar) function force values that contain spaces to be quoted, but allow values without spaces to be unquoted.
+
+About: Examples
+ The <Test_Quote> file contains various examples and tests.
+*)
+
+module Quote =
+
+(* Group: QUOTE SEPARATORS *)
+
+(* Variable: dquote
+ A double quote *)
+let dquote = Util.del_str "\""
+
+(* Variable: dquote_opt
+ An optional double quote, default to double *)
+let dquote_opt = del /"?/ "\""
+
+(* Variable: dquote_opt_nil
+ An optional double quote, default to nothing *)
+let dquote_opt_nil = del /"?/ ""
+
+(* Variable: squote
+ A single quote *)
+let squote = Util.del_str "'"
+
+(* Variable: squote_opt
+ An optional single quote, default to single *)
+let squote_opt = del /'?/ "'"
+
+(* Variable: squote_opt_nil
+ An optional single quote, default to nothing *)
+let squote_opt_nil = del /'?/ ""
+
+(* Variable: quote
+ A quote, either double or single, default to double *)
+let quote = del /["']/ "\""
+
+(* Variable: quote_opt
+ An optional quote, either double or single, default to double *)
+let quote_opt = del /["']?/ "\""
+
+(* Variable: quote_opt_nil
+ An optional quote, either double or single, default to nothing *)
+let quote_opt_nil = del /["']?/ ""
+
+
+(* Group: QUOTING FUNCTIONS *)
+
+(*
+View: do_dquote
+ Enclose a lens in <dquote>s
+
+ Parameters:
+ body:lens - the lens to be enclosed
+*)
+let do_dquote (body:lens) =
+ square dquote body dquote
+
+(*
+View: do_dquote_opt
+ Enclose a lens in optional <dquote>s,
+ use <dquote>s by default.
+
+ Parameters:
+ body:lens - the lens to be enclosed
+*)
+let do_dquote_opt (body:lens) =
+ square dquote_opt body dquote_opt
+
+(*
+View: do_dquote_opt_nil
+ Enclose a lens in optional <dquote>s,
+ default to no quotes.
+
+ Parameters:
+ body:lens - the lens to be enclosed
+*)
+let do_dquote_opt_nil (body:lens) =
+ square dquote_opt_nil body dquote_opt_nil
+
+(*
+View: do_squote
+ Enclose a lens in <squote>s
+
+ Parameters:
+ body:lens - the lens to be enclosed
+*)
+let do_squote (body:lens) =
+ square squote body squote
+
+(*
+View: do_squote_opt
+ Enclose a lens in optional <squote>s,
+ use <squote>s by default.
+
+ Parameters:
+ body:lens - the lens to be enclosed
+*)
+let do_squote_opt (body:lens) =
+ square squote_opt body squote_opt
+
+(*
+View: do_squote_opt_nil
+ Enclose a lens in optional <squote>s,
+ default to no quotes.
+
+ Parameters:
+ body:lens - the lens to be enclosed
+*)
+let do_squote_opt_nil (body:lens) =
+ square squote_opt_nil body squote_opt_nil
+
+(*
+View: do_quote
+ Enclose a lens in <quote>s.
+
+ Parameters:
+ body:lens - the lens to be enclosed
+*)
+let do_quote (body:lens) =
+ square quote body quote
+
+(*
+View: do_quote_opt
+ Enclose a lens in optional <quote>s,
+ use <quote>s by default.
+
+ Parameters:
+ body:lens - the lens to be enclosed
+*)
+let do_quote_opt (body:lens) =
+ square quote_opt body quote_opt
+
+(*
+View: do_quote_opt_nil
+ Enclose a lens in optional <quote>s,
+ default to no quotes.
+
+ Parameters:
+ body:lens - the lens to be enclosed
+*)
+let do_quote_opt_nil (body:lens) =
+ square quote_opt_nil body quote_opt_nil
+
+
+(* Group: QUOTED VALUES *)
+
+(* View: double
+ A double-quoted value *)
+let double =
+ let body = store /[^\n]*/
+ in do_dquote body
+
+(* Variable: double_opt_re
+ The regexp to store when value
+ is optionally double-quoted *)
+let double_opt_re = /[^\n\t "]([^\n"]*[^\n\t "])?/
+
+(* View: double_opt
+ An optionally double-quoted value
+ Double quotes are not allowed in value
+ Value cannot begin or end with spaces *)
+let double_opt =
+ let body = store double_opt_re
+ in do_dquote_opt body
+
+(* View: single
+ A single-quoted value *)
+let single =
+ let body = store /[^\n]*/
+ in do_squote body
+
+(* Variable: single_opt_re
+ The regexp to store when value
+ is optionally single-quoted *)
+let single_opt_re = /[^\n\t ']([^\n']*[^\n\t '])?/
+
+(* View: single_opt
+ An optionally single-quoted value
+ Single quotes are not allowed in value
+ Value cannot begin or end with spaces *)
+let single_opt =
+ let body = store single_opt_re
+ in do_squote_opt body
+
+(* View: any
+ A quoted value *)
+let any =
+ let body = store /[^\n]*/
+ in do_quote body
+
+(* Variable: any_opt_re
+ The regexp to store when value
+ is optionally single- or double-quoted *)
+let any_opt_re = /[^\n\t "']([^\n"']*[^\n\t "'])?/
+
+(* View: any_opt
+ An optionally quoted value
+ Double or single quotes are not allowed in value
+ Value cannot begin or end with spaces *)
+let any_opt =
+ let body = store any_opt_re
+ in do_quote_opt body
+
+(*
+View: quote_spaces
+ Make quotes mandatory if value contains spaces,
+ and optional if value doesn't contain spaces.
+
+Parameters:
+ lns:lens - the lens to be enclosed
+*)
+let quote_spaces (lns:lens) =
+ (* bare has no spaces, and is optionally quoted *)
+ let bare = Quote.do_quote_opt (store /[^"' \t\n]+/)
+ (* quoted has at least one space, and must be quoted *)
+ in let quoted = Quote.do_quote (store /[^"'\n]*[ \t]+[^"'\n]*/)
+ in [ lns . bare ] | [ lns . quoted ]
+
+(*
+View: dquote_spaces
+ Make double quotes mandatory if value contains spaces,
+ and optional if value doesn't contain spaces.
+
+Parameters:
+ lns:lens - the lens to be enclosed
+*)
+let dquote_spaces (lns:lens) =
+ (* bare has no spaces, and is optionally quoted *)
+ let bare = Quote.do_dquote_opt (store /[^" \t\n]+/)
+ (* quoted has at least one space, and must be quoted *)
+ in let quoted = Quote.do_dquote (store /[^"\n]*[ \t]+[^"\n]*/)
+ in [ lns . bare ] | [ lns . quoted ]
+
+(*
+View: squote_spaces
+ Make single quotes mandatory if value contains spaces,
+ and optional if value doesn't contain spaces.
+
+Parameters:
+ lns:lens - the lens to be enclosed
+*)
+let squote_spaces (lns:lens) =
+ (* bare has no spaces, and is optionally quoted *)
+ let bare = Quote.do_squote_opt (store /[^' \t\n]+/)
+ (* quoted has at least one space, and must be quoted *)
+ in let quoted = Quote.do_squote (store /[^'\n]*[ \t]+[^'\n]*/)
+ in [ lns . bare ] | [ lns . quoted ]
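+
+(* As an illustrative sketch (the "spc" label is hypothetical, not part of
+ this module): a lens such as >Quote.quote_spaces (label "spc")< maps the
+ bare input >this< to { "spc" = "this" }, since it contains no spaces,
+ while >"this that"< must keep its quotes and maps to
+ { "spc" = "this that" }. *)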
--- /dev/null
+(*
+Module: Rabbitmq
+ Parses Rabbitmq configuration files
+
+Author: Raphael Pinson <raphael.pinson@camptocamp.com>
+
+About: Reference
+ This lens tries to keep as close as possible to `http://www.rabbitmq.com/configure.html`.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Configuration files
+ This lens applies to Rabbitmq configuration files. See <filter>.
+
+About: Examples
+ The <Test_Rabbitmq> file contains various examples and tests.
+*)
+module Rabbitmq =
+
+autoload xfm
+
+(* View: listeners
+ A tcp/ssl listener *)
+let listeners =
+ let value = Erlang.make_value Erlang.integer
+ | Erlang.tuple Erlang.quoted Erlang.integer
+ in Erlang.list /(tcp|ssl)_listeners/ value
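+
+(* As an illustrative sketch (labels follow <Erlang.make_value>), a
+ configuration entry such as >{tcp_listeners, [5672]}< would map to
+ { "tcp_listeners" { "value" = "5672" } }. *)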
+
+
+(* View: ssl_options
+ (Incomplete) list of SSL options *)
+let ssl_options =
+ let versions_list = Erlang.opt_list (Erlang.make_value Erlang.quoted)
+ in let option = Erlang.value /((ca)?cert|key)file/ Erlang.path
+ | Erlang.value "verify" Erlang.bare
+ | Erlang.value "verify_fun" Erlang.boolean
+ | Erlang.value /fail_if_no_peer_cert|reuse_sessions/ Erlang.boolean
+ | Erlang.value "depth" Erlang.integer
+ | Erlang.value "password" Erlang.quoted
+ | Erlang.value "versions" versions_list
+ in Erlang.list "ssl_options" option
+
+(* View: disk_free_limit *)
+let disk_free_limit =
+ let value = Erlang.integer | Erlang.tuple Erlang.bare Erlang.decimal
+ in Erlang.value "disk_free_limit" value
+
+(* View: log_levels *)
+let log_levels =
+ let category = Erlang.tuple Erlang.bare Erlang.bare
+ in Erlang.list "log_levels" category
+
+(* View: cluster_nodes
+ Can be a tuple `(nodes, node_type)` or simply `nodes` *)
+let cluster_nodes =
+ let nodes = Erlang.opt_list (Erlang.make_value Erlang.quoted)
+ in let value = Erlang.tuple nodes Erlang.bare
+ | nodes
+ in Erlang.value "cluster_nodes" value
+
+(* View: cluster_partition_handling
+ Can be a single value or
+ `{pause_if_all_down, [nodes], ignore | autoheal}` *)
+let cluster_partition_handling =
+ let nodes = Erlang.opt_list (Erlang.make_value Erlang.quoted)
+ in let value = Erlang.tuple3 Erlang.bare nodes Erlang.bare
+ | Erlang.bare
+ in Erlang.value "cluster_partition_handling" value
+
+(* View: tcp_listen_options *)
+let tcp_listen_options =
+ let value = Erlang.make_value Erlang.bare
+ | Erlang.tuple Erlang.bare Erlang.bare
+ in Erlang.list "tcp_listen_options" value
+
+(* View: parameters
+ Top-level parameters for the lens *)
+let parameters = listeners
+ | ssl_options
+ | disk_free_limit
+ | log_levels
+ | Erlang.value /vm_memory_high_watermark(_paging_ratio)?/ Erlang.decimal
+ | Erlang.value "frame_max" Erlang.integer
+ | Erlang.value "heartbeat" Erlang.integer
+ | Erlang.value /default_(vhost|user|pass)/ Erlang.glob
+ | Erlang.value_list "default_user_tags" Erlang.bare
+ | Erlang.value_list "default_permissions" Erlang.glob
+ | cluster_nodes
+ | Erlang.value_list "server_properties" Erlang.bare
+ | Erlang.value "collect_statistics" Erlang.bare
+ | Erlang.value "collect_statistics_interval" Erlang.integer
+ | Erlang.value_list "auth_mechanisms" Erlang.quoted
+ | Erlang.value_list "auth_backends" Erlang.bare
+ | Erlang.value "delegate_count" Erlang.integer
+ | Erlang.value_list "trace_vhosts" Erlang.bare
+ | tcp_listen_options
+ | Erlang.value "hipe_compile" Erlang.boolean
+ | Erlang.value "msg_store_index_module" Erlang.bare
+ | Erlang.value "backing_queue_module" Erlang.bare
+ | Erlang.value "msg_store_file_size_limit" Erlang.integer
+ | Erlang.value /queue_index_(max_journal_entries|embed_msgs_below)/ Erlang.integer
+ | cluster_partition_handling
+ | Erlang.value /(ssl_)?handshake_timeout/ Erlang.integer
+ | Erlang.value "channel_max" Erlang.integer
+ | Erlang.value_list "loopback_users" Erlang.glob
+ | Erlang.value "reverse_dns_lookups" Erlang.boolean
+ | Erlang.value "cluster_keepalive_interval" Erlang.integer
+ | Erlang.value "mnesia_table_loading_timeout" Erlang.integer
+
+(* View: rabbit
+ The rabbit <Erlang.application> config *)
+let rabbit = Erlang.application "rabbit" parameters
+
+(* View: lns
+ A top-level <Erlang.config> *)
+let lns = Erlang.config rabbit
+
+(* Variable: filter *)
+let filter = incl "/etc/rabbitmq/rabbitmq.config"
+
+let xfm = transform lns filter
--- /dev/null
+(* Radicale module for Augeas
+ Based on Puppet lens.
+
+ Manage config file for http://radicale.org/
+ /etc/radicale/config is a standard INI File.
+*)
+
+
+module Radicale =
+ autoload xfm
+
+(************************************************************************
+ * INI File settings
+ *
+ * /etc/radicale/config only supports "#" as commentary and "=" as separator
+ *************************************************************************)
+let comment = IniFile.comment "#" "#"
+let sep = IniFile.sep "=" "="
+
+
+(************************************************************************
+ * ENTRY
+ * /etc/radicale/config uses standard INI File entries
+ *************************************************************************)
+let entry = IniFile.indented_entry IniFile.entry_re sep comment
+
+
+(************************************************************************
+ * RECORD
+ * /etc/radicale/config uses standard INI File records
+ *************************************************************************)
+let title = IniFile.indented_title IniFile.record_re
+let record = IniFile.record title entry
+
+
+(************************************************************************
+ * LENS & FILTER
+ * /etc/radicale/config uses standard INI File records
+ *************************************************************************)
+let lns = IniFile.lns record comment
+
+let filter = (incl "/etc/radicale/config")
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Rancid
+ Parses RANCiD router database
+
+Author: Matt Dainty <matt@bodgit-n-scarper.com>
+
+About: Reference
+ - man 5 router.db
+
+Each line represents a record consisting of a number of ';'-separated fields,
+the first of which is the IP/hostname of the device, followed by the type, its
+state and, optionally, a comment.
+
+*)
+
+module Rancid =
+ autoload xfm
+
+ let sep = Util.del_str ";"
+ let field = /[^;#\n]+/
+ let comment = [ label "comment" . store /[^;#\n]*/ ]
+ let eol = Util.del_str "\n"
+ let record = [ label "device" . store field . sep
+ . [ label "type" . store field ] . sep
+ . [ label "state" . store field ]
+ . ( sep . comment )? . eol ]
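+
+ (* As an illustrative sketch, a line such as
+ >router1;cisco;up;core router<
+ would map to { "device" = "router1" { "type" = "cisco" }
+ { "state" = "up" } { "comment" = "core router" } }. *)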
+
+ let lns = ( Util.empty | Util.comment_generic /#[ \t]*/ "# " | record )*
+
+ let filter = incl "/var/rancid/*/router.db"
+ . Util.stdexcl
+
+ let xfm = transform lns filter
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+(*
+Module: Redis
+ Parses Redis's configuration files
+
+Author: Marc Fournier <marc.fournier@camptocamp.com>
+
+About: Reference
+ This lens is based on Redis's default redis.conf
+
+About: Usage Example
+(start code)
+augtool> set /augeas/load/Redis/incl "/etc/redis/redis.conf"
+augtool> set /augeas/load/Redis/lens "Redis.lns"
+augtool> load
+
+augtool> get /files/etc/redis/redis.conf/vm-enabled
+/files/etc/redis/redis.conf/vm-enabled = no
+augtool> print /files/etc/redis/redis.conf/rename-command[1]/
+/files/etc/redis/redis.conf/rename-command
+/files/etc/redis/redis.conf/rename-command/from = "CONFIG"
+/files/etc/redis/redis.conf/rename-command/to = "CONFIG2"
+
+augtool> set /files/etc/redis/redis.conf/activerehashing no
+augtool> save
+Saved 1 file(s)
+augtool> set /files/etc/redis/redis.conf/save[1]/seconds 123
+augtool> set /files/etc/redis/redis.conf/save[1]/keys 456
+augtool> save
+Saved 1 file(s)
+(end code)
+ The <Test_Redis> file also contains various examples.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+*)
+
+module Redis =
+autoload xfm
+
+let k = Rx.word
+let v = /[^ \t\n'"]+/
+let comment = Util.comment
+let empty = Util.empty
+let indent = Util.indent
+let eol = Util.eol
+let del_ws_spc = Util.del_ws_spc
+let dquote = Util.del_str "\""
+
+(* View: standard_entry
+A standard entry is a key-value pair, separated by blank space, with optional
+blank space at the beginning and end of the line. The value part can be
+optionally enclosed in single or double quotes. Comments at end-of-line are
+NOT allowed by redis-server.
+*)
+let standard_entry =
+ let reserved_k = "save" | "rename-command" | "replicaof" | "slaveof"
+ | "bind" | "client-output-buffer-limit"
+ | "sentinel"
+ in let entry_noempty = [ indent . key (k - reserved_k) . del_ws_spc
+ . Quote.do_quote_opt_nil (store v) . eol ]
+ in let entry_empty = [ indent . key (k - reserved_k) . del_ws_spc
+ . dquote . store "" . dquote . eol ]
+ in entry_noempty | entry_empty
+
+let save = /save/
+let seconds = [ label "seconds" . Quote.do_quote_opt_nil (store Rx.integer) ]
+let keys = [ label "keys" . Quote.do_quote_opt_nil (store Rx.integer) ]
+let save_val =
+ let save_val_empty = del_ws_spc . dquote . store "" . dquote
+ in let save_val_sec_keys = del_ws_spc . seconds . del_ws_spc . keys
+ in save_val_sec_keys | save_val_empty
+
+(* View: save_entry
+Entries identified by the "save" keyword can be found more than once. They
+have two mandatory parameters, both integers, or a single parameter that is
+an empty double-quoted string. The same rules as standard_entry apply for
+quoting, comments and whitespace.
+*)
+let save_entry = [ indent . key save . save_val . eol ]
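+
+(* As an illustrative sketch, >save 900 1< would map to
+ { "save" { "seconds" = "900" } { "keys" = "1" } },
+ while >save ""< would map to { "save" = "" }. *)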
+
+let replicaof = /replicaof|slaveof/
+let ip = [ label "ip" . Quote.do_quote_opt_nil (store Rx.ip) ]
+let port = [ label "port" . Quote.do_quote_opt_nil (store Rx.integer) ]
+(* View: replicaof_entry
+Entries identified by the "replicaof" keyword can be found more than once.
+They have two mandatory parameters: the first is an IP address, the second
+a port number. The same rules as standard_entry apply for quoting, comments
+and whitespace.
+*)
+let replicaof_entry = [ indent . key replicaof . del_ws_spc . ip . del_ws_spc . port . eol ]
+
+let sentinel_global_entry =
+ let keys = "deny-scripts-reconfig" | "current-epoch" | "myid"
+ in store keys .
+ del_ws_spc . [ label "value" . store ( Rx.word | Rx.integer ) ]
+
+let sentinel_cluster_setup =
+ let keys = "config-epoch" | "leader-epoch"
+ in store keys .
+ del_ws_spc . [ label "cluster" . store Rx.word ] .
+ del_ws_spc . [ label "epoch" . store Rx.integer ]
+
+let sentinel_cluster_instance_setup =
+ let keys = "monitor" | "known-replica"
+ in store keys .
+ del_ws_spc . [ label "cluster" . store Rx.word ] .
+ del_ws_spc. [ label "ip" . store Rx.ip ] .
+ del_ws_spc . [ label "port" . store Rx.integer ] .
+ (del_ws_spc . [ label "quorum" . store Rx.integer ])?
+
+let sentinel_clustering =
+ let keys = "known-sentinel"
+ in store keys .
+ del_ws_spc . [ label "cluster" . store Rx.word ] .
+ del_ws_spc . [ label "ip" . store Rx.ip ] .
+ del_ws_spc . [ label "port" . store Rx.integer ] .
+ del_ws_spc . [ label "id" . store Rx.word ]
+
+(* View: sentinel_entry
+*)
+let sentinel_entry =
+ indent . [ key "sentinel" . del_ws_spc .
+ (sentinel_global_entry | sentinel_cluster_setup | sentinel_cluster_instance_setup | sentinel_clustering)
+ ] . eol
+
+(* View: bind_entry
+The "bind" entry can be passed one or several IP addresses. A bind
+statement "bind ip1 ip2 .. ipn" results in the tree
+{ "bind" { "ip" = ip1 } { "ip" = ip2 } ... { "ip" = ipn } }
+*)
+let bind_entry =
+ let ip = del_ws_spc . Quote.do_quote_opt_nil (store Rx.ip) in
+ indent . [ key "bind" . [ label "ip" . ip ]+ ] . eol
+
+let renamecmd = /rename-command/
+let from = [ label "from" . Quote.do_quote_opt_nil (store Rx.word) ]
+let to = [ label "to" . Quote.do_quote_opt_nil (store Rx.word) ]
+(* View: renamecmd_entry
+Entries identified by the "rename-command" keyword can be found more than
+once. They have two mandatory parameters, both strings. The same rules as
+standard_entry apply for quoting, comments and whitespace.
+*)
+let renamecmd_entry = [ indent . key renamecmd . del_ws_spc . from . del_ws_spc . to . eol ]
+
+let cobl_cmd = /client-output-buffer-limit/
+let class = [ label "class" . Quote.do_quote_opt_nil (store Rx.word) ]
+let hard_limit = [ label "hard_limit" . Quote.do_quote_opt_nil (store Rx.word) ]
+let soft_limit = [ label "soft_limit" . Quote.do_quote_opt_nil (store Rx.word) ]
+let soft_seconds = [ label "soft_seconds" . Quote.do_quote_opt_nil (store Rx.integer) ]
+(* View: client_output_buffer_limit_entry
+Entries identified by the "client-output-buffer-limit" keyword can be found
+more than once. They have four mandatory parameters: the first is a string,
+the last is an integer, and the others are either integers or words, although
+Redis is very liberal and takes "4242yadayadabytes" as a valid limit. The
+same rules as standard_entry apply for quoting, comments and whitespace.
+*)
+let client_output_buffer_limit_entry =
+ [ indent . key cobl_cmd . del_ws_spc . class . del_ws_spc . hard_limit .
+ del_ws_spc . soft_limit . del_ws_spc . soft_seconds . eol ]
+
+let entry = standard_entry
+ | save_entry
+ | renamecmd_entry
+ | replicaof_entry
+ | bind_entry
+ | sentinel_entry
+ | client_output_buffer_limit_entry
+
+(* View: lns
+The Redis lens
+*)
+let lns = (comment | empty | entry )*
+
+let filter =
+ incl "/etc/redis.conf"
+ . incl "/etc/redis/redis.conf"
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Reprepro_Uploaders
+ Parses reprepro's uploaders files
+
+Author: Raphael Pinson <raphink@gmail.com>
+
+About: Reference
+ This lens tries to keep as close as possible to `man 1 reprepro`.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ See <lns>.
+
+About: Configuration files
+ This lens applies to reprepro's uploaders files.
+
+About: Examples
+ The <Test_Reprepro_Uploaders> file contains various examples and tests.
+*)
+
+module Reprepro_Uploaders =
+
+(* View: logic_construct_condition
+ A logical construction for <condition> and <condition_list> *)
+let logic_construct_condition (kw:string) (lns:lens) =
+ [ label kw . lns ]
+ . [ Sep.space . key kw . Sep.space . lns ]*
+
+(* View: logic_construct_field
+ A generic definition for <condition_field> *)
+let logic_construct_field (kw:string) (sep:string) (lns:lens) =
+ [ label kw . lns ]
+ . [ Build.xchgs sep kw . lns ]*
+
+(* View: condition_re
+ A condition can be of several types:
+
+ - source
+ - byhand
+ - sections
+ - sections contain
+ - binaries
+ - binaries contain
+ - architectures
+ - architectures contain
+
+ While the lens technically also accepts "source contain"
+ and "byhand contain", these are not understood by reprepro.
+
+ The "contain" types are built by adding a "contain" subnode.
+ See the <condition_field> definition.
+
+ *)
+let condition_re =
+ "source"
+ | "byhand"
+ | "sections"
+ | "binaries"
+ | "architectures"
+ | "distribution"
+
+(* View: condition_field
+ A single condition field is an 'or' node.
+ It may contain several values, listed in 'or' subnodes:
+
+ > $reprepro/allow[1]/and/or = "architectures"
+ > $reprepro/allow[1]/and/or/or[1] = "i386"
+ > $reprepro/allow[1]/and/or/or[2] = "amd64"
+ > $reprepro/allow[1]/and/or/or[3] = "all"
+
+ *)
+let condition_field =
+ let sto_condition = Util.del_str "'" . store /[^'\n]+/ . Util.del_str "'" in
+ [ key "not" . Sep.space ]? .
+ store condition_re
+ . [ Sep.space . key "contain" ]?
+ . Sep.space
+ . logic_construct_field "or" "|" sto_condition
+
+(* View: condition
+ A condition is an 'and' node,
+ representing a union of <condition_fields>,
+ listed under 'or' subnodes:
+
+ > $reprepro/allow[1]/and
+ > $reprepro/allow[1]/and/or = "architectures"
+ > $reprepro/allow[1]/and/or/or[1] = "i386"
+ > $reprepro/allow[1]/and/or/or[2] = "amd64"
+ > $reprepro/allow[1]/and/or/or[3] = "all"
+
+ *)
+let condition =
+ logic_construct_condition "or" condition_field
+
+(* View: condition_list
+ A list of <conditions>, inspired by Debctrl.dependency_list
+ An upload condition list is either the wildcard '*', stored verbatim,
+ or an intersection of conditions listed under 'and' subnodes:
+
+ > $reprepro/allow[1]/and[1]
+ > $reprepro/allow[1]/and[1]/or = "architectures"
+ > $reprepro/allow[1]/and[1]/or/or[1] = "i386"
+ > $reprepro/allow[1]/and[1]/or/or[2] = "amd64"
+ > $reprepro/allow[1]/and[1]/or/or[3] = "all"
+ > $reprepro/allow[1]/and[2]
+ > $reprepro/allow[1]/and[2]/or = "sections"
+ > $reprepro/allow[1]/and[2]/or/contain
+ > $reprepro/allow[1]/and[2]/or/or = "main"
+
+ *)
+let condition_list =
+ store "*"
+ | logic_construct_condition "and" condition
+
+(* View: by_key
+ When a key is used to authenticate packages,
+ the value can either be a key ID or "any":
+
+ > $reprepro/allow[1]/by/key = "ABCD1234"
+ > $reprepro/allow[2]/by/key = "any"
+
+ *)
+let by_key =
+ let any_key = [ store "any" . Sep.space
+ . key "key" ] in
+ let named_key = [ key "key" . Sep.space
+ . store (Rx.word - "any") ] in
+ value "key" . (any_key | named_key)
+
+(* View: by_group
+ Authenticate packages by a groupname.
+
+ > $reprepro/allow[1]/by/group = "groupname"
+
+ *)
+let by_group = value "group"
+ . [ key "group" . Sep.space
+ . store Rx.word ]
+
+(* View: by
+ <by> statements define who is allowed to upload.
+ It can be simple keywords, like "anybody" or "unsigned",
+ or a key ID, in which case a "key" subnode is added:
+
+ > $reprepro/allow[1]/by/key = "ABCD1234"
+ > $reprepro/allow[2]/by/key = "any"
+ > $reprepro/allow[3]/by = "anybody"
+ > $reprepro/allow[4]/by = "unsigned"
+
+ *)
+let by =
+ [ key "by" . Sep.space
+ . ( store ("anybody"|"unsigned")
+ | by_key | by_group ) ]
+
+(* View: allow
+ An allow entry, e.g.:
+
+ > $reprepro/allow[1]
+ > $reprepro/allow[1]/and[1]
+ > $reprepro/allow[1]/and[1]/or = "architectures"
+ > $reprepro/allow[1]/and[1]/or/or[1] = "i386"
+ > $reprepro/allow[1]/and[1]/or/or[2] = "amd64"
+ > $reprepro/allow[1]/and[1]/or/or[3] = "all"
+ > $reprepro/allow[1]/and[2]
+ > $reprepro/allow[1]/and[2]/or = "sections"
+ > $reprepro/allow[1]/and[2]/or/contain
+ > $reprepro/allow[1]/and[2]/or/or = "main"
+ > $reprepro/allow[1]/by = "key"
+ > $reprepro/allow[1]/by/key = "ABCD1234"
+
+ *)
+let allow =
+ [ key "allow" . Sep.space
+ . condition_list . Sep.space
+ . by . Util.eol ]
+
+(* View: group
+ A group declaration *)
+let group =
+ let add = [ key "add" . Sep.space
+ . store Rx.word ]
+ in let contains = [ key "contains" . Sep.space
+ . store Rx.word ]
+ in let empty = [ key "empty" ]
+ in let unused = [ key "unused" ]
+ in [ key "group" . Sep.space
+ . store Rx.word . Sep.space
+ . (add | contains | empty | unused) . Util.eol ]
+
+(* View: entry
+ An entry is either an <allow> statement
+ or a <group> definition.
+ *)
+let entry = allow | group
+
+(* View: lns
+ The lens is made of <Util.empty>, <Util.comment> and <entry> lines *)
+let lns = (Util.empty|Util.comment|entry)*
--- /dev/null
+(*
+Module: Resolv
+ Parses /etc/resolv.conf
+
+Author: Raphael Pinson <raphink@gmail.com>
+
+About: Reference
+ This lens tries to keep as close as possible to `man resolv.conf`.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+
+About: Configuration files
+ This lens applies to /etc/resolv.conf. See <filter>.
+*)
+
+module Resolv =
+autoload xfm
+
+(************************************************************************
+ * Group: USEFUL PRIMITIVES
+ *************************************************************************)
+
+(* View: comment *)
+let comment = Util.comment_generic /[ \t]*[;#][ \t]*/ "# "
+
+(* View: comment_eol *)
+let comment_eol = Util.comment_generic /[ \t]*[;#][ \t]*/ " # "
+
+(* View: empty *)
+let empty = Util.empty_generic_dos /[ \t]*[#;]?[ \t]*/
+
+
+(************************************************************************
+ * Group: MAIN OPTIONS
+ *************************************************************************)
+
+(* View: netmask
+A network mask for IP addresses *)
+let netmask = [ label "netmask" . Util.del_str "/" . store Rx.ip ]
+
+(* View: ipaddr
+An IP address or range with an optional mask *)
+let ipaddr = [label "ipaddr" . store Rx.ip . netmask?]
+
+
+(* View: nameserver
+ A nameserver entry *)
+let nameserver = Build.key_value_line_comment
+ "nameserver" Sep.space (store Rx.ip) comment_eol
+
+(* View: domain *)
+let domain = Build.key_value_line_comment
+ "domain" Sep.space (store Rx.word) comment_eol
+
+(* View: search *)
+let search = Build.key_value_line_comment
+ "search" Sep.space
+ (Build.opt_list
+ [label "domain" . store Rx.word]
+ Sep.space)
+ comment_eol
+
+(* View: sortlist *)
+let sortlist = Build.key_value_line_comment
+ "sortlist" Sep.space
+ (Build.opt_list
+ ipaddr
+ Sep.space)
+ comment_eol
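+
+(* As an illustrative sketch, >sortlist 130.155.160.0/255.255.240.0 130.155.0.0<
+ would map to
+ { "sortlist"
+ { "ipaddr" = "130.155.160.0" { "netmask" = "255.255.240.0" } }
+ { "ipaddr" = "130.155.0.0" } }. *)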
+
+(* View: lookup *)
+let lookup =
+ let lookup_entry = Build.flag("bind"|"file"|"yp")
+ in Build.key_value_line_comment
+ "lookup" Sep.space
+ (Build.opt_list
+ lookup_entry
+ Sep.space)
+ comment_eol
+
+(* View: family *)
+let family =
+ let family_entry = Build.flag("inet4"|"inet6")
+ in Build.key_value_line_comment
+ "family" Sep.space
+ (Build.opt_list
+ family_entry
+ Sep.space)
+ comment_eol
+
+(************************************************************************
+ * Group: SPECIAL OPTIONS
+ *************************************************************************)
+
+(* View: ip6_dotint
+ ip6-dotint option, which supports negation *)
+let ip6_dotint =
+ let negate = [ del "no-" "no-" . label "negate" ]
+ in [ negate? . key "ip6-dotint" ]
+
+(* View: options
+ Options values *)
+let options =
+ let options_entry = Build.key_value ("ndots"|"timeout"|"attempts")
+ (Util.del_str ":") (store Rx.integer)
+ | Build.flag ("debug"|"rotate"|"no-check-names"
+ |"inet6"|"ip6-bytestring"|"edns0"
+ |"single-request"|"single-request-reopen"
+ |"no-tld-query"|"use-vc"|"no-reload"
+ |"trust-ad")
+ | ip6_dotint
+
+ in Build.key_value_line_comment
+ "options" Sep.space
+ (Build.opt_list
+ options_entry
+ Sep.space)
+ comment_eol
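+
+(* As an illustrative sketch, >options ndots:3 debug< would map to
+ { "options" { "ndots" = "3" } { "debug" } }. *)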
+
+(* View: entry *)
+let entry = nameserver
+ | domain
+ | search
+ | sortlist
+ | options
+ | lookup
+ | family
+
+(* View: lns *)
+let lns = ( empty | comment | entry )*
+
+(* Variable: filter *)
+let filter = (incl "/etc/resolv.conf")
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Rhsm
+ Parses subscription-manager config files
+
+Author: Dominic Cleal <dcleal@redhat.com>
+
+About: Reference
+ This lens tries to keep as close as possible to rhsm.conf(5) and
+ Python's SafeConfigParser. All settings must be in sections without
+ indentation. Semicolons and hashes are permitted for comments.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Configuration files
+ This lens applies to:
+ /etc/rhsm/rhsm.conf
+
+ See <filter>.
+*)
+
+module Rhsm =
+ autoload xfm
+
+(* Semicolons and hashes are permitted for comments *)
+let comment = IniFile.comment IniFile.comment_re "#"
+(* Equals and colons are permitted for separators *)
+let sep = IniFile.sep IniFile.sep_re IniFile.sep_default
+
+(* All settings must be in sections without indentation *)
+let entry = IniFile.entry_multiline IniFile.entry_re sep comment
+let title = IniFile.title IniFile.record_re
+let record = IniFile.record title entry
+
+let lns = IniFile.lns record comment
+
+let filter = incl "/etc/rhsm/rhsm.conf"
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Rmt
+ Parses rmt's configuration file
+
+Author: Dominic Cleal <dcleal@redhat.com>
+
+About: Reference
+ This lens is based on rmt(1)
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+*)
+
+module Rmt =
+ autoload xfm
+
+let sto_to_tab = store Rx.no_spaces
+
+let debug = Build.key_value_line "DEBUG" Sep.equal ( store Rx.fspath )
+let user = Build.key_value_line "USER" Sep.equal sto_to_tab
+let access = Build.key_value_line "ACCESS" Sep.equal
+ ( [ label "name" . sto_to_tab ] . Sep.tab .
+ [ label "host" . sto_to_tab ] . Sep.tab .
+ [ label "path" . sto_to_tab ] )
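+
+(* As an illustrative sketch, a line with tab- or space-separated fields
+ such as >ACCESS=operator host1 /dev/nst0< would map to
+ { "ACCESS" { "name" = "operator" } { "host" = "host1" }
+ { "path" = "/dev/nst0" } }. *)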
+
+let lns = ( debug | user | access | Util.comment | Util.empty )*
+
+let filter = incl "/etc/default/rmt"
+
+let xfm = transform lns filter
--- /dev/null
+(* Rsyncd module for Augeas
+ Author: Marc Fournier <marc.fournier@camptocamp.com>
+
+ Reference: man rsyncd.conf(5)
+
+*)
+
+module Rsyncd =
+ autoload xfm
+
+(************************************************************************
+ * INI File settings
+ *************************************************************************)
+let comment = IniFile.comment IniFile.comment_re "#"
+let sep = IniFile.sep IniFile.sep_re IniFile.sep_default
+let indent = del /[ \t]*/ " "
+
+(* Import useful INI File primitives *)
+let eol = IniFile.eol
+let empty = IniFile.empty
+let sto_to_comment
+ = Util.del_opt_ws " "
+ . store /[^;# \t\n][^;#\n]*[^;# \t\n]|[^;# \t\n]/
+
+
+(************************************************************************
+ * ENTRY
+ * rsyncd.conf allows indented entries, but by default entries outside
+ * sections are unindented
+ *************************************************************************)
+let entry_re = /[A-Za-z0-9_.-][A-Za-z0-9 _.-]*[A-Za-z0-9_.-]/
+
+let entry = IniFile.indented_entry entry_re sep comment
+
+(************************************************************************
+ * RECORD & TITLE
+ * We use IniFile.title_label because there can be entries
+ * outside of sections whose labels would conflict with section names
+ *************************************************************************)
+let title = IniFile.indented_title ( IniFile.record_re - ".anon" )
+let record = IniFile.record title entry
+
+let record_anon = [ label ".anon" . ( entry | empty )+ ]
+
+(************************************************************************
+ * LENS & FILTER
+ * There can be entries before any section
+ * IniFile.entry includes comment management, so we just pass entry to lns
+ *************************************************************************)
+let lns = record_anon? . record*
+
+let filter = (incl "/etc/rsyncd.conf")
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Rsyslog
+ Parses /etc/rsyslog.conf
+
+Author: Raphael Pinson <raphael.pinson@camptocamp.com>
+
+About: Reference
+ This lens tries to keep as close as possible to `man 5 rsyslog.conf`.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Configuration files
+ This lens applies to /etc/rsyslog.conf. See <filter>.
+
+About: Examples
+ The <Test_Rsyslog> file contains various examples and tests.
+*)
+module Rsyslog =
+
+autoload xfm
+
+let macro_rx = /[^,# \n\t][^#\n]*[^,# \n\t]|[^,# \n\t]/
+let macro = [ key /$[A-Za-z0-9]+/ . Sep.space . store macro_rx . Util.comment_or_eol ]
+
+let config_object_param = [ key /[A-Za-z.]+/ . Sep.equal . Quote.dquote
+ . store /[^"]+/ . Quote.dquote ]
+(* Inside config objects, we allow embedded comments; we don't surface them
+ * in the tree though *)
+let config_sep = del /[ \t]+|[ \t]*#.*\n[ \t]*/ " "
+
+let config_object =
+ [ key /action|global|input|module|parser|timezone|include/ .
+ Sep.lbracket .
+ config_object_param . ( config_sep . config_object_param )* .
+ Sep.rbracket . Util.comment_or_eol ]
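+
+(* As an illustrative sketch, >module(load="imuxsock")< would map to
+ { "module" { "load" = "imuxsock" } }. *)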
+
+(* View: users
+ Map :omusrmsg: and a list of users, or a single *
+*)
+let omusrmsg = Util.del_str ":omusrmsg:" .
+ Syslog.label_opt_list_or "omusrmsg" (store Syslog.word)
+ Syslog.comma "*"
+
+(* View: file_tmpl
+ File action with a specified template *)
+let file_tmpl = Syslog.file . [ label "template" . Util.del_str ";" . store Rx.word ]
+
+let dynamic = [ Util.del_str "?" . label "dynamic" . store Rx.word ]
+
+let namedpipe = Syslog.pipe . Sep.space . [ label "pipe" . store Syslog.file_r ]
+
+let action = Syslog.action | omusrmsg | file_tmpl | dynamic | namedpipe
+
+(* Cannot use Syslog.program because rsyslog does not support #! *)
+let program = [ label "program" . Syslog.bang .
+ ( Syslog.opt_plus | [ Build.xchgs "-" "reverse" ] ) .
+ Syslog.programs . Util.eol . Syslog.entries ]
+
+(* Cannot use Syslog.hostname because rsyslog does not support #+/- *)
+let hostname = [ label "hostname" .
+ ( Syslog.plus | [ Build.xchgs "-" "reverse" ] ) .
+ Syslog.hostnames . Util.eol . Syslog.entries ]
+
+(* View: actions *)
+let actions =
+ let prop_act = [ label "action" . action ]
+ in let act_sep = del /[ \t]*\n&[ \t]*/ "\n& "
+ in Build.opt_list prop_act act_sep
+
+(* View: entry
+ An entry contains selectors and an action
+*)
+let entry = [ label "entry" . Syslog.selectors . Syslog.sep_tab .
+ actions . Util.eol ]
+
+(* View: prop_filter
+ Parses property-based filters, which start with ":" and the property name *)
+let prop_filter =
+ let sep = Sep.comma . Util.del_opt_ws " "
+ in let prop_name = [ Util.del_str ":" . label "property" . store Rx.word ]
+ in let prop_oper = [ label "operation" . store /[A-Za-z!-]+/ ]
+ in let prop_val = [ label "value" . Quote.do_dquote (store /[^\n"]*/) ]
+ in [ label "filter" . prop_name . sep . prop_oper . sep . prop_val .
+ Sep.space . actions . Util.eol ]
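+
+(* As an illustrative sketch (labels per the views above),
+ >:msg, contains, "error" /var/log/errors.log< would map to
+ { "filter" { "property" = "msg" } { "operation" = "contains" }
+ { "value" = "error" }
+ { "action" { "file" = "/var/log/errors.log" } } }. *)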
+
+let entries = ( Syslog.empty | Util.comment | entry | macro | config_object | prop_filter )*
+
+let lns = entries . ( program | hostname )*
+
+let filter = incl "/etc/rsyslog.conf"
+ . incl "/etc/rsyslog.d/*"
+ . Util.stdexcl
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Rtadvd
+ Parses rtadvd configuration file
+
+Author: Matt Dainty <matt@bodgit-n-scarper.com>
+
+About: Reference
+ - man 5 rtadvd.conf
+
+Each line represents a record consisting of a number of ':'-separated fields,
+the first of which is the name or identifier for the record. The name can
+optionally be split by '|', with each subsequent value considered an alias
+of the first. Records can be split across multiple lines with '\'.
+
+*)
+
+module Rtadvd =
+ autoload xfm
+
+ let empty = Util.empty
+
+ (* field must not contain ':' unless quoted *)
+ let cfield = /[a-zA-Z0-9-]+(#?@|#[0-9]+|=("[^"]*"|[^:"]*))?/
+
+ let lns = ( empty | Getcap.comment | Getcap.record cfield )*
+
+ let filter = incl "/etc/rtadvd.conf"
+ . Util.stdexcl
+
+ let xfm = transform lns filter
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+(*
+Module: Rx
+ Generic regexps to build lenses
+
+Author: Raphael Pinson <raphink@gmail.com>
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+*)
+
+
+module Rx =
+
+(* Group: Spaces *)
+(* Variable: space
+ A mandatory space or tab *)
+let space = /[ \t]+/
+
+(* Variable: opt_space
+ An optional space or tab *)
+let opt_space = /[ \t]*/
+
+(* Variable: cl
+ A continued line with a backslash *)
+let cl = /[ \t]*\\\\\n[ \t]*/
+
+(* Variable: cl_or_space
+ A <cl> or a <space> *)
+let cl_or_space = cl | space
+
+(* Variable: cl_or_opt_space
+ A <cl> or a <opt_space> *)
+let cl_or_opt_space = cl | opt_space
+
+(* Group: General strings *)
+
+(* Variable: space_in
+ A string which does not start or end with a space *)
+let space_in = /[^ \r\t\n].*[^ \r\t\n]|[^ \t\n\r]/
+
+(* Variable: no_spaces
+ A string with no spaces *)
+let no_spaces = /[^ \t\r\n]+/
+
+(* Variable: word
+ An alphanumeric string *)
+let word = /[A-Za-z0-9_.-]+/
+
+(* Variable: integer
+ One or more digits *)
+let integer = /[0-9]+/
+
+(* Variable: relinteger
+ A relative <integer> *)
+let relinteger = /[-+]?[0-9]+/
+
+(* Variable: relinteger_noplus
+ A relative <integer>, without explicit plus sign *)
+let relinteger_noplus = /[-]?[0-9]+/
+
+(* Variable: decimal
+ A decimal value (using ',' or '.' as a separator) *)
+let decimal = /[0-9]+([.,][0-9]+)?/
+
+(* Variable: reldecimal
+ A relative <decimal> *)
+let reldecimal = /[+-]?[0-9]+([.,][0-9]+)?/
+
+(* Variable: byte
+ A byte (0 - 255) *)
+let byte = /25[0-5]?|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9]/
+
+(* Variable: hex
+ A hex value *)
+let hex = /0x[0-9a-fA-F]+/
+
+(* Variable: octal
+ An octal value *)
+let octal = /0[0-7]+/
+
+(* Variable: fspath
+ A filesystem path *)
+let fspath = /[^ \t\n]+/
+
+(* Group: All but... *)
+(* Variable: neg1
+ Anything but a space, a comma or a comment sign *)
+let neg1 = /[^,# \n\t]+/
+
+(*
+ * Group: IPs
+ * Cf. http://blog.mes-stats.fr/2008/10/09/regex-ipv4-et-ipv6/ (in fr)
+ *)
+
+(* Variable: ipv4 *)
+let ipv4 =
+ let dot = "." in
+ byte . dot . byte . dot . byte . dot . byte
+
+(* Variable: ipv6 *)
+let ipv6 =
+ /(([0-9A-Fa-f]{1,4}:){7}[0-9A-Fa-f]{1,4})/
+ | /(([0-9A-Fa-f]{1,4}:){6}:[0-9A-Fa-f]{1,4})/
+ | /(([0-9A-Fa-f]{1,4}:){5}:([0-9A-Fa-f]{1,4}:)?[0-9A-Fa-f]{1,4})/
+ | /(([0-9A-Fa-f]{1,4}:){4}:([0-9A-Fa-f]{1,4}:){0,2}[0-9A-Fa-f]{1,4})/
+ | /(([0-9A-Fa-f]{1,4}:){3}:([0-9A-Fa-f]{1,4}:){0,3}[0-9A-Fa-f]{1,4})/
+ | /(([0-9A-Fa-f]{1,4}:){2}:([0-9A-Fa-f]{1,4}:){0,4}[0-9A-Fa-f]{1,4})/
+ | ( /([0-9A-Fa-f]{1,4}:){6}/
+ . /((((25[0-5])|(1[0-9]{2})|(2[0-4][0-9])|([0-9]{1,2})))\.){3}/
+ . /(((25[0-5])|(1[0-9]{2})|(2[0-4][0-9])|([0-9]{1,2})))/
+ )
+ | ( /([0-9A-Fa-f]{1,4}:){0,5}:/
+ . /((((25[0-5])|(1[0-9]{2})|(2[0-4][0-9])|([0-9]{1,2})))\.){3}/
+ . /(((25[0-5])|(1[0-9]{2})|(2[0-4][0-9])|([0-9]{1,2})))/
+ )
+ | ( /::([0-9A-Fa-f]{1,4}:){0,5}/
+ . /((((25[0-5])|(1[0-9]{2})|(2[0-4][0-9])|([0-9]{1,2})))\.){3}/
+ . /(((25[0-5])|(1[0-9]{2})|(2[0-4][0-9])|([0-9]{1,2})))/
+ )
+ | ( /[0-9A-Fa-f]{1,4}::([0-9A-Fa-f]{1,4}:){0,5}/
+ . /[0-9A-Fa-f]{1,4}/
+ )
+ | /(::([0-9A-Fa-f]{1,4}:){0,6}[0-9A-Fa-f]{1,4})/
+ | /(([0-9A-Fa-f]{1,4}:){1,7}:)/
+
+
+(* Variable: ip
+ An <ipv4> or <ipv6> *)
+let ip = ipv4 | ipv6
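+
+(* For illustration (not part of this module), a lens matching an address
+   field would typically use these regexps as in:
+
+     let address = [ label "address" . store Rx.ip ]
+*)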
+
+
+(* Variable: hostname
+ A valid RFC 1123 hostname *)
+let hostname = /(([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\-]*[a-zA-Z0-9])\.)*([A-Za-z0-9]|[A-Za-z0-9][A-Za-z0-9\-]*[A-Za-z0-9])/
+
+(*
+ * Variable: device_name
+ * A Linux device name like eth0 or i2c-0. Might still be too restrictive
+ *)
+
+let device_name = /[a-zA-Z0-9_?.+:!-]+/
+
+(*
+ * Variable: email_addr
+ * To be refined
+ *)
+let email_addr = /[A-Za-z0-9_+.-]+@[A-Za-z0-9_.-]+/
+
+(*
+ * Variable: iso_8601
+ * ISO 8601 date time format
+ *)
+let year = /[0-9]{4}/
+let relyear = /[-+]?/ . year
+let monthday = /W?[0-9]{2}(-?[0-9]{1,2})?/
+let time =
+ let sep = /[T \t]/
+ in let digits = /[0-9]{2}(:?[0-9]{2}(:?[0-9]{2})?)?/
+ in let precis = /[.,][0-9]+/
+ in let zone = "Z" | /[-+]?[0-9]{2}(:?[0-9]{2})?/
+ in sep . digits . precis? . zone?
+let iso_8601 = year . ("-"? . monthday . time?)?
+
+(* Variable: url_3986
+ A valid RFC 3986 url - See Appendix B *)
+let url_3986 = /(([^:\/?#]+):)?(\/\/([^\/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?/
--- /dev/null
+(* Samba module for Augeas
+ Author: Free Ekanayaka <free@64studio.com>
+
+ Reference: man smb.conf(5)
+
+*)
+
+
+module Samba =
+ autoload xfm
+
+(************************************************************************
+ * INI File settings
+ *************************************************************************)
+
+let comment = IniFile.comment IniFile.comment_re IniFile.comment_default
+let sep = del /[ \t]*=/ " ="
+let indent = del /[ \t]*/ " "
+
+(* Import useful INI File primitives *)
+let eol = IniFile.eol
+let empty = IniFile.empty
+let sto_to_comment
+ = Util.del_opt_ws " "
+ . store /[^;# \t\r\n][^;#\r\n]*[^;# \t\r\n]|[^;# \t\r\n]/
+
+(************************************************************************
+ * ENTRY
+ * smb.conf allows indented entries
+ *************************************************************************)
+
+let entry_re = /[A-Za-z0-9_.-][A-Za-z0-9 _.:\*-]*[A-Za-z0-9_.\*-]/
+let entry = let kw = entry_re in
+ [ indent
+ . key kw
+ . sep
+ . sto_to_comment?
+ . (comment|eol) ]
+ | comment
+
+(************************************************************************
+ * TITLE
+ *************************************************************************)
+
+let title = IniFile.title_label "target" IniFile.record_label_re
+let record = IniFile.record title entry
+
+(************************************************************************
+ * LENS & FILTER
+ *************************************************************************)
+
+let lns = IniFile.lns record comment
+
+let filter = (incl "/etc/samba/smb.conf")
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Schroot
+ Parses /etc/schroot/schroot.conf
+
+Author: Raphael Pinson <raphink@gmail.com>
+
+About: Reference
+ This lens tries to keep as close as possible to `man 5 schroot.conf`.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
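+
+  A sample augtool session until then (illustrative; entry names follow
+  schroot.conf(5), values are made up):
+
+  > set "/files/etc/schroot/schroot.conf/sid/description" "Debian sid"
+  > set "/files/etc/schroot/schroot.conf/sid/type" "directory"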
+
+About: Configuration files
+ This lens applies to /etc/schroot/schroot.conf. See <filter>.
+*)
+
+
+module Schroot =
+autoload xfm
+
+
+(************************************************************************
+ * Group: USEFUL PRIMITIVES
+ *************************************************************************)
+
+(* View: comment
+ An <IniFile.comment> entry *)
+let comment = IniFile.comment "#" "#"
+
+(* View: sep
+ An <IniFile.sep> entry *)
+let sep = IniFile.sep "=" "="
+
+
+(************************************************************************
+ * Group: ENTRIES
+ *************************************************************************)
+
+(* View: description
+ Descriptions are special entries, which can have an optional lang parameter *)
+let description =
+ let lang = [ Util.del_str "[" . label "lang"
+ . store IniFile.entry_re . Util.del_str "]" ]
+ in IniFile.entry_generic_nocomment (key "description" . lang?) sep "#" comment
+
+(* View: entry
+ An <IniFile.entry>, or <description> *)
+let entry = IniFile.entry (IniFile.entry_re - "description") sep comment
+ | description
+
+(* View: title
+ An <IniFile.title> *)
+let title = IniFile.title IniFile.record_re
+
+(* View: record
+ An <IniFile.record> *)
+let record = IniFile.record title entry
+
+(* View: lns
+ An <IniFile.lns> *)
+let lns = IniFile.lns record comment
+
+(* View: filter *)
+let filter = (incl "/etc/schroot/schroot.conf")
+
+let xfm = transform lns filter
--- /dev/null
+(* Parses entries in /etc/securetty
+
+ Author: Simon Josi <josi@yokto.net>
+*)
+module Securetty =
+ autoload xfm
+
+ let word = /[^ \t\n#]+/
+ let eol = Util.eol
+ let empty = Util.empty
+ let comment = Util.comment
+ let comment_or_eol = Util.comment_or_eol
+
+ let record = [ seq "securetty" . store word . comment_or_eol ]
+ let lns = ( empty | comment | record )*
+
+ let filter = (incl "/etc/securetty")
+ let xfm = transform lns filter
--- /dev/null
+(*
+Module: Semanage
+ Parses /etc/selinux/semanage.conf
+
+Author:
+ Pino Toscano <ptoscano@redhat.com>
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Configuration files
+ This lens applies to /etc/selinux/semanage.conf. See <filter>.
+
+About: Examples
+ The <Test_Semanage> file contains various examples and tests.
+*)
+
+module Semanage =
+ autoload xfm
+
+let comment = IniFile.comment "#" "#"
+let sep = IniFile.sep "=" "="
+let empty = IniFile.empty
+let eol = IniFile.eol
+
+let list_keys = "ignoredirs"
+let scl = del ";" ";"
+let fspath = /[^ \t\n;#]+/ (* Rx.fspath without ; or # *)
+
+let entry = IniFile.entry_list list_keys sep fspath scl comment
+ | IniFile.entry (IniFile.entry_re - list_keys) sep comment
+ | empty
+
+let title = IniFile.title_label "@group" (IniFile.record_re - /^end$/)
+let record = [ title . entry+ . Util.del_str "[end]" . eol ]
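+
+(* For illustration (block layout follows semanage.conf(5), values are
+   made up), a block such as
+
+     [verify kernel]
+     path = /usr/bin/true
+     [end]
+
+   maps to an "@group" node with value "verify kernel" holding the
+   entries as children. *)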
+
+let lns = (entry | record)*
+
+(* Variable: filter *)
+let filter = incl "/etc/selinux/semanage.conf"
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Sep
+ Generic separators to build lenses
+
+Author: Raphael Pinson <raphink@gmail.com>
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+*)
+
+
+module Sep =
+
+(* Variable: colon *)
+let colon = Util.del_str ":"
+
+(* Variable: semicolon *)
+let semicolon = Util.del_str ";"
+
+(* Variable: comma *)
+let comma = Util.del_str ","
+
+(* Variable: equal *)
+let equal = Util.del_str "="
+
+(* Variable: space_equal *)
+let space_equal = Util.delim "="
+
+(* Variable: space
+ Deletes a <Rx.space> and defaults to a single space *)
+let space = del Rx.space " "
+
+(* Variable: tab
+ Deletes a <Rx.space> and defaults to a tab *)
+let tab = del Rx.space "\t"
+
+(* Variable: opt_space
+ Deletes a <Rx.opt_space> and defaults to an empty string *)
+let opt_space = del Rx.opt_space ""
+
+(* Variable: opt_tab
+ Deletes a <Rx.opt_space> and defaults to a tab *)
+let opt_tab = del Rx.opt_space "\t"
+
+(* Variable: cl_or_space
+ Deletes a <Rx.cl_or_space> and defaults to a single space *)
+let cl_or_space = del Rx.cl_or_space " "
+
+(* Variable: cl_or_opt_space
+ Deletes a <Rx.cl_or_opt_space> and defaults to a single space *)
+let cl_or_opt_space = del Rx.cl_or_opt_space " "
+
+(* Variable: lbracket *)
+let lbracket = Util.del_str "("
+
+(* Variable: rbracket *)
+let rbracket = Util.del_str ")"
--- /dev/null
+(*
+Module: Services
+ Parses /etc/services
+
+Author: Raphael Pinson <raphink@gmail.com>
+
+About: Reference
+ This lens tries to keep as close as possible to 'man services'.
+
+The definitions from 'man services' are included as comments for reference
+throughout the file. More information can be found in the manual.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ Sample usage of this lens in augtool
+
+ * Get the name of the service running on port 22 with protocol tcp
+ > match "/files/etc/services/service-name[port = '22'][protocol = 'tcp']"
+ * Remove the tcp entry for "domain" service
+ > rm "/files/etc/services/service-name[. = 'domain'][protocol = 'tcp']"
+ * Add a tcp service named "myservice" on port 55234
+ > ins service-name after /files/etc/services/service-name[last()]
+ > set /files/etc/services/service-name[last()] "myservice"
+ > set "/files/etc/services/service-name[. = 'myservice']/port" "55234"
+ > set "/files/etc/services/service-name[. = 'myservice']/protocol" "tcp"
+
+About: Configuration files
+ This lens applies to /etc/services. See <filter>.
+*)
+
+module Services =
+ autoload xfm
+
+
+(************************************************************************
+ * Group: USEFUL PRIMITIVES
+ *************************************************************************)
+
+(* Group: Generic primitives *)
+
+(* Variable: eol *)
+let eol = del /[ \t]*(#)?[ \t]*\n/ "\n"
+let indent = Util.indent
+let comment = Util.comment
+let comment_or_eol = Util.comment_or_eol
+let empty = Util.empty
+let protocol_re = /[a-zA-Z]+/
+let word_re = /[a-zA-Z0-9_.+*\/:-]+/
+let num_re = /[0-9]+/
+
+(* Group: Separators *)
+let sep_spc = Util.del_ws_spc
+
+
+(************************************************************************
+ * Group: LENSES
+ *************************************************************************)
+
+(* View: port *)
+let port = [ label "port" . store num_re ]
+
+(* View: port_range *)
+let port_range = [ label "start" . store num_re ]
+ . Util.del_str "-"
+ . [ label "end" . store num_re ]
+
+(* View: protocol *)
+let protocol = [ label "protocol" . store protocol_re ]
+
+(* View: alias *)
+let alias = [ label "alias" . store word_re ]
+
+(*
+ * View: record
+ * A standard /etc/services record
+ * TODO: make sure a space is added before a comment on new nodes
+ *)
+let record = [ label "service-name" . store word_re
+ . sep_spc . (port | port_range)
+ . del "/" "/" . protocol . ( sep_spc . alias )*
+ . comment_or_eol ]
+
+(* View: lns
+ The services lens is either <empty>, <comment> or <record> *)
+let lns = ( empty | comment | record )*
+
+
+(* View: filter *)
+let filter = (incl "/etc/services")
+
+let xfm = transform lns filter
--- /dev/null
+(*
+ Module: Shadow
+ Parses /etc/shadow
+
+ Author: Lorenzo M. Catucci <catucci@ccd.uniroma2.it>
+
+ Original Author: Free Ekanayaka <free@64studio.com>
+
+ About: Reference
+
+ - man 5 shadow
+ - man 3 getspnam
+
+ About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+ About:
+
+ Each line in the shadow file represents the additional shadow-defined attributes
+ for the corresponding user, as defined in the passwd file.
+
+*)
+
+module Shadow =
+
+ autoload xfm
+
+(************************************************************************
+ * USEFUL PRIMITIVES
+ *************************************************************************)
+
+let eol = Util.eol
+let comment = Util.comment
+let empty = Util.empty
+let dels = Util.del_str
+
+let colon = Sep.colon
+
+let word = Rx.word
+let integer = Rx.integer
+
+let sto_to_col = Passwd.sto_to_col
+let sto_to_eol = Passwd.sto_to_eol
+
+(************************************************************************
+ * Group: ENTRIES
+ *************************************************************************)
+(* Common for entry and nisdefault *)
+let common = [ label "lastchange_date" . store integer? . colon ]
+ . [ label "minage_days" . store integer? . colon ]
+ . [ label "maxage_days" . store integer? . colon ]
+ . [ label "warn_days" . store integer? . colon ]
+ . [ label "inactive_days" . store integer? . colon ]
+ . [ label "expire_date" . store integer? . colon ]
+ . [ label "flag" . store integer? ]
+
+(* View: entry *)
+let entry = [ key word
+ . colon
+ . [ label "password" . sto_to_col? . colon ]
+ . common
+ . eol ]
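+
+(* For illustration (values are made up), a line such as
+
+     root:!:15555:0:99999:7:::
+
+   maps to a "root" node with "password", "lastchange_date", "minage_days",
+   "maxage_days", "warn_days", "inactive_days", "expire_date" and "flag"
+   children. *)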
+
+let nisdefault =
+ let overrides =
+ colon
+ . [ label "password" . store word? . colon ]
+ . common in
+ [ dels "+" . label "@nisdefault" . overrides? . eol ]
+
+(************************************************************************
+ * LENS
+ *************************************************************************)
+
+let lns = (comment|empty|entry|nisdefault) *
+
+let filter
+ = incl "/etc/shadow"
+ . Util.stdexcl
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Shells
+ Parses /etc/shells
+
+Author: Raphael Pinson <raphink@gmail.com>
+
+About: Reference
+ This lens tries to keep as close as possible to `man 5 shells`.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
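+
+  A sample augtool command until then (illustrative; entries are numbered
+  nodes whose value is the shell path):
+
+  > match /files/etc/shells/*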
+
+About: Configuration files
+ This lens applies to /etc/shells. See <filter>.
+*)
+
+
+module Shells =
+ autoload xfm
+
+let empty = Util.empty
+let comment = Util.comment
+let comment_or_eol = Util.comment_or_eol
+let shell = [ seq "shell" . store /[^# \t\n]+/ . comment_or_eol ]
+
+(* View: lns
+ The shells lens
+*)
+let lns = ( empty | comment | shell )*
+
+(* Variable: filter *)
+let filter = incl "/etc/shells"
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Shellvars
+ Generic lens for shell-script config files like the ones found
+ in /etc/sysconfig
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
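+
+  A sample augtool session until then (illustrative; assumes a KEY=value
+  file such as /etc/sysconfig/network on Red Hat based systems):
+
+  > set /files/etc/sysconfig/network/HOSTNAME myhost.example.com
+  > save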
+*)
+
+module Shellvars =
+ autoload xfm
+
+ (* Delete a blank line, rather than mapping it *)
+ let del_empty = del (Util.empty_generic_re . "\n") "\n"
+
+ let empty = Util.empty
+ let empty_part_re = Util.empty_generic_re . /\n+/
+ let eol = del (/[ \t]+|[ \t]*[;\n]/ . empty_part_re*) "\n"
+ let semicol_eol = del (/[ \t]*[;\n]/ . empty_part_re*) "\n"
+ let brace_eol = del /[ \t\n]+/ "\n"
+
+ let key_re = /[A-Za-z0-9_][-A-Za-z0-9_]*(\[[0-9A-Za-z_,]+\])?/ - ("unset" | "export")
+ let matching_re = "${!" . key_re . /[\*@]\}/
+ let eq = Util.del_str "="
+
+ let eol_for_comment = del /([ \t]*\n)([ \t]*(#[ \t]*)?\n)*/ "\n"
+ let comment = Util.comment_generic_seteol /[ \t]*#[ \t]*/ " # " eol_for_comment
+ (* comment_eol in shell MUST begin with a space *)
+ let comment_eol = Util.comment_generic_seteol /[ \t]+#[ \t]*/ " # " eol_for_comment
+ let comment_or_eol = comment_eol | semicol_eol
+
+ let xchgs = Build.xchgs
+ let semicol = del /;?/ ""
+
+ let char = /[^`;()'"&|\n\\# \t]#*|\\\\./
+ let dquot =
+ let char = /[^"\\]|\\\\./ | Rx.cl
+ in "\"" . char* . "\"" (* " Emacs, relax *)
+ let squot = /'[^']*'/
+ let bquot = /`[^`\n]+`/
+ (* dbquot doesn't take spaces or semicolons *)
+ let dbquot = /``[^` \t\n;]+``/
+ let dollar_assign = /\$\([^\(\)#\n]*\)/
+ let dollar_arithm = /\$\(\([^\)#\n]*\)\)/
+
+ let anyquot = (char|dquot|squot|dollar_assign|dollar_arithm)+ | bquot | dbquot
+ let sto_to_semicol = store (anyquot . (Rx.cl_or_space . anyquot)*)
+
+ (* Array values of the form '(val1 val2 val3)'. We do not handle empty *)
+ (* arrays here because of typechecking headaches. Instead, they are *)
+ (* treated as a simple value *)
+ let array =
+ let array_value = store anyquot in
+ del /\([ \t]*/ "(" . counter "values" .
+ [ seq "values" . array_value ] .
+ [ del /[ \t\n]+/ " " . seq "values" . array_value ] *
+ . del /[ \t]*\)/ ")"
+
+ (* Treat an empty list () as a value '()'; that's not quite correct *)
+ (* but fairly close. *)
+ let simple_value =
+ let empty_array = /\([ \t]*\)/ in
+ store (anyquot | empty_array)?
+
+ let export = [ key "export" . Util.del_ws_spc ]
+ let kv = Util.indent . export? . key key_re
+ . eq . (simple_value | array)
+
+ let var_action (name:string) =
+ Util.indent . del name name . Util.del_ws_spc
+ . label ("@" . name) . counter "var_action"
+ . Build.opt_list [ seq "var_action" . store (key_re | matching_re) ] Util.del_ws_spc
+
+ let unset = var_action "unset"
+ let bare_export = var_action "export"
+
+ let source =
+ Util.indent
+ . del /\.|source/ "." . label ".source"
+ . Util.del_ws_spc . store /[^;=# \t\n]+/
+
+ let shell_builtin_cmds = "ulimit" | "shift" | "exit"
+
+ let eval =
+ Util.indent . Util.del_str "eval" . Util.del_ws_spc
+ . label "@eval" . store anyquot
+
+ let alias =
+ Util.indent . Util.del_str "alias" . Util.del_ws_spc
+ . label "@alias" . store key_re . eq
+ . [ label "value" . store anyquot ]
+
+ let builtin =
+ Util.indent . label "@builtin"
+ . store shell_builtin_cmds
+ . (Sep.cl_or_space
+ . [ label "args" . sto_to_semicol ])?
+
+ let keyword (kw:string) = Util.indent . Util.del_str kw
+ let keyword_label (kw:string) (lbl:string) = keyword kw . label lbl
+
+ let return =
+ Util.indent . label "@return"
+ . Util.del_str "return"
+ . ( Util.del_ws_spc . store Rx.integer )?
+
+ let action (operator:string) (lbl:string) (sto:lens) =
+ let sp = Rx.cl_or_opt_space | /[ \t\n]+/
+ in [ del (sp . operator . sp) (" " . operator . " ")
+ . label ("@".lbl) . sto ]
+
+ let action_pipe = action "|" "pipe"
+ let action_and = action "&&" "and"
+ let action_or = action "||" "or"
+
+ let condition =
+ let cond (start:string) (end:string) = [ label "type" . store start ]
+ . Util.del_ws_spc . sto_to_semicol
+ . Util.del_ws_spc . Util.del_str end
+ . ( action_and sto_to_semicol | action_or sto_to_semicol )*
+ in Util.indent . label "@condition" . (cond "[" "]" | cond "[[" "]]")
+
+ (* Entry types *)
+ let entry_eol_item (item:lens) = [ item . comment_or_eol ]
+ let entry_item (item:lens) = [ item ]
+
+ let entry_eol_nocommand =
+ entry_eol_item source
+ | entry_eol_item kv
+ | entry_eol_item unset
+ | entry_eol_item bare_export
+ | entry_eol_item builtin
+ | entry_eol_item return
+ | entry_eol_item condition
+ | entry_eol_item eval
+ | entry_eol_item alias
+
+ let entry_noeol_nocommand =
+ entry_item source
+ | entry_item kv
+ | entry_item unset
+ | entry_item bare_export
+ | entry_item builtin
+ | entry_item return
+ | entry_item condition
+ | entry_item eval
+ | entry_item alias
+
+ (* Command *)
+ let rec command =
+ let env = [ key key_re . eq . store anyquot . Sep.cl_or_space ]
+ in let reserved_key = /exit|shift|return|ulimit|unset|export|source|\.|if|for|select|while|until|then|else|fi|done|case|eval|alias/
+ in let word = /\$?[-A-Za-z0-9_.\/]+/
+ in let entry_eol = entry_eol_nocommand | entry_eol_item command
+ in let entry_noeol = entry_noeol_nocommand | entry_item command
+ in let entry = entry_eol | entry_noeol
+ in let pipe = action_pipe (entry_eol_item command | entry_item command)
+ in let and = action_and entry
+ in let or = action_or entry
+ in Util.indent . label "@command" . env* . store (word - reserved_key)
+ . [ Sep.cl_or_space . label "@arg" . sto_to_semicol]?
+ . ( pipe | and | or )?
+
+ let entry_eol = entry_eol_nocommand
+ | entry_eol_item command
+
+ let entry_noeol = entry_noeol_nocommand
+ | entry_item command
+
+(************************************************************************
+ * Group: CONDITIONALS AND LOOPS
+ *************************************************************************)
+
+ let generic_cond_start (start_kw:string) (lbl:string)
+ (then_kw:string) (contents:lens) =
+ keyword_label start_kw lbl . Sep.space
+ . sto_to_semicol
+ . ( action_and sto_to_semicol | action_or sto_to_semicol )*
+ . semicol_eol
+ . keyword then_kw . eol
+ . contents
+
+ let generic_cond (start_kw:string) (lbl:string)
+ (then_kw:string) (contents:lens) (end_kw:string) =
+ [ generic_cond_start start_kw lbl then_kw contents
+ . keyword end_kw . comment_or_eol ]
+
+ let cond_if (entry:lens) =
+ let elif = [ generic_cond_start "elif" "@elif" "then" entry+ ] in
+ let else = [ keyword_label "else" "@else" . eol . entry+ ] in
+ generic_cond "if" "@if" "then" (entry+ . elif* . else?) "fi"
+
+ let loop_for (entry:lens) =
+ generic_cond "for" "@for" "do" entry+ "done"
+
+ let loop_while (entry:lens) =
+ generic_cond "while" "@while" "do" entry+ "done"
+
+ let loop_until (entry:lens) =
+ generic_cond "until" "@until" "do" entry+ "done"
+
+ let loop_select (entry:lens) =
+ generic_cond "select" "@select" "do" entry+ "done"
+
+ let case (entry:lens) (entry_noeol:lens) =
+ let pattern = [ label "@pattern" . sto_to_semicol . Sep.opt_space ]
+ in let case_entry = [ label "@case_entry"
+ . Util.indent . pattern
+ . (Util.del_str "|" . Sep.opt_space . pattern)*
+ . Util.del_str ")" . eol
+ . entry* . entry_noeol?
+ . Util.indent . Util.del_str ";;" . eol ] in
+ [ keyword_label "case" "@case" . Sep.space
+ . store (char+ | ("\"" . char+ . "\""))
+ . del /[ \t\n]+/ " " . Util.del_str "in" . eol
+ . (empty* . comment* . case_entry)*
+ . empty* . comment*
+ . keyword "esac" . comment_or_eol ]
+
+ let subshell (entry:lens) =
+ [ Util.indent . label "@subshell"
+ . Util.del_str "{" . brace_eol
+ . entry+
+ . Util.indent . Util.del_str "}" . eol ]
+
+ let function (entry:lens) (start_kw:string) (end_kw:string) =
+ [ Util.indent . label "@function"
+ . del /(function[ \t]+)?/ ""
+ . store Rx.word . del /[ \t]*\(\)/ "()"
+ . (comment_eol|brace_eol) . Util.del_str start_kw . brace_eol
+ . entry+
+ . Util.indent . Util.del_str end_kw . eol ]
+
+ let rec rec_entry =
+ let entry = comment | entry_eol | rec_entry in
+ cond_if entry
+ | loop_for entry
+ | loop_select entry
+ | loop_while entry
+ | loop_until entry
+ | case entry entry_noeol
+ | function entry "{" "}"
+ | function entry "(" ")"
+ | subshell entry
+
+ let lns_norec = del_empty* . (comment | entry_eol) *
+
+ let lns = del_empty* . (comment | entry_eol | rec_entry) *
+
+ let sc_incl (n:string) = (incl ("/etc/sysconfig/" . n))
+ let sc_excl (n:string) = (excl ("/etc/sysconfig/" . n))
+
+ let filter_sysconfig =
+ sc_incl "*" .
+ sc_excl "anaconda" .
+ sc_excl "bootloader" .
+ sc_excl "hw-uuid" .
+ sc_excl "hwconf" .
+ sc_excl "ip*tables" .
+ sc_excl "ip*tables.save" .
+ sc_excl "kernel" .
+ sc_excl "*.pub" .
+ sc_excl "sysstat.ioconf" .
+ sc_excl "system-config-firewall" .
+ sc_excl "system-config-securitylevel" .
+ sc_incl "network/config" .
+ sc_incl "network/dhcp" .
+ sc_incl "network/dhcp6r" .
+ sc_incl "network/dhcp6s" .
+ sc_incl "network/ifcfg-*" .
+ sc_incl "network/if-down.d/*" .
+ sc_incl "network/ifroute-*" .
+ sc_incl "network/if-up.d/*" .
+ sc_excl "network/if-up.d/SuSEfirewall2" .
+ sc_incl "network/providers/*" .
+ sc_excl "network-scripts" .
+ sc_incl "network-scripts/ifcfg-*" .
+ sc_excl "rhn" .
+ sc_incl "rhn/allowed-actions/*" .
+ sc_excl "rhn/allowed-actions/script" .
+ sc_incl "rhn/allowed-actions/script/*" .
+ sc_incl "rhn/rhnsd" .
+ sc_excl "SuSEfirewall2.d" .
+ sc_incl "SuSEfirewall2.d/cobbler" .
+ sc_incl "SuSEfirewall2.d/services/*" .
+ sc_excl "SuSEfirewall2.d/services/TEMPLATE" .
+ sc_excl "*.systemd"
+
+ let filter_default = incl "/etc/default/*"
+ . excl "/etc/default/grub_installdevice*"
+ . excl "/etc/default/rmt"
+ . excl "/etc/default/star"
+ . excl "/etc/default/whoopsie"
+ . incl "/etc/profile"
+ . incl "/etc/profile.d/*"
+ . excl "/etc/profile.d/*.csh"
+ . excl "/etc/profile.d/*.tcsh"
+ . excl "/etc/profile.d/csh.local"
+ let filter_misc = incl "/etc/arno-iptables-firewall/debconf.cfg"
+ . incl "/etc/conf.d/*"
+ . incl "/etc/cron-apt/config"
+ . incl "/etc/environment"
+ . incl "/etc/firewalld/firewalld.conf"
+ . incl "/etc/blkid.conf"
+ . incl "/etc/adduser.conf"
+ . incl "/etc/cowpoke.conf"
+ . incl "/etc/cvs-cron.conf"
+ . incl "/etc/cvs-pserver.conf"
+ . incl "/etc/devscripts.conf"
+ . incl "/etc/kamailio/kamctlrc"
+ . incl "/etc/lbu/lbu.conf"
+ . incl "/etc/lintianrc"
+ . incl "/etc/lsb-release"
+ . incl "/etc/os-release"
+ . incl "/etc/periodic.conf"
+ . incl "/etc/popularity-contest.conf"
+ . incl "/etc/rc.conf"
+ . incl "/etc/rc.conf.d/*"
+ . incl "/etc/rc.conf.local"
+ . incl "/etc/selinux/config"
+ . incl "/etc/ucf.conf"
+ . incl "/etc/locale.conf"
+ . incl "/etc/vconsole.conf"
+ . incl "/etc/byobu/*"
+
+ let filter = filter_sysconfig
+ . filter_default
+ . filter_misc
+ . Util.stdexcl
+
+ let xfm = transform lns filter
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+(* Generic lens for shell-script config files like the ones found *)
+(* in /etc/sysconfig, where a string needs to be split into *)
+(* single words. *)
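+(* For example (illustrative), a line such as *)
+(*   MODULES_LOADED_ON_BOOT="mod1 mod2" *)
+(* yields a "quote" node recording the quoting style and *)
+(* one "value" node per word. *)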
+module Shellvars_list =
+ autoload xfm
+
+ let eol = Util.eol
+
+ let key_re = /[A-Za-z0-9_]+/
+ let eq = Util.del_str "="
+ let comment = Util.comment
+ let comment_or_eol = Util.comment_or_eol
+ let empty = Util.empty
+ let indent = Util.indent
+
+ let sqword = /[^ '\t\n]+/
+ let dqword = /([^ "\\\t\n]|\\\\.)+/
+ let uqword = /([^ `"'\\\t\n]|\\\\.)+/
+ let bqword = /`[^`\n]*`/
+ let space_or_nl = /[ \t\n]+/
+ let space_or_cl = space_or_nl | Rx.cl
+
+ (* lists values of the form ... val1 val2 val3 ... *)
+ let list (word:regexp) (sep:regexp) =
+ let list_value = store word in
+ indent .
+ [ label "value" . list_value ] .
+ [ del sep " " . label "value" . list_value ]* . indent
+
+
+ (* handle single quoted lists *)
+ let squote_arr = [ label "quote" . store /'/ ]
+ . (list sqword space_or_nl)? . del /'/ "'"
+
+ (* similarly handle double quoted lists *)
+ let dquote_arr = [ label "quote" . store /"/ ]
+ . (list dqword space_or_cl)? . del /"/ "\""
+
+ (* handle unquoted single value *)
+ let unquot_val = [ label "quote" . store "" ]
+ . [ label "value" . store (uqword+ | bqword)]?
+
+
+ (* lens for key value pairs *)
+ let kv = [ key key_re . eq .
+ ( (squote_arr | dquote_arr) . comment_or_eol
+ | unquot_val . eol )
+ ]
+
+ let lns = ( comment | empty | kv )*
+
+ let filter = incl "/etc/sysconfig/bootloader"
+ . incl "/etc/sysconfig/kernel"
+
+ let xfm = transform lns filter
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+(*
+Module: Simplelines
+ Parses simple-line configuration files
+
+Author: Raphael Pinson <raphink@gmail.com>
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
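+
+  A sample augtool command until then (illustrative; entries are numbered
+  nodes whose value is the whole line):
+
+  > match /files/etc/cron.allow/*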
+
+About: Configuration files
+ See <filter>.
+
+About: Examples
+ The <Test_Simplelines> file contains various examples and tests.
+*)
+
+module Simplelines =
+
+autoload xfm
+
+(* View: line
+ A simple, uncommented, line *)
+let line =
+ let line_re = /[^# \t\n].*[^ \t\n]|[^# \t\n]/
+ in [ seq "line" . Util.indent
+ . store line_re . Util.eol ]
+
+(* View: lns
+ The simplelines lens *)
+let lns = (Util.empty | Util.comment | line)*
+
+(* Variable: filter *)
+let filter = incl "/etc/at.allow"
+ . incl "/etc/at.deny"
+ . incl "/etc/cron.allow"
+ . incl "/etc/cron.deny"
+ . incl "/etc/cron.d/at.allow"
+ . incl "/etc/cron.d/at.deny"
+ . incl "/etc/cron.d/cron.allow"
+ . incl "/etc/cron.d/cron.deny"
+ . incl "/etc/default/grub_installdevice"
+ . incl "/etc/pam.d/allow.pamlist"
+ . incl "/etc/hostname.*"
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Simplevars
+ Parses simple key = value configuration files
+
+Author: Raphael Pinson <raphink@gmail.com>
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
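+
+  A sample augtool session until then (illustrative, using wgetrc
+  settings as an example):
+
+  > set /files/etc/wgetrc/quota inf
+  > save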
+
+About: Examples
+ The <Test_Simplevars> file contains various examples and tests.
+*)
+
+module Simplevars =
+
+autoload xfm
+
+(* Variable: to_comment_re
+ The regexp to match the value *)
+let to_comment_re =
+ let to_comment_squote = /'[^\n']*'/
+ in let to_comment_dquote = /"[^\n"]*"/
+ in let to_comment_noquote = /[^\n \t'"#][^\n#]*[^\n \t#]|[^\n \t'"#]/
+ in to_comment_squote | to_comment_dquote | to_comment_noquote
+
+(* View: entry *)
+let entry =
+ let some_value = Sep.space_equal . store to_comment_re
+ (* Avoid ambiguity in tree by making a subtree here *)
+ in let empty_value = [del /[ \t]*=/ "="] . store ""
+ in [ Util.indent . key Rx.word
+ . (some_value? | empty_value)
+ . (Util.eol | Util.comment_eol) ]
+
+(* View: lns *)
+let lns = (Util.empty | Util.comment | entry)*
+
+(* Variable: filter *)
+let filter = incl "/etc/kernel-img.conf"
+ . incl "/etc/kerneloops.conf"
+ . incl "/etc/wgetrc"
+ . incl "/etc/zabbix/*.conf"
+ . incl "/etc/audit/auditd.conf"
+ . incl "/etc/mixerctl.conf"
+ . incl "/etc/wsconsctlctl.conf"
+ . incl "/etc/ocsinventory/ocsinventory-agent.cfg"
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Sip_Conf
+ Parses /etc/asterisk/sip.conf
+
+Author: Rob Tucker <rtucker@mozilla.com>
+
+About: Reference
+ This lens parses sip.conf, with support for the template structure
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
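+
+  A sample augtool command until then (illustrative; sections map to
+  "title" nodes, templates to "@is_template"/"@use_template" children):
+
+  > match "/files/etc/asterisk/sip.conf/title[. = 'general']/*"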
+
+About: Configuration files
+ This lens applies to /etc/asterisk/sip.conf. See <filter>.
+*)
+
+module Sip_Conf =
+ autoload xfm
+
+(************************************************************************
+ * INI File settings
+ *************************************************************************)
+
+let comment = IniFile.comment IniFile.comment_re IniFile.comment_default
+let sep = IniFile.sep IniFile.sep_re IniFile.sep_default
+let empty = IniFile.empty
+let eol = IniFile.eol
+let comment_or_eol = comment | eol
+
+
+let entry = IniFile.indented_entry IniFile.entry_re sep comment
+
+let text_re = Rx.word
+let tmpl =
+ let is_tmpl = [ label "@is_template" . Util.del_str "!" ]
+ in let use_tmpl = [ label "@use_template" . store Rx.word ]
+ in let comma = Util.delim ","
+ in Util.del_str "(" . Sep.opt_space
+ . Build.opt_list (is_tmpl|use_tmpl) comma
+ . Sep.opt_space . Util.del_str ")"
+let title_comment_re = /[ \t]*[#;].*$/
+
+let title_comment = [ label "#title_comment"
+ . store title_comment_re ]
+let title = label "title" . Util.del_str "["
+ . store text_re . Util.del_str "]"
+ . tmpl? . title_comment? . eol
+let record = IniFile.record title entry
+
+let lns = IniFile.lns record comment
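+
+(* Example (editor's sketch, values illustrative): a section title using
+   the template syntax, assuming the stock IniFile/Util definitions:
+
+   (start code)
+   test Sip_Conf.lns get "[natted](!,general)\n" =
+     { "title" = "natted"
+       { "@is_template" }
+       { "@use_template" = "general" } }
+   (end code)
+*)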
+
+let filter = incl "/etc/asterisk/sip.conf"
+
+let xfm = transform lns filter
--- /dev/null
+(* Slapd module for Augeas
+ Author: Free Ekanayaka <free@64studio.com>
+
+ Reference: man slapd.conf(5), man slapd.access(5)
+
+*)
+
+module Slapd =
+ autoload xfm
+
+(************************************************************************
+ * USEFUL PRIMITIVES
+ *************************************************************************)
+
+let eol = Util.eol
+let spc = Util.del_ws_spc
+let sep = del /[ \t\n]+/ " "
+
+let sto_to_eol = store /([^ \t\n].*[^ \t\n]|[^ \t\n])/
+let sto_to_spc = store /[^\\# \t\n]+/
+
+let comment = Util.comment
+let empty = Util.empty
+
+(************************************************************************
+ * ACCESS TO
+ *************************************************************************)
+
+let access_re = "access to"
+let control_re = "stop" | "continue" | "break"
+let what = [ spc . label "access"
+ . store (/[^\\# \t\n]+/ - ("by" | control_re)) ]
+
+(* TODO: parse the control field, see man slapd.access (5) *)
+let control = [ spc . label "control" . store control_re ]
+let by = [ sep . key "by" . spc . sto_to_spc
+ . what? . control? ]
+
+let access = [ key access_re . spc . sto_to_spc . by+ . eol ]
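+
+(* Example (editor's sketch, values illustrative): a minimal access rule
+   parses as follows:
+
+   (start code)
+   test Slapd.access get "access to * by * read\n" =
+     { "access to" = "*"
+       { "by" = "*"
+         { "access" = "read" } } }
+   (end code)
+*)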
+
+(************************************************************************
+ * GLOBAL
+ *************************************************************************)
+
+(* TODO: parse special field separately, see man slapd.conf (5) *)
+let global_re = "allow"
+ | "argsfile"
+ | "attributeoptions"
+ | "attributetype"
+ | "authz-policy"
+ | "ldap"
+ | "dn"
+ | "concurrency"
+ | "conn_max_pending"
+ | "conn_max_pending_auth"
+ | "defaultsearchbase"
+ | "disallow"
+ | "ditcontentrule"
+ | "gentlehup"
+ | "idletimeout"
+ | "include"
+ | "index_substr_if_minlen"
+ | "index_substr_if_maxlen"
+ | "index_substr_any_len"
+ | "index_substr_any_step"
+ | "localSSF"
+ | "loglevel"
+ | "moduleload"
+ | "modulepath"
+ | "objectclass"
+ | "objectidentifier"
+ | "password-hash"
+ | "password-crypt-salt-format"
+ | "pidfile"
+ | "referral"
+ | "replica-argsfile"
+ | "replica-pidfile"
+ | "replicationinterval"
+ | "require"
+ | "reverse-lookup"
+ | "rootDSE"
+ | "sasl-host"
+ | "sasl-realm"
+ | "sasl-secprops"
+ | "schemadn"
+ | "security"
+ | "sizelimit"
+ | "sockbuf_max_incoming"
+ | "sockbuf_max_incoming_auth"
+ | "threads"
+ | "timelimit time"
+ | "tool-threads"
+ | "TLSCipherSuite"
+ | "TLSCACertificateFile"
+ | "TLSCACertificatePath"
+ | "TLSCertificateFile"
+ | "TLSCertificateKeyFile"
+ | "TLSDHParamFile"
+ | "TLSRandFile"
+ | "TLSVerifyClient"
+ | "TLSCRLCheck"
+ | "backend"
+
+let global = Build.key_ws_value global_re
+
+(************************************************************************
+ * DATABASE
+ *************************************************************************)
+
+(* TODO: support all types of database backend *)
+let database_hdb = "cachesize"
+ | "cachefree"
+ | "checkpoint"
+ | "dbconfig"
+ | "dbnosync"
+ | "directory"
+ | "dirtyread"
+ | "idlcachesize"
+ | "index"
+ | "linearindex"
+ | "lockdetect"
+ | "mode"
+ | "searchstack"
+ | "shm_key"
+
+let database_re = "suffix"
+ | "lastmod"
+ | "limits"
+ | "maxderefdepth"
+ | "overlay"
+ | "readonly"
+ | "replica uri"
+ | "replogfile"
+ | "restrict"
+ | "rootdn"
+ | "rootpw"
+ | "subordinate"
+ | "syncrepl rid"
+ | "updatedn"
+ | "updateref"
+ | database_hdb
+
+let database_entry =
+ let val = Quote.double_opt
+ in Build.key_value_line database_re Sep.space val
+
+let database = [ key "database"
+ . spc
+ . sto_to_eol
+ . eol
+ . (comment|empty|database_entry|access)* ]
+
+(************************************************************************
+ * LENS
+ *************************************************************************)
+
+let lns = (comment|empty|global|access)* . (database)*
+
+let filter = incl "/etc/ldap/slapd.conf"
+ . incl "/etc/openldap/slapd.conf"
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: SmbUsers
+ Parses Samba username maps
+
+Author: Raphael Pinson <raphink@gmail.com>
+
+About: License
+ This file is licenced under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Examples
+ The <Test_SmbUsers> file contains various examples and tests.
+*)
+
+module SmbUsers =
+
+autoload xfm
+
+(* View: entry *)
+let entry =
+ let username = [ label "username" . store Rx.no_spaces ]
+ in let usernames = Build.opt_list username Sep.space
+ in Build.key_value_line Rx.word Sep.space_equal usernames
+
+(* View: lns *)
+let lns = (Util.empty | (Util.comment_generic /[ \t]*[#;][ \t]*/ "# ") | entry)*
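+
+(* Example (editor's sketch, not taken from <Test_SmbUsers>): a map from a
+   Unix user to several Windows usernames:
+
+   (start code)
+   test SmbUsers.lns get "root = administrator admin\n" =
+     { "root"
+       { "username" = "administrator" }
+       { "username" = "admin" } }
+   (end code)
+*)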
+
+(* Variable: filter *)
+let filter = incl "/etc/samba/smbusers"
+ . incl "/etc/samba/usermap.txt"
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Solaris_System
+ Parses /etc/system on Solaris
+
+Author: Dominic Cleal <dcleal@redhat.com>
+
+About: Reference
+ This lens tries to stay as close as possible to `man 4 system`.
+
+About: Licence
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+
+About: Configuration files
+ This lens applies to /etc/system on Solaris. See <filter>.
+*)
+
+module Solaris_System =
+autoload xfm
+
+(************************************************************************
+ * Group: USEFUL PRIMITIVES
+ ************************************************************************)
+
+(* View: comment *)
+let comment = Util.comment_generic /[ \t]*\*[ \t]*/ "* "
+
+(* View: empty
+ Map empty lines, including empty asterisk comments *)
+let empty = [ del /[ \t]*\*?[ \t]*\n/ "\n" ]
+
+(* View: sep_colon
+ The separator for key/value entries *)
+let sep_colon = del /:[ \t]*/ ": "
+
+(* View: sep_moddir
+ The separator of directories in a moddir search path *)
+let sep_moddir = del /[: ]+/ " "
+
+(* View: modpath
+ Individual moddir search path entry *)
+let modpath = [ seq "modpath" . store /[^ :\t\n]+/ ]
+
+(* Variable: set_operators
+ Valid set operators: equals, bitwise AND and OR *)
+let set_operators = /[=&|]/
+
+(* View: set_value
+ Sets an integer value or char pointer *)
+let set_value = [ label "value" . store Rx.no_spaces ]
+
+(************************************************************************
+ * Group: COMMANDS
+ ************************************************************************)
+
+(* View: cmd_kv
+ Function for simple key/value setting commands such as rootfs *)
+let cmd_kv (cmd:string) (value:regexp) =
+ Build.key_value_line cmd sep_colon (store value)
+
+(* View: cmd_moddir
+ The moddir command for specifying module search paths *)
+let cmd_moddir =
+ Build.key_value_line "moddir" sep_colon
+ (Build.opt_list modpath sep_moddir)
+
+(* View: set_var
+ Loads the variable name from a set command, no module *)
+let set_var = [ label "variable" . store Rx.word ]
+
+(* View: set_varmod
+ Loads the module and variable names from a set command *)
+let set_varmod = [ label "module" . store Rx.word ]
+ . Util.del_str ":" . set_var
+
+(* View: set_sep_spc *)
+let set_sep_spc = Util.del_opt_ws " "
+
+(* View: cmd_set
+ The set command for individual kernel/module parameters *)
+let cmd_set = [ key "set"
+ . Util.del_ws_spc
+ . ( set_var | set_varmod )
+ . set_sep_spc
+ . [ label "operator" . store set_operators ]
+ . set_sep_spc
+ . set_value
+ . Util.eol ]
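+
+(* Example (editor's sketch): a plain set command with no module prefix,
+   assuming the stock Util/Rx definitions:
+
+   (start code)
+   test Solaris_System.cmd_set get "set maxusers=40\n" =
+     { "set"
+       { "variable" = "maxusers" }
+       { "operator" = "=" }
+       { "value" = "40" } }
+   (end code)
+*)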
+
+(************************************************************************
+ * Group: LENS
+ ************************************************************************)
+
+(* View: lns *)
+let lns = ( empty
+ | comment
+ | cmd_moddir
+ | cmd_kv "rootdev" Rx.fspath
+ | cmd_kv "rootfs" Rx.word
+ | cmd_kv "exclude" Rx.fspath
+ | cmd_kv "include" Rx.fspath
+ | cmd_kv "forceload" Rx.fspath
+ | cmd_set )*
+
+(* Variable: filter *)
+let filter = (incl "/etc/system")
+
+let xfm = transform lns filter
--- /dev/null
+(* Soma module for Augeas
+ Author: Free Ekanayaka <free@64studio.com>
+
+ Reference: man 5 soma.cfg
+
+*)
+
+module Soma =
+
+ autoload xfm
+
+(************************************************************************
+ * USEFUL PRIMITIVES
+ *************************************************************************)
+
+let eol = Util.eol
+let comment = Util.comment
+let empty = Util.empty
+
+let sep_eq = del /[ \t]*=[ \t]*/ " = "
+
+let sto_to_eol = store /([^ \t\n].*[^ \t\n]|[^ \t\n])/
+
+let word = /[A-Za-z0-9_.-]+/
+
+(************************************************************************
+ * ENTRIES
+ *************************************************************************)
+
+let entry = [ key word
+ . sep_eq
+ . sto_to_eol
+ . eol ]
+
+(************************************************************************
+ * LENS
+ *************************************************************************)
+
+let lns = (comment|empty|entry) *
+
+let filter = incl "/etc/somad/soma.cfg"
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Sos
+ Parses /etc/sos/sos.conf, the configuration file of the sos report tool.
+
+Author: George Hansper <george@hansper.id.au>
+
+About: Reference
+ https://github.com/hercules-team/augeas/wiki/Generic-modules-IniFile
+ https://github.com/sosreport/sos
+
+About: Configuration file
+ This lens applies to /etc/sos/sos.conf
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+*)
+module Sos =
+autoload xfm
+
+let comment = IniFile.comment "#" "#"
+let sep = IniFile.sep "=" "="
+
+let entry = IniFile.entry IniFile.entry_re sep comment
+let title = IniFile.title IniFile.record_re
+let record = IniFile.record title entry
+
+let lns = IniFile.lns record comment
+
+let filter = ( incl "/etc/sos/sos.conf" )
+ . ( Util.stdexcl )
+
+let xfm = transform lns filter
--- /dev/null
+(* Spacevars module for Augeas
+ Author: Free Ekanayaka <free@64studio.com>
+
+ This is a generic lens for simple key/value configuration files where
+ keys and values are separated by a sequence of spaces or tabs.
+
+*)
+
+module Spacevars =
+ autoload xfm
+
+(************************************************************************
+ * USEFUL PRIMITIVES
+ *************************************************************************)
+
+let comment = Util.comment
+let empty = Util.empty
+
+(************************************************************************
+ * ENTRIES
+ *************************************************************************)
+
+
+let entry = Build.key_ws_value /[A-Za-z0-9._-]+(\[[0-9]+\])?/
+
+let flag = [ key /[A-Za-z0-9._-]+(\[[0-9]+\])?/ . Util.doseol ]
+
+(************************************************************************
+ * LENS
+ *************************************************************************)
+
+let lns = (comment|empty|entry|flag)*
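+
+(* Example (editor's sketch, values illustrative): a space-separated
+   key/value pair as found in e.g. /etc/ldap/ldap.conf:
+
+   (start code)
+   test Spacevars.lns get "BASE dc=example,dc=com\n" =
+     { "BASE" = "dc=example,dc=com" }
+   (end code)
+*)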
+
+let simple_lns = lns (* An alias for compatibility reasons *)
+
+(* configuration files that can be parsed without customizing the lens *)
+let filter = incl "/etc/havp/havp.config"
+ . incl "/etc/ldap.conf"
+ . incl "/etc/ldap/ldap.conf"
+ . incl "/etc/libnss-ldap.conf"
+ . incl "/etc/pam_ldap.conf"
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Splunk
+ Parses /opt/splunk/etc/*, /opt/splunkforwarder/etc/system/local/*.conf and /opt/splunkforwarder/etc/apps/*/(default|local)/*.conf
+
+Author: Tim Brigham, Jason Antman
+
+About: Reference
+ https://docs.splunk.com/Documentation/Splunk/6.5.0/Admin/AboutConfigurationFiles
+
+About: License
+ This file is licenced under the LGPL v2+
+
+About: Lens Usage
+ Works like the IniFile lens, but supports an anonymous section for entries without an enclosing section and allows underscore-prefixed keys.
+
+About: Configuration files
+ This lens applies to conf files under /opt/splunk/etc and /opt/splunkforwarder/etc. See <filter>.
+
+About: Examples
+ The <Test_Splunk> file contains various examples and tests.
+*)
+
+module Splunk =
+ autoload xfm
+
+ let comment = IniFile.comment IniFile.comment_re IniFile.comment_default
+ let sep = IniFile.sep IniFile.sep_re IniFile.sep_default
+ let empty = IniFile.empty
+
+ let entry_re = ( /[A-Za-z_][A-Za-z0-9._-]*/ )
+ let setting = entry_re
+ let title = IniFile.indented_title_label "target" IniFile.record_label_re
+ let entry = [ key entry_re . sep . IniFile.sto_to_eol? . IniFile.eol ] | comment
+
+
+ let record = IniFile.record title entry
+ let anon = [ label ".anon" . (entry|empty)+ ]
+ let lns = anon . (record)* | (record)*
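+
+ (* Example (editor's sketch, not taken from <Test_Splunk>): entries before
+    the first section land in the ".anon" section:
+
+    (start code)
+    test Splunk.lns get "host = splunk1\n[general]\nserverName = splunk1\n" =
+      { ".anon" { "host" = "splunk1" } }
+      { "target" = "general" { "serverName" = "splunk1" } }
+    (end code)
+ *)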
+
+ let filter = incl "/opt/splunk/etc/system/local/*.conf"
+ . incl "/opt/splunk/etc/apps/*/local/*.conf"
+ . incl "/opt/splunkforwarder/etc/system/local/*.conf"
+ . incl "/opt/splunkforwarder/etc/apps/*/default/*.conf"
+ . incl "/opt/splunkforwarder/etc/apps/*/local/*.conf"
+ let xfm = transform lns filter
--- /dev/null
+(* Squid module for Augeas
+ Author: Free Ekanayaka <free@64studio.com>
+
+ Reference: the self-documented default squid.conf file
+
+*)
+
+module Squid =
+ autoload xfm
+
+(************************************************************************
+ * USEFUL PRIMITIVES
+ *************************************************************************)
+
+let eol = Util.eol
+let spc = Util.del_ws_spc
+let indent = Util.indent
+
+let word = /[A-Za-z0-9!_.-]+(\[[0-9]+\])?/
+let sto_to_spc = store /[^# \t\n]+/
+let sto_to_eol = store /([^# \t\n][^#\n]*[^# \t\n])|[^# \t\n]/
+
+let comment = Util.comment
+let empty = Util.empty
+let comment_or_eol = Util.comment_or_eol
+let value (kw:string)
+ = [ spc . label kw . sto_to_spc ]
+
+let value_space_in (kw:string)
+ = [ spc . label kw . sto_to_eol ]
+
+let parameters = [ label "parameters"
+ . counter "parameters"
+ . [ spc . seq "parameters" . sto_to_spc ]+ ]
+
+(************************************************************************
+ * SPACEVARS SETTINGS
+ *************************************************************************)
+
+let entry_re = "accept_filter"
+ | "access_log"
+ | "acl_uses_indirect_client"
+ | "adaptation_access"
+ | "adaptation_service_set"
+ | "allow_underscore"
+ | "always_direct"
+ | "announce_file"
+ | "announce_host"
+ | "announce_period"
+ | "announce_port"
+ | "append_domain"
+ | "as_whois_server"
+ | "authenticate_cache_garbage_interval"
+ | "authenticate_ip_shortcircuit_access"
+ | "authenticate_ip_shortcircuit_ttl"
+ | "authenticate_ip_ttl"
+ | "authenticate_ttl"
+ | "background_ping_rate"
+ | "balance_on_multiple_ip"
+ | "broken_posts"
+ | "buffered_logs"
+ | "cache"
+ | "cache_dir"
+ | "cache_dns_program"
+ | "cache_effective_group"
+ | "cache_effective_user"
+ | "cache_log"
+ | "cache_mem"
+ | "cache_mgr"
+ | "cachemgr_passwd"
+ | "cache_peer"
+ | "cache_peer_access"
+ | "cache_peer_domain"
+ | "cache_replacement_policy"
+ | "cache_store_log"
+ | "cache_swap_high"
+ | "cache_swap_low"
+ | "cache_swap_state"
+ | "cache_vary"
+ | "check_hostnames"
+ | "chroot"
+ | "client_db"
+ | "client_lifetime"
+ | "client_netmask"
+ | "client_persistent_connections"
+ | "clientside_tos"
+ | "collapsed_forwarding"
+ | "connect_timeout"
+ | "coredump_dir"
+ | "dead_peer_timeout"
+ | "debug_options"
+ | "delay_access"
+ | "delay_class"
+ | "delay_initial_bucket_level"
+ | "delay_parameters"
+ | "delay_pools"
+ | "delay_pool_uses_indirect_client"
+ | "deny_info"
+ | "detect_broken_pconn"
+ | "digest_bits_per_entry"
+ | "digest_generation"
+ | "digest_rebuild_chunk_percentage"
+ | "digest_rebuild_period"
+ | "digest_rewrite_period"
+ | "digest_swapout_chunk_size"
+ | "diskd_program"
+ | "dns_children"
+ | "dns_defnames"
+ | "dns_nameservers"
+ | "dns_retransmit_interval"
+ | "dns_testnames"
+ | "dns_timeout"
+ | "dns_v4_fallback"
+ | "ecap_enable"
+ | "ecap_service"
+ | "email_err_data"
+ | "emulate_httpd_log"
+ | "err_html_text"
+ | "error_default_language"
+ | "error_directory"
+ | "error_log_languages"
+ | "error_map"
+ | "err_page_stylesheet"
+ | "esi_parser"
+ | "external_acl_type"
+ | "external_refresh_check"
+ | "follow_x_forwarded_for"
+ | "forwarded_for"
+ | "forward_log"
+ | "forward_timeout"
+ | "fqdncache_size"
+ | "ftp_epsv_all"
+ | "ftp_list_width"
+ | "ftp_passive"
+ | "ftp_sanitycheck"
+ | "ftp_telnet_protocol"
+ | "ftp_user"
+ | "global_internal_static"
+ | "half_closed_clients"
+ | "header_access"
+ | "header_replace"
+ | "hierarchy_stoplist"
+ | "high_memory_warning"
+ | "high_page_fault_warning"
+ | "high_response_time_warning"
+ | "hostname_aliases"
+ | "hosts_file"
+ | "htcp_access"
+ | "htcp_clr_access"
+ | "htcp_port"
+ | "http_accel_surrogate_remote"
+ | "http_access2"
+ | "httpd_accel_no_pmtu_disc"
+ | "httpd_accel_surrogate_id"
+ | "httpd_suppress_version_string"
+ | "http_port"
+ | "http_reply_access"
+ | "https_port"
+ | "icap_access"
+ | "icap_class"
+ | "icap_client_username_encode"
+ | "icap_client_username_header"
+ | "icap_connect_timeout"
+ | "icap_default_options_ttl"
+ | "icap_enable"
+ | "icap_io_timeout"
+ | "icap_persistent_connections"
+ | "icap_preview_enable"
+ | "icap_preview_size"
+ | "icap_send_client_ip"
+ | "icap_send_client_username"
+ | "icap_service"
+ | "icap_service_failure_limit"
+ | "icap_service_revival_delay"
+ | "icon_directory"
+ | "icp_access"
+ | "icp_hit_stale"
+ | "icp_port"
+ | "icp_query_timeout"
+ | "ident_lookup_access"
+ | "ident_timeout"
+ | "ie_refresh"
+ | "ignore_expect_100"
+ | "ignore_ims_on_miss"
+ | "ignore_unknown_nameservers"
+ | "incoming_dns_average"
+ | "incoming_http_average"
+ | "incoming_icp_average"
+ | "incoming_rate"
+ | "ipcache_high"
+ | "ipcache_low"
+ | "ipcache_size"
+ | "loadable_modules"
+ | "location_rewrite_access"
+ | "location_rewrite_children"
+ | "location_rewrite_concurrency"
+ | "location_rewrite_program"
+ | "log_access"
+ | "logfile_daemon"
+ | "logfile_rotate"
+ | "logformat"
+ | "log_fqdn"
+ | "log_icp_queries"
+ | "log_ip_on_direct"
+ | "log_mime_hdrs"
+ | "log_uses_indirect_client"
+ | "mail_from"
+ | "mail_program"
+ | "max_filedescriptors"
+ | "maximum_icp_query_timeout"
+ | "maximum_object_size"
+ | "maximum_object_size_in_memory"
+ | "maximum_single_addr_tries"
+ | "max_open_disk_fds"
+ | "max_stale"
+ | "mcast_groups"
+ | "mcast_icp_query_timeout"
+ | "mcast_miss_addr"
+ | "mcast_miss_encode_key"
+ | "mcast_miss_port"
+ | "mcast_miss_ttl"
+ | "memory_pools"
+ | "memory_pools_limit"
+ | "memory_replacement_policy"
+ | "mime_table"
+ | "min_dns_poll_cnt"
+ | "min_http_poll_cnt"
+ | "min_icp_poll_cnt"
+ | "minimum_direct_hops"
+ | "minimum_direct_rtt"
+ | "minimum_expiry_time"
+ | "minimum_icp_query_timeout"
+ | "minimum_object_size"
+ | "miss_access"
+ | "negative_dns_ttl"
+ | "negative_ttl"
+ | "neighbor_type_domain"
+ | "netdb_filename"
+ | "netdb_high"
+ | "netdb_low"
+ | "netdb_ping_period"
+ | "never_direct"
+ | "no_cache"
+ | "nonhierarchical_direct"
+ | "offline_mode"
+ | "pconn_timeout"
+ | "peer_connect_timeout"
+ | "persistent_connection_after_error"
+ | "persistent_request_timeout"
+ | "pid_filename"
+ | "pinger_enable"
+ | "pinger_program"
+ | "pipeline_prefetch"
+ | "positive_dns_ttl"
+ | "prefer_direct"
+ | "qos_flows"
+ | "query_icmp"
+ | "quick_abort_max"
+ | "quick_abort_min"
+ | "quick_abort_pct"
+ | "range_offset_limit"
+ | "read_ahead_gap"
+ | "read_timeout"
+ | "redirector_bypass"
+ | "referer_log"
+ | "refresh_all_ims"
+ | "refresh_stale_hit"
+ | "relaxed_header_parser"
+ | "reload_into_ims"
+ | "reply_body_max_size"
+ | "reply_header_access"
+ | "reply_header_max_size"
+ | "request_body_max_size"
+ | "request_entities"
+ | "request_header_access"
+ | "request_header_max_size"
+ | "request_timeout"
+ | "retry_on_error"
+ | "server_http11"
+ | "server_persistent_connections"
+ | "short_icon_urls"
+ | "shutdown_lifetime"
+ | "sleep_after_fork"
+ | "snmp_access"
+ | "snmp_incoming_address"
+ | "snmp_outgoing_address"
+ | "snmp_port"
+ | "ssl_bump"
+ | "ssl_engine"
+ | "sslpassword_program"
+ | "sslproxy_cafile"
+ | "sslproxy_capath"
+ | "sslproxy_cert_error"
+ | "sslproxy_cipher"
+ | "sslproxy_client_certificate"
+ | "sslproxy_client_key"
+ | "sslproxy_flags"
+ | "sslproxy_options"
+ | "sslproxy_version"
+ | "ssl_unclean_shutdown"
+ | "store_avg_object_size"
+ | "store_dir_select_algorithm"
+ | "store_objects_per_bucket"
+ | "storeurl_access"
+ | "storeurl_rewrite_children"
+ | "storeurl_rewrite_concurrency"
+ | "storeurl_rewrite_program"
+ | "strip_query_terms"
+ | "tcp_outgoing_address"
+ | "tcp_outgoing_tos"
+ | "tcp_recv_bufsize"
+ | "test_reachability"
+ | "udp_incoming_address"
+ | "udp_outgoing_address"
+ | "umask"
+ | "unique_hostname"
+ | "unlinkd_program"
+ | "update_headers"
+ | "uri_whitespace"
+ | "url_rewrite_access"
+ | "url_rewrite_bypass"
+ | "url_rewrite_children"
+ | "url_rewrite_concurrency"
+ | "url_rewrite_host_header"
+ | "url_rewrite_program"
+ | "useragent_log"
+ | "vary_ignore_expire"
+ | "via"
+ | "visible_hostname"
+ | "wccp2_address"
+ | "wccp2_assignment_method"
+ | "wccp2_forwarding_method"
+ | "wccp2_rebuild_wait"
+ | "wccp2_return_method"
+ | "wccp2_router"
+ | "wccp2_service"
+ | "wccp2_service_info"
+ | "wccp2_weight"
+ | "wccp_address"
+ | "wccp_router"
+ | "wccp_version"
+ | "windows_ipaddrchangemonitor"
+ | "zero_buffers"
+ | "zph_local"
+ | "zph_mode"
+ | "zph_option"
+ | "zph_parent"
+ | "zph_sibling"
+
+let entry = indent . (Build.key_ws_value entry_re)
+
+(************************************************************************
+ * AUTH
+ *************************************************************************)
+
+let auth_re = "auth_param"
+let auth = indent
+ . [ key "auth_param"
+ . value "scheme"
+ . value "parameter"
+ . (value_space_in "setting") ?
+ . comment_or_eol ]
+
+(************************************************************************
+ * ACL
+ *************************************************************************)
+
+let acl_re = "acl"
+let acl = indent
+ . [ key acl_re . spc
+ . [ key word
+ . value "type"
+ . value "setting"
+ . parameters?
+ . comment_or_eol ] ]
+
+(************************************************************************
+ * HTTP ACCESS
+ *************************************************************************)
+
+let http_access_re
+ = "http_access"
+ | "upgrade_http0.9"
+ | "broken_vary_encoding"
+
+let http_access
+ = indent
+ . [ key http_access_re
+ . spc
+ . [ key /allow|deny/
+ . spc
+ . sto_to_spc
+ . parameters? ]
+ . comment_or_eol ]
+
+(************************************************************************
+ * REFRESH PATTERN
+ *************************************************************************)
+
+let refresh_pattern_option_re = "override-expire"
+ | "override-lastmod"
+ | "reload-into-ims"
+ | "ignore-reload"
+ | "ignore-no-cache"
+ | "ignore-no-store"
+ | "ignore-must-revalidate"
+ | "ignore-private"
+ | "ignore-auth"
+ | "refresh-ims"
+ | "store-stale"
+
+let refresh_pattern = indent . [ key "refresh_pattern" . spc
+ . [ label "case_insensitive" . Util.del_str "-i" . spc ]?
+ . store /[^ \t\n]+/ . spc
+ . [ label "min" . store Rx.integer ] . spc
+ . [ label "percent" . store Rx.integer . Util.del_str "%" ] . spc
+ . [ label "max" . store Rx.integer ]
+ . (spc . Build.opt_list [ label "option" . store refresh_pattern_option_re ] spc)?
+ . comment_or_eol ]
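+
+(* Example (editor's sketch): the stock squid.conf FTP pattern, assuming
+   the stock Rx definitions:
+
+   (start code)
+   test Squid.refresh_pattern get "refresh_pattern ^ftp: 1440 20% 10080\n" =
+     { "refresh_pattern" = "^ftp:"
+       { "min" = "1440" }
+       { "percent" = "20" }
+       { "max" = "10080" } }
+   (end code)
+*)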
+
+(************************************************************************
+ * EXTENSION METHODS
+ *************************************************************************)
+
+let extension_methods = indent . [ key "extension_methods" . spc
+ . Build.opt_list [ seq "extension_method" . store Rx.word ] spc
+ . comment_or_eol ]
+
+(************************************************************************
+ * LENS
+ *************************************************************************)
+
+let lns = (comment|empty|entry|auth|acl|http_access|refresh_pattern|extension_methods)*
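+
+(* Example (editor's sketch, values illustrative): an acl definition and
+   the http_access rule that uses it:
+
+   (start code)
+   test Squid.lns get "acl localnet src 10.0.0.0/8\nhttp_access allow localnet\n" =
+     { "acl"
+       { "localnet"
+         { "type" = "src" }
+         { "setting" = "10.0.0.0/8" } } }
+     { "http_access"
+       { "allow" = "localnet" } }
+   (end code)
+*)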
+
+let filter = incl "/etc/squid/squid.conf"
+ . incl "/etc/squid3/squid.conf"
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Ssh
+ Parses ssh client configuration
+
+Author: Jiri Suchomel <jsuchome@suse.cz>
+
+About: Reference
+ ssh_config man page
+
+About: License
+ This file is licensed under the GPL.
+
+About: Lens Usage
+ Sample usage of this lens in augtool
+
+(start code)
+augtool> set /files/etc/ssh/ssh_config/Host example.com
+augtool> set /files/etc/ssh/ssh_config/Host[.='example.com']/RemoteForward/machine1:1234 machine2:5678
+augtool> set /files/etc/ssh/ssh_config/Host[.='example.com']/Ciphers/1 aes128-ctr
+augtool> set /files/etc/ssh/ssh_config/Host[.='example.com']/Ciphers/2 aes192-ctr
+(end code)
+
+*)
+
+module Ssh =
+ autoload xfm
+
+(************************************************************************
+ * Group: USEFUL PRIMITIVES
+ *************************************************************************)
+
+ let eol = Util.doseol
+ let spc = Util.del_ws_spc
+ let spc_eq = del /[ \t]+|[ \t]*=[ \t]*/ " "
+ let comment = Util.comment
+ let empty = Util.empty
+ let comma = Util.del_str ","
+ let indent = Util.indent
+ let value_to_eol = store Rx.space_in
+ let value_to_spc = store /[^ \t\r\n=][^ \t\r\n]*/
+ let value_to_comma = store /[^, \t\r\n=][^, \t\r\n]*/
+
+
+(************************************************************************
+ * Group: ENTRIES
+ *************************************************************************)
+
+ let array_entry (k:regexp) =
+ [ indent . key k . counter "array_entry"
+ . [ spc . seq "array_entry" . value_to_spc]* . eol ]
+
+ let commas_entry (k:regexp) =
+ let value = [ seq "commas_entry" . value_to_comma]
+ in [ indent . key k . counter "commas_entry" . spc_eq .
+ Build.opt_list value comma . eol ]
+
+ let spaces_entry (k:regexp) =
+ let value = [ seq "spaces_entry" . value_to_spc ]
+ in [ indent . key k . counter "spaces_entry" . spc_eq .
+ Build.opt_list value spc . eol ]
+
+ let fw_entry (k:regexp) = [ indent . key k . spc_eq .
+ [ key /[^ \t\r\n\/=][^ \t\r\n\/]*/ . spc . value_to_eol . eol ]]
+
+ let send_env = array_entry /SendEnv/i
+
+ let proxy_command = [ indent . key /ProxyCommand/i . spc . value_to_eol . eol ]
+
+ let remote_fw = fw_entry /RemoteForward/i
+ let local_fw = fw_entry /LocalForward/i
+
+ let ciphers = commas_entry /Ciphers/i
+ let macs = commas_entry /MACs/i
+ let algorithms = commas_entry /(HostKey|Kex)Algorithms/i
+ let pubkey_accepted_key_types = commas_entry /PubkeyAcceptedKeyTypes/i
+
+ let global_knownhosts_file = spaces_entry /GlobalKnownHostsFile/i
+
+ let rekey_limit = [ indent . key /RekeyLimit/i . spc_eq .
+ [ label "amount" . value_to_spc ] .
+ [ spc . label "duration" . value_to_spc ]? . eol ]
+
+ let special_entry = send_env
+ | proxy_command
+ | remote_fw
+ | local_fw
+ | macs
+ | ciphers
+ | algorithms
+ | pubkey_accepted_key_types
+ | global_knownhosts_file
+ | rekey_limit
+
+ let key_re = /[A-Za-z0-9]+/
+ - /SendEnv|Host|Match|ProxyCommand|RemoteForward|LocalForward|MACs|Ciphers|(HostKey|Kex)Algorithms|PubkeyAcceptedKeyTypes|GlobalKnownHostsFile|RekeyLimit/i
+
+
+ let other_entry = [ indent . key key_re
+ . spc_eq . value_to_spc . eol ]
+
+ let entry = comment | empty
+ | special_entry
+ | other_entry
+
+ let host = [ key /Host/i . spc . value_to_eol . eol . entry* ]
+
+
+ let condition_entry =
+ let k = /[A-Za-z0-9]+/ in
+ let no_spc = Quote.do_dquote_opt (store /[^"' \t\r\n=]+/) in
+ let with_spc = Quote.do_quote (store /[^"'\t\r\n]* [^"'\t\r\n]*/) in
+ [ spc . key k . spc . no_spc ]
+ | [ spc . key k . spc . with_spc ]
+
+ let match_cond =
+ [ label "Condition" . condition_entry+ . eol ]
+
+ let match_entry = entry
+
+ let match =
+ [ key /Match/i . match_cond
+ . [ label "Settings" . match_entry+ ]
+ ]
+
+
+(************************************************************************
+ * Group: LENS
+ *************************************************************************)
+
+ let lns = entry* . (host | match)*
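+
+ (* Example (editor's sketch): a Match block with one condition and one
+    setting:
+
+    (start code)
+    test Ssh.lns get "Match host *.example.com\n  User foo\n" =
+      { "Match"
+        { "Condition" { "host" = "*.example.com" } }
+        { "Settings" { "User" = "foo" } } }
+    (end code)
+ *)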
+
+ let xfm = transform lns (incl "/etc/ssh/ssh_config" .
+ incl (Sys.getenv("HOME") . "/.ssh/config") .
+ incl "/etc/ssh/ssh_config.d/*.conf")
--- /dev/null
+(*
+Module: Sshd
+ Parses /etc/ssh/sshd_config
+
+Author: David Lutterkort lutter@redhat.com
+ Dominique Dumont dominique.dumont@hp.com
+
+About: Reference
+ sshd_config man page.
+ See http://www.openbsd.org/cgi-bin/man.cgi?query=sshd_config&sektion=5
+
+About: License
+ This file is licensed under the LGPL v2+.
+
+About: Lens Usage
+ Sample usage of this lens in augtool:
+
+ * Get your current setup
+ > print /files/etc/ssh/sshd_config
+ ...
+
+ * Set X11Forwarding to "no"
+ > set /files/etc/ssh/sshd_config/X11Forwarding "no"
+
+ More advanced usage:
+
+ * Set a Match section
+ > set /files/etc/ssh/sshd_config/Match[1]/Condition/User "foo"
+ > set /files/etc/ssh/sshd_config/Match[1]/Settings/X11Forwarding "yes"
+
+ Saving your file:
+
+ > save
+
+
+About: CAVEATS
+
+ In sshd_config, Match blocks must be located at the end of the file.
+ This means that any new "global" parameters (i.e. outside of a Match
+ block) must be written before the first Match block. By default,
+ Augeas will write new parameters at the end of the file.
+
+ For example, if you have a Match section and no ChrootDirectory
+ parameter, this command:
+
+ > set /files/etc/ssh/sshd_config/ChrootDirectory "foo"
+
+ will create the new node after the Match section, and Augeas will
+ refuse to save the sshd_config file.
+
+ To create the new parameter at the right place, you must first insert
+ a new Augeas node before the Match section:
+
+ > ins ChrootDirectory before /files/etc/ssh/sshd_config/Match
+
+ Then you can set the parameter:
+
+ > set /files/etc/ssh/sshd_config/ChrootDirectory "foo"
+
+
+About: Configuration files
+ This lens applies to /etc/ssh/sshd_config
+
+*)
+
+module Sshd =
+ autoload xfm
+
+ let eol = del /[ \t]*\n/ "\n"
+
+ let sep = del /[ \t=]+/ " "
+
+ let indent = del /[ \t]*/ " "
+
+ let key_re = /[A-Za-z0-9]+/
+ - /MACs|Match|AcceptEnv|Subsystem|Ciphers|((GSSAPI|)Kex|HostKey|CASignature|PubkeyAccepted)Algorithms|PubkeyAcceptedKeyTypes|(Allow|Deny)(Groups|Users)/i
+
+ let comment = Util.comment
+ let comment_noindent = Util.comment_noindent
+ let empty = Util.empty
+
+ let array_entry (kw:regexp) (sq:string) =
+ let bare = Quote.do_quote_opt_nil (store /[^"' \t\n=]+/) in
+ let quoted = Quote.do_quote (store /[^"'\n]*[ \t]+[^"'\n]*/) in
+ [ key kw
+ . ( [ sep . seq sq . bare ] | [ sep . seq sq . quoted ] )*
+ . eol ]
+
+ let other_entry =
+ let value = store /[^ \t\n=]+([ \t=]+[^ \t\n=]+)*/ in
+ [ key key_re . sep . value . eol ]
+
+ let accept_env = array_entry /AcceptEnv/i "AcceptEnv"
+
+ let allow_groups = array_entry /AllowGroups/i "AllowGroups"
+ let allow_users = array_entry /AllowUsers/i "AllowUsers"
+ let deny_groups = array_entry /DenyGroups/i "DenyGroups"
+ let deny_users = array_entry /DenyUsers/i "DenyUsers"
+
+ let subsystemvalue =
+ let value = store (/[^ \t\n=](.*[^ \t\n=])?/) in
+ [ key /[A-Za-z0-9\-]+/ . sep . value . eol ]
+
+ let subsystem =
+ [ key /Subsystem/i . sep . subsystemvalue ]
+
+ let list (kw:regexp) (sq:string) =
+ let value = store /[^, \t\n=]+/ in
+ [ key kw . sep .
+ [ seq sq . value ] .
+ ([ seq sq . Util.del_str "," . value])* .
+ eol ]
+
+ let macs = list /MACs/i "MACs"
+
+ let ciphers = list /Ciphers/i "Ciphers"
+
+ let kexalgorithms = list /KexAlgorithms/i "KexAlgorithms"
+
+ let hostkeyalgorithms = list /HostKeyAlgorithms/i "HostKeyAlgorithms"
+
+ let gssapikexalgorithms = list /GSSAPIKexAlgorithms/i "GSSAPIKexAlgorithms"
+
+ let casignaturealgorithms = list /CASignatureAlgorithms/i "CASignatureAlgorithms"
+
+ let pubkeyacceptedkeytypes = list /PubkeyAcceptedKeyTypes/i "PubkeyAcceptedKeyTypes"
+
+ let pubkeyacceptedalgorithms = list /PubkeyAcceptedAlgorithms/i "PubkeyAcceptedAlgorithms"
+
+ let entry = accept_env | allow_groups | allow_users
+ | deny_groups | subsystem | deny_users
+ | macs | ciphers | kexalgorithms | hostkeyalgorithms
+ | gssapikexalgorithms | casignaturealgorithms
+ | pubkeyacceptedkeytypes | pubkeyacceptedalgorithms | other_entry
+
+ let condition_entry =
+ let k = /[A-Za-z0-9]+/ in
+ let no_spc = Quote.do_dquote_opt (store /[^"' \t\n=]+/) in
+ let spc = Quote.do_quote (store /[^"'\t\n]* [^"'\t\n]*/) in
+ [ sep . key k . sep . no_spc ]
+ | [ sep . key k . sep . spc ]
+
+ let match_cond =
+ [ label "Condition" . condition_entry+ . eol ]
+
+ let match_entry = indent . (entry | comment_noindent)
+ | empty
+
+ let match =
+ [ key /Match/i . match_cond
+ . [ label "Settings" . match_entry+ ]
+ ]
+
+ let lns = (entry | comment | empty)* . match*
+
+ let filter = (incl "/etc/ssh/sshd_config" )
+ . ( incl "/etc/ssh/sshd_config.d/*.conf" )
+
+ let xfm = transform lns filter
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+(*
+Module Sssd
+ Lens for parsing sssd.conf
+
+Author: Erinn Looney-Triggs <erinn.looneytriggs@gmail.com>
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Configuration files
+ This lens applies to /etc/sssd/sssd.conf. See <filter>.
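+
+About: Lens Usage
+  Sample usage of this lens in augtool (section and option names are
+  illustrative):
+
+  * List the configured sections
+  > match /files/etc/sssd/sssd.conf/target
+  * Raise the debug level in the [sssd] section
+  > set /files/etc/sssd/sssd.conf/target[. = "sssd"]/debug_level 5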
+*)
+
+module Sssd =
+ autoload xfm
+
+let comment = IniFile.comment /[#;]/ "#"
+
+let sep = IniFile.sep "=" "="
+
+let entry = IniFile.indented_entry IniFile.entry_re sep comment
+
+(* View: title
+ An sssd.conf section title *)
+let title = IniFile.indented_title_label "target" IniFile.record_label_re
+
+(* View: record
+ An sssd.conf record *)
+let record = IniFile.record title entry
+
+(* View: lns
+ The sssd.conf lens *)
+let lns = ( comment | IniFile.empty )* . (record)*
+
+(* View: filter *)
+let filter = (incl "/etc/sssd/sssd.conf")
+
+let xfm = transform lns filter
+
--- /dev/null
+(*
+Module: Star
+ Parses star's configuration file
+
+Author: Cedric Bosdonnat <cbosdonnat@suse.com>
+
+About: Reference
+ This lens is based on star(1)
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
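+
+About: Configuration files
+  This lens applies to /etc/default/star. See <filter>.
+
+About: Lens Usage
+  Sample usage of this lens in augtool (the device name is illustrative):
+
+  * Set the tape device for the first archive entry
+  > set /files/etc/default/star/archive0/device /dev/nst0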
+*)
+
+module Star =
+ autoload xfm
+
+let sto_to_tab = store Rx.no_spaces
+
+let size = Build.key_value_line "STAR_FIFOSIZE" Sep.space_equal ( store /[0-9x*.a-z]+/ )
+let size_max = Build.key_value_line "STAR_FIFOSIZE_MAX" Sep.space_equal ( store /[0-9x*.a-z]+/ )
+let archive = Build.key_value_line ( "archive". /[0-7]/ ) Sep.equal
+ ( [ label "device" . sto_to_tab ] . Sep.tab .
+ [ label "block" . sto_to_tab ] . Sep.tab .
+ [ label "size" . sto_to_tab ] . ( Sep.tab .
+ [ label "istape" . sto_to_tab ] )? )
+
+let lns = ( size | size_max | archive | Util.comment | Util.empty )*
+
+let filter = incl "/etc/default/star"
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Strongswan
+ Lens for parsing strongSwan configuration files
+
+Authors:
+ Kaarle Ritvanen <kaarle.ritvanen@datakunkku.fi>
+
+About: Reference
+ strongswan.conf(5), swanctl.conf(5)
+
+About: License
+ This file is licensed under the LGPL v2+
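+
+About: Lens Usage
+  Sample usage of this lens in augtool (the file, section and key names
+  are illustrative):
+
+  * Read a charon setting from a strongswan.d snippet
+  > get /files/etc/strongswan.d/charon.conf/charon/load_modular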
+*)
+
+module Strongswan =
+
+autoload xfm
+
+let ws = del /[\n\t ]*(#[\t ]*\n[\n\t ]*)*/
+
+let rec conf =
+ let keys = /[^\/.\{\}#\n\t ]+/ - /include/
+ in let lists = /(crl|ocsp)_uris|(local|remote)_(addrs|ts)|vips|pools|(ca)?certs|pubkeys|groups|cert_policy|dns|nbns|dhcp|netmask|server|subnet|split_(in|ex)clude|interfaces_(ignore|use)|preferred/
+ in let proposals = /((ah|esp)_)?proposals/
+ in let name (pat:lens) (sep:string) =
+ pat . Util.del_ws_spc . Util.del_str sep
+ in let val = store /[^\n\t ].*/ . Util.del_str "\n" . ws ""
+ in let sval = Util.del_ws_spc . val
+ in let ival (pat:lens) (end:string) =
+ Util.del_opt_ws " " . seq "item" . pat . Util.del_str end
+ in let list (l:string) (k:regexp) (v:lens) =
+ [ label l . name (store k) "=" . counter "item" .
+ [ ival v "," ]* . [ ival v "\n" ] . ws "" ]
+ in let alg = seq "alg" . store /[a-z0-9]+/
+in (
+ [ Util.del_str "#" . label "#comment" . Util.del_opt_ws " " . val ] |
+ [ key "include" . sval ] |
+ [ name (key (keys - lists - proposals)) "=" . sval ] |
+ list "#list" lists (store /[^\n\t ,][^\n,]*/) |
+ list "#proposals" proposals (counter "alg" . [ alg ] . [ Util.del_str "-" . alg ]*) |
+ [ name (key keys) "{" . ws "\n" . conf . Util.del_str "}" . ws "\n" ]
+)*
+
+let lns = ws "" . conf
+
+let xfm = transform lns (
+ incl "/etc/strongswan.d/*.conf" .
+ incl "/etc/strongswan.d/**/*.conf" .
+ incl "/etc/swanctl/conf.d/*.conf" .
+ incl "/etc/swanctl/swanctl.conf"
+)
--- /dev/null
+(* Stunnel configuration file module for Augeas *)
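+
+(* Sample usage of this lens in augtool (the service section and values
+   below are illustrative):
+
+   > set /files/etc/stunnel/stunnel.conf/https/accept 443
+   > set /files/etc/stunnel/stunnel.conf/https/connect 8080
+*)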
+
+module Stunnel =
+ autoload xfm
+
+ let comment = IniFile.comment IniFile.comment_re IniFile.comment_default
+ let sep = IniFile.sep "=" "="
+
+ let setting = "chroot"
+ | "compression"
+ | "debug"
+ | "EGD"
+ | "engine"
+ | "engineCtrl"
+ | "fips"
+ | "foreground"
+ | "output"
+ | "pid"
+ | "RNDbytes"
+ | "RNDfile"
+ | "RNDoverwrite"
+ | "service"
+ | "setgid"
+ | "setuid"
+ | "socket"
+ | "syslog"
+ | "taskbar"
+ | "accept"
+ | "CApath"
+ | "CAfile"
+ | "cert"
+ | "ciphers"
+ | "client"
+ | "connect"
+ | "CRLpath"
+ | "CRLfile"
+ | "curve"
+ | "delay"
+ | "engineNum"
+ | "exec"
+ | "execargs"
+ | "failover"
+ | "ident"
+ | "key"
+ | "local"
+ | "OCSP"
+ | "OCSPflag"
+ | "options"
+ | "protocol"
+ | "protocolAuthentication"
+ | "protocolHost"
+ | "protocolPassword"
+ | "protocolUsername"
+ | "pty"
+ | "retry"
+ | "session"
+ | "sessiond"
+ | "sni"
+ | "sslVersion"
+ | "stack"
+ | "TIMEOUTbusy"
+ | "TIMEOUTclose"
+ | "TIMEOUTconnect"
+ | "TIMEOUTidle"
+ | "transparent"
+ | "verify"
+
+ let entry = IniFile.indented_entry setting sep comment
+ let empty = IniFile.empty
+
+ let title = IniFile.indented_title ( IniFile.record_re - ".anon" )
+ let record = IniFile.record title entry
+
+ let rc_anon = [ label ".anon" . ( entry | empty )+ ]
+
+ let lns = rc_anon? . record*
+
+ let filter = (incl "/etc/stunnel/stunnel.conf")
+
+ let xfm = transform lns filter
--- /dev/null
+(*
+Module: Subversion
+ Parses subversion's INI files
+
+Authors:
+ Marc Fournier <marc.fournier@camptocamp.com>
+ Raphael Pinson <raphael.pinson@camptocamp.com>
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Examples
+ The <Test_Subversion> file contains various examples and tests.
+
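+About: Lens Usage
+  Sample usage of this lens in augtool (section and option names are
+  illustrative):
+
+  * Make Subversion store plaintext passwords
+  > set /files/etc/subversion/servers/global/store-plaintext-passwords yes
+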
+*)
+
+module Subversion =
+autoload xfm
+
+(************************************************************************
+ * Group: INI File settings
+ *
+ * subversion only supports comments starting with "#"
+ *
+ *************************************************************************)
+
+(* View: comment *)
+let comment = IniFile.comment_noindent "#" "#"
+
+(* View: empty
+ An empty line or a non-indented empty comment *)
+let empty = IniFile.empty_noindent
+
+(* View: sep *)
+let sep = IniFile.sep IniFile.sep_default IniFile.sep_default
+
+(************************************************************************
+ * Group: ENTRY
+ *
+ * subversion doesn't support indented entries
+ *
+ *************************************************************************)
+
+(* Variable: comma_list_re *)
+let comma_list_re = "password-stores"
+
+(* Variable: space_list_re *)
+let space_list_re = "global-ignores" | "preserved-conflict-file-exts"
+
+(* Variable: std_re *)
+let std_re = /[^ \t\r\n\/=#]+/ - (comma_list_re | space_list_re)
+
+(* View: entry_std
+ A standard entry
+ Similar to a <IniFile.entry_multiline_nocomment> entry,
+ but allows ';' *)
+let entry_std =
+ IniFile.entry_multiline_generic (key std_re) sep "#" comment IniFile.eol
+
+(* View: entry *)
+let entry =
+ entry_std
+ | IniFile.entry_list_nocomment comma_list_re sep Rx.word Sep.comma
+ | IniFile.entry_list_nocomment space_list_re sep Rx.no_spaces (del /(\r?\n)?[ \t]+/ " ")
+
+
+
+
+
+
+(************************************************************************
+ * Group: TITLE
+ *
+ * subversion doesn't allow anonymous entries (outside sections)
+ *
+ *************************************************************************)
+
+(* View: title *)
+let title = IniFile.title IniFile.entry_re
+
+(* View: record
+ Use the non-indented <empty> *)
+let record = IniFile.record_noempty title (entry|empty)
+
+(************************************************************************
+ * Group: LENS & FILTER
+ *************************************************************************)
+
+(* View: lns *)
+let lns = IniFile.lns record comment
+
+(* Variable: filter *)
+let filter = incl "/etc/subversion/config"
+ . incl "/etc/subversion/servers"
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Sudoers
+ Parses /etc/sudoers
+
+Author: Raphael Pinson <raphink@gmail.com>
+
+About: Reference
+ This lens tries to stay as close as possible to `man sudoers`.
+
+For example, recursive definitions such as
+
+ > Cmnd_Spec_List ::= Cmnd_Spec |
+ > Cmnd_Spec ',' Cmnd_Spec_List
+
+are replaced by
+
+ > let cmnd_spec_list = cmnd_spec . ( sep_com . cmnd_spec )*
+
+since Augeas cannot deal with recursive definitions.
+The definitions from `man sudoers` are included as comments for reference
+throughout the file. More information can be found in the manual.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+
+About: Lens Usage
+ Sample usage of this lens in augtool
+
+ * Set first Defaults to apply to the "LOCALNET" network alias
+ > set /files/etc/sudoers/Defaults[1]/type "@LOCALNET"
+ * List all user specifications applying explicitly to the "admin" Unix group
+ > match /files/etc/sudoers/spec/user "%admin"
+ * Remove the full 3rd user specification
+ > rm /files/etc/sudoers/spec[3]
+
+About: Configuration files
+ This lens applies to /etc/sudoers. See <filter>.
+*)
+
+
+
+module Sudoers =
+ autoload xfm
+
+(************************************************************************
+ * Group: USEFUL PRIMITIVES
+ *************************************************************************)
+
+(* Group: Generic primitives *)
+(* Variable: eol *)
+let eol = Util.eol
+
+(* Variable: indent *)
+let indent = Util.indent
+
+
+(* Group: Separators *)
+
+(* Variable: sep_spc *)
+let sep_spc = Sep.space
+
+(* Variable: sep_cont *)
+let sep_cont = Sep.cl_or_space
+
+(* Variable: sep_cont_opt *)
+let sep_cont_opt = Sep.cl_or_opt_space
+
+(* Variable: sep_cont_opt_build *)
+let sep_cont_opt_build (sep:string) =
+ del (Rx.cl_or_opt_space . sep . Rx.cl_or_opt_space) (" " . sep . " ")
+
+(* Variable: sep_com *)
+let sep_com = sep_cont_opt_build ","
+
+(* Variable: sep_eq *)
+let sep_eq = sep_cont_opt_build "="
+
+(* Variable: sep_col *)
+let sep_col = sep_cont_opt_build ":"
+
+(* Variable: sep_dquote *)
+let sep_dquote = Util.del_str "\""
+
+(* Group: Negation expressions *)
+
+(************************************************************************
+ * View: del_negate
+ * Delete an even number of '!' signs
+ *************************************************************************)
+let del_negate = del /(!!)*/ ""
+
+(************************************************************************
+ * View: negate_node
+ * Negation of boolean values for <defaults>. Accept one optional '!'
+ * and produce a 'negate' node if there is one.
+ *************************************************************************)
+let negate_node = [ del "!" "!" . label "negate" ]
+
+(************************************************************************
+ * View: negate_or_value
+ * A <del_negate>, followed by either a negated key, or a key/value pair
+ *************************************************************************)
+let negate_or_value (key:lens) (value:lens) =
+ [ del_negate . (negate_node . key | key . value) ]
+
+(* Group: Stores *)
+
+(* Variable: sto_to_com_cmnd
+sto_to_com_cmnd does not begin or end with a space *)
+
+let sto_to_com_cmnd = del_negate . negate_node? . (
+ let alias = Rx.word - /(NO)?(PASSWD|EXEC|SETENV)/
+ in let non_alias = /[\/a-z]([^,:#()\n\\]|\\\\[=:,\\])*[^,=:#() \t\n\\]|[^,=:#() \t\n\\]/
+ in store (alias | non_alias))
+
+(* Variable: sto_to_com
+
+There could be a \ in the middle of a command *)
+let sto_to_com = store /([^,=:#() \t\n\\][^,=:#()\n]*[^,=:#() \t\n\\])|[^,=:#() \t\n\\]/
+
+(* Variable: sto_to_com_host *)
+let sto_to_com_host = store /[^,=:#() \t\n\\]+/
+
+
+(* Variable: sto_to_com_user
+Escaped spaces and NIS domains are allowed *)
+let sto_to_com_user =
+ let nis_re = /([A-Z]([-A-Z0-9]|(\\\\[ \t]))*+\\\\\\\\)/
+ in let user_re = /[%+@a-z]([-A-Za-z0-9._+]|(\\\\[ \t])|\\\\\\\\[A-Za-z0-9])*/ - /@include(dir)?/
+ in let alias_re = /[A-Z_]+/
+ in store ((nis_re? . user_re) | alias_re)
+
+(* Variable: to_com_chars *)
+let to_com_chars = /[^",=#() \t\n\\]+/ (* " relax emacs *)
+
+(* Variable: to_com_dquot *)
+let to_com_dquot = /"[^",=#()\n\\]+"/ (* " relax emacs *)
+
+(* Variable: sto_to_com_dquot *)
+let sto_to_com_dquot = store (to_com_chars|to_com_dquot)
+
+(* Variable: sto_to_com_col *)
+let sto_to_com_col = store to_com_chars
+
+(* Variable: sto_to_eq *)
+let sto_to_eq = store /[^,=:#() \t\n\\]+/
+
+(* Variable: sto_to_spc *)
+let sto_to_spc = store /[^", \t\n\\]+|"[^", \t\n\\]+"/
+
+(* Variable: sto_to_spc_no_dquote *)
+let sto_to_spc_no_dquote = store /[^",# \t\n\\]+/ (* " relax emacs *)
+
+(* Variable: sto_integer *)
+let sto_integer = store /-?[0-9]+/
+
+
+(* Group: Comments and empty lines *)
+
+(* View: comment
+Map comments in "#comment" nodes *)
+let comment =
+ let sto_to_eol = store (/([^ \t\n].*[^ \t\n]|[^ \t\n])/ - /include(dir)?.*/) in
+ [ label "#comment" . del /[ \t]*#[ \t]*/ "# " . sto_to_eol . eol ]
+
+(* View: comment_eol
+Requires a space before the # *)
+let comment_eol = Util.comment_generic /[ \t]+#[ \t]*/ " # "
+
+(* View: comment_or_eol
+A <comment_eol> or <eol> *)
+let comment_or_eol = comment_eol | (del /([ \t]+#\n|[ \t]*\n)/ "\n")
+
+(* View: empty
+Map empty lines *)
+let empty = [ del /[ \t]*#?[ \t]*\n/ "\n" ]
+
+(* View: includedir *)
+let includedir =
+ [ key /(#|@)include(dir)?/ . Sep.space . store Rx.fspath . eol ]
+
+
+(************************************************************************
+ * Group: ALIASES
+ *************************************************************************)
+
+(************************************************************************
+ * View: alias_field
+ * Generic alias field to gather all Alias definitions
+ *
+ * Definition:
+ * > User_Alias ::= NAME '=' User_List
+ * > Runas_Alias ::= NAME '=' Runas_List
+ * > Host_Alias ::= NAME '=' Host_List
+ * > Cmnd_Alias ::= NAME '=' Cmnd_List
+ *
+ * Parameters:
+ * kw:string - the label string
+ * sto:lens - the store lens
+ *************************************************************************)
+let alias_field (kw:string) (sto:lens) = [ label kw . sto ]
+
+(* View: alias_list
+ List of <alias_fields>, separated by commas *)
+let alias_list (kw:string) (sto:lens) =
+ Build.opt_list (alias_field kw sto) sep_com
+
+(************************************************************************
+ * View: alias_name
+ * Name of an <alias_entry_single>
+ *
+ * Definition:
+ * > NAME ::= [A-Z]([A-Z][0-9]_)*
+ *************************************************************************)
+let alias_name
+ = [ label "name" . store /[A-Z][A-Z0-9_]*/ ]
+
+(************************************************************************
+ * View: alias_entry_single
+ * Single <alias_entry>, named using <alias_name> and listing <alias_list>
+ *
+ * Definition:
+ * > Alias_Type NAME = item1, item2, ...
+ *
+ * Parameters:
+ * field:string - the field name, passed to <alias_list>
+ * sto:lens - the store lens, passed to <alias_list>
+ *************************************************************************)
+let alias_entry_single (field:string) (sto:lens)
+ = [ label "alias" . alias_name . sep_eq . alias_list field sto ]
+
+(************************************************************************
+ * View: alias_entry
+ * Alias entry, a list of comma-separated <alias_entry_single> fields
+ *
+ * Definition:
+ * > Alias_Type NAME = item1, item2, item3 : NAME = item4, item5
+ *
+ * Parameters:
+ * kw:string - the alias keyword string
+ * field:string - the field name, passed to <alias_entry_single>
+ * sto:lens - the store lens, passed to <alias_entry_single>
+ *************************************************************************)
+let alias_entry (kw:string) (field:string) (sto:lens)
+ = [ indent . key kw . sep_cont . alias_entry_single field sto
+ . ( sep_col . alias_entry_single field sto )* . comment_or_eol ]
+
+(* TODO: go further in user definitions *)
+(* View: user_alias
+ User_Alias, see <alias_field> *)
+let user_alias = alias_entry "User_Alias" "user" sto_to_com
+(* View: runas_alias
+ Run_Alias, see <alias_field> *)
+let runas_alias = alias_entry "Runas_Alias" "runas_user" sto_to_com
+(* View: host_alias
+ Host_Alias, see <alias_field> *)
+let host_alias = alias_entry "Host_Alias" "host" sto_to_com
+(* View: cmnd_alias
+ Cmnd_Alias, see <alias_field> *)
+let cmnd_alias = alias_entry "Cmnd_Alias" "command" sto_to_com_cmnd
+
+
+(************************************************************************
+ * View: alias
+ * Every kind of Alias entry,
+ * see <user_alias>, <runas_alias>, <host_alias> and <cmnd_alias>
+ *
+ * Definition:
+ * > Alias ::= 'User_Alias' User_Alias (':' User_Alias)* |
+ * > 'Runas_Alias' Runas_Alias (':' Runas_Alias)* |
+ * > 'Host_Alias' Host_Alias (':' Host_Alias)* |
+ * > 'Cmnd_Alias' Cmnd_Alias (':' Cmnd_Alias)*
+ *************************************************************************)
+let alias = user_alias | runas_alias | host_alias | cmnd_alias
+
+(************************************************************************
+ * Group: DEFAULTS
+ *************************************************************************)
+
+
+(************************************************************************
+ * View: default_type
+ * Type definition for <defaults>
+ *
+ * Definition:
+ * > Default_Type ::= 'Defaults' |
+ * > 'Defaults' '@' Host_List |
+ * > 'Defaults' ':' User_List |
+ * > 'Defaults' '!' Cmnd_List |
+ * > 'Defaults' '>' Runas_List
+ *************************************************************************)
+let default_type =
+ let value = store /[@:!>][^ \t\n\\]+/ in
+ [ label "type" . value ]
+
+(************************************************************************
+ * View: parameter_flag
+ * A flag parameter for <defaults>
+ *
+ * Flags are implicitly boolean and can be turned off via the '!' operator.
+ * Some integer, string and list parameters may also be used in a boolean
+ * context to disable them.
+ *************************************************************************)
+let parameter_flag_kw = "always_set_home" | "authenticate" | "env_editor"
+ | "env_reset" | "fqdn" | "ignore_dot"
+ | "ignore_local_sudoers" | "insults" | "log_host"
+ | "log_year" | "long_otp_prompt" | "mail_always"
+ | "mail_badpass" | "mail_no_host" | "mail_no_perms"
+ | "mail_no_user" | "noexec" | "path_info"
+ | "passprompt_override" | "preserve_groups"
+ | "requiretty" | "root_sudo" | "rootpw" | "runaspw"
+ | "set_home" | "set_logname" | "setenv"
+ | "shell_noargs" | "stay_setuid" | "targetpw"
+ | "tty_tickets" | "visiblepw" | "closefrom_override"
+ | "compress_io" | "fast_glob"
+ | "log_input" | "log_output" | "pwfeedback"
+ | "umask_override" | "use_pty" | "match_group_by_gid"
+ | "always_query_group_plugin"
+
+let parameter_flag = [ del_negate . negate_node?
+ . key parameter_flag_kw ]
+
+(************************************************************************
+ * View: parameter_integer
+ * An integer parameter for <defaults>
+ *************************************************************************)
+let parameter_integer_nobool_kw = "passwd_tries"
+
+let parameter_integer_nobool = [ key parameter_integer_nobool_kw . sep_eq
+ . del /"?/ "" . sto_integer
+ . del /"?/ "" ]
+
+
+let parameter_integer_bool_kw = "loglinelen" | "passwd_timeout"
+ | "timestamp_timeout" | "umask"
+
+let parameter_integer_bool =
+ negate_or_value
+ (key parameter_integer_bool_kw)
+ (sep_eq . del /"?/ "" . sto_integer . del /"?/ "")
+
+let parameter_integer = parameter_integer_nobool
+ | parameter_integer_bool
+
+(************************************************************************
+ * View: parameter_string
+ * A string parameter for <defaults>
+ *
+ * An odd number of '!' operators negate the value of the item;
+ * an even number just cancel each other out.
+ *************************************************************************)
+let parameter_string_nobool_kw = "badpass_message" | "editor" | "mailsub"
+ | "noexec_file" | "passprompt" | "runas_default"
+ | "syslog_badpri" | "syslog_goodpri"
+ | "timestampdir" | "timestampowner" | "secure_path"
+
+let parameter_string_nobool = [ key parameter_string_nobool_kw . sep_eq
+ . sto_to_com_dquot ]
+
+let parameter_string_bool_kw = "exempt_group" | "lecture" | "lecture_file"
+ | "listpw" | "logfile" | "mailerflags"
+ | "mailerpath" | "mailto" | "mailfrom"
+ | "syslog" | "verifypw"
+
+let parameter_string_bool =
+ negate_or_value
+ (key parameter_string_bool_kw)
+ (sep_eq . sto_to_com_dquot)
+
+let parameter_string = parameter_string_nobool
+ | parameter_string_bool
+
+(************************************************************************
+ * View: parameter_lists
+ * A single list parameter for <defaults>
+ *
+ * All lists can be used in a boolean context
+ * The argument may be a double-quoted, space-separated list or a single
+ * value without double-quotes.
+ * The list can be replaced, added to, deleted from, or disabled
+ * by using the =, +=, -=, and ! operators respectively.
+ * An odd number of '!' operators negate the value of the item;
+ * an even number just cancel each other out.
+ *************************************************************************)
+let parameter_lists_kw = "env_check" | "env_delete" | "env_keep"
+let parameter_lists_value = [ label "var" . sto_to_spc_no_dquote ]
+let parameter_lists_value_dquote = [ label "var"
+ . del /"?/ "" . sto_to_spc_no_dquote
+ . del /"?/ "" ]
+
+let parameter_lists_values = parameter_lists_value_dquote
+ | ( sep_dquote . parameter_lists_value
+ . ( sep_cont . parameter_lists_value )+
+ . sep_dquote )
+
+let parameter_lists_sep = sep_cont_opt
+ . ( [ del "+" "+" . label "append" ]
+ | [ del "-" "-" . label "remove" ] )?
+ . del "=" "=" . sep_cont_opt
+
+let parameter_lists =
+ negate_or_value
+ (key parameter_lists_kw)
+ (parameter_lists_sep . parameter_lists_values)
+
+(************************************************************************
+ * View: parameter
+ * A single parameter for <defaults>
+ *
+ * Definition:
+ * > Parameter ::= Parameter '=' Value |
+ * > Parameter '+=' Value |
+ * > Parameter '-=' Value |
+ * > '!'* Parameter
+ *
+ * Parameters may be flags, integer values, strings, or lists.
+ *
+ *************************************************************************)
+let parameter = parameter_flag | parameter_integer
+ | parameter_string | parameter_lists
+
+(************************************************************************
+ * View: parameter_list
+ * A list of comma-separated <parameters> for <defaults>
+ *
+ * Definition:
+ * > Parameter_List ::= Parameter |
+ * > Parameter ',' Parameter_List
+ *************************************************************************)
+let parameter_list = parameter . ( sep_com . parameter )*
+
+(************************************************************************
+ * View: defaults
+ * A Defaults entry
+ *
+ * Definition:
+ * > Default_Entry ::= Default_Type Parameter_List
+ *************************************************************************)
+let defaults = [ indent . key "Defaults" . default_type? . sep_cont
+ . parameter_list . comment_or_eol ]
+
+
+
+(************************************************************************
+ * Group: USER SPECIFICATION
+ *************************************************************************)
+
+(************************************************************************
+ * View: runas_spec
+ * A runas specification for <spec>, using <alias_list> for listing
+ * users and/or groups used to run a command
+ *
+ * Definition:
+ * > Runas_Spec ::= '(' Runas_List ')' |
+ * > '(:' Runas_List ')' |
+ * > '(' Runas_List ':' Runas_List ')'
+ *************************************************************************)
+let runas_spec_user = alias_list "runas_user" sto_to_com
+let runas_spec_group = Util.del_str ":" . indent
+ . alias_list "runas_group" sto_to_com
+
+let runas_spec_usergroup = runas_spec_user . indent . runas_spec_group
+
+let runas_spec = Util.del_str "("
+ . (runas_spec_user
+ | runas_spec_group
+ | runas_spec_usergroup )
+ . Util.del_str ")" . sep_cont_opt
+
+(************************************************************************
+ * View: tag_spec
+ * Tag specification for <spec>
+ *
+ * Definition:
+ * > Tag_Spec ::= ('NOPASSWD:' | 'PASSWD:' | 'NOEXEC:' | 'EXEC:' |
+ * > 'SETENV:' | 'NOSETENV:')
+ *************************************************************************)
+let tag_spec =
+ [ label "tag" . store /(NO)?(PASSWD|EXEC|SETENV)/ . sep_col ]
+
+(************************************************************************
+ * View: cmnd_spec
+ * Command specification for <spec>,
+ * with optional <runas_spec> and any amount of <tag_specs>
+ *
+ * Definition:
+ * > Cmnd_Spec ::= Runas_Spec? Tag_Spec* Cmnd
+ *************************************************************************)
+let cmnd_spec =
+ [ label "command" . runas_spec? . tag_spec* . sto_to_com_cmnd ]
+
+(************************************************************************
+ * View: cmnd_spec_list
+ * A list of comma-separated <cmnd_specs>
+ *
+ * Definition:
+ * > Cmnd_Spec_List ::= Cmnd_Spec |
+ * > Cmnd_Spec ',' Cmnd_Spec_List
+ *************************************************************************)
+let cmnd_spec_list = Build.opt_list cmnd_spec sep_com
+
+
+(************************************************************************
+ * View: spec_list
+ * Group of hosts with <cmnd_spec_list>
+ *************************************************************************)
+let spec_list = [ label "host_group" . alias_list "host" sto_to_com_host
+ . sep_eq . cmnd_spec_list ]
+
+(************************************************************************
+ * View: spec
+ * A user specification, listing colon-separated <spec_lists>
+ *
+ * Definition:
+ * > User_Spec ::= User_List Host_List '=' Cmnd_Spec_List \
+ * > (':' Host_List '=' Cmnd_Spec_List)*
+ *************************************************************************)
+let spec = [ label "spec" . indent
+ . alias_list "user" sto_to_com_user . sep_cont
+ . Build.opt_list spec_list sep_col
+ . comment_or_eol ]
+
+
+(************************************************************************
+ * Group: LENS & FILTER
+ *************************************************************************)
+
+(* View: lns
+ The sudoers lens, any amount of
+ * <empty> lines
+ * <comments>
+ * <includedirs>
+ * <aliases>
+ * <defaults>
+ * <specs>
+*)
+let lns = ( empty | comment | includedir | alias | defaults | spec )*
+
+(* View: filter *)
+let filter = (incl "/etc/sudoers")
+ . (incl "/usr/local/etc/sudoers")
+ . (incl "/etc/sudoers.d/*")
+ . (incl "/usr/local/etc/sudoers.d/*")
+ . (incl "/opt/csw/etc/sudoers")
+ . (incl "/etc/opt/csw/sudoers")
+ . Util.stdexcl
+
+let xfm = transform lns filter
--- /dev/null
+(* Variation of the Shellvars lens *)
+(* Supports only what's needed to handle sysconfig files *)
+(* Modified to strip quotes. In the put direction, add double quotes *)
+(* around values that need them *)
+(* To keep things simple, we also do not support shell variable arrays *)
+module Sysconfig =
+
+ let eol = Shellvars.eol
+ let semicol_eol = Shellvars.semicol_eol
+
+ let key_re = Shellvars.key_re
+ let eq = Util.del_str "="
+
+ let eol_for_comment = del /([ \t]*\n)([ \t]*(#[ \t]*)?\n)*/ "\n"
+ let comment = Util.comment_generic_seteol /[ \t]*#[ \t]*/ "# " eol_for_comment
+ let comment_or_eol = Shellvars.comment_or_eol
+
+ let empty = Util.empty
+
+ let bchar = /[^; \t\n"'\\]|\\\\./ (* " Emacs, relax *)
+ let qchar = /["']/ (* " Emacs, relax *)
+
+ (* We split the handling of right hand sides into a few cases:
+ * bare - strings that contain no spaces, optionally enclosed in
+ * single or double quotes
+ * quot - strings that must be enclosed in single or double quotes
+ * dquot - strings that contain at least one space or apostrophe,
+ * which must be enclosed in double quotes
+ * squot - strings that contain an unescaped double quote
+ *)
+ let bare = Quote.do_quote_opt (store bchar+)
+
+ let quot =
+ let word = bchar* . /[; \t]/ . bchar* in
+ Quote.do_quote (store word+)
+
+ let dquot =
+ let char = /[^"\\]|\\\\./ in (* " *)
+ let word = char* . "'" . char* in
+ Quote.do_dquote (store word+)
+
+ let squot =
+ (* We do not allow escaped double quotes in single quoted strings, as *)
+ (* that leads to a put ambiguity with bare, e.g. for the string '\"'. *)
+ let char = /[^'\\]|\\\\[^"]/ in (* " *)
+ let word = char* . "\"" . char* in
+ Quote.do_squote (store word+)
+
+ let kv (value:lens) =
+ let export = Shellvars.export in
+ let indent = Util.del_opt_ws "" in
+ [ indent . export? . key key_re . eq . value . comment_or_eol ]
+
+ let assign =
+ let nothing = del /(""|'')?/ "" . value "" in
+ kv nothing | kv bare | kv quot | kv dquot | kv squot
+
+ let var_action = Shellvars.var_action
+
+ let unset = [ var_action "unset" . comment_or_eol ]
+ let bare_export = [ var_action "export" . comment_or_eol ]
+
+ let source = [ Shellvars.source . comment_or_eol ]
+
+ let lns = empty* . (comment | source | assign | unset | bare_export)*
+
+(*
+ Examples:
+
+ abc -> abc -> abc
+ "abc" -> abc -> abc
+ "a b" -> a b -> "a b"
+ 'a"b' -> a"b -> 'a"b'
+*)
--- /dev/null
+(*
+Module: Sysconfig_Route
+ Parses /etc/sysconfig/network-scripts/route-${device}
+
+Author: Stephen P. Schaefer
+
+About: Reference
+ This lens allows manipulation of *one* IPv4 variant of the
+/etc/sysconfig/network-scripts/route-${device} script found in RHEL5+, CentOS5+
+and Fedora.
+
+ The variant handled consists of lines like
+ "destination_subnet/cidr via router_ip", e.g.,
+ "10.132.11.0/24 via 10.10.2.1"
+
+ There are other variants; if you use them, please enhance this lens.
+
+ The natural key would be "10.132.11.0/24" with value "10.10.2.1", but since
+ Augeas cannot deal with slashes in node names, the two are reversed, so that
+ the key is "10.10.2.1[1]" (and "10.10.2.1[2]"... if multiple subnets are
+ reachable via that router).
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ Sample usage of this lens in augtool
+
+ * Set the first subnet reachable by a router reachable on the eth0 subnet
+ > set /files/etc/sysconfig/network-scripts/route-eth0/10.10.2.1[1] 172.16.254.0/24
+ * List all the subnets reachable by a router reachable on eth0 subnet
+ > match /files/etc/sysconfig/network-scripts/route-eth0/10.10.2.1
+
+About: Configuration files
+ This lens applies to /etc/sysconfig/network-scripts/route-*
+
+About: Examples
+ The <Test_Sysconfig_Route> file contains various examples and tests.
+*)
+
+module Sysconfig_Route =
+ autoload xfm
+
+(******************************************************************************
+ * Group: USEFUL PRIMITIVES
+ ******************************************************************************)
+
+(* Variable: router
+ A router *)
+let router = Rx.ipv4
+(* Variable: cidr
+ A subnet mask can be 0 to 32 bits *)
+let cidr = /(3[012]|[12][0-9]|[0-9])/
+(* Variable: subnet
+ Subnet specification *)
+let subnet = Rx.ipv4 . "/" . cidr
+
+(******************************************************************************
+ * Group: ENTRY TYPES
+ ******************************************************************************)
+
+(* View: entry
+ One route *)
+let entry = [ store subnet . del /[ \t]*via[ \t]*/ " via "
+ . key router . Util.del_str "\n" ]
+
+(******************************************************************************
+ * Group: LENS AND FILTER
+ ******************************************************************************)
+
+(* View: lns *)
+let lns = entry+
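+
+(* An illustrative check of <lns> (hedged: the addresses are examples only;
+ note the key/value reversal described in the header comment): *)
+test lns get "10.132.11.0/24 via 10.10.2.1\n" =
+ { "10.10.2.1" = "10.132.11.0/24" }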
+
+(* View: filter *)
+let filter = incl "/etc/sysconfig/network-scripts/route-*"
+ . Util.stdexcl
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Sysctl
+ Parses /etc/sysctl.conf and /etc/sysctl.d/*
+
+Author: David Lutterkort <lutter@redhat.com>
+
+About: Reference
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ Sample usage of this lens in augtool (the key shown is a common example
+ and may not exist on every system):
+
+ * Enable IPv4 forwarding
+ > set /files/etc/sysctl.conf/net.ipv4.ip_forward 1
+
+About: Configuration files
+ This lens applies to /etc/sysctl.conf, /etc/sysctl.d/* and /boot/loader.conf. See <filter>.
+
+About: Examples
+ The <Test_Sysctl> file contains various examples and tests.
+*)
+
+module Sysctl =
+autoload xfm
+
+(* Variable: filter *)
+let filter = incl "/boot/loader.conf"
+ . incl "/etc/sysctl.conf"
+ . incl "/etc/sysctl.d/*"
+ . excl "/etc/sysctl.d/README"
+ . excl "/etc/sysctl.d/README.sysctl"
+ . Util.stdexcl
+
+(* View: comment *)
+let comment = Util.comment_generic /[ \t]*[#;][ \t]*/ "# "
+
+(* View: entry
+ basically a <Simplevars.entry>, but the key has to allow special chars such as '*' *)
+let entry =
+ let some_value = Sep.space_equal . store Simplevars.to_comment_re
+ (* Rx.word extended by * and : *)
+ in let word = /[*:\/A-Za-z0-9_.-]+/
+ (* Avoid ambiguity in tree by making a subtree here *)
+ in let empty_value = [del /[ \t]*=/ "="] . store ""
+ in [ Util.indent . key word
+ . (some_value? | empty_value)
+ . (Util.eol | Util.comment_eol) ]
+
+(* View: lns
+ The sysctl lens *)
+let lns = (Util.empty | comment | entry)*
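+
+(* An illustrative check of <lns> (hedged: the key is a common example and
+ not guaranteed to exist on every system): *)
+test lns get "net.ipv4.ip_forward = 1\n" =
+ { "net.ipv4.ip_forward" = "1" }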
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Syslog
+ parses /etc/syslog.conf
+
+Author: Mathieu Arnold <mat@FreeBSD.org>
+
+About: Reference
+ This lens tries to keep as close as possible to `man 5 syslog.conf`.
+ An online version is available at:
+ http://www.freebsd.org/cgi/man.cgi?query=syslog.conf&sektion=5
+
+About: License
+ This file is licensed under the BSD License.
+
+About: Lens Usage
+ Sample usage of this lens in augtool:
+
+ * Get the file the first entry logs to
+ > get /files/etc/syslog.conf/entry[1]/action/file
+
+About: Configuration files
+ This lens applies to /etc/syslog.conf. See <filter>.
+
+ *)
+module Syslog =
+ autoload xfm
+
+ (************************************************************************
+ * Group: USEFUL PRIMITIVES
+ *************************************************************************)
+
+ (* Group: Comments and empty lines *)
+
+ (* Variable: empty *)
+ let empty = Util.empty
+ (* Variable: eol *)
+ let eol = Util.eol
+ (* Variable: sep_tab *)
+ let sep_tab = del /([ \t]+|[ \t]*\\\\\n[ \t]*)/ "\t"
+
+ (* Variable: sep_tab_opt *)
+ let sep_tab_opt = del /([ \t]*|[ \t]*\\\\\n[ \t]*)/ ""
+
+ (* View: comment
+ Map comments into "#comment" nodes
+ Can't use Util.comment as #+ and #! have a special meaning.
+ However, '# !' and '# +' have no special meaning so they should be allowed.
+ *)
+
+ let comment_gen (space:regexp) (sto:regexp) =
+ [ label "#comment" . del (Rx.opt_space . "#" . space) "# "
+ . store sto . eol ]
+
+ let comment =
+ let comment_withsign = comment_gen Rx.space /([!+-].*[^ \t\n]|[!+-])/
+ in let comment_nosign = comment_gen Rx.opt_space /([^ \t\n+!-].*[^ \t\n]|[^ \t\n+!-])/
+ in comment_withsign | comment_nosign
+
+ (* Group: single-character macros *)
+
+ (* Variable: comma
+ Deletes a comma and defaults to it
+ *)
+ let comma = sep_tab_opt . Util.del_str "," . sep_tab_opt
+ (* Variable: colon
+ Deletes a colon and defaults to it
+ *)
+ let colon = sep_tab_opt . Util.del_str ":" . sep_tab_opt
+ (* Variable: semicolon
+ Deletes a semicolon and defaults to it
+ *)
+ let semicolon = sep_tab_opt . Util.del_str ";" . sep_tab_opt
+ (* Variable: dot
+ Deletes a dot and defaults to it
+ *)
+ let dot = Util.del_str "."
+ (* Variable: pipe
+ Deletes a pipe and defaults to it
+ *)
+ let pipe = Util.del_str "|"
+ (* Variable: plus
+ Deletes a plus and defaults to it
+ *)
+ let plus = Util.del_str "+"
+ (* Variable: bang
+ Deletes a bang and defaults to it
+ *)
+ let bang = Util.del_str "!"
+
+ (* Variable: opt_hash
+ deletes an optional # sign
+ *)
+ let opt_hash = del /#?/ ""
+ (* Variable: opt_plus
+ deletes an optional + sign
+ *)
+ let opt_plus = del /\+?/ ""
+
+ (* Group: various macros *)
+
+ (* Variable: word
+ our version can't start with [_.-] because it would mess up the grammar
+ *)
+ let word = /[A-Za-z0-9][A-Za-z0-9_.-]*/
+
+ (* Variable: comparison
+ a comparison is a bare "!", one or more of [<=>], or "!" followed by [<=>]
+ *)
+ let comparison = /(!|[<=>]+|![<=>]+)/
+
+ (* Variable: protocol
+ @ means UDP
+ @@ means TCP
+ *)
+ let protocol = /@{1,2}/
+
+ (* Variable: token
+ alphanum or "*"
+ *)
+ let token = /([A-Za-z0-9]+|\*)/
+
+ (* Variable: file_r
+ a file path begins with a / and may contain almost any character after that
+ *)
+ let file_r = /\/[^ \t\n;]+/
+
+ (* Variable: loghost_r
+ Matches a hostname, that is, labels separated by dots; labels can't
+ start or end with a "-". Maybe a bit too complicated for what it's worth *)
+ let loghost_r = /[a-zA-Z0-9]([a-zA-Z0-9-]*[a-zA-Z0-9])?(\.[a-zA-Z0-9]([a-zA-Z0-9-]*[a-zA-Z0-9])?)*/ |
+ "[" . Rx.ipv6 . "]"
+
+ (* Group: Function *)
+
+ (* View: label_opt_list
+ Uses Build.opt_list to generate a list of labels
+
+ Parameters:
+ l:string - the label name
+ r:lens - the lens going after the label
+ s:lens - the separator lens passed to Build.opt_list
+ *)
+ let label_opt_list (l:string) (r:lens) (s:lens) = Build.opt_list [ label l . r ] s
+
+ (* View: label_opt_list_or
+ Either label_opt_list matches something or it emits a single label
+ with the "or" string.
+
+ Parameters:
+ l:string - the label name
+ r:lens - the lens going after the label
+ s:lens - the separator lens passed to Build.opt_list
+ or:string - the string used if the label_opt_list does not match anything
+ *)
+ let label_opt_list_or (l:string) (r:lens) (s:lens) (or:string) =
+ ( label_opt_list l r s | [ label l . store or ] )
+
+
+ (************************************************************************
+ * Group: LENS DEFINITIONS
+ *************************************************************************)
+
+ (* Group: selector *)
+
+ (* View: facilities
+ a list of facilities, separated by commas
+ *)
+ let facilities = label_opt_list "facility" (store token) comma
+
+ (* View: selector
+ a selector is a list of facilities, an optional comparison and a level
+ *)
+ let selector = facilities . dot .
+ [ label "comparison" . store comparison]? .
+ [ label "level" . store token ]
+
+ (* View: selectors
+ a list of selectors, separated by semicolons
+ *)
+ let selectors = label_opt_list "selector" selector semicolon
+
+ (* Group: action *)
+
+ (* View: file
+ a file may start with a "-", meaning it does not get synced after every message
+ *)
+ let file = [ Build.xchgs "-" "no_sync" ]? . [ label "file" . store file_r ]
+
+ (* View: loghost
+ a loghost is an @ sign followed by the hostname and a possible port
+ *)
+ let loghost = [label "protocol" . store protocol] . [ label "hostname" . store loghost_r ] .
+ (colon . [ label "port" . store /[0-9]+/ ] )?
+
+ (* View: users
+ a list of users or a "*"
+ *)
+ let users = label_opt_list_or "user" (store word) comma "*"
+
+ (* View: logprogram
+ a log program begins with a pipe
+ *)
+ let logprogram = pipe . [ label "program" . store /[^ \t\n][^\n]+[^ \t\n]/ ]
+
+ (* View: discard
+ discards matching messages
+ *)
+ let discard = [ label "discard" . Util.del_str "~" ]
+
+ (* View: action
+ an action is either a file, a host, users, a program, or discard
+ *)
+ let action = (file | loghost | users | logprogram | discard)
+
+ (* Group: Entry *)
+
+ (* View: entry
+ an entry contains selectors and an action
+ *)
+ let entry = [ label "entry" .
+ selectors . sep_tab .
+ [ label "action" . action ] . eol ]
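+
+ (* An illustrative check of <entry> (hedged: a classic selector/action
+ line, not taken from the official test suite): *)
+ test entry get "*.err\t/dev/console\n" =
+ { "entry"
+ { "selector"
+ { "facility" = "*" }
+ { "level" = "err" } }
+ { "action"
+ { "file" = "/dev/console" } } }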
+
+ (* View: entries
+ entries are either comments/empty lines or entries
+ *)
+ let entries = (empty | comment | entry )*
+
+ (* Group: Program matching *)
+
+ (* View: programs
+ a list of programs
+ *)
+ let programs = label_opt_list_or "program" (store word) comma "*"
+
+ (* View: program
+ a program begins with an optional hash, a bang, and an optional + or -
+ *)
+ let program = [ label "program" . opt_hash . bang .
+ ( opt_plus | [ Build.xchgs "-" "reverse" ] ) .
+ programs . eol . entries ]
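+
+ (* An illustrative check of <program> (hedged: the program name is an
+ example only): *)
+ test program get "!ntpd\n" =
+ { "program"
+ { "program" = "ntpd" } }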
+
+ (* Group: Hostname matching *)
+
+ (* View: hostnames
+ a list of hostnames
+ *)
+ let hostnames = label_opt_list_or "hostname" (store Rx.word) comma "*"
+
+ (* View: hostname
+ a hostname block begins with an optional hash, and a + or -
+ *)
+ let hostname = [ label "hostname" . opt_hash .
+ ( plus | [ Build.xchgs "-" "reverse" ] ) .
+ hostnames . eol . entries ]
+
+ (* Group: Top of the tree *)
+
+ let include =
+ [ key "include" . sep_tab . store file_r . eol ]
+
+ (* View: lns
+ generic entries, then program- or hostname-matching blocks
+ *)
+ let lns = entries . ( program | hostname | include )*
+
+ (* Variable: filter
+ all you need is /etc/syslog.conf
+ *)
+ let filter = incl "/etc/syslog.conf"
+
+ let xfm = transform lns filter
--- /dev/null
+(*
+Module: Systemd
+ Parses systemd unit files.
+
+Author: Dominic Cleal <dcleal@redhat.com>
+
+About: Reference
+ This lens tries to keep as close as possible to systemd.unit(5),
+ systemd.service(5), etc.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ Sample usage of this lens in augtool (the unit file is an example):
+
+ * Get the command run by a service
+ > get /files/lib/systemd/system/sshd.service/Service/ExecStart/command
+
+About: Configuration files
+ This lens applies to /lib/systemd/system/* and /etc/systemd/system/*.
+ See <filter>.
+*)
+
+module Systemd =
+autoload xfm
+
+(************************************************************************
+ * Group: USEFUL PRIMITIVES
+ *************************************************************************)
+
+(* View: eol *)
+let eol = Util.eol
+
+(* View: comment
+ An <IniFile.comment> entry for standalone comment lines (; or #) *)
+let comment = IniFile.comment IniFile.comment_re "#"
+
+(* View: eol_comment
+ An <IniFile.comment> entry for end of line comments (# only) *)
+let eol_comment = IniFile.comment "#" "#"
+
+(* View: sep
+ An <IniFile.sep> entry *)
+let sep = IniFile.sep "=" "="
+
+(* Variable: entry_single_kw *)
+let entry_single_kw = "Description"
+
+(* Variable: entry_command_kw *)
+let entry_command_kw = /Exec[A-Za-z][A-Za-z0-9._-]+/
+
+(* Variable: entry_env_kw *)
+let entry_env_kw = "Environment"
+
+(* Variable: entry_multi_kw *)
+let entry_multi_kw =
+ let forbidden = entry_single_kw | entry_command_kw | entry_env_kw
+ in /[A-Za-z][A-Za-z0-9._-]+/ - forbidden
+
+(* Variable: value_single_re *)
+let value_single_re = /[^# \t\n\\][^#\n\\]*[^# \t\n\\]|[^# \t\n\\]/
+
+(* View: sto_value_single
+ Support multiline values with a backslash *)
+let sto_value_single = Util.del_opt_ws ""
+ . store (value_single_re
+ . (/\\\\\n/ . value_single_re)*)
+
+(* View: sto_value *)
+let sto_value = store /[^# \t\n]*[^# \t\n\\]/
+
+(* Variable: value_sep
+ Multi-value entries separated by whitespace or backslash and newline *)
+let value_sep = del /[ \t]+|[ \t]*\\\\[ \t]*\n[ \t]*/ " "
+
+(* Variable: value_cmd_re
+ Don't parse @ and - prefix flags *)
+let value_cmd_re = /[^#@ \t\n\\-][^#@ \t\n\\-][^# \t\n\\]*/
+
+(* Variable: env_key *)
+let env_key = /[A-Za-z0-9_]+(\[[0-9]+\])?/
+
+(************************************************************************
+ * Group: ENTRIES
+ *************************************************************************)
+
+(*
+Supported entry features, selected by key names:
+ * multi-value space separated attrs (the default)
+ * single-value attrs (Description)
+ * systemd.service: Exec* attrs with flags, command and arguments
+ * systemd.service: Environment NAME=arg
+*)
+
+(* View: entry_fn
+ Prototype for our various key=value lines, with optional comment *)
+let entry_fn (kw:regexp) (val:lens) =
+ [ key kw . sep . val . (eol_comment|eol) ]
+
+(* View: entry_value
+ Store a value that doesn't contain spaces *)
+let entry_value = [ label "value" . sto_value ]
+
+(* View: entry_single
+ Entry that takes a single value containing spaces *)
+let entry_single = entry_fn entry_single_kw
+ [ label "value" . sto_value_single ]?
+
+(* View: entry_multi
+ Entry that takes a space-separated set of values (the default) *)
+let entry_multi = entry_fn entry_multi_kw
+ ( Util.del_opt_ws ""
+ . Build.opt_list entry_value value_sep )?
+
+(* View: entry_command_flags
+ Exec* flags "@" and "-". Order is important, see systemd.service(5) *)
+let entry_command_flags =
+ let exit = [ label "ignoreexit" . Util.del_str "-" ]
+ in let arg0 = [ label "arg0" . Util.del_str "@" ]
+ in exit? . arg0?
+
+(* View: entry_command
+ Entry that takes a command, arguments and the optional prefix flags *)
+let entry_command =
+ let cmd = [ label "command" . store value_cmd_re ]
+ in let arg = [ seq "args" . sto_value ]
+ in let args = [ counter "args" . label "arguments"
+ . (value_sep . arg)+ ]
+ in entry_fn entry_command_kw ( entry_command_flags . Util.del_opt_ws "" . cmd . args? )?
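+
+(* An illustrative check of <entry_command> (hedged: the key and command
+ are examples only): *)
+test entry_command get "ExecStart=/bin/ls -l\n" =
+ { "ExecStart"
+ { "command" = "/bin/ls" }
+ { "arguments"
+ { "1" = "-l" } } }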
+
+(* View: entry_env
+ Entry that takes a space-separated set of ENV=value key/value pairs *)
+let entry_env =
+ let envkv (env_val:lens) = key env_key . Util.del_str "=" . env_val
+ (* bare has no spaces, and is optionally quoted *)
+ in let bare = Quote.do_quote_opt (envkv (store /[^#'" \t\n]*[^#'" \t\n\\]/)?)
+ in let bare_dqval = envkv (store /"[^#"\t\n]*"/)
+ in let bare_sqval = envkv (store /'[^#'\t\n]*'/)
+ (* quoted may be empty *)
+ in let quoted = Quote.do_quote (envkv (store /[^#"'\n]*[ \t]+[^#"'\n]*/))
+ in let envkv_quoted = [ bare ] | [ bare_dqval ] | [ bare_sqval ] | [ quoted ]
+ in entry_fn entry_env_kw ( Util.del_opt_ws "" . ( Build.opt_list envkv_quoted value_sep ))
+
+
+(************************************************************************
+ * Group: LENS
+ *************************************************************************)
+
+(* View: entry
+ An <IniFile.entry> *)
+let entry = entry_single | entry_multi | entry_command | entry_env | comment
+
+(* View: include
+ Includes another file at this position *)
+let include = [ key ".include" . Util.del_ws_spc . sto_value
+ . (eol_comment|eol) ]
+
+(* View: title
+ An <IniFile.title> *)
+let title = IniFile.title IniFile.record_re
+
+(* View: record
+ An <IniFile.record> *)
+let record = IniFile.record title (entry|include)
+
+(* View: lns
+ An <IniFile.lns> *)
+let lns = IniFile.lns record (comment|include)
+
+(* View: filter *)
+let filter = incl "/lib/systemd/system/*"
+ . incl "/lib/systemd/system/*/*"
+ . incl "/etc/systemd/system/*"
+ . incl "/etc/systemd/system/*/*"
+ . incl "/etc/systemd/logind.conf"
+ . incl "/etc/sysconfig/*.systemd"
+ . incl "/lib/systemd/network/*"
+ . incl "/usr/local/lib/systemd/network/*"
+ . incl "/etc/systemd/network/*"
+ . excl "/lib/systemd/system/*.d"
+ . excl "/etc/systemd/system/*.d"
+ . excl "/lib/systemd/system/*.wants"
+ . excl "/etc/systemd/system/*.wants"
+ . Util.stdexcl
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Termcap
+ Parses termcap capability database
+
+Author: Matt Dainty <matt@bodgit-n-scarper.com>
+
+About: Reference
+ - man 5 termcap
+
+Each line represents a record consisting of a number of ':'-separated fields,
+the first of which is the name or identifier for the record. The name can
+optionally be split by '|', with each subsequent value considered an alias
+of the first. Records can be split across multiple lines with a trailing '\'.
+
+*)
+
+module Termcap =
+ autoload xfm
+
+ (* All termcap capabilities are two characters, optionally preceded by *)
+ (* up to two periods; the only types are boolean, numeric or string *)
+ let cfield = /\.{0,2}([a-zA-Z0-9]{2}|[@#%&*!][a-zA-Z0-9]|k;)(#?@|#[0-9]+|=([^:\\\\^]|\\\\[0-7]{3}|\\\\[:bBcCeEfFnNrRstT0\\^]|\^.)*)?/
+
+ let lns = ( Util.empty | Getcap.comment | Getcap.record cfield )*
+
+ let filter = incl "/etc/termcap"
+ . incl "/usr/share/misc/termcap"
+ . Util.stdexcl
+
+ let xfm = transform lns filter
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+(*
+Module: Test_Access
+ Provides unit tests and examples for the <Access> lens.
+*)
+
+module Test_access =
+
+(* Variable: conf
+ A full configuration *)
+let conf = "+ : ALL : LOCAL
++ : root : localhost.localdomain
+- : root : 127.0.0.1 .localdomain
++ : root alice@server1 @admins (wheel) : cron crond :0 tty1 tty2 tty3 tty4 tty5 tty6
+# IP v6 support
++ : john foo : 2001:4ca0:0:101::1 2001:4ca0:0:101::/64
+# Except
++ : ALL EXCEPT john @wheel : ALL EXCEPT LOCAL .win.tue.nl
+# No spaces
++:root:.example.com
+"
+
+(* Test: Access.lns
+ Test the full <conf> *)
+test Access.lns get conf =
+ { "access" = "+"
+ { "user" = "ALL" }
+ { "origin" = "LOCAL" } }
+ { "access" = "+"
+ { "user" = "root" }
+ { "origin" = "localhost.localdomain" } }
+ { "access" = "-"
+ { "user" = "root" }
+ { "origin" = "127.0.0.1" }
+ { "origin" = ".localdomain" } }
+ { "access" = "+"
+ { "user" = "root" }
+ { "user" = "alice"
+ { "host" = "server1" } }
+ { "netgroup" = "admins" }
+ { "group" = "wheel" }
+ { "origin" = "cron" }
+ { "origin" = "crond" }
+ { "origin" = ":0" }
+ { "origin" = "tty1" }
+ { "origin" = "tty2" }
+ { "origin" = "tty3" }
+ { "origin" = "tty4" }
+ { "origin" = "tty5" }
+ { "origin" = "tty6" } }
+ { "#comment" = "IP v6 support" }
+ { "access" = "+"
+ { "user" = "john" }
+ { "user" = "foo" }
+ { "origin" = "2001:4ca0:0:101::1" }
+ { "origin" = "2001:4ca0:0:101::/64" } }
+ { "#comment" = "Except" }
+ { "access" = "+"
+ { "user" = "ALL" }
+ { "except"
+ { "user" = "john" }
+ { "netgroup" = "wheel" } }
+ { "origin" = "ALL" }
+ { "except"
+ { "origin" = "LOCAL" }
+ { "origin" = ".win.tue.nl" } } }
+ { "#comment" = "No spaces" }
+ { "access" = "+"
+ { "user" = "root" }
+ { "origin" = ".example.com" } }
+
+test Access.lns put conf after
+ insa "access" "access[last()]" ;
+ set "access[last()]" "-" ;
+ set "access[last()]/user" "ALL" ;
+ set "access[last()]/origin" "ALL"
+ = "+ : ALL : LOCAL
++ : root : localhost.localdomain
+- : root : 127.0.0.1 .localdomain
++ : root alice@server1 @admins (wheel) : cron crond :0 tty1 tty2 tty3 tty4 tty5 tty6
+# IP v6 support
++ : john foo : 2001:4ca0:0:101::1 2001:4ca0:0:101::/64
+# Except
++ : ALL EXCEPT john @wheel : ALL EXCEPT LOCAL .win.tue.nl
+# No spaces
++:root:.example.com
+- : ALL : ALL
+"
+
+(* Test: Access.lns
+ - netgroups (starting with '@') are mapped as "netgroup" nodes;
+ - nisdomains (starting with '@@') are mapped as "nisdomain" nodes.
+
+ This closes <ticket #190 at https://fedorahosted.org/augeas/ticket/190>.
+ *)
+test Access.lns get "+ : @group@@domain : ALL \n" =
+ { "access" = "+"
+ { "netgroup" = "group"
+ { "nisdomain" = "domain" } }
+ { "origin" = "ALL" } }
+
+(* Test Access.lns
+ Put test for netgroup and nisdomain *)
+test Access.lns put "+ : @group : ALL \n" after
+ set "/access/netgroup[. = 'group']/nisdomain" "domain" =
+"+ : @group@@domain : ALL \n"
+
+(* Check DOMAIN\user style entry *)
+test Access.lns get "+ : root : foo1.bar.org foo3.bar.org
++ : (DOMAIN\linux_users) : ALL
++ : DOMAIN\linux_user : ALL\n" =
+ { "access" = "+"
+ { "user" = "root" }
+ { "origin" = "foo1.bar.org" }
+ { "origin" = "foo3.bar.org" } }
+ { "access" = "+"
+ { "group" = "DOMAIN\linux_users" }
+ { "origin" = "ALL" } }
+ { "access" = "+"
+ { "user" = "DOMAIN\linux_user" }
+ { "origin" = "ALL" } }
--- /dev/null
+(*
+Module: Test_ActiveMQ_Conf
+ Provides unit tests and examples for the <ActiveMQ_Conf> lens.
+*)
+
+module Test_ActiveMQ_Conf =
+
+(* Variable: conf *)
+let conf = "
+ACTIVEMQ_HOME=/usr/share/activemq
+ACTIVEMQ_BASE=${ACTIVEMQ_HOME}
+"
+
+(* Variable: new_conf *)
+let new_conf = "
+ACTIVEMQ_HOME=/usr/local/share/activemq
+ACTIVEMQ_BASE=${ACTIVEMQ_HOME}
+"
+
+let lns = ActiveMQ_Conf.lns
+
+(* Test: ActiveMQ_Conf.lns
+ * Get test against tree structure
+*)
+test lns get conf =
+ { }
+ { "ACTIVEMQ_HOME" = "/usr/share/activemq" }
+ { "ACTIVEMQ_BASE" = "${ACTIVEMQ_HOME}" }
+
+(* Test: ActiveMQ_Conf.lns
+ * Put test changing user to nobody
+*)
+test lns put conf after set "/ACTIVEMQ_HOME" "/usr/local/share/activemq" = new_conf
+
+(* vim: set ts=4 expandtab sw=4: *)
--- /dev/null
+(*
+Module: Test_ActiveMQ_XML
+ Provides unit tests and examples for the <ActiveMQ_XML> lens.
+*)
+
+module Test_ActiveMQ_XML =
+
+(* Variable: conf *)
+let conf = "<beans>
+ <broker xmlns=\"http://activemq.apache.org/schema/core\" brokerName=\"localhost\" dataDirectory=\"${activemq.data}\">
+ <transportConnectors>
+ <transportConnector name=\"openwire\" uri=\"tcp://0.0.0.0:61616\"/>
+ </transportConnectors>
+ </broker>
+</beans>
+"
+
+(* Variable: new_conf *)
+let new_conf = "<beans>
+ <broker xmlns=\"http://activemq.apache.org/schema/core\" brokerName=\"localhost\" dataDirectory=\"${activemq.data}\">
+ <transportConnectors>
+ <transportConnector name=\"openwire\" uri=\"tcp://127.0.0.1:61616\"/>
+ </transportConnectors>
+ </broker>
+</beans>
+"
+
+let lns = ActiveMQ_XML.lns
+
+(* Test: ActiveMQ_XML.lns
+ * Get test against tree structure
+*)
+test lns get conf =
+ { "beans"
+ { "#text" = "
+ " }
+ { "broker"
+ { "#attribute"
+ { "xmlns" = "http://activemq.apache.org/schema/core" }
+ { "brokerName" = "localhost" }
+ { "dataDirectory" = "${activemq.data}" }
+ }
+ { "#text" = "
+ " }
+ { "transportConnectors"
+ { "#text" = "
+ " }
+ { "transportConnector" = "#empty"
+ { "#attribute"
+ { "name" = "openwire" }
+ { "uri" = "tcp://0.0.0.0:61616" }
+ }
+ }
+ { "#text" = " " }
+ }
+ { "#text" = " " }
+ }
+ }
+
+(* Test: ActiveMQ_XML.lns
+ * Put test changing transport connector to localhost
+*)
+test lns put conf after set "/beans/broker/transportConnectors/transportConnector/#attribute/uri" "tcp://127.0.0.1:61616" = new_conf
+
+(* vim: set ts=4 expandtab sw=4: *)
--- /dev/null
+(*
+Module: Test_AFS_cellalias
+ Provides unit tests and examples for the <AFS_cellalias> lens.
+*)
+
+module Test_AFS_cellalias =
+
+(* Variable: conf
+ A full configuration *)
+let conf = "# Cell Aliases are meant to act like symlinks like '/afs/openafs.org -> oao'
+# in root.afs, so sites relying on such a link for their cell can use dynroot.
+# These aliases are set with 'fs newalias', or read from
+# /usr/vice/etc/CellAlias
+#
+# Formatting for /usr/vice/etc/CellAlias is in the form
+# <target> <alias>
+# an example would be
+# fnal.gov/common/usr usr
+
+fnal.gov fnal
+fnal.gov/files fnal-files
+"
+
+(* Test: AFS_cellalias.lns
+ Test the full <conf> *)
+test AFS_cellalias.lns get conf = { "#comment" = "Cell Aliases are meant to act like symlinks like '/afs/openafs.org -> oao'" }
+ { "#comment" = "in root.afs, so sites relying on such a link for their cell can use dynroot." }
+ { "#comment" = "These aliases are set with 'fs newalias', or read from" }
+ { "#comment" = "/usr/vice/etc/CellAlias" }
+ { }
+ { "#comment" = "Formatting for /usr/vice/etc/CellAlias is in the form" }
+ { "#comment" = "<target> <alias>" }
+ { "#comment" = "an example would be" }
+ { "#comment" = "fnal.gov/common/usr usr" }
+ { }
+ { "target" = "fnal.gov"
+ { "linkname" = "fnal" }
+ }
+ { "target" = "fnal.gov/files"
+ { "linkname" = "fnal-files" }
+ }
+
--- /dev/null
+(*
+Module: Test_Aliases
+ Provides unit tests and examples for the <Aliases> lens.
+*)
+
+module Test_aliases =
+
+(* Variable: file
+ A full configuration file *)
+ let file = "#
+# Aliases in this file will NOT be expanded in the header from
+# Mail, but WILL be visible over networks or from /bin/mail.
+
+# Basic system aliases -- these MUST be present.
+mailer-daemon: postmaster
+postmaster: root
+
+# General redirections for pseudo accounts.
+bin: root , adm,
+ bob
+daemon: root
+adm: root
+file: /var/foo
+pipe1: |/bin/ls
+pipe2 : |\"/usr/bin/ls args,\"
+"
+
+(* Test: Aliases.lns
+ Testing <Aliases.lns> on <file> *)
+ test Aliases.lns get file =
+ { }
+ { "#comment" = "Aliases in this file will NOT be expanded in the header from" }
+ { "#comment" = "Mail, but WILL be visible over networks or from /bin/mail." }
+ {}
+ { "#comment" = "Basic system aliases -- these MUST be present." }
+ { "1" { "name" = "mailer-daemon" }
+ { "value" = "postmaster" } }
+ { "2" { "name" = "postmaster" }
+ { "value" = "root" } }
+ {}
+ { "#comment" = "General redirections for pseudo accounts." }
+ { "3" { "name" = "bin" }
+ { "value" = "root" }
+ { "value" = "adm" }
+ { "value" = "bob" } }
+ { "4" { "name" = "daemon" }
+ { "value" = "root" } }
+ { "5" { "name" = "adm" }
+ { "value" = "root" } }
+ { "6" { "name" = "file" }
+ { "value" = "/var/foo" } }
+ { "7" { "name" = "pipe1" }
+ { "value" = "|/bin/ls" } }
+ { "8" { "name" = "pipe2" }
+ { "value" = "|\"/usr/bin/ls args,\"" } }
+
+(* Test: Aliases.lns
+ Put test for <Aliases.lns> on <file> *)
+ test Aliases.lns put file after
+ rm "/4" ; rm "/5" ; rm "/6" ; rm "/7" ; rm "/8" ;
+ set "/1/value[2]" "barbar" ;
+ set "/3/value[2]" "ruth"
+ = "#
+# Aliases in this file will NOT be expanded in the header from
+# Mail, but WILL be visible over networks or from /bin/mail.
+
+# Basic system aliases -- these MUST be present.
+mailer-daemon: postmaster, barbar
+postmaster: root
+
+# General redirections for pseudo accounts.
+bin: root , ruth,
+ bob
+"
+
+ (* Test: Aliases.lns
+ Schema violation, no 3/name *)
+ test Aliases.lns put file after
+ rm "/3" ;
+ set "/3/value/2" "ruth"
+ = *
+
+ (* Variable: nocomma
+ Don't have to have whitespace after a comma *)
+ let nocomma = "alias: target1,target2\n"
+
+ (* Test: Aliases.lns
+ Testing <Aliases.lns> on <nocomma> *)
+ test Aliases.lns get nocomma =
+ { "1"
+ { "name" = "alias" }
+ { "value" = "target1" }
+ { "value" = "target2" } }
+
+ (* Test: Aliases.lns
+ Ticket #229: commands can be fully enclosed in quotes *)
+ test Aliases.lns get "somebody: \"|exit 67\"\n" =
+ { "1"
+ { "name" = "somebody" }
+ { "value" = "\"|exit 67\"" } }
+
+ (* Test: Aliases.lns
+ Don't have to have whitespace after the colon *)
+ test Aliases.lns get "alias:target\n" =
+ { "1"
+ { "name" = "alias" }
+ { "value" = "target" } }
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+(*
+Module: Test_Anaconda
+ Provides unit tests and examples for the <Anaconda> lens.
+
+ - 'exampleN' snippets are taken from the documentation:
+ https://anaconda-installer.readthedocs.io/en/latest/user-interaction-config-file-spec.html
+ - 'installedN' snippets are taken from the resulting files after
+ a successful installation
+*)
+
+module Test_Anaconda =
+
+let example1 = "# comment example - before the section headers
+
+[section_1]
+# comment example - inside section 1
+key_a_in_section1=some_value
+key_b_in_section1=some_value
+
+[section_2]
+# comment example - inside section 2
+key_a_in_section2=some_value
+"
+
+test Anaconda.lns get example1 =
+ { "#comment" = "comment example - before the section headers" }
+ { }
+ { "section_1"
+ { "#comment" = "comment example - inside section 1" }
+ { "key_a_in_section1" = "some_value" }
+ { "key_b_in_section1" = "some_value" }
+ { }
+ }
+ { "section_2"
+ { "#comment" = "comment example - inside section 2" }
+ { "key_a_in_section2" = "some_value" }
+ }
+
+let example2 = "# this is the user interaction config file
+
+[General]
+post_install_tools_disabled=0
+
+[DatetimeSpoke]
+# the date and time spoke has been visited
+visited=1
+changed_timezone=1
+changed_ntp=0
+changed_timedate=1
+
+[KeyboardSpoke]
+# the keyboard spoke has not been visited
+visited=0
+"
+
+test Anaconda.lns get example2 =
+ { "#comment" = "this is the user interaction config file" }
+ { }
+ { "General"
+ { "post_install_tools_disabled" = "0" }
+ { }
+ }
+ { "DatetimeSpoke"
+ { "#comment" = "the date and time spoke has been visited" }
+ { "visited" = "1" }
+ { "changed_timezone" = "1" }
+ { "changed_ntp" = "0" }
+ { "changed_timedate" = "1" }
+ { }
+ }
+ { "KeyboardSpoke"
+ { "#comment" = "the keyboard spoke has not been visited" }
+ { "visited" = "0" }
+ }
+
+let installed1 = "# This file has been generated by the Anaconda Installer 21.48.22.134-1
+
+[ProgressSpoke]
+visited = 1
+
+"
+
+test Anaconda.lns get installed1 =
+ { "#comment" = "This file has been generated by the Anaconda Installer 21.48.22.134-1" }
+ { }
+ { "ProgressSpoke"
+ { "visited" = "1" }
+ { }
+ }
--- /dev/null
+(*
+Module: Test_Anacron
+ Provides unit tests and examples for the <Anacron> lens.
+*)
+
+module Test_anacron =
+
+(* Variable: conf *)
+let conf = "# /etc/anacrontab: configuration file for anacron
+
+SHELL=/bin/sh
+PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
+
+# These replace cron's entries
+1 5 cron.daily nice run-parts --report /etc/cron.daily
+7 10 cron.weekly nice run-parts --report /etc/cron.weekly
+@monthly 15 cron.monthly nice run-parts --report /etc/cron.monthly
+"
+
+(* Test: Anacron.lns *)
+test Anacron.lns get conf =
+ { "#comment" = "/etc/anacrontab: configuration file for anacron" }
+ { }
+ { "SHELL" = "/bin/sh" }
+ { "PATH" = "/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin" }
+ { }
+ { "#comment" = "These replace cron's entries" }
+ { "entry" = "nice run-parts --report /etc/cron.daily"
+ { "period" = "1" }
+ { "delay" = "5" }
+ { "job-identifier" = "cron.daily" } }
+ { "entry" = "nice run-parts --report /etc/cron.weekly"
+ { "period" = "7" }
+ { "delay" = "10" }
+ { "job-identifier" = "cron.weekly" } }
+ { "entry" = "nice run-parts --report /etc/cron.monthly"
+ { "period_name" = "monthly" }
+ { "delay" = "15" }
+ { "job-identifier" = "cron.monthly" } }
--- /dev/null
+(*
+Module: Test_Approx
+ Provides unit tests and examples for the <Approx> lens.
+*)
+
+module Test_approx =
+
+(* Variable: default_approx
+ A full configuration *)
+ let default_approx = "# The following are the defaults, so there is no need
+# to uncomment them unless you want a different value.
+# See approx.conf(5) for details.
+
+$interface any
+$port 9999
+$interval 720
+$max_wait 10
+$max_rate unlimited
+$debug false
+
+# Here are some examples of remote repository mappings.
+# See http://www.debian.org/mirror/list for mirror sites.
+
+debian http://ftp.nl.debian.org/debian
+debian-volatile http://ftp.nl.debian.org/debian-volatile
+security http://security.debian.org
+"
+
+(* Test: Approx.lns
+ Testing <Approx.lns> on <default_approx> *)
+ test Approx.lns get default_approx =
+ { "#comment" = "The following are the defaults, so there is no need" }
+ { "#comment" = "to uncomment them unless you want a different value." }
+ { "#comment" = "See approx.conf(5) for details." }
+ { }
+ { "$interface" = "any" }
+ { "$port" = "9999" }
+ { "$interval" = "720" }
+ { "$max_wait" = "10" }
+ { "$max_rate" = "unlimited" }
+ { "$debug" = "false" }
+ { }
+ { "#comment" = "Here are some examples of remote repository mappings." }
+ { "#comment" = "See http://www.debian.org/mirror/list for mirror sites." }
+ { }
+ { "debian" = "http://ftp.nl.debian.org/debian" }
+ { "debian-volatile" = "http://ftp.nl.debian.org/debian-volatile" }
+ { "security" = "http://security.debian.org" }
--- /dev/null
+(*
+Module: Test_Apt_Update_Manager
+ Provides unit tests and examples for the <Apt_Update_Manager> lens.
+*)
+module Test_Apt_Update_Manager =
+
+(* Variable: meta_release *)
+let meta_release = "# default location for the meta-release file
+
+[METARELEASE]
+URI = http://changelogs.ubuntu.com/meta-release
+URI_LTS = http://changelogs.ubuntu.com/meta-release-lts
+URI_UNSTABLE_POSTFIX = -development
+URI_PROPOSED_POSTFIX = -proposed
+"
+
+(* Test: Apt_Update_Manager.lns *)
+test Apt_Update_Manager.lns get meta_release =
+ { "#comment" = "default location for the meta-release file" }
+ { }
+ { "METARELEASE"
+ { "URI" = "http://changelogs.ubuntu.com/meta-release" }
+ { "URI_LTS" = "http://changelogs.ubuntu.com/meta-release-lts" }
+ { "URI_UNSTABLE_POSTFIX" = "-development" }
+ { "URI_PROPOSED_POSTFIX" = "-proposed" }
+ }
+
+(* Variable: release_upgrades *)
+let release_upgrades = "# Default behavior for the release upgrader.
+
+[DEFAULT]
+Prompt=lts
+"
+
+(* Test: Apt_Update_Manager.lns *)
+test Apt_Update_Manager.lns get release_upgrades =
+ { "#comment" = "Default behavior for the release upgrader." }
+ { }
+ { "DEFAULT"
+ { "Prompt" = "lts" }
+ }
--- /dev/null
+(* Test for AptCacherNGSecurity lens.
+
+About: License
+ Copyright 2013 Erik B. Andersen; this file is licensed under the LGPL v2+.
+
+*)
+module Test_AptCacherNGSecurity =
+ let conf = "
+# This file contains confidential data and should be protected with file
+# permissions from being read by untrusted users.
+#
+# NOTE: permissions are fixated with dpkg-statoverride on Debian systems.
+# Read its manual page for details.
+
+# Basic authentication with username and password, required to
+# visit pages with administrative functionality. Format: username:password
+
+AdminAuth: mooma:moopa
+"
+
+ test AptCacherNGSecurity.lns get conf =
+ {}
+ { "#comment" = "This file contains confidential data and should be protected with file" }
+ { "#comment" = "permissions from being read by untrusted users." }
+ {}
+ { "#comment" = "NOTE: permissions are fixated with dpkg-statoverride on Debian systems." }
+ { "#comment" = "Read its manual page for details." }
+ {}
+ { "#comment" = "Basic authentication with username and password, required to" }
+ { "#comment" = "visit pages with administrative functionality. Format: username:password" }
+ {}
+ { "AdminAuth"
+ { "mooma" = "moopa" }
+ }
--- /dev/null
+module Test_aptconf =
+
+ (* Test multiline C-style comments *)
+ let comment_multiline = "/* This is a long
+/* multiline
+comment
+*/
+"
+
+ test AptConf.comment get comment_multiline =
+ { "#mcomment"
+ { "1" = "This is a long" }
+ { "2" = "/* multiline" }
+ { "3" = "comment" } }
+
+
+ (* Test empty multiline C-style comments *)
+ let comment_multiline_empty = "/* */\n"
+
+ test AptConf.empty get comment_multiline_empty = { }
+
+
+ (* Test a simple entry *)
+ let simple_entry = "APT::Clean-Installed \"true\";\n"
+
+ test AptConf.entry get simple_entry =
+ { "APT" { "Clean-Installed" = "true" } }
+
+ (* Test simple recursion *)
+ let simple_recursion = "APT { Clean-Installed \"true\"; };\n"
+
+ test AptConf.entry get simple_recursion =
+ { "APT" { "Clean-Installed" = "true" } }
+
+ (* Test simple recursion with several entries *)
+ let simple_recursion_multi =
+ "APT {
+ Clean-Installed \"true\";
+ Get::Assume-Yes \"true\";
+ }\n"
+
+ test AptConf.entry get simple_recursion_multi =
+ { "APT"
+ { "Clean-Installed" = "true" }
+ { "Get" { "Assume-Yes" = "true" } } }
+
+ (* Test nested recursion *)
+ let multiple_recursion =
+ "APT { Get { Assume-Yes \"true\"; } };\n"
+
+ test AptConf.entry get multiple_recursion =
+ { "APT" { "Get" { "Assume-Yes" = "true" } } }
+
+ (* Test simple list *)
+ let simple_list = "DPKG::options { \"--force-confold\"; }\n"
+
+ test AptConf.entry get simple_list =
+ { "DPKG" { "options" { "@elem" = "--force-confold" } } }
+
+
+ (* Test list elements with spaces *)
+ let list_spaces = "Unattended-Upgrade::Allowed-Origins {
+ \"Ubuntu lucid-security\"; };\n"
+
+ test AptConf.entry get list_spaces =
+ { "Unattended-Upgrade" { "Allowed-Origins"
+ { "@elem" = "Ubuntu lucid-security" } } }
+
+ (* Test recursive list *)
+ let recursive_list =
+ "DPKG {
+ options {
+ \"--force-confold\";
+ \"--nocheck\";
+ } };\n"
+
+ test AptConf.entry get recursive_list =
+ { "DPKG"
+ { "options"
+ { "@elem" = "--force-confold" }
+ { "@elem" = "--nocheck" } } }
+
+ (* Test empty group *)
+ let empty_group =
+ "APT\n{\n};\n"
+
+ test AptConf.entry get empty_group = { "APT" }
+
+ (* Test #include *)
+ let include = " #include /path/to/file\n"
+
+ test AptConf.include get include =
+ { "#include" = "/path/to/file" }
+
+ (* Test #clear *)
+ let clear = "#clear Dpkg::options Apt::Get::Assume-Yes\n"
+
+ test AptConf.clear get clear =
+ { "#clear"
+ { "name" = "Dpkg::options" }
+ { "name" = "Apt::Get::Assume-Yes" } }
+
+
+ (* Test put simple value *)
+ test AptConf.entry put "APT::Clean-Installed \"true\";\n"
+ after set "/APT/Clean-Installed" "false" =
+ "APT {\nClean-Installed \"false\";\n};\n"
+
+ (* Test rm everything *)
+ test AptConf.entry put "APT { Clean-Installed \"true\"; }\n"
+ after rm "/APT" = ""
+
+ (* Test rm on recursive value *)
+ test AptConf.entry put "APT { Clean-Installed \"true\"; }\n"
+ after rm "/APT/Clean-Installed" = "APT { }\n"
+
+ (* Test put recursive value *)
+ test AptConf.entry put "APT { Clean-Installed \"true\"; }\n"
+ after set "/APT/Clean-Installed" "false" =
+ "APT { Clean-Installed \"false\"; }\n"
+
+ (* Test multiple entries with the full lens *)
+ let multiple_entries =
+ "APT { Clean-Installed \"true\"; }\n
+ APT::Clean-Installed \"true\";\n"
+
+ test AptConf.lns get multiple_entries =
+ { "APT" { "Clean-Installed" = "true" } }
+ {}
+ { "APT" { "Clean-Installed" = "true" } }
+
+ (* Test with full lens *)
+ test AptConf.lns put "APT { Clean-Installed \"true\"; }\n"
+ after set "/APT/Clean-Installed" "false" =
+ "APT { Clean-Installed \"false\"; }\n"
+
+ (* Test single commented entry *)
+ let commented_entry =
+ "Unattended-Upgrade::Allowed-Origins {
+ \"Ubuntu lucid-security\";
+// \"Ubuntu lucid-updates\";
+ };\n"
+
+ test AptConf.lns get commented_entry =
+ { "Unattended-Upgrade" { "Allowed-Origins"
+ { "@elem" = "Ubuntu lucid-security" }
+ { "#comment" = "\"Ubuntu lucid-updates\";" } } }
+
+ (* Test multiple commented entries *)
+ let commented_entries =
+ "// List of packages to not update
+Unattended-Upgrade::Package-Blacklist {
+// \"vim\";
+// \"libc6\";
+// \"libc6-dev\";
+// \"libc6-i686\"
+};
+"
+
+ test AptConf.lns get commented_entries =
+ { "#comment" = "List of packages to not update" }
+ { "Unattended-Upgrade" { "Package-Blacklist"
+ { "#comment" = "\"vim\";" }
+ { "#comment" = "\"libc6\";" }
+ { "#comment" = "\"libc6-dev\";" }
+ { "#comment" = "\"libc6-i686\"" }
+ } }
+
+ (* Test complex elem *)
+ let complex_elem = "DPkg::Post-Invoke {\"if [ -d /var/lib/update-notifier ]; then touch /var/lib/update-notifier/dpkg-run-stamp; fi; if [ -e /var/lib/update-notifier/updates-available ]; then echo > /var/lib/update-notifier/updates-available; fi \"};\n"
+
+ test AptConf.lns get complex_elem =
+ { "DPkg" { "Post-Invoke"
+ { "@elem" = "if [ -d /var/lib/update-notifier ]; then touch /var/lib/update-notifier/dpkg-run-stamp; fi; if [ -e /var/lib/update-notifier/updates-available ]; then echo > /var/lib/update-notifier/updates-available; fi " } } }
+
+ (* Accept hash comments *)
+ test AptConf.lns get "# a comment\n" =
+ { "#comment" = "a comment" }
+
+ (* Accept empty hash comments *)
+ test AptConf.lns get "# \n" = { }
--- /dev/null
+module Test_aptpreferences =
+
+  let conf = "Explanation: Backport packages are never prioritary
+Package: *
+Pin: release a=backports
+Pin-Priority: 100
+
+# This is a comment
+Explanation: My packages are the most prioritary
+Package: *
+Pin: release l=Raphink, v=3.0
+Pin-Priority: 700
+
+Package: liferea-data
+Pin: version 1.4.26-4
+Pin-Priority: 600
+
+Package: *
+Pin: origin packages.linuxmint.com
+Pin-Priority: 700
+"
+
+ test AptPreferences.lns get conf =
+ { "1"
+ { "Explanation" = "Backport packages are never prioritary" }
+ { "Package" = "*" }
+ { "Pin" = "release"
+ { "a" = "backports" } }
+ { "Pin-Priority" = "100" } }
+ { "2"
+ { "#comment" = "This is a comment" }
+ { "Explanation" = "My packages are the most prioritary" }
+ { "Package" = "*" }
+ { "Pin" = "release"
+ { "l" = "Raphink" }
+ { "v" = "3.0" } }
+ { "Pin-Priority" = "700" } }
+ { "3"
+ { "Package" = "liferea-data" }
+ { "Pin" = "version"
+ { "version" = "1.4.26-4" } }
+ { "Pin-Priority" = "600" } }
+ { "4"
+ { "Package" = "*" }
+ { "Pin" = "origin"
+ { "origin" = "packages.linuxmint.com" } }
+ { "Pin-Priority" = "700" } }
+
+(*************************************************************************)
+
+ test AptPreferences.lns put "\n" after
+ set "/1/Package" "something-funny";
+ set "/1/Pin" "version";
+ set "/1/Pin/version" "1.2.3-4";
+ set "/1/Pin-Priority" "2000"
+ = "
+Package: something-funny
+Pin: version 1.2.3-4
+Pin-Priority: 2000
+"
+
+(* Test: AptPreferences.pin
+ Spaces in origins are valid *)
+test AptPreferences.pin get "Pin: release o=Quantum GIS project\n" =
+ { "Pin" = "release"
+ { "o" = "Quantum GIS project" } }
--- /dev/null
+module Test_aptsources =
+
+ let simple_source = "deb ftp://mirror.bytemark.co.uk/debian/ etch main\n"
+ let multi_components = "deb http://security.debian.org/ etch/updates main contrib non-free\n"
+
+ test Aptsources.lns get simple_source =
+ { "1"
+ { "type" = "deb" }
+ { "uri" = "ftp://mirror.bytemark.co.uk/debian/" }
+ { "distribution" = "etch" }
+ { "component" = "main" }
+ }
+
+ test Aptsources.lns get multi_components =
+ { "1"
+ { "type" = "deb" }
+ { "uri" = "http://security.debian.org/" }
+ { "distribution" = "etch/updates" }
+ { "component" = "main" }
+ { "component" = "contrib" }
+ { "component" = "non-free" }
+ }
+
+
+let multi_line = "#deb http://www.backports.org/debian/ sarge postfix
+ # deb http://people.debian.org/~adconrad sarge subversion
+
+deb ftp://mirror.bytemark.co.uk/debian/ etch main non-free contrib
+ deb http://security.debian.org/ etch/updates main contrib non-free # security line
+ deb-src http://mirror.bytemark.co.uk/debian etch main contrib non-free\n"
+
+ test Aptsources.lns get multi_line =
+ { "#comment" = "deb http://www.backports.org/debian/ sarge postfix" }
+ { "#comment" = "deb http://people.debian.org/~adconrad sarge subversion" }
+ {}
+ { "1"
+ { "type" = "deb" }
+ { "uri" = "ftp://mirror.bytemark.co.uk/debian/" }
+ { "distribution" = "etch" }
+ { "component" = "main" }
+ { "component" = "non-free" }
+ { "component" = "contrib" }
+ }
+ { "2"
+ { "type" = "deb" }
+ { "uri" = "http://security.debian.org/" }
+ { "distribution" = "etch/updates" }
+ { "component" = "main" }
+ { "component" = "contrib" }
+ { "component" = "non-free" }
+ }
+ { "3"
+ { "type" = "deb-src" }
+ { "uri" = "http://mirror.bytemark.co.uk/debian" }
+ { "distribution" = "etch" }
+ { "component" = "main" }
+ { "component" = "contrib" }
+ { "component" = "non-free" }
+ }
+
+ let trailing_comment = "deb ftp://server/debian/ etch main # comment\n"
+
+ (* Should be a noop; makes sure that we preserve the trailing comment *)
+ test Aptsources.lns put trailing_comment after
+ set "/1/type" "deb"
+ = trailing_comment
+
+ (* Support options, GH #295 *)
+ test Aptsources.lns get "deb [arch=amd64] tor+http://ftp.us.debian.org/debian sid main contrib
+deb [ arch+=amd64 trusted-=true ] http://ftp.us.debian.org/debian sid main contrib\n" =
+ { "1"
+ { "type" = "deb" }
+ { "options"
+ { "arch" = "amd64" }
+ }
+ { "uri" = "tor+http://ftp.us.debian.org/debian" }
+ { "distribution" = "sid" }
+ { "component" = "main" }
+ { "component" = "contrib" } }
+ { "2"
+ { "type" = "deb" }
+ { "options"
+ { "arch" = "amd64" { "operation" = "+" } }
+ { "trusted" = "true" { "operation" = "-" } }
+ }
+ { "uri" = "http://ftp.us.debian.org/debian" }
+ { "distribution" = "sid" }
+ { "component" = "main" }
+ { "component" = "contrib" } }
+
+ (* cdrom entries may have spaces, GH #296 *)
+ test Aptsources.lns get "deb cdrom:[Debian GNU/Linux 7.5.0 _Wheezy_ - Official amd64 CD Binary-1 20140426-13:37]/ wheezy main\n" =
+ { "1"
+ { "type" = "deb" }
+ { "uri" = "cdrom:[Debian GNU/Linux 7.5.0 _Wheezy_ - Official amd64 CD Binary-1 20140426-13:37]/" }
+ { "distribution" = "wheezy" }
+ { "component" = "main" } }
+
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+module Test_authinfo2 =
+
+let conf = "# Comment
+[s3]
+storage-url: s3://
+backend-login: joe
+backend-password: notquitesecret
+
+[fs1]
+storage-url: s3://joes-first-bucket
+fs-passphrase: neitheristhis
+
+[fs2]
+
+storage-url: s3://joes-second-bucket
+fs-passphrase: swordfish
+[fs3]
+storage-url: s3://joes-second-bucket/with-prefix
+backend-login: bill
+backend-password: bi23ll
+fs-passphrase: ll23bi
+"
+
+
+test Authinfo2.lns get conf =
+ { "#comment" = "Comment" }
+ { "s3"
+ { "storage-url" = "s3://" }
+ { "backend-login" = "joe" }
+ { "backend-password" = "notquitesecret" }
+ {} }
+ { "fs1"
+ { "storage-url" = "s3://joes-first-bucket" }
+ { "fs-passphrase" = "neitheristhis" }
+ {} }
+ { "fs2"
+ {}
+ { "storage-url" = "s3://joes-second-bucket" }
+ { "fs-passphrase" = "swordfish" } }
+ { "fs3"
+ { "storage-url" = "s3://joes-second-bucket/with-prefix" }
+ { "backend-login" = "bill" }
+ { "backend-password" = "bi23ll" }
+ { "fs-passphrase" = "ll23bi" } }
--- /dev/null
+(*
+Module: Test_Authorized_Keys
+ Provides unit tests and examples for the <Authorized_Keys> lens.
+*)
+
+module Test_Authorized_Keys =
+
+(* Test: Authorized_Keys.lns *)
+test Authorized_Keys.lns get "tunnel=\"0\",no-agent-forwarding,command=\"sh /etc/netstart tun0\",permitopen=\"192.0.2.1:80\",permitopen=\"192.0.2.2:25\" ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA3RC8whKGFx+b7BMTFtnIWl6t/qyvOvnuqIrMNI9J8+1sEYv8Y/pJRh0vAe2RaSKAgB2hyzXwSJ1Fh+ooraUAJ+q7P2gg2kQF1nCFeGVjtV9m4ZrV5kZARcQMhp0Bp67tPo2TCtnthPYZS/YQG6u/6Aco1XZjPvuKujAQMGSgqNskhKBO9zfhhkAMIcKVryjKYHDfqbDUCCSNzlwFLts3nJ0Hfno6Hz+XxuBIfKOGjHfbzFyUQ7smYnzF23jFs4XhvnjmIGQJcZT4kQAsRwQubyuyDuqmQXqa+2SuQfkKTaPOlVqyuEWJdG2weIF8g3YP12czsBgNppz3jsnhEgstnQ== rpinson on rpinson\n" =
+ { "key" = "AAAAB3NzaC1yc2EAAAABIwAAAQEA3RC8whKGFx+b7BMTFtnIWl6t/qyvOvnuqIrMNI9J8+1sEYv8Y/pJRh0vAe2RaSKAgB2hyzXwSJ1Fh+ooraUAJ+q7P2gg2kQF1nCFeGVjtV9m4ZrV5kZARcQMhp0Bp67tPo2TCtnthPYZS/YQG6u/6Aco1XZjPvuKujAQMGSgqNskhKBO9zfhhkAMIcKVryjKYHDfqbDUCCSNzlwFLts3nJ0Hfno6Hz+XxuBIfKOGjHfbzFyUQ7smYnzF23jFs4XhvnjmIGQJcZT4kQAsRwQubyuyDuqmQXqa+2SuQfkKTaPOlVqyuEWJdG2weIF8g3YP12czsBgNppz3jsnhEgstnQ=="
+ { "options"
+ { "tunnel" = "0" }
+ { "no-agent-forwarding" }
+ { "command" = "sh /etc/netstart tun0" }
+ { "permitopen" = "192.0.2.1:80" }
+ { "permitopen" = "192.0.2.2:25" }
+ }
+ { "type" = "ssh-rsa" }
+ { "comment" = "rpinson on rpinson" } }
+
+(* Variable: keys *)
+let keys = "# Example keys, one of each type
+#
+ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDpWrKYsEsVUyuwMN4ReBN/TMGsaUWzDKDz/uQr6MlNNM95MDK/BPyJ+DiBiNMFVLpRt3gH3eCJBLJKMuUDaTNy5uym2zNgAaAIVct6M2GHI68W3iY3Ja8/MaRPbyTpMh1O74S+McpAW1SGL2YzFchYMjTnu/kOD3lxiWNiDLvdLFZu0wPOi7CYG37VXR4Thb0cC92zqnCjaP1TwfhpEYUZoowElYkoV2vG+19O6cRm/zduYcf8hmegZKB4GFUJTtZ2gZ18XJDSQd0ykK3KPt/+bKskdrtfiOwSZAmUZmd2YuAlY6+CBn1T3UBdQntueukd0z1xhd6SX7Bl8+qyqLQ3 user@example
+ssh-dsa AAAA user@example
+ecdsa-sha2-nistp256 AAAA user@example
+ssh-ed25519 AAAA user@example
+
+# Example comments
+ssh-dsa AAAA
+ssh-dsa AAAA user@example
+"
+
+(* Test: Authorized_Keys.lns *)
+test Authorized_Keys.lns get keys =
+ { "#comment" = "Example keys, one of each type" }
+ { }
+ { "key" =
+"AAAAB3NzaC1yc2EAAAADAQABAAABAQDpWrKYsEsVUyuwMN4ReBN/TMGsaUWzDKDz/uQr6MlNNM95MDK/BPyJ+DiBiNMFVLpRt3gH3eCJBLJKMuUDaTNy5uym2zNgAaAIVct6M2GHI68W3iY3Ja8/MaRPbyTpMh1O74S+McpAW1SGL2YzFchYMjTnu/kOD3lxiWNiDLvdLFZu0wPOi7CYG37VXR4Thb0cC92zqnCjaP1TwfhpEYUZoowElYkoV2vG+19O6cRm/zduYcf8hmegZKB4GFUJTtZ2gZ18XJDSQd0ykK3KPt/+bKskdrtfiOwSZAmUZmd2YuAlY6+CBn1T3UBdQntueukd0z1xhd6SX7Bl8+qyqLQ3"
+ { "type" = "ssh-rsa" }
+ { "comment" = "user@example" }
+ }
+ { "key" = "AAAA"
+ { "type" = "ssh-dsa" }
+ { "comment" = "user@example" }
+ }
+ { "key" = "AAAA"
+ { "type" = "ecdsa-sha2-nistp256" }
+ { "comment" = "user@example" }
+ }
+ { "key" = "AAAA"
+ { "type" = "ssh-ed25519" }
+ { "comment" = "user@example" }
+ }
+ { }
+ { "#comment" = "Example comments" }
+ { "key" = "AAAA"
+ { "type" = "ssh-dsa" }
+ }
+ { "key" = "AAAA"
+ { "type" = "ssh-dsa" }
+ { "comment" = "user@example" }
+ }
+
+(* Variable: options *)
+let options = "# Example options
+no-pty ssh-dsa AAAA
+no-pty ssh-ed25519 AAAA
+no-pty,command=\"foo\" ssh-dsa AAAA
+no-pty,command=\"foo bar\" ssh-dsa AAAA
+no-pty,from=\"example.com,10.1.1.0/16\" ssh-dsa AAAA
+no-pty,environment=\"LANG=en_GB.UTF8\" ssh-dsa AAAA
+"
+
+(* Test: Authorized_Keys.lns *)
+test Authorized_Keys.lns get options =
+ { "#comment" = "Example options" }
+ { "key" = "AAAA"
+ { "options"
+ { "no-pty" }
+ }
+ { "type" = "ssh-dsa" }
+ }
+ { "key" = "AAAA"
+ { "options"
+ { "no-pty" }
+ }
+ { "type" = "ssh-ed25519" }
+ }
+ { "key" = "AAAA"
+ { "options"
+ { "no-pty" }
+ { "command" = "foo" }
+ }
+ { "type" = "ssh-dsa" }
+ }
+ { "key" = "AAAA"
+ { "options"
+ { "no-pty" }
+ { "command" = "foo bar" }
+ }
+ { "type" = "ssh-dsa" }
+ }
+ { "key" = "AAAA"
+ { "options"
+ { "no-pty" }
+ { "from" = "example.com,10.1.1.0/16" }
+ }
+ { "type" = "ssh-dsa" }
+ }
+ { "key" = "AAAA"
+ { "options"
+ { "no-pty" }
+ { "environment" = "LANG=en_GB.UTF8" }
+ }
+ { "type" = "ssh-dsa" }
+ }
+
+(* Test: Authorized_Keys.lns
+   GH #165 *)
+test Authorized_Keys.lns get "command=\"echo 'Please login as the user \\\"blaauser\\\" rather than the user \\\"root\\\".';echo;sleep 10\" ssh-rsa DEADBEEF== username1\n" =
+ { "key" = "DEADBEEF=="
+ { "options"
+ { "command" = "echo 'Please login as the user \\\"blaauser\\\" rather than the user \\\"root\\\".';echo;sleep 10" }
+ }
+ { "type" = "ssh-rsa" }
+ { "comment" = "username1" }
+ }
--- /dev/null
+module Test_AuthselectPam =
+
+let example ="auth required pam_env.so
+auth required pam_faildelay.so delay=2000000
+auth required pam_faillock.so preauth silent {include if \"with-faillock\"}
+auth required pam_u2f.so cue {if not \"without-pam-u2f-nouserok\":nouserok} {include if \"with-pam-u2f-2fa\"}
+"
+
+test AuthselectPam.lns get example =
+ { "1"
+ { "type" = "auth" }
+ { "control" = "required" }
+ { "module" = "pam_env.so" } }
+ { "2"
+ { "type" = "auth" }
+ { "control" = "required" }
+ { "module" = "pam_faildelay.so" }
+ { "argument" = "delay=2000000" } }
+ { "3"
+ { "type" = "auth" }
+ { "control" = "required" }
+ { "module" = "pam_faillock.so" }
+ { "argument" = "preauth" }
+ { "argument" = "silent" }
+ { "authselect_conditional" = "include if"
+ { "feature" = "with-faillock" } } }
+ { "4"
+ { "type" = "auth" }
+ { "control" = "required" }
+ { "module" = "pam_u2f.so" }
+ { "argument" = "cue" }
+ { "authselect_conditional" = "if"
+ { "not" }
+ { "feature" = "without-pam-u2f-nouserok" }
+ { "on_true" = "nouserok" } }
+ { "authselect_conditional" = "include if"
+ { "feature" = "with-pam-u2f-2fa" } } }
--- /dev/null
+module Test_automaster =
+
+ let example = "#
+# Sample auto.master file
+#
+
+/- auto.data
+/net -hosts ro
+/misc /etc/auto.misc
+/home /etc/auto.home
+/home ldap:example.com:ou=auto.home,dc=example,dc=com
+/mnt yp:mnt.map -strict,-Dfoo=bar,uid=1000
+/mnt yp,sun:mnt.map
+/auto /etc/auto.HD --timeout=15 --ghost
+
++dir:/etc/auto.master.d
++ auto.master
+"
+
+ test Automaster.lns get example =
+ { }
+ { "#comment" = "Sample auto.master file" }
+ { }
+ { }
+ { "1" = "/-"
+ { "map" = "auto.data" } }
+ { "2" = "/net"
+ { "map" = "-hosts" }
+ { "opt" = "ro" } }
+ { "3" = "/misc"
+ { "map" = "/etc/auto.misc" } }
+ { "4" = "/home"
+ { "map" = "/etc/auto.home" } }
+ { "5" = "/home"
+ { "type" = "ldap" }
+ { "host" = "example.com" }
+ { "map" = "ou=auto.home,dc=example,dc=com" } }
+ { "6" = "/mnt"
+ { "type" = "yp" }
+ { "map" = "mnt.map" }
+ { "opt" = "-strict" }
+ { "opt" = "-Dfoo"
+ { "value" = "bar" } }
+ { "opt" = "uid"
+ { "value" = "1000" } } }
+ { "7" = "/mnt"
+ { "type" = "yp" }
+ { "format" = "sun" }
+ { "map" = "mnt.map" } }
+ { "8" = "/auto"
+ { "map" = "/etc/auto.HD" }
+ { "opt" = "--timeout"
+ { "value" = "15" } }
+ { "opt" = "--ghost" } }
+ { }
+ { "9" = "+"
+ { "type" = "dir" }
+ { "map" = "/etc/auto.master.d" } }
+ { "10" = "+"
+ { "map" = "auto.master" } }
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+module Test_automounter =
+
+ let example = "#
+# This is an automounter map and it has the following format
+# key [ -mount-options-separated-by-comma ] location
+# Details may be found in the autofs(5) manpage
+
+# indirect map
+cd -fstype=iso9660,ro,nosuid,nodev :/dev/cdrom
+kernel -ro,soft,intr ftp.kernel.org:/pub/linux
+* -fstype=auto,loop,ro :/srv/distros/isos/&.iso
+
+# direct map
+/nfs/apps/mozilla bogus:/usr/local/moxill
+
+# replicated server
+path host1,host2,hostn:/path/path
+path host1,host2:/blah host3(1):/some/other/path
+path host1(5),host2(6),host3(1):/path/path
+
+# multi-mount map
+server -rw,hard,intr / -ro myserver.me.org:/
+server -rw,hard,intr / -ro myserver.me.org:/ /usr myserver.me.org:/usr
+server -rw,hard,intr / -ro myserver.me.org:/ \
+ /usr myserver.me.org:/usr \
+ /home myserver.me.org:/home
+
+server -rw,hard,intr / -ro my-with-dash-server.me.org:/
+
+# included maps
++auto_home
+"
+
+ test Automounter.lns get example =
+ { }
+ { "#comment" = "This is an automounter map and it has the following format" }
+ { "#comment" = "key [ -mount-options-separated-by-comma ] location" }
+ { "#comment" = "Details may be found in the autofs(5) manpage" }
+ { }
+ { "#comment" = "indirect map" }
+ { "1" = "cd"
+ { "opt" = "fstype"
+ { "value" = "iso9660" } }
+ { "opt" = "ro" }
+ { "opt" = "nosuid" }
+ { "opt" = "nodev" }
+ { "location"
+ { "1"
+ { "path" = "/dev/cdrom" } } } }
+ { "2" = "kernel"
+ { "opt" = "ro" }
+ { "opt" = "soft" }
+ { "opt" = "intr" }
+ { "location"
+ { "1"
+ { "host" = "ftp.kernel.org" }
+ { "path" = "/pub/linux" } } } }
+ { "3" = "*"
+ { "opt" = "fstype"
+ { "value" = "auto" } }
+ { "opt" = "loop" }
+ { "opt" = "ro" }
+ { "location"
+ { "1"
+ { "path" = "/srv/distros/isos/&.iso" } } } }
+ { }
+ { "#comment" = "direct map" }
+ { "4" = "/nfs/apps/mozilla"
+ { "location"
+ { "1"
+ { "host" = "bogus" }
+ { "path" = "/usr/local/moxill" } } } }
+ { }
+ { "#comment" = "replicated server" }
+ { "5" = "path"
+ { "location"
+ { "1"
+ { "host" = "host1" }
+ { "host" = "host2" }
+ { "host" = "hostn" }
+ { "path" = "/path/path" } } } }
+ { "6" = "path"
+ { "location"
+ { "1"
+ { "host" = "host1" }
+ { "host" = "host2" }
+ { "path" = "/blah" } }
+ { "2"
+ { "host" = "host3"
+ { "weight" = "1" } }
+ { "path" = "/some/other/path" } } } }
+ { "7" = "path"
+ { "location"
+ { "1"
+ { "host" = "host1"
+ { "weight" = "5" } }
+ { "host" = "host2"
+ { "weight" = "6" } }
+ { "host" = "host3"
+ { "weight" = "1" } }
+ { "path" = "/path/path" } } } }
+ { }
+ { "#comment" = "multi-mount map" }
+ { "8" = "server"
+ { "opt" = "rw" }
+ { "opt" = "hard" }
+ { "opt" = "intr" }
+ { "mount"
+ { "1" = "/"
+ { "opt" = "ro" }
+ { "location"
+ { "1"
+ { "host" = "myserver.me.org" }
+ { "path" = "/" } } } } } }
+ { "9" = "server"
+ { "opt" = "rw" }
+ { "opt" = "hard" }
+ { "opt" = "intr" }
+ { "mount"
+ { "1" = "/"
+ { "opt" = "ro" }
+ { "location"
+ { "1"
+ { "host" = "myserver.me.org" }
+ { "path" = "/" } } } }
+ { "2" = "/usr"
+ { "location"
+ { "1"
+ { "host" = "myserver.me.org" }
+ { "path" = "/usr" } } } } } }
+ { "10" = "server"
+ { "opt" = "rw" }
+ { "opt" = "hard" }
+ { "opt" = "intr" }
+ { "mount"
+ { "1" = "/"
+ { "opt" = "ro" }
+ { "location"
+ { "1"
+ { "host" = "myserver.me.org" }
+ { "path" = "/" } } } }
+ { "2" = "/usr"
+ { "location"
+ { "1"
+ { "host" = "myserver.me.org" }
+ { "path" = "/usr" } } } }
+ { "3" = "/home"
+ { "location"
+ { "1"
+ { "host" = "myserver.me.org" }
+ { "path" = "/home" } } } } } }
+ { }
+ { "11" = "server"
+ { "opt" = "rw" }
+ { "opt" = "hard" }
+ { "opt" = "intr" }
+ { "mount"
+ { "1" = "/"
+ { "opt" = "ro" }
+ { "location"
+ { "1"
+ { "host" = "my-with-dash-server.me.org" }
+ { "path" = "/" } } } } } }
+ { }
+ { "#comment" = "included maps" }
+ { "12" = "+"
+ { "map" = "auto_home" } }
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+module Test_avahi =
+
+ let conf = "
+[server]
+host-name=web
+domain=example.com
+
+[wide-area]
+enable-wide-area=yes
+"
+
+ test Avahi.lns get conf =
+ {}
+ { "server"
+ { "host-name" = "web" }
+ { "domain" = "example.com" }
+ {} }
+ { "wide-area"
+ { "enable-wide-area" = "yes" } }
+
+ test Avahi.lns put conf after
+ set "server/use-ipv4" "yes";
+ set "server/clients-max" "4096"
+ = "
+[server]
+host-name=web
+domain=example.com
+
+use-ipv4=yes
+clients-max=4096
+[wide-area]
+enable-wide-area=yes
+"
+
--- /dev/null
+module Test_BackupPCHosts =
+
+let conf = "host dhcp user moreUsers
+hostname1 0 user1 anotheruser,athirduser
+hostname2 1 user2 stillanotheruser\n"
+
+test BackupPCHosts.lns get conf =
+ { "1"
+ { "host" = "host" }
+ { "dhcp" = "dhcp" }
+ { "user" = "user" }
+ { "moreusers" = "moreUsers" }
+ }
+ { "2"
+ { "host" = "hostname1" }
+ { "dhcp" = "0" }
+ { "user" = "user1" }
+ { "moreusers" = "anotheruser" }
+ { "moreusers" = "athirduser" }
+ }
+ { "3"
+ { "host" = "hostname2" }
+ { "dhcp" = "1" }
+ { "user" = "user2" }
+ { "moreusers" = "stillanotheruser" }
+ }
--- /dev/null
+module Test_bbhosts =
+
+ let conf = "
+# A comment
+
+page firstpage My first page
+
+group-compress A group
+1.2.3.4 amachine # http://url.to/monitor https://another.url/to/monitor cont;http://a.cont.url/to/monitor;wordtofind
+1.2.3.5 amachine2 # http://url.to/monitor https://another.url/to/monitor !cont;http://a.cont.url/to/monitor;wordtofind
+
+group-only dns VIP DNS
+10.50.25.48 mydnsmachine.network #
+10.50.25.49 myotherdnsmachine.network #noping noconn !ssh dns;mydnstocheck
+# a comment in a group
+
+
+page anotherpage A new page
+
+# a comment in a page
+
+group-compress My test
+192.168.0.2 myhost # https://myurl.com:1256 noconn pop3 imap2 ssh
+192.168.0.3 myhost2 # !imap2 telnet dns
+
+group-compress DownTime
+0.0.0.0 myhost3 # DOWNTIME=fping,http:*:1800:1015:\"Frontal 01 Redirect Amazon eteint entre 18h et 10h\"
+0.0.0.0 myhost4 # ftps imaps imap4 pop-3 pop2s pop smtp smtps ssh ssh1 ssh2 telnet telnets
+"
+
+ test BBhosts.lns get conf =
+ {}
+ { "#comment" = "A comment" }
+ {}
+ { "page" = "firstpage"
+ { "title" = "My first page" }
+ {}
+ { "group-compress" = "A group"
+ { "host"
+ { "ip" = "1.2.3.4" }
+ { "fqdn" = "amachine" }
+ { "probes"
+ { "url" = "http://url.to/monitor" }
+ { "url" = "https://another.url/to/monitor" }
+ { "cont" = ""
+ { "url" = "http://a.cont.url/to/monitor" }
+ { "keyword" = "wordtofind" } } } }
+ { "host"
+ { "ip" = "1.2.3.5" }
+ { "fqdn" = "amachine2" }
+ { "probes"
+ { "url" = "http://url.to/monitor" }
+ { "url" = "https://another.url/to/monitor" }
+ { "cont" = "!"
+ { "url" = "http://a.cont.url/to/monitor" }
+ { "keyword" = "wordtofind" } } } }
+ {} }
+ { "group-only" = "VIP DNS"
+ { "col" = "dns" }
+ { "host"
+ { "ip" = "10.50.25.48" }
+ { "fqdn" = "mydnsmachine.network" }
+ { "probes" } }
+ { "host"
+ { "ip" = "10.50.25.49" }
+ { "fqdn" = "myotherdnsmachine.network" }
+ { "probes"
+ { "noping" = "" }
+ { "noconn" = "" }
+ { "ssh" = "!" }
+ { "dns" = ""
+ { "url" = "mydnstocheck" } } } }
+ { "#comment" = "a comment in a group" }
+ {}
+ {} } }
+ { "page" = "anotherpage"
+ { "title" = "A new page" }
+ {}
+ { "#comment" = "a comment in a page" }
+ {}
+ { "group-compress" = "My test"
+ { "host"
+ { "ip" = "192.168.0.2" }
+ { "fqdn" = "myhost" }
+ { "probes"
+ { "url" = "https://myurl.com:1256" }
+ { "noconn" = "" }
+ { "pop3" = "" }
+ { "imap2" = "" }
+ { "ssh" = "" } } }
+ { "host"
+ { "ip" = "192.168.0.3" }
+ { "fqdn" = "myhost2" }
+ { "probes"
+ { "imap2" = "!" }
+ { "telnet" = "" }
+ { "dns" = "" } } }
+ {}
+ }
+ { "group-compress" = "DownTime"
+ { "host"
+ { "ip" = "0.0.0.0" }
+ { "fqdn" = "myhost3" }
+ { "probes"
+ { "DOWNTIME"
+ { "probe" = "fping" }
+ { "probe" = "http" }
+ { "day" = "*" }
+ { "starttime" = "1800" }
+ { "endtime" = "1015" }
+ { "cause" = "Frontal 01 Redirect Amazon eteint entre 18h et 10h" }
+ } } }
+ { "host"
+ { "ip" = "0.0.0.0" }
+ { "fqdn" = "myhost4" }
+ { "probes"
+ { "ftps" = "" }
+ { "imaps" = "" }
+ { "imap4" = "" }
+ { "pop-3" = "" }
+ { "pop2s" = "" }
+ { "pop" = "" }
+ { "smtp" = "" }
+ { "smtps" = "" }
+ { "ssh" = "" }
+ { "ssh1" = "" }
+ { "ssh2" = "" }
+ { "telnet" = "" }
+ { "telnets" = "" }
+ } } } }
+
--- /dev/null
+(*
+Module: Test_BootConf
+ Provides unit tests for the <BootConf> lens.
+*)
+
+module Test_BootConf =
+
+test BootConf.boot get "boot /bsd -s\n" =
+ { "boot"
+ { "image" = "/bsd" }
+ { "arg" = "-s" } }
+
+test BootConf.echo get "echo 42\n" =
+ { "echo" = "42" }
+
+test BootConf.ls get "ls /\n" =
+ { "ls" = "/" }
+
+test BootConf.ls get "ls //\n" =
+ { "ls" = "//" }
+
+test BootConf.ls get "ls /some/path/\n" =
+ { "ls" = "/some/path/" }
+
+test BootConf.machine get "machine diskinfo\n" =
+ { "machine"
+ { "diskinfo" } }
+
+test BootConf.machine get "machine comaddr 0xdeadbeef\n" =
+ { "machine"
+ { "comaddr" = "0xdeadbeef" } }
+
+test BootConf.set get "set tty com0\n" =
+ { "set"
+ { "tty" = "com0" } }
+
+test BootConf.single_command get "help\n" =
+ { "help" }
+
+test BootConf.stty get "stty /dev/cuaU0 115200\n" =
+ { "stty"
+ { "device" = "/dev/cuaU0" }
+ { "speed" = "115200" } }
+
+test BootConf.stty get "stty /dev/cuaU0\n" =
+ { "stty"
+ { "device" = "/dev/cuaU0" } }
--- /dev/null
+(*
+Module: Test_Build
+ Provides unit tests and examples for the <Build> lens.
+*)
+
+module Test_Build =
+
+(************************************************************************
+ * Group: GENERIC CONSTRUCTIONS
+ ************************************************************************)
+
+(* View: brackets
+ Test brackets *)
+let brackets = [ Build.brackets Sep.lbracket Sep.rbracket (key Rx.word) ]
+
+(* Test: brackets *)
+test brackets get "(foo)" = { "foo" }
+
+
+(************************************************************************
+ * Group: LIST CONSTRUCTIONS
+ ************************************************************************)
+
+(* View: list *)
+let list = Build.list [ key Rx.word ] Sep.space
+
+(* Test: list *)
+test list get "foo bar baz" = { "foo" } { "bar" } { "baz" }
+
+(* Test: list
+     Should not parse a single element *)
+test list get "foo" = *
+
+(* View: opt_list *)
+let opt_list = Build.opt_list [ key Rx.word ] Sep.space
+
+(* Test: opt_list *)
+test opt_list get "foo bar baz" = { "foo" } { "bar" } { "baz" }
+
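+(* Test: opt_list
+     A hedged contrast with <list>: assuming <Build.opt_list> has
+     one-or-more semantics, a single element should be accepted *)
+test opt_list get "foo" = { "foo" }
+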
+
+(************************************************************************
+ * Group: LABEL OPERATIONS
+ ************************************************************************)
+
+(* View: xchg *)
+let xchg = [ Build.xchg Rx.space " " "space" ]
+
+(* Test: xchg *)
+test xchg get " \t " = { "space" }
+
+(* View: xchgs *)
+let xchgs = [ Build.xchgs " " "space" ]
+
+(* Test: xchgs *)
+test xchgs get " " = { "space" }
+
+
+(************************************************************************
+ * Group: SUBNODE CONSTRUCTIONS
+ ************************************************************************)
+
+(* View: key_value_line *)
+let key_value_line = Build.key_value_line Rx.word Sep.equal (store Rx.word)
+
+(* Test: key_value_line *)
+test key_value_line get "foo=bar\n" = { "foo" = "bar" }
+
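+(* Test: key_value_line
+     A hedged put sketch: replacing the stored value should keep the
+     key and separator untouched in the rendered output *)
+test key_value_line put "foo=bar\n" after
+  set "/foo" "baz" = "foo=baz\n"
+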
+(* View: key_value_line_comment *)
+let key_value_line_comment = Build.key_value_line_comment Rx.word
+ Sep.equal (store Rx.word) Util.comment
+
+(* Test: key_value_line_comment *)
+test key_value_line_comment get "foo=bar # comment\n" =
+ { "foo" = "bar" { "#comment" = "comment" } }
+
+(* View: key_value *)
+let key_value = Build.key_value Rx.word Sep.equal (store Rx.word)
+
+(* Test: key_value *)
+test key_value get "foo=bar" = { "foo" = "bar" }
+
+(* View: key_ws_value *)
+let key_ws_value = Build.key_ws_value Rx.word
+
+(* Test: key_ws_value *)
+test key_ws_value get "foo bar\n" = { "foo" = "bar" }
+
+(* View: flag *)
+let flag = Build.flag Rx.word
+
+(* Test: flag *)
+test flag get "foo" = { "foo" }
+
+(* View: flag_line *)
+let flag_line = Build.flag_line Rx.word
+
+(* Test: flag_line *)
+test flag_line get "foo\n" = { "foo" }
+
+
+(************************************************************************
+ * Group: BLOCK CONSTRUCTIONS
+ ************************************************************************)
+
+(* View: block_entry
+ The block entry used for testing *)
+let block_entry = Build.key_value "test" Sep.equal (store Rx.word)
+
+(* View: block
+ The block used for testing *)
+let block = Build.block block_entry
+
+(* Test: block
+ Simple test for <block> *)
+test block get " {test=1}" =
+ { "test" = "1" }
+
+(* Test: block
+ Simple test for <block> with newlines *)
+test block get " {\n test=1\n}" =
+ { "test" = "1" }
+
+(* Test: block
+ Simple test for <block> two indented entries *)
+test block get " {\n test=1 \n test=2 \n}" =
+ { "test" = "1" }
+ { "test" = "2" }
+
+(* Test: block
+ Test <block> with a comment *)
+test block get " { # This is a comment\n}" =
+ { "#comment" = "This is a comment" }
+
+(* Test: block
+ Test <block> with comments and newlines *)
+test block get " { # This is a comment\n# Another comment\n}" =
+ { "#comment" = "This is a comment" }
+ { "#comment" = "Another comment" }
+
+(* Test: block
+ Test defaults for blocks *)
+test block put " { test=1 }" after
+ set "/#comment" "a comment";
+ rm "/test";
+ set "/test" "2" =
+ " { # a comment\ntest=2 }"
+
+(* View: named_block
+ The named block used for testing *)
+let named_block = Build.named_block "foo" block_entry
+
+(* Test: named_block
+ Simple test for <named_block> *)
+test named_block get "foo {test=1}\n" =
+ { "foo" { "test" = "1" } }
+
+(* View: logrotate_block
+ A minimalistic logrotate block *)
+let logrotate_block =
+ let entry = [ key Rx.word ]
+ in let filename = [ label "file" . store /\/[^,#= \n\t{}]+/ ]
+ in let filename_sep = del /[ \t\n]+/ " "
+ in let filenames = Build.opt_list filename filename_sep
+ in [ label "rule" . filenames . Build.block entry ]
+
+(* Test: logrotate_block *)
+test logrotate_block get "/var/log/wtmp\n/var/log/wtmp2\n{
+ missingok
+ monthly
+}" =
+ { "rule"
+ { "file" = "/var/log/wtmp" }
+ { "file" = "/var/log/wtmp2" }
+ { "missingok" }
+ { "monthly" }
+ }
+
+
+(************************************************************************
+ * Group: COMBINATORICS
+ ************************************************************************)
+
+(* View: combine_two
+ A minimalistic combination lens *)
+let combine_two =
+ let entry (k:string) = [ key k ]
+ in Build.combine_two (entry "a") (entry "b")
+
+(* Test: combine_two
+ Should parse ab *)
+test combine_two get "ab" = { "a" } { "b" }
+
+(* Test: combine_two
+ Should parse ba *)
+test combine_two get "ba" = { "b" } { "a" }
+
+(* Test: combine_two
+ Should not parse a *)
+test combine_two get "a" = *
+
+(* Test: combine_two
+ Should not parse b *)
+test combine_two get "b" = *
+
+(* Test: combine_two
+ Should not parse aa *)
+test combine_two get "aa" = *
+
+(* Test: combine_two
+ Should not parse bb *)
+test combine_two get "bb" = *
+
+
+(* View: combine_two_opt
+ A minimalistic optional combination lens *)
+let combine_two_opt =
+ let entry (k:string) = [ key k ]
+ in Build.combine_two_opt (entry "a") (entry "b")
+
+(* Test: combine_two_opt
+ Should parse ab *)
+test combine_two_opt get "ab" = { "a" } { "b" }
+
+(* Test: combine_two_opt
+ Should parse ba *)
+test combine_two_opt get "ba" = { "b" } { "a" }
+
+(* Test: combine_two_opt
+ Should parse a *)
+test combine_two_opt get "a" = { "a" }
+
+(* Test: combine_two_opt
+ Should parse b *)
+test combine_two_opt get "b" = { "b" }
+
+(* Test: combine_two_opt
+ Should not parse aa *)
+test combine_two_opt get "aa" = *
+
+(* Test: combine_two_opt
+ Should not parse bb *)
+test combine_two_opt get "bb" = *
+
+
+(* View: combine_three
+ A minimalistic combination lens *)
+let combine_three =
+ let entry (k:string) = [ key k ]
+ in Build.combine_three (entry "a") (entry "b") (entry "c")
+
+(* Test: combine_three
+ Should not parse ab *)
+test combine_three get "ab" = *
+
+(* Test: combine_three
+ Should not parse ba *)
+test combine_three get "ba" = *
+
+(* Test: combine_three
+ Should not parse a *)
+test combine_three get "a" = *
+
+(* Test: combine_three
+ Should not parse b *)
+test combine_three get "b" = *
+
+(* Test: combine_three
+ Should not parse aa *)
+test combine_three get "aa" = *
+
+(* Test: combine_three
+ Should not parse bbc *)
+test combine_three get "bbc" = *
+
+(* Test: combine_three
+ Should parse abc *)
+test combine_three get "abc" = { "a" } { "b" } { "c" }
+
+(* Test: combine_three
+ Should parse cab *)
+test combine_three get "cab" = { "c" } { "a" } { "b" }
+
+
+(* View: combine_three_opt
+ A minimalistic optional combination lens *)
+let combine_three_opt =
+ let entry (k:string) = [ key k ]
+ in Build.combine_three_opt (entry "a") (entry "b") (entry "c")
+
+(* Test: combine_three_opt
+ Should parse ab *)
+test combine_three_opt get "ab" = { "a" } { "b" }
+
+(* Test: combine_three_opt
+ Should parse ba *)
+test combine_three_opt get "ba" = { "b" } { "a" }
+
+(* Test: combine_three_opt
+ Should parse a *)
+test combine_three_opt get "a" = { "a" }
+
+(* Test: combine_three_opt
+ Should parse b *)
+test combine_three_opt get "b" = { "b" }
+
+(* Test: combine_three_opt
+ Should not parse aa *)
+test combine_three_opt get "aa" = *
+
+(* Test: combine_three_opt
+ Should not parse bbc *)
+test combine_three_opt get "bbc" = *
+
+(* Test: combine_three_opt
+ Should parse abc *)
+test combine_three_opt get "abc" = { "a" } { "b" } { "c" }
+
+(* Test: combine_three_opt
+ Should parse cab *)
+test combine_three_opt get "cab" = { "c" } { "a" } { "b" }
--- /dev/null
+module Test_Cachefilesd =
+
+ let conf = "
+# I am a comment
+dir /var/cache/fscache
+tAg mycache
+brun 10%
+bcull 7%
+bstop 3%
+frun 10%
+fcull 7%
+fstop 3%
+nocull
+
+secctx system_u:system_r:cachefiles_kernel_t:s0
+"
+ test Cachefilesd.lns get conf =
+ { }
+ { "#comment" = "I am a comment" }
+ { "dir" = "/var/cache/fscache" }
+ { "tAg" = "mycache" }
+ { "brun" = "10%" }
+ { "bcull" = "7%" }
+ { "bstop" = "3%" }
+ { "frun" = "10%" }
+ { "fcull" = "7%" }
+ { "fstop" = "3%" }
+ { "nocull" }
+ { }
+ { "secctx" = "system_u:system_r:cachefiles_kernel_t:s0" }
--- /dev/null
+(*
+Module: Test_Carbon
+ Provides unit tests and examples for the <Carbon> lens.
+*)
+
+module Test_Carbon =
+
+let carbon_conf = "[cache]
+# Configure carbon directories.
+
+# Specify the user to drop privileges to
+# If this is blank carbon runs as the user that invokes it
+# This user must have write access to the local data directory
+USER =
+
+MAX_CACHE_SIZE = inf # comment at EOL
+LINE_RECEIVER_INTERFACE=0.0.0.0
+LINE_RECEIVER_PORT = 2003
+ENABLE_UDP_LISTENER = False
+
+[relay]
+LINE_RECEIVER_INTERFACE = 0.0.0.0
+LINE_RECEIVER_PORT = 2013
+PICKLE_RECEIVER_INTERFACE = 0.0.0.0
+PICKLE_RECEIVER_PORT = 2014
+"
+
+test Carbon.lns get carbon_conf =
+ { "cache"
+ { "#comment" = "Configure carbon directories." }
+ { }
+ { "#comment" = "Specify the user to drop privileges to" }
+ { "#comment" = "If this is blank carbon runs as the user that invokes it" }
+ { "#comment" = "This user must have write access to the local data directory" }
+ { "USER" }
+ { }
+ { "MAX_CACHE_SIZE" = "inf"
+ { "#comment" = "comment at EOL" }
+ }
+ { "LINE_RECEIVER_INTERFACE" = "0.0.0.0" }
+ { "LINE_RECEIVER_PORT" = "2003" }
+ { "ENABLE_UDP_LISTENER" = "False" }
+ { }
+ }
+ { "relay"
+ { "LINE_RECEIVER_INTERFACE" = "0.0.0.0" }
+ { "LINE_RECEIVER_PORT" = "2013" }
+ { "PICKLE_RECEIVER_INTERFACE" = "0.0.0.0" }
+ { "PICKLE_RECEIVER_PORT" = "2014" }
+ }
+
+let relay_rules_conf = "# You must have exactly one section with 'default = true'
+# Note that all destinations listed must also exist in carbon.conf
+# in the DESTINATIONS setting in the [relay] section
+[default]
+default = true
+destinations = 127.0.0.1:2004:a, 127.0.0.1:2104:b
+"
+
+test Carbon.lns get relay_rules_conf =
+ { "#comment" = "You must have exactly one section with 'default = true'" }
+ { "#comment" = "Note that all destinations listed must also exist in carbon.conf" }
+ { "#comment" = "in the DESTINATIONS setting in the [relay] section" }
+ { "default"
+ { "default" = "true" }
+ { "destinations" = "127.0.0.1:2004:a, 127.0.0.1:2104:b" }
+ }
+
+let storage_aggregation_conf = "# Aggregation methods for whisper files. Entries are scanned in order,
+# and first match wins. This file is scanned for changes every 60 seconds
+[max]
+pattern = \.max$
+xFilesFactor = 0.1
+aggregationMethod = max
+"
+
+test Carbon.lns get storage_aggregation_conf =
+ { "#comment" = "Aggregation methods for whisper files. Entries are scanned in order," }
+ { "#comment" = "and first match wins. This file is scanned for changes every 60 seconds" }
+ { "max"
+ { "pattern" = "\.max$" }
+ { "xFilesFactor" = "0.1" }
+ { "aggregationMethod" = "max" }
+ }
--- /dev/null
+
+module Test_ceph =
+
+ let ceph_simple = "[global]
+### http://ceph.com/docs/master/rados/configuration/general-config-ref/
+
+ fsid = b4b2e571-fbbf-4ff3-a9f8-ab80f08b7fe6 # use `uuidgen` to generate your own UUID
+ public network = 192.168.0.0/24
+ cluster network = 192.168.0.0/24
+"
+
+ let ceph_extended = "##
+# Sample ceph ceph.conf file.
+##
+# This file defines cluster membership, the various locations
+# that Ceph stores data, and any other runtime options.
+
+[global]
+ ; Start-line comment
+ log file = /var/log/ceph/$cluster-$name.log ; End-line comment
+
+ [mon.alpha]
+ host = alpha
+ mon addr = 192.168.0.10:6789
+
+ [mon.beta]
+ host = beta
+ mon addr = 192.168.0.11:6789
+
+[mon.gamma]
+ host = gamma
+ mon addr = 192.168.0.12:6789
+
+
+[mon]
+mon initial members = mycephhost
+
+mon host = cephhost01,cephhost02
+ mon addr = 192.168.0.101,192.168.0.102
+
+mon osd nearfull ratio = .85
+
+ # logging, for debugging monitor crashes, in order of
+ # their likelihood of being helpful :)
+ debug ms = 1
+ debug mon = 20
+ debug paxos = 20
+ debug auth = 20
+
+[mds]
+
+ osd max backfills = 5
+
+ #osd mkfs type = {fs-type}
+ #osd mkfs options {fs-type} = {mkfs options} # default for xfs is \"-f\"
+ #osd mount options {fs-type} = {mount options} # default mount option is \"rw, noatime\"
+ osd mkfs type = btrfs
+ osd mount options btrfs = noatime,nodiratime
+ journal dio = false
+
+ debug ms = 1
+ debug osd = 20
+ debug filestore = 20
+ debug journal = 20
+
+ filestore merge threshold = 10
+
+ osd crush update on start = false
+
+[osd.0]
+ host = delta
+
+[osd.1]
+ host = epsilon
+
+[osd.2]
+ host = zeta
+
+[osd.3]
+ host = eta
+
+ rgw dns name = radosgw.ceph.internal
+"
+
+ test Ceph.lns get ceph_simple =
+ { "global"
+ { "#comment" = "## http://ceph.com/docs/master/rados/configuration/general-config-ref/" }
+ { }
+ { "fsid" = "b4b2e571-fbbf-4ff3-a9f8-ab80f08b7fe6"
+ { "#comment" = "use `uuidgen` to generate your own UUID" }
+ }
+ { "public network" = "192.168.0.0/24" }
+ { "cluster network" = "192.168.0.0/24" }
+ }
+
+ test Ceph.lns get ceph_extended =
+ { "#comment" = "#" }
+ { "#comment" = "Sample ceph ceph.conf file." }
+ { "#comment" = "#" }
+ { "#comment" = "This file defines cluster membership, the various locations" }
+ { "#comment" = "that Ceph stores data, and any other runtime options." }
+ { }
+ { "global"
+ { "#comment" = "Start-line comment" }
+ { "log file" = "/var/log/ceph/$cluster-$name.log"
+ { "#comment" = "End-line comment" }
+ }
+ { }
+ }
+ { "mon.alpha"
+ { "host" = "alpha" }
+ { "mon addr" = "192.168.0.10:6789" }
+ { }
+ }
+ { "mon.beta"
+ { "host" = "beta" }
+ { "mon addr" = "192.168.0.11:6789" }
+ { }
+ }
+ { "mon.gamma"
+ { "host" = "gamma" }
+ { "mon addr" = "192.168.0.12:6789" }
+ { }
+ { }
+ }
+ { "mon"
+ { "mon initial members" = "mycephhost" }
+ { }
+ { "mon host" = "cephhost01,cephhost02" }
+ { "mon addr" = "192.168.0.101,192.168.0.102" }
+ { }
+ { "mon osd nearfull ratio" = ".85" }
+ { }
+ { "#comment" = "logging, for debugging monitor crashes, in order of" }
+ { "#comment" = "their likelihood of being helpful :)" }
+ { "debug ms" = "1" }
+ { "debug mon" = "20" }
+ { "debug paxos" = "20" }
+ { "debug auth" = "20" }
+ { }
+ }
+ { "mds"
+ { }
+ { "osd max backfills" = "5" }
+ { }
+ { "#comment" = "osd mkfs type = {fs-type}" }
+ { "#comment" = "osd mkfs options {fs-type} = {mkfs options} # default for xfs is \"-f\"" }
+ { "#comment" = "osd mount options {fs-type} = {mount options} # default mount option is \"rw, noatime\"" }
+ { "osd mkfs type" = "btrfs" }
+ { "osd mount options btrfs" = "noatime,nodiratime" }
+ { "journal dio" = "false" }
+ { }
+ { "debug ms" = "1" }
+ { "debug osd" = "20" }
+ { "debug filestore" = "20" }
+ { "debug journal" = "20" }
+ { }
+ { "filestore merge threshold" = "10" }
+ { }
+ { "osd crush update on start" = "false" }
+ { }
+ }
+ { "osd.0"
+ { "host" = "delta" }
+ { }
+ }
+ { "osd.1"
+ { "host" = "epsilon" }
+ { }
+ }
+ { "osd.2"
+ { "host" = "zeta" }
+ { }
+ }
+ { "osd.3"
+ { "host" = "eta" }
+ { }
+ { "rgw dns name" = "radosgw.ceph.internal" }
+ }
+
--- /dev/null
+module Test_cgconfig =
+
+let conf="#cgconfig test cofiguration file
+mount { 123 = 456; 456 = 789;}
+"
+
+test Cgconfig.lns get conf =
+ { "#comment" = "cgconfig test cofiguration file" }
+ { "mount"
+ { "123" = "456" }
+ { "456" = "789" } }
+ {}
+
+(* whitespace before the mount keyword *)
+let conf2="
+ mount { 123 = 456;}
+ mount { 123 = 456;}
+
+mount { 123 = 456;}mount { 123 = 456;}
+"
+
+test Cgconfig.lns get conf2 =
+ { }
+ { "mount" { "123" = "456"} }
+ { }
+ { "mount" { "123" = "456"} }
+ { }
+ { }
+ { "mount" { "123" = "456"} }
+ { "mount" { "123" = "456" } }
+ { }
+
+let conf3="#cgconfig test cofiguration file
+mount { 123 = 456;
+#eswkh
+ 456 = 789;}
+"
+test Cgconfig.lns get conf3 =
+ { "#comment" = "cgconfig test cofiguration file" }
+ { "mount"
+ { "123" = "456" }
+ {}
+ { "#comment" = "eswkh" }
+ { "456" = "789" } }
+ {}
+
+let conf4="#cgconfig test cofiguration file
+mount {
+123 = 456;1245=456;
+}
+mount { 323=324;}mount{324=5343; }# this is a comment
+"
+
+test Cgconfig.lns get conf4 =
+ {"#comment" = "cgconfig test cofiguration file" }
+ {"mount"
+ { }
+ { "123" = "456"}
+ { "1245" = "456" }
+ { }}
+ { }
+ { "mount" { "323" = "324" } }
+ { "mount" { "324" = "5343" } }
+ { "#comment" = "this is a comment" }
+
+let group1="
+group user {
+ cpuacct {
+ lll = jjj;
+ }
+ cpu {
+ }
+}"
+
+test Cgconfig.lns get group1 =
+ { }
+ { "group" = "user"
+ { }
+ { "controller" = "cpuacct"
+ { }
+ { "lll" = "jjj" }
+ { } }
+ { }
+ { "controller" = "cpu" { } }
+ { } }
+
+let group2="
+group aa-1{
+ perm {
+ task { }
+ admin { }
+ }
+}"
+
+test Cgconfig.lns get group2 =
+ { }
+ { "group" = "aa-1"
+ { }
+ { "perm"
+ { }
+ { "task" }
+ { }
+ { "admin" }
+ { } }
+ { } }
+
+
+let group3 ="
+group xx/www {
+ perm {
+ task {
+ gid = root;
+ uid = root;
+ }
+ admin {
+ gid = aaa;
+# no aaa
+ uid = aaa;
+ }
+}
+}
+"
+
+test Cgconfig.lns get group3 =
+ { }
+ { "group" = "xx/www"
+ { }
+ { "perm"
+ { }
+ { "task"
+ { }
+ { "gid" = "root" }
+ { }
+ { "uid" = "root" }
+ { } }
+ { }
+ { "admin"
+ { }
+ { "gid" = "aaa" }
+ { }
+ { "#comment" = "no aaa" }
+ { "uid" = "aaa" }
+ { } }
+ { } }
+ { } }
+ { }
+
+let group4 ="
+#group daemons {
+# cpuacct{
+# }
+#}
+
+group daemons/ftp {
+ cpuacct{
+ }
+}
+
+ group daemons/www {
+ perm {
+ task {
+ uid = root;
+ gid = root;
+ }
+ admin {
+ uid = root;
+ gid = root;
+ }
+ }
+# cpu {
+# cpu.shares = 1000;
+# }
+}
+#
+#
+
+ mount {
+ devices = /mnt/cgroups/devices;cpuacct = /mnt/cgroups/cpuset;
+ cpuset = /mnt/cgroups/cpuset;
+
+
+ cpu = /mnt/cpu;
+# cpuset = /mnt/cgroups/cpuset2;
+}
+mount {
+devices = /mnt/cgroups/devices;
+# cpuacct = /mnt/cgroups/cpuacct;
+ ns = /mnt/cgroups/ns;
+#
+}
+
+"
+
+test Cgconfig.lns get group4 =
+ { }
+ { "#comment" = "group daemons {" }
+ { "#comment" = "cpuacct{" }
+ { "#comment" = "}" }
+ { "#comment" = "}" }
+ { }
+ { "group" = "daemons/ftp"
+ { }
+ { "controller" = "cpuacct" { } }
+ { } }
+ { }
+ { }
+ { "group" = "daemons/www"
+ { }
+ { "perm"
+ { }
+ { "task"
+ { }
+ { "uid" = "root" }
+ { }
+ { "gid" = "root" }
+ { } }
+ { }
+ { "admin"
+ { }
+ { "uid" = "root" }
+ { }
+ { "gid" = "root" }
+ { } }
+ { } }
+ { }
+ { "#comment" = "cpu {" }
+ { "#comment" = "cpu.shares = 1000;" }
+ { "#comment" = "}" } }
+ { }
+ { }
+ { }
+ { }
+ { "mount"
+ { }
+ { "devices" = "/mnt/cgroups/devices" }
+ { "cpuacct" = "/mnt/cgroups/cpuset" }
+ { }
+ { "cpuset" = "/mnt/cgroups/cpuset" }
+ { }
+ { }
+ { }
+ { "cpu" = "/mnt/cpu" }
+ { }
+ { "#comment" = "cpuset = /mnt/cgroups/cpuset2;" } }
+ { }
+ { "mount"
+ { }
+ { "devices" = "/mnt/cgroups/devices" }
+ { }
+ { "#comment" = "cpuacct = /mnt/cgroups/cpuacct;" }
+ { "ns" = "/mnt/cgroups/ns" }
+ { }
+ { } }
+ { }
+ { }
+
+test Cgconfig.lns put "group tst {memory {}}" after
+ set "/group" "tst2"
+= "group tst2 {memory {}}"
+
+let group5="
+group user {
+ cpuacct {}
+ cpu {}
+ cpuset {}
+ devices {}
+ freezer {}
+ memory {}
+ net_cls {}
+ blkio {}
+ hugetlb {}
+ perf_event {}
+}"
+
+test Cgconfig.lns get group5 =
+ { }
+ { "group" = "user"
+ { }
+ { "controller" = "cpuacct" }
+ { }
+ { "controller" = "cpu" }
+ { }
+ { "controller" = "cpuset" }
+ { }
+ { "controller" = "devices" }
+ { }
+ { "controller" = "freezer" }
+ { }
+ { "controller" = "memory" }
+ { }
+ { "controller" = "net_cls" }
+ { }
+ { "controller" = "blkio" }
+ { }
+ { "controller" = "hugetlb" }
+ { }
+ { "controller" = "perf_event" }
+ { }
+ }
+
+(* quoted controller parameter value containing whitespace *)
+let group6="
+group blklimit {
+ blkio {
+ blkio.throttle.read_iops_device=\"8:0 50\";
+ }
+}"
+
+test Cgconfig.lns get group6 =
+ { }
+ { "group" = "blklimit"
+ { }
+ { "controller" = "blkio"
+ { }
+ { "blkio.throttle.read_iops_device" = "\"8:0 50\"" }
+ { }
+ }
+ { }
+ }
+
+let group7 ="
+group daemons/www {
+ perm {
+ task {
+ uid = root;
+ gid = root;
+ fperm = 770;
+ }
+ admin {
+ uid = root;
+ gid = root;
+ dperm = 777;
+ }
+ }
+}
+"
+
+test Cgconfig.lns get group7 =
+ { }
+ { "group" = "daemons/www"
+ { }
+ { "perm"
+ { }
+ { "task"
+ { }
+ { "uid" = "root" }
+ { }
+ { "gid" = "root" }
+ { }
+ { "fperm" = "770" }
+ { } }
+ { }
+ { "admin"
+ { }
+ { "uid" = "root" }
+ { }
+ { "gid" = "root" }
+ { }
+ { "dperm" = "777" }
+ { } }
+ { } }
+ { }
+ }
+ { }
+
--- /dev/null
+module Test_cgrules =
+
+let conf="#cgrules test configuration file
+poooeter cpu test1/
+% memory test2/
+@somegroup cpu toto/
+% devices toto1/
+% memory toto3/
+"
+test Cgrules.lns get conf =
+ { "#comment" = "cgrules test configuration file" }
+ { "user" = "poooeter"
+ { "cpu" = "test1/" }
+ { "memory" = "test2/" } }
+ { "group" = "somegroup"
+ { "cpu" = "toto/" }
+ { "devices" = "toto1/" }
+ { "memory" = "toto3/" } }
+
+test Cgrules.lns put conf after
+ set "user/cpu" "test3/";
+ rm "user/memory";
+ rm "group";
+ insa "devices" "user/*[last()]";
+ set "user/devices" "newtest/";
+ insb "memory" "user/devices";
+ set "user/memory" "memtest/"
+= "#cgrules test configuration file
+poooeter cpu test3/
+% memory memtest/
+% devices newtest/
+"
--- /dev/null
+(*
+Module: Test_Channels
+ Provides unit tests and examples for the <Channels> lens.
+*)
+
+module Test_Channels =
+
+(* Variable: conf
+ A full configuration file *)
+let conf = "Direct 8 TV;SES ASTRA:12551:VC56M2O0S0:S19.2E:22000:1111=2:1112=fra@3:1116:0:12174:1:1108:0
+:FAVORIS
+Direct 8 TV;SES ASTRA:12551:VC56M2O0S0:S19.2E:22000:1111=2:1112=fra@3:1116:0:12175:1:1108:0
+TF1;CSAT:11895:VC34M2O0S0:S19.2E:27500:171=2:124=fra+spa@4,125=eng@4;126=deu@4:53:500,1811,1863,100:8371:1:1074:0
+:TNT
+TF1;SMR6:690167:I999B8C999D999M998T999G999Y0:T:27500:120=2:130=fra@3,131=eng@3,133=qad@3:140;150=fra,151=eng:0:1537:8442:6:0
+; this is a comment
+France 5;GR1:618167:I999B8C999D999M998T999G999Y0:T:27500:374+320=2:330=fra@3,331=qad@3:0;340=fra:0:260:8442:1:0
+CANAL+ FAMILY HD:12012:VC23M5O35S1:S19.2E:27500:164=27:0;98=@106,99=eng@106:0;45=fra+fra:1811,500,1863,100,9C4,9C7,9AF:8825:1:1080:0
+"
+
+(* Test: Channels.lns
+ Test the full <conf> *)
+test Channels.lns get conf =
+ { "entry" = "Direct 8 TV"
+ { "provider" = "SES ASTRA" }
+ { "frequency" = "12551" }
+ { "parameter" = "VC56M2O0S0" }
+ { "signal_source" = "S19.2E" }
+ { "symbol_rate" = "22000" }
+ { "vpid" = "1111" { "codec" = "2" } }
+ { "apid" = "1112" { "lang" = "fra" } { "codec" = "3" } }
+ { "tpid" = "1116" }
+ { "caid" = "0" }
+ { "sid" = "12174" }
+ { "nid" = "1" }
+ { "tid" = "1108" }
+ { "rid" = "0" }
+ }
+ { "group" = "FAVORIS"
+ { "entry" = "Direct 8 TV"
+ { "provider" = "SES ASTRA" }
+ { "frequency" = "12551" }
+ { "parameter" = "VC56M2O0S0" }
+ { "signal_source" = "S19.2E" }
+ { "symbol_rate" = "22000" }
+ { "vpid" = "1111" { "codec" = "2" } }
+ { "apid" = "1112" { "lang" = "fra" } { "codec" = "3" } }
+ { "tpid" = "1116" }
+ { "caid" = "0" }
+ { "sid" = "12175" }
+ { "nid" = "1" }
+ { "tid" = "1108" }
+ { "rid" = "0" }
+ }
+ { "entry" = "TF1"
+ { "provider" = "CSAT" }
+ { "frequency" = "11895" }
+ { "parameter" = "VC34M2O0S0" }
+ { "signal_source" = "S19.2E" }
+ { "symbol_rate" = "27500" }
+ { "vpid" = "171" { "codec" = "2" } }
+ { "apid" = "124" { "lang" = "fra" } { "lang" = "spa" } { "codec" = "4" } }
+ { "apid" = "125" { "lang" = "eng" } { "codec" = "4" } }
+ { "apid_dolby" = "126" { "lang" = "deu" } { "codec" = "4" } }
+ { "tpid" = "53" }
+ { "caid" = "500" }
+ { "caid" = "1811" }
+ { "caid" = "1863" }
+ { "caid" = "100" }
+ { "sid" = "8371" }
+ { "nid" = "1" }
+ { "tid" = "1074" }
+ { "rid" = "0" }
+ }
+ }
+ { "group" = "TNT"
+ { "entry" = "TF1"
+ { "provider" = "SMR6" }
+ { "frequency" = "690167" }
+ { "parameter" = "I999B8C999D999M998T999G999Y0" }
+ { "signal_source" = "T" }
+ { "symbol_rate" = "27500" }
+ { "vpid" = "120" { "codec" = "2" } }
+ { "apid" = "130" { "lang" = "fra" } { "codec" = "3" } }
+ { "apid" = "131" { "lang" = "eng" } { "codec" = "3" } }
+ { "apid" = "133" { "lang" = "qad" } { "codec" = "3" } }
+ { "tpid" = "140" }
+ { "tpid_bylang" = "150" { "lang" = "fra" } }
+ { "tpid_bylang" = "151" { "lang" = "eng" } }
+ { "caid" = "0" }
+ { "sid" = "1537" }
+ { "nid" = "8442" }
+ { "tid" = "6" }
+ { "rid" = "0" }
+ }
+ { "#comment" = "this is a comment" }
+ { "entry" = "France 5"
+ { "provider" = "GR1" }
+ { "frequency" = "618167" }
+ { "parameter" = "I999B8C999D999M998T999G999Y0" }
+ { "signal_source" = "T" }
+ { "symbol_rate" = "27500" }
+ { "vpid" = "374" }
+ { "vpid_pcr" = "320" { "codec" = "2" } }
+ { "apid" = "330" { "lang" = "fra" } { "codec" = "3" } }
+ { "apid" = "331" { "lang" = "qad" } { "codec" = "3" } }
+ { "tpid" = "0" }
+ { "tpid_bylang" = "340" { "lang" = "fra" } }
+ { "caid" = "0" }
+ { "sid" = "260" }
+ { "nid" = "8442" }
+ { "tid" = "1" }
+ { "rid" = "0" }
+ }
+ { "entry" = "CANAL+ FAMILY HD"
+ { "frequency" = "12012" }
+ { "parameter" = "VC23M5O35S1" }
+ { "signal_source" = "S19.2E" }
+ { "symbol_rate" = "27500" }
+ { "vpid" = "164" { "codec" = "27" } }
+ { "apid" = "0" }
+ { "apid_dolby" = "98" { "codec" = "106" } }
+ { "apid_dolby" = "99" { "lang" = "eng" } { "codec" = "106" } }
+ { "tpid" = "0" }
+ { "tpid_bylang" = "45" { "lang" = "fra" } { "lang" = "fra" } }
+ { "caid" = "1811" }
+ { "caid" = "500" }
+ { "caid" = "1863" }
+ { "caid" = "100" }
+ { "caid" = "9C4" }
+ { "caid" = "9C7" }
+ { "caid" = "9AF" }
+ { "sid" = "8825" }
+ { "nid" = "1" }
+ { "tid" = "1080" }
+ { "rid" = "0" }
+ }
+ }
--- /dev/null
+(*
+Module: Test_Chrony
+ Provides unit tests and examples for the <Chrony> lens.
+*)
+
+module Test_Chrony =
+
+ let exampleconf = "# Comment
+#Comment
+! Comment
+!Comment
+; Comment
+;Comment
+% Comment
+%Comment
+
+server ntp1.example.com
+server ntp2.example.com iburst
+server ntp3.example.com presend 2
+server ntp4.example.com offline polltarget 4 extfield F323 copy
+server ntp5.example.com maxdelay 2 offline certset 1
+server ntp6.example.com maxdelay 2 iburst presend 2 xleave offset 1e-4
+server ntp7.example.com iburst presend 2 offline prefer trust require
+server ntp8.example.com minsamples 8 maxsamples 16 version 3
+server ntp9.example.com burst mindelay 0.1 asymmetry 0.5 nts filter 3
+peer ntpc1.example.com
+pool pool1.example.com iburst maxsources 3
+allow
+deny all
+cmdallow 192.168.1.0/24
+cmddeny all 192.168.2.0/24
+stratumweight 0
+ driftfile /var/lib/chrony/drift
+ rtcsync
+makestep 10 -1
+bindcmdaddress 127.0.0.1
+bindcmdaddress ::1
+bindacqdevice eth0
+bindcmddevice eth0
+binddevice eth0
+clockprecision 10e-9
+local
+local stratum 10
+local distance 1.0 orphan
+keyfile /etc/chrony.keys
+commandkey 1
+generatecommandkey
+manual
+noclientlog
+logchange 0.5
+logdir /var/log/chrony
+log rtc measurements rawmeasurements statistics tracking refclocks tempcomp
+leapsectz right/UTC
+broadcast 10 192.168.1.255
+broadcast 10 192.168.100.255 123
+fallbackdrift 16 19
+mailonchange root@localhost 0.5
+maxchange 1000 1 2
+maxdistance 1.0
+maxdrift 100
+hwtimestamp eth0 minpoll -2 txcomp 300e-9 rxcomp 645e-9 nocrossts rxfilter all
+hwtimestamp eth1 minsamples 10 maxsamples 20
+initstepslew 30 foo.bar.com
+initstepslew 30 foo.bar.com baz.quz.com
+ratelimit interval 4 burst 16 leak 2
+cmdratelimit
+ntsratelimit
+refclock SHM 0 refid SHM0 delay 0.1 offset 0.2 noselect tai stratum 3
+refclock SOCK /var/run/chrony-GPS.sock pps width 0.1
+refclock PPS /dev/pps0 dpoll 2 poll 3 lock SHM0 rate 5 minsamples 8
+smoothtime 400 0.001 leaponly
+tempcomp /sys/class/hwmon/hwmon0/temp2_input 30 26000 0.0 0.000183 0.0
+tempcomp /sys/class/hwmon/hwmon0/temp2_input 30 /etc/chrony.tempcomp
+ntpsigndsocket /var/lib/samba/ntp_signd
+confdir /etc/chrony.d /usr/lib/chrony.d
+sourcedir /etc/chrony.d /var/run/chrony.d
+authselectmode require
+dscp 46
+maxntsconnections 10
+nocerttimecheck 1
+nosystemcert
+ntsservercert /etc/chrony/server.crt
+ntsserverkey /etc/chrony/server.key
+ntstrustedcerts /etc/chrony/trusted.crt
+ntsdumpdir /var/lib/chrony
+ntsntpserver foo.example.com
+ntsport 123
+ntsprocesses 2
+ntsrefresh 86400
+ntsrotate 86400
+ptpport 319
+"
+
+ test Chrony.lns get exampleconf =
+ { "#comment" = "Comment" }
+ { "#comment" = "Comment" }
+ { "#comment" = "Comment" }
+ { "#comment" = "Comment" }
+ { "#comment" = "Comment" }
+ { "#comment" = "Comment" }
+ { "#comment" = "Comment" }
+ { "#comment" = "Comment" }
+ { }
+ { "server" = "ntp1.example.com" }
+ { "server" = "ntp2.example.com"
+ { "iburst" }
+ }
+ { "server" = "ntp3.example.com"
+ { "presend" = "2" }
+ }
+ { "server" = "ntp4.example.com"
+ { "offline" }
+ { "polltarget" = "4" }
+ { "extfield" = "F323" }
+ { "copy" }
+ }
+ { "server" = "ntp5.example.com"
+ { "maxdelay" = "2" }
+ { "offline" }
+ { "certset" = "1" }
+ }
+ { "server" = "ntp6.example.com"
+ { "maxdelay" = "2" }
+ { "iburst" }
+ { "presend" = "2" }
+ { "xleave" }
+ { "offset" = "1e-4" }
+ }
+ { "server" = "ntp7.example.com"
+ { "iburst" }
+ { "presend" = "2" }
+ { "offline" }
+ { "prefer" }
+ { "trust" }
+ { "require" }
+ }
+ { "server" = "ntp8.example.com"
+ { "minsamples" = "8" }
+ { "maxsamples" = "16" }
+ { "version" = "3" }
+ }
+ { "server" = "ntp9.example.com"
+ { "burst" }
+ { "mindelay" = "0.1" }
+ { "asymmetry" = "0.5" }
+ { "nts" }
+ { "filter" = "3" }
+ }
+ { "peer" = "ntpc1.example.com" }
+ { "pool" = "pool1.example.com"
+ { "iburst" }
+ { "maxsources" = "3" }
+ }
+ { "allow" }
+ { "deny"
+ { "all" }
+ }
+ { "cmdallow" = "192.168.1.0/24" }
+ { "cmddeny" = "192.168.2.0/24"
+ { "all" }
+ }
+ { "stratumweight" = "0" }
+ { "driftfile" = "/var/lib/chrony/drift" }
+ { "rtcsync" }
+ { "makestep"
+ { "threshold" = "10" }
+ { "limit" = "-1" }
+ }
+ { "bindcmdaddress" = "127.0.0.1" }
+ { "bindcmdaddress" = "::1" }
+ { "bindacqdevice" = "eth0" }
+ { "bindcmddevice" = "eth0" }
+ { "binddevice" = "eth0" }
+ { "clockprecision" = "10e-9" }
+ { "local" }
+ { "local"
+ { "stratum" = "10" }
+ }
+ { "local"
+ { "distance" = "1.0" }
+ { "orphan" }
+ }
+ { "keyfile" = "/etc/chrony.keys" }
+ { "commandkey" = "1" }
+ { "generatecommandkey" }
+ { "manual" }
+ { "noclientlog" }
+ { "logchange" = "0.5" }
+ { "logdir" = "/var/log/chrony" }
+ { "log"
+ { "rtc" }
+ { "measurements" }
+ { "rawmeasurements" }
+ { "statistics" }
+ { "tracking" }
+ { "refclocks" }
+ { "tempcomp" }
+ }
+ { "leapsectz" = "right/UTC" }
+ { "broadcast"
+ { "interval" = "10" }
+ { "address" = "192.168.1.255" }
+ }
+ { "broadcast"
+ { "interval" = "10" }
+ { "address" = "192.168.100.255" }
+ { "port" = "123" }
+ }
+ { "fallbackdrift"
+ { "min" = "16" }
+ { "max" = "19" }
+ }
+ { "mailonchange"
+ { "emailaddress" = "root@localhost" }
+ { "threshold" = "0.5" }
+ }
+ { "maxchange"
+ { "threshold" = "1000" }
+ { "delay" = "1" }
+ { "limit" = "2" }
+ }
+ { "maxdistance" = "1.0" }
+ { "maxdrift" = "100" }
+ { "hwtimestamp"
+ { "interface" = "eth0" }
+ { "minpoll" = "-2" }
+ { "txcomp" = "300e-9" }
+ { "rxcomp" = "645e-9" }
+ { "nocrossts" }
+ { "rxfilter" = "all" }
+ }
+ { "hwtimestamp"
+ { "interface" = "eth1" }
+ { "minsamples" = "10" }
+ { "maxsamples" = "20" }
+ }
+ { "initstepslew"
+ { "threshold" = "30" }
+ { "address" = "foo.bar.com" }
+ }
+ { "initstepslew"
+ { "threshold" = "30" }
+ { "address" = "foo.bar.com" }
+ { "address" = "baz.quz.com" }
+ }
+ { "ratelimit"
+ { "interval" = "4" }
+ { "burst" = "16" }
+ { "leak" = "2" }
+ }
+ { "cmdratelimit" }
+ { "ntsratelimit" }
+ { "refclock"
+ { "driver" = "SHM" }
+ { "parameter" = "0" }
+ { "refid" = "SHM0" }
+ { "delay" = "0.1" }
+ { "offset" = "0.2" }
+ { "noselect" }
+ { "tai" }
+ { "stratum" = "3" }
+ }
+ { "refclock"
+ { "driver" = "SOCK" }
+ { "parameter" = "/var/run/chrony-GPS.sock" }
+ { "pps" }
+ { "width" = "0.1" }
+ }
+ { "refclock"
+ { "driver" = "PPS" }
+ { "parameter" = "/dev/pps0" }
+ { "dpoll" = "2" }
+ { "poll" = "3" }
+ { "lock" = "SHM0" }
+ { "rate" = "5" }
+ { "minsamples" = "8" }
+ }
+ { "smoothtime"
+ { "maxfreq" = "400" }
+ { "maxwander" = "0.001" }
+ { "leaponly" }
+ }
+ { "tempcomp"
+ { "sensorfile" = "/sys/class/hwmon/hwmon0/temp2_input" }
+ { "interval" = "30" }
+ { "t0" = "26000" }
+ { "k0" = "0.0" }
+ { "k1" = "0.000183" }
+ { "k2" = "0.0" }
+ }
+ { "tempcomp"
+ { "sensorfile" = "/sys/class/hwmon/hwmon0/temp2_input" }
+ { "interval" = "30" }
+ { "pointfile" = "/etc/chrony.tempcomp" }
+ }
+ { "ntpsigndsocket" = "/var/lib/samba/ntp_signd" }
+ { "confdir"
+ { "directory" = "/etc/chrony.d" }
+ { "directory" = "/usr/lib/chrony.d" }
+ }
+ { "sourcedir"
+ { "directory" = "/etc/chrony.d" }
+ { "directory" = "/var/run/chrony.d" }
+ }
+ { "authselectmode" = "require" }
+ { "dscp" = "46" }
+ { "maxntsconnections" = "10" }
+ { "nocerttimecheck" = "1" }
+ { "nosystemcert" }
+ { "ntsservercert" = "/etc/chrony/server.crt" }
+ { "ntsserverkey" = "/etc/chrony/server.key" }
+ { "ntstrustedcerts" = "/etc/chrony/trusted.crt" }
+ { "ntsdumpdir" = "/var/lib/chrony" }
+ { "ntsntpserver" = "foo.example.com" }
+ { "ntsport" = "123" }
+ { "ntsprocesses" = "2" }
+ { "ntsrefresh" = "86400" }
+ { "ntsrotate" = "86400" }
+ { "ptpport" = "319" }
+
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+module Test_ClamAV =
+
+let clamd_conf ="##
+## Example config file for the Clam AV daemon
+## Please read the clamd.conf(5) manual before editing this file.
+##
+
+# Comment or remove the line below.
+Example
+# LogFile must be writable for the user running daemon.
+LogFile /var/log/clamav/clamd.log
+LogFileUnlock yes
+LogFileMaxSize 0
+LogTime yes
+LogClean yes
+LogSyslog yes
+LogFacility LOG_MAIL
+LogVerbose yes
+LogRotate yes
+ExtendedDetectionInfo yes
+PidFile /var/run/clamav/clamd.pid
+TemporaryDirectory /var/tmp
+DatabaseDirectory /var/lib/clamav
+OfficialDatabaseOnly no
+LocalSocket /var/run/clamav/clamd.sock
+LocalSocketGroup virusgroup
+LocalSocketMode 660
+FixStaleSocket yes
+TCPSocket 3310
+TCPAddr 127.0.0.1
+MaxConnectionQueueLength 30
+StreamMaxLength 10M
+StreamMinPort 30000
+StreamMaxPort 32000
+MaxThreads 50
+ReadTimeout 300
+CommandReadTimeout 5
+SendBufTimeout 200
+MaxQueue 200
+IdleTimeout 60
+ExcludePath ^/proc/
+ExcludePath ^/sys/
+MaxDirectoryRecursion 20
+FollowDirectorySymlinks yes
+FollowFileSymlinks yes
+CrossFilesystems yes
+SelfCheck 600
+VirusEvent /usr/local/bin/send_sms 123456789 \"VIRUS ALERT: %v\"
+User clam
+AllowSupplementaryGroups yes
+ExitOnOOM yes
+Foreground yes
+Debug yes
+LeaveTemporaryFiles yes
+AllowAllMatchScan no
+DetectPUA yes
+ExcludePUA NetTool
+ExcludePUA PWTool
+IncludePUA Spy
+IncludePUA Scanner
+IncludePUA RAT
+AlgorithmicDetection yes
+ForceToDisk yes
+DisableCache yes
+ScanPE yes
+DisableCertCheck yes
+ScanELF yes
+DetectBrokenExecutables yes
+ScanOLE2 yes
+OLE2BlockMacros no
+ScanPDF yes
+ScanSWF yes
+ScanMail yes
+ScanPartialMessages yes
+PhishingSignatures yes
+PhishingScanURLs yes
+PhishingAlwaysBlockSSLMismatch no
+PhishingAlwaysBlockCloak no
+PartitionIntersection no
+HeuristicScanPrecedence yes
+StructuredDataDetection yes
+StructuredMinCreditCardCount 5
+StructuredMinSSNCount 5
+StructuredSSNFormatNormal yes
+StructuredSSNFormatStripped yes
+ScanHTML yes
+ScanArchive yes
+ArchiveBlockEncrypted no
+MaxScanSize 150M
+MaxFileSize 30M
+MaxRecursion 10
+MaxFiles 15000
+MaxEmbeddedPE 10M
+MaxHTMLNormalize 10M
+"
+
+let freshclam_conf = "##
+## Example config file for freshclam
+## Please read the freshclam.conf(5) manual before editing this file.
+##
+
+
+# Comment or remove the line below.
+Example
+
+DatabaseDirectory /var/lib/clamav
+UpdateLogFile /var/log/clamav/freshclam.log
+LogFileMaxSize 2M
+LogTime yes
+LogVerbose yes
+LogSyslog yes
+LogFacility LOG_MAIL
+LogRotate yes
+PidFile /var/run/freshclam.pid
+DatabaseOwner clam
+AllowSupplementaryGroups yes
+DNSDatabaseInfo current.cvd.clamav.net
+DatabaseMirror db.XY.clamav.net
+DatabaseMirror database.clamav.net
+MaxAttempts 5
+ScriptedUpdates yes
+CompressLocalDatabase no
+DatabaseCustomURL http://myserver.com/mysigs.ndb
+DatabaseCustomURL file:///mnt/nfs/local.hdb
+PrivateMirror mirror1.mynetwork.com
+PrivateMirror mirror2.mynetwork.com
+Checks 24
+HTTPProxyServer myproxy.com
+HTTPProxyPort 1234
+HTTPProxyUsername myusername
+HTTPProxyPassword mypass
+HTTPUserAgent SomeUserAgentIdString
+LocalIPAddress aaa.bbb.ccc.ddd
+NotifyClamd /etc/clamd.conf
+OnUpdateExecute command
+OnErrorExecute command
+OnOutdatedExecute command
+Foreground yes
+Debug yes
+ConnectTimeout 60
+ReceiveTimeout 60
+TestDatabases yes
+SubmitDetectionStats /etc/clamd.conf
+DetectionStatsCountry za
+DetectionStatsCountry zw
+SafeBrowsing yes
+Bytecode yes
+ExtraDatabase dbname1
+ExtraDatabase dbname2
+"
+
+test ClamAV.lns get clamd_conf =
+ { "#comment" = "#" }
+ { "#comment" = "# Example config file for the Clam AV daemon" }
+ { "#comment" = "# Please read the clamd.conf(5) manual before editing this file." }
+ { "#comment" = "#" }
+ { }
+ { "#comment" = "Comment or remove the line below." }
+ { "Example" }
+ { "#comment" = "LogFile must be writable for the user running daemon." }
+ { "LogFile" = "/var/log/clamav/clamd.log" }
+ { "LogFileUnlock" = "yes" }
+ { "LogFileMaxSize" = "0" }
+ { "LogTime" = "yes" }
+ { "LogClean" = "yes" }
+ { "LogSyslog" = "yes" }
+ { "LogFacility" = "LOG_MAIL" }
+ { "LogVerbose" = "yes" }
+ { "LogRotate" = "yes" }
+ { "ExtendedDetectionInfo" = "yes" }
+ { "PidFile" = "/var/run/clamav/clamd.pid" }
+ { "TemporaryDirectory" = "/var/tmp" }
+ { "DatabaseDirectory" = "/var/lib/clamav" }
+ { "OfficialDatabaseOnly" = "no" }
+ { "LocalSocket" = "/var/run/clamav/clamd.sock" }
+ { "LocalSocketGroup" = "virusgroup" }
+ { "LocalSocketMode" = "660" }
+ { "FixStaleSocket" = "yes" }
+ { "TCPSocket" = "3310" }
+ { "TCPAddr" = "127.0.0.1" }
+ { "MaxConnectionQueueLength" = "30" }
+ { "StreamMaxLength" = "10M" }
+ { "StreamMinPort" = "30000" }
+ { "StreamMaxPort" = "32000" }
+ { "MaxThreads" = "50" }
+ { "ReadTimeout" = "300" }
+ { "CommandReadTimeout" = "5" }
+ { "SendBufTimeout" = "200" }
+ { "MaxQueue" = "200" }
+ { "IdleTimeout" = "60" }
+ { "ExcludePath" = "^/proc/" }
+ { "ExcludePath" = "^/sys/" }
+ { "MaxDirectoryRecursion" = "20" }
+ { "FollowDirectorySymlinks" = "yes" }
+ { "FollowFileSymlinks" = "yes" }
+ { "CrossFilesystems" = "yes" }
+ { "SelfCheck" = "600" }
+ { "VirusEvent" = "/usr/local/bin/send_sms 123456789 \"VIRUS ALERT: %v\"" }
+ { "User" = "clam" }
+ { "AllowSupplementaryGroups" = "yes" }
+ { "ExitOnOOM" = "yes" }
+ { "Foreground" = "yes" }
+ { "Debug" = "yes" }
+ { "LeaveTemporaryFiles" = "yes" }
+ { "AllowAllMatchScan" = "no" }
+ { "DetectPUA" = "yes" }
+ { "ExcludePUA" = "NetTool" }
+ { "ExcludePUA" = "PWTool" }
+ { "IncludePUA" = "Spy" }
+ { "IncludePUA" = "Scanner" }
+ { "IncludePUA" = "RAT" }
+ { "AlgorithmicDetection" = "yes" }
+ { "ForceToDisk" = "yes" }
+ { "DisableCache" = "yes" }
+ { "ScanPE" = "yes" }
+ { "DisableCertCheck" = "yes" }
+ { "ScanELF" = "yes" }
+ { "DetectBrokenExecutables" = "yes" }
+ { "ScanOLE2" = "yes" }
+ { "OLE2BlockMacros" = "no" }
+ { "ScanPDF" = "yes" }
+ { "ScanSWF" = "yes" }
+ { "ScanMail" = "yes" }
+ { "ScanPartialMessages" = "yes" }
+ { "PhishingSignatures" = "yes" }
+ { "PhishingScanURLs" = "yes" }
+ { "PhishingAlwaysBlockSSLMismatch" = "no" }
+ { "PhishingAlwaysBlockCloak" = "no" }
+ { "PartitionIntersection" = "no" }
+ { "HeuristicScanPrecedence" = "yes" }
+ { "StructuredDataDetection" = "yes" }
+ { "StructuredMinCreditCardCount" = "5" }
+ { "StructuredMinSSNCount" = "5" }
+ { "StructuredSSNFormatNormal" = "yes" }
+ { "StructuredSSNFormatStripped" = "yes" }
+ { "ScanHTML" = "yes" }
+ { "ScanArchive" = "yes" }
+ { "ArchiveBlockEncrypted" = "no" }
+ { "MaxScanSize" = "150M" }
+ { "MaxFileSize" = "30M" }
+ { "MaxRecursion" = "10" }
+ { "MaxFiles" = "15000" }
+ { "MaxEmbeddedPE" = "10M" }
+ { "MaxHTMLNormalize" = "10M" }
+
+test ClamAV.lns get freshclam_conf =
+ { "#comment" = "#" }
+ { "#comment" = "# Example config file for freshclam" }
+ { "#comment" = "# Please read the freshclam.conf(5) manual before editing this file." }
+ { "#comment" = "#" }
+ { }
+ { }
+ { "#comment" = "Comment or remove the line below." }
+ { "Example" }
+ { }
+ { "DatabaseDirectory" = "/var/lib/clamav" }
+ { "UpdateLogFile" = "/var/log/clamav/freshclam.log" }
+ { "LogFileMaxSize" = "2M" }
+ { "LogTime" = "yes" }
+ { "LogVerbose" = "yes" }
+ { "LogSyslog" = "yes" }
+ { "LogFacility" = "LOG_MAIL" }
+ { "LogRotate" = "yes" }
+ { "PidFile" = "/var/run/freshclam.pid" }
+ { "DatabaseOwner" = "clam" }
+ { "AllowSupplementaryGroups" = "yes" }
+ { "DNSDatabaseInfo" = "current.cvd.clamav.net" }
+ { "DatabaseMirror" = "db.XY.clamav.net" }
+ { "DatabaseMirror" = "database.clamav.net" }
+ { "MaxAttempts" = "5" }
+ { "ScriptedUpdates" = "yes" }
+ { "CompressLocalDatabase" = "no" }
+ { "DatabaseCustomURL" = "http://myserver.com/mysigs.ndb" }
+ { "DatabaseCustomURL" = "file:///mnt/nfs/local.hdb" }
+ { "PrivateMirror" = "mirror1.mynetwork.com" }
+ { "PrivateMirror" = "mirror2.mynetwork.com" }
+ { "Checks" = "24" }
+ { "HTTPProxyServer" = "myproxy.com" }
+ { "HTTPProxyPort" = "1234" }
+ { "HTTPProxyUsername" = "myusername" }
+ { "HTTPProxyPassword" = "mypass" }
+ { "HTTPUserAgent" = "SomeUserAgentIdString" }
+ { "LocalIPAddress" = "aaa.bbb.ccc.ddd" }
+ { "NotifyClamd" = "/etc/clamd.conf" }
+ { "OnUpdateExecute" = "command" }
+ { "OnErrorExecute" = "command" }
+ { "OnOutdatedExecute" = "command" }
+ { "Foreground" = "yes" }
+ { "Debug" = "yes" }
+ { "ConnectTimeout" = "60" }
+ { "ReceiveTimeout" = "60" }
+ { "TestDatabases" = "yes" }
+ { "SubmitDetectionStats" = "/etc/clamd.conf" }
+ { "DetectionStatsCountry" = "za" }
+ { "DetectionStatsCountry" = "zw" }
+ { "SafeBrowsing" = "yes" }
+ { "Bytecode" = "yes" }
+ { "ExtraDatabase" = "dbname1" }
+ { "ExtraDatabase" = "dbname2" }
--- /dev/null
+module Test_cmdline =
+
+let lns = Cmdline.lns
+
+test lns get "foo\nbar" = *
+test lns get "foo\n" = { "foo" }
+test lns get "foo \n" = { "foo" }
+test lns get "foo" = { "foo" }
+test lns get "foo bar" = { "foo" } { "bar" }
+test lns get "foo  bar" = { "foo" } { "bar" }
+test lns get "foo=bar" = { "foo" = "bar" }
+test lns get "foo=bar foo=baz" = { "foo" = "bar" } { "foo" = "baz" }
+test lns get "foo bar=bar quux baz=x" =
+ { "foo" } { "bar" = "bar" } { "quux" } { "baz" = "x" }
+test lns get "initrd=\\linux\\initrd.img-4.19.0-6-amd64 root=UUID=SOME_UUID rw" =
+  { "initrd" = "\\linux\\initrd.img-4.19.0-6-amd64" } { "root" = "UUID=SOME_UUID" } { "rw" }
+
+test lns put "" after set "foo" "bar" = "foo=bar"
+test lns put "foo=bar" after rm "foo" = ""
+test lns put "x=y foo=bar" after set "foo" "baz" = "x=y foo=baz"
+test lns put "foo=bar foo=baz" after set "foo[. = 'bar']" "quux" = "foo=quux foo=baz"
+test lns put "foo=bar foo=baz" after set "foo[. = 'baz']" "quux" = "foo=bar foo=quux"
+test lns put "" after set "foo" "" = "foo"
--- /dev/null
+module Test_cobblermodules =
+
+ let conf = "
+[serializes]
+settings = serializer_catalog
+
+[authentication]
+modules = auth_denyall
+"
+
+ test CobblerModules.lns get conf =
+ {}
+ { "serializes"
+ { "settings" = "serializer_catalog" }
+ {} }
+ { "authentication"
+ { "modules" = "auth_denyall" } }
+
+ test CobblerModules.lns put conf after
+ set "serializes/distro" "serializer_catalog";
+ set "serializes/repo" "serializer_catalog"
+ = "
+[serializes]
+settings = serializer_catalog
+
+distro=serializer_catalog
+repo=serializer_catalog
+[authentication]
+modules = auth_denyall
+"
--- /dev/null
+module Test_cobblersettings =
+
+test Cobblersettings.lns get "Simple_Setting: Value \n" =
+ { "Simple_Setting" = "Value" }
+
+test Cobblersettings.lns get "Simple_Setting2: 'Value2@acme.com' \n" =
+ { "Simple_Setting2" = "'Value2@acme.com'" }
+
+test Cobblersettings.lns get "Simple_Setting3: ''\n" =
+ { "Simple_Setting3" = "''" }
+
+test Cobblersettings.lns get "Simple_Setting4: \"\"\n" =
+ { "Simple_Setting4" = "\"\"" }
+
+test Cobblersettings.lns get "Simple_Setting_Trailing_Space : Value \n" =
+ { "Simple_Setting_Trailing_Space" = "Value" }
+
+test Cobblersettings.lns get "Setting_List:[Value1, Value2, Value3]\n" =
+ { "Setting_List"
+ { "sequence"
+ { "item" = "Value1" }
+ { "item" = "Value2" }
+ { "item" = "Value3" } } }
+
+test Cobblersettings.lns get "Empty_Setting_List: []\n" =
+ { "Empty_Setting_List"
+ { "sequence" } }
+
+test Cobblersettings.lns get "# Commented_Out_Setting: 'some value'\n" =
+ { "#comment" = "Commented_Out_Setting: 'some value'" }
+
+test Cobblersettings.lns get "---\n" =
+  { "---" = "---" }
+
+test Cobblersettings.lns get "Nested_Setting:\n Test: Value\n" =
+ { "Nested_Setting"
+ { "Test" = "Value" } }
+
+test Cobblersettings.lns get "Nested_Setting:\n - Test \n" =
+ { "Nested_Setting"
+ { "list"
+ { "value" = "Test" } } }
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+module Test_cockpit =
+(************************************************************************
+ * Basic Parse
+ *************************************************************************)
+
+ let conf = "
+[WebService]
+Origins = https://somedomain1.com https://somedomain2.com:9090
+
+MaxStartups= 10:30:60
+[Log]
+Fatal =criticals warnings
+
+[Session]
+
+IdleTimeout=15
+"
+
+ test Cockpit.lns get conf =
+ { }
+ { "WebService"
+ { "Origins" = "https://somedomain1.com" }
+ { "Origins" = "https://somedomain2.com:9090" }
+ { }
+ { "MaxStartups" = "10:30:60" }
+ }
+ { "Log"
+ { "Fatal" = "criticals" }
+ { "Fatal" = "warnings" }
+ { }
+ }
+ { "Session"
+ { }
+ { "IdleTimeout" = "15" }
+ }
+
+(************************************************************************
+ * Insert simple
+ *************************************************************************)
+ test Cockpit.lns put conf after
+ set "Log/EXAMPLE" "text"
+ = "
+[WebService]
+Origins = https://somedomain1.com https://somedomain2.com:9090
+
+MaxStartups= 10:30:60
+[Log]
+Fatal =criticals warnings
+
+EXAMPLE=text
+[Session]
+
+IdleTimeout=15
+"
+
+
+(************************************************************************
+ * Insert Origin makes sense
+ *************************************************************************)
+
+ test Cockpit.lns put conf after
+ insa "Origins" "WebService/Origins[last()]";
+ set "WebService/Origins[last()]" "https://thirdhost.com:8080"
+ = "
+[WebService]
+Origins = https://somedomain1.com https://somedomain2.com:9090 https://thirdhost.com:8080
+
+MaxStartups= 10:30:60
+[Log]
+Fatal =criticals warnings
+
+[Session]
+
+IdleTimeout=15
+"
+
+(************************************************************************
+ * Insert Fatal makes sense
+ *************************************************************************)
+ test Cockpit.lns put conf after
+ insa "Fatal" "Log/Fatal[last()]";
+ set "Log/Fatal[last()]" "info"
+ = "
+[WebService]
+Origins = https://somedomain1.com https://somedomain2.com:9090
+
+MaxStartups= 10:30:60
+[Log]
+Fatal =criticals warnings info
+
+[Session]
+
+IdleTimeout=15
+"
--- /dev/null
+(*
+Module: Test_Collectd
+ Provides unit tests and examples for the <Collectd> lens.
+*)
+module Test_Collectd =
+
+(* Variable: simple *)
+let simple = "LoadPlugin contextswitch
+LoadPlugin cpu
+FQDNLookup \"true\"
+Include \"/var/lib/puppet/modules/collectd/plugins/*.conf\"
+"
+
+(* Test: Collectd.lns *)
+test Collectd.lns get simple =
+ { "directive" = "LoadPlugin"
+ { "arg" = "contextswitch" }
+ }
+ { "directive" = "LoadPlugin"
+ { "arg" = "cpu" }
+ }
+ { "directive" = "FQDNLookup"
+ { "arg" = "\"true\"" }
+ }
+ { "directive" = "Include"
+ { "arg" = "\"/var/lib/puppet/modules/collectd/plugins/*.conf\"" }
+ }
+
+
+(* Variable: filters *)
+let filters = "<Chain \"PreCache\">
+ <Rule \"no_fqdn\">
+ <Match \"regex\">
+      Host \"^[^\\.]*$\"
+ Invert false
+ </Match>
+ Target \"stop\"
+ </Rule>
+</Chain>
+"
+
+
+(* Test: Collectd.lns *)
+test Collectd.lns get filters =
+ { "Chain"
+ { "arg" = "\"PreCache\"" }
+ { "Rule"
+ { "arg" = "\"no_fqdn\"" }
+ { "Match"
+ { "arg" = "\"regex\"" }
+ { "directive" = "Host"
+        { "arg" = "\"^[^\\.]*$\"" }
+ }
+ { "directive" = "Invert"
+ { "arg" = "false" }
+ }
+ }
+ { "directive" = "Target"
+ { "arg" = "\"stop\"" }
+ }
+ }
+ }
+
+
+
--- /dev/null
+(*
+Module: Test_CPanel
+ Provides unit tests and examples for the <CPanel> lens.
+*)
+module Test_CPanel =
+
+(* Variable: config
+ A sample cpanel.config file *)
+let config = "#### NOTICE ####
+# After manually editing any configuration settings in this file,
+# please run '/usr/local/cpanel/whostmgr/bin/whostmgr2 --updatetweaksettings'
+# to fully update your server's configuration.
+
+skipantirelayd=1
+ionice_optimizefs=6
+account_login_access=owner_root
+enginepl=cpanel.pl
+stats_log=/usr/local/cpanel/logs/stats_log
+cpaddons_notify_users=Allow users to choose
+apache_port=0.0.0.0:80
+allow_server_info_status_from=
+system_diskusage_warn_percent=82.5500
+maxemailsperhour
+email_send_limits_max_defer_fail_percentage
+default_archive-logs=1
+SecurityPolicy::xml-api=1\n"
+
+(* Test: CPanel.lns
+ Get <config> *)
+test CPanel.lns get config =
+ { "#comment" = "### NOTICE ####" }
+ { "#comment" = "After manually editing any configuration settings in this file," }
+ { "#comment" = "please run '/usr/local/cpanel/whostmgr/bin/whostmgr2 --updatetweaksettings'" }
+ { "#comment" = "to fully update your server's configuration." }
+ { }
+ { "skipantirelayd" = "1" }
+ { "ionice_optimizefs" = "6" }
+ { "account_login_access" = "owner_root" }
+ { "enginepl" = "cpanel.pl" }
+ { "stats_log" = "/usr/local/cpanel/logs/stats_log" }
+ { "cpaddons_notify_users" = "Allow users to choose" }
+ { "apache_port" = "0.0.0.0:80" }
+ { "allow_server_info_status_from" = "" }
+ { "system_diskusage_warn_percent" = "82.5500" }
+ { "maxemailsperhour" }
+ { "email_send_limits_max_defer_fail_percentage" }
+ { "default_archive-logs" = "1" }
+ { "SecurityPolicy::xml-api" = "1" }
--- /dev/null
+module Test_cron =
+
+ let conf = "# /etc/cron.d/anacron: crontab entries for the anacron package
+
+SHELL=/bin/sh
+PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
+CRON_TZ=America/Los_Angeles
+MAILTO=user1@tld1,user2@tld2;user3@tld3
+
+ 30 7 * * * root test -x /etc/init.d/anacron && /usr/sbin/invoke-rc.d anacron start >/dev/null
+ 00 */3 15-25/2 May 1-5 user somecommand
+ 00 */3 15-25/2 May mon-tue user somecommand
+# a comment
+@yearly foo a command\n"
+
+ test Cron.lns get conf =
+ { "#comment" = "/etc/cron.d/anacron: crontab entries for the anacron package" }
+ {}
+ { "SHELL" = "/bin/sh" }
+ { "PATH" = "/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin" }
+ { "CRON_TZ" = "America/Los_Angeles" }
+ { "MAILTO" = "user1@tld1,user2@tld2;user3@tld3" }
+ {}
+ { "entry" = "test -x /etc/init.d/anacron && /usr/sbin/invoke-rc.d anacron start >/dev/null"
+ { "time"
+ { "minute" = "30" }
+ { "hour" = "7" }
+ { "dayofmonth" = "*" }
+ { "month" = "*" }
+ { "dayofweek" = "*" } }
+ { "user" = "root" } }
+ { "entry" = "somecommand"
+ { "time"
+ { "minute" = "00" }
+ { "hour" = "*/3" }
+ { "dayofmonth" = "15-25/2" }
+ { "month" = "May" }
+ { "dayofweek" = "1-5" } }
+ { "user" = "user" } }
+ { "entry" = "somecommand"
+ { "time"
+ { "minute" = "00" }
+ { "hour" = "*/3" }
+ { "dayofmonth" = "15-25/2" }
+ { "month" = "May" }
+ { "dayofweek" = "mon-tue" } }
+ { "user" = "user" } }
+ { "#comment" = "a comment" }
+ { "entry" = "a command"
+ { "schedule" = "yearly" }
+ { "user" = "foo" } }
--- /dev/null
+module Test_Cron_User =
+
+let s = "MAILTO=cron@example.com
+31 * * * * ${HOME}/bin/stuff
+54 16 * * * /usr/sbin/tmpwatch -umc 30d ${HOME}/tmp\n"
+
+let lns = Cron_User.lns
+
+test lns get s =
+ { "MAILTO" = "cron@example.com" }
+ { "entry" = "${HOME}/bin/stuff"
+ { "time"
+ { "minute" = "31" }
+ { "hour" = "*" }
+ { "dayofmonth" = "*" }
+ { "month" = "*" }
+ { "dayofweek" = "*" }
+ }
+ }
+ { "entry" = "/usr/sbin/tmpwatch -umc 30d ${HOME}/tmp"
+ { "time"
+ { "minute" = "54" }
+ { "hour" = "16" }
+ { "dayofmonth" = "*" }
+ { "month" = "*" }
+ { "dayofweek" = "*" }
+ }
+ }
+
+test lns put s after
+rm "/MAILTO";
+rm "/entry[time/minute = '54']";
+set "/entry[. = '${HOME}/bin/stuff']/time/minute" "24" =
+ "24 * * * * ${HOME}/bin/stuff\n"
--- /dev/null
+module Test_crypttab =
+
+ let simple = "sda1_crypt\t /dev/sda1\t /dev/random \t swap\n"
+
+ let simple_tree =
+ { "1"
+ { "target" = "sda1_crypt" }
+ { "device" = "/dev/sda1" }
+ { "password" = "/dev/random" }
+ { "opt" = "swap" } }
+
+ let trailing_ws = "sda1_crypt\t /dev/sda1\t /dev/random \t swap\t\n"
+
+ let no_opts = "sda1_crypt\t /dev/sda1\t /etc/key\n"
+
+ let no_opts_tree =
+ { "1"
+ { "target" = "sda1_crypt" }
+ { "device" = "/dev/sda1" }
+ { "password" = "/etc/key" } }
+
+ let no_password = "sda1_crypt\t /dev/sda1\n"
+
+ let no_password_tree =
+ { "1"
+ { "target" = "sda1_crypt" }
+ { "device" = "/dev/sda1" } }
+
+ let multi_opts = "sda1_crypt\t /dev/sda1\t /etc/key \t cipher=aes-cbc-essiv:sha256,verify\n"
+
+ let multi_opts_tree =
+ { "1"
+ { "target" = "sda1_crypt" }
+ { "device" = "/dev/sda1" }
+ { "password" = "/etc/key" }
+ { "opt" = "cipher"
+ { "value" = "aes-cbc-essiv:sha256" } }
+ { "opt" = "verify" } }
+
+ let uuid = "sda3_crypt UUID=5b8b6e72-acf9-43bc-bd2d-8dbcaee82f99 none luks,keyscript=/usr/share/yubikey-luks/ykluks-keyscript,discard\n"
+
+ let uuid_tree =
+ { "1"
+ { "target" = "sda3_crypt" }
+ { "device" = "UUID=5b8b6e72-acf9-43bc-bd2d-8dbcaee82f99" }
+ { "password" = "none" }
+ { "opt" = "luks" }
+ { "opt" = "keyscript"
+ { "value" = "/usr/share/yubikey-luks/ykluks-keyscript" } }
+ { "opt" = "discard" } }
+
+ test Crypttab.lns get simple = simple_tree
+
+ test Crypttab.lns get trailing_ws = simple_tree
+
+ test Crypttab.lns get no_opts = no_opts_tree
+
+ test Crypttab.lns get no_password = no_password_tree
+
+ test Crypttab.lns get multi_opts = multi_opts_tree
+
+ test Crypttab.lns get uuid = uuid_tree
--- /dev/null
+module Test_CSV =
+
+(* Test: CSV.lns
+ Simple test *)
+test CSV.lns get "a,b,c\n" =
+ { "1"
+ { "1" = "a" }
+ { "2" = "b" }
+ { "3" = "c" } }
+
+(* Test: CSV.lns
+ Values with spaces *)
+test CSV.lns get "a,b c,d\n" =
+ { "1"
+ { "1" = "a" }
+ { "2" = "b c" }
+ { "3" = "d" } }
+
+(* Test: CSV.lns
+ Quoted values *)
+test CSV.lns get "a,\"b,c\",d
+# comment
+#
+e,f,with space\n" =
+ { "1"
+ { "1" = "a" }
+ { "2" = "\"b,c\"" }
+ { "3" = "d" } }
+ { "#comment" = "comment" }
+ { }
+ { "2"
+ { "1" = "e" }
+ { "2" = "f" }
+ { "3" = "with space" } }
+
+(* Test: CSV.lns
+ Empty values *)
+test CSV.lns get ",
+,,\n" =
+ { "1"
+ { "1" = "" }
+ { "2" = "" } }
+ { "2"
+ { "1" = "" }
+ { "2" = "" }
+ { "3" = "" } }
+
+(* Test: CSV.lns
+ Trailing spaces *)
+test CSV.lns get "a , b
+ \n" =
+ { "1"
+ { "1" = "a " }
+ { "2" = " b " } }
+ { "2"
+ { "1" = " " } }
+
+(* Test: CSV.lns
+ Quoted values in quoted values *)
+test CSV.lns get "\"a,b\"\"c d\"\"\"\n" =
+ { "1" { "1" = "\"a,b\"\"c d\"\"\"" } }
+
+(* Test: CSV.lns
+ Quote in quoted values *)
+test CSV.lns get "\"a,b\"\"c d\"\n" =
+ { "1" { "1" = "\"a,b\"\"c d\"" } }
+
+(* Test: CSV.lns
+ Values with newlines *)
+test CSV.lns get "a,\"b\n c\"\n" =
+ { "1"
+ { "1" = "a" }
+ { "2" = "\"b\n c\"" } }
+
+(* Test: CSV.lns_semicol
+ Semi-colon lens *)
+test CSV.lns_semicol get "a;\"b;c\";d
+# comment
+#
+e;f;with space\n" =
+ { "1"
+ { "1" = "a" }
+ { "2" = "\"b;c\"" }
+ { "3" = "d" } }
+ { "#comment" = "comment" }
+ { }
+ { "2"
+ { "1" = "e" }
+ { "2" = "f" }
+ { "3" = "with space" } }
+
--- /dev/null
+(*
+Module: Test_Cups
+ Provides unit tests and examples for the <Cups> lens.
+*)
+
+module Test_Cups =
+
+(* Variable: conf *)
+let conf = "# Sample configuration file for the CUPS scheduler.
+LogLevel warn
+
+# Deactivate CUPS' internal logrotating, as we provide a better one, especially
+# LogLevel debug2 gets usable now
+MaxLogSize 0
+
+# Administrator user group...
+SystemGroup lpadmin
+
+
+# Only listen for connections from the local machine.
+Listen localhost:631
+Listen /var/run/cups/cups.sock
+
+# Show shared printers on the local network.
+BrowseOrder allow,deny
+BrowseAllow all
+BrowseLocalProtocols CUPS dnssd
+BrowseAddress @LOCAL
+
+# Default authentication type, when authentication is required...
+DefaultAuthType Basic
+
+# Web interface setting...
+WebInterface Yes
+
+# Restrict access to the server...
+<Location />
+ Order allow,deny
+</Location>
+
+# Restrict access to the admin pages...
+<Location /admin>
+ Order allow,deny
+</Location>
+
+# Restrict access to configuration files...
+<Location /admin/conf>
+ AuthType Default
+ Require user @SYSTEM
+ Order allow,deny
+</Location>
+
+# Set the default printer/job policies...
+<Policy default>
+ # Job/subscription privacy...
+ JobPrivateAccess default
+ JobPrivateValues default
+ SubscriptionPrivateAccess default
+ SubscriptionPrivateValues default
+
+ # Job-related operations must be done by the owner or an administrator...
+ <Limit Create-Job Print-Job Print-URI Validate-Job>
+ Order deny,allow
+ </Limit>
+
+ <Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job Cancel-My-Jobs Close-Job CUPS-Move-Job CUPS-Get-Document>
+ Require user @OWNER @SYSTEM
+ Order deny,allow
+ </Limit>
+
+ # All administration operations require an administrator to authenticate...
+ <Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default CUPS-Get-Devices>
+ AuthType Default
+ Require user @SYSTEM
+ Order deny,allow
+ </Limit>
+
+ # All printer operations require a printer operator to authenticate...
+ <Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After Cancel-Jobs CUPS-Accept-Jobs CUPS-Reject-Jobs>
+ AuthType Default
+ Require user @SYSTEM
+ Order deny,allow
+ </Limit>
+
+ # Only the owner or an administrator can cancel or authenticate a job...
+ <Limit Cancel-Job CUPS-Authenticate-Job>
+ Require user @OWNER @SYSTEM
+ Order deny,allow
+ </Limit>
+
+ <Limit All>
+ Order deny,allow
+ </Limit>
+</Policy>
+
+# Set the authenticated printer/job policies...
+<Policy authenticated>
+ # Job/subscription privacy...
+ JobPrivateAccess default
+ JobPrivateValues default
+ SubscriptionPrivateAccess default
+ SubscriptionPrivateValues default
+
+ # Job-related operations must be done by the owner or an administrator...
+ <Limit Create-Job Print-Job Print-URI Validate-Job>
+ AuthType Default
+ Order deny,allow
+ </Limit>
+
+ <Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job Cancel-My-Jobs Close-Job CUPS-Move-Job CUPS-Get-Document>
+ AuthType Default
+ Require user @OWNER @SYSTEM
+ Order deny,allow
+ </Limit>
+
+ # All administration operations require an administrator to authenticate...
+ <Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default>
+ AuthType Default
+ Require user @SYSTEM
+ Order deny,allow
+ </Limit>
+
+ # All printer operations require a printer operator to authenticate...
+ <Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After Cancel-Jobs CUPS-Accept-Jobs CUPS-Reject-Jobs>
+ AuthType Default
+ Require user @SYSTEM
+ Order deny,allow
+ </Limit>
+
+ # Only the owner or an administrator can cancel or authenticate a job...
+ <Limit Cancel-Job CUPS-Authenticate-Job>
+ AuthType Default
+ Require user @OWNER @SYSTEM
+ Order deny,allow
+ </Limit>
+
+ <Limit All>
+ Order deny,allow
+ </Limit>
+</Policy>
+"
+
+(* Test: Simplevars.lns *)
+test Cups.lns get conf =
+ { "#comment" = "Sample configuration file for the CUPS scheduler." }
+ { "directive" = "LogLevel"
+ { "arg" = "warn" }
+ }
+ { }
+ { "#comment" = "Deactivate CUPS' internal logrotating, as we provide a better one, especially" }
+ { "#comment" = "LogLevel debug2 gets usable now" }
+ { "directive" = "MaxLogSize"
+ { "arg" = "0" }
+ }
+ { }
+ { "#comment" = "Administrator user group..." }
+ { "directive" = "SystemGroup"
+ { "arg" = "lpadmin" }
+ }
+ { }
+ { }
+ { "#comment" = "Only listen for connections from the local machine." }
+ { "directive" = "Listen"
+ { "arg" = "localhost:631" }
+ }
+ { "directive" = "Listen"
+ { "arg" = "/var/run/cups/cups.sock" }
+ }
+ { }
+ { "#comment" = "Show shared printers on the local network." }
+ { "directive" = "BrowseOrder"
+ { "arg" = "allow,deny" }
+ }
+ { "directive" = "BrowseAllow"
+ { "arg" = "all" }
+ }
+ { "directive" = "BrowseLocalProtocols"
+ { "arg" = "CUPS" }
+ { "arg" = "dnssd" }
+ }
+ { "directive" = "BrowseAddress"
+ { "arg" = "@LOCAL" }
+ }
+ { }
+ { "#comment" = "Default authentication type, when authentication is required..." }
+ { "directive" = "DefaultAuthType"
+ { "arg" = "Basic" }
+ }
+ { }
+ { "#comment" = "Web interface setting..." }
+ { "directive" = "WebInterface"
+ { "arg" = "Yes" }
+ }
+ { }
+ { "#comment" = "Restrict access to the server..." }
+ { "Location"
+ { "arg" = "/" }
+ { "directive" = "Order"
+ { "arg" = "allow,deny" }
+ }
+ }
+ { "#comment" = "Restrict access to the admin pages..." }
+ { "Location"
+ { "arg" = "/admin" }
+ { "directive" = "Order"
+ { "arg" = "allow,deny" }
+ }
+ }
+ { "#comment" = "Restrict access to configuration files..." }
+ { "Location"
+ { "arg" = "/admin/conf" }
+ { "directive" = "AuthType"
+ { "arg" = "Default" }
+ }
+ { "directive" = "Require"
+ { "arg" = "user" }
+ { "arg" = "@SYSTEM" }
+ }
+ { "directive" = "Order"
+ { "arg" = "allow,deny" }
+ }
+ }
+ { "#comment" = "Set the default printer/job policies..." }
+ { "Policy"
+ { "arg" = "default" }
+ { "#comment" = "Job/subscription privacy..." }
+ { "directive" = "JobPrivateAccess"
+ { "arg" = "default" }
+ }
+ { "directive" = "JobPrivateValues"
+ { "arg" = "default" }
+ }
+ { "directive" = "SubscriptionPrivateAccess"
+ { "arg" = "default" }
+ }
+ { "directive" = "SubscriptionPrivateValues"
+ { "arg" = "default" }
+ }
+ { }
+ { "#comment" = "Job-related operations must be done by the owner or an administrator..." }
+ { "Limit"
+ { "arg" = "Create-Job" }
+ { "arg" = "Print-Job" }
+ { "arg" = "Print-URI" }
+ { "arg" = "Validate-Job" }
+ { "directive" = "Order"
+ { "arg" = "deny,allow" }
+ }
+ }
+ { "Limit"
+ { "arg" = "Send-Document" }
+ { "arg" = "Send-URI" }
+ { "arg" = "Hold-Job" }
+ { "arg" = "Release-Job" }
+ { "arg" = "Restart-Job" }
+ { "arg" = "Purge-Jobs" }
+ { "arg" = "Set-Job-Attributes" }
+ { "arg" = "Create-Job-Subscription" }
+ { "arg" = "Renew-Subscription" }
+ { "arg" = "Cancel-Subscription" }
+ { "arg" = "Get-Notifications" }
+ { "arg" = "Reprocess-Job" }
+ { "arg" = "Cancel-Current-Job" }
+ { "arg" = "Suspend-Current-Job" }
+ { "arg" = "Resume-Job" }
+ { "arg" = "Cancel-My-Jobs" }
+ { "arg" = "Close-Job" }
+ { "arg" = "CUPS-Move-Job" }
+ { "arg" = "CUPS-Get-Document" }
+ { "directive" = "Require"
+ { "arg" = "user" }
+ { "arg" = "@OWNER" }
+ { "arg" = "@SYSTEM" }
+ }
+ { "directive" = "Order"
+ { "arg" = "deny,allow" }
+ }
+ }
+ { "#comment" = "All administration operations require an administrator to authenticate..." }
+ { "Limit"
+ { "arg" = "CUPS-Add-Modify-Printer" }
+ { "arg" = "CUPS-Delete-Printer" }
+ { "arg" = "CUPS-Add-Modify-Class" }
+ { "arg" = "CUPS-Delete-Class" }
+ { "arg" = "CUPS-Set-Default" }
+ { "arg" = "CUPS-Get-Devices" }
+ { "directive" = "AuthType"
+ { "arg" = "Default" }
+ }
+ { "directive" = "Require"
+ { "arg" = "user" }
+ { "arg" = "@SYSTEM" }
+ }
+ { "directive" = "Order"
+ { "arg" = "deny,allow" }
+ }
+ }
+ { "#comment" = "All printer operations require a printer operator to authenticate..." }
+ { "Limit"
+ { "arg" = "Pause-Printer" }
+ { "arg" = "Resume-Printer" }
+ { "arg" = "Enable-Printer" }
+ { "arg" = "Disable-Printer" }
+ { "arg" = "Pause-Printer-After-Current-Job" }
+ { "arg" = "Hold-New-Jobs" }
+ { "arg" = "Release-Held-New-Jobs" }
+ { "arg" = "Deactivate-Printer" }
+ { "arg" = "Activate-Printer" }
+ { "arg" = "Restart-Printer" }
+ { "arg" = "Shutdown-Printer" }
+ { "arg" = "Startup-Printer" }
+ { "arg" = "Promote-Job" }
+ { "arg" = "Schedule-Job-After" }
+ { "arg" = "Cancel-Jobs" }
+ { "arg" = "CUPS-Accept-Jobs" }
+ { "arg" = "CUPS-Reject-Jobs" }
+ { "directive" = "AuthType"
+ { "arg" = "Default" }
+ }
+ { "directive" = "Require"
+ { "arg" = "user" }
+ { "arg" = "@SYSTEM" }
+ }
+ { "directive" = "Order"
+ { "arg" = "deny,allow" }
+ }
+ }
+ { "#comment" = "Only the owner or an administrator can cancel or authenticate a job..." }
+ { "Limit"
+ { "arg" = "Cancel-Job" }
+ { "arg" = "CUPS-Authenticate-Job" }
+ { "directive" = "Require"
+ { "arg" = "user" }
+ { "arg" = "@OWNER" }
+ { "arg" = "@SYSTEM" }
+ }
+ { "directive" = "Order"
+ { "arg" = "deny,allow" }
+ }
+ }
+ { "Limit"
+ { "arg" = "All" }
+ { "directive" = "Order"
+ { "arg" = "deny,allow" }
+ }
+ }
+ }
+ { "#comment" = "Set the authenticated printer/job policies..." }
+ { "Policy"
+ { "arg" = "authenticated" }
+ { "#comment" = "Job/subscription privacy..." }
+ { "directive" = "JobPrivateAccess"
+ { "arg" = "default" }
+ }
+ { "directive" = "JobPrivateValues"
+ { "arg" = "default" }
+ }
+ { "directive" = "SubscriptionPrivateAccess"
+ { "arg" = "default" }
+ }
+ { "directive" = "SubscriptionPrivateValues"
+ { "arg" = "default" }
+ }
+ { }
+ { "#comment" = "Job-related operations must be done by the owner or an administrator..." }
+ { "Limit"
+ { "arg" = "Create-Job" }
+ { "arg" = "Print-Job" }
+ { "arg" = "Print-URI" }
+ { "arg" = "Validate-Job" }
+ { "directive" = "AuthType"
+ { "arg" = "Default" }
+ }
+ { "directive" = "Order"
+ { "arg" = "deny,allow" }
+ }
+ }
+ { "Limit"
+ { "arg" = "Send-Document" }
+ { "arg" = "Send-URI" }
+ { "arg" = "Hold-Job" }
+ { "arg" = "Release-Job" }
+ { "arg" = "Restart-Job" }
+ { "arg" = "Purge-Jobs" }
+ { "arg" = "Set-Job-Attributes" }
+ { "arg" = "Create-Job-Subscription" }
+ { "arg" = "Renew-Subscription" }
+ { "arg" = "Cancel-Subscription" }
+ { "arg" = "Get-Notifications" }
+ { "arg" = "Reprocess-Job" }
+ { "arg" = "Cancel-Current-Job" }
+ { "arg" = "Suspend-Current-Job" }
+ { "arg" = "Resume-Job" }
+ { "arg" = "Cancel-My-Jobs" }
+ { "arg" = "Close-Job" }
+ { "arg" = "CUPS-Move-Job" }
+ { "arg" = "CUPS-Get-Document" }
+ { "directive" = "AuthType"
+ { "arg" = "Default" }
+ }
+ { "directive" = "Require"
+ { "arg" = "user" }
+ { "arg" = "@OWNER" }
+ { "arg" = "@SYSTEM" }
+ }
+ { "directive" = "Order"
+ { "arg" = "deny,allow" }
+ }
+ }
+ { "#comment" = "All administration operations require an administrator to authenticate..." }
+ { "Limit"
+ { "arg" = "CUPS-Add-Modify-Printer" }
+ { "arg" = "CUPS-Delete-Printer" }
+ { "arg" = "CUPS-Add-Modify-Class" }
+ { "arg" = "CUPS-Delete-Class" }
+ { "arg" = "CUPS-Set-Default" }
+ { "directive" = "AuthType"
+ { "arg" = "Default" }
+ }
+ { "directive" = "Require"
+ { "arg" = "user" }
+ { "arg" = "@SYSTEM" }
+ }
+ { "directive" = "Order"
+ { "arg" = "deny,allow" }
+ }
+ }
+ { "#comment" = "All printer operations require a printer operator to authenticate..." }
+ { "Limit"
+ { "arg" = "Pause-Printer" }
+ { "arg" = "Resume-Printer" }
+ { "arg" = "Enable-Printer" }
+ { "arg" = "Disable-Printer" }
+ { "arg" = "Pause-Printer-After-Current-Job" }
+ { "arg" = "Hold-New-Jobs" }
+ { "arg" = "Release-Held-New-Jobs" }
+ { "arg" = "Deactivate-Printer" }
+ { "arg" = "Activate-Printer" }
+ { "arg" = "Restart-Printer" }
+ { "arg" = "Shutdown-Printer" }
+ { "arg" = "Startup-Printer" }
+ { "arg" = "Promote-Job" }
+ { "arg" = "Schedule-Job-After" }
+ { "arg" = "Cancel-Jobs" }
+ { "arg" = "CUPS-Accept-Jobs" }
+ { "arg" = "CUPS-Reject-Jobs" }
+ { "directive" = "AuthType"
+ { "arg" = "Default" }
+ }
+ { "directive" = "Require"
+ { "arg" = "user" }
+ { "arg" = "@SYSTEM" }
+ }
+ { "directive" = "Order"
+ { "arg" = "deny,allow" }
+ }
+ }
+ { "#comment" = "Only the owner or an administrator can cancel or authenticate a job..." }
+ { "Limit"
+ { "arg" = "Cancel-Job" }
+ { "arg" = "CUPS-Authenticate-Job" }
+ { "directive" = "AuthType"
+ { "arg" = "Default" }
+ }
+ { "directive" = "Require"
+ { "arg" = "user" }
+ { "arg" = "@OWNER" }
+ { "arg" = "@SYSTEM" }
+ }
+ { "directive" = "Order"
+ { "arg" = "deny,allow" }
+ }
+ }
+ { "Limit"
+ { "arg" = "All" }
+ { "directive" = "Order"
+ { "arg" = "deny,allow" }
+ }
+ }
+ }
+
--- /dev/null
+module Test_cyrus_imapd =
+
+let conf = "configdirectory: /var/lib/imap
+partition-default: /var/spool/imap
+admins: cyrus-admin
+sievedir: /var/lib/imap/sieve
+sendmail: /usr/sbin/sendmail
+sasl_pwcheck_method: auxprop saslauthd
+sasl_mech_list: PLAIN LOGIN
+allowplaintext: no
+tls_cert_file: /etc/pki/cyrus-imapd/cyrus-imapd.pem
+tls_key_file: /etc/pki/cyrus-imapd/cyrus-imapd.pem
+tls_ca_file: /etc/pki/tls/certs/ca-bundle.crt
+# uncomment this if you're operating in a DSCP environment (RFC-4594)
+# qosmarking: af13\n"
+
+test Cyrus_Imapd.lns get conf =
+ { "configdirectory" = "/var/lib/imap" }
+ { "partition-default" = "/var/spool/imap" }
+ { "admins" = "cyrus-admin" }
+ { "sievedir" = "/var/lib/imap/sieve" }
+ { "sendmail" = "/usr/sbin/sendmail" }
+ { "sasl_pwcheck_method" = "auxprop saslauthd" }
+ { "sasl_mech_list" = "PLAIN LOGIN" }
+ { "allowplaintext" = "no" }
+ { "tls_cert_file" = "/etc/pki/cyrus-imapd/cyrus-imapd.pem" }
+ { "tls_key_file" = "/etc/pki/cyrus-imapd/cyrus-imapd.pem" }
+ { "tls_ca_file" = "/etc/pki/tls/certs/ca-bundle.crt" }
+ { "#comment" = "uncomment this if you're operating in a DSCP environment (RFC-4594)" }
+ { "#comment" = "qosmarking: af13" }
+
+test Cyrus_Imapd.lns get "admins: cyrus-admin\n"
+ =
+ { "admins" = "cyrus-admin" }
+
+test Cyrus_Imapd.lns put "" after set "munge8bit" "false" =
+ "munge8bit: false\n"
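The Cyrus_Imapd tests above map `key: value` lines (and `#` comments) to flat tree nodes. As a rough illustration only — this is a minimal Python sketch, not part of Augeas or its test suite — the same mapping can be mimicked like this:

```python
# Illustrative stand-in for the Cyrus_Imapd lens: turn "key: value" lines
# into (key, value) pairs, and "#"-prefixed lines into "#comment" entries,
# mirroring the tree shape asserted in the tests above.
def parse_imapd(text):
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue  # the lens emits an empty node; we just skip blanks here
        if line.startswith("#"):
            entries.append(("#comment", line[1:].strip()))
        else:
            key, _, value = line.partition(":")
            entries.append((key.strip(), value.strip()))
    return entries
```

For example, `parse_imapd("admins: cyrus-admin\n")` yields `[("admins", "cyrus-admin")]`, matching the `{ "admins" = "cyrus-admin" }` node in the test.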
--- /dev/null
+module Test_darkice =
+
+ let conf = "# this is a comment
+
+[general]
+duration = 0
+bufferSecs = 5 # size of internal slip buffer, in seconds
+
+[icecast2-0]
+bitrateMode=cbr
+format = vorbis
+"
+
+ test Darkice.lns get conf =
+ { "#comment" = "this is a comment" }
+ {}
+ { "target" = "general"
+ { "duration" = "0" }
+ { "bufferSecs" = "5"
+ { "#comment" = "size of internal slip buffer, in seconds" } }
+ {} }
+ { "target" = "icecast2-0"
+ { "bitrateMode" = "cbr" }
+ { "format" = "vorbis" } }
--- /dev/null
+module Test_debctrl =
+
+ let source = "Source: libtest-distmanifest-perl\n"
+ let source_result = { "Source" = "libtest-distmanifest-perl" }
+
+ test (Debctrl.simple_entry Debctrl.simple_src_keyword ) get source =
+ source_result
+
+ test (Debctrl.simple_entry Debctrl.simple_src_keyword ) get
+ "Maintainer: Debian Perl Group <pkg-perl-maintainers@lists.alioth.debian.org>\n"
+ = { "Maintainer" = "Debian Perl Group <pkg-perl-maintainers@lists.alioth.debian.org>"
+ }
+
+ let uploaders
+ = "Uploaders: foo@bar, Dominique Dumont <dominique.dumont@xx.yyy>,\n"
+ . " gregor herrmann <gregoa@xxx.yy>\n"
+
+ let uploaders_result =
+ { "Uploaders"
+ { "1" = "foo@bar"}
+ { "2" = "Dominique Dumont <dominique.dumont@xx.yyy>" }
+ { "3" = "gregor herrmann <gregoa@xxx.yy>" } }
+
+ test Debctrl.uploaders get uploaders = uploaders_result
+
+(* test package dependencies *)
+test Debctrl.version_depends get "( >= 5.8.8-12 )" =
+ { "version" { "relation" = ">=" } { "number" = "5.8.8-12" } }
+
+test Debctrl.arch_depends get "[ !hurd-i386]" =
+ { "arch" { "prefix" = "!" } { "name" = "hurd-i386" } }
+
+test Debctrl.arch_depends get "[ hurd-i386]" =
+ { "arch" { "prefix" = "" } { "name" = "hurd-i386" } }
+
+let p_depends_test = "perl ( >= 5.8.8-12 ) [ !hurd-i386]"
+
+test Debctrl.package_depends get p_depends_test =
+ { "perl"
+ { "version"
+ { "relation" = ">=" }
+ { "number" = "5.8.8-12" } }
+ { "arch" { "prefix" = "!" } { "name" = "hurd-i386" } } }
+
+let dependency_test = "perl-modules (>= 5.10) | libmodule-build-perl"
+
+test Debctrl.dependency get dependency_test =
+ { "or" { "perl-modules"
+ { "version" { "relation" = ">=" }
+ { "number" = "5.10" } } } }
+ { "or" { "libmodule-build-perl" } }
+
+test (Debctrl.dependency_list "Build-Depends-Indep") get
+ "Build-Depends-Indep: perl (>= 5.8.8-12) [ !hurd-i386], \n"
+ . " perl-modules (>= 5.10) | libmodule-build-perl,\n"
+ . " libcarp-assert-more-perl,\n"
+ . " libconfig-tiny-perl\n"
+ = { "Build-Depends-Indep"
+ { "and" { "or" { "perl"
+ { "version"
+ { "relation" = ">=" }
+ { "number" = "5.8.8-12" } }
+ { "arch"
+ { "prefix" = "!" }
+ { "name" = "hurd-i386" } } } } }
+ { "and" { "or" { "perl-modules"
+ { "version" { "relation" = ">=" }
+ { "number" = "5.10" } } } }
+ { "or" { "libmodule-build-perl" } } }
+ { "and" { "or" { "libcarp-assert-more-perl" } } }
+ { "and" { "or" { "libconfig-tiny-perl" } } } }
+
+test (Debctrl.dependency_list "Depends") get
+ "Depends: ${perl:Depends}, ${misc:Depends},\n"
+ ." libparse-recdescent-perl (>= 1.90.0)\n"
+ = { "Depends"
+ { "and" { "or" { "${perl:Depends}" }} }
+ { "and" { "or" { "${misc:Depends}" }} }
+ { "and" { "or" { "libparse-recdescent-perl"
+ { "version"
+ { "relation" = ">=" }
+ { "number" = "1.90.0" } } } } }
+ }
+
+ let description = "Description: describe and edit configuration data\n"
+ ." Config::Model enables [...] must:\n"
+ ." - if the configuration data\n"
+ ." .\n"
+ ." With the elements above, (...) on ReadLine.\n"
+
+ test Debctrl.description get description =
+ { "Description"
+ { "summary" = "describe and edit configuration data" }
+ { "text" = "Config::Model enables [...] must:" }
+ { "text" = " - if the configuration data" }
+ { "text" = "." }
+ { "text" = "With the elements above, (...) on ReadLine."} }
+
+
+ let simple_bin_pkg1 = "Package: libconfig-model-perl\n"
+ . "Architecture: all\n"
+ . "Description: dummy1\n"
+ . " dummy text 1\n"
+
+ let simple_bin_pkg2 = "Package: libconfig-model2-perl\n"
+ . "Architecture: all\n"
+ . "Description: dummy2\n"
+ . " dummy text 2\n"
+
+  test Debctrl.src_entries get (source . uploaders)
+ = { "Source" = "libtest-distmanifest-perl" }
+ { "Uploaders"
+ { "1" = "foo@bar"}
+ { "2" = "Dominique Dumont <dominique.dumont@xx.yyy>" }
+ { "3" = "gregor herrmann <gregoa@xxx.yy>" } }
+
+ test Debctrl.bin_entries get simple_bin_pkg1 =
+ { "Package" = "libconfig-model-perl" }
+ { "Architecture" = "all" }
+ { "Description" { "summary" = "dummy1" } {"text" = "dummy text 1" } }
+
+
+ let paragraph_simple = source . uploaders ."\n"
+ . simple_bin_pkg1 . "\n"
+ . simple_bin_pkg2
+
+ test Debctrl.lns get paragraph_simple =
+ { "srcpkg" { "Source" = "libtest-distmanifest-perl" }
+ { "Uploaders"
+ { "1" = "foo@bar"}
+ { "2" = "Dominique Dumont <dominique.dumont@xx.yyy>" }
+ { "3" = "gregor herrmann <gregoa@xxx.yy>" } } }
+ { "binpkg" { "Package" = "libconfig-model-perl" }
+ { "Architecture" = "all" }
+ { "Description" { "summary" = "dummy1" }
+ { "text" = "dummy text 1" } } }
+ { "binpkg" { "Package" = "libconfig-model2-perl" }
+ { "Architecture" = "all" }
+ { "Description" { "summary" = "dummy2" }
+ { "text" = "dummy text 2" } } }
+
+
+(* PUT TESTS *)
+
+test Debctrl.src_entries
+ put uploaders
+ after set "/Uploaders/1" "foo@bar"
+ = uploaders
+
+test Debctrl.src_entries
+ put uploaders
+ after set "/Uploaders/1" "bar@bar"
+ = "Uploaders: bar@bar, Dominique Dumont <dominique.dumont@xx.yyy>,\n"
+ . " gregor herrmann <gregoa@xxx.yy>\n"
+
+test Debctrl.src_entries
+ put uploaders
+ after set "/Uploaders/4" "baz@bar"
+ = "Uploaders: foo@bar, Dominique Dumont <dominique.dumont@xx.yyy>,\n"
+ . " gregor herrmann <gregoa@xxx.yy>,\n"
+ . " baz@bar\n"
+
+test Debctrl.lns put (source."\nPackage: test\nDescription: foobar\n")
+ after
+ set "/srcpkg/Uploaders/1" "foo@bar" ;
+ set "/srcpkg/Uploaders/2" "Dominique Dumont <dominique.dumont@xx.yyy>" ;
+ set "/srcpkg/Uploaders/3" "gregor herrmann <gregoa@xxx.yy>" ;
+ set "/srcpkg/Build-Depends-Indep/and[1]/or/perl/version/relation" ">=" ;
+ set "/srcpkg/Build-Depends-Indep/and[1]/or/perl/version/number" "5.8.8-12" ;
+ set "/srcpkg/Build-Depends-Indep/and[1]/or/perl/arch/prefix" "!" ;
+ set "/srcpkg/Build-Depends-Indep/and[1]/or/perl/arch/name" "hurd-i386" ;
+ set "/srcpkg/Build-Depends-Indep/and[2]/or[1]/perl-modules/version/relation" ">=" ;
+ set "/srcpkg/Build-Depends-Indep/and[2]/or[1]/perl-modules/version/number" "5.10" ;
+ set "/srcpkg/Build-Depends-Indep/and[2]/or[2]/libmodule-build-perl" "";
+ set "/srcpkg/Build-Depends-Indep/and[3]/or/libcarp-assert-more-perl" "" ;
+ set "/srcpkg/Build-Depends-Indep/and[4]/or/libconfig-tiny-perl" "" ;
+ set "/binpkg[1]/Package" "libconfig-model-perl" ;
+  (* must remove the Description node because set cannot insert Architecture before Description *)
+ rm "/binpkg[1]/Description" ;
+ set "/binpkg/Architecture" "all" ;
+ set "/binpkg[1]/Description/summary" "dummy1" ;
+ set "/binpkg[1]/Description/text" "dummy text 1" ;
+ set "/binpkg[2]/Package" "libconfig-model2-perl" ;
+ set "/binpkg[2]/Architecture" "all" ;
+ set "/binpkg[2]/Description/summary" "dummy2" ;
+ set "/binpkg[2]/Description/text" "dummy text 2"
+ =
+"Source: libtest-distmanifest-perl
+Uploaders: foo@bar,
+ Dominique Dumont <dominique.dumont@xx.yyy>,
+ gregor herrmann <gregoa@xxx.yy>
+Build-Depends-Indep: perl ( >= 5.8.8-12 ) [ !hurd-i386 ],
+ perl-modules ( >= 5.10 ) | libmodule-build-perl,
+ libcarp-assert-more-perl,
+ libconfig-tiny-perl
+
+Package: libconfig-model-perl
+Architecture: all
+Description: dummy1
+ dummy text 1
+
+Package: libconfig-model2-perl
+Architecture: all
+Description: dummy2
+ dummy text 2
+"
+
+(* Test Augeas' own control file *)
+let augeas_control = "Source: augeas
+Priority: optional
+Maintainer: Nicolas Valcárcel Scerpella (Canonical) <nicolas.valcarcel@canonical.com>
+Uploaders: Free Ekanayaka <freee@debian.org>, Micah Anderson <micah@debian.org>
+Build-Depends: debhelper (>= 5), autotools-dev, libreadline-dev, chrpath,
+ naturaldocs (>= 1.51-1), texlive-latex-base
+Standards-Version: 3.9.2
+Section: libs
+Homepage: http://augeas.net/
+DM-Upload-Allowed: yes
+
+Package: augeas-tools
+Section: admin
+Architecture: any
+Depends: ${shlibs:Depends}, ${misc:Depends}
+Description: Augeas command line tools
+ Augeas is a configuration editing tool. It parses configuration files in their
+ native formats and transforms them into a tree. Configuration changes are made
+ by manipulating this tree and saving it back into native config files.
+ .
+ This package provides command line tools based on libaugeas0:
+ - augtool, a tool to manage configuration files.
+ - augparse, a testing and debugging tool for augeas lenses.
+
+Package: libaugeas-dev
+Section: libdevel
+Architecture: any
+Depends: libaugeas0 (= ${binary:Version}), ${shlibs:Depends}, ${misc:Depends}
+Description: Development files for writing applications based on libaugeas0
+ Augeas is a configuration editing tool. It parses configuration files in their
+ native formats and transforms them into a tree. Configuration changes are made
+ by manipulating this tree and saving it back into native config files.
+ .
+ This package includes the development files to write programs using the Augeas
+ API.
+"
+test Debctrl.lns get augeas_control =
+ { "srcpkg"
+ { "Source" = "augeas" }
+ { "Priority" = "optional" }
+ { "Maintainer" = "Nicolas Valcárcel Scerpella (Canonical) <nicolas.valcarcel@canonical.com>" }
+ { "Uploaders"
+ { "1" = "Free Ekanayaka <freee@debian.org>" }
+ { "2" = "Micah Anderson <micah@debian.org>" }
+ }
+ { "Build-Depends"
+ { "and"
+ { "or"
+ { "debhelper"
+ { "version"
+ { "relation" = ">=" }
+ { "number" = "5" }
+ }
+ }
+ }
+ }
+ { "and"
+ { "or"
+ { "autotools-dev" }
+ }
+ }
+ { "and"
+ { "or"
+ { "libreadline-dev" }
+ }
+ }
+ { "and"
+ { "or"
+ { "chrpath" }
+ }
+ }
+ { "and"
+ { "or"
+ { "naturaldocs"
+ { "version"
+ { "relation" = ">=" }
+ { "number" = "1.51-1" }
+ }
+ }
+ }
+ }
+ { "and"
+ { "or"
+ { "texlive-latex-base" }
+ }
+ }
+ }
+ { "Standards-Version" = "3.9.2" }
+ { "Section" = "libs" }
+ { "Homepage" = "http://augeas.net/" }
+ { "DM-Upload-Allowed" = "yes" }
+ }
+ { "binpkg"
+ { "Package" = "augeas-tools" }
+ { "Section" = "admin" }
+ { "Architecture" = "any" }
+ { "Depends"
+ { "and"
+ { "or"
+ { "${shlibs:Depends}" }
+ }
+ }
+ { "and"
+ { "or"
+ { "${misc:Depends}" }
+ }
+ }
+ }
+ { "Description"
+ { "summary" = "Augeas command line tools" }
+ { "text" = "Augeas is a configuration editing tool. It parses configuration files in their" }
+ { "text" = "native formats and transforms them into a tree. Configuration changes are made" }
+ { "text" = "by manipulating this tree and saving it back into native config files." }
+ { "text" = "." }
+ { "text" = "This package provides command line tools based on libaugeas0:" }
+ { "text" = "- augtool, a tool to manage configuration files." }
+ { "text" = "- augparse, a testing and debugging tool for augeas lenses." }
+ }
+ }
+ { "binpkg"
+ { "Package" = "libaugeas-dev" }
+ { "Section" = "libdevel" }
+ { "Architecture" = "any" }
+ { "Depends"
+ { "and"
+ { "or"
+ { "libaugeas0"
+ { "version"
+ { "relation" = "=" }
+ { "number" = "${binary:Version}" }
+ }
+ }
+ }
+ }
+ { "and"
+ { "or"
+ { "${shlibs:Depends}" }
+ }
+ }
+ { "and"
+ { "or"
+ { "${misc:Depends}" }
+ }
+ }
+ }
+ { "Description"
+ { "summary" = "Development files for writing applications based on libaugeas0" }
+ { "text" = "Augeas is a configuration editing tool. It parses configuration files in their" }
+ { "text" = "native formats and transforms them into a tree. Configuration changes are made" }
+ { "text" = "by manipulating this tree and saving it back into native config files." }
+ { "text" = "." }
+ { "text" = "This package includes the development files to write programs using the Augeas" }
+ { "text" = "API." }
+ }
+ }
+
+(* Bug #267: Python module extensions, from Debian Python Policy, chapter 2 *)
+let python_control = "Source: graphite-web
+Maintainer: Will Pearson (Editure Key) <wpearson@editure.co.uk>
+Section: python
+Priority: optional
+Build-Depends: debhelper (>= 7), python-support (>= 0.8.4)
+Standards-Version: 3.7.2
+XS-Python-Version: current
+
+Package: python-graphite-web
+Architecture: all
+Depends: ${python:Depends}
+XB-Python-Version: ${python:Versions}
+Provides: ${python:Provides}
+Description: Enterprise scalable realtime graphing
+"
+test Debctrl.lns get python_control =
+ { "srcpkg"
+ { "Source" = "graphite-web" }
+ { "Maintainer" = "Will Pearson (Editure Key) <wpearson@editure.co.uk>" }
+ { "Section" = "python" }
+ { "Priority" = "optional" }
+ { "Build-Depends"
+ { "and"
+ { "or"
+ { "debhelper"
+ { "version"
+ { "relation" = ">=" }
+ { "number" = "7" }
+ }
+ }
+ }
+ }
+ { "and"
+ { "or"
+ { "python-support"
+ { "version"
+ { "relation" = ">=" }
+ { "number" = "0.8.4" }
+ }
+ }
+ }
+ }
+ }
+ { "Standards-Version" = "3.7.2" }
+ { "XS-Python-Version" = "current" }
+ }
+ { "binpkg"
+ { "Package" = "python-graphite-web" }
+ { "Architecture" = "all" }
+ { "Depends"
+ { "and"
+ { "or"
+ { "${python:Depends}" }
+ }
+ }
+ }
+ { "XB-Python-Version" = "${python:Versions}" }
+ { "Provides"
+ { "and"
+ { "or"
+ { "${python:Provides}" }
+ }
+ }
+ }
+ { "Description"
+ { "summary" = "Enterprise scalable realtime graphing" }
+ }
+ }
+
--- /dev/null
+module Test_desktop =
+
+let conf = "# A comment
+[Desktop Entry]
+Version=1.0
+Type=Application
+Name=Foo Viewer
+# another comment
+Comment=The best viewer for Foo objects available!
+TryExec=fooview
+Exec=fooview %F
+Icon=fooview
+MimeType=image/x-foo;
+X-KDE-Library=libfooview
+X-KDE-FactoryName=fooviewfactory
+X-KDE-ServiceType=FooService
+"
+
+test Desktop.lns get conf =
+ { "#comment" = "A comment" }
+ { "Desktop Entry"
+ { "Version" = "1.0" }
+ { "Type" = "Application" }
+ { "Name" = "Foo Viewer" }
+ { "#comment" = "another comment" }
+ { "Comment" = "The best viewer for Foo objects available!" }
+ { "TryExec" = "fooview" }
+ { "Exec" = "fooview %F" }
+ { "Icon" = "fooview" }
+ { "MimeType" = "image/x-foo;" }
+ { "X-KDE-Library" = "libfooview" }
+ { "X-KDE-FactoryName" = "fooviewfactory" }
+ { "X-KDE-ServiceType" = "FooService" } }
+
+(* Entries with square brackets *)
+test Desktop.lns get "[Desktop Entry]
+X-GNOME-FullName[ca]=En canadien
+" =
+ { "Desktop Entry"
+ { "X-GNOME-FullName[ca]" = "En canadien" } }
+
+(* Test: Desktop.lns
+ Allow @ in setting (GH issue #92) *)
+test Desktop.lns get "[Desktop Entry]
+Name[sr@latin] = foobar\n" =
+ { "Desktop Entry"
+ { "Name[sr@latin]" = "foobar" } }
--- /dev/null
+module Test_DevfsRules =
+
+ let manpage_example = "[localrules=10]
+add path 'da*s*' mode 0660 group usb
+"
+
+ test DevfsRules.lns get manpage_example =
+ { "localrules" { "id" = "10" }
+ { "1" = "add path 'da*s*' mode 0660 group usb" } }
+
+
+ let example = "[devfsrules_jail_unhide_usb_printer_and_scanner=30]
+add include $devfsrules_hide_all
+add include $devfsrules_unhide_basic
+add include $devfsrules_unhide_login
+add path 'ulpt*' mode 0660 group printscan unhide
+add path 'unlpt*' mode 0660 group printscan unhide
+add path 'ugen2.8' mode 0660 group printscan unhide # Scanner (ugen2.8 is a symlink to usb/2.8.0)
+add path usb unhide
+add path usbctl unhide
+add path 'usb/2.8.0' mode 0660 group printscan unhide
+
+[devfsrules_jail_unhide_usb_scanner_only=30]
+add include $devfsrules_hide_all
+add include $devfsrules_unhide_basic
+add include $devfsrules_unhide_login
+add path 'ugen2.8' mode 0660 group scan unhide # Scanner
+add path usb unhide
+add path usbctl unhide
+add path 'usb/2.8.0' mode 0660 group scan unhide
+"
+
+ test DevfsRules.lns get example =
+ { "devfsrules_jail_unhide_usb_printer_and_scanner" { "id" = "30" }
+ { "1" = "add include $devfsrules_hide_all" }
+ { "2" = "add include $devfsrules_unhide_basic" }
+ { "3" = "add include $devfsrules_unhide_login" }
+ { "4" = "add path 'ulpt*' mode 0660 group printscan unhide" }
+ { "5" = "add path 'unlpt*' mode 0660 group printscan unhide" }
+ { "6" = "add path 'ugen2.8' mode 0660 group printscan unhide"
+ { "#comment" = "Scanner (ugen2.8 is a symlink to usb/2.8.0)" }
+ }
+ { "7" = "add path usb unhide" }
+ { "8" = "add path usbctl unhide" }
+ { "9" = "add path 'usb/2.8.0' mode 0660 group printscan unhide" }
+ { }
+ }
+ { "devfsrules_jail_unhide_usb_scanner_only" { "id" = "30" }
+ { "1" = "add include $devfsrules_hide_all" }
+ { "2" = "add include $devfsrules_unhide_basic" }
+ { "3" = "add include $devfsrules_unhide_login" }
+ { "4" = "add path 'ugen2.8' mode 0660 group scan unhide"
+ { "#comment" = "Scanner" }
+ }
+ { "5" = "add path usb unhide" }
+ { "6" = "add path usbctl unhide" }
+ { "7" = "add path 'usb/2.8.0' mode 0660 group scan unhide" }
+ }
+
--- /dev/null
+module Test_device_map =
+
+ let conf = "# this device map was generated by anaconda
+(fd0) /dev/fda
+(hd0) /dev/sda
+(cd0) /dev/cdrom
+(hd1,1) /dev/sdb1
+(hd0,a) /dev/sda1
+(0x80) /dev/sda
+(128) /dev/sda
+"
+
+ test Device_map.lns get conf =
+ { "#comment" = "this device map was generated by anaconda" }
+ { "fd0" = "/dev/fda" }
+ { "hd0" = "/dev/sda" }
+ { "cd0" = "/dev/cdrom" }
+ { "hd1,1" = "/dev/sdb1" }
+ { "hd0,a" = "/dev/sda1" }
+ { "0x80" = "/dev/sda" }
+ { "128" = "/dev/sda" }
+
+ test Device_map.lns put conf after
+ set "hd2\,1" "/dev/sdb1"
+ = "# this device map was generated by anaconda
+(fd0) /dev/fda
+(hd0) /dev/sda
+(cd0) /dev/cdrom
+(hd1,1) /dev/sdb1
+(hd0,a) /dev/sda1
+(0x80) /dev/sda
+(128) /dev/sda
+(hd2,1)\t/dev/sdb1
+"
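The Device_map tests show `(name) path` lines becoming `{ "name" = "path" }` nodes. Purely as an illustration of that shape (a hypothetical Python sketch, not the lens itself):

```python
import re

# Mimic the Device_map lens: "(name) path" lines become (name, path) pairs,
# "#" lines become "#comment" entries, as in the tree asserted above.
LINE = re.compile(r"\((?P<name>[^)]+)\)\s+(?P<path>\S+)")

def parse_device_map(text):
    result = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("#"):
            result.append(("#comment", line[1:].strip()))
            continue
        m = LINE.match(line)
        if m:
            result.append((m.group("name"), m.group("path")))
    return result
```

Note that device names such as `hd1,1` or `0x80` pass through unchanged, which is why the put test must escape the comma (`hd2\,1`) on the Augeas path side.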
--- /dev/null
+module Test_dhclient =
+
+ let conf =" # Sample dhclient.conf
+ # Protocol timing
+timeout 3; # Expect a fast server
+retry
+ 10;
+# Lease requirements and requests
+request
+ subnet-mask,
+ broadcast-address,
+ ntp-servers;
+# Dynamic DNS
+send
+ fqdn.fqdn
+ \"grosse.fugue.com.\";
+
+option rfc3442-classless-static-routes code 121 = array of unsigned integer 8;
+
+interface ep0 {
+ script /sbin/dhclient-script;
+ send dhcp-client-identifier 1:0:a0:24:ab:fb:9c;
+ send dhcp-lease-time 3600;
+ request subnet-mask, broadcast-address, time-offset, routers,
+ domain-name, domain-name-servers, host-name;
+ media media10baseT/UTP, \"media10base2/BNC\";
+}
+
+alias {
+ interface \"ep0\";
+ fixed-address 192.5.5.213;
+ option subnet-mask 255.255.255.255;
+}
+
+lease {
+ interface \"eth0\";
+ fixed-address 192.33.137.200;
+ medium \"link0 link1\";
+ vendor option space \"name\";
+ option host-name \"andare.swiftmedia.com\";
+ option subnet-mask 255.255.255.0;
+ option broadcast-address 192.33.137.255;
+ option routers 192.33.137.250;
+ option domain-name-servers 127.0.0.1;
+ renew 2 2000/1/12 00:00:01;
+ rebind 2 2000/1/12 00:00:01;
+ expire 2 2000/1/12 00:00:01;
+}
+"
+
+ test Dhclient.lns get conf =
+ { "#comment" = "Sample dhclient.conf" }
+ { "#comment" = "Protocol timing" }
+ { "timeout" = "3"
+ { "#comment" = "Expect a fast server" } }
+ { "retry" = "10" }
+ { "#comment" = "Lease requirements and requests" }
+ { "request"
+ { "1" = "subnet-mask" }
+ { "2" = "broadcast-address" }
+ { "3" = "ntp-servers" } }
+ { "#comment" = "Dynamic DNS" }
+ { "send"
+ { "fqdn.fqdn" = "\"grosse.fugue.com.\"" } }
+ {}
+ { "option"
+ { "rfc3442-classless-static-routes"
+ { "code" = "121" }
+ { "value" = "array of unsigned integer 8" } } }
+ {}
+ { "interface" = "ep0"
+ { "script" = "/sbin/dhclient-script" }
+ { "send"
+ { "dhcp-client-identifier" = "1:0:a0:24:ab:fb:9c" } }
+ { "send"
+ { "dhcp-lease-time" = "3600" } }
+ { "request"
+ { "1" = "subnet-mask" }
+ { "2" = "broadcast-address" }
+ { "3" = "time-offset" }
+ { "4" = "routers" }
+ { "5" = "domain-name" }
+ { "6" = "domain-name-servers" }
+ { "7" = "host-name" } }
+ { "media"
+ { "1" = "media10baseT/UTP" }
+ { "2" = "\"media10base2/BNC\"" } } }
+ {}
+ { "alias"
+ { "interface" = "\"ep0\"" }
+ { "fixed-address" = "192.5.5.213" }
+ { "option"
+ { "subnet-mask" = "255.255.255.255" } } }
+ {}
+ { "lease"
+ { "interface" = "\"eth0\"" }
+ { "fixed-address" = "192.33.137.200" }
+ { "medium" = "\"link0 link1\"" }
+ { "vendor option space" = "\"name\"" }
+ { "option"
+ { "host-name" = "\"andare.swiftmedia.com\"" } }
+ { "option"
+ { "subnet-mask" = "255.255.255.0" } }
+ { "option"
+ { "broadcast-address" = "192.33.137.255" } }
+ { "option"
+ { "routers" = "192.33.137.250" } }
+ { "option"
+ { "domain-name-servers" = "127.0.0.1" } }
+ { "renew"
+ { "weekday" = "2" }
+ { "year" = "2000" }
+ { "month" = "1" }
+ { "day" = "12" }
+ { "hour" = "00" }
+ { "minute" = "00" }
+ { "second" = "01" } }
+ { "rebind"
+ { "weekday" = "2" }
+ { "year" = "2000" }
+ { "month" = "1" }
+ { "day" = "12" }
+ { "hour" = "00" }
+ { "minute" = "00" }
+ { "second" = "01" } }
+ { "expire"
+ { "weekday" = "2" }
+ { "year" = "2000" }
+ { "month" = "1" }
+ { "day" = "12" }
+ { "hour" = "00" }
+ { "minute" = "00" }
+ { "second" = "01" } } }
+
+
+test Dhclient.lns get "append domain-name-servers 127.0.0.1;\n" =
+ { "append"
+ { "domain-name-servers" = "127.0.0.1" } }
+
+test Dhclient.lns put "" after set "/prepend/domain-name-servers" "127.0.0.1" =
+ "prepend domain-name-servers 127.0.0.1;\n"
+
+(* When = is used before the value, it's an evaluated string, see dhcp-eval *)
+test Dhclient.lns get "send dhcp-client-identifier = hardware;\n" =
+ { "send"
+ { "dhcp-client-identifier"
+ { "#eval" = "hardware" } } }
+
+test Dhclient.lns put "send host-name = gethostname();\n"
+ after set "/send/host-name/#eval" "test" = "send host-name = test;\n"
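The lease tests above split a dhclient date such as `2 2000/1/12 00:00:01` into weekday/year/month/day/hour/minute/second nodes. A minimal Python sketch of that split (illustrative only, not the Dhclient lens):

```python
import re

# Split a dhclient lease date into the same named fields the Dhclient
# lens exposes under renew/rebind/expire in the tests above.
DATE = re.compile(
    r"(?P<weekday>\d+)\s+(?P<year>\d+)/(?P<month>\d+)/(?P<day>\d+)"
    r"\s+(?P<hour>\d+):(?P<minute>\d+):(?P<second>\d+)")

def parse_lease_date(s):
    m = DATE.match(s.strip())
    if not m:
        raise ValueError("not a dhclient lease date: %r" % s)
    return m.groupdict()  # e.g. {"weekday": "2", "year": "2000", ...}
```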
--- /dev/null
+module Test_dhcpd =
+
+let lns = Dhcpd.lns
+
+let conf = "#
+# Sample configuration file for ISC dhcpd for Debian
+#
+# Attention: If /etc/ltsp/dhcpd.conf exists, that will be used as
+# configuration file instead of this file.
+#
+# $Id: dhcpd.conf,v 1.1.1.1 2002/05/21 00:07:44 peloy Exp $
+#
+
+# The ddns-updates-style parameter controls whether or not the server will
+# attempt to do a DNS update when a lease is confirmed. We default to the
+# behavior of the version 2 packages ('none', since DHCP v2 didn't
+# have support for DDNS.)
+ddns-update-style none;
+
+# option definitions common to all supported networks...
+option domain-name \"example.org\";
+option domain-name-servers ns1.example.org, ns2.example.org;
+
+default-lease-time 600;
+max-lease-time 7200;
+
+# If this DHCP server is the official DHCP server for the local
+# network, the authoritative directive should be uncommented.
+authoritative;
+
+allow booting;
+allow bootp;
+
+# Use this to send dhcp log messages to a different log file (you also
+# have to hack syslog.conf to complete the redirection).
+log-facility local7;
+
+# No service will be given on this subnet, but declaring it helps the
+# DHCP server to understand the network topology.
+
+subnet 10.152.187.0 netmask 255.255.255.0 {
+}
+
+# This is a very basic subnet declaration.
+
+subnet 10.254.239.0 netmask 255.255.255.224 {
+ range 10.254.239.10 10.254.239.20;
+ option routers rtr-239-0-1.example.org, rtr-239-0-2.example.org;
+}
+
+# This declaration allows BOOTP clients to get dynamic addresses,
+# which we don't really recommend.
+
+subnet 10.254.239.32 netmask 255.255.255.224 {
+ range dynamic-bootp 10.254.239.40 10.254.239.60;
+ option broadcast-address 10.254.239.31;
+ option routers rtr-239-32-1.example.org;
+}
+
+# A slightly different configuration for an internal subnet.
+subnet 10.5.5.0 netmask 255.255.255.224 {
+ range 10.5.5.26 10.5.5.30;
+ option domain-name-servers ns1.internal.example.org;
+ option domain-name \"internal.example.org\";
+ option routers 10.5.5.1;
+ option broadcast-address 10.5.5.31;
+ default-lease-time 600;
+ max-lease-time 7200;
+}
+
+# Hosts which require special configuration options can be listed in
+# host statements. If no address is specified, the address will be
+# allocated dynamically (if possible), but the host-specific information
+# will still come from the host declaration.
+
+host passacaglia {
+ hardware ethernet 0:0:c0:5d:bd:95;
+ filename \"vmunix.passacaglia\";
+ server-name \"toccata.fugue.com\";
+}
+
+# Fixed IP addresses can also be specified for hosts. These addresses
+# should not also be listed as being available for dynamic assignment.
+# Hosts for which fixed IP addresses have been specified can boot using
+# BOOTP or DHCP. Hosts for which no fixed address is specified can only
+# be booted with DHCP, unless there is an address range on the subnet
+# to which a BOOTP client is connected which has the dynamic-bootp flag
+# set.
+host fantasia {
+ hardware ethernet 08:00:07:26:c0:a5;
+ fixed-address fantasia.fugue.com;
+}
+
+# You can declare a class of clients and then do address allocation
+# based on that. The example below shows a case where all clients
+# in a certain class get addresses on the 10.17.224/24 subnet, and all
+# other clients get addresses on the 10.0.29/24 subnet.
+
+#class \"foo\" {
+# match if substring (option vendor-class-identifier, 0, 4) = \"SUNW\";
+#}
+
+shared-network 224-29 {
+ subnet 10.17.224.0 netmask 255.255.255.0 {
+ option routers rtr-224.example.org;
+ }
+ subnet 10.0.29.0 netmask 255.255.255.0 {
+ option routers rtr-29.example.org;
+ }
+ pool {
+ allow members of \"foo\";
+ range 10.17.224.10 10.17.224.250;
+ }
+ pool {
+ deny members of \"foo\";
+ range 10.0.29.10 10.0.29.230;
+ }
+}
+"
+
+test lns get "authoritative;" = { "authoritative" }
+test lns get "ddns-update-style none;" = { "ddns-update-style" = "none" }
+test lns get "option domain-name \"example.org\";" =
+ { "option"
+ { "domain-name"
+ { "arg" = "example.org" }
+ }
+ }
+
+test lns get "option domain-name-servers ns1.example.org, ns2.example.org;" =
+ { "option"
+ { "domain-name-servers"
+ { "arg" = "ns1.example.org" }
+ { "arg" = "ns2.example.org" }
+ }
+ }
+
+test lns get "default-lease-time 600;" = { "default-lease-time" = "600" }
+test lns get "range 10.254.239.60;" =
+{ "range"
+ { "to" = "10.254.239.60" }
+ }
+
+test lns get "range dynamic-bootp 10.254.239.60;" =
+ { "range"
+ { "flag" = "dynamic-bootp" }
+ { "to" = "10.254.239.60" }
+ }
+
+test lns get "range dynamic-bootp 10.254.239.40 10.254.239.60;" =
+ { "range"
+ { "flag" = "dynamic-bootp" }
+ { "from" = "10.254.239.40" }
+ { "to" = "10.254.239.60" }
+ }
+
+test lns get "subnet 10.152.187.0 netmask 255.255.255.0 {}\n" =
+ { "subnet"
+ { "network" = "10.152.187.0" }
+ { "netmask" = "255.255.255.0" }
+ }
+
+test lns get " pool {
+ pool {
+
+ }
+}
+" =
+ { "pool"
+ { "pool" }
+ }
+
+test lns get "group { host some-host {hardware ethernet 00:00:aa:bb:cc:dd;
+fixed-address 10.1.1.1;}}" =
+ { "group"
+ { "host" = "some-host"
+ { "hardware"
+ { "type" = "ethernet" }
+ { "address" = "00:00:aa:bb:cc:dd" }
+ }
+ { "fixed-address" = "10.1.1.1" }
+ }
+ }
+
+test lns get "group fan-tas_tic { }" =
+ { "group" = "fan-tas_tic" }
+
+test Dhcpd.stmt_secu get "allow members of \"foo\";" = { "allow-members-of" = "foo" }
+test Dhcpd.stmt_secu get "allow booting;" = { "allow" = "booting" }
+test Dhcpd.stmt_secu get "allow bootp;" = { "allow" = "bootp" }
+test Dhcpd.stmt_option get "option voip-boot-server code 66 = string;" =
+ { "rfc-code"
+ { "label" = "voip-boot-server" }
+ { "code" = "66" }
+ { "type" = "string" }
+ }
+
+test Dhcpd.stmt_option get "option special-option code 25 = array of string;" =
+ { "rfc-code"
+ { "label" = "special-option" }
+ { "code" = "25" }
+ { "type" = "array of string" }
+ }
+
+test Dhcpd.stmt_option get "option special-option code 25 = integer 32;" =
+ { "rfc-code"
+ { "label" = "special-option" }
+ { "code" = "25" }
+ { "type" = "integer 32" }
+ }
+
+
+test Dhcpd.stmt_option get "option special-option code 25 = array of integer 32;" =
+ { "rfc-code"
+ { "label" = "special-option" }
+ { "code" = "25" }
+ { "type" = "array of integer 32" }
+ }
+
+
+
+test Dhcpd.lns get "authoritative;
+log-facility local7;
+ddns-update-style none;
+default-lease-time 21600;
+max-lease-time 43200;
+
+# Additional options for VOIP
+option voip-boot-server code 66 = string;
+option voip-vlan-id code 128 = string;
+" =
+ { "authoritative" }
+ { "log-facility" = "local7" }
+ { "ddns-update-style" = "none" }
+ { "default-lease-time" = "21600" }
+ { "max-lease-time" = "43200"
+ { "#comment" = "Additional options for VOIP" }
+ }
+ { "rfc-code"
+ { "label" = "voip-boot-server" }
+ { "code" = "66" }
+ { "type" = "string" }
+ }
+ { "rfc-code"
+ { "label" = "voip-vlan-id" }
+ { "code" = "128" }
+ { "type" = "string" }
+ }
+
+
+test Dhcpd.lns get "
+option domain-name-servers 10.1.1.1, 10.11.2.1, 10.1.3.1;
+next-server 10.1.1.1;
+
+failover peer \"redondance01\" {
+ primary;
+ address 10.1.1.1;
+ port 647;
+ peer address 10.1.1.1;
+ peer port 647;
+ max-response-delay 20;
+ max-unacked-updates 10;
+ mclt 3600; #comment.
+ split 128; #comment.
+ load balance max seconds 3;
+ }
+" =
+ { }
+ { "option"
+ { "domain-name-servers"
+ { "arg" = "10.1.1.1" }
+ { "arg" = "10.11.2.1" }
+ { "arg" = "10.1.3.1" }
+ }
+ }
+ { "next-server" = "10.1.1.1" }
+ { "failover peer" = "redondance01"
+ { "primary" }
+ { "address" = "10.1.1.1" }
+ { "port" = "647" }
+ { "peer address" = "10.1.1.1" }
+ { "peer port" = "647" }
+ { "max-response-delay" = "20" }
+ { "max-unacked-updates" = "10" }
+ { "mclt" = "3600"
+ { "#comment" = "comment." }
+ }
+ { "split" = "128"
+ { "#comment" = "comment." }
+ }
+ { "load balance max seconds" = "3" }
+ }
+
+
+(* test get and put for record types *)
+let record_test = "option test_records code 123 = { string, ip-address, integer 32, ip6-address, domain-list };"
+
+test Dhcpd.lns get record_test =
+ { "rfc-code"
+ { "label" = "test_records" }
+ { "code" = "123" }
+ { "record"
+ { "1" = "string" }
+ { "2" = "ip-address" }
+ { "3" = "integer 32" }
+ { "4" = "ip6-address" }
+ { "5" = "domain-list" }
+ }
+ }
+
+test Dhcpd.lns put record_test after set "/rfc-code[1]/code" "124" =
+ "option test_records code 124 = { string, ip-address, integer 32, ip6-address, domain-list };"
+
+test Dhcpd.lns get "
+option CallManager code 150 = ip-address;
+option slp-directory-agent true 10.1.1.1, 10.2.2.2;
+option slp-service-scope true \"SLP-GLOBAL\";
+option nds-context \"EXAMPLE\";
+option nds-tree-name \"EXAMPLE\";
+" =
+ { }
+ { "rfc-code"
+ { "label" = "CallManager" }
+ { "code" = "150" }
+ { "type" = "ip-address" }
+ }
+ { "option"
+ { "slp-directory-agent" = "true"
+ { "arg" = "10.1.1.1" }
+ { "arg" = "10.2.2.2" }
+ }
+ }
+ { "option"
+ { "slp-service-scope" = "true"
+ { "arg" = "SLP-GLOBAL" }
+ }
+ }
+ { "option"
+ { "nds-context"
+ { "arg" = "EXAMPLE" }
+ }
+ }
+ { "option"
+ { "nds-tree-name"
+ { "arg" = "EXAMPLE" }
+ }
+ }
+
+
+test Dhcpd.lns get "option voip-vlan-id \"VLAN=1234;\";" =
+ { "option"
+ { "voip-vlan-id"
+ { "arg" = "VLAN=1234;" }
+ }
+ }
+
+test Dhcpd.lns get "option domain-name \"x.example.com y.example.com z.example.com\";" =
+ { "option"
+ { "domain-name"
+ { "arg" = "x.example.com y.example.com z.example.com" }
+ }
+ }
+
+test Dhcpd.lns get "include \"/etc/dhcpd.master\";" =
+ { "include" = "/etc/dhcpd.master" }
+
+test Dhcpd.lns put "\n" after set "/include" "/etc/dhcpd.master" =
+ "\ninclude \"/etc/dhcpd.master\";\n"
+
+test Dhcpd.fct_args get "(option dhcp-client-identifier, 1, 3)" =
+ { "args"
+ { "arg" = "option dhcp-client-identifier" }
+ { "arg" = "1" }
+ { "arg" = "3" }
+ }
+
+test Dhcpd.stmt_match get "match if substring (option dhcp-client-identifier, 1, 3) = \"RAS\";" =
+ { "match"
+ { "function" = "substring"
+ { "args"
+ { "arg" = "option dhcp-client-identifier" }
+ { "arg" = "1" }
+ { "arg" = "3" }
+ }
+ }
+ { "value" = "RAS" }
+ }
+
+test Dhcpd.stmt_match get "match if suffix (option dhcp-client-identifier, 4) = \"RAS\";" =
+ { "match"
+ { "function" = "suffix"
+ { "args"
+ { "arg" = "option dhcp-client-identifier" }
+ { "arg" = "4" }
+ }
+ }
+ { "value" = "RAS" }
+ }
+
+test Dhcpd.stmt_match get "match if option vendor-class-identifier=\"RAS\";" =
+ { "match"
+ { "option" = "vendor-class-identifier"
+ { "value" = "RAS" }
+ }
+ }
+
+
+test Dhcpd.lns get "match pick-first-value (option dhcp-client-identifier, hardware);" =
+ { "match"
+ { "function" = "pick-first-value"
+ { "args"
+ { "arg" = "option dhcp-client-identifier" }
+ { "arg" = "hardware" }
+ }
+ }
+ }
+
+test Dhcpd.fct_args get "(16, 32, \"\", substring(hardware, 0, 4))" =
+ { "args"
+ { "arg" = "16" }
+ { "arg" = "32" }
+ { "arg" = "\"\"" }
+ { "arg" = "substring(hardware, 0, 4)" }
+ }
+
+test Dhcpd.stmt_match get "match if binary-to-ascii(16, 32, \"\", substring(hardware, 0, 4)) = \"1525400\";" =
+ { "match"
+ { "function" = "binary-to-ascii"
+ { "args"
+ { "arg" = "16" }
+ { "arg" = "32" }
+ { "arg" = "\"\"" }
+ { "arg" = "substring(hardware, 0, 4)" }
+ }
+ }
+ { "value" = "1525400" }
+ }
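The `args` trees above keep a nested call such as `substring(hardware, 0, 4)` as a single argument, so the argument list must be split only at top-level commas. A minimal standalone sketch of that behavior (illustrative Python, not the lens; `split_args` is our name):

```python
# Illustrative only -- not the Dhcpd lens. Split a dhcpd function-call
# argument list at top-level commas, keeping nested calls intact.
def split_args(s):
    args, depth, cur = [], 0, ""
    for ch in s.strip()[1:-1]:  # drop the outer parentheses
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
        if ch == "," and depth == 0:
            args.append(cur.strip())
            cur = ""
        else:
            cur += ch
    args.append(cur.strip())
    return args

print(split_args('(16, 32, "", substring(hardware, 0, 4))'))
# -> ['16', '32', '""', 'substring(hardware, 0, 4)']
```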
+
+test Dhcpd.lns get "subclass allocation-class-1 1:8:0:2b:4c:39:ad;" =
+ { "subclass"
+ { "name" = "allocation-class-1" }
+ { "value" = "1:8:0:2b:4c:39:ad" }
+ }
+
+
+test Dhcpd.lns get "subclass \"allocation-class-1\" 1:8:0:2b:4c:39:ad;" =
+ { "subclass"
+ { "name" = "allocation-class-1" }
+ { "value" = "1:8:0:2b:4c:39:ad" }
+ }
+
+test Dhcpd.lns get "subclass \"quoted class\" \"quoted value\";" =
+ { "subclass"
+ { "name" = "quoted class" }
+ { "value" = "quoted value" }
+ }
+
+
+(* overall test *)
+test Dhcpd.lns put conf after rm "/x" = conf
+
+(* bug #293: primary should support an argument *)

+let input293 = "zone EXAMPLE.ORG. {
+ primary 127.0.0.1;
+}"
+
+test Dhcpd.lns get input293 =
+ { "zone" = "EXAMPLE.ORG."
+ { "primary" = "127.0.0.1" }
+ }
+
+(* bug #311: filename should be quoted *)
+let input311 = "subnet 172.16.0.0 netmask 255.255.255.0 {
+filename \"pxelinux.0\";
+}"
+
+test Dhcpd.lns put "subnet 172.16.0.0 netmask 255.255.255.0 {
+}" after
+ set "subnet/filename" "pxelinux.0" = input311
+
+(* GH issue #34: support conditional structures *)
+let gh34_empty = "if exists dhcp-parameter-request-list {
+}\n"
+
+test Dhcpd.lns get gh34_empty =
+ { "@if" = "exists dhcp-parameter-request-list" }
+
+let gh34_empty_multi = "subnet 192.168.100.0 netmask 255.255.255.0 {
+ if true {
+ } elsif false {
+ } else {
+ }
+}\n"
+
+test Dhcpd.lns get gh34_empty_multi =
+ { "subnet"
+ { "network" = "192.168.100.0" }
+ { "netmask" = "255.255.255.0" }
+ { "@if" = "true"
+ { "@elsif" = "false" }
+ { "@else" } }
+ }
+
+let gh34_simple = "if exists dhcp-parameter-request-list {
+ default-lease-time 600;
+ } else {
+default-lease-time 200;
+}\n"
+
+test Dhcpd.lns get gh34_simple =
+ { "@if" = "exists dhcp-parameter-request-list"
+ { "default-lease-time" = "600" }
+ { "@else"
+ { "default-lease-time" = "200" } } }
+
+test Dhcpd.lns get "omapi-key fookey;" =
+ { "omapi-key" = "fookey" }
+
+(* almost all DHCP groups should support braces starting on the next line *)
+test Dhcpd.lns get "class introduction
+{
+}" =
+ { "class" = "introduction" }
+
+(* equals should work the same *)
+test Dhcpd.lns get "option test_records code 123 =
+ string;" =
+ { "rfc-code"
+ { "label" = "test_records" }
+ { "code" = "123" }
+ { "type" = "string" }
+ }
+
+test Dhcpd.lns get "deny members of \"Are things like () allowed?\";" =
+ { "deny-members-of" = "Are things like () allowed?" }
+
+test Dhcpd.lns get "deny unknown clients;" =
+ { "deny" = "unknown clients" }
+test Dhcpd.lns get "deny known-clients;" =
+ { "deny" = "known-clients" }
+
+test Dhcpd.lns get "set ClientMac = binary-to-ascii(16, 8, \":\" , substring(hardware, 1, 6));" =
+ { "set" = "ClientMac"
+ { "value" = "binary-to-ascii(16, 8, \":\" , substring(hardware, 1, 6))" }
+ }
+
+test Dhcpd.lns get "set myvariable = foo;" =
+ { "set" = "myvariable"
+ { "value" = "foo" }
+ }
+
+test Dhcpd.stmt_hardware get "hardware fddi 00:01:02:03:04:05;" =
+ { "hardware"
+ { "type" = "fddi" }
+ { "address" = "00:01:02:03:04:05" }
+ }
+
+test Dhcpd.lns get "on commit
+{
+ set test = thing;
+}" =
+ { "on" = "commit"
+ { "set" = "test"
+ { "value" = "thing" }
+ }
+ }
+
+(* key block get/put/set test *)
+let key_tests = "key sample {
+ algorithm hmac-md5;
+ secret \"secret==\";
+}
+
+key \"interesting\" { };
+
+key \"third key\" {
+ secret \"two==\";
+}"
+
+test Dhcpd.lns get key_tests =
+ { "key_block" = "sample"
+ { "algorithm" = "hmac-md5" }
+ { "secret" = "secret==" }
+ }
+ { "key_block" = "interesting" }
+ { "key_block" = "third key"
+ { "secret" = "two==" }
+ }
+
+test Dhcpd.lns put key_tests after set "/key_block[1]" "sample2" =
+ "key sample2 {
+ algorithm hmac-md5;
+ secret \"secret==\";
+}
+
+key \"interesting\" { };
+
+key \"third key\" {
+ secret \"two==\";
+}"
+
+test Dhcpd.lns get "group \"hello\" { }" =
+ { "group" = "hello" }
+
+test Dhcpd.lns get "class \"testing class with spaces and quotes and ()\" {}" =
+ { "class" = "testing class with spaces and quotes and ()" }
--- /dev/null
+module Test_Dns_Zone =
+
+let lns = Dns_Zone.lns
+
+(* RFC 1034 §6 *)
+test lns get "
+EDU. IN SOA SRI-NIC.ARPA. HOSTMASTER.SRI-NIC.ARPA. (
+ 870729 ;serial
+ 1800 ;refresh every 30 minutes
+ 300 ;retry every 5 minutes
+ 604800 ;expire after a week
+ 86400 ;minimum of a day
+ )
+ NS SRI-NIC.ARPA.
+ NS C.ISI.EDU.
+
+UCI 172800 NS ICS.UCI
+ 172800 NS ROME.UCI
+ICS.UCI 172800 A 192.5.19.1
+ROME.UCI 172800 A 192.5.19.31
+
+ISI 172800 NS VAXA.ISI
+ 172800 NS A.ISI
+ 172800 NS VENERA.ISI.EDU.
+VAXA.ISI 172800 A 10.2.0.27
+ 172800 A 128.9.0.33
+VENERA.ISI.EDU. 172800 A 10.1.0.52
+ 172800 A 128.9.0.32
+A.ISI 172800 A 26.3.0.103
+
+UDEL.EDU. 172800 NS LOUIE.UDEL.EDU.
+ 172800 NS UMN-REI-UC.ARPA.
+LOUIE.UDEL.EDU. 172800 A 10.0.0.96
+ 172800 A 192.5.39.3
+
+YALE.EDU. 172800 NS YALE.ARPA.
+YALE.EDU. 172800 NS YALE-BULLDOG.ARPA.
+
+MIT.EDU. 43200 NS XX.LCS.MIT.EDU.
+ 43200 NS ACHILLES.MIT.EDU.
+XX.LCS.MIT.EDU. 43200 A 10.0.0.44
+ACHILLES.MIT.EDU. 43200 A 18.72.0.8
+" =
+ { "EDU."
+ { "1"
+ { "class" = "IN" }
+ { "type" = "SOA" }
+ { "mname" = "SRI-NIC.ARPA." }
+ { "rname" = "HOSTMASTER.SRI-NIC.ARPA." }
+ { "serial" = "870729" }
+ { "refresh" = "1800" }
+ { "retry" = "300" }
+ { "expiry" = "604800" }
+ { "minimum" = "86400" }
+ }
+ { "2" { "type" = "NS" } { "rdata" = "SRI-NIC.ARPA." } }
+ { "3" { "type" = "NS" } { "rdata" = "C.ISI.EDU." } }
+ }
+ { "UCI"
+ { "1" { "ttl" = "172800" } { "type" = "NS" } { "rdata" = "ICS.UCI" } }
+ { "2" { "ttl" = "172800" } { "type" = "NS" } { "rdata" = "ROME.UCI" } }
+ }
+ { "ICS.UCI"
+ { "1" { "ttl" = "172800" } { "type" = "A" } { "rdata" = "192.5.19.1" } }
+ }
+ { "ROME.UCI"
+ { "1" { "ttl" = "172800" } { "type" = "A" } { "rdata" = "192.5.19.31" } }
+ }
+ { "ISI"
+ { "1" { "ttl" = "172800" } { "type" = "NS" } { "rdata" = "VAXA.ISI" } }
+ { "2" { "ttl" = "172800" } { "type" = "NS" } { "rdata" = "A.ISI" } }
+ { "3"
+ { "ttl" = "172800" } { "type" = "NS" } { "rdata" = "VENERA.ISI.EDU." }
+ }
+ }
+ { "VAXA.ISI"
+ { "1" { "ttl" = "172800" } { "type" = "A" } { "rdata" = "10.2.0.27" } }
+ { "2" { "ttl" = "172800" } { "type" = "A" } { "rdata" = "128.9.0.33" } }
+ }
+ { "VENERA.ISI.EDU."
+ { "1" { "ttl" = "172800" } { "type" = "A" } { "rdata" = "10.1.0.52" } }
+ { "2" { "ttl" = "172800" } { "type" = "A" } { "rdata" = "128.9.0.32" } }
+ }
+ { "A.ISI"
+ { "1" { "ttl" = "172800" } { "type" = "A" } { "rdata" = "26.3.0.103" } }
+ }
+ { "UDEL.EDU."
+ { "1"
+ { "ttl" = "172800" } { "type" = "NS" } { "rdata" = "LOUIE.UDEL.EDU." }
+ }
+ { "2"
+ { "ttl" = "172800" } { "type" = "NS" } { "rdata" = "UMN-REI-UC.ARPA." }
+ }
+ }
+ { "LOUIE.UDEL.EDU."
+ { "1" { "ttl" = "172800" } { "type" = "A" } { "rdata" = "10.0.0.96" } }
+ { "2" { "ttl" = "172800" } { "type" = "A" } { "rdata" = "192.5.39.3" } }
+ }
+ { "YALE.EDU."
+ { "1" { "ttl" = "172800" } { "type" = "NS" } { "rdata" = "YALE.ARPA." } }
+ }
+ { "YALE.EDU."
+ { "1"
+ { "ttl" = "172800" } { "type" = "NS" } { "rdata" = "YALE-BULLDOG.ARPA." }
+ }
+ }
+ { "MIT.EDU."
+ { "1"
+ { "ttl" = "43200" } { "type" = "NS" } { "rdata" = "XX.LCS.MIT.EDU." }
+ }
+ { "2"
+ { "ttl" = "43200" } { "type" = "NS" } { "rdata" = "ACHILLES.MIT.EDU." }
+ }
+ }
+ { "XX.LCS.MIT.EDU."
+ { "1" { "ttl" = "43200" } { "type" = "A" } { "rdata" = "10.0.0.44" } }
+ }
+ { "ACHILLES.MIT.EDU."
+ { "1" { "ttl" = "43200" } { "type" = "A" } { "rdata" = "18.72.0.8" } }
+ }
+
+
+(* RFC 1035 §5.3 *)
+test lns get "
+@ IN SOA VENERA Action\.domains (
+ 20 ; SERIAL
+ 7200 ; REFRESH
+ 600 ; RETRY
+ 3600000; EXPIRE
+ 60) ; MINIMUM
+
+ NS A.ISI.EDU.
+ NS VENERA
+ NS VAXA
+ MX 10 VENERA
+ MX 20 VAXA
+
+A A 26.3.0.103
+
+VENERA A 10.1.0.52
+ A 128.9.0.32
+
+VAXA A 10.2.0.27
+ A 128.9.0.33
+" =
+ { "@"
+ { "1"
+ { "class" = "IN" }
+ { "type" = "SOA" }
+ { "mname" = "VENERA" }
+ { "rname" = "Action\\.domains" }
+ { "serial" = "20" }
+ { "refresh" = "7200" }
+ { "retry" = "600" }
+ { "expiry" = "3600000" }
+ { "minimum" = "60" }
+ }
+ { "2" { "type" = "NS" } { "rdata" = "A.ISI.EDU." } }
+ { "3" { "type" = "NS" } { "rdata" = "VENERA" } }
+ { "4" { "type" = "NS" } { "rdata" = "VAXA" } }
+ { "5" { "type" = "MX" } { "priority" = "10" } { "exchange" = "VENERA" } }
+ { "6" { "type" = "MX" } { "priority" = "20" } { "exchange" = "VAXA" } }
+ }
+ { "A" { "1" { "type" = "A" } { "rdata" = "26.3.0.103" } } }
+ { "VENERA"
+ { "1" { "type" = "A" } { "rdata" = "10.1.0.52" } }
+ { "2" { "type" = "A" } { "rdata" = "128.9.0.32" } }
+ }
+ { "VAXA"
+ { "1" { "type" = "A" } { "rdata" = "10.2.0.27" } }
+ { "2" { "type" = "A" } { "rdata" = "128.9.0.33" } }
+ }
+
+
+(* RFC 2782 *)
+test lns get "
+$ORIGIN example.com.
+@ SOA server.example.com. root.example.com. (
+ 1995032001 3600 3600 604800 86400 )
+ NS server.example.com.
+ NS ns1.ip-provider.net.
+ NS ns2.ip-provider.net.
+; foobar - use old-slow-box or new-fast-box if either is
+; available, make three quarters of the logins go to
+; new-fast-box.
+_foobar._tcp SRV 0 1 9 old-slow-box.example.com.
+ SRV 0 3 9 new-fast-box.example.com.
+; if neither old-slow-box or new-fast-box is up, switch to
+; using the sysdmin's box and the server
+ SRV 1 0 9 sysadmins-box.example.com.
+ SRV 1 0 9 server.example.com.
+server A 172.30.79.10
+old-slow-box A 172.30.79.11
+sysadmins-box A 172.30.79.12
+new-fast-box A 172.30.79.13
+; NO other services are supported
+*._tcp SRV 0 0 0 .
+*._udp SRV 0 0 0 .
+" =
+ { "$ORIGIN" = "example.com." }
+ { "@"
+ { "1"
+ { "type" = "SOA" }
+ { "mname" = "server.example.com." }
+ { "rname" = "root.example.com." }
+ { "serial" = "1995032001" }
+ { "refresh" = "3600" }
+ { "retry" = "3600" }
+ { "expiry" = "604800" }
+ { "minimum" = "86400" }
+ }
+ { "2" { "type" = "NS" } { "rdata" = "server.example.com." } }
+ { "3" { "type" = "NS" } { "rdata" = "ns1.ip-provider.net." } }
+ { "4" { "type" = "NS" } { "rdata" = "ns2.ip-provider.net." } }
+ }
+ { "_foobar._tcp"
+ { "1"
+ { "type" = "SRV" }
+ { "priority" = "0" }
+ { "weight" = "1" }
+ { "port" = "9" }
+ { "target" = "old-slow-box.example.com." }
+ }
+ { "2"
+ { "type" = "SRV" }
+ { "priority" = "0" }
+ { "weight" = "3" }
+ { "port" = "9" }
+ { "target" = "new-fast-box.example.com." }
+ }
+ { "3"
+ { "type" = "SRV" }
+ { "priority" = "1" }
+ { "weight" = "0" }
+ { "port" = "9" }
+ { "target" = "sysadmins-box.example.com." }
+ }
+ { "4"
+ { "type" = "SRV" }
+ { "priority" = "1" }
+ { "weight" = "0" }
+ { "port" = "9" }
+ { "target" = "server.example.com." }
+ }
+ }
+ { "server" { "1" { "type" = "A" } { "rdata" = "172.30.79.10" } } }
+ { "old-slow-box" { "1" { "type" = "A" } { "rdata" = "172.30.79.11" } } }
+ { "sysadmins-box" { "1" { "type" = "A" } { "rdata" = "172.30.79.12" } } }
+ { "new-fast-box" { "1" { "type" = "A" } { "rdata" = "172.30.79.13" } } }
+ { "*._tcp"
+ { "1"
+ { "type" = "SRV" }
+ { "priority" = "0" }
+ { "weight" = "0" }
+ { "port" = "0" }
+ { "target" = "." }
+ }
+ }
+ { "*._udp"
+ { "1"
+ { "type" = "SRV" }
+ { "priority" = "0" }
+ { "weight" = "0" }
+ { "port" = "0" }
+ { "target" = "." }
+ }
+ }
+
+
+(* RFC 3403 §6.2 *)
+test lns get "
+$ORIGIN 2.1.2.1.5.5.5.0.7.7.1.e164.arpa.
+ IN NAPTR 100 10 \"u\" \"sip+E2U\" \"!^.*$!sip:information@foo.se!i\" .
+ IN NAPTR 102 10 \"u\" \"smtp+E2U\" \"!^.*$!mailto:information@foo.se!i\" .
+" =
+ { "$ORIGIN" = "2.1.2.1.5.5.5.0.7.7.1.e164.arpa." }
+ { "@"
+ { "1"
+ { "class" = "IN" }
+ { "type" = "NAPTR" }
+ { "order" = "100" }
+ { "preference" = "10" }
+ { "flags" = "\"u\"" }
+ { "service" = "\"sip+E2U\"" }
+ { "regexp" = "\"!^.*$!sip:information@foo.se!i\"" }
+ { "replacement" = "." }
+ }
+ { "2"
+ { "class" = "IN" }
+ { "type" = "NAPTR" }
+ { "order" = "102" }
+ { "preference" = "10" }
+ { "flags" = "\"u\"" }
+ { "service" = "\"smtp+E2U\"" }
+ { "regexp" = "\"!^.*$!mailto:information@foo.se!i\"" }
+ { "replacement" = "." }
+ }
+ }
+
+
+(* SOA record on a single line *)
+test lns get "
+$ORIGIN example.com.
+@ IN SOA ns root.example.com. (1 2 3 4 5)
+" =
+ { "$ORIGIN" = "example.com." }
+ { "@"
+ { "1"
+ { "class" = "IN" }
+ { "type" = "SOA" }
+ { "mname" = "ns" }
+ { "rname" = "root.example.com." }
+ { "serial" = "1" }
+ { "refresh" = "2" }
+ { "retry" = "3" }
+ { "expiry" = "4" }
+ { "minimum" = "5" }
+ }
+ }
+
+
+(* Different ordering of TTL and class *)
+test lns get "
+$ORIGIN example.com.
+foo 1D IN A 10.1.2.3
+bar IN 2W A 10.4.5.6
+" =
+ { "$ORIGIN" = "example.com." }
+ { "foo"
+ { "1"
+ { "ttl" = "1D" }
+ { "class" = "IN" }
+ { "type" = "A" }
+ { "rdata" = "10.1.2.3" }
+ }
+ }
+ { "bar"
+ { "1"
+ { "class" = "IN" }
+ { "ttl" = "2W" }
+ { "type" = "A" }
+ { "rdata" = "10.4.5.6" }
+ }
+ }
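The two records above show that the TTL and the class may appear in either order before the record type. A standalone sketch of that either-order prefix scan (illustrative Python, not the lens; `split_prefix` and the class set are our names):

```python
# Illustrative only -- not the Dns_Zone lens. Pull an optional TTL and an
# optional class out of the tokens before the record type, in either order.
CLASSES = {"IN", "CH", "HS"}

def split_prefix(tokens):
    ttl = rrclass = None
    tokens = list(tokens)
    while tokens and (tokens[0] in CLASSES or tokens[0][0].isdigit()):
        if tokens[0] in CLASSES:
            rrclass = tokens.pop(0)
        else:
            ttl = tokens.pop(0)     # TTLs like "1D" or "2W" start with a digit
    return ttl, rrclass, tokens

print(split_prefix(["1D", "IN", "A", "10.1.2.3"]))
# -> ('1D', 'IN', ['A', '10.1.2.3'])
print(split_prefix(["IN", "2W", "A", "10.4.5.6"]))
# -> ('2W', 'IN', ['A', '10.4.5.6'])
```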
+
+
+(* Escaping *)
+test lns get "
+$ORIGIN example.com.
+foo TXT abc\\\\def\\\"ghi
+bar TXT \"ab cd\\\\ef\\\"gh\"
+" =
+ { "$ORIGIN" = "example.com." }
+ { "foo" { "1" { "type" = "TXT" } { "rdata" = "abc\\\\def\\\"ghi" } } }
+ { "bar" { "1" { "type" = "TXT" } { "rdata" = "\"ab cd\\\\ef\\\"gh\"" } } }
+
+
+(* Whitespace at the end of the line *)
+test lns get "
+$ORIGIN example.com. \n@ IN SOA ns root.example.com. (1 2 3 4 5) \t
+foo 1D IN A 10.1.2.3\t
+" =
+ { "$ORIGIN" = "example.com." }
+ { "@"
+ { "1"
+ { "class" = "IN" }
+ { "type" = "SOA" }
+ { "mname" = "ns" }
+ { "rname" = "root.example.com." }
+ { "serial" = "1" }
+ { "refresh" = "2" }
+ { "retry" = "3" }
+ { "expiry" = "4" }
+ { "minimum" = "5" }
+ }
+ }
+ { "foo"
+ { "1"
+ { "ttl" = "1D" }
+ { "class" = "IN" }
+ { "type" = "A" }
+ { "rdata" = "10.1.2.3" }
+ }
+ }
--- /dev/null
+module Test_dnsmasq =
+
+let conf = "# Configuration file for dnsmasq.
+#
+#bogus-priv
+
+conf-dir=/etc/dnsmasq.d
+selfmx
+
+address=/foo.com/bar.net/10.1.2.3
+
+server=10.4.5.6#1234
+server=/bar.com/foo.net/10.7.8.9
+server=/foo.org/bar.org/10.3.2.1@eth0#5678
+server=/baz.org/#
+server=/baz.net/#@eth1
+server=10.6.5.4#1234@eth0#5678
+server=/qux.com/qux.net/
+"
+
+test Dnsmasq.lns get conf =
+ { "#comment" = "Configuration file for dnsmasq." }
+ {}
+ { "#comment" = "bogus-priv" }
+ {}
+ { "conf-dir" = "/etc/dnsmasq.d" }
+ { "selfmx" }
+ {}
+ { "address" = "10.1.2.3"
+ { "domain" = "foo.com" }
+ { "domain" = "bar.net" }
+ }
+ {}
+ { "server" = "10.4.5.6"
+ { "port" = "1234" }
+ }
+ { "server" = "10.7.8.9"
+ { "domain" = "bar.com" }
+ { "domain" = "foo.net" }
+ }
+ { "server" = "10.3.2.1"
+ { "domain" = "foo.org" }
+ { "domain" = "bar.org" }
+ { "source" = "eth0"
+ { "port" = "5678" }
+ }
+ }
+ { "server" = "#"
+ { "domain" = "baz.org" }
+ }
+ { "server" = "#"
+ { "domain" = "baz.net" }
+ { "source" = "eth1" }
+ }
+ { "server" = "10.6.5.4"
+ { "port" = "1234" }
+ { "source" = "eth0"
+ { "port" = "5678" }
+ }
+ }
+ { "server"
+ { "domain" = "qux.com" }
+ { "domain" = "qux.net" }
+ }
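The `server=` forms above combine up to four parts: leading `/domain/` segments, an address (or a bare `#` meaning "use the servers from resolv.conf"), a port after `#`, and a source interface after `@` with its own optional port. A minimal standalone sketch of that decomposition (illustrative Python, independent of the Augeas lens; `parse_server` is our name):

```python
# Illustrative only -- not the Dnsmasq lens. Decompose a dnsmasq "server="
# value into the domain/address/port/source parts shown in the trees above.
def parse_server(value):
    parts = value.split("/")
    if len(parts) > 1:
        domains, rest = parts[1:-1], parts[-1]   # /dom1/dom2/rest
    else:
        domains, rest = [], parts[0]
    source = source_port = None
    if "@" in rest:
        rest, src = rest.split("@", 1)
        source, _, sp = src.partition("#")
        source_port = sp or None
    if rest == "#":                              # bare "#": servers from resolv.conf
        address, port = "#", None
    else:
        address, _, p = rest.partition("#")
        address, port = address or None, p or None
    return {"domains": domains, "address": address,
            "port": port, "source": source, "source_port": source_port}

print(parse_server("10.4.5.6#1234"))
print(parse_server("/foo.org/bar.org/10.3.2.1@eth0#5678"))
```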
--- /dev/null
+(*
+Module: Test_Dovecot
+ Provides unit tests and examples for the <Dovecot> lens.
+*)
+
+module Test_Dovecot =
+
+(* *********************** /etc/dovecot.conf ******************************** *)
+
+let dovecot_conf = "# Dovecot configuration file
+
+# If you're in a hurry, see http://wiki2.dovecot.org/QuickConfiguration
+
+# Default values are shown for each setting, it's not required to uncomment
+# those. These are exceptions to this though: No sections (e.g. namespace {})
+# or plugin settings are added by default, they're listed only as examples.
+# Paths are also just examples with the real defaults being based on configure
+# options. The paths listed here are for configure --prefix=/usr
+# --sysconfdir=/etc --localstatedir=/var
+
+# include_try command
+!include_try /usr/share/dovecot/protocols.d/*.protocol
+
+# Wildcard, comma and space in value
+listen = *, ::
+
+# Filesystem path in value
+base_dir = /var/run/dovecot/
+instance_name = dovecot
+
+# Space and dot in value
+login_greeting = Dovecot ready.
+
+# Empty values
+login_trusted_networks =
+login_access_sockets =
+
+# Simple values
+verbose_proctitle = no
+shutdown_clients = yes
+
+# Number in value
+doveadm_worker_count = 0
+# Dash in value
+doveadm_socket_path = doveadm-server
+
+import_environment = TZ
+
+##
+## Comment
+##
+
+# Simple commented dict block
+dict {
+ #quota = mysql:/etc/dovecot/dovecot-dict-sql.conf.ext
+ #expire = sqlite:/etc/dovecot/dovecot-dict-sql.conf.ext
+}
+
+# Simple uncommented dict block
+dict {
+ quota = mysql:/etc/dovecot/dovecot-dict-sql.conf.ext
+ expire = sqlite:/etc/dovecot/dovecot-dict-sql.conf.ext
+}
+
+# Include command
+!include conf.d/*.conf
+
+# Include_try command
+!include_try local.conf
+
+"
+
+test Dovecot.lns get dovecot_conf =
+ { "#comment" = "Dovecot configuration file" }
+ { }
+ { "#comment" = "If you're in a hurry, see http://wiki2.dovecot.org/QuickConfiguration" }
+ { }
+ { "#comment" = "Default values are shown for each setting, it's not required to uncomment" }
+ { "#comment" = "those. These are exceptions to this though: No sections (e.g. namespace {})" }
+ { "#comment" = "or plugin settings are added by default, they're listed only as examples." }
+ { "#comment" = "Paths are also just examples with the real defaults being based on configure" }
+ { "#comment" = "options. The paths listed here are for configure --prefix=/usr" }
+ { "#comment" = "--sysconfdir=/etc --localstatedir=/var" }
+ { }
+ { "#comment" = "include_try command" }
+ { "include_try" = "/usr/share/dovecot/protocols.d/*.protocol" }
+ { }
+ { "#comment" = "Wildcard, comma and space in value" }
+ { "listen" = "*, ::" }
+ { }
+ { "#comment" = "Filesystem path in value" }
+ { "base_dir" = "/var/run/dovecot/" }
+ { "instance_name" = "dovecot" }
+ { }
+ { "#comment" = "Space and dot in value" }
+ { "login_greeting" = "Dovecot ready." }
+ { }
+ { "#comment" = "Empty values" }
+ { "login_trusted_networks" }
+ { "login_access_sockets" }
+ { }
+ { "#comment" = "Simple values" }
+ { "verbose_proctitle" = "no" }
+ { "shutdown_clients" = "yes" }
+ { }
+ { "#comment" = "Number in value" }
+ { "doveadm_worker_count" = "0" }
+ { "#comment" = "Dash in value" }
+ { "doveadm_socket_path" = "doveadm-server" }
+ { }
+ { "import_environment" = "TZ" }
+ { }
+ { "#comment" = "#" }
+ { "#comment" = "# Comment" }
+ { "#comment" = "#" }
+ { }
+ { "#comment" = "Simple commented dict block" }
+ { "dict"
+ { "#comment" = "quota = mysql:/etc/dovecot/dovecot-dict-sql.conf.ext" }
+ { "#comment" = "expire = sqlite:/etc/dovecot/dovecot-dict-sql.conf.ext" }
+ }
+ { }
+ { "#comment" = "Simple uncommented dict block" }
+ { "dict"
+ { "quota" = "mysql:/etc/dovecot/dovecot-dict-sql.conf.ext" }
+ { "expire" = "sqlite:/etc/dovecot/dovecot-dict-sql.conf.ext" }
+ }
+ { }
+ { "#comment" = "Include command" }
+ { "include" = "conf.d/*.conf" }
+ { }
+ { "#comment" = "Include_try command" }
+ { "include_try" = "local.conf" }
+ { }
+
+
+
+(* *********************************** dict ********************************* *)
+
+let dovecot_dict_sql_conf = "connect = host=localhost dbname=mails user=testuser password=pass
+
+# CREATE TABLE quota (
+# username varchar(100) not null,
+# bytes bigint not null default 0,
+# messages integer not null default 0,
+# primary key (username)
+# );
+
+map {
+ pattern = priv/quota/storage
+ table = quota
+ username_field = username
+ value_field = bytes
+}
+map {
+ pattern = priv/quota/messages
+ table = quota
+ username_field = username
+ value_field = messages
+}
+
+# CREATE TABLE expires (
+# username varchar(100) not null,
+# mailbox varchar(255) not null,
+# expire_stamp integer not null,
+# primary key (username, mailbox)
+# );
+
+map {
+ pattern = shared/expire/$user/$mailbox
+ table = expires
+ value_field = expire_stamp
+
+ fields {
+ username = $user
+ mailbox = $mailbox
+ }
+}
+"
+
+test Dovecot.lns get dovecot_dict_sql_conf =
+ { "connect" = "host=localhost dbname=mails user=testuser password=pass" }
+ { }
+ { "#comment" = "CREATE TABLE quota (" }
+ { "#comment" = "username varchar(100) not null," }
+ { "#comment" = "bytes bigint not null default 0," }
+ { "#comment" = "messages integer not null default 0," }
+ { "#comment" = "primary key (username)" }
+ { "#comment" = ");" }
+ { }
+ { "map"
+ { "pattern" = "priv/quota/storage" }
+ { "table" = "quota" }
+ { "username_field" = "username" }
+ { "value_field" = "bytes" }
+ }
+ { "map"
+ { "pattern" = "priv/quota/messages" }
+ { "table" = "quota" }
+ { "username_field" = "username" }
+ { "value_field" = "messages" }
+ }
+ { }
+ { "#comment" = "CREATE TABLE expires (" }
+ { "#comment" = "username varchar(100) not null," }
+ { "#comment" = "mailbox varchar(255) not null," }
+ { "#comment" = "expire_stamp integer not null," }
+ { "#comment" = "primary key (username, mailbox)" }
+ { "#comment" = ");" }
+ { }
+ { "map"
+ { "pattern" = "shared/expire/$user/$mailbox" }
+ { "table" = "expires" }
+ { "value_field" = "expire_stamp" }
+ { }
+ { "fields"
+ { "username" = "$user" }
+ { "mailbox" = "$mailbox" }
+ }
+ }
+
+(* ********************************** auth ********************************** *)
+
+let auth_conf = "## Authentication processes
+
+disable_plaintext_auth = yes
+auth_cache_size = 0
+auth_cache_ttl = 1 hour
+auth_cache_negative_ttl = 1 hour
+auth_realms =
+auth_default_realm =
+auth_username_chars = abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890.-_@
+auth_username_translation =
+auth_username_format =
+auth_master_user_separator =
+auth_anonymous_username = anonymous
+auth_worker_max_count = 30
+auth_gssapi_hostname =
+auth_krb5_keytab =
+auth_use_winbind = no
+auth_winbind_helper_path = /usr/bin/ntlm_auth
+auth_failure_delay = 2 secs
+auth_ssl_require_client_cert = no
+auth_ssl_username_from_cert = no
+auth_mechanisms = plain
+
+!include auth-deny.conf.ext
+!include auth-master.conf.ext
+!include auth-system.conf.ext
+!include auth-sql.conf.ext
+!include auth-ldap.conf.ext
+!include auth-passwdfile.conf.ext
+!include auth-checkpassword.conf.ext
+!include auth-vpopmail.conf.ext
+!include auth-static.conf.ext
+
+passdb {
+ driver = passwd-file
+ deny = yes
+
+ # File contains a list of usernames, one per line
+ args = /etc/dovecot/deny-users
+}
+
+passdb {
+ driver = passwd-file
+ master = yes
+ args = /etc/dovecot/master-users
+
+ # Unless you're using PAM, you probably still want the destination user to
+ # be looked up from passdb that it really exists. pass=yes does that.
+ pass = yes
+}
+
+userdb {
+ driver = passwd-file
+ args = username_format=%u /etc/dovecot/users
+}
+"
+
+test Dovecot.lns get auth_conf =
+ { "#comment" = "# Authentication processes" }
+ { }
+ { "disable_plaintext_auth" = "yes" }
+ { "auth_cache_size" = "0" }
+ { "auth_cache_ttl" = "1 hour" }
+ { "auth_cache_negative_ttl" = "1 hour" }
+ { "auth_realms" }
+ { "auth_default_realm" }
+ { "auth_username_chars" = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890.-_@" }
+ { "auth_username_translation" }
+ { "auth_username_format" }
+ { "auth_master_user_separator" }
+ { "auth_anonymous_username" = "anonymous" }
+ { "auth_worker_max_count" = "30" }
+ { "auth_gssapi_hostname" }
+ { "auth_krb5_keytab" }
+ { "auth_use_winbind" = "no" }
+ { "auth_winbind_helper_path" = "/usr/bin/ntlm_auth" }
+ { "auth_failure_delay" = "2 secs" }
+ { "auth_ssl_require_client_cert" = "no" }
+ { "auth_ssl_username_from_cert" = "no" }
+ { "auth_mechanisms" = "plain" }
+ { }
+ { "include" = "auth-deny.conf.ext" }
+ { "include" = "auth-master.conf.ext" }
+ { "include" = "auth-system.conf.ext" }
+ { "include" = "auth-sql.conf.ext" }
+ { "include" = "auth-ldap.conf.ext" }
+ { "include" = "auth-passwdfile.conf.ext" }
+ { "include" = "auth-checkpassword.conf.ext" }
+ { "include" = "auth-vpopmail.conf.ext" }
+ { "include" = "auth-static.conf.ext" }
+ { }
+ { "passdb"
+ { "driver" = "passwd-file" }
+ { "deny" = "yes" }
+ { }
+ { "#comment" = "File contains a list of usernames, one per line" }
+ { "args" = "/etc/dovecot/deny-users" }
+ }
+ { }
+ { "passdb"
+ { "driver" = "passwd-file" }
+ { "master" = "yes" }
+ { "args" = "/etc/dovecot/master-users" }
+ { }
+ { "#comment" = "Unless you're using PAM, you probably still want the destination user to" }
+ { "#comment" = "be looked up from passdb that it really exists. pass=yes does that." }
+ { "pass" = "yes" }
+ }
+ { }
+ { "userdb"
+ { "driver" = "passwd-file" }
+ { "args" = "username_format=%u /etc/dovecot/users" }
+ }
+
+(* ******************************** director ******************************** *)
+
+let director_conf = "## Director-specific settings.
+director_servers =
+director_mail_servers =
+director_user_expire = 15 min
+director_doveadm_port = 0
+
+service director {
+ unix_listener login/director {
+ mode = 0666
+ }
+ fifo_listener login/proxy-notify {
+ mode = 0666
+ }
+ unix_listener director-userdb {
+ #mode = 0600
+ }
+ inet_listener {
+ port =
+ }
+}
+
+service imap-login {
+ executable = imap-login director
+}
+service pop3-login {
+ executable = pop3-login director
+}
+protocol lmtp {
+ auth_socket_path = director-userdb
+}
+"
+
+test Dovecot.lns get director_conf =
+ { "#comment" = "# Director-specific settings." }
+ { "director_servers" }
+ { "director_mail_servers" }
+ { "director_user_expire" = "15 min" }
+ { "director_doveadm_port" = "0" }
+ { }
+ { "service" = "director"
+ { "unix_listener" = "login/director"
+ { "mode" = "0666" }
+ }
+ { "fifo_listener" = "login/proxy-notify"
+ { "mode" = "0666" }
+ }
+ { "unix_listener" = "director-userdb"
+ { "#comment" = "mode = 0600" }
+ }
+ { "inet_listener"
+ { "port" }
+ }
+ }
+ { }
+ { "service" = "imap-login"
+ { "executable" = "imap-login director" }
+ }
+ { "service" = "pop3-login"
+ { "executable" = "pop3-login director" }
+ }
+ { "protocol" = "lmtp"
+ { "auth_socket_path" = "director-userdb" }
+ }
+
+(* ********************************* logging ******************************** *)
+
+let logging_conf = "## Log destination.
+log_path = syslog
+info_log_path =
+debug_log_path =
+syslog_facility = mail
+auth_verbose = no
+auth_verbose_passwords = no
+auth_debug = no
+auth_debug_passwords = no
+mail_debug = no
+verbose_ssl = no
+
+plugin {
+ mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename
+ mail_log_fields = uid box msgid size
+}
+
+log_timestamp = \"%b %d %H:%M:%S \"
+login_log_format_elements = user=<%u> method=%m rip=%r lip=%l mpid=%e %c
+login_log_format = %$: %s
+mail_log_prefix = \"%s(%u): \"
+deliver_log_format = msgid=%m: %$
+"
+
+test Dovecot.lns get logging_conf =
+ { "#comment" = "# Log destination." }
+ { "log_path" = "syslog" }
+ { "info_log_path" }
+ { "debug_log_path" }
+ { "syslog_facility" = "mail" }
+ { "auth_verbose" = "no" }
+ { "auth_verbose_passwords" = "no" }
+ { "auth_debug" = "no" }
+ { "auth_debug_passwords" = "no" }
+ { "mail_debug" = "no" }
+ { "verbose_ssl" = "no" }
+ { }
+ { "plugin"
+ { "mail_log_events" = "delete undelete expunge copy mailbox_delete mailbox_rename" }
+ { "mail_log_fields" = "uid box msgid size" }
+ }
+ { }
+ { "log_timestamp" = "\"%b %d %H:%M:%S \"" }
+ { "login_log_format_elements" = "user=<%u> method=%m rip=%r lip=%l mpid=%e %c" }
+ { "login_log_format" = "%$: %s" }
+ { "mail_log_prefix" = "\"%s(%u): \"" }
+ { "deliver_log_format" = "msgid=%m: %$" }
+
+
+(* ********************************** mail ********************************** *)
+
+let mail_conf = "## Mailbox locations and namespaces
+mail_location =
+namespace {
+ type = private
+ separator =
+ prefix =
+ location =
+ inbox = no
+ hidden = no
+ list = yes
+ subscriptions = yes
+ mailbox \"Sent Messages\" {
+ special_use = \Sent
+ }
+}
+
+# Example shared namespace configuration
+namespace {
+ type = shared
+ separator = /
+ prefix = shared/%%u/
+ location = maildir:%%h/Maildir:INDEX=~/Maildir/shared/%%u
+ subscriptions = no
+ list = children
+}
+
+mail_uid =
+mail_gid =
+mail_privileged_group =
+mail_access_groups =
+mail_full_filesystem_access = no
+mmap_disable = no
+dotlock_use_excl = yes
+mail_fsync = optimized
+mail_nfs_storage = no
+mail_nfs_index = no
+lock_method = fcntl
+mail_temp_dir = /tmp
+first_valid_uid = 500
+last_valid_uid = 0
+first_valid_gid = 1
+last_valid_gid = 0
+mail_max_keyword_length = 50
+valid_chroot_dirs =
+mail_chroot =
+auth_socket_path = /var/run/dovecot/auth-userdb
+mail_plugin_dir = /usr/lib/dovecot/modules
+mail_plugins =
+mail_cache_min_mail_count = 0
+mailbox_idle_check_interval = 30 secs
+mail_save_crlf = no
+maildir_stat_dirs = no
+maildir_copy_with_hardlinks = yes
+maildir_very_dirty_syncs = no
+mbox_read_locks = fcntl
+mbox_write_locks = dotlock fcntl
+mbox_lock_timeout = 5 mins
+mbox_dotlock_change_timeout = 2 mins
+mbox_dirty_syncs = yes
+mbox_very_dirty_syncs = no
+mbox_lazy_writes = yes
+mbox_min_index_size = 0
+mdbox_rotate_size = 2M
+mdbox_rotate_interval = 0
+mdbox_preallocate_space = no
+mail_attachment_dir =
+mail_attachment_min_size = 128k
+mail_attachment_fs = sis posix
+mail_attachment_hash = %{sha1}
+
+protocol !indexer-worker {
+ mail_vsize_bg_after_count = 0
+}
+"
+test Dovecot.lns get mail_conf =
+ { "#comment" = "# Mailbox locations and namespaces" }
+ { "mail_location" }
+ { "namespace"
+ { "type" = "private" }
+ { "separator" }
+ { "prefix" }
+ { "location" }
+ { "inbox" = "no" }
+ { "hidden" = "no" }
+ { "list" = "yes" }
+ { "subscriptions" = "yes" }
+ { "mailbox" = "Sent Messages"
+ { "special_use" = "\\Sent" }
+ }
+ }
+ { }
+ { "#comment" = "Example shared namespace configuration" }
+ { "namespace"
+ { "type" = "shared" }
+ { "separator" = "/" }
+ { "prefix" = "shared/%%u/" }
+ { "location" = "maildir:%%h/Maildir:INDEX=~/Maildir/shared/%%u" }
+ { "subscriptions" = "no" }
+ { "list" = "children" }
+ }
+ { }
+ { "mail_uid" }
+ { "mail_gid" }
+ { "mail_privileged_group" }
+ { "mail_access_groups" }
+ { "mail_full_filesystem_access" = "no" }
+ { "mmap_disable" = "no" }
+ { "dotlock_use_excl" = "yes" }
+ { "mail_fsync" = "optimized" }
+ { "mail_nfs_storage" = "no" }
+ { "mail_nfs_index" = "no" }
+ { "lock_method" = "fcntl" }
+ { "mail_temp_dir" = "/tmp" }
+ { "first_valid_uid" = "500" }
+ { "last_valid_uid" = "0" }
+ { "first_valid_gid" = "1" }
+ { "last_valid_gid" = "0" }
+ { "mail_max_keyword_length" = "50" }
+ { "valid_chroot_dirs" }
+ { "mail_chroot" }
+ { "auth_socket_path" = "/var/run/dovecot/auth-userdb" }
+ { "mail_plugin_dir" = "/usr/lib/dovecot/modules" }
+ { "mail_plugins" }
+ { "mail_cache_min_mail_count" = "0" }
+ { "mailbox_idle_check_interval" = "30 secs" }
+ { "mail_save_crlf" = "no" }
+ { "maildir_stat_dirs" = "no" }
+ { "maildir_copy_with_hardlinks" = "yes" }
+ { "maildir_very_dirty_syncs" = "no" }
+ { "mbox_read_locks" = "fcntl" }
+ { "mbox_write_locks" = "dotlock fcntl" }
+ { "mbox_lock_timeout" = "5 mins" }
+ { "mbox_dotlock_change_timeout" = "2 mins" }
+ { "mbox_dirty_syncs" = "yes" }
+ { "mbox_very_dirty_syncs" = "no" }
+ { "mbox_lazy_writes" = "yes" }
+ { "mbox_min_index_size" = "0" }
+ { "mdbox_rotate_size" = "2M" }
+ { "mdbox_rotate_interval" = "0" }
+ { "mdbox_preallocate_space" = "no" }
+ { "mail_attachment_dir" }
+ { "mail_attachment_min_size" = "128k" }
+ { "mail_attachment_fs" = "sis posix" }
+ { "mail_attachment_hash" = "%{sha1}" }
+ { }
+ { "protocol" = "!indexer-worker"
+ { "mail_vsize_bg_after_count" = "0" }
+ }
+
+
+(* ********************************* master ********************************* *)
+
+let master_conf = "
+default_process_limit = 100
+default_client_limit = 1000
+default_vsz_limit = 256M
+default_login_user = dovenull
+default_internal_user = dovecot
+
+service imap-login {
+ inet_listener imap {
+ port = 143
+ }
+ inet_listener imaps {
+ port = 993
+ ssl = yes
+ }
+ service_count = 1
+ process_min_avail = 0
+ vsz_limit = 64M
+}
+
+service pop3-login {
+ inet_listener pop3 {
+ port = 110
+ }
+ inet_listener pop3s {
+ port = 995
+ ssl = yes
+ }
+}
+
+service lmtp {
+ unix_listener lmtp {
+ mode = 0666
+ }
+ inet_listener lmtp {
+ address =
+ port =
+ }
+}
+
+service imap {
+ vsz_limit = 256M
+ process_limit = 1024
+}
+
+service auth {
+ unix_listener auth-userdb {
+ mode = 0600
+ user =
+ group =
+ }
+}
+
+service auth-worker {
+ user = root
+}
+
+service dict {
+ unix_listener dict {
+ mode = 0600
+ user =
+ group =
+ }
+}
+"
+
+test Dovecot.lns get master_conf =
+ { }
+ { "default_process_limit" = "100" }
+ { "default_client_limit" = "1000" }
+ { "default_vsz_limit" = "256M" }
+ { "default_login_user" = "dovenull" }
+ { "default_internal_user" = "dovecot" }
+ { }
+ { "service" = "imap-login"
+ { "inet_listener" = "imap"
+ { "port" = "143" }
+ }
+ { "inet_listener" = "imaps"
+ { "port" = "993" }
+ { "ssl" = "yes" }
+ }
+ { "service_count" = "1" }
+ { "process_min_avail" = "0" }
+ { "vsz_limit" = "64M" }
+ }
+ { }
+ { "service" = "pop3-login"
+ { "inet_listener" = "pop3"
+ { "port" = "110" }
+ }
+ { "inet_listener" = "pop3s"
+ { "port" = "995" }
+ { "ssl" = "yes" }
+ }
+ }
+ { }
+ { "service" = "lmtp"
+ { "unix_listener" = "lmtp"
+ { "mode" = "0666" }
+ }
+ { "inet_listener" = "lmtp"
+ { "address" }
+ { "port" }
+ }
+ }
+ { }
+ { "service" = "imap"
+ { "vsz_limit" = "256M" }
+ { "process_limit" = "1024" }
+ }
+ { }
+ { "service" = "auth"
+ { "unix_listener" = "auth-userdb"
+ { "mode" = "0600" }
+ { "user" }
+ { "group" }
+ }
+ }
+ { }
+ { "service" = "auth-worker"
+ { "user" = "root" }
+ }
+ { }
+ { "service" = "dict"
+ { "unix_listener" = "dict"
+ { "mode" = "0600" }
+ { "user" }
+ { "group" }
+ }
+ }
+
+(* *********************************** ssl ********************************** *)
+
+let ssl_conf = "## SSL settings
+ssl = yes
+ssl_cert = </etc/ssl/certs/dovecot.pem
+ssl_key = </etc/ssl/private/dovecot.pem
+ssl_key_password =
+ssl_ca =
+ssl_verify_client_cert = no
+ssl_cert_username_field = commonName
+ssl_parameters_regenerate = 168
+ssl_cipher_list = ALL:!LOW:!SSLv2:!EXP:!aNULL
+"
+test Dovecot.lns get ssl_conf =
+ { "#comment" = "# SSL settings" }
+ { "ssl" = "yes" }
+ { "ssl_cert" = "</etc/ssl/certs/dovecot.pem" }
+ { "ssl_key" = "</etc/ssl/private/dovecot.pem" }
+ { "ssl_key_password" }
+ { "ssl_ca" }
+ { "ssl_verify_client_cert" = "no" }
+ { "ssl_cert_username_field" = "commonName" }
+ { "ssl_parameters_regenerate" = "168" }
+ { "ssl_cipher_list" = "ALL:!LOW:!SSLv2:!EXP:!aNULL" }
+
+(* ********************* /etc/dovecot/conf.d/15-lda.conf ******************** *)
+
+let lda_conf = "## LDA specific settings (also used by LMTP)
+postmaster_address =
+hostname =
+quota_full_tempfail = no
+sendmail_path = /usr/sbin/sendmail
+submission_host =
+rejection_subject = Rejected: %s
+rejection_reason = Your message to <%t> was automatically rejected:%n%r
+recipient_delimiter = +
+lda_original_recipient_header =
+lda_mailbox_autocreate = no
+lda_mailbox_autosubscribe = no
+
+protocol lda {
+ mail_plugins = $mail_plugins
+}
+"
+test Dovecot.lns get lda_conf =
+ { "#comment" = "# LDA specific settings (also used by LMTP)" }
+ { "postmaster_address" }
+ { "hostname" }
+ { "quota_full_tempfail" = "no" }
+ { "sendmail_path" = "/usr/sbin/sendmail" }
+ { "submission_host" }
+ { "rejection_subject" = "Rejected: %s" }
+ { "rejection_reason" = "Your message to <%t> was automatically rejected:%n%r" }
+ { "recipient_delimiter" = "+" }
+ { "lda_original_recipient_header" }
+ { "lda_mailbox_autocreate" = "no" }
+ { "lda_mailbox_autosubscribe" = "no" }
+ { }
+ { "protocol" = "lda"
+ { "mail_plugins" = "$mail_plugins" }
+ }
+
+(* *********************************** acl ********************************** *)
+
+let acl_conf = "## Mailbox access control lists.
+plugin {
+ acl = vfile:/etc/dovecot/global-acls:cache_secs=300
+}
+plugin {
+ acl_shared_dict = file:/var/lib/dovecot/shared-mailboxes
+}
+"
+
+test Dovecot.lns get acl_conf =
+ { "#comment" = "# Mailbox access control lists." }
+ { "plugin"
+ { "acl" = "vfile:/etc/dovecot/global-acls:cache_secs=300" }
+ }
+ { "plugin"
+ { "acl_shared_dict" = "file:/var/lib/dovecot/shared-mailboxes" }
+ }
+
+(* ******************************** plugins ********************************* *)
+
+let plugins_conf = "
+plugin {
+ quota_rule = *:storage=1G
+ quota_rule2 = Trash:storage=+100M
+}
+plugin {
+ quota_warning = storage=95%% quota-warning 95 %u
+ quota_warning2 = storage=80%% quota-warning 80 %u
+}
+service quota-warning {
+ executable = script /usr/local/bin/quota-warning.sh
+ user = dovecot
+ unix_listener quota-warning {
+ user = vmail
+ }
+}
+plugin {
+ quota = dirsize:User quota
+ quota = maildir:User quota
+ quota = dict:User quota::proxy::quota
+ quota = fs:User quota
+}
+plugin {
+ quota = dict:user::proxy::quota
+ quota2 = dict:domain:%d:proxy::quota_domain
+ quota_rule = *:storage=102400
+ quota2_rule = *:storage=1048576
+}
+plugin {
+ acl = vfile:/etc/dovecot/global-acls:cache_secs=300
+}
+plugin {
+ acl_shared_dict = file:/var/lib/dovecot/shared-mailboxes
+}
+"
+test Dovecot.lns get plugins_conf =
+ { }
+ { "plugin"
+ { "quota_rule" = "*:storage=1G" }
+ { "quota_rule2" = "Trash:storage=+100M" }
+ }
+ { "plugin"
+ { "quota_warning" = "storage=95%% quota-warning 95 %u" }
+ { "quota_warning2" = "storage=80%% quota-warning 80 %u" }
+ }
+ { "service" = "quota-warning"
+ { "executable" = "script /usr/local/bin/quota-warning.sh" }
+ { "user" = "dovecot" }
+ { "unix_listener" = "quota-warning"
+ { "user" = "vmail" }
+ }
+ }
+ { "plugin"
+ { "quota" = "dirsize:User quota" }
+ { "quota" = "maildir:User quota" }
+ { "quota" = "dict:User quota::proxy::quota" }
+ { "quota" = "fs:User quota" }
+ }
+ { "plugin"
+ { "quota" = "dict:user::proxy::quota" }
+ { "quota2" = "dict:domain:%d:proxy::quota_domain" }
+ { "quota_rule" = "*:storage=102400" }
+ { "quota2_rule" = "*:storage=1048576" }
+ }
+ { "plugin"
+ { "acl" = "vfile:/etc/dovecot/global-acls:cache_secs=300" }
+ }
+ { "plugin"
+ { "acl_shared_dict" = "file:/var/lib/dovecot/shared-mailboxes" }
+ }
--- /dev/null
+module Test_dpkg =
+
+let conf ="# dpkg configuration file
+# Do not enable debsig-verify by default
+no-debsig
+
+log /var/log/dpkg.log\n"
+
+test Dpkg.lns get conf =
+ { "#comment" = "dpkg configuration file" }
+ { "#comment" = "Do not enable debsig-verify by default" }
+ { "no-debsig" }
+ {}
+ { "log" = "/var/log/dpkg.log" }
--- /dev/null
+module Test_dput =
+
+ let conf = "# Example dput.cf that defines the host that can be used
+# with dput for uploading.
+
+[DEFAULT]
+login = username
+method = ftp
+hash = md5
+allow_unsigned_uploads = 0
+run_lintian = 0
+run_dinstall = 0
+check_version = 0
+scp_compress = 0
+post_upload_command =
+pre_upload_command =
+passive_ftp = 1
+default_host_non-us =
+default_host_main = hebex
+allowed_distributions = (?!UNRELEASED)
+
+[hebex]
+fqdn = condor.infra.s1.p.fti.net
+login = anonymous
+method = ftp
+incoming = /incoming/hebex
+passive_ftp = 0
+
+[dop/desktop]
+fqdn = condor.infra.s1.p.fti.net
+login = anonymous
+method = ftp
+incoming = /incoming/dop/desktop
+passive_ftp = 0
+
+[jp-non-us]
+fqdn = hp.debian.or.jp
+incoming = /pub/Incoming/upload-non-US
+login = anonymous
+
+# DISABLED due to being repaired currently
+#[erlangen]
+#fqdn = ftp.uni-erlangen.de
+#incoming = /public/pub/Linux/debian/UploadQueue/
+#login = anonymous
+
+[ftp-master]
+fqdn = ftp-master.debian.org
+incoming = /pub/UploadQueue/
+login = anonymous
+post_upload_command = /usr/bin/mini-dinstall --batch
+# And if you want to override one of the defaults, add it here.
+# # For example, comment out the next line
+# # login = another_username
+# # post_upload_command = /path/to/some/script
+# # pre_upload_command = /path/to/some/script
+
+"
+
+ test Dput.lns get conf =
+ { "#comment" = "Example dput.cf that defines the host that can be used" }
+ { "#comment" = "with dput for uploading." }
+ {}
+ { "target" = "DEFAULT"
+ { "login" = "username" }
+ { "method" = "ftp" }
+ { "hash" = "md5" }
+ { "allow_unsigned_uploads" = "0" }
+ { "run_lintian" = "0" }
+ { "run_dinstall" = "0" }
+ { "check_version" = "0" }
+ { "scp_compress" = "0" }
+ { "post_upload_command" }
+ { "pre_upload_command" }
+ { "passive_ftp" = "1" }
+ { "default_host_non-us" }
+ { "default_host_main" = "hebex" }
+ { "allowed_distributions" = "(?!UNRELEASED)" }
+ {} }
+ { "target" = "hebex"
+ { "fqdn" = "condor.infra.s1.p.fti.net" }
+ { "login" = "anonymous" }
+ { "method" = "ftp" }
+ { "incoming" = "/incoming/hebex" }
+ { "passive_ftp" = "0" }
+ {} }
+ { "target" = "dop/desktop"
+ { "fqdn" = "condor.infra.s1.p.fti.net" }
+ { "login" = "anonymous" }
+ { "method" = "ftp" }
+ { "incoming" = "/incoming/dop/desktop" }
+ { "passive_ftp" = "0" }
+ {} }
+ { "target" = "jp-non-us"
+ { "fqdn" = "hp.debian.or.jp" }
+ { "incoming" = "/pub/Incoming/upload-non-US" }
+ { "login" = "anonymous" }
+ {}
+ { "#comment" = "DISABLED due to being repaired currently" }
+ { "#comment" = "[erlangen]" }
+ { "#comment" = "fqdn = ftp.uni-erlangen.de" }
+ { "#comment" = "incoming = /public/pub/Linux/debian/UploadQueue/" }
+ { "#comment" = "login = anonymous" }
+ {} }
+ { "target" = "ftp-master"
+ { "fqdn" = "ftp-master.debian.org" }
+ { "incoming" = "/pub/UploadQueue/" }
+ { "login" = "anonymous" }
+ { "post_upload_command" = "/usr/bin/mini-dinstall --batch" }
+ { "#comment" = "And if you want to override one of the defaults, add it here." }
+ { "#comment" = "# For example, comment out the next line" }
+ { "#comment" = "# login = another_username" }
+ { "#comment" = "# post_upload_command = /path/to/some/script" }
+ { "#comment" = "# pre_upload_command = /path/to/some/script" }
+ {} }
--- /dev/null
+(*
+Module: Test_Erlang
+ Provides unit tests and examples for the <Erlang> lens.
+*)
+module Test_Erlang =
+
+(* Group: comments *)
+test Erlang.comment get "% This is a comment\n" =
+ { "#comment" = "This is a comment" }
+
+(* Group: simple values *)
+
+let value_bare = Erlang.value Rx.word Erlang.bare
+
+test value_bare get "{foo, bar}" = { "foo" = "bar" }
+
+let value_decimal = Erlang.value Rx.word Erlang.decimal
+
+test value_decimal get "{foo, 0.25}" = { "foo" = "0.25" }
+
+let value_quoted = Erlang.value Rx.word Erlang.quoted
+
+test value_quoted get "{foo, '0.25'}" = { "foo" = "0.25" }
+
+let value_glob = Erlang.value Rx.word Erlang.glob
+
+test value_glob get "{foo, <<\".*\">>}" = { "foo" = ".*" }
+
+let value_boolean = Erlang.value Rx.word Erlang.boolean
+
+test value_boolean get "{foo, false}" = { "foo" = "false" }
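
By symmetry with the test above, the other boolean literal can be checked the same way (a sketch, assuming Erlang.boolean admits both "true" and "false"):

```augeas
test value_boolean get "{foo, true}" = { "foo" = "true" }
```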
+
+
+(* Group: list values *)
+
+let list_bare = Erlang.value_list Rx.word Erlang.bare
+
+test list_bare get "{foo, [bar, baz]}" =
+ { "foo"
+ { "value" = "bar" }
+ { "value" = "baz" } }
+
+(* Group: tuple values *)
+
+let tuple_bare = Erlang.tuple Erlang.bare Erlang.bare
+
+test tuple_bare get "{foo, bar}" =
+ { "tuple"
+ { "value" = "foo" }
+ { "value" = "bar" } }
+
+let tuple3_bare = Erlang.tuple3 Erlang.bare Erlang.bare Erlang.bare
+
+test tuple3_bare get "{foo, bar, baz}" =
+ { "tuple"
+ { "value" = "foo" }
+ { "value" = "bar" }
+ { "value" = "baz" } }
+
+(* Group: application *)
+
+let list_bare_app = Erlang.application (Rx.word - "kernel") list_bare
+
+test list_bare_app get "{foo, [{bar, [baz, bat]}]}" =
+ { "foo"
+ { "bar"
+ { "value" = "baz" }
+ { "value" = "bat" } } }
+
+(* no settings *)
+test list_bare_app get "{foo, []}" =
+ { "foo" }
+
+(* Group: kernel *)
+
+test Erlang.kernel get "{kernel, [
+ {browser_cmd, \"/foo/bar\"},
+ {dist_auto_connect, once},
+ {error_logger, tty},
+ {net_setuptime, 5},
+ {start_dist_ac, true}
+]}" =
+ { "kernel"
+ { "browser_cmd" = "/foo/bar" }
+ { "dist_auto_connect" = "once" }
+ { "error_logger" = "tty" }
+ { "net_setuptime" = "5" }
+ { "start_dist_ac" = "true" } }
+
+(* Group: config *)
+
+let list_bare_config = Erlang.config list_bare_app
+
+test list_bare_config get "[
+ {foo, [{bar, [baz, bat]}]},
+ {goo, [{gar, [gaz, gat]}]}
+ ].\n" =
+ { "foo"
+ { "bar"
+ { "value" = "baz" }
+ { "value" = "bat" } } }
+ { "goo"
+ { "gar"
+ { "value" = "gaz" }
+ { "value" = "gat" } } }
+
+(* Test Erlang's kernel app config is parsed *)
+test list_bare_config get "[
+ {foo, [{bar, [baz, bat]}]},
+ {kernel, [{start_timer, true}]}
+ ].\n" =
+ { "foo"
+ { "bar"
+ { "value" = "baz" }
+ { "value" = "bat" } } }
+ { "kernel"
+ { "start_timer" = "true" } }
--- /dev/null
+(* Tests for the Ethers module *)
+
+module Test_ethers =
+
+(*
+ let empty_entries = "# see man ethers for syntax\n"
+
+ test Ethers.record get empty_entries =
+ { "#comment" = "see man ethers for syntax" }
+*)
+
+ let three_entries = "54:52:00:01:00:01 192.168.1.1
+# \tcomment\t
+54:52:00:01:00:02 foo.example.com
+00:16:3e:01:fe:03 bar
+"
+
+ test Ethers.lns get three_entries =
+ { "1" { "mac" = "54:52:00:01:00:01" }
+ { "ip" = "192.168.1.1" } }
+ { "#comment" = "comment" }
+ { "2" { "mac" = "54:52:00:01:00:02" }
+ { "ip" = "foo.example.com" } }
+ { "3" { "mac" = "00:16:3e:01:fe:03" }
+ { "ip" = "bar" } }
+
+ (* Deleting the 'ip' node violates the schema *)
+ test Ethers.lns put three_entries after
+ rm "/1/ip"
+ = *
+
+ test Ethers.lns put three_entries after
+ set "/2/ip" "192.168.1.2" ;
+ set "/3/ip" "baz"
+ = "54:52:00:01:00:01 192.168.1.1
+# \tcomment\t
+54:52:00:01:00:02 192.168.1.2
+00:16:3e:01:fe:03 baz
+"
+
+ test Ethers.lns put three_entries after
+ rm "/3"
+ = "54:52:00:01:00:01 192.168.1.1
+# \tcomment\t
+54:52:00:01:00:02 foo.example.com
+"
+
+ (* Make sure blank and indented lines get through *)
+ test Ethers.lns get "54:52:00:01:00:01\tfoo \n \n\n
+54:52:00:01:00:02 bar\n" =
+ { "1" { "mac" = "54:52:00:01:00:01" }
+ { "ip" = "foo" } }
+ {} {} {}
+ { "2" { "mac" = "54:52:00:01:00:02" }
+ { "ip" = "bar" } }
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
+
+
--- /dev/null
+module Test_exports =
+
+let s = "/local 172.31.0.0/16(rw,sync) \t
+
+/home 172.31.0.0/16(rw,root_squash,sync) @netgroup(rw) *.example.com
+# Yes, we export /tmp
+/tmp 172.31.0.0/16(rw,root_squash,sync,)
+/local2 somehost(rw,sync)
+/local3 some-host(rw,sync)
+/local3 an-other-host(rw,sync)
+/local4 2000:123:456::/64(rw)
+/local5 somehost-[01](rw)
+"
+
+test Exports.lns get s =
+ { "dir" = "/local"
+ { "client" = "172.31.0.0/16"
+ { "option" = "rw" }
+ { "option" = "sync" } } }
+ { }
+ { "dir" = "/home"
+ { "client" = "172.31.0.0/16"
+ { "option" = "rw"}
+ { "option" = "root_squash" }
+ { "option" = "sync" } }
+ { "client" = "@netgroup"
+ { "option" = "rw" } }
+ { "client" = "*.example.com" } }
+ { "#comment" = "Yes, we export /tmp" }
+ { "dir" = "/tmp"
+ { "client" = "172.31.0.0/16"
+ { "option" = "rw" }
+ { "option" = "root_squash" }
+ { "option" = "sync" }
+ { "option" = "" } } }
+ { "dir" = "/local2"
+ { "client" = "somehost"
+ { "option" = "rw" }
+ { "option" = "sync" } } }
+ { "dir" = "/local3"
+ { "client" = "some-host"
+ { "option" = "rw" }
+ { "option" = "sync" } } }
+ { "dir" = "/local3"
+ { "client" = "an-other-host"
+ { "option" = "rw" }
+ { "option" = "sync" } } }
+ { "dir" = "/local4"
+ { "client" = "2000:123:456::/64"
+ { "option" = "rw" } } }
+ { "dir" = "/local5"
+ { "client" = "somehost-[01]"
+ { "option" = "rw" } } }
+
+test Exports.lns get "\"/path/in/quotes\" 192.168.0.1(rw,all_squash)\n" =
+ { "dir" = "\"/path/in/quotes\""
+ { "client" = "192.168.0.1"
+ { "option" = "rw" }
+ { "option" = "all_squash" } } }
--- /dev/null
+(*
+Module: Test_FAI_DiskConfig
+ Provides unit tests and examples for the <FAI_DiskConfig> lens.
+*)
+
+module Test_FAI_DiskConfig =
+
+
+(* Test: FAI_DiskConfig.disk_config
+ Test <FAI_DiskConfig.disk_config> *)
+test FAI_DiskConfig.disk_config get
+ "disk_config hda preserve_always:6,7 disklabel:msdos bootable:3\n" =
+
+ { "disk_config" = "hda"
+ { "preserve_always"
+ { "1" = "6" }
+ { "2" = "7" } }
+ { "disklabel" = "msdos" }
+ { "bootable" = "3" } }
+
+(* Test: FAI_DiskConfig.volume
+ Test <FAI_DiskConfig.volume> *)
+test FAI_DiskConfig.volume get
+ "primary /boot 20-100 ext3 rw\n" =
+
+ { "primary"
+ { "mountpoint" = "/boot" }
+ { "size" = "20-100" }
+ { "filesystem" = "ext3" }
+ { "mount_options"
+ { "1" = "rw" } } }
+
+(* Test: FAI_DiskConfig.volume
+ Testing <FAI_DiskConfig.volume> *)
+test FAI_DiskConfig.volume get
+ "primary swap 1000 swap sw\n" =
+
+ { "primary"
+ { "mountpoint" = "swap" }
+ { "size" = "1000" }
+ { "filesystem" = "swap" }
+ { "mount_options"
+ { "1" = "sw" } } }
+
+(* Test: FAI_DiskConfig.volume
+ Testing <FAI_DiskConfig.volume> *)
+test FAI_DiskConfig.volume get
+ "primary / 12000 ext3 rw createopts=\"-b 2048\"\n" =
+
+ { "primary"
+ { "mountpoint" = "/" }
+ { "size" = "12000" }
+ { "filesystem" = "ext3" }
+ { "mount_options"
+ { "1" = "rw" } }
+ { "fs_options"
+ { "createopts" = "-b 2048" } } }
+
+(* Test: FAI_DiskConfig.volume
+ Testing <FAI_DiskConfig.volume> *)
+test FAI_DiskConfig.volume get
+ "logical /tmp 1000 ext3 rw,nosuid\n" =
+
+ { "logical"
+ { "mountpoint" = "/tmp" }
+ { "size" = "1000" }
+ { "filesystem" = "ext3" }
+ { "mount_options"
+ { "1" = "rw" }
+ { "2" = "nosuid" } } }
+
+(* Test: FAI_DiskConfig.volume
+ Testing <FAI_DiskConfig.volume> *)
+test FAI_DiskConfig.volume get
+ "logical /var 10%- ext3 rw\n" =
+
+ { "logical"
+ { "mountpoint" = "/var" }
+ { "size" = "10%-" }
+ { "filesystem" = "ext3" }
+ { "mount_options"
+ { "1" = "rw" } } }
+
+(* Test: FAI_DiskConfig.volume
+ Testing <FAI_DiskConfig.volume> *)
+test FAI_DiskConfig.volume get
+ "logical /nobackup 0- xfs rw\n" =
+
+ { "logical"
+ { "mountpoint" = "/nobackup" }
+ { "size" = "0-" }
+ { "filesystem" = "xfs" }
+ { "mount_options"
+ { "1" = "rw" } } }
+
+(* Variable: simple_config
+ A simple configuration file *)
+let simple_config = "# A comment
+disk_config disk2
+raw-disk - 0 - -
+
+disk_config lvm
+vg my_pv sda2
+vg test disk1.9
+my_pv-_swap swap 2048 swap sw
+my_pv-_root / 2048 ext3 rw,errors=remount-ro
+
+disk_config raid
+raid1 /boot disk1.1,disk2.1,disk3.1,disk4.1,disk5.1,disk6.1 ext3 rw
+raid1 swap disk1.2,disk2.2,disk3.2,disk4.2,disk5.2,disk6.2 swap sw
+raid5 /srv/data disk1.11,disk2.11,disk3.11,disk4.11,disk5.11,disk6.11 ext3 ro createopts=\"-m 0\"
+raid0 - disk2.2,sdc1,sde1:spare:missing ext2 default
+
+disk_config tmpfs
+tmpfs /var/opt/hosting/tmp 500 defaults
+"
+
+(* Test: FAI_DiskConfig.lns
+ Testing the full <FAI_DiskConfig.lns> on <simple_config> *)
+test FAI_DiskConfig.lns get simple_config =
+ { "#comment" = "A comment" }
+ { "disk_config" = "disk2"
+ { "raw-disk"
+ { "mountpoint" = "-" }
+ { "size" = "0" }
+ { "filesystem" = "-" }
+ { "mount_options"
+ { "1" = "-" }
+ }
+ }
+ }
+ { }
+ { "disk_config" = "lvm"
+ { "vg"
+ { "name" = "my_pv" }
+ { "disk" = "sda2" }
+ }
+ { "vg"
+ { "name" = "test" }
+ { "disk" = "disk1"
+ { "partition" = "9" }
+ }
+ }
+ { "lv"
+ { "vg" = "my_pv" }
+ { "name" = "_swap" }
+ { "mountpoint" = "swap" }
+ { "size" = "2048" }
+ { "filesystem" = "swap" }
+ { "mount_options"
+ { "1" = "sw" }
+ }
+ }
+ { "lv"
+ { "vg" = "my_pv" }
+ { "name" = "_root" }
+ { "mountpoint" = "/" }
+ { "size" = "2048" }
+ { "filesystem" = "ext3" }
+ { "mount_options"
+ { "1" = "rw" }
+ { "2" = "errors"
+ { "value" = "remount-ro" }
+ }
+ }
+ }
+ }
+ { }
+ { "disk_config" = "raid"
+ { "raid1"
+ { "mountpoint" = "/boot" }
+ { "disk" = "disk1"
+ { "partition" = "1" }
+ }
+ { "disk" = "disk2"
+ { "partition" = "1" }
+ }
+ { "disk" = "disk3"
+ { "partition" = "1" }
+ }
+ { "disk" = "disk4"
+ { "partition" = "1" }
+ }
+ { "disk" = "disk5"
+ { "partition" = "1" }
+ }
+ { "disk" = "disk6"
+ { "partition" = "1" }
+ }
+ { "filesystem" = "ext3" }
+ { "mount_options"
+ { "1" = "rw" }
+ }
+ }
+ { "raid1"
+ { "mountpoint" = "swap" }
+ { "disk" = "disk1"
+ { "partition" = "2" }
+ }
+ { "disk" = "disk2"
+ { "partition" = "2" }
+ }
+ { "disk" = "disk3"
+ { "partition" = "2" }
+ }
+ { "disk" = "disk4"
+ { "partition" = "2" }
+ }
+ { "disk" = "disk5"
+ { "partition" = "2" }
+ }
+ { "disk" = "disk6"
+ { "partition" = "2" }
+ }
+ { "filesystem" = "swap" }
+ { "mount_options"
+ { "1" = "sw" }
+ }
+ }
+ { "raid5"
+ { "mountpoint" = "/srv/data" }
+ { "disk" = "disk1"
+ { "partition" = "11" }
+ }
+ { "disk" = "disk2"
+ { "partition" = "11" }
+ }
+ { "disk" = "disk3"
+ { "partition" = "11" }
+ }
+ { "disk" = "disk4"
+ { "partition" = "11" }
+ }
+ { "disk" = "disk5"
+ { "partition" = "11" }
+ }
+ { "disk" = "disk6"
+ { "partition" = "11" }
+ }
+ { "filesystem" = "ext3" }
+ { "mount_options"
+ { "1" = "ro" }
+ }
+ { "fs_options"
+ { "createopts" = "-m 0" }
+ }
+ }
+ { "raid0"
+ { "mountpoint" = "-" }
+ { "disk" = "disk2"
+ { "partition" = "2" }
+ }
+ { "disk" = "sdc1" }
+ { "disk" = "sde1"
+ { "spare" }
+ { "missing" }
+ }
+ { "filesystem" = "ext2" }
+ { "mount_options"
+ { "1" = "default" }
+ }
+ }
+ }
+ { }
+ { "disk_config" = "tmpfs"
+ { "tmpfs"
+ { "mountpoint" = "/var/opt/hosting/tmp" }
+ { "size" = "500" }
+ { "mount_options"
+ { "1" = "defaults" }
+ }
+ }
+ }
+
+
+(* Variable: config1
+ Another full configuration *)
+let config1 = "disk_config disk1 bootable:1 preserve_always:all always_format:5,6,7,8,9,10,11
+primary - 0 - -
+primary - 0 - -
+logical / 0 ext3 rw,relatime,errors=remount-ro createopts=\"-c -j\"
+logical swap 0 swap sw
+logical /var 0 ext3 rw,relatime createopts=\"-m 5 -j\"
+logical /tmp 0 ext3 rw createopts=\"-m 0 -j\"
+logical /usr 0 ext3 rw,relatime createopts=\"-j\"
+logical /home 0 ext3 rw,relatime,nosuid,nodev createopts=\"-m 1 -j\"
+logical /wrk 0 ext3 rw,relatime,nosuid,nodev createopts=\"-m 1 -j\"
+logical /transfer 0 vfat rw
+"
+
+(* Test: FAI_DiskConfig.lns
+ Testing <FAI_DiskConfig.lns> on <config1> *)
+test FAI_DiskConfig.lns get config1 =
+ { "disk_config" = "disk1"
+ { "bootable" = "1" }
+ { "preserve_always"
+ { "1" = "all" }
+ }
+ { "always_format"
+ { "1" = "5" }
+ { "2" = "6" }
+ { "3" = "7" }
+ { "4" = "8" }
+ { "5" = "9" }
+ { "6" = "10" }
+ { "7" = "11" }
+ }
+ { "primary"
+ { "mountpoint" = "-" }
+ { "size" = "0" }
+ { "filesystem" = "-" }
+ { "mount_options"
+ { "1" = "-" }
+ }
+ }
+ { "primary"
+ { "mountpoint" = "-" }
+ { "size" = "0" }
+ { "filesystem" = "-" }
+ { "mount_options"
+ { "1" = "-" }
+ }
+ }
+ { "logical"
+ { "mountpoint" = "/" }
+ { "size" = "0" }
+ { "filesystem" = "ext3" }
+ { "mount_options"
+ { "1" = "rw" }
+ { "2" = "relatime" }
+ { "3" = "errors"
+ { "value" = "remount-ro" }
+ }
+ }
+ { "fs_options"
+ { "createopts" = "-c -j" }
+ }
+ }
+ { "logical"
+ { "mountpoint" = "swap" }
+ { "size" = "0" }
+ { "filesystem" = "swap" }
+ { "mount_options"
+ { "1" = "sw" }
+ }
+ }
+ { "logical"
+ { "mountpoint" = "/var" }
+ { "size" = "0" }
+ { "filesystem" = "ext3" }
+ { "mount_options"
+ { "1" = "rw" }
+ { "2" = "relatime" }
+ }
+ { "fs_options"
+ { "createopts" = "-m 5 -j" }
+ }
+ }
+ { "logical"
+ { "mountpoint" = "/tmp" }
+ { "size" = "0" }
+ { "filesystem" = "ext3" }
+ { "mount_options"
+ { "1" = "rw" }
+ }
+ { "fs_options"
+ { "createopts" = "-m 0 -j" }
+ }
+ }
+ { "logical"
+ { "mountpoint" = "/usr" }
+ { "size" = "0" }
+ { "filesystem" = "ext3" }
+ { "mount_options"
+ { "1" = "rw" }
+ { "2" = "relatime" }
+ }
+ { "fs_options"
+ { "createopts" = "-j" }
+ }
+ }
+ { "logical"
+ { "mountpoint" = "/home" }
+ { "size" = "0" }
+ { "filesystem" = "ext3" }
+ { "mount_options"
+ { "1" = "rw" }
+ { "2" = "relatime" }
+ { "3" = "nosuid" }
+ { "4" = "nodev" }
+ }
+ { "fs_options"
+ { "createopts" = "-m 1 -j" }
+ }
+ }
+ { "logical"
+ { "mountpoint" = "/wrk" }
+ { "size" = "0" }
+ { "filesystem" = "ext3" }
+ { "mount_options"
+ { "1" = "rw" }
+ { "2" = "relatime" }
+ { "3" = "nosuid" }
+ { "4" = "nodev" }
+ }
+ { "fs_options"
+ { "createopts" = "-m 1 -j" }
+ }
+ }
+ { "logical"
+ { "mountpoint" = "/transfer" }
+ { "size" = "0" }
+ { "filesystem" = "vfat" }
+ { "mount_options"
+ { "1" = "rw" }
+ }
+ }
+ }
+
+
+(* Variable: config2
+ Another full configuration *)
+let config2 = "disk_config /dev/sda
+primary - 250M - -
+primary - 20G - -
+logical - 8G - -
+logical - 4G - -
+logical - 5G - -
+
+disk_config /dev/sdb sameas:/dev/sda
+
+disk_config raid
+raid1 /boot sda1,sdb1 ext3 defaults
+raid1 / sda2,sdb2 ext3 defaults,errors=remount-ro
+raid1 swap sda5,sdb5 swap defaults
+raid1 /tmp sda6,sdb6 ext3 defaults createopts=\"-m 1\"
+raid1 /var sda7,sdb7 ext3 defaults
+"
+
+(* Test: FAI_DiskConfig.lns
+ Testing <FAI_DiskConfig.lns> on <config2> *)
+test FAI_DiskConfig.lns get config2 =
+ { "disk_config" = "/dev/sda"
+ { "primary"
+ { "mountpoint" = "-" }
+ { "size" = "250M" }
+ { "filesystem" = "-" }
+ { "mount_options"
+ { "1" = "-" }
+ }
+ }
+ { "primary"
+ { "mountpoint" = "-" }
+ { "size" = "20G" }
+ { "filesystem" = "-" }
+ { "mount_options"
+ { "1" = "-" }
+ }
+ }
+ { "logical"
+ { "mountpoint" = "-" }
+ { "size" = "8G" }
+ { "filesystem" = "-" }
+ { "mount_options"
+ { "1" = "-" }
+ }
+ }
+ { "logical"
+ { "mountpoint" = "-" }
+ { "size" = "4G" }
+ { "filesystem" = "-" }
+ { "mount_options"
+ { "1" = "-" }
+ }
+ }
+ { "logical"
+ { "mountpoint" = "-" }
+ { "size" = "5G" }
+ { "filesystem" = "-" }
+ { "mount_options"
+ { "1" = "-" }
+ }
+ }
+ }
+ { }
+ { "disk_config" = "/dev/sdb"
+ { "sameas" = "/dev/sda" }
+ }
+ { }
+ { "disk_config" = "raid"
+ { "raid1"
+ { "mountpoint" = "/boot" }
+ { "disk" = "sda1" }
+ { "disk" = "sdb1" }
+ { "filesystem" = "ext3" }
+ { "mount_options"
+ { "1" = "defaults" }
+ }
+ }
+ { "raid1"
+ { "mountpoint" = "/" }
+ { "disk" = "sda2" }
+ { "disk" = "sdb2" }
+ { "filesystem" = "ext3" }
+ { "mount_options"
+ { "1" = "defaults" }
+ { "2" = "errors"
+ { "value" = "remount-ro" }
+ }
+ }
+ }
+ { "raid1"
+ { "mountpoint" = "swap" }
+ { "disk" = "sda5" }
+ { "disk" = "sdb5" }
+ { "filesystem" = "swap" }
+ { "mount_options"
+ { "1" = "defaults" }
+ }
+ }
+ { "raid1"
+ { "mountpoint" = "/tmp" }
+ { "disk" = "sda6" }
+ { "disk" = "sdb6" }
+ { "filesystem" = "ext3" }
+ { "mount_options"
+ { "1" = "defaults" }
+ }
+ { "fs_options"
+ { "createopts" = "-m 1" }
+ }
+ }
+ { "raid1"
+ { "mountpoint" = "/var" }
+ { "disk" = "sda7" }
+ { "disk" = "sdb7" }
+ { "filesystem" = "ext3" }
+ { "mount_options"
+ { "1" = "defaults" }
+ }
+ }
+ }
+
+
+(* Variable: config3
+ Another full configuration *)
+let config3 = "disk_config /dev/sdb
+primary / 21750 ext3 defaults,errors=remount-ro
+primary /boot 250 ext3 defaults
+logical - 4000 - -
+logical - 2000 - -
+logical - 10- - -
+
+disk_config cryptsetup randinit
+swap swap /dev/sdb5 swap defaults
+tmp /tmp /dev/sdb6 ext2 defaults
+luks /local00 /dev/sdb7 ext3 defaults,errors=remount-ro createopts=\"-m 0\"
+"
+
+(* Test: FAI_DiskConfig.lns
+ Testing <FAI_DiskConfig.lns> on <config3> *)
+test FAI_DiskConfig.lns get config3 =
+ { "disk_config" = "/dev/sdb"
+ { "primary"
+ { "mountpoint" = "/" }
+ { "size" = "21750" }
+ { "filesystem" = "ext3" }
+ { "mount_options"
+ { "1" = "defaults" }
+ { "2" = "errors"
+ { "value" = "remount-ro" }
+ }
+ }
+ }
+ { "primary"
+ { "mountpoint" = "/boot" }
+ { "size" = "250" }
+ { "filesystem" = "ext3" }
+ { "mount_options"
+ { "1" = "defaults" }
+ }
+ }
+ { "logical"
+ { "mountpoint" = "-" }
+ { "size" = "4000" }
+ { "filesystem" = "-" }
+ { "mount_options"
+ { "1" = "-" }
+ }
+ }
+ { "logical"
+ { "mountpoint" = "-" }
+ { "size" = "2000" }
+ { "filesystem" = "-" }
+ { "mount_options"
+ { "1" = "-" }
+ }
+ }
+ { "logical"
+ { "mountpoint" = "-" }
+ { "size" = "10-" }
+ { "filesystem" = "-" }
+ { "mount_options"
+ { "1" = "-" }
+ }
+ }
+ }
+ { }
+ { "disk_config" = "cryptsetup"
+ { "randinit" }
+ { "swap"
+ { "mountpoint" = "swap" }
+ { "device" = "/dev/sdb5" }
+ { "filesystem" = "swap" }
+ { "mount_options"
+ { "1" = "defaults" }
+ }
+ }
+ { "tmp"
+ { "mountpoint" = "/tmp" }
+ { "device" = "/dev/sdb6" }
+ { "filesystem" = "ext2" }
+ { "mount_options"
+ { "1" = "defaults" }
+ }
+ }
+ { "luks"
+ { "mountpoint" = "/local00" }
+ { "device" = "/dev/sdb7" }
+ { "filesystem" = "ext3" }
+ { "mount_options"
+ { "1" = "defaults" }
+ { "2" = "errors"
+ { "value" = "remount-ro" }
+ }
+ }
+ { "fs_options"
+ { "createopts" = "-m 0" }
+ }
+ }
+ }
+
+
+(* Variable: with_spaces *)
+let with_spaces = "disk_config disk2
+
+raw-disk - 0 - -
+"
+
+(* Test: FAI_DiskConfig.lns
+ Testing <FAI_DiskConfig.lns> with <with_spaces> *)
+test FAI_DiskConfig.lns get with_spaces =
+ { "disk_config" = "disk2"
+ { }
+ { "raw-disk"
+ { "mountpoint" = "-" }
+ { "size" = "0" }
+ { "filesystem" = "-" }
+ { "mount_options"
+ { "1" = "-" }
+ }
+ }
+ }
--- /dev/null
+module Test_fail2ban =
+
+let conf = "[DEFAULT]
+mta = ssmtp
+bantime = 432000
+destemail = fail2ban@domain.com
+findtime = 3600
+maxretry = 3
+
+[sshd]
+enabled = true
+"
+
+
+test Fail2ban.lns get conf =
+ { "DEFAULT"
+ { "mta" = "ssmtp" }
+ { "bantime" = "432000" }
+ { "destemail" = "fail2ban@domain.com" }
+ { "findtime" = "3600" }
+ { "maxretry" = "3" }
+ {} }
+ { "sshd"
+ { "enabled" = "true" } }
--- /dev/null
+(*
+Module: Test_Fonts
+ Provides unit tests and examples for the <Fonts> lens.
+*)
+
+module Test_Fonts =
+
+(* Variable: conf *)
+let conf = "<?xml version=\"1.0\"?>
+<!DOCTYPE fontconfig SYSTEM \"fonts.dtd\">
+<!-- /etc/fonts/fonts.conf file to configure system font access -->
+<fontconfig>
+
+<!--
+ DO NOT EDIT THIS FILE.
+ IT WILL BE REPLACED WHEN FONTCONFIG IS UPDATED.
+ LOCAL CHANGES BELONG IN 'local.conf'.
+
+ The intent of this standard configuration file is to be adequate for
+ most environments. If you have a reasonably normal environment and
+ have found problems with this configuration, they are probably
+ things that others will also want fixed. Please submit any
+ problems to the fontconfig bugzilla system located at fontconfig.org
+
+ Note that the normal 'make install' procedure for fontconfig is to
+ replace any existing fonts.conf file with the new version. Place
+ any local customizations in local.conf which this file references.
+
+ Keith Packard
+-->
+
+<!-- Font directory list -->
+
+ <dir>/usr/share/fonts</dir>
+ <dir>/usr/X11R6/lib/X11/fonts</dir> <dir>/usr/local/share/fonts</dir>
+ <dir>~/.fonts</dir>
+
+<!--
+ Accept deprecated 'mono' alias, replacing it with 'monospace'
+-->
+ <match target=\"pattern\">
+ <test qual=\"any\" name=\"family\">
+ <string>mono</string>
+ </test>
+ <edit name=\"family\" mode=\"assign\">
+ <string>monospace</string>
+ </edit>
+ </match>
+
+<!--
+ Accept alternate 'sans serif' spelling, replacing it with 'sans-serif'
+-->
+ <match target=\"pattern\">
+ <test qual=\"any\" name=\"family\">
+ <string>sans serif</string>
+ </test>
+ <edit name=\"family\" mode=\"assign\">
+ <string>sans-serif</string>
+ </edit>
+ </match>
+
+<!--
+ Accept deprecated 'sans' alias, replacing it with 'sans-serif'
+-->
+ <match target=\"pattern\">
+ <test qual=\"any\" name=\"family\">
+ <string>sans</string>
+ </test>
+ <edit name=\"family\" mode=\"assign\">
+ <string>sans-serif</string>
+ </edit>
+ </match>
+
+<!--
+ Load local system customization file
+-->
+ <include ignore_missing=\"yes\">conf.d</include>
+
+<!-- Font cache directory list -->
+
+ <cachedir>/var/cache/fontconfig</cachedir>
+ <cachedir>~/.fontconfig</cachedir>
+
+ <config>
+<!--
+ These are the default Unicode chars that are expected to be blank
+ in fonts. All other blank chars are assumed to be broken and
+ won't appear in the resulting charsets
+ -->
+ <blank>
+ <int>0x0020</int> <!-- SPACE -->
+ <int>0x00A0</int> <!-- NO-BREAK SPACE -->
+ <int>0x00AD</int> <!-- SOFT HYPHEN -->
+ <int>0x034F</int> <!-- COMBINING GRAPHEME JOINER -->
+ <int>0x0600</int> <!-- ARABIC NUMBER SIGN -->
+ <int>0x0601</int> <!-- ARABIC SIGN SANAH -->
+ <int>0x0602</int> <!-- ARABIC FOOTNOTE MARKER -->
+ <int>0x0603</int> <!-- ARABIC SIGN SAFHA -->
+ <int>0x06DD</int> <!-- ARABIC END OF AYAH -->
+ <int>0x070F</int> <!-- SYRIAC ABBREVIATION MARK -->
+ <int>0x115F</int> <!-- HANGUL CHOSEONG FILLER -->
+ <int>0x1160</int> <!-- HANGUL JUNGSEONG FILLER -->
+ <int>0x1680</int> <!-- OGHAM SPACE MARK -->
+ <int>0x17B4</int> <!-- KHMER VOWEL INHERENT AQ -->
+ <int>0x17B5</int> <!-- KHMER VOWEL INHERENT AA -->
+ <int>0x180E</int> <!-- MONGOLIAN VOWEL SEPARATOR -->
+ <int>0x2000</int> <!-- EN QUAD -->
+ <int>0x2001</int> <!-- EM QUAD -->
+ <int>0x2002</int> <!-- EN SPACE -->
+ <int>0x2003</int> <!-- EM SPACE -->
+ <int>0x2004</int> <!-- THREE-PER-EM SPACE -->
+ <int>0x2005</int> <!-- FOUR-PER-EM SPACE -->
+ <int>0x2006</int> <!-- SIX-PER-EM SPACE -->
+ <int>0x2007</int> <!-- FIGURE SPACE -->
+ <int>0x2008</int> <!-- PUNCTUATION SPACE -->
+ <int>0x2009</int> <!-- THIN SPACE -->
+ <int>0x200A</int> <!-- HAIR SPACE -->
+ <int>0x200B</int> <!-- ZERO WIDTH SPACE -->
+ <int>0x200C</int> <!-- ZERO WIDTH NON-JOINER -->
+ <int>0x200D</int> <!-- ZERO WIDTH JOINER -->
+ <int>0x200E</int> <!-- LEFT-TO-RIGHT MARK -->
+ <int>0x200F</int> <!-- RIGHT-TO-LEFT MARK -->
+ <int>0x2028</int> <!-- LINE SEPARATOR -->
+ <int>0x2029</int> <!-- PARAGRAPH SEPARATOR -->
+ <int>0x202A</int> <!-- LEFT-TO-RIGHT EMBEDDING -->
+ <int>0x202B</int> <!-- RIGHT-TO-LEFT EMBEDDING -->
+ <int>0x202C</int> <!-- POP DIRECTIONAL FORMATTING -->
+ <int>0x202D</int> <!-- LEFT-TO-RIGHT OVERRIDE -->
+ <int>0x202E</int> <!-- RIGHT-TO-LEFT OVERRIDE -->
+ <int>0x202F</int> <!-- NARROW NO-BREAK SPACE -->
+ <int>0x205F</int> <!-- MEDIUM MATHEMATICAL SPACE -->
+ <int>0x2060</int> <!-- WORD JOINER -->
+ <int>0x2061</int> <!-- FUNCTION APPLICATION -->
+ <int>0x2062</int> <!-- INVISIBLE TIMES -->
+ <int>0x2063</int> <!-- INVISIBLE SEPARATOR -->
+ <int>0x206A</int> <!-- INHIBIT SYMMETRIC SWAPPING -->
+ <int>0x206B</int> <!-- ACTIVATE SYMMETRIC SWAPPING -->
+ <int>0x206C</int> <!-- INHIBIT ARABIC FORM SHAPING -->
+ <int>0x206D</int> <!-- ACTIVATE ARABIC FORM SHAPING -->
+ <int>0x206E</int> <!-- NATIONAL DIGIT SHAPES -->
+ <int>0x206F</int> <!-- NOMINAL DIGIT SHAPES -->
+ <int>0x2800</int> <!-- BRAILLE PATTERN BLANK -->
+ <int>0x3000</int> <!-- IDEOGRAPHIC SPACE -->
+ <int>0x3164</int> <!-- HANGUL FILLER -->
+ <int>0xFEFF</int> <!-- ZERO WIDTH NO-BREAK SPACE -->
+ <int>0xFFA0</int> <!-- HALFWIDTH HANGUL FILLER -->
+ <int>0xFFF9</int> <!-- INTERLINEAR ANNOTATION ANCHOR -->
+ <int>0xFFFA</int> <!-- INTERLINEAR ANNOTATION SEPARATOR -->
+ <int>0xFFFB</int> <!-- INTERLINEAR ANNOTATION TERMINATOR -->
+ </blank>
+<!--
+ Rescan configuration every 30 seconds when FcFontSetList is called
+ -->
+ <rescan>
+ <int>30</int>
+ </rescan>
+ </config>
+
+</fontconfig>
+"
+
+(* Test: Fonts.lns *)
+test Fonts.lns get conf =
+ { "#declaration"
+ { "#attribute"
+ { "version" = "1.0" }
+ }
+ }
+ { "!DOCTYPE" = "fontconfig"
+ { "SYSTEM" = "fonts.dtd" }
+ }
+ { "#comment" = " /etc/fonts/fonts.conf file to configure system font access " }
+ { "fontconfig"
+ { "#text" = "
+
+" }
+ { "#comment" = "
+ DO NOT EDIT THIS FILE.
+ IT WILL BE REPLACED WHEN FONTCONFIG IS UPDATED.
+ LOCAL CHANGES BELONG IN 'local.conf'.
+
+ The intent of this standard configuration file is to be adequate for
+ most environments. If you have a reasonably normal environment and
+ have found problems with this configuration, they are probably
+ things that others will also want fixed. Please submit any
+ problems to the fontconfig bugzilla system located at fontconfig.org
+
+ Note that the normal 'make install' procedure for fontconfig is to
+ replace any existing fonts.conf file with the new version. Place
+ any local customizations in local.conf which this file references.
+
+ Keith Packard
+" }
+ { "#text" = "
+
+" }
+ { "#comment" = " Font directory list " }
+ { "#text" = "
+
+ " }
+ { "dir"
+ { "#text" = "/usr/share/fonts" }
+ }
+ { "#text" = " " }
+ { "dir"
+ { "#text" = "/usr/X11R6/lib/X11/fonts" }
+ }
+ { "#text" = " " }
+ { "dir"
+ { "#text" = "/usr/local/share/fonts" }
+ }
+ { "#text" = " " }
+ { "dir"
+ { "#text" = "~/.fonts" }
+ }
+ { "#text" = "
+" }
+ { "#comment" = "
+ Accept deprecated 'mono' alias, replacing it with 'monospace'
+" }
+ { "#text" = "
+ " }
+ { "match"
+ { "#attribute"
+ { "target" = "pattern" }
+ }
+ { "#text" = "
+ " }
+ { "test"
+ { "#attribute"
+ { "qual" = "any" }
+ { "name" = "family" }
+ }
+ { "#text" = "
+ " }
+ { "string"
+ { "#text" = "mono" }
+ }
+ { "#text" = " " }
+ }
+ { "#text" = " " }
+ { "edit"
+ { "#attribute"
+ { "name" = "family" }
+ { "mode" = "assign" }
+ }
+ { "#text" = "
+ " }
+ { "string"
+ { "#text" = "monospace" }
+ }
+ { "#text" = " " }
+ }
+ { "#text" = " " }
+ }
+ { "#text" = "
+" }
+ { "#comment" = "
+ Accept alternate 'sans serif' spelling, replacing it with 'sans-serif'
+" }
+ { "#text" = "
+ " }
+ { "match"
+ { "#attribute"
+ { "target" = "pattern" }
+ }
+ { "#text" = "
+ " }
+ { "test"
+ { "#attribute"
+ { "qual" = "any" }
+ { "name" = "family" }
+ }
+ { "#text" = "
+ " }
+ { "string"
+ { "#text" = "sans serif" }
+ }
+ { "#text" = " " }
+ }
+ { "#text" = " " }
+ { "edit"
+ { "#attribute"
+ { "name" = "family" }
+ { "mode" = "assign" }
+ }
+ { "#text" = "
+ " }
+ { "string"
+ { "#text" = "sans-serif" }
+ }
+ { "#text" = " " }
+ }
+ { "#text" = " " }
+ }
+ { "#text" = "
+" }
+ { "#comment" = "
+ Accept deprecated 'sans' alias, replacing it with 'sans-serif'
+" }
+ { "#text" = "
+ " }
+ { "match"
+ { "#attribute"
+ { "target" = "pattern" }
+ }
+ { "#text" = "
+ " }
+ { "test"
+ { "#attribute"
+ { "qual" = "any" }
+ { "name" = "family" }
+ }
+ { "#text" = "
+ " }
+ { "string"
+ { "#text" = "sans" }
+ }
+ { "#text" = " " }
+ }
+ { "#text" = " " }
+ { "edit"
+ { "#attribute"
+ { "name" = "family" }
+ { "mode" = "assign" }
+ }
+ { "#text" = "
+ " }
+ { "string"
+ { "#text" = "sans-serif" }
+ }
+ { "#text" = " " }
+ }
+ { "#text" = " " }
+ }
+ { "#text" = "
+" }
+ { "#comment" = "
+ Load local system customization file
+" }
+ { "#text" = "
+ " }
+ { "include"
+ { "#attribute"
+ { "ignore_missing" = "yes" }
+ }
+ { "#text" = "conf.d" }
+ }
+ { "#text" = "
+" }
+ { "#comment" = " Font cache directory list " }
+ { "#text" = "
+
+ " }
+ { "cachedir"
+ { "#text" = "/var/cache/fontconfig" }
+ }
+ { "#text" = " " }
+ { "cachedir"
+ { "#text" = "~/.fontconfig" }
+ }
+ { "#text" = "
+ " }
+ { "config"
+ { "#text" = "
+" }
+ { "#comment" = "
+ These are the default Unicode chars that are expected to be blank
+ in fonts. All other blank chars are assumed to be broken and
+ won't appear in the resulting charsets
+ " }
+ { "#text" = "
+ " }
+ { "blank"
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x0020" }
+ }
+ { "#text" = " " }
+ { "#comment" = " SPACE " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x00A0" }
+ }
+ { "#text" = " " }
+ { "#comment" = " NO-BREAK SPACE " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x00AD" }
+ }
+ { "#text" = " " }
+ { "#comment" = " SOFT HYPHEN " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x034F" }
+ }
+ { "#text" = " " }
+ { "#comment" = " COMBINING GRAPHEME JOINER " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x0600" }
+ }
+ { "#text" = " " }
+ { "#comment" = " ARABIC NUMBER SIGN " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x0601" }
+ }
+ { "#text" = " " }
+ { "#comment" = " ARABIC SIGN SANAH " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x0602" }
+ }
+ { "#text" = " " }
+ { "#comment" = " ARABIC FOOTNOTE MARKER " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x0603" }
+ }
+ { "#text" = " " }
+ { "#comment" = " ARABIC SIGN SAFHA " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x06DD" }
+ }
+ { "#text" = " " }
+ { "#comment" = " ARABIC END OF AYAH " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x070F" }
+ }
+ { "#text" = " " }
+ { "#comment" = " SYRIAC ABBREVIATION MARK " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x115F" }
+ }
+ { "#text" = " " }
+ { "#comment" = " HANGUL CHOSEONG FILLER " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x1160" }
+ }
+ { "#text" = " " }
+ { "#comment" = " HANGUL JUNGSEONG FILLER " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x1680" }
+ }
+ { "#text" = " " }
+ { "#comment" = " OGHAM SPACE MARK " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x17B4" }
+ }
+ { "#text" = " " }
+ { "#comment" = " KHMER VOWEL INHERENT AQ " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x17B5" }
+ }
+ { "#text" = " " }
+ { "#comment" = " KHMER VOWEL INHERENT AA " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x180E" }
+ }
+ { "#text" = " " }
+ { "#comment" = " MONGOLIAN VOWEL SEPARATOR " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x2000" }
+ }
+ { "#text" = " " }
+ { "#comment" = " EN QUAD " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x2001" }
+ }
+ { "#text" = " " }
+ { "#comment" = " EM QUAD " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x2002" }
+ }
+ { "#text" = " " }
+ { "#comment" = " EN SPACE " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x2003" }
+ }
+ { "#text" = " " }
+ { "#comment" = " EM SPACE " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x2004" }
+ }
+ { "#text" = " " }
+ { "#comment" = " THREE-PER-EM SPACE " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x2005" }
+ }
+ { "#text" = " " }
+ { "#comment" = " FOUR-PER-EM SPACE " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x2006" }
+ }
+ { "#text" = " " }
+ { "#comment" = " SIX-PER-EM SPACE " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x2007" }
+ }
+ { "#text" = " " }
+ { "#comment" = " FIGURE SPACE " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x2008" }
+ }
+ { "#text" = " " }
+ { "#comment" = " PUNCTUATION SPACE " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x2009" }
+ }
+ { "#text" = " " }
+ { "#comment" = " THIN SPACE " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x200A" }
+ }
+ { "#text" = " " }
+ { "#comment" = " HAIR SPACE " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x200B" }
+ }
+ { "#text" = " " }
+ { "#comment" = " ZERO WIDTH SPACE " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x200C" }
+ }
+ { "#text" = " " }
+ { "#comment" = " ZERO WIDTH NON-JOINER " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x200D" }
+ }
+ { "#text" = " " }
+ { "#comment" = " ZERO WIDTH JOINER " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x200E" }
+ }
+ { "#text" = " " }
+ { "#comment" = " LEFT-TO-RIGHT MARK " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x200F" }
+ }
+ { "#text" = " " }
+ { "#comment" = " RIGHT-TO-LEFT MARK " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x2028" }
+ }
+ { "#text" = " " }
+ { "#comment" = " LINE SEPARATOR " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x2029" }
+ }
+ { "#text" = " " }
+ { "#comment" = " PARAGRAPH SEPARATOR " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x202A" }
+ }
+ { "#text" = " " }
+ { "#comment" = " LEFT-TO-RIGHT EMBEDDING " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x202B" }
+ }
+ { "#text" = " " }
+ { "#comment" = " RIGHT-TO-LEFT EMBEDDING " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x202C" }
+ }
+ { "#text" = " " }
+ { "#comment" = " POP DIRECTIONAL FORMATTING " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x202D" }
+ }
+ { "#text" = " " }
+ { "#comment" = " LEFT-TO-RIGHT OVERRIDE " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x202E" }
+ }
+ { "#text" = " " }
+ { "#comment" = " RIGHT-TO-LEFT OVERRIDE " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x202F" }
+ }
+ { "#text" = " " }
+ { "#comment" = " NARROW NO-BREAK SPACE " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x205F" }
+ }
+ { "#text" = " " }
+ { "#comment" = " MEDIUM MATHEMATICAL SPACE " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x2060" }
+ }
+ { "#text" = " " }
+ { "#comment" = " WORD JOINER " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x2061" }
+ }
+ { "#text" = " " }
+ { "#comment" = " FUNCTION APPLICATION " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x2062" }
+ }
+ { "#text" = " " }
+ { "#comment" = " INVISIBLE TIMES " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x2063" }
+ }
+ { "#text" = " " }
+ { "#comment" = " INVISIBLE SEPARATOR " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x206A" }
+ }
+ { "#text" = " " }
+ { "#comment" = " INHIBIT SYMMETRIC SWAPPING " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x206B" }
+ }
+ { "#text" = " " }
+ { "#comment" = " ACTIVATE SYMMETRIC SWAPPING " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x206C" }
+ }
+ { "#text" = " " }
+ { "#comment" = " INHIBIT ARABIC FORM SHAPING " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x206D" }
+ }
+ { "#text" = " " }
+ { "#comment" = " ACTIVATE ARABIC FORM SHAPING " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x206E" }
+ }
+ { "#text" = " " }
+ { "#comment" = " NATIONAL DIGIT SHAPES " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x206F" }
+ }
+ { "#text" = " " }
+ { "#comment" = " NOMINAL DIGIT SHAPES " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x2800" }
+ }
+ { "#text" = " " }
+ { "#comment" = " BRAILLE PATTERN BLANK " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x3000" }
+ }
+ { "#text" = " " }
+ { "#comment" = " IDEOGRAPHIC SPACE " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0x3164" }
+ }
+ { "#text" = " " }
+ { "#comment" = " HANGUL FILLER " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0xFEFF" }
+ }
+ { "#text" = " " }
+ { "#comment" = " ZERO WIDTH NO-BREAK SPACE " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0xFFA0" }
+ }
+ { "#text" = " " }
+ { "#comment" = " HALFWIDTH HANGUL FILLER " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0xFFF9" }
+ }
+ { "#text" = " " }
+ { "#comment" = " INTERLINEAR ANNOTATION ANCHOR " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0xFFFA" }
+ }
+ { "#text" = " " }
+ { "#comment" = " INTERLINEAR ANNOTATION SEPARATOR " }
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "0xFFFB" }
+ }
+ { "#text" = " " }
+ { "#comment" = " INTERLINEAR ANNOTATION TERMINATOR " }
+ { "#text" = "
+ " }
+ }
+ { "#comment" = "
+ Rescan configuration every 30 seconds when FcFontSetList is called
+ " }
+ { "#text" = "
+ " }
+ { "rescan"
+ { "#text" = "
+ " }
+ { "int"
+ { "#text" = "30" }
+ }
+ { "#text" = " " }
+ }
+ { "#text" = " " }
+ }
+ { "#text" = "
+" }
+ }
+
--- /dev/null
+(*
+Module: Test_fstab
+ Provides unit tests and examples for the <Fstab> lens.
+*)
+
+module Test_fstab =
+
+ let simple = "/dev/vg00/lv00\t /\t ext3\t defaults 1 1\n"
+
+ let simple_tree =
+ { "1"
+ { "spec" = "/dev/vg00/lv00" }
+ { "file" = "/" }
+ { "vfstype" = "ext3" }
+ { "opt" = "defaults" }
+ { "dump" = "1" }
+ { "passno" = "1" } }
+
+ let leading_ws = " /dev/vg00/lv00\t /\t ext3\t defaults 1 1\n"
+
+ let trailing_ws = "/dev/vg00/lv00\t /\t ext3\t defaults 1 1 \t\n"
+
+ let gen_no_passno(passno:string) =
+ "LABEL=/boot\t /boot\t ext3\t defaults 1" . passno . " \t\n"
+ let no_passno = gen_no_passno ""
+
+ let no_passno_tree =
+ { "1"
+ { "spec" = "LABEL=/boot" }
+ { "file" = "/boot" }
+ { "vfstype" = "ext3" }
+ { "opt" = "defaults" }
+ { "dump" = "1" } }
+
+ let no_dump = "/dev/vg00/lv00\t /\t ext3\t defaults\n"
+
+ let no_dump_tree =
+ { "1"
+ { "spec" = "/dev/vg00/lv00" }
+ { "file" = "/" }
+ { "vfstype" = "ext3" }
+ { "opt" = "defaults" } }
+
+ let no_opts = "/dev/vg00/lv00\t /\t ext3\n"
+
+ let no_opts_tree =
+ { "1"
+ { "spec" = "/dev/vg00/lv00" }
+ { "file" = "/" }
+ { "vfstype" = "ext3" } }
+
+ let multi_opts = "devpts\t /dev/pts\t devpts gid=5,mode=620,fscontext=system_u:object_r:removable_t 0 0\n"
+
+ let multi_opts_tree =
+ { "1"
+ { "spec" = "devpts" }
+ { "file" = "/dev/pts" }
+ { "vfstype" = "devpts" }
+ { "opt" = "gid"
+ { "value" = "5" } }
+ { "opt" = "mode"
+ { "value" = "620" } }
+ { "opt" = "fscontext"
+ { "value" = "system_u:object_r:removable_t" } }
+ { "dump" = "0" }
+ { "passno" = "0" } }
+
+ test Fstab.lns get simple = simple_tree
+
+ test Fstab.lns get leading_ws = simple_tree
+
+ test Fstab.lns get trailing_ws = simple_tree
+
+ test Fstab.lns get no_passno = no_passno_tree
+
+ test Fstab.lns put no_passno after set "/1/passno" "1" = gen_no_passno " 1"
+
+ test Fstab.lns get no_dump = no_dump_tree
+
+ test Fstab.lns get no_opts = no_opts_tree
+
+ test Fstab.lns get multi_opts = multi_opts_tree
+
+ test Fstab.lns get "/dev/hdc /media/cdrom0 udf,iso9660 user,noauto\t0\t0\n" =
+ { "1"
+ { "spec" = "/dev/hdc" }
+ { "file" = "/media/cdrom0" }
+ { "vfstype" = "udf" }
+ { "vfstype" = "iso9660" }
+ { "opt" = "user" }
+ { "opt" = "noauto" }
+ { "dump" = "0" }
+ { "passno" = "0" } }
+
+ (* Allow # in the spec *)
+ test Fstab.lns get "sshfs#jon@10.0.0.2:/home /media/server fuse uid=1000,gid=100,port=1022 0 0\n" =
+ { "1"
+ { "spec" = "sshfs#jon@10.0.0.2:/home" }
+ { "file" = "/media/server" }
+ { "vfstype" = "fuse" }
+ { "opt" = "uid"
+ { "value" = "1000" } }
+ { "opt" = "gid"
+ { "value" = "100" } }
+ { "opt" = "port"
+ { "value" = "1022" } }
+ { "dump" = "0" }
+ { "passno" = "0" } }
+
+ (* Bug #191 *)
+ test Fstab.lns get "tmpfs /dev/shm tmpfs rw,rootcontext=\"system_u:object_r:tmpfs_t:s0\" 0 0\n" =
+ { "1"
+ { "spec" = "tmpfs" }
+ { "file" = "/dev/shm" }
+ { "vfstype" = "tmpfs" }
+ { "opt" = "rw" }
+ { "opt" = "rootcontext"
+ { "value" = "\"system_u:object_r:tmpfs_t:s0\"" } }
+ { "dump" = "0" }
+ { "passno" = "0" } }
+
+ (* BZ https://bugzilla.redhat.com/show_bug.cgi?id=751342
+ * Mounting multiple cgroups together results in path with ','
+ *)
+ test Fstab.lns get "spec /path/file1,file2 vfs opts 0 0\n" =
+ { "1"
+ { "spec" = "spec" }
+ { "file" = "/path/file1,file2" }
+ { "vfstype" = "vfs" }
+ { "opt" = "opts" }
+ { "dump" = "0" }
+ { "passno" = "0" } }
+
+ (* Parse when an empty option value is given (equals sign only) *)
+ test Fstab.lns get "//host.example.org/a_share /mnt cifs defaults,ro,password= 0 0\n" =
+ { "1"
+ { "spec" = "//host.example.org/a_share" }
+ { "file" = "/mnt" }
+ { "vfstype" = "cifs" }
+ { "opt" = "defaults" }
+ { "opt" = "ro" }
+ { "opt" = "password"
+ { "value" }
+ }
+ { "dump" = "0" }
+ { "passno" = "0" }
+ }
+
+ (* Allow end-of-line comments *)
+ test Fstab.lns get "UUID=0314be77-bb1e-47d4-b2a2-e69ae5bc954f / ext4 rw,errors=remount-ro 0 1 # device at install: /dev/sda3\n" =
+ { "1"
+ { "spec" = "UUID=0314be77-bb1e-47d4-b2a2-e69ae5bc954f" }
+ { "file" = "/" }
+ { "vfstype" = "ext4" }
+ { "opt" = "rw" }
+ { "opt" = "errors"
+ { "value" = "remount-ro" }
+ }
+ { "dump" = "0" }
+ { "passno" = "1" }
+ { "#comment" = "device at install: /dev/sda3" }
+ }
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+(*
+Module: Test_Fuse
+ Provides unit tests and examples for the <Fuse> lens.
+*)
+
+module Test_Fuse =
+
+(* Variable: conf *)
+let conf = "# Set the maximum number of FUSE mounts allowed to non-root users.
+mount_max = 1000
+
+# Allow non-root users to specify the 'allow_other' or 'allow_root'
+user_allow_other
+"
+
+(* Test: Fuse.lns *)
+test Fuse.lns get conf =
+ { "#comment" = "Set the maximum number of FUSE mounts allowed to non-root users." }
+ { "mount_max" = "1000" }
+ { }
+ { "#comment" = "Allow non-root users to specify the 'allow_other' or 'allow_root'" }
+ { "user_allow_other" }
--- /dev/null
+(*
+Module: Test_gdm
+ Provides unit tests and examples for the <Gdm> lens.
+*)
+
+module Test_gdm =
+
+ let conf = "[daemon]
+# Automatic login, if true the first attached screen will automatically logged
+# in as user as set with AutomaticLogin key.
+AutomaticLoginEnable=false
+AutomaticLogin=
+
+[server]
+0=Standard device=/dev/console
+"
+
+ test Gdm.lns get conf =
+ { "daemon"
+ { "#comment" = "Automatic login, if true the first attached screen will automatically logged" }
+ { "#comment" = "in as user as set with AutomaticLogin key." }
+ { "AutomaticLoginEnable" = "false" }
+ { "AutomaticLogin" }
+ {} }
+ { "server"
+ { "0" = "Standard device=/dev/console" } }
--- /dev/null
+(*
+Module: Test_getcap
+ Provides unit tests and examples for the <Getcap> lens.
+*)
+
+module Test_getcap =
+
+(* Example from getcap(3) *)
+let getcap = "example|an example of binding multiple values to names:\\
+ :foo%bar:foo^blah:foo@:\\
+ :abc%xyz:abc^frap:abc$@:\\
+ :tc=more:
+"
+
+test Getcap.lns get getcap =
+ { "record"
+ { "name" = "example" }
+ { "name" = "an example of binding multiple values to names" }
+ { "capability" = "foo%bar" }
+ { "capability" = "foo^blah" }
+ { "capability" = "foo@" }
+ { "capability" = "abc%xyz" }
+ { "capability" = "abc^frap" }
+ { "capability" = "abc$@" }
+ { "capability" = "tc=more" }
+ }
+
+(* Taken from the standard /etc/login.conf *)
+let login_conf = "# Default allowed authentication styles
+auth-defaults:auth=passwd,skey:
+
+# Default allowed authentication styles for authentication type ftp
+auth-ftp-defaults:auth-ftp=passwd:
+
+#
+# The default values
+# To alter the default authentication types change the line:
+# :tc=auth-defaults:\\
+# to be read something like: (enables passwd, \"myauth\", and activ)
+# :auth=passwd,myauth,activ:\\
+# Any value changed in the daemon class should be reset in default
+# class.
+#
+default:\\
+ :path=/usr/bin /bin /usr/sbin /sbin /usr/X11R6/bin /usr/local/bin /usr/local/sbin:\\
+ :umask=022:\\
+ :datasize-max=512M:\\
+ :datasize-cur=512M:\\
+ :maxproc-max=256:\\
+ :maxproc-cur=128:\\
+ :openfiles-cur=512:\\
+ :stacksize-cur=4M:\\
+ :localcipher=blowfish,8:\\
+ :ypcipher=old:\\
+ :tc=auth-defaults:\\
+ :tc=auth-ftp-defaults:
+"
+
+test Getcap.lns get login_conf =
+ { "#comment" = "Default allowed authentication styles" }
+ { "record"
+ { "name" = "auth-defaults" }
+ { "capability" = "auth=passwd,skey" }
+ }
+ { }
+ { "#comment" = "Default allowed authentication styles for authentication type ftp" }
+ { "record"
+ { "name" = "auth-ftp-defaults" }
+ { "capability" = "auth-ftp=passwd" }
+ }
+ { }
+ { }
+ { "#comment" = "The default values" }
+ { "#comment" = "To alter the default authentication types change the line:" }
+ { "#comment" = ":tc=auth-defaults:\\" }
+ { "#comment" = "to be read something like: (enables passwd, \"myauth\", and activ)" }
+ { "#comment" = ":auth=passwd,myauth,activ:\\" }
+ { "#comment" = "Any value changed in the daemon class should be reset in default" }
+ { "#comment" = "class." }
+ { }
+ { "record"
+ { "name" = "default" }
+ { "capability" = "path=/usr/bin /bin /usr/sbin /sbin /usr/X11R6/bin /usr/local/bin /usr/local/sbin" }
+ { "capability" = "umask=022" }
+ { "capability" = "datasize-max=512M" }
+ { "capability" = "datasize-cur=512M" }
+ { "capability" = "maxproc-max=256" }
+ { "capability" = "maxproc-cur=128" }
+ { "capability" = "openfiles-cur=512" }
+ { "capability" = "stacksize-cur=4M" }
+ { "capability" = "localcipher=blowfish,8" }
+ { "capability" = "ypcipher=old" }
+ { "capability" = "tc=auth-defaults" }
+ { "capability" = "tc=auth-ftp-defaults" }
+ }
+
+(* Sample /etc/printcap *)
+let printcap = "# $OpenBSD: printcap,v 1.1 2014/07/12 03:52:39 deraadt Exp $
+
+lp|local line printer:\\
+ :lp=/dev/lp:sd=/var/spool/output:lf=/var/log/lpd-errs:
+
+rp|remote line printer:\\
+ :lp=:rm=printhost:rp=lp:sd=/var/spool/output:lf=/var/log/lpd-errs:
+"
+
+test Getcap.lns get printcap =
+ { "#comment" = "$OpenBSD: printcap,v 1.1 2014/07/12 03:52:39 deraadt Exp $" }
+ { }
+ { "record"
+ { "name" = "lp" }
+ { "name" = "local line printer" }
+ { "capability" = "lp=/dev/lp" }
+ { "capability" = "sd=/var/spool/output" }
+ { "capability" = "lf=/var/log/lpd-errs" }
+ }
+ { }
+ { "record"
+ { "name" = "rp" }
+ { "name" = "remote line printer" }
+ { "capability" = "lp=" }
+ { "capability" = "rm=printhost" }
+ { "capability" = "rp=lp" }
+ { "capability" = "sd=/var/spool/output" }
+ { "capability" = "lf=/var/log/lpd-errs" }
+ }
+
--- /dev/null
+(*
+Module: Test_group
+ Provides unit tests and examples for the <Group> lens.
+*)
+
+module Test_group =
+
+let conf = "bin:x:2:
+audio:x:29:joe
+avahi-autoipd:!:113:bill,martha
+"
+
+test Group.lns get conf =
+ { "bin"
+ { "password" = "x" }
+ { "gid" = "2" } }
+ { "audio"
+ { "password" = "x" }
+ { "gid" = "29" }
+ { "user" = "joe" } }
+ { "avahi-autoipd"
+ { "password" = "!" }
+ { "gid" = "113" }
+ { "user" = "bill"}
+ { "user" = "martha"} }
+
+(* Password field can be empty *)
+test Group.lns get "root::0:root\n" =
+ { "root"
+ { "password" = "" }
+ { "gid" = "0" }
+ { "user" = "root" } }
+
+(* Password field can be disabled by ! or * *)
+test Group.lns get "testgrp:!:0:testusr\n" =
+ { "testgrp"
+ { "password" = "!" }
+ { "gid" = "0" }
+ { "user" = "testusr" } }
+
+test Group.lns get "testgrp:*:0:testusr\n" =
+ { "testgrp"
+ { "password" = "*" }
+ { "gid" = "0" }
+ { "user" = "testusr" } }
+
+(* NIS defaults *)
+test Group.lns get "+\n" =
+ { "@nisdefault" }
+
+test Group.lns get "+:::\n" =
+ { "@nisdefault"
+ { "password" = "" }
+ { "gid" = "" } }
+
+test Group.lns get "+:*::\n" =
+ { "@nisdefault"
+ { "password" = "*" }
+ { "gid" = "" } }
--- /dev/null
+(*
+Module: Test_grub
+ Provides unit tests and examples for the <Grub> lens.
+*)
+
+module Test_grub =
+
+ let conf = "# grub.conf generated by anaconda
+#
+# Note that you do not have to rerun grub after making changes to this file
+# NOTICE: You have a /boot partition. This means that
+# all kernel and initrd paths are relative to /boot/, eg.
+# root (hd0,0)
+# kernel /vmlinuz-version ro root=/dev/vg00/lv00
+# initrd /initrd-version.img
+boot=/dev/sda
+device (hd0) HD(1,800,64000,9895c137-d4b2-4e3b-a93b-dc9ac4)
+password --md5 $1$M9NLj$p2gs87vwNv48BUu.wAfVw0
+default=0
+setkey
+setkey less backquote
+background 103332
+timeout=5
+splashimage=(hd0,0)/grub/splash.xpm.gz
+gfxmenu=(hd0,0)/boot/message
+verbose = 0
+hiddenmenu
+title Fedora (2.6.24.4-64.fc8)
+ root (hd0,0)
+ kernel /vmlinuz-2.6.24.4-64.fc8 ro root=/dev/vg00/lv00 crashkernel=
+ initrd /initrd-2.6.24.4-64.fc8.img
+title=Fedora (2.6.24.3-50.fc8)
+ root (hd0,0)
+ kernel /vmlinuz-2.6.24.3-50.fc8 ro root=/dev/vg00/lv00
+ initrd /initrd-2.6.24.3-50.fc8.img
+title Fedora (2.6.21.7-3.fc8xen)
+ root (hd0,0)
+ kernel /xen.gz-2.6.21.7-3.fc8
+ module /vmlinuz-2.6.21.7-3.fc8xen ro root=/dev/vg00/lv00
+ module /initrd-2.6.21.7-3.fc8xen.img
+title Fedora (2.6.24.3-34.fc8)
+ root (hd0,0)
+ kernel /vmlinuz-2.6.24.3-34.fc8 ro root=/dev/vg00/lv00
+ initrd /initrd-2.6.24.3-34.fc8.img
+ map (hd0) (hd1)
+title othermenu
+ lock
+ makeactive
+ configfile /boot/grub/othergrub.conf
+"
+
+ test Grub.lns get conf =
+ { "#comment" = "grub.conf generated by anaconda" }
+ {}
+ { "#comment" = "Note that you do not have to rerun grub after making changes to this file" }
+ { "#comment" = "NOTICE: You have a /boot partition. This means that" }
+ { "#comment" = "all kernel and initrd paths are relative to /boot/, eg." }
+ { "#comment" = "root (hd0,0)" }
+ { "#comment" = "kernel /vmlinuz-version ro root=/dev/vg00/lv00" }
+ { "#comment" = "initrd /initrd-version.img" }
+ { "boot" = "/dev/sda" }
+ { "device" = "(hd0)"
+ { "file" = "HD(1,800,64000,9895c137-d4b2-4e3b-a93b-dc9ac4)" } }
+ { "password" = "$1$M9NLj$p2gs87vwNv48BUu.wAfVw0"
+ { "md5" } }
+ { "default" = "0" }
+ { "setkey" }
+ { "setkey"
+ { "to" = "less" }
+ { "from" = "backquote" } }
+ { "background" = "103332" }
+ { "timeout" = "5" }
+ { "splashimage" = "(hd0,0)/grub/splash.xpm.gz" }
+ { "gfxmenu" = "(hd0,0)/boot/message" }
+ { "verbose" = "0" }
+ { "hiddenmenu" }
+ { "title" = "Fedora (2.6.24.4-64.fc8)"
+ { "root" = "(hd0,0)" }
+ { "kernel" = "/vmlinuz-2.6.24.4-64.fc8"
+ { "ro" } { "root" = "/dev/vg00/lv00" } {"crashkernel" = ""} }
+ { "initrd" = "/initrd-2.6.24.4-64.fc8.img" } }
+ { "title" = "Fedora (2.6.24.3-50.fc8)"
+ { "root" = "(hd0,0)" }
+ { "kernel" = "/vmlinuz-2.6.24.3-50.fc8"
+ { "ro" } { "root" = "/dev/vg00/lv00" } }
+ { "initrd" = "/initrd-2.6.24.3-50.fc8.img" } }
+ { "title" = "Fedora (2.6.21.7-3.fc8xen)"
+ { "root" = "(hd0,0)" }
+ { "kernel" = "/xen.gz-2.6.21.7-3.fc8" }
+ { "module" = "/vmlinuz-2.6.21.7-3.fc8xen"
+ { "ro" } { "root" = "/dev/vg00/lv00" } }
+ { "module" = "/initrd-2.6.21.7-3.fc8xen.img" } }
+ { "title" = "Fedora (2.6.24.3-34.fc8)"
+ { "root" = "(hd0,0)" }
+ { "kernel" = "/vmlinuz-2.6.24.3-34.fc8"
+ { "ro" } { "root" = "/dev/vg00/lv00" } }
+ { "initrd" = "/initrd-2.6.24.3-34.fc8.img" }
+ { "map" { "from" = "(hd0)" } { "to" = "(hd1)" } } }
+ { "title" = "othermenu"
+ { "lock" }
+ { "makeactive" }
+ { "configfile" = "/boot/grub/othergrub.conf" } }
+
+
+ test Grub.lns put conf after set "default" "0" = conf
+
+ test Grub.lns get "# menu.lst - See: grub(8), info grub, update-grub(8)
+
+## default num\n" =
+ { "#comment" = "menu.lst - See: grub(8), info grub, update-grub(8)" }
+ {}
+ { "#comment" = "# default num" }
+
+ (* Color directive *)
+ test Grub.lns get "color cyan/blue white/blue\n" =
+ { "color"
+ { "normal" { "foreground" = "cyan" }
+ { "background" = "blue" } }
+ { "highlight" { "foreground" = "white" }
+ { "background" = "blue" } } }
+
+ test Grub.lns get "\tcolor cyan/light-blue\n" =
+ { "color"
+ { "normal" { "foreground" = "cyan" }
+ { "background" = "light-blue" } } }
+
+ test Grub.lns put "color cyan/light-blue\n" after
+ set "/color/highlight/foreground" "white";
+ set "/color/highlight/background" "black" =
+ "color cyan/light-blue white/black\n"
+
+ (* Boot stanza with savedefault *)
+ let boot_savedefault =
+"title\t\tDebian GNU/Linux, kernel 2.6.18-6-vserver-686
+root\t\t(hd0,0)
+ kernel\t\t/boot/vmlinuz-2.6.18-6-vserver-686 root=/dev/md0 ro
+initrd\t\t/boot/initrd.img-2.6.18-6-vserver-686
+\tsavedefault\n"
+
+ test Grub.lns get boot_savedefault =
+ { "title" = "Debian GNU/Linux, kernel 2.6.18-6-vserver-686"
+ { "root" = "(hd0,0)" }
+ { "kernel" = "/boot/vmlinuz-2.6.18-6-vserver-686"
+ { "root" = "/dev/md0" } { "ro" } }
+ { "initrd" = "/boot/initrd.img-2.6.18-6-vserver-686" }
+ { "savedefault" } }
+
+ test Grub.lns get
+ "serial --unit=0 --speed=9600 --word=8 --parity=no --stop=1\n"
+ =
+ { "serial"
+ { "unit" = "0" }
+ { "speed" = "9600" }
+ { "word" = "8" }
+ { "parity" = "no" }
+ { "stop" = "1" } }
+
+ test Grub.lns get
+ "terminal --timeout=10 serial console\n" =
+ { "terminal"
+ { "timeout" = "10" }
+ { "serial" }
+ { "console" } }
+
+ test Grub.boot_setting get
+ "chainloader --force +1 \n" = { "chainloader" = "+1" { "force" } }
+
+ test Grub.savedefault put "savedefault\n" after
+ set "/savedefault" "3" = "savedefault 3\n"
+
+ test Grub.lns get
+"password foo
+password foo /boot/grub/custom.lst
+password --md5 $1$Ahx/T0$Sgcp7Z0xgGlyANIJCdESi.
+password --encrypted ^9^32kwzzX./3WISQ0C
+password --encrypted ^9^32kwzzX./3WISQ0C /boot/grub/custom.lst
+" =
+ { "password" = "foo" }
+ { "password" = "foo"
+ { "file" = "/boot/grub/custom.lst" } }
+ { "password" = "$1$Ahx/T0$Sgcp7Z0xgGlyANIJCdESi."
+ { "md5" } }
+ { "password" = "^9^32kwzzX./3WISQ0C"
+ { "encrypted" } }
+ { "password" = "^9^32kwzzX./3WISQ0C"
+ { "encrypted" }
+ { "file" = "/boot/grub/custom.lst" } }
+
+ (* BZ 590067 - handle comments in a title section *)
+ (* Comments within a boot stanza belong to that boot stanza *)
+ test Grub.lns get "title Red Hat Enterprise Linux AS (2.4.21-63.ELsmp)
+ root (hd0,0)
+ kernel /vmlinuz-2.4.21-63.ELsmp ro root=LABEL=/
+ #initrd /initrd-2.4.21-63.ELsmp.img
+ initrd /initrd-2.4.21-63.EL.img.e1000.8139\n" =
+ { "title" = "Red Hat Enterprise Linux AS (2.4.21-63.ELsmp)"
+ { "root" = "(hd0,0)" }
+ { "kernel" = "/vmlinuz-2.4.21-63.ELsmp" { "ro" } { "root" = "LABEL=/" } }
+ { "#comment" = "initrd /initrd-2.4.21-63.ELsmp.img" }
+ { "initrd" = "/initrd-2.4.21-63.EL.img.e1000.8139" } }
+
+ (* Comments at the end of a boot stanza go into the top level *)
+ test Grub.lns get "title Red Hat Enterprise Linux AS (2.4.21-63.ELsmp)
+ root (hd0,0)
+ kernel /vmlinuz-2.4.21-63.ELsmp ro root=LABEL=/
+ initrd /initrd-2.4.21-63.EL.img.e1000.8139
+ # Now for something completely different\n" =
+ { "title" = "Red Hat Enterprise Linux AS (2.4.21-63.ELsmp)"
+ { "root" = "(hd0,0)" }
+ { "kernel" = "/vmlinuz-2.4.21-63.ELsmp" { "ro" } { "root" = "LABEL=/" } }
+ { "initrd" = "/initrd-2.4.21-63.EL.img.e1000.8139" } }
+ { "#comment" = "Now for something completely different" }
+
+ (* Solaris 10 extensions: kernel$ and module$ are permitted and enable *)
+ (* variable expansion. findroot (similar to root) and bootfs added *)
+ test Grub.lns get "title Solaris 10 10/09 s10x_u8wos_08a X86
+ findroot (pool_rpool,0,a)
+ bootfs rpool/mybootenv-alt
+ kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS
+ module$ /platform/i86pc/boot_archive\n" =
+ { "title" = "Solaris 10 10/09 s10x_u8wos_08a X86"
+ { "findroot" = "(pool_rpool,0,a)" }
+ { "bootfs" = "rpool/mybootenv-alt" }
+ { "kernel$" = "/platform/i86pc/multiboot" { "-B" } { "$ZFS-BOOTFS" } }
+ { "module$" = "/platform/i86pc/boot_archive" } }
+
+ (* Solaris 10 extension: multiboot kernel may take a path as its first *)
+ (* argument. *)
+ test Grub.lns get "title Solaris failsafe
+ findroot (pool_rpool,0,a)
+ kernel /boot/multiboot kernel/unix -s
+ module /boot/x86.miniroot-safe\n" =
+ { "title" = "Solaris failsafe"
+ { "findroot" = "(pool_rpool,0,a)" }
+ { "kernel" = "/boot/multiboot" { "@path" = "kernel/unix" } { "-s" } }
+ { "module" = "/boot/x86.miniroot-safe" } }
+
+ test Grub.lns get "title SUSE Linux Enterprise Server 11 SP1 - 2.6.32.27-0.2
+ kernel (hd0,0)/vmlinuz root=/dev/vg_root/lv_root resume=/dev/vg_root/lv_swap splash=silent showopts
+ initrd (hd0,0)/initrd\n" =
+ { "title" = "SUSE Linux Enterprise Server 11 SP1 - 2.6.32.27-0.2"
+ { "kernel" = "(hd0,0)/vmlinuz"
+ { "root" = "/dev/vg_root/lv_root" }
+ { "resume" = "/dev/vg_root/lv_swap" }
+ { "splash" = "silent" }
+ { "showopts" } }
+ { "initrd" = "(hd0,0)/initrd" } }
+
+ (* Password protected kernel, issue #229 *)
+ test Grub.lns get "title Password Protected Kernel
+ root (hd0,0)
+ kernel /vmlinuz ro root=/dev/mapper/root
+ initrd /initramfs
+ password --md5 secret\n" =
+ { "title" = "Password Protected Kernel"
+ { "root" = "(hd0,0)" }
+ { "kernel" = "/vmlinuz"
+ { "ro" }
+ { "root" = "/dev/mapper/root" }
+ }
+ { "initrd" = "/initramfs" }
+ { "password" = "secret"
+ { "md5" }
+ } }
+
+ (* Test kernel options with different special characters. *)
+ test Grub.lns get "title Fedora (2.6.24.4-64.fc8)
+ root (hd0,0)
+ kernel /vmlinuz-2.6.24.4-64.fc8 ro root=/dev/vg00/lv00 with.dot=1 with-dash=1 with_underscore=1 with+plus=1
+ initrd /initrd-2.6.24.4-64.fc8.img\n" =
+ { "title" = "Fedora (2.6.24.4-64.fc8)"
+ { "root" = "(hd0,0)" }
+ { "kernel" = "/vmlinuz-2.6.24.4-64.fc8"
+ { "ro" }
+ { "root" = "/dev/vg00/lv00" }
+ { "with.dot" = "1" }
+ { "with-dash" = "1" }
+ { "with_underscore" = "1" }
+ { "with+plus" = "1" }
+ }
+ { "initrd" = "/initrd-2.6.24.4-64.fc8.img" } }
+
+ (* Test parsing of invalid entries via menu_error *)
+ test Grub.lns get "default=0\ncrud=no\n" =
+ { "default" = "0" }
+ { "#error" = "crud=no" }
+
+ (* We handle some pretty bizarre bad syntax *)
+ test Grub.lns get "default=0
+crud no
+valid:nope
+nonsense = yes
+bad arg1 arg2 arg3=v\n" =
+ { "default" = "0" }
+ { "#error" = "crud no" }
+ { "#error" = "valid:nope" }
+ { "#error" = "nonsense = yes" }
+ { "#error" = "bad arg1 arg2 arg3=v" }
+
+ (* Test parsing of invalid entries via boot_error *)
+ test Grub.lns get "title test
+ root (hd0,0)
+ crud foo\n" =
+ { "title" = "test"
+ { "root" = "(hd0,0)" }
+ { "#error" = "crud foo" } }
--- /dev/null
+module Test_grubenv =
+
+ let conf = "# GRUB Environment Block
+serial=1
+serial_speed=115200
+dummy1=abc\\\\xyz
+dummy2=abc\\
+xyz
+dummy3=abc\\\\uvw\\
+xyz
+########################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################
+"
+
+ test GrubEnv.lns get conf =
+ { "#comment" = "GRUB Environment Block" }
+ { "1"
+ { "name" = "serial" }
+ { "value" = "1" }
+ }
+ { "2"
+ { "name" = "serial_speed" }
+ { "value" = "115200" }
+ }
+ { "3"
+ { "name" = "dummy1" }
+ { "value" = "abc\\\\xyz" }
+ }
+ { "4"
+ { "name" = "dummy2" }
+ { "value" = "abc\\\nxyz" }
+ }
+ { "5"
+ { "name" = "dummy3" }
+ { "value" = "abc\\\\uvw\\\nxyz" }
+ }
+ { "#comment" = "#######################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################" }
--- /dev/null
+module Test_Gshadow =
+
+let conf = "root:x::
+uucp:x::
+sudo:x:suadmin1,suadmin2:coadmin1,coadmin2
+"
+
+test Gshadow.lns get conf =
+ { "root"
+ { "password" = "x" } }
+ { "uucp"
+ { "password" = "x" } }
+ { "sudo"
+ { "password" = "x" }
+ { "admin" = "suadmin1" }
+ { "admin" = "suadmin2" }
+ { "member" = "coadmin1" }
+ { "member" = "coadmin2" } }
--- /dev/null
+(*
+Module: Test_GtkBookmarks
+ Provides unit tests and examples for the <GtkBookmarks> lens.
+*)
+
+module Test_GtkBookmarks =
+
+(* Test: GtkBookmarks.lns
+ Test without label *)
+test GtkBookmarks.lns get "ftp://user@myftp.com/somedir\n" =
+ { "bookmark" = "ftp://user@myftp.com/somedir" }
+
+(* Test: GtkBookmarks.lns
+ Test with label *)
+test GtkBookmarks.lns get "file:///home/rpinson/Ubuntu%20One Ubuntu One\n" =
+ { "bookmark" = "file:///home/rpinson/Ubuntu%20One"
+ { "label" = "Ubuntu One" } }
+
+(* Test: GtkBookmarks.lns
+ Empty lines are allowed; comments are not *)
+test GtkBookmarks.lns get "ftp://user@myftp.com/somedir\n\nfile:///home/rpinson/Ubuntu%20One Ubuntu One\n" =
+ { "bookmark" = "ftp://user@myftp.com/somedir" }
+ { }
+ { "bookmark" = "file:///home/rpinson/Ubuntu%20One"
+ { "label" = "Ubuntu One" } }
--- /dev/null
+module Test_Host_Conf =
+
+let conf = "
+# /etc/host.conf
+# We have named running, but no NIS (yet)
+order bind, hosts
+# Allow multiple addrs
+multi on
+# Guard against spoof attempts
+nospoof on
+# Trim local domain (not really necessary).
+trim vbrew.com.:fedora.org.
+trim augeas.net.,ubuntu.com.
+"
+
+test Host_Conf.lns get conf =
+ { }
+ { "#comment" = "/etc/host.conf" }
+ { "#comment" = "We have named running, but no NIS (yet)" }
+ { "order"
+ { "1" = "bind" }
+ { "2" = "hosts" }
+ }
+ { "#comment" = "Allow multiple addrs" }
+ { "multi" = "on" }
+ { "#comment" = "Guard against spoof attempts" }
+ { "nospoof" = "on" }
+ { "#comment" = "Trim local domain (not really necessary)." }
+ { "trim"
+ { "1" = "vbrew.com." }
+ { "2" = "fedora.org." }
+ }
+ { "trim"
+ { "3" = "augeas.net." }
+ { "4" = "ubuntu.com." }
+ }
--- /dev/null
+module Test_Hostname =
+
+test Hostname.lns get "local.localnet\n" =
+ { "hostname" = "local.localnet" }
--- /dev/null
+(* Tests for the Hosts module *)
+
+module Test_hosts =
+
+ let two_entries = "127.0.0.1 foo foo.example.com
+ # \tcomment\t
+192.168.0.1 pigiron.example.com pigiron pigiron.example
+"
+
+ test Hosts.record get "127.0.0.1 foo\n" =
+ { "1" { "ipaddr" = "127.0.0.1" }
+ { "canonical" = "foo" } }
+
+ test Hosts.lns get two_entries =
+ { "1" { "ipaddr" = "127.0.0.1" }
+ { "canonical" = "foo" }
+ { "alias" = "foo.example.com" }
+ }
+ { "#comment" = "comment" }
+ { "2" { "ipaddr" = "192.168.0.1" }
+ { "canonical" = "pigiron.example.com" }
+ { "alias" = "pigiron" }
+ { "alias" = "pigiron.example" } }
+
+ test Hosts.record put "127.0.0.1 foo\n" after
+ set "/1/canonical" "bar"
+ = "127.0.0.1 bar\n"
+
+ test Hosts.lns put two_entries after
+ set "/2/alias[10]" "piggy" ;
+ rm "/1/alias[1]" ;
+ rm "/2/alias[2]"
+ = "127.0.0.1 foo
+ # \tcomment\t
+192.168.0.1 pigiron.example.com pigiron piggy
+"
+
+ (* Deleting the 'canonical' node violates the schema; each host entry *)
+ (* must have one *)
+ test Hosts.lns put two_entries after
+ rm "/1/canonical"
+ = *
+
+ (* Make sure blank and indented lines get through *)
+ test Hosts.lns get " \t 127.0.0.1\tlocalhost \n \n\n
+127.0.1.1\tetch.example.com\tetch\n" =
+ { "1" { "ipaddr" = "127.0.0.1" }
+ { "canonical" = "localhost" } }
+ {} {} {}
+ { "2" { "ipaddr" = "127.0.1.1" }
+ { "canonical" = "etch.example.com" }
+ { "alias" = "etch" } }
+
+ (* Comment at the end of a line *)
+ test Hosts.lns get "127.0.0.1 localhost # must always be there \n" =
+ { "1" { "ipaddr" = "127.0.0.1" }
+ { "canonical" = "localhost" }
+ { "#comment" = "must always be there" } }
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
+
+
--- /dev/null
+module Test_Hosts_Access =
+
+let multi_daemon = "sshd, sendmail : 10.234.\n"
+
+test Hosts_Access.lns get multi_daemon =
+ { "1"
+ { "process" = "sshd" }
+ { "process" = "sendmail" }
+ { "client" = "10.234." }
+ }
+
+let multi_daemon_spc = "sshd sendmail : 10.234.\n"
+
+test Hosts_Access.lns get multi_daemon_spc =
+ { "1"
+ { "process" = "sshd" }
+ { "process" = "sendmail" }
+ { "client" = "10.234." }
+ }
+
+let multi_client = "sshd: 10.234. , 192.168.\n"
+
+test Hosts_Access.lns get multi_client =
+ { "1"
+ { "process" = "sshd" }
+ { "client" = "10.234." }
+ { "client" = "192.168." }
+ }
+
+let multi_client_spc = "sshd: 10.234. 192.168.\n"
+
+test Hosts_Access.lns get multi_client_spc =
+ { "1"
+ { "process" = "sshd" }
+ { "client" = "10.234." }
+ { "client" = "192.168." }
+ }
+
+let daemon_except = "ALL Except sshd : 10.234.\n"
+
+test Hosts_Access.lns get daemon_except =
+ { "1"
+ { "process" = "ALL" }
+ { "except"
+ { "process" = "sshd" }
+ }
+ { "client" = "10.234." }
+ }
+
+let client_except = "sshd : ALL EXCEPT 192.168\n"
+
+test Hosts_Access.lns get client_except =
+ { "1"
+ { "process" = "sshd" }
+ { "client" = "ALL" }
+ { "except"
+ { "client" = "192.168" }
+ }
+ }
+
+let daemon_host = "sshd@192.168.0.1: 10.234.\n"
+
+test Hosts_Access.lns get daemon_host =
+ { "1"
+ { "process" = "sshd"
+ { "host" = "192.168.0.1" }
+ }
+ { "client" = "10.234." }
+ }
+
+let user_client = "sshd: root@.example.tld\n"
+
+test Hosts_Access.lns get user_client =
+ { "1"
+ { "process" = "sshd" }
+ { "client" = ".example.tld"
+ { "user" = "root" }
+ }
+ }
+
+let shell_command = "sshd: 192.168. : /usr/bin/my_cmd -t -f some_arg\n"
+
+test Hosts_Access.lns get shell_command =
+ { "1"
+ { "process" = "sshd" }
+ { "client" = "192.168." }
+ { "shell_command" = "/usr/bin/my_cmd -t -f some_arg" }
+ }
+
+let client_netgroup = "sshd: @hostgroup\n"
+test Hosts_Access.lns get client_netgroup =
+ { "1"
+ { "process" = "sshd" }
+ { "client" = "@hostgroup" }
+ }
+
+let client_netmask = "sshd: 192.168.0.0/255.255.0.0\n"
+test Hosts_Access.lns get client_netmask =
+ { "1"
+ { "process" = "sshd" }
+ { "client" = "192.168.0.0"
+ { "netmask" = "255.255.0.0" } }
+ }
+
+let client_cidr_v4 = "sshd: 192.168.0.0/24\n"
+test Hosts_Access.lns get client_cidr_v4 =
+ { "1"
+ { "process" = "sshd" }
+ { "client" = "192.168.0.0"
+ { "netmask" = "24" } }
+ }
+
+let client_cidr_v6 = "sshd: [fe80::%fxp0]/64\n"
+test Hosts_Access.lns get client_cidr_v6 =
+ { "1"
+ { "process" = "sshd" }
+ { "client" = "[fe80::%fxp0]"
+ { "netmask" = "64" } }
+ }
+
+let client_file = "sshd: /etc/external_file\n"
+test Hosts_Access.lns get client_file =
+ { "1"
+ { "process" = "sshd" }
+ { "file" = "/etc/external_file" }
+ }
+
+let client_wildcard = "sshd: 192.168.?.*\n"
+test Hosts_Access.lns get client_wildcard =
+ { "1"
+ { "process" = "sshd" }
+ { "client" = "192.168.?.*" }
+ }
+
+let sample_hosts_allow = "# hosts.allow This file describes the names of the hosts which are
+# allowed to use the local INET services, as decided
+# by the '/usr/sbin/tcpd' server.
+in.telnetd: 192.168.1.
+sshd: 70.16., 207.228.
+ipop3d: ALL
+sendmail: ALL
+"
+
+test Hosts_Access.lns get sample_hosts_allow =
+ { "#comment" = "hosts.allow This file describes the names of the hosts which are" }
+ { "#comment" = "allowed to use the local INET services, as decided" }
+ { "#comment" = "by the '/usr/sbin/tcpd' server." }
+ { "1"
+ { "process" = "in.telnetd" }
+ { "client" = "192.168.1." }
+ }
+ { "2"
+ { "process" = "sshd" }
+ { "client" = "70.16." }
+ { "client" = "207.228." }
+ }
+ { "3"
+ { "process" = "ipop3d" }
+ { "client" = "ALL" }
+ }
+ { "4"
+ { "process" = "sendmail" }
+ { "client" = "ALL" }
+ }
+
+
+let sample_hosts_deny = "#
+# hosts.deny This file describes the names of the hosts which are
+# *not* allowed to use the local INET services, as decided
+# by the '/usr/sbin/tcpd' server.
+in.telnetd: all
+
+sshd: 61., 62., \
+ 64.179., 65.
+"
+
+test Hosts_Access.lns get sample_hosts_deny =
+ { }
+ { "#comment" = "hosts.deny This file describes the names of the hosts which are" }
+ { "#comment" = "*not* allowed to use the local INET services, as decided" }
+ { "#comment" = "by the '/usr/sbin/tcpd' server." }
+ { "1"
+ { "process" = "in.telnetd" }
+ { "client" = "all" }
+ }
+ { }
+ { "2"
+ { "process" = "sshd" }
+ { "client" = "61." }
+ { "client" = "62." }
+ { "client" = "64.179." }
+ { "client" = "65." }
+ }
+
+
+let ip_mask = "sshd: 61./255.255.255.255\n"
+
+test Hosts_Access.lns get ip_mask =
+ { "1"
+ { "process" = "sshd" }
+ { "client" = "61." { "netmask" = "255.255.255.255" } } }
+
+(* Support options from hosts_options(5) *)
+test Hosts_Access.lns get "sshd: all: keepalive\n" =
+ { "1"
+ { "process" = "sshd" }
+ { "client" = "all" }
+ { "keepalive" } }
+
+test Hosts_Access.lns get "sshd: all: severity mail.info\n" =
+ { "1"
+ { "process" = "sshd" }
+ { "client" = "all" }
+ { "severity" = "mail.info" } }
+
+test Hosts_Access.lns get "sshd: all: severity mail.info : rfc931 5 : DENY\n" =
+ { "1"
+ { "process" = "sshd" }
+ { "client" = "all" }
+ { "severity" = "mail.info" }
+ { "rfc931" = "5" }
+ { "DENY" } }
+
+(* Ticket #255, from FreeBSD *)
+let host_options_cmds = "# You need to be clever with finger; do _not_ backfinger!! You can easily
+# start a \"finger war\".
+fingerd : ALL \
+ : spawn (echo Finger. | \
+ /usr/bin/mail -s \"tcpd\: %u@%h[%a] fingered me!\" root) & \
+ : deny
+
+# The rest of the daemons are protected.
+ALL : ALL : \
+ severity auth.info \
+ : twist /bin/echo \"You are not welcome to use %d from %h.\"
+"
+
+test Hosts_Access.lns get host_options_cmds =
+ { "#comment" = "You need to be clever with finger; do _not_ backfinger!! You can easily" }
+ { "#comment" = "start a \"finger war\"." }
+ { "1"
+ { "process" = "fingerd" }
+ { "client" = "ALL" }
+ { "spawn" = "(echo Finger. | \
+ /usr/bin/mail -s \"tcpd\\: %u@%h[%a] fingered me!\" root) &" }
+ { "deny" } }
+ { }
+ { "#comment" = "The rest of the daemons are protected." }
+ { "2"
+ { "process" = "ALL" }
+ { "client" = "ALL" }
+ { "severity" = "auth.info" }
+ { "twist" = "/bin/echo \"You are not welcome to use %d from %h.\"" } }
--- /dev/null
+(*
+Module: Test_Htpasswd
+ Provides unit tests and examples for the <Htpasswd> lens.
+*)
+
+module Test_Htpasswd =
+
+let htpasswd = "foo-plain:bar
+foo-crypt:78YuxG9nnfUCo
+foo-md5:$apr1$NqCzyXmd$WLc/Wb35AkC.8tQQB3/Uw/
+foo-sha1:{SHA}Ys23Ag/5IOWqZCw9QGaVDdHwH00=
+"
+
+test Htpasswd.lns get htpasswd =
+ { "foo-plain" = "bar" }
+ { "foo-crypt" = "78YuxG9nnfUCo" }
+ { "foo-md5" = "$apr1$NqCzyXmd$WLc/Wb35AkC.8tQQB3/Uw/" }
+ { "foo-sha1" = "{SHA}Ys23Ag/5IOWqZCw9QGaVDdHwH00=" }
+
--- /dev/null
+module Test_httpd =
+
+(* Check that we can iterate over directives *)
+let _ = Httpd.directive+
+
+(* Check that we can handle a non-iterative section *)
+let _ = Httpd.section Httpd.directive
+
+(* directives testing *)
+let d1 = "ServerRoot \"/etc/apache2\"\n"
+test Httpd.directive get d1 =
+ { "directive" = "ServerRoot"
+ { "arg" = "\"/etc/apache2\"" }
+ }
+
+(* single quotes *)
+let d1s = "ServerRoot '/etc/apache2'\n"
+test Httpd.directive get d1s =
+ { "directive" = "ServerRoot"
+ { "arg" = "'/etc/apache2'" }
+ }
+
+let d2 = "ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/\n"
+test Httpd.directive get d2 =
+ { "directive" = "ScriptAlias"
+ { "arg" = "/cgi-bin/" }
+ { "arg" = "/usr/lib/cgi-bin/" }
+ }
+
+let d3 = "LockFile /var/lock/apache2/accept.lock\n"
+test Httpd.directive get d3 =
+ { "directive" = "LockFile"
+ { "arg" = "/var/lock/apache2/accept.lock" }
+ }
+
+let c1 = "
+<IfModule>
+</IfModule>
+"
+let c1_put =
+"
+<IfModule foo bar>
+</IfModule>
+"
+
+
+test Httpd.lns get c1 = { }{ "IfModule" }
+
+test Httpd.lns put c1 after set "/IfModule/arg[1]" "foo";
+ set "/IfModule/arg[2]" "bar" = c1_put
+
+let c2 = "
+<IfModule !mpm_winnt.c>
+ <IfModule !mpm_netware.c>
+ LockFile /var/lock/apache2/accept.lock
+ </IfModule>
+</IfModule>
+"
+
+test Httpd.lns get c2 =
+ { }
+ { "IfModule"
+ { "arg" = "!mpm_winnt.c" }
+ { "IfModule"
+ { "arg" = "!mpm_netware.c" }
+ { "directive" = "LockFile"
+ { "arg" = "/var/lock/apache2/accept.lock" }
+ }
+ }
+ }
+
+(* arguments must be the first child of the section *)
+test Httpd.lns put c2 after rm "/IfModule/arg";
+ insb "arg" "/IfModule/*[1]";
+ set "/IfModule/arg" "foo" =
+"
+<IfModule foo>
+ <IfModule !mpm_netware.c>
+ LockFile /var/lock/apache2/accept.lock
+ </IfModule>
+</IfModule>
+"
+
+let c3 = "
+<IfModule mpm_event_module>
+ StartServers 2
+ MaxClients 150
+ MinSpareThreads 25
+ MaxSpareThreads 75
+ ThreadLimit 64
+ ThreadsPerChild 25
+ MaxRequestsPerChild 0
+</IfModule>
+"
+
+test Httpd.lns get c3 =
+ { }
+ { "IfModule"
+ { "arg" = "mpm_event_module" }
+ { "directive" = "StartServers"
+ { "arg" = "2" }
+ }
+ { "directive" = "MaxClients"
+ { "arg" = "150" }
+ }
+ { "directive" = "MinSpareThreads"
+ { "arg" = "25" }
+ }
+ { "directive" = "MaxSpareThreads"
+ { "arg" = "75" }
+ }
+ { "directive" = "ThreadLimit"
+ { "arg" = "64" }
+ }
+ { "directive" = "ThreadsPerChild"
+ { "arg" = "25" }
+ }
+ { "directive" = "MaxRequestsPerChild"
+ { "arg" = "0" }
+ }
+ }
+
+
+
+let c4 = "
+<Files ~ \"^\.ht\">
+ Order allow,deny
+ Deny from all
+ Satisfy all
+</Files>
+"
+
+test Httpd.lns get c4 =
+ { }
+ { "Files"
+ { "arg" = "~" }
+ { "arg" = "\"^\.ht\"" }
+ { "directive" = "Order"
+ { "arg" = "allow,deny" }
+ }
+ { "directive" = "Deny"
+ { "arg" = "from" }
+ { "arg" = "all" }
+ }
+ { "directive" = "Satisfy"
+ { "arg" = "all" }
+ }
+ }
+
+
+
+let c5 = "LogFormat \"%{User-agent}i\" agent\n"
+test Httpd.lns get c5 =
+ { "directive" = "LogFormat"
+ { "arg" = "\"%{User-agent}i\"" }
+ { "arg" = "agent" }
+ }
+
+let c7 = "LogFormat \"%v:%p %h %l %u %t \\\"%r\\\" %>s %O \\\"%{Referer}i\\\" \\\"%{User-Agent}i\\\"\" vhost_combined\n"
+test Httpd.lns get c7 =
+ { "directive" = "LogFormat"
+ { "arg" = "\"%v:%p %h %l %u %t \\\"%r\\\" %>s %O \\\"%{Referer}i\\\" \\\"%{User-Agent}i\\\"\"" }
+ { "arg" = "vhost_combined" }
+ }
+
+let c8 = "IndexIgnore .??* *~ *# RCS CVS *,v *,t \n"
+test Httpd.directive get c8 =
+ { "directive" = "IndexIgnore"
+ { "arg" = ".??*" }
+ { "arg" = "*~" }
+ { "arg" = "*#" }
+ { "arg" = "RCS" }
+ { "arg" = "CVS" }
+ { "arg" = "*,v" }
+ { "arg" = "*,t" }
+ }
+
+(* The backslash "\" may be used as the last character on a line to indicate
+ * that the directive continues onto the next line. There must be no other
+ * characters or white space between the backslash and the end of the line.
+ *)
+let multiline = "Options Indexes \
+FollowSymLinks MultiViews
+"
+
+test Httpd.directive get multiline =
+ { "directive" = "Options"
+ { "arg" = "Indexes" }
+ { "arg" = "FollowSymLinks" }
+ { "arg" = "MultiViews" }
+ }
+
+
+let conf2 = "<VirtualHost *:80>
+ ServerAdmin webmaster@localhost
+
+ DocumentRoot /var/www
+ <Directory />
+ Options FollowSymLinks
+ AllowOverride None
+ </Directory>
+ <Directory /var/www/>
+ Options Indexes FollowSymLinks MultiViews
+ AllowOverride None
+ Order allow,deny
+ allow from all
+ </Directory>
+
+ ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
+ <Directory \"/usr/lib/cgi-bin\">
+ AllowOverride None
+ Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
+ Order allow,deny
+ Allow from all
+ </Directory>
+
+ ErrorLog /var/log/apache2/error.log
+
+ # Possible values include: debug, info, notice, warn, error, crit,
+ # alert, emerg.
+ LogLevel warn
+
+ CustomLog /var/log/apache2/access.log combined
+
+ SSLRequireSSL
+
+ Alias /doc/ \"/usr/share/doc/\"
+ <Directory \"/usr/share/doc/\">
+ Options Indexes MultiViews FollowSymLinks
+ AllowOverride None
+ Order deny,allow
+ Deny from all
+ Allow from 127.0.0.0/255.0.0.0 ::1/128
+ </Directory>
+
+</VirtualHost>
+"
+
+test Httpd.lns get conf2 =
+ { "VirtualHost"
+ { "arg" = "*:80" }
+ { "directive" = "ServerAdmin"
+ { "arg" = "webmaster@localhost" }
+ }
+ { }
+ { "directive" = "DocumentRoot"
+ { "arg" = "/var/www" }
+ }
+ { "Directory"
+ { "arg" = "/" }
+ { "directive" = "Options"
+ { "arg" = "FollowSymLinks" }
+ }
+ { "directive" = "AllowOverride"
+ { "arg" = "None" }
+ }
+ }
+ { "Directory"
+ { "arg" = "/var/www/" }
+ { "directive" = "Options"
+ { "arg" = "Indexes" }
+ { "arg" = "FollowSymLinks" }
+ { "arg" = "MultiViews" }
+ }
+ { "directive" = "AllowOverride"
+ { "arg" = "None" }
+ }
+ { "directive" = "Order"
+ { "arg" = "allow,deny" }
+ }
+ { "directive" = "allow"
+ { "arg" = "from" }
+ { "arg" = "all" }
+ }
+ }
+ { "directive" = "ScriptAlias"
+ { "arg" = "/cgi-bin/" }
+ { "arg" = "/usr/lib/cgi-bin/" }
+ }
+ { "Directory"
+ { "arg" = "\"/usr/lib/cgi-bin\"" }
+ { "directive" = "AllowOverride"
+ { "arg" = "None" }
+ }
+ { "directive" = "Options"
+ { "arg" = "+ExecCGI" }
+ { "arg" = "-MultiViews" }
+ { "arg" = "+SymLinksIfOwnerMatch" }
+ }
+ { "directive" = "Order"
+ { "arg" = "allow,deny" }
+ }
+ { "directive" = "Allow"
+ { "arg" = "from" }
+ { "arg" = "all" }
+ }
+ }
+ { "directive" = "ErrorLog"
+ { "arg" = "/var/log/apache2/error.log" }
+ }
+ { }
+ { "#comment" = "Possible values include: debug, info, notice, warn, error, crit," }
+ { "#comment" = "alert, emerg." }
+ { "directive" = "LogLevel"
+ { "arg" = "warn" }
+ }
+ { }
+ { "directive" = "CustomLog"
+ { "arg" = "/var/log/apache2/access.log" }
+ { "arg" = "combined" }
+ }
+ { }
+ { "directive" = "SSLRequireSSL" }
+ { }
+ { "directive" = "Alias"
+ { "arg" = "/doc/" }
+ { "arg" = "\"/usr/share/doc/\"" }
+ }
+ { "Directory"
+ { "arg" = "\"/usr/share/doc/\"" }
+ { "directive" = "Options"
+ { "arg" = "Indexes" }
+ { "arg" = "MultiViews" }
+ { "arg" = "FollowSymLinks" }
+ }
+ { "directive" = "AllowOverride"
+ { "arg" = "None" }
+ }
+ { "directive" = "Order"
+ { "arg" = "deny,allow" }
+ }
+ { "directive" = "Deny"
+ { "arg" = "from" }
+ { "arg" = "all" }
+ }
+ { "directive" = "Allow"
+ { "arg" = "from" }
+ { "arg" = "127.0.0.0/255.0.0.0" }
+ { "arg" = "::1/128" }
+ }
+ }
+ }
+
+(* Eol comment *)
+test Httpd.lns get "<a> # a comment
+MyDirective Foo
+</a>\n" =
+ { "a"
+ { "#comment" = "a comment" }
+ { "directive" = "MyDirective" { "arg" = "Foo" } } }
+
+test Httpd.lns get "<a>
+# a comment
+</a>\n" =
+ { "a" { "#comment" = "a comment" } }
+
+(* Test: Httpd.lns
+ Newlines inside quoted value (GH issue #104) *)
+test Httpd.lns get "Single 'Foo\\
+bar'
+Double \"Foo\\
+bar\"\n" =
+ { "directive" = "Single"
+ { "arg" = "'Foo\\\nbar'" } }
+ { "directive" = "Double"
+ { "arg" = "\"Foo\\\nbar\"" } }
+
+(* Test: Httpd.lns
+ Support >= in tags (GH #154) *)
+let versioncheck = "
+<IfVersion = 2.1>
+<IfModule !proxy_ajp_module>
+LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
+</IfModule>
+</IfVersion>
+
+<IfVersion >= 2.4>
+<IfModule !proxy_ajp_module>
+LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
+</IfModule>
+</IfVersion>
+"
+
+test Httpd.lns get versioncheck =
+ { }
+ { "IfVersion"
+ { "arg" = "=" }
+ { "arg" = "2.1" }
+ { "IfModule"
+ { "arg" = "!proxy_ajp_module" }
+ { "directive" = "LoadModule"
+ { "arg" = "proxy_ajp_module" }
+ { "arg" = "modules/mod_proxy_ajp.so" }
+ }
+ }
+ }
+ { "IfVersion"
+ { "arg" = ">=" }
+ { "arg" = "2.4" }
+ { "IfModule"
+ { "arg" = "!proxy_ajp_module" }
+ { "directive" = "LoadModule"
+ { "arg" = "proxy_ajp_module" }
+ { "arg" = "modules/mod_proxy_ajp.so" }
+ }
+ }
+ }
+
+
+(* GH #220 *)
+let double_comment = "<IfDefine Foo>
+##
+## Comment
+##
+</IfDefine>\n"
+
+test Httpd.lns get double_comment =
+ { "IfDefine"
+ { "arg" = "Foo" }
+ { "#comment" = "#" }
+ { "#comment" = "# Comment" }
+ { "#comment" = "#" }
+ }
+
+let single_comment = "<IfDefine Foo>
+#
+## Comment
+##
+</IfDefine>\n"
+
+test Httpd.lns get single_comment =
+ { "IfDefine"
+ { "arg" = "Foo" }
+ { "#comment" = "# Comment" }
+ { "#comment" = "#" }
+ }
+
+let single_empty = "<IfDefine Foo>
+#
+
+</IfDefine>\n"
+test Httpd.lns get single_empty =
+ { "IfDefine"
+ { "arg" = "Foo" }
+ }
+
+let eol_empty = "<IfDefine Foo> #
+</IfDefine>\n"
+test Httpd.lns get eol_empty =
+ { "IfDefine"
+ { "arg" = "Foo" }
+ }
+
+(* Issue #140 *)
+test Httpd.lns get "<IfModule mod_ssl.c>
+ # one comment
+ # another comment
+</IfModule>\n" =
+ { "IfModule"
+ { "arg" = "mod_ssl.c" }
+ { "#comment" = "one comment" }
+ { "#comment" = "another comment" }
+ }
+
+(* Issue #307: backslashes in regexes *)
+test Httpd.lns get "<VirtualHost *:80>
+ RewriteRule ^/(.*) http\:\/\/example\.com\/$1 [L,R,NE]
+ RewriteRule \.css\.gz$ - [T=text/css,E=no-gzip:1]
+</VirtualHost>\n" =
+ { "VirtualHost"
+ { "arg" = "*:80" }
+ { "directive" = "RewriteRule"
+ { "arg" = "^/(.*)" }
+ { "arg" = "http\:\/\/example\.com\/$1" }
+ { "arg" = "[L,R,NE]" } }
+ { "directive" = "RewriteRule"
+ { "arg" = "\.css\.gz$" }
+ { "arg" = "-" }
+ { "arg" = "[T=text/css,E=no-gzip:1]" } } }
+
+(* https://github.com/letsencrypt/letsencrypt/issues/1294#issuecomment-161805063 *)
+test Httpd.lns get "<IfModule>
+</ifModule>\n" =
+ { "IfModule" }
+
+(* https://github.com/letsencrypt/letsencrypt/issues/1693 *)
+test Httpd.lns get "<IfModule mod_ssl.c>
+ <VirtualHost *:443>
+ ServerAdmin admin@example.com
+ </VirtualHost> </IfModule>\n" =
+ { "IfModule"
+ { "arg" = "mod_ssl.c" }
+ { "VirtualHost"
+ { "arg" = "*:443" }
+ { "directive" = "ServerAdmin"
+ { "arg" = "admin@example.com" } } } }
+
+(* Double quotes inside braces in directive arguments
+ https://github.com/letsencrypt/letsencrypt/issues/1766 *)
+test Httpd.lns get "SSLRequire %{SSL_CLIENT_S_DN_CN} in {\"foo@bar.com\", bar@foo.com}\n" =
+ { "directive" = "SSLRequire"
+ { "arg" = "%{SSL_CLIENT_S_DN_CN}" }
+ { "arg" = "in" }
+ { "wordlist"
+ { "arg" = "\"foo@bar.com\"" }
+ { "arg" = "bar@foo.com" } } }
+
+(* Issue #330: the closing double quote of a directive argument is optional, e.g. for messages *)
+test Httpd.lns get "SSLCipherSuite \"EECDH+ECDSA+AESGCM EECDH+aRS$\n" =
+ { "directive" = "SSLCipherSuite"
+ { "arg" = "\"EECDH+ECDSA+AESGCM EECDH+aRS$" } }
+
+test Httpd.lns get "ErrorDocument 404 \"The requested file favicon.ico was not found.\n" =
+ { "directive" = "ErrorDocument"
+ { "arg" = "404" }
+ { "arg" = "\"The requested file favicon.ico was not found." } }
+
+(* Quotes inside an unquoted directive argument
+ https://github.com/letsencrypt/letsencrypt/issues/1934 *)
+test Httpd.lns get "<VirtualHost *:80>
+ WSGIDaemonProcess _graphite processes=5 threads=5 display-name='%{GROUP}' inactivity-timeout=120 user=_graphite group=_graphite
+</VirtualHost>\n" =
+ { "VirtualHost"
+ { "arg" = "*:80" }
+ { "directive" = "WSGIDaemonProcess"
+ { "arg" = "_graphite" }
+ { "arg" = "processes=5" }
+ { "arg" = "threads=5" }
+ { "arg" = "display-name='%{GROUP}'" }
+ { "arg" = "inactivity-timeout=120" }
+ { "arg" = "user=_graphite" }
+ { "arg" = "group=_graphite" } } }
+
+(* Issue #327: perl blocks *)
+test Httpd.lns get "<Perl>
+ Apache::AuthDBI->setCacheTime(600);
+</Perl>\n" =
+ { "Perl" = "\n Apache::AuthDBI->setCacheTime(600);\n" }
+
+(* Line continuations inside VirtualHost blocks *)
+test Httpd.lns get "<VirtualHost \\
+ 0.0.0.0:7080 \\
+ [00000:000:000:0000::2]:7080 \\
+ 0.0.0.0:7080 \\
+ 127.0.0.1:7080 \\
+ >
+</VirtualHost>\n" =
+ { "VirtualHost"
+ { "arg" = "0.0.0.0:7080" }
+ { "arg" = "[00000:000:000:0000::2]:7080" }
+ { "arg" = "0.0.0.0:7080" }
+ { "arg" = "127.0.0.1:7080" } }
+
+(* Blank line continuations inside VirtualHost blocks *)
+test Httpd.lns get "<VirtualHost \\
+ 0.0.0.0:7080 \\
+ \\
+ 0.0.0.0:7080 \\
+ \\
+ >
+</VirtualHost>\n" =
+ { "VirtualHost"
+ { "arg" = "0.0.0.0:7080" }
+ { "arg" = "0.0.0.0:7080" } }
+
+(* Non-continuation backslashes inside section headings *)
+test Httpd.lns get "<FilesMatch \.php$>
+ ExpiresActive Off
+</FilesMatch>\n" =
+ { "FilesMatch"
+ { "arg" = "\.php$" }
+ { "directive" = "ExpiresActive"
+ { "arg" = "Off" } } }
+
+(* Escaped spaces in directive and section arguments *)
+test Httpd.lns get "RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.+/trackback/?\ HTTP/ [NC]\n" =
+ { "directive" = "RewriteCond"
+ { "arg" = "%{THE_REQUEST}" }
+ { "arg" = "^[A-Z]{3,9}\ /.+/trackback/?\ HTTP/" }
+ { "arg" = "[NC]" } }
+
+test Httpd.lns get "<FilesMatch \ test\.php$></FilesMatch>\n" =
+ { "FilesMatch"
+ { "arg" = "\ test\.php$" } }
+
+(* Continuations in comments cause the comment to be continued without a new comment character *)
+test Httpd.lns get "#ServerRoot \\\n /var/www\n" =
+ { "#comment" = "ServerRoot \\\n /var/www" }
+
+(* Empty comments can contain continuations, too. Issue #423 *)
+test Httpd.lns get "# \\\n\n" = { }
+test Httpd.comment get "# a\\\n\n" = { "#comment" = "a" }
+test Httpd.comment get "# \\\na\\\n\n" = { "#comment" = "a" }
+test Httpd.comment get "# \\\n\\\na \\\n\\\n\n" = { "#comment" = "a" }
+
+(* Comparison with empty string did not work. Issue #429 *)
+test Httpd.dir_args get ">\"a\"" = { "arg" = ">\"a\"" }
+test Httpd.dir_args get ">\"\"" = { "arg" = ">\"\"" }
+test Httpd.directive get "RewriteCond ${movedPageMap:$1} >\"a\"\n" =
+ { "directive" = "RewriteCond"
+ { "arg" = "${movedPageMap:$1}" }
+ { "arg" = ">\"a\"" }}
+test Httpd.directive get "RewriteCond ${movedPageMap:$1} >\"\"\n" =
+ { "directive" = "RewriteCond"
+ { "arg" = "${movedPageMap:$1}" }
+ { "arg" = ">\"\"" }}
+
+(* Quoted arguments may or may not have space separating them. Issue #435 *)
+test Httpd.directive get
+ "ProxyPassReverse \"/js\" \"http://127.0.0.1:8123/js\"\n" =
+ { "directive" = "ProxyPassReverse"
+ { "arg" = "\"/js\"" }
+ { "arg" = "\"http://127.0.0.1:8123/js\"" } }
+
+test Httpd.directive get
+ "ProxyPassReverse \"/js\"\"http://127.0.0.1:8123/js\"\n" =
+ { "directive" = "ProxyPassReverse"
+ { "arg" = "\"/js\"" }
+ { "arg" = "\"http://127.0.0.1:8123/js\"" } }
+
+(* Don't get confused by quoted strings inside bare arguments. Issue #470 *)
+test Httpd.directive get
+ "RequestHeader set X-Forwarded-Proto https expr=(%{HTTP:CF-Visitor}='{\"scheme\":\"https\"}')\n" =
+ { "directive" = "RequestHeader"
+ { "arg" = "set" }
+ { "arg" = "X-Forwarded-Proto" }
+ { "arg" = "https" }
+ { "arg" = "expr=(%{HTTP:CF-Visitor}='{\"scheme\":\"https\"}')" } }
+
+(* Issue #577: the newline starting a section body is optional, and the
+   opening line may end with an empty comment. This used to miss empty
+   comments containing whitespace *)
+test Httpd.lns get "<If cond>#\n</If>\n" = { "If" { "arg" = "cond" } }
+
+test Httpd.lns get "<If cond># \n</If>\n" = { "If" { "arg" = "cond" } }
+
+test Httpd.lns get "<If cond>\n# \n</If>\n" = { "If" { "arg" = "cond" } }
+
+test Httpd.lns get "<If cond># text\n</If>\n" =
+ { "If"
+ { "arg" = "cond" }
+ { "#comment" = "text" } }
+
+test Httpd.lns get "<If cond>\n\t# text\n</If>\n" =
+ { "If"
+ { "arg" = "cond" }
+ { "#comment" = "text" } }
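+
+(* Sketch, not part of the original suite: assuming the Issue #577 fix
+   above, a section whose opening line ends with a plain newline and no
+   comment should yield the identical tree *)
+test Httpd.lns get "<If cond>\n</If>\n" = { "If" { "arg" = "cond" } }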
--- /dev/null
+module Test_inetd =
+
+ (* The standard "parse a bucket of text" test *)
+ let conf = "# Blah di blah comment
+
+simplesrv stream tcp nowait fred /usr/bin/simplesrv
+arguserve dgram udp wait mary /usr/bin/usenet foo bar wombat
+
+1234 stream tcp nowait fred /usr/bin/numbersrv
+
+127.0.0.1:addrsrv stream tcp nowait fred /usr/bin/addrsrv
+127.0.0.1,10.0.0.1:multiaddrsrv stream tcp nowait fred /usr/bin/multiaddrsrv
+faff.fred.com:
+127.0.0.1,faff.fred.com:
+*:
+[::1]:addrsrv stream tcp nowait fred /usr/bin/addrsrv
+
+sndbufsrv stream tcp,sndbuf=12k nowait fred /usr/bin/sndbufsrv
+rcvbufsrv stream tcp,rcvbuf=24k nowait fred /usr/bin/rcvbufsrv
+allbufsrv stream tcp,sndbuf=1m,rcvbuf=24k nowait fred /usr/bin/allbufsrv
+
+dotgroupsrv stream tcp nowait fred.wilma /usr/bin/dotgroupsrv
+colongroupsrv stream tcp nowait fred:wilma /usr/bin/colongroupsrv
+
+maxsrv stream tcp nowait.20 fred /usr/bin/maxsrv
+
+dummy/1 tli rpc/circuit_v,udp wait root /tmp/test_svc test_svc
+"
+
+ test Inetd.lns get conf =
+ { "#comment" = "Blah di blah comment" }
+ {}
+ { "service" = "simplesrv"
+ { "socket" = "stream" }
+ { "protocol" = "tcp" }
+ { "wait" = "nowait" }
+ { "user" = "fred" }
+ { "command" = "/usr/bin/simplesrv" }
+ }
+ { "service" = "arguserve"
+ { "socket" = "dgram" }
+ { "protocol" = "udp" }
+ { "wait" = "wait" }
+ { "user" = "mary" }
+ { "command" = "/usr/bin/usenet" }
+ { "arguments"
+ { "1" = "foo" }
+ { "2" = "bar" }
+ { "3" = "wombat" }
+ }
+ }
+ {}
+ { "service" = "1234"
+ { "socket" = "stream" }
+ { "protocol" = "tcp" }
+ { "wait" = "nowait" }
+ { "user" = "fred" }
+ { "command" = "/usr/bin/numbersrv" }
+ }
+ {}
+ { "service" = "addrsrv"
+ { "address"
+ { "1" = "127.0.0.1" }
+ }
+ { "socket" = "stream" }
+ { "protocol" = "tcp" }
+ { "wait" = "nowait" }
+ { "user" = "fred" }
+ { "command" = "/usr/bin/addrsrv" }
+ }
+ { "service" = "multiaddrsrv"
+ { "address"
+ { "1" = "127.0.0.1" }
+ { "2" = "10.0.0.1" }
+ }
+ { "socket" = "stream" }
+ { "protocol" = "tcp" }
+ { "wait" = "nowait" }
+ { "user" = "fred" }
+ { "command" = "/usr/bin/multiaddrsrv" }
+ }
+ { "address"
+ { "1" = "faff.fred.com" }
+ }
+ { "address"
+ { "1" = "127.0.0.1" }
+ { "2" = "faff.fred.com" }
+ }
+ { "address"
+ { "1" = "*" }
+ }
+ { "service" = "addrsrv"
+ { "address"
+ { "1" = "[::1]" }
+ }
+ { "socket" = "stream" }
+ { "protocol" = "tcp" }
+ { "wait" = "nowait" }
+ { "user" = "fred" }
+ { "command" = "/usr/bin/addrsrv" }
+ }
+ {}
+ { "service" = "sndbufsrv"
+ { "socket" = "stream" }
+ { "protocol" = "tcp" }
+ { "sndbuf" = "12k" }
+ { "wait" = "nowait" }
+ { "user" = "fred" }
+ { "command" = "/usr/bin/sndbufsrv" }
+ }
+ { "service" = "rcvbufsrv"
+ { "socket" = "stream" }
+ { "protocol" = "tcp" }
+ { "rcvbuf" = "24k" }
+ { "wait" = "nowait" }
+ { "user" = "fred" }
+ { "command" = "/usr/bin/rcvbufsrv" }
+ }
+ { "service" = "allbufsrv"
+ { "socket" = "stream" }
+ { "protocol" = "tcp" }
+ { "sndbuf" = "1m" }
+ { "rcvbuf" = "24k" }
+ { "wait" = "nowait" }
+ { "user" = "fred" }
+ { "command" = "/usr/bin/allbufsrv" }
+ }
+ {}
+ { "service" = "dotgroupsrv"
+ { "socket" = "stream" }
+ { "protocol" = "tcp" }
+ { "wait" = "nowait" }
+ { "user" = "fred" }
+ { "group" = "wilma" }
+ { "command" = "/usr/bin/dotgroupsrv" }
+ }
+ { "service" = "colongroupsrv"
+ { "socket" = "stream" }
+ { "protocol" = "tcp" }
+ { "wait" = "nowait" }
+ { "user" = "fred" }
+ { "group" = "wilma" }
+ { "command" = "/usr/bin/colongroupsrv" }
+ }
+ {}
+ { "service" = "maxsrv"
+ { "socket" = "stream" }
+ { "protocol" = "tcp" }
+ { "wait" = "nowait" }
+ { "max" = "20" }
+ { "user" = "fred" }
+ { "command" = "/usr/bin/maxsrv" }
+ }
+ {}
+ { "rpc_service" = "dummy"
+ { "version" = "1" }
+ { "endpoint-type" = "tli" }
+ { "protocol" = "circuit_v" }
+ { "protocol" = "udp" }
+ { "wait" = "wait" }
+ { "user" = "root" }
+ { "command" = "/tmp/test_svc" }
+ { "arguments"
+ { "1" = "test_svc" } }
+ }
+
+
+(**************************************************************************)
+
+ (* Test new file creation *)
+
+ test Inetd.lns put "" after
+ set "/service" "faffsrv";
+ set "/service/socket" "stream";
+ set "/service/protocol" "tcp";
+ set "/service/wait" "nowait";
+ set "/service/user" "george";
+ set "/service/command" "/sbin/faffsrv"
+ = "faffsrv stream tcp nowait george /sbin/faffsrv\n"
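+
+ (* Sketch, not part of the original suite: editing a single field of an
+    existing entry; assuming the default rendering, the rest of the line
+    is left untouched *)
+ test Inetd.lns put "faffsrv stream tcp nowait george /sbin/faffsrv\n" after
+   set "/service/user" "fred"
+ = "faffsrv stream tcp nowait fred /sbin/faffsrv\n"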
+
+
--- /dev/null
+(*
+Module: Test_IniFile
+ Provides unit tests and examples for the <IniFile> module.
+
+About: Tests to run
+
+ The tests are run with all combinations of the following
+ three parameters:
+
+ > separator : (a) default (/[:=]/ "=") ; (b) "=" "="
+ > comment : (c) default (/[;#]/ ";") ; (d) ";" ";"
+ > empty lines : (e) default ; (f) noempty
+
+*)
+
+module Test_IniFile =
+
+ (* ALL TESTS TO RUN *)
+
+
+ (* Group: TEST a/c/e *)
+ (* Variable: comment_ace *)
+ let comment_ace = IniFile.comment IniFile.comment_re IniFile.comment_default
+ (* Variable: sep_ace *)
+ let sep_ace = IniFile.sep IniFile.sep_re IniFile.sep_default
+ (* Variable: entry_ace *)
+ let entry_ace = IniFile.entry IniFile.entry_re sep_ace comment_ace
+ (* Variable: title_ace *)
+ let title_ace = IniFile.title IniFile.record_re
+ (* Variable: record_ace *)
+ let record_ace = IniFile.record title_ace entry_ace
+ (* Variable: lns_ace *)
+ let lns_ace = IniFile.lns record_ace comment_ace
+ (* Variable: conf_ace *)
+ let conf_ace = "# comment with sharp
+
+[section1]
+test_ace = value # end of line comment
+test_ace =
+test_ace = \"value with spaces\"
+; comment with colon
+
+"
+ (* Test: lns_ace
+ Testing the a/c/e combination *)
+ test lns_ace get conf_ace =
+ { "#comment" = "comment with sharp" }
+ {}
+ { "section1"
+ { "test_ace" = "value"
+ { "#comment" = "end of line comment" } }
+ { "test_ace" }
+ { "test_ace" = "value with spaces" }
+ { "#comment" = "comment with colon" }
+ {} }
+
+ test lns_ace put conf_ace after
+ set "section1/foo" "yes" = "# comment with sharp
+
+[section1]
+test_ace = value # end of line comment
+test_ace =
+test_ace = \"value with spaces\"
+; comment with colon
+
+foo=yes
+"
+
+ (* Test: lns_ace
+ Quotes can appear within bare values *)
+ test lns_ace get "[section]\ntest_ace = value \"with quotes\" inside\n" =
+ { "section" { "test_ace" = "value \"with quotes\" inside" } }
+
+ (* Group: TEST a/c/f *)
+ (* Variable: comment_acf *)
+ let comment_acf = IniFile.comment IniFile.comment_re IniFile.comment_default
+ (* Variable: sep_acf *)
+ let sep_acf = IniFile.sep IniFile.sep_re IniFile.sep_default
+ (* Variable: entry_acf *)
+ let entry_acf = IniFile.entry IniFile.entry_re sep_acf comment_acf
+ (* Variable: title_acf *)
+ let title_acf = IniFile.title IniFile.record_re
+ (* Variable: record_acf *)
+ let record_acf = IniFile.record_noempty title_acf entry_acf
+ (* Variable: lns_acf *)
+ let lns_acf = IniFile.lns_noempty record_acf comment_acf
+ (* Variable: conf_acf *)
+ let conf_acf = "# comment with sharp
+[section1]
+test_acf = value
+test_acf =
+test_acf : value2 # end of line comment
+; comment with colon
+"
+ (* Test: lns_acf
+ Testing the a/c/f combination *)
+ test lns_acf get conf_acf =
+ { "#comment" = "comment with sharp" }
+ { "section1"
+ { "test_acf" = "value" }
+ { "test_acf" }
+ { "test_acf" = "value2"
+ { "#comment" = "end of line comment" } }
+ { "#comment" = "comment with colon" } }
+
+
+ (* Group: TEST a/d/e *)
+ (* Variable: comment_ade *)
+ let comment_ade = IniFile.comment ";" ";"
+ (* Variable: sep_ade *)
+ let sep_ade = IniFile.sep IniFile.sep_re IniFile.sep_default
+ (* Variable: entry_ade *)
+ let entry_ade = IniFile.entry IniFile.entry_re sep_ade comment_ade
+ (* Variable: title_ade *)
+ let title_ade = IniFile.title IniFile.record_re
+ (* Variable: record_ade *)
+ let record_ade = IniFile.record title_ade entry_ade
+ (* Variable: lns_ade *)
+ let lns_ade = IniFile.lns record_ade comment_ade
+ (* Variable: conf_ade *)
+ let conf_ade = "; a first comment with colon
+[section1]
+test_ade = value
+test_ade : value2 ; end of line comment
+; comment with colon
+
+test_ade =
+"
+ (* Test: lns_ade
+ Testing the a/d/e combination *)
+ test lns_ade get conf_ade =
+ { "#comment" = "a first comment with colon" }
+ { "section1"
+ { "test_ade" = "value" }
+ { "test_ade" = "value2"
+ { "#comment" = "end of line comment" } }
+ { "#comment" = "comment with colon" }
+ {}
+ { "test_ade" } }
+
+
+ (* Group: TEST a/d/f *)
+ (* Variable: comment_adf *)
+ let comment_adf = IniFile.comment ";" ";"
+ (* Variable: sep_adf *)
+ let sep_adf = IniFile.sep IniFile.sep_re IniFile.sep_default
+ (* Variable: entry_adf *)
+ let entry_adf = IniFile.entry IniFile.entry_re sep_adf comment_adf
+ (* Variable: title_adf *)
+ let title_adf = IniFile.title IniFile.record_re
+ (* Variable: record_adf *)
+ let record_adf = IniFile.record_noempty title_adf entry_adf
+ (* Variable: lns_adf *)
+ let lns_adf = IniFile.lns_noempty record_adf comment_adf
+ (* Variable: conf_adf *)
+ let conf_adf = "; a first comment with colon
+[section1]
+test_adf = value
+test_adf : value2 ; end of line comment
+; comment with colon
+test_adf =
+"
+ (* Test: lns_adf
+ Testing the a/d/f combination *)
+ test lns_adf get conf_adf =
+ { "#comment" = "a first comment with colon" }
+ { "section1"
+ { "test_adf" = "value" }
+ { "test_adf" = "value2"
+ { "#comment" = "end of line comment" } }
+ { "#comment" = "comment with colon" }
+ { "test_adf" } }
+
+
+ (* Group: TEST b/c/e *)
+ (* Variable: comment_bce *)
+ let comment_bce = IniFile.comment IniFile.comment_re IniFile.comment_default
+ (* Variable: sep_bce *)
+ let sep_bce = IniFile.sep "=" "="
+ (* Variable: entry_bce *)
+ let entry_bce = IniFile.entry IniFile.entry_re sep_bce comment_bce
+ (* Variable: title_bce *)
+ let title_bce = IniFile.title IniFile.record_re
+ (* Variable: record_bce *)
+ let record_bce = IniFile.record title_bce entry_bce
+ (* Variable: lns_bce *)
+ let lns_bce = IniFile.lns record_bce comment_bce
+ (* Variable: conf_bce *)
+ let conf_bce = "# comment with sharp
+
+[section1]
+test_bce = value # end of line comment
+; comment with colon
+
+test_bce =
+"
+ (* Test: lns_bce
+ Testing the b/c/e combination *)
+ test lns_bce get conf_bce =
+ { "#comment" = "comment with sharp" }
+ {}
+ { "section1"
+ { "test_bce" = "value"
+ { "#comment" = "end of line comment" } }
+ { "#comment" = "comment with colon" }
+ {}
+ { "test_bce" } }
+
+
+ (* Group: TEST b/c/f *)
+ (* Variable: comment_bcf *)
+ let comment_bcf = IniFile.comment IniFile.comment_re IniFile.comment_default
+ (* Variable: sep_bcf *)
+ let sep_bcf = IniFile.sep "=" "="
+ (* Variable: entry_bcf *)
+ let entry_bcf = IniFile.entry IniFile.entry_re sep_bcf comment_bcf
+ (* Variable: title_bcf *)
+ let title_bcf = IniFile.title IniFile.record_re
+ (* Variable: record_bcf *)
+ let record_bcf = IniFile.record_noempty title_bcf entry_bcf
+ (* Variable: lns_bcf *)
+ let lns_bcf = IniFile.lns_noempty record_bcf comment_bcf
+ (* Variable: conf_bcf *)
+ let conf_bcf = "# conf with sharp
+[section1]
+test_bcf = value # end of line comment
+; comment with colon
+test_bcf =
+"
+ (* Test: lns_bcf
+ Testing the b/c/f combination *)
+ test lns_bcf get conf_bcf =
+ { "#comment" = "conf with sharp" }
+ { "section1"
+ { "test_bcf" = "value"
+ { "#comment" = "end of line comment" } }
+ { "#comment" = "comment with colon" }
+ { "test_bcf" } }
+
+
+ (* Group: TEST b/d/e *)
+ (* Variable: comment_bde *)
+ let comment_bde = IniFile.comment ";" ";"
+ (* Variable: sep_bde *)
+ let sep_bde = IniFile.sep "=" "="
+ (* Variable: entry_bde *)
+ let entry_bde = IniFile.entry IniFile.entry_re sep_bde comment_bde
+ (* Variable: title_bde *)
+ let title_bde = IniFile.title IniFile.record_re
+ (* Variable: record_bde *)
+ let record_bde = IniFile.record title_bde entry_bde
+ (* Variable: lns_bde *)
+ let lns_bde = IniFile.lns record_bde comment_bde
+ (* Variable: conf_bde *)
+ let conf_bde = "; first comment with colon
+
+[section1]
+test_bde = value ; end of line comment
+; comment with colon
+
+test_bde =
+"
+ (* Test: lns_bde
+ Testing the b/d/e combination *)
+ test lns_bde get conf_bde =
+ { "#comment" = "first comment with colon" }
+ {}
+ { "section1"
+ { "test_bde" = "value"
+ { "#comment" = "end of line comment" } }
+ { "#comment" = "comment with colon" }
+ {}
+ { "test_bde" } }
+
+
+ (* Group: TEST b/d/f *)
+ (* Variable: comment_bdf *)
+ let comment_bdf = IniFile.comment ";" ";"
+ (* Variable: sep_bdf *)
+ let sep_bdf = IniFile.sep "=" "="
+ (* Variable: entry_bdf *)
+ let entry_bdf = IniFile.entry IniFile.entry_re sep_bdf comment_bdf
+ (* Variable: title_bdf *)
+ let title_bdf = IniFile.title IniFile.record_re
+ (* Variable: record_bdf *)
+ let record_bdf = IniFile.record_noempty title_bdf entry_bdf
+ (* Variable: lns_bdf *)
+ let lns_bdf = IniFile.lns_noempty record_bdf comment_bdf
+ (* Variable: conf_bdf *)
+ let conf_bdf = "; first comment with colon
+[section1]
+test_bdf = value ; end of line comment
+; comment with colon
+test_bdf =
+"
+ (* Test: lns_bdf
+ Testing the b/d/f combination *)
+ test lns_bdf get conf_bdf =
+ { "#comment" = "first comment with colon" }
+ { "section1"
+ { "test_bdf" = "value"
+ { "#comment" = "end of line comment" } }
+ { "#comment" = "comment with colon" }
+ { "test_bdf" } }
+
+
+ (* Group: TEST multiline values *)
+ (* Variable: multiline_test *)
+ let multiline_test = "test_ace = val1\n val2\n val3\n"
+ (* Variable: multiline_nl *)
+ let multiline_nl = "test_ace =\n val2\n val3\n"
+ (* Variable: multiline_ace *)
+ let multiline_ace = IniFile.entry_multiline IniFile.entry_re sep_ace comment_ace
+ (* Test: multiline_ace
+ Testing the a/c/e combination with a multiline entry *)
+ test multiline_ace get multiline_test =
+ { "test_ace" = "val1\n val2\n val3" }
+ (* Test: multiline_nl
+ Multiline values can begin with a single newline *)
+ test multiline_ace get multiline_nl =
+ { "test_ace" = "\n val2\n val3" }
+
+ (* Test: lns_ace
+ Ticket #243 *)
+ test lns_ace get "[section1]
+ticket_243 = \"value1;value2#value3\" # end of line comment
+" =
+ { "section1"
+ { "ticket_243" = "value1;value2#value3"
+ { "#comment" = "end of line comment" }
+ }
+ }
+
+ (* Group: TEST list entries *)
+ (* Variable: list_test *)
+ let list_test = "test_ace = val1,val2,val3 # a comment\n"
+ (* Lens: list_ace *)
+ let list_ace = IniFile.entry_list IniFile.entry_re sep_ace RX.word Sep.comma comment_ace
+ (* Test: list_ace
+ Testing the a/c/e combination with a list entry *)
+ test list_ace get list_test =
+ { "test_ace"
+ { "1" = "val1" }
+ { "2" = "val2" }
+ { "3" = "val3" }
+ { "#comment" = "a comment" }
+ }
+
+ (* Variable: list_nocomment_test *)
+ let list_nocomment_test = "test_ace = val1,val2,val3 \n"
+ (* Lens: list_nocomment_ace *)
+ let list_nocomment_ace = IniFile.entry_list_nocomment IniFile.entry_re sep_ace RX.word Sep.comma
+ (* Test: list_nocomment_ace
+ Testing the a/c/e combination with a list entry without end-of-line comment *)
+ test list_nocomment_ace get list_nocomment_test =
+ { "test_ace"
+ { "1" = "val1" }
+ { "2" = "val2" }
+ { "3" = "val3" }
+ }
+
+ (* Test: IniFile.lns_loose *)
+ test IniFile.lns_loose get conf_ace =
+ { "section" = ".anon"
+ { "#comment" = "comment with sharp" }
+ { }
+ }
+ { "section" = "section1"
+ { "test_ace" = "value"
+ { "#comment" = "end of line comment" }
+ }
+ { "test_ace" }
+ { "test_ace" = "value with spaces" }
+ { "#comment" = "comment with colon" }
+ { }
+ }
+
+ (* Test: IniFile.lns_loose_multiline *)
+ test IniFile.lns_loose_multiline get conf_ace =
+ { "section" = ".anon"
+ { "#comment" = "comment with sharp" }
+ { }
+ }
+ { "section" = "section1"
+ { "test_ace" = "value"
+ { "#comment" = "end of line comment" }
+ }
+ { "test_ace" }
+ { "test_ace" = "value with spaces" }
+ { "#comment" = "comment with colon" }
+ { }
+ }
+
+ test IniFile.lns_loose_multiline get multiline_test =
+ { "section" = ".anon" { "test_ace" = "val1\n val2\n val3" } }
+
--- /dev/null
+module Test_inittab =
+
+ let simple = "id:5:initdefault:\n"
+
+ let inittab = "id:5:initdefault:
+# System initialization.
+si::sysinit:/etc/rc.d/rc.sysinit
+
+# Trap CTRL-ALT-DELETE
+ca::ctrlaltdel:/sbin/shutdown -t3 -r now
+
+l0:0:wait:/etc/rc.d/rc 0
+"
+
+ test Inittab.lns get inittab =
+ { "id"
+ { "runlevels" = "5" }
+ { "action" = "initdefault" }
+ { "process" = "" } }
+ { "#comment" = "System initialization." }
+ { "si"
+ { "runlevels" = "" }
+ { "action" = "sysinit" }
+ { "process" = "/etc/rc.d/rc.sysinit" } }
+ { }
+ { "#comment" = "Trap CTRL-ALT-DELETE" }
+ { "ca"
+ { "runlevels" = "" }
+ { "action" = "ctrlaltdel" }
+ { "process" = "/sbin/shutdown -t3 -r now" } }
+ { }
+ { "l0"
+ { "runlevels" = "0" }
+ { "action" = "wait" }
+ { "process" = "/etc/rc.d/rc 0" } }
+
+ test Inittab.lns put simple after rm "/id/process" = *
+
+ test Inittab.lns put simple after set "/id/runlevels" "3" =
+ "id:3:initdefault:\n"
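+
+ (* Sketch, not part of the original suite: the action field can be
+    changed the same way as the runlevels field above *)
+ test Inittab.lns put simple after set "/id/action" "wait" =
+   "id:5:wait:\n"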
+
+ test Inittab.lns get
+ "co:2345:respawn:/usr/bin/command # End of line comment\n" =
+ { "co"
+ { "runlevels" = "2345" }
+ { "action" = "respawn" }
+ { "process" = "/usr/bin/command " }
+ { "#comment" = "End of line comment" } }
+
+ test Inittab.lns get
+ "co:2345:respawn:/usr/bin/blank_comment # \t \n" =
+ { "co"
+ { "runlevels" = "2345" }
+ { "action" = "respawn" }
+ { "process" = "/usr/bin/blank_comment " }
+ { "#comment" = "" } }
+
+ (* Bug 108: allow ':' in the process field *)
+ test Inittab.record get
+ "co:234:respawn:ttymon -p \"console login: \"\n" =
+ { "co"
+ { "runlevels" = "234" }
+ { "action" = "respawn" }
+ { "process" = "ttymon -p \"console login: \"" } }
+
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+(*
+Module: Test_Inputrc
+ Provides unit tests and examples for the <Inputrc> lens.
+*)
+
+module Test_Inputrc =
+
+(* Variable: conf *)
+let conf = "# /etc/inputrc - global inputrc for libreadline
+# See readline(3readline) and `info rluserman' for more information.
+
+# Be 8 bit clean.
+set input-meta on
+set output-meta on
+
+# To allow the use of 8bit-characters like the german umlauts, uncomment
+# the line below. However this makes the meta key not work as a meta key,
+# which is annoying to those which don't need to type in 8-bit characters.
+
+# set convert-meta off
+
+# try to enable the application keypad when it is called. Some systems
+# need this to enable the arrow keys.
+# set enable-keypad on
+
+# see /usr/share/doc/bash/inputrc.arrows for other codes of arrow keys
+
+# do not bell on tab-completion
+# set bell-style none
+# set bell-style visible
+
+# some defaults / modifications for the emacs mode
+$if mode=emacs
+
+# allow the use of the Home/End keys
+\"\\e[1~\": beginning-of-line
+\"\\e[4~\": end-of-line
+
+# allow the use of the Delete/Insert keys
+\"\\e[3~\": delete-char
+\"\\e[2~\": quoted-insert
+
+# mappings for \"page up\" and \"page down\" to step to the beginning/end
+# of the history
+# \"\\e[5~\": beginning-of-history
+# \"\\e[6~\": end-of-history
+
+# alternate mappings for \"page up\" and \"page down\" to search the history
+# \"\\e[5~\": history-search-backward
+# \"\\e[6~\": history-search-forward
+
+# mappings for Ctrl-left-arrow and Ctrl-right-arrow for word moving
+\"\\e[1;5C\": forward-word
+\"\\e[1;5D\": backward-word
+\"\\e[5C\": forward-word
+\"\\e[5D\": backward-word
+\"\\e\\e[C\": forward-word
+\"\\e\\e[D\": backward-word
+
+$if term=rxvt
+\"\\e[8~\": end-of-line
+\"\\eOc\": forward-word
+\"\\eOd\": backward-word
+$else
+\"\\e[G\": \",\"
+$endif
+
+# for non RH/Debian xterm, can't hurt for RH/Debian xterm
+# \"\\eOH\": beginning-of-line
+# \"\\eOF\": end-of-line
+
+# for freebsd console
+# \"\\e[H\": beginning-of-line
+# \"\\e[F\": end-of-line
+
+$endif
+"
+
+(* Test: Inputrc.lns *)
+test Inputrc.lns get conf =
+ { "#comment" = "/etc/inputrc - global inputrc for libreadline" }
+ { "#comment" = "See readline(3readline) and `info rluserman' for more information." }
+ { }
+ { "#comment" = "Be 8 bit clean." }
+ { "input-meta" = "on" }
+ { "output-meta" = "on" }
+ { }
+ { "#comment" = "To allow the use of 8bit-characters like the german umlauts, uncomment" }
+ { "#comment" = "the line below. However this makes the meta key not work as a meta key," }
+ { "#comment" = "which is annoying to those which don't need to type in 8-bit characters." }
+ { }
+ { "#comment" = "set convert-meta off" }
+ { }
+ { "#comment" = "try to enable the application keypad when it is called. Some systems" }
+ { "#comment" = "need this to enable the arrow keys." }
+ { "#comment" = "set enable-keypad on" }
+ { }
+ { "#comment" = "see /usr/share/doc/bash/inputrc.arrows for other codes of arrow keys" }
+ { }
+ { "#comment" = "do not bell on tab-completion" }
+ { "#comment" = "set bell-style none" }
+ { "#comment" = "set bell-style visible" }
+ { }
+ { "#comment" = "some defaults / modifications for the emacs mode" }
+ { "@if" = "mode=emacs"
+ { }
+ { "#comment" = "allow the use of the Home/End keys" }
+ { "entry" = "\\e[1~"
+ { "mapping" = "beginning-of-line" }
+ }
+ { "entry" = "\\e[4~"
+ { "mapping" = "end-of-line" }
+ }
+ { }
+ { "#comment" = "allow the use of the Delete/Insert keys" }
+ { "entry" = "\\e[3~"
+ { "mapping" = "delete-char" }
+ }
+ { "entry" = "\\e[2~"
+ { "mapping" = "quoted-insert" }
+ }
+ { }
+ { "#comment" = "mappings for \"page up\" and \"page down\" to step to the beginning/end" }
+ { "#comment" = "of the history" }
+ { "#comment" = "\"\\e[5~\": beginning-of-history" }
+ { "#comment" = "\"\\e[6~\": end-of-history" }
+ { }
+ { "#comment" = "alternate mappings for \"page up\" and \"page down\" to search the history" }
+ { "#comment" = "\"\\e[5~\": history-search-backward" }
+ { "#comment" = "\"\\e[6~\": history-search-forward" }
+ { }
+ { "#comment" = "mappings for Ctrl-left-arrow and Ctrl-right-arrow for word moving" }
+ { "entry" = "\\e[1;5C"
+ { "mapping" = "forward-word" }
+ }
+ { "entry" = "\\e[1;5D"
+ { "mapping" = "backward-word" }
+ }
+ { "entry" = "\\e[5C"
+ { "mapping" = "forward-word" }
+ }
+ { "entry" = "\\e[5D"
+ { "mapping" = "backward-word" }
+ }
+ { "entry" = "\\e\\e[C"
+ { "mapping" = "forward-word" }
+ }
+ { "entry" = "\\e\\e[D"
+ { "mapping" = "backward-word" }
+ }
+ { }
+ { "@if" = "term=rxvt"
+ { "entry" = "\\e[8~"
+ { "mapping" = "end-of-line" }
+ }
+ { "entry" = "\\eOc"
+ { "mapping" = "forward-word" }
+ }
+ { "entry" = "\\eOd"
+ { "mapping" = "backward-word" }
+ }
+ { "@else"
+ { "entry" = "\\e[G"
+ { "mapping" = "\",\"" }
+ }
+ }
+ }
+ { }
+ { "#comment" = "for non RH/Debian xterm, can't hurt for RH/Debian xterm" }
+ { "#comment" = "\"\\eOH\": beginning-of-line" }
+ { "#comment" = "\"\\eOF\": end-of-line" }
+ { }
+ { "#comment" = "for freebsd console" }
+ { "#comment" = "\"\\e[H\": beginning-of-line" }
+ { "#comment" = "\"\\e[F\": end-of-line" }
+ { }
+ }
+
--- /dev/null
+module Test_interfaces =
+
+ let conf ="# This file describes the network interfaces available on your system
+# and how to activate them. For more information, see interfaces(5).
+# The loopback network interface
+
+source /etc/network/interfaces.d/*.conf
+
+auto lo eth0 #foo
+allow-hotplug eth1
+
+iface lo inet \
+ loopback
+
+mapping eth0
+ script /usr/local/sbin/map-scheme
+map HOME eth0-home
+ map \
+ WORK eth0-work
+
+iface eth0-home inet static
+
+address 192.168.1.1
+ netmask 255.255.255.0
+ bridge_maxwait 0
+# up flush-mail
+ down Mambo #5
+
+ iface eth0-work inet dhcp
+
+allow-auto eth1
+iface eth1 inet dhcp
+
+iface tap0 inet static
+ vde2-switch -
+
+mapping eth1
+ # I like mapping ...
+ # ... and I like comments
+
+ script\
+ /usr/local/sbin/map-scheme
+
+iface bond0 inet dhcp
+ bridge-ports eth2 eth3 eth4
+
+iface br0 inet static
+ bond-slaves eth5 eth6
+ address 10.0.0.1
+ netmask 255.0.0.0
+
+source /etc/network.d/*.net.conf
+"
+
+ test Interfaces.lns get conf =
+ { "#comment" = "This file describes the network interfaces available on your system"}
+ { "#comment" = "and how to activate them. For more information, see interfaces(5)." }
+ { "#comment" = "The loopback network interface" }
+ {}
+ {"source" = "/etc/network/interfaces.d/*.conf"}
+ {}
+ { "auto"
+ { "1" = "lo" }
+ { "2" = "eth0" }
+ { "3" = "#foo" } }
+ { "allow-hotplug" { "1" = "eth1" } }
+ { }
+ { "iface" = "lo"
+ { "family" = "inet"}
+ { "method" = "loopback"} {} }
+ { "mapping" = "eth0"
+ { "script" = "/usr/local/sbin/map-scheme"}
+ { "map" = "HOME eth0-home"}
+ { "map" = "WORK eth0-work"}
+ {} }
+ { "iface" = "eth0-home"
+ { "family" = "inet"}
+ { "method" = "static"}
+ {}
+ { "address" = "192.168.1.1" }
+ { "netmask" = "255.255.255.0" }
+ { "bridge_maxwait" = "0" }
+ { "#comment" = "up flush-mail" }
+ { "down" = "Mambo #5" }
+ {} }
+ { "iface" = "eth0-work"
+ { "family" = "inet"}
+ { "method" = "dhcp"}
+ {} }
+ { "auto"
+ { "1" = "eth1" } }
+ { "iface" = "eth1"
+ { "family" = "inet"}
+ { "method" = "dhcp"}
+ {} }
+ { "iface" = "tap0"
+ { "family" = "inet" }
+ { "method" = "static" }
+ { "vde2-switch" = "-" }
+ {} }
+ { "mapping" = "eth1"
+ { "#comment" = "I like mapping ..." }
+ { "#comment" = "... and I like comments" }
+ {}
+ { "script" = "/usr/local/sbin/map-scheme"}
+ {} }
+ { "iface" = "bond0"
+ { "family" = "inet" }
+ { "method" = "dhcp" }
+ { "bridge-ports"
+ { "1" = "eth2" }
+ { "2" = "eth3" }
+ { "3" = "eth4" }
+ }
+ {} }
+ { "iface" = "br0"
+ { "family" = "inet" }
+ { "method" = "static" }
+ { "bond-slaves"
+ { "1" = "eth5" }
+ { "2" = "eth6" }
+ }
+ { "address" = "10.0.0.1" }
+ { "netmask" = "255.0.0.0" }
+ {} }
+ {"source" = "/etc/network.d/*.net.conf"}
+
+test Interfaces.lns put "" after
+ set "/iface[1]" "eth0";
+ set "/iface[1]/family" "inet";
+ set "/iface[1]/method" "dhcp"
+= "iface eth0 inet dhcp\n"
+
+test Interfaces.lns put "" after
+ set "/source[0]" "/etc/network/conf.d/*.conf"
+= "source /etc/network/conf.d/*.conf\n"
+
+(* Test: Interfaces.lns
+ source-directory (Issue #306) *)
+test Interfaces.lns get "source-directory interfaces.d\n" =
+ { "source-directory" = "interfaces.d" }
--- /dev/null
+module Test_IPRoute2 =
+
+let conf = "
+# /etc/iproute2/rt_tables
+#
+# reserved values
+#
+255 local
+254 main
+253 default
+0 unspec
+#
+# local
+#
+#1 inr.ruhep
+200 h3g0
+201 adsl1
+202 adsl2
+203 adsl3
+204 adsl4
+205 wifi0
+#
+# From rt_dsfield
+#
+0x00 default
+0x80 flash-override
+
+# From rt_protos
+#
+254 gated/aggr
+253 gated/bgp
+"
+
+test IPRoute2.lns get conf =
+ { }
+ { "#comment" = "/etc/iproute2/rt_tables" }
+ { }
+ { "#comment" = "reserved values" }
+ { }
+ { "255" = "local" }
+ { "254" = "main" }
+ { "253" = "default" }
+ { "0" = "unspec" }
+ { }
+ { "#comment" = "local" }
+ { }
+ { "#comment" = "1 inr.ruhep" }
+ { "200" = "h3g0" }
+ { "201" = "adsl1" }
+ { "202" = "adsl2" }
+ { "203" = "adsl3" }
+ { "204" = "adsl4" }
+ { "205" = "wifi0" }
+ { }
+ { "#comment" = "From rt_dsfield" }
+ { }
+ { "0x00" = "default" }
+ { "0x80" = "flash-override" }
+ { }
+ { "#comment" = "From rt_protos" }
+ { }
+ { "254" = "gated/aggr" }
+ { "253" = "gated/bgp" }
--- /dev/null
+module Test_iptables =
+
+let add_rule = Iptables.table_rule
+let ipt_match = Iptables.ipt_match
+
+test add_rule get
+"-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT\n" =
+ { "append" = "INPUT"
+ { "match" = "state" }
+ { "state" = "ESTABLISHED,RELATED" }
+ { "jump" = "ACCEPT" } }
+
+test add_rule get
+"-A INPUT -p icmp -j \tACCEPT \n" =
+ { "append" = "INPUT"
+ { "protocol" = "icmp" }
+ { "jump" = "ACCEPT" } }
+
+test add_rule get
+"-A INPUT -i lo -j ACCEPT\n" =
+ { "append" = "INPUT"
+ { "in-interface" = "lo" }
+ { "jump" = "ACCEPT" } }
+
+test ipt_match get " -m tcp -p tcp --dport 53" =
+ { "match" = "tcp" } { "protocol" = "tcp" } { "dport" = "53" }
+
+let arule = " -m state --state NEW -m tcp -p tcp --dport 53 -j ACCEPT"
+
+test add_rule get ("--append INPUT" . arule . "\n") =
+ { "append" = "INPUT"
+ { "match" = "state" }
+ { "state" = "NEW" }
+ { "match" = "tcp" }
+ { "protocol" = "tcp" }
+ { "dport" = "53" }
+ { "jump" = "ACCEPT" } }
+
+test ipt_match get arule =
+ { "match" = "state" } { "state" = "NEW" } { "match" = "tcp" }
+ { "protocol" = "tcp" } { "dport" = "53" } { "jump" = "ACCEPT" }
+
+test ipt_match get ("-A INPUT" . arule) = *
+
+test ipt_match get " -p esp -j ACCEPT" =
+ { "protocol" = "esp" } { "jump" = "ACCEPT" }
+
+test ipt_match get
+ " -m state --state NEW -m udp -p udp --dport 5353 -d 224.0.0.251 -j ACCEPT"
+ =
+ { "match" = "state" } { "state" = "NEW" } { "match" = "udp" }
+ { "protocol" = "udp" } { "dport" = "5353" }
+ { "destination" = "224.0.0.251" } { "jump" = "ACCEPT" }
+
+test add_rule get
+ "-I FORWARD -m physdev --physdev-is-bridged -j ACCEPT\n" =
+ { "insert" = "FORWARD"
+ { "match" = "physdev" } { "physdev-is-bridged" } { "jump" = "ACCEPT" } }
+
+test add_rule get
+ "-A INPUT -j REJECT --reject-with icmp-host-prohibited\n" =
+ { "append" = "INPUT"
+ { "jump" = "REJECT" } { "reject-with" = "icmp-host-prohibited" } }
+
+test add_rule get
+ "-A RH-Firewall-1-INPUT -p icmp --icmp-type any -j ACCEPT\n" =
+ { "append" = "RH-Firewall-1-INPUT"
+ { "protocol" = "icmp" }
+ { "icmp-type" = "any" }
+ { "jump" = "ACCEPT" } }
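+
+(* Sketch, not part of the original suite: by analogy with -d mapping to
+   a "destination" node above, -s is assumed to map to a "source" node *)
+test add_rule get "-A INPUT -s 10.0.0.0/8 -j DROP\n" =
+  { "append" = "INPUT"
+    { "source" = "10.0.0.0/8" }
+    { "jump" = "DROP" } }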
+
+test Iptables.table get "*filter
+:RH-Firewall-1-INPUT - [0:0]
+-A FORWARD -j RH-Firewall-1-INPUT
+-A RH-Firewall-1-INPUT -i lo -j ACCEPT
+COMMIT\n" =
+ { "table" = "filter"
+ { "chain" = "RH-Firewall-1-INPUT"
+ { "policy" = "-" } }
+ { "append" = "FORWARD"
+ { "jump" = "RH-Firewall-1-INPUT" } }
+ { "append" = "RH-Firewall-1-INPUT"
+ { "in-interface" = "lo" }
+ { "jump" = "ACCEPT" } } }
+
+test Iptables.table get "*filter
+
+:RH-Firewall-1-INPUT - [0:0]
+
+-A FORWARD -j RH-Firewall-1-INPUT
+
+COMMIT\n" =
+ { "table" = "filter"
+ { }
+ { "chain" = "RH-Firewall-1-INPUT"
+ { "policy" = "-" } }
+ { }
+ { "append" = "FORWARD"
+ { "jump" = "RH-Firewall-1-INPUT" } }
+ { } }
+
+let conf = "# Generated by iptables-save v1.2.6a on Wed Apr 24 10:19:55 2002
+*filter
+:INPUT DROP [1:229]
+:FORWARD DROP [0:0]
+:OUTPUT DROP [0:0]
+-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
+
+-I FORWARD -i eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
+
+# comments and blank lines are allowed between rules
+
+-A FORWARD -i eth1 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
+--append OUTPUT -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
+COMMIT
+# Completed on Wed Apr 24 10:19:55 2002
+# Generated by iptables-save v1.2.6a on Wed Apr 24 10:19:55 2002
+*mangle
+:PREROUTING ACCEPT [658:32445]
+
+:INPUT ACCEPT [658:32445]
+:FORWARD ACCEPT [0:0]
+:OUTPUT ACCEPT [891:68234]
+:POSTROUTING ACCEPT [891:68234]
+COMMIT
+# Completed on Wed Apr 24 10:19:55 2002
+# Generated by iptables-save v1.2.6a on Wed Apr 24 10:19:55 2002
+*nat
+:PREROUTING ACCEPT [1:229]
+:POSTROUTING ACCEPT [3:450]
+# The output chain
+:OUTPUT ACCEPT [3:450]
+# insert something
+--insert POSTROUTING -o eth0 -j SNAT --to-source 195.233.192.1 \t
+# and now commit
+COMMIT
+# Completed on Wed Apr 24 10:19:55 2002\n"
+
+test Iptables.lns get conf =
+ { "#comment" =
+ "Generated by iptables-save v1.2.6a on Wed Apr 24 10:19:55 2002" }
+ { "table" = "filter"
+ { "chain" = "INPUT" { "policy" = "DROP" } }
+ { "chain" = "FORWARD" { "policy" = "DROP" } }
+ { "chain" = "OUTPUT" { "policy" = "DROP" } }
+ { "append" = "INPUT"
+ { "match" = "state" }
+ { "state" = "RELATED,ESTABLISHED" }
+ { "jump" = "ACCEPT" } }
+ {}
+ { "insert" = "FORWARD"
+ { "in-interface" = "eth0" }
+ { "match" = "state" }
+ { "state" = "RELATED,ESTABLISHED" }
+ { "jump" = "ACCEPT" } }
+ {}
+ { "#comment" = "comments and blank lines are allowed between rules" }
+ {}
+ { "append" = "FORWARD"
+ { "in-interface" = "eth1" }
+ { "match" = "state" }
+ { "state" = "NEW,RELATED,ESTABLISHED" }
+ { "jump" = "ACCEPT" } }
+ { "append" = "OUTPUT"
+ { "match" = "state" }
+ { "state" = "NEW,RELATED,ESTABLISHED" }
+ { "jump" = "ACCEPT" } } }
+ { "#comment" = "Completed on Wed Apr 24 10:19:55 2002" }
+ { "#comment" =
+ "Generated by iptables-save v1.2.6a on Wed Apr 24 10:19:55 2002" }
+ { "table" = "mangle"
+ { "chain" = "PREROUTING" { "policy" = "ACCEPT" } }
+ {}
+ { "chain" = "INPUT" { "policy" = "ACCEPT" } }
+ { "chain" = "FORWARD" { "policy" = "ACCEPT" } }
+ { "chain" = "OUTPUT" { "policy" = "ACCEPT" } }
+ { "chain" = "POSTROUTING" { "policy" = "ACCEPT" } } }
+ { "#comment" = "Completed on Wed Apr 24 10:19:55 2002" }
+ { "#comment" =
+ "Generated by iptables-save v1.2.6a on Wed Apr 24 10:19:55 2002" }
+ { "table" = "nat"
+ { "chain" = "PREROUTING" { "policy" = "ACCEPT" } }
+ { "chain" = "POSTROUTING" { "policy" = "ACCEPT" } }
+ { "#comment" = "The output chain" }
+ { "chain" = "OUTPUT" { "policy" = "ACCEPT" } }
+ { "#comment" = "insert something" }
+ { "insert" = "POSTROUTING"
+ { "out-interface" = "eth0" }
+ { "jump" = "SNAT" }
+ { "to-source" = "195.233.192.1" } }
+ { "#comment" = "and now commit" } }
+ { "#comment" = "Completed on Wed Apr 24 10:19:55 2002" }
+
+test ipt_match get " -m comment --comment \"A comment\"" =
+ { "match" = "comment" }
+ { "comment" = "\"A comment\"" }
+
+(*
+ * Test the various schemes for negation that iptables supports
+ *
+ * Note that the two ways in which a parameter can be negated lead to
+ * two different trees that mean the same thing.
+ *)
+test add_rule get "-I POSTROUTING ! -d 192.168.122.0/24 -j MASQUERADE\n" =
+ { "insert" = "POSTROUTING"
+ { "destination" = "192.168.122.0/24"
+ { "not" } }
+ { "jump" = "MASQUERADE" } }
+
+test add_rule get "-I POSTROUTING -d ! 192.168.122.0/24 -j MASQUERADE\n" =
+ { "insert" = "POSTROUTING"
+ { "destination" = "! 192.168.122.0/24" }
+ { "jump" = "MASQUERADE" } }
+
+test add_rule put "-I POSTROUTING ! -d 192.168.122.0/24 -j MASQUERADE\n"
+ after rm "/insert/destination/not" =
+ "-I POSTROUTING -d 192.168.122.0/24 -j MASQUERADE\n"
+
+(* I have no idea if iptables will accept double negations, but we
+ * allow it syntactically *)
+test add_rule put "-I POSTROUTING -d ! 192.168.122.0/24 -j MASQUERADE\n"
+ after clear "/insert/destination/not" =
+ "-I POSTROUTING ! -d ! 192.168.122.0/24 -j MASQUERADE\n"
+
+test Iptables.chain get ":tcp_packets - [0:0]
+" =
+ { "chain" = "tcp_packets" { "policy" = "-" } }
+
+(* Bug #157 *)
+test ipt_match get " --tcp-flags SYN,RST,ACK,FIN SYN" =
+ { "tcp-flags"
+ { "mask" = "SYN" }
+ { "mask" = "RST" }
+ { "mask" = "ACK" }
+ { "mask" = "FIN" }
+ { "set" = "SYN" } }
+
+(* Bug #224 *)
+test ipt_match get " --icmpv6-type neighbor-solicitation" =
+ { "icmpv6-type" = "neighbor-solicitation" }
--- /dev/null
+(*
+Module: Test_Iscsid
+ Provides unit tests and examples for the <Iscsid> lens.
+*)
+
+module Test_Iscsid =
+
+(* Variable: conf
+ A full configuration *)
+let conf = "################
+# iSNS settings
+################
+# Address of iSNS server
+isns.address = 127.0.0.1
+isns.port = 3260
+
+# *************
+# CHAP Settings
+# *************
+
+# To enable CHAP authentication set node.session.auth.authmethod
+# to CHAP. The default is None.
+node.session.auth.authmethod = CHAP
+
+# To set a CHAP username and password for initiator
+# authentication by the target(s), uncomment the following lines:
+node.session.auth.username = someuser1
+node.session.auth.password = somep$31#$^&7!
+
+# To enable CHAP authentication for a discovery session to the target
+# set discovery.sendtargets.auth.authmethod to CHAP. The default is None.
+discovery.sendtargets.auth.authmethod = CHAP
+
+# To set a discovery session CHAP username and password for the initiator
+# authentication by the target(s), uncomment the following lines:
+discovery.sendtargets.auth.username = someuser3
+discovery.sendtargets.auth.password = _09+7)(,./?;'p[]
+"
+
+(* Test: Iscsid.lns
+ Test the full <conf> *)
+test Iscsid.lns get conf = { "#comment" = "###############" }
+ { "#comment" = "iSNS settings" }
+ { "#comment" = "###############" }
+ { "#comment" = "Address of iSNS server" }
+ { "isns.address" = "127.0.0.1" }
+ { "isns.port" = "3260" }
+ { }
+ { "#comment" = "*************" }
+ { "#comment" = "CHAP Settings" }
+ { "#comment" = "*************" }
+ { }
+ { "#comment" = "To enable CHAP authentication set node.session.auth.authmethod" }
+ { "#comment" = "to CHAP. The default is None." }
+ { "node.session.auth.authmethod" = "CHAP" }
+ { }
+ { "#comment" = "To set a CHAP username and password for initiator" }
+ { "#comment" = "authentication by the target(s), uncomment the following lines:" }
+ { "node.session.auth.username" = "someuser1" }
+ { "node.session.auth.password" = "somep$31#$^&7!" }
+ { }
+ { "#comment" = "To enable CHAP authentication for a discovery session to the target" }
+ { "#comment" = "set discovery.sendtargets.auth.authmethod to CHAP. The default is None." }
+ { "discovery.sendtargets.auth.authmethod" = "CHAP" }
+ { }
+ { "#comment" = "To set a discovery session CHAP username and password for the initiator" }
+ { "#comment" = "authentication by the target(s), uncomment the following lines:" }
+ { "discovery.sendtargets.auth.username" = "someuser3" }
+ { "discovery.sendtargets.auth.password" = "_09+7)(,./?;'p[]" }
--- /dev/null
+(* Module Jaas *)
+(* Author: Simon Vocella <voxsim@gmail.com> *)
+module Test_jaas =
+
+let conf = "
+/*
+ This is the JAAS configuration file used by the Shibboleth IdP.
+
+ A JAAS configuration file is a grouping of LoginModules defined in the following manner:
+ <LoginModuleClass> <Flag> <ModuleOptions>;
+
+ LoginModuleClass - fully qualified class name of the LoginModule class
+ Flag - indicates the requirement level for the modules;
+ allowed values: required, requisite, sufficient, optional
+ ModuleOptions - a space delimited list of name=\"value\" options
+
+ For complete documentation on the format of this file see:
+ http://java.sun.com/j2se/1.5.0/docs/api/javax/security/auth/login/Configuration.html
+
+ For LoginModules available within the Sun JVM see:
+ http://java.sun.com/j2se/1.5.0/docs/guide/security/jaas/tutorials/LoginConfigFile.html
+
+ Warning: Do NOT use Sun's JNDI LoginModule to authenticate against an LDAP directory,
+ Use the LdapLoginModule that ships with Shibboleth and is demonstrated below.
+
+ Note, the application identifier MUST be ShibUserPassAuth
+*/
+
+
+ShibUserPassAuth {
+
+// Example LDAP authentication
+// See: https://wiki.shibboleth.net/confluence/display/SHIB2/IdPAuthUserPass
+/*
+ edu.vt.middleware.ldap.jaas.LdapLoginModule required
+ ldapUrl=\"ldap://ldap.example.org\"
+ baseDn=\"ou=people,dc=example,dc=org\"
+ ssl=\"true\"
+ userFilter=\"uid={0}\";
+*/
+
+// Example Kerberos authentication, requires Sun's JVM
+// See: https://wiki.shibboleth.net/confluence/display/SHIB2/IdPAuthUserPass
+/*
+ com.sun.security.auth.module.Krb5LoginModule required
+ useKeyTab=\"true\"
+ keyTab=\"/path/to/idp/keytab/file\";
+*/
+
+ edu.vt.middleware.ldap.jaas.LdapLoginModule required
+ host = \"ldap://127.0.0.1:389\"
+ base = \"dc=example,dc=com\"
+ serviceUser = \"cn=admin,dc=example,dc=com\"
+ serviceCredential = \"ldappassword\"
+ ssl = \"false\"
+ userField = \"uid\"
+ // Example comment within definition
+ subtreeSearch = \"true\";
+};
+
+NetAccountAuth {
+ // Test of optionless flag
+ nz.ac.auckland.jaas.Krb5LoginModule required;
+};
+
+com.sun.security.jgss.krb5.initiate {
+ // Test of omitted linebreaks and naked boolean
+ com.sun.security.auth.module.Krb5LoginModule required useTicketCache=true;
+};"
+
+test Jaas.lns get conf =
+ { }
+ { "#mcomment"
+ { "1" = "This is the JAAS configuration file used by the Shibboleth IdP." }
+ { "2" = "A JAAS configuration file is a grouping of LoginModules defined in the following manner:" }
+ { "3" = "<LoginModuleClass> <Flag> <ModuleOptions>;" }
+ { "4" = "LoginModuleClass - fully qualified class name of the LoginModule class" }
+ { "5" = "Flag - indicates the requirement level for the modules;" }
+ { "6" = "allowed values: required, requisite, sufficient, optional" }
+ { "7" = "ModuleOptions - a space delimited list of name=\"value\" options" }
+ { "8" = "For complete documentation on the format of this file see:" }
+ { "9" = "http://java.sun.com/j2se/1.5.0/docs/api/javax/security/auth/login/Configuration.html" }
+ { "10" = "For LoginModules available within the Sun JVM see:" }
+ { "11" = "http://java.sun.com/j2se/1.5.0/docs/guide/security/jaas/tutorials/LoginConfigFile.html" }
+ { "12" = "Warning: Do NOT use Sun's JNDI LoginModule to authenticate against an LDAP directory," }
+ { "13" = "Use the LdapLoginModule that ships with Shibboleth and is demonstrated below." }
+ { "14" = "Note, the application identifier MUST be ShibUserPassAuth" }
+ }
+ { }
+ { }
+ { "login" = "ShibUserPassAuth"
+ { }
+ { "#comment" = "Example LDAP authentication" }
+ { "#comment" = "See: https://wiki.shibboleth.net/confluence/display/SHIB2/IdPAuthUserPass" }
+ { "#mcomment"
+ { "1" = "edu.vt.middleware.ldap.jaas.LdapLoginModule required" }
+ { "2" = "ldapUrl=\"ldap://ldap.example.org\"" }
+ { "3" = "baseDn=\"ou=people,dc=example,dc=org\"" }
+ { "4" = "ssl=\"true\"" }
+ { "5" = "userFilter=\"uid={0}\";" }
+ }
+ { }
+ { "#comment" = "Example Kerberos authentication, requires Sun's JVM" }
+ { "#comment" = "See: https://wiki.shibboleth.net/confluence/display/SHIB2/IdPAuthUserPass" }
+ { "#mcomment"
+ { "1" = "com.sun.security.auth.module.Krb5LoginModule required" }
+ { "2" = "useKeyTab=\"true\"" }
+ { "3" = "keyTab=\"/path/to/idp/keytab/file\";" }
+ }
+ { }
+ { "loginModuleClass" = "edu.vt.middleware.ldap.jaas.LdapLoginModule"
+ { "flag" = "required"
+ { "host" = "\"ldap://127.0.0.1:389\"" }
+ { "base" = "\"dc=example,dc=com\"" }
+ { "serviceUser" = "\"cn=admin,dc=example,dc=com\"" }
+ { "serviceCredential" = "\"ldappassword\"" }
+ { "ssl" = "\"false\"" }
+ { "userField" = "\"uid\"" }
+ { "#comment" = "Example comment within definition" }
+ { "subtreeSearch" = "\"true\"" }
+ }
+ }
+ { }
+ }
+ { }
+ { }
+ { "login" = "NetAccountAuth"
+ { "#comment" = "Test of optionless flag" }
+ { "loginModuleClass" = "nz.ac.auckland.jaas.Krb5LoginModule"
+ { "flag" = "required" }
+ }
+ { }
+ }
+ { }
+ { }
+ { "login" = "com.sun.security.jgss.krb5.initiate"
+ { "#comment" = "Test of omitted linebreaks and naked boolean" }
+ { "loginModuleClass" = "com.sun.security.auth.module.Krb5LoginModule"
+ { "flag" = "required"
+ { "useTicketCache" = "true" }
+ }
+ }
+ { }
+ }
--- /dev/null
+(*
+Module: Test_JettyRealm
+ Provides unit tests and examples for the <JettyRealm> lens.
+*)
+
+module Test_JettyRealm =
+
+(* Variable: conf *)
+let conf = "### Comment
+admin: admin, admin
+"
+
+(* Variable: conf_norealm *)
+let conf_norealm = "### Comment
+admin: admin
+"
+
+(* Variable: new_conf *)
+let new_conf = "### Comment
+admin: password, admin
+"
+
+let lns = JettyRealm.lns
+
+(* Test: JettyRealm.lns
+ * Get test against tree structure
+*)
+test lns get conf =
+ { "#comment" = "## Comment" }
+ { "user"
+ { "username" = "admin" }
+ { "password" = "admin" }
+ { "realm" = "admin" }
+ }
+
+(* Test: JettyRealm.lns
+ * Get test against tree structure without a realm
+*)
+test lns get conf_norealm =
+ { "#comment" = "## Comment" }
+ { "user"
+ { "username" = "admin" }
+ { "password" = "admin" }
+ }
+
+(* Test: JettyRealm.lns
+ * Put test changing the password to "password"
+*)
+test lns put conf after set "/user/password" "password" = new_conf
+
+(* vim: set ts=4 expandtab sw=4: *)
--- /dev/null
+(*
+Module: Test_JMXAccess
+ Provides unit tests and examples for the <JMXAccess> lens.
+*)
+
+module Test_JMXAccess =
+
+(* Variable: conf *)
+let conf = "# Comment
+admin readwrite
+"
+
+(* Variable: new_conf *)
+let new_conf = "# Comment
+admin readonly
+"
+
+let lns = JMXAccess.lns
+
+(* Test: JMXAccess.lns
+ * Get test against tree structure
+*)
+test lns get conf =
+ { "#comment" = "Comment" }
+ { "user"
+ { "username" = "admin" }
+ { "access" = "readwrite" }
+ }
+
+(* Test: JMXAccess.lns
+ * Put test changing access to readonly
+*)
+test lns put conf after set "/user/access" "readonly" = new_conf
+
+(* vim: set ts=4 expandtab sw=4: *)
--- /dev/null
+(*
+Module: Test_JMXPassword
+ Provides unit tests and examples for the <JMXPassword> lens.
+*)
+
+module Test_JMXPassword =
+
+(* Variable: conf *)
+let conf = "# Comment
+admin activemq
+"
+
+(* Variable: new_conf *)
+let new_conf = "# Comment
+admin password
+"
+
+let lns = JMXPassword.lns
+
+(* Test: JMXPassword.lns
+ * Get test against tree structure
+*)
+test lns get conf =
+ { "#comment" = "Comment" }
+ { "user"
+ { "username" = "admin" }
+ { "password" = "activemq" }
+ }
+
+(* Test: JMXPassword.lns
+ * Put test changing the password to "password"
+*)
+test lns put conf after set "/user/password" "password" = new_conf
+
+(* vim: set ts=4 expandtab sw=4: *)
--- /dev/null
+module Test_json =
+
+let lns = Json.lns
+
+(* Non recursive checks *)
+
+(* Typecheck finitely deep nesting *)
+let value0 = Json.str | Json.number | Json.const /true|false|null/
+let value1 = Json.fix_value value0
+(* This test is usually too heavy; activate it at will
+let value2 = Json.fix_value value1
+*)
+
+test lns get "\"menu\"" = { "string" = "menu" }
+
+test lns get "true" = { "const" = "true" }
+
+test lns get "3.141" = { "number" = "3.141" }
+
+test lns get "{ \"key\" : 666 }" =
+ { "dict" { "entry" = "key" { "number" = "666" } } }
+
+test lns get "[true, 0, \"yo\"]" =
+ { "array" { "const" = "true" } { "number" = "0" } { "string" = "yo" } }
+
+test lns get "{\"a\" : true}" =
+ { "dict" { "entry" = "a" { "const" = "true" } } }
+
+test lns get "{ \"0\":true, \"1\":false }" =
+ { "dict" { "entry" = "0" { "const" = "true" } }
+ { "entry" = "1" { "const" = "false" } } }
+
+
+test lns get "{ \"0\": true, \"1\":false }" =
+ { "dict"
+ { "entry" = "0" { "const" = "true" } }
+ { "entry" = "1" { "const" = "false" } } }
+
+test lns get "{\"menu\": \"entry one\"}" =
+ { "dict" { "entry" = "menu" { "string" = "entry one" } } }
+
+test lns get "[ ]" =
+ { "array" }
+
+test lns get "{}" =
+ { "dict" }
+
+let s = "{\"menu\": {
+ \"id\": \"file\",
+ \"value\": \"File\",
+ \"popup\": {
+ \"menuitem\": [
+ {\"value\": \"New\", \"onclick\": \"CreateNewDoc()\"},
+ {\"value\": \"Open\", \"onclick\": \"OpenDoc()\"},
+ {\"value\": \"Close\", \"onclick\": \"CloseDoc()\"}
+ ]
+ }
+}}"
+
+test lns get s =
+ { "dict"
+ { "entry" = "menu"
+ { "dict"
+ { }
+ { "entry" = "id" { "string" = "file" } }
+ { }
+ { "entry" = "value" { "string" = "File" } }
+ { }
+ { "entry" = "popup"
+ { "dict"
+ { }
+ { "entry" = "menuitem"
+ { "array"
+ { }
+ { "dict"
+ { "entry" = "value" { "string" = "New" } }
+ { "entry" = "onclick"
+ { "string" = "CreateNewDoc()" } } }
+ { }
+ { "dict"
+ { "entry" = "value" { "string" = "Open" } }
+ { "entry" = "onclick" { "string" = "OpenDoc()" } } }
+ { }
+ { "dict"
+ { "entry" = "value" { "string" = "Close" } }
+ { "entry" = "onclick" { "string" = "CloseDoc()" } }
+ { }
+ }
+ { } } } { } } } } } }
+
+let t = "
+{\"web-app\": {
+ \"servlet\": [
+ {
+ \"servlet-name\": \"cofaxCDS\",
+ \"servlet-class\": \"org.cofax.cds.CDSServlet\",
+ \"init-param\": {
+ \"configGlossary:installationAt\": \"Philadelphia, PA\",
+ \"configGlossary:adminEmail\": \"ksm@pobox.com\",
+ \"configGlossary:poweredBy\": \"Cofax\",
+ \"configGlossary:poweredByIcon\": \"/images/cofax.gif\",
+ \"configGlossary:staticPath\": \"/content/static\",
+ \"templateProcessorClass\": \"org.cofax.WysiwygTemplate\",
+ \"templateLoaderClass\": \"org.cofax.FilesTemplateLoader\",
+ \"templatePath\": \"templates\",
+ \"templateOverridePath\": \"\",
+ \"defaultListTemplate\": \"listTemplate.htm\",
+ \"defaultFileTemplate\": \"articleTemplate.htm\",
+ \"useJSP\": false,
+ \"jspListTemplate\": \"listTemplate.jsp\",
+ \"jspFileTemplate\": \"articleTemplate.jsp\",
+ \"cachePackageTagsTrack\": 200,
+ \"cachePackageTagsStore\": 200,
+ \"cachePackageTagsRefresh\": 60,
+ \"cacheTemplatesTrack\": 100,
+ \"cacheTemplatesStore\": 50,
+ \"cacheTemplatesRefresh\": 15,
+ \"cachePagesTrack\": 200,
+ \"cachePagesStore\": 100,
+ \"cachePagesRefresh\": 10,
+ \"cachePagesDirtyRead\": 10,
+ \"searchEngineListTemplate\": \"forSearchEnginesList.htm\",
+ \"searchEngineFileTemplate\": \"forSearchEngines.htm\",
+ \"searchEngineRobotsDb\": \"WEB-INF/robots.db\",
+ \"useDataStore\": true,
+ \"dataStoreClass\": \"org.cofax.SqlDataStore\",
+ \"redirectionClass\": \"org.cofax.SqlRedirection\",
+ \"dataStoreName\": \"cofax\",
+ \"dataStoreDriver\": \"com.microsoft.jdbc.sqlserver.SQLServerDriver\",
+ \"dataStoreUrl\": \"jdbc:microsoft:sqlserver://LOCALHOST:1433;DatabaseName=goon\",
+ \"dataStoreUser\": \"sa\",
+ \"dataStorePassword\": \"dataStoreTestQuery\",
+ \"dataStoreTestQuery\": \"SET NOCOUNT ON;select test='test';\",
+ \"dataStoreLogFile\": \"/usr/local/tomcat/logs/datastore.log\",
+ \"dataStoreInitConns\": 10,
+ \"dataStoreMaxConns\": 100,
+ \"dataStoreConnUsageLimit\": 100,
+ \"dataStoreLogLevel\": \"debug\",
+ \"maxUrlLength\": 500}},
+ {
+ \"servlet-name\": \"cofaxEmail\",
+ \"servlet-class\": \"org.cofax.cds.EmailServlet\",
+ \"init-param\": {
+ \"mailHost\": \"mail1\",
+ \"mailHostOverride\": \"mail2\"}},
+ {
+ \"servlet-name\": \"cofaxAdmin\",
+ \"servlet-class\": \"org.cofax.cds.AdminServlet\"},
+
+ {
+ \"servlet-name\": \"fileServlet\",
+ \"servlet-class\": \"org.cofax.cds.FileServlet\"},
+ {
+ \"servlet-name\": \"cofaxTools\",
+ \"servlet-class\": \"org.cofax.cms.CofaxToolsServlet\",
+ \"init-param\": {
+ \"templatePath\": \"toolstemplates/\",
+ \"log\": 1,
+ \"logLocation\": \"/usr/local/tomcat/logs/CofaxTools.log\",
+ \"logMaxSize\": \"\",
+ \"dataLog\": 1,
+ \"dataLogLocation\": \"/usr/local/tomcat/logs/dataLog.log\",
+ \"dataLogMaxSize\": \"\",
+ \"removePageCache\": \"/content/admin/remove?cache=pages&id=\",
+ \"removeTemplateCache\": \"/content/admin/remove?cache=templates&id=\",
+ \"fileTransferFolder\": \"/usr/local/tomcat/webapps/content/fileTransferFolder\",
+ \"lookInContext\": 1,
+ \"adminGroupID\": 4,
+ \"betaServer\": true}}],
+ \"servlet-mapping\": {
+ \"cofaxCDS\": \"/\",
+ \"cofaxEmail\": \"/cofaxutil/aemail/*\",
+ \"cofaxAdmin\": \"/admin/*\",
+ \"fileServlet\": \"/static/*\",
+ \"cofaxTools\": \"/tools/*\"},
+
+ \"taglib\": {
+ \"taglib-uri\": \"cofax.tld\",
+ \"taglib-location\": \"/WEB-INF/tlds/cofax.tld\"}}}"
+
+test lns get t =
+ { }
+ { "dict"
+ { "entry" = "web-app"
+ { "dict"
+ { }
+ { "entry" = "servlet"
+ { "array"
+ { }
+ { "dict"
+ { }
+ { "entry" = "servlet-name" { "string" = "cofaxCDS" } }
+ { }
+ { "entry" = "servlet-class"
+ { "string" = "org.cofax.cds.CDSServlet" } }
+ { }
+ { "entry" = "init-param"
+ { "dict"
+ { }
+ { "entry" = "configGlossary:installationAt"
+ { "string" = "Philadelphia, PA" } }
+ { }
+ { "entry" = "configGlossary:adminEmail"
+ { "string" = "ksm@pobox.com" } }
+ { }
+ { "entry" = "configGlossary:poweredBy"
+ { "string" = "Cofax" } }
+ { }
+ { "entry" = "configGlossary:poweredByIcon"
+ { "string" = "/images/cofax.gif" } }
+ { }
+ { "entry" = "configGlossary:staticPath"
+ { "string" = "/content/static" } }
+ { }
+ { "entry" = "templateProcessorClass"
+ { "string" = "org.cofax.WysiwygTemplate" } }
+ { }
+ { "entry" = "templateLoaderClass"
+ { "string" = "org.cofax.FilesTemplateLoader" } }
+ { }
+ { "entry" = "templatePath"
+ { "string" = "templates" } }
+ { }
+ { "entry" = "templateOverridePath"
+ { "string" = "" } }
+ { }
+ { "entry" = "defaultListTemplate"
+ { "string" = "listTemplate.htm" } }
+ { }
+ { "entry" = "defaultFileTemplate"
+ { "string" = "articleTemplate.htm" } }
+ { }
+ { "entry" = "useJSP"
+ { "const" = "false" } }
+ { }
+ { "entry" = "jspListTemplate"
+ { "string" = "listTemplate.jsp" } }
+ { }
+ { "entry" = "jspFileTemplate"
+ { "string" = "articleTemplate.jsp" } }
+ { }
+ { "entry" = "cachePackageTagsTrack"
+ { "number" = "200" } }
+ { }
+ { "entry" = "cachePackageTagsStore"
+ { "number" = "200" } }
+ { }
+ { "entry" = "cachePackageTagsRefresh"
+ { "number" = "60" } }
+ { }
+ { "entry" = "cacheTemplatesTrack"
+ { "number" = "100" } }
+ { }
+ { "entry" = "cacheTemplatesStore"
+ { "number" = "50" } }
+ { }
+ { "entry" = "cacheTemplatesRefresh"
+ { "number" = "15" } }
+ { }
+ { "entry" = "cachePagesTrack"
+ { "number" = "200" } }
+ { }
+ { "entry" = "cachePagesStore"
+ { "number" = "100" } }
+ { }
+ { "entry" = "cachePagesRefresh"
+ { "number" = "10" } }
+ { }
+ { "entry" = "cachePagesDirtyRead"
+ { "number" = "10" } }
+ { }
+ { "entry" = "searchEngineListTemplate"
+ { "string" = "forSearchEnginesList.htm" } }
+ { }
+ { "entry" = "searchEngineFileTemplate"
+ { "string" = "forSearchEngines.htm" } }
+ { }
+ { "entry" = "searchEngineRobotsDb"
+ { "string" = "WEB-INF/robots.db" } }
+ { }
+ { "entry" = "useDataStore"
+ { "const" = "true" } }
+ { }
+ { "entry" = "dataStoreClass"
+ { "string" = "org.cofax.SqlDataStore" } }
+ { }
+ { "entry" = "redirectionClass"
+ { "string" = "org.cofax.SqlRedirection" } }
+ { }
+ { "entry" = "dataStoreName"
+ { "string" = "cofax" } }
+ { }
+ { "entry" = "dataStoreDriver"
+ { "string" = "com.microsoft.jdbc.sqlserver.SQLServerDriver" } }
+ { }
+ { "entry" = "dataStoreUrl"
+ { "string" = "jdbc:microsoft:sqlserver://LOCALHOST:1433;DatabaseName=goon" } }
+ { }
+ { "entry" = "dataStoreUser"
+ { "string" = "sa" } }
+ { }
+ { "entry" = "dataStorePassword"
+ { "string" = "dataStoreTestQuery" } }
+ { }
+ { "entry" = "dataStoreTestQuery"
+ { "string" = "SET NOCOUNT ON;select test='test';" } }
+ { }
+ { "entry" = "dataStoreLogFile"
+ { "string" = "/usr/local/tomcat/logs/datastore.log" } }
+ { }
+ { "entry" = "dataStoreInitConns"
+ { "number" = "10" } }
+ { }
+ { "entry" = "dataStoreMaxConns"
+ { "number" = "100" } }
+ { }
+ { "entry" = "dataStoreConnUsageLimit"
+ { "number" = "100" } }
+ { }
+ { "entry" = "dataStoreLogLevel"
+ { "string" = "debug" } }
+ { }
+ { "entry" = "maxUrlLength"
+ { "number" = "500" } } } } }
+ { }
+ { "dict"
+ { }
+ { "entry" = "servlet-name"
+ { "string" = "cofaxEmail" } }
+ { }
+ { "entry" = "servlet-class"
+ { "string" = "org.cofax.cds.EmailServlet" } }
+ { }
+ { "entry" = "init-param"
+ { "dict"
+ { }
+ { "entry" = "mailHost"
+ { "string" = "mail1" } }
+ { }
+ { "entry" = "mailHostOverride"
+ { "string" = "mail2" } } } } }
+ { }
+ { "dict"
+ { }
+ { "entry" = "servlet-name"
+ { "string" = "cofaxAdmin" } }
+ { }
+ { "entry" = "servlet-class"
+ { "string" = "org.cofax.cds.AdminServlet" } } }
+ { }
+ { }
+ { "dict"
+ { }
+ { "entry" = "servlet-name"
+ { "string" = "fileServlet" } }
+ { }
+ { "entry" = "servlet-class"
+ { "string" = "org.cofax.cds.FileServlet" } } }
+ { }
+ { "dict"
+ { }
+ { "entry" = "servlet-name"
+ { "string" = "cofaxTools" } }
+ { }
+ { "entry" = "servlet-class"
+ { "string" = "org.cofax.cms.CofaxToolsServlet" } }
+ { }
+ { "entry" = "init-param"
+ { "dict"
+ { }
+ { "entry" = "templatePath"
+ { "string" = "toolstemplates/" } }
+ { }
+ { "entry" = "log"
+ { "number" = "1" } }
+ { }
+ { "entry" = "logLocation"
+ { "string" = "/usr/local/tomcat/logs/CofaxTools.log" } }
+ { }
+ { "entry" = "logMaxSize"
+ { "string" = "" } }
+ { }
+ { "entry" = "dataLog"
+ { "number" = "1" } }
+ { }
+ { "entry" = "dataLogLocation"
+ { "string" = "/usr/local/tomcat/logs/dataLog.log" } }
+ { }
+ { "entry" = "dataLogMaxSize"
+ { "string" = "" } }
+ { }
+ { "entry" = "removePageCache"
+ { "string" = "/content/admin/remove?cache=pages&id=" } }
+ { }
+ { "entry" = "removeTemplateCache"
+ { "string" = "/content/admin/remove?cache=templates&id=" } }
+ { }
+ { "entry" = "fileTransferFolder"
+ { "string" = "/usr/local/tomcat/webapps/content/fileTransferFolder" } }
+ { }
+ { "entry" = "lookInContext"
+ { "number" = "1" } }
+ { }
+ { "entry" = "adminGroupID"
+ { "number" = "4" } }
+ { }
+ { "entry" = "betaServer"
+ { "const" = "true" } } } } } } }
+ { }
+ { "entry" = "servlet-mapping"
+ { "dict"
+ { }
+ { "entry" = "cofaxCDS"
+ { "string" = "/" } }
+ { }
+ { "entry" = "cofaxEmail"
+ { "string" = "/cofaxutil/aemail/*" } }
+ { }
+ { "entry" = "cofaxAdmin"
+ { "string" = "/admin/*" } }
+ { }
+ { "entry" = "fileServlet"
+ { "string" = "/static/*" } }
+ { }
+ { "entry" = "cofaxTools"
+ { "string" = "/tools/*" } } } }
+ { }
+ { }
+ { "entry" = "taglib"
+ { "dict"
+ { }
+ { "entry" = "taglib-uri"
+ { "string" = "cofax.tld" } }
+ { }
+ { "entry" = "taglib-location"
+ { "string" = "/WEB-INF/tlds/cofax.tld" } } } } } } }
+
+(* Comments *)
+test lns get "// A comment
+//
+{\"menu\": 1 }
+//
+/*
+This is a multiline comment
+*/\n" =
+ { "#comment" = "A comment" }
+ { }
+ { "dict"
+ { "entry" = "menu"
+ { "number" = "1" } }
+ { }
+ { }
+ { "#mcomment"
+ { "1" = "This is a multiline comment" } } }
+
+
+let s_commented = "/* before */
+{ // before all values
+\"key\": // my key
+ \"value\" // my value
+ , // after value
+\"key2\": [ // before array values
+ \"val21\",
+ \"val22\"
+ // after value 22
+]
+// after all values
+}
+"
+test lns get s_commented =
+ { "#mcomment" { "1" = "before" } }
+ { "dict"
+ { "#comment" = "before all values" }
+ { "entry" = "key"
+ { "#comment" = "my key" }
+ { "string" = "value" { "#comment" = "my value" } } }
+ { "#comment" = "after value" }
+ { "entry" = "key2"
+ { "array"
+ { "#comment" = "before array values" }
+ { "string" = "val21" }
+ { }
+ { "string" = "val22"
+ { }
+ { "#comment" = "after value 22" } }
+ { }
+ { "#comment" = "after all values" } } }
+ { } }
+
+(* Test: lns
+ Allow escaped quotes, backslashes and tabs/newlines *)
+test lns get "{ \"filesystem\": \"ext3\\\" \\\\ \t \r\n SEC_TYPE=\\\"ext2\" }\n" =
+ { "dict"
+ { "entry" = "filesystem"
+ { "string" = "ext3\\\" \\\\ \t \r\n SEC_TYPE=\\\"ext2" } }
+ { } }
+
+test Json.str get "\"\\\"\"" = { "string" = "\\\"" }
+
+test Json.str get "\"\\\"" = *
+
+test Json.str get "\"\"\"" = *
+
+test Json.str get "\"\\u1234\"" = { "string" = "\u1234" }
+
+(* Allow spurious backslashes; Issue #557 *)
+test Json.str get "\"\\/\"" = { "string" = "\\/" }
+
+test lns get "{ \"download-dir\": \"\\/var\\/tmp\\/\" }" =
+ { "dict"
+ { "entry" = "download-dir"
+ { "string" = "\/var\/tmp\/" } } }
--- /dev/null
+(*
+Module: Test_Kdump
+ Provides unit tests and examples for the <Kdump> lens.
+*)
+
+module Test_Kdump =
+
+ let conf = "# this is a comment
+#another commented line
+
+#comment after empty line
+#
+#comment after empty comment
+auto_reset_crashkernel yes
+path /var/crash #comment after entry
+core_collector makedumpfile -c
+default poweroff
+raw /dev/sda5
+ext3 /dev/sda3
+net my.server.com:/export/tmp
+nfs my.server.com:/export/tmp
+net user@my.server.com
+ssh user@my.server.com
+link_delay 60
+kdump_pre /var/crash/scripts/kdump-pre.sh
+kdump_post /var/crash/scripts/kdump-post.sh
+#extra_bins /usr/bin/lftp /a/b/c
+extra_bins /usr/bin/lftp /a/b/c # comment
+disk_timeout 30
+extra_modules gfs2 extra modules more
+options babla labl kbak df=dfg
+options babla labl kbak df=dfg
+options babla labl kbak df=dfg # comment
+sshkey /root/.ssh/kdump_id_rsa
+force_rebuild 1
+override_resettable 1
+dracut_args --omit-drivers \"cfg80211 snd\" --add-drivers \"ext2 ext3\"
+fence_kdump_args -p 7410 -f auto
+fence_kdump_nodes 192.168.1.10 10.34.63.155
+debug_mem_level 3
+blacklist gfs2
+"
+
+ (* Test: Kdump.lns
+ Check whole config file *)
+ test Kdump.lns get conf =
+ { "#comment" = "this is a comment" }
+ { "#comment" = "another commented line" }
+ { }
+ { "#comment" = "comment after empty line" }
+ { }
+ { "#comment" = "comment after empty comment" }
+ { "auto_reset_crashkernel" = "yes" }
+ { "path" = "/var/crash"
+ { "#comment" = "comment after entry" } }
+ { "core_collector" = "makedumpfile -c" }
+ { "default" = "poweroff" }
+ { "raw" = "/dev/sda5" }
+ { "ext3" = "/dev/sda3" }
+ { "net" = "my.server.com:/export/tmp" }
+ { "nfs" = "my.server.com:/export/tmp" }
+ { "net" = "user@my.server.com" }
+ { "ssh" = "user@my.server.com" }
+ { "link_delay" = "60" }
+ { "kdump_pre" = "/var/crash/scripts/kdump-pre.sh" }
+ { "kdump_post" = "/var/crash/scripts/kdump-post.sh" }
+ { "#comment" = "extra_bins /usr/bin/lftp /a/b/c" }
+ { "extra_bins"
+ { "1" = "/usr/bin/lftp" }
+ { "2" = "/a/b/c" }
+ { "#comment" = "comment" } }
+ { "disk_timeout" = "30" }
+ { "extra_modules"
+ { "1" = "gfs2" }
+ { "2" = "extra" }
+ { "3" = "modules" }
+ { "4" = "more" } }
+ { "options"
+ { "babla"
+ { "labl" }
+ { "kbak" }
+ { "df" = "dfg" } } }
+ { "options"
+ { "babla"
+ { "labl" }
+ { "kbak" }
+ { "df" = "dfg" } } }
+ { "options"
+ { "babla"
+ { "labl" }
+ { "kbak" }
+ { "df" = "dfg" } }
+ { "#comment" = "comment" } }
+ { "sshkey" = "/root/.ssh/kdump_id_rsa" }
+ { "force_rebuild" = "1" }
+ { "override_resettable" = "1" }
+ { "dracut_args" = "--omit-drivers \"cfg80211 snd\" --add-drivers \"ext2 ext3\"" }
+ { "fence_kdump_args" = "-p 7410 -f auto" }
+ { "fence_kdump_nodes"
+ { "1" = "192.168.1.10" }
+ { "2" = "10.34.63.155" } }
+ { "debug_mem_level" = "3" }
+ { "blacklist"
+ { "1" = "gfs2" } }
--- /dev/null
+(*
+Module: Test_Keepalived
+ Provides unit tests and examples for the <Keepalived> lens.
+*)
+
+module Test_Keepalived =
+
+(* Variable: conf
+ A full configuration file *)
+ let conf = "! This is a comment
+! Configuration File for keepalived
+
+global_defs {
+ ! this is who emails will go to on alerts
+ notification_email {
+ admins@example.com
+ fakepager@example.com
+ ! add a few more email addresses here if you would like
+ }
+ notification_email_from admins@example.com
+
+ smtp_server 127.0.0.1 ! I use the local machine to relay mail
+ smtp_connect_timeout 30
+
+ ! each load balancer should have a different ID
+ ! this will be used in SMTP alerts, so you should make
+ ! each router easily identifiable
+ lvs_id LVS_EXAMPLE_01
+
+ vrrp_mcast_group4 224.0.0.18
+ vrrp_mcast_group6 ff02::12
+}
+
+vrrp_sync_group VG1 {
+ group {
+ inside_network # name of vrrp_instance (below)
+ outside_network # One for each moveable IP.
+ }
+ notify /usr/bin/foo
+ notify_master /usr/bin/foo
+ smtp_alert
+}
+
+vrrp_instance VI_1 {
+ state MASTER
+ interface eth0
+
+ track_interface {
+ eth0 # Back
+ eth1 # DMZ
+ }
+ track_script {
+ check_apache2 # weight = +2 si ok, 0 si nok
+ }
+ garp_master_delay 5
+ garp_master_repeat 5
+ garp_master_refresh 5
+ garp_master_refresh_repeat 5
+ priority 50
+ advert_int 2
+ authentication {
+ auth_type PASS
+ auth_pass mypass
+ }
+ virtual_ipaddress {
+ 10.234.66.146/32 dev eth0
+ }
+
+ lvs_sync_daemon_interface eth0
+ ha_suspend
+
+ notify_master \"/svr/scripts/notify_master.sh\"
+ notify_backup \"/svr/scripts/notify_backup.sh\"
+ notify_fault \"/svr/scripts/notify_fault.sh\"
+ notify_stop \"/svr/scripts/notify_stop.sh\"
+ notify_deleted \"/svr/scripts/notify_deleted.sh\"
+ notify \"/svr/scripts/notify.sh\"
+
+ ! each virtual router id must be unique per instance name!
+ virtual_router_id 51
+
+ ! MASTER and BACKUP state are determined by the priority
+ ! even if you specify MASTER as the state, the state will
+ ! be voted on by priority (so if your state is MASTER but your
+ ! priority is lower than the router with BACKUP, you will lose
+ ! the MASTER state)
+ ! I make it a habit to set priorities at least 50 points apart
+ ! note that a lower number is lesser priority - lower gets less vote
+ priority 150
+
+ ! how often should we vote, in seconds?
+ advert_int 1
+
+ ! send an alert when this instance changes state from MASTER to BACKUP
+ smtp_alert
+
+ ! this authentication is for syncing between failover servers
+ ! keepalived supports PASS, which is simple password
+ ! authentication
+ ! or AH, which is the IPSec authentication header.
+ ! I don't use AH
+ ! yet as many people have reported problems with it
+ authentication {
+ auth_type PASS
+ auth_pass example
+ }
+
+ ! these are the IP addresses that keepalived will setup on this
+ ! machine. Later in the config we will specify which real
+ ! servers are behind these IPs
+ ! without this block, keepalived will not setup and takedown the
+ ! any IP addresses
+
+ virtual_ipaddress {
+ 192.168.1.11
+ 10.234.66.146/32 dev vlan933 # parse it well
+ ! and more if you want them
+ }
+
+ use_vmac
+ vmac_xmit_base
+ native_ipv6
+ dont_track_primary
+ preempt_delay
+
+ mcast_src_ip 192.168.1.1
+ unicast_src_ip 192.168.1.1
+
+ unicast_peer {
+ 192.168.1.2
+ 192.168.1.3
+ }
+}
+
+virtual_server 192.168.1.11 22 {
+ delay_loop 6
+
+ ! use round-robin as a load balancing algorithm
+ lb_algo rr
+
+ ! we are doing NAT
+ lb_kind NAT
+ nat_mask 255.255.255.0
+
+ protocol TCP
+
+ sorry_server 10.20.40.30 22
+
+ ! there can be as many real_server blocks as you need
+
+ real_server 10.20.40.10 22 {
+
+ ! if we used weighted round-robin or a similar lb algo,
+ ! we include the weight of this server
+
+ weight 1
+
+ ! here is a health checker for this server.
+ ! we could use a custom script here (see the keepalived docs)
+ ! but we will just make sure we can do a vanilla tcp connect()
+ ! on port 22
+ ! if it fails, we will pull this realserver out of the pool
+ ! and send email about the removal
+ TCP_CHECK {
+ connect_timeout 3
+ connect_port 22
+ }
+ }
+}
+
+virtual_server_group DNS_1 {
+ 192.168.0.1 22
+ 10.234.55.22-25 36
+ 10.45.58.59/32 27
+}
+
+vrrp_script chk_apache2 { # Requires keepalived-1.1.13
+ script \"killall -0 apache2\" # faster
+ interval 2 # check every 2 seconds
+ weight 2 # add 2 points of prio if OK
+ fall 5
+ raise 5
+}
+
+! that's all
+"
+
+
+(* Test: Keepalived.lns
+ Test the full <conf> *)
+ test Keepalived.lns get conf =
+ { "#comment" = "This is a comment" }
+ { "#comment" = "Configuration File for keepalived" }
+ {}
+ { "global_defs"
+ { "#comment" = "this is who emails will go to on alerts" }
+ { "notification_email"
+ { "email" = "admins@example.com" }
+ { "email" = "fakepager@example.com" }
+ { "#comment" = "add a few more email addresses here if you would like" } }
+ { "notification_email_from" = "admins@example.com" }
+ { }
+ { "smtp_server" = "127.0.0.1"
+ { "#comment" = "I use the local machine to relay mail" } }
+ { "smtp_connect_timeout" = "30" }
+ {}
+ { "#comment" = "each load balancer should have a different ID" }
+ { "#comment" = "this will be used in SMTP alerts, so you should make" }
+ { "#comment" = "each router easily identifiable" }
+ { "lvs_id" = "LVS_EXAMPLE_01" }
+ {}
+ { "vrrp_mcast_group4" = "224.0.0.18" }
+ { "vrrp_mcast_group6" = "ff02::12" } }
+ {}
+ { "vrrp_sync_group" = "VG1"
+ { "group"
+ { "inside_network"
+ { "#comment" = "name of vrrp_instance (below)" } }
+ { "outside_network"
+ { "#comment" = "One for each moveable IP." } } }
+ { "notify" = "/usr/bin/foo" }
+ { "notify_master" = "/usr/bin/foo" }
+ { "smtp_alert" } }
+ {}
+ { "vrrp_instance" = "VI_1"
+ { "state" = "MASTER" }
+ { "interface" = "eth0" }
+ { }
+ { "track_interface"
+ { "eth0" { "#comment" = "Back" } }
+ { "eth1" { "#comment" = "DMZ" } } }
+ { "track_script"
+ { "check_apache2" { "#comment" = "weight = +2 si ok, 0 si nok" } } }
+ { "garp_master_delay" = "5" }
+ { "garp_master_repeat" = "5" }
+ { "garp_master_refresh" = "5" }
+ { "garp_master_refresh_repeat" = "5" }
+ { "priority" = "50" }
+ { "advert_int" = "2" }
+ { "authentication"
+ { "auth_type" = "PASS" }
+ { "auth_pass" = "mypass" } }
+ { "virtual_ipaddress"
+ { "ipaddr" = "10.234.66.146"
+ { "prefixlen" = "32" }
+ { "dev" = "eth0" } } }
+ { }
+ { "lvs_sync_daemon_interface" = "eth0" }
+ { "ha_suspend" }
+ { }
+ { "notify_master" = "\"/svr/scripts/notify_master.sh\"" }
+ { "notify_backup" = "\"/svr/scripts/notify_backup.sh\"" }
+ { "notify_fault" = "\"/svr/scripts/notify_fault.sh\"" }
+ { "notify_stop" = "\"/svr/scripts/notify_stop.sh\"" }
+ { "notify_deleted" = "\"/svr/scripts/notify_deleted.sh\"" }
+ { "notify" = "\"/svr/scripts/notify.sh\"" }
+ { }
+ { "#comment" = "each virtual router id must be unique per instance name!" }
+ { "virtual_router_id" = "51" }
+ { }
+ { "#comment" = "MASTER and BACKUP state are determined by the priority" }
+ { "#comment" = "even if you specify MASTER as the state, the state will" }
+ { "#comment" = "be voted on by priority (so if your state is MASTER but your" }
+ { "#comment" = "priority is lower than the router with BACKUP, you will lose" }
+ { "#comment" = "the MASTER state)" }
+ { "#comment" = "I make it a habit to set priorities at least 50 points apart" }
+ { "#comment" = "note that a lower number is lesser priority - lower gets less vote" }
+ { "priority" = "150" }
+ { }
+ { "#comment" = "how often should we vote, in seconds?" }
+ { "advert_int" = "1" }
+ { }
+ { "#comment" = "send an alert when this instance changes state from MASTER to BACKUP" }
+ { "smtp_alert" }
+ { }
+ { "#comment" = "this authentication is for syncing between failover servers" }
+ { "#comment" = "keepalived supports PASS, which is simple password" }
+ { "#comment" = "authentication" }
+ { "#comment" = "or AH, which is the IPSec authentication header." }
+ { "#comment" = "I don't use AH" }
+ { "#comment" = "yet as many people have reported problems with it" }
+ { "authentication"
+ { "auth_type" = "PASS" }
+ { "auth_pass" = "example" } }
+ { }
+ { "#comment" = "these are the IP addresses that keepalived will setup on this" }
+ { "#comment" = "machine. Later in the config we will specify which real" }
+ { "#comment" = "servers are behind these IPs" }
+ { "#comment" = "without this block, keepalived will not setup and takedown the" }
+ { "#comment" = "any IP addresses" }
+ { }
+ { "virtual_ipaddress"
+ { "ipaddr" = "192.168.1.11" }
+ { "ipaddr" = "10.234.66.146"
+ { "prefixlen" = "32" }
+ { "dev" = "vlan933" }
+ { "#comment" = "parse it well" } }
+ { "#comment" = "and more if you want them" } }
+ { }
+ { "use_vmac" }
+ { "vmac_xmit_base" }
+ { "native_ipv6" }
+ { "dont_track_primary" }
+ { "preempt_delay" }
+ { }
+ { "mcast_src_ip" = "192.168.1.1" }
+ { "unicast_src_ip" = "192.168.1.1" }
+ { }
+ { "unicast_peer"
+ { "ipaddr" = "192.168.1.2" }
+ { "ipaddr" = "192.168.1.3" } } }
+ { }
+ { "virtual_server"
+ { "ip" = "192.168.1.11" }
+ { "port" = "22" }
+ { "delay_loop" = "6" }
+ { }
+ { "#comment" = "use round-robin as a load balancing algorithm" }
+ { "lb_algo" = "rr" }
+ { }
+ { "#comment" = "we are doing NAT" }
+ { "lb_kind" = "NAT" }
+ { "nat_mask" = "255.255.255.0" }
+ { }
+ { "protocol" = "TCP" }
+ { }
+ { "sorry_server"
+ { "ip" = "10.20.40.30" }
+ { "port" = "22" } }
+ { }
+ { "#comment" = "there can be as many real_server blocks as you need" }
+ { }
+ { "real_server"
+ { "ip" = "10.20.40.10" }
+ { "port" = "22" }
+ { "#comment" = "if we used weighted round-robin or a similar lb algo," }
+ { "#comment" = "we include the weight of this server" }
+ { }
+ { "weight" = "1" }
+ { }
+ { "#comment" = "here is a health checker for this server." }
+ { "#comment" = "we could use a custom script here (see the keepalived docs)" }
+ { "#comment" = "but we will just make sure we can do a vanilla tcp connect()" }
+ { "#comment" = "on port 22" }
+ { "#comment" = "if it fails, we will pull this realserver out of the pool" }
+ { "#comment" = "and send email about the removal" }
+ { "TCP_CHECK"
+ { "connect_timeout" = "3" }
+ { "connect_port" = "22" } } } }
+ { }
+ { "virtual_server_group" = "DNS_1"
+ { "vip"
+ { "ipaddr" = "192.168.0.1" }
+ { "port" = "22" } }
+ { "vip"
+ { "ipaddr" = "10.234.55.22-25" }
+ { "port" = "36" } }
+ { "vip"
+ { "ipaddr" = "10.45.58.59"
+ { "prefixlen" = "32" } }
+ { "port" = "27" } } }
+ { }
+ { "vrrp_script" = "chk_apache2"
+ { "#comment" = "Requires keepalived-1.1.13" }
+ { "script" = "\"killall -0 apache2\""
+ { "#comment" = "faster" } }
+ { "interval" = "2"
+ { "#comment" = "check every 2 seconds" } }
+ { "weight" = "2"
+ { "#comment" = "add 2 points of prio if OK" } }
+ { "fall" = "5" }
+ { "raise" = "5" } }
+ { }
+ { "#comment" = "that's all" }
+
+(* Variable: tcp_check
+ An example of a TCP health checker *)
+let tcp_check = "virtual_server 192.168.1.11 22 {
+ real_server 10.20.40.10 22 {
+ TCP_CHECK {
+ connect_timeout 3
+ connect_port 22
+ bindto 192.168.1.1
+ }
+ }
+}
+"
+test Keepalived.lns get tcp_check =
+ { "virtual_server"
+ { "ip" = "192.168.1.11" }
+ { "port" = "22" }
+ { "real_server"
+ { "ip" = "10.20.40.10" }
+ { "port" = "22" }
+ { "TCP_CHECK"
+ { "connect_timeout" = "3" }
+ { "connect_port" = "22" }
+ { "bindto" = "192.168.1.1" } } } }
+
+(* Variable: misc_check
+ An example of a MISC health checker *)
+let misc_check = "virtual_server 192.168.1.11 22 {
+ real_server 10.20.40.10 22 {
+ MISC_CHECK {
+ misc_path /usr/local/bin/server_test
+ misc_timeout 3
+ misc_dynamic
+ }
+ }
+}
+"
+test Keepalived.lns get misc_check =
+ { "virtual_server"
+ { "ip" = "192.168.1.11" }
+ { "port" = "22" }
+ { "real_server"
+ { "ip" = "10.20.40.10" }
+ { "port" = "22" }
+ { "MISC_CHECK"
+ { "misc_path" = "/usr/local/bin/server_test" }
+ { "misc_timeout" = "3" }
+ { "misc_dynamic" } } } }
+
+(* Variable: smtp_check
+ An example of an SMTP health checker *)
+let smtp_check = "virtual_server 192.168.1.11 22 {
+ real_server 10.20.40.10 22 {
+ SMTP_CHECK {
+ host {
+ connect_ip 10.20.40.11
+ connect_port 587
+ bindto 192.168.1.1
+ }
+ connect_timeout 3
+ retry 5
+ delay_before_retry 10
+ helo_name \"Testing Augeas\"
+ }
+ }
+}
+"
+test Keepalived.lns get smtp_check =
+ { "virtual_server"
+ { "ip" = "192.168.1.11" }
+ { "port" = "22" }
+ { "real_server"
+ { "ip" = "10.20.40.10" }
+ { "port" = "22" }
+ { "SMTP_CHECK"
+ { "host"
+ { "connect_ip" = "10.20.40.11" }
+ { "connect_port" = "587" }
+ { "bindto" = "192.168.1.1" } }
+ { "connect_timeout" = "3" }
+ { "retry" = "5" }
+ { "delay_before_retry" = "10" }
+ { "helo_name" = "\"Testing Augeas\"" } } } }
+
+(* Variable: http_check
+ An example of an HTTP health checker *)
+let http_check = "virtual_server 192.168.1.11 22 {
+ real_server 10.20.40.10 22 {
+ HTTP_GET {
+ url {
+ path /mrtg2/
+ digest 9b3a0c85a887a256d6939da88aabd8cd
+ status_code 200
+ }
+ connect_timeout 3
+ connect_port 8080
+ nb_get_retry 5
+ delay_before_retry 10
+ }
+ SSL_GET {
+ connect_port 8443
+ }
+ }
+}
+"
+test Keepalived.lns get http_check =
+ { "virtual_server"
+ { "ip" = "192.168.1.11" }
+ { "port" = "22" }
+ { "real_server"
+ { "ip" = "10.20.40.10" }
+ { "port" = "22" }
+ { "HTTP_GET"
+ { "url"
+ { "path" = "/mrtg2/" }
+ { "digest" = "9b3a0c85a887a256d6939da88aabd8cd" }
+ { "status_code" = "200" } }
+ { "connect_timeout" = "3" }
+ { "connect_port" = "8080" }
+ { "nb_get_retry" = "5" }
+ { "delay_before_retry" = "10" } }
+ { "SSL_GET"
+ { "connect_port" = "8443" } } } }
--- /dev/null
+(*
+Module: Test_Known_Hosts
+ Provides unit tests and examples for the <Known_Hosts> lens.
+*)
+
+module Test_Known_Hosts =
+
+(* Test: Known_Hosts.lns
+ Simple get test *)
+test Known_Hosts.lns get "# A comment
+foo.example.com,foo ecdsa-sha2-nistp256 AAABBBDKDFX=
+bar.example.com,bar ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN9NJSjDZh4+K6WBS16iX7ZndnwbGsaEbLwHlCEhZmef
+|1|FhUqf1kMlRWNfK6InQSAmXiNiSY=|jwbKFwD4ipl6D0k6OoshmW7xOao= ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIvNOU8OedkWalFmoFcJWP3nasnCLx6M78F9y0rzTQtplggNd0dvR0A4SQOBfHInmk5dH6YGGcpT3PM3cJBR7rI=\n" =
+ { "#comment" = "A comment" }
+ { "1" = "foo.example.com"
+ { "alias" = "foo" }
+ { "type" = "ecdsa-sha2-nistp256" }
+ { "key" = "AAABBBDKDFX=" } }
+ { "2" = "bar.example.com"
+ { "alias" = "bar" }
+ { "type" = "ssh-ed25519" }
+ { "key" = "AAAAC3NzaC1lZDI1NTE5AAAAIN9NJSjDZh4+K6WBS16iX7ZndnwbGsaEbLwHlCEhZmef" } }
+ { "3" = "|1|FhUqf1kMlRWNfK6InQSAmXiNiSY=|jwbKFwD4ipl6D0k6OoshmW7xOao="
+ { "type" = "ecdsa-sha2-nistp256" }
+ { "key" = "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIvNOU8OedkWalFmoFcJWP3nasnCLx6M78F9y0rzTQtplggNd0dvR0A4SQOBfHInmk5dH6YGGcpT3PM3cJBR7rI=" } }
+
+(* Test: Known_Hosts.lns
+ Markers *)
+test Known_Hosts.lns get "@revoked * ssh-rsa AAAAB5W
+@cert-authority *.mydomain.org,*.mydomain.com ssh-rsa AAAAB5W\n" =
+ { "1" = "*"
+ { "@revoked" }
+ { "type" = "ssh-rsa" }
+ { "key" = "AAAAB5W" } }
+ { "2" = "*.mydomain.org"
+ { "@cert-authority" }
+ { "alias" = "*.mydomain.com" }
+ { "type" = "ssh-rsa" }
+ { "key" = "AAAAB5W" } }
+
+(* Test: Known_Hosts.lns
+ Eol comment *)
+test Known_Hosts.lns get "@revoked * ssh-rsa AAAAB5W # this is revoked\n" =
+ { "1" = "*"
+ { "@revoked" }
+ { "type" = "ssh-rsa" }
+ { "key" = "AAAAB5W" }
+ { "#comment" = "this is revoked" } }
--- /dev/null
+(*
+Module: Test_Koji
+ Provides unit tests and examples for the <Koji> lens.
+*)
+
+module Test_koji =
+
+ (* Variable: conf
+ A full koji.conf *)
+ let conf = "[koji]
+
+;configuration for koji cli tool
+
+;url of XMLRPC server
+server = http://localhost/kojihub
+
+;url of web interface
+weburl = http://localhost/koji
+
+;url of package download site
+pkgurl = http://localhost/packages
+
+;path to the koji top directory
+topdir = /mnt/koji
+
+;configuration for SSL athentication
+
+;client certificate
+cert = /etc/pki/koji/kojiadm.pem
+
+;certificate of the CA that issued the client certificate
+ca = /etc/pki/koji/koji_ca_cert.crt
+
+;certificate of the CA that issued the HTTP server certificate
+serverca = /etc/pki/koji/koji_ca_cert.crt
+"
+
+ test Koji.lns get conf =
+ { "section" = "koji"
+ { }
+ { "#comment" = "configuration for koji cli tool" }
+ { }
+ { "#comment" = "url of XMLRPC server" }
+ { "server" = "http://localhost/kojihub" }
+ { }
+ { "#comment" = "url of web interface" }
+ { "weburl" = "http://localhost/koji" }
+ { }
+ { "#comment" = "url of package download site" }
+ { "pkgurl" = "http://localhost/packages" }
+ { }
+ { "#comment" = "path to the koji top directory" }
+ { "topdir" = "/mnt/koji" }
+ { }
+ { "#comment" = "configuration for SSL athentication" }
+ { }
+ { "#comment" = "client certificate" }
+ { "cert" = "/etc/pki/koji/kojiadm.pem" }
+ { }
+ { "#comment" = "certificate of the CA that issued the client certificate" }
+ { "ca" = "/etc/pki/koji/koji_ca_cert.crt" }
+ { }
+ { "#comment" = "certificate of the CA that issued the HTTP server certificate" }
+ { "serverca" = "/etc/pki/koji/koji_ca_cert.crt" }
+ }
--- /dev/null
+module Test_krb5 =
+
+ (* Krb5.conf from Fermilab *)
+ let fermi_str = "###
+### This krb5.conf template is intended for use with Fermi
+### Kerberos v1_2 and later. Earlier versions may choke on the
+### \"auth_to_local = \" lines unless they are commented out.
+### The installation process should do all the right things in
+### any case, but if you are reading this and haven't updated
+### your kerberos product to v1_2 or later, you really should!
+###
+[libdefaults]
+ ticket_lifetime = 1560m
+ default_realm = FNAL.GOV
+ ccache_type = 4
+ default_tgs_enctypes = des-cbc-crc
+ default_tkt_enctypes = des-cbc-crc
+ permitted_enctypes = des-cbc-crc des3-cbc-sha1
+ default_lifetime = 7d
+ renew_lifetime = 7d
+ autologin = true
+ forward = true
+ forwardable = true
+ renewable = true
+ encrypt = true
+ v4_name_convert = {
+ host = {
+ rcmd = host
+ }
+ }
+
+[realms]
+ FNAL.GOV = {
+ kdc = krb-fnal-1.fnal.gov:88
+ kdc = krb-fnal-2.fnal.gov:88
+ kdc = krb-fnal-3.fnal.gov:88
+ kdc = krb-fnal-4.fnal.gov:88
+ kdc = krb-fnal-5.fnal.gov:88
+ kdc = krb-fnal-6.fnal.gov:88
+ kdc = krb-fnal-7.fnal.gov:88
+ master_kdc = krb-fnal-admin.fnal.gov:88
+ admin_server = krb-fnal-admin.fnal.gov
+ default_domain = fnal.gov
+ }
+ WIN.FNAL.GOV = {
+ kdc = littlebird.win.fnal.gov:88
+ kdc = bigbird.win.fnal.gov:88
+ default_domain = fnal.gov
+ }
+ FERMI.WIN.FNAL.GOV = {
+ kdc = sully.fermi.win.fnal.gov:88
+ kdc = elmo.fermi.win.fnal.gov:88
+ kdc = grover.fermi.win.fnal.gov:88
+ kdc = oscar.fermi.win.fnal.gov:88
+ kdc = cookie.fermi.win.fnal.gov:88
+ kdc = herry.fermi.win.fnal.gov:88
+ default_domain = fnal.gov
+ }
+ UCHICAGO.EDU = {
+ kdc = kerberos-0.uchicago.edu
+ kdc = kerberos-1.uchicago.edu
+ kdc = kerberos-2.uchicago.edu
+ admin_server = kerberos.uchicago.edu
+ default_domain = uchicago.edu
+ }
+ PILOT.FNAL.GOV = {
+ kdc = i-krb-2.fnal.gov:88
+ master_kdc = i-krb-2.fnal.gov:88
+ admin_server = i-krb-2.fnal.gov
+ default_domain = fnal.gov
+ }
+ WINBETA.FNAL.GOV = {
+ kdc = wbdc1.winbeta.fnal.gov:88
+ kdc = wbdc2.winbeta.fnal.gov:88
+ default_domain = fnal.gov
+ }
+ FERMIBETA.WINBETA.FNAL.GOV = {
+ kdc = fbdc1.fermibeta.winbeta.fnal.gov:88
+ kdc = fbdc2.fermibeta.winbeta.fnal.gov:88
+ default_domain = fnal.gov
+ }
+ CERN.CH = {
+ kdc = afsdb2.cern.ch
+ kdc = afsdb3.cern.ch
+ kdc = afsdb1.cern.ch
+ default_domain = cern.ch
+ kpasswd_server = afskrb5m.cern.ch
+ admin_server = afskrb5m.cern.ch
+ v4_name_convert = {
+ host = {
+ rcmd = host
+ }
+ }
+ }
+ 1TS.ORG = {
+ kdc = kerberos.1ts.org
+ admin_server = kerberos.1ts.org
+ }
+ stanford.edu = {
+ kdc = krb5auth1.stanford.edu
+ kdc = krb5auth2.stanford.edu
+ kdc = krb5auth3.stanford.edu
+ master_kdc = krb5auth1.stanford.edu
+ admin_server = krb5-admin.stanford.edu
+ default_domain = stanford.edu
+ krb524_server = krb524.stanford.edu
+ }
+
+[instancemapping]
+ afs = {
+ cron/* = \"\"
+ cms/* = \"\"
+ afs/* = \"\"
+ e898/* = \"\"
+ }
+
+[capaths]
+
+# FNAL.GOV and PILOT.FNAL.GOV are the MIT Kerberos Domains
+# FNAL.GOV is production and PILOT is for testing
+# The FERMI Windows domain uses the WIN.FNAL.GOV root realm
+# with the FERMI.WIN.FNAL.GOV sub-realm where machines and users
+# reside. The WINBETA and FERMIBETA domains are the equivalent
+# testing realms for the FERMIBETA domain. The 2-way transitive
+# trust structure of this complex is as follows:
+#
+# FNAL.GOV <=> PILOT.FNAL.GOV
+# FNAL.GOV <=> WIN.FERMI.GOV <=> FERMI.WIN.FERMI.GOV
+# PILOT.FNAL.GOV <=> WINBETA.FNAL.GOV <=> FERMIBETA.WINBETA.FNAL.GOV
+
+FNAL.GOV = {
+ PILOT.FNAL.GOV = .
+ FERMI.WIN.FNAL.GOV = WIN.FNAL.GOV
+ WIN.FNAL.GOV = .
+ FERMIBETA.WINBETA.FNAL.GOV = WINBETA.FNAL.GOV
+ WINBETA.FNAL.GOV = PILOT.FNAL.GOV
+}
+PILOT.FNAL.GOV = {
+ FNAL.GOV = .
+ FERMI.WIN.FNAL.GOV = WIN.FNAL.GOV
+ WIN.FNAL.GOV = FNAL.GOV
+ FERMIBETA.WINBETA.FNAL.GOV = WINBETA.FNAL.GOV
+ WINBETA.FNAL.GOV = .
+}
+WIN.FNAL.GOV = {
+ FNAL.GOV = .
+ PILOT.FNAL.GOV = FNAL.GOV
+ FERMI.WIN.FNAL.GOV = .
+ FERMIBETA.WINBETA.FNAL.GOV = WINBETA.FNAL.GOV
+ WINBETA.FNAL.GOV = PILOT.FNAL.GOV
+}
+WINBETA.FNAL.GOV = {
+ PILOT.FNAL.GOV = .
+ FERMIBETA.WINBETA.FNAL.GOV = .
+ FNAL.GOV = PILOT.FNAL.GOV
+ FERMI.WIN.FNAL.GOV = WIN.FNAL.GOV
+ WIN.FNAL.GOV = PILOT.FNAL.GOV
+}
+
+[logging]
+ kdc = SYSLOG:info:local1
+ admin_server = SYSLOG:info:local2
+ default = SYSLOG:err:auth
+
+[domain_realm]
+# Fermilab's (non-windows-centric) domains
+ .fnal.gov = FNAL.GOV
+ .cdms-soudan.org = FNAL.GOV
+ .deemz.net = FNAL.GOV
+ .dhcp.fnal.gov = FNAL.GOV
+ .minos-soudan.org = FNAL.GOV
+ i-krb-2.fnal.gov = PILOT.FNAL.GOV
+ .win.fnal.gov = WIN.FNAL.GOV
+ .fermi.win.fnal.gov = FERMI.WIN.FNAL.GOV
+ .winbeta.fnal.gov = WINBETA.FNAL.GOV
+ .fermibeta.winbeta.fnal.gov = FERMIBETA.WINBETA.FNAL.GOV
+# Fermilab's KCA servers so FERMI.WIN principals work in FNAL.GOV realm
+# winserver.fnal.gov = FERMI.WIN.FNAL.GOV
+# winserver2.fnal.gov = FERMI.WIN.FNAL.GOVA
+# Accelerator nodes to FERMI.WIN for Linux/OS X users
+ adgroups.fnal.gov = FERMI.WIN.FNAL.GOV
+ adusers.fnal.gov = FERMI.WIN.FNAL.GOV
+ webad.fnal.gov = FERMI.WIN.FNAL.GOV
+# Friends and family (by request)
+ .cs.ttu.edu = FNAL.GOV
+ .geol.uniovi.es = FNAL.GOV
+ .harvard.edu = FNAL.GOV
+ .hpcc.ttu.edu = FNAL.GOV
+ .infn.it = FNAL.GOV
+ .knu.ac.kr = FNAL.GOV
+ .lns.mit.edu = FNAL.GOV
+ .ph.liv.ac.uk = FNAL.GOV
+ .pha.jhu.edu = FNAL.GOV
+ .phys.ttu.edu = FNAL.GOV
+ .phys.ualberta.ca = FNAL.GOV
+ .physics.lsa.umich.edu = FNAL.GOV
+ .physics.ucla.edu = FNAL.GOV
+ .physics.ucsb.edu = FNAL.GOV
+ .physics.utoronto.ca = FNAL.GOV
+ .rl.ac.uk = FNAL.GOV
+ .rockefeller.edu = FNAL.GOV
+ .rutgers.edu = FNAL.GOV
+ .sdsc.edu = FNAL.GOV
+ .sinica.edu.tw = FNAL.GOV
+ .tsukuba.jp.hep.net = FNAL.GOV
+ .ucsd.edu = FNAL.GOV
+ .unl.edu = FNAL.GOV
+ .in2p3.fr = FNAL.GOV
+ .wisc.edu = FNAL.GOV
+ .pic.org.es = FNAL.GOV
+ .kisti.re.kr = FNAL.GOV
+
+# The whole \"top half\" is replaced during \"ups installAsRoot krb5conf\", so:
+# It would probably be a bad idea to change anything on or above this line
+
+# If you need to add any .domains or hosts, put them here
+[domain_realm]
+ mojo.lunet.edu = FNAL.GOV
+
+[appdefaults]
+ default_lifetime = 7d
+ retain_ccache = false
+ autologin = true
+ forward = true
+ forwardable = true
+ renewable = true
+ encrypt = true
+ krb5_aklog_path = /usr/bin/aklog
+
+ telnet = {
+ }
+
+ rcp = {
+ forward = true
+ encrypt = false
+ allow_fallback = true
+ }
+
+ rsh = {
+ allow_fallback = true
+ }
+
+ rlogin = {
+ allow_fallback = false
+ }
+
+
+ login = {
+ forwardable = true
+ krb5_run_aklog = false
+ krb5_get_tickets = true
+ krb4_get_tickets = false
+ krb4_convert = false
+ }
+
+ kinit = {
+ forwardable = true
+ krb5_run_aklog = false
+ }
+
+ kadmin = {
+ forwardable = false
+ }
+
+ rshd = {
+ krb5_run_aklog = false
+ }
+
+ ftpd = {
+ krb5_run_aklog = false
+ default_lifetime = 10h
+ }
+
+ pam = {
+ debug = false
+ forwardable = true
+ renew_lifetime = 7d
+ ticket_lifetime = 1560m
+ krb4_convert = true
+ afs_cells = fnal.gov
+ krb5_run_aklog = false
+ }
+"
+
+test Krb5.lns get fermi_str =
+ { "#comment" = "##" }
+ { "#comment" = "## This krb5.conf template is intended for use with Fermi" }
+ { "#comment" = "## Kerberos v1_2 and later. Earlier versions may choke on the" }
+ { "#comment" = "## \"auth_to_local = \" lines unless they are commented out." }
+ { "#comment" = "## The installation process should do all the right things in" }
+ { "#comment" = "## any case, but if you are reading this and haven't updated" }
+ { "#comment" = "## your kerberos product to v1_2 or later, you really should!" }
+ { "#comment" = "##" }
+ { "libdefaults"
+ { "ticket_lifetime" = "1560m" }
+ { "default_realm" = "FNAL.GOV" }
+ { "ccache_type" = "4" }
+ { "default_tgs_enctypes" = "des-cbc-crc" }
+ { "#eol" }
+ { "default_tkt_enctypes" = "des-cbc-crc" }
+ { "#eol" }
+ { "permitted_enctypes" = "des-cbc-crc" }
+ { "permitted_enctypes" = "des3-cbc-sha1" }
+ { "#eol" }
+ { "default_lifetime" = "7d" }
+ { "renew_lifetime" = "7d" }
+ { "autologin" = "true" }
+ { "forward" = "true" }
+ { "forwardable" = "true" }
+ { "renewable" = "true" }
+ { "encrypt" = "true" }
+ { "v4_name_convert"
+ { "host"
+ { "rcmd" = "host" }
+ }
+ }
+ { } }
+ { "realms"
+ { "realm" = "FNAL.GOV"
+ { "kdc" = "krb-fnal-1.fnal.gov:88" }
+ { "kdc" = "krb-fnal-2.fnal.gov:88" }
+ { "kdc" = "krb-fnal-3.fnal.gov:88" }
+ { "kdc" = "krb-fnal-4.fnal.gov:88" }
+ { "kdc" = "krb-fnal-5.fnal.gov:88" }
+ { "kdc" = "krb-fnal-6.fnal.gov:88" }
+ { "kdc" = "krb-fnal-7.fnal.gov:88" }
+ { "master_kdc" = "krb-fnal-admin.fnal.gov:88" }
+ { "admin_server" = "krb-fnal-admin.fnal.gov" }
+ { "default_domain" = "fnal.gov" } }
+ { "realm" = "WIN.FNAL.GOV"
+ { "kdc" = "littlebird.win.fnal.gov:88" }
+ { "kdc" = "bigbird.win.fnal.gov:88" }
+ { "default_domain" = "fnal.gov" } }
+ { "realm" = "FERMI.WIN.FNAL.GOV"
+ { "kdc" = "sully.fermi.win.fnal.gov:88" }
+ { "kdc" = "elmo.fermi.win.fnal.gov:88" }
+ { "kdc" = "grover.fermi.win.fnal.gov:88" }
+ { "kdc" = "oscar.fermi.win.fnal.gov:88" }
+ { "kdc" = "cookie.fermi.win.fnal.gov:88" }
+ { "kdc" = "herry.fermi.win.fnal.gov:88" }
+ { "default_domain" = "fnal.gov" } }
+ { "realm" = "UCHICAGO.EDU"
+ { "kdc" = "kerberos-0.uchicago.edu" }
+ { "kdc" = "kerberos-1.uchicago.edu" }
+ { "kdc" = "kerberos-2.uchicago.edu" }
+ { "admin_server" = "kerberos.uchicago.edu" }
+ { "default_domain" = "uchicago.edu" } }
+ { "realm" = "PILOT.FNAL.GOV"
+ { "kdc" = "i-krb-2.fnal.gov:88" }
+ { "master_kdc" = "i-krb-2.fnal.gov:88" }
+ { "admin_server" = "i-krb-2.fnal.gov" }
+ { "default_domain" = "fnal.gov" } }
+ { "realm" = "WINBETA.FNAL.GOV"
+ { "kdc" = "wbdc1.winbeta.fnal.gov:88" }
+ { "kdc" = "wbdc2.winbeta.fnal.gov:88" }
+ { "default_domain" = "fnal.gov" } }
+ { "realm" = "FERMIBETA.WINBETA.FNAL.GOV"
+ { "kdc" = "fbdc1.fermibeta.winbeta.fnal.gov:88" }
+ { "kdc" = "fbdc2.fermibeta.winbeta.fnal.gov:88" }
+ { "default_domain" = "fnal.gov" } }
+ { "realm" = "CERN.CH"
+ { "kdc" = "afsdb2.cern.ch" }
+ { "kdc" = "afsdb3.cern.ch" }
+ { "kdc" = "afsdb1.cern.ch" }
+ { "default_domain" = "cern.ch" }
+ { "kpasswd_server" = "afskrb5m.cern.ch" }
+ { "admin_server" = "afskrb5m.cern.ch" }
+ { "v4_name_convert"
+ { "host"
+ { "rcmd" = "host" }
+ }
+ }
+ }
+ { "realm" = "1TS.ORG"
+ { "kdc" = "kerberos.1ts.org" }
+ { "admin_server" = "kerberos.1ts.org" }
+ }
+ { "realm" = "stanford.edu"
+ { "kdc" = "krb5auth1.stanford.edu" }
+ { "kdc" = "krb5auth2.stanford.edu" }
+ { "kdc" = "krb5auth3.stanford.edu" }
+ { "master_kdc" = "krb5auth1.stanford.edu" }
+ { "admin_server" = "krb5-admin.stanford.edu" }
+ { "default_domain" = "stanford.edu" }
+ { "krb524_server" = "krb524.stanford.edu" }
+ }
+ { } }
+ { "instancemapping"
+ { "afs"
+ { "mapping" = "cron/*" { "value" = "" } }
+ { "mapping" = "cms/*" { "value" = "" } }
+ { "mapping" = "afs/*" { "value" = "" } }
+ { "mapping" = "e898/*" { "value" = "" } } }
+ { } }
+ { "capaths"
+ { }
+ { "#comment" = "FNAL.GOV and PILOT.FNAL.GOV are the MIT Kerberos Domains" }
+ { "#comment" = "FNAL.GOV is production and PILOT is for testing" }
+ { "#comment" = "The FERMI Windows domain uses the WIN.FNAL.GOV root realm" }
+ { "#comment" = "with the FERMI.WIN.FNAL.GOV sub-realm where machines and users" }
+ { "#comment" = "reside. The WINBETA and FERMIBETA domains are the equivalent" }
+ { "#comment" = "testing realms for the FERMIBETA domain. The 2-way transitive" }
+ { "#comment" = "trust structure of this complex is as follows:" }
+ {}
+ { "#comment" = "FNAL.GOV <=> PILOT.FNAL.GOV" }
+ { "#comment" = "FNAL.GOV <=> WIN.FERMI.GOV <=> FERMI.WIN.FERMI.GOV" }
+ { "#comment" = "PILOT.FNAL.GOV <=> WINBETA.FNAL.GOV <=> FERMIBETA.WINBETA.FNAL.GOV" }
+ { }
+ { "FNAL.GOV"
+ { "PILOT.FNAL.GOV" = "." }
+ { "FERMI.WIN.FNAL.GOV" = "WIN.FNAL.GOV" }
+ { "WIN.FNAL.GOV" = "." }
+ { "FERMIBETA.WINBETA.FNAL.GOV" = "WINBETA.FNAL.GOV" }
+ { "WINBETA.FNAL.GOV" = "PILOT.FNAL.GOV" } }
+ { "PILOT.FNAL.GOV"
+ { "FNAL.GOV" = "." }
+ { "FERMI.WIN.FNAL.GOV" = "WIN.FNAL.GOV" }
+ { "WIN.FNAL.GOV" = "FNAL.GOV" }
+ { "FERMIBETA.WINBETA.FNAL.GOV" = "WINBETA.FNAL.GOV" }
+ { "WINBETA.FNAL.GOV" = "." } }
+ { "WIN.FNAL.GOV"
+ { "FNAL.GOV" = "." }
+ { "PILOT.FNAL.GOV" = "FNAL.GOV" }
+ { "FERMI.WIN.FNAL.GOV" = "." }
+ { "FERMIBETA.WINBETA.FNAL.GOV" = "WINBETA.FNAL.GOV" }
+ { "WINBETA.FNAL.GOV" = "PILOT.FNAL.GOV" } }
+ { "WINBETA.FNAL.GOV"
+ { "PILOT.FNAL.GOV" = "." }
+ { "FERMIBETA.WINBETA.FNAL.GOV" = "." }
+ { "FNAL.GOV" = "PILOT.FNAL.GOV" }
+ { "FERMI.WIN.FNAL.GOV" = "WIN.FNAL.GOV" }
+ { "WIN.FNAL.GOV" = "PILOT.FNAL.GOV" } }
+ { } }
+ { "logging"
+ { "kdc"
+ { "syslog"
+ { "severity" = "info" }
+ { "facility" = "local1" } } }
+ { "admin_server"
+ { "syslog"
+ { "severity" = "info" }
+ { "facility" = "local2" } } }
+ { "default"
+ { "syslog"
+ { "severity" = "err" }
+ { "facility" = "auth" } } }
+ { } }
+ { "domain_realm"
+ { "#comment" = "Fermilab's (non-windows-centric) domains" }
+ { ".fnal.gov" = "FNAL.GOV" }
+ { ".cdms-soudan.org" = "FNAL.GOV" }
+ { ".deemz.net" = "FNAL.GOV" }
+ { ".dhcp.fnal.gov" = "FNAL.GOV" }
+ { ".minos-soudan.org" = "FNAL.GOV" }
+ { "i-krb-2.fnal.gov" = "PILOT.FNAL.GOV" }
+ { ".win.fnal.gov" = "WIN.FNAL.GOV" }
+ { ".fermi.win.fnal.gov" = "FERMI.WIN.FNAL.GOV" }
+ { ".winbeta.fnal.gov" = "WINBETA.FNAL.GOV" }
+ { ".fermibeta.winbeta.fnal.gov" = "FERMIBETA.WINBETA.FNAL.GOV" }
+ { "#comment" = "Fermilab's KCA servers so FERMI.WIN principals work in FNAL.GOV realm" }
+ { "#comment" = "winserver.fnal.gov = FERMI.WIN.FNAL.GOV" }
+ { "#comment" = "winserver2.fnal.gov = FERMI.WIN.FNAL.GOVA" }
+ { "#comment" = "Accelerator nodes to FERMI.WIN for Linux/OS X users" }
+ { "adgroups.fnal.gov" = "FERMI.WIN.FNAL.GOV" }
+ { "adusers.fnal.gov" = "FERMI.WIN.FNAL.GOV" }
+ { "webad.fnal.gov" = "FERMI.WIN.FNAL.GOV" }
+ { "#comment" = "Friends and family (by request)" }
+ { ".cs.ttu.edu" = "FNAL.GOV" }
+ { ".geol.uniovi.es" = "FNAL.GOV" }
+ { ".harvard.edu" = "FNAL.GOV" }
+ { ".hpcc.ttu.edu" = "FNAL.GOV" }
+ { ".infn.it" = "FNAL.GOV" }
+ { ".knu.ac.kr" = "FNAL.GOV" }
+ { ".lns.mit.edu" = "FNAL.GOV" }
+ { ".ph.liv.ac.uk" = "FNAL.GOV" }
+ { ".pha.jhu.edu" = "FNAL.GOV" }
+ { ".phys.ttu.edu" = "FNAL.GOV" }
+ { ".phys.ualberta.ca" = "FNAL.GOV" }
+ { ".physics.lsa.umich.edu" = "FNAL.GOV" }
+ { ".physics.ucla.edu" = "FNAL.GOV" }
+ { ".physics.ucsb.edu" = "FNAL.GOV" }
+ { ".physics.utoronto.ca" = "FNAL.GOV" }
+ { ".rl.ac.uk" = "FNAL.GOV" }
+ { ".rockefeller.edu" = "FNAL.GOV" }
+ { ".rutgers.edu" = "FNAL.GOV" }
+ { ".sdsc.edu" = "FNAL.GOV" }
+ { ".sinica.edu.tw" = "FNAL.GOV" }
+ { ".tsukuba.jp.hep.net" = "FNAL.GOV" }
+ { ".ucsd.edu" = "FNAL.GOV" }
+ { ".unl.edu" = "FNAL.GOV" }
+ { ".in2p3.fr" = "FNAL.GOV" }
+ { ".wisc.edu" = "FNAL.GOV" }
+ { ".pic.org.es" = "FNAL.GOV" }
+ { ".kisti.re.kr" = "FNAL.GOV" }
+ { }
+ { "#comment" = "The whole \"top half\" is replaced during \"ups installAsRoot krb5conf\", so:" }
+ { "#comment" = "It would probably be a bad idea to change anything on or above this line" }
+ { }
+ { "#comment" = "If you need to add any .domains or hosts, put them here" } }
+ { "domain_realm"
+ { "mojo.lunet.edu" = "FNAL.GOV" }
+ { } }
+ { "appdefaults"
+ { "default_lifetime" = "7d" }
+ { "retain_ccache" = "false" }
+ { "autologin" = "true" }
+ { "forward" = "true" }
+ { "forwardable" = "true" }
+ { "renewable" = "true" }
+ { "encrypt" = "true" }
+ { "krb5_aklog_path" = "/usr/bin/aklog" }
+ { }
+ { "application" = "telnet" }
+ { }
+ { "application" = "rcp"
+ { "forward" = "true" }
+ { "encrypt" = "false" }
+ { "allow_fallback" = "true" } }
+ { }
+ { "application" = "rsh"
+ { "allow_fallback" = "true" } }
+ { }
+ { "application" = "rlogin"
+ { "allow_fallback" = "false" } }
+ { }
+ { }
+ { "application" = "login"
+ { "forwardable" = "true" }
+ { "krb5_run_aklog" = "false" }
+ { "krb5_get_tickets" = "true" }
+ { "krb4_get_tickets" = "false" }
+ { "krb4_convert" = "false" } }
+ { }
+ { "application" = "kinit"
+ { "forwardable" = "true" }
+ { "krb5_run_aklog" = "false" } }
+ { }
+ { "application" = "kadmin"
+ { "forwardable" = "false" } }
+ { }
+ { "application" = "rshd"
+ { "krb5_run_aklog" = "false" } }
+ { }
+ { "application" = "ftpd"
+ { "krb5_run_aklog" = "false" }
+ { "default_lifetime" = "10h" } }
+ { }
+ { "application" = "pam"
+ { "debug" = "false" }
+ { "forwardable" = "true" }
+ { "renew_lifetime" = "7d" }
+ { "ticket_lifetime" = "1560m" }
+ { "krb4_convert" = "true" }
+ { "afs_cells" = "fnal.gov" }
+ { "krb5_run_aklog" = "false" } } }
+
+
+(* Example from the krb5 distribution *)
+let dist_str = "[libdefaults]
+ default_realm = ATHENA.MIT.EDU
+ krb4_config = /usr/kerberos/lib/krb.conf
+ krb4_realms = /usr/kerberos/lib/krb.realms
+
+[realms]
+ ATHENA.MIT.EDU = {
+ admin_server = KERBEROS.MIT.EDU
+ default_domain = MIT.EDU
+ v4_instance_convert = {
+ mit = mit.edu
+ lithium = lithium.lcs.mit.edu
+ }
+ }
+ ANDREW.CMU.EDU = {
+ admin_server = vice28.fs.andrew.cmu.edu
+ }
+# use \"kdc =\" if realm admins haven't put SRV records into DNS
+ GNU.ORG = {
+ kdc = kerberos.gnu.org
+ kdc = kerberos-2.gnu.org
+ admin_server = kerberos.gnu.org
+ }
+
+[domain_realm]
+ .mit.edu = ATHENA.MIT.EDU
+ mit.edu = ATHENA.MIT.EDU
+ .media.mit.edu = MEDIA-LAB.MIT.EDU
+ media.mit.edu = MEDIA-LAB.MIT.EDU
+ .ucsc.edu = CATS.UCSC.EDU
+
+[logging]
+# kdc = CONSOLE
+"
+
+test Krb5.lns get dist_str =
+ { "libdefaults"
+ { "default_realm" = "ATHENA.MIT.EDU" }
+ { "krb4_config" = "/usr/kerberos/lib/krb.conf" }
+ { "krb4_realms" = "/usr/kerberos/lib/krb.realms" }
+ { } }
+ { "realms"
+ { "realm" = "ATHENA.MIT.EDU"
+ { "admin_server" = "KERBEROS.MIT.EDU" }
+ { "default_domain" = "MIT.EDU" }
+ { "v4_instance_convert"
+ { "mit" = "mit.edu" }
+ { "lithium" = "lithium.lcs.mit.edu" } } }
+ { "realm" = "ANDREW.CMU.EDU"
+ { "admin_server" = "vice28.fs.andrew.cmu.edu" } }
+ { "#comment" = "use \"kdc =\" if realm admins haven't put SRV records into DNS" }
+ { "realm" = "GNU.ORG"
+ { "kdc" = "kerberos.gnu.org" }
+ { "kdc" = "kerberos-2.gnu.org" }
+ { "admin_server" = "kerberos.gnu.org" } }
+ { } }
+ { "domain_realm"
+ { ".mit.edu" = "ATHENA.MIT.EDU" }
+ { "mit.edu" = "ATHENA.MIT.EDU" }
+ { ".media.mit.edu" = "MEDIA-LAB.MIT.EDU" }
+ { "media.mit.edu" = "MEDIA-LAB.MIT.EDU" }
+ { ".ucsc.edu" = "CATS.UCSC.EDU" }
+ { } }
+ { "logging"
+ { "#comment" = "kdc = CONSOLE" } }
+
+(* Test for [libdefaults] *)
+test Krb5.libdefaults get "[libdefaults]
+ default_realm = ATHENA.MIT.EDU
+ krb4_config = /usr/kerberos/lib/krb.conf
+ krb4_realms = /usr/kerberos/lib/krb.realms\n\n" =
+ { "libdefaults"
+ { "default_realm" = "ATHENA.MIT.EDU" }
+ { "krb4_config" = "/usr/kerberos/lib/krb.conf" }
+ { "krb4_realms" = "/usr/kerberos/lib/krb.realms" }
+ { } }
+
+(* Test for [appdefaults] *)
+test Krb5.appdefaults get "[appdefaults]\n\tdefault_lifetime = 7d\n" =
+ { "appdefaults" { "default_lifetime" = "7d" } }
+
+test Krb5.appdefaults get
+ "[appdefaults]\nrcp = { \n forward = true\n encrypt = false\n }\n" =
+ { "appdefaults"
+ { "application" = "rcp"
+ { "forward" = "true" }
+ { "encrypt" = "false" } } }
+
+test Krb5.appdefaults get "[appdefaults]\ntelnet = {\n\t}\n" =
+ { "appdefaults" { "application" = "telnet" } }
+
+test Krb5.appdefaults get "[appdefaults]
+ rcp = {
+ forward = true
+ ATHENA.MIT.EDU = {
+ encrypt = false
+ }
+ MEDIA-LAB.MIT.EDU = {
+ encrypt = true
+ }
+ forwardable = true
+ }\n" =
+ { "appdefaults"
+ { "application" = "rcp"
+ { "forward" = "true" }
+ { "realm" = "ATHENA.MIT.EDU"
+ { "encrypt" = "false" } }
+ { "realm" = "MEDIA-LAB.MIT.EDU"
+ { "encrypt" = "true" } }
+ { "forwardable" = "true" } } }
+
+let appdef = "[appdefaults]
+ default_lifetime = 7d
+ retain_ccache = false
+ autologin = true
+ forward = true
+ forwardable = true
+ renewable = true
+ encrypt = true
+ krb5_aklog_path = /usr/bin/aklog
+
+ telnet = {
+ }
+
+ rcp = {
+ forward = true
+ encrypt = false
+ allow_fallback = true
+ }
+
+ rsh = {
+ allow_fallback = true
+ }
+
+ rlogin = {
+ allow_fallback = false
+ }
+
+
+ login = {
+ forwardable = true
+ krb5_run_aklog = false
+ krb5_get_tickets = true
+ krb4_get_tickets = false
+ krb4_convert = false
+ }
+
+ kinit = {
+ forwardable = true
+ krb5_run_aklog = false
+ }
+
+ kadmin = {
+ forwardable = false
+ }
+
+ rshd = {
+ krb5_run_aklog = false
+ }
+
+ ftpd = {
+ krb5_run_aklog = false
+ default_lifetime = 10h
+ }
+
+ pam = {
+ debug = false
+ forwardable = true
+ renew_lifetime = 7d
+ ticket_lifetime = 1560m
+ krb4_convert = true
+ afs_cells = fnal.gov
+ krb5_run_aklog = false
+ }\n"
+
+let appdef_tree =
+ { "appdefaults"
+ { "default_lifetime" = "7d" }
+ { "retain_ccache" = "false" }
+ { "autologin" = "true" }
+ { "forward" = "true" }
+ { "forwardable" = "true" }
+ { "renewable" = "true" }
+ { "encrypt" = "true" }
+ { "krb5_aklog_path" = "/usr/bin/aklog" }
+ { }
+ { "application" = "telnet" }
+ { }
+ { "application" = "rcp"
+ { "forward" = "true" }
+ { "encrypt" = "false" }
+ { "allow_fallback" = "true" }
+ }
+ { }
+ { "application" = "rsh"
+ { "allow_fallback" = "true" }
+ }
+ { }
+ { "application" = "rlogin"
+ { "allow_fallback" = "false" }
+ }
+ { }
+ { }
+ { "application" = "login"
+ { "forwardable" = "true" }
+ { "krb5_run_aklog" = "false" }
+ { "krb5_get_tickets" = "true" }
+ { "krb4_get_tickets" = "false" }
+ { "krb4_convert" = "false" }
+ }
+ { }
+ { "application" = "kinit"
+ { "forwardable" = "true" }
+ { "krb5_run_aklog" = "false" }
+ }
+ { }
+ { "application" = "kadmin"
+ { "forwardable" = "false" }
+ }
+ { }
+ { "application" = "rshd"
+ { "krb5_run_aklog" = "false" }
+ }
+ { }
+ { "application" = "ftpd"
+ { "krb5_run_aklog" = "false" }
+ { "default_lifetime" = "10h" }
+ }
+ { }
+ { "application" = "pam"
+ { "debug" = "false" }
+ { "forwardable" = "true" }
+ { "renew_lifetime" = "7d" }
+ { "ticket_lifetime" = "1560m" }
+ { "krb4_convert" = "true" }
+ { "afs_cells" = "fnal.gov" }
+ { "krb5_run_aklog" = "false" }
+ }
+ }
+
+
+test Krb5.appdefaults get appdef = appdef_tree
+test Krb5.lns get appdef = appdef_tree
+
+
+(* Test realms section *)
+let realms_str = "[realms]
+ ATHENA.MIT.EDU = {
+ admin_server = KERBEROS.MIT.EDU
+ default_domain = MIT.EDU
+ database_module = ldapconf
+
+ # test
+ v4_instance_convert = {
+ mit = mit.edu
+ lithium = lithium.lcs.mit.edu
+ }
+ v4_realm = LCS.MIT.EDU
+ }\n"
+
+test Krb5.lns get realms_str =
+ { "realms"
+ { "realm" = "ATHENA.MIT.EDU"
+ { "admin_server" = "KERBEROS.MIT.EDU" }
+ { "default_domain" = "MIT.EDU" }
+ { "database_module" = "ldapconf" }
+ { }
+ { "#comment" = "test" }
+ { "v4_instance_convert"
+ { "mit" = "mit.edu" }
+ { "lithium" = "lithium.lcs.mit.edu" } }
+ { "v4_realm" = "LCS.MIT.EDU" } } }
+
+(* Test domain_realm section *)
+let domain_realm_str = "[domain_realm]
+ .mit.edu = ATHENA.MIT.EDU
+ mit.edu = ATHENA.MIT.EDU
+ dodo.mit.edu = SMS_TEST.MIT.EDU
+ .ucsc.edu = CATS.UCSC.EDU\n"
+
+test Krb5.lns get domain_realm_str =
+ { "domain_realm"
+ { ".mit.edu" = "ATHENA.MIT.EDU" }
+ { "mit.edu" = "ATHENA.MIT.EDU" }
+ { "dodo.mit.edu" = "SMS_TEST.MIT.EDU" }
+ { ".ucsc.edu" = "CATS.UCSC.EDU" } }
+
+(* Test logging section *)
+let logging_str = "[logging]
+ kdc = CONSOLE
+ kdc = SYSLOG:INFO:DAEMON
+ admin_server = FILE:/var/adm/kadmin.log
+ admin_server = DEVICE=/dev/tty04\n"
+
+test Krb5.lns get logging_str =
+ { "logging"
+ { "kdc"
+ { "console" } }
+ { "kdc"
+ { "syslog"
+ { "severity" = "INFO" }
+ { "facility" = "DAEMON" } } }
+ { "admin_server"
+ { "file" = "/var/adm/kadmin.log" } }
+ { "admin_server"
+ { "device" = "/dev/tty04" } } }
+
+(* Test capaths section *)
+let capaths_str = "[capaths]
+ ANL.GOV = {
+ TEST.ANL.GOV = .
+ PNL.GOV = ES.NET
+ NERSC.GOV = ES.NET
+ ES.NET = .
+ }
+ TEST.ANL.GOV = {
+ ANL.GOV = .
+ }
+ PNL.GOV = {
+ ANL.GOV = ES.NET
+ }
+ NERSC.GOV = {
+ ANL.GOV = ES.NET
+ }
+ ES.NET = {
+ ANL.GOV = .
+ }\n"
+
+test Krb5.lns get capaths_str =
+ { "capaths"
+ { "ANL.GOV"
+ { "TEST.ANL.GOV" = "." }
+ { "PNL.GOV" = "ES.NET" }
+ { "NERSC.GOV" = "ES.NET" }
+ { "ES.NET" = "." } }
+ { "TEST.ANL.GOV"
+ { "ANL.GOV" = "." } }
+ { "PNL.GOV"
+ { "ANL.GOV" = "ES.NET" } }
+ { "NERSC.GOV"
+ { "ANL.GOV" = "ES.NET" } }
+ { "ES.NET"
+ { "ANL.GOV" = "." } } }
+
+(* Test instancemapping *)
+
+test Krb5.instance_mapping get "[instancemapping]
+ afs = {
+ cron/* = \"\"
+ cms/* = \"\"
+ afs/* = \"\"
+ e898/* = \"\"
+ }\n" =
+ { "instancemapping"
+ { "afs"
+ { "mapping" = "cron/*"
+ { "value" = "" } }
+ { "mapping" = "cms/*"
+ { "value" = "" } }
+ { "mapping" = "afs/*"
+ { "value" = "" } }
+ { "mapping" = "e898/*"
+ { "value" = "" } } } }
+
+test Krb5.kdc get "[kdc]
+ profile = /var/kerberos/krb5kdc/kdc.conf\n" =
+ { "kdc"
+ { "profile" = "/var/kerberos/krb5kdc/kdc.conf" } }
+
+(* v4_name_convert in libdefaults *)
+test Krb5.libdefaults get "[libdefaults]
+ default_realm = MY.REALM
+ clockskew = 300
+ v4_instance_resolve = false
+ v4_name_convert = {
+ host = {
+ rcmd = host
+ ftp = ftp
+ }
+ plain = {
+ something = something-else
+ }
+ }\n" =
+
+ { "libdefaults"
+ { "default_realm" = "MY.REALM" }
+ { "clockskew" = "300" }
+ { "v4_instance_resolve" = "false" }
+ { "v4_name_convert"
+ { "host" { "rcmd" = "host" } { "ftp" = "ftp" } }
+ { "plain" { "something" = "something-else" } } } }
+
+(* Test pam section *)
+let pam_str = "[pam]
+ debug = false
+ ticket_lifetime = 36000
+ renew_lifetime = 36000
+ forwardable = true
+ krb4_convert = false
+"
+
+test Krb5.lns get pam_str =
+ { "pam"
+ { "debug" = "false" }
+ { "ticket_lifetime" = "36000" }
+ { "renew_lifetime" = "36000" }
+ { "forwardable" = "true" }
+ { "krb4_convert" = "false" } }
+
+(* Ticket #274 - multiple *enctypes values *)
+let multiple_enctypes = "[libdefaults]
+permitted_enctypes = arcfour-hmac-md5 arcfour-hmac des3-cbc-sha1 des-cbc-md5 des-cbc-crc aes128-cts
+default_tgs_enctypes = des3-cbc-sha1 des-cbc-md5
+default_tkt_enctypes = des-cbc-md5
+"
+
+test Krb5.lns get multiple_enctypes =
+ { "libdefaults"
+ { "permitted_enctypes" = "arcfour-hmac-md5" }
+ { "permitted_enctypes" = "arcfour-hmac" }
+ { "permitted_enctypes" = "des3-cbc-sha1" }
+ { "permitted_enctypes" = "des-cbc-md5" }
+ { "permitted_enctypes" = "des-cbc-crc" }
+ { "permitted_enctypes" = "aes128-cts" }
+ { "#eol" }
+ { "default_tgs_enctypes" = "des3-cbc-sha1" }
+ { "default_tgs_enctypes" = "des-cbc-md5" }
+ { "#eol" }
+ { "default_tkt_enctypes" = "des-cbc-md5" }
+ { "#eol" }
+ }
+
+(* Ticket #274 - v4_name_convert subsection *)
+let v4_name_convert = "[realms]
+ EXAMPLE.COM = {
+ kdc = kerberos.example.com:88
+ admin_server = kerberos.example.com:749
+ default_domain = example.com
+ ticket_lifetime = 12h
+ v4_name_convert = {
+ host = {
+ rcmd = host
+ }
+ }
+ }
+"
+
+test Krb5.lns get v4_name_convert =
+ { "realms"
+ { "realm" = "EXAMPLE.COM"
+ { "kdc" = "kerberos.example.com:88" }
+ { "admin_server" = "kerberos.example.com:749" }
+ { "default_domain" = "example.com" }
+ { "ticket_lifetime" = "12h" }
+ { "v4_name_convert"
+ { "host"
+ { "rcmd" = "host" }
+ }
+ }
+ }
+ }
+
+(* Ticket #288: semicolons for comments *)
+test Krb5.lns get "; AD : This Kerberos configuration is for CERN's Active Directory realm.\n" =
+ { "#comment" = "AD : This Kerberos configuration is for CERN's Active Directory realm." }
+
+(* RHBZ#1066419: braces in values *)
+test Krb5.lns get "[libdefaults]\n
+default_ccache_name = KEYRING:persistent:%{uid}\n" =
+ { "libdefaults"
+ { }
+ { "default_ccache_name" = "KEYRING:persistent:%{uid}" } }
+
+(* Include(dir) tests *)
+let include_test = "include /etc/krb5.other_conf.d/other.conf
+includedir /etc/krb5.conf.d/
+"
+
+test Krb5.lns get include_test =
+ { "include" = "/etc/krb5.other_conf.d/other.conf" }
+ { "includedir" = "/etc/krb5.conf.d/" }
+
+let include2_test = "[logging]
+ default = FILE:/var/log/krb5libs.log
+
+include /etc/krb5.other_conf.d/other.conf
+
+includedir /etc/krb5.conf.d/
+"
+
+test Krb5.lns get include2_test =
+ { "logging"
+ { "default"
+ { "file" = "/var/log/krb5libs.log" } }
+ { }
+ }
+ { "include" = "/etc/krb5.other_conf.d/other.conf" }
+ { }
+ { "includedir" = "/etc/krb5.conf.d/" }
+
+(* [dbmodules] test *)
+let dbmodules_test = "[dbmodules]
+ ATHENA.MIT.EDU = {
+ disable_last_success = true
+ }
+ db_module_dir = /some/path
+"
+
+test Krb5.lns get dbmodules_test =
+ { "dbmodules"
+ { "realm" = "ATHENA.MIT.EDU"
+ { "disable_last_success" = "true" }
+ }
+ { "db_module_dir" = "/some/path" }
+ }
+
+(* [plugins] test *)
+let plugins_test = "[plugins]
+ clpreauth = {
+ module = mypreauth:/path/to/mypreauth.so
+ }
+ ccselect = {
+ disable = k5identity
+ }
+ pwqual = {
+ module = mymodule:/path/to/mymodule.so
+ module = mymodule2:/path/to/mymodule2.so
+ enable_only = mymodule
+ }
+ kadm5_hook = {
+ }
+"
+
+test Krb5.lns get plugins_test =
+ { "plugins"
+ { "clpreauth"
+ { "module" = "mypreauth:/path/to/mypreauth.so" }
+ }
+ { "ccselect"
+ { "disable" = "k5identity" }
+ }
+ { "pwqual"
+ { "module" = "mymodule:/path/to/mymodule.so" }
+ { "module" = "mymodule2:/path/to/mymodule2.so" }
+ { "enable_only" = "mymodule" }
+ }
+ { "kadm5_hook"
+ }
+ }
--- /dev/null
+module Test_ldap =
+
+let conf = "host 127.0.0.1
+
+# The distinguished name of the search base.
+base dc=example,dc=com
+#tls_key
+ssl no
+pam_password md5
+"
+
+test Spacevars.simple_lns get conf =
+ { "host" = "127.0.0.1" }
+ {}
+ { "#comment" = "The distinguished name of the search base." }
+ { "base" = "dc=example,dc=com" }
+ { "#comment" = "tls_key" }
+ { "ssl" = "no" }
+ { "pam_password" = "md5" }
+
--- /dev/null
+(* Test for LDIF lens *)
+module Test_ldif =
+
+ (* Test LDIF content only *)
+ let content = "version: 1
+dn: cn=foo bar,dc=example,dc=com
+# test
+ou: example value
+cn:: Zm9vIGJhcg==
+# test
+telephoneNumber;foo;bar: +1 123 456 789
+binary;foo:< file:///file/something
+# test
+
+dn: cn=simple,dc=example,dc=com
+cn: simple
+test: split line starts with
+ :colon
+
+dn:: Y249c2ltcGxlLGRjPWV4YW1wbGUsZGM9Y29t
+# test
+cn: simple
+
+dn: cn=simple,dc=exam
+ ple,dc=com
+cn: simple
+telephoneNumber:: KzEgMTIzIDQ1
+ NiA3ODk=
+
+# test
+"
+
+ test Ldif.lns get content =
+ { "@content"
+ { "version" = "1" }
+ { "1" = "cn=foo bar,dc=example,dc=com"
+ { "#comment" = "test" }
+ { "ou" = "example value" }
+ { "cn"
+ { "@base64" = "Zm9vIGJhcg==" } }
+ { "#comment" = "test" }
+ { "telephoneNumber" = "+1 123 456 789"
+ { "@option" = "foo" }
+ { "@option" = "bar" } }
+ { "binary"
+ { "@option" = "foo" }
+ { "@url" = "file:///file/something" } } }
+ { "#comment" = "test" }
+ {}
+ { "2" = "cn=simple,dc=example,dc=com"
+ { "cn" = "simple" }
+ { "test" = "split line starts with
+ :colon" } }
+ {}
+ { "3"
+ { "@base64" = "Y249c2ltcGxlLGRjPWV4YW1wbGUsZGM9Y29t" }
+ { "#comment" = "test" }
+ { "cn" = "simple" } }
+ {}
+ { "4" = "cn=simple,dc=exam
+ ple,dc=com"
+ { "cn" = "simple" }
+ { "telephoneNumber"
+ { "@base64" = "KzEgMTIzIDQ1
+ NiA3ODk=" } } }
+ {}
+ { "#comment" = "test" }
+ }
+
+ (* Test LDIF changes *)
+ let changes = "version: 1
+dn: cn=foo,dc=example,dc=com
+changetype: delete
+
+dn: cn=simple,dc=example,dc=com
+control: 1.2.3.4
+control: 1.2.3.4 true
+# test
+control: 1.2.3.4 true: foo bar
+control: 1.2.3.4 true:: Zm9vIGJhcg==
+changetype: add
+cn: simple
+
+dn: cn=foo bar,dc=example,dc=com
+changeType: modify
+add: telephoneNumber
+telephoneNumber: +1 123 456 789
+-
+replace: homePostalAddress;lang-fr
+homePostalAddress;lang-fr: 34 rue de Seine
+# test
+-
+delete: telephoneNumber
+-
+replace: telephoneNumber
+telephoneNumber:: KzEgMTIzIDQ1NiA3ODk=
+-
+
+dn: cn=foo,dc=example,dc=com
+changetype: moddn
+newrdn: cn=bar
+deleteoldrdn: 0
+newsuperior: dc=example,dc=net
+"
+
+ test Ldif.lns get changes =
+ { "@changes"
+ { "version" = "1" }
+ { "1" = "cn=foo,dc=example,dc=com"
+ { "changetype" = "delete" } }
+ {}
+ { "2" = "cn=simple,dc=example,dc=com"
+ { "control" = "1.2.3.4" }
+ { "control" = "1.2.3.4"
+ { "criticality" = "true" } }
+ { "#comment" = "test" }
+ { "control" = "1.2.3.4"
+ { "criticality" = "true" }
+ { "value" = "foo bar" } }
+ { "control" = "1.2.3.4"
+ { "criticality" = "true" }
+ { "value"
+ { "@base64" = "Zm9vIGJhcg==" } } }
+ { "changetype" = "add" }
+ { "cn" = "simple" } }
+ {}
+ { "3" = "cn=foo bar,dc=example,dc=com"
+ { "changeType" = "modify" }
+ { "add" = "telephoneNumber"
+ { "telephoneNumber" = "+1 123 456 789" } }
+ { "replace" = "homePostalAddress"
+ { "@option" = "lang-fr" }
+ { "homePostalAddress" = "34 rue de Seine"
+ { "@option" = "lang-fr" } }
+ { "#comment" = "test" } }
+ { "delete" = "telephoneNumber" }
+ { "replace" = "telephoneNumber"
+ { "telephoneNumber"
+ { "@base64" = "KzEgMTIzIDQ1NiA3ODk=" } } } }
+ {}
+ { "4" = "cn=foo,dc=example,dc=com"
+ { "changetype" = "moddn" }
+ { "newrdn" = "cn=bar" }
+ { "deleteoldrdn" = "0" }
+ { "newsuperior" = "dc=example,dc=net" } }
+ }
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+(*
+Module: Test_Ldso
+ Provides unit tests and examples for the <Ldso> lens.
+*)
+
+module Test_Ldso =
+
+(* Variable: conf *)
+let conf = "include /etc/ld.so.conf.d/*.conf
+
+# libc default configuration
+/usr/local/lib
+
+hwcap 1 nosegneg
+"
+
+(* Test: Ldso.lns *)
+test Ldso.lns get conf =
+ { "include" = "/etc/ld.so.conf.d/*.conf" }
+ { }
+ { "#comment" = "libc default configuration" }
+ { "path" = "/usr/local/lib" }
+ { }
+ { "hwcap"
+ { "bit" = "1" }
+ { "name" = "nosegneg" } }
--- /dev/null
+(*
+Module: Test_Lightdm
 Provides unit tests and examples for the <Lightdm> lens.
+
+Author: David Salmen <dsalmen@dsalmen.com>
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+*)
+
+module Test_lightdm =
+
+ let conf_lightdm = "
+[SeatDefaults]
+greeter-session=unity-greeter
+user-session=ubuntu
+"
+
+ test Lightdm.lns get conf_lightdm =
+ {}
+ { "SeatDefaults"
+ { "greeter-session" = "unity-greeter" }
+ { "user-session" = "ubuntu" }
+ }
+
+ test Lightdm.lns put conf_lightdm after
+ set "SeatDefaults/allow-guest" "false"
+ = "
+[SeatDefaults]
+greeter-session=unity-greeter
+user-session=ubuntu
+allow-guest=false
+"
+
+ test Lightdm.lns put conf_lightdm after
+ set "SeatDefaults/allow-guest" "true"
+ = "
+[SeatDefaults]
+greeter-session=unity-greeter
+user-session=ubuntu
+allow-guest=true
+"
+
+ let conf_unity_greeter = "
+#
+# background = Background file to use, either an image path or a color (e.g. #772953)
+# logo = Logo file to use
+# theme-name = GTK+ theme to use
+# font-name = Font to use
+# xft-antialias = Whether to antialias Xft fonts (true or false)
+# xft-dpi = Resolution for Xft in dots per inch (e.g. 96)
+# xft-hintstyle = What degree of hinting to use (hintnone, hintslight, hintmedium, or hintfull)
+# xft-rgba = Type of subpixel antialiasing (none, rgb, bgr, vrgb or vbgr)
+#
+[greeter]
+background=/usr/share/backgrounds/warty-final-ubuntu.png
+logo=/usr/share/unity-greeter/logo.png
+theme-name=Ambiance
+icon-theme-name=ubuntu-mono-dark
+font-name=Ubuntu 11
+xft-antialias=true
+xft-dpi=96
+xft-hintstyle=hintslight
+xft-rgba=rgb
+"
+
+ test Lightdm.lns get conf_unity_greeter =
+ {}
+ {}
+ { "#comment" = "background = Background file to use, either an image path or a color (e.g. #772953)" }
+ { "#comment" = "logo = Logo file to use" }
+ { "#comment" = "theme-name = GTK+ theme to use" }
+ { "#comment" = "font-name = Font to use" }
+ { "#comment" = "xft-antialias = Whether to antialias Xft fonts (true or false)" }
+ { "#comment" = "xft-dpi = Resolution for Xft in dots per inch (e.g. 96)" }
+ { "#comment" = "xft-hintstyle = What degree of hinting to use (hintnone, hintslight, hintmedium, or hintfull)" }
+ { "#comment" = "xft-rgba = Type of subpixel antialiasing (none, rgb, bgr, vrgb or vbgr)" }
+ {}
+ { "greeter"
+ { "background" = "/usr/share/backgrounds/warty-final-ubuntu.png" }
+ { "logo" = "/usr/share/unity-greeter/logo.png" }
+ { "theme-name" = "Ambiance" }
+ { "icon-theme-name" = "ubuntu-mono-dark" }
+ { "font-name" = "Ubuntu 11" }
+ { "xft-antialias" = "true" }
+ { "xft-dpi" = "96" }
+ { "xft-hintstyle" = "hintslight" }
+ { "xft-rgba" = "rgb" }
+ }
+
+ let conf_users = "
+#
+# User accounts configuration
+#
+# NOTE: If you have AccountsService installed on your system, then LightDM will
+# use this instead and these settings will be ignored
+#
+# minimum-uid = Minimum UID required to be shown in greeter
+# hidden-users = Users that are not shown to the user
+# hidden-shells = Shells that indicate a user cannot login
+#
+[UserAccounts]
+minimum-uid=500
+hidden-users=nobody nobody4 noaccess
+hidden-shells=/bin/false /usr/sbin/nologin
+"
+
+ test Lightdm.lns get conf_users =
+ {}
+ {}
+ { "#comment" = "User accounts configuration" }
+ {}
+ { "#comment" = "NOTE: If you have AccountsService installed on your system, then LightDM will" }
+ { "#comment" = "use this instead and these settings will be ignored" }
+ {}
+ { "#comment" = "minimum-uid = Minimum UID required to be shown in greeter" }
+ { "#comment" = "hidden-users = Users that are not shown to the user" }
+ { "#comment" = "hidden-shells = Shells that indicate a user cannot login" }
+ {}
+ { "UserAccounts"
+ { "minimum-uid" = "500" }
+ { "hidden-users" = "nobody nobody4 noaccess" }
+ { "hidden-shells" = "/bin/false /usr/sbin/nologin" }
+ }
+
--- /dev/null
+module Test_limits =
+
+ let conf = "@audio - rtprio 99
+ftp hard nproc /ftp
+1200:2000 - as 1024
+* soft core 0
+"
+
+ test Limits.lns get conf =
+ { "domain" = "@audio"
+ { "type" = "-" }
+ { "item" = "rtprio" }
+ { "value" = "99" } }
+ { "domain" = "ftp"
+ { "type" = "hard" }
+ { "item" = "nproc" }
+ { "value" = "/ftp" } }
+ { "domain" = "1200:2000"
+ { "type" = "-" }
+ { "item" = "as" }
+ { "value" = "1024" } }
+ { "domain" = "*"
+ { "type" = "soft" }
+ { "item" = "core" }
+ { "value" = "0" } }
+
+ test Limits.lns put conf after
+ insa "domain" "domain[last()]" ;
+ set "domain[last()]" "*" ;
+ set "domain[last()]/type" "-" ;
+ set "domain[last()]/item" "nofile" ;
+ set "domain[last()]/value" "4096"
+ = "@audio - rtprio 99
+ftp hard nproc /ftp
+1200:2000 - as 1024
+* soft core 0
+* - nofile 4096\n"
+
+ test Limits.lns get "* soft core 0 # clever comment\n" =
+ { "domain" = "*"
+ { "type" = "soft" }
+ { "item" = "core" }
+ { "value" = "0" }
+ { "#comment" = "clever comment" } }
--- /dev/null
+(*
+Module: Test_login_defs
+ Test cases for the login_defs lens
+
+Author: Erinn Looney-Triggs
+
+About: License
+ This file is licensed under the LGPLv2+, like the rest of Augeas.
+*)
+module Test_login_defs =
+
+let record = "MAIL_DIR /var/spool/mail
+ENCRYPT_METHOD SHA512
+UMASK 077
+"
+
+test Login_defs.lns get record =
+ { "MAIL_DIR" = "/var/spool/mail" }
+ { "ENCRYPT_METHOD" = "SHA512" }
+ { "UMASK" = "077" }
+
+let comment ="# *REQUIRED*
+"
+
+test Login_defs.lns get comment =
+ {"#comment" = "*REQUIRED*"}
--- /dev/null
+module Test_logrotate =
+
+test Logrotate.body get "\n{\n monthly\n}" =
+ { "schedule" = "monthly" }
+
+test Logrotate.rule get "/var/log/foo\n{\n monthly\n}\n" =
+ { "rule"
+ { "file" = "/var/log/foo" }
+ { "schedule" = "monthly" } }
+
+test Logrotate.rule get "/var/log/foo /var/log/bar\n{\n monthly\n}\n" =
+ { "rule"
+ { "file" = "/var/log/foo" }
+ { "file" = "/var/log/bar" }
+ { "schedule" = "monthly" } }
+
+test Logrotate.rule get "\"/var/log/foo\"\n{\n monthly\n}\n" =
+ { "rule"
+ { "file" = "/var/log/foo" }
+ { "schedule" = "monthly" } }
+
+let conf = "# see man logrotate for details
+# rotate log files weekly
+weekly
+
+# keep 4 weeks worth of backlogs
+rotate 4
+
+# create new (empty) log files after rotating old ones
+create
+
+# uncomment this if you want your log files compressed
+#compress
+
+tabooext + .old .orig .ignore
+
+# packages drop log rotation information into this directory
+include /etc/logrotate.d
+
+# no packages own wtmp, or btmp -- we'll rotate them here
+/var/log/wtmp
+/var/log/wtmp2
+{
+ missingok
+ monthly
+ create 0664 root utmp
+ rotate 1
+}
+
+/var/log/btmp /var/log/btmp* {
+ missingok
+ # ftpd doesn't handle SIGHUP properly
+ monthly
+ create 0664 root utmp
+ rotate 1
+}
+/var/log/vsftpd.log {
+ # ftpd doesn't handle SIGHUP properly
+ nocompress
+ missingok
+ notifempty
+ rotate 4
+ weekly
+}
+
+/var/log/apache2/*.log {
+ weekly
+ missingok
+ rotate 52
+ compress
+ delaycompress
+ notifempty
+ create 640 root adm
+ sharedscripts
+ prerotate
+ if [ -f /var/run/apache2.pid ]; then
+ /etc/init.d/apache2 restart > /dev/null
+ fi
+ endscript
+}
+/var/log/mailman/digest {
+ su root list
+ monthly
+ missingok
+ create 0664 list list
+ rotate 4
+ compress
+ delaycompress
+ sharedscripts
+ postrotate
+ [ -f '/var/run/mailman/mailman.pid' ] && /usr/lib/mailman/bin/mailmanctl -q reopen || exit 0
+ endscript
+}
+/var/log/ntp {
+ compress
+ dateext
+ maxage 365
+ rotate 99
+ size=+2048k
+ notifempty
+ missingok
+ copytruncate
+ postrotate
+ chmod 644 /var/log/ntp
+ endscript
+}
+"
+
+test Logrotate.lns get conf =
+ { "#comment" = "see man logrotate for details" }
+ { "#comment" = "rotate log files weekly" }
+ { "schedule" = "weekly" }
+ {}
+ { "#comment" = "keep 4 weeks worth of backlogs" }
+ { "rotate" = "4" }
+ {}
+ { "#comment" = "create new (empty) log files after rotating old ones" }
+ { "create" }
+ {}
+ { "#comment" = "uncomment this if you want your log files compressed" }
+ { "#comment" = "compress" }
+ {}
+ { "tabooext" = "+" { ".old" } { ".orig" } { ".ignore" } }
+ {}
+ { "#comment" = "packages drop log rotation information into this directory" }
+ { "include" = "/etc/logrotate.d" }
+ {}
+ { "#comment" = "no packages own wtmp, or btmp -- we'll rotate them here" }
+ { "rule"
+ { "file" = "/var/log/wtmp" }
+ { "file" = "/var/log/wtmp2" }
+ { "missingok" = "missingok" }
+ { "schedule" = "monthly" }
+ { "create"
+ { "mode" = "0664" }
+ { "owner" = "root" }
+ { "group" = "utmp" } }
+ { "rotate" = "1" } }
+ {}
+ { "rule"
+ { "file" = "/var/log/btmp" }
+ { "file" = "/var/log/btmp*" }
+ { "missingok" = "missingok" }
+ { "#comment" = "ftpd doesn't handle SIGHUP properly" }
+ { "schedule" = "monthly" }
+ { "create"
+ { "mode" = "0664" }
+ { "owner" = "root" }
+ { "group" = "utmp" } }
+ { "rotate" = "1" } }
+ { "rule"
+ { "file" = "/var/log/vsftpd.log" }
+ { "#comment" = "ftpd doesn't handle SIGHUP properly" }
+ { "compress" = "nocompress" }
+ { "missingok" = "missingok" }
+ { "ifempty" = "notifempty" }
+ { "rotate" = "4" }
+ { "schedule" = "weekly" } }
+ {}
+ { "rule"
+ { "file" = "/var/log/apache2/*.log" }
+ { "schedule" = "weekly" }
+ { "missingok" = "missingok" }
+ { "rotate" = "52" }
+ { "compress" = "compress" }
+ { "delaycompress" = "delaycompress" }
+ { "ifempty" = "notifempty" }
+ { "create"
+ { "mode" = "640" }
+ { "owner" = "root" }
+ { "group" = "adm" } }
+ { "sharedscripts" = "sharedscripts" }
+ { "prerotate" = " if [ -f /var/run/apache2.pid ]; then
+ /etc/init.d/apache2 restart > /dev/null
+ fi" } }
+ { "rule"
+ { "file" = "/var/log/mailman/digest" }
+ { "su"
+ { "owner" = "root" }
+ { "group" = "list" } }
+ { "schedule" = "monthly" }
+ { "missingok" = "missingok" }
+ { "create"
+ { "mode" = "0664" }
+ { "owner" = "list" }
+ { "group" = "list" } }
+ { "rotate" = "4" }
+ { "compress" = "compress" }
+ { "delaycompress" = "delaycompress" }
+ { "sharedscripts" = "sharedscripts" }
+ { "postrotate" = " [ -f '/var/run/mailman/mailman.pid' ] && /usr/lib/mailman/bin/mailmanctl -q reopen || exit 0" } }
+ { "rule"
+ { "file" = "/var/log/ntp" }
+ { "compress" = "compress" }
+ { "dateext" = "dateext" }
+ { "maxage" = "365" }
+ { "rotate" = "99" }
+ { "size" = "+2048k" }
+ { "ifempty" = "notifempty" }
+ { "missingok" = "missingok" }
+ { "copytruncate" = "copytruncate" }
+ { "postrotate" = " chmod 644 /var/log/ntp" } }
+
+test Logrotate.lns get "/var/log/file {\n dateext\n}\n" =
+ { "rule"
+ { "file" = "/var/log/file" }
+ { "dateext" = "dateext" } }
+
+ (* Make sure 'minsize 1M' works *)
+test Logrotate.lns get "/avr/log/wtmp {\n minsize 1M\n}\n" =
+ { "rule"
+ { "file" = "/avr/log/wtmp" }
+ { "minsize" = "1M" } }
+
+ (* '=' is a legal separator, file names can be indented *)
+test Logrotate.lns get " \t /file {\n size=5M\n}\n" =
+ { "rule"
+ { "file" = "/file" }
+ { "size" = "5M" } }
+
+ (* Can leave owner/group off a create statement *)
+test Logrotate.lns get "/file {
+ create 600\n}\n" =
+ { "rule"
+ { "file" = "/file" }
+ { "create"
+ { "mode" = "600" } } }
+
+test Logrotate.lns put "/file {\n create 600\n}\n" after
+ set "/rule/create/owner" "user"
+ = "/file {\n create 600 user\n}\n"
+
+ (* The newline at the end of a script is optional *)
+test Logrotate.lns put "/file {\n size=5M\n}\n" after
+ set "/rule/prerotate" "\tfoobar"
+ =
+"/file {
+ size=5M
+\tprerotate
+\tfoobar
+\tendscript\n}\n"
+
+test Logrotate.lns put "/file {\n size=5M\n}\n" after
+ set "/rule/prerotate" "\tfoobar\n"
+ =
+"/file {
+ size=5M
+\tprerotate
+\tfoobar\n
+\tendscript\n}\n"
+
+(* Bug #101: whitespace at the end of the line *)
+test Logrotate.lns get "/file {\n missingok \t\n}\n" =
+ { "rule"
+ { "file" = "/file" }
+ { "missingok" = "missingok" } }
+
+(* Bug #104: file names can be separated by newlines *)
+let conf2 = "/var/log/mail.info
+/var/log/mail.warn
+/var/log/mail.err
+{
+ weekly
+}
+"
+test Logrotate.lns get conf2 =
+ { "rule"
+ { "file" = "/var/log/mail.info" }
+ { "file" = "/var/log/mail.warn" }
+ { "file" = "/var/log/mail.err" }
+ { "schedule" = "weekly" } }
+
+(* Issue #217: support for dateformat *)
+let dateformat = "dateformat -%Y%m%d\n"
+
+test Logrotate.lns get dateformat =
+ { "dateformat" = "-%Y%m%d" }
+
+(* Issue #123: no space before '{' *)
+test Logrotate.lns get "/file{\n missingok \t\n}\n" =
+ { "rule"
+ { "file" = "/file" }
+ { "missingok" = "missingok" } }
+
+(* RHBZ#1213292: maxsize 30k *)
+test Logrotate.lns get "/var/log/yum.log {\n maxsize 30k\n}\n" =
+ { "rule"
+ { "file" = "/var/log/yum.log" }
+ { "maxsize" = "30k" } }
+
--- /dev/null
+module Test_logwatch =
+
+let conf = "# Configuration file for logwatch.
+#
+#Mailto_host1 = user@example.com
+
+MailFrom = root@example.com
+"
+
+test Logwatch.lns get conf =
+ { "#comment" = "Configuration file for logwatch." }
+ {}
+ { "#comment" = "Mailto_host1 = user@example.com" }
+ {}
+ { "MailFrom" = "root@example.com" }
--- /dev/null
+module Test_lokkit =
+
+let conf = "# Configuration file for system-config-firewall
+
+--enabled
+--port=111:tcp
+-p 111:udp
+-p 2020-2049:tcp
+--port=5900-5910:tcp
+--custom-rules=ipv4:filter:/var/lib/misc/iptables-forward-bridged
+-s dns
+--service=ssh
+--trust=trust1
+--masq=eth42
+--block-icmp=5
+-t trust0
+--addmodule=fancy
+--removemodule=broken
+--forward-port=if=forw0:port=42:proto=tcp:toport=42:toaddr=192.168.0.42
+--selinux=permissive
+"
+
+test Lokkit.lns get conf =
+ { "#comment" = "Configuration file for system-config-firewall" }
+ { }
+ { "enabled" }
+ { "port"
+ { "start" = "111" }
+ { "protocol" = "tcp" } }
+ { "port"
+ { "start" = "111" }
+ { "protocol" = "udp" } }
+ { "port"
+ { "start" = "2020" }
+ { "end" = "2049" }
+ { "protocol" = "tcp" } }
+ { "port"
+ { "start" = "5900" }
+ { "end" = "5910" }
+ { "protocol" = "tcp" } }
+ { "custom-rules" = "/var/lib/misc/iptables-forward-bridged"
+ { "type" = "ipv4" }
+ { "table" = "filter" } }
+ { "service" = "dns" }
+ { "service" = "ssh" }
+ { "trust" = "trust1" }
+ { "masq" = "eth42" }
+ { "block-icmp" = "5" }
+ { "trust" = "trust0" }
+ { "addmodule" = "fancy" }
+ { "removemodule" = "broken" }
+ { "forward-port"
+ { "if" = "forw0" }
+ { "port" = "42" }
+ { "proto" = "tcp" }
+ { "toport" = "42" }
+ { "toaddr" = "192.168.0.42" } }
+ { "selinux" = "permissive" }
+
+test Lokkit.custom_rules get
+"--custom-rules=ipv4:filter:/some/file\n" =
+ { "custom-rules" = "/some/file"
+ { "type" = "ipv4" }
+ { "table" = "filter" } }
+
+test Lokkit.custom_rules get
+"--custom-rules=filter:/some/file\n" =
+ { "custom-rules" = "/some/file"
+ { "table" = "filter" } }
+
+test Lokkit.custom_rules get
+"--custom-rules=ipv4:/some/file\n" =
+ { "custom-rules" = "/some/file"
+ { "type" = "ipv4" } }
+
+test Lokkit.custom_rules get
+"--custom-rules=/some/file\n" =
+ { "custom-rules" = "/some/file" }
+
+test Lokkit.lns get
+"--trust=tun+\n--trust=eth0.42\n--trust=eth0:1\n" =
+ { "trust" = "tun+" }
+ { "trust" = "eth0.42" }
+ { "trust" = "eth0:1" }
+
+(* We didn't allow '-' in the service name *)
+test Lokkit.lns get "--service=samba-client\n" =
+ { "service" = "samba-client" }
--- /dev/null
+(*
+Module: Test_LVM
+ Provides unit tests and examples for the <LVM> lens.
+*)
+
+module Test_LVM =
+
+(* Variable: conf
+ A full configuration file *)
+let conf = "# Generated by LVM2: date
+
+contents = \"Text Format Volume Group\"
+version = 1
+
+description = \"Created *after* executing 'eek'\"
+
+creation_host = \"eek\" # Linux eek
+creation_time = 6666666666 # eeeek
+
+VG1 {
+ id = \"uuid-uuid-uuid-uuid\"
+ seqno = 2
+ status = [\"RESIZEABLE\", \"READ\", \"WRITE\"]
+ extent_size = 8192 # 4 Megabytes
+ max_lv = 0
+ max_pv = 0
+ process_priority = -18
+
+ physical_volumes {
+ pv0 {
+ id = \"uuid-uuid-uuid-uuid\"
+ device = \"/dev/sda6\" # Hint only
+
+ status = [\"ALLOCATABLE\"]
+ pe_start = 123
+ pe_count = 123456 # many Gigabytes
+ }
+ }
+
+ logical_volumes {
+ LogicalEek {
+ id = \"uuid-uuid-uuid-uuid\"
+ status = [\"READ\", \"WRITE\", \"VISIBLE\"]
+ segment_count = 1
+
+ segment1 {
+ start_extent = 0
+ extent_count = 123456 # beaucoup Gigabytes
+
+ type = \"striped\"
+ stripe_count = 1 # linear
+
+ stripes = [
+ \"pv0\", 0
+ ]
+ }
+ }
+ }
+}
+"
+
+test LVM.int get "5" = { "int" = "5" }
+test LVM.str get "\"abc\"" = { "str" = "abc"}
+test LVM.lns get "\n" = {}
+test LVM.lns get "#foo\n" = { "#comment" = "foo"}
+
+test LVM.lns get "# Generated by LVM2: date
+
+contents = \"Text Format Volume Group\"
+version = 1
+
+description = \"Created *after* executing 'eek'\"
+
+creation_host = \"eek\" # Linux eek
+creation_time = 6666666666 # eeeek\n" =
+ { "#comment" = "Generated by LVM2: date" }
+ {}
+ { "contents"
+ { "str" = "Text Format Volume Group" }
+ }
+ { "version"
+ { "int" = "1" }
+ }
+ {}
+ { "description"
+ { "str" = "Created *after* executing 'eek'" }
+ }
+ {}
+ { "creation_host"
+ { "str" = "eek" }
+ { "#comment" = "Linux eek" }
+ }
+ { "creation_time"
+ { "int" = "6666666666" }
+ { "#comment" = "eeeek" }
+ }
+
+(* Test: LVM.lns
+ Test the full <conf> *)
+test LVM.lns get conf =
+ { "#comment" = "Generated by LVM2: date" }
+ {}
+ { "contents"
+ { "str" = "Text Format Volume Group" }
+ }
+ { "version"
+ { "int" = "1" }
+ }
+ {}
+ { "description"
+ { "str" = "Created *after* executing 'eek'" }
+ }
+ {}
+ { "creation_host"
+ { "str" = "eek" }
+ { "#comment" = "Linux eek" }
+ }
+ { "creation_time"
+ { "int" = "6666666666" }
+ { "#comment" = "eeeek" }
+ }
+ {}
+ { "VG1"
+ { "dict"
+ { "id"
+ { "str" = "uuid-uuid-uuid-uuid" }
+ }
+ { "seqno"
+ { "int" = "2" }
+ }
+ { "status"
+ { "list"
+ { "1"
+ { "str" = "RESIZEABLE" }
+ }
+ { "2"
+ { "str" = "READ" }
+ }
+ { "3"
+ { "str" = "WRITE" }
+ }
+ }
+ }
+ { "extent_size"
+ { "int" = "8192" }
+ { "#comment" = "4 Megabytes" }
+ }
+ { "max_lv"
+ { "int" = "0" }
+ }
+ { "max_pv"
+ { "int" = "0" }
+ }
+ { "process_priority"
+ { "int" = "-18" }
+ }
+ {}
+ { "physical_volumes"
+ { "dict"
+ { "pv0"
+ { "dict"
+ { "id"
+ { "str" = "uuid-uuid-uuid-uuid" }
+ }
+ { "device"
+ { "str" = "/dev/sda6" }
+ { "#comment" = "Hint only" }
+ }
+ {}
+ { "status"
+ { "list"
+ { "1"
+ { "str" = "ALLOCATABLE" }
+ }
+ }
+ }
+ { "pe_start"
+ { "int" = "123" }
+ }
+ { "pe_count"
+ { "int" = "123456" }
+ { "#comment" = "many Gigabytes" }
+ }
+ }
+ }
+ }
+ }
+ {}
+ { "logical_volumes"
+ { "dict"
+ { "LogicalEek"
+ { "dict"
+ { "id"
+ { "str" = "uuid-uuid-uuid-uuid" }
+ }
+ { "status"
+ { "list"
+ { "1"
+ { "str" = "READ" }
+ }
+ { "2"
+ { "str" = "WRITE" }
+ }
+ { "3"
+ { "str" = "VISIBLE" }
+ }
+ }
+ }
+ { "segment_count"
+ { "int" = "1" }
+ }
+ {}
+ { "segment1"
+ { "dict"
+ { "start_extent"
+ { "int" = "0" }
+ }
+ { "extent_count"
+ { "int" = "123456" }
+ { "#comment" = "beaucoup Gigabytes" }
+ }
+ {}
+ { "type"
+ { "str" = "striped" }
+ }
+ { "stripe_count"
+ { "int" = "1" }
+ { "#comment" = "linear" }
+ }
+ {}
+ { "stripes"
+ { "list"
+ { "1"
+ { "str" = "pv0" }
+ }
+ { "2"
+ { "int" = "0" }
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+
+(* Parse description from RHEL 6 *)
+let descr="\"Created *before* executing '/sbin/vgs --noheadings -o name --config 'log{command_names=0 prefix=\\\" \\\"}''\""
+test LVM.str get descr =
+ { "str" = "Created *before* executing '/sbin/vgs --noheadings -o name --config 'log{command_names=0 prefix=\\\" \\\"}''" }
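+
+(* A put-direction sketch (assumption: <LVM.lns> supports writing back
+   and preserves the "key = value" layout of the input; the path
+   "version/int" mirrors the tree shape shown above): *)
+test LVM.lns put "version = 1\n" after
+  set "version/int" "2" = "version = 2\n"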
--- /dev/null
+(*
+Module: Test_Mailscanner
+  Provides unit tests and examples for the <Mailscanner> lens.
+*)
+
+module Test_Mailscanner =
+
+let conf = "# Main configuration file for the MailScanner E-Mail Virus Scanner
+%org-name% = BARUWA
+%org-long-name% = BARUWA MAILFW
+%web-site% = www.baruwa.com
+%etc-dir% = /etc/MailScanner
+%reports-base% = /etc/MailScanner/reports
+%report-dir% = /etc/MailScanner/reports/en
+%rules-dir% = /etc/MailScanner/rules
+%mcp-dir% = /etc/MailScanner/mcp
+%spool-dir% = /var/spool/MailScanner
+%signature-dir% = /etc/MailScanner/baruwa/signatures
+%brules-dir% = /etc/MailScanner/baruwa/rules
+Max Children = 5
+Run As User = exim
+Run As Group = exim
+Queue Scan Interval = 6
+Incoming Queue Dir = /var/spool/exim.in/input
+Outgoing Queue Dir = /var/spool/exim/input
+Incoming Work Dir = %spool-dir%/incoming
+Quarantine Dir = %spool-dir%/quarantine
+PID file = /var/run/MailScanner/MailScanner.pid
+Restart Every = 7200
+MTA = exim
+Sendmail = /usr/sbin/exim -C /etc/exim/exim_out.conf
+Sendmail2 = /usr/sbin/exim -C /etc/exim/exim_out.conf
+Incoming Work User = exim
+Incoming Work Group = clam
+Incoming Work Permissions = 0640
+Quarantine User = exim
+Quarantine Group = baruwa
+Quarantine Permissions = 0660
+Max Unscanned Bytes Per Scan = 100m
+Max Unsafe Bytes Per Scan = 50m
+Max Unscanned Messages Per Scan = 30
+Max Unsafe Messages Per Scan = 30
+Max Normal Queue Size = 800
+Scan Messages = %rules-dir%/scan.messages.rules
+Reject Message = no
+Maximum Processing Attempts = 6
+Processing Attempts Database = %spool-dir%/incoming/Processing.db
+Maximum Attachments Per Message = 200
+Expand TNEF = yes
+Use TNEF Contents = replace
+Deliver Unparsable TNEF = no
+TNEF Expander = /usr/bin/tnef --maxsize=100000000
+TNEF Timeout = 120
+File Command = /usr/sbin/file-wrapper
+File Timeout = 20
+Gunzip Command = /bin/gunzip
+Gunzip Timeout = 50
+Unrar Command = /usr/bin/unrar
+Unrar Timeout = 50
+Find UU-Encoded Files = no
+Maximum Message Size = %brules-dir%/message.size.rules
+Maximum Attachment Size = -1
+Minimum Attachment Size = -1
+Maximum Archive Depth = 4
+Find Archives By Content = yes
+Unpack Microsoft Documents = yes
+Zip Attachments = no
+Attachments Zip Filename = MessageAttachments.zip
+Attachments Min Total Size To Zip = 100k
+Attachment Extensions Not To Zip = .zip .rar .gz .tgz .jpg .jpeg .mpg .mpe .mpeg .mp3 .rpm .htm .html .eml
+Add Text Of Doc = no
+Antiword = /usr/bin/antiword -f
+Antiword Timeout = 50
+Unzip Maximum Files Per Archive = 0
+Unzip Maximum File Size = 50k
+Unzip Filenames = *.txt *.ini *.log *.csv
+Unzip MimeType = text/plain
+Virus Scanning = %brules-dir%/virus.checks.rules
+Virus Scanners = auto
+Virus Scanner Timeout = 300
+Deliver Disinfected Files = no
+Silent Viruses = HTML-IFrame All-Viruses
+Still Deliver Silent Viruses = no
+Non-Forging Viruses = Joke/ OF97/ WM97/ W97M/ eicar
+Spam-Virus Header = X-%org-name%-BaruwaFW-SpamVirus-Report:
+Virus Names Which Are Spam = Sanesecurity.Spam*UNOFFICIAL HTML/* *Phish* *Suspected-phishing_safebrowsing*
+Block Encrypted Messages = no
+Block Unencrypted Messages = no
+Allow Password-Protected Archives = yes
+Check Filenames In Password-Protected Archives = yes
+Allowed Sophos Error Messages =
+Sophos IDE Dir = /opt/sophos-av/lib/sav
+Sophos Lib Dir = /opt/sophos-av/lib
+Monitors For Sophos Updates = /opt/sophos-av/lib/sav/*.ide
+Monitors for ClamAV Updates = /var/lib/clamav/*.cvd
+ClamAVmodule Maximum Recursion Level = 8
+ClamAVmodule Maximum Files = 1000
+ClamAVmodule Maximum File Size = 10000000
+ClamAVmodule Maximum Compression Ratio = 250
+Clamd Port = 3310
+Clamd Socket = /var/run/clamav/clamd.sock
+Clamd Lock File =
+Clamd Use Threads = no
+ClamAV Full Message Scan = yes
+Fpscand Port = 10200
+Dangerous Content Scanning = %rules-dir%/content.scanning.rules
+Allow Partial Messages = no
+Allow External Message Bodies = no
+Find Phishing Fraud = yes
+Also Find Numeric Phishing = yes
+Use Stricter Phishing Net = yes
+Highlight Phishing Fraud = yes
+Phishing Safe Sites File = %etc-dir%/phishing.safe.sites.conf
+Phishing Bad Sites File = %etc-dir%/phishing.bad.sites.conf
+Country Sub-Domains List = %etc-dir%/country.domains.conf
+Allow IFrame Tags = disarm
+Allow Form Tags = disarm
+Allow Script Tags = disarm
+Allow WebBugs = disarm
+Ignored Web Bug Filenames = spacer pixel.gif pixel.png gap
+Known Web Bug Servers = msgtag.com
+Web Bug Replacement = https://datafeeds.baruwa.com/1x1spacer.gif
+Allow Object Codebase Tags = disarm
+Convert Dangerous HTML To Text = no
+Convert HTML To Text = no
+Archives Are = zip rar ole
+Allow Filenames =
+Deny Filenames =
+Filename Rules = %rules-dir%/filename.rules
+Allow Filetypes =
+Allow File MIME Types =
+Deny Filetypes =
+Deny File MIME Types =
+Filetype Rules = %rules-dir%/filetype.rules
+Archives: Allow Filenames =
+Archives: Deny Filenames =
+Archives: Filename Rules = %etc-dir%/archives.filename.rules.conf
+Archives: Allow Filetypes =
+Archives: Allow File MIME Types =
+Archives: Deny Filetypes =
+Archives: Deny File MIME Types =
+Archives: Filetype Rules = %etc-dir%/archives.filetype.rules.conf
+Default Rename Pattern = __FILENAME__.disarmed
+Quarantine Infections = yes
+Quarantine Silent Viruses = no
+Quarantine Modified Body = no
+Quarantine Whole Message = yes
+Quarantine Whole Messages As Queue Files = no
+Keep Spam And MCP Archive Clean = yes
+Language Strings = %brules-dir%/languages.rules
+Rejection Report = %brules-dir%/rejectionreport.rules
+Deleted Bad Content Message Report = %brules-dir%/deletedcontentmessage.rules
+Deleted Bad Filename Message Report = %brules-dir%/deletedfilenamemessage.rules
+Deleted Virus Message Report = %brules-dir%/deletedvirusmessage.rules
+Deleted Size Message Report = %brules-dir%/deletedsizemessage.rules
+Stored Bad Content Message Report = %brules-dir%/storedcontentmessage.rules
+Stored Bad Filename Message Report = %brules-dir%/storedfilenamemessage.rules
+Stored Virus Message Report = %brules-dir%/storedvirusmessage.rules
+Stored Size Message Report = %brules-dir%/storedsizemessage.rules
+Disinfected Report = %brules-dir%/disinfectedreport.rules
+Inline HTML Signature = %brules-dir%/html.sigs.rules
+Inline Text Signature = %brules-dir%/text.sigs.rules
+Signature Image Filename = %brules-dir%/sig.imgs.names.rules
+Signature Image <img> Filename = %brules-dir%/sig.imgs.rules
+Inline HTML Warning = %brules-dir%/inlinewarninghtml.rules
+Inline Text Warning = %brules-dir%/inlinewarningtxt.rules
+Sender Content Report = %brules-dir%/sendercontentreport.rules
+Sender Error Report = %brules-dir%/sendererrorreport.rules
+Sender Bad Filename Report = %brules-dir%/senderfilenamereport.rules
+Sender Virus Report = %brules-dir%/sendervirusreport.rules
+Sender Size Report = %brules-dir%/sendersizereport.rules
+Hide Incoming Work Dir = yes
+Include Scanner Name In Reports = yes
+Mail Header = X-%org-name%-BaruwaFW:
+Spam Header = X-%org-name%-BaruwaFW-SpamCheck:
+Spam Score Header = X-%org-name%-BaruwaFW-SpamScore:
+Information Header = X-%org-name%-BaruwaFW-Information:
+Add Envelope From Header = yes
+Add Envelope To Header = no
+Envelope From Header = X-BARUWA-BaruwaFW-From:
+Envelope To Header = X-%org-name%-BaruwaFW-To:
+ID Header = X-%org-name%-BaruwaFW-ID:
+IP Protocol Version Header =
+Spam Score Character = s
+SpamScore Number Instead Of Stars = no
+Minimum Stars If On Spam List = 0
+Clean Header Value = Found to be clean
+Infected Header Value = Found to be infected
+Disinfected Header Value = Disinfected
+Information Header Value = Please contact %org-long-name% for more information
+Detailed Spam Report = yes
+Include Scores In SpamAssassin Report = yes
+Always Include SpamAssassin Report = no
+Multiple Headers = add
+Place New Headers At Top Of Message = yes
+Hostname = the %org-name% ($HOSTNAME) Baruwa
+Sign Messages Already Processed = no
+Sign Clean Messages = %brules-dir%/sign.clean.msgs.rules
+Attach Image To Signature = yes
+Attach Image To HTML Message Only = yes
+Allow Multiple HTML Signatures = no
+Dont Sign HTML If Headers Exist =
+Mark Infected Messages = yes
+Mark Unscanned Messages = no
+Unscanned Header Value = Not scanned: please contact %org-long-name% for details
+Remove These Headers = X-Mozilla-Status: X-Mozilla-Status2:
+Deliver Cleaned Messages = yes
+Notify Senders = no
+Notify Senders Of Viruses = no
+Notify Senders Of Blocked Filenames Or Filetypes = no
+Notify Senders Of Blocked Size Attachments = no
+Notify Senders Of Other Blocked Content = no
+Never Notify Senders Of Precedence = list bulk
+Scanned Modify Subject = no
+Scanned Subject Text = {Scanned}
+Virus Modify Subject = no
+Virus Subject Text = {Virus?}
+Filename Modify Subject = no
+Filename Subject Text = {Filename?}
+Content Modify Subject = no
+Content Subject Text = {Dangerous Content?}
+Size Modify Subject = no
+Size Subject Text = {Size}
+Disarmed Modify Subject = no
+Disarmed Subject Text = {Disarmed}
+Phishing Modify Subject = yes
+Phishing Subject Text = {Suspected Phishing?}
+Spam Modify Subject = no
+Spam Subject Text = {Spam?}
+High Scoring Spam Modify Subject = no
+High Scoring Spam Subject Text = {Spam?}
+Warning Is Attachment = yes
+Attachment Warning Filename = %org-name%-Attachment-Warning.txt
+Attachment Encoding Charset = ISO-8859-1
+Archive Mail =
+Missing Mail Archive Is = file
+Send Notices = no
+Notices Include Full Headers = yes
+Hide Incoming Work Dir in Notices = yes
+Notice Signature = -- \\n%org-name%\\nEmail Security\\n%website%
+Notices From = Baruwa
+Notices To = postmaster
+Local Postmaster = postmaster
+Spam List Definitions = %etc-dir%/spam.lists.conf
+Virus Scanner Definitions = %etc-dir%/virus.scanners.conf
+Spam Checks = %brules-dir%/spam.checks.rules
+Spam List =
+Spam Domain List =
+Spam Lists To Be Spam = 1
+Spam Lists To Reach High Score = 3
+Spam List Timeout = 10
+Max Spam List Timeouts = 7
+Spam List Timeouts History = 10
+Is Definitely Not Spam = %brules-dir%/approved.senders.rules
+Is Definitely Spam = %brules-dir%/banned.senders.rules
+Definite Spam Is High Scoring = yes
+Ignore Spam Whitelist If Recipients Exceed = 20
+Max Spam Check Size = 4000k
+Use Watermarking = no
+Add Watermark = yes
+Check Watermarks With No Sender = yes
+Treat Invalid Watermarks With No Sender as Spam = nothing
+Check Watermarks To Skip Spam Checks = yes
+Watermark Secret = %org-name%-BaruwaFW-Secret
+Watermark Lifetime = 604800
+Watermark Header = X-%org-name%-BaruwaFW-Watermark:
+Use SpamAssassin = yes
+Max SpamAssassin Size = 800k
+Required SpamAssassin Score = %brules-dir%/spam.score.rules
+High SpamAssassin Score = %brules-dir%/highspam.score.rules
+SpamAssassin Auto Whitelist = yes
+SpamAssassin Timeout = 75
+Max SpamAssassin Timeouts = 10
+SpamAssassin Timeouts History = 30
+Check SpamAssassin If On Spam List = yes
+Include Binary Attachments In SpamAssassin = no
+Spam Score = yes
+Cache SpamAssassin Results = yes
+SpamAssassin Cache Database File = %spool-dir%/incoming/SpamAssassin.cache.db
+Rebuild Bayes Every = 0
+Wait During Bayes Rebuild = no
+Use Custom Spam Scanner = no
+Max Custom Spam Scanner Size = 20k
+Custom Spam Scanner Timeout = 20
+Max Custom Spam Scanner Timeouts = 10
+Custom Spam Scanner Timeout History = 20
+Spam Actions = %brules-dir%/spam.actions.rules
+High Scoring Spam Actions = %brules-dir%/highspam.actions.rules
+Non Spam Actions = %rules-dir%/nonspam.actions.rules
+SpamAssassin Rule Actions =
+Sender Spam Report = %brules-dir%/senderspamreport.rules
+Sender Spam List Report = %brules-dir%/senderspamrblreport.rules
+Sender SpamAssassin Report = %brules-dir%/senderspamsareport.rules
+Inline Spam Warning = %brules-dir%/inlinespamwarning.rules
+Recipient Spam Report = %brules-dir%/recipientspamreport.rules
+Enable Spam Bounce = %rules-dir%/bounce.rules
+Bounce Spam As Attachment = no
+Syslog Facility = mail
+Log Speed = no
+Log Spam = no
+Log Non Spam = no
+Log Delivery And Non-Delivery = no
+Log Permitted Filenames = no
+Log Permitted Filetypes = no
+Log Permitted File MIME Types = no
+Log Silent Viruses = no
+Log Dangerous HTML Tags = no
+Log SpamAssassin Rule Actions = no
+SpamAssassin Temporary Dir = /var/spool/MailScanner/incoming/SpamAssassin-Temp
+SpamAssassin User State Dir =
+SpamAssassin Install Prefix =
+SpamAssassin Site Rules Dir = /etc/mail/spamassassin
+SpamAssassin Local Rules Dir =
+SpamAssassin Local State Dir =
+SpamAssassin Default Rules Dir =
+DB DSN = DBI:Pg:database=baruwa
+DB Username = baruwa
+DB Password = password
+SQL Serial Number = SELECT MAX(value) AS confserialnumber FROM configurations WHERE internal='confserialnumber'
+SQL Quick Peek = SELECT dbvalue(value) AS value FROM quickpeek WHERE external = ? AND (hostname = ? OR hostname='default') LIMIT 1
+SQL Config = SELECT internal, dbvalue(value) AS value, hostname FROM quickpeek WHERE hostname=? OR hostname='default'
+SQL Ruleset = SELECT row_number, ruleset AS rule FROM msrulesets WHERE name=?
+SQL SpamAssassin Config =
+SQL Debug = no
+Sphinx Host = 127.0.0.1
+Sphinx Port = 9306
+MCP Checks = no
+First Check = spam
+MCP Required SpamAssassin Score = 1
+MCP High SpamAssassin Score = 10
+MCP Error Score = 1
+MCP Header = X-%org-name%-BaruwaFW-MCPCheck:
+Non MCP Actions = deliver
+MCP Actions = deliver
+High Scoring MCP Actions = deliver
+Bounce MCP As Attachment = no
+MCP Modify Subject = start
+MCP Subject Text = {MCP?}
+High Scoring MCP Modify Subject = start
+High Scoring MCP Subject Text = {MCP?}
+Is Definitely MCP = no
+Is Definitely Not MCP = no
+Definite MCP Is High Scoring = no
+Always Include MCP Report = no
+Detailed MCP Report = yes
+Include Scores In MCP Report = no
+Log MCP = no
+MCP Max SpamAssassin Timeouts = 20
+MCP Max SpamAssassin Size = 100k
+MCP SpamAssassin Timeout = 10
+MCP SpamAssassin Prefs File = %mcp-dir%/mcp.spam.assassin.prefs.conf
+MCP SpamAssassin User State Dir =
+MCP SpamAssassin Local Rules Dir = %mcp-dir%
+MCP SpamAssassin Default Rules Dir = %mcp-dir%
+MCP SpamAssassin Install Prefix = %mcp-dir%
+Recipient MCP Report = %report-dir%/recipient.mcp.report.txt
+Sender MCP Report = %report-dir%/sender.mcp.report.txt
+Use Default Rules With Multiple Recipients = no
+Read IP Address From Received Header = no
+Spam Score Number Format = %d.1f
+MailScanner Version Number = 4.85.5
+SpamAssassin Cache Timings = 1800,300,10800,172800,600
+Debug = no
+Debug SpamAssassin = no
+Run In Foreground = no
+Always Looked Up Last = &BaruwaLog
+Always Looked Up Last After Batch = no
+Deliver In Background = yes
+Delivery Method = batch
+Split Exim Spool = no
+Lockfile Dir = %spool-dir%/incoming/Locks
+Custom Functions Dir = /usr/share/baruwa/CustomFunctions
+Lock Type =
+Syslog Socket Type =
+Automatic Syntax Check = yes
+Minimum Code Status = supported
+include /etc/MailScanner/conf.d/*.conf
+"
+
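+(* A minimal sketch first (assumptions: keys may contain spaces, and a
+   setting with an empty value yields a leaf node without a value, as in
+   the "Clamd Lock File" line of the full configuration above): *)
+test Mailscanner.lns get "Clamd Lock File =\n" = { "Clamd Lock File" }
+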
+test Mailscanner.lns get conf =
+  { "#comment" = "Main configuration file for the MailScanner E-Mail Virus Scanner" }
+ { "%org-name%" = "BARUWA" }
+ { "%org-long-name%" = "BARUWA MAILFW" }
+ { "%web-site%" = "www.baruwa.com" }
+ { "%etc-dir%" = "/etc/MailScanner" }
+ { "%reports-base%" = "/etc/MailScanner/reports" }
+ { "%report-dir%" = "/etc/MailScanner/reports/en" }
+ { "%rules-dir%" = "/etc/MailScanner/rules" }
+ { "%mcp-dir%" = "/etc/MailScanner/mcp" }
+ { "%spool-dir%" = "/var/spool/MailScanner" }
+ { "%signature-dir%" = "/etc/MailScanner/baruwa/signatures" }
+ { "%brules-dir%" = "/etc/MailScanner/baruwa/rules" }
+ { "Max Children" = "5" }
+ { "Run As User" = "exim" }
+ { "Run As Group" = "exim" }
+ { "Queue Scan Interval" = "6" }
+ { "Incoming Queue Dir" = "/var/spool/exim.in/input" }
+ { "Outgoing Queue Dir" = "/var/spool/exim/input" }
+ { "Incoming Work Dir" = "%spool-dir%/incoming" }
+ { "Quarantine Dir" = "%spool-dir%/quarantine" }
+ { "PID file" = "/var/run/MailScanner/MailScanner.pid" }
+ { "Restart Every" = "7200" }
+ { "MTA" = "exim" }
+ { "Sendmail" = "/usr/sbin/exim -C /etc/exim/exim_out.conf" }
+ { "Sendmail2" = "/usr/sbin/exim -C /etc/exim/exim_out.conf" }
+ { "Incoming Work User" = "exim" }
+ { "Incoming Work Group" = "clam" }
+ { "Incoming Work Permissions" = "0640" }
+ { "Quarantine User" = "exim" }
+ { "Quarantine Group" = "baruwa" }
+ { "Quarantine Permissions" = "0660" }
+ { "Max Unscanned Bytes Per Scan" = "100m" }
+ { "Max Unsafe Bytes Per Scan" = "50m" }
+ { "Max Unscanned Messages Per Scan" = "30" }
+ { "Max Unsafe Messages Per Scan" = "30" }
+ { "Max Normal Queue Size" = "800" }
+ { "Scan Messages" = "%rules-dir%/scan.messages.rules" }
+ { "Reject Message" = "no" }
+ { "Maximum Processing Attempts" = "6" }
+ { "Processing Attempts Database" = "%spool-dir%/incoming/Processing.db" }
+ { "Maximum Attachments Per Message" = "200" }
+ { "Expand TNEF" = "yes" }
+ { "Use TNEF Contents" = "replace" }
+ { "Deliver Unparsable TNEF" = "no" }
+ { "TNEF Expander" = "/usr/bin/tnef --maxsize=100000000" }
+ { "TNEF Timeout" = "120" }
+ { "File Command" = "/usr/sbin/file-wrapper" }
+ { "File Timeout" = "20" }
+ { "Gunzip Command" = "/bin/gunzip" }
+ { "Gunzip Timeout" = "50" }
+ { "Unrar Command" = "/usr/bin/unrar" }
+ { "Unrar Timeout" = "50" }
+ { "Find UU-Encoded Files" = "no" }
+ { "Maximum Message Size" = "%brules-dir%/message.size.rules" }
+ { "Maximum Attachment Size" = "-1" }
+ { "Minimum Attachment Size" = "-1" }
+ { "Maximum Archive Depth" = "4" }
+ { "Find Archives By Content" = "yes" }
+ { "Unpack Microsoft Documents" = "yes" }
+ { "Zip Attachments" = "no" }
+ { "Attachments Zip Filename" = "MessageAttachments.zip" }
+ { "Attachments Min Total Size To Zip" = "100k" }
+ { "Attachment Extensions Not To Zip" = ".zip .rar .gz .tgz .jpg .jpeg .mpg .mpe .mpeg .mp3 .rpm .htm .html .eml" }
+ { "Add Text Of Doc" = "no" }
+ { "Antiword" = "/usr/bin/antiword -f" }
+ { "Antiword Timeout" = "50" }
+ { "Unzip Maximum Files Per Archive" = "0" }
+ { "Unzip Maximum File Size" = "50k" }
+ { "Unzip Filenames" = "*.txt *.ini *.log *.csv" }
+ { "Unzip MimeType" = "text/plain" }
+ { "Virus Scanning" = "%brules-dir%/virus.checks.rules" }
+ { "Virus Scanners" = "auto" }
+ { "Virus Scanner Timeout" = "300" }
+ { "Deliver Disinfected Files" = "no" }
+ { "Silent Viruses" = "HTML-IFrame All-Viruses" }
+ { "Still Deliver Silent Viruses" = "no" }
+ { "Non-Forging Viruses" = "Joke/ OF97/ WM97/ W97M/ eicar" }
+ { "Spam-Virus Header" = "X-%org-name%-BaruwaFW-SpamVirus-Report:" }
+ { "Virus Names Which Are Spam" = "Sanesecurity.Spam*UNOFFICIAL HTML/* *Phish* *Suspected-phishing_safebrowsing*" }
+ { "Block Encrypted Messages" = "no" }
+ { "Block Unencrypted Messages" = "no" }
+ { "Allow Password-Protected Archives" = "yes" }
+ { "Check Filenames In Password-Protected Archives" = "yes" }
+ { "Allowed Sophos Error Messages" }
+ { "Sophos IDE Dir" = "/opt/sophos-av/lib/sav" }
+ { "Sophos Lib Dir" = "/opt/sophos-av/lib" }
+ { "Monitors For Sophos Updates" = "/opt/sophos-av/lib/sav/*.ide" }
+ { "Monitors for ClamAV Updates" = "/var/lib/clamav/*.cvd" }
+ { "ClamAVmodule Maximum Recursion Level" = "8" }
+ { "ClamAVmodule Maximum Files" = "1000" }
+ { "ClamAVmodule Maximum File Size" = "10000000" }
+ { "ClamAVmodule Maximum Compression Ratio" = "250" }
+ { "Clamd Port" = "3310" }
+ { "Clamd Socket" = "/var/run/clamav/clamd.sock" }
+ { "Clamd Lock File" }
+ { "Clamd Use Threads" = "no" }
+ { "ClamAV Full Message Scan" = "yes" }
+ { "Fpscand Port" = "10200" }
+ { "Dangerous Content Scanning" = "%rules-dir%/content.scanning.rules" }
+ { "Allow Partial Messages" = "no" }
+ { "Allow External Message Bodies" = "no" }
+ { "Find Phishing Fraud" = "yes" }
+ { "Also Find Numeric Phishing" = "yes" }
+ { "Use Stricter Phishing Net" = "yes" }
+ { "Highlight Phishing Fraud" = "yes" }
+ { "Phishing Safe Sites File" = "%etc-dir%/phishing.safe.sites.conf" }
+ { "Phishing Bad Sites File" = "%etc-dir%/phishing.bad.sites.conf" }
+ { "Country Sub-Domains List" = "%etc-dir%/country.domains.conf" }
+ { "Allow IFrame Tags" = "disarm" }
+ { "Allow Form Tags" = "disarm" }
+ { "Allow Script Tags" = "disarm" }
+ { "Allow WebBugs" = "disarm" }
+ { "Ignored Web Bug Filenames" = "spacer pixel.gif pixel.png gap" }
+ { "Known Web Bug Servers" = "msgtag.com" }
+ { "Web Bug Replacement" = "https://datafeeds.baruwa.com/1x1spacer.gif" }
+ { "Allow Object Codebase Tags" = "disarm" }
+ { "Convert Dangerous HTML To Text" = "no" }
+ { "Convert HTML To Text" = "no" }
+ { "Archives Are" = "zip rar ole" }
+ { "Allow Filenames" }
+ { "Deny Filenames" }
+ { "Filename Rules" = "%rules-dir%/filename.rules" }
+ { "Allow Filetypes" }
+ { "Allow File MIME Types" }
+ { "Deny Filetypes" }
+ { "Deny File MIME Types" }
+ { "Filetype Rules" = "%rules-dir%/filetype.rules" }
+ { "Archives: Allow Filenames" }
+ { "Archives: Deny Filenames" }
+ { "Archives: Filename Rules" = "%etc-dir%/archives.filename.rules.conf" }
+ { "Archives: Allow Filetypes" }
+ { "Archives: Allow File MIME Types" }
+ { "Archives: Deny Filetypes" }
+ { "Archives: Deny File MIME Types" }
+ { "Archives: Filetype Rules" = "%etc-dir%/archives.filetype.rules.conf" }
+ { "Default Rename Pattern" = "__FILENAME__.disarmed" }
+ { "Quarantine Infections" = "yes" }
+ { "Quarantine Silent Viruses" = "no" }
+ { "Quarantine Modified Body" = "no" }
+ { "Quarantine Whole Message" = "yes" }
+ { "Quarantine Whole Messages As Queue Files" = "no" }
+ { "Keep Spam And MCP Archive Clean" = "yes" }
+ { "Language Strings" = "%brules-dir%/languages.rules" }
+ { "Rejection Report" = "%brules-dir%/rejectionreport.rules" }
+ { "Deleted Bad Content Message Report" = "%brules-dir%/deletedcontentmessage.rules" }
+ { "Deleted Bad Filename Message Report" = "%brules-dir%/deletedfilenamemessage.rules" }
+ { "Deleted Virus Message Report" = "%brules-dir%/deletedvirusmessage.rules" }
+ { "Deleted Size Message Report" = "%brules-dir%/deletedsizemessage.rules" }
+ { "Stored Bad Content Message Report" = "%brules-dir%/storedcontentmessage.rules" }
+ { "Stored Bad Filename Message Report" = "%brules-dir%/storedfilenamemessage.rules" }
+ { "Stored Virus Message Report" = "%brules-dir%/storedvirusmessage.rules" }
+ { "Stored Size Message Report" = "%brules-dir%/storedsizemessage.rules" }
+ { "Disinfected Report" = "%brules-dir%/disinfectedreport.rules" }
+ { "Inline HTML Signature" = "%brules-dir%/html.sigs.rules" }
+ { "Inline Text Signature" = "%brules-dir%/text.sigs.rules" }
+ { "Signature Image Filename" = "%brules-dir%/sig.imgs.names.rules" }
+ { "Signature Image <img> Filename" = "%brules-dir%/sig.imgs.rules" }
+ { "Inline HTML Warning" = "%brules-dir%/inlinewarninghtml.rules" }
+ { "Inline Text Warning" = "%brules-dir%/inlinewarningtxt.rules" }
+ { "Sender Content Report" = "%brules-dir%/sendercontentreport.rules" }
+ { "Sender Error Report" = "%brules-dir%/sendererrorreport.rules" }
+ { "Sender Bad Filename Report" = "%brules-dir%/senderfilenamereport.rules" }
+ { "Sender Virus Report" = "%brules-dir%/sendervirusreport.rules" }
+ { "Sender Size Report" = "%brules-dir%/sendersizereport.rules" }
+ { "Hide Incoming Work Dir" = "yes" }
+ { "Include Scanner Name In Reports" = "yes" }
+ { "Mail Header" = "X-%org-name%-BaruwaFW:" }
+ { "Spam Header" = "X-%org-name%-BaruwaFW-SpamCheck:" }
+ { "Spam Score Header" = "X-%org-name%-BaruwaFW-SpamScore:" }
+ { "Information Header" = "X-%org-name%-BaruwaFW-Information:" }
+ { "Add Envelope From Header" = "yes" }
+ { "Add Envelope To Header" = "no" }
+ { "Envelope From Header" = "X-BARUWA-BaruwaFW-From:" }
+ { "Envelope To Header" = "X-%org-name%-BaruwaFW-To:" }
+ { "ID Header" = "X-%org-name%-BaruwaFW-ID:" }
+ { "IP Protocol Version Header" }
+ { "Spam Score Character" = "s" }
+ { "SpamScore Number Instead Of Stars" = "no" }
+ { "Minimum Stars If On Spam List" = "0" }
+ { "Clean Header Value" = "Found to be clean" }
+ { "Infected Header Value" = "Found to be infected" }
+ { "Disinfected Header Value" = "Disinfected" }
+ { "Information Header Value" = "Please contact %org-long-name% for more information" }
+ { "Detailed Spam Report" = "yes" }
+ { "Include Scores In SpamAssassin Report" = "yes" }
+ { "Always Include SpamAssassin Report" = "no" }
+ { "Multiple Headers" = "add" }
+ { "Place New Headers At Top Of Message" = "yes" }
+ { "Hostname" = "the %org-name% ($HOSTNAME) Baruwa" }
+ { "Sign Messages Already Processed" = "no" }
+ { "Sign Clean Messages" = "%brules-dir%/sign.clean.msgs.rules" }
+ { "Attach Image To Signature" = "yes" }
+ { "Attach Image To HTML Message Only" = "yes" }
+ { "Allow Multiple HTML Signatures" = "no" }
+ { "Dont Sign HTML If Headers Exist" }
+ { "Mark Infected Messages" = "yes" }
+ { "Mark Unscanned Messages" = "no" }
+ { "Unscanned Header Value" = "Not scanned: please contact %org-long-name% for details" }
+ { "Remove These Headers" = "X-Mozilla-Status: X-Mozilla-Status2:" }
+ { "Deliver Cleaned Messages" = "yes" }
+ { "Notify Senders" = "no" }
+ { "Notify Senders Of Viruses" = "no" }
+ { "Notify Senders Of Blocked Filenames Or Filetypes" = "no" }
+ { "Notify Senders Of Blocked Size Attachments" = "no" }
+ { "Notify Senders Of Other Blocked Content" = "no" }
+ { "Never Notify Senders Of Precedence" = "list bulk" }
+ { "Scanned Modify Subject" = "no" }
+ { "Scanned Subject Text" = "{Scanned}" }
+ { "Virus Modify Subject" = "no" }
+ { "Virus Subject Text" = "{Virus?}" }
+ { "Filename Modify Subject" = "no" }
+ { "Filename Subject Text" = "{Filename?}" }
+ { "Content Modify Subject" = "no" }
+ { "Content Subject Text" = "{Dangerous Content?}" }
+ { "Size Modify Subject" = "no" }
+ { "Size Subject Text" = "{Size}" }
+ { "Disarmed Modify Subject" = "no" }
+ { "Disarmed Subject Text" = "{Disarmed}" }
+ { "Phishing Modify Subject" = "yes" }
+ { "Phishing Subject Text" = "{Suspected Phishing?}" }
+ { "Spam Modify Subject" = "no" }
+ { "Spam Subject Text" = "{Spam?}" }
+ { "High Scoring Spam Modify Subject" = "no" }
+ { "High Scoring Spam Subject Text" = "{Spam?}" }
+ { "Warning Is Attachment" = "yes" }
+ { "Attachment Warning Filename" = "%org-name%-Attachment-Warning.txt" }
+ { "Attachment Encoding Charset" = "ISO-8859-1" }
+ { "Archive Mail" }
+ { "Missing Mail Archive Is" = "file" }
+ { "Send Notices" = "no" }
+ { "Notices Include Full Headers" = "yes" }
+ { "Hide Incoming Work Dir in Notices" = "yes" }
+ { "Notice Signature" = "-- \\n%org-name%\\nEmail Security\\n%website%" }
+ { "Notices From" = "Baruwa" }
+ { "Notices To" = "postmaster" }
+ { "Local Postmaster" = "postmaster" }
+ { "Spam List Definitions" = "%etc-dir%/spam.lists.conf" }
+ { "Virus Scanner Definitions" = "%etc-dir%/virus.scanners.conf" }
+ { "Spam Checks" = "%brules-dir%/spam.checks.rules" }
+ { "Spam List" }
+ { "Spam Domain List" }
+ { "Spam Lists To Be Spam" = "1" }
+ { "Spam Lists To Reach High Score" = "3" }
+ { "Spam List Timeout" = "10" }
+ { "Max Spam List Timeouts" = "7" }
+ { "Spam List Timeouts History" = "10" }
+ { "Is Definitely Not Spam" = "%brules-dir%/approved.senders.rules" }
+ { "Is Definitely Spam" = "%brules-dir%/banned.senders.rules" }
+ { "Definite Spam Is High Scoring" = "yes" }
+ { "Ignore Spam Whitelist If Recipients Exceed" = "20" }
+ { "Max Spam Check Size" = "4000k" }
+ { "Use Watermarking" = "no" }
+ { "Add Watermark" = "yes" }
+ { "Check Watermarks With No Sender" = "yes" }
+ { "Treat Invalid Watermarks With No Sender as Spam" = "nothing" }
+ { "Check Watermarks To Skip Spam Checks" = "yes" }
+ { "Watermark Secret" = "%org-name%-BaruwaFW-Secret" }
+ { "Watermark Lifetime" = "604800" }
+ { "Watermark Header" = "X-%org-name%-BaruwaFW-Watermark:" }
+ { "Use SpamAssassin" = "yes" }
+ { "Max SpamAssassin Size" = "800k" }
+ { "Required SpamAssassin Score" = "%brules-dir%/spam.score.rules" }
+ { "High SpamAssassin Score" = "%brules-dir%/highspam.score.rules" }
+ { "SpamAssassin Auto Whitelist" = "yes" }
+ { "SpamAssassin Timeout" = "75" }
+ { "Max SpamAssassin Timeouts" = "10" }
+ { "SpamAssassin Timeouts History" = "30" }
+ { "Check SpamAssassin If On Spam List" = "yes" }
+ { "Include Binary Attachments In SpamAssassin" = "no" }
+ { "Spam Score" = "yes" }
+ { "Cache SpamAssassin Results" = "yes" }
+ { "SpamAssassin Cache Database File" = "%spool-dir%/incoming/SpamAssassin.cache.db" }
+ { "Rebuild Bayes Every" = "0" }
+ { "Wait During Bayes Rebuild" = "no" }
+ { "Use Custom Spam Scanner" = "no" }
+ { "Max Custom Spam Scanner Size" = "20k" }
+ { "Custom Spam Scanner Timeout" = "20" }
+ { "Max Custom Spam Scanner Timeouts" = "10" }
+ { "Custom Spam Scanner Timeout History" = "20" }
+ { "Spam Actions" = "%brules-dir%/spam.actions.rules" }
+ { "High Scoring Spam Actions" = "%brules-dir%/highspam.actions.rules" }
+ { "Non Spam Actions" = "%rules-dir%/nonspam.actions.rules" }
+ { "SpamAssassin Rule Actions" }
+ { "Sender Spam Report" = "%brules-dir%/senderspamreport.rules" }
+ { "Sender Spam List Report" = "%brules-dir%/senderspamrblreport.rules" }
+ { "Sender SpamAssassin Report" = "%brules-dir%/senderspamsareport.rules" }
+ { "Inline Spam Warning" = "%brules-dir%/inlinespamwarning.rules" }
+ { "Recipient Spam Report" = "%brules-dir%/recipientspamreport.rules" }
+ { "Enable Spam Bounce" = "%rules-dir%/bounce.rules" }
+ { "Bounce Spam As Attachment" = "no" }
+ { "Syslog Facility" = "mail" }
+ { "Log Speed" = "no" }
+ { "Log Spam" = "no" }
+ { "Log Non Spam" = "no" }
+ { "Log Delivery And Non-Delivery" = "no" }
+ { "Log Permitted Filenames" = "no" }
+ { "Log Permitted Filetypes" = "no" }
+ { "Log Permitted File MIME Types" = "no" }
+ { "Log Silent Viruses" = "no" }
+ { "Log Dangerous HTML Tags" = "no" }
+ { "Log SpamAssassin Rule Actions" = "no" }
+ { "SpamAssassin Temporary Dir" = "/var/spool/MailScanner/incoming/SpamAssassin-Temp" }
+ { "SpamAssassin User State Dir" }
+ { "SpamAssassin Install Prefix" }
+ { "SpamAssassin Site Rules Dir" = "/etc/mail/spamassassin" }
+ { "SpamAssassin Local Rules Dir" }
+ { "SpamAssassin Local State Dir" }
+ { "SpamAssassin Default Rules Dir" }
+ { "DB DSN" = "DBI:Pg:database=baruwa" }
+ { "DB Username" = "baruwa" }
+ { "DB Password" = "password" }
+ { "SQL Serial Number" = "SELECT MAX(value) AS confserialnumber FROM configurations WHERE internal='confserialnumber'" }
+ { "SQL Quick Peek" = "SELECT dbvalue(value) AS value FROM quickpeek WHERE external = ? AND (hostname = ? OR hostname='default') LIMIT 1" }
+ { "SQL Config" = "SELECT internal, dbvalue(value) AS value, hostname FROM quickpeek WHERE hostname=? OR hostname='default'" }
+ { "SQL Ruleset" = "SELECT row_number, ruleset AS rule FROM msrulesets WHERE name=?" }
+ { "SQL SpamAssassin Config" }
+ { "SQL Debug" = "no" }
+ { "Sphinx Host" = "127.0.0.1" }
+ { "Sphinx Port" = "9306" }
+ { "MCP Checks" = "no" }
+ { "First Check" = "spam" }
+ { "MCP Required SpamAssassin Score" = "1" }
+ { "MCP High SpamAssassin Score" = "10" }
+ { "MCP Error Score" = "1" }
+ { "MCP Header" = "X-%org-name%-BaruwaFW-MCPCheck:" }
+ { "Non MCP Actions" = "deliver" }
+ { "MCP Actions" = "deliver" }
+ { "High Scoring MCP Actions" = "deliver" }
+ { "Bounce MCP As Attachment" = "no" }
+ { "MCP Modify Subject" = "start" }
+ { "MCP Subject Text" = "{MCP?}" }
+ { "High Scoring MCP Modify Subject" = "start" }
+ { "High Scoring MCP Subject Text" = "{MCP?}" }
+ { "Is Definitely MCP" = "no" }
+ { "Is Definitely Not MCP" = "no" }
+ { "Definite MCP Is High Scoring" = "no" }
+ { "Always Include MCP Report" = "no" }
+ { "Detailed MCP Report" = "yes" }
+ { "Include Scores In MCP Report" = "no" }
+ { "Log MCP" = "no" }
+ { "MCP Max SpamAssassin Timeouts" = "20" }
+ { "MCP Max SpamAssassin Size" = "100k" }
+ { "MCP SpamAssassin Timeout" = "10" }
+ { "MCP SpamAssassin Prefs File" = "%mcp-dir%/mcp.spam.assassin.prefs.conf" }
+ { "MCP SpamAssassin User State Dir" }
+ { "MCP SpamAssassin Local Rules Dir" = "%mcp-dir%" }
+ { "MCP SpamAssassin Default Rules Dir" = "%mcp-dir%" }
+ { "MCP SpamAssassin Install Prefix" = "%mcp-dir%" }
+ { "Recipient MCP Report" = "%report-dir%/recipient.mcp.report.txt" }
+ { "Sender MCP Report" = "%report-dir%/sender.mcp.report.txt" }
+ { "Use Default Rules With Multiple Recipients" = "no" }
+ { "Read IP Address From Received Header" = "no" }
+ { "Spam Score Number Format" = "%d.1f" }
+ { "MailScanner Version Number" = "4.85.5" }
+ { "SpamAssassin Cache Timings" = "1800,300,10800,172800,600" }
+ { "Debug" = "no" }
+ { "Debug SpamAssassin" = "no" }
+ { "Run In Foreground" = "no" }
+ { "Always Looked Up Last" = "&BaruwaLog" }
+ { "Always Looked Up Last After Batch" = "no" }
+ { "Deliver In Background" = "yes" }
+ { "Delivery Method" = "batch" }
+ { "Split Exim Spool" = "no" }
+ { "Lockfile Dir" = "%spool-dir%/incoming/Locks" }
+ { "Custom Functions Dir" = "/usr/share/baruwa/CustomFunctions" }
+ { "Lock Type" }
+ { "Syslog Socket Type" }
+ { "Automatic Syntax Check" = "yes" }
+ { "Minimum Code Status" = "supported" }
+ { "include" = "/etc/MailScanner/conf.d/*.conf" }
\ No newline at end of file
--- /dev/null
+module Test_Mailscanner_Rules =
+let conf = "# JKF 10/08/2007 Adobe Acrobat nastiness
+rename \.fdf$ Dangerous Adobe Acrobat data-file Opening this file can cause auto-loading of any file from the internet
+
+# JKF 04/01/2005 More Microsoft security vulnerabilities
+deny \.ico$ Windows icon file security vulnerability Possible buffer overflow in Windows
+allow \.(jan|feb|mar|apr|may|jun|june|jul|july|aug|sep|sept|oct|nov|dec)\.[a-z0-9]{3}$ - -
+deny+delete \.cur$ Windows cursor file security vulnerability Possible buffer overflow in Windows
+andrew@baruwa.com,andrew@baruwa.net \.reg$ Possible Windows registry attack Windows registry entries are very dangerous in email
+andrew@baruwa.com andrew@baruwa.net \.chm$ Possible compiled Help file-based virus Compiled help files are very dangerous in email
+rename to .ppt \.pps$ Renamed .pps to .ppt Renamed .pps to .ppt
+"
+
+test Mailscanner_Rules.lns get conf =
+ { "#comment" = "JKF 10/08/2007 Adobe Acrobat nastiness" }
+ { "1"
+ { "action" = "rename" }
+ { "regex" = "\.fdf$" }
+ { "log-text" = "Dangerous Adobe Acrobat data-file" }
+ { "user-report" = "Opening this file can cause auto-loading of any file from the internet" }
+ }
+ {}
+ { "#comment" = "JKF 04/01/2005 More Microsoft security vulnerabilities" }
+ { "2"
+ { "action" = "deny" }
+ { "regex" = "\.ico$" }
+ { "log-text" = "Windows icon file security vulnerability" }
+ { "user-report" = "Possible buffer overflow in Windows" }
+ }
+ { "3"
+ { "action" = "allow" }
+ { "regex" = "\.(jan|feb|mar|apr|may|jun|june|jul|july|aug|sep|sept|oct|nov|dec)\.[a-z0-9]{3}$" }
+ { "log-text" = "-" }
+ { "user-report" = "-" }
+ }
+ { "4"
+ { "action" = "deny+delete" }
+ { "regex" = "\.cur$" }
+ { "log-text" = "Windows cursor file security vulnerability" }
+ { "user-report" = "Possible buffer overflow in Windows" }
+ }
+ { "5"
+ { "action" = "andrew@baruwa.com,andrew@baruwa.net" }
+ { "regex" = "\.reg$" }
+ { "log-text" = "Possible Windows registry attack" }
+ { "user-report" = "Windows registry entries are very dangerous in email" }
+ }
+ { "6"
+ { "action" = "andrew@baruwa.com andrew@baruwa.net" }
+ { "regex" = "\.chm$" }
+ { "log-text" = "Possible compiled Help file-based virus" }
+ { "user-report" = "Compiled help files are very dangerous in email" }
+ }
+ { "7"
+ { "action" = "rename to .ppt" }
+ { "regex" = "\.pps$" }
+ { "log-text" = "Renamed .pps to .ppt" }
+ { "user-report" = "Renamed .pps to .ppt" }
+ }
\ No newline at end of file
--- /dev/null
+module Test_MasterPasswd =
+
+let conf = "root:*:0:0:daemon:0:0:Charlie &:/root:/bin/ksh
+sshd:*:27:27::0:0:sshd privsep:/var/empty:/sbin/nologin
+_portmap:*:28:28::0:0:portmap:/var/empty:/sbin/nologin
+test:x:1000:1000:ldap:1434329080:1434933880:Test User,,,:/home/test:/bin/bash
+"
+
+test MasterPasswd.lns get conf =
+ { "root"
+ { "password" = "*" }
+ { "uid" = "0" }
+ { "gid" = "0" }
+ { "class" = "daemon" }
+ { "change_date" = "0" }
+ { "expire_date" = "0" }
+ { "name" = "Charlie &" }
+ { "home" = "/root" }
+ { "shell" = "/bin/ksh" } }
+ { "sshd"
+ { "password" = "*" }
+ { "uid" = "27" }
+ { "gid" = "27" }
+ { "class" }
+ { "change_date" = "0" }
+ { "expire_date" = "0" }
+ { "name" = "sshd privsep" }
+ { "home" = "/var/empty" }
+ { "shell" = "/sbin/nologin" } }
+ { "_portmap"
+ { "password" = "*" }
+ { "uid" = "28" }
+ { "gid" = "28" }
+ { "class" }
+ { "change_date" = "0" }
+ { "expire_date" = "0" }
+ { "name" = "portmap" }
+ { "home" = "/var/empty" }
+ { "shell" = "/sbin/nologin" } }
+ { "test"
+ { "password" = "x" }
+ { "uid" = "1000" }
+ { "gid" = "1000" }
+ { "class" = "ldap" }
+ { "change_date" = "1434329080" }
+ { "expire_date" = "1434933880" }
+ { "name" = "Test User,,," }
+ { "home" = "/home/test" }
+ { "shell" = "/bin/bash" } }
+
+(* Popular on Solaris *)
+test MasterPasswd.lns get "+@some-nis-group:::::::::\n" =
+ { "@nis" = "some-nis-group" }
+
+test MasterPasswd.lns get "+\n" =
+ { "@nisdefault" }
+
+test MasterPasswd.lns get "+:::::::::\n" =
+ { "@nisdefault"
+ { "password" = "" }
+ { "uid" = "" }
+ { "gid" = "" }
+ { "class" }
+ { "change_date" = "" }
+ { "expire_date" = "" }
+ { "name" }
+ { "home" }
+ { "shell" } }
+
+test MasterPasswd.lns get "+:::::::::/sbin/nologin\n" =
+ { "@nisdefault"
+ { "password" = "" }
+ { "uid" = "" }
+ { "gid" = "" }
+ { "class" }
+ { "change_date" = "" }
+ { "expire_date" = "" }
+ { "name" }
+ { "home" }
+ { "shell" = "/sbin/nologin" } }
+
+test MasterPasswd.lns get "+:*:::ldap:::::\n" =
+ { "@nisdefault"
+ { "password" = "*" }
+ { "uid" = "" }
+ { "gid" = "" }
+ { "class" = "ldap" }
+ { "change_date" = "" }
+ { "expire_date" = "" }
+ { "name" }
+ { "home" }
+ { "shell" } }
+
+(* NIS entries with overrides, ticket #339 *)
+test MasterPasswd.lns get "+@bob::::::::/home/bob:/bin/bash\n" =
+ { "@nis" = "bob"
+ { "home" = "/home/bob" }
+ { "shell" = "/bin/bash" } }
+
+(* NIS user entries *)
+test MasterPasswd.lns get "+bob:::::::::\n" =
+ { "@+nisuser" = "bob" }
+
+test MasterPasswd.lns get "+bob:::::::User Comment:/home/bob:/bin/bash\n" =
+ { "@+nisuser" = "bob"
+ { "name" = "User Comment" }
+ { "home" = "/home/bob" }
+ { "shell" = "/bin/bash" } }
+
+test MasterPasswd.lns put "+bob:::::::::\n" after
+ set "@+nisuser" "alice"
+= "+alice:::::::::\n"
+
+test MasterPasswd.lns put "+bob:::::::::\n" after
+ set "@+nisuser/name" "User Comment";
+ set "@+nisuser/home" "/home/bob";
+ set "@+nisuser/shell" "/bin/bash"
+= "+bob:::::::User Comment:/home/bob:/bin/bash\n"
+
+test MasterPasswd.lns get "-bob:::::::::\n" =
+ { "@-nisuser" = "bob" }
+
+test MasterPasswd.lns put "-bob:::::::::\n" after
+ set "@-nisuser" "alice"
+= "-alice:::::::::\n"
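+
+(* Sketch: editing an override field of a NIS entry in place. This
+   assumes only the get/put behaviour already demonstrated above,
+   i.e. that put rewrites just the fields present in the tree. *)
+test MasterPasswd.lns put "+@bob::::::::/home/bob:/bin/bash\n" after
+ set "@nis/home" "/home/robert"
+= "+@bob::::::::/home/robert:/bin/bash\n"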
--- /dev/null
+(*
+Module: Test_MCollective
+ Provides unit tests and examples for the <MCollective> lens.
+*)
+
+module Test_MCollective =
+
+let conf = "topicprefix = /topic/
+main_collective = mcollective
+collectives = mcollective
+libdir = /usr/libexec/mcollective
+logger_type = console
+loglevel = warn
+
+# Plugins
+securityprovider = psk
+plugin.psk = unset
+
+connector = stomp
+plugin.stomp.host = localhost
+plugin.stomp.port = 61613
+plugin.stomp.user = mcollective
+plugin.stomp.password = secret
+
+# Facts
+factsource = yaml # bla
+plugin.yaml=/etc/mcollective/facts.yaml
+"
+
+test MCollective.lns get conf =
+ { "topicprefix" = "/topic/" }
+ { "main_collective" = "mcollective" }
+ { "collectives" = "mcollective" }
+ { "libdir" = "/usr/libexec/mcollective" }
+ { "logger_type" = "console" }
+ { "loglevel" = "warn" }
+ { }
+ { "#comment" = "Plugins" }
+ { "securityprovider" = "psk" }
+ { "plugin.psk" = "unset" }
+ { }
+ { "connector" = "stomp" }
+ { "plugin.stomp.host" = "localhost" }
+ { "plugin.stomp.port" = "61613" }
+ { "plugin.stomp.user" = "mcollective" }
+ { "plugin.stomp.password" = "secret" }
+ { }
+ { "#comment" = "Facts" }
+ { "factsource" = "yaml"
+ { "#comment" = "bla" }
+ }
+ { "plugin.yaml" = "/etc/mcollective/facts.yaml" }
+
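+(* Sketch: changing an existing value in place. Standard Augeas put
+   semantics should preserve the surrounding "key = value" layout. *)
+test MCollective.lns put "loglevel = warn\n" after
+ set "loglevel" "debug"
+= "loglevel = debug\n"
+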
--- /dev/null
+module Test_mdadm_conf =
+
+let conf = "
+# Comment
+device containers
+ # Comment
+DEVICE partitions \ndev
+ /dev/hda* \n /dev/hdc*
+deVI
+ARRAY /dev/md0 UUID=c3d3134f-2aa9-4514-9da3-82ccd1cccc7b Name=foo=bar
+ supeR-minor=3 devicEs=/dev/hda,/dev/hdb Level=1 num-devices=5 spares=2
+ spare-group=bar auTo=yes BITMAP=/path/to/bitmap metadata=frob
+ container=/dev/sda member=1
+MAIL # Initial comment
+ user@example.com # End of line comment
+MAILF user@example.com # MAILFROM can only be abbreviated to 5 characters
+PROGRA /usr/sbin/handle-mdadm-events
+CREA group=system mode=0640 auto=part-8
+HOME <system>
+AUT +1.x Homehost -all
+POL domain=domain1 metadata=imsm path=pci-0000:00:1f.2-scsi-*
+ action=spare
+PART domain=domain1 metadata=imsm path=pci-0000:04:00.0-scsi-[01]*
+ action=include
+"
+
+test Mdadm_conf.lns get conf =
+ {}
+ { "#comment" = "Comment" }
+ { "device"
+ { "containers" }
+ }
+ { "#comment" = "Comment" }
+ { "device"
+ { "partitions" }
+ }
+ { "device"
+ { "name" = "/dev/hda*" }
+ { "name" = "/dev/hdc*" }
+ }
+ { "device" }
+ { "array"
+ { "devicename" = "/dev/md0" }
+ { "uuid" = "c3d3134f-2aa9-4514-9da3-82ccd1cccc7b" }
+ { "name" = "foo=bar" }
+ { "super-minor" = "3" }
+ { "devices" = "/dev/hda,/dev/hdb" }
+ { "level" = "1" }
+ { "num-devices" = "5" }
+ { "spares" = "2" }
+ { "spare-group" = "bar" }
+ { "auto" = "yes" }
+ { "bitmap" = "/path/to/bitmap" }
+ { "metadata" = "frob" }
+ { "container" = "/dev/sda" }
+ { "member" = "1" }
+ }
+ { "mailaddr"
+ { "#comment" = "Initial comment" }
+ { "value" = "user@example.com" }
+ { "#comment" = "End of line comment" }
+ }
+ { "mailfrom"
+ { "value" = "user@example.com" }
+ { "#comment" = "MAILFROM can only be abbreviated to 5 characters" }
+ }
+ { "program"
+ { "value" = "/usr/sbin/handle-mdadm-events" }
+ }
+ { "create"
+ { "group" = "system" }
+ { "mode" = "0640" }
+ { "auto" = "part-8" }
+ }
+ { "homehost"
+ { "value" = "<system>" }
+ }
+ { "auto"
+ { "+" = "1.x" }
+ { "homehost" }
+ { "-" = "all" }
+ }
+ { "policy"
+ { "domain" = "domain1" }
+ { "metadata" = "imsm" }
+ { "path" = "pci-0000:00:1f.2-scsi-*" }
+ { "action" = "spare" }
+ }
+ { "part-policy"
+ { "domain" = "domain1" }
+ { "metadata" = "imsm" }
+ { "path" = "pci-0000:04:00.0-scsi-[01]*" }
+ { "action" = "include" }
+ }
--- /dev/null
+(*
+Module: Test_Memcached
+ Provides unit tests and examples for the <Memcached> lens.
+*)
+
+module Test_Memcached =
+
+let conf = "# memcached default config file
+
+# Run memcached as a daemon. This command is implied, and is not needed for the
+# daemon to run. See the README.Debian that comes with this package for more
+# information.
+-d
+-l 127.0.0.1
+
+# Log memcached's output to /var/log/memcached
+logfile /var/log/memcached.log
+
+# Default connection port is 11211
+-p 11211
+-m 64 # Start with a cap of 64 megs of memory.
+-M
+"
+
+test Memcached.lns get conf =
+ { "#comment" = "memcached default config file" }
+ { }
+ { "#comment" = "Run memcached as a daemon. This command is implied, and is not needed for the" }
+ { "#comment" = "daemon to run. See the README.Debian that comes with this package for more" }
+ { "#comment" = "information." }
+ { "d" }
+ { "l" = "127.0.0.1" }
+ { }
+ { "#comment" = "Log memcached's output to /var/log/memcached" }
+ { "logfile" = "/var/log/memcached.log" }
+ { }
+ { "#comment" = "Default connection port is 11211" }
+ { "p" = "11211" }
+ { "m" = "64"
+ { "#comment" = "Start with a cap of 64 megs of memory." }
+ }
+ { "M" }
+
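+(* Sketch: updating an option value in place; assumes put keeps the
+   "-p <value>" flag form shown in the test above. *)
+test Memcached.lns put "-p 11211\n" after
+ set "p" "11212"
+= "-p 11212\n"
+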
--- /dev/null
+(* Test for mke2fs lens *)
+
+module Test_mke2fs =
+
+ let conf = "# This is a comment
+; and another comment
+
+[defaults]
+ base_features = sparse_super,filetype,resize_inode,dir_index,ext_attr
+ default_mntopts = acl,user_xattr
+ enable_periodic_fsck = 0
+ blocksize = 4096
+ inode_size = 256
+ ; here goes inode_ratio
+ inode_ratio = 16384
+
+[fs_types]
+ ; here we have fs_types
+ ext4dev = {
+ # this is ext4dev conf
+
+ features = has_journal,^extent
+ auto_64-bit_support = 1
+ inode_size = 256
+ options = test_fs=1
+ }
+ small = {
+ blocksize = 1024
+ inode_size = 128
+ inode_ratio = 4096
+ }
+ largefile = {
+ inode_ratio = 1048576
+ blocksize = -1
+ }
+
+[options]
+ proceed_delay = 1
+ sync_kludge = 1
+"
+
+ test Mke2fs.lns get conf =
+ { "#comment" = "This is a comment" }
+ { "#comment" = "and another comment" }
+ {}
+ { "defaults"
+ { "base_features"
+ { "sparse_super" }
+ { "filetype" }
+ { "resize_inode" }
+ { "dir_index" }
+ { "ext_attr" } }
+ { "default_mntopts"
+ { "acl" }
+ { "user_xattr" } }
+ { "enable_periodic_fsck" = "0" }
+ { "blocksize" = "4096" }
+ { "inode_size" = "256" }
+ { "#comment" = "here goes inode_ratio" }
+ { "inode_ratio" = "16384" }
+ {} }
+ { "fs_types"
+ { "#comment" = "here we have fs_types" }
+ { "filesystem" = "ext4dev"
+ { "#comment" = "this is ext4dev conf" }
+ {}
+ { "features"
+ { "has_journal" }
+ { "extent"
+ { "disable" } } }
+ { "auto_64-bit_support" = "1" }
+ { "inode_size" = "256" }
+ { "options"
+ { "test_fs" = "1" } } }
+ { "filesystem" = "small"
+ { "blocksize" = "1024" }
+ { "inode_size" = "128" }
+ { "inode_ratio" = "4096" } }
+ { "filesystem" = "largefile"
+ { "inode_ratio" = "1048576" }
+ { "blocksize" = "-1" } }
+ {} }
+ { "options"
+ { "proceed_delay" = "1" }
+ { "sync_kludge" = "1" } }
+
+
+ let quoted_conf = "[defaults]
+ base_features = \"sparse_super,filetype,resize_inode,dir_index,ext_attr\"
+
+[fs_types]
+ ext4dev = {
+ features = \"has_journal,^extent\"
+ default_mntopts = \"user_xattr\"
+ encoding = \"utf8\"
+ encoding = \"\"
+ }
+"
+
+ test Mke2fs.lns get quoted_conf =
+ { "defaults"
+ { "base_features"
+ { "sparse_super" }
+ { "filetype" }
+ { "resize_inode" }
+ { "dir_index" }
+ { "ext_attr" } }
+ {} }
+ { "fs_types"
+ { "filesystem" = "ext4dev"
+ { "features"
+ { "has_journal" }
+ { "extent"
+ { "disable" } } }
+ { "default_mntopts"
+ { "user_xattr" } }
+ { "encoding" = "utf8" }
+ { "encoding" }
+ } }
+
+
+test Mke2fs.common_entry
+ put "features = has_journal,^extent\n"
+ after set "/features/has_journal/disable" "";
+ rm "/features/extent/disable" = "features = ^has_journal,extent\n"
+
--- /dev/null
+module Test_modprobe =
+
+(* Based on 04config.sh from module-init-tools *)
+
+let conf = "# Various aliases
+alias alias_to_foo foo
+alias alias_to_bar bar
+alias alias_to_export_dep-$BITNESS export_dep-$BITNESS
+
+# Various options, including options to aliases.
+options alias_to_export_dep-$BITNESS I am alias to export_dep
+options alias_to_noexport_nodep-$BITNESS_with_tabbed_options index=0 id=\"Thinkpad\" isapnp=0 \\
+\tport=0x530 cport=0x538 fm_port=0x388 \\
+\tmpu_port=-1 mpu_irq=-1 \\
+\tirq=9 dma1=1 dma2=3 \\
+\tenable=1 isapnp=0
+
+# Blacklist
+blacklist watchdog_drivers \t
+
+# Install commands
+install bar echo Installing bar
+install foo echo Installing foo
+install export_nodep-$BITNESS echo Installing export_nodep
+
+# Remove commands
+remove bar echo Removing bar
+remove foo echo Removing foo
+remove export_nodep-$BITNESS echo Removing export_nodep
+
+# Softdep
+softdep uhci-hcd post: foo
+softdep uhci-hcd pre: ehci-hcd foo
+softdep uhci-hcd pre: ehci-hcd foo post: foo
+"
+
+test Modprobe.lns get conf =
+ { "#comment" = "Various aliases" }
+ { "alias" = "alias_to_foo"
+ { "modulename" = "foo" }
+ }
+ { "alias" = "alias_to_bar"
+ { "modulename" = "bar" }
+ }
+ { "alias" = "alias_to_export_dep-$BITNESS"
+ { "modulename" = "export_dep-$BITNESS" }
+ }
+ { }
+ { "#comment" = "Various options, including options to aliases." }
+ { "options" = "alias_to_export_dep-$BITNESS"
+ { "I" }
+ { "am" }
+ { "alias" }
+ { "to" }
+ { "export_dep" }
+ }
+ { "options" = "alias_to_noexport_nodep-$BITNESS_with_tabbed_options"
+ { "index" = "0" }
+ { "id" = "\"Thinkpad\"" }
+ { "isapnp" = "0" }
+ { "port" = "0x530" }
+ { "cport" = "0x538" }
+ { "fm_port" = "0x388" }
+ { "mpu_port" = "-1" }
+ { "mpu_irq" = "-1" }
+ { "irq" = "9" }
+ { "dma1" = "1" }
+ { "dma2" = "3" }
+ { "enable" = "1" }
+ { "isapnp" = "0" }
+ }
+ { }
+ { "#comment" = "Blacklist" }
+ { "blacklist" = "watchdog_drivers" }
+ { }
+ { "#comment" = "Install commands" }
+ { "install" = "bar"
+ { "command" = "echo Installing bar" }
+ }
+ { "install" = "foo"
+ { "command" = "echo Installing foo" }
+ }
+ { "install" = "export_nodep-$BITNESS"
+ { "command" = "echo Installing export_nodep" }
+ }
+ { }
+ { "#comment" = "Remove commands" }
+ { "remove" = "bar"
+ { "command" = "echo Removing bar" }
+ }
+ { "remove" = "foo"
+ { "command" = "echo Removing foo" }
+ }
+ { "remove" = "export_nodep-$BITNESS"
+ { "command" = "echo Removing export_nodep" }
+ }
+ { }
+ { "#comment" = "Softdep" }
+ { "softdep" = "uhci-hcd"
+ { "post" = "foo" }
+ }
+ { "softdep" = "uhci-hcd"
+ { "pre" = "ehci-hcd" }
+ { "pre" = "foo" }
+ }
+ { "softdep" = "uhci-hcd"
+ { "pre" = "ehci-hcd" }
+ { "pre" = "foo" }
+ { "post" = "foo" }
+ }
+
+
+(* eol-comments *)
+test Modprobe.lns get "blacklist brokenmodule # never worked\n" =
+ { "blacklist" = "brokenmodule"
+ { "#comment" = "never worked" } }
+
+
+(* Ticket 108 *)
+let options_space_quote = "options name attr1=\"val\" attr2=\"val2 val3\"\n"
+
+test Modprobe.entry get options_space_quote =
+ { "options" = "name"
+ { "attr1" = "\"val\"" }
+ { "attr2" = "\"val2 val3\"" }
+ }
+
+(* Allow spaces around the '=', BZ 826752 *)
+test Modprobe.entry get "options ipv6 disable = 1\n" =
+ { "options" = "ipv6"
+ { "disable" = "1" } }
+
+(* Support multiline split commands, Ubuntu bug #1054306 *)
+test Modprobe.lns get "# /etc/modprobe.d/iwlwifi.conf
+# iwlwifi will dynamically load either iwldvm or iwlmvm depending on the
+# microcode file installed on the system. When removing iwlwifi, first
+# remove the iwl?vm module and then iwlwifi.
+remove iwlwifi \
+(/sbin/lsmod | grep -o -e ^iwlmvm -e ^iwldvm -e ^iwlwifi | xargs /sbin/rmmod) \
+&& /sbin/modprobe -r mac80211\n" =
+ { "#comment" = "/etc/modprobe.d/iwlwifi.conf" }
+ { "#comment" = "iwlwifi will dynamically load either iwldvm or iwlmvm depending on the" }
+ { "#comment" = "microcode file installed on the system. When removing iwlwifi, first" }
+ { "#comment" = "remove the iwl?vm module and then iwlwifi." }
+ { "remove" = "iwlwifi"
+ { "command" = "(/sbin/lsmod | grep -o -e ^iwlmvm -e ^iwldvm -e ^iwlwifi | xargs /sbin/rmmod) \\\n&& /sbin/modprobe -r mac80211" }
+ }
--- /dev/null
+module Test_Modules =
+
+let conf = "# /etc/modules: kernel modules to load at boot time.
+
+lp
+rtc
+"
+
+test Modules.lns get conf =
+ { "#comment" = "/etc/modules: kernel modules to load at boot time." }
+ { }
+ { "lp" }
+ { "rtc" }
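+
+(* Sketch: removing an entry from the tree deletes its whole line;
+   relies only on the flat structure shown above. *)
+test Modules.lns put "lp\nrtc\n" after
+ rm "rtc"
+= "lp\n"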
--- /dev/null
+module Test_modules_conf =
+
+(* Based on 04config.sh from module-init-tools *)
+
+let conf = "# Various aliases
+alias alias_to_foo foo
+alias alias_to_bar bar
+alias alias_to_export_dep-$BITNESS export_dep-$BITNESS
+
+# Various options, including options to aliases.
+options alias_to_export_dep-$BITNESS I am alias to export_dep
+options alias_to_noexport_nodep-$BITNESS_with_tabbed_options index=0 id=\"Thinkpad\" isapnp=0 \\
+\tport=0x530 cport=0x538 fm_port=0x388 \\
+\tmpu_port=-1 mpu_irq=-1 \\
+\tirq=9 dma1=1 dma2=3 \\
+\tenable=1 isapnp=0
+
+# Install commands
+install bar echo Installing bar
+install foo echo Installing foo
+install export_nodep-$BITNESS echo Installing export_nodep
+
+# Pre- and post- install something
+pre-install ide-scsi modprobe ide-cd # load ide-cd before ide-scsi
+post-install serial /etc/init.d/setserial modload > /dev/null 2> /dev/null
+
+# Remove commands
+remove bar echo Removing bar
+remove foo echo Removing foo
+remove export_nodep-$BITNESS echo Removing export_nodep
+
+#Pre- and post- remove something
+pre-remove serial /etc/init.d/setserial modsave > /dev/null 2> /dev/null
+post-remove bttv rmmod tuner
+
+# Misc other directives
+probeall /dev/cdroms
+keep
+path=/lib/modules/`uname -r`/alsa
+"
+
+test Modules_conf.lns get conf =
+ { "#comment" = "Various aliases" }
+ { "alias" = "alias_to_foo"
+ { "modulename" = "foo" }
+ }
+ { "alias" = "alias_to_bar"
+ { "modulename" = "bar" }
+ }
+ { "alias" = "alias_to_export_dep-$BITNESS"
+ { "modulename" = "export_dep-$BITNESS" }
+ }
+ { }
+ { "#comment" = "Various options, including options to aliases." }
+ { "options" = "alias_to_export_dep-$BITNESS"
+ { "I" }
+ { "am" }
+ { "alias" }
+ { "to" }
+ { "export_dep" }
+ }
+ { "options" = "alias_to_noexport_nodep-$BITNESS_with_tabbed_options"
+ { "index" = "0" }
+ { "id" = "\"Thinkpad\"" }
+ { "isapnp" = "0" }
+ { "port" = "0x530" }
+ { "cport" = "0x538" }
+ { "fm_port" = "0x388" }
+ { "mpu_port" = "-1" }
+ { "mpu_irq" = "-1" }
+ { "irq" = "9" }
+ { "dma1" = "1" }
+ { "dma2" = "3" }
+ { "enable" = "1" }
+ { "isapnp" = "0" }
+ }
+ { }
+ { "#comment" = "Install commands" }
+ { "install" = "bar"
+ { "command" = "echo Installing bar" }
+ }
+ { "install" = "foo"
+ { "command" = "echo Installing foo" }
+ }
+ { "install" = "export_nodep-$BITNESS"
+ { "command" = "echo Installing export_nodep" }
+ }
+ { }
+ { "#comment" = "Pre- and post- install something" }
+ { "pre-install" = "ide-scsi"
+ { "command" = "modprobe ide-cd" }
+ { "#comment" = "load ide-cd before ide-scsi" }
+ }
+ { "post-install" = "serial"
+ { "command" = "/etc/init.d/setserial modload > /dev/null 2> /dev/null" }
+ }
+ { }
+ { "#comment" = "Remove commands" }
+ { "remove" = "bar"
+ { "command" = "echo Removing bar" }
+ }
+ { "remove" = "foo"
+ { "command" = "echo Removing foo" }
+ }
+ { "remove" = "export_nodep-$BITNESS"
+ { "command" = "echo Removing export_nodep" }
+ }
+ { }
+ { "#comment" = "Pre- and post- remove something" }
+ { "pre-remove" = "serial"
+ { "command" = "/etc/init.d/setserial modsave > /dev/null 2> /dev/null" }
+ }
+ { "post-remove" = "bttv"
+ { "command" = "rmmod tuner" }
+ }
+ { }
+ { "#comment" = "Misc other directives" }
+ { "probeall" = "/dev/cdroms" }
+ { "keep" }
+ { "path" = "/lib/modules/`uname -r`/alsa" }
--- /dev/null
+(*
+Module: Test_MongoDBServer
+ Provides unit tests and examples for the <MongoDBServer> lens.
+*)
+
+module Test_MongoDBServer =
+
+(* Variable: conf *)
+let conf = "port = 27017
+fork = true
+pidfilepath = /var/run/mongodb/mongodb.pid
+logpath = /var/log/mongodb/mongodb.log
+dbpath =/var/lib/mongodb
+journal = true
+nohttpinterface = true
+"
+
+(* Test: MongoDBServer.lns *)
+test MongoDBServer.lns get conf =
+ { "port" = "27017" }
+ { "fork" = "true" }
+ { "pidfilepath" = "/var/run/mongodb/mongodb.pid" }
+ { "logpath" = "/var/log/mongodb/mongodb.log" }
+ { "dbpath" = "/var/lib/mongodb" }
+ { "journal" = "true" }
+ { "nohttpinterface" = "true" }
+
+(* Test: MongoDBServer.lns
+ Values have to be without quotes *)
+test MongoDBServer.lns get "port = 27017\n" =
+ { "port" = "27017" }
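+
+(* Sketch: modifying an existing value in place; assumes put keeps
+   the original "key = value" spacing of the edited line. *)
+test MongoDBServer.lns put "port = 27017\n" after
+ set "port" "27018"
+= "port = 27018\n"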
--- /dev/null
+module Test_monit =
+
+let conf = "# Configuration file for monit.
+#
+set alert root@localhost
+include /my/monit/conf
+
+check process sshd
+ start program \"/etc/init.d/ssh start\"
+ if failed port 22 protocol ssh then restart
+
+check process httpd with pidfile /usr/local/apache2/logs/httpd.pid
+ group www-data
+ start program \"/usr/local/apache2/bin/apachectl start\"
+ stop program \"/usr/local/apache2/bin/apachectl stop\"
+"
+
+test Monit.lns get conf =
+ { "#comment" = "Configuration file for monit." }
+ {}
+ { "set"
+ {"alert" = "root@localhost" } }
+ { "include" = "/my/monit/conf" }
+ {}
+ { "check"
+ { "process" = "sshd" }
+ { "start" = "program \"/etc/init.d/ssh start\"" }
+ { "if" = "failed port 22 protocol ssh then restart" } }
+ {}
+ { "check"
+ { "process" = "httpd with pidfile /usr/local/apache2/logs/httpd.pid" }
+ { "group" = "www-data" }
+ { "start" = "program \"/usr/local/apache2/bin/apachectl start\"" }
+ { "stop" = "program \"/usr/local/apache2/bin/apachectl stop\"" }
+}
--- /dev/null
+module Test_multipath =
+
+ let conf = "# Blacklist all devices by default.
+blacklist {
+ devnode \"*\"
+ wwid *
+}
+
+# By default, devices with vendor = \"IBM\" and product = \"S/390.*\" are
+# blacklisted. To enable mulitpathing on these devies, uncomment the
+# following lines.
+blacklist_exceptions {
+ device {
+ vendor \"IBM\"
+ product \"S/390.*\"
+ }
+}
+
+#
+# Here is an example of how to configure some standard options.
+#
+
+defaults {
+ udev_dir /dev
+ polling_interval 10
+ selector \"round-robin 0\"
+ path_grouping_policy multibus
+ getuid_callout \"/sbin/scsi_id --whitelisted /dev/%n\"
+ prio alua
+ path_checker readsector0
+ rr_min_io 100
+ max_fds 8192
+ rr_weight priorities
+ failback immediate
+ no_path_retry fail
+ user_friendly_names yes
+ dev_loss_tmo 30
+ max_polling_interval 300
+ verbosity 2
+ reassign_maps yes
+ fast_io_fail_tmo 5
+ async_timeout 5
+ flush_on_last_del no
+ delay_watch_checks no
+ delay_wait_checks no
+ find_multipaths yes
+ checker_timeout 10
+ hwtable_regex_match yes
+ reload_readwrite no
+ force_sync yes
+ config_dir /etc/multipath/conf.d
+}
+
+# Sections without empty lines in between
+blacklist {
+ wwid 26353900f02796769
+ devnode \"^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*\"
+
+ # Comments and blank lines inside a section
+ devnode \"^hd[a-z]\"
+
+}
+multipaths {
+ multipath {
+ wwid 3600508b4000156d700012000000b0000
+ alias yellow
+ path_grouping_policy multibus
+ path_checker readsector0
+ path_selector \"round-robin 0\"
+ failback manual
+ rr_weight priorities
+ no_path_retry 5
+ flush_on_last_del yes
+ }
+ multipath {
+ wwid 1DEC_____321816758474
+ alias red
+ }
+}
+devices {
+ device {
+ vendor \"COMPAQ \"
+ product \"HSV110 (C)COMPAQ\"
+ path_grouping_policy multibus
+ getuid_callout \"/sbin/scsi_id --whitelisted /dev/%n\"
+ path_checker readsector0
+ path_selector \"round-robin 0\"
+ hardware_handler \"0\"
+ failback 15
+ rr_weight priorities
+ rr_min_io_rq 75
+ no_path_retry queue
+ reservation_key a12345
+ }
+ device {
+ vendor \"COMPAQ \"
+ product \"MSA1000 \"
+ path_grouping_policy multibus
+ polling_interval 9
+ delay_watch_checks 10
+ delay_wait_checks 10
+ }
+}\n"
+
+test Multipath.lns get conf =
+ { "#comment" = "Blacklist all devices by default." }
+ { "blacklist"
+ { "devnode" = "*" }
+ { "wwid" = "*" } }
+ { }
+ { "#comment" = "By default, devices with vendor = \"IBM\" and product = \"S/390.*\" are" }
+ { "#comment" = "blacklisted. To enable mulitpathing on these devies, uncomment the" }
+ { "#comment" = "following lines." }
+ { "blacklist_exceptions"
+ { "device"
+ { "vendor" = "IBM" }
+ { "product" = "S/390.*" } } }
+ { }
+ { }
+ { "#comment" = "Here is an example of how to configure some standard options." }
+ { }
+ { }
+ { "defaults"
+ { "udev_dir" = "/dev" }
+ { "polling_interval" = "10" }
+ { "selector" = "round-robin 0" }
+ { "path_grouping_policy" = "multibus" }
+ { "getuid_callout" = "/sbin/scsi_id --whitelisted /dev/%n" }
+ { "prio" = "alua" }
+ { "path_checker" = "readsector0" }
+ { "rr_min_io" = "100" }
+ { "max_fds" = "8192" }
+ { "rr_weight" = "priorities" }
+ { "failback" = "immediate" }
+ { "no_path_retry" = "fail" }
+ { "user_friendly_names" = "yes" }
+ { "dev_loss_tmo" = "30" }
+ { "max_polling_interval" = "300" }
+ { "verbosity" = "2" }
+ { "reassign_maps" = "yes" }
+ { "fast_io_fail_tmo" = "5" }
+ { "async_timeout" = "5" }
+ { "flush_on_last_del" = "no" }
+ { "delay_watch_checks" = "no" }
+ { "delay_wait_checks" = "no" }
+ { "find_multipaths" = "yes" }
+ { "checker_timeout" = "10" }
+ { "hwtable_regex_match" = "yes" }
+ { "reload_readwrite" = "no" }
+ { "force_sync" = "yes" }
+ { "config_dir" = "/etc/multipath/conf.d" } }
+ { }
+ { "#comment" = "Sections without empty lines in between" }
+ { "blacklist"
+ { "wwid" = "26353900f02796769" }
+ { "devnode" = "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*" }
+ { }
+ { "#comment" = "Comments and blank lines inside a section" }
+ { "devnode" = "^hd[a-z]" }
+ { } }
+ { "multipaths"
+ { "multipath"
+ { "wwid" = "3600508b4000156d700012000000b0000" }
+ { "alias" = "yellow" }
+ { "path_grouping_policy" = "multibus" }
+ { "path_checker" = "readsector0" }
+ { "path_selector" = "round-robin 0" }
+ { "failback" = "manual" }
+ { "rr_weight" = "priorities" }
+ { "no_path_retry" = "5" }
+ { "flush_on_last_del" = "yes" } }
+ { "multipath"
+ { "wwid" = "1DEC_____321816758474" }
+ { "alias" = "red" } } }
+ { "devices"
+ { "device"
+ { "vendor" = "COMPAQ " }
+ { "product" = "HSV110 (C)COMPAQ" }
+ { "path_grouping_policy" = "multibus" }
+ { "getuid_callout" = "/sbin/scsi_id --whitelisted /dev/%n" }
+ { "path_checker" = "readsector0" }
+ { "path_selector" = "round-robin 0" }
+ { "hardware_handler" = "0" }
+ { "failback" = "15" }
+ { "rr_weight" = "priorities" }
+ { "rr_min_io_rq" = "75" }
+ { "no_path_retry" = "queue" }
+ { "reservation_key" = "a12345" } }
+ { "device"
+ { "vendor" = "COMPAQ " }
+ { "product" = "MSA1000 " }
+ { "path_grouping_policy" = "multibus" }
+ { "polling_interval" = "9" }
+ { "delay_watch_checks" = "10" }
+ { "delay_wait_checks" = "10" } } }
+
+test Multipath.lns get "blacklist {
+ devnode \".*\"
+ wwid \"([a-z]+|ab?c).*\"
+}\n" =
+ { "blacklist"
+ { "devnode" = ".*" }
+ { "wwid" = "([a-z]+|ab?c).*" } }
+
+test Multipath.lns get "blacklist {\n property \"[a-z]+\"\n}\n" =
+ { "blacklist"
+ { "property" = "[a-z]+" } }
+
+(* Check that '""' inside a string works *)
+test Multipath.lns get "blacklist {
+ device {
+ vendor SomeCorp
+ product \"2.5\"\" SSD\"
+ }
+}\n" =
+ { "blacklist"
+ { "device"
+ { "vendor" = "SomeCorp" }
+ { "product" = "2.5\"\" SSD" } } }
+
+(* Issue #583 - allow optional quotes around values and strip them *)
+test Multipath.lns get "devices {
+ device {
+ vendor \"COMPELNT\"
+ product \"Compellent Vol\"
+ path_grouping_policy \"multibus\"
+ path_checker \"tur\"
+ features \"0\"
+ hardware_handler \"0\"
+ prio \"const\"
+ failback \"immediate\"
+ rr_weight \"uniform\"
+ no_path_retry \"queue\"
+ }
+}\n" =
+ { "devices"
+ { "device"
+ { "vendor" = "COMPELNT" }
+ { "product" = "Compellent Vol" }
+ { "path_grouping_policy" = "multibus" }
+ { "path_checker" = "tur" }
+ { "features" = "0" }
+ { "hardware_handler" = "0" }
+ { "prio" = "const" }
+ { "failback" = "immediate" }
+ { "rr_weight" = "uniform" }
+ { "no_path_retry" = "queue" } } }
--- /dev/null
+module Test_mysql =
+
+ let conf = "# The MySQL database server configuration file.
+#
+# You can copy this to one of:
+#
+# One can use all long options that the program supports.
+# Run program with --help to get a list of available options and with
+# --print-defaults to see which it would actually understand and use.
+#
+# For explanations see
+# http://dev.mysql.com/doc/mysql/en/server-system-variables.html
+
+# This will be passed to all mysql clients
+# It has been reported that passwords should be enclosed with ticks/quotes
+# Remember to edit /etc/mysql/debian.cnf when changing the socket location.
+ [client]
+ port = 3306
+socket = /var/run/mysqld/mysqld.sock
+
+# Here is entries for some specific programs
+# The following values assume you have at least 32M ram
+
+# This was formally known as [safe_mysqld]. Both versions are currently parsed.
+ [mysqld_safe]
+socket = /var/run/mysqld/mysqld.sock
+nice = 0
+
+[mysqld]
+#
+# * Basic Settings
+#
+user = mysql
+pid-file = /var/run/mysqld/mysqld.pid
+socket = /var/run/mysqld/mysqld.sock
+port = 3306
+basedir = /usr
+datadir = /var/lib/mysql
+tmpdir = /tmp
+language = /usr/share/mysql/english
+ skip-external-locking
+#
+# Instead of skip-networking the default is now to listen only on
+# localhost which is more compatible and is not less secure.
+bind-address = 127.0.0.1
+#
+# * Fine Tuning
+#
+key_buffer = 16M
+max_allowed_packet = 16M
+thread_stack = 128K
+thread_cache_size = 8
+#max_connections = 100
+#table_cache = 64
+#thread_concurrency = 10
+#
+# * Query Cache Configuration
+#
+query_cache_limit = 1M
+query_cache_size = 16M
+#
+# * Logging and Replication
+#
+# Both location gets rotated by the cronjob.
+# Be aware that this log type is a performance killer.
+#log = /var/log/mysql/mysql.log
+#
+# Error logging goes to syslog. This is a Debian improvement :)
+#
+# Here you can see queries with especially long duration
+#log_slow_queries = /var/log/mysql/mysql-slow.log
+#long_query_time = 2
+#log-queries-not-using-indexes
+#
+# The following can be used as easy to replay backup logs or for replication.
+#server-id = 1
+#log_bin = /var/log/mysql/mysql-bin.log
+# WARNING: Using expire_logs_days without bin_log crashes the server! See README.Debian!
+#expire_logs_days = 10
+#max_binlog_size = 100M
+#binlog_do_db = include_database_name
+#binlog_ignore_db = include_database_name
+#
+# * BerkeleyDB
+#
+# Using BerkeleyDB is now discouraged as its support will cease in 5.1.12.
+skip-bdb
+#
+# * InnoDB
+#
+# InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
+# Read the manual for more InnoDB related options. There are many!
+# You might want to disable InnoDB to shrink the mysqld process by circa 100MB.
+#skip-innodb
+#
+# * Security Features
+#
+# Read the manual, too, if you want chroot!
+# chroot = /var/lib/mysql/
+#
+#
+# ssl-ca=/etc/mysql/cacert.pem
+# ssl-cert=/etc/mysql/server-cert.pem
+# ssl-key=/etc/mysql/server-key.pem
+
+
+
+[mysqldump]
+quick
+quote-names
+max_allowed_packet = 16M
+
+[mysql]
+#no-auto-rehash # faster start of mysql but no tab completition
+
+[isamchk]
+key_buffer = 16M
+
+#
+# * NDB Cluster
+#
+# See /usr/share/doc/mysql-server-*/README.Debian for more information.
+#
+# The following configuration is read by the NDB Data Nodes (ndbd processes)
+# not from the NDB Management Nodes (ndb_mgmd processes).
+#
+# [MYSQL_CLUSTER]
+# ndb-connectstring=127.0.0.1
+
+
+#
+# * IMPORTANT: Additional settings that can override those from this file!
+#
+!includedir /etc/mysql/conf.d/
+!include /etc/mysql/other_conf.d/someconf.cnf
+# Another comment
+
+"
+
+
+ test MySQL.lns get conf = { "#comment" = "The MySQL database server configuration file." }
+ {}
+ { "#comment" = "You can copy this to one of:" }
+ {}
+ { "#comment" = "One can use all long options that the program supports." }
+ { "#comment" = "Run program with --help to get a list of available options and with" }
+ { "#comment" = "--print-defaults to see which it would actually understand and use." }
+ {}
+ { "#comment" = "For explanations see" }
+ { "#comment" = "http://dev.mysql.com/doc/mysql/en/server-system-variables.html" }
+ { }
+ { "#comment" = "This will be passed to all mysql clients" }
+ { "#comment" = "It has been reported that passwords should be enclosed with ticks/quotes" }
+ { "#comment" = "Remember to edit /etc/mysql/debian.cnf when changing the socket location." }
+ { "target" = "client"
+ { "port" = "3306" }
+ { "socket" = "/var/run/mysqld/mysqld.sock" }
+ { }
+ { "#comment" = "Here is entries for some specific programs" }
+ { "#comment" = "The following values assume you have at least 32M ram" }
+ { }
+ { "#comment" = "This was formally known as [safe_mysqld]. Both versions are currently parsed." }
+ }
+ { "target" = "mysqld_safe"
+ { "socket" = "/var/run/mysqld/mysqld.sock" }
+ { "nice" = "0" }
+ { }
+ }
+ { "target" = "mysqld"
+ {}
+ { "#comment" = "* Basic Settings" }
+ {}
+ { "user" = "mysql" }
+ { "pid-file" = "/var/run/mysqld/mysqld.pid" }
+ { "socket" = "/var/run/mysqld/mysqld.sock" }
+ { "port" = "3306" }
+ { "basedir" = "/usr" }
+ { "datadir" = "/var/lib/mysql" }
+ { "tmpdir" = "/tmp" }
+ { "language" = "/usr/share/mysql/english" }
+ { "skip-external-locking" = "" }
+ {}
+ { "#comment" = "Instead of skip-networking the default is now to listen only on" }
+ { "#comment" = "localhost which is more compatible and is not less secure." }
+ { "bind-address" = "127.0.0.1" }
+ {}
+ { "#comment" = "* Fine Tuning" }
+ {}
+ { "key_buffer" = "16M" }
+ { "max_allowed_packet" = "16M" }
+ { "thread_stack" = "128K" }
+ { "thread_cache_size" = "8" }
+ { "#comment" = "max_connections = 100" }
+ { "#comment" = "table_cache = 64" }
+ { "#comment" = "thread_concurrency = 10" }
+ {}
+ { "#comment" = "* Query Cache Configuration" }
+ {}
+ { "query_cache_limit" = "1M" }
+ { "query_cache_size" = "16M" }
+ {}
+ { "#comment" = "* Logging and Replication" }
+ {}
+ { "#comment" = "Both location gets rotated by the cronjob." }
+ { "#comment" = "Be aware that this log type is a performance killer." }
+ { "#comment" = "log = /var/log/mysql/mysql.log" }
+ {}
+ { "#comment" = "Error logging goes to syslog. This is a Debian improvement :)" }
+ {}
+ { "#comment" = "Here you can see queries with especially long duration" }
+ { "#comment" = "log_slow_queries = /var/log/mysql/mysql-slow.log" }
+ { "#comment" = "long_query_time = 2" }
+ { "#comment" = "log-queries-not-using-indexes" }
+ {}
+ { "#comment" = "The following can be used as easy to replay backup logs or for replication." }
+ { "#comment" = "server-id = 1" }
+ { "#comment" = "log_bin = /var/log/mysql/mysql-bin.log" }
+ { "#comment" = "WARNING: Using expire_logs_days without bin_log crashes the server! See README.Debian!" }
+ { "#comment" = "expire_logs_days = 10" }
+ { "#comment" = "max_binlog_size = 100M" }
+ { "#comment" = "binlog_do_db = include_database_name" }
+ { "#comment" = "binlog_ignore_db = include_database_name" }
+ {}
+ { "#comment" = "* BerkeleyDB" }
+ {}
+ { "#comment" = "Using BerkeleyDB is now discouraged as its support will cease in 5.1.12." }
+ { "skip-bdb" = "" }
+ {}
+ { "#comment" = "* InnoDB" }
+ {}
+ { "#comment" = "InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/." }
+ { "#comment" = "Read the manual for more InnoDB related options. There are many!" }
+ { "#comment" = "You might want to disable InnoDB to shrink the mysqld process by circa 100MB." }
+ { "#comment" = "skip-innodb" }
+ {}
+ { "#comment" = "* Security Features" }
+ {}
+ { "#comment" = "Read the manual, too, if you want chroot!" }
+ { "#comment" = "chroot = /var/lib/mysql/" }
+ {}
+ {}
+ { "#comment" = "ssl-ca=/etc/mysql/cacert.pem" }
+ { "#comment" = "ssl-cert=/etc/mysql/server-cert.pem" }
+ { "#comment" = "ssl-key=/etc/mysql/server-key.pem" }
+ { }
+ { }
+ { }
+ }
+ { "target" = "mysqldump"
+ { "quick" = "" }
+ { "quote-names" = "" }
+ { "max_allowed_packet" = "16M" }
+ { }
+ }
+ { "target" = "mysql"
+ { "#comment" = "no-auto-rehash # faster start of mysql but no tab completition" }
+ { }
+ }
+ { "target" = "isamchk"
+ { "key_buffer" = "16M" }
+ { }
+ {}
+ { "#comment" = "* NDB Cluster" }
+ {}
+ { "#comment" = "See /usr/share/doc/mysql-server-*/README.Debian for more information." }
+ {}
+ { "#comment" = "The following configuration is read by the NDB Data Nodes (ndbd processes)" }
+ { "#comment" = "not from the NDB Management Nodes (ndb_mgmd processes)." }
+ {}
+ { "#comment" = "[MYSQL_CLUSTER]" }
+ { "#comment" = "ndb-connectstring=127.0.0.1" }
+ { }
+ { }
+ {}
+ { "#comment" = "* IMPORTANT: Additional settings that can override those from this file!" }
+ {}
+ }
+ { "!includedir" = "/etc/mysql/conf.d/" }
+ { "!include" = "/etc/mysql/other_conf.d/someconf.cnf" }
+ { "#comment" = "Another comment" }
+ { }
+
+
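+ (* Hedged extra example (not from the original test suite): a minimal
+    one-section file, assuming it parses like the [isamchk] section above *)
+ test MySQL.lns get "[mysqld]\nport = 3306\n" =
+   { "target" = "mysqld"
+     { "port" = "3306" }
+   }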
--- /dev/null
+(*
+Module: Test_NagiosCfg
+ Provides unit tests and examples for the <NagiosCfg> lens.
+*)
+
+module Test_NagiosCfg =
+ let conf="
+# LOG FILE
+log_file=/var/log/nagios3/nagios.log
+
+# OBJECT CONFIGURATION FILE(S)
+cfg_file=/etc/nagios3/objects/check_commands.cfg
+cfg_file=/etc/nagios3/objects/contact_groups.cfg
+cfg_file=/etc/nagios3/objects/contacts.cfg
+cfg_file=/etc/nagios3/objects/hostgroups.cfg
+cfg_file=/etc/nagios3/objects/hosts.cfg
+cfg_file=/etc/nagios3/objects/services.cfg
+
+# NAGIOS USER
+nagios_user=nagios
+
+# NAGIOS GROUP
+nagios_group=nagios
+
+# DATE FORMAT
+date_format=iso8601
+
+# ILLEGAL OBJECT NAME CHARS
+illegal_object_name_chars=`~!$%^&*|'\"<>?,()'=
+
+# ILLEGAL MACRO OUTPUT CHARS
+illegal_macro_output_chars=`~$&|'\"<>
+
+# MISC DIRECTIVES
+p1_file=/usr/lib/nagios3/p1.pl
+event_broker_options=-1
+use_large_installation_tweaks=1
+broker_module=/usr/lib/nagios3/libNagiosCluster-1.0.so.4.0.0
+broker_module=/usr/sbin/ndomod.o config_file=/etc/nagios3/ndomod.cfg
+"
+
+ test NagiosCfg.lns get conf =
+ {}
+ { "#comment" = "LOG FILE" }
+ { "log_file" = "/var/log/nagios3/nagios.log" }
+ {}
+ { "#comment" = "OBJECT CONFIGURATION FILE(S)" }
+ { "cfg_file" = "/etc/nagios3/objects/check_commands.cfg" }
+ { "cfg_file" = "/etc/nagios3/objects/contact_groups.cfg" }
+ { "cfg_file" = "/etc/nagios3/objects/contacts.cfg" }
+ { "cfg_file" = "/etc/nagios3/objects/hostgroups.cfg" }
+ { "cfg_file" = "/etc/nagios3/objects/hosts.cfg" }
+ { "cfg_file" = "/etc/nagios3/objects/services.cfg" }
+ {}
+ { "#comment" = "NAGIOS USER" }
+ { "nagios_user" = "nagios" }
+ {}
+ { "#comment" = "NAGIOS GROUP" }
+ { "nagios_group" = "nagios" }
+ {}
+ { "#comment" = "DATE FORMAT" }
+ { "date_format" = "iso8601" }
+ {}
+ { "#comment" = "ILLEGAL OBJECT NAME CHARS" }
+ { "illegal_object_name_chars" = "`~!$%^&*|'\"<>?,()'=" }
+ {}
+ { "#comment" = "ILLEGAL MACRO OUTPUT CHARS" }
+ { "illegal_macro_output_chars" = "`~$&|'\"<>" }
+ {}
+ { "#comment" = "MISC DIRECTIVES" }
+ { "p1_file" = "/usr/lib/nagios3/p1.pl" }
+ { "event_broker_options" = "-1" }
+ { "use_large_installation_tweaks" = "1" }
+ { "broker_module" = "/usr/lib/nagios3/libNagiosCluster-1.0.so.4.0.0" }
+ { "broker_module" = "/usr/sbin/ndomod.o"
+ { "config_file" = "/etc/nagios3/ndomod.cfg" } }
+
+
+(* Spaces are fine in values *)
+let space_in = "nagios_check_command=/usr/lib/nagios/plugins/check_nagios /var/cache/nagios3/status.dat 5 '/usr/sbin/nagios3'\n"
+
+test NagiosCfg.lns get space_in =
+ { "nagios_check_command" = "/usr/lib/nagios/plugins/check_nagios /var/cache/nagios3/status.dat 5 '/usr/sbin/nagios3'" }
+
+test NagiosCfg.lns get "$USER1$=/usr/local/libexec/nagios\n" =
+ { "$USER1$" = "/usr/local/libexec/nagios" }
+
+test NagiosCfg.lns get "$USER3$=somepassword\n" =
+ { "$USER3$" = "somepassword" }
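+
+(* Hedged extra example (not from the original test suite), by analogy
+   with the $USER1$/$USER3$ tests above *)
+test NagiosCfg.lns get "$USER2$=/usr/local/nagios/libexec\n" =
+  { "$USER2$" = "/usr/local/nagios/libexec" }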
--- /dev/null
+module Test_NagiosObjects =
+ let conf="
+#
+# Nagios Objects definitions file
+#
+
+define host {
+ host_name plonk
+ alias plonk
+ address plonk
+ use generic_template
+ contact_groups Monitoring-Team,admins
+}
+
+define service {
+ service_description gen
+ use generic_template_passive
+ host_name plonk
+ check_command nopassivecheckreceived
+ contact_groups admins
+}
+
+; This is a semicolon comment
+
+define service{
+ service_description gen2
+ use generic_template_passive
+ host_name plonk
+ }
+"
+
+ test NagiosObjects.lns get conf =
+ {}
+ {}
+ { "#comment" = "Nagios Objects definitions file" }
+ {}
+ {}
+ { "host"
+ { "host_name" = "plonk" }
+ { "alias" = "plonk" }
+ { "address" = "plonk" }
+ { "use" = "generic_template" }
+ { "contact_groups" = "Monitoring-Team,admins" }
+ }
+ {}
+ { "service"
+ { "service_description" = "gen" }
+ { "use" = "generic_template_passive" }
+ { "host_name" = "plonk" }
+ { "check_command" = "nopassivecheckreceived" }
+ { "contact_groups" = "admins" }
+ }
+ {}
+ { "#comment" = "This is a semicolon comment" }
+ {}
+ { "service"
+ { "service_description" = "gen2" }
+ { "use" = "generic_template_passive" }
+ { "host_name" = "plonk" }
+ }
+
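+(* Hedged extra example (not from the original test suite): a hostgroup
+   object, assuming the lens treats the object type as a generic word
+   like "host" and "service" above *)
+test NagiosObjects.lns get "define hostgroup {
+    hostgroup_name  all
+    members         plonk
+}\n" =
+  { "hostgroup"
+    { "hostgroup_name" = "all" }
+    { "members" = "plonk" }
+  }
+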
--- /dev/null
+(* Test for netmasks lens *)
+module Test_netmasks =
+
+ let conf = "# The netmasks file associates Internet Protocol (IP) address
+# masks with IP network numbers.
+#
+
+192.168.1.0 255.255.255.0
+10.0.0.0 255.0.0.0
+"
+
+ test Netmasks.lns get conf =
+ { "#comment" = "The netmasks file associates Internet Protocol (IP) address" }
+ { "#comment" = "masks with IP network numbers." }
+ { }
+ { }
+ { "1"
+ { "network" = "192.168.1.0" }
+ { "netmask" = "255.255.255.0" } }
+ { "2"
+ { "network" = "10.0.0.0" }
+ { "netmask" = "255.0.0.0" } }
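+
+(* Hedged extra example (not from the original test suite): a single-entry
+   file; the seq counter restarts at "1" for each get, as above *)
+test Netmasks.lns get "172.16.0.0 255.255.0.0\n" =
+  { "1"
+    { "network" = "172.16.0.0" }
+    { "netmask" = "255.255.0.0" } }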
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+(*
+Module: Test_NetworkManager
+ Provides unit tests and examples for the <NetworkManager> lens.
+*)
+
+module Test_NetworkManager =
+
+(* Variable: conf *)
+let conf = "[connection]
+id=wifoobar
+uuid=16fa8830-cf15-4523-8c1f-c6c635246855
+permissions=user:foo:;
+type=802-11-wireless
+
+[802-11-wireless]
+ssid=wifoobar
+mode=infrastructure
+mac-address=11:00:99:33:33:AA
+security=802-11-wireless-security
+
+[802-11-wireless-security]
+key-mgmt=none
+wep-key0=123abc123abc
+
+[ipv4]
+method=auto
+
+[ipv6]
+method=auto
+
+[vpn]
+NAT Traversal Mode=natt
+DPD idle timeout (our side)=0\n"
+
+let conf_psk = "[wifi]
+ssid=TEST
+mode=infrastructure
+
+[wifi-security]
+key-mgmt=wpa-psk
+auth-alg=open
+psk=\"#weird but valid psk!\"\n"
+
+let conf_empty = ""
+
+(* Test: NetworkManager.lns *)
+test NetworkManager.lns get conf =
+ { "connection"
+ { "id" = "wifoobar" }
+ { "uuid" = "16fa8830-cf15-4523-8c1f-c6c635246855" }
+ { "permissions" = "user:foo:;" }
+ { "type" = "802-11-wireless" }
+ { }
+ }
+ { "802-11-wireless"
+ { "ssid" = "wifoobar" }
+ { "mode" = "infrastructure" }
+ { "mac-address" = "11:00:99:33:33:AA" }
+ { "security" = "802-11-wireless-security" }
+ { }
+ }
+ { "802-11-wireless-security"
+ { "key-mgmt" = "none" }
+ { "wep-key0" = "123abc123abc" }
+ { }
+ }
+ { "ipv4"
+ { "method" = "auto" }
+ { }
+ }
+ { "ipv6"
+ { "method" = "auto" }
+ { }
+ }
+ { "vpn"
+ { "NAT Traversal Mode" = "natt" }
+ { "DPD idle timeout (our side)" = "0" }
+ }
+
+(* Test: NetworkManager.lns - nontrivial WPA-PSK *)
+test NetworkManager.lns get conf_psk =
+ { "wifi"
+ { "ssid" = "TEST" }
+ { "mode" = "infrastructure" }
+ { }
+ }
+ { "wifi-security"
+ { "key-mgmt" = "wpa-psk" }
+ { "auth-alg" = "open" }
+ { "psk" = "\"#weird but valid psk!\"" }
+ }
+
+(* Test: NetworkManager.lns - write new values unquoted *)
+test NetworkManager.lns put conf_empty after
+ insa "wifi-security" "/";
+ set "wifi-security/psk" "#the key"
+ = "[wifi-security]
+psk=#the key\n"
--- /dev/null
+module Test_Networks =
+
+let conf = "# Sample networks
+default 0.0.0.0 # default route - mandatory
+loopnet 127.0.0.0 loopnet_alias loopnet_alias2 # loopback network - mandatory
+mynet 128.253.154 # Modify for your own network address
+
+loopback 127
+arpanet 10 arpa # Historical
+localnet 192.168.25.192
+"
+
+test Networks.lns get conf =
+ { "#comment" = "Sample networks" }
+ { "1"
+ { "name" = "default" }
+ { "number" = "0.0.0.0" }
+ { "#comment" = "default route - mandatory" }
+ }
+ { "2"
+ { "name" = "loopnet" }
+ { "number" = "127.0.0.0" }
+ { "aliases"
+ { "1" = "loopnet_alias" }
+ { "2" = "loopnet_alias2" }
+ }
+ { "#comment" = "loopback network - mandatory" }
+ }
+ { "3"
+ { "name" = "mynet" }
+ { "number" = "128.253.154" }
+ { "#comment" = "Modify for your own network address" }
+ }
+ {}
+ { "4"
+ { "name" = "loopback" }
+ { "number" = "127" }
+ }
+ { "5"
+ { "name" = "arpanet" }
+ { "number" = "10" }
+ { "aliases"
+ { "1" = "arpa" }
+ }
+ { "#comment" = "Historical" }
+ }
+ { "6"
+ { "name" = "localnet" }
+ { "number" = "192.168.25.192" }
+ }
--- /dev/null
+(*
+Module: Test_Nginx
+ Provides unit tests and examples for the <Nginx> lens.
+*)
+module Test_nginx =
+
+(* Check for non-recursive ambiguities *)
+let directive = Nginx.simple
+ | Nginx.block (
+ Nginx.simple
+ | Nginx.block Nginx.simple
+ )
+
+(* Do some limited typechecking on the recursive lens; note that
+ unrolling once more leads to a typecheck error that seems to
+ be spurious, though it's not clear why
+
+ Unrolling once more amounts to adding the clause
+ Nginx.block (Nginx.block Nginx.simple)
+ to unrolled and results in an error
+ overlapping lenses in union.get
+ Example matched by both: 'upstream{}\n'
+*)
+let unrolled = Nginx.simple | Nginx.block Nginx.simple
+
+let lns_unrolled = (Util.comment | Util.empty | unrolled)
+
+(* Normal unit tests *)
+let lns = Nginx.lns
+
+let conf ="user nginx nginx;
+worker_processes 1;
+error_log /var/log/nginx/error_log info;
+
+events {
+ worker_connections 1024;
+ use epoll;
+}
+
+# comment1
+# comment2
+
+http {
+ # comment3
+ include /etc/nginx/mime.types;
+ default_type application/octet-stream;
+ log_format main
+ '$remote_addr - $remote_user [$time_local] '
+ '\"$request\" $status $bytes_sent '
+ '\"$http_referer\" \"$http_user_agent\" '
+ '\"$gzip_ratio\"';
+ client_header_timeout 10m;
+ client_body_timeout 10m;
+ send_timeout 10m;
+ connection_pool_size 256;
+ client_header_buffer_size 2k;
+ large_client_header_buffers 4 8k;
+ request_pool_size 4k;
+ gzip on;
+ gzip_min_length 1000;
+ gzip_buffers 4 8k;
+ gzip_types text/plain application/json;
+ output_buffers 1 32k;
+ postpone_output 1460;
+ sendfile on;
+ tcp_nopush on;
+ tcp_nodelay on;
+ keepalive_timeout 75 20;
+ ignore_invalid_headers on;
+ index index.html index.php;
+ include vhosts/*.conf;
+}
+"
+
+test lns get conf =
+ { "user" = "nginx nginx" }
+ { "worker_processes" = "1" }
+ { "error_log" = "/var/log/nginx/error_log info" }
+ {}
+ { "events"
+ { "worker_connections" = "1024" }
+ { "use" = "epoll" } }
+ {}
+ { "#comment" = "comment1" }
+ { "#comment" = "comment2" }
+ {}
+ { "http"
+ { "#comment" = "comment3" }
+ { "include" = "/etc/nginx/mime.types" }
+ { "default_type" = "application/octet-stream" }
+ { "log_format" = "main
+ '$remote_addr - $remote_user [$time_local] '
+ '\"$request\" $status $bytes_sent '
+ '\"$http_referer\" \"$http_user_agent\" '
+ '\"$gzip_ratio\"'" }
+ { "client_header_timeout" = "10m" }
+ { "client_body_timeout" = "10m" }
+ { "send_timeout" = "10m" }
+ { "connection_pool_size" = "256" }
+ { "client_header_buffer_size" = "2k" }
+ { "large_client_header_buffers" = "4 8k" }
+ { "request_pool_size" = "4k" }
+ { "gzip" = "on" }
+ { "gzip_min_length" = "1000" }
+ { "gzip_buffers" = "4 8k" }
+ { "gzip_types" = "text/plain application/json" }
+ { "output_buffers" = "1 32k" }
+ { "postpone_output" = "1460" }
+ { "sendfile" = "on" }
+ { "tcp_nopush" = "on" }
+ { "tcp_nodelay" = "on" }
+ { "keepalive_timeout" = "75 20" }
+ { "ignore_invalid_headers" = "on" }
+ { "index" = "index.html index.php" }
+ { "include" = "vhosts/*.conf" } }
+
+(* location blocks *)
+test lns get "location / { }\n" =
+ { "location"
+ { "#uri" = "/" } }
+
+test lns get "location = / { }\n" =
+ { "location"
+ { "#comp" = "=" }
+ { "#uri" = "/" } }
+
+test lns get "location /documents/ { }\n" =
+ { "location"
+ { "#uri" = "/documents/" } }
+
+test lns get "location ^~ /images/ { }\n" =
+ { "location"
+ { "#comp" = "^~" }
+ { "#uri" = "/images/" } }
+
+test lns get "location ~* \.(gif|jpg|jpeg)$ { }\n" =
+ { "location"
+ { "#comp" = "~*" }
+ { "#uri" = "\.(gif|jpg|jpeg)$" } }
+
+test lns get "location @fallback { }\n" =
+ { "location"
+ { "#uri" = "@fallback" } }
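+
+(* Hedged extra example (not from the original test suite), assuming the
+   plain "~" comparator is handled like "~*" above *)
+test lns get "location ~ \.php$ { }\n" =
+  { "location"
+    { "#comp" = "~" }
+    { "#uri" = "\.php$" } }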
+
+(* if blocks *)
+test lns get "if ($slow) {
+ tcp_nodelay on;
+}\n" =
+ { "if"
+ { "#cond" = "($slow)" }
+ { "tcp_nodelay" = "on" } }
+
+test lns get "if ($request_method = POST) { }\n" =
+ { "if"
+ { "#cond" = "($request_method = POST)" } }
+
+
+test lns get "if ($http_cookie ~* \"id=([^;]+)(?:;|$)\") { }\n" =
+ { "if"
+ { "#cond" = "($http_cookie ~* \"id=([^;]+)(?:;|$)\")" } }
+
+(* geo blocks *)
+test lns get "geo $geo { }\n" =
+ { "geo"
+ { "#geo" = "$geo" } }
+
+test lns get "geo $address $geo { }\n" =
+ { "geo"
+ { "#address" = "$address" }
+ { "#geo" = "$geo" } }
+
+(* map blocks *)
+test lns get "map $http_host $name { }\n" =
+ { "map"
+ { "#source" = "$http_host" }
+ { "#variable" = "$name" } }
+
+(* split_clients block *)
+test lns get "split_clients \"${remote_addr}AAA\" $variable { }\n" =
+ { "split_clients"
+ { "#string" = "\"${remote_addr}AAA\"" }
+ { "#variable" = "$variable" } }
+
+(* upstream block *)
+test lns get "upstream backend { }\n" =
+ { "upstream"
+ { "#name" = "backend" } }
+
+(* GH #179 - recursive blocks *)
+let http = "http {
+ server {
+ listen 80;
+ location / {
+ root\thtml;
+ }
+ }
+ gzip on;
+}\n"
+
+test lns get http =
+ { "http"
+ { "server"
+ { "listen" = "80" }
+ { "location"
+ { "#uri" = "/" }
+ { "root" = "html" } } }
+ { "gzip" = "on" } }
+
+
+(* GH #335 - server single line entries *)
+let http_server_single_line_entries = "http {
+ upstream big_server_com {
+ server 127.0.0.3:8000 weight=5;
+ server 127.0.0.3:8001 weight=5;
+ server 192.168.0.1:8000;
+ server 192.168.0.1:8001;
+ server backend2.example.com:8080 fail_timeout=5s slow_start=30s;
+ server backend3.example.com resolve;
+ }
+}\n"
+
+test lns get http_server_single_line_entries =
+ { "http"
+ { "upstream"
+ { "#name" = "big_server_com" }
+ { "@server" { "@address" = "127.0.0.3:8000" } { "weight" = "5" } }
+ { "@server" { "@address" = "127.0.0.3:8001" } { "weight" = "5" } }
+ { "@server" { "@address" = "192.168.0.1:8000" } }
+ { "@server" { "@address" = "192.168.0.1:8001" } }
+ { "@server"
+ { "@address" = "backend2.example.com:8080" }
+ { "fail_timeout" = "5s" }
+ { "slow_start" = "30s" } }
+ { "@server"
+ { "@address" = "backend3.example.com" }
+ { "resolve" } } } }
+
+(* Make sure we do not screw up the indentation of the file *)
+test lns put http after set "/http/gzip" "off" =
+"http {
+ server {
+ listen 80;
+ location / {
+ root\thtml;
+ }
+ }
+ gzip off;
+}\n"
+
+(* Test lns
+   GH #260 - geo blocks with CIDR masks *)
+test lns get "http {
+ geo $geo {
+ default 0;
+
+ 127.0.0.1 2;
+ 192.168.1.0/24 1;
+ 10.1.0.0/16 1;
+
+ ::1 2;
+ 2001:0db8::/32 1;
+ }
+}\n" =
+ { "http"
+ { "geo"
+ { "#geo" = "$geo" }
+ { "default" = "0" }
+ { }
+ { "127.0.0.1" = "2" }
+ { "192.168.1.0" = "1"
+ { "mask" = "24" } }
+ { "10.1.0.0" = "1"
+ { "mask" = "16" } }
+ { }
+ { "::1" = "2" }
+ { "2001:0db8::" = "1"
+ { "mask" = "32" } } } }
+
+test lns get "add_header X-XSS-Protection \"1; mode=block\" always;\n" =
+ { "add_header" = "X-XSS-Protection \"1; mode=block\" always" }
+
+test lns get "location /foo {
+ root /var/www/html;
+ internal; # only valid in location blocks
+}\n" =
+ { "location"
+ { "#uri" = "/foo" }
+ { "root" = "/var/www/html" }
+ { "internal"
+ { "#comment" = "only valid in location blocks" } } }
+
+test lns get "upstream php-handler {
+ server unix:/var/run/php/php7.3-fpm.sock;
+}\n" =
+ { "upstream"
+ { "#name" = "php-handler" }
+ { "@server"
+ { "@address" = "unix:/var/run/php/php7.3-fpm.sock" } } }
+
--- /dev/null
+module Test_nrpe =
+
+ let command = "command[foo]=bar\n"
+
+ test Nrpe.command get command =
+ { "command"
+ { "foo" = "bar" }
+ }
+
+
+ let item = "nrpe_user=nagios\n"
+
+ test Nrpe.item get item =
+ { "nrpe_user" = "nagios" }
+
+
+ let include = "include=/path/to/file.cfg\n"
+
+ test Nrpe.include get include =
+ { "include"
+ { "file" = "/path/to/file.cfg" }
+ }
+
+
+ let comment = "# a comment\n"
+
+ test Nrpe.comment get comment =
+ { "#comment" = "a comment" }
+
+
+ let empty = "# \n"
+
+ test Nrpe.empty get empty =
+ { }
+
+
+ let lns = "
+#
+# server address:
+server_address=127.0.0.1
+
+nrpe_user=nagios
+nrpe_group=nagios
+
+include=/etc/nrpe_local.cfg
+
+command[check_users]=/usr/lib/nagios/check_users -w 5 -c 10
+command[check_load]=/usr/lib/nagios/check_load -w 15,10,5 -c 30,25,20
+command[check_mongoscl_proc]=/usr/lib64/nagios/plugins/check_procs -c 1:1 --ereg-argument-array=mongosCL
+command[test_command]= foo bar \n
+include=/etc/nrpe/nrpe.cfg
+include_dir=/etc/nrpe/cfgdir/ \n
+# trailing whitespaces \n"
+
+ test Nrpe.lns get lns =
+ { }
+ { }
+ { "#comment" = "server address:" }
+ { "server_address" = "127.0.0.1" }
+ { }
+ { "nrpe_user" = "nagios" }
+ { "nrpe_group" = "nagios" }
+ { }
+ { "include"
+ { "file" = "/etc/nrpe_local.cfg" }
+ }
+ { }
+ { "command"
+ { "check_users" = "/usr/lib/nagios/check_users -w 5 -c 10" }
+ }
+ { "command"
+ { "check_load" = "/usr/lib/nagios/check_load -w 15,10,5 -c 30,25,20" }
+ }
+ { "command"
+ { "check_mongoscl_proc" = "/usr/lib64/nagios/plugins/check_procs -c 1:1 --ereg-argument-array=mongosCL" }
+ }
+ { "command"
+ { "test_command" = " foo bar " }
+ }
+ { }
+ { "include"
+ { "file" = "/etc/nrpe/nrpe.cfg" }
+ }
+ { "include_dir"
+ { "dir" = "/etc/nrpe/cfgdir/" }
+ }
+ { }
+ { "#comment" = "trailing whitespaces" }
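+
+  (* Hedged extra example (not from the original test suite), by analogy
+     with the command tests above *)
+  test Nrpe.command get "command[check_disk_root]=/usr/lib/nagios/check_disk -w 20 -c 10 -p /\n" =
+  { "command"
+    { "check_disk_root" = "/usr/lib/nagios/check_disk -w 20 -c 10 -p /" }
+  }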
--- /dev/null
+(*
+Module: Test_Nslcd
+ Provides unit tests and examples for the <Nslcd> lens.
+*)
+
+module Test_nslcd =
+
+let real_file = "# /etc/nslcd.conf
+# nslcd configuration file. See nslcd.conf(5)
+# for details.
+
+# Specifies the number of threads to start that can handle requests and perform LDAP queries.
+threads 5
+
+# The user and group nslcd should run as.
+uid nslcd
+gid nslcd
+
+# This option controls the way logging is done.
+log syslog info
+
+# The location at which the LDAP server(s) should be reachable.
+uri ldaps://XXX.XXX.XXX ldaps://YYY.YYY.YYY
+
+# The search base that will be used for all queries.
+base dc=XXX,dc=XXX
+
+# The LDAP protocol version to use.
+ldap_version 3
+
+# The DN to bind with for normal lookups.
+binddn cn=annonymous,dc=example,dc=net
+bindpw secret
+
+
+# The DN used for password modifications by root.
+rootpwmoddn cn=admin,dc=example,dc=com
+
+# The password used for password modifications by root.
+rootpwmodpw XXXXXX
+
+
+# SASL authentication options
+sasl_mech OTP
+sasl_realm realm
+sasl_authcid authcid
+sasl_authzid dn:cn=annonymous,dc=example,dc=net
+sasl_secprops noanonymous,noplain,minssf=0,maxssf=2,maxbufsize=65535
+sasl_canonicalize yes
+
+# Kerberos authentication options
+krb5_ccname ccname
+
+# Search/mapping options
+
+# Specifies the base distinguished name (DN) to use as search base.
+base dc=people,dc=example,dc=com
+base dc=morepeople,dc=example,dc=com
+base alias dc=aliases,dc=example,dc=com
+base alias dc=morealiases,dc=example,dc=com
+base group dc=group,dc=example,dc=com
+base group dc=moregroup,dc=example,dc=com
+base passwd dc=users,dc=example,dc=com
+
+# Specifies the search scope (subtree, onelevel, base or children).
+scope sub
+scope passwd sub
+scope aliases sub
+
+# Specifies the policy for dereferencing aliases.
+deref never
+
+# Specifies whether automatic referral chasing should be enabled.
+referrals yes
+
+# The FILTER is an LDAP search filter to use for a specific map.
+filter group (objectClass=posixGroup)
+
+# This option allows for custom attributes to be looked up instead of the default RFC 2307 attributes.
+map passwd homeDirectory \"${homeDirectory:-/home/$uid}\"
+map passwd loginShell \"${loginShell:-/bin/bash}\"
+map shadow userPassword myPassword
+
+# Timing/reconnect options
+
+# Specifies the time limit (in seconds) to use when connecting to the directory server.
+bind_timelimit 30
+
+# Specifies the time limit (in seconds) to wait for a response from the LDAP server.
+timelimit 5
+
+# Specifies the period if inactivity (in seconds) after which the connection to the LDAP server will be closed.
+idle_timelimit 10
+
+# Specifies the number of seconds to sleep when connecting to all LDAP servers fails.
+reconnect_sleeptime 10
+
+# Specifies the time after which the LDAP server is considered to be permanently unavailable.
+reconnect_retrytime 10
+
+# SSL/TLS options
+
+# Specifies whether to use SSL/TLS or not (the default is not to).
+ssl start_tls
+# Specifies what checks to perform on a server-supplied certificate.
+tls_reqcert never
+# Specifies the directory containing X.509 certificates for peer authentication.
+tls_cacertdir /etc/ssl/ca
+# Specifies the path to the X.509 certificate for peer authentication.
+tls_cacertfile /etc/ssl/certs/ca-certificates.crt
+# Specifies the path to an entropy source.
+tls_randfile /dev/random
+# Specifies the ciphers to use for TLS.
+tls_ciphers TLSv1
+# Specifies the path to the file containing the local certificate for client TLS authentication.
+tls_cert /etc/ssl/certs/cert.pem
+# Specifies the path to the file containing the private key for client TLS authentication.
+tls_key /etc/ssl/private/cert.pem
+
+# Other options
+pagesize 100
+nss_initgroups_ignoreusers user1,user2,user3
+nss_min_uid 1000
+nss_nested_groups yes
+nss_getgrent_skipmembers yes
+nss_disable_enumeration yes
+validnames /^[a-z0-9._@$()]([a-z0-9._@$() \\~-]*[a-z0-9._@$()~-])?$/i
+ignorecase yes
+pam_authc_ppolicy yes
+pam_authz_search (&(objectClass=posixAccount)(uid=$username)(|(authorizedService=$service)(!(authorizedService=*))))
+pam_password_prohibit_message \"MESSAGE LONG AND WITH SPACES\"
+reconnect_invalidate nfsidmap,db2,db3
+cache dn2uid 1s 2h
+
+"
+
+test Nslcd.lns get real_file =
+ { "#comment" = "/etc/nslcd.conf" }
+ { "#comment" = "nslcd configuration file. See nslcd.conf(5)" }
+ { "#comment" = "for details." }
+ { }
+ { "#comment" = "Specifies the number of threads to start that can handle requests and perform LDAP queries." }
+ { "threads" = "5" }
+ { }
+ { "#comment" = "The user and group nslcd should run as." }
+ { "uid" = "nslcd" }
+ { "gid" = "nslcd" }
+ { }
+ { "#comment" = "This option controls the way logging is done." }
+ { "log" = "syslog info" }
+ { }
+ { "#comment" = "The location at which the LDAP server(s) should be reachable." }
+ { "uri"
+ { "1" = "ldaps://XXX.XXX.XXX" }
+ { "2" = "ldaps://YYY.YYY.YYY" }
+ }
+ { }
+ { "#comment" = "The search base that will be used for all queries." }
+ { "base" = "dc=XXX,dc=XXX" }
+ { }
+ { "#comment" = "The LDAP protocol version to use." }
+ { "ldap_version" = "3" }
+ { }
+ { "#comment" = "The DN to bind with for normal lookups." }
+ { "binddn" = "cn=annonymous,dc=example,dc=net" }
+ { "bindpw" = "secret" }
+ { }
+ { }
+ { "#comment" = "The DN used for password modifications by root." }
+ { "rootpwmoddn" = "cn=admin,dc=example,dc=com" }
+ { }
+ { "#comment" = "The password used for password modifications by root." }
+ { "rootpwmodpw" = "XXXXXX" }
+ { }
+ { }
+ { "#comment" = "SASL authentication options" }
+ { "sasl_mech" = "OTP" }
+ { "sasl_realm" = "realm" }
+ { "sasl_authcid" = "authcid" }
+ { "sasl_authzid" = "dn:cn=annonymous,dc=example,dc=net" }
+ { "sasl_secprops" = "noanonymous,noplain,minssf=0,maxssf=2,maxbufsize=65535" }
+ { "sasl_canonicalize" = "yes" }
+ { }
+ { "#comment" = "Kerberos authentication options" }
+ { "krb5_ccname" = "ccname" }
+ { }
+ { "#comment" = "Search/mapping options" }
+ { }
+ { "#comment" = "Specifies the base distinguished name (DN) to use as search base." }
+ { "base" = "dc=people,dc=example,dc=com" }
+ { "base" = "dc=morepeople,dc=example,dc=com" }
+ { "base"
+ { "alias" = "dc=aliases,dc=example,dc=com" }
+ }
+ { "base"
+ { "alias" = "dc=morealiases,dc=example,dc=com" }
+ }
+ { "base"
+ { "group" = "dc=group,dc=example,dc=com" }
+ }
+ { "base"
+ { "group" = "dc=moregroup,dc=example,dc=com" }
+ }
+ { "base"
+ { "passwd" = "dc=users,dc=example,dc=com" }
+ }
+ { }
+ { "#comment" = "Specifies the search scope (subtree, onelevel, base or children)." }
+ { "scope" = "sub" }
+ { "scope"
+ { "passwd" = "sub" }
+ }
+ { "scope"
+ { "aliases" = "sub" }
+ }
+ { }
+ { "#comment" = "Specifies the policy for dereferencing aliases." }
+ { "deref" = "never" }
+ { }
+ { "#comment" = "Specifies whether automatic referral chasing should be enabled." }
+ { "referrals" = "yes" }
+ { }
+ { "#comment" = "The FILTER is an LDAP search filter to use for a specific map." }
+ { "filter"
+ { "group" = "(objectClass=posixGroup)" }
+ }
+ { }
+ { "#comment" = "This option allows for custom attributes to be looked up instead of the default RFC 2307 attributes." }
+ { "map"
+ { "passwd"
+ { "homeDirectory" = "\"${homeDirectory:-/home/$uid}\"" }
+ }
+ }
+ { "map"
+ { "passwd"
+ { "loginShell" = "\"${loginShell:-/bin/bash}\"" }
+ }
+ }
+ { "map"
+ { "shadow"
+ { "userPassword" = "myPassword" }
+ }
+ }
+ { }
+ { "#comment" = "Timing/reconnect options" }
+ { }
+ { "#comment" = "Specifies the time limit (in seconds) to use when connecting to the directory server." }
+ { "bind_timelimit" = "30" }
+ { }
+ { "#comment" = "Specifies the time limit (in seconds) to wait for a response from the LDAP server." }
+ { "timelimit" = "5" }
+ { }
+ { "#comment" = "Specifies the period if inactivity (in seconds) after which the connection to the LDAP server will be closed." }
+ { "idle_timelimit" = "10" }
+ { }
+ { "#comment" = "Specifies the number of seconds to sleep when connecting to all LDAP servers fails." }
+ { "reconnect_sleeptime" = "10" }
+ { }
+ { "#comment" = "Specifies the time after which the LDAP server is considered to be permanently unavailable." }
+ { "reconnect_retrytime" = "10" }
+ { }
+ { "#comment" = "SSL/TLS options" }
+ { }
+ { "#comment" = "Specifies whether to use SSL/TLS or not (the default is not to)." }
+ { "ssl" = "start_tls" }
+ { "#comment" = "Specifies what checks to perform on a server-supplied certificate." }
+ { "tls_reqcert" = "never" }
+ { "#comment" = "Specifies the directory containing X.509 certificates for peer authentication." }
+ { "tls_cacertdir" = "/etc/ssl/ca" }
+ { "#comment" = "Specifies the path to the X.509 certificate for peer authentication." }
+ { "tls_cacertfile" = "/etc/ssl/certs/ca-certificates.crt" }
+ { "#comment" = "Specifies the path to an entropy source." }
+ { "tls_randfile" = "/dev/random" }
+ { "#comment" = "Specifies the ciphers to use for TLS." }
+ { "tls_ciphers" = "TLSv1" }
+ { "#comment" = "Specifies the path to the file containing the local certificate for client TLS authentication." }
+ { "tls_cert" = "/etc/ssl/certs/cert.pem" }
+ { "#comment" = "Specifies the path to the file containing the private key for client TLS authentication." }
+ { "tls_key" = "/etc/ssl/private/cert.pem" }
+ { }
+ { "#comment" = "Other options" }
+ { "pagesize" = "100" }
+ { "nss_initgroups_ignoreusers"
+ { "1" = "user1" }
+ { "2" = "user2" }
+ { "3" = "user3" }
+ }
+ { "nss_min_uid" = "1000" }
+ { "nss_nested_groups" = "yes" }
+ { "nss_getgrent_skipmembers" = "yes" }
+ { "nss_disable_enumeration" = "yes" }
+ { "validnames" = "/^[a-z0-9._@$()]([a-z0-9._@$() \~-]*[a-z0-9._@$()~-])?$/i" }
+ { "ignorecase" = "yes" }
+ { "pam_authc_ppolicy" = "yes" }
+ { "pam_authz_search" = "(&(objectClass=posixAccount)(uid=$username)(|(authorizedService=$service)(!(authorizedService=*))))" }
+ { "pam_password_prohibit_message" = "MESSAGE LONG AND WITH SPACES" }
+ { "reconnect_invalidate" = "nfsidmap,db2,db3" }
+ { "cache" = "dn2uid 1s 2h" }
+ { }
+(* Test writes *)
+
+(* Test a simple parameter *)
+test Nslcd.lns put "pagesize 9999\n" after
+ set "/pagesize" "1000" =
+ "pagesize 1000\n"
+
+(* Test base parameter *)
+test Nslcd.lns put "\n" after
+ set "/base" "dc=example,dc=com" =
+ "\nbase dc=example,dc=com\n"
+
+test Nslcd.lns put "base dc=change,dc=me\n" after
+ set "/base" "dc=example,dc=com" =
+ "base dc=example,dc=com\n"
+
+test Nslcd.lns put "\n" after
+ set "/base/passwd" "dc=example,dc=com" =
+ "\nbase passwd dc=example,dc=com\n"
+
+test Nslcd.lns put "base passwd dc=change,dc=me\n" after
+ set "/base[passwd]/passwd" "dc=example,dc=com";
+ set "/base[shadow]/shadow" "dc=example,dc=com" =
+ "base passwd dc=example,dc=com\nbase shadow dc=example,dc=com\n"
+
+(* Test scope entry *)
+test Nslcd.lns put "\n" after
+ set "/scope" "sub" =
+ "\nscope sub\n"
+
+test Nslcd.lns put "scope one\n" after
+ set "/scope" "subtree" =
+ "scope subtree\n"
+
+test Nslcd.lns put "\n" after
+ set "/scope/passwd" "base" =
+ "\nscope passwd base\n"
+
+test Nslcd.lns put "scope shadow onelevel\n" after
+ set "/scope[passwd]/passwd" "subtree";
+ set "/scope[shadow]/shadow" "base" =
+ "scope shadow base\nscope passwd subtree\n"
+
+(* Test filter entry *)
+test Nslcd.lns put "\n" after
+ set "/filter/passwd" "(objectClass=posixAccount)" =
+ "\nfilter passwd (objectClass=posixAccount)\n"
+
+test Nslcd.lns put "filter shadow (objectClass=posixAccount)\n" after
+ set "/filter[passwd]/passwd" "(objectClass=Account)";
+ set "/filter[shadow]/shadow" "(objectClass=Account)" =
+ "filter shadow (objectClass=Account)\nfilter passwd (objectClass=Account)\n"
+
+(* Test map entry *)
+test Nslcd.lns put "map passwd loginShell ab\n" after
+ set "/map/passwd/loginShell" "bc" =
+ "map passwd loginShell bc\n"
+
+test Nslcd.lns put "map passwd loginShell ab\n" after
+ set "/map[2]/passwd/homeDirectory" "bc" =
+ "map passwd loginShell ab\nmap passwd homeDirectory bc\n"
+
+test Nslcd.lns put "map passwd loginShell ab\n" after
+ set "/map[passwd/homeDirectory]/passwd/homeDirectory" "bc" =
+ "map passwd loginShell ab\nmap passwd homeDirectory bc\n"
+
+test Nslcd.lns put "map passwd loginShell ab\nmap passwd homeDirectory ab\n" after
+ set "/map[passwd/homeDirectory]/passwd/homeDirectory" "bc" =
+ "map passwd loginShell ab\nmap passwd homeDirectory bc\n"
+
+
+(* Test simple entries *)
+let simple = "uid nslcd\n"
+
+test Nslcd.lns get simple =
+{ "uid" = "nslcd" }
+
+(* Test simple entries with spaces at the end *)
+let simple_spaces = "uid nslcd \n"
+
+test Nslcd.lns get simple_spaces =
+{ "uid" = "nslcd" }
+
+(* Test multi valued entries *)
+
+let multi_valued = "cache 1 2 \n"
+
+test Nslcd.lns get multi_valued =
+{ "cache" = "1 2" }
+
+let multi_valued_real = "map passwd homeDirectory ${homeDirectory:-/home/$uid}\n"
+
+test Nslcd.lns get multi_valued_real =
+{ "map"
+ { "passwd"
+ { "homeDirectory" = "${homeDirectory:-/home/$uid}" }
+ }
+}
+
+(* Test multiline *)
+
+let simple_multiline = "uid nslcd\ngid nslcd\n"
+
+test Nslcd.lns get simple_multiline =
+{"uid" = "nslcd"}
+{"gid" = "nslcd"}
+
+
+let multiline_separators = "\n\n \nuid nslcd \ngid nslcd \n"
+
+test Nslcd.lns get multiline_separators =
+{}
+{}
+{}
+{"uid" = "nslcd"}
+{"gid" = "nslcd"}
--- /dev/null
+module Test_nsswitch =
+
+ let conf = "# Sample nsswitch.conf
+passwd: compat
+
+hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4
+networks: nis [!UNAVAIL=return success=continue] files
+protocols: db files
+netgroup: nis
+bootparams: nisplus [NOTFOUND=return] files
+aliases: files # used by mail
+sudoers: files ldap
+"
+
+test Nsswitch.lns get conf =
+ { "#comment" = "Sample nsswitch.conf" }
+ { "database" = "passwd"
+ { "service" = "compat" } }
+ {}
+ { "database" = "hosts"
+ { "service" = "files" }
+ { "service" = "mdns4_minimal" }
+ { "reaction"
+ { "status" = "NOTFOUND"
+ { "action" = "return" } } }
+ { "service" = "dns" }
+ { "service" = "mdns4" } }
+ { "database" = "networks"
+ { "service" = "nis" }
+ { "reaction"
+ { "status" = "UNAVAIL"
+ { "negate" }
+ { "action" = "return" } }
+ { "status" = "success"
+ { "action" = "continue" } } }
+ { "service" = "files" } }
+ { "database" = "protocols"
+ { "service" = "db" }
+ { "service" = "files" } }
+ { "database" = "netgroup"
+ { "service" = "nis" } }
+ { "database" = "bootparams"
+ { "service" = "nisplus" }
+ { "reaction"
+ { "status" = "NOTFOUND"
+ { "action" = "return" } } }
+ { "service" = "files" } }
+ { "database" = "aliases"
+ { "service" = "files" }
+ { "#comment" = "used by mail" } }
+ { "database" = "sudoers"
+ { "service" = "files" }
+ { "service" = "ldap" } }
+
--- /dev/null
+module Test_ntp =
+
+ let conf = "#
+# File generated by puppet
+# Environment: development
+
+server dns01.echo-net.net version 3
+server dns02.echo-net.net version 4
+
+driftfile /var/lib/ntp/ntp.drift
+
+restrict default ignore
+
+#server dns01.echo-net.net
+restrict 192.168.0.150 nomodify
+
+# allow everything from localhost
+restrict 127.0.0.1
+
+logfile /var/log/ntpd
+statsdir /var/log/ntpstats/
+ntpsigndsocket /var/lib/samba/ntp_signd
+
+statistics loopstats peerstats clockstats
+filegen loopstats file loopstats type day enable link
+filegen peerstats file peerstats type day disable
+filegen clockstats file clockstats type day enable nolink
+
+interface ignore wildcard
+interface listen 127.0.0.1
+"
+
+ test Ntp.lns get conf =
+ { "#comment" = "" }
+ { "#comment" = "File generated by puppet" }
+ { "#comment" = "Environment: development" }
+ {}
+ { "server" = "dns01.echo-net.net"
+ { "version" = "3" } }
+ { "server" = "dns02.echo-net.net"
+ { "version" = "4" } }
+ {}
+ { "driftfile" = "/var/lib/ntp/ntp.drift" }
+ {}
+ { "restrict" = "default"
+ { "action" = "ignore" } }
+ {}
+ { "#comment" = "server dns01.echo-net.net" }
+ { "restrict" = "192.168.0.150"
+ { "action" = "nomodify" } }
+ {}
+ { "#comment" = "allow everything from localhost" }
+ { "restrict" = "127.0.0.1" }
+ {}
+ { "logfile" = "/var/log/ntpd" }
+ { "statsdir" = "/var/log/ntpstats/" }
+ { "ntpsigndsocket" = "/var/lib/samba/ntp_signd" }
+ {}
+ { "statistics"
+ { "loopstats" }
+ { "peerstats" }
+ { "clockstats" } }
+ { "filegen" = "loopstats"
+ { "file" = "loopstats" }
+ { "type" = "day" }
+ { "enable" = "enable" }
+ { "link" = "link" } }
+ { "filegen" = "peerstats"
+ { "file" = "peerstats" }
+ { "type" = "day" }
+ { "enable" = "disable" } }
+ { "filegen" = "clockstats"
+ { "file" = "clockstats" }
+ { "type" = "day" }
+ { "enable" = "enable" }
+ { "link" = "nolink" } }
+ { }
+ { "interface"
+ { "action" = "ignore" }
+ { "addresses" = "wildcard" } }
+ { "interface"
+ { "action" = "listen" }
+ { "addresses" = "127.0.0.1" } }
+
+ (* Some things needed to process the default ntp.conf on Fedora *)
+ test Ntp.lns get
+ "server 66.187.233.4 # added by /sbin/dhclient-script\n" =
+ { "server" = "66.187.233.4"
+ { "#comment" = "# added by /sbin/dhclient-script" } }
+
+ test Ntp.lns get
+ "server 0.fedora.pool.ntp.org iburst dynamic\n" =
+ { "server" = "0.fedora.pool.ntp.org" { "iburst" } { "dynamic" } }
+
+ test Ntp.lns get
+ "restrict 127.0.0.1 \n" =
+ { "restrict" = "127.0.0.1" }
+
+ test Ntp.lns get
+ "restrict default kod nomodify notrap nopeer noquery\n" =
+ { "restrict" = "default"
+ { "action" = "kod" }
+ { "action" = "nomodify" }
+ { "action" = "notrap" }
+ { "action" = "nopeer" }
+ { "action" = "noquery" } }
+
+ test Ntp.lns put
+ "restrict default kod nomodify notrap nopeer noquery\n"
+ after
+ insb "ipv6" "restrict/action[1]" =
+ "restrict -6 default kod nomodify notrap nopeer noquery\n"
+
+ test Ntp.lns get
+ "restrict -6 default kod nomodify notrap nopeer noquery\n" =
+ { "restrict" = "default"
+ { "ipv6" }
+ { "action" = "kod" }
+ { "action" = "nomodify" }
+ { "action" = "notrap" }
+ { "action" = "nopeer" }
+ { "action" = "noquery" } }
+
+ test Ntp.lns put
+ "restrict default kod nomodify notrap nopeer noquery\n"
+ after
+ insb "ipv4" "restrict/action[1]" =
+ "restrict -4 default kod nomodify notrap nopeer noquery\n"
+
+ test Ntp.lns get
+ "restrict -4 default notrap nomodify nopeer noquery\n" =
+ { "restrict" = "default"
+ { "ipv4" }
+ { "action" = "notrap" }
+ { "action" = "nomodify" }
+ { "action" = "nopeer" }
+ { "action" = "noquery" } }
+
+ test Ntp.lns get
+ "includefile /etc/ntp/crypto/pw\n" =
+ { "includefile" = "/etc/ntp/crypto/pw" }
+
+ test Ntp.lns get "fudge 127.127.1.0 stratum 10\n" =
+ { "fudge" = "127.127.1.0" { "stratum" = "10" } }
+
+ test Ntp.lns get "broadcast 192.168.1.255 key 42\n" =
+ { "broadcast" = "192.168.1.255" { "key" = "42" } }
+
+
+ test Ntp.lns get "multicastclient 224.0.1.1\n" =
+ { "multicastclient" = "224.0.1.1" }
+
+ test Ntp.lns put "broadcastclient\tnovolley # broadcast\n"
+ after rm "/*/novolley" = "broadcastclient # broadcast\n"
+
+ test Ntp.auth_command get "trustedkey 4 8 42\n" =
+ { "trustedkey"
+ { "key" = "4" }
+ { "key" = "8" }
+ { "key" = "42" } }
+
+ test Ntp.auth_command get "trustedkey 42\n" =
+ { "trustedkey" { "key" = "42" } }
+
+ test Ntp.lns get "broadcastdelay 0.008\n" =
+ { "broadcastdelay" = "0.008" }
+
+ test Ntp.lns get "enable auth calibrate\ndisable kernel stats\n" =
+ { "enable"
+ { "flag" = "auth" }
+ { "flag" = "calibrate" } }
+ { "disable"
+ { "flag" = "kernel" }
+ { "flag" = "stats" } }
+
+(* Bug #103: tinker directive *)
+test Ntp.tinker get "tinker panic 0 huffpuff 3.14\n" =
+ { "tinker"
+ { "panic" = "0" }
+ { "huffpuff" = "3.14" } }
+
+(* Bug #297: tos directive *)
+test Ntp.tos get "tos maxdist 16\n" =
+ { "tos"
+ { "maxdist" = "16" } }
--- /dev/null
+(*
+Module: Test_Ntpd
+ Provides unit tests for the <Ntpd> lens.
+*)
+
+module Test_ntpd =
+
+test Ntpd.listen get "listen on *\n" =
+ { "listen on"
+ { "address" = "*" } }
+
+test Ntpd.listen get "listen on 127.0.0.1\n" =
+ { "listen on"
+ { "address" = "127.0.0.1" } }
+
+test Ntpd.listen get "listen on ::1\n" =
+ { "listen on"
+ { "address" = "::1" } }
+
+test Ntpd.listen get "listen on ::1 rtable 4\n" =
+ { "listen on"
+ { "address" = "::1" }
+ { "rtable" = "4" } }
+
+test Ntpd.server get "server ntp.example.org\n" =
+ { "server"
+ { "address" = "ntp.example.org" } }
+
+test Ntpd.server get "server ntp.example.org rtable 42\n" =
+ { "server"
+ { "address" = "ntp.example.org" }
+ { "rtable" = "42" } }
+
+test Ntpd.server get "server ntp.example.org weight 1 rtable 42\n" =
+ { "server"
+ { "address" = "ntp.example.org" }
+ { "weight" = "1" }
+ { "rtable" = "42" } }
+
+test Ntpd.server get "server ntp.example.org weight 10\n" =
+ { "server"
+ { "address" = "ntp.example.org" }
+ { "weight" = "10" } }
+
+
+test Ntpd.sensor get "sensor *\n" =
+ { "sensor"
+ { "device" = "*" } }
+
+test Ntpd.sensor get "sensor nmea0\n" =
+ { "sensor"
+ { "device" = "nmea0" } }
+
+test Ntpd.sensor get "sensor nmea0 correction 42\n" =
+ { "sensor"
+ { "device" = "nmea0" }
+ { "correction" = "42" } }
+
+test Ntpd.sensor get "sensor nmea0 correction -42\n" =
+ { "sensor"
+ { "device" = "nmea0" }
+ { "correction" = "-42" } }
+
+test Ntpd.sensor get "sensor nmea0 correction 42 weight 2\n" =
+ { "sensor"
+ { "device" = "nmea0" }
+ { "correction" = "42" }
+ { "weight" = "2" } }
+
+test Ntpd.sensor get "sensor nmea0 correction 42 refid Puffy\n" =
+ { "sensor"
+ { "device" = "nmea0" }
+ { "correction" = "42" }
+ { "refid" = "Puffy" } }
+
+test Ntpd.sensor get "sensor nmea0 correction 42 stratum 2\n" =
+ { "sensor"
+ { "device" = "nmea0" }
+ { "correction" = "42" }
+ { "stratum" = "2" } }
--- /dev/null
+module Test_odbc =
+
+ let conf = "
+# Example driver definitions
+#
+#
+
+# Included in the unixODBC package
+[PostgreSQL]
+Description = ODBC for PostgreSQL
+Driver = /usr/lib/libodbcpsql.so
+Setup = /usr/lib/libodbcpsqlS.so
+FileUsage = 1
+
+[MySQL]
+# Driver from the MyODBC package
+# Setup from the unixODBC package
+Description = ODBC for MySQL
+Driver = /usr/lib/libmyodbc.so
+Setup = /usr/lib/libodbcmyS.so
+FileUsage = 1
+"
+
+test Odbc.lns get conf =
+ { }
+ { "#comment" = "Example driver definitions" }
+ {}
+ {}
+ { }
+ { "#comment" = "Included in the unixODBC package" }
+
+ { "PostgreSQL"
+ { "Description" = "ODBC for PostgreSQL" }
+ { "Driver" = "/usr/lib/libodbcpsql.so" }
+ { "Setup" = "/usr/lib/libodbcpsqlS.so" }
+ { "FileUsage" = "1" }
+ {}
+ }
+ { "MySQL"
+ { "#comment" = "Driver from the MyODBC package" }
+ { "#comment" = "Setup from the unixODBC package" }
+ { "Description" = "ODBC for MySQL" }
+ { "Driver" = "/usr/lib/libmyodbc.so" }
+ { "Setup" = "/usr/lib/libodbcmyS.so" }
+ { "FileUsage" = "1" }
+ }
+
+test Odbc.lns put conf after
+ set "MySQL/Driver" "/usr/lib64/libmyodbc3.so";
+ set "MySQL/Driver/#comment" "note the path" = "
+# Example driver definitions
+#
+#
+
+# Included in the unixODBC package
+[PostgreSQL]
+Description = ODBC for PostgreSQL
+Driver = /usr/lib/libodbcpsql.so
+Setup = /usr/lib/libodbcpsqlS.so
+FileUsage = 1
+
+[MySQL]
+# Driver from the MyODBC package
+# Setup from the unixODBC package
+Description = ODBC for MySQL
+Driver = /usr/lib64/libmyodbc3.so#note the path
+Setup = /usr/lib/libodbcmyS.so
+FileUsage = 1
+"
--- /dev/null
+module Test_Opendkim =
+
+ let simple_string_value = "ADSPAction discard\n"
+ test Opendkim.lns get simple_string_value =
+ { "ADSPAction" = "discard" }
+ test Opendkim.lns put simple_string_value after
+ set "ADSPAction" "discard" = simple_string_value
+
+ let simple_integer_value = "AutoRestartCount 1\n"
+ test Opendkim.lns get simple_integer_value =
+ { "AutoRestartCount" = "1" }
+ test Opendkim.lns put simple_integer_value after
+ set "AutoRestartCount" "1" = simple_integer_value
+
+ let simple_boolean_value = "AddAllSignatureResults true\n"
+ test Opendkim.lns get simple_boolean_value =
+ { "AddAllSignatureResults" = "true" }
+ test Opendkim.lns put simple_boolean_value after
+ set "AddAllSignatureResults" "true" = simple_boolean_value
+
+ let yes_boolean_value= "AddAllSignatureResults yes\n"
+ test Opendkim.lns get yes_boolean_value =
+ { "AddAllSignatureResults" = "yes" }
+ test Opendkim.lns put yes_boolean_value after
+ set "AddAllSignatureResults" "yes" = yes_boolean_value
+
+ let one_boolean_value= "AddAllSignatureResults 1\n"
+ test Opendkim.lns get one_boolean_value =
+ { "AddAllSignatureResults" = "1" }
+ test Opendkim.lns put one_boolean_value after
+ set "AddAllSignatureResults" "1" = one_boolean_value
+
+ let one_boolean_value_uppercase_yes = "AutoRestart Yes\n"
+ test Opendkim.lns get one_boolean_value_uppercase_yes =
+ { "AutoRestart" = "Yes" }
+ test Opendkim.lns put one_boolean_value_uppercase_yes after
+ set "AutoRestart" "Yes" = one_boolean_value_uppercase_yes
+
+ let one_boolean_value_uppercase_no = "AutoRestart No\n"
+ test Opendkim.lns get one_boolean_value_uppercase_no =
+ { "AutoRestart" = "No" }
+ test Opendkim.lns put one_boolean_value_uppercase_no after
+ set "AutoRestart" "No" = one_boolean_value_uppercase_no
+
+ let one_boolean_value_uppercase_true = "AutoRestart True\n"
+ test Opendkim.lns get one_boolean_value_uppercase_true =
+ { "AutoRestart" = "True" }
+ test Opendkim.lns put one_boolean_value_uppercase_true after
+ set "AutoRestart" "True" = one_boolean_value_uppercase_true
+
+ let one_boolean_value_uppercase_false = "AutoRestart False\n"
+ test Opendkim.lns get one_boolean_value_uppercase_false =
+ { "AutoRestart" = "False" }
+ test Opendkim.lns put one_boolean_value_uppercase_false after
+ set "AutoRestart" "False" = one_boolean_value_uppercase_false
+
+ let string_value_starting_with_number = "AutoRestartRate 10/1h\n"
+ test Opendkim.lns get string_value_starting_with_number =
+ { "AutoRestartRate" = "10/1h" }
+ test Opendkim.lns put string_value_starting_with_number after
+ set "AutoRestartRate" "10/1h" = string_value_starting_with_number
+
+ let string_value_containing_slash = "TrustAnchorFile /usr/share/dns/root.key\n"
+ test Opendkim.lns get string_value_containing_slash =
+ { "TrustAnchorFile" = "/usr/share/dns/root.key" }
+ test Opendkim.lns put string_value_containing_slash after
+ set "TrustAnchorFile" "/usr/share/dns/root.key" = string_value_containing_slash
+
+ let logwhy_keyword_boolean = "LogWhy Yes\n"
+ test Opendkim.lns get logwhy_keyword_boolean =
+ { "LogWhy" = "Yes" }
+ test Opendkim.lns put logwhy_keyword_boolean after
+ set "LogWhy" "Yes" = logwhy_keyword_boolean
+
+ let three_type_value = "AddAllSignatureResults false\nADSPAction discard\nAutoRestartCount 2\n"
+ test Opendkim.lns get three_type_value =
+ { "AddAllSignatureResults" = "false" }
+ { "ADSPAction" = "discard" }
+ { "AutoRestartCount" = "2" }
+
+ test Opendkim.lns put "" after
+ set "AddAllSignatureResults" "false";
+ set "ADSPAction" "discard";
+ set "AutoRestartCount" "2" = three_type_value
+
+ let two_boolean_value = "AddAllSignatureResults false\nADSPNoSuchDomain true\n"
+ test Opendkim.lns get two_boolean_value =
+ { "AddAllSignatureResults" = "false" }
+ { "ADSPNoSuchDomain" = "true" }
+
+ test Opendkim.lns put "" after
+ set "AddAllSignatureResults" "false";
+ set "ADSPNoSuchDomain" "true" = two_boolean_value
+
+ let blank_line_between= "AddAllSignatureResults false\n\nADSPNoSuchDomain true\n"
+ test Opendkim.lns get blank_line_between =
+ { "AddAllSignatureResults" = "false" }
+ { }
+ { "ADSPNoSuchDomain" = "true" }
+
+ test Opendkim.lns put "" after
+ set "AddAllSignatureResults" "false";
+ set "ADSPNoSuchDomain" "true" = "AddAllSignatureResults false\nADSPNoSuchDomain true\n"
+
+ let include_comment_line= "AddAllSignatureResults false\n#A comment\nADSPNoSuchDomain true\n"
+ test Opendkim.lns get include_comment_line =
+ { "AddAllSignatureResults" = "false" }
+ { "#comment" = "A comment" }
+ { "ADSPNoSuchDomain" = "true" }
+
+ test Opendkim.lns put "" after
+ set "AddAllSignatureResults" "false";
+ set "#comment" "A comment";
+ set "ADSPNoSuchDomain" "true" = include_comment_line
+
+ let default_config_file = "# This is a basic configuration that can easily be adapted to suit a standard
+# installation. For more advanced options, see opendkim.conf(5) and/or
+# /usr/share/doc/opendkim/examples/opendkim.conf.sample.
+
+# Log to syslog
+Syslog yes
+# Required to use local socket with MTAs that access the socket as a non-
+# privileged user (e.g. Postfix)
+UMask 002
+
+# Sign for example.com with key in /etc/mail/dkim.key using
+# selector '2007' (e.g. 2007._domainkey.example.com)
+#Domain example.com
+#KeyFile /etc/mail/dkim.key
+#Selector 2007
+
+# Commonly-used options; the commented-out versions show the defaults.
+#Canonicalization simple
+#Mode sv
+#SubDomains no
+#ADSPAction continue
+
+# Always oversign From (sign using actual From and a null From to prevent
+# malicious signatures header fields (From and/or others) between the signer
+# and the verifier. From is oversigned by default in the Debian package
+# because it is often the identity key used by reputation systems and thus
+# somewhat security sensitive.
+OversignHeaders From
+
+# List domains to use for RFC 6541 DKIM Authorized Third-Party Signatures
+# (ATPS) (experimental)
+
+#ATPSDomains example.com
+"
+ test Opendkim.lns get default_config_file =
+ { "#comment" = "This is a basic configuration that can easily be adapted to suit a standard" }
+ { "#comment" = "installation. For more advanced options, see opendkim.conf(5) and/or" }
+ { "#comment" = "/usr/share/doc/opendkim/examples/opendkim.conf.sample." }
+ { }
+ { "#comment" = "Log to syslog" }
+ { "Syslog" = "yes" }
+ { "#comment" = "Required to use local socket with MTAs that access the socket as a non-" }
+ { "#comment" = "privileged user (e.g. Postfix)" }
+ { "UMask" = "002" }
+ { }
+ { "#comment" = "Sign for example.com with key in /etc/mail/dkim.key using" }
+ { "#comment" = "selector '2007' (e.g. 2007._domainkey.example.com)" }
+ { "#comment" = "Domain example.com" }
+ { "#comment" = "KeyFile /etc/mail/dkim.key" }
+ { "#comment" = "Selector 2007" }
+ { }
+ { "#comment" = "Commonly-used options; the commented-out versions show the defaults." }
+ { "#comment" = "Canonicalization simple" }
+ { "#comment" = "Mode sv" }
+ { "#comment" = "SubDomains no" }
+ { "#comment" = "ADSPAction continue" }
+ { }
+ { "#comment" = "Always oversign From (sign using actual From and a null From to prevent" }
+ { "#comment" = "malicious signatures header fields (From and/or others) between the signer" }
+ { "#comment" = "and the verifier. From is oversigned by default in the Debian package" }
+ { "#comment" = "because it is often the identity key used by reputation systems and thus" }
+ { "#comment" = "somewhat security sensitive." }
+ { "OversignHeaders" = "From" }
+ { }
+ { "#comment" = "List domains to use for RFC 6541 DKIM Authorized Third-Party Signatures" }
+ { "#comment" = "(ATPS) (experimental)" }
+ { }
+ { "#comment" = "ATPSDomains example.com" }
--- /dev/null
+(*
+Module: Test_OpenShift_Config
+ Provides unit tests and examples for the <OpenShift_Config> lens.
+*)
+
+module Test_OpenShift_Config =
+
+(* Variable: conf *)
+let conf = "CLOUD_DOMAIN=\"example.com\"
+VALID_GEAR_SIZES=\"small,medium\"
+DEFAULT_MAX_GEARS=\"100\"
+DEFAULT_GEAR_CAPABILITIES=\"small\"
+DEFAULT_GEAR_SIZE=\"small\"
+MONGO_HOST_PORT=\"localhost:27017\"
+MONGO_USER=\"openshift\"
+MONGO_PASSWORD=\"mooo\"
+MONGO_DB=\"openshift_broker_dev\"
+MONGO_SSL=\"false\"
+ENABLE_USAGE_TRACKING_DATASTORE=\"false\"
+ENABLE_USAGE_TRACKING_AUDIT_LOG=\"false\"
+USAGE_TRACKING_AUDIT_LOG_FILE=\"/var/log/openshift/broker/usage.log\"
+ENABLE_ANALYTICS=\"false\"
+ENABLE_USER_ACTION_LOG=\"true\"
+USER_ACTION_LOG_FILE=\"/var/log/openshift/broker/user_action.log\"
+AUTH_PRIVKEYFILE=\"/etc/openshift/server_priv.pem\"
+AUTH_PRIVKEYPASS=\"\"
+AUTH_PUBKEYFILE=\"/etc/openshift/server_pub.pem\"
+AUTH_RSYNC_KEY_FILE=\"/etc/openshift/rsync_id_rsa\"
+AUTH_SCOPE_TIMEOUTS=\"session=1.days|7.days, *=1.months|6.months\"
+ENABLE_MAINTENANCE_MODE=\"false\"
+MAINTENANCE_NOTIFICATION_FILE=\"/etc/openshift/outage_notification.txt\"
+DOWNLOAD_CARTRIDGES_ENABLED=\"false\"
+"
+
+(* Variable: new_conf *)
+let new_conf = "CLOUD_DOMAIN=\"rhcloud.com\"
+VALID_GEAR_SIZES=\"small,medium\"
+DEFAULT_MAX_GEARS=\"100\"
+DEFAULT_GEAR_CAPABILITIES=\"small\"
+DEFAULT_GEAR_SIZE=\"small\"
+MONGO_HOST_PORT=\"localhost:27017\"
+MONGO_USER=\"openshift\"
+MONGO_PASSWORD=\"mooo\"
+MONGO_DB=\"openshift_broker_dev\"
+MONGO_SSL=\"false\"
+ENABLE_USAGE_TRACKING_DATASTORE=\"false\"
+ENABLE_USAGE_TRACKING_AUDIT_LOG=\"false\"
+USAGE_TRACKING_AUDIT_LOG_FILE=\"/var/log/openshift/broker/usage.log\"
+ENABLE_ANALYTICS=\"false\"
+ENABLE_USER_ACTION_LOG=\"true\"
+USER_ACTION_LOG_FILE=\"/var/log/openshift/broker/user_action.log\"
+AUTH_PRIVKEYFILE=\"/etc/openshift/server_priv.pem\"
+AUTH_PRIVKEYPASS=\"\"
+AUTH_PUBKEYFILE=\"/etc/openshift/server_pub.pem\"
+AUTH_RSYNC_KEY_FILE=\"/etc/openshift/rsync_id_rsa\"
+AUTH_SCOPE_TIMEOUTS=\"session=1.days|7.days, *=1.months|6.months\"
+ENABLE_MAINTENANCE_MODE=\"false\"
+MAINTENANCE_NOTIFICATION_FILE=\"/etc/openshift/outage_notification.txt\"
+DOWNLOAD_CARTRIDGES_ENABLED=\"false\"
+"
+
+(* Test: OpenShift_Config.lns *)
+test OpenShift_Config.lns get conf =
+ { "CLOUD_DOMAIN" = "example.com" }
+ { "VALID_GEAR_SIZES" = "small,medium" }
+ { "DEFAULT_MAX_GEARS" = "100" }
+ { "DEFAULT_GEAR_CAPABILITIES" = "small" }
+ { "DEFAULT_GEAR_SIZE" = "small" }
+ { "MONGO_HOST_PORT" = "localhost:27017" }
+ { "MONGO_USER" = "openshift" }
+ { "MONGO_PASSWORD" = "mooo" }
+ { "MONGO_DB" = "openshift_broker_dev" }
+ { "MONGO_SSL" = "false" }
+ { "ENABLE_USAGE_TRACKING_DATASTORE" = "false" }
+ { "ENABLE_USAGE_TRACKING_AUDIT_LOG" = "false" }
+ { "USAGE_TRACKING_AUDIT_LOG_FILE" = "/var/log/openshift/broker/usage.log" }
+ { "ENABLE_ANALYTICS" = "false" }
+ { "ENABLE_USER_ACTION_LOG" = "true" }
+ { "USER_ACTION_LOG_FILE" = "/var/log/openshift/broker/user_action.log" }
+ { "AUTH_PRIVKEYFILE" = "/etc/openshift/server_priv.pem" }
+ { "AUTH_PRIVKEYPASS" }
+ { "AUTH_PUBKEYFILE" = "/etc/openshift/server_pub.pem" }
+ { "AUTH_RSYNC_KEY_FILE" = "/etc/openshift/rsync_id_rsa" }
+ { "AUTH_SCOPE_TIMEOUTS" = "session=1.days|7.days, *=1.months|6.months" }
+ { "ENABLE_MAINTENANCE_MODE" = "false" }
+ { "MAINTENANCE_NOTIFICATION_FILE" = "/etc/openshift/outage_notification.txt" }
+ { "DOWNLOAD_CARTRIDGES_ENABLED" = "false" }
+
+(* Test: OpenShift_Config.lns
+ * Second get test against OpenShift configs
+*)
+test OpenShift_Config.lns get "MONGO_SSL=\"false\"\n" =
+ { "MONGO_SSL" = "false" }
+
+(* Test: OpenShift_Config.lns
+ * Put test changing CLOUD_DOMAIN to rhcloud.com
+*)
+test OpenShift_Config.lns put conf after set "CLOUD_DOMAIN" "rhcloud.com"
+ = new_conf
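+
+(* Test: OpenShift_Config.lns
+ * Illustrative extra put test (an assumption, mirroring the quoted-value
+ * round-trip demonstrated by the CLOUD_DOMAIN put test above): changing
+ * MONGO_SSL should rewrite only that KEY="value" line and keep the quotes.
+*)
+test OpenShift_Config.lns put "MONGO_SSL=\"false\"\n"
+ after set "MONGO_SSL" "true" = "MONGO_SSL=\"true\"\n"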
+
+(* vim: set ts=4 expandtab sw=4: *)
--- /dev/null
+(*
+Module: Test_OpenShift_Http
+ Provides unit tests and examples for the <OpenShift_Http> lens.
+*)
+
+module Test_OpenShift_Http =
+
+(* Variable: conf *)
+let conf = "Listen 127.0.0.1:8080
+User apache
+Group apache
+include /etc/httpd/conf.d/ruby193-passenger.conf
+PassengerUser apache
+PassengerMaxPoolSize 80
+PassengerMinInstances 2
+PassengerPreStart http://127.0.0.1:8080/
+PassengerUseGlobalQueue off
+RackBaseURI /broker
+PassengerRuby /var/www/openshift/broker/script/broker_ruby
+<Directory /var/www/openshift/broker/httpd/root/broker>
+ Options -MultiViews
+</Directory>
+"
+
+(* Variable: new_conf *)
+let new_conf = "Listen 127.0.0.1:8080
+User nobody
+Group apache
+include /etc/httpd/conf.d/ruby193-passenger.conf
+PassengerUser apache
+PassengerMaxPoolSize 80
+PassengerMinInstances 2
+PassengerPreStart http://127.0.0.1:8080/
+PassengerUseGlobalQueue off
+RackBaseURI /broker
+PassengerRuby /var/www/openshift/broker/script/broker_ruby
+<Directory /var/www/openshift/broker/httpd/root/broker>
+ Options -MultiViews
+</Directory>
+"
+
+let lns = OpenShift_Http.lns
+
+(* Test: OpenShift_Http.lns
+ * Get test checking the parsed tree structure
+*)
+test lns get conf =
+ { "directive" = "Listen" {"arg" = "127.0.0.1:8080" } }
+ { "directive" = "User" { "arg" = "apache" } }
+ { "directive" = "Group" { "arg" = "apache" } }
+ { "directive" = "include" { "arg" = "/etc/httpd/conf.d/ruby193-passenger.conf" } }
+ { "directive" = "PassengerUser" { "arg" = "apache" } }
+ { "directive" = "PassengerMaxPoolSize" { "arg" = "80" } }
+ { "directive" = "PassengerMinInstances" { "arg" = "2" } }
+ { "directive" = "PassengerPreStart" { "arg" = "http://127.0.0.1:8080/" } }
+ { "directive" = "PassengerUseGlobalQueue" { "arg" = "off" } }
+ { "directive" = "RackBaseURI" { "arg" = "/broker" } }
+ { "directive" = "PassengerRuby" { "arg" = "/var/www/openshift/broker/script/broker_ruby" } }
+ { "Directory"
+ { "arg" = "/var/www/openshift/broker/httpd/root/broker" }
+ { "directive" = "Options"
+ { "arg" = "-MultiViews" }
+ }
+ }
+
+(* Test: OpenShift_Http.lns
+ * Put test changing user to nobody
+*)
+test lns put conf after set "/directive[2]/arg" "nobody" = new_conf
+
+(* vim: set ts=4 expandtab sw=4: *)
--- /dev/null
+(*
+Module: Test_OpenShift_Quickstarts
+ Provides unit tests and examples for the <OpenShift_Quickstarts> lens.
+*)
+
+module Test_OpenShift_Quickstarts =
+
+(* Variable: conf *)
+let conf = "[
+ {\"quickstart\": {
+ \"id\": \"1\",
+ \"name\":\"CakePHP\",
+ \"website\":\"http://cakephp.org/\",
+ \"initial_git_url\":\"git://github.com/openshift/cakephp-example.git\",
+ \"cartridges\":[\"php-5.4\",\"mysql-5.1\"],
+ \"summary\":\"CakePHP is a rapid development framework for PHP which uses commonly known design patterns like Active Record, Association Data Mapping, Front Controller and MVC.\",
+ \"tags\":[\"php\",\"cakephp\",\"framework\"],
+ \"admin_tags\":[]
+ }},
+ {\"quickstart\": {
+ \"id\": \"2\",
+ \"name\":\"Django\",
+ \"website\":\"https://www.djangoproject.com/\",
+ \"initial_git_url\":\"git://github.com/openshift/django-example.git\",
+ \"cartridges\":[\"python-2.7\"],
+ \"summary\":\"A high-level Python web framework that encourages rapid development and clean, pragmatic design. Administrator user name and password are written to $OPENSHIFT_DATA_DIR/CREDENTIALS.\",
+ \"tags\":[\"python\",\"django\",\"framework\"],
+ \"admin_tags\":[]
+ }},
+ {\"quickstart\":{
+ \"id\": \"4\",
+ \"name\":\"Drupal\",
+ \"website\":\"http://drupal.org/\",
+ \"initial_git_url\":\"git://github.com/openshift/drupal-example.git\",
+ \"cartridges\":[\"php-5.4\",\"mysql-5.1\"],
+ \"summary\":\"An open source content management platform written in PHP powering millions of websites and applications. It is built, used, and supported by an active and diverse community of people around the world. Administrator user name and password are written to $OPENSHIFT_DATA_DIR/CREDENTIALS.\",
+ \"tags\":[\"php\",\"drupal\",\"wiki\",\"framework\",\"instant_app\"],
+ \"admin_tags\":[]
+ }},
+ {\"quickstart\":{
+ \"id\": \"6\",
+ \"name\":\"Ruby on Rails\",
+ \"website\":\"http://rubyonrails.org/\",
+ \"initial_git_url\":\"git://github.com/openshift/rails-example.git\",
+ \"cartridges\":[\"ruby-1.9\",\"mysql-5.1\"],
+ \"summary\":\"An open source web framework for Ruby that is optimized for programmer happiness and sustainable productivity. It lets you write beautiful code by favoring convention over configuration.\",
+ \"tags\":[\"ruby\",\"rails\",\"framework\"],
+ \"admin_tags\":[]
+ }},
+ {\"quickstart\":{
+ \"id\": \"8\",
+ \"name\":\"WordPress\",
+ \"website\":\"http://wordpress.org\",
+ \"initial_git_url\":\"git://github.com/openshift/wordpress-example.git\",
+ \"cartridges\":[\"php-5.4\",\"mysql-5.1\"],
+ \"summary\":\"A semantic personal publishing platform written in PHP with a MySQL back end, focusing on aesthetics, web standards, and usability. Administrator user name and password are written to $OPENSHIFT_DATA_DIR/CREDENTIALS.\",
+ \"tags\":[\"php\",\"wordpress\",\"blog\",\"framework\",\"instant_app\"],
+ \"admin_tags\":[]
+ }}
+]"
+
+(* Variable: new_conf *)
+let new_conf = "[
+ {\"quickstart\": {
+ \"id\": \"1\",
+ \"name\":\"CakePHP\",
+ \"website\":\"http://cakephp.org/\",
+ \"initial_git_url\":\"git://github.com/openshift/cakephp-example.git\",
+ \"cartridges\":[\"php-5.4\",\"mysql-5.1\"],
+ \"summary\":\"CakePHP is a rapid development framework for PHP which uses commonly known design patterns like Active Record, Association Data Mapping, Front Controller and MVC.\",
+ \"tags\":[\"php\",\"cakephp\",\"framework\"],
+ \"admin_tags\":[]
+ }},
+ {\"quickstart\": {
+ \"id\": \"2\",
+ \"name\":\"Django\",
+ \"website\":\"https://www.djangoproject.com/\",
+ \"initial_git_url\":\"git://github.com/openshift/django-example.git\",
+ \"cartridges\":[\"python-2.7\"],
+ \"summary\":\"A high-level Python web framework that encourages rapid development and clean, pragmatic design. Administrator user name and password are written to $OPENSHIFT_DATA_DIR/CREDENTIALS.\",
+ \"tags\":[\"python\",\"django\",\"framework\"],
+ \"admin_tags\":[]
+ }},
+ {\"quickstart\":{
+ \"id\": \"4\",
+ \"name\":\"Drupal\",
+ \"website\":\"http://drupal.org/\",
+ \"initial_git_url\":\"git://github.com/openshift/drupal-example.git\",
+ \"cartridges\":[\"php-5.4\",\"mysql-5.1\"],
+ \"summary\":\"An open source content management platform written in PHP powering millions of websites and applications. It is built, used, and supported by an active and diverse community of people around the world. Administrator user name and password are written to $OPENSHIFT_DATA_DIR/CREDENTIALS.\",
+ \"tags\":[\"php\",\"drupal\",\"wiki\",\"framework\",\"instant_app\"],
+ \"admin_tags\":[]
+ }},
+ {\"quickstart\":{
+ \"id\": \"6\",
+ \"name\":\"Ruby on Rails\",
+ \"website\":\"http://rubyonrails.org/\",
+ \"initial_git_url\":\"git://github.com/openshift/rails-example.git\",
+ \"cartridges\":[\"ruby-1.9\",\"mysql-5.1\"],
+ \"summary\":\"An open source web framework for Ruby that is optimized for programmer happiness and sustainable productivity. It lets you write beautiful code by favoring convention over configuration.\",
+ \"tags\":[\"ruby\",\"rails\",\"framework\"],
+ \"admin_tags\":[]
+ }},
+ {\"quickstart\":{
+ \"id\": \"8\",
+ \"name\":\"WordPress\",
+ \"website\":\"https://wordpress.org\",
+ \"initial_git_url\":\"git://github.com/openshift/wordpress-example.git\",
+ \"cartridges\":[\"php-5.4\",\"mysql-5.1\"],
+ \"summary\":\"A semantic personal publishing platform written in PHP with a MySQL back end, focusing on aesthetics, web standards, and usability. Administrator user name and password are written to $OPENSHIFT_DATA_DIR/CREDENTIALS.\",
+ \"tags\":[\"php\",\"wordpress\",\"blog\",\"framework\",\"instant_app\"],
+ \"admin_tags\":[]
+ }}
+]"
+
+(* Test: OpenShift_Quickstarts.lns *)
+test OpenShift_Quickstarts.lns get conf =
+ { "array"
+ { }
+ { "dict"
+ { "entry" = "quickstart"
+ { "dict"
+ { }
+ { "entry" = "id"
+ { "string" = "1" }
+ }
+ { }
+ { "entry" = "name"
+ { "string" = "CakePHP" }
+ }
+ { }
+ { "entry" = "website"
+ { "string" = "http://cakephp.org/" }
+ }
+ { }
+ { "entry" = "initial_git_url"
+ { "string" = "git://github.com/openshift/cakephp-example.git" }
+ }
+ { }
+ { "entry" = "cartridges"
+ { "array"
+ { "string" = "php-5.4" }
+ { "string" = "mysql-5.1" }
+ }
+ }
+ { }
+ { "entry" = "summary"
+ { "string" = "CakePHP is a rapid development framework for PHP which uses commonly known design patterns like Active Record, Association Data Mapping, Front Controller and MVC." }
+ }
+ { }
+ { "entry" = "tags"
+ { "array"
+ { "string" = "php" }
+ { "string" = "cakephp" }
+ { "string" = "framework" }
+ }
+ }
+ { }
+ { "entry" = "admin_tags"
+ { "array" }
+ }
+ }
+ }
+ }
+ { }
+ { "dict"
+ { "entry" = "quickstart"
+ { "dict"
+ { }
+ { "entry" = "id"
+ { "string" = "2" }
+ }
+ { }
+ { "entry" = "name"
+ { "string" = "Django" }
+ }
+ { }
+ { "entry" = "website"
+ { "string" = "https://www.djangoproject.com/" }
+ }
+ { }
+ { "entry" = "initial_git_url"
+ { "string" = "git://github.com/openshift/django-example.git" }
+ }
+ { }
+ { "entry" = "cartridges"
+ { "array"
+ { "string" = "python-2.7" }
+ }
+ }
+ { }
+ { "entry" = "summary"
+ { "string" = "A high-level Python web framework that encourages rapid development and clean, pragmatic design. Administrator user name and password are written to $OPENSHIFT_DATA_DIR/CREDENTIALS." }
+ }
+ { }
+ { "entry" = "tags"
+ { "array"
+ { "string" = "python" }
+ { "string" = "django" }
+ { "string" = "framework" }
+ }
+ }
+ { }
+ { "entry" = "admin_tags"
+ { "array" }
+ }
+ }
+ }
+ }
+ { }
+ { "dict"
+ { "entry" = "quickstart"
+ { "dict"
+ { }
+ { "entry" = "id"
+ { "string" = "4" }
+ }
+ { }
+ { "entry" = "name"
+ { "string" = "Drupal" }
+ }
+ { }
+ { "entry" = "website"
+ { "string" = "http://drupal.org/" }
+ }
+ { }
+ { "entry" = "initial_git_url"
+ { "string" = "git://github.com/openshift/drupal-example.git" }
+ }
+ { }
+ { "entry" = "cartridges"
+ { "array"
+ { "string" = "php-5.4" }
+ { "string" = "mysql-5.1" }
+ }
+ }
+ { }
+ { "entry" = "summary"
+ { "string" = "An open source content management platform written in PHP powering millions of websites and applications. It is built, used, and supported by an active and diverse community of people around the world. Administrator user name and password are written to $OPENSHIFT_DATA_DIR/CREDENTIALS." }
+ }
+ { }
+ { "entry" = "tags"
+ { "array"
+ { "string" = "php" }
+ { "string" = "drupal" }
+ { "string" = "wiki" }
+ { "string" = "framework" }
+ { "string" = "instant_app" }
+ }
+ }
+ { }
+ { "entry" = "admin_tags"
+ { "array" }
+ }
+ }
+ }
+ }
+ { }
+ { "dict"
+ { "entry" = "quickstart"
+ { "dict"
+ { }
+ { "entry" = "id"
+ { "string" = "6" }
+ }
+ { }
+ { "entry" = "name"
+ { "string" = "Ruby on Rails" }
+ }
+ { }
+ { "entry" = "website"
+ { "string" = "http://rubyonrails.org/" }
+ }
+ { }
+ { "entry" = "initial_git_url"
+ { "string" = "git://github.com/openshift/rails-example.git" }
+ }
+ { }
+ { "entry" = "cartridges"
+ { "array"
+ { "string" = "ruby-1.9" }
+ { "string" = "mysql-5.1" }
+ }
+ }
+ { }
+ { "entry" = "summary"
+ { "string" = "An open source web framework for Ruby that is optimized for programmer happiness and sustainable productivity. It lets you write beautiful code by favoring convention over configuration." }
+ }
+ { }
+ { "entry" = "tags"
+ { "array"
+ { "string" = "ruby" }
+ { "string" = "rails" }
+ { "string" = "framework" }
+ }
+ }
+ { }
+ { "entry" = "admin_tags"
+ { "array" }
+ }
+ }
+ }
+ }
+ { }
+ { "dict"
+ { "entry" = "quickstart"
+ { "dict"
+ { }
+ { "entry" = "id"
+ { "string" = "8" }
+ }
+ { }
+ { "entry" = "name"
+ { "string" = "WordPress" }
+ }
+ { }
+ { "entry" = "website"
+ { "string" = "http://wordpress.org" }
+ }
+ { }
+ { "entry" = "initial_git_url"
+ { "string" = "git://github.com/openshift/wordpress-example.git" }
+ }
+ { }
+ { "entry" = "cartridges"
+ { "array"
+ { "string" = "php-5.4" }
+ { "string" = "mysql-5.1" }
+ }
+ }
+ { }
+ { "entry" = "summary"
+ { "string" = "A semantic personal publishing platform written in PHP with a MySQL back end, focusing on aesthetics, web standards, and usability. Administrator user name and password are written to $OPENSHIFT_DATA_DIR/CREDENTIALS." }
+ }
+ { }
+ { "entry" = "tags"
+ { "array"
+ { "string" = "php" }
+ { "string" = "wordpress" }
+ { "string" = "blog" }
+ { "string" = "framework" }
+ { "string" = "instant_app" }
+ }
+ }
+ { }
+ { "entry" = "admin_tags"
+ { "array" }
+ }
+ }
+ }
+ { }
+ }
+}
+
+
+(* FIXME: not yet supported:
+ * The JSON utility lens does not currently preserve whitespace in the put
+ * direction as the Augeas specification requires.
+
+test OpenShift_Quickstarts.lns put conf after set "/array/dict[5]/entry/dict/entry[3]/string" "https://wordpress.org"
+ = new_conf *)
+
+(* vim: set ts=4 expandtab sw=4: *)
--- /dev/null
+
+module Test_OpenVPN =
+
+let server_conf = "
+daemon
+local 10.0.5.20
+port 1194
+# TCP or UDP server?
+proto udp
+;dev tap
+dev tun
+
+dev-node MyTap
+ca ca.crt
+cert server.crt
+key server.key # This file should be kept secret
+
+# Diffie hellman parameters.
+dh dh1024.pem
+
+server 10.8.0.0 255.255.255.0
+ifconfig-pool-persist ipp.txt
+
+client-config-dir /etc/openvpn/ccd
+server-bridge 10.8.0.4 255.255.255.0 10.8.0.50 10.8.0.100
+route 10.9.0.0 255.255.255.0
+push \"route 192.168.10.0 255.255.255.0\"
+learn-address ./script
+push \"redirect-gateway\"
+push \"dhcp-option DNS 10.8.0.1\"
+push \"dhcp-option WINS 10.8.0.1\"
+client-to-client
+duplicate-cn
+keepalive 10 120
+tls-auth ta.key 0 # This file is secret
+cipher BF-CBC # Blowfish (default)
+;cipher AES-128-CBC # AES
+;cipher DES-EDE3-CBC # Triple-DES
+comp-lzo
+max-clients 100
+user nobody
+group nobody
+persist-key
+persist-tun
+status openvpn-status.log
+log openvpn.log
+log-append openvpn.log
+verb 3
+mute 20
+management 10.0.5.20 1193 /etc/openvpn/mpass
+"
+
+test OpenVPN.lns get server_conf =
+ {}
+ { "daemon" }
+ { "local" = "10.0.5.20" }
+ { "port" = "1194" }
+ { "#comment" = "TCP or UDP server?" }
+ { "proto" = "udp" }
+ { "#comment" = "dev tap" }
+ { "dev" = "tun" }
+ {}
+ { "dev-node" = "MyTap" }
+ { "ca" = "ca.crt" }
+ { "cert" = "server.crt" }
+ { "key" = "server.key"
+ { "#comment" = "This file should be kept secret" } }
+ {}
+ { "#comment" = "Diffie hellman parameters." }
+ { "dh" = "dh1024.pem" }
+ {}
+ { "server"
+ { "address" = "10.8.0.0" }
+ { "netmask" = "255.255.255.0" } }
+ { "ifconfig-pool-persist"
+ { "file" = "ipp.txt" }
+ }
+ {}
+ { "client-config-dir" = "/etc/openvpn/ccd" }
+ { "server-bridge"
+ { "address" = "10.8.0.4" }
+ { "netmask" = "255.255.255.0" }
+ { "start" = "10.8.0.50" }
+ { "end" = "10.8.0.100" } }
+ { "route"
+ { "address" = "10.9.0.0" }
+ { "netmask" = "255.255.255.0" } }
+ { "push" = "route 192.168.10.0 255.255.255.0" }
+ { "learn-address" = "./script" }
+ { "push" = "redirect-gateway" }
+ { "push" = "dhcp-option DNS 10.8.0.1" }
+ { "push" = "dhcp-option WINS 10.8.0.1" }
+ { "client-to-client" }
+ { "duplicate-cn" }
+ { "keepalive"
+ { "ping" = "10" }
+ { "timeout" = "120" } }
+ { "tls-auth"
+ { "key" = "ta.key" }
+ { "is_client" = "0" }
+ { "#comment" = "This file is secret" } }
+ { "cipher" = "BF-CBC"
+ { "#comment" = "Blowfish (default)" } }
+ { "#comment" = "cipher AES-128-CBC # AES" }
+ { "#comment" = "cipher DES-EDE3-CBC # Triple-DES" }
+ { "comp-lzo" }
+ { "max-clients" = "100" }
+ { "user" = "nobody" }
+ { "group" = "nobody" }
+ { "persist-key" }
+ { "persist-tun" }
+ { "status"
+ { "file" = "openvpn-status.log" }
+ }
+ { "log" = "openvpn.log" }
+ { "log-append" = "openvpn.log" }
+ { "verb" = "3" }
+ { "mute" = "20" }
+ { "management"
+ { "server" = "10.0.5.20" }
+ { "port" = "1193" }
+ { "pwfile" = "/etc/openvpn/mpass" } }
+
+
+
+let client_conf = "
+client
+remote my-server-1 1194
+;remote my-server-2 1194
+remote-random
+resolv-retry infinite
+nobind
+http-proxy-retry # retry on connection failures
+http-proxy mytest 1024
+mute-replay-warnings
+ns-cert-type server
+"
+
+test OpenVPN.lns get client_conf =
+ {}
+ { "client" }
+ { "remote"
+ { "server" = "my-server-1" }
+ { "port" = "1194" } }
+ { "#comment" = "remote my-server-2 1194" }
+ { "remote-random" }
+ { "resolv-retry" = "infinite" }
+ { "nobind" }
+ { "http-proxy-retry"
+ { "#comment" = "retry on connection failures" } }
+ { "http-proxy"
+ { "server" = "mytest" }
+ { "port" = "1024" } }
+ { "mute-replay-warnings" }
+ { "ns-cert-type" = "server" }
+
+(* Most (hopefully all) permutations for OpenVPN 2.3
+ * NOTE: This completely ignores IPv6 because it's hard to tell which OpenVPN
+ * options actually work with IPv6. Thar be dragons.
+ *)
+let all_permutations_conf = "
+config /a/canonical/file
+config relative_file
+mode p2p
+mode server
+local 192.168.1.1
+local hostname
+remote 192.168.1.1 1234
+remote hostname 1234
+remote hostname
+remote 192.168.1.1
+remote hostname 1234 tcp
+remote 192.168.1.1 1234 tcp
+remote hostname 1234 udp
+remote-random-hostname
+#comment square <connection> blocks should go here
+proto-force udp
+proto-force tcp
+remote-random
+proto udp
+proto tcp-client
+proto tcp-server
+connect-retry 5
+connect-timeout 10
+connect-retry-max 0
+show-proxy-settings
+http-proxy servername 1234
+http-proxy servername 1234 auto
+http-proxy servername 1234 auto-nct
+http-proxy servername 1234 auto none
+http-proxy servername 1234 auto basic
+http-proxy servername 1234 auto ntlm
+http-proxy servername 1234 relative_filename ntlm
+http-proxy servername 1234 /canonical/filename basic
+http-proxy-retry
+http-proxy-timeout 5
+http-proxy-option VERSION 1.0
+http-proxy-option AGENT an unquoted string with spaces
+http-proxy-option AGENT an_unquoted_string_without_spaces
+socks-proxy servername
+socks-proxy servername 1234
+socks-proxy servername 1234 /canonical/file
+socks-proxy servername 1234 relative/file
+socks-proxy-retry
+resolv-retry 5
+float
+ipchange my command goes here
+port 1234
+lport 1234
+rport 1234
+bind
+nobind
+dev tun
+dev tun0
+dev tap
+dev tap0
+dev null
+dev-type tun
+dev-type tap
+topology net30
+topology p2p
+topology subnet
+tun-ipv6
+dev-node /canonical/file
+dev-node relative/file
+lladdr 1.2.3.4
+iproute my command goes here
+ifconfig 1.2.3.4 5.6.7.8
+ifconfig-noexec
+ifconfig-nowarn
+route 111.222.123.123
+route networkname
+route vpn_gateway
+route net_gateway
+route remote_host
+route 111.222.123.123 255.123.255.221
+route 111.222.123.123 default
+route 111.222.123.123 255.123.255.231 111.222.123.1
+route 111.222.123.123 default 111.222.123.1
+route 111.222.123.123 255.123.255.231 default
+route 111.222.123.123 default default
+route 111.222.123.123 255.123.255.231 gatewayname
+route 111.222.123.123 255.123.255.231 gatewayname 5
+route 111.222.123.123 255.123.255.231 vpn_gateway
+route 111.222.123.123 255.123.255.231 net_gateway
+route 111.222.123.123 255.123.255.231 remote_host
+route 111.222.123.123 255.123.255.231 111.222.123.1
+route 111.222.123.123 255.123.255.231 111.222.123.1 5
+max-routes 5
+route-gateway gateway-name
+route-gateway 111.222.123.1
+route-gateway dhcp
+route-metric 5
+route-delay
+route-delay 1
+route-delay 1 2
+route-up my command goes here
+route-pre-down my command goes here
+route-noexec
+route-nopull
+allow-pull-fqdn
+client-nat snat 1.2.3.4 5.6.7.8 9.8.7.6
+client-nat dnat 1.2.3.4 5.6.7.8 9.8.7.6
+redirect-gateway local
+redirect-gateway local autolocal
+redirect-gateway local autolocal def1 bypass-dhcp bypass-dns block-local
+link-mtu 5
+redirect-private local
+redirect-private local autolocal
+redirect-private local autolocal def1 bypass-dhcp bypass-dns block-local
+tun-mtu 5
+tun-mtu-extra 5
+mtu-disc no
+mtu-disc maybe
+mtu-disc yes
+mtu-test
+fragment 5
+mssfix 1600
+sndbuf 65536
+rcvbuf 65535
+mark blahvalue
+socket-flags TCP_NODELAY
+txqueuelen 5
+shaper 50
+inactive 5
+inactive 5 1024
+ping 10
+ping-exit 10
+ping-restart 10
+keepalive 1 2
+ping-timer-rem
+persist-tun
+persist-key
+persist-local-ip
+persist-remote-ip
+mlock
+up my command goes here
+up-delay
+down my command goes here
+down-pre
+up-restart
+setenv myname myvalue
+setenv my0-_name my value with spaces
+setenv-safe myname myvalue
+setenv-safe my-_name my value with spaces
+ignore-unknown-option anopt
+ignore-unknown-option anopt anotheropt
+script-security 3
+disable-occ
+user username
+group groupname
+cd /canonical/dir
+cd relative/dir/
+chroot /canonical/dir
+chroot relative/dir/
+setcon selinux-context
+daemon
+daemon mydaemon_name
+syslog
+syslog my_syslog-name
+errors-to-stderr
+passtos
+inetd
+inetd wait
+inetd nowait
+inetd wait my-program_name
+log myfilename
+log-append myfilename
+suppress-timestamps
+writepid myfile
+nice 5
+fast-io
+multihome
+echo stuff to echo until end of line
+remap-usr1 SIGHUP
+remap-usr1 SIGTERM
+verb 6
+status myfile
+status myfile 15
+status-version
+status-version 3
+mute 20
+comp-lzo
+comp-lzo yes
+comp-lzo no
+comp-lzo adaptive
+management 123.123.123.123 1234
+management 123.123.123.123 1234 /canonical/file
+management-client
+management-query-passwords
+management-query-proxy
+management-query-remote
+management-forget-disconnect
+management-hold
+management-signal
+management-up-down
+management-client-auth
+management-client-pf
+management-log-cache 5
+management-client-user myuser
+management-client-user mygroup
+plugin /canonical/file
+plugin relative/file
+plugin myfile an init string
+server 1.2.3.4 255.255.255.0
+server 1.2.3.4 255.255.255.255 nopool
+server-bridge 1.2.3.4 1.2.3.5 50.5.5.5 50.5.5.6
+server-bridge nogw
+push \"my push string\"
+push-reset
+push-peer-info
+disable
+ifconfig-pool 1.1.1.1 2.2.2.2
+ifconfig-pool 1.1.1.1 2.2.2.2 255.255.255.0
+ifconfig-pool-persist myfile
+ifconfig-pool-persist myfile 50
+ifconfig-pool-linear
+ifconfig-push 1.1.1.1 2.2.2.2
+ifconfig-push 1.1.1.1 2.2.2.2 alias-name
+iroute 1.1.1.1
+iroute 1.1.1.1 2.2.2.2
+client-to-client
+duplicate-cn
+client-connect my command goes here
+client-disconnect my command goes here
+client-config-dir directory
+ccd-exclusive
+tmp-dir /directory
+hash-size 1 2
+bcast-buffers 5
+tcp-queue-limit 50
+tcp-nodelay
+max-clients 50
+max-routes-per-client 50
+stale-routes-check 5
+stale-routes-check 5 50
+connect-freq 50 100
+learn-address my command goes here
+auth-user-pass-verify /my/script/with/no/arguments.sh via-env
+auth-user-pass-verify \"myscript.sh arg1 arg2\" via-file
+opt-verify
+auth-user-pass-optional
+client-cert-not-required
+username-as-common-name
+port-share 1.1.1.1 1234
+port-share myhostname 1234
+port-share myhostname 1234 /canonical/dir
+client
+pull
+auth-user-pass
+auth-user-pass /canonical/file
+auth-user-pass relative/file
+auth-retry none
+auth-retry nointeract
+auth-retry interact
+static-challenge challenge_no_spaces 1
+static-challenge \"my quoted challenge string\" 0
+server-poll-timeout 50
+explicit-exit-notify
+explicit-exit-notify 5
+secret /canonicalfile
+secret relativefile
+secret filename 1
+secret filename 0
+key-direction
+auth none
+auth sha1
+cipher SHA1
+cipher sha1
+keysize 50
+prng SHA1
+prng SHA1 500
+engine
+engine blah
+no-replay
+replay-window 64
+replay-window 64 16
+mute-replay-warnings
+replay-persist /my/canonical/filename
+no-iv
+use-prediction-resistance
+test-crypto
+tls-server
+tls-client
+ca myfile
+capath /mydir/
+dh myfile
+cert myfile
+extra-certs myfile
+key myfile
+tls-version-min 1.1
+tls-version-min 2
+tls-version-min 1.1 or-highest
+tls-version-max 5.5
+pkcs12 myfile
+verify-hash AD:B0:95:D8:09:C8:36:45:12:A9:89:C8:90:09:CB:13:72:A6:AD:16
+pkcs11-cert-private 0
+pkcs11-cert-private 1
+pkcs11-id myname
+pkcs11-id-management
+pkcs11-pin-cache 50
+pkcs11-protected-authentication 0
+pkcs11-protected-authentication 1
+cryptoapicert \"SUBJ:Justin Akers\"
+key-method 2
+tls-cipher DEFAULT:!EXP:!PSK:!SRP:!kRSA
+tls-timeout 50
+reneg-bytes 50
+reneg-pkts 50
+reneg-sec 5
+hand-window 123
+tran-window 456
+single-session
+tls-exit
+tls-auth filename 1
+askpass /canonical/filename
+auth-nocache
+tls-verify my command goes here
+tls-export-cert /a/directory/for/things
+x509-username-field emailAddress
+x509-username-field ext:subjectAltName
+tls-remote myhostname
+verify-x509-name hostname name
+verify-x509-name hostname name-prefix
+verify-x509-name hostname subject
+ns-cert-type server
+ns-cert-type client
+remote-cert-tls server
+remote-cert-tls client
+remote-cert-ku 01
+remote-cert-ku 01 02 fa FF b3
+remote-cert-eku 123.3510.350.10
+remote-cert-eku \"TLS Web Client Authentication\"
+remote-cert-eku serverAuth
+crl-verify /a/file/path
+crl-verify /a/directory/ dir
+show-ciphers
+show-digests
+show-tls
+show-engines
+genkey
+mktun
+rmtun
+ifconfig-ipv6 2000:123:456::/64 1234:99:123::124
+ifconfig-ipv6-push 2000:123:456::/64 1234:99:123::124
+iroute-ipv6 2000:123:456::/64
+route-ipv6 2000:123:456::/64
+route-ipv6 2000:123:456::/64 1234:99:123::124
+route-ipv6 2000:123:456::/64 1234:99:123::124 500
+server-ipv6 2000:123:456::/64
+ifconfig-ipv6-pool 2000:123:456::/64
+
+"
+
+test OpenVPN.lns get all_permutations_conf =
+ { }
+ { "config" = "/a/canonical/file" }
+ { "config" = "relative_file" }
+ { "mode" = "p2p" }
+ { "mode" = "server" }
+ { "local" = "192.168.1.1" }
+ { "local" = "hostname" }
+ { "remote"
+ { "server" = "192.168.1.1" }
+ { "port" = "1234" }
+ }
+ { "remote"
+ { "server" = "hostname" }
+ { "port" = "1234" }
+ }
+ { "remote"
+ { "server" = "hostname" }
+ }
+ { "remote"
+ { "server" = "192.168.1.1" }
+ }
+ { "remote"
+ { "server" = "hostname" }
+ { "port" = "1234" }
+ { "proto" = "tcp" }
+ }
+ { "remote"
+ { "server" = "192.168.1.1" }
+ { "port" = "1234" }
+ { "proto" = "tcp" }
+ }
+ { "remote"
+ { "server" = "hostname" }
+ { "port" = "1234" }
+ { "proto" = "udp" }
+ }
+ { "remote-random-hostname" }
+ { "#comment" = "comment square <connection> blocks should go here" }
+ { "proto-force" = "udp" }
+ { "proto-force" = "tcp" }
+ { "remote-random" }
+ { "proto" = "udp" }
+ { "proto" = "tcp-client" }
+ { "proto" = "tcp-server" }
+ { "connect-retry" = "5" }
+ { "connect-timeout" = "10" }
+ { "connect-retry-max" = "0" }
+ { "show-proxy-settings" }
+ { "http-proxy"
+ { "server" = "servername" }
+ { "port" = "1234" }
+ }
+ { "http-proxy"
+ { "server" = "servername" }
+ { "port" = "1234" }
+ { "auth" = "auto" }
+ }
+ { "http-proxy"
+ { "server" = "servername" }
+ { "port" = "1234" }
+ { "auth" = "auto-nct" }
+ }
+ { "http-proxy"
+ { "server" = "servername" }
+ { "port" = "1234" }
+ { "auth" = "auto" }
+ { "auth-method" = "none" }
+ }
+ { "http-proxy"
+ { "server" = "servername" }
+ { "port" = "1234" }
+ { "auth" = "auto" }
+ { "auth-method" = "basic" }
+ }
+ { "http-proxy"
+ { "server" = "servername" }
+ { "port" = "1234" }
+ { "auth" = "auto" }
+ { "auth-method" = "ntlm" }
+ }
+ { "http-proxy"
+ { "server" = "servername" }
+ { "port" = "1234" }
+ { "auth" = "relative_filename" }
+ { "auth-method" = "ntlm" }
+ }
+ { "http-proxy"
+ { "server" = "servername" }
+ { "port" = "1234" }
+ { "auth" = "/canonical/filename" }
+ { "auth-method" = "basic" }
+ }
+ { "http-proxy-retry" }
+ { "http-proxy-timeout" = "5" }
+ { "http-proxy-option"
+ { "option" = "VERSION" }
+ { "value" = "1.0" }
+ }
+ { "http-proxy-option"
+ { "option" = "AGENT" }
+ { "value" = "an unquoted string with spaces" }
+ }
+ { "http-proxy-option"
+ { "option" = "AGENT" }
+ { "value" = "an_unquoted_string_without_spaces" }
+ }
+ { "socks-proxy"
+ { "server" = "servername" }
+ }
+ { "socks-proxy"
+ { "server" = "servername" }
+ { "port" = "1234" }
+ }
+ { "socks-proxy"
+ { "server" = "servername" }
+ { "port" = "1234" }
+ { "auth" = "/canonical/file" }
+ }
+ { "socks-proxy"
+ { "server" = "servername" }
+ { "port" = "1234" }
+ { "auth" = "relative/file" }
+ }
+ { "socks-proxy-retry" }
+ { "resolv-retry" = "5" }
+ { "float" }
+ { "ipchange" = "my command goes here" }
+ { "port" = "1234" }
+ { "lport" = "1234" }
+ { "rport" = "1234" }
+ { "bind" }
+ { "nobind" }
+ { "dev" = "tun" }
+ { "dev" = "tun0" }
+ { "dev" = "tap" }
+ { "dev" = "tap0" }
+ { "dev" = "null" }
+ { "dev-type" = "tun" }
+ { "dev-type" = "tap" }
+ { "topology" = "net30" }
+ { "topology" = "p2p" }
+ { "topology" = "subnet" }
+ { "tun-ipv6" }
+ { "dev-node" = "/canonical/file" }
+ { "dev-node" = "relative/file" }
+ { "lladdr" = "1.2.3.4" }
+ { "iproute" = "my command goes here" }
+ { "ifconfig"
+ { "local" = "1.2.3.4" }
+ { "remote" = "5.6.7.8" }
+ }
+ { "ifconfig-noexec" }
+ { "ifconfig-nowarn" }
+ { "route"
+ { "address" = "111.222.123.123" }
+ }
+ { "route"
+ { "address" = "networkname" }
+ }
+ { "route"
+ { "address" = "vpn_gateway" }
+ }
+ { "route"
+ { "address" = "net_gateway" }
+ }
+ { "route"
+ { "address" = "remote_host" }
+ }
+ { "route"
+ { "address" = "111.222.123.123" }
+ { "netmask" = "255.123.255.221" }
+ }
+ { "route"
+ { "address" = "111.222.123.123" }
+ { "netmask" = "default" }
+ }
+ { "route"
+ { "address" = "111.222.123.123" }
+ { "netmask" = "255.123.255.231" }
+ { "gateway" = "111.222.123.1" }
+ }
+ { "route"
+ { "address" = "111.222.123.123" }
+ { "netmask" = "default" }
+ { "gateway" = "111.222.123.1" }
+ }
+ { "route"
+ { "address" = "111.222.123.123" }
+ { "netmask" = "255.123.255.231" }
+ { "gateway" = "default" }
+ }
+ { "route"
+ { "address" = "111.222.123.123" }
+ { "netmask" = "default" }
+ { "gateway" = "default" }
+ }
+ { "route"
+ { "address" = "111.222.123.123" }
+ { "netmask" = "255.123.255.231" }
+ { "gateway" = "gatewayname" }
+ }
+ { "route"
+ { "address" = "111.222.123.123" }
+ { "netmask" = "255.123.255.231" }
+ { "gateway" = "gatewayname" }
+ { "metric" = "5" }
+ }
+ { "route"
+ { "address" = "111.222.123.123" }
+ { "netmask" = "255.123.255.231" }
+ { "gateway" = "vpn_gateway" }
+ }
+ { "route"
+ { "address" = "111.222.123.123" }
+ { "netmask" = "255.123.255.231" }
+ { "gateway" = "net_gateway" }
+ }
+ { "route"
+ { "address" = "111.222.123.123" }
+ { "netmask" = "255.123.255.231" }
+ { "gateway" = "remote_host" }
+ }
+ { "route"
+ { "address" = "111.222.123.123" }
+ { "netmask" = "255.123.255.231" }
+ { "gateway" = "111.222.123.1" }
+ }
+ { "route"
+ { "address" = "111.222.123.123" }
+ { "netmask" = "255.123.255.231" }
+ { "gateway" = "111.222.123.1" }
+ { "metric" = "5" }
+ }
+ { "max-routes" = "5" }
+ { "route-gateway" = "gateway-name" }
+ { "route-gateway" = "111.222.123.1" }
+ { "route-gateway" = "dhcp" }
+ { "route-metric" = "5" }
+ { "route-delay" }
+ { "route-delay"
+ { "seconds" = "1" }
+ }
+ { "route-delay"
+ { "seconds" = "1" }
+ { "win-seconds" = "2" }
+ }
+ { "route-up" = "my command goes here" }
+ { "route-pre-down" = "my command goes here" }
+ { "route-noexec" }
+ { "route-nopull" }
+ { "allow-pull-fqdn" }
+ { "client-nat"
+ { "type" = "snat" }
+ { "network" = "1.2.3.4" }
+ { "netmask" = "5.6.7.8" }
+ { "alias" = "9.8.7.6" }
+ }
+ { "client-nat"
+ { "type" = "dnat" }
+ { "network" = "1.2.3.4" }
+ { "netmask" = "5.6.7.8" }
+ { "alias" = "9.8.7.6" }
+ }
+ { "redirect-gateway"
+ { "flag" = "local" }
+ }
+ { "redirect-gateway"
+ { "flag" = "local" }
+ { "flag" = "autolocal" }
+ }
+ { "redirect-gateway"
+ { "flag" = "local" }
+ { "flag" = "autolocal" }
+ { "flag" = "def1" }
+ { "flag" = "bypass-dhcp" }
+ { "flag" = "bypass-dns" }
+ { "flag" = "block-local" }
+ }
+ { "link-mtu" = "5" }
+ { "redirect-private"
+ { "flag" = "local" }
+ }
+ { "redirect-private"
+ { "flag" = "local" }
+ { "flag" = "autolocal" }
+ }
+ { "redirect-private"
+ { "flag" = "local" }
+ { "flag" = "autolocal" }
+ { "flag" = "def1" }
+ { "flag" = "bypass-dhcp" }
+ { "flag" = "bypass-dns" }
+ { "flag" = "block-local" }
+ }
+ { "tun-mtu" = "5" }
+ { "tun-mtu-extra" = "5" }
+ { "mtu-disc" = "no" }
+ { "mtu-disc" = "maybe" }
+ { "mtu-disc" = "yes" }
+ { "mtu-test" }
+ { "fragment" = "5" }
+ { "mssfix" = "1600" }
+ { "sndbuf" = "65536" }
+ { "rcvbuf" = "65535" }
+ { "mark" = "blahvalue" }
+ { "socket-flags" = "TCP_NODELAY" }
+ { "txqueuelen" = "5" }
+ { "shaper" = "50" }
+ { "inactive"
+ { "seconds" = "5" }
+ }
+ { "inactive"
+ { "seconds" = "5" }
+ { "bytes" = "1024" }
+ }
+ { "ping" = "10" }
+ { "ping-exit" = "10" }
+ { "ping-restart" = "10" }
+ { "keepalive"
+ { "ping" = "1" }
+ { "timeout" = "2" }
+ }
+ { "ping-timer-rem" }
+ { "persist-tun" }
+ { "persist-key" }
+ { "persist-local-ip" }
+ { "persist-remote-ip" }
+ { "mlock" }
+ { "up" = "my command goes here" }
+ { "up-delay" }
+ { "down" = "my command goes here" }
+ { "down-pre" }
+ { "up-restart" }
+ { "setenv"
+ { "myname" = "myvalue" }
+ }
+ { "setenv"
+ { "my0-_name" = "my value with spaces" }
+ }
+ { "setenv-safe"
+ { "myname" = "myvalue" }
+ }
+ { "setenv-safe"
+ { "my-_name" = "my value with spaces" }
+ }
+ { "ignore-unknown-option"
+ { "opt" = "anopt" }
+ }
+ { "ignore-unknown-option"
+ { "opt" = "anopt" }
+ { "opt" = "anotheropt" }
+ }
+ { "script-security" = "3" }
+ { "disable-occ" }
+ { "user" = "username" }
+ { "group" = "groupname" }
+ { "cd" = "/canonical/dir" }
+ { "cd" = "relative/dir/" }
+ { "chroot" = "/canonical/dir" }
+ { "chroot" = "relative/dir/" }
+ { "setcon" = "selinux-context" }
+ { "daemon" }
+ { "daemon" = "mydaemon_name" }
+ { "syslog" }
+ { "syslog" = "my_syslog-name" }
+ { "errors-to-stderr" }
+ { "passtos" }
+ { "inetd" }
+ { "inetd"
+ { "mode" = "wait" }
+ }
+ { "inetd"
+ { "mode" = "nowait" }
+ }
+ { "inetd"
+ { "mode" = "wait" }
+ { "progname" = "my-program_name" }
+ }
+ { "log" = "myfilename" }
+ { "log-append" = "myfilename" }
+ { "suppress-timestamps" }
+ { "writepid" = "myfile" }
+ { "nice" = "5" }
+ { "fast-io" }
+ { "multihome" }
+ { "echo" = "stuff to echo until end of line" }
+ { "remap-usr1" = "SIGHUP" }
+ { "remap-usr1" = "SIGTERM" }
+ { "verb" = "6" }
+ { "status"
+ { "file" = "myfile" }
+ }
+ { "status"
+ { "file" = "myfile" }
+ { "repeat-seconds" = "15" }
+ }
+ { "status-version" }
+ { "status-version" = "3" }
+ { "mute" = "20" }
+ { "comp-lzo" }
+ { "comp-lzo" = "yes" }
+ { "comp-lzo" = "no" }
+ { "comp-lzo" = "adaptive" }
+ { "management"
+ { "server" = "123.123.123.123" }
+ { "port" = "1234" }
+ }
+ { "management"
+ { "server" = "123.123.123.123" }
+ { "port" = "1234" }
+ { "pwfile" = "/canonical/file" }
+ }
+ { "management-client" }
+ { "management-query-passwords" }
+ { "management-query-proxy" }
+ { "management-query-remote" }
+ { "management-forget-disconnect" }
+ { "management-hold" }
+ { "management-signal" }
+ { "management-up-down" }
+ { "management-client-auth" }
+ { "management-client-pf" }
+ { "management-log-cache" = "5" }
+ { "management-client-user" = "myuser" }
+ { "management-client-user" = "mygroup" }
+ { "plugin"
+ { "file" = "/canonical/file" }
+ }
+ { "plugin"
+ { "file" = "relative/file" }
+ }
+ { "plugin"
+ { "file" = "myfile" }
+ { "init-string" = "an init string" }
+ }
+ { "server"
+ { "address" = "1.2.3.4" }
+ { "netmask" = "255.255.255.0" }
+ }
+ { "server"
+ { "address" = "1.2.3.4" }
+ { "netmask" = "255.255.255.255" }
+ { "nopool" }
+ }
+ { "server-bridge"
+ { "address" = "1.2.3.4" }
+ { "netmask" = "1.2.3.5" }
+ { "start" = "50.5.5.5" }
+ { "end" = "50.5.5.6" }
+ }
+ { "server-bridge" = "nogw" }
+ { "push" = "my push string" }
+ { "push-reset" }
+ { "push-peer-info" }
+ { "disable" }
+ { "ifconfig-pool"
+ { "start" = "1.1.1.1" }
+ { "end" = "2.2.2.2" }
+ }
+ { "ifconfig-pool"
+ { "start" = "1.1.1.1" }
+ { "end" = "2.2.2.2" }
+ { "netmask" = "255.255.255.0" }
+ }
+ { "ifconfig-pool-persist"
+ { "file" = "myfile" }
+ }
+ { "ifconfig-pool-persist"
+ { "file" = "myfile" }
+ { "seconds" = "50" }
+ }
+ { "ifconfig-pool-linear" }
+ { "ifconfig-push"
+ { "local" = "1.1.1.1" }
+ { "remote-netmask" = "2.2.2.2" }
+ }
+ { "ifconfig-push"
+ { "local" = "1.1.1.1" }
+ { "remote-netmask" = "2.2.2.2" }
+ { "alias" = "alias-name" }
+ }
+ { "iroute"
+ { "local" = "1.1.1.1" }
+ }
+ { "iroute"
+ { "local" = "1.1.1.1" }
+ { "netmask" = "2.2.2.2" }
+ }
+ { "client-to-client" }
+ { "duplicate-cn" }
+ { "client-connect" = "my command goes here" }
+ { "client-disconnect" = "my command goes here" }
+ { "client-config-dir" = "directory" }
+ { "ccd-exclusive" }
+ { "tmp-dir" = "/directory" }
+ { "hash-size"
+ { "real" = "1" }
+ { "virtual" = "2" }
+ }
+ { "bcast-buffers" = "5" }
+ { "tcp-queue-limit" = "50" }
+ { "tcp-nodelay" }
+ { "max-clients" = "50" }
+ { "max-routes-per-client" = "50" }
+ { "stale-routes-check"
+ { "age" = "5" }
+ }
+ { "stale-routes-check"
+ { "age" = "5" }
+ { "interval" = "50" }
+ }
+ { "connect-freq"
+ { "num" = "50" }
+ { "sec" = "100" }
+ }
+ { "learn-address" = "my command goes here" }
+ { "auth-user-pass-verify"
+ {
+ { "command" = "/my/script/with/no/arguments.sh" }
+ }
+ { "method" = "via-env" }
+ }
+ { "auth-user-pass-verify"
+ {
+ { "command" = "myscript.sh arg1 arg2" }
+ }
+ { "method" = "via-file" }
+ }
+ { "opt-verify" }
+ { "auth-user-pass-optional" }
+ { "client-cert-not-required" }
+ { "username-as-common-name" }
+ { "port-share"
+ { "host" = "1.1.1.1" }
+ { "port" = "1234" }
+ }
+ { "port-share"
+ { "host" = "myhostname" }
+ { "port" = "1234" }
+ }
+ { "port-share"
+ { "host" = "myhostname" }
+ { "port" = "1234" }
+ { "dir" = "/canonical/dir" }
+ }
+ { "client" }
+ { "pull" }
+ { "auth-user-pass" }
+ { "auth-user-pass" = "/canonical/file" }
+ { "auth-user-pass" = "relative/file" }
+ { "auth-retry" = "none" }
+ { "auth-retry" = "nointeract" }
+ { "auth-retry" = "interact" }
+ { "static-challenge"
+ {
+ { "text" = "challenge_no_spaces" }
+ }
+ { "echo" = "1" }
+ }
+ { "static-challenge"
+ {
+ { "text" = "my quoted challenge string" }
+ }
+ { "echo" = "0" }
+ }
+ { "server-poll-timeout" = "50" }
+ { "explicit-exit-notify" }
+ { "explicit-exit-notify" = "5" }
+ { "secret"
+ { "file" = "/canonicalfile" }
+ }
+ { "secret"
+ { "file" = "relativefile" }
+ }
+ { "secret"
+ { "file" = "filename" }
+ { "direction" = "1" }
+ }
+ { "secret"
+ { "file" = "filename" }
+ { "direction" = "0" }
+ }
+ { "key-direction" }
+ { "auth" = "none" }
+ { "auth" = "sha1" }
+ { "cipher" = "SHA1" }
+ { "cipher" = "sha1" }
+ { "keysize" = "50" }
+ { "prng"
+ { "algorithm" = "SHA1" }
+ }
+ { "prng"
+ { "algorithm" = "SHA1" }
+ { "nsl" = "500" }
+ }
+ { "engine" }
+ { "engine" = "blah" }
+ { "no-replay" }
+ { "replay-window"
+ { "window-size" = "64" }
+ }
+ { "replay-window"
+ { "window-size" = "64" }
+ { "seconds" = "16" }
+ }
+ { "mute-replay-warnings" }
+ { "replay-persist" = "/my/canonical/filename" }
+ { "no-iv" }
+ { "use-prediction-resistance" }
+ { "test-crypto" }
+ { "tls-server" }
+ { "tls-client" }
+ { "ca" = "myfile" }
+ { "capath" = "/mydir/" }
+ { "dh" = "myfile" }
+ { "cert" = "myfile" }
+ { "extra-certs" = "myfile" }
+ { "key" = "myfile" }
+ { "tls-version-min" = "1.1" }
+ { "tls-version-min" = "2" }
+ { "tls-version-min" = "1.1"
+ { "or-highest" }
+ }
+ { "tls-version-max" = "5.5" }
+ { "pkcs12" = "myfile" }
+ { "verify-hash" = "AD:B0:95:D8:09:C8:36:45:12:A9:89:C8:90:09:CB:13:72:A6:AD:16" }
+ { "pkcs11-cert-private" = "0" }
+ { "pkcs11-cert-private" = "1" }
+ { "pkcs11-id" = "myname" }
+ { "pkcs11-id-management" }
+ { "pkcs11-pin-cache" = "50" }
+ { "pkcs11-protected-authentication" = "0" }
+ { "pkcs11-protected-authentication" = "1" }
+ { "cryptoapicert"
+ { "SUBJ" = "Justin Akers" }
+ }
+ { "key-method" = "2" }
+ { "tls-cipher"
+ { "cipher" = "DEFAULT" }
+ { "cipher" = "!EXP" }
+ { "cipher" = "!PSK" }
+ { "cipher" = "!SRP" }
+ { "cipher" = "!kRSA" }
+ }
+ { "tls-timeout" = "50" }
+ { "reneg-bytes" = "50" }
+ { "reneg-pkts" = "50" }
+ { "reneg-sec" = "5" }
+ { "hand-window" = "123" }
+ { "tran-window" = "456" }
+ { "single-session" }
+ { "tls-exit" }
+ { "tls-auth"
+ { "key" = "filename" }
+ { "is_client" = "1" }
+ }
+ { "askpass" = "/canonical/filename" }
+ { "auth-nocache" }
+ { "tls-verify" = "my command goes here" }
+ { "tls-export-cert" = "/a/directory/for/things" }
+ { "x509-username-field"
+ { "subj" = "emailAddress" }
+ }
+ { "x509-username-field"
+ { "ext" = "subjectAltName" }
+ }
+ { "tls-remote" = "myhostname" }
+ { "verify-x509-name"
+ { "name" = "hostname" }
+ { "type" = "name" }
+ }
+ { "verify-x509-name"
+ { "name" = "hostname" }
+ { "type" = "name-prefix" }
+ }
+ { "verify-x509-name"
+ { "name" = "hostname" }
+ { "type" = "subject" }
+ }
+ { "ns-cert-type" = "server" }
+ { "ns-cert-type" = "client" }
+ { "remote-cert-tls" = "server" }
+ { "remote-cert-tls" = "client" }
+ { "remote-cert-ku"
+ { "usage" = "01" }
+ }
+ { "remote-cert-ku"
+ { "usage" = "01" }
+ { "usage" = "02" }
+ { "usage" = "fa" }
+ { "usage" = "FF" }
+ { "usage" = "b3" }
+ }
+ { "remote-cert-eku"
+ { "oid" = "123.3510.350.10" }
+ }
+ { "remote-cert-eku"
+ { "symbol" = "TLS Web Client Authentication" }
+ }
+ { "remote-cert-eku"
+ { "symbol" = "serverAuth" }
+ }
+ { "crl-verify" = "/a/file/path" }
+ { "crl-verify" = "/a/directory/"
+ { "dir" }
+ }
+ { "show-ciphers" }
+ { "show-digests" }
+ { "show-tls" }
+ { "show-engines" }
+ { "genkey" }
+ { "mktun" }
+ { "rmtun" }
+ { "ifconfig-ipv6"
+ { "address" = "2000:123:456::/64" }
+ { "remote" = "1234:99:123::124" }
+ }
+ { "ifconfig-ipv6-push"
+ { "address" = "2000:123:456::/64" }
+ { "remote" = "1234:99:123::124" }
+ }
+ { "iroute-ipv6" = "2000:123:456::/64" }
+ { "route-ipv6"
+ { "network" = "2000:123:456::/64" }
+ }
+ { "route-ipv6"
+ { "network" = "2000:123:456::/64" }
+ { "gateway" = "1234:99:123::124" }
+ }
+ { "route-ipv6"
+ { "network" = "2000:123:456::/64" }
+ { "gateway" = "1234:99:123::124" }
+ { "metric" = "500" }
+ }
+ { "server-ipv6" = "2000:123:456::/64" }
+ { "ifconfig-ipv6-pool" = "2000:123:456::/64" }
+ { }
+
+
--- /dev/null
+module Test_oz =
+
+ let conf = "
+[paths]
+output_dir = /var/lib/libvirt/images
+data_dir = /var/lib/oz
+
+[libvirt]
+uri = qemu:///system
+image_type = raw
+"
+
+ test Oz.lns get conf =
+ {}
+ { "paths"
+ { "output_dir" = "/var/lib/libvirt/images" }
+ { "data_dir" = "/var/lib/oz" }
+ {} }
+ { "libvirt"
+ { "uri" = "qemu:///system" }
+ { "image_type" = "raw" }
+ }
+
+ test Oz.lns put conf after
+ set "libvirt/cpus" "2"
+ = "
+[paths]
+output_dir = /var/lib/libvirt/images
+data_dir = /var/lib/oz
+
+[libvirt]
+uri = qemu:///system
+image_type = raw
+cpus=2
+"
+
--- /dev/null
+module Test_Pagekite =
+
+let conf1 = "# Use the pagekite.net service defaults.
+defaults
+"
+test Pagekite.lns get conf1 =
+ { "#comment" = "Use the pagekite.net service defaults." }
+ { "defaults" }
+
+
+let conf2 ="
+frontends = pagekite.freedombox.me
+ports=80,81
+"
+test Pagekite.lns get conf2 =
+ { }
+ { "frontends" = "pagekite.freedombox.me" }
+ { "ports"
+ { "1" = "80" }
+ { "2" = "81" } }
+
+
+let conf3 = "frontend=pagekite.freedombox.me
+host=192.168.0.3
+"
+test Pagekite.lns get conf3 =
+ { "frontend" = "pagekite.freedombox.me" }
+ { "host" = "192.168.0.3" }
+
+
+let conf4 = "isfrontend
+ports=80,443
+protos=http,https
+domain=http,https:*.your.domain:MakeUpAPasswordHere
+"
+test Pagekite.lns get conf4 =
+ { "isfrontend" }
+ { "ports"
+ { "1" = "80" }
+ { "2" = "443" } }
+ { "protos"
+ { "1" = "http" }
+ { "2" = "https" } }
+ { "domain" = "http,https:*.your.domain:MakeUpAPasswordHere" }
+
+let conf_account = "kitename = my.freedombox.me
+kitesecret = 0420
+# Delete this line!
+abort_not_configured
+"
+test Pagekite.lns get conf_account =
+ { "kitename" = "my.freedombox.me" }
+ { "kitesecret" = "0420" }
+ { "#comment" = "Delete this line!" }
+ { "abort_not_configured" }
+
+
+let conf_service = "
+service_on = raw/22:@kitename : localhost:22 : @kitesecret
+service_on=http:192.168.0.1:127.0.0.1:80:
+service_on=https:yourhostname,fqdn:127.0.0.1:443:
+"
+test Pagekite.lns get conf_service =
+ { }
+ { "service_on"
+ { "1"
+ { "protocol" = "raw/22" }
+ { "kitename" = "@kitename" }
+ { "backend_host" = "localhost" }
+ { "backend_port" = "22" }
+ { "secret" = "@kitesecret" }
+ }
+ }
+ { "service_on"
+ { "2"
+ { "protocol" = "http" }
+ { "kitename" = "192.168.0.1" }
+ { "backend_host" = "127.0.0.1" }
+ { "backend_port" = "80" }
+ }
+ }
+ { "service_on"
+ { "3"
+ { "protocol" = "https" }
+ { "kitename" = "yourhostname,fqdn" }
+ { "backend_host" = "127.0.0.1" }
+ { "backend_port" = "443" }
+ }
+ }
+
+
+let conf_encryption = "
+frontend=frontend.your.domain:443
+fe_certname=frontend.your/domain
+ca_certs=/etc/pagekite.d/site-cert.pem
+tls_endpoint=frontend.your.domain:/path/to/frontend.pem
+"
+test Pagekite.lns get conf_encryption =
+ { }
+ { "frontend" = "frontend.your.domain:443" }
+ { "fe_certname" = "frontend.your/domain" }
+ { "ca_certs" = "/etc/pagekite.d/site-cert.pem" }
+ { "tls_endpoint" = "frontend.your.domain:/path/to/frontend.pem" }
+
+
+let conf_service_cfg = "insecure
+service_cfg = KITENAME.pagekite.me/80 : insecure : True
+"
+test Pagekite.lns get conf_service_cfg =
+ { "insecure" }
+ { "service_cfg" = "KITENAME.pagekite.me/80 : insecure : True" }
--- /dev/null
+module Test_pam =
+
+ let example = "#%PAM-1.0
+AUTH [user_unknown=ignore success=ok ignore=ignore default=bad] pam_securetty.so
+session optional pam_keyinit.so force revoke
+"
+
+ test Pam.lns get example =
+ { "#comment" = "%PAM-1.0" }
+ { "1" { "type" = "AUTH" }
+ { "control" = "[user_unknown=ignore success=ok ignore=ignore default=bad]" }
+ { "module" = "pam_securetty.so" } }
+ { "2" { "type" = "session" }
+ { "control" = "optional" }
+ { "module" = "pam_keyinit.so" }
+ { "argument" = "force" }
+ { "argument" = "revoke" } }
+
+ test Pam.lns put example after
+ set "/1/control" "requisite"
+ = "#%PAM-1.0
+AUTH requisite pam_securetty.so
+session optional pam_keyinit.so force revoke
+"
+
+ (* Check that trailing whitespace is handled & preserved *)
+ let trailing_ws = "auth\trequired\tpam_unix.so \n"
+
+ test Pam.lns put trailing_ws after
+ set "/1/type" "auth"
+ = trailing_ws
+
+ test Pam.lns get "@include common-password\n" =
+ { "include" = "common-password" }
+
+ test Pam.lns get "-password optional pam_gnome_keyring.so\n" =
+ { "1"
+ { "optional" }
+ { "type" = "password" }
+ { "control" = "optional" }
+ { "module" = "pam_gnome_keyring.so" }
+ }
+
+ test Pam.lns get "session optional pam_motd.so [motd=/etc/bad example]\n" =
+ { "1"
+ { "type" = "session" }
+ { "control" = "optional" }
+ { "module" = "pam_motd.so" }
+ { "argument" = "[motd=/etc/bad example]" }
+ }
+
+(* Multiline PAM entries; issue #590 *)
+test Pam.lns get "account \\\ninclude \\\n system-auth\n" =
+ { "1"
+ { "type" = "account" }
+ { "control" = "include" }
+ { "module" = "system-auth" } }
+
+test Pam.lns get "account\\\n[success=1 default=ignore] \\
+ pam_succeed_if.so\\\nuser\\\n =\\\nvagrant\\\nuse_uid\\\nquiet\n" =
+ { "1"
+ { "type" = "account" }
+ { "control" = "[success=1 default=ignore]" }
+ { "module" = "pam_succeed_if.so" }
+ { "argument" = "user" }
+ { "argument" = "=" }
+ { "argument" = "vagrant" }
+ { "argument" = "use_uid" }
+ { "argument" = "quiet" } }
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+module Test_pamconf =
+
+ let example = "# Authentication management
+#
+# login service (explicit because of pam_dial_auth)
+#
+login auth requisite pam_authtok_get.so.1
+login auth required pam_dhkeys.so.1 arg
+
+other session required pam_unix_session.so.1
+"
+
+ test PamConf.lns get example =
+ { "#comment" = "Authentication management" }
+ { }
+ { "#comment" = "login service (explicit because of pam_dial_auth)" }
+ { }
+ { "1" { "service" = "login" }
+ { "type" = "auth" }
+ { "control" = "requisite" }
+ { "module" = "pam_authtok_get.so.1" } }
+ { "2" { "service" = "login" }
+ { "type" = "auth" }
+ { "control" = "required" }
+ { "module" = "pam_dhkeys.so.1" }
+ { "argument" = "arg" } }
+ { }
+ { "3" { "service" = "other" }
+ { "type" = "session" }
+ { "control" = "required" }
+ { "module" = "pam_unix_session.so.1" } }
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+module Test_Passwd =
+
+let conf = "root:x:0:0:root:/root:/bin/bash
+libuuid:x:100:101::/var/lib/libuuid:/bin/sh
+free:x:1000:1000:Free Ekanayaka,,,:/home/free:/bin/bash
+root:*:0:0:Charlie &:/root:/bin/csh
+"
+
+test Passwd.lns get conf =
+ { "root"
+ { "password" = "x" }
+ { "uid" = "0" }
+ { "gid" = "0" }
+ { "name" = "root" }
+ { "home" = "/root" }
+ { "shell" = "/bin/bash" } }
+ { "libuuid"
+ { "password" = "x" }
+ { "uid" = "100" }
+ { "gid" = "101" }
+ { "name" }
+ { "home" = "/var/lib/libuuid" }
+ { "shell" = "/bin/sh" } }
+ { "free"
+ { "password" = "x" }
+ { "uid" = "1000" }
+ { "gid" = "1000" }
+ { "name" = "Free Ekanayaka,,," }
+ { "home" = "/home/free" }
+ { "shell" = "/bin/bash" } }
+ { "root"
+ { "password" = "*" }
+ { "uid" = "0" }
+ { "gid" = "0" }
+ { "name" = "Charlie &" }
+ { "home" = "/root" }
+ { "shell" = "/bin/csh" } }
+
+(* Popular on Solaris *)
+test Passwd.lns get "+@some-nis-group::::::\n" =
+ { "@nis" = "some-nis-group" }
+
+test Passwd.lns get "+\n" =
+ { "@nisdefault" }
+
+test Passwd.lns get "+::::::\n" =
+ { "@nisdefault"
+ { "password" = "" }
+ { "uid" = "" }
+ { "gid" = "" }
+ { "name" }
+ { "home" }
+ { "shell" } }
+
+test Passwd.lns get "+::::::/sbin/nologin\n" =
+ { "@nisdefault"
+ { "password" = "" }
+ { "uid" = "" }
+ { "gid" = "" }
+ { "name" }
+ { "home" }
+ { "shell" = "/sbin/nologin" } }
+
+test Passwd.lns get "+:*:0:0:::\n" =
+ { "@nisdefault"
+ { "password" = "*" }
+ { "uid" = "0" }
+ { "gid" = "0" }
+ { "name" }
+ { "home" }
+ { "shell" } }
+
+(* NIS entries with overrides, ticket #339 *)
+test Passwd.lns get "+@bob:::::/home/bob:/bin/bash\n" =
+ { "@nis" = "bob"
+ { "home" = "/home/bob" }
+ { "shell" = "/bin/bash" } }
+
+(* NIS user entries *)
+test Passwd.lns get "+bob::::::\n" =
+ { "@+nisuser" = "bob" }
+
+test Passwd.lns get "+bob::::User Comment:/home/bob:/bin/bash\n" =
+ { "@+nisuser" = "bob"
+ { "name" = "User Comment" }
+ { "home" = "/home/bob" }
+ { "shell" = "/bin/bash" } }
+
+test Passwd.lns put "+bob::::::\n" after
+ set "@+nisuser" "alice"
+= "+alice::::::\n"
+
+test Passwd.lns put "+bob::::::\n" after
+ set "@+nisuser/name" "User Comment";
+ set "@+nisuser/home" "/home/bob";
+ set "@+nisuser/shell" "/bin/bash"
+= "+bob::::User Comment:/home/bob:/bin/bash\n"
+
+test Passwd.lns get "-bob::::::\n" =
+ { "@-nisuser" = "bob" }
+
+test Passwd.lns put "-bob::::::\n" after
+ set "@-nisuser" "alice"
+= "-alice::::::\n"
--- /dev/null
+
+
+module Test_Pbuilder =
+
+let conf = "BASETGZ=/var/cache/pbuilder/base.tgz
+#EXTRAPACKAGES=gcc3.0-athlon-builder
+export DEBIAN_BUILDARCH=athlon
+BUILDPLACE=/var/cache/pbuilder/build/
+MIRRORSITE=http://ftp.jp.debian.org/debian
+"
+
+test Pbuilder.lns get conf =
+ { "BASETGZ" = "/var/cache/pbuilder/base.tgz" }
+ { "#comment" = "EXTRAPACKAGES=gcc3.0-athlon-builder" }
+ { "DEBIAN_BUILDARCH" = "athlon"
+ { "export" } }
+ { "BUILDPLACE" = "/var/cache/pbuilder/build/" }
+ { "MIRRORSITE" = "http://ftp.jp.debian.org/debian" }
+
+
--- /dev/null
+module Test_pg_hba =
+
+ (* Main test *)
+ let conf ="# TYPE DATABASE USER CIDR-ADDRESS METHOD
+
+local all all ident sameuser
+# IPv4 local connections:
+host all all 127.0.0.1/32 md5
+# Remote connections by hostname:
+host all all foo.example.com md5
+# Remote connections by suffix of hostname/fqdn:
+host all all .example.com md5
+# IPv6 local connections:
+host all all ::1/128 md5
+"
+
+ test Pg_Hba.lns get conf =
+ { "#comment" = "TYPE DATABASE USER CIDR-ADDRESS METHOD" }
+ {}
+ { "1"
+ { "type" = "local" }
+ { "database" = "all" }
+ { "user" = "all" }
+ { "method" = "ident"
+ { "option" = "sameuser" } }
+ }
+ { "#comment" = "IPv4 local connections:" }
+ { "2"
+ { "type" = "host" }
+ { "database" = "all" }
+ { "user" = "all" }
+ { "address" = "127.0.0.1/32" }
+ { "method" = "md5" }
+ }
+ { "#comment" = "Remote connections by hostname:" }
+ { "3"
+ { "type" = "host" }
+ { "database" = "all" }
+ { "user" = "all" }
+ { "address" = "foo.example.com" }
+ { "method" = "md5" }
+ }
+ { "#comment" = "Remote connections by suffix of hostname/fqdn:" }
+ { "4"
+ { "type" = "host" }
+ { "database" = "all" }
+ { "user" = "all" }
+ { "address" = ".example.com" }
+ { "method" = "md5" }
+ }
+ { "#comment" = "IPv6 local connections:" }
+ { "5"
+ { "type" = "host" }
+ { "database" = "all" }
+ { "user" = "all" }
+ { "address" = "::1/128" }
+ { "method" = "md5" }
+ }
+
+(* ------------------------------------------------------------- *)
+
+ (* Simple local test *)
+ test Pg_Hba.lns get "local all all trust\n" =
+ { "1"
+ { "type" = "local" }
+ { "database" = "all" }
+ { "user" = "all" }
+ { "method" = "trust" }
+ }
+
+ (* Remote test with comma-separated database names *)
+ test Pg_Hba.lns get "hostssl db1,db2,db3 +pgusers 127.0.0.1/32 trust\n" =
+ { "1"
+ { "type" = "hostssl" }
+ { "database" = "db1" }
+ { "database" = "db2" }
+ { "database" = "db3" }
+ { "user" = "+pgusers" }
+ { "address" = "127.0.0.1/32" }
+ { "method" = "trust" }
+ }
+
+ (* Test with comma-separated user names *)
+ test Pg_Hba.lns get "hostnossl sameuser u1,u2,u3 127.0.0.1/32 trust\n" =
+ { "1"
+ { "type" = "hostnossl" }
+ { "database" = "sameuser" }
+ { "user" = "u1" }
+ { "user" = "u2" }
+ { "user" = "u3" }
+ { "address" = "127.0.0.1/32" }
+ { "method" = "trust" }
+ }
+
+ (* Test with quoted database and user names *)
+ test Pg_Hba.lns get "host \"sameuser\" \"all\" 127.0.0.1/32 trust\n" =
+ { "1"
+ { "type" = "host" }
+ { "database" = "\"sameuser\"" }
+ { "user" = "\"all\"" }
+ { "address" = "127.0.0.1/32" }
+ { "method" = "trust" }
+ }
+
+ (* Test with IP + netmask address format *)
+ test Pg_Hba.lns get "host all all 192.168.1.1 255.255.0.0 trust\n" =
+ { "1"
+ { "type" = "host" }
+ { "database" = "all" }
+ { "user" = "all" }
+ { "address" = "192.168.1.1 255.255.0.0" }
+ { "method" = "trust" }
+ }
+
+ (* Test with fqdn as address *)
+ test Pg_Hba.lns get "host all all foo.example.com md5\n" =
+ { "1"
+ { "type" = "host" }
+ { "database" = "all" }
+ { "user" = "all" }
+ { "address" = "foo.example.com" }
+ { "method" = "md5" }
+ }
+
+ (* Test with fqdn suffix as address *)
+ test Pg_Hba.lns get "host all all .example.com md5\n" =
+ { "1"
+ { "type" = "host" }
+ { "database" = "all" }
+ { "user" = "all" }
+ { "address" = ".example.com" }
+ { "method" = "md5" }
+ }
+
+ (* Local types may not have an address *)
+ test Pg_Hba.lns get "local all all 127.0.0.1/32 trust\n" = *
+
+ (* Remote types must have an address *)
+ test Pg_Hba.lns get "host all all trust\n" = *
+
+ (* The space between the IP and the netmask must not be considered as a
+ column separator ("method" is missing here) *)
+ test Pg_Hba.lns get "host all all 192.168.1.1 255.255.0.0\n" = *
+
+ (* Ticket #313: support authentication method options *)
+ test Pg_Hba.lns get "host all all .dev.example.com gss include_realm=0 krb_realm=EXAMPLE.COM map=somemap
+host all all .dev.example.com ldap ldapserver=auth.example.com ldaptls=1 ldapprefix=\"uid=\" ldapsuffix=\",ou=people,dc=example,dc=com\"\n" =
+ { "1"
+ { "type" = "host" }
+ { "database" = "all" }
+ { "user" = "all" }
+ { "address" = ".dev.example.com" }
+ { "method" = "gss"
+ { "option" = "include_realm"
+ { "value" = "0" } }
+ { "option" = "krb_realm"
+ { "value" = "EXAMPLE.COM" } }
+ { "option" = "map"
+ { "value" = "somemap" } } } }
+ { "2"
+ { "type" = "host" }
+ { "database" = "all" }
+ { "user" = "all" }
+ { "address" = ".dev.example.com" }
+ { "method" = "ldap"
+ { "option" = "ldapserver"
+ { "value" = "auth.example.com" } }
+ { "option" = "ldaptls"
+ { "value" = "1" } }
+ { "option" = "ldapprefix"
+ { "value" = "uid=" } }
+ { "option" = "ldapsuffix"
+ { "value" = ",ou=people,dc=example,dc=com" } } } }
+
+ let conf_issues = "# TYPE DATABASE USER CIDR-ADDRESS METHOD
+# Issue 775 - auth-method may contain hyphens
+local all all scram-sha-256 sameuser
+host all all 127.0.0.1/32 scram-sha-256
+"
+ test Pg_Hba.lns get conf_issues =
+ { "#comment" = "TYPE DATABASE USER CIDR-ADDRESS METHOD" }
+ { "#comment" = "Issue 775 - auth-method may contain hyphens" }
+ { "1"
+ { "type" = "local" }
+ { "database" = "all" }
+ { "user" = "all" }
+ { "method" = "scram-sha-256"
+ { "option" = "sameuser" } }
+ }
+ { "2"
+ { "type" = "host" }
+ { "database" = "all" }
+ { "user" = "all" }
+ { "address" = "127.0.0.1/32" }
+ { "method" = "scram-sha-256" }
+ }
+
+ (* Unsupported yet *)
+ (* test Pg_Hba.lns get "host \"db with spaces\" \"user with spaces\" 127.0.0.1/32 trust\n" =? *)
+ (* test Pg_Hba.lns get "host \"db,with,commas\" \"user,with,commas\" 127.0.0.1/32 trust\n" =? *)
--- /dev/null
+module Test_Pgbouncer =
+ let pgconfig =";; database name = connect string
+;;
+;; connect string params:
+;; dbname= host= port= user= password=
+[databases]
+; foodb over unix socket
+foodb =
+
+; redirect bardb to bazdb on localhost
+bardb = host=localhost dbname=bazdb
+
+; access to dest database will go with single user
+forcedb = host=127.0.0.1 port=300 user=baz password=foo client_encoding=UNICODE datestyle=ISO connect_query='SELECT 1'
+[pgbouncer]
+;;; Administrative settings
+logfile = /var/log/pgbouncer/pgbouncer.log
+pidfile = /var/run/pgbouncer/pgbouncer.pid
+; ip address or * which means all ip-s
+listen_addr = 127.0.0.1
+listen_port = 6432
+;auth_file = /8.0/main/global/pg_auth
+auth_file = /var/lib/pgsql/data/global/pg_auth
+admin_users = postgres
+server_reset_query = DISCARD ALL
+"
+
+test Pgbouncer.lns get pgconfig =
+ { "#comment" = "; database name = connect string" }
+ { "#comment" = ";" }
+ { "#comment" = "; connect string params:" }
+ { "#comment" = "; dbname= host= port= user= password=" }
+ { "databases"
+ { "#comment" = "foodb over unix socket" }
+ { "foodb" }
+ { }
+ { "#comment" = "redirect bardb to bazdb on localhost" }
+ { "bardb" = "host=localhost dbname=bazdb" }
+ { }
+ { "#comment" = "access to dest database will go with single user" }
+ { "forcedb" = "host=127.0.0.1 port=300 user=baz password=foo client_encoding=UNICODE datestyle=ISO connect_query='SELECT 1'" }
+ }
+ { "pgbouncer"
+ { "#comment" = ";; Administrative settings" }
+ { "logfile" = "/var/log/pgbouncer/pgbouncer.log" }
+ { "pidfile" = "/var/run/pgbouncer/pgbouncer.pid" }
+ { "#comment" = "ip address or * which means all ip-s" }
+ { "listen_addr" = "127.0.0.1" }
+ { "listen_port" = "6432" }
+ { "#comment" = "auth_file = /8.0/main/global/pg_auth" }
+ { "auth_file" = "/var/lib/pgsql/data/global/pg_auth" }
+ { "admin_users" = "postgres" }
+ { "server_reset_query" = "DISCARD ALL" }
+ }
+
\ No newline at end of file
--- /dev/null
+module Test_php =
+
+let conf = "
+safe_mode = Off
+[PHP]
+; Enable the PHP scripting language engine under Apache.
+engine = On
+
+; Enable compatibility mode with Zend Engine 1 (PHP 4.x)
+zend.ze1_compatibility_mode = Off
+ unserialize_callback_func=
+date.default_latitude = 31.7667
+
+[sqlite]
+sqlite.assoc_case = 0
+"
+
+
+test PHP.lns get conf =
+ { ".anon"
+ {}
+ { "safe_mode" = "Off" } }
+ { "PHP"
+ { "#comment" = "Enable the PHP scripting language engine under Apache." }
+ { "engine" = "On" }
+ {}
+ { "#comment" = "Enable compatibility mode with Zend Engine 1 (PHP 4.x)" }
+ { "zend.ze1_compatibility_mode" = "Off" }
+ { "unserialize_callback_func" }
+ { "date.default_latitude" = "31.7667" }
+ {} }
+ { "sqlite"
+ { "sqlite.assoc_case" = "0" } }
+
+test PHP.lns put conf after rm "noop" = conf
+
+
+test PHP.lns get ";\n" = { ".anon" {} }
+
+(* Section titles can have spaces *)
+test PHP.lns get "[mail function]\n" = { "mail function" }
+
+(* Keys can be lower and upper case *)
+test PHP.lns get "[fake]
+SMTP = localhost
+mixed_KEY = 25
+" =
+ { "fake"
+ { "SMTP" = "localhost" }
+ { "mixed_KEY" = "25" } }
+
+(* Ticket #243 *)
+test PHP.lns get "session.save_path = \"3;/var/lib/php5\"\n" =
+ { ".anon"
+ { "session.save_path" = "3;/var/lib/php5" } }
+
+(* GH issue #35
+ php-fpm syntax *)
+test PHP.lns get "php_admin_flag[log_errors] = on\n" =
+ { ".anon"
+ { "php_admin_flag[log_errors]" = "on" } }
--- /dev/null
+module Test_phpvars =
+
+let conf = "<?pHp
+/**/
+/**
+ * Multi line comment
+ *
+ */
+
+/* One line comment */
+
+// Inline comment
+# Bash-style comment
+global $version;
+global $config_version;
+$config_version = '1.4.0';
+$theme=array();
+
+$theme[0]['NAME'] = 'Default'; // end-of line comment
+$theme[0][\"PATH\"] = SM_PATH . 'themes/default_theme.php';
+$theme[0]['XPATH'] = '/some//x/path' ;
+define ('MYVAR', ROOT . 'some value'); # end-of line comment
+include_once( ROOT . \"/path/to/conf\" );
+include( ROOT . \"/path/to/conf\" );
+@include SM_PATH . 'config/config_local.php';
+class config {
+ var $tmppath = \"/tmp\";
+ var $offline = 1;
+}
+?>
+"
+
+test Phpvars.lns get conf =
+ { }
+ { "#mcomment"
+ { "1" = "*" }
+ { "2" = "* Multi line comment" }
+ { "3" = "*" }
+ }
+ { }
+ { "#mcomment"
+ { "1" = "One line comment" }
+ }
+ { }
+ { "#comment" = "Inline comment" }
+ { "#comment" = "Bash-style comment" }
+ { "global" = "version" }
+ { "global" = "config_version" }
+ { "$config_version" = "'1.4.0'" }
+ { "$theme" = "array()" }
+ { }
+ { "$theme" = "'Default'"
+ { "@arraykey" = "[0]['NAME']" }
+ { "#comment" = "end-of line comment" } }
+ { "$theme" = "SM_PATH . 'themes/default_theme.php'"
+ { "@arraykey" = "[0][\"PATH\"]" }
+ }
+ { "$theme" = "'/some//x/path'"
+ { "@arraykey" = "[0]['XPATH']" }
+ }
+ { "define" = "MYVAR"
+ { "value" = "ROOT . 'some value'" }
+ { "#comment" = "end-of line comment" } }
+ { "include_once" = "ROOT . \"/path/to/conf\"" }
+ { "include" = "ROOT . \"/path/to/conf\"" }
+ { "@include" = "SM_PATH . 'config/config_local.php'" }
+ { "config"
+ { }
+ {"$tmppath" = "\"/tmp\""}
+ {"$offline" = "1"}
+ }
+ { }
--- /dev/null
+(* Tests for the Postfix Access module *)
+
+module Test_postfix_access =
+
+ let three_entries = "127.0.0.1 DISCARD You totally suck
+ Really
+#ok no more comments
+user@ REJECT
+"
+
+ test Postfix_access.record get "127.0.0.1 REJECT\n" =
+ { "1" { "pattern" = "127.0.0.1" }
+ { "action" = "REJECT" } }
+
+ test Postfix_access.lns get three_entries =
+ { "1" { "pattern" = "127.0.0.1" }
+ { "action" = "DISCARD" }
+ { "parameters" = "You totally suck\n Really" } }
+ {"#comment" = "ok no more comments" }
+ { "2" { "pattern" = "user@" }
+ { "action" = "REJECT" } }
+
+ test Postfix_access.record put "127.0.0.1 OK\n" after
+ set "/1/action" "REJECT"
+ = "127.0.0.1 REJECT\n"
+
+ test Postfix_access.lns put three_entries after
+ set "/2/parameters" "Rejected you loser" ;
+ rm "/1/parameters"
+ = "127.0.0.1 DISCARD
+#ok no more comments
+user@ REJECT Rejected you loser
+"
+
+(* Deleting the 'action' node violates the schema; each postfix access *)
+(* entry must have one *)
+ test Postfix_access.lns put three_entries after
+ rm "/1/action"
+ = *
+
+ (* Make sure blank lines get through *)
+ test Postfix_access.lns get "127.0.0.1\tREJECT \n \n\n
+user@*\tOK\tI 'll let you in \n\tseriously\n" =
+ { "1" { "pattern" = "127.0.0.1" }
+ { "action" = "REJECT" } }
+ {} {} {}
+ { "2" { "pattern" = "user@*" }
+ { "action" = "OK" }
+ { "parameters" = "I 'll let you in \n\tseriously" } }
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+module Test_postfix_main =
+
+let conf = "# main.cf
+myorigin = /etc/mailname
+
+smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
+mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
+relayhost =
+import_environment =
+ MAIL_CONFIG MAIL_DEBUG MAIL_LOGTAG TZ LANG=C \n KRB5CCNAME=FILE:${queue_directory}/kerberos/krb5_ccache\n"
+
+test Postfix_Main.lns get conf =
+ { "#comment" = "main.cf" }
+ { "myorigin" = "/etc/mailname" }
+ {}
+ { "smtpd_banner" = "$myhostname ESMTP $mail_name (Ubuntu)" }
+ { "mynetworks" = "127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128" }
+ { "relayhost" }
+ { "import_environment" = "MAIL_CONFIG MAIL_DEBUG MAIL_LOGTAG TZ LANG=C \n KRB5CCNAME=FILE:${queue_directory}/kerberos/krb5_ccache" }
+
+test Postfix_Main.lns get "debugger_command =
+\t PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin
+\t ddd $daemon_directory/$process_name $process_id & sleep 5\n"
+ =
+ { "debugger_command" = "PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin
+ ddd $daemon_directory/$process_name $process_id & sleep 5" }
--- /dev/null
+module Test_postfix_master =
+
+let conf = "# master.cf
+smtp inet n - - 10? - smtpd
+maildrop unix - n n - - pipe
+ flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient}
+"
+
+test Postfix_Master.lns get conf =
+ { "#comment" = "master.cf" }
+ { "smtp"
+ { "type" = "inet" }
+ { "private" = "n" }
+ { "unprivileged" = "-" }
+ { "chroot" = "-" }
+ { "wakeup" = "10?" }
+ { "limit" = "-" }
+ { "command" = "smtpd" } }
+ { "maildrop"
+ { "type" = "unix" }
+ { "private" = "-" }
+ { "unprivileged" = "n" }
+ { "chroot" = "n" }
+ { "wakeup" = "-" }
+ { "limit" = "-" }
+ { "command" = "pipe\n flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient}" } }
+
+(* fixes bug #69 : accept double quotes in arguments *)
+let conf2 = "# The Cyrus deliver program has changed incompatibly, multiple times.
+cyrus unix - n n - - pipe
+ flags=R user=cyrus argv=/usr/sbin/cyrdeliver -e -m \"${extension}\" ${user}
+"
+
+test Postfix_Master.lns get conf2 =
+ { "#comment" = "The Cyrus deliver program has changed incompatibly, multiple times." }
+ { "cyrus"
+ { "type" = "unix" }
+ { "private" = "-" }
+ { "unprivileged" = "n" }
+ { "chroot" = "n" }
+ { "wakeup" = "-" }
+ { "limit" = "-" }
+ { "command" = "pipe\n flags=R user=cyrus argv=/usr/sbin/cyrdeliver -e -m \"${extension}\" ${user}" }
+ }
+
+(* accept commas in arguments *)
+let conf3 = "# master.cf
+submission inet n - n - - smtpd
+ -o smtpd_client_restrictions=permit_sasl_authenticated,reject
+"
+
+test Postfix_Master.lns get conf3 =
+ { "#comment" = "master.cf" }
+ { "submission"
+ { "type" = "inet" }
+ { "private" = "n" }
+ { "unprivileged" = "-" }
+ { "chroot" = "n" }
+ { "wakeup" = "-" }
+ { "limit" = "-" }
+ { "command" = "smtpd\n -o smtpd_client_restrictions=permit_sasl_authenticated,reject" } }
+
+(* A colon is allowed in the service name *)
+let conf4 = "127.0.0.1:10060 inet n n n - 0 spawn
+ user=nobody argv=/usr/sbin/hapolicy -l --default=DEFER
+"
+
+test Postfix_Master.lns get conf4 =
+ { "127.0.0.1:10060"
+ { "type" = "inet" }
+ { "private" = "n" }
+ { "unprivileged" = "n" }
+ { "chroot" = "n" }
+ { "wakeup" = "-" }
+ { "limit" = "0" }
+ { "command" = "spawn
+ user=nobody argv=/usr/sbin/hapolicy -l --default=DEFER" }
+ }
+
+
+(* Spaces are allowed after the first word of the command *)
+let conf5 = "sympa unix - n n - - pipe \n flags=R user=sympa argv=/home/sympa/bin/queue ${recipient}
+"
+
+test Postfix_Master.lns get conf5 =
+ { "sympa"
+ { "type" = "unix" }
+ { "private" = "-" }
+ { "unprivileged" = "n" }
+ { "chroot" = "n" }
+ { "wakeup" = "-" }
+ { "limit" = "-" }
+ { "command" = "pipe \n flags=R user=sympa argv=/home/sympa/bin/queue ${recipient}" }
+ }
+
+(* The at sign (@) is allowed in the command *)
+let conf6 = "sympafamilypfs unix - n n - - pipe
+ flags=R user=sympa argv=/home/sympa/bin/familyqueue ${user}@domain.net pfs
+"
+test Postfix_Master.lns get conf6 =
+ { "sympafamilypfs"
+ { "type" = "unix" }
+ { "private" = "-" }
+ { "unprivileged" = "n" }
+ { "chroot" = "n" }
+ { "wakeup" = "-" }
+ { "limit" = "-" }
+ { "command" = "pipe
+ flags=R user=sympa argv=/home/sympa/bin/familyqueue ${user}@domain.net pfs" }
+ }
+
+(* Ticket #345 *)
+let conf7 = "# master.cf
+submission inet n - n - - smtpd
+ -o mynetworks=127.0.0.1/8,[::1]
+"
+
+test Postfix_Master.lns get conf7 =
+ { "#comment" = "master.cf" }
+ { "submission"
+ { "type" = "inet" }
+ { "private" = "n" }
+ { "unprivileged" = "-" }
+ { "chroot" = "n" }
+ { "wakeup" = "-" }
+ { "limit" = "-" }
+ { "command" = "smtpd\n -o mynetworks=127.0.0.1/8,[::1]" } }
+
+(* Ticket #635 *)
+let conf8 = "postlog unix-dgram n - n - 1 postlogd\n"
+
+test Postfix_Master.lns get conf8 =
+ { "postlog"
+ { "type" = "unix-dgram" }
+ { "private" = "n" }
+ { "unprivileged" = "-" }
+ { "chroot" = "n" }
+ { "wakeup" = "-" }
+ { "limit" = "1" }
+ { "command" = "postlogd" } }
+
--- /dev/null
+(*
+Module: Test_Postfix_Passwordmap
+ Provides unit tests and examples for the <Postfix_Passwordmap> lens.
+*)
+
+module Test_Postfix_Passwordmap =
+
+(* View: conf *)
+let conf = "# comment
+* username:password
+[mail.isp.example] username:password
+[mail.isp.example]:submission username:password
+[mail.isp.example]:587 username:password
+mail.isp.example username:password
+user@mail.isp.example username:
+mail.isp.example
+ username2:password2
+"
+
+(* Test: Postfix_Passwordmap.lns *)
+test Postfix_Passwordmap.lns get conf =
+ { "#comment" = "comment" }
+ { "pattern" = "*"
+ { "username" = "username" }
+ { "password" = "password" } }
+ { "pattern" = "[mail.isp.example]"
+ { "username" = "username" }
+ { "password" = "password" } }
+ { "pattern" = "[mail.isp.example]:submission"
+ { "username" = "username" }
+ { "password" = "password" } }
+ { "pattern" = "[mail.isp.example]:587"
+ { "username" = "username" }
+ { "password" = "password" } }
+ { "pattern" = "mail.isp.example"
+ { "username" = "username" }
+ { "password" = "password" } }
+ { "pattern" = "user@mail.isp.example"
+ { "username" = "username" }
+ { "password" } }
+ { "pattern" = "mail.isp.example"
+ { "username" = "username2" }
+ { "password" = "password2" } }
--- /dev/null
+(*
+Module: Test_Postfix_Sasl_Smtpd
+  Provides unit tests and examples for the <Postfix_sasl_smtpd> lens.
+*)
+
+module Test_Postfix_Sasl_Smtpd =
+
+let conf = "pwcheck_method: auxprop saslauthd
+auxprop_plugin: plesk
+saslauthd_path: /private/plesk_saslauthd
+mech_list: CRAM-MD5 PLAIN LOGIN
+sql_engine: intentionally disabled
+log_level: 4
+"
+
+test Postfix_sasl_smtpd.lns get conf =
+ { "pwcheck_method" = "auxprop saslauthd" }
+ { "auxprop_plugin" = "plesk" }
+ { "saslauthd_path" = "/private/plesk_saslauthd" }
+ { "mech_list" = "CRAM-MD5 PLAIN LOGIN" }
+ { "sql_engine" = "intentionally disabled" }
+ { "log_level" = "4" }
--- /dev/null
+(*
+Module: Test_Postfix_Transport
+ Provides unit tests and examples for the <Postfix_Transport> lens.
+*)
+
+module Test_Postfix_Transport =
+
+(* Variable: conf *)
+let conf = "# a comment
+the.backed-up.domain.tld relay:[their.mail.host.tld]
+.my.domain :
+* smtp:outbound-relay.my.domain
+example.com uucp:example
+example.com slow:
+example.com :[gateway.example.com]
+user.foo@example.com
+ smtp:bar.example:2025
+firstname_lastname@example.com discard:
+.example.com error:mail for *.example.com is not deliverable
+"
+
+(* Test: Postfix_Transport.lns *)
+test Postfix_Transport.lns get conf =
+ { "#comment" = "a comment" }
+ { "pattern" = "the.backed-up.domain.tld"
+ { "transport" = "relay" }
+ { "nexthop" = "[their.mail.host.tld]" } }
+ { "pattern" = ".my.domain"
+ { "transport" }
+ { "nexthop" } }
+ { "pattern" = "*"
+ { "transport" = "smtp" }
+ { "nexthop" = "outbound-relay.my.domain" } }
+ { "pattern" = "example.com"
+ { "transport" = "uucp" }
+ { "nexthop" = "example" } }
+ { "pattern" = "example.com"
+ { "transport" = "slow" }
+ { "nexthop" } }
+ { "pattern" = "example.com"
+ { "transport" }
+ { "nexthop" = "[gateway.example.com]" } }
+ { "pattern" = "user.foo@example.com"
+ { "transport" = "smtp" }
+ { "nexthop" = "bar.example:2025" } }
+ { "pattern" = "firstname_lastname@example.com"
+ { "transport" = "discard" }
+ { "nexthop" } }
+ { "pattern" = ".example.com"
+ { "transport" = "error" }
+ { "nexthop" = "mail for *.example.com is not deliverable" } }
+
+(* Test: Postfix_Transport.lns
+ Bug #303 *)
+test Postfix_Transport.lns get "user@example.com [12.34.56.78]:587\n" =
+ { "pattern" = "user@example.com"
+ { "host" = "[12.34.56.78]" }
+ { "port" = "587" } }
--- /dev/null
+(*
+Module: Test_Postfix_Virtual
+ Provides unit tests and examples for the <Postfix_Virtual> lens.
+*)
+
+module Test_Postfix_Virtual =
+
+(* Variable: conf *)
+let conf = "# a comment
+virtual-alias.domain anything
+postmaster@virtual-alias.domain postmaster
+user1@virtual-alias.domain address1
+user2@virtual-alias.domain
+ address2,
+ address3
+root robert.oot@domain.com
+@example.net root,postmaster
+postmaster mtaadmin+root=mta1
+some_user localuser
+"
+
+(* Test: Postfix_Virtual.lns *)
+test Postfix_Virtual.lns get conf =
+ { "#comment" = "a comment" }
+ { "pattern" = "virtual-alias.domain"
+ { "destination" = "anything" }
+ }
+ { "pattern" = "postmaster@virtual-alias.domain"
+ { "destination" = "postmaster" }
+ }
+ { "pattern" = "user1@virtual-alias.domain"
+ { "destination" = "address1" }
+ }
+ { "pattern" = "user2@virtual-alias.domain"
+ { "destination" = "address2" }
+ { "destination" = "address3" }
+ }
+ { "pattern" = "root"
+ { "destination" = "robert.oot@domain.com" }
+ }
+ { "pattern" = "@example.net"
+ { "destination" = "root" }
+ { "destination" = "postmaster" }
+ }
+ { "pattern" = "postmaster"
+ { "destination" = "mtaadmin+root=mta1" }
+ }
+ { "pattern" = "some_user"
+ { "destination" = "localuser" }
+ }
--- /dev/null
+(*
+Module: Test_Postgresql
+ Provides unit tests and examples for the <Postgresql> lens.
+*)
+
+module Test_Postgresql =
+
+(* "=" separator is optional *)
+let missing_equal = "fsync on\n"
+test Postgresql.lns get missing_equal =
+ { "fsync" = "on" }
+test Postgresql.lns put missing_equal after
+ set "fsync" "off" = "fsync off\n"
+
+(* extra whitespace is valid anywhere *)
+let extra_whitespace = " fsync = on # trailing comment \n"
+test Postgresql.lns get extra_whitespace =
+ { "fsync" = "on"
+ { "#comment" = "trailing comment" }
+ }
+test Postgresql.lns put extra_whitespace after
+ set "fsync" "off" = " fsync = off # trailing comment \n"
+
+(* no whitespace at all is also valid *)
+let no_whitespace = "fsync=on\n"
+test Postgresql.lns get no_whitespace =
+ { "fsync" = "on" }
+test Postgresql.lns put no_whitespace after
+ set "fsync" "off" = "fsync=off\n"
+
+(* Some settings specify a memory or time value. [...] Valid memory units are
+ kB (kilobytes), MB (megabytes), and GB (gigabytes); valid time units are
+ ms (milliseconds), s (seconds), min (minutes), h (hours), and d (days). *)
+let numeric_suffix_quotes = "shared_buffers = 24MB
+archive_timeout = 2min
+deadlock_timeout = '1s'
+"
+test Postgresql.lns get numeric_suffix_quotes =
+ { "shared_buffers" = "24MB" }
+ { "archive_timeout" = "2min" }
+ { "deadlock_timeout" = "1s" }
+test Postgresql.lns put numeric_suffix_quotes after
+ set "deadlock_timeout" "2s";
+ set "max_stack_depth" "2MB";
+ set "shared_buffers" "48MB" = "shared_buffers = 48MB
+archive_timeout = 2min
+deadlock_timeout = '2s'
+max_stack_depth = '2MB'
+"
+
+(* Floats and ints can be single-quoted or unquoted *)
+let float_quotes = "seq_page_cost = 2.0
+random_page_cost = '4.0'
+vacuum_freeze_min_age = 50000000
+vacuum_freeze_table_age = '150000000'
+wal_buffers = -1
+"
+test Postgresql.lns get float_quotes =
+ { "seq_page_cost" = "2.0" }
+ { "random_page_cost" = "4.0" }
+ { "vacuum_freeze_min_age" = "50000000" }
+ { "vacuum_freeze_table_age" = "150000000" }
+ { "wal_buffers" = "-1" }
+test Postgresql.lns put float_quotes after
+ set "seq_page_cost" "5.0";
+ set "vacuum_cost_limit" "200";
+ set "bgwriter_lru_multiplier" "2.0";
+ set "log_temp_files" "-1";
+ set "wal_buffers" "1" = "seq_page_cost = 5.0
+random_page_cost = '4.0'
+vacuum_freeze_min_age = 50000000
+vacuum_freeze_table_age = '150000000'
+wal_buffers = 1
+vacuum_cost_limit = '200'
+bgwriter_lru_multiplier = '2.0'
+log_temp_files = '-1'
+"
+
+(* Boolean values can be written as on, off, true, false, yes, no, 1, 0 (all
+ case-insensitive) or any unambiguous prefix of these. *)
+let bool_quotes = "log_connections = yes
+transform_null_equals = OFF
+sql_inheritance = 'on'
+synchronize_seqscans = 1
+standard_conforming_strings = fal
+"
+test Postgresql.lns get bool_quotes =
+ { "log_connections" = "yes" }
+ { "transform_null_equals" = "OFF" }
+ { "sql_inheritance" = "on" }
+ { "synchronize_seqscans" = "1" }
+ { "standard_conforming_strings" = "fal" }
+test Postgresql.lns put bool_quotes after
+ set "sql_inheritance" "off";
+ set "log_lock_waits" "off" = "log_connections = yes
+transform_null_equals = OFF
+sql_inheritance = 'off'
+synchronize_seqscans = 1
+standard_conforming_strings = fal
+log_lock_waits = 'off'
+"
+
+(* Strings must be single-quoted unless they contain no special characters *)
+let string_quotes = "listen_addresses = 'localhost'
+stats_temp_directory = pg_stat_tmp
+lc_messages = 'en_US.UTF-8'
+log_filename = log
+archive_command = 'tar \'quoted option\''
+search_path = '\"$user\",public'
+password_encryption = scram-sha-256
+"
+test Postgresql.lns get string_quotes =
+ { "listen_addresses" = "localhost" }
+ { "stats_temp_directory" = "pg_stat_tmp" }
+ { "lc_messages" = "en_US.UTF-8" }
+ { "log_filename" = "log" }
+ { "archive_command" = "tar \'quoted option\'" }
+ { "search_path" = "\"$user\",public" }
+ { "password_encryption" = "scram-sha-256" }
+test Postgresql.lns put string_quotes after
+ set "stats_temp_directory" "foo_bar";
+ set "log_filename" "postgresql-%Y-%m-%d_%H%M%S.log";
+ set "log_statement" "none" = "listen_addresses = 'localhost'
+stats_temp_directory = foo_bar
+lc_messages = 'en_US.UTF-8'
+log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
+archive_command = 'tar \'quoted option\''
+search_path = '\"$user\",public'
+password_encryption = scram-sha-256
+log_statement = 'none'
+"
+
+(* external files can be included more than once *)
+let include_keyword = "Include 'foo.conf'
+# can appear several times
+Include 'bar.conf'
+"
+test Postgresql.lns get include_keyword =
+ { "Include" = "foo.conf" }
+ { "#comment" = "can appear several times" }
+ { "Include" = "bar.conf" }
+
+(* Variable: conf
+ A full configuration file *)
+let conf = "data_directory = '/var/lib/postgresql/8.4/main' # use data in another directory
+hba_file = '/etc/postgresql/8.4/main/pg_hba.conf' # host-based authentication file
+ident_file = '/etc/postgresql/8.4/main/pg_ident.conf' # ident configuration file
+
+# If external_pid_file is not explicitly set, no extra PID file is written.
+external_pid_file = '/var/run/postgresql/8.4-main.pid' # write an extra PID file
+listen_addresses = 'localhost' # what IP address(es) to listen on;
+port = 5432 # (change requires restart)
+max_connections = 100 # (change requires restart)
+superuser_reserved_connections = 3 # (change requires restart)
+unix_socket_directory = '/var/run/postgresql' # (change requires restart)
+unix_socket_group = '' # (change requires restart)
+unix_socket_permissions = 0777 # begin with 0 to use octal notation
+ # (change requires restart)
+bonjour_name = '' # defaults to the computer name
+
+authentication_timeout = 1min # 1s-600s
+ssl = true # (change requires restart)
+ssl_ciphers = 'ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH' # allowed SSL ciphers
+ssl_renegotiation_limit = 512MB # amount of data between renegotiations
+password_encryption = on
+db_user_namespace = off
+
+search_path = '\"$user\",public' # schema names
+default_tablespace = '' # a tablespace name, '' uses the default
+temp_tablespaces = '' # a list of tablespace names, '' uses
+
+datestyle = 'iso, mdy'
+intervalstyle = 'postgres'
+timezone = unknown # actually, defaults to TZ environment
+"
+
+(* Test: Postgresql.lns *)
+test Postgresql.lns get conf =
+ { "data_directory" = "/var/lib/postgresql/8.4/main"
+ { "#comment" = "use data in another directory" }
+ }
+ { "hba_file" = "/etc/postgresql/8.4/main/pg_hba.conf"
+ { "#comment" = "host-based authentication file" }
+ }
+ { "ident_file" = "/etc/postgresql/8.4/main/pg_ident.conf"
+ { "#comment" = "ident configuration file" }
+ }
+ { }
+ { "#comment" = "If external_pid_file is not explicitly set, no extra PID file is written." }
+ { "external_pid_file" = "/var/run/postgresql/8.4-main.pid"
+ { "#comment" = "write an extra PID file" }
+ }
+ { "listen_addresses" = "localhost"
+ { "#comment" = "what IP address(es) to listen on;" }
+ }
+ { "port" = "5432"
+ { "#comment" = "(change requires restart)" }
+ }
+ { "max_connections" = "100"
+ { "#comment" = "(change requires restart)" }
+ }
+ { "superuser_reserved_connections" = "3"
+ { "#comment" = "(change requires restart)" }
+ }
+ { "unix_socket_directory" = "/var/run/postgresql"
+ { "#comment" = "(change requires restart)" }
+ }
+ { "unix_socket_group" = ""
+ { "#comment" = "(change requires restart)" }
+ }
+ { "unix_socket_permissions" = "0777"
+ { "#comment" = "begin with 0 to use octal notation" }
+ }
+ { "#comment" = "(change requires restart)" }
+ { "bonjour_name" = ""
+ { "#comment" = "defaults to the computer name" }
+ }
+ { }
+ { "authentication_timeout" = "1min"
+ { "#comment" = "1s-600s" }
+ }
+ { "ssl" = "true"
+ { "#comment" = "(change requires restart)" }
+ }
+ { "ssl_ciphers" = "ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH"
+ { "#comment" = "allowed SSL ciphers" }
+ }
+ { "ssl_renegotiation_limit" = "512MB"
+ { "#comment" = "amount of data between renegotiations" }
+ }
+ { "password_encryption" = "on" }
+ { "db_user_namespace" = "off" }
+ { }
+ { "search_path" = "\"$user\",public"
+ { "#comment" = "schema names" }
+ }
+ { "default_tablespace" = ""
+ { "#comment" = "a tablespace name, '' uses the default" }
+ }
+ { "temp_tablespaces" = ""
+ { "#comment" = "a list of tablespace names, '' uses" }
+ }
+ { }
+ { "datestyle" = "iso, mdy" }
+ { "intervalstyle" = "postgres" }
+ { "timezone" = "unknown"
+ { "#comment" = "actually, defaults to TZ environment" }
+ }
--- /dev/null
+(*
+Module: Test_properties
+  Provides unit tests and examples for the <Properties> lens.
+*)
+
+module Test_properties =
+
+let conf = "
+#
+# Test tomcat properties file
+#tomcat.commented.value=1
+ # config
+tomcat.port = 8080
+tomcat.application.name=testapp
+ tomcat.application.description=my test application
+property.with_underscore=works
+empty.property=
+empty.property.withtrailingspaces= \n! more comments
+key: value
+key2:value2
+key3 :value3
+key4:=value4
+key5\"=value5
+key6/c=value6
+
+long.description=this is a description that happens to span \
+ more than one line with a combination of tabs and \
+ spaces \ \nor not
+
+# comment break
+
+short.break = a\
+ b
+
+=empty_key
+ =empty_key
+
+cheeses
+
+spaces only
+multi spaces
+ indented spaces
+
+\= =A
+space and = equals
+space with \
+ multiline
+
+escaped\:colon=value
+escaped\=equals=value
+escaped\ space=value
+"
+
+(* Other tests that aren't supported yet
+overflow.description=\
+ just wanted to indent it
+*)
+
+let lns = Properties.lns
+
+test lns get conf =
+ { } { }
+ { "#comment" = "Test tomcat properties file" }
+ { "#comment" = "tomcat.commented.value=1" }
+ { "#comment" = "config" }
+ { "tomcat.port" = "8080" }
+ { "tomcat.application.name" = "testapp" }
+ { "tomcat.application.description" = "my test application" }
+ { "property.with_underscore" = "works" }
+ { "empty.property" }
+ { "empty.property.withtrailingspaces" }
+ { "!comment" = "more comments" }
+ { "key" = "value" }
+ { "key2" = "value2" }
+ { "key3" = "value3" }
+ { "key4" = "=value4" }
+ { "key5\"" = "value5" }
+ { "key6/c" = "value6" }
+ {}
+ { "long.description" = " < multi > "
+ { = "this is a description that happens to span " }
+ { = "more than one line with a combination of tabs and " }
+ { = "spaces " }
+ { = "or not" }
+ }
+ {}
+ { "#comment" = "comment break" }
+ {}
+ { "short.break" = " < multi > "
+ { = "a" }
+ { = "b" }
+ }
+ {}
+ { = "empty_key" }
+ { = "empty_key" }
+ {}
+ { "cheeses" }
+ {}
+ { "spaces" = "only" }
+ { "multi" = "spaces" }
+ { "indented" = "spaces" }
+ {}
+ { "\\=" = "A" }
+ { "space" = "and = equals" }
+ { "space" = " < multi > "
+ { = "with " }
+ { = "multiline" }
+ }
+ {}
+ { "escaped\:colon" = "value" }
+ { "escaped\=equals" = "value" }
+ { "escaped\ space" = "value" }
+test lns put conf after
+ set "tomcat.port" "99";
+ set "tomcat.application.host" "foo.network.com"
+ = "
+#
+# Test tomcat properties file
+#tomcat.commented.value=1
+ # config
+tomcat.port = 99
+tomcat.application.name=testapp
+ tomcat.application.description=my test application
+property.with_underscore=works
+empty.property=
+empty.property.withtrailingspaces= \n! more comments
+key: value
+key2:value2
+key3 :value3
+key4:=value4
+key5\"=value5
+key6/c=value6
+
+long.description=this is a description that happens to span \
+ more than one line with a combination of tabs and \
+ spaces \ \nor not
+
+# comment break
+
+short.break = a\
+ b
+
+=empty_key
+ =empty_key
+
+cheeses
+
+spaces only
+multi spaces
+ indented spaces
+
+\= =A
+space and = equals
+space with \
+ multiline
+
+escaped\:colon=value
+escaped\=equals=value
+escaped\ space=value
+tomcat.application.host=foo.network.com
+"
+
+(* GH issue #19: value on new line *)
+test lns get "k=\
+b\
+c\n" =
+ { "k" = " < multi > "
+ { } { = "b" } { = "c" } }
+
+test lns get "tomcat.util.scan.DefaultJarScanner.jarsToSkip=\
+bootstrap.jar,commons-daemon.jar,tomcat-juli.jar\n" =
+ { "tomcat.util.scan.DefaultJarScanner.jarsToSkip" = " < multi > "
+ { } { = "bootstrap.jar,commons-daemon.jar,tomcat-juli.jar" } }
+
+
+test lns get "# comment\r\na.b=val\r\nx=\r\n" =
+ { "#comment" = "comment" }
+ { "a.b" = "val" }
+ { "x" }
+
+test lns get "# \r\n! \r\n" = { } { }
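+
+(* Test: lns
+   Standalone "!" comment (added example, not part of the original suite):
+   mirrors the "! more comments" line in <conf> above *)
+test lns get "! bang comment\n" =
+  { "!comment" = "bang comment" }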
--- /dev/null
+(*
+Module: Test_Protocols
+ Provides unit tests and examples for the <Protocols> lens.
+*)
+
+module Test_Protocols =
+
+(* Variable: conf *)
+let conf = "# Internet (IP) protocols
+
+ip 0 IP # internet protocol, pseudo protocol number
+#hopopt 0 HOPOPT # IPv6 Hop-by-Hop Option [RFC1883]
+icmp 1 ICMP # internet control message protocol
+igmp 2 IGMP # Internet Group Management
+tp++ 39 TP++ # TP++ Transport Protocol
+a/n 107 A/N # Active Networks
+"
+
+(* Test: Protocols.lns *)
+test Protocols.lns get conf =
+ { "#comment" = "Internet (IP) protocols" }
+ { }
+ { "1"
+ { "protocol" = "ip" }
+ { "number" = "0" }
+ { "alias" = "IP" }
+ { "#comment" = "internet protocol, pseudo protocol number" }
+ }
+ { "#comment" = "hopopt 0 HOPOPT # IPv6 Hop-by-Hop Option [RFC1883]" }
+ { "2"
+ { "protocol" = "icmp" }
+ { "number" = "1" }
+ { "alias" = "ICMP" }
+ { "#comment" = "internet control message protocol" }
+ }
+ { "3"
+ { "protocol" = "igmp" }
+ { "number" = "2" }
+ { "alias" = "IGMP" }
+ { "#comment" = "Internet Group Management" }
+ }
+ { "4"
+ { "protocol" = "tp++" }
+ { "number" = "39" }
+ { "alias" = "TP++" }
+ { "#comment" = "TP++ Transport Protocol" }
+ }
+ { "5"
+ { "protocol" = "a/n" }
+ { "number" = "107" }
+ { "alias" = "A/N" }
+ { "#comment" = "Active Networks" }
+ }
--- /dev/null
+module Test_puppet =
+
+ let conf = "
+[main]
+logdir=/var/log/puppet
+
+ [puppetd]
+ server=misspiggy.network.com
+"
+
+ test Puppet.lns get conf =
+ {}
+ { "main"
+ { "logdir" = "/var/log/puppet" }
+ {} }
+ { "puppetd"
+ { "server" = "misspiggy.network.com" } }
+
+ test Puppet.lns put conf after
+ set "main/vardir" "/var/lib/puppet";
+ set "main/rundir" "/var/run/puppet"
+ = "
+[main]
+logdir=/var/log/puppet
+
+vardir=/var/lib/puppet
+rundir=/var/run/puppet
+ [puppetd]
+ server=misspiggy.network.com
+"
+
--- /dev/null
+(*
+Module: Test_Puppet_Auth
+ Provides unit tests and examples for the <Puppet_Auth> lens.
+*)
+
+module Test_Puppet_Auth =
+
+(* Variable: full *)
+let full = "path ~ ^/file_(metadata|content)/user_files/
+# Set environments
+environment production, development
+environment foo
+method find, search
+auth yes
+method save
+ allow /^(.+\.)?example.com$/
+ allow_ip 192.168.100.0/24 # Added in Puppet 3.0.0
+# This overrides the previous auth
+ authenticated any
+"
+
+(* Test: Puppet_Auth.lns *)
+test Puppet_Auth.lns get full =
+ { "path" = "^/file_(metadata|content)/user_files/" { "operator" = "~" }
+ { "#comment" = "Set environments" }
+ { "environment"
+ { "1" = "production" }
+ { "2" = "development" } }
+ { "environment"
+ { "3" = "foo" } }
+ { "method"
+ { "1" = "find" }
+ { "2" = "search" } }
+ { "auth" = "yes" }
+ { "method"
+ { "3" = "save" } }
+ { "allow"
+ { "1" = "/^(.+\.)?example.com$/" } }
+ { "allow_ip"
+ { "1" = "192.168.100.0/24" }
+ { "#comment" = "Added in Puppet 3.0.0" } }
+ { "#comment" = "This overrides the previous auth" }
+ { "auth" = "any" } }
--- /dev/null
+(*
+Module: Test_Puppetfile
+ Provides unit tests and examples for the <Puppetfile> lens.
+*)
+module Test_Puppetfile =
+
+(* Test: Puppetfile.lns *)
+test Puppetfile.lns get "forge \"https://forgeapi.puppetlabs.com\" # the default forge
+
+mod 'puppetlabs-razor'
+mod 'puppetlabs-ntp', \"0.0.3\"
+
+mod 'puppetlabs-apt',
+ :git => \"git://github.com/puppetlabs/puppetlabs-apt.git\"
+
+mod 'puppetlabs-stdlib',
+ :git => \"git://github.com/puppetlabs/puppetlabs-stdlib.git\"
+
+mod 'puppetlabs-apache', '0.6.0',
+ :github_tarball => 'puppetlabs/puppetlabs-apache'
+
+metadata # we want metadata\n" =
+ { "forge" = "https://forgeapi.puppetlabs.com"
+ { "#comment" = "the default forge" } }
+ { }
+ { "1" = "puppetlabs-razor" }
+ { "2" = "puppetlabs-ntp"
+ { "@version" = "0.0.3" }
+ }
+ { }
+ { "3" = "puppetlabs-apt"
+ { "git" = "git://github.com/puppetlabs/puppetlabs-apt.git" }
+ }
+ { }
+ { "4" = "puppetlabs-stdlib"
+ { "git" = "git://github.com/puppetlabs/puppetlabs-stdlib.git" }
+ }
+ { }
+ { "5" = "puppetlabs-apache"
+ { "@version" = "0.6.0" }
+ { "github_tarball" = "puppetlabs/puppetlabs-apache" }
+ }
+ { }
+ { "metadata" { "#comment" = "we want metadata" } }
+
+(* Test: Puppetfile.lns
+ Complex version conditions *)
+test Puppetfile.lns get "mod 'puppetlabs/stdlib', '< 5.0.0'
+mod 'theforeman/concat_native', '>= 1.3.0 < 1.4.0'
+mod 'herculesteam/augeasproviders', '2.1.x'\n" =
+ { "1" = "puppetlabs/stdlib"
+ { "@version" = "< 5.0.0" }
+ }
+ { "2" = "theforeman/concat_native"
+ { "@version" = ">= 1.3.0 < 1.4.0" }
+ }
+ { "3" = "herculesteam/augeasproviders"
+ { "@version" = "2.1.x" }
+ }
+
+(* Test: Puppetfile.lns
+ Owner is not mandatory if git is given *)
+test Puppetfile.lns get "mod 'stdlib',
+ :git => \"git://github.com/puppetlabs/puppetlabs-stdlib.git\"\n" =
+ { "1" = "stdlib"
+ { "git" = "git://github.com/puppetlabs/puppetlabs-stdlib.git" } }
+
+
+(* Issue #427 *)
+test Puppetfile.lns get "mod 'puppetlabs/apache', :latest\n" =
+ { "1" = "puppetlabs/apache"
+ { "latest" } }
+
+test Puppetfile.lns get "mod 'data',
+ :git => 'ssh://git@stash.example.com/bp/puppet-hiera.git',
+ :branch => :control_branch,
+ :default_branch => 'development',
+ :install_path => '.'\n" =
+ { "1" = "data"
+ { "git" = "ssh://git@stash.example.com/bp/puppet-hiera.git" }
+ { "branch" = ":control_branch" }
+ { "default_branch" = "development" }
+ { "install_path" = "." } }
+
+(* Comment: after the module name, without a comma
+   Unsupported: this conflicts with the comma comment tree below *)
+test Puppetfile.lns get "mod 'data' # eol comment\n" = *
+
+(* Comment: after first comma *)
+test Puppetfile.lns get "mod 'data', # eol comment
+ # and another
+ '1.2.3'\n" =
+ { "1" = "data"
+ { "#comment" = "eol comment" }
+ { "#comment" = "and another" }
+ { "@version" = "1.2.3" } }
+
+(* Comment: after version
+   Known limitation: two \n are needed *)
+test Puppetfile.lns get "mod 'data', '1.2.3' # eol comment\n" = *
+test Puppetfile.lns get "mod 'data', '1.2.3' # eol comment\n\n" =
+ { "1" = "data"
+ { "@version" = "1.2.3" { "#comment" = "eol comment" } } }
+
+(* Comment: eol after version comma *)
+test Puppetfile.lns get "mod 'data', '1.2.3', # a comment
+ :local => true\n" =
+ { "1" = "data"
+ { "@version" = "1.2.3" }
+ { "#comment" = "a comment" }
+ { "local" = "true" } }
+
+(* Comment: after version comma with newline *)
+test Puppetfile.lns get "mod 'data', '1.2.3',
+ # a comment
+ :local => true\n" =
+ { "1" = "data"
+ { "@version" = "1.2.3" }
+ { "#comment" = "a comment" }
+ { "local" = "true" } }
+
+(* Comment: eol before opts, without version *)
+test Puppetfile.lns get "mod 'data', # a comment
+ # :ref => 'abcdef',
+ :local => true\n" =
+ { "1" = "data"
+ { "#comment" = "a comment" }
+ { "#comment" = ":ref => 'abcdef'," }
+ { "local" = "true" } }
+
+(* Comment: after opt comma *)
+test Puppetfile.lns get "mod 'data', '1.2.3',
+ :ref => 'abcdef', # eol comment
+ :local => true\n" =
+ { "1" = "data"
+ { "@version" = "1.2.3" }
+ { "ref" = "abcdef" }
+ { "#comment" = "eol comment" }
+ { "local" = "true" } }
+
+(* Comment: in opts *)
+test Puppetfile.lns get "mod 'data', '1.2.3',
+ :ref => 'abcdef',
+ # a comment
+ :local => true\n" =
+ { "1" = "data"
+ { "@version" = "1.2.3" }
+ { "ref" = "abcdef" }
+ { "#comment" = "a comment" }
+ { "local" = "true" } }
+
+(* Comment: after last opt *)
+test Puppetfile.lns get "mod 'data', '1.2.3',
+ :local => true # eol comment\n\n" =
+ { "1" = "data"
+ { "@version" = "1.2.3" }
+ { "local" = "true" }
+ { "#comment" = "eol comment" } }
--- /dev/null
+(* Tests for the PuppetFileserver module *)
+
+module Test_puppetfileserver =
+
+let fileserver = "# This is a comment
+
+[mount1]
+ # Mount1 options
+ path /etc/puppet/files/%h
+ allow host.domain1.com
+ allow *.domain2.com
+ deny badhost.domain2.com
+[mount2]
+ allow *
+ deny *.evil.example.com
+ deny badhost.domain2.com
+[mount3]
+allow * # Puppet #6026: same line comment
+# And trailing whitespace
+allow * \n"
+
+test PuppetFileserver.lns get fileserver =
+ { "#comment" = "This is a comment" }
+ { }
+ { "mount1"
+ { "#comment" = "Mount1 options" }
+ { "path" = "/etc/puppet/files/%h" }
+ { "allow" = "host.domain1.com" }
+ { "allow" = "*.domain2.com" }
+ { "deny" = "badhost.domain2.com" }
+ }
+ { "mount2"
+ { "allow" = "*" }
+ { "deny" = "*.evil.example.com" }
+ { "deny" = "badhost.domain2.com" }
+ }
+ { "mount3"
+ { "allow" = "*"
+ { "#comment" = "Puppet #6026: same line comment" } }
+ { "#comment" = "And trailing whitespace" }
+ { "allow" = "*" }
+ }
--- /dev/null
+module Test_Pylonspaste =
+ let pylons_conf = "# baruwa - Pylons configuration
+# The %(here)s variable will be replaced with the parent directory of this file
+[uwsgi]
+socket = /var/run/baruwa/baruwa.sock
+processes = 5
+uid = baruwa
+daemonize = /var/log/uwsgi/uwsgi-baruwa.log
+
+[server:main]
+use = egg:Paste#http
+host = 0.0.0.0
+port = 5000
+
+[app:main]
+use = egg:baruwa
+full_stack = true
+static_files = false
+set debug = false
+
+[identifiers]
+plugins =
+ form;browser
+ auth_tkt
+
+[authenticators]
+plugins =
+ sa_auth
+ baruwa_pop3_auth
+ baruwa_imap_auth
+ baruwa_smtp_auth
+ baruwa_ldap_auth
+ baruwa_radius_auth
+"
+
+test Pylonspaste.lns get pylons_conf =
+ { "#comment" = "baruwa - Pylons configuration" }
+ { "#comment" = "The %(here)s variable will be replaced with the parent directory of this file" }
+ { "uwsgi"
+ { "socket" = "/var/run/baruwa/baruwa.sock" }
+ { "processes" = "5" }
+ { "uid" = "baruwa" }
+ { "daemonize" = "/var/log/uwsgi/uwsgi-baruwa.log" }
+ { }
+ }
+ { "server:main"
+ { "use" = "egg:Paste#http" }
+ { "host" = "0.0.0.0" }
+ { "port" = "5000" }
+ { }
+ }
+ { "app:main"
+ { "use" = "egg:baruwa" }
+ { "full_stack" = "true" }
+ { "static_files" = "false" }
+ { "debug" = "false" }
+ { }
+ }
+ { "identifiers"
+ { "plugins"
+ { "1" = "form;browser" }
+ { "2" = "auth_tkt" }
+ }
+ {}
+ }
+ { "authenticators"
+ { "plugins"
+ { "1" = "sa_auth" }
+ { "2" = "baruwa_pop3_auth" }
+ { "3" = "baruwa_imap_auth" }
+ { "4" = "baruwa_smtp_auth" }
+ { "5" = "baruwa_ldap_auth" }
+ { "6" = "baruwa_radius_auth" }
+ }
+ }
--- /dev/null
+module Test_pythonpaste =
+
+ let conf = "
+#blah blah
+[main]
+pipeline = hello
+
+[composite:main]
+use = egg:Paste#urlmap
+/v2.0 = public_api
+/: public_version_api
+"
+
+ test PythonPaste.lns get conf =
+ { }
+ { "#comment" = "blah blah" }
+ { "main"
+ { "pipeline" = "hello" }
+ { }
+ }
+ { "composite:main"
+ { "use" = "egg:Paste#urlmap" }
+ { "1" = "/v2.0 = public_api" }
+ { "2" = "/: public_version_api" }
+ }
+
+
+ test PythonPaste.lns put conf after
+ set "main/pipeline" "goodbye";
+ set "composite:main/3" "/v3: a_new_api_version"
+ = "
+#blah blah
+[main]
+pipeline = goodbye
+
+[composite:main]
+use = egg:Paste#urlmap
+/v2.0 = public_api
+/: public_version_api
+/v3: a_new_api_version
+"
+
+ (* Paste can define global config in DEFAULT, then override with "set" in sections, RHBZ#1175545 *)
+ test PythonPaste.lns get "[DEFAULT]
+log_name = swift
+log_facility = LOG_LOCAL1
+
+[app:proxy-server]
+use = egg:swift#proxy
+set log_name = proxy-server\n" =
+ { "DEFAULT"
+ { "log_name" = "swift" }
+ { "log_facility" = "LOG_LOCAL1" }
+ { } }
+ { "app:proxy-server"
+ { "use" = "egg:swift#proxy" }
+ { "log_name" = "proxy-server"
+ { "@set" } } }
--- /dev/null
+(*
+Module: Test_Qpid
+ Provides unit tests and examples for the <Qpid> lens.
+*)
+
+module Test_Qpid =
+
+(* Variable: qpidd *)
+let qpidd = "# Configuration file for qpidd. Entries are of the form:
+# name=value
+
+# (Note: no spaces on either side of '='). Using default settings:
+# \"qpidd --help\" or \"man qpidd\" for more details.
+cluster-mechanism=ANONYMOUS
+auth=no
+max-connections=22000
+syslog-name=qpidd1
+"
+
+(* Test: Qpid.lns *)
+test Qpid.lns get qpidd =
+ { "#comment" = "Configuration file for qpidd. Entries are of the form:" }
+ { "#comment" = "name=value" }
+ { }
+ { "#comment" = "(Note: no spaces on either side of '='). Using default settings:" }
+ { "#comment" = "\"qpidd --help\" or \"man qpidd\" for more details." }
+ { "cluster-mechanism" = "ANONYMOUS" }
+ { "auth" = "no" }
+ { "max-connections" = "22000" }
+ { "syslog-name" = "qpidd1" }
+
+(* Variable: qpidc *)
+let qpidc = "# Configuration file for the qpid c++ client library. Entries are of
+# the form:
+# name=value
+
+ssl-cert-db=/root/certs/server_db
+ssl-port=5674
+"
+
+(* Test: Qpid.lns *)
+test Qpid.lns get qpidc =
+ { "#comment" = "Configuration file for the qpid c++ client library. Entries are of" }
+ { "#comment" = "the form:" }
+ { "#comment" = "name=value" }
+ { }
+ { "ssl-cert-db" = "/root/certs/server_db" }
+ { "ssl-port" = "5674" }
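+
+(* Test: Qpid.lns
+   Round-trip check (added example, not part of the original suite): assumes
+   the lens writes plain key=value lines with no surrounding spaces *)
+test Qpid.lns put "auth=no\n" after
+  set "auth" "yes" = "auth=yes\n"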
--- /dev/null
+(*
+Module: Test_Quote
+ Provides unit tests and examples for the <Quote> lens.
+*)
+
+module Test_Quote =
+
+(* View: double *)
+let double = [ label "double" . Quote.double ]
+
+(* Test: double *)
+test double get "\" this is a test\"" =
+ { "double" = " this is a test" }
+
+(* View: double_opt *)
+let double_opt = [ label "double_opt" . Quote.double_opt ]
+
+(* Test: double_opt *)
+test double_opt get "\"this is a test\"" =
+ { "double_opt" = "this is a test" }
+
+(* Test: double_opt *)
+test double_opt get "this is a test" =
+ { "double_opt" = "this is a test" }
+
+(* Test: double_opt
+ Value cannot start with a space *)
+test double_opt get " this is a test" = *
+
+(* View: single *)
+let single = [ label "single" . Quote.single ]
+
+(* Test: single *)
+test single get "' this is a test'" =
+ { "single" = " this is a test" }
+
+(* View: single_opt *)
+let single_opt = [ label "single_opt" . Quote.single_opt ]
+
+(* Test: single_opt *)
+test single_opt get "'this is a test'" =
+ { "single_opt" = "this is a test" }
+
+(* Test: single_opt *)
+test single_opt get "this is a test" =
+ { "single_opt" = "this is a test" }
+
+(* Test: single_opt
+ Value cannot start with a space *)
+test single_opt get " this is a test" = *
+
+(* View: any *)
+let any = [ label "any" . Quote.any ]
+
+(* Test: any *)
+test any get "\" this is a test\"" =
+ { "any" = " this is a test" }
+
+(* Test: any *)
+test any get "' this is a test'" =
+ { "any" = " this is a test" }
+
+(* View: any_opt *)
+let any_opt = [ label "any_opt" . Quote.any_opt ]
+
+(* Test: any_opt *)
+test any_opt get "\"this is a test\"" =
+ { "any_opt" = "this is a test" }
+
+(* Test: any_opt *)
+test any_opt get "'this is a test'" =
+ { "any_opt" = "this is a test" }
+
+(* Test: any_opt *)
+test any_opt get "this is a test" =
+ { "any_opt" = "this is a test" }
+
+(* Test: any_opt
+ Value cannot start with a space *)
+test any_opt get " this is a test" = *
+
+(* View: double_opt_allow_spc *)
+let double_opt_allow_spc =
+ let body = store /[^\n"]+/ in
+ [ label "double" . Quote.do_dquote_opt body ]
+
+(* Test: double_opt_allow_spc *)
+test double_opt_allow_spc get " test with spaces " =
+ { "double" = " test with spaces " }
+
+(* Group: quote_spaces *)
+
+(* View: quote_spaces *)
+let quote_spaces =
+ Quote.quote_spaces (label "spc")
+
+(* Test: quote_spaces
+ Unquoted value *)
+test quote_spaces get "this" =
+ { "spc" = "this" }
+
+(* Test: quote_spaces
+ double quoted value *)
+test quote_spaces get "\"this\"" =
+ { "spc" = "this" }
+
+(* Test: quote_spaces
+ single quoted value *)
+test quote_spaces get "'this'" =
+ { "spc" = "this" }
+
+(* Test: quote_spaces
+ unquoted value with spaces *)
+test quote_spaces get "this that those" = *
+
+(* Test: quote_spaces
+ double quoted value with spaces *)
+test quote_spaces get "\"this that those\"" =
+ { "spc" = "this that those" }
+
+(* Test: quote_spaces
+ single quoted value with spaces *)
+test quote_spaces get "'this that those'" =
+ { "spc" = "this that those" }
+
+(* Test: quote_spaces
+ remove spaces from double-quoted value *)
+test quote_spaces put "\"this that those\""
+ after set "spc" "thisthat" =
+ "\"thisthat\""
+
+(* Test: quote_spaces
+ remove spaces from single-quoted value *)
+test quote_spaces put "'this that those'"
+ after set "spc" "thisthat" =
+ "'thisthat'"
+
+(* Test: quote_spaces
+ add spaces to unquoted value *)
+test quote_spaces put "this"
+ after set "spc" "this that those" =
+ "\"this that those\""
+
+(* Test: quote_spaces
+ add spaces to double-quoted value *)
+test quote_spaces put "\"this\""
+ after set "spc" "this that those" =
+ "\"this that those\""
+
+(* Test: quote_spaces
+ add spaces to single-quoted value *)
+test quote_spaces put "'this'"
+ after set "spc" "this that those" =
+ "'this that those'"
+
+(* Group: dquote_spaces *)
+
+(* View: dquote_spaces *)
+let dquote_spaces =
+ Quote.dquote_spaces (label "spc")
+
+(* Test: dquote_spaces
+ Unquoted value *)
+test dquote_spaces get "this" =
+ { "spc" = "this" }
+
+(* Test: dquote_spaces
+ double quoted value *)
+test dquote_spaces get "\"this\"" =
+ { "spc" = "this" }
+
+(* Test: dquote_spaces
+ single quoted value *)
+test dquote_spaces get "'this'" =
+ { "spc" = "'this'" }
+
+(* Test: dquote_spaces
+ unquoted value with spaces *)
+test dquote_spaces get "this that those" = *
+
+(* Test: dquote_spaces
+ double quoted value with spaces *)
+test dquote_spaces get "\"this that those\"" =
+ { "spc" = "this that those" }
+
+(* Test: dquote_spaces
+ single quoted value with spaces *)
+test dquote_spaces get "'this that those'" = *
+
+(* Test: dquote_spaces
+ remove spaces from double-quoted value *)
+test dquote_spaces put "\"this that those\""
+ after set "spc" "thisthat" =
+ "\"thisthat\""
+
+(* Test: dquote_spaces
+ add spaces to unquoted value *)
+test dquote_spaces put "this"
+ after set "spc" "this that those" =
+ "\"this that those\""
+
+(* Test: dquote_spaces
+ add spaces to double-quoted value *)
+test dquote_spaces put "\"this\""
+ after set "spc" "this that those" =
+ "\"this that those\""
+
+(* Test: dquote_spaces
+ add spaces to single-quoted value *)
+test dquote_spaces put "'this'"
+ after set "spc" "this that those" =
+ "\"this that those\""
+
+(* Group: squote_spaces *)
+
+(* View: squote_spaces *)
+let squote_spaces =
+ Quote.squote_spaces (label "spc")
+
+(* Test: squote_spaces
+ Unquoted value *)
+test squote_spaces get "this" =
+ { "spc" = "this" }
+
+(* Test: squote_spaces
+ double quoted value *)
+test squote_spaces get "\"this\"" =
+ { "spc" = "\"this\"" }
+
+(* Test: squote_spaces
+ single quoted value *)
+test squote_spaces get "'this'" =
+ { "spc" = "this" }
+
+(* Test: squote_spaces
+ unquoted value with spaces *)
+test squote_spaces get "this that those" = *
+
+(* Test: squote_spaces
+ double quoted value with spaces *)
+test squote_spaces get "\"this that those\"" = *
+
+(* Test: squote_spaces
+ single quoted value with spaces *)
+test squote_spaces get "'this that those'" =
+ { "spc" = "this that those" }
+
+(* Test: squote_spaces
+ remove spaces from single-quoted value *)
+test squote_spaces put "'this that those'"
+ after set "spc" "thisthat" =
+ "'thisthat'"
+
+(* Test: squote_spaces
+ add spaces to unquoted value *)
+test squote_spaces put "this"
+ after set "spc" "this that those" =
+ "'this that those'"
+
+(* Test: squote_spaces
+ add spaces to double-quoted value *)
+test squote_spaces put "\"this\""
+ after set "spc" "this that those" =
+ "'this that those'"
+
+(* Test: squote_spaces
+ add spaces to single-quoted value *)
+test squote_spaces put "'this'"
+ after set "spc" "this that those" =
+ "'this that those'"
+
+(* Group: nil cases *)
+
+(* View: dquote_opt_nil *)
+let dquote_opt_nil =
+ let body = store Quote.double_opt_re
+ in [ label "dquote_opt_nil" . Quote.do_dquote_opt_nil body ]?
+
+(* Test: dquote_opt_nil *)
+test dquote_opt_nil get "this" =
+ { "dquote_opt_nil" = "this" }
+
+(* Test: dquote_opt_nil *)
+test dquote_opt_nil get "'this'" =
+ { "dquote_opt_nil" = "'this'" }
+
+(* Test: dquote_opt_nil *)
+test dquote_opt_nil get "\"this\"" =
+ { "dquote_opt_nil" = "this" }
+
+(* Test: dquote_opt_nil *)
+test dquote_opt_nil put ""
+ after set "dquote_opt_nil" "this" =
+ "this"
+
+(* Test: dquote_opt_nil *)
+test dquote_opt_nil put "\"this\""
+ after set "dquote_opt_nil" "this" =
+ "\"this\""
+
+(* Test: dquote_opt_nil *)
+test dquote_opt_nil put "'this'"
+ after set "dquote_opt_nil" "this" =
+ "this"
+
+(* View: squote_opt_nil *)
+let squote_opt_nil =
+ let body = store Quote.single_opt_re
+ in [ label "squote_opt_nil" . Quote.do_squote_opt_nil body ]?
+
+(* Test: squote_opt_nil *)
+test squote_opt_nil get "this" =
+ { "squote_opt_nil" = "this" }
+
+(* Test: squote_opt_nil *)
+test squote_opt_nil get "'this'" =
+ { "squote_opt_nil" = "this" }
+
+(* Test: squote_opt_nil *)
+test squote_opt_nil get "\"this\"" =
+ { "squote_opt_nil" = "\"this\"" }
+
+(* Test: squote_opt_nil *)
+test squote_opt_nil put ""
+ after set "squote_opt_nil" "this" =
+ "this"
+
+(* Test: squote_opt_nil *)
+test squote_opt_nil put "\"this\""
+ after set "squote_opt_nil" "this" =
+ "this"
+
+(* Test: squote_opt_nil *)
+test squote_opt_nil put "'this'"
+ after set "squote_opt_nil" "this" =
+ "'this'"
+
+(* View: quote_opt_nil *)
+let quote_opt_nil =
+ let body = store Quote.any_opt_re
+ in [ label "quote_opt_nil" . Quote.do_quote_opt_nil body ]?
+
+(* Test: quote_opt_nil *)
+test quote_opt_nil get "this" =
+ { "quote_opt_nil" = "this" }
+
+(* Test: quote_opt_nil *)
+test quote_opt_nil get "'this'" =
+ { "quote_opt_nil" = "this" }
+
+(* Test: quote_opt_nil *)
+test quote_opt_nil get "\"this\"" =
+ { "quote_opt_nil" = "this" }
+
+(* Test: quote_opt_nil *)
+test quote_opt_nil put ""
+ after set "quote_opt_nil" "this" =
+ "this"
+
+(* Test: quote_opt_nil *)
+test quote_opt_nil put "\"this\""
+ after set "quote_opt_nil" "this" =
+ "\"this\""
+
+(* Test: quote_opt_nil *)
+test quote_opt_nil put "'this'"
+ after set "quote_opt_nil" "this" =
+ "'this'"
+
--- /dev/null
+(*
+Module: Test_Rabbitmq
+ Provides unit tests and examples for the <Rabbitmq> lens.
+*)
+module Test_Rabbitmq =
+
+(* Test: Rabbitmq.listeners *)
+test Rabbitmq.listeners get "{ssl_listeners, [5671, {\"127.0.0.1\", 5672}]}" =
+ { "ssl_listeners"
+ { "value" = "5671" }
+ { "tuple"
+ { "value" = "127.0.0.1" }
+ { "value" = "5672" } } }
+
+(* Test: Rabbitmq.ssl_options *)
+test Rabbitmq.ssl_options get "{ssl_options, [
+ {cacertfile,\"/path/to/testca/cacert.pem\"},
+ {certfile,\"/path/to/server/cert.pem\"},
+ {keyfile,\"/path/to/server/key.pem\"},
+ {verify,verify_peer},
+ {versions, ['tlsv1.2', 'tlsv1.1', 'tlsv1']},
+ {fail_if_no_peer_cert,false}]}" =
+ { "ssl_options"
+ { "cacertfile" = "/path/to/testca/cacert.pem" }
+ { "certfile" = "/path/to/server/cert.pem" }
+ { "keyfile" = "/path/to/server/key.pem" }
+ { "verify" = "verify_peer" }
+ { "versions"
+ { "value" = "tlsv1.2" }
+ { "value" = "tlsv1.1" }
+ { "value" = "tlsv1" } }
+ { "fail_if_no_peer_cert" = "false" } }
+
+(* Test: Rabbitmq.disk_free_limit *)
+test Rabbitmq.disk_free_limit get "{disk_free_limit, 1000000000}" =
+ { "disk_free_limit" = "1000000000" }
+
+(* Test: Rabbitmq.disk_free_limit *)
+test Rabbitmq.disk_free_limit get "{disk_free_limit, {mem_relative, 1.0}}" =
+ { "disk_free_limit"
+ { "tuple"
+ { "value" = "mem_relative" }
+ { "value" = "1.0" } } }
+
+(* Test: Rabbitmq.log_levels *)
+test Rabbitmq.log_levels get "{log_levels, [{connection, info}]}" =
+ { "log_levels"
+ { "tuple"
+ { "value" = "connection" }
+ { "value" = "info" } } }
+
+(* Test: Rabbitmq.cluster_nodes *)
+test Rabbitmq.cluster_nodes get "{cluster_nodes, {['rabbit@rabbit1', 'rabbit@rabbit2', 'rabbit@rabbit3'], disc}}" =
+ { "cluster_nodes"
+ { "tuple"
+ { "value"
+ { "value" = "rabbit@rabbit1" }
+ { "value" = "rabbit@rabbit2" }
+ { "value" = "rabbit@rabbit3" } }
+ { "value" = "disc" } } }
+
+(* Test: Rabbitmq.cluster_nodes
+ Apparently, tuples are not mandatory *)
+test Rabbitmq.cluster_nodes get "{cluster_nodes, ['rabbit@rabbit1', 'rabbit@rabbit2', 'rabbit@rabbit3']}" =
+ { "cluster_nodes"
+ { "value" = "rabbit@rabbit1" }
+ { "value" = "rabbit@rabbit2" }
+ { "value" = "rabbit@rabbit3" } }
+
+(* Test: Rabbitmq.cluster_partition_handling, single value *)
+test Rabbitmq.cluster_partition_handling get "{cluster_partition_handling, ignore}" =
+ { "cluster_partition_handling" = "ignore" }
+
+(* Test: Rabbitmq.cluster_partition_handling, tuple *)
+test Rabbitmq.cluster_partition_handling get "{cluster_partition_handling, {pause_if_all_down, ['rabbit@rabbit1', 'rabbit@rabbit2', 'rabbit@rabbit3'], autoheal}}" =
+ { "cluster_partition_handling"
+ { "tuple"
+ { "value" = "pause_if_all_down" }
+ { "value"
+ { "value" = "rabbit@rabbit1" }
+ { "value" = "rabbit@rabbit2" }
+ { "value" = "rabbit@rabbit3" } }
+ { "value" = "autoheal" } } }
+
+(* Test: Rabbitmq.lns
+ Top-level test *)
+test Rabbitmq.lns get "
+% A standard configuration
+[
+ {rabbit, [
+ {ssl_listeners, [5671]},
+ {ssl_options, [{cacertfile,\"/path/to/testca/cacert.pem\"},
+ {certfile,\"/path/to/server/cert.pem\"},
+ {keyfile,\"/path/to/server/key.pem\"},
+ {verify,verify_peer},
+ {fail_if_no_peer_cert,false}]}
+ ]}
+].
+% EOF\n" =
+ { }
+ { "#comment" = "A standard configuration" }
+ { "rabbit"
+ { "ssl_listeners"
+ { "value" = "5671" } }
+ { "ssl_options"
+ { "cacertfile" = "/path/to/testca/cacert.pem" }
+ { "certfile" = "/path/to/server/cert.pem" }
+ { "keyfile" = "/path/to/server/key.pem" }
+ { "verify" = "verify_peer" }
+ { "fail_if_no_peer_cert" = "false" } } }
+ { "#comment" = "EOF" }
+
--- /dev/null
+module Test_radicale =
+
+ let conf = "
+[server]
+
+[encoding]
+
+[well-known]
+
+[auth]
+
+[git]
+
+[rights]
+
+[storage]
+
+[logging]
+
+[headers]
+
+"
+
+ test Radicale.lns get conf =
+ {}
+ { "server"
+ {} }
+ { "encoding"
+ {} }
+ { "well-known"
+ {} }
+ { "auth"
+ {} }
+ { "git"
+ {} }
+ { "rights"
+ {} }
+ { "storage"
+ {} }
+ { "logging"
+ {} }
+ { "headers"
+ {} }
+
+ test Radicale.lns put conf after
+ set "server/hosts" "127.0.0.1:5232, [::1]:5232";
+ set "server/base_prefix" "/radicale/";
+ set "well-known/caldav" "/radicale/%(user)s/caldav/";
+ set "well-known/cardav" "/radicale/%(user)s/carddav/";
+ set "auth/type" "remote_user";
+ set "rights/type" "owner_only"
+ = "
+[server]
+
+hosts=127.0.0.1:5232, [::1]:5232
+base_prefix=/radicale/
+[encoding]
+
+[well-known]
+
+caldav=/radicale/%(user)s/caldav/
+cardav=/radicale/%(user)s/carddav/
+[auth]
+
+type=remote_user
+[git]
+
+[rights]
+
+type=owner_only
+[storage]
+
+[logging]
+
+[headers]
+
+"
--- /dev/null
+module Test_rancid =
+
+(* Examples from router.db(5) *)
+let rancid = "dial1.paris;cisco;up
+core1.paris;cisco;down;in testing until 5/5/2001.
+core2.paris;cisco;ticketed;Ticket 6054234, 5/3/2001
+border1.paris;juniper;up;
+"
+
+test Rancid.lns get rancid =
+ { "device" = "dial1.paris"
+ { "type" = "cisco" }
+ { "state" = "up" }
+ }
+ { "device" = "core1.paris"
+ { "type" = "cisco" }
+ { "state" = "down" }
+ { "comment" = "in testing until 5/5/2001." }
+ }
+ { "device" = "core2.paris"
+ { "type" = "cisco" }
+ { "state" = "ticketed" }
+ { "comment" = "Ticket 6054234, 5/3/2001" }
+ }
+ { "device" = "border1.paris"
+ { "type" = "juniper" }
+ { "state" = "up" }
+ { "comment" = "" }
+ }
+
--- /dev/null
+(*
+Module: Test_Redis
+ Provides unit tests and examples for the <Redis> lens.
+*)
+
+module Test_Redis =
+
+let standard_entry = "dir /var/lib/redis\n"
+test Redis.lns get standard_entry = { "dir" = "/var/lib/redis" }
+
+let double_quoted_entry = "dir \"/var/lib/redis\"\n"
+test Redis.lns get double_quoted_entry = { "dir" = "/var/lib/redis" }
+
+let single_quoted_entry = "dir '/var/lib/redis'\n"
+test Redis.lns get single_quoted_entry = { "dir" = "/var/lib/redis" }
+
+let extra_whitespace_entry = " dir /var/lib/redis \n"
+test Redis.lns get extra_whitespace_entry = { "dir" = "/var/lib/redis" }
+
+let save_entry = "save 60 10000\n"
+test Redis.lns get save_entry =
+{ "save"
+ { "seconds" = "60" }
+ { "keys" = "10000" }
+}
+
+let save_entry_quotes = "save '60' \"10000\"\n"
+test Redis.lns get save_entry_quotes =
+{ "save"
+ { "seconds" = "60" }
+ { "keys" = "10000" }
+}
+
+let save_entry_empty = "save \"\"\n"
+test Redis.lns get save_entry_empty = { "save" = "" }
+
+let replicaof_entry = "slaveof 192.168.0.10 6379\nreplicaof 192.168.0.11 6380\n"
+test Redis.lns get replicaof_entry =
+{ "slaveof"
+ { "ip" = "192.168.0.10" }
+ { "port" = "6379" } }
+{ "replicaof"
+ { "ip" = "192.168.0.11" }
+ { "port" = "6380" } }
+
+let rename_command_entry = "rename-command CONFIG CONFIG2\n"
+test Redis.lns get rename_command_entry =
+{ "rename-command"
+ { "from" = "CONFIG" }
+ { "to" = "CONFIG2" }
+}
+
+let client_output_buffer_limit_entry_1 = "client-output-buffer-limit normal 0 0 0\n"
+test Redis.lns get client_output_buffer_limit_entry_1 =
+{ "client-output-buffer-limit"
+ { "class" = "normal" }
+ { "hard_limit" = "0" }
+ { "soft_limit" = "0" }
+ { "soft_seconds" = "0" }
+}
+
+let client_output_buffer_limit_entry_2 = "client-output-buffer-limit slave 256mb 64mb 60\n"
+test Redis.lns get client_output_buffer_limit_entry_2 =
+{ "client-output-buffer-limit"
+ { "class" = "slave" }
+ { "hard_limit" = "256mb" }
+ { "soft_limit" = "64mb" }
+ { "soft_seconds" = "60" }
+}
+
+let include_entry = "include /foo/redis.conf\ninclude /bar/redis.conf\n"
+test Redis.lns get include_entry =
+{ "include" = "/foo/redis.conf" }
+{ "include" = "/bar/redis.conf" }
+
+let standard_comment = "# a comment\n"
+test Redis.lns get standard_comment = { "#comment" = "a comment" }
+
+let extra_whitespace_comment = " # another comment \n"
+test Redis.lns get extra_whitespace_comment = { "#comment" = "another comment" }
+
+let redis_conf = "# Redis configuration file example
+
+# Note on units: when memory size is needed, it is possible to specify
+# it in the usual form of 1k 5GB 4M and so forth:
+#
+# 1k => 1000 bytes
+# 1kb => 1024 bytes
+# 1m => 1000000 bytes
+# 1mb => 1024*1024 bytes
+# 1g => 1000000000 bytes
+# 1gb => 1024*1024*1024 bytes
+#
+# units are case insensitive so 1GB 1Gb 1gB are all the same.
+
+# By default Redis does not run as a daemon. Use 'yes' if you need it.
+# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
+daemonize yes
+
+# When running daemonized, Redis writes a pid file in /var/run/redis.pid by
+# default. You can specify a custom pid file location here.
+pidfile /var/run/redis/redis-server.pid
+
+# Accept connections on the specified port, default is 6379.
+# If port 0 is specified Redis will not listen on a TCP socket.
+port 6379
+
+# If you want you can bind a single interface, if the bind option is not
+# specified all the interfaces will listen for incoming connections.
+#
+bind 127.0.0.1
+
+# Note: you can disable saving at all commenting all the \"save\" lines.
+
+save 900 1
+save 300 10
+save 60 10000
+
+# Include one or more other config files here. This is useful if you
+# have a standard template that goes to all redis server but also need
+# to customize a few per-server settings. Include files can include
+# other files, so use this wisely.
+#
+include /path/to/local.conf
+include /path/to/other.conf
+"
+
+test Redis.lns get redis_conf =
+ { "#comment" = "Redis configuration file example" }
+ { }
+ { "#comment" = "Note on units: when memory size is needed, it is possible to specify" }
+ { "#comment" = "it in the usual form of 1k 5GB 4M and so forth:" }
+ { }
+ { "#comment" = "1k => 1000 bytes" }
+ { "#comment" = "1kb => 1024 bytes" }
+ { "#comment" = "1m => 1000000 bytes" }
+ { "#comment" = "1mb => 1024*1024 bytes" }
+ { "#comment" = "1g => 1000000000 bytes" }
+ { "#comment" = "1gb => 1024*1024*1024 bytes" }
+ { }
+ { "#comment" = "units are case insensitive so 1GB 1Gb 1gB are all the same." }
+ { }
+ { "#comment" = "By default Redis does not run as a daemon. Use 'yes' if you need it." }
+ { "#comment" = "Note that Redis will write a pid file in /var/run/redis.pid when daemonized." }
+ { "daemonize" = "yes" }
+ { }
+ { "#comment" = "When running daemonized, Redis writes a pid file in /var/run/redis.pid by" }
+ { "#comment" = "default. You can specify a custom pid file location here." }
+ { "pidfile" = "/var/run/redis/redis-server.pid" }
+ { }
+ { "#comment" = "Accept connections on the specified port, default is 6379." }
+ { "#comment" = "If port 0 is specified Redis will not listen on a TCP socket." }
+ { "port" = "6379" }
+ { }
+ { "#comment" = "If you want you can bind a single interface, if the bind option is not" }
+ { "#comment" = "specified all the interfaces will listen for incoming connections." }
+ { }
+ { "bind" { "ip" = "127.0.0.1" } }
+ { }
+ { "#comment" = "Note: you can disable saving at all commenting all the \"save\" lines." }
+ { }
+ { "save"
+ { "seconds" = "900" }
+ { "keys" = "1" }
+ }
+ { "save"
+ { "seconds" = "300" }
+ { "keys" = "10" }
+ }
+ { "save"
+ { "seconds" = "60" }
+ { "keys" = "10000" }
+ }
+ { }
+ { "#comment" = "Include one or more other config files here. This is useful if you" }
+ { "#comment" = "have a standard template that goes to all redis server but also need" }
+ { "#comment" = "to customize a few per-server settings. Include files can include" }
+ { "#comment" = "other files, so use this wisely." }
+ { }
+ { "include" = "/path/to/local.conf" }
+ { "include" = "/path/to/other.conf" }
+
+(* Test: Redis.lns
+ Empty value (GH issue #115) *)
+test Redis.lns get "notify-keyspace-events \"\"\n" =
+ { "notify-keyspace-events" = "" }
+
+(* Test: Redis.lns
+ Multiple bind IP addresses (GH issue #194) *)
+test Redis.lns get "bind 127.0.0.1 \"::1\" 192.168.1.1\n" =
+ { "bind"
+ { "ip" = "127.0.0.1" }
+ { "ip" = "::1" }
+ { "ip" = "192.168.1.1" } }
+
+test Redis.lns get "bind 127.0.0.1\n bind 192.168.1.1\n" =
+ { "bind"
+ { "ip" = "127.0.0.1" } }
+ { "bind"
+ { "ip" = "192.168.1.1" } }
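+
+(* Test: Redis.lns
+ A hedged extra example, not from the original suite: several
+ unquoted bind addresses map to one "ip" node each, as above *)
+test Redis.lns get "bind 10.0.0.1 10.0.0.2\n" =
+ { "bind"
+ { "ip" = "10.0.0.1" }
+ { "ip" = "10.0.0.2" } }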
+
+let sentinel_conf = "sentinel myid ccae7d051dfaa62078cb3ac3dec100240e637d5a
+sentinel deny-scripts-reconfig yes
+sentinel monitor Master 8.8.8.8 6379 2
+sentinel monitor Othercluster 1.1.1.1 6380 4
+sentinel config-epoch Master 693
+sentinel leader-epoch Master 691
+sentinel known-replica Master 4.4.4.4 6379
+sentinel known-replica Master 1.1.1.1 6379
+sentinel known-sentinel Master 4.4.4.4 26379 9bbd89f3846b5366f7da4d20b516fdc3f5c3a993
+sentinel known-sentinel Master 1.1.1.1 26379 f435adae0efeb9d5841712d05d7399f7584f333b
+sentinel known-sentinel Othercluster 4.4.4.4 26379 9bbd89f3846b5366f7da4d20b516fdc3f5c3a993
+sentinel known-sentinel Othercluster 1.1.1.1 26379 f435adae0efeb9d5841712d05d7399f7584f333b
+sentinel current-epoch 693
+"
+
+test Redis.lns get sentinel_conf =
+ { "sentinel" = "myid"
+ { "value" = "ccae7d051dfaa62078cb3ac3dec100240e637d5a" } }
+ { "sentinel" = "deny-scripts-reconfig"
+ { "value" = "yes" } }
+ { "sentinel" = "monitor"
+ { "cluster" = "Master" }
+ { "ip" = "8.8.8.8" }
+ { "port" = "6379" }
+ { "quorum" = "2" } }
+ { "sentinel" = "monitor"
+ { "cluster" = "Othercluster" }
+ { "ip" = "1.1.1.1" }
+ { "port" = "6380" }
+ { "quorum" = "4" } }
+ { "sentinel" = "config-epoch"
+ { "cluster" = "Master" }
+ { "epoch" = "693" } }
+ { "sentinel" = "leader-epoch"
+ { "cluster" = "Master" }
+ { "epoch" = "691" } }
+ { "sentinel" = "known-replica"
+ { "cluster" = "Master" }
+ { "ip" = "4.4.4.4" }
+ { "port" = "6379" } }
+ { "sentinel" = "known-replica"
+ { "cluster" = "Master" }
+ { "ip" = "1.1.1.1" }
+ { "port" = "6379" } }
+ { "sentinel" = "known-sentinel"
+ { "cluster" = "Master" }
+ { "ip" = "4.4.4.4" }
+ { "port" = "26379" }
+ { "id" = "9bbd89f3846b5366f7da4d20b516fdc3f5c3a993" } }
+ { "sentinel" = "known-sentinel"
+ { "cluster" = "Master" }
+ { "ip" = "1.1.1.1" }
+ { "port" = "26379" }
+ { "id" = "f435adae0efeb9d5841712d05d7399f7584f333b" } }
+ { "sentinel" = "known-sentinel"
+ { "cluster" = "Othercluster" }
+ { "ip" = "4.4.4.4" }
+ { "port" = "26379" }
+ { "id" = "9bbd89f3846b5366f7da4d20b516fdc3f5c3a993" } }
+ { "sentinel" = "known-sentinel"
+ { "cluster" = "Othercluster" }
+ { "ip" = "1.1.1.1" }
+ { "port" = "26379" }
+ { "id" = "f435adae0efeb9d5841712d05d7399f7584f333b" } }
+ { "sentinel" = "current-epoch"
+ { "value" = "693" } }
--- /dev/null
+(*
+Module: Test_Reprepro_Uploaders
+ Provides unit tests and examples for the <Reprepro_Uploaders> lens.
+*)
+
+module Test_Reprepro_Uploaders =
+
+(* Test: Reprepro_Uploaders.entry
+ A star condition is mapped as the direct value
+ of the "allow" node. *)
+test Reprepro_Uploaders.entry get
+ "allow * by anybody\n" =
+
+ { "allow" = "*"
+ { "by" = "anybody" } }
+
+(* Test: Reprepro_Uploaders.entry
+ For simple keys, the "by" node takes the value "key",
+ and the key ID is mapped to a "key" subnode. *)
+test Reprepro_Uploaders.entry get
+ "allow * by key ABCD1234\n" =
+
+ { "allow" = "*"
+ { "by" = "key"
+ { "key" = "ABCD1234" } } }
+
+(* Test: Reprepro_Uploaders.entry
+ Conditions are mapped inside a tree containing
+ at least an "and" node and an "or" subnode.
+
+ The value of each "or" subnode is the type of check
+ (e.g. "source"), and this node contains "or" subnodes
+ with the value(s) allowed for the check (e.g. "bash"). *)
+test Reprepro_Uploaders.entry get
+ "allow source 'bash' by anybody\n" =
+
+ { "allow"
+ { "and"
+ { "or" = "source"
+ { "or" = "bash" } } }
+ { "by" = "anybody" } }
+
+(* Test: Reprepro_Uploaders.entry
+ Check the field distribution *)
+test Reprepro_Uploaders.entry get
+ "allow distribution 'sid' by anybody\n" =
+
+ { "allow"
+ { "and"
+ { "or" = "distribution"
+ { "or" = "sid" } } }
+ { "by" = "anybody" } }
+
+(* Test: Reprepro_Uploaders.entry
+ Some checks use the "contain" keyword to loosen the condition.
+ In that case, a "contain" subnode is added; check for it to know
+ how the condition should be evaluated. *)
+test Reprepro_Uploaders.entry get
+ "allow source 'bash' and binaries contain 'bash-doc' by anybody\n" =
+
+ { "allow"
+ { "and"
+ { "or" = "source"
+ { "or" = "bash" } } }
+ { "and"
+ { "or" = "binaries"
+ { "contain" }
+ { "or" = "bash-doc" } } }
+ { "by" = "anybody" } }
+
+(* Test: Reprepro_Uploaders.entry
+ Some checks support multiple values, separated by '|'.
+ In this case, each value is mapped to its own "or" subnode. *)
+test Reprepro_Uploaders.entry get
+
+ "allow sections 'main'|'restricted' and source 'bash' or binaries contain 'bash-doc' by anybody\n" =
+
+ { "allow"
+ { "and"
+ { "or" = "sections"
+ { "or" = "main" }
+ { "or" = "restricted" } } }
+ { "and"
+ { "or" = "source"
+ { "or" = "bash" } }
+ { "or" = "binaries"
+ { "contain" }
+ { "or" = "bash-doc" } } }
+ { "by" = "anybody" } }
+
+(* Test: Reprepro_Uploaders.entry
+ Negated conditions are mapped with a "not" subnode. *)
+test Reprepro_Uploaders.entry get
+
+ "allow not source 'bash' by anybody\n" =
+
+ { "allow"
+ { "and"
+ { "or" = "source"
+ { "not" }
+ { "or" = "bash" } } }
+ { "by" = "anybody" } }
+
+
+
+(* Variable: conf
+ A full configuration *)
+let conf = "# ftpmaster
+allow * by key 74BF771E
+
+allow sections 'desktop/*' by anybody
+allow sections 'gforge/*' and binaries contain 'bzr' or not source '*melanie*'|'katya' by any key
+"
+
+(* Test: Reprepro_Uploaders.lns
+ Testing the full <conf> against <Reprepro_Uploaders.lns> *)
+test Reprepro_Uploaders.lns get conf =
+ { "#comment" = "ftpmaster" }
+ { "allow" = "*"
+ { "by" = "key"
+ { "key" = "74BF771E" } } }
+ { }
+ { "allow"
+ { "and" { "or" = "sections" { "or" = "desktop/*" } } }
+ { "by" = "anybody" } }
+ { "allow"
+ { "and" { "or" = "sections" { "or" = "gforge/*" } } }
+ { "and" { "or" = "binaries" { "contain" } { "or" = "bzr" } }
+ { "or" = "source" { "not" } { "or" = "*melanie*" } { "or" = "katya" } } }
+ { "by" = "key"
+ { "key" = "any" } } }
+
+(* Test: Reprepro_Uploaders.lns
+ Support group conditions, GH #283 *)
+test Reprepro_Uploaders.lns get "allow sections 'desktop/*' by group groupname\n" =
+ { "allow"
+ { "and" { "or" = "sections" { "or" = "desktop/*" } } }
+ { "by" = "group" { "group" = "groupname" } } }
+
+(* Test: Reprepro_Uploaders.lns
+ Declare group condition, GH #283 *)
+test Reprepro_Uploaders.lns get "group groupname add key-id\n" =
+ { "group" = "groupname"
+ { "add" = "key-id" } }
+
+(* Test: Reprepro_Uploaders.lns
+ Group inheritance, GH #283 *)
+test Reprepro_Uploaders.lns get "group groupname contains group2\n" =
+ { "group" = "groupname"
+ { "contains" = "group2" } }
+
+(* Test: Reprepro_Uploaders.lns
+ Empty group, GH #283 *)
+test Reprepro_Uploaders.lns get "group groupname empty\n" =
+ { "group" = "groupname"
+ { "empty" } }
+
+(* Test: Reprepro_Uploaders.lns
+ Unused group, GH #283 *)
+test Reprepro_Uploaders.lns get "group groupname unused\n" =
+ { "group" = "groupname"
+ { "unused" } }
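+
+(* Test: Reprepro_Uploaders.lns
+ A hedged combined example, not from the original suite: a group
+ declaration followed by an allow rule using it, composed from
+ the forms tested above *)
+test Reprepro_Uploaders.lns get "group maintainers add 74BF771E\nallow * by group maintainers\n" =
+ { "group" = "maintainers"
+ { "add" = "74BF771E" } }
+ { "allow" = "*"
+ { "by" = "group"
+ { "group" = "maintainers" } } }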
--- /dev/null
+module Test_resolv =
+
+ let conf = "# Sample resolv.conf
+; With multiple comment styles
+nameserver 192.168.0.3 # and EOL comments
+nameserver ff02::1
+domain mynet.com # and EOL comments
+search mynet.com anotherorg.net
+
+# A sortlist now
+sortlist 130.155.160.0/255.255.240.0 130.155.0.0
+
+options ndots:3 debug timeout:2
+options no-ip6-dotint single-request-reopen # and EOL comments
+options attempts:3 rotate no-check-names inet6 ip6-bytestring ip6-dotint edns0 single-request no-tld-query use-vc no-reload trust-ad
+
+lookup file bind
+family inet6 inet4
+"
+
+test Resolv.lns get conf =
+ { "#comment" = "Sample resolv.conf" }
+ { "#comment" = "With multiple comment styles" }
+ { "nameserver" = "192.168.0.3"
+ { "#comment" = "and EOL comments" } }
+ { "nameserver" = "ff02::1" }
+ { "domain" = "mynet.com"
+ { "#comment" = "and EOL comments" } }
+ { "search"
+ { "domain" = "mynet.com" }
+ { "domain" = "anotherorg.net" } }
+ {}
+ { "#comment" = "A sortlist now" }
+ { "sortlist"
+ { "ipaddr" = "130.155.160.0"
+ { "netmask" = "255.255.240.0" } }
+ { "ipaddr" = "130.155.0.0" } }
+ {}
+ { "options"
+ { "ndots" = "3" }
+ { "debug" }
+ { "timeout" = "2" } }
+ { "options"
+ { "ip6-dotint"
+ { "negate" } }
+ { "single-request-reopen" }
+ { "#comment" = "and EOL comments" } }
+ { "options"
+ { "attempts" = "3" }
+ { "rotate" }
+ { "no-check-names" }
+ { "inet6" }
+ { "ip6-bytestring" }
+ { "ip6-dotint" }
+ { "edns0" }
+ { "single-request" }
+ { "no-tld-query" }
+ { "use-vc" }
+ { "no-reload" }
+ { "trust-ad" } }
+ {}
+ { "lookup"
+ { "file" }
+ { "bind" } }
+ { "family"
+ { "inet6" }
+ { "inet4" } }
+
+test Resolv.ip6_dotint
+ put "ip6-dotint"
+ after set "/ip6-dotint/negate" "" = "no-ip6-dotint"
+
+test Resolv.lns get "; \r\n; \t \n" = { } { }
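+
+(* Test: Resolv.lns
+ A hedged extra example, not from the original suite: a single
+ option with a value, following the options pattern above *)
+test Resolv.lns get "options attempts:5\n" =
+ { "options"
+ { "attempts" = "5" } }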
--- /dev/null
+(*
+Module: Test_rhsm
+ Provides unit tests and examples for the <Rhsm> lens.
+*)
+
+module Test_rhsm =
+
+ (* Variable: conf
+ A full rhsm.conf *)
+ let conf = "# Red Hat Subscription Manager Configuration File:
+
+# Unified Entitlement Platform Configuration
+[server]
+# Server hostname:
+hostname = subscription.rhn.redhat.com
+
+# Server prefix:
+prefix = /subscription
+
+# Server port:
+port = 443
+
+# Set to 1 to disable certificate validation:
+insecure = 0
+
+# Set the depth of certs which should be checked
+# when validating a certificate
+ssl_verify_depth = 3
+
+# an http proxy server to use
+proxy_hostname =
+
+# port for http proxy server
+proxy_port =
+
+# user name for authenticating to an http proxy, if needed
+proxy_user =
+
+# password for basic http proxy auth, if needed
+proxy_password =
+
+[rhsm]
+# Content base URL:
+baseurl= https://cdn.redhat.com
+
+# Server CA certificate location:
+ca_cert_dir = /etc/rhsm/ca/
+
+# Default CA cert to use when generating yum repo configs:
+repo_ca_cert = %(ca_cert_dir)sredhat-uep.pem
+
+# Where the certificates should be stored
+productCertDir = /etc/pki/product
+entitlementCertDir = /etc/pki/entitlement
+consumerCertDir = /etc/pki/consumer
+
+# Manage generation of yum repositories for subscribed content:
+manage_repos = 1
+
+# Refresh repo files with server overrides on every yum command
+full_refresh_on_yum = 0
+
+# If set to zero, the client will not report the package profile to
+# the subscription management service.
+report_package_profile = 1
+
+# The directory to search for subscription manager plugins
+pluginDir = /usr/share/rhsm-plugins
+
+# The directory to search for plugin configuration files
+pluginConfDir = /etc/rhsm/pluginconf.d
+
+[rhsmcertd]
+# Interval to run cert check (in minutes):
+certCheckInterval = 240
+# Interval to run auto-attach (in minutes):
+autoAttachInterval = 1440
+"
+
+ test Rhsm.lns get conf =
+ { "#comment" = "Red Hat Subscription Manager Configuration File:" }
+ { }
+ { "#comment" = "Unified Entitlement Platform Configuration" }
+ { "server"
+ { "#comment" = "Server hostname:" }
+ { "hostname" = "subscription.rhn.redhat.com" }
+ { }
+ { "#comment" = "Server prefix:" }
+ { "prefix" = "/subscription" }
+ { }
+ { "#comment" = "Server port:" }
+ { "port" = "443" }
+ { }
+ { "#comment" = "Set to 1 to disable certificate validation:" }
+ { "insecure" = "0" }
+ { }
+ { "#comment" = "Set the depth of certs which should be checked" }
+ { "#comment" = "when validating a certificate" }
+ { "ssl_verify_depth" = "3" }
+ { }
+ { "#comment" = "an http proxy server to use" }
+ { "proxy_hostname" }
+ { }
+ { "#comment" = "port for http proxy server" }
+ { "proxy_port" }
+ { }
+ { "#comment" = "user name for authenticating to an http proxy, if needed" }
+ { "proxy_user" }
+ { }
+ { "#comment" = "password for basic http proxy auth, if needed" }
+ { "proxy_password" }
+ { }
+ }
+ { "rhsm"
+ { "#comment" = "Content base URL:" }
+ { "baseurl" = "https://cdn.redhat.com" }
+ { }
+ { "#comment" = "Server CA certificate location:" }
+ { "ca_cert_dir" = "/etc/rhsm/ca/" }
+ { }
+ { "#comment" = "Default CA cert to use when generating yum repo configs:" }
+ { "repo_ca_cert" = "%(ca_cert_dir)sredhat-uep.pem" }
+ { }
+ { "#comment" = "Where the certificates should be stored" }
+ { "productCertDir" = "/etc/pki/product" }
+ { "entitlementCertDir" = "/etc/pki/entitlement" }
+ { "consumerCertDir" = "/etc/pki/consumer" }
+ { }
+ { "#comment" = "Manage generation of yum repositories for subscribed content:" }
+ { "manage_repos" = "1" }
+ { }
+ { "#comment" = "Refresh repo files with server overrides on every yum command" }
+ { "full_refresh_on_yum" = "0" }
+ { }
+ { "#comment" = "If set to zero, the client will not report the package profile to" }
+ { "#comment" = "the subscription management service." }
+ { "report_package_profile" = "1" }
+ { }
+ { "#comment" = "The directory to search for subscription manager plugins" }
+ { "pluginDir" = "/usr/share/rhsm-plugins" }
+ { }
+ { "#comment" = "The directory to search for plugin configuration files" }
+ { "pluginConfDir" = "/etc/rhsm/pluginconf.d" }
+ { }
+ }
+ { "rhsmcertd"
+ { "#comment" = "Interval to run cert check (in minutes):" }
+ { "certCheckInterval" = "240" }
+ { "#comment" = "Interval to run auto-attach (in minutes):" }
+ { "autoAttachInterval" = "1440" }
+ }
--- /dev/null
+module Test_rmt =
+
+let conf = "#ident @(#)rmt.dfl 1.2 05/08/09 Copyr 2000 J. Schilling
+#
+# This file is /etc/default/rmt
+
+DEBUG=/tmp/RMT
+USER=*
+
+ACCESS=rtape sparky /dev/rmt/*
+ACCESS=* * /dev/rmt/*
+
+# Historically, Red Hat rmt was not so ^^ restrictive.
+ACCESS=* * *
+"
+
+test Rmt.lns get conf =
+ { "#comment" = "ident @(#)rmt.dfl 1.2 05/08/09 Copyr 2000 J. Schilling" }
+ { }
+ { "#comment" = "This file is /etc/default/rmt" }
+ { }
+ { "DEBUG" = "/tmp/RMT" }
+ { "USER" = "*" }
+ { }
+ { "ACCESS"
+ { "name" = "rtape" }
+ { "host" = "sparky" }
+ { "path" = "/dev/rmt/*" } }
+ { "ACCESS"
+ { "name" = "*" }
+ { "host" = "*" }
+ { "path" = "/dev/rmt/*" } }
+ { }
+ { "#comment" = "Historically, Red Hat rmt was not so ^^ restrictive." }
+ { "ACCESS"
+ { "name" = "*" }
+ { "host" = "*" }
+ { "path" = "*" } }
--- /dev/null
+module Test_rsyncd =
+
+ let conf = "
+# A more sophisticated example would be:
+
+uid = nobody
+ gid = nobody
+use chroot = yes
+max connections = 4
+syslog facility = local5
+pid file = /var/run/rsyncd.pid
+
+[ftp]
+ # this is a comment
+ path = /var/ftp/./pub
+ comment = whole ftp area (approx 6.1 GB)
+
+ [cvs]
+ ; comment with semicolon
+ path = /data/cvs
+ comment = CVS repository (requires authentication)
+ auth users = tridge, susan # comment at EOL
+ secrets file = /etc/rsyncd.secrets
+
+"
+
+ test Rsyncd.lns get conf =
+ { ".anon"
+ {}
+ { "#comment" = "A more sophisticated example would be:" }
+ {}
+ { "uid" = "nobody" }
+ { "gid" = "nobody" }
+ { "use chroot" = "yes" }
+ { "max connections" = "4" }
+ { "syslog facility" = "local5" }
+ { "pid file" = "/var/run/rsyncd.pid" }
+ {}
+ }
+ { "ftp"
+ { "#comment" = "this is a comment" }
+ { "path" = "/var/ftp/./pub" }
+ { "comment" = "whole ftp area (approx 6.1 GB)" }
+ {}
+ }
+ { "cvs"
+ { "#comment" = "comment with semicolon" }
+ { "path" = "/data/cvs" }
+ { "comment" = "CVS repository (requires authentication)" }
+ { "auth users" = "tridge, susan"
+ { "#comment" = "comment at EOL" }
+ }
+ { "secrets file" = "/etc/rsyncd.secrets" }
+ {}
+ }
+
--- /dev/null
+(*
+Module: Test_Rsyslog
+ Provides unit tests and examples for the <Rsyslog> lens.
+*)
+
+module Test_Rsyslog =
+
+(* Variable: conf *)
+let conf = "# rsyslog v5 configuration file
+
+$ModLoad imuxsock # provides support for local system logging (e.g. via logger command)
+$ModLoad imklog # provides kernel logging support (previously done by rklogd)
+module(load=\"immark\" markmessageperiod=\"60\" fakeoption=\"bar\") #provides --MARK-- message capability
+
+timezone(id=\"CET\" offset=\"+01:00\")
+
+$UDPServerRun 514
+$InputTCPServerRun 514
+$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
+$ActionFileEnableSync on
+$IncludeConfig /etc/rsyslog.d/*.conf
+
+*.info;mail.none;authpriv.none;cron.none /var/log/messages
+authpriv.* /var/log/secure
+*.emerg *
+*.* @2.7.4.1
+*.* @@2.7.4.1
+*.emerg :omusrmsg:*
+*.emerg :omusrmsg:foo,bar
+*.emerg | /dev/xconsole
+"
+
+(* Test: Rsyslog.lns *)
+test Rsyslog.lns get conf =
+ { "#comment" = "rsyslog v5 configuration file" }
+ { }
+ { "$ModLoad" = "imuxsock"
+ { "#comment" = "provides support for local system logging (e.g. via logger command)" }
+ }
+ { "$ModLoad" = "imklog"
+ { "#comment" = "provides kernel logging support (previously done by rklogd)" }
+ }
+ { "module"
+ { "load" = "immark" }
+ { "markmessageperiod" = "60" }
+ { "fakeoption" = "bar" }
+ { "#comment" = "provides --MARK-- message capability" }
+ }
+ { }
+ { "timezone"
+ { "id" = "CET" }
+ { "offset" = "+01:00" }
+ }
+ { }
+ { "$UDPServerRun" = "514" }
+ { "$InputTCPServerRun" = "514" }
+ { "$ActionFileDefaultTemplate" = "RSYSLOG_TraditionalFileFormat" }
+ { "$ActionFileEnableSync" = "on" }
+ { "$IncludeConfig" = "/etc/rsyslog.d/*.conf" }
+ { }
+ { "entry"
+ { "selector"
+ { "facility" = "*" }
+ { "level" = "info" }
+ }
+ { "selector"
+ { "facility" = "mail" }
+ { "level" = "none" }
+ }
+ { "selector"
+ { "facility" = "authpriv" }
+ { "level" = "none" }
+ }
+ { "selector"
+ { "facility" = "cron" }
+ { "level" = "none" }
+ }
+ { "action"
+ { "file" = "/var/log/messages" }
+ }
+ }
+ { "entry"
+ { "selector"
+ { "facility" = "authpriv" }
+ { "level" = "*" }
+ }
+ { "action"
+ { "file" = "/var/log/secure" }
+ }
+ }
+ { "entry"
+ { "selector"
+ { "facility" = "*" }
+ { "level" = "emerg" }
+ }
+ { "action"
+ { "user" = "*" }
+ }
+ }
+ { "entry"
+ { "selector"
+ { "facility" = "*" }
+ { "level" = "*" }
+ }
+ { "action"
+ { "protocol" = "@" }
+ { "hostname" = "2.7.4.1" }
+ }
+ }
+ { "entry"
+ { "selector"
+ { "facility" = "*" }
+ { "level" = "*" }
+ }
+ { "action"
+ { "protocol" = "@@" }
+ { "hostname" = "2.7.4.1" }
+ }
+ }
+ { "entry"
+ { "selector"
+ { "facility" = "*" }
+ { "level" = "emerg" }
+ }
+ { "action"
+ { "omusrmsg" = "*" }
+ }
+ }
+ { "entry"
+ { "selector"
+ { "facility" = "*" }
+ { "level" = "emerg" }
+ }
+ { "action"
+ { "omusrmsg" = "foo" }
+ { "omusrmsg" = "bar" } }
+ }
+ { "entry"
+ { "selector"
+ { "facility" = "*" }
+ { "level" = "emerg" }
+ }
+ { "action"
+ { "pipe" = "/dev/xconsole" }
+ }
+ }
+
+(* Parse complex $template lines, RHBZ#1083016 *)
+test Rsyslog.lns get "$template SpiceTmpl,\"%TIMESTAMP%.%TIMESTAMP:::date-subseconds% %syslogtag% %syslogseverity-text%:%msg:::sp-if-no-1st-sp%%msg:::drop-last-lf%\\n\"\n" =
+ { "$template" = "SpiceTmpl,\"%TIMESTAMP%.%TIMESTAMP:::date-subseconds% %syslogtag% %syslogseverity-text%:%msg:::sp-if-no-1st-sp%%msg:::drop-last-lf%\\n\"" }
+
+(* Parse property-based filters, RHBZ#1083016 *)
+test Rsyslog.lns get ":programname, startswith, \"spice-vdagent\" /var/log/spice-vdagent.log;SpiceTmpl\n" =
+ { "filter"
+ { "property" = "programname" }
+ { "operation" = "startswith" }
+ { "value" = "spice-vdagent" }
+ { "action"
+ { "file" = "/var/log/spice-vdagent.log" }
+ { "template" = "SpiceTmpl" } } }
+
+test Rsyslog.lns get ":msg, !contains, \"error\" /var/log/noterror.log\n" =
+ { "filter"
+ { "property" = "msg" }
+ { "operation" = "!contains" }
+ { "value" = "error" }
+ { "action"
+ { "file" = "/var/log/noterror.log" } } }
+
+test Rsyslog.lns get ":msg,!contains,\"garbage\" ~\n" =
+ { "filter"
+ { "property" = "msg" }
+ { "operation" = "!contains" }
+ { "value" = "garbage" }
+ { "action"
+ { "discard" } } }
+
+test Rsyslog.lns put "" after
+ set "/module[1]/load" "imuxsock"
+ = "module(load=\"imuxsock\")\n"
+
+test Rsyslog.lns put "" after
+ set "/module[1]/load" "imuxsock" ;
+ set "/module[1]/SysSock.RateLimit.Interval" "0"
+ = "module(load=\"imuxsock\" SysSock.RateLimit.Interval=\"0\")\n"
+
+test Rsyslog.lns put "" after
+ set "/module[1]/load" "imuxsock" ;
+ set "/module[1]/SysSock.RateLimit.Interval" "0" ;
+ set "/module[1]/SysSock.RateLimit.Burst" "1"
+ = "module(load=\"imuxsock\" SysSock.RateLimit.Interval=\"0\" SysSock.RateLimit.Burst=\"1\")\n"
+
+(* On Fedora 26, there are comments in module statements *)
+test Rsyslog.lns get "module(load=\"imuxsock\" # provides support for local system logging (e.g. via logger command)
+ SysSock.Use=\"off\") # Turn off message reception via local log socket;
+ # local messages are retrieved through imjournal now.\n" =
+ { "module"
+ { "load" = "imuxsock" }
+ { "SysSock.Use" = "off" }
+ { "#comment" = "Turn off message reception via local log socket;" } }
+ { "#comment" = "local messages are retrieved through imjournal now." }
+
+(* rsyslog doesn't use bsd-like #! or #+/- specifications *)
+test Rsyslog.lns get "#!prog\n" = { "#comment" = "!prog" }
+test Rsyslog.lns get "#+host\n" = { "#comment" = "+host" }
+test Rsyslog.lns get "#-host\n" = { "#comment" = "-host" }
+
+(* Added in rsyslog 8.33 *)
+test Rsyslog.lns get "include(file=\"/etc/rsyslog.d/*.conf\" mode=\"optional\")\n" =
+ { "include"
+ { "file" = "/etc/rsyslog.d/*.conf" }
+ { "mode" = "optional" } }
+
+(* Dynamic file name template *)
+test Rsyslog.lns get "*.* ?DynamicFile\n" =
+ { "entry"
+ { "selector"
+ { "facility" = "*" }
+ { "level" = "*" }
+ }
+ { "action"
+ { "dynamic" = "DynamicFile" }
+ }
+ }
+
+(* Multiple actions in filters and selectors *)
+test Rsyslog.lns get ":msg, startswith, \"iptables:\" -/var/log/iptables.log
+& ~
+# Save boot messages also to boot.log
+local7.* /var/log/boot.log
+local3.err /var/log/nfsen/nfsenlog
+& /var/log/also.log
+\n" =
+ { "filter"
+ { "property" = "msg" }
+ { "operation" = "startswith" }
+ { "value" = "iptables:" }
+ { "action"
+ { "no_sync" }
+ { "file" = "/var/log/iptables.log" } }
+ { "action"
+ { "discard" } }
+ }
+ { "#comment" = "Save boot messages also to boot.log" }
+ { "entry"
+ { "selector"
+ { "facility" = "local7" }
+ { "level" = "*" } }
+ { "action"
+ { "file" = "/var/log/boot.log" } }
+ }
+ { "entry"
+ { "selector"
+ { "facility" = "local3" }
+ { "level" = "err" } }
+ { "action"
+ { "file" = "/var/log/nfsen/nfsenlog" } }
+ { "action"
+ { "file" = "/var/log/also.log" } } }
+ { }
+
--- /dev/null
+module Test_rtadvd =
+
+(* Example from rtadvd.conf(5) *)
+let rtadvd_conf = "default:\\
+ :chlim#64:raflags#0:rltime#1800:rtime#0:retrans#0:\\
+ :pinfoflags=\"la\":vltime#2592000:pltime#604800:mtu#0:
+ef0:\\
+ :addr=\"2001:db8:ffff:1000::\":prefixlen#64:tc=default:
+"
+
+test Rtadvd.lns get rtadvd_conf =
+ { "record"
+ { "name" = "default" }
+ { "capability" = "chlim#64" }
+ { "capability" = "raflags#0" }
+ { "capability" = "rltime#1800" }
+ { "capability" = "rtime#0" }
+ { "capability" = "retrans#0" }
+ { "capability" = "pinfoflags=\"la\"" }
+ { "capability" = "vltime#2592000" }
+ { "capability" = "pltime#604800" }
+ { "capability" = "mtu#0" }
+ }
+ { "record"
+ { "name" = "ef0" }
+ { "capability" = "addr=\"2001:db8:ffff:1000::\"" }
+ { "capability" = "prefixlen#64" }
+ { "capability" = "tc=default" }
+ }
+
--- /dev/null
+module Test_rx =
+
+let sto_ipv4 = [ label "IP" . store Rx.ipv4 ]
+
+test sto_ipv4 get "192.168.0.1" = { "IP" = "192.168.0.1" }
+test sto_ipv4 get "255.255.255.254" = { "IP" = "255.255.255.254" }
+
+let sto_ipv6 = [ label "IP" . store Rx.ipv6 ]
+test sto_ipv6 get "fe80::215:f2ff:fea4:b8d9" = { "IP" = "fe80::215:f2ff:fea4:b8d9" }
+
+let sto_ip = [ label "IP" . store Rx.ip ]
+
+test sto_ip get "192.168.0.1" = { "IP" = "192.168.0.1" }
+test sto_ip get "255.255.255.254" = { "IP" = "255.255.255.254" }
+test sto_ip get "fe80::215:f2ff:fea4:b8d9" = { "IP" = "fe80::215:f2ff:fea4:b8d9" }
+
+(* iso_8601 *)
+let iso_8601 = [ label "date" . store Rx.iso_8601 ]
+
+test iso_8601 get "2009-12T12:34" = { "date" = "2009-12T12:34" }
+test iso_8601 get "2009" = { "date" = "2009" }
+test iso_8601 get "2009-05-19" = { "date" = "2009-05-19" }
+test iso_8601 get "2009-05-19" = { "date" = "2009-05-19" }
+test iso_8601 get "20090519" = { "date" = "20090519" }
+test iso_8601 get "2009123" = { "date" = "2009123" }
+test iso_8601 get "2009-05" = { "date" = "2009-05" }
+test iso_8601 get "2009-123" = { "date" = "2009-123" }
+test iso_8601 get "2009-222" = { "date" = "2009-222" }
+test iso_8601 get "2009-001" = { "date" = "2009-001" }
+test iso_8601 get "2009-W01-1" = { "date" = "2009-W01-1" }
+test iso_8601 get "2009-W51-1" = { "date" = "2009-W51-1" }
+test iso_8601 get "2009-W511" = { "date" = "2009-W511" }
+test iso_8601 get "2009-W33" = { "date" = "2009-W33" }
+test iso_8601 get "2009W511" = { "date" = "2009W511" }
+test iso_8601 get "2009-05-19" = { "date" = "2009-05-19" }
+test iso_8601 get "2009-05-19 00:00" = { "date" = "2009-05-19 00:00" }
+test iso_8601 get "2009-05-19 14" = { "date" = "2009-05-19 14" }
+test iso_8601 get "2009-05-19 14:31" = { "date" = "2009-05-19 14:31" }
+test iso_8601 get "2009-05-19 14:39:22" = { "date" = "2009-05-19 14:39:22" }
+test iso_8601 get "2009-05-19T14:39Z" = { "date" = "2009-05-19T14:39Z" }
+test iso_8601 get "2009-W21-2" = { "date" = "2009-W21-2" }
+test iso_8601 get "2009-W21-2T01:22" = { "date" = "2009-W21-2T01:22" }
+test iso_8601 get "2009-139" = { "date" = "2009-139" }
+test iso_8601 get "2009-05-19 14:39:22-06:00" = { "date" = "2009-05-19 14:39:22-06:00" }
+test iso_8601 get "2009-05-19 14:39:22+0600" = { "date" = "2009-05-19 14:39:22+0600" }
+test iso_8601 get "2009-05-19 14:39:22-01" = { "date" = "2009-05-19 14:39:22-01" }
+test iso_8601 get "20090621T0545Z" = { "date" = "20090621T0545Z" }
+test iso_8601 get "2007-04-06T00:00" = { "date" = "2007-04-06T00:00" }
+test iso_8601 get "2007-04-05T24:00" = { "date" = "2007-04-05T24:00" }
+test iso_8601 get "2010-02-18T16:23:48.5" = { "date" = "2010-02-18T16:23:48.5" }
+test iso_8601 get "2010-02-18T16:23:48,444" = { "date" = "2010-02-18T16:23:48,444" }
+test iso_8601 get "2010-02-18T16:23:48,3-06:00" = { "date" = "2010-02-18T16:23:48,3-06:00" }
+test iso_8601 get "2010-02-18T16:23.4" = { "date" = "2010-02-18T16:23.4" }
+test iso_8601 get "2010-02-18T16:23,25" = { "date" = "2010-02-18T16:23,25" }
+test iso_8601 get "2010-02-18T16:23.33+0600" = { "date" = "2010-02-18T16:23.33+0600" }
+test iso_8601 get "2010-02-18T16.23334444" = { "date" = "2010-02-18T16.23334444" }
+test iso_8601 get "2010-02-18T16,2283" = { "date" = "2010-02-18T16,2283" }
+test iso_8601 get "2009-05-19 143922.500" = { "date" = "2009-05-19 143922.500" }
+test iso_8601 get "2009-05-19 1439,55" = { "date" = "2009-05-19 1439,55" }
+
+(* url_3986 *)
+let url_3986 = [ label "url" . store Rx.url_3986 ]
+
+test url_3986 get "http://tools.ietf.org/rfc/rfc3986.txt" = { "url" = "http://tools.ietf.org/rfc/rfc3986.txt" }
+test url_3986 get "https://github.com/hercules-team/augeas/" = { "url" = "https://github.com/hercules-team/augeas/" }
+test url_3986 get "http://www.ics.uci.edu:80/pub/ietf/uri/#Related" = { "url" = "http://www.ics.uci.edu:80/pub/ietf/uri/#Related" }
+test url_3986 get "EXAMPLE://a/./b/../b/%63/%7bfoo%7d" = { "url" = "EXAMPLE://a/./b/../b/%63/%7bfoo%7d" }
+test url_3986 get "http://a/b/c/g;?x=1/y#z" = { "url" = "http://a/b/c/g;?x=1/y#z" }
+test url_3986 get "eXaMpLe://a.very.sub.domain.tld:1234/b/c/e/f/g.txt;?x=1/y&q=%7b-w-%7b#z" = { "url" = "eXaMpLe://a.very.sub.domain.tld:1234/b/c/e/f/g.txt;?x=1/y&q=%7b-w-%7b#z" }
+
--- /dev/null
+module Test_samba =
+
+ let conf = "#
+# Sample configuration file for the Samba suite for Debian GNU/Linux.
+#
+#
+# This is the main Samba configuration file. You should read the
+# smb.conf(5) manual page in order to understand the options listed
+# here. Samba has a huge number of configurable options most of which
+# are not shown in this example
+#
+
+#======================= Global Settings =======================
+
+[global]
+
+## Browsing/Identification ###
+
+# Change this to the workgroup/NT-domain name your Samba server will part of
+ workgroup = WORKGROUP
+
+# server string is the equivalent of the NT Description field
+ server string = %h server (Samba, Ubuntu)
+
+# Windows Internet Name Serving Support Section:
+# WINS Support - Tells the NMBD component of Samba to enable its WINS Server
+; wins support = no
+
+# Windows clients look for this share name as a source of downloadable
+# printer drivers
+[print$]
+ comment = All Printers
+ browseable = no
+ path = /tmp
+ printable = yes
+ public = yes
+ writable = no
+ create mode = 0700
+ printcap name = /etc/printcap
+ print command = /usr/bin/lpr -P%p -r %s
+ printing = cups
+"
+
+ test Samba.lns get conf =
+ {}
+ { "#comment" = "Sample configuration file for the Samba suite for Debian GNU/Linux." }
+ {}
+ {}
+ { "#comment" = "This is the main Samba configuration file. You should read the" }
+ { "#comment" = "smb.conf(5) manual page in order to understand the options listed" }
+ { "#comment" = "here. Samba has a huge number of configurable options most of which" }
+ { "#comment" = "are not shown in this example" }
+ {}
+ {}
+ { "#comment" = "======================= Global Settings =======================" }
+ {}
+ { "target" = "global"
+ {}
+ { "#comment" = "# Browsing/Identification ###" }
+ {}
+ { "#comment" = "Change this to the workgroup/NT-domain name your Samba server will part of" }
+ { "workgroup" = "WORKGROUP" }
+ {}
+ { "#comment" = "server string is the equivalent of the NT Description field" }
+ { "server string" = "%h server (Samba, Ubuntu)" }
+ {}
+ { "#comment" = "Windows Internet Name Serving Support Section:" }
+ { "#comment" = "WINS Support - Tells the NMBD component of Samba to enable its WINS Server" }
+ { "#comment" = "wins support = no" }
+ {}
+ { "#comment" = "Windows clients look for this share name as a source of downloadable" }
+ { "#comment" = "printer drivers" } }
+ { "target" = "print$"
+ { "comment" = "All Printers" }
+ { "browseable" = "no" }
+ { "path" = "/tmp" }
+ { "printable" = "yes" }
+ { "public" = "yes" }
+ { "writable" = "no" }
+ { "create mode" = "0700" }
+ { "printcap name" = "/etc/printcap" }
+ { "print command" = "/usr/bin/lpr -P%p -r %s" }
+ { "printing" = "cups" } }
+
+ test Samba.lns get "[test]\ncrazy:entry = foo\n" =
+ { "target" = "test"
+ {"crazy:entry" = "foo"}}
+
+ (* Test complex idmap commands with asterisk in key name, ticket #354 *)
+ test Samba.lns get "[test]
+ idmap backend = tdb
+ idmap uid = 1000000-1999999
+ idmap gid = 1000000-1999999
+
+ idmap config CORP : backend = ad
+ idmap config * : range = 1000-999999\n" =
+ { "target" = "test"
+ { "idmap backend" = "tdb" }
+ { "idmap uid" = "1000000-1999999" }
+ { "idmap gid" = "1000000-1999999" }
+ { }
+ { "idmap config CORP : backend" = "ad" }
+ { "idmap config * : range" = "1000-999999" } }
--- /dev/null
+module Test_schroot =
+
+ let conf = "# Sample configuration
+
+[sid]
+type=plain
+description=Debian unstable
+description[fr_FR]=Debian instable
+location=/srv/chroot/sid
+priority=3
+groups=sbuild
+root-groups=root
+aliases=unstable,default
+
+[etch]
+type=block-device
+description=Debian testing
+priority=2
+#groups=sbuild-security
+aliases=testing
+device=/dev/hda_vg/etch_chroot
+mount-options=-o atime
+run-setup-scripts=true
+run-exec-scripts=true
+
+[sid-file]
+type=file
+description=Debian sid file-based chroot
+priority=3
+groups=sbuild
+file=/srv/chroots/sid.tar.gz
+run-setup-scripts=true
+run-exec-scripts=true
+
+[sid-snapshot]
+type=lvm-snapshot
+description=Debian unstable LVM snapshot
+priority=3
+groups=sbuild
+root-groups=root
+device=/dev/hda_vg/sid_chroot
+mount-options=-o atime,sync,user_xattr
+lvm-snapshot-options=--size 2G
+run-setup-scripts=true
+run-exec-scripts=true
+"
+
+ test Schroot.lns get conf =
+ { "#comment" = "Sample configuration" }
+ { }
+ { "sid"
+ { "type" = "plain" }
+ { "description" = "Debian unstable" }
+ { "description" = "Debian instable"
+ { "lang" = "fr_FR" }
+ }
+ { "location" = "/srv/chroot/sid" }
+ { "priority" = "3" }
+ { "groups" = "sbuild" }
+ { "root-groups" = "root" }
+ { "aliases" = "unstable,default" }
+ { }
+ }
+ { "etch"
+ { "type" = "block-device" }
+ { "description" = "Debian testing" }
+ { "priority" = "2" }
+ { "#comment" = "groups=sbuild-security" }
+ { "aliases" = "testing" }
+ { "device" = "/dev/hda_vg/etch_chroot" }
+ { "mount-options" = "-o atime" }
+ { "run-setup-scripts" = "true" }
+ { "run-exec-scripts" = "true" }
+ { }
+ }
+ { "sid-file"
+ { "type" = "file" }
+ { "description" = "Debian sid file-based chroot" }
+ { "priority" = "3" }
+ { "groups" = "sbuild" }
+ { "file" = "/srv/chroots/sid.tar.gz" }
+ { "run-setup-scripts" = "true" }
+ { "run-exec-scripts" = "true" }
+ { }
+ }
+ { "sid-snapshot"
+ { "type" = "lvm-snapshot" }
+ { "description" = "Debian unstable LVM snapshot" }
+ { "priority" = "3" }
+ { "groups" = "sbuild" }
+ { "root-groups" = "root" }
+ { "device" = "/dev/hda_vg/sid_chroot" }
+ { "mount-options" = "-o atime,sync,user_xattr" }
+ { "lvm-snapshot-options" = "--size 2G" }
+ { "run-setup-scripts" = "true" }
+ { "run-exec-scripts" = "true" }
+ }
--- /dev/null
+module Test_securetty =
+
+ (* basic test *)
+ let basic = "tty0\ntty1\ntty2\n"
+
+ (* declare the lens to test and the resulting tree *)
+ test Securetty.lns get basic =
+ { "1" = "tty0" }
+ { "2" = "tty1" }
+ { "3" = "tty2" }
+
+ (* complete test *)
+ let complete = "# some comment
+
+tty0
+# X11 display
+:0.0
+
+console # allow root from console
+"
+
+ (* declare the lens to test and the resulting tree *)
+ test Securetty.lns get complete =
+ { "#comment" = "some comment" }
+ {}
+ { "1" = "tty0" }
+ { "#comment" = "X11 display" }
+ { "2" = ":0.0" }
+ {}
+ { "3" = "console"
+ { "#comment" = "allow root from console" } }
--- /dev/null
+(*
+Module: Test_Semanage
+ Provides unit tests and examples for the <Semanage> lens.
+*)
+
+module Test_Semanage =
+
+(* Variable: phony_conf *)
+let phony_conf = "# this is a comment
+
+mykey = myvalue # eol comment
+anotherkey = another value
+"
+
+(* Test: Semanage.lns *)
+test Semanage.lns get phony_conf =
+ { "#comment" = "this is a comment" }
+ { }
+ { "mykey" = "myvalue"
+ { "#comment" = "eol comment" } }
+ { "anotherkey" = "another value" }
+
+(* Test: Semanage.lns
+ Quotes are OK in variables that do not begin with a quote *)
+test Semanage.lns get "UserParameter=custom.vfs.dev.read.ops[*],cat /proc/diskstats | grep $1 | head -1 | awk '{print $$4}'\n" =
+ { "UserParameter" = "custom.vfs.dev.read.ops[*],cat /proc/diskstats | grep $1 | head -1 | awk '{print $$4}'" }
+
+(* Test: Semanage.lns
+ Support empty values *)
+test Semanage.lns get "foo =\n" =
+ { "foo" }
+
+(* Variable: conf *)
+let conf = "module-store = direct
+module-store = \"source\"
+
+#policy-version = 19
+
+expand-check=0
+
+usepasswd=False
+bzip-small=true
+bzip-blocksize=5
+ignoredirs=/root
+
+[sefcontext_compile]
+path = /usr/sbin/sefcontext_compile
+args = -r $@
+
+[end]
+
+config=test
+
+[verify module]
+test=value
+[end]
+"
+
+(* Test: Semanage.lns *)
+test Semanage.lns get conf =
+ { "module-store" = "direct" }
+ { "module-store" = "source" }
+ { }
+ { "#comment" = "policy-version = 19" }
+ { }
+ { "expand-check" = "0" }
+ { }
+ { "usepasswd" = "False" }
+ { "bzip-small" = "true" }
+ { "bzip-blocksize" = "5" }
+ { "ignoredirs"
+ { "1" = "/root" }
+ }
+ { }
+ { "@group" = "sefcontext_compile"
+ { "path" = "/usr/sbin/sefcontext_compile" }
+ { "args" = "-r $@" }
+ { } }
+ { }
+ { "config" = "test" }
+ { }
+ { "@group" = "verify module"
+ { "test" = "value" } }
--- /dev/null
+(* Tests for the Services module *)
+
+module Test_services =
+
+ let example = "# a comment
+
+tcpmux 1/tcp # TCP port service multiplexer
+echo 7/udp
+discard 9/tcp sink null
+systat 11/tcp users
+# another comment
+whois++ 63/tcp
+z39.50 210/tcp z3950 wais # NISO Z39.50 database \n"
+
+ test Services.lns get example =
+ { "#comment" = "a comment" }
+ { }
+ { "service-name" = "tcpmux"
+ { "port" = "1" }
+ { "protocol" = "tcp" }
+ { "#comment" = "TCP port service multiplexer" } }
+ { "service-name" = "echo"
+ { "port" = "7" }
+ { "protocol" = "udp" } }
+ { "service-name" = "discard"
+ { "port" = "9" }
+ { "protocol" = "tcp" }
+ { "alias" = "sink" }
+ { "alias" = "null" } }
+ { "service-name" = "systat"
+ { "port" = "11" }
+ { "protocol" = "tcp" }
+ { "alias" = "users" } }
+ { "#comment" = "another comment" }
+ { "service-name" = "whois++"
+ { "port" = "63" }
+ { "protocol" = "tcp" } }
+ { "service-name" = "z39.50"
+ { "port" = "210" }
+ { "protocol" = "tcp" }
+ { "alias" = "z3950" }
+ { "alias" = "wais" }
+ { "#comment" = "NISO Z39.50 database" } }
+
+ (* We completely suppress empty comments *)
+ test Services.record get "mtp\t\t1911/tcp\t\t\t#\n" =
+ { "service-name" = "mtp"
+ { "port" = "1911" }
+ { "protocol" = "tcp" } }
+
+ (* And comments containing only a single space *)
+ test Services.lns get "mtp\t\t\t1911/tcp\t\t\t# \nfoo 123/tcp\n" =
+ { "service-name" = "mtp"
+ { "port" = "1911" }
+ { "protocol" = "tcp" } }
+ { "service-name" = "foo"
+ { "port" = "123" }
+ { "protocol" = "tcp" } }
+
+ test Services.lns get "sql*net\t\t66/tcp\t\t\t# Oracle SQL*NET\n" =
+ { "service-name" = "sql*net"
+ { "port" = "66" }
+ { "protocol" = "tcp" }
+ { "#comment" = "Oracle SQL*NET" } }
+
+ (* Fake service to check that we allow enough special characters *)
+ test Services.lns get "special.*+-/chars\t0/proto\n" =
+ { "service-name" = "special.*+-/chars"
+ { "port" = "0" }
+ { "protocol" = "proto" } }
+
+ test Services.lns put "tcpmux 1/tcp # some comment\n"
+ after rm "/service-name/#comment" = "tcpmux 1/tcp\n"
+
+ (* On AIX, port ranges are valid *)
+ test Services.lns get "x11 6000-6063/tcp # X Window System\n" =
+ { "service-name" = "x11"
+ { "start" = "6000" }
+ { "end" = "6063" }
+ { "protocol" = "tcp" }
+ { "#comment" = "X Window System" } }
+
+ (* Colons permitted in service names, RHBZ#1121263 *)
+ test Services.lns get "SWRPC.ACCESS.BSS:BS_rmq 48102/tcp # SWIFTAlliance_SWRPC ACCESS\n" =
+ { "service-name" = "SWRPC.ACCESS.BSS:BS_rmq"
+ { "port" = "48102" }
+ { "protocol" = "tcp" }
+ { "#comment" = "SWIFTAlliance_SWRPC ACCESS" } }
--- /dev/null
+module Test_Shadow =
+
+let conf = "root:x:0:0:999999:7:::
+libuuid:*:0:0:0::::
+expired:$6$INVALID:0:0:0:::100:
+locked:!$6$INVALID:0:0:0::::
+"
+
+test Shadow.lns get conf =
+ { "root"
+ { "password" = "x" }
+ { "lastchange_date" = "0" }
+ { "minage_days" = "0" }
+ { "maxage_days" = "999999" }
+ { "warn_days" = "7" }
+ { "inactive_days" = "" }
+ { "expire_date" = "" }
+ { "flag" = "" } }
+ { "libuuid"
+ { "password" = "*" }
+ { "lastchange_date" = "0" }
+ { "minage_days" = "0" }
+ { "maxage_days" = "0" }
+ { "warn_days" = "" }
+ { "inactive_days" = "" }
+ { "expire_date" = "" }
+ { "flag" = "" } }
+ { "expired"
+ { "password" = "$6$INVALID" }
+ { "lastchange_date" = "0" }
+ { "minage_days" = "0" }
+ { "maxage_days" = "0" }
+ { "warn_days" = "" }
+ { "inactive_days" = "" }
+ { "expire_date" = "100" }
+ { "flag" = "" } }
+ { "locked"
+ { "password" = "!$6$INVALID" }
+ { "lastchange_date" = "0" }
+ { "minage_days" = "0" }
+ { "maxage_days" = "0" }
+ { "warn_days" = "" }
+ { "inactive_days" = "" }
+ { "expire_date" = "" }
+ { "flag" = "" } }
+
+test Shadow.lns get "+\n" =
+ { "@nisdefault" }
+
+test Shadow.lns get "+::::::::\n" =
+ { "@nisdefault"
+ { "password" = "" }
+ { "lastchange_date" = "" }
+ { "minage_days" = "" }
+ { "maxage_days" = "" }
+ { "warn_days" = "" }
+ { "inactive_days" = "" }
+ { "expire_date" = "" }
+ { "flag" = "" } }
+
+test Shadow.lns put "+\n" after
+ set "@nisdefault/password" "";
+ set "@nisdefault/lastchange_date" "";
+ set "@nisdefault/minage_days" "";
+ set "@nisdefault/maxage_days" "";
+ set "@nisdefault/warn_days" "";
+ set "@nisdefault/inactive_days" "";
+ set "@nisdefault/expire_date" "";
+ set "@nisdefault/flag" ""
+= "+::::::::\n"
+
+test Shadow.lns put "+::::::::\n" after
+ rm "@nisdefault/password";
+ rm "@nisdefault/lastchange_date";
+ rm "@nisdefault/minage_days";
+ rm "@nisdefault/maxage_days";
+ rm "@nisdefault/warn_days";
+ rm "@nisdefault/inactive_days";
+ rm "@nisdefault/expire_date";
+ rm "@nisdefault/flag"
+= "+\n"
--- /dev/null
+(* Test for shells lens *)
+
+module Test_shells =
+
+ let conf = "# this is a comment
+
+/bin/bash
+/bin/tcsh
+/opt/csw/bin/bash # CSWbash
+"
+
+ test Shells.lns get conf =
+ { "#comment" = "this is a comment" }
+ {}
+ { "1" = "/bin/bash" }
+ { "2" = "/bin/tcsh" }
+ { "3" = "/opt/csw/bin/bash"
+ { "#comment" = "CSWbash" } }
--- /dev/null
+(* Test for shell lens *)
+module Test_shellvars =
+
+ let lns = Shellvars.lns
+
+ let eth_static = "# Intel Corporation PRO/100 VE Network Connection
+DEVICE=eth0
+BOOTPROTO=static
+BROADCAST=172.31.0.255
+HWADDR=ab:cd:ef:12:34:56
+export IPADDR=172.31.0.31 # this is our IP
+#DHCP_HOSTNAME=host.example.com
+NETMASK=255.255.255.0
+NETWORK=172.31.0.0
+unset ONBOOT # We do not want this var
+"
+ let empty_val = "EMPTY=\nDEVICE=eth0\n"
+
+ let key_brack = "SOME_KEY[1]=\nDEVICE=eth0\n"
+
+ test lns get eth_static =
+ { "#comment" = "Intel Corporation PRO/100 VE Network Connection" }
+ { "DEVICE" = "eth0" }
+ { "BOOTPROTO" = "static" }
+ { "BROADCAST" = "172.31.0.255" }
+ { "HWADDR" = "ab:cd:ef:12:34:56" }
+ { "IPADDR" = "172.31.0.31"
+ { "export" }
+ { "#comment" = "this is our IP" } }
+ { "#comment" = "DHCP_HOSTNAME=host.example.com" }
+ { "NETMASK" = "255.255.255.0" }
+ { "NETWORK" = "172.31.0.0" }
+ { "@unset"
+ { "1" = "ONBOOT" }
+ { "#comment" = "We do not want this var" } }
+
+ test lns put eth_static after
+ set "BOOTPROTO" "dhcp" ;
+ rm "IPADDR" ;
+ rm "BROADCAST" ;
+ rm "NETMASK" ;
+ rm "NETWORK"
+ = "# Intel Corporation PRO/100 VE Network Connection
+DEVICE=eth0
+BOOTPROTO=dhcp
+HWADDR=ab:cd:ef:12:34:56
+#DHCP_HOSTNAME=host.example.com
+unset ONBOOT # We do not want this var
+"
+ test lns get empty_val =
+ { "EMPTY" = "" } { "DEVICE" = "eth0" }
+
+ test lns get key_brack =
+ { "SOME_KEY[1]" = "" } { "DEVICE" = "eth0" }
+
+ test lns get "smartd_opts=\"-q never\"\n" =
+ { "smartd_opts" = "\"-q never\"" }
+
+ test lns get "var=val \n" = { "var" = "val" }
+
+ test lns get ". /etc/java/java.conf\n" =
+ { ".source" = "/etc/java/java.conf" }
+
+ (* Quoted strings and other oddities *)
+ test lns get "var=\"foo 'bar'\"\n" =
+ { "var" = "\"foo 'bar'\"" }
+
+ test lns get "var='Some \"funny\" value'\n" =
+ { "var" = "'Some \"funny\" value'" }
+
+ test lns get "var=\"\\\"\"\n" =
+ { "var" = "\"\\\"\"" }
+
+ test lns get "var=\\\"\n" =
+ { "var" = "\\\"" }
+
+ test lns get "var=ab#c\n" =
+ { "var" = "ab#c" }
+
+ test lns get "var=ab #c\n" =
+ { "var" = "ab"
+ { "#comment" = "c" } }
+
+ test lns get "var=ab; #c\n" =
+ { "var" = "ab" }
+ { "#comment" = "c" }
+
+ test lns put "var=ab; #c\n" after
+ set "/#comment" "d" =
+ "var=ab; #d\n"
+
+ test lns get "var=ab;\n" =
+ { "var" = "ab" }
+
+ test lns get "var='ab#c'\n" =
+ { "var" = "'ab#c'" }
+
+ test lns get "var=\"ab#c\"\n" =
+ { "var" = "\"ab#c\"" }
+
+ test lns get "ESSID='Joe'\"'\"'s net'\n" =
+ { "ESSID" = "'Joe'\"'\"'s net'" }
+
+ test lns get "var=`ab#c`\n" =
+ { "var" = "`ab#c`" }
+
+ test lns get "var=`grep nameserver /etc/resolv.conf | head -1`\n" =
+ { "var" = "`grep nameserver /etc/resolv.conf | head -1`" }
+
+ test lns put "var=ab #c\n"
+ after rm "/var/#comment" = "var=ab\n"
+
+ test lns put "var=ab\n"
+ after set "/var/#comment" "this is a var" =
+ "var=ab # this is a var\n"
+
+ (* Handling of arrays *)
+ test lns get "var=(val1 \"val\\\"2\\\"\" val3)\n" =
+ { "var"
+ { "1" = "val1" }
+ { "2" = "\"val\\\"2\\\"\"" }
+ { "3" = "val3" } }
+
+ test lns get "var=()\n" = { "var" = "()" }
+
+ test lns put "var=()\n" after
+ set "var" "value"
+ = "var=value\n"
+
+ test lns put "var=(v1 v2)\n" after
+ rm "var/*" ;
+ set "var" "value"
+ = "var=value\n"
+
+ test lns put "var=(v1 v2)\n" after
+ set "var/3" "v3"
+ = "var=(v1 v2 v3)\n"
+
+ test lns get "var=(v1 v2 \n \t v3)\n" =
+ { "var"
+ { "1" = "v1" }
+ { "2" = "v2" }
+ { "3" = "v3" } }
+
+ (* Allow spaces after/before opening/closing parens for array *)
+ test lns get "config_eth1=( \"10.128.0.48/24\" )\n" =
+ { "config_eth1" { "1" = "\"10.128.0.48/24\"" } }
+
+ (* Bug 109: allow a bare export *)
+ test lns get "export FOO\n" =
+ { "@export"
+ { "1" = "FOO" } }
+
+ (* Bug 73: allow ulimit builtin *)
+ test lns get "ulimit -c unlimited\n" =
+ { "@builtin" = "ulimit" { "args" = "-c unlimited" } }
+
+ (* Allow shift builtin *)
+ test Shellvars.lns get "shift\nshift 2\n" =
+ { "@builtin" = "shift" }
+ { "@builtin" = "shift" { "args" = "2" } }
+
+ (* Allow exit builtin *)
+ test Shellvars.lns get "exit\nexit 2\n" =
+ { "@builtin" = "exit" }
+ { "@builtin" = "exit" { "args" = "2" } }
+
+ (* Allow wrapping builtin arguments to multiple lines *)
+ test Shellvars.lns get "ulimit -c \\\nunlimited\nulimit \\\n -x 123\n" =
+ { "@builtin" = "ulimit" { "args" = "-c \\\nunlimited" } }
+ { "@builtin" = "ulimit" { "args" = "-x 123" } }
+
+ (* Test semicolons *)
+ test lns get "VAR1=\"this;is;a;test\"\nVAR2=this;\n" =
+ { "VAR1" = "\"this;is;a;test\"" }
+ { "VAR2" = "this" }
+
+ (* Bug 230: parse conditions *)
+ test lns get "if [ -f /etc/default/keyboard ]; then\n. /etc/default/keyboard\nfi\n" =
+ { "@if" = "[ -f /etc/default/keyboard ]" { ".source" = "/etc/default/keyboard" } }
+
+ (* Recursive condition *)
+ test lns get "if [ -f /tmp/file1 ]; then
+ if [ -f /tmp/file2 ]
+ then
+ . /tmp/file2
+ elif [ -f /tmp/file3 ]; then
+ . /tmp/file3; else; . /tmp/file4
+ fi
+else
+ . /tmp/file3
+fi\n" =
+ { "@if" = "[ -f /tmp/file1 ]"
+ { "@if" = "[ -f /tmp/file2 ]"
+ { ".source" = "/tmp/file2" }
+ { "@elif" = "[ -f /tmp/file3 ]"
+ { ".source" = "/tmp/file3" } }
+ { "@else"
+ { ".source" = "/tmp/file4" }
+ }
+ }
+ { "@else"
+ { ".source" = "/tmp/file3" }
+ }
+ }
+
+ (* Multiple elif *)
+ test Shellvars.lns get "if [ -f /tmp/file1 ]; then
+ . /tmp/file1
+ elif [ -f /tmp/file2 ]; then
+ . /tmp/file2
+ elif [ -f /tmp/file3 ]; then
+ . /tmp/file3
+ fi\n" =
+ { "@if" = "[ -f /tmp/file1 ]"
+ { ".source" = "/tmp/file1" }
+ { "@elif" = "[ -f /tmp/file2 ]"
+ { ".source" = "/tmp/file2" }
+ }
+ { "@elif" = "[ -f /tmp/file3 ]"
+ { ".source" = "/tmp/file3" }
+ }
+ }
+
+
+ (* Comment or eol *)
+ test lns get "VAR=value # eol-comment\n" =
+ { "VAR" = "value"
+ { "#comment" = "eol-comment" }
+ }
+
+ (* One-liners *)
+ test lns get "if [ -f /tmp/file1 ]; then . /tmp/file1; else . /tmp/file2; fi\n" =
+ { "@if" = "[ -f /tmp/file1 ]"
+ { ".source" = "/tmp/file1" }
+ { "@else"
+ { ".source" = "/tmp/file2" }
+ }
+ }
+
+ (* Loops *)
+ test lns get "for f in /tmp/file*; do
+ while [ 1 ]; do . $f; done
+done\n" =
+ { "@for" = "f in /tmp/file*"
+ { "@while" = "[ 1 ]"
+ { ".source" = "$f" }
+ }
+ }
+
+ (* Case *)
+ test lns get "case $f in
+ /tmp/file1)
+ . /tmp/file1
+ ;;
+ /tmp/file2)
+ . /tmp/file2
+ ;;
+ *)
+ unset f
+ ;;
+esac\n" =
+ { "@case" = "$f"
+ { "@case_entry"
+ { "@pattern" = "/tmp/file1" }
+ { ".source" = "/tmp/file1" } }
+ { "@case_entry"
+ { "@pattern" = "/tmp/file2" }
+ { ".source" = "/tmp/file2" } }
+ { "@case_entry"
+ { "@pattern" = "*" }
+ { "@unset"
+ { "1" = "f" } } } }
+
+ (* Select *)
+ test lns get "select i in a b c; do . /tmp/file$i
+ done\n" =
+ { "@select" = "i in a b c"
+ { ".source" = "/tmp/file$i" }
+ }
+
+ (* Return *)
+ test lns get "return\nreturn 2\n" =
+ { "@return" }
+ { "@return" = "2" }
+
+ (* Functions *)
+ test Shellvars.lns get "foo() {
+ . /tmp/bar
+ }\n" =
+ { "@function" = "foo"
+ { ".source" = "/tmp/bar" }
+ }
+
+ test Shellvars.lns get "function foo () {
+ . /tmp/bar
+ }\n" =
+ { "@function" = "foo"
+ { ".source" = "/tmp/bar" }
+ }
+
+ test Shellvars.lns get "foo() (
+ . /tmp/bar
+ )\n" =
+ { "@function" = "foo"
+ { ".source" = "/tmp/bar" }
+ }
+
+ (* Dollar assignment *)
+ test Shellvars.lns get "FOO=$(bar arg)\n" =
+ { "FOO" = "$(bar arg)" }
+
+ (* Empty lines before esac *)
+ test Shellvars.lns get "case $f in
+ a)
+ B=C
+ ;;
+
+ esac\n" =
+ { "@case" = "$f"
+ { "@case_entry"
+ { "@pattern" = "a" }
+ { "B" = "C" } }
+ }
+
+
+ (* Empty lines before a case_entry *)
+ test Shellvars.lns get "case $f in
+
+ a)
+ B=C
+ ;;
+
+ b)
+ A=D
+ ;;
+ esac\n" =
+ { "@case" = "$f"
+ { "@case_entry"
+ { "@pattern" = "a" }
+ { "B" = "C" } }
+ { "@case_entry"
+ { "@pattern" = "b" }
+ { "A" = "D" } } }
+
+
+ (* Comments anywhere *)
+ test Shellvars.lns get "case ${INTERFACE} in
+# comment before
+eth0)
+# comment in
+OPTIONS=()
+;;
+
+# comment before 2
+*)
+# comment in 2
+unset f
+;;
+# comment after
+esac\n" =
+ { "@case" = "${INTERFACE}"
+ { "#comment" = "comment before" }
+ { "@case_entry"
+ { "@pattern" = "eth0" }
+ { "#comment" = "comment in" }
+ { "OPTIONS" = "()" } }
+ { "#comment" = "comment before 2" }
+ { "@case_entry"
+ { "@pattern" = "*" }
+ { "#comment" = "comment in 2" }
+ { "@unset"
+ { "1" = "f" } } }
+ { "#comment" = "comment after" } }
+
+ (* Empty case *)
+ test Shellvars.lns get "case $a in
+ *)
+ ;;
+ esac\n" =
+ { "@case" = "$a"
+ { "@case_entry" { "@pattern" = "*" } } }
+
+ (* case variables can be surrounded by double quotes *)
+ test Shellvars.lns get "case \"${options}\" in
+*debug*)
+ shift
+ ;;
+esac\n" =
+ { "@case" = "\"${options}\""
+ { "@case_entry"
+ { "@pattern" = "*debug*" }
+ { "@builtin" = "shift" } } }
+
+ (* Double quoted values can have newlines *)
+ test Shellvars.lns get "FOO=\"123\n456\"\n" =
+ { "FOO" = "\"123\n456\"" }
+
+ (* Single quoted values can have newlines *)
+ test Shellvars.lns get "FOO='123\n456'\n" =
+ { "FOO" = "'123\n456'" }
+
+ (* bquoted values can have semi-colons *)
+ test Shellvars.lns get "FOO=`bar=date;$bar`\n" =
+ { "FOO" = "`bar=date;$bar`" }
+
+ (* dollar-assigned values can have semi-colons *)
+ test Shellvars.lns get "FOO=$(bar=date;$bar)\n" =
+ { "FOO" = "$(bar=date;$bar)" }
+
+ (* dollar-assigned value in bquot *)
+ test Shellvars.lns get "FOO=`echo $(date)`\n" =
+ { "FOO" = "`echo $(date)`" }
+
+ (* bquot value in dollar-assigned value *)
+ test Shellvars.lns get "FOO=$(echo `date`)\n" =
+ { "FOO" = "$(echo `date`)" }
+
+ (* dbquot *)
+ test Shellvars.lns get "FOO=``bar``\n" =
+ { "FOO" = "``bar``" }
+
+ (* Partial quoting is allowed *)
+ test Shellvars.lns get "FOO=\"$bar\"/'baz'/$(quux)$((1 + 2))\n" =
+ { "FOO" = "\"$bar\"/'baz'/$(quux)$((1 + 2))" }
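
The mixed-quoting test above concatenates four quoting styles into one word. As a sanity check on the shell semantics being parsed, here is a runnable sh sketch of the same construct; `bar`, `quux`, and their values are made up for illustration:

```shell
#!/bin/sh
# Hypothetical stand-ins for the test's $bar and $(quux):
bar=hello
quux() { echo world; }
# The shell joins the differently quoted pieces into a single word.
FOO="$bar"/'baz'/$(quux)$((1 + 2))
echo "$FOO"    # prints hello/baz/world3
```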
+
+ (* unset can be used on wildcard variables *)
+ test Shellvars.lns get "unset ${!LC_*}\n" =
+ { "@unset"
+ { "1" = "${!LC_*}" } }
+
+ (* Empty comment before entries *)
+ test Shellvars.lns get "# \nfoo=bar\n" =
+ { "foo" = "bar" }
+
+ (* Empty comment after entries *)
+ test Shellvars.lns get "foo=bar\n# \n\n" =
+ { "foo" = "bar" }
+
+ (* Whitespace between lines *)
+ test Shellvars.lns get "DEVICE=eth0\n\nBOOTPROTO=static\n" =
+ { "DEVICE" = "eth0" }
+ { "BOOTPROTO" = "static" }
+
+ (* Whitespace after line *)
+ test Shellvars.lns get "DEVICE=eth0\n\n" =
+ { "DEVICE" = "eth0" }
+
+ (* Add a variable assignment between a comment and a blank line *)
+ let ins_after_comment = "# foo
+
+"
+ test lns put ins_after_comment after
+ insa "foo" "#comment" ;
+ set "foo" "yes"
+ = "# foo\n\nfoo=yes\n"
+
+ (* Make sure to support empty comments *)
+ test lns get "# foo
+ #
+ #
+ foo=bar
+ #\n" =
+ { "#comment" = "foo" }
+ { "foo" = "bar" }
+
+ (* Single quotes in arrays, ticket #357 *)
+ test lns get "DLAGENTS=('ftp::/usr/bin/curl -fC - --ftp-pasv --retry 3 --retry-delay 3 -o %o %u'
+ 'scp::/usr/bin/scp -C %u %o')\n" =
+ { "DLAGENTS"
+ { "1" = "'ftp::/usr/bin/curl -fC - --ftp-pasv --retry 3 --retry-delay 3 -o %o %u'" }
+ { "2" = "'scp::/usr/bin/scp -C %u %o'" } }
+
+ (* Accept continued lines in quoted values *)
+ test lns get "BLAH=\" \
+test \
+test2\"\n" =
+ { "BLAH" = "\" \\\ntest \\\ntest2\"" }
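
In the double-quoted value above, each backslash-newline pair is a line continuation: the shell deletes both characters, so the stored value becomes a single line. A minimal sh check:

```shell
#!/bin/sh
# Backslash-newline inside double quotes is removed, joining the lines.
BLAH=" \
test \
test2"
echo "$BLAH"    # prints " test test2"
```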
+
+ (* Export of multiple variables, RHBZ#1033795 *)
+ test lns get "export TestVar1 TestVar2\n" =
+ { "@export"
+ { "1" = "TestVar1" }
+ { "2" = "TestVar2" } }
+
+ (* Support ;; on same line as a case statement entry, RHBZ#1033799 *)
+ test lns get "case $ARG in
+ 0) TestVar=\"test0\" ;;
+ 1) TestVar=\"test1\" ;;
+esac\n" =
+ { "@case" = "$ARG"
+ { "@case_entry"
+ { "@pattern" = "0" }
+ { "TestVar" = "\"test0\"" } }
+ { "@case_entry"
+ { "@pattern" = "1" }
+ { "TestVar" = "\"test1\"" } } }
+
+ (* case: support ;; on the same line with multiple commands *)
+ test lns get "case $ARG in
+ 0) Foo=0; Bar=1;;
+ 1)
+ Foo=2
+ Bar=3; Baz=4;;
+esac\n" =
+ { "@case" = "$ARG"
+ { "@case_entry"
+ { "@pattern" = "0" }
+ { "Foo" = "0" }
+ { "Bar" = "1" }
+ }
+ { "@case_entry"
+ { "@pattern" = "1" }
+ { "Foo" = "2" }
+ { "Bar" = "3" }
+ { "Baz" = "4" }
+ }
+ }
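
The same `;;`-on-one-line form runs as expected in plain sh; this sketch fixes `ARG` to `0` for a deterministic result:

```shell
#!/bin/sh
ARG=0
case $ARG in
  0) Foo=0; Bar=1;;           # several commands, ;; closing the same line
  1) Foo=2; Bar=3; Baz=4;;
esac
echo "$Foo $Bar"    # prints 0 1
```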
+
+(* Test: Shellvars.lns
+ Support `##` bashism in conditions (GH issue #118) *)
+test Shellvars.lns get "if [ \"${APACHE_CONFDIR##/etc/apache2-}\" != \"${APACHE_CONFDIR}\" ] ; then
+ SUFFIX=\"-${APACHE_CONFDIR##/etc/apache2-}\"
+else
+ SUFFIX=
+fi\n" =
+ { "@if" = "[ \"${APACHE_CONFDIR##/etc/apache2-}\" != \"${APACHE_CONFDIR}\" ]"
+ { "SUFFIX" = "\"-${APACHE_CONFDIR##/etc/apache2-}\"" }
+ { "@else"
+ { "SUFFIX" = "" }
+ }
+ }
+
+ (* Support $(( .. )) arithmetic expansion in variable assignment, RHBZ#1100550 *)
+ test lns get "export MALLOC_PERTURB_=$(($RANDOM % 255 + 1))\n" =
+ { "MALLOC_PERTURB_" = "$(($RANDOM % 255 + 1))"
+ { "export" } }
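
The `$(( ... ))` form is arithmetic expansion, evaluated by the shell before the assignment happens. A sketch with a fixed operand substituted for `$RANDOM` (a bashism) so the result is deterministic:

```shell
#!/bin/sh
r=200    # stand-in for $RANDOM
MALLOC_PERTURB_=$(($r % 255 + 1))
echo "$MALLOC_PERTURB_"    # prints 201
```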
+
+ (*
+ * Github issue 202
+ *)
+ let starts_with_blank = "\n \nVAR=value\n"
+
+ test lns get starts_with_blank = { "VAR" = "value" }
+
+ (* It is now possible to insert at the beginning of a file
+ * that starts with blank lines *)
+ test lns put starts_with_blank after
+ insb "#comment" "/*[1]";
+ set "/#comment[1]" "a comment" =
+ " # a comment\nVAR=value\n"
+
+ (* Modifications of the file lose the blank lines though *)
+ test lns put starts_with_blank after
+ set "/VAR2" "abc" = "VAR=value\nVAR2=abc\n"
+
+ test lns put starts_with_blank after
+ rm "/VAR";
+ set "/VAR2" "abc" = "VAR2=abc\n"
+
+ test lns put starts_with_blank after
+ rm "/VAR" = ""
+
+ (* Support associative arrays *)
+ test lns get "var[alpha_beta,gamma]=something\n" =
+ { "var[alpha_beta,gamma]" = "something" }
+
+ (* GH #188: support more conditions *)
+ test Shellvars.lns get "[ -f $FILENAME ]\n" =
+ { "@condition" = "-f $FILENAME"
+ { "type" = "[" } }
+
+ test Shellvars.lns get "[[ -f $FILENAME ]]\n" =
+ { "@condition" = "-f $FILENAME"
+ { "type" = "[[" } }
+
+ (* Allow wrapping loop condition to multiple lines *)
+ test Shellvars.lns get "for x in foo \\\nbar\\\nbaz; do y=$x; done\n" =
+ { "@for" = "x in foo \\\nbar\\\nbaz" { "y" = "$x" } }
+
+ (* Allow quotes in loop conditions *)
+ test Shellvars.lns get "for x in \"$@\"; do y=$x; done\n" =
+ { "@for" = "x in \"$@\"" { "y" = "$x" } }
+
+ (* case: support quotes and spaces in pattern lists *)
+ test lns get "case $ARG in
+ \"foo bar\")
+ Foo=0
+ ;;
+ baz | quux)
+ Foo=1
+ ;;
+esac\n" =
+ { "@case" = "$ARG"
+ { "@case_entry"
+ { "@pattern" = "\"foo bar\"" }
+ { "Foo" = "0" }
+ }
+ { "@case_entry"
+ { "@pattern" = "baz" }
+ { "@pattern" = "quux" }
+ { "Foo" = "1" }
+ }
+ }
+
+ (* eval *)
+ test lns get "eval `dircolors`\n" =
+ { "@eval" = "`dircolors`" }
+
+ (* alias *)
+ test lns get "alias ls='ls $LS_OPTIONS'\n" =
+ { "@alias" = "ls" { "value" = "'ls $LS_OPTIONS'" } }
+
+ test lns get "alias ls-options='ls $LS_OPTIONS'\n" =
+ { "@alias" = "ls-options" { "value" = "'ls $LS_OPTIONS'" } }
+
+ (* Allow && and || constructs after condition *)
+ test Shellvars.lns get "[ -f $FILENAME ] && do this || or that\n" =
+ { "@condition" = "-f $FILENAME"
+ { "type" = "[" }
+ { "@and" = "do this" }
+ { "@or" = "or that" } }
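
The `@and`/`@or` nodes mirror sh short-circuit evaluation: in `cond && a || b`, `a` runs only when `cond` succeeds, and `b` runs when whatever ran last failed. A small sketch (the path is hypothetical and assumed absent):

```shell
#!/bin/sh
FILENAME=/nonexistent/hypothetical-file
[ -f "$FILENAME" ] && echo "present" || echo "absent"    # prints absent
```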
+
+(* Test: Shellvars.lns
+ Parse (almost) any command *)
+test Shellvars.lns get "echo foobar 'and this is baz'
+/usr/local/bin/myscript-with-dash_and_underscore.sh with args
+echo foo \
+bar\n" =
+ { "@command" = "echo"
+ { "@arg" = "foobar 'and this is baz'" }
+ }
+ { "@command" = "/usr/local/bin/myscript-with-dash_and_underscore.sh"
+ { "@arg" = "with args" }
+ }
+ { "@command" = "echo"
+ { "@arg" = "foo \\\nbar" }
+ }
+
+(* Test: Shellvars.lns
+ Support pipes in commands *)
+test Shellvars.lns get "echo \"$STRING\" | grep foo\n" =
+ { "@command" = "echo"
+ { "@arg" = "\"$STRING\"" }
+ { "@pipe"
+ { "@command" = "grep"
+ { "@arg" = "foo" } } } }
+
+(* Test: Shellvars.lns
+ Support && and || after command
+ GH #215 *)
+test Shellvars.lns get "grep -q \"Debian\" /etc/issue && echo moo\n" =
+ { "@command" = "grep"
+ { "@arg" = "-q \"Debian\" /etc/issue" }
+ { "@and"
+ { "@command" = "echo"
+ { "@arg" = "moo" } } } }
+
+test Shellvars.lns get "grep -q \"Debian\" /etc/issue || echo baa\n" =
+ { "@command" = "grep"
+ { "@arg" = "-q \"Debian\" /etc/issue" }
+ { "@or"
+ { "@command" = "echo"
+ { "@arg" = "baa" } } } }
+
+test Shellvars.lns get "grep -q \"Debian\" /etc/issue && DEBIAN=1\n" =
+ { "@command" = "grep"
+ { "@arg" = "-q \"Debian\" /etc/issue" }
+ { "@and"
+ { "DEBIAN" = "1" } } }
+
+test Shellvars.lns get "cat /etc/issue | grep -q \"Debian\" && echo moo || echo baa\n" =
+ { "@command" = "cat"
+ { "@arg" = "/etc/issue" }
+ { "@pipe"
+ { "@command" = "grep"
+ { "@arg" = "-q \"Debian\"" }
+ { "@and"
+ { "@command" = "echo"
+ { "@arg" = "moo" }
+ { "@or"
+ { "@command" = "echo"
+ { "@arg" = "baa" } } } } } } } }
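
The nesting of that last tree follows sh evaluation order: a pipeline binds tighter than `&&` and `||`, which then chain left to right. A runnable check of the same shape:

```shell
#!/bin/sh
# grep -q succeeds because the piped input contains "Debian",
# so && fires and the || branch is skipped.
echo "Debian" | grep -q "Debian" && echo moo || echo baa    # prints moo
```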
+
+(* Command-specific environment variables *)
+test Shellvars.lns get "abc=def \\\n ghi=\"jkl mno\" command arg1 arg2\n" =
+ { "@command" = "command"
+ { "abc" = "def" }
+ { "ghi" = "\"jkl mno\"" }
+ { "@arg" = "arg1 arg2" }
+ }
+
+(* Wrapped command sequences *)
+
+test Shellvars.lns get "foo && \\\nbar baz \\\n|| qux \\\n quux\\\ncorge grault\n" =
+ { "@command" = "foo"
+ { "@and"
+ { "@command" = "bar"
+ { "@arg" = "baz" }
+ { "@or" { "@command" = "qux" { "@arg" = "quux\\\ncorge grault" } } }
+ }
+ }
+ }
+
+(* Comment after function definition (Issue #339) *)
+test Shellvars.lns get "SetDir() # hello
+{
+ echo
+}\n" =
+ { "@function" = "SetDir"
+ { "#comment" = "hello" }
+ { "@command" = "echo" }
+ }
+
+(* Function with new lines *)
+test Shellvars.lns get "MyFunc()
+{
+ echo
+}\n" =
+ { "@function" = "MyFunc"
+ { "@command" = "echo" }
+ }
+
+(* Pipe and newline without cl (Issue #339) *)
+test Shellvars.lns get "echo |
+tr\n" =
+ { "@command" = "echo"
+ { "@pipe"
+ { "@command" = "tr" } } }
+
+
+(* Subshell (Issue #339) *)
+test Shellvars.lns get "{ echo
+}\n" =
+ { "@subshell"
+ { "@command" = "echo" }
+ }
+
+(* One-liner function *)
+test Shellvars.lns get "MyFunc() { echo; }\n" =
+ { "@function" = "MyFunc"
+ { "@command" = "echo" }
+ }
+
+(* Support and/or in if conditions *)
+test Shellvars.lns get "if [ -f /tmp/file1 ] && [ -f /tmp/file2 ] || [ -f /tmp/file3 ]; then
+ echo foo
+fi
+" =
+ { "@if" = "[ -f /tmp/file1 ]"
+ { "@and" = "[ -f /tmp/file2 ]" }
+ { "@or" = "[ -f /tmp/file3 ]" }
+ { "@command" = "echo"
+ { "@arg" = "foo" }
+ }
+ }
+
+(* Support variable as command *)
+test Shellvars.lns get "$FOO bar\n" =
+ { "@command" = "$FOO"
+ { "@arg" = "bar" }
+ }
+
+
+(*********************************************************
+ * Group: Unsupported syntax *
+ * *
+ * The following tests are known to be failing currently *
+ *********************************************************)
+
+(* Any piping (Issue #343) *)
+test Shellvars.lns get "FOO=bar && BAR=foo
+echo foo || { echo bar; }
+echo FOO | myfunc() { echo bar; }\n" = *
+
+
+(* Stream redirections (Issue #626) *)
+test Shellvars.lns get "echo foo 2>&1 >/dev/null\n" = *
+
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+(* Test for shell list handling lens *)
+module Test_shellvars_list =
+
+ let list_vals = "# Some comment
+MODULES_LOADED_ON_BOOT=\"ipv6 sunrpc\"
+
+DEFAULT_APPEND=\"showopts noresume console=tty0 console=ttyS0,115200n8 ro\"
+
+LOADER_TYPE=\"grub\"
+"
+
+ test Shellvars_list.lns get list_vals =
+ { "#comment" = "Some comment" }
+ { "MODULES_LOADED_ON_BOOT"
+ { "quote" = "\"" }
+ { "value" = "ipv6" }
+ { "value" = "sunrpc" } }
+ { }
+ { "DEFAULT_APPEND"
+ { "quote" = "\"" }
+ { "value" = "showopts" }
+ { "value" = "noresume" }
+ { "value" = "console=tty0" }
+ { "value" = "console=ttyS0,115200n8" }
+ { "value" = "ro" } }
+ { }
+ { "LOADER_TYPE"
+ { "quote" = "\"" }
+ { "value" = "grub" } }
+
+
+ (* append a value *)
+ test Shellvars_list.lns put "VAR=\"test1\t \ntest2\"\n" after
+ set "VAR/value[last()+1]" "test3"
+ = "VAR=\"test1\t \ntest2 test3\"\n"
+
+ (* in double quoted lists, single quotes and escaped values are allowed *)
+ test Shellvars_list.lns get "VAR=\"test'1 test2 a\ \\\"longer\\\"\ test\"\n" =
+ { "VAR"
+ { "quote" = "\"" }
+ { "value" = "test'1" }
+ { "value" = "test2" }
+ { "value" = "a\ \\\"longer\\\"\ test" } }
+
+ (* add new value, delete one and append something *)
+ test Shellvars_list.lns put list_vals after
+ set "FAILSAVE_APPEND/quote" "\"" ;
+ set "FAILSAVE_APPEND/value[last()+1]" "console=ttyS0" ;
+ rm "LOADER_TYPE" ;
+ rm "MODULES_LOADED_ON_BOOT/value[1]" ;
+ set "DEFAULT_APPEND/value[last()+1]" "teststring"
+ = "# Some comment
+MODULES_LOADED_ON_BOOT=\"sunrpc\"
+
+DEFAULT_APPEND=\"showopts noresume console=tty0 console=ttyS0,115200n8 ro teststring\"
+
+FAILSAVE_APPEND=\"console=ttyS0\"
+"
+
+ (* test of single quotes (leading/trailing whitespace is kept) *)
+ test Shellvars_list.lns put "VAR=' \t test1\t \ntest2 '\n" after
+ set "VAR/value[last()+1]" "test3"
+ = "VAR=' \t test1\t \ntest2 test3 '\n"
+
+ (* change quotes (leading/trailing whitespace is lost) *)
+ test Shellvars_list.lns put "VAR=' \t test1\t \ntest2 '\n" after
+ set "VAR/quote" "\""
+ = "VAR=\"test1\t \ntest2\"\n"
+
+ (* double quotes are allowed in single quoted lists *)
+ test Shellvars_list.lns get "VAR='test\"1 test2'\n" =
+ { "VAR"
+ { "quote" = "'" }
+ { "value" = "test\"1" }
+ { "value" = "test2" } }
+
+ (* empty list with quotes *)
+ test Shellvars_list.lns get "VAR=''\n" =
+ { "VAR"
+ { "quote" = "'" } }
+
+ (* unquoted value *)
+ test Shellvars_list.lns get "VAR=test\n" =
+ { "VAR"
+ { "quote" = "" }
+ { "value" = "test" } }
+
+ (* unquoted value with escaped space etc. *)
+ test Shellvars_list.lns get "VAR=a\\ \\\"long\\\"\\ test\n" =
+ { "VAR"
+ { "quote" = "" }
+ { "value" = "a\\ \\\"long\\\"\\ test" } }
+
+ (* append to unquoted value *)
+ test Shellvars_list.lns put "VAR=test1\n" after
+ set "VAR/quote" "\"";
+ set "VAR/value[last()+1]" "test2"
+ = "VAR=\"test1 test2\"\n"
+
+ (* empty entry *)
+ test Shellvars_list.lns get "VAR=\n" =
+ { "VAR"
+ { "quote" = "" } }
+
+ (* set value w/o quotes to empty value... *)
+ test Shellvars_list.lns put "VAR=\n" after
+ set "VAR/value[last()+1]" "test"
+ = "VAR=test\n"
+
+ (* ... or no value *)
+ test Shellvars_list.lns put "" after
+ set "VAR/quote" "";
+ set "VAR/value[1]" "test"
+ = "VAR=test\n"
+
+ (* Ticket #368 - backticks *)
+ test Shellvars_list.lns get "GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`\n" =
+ { "GRUB_DISTRIBUTOR"
+ { "quote" = "" }
+ { "value" = "`lsb_release -i -s 2> /dev/null || echo Debian`" } }
+
+ (* Test: Shellvars_list.lns
+ Ticket #342: end-of-line comments *)
+ test Shellvars_list.lns get "service_ping=\"ping/icmp\" #ping\n" =
+ { "service_ping"
+ { "quote" = "\"" }
+ { "value" = "ping/icmp" }
+ { "#comment" = "ping" } }
+
+ (* Test: Shellvars_list.lns
+ Support double-quoted continued lines *)
+ test Shellvars_list.lns get "DAEMON_OPTS=\"-a :6081 \
+ -T localhost:6082\"\n" =
+ { "DAEMON_OPTS"
+ { "quote" = "\"" }
+ { "value" = "-a" }
+ { "value" = ":6081" }
+ { "value" = "-T" }
+ { "value" = "localhost:6082" } }
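
The four `value` nodes correspond to sh word splitting of the joined value: the backslash-newline continues the quoted string, and unquoted expansion then splits it on whitespace. A sketch:

```shell
#!/bin/sh
DAEMON_OPTS="-a :6081 \
 -T localhost:6082"
set -- $DAEMON_OPTS    # unquoted expansion: split into words
echo "$#"              # prints 4
```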
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+(*
+Module: Test_Simplelines
+ Provides unit tests and examples for the <Simplelines> lens.
+*)
+
+module Test_Simplelines =
+
+(* Variable: conf *)
+let conf = "# This is a comment
+
+word
+a line
+ indented line
+with $péci@l cH@r2ct3rs
+"
+
+(* Test: Simplelines.lns *)
+test Simplelines.lns get conf =
+ { "#comment" = "This is a comment" }
+ { }
+ { "1" = "word" }
+ { "2" = "a line" }
+ { "3" = "indented line" }
+ { "4" = "with $péci@l cH@r2ct3rs" }
+
+(* Variable: cronallow *)
+ let cronallow = "# Test comment
+#
+user1
+another
+
+user2
+"
+
+(* Test: cron.allow file *)
+ test Simplelines.lns get cronallow =
+ { "#comment" = "Test comment" }
+ { }
+ { "1" = "user1" }
+ { "2" = "another" }
+ { }
+ { "3" = "user2" }
--- /dev/null
+(*
+Module: Test_Simplevars
+ Provides unit tests and examples for the <Simplevars> lens.
+*)
+
+module Test_Simplevars =
+
+(* Variable: conf *)
+let conf = "# this is a comment
+
+mykey = myvalue # eol comment
+anotherkey = another value
+"
+
+(* Test: Simplevars.lns *)
+test Simplevars.lns get conf =
+ { "#comment" = "this is a comment" }
+ { }
+ { "mykey" = "myvalue"
+ { "#comment" = "eol comment" } }
+ { "anotherkey" = "another value" }
+
+(* Test: Simplevars.lns
+ Quotes are OK in variables that do not begin with a quote *)
+test Simplevars.lns get "UserParameter=custom.vfs.dev.read.ops[*],cat /proc/diskstats | grep $1 | head -1 | awk '{print $$4}'\n" =
+ { "UserParameter" = "custom.vfs.dev.read.ops[*],cat /proc/diskstats | grep $1 | head -1 | awk '{print $$4}'" }
+
+(* Test: Simplevars.lns
+ Support flags *)
+test Simplevars.lns get "dnsadminapp\n" =
+ { "dnsadminapp" }
+
+(* Test: Simplevars.lns
+ Support empty values *)
+test Simplevars.lns get "foo =\n" =
+ { "foo" = "" { } }
--- /dev/null
+module Test_sip_conf =
+
+let conf = "[general]
+context=default ; Default context for incoming calls
+udpbindaddr=0.0.0.0 ; IP address to bind UDP listen socket to (0.0.0.0 binds to all)
+; The address family of the bound UDP address is used to determine how Asterisk performs
+; DNS lookups. In cases a) and c) above, only A records are considered. In case b), only
+; AAAA records are considered. In case d), both A and AAAA records are considered. Note,
+
+
+[basic-options-title](!,superclass-template);a template for my preferred codecs !@#$%#@$%^^&%%^*&$%
+ #comment after the title
+ dtmfmode=rfc2833
+ context=from-office
+ type=friend
+
+
+[my-codecs](!) ; a template for my preferred codecs
+ disallow=all
+ allow=ilbc
+ allow=g729
+ allow=gsm
+ allow=g723
+ allow=ulaw
+
+[2133](natted-phone,my-codecs) ;;;;; some sort of comment
+ secret = peekaboo
+[2134](natted-phone,ulaw-phone)
+ secret = not_very_secret
+[2136](public-phone,ulaw-phone)
+ secret = not_very_secret_either
+"
+
+test Sip_Conf.lns get conf =
+ { "title" = "general"
+ { "context" = "default"
+ { "#comment" = "Default context for incoming calls" }
+ }
+ { "udpbindaddr" = "0.0.0.0"
+ { "#comment" = "IP address to bind UDP listen socket to (0.0.0.0 binds to all)" }
+ }
+ { "#comment" = "The address family of the bound UDP address is used to determine how Asterisk performs" }
+ { "#comment" = "DNS lookups. In cases a) and c) above, only A records are considered. In case b), only" }
+ { "#comment" = "AAAA records are considered. In case d), both A and AAAA records are considered. Note," }
+ { }
+ { }
+ }
+ { "title" = "basic-options-title"
+ { "@is_template" }
+ { "@use_template" = "superclass-template" }
+ { "#title_comment" = ";a template for my preferred codecs !@#$%#@$%^^&%%^*&$%" }
+ { "#comment" = "comment after the title" }
+ { "dtmfmode" = "rfc2833" }
+ { "context" = "from-office" }
+ { "type" = "friend" }
+ { }
+ { }
+ }
+ { "title" = "my-codecs"
+ { "@is_template" }
+ { "#title_comment" = " ; a template for my preferred codecs" }
+ { "disallow" = "all" }
+ { "allow" = "ilbc" }
+ { "allow" = "g729" }
+ { "allow" = "gsm" }
+ { "allow" = "g723" }
+ { "allow" = "ulaw" }
+ { }
+ }
+ { "title" = "2133"
+ { "@use_template" = "natted-phone" }
+ { "@use_template" = "my-codecs" }
+ { "#title_comment" = " ;;;;; some sort of comment" }
+ { "secret" = "peekaboo" }
+ }
+ { "title" = "2134"
+ { "@use_template" = "natted-phone" }
+ { "@use_template" = "ulaw-phone" }
+ { "secret" = "not_very_secret" }
+ }
+ { "title" = "2136"
+ { "@use_template" = "public-phone" }
+ { "@use_template" = "ulaw-phone" }
+ { "secret" = "not_very_secret_either" }
+ }
+
+ (*********************************************
+ * Tests for update, create, delete
+ *
+ *********************************************)
+
+ (*********************************************
+ * Test to confirm that we can update the
+ * default context
+ *
+ *********************************************)
+ test Sip_Conf.lns put "[general]\ncontext=default\n" after
+ set "title[.='general']/context" "updated"
+ = "[general]\ncontext=updated
+"
+
+ (*********************************************
+ * Test to confirm that we can create a
+ * new title with a context
+ *
+ *********************************************)
+ test Sip_Conf.lns put "[general]\ncontext=default\n" after
+ set "/title[.='newtitle']" "newtitle"; set "/title[.='newtitle']/context" "foobarbaz"
+ = "[general]\ncontext=default
+[newtitle]
+context=foobarbaz
+"
+
--- /dev/null
+module Test_slapd =
+
+let conf = "# This is the main slapd configuration file. See slapd.conf(5) for more
+# info on the configuration options.
+
+#######################################################################
+# Global Directives:
+
+# Features to permit
+#allow bind_v2
+
+# Schema and objectClass definitions
+include /etc/ldap/schema/core.schema
+
+#######################################################################
+# Specific Directives for database #1, of type hdb:
+# Database specific directives apply to this databasse until another
+# 'database' directive occurs
+database hdb
+
+# The base of your directory in database #1
+suffix \"dc=nodomain\"
+
+access to attrs=userPassword,shadowLastChange
+ by dn=\"cn=admin,dc=nodomain\" write
+ by anonymous auth
+ by self write
+ by * none
+"
+
+test Slapd.lns get conf =
+ { "#comment" = "This is the main slapd configuration file. See slapd.conf(5) for more" }
+ { "#comment" = "info on the configuration options." }
+ {}
+ { "#comment" = "######################################################################" }
+ { "#comment" = "Global Directives:"}
+ {}
+ { "#comment" = "Features to permit" }
+ { "#comment" = "allow bind_v2" }
+ {}
+ { "#comment" = "Schema and objectClass definitions" }
+ { "include" = "/etc/ldap/schema/core.schema" }
+ {}
+ { "#comment" = "######################################################################" }
+ { "#comment" = "Specific Directives for database #1, of type hdb:" }
+ { "#comment" = "Database specific directives apply to this databasse until another" }
+ { "#comment" = "'database' directive occurs" }
+ { "database" = "hdb"
+ {}
+ { "#comment" = "The base of your directory in database #1" }
+ { "suffix" = "dc=nodomain" }
+ {}
+ { "access to" = "attrs=userPassword,shadowLastChange"
+ { "by" = "dn=\"cn=admin,dc=nodomain\""
+ { "access" = "write" } }
+ { "by" = "anonymous"
+ { "access" = "auth" } }
+ { "by" = "self"
+ { "access" = "write" } }
+ { "by" = "*"
+ { "access" = "none" } } } }
+
+(* Test: Slapd.lns
+ Full access test with who/access/control *)
+test Slapd.lns get "access to dn.subtree=\"dc=example,dc=com\"
+ by self write stop\n" =
+ { "access to" = "dn.subtree=\"dc=example,dc=com\""
+ { "by" = "self"
+ { "access" = "write" }
+ { "control" = "stop" } } }
+
+(* Test: Slapd.lns
+ access test with who *)
+test Slapd.lns get "access to dn.subtree=\"dc=example,dc=com\"
+ by self\n" =
+ { "access to" = "dn.subtree=\"dc=example,dc=com\""
+ { "by" = "self" } }
+
+(* Test: Slapd.lns
+ access test with who/access *)
+test Slapd.lns get "access to dn.subtree=\"dc=example,dc=com\"
+ by self write\n" =
+ { "access to" = "dn.subtree=\"dc=example,dc=com\""
+ { "by" = "self"
+ { "access" = "write" } } }
+
+(* Test: Slapd.lns
+ access test with who/control *)
+test Slapd.lns get "access to dn.subtree=\"dc=example,dc=com\"
+ by self stop\n" =
+ { "access to" = "dn.subtree=\"dc=example,dc=com\""
+ { "by" = "self"
+ { "control" = "stop" } } }
+
--- /dev/null
+(*
+Module: Test_SmbUsers
+ Provides unit tests and examples for the <SmbUsers> lens.
+*)
+
+module Test_SmbUsers =
+
+(* Variable: conf *)
+let conf = "# this is a comment
+
+jarwin = JosephArwin
+manderso = MarkAnderson MarkusAndersonus
+users = @account
+nobody = *
+;commented = SomeOne
+"
+
+(* Test: SmbUsers.lns *)
+test SmbUsers.lns get conf =
+ { "#comment" = "this is a comment" }
+ { }
+ { "jarwin"
+ { "username" = "JosephArwin" } }
+ { "manderso"
+ { "username" = "MarkAnderson" }
+ { "username" = "MarkusAndersonus" } }
+ { "users"
+ { "username" = "@account" } }
+ { "nobody"
+ { "username" = "*" } }
+ { "#comment" = "commented = SomeOne" }
--- /dev/null
+(* Test for system lens *)
+module Test_solaris_system =
+
+ let conf = "*ident \"@(#)system 1.18 97/06/27 SMI\" /* SVR4 1.5 */
+*
+* SYSTEM SPECIFICATION FILE
+*
+
+moddir: /kernel /usr/kernel /other/modules
+
+rootfs:ufs
+rootdev:/sbus@1,f8000000/esp@0,800000/sd@3,0:a
+
+include: win
+include: sys/shmsys
+
+exclude: win
+exclude: sys/shmsys
+
+forceload: drv/foo
+forceload: drv/ssd
+
+set nautopush=32
+set noexec_user_stack=1
+set zfs:zfs_arc_max=12884901888
+set test_module:debug = 0x13
+set fcp:ssfcp_enable_auto_configuration=1
+set scsi_options = 0x7F8
+set moddebug & ~0x880
+set moddebug | 0x40
+"
+
+ test Solaris_System.lns get conf =
+ { "#comment" = "ident \"@(#)system 1.18 97/06/27 SMI\" /* SVR4 1.5 */" }
+ { }
+ { "#comment" = "SYSTEM SPECIFICATION FILE" }
+ { }
+ { }
+ { "moddir"
+ { "1" = "/kernel" }
+ { "2" = "/usr/kernel" }
+ { "3" = "/other/modules" } }
+ { }
+ { "rootfs" = "ufs" }
+ { "rootdev" = "/sbus@1,f8000000/esp@0,800000/sd@3,0:a" }
+ { }
+ { "include" = "win" }
+ { "include" = "sys/shmsys" }
+ { }
+ { "exclude" = "win" }
+ { "exclude" = "sys/shmsys" }
+ { }
+ { "forceload" = "drv/foo" }
+ { "forceload" = "drv/ssd" }
+ { }
+ { "set"
+ { "variable" = "nautopush" }
+ { "operator" = "=" }
+ { "value" = "32" } }
+ { "set"
+ { "variable" = "noexec_user_stack" }
+ { "operator" = "=" }
+ { "value" = "1" } }
+ { "set"
+ { "module" = "zfs" }
+ { "variable" = "zfs_arc_max" }
+ { "operator" = "=" }
+ { "value" = "12884901888" } }
+ { "set"
+ { "module" = "test_module" }
+ { "variable" = "debug" }
+ { "operator" = "=" }
+ { "value" = "0x13" } }
+ { "set"
+ { "module" = "fcp" }
+ { "variable" = "ssfcp_enable_auto_configuration" }
+ { "operator" = "=" }
+ { "value" = "1" } }
+ { "set"
+ { "variable" = "scsi_options" }
+ { "operator" = "=" }
+ { "value" = "0x7F8" } }
+ { "set"
+ { "variable" = "moddebug" }
+ { "operator" = "&" }
+ { "value" = "~0x880" } }
+ { "set"
+ { "variable" = "moddebug" }
+ { "operator" = "|" }
+ { "value" = "0x40" } }
+
+(* Check that moddir supports colons and spaces *)
+ let moddir_colons = "moddir:/kernel:/usr/kernel:/other/modules
+"
+
+ test Solaris_System.lns get moddir_colons =
+ { "moddir"
+ { "1" = "/kernel" }
+ { "2" = "/usr/kernel" }
+ { "3" = "/other/modules" } }
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+module Test_soma =
+
+let conf = "# comment
+User = soma
+
+OptionsItem = \"-msglevel avsync=5 -ao jack -af volnorm -novideo -noconsolecontrols -nojoystick -nolirc -nomouseinput\"
+Debug = 3
+"
+
+test Soma.lns get conf =
+ { "#comment" = "comment" }
+ { "User" = "soma" }
+ {}
+ { "OptionsItem" = "\"-msglevel avsync=5 -ao jack -af volnorm -novideo -noconsolecontrols -nojoystick -nolirc -nomouseinput\"" }
+ { "Debug" = "3" }
--- /dev/null
+(*
+Module: Test_Sos
+ Provides unit tests and examples for the <Sos> lens.
+*)
+
+module Test_Sos =
+
+let sos_conf_example = "# Sample /etc/sos/sos.conf
+
+[global]
+# Set global options here
+verbose = 3
+verify=yes
+
+[report]
+# Options for any `sos report`
+skip-plugins = rpm, selinux, dovecot
+enable-plugins = host,logs
+
+[collect]
+# Options for `sos collect`
+ssh-key = /home/user/.ssh/mykey
+password = true
+
+[clean]
+# Options for `sos clean|mask`
+domains = mydomain.com
+
+[plugin_options]
+rpm.rpmva = off
+"
+
+test Sos.lns get sos_conf_example =
+ { "#comment" = "Sample /etc/sos/sos.conf" }
+ { }
+ { "global"
+ { "#comment" = "Set global options here" }
+ { "verbose" = "3" }
+ { "verify" = "yes" }
+ { }
+ }
+ { "report"
+ { "#comment" = "Options for any `sos report`" }
+ { "skip-plugins" = "rpm, selinux, dovecot" }
+ { "enable-plugins" = "host,logs" }
+ { }
+ }
+ { "collect"
+ { "#comment" = "Options for `sos collect`" }
+ { "ssh-key" = "/home/user/.ssh/mykey" }
+ { "password" = "true" }
+ { }
+ }
+ { "clean"
+ { "#comment" = "Options for `sos clean|mask`" }
+ { "domains" = "mydomain.com" }
+ { }
+ }
+ { "plugin_options"
+ { "rpm.rpmva" = "off" }
+ }
+
--- /dev/null
+module Test_spacevars =
+
+ let conf ="# This is a spaced key/value configuration file.
+keyword value
+
+# I like comments
+very.useful-key my=value
+
+a.flag
+"
+
+ let lns = Spacevars.lns
+
+ test lns get conf =
+ { "#comment" = "This is a spaced key/value configuration file."}
+ { "keyword" = "value" }
+ {}
+ { "#comment" = "I like comments"}
+ { "very.useful-key" = "my=value" }
+ {}
+ { "a.flag" }
--- /dev/null
+(**
+ *
+ * This module is used to test the Splunk module for valid extractions.
+ * Written by Tim Brigham.
+ * This file is licensed under the LGPLv2+, like the rest of Augeas.
+ **)
+
+module Test_splunk =
+
+(** inputs.conf **)
+
+ let inputs = "[default]
+host = splunk-node-1.example.com
+enable_autocomplete_login = False
+_meta = metakey::metaval foo::bar
+
+[udp://514]
+connection_host = none
+source = test
+sourcetype = syslog
+
+"
+test Splunk.lns get inputs =
+ { "target" = "default"
+ { "host" = "splunk-node-1.example.com" }
+ { "enable_autocomplete_login" = "False" }
+ { "_meta" = "metakey::metaval foo::bar" }
+ {}}
+ { "target" = "udp://514"
+ { "connection_host" = "none" }
+ { "source" = "test" }
+ { "sourcetype" = "syslog" }
+ {}}
+
+
+(** web.conf **)
+ let web = "[settings]
+enableSplunkWebSSL = 1
+enable_autocomplete_login = False
+"
+
+
+test Splunk.lns get web =
+ { "target" = "settings"
+ { "enableSplunkWebSSL" = "1" }
+ { "enable_autocomplete_login" = "False" }
+ }
+
+
+
+(** props.conf **)
+
+ let props = "[splunkd_stdout]
+PREFIX_SOURCETYPE = False
+SHOULD_LINEMERGE = False
+is_valid = False
+maxDist = 99
+
+"
+
+test Splunk.lns get props =
+ {
+ "target" = "splunkd_stdout"
+ { "PREFIX_SOURCETYPE" = "False" }
+ { "SHOULD_LINEMERGE" = "False" }
+ { "is_valid" = "False" }
+ { "maxDist" = "99" }
+ {}}
+
+(** tenants.conf **)
+ let tenants = "[tenant:default]
+whitelist.0 = *
+"
+
+test Splunk.lns get tenants =
+ { "target" = "tenant:default"
+ { "whitelist.0" = "*" }
+ }
+
+
+
+ let server = "[license]
+active_group = Free
+master_uri = https://myserver.mydomain.com:8089
+
+[general]
+serverName = splunk-node-1
+trustedIP = 127.0.0.1
+guid = XXXXXXX-XXXXXXX-XXXXXXX-XXXXXXX-XXXXXXXXXXXXX
+
+[sslConfig]
+sslKeysfilePassword = $1$XX2X4XX6XXXXXXXXX
+
+"
+
+test Splunk.lns get server =
+ { "target" = "license"
+ { "active_group" = "Free" }
+ { "master_uri" = "https://myserver.mydomain.com:8089" }
+ {}}
+ { "target" = "general"
+ { "serverName" = "splunk-node-1" }
+ { "trustedIP" = "127.0.0.1" }
+ { "guid" = "XXXXXXX-XXXXXXX-XXXXXXX-XXXXXXX-XXXXXXXXXXXXX" }
+ {}}
+ { "target" = "sslConfig"
+ { "sslKeysfilePassword" = "$1$XX2X4XX6XXXXXXXXX" }
+ {}}
+
+
+(* test anonymous attributes *)
+let anon = "
+# master
+serverName = splunk-node-1
+
+# slave
+serverName = splunk-node-2
+
+[general]
+serverName = splunk-node-3
+
+"
+
+test Splunk.lns get anon =
+ { ".anon"
+ { }
+ { "#comment" = "master" }
+ { "serverName" = "splunk-node-1" }
+ { }
+ { "#comment" = "slave" }
+ { "serverName" = "splunk-node-2" }
+ { }
+ }
+ { "target" = "general"
+ { "serverName" = "splunk-node-3" }
+ { }
+ }
+
+
+(* test empty value entry *)
+
+let override = "
+[general]
+# normal entry
+foo = bar
+# override entry
+foo =
+"
+
+test Splunk.lns get override =
+ { ".anon"
+ { }
+ }
+ { "target" = "general"
+ { "#comment" = "normal entry" }
+ { "foo" = "bar" }
+ { "#comment" = "override entry" }
+ { "foo" }
+ }
+
--- /dev/null
+module Test_squid =
+
+let conf = "# comment at the beginning of the file
+
+auth_param negotiate children 5
+acl many_spaces rep_header Content-Disposition -i [[:space:]]{3,}
+acl CONNECT method CONNECT
+# comment in the middle
+ acl local_network src 192.168.1.0/24
+
+http_access allow manager localhost
+http_access allow local_network
+"
+
+test Squid.lns get conf =
+ { "#comment" = "comment at the beginning of the file" }
+ {}
+ { "auth_param"
+ { "scheme" = "negotiate" }
+ { "parameter" = "children" }
+ { "setting" = "5" } }
+ { "acl"
+ { "many_spaces"
+ { "type" = "rep_header" }
+ { "setting" = "Content-Disposition" }
+ { "parameters"
+ { "1" = "-i" }
+ { "2" = "[[:space:]]{3,}" } } } }
+ { "acl"
+ { "CONNECT"
+ { "type" = "method" }
+ { "setting" = "CONNECT" } } }
+ { "#comment" = "comment in the middle" }
+ { "acl"
+ { "local_network"
+ { "type" = "src" }
+ { "setting" = "192.168.1.0/24" } } }
+ {}
+ { "http_access"
+ { "allow" = "manager"
+ { "parameters"
+ { "1" = "localhost" } } } }
+ { "http_access"
+ { "allow" = "local_network" } }
+
+(*
+ This tests the Debian lenny default squid.conf
+ Comments were stripped out
+*)
+
+let debian_lenny_default = "acl all src all
+acl manager proto cache_object
+acl localhost src 127.0.0.1/32
+acl to_localhost dst 127.0.0.0/8
+acl purge method PURGE
+acl CONNECT method CONNECT
+http_access allow manager localhost
+http_access deny manager
+http_access allow purge localhost
+http_access deny purge
+http_access deny !Safe_ports
+http_access deny CONNECT !SSL_ports
+http_access allow localhost
+http_access deny all
+no_cache deny query_no_cache
+icp_access allow localnet
+icp_access deny all
+http_port 3128
+hierarchy_stoplist cgi-bin ?
+access_log /var/log/squid/access.log squid
+refresh_pattern ^ftp: 1440 20% 10080
+refresh_pattern ^gopher: 1440 0% 1440
+refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
+refresh_pattern (Release|Package(.gz)*)$ 0 20% 2880
+refresh_pattern . 0 20% 4320 ignore-reload ignore-auth # testing options
+acl shoutcast rep_header X-HTTP09-First-Line ^ICY\s[0-9]
+upgrade_http0.9 deny shoutcast
+acl apache rep_header Server ^Apache
+broken_vary_encoding allow apache
+extension_methods REPORT MERGE MKACTIVITY CHECKOUT
+hosts_file /etc/hosts
+coredump_dir /var/spool/squid
+"
+
+test Squid.lns get debian_lenny_default =
+ { "acl"
+ { "all"
+ { "type" = "src" }
+ { "setting" = "all" }
+ }
+ }
+ { "acl"
+ { "manager"
+ { "type" = "proto" }
+ { "setting" = "cache_object" }
+ }
+ }
+ { "acl"
+ { "localhost"
+ { "type" = "src" }
+ { "setting" = "127.0.0.1/32" }
+ }
+ }
+ { "acl"
+ { "to_localhost"
+ { "type" = "dst" }
+ { "setting" = "127.0.0.0/8" }
+ }
+ }
+ { "acl"
+ { "purge"
+ { "type" = "method" }
+ { "setting" = "PURGE" }
+ }
+ }
+ { "acl"
+ { "CONNECT"
+ { "type" = "method" }
+ { "setting" = "CONNECT" }
+ }
+ }
+ { "http_access"
+ { "allow" = "manager"
+ { "parameters"
+ { "1" = "localhost" }
+ }
+ }
+ }
+ { "http_access"
+ { "deny" = "manager" }
+ }
+ { "http_access"
+ { "allow" = "purge"
+ { "parameters"
+ { "1" = "localhost" }
+ }
+ }
+ }
+ { "http_access"
+ { "deny" = "purge" }
+ }
+ { "http_access"
+ { "deny" = "!Safe_ports" }
+ }
+ { "http_access"
+ { "deny" = "CONNECT"
+ { "parameters"
+ { "1" = "!SSL_ports" }
+ }
+ }
+ }
+ { "http_access"
+ { "allow" = "localhost" }
+ }
+ { "http_access"
+ { "deny" = "all" }
+ }
+ { "no_cache" = "deny query_no_cache" }
+ { "icp_access" = "allow localnet" }
+ { "icp_access" = "deny all" }
+ { "http_port" = "3128" }
+ { "hierarchy_stoplist" = "cgi-bin ?" }
+ { "access_log" = "/var/log/squid/access.log squid" }
+ { "refresh_pattern" = "^ftp:"
+ { "min" = "1440" }
+ { "percent" = "20" }
+ { "max" = "10080" } }
+ { "refresh_pattern" = "^gopher:"
+ { "min" = "1440" }
+ { "percent" = "0" }
+ { "max" = "1440" } }
+ { "refresh_pattern" = "(/cgi-bin/|\?)"
+ { "case_insensitive" }
+ { "min" = "0" }
+ { "percent" = "0" }
+ { "max" = "0" } }
+ { "refresh_pattern" = "(Release|Package(.gz)*)$"
+ { "min" = "0" }
+ { "percent" = "20" }
+ { "max" = "2880" } }
+ { "refresh_pattern" = "."
+ { "min" = "0" }
+ { "percent" = "20" }
+ { "max" = "4320" }
+ { "option" = "ignore-reload" }
+ { "option" = "ignore-auth" }
+ { "#comment" = "testing options" } }
+ { "acl"
+ { "shoutcast"
+ { "type" = "rep_header" }
+ { "setting" = "X-HTTP09-First-Line" }
+ { "parameters"
+ { "1" = "^ICY\s[0-9]" }
+ }
+ }
+ }
+ { "upgrade_http0.9"
+ { "deny" = "shoutcast" } }
+ { "acl"
+ { "apache"
+ { "type" = "rep_header" }
+ { "setting" = "Server" }
+ { "parameters"
+ { "1" = "^Apache" }
+ }
+ }
+ }
+ { "broken_vary_encoding"
+ { "allow" = "apache" } }
+ { "extension_methods"
+ { "1" = "REPORT" }
+ { "2" = "MERGE" }
+ { "3" = "MKACTIVITY" }
+ { "4" = "CHECKOUT" } }
+ { "hosts_file" = "/etc/hosts" }
+ { "coredump_dir" = "/var/spool/squid" }
--- /dev/null
+(* Module: Test_ssh *)
+module Test_ssh =
+
+ let conf =
+"# start
+IdentityFile /etc/ssh/identity.asc
+
+Match final all
+ GSSAPIAuthentication yes
+
+Host suse.cz
+ ForwardAgent yes
+SendEnv LC_LANG
+
+Host *
+ ForwardAgent no
+ForwardX11Trusted yes
+
+# IdentityFile ~/.ssh/identity
+ SendEnv LC_IDENTIFICATION LC_ALL LC_*
+ProxyCommand ssh -q -W %h:%p gateway.example.com
+RemoteForward [1.2.3.4]:20023 localhost:22
+RemoteForward 2221 lhost1:22
+LocalForward 3001 remotehost:3000
+Ciphers aes128-ctr,aes192-ctr
+MACs hmac-md5,hmac-sha1,umac-64@openssh.com
+HostKeyAlgorithms ssh-ed25519-cert-v01@openssh.com,ssh-ed25519,ssh-rsa-cert-v01@openssh.com,ssh-rsa
+KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256
+PubkeyAcceptedKeyTypes ssh-ed25519-cert-v01@openssh.com,ssh-ed25519,ssh-rsa-cert-v01@openssh.com,ssh-rsa
+"
+
+ test Ssh.lns get conf =
+ { "#comment" = "start" }
+ { "IdentityFile" = "/etc/ssh/identity.asc" }
+ { }
+ { "Match"
+ { "Condition"
+ { "final" = "all" }
+ }
+ { "Settings"
+ { "GSSAPIAuthentication" = "yes" }
+ { }
+ }
+ }
+ { "Host" = "suse.cz"
+ { "ForwardAgent" = "yes" }
+ { "SendEnv"
+ { "1" = "LC_LANG" } }
+ { }
+ }
+ { "Host" = "*"
+ { "ForwardAgent" = "no" }
+ { "ForwardX11Trusted" = "yes" }
+ { }
+ { "#comment" = "IdentityFile ~/.ssh/identity" }
+ { "SendEnv"
+ { "1" = "LC_IDENTIFICATION" }
+ { "2" = "LC_ALL" }
+ { "3" = "LC_*" } }
+ { "ProxyCommand" = "ssh -q -W %h:%p gateway.example.com" }
+ { "RemoteForward"
+ { "[1.2.3.4]:20023" = "localhost:22" }
+ }
+ { "RemoteForward"
+ { "2221" = "lhost1:22" }
+ }
+ { "LocalForward"
+ { "3001" = "remotehost:3000" }
+ }
+ { "Ciphers"
+ { "1" = "aes128-ctr" }
+ { "2" = "aes192-ctr" }
+ }
+ { "MACs"
+ { "1" = "hmac-md5" }
+ { "2" = "hmac-sha1" }
+ { "3" = "umac-64@openssh.com" }
+ }
+ { "HostKeyAlgorithms"
+ { "1" = "ssh-ed25519-cert-v01@openssh.com" }
+ { "2" = "ssh-ed25519" }
+ { "3" = "ssh-rsa-cert-v01@openssh.com" }
+ { "4" = "ssh-rsa" }
+ }
+ { "KexAlgorithms"
+ { "1" = "curve25519-sha256@libssh.org" }
+ { "2" = "diffie-hellman-group-exchange-sha256" }
+ }
+ { "PubkeyAcceptedKeyTypes"
+ { "1" = "ssh-ed25519-cert-v01@openssh.com" }
+ { "2" = "ssh-ed25519" }
+ { "3" = "ssh-rsa-cert-v01@openssh.com" }
+ { "4" = "ssh-rsa" }
+ }
+ }
+
+(* Test: Ssh.lns
+ Proxycommand is case-insensitive *)
+
+test Ssh.lns get "Proxycommand ssh -q test nc -q0 %h 22\n" =
+ { "Proxycommand" = "ssh -q test nc -q0 %h 22" }
+
+(* Test: Ssh.lns
+ GlobalKnownHostsFile *)
+test Ssh.lns get "GlobalKnownHostsFile /etc/ssh/ssh_known_hosts /etc/ssh/ssh_known_hosts2\n" =
+ { "GlobalKnownHostsFile"
+ { "1" = "/etc/ssh/ssh_known_hosts" }
+ { "2" = "/etc/ssh/ssh_known_hosts2" }
+ }
+
+(* Keywords can be separated from their arguments with '=', too *)
+test Ssh.lns get "Host mail.watzmann.net
+ LocalForward=11111 mail.watzmann.net:110\n" =
+ { "Host" = "mail.watzmann.net"
+ { "LocalForward"
+ { "11111" = "mail.watzmann.net:110" } } }
+
+test Ssh.lns get "ForwardAgent=yes\n" =
+ { "ForwardAgent" = "yes" }
+
+test Ssh.lns get "ForwardAgent =\tyes\n" =
+ { "ForwardAgent" = "yes" }
+
+(* Issue #605 *)
+test Ssh.lns get "RekeyLimit 1G 1h\n" =
+ { "RekeyLimit"
+ { "amount" = "1G" }
+ { "duration" = "1h" } }
+
+test Ssh.lns get "RekeyLimit 1G\n" =
+ { "RekeyLimit"
+ { "amount" = "1G" } }
--- /dev/null
+(* Module: Test_sshd *)
+module Test_sshd =
+
+ let accept_env = "Protocol 2
+AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
+AcceptEnv LC_IDENTIFICATION LC_ALL\n"
+
+ test Sshd.lns get accept_env =
+ { "Protocol" = "2" }
+ { "AcceptEnv"
+ { "1" = "LC_PAPER" }
+ { "2" = "LC_NAME" }
+ { "3" = "LC_ADDRESS" }
+ { "4" = "LC_TELEPHONE" }
+ { "5" = "LC_MEASUREMENT" } }
+ { "AcceptEnv"
+ { "6" = "LC_IDENTIFICATION" }
+ { "7" = "LC_ALL" } }
+
+
+ test Sshd.lns get "HostKey /etc/ssh/ssh_host_rsa_key
+HostKey /etc/ssh/ssh_host_dsa_key\n" =
+ { "HostKey" = "/etc/ssh/ssh_host_rsa_key" }
+ { "HostKey" = "/etc/ssh/ssh_host_dsa_key" }
+
+
+ test Sshd.lns put accept_env after
+ rm "AcceptEnv";
+ rm "AcceptEnv";
+ set "Protocol" "1.5";
+ set "X11Forwarding" "yes"
+ = "Protocol 1.5\nX11Forwarding yes\n"
+
+ test Sshd.lns get "AuthorizedKeysFile %h/.ssh/authorized_keys\n" =
+ { "AuthorizedKeysFile" = "%h/.ssh/authorized_keys" }
+
+ test Sshd.lns get "Subsystem sftp /usr/lib/openssh/sftp-server\n" =
+ { "Subsystem"
+ { "sftp" = "/usr/lib/openssh/sftp-server" } }
+
+ test Sshd.lns get "Subsystem sftp-test /usr/lib/openssh/sftp-server\n" =
+ { "Subsystem"
+ { "sftp-test" = "/usr/lib/openssh/sftp-server" } }
+
+
+
+ let match_blocks = "X11Forwarding yes
+Match User sarko Group pres.*
+ Banner /etc/bienvenue.txt
+ X11Forwarding no
+Match User bush Group pres.* Host white.house.*
+Banner /etc/welcome.txt
+Match Group \"Domain users\"
+ X11Forwarding yes
+"
+ test Sshd.lns get match_blocks =
+ { "X11Forwarding" = "yes"}
+ { "Match"
+ { "Condition" { "User" = "sarko" }
+ { "Group" = "pres.*" } }
+ { "Settings" { "Banner" = "/etc/bienvenue.txt" }
+ { "X11Forwarding" = "no" } } }
+ { "Match"
+ { "Condition" { "User" = "bush" }
+ { "Group" = "pres.*" }
+ { "Host" = "white.house.*" } }
+ { "Settings" { "Banner" = "/etc/welcome.txt" } } }
+ { "Match"
+ { "Condition" { "Group" = "Domain users" } }
+ { "Settings" { "X11Forwarding" = "yes" } } }
+
+ test Sshd.lns put match_blocks after
+ insb "Subsystem" "/Match[1]";
+ set "/Subsystem/sftp" "/usr/libexec/openssh/sftp-server"
+ = "X11Forwarding yes
+Subsystem sftp /usr/libexec/openssh/sftp-server
+Match User sarko Group pres.*
+ Banner /etc/bienvenue.txt
+ X11Forwarding no
+Match User bush Group pres.* Host white.house.*
+Banner /etc/welcome.txt
+Match Group \"Domain users\"
+ X11Forwarding yes\n"
+
+(* Test: Sshd.lns
+ Indent when adding to a Match group *)
+ test Sshd.lns put match_blocks after
+ set "Match[1]/Settings/PermitRootLogin" "yes";
+ set "Match[1]/Settings/#comment" "a comment" =
+"X11Forwarding yes
+Match User sarko Group pres.*
+ Banner /etc/bienvenue.txt
+ X11Forwarding no
+ PermitRootLogin yes
+ # a comment
+Match User bush Group pres.* Host white.house.*
+Banner /etc/welcome.txt
+Match Group \"Domain users\"
+ X11Forwarding yes\n"
+
+
+(* Test: Sshd.lns
+ Parse Ciphers, KexAlgorithms, HostKeyAlgorithms as lists (GH issue #69)
+ Parse GSSAPIKexAlgorithms, PubkeyAcceptedKeyTypes, CASignatureAlgorithms as lists (GH PR #721)
+ Parse PubkeyAcceptedAlgorithms as a list (GH issue #804) *)
+test Sshd.lns get "Ciphers aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes128-ctr
+KexAlgorithms diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1
+HostKeyAlgorithms ssh-ed25519-cert-v01@openssh.com,ssh-rsa-cert-v01@openssh.com,ssh-ed25519,ssh-rsa
+GSSAPIKexAlgorithms gss-curve25519-sha256-,gss-nistp256-sha256-,gss-group14-sha256-
+PubkeyAcceptedKeyTypes ecdsa-sha2-nistp256,ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384
+PubkeyAcceptedAlgorithms ecdsa-sha2-nistp256,ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384
+CASignatureAlgorithms ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521\n" =
+ { "Ciphers"
+ { "1" = "aes256-gcm@openssh.com" }
+ { "2" = "aes128-gcm@openssh.com" }
+ { "3" = "aes256-ctr" }
+ { "4" = "aes128-ctr" }
+ }
+ { "KexAlgorithms"
+ { "1" = "diffie-hellman-group-exchange-sha256" }
+ { "2" = "diffie-hellman-group14-sha1" }
+ { "3" = "diffie-hellman-group-exchange-sha1" }
+ }
+ { "HostKeyAlgorithms"
+ { "1" = "ssh-ed25519-cert-v01@openssh.com" }
+ { "2" = "ssh-rsa-cert-v01@openssh.com" }
+ { "3" = "ssh-ed25519" }
+ { "4" = "ssh-rsa" }
+ }
+ { "GSSAPIKexAlgorithms"
+ { "1" = "gss-curve25519-sha256-" }
+ { "2" = "gss-nistp256-sha256-" }
+ { "3" = "gss-group14-sha256-" }
+ }
+ { "PubkeyAcceptedKeyTypes"
+ { "1" = "ecdsa-sha2-nistp256" }
+ { "2" = "ecdsa-sha2-nistp256-cert-v01@openssh.com" }
+ { "3" = "ecdsa-sha2-nistp384" }
+ }
+ { "PubkeyAcceptedAlgorithms"
+ { "1" = "ecdsa-sha2-nistp256" }
+ { "2" = "ecdsa-sha2-nistp256-cert-v01@openssh.com" }
+ { "3" = "ecdsa-sha2-nistp384" }
+ }
+ { "CASignatureAlgorithms"
+ { "1" = "ecdsa-sha2-nistp256" }
+ { "2" = "ecdsa-sha2-nistp384" }
+ { "3" = "ecdsa-sha2-nistp521" }
+ }
+
+(* Test: Sshd.lns
+ Keys are case-insensitive *)
+test Sshd.lns get "ciPheRs aes256-gcm@openssh.com,aes128-ctr
+maTcH User foo
+ x11forwarding no\n" =
+ { "ciPheRs"
+ { "1" = "aes256-gcm@openssh.com" }
+ { "2" = "aes128-ctr" }
+ }
+ { "maTcH"
+ { "Condition"
+ { "User" = "foo" }
+ }
+ { "Settings"
+ { "x11forwarding" = "no" }
+ }
+ }
+
+(* Test: Sshd.lns
+ Allow AllowGroups in Match groups (GH issue #75) *)
+test Sshd.lns get "Match User foo
+ AllowGroups users\n" =
+ { "Match" { "Condition" { "User" = "foo" } }
+ { "Settings" { "AllowGroups" { "1" = "users" } } } }
+
+(* Test: Sshd.lns
+ Recognize quoted group names with spaces in AllowGroups and similar
+ (Issue #477) *)
+test Sshd.lns get "Match User foo
+ AllowGroups math-domain-users \"access admins\"\n" =
+ { "Match" { "Condition" { "User" = "foo" } }
+ { "Settings"
+ { "AllowGroups"
+ { "1" = "math-domain-users" }
+ { "2" = "access admins" } } } }
+
+test Sshd.lns put "Match User foo\nAllowGroups users\n" after
+ set "/Match/Settings/AllowGroups/1" "all people" =
+ "Match User foo\nAllowGroups \"all people\"\n"
+
+test Sshd.lns put "Match User foo\nAllowGroups users\n" after
+ set "/Match/Settings/AllowGroups/01" "all people" =
+ "Match User foo\nAllowGroups users \"all people\"\n"
+
+test Sshd.lns put "Match User foo\nAllowGroups users\n" after
+ set "/Match/Settings/AllowGroups/01" "people" =
+ "Match User foo\nAllowGroups users people\n"
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+(*
+Module: Test_sssd
+ Test cases for the sssd lens
+
+Author: Erinn Looney-Triggs
+
+About: License
+ This file is licensed under the LGPLv2+, like the rest of Augeas.
+*)
+module Test_sssd =
+
+let conf = "[domain/example.com]
+#Comment here
+; another comment
+cache_credentials = True
+krb5_store_password_if_offline = True
+ipa_server = _srv_, ipa.example.com
+[sssd]
+services = nss, pam
+config_file_version = 2
+
+domains = example.com
+[nss]
+
+[pam]
+"
+
+test Sssd.lns get conf =
+ { "target" = "domain/example.com"
+ { "#comment" = "Comment here" }
+ { "#comment" = "another comment" }
+ { "cache_credentials" = "True" }
+ { "krb5_store_password_if_offline" = "True" }
+ { "ipa_server" = "_srv_, ipa.example.com" }
+ }
+ { "target" = "sssd"
+ { "services" = "nss, pam" }
+ { "config_file_version" = "2" }
+ { }
+ { "domains" = "example.com" }
+ }
+ { "target" = "nss"
+ { }
+ }
+ { "target" = "pam" }
--- /dev/null
+module Test_star =
+
+let conf = "# @(#)star.dfl 1.2 05/08/09 Copyright 2003 J. Schilling
+#
+# This file is /etc/default/star
+
+STAR_FIFOSIZE= 32m
+
+STAR_FIFOSIZE_MAX= 100m
+
+archive0=/dev/rmt/0 20 0 N
+archive1=/dev/rmt/0n 20 0 n
+archive2=/dev/rmt/1 20 0 y
+archive3=/dev/rmt/1n 20 0
+archive4=/dev/rmt/0 126 0
+archive5=/dev/rmt/0n 126 0
+archive6=/dev/rmt/1 126 0
+archive7=/dev/rmt/1n 126 0
+"
+test Star.lns get conf =
+ { "#comment" = "@(#)star.dfl 1.2 05/08/09 Copyright 2003 J. Schilling" }
+ { }
+ { "#comment" = "This file is /etc/default/star" }
+ { }
+ { "STAR_FIFOSIZE" = "32m" }
+ { }
+ { "STAR_FIFOSIZE_MAX" = "100m" }
+ { }
+ { "archive0"
+ { "device" = "/dev/rmt/0" }
+ { "block" = "20" }
+ { "size" = "0" }
+ { "istape" = "N" } }
+ { "archive1"
+ { "device" = "/dev/rmt/0n" }
+ { "block" = "20" }
+ { "size" = "0" }
+ { "istape" = "n" } }
+ { "archive2"
+ { "device" = "/dev/rmt/1" }
+ { "block" = "20" }
+ { "size" = "0" }
+ { "istape" = "y" } }
+ { "archive3"
+ { "device" = "/dev/rmt/1n" }
+ { "block" = "20" }
+ { "size" = "0" } }
+ { "archive4"
+ { "device" = "/dev/rmt/0" }
+ { "block" = "126" }
+ { "size" = "0" } }
+ { "archive5"
+ { "device" = "/dev/rmt/0n" }
+ { "block" = "126" }
+ { "size" = "0" } }
+ { "archive6"
+ { "device" = "/dev/rmt/1" }
+ { "block" = "126" }
+ { "size" = "0" } }
+ { "archive7"
+ { "device" = "/dev/rmt/1n" }
+ { "block" = "126" }
+ { "size" = "0" } }
--- /dev/null
+(*
+ Some of the configuration snippets have been copied from the strongSwan
+ source tree.
+*)
+
+module Test_Strongswan =
+
+(* conf/strongswan.conf *)
+let default = "
+# strongswan.conf - strongSwan configuration file
+#
+# Refer to the strongswan.conf(5) manpage for details
+#
+# Configuration changes should be made in the included files
+
+charon {
+ load_modular = yes
+ plugins {
+ include strongswan.d/charon/*.conf
+ }
+}
+
+include strongswan.d/*.conf
+"
+
+test Strongswan.lns get default =
+ { "#comment" = "strongswan.conf - strongSwan configuration file" }
+ { "#comment" = "Refer to the strongswan.conf(5) manpage for details" }
+ { "#comment" = "Configuration changes should be made in the included files" }
+ { "charon"
+ { "load_modular" = "yes" }
+ { "plugins" { "include" = "strongswan.d/charon/*.conf" } }
+ }
+ { "include" = "strongswan.d/*.conf" }
+
+(* conf/strongswan.conf.5.head.in *)
+let man_example = "
+ a = b
+ section-one {
+ somevalue = asdf
+ subsection {
+ othervalue = xxx
+ }
+ # yei, a comment
+ yetanother = zz
+ }
+ section-two {
+ x = 12
+ }
+"
+
+test Strongswan.lns get man_example =
+ { "a" = "b" }
+ { "section-one"
+ { "somevalue" = "asdf" }
+ { "subsection" { "othervalue" = "xxx" } }
+ { "#comment" = "yei, a comment" }
+ { "yetanother" = "zz" }
+ }
+ { "section-two" { "x" = "12" } }
+
+test Strongswan.lns get "foo { bar = baz\n } quux {}\t#quuux\n" =
+ { "foo" { "bar" = "baz" } }
+ { "quux" }
+ { "#comment" = "quuux" }
+
+
+let connection = "
+
+connections {
+ foo {
+ pools = bar, baz
+ proposals = aes256gcm16-aes128gcm16-ecp512, aes256-sha256-sha1-ecp256-modp4096-modp2048, 3des-md5-modp768
+ }
+ children {
+ bar {
+ esp_proposals = aes128-sha256-sha1,3des-md5
+ }
+ }
+}
+"
+
+test Strongswan.lns get connection =
+ { "connections"
+ { "foo"
+ { "#list" = "pools"
+ { "1" = "bar" }
+ { "2" = "baz" }
+ }
+ { "#proposals" = "proposals"
+ { "1"
+ { "1" = "aes256gcm16" }
+ { "2" = "aes128gcm16" }
+ { "3" = "ecp512" }
+ }
+ { "2"
+ { "1" = "aes256" }
+ { "2" = "sha256" }
+ { "3" = "sha1" }
+ { "4" = "ecp256" }
+ { "5" = "modp4096" }
+ { "6" = "modp2048" }
+ }
+ { "3"
+ { "1" = "3des" }
+ { "2" = "md5" }
+ { "3" = "modp768" }
+ }
+ }
+ }
+ { "children"
+ { "bar"
+ { "#proposals" = "esp_proposals"
+ { "1"
+ { "1" = "aes128" }
+ { "2" = "sha256" }
+ { "3" = "sha1" }
+ }
+ { "2"
+ { "1" = "3des" }
+ { "2" = "md5" }
+ }
+ }
+ }
+ }
+ }
--- /dev/null
+module Test_stunnel =
+ let conf = "; Test stunnel-like config file
+; Foo bar baz
+cert = /path/1
+key = /path/2
+
+sslVersion = SSLv3
+
+; another comment
+
+[service1]
+accept = 49999
+connect = servicedest:1234
+
+[service2]
+accept = 1234
+"
+
+ test Stunnel.lns get conf =
+ { ".anon"
+ { "#comment" = "Test stunnel-like config file" }
+ { "#comment" = "Foo bar baz" }
+ { "cert" = "/path/1" }
+ { "key" = "/path/2" }
+ {}
+ { "sslVersion" = "SSLv3" }
+ {}
+ { "#comment" = "another comment" }
+ {}
+ }
+ { "service1"
+ { "accept" = "49999" }
+ { "connect" = "servicedest:1234" }
+ {}
+ }
+ { "service2"
+ { "accept" = "1234" }
+ }
--- /dev/null
+(*
+Module: Test_Subversion
+ Provides unit tests and examples for the <Subversion> lens.
+*)
+
+module Test_Subversion =
+
+(* Variable: conf *)
+let conf = "# This file configures various client-side behaviors.
+[auth]
+password-stores = gnome-keyring,kwallet
+store-passwords = no
+store-auth-creds = no
+
+[helpers]
+editor-cmd = /usr/bin/vim
+diff-cmd = /usr/bin/diff
+diff3-cmd = /usr/bin/diff3
+diff3-has-program-arg = yes
+
+[tunnels]
+ssh = $SVN_SSH ssh -o ControlMaster=no
+rsh = /path/to/rsh -l myusername
+
+[miscellany]
+global-ignores = *.o *.lo *.la *.al .libs *.so *.so.[0-9]* *.a *.pyc *.pyo
+ *.rej *~ #*# .#* .*.swp .DS_Store
+# Set log-encoding to the default encoding for log messages
+log-encoding = latin1
+use-commit-times = yes
+no-unlock = yes
+mime-types-file = /path/to/mime.types
+preserved-conflict-file-exts = doc ppt xls od?
+enable-auto-props = yes
+interactive-conflicts = no
+
+[auto-props]
+*.c = svn:eol-style=native
+*.cpp = svn:eol-style=native
+*.h = svn:eol-style=native
+*.dsp = svn:eol-style=CRLF
+*.dsw = svn:eol-style=CRLF
+*.sh = svn:eol-style=native;svn:executable
+*.txt = svn:eol-style=native
+*.png = svn:mime-type=image/png
+*.jpg = svn:mime-type=image/jpeg
+Makefile = svn:eol-style=native
+"
+
+(* Test: Subversion.lns *)
+test Subversion.lns get conf =
+{ "#comment" = "This file configures various client-side behaviors." }
+ { "auth"
+ { "password-stores"
+ { "1" = "gnome-keyring" }
+ { "2" = "kwallet" } }
+ { "store-passwords" = "no" }
+ { "store-auth-creds" = "no" }
+ { }
+ }
+ { "helpers"
+ { "editor-cmd" = "/usr/bin/vim" }
+ { "diff-cmd" = "/usr/bin/diff" }
+ { "diff3-cmd" = "/usr/bin/diff3" }
+ { "diff3-has-program-arg" = "yes" }
+ { }
+ }
+ { "tunnels"
+ { "ssh" = "$SVN_SSH ssh -o ControlMaster=no" }
+ { "rsh" = "/path/to/rsh -l myusername" }
+ { }
+ }
+ { "miscellany"
+ { "global-ignores"
+ { "1" = "*.o" }
+ { "2" = "*.lo" }
+ { "3" = "*.la" }
+ { "4" = "*.al" }
+ { "5" = ".libs" }
+ { "6" = "*.so" }
+ { "7" = "*.so.[0-9]*" }
+ { "8" = "*.a" }
+ { "9" = "*.pyc" }
+ { "10" = "*.pyo" }
+ { "11" = "*.rej" }
+ { "12" = "*~" }
+ { "13" = "#*#" }
+ { "14" = ".#*" }
+ { "15" = ".*.swp" }
+ { "16" = ".DS_Store" } }
+ { "#comment" = "Set log-encoding to the default encoding for log messages" }
+ { "log-encoding" = "latin1" }
+ { "use-commit-times" = "yes" }
+ { "no-unlock" = "yes" }
+ { "mime-types-file" = "/path/to/mime.types" }
+ { "preserved-conflict-file-exts"
+ { "1" = "doc" }
+ { "2" = "ppt" }
+ { "3" = "xls" }
+ { "4" = "od?" } }
+ { "enable-auto-props" = "yes" }
+ { "interactive-conflicts" = "no" }
+ { }
+ }
+ { "auto-props"
+ { "*.c" = "svn:eol-style=native" }
+ { "*.cpp" = "svn:eol-style=native" }
+ { "*.h" = "svn:eol-style=native" }
+ { "*.dsp" = "svn:eol-style=CRLF" }
+ { "*.dsw" = "svn:eol-style=CRLF" }
+ { "*.sh" = "svn:eol-style=native;svn:executable" }
+ { "*.txt" = "svn:eol-style=native" }
+ { "*.png" = "svn:mime-type=image/png" }
+ { "*.jpg" = "svn:mime-type=image/jpeg" }
+ { "Makefile" = "svn:eol-style=native" }
+ }
+
+
--- /dev/null
+(* Module: Test_sudoers *)
+module Test_sudoers =
+
+let test_user = [ label "user" . Sudoers.sto_to_com_user . Util.eol ]*
+
+(* Test: test_user *)
+test test_user get "root
+@pbuilder
++secre-taries
+@my\ admin\ group
+EXAMPLE\\\\cslack
+%ad.domain.com\\\\sudo-users
+MY\ EX-AMPLE\ 9\\\\cslack\ group
+" =
+ { "user" = "root" }
+ { "user" = "@pbuilder" }
+ { "user" = "+secre-taries" }
+ { "user" = "@my\\ admin\\ group" }
+ { "user" = "EXAMPLE\\\\cslack" }
+ { "user" = "%ad.domain.com\\\\sudo-users" }
+ { "user" = "MY\\ EX-AMPLE\\ 9\\\\cslack\\ group" }
+
+let conf = "
+ Host_Alias LOCALNET = 192.168.0.0/24, localhost
+
+ # User alias specification
+
+User_Alias EXAMPLE_ADMINS = cslack, EXAMPLE\\\\cslack,\
+ EXAMPLE\\\\jmalstrom
+
+# Cmnd alias specification
+
+Cmnd_Alias \
+ DEBIAN_TOOLS \
+ = \
+ /usr/bin/apt-get,\
+ /usr/bin/auto-get, \
+ /usr/bin/dpkg, /usr/bin/dselect, /usr/sbin/dpkg-reconfigure \
+ : PBUILDER = /usr/sbin/pbuilder
+
+ Cmnd_Alias ICAL = /bin/cat /home/rpinson/.kde/share/apps/korganizer/std.ics
+
+ Defaults@LOCALNET !lecture, \
+ \t\t tty_tickets,!fqdn, !!env_reset
+
+Defaults !visiblepw
+
+Defaults:buildd env_keep+=\"APT_CONFIG DEBIAN_FRONTEND SHELL\"
+Defaults!PBUILDER env_keep+=\"HOME ARCH DIST DISTRIBUTION PDEBUILD_PBUILDER\"
+
+# User privilege specification
+root ALL=(ALL) ALL
+root ALL=(: ALL) ALL
+root ALL=(ALL :ALL) ALL
+
+# Members of the admin group may gain root privileges
+%admin ALL=(ALL) ALL, NOPASSWD : NOSETENV: \
+ DEBIAN_TOOLS
+%pbuilder LOCALNET = NOPASSWD: PBUILDER
+www-data +biglab=(rpinson)NOEXEC: ICAL \
+ : \
+ localhost = NOPASSWD: /usr/bin/test
+
+ +secretaries ALPHA = /usr/bin/su [!-]*, !/usr/bin/su *root*
+
+@my\ admin\ group ALL=(root) NOPASSWD: /usr/bin/python /usr/local/sbin/filterlog -iu\\=www /var/log/something.log
+#includedir /etc/sudoers.d
+#include /etc/sudoers.d
+@includedir /etc/sudoers.d
+@include /etc/sudoers.file
+"
+
+ test Sudoers.lns get conf =
+ {}
+ { "Host_Alias"
+ { "alias"
+ { "name" = "LOCALNET" }
+ { "host" = "192.168.0.0/24" }
+ { "host" = "localhost" } } }
+ {}
+ { "#comment" = "User alias specification" }
+ {}
+ { "User_Alias"
+ { "alias"
+ { "name" = "EXAMPLE_ADMINS" }
+ { "user" = "cslack" }
+ { "user" = "EXAMPLE\\\\cslack" }
+ { "user" = "EXAMPLE\\\\jmalstrom" } } }
+ {}
+ { "#comment" = "Cmnd alias specification" }
+ {}
+ { "Cmnd_Alias"
+ { "alias"
+ { "name" = "DEBIAN_TOOLS" }
+ { "command" = "/usr/bin/apt-get" }
+ { "command" = "/usr/bin/auto-get" }
+ { "command" = "/usr/bin/dpkg" }
+ { "command" = "/usr/bin/dselect" }
+ { "command" = "/usr/sbin/dpkg-reconfigure" } }
+ { "alias"
+ { "name" = "PBUILDER" }
+ { "command" = "/usr/sbin/pbuilder" } } }
+ {}
+ { "Cmnd_Alias"
+ { "alias"
+ { "name" = "ICAL" }
+ { "command" = "/bin/cat /home/rpinson/.kde/share/apps/korganizer/std.ics" } } }
+ {}
+ { "Defaults"
+ { "type" = "@LOCALNET" }
+ { "lecture" { "negate" } }
+ { "tty_tickets" }
+ { "fqdn" { "negate" } }
+ { "env_reset" } }
+ {}
+ { "Defaults"
+ { "visiblepw" { "negate" } } }
+ {}
+ { "Defaults"
+ { "type" = ":buildd" }
+ { "env_keep"
+ { "append" }
+ { "var" = "APT_CONFIG" }
+ { "var" = "DEBIAN_FRONTEND" }
+ { "var" = "SHELL" } } }
+ { "Defaults"
+ { "type" = "!PBUILDER" }
+ { "env_keep"
+ { "append" }
+ { "var" = "HOME" }
+ { "var" = "ARCH" }
+ { "var" = "DIST" }
+ { "var" = "DISTRIBUTION" }
+ { "var" = "PDEBUILD_PBUILDER" } } }
+ {}
+ { "#comment" = "User privilege specification" }
+ { "spec"
+ { "user" = "root" }
+ { "host_group"
+ { "host" = "ALL" }
+ { "command" = "ALL"
+ { "runas_user" = "ALL" } } } }
+ { "spec"
+ { "user" = "root" }
+ { "host_group"
+ { "host" = "ALL" }
+ { "command" = "ALL"
+ { "runas_group" = "ALL" } } } }
+ { "spec"
+ { "user" = "root" }
+ { "host_group"
+ { "host" = "ALL" }
+ { "command" = "ALL"
+ { "runas_user" = "ALL" }
+ { "runas_group" = "ALL" } } } }
+ {}
+ { "#comment" = "Members of the admin group may gain root privileges" }
+ { "spec"
+ { "user" = "%admin" }
+ { "host_group"
+ { "host" = "ALL" }
+ { "command" = "ALL"
+ { "runas_user" = "ALL" } }
+ { "command" = "DEBIAN_TOOLS"
+ { "tag" = "NOPASSWD" }
+ { "tag" = "NOSETENV" } } } }
+ { "spec"
+ { "user" = "%pbuilder" }
+ { "host_group"
+ { "host" = "LOCALNET" }
+ { "command" = "PBUILDER"
+ { "tag" = "NOPASSWD" } } } }
+ { "spec"
+ { "user" = "www-data" }
+ { "host_group"
+ { "host" = "+biglab" }
+ { "command" = "ICAL"
+ { "runas_user" = "rpinson" }
+ { "tag" = "NOEXEC" } } }
+ { "host_group"
+ { "host" = "localhost" }
+ { "command" = "/usr/bin/test"
+ { "tag" = "NOPASSWD" } } } }
+ {}
+ { "spec"
+ { "user" = "+secretaries" }
+ { "host_group"
+ { "host" = "ALPHA" }
+ { "command" = "/usr/bin/su [!-]*" }
+ { "command" = "/usr/bin/su *root*"
+ { "negate" } } } }
+ {}
+ { "spec"
+ { "user" = "@my\\ admin\\ group" }
+ { "host_group"
+ { "host" = "ALL" }
+ { "command" = "/usr/bin/python /usr/local/sbin/filterlog -iu\\=www /var/log/something.log"
+ { "runas_user" = "root" }
+ { "tag" = "NOPASSWD" }
+ }
+ }
+ }
+ { "#includedir" = "/etc/sudoers.d" }
+ { "#include" = "/etc/sudoers.d" }
+ { "@includedir" = "/etc/sudoers.d" }
+ { "@include" = "/etc/sudoers.file" }
+
+test Sudoers.parameter_integer_bool
+ put "umask = 022"
+ after set "/umask/negate" "" = "!umask"
+
+test Sudoers.parameter_integer_bool
+ put "!!!!!umask"
+ after rm "/umask/negate"; set "/umask" "022" = "!!!!umask = 022"
+
+test Sudoers.parameter_integer_bool put "!!!!umask = 022" after
+ set "/umask/negate" "" = "!!!!!umask"
+
+test Sudoers.parameter_integer_bool get "!!!umask = 022" = *
+
+(* BZ 566134 *)
+
+let s = "Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin\n"
+test Sudoers.lns get s =
+ { "Defaults"
+ { "secure_path" = "/sbin:/bin:/usr/sbin:/usr/bin" } }
+
+(* #724 - check timestamp_timeout is extracted OK if unsigned OR negative (-1) *)
+test Sudoers.lns get "Defaults timestamp_timeout = 3\n" =
+ { "Defaults"
+ { "timestamp_timeout" = "3" } }
+test Sudoers.lns get "Defaults timestamp_timeout = -1\n" =
+ { "Defaults"
+ { "timestamp_timeout" = "-1" } }
+
+(* Ticket #206, comments at end of lines *)
+let commenteol = "#
+Defaults targetpw # ask for
+Host_Alias LOCALNET = 192.168.0.0/24 # foo eol
+root ALL=(ALL) ALL # all root\n"
+test Sudoers.lns get commenteol =
+ {}
+ { "Defaults"
+ { "targetpw" }
+ { "#comment" = "ask for" } }
+ { "Host_Alias"
+ { "alias"
+ { "name" = "LOCALNET" }
+ { "host" = "192.168.0.0/24" } }
+ { "#comment" = "foo eol" } }
+ { "spec"
+ { "user" = "root" }
+ { "host_group"
+ { "host" = "ALL" }
+ { "command" = "ALL"
+ { "runas_user" = "ALL" } } }
+ { "#comment" = "all root" } }
+
+(* Allow = in commands *)
+test Sudoers.spec get "root ALL= /usr/bin/mylvmbackup --configfile=/etc/mylvbackup_amanda.conf\n" =
+ { "spec"
+ { "user" = "root" }
+ { "host_group"
+ { "host" = "ALL" }
+ { "command" = "/usr/bin/mylvmbackup --configfile=/etc/mylvbackup_amanda.conf" } } }
+
+(* Allow commands without a full path
+ -- if they begin with a lowercase letter *)
+test Sudoers.spec get "root ALL= sudoedit /etc/passwd\n" =
+ { "spec"
+ { "user" = "root" }
+ { "host_group"
+ { "host" = "ALL" }
+ { "command" = "sudoedit /etc/passwd" } } }
+
+(* Ticket #263, quoted values in defaults line *)
+let defaults_spaces = "Defaults passprompt=\"Your SecurID Passcode: \"\n"
+test Sudoers.lns get defaults_spaces =
+ { "Defaults"
+ { "passprompt" = "\"Your SecurID Passcode: \"" }
+ }
+
+(* Ticket #263, quoted values in defaults line (string/bool parameters) *)
+let defaults_spaces_strbool = "Defaults mailfrom=\"root@example.com\"\n"
+test Sudoers.lns get defaults_spaces_strbool =
+ { "Defaults"
+ { "mailfrom" = "\"root@example.com\"" }
+ }
+
+(* Test: Sudoers.spec
+ Spec users can be aliases *)
+test Sudoers.spec get "APACHE_ADMIN ALL= ALL\n" =
+ { "spec"
+ { "user" = "APACHE_ADMIN" }
+ { "host_group"
+ { "host" = "ALL" }
+ { "command" = "ALL" } } }
+
+(* Test: Sudoers.spec
+ Ticket #337: allow period in user names *)
+test Sudoers.spec get "user.one somehost = ALL\n" =
+ { "spec"
+ { "user" = "user.one" }
+ { "host_group"
+ { "host" = "somehost" }
+ { "command" = "ALL" }
+ }
+ }
+
+(* Test: Sudoers.spec
+ Ticket #370: allow underscore in group names *)
+test Sudoers.spec get "%sudo_users ALL=(ALL) ALL\n" =
+ { "spec"
+ { "user" = "%sudo_users" }
+ { "host_group"
+ { "host" = "ALL" }
+ { "command" = "ALL"
+ { "runas_user" = "ALL" } }
+ }
+ }
+
+(* Test: Sudoers.spec
+ allow ad group names with backslashes *)
+test Sudoers.spec get "%ad.domain.com\\\\sudo-users ALL=(ALL) ALL\n" =
+ { "spec"
+ { "user" = "%ad.domain.com\\\\sudo-users" }
+ { "host_group"
+ { "host" = "ALL" }
+ { "command" = "ALL"
+ { "runas_user" = "ALL" } }
+ }
+ }
+
+(* Test: Sudoers.spec
+ Ticket #376: allow uppercase characters in user names *)
+test Sudoers.spec get "%GrOup ALL = (ALL) ALL\n" =
+ { "spec"
+ { "user" = "%GrOup" }
+ { "host_group"
+ { "host" = "ALL" }
+ { "command" = "ALL"
+ { "runas_user" = "ALL" } }
+ }
+ }
+
+(* Test: Sudoers.spec
+ allow + in user-/groupnames *)
+test Sudoers.spec get "group+user somehost = ALL\n" =
+ { "spec"
+ { "user" = "group+user" }
+ { "host_group"
+ { "host" = "somehost" }
+ { "command" = "ALL" }
+ }
+ }
+
+(* Test: Sudoers.spec
+ GH #262: Sudoers lens doesn't support `!` for command aliases *)
+test Sudoers.spec get "%opssudoers ALL=(ALL) ALL, !!!BANNED\n" =
+ { "spec"
+ { "user" = "%opssudoers" }
+ { "host_group"
+ { "host" = "ALL" }
+ { "command" = "ALL"
+ { "runas_user" = "ALL" } }
+ { "command" = "BANNED"
+ { "negate" } }
+ }
+ }
+
+(* Test: Sudoers.spec
+ Handle multiple `!` properly in commands *)
+test Sudoers.spec get "%opssudoers ALL=(ALL) ALL, !!!/bin/mount\n" =
+ { "spec"
+ { "user" = "%opssudoers" }
+ { "host_group"
+ { "host" = "ALL" }
+ { "command" = "ALL"
+ { "runas_user" = "ALL" } }
+ { "command" = "/bin/mount"
+ { "negate" } }
+ }
+ }
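+
+(* Test: Sudoers.spec
+ Illustrative sketch, not part of the original suite: a single `!`
+ should negate a command just like an odd run of `!` does *)
+test Sudoers.spec get "%opssudoers ALL=(ALL) ALL, !/bin/umount\n" =
+ { "spec"
+ { "user" = "%opssudoers" }
+ { "host_group"
+ { "host" = "ALL" }
+ { "command" = "ALL"
+ { "runas_user" = "ALL" } }
+ { "command" = "/bin/umount"
+ { "negate" } }
+ }
+ }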
--- /dev/null
+(* Test for sysconfig lens *)
+module Test_sysconfig =
+
+ let lns = Sysconfig.lns
+
+ let eth_static = "# Intel Corporation PRO/100 VE Network Connection
+DEVICE=eth0
+BOOTPROTO=static
+BROADCAST=172.31.0.255
+HWADDR=ab:cd:ef:12:34:56
+export IPADDR=172.31.0.31 # this is our IP
+#DHCP_HOSTNAME=host.example.com
+NETMASK=255.255.255.0
+NETWORK=172.31.0.0
+unset ONBOOT # We do not want this var
+"
+ let empty_val = "EMPTY=\nDEVICE=eth0\n"
+
+ let key_brack = "SOME_KEY[1]=\nDEVICE=eth0\n"
+
+ test lns get eth_static =
+ { "#comment" = "Intel Corporation PRO/100 VE Network Connection" }
+ { "DEVICE" = "eth0" }
+ { "BOOTPROTO" = "static" }
+ { "BROADCAST" = "172.31.0.255" }
+ { "HWADDR" = "ab:cd:ef:12:34:56" }
+ { "IPADDR" = "172.31.0.31"
+ { "export" }
+ { "#comment" = "this is our IP" } }
+ { "#comment" = "DHCP_HOSTNAME=host.example.com" }
+ { "NETMASK" = "255.255.255.0" }
+ { "NETWORK" = "172.31.0.0" }
+ { "@unset"
+ { "1" = "ONBOOT" }
+ { "#comment" = "We do not want this var" } }
+
+ test lns put eth_static after
+ set "BOOTPROTO" "dhcp" ;
+ rm "IPADDR" ;
+ rm "BROADCAST" ;
+ rm "NETMASK" ;
+ rm "NETWORK"
+ = "# Intel Corporation PRO/100 VE Network Connection
+DEVICE=eth0
+BOOTPROTO=dhcp
+HWADDR=ab:cd:ef:12:34:56
+#DHCP_HOSTNAME=host.example.com
+unset ONBOOT # We do not want this var
+"
+ test lns get empty_val =
+ { "EMPTY" = "" } { "DEVICE" = "eth0" }
+
+ test lns get key_brack =
+ { "SOME_KEY[1]" = "" } { "DEVICE" = "eth0" }
+
+ test lns get "smartd_opts=\"-q never\"\n" =
+ { "smartd_opts" = "-q never" }
+
+ test lns get "var=val \n" = { "var" = "val" }
+
+ test lns get ". /etc/java/java.conf\n" =
+ { ".source" = "/etc/java/java.conf" }
+
+ (* Quoted strings and other oddities *)
+ test lns get "var=\"foo 'bar'\"\n" =
+ { "var" = "foo 'bar'" }
+
+ test lns get "var=\"eth0\"\n" =
+ { "var" = "eth0" }
+
+ test lns get "var='eth0'\n" =
+ { "var" = "eth0" }
+
+ test lns get "var='Some \"funny\" value'\n" =
+ { "var" = "Some \"funny\" value" }
+
+ test lns get "var=\"\\\"\"\n" =
+ { "var" = "\\\"" }
+
+ test lns get "var=\\\"\n" =
+ { "var" = "\\\"" }
+
+ test lns get "var=ab#c\n" =
+ { "var" = "ab#c" }
+
+ test lns get "var='ab#c'\n" =
+ { "var" = "ab#c" }
+
+ test lns get "var=\"ab#c\"\n" =
+ { "var" = "ab#c" }
+
+ (* We don't handle backticks *)
+ test lns get
+ "var=`grep nameserver /etc/resolv.conf | head -1`\n" = *
+
+ test lns get "var=ab #c\n" =
+ { "var" = "ab"
+ { "#comment" = "c" } }
+
+ test lns put "var=ab #c\n"
+ after rm "/var/#comment" = "var=ab\n"
+
+ test lns put "var=ab\n"
+ after set "/var/#comment" "this is a var" =
+ "var=ab # this is a var\n"
+
+ (* Test semicolons *)
+ test lns get "VAR1=\"this;is;a;test\"\nVAR2=this;\n" =
+ { "VAR1" = "this;is;a;test" }
+ { "VAR2" = "this" }
+
+ (* BZ 761246 *)
+ test lns get "DEVICE=\"eth0\";\n" =
+ { "DEVICE" = "eth0" }
+
+ test lns put "DEVICE=\"eth0\";\n" after
+ set "/DEVICE" "em1" = "DEVICE=\"em1\";\n"
+
+ test lns get "DEVICE=\"eth0\"; # remark\n" =
+ { "DEVICE" = "eth0" }
+ { "#comment" = "remark" }
+
+ (* Bug 109: allow a bare export *)
+ test lns get "export FOO\n" =
+ { "@export"
+ { "1" = "FOO" } }
+
+ (* Check we put quotes in when changes require them *)
+ test lns put "var=\"v\"\n" after rm "/foo" =
+ "var=\"v\"\n"
+
+ test lns put "var=v\n" after set "/var" "v w" =
+ "var=\"v w\"\n"
+
+ test lns put "var='v'\n" after set "/var" "v w" =
+ "var='v w'\n"
+
+ test lns put "var=v\n" after set "/var" "v'w" =
+ "var=\"v'w\"\n"
+
+ test lns put "var=v\n" after set "/var" "v\"w" =
+ "var='v\"w'\n"
+
+ (* RHBZ#1043636: empty comment lines after comments *)
+ test lns get "#MOUNTD_NFS_V3\n#\n" =
+ { "#comment" = "MOUNTD_NFS_V3" }
+
+ (* Handle leading whitespace at the beginning of a line correctly *)
+ test lns get " var=value\n" = { "var" = "value" }
+
+ test lns put " var=value\n" after set "/var" "val2" = " var=val2\n"
+
+ test lns get "\t \tvar=value\n" = { "var" = "value" }
+
+ test lns get " export var=value\n" = { "var" = "value" { "export" } }
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+(*
+Module: Test_Sysconfig_Route
+ Provides unit tests and examples for the <Sysconfig_Route> lens.
+*)
+module Test_sysconfig_route =
+
+(* Test: Sysconfig_Route.lns *)
+test Sysconfig_Route.lns get "10.40.11.102/32 via 10.40.8.1\n10.1.8.0/24 via 10.40.8.254\n" =
+{ "10.40.8.1" = "10.40.11.102/32" }
+{ "10.40.8.254" = "10.1.8.0/24" }
+
+(* Test: Sysconfig_Route.lns *)
+test Sysconfig_Route.lns get "10.40.11.102/32 via 10.40.8.1\n10.1.8.0/24 via 10.40.8.1\n" =
+{ "10.40.8.1" = "10.40.11.102/32" }
+{ "10.40.8.1" = "10.1.8.0/24" }
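+
+(* Test: Sysconfig_Route.lns
+ Illustrative put sketch, not part of the original suite: changing the
+ network for an existing gateway keeps the "via" separator intact *)
+test Sysconfig_Route.lns put "10.40.11.102/32 via 10.40.8.1\n" after
+ set "/10.40.8.1" "10.0.0.0/8"
+ = "10.0.0.0/8 via 10.40.8.1\n"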
--- /dev/null
+(*
+Module: Test_Sysctl
+ Provides unit tests and examples for the <Sysctl> lens.
+*)
+
+module Test_sysctl =
+
+(* Variable: default_sysctl *)
+let default_sysctl = "# Kernel sysctl configuration file
+# Controls IP packet forwarding
+net.ipv4.ip_forward = 0
+
+net.ipv4.conf.default.rp_filter = 1
+net.ipv4.conf.default.accept_source_route = \t0
+kernel.sysrq = 0
+
+; Semicolon comments are also allowed
+net.ipv4.tcp_mem = \t393216 524288 786432
+"
+
+(* Variable: spec_chars_sysctl *)
+let spec_chars_sysctl = "# Kernel sysctl configuration file
+# Controls IP packet forwarding
+net.ipv4.conf.*.rp_filter = 2
+net.ipv4.conf.ib0:0.arp_filter = 1
+"
+
+(* Variable: slash_in_sysctl_key *)
+let slash_in_sysctl_key = "# Kernel sysctl configuration file
+# slash and dot can be interchanged
+net/ipv4/conf/enp3s0.200/forwarding = 1
+"
+
+(* Test: Sysctl.lns *)
+test Sysctl.lns get default_sysctl =
+ { "#comment" = "Kernel sysctl configuration file" }
+ { "#comment" = "Controls IP packet forwarding"}
+ { "net.ipv4.ip_forward" = "0" }
+ { }
+ { "net.ipv4.conf.default.rp_filter" = "1" }
+ { "net.ipv4.conf.default.accept_source_route" = "0" }
+ { "kernel.sysrq" = "0" }
+ { }
+ { "#comment" = "Semicolon comments are also allowed" }
+ { "net.ipv4.tcp_mem" = "393216 524288 786432" }
+
+(* Test: Sysctl.lns *)
+test Sysctl.lns get spec_chars_sysctl =
+ { "#comment" = "Kernel sysctl configuration file" }
+ { "#comment" = "Controls IP packet forwarding"}
+ { "net.ipv4.conf.*.rp_filter" = "2" }
+ { "net.ipv4.conf.ib0:0.arp_filter" = "1" }
+
+(* Test: Sysctl.lns *)
+test Sysctl.lns get slash_in_sysctl_key =
+ { "#comment" = "Kernel sysctl configuration file" }
+ { "#comment" = "slash and dot can be interchanged"}
+ { "net/ipv4/conf/enp3s0.200/forwarding" = "1" }
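+
+(* Test: Sysctl.lns
+ Illustrative sketch, not part of the original suite: whitespace around
+ the "=" separator is optional *)
+test Sysctl.lns get "net.ipv4.ip_forward=1\n" =
+ { "net.ipv4.ip_forward" = "1" }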
+
+(* Test: Sysctl.lns *)
+test Sysctl.lns put default_sysctl after
+ set "net.ipv4.ip_forward" "1" ;
+ rm "net.ipv4.conf.default.rp_filter" ;
+ rm "net.ipv4.conf.default.accept_source_route" ;
+ rm "kernel.sysrq"
+ = "# Kernel sysctl configuration file
+# Controls IP packet forwarding
+net.ipv4.ip_forward = 1
+
+
+; Semicolon comments are also allowed
+net.ipv4.tcp_mem = \t393216 524288 786432
+"
+
+(* Test: Sysctl.lns *)
+test Sysctl.lns put spec_chars_sysctl after
+ set "net.ipv4.conf.*.rp_filter" "0" ;
+ set "net.ipv4.conf.ib0:0.arp_filter" "0"
+ = "# Kernel sysctl configuration file
+# Controls IP packet forwarding
+net.ipv4.conf.*.rp_filter = 0
+net.ipv4.conf.ib0:0.arp_filter = 0
+"
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+module Test_syslog =
+
+ let conf="# $FreeBSD: src/etc/syslog.conf,v 1.30.2.1 2009/08/03 08:13:06 kensmith Exp $
+#
+
+daemon.info /var/log/cvsupd.log
+security.* -/var/log/security
+*.notice;authpriv.none;kern.debug;lpr.info;mail.crit;news.err /var/log/messages
+uucp,news.crit /var/log/spooler
+*.emerg *
+daemon.!info /var/log/foo
+daemon.<=info /var/log/foo
+daemon.!<=info /var/log/foo
+*.* @syslog.far.away
+*.* @syslog.far.away:123
+*.* @@syslog.far.away
+*.* @@syslog.far.away:123
+*.* @[2001::1]:514
+*.* foo,bar
+*.* |\"/usr/bin/soft arg\"
+!startslip
+# get that out
+*.* /var/log/slip.log
+!pppd,ppp
+
+*.* /var/log/ppp.log
+!+startslip
+*.* /var/log/slip.log
+!-startslip
+*.* /var/log/slip.log
+#!pppd
+*.* /var/log/ppp.log
++foo.example.com
+daemon.info /var/log/cvsupd.log
++foo.example.com,bar.example.com
+daemon.info /var/log/cvsupd.log
+#+bar.example.com
+daemon.info /var/log/cvsupd.log
+-foo.example.com
+daemon.info /var/log/cvsupd.log
++*
+daemon.info /var/log/cvsupd.log
+!*
+daemon.info /var/log/cvsupd.log
+*.=debug;\
+ auth,authpriv.none;\
+ news.none;mail.none -/var/log/debug
+# !pppd
+"
+
+ test Syslog.lns get conf =
+ { "#comment" = "$FreeBSD: src/etc/syslog.conf,v 1.30.2.1 2009/08/03 08:13:06 kensmith Exp $" }
+ { }
+ { }
+ { "entry"
+ { "selector" { "facility" = "daemon" } { "level" = "info" } }
+ { "action" { "file" = "/var/log/cvsupd.log" } }
+ }
+ { "entry"
+ { "selector" { "facility" = "security" } { "level" = "*" } }
+ { "action" { "no_sync" } { "file" = "/var/log/security" } }
+ }
+ { "entry"
+ { "selector" { "facility" = "*" } { "level" = "notice" } }
+ { "selector" { "facility" = "authpriv" } { "level" = "none" } }
+ { "selector" { "facility" = "kern" } { "level" = "debug" } }
+ { "selector" { "facility" = "lpr" } { "level" = "info" } }
+ { "selector" { "facility" = "mail" } { "level" = "crit" } }
+ { "selector" { "facility" = "news" } { "level" = "err" } }
+ { "action" { "file" = "/var/log/messages" } }
+ }
+ { "entry"
+ { "selector" { "facility" = "uucp" } { "facility" = "news" } { "level" = "crit" } }
+ { "action" { "file" = "/var/log/spooler" } }
+ }
+ { "entry"
+ { "selector" { "facility" = "*" } { "level" = "emerg" } }
+ { "action" { "user" = "*" } }
+ }
+ { "entry"
+ { "selector" { "facility" = "daemon" } { "comparison" = "!" } { "level" = "info" } }
+ { "action" { "file" = "/var/log/foo" } }
+ }
+ { "entry"
+ { "selector" { "facility" = "daemon" } { "comparison" = "<=" } { "level" = "info" } }
+ { "action" { "file" = "/var/log/foo" } }
+ }
+ { "entry"
+ { "selector" { "facility" = "daemon" } { "comparison" = "!<=" } { "level" = "info" } }
+ { "action" { "file" = "/var/log/foo" } }
+ }
+ { "entry"
+ { "selector" { "facility" = "*" } { "level" = "*" } }
+ { "action" { "protocol" = "@" } { "hostname" = "syslog.far.away" } }
+ }
+ { "entry"
+ { "selector" { "facility" = "*" } { "level" = "*" } }
+ { "action" { "protocol" = "@" } { "hostname" = "syslog.far.away" } { "port" = "123" } }
+ }
+ { "entry"
+ { "selector" { "facility" = "*" } { "level" = "*" } }
+ { "action" { "protocol" = "@@" } { "hostname" = "syslog.far.away" } }
+ }
+ { "entry"
+ { "selector" { "facility" = "*" } { "level" = "*" } }
+ { "action" { "protocol" = "@@" } { "hostname" = "syslog.far.away" } { "port" = "123" } }
+ }
+ { "entry"
+ { "selector" { "facility" = "*" } { "level" = "*" } }
+ { "action" { "protocol" = "@" } { "hostname" = "[2001::1]" } { "port" = "514" } }
+ }
+ { "entry"
+ { "selector" { "facility" = "*" } { "level" = "*" } }
+ { "action" { "user" = "foo" } { "user" = "bar" } }
+ }
+ { "entry"
+ { "selector" { "facility" = "*" } { "level" = "*" } }
+ { "action" { "program" = "\"/usr/bin/soft arg\"" } }
+ }
+ { "program"
+ { "program" = "startslip" }
+ { "#comment" = "get that out" }
+ { "entry"
+ { "selector" { "facility" = "*" } { "level" = "*" } }
+ { "action" { "file" = "/var/log/slip.log" } }
+ }
+ }
+ { "program"
+ { "program" = "pppd" }
+ { "program" = "ppp" }
+ { }
+ { "entry"
+ { "selector" { "facility" = "*" } { "level" = "*" } }
+ { "action" { "file" = "/var/log/ppp.log" } }
+ }
+ }
+ { "program"
+ { "program" = "startslip" }
+ { "entry"
+ { "selector" { "facility" = "*" } { "level" = "*" } }
+ { "action" { "file" = "/var/log/slip.log" } }
+ }
+ }
+ { "program"
+ { "reverse" }
+ { "program" = "startslip" }
+ { "entry"
+ { "selector" { "facility" = "*" } { "level" = "*" } }
+ { "action" { "file" = "/var/log/slip.log" } }
+ }
+ }
+ { "program"
+ { "program" = "pppd" }
+ { "entry"
+ { "selector" { "facility" = "*" } { "level" = "*" } }
+ { "action" { "file" = "/var/log/ppp.log" } }
+ }
+ }
+ { "hostname"
+ { "hostname" = "foo.example.com" }
+ { "entry"
+ { "selector" { "facility" = "daemon" } { "level" = "info" } }
+ { "action" { "file" = "/var/log/cvsupd.log" } }
+ }
+ }
+ { "hostname"
+ { "hostname" = "foo.example.com" }
+ { "hostname" = "bar.example.com" }
+ { "entry"
+ { "selector" { "facility" = "daemon" } { "level" = "info" } }
+ { "action" { "file" = "/var/log/cvsupd.log" } }
+ }
+ }
+ { "hostname"
+ { "hostname" = "bar.example.com" }
+ { "entry"
+ { "selector" { "facility" = "daemon" } { "level" = "info" } }
+ { "action" { "file" = "/var/log/cvsupd.log" } }
+ }
+ }
+ { "hostname"
+ { "reverse" }
+ { "hostname" = "foo.example.com" }
+ { "entry"
+ { "selector" { "facility" = "daemon" } { "level" = "info" } }
+ { "action" { "file" = "/var/log/cvsupd.log" } }
+ }
+ }
+ { "hostname"
+ { "hostname" = "*" }
+ { "entry"
+ { "selector" { "facility" = "daemon" } { "level" = "info" } }
+ { "action" { "file" = "/var/log/cvsupd.log" } }
+ }
+ }
+ { "program"
+ { "program" = "*" }
+ { "entry"
+ { "selector" { "facility" = "daemon" } { "level" = "info" } }
+ { "action" { "file" = "/var/log/cvsupd.log" } } }
+ { "entry"
+ { "selector" { "facility" = "*" } { "comparison" = "=" } { "level" = "debug" } }
+ { "selector" { "facility" = "auth" } { "facility" = "authpriv" } { "level" = "none" } }
+ { "selector" { "facility" = "news" } { "level" = "none" } }
+ { "selector" { "facility" = "mail" } { "level" = "none" } }
+ { "action" { "no_sync" } { "file" = "/var/log/debug" } } }
+ { "#comment" = "!pppd" }
+ }
+
+ (* changing file *)
+ test Syslog.lns put "*.* /var\n" after
+ set "/entry[1]/action/file" "/foo"
+ = "*.* /foo\n"
+
+ (* changing file to discard *)
+ test Syslog.lns put "*.* /var\n" after
+ rm "/entry[1]/action/file" ;
+ set "/entry[1]/action/discard" ""
+ = "*.* ~\n"
+
+ (* removing entry *)
+ test Syslog.lns put "*.* /var\n" after
+ rm "/entry[1]"
+ = ""
+
+ (* changing facility and level *)
+ test Syslog.lns put "*.* /var\n" after
+ set "/entry[1]/selector/facility" "daemon" ;
+ set "/entry[1]/selector/level" "info"
+ = "daemon.info /var\n"
+
+ (* insert a facility *)
+ test Syslog.lns put "daemon.* /var\n" after
+ insa "facility" "/entry/selector/facility" ;
+ set "/entry/selector/facility[2]" "mail"
+ = "daemon,mail.* /var\n"
+
+ (* creating an entry *)
+ test Syslog.lns put "" after
+ set "/entry/selector/facility" "daemon" ;
+ set "/entry/selector/level" "info" ;
+ set "/entry/action/file" "/var"
+ = "daemon.info\t/var\n"
+
+ (* inserting an entry before *)
+ test Syslog.lns put "*.* /var\n" after
+ insb "entry" "/entry" ;
+ set "/entry[1]/selector/facility" "daemon" ;
+ set "/entry[1]/selector/level" "info" ;
+ set "/entry[1]/action/file" "/foo"
+ = "daemon.info /foo\n*.*\t/var\n"
+
+ (* inserting an entry after *)
+ test Syslog.lns put "*.* /var\n" after
+ insa "entry" "/entry" ;
+ set "/entry[2]/selector/facility" "daemon" ;
+ set "/entry[2]/selector/level" "info" ;
+ set "/entry[2]/action/file" "/foo"
+ = "*.* /var\ndaemon.info\t/foo\n"
+
+ (* insert sync on a file *)
+ test Syslog.lns put "*.* /var\n" after
+ insb "no_sync" "/entry/action/file"
+ = "*.* -/var\n"
+
+ (* changing file to remote host *)
+ test Syslog.lns put "*.* /var\n" after
+ rm "/entry/action/file" ;
+ set "/entry/action/protocol" "@" ;
+ set "/entry/action/hostname" "far.far.away"
+ = "*.* @far.far.away\n"
+
+ (* changing file to remote host *)
+ test Syslog.lns put "*.* /var/lib\n" after
+ rm "/entry/action/file" ;
+ set "/entry/action/protocol" "@@" ;
+ set "/entry/action/hostname" "far.far.away"
+ = "*.* @@far.far.away\n"
+
+ (* changing file to * *)
+ test Syslog.lns put "*.* /var\n" after
+ rm "/entry/action/file" ;
+ set "/entry/action/user" "*"
+ = "*.* *\n"
+
+ (* changing file to users *)
+ test Syslog.lns put "*.* /var\n" after
+ rm "/entry/action/file" ;
+ set "/entry/action/user[1]" "john" ;
+ set "/entry/action/user[2]" "paul" ;
+ set "/entry/action/user[3]" "george" ;
+ set "/entry/action/user[4]" "ringo"
+ = "*.* john,paul,george,ringo\n"
+
+ (* changing file to program *)
+ test Syslog.lns put "*.* /var\n" after
+ rm "/entry/action/file" ;
+ set "/entry/action/program" "/usr/bin/foo"
+ = "*.* |/usr/bin/foo\n"
+
+ (* inserting a matching program *)
+ test Syslog.lns put "" after
+ insa "program" "/" ;
+ set "/program/program" "foo"
+ = "!foo\n"
+
+ (* inserting an entry to a matching program *)
+ test Syslog.lns put "!foo\n" after
+ set "/program/entry/selector/facility" "*" ;
+ set "/program/entry/selector/level" "*" ;
+ set "/program/entry/action/file" "/foo"
+ = "!foo\n*.*\t/foo\n"
+
+ (* inserting a matching hostname *)
+ test Syslog.lns put "" after
+ insa "hostname" "/" ;
+ set "/hostname/hostname" "foo.foo.away"
+ = "+foo.foo.away\n"
+
+ (* inserting an entry to a matching hostname *)
+ test Syslog.lns put "+foo.foo.away\n" after
+ set "/hostname/entry/selector/facility" "*" ;
+ set "/hostname/entry/selector/level" "*" ;
+ set "/hostname/entry/action/file" "/foo"
+ = "+foo.foo.away\n*.*\t/foo\n"
+
+ (* inserting a reverse matching hostname *)
+ test Syslog.lns put "" after
+ insa "hostname" "/" ;
+ set "/hostname/reverse" "" ;
+ set "/hostname/hostname" "foo.foo.away"
+ = "-foo.foo.away\n"
+
+ (* tokens can contain capital letters *)
+ test Syslog.lns get "LOCAL5.* -/var/log/foo.log\n" =
+ { "entry"
+ { "selector"
+ { "facility" = "LOCAL5" }
+ { "level" = "*" }
+ }
+ { "action"
+ { "no_sync" }
+ { "file" = "/var/log/foo.log" }
+ }
+ }
+
+ (* test for commented out statements *)
+ test Syslog.lns put "" after
+ set "#comment" "!pppd" = "# !pppd\n"
+
+ (* allow space before comments *)
+ test Syslog.lns get " \t# space comment\n" =
+ { "#comment" = "space comment" }
+
+ test Syslog.lns get "include /etc/syslog.d\n" =
+ { "include" = "/etc/syslog.d" }
--- /dev/null
+(*
+Module: Test_Systemd
+ Provides unit tests and examples for the <Systemd> lens.
+*)
+
+module Test_Systemd =
+
+(* Variable: desc *)
+let desc = "[Unit]
+Description=RPC
+Description=RPC bind service
+Description=RPC bind\\
+service
+Description= Resets System Activity Logs
+"
+(* Test: Systemd.lns *)
+test Systemd.lns get desc =
+ { "Unit"
+ { "Description"
+ { "value" = "RPC" }
+ }
+ { "Description"
+ { "value" = "RPC bind service" }
+ }
+ { "Description"
+ { "value" = "RPC bind\\
+service" }
+ }
+ { "Description"
+ { "value" = "Resets System Activity Logs" }
+ }
+ }
+
+(* Variable: multi *)
+let multi = "[Unit]
+After=syslog.target network.target
+Also=canberra-system-shutdown.service canberra-system-shutdown-reboot.service
+Before=sysinit.target shutdown.target
+CapabilityBoundingSet=CAP_SYS_ADMIN CAP_SETUID CAP_SETGID
+Conflicts=emergency.service emergency.target
+ControlGroup=%R/user/%I/shared cpu:/
+ListenNetlink=kobject-uevent 1
+Requires=shutdown.target umount.target final.target
+Sockets=udev-control.socket udev-kernel.socket
+WantedBy=halt.target poweroff.target
+Wants=local-fs.target swap.target
+Wants=local-fs.target \
+swap.target
+Wants=local-fs.target\
+swap.target
+Wants= local-fs.target
+"
+(* Test: Systemd.lns *)
+test Systemd.lns get multi =
+ { "Unit"
+ { "After"
+ { "value" = "syslog.target" }
+ { "value" = "network.target" }
+ }
+ { "Also"
+ { "value" = "canberra-system-shutdown.service" }
+ { "value" = "canberra-system-shutdown-reboot.service" }
+ }
+ { "Before"
+ { "value" = "sysinit.target" }
+ { "value" = "shutdown.target" }
+ }
+ { "CapabilityBoundingSet"
+ { "value" = "CAP_SYS_ADMIN" }
+ { "value" = "CAP_SETUID" }
+ { "value" = "CAP_SETGID" }
+ }
+ { "Conflicts"
+ { "value" = "emergency.service" }
+ { "value" = "emergency.target" }
+ }
+ { "ControlGroup"
+ { "value" = "%R/user/%I/shared" }
+ { "value" = "cpu:/" }
+ }
+ { "ListenNetlink"
+ { "value" = "kobject-uevent" }
+ { "value" = "1" }
+ }
+ { "Requires"
+ { "value" = "shutdown.target" }
+ { "value" = "umount.target" }
+ { "value" = "final.target" }
+ }
+ { "Sockets"
+ { "value" = "udev-control.socket" }
+ { "value" = "udev-kernel.socket" }
+ }
+ { "WantedBy"
+ { "value" = "halt.target" }
+ { "value" = "poweroff.target" }
+ }
+ { "Wants"
+ { "value" = "local-fs.target" }
+ { "value" = "swap.target" }
+ }
+ { "Wants"
+ { "value" = "local-fs.target" }
+ { "value" = "swap.target" }
+ }
+ { "Wants"
+ { "value" = "local-fs.target" }
+ { "value" = "swap.target" }
+ }
+ { "Wants"
+ { "value" = "local-fs.target" }
+ }
+ }
+
+(* Variable: exec *)
+let exec = "[Service]
+ExecStart=/bin/ls
+ExecReload=/bin/kill -USR1 $MAINPID
+ExecStart=/sbin/rpcbind -w
+ExecStartPost=/bin/systemctl disable firstboot-graphical.service firstboot-text.service
+ExecStartPre=/sbin/modprobe -qa $SUPPORTED_DRIVERS
+ExecStop=/usr/sbin/aiccu stop
+ExecStopPost=-/bin/systemctl poweroff
+ExecStopPost=@/bin/systemctl poweroff
+ExecStopPost=-@/bin/systemctl poweroff
+ExecStopPost=/bin/systemctl\
+poweroff
+"
+(* Test: Systemd.lns *)
+test Systemd.lns get exec =
+ { "Service"
+ { "ExecStart"
+ { "command" = "/bin/ls" }
+ }
+ { "ExecReload"
+ { "command" = "/bin/kill" }
+ { "arguments"
+ { "1" = "-USR1" }
+ { "2" = "$MAINPID" }
+ }
+ }
+ { "ExecStart"
+ { "command" = "/sbin/rpcbind" }
+ { "arguments"
+ { "1" = "-w" }
+ }
+ }
+ { "ExecStartPost"
+ { "command" = "/bin/systemctl" }
+ { "arguments"
+ { "1" = "disable" }
+ { "2" = "firstboot-graphical.service" }
+ { "3" = "firstboot-text.service" }
+ }
+ }
+ { "ExecStartPre"
+ { "command" = "/sbin/modprobe" }
+ { "arguments"
+ { "1" = "-qa" }
+ { "2" = "$SUPPORTED_DRIVERS" }
+ }
+ }
+ { "ExecStop"
+ { "command" = "/usr/sbin/aiccu" }
+ { "arguments"
+ { "1" = "stop" }
+ }
+ }
+ { "ExecStopPost"
+ { "ignoreexit" }
+ { "command" = "/bin/systemctl" }
+ { "arguments"
+ { "1" = "poweroff" }
+ }
+ }
+ { "ExecStopPost"
+ { "arg0" }
+ { "command" = "/bin/systemctl" }
+ { "arguments"
+ { "1" = "poweroff" }
+ }
+ }
+ { "ExecStopPost"
+ { "ignoreexit" }
+ { "arg0" }
+ { "command" = "/bin/systemctl" }
+ { "arguments"
+ { "1" = "poweroff" }
+ }
+ }
+ { "ExecStopPost"
+ { "command" = "/bin/systemctl" }
+ { "arguments"
+ { "1" = "poweroff" }
+ }
+ }
+ }
+
+(* Variable: env *)
+let env = "[Service]
+Environment=LANG=C
+Environment=LANG=C FOO=BAR
+Environment=LANG= LANGUAGE= LC_CTYPE= LC_NUMERIC= LC_TIME= LC_COLLATE= LC_MONETARY= LC_MESSAGES= LC_PAPER= LC_NAME= LC_ADDRESS= LC_TELEPHONE= LC_MEASUREMENT= LC_IDENTIFICATION=
+Environment=LANG=C\
+FOO=BAR
+Environment=\"LANG=foo bar\" FOO=BAR
+Environment=OPTIONS=\"-LS0-6d\"
+Environment=OPTIONS='-LS0-6d'
+Environment=VAR=\"with some spaces\" VAR2='more spaces'
+Environment=VAR='with some spaces'
+"
+(* Test: Systemd.lns *)
+test Systemd.lns get env =
+ { "Service"
+ { "Environment"
+ { "LANG" = "C" }
+ }
+ { "Environment"
+ { "LANG" = "C" }
+ { "FOO" = "BAR" }
+ }
+ { "Environment"
+ { "LANG" }
+ { "LANGUAGE" }
+ { "LC_CTYPE" }
+ { "LC_NUMERIC" }
+ { "LC_TIME" }
+ { "LC_COLLATE" }
+ { "LC_MONETARY" }
+ { "LC_MESSAGES" }
+ { "LC_PAPER" }
+ { "LC_NAME" }
+ { "LC_ADDRESS" }
+ { "LC_TELEPHONE" }
+ { "LC_MEASUREMENT" }
+ { "LC_IDENTIFICATION" }
+ }
+ { "Environment"
+ { "LANG" = "C" }
+ { "FOO" = "BAR" }
+ }
+ { "Environment"
+ { "LANG" = "foo bar" }
+ { "FOO" = "BAR" }
+ }
+ { "Environment"
+ { "OPTIONS" = "\"-LS0-6d\"" }
+ }
+ { "Environment"
+ { "OPTIONS" = "'-LS0-6d'" }
+ }
+ { "Environment"
+ { "VAR" = "\"with some spaces\"" }
+ { "VAR2" = "'more spaces'" }
+ }
+ { "Environment"
+ { "VAR" = "'with some spaces'" }
+ }
+ }
+
+(* Variable: unit *)
+let unit = "# This file is part of systemd.
+#
+
+# See systemd.special(7) for details
+
+.include /etc/example
+
+[Unit]
+Description=Locale Service
+# Add another file
+.include /etc/example
+
+[Service]
+ExecStart=/lib/systemd/systemd-localed
+Type=dbus
+BusName=org.freedesktop.locale1
+CapabilityBoundingSet=
+
+"
+(* Test: Systemd.lns *)
+test Systemd.lns get unit =
+ { "#comment" = "This file is part of systemd." }
+ { }
+ { }
+ { "#comment" = "See systemd.special(7) for details" }
+ { }
+ { ".include" = "/etc/example" }
+ { }
+ { "Unit"
+ { "Description"
+ { "value" = "Locale Service" }
+ }
+ { "#comment" = "Add another file" }
+ { ".include" = "/etc/example" }
+ { }
+ }
+ { "Service"
+ { "ExecStart"
+ { "command" = "/lib/systemd/systemd-localed" }
+ }
+ { "Type"
+ { "value" = "dbus" }
+ }
+ { "BusName"
+ { "value" = "org.freedesktop.locale1" }
+ }
+ { "CapabilityBoundingSet" }
+ { }
+ }
+
+(* Test: Systemd.lns
+ Values can contain backslashes *)
+test Systemd.entry_command get "ExecStart=/usr/bin/find /var/lib/sudo -exec /usr/bin/touch -t 198501010000 '{}' \\073\n" =
+ { "ExecStart"
+ { "command" = "/usr/bin/find" }
+ { "arguments"
+ { "1" = "/var/lib/sudo" }
+ { "2" = "-exec" }
+ { "3" = "/usr/bin/touch" }
+ { "4" = "-t" }
+ { "5" = "198501010000" }
+ { "6" = "'{}'" }
+ { "7" = "\\073" }
+ }
+ }
+
+let exec_tmux = "ExecStart=/usr/bin/tmux unbind-key -a; \
+ kill-window -t anaconda:shell; \
+ bind-key ? list-keys\n"
+
+(* Test: Systemd.lns
+ Semicolons are permitted in entry values, e.g. as part of a command *)
+test Systemd.entry_command get exec_tmux =
+ { "ExecStart"
+ { "command" = "/usr/bin/tmux" }
+ { "arguments"
+ { "1" = "unbind-key" }
+ { "2" = "-a;" }
+ { "3" = "kill-window" }
+ { "4" = "-t" }
+ { "5" = "anaconda:shell;" }
+ { "6" = "bind-key" }
+ { "7" = "?" }
+ { "8" = "list-keys" } } }
+
+(* Test: Systemd.lns
+ Both # and ; start standalone comments, but only # may start an EOL comment *)
+test Systemd.lns get "[Service]\n# hash\n; semicolon\nExecStart=/bin/echo # hash\n" =
+ { "Service"
+ { "#comment" = "hash" }
+ { "#comment" = "semicolon" }
+ { "ExecStart"
+ { "command" = "/bin/echo" }
+ { "#comment" = "hash" } } }
+
+(* Test: Systemd.lns
+ empty quoted environment var values *)
+test Systemd.lns get "[Service]\nEnvironment=TERM=linux PX_MODULE_PATH=\"\"\n" =
+ { "Service"
+ { "Environment"
+ { "TERM" = "linux" }
+ { "PX_MODULE_PATH" = "\"\"" } } }
+
+(* Test: Systemd.lns
+ values may start with spaces *)
+test Systemd.lns get "[Service]\nExecStart= /usr/bin/find\nEnvironment= TERM=linux\n" =
+ { "Service"
+ { "ExecStart"
+ { "command" = "/usr/bin/find" } }
+ { "Environment"
+ { "TERM" = "linux" } } }
--- /dev/null
+module Test_termcap =
+
+(* Sample termcap entry with escaped ':' characters *)
+let termcap = "vt420pc|DEC VT420 w/PC keyboard:\\
+ :@7=\\E[4~:F1=\\E[23~:F2=\\E[24~:F3=\\E[11;2~:F4=\\E[12;2~:\\
+ :F5=\\E[13;2~:F6=\\E[14;2~:F7=\\E[15;2~:F8=\\E[17;2~:\\
+ :F9=\\E[18;2~:FA=\\E[19;2~:FB=\\E[20;2~:FC=\\E[21;2~:\\
+ :FD=\\E[23;2~:FE=\\E[24;2~:FF=\\E[23~:FG=\\E[24~:FH=\\E[25~:\\
+ :FI=\\E[26~:FJ=\\E[28~:FK=\\E[29~:FL=\\E[31~:FM=\\E[32~:\\
+ :FN=\\E[33~:FO=\\E[34~:FP=\\E[35~:FQ=\\E[36~:FR=\\E[23;2~:\\
+ :FS=\\E[24;2~:FT=\\E[25;2~:FU=\\E[26;2~:FV=\\E[28;2~:\\
+ :FW=\\E[29;2~:FX=\\E[31;2~:FY=\\E[32;2~:FZ=\\E[33;2~:\\
+ :Fa=\\E[34;2~:Fb=\\E[35;2~:Fc=\\E[36;2~:\\
+ :S6=USR_TERM\\:vt420pcdos\\::k1=\\E[11~:k2=\\E[12~:\\
+ :k3=\\E[13~:k4=\\E[14~:k5=\\E[15~:k6=\\E[17~:k7=\\E[18~:\\
+ :k8=\\E[19~:k9=\\E[20~:k;=\\E[21~:kD=\\177:kh=\\E[H:\\
+ :..px=\\EP1;1|%?%{16}%p1%>%t%{0}%e%{21}%p1%>%t%{1}%e%{25}%p1%>%t%{2}%e%{27}%p1%>%t%{3}%e%{30}%p1%>%t%{4}%e%{5}%;%p1%+%d/%p2%s\\E\\\\:\\
+ :tc=vt420:
+"
+
+test Termcap.lns get termcap =
+ { "record"
+ { "name" = "vt420pc" }
+ { "name" = "DEC VT420 w/PC keyboard" }
+ { "capability" = "@7=\\E[4~" }
+ { "capability" = "F1=\\E[23~" }
+ { "capability" = "F2=\\E[24~" }
+ { "capability" = "F3=\\E[11;2~" }
+ { "capability" = "F4=\\E[12;2~" }
+ { "capability" = "F5=\\E[13;2~" }
+ { "capability" = "F6=\\E[14;2~" }
+ { "capability" = "F7=\\E[15;2~" }
+ { "capability" = "F8=\\E[17;2~" }
+ { "capability" = "F9=\\E[18;2~" }
+ { "capability" = "FA=\\E[19;2~" }
+ { "capability" = "FB=\\E[20;2~" }
+ { "capability" = "FC=\\E[21;2~" }
+ { "capability" = "FD=\\E[23;2~" }
+ { "capability" = "FE=\\E[24;2~" }
+ { "capability" = "FF=\\E[23~" }
+ { "capability" = "FG=\\E[24~" }
+ { "capability" = "FH=\\E[25~" }
+ { "capability" = "FI=\\E[26~" }
+ { "capability" = "FJ=\\E[28~" }
+ { "capability" = "FK=\\E[29~" }
+ { "capability" = "FL=\\E[31~" }
+ { "capability" = "FM=\\E[32~" }
+ { "capability" = "FN=\\E[33~" }
+ { "capability" = "FO=\\E[34~" }
+ { "capability" = "FP=\\E[35~" }
+ { "capability" = "FQ=\\E[36~" }
+ { "capability" = "FR=\\E[23;2~" }
+ { "capability" = "FS=\\E[24;2~" }
+ { "capability" = "FT=\\E[25;2~" }
+ { "capability" = "FU=\\E[26;2~" }
+ { "capability" = "FV=\\E[28;2~" }
+ { "capability" = "FW=\\E[29;2~" }
+ { "capability" = "FX=\\E[31;2~" }
+ { "capability" = "FY=\\E[32;2~" }
+ { "capability" = "FZ=\\E[33;2~" }
+ { "capability" = "Fa=\\E[34;2~" }
+ { "capability" = "Fb=\\E[35;2~" }
+ { "capability" = "Fc=\\E[36;2~" }
+ { "capability" = "S6=USR_TERM\\:vt420pcdos\\:" }
+ { "capability" = "k1=\\E[11~" }
+ { "capability" = "k2=\\E[12~" }
+ { "capability" = "k3=\\E[13~" }
+ { "capability" = "k4=\\E[14~" }
+ { "capability" = "k5=\\E[15~" }
+ { "capability" = "k6=\\E[17~" }
+ { "capability" = "k7=\\E[18~" }
+ { "capability" = "k8=\\E[19~" }
+ { "capability" = "k9=\\E[20~" }
+ { "capability" = "k;=\\E[21~" }
+ { "capability" = "kD=\\177" }
+ { "capability" = "kh=\\E[H" }
+ { "capability" = "..px=\\EP1;1|%?%{16}%p1%>%t%{0}%e%{21}%p1%>%t%{1}%e%{25}%p1%>%t%{2}%e%{27}%p1%>%t%{3}%e%{30}%p1%>%t%{4}%e%{5}%;%p1%+%d/%p2%s\\E\\\\" }
+ { "capability" = "tc=vt420" }
+ }
+
+let termcap2 = "tws-generic|dku7102|Bull Questar tws terminals:\\
+ :am:es:hs:mi:ms:xn:xo:xs@:\\
+ :co#80:it#8:li#24:ws#80:\\
+ :AL=\\E[%dL:DC=\\E[%dP:DL=\\E[%dM:DO=\\E[%dB:LE=\\E[%dD:\\
+ :RI=\\E[%dC:UP=\\E[%dA:al=\\E[L:bl=^G:bt=\\E[Z:cd=\\E[J:ce=\\E[K:\\
+ :cl=\\E[2J:cm=\\E[%i%d;%df:cr=^M:ct=\\E[3g:dc=\\E[P:dl=\\E[M:\\
+ :do=^J:ds=\\EPY99\\:98\\E\\\\\\E[0;98v\\E[2J\\E[v:ei=\\E[4l:\\
+ :fs=\\E[v:ho=\\E[H:i1=\\E[?=h\\Ec\\E`\\E[?>h\\EPY99\\:98\\E\\\\\\\\:\\
+ :i2=\\Eb\\E[?<h:im=\\E[4h:\\
+ :is=\\E[5;>;12;18;?<l\\E[=h\\EP1s\\E\\\\\\E[\\027p:\\
+ :k1=\\E[1u\\027:k2=\\E[2u\\027:k3=\\E[3u\\027:k4=\\E[4u\\027:\\
+ :k5=\\E[5u\\027:k6=\\E[6u\\027:k7=\\E[7u\\027:k8=\\E[8u\\027:\\
+ :kD=\\E[P:kb=^H:kd=\\E[B:kh=\\E[H:kl=\\E[D:kr=\\E[C:ku=\\E[A:\\
+ :le=^H:ll=\\E[H\\E[A:mb=\\E[0;5m:me=\\E[0m\\017:mh=\\E[0;2m:\\
+ :mr=\\E[0;7m:nd=\\E[C:rs=\\E[?=h\\Ec:se=\\E[m:sf=^J:so=\\E[0;7m:\\
+ :st=\\EH:ta=\\E[I:te=\\E[0;98v\\E[2J\\E[v:\\
+ :ti=\\E[?>h\\EPY99\\:98\\E\\\\\\\\:\\
+ :ts=\\EPY99\\:98\\E\\\\\\E[0;98v\\E[2;7m:ue=\\E[m:up=\\E[A:\\
+ :us=\\E[0;4m:ve=\\E[r:vi=\\E[1r:
+"
+
+test Termcap.lns get termcap2 =
+ { "record"
+ { "name" = "tws-generic" }
+ { "name" = "dku7102" }
+ { "name" = "Bull Questar tws terminals" }
+ { "capability" = "am" }
+ { "capability" = "es" }
+ { "capability" = "hs" }
+ { "capability" = "mi" }
+ { "capability" = "ms" }
+ { "capability" = "xn" }
+ { "capability" = "xo" }
+ { "capability" = "xs@" }
+ { "capability" = "co#80" }
+ { "capability" = "it#8" }
+ { "capability" = "li#24" }
+ { "capability" = "ws#80" }
+ { "capability" = "AL=\\E[%dL" }
+ { "capability" = "DC=\\E[%dP" }
+ { "capability" = "DL=\\E[%dM" }
+ { "capability" = "DO=\\E[%dB" }
+ { "capability" = "LE=\\E[%dD" }
+ { "capability" = "RI=\\E[%dC" }
+ { "capability" = "UP=\\E[%dA" }
+ { "capability" = "al=\\E[L" }
+ { "capability" = "bl=^G" }
+ { "capability" = "bt=\\E[Z" }
+ { "capability" = "cd=\\E[J" }
+ { "capability" = "ce=\\E[K" }
+ { "capability" = "cl=\\E[2J" }
+ { "capability" = "cm=\\E[%i%d;%df" }
+ { "capability" = "cr=^M" }
+ { "capability" = "ct=\\E[3g" }
+ { "capability" = "dc=\\E[P" }
+ { "capability" = "dl=\\E[M" }
+ { "capability" = "do=^J" }
+ { "capability" = "ds=\\EPY99\\:98\\E\\\\\\E[0;98v\\E[2J\\E[v" }
+ { "capability" = "ei=\\E[4l" }
+ { "capability" = "fs=\\E[v" }
+ { "capability" = "ho=\\E[H" }
+ { "capability" = "i1=\\E[?=h\\Ec\\E`\\E[?>h\\EPY99\\:98\\E\\\\\\\\" }
+ { "capability" = "i2=\\Eb\\E[?<h" }
+ { "capability" = "im=\\E[4h" }
+ { "capability" = "is=\\E[5;>;12;18;?<l\\E[=h\\EP1s\\E\\\\\\E[\\027p" }
+ { "capability" = "k1=\\E[1u\\027" }
+ { "capability" = "k2=\\E[2u\\027" }
+ { "capability" = "k3=\\E[3u\\027" }
+ { "capability" = "k4=\\E[4u\\027" }
+ { "capability" = "k5=\\E[5u\\027" }
+ { "capability" = "k6=\\E[6u\\027" }
+ { "capability" = "k7=\\E[7u\\027" }
+ { "capability" = "k8=\\E[8u\\027" }
+ { "capability" = "kD=\\E[P" }
+ { "capability" = "kb=^H" }
+ { "capability" = "kd=\\E[B" }
+ { "capability" = "kh=\\E[H" }
+ { "capability" = "kl=\\E[D" }
+ { "capability" = "kr=\\E[C" }
+ { "capability" = "ku=\\E[A" }
+ { "capability" = "le=^H" }
+ { "capability" = "ll=\\E[H\\E[A" }
+ { "capability" = "mb=\\E[0;5m" }
+ { "capability" = "me=\\E[0m\\017" }
+ { "capability" = "mh=\\E[0;2m" }
+ { "capability" = "mr=\\E[0;7m" }
+ { "capability" = "nd=\\E[C" }
+ { "capability" = "rs=\\E[?=h\\Ec" }
+ { "capability" = "se=\\E[m" }
+ { "capability" = "sf=^J" }
+ { "capability" = "so=\\E[0;7m" }
+ { "capability" = "st=\\EH" }
+ { "capability" = "ta=\\E[I" }
+ { "capability" = "te=\\E[0;98v\\E[2J\\E[v" }
+ { "capability" = "ti=\\E[?>h\\EPY99\\:98\\E\\\\\\\\" }
+ { "capability" = "ts=\\EPY99\\:98\\E\\\\\\E[0;98v\\E[2;7m" }
+ { "capability" = "ue=\\E[m" }
+ { "capability" = "up=\\E[A" }
+ { "capability" = "us=\\E[0;4m" }
+ { "capability" = "ve=\\E[r" }
+ { "capability" = "vi=\\E[1r" }
+ }
+
+let termcap3 = "stv52|MiNT virtual console:\\
+ :am:ms:\\
+ :co#80:it#8:li#30:\\
+ :%1=\\EH:&8=\\EK:F1=\\Ep:F2=\\Eq:F3=\\Er:F4=\\Es:F5=\\Et:F6=\\Eu:\\
+ :F7=\\Ev:F8=\\Ew:F9=\\Ex:FA=\\Ey:al=\\EL:bl=^G:cd=\\EJ:ce=\\EK:\\
+ :cl=\\EE:cm=\\EY%+ %+ :cr=^M:dl=\\EM:do=\\EB:ho=\\EH:k1=\\EP:\\
+ :k2=\\EQ:k3=\\ER:k4=\\ES:k5=\\ET:k6=\\EU:k7=\\EV:k8=\\EW:k9=\\EX:\\
+ :k;=\\EY:kD=\\177:kI=\\EI:kN=\\Eb:kP=\\Ea:kb=^H:kd=\\EB:kh=\\EE:\\
+ :kl=\\ED:kr=\\EC:ku=\\EA:le=^H:mb=\\Er:md=\\EyA:me=\\Ez_:mh=\\Em:\\
+ :mr=\\Ep:nd=\\EC:nw=2*\\r\\n:op=\\Eb@\\EcO:r1=\\Ez_\\Eb@\\EcA:\\
+ :se=\\Eq:sf=2*\\n:so=\\Ep:sr=2*\\EI:ta=^I:te=\\Ev\\E. \\Ee\\Ez_:\\
+ :ti=\\Ev\\Ee\\Ez_:ue=\\EzH:up=\\EA:us=\\EyH:ve=\\E. \\Ee:vi=\\Ef:\\
+ :vs=\\E.\":
+"
+
+test Termcap.lns get termcap3 =
+ { "record"
+ { "name" = "stv52" }
+ { "name" = "MiNT virtual console" }
+ { "capability" = "am" }
+ { "capability" = "ms" }
+ { "capability" = "co#80" }
+ { "capability" = "it#8" }
+ { "capability" = "li#30" }
+ { "capability" = "%1=\\EH" }
+ { "capability" = "&8=\\EK" }
+ { "capability" = "F1=\\Ep" }
+ { "capability" = "F2=\\Eq" }
+ { "capability" = "F3=\\Er" }
+ { "capability" = "F4=\\Es" }
+ { "capability" = "F5=\\Et" }
+ { "capability" = "F6=\\Eu" }
+ { "capability" = "F7=\\Ev" }
+ { "capability" = "F8=\\Ew" }
+ { "capability" = "F9=\\Ex" }
+ { "capability" = "FA=\\Ey" }
+ { "capability" = "al=\\EL" }
+ { "capability" = "bl=^G" }
+ { "capability" = "cd=\\EJ" }
+ { "capability" = "ce=\\EK" }
+ { "capability" = "cl=\\EE" }
+ { "capability" = "cm=\\EY%+ %+ " }
+ { "capability" = "cr=^M" }
+ { "capability" = "dl=\\EM" }
+ { "capability" = "do=\\EB" }
+ { "capability" = "ho=\\EH" }
+ { "capability" = "k1=\\EP" }
+ { "capability" = "k2=\\EQ" }
+ { "capability" = "k3=\\ER" }
+ { "capability" = "k4=\\ES" }
+ { "capability" = "k5=\\ET" }
+ { "capability" = "k6=\\EU" }
+ { "capability" = "k7=\\EV" }
+ { "capability" = "k8=\\EW" }
+ { "capability" = "k9=\\EX" }
+ { "capability" = "k;=\\EY" }
+ { "capability" = "kD=\\177" }
+ { "capability" = "kI=\\EI" }
+ { "capability" = "kN=\\Eb" }
+ { "capability" = "kP=\\Ea" }
+ { "capability" = "kb=^H" }
+ { "capability" = "kd=\\EB" }
+ { "capability" = "kh=\\EE" }
+ { "capability" = "kl=\\ED" }
+ { "capability" = "kr=\\EC" }
+ { "capability" = "ku=\\EA" }
+ { "capability" = "le=^H" }
+ { "capability" = "mb=\\Er" }
+ { "capability" = "md=\\EyA" }
+ { "capability" = "me=\\Ez_" }
+ { "capability" = "mh=\\Em" }
+ { "capability" = "mr=\\Ep" }
+ { "capability" = "nd=\\EC" }
+ { "capability" = "nw=2*\\r\\n" }
+ { "capability" = "op=\\Eb@\\EcO" }
+ { "capability" = "r1=\\Ez_\\Eb@\\EcA" }
+ { "capability" = "se=\\Eq" }
+ { "capability" = "sf=2*\\n" }
+ { "capability" = "so=\\Ep" }
+ { "capability" = "sr=2*\\EI" }
+ { "capability" = "ta=^I" }
+ { "capability" = "te=\\Ev\\E. \\Ee\\Ez_" }
+ { "capability" = "ti=\\Ev\\Ee\\Ez_" }
+ { "capability" = "ue=\\EzH" }
+ { "capability" = "up=\\EA" }
+ { "capability" = "us=\\EyH" }
+ { "capability" = "ve=\\E. \\Ee" }
+ { "capability" = "vi=\\Ef" }
+ { "capability" = "vs=\\E.\"" }
+ }
+
+let termcap4 = "rbcomm|IBM PC with RBcomm and EMACS keybindings:\\
+ :am:bw:mi:ms:xn:\\
+ :co#80:it#8:li#25:\\
+ :AL=\\E[%dL:DL=\\E[%dM:al=^K:bl=^G:bt=\\E[Z:cd=^F5:ce=^P^P:\\
+ :cl=^L:cm=\\037%r%+ %+ :cr=^M:cs=\\E[%i%d;%dr:dc=^W:dl=^Z:\\
+ :dm=:do=^C:ec=\\E[%dX:ed=:ei=^]:im=^\\:\\
+ :is=\\017\\035\\E(B\\E)0\\E[?7h\\E[?3l\\E[>8g:kb=^H:kd=^N:\\
+ :ke=\\E>:kh=^A:kl=^B:kr=^F:ks=\\E=:ku=^P:le=^H:mb=\\E[5m:\\
+ :md=\\E[1m:me=\\E[m:mk=\\E[8m:mr=^R:nd=^B:nw=^M\\ED:\\
+ :r1=\\017\\E(B\\E)0\\025\\E[?3l\\E[>8g:rc=\\E8:rp=\\030%.%.:\\
+ :sc=\\E7:se=^U:sf=\\ED:so=^R:sr=\\EM:ta=^I:te=:ti=:ue=^U:up=^^:\\
+ :us=^T:ve=\\E[?25h:vi=\\E[?25l:
+"
+
+test Termcap.lns get termcap4 =
+ { "record"
+ { "name" = "rbcomm" }
+ { "name" = "IBM PC with RBcomm and EMACS keybindings" }
+ { "capability" = "am" }
+ { "capability" = "bw" }
+ { "capability" = "mi" }
+ { "capability" = "ms" }
+ { "capability" = "xn" }
+ { "capability" = "co#80" }
+ { "capability" = "it#8" }
+ { "capability" = "li#25" }
+ { "capability" = "AL=\\E[%dL" }
+ { "capability" = "DL=\\E[%dM" }
+ { "capability" = "al=^K" }
+ { "capability" = "bl=^G" }
+ { "capability" = "bt=\\E[Z" }
+ { "capability" = "cd=^F5" }
+ { "capability" = "ce=^P^P" }
+ { "capability" = "cl=^L" }
+ { "capability" = "cm=\\037%r%+ %+ " }
+ { "capability" = "cr=^M" }
+ { "capability" = "cs=\\E[%i%d;%dr" }
+ { "capability" = "dc=^W" }
+ { "capability" = "dl=^Z" }
+ { "capability" = "dm=" }
+ { "capability" = "do=^C" }
+ { "capability" = "ec=\\E[%dX" }
+ { "capability" = "ed=" }
+ { "capability" = "ei=^]" }
+ { "capability" = "im=^\\" }
+ { "capability" = "is=\\017\\035\\E(B\\E)0\\E[?7h\\E[?3l\\E[>8g" }
+ { "capability" = "kb=^H" }
+ { "capability" = "kd=^N" }
+ { "capability" = "ke=\\E>" }
+ { "capability" = "kh=^A" }
+ { "capability" = "kl=^B" }
+ { "capability" = "kr=^F" }
+ { "capability" = "ks=\\E=" }
+ { "capability" = "ku=^P" }
+ { "capability" = "le=^H" }
+ { "capability" = "mb=\\E[5m" }
+ { "capability" = "md=\\E[1m" }
+ { "capability" = "me=\\E[m" }
+ { "capability" = "mk=\\E[8m" }
+ { "capability" = "mr=^R" }
+ { "capability" = "nd=^B" }
+ { "capability" = "nw=^M\\ED" }
+ { "capability" = "r1=\\017\\E(B\\E)0\\025\\E[?3l\\E[>8g" }
+ { "capability" = "rc=\\E8" }
+ { "capability" = "rp=\\030%.%." }
+ { "capability" = "sc=\\E7" }
+ { "capability" = "se=^U" }
+ { "capability" = "sf=\\ED" }
+ { "capability" = "so=^R" }
+ { "capability" = "sr=\\EM" }
+ { "capability" = "ta=^I" }
+ { "capability" = "te=" }
+ { "capability" = "ti=" }
+ { "capability" = "ue=^U" }
+ { "capability" = "up=^^" }
+ { "capability" = "us=^T" }
+ { "capability" = "ve=\\E[?25h" }
+ { "capability" = "vi=\\E[?25l" }
+ }
+
+let termcap5 = "rxvt+pcfkeys|fragment for PC-style fkeys:\\
+ :#2=\\E[7$:#3=\\E[2$:#4=\\E[d:%c=\\E[6$:%e=\\E[5$:%i=\\E[c:\\
+ :*4=\\E[3$:*6=\\E[4~:*7=\\E[8$:@0=\\E[1~:@7=\\E[8~:F1=\\E[23~:\\
+ :F2=\\E[24~:F3=\\E[25~:F4=\\E[26~:F5=\\E[28~:F6=\\E[29~:\\
+ :F7=\\E[31~:F8=\\E[32~:F9=\\E[33~:FA=\\E[34~:FB=\\E[23$:\\
+ :FC=\\E[24$:FD=\\E[11\\136:FE=\\E[12\\136:FF=\\E[13\\136:FG=\\E[14\\136:\\
+ :FH=\\E[15\\136:FI=\\E[17\\136:FJ=\\E[18\\136:FK=\\E[19\\136:FL=\\E[20\\136:\\
+ :FM=\\E[21\\136:FN=\\E[23\\136:FO=\\E[24\\136:FP=\\E[25\\136:FQ=\\E[26\\136:\\
+ :FR=\\E[28\\136:FS=\\E[29\\136:FT=\\E[31\\136:FU=\\E[32\\136:FV=\\E[33\\136:\\
+ :FW=\\E[34\\136:FX=\\E[23@:FY=\\E[24@:k1=\\E[11~:k2=\\E[12~:\\
+ :k3=\\E[13~:k4=\\E[14~:k5=\\E[15~:k6=\\E[17~:k7=\\E[18~:\\
+ :k8=\\E[19~:k9=\\E[20~:k;=\\E[21~:kD=\\E[3~:kE=\\E[8\\136:kF=\\E[a:\\
+ :kI=\\E[2~:kN=\\E[6~:kP=\\E[5~:kR=\\E[b:kd=\\E[B:kh=\\E[7~:\\
+ :kl=\\E[D:kr=\\E[C:ku=\\E[A:
+"
+
+test Termcap.lns get termcap5 =
+ { "record"
+ { "name" = "rxvt+pcfkeys" }
+ { "name" = "fragment for PC-style fkeys" }
+ { "capability" = "#2=\\E[7$" }
+ { "capability" = "#3=\\E[2$" }
+ { "capability" = "#4=\\E[d" }
+ { "capability" = "%c=\\E[6$" }
+ { "capability" = "%e=\\E[5$" }
+ { "capability" = "%i=\\E[c" }
+ { "capability" = "*4=\\E[3$" }
+ { "capability" = "*6=\\E[4~" }
+ { "capability" = "*7=\\E[8$" }
+ { "capability" = "@0=\\E[1~" }
+ { "capability" = "@7=\\E[8~" }
+ { "capability" = "F1=\\E[23~" }
+ { "capability" = "F2=\\E[24~" }
+ { "capability" = "F3=\\E[25~" }
+ { "capability" = "F4=\\E[26~" }
+ { "capability" = "F5=\\E[28~" }
+ { "capability" = "F6=\\E[29~" }
+ { "capability" = "F7=\\E[31~" }
+ { "capability" = "F8=\\E[32~" }
+ { "capability" = "F9=\\E[33~" }
+ { "capability" = "FA=\\E[34~" }
+ { "capability" = "FB=\\E[23$" }
+ { "capability" = "FC=\\E[24$" }
+ { "capability" = "FD=\\E[11\\136" }
+ { "capability" = "FE=\\E[12\\136" }
+ { "capability" = "FF=\\E[13\\136" }
+ { "capability" = "FG=\\E[14\\136" }
+ { "capability" = "FH=\\E[15\\136" }
+ { "capability" = "FI=\\E[17\\136" }
+ { "capability" = "FJ=\\E[18\\136" }
+ { "capability" = "FK=\\E[19\\136" }
+ { "capability" = "FL=\\E[20\\136" }
+ { "capability" = "FM=\\E[21\\136" }
+ { "capability" = "FN=\\E[23\\136" }
+ { "capability" = "FO=\\E[24\\136" }
+ { "capability" = "FP=\\E[25\\136" }
+ { "capability" = "FQ=\\E[26\\136" }
+ { "capability" = "FR=\\E[28\\136" }
+ { "capability" = "FS=\\E[29\\136" }
+ { "capability" = "FT=\\E[31\\136" }
+ { "capability" = "FU=\\E[32\\136" }
+ { "capability" = "FV=\\E[33\\136" }
+ { "capability" = "FW=\\E[34\\136" }
+ { "capability" = "FX=\\E[23@" }
+ { "capability" = "FY=\\E[24@" }
+ { "capability" = "k1=\\E[11~" }
+ { "capability" = "k2=\\E[12~" }
+ { "capability" = "k3=\\E[13~" }
+ { "capability" = "k4=\\E[14~" }
+ { "capability" = "k5=\\E[15~" }
+ { "capability" = "k6=\\E[17~" }
+ { "capability" = "k7=\\E[18~" }
+ { "capability" = "k8=\\E[19~" }
+ { "capability" = "k9=\\E[20~" }
+ { "capability" = "k;=\\E[21~" }
+ { "capability" = "kD=\\E[3~" }
+ { "capability" = "kE=\\E[8\\136" }
+ { "capability" = "kF=\\E[a" }
+ { "capability" = "kI=\\E[2~" }
+ { "capability" = "kN=\\E[6~" }
+ { "capability" = "kP=\\E[5~" }
+ { "capability" = "kR=\\E[b" }
+ { "capability" = "kd=\\E[B" }
+ { "capability" = "kh=\\E[7~" }
+ { "capability" = "kl=\\E[D" }
+ { "capability" = "kr=\\E[C" }
+ { "capability" = "ku=\\E[A" }
+ }
+
--- /dev/null
+(*
+Module: Test_Thttpd
+ Provides unit tests and examples for the <Thttpd> lens.
+*)
+
+module Test_Thttpd =
+
+let conf = "# This file is for thttpd processes created by /etc/init.d/thttpd.
+# Commentary is based closely on the thttpd(8) 2.25b manpage, by Jef Poskanzer.
+
+# Specifies an alternate port number to listen on.
+port=80
+host=
+
+ dir=/var/www
+chroot
+ novhost
+
+ # Specifies what user to switch to after initialization when started as root.
+user=www-data # EOL comment
+nosymlinks # EOL comment
+"
+
+test Thttpd.lns get conf =
+ { "#comment" = "This file is for thttpd processes created by /etc/init.d/thttpd." }
+ { "#comment" = "Commentary is based closely on the thttpd(8) 2.25b manpage, by Jef Poskanzer." }
+ { }
+ { "#comment" = "Specifies an alternate port number to listen on." }
+ { "port" = "80" }
+ { "host" = "" }
+ { }
+ { "dir" = "/var/www" }
+ { "chroot" }
+ { "novhost" }
+ { }
+ { "#comment" = "Specifies what user to switch to after initialization when started as root." }
+ { "user" = "www-data"
+ { "#comment" = "EOL comment" }
+ }
+ { "nosymlinks"
+ { "#comment" = "EOL comment" }
+ }
+
+(* There must not be spaces around the '=' *)
+test Thttpd.lns get "port = 80" = *
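+
+(* A comment line on its own: a minimal sketch inferred from the full config
+   test above, which shows the lens stripping the leading "# " from comments *)
+test Thttpd.lns get "# foo\n" = { "#comment" = "foo" }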
--- /dev/null
+module Test_tinc =
+
+let lns = Tinc.lns
+
+test lns get "Subnet = 10.1.4.5\n" = { "Subnet" = "10.1.4.5" }
+test lns get "foo = bar\n" = { "foo" = "bar" }
+test lns get "foo bar\n" = { "foo" = "bar" }
+
+test lns get
+"-----BEGIN RSA PUBLIC KEY-----
+abcde
+-----END RSA PUBLIC KEY-----" = { "#key" = "abcde" }
+
+test lns get "foo = bar\nbar = baz\n" =
+ { "foo" = "bar" }
+ { "bar" = "baz" }
+
+test lns get
+"foo = bar
+
+-----BEGIN RSA PUBLIC KEY-----
+bar
+-----END RSA PUBLIC KEY-----" =
+ { "foo" = "bar" }
+ { }
+ { "#key" = "bar" }
+
+
+(*
+test lns get
+"-----BEGIN RSA PUBLIC KEY-----
+foo
+-----END RSA PUBLIC KEY-----
+
+-----BEGIN RSA PUBLIC KEY-----
+bar
+-----END RSA PUBLIC KEY-----
+" = ?
+*)
--- /dev/null
+(*
+Module: Test_Tmpfiles
+ Provides unit tests and examples for the <Tmpfiles> lens.
+*)
+
+module Test_Tmpfiles =
+
+(************************************************************************
+ * Group: VALID EXAMPLES
+ *************************************************************************)
+ (* Variable: simple
+One line, simple example *)
+ let simple = "d /run/user 0755 root mysql 10d -\n"
+
+ (* Variable: simple_tree
+Tree for <simple> *)
+ let simple_tree =
+ {
+ "1"
+ { "type" = "d" }
+ { "path" = "/run/user" }
+ { "mode" = "0755" }
+ { "uid" = "root" }
+ { "gid" = "mysql" }
+ { "age" = "10d" }
+ { "argument" = "-" }
+ }
+
+ (* Variable: complex
+A more complex example, taken from the manual *)
+ let complex = "#Type Path Mode UID GID Age Argument\nd /run/user 0755 root root 10d -\nL /tmp/foobar - - - - /dev/null\n"
+
+ (* Variable: complex_tree
+Tree for <complex> and <trailing_ws> *)
+ let complex_tree =
+ { "#comment" = "Type Path Mode UID GID Age Argument" }
+ { "1"
+ { "type" = "d" }
+ { "path" = "/run/user" }
+ { "mode" = "0755" }
+ { "uid" = "root" }
+ { "gid" = "root" }
+ { "age" = "10d" }
+ { "argument" = "-" }
+ }
+ { "2"
+ { "type" = "L" }
+ { "path" = "/tmp/foobar" }
+ { "mode" = "-" }
+ { "uid" = "-" }
+ { "gid" = "-" }
+ { "age" = "-" }
+ { "argument" = "/dev/null" }
+ }
+
+ (* Variable: trailing_ws
+The complex example with extra spaces *)
+ let trailing_ws = " #Type Path Mode UID GID Age Argument \n d /run/user 0755 root root 10d - \t\n L /tmp/foobar - - - - /dev/null\t\n"
+
+ (* Variable: empty
+Empty example *)
+ let empty = "\n\n\n"
+
+ (* Variable: exclamation_mark
+Example with an exclamation mark in the type *)
+ let exclamation_mark = "D! /tmp/foo - - - - -\n"
+
+ (* Variable: exclamation_mark_tree
+Tree for <exclamation_mark> *)
+ let exclamation_mark_tree =
+ {
+ "1"
+ { "type" = "D!" }
+ { "path" = "/tmp/foo" }
+ { "mode" = "-" }
+ { "uid" = "-" }
+ { "gid" = "-" }
+ { "age" = "-" }
+ { "argument" = "-" }
+ }
+
+ (* Variable: minus
+Example with a minus sign in the type *)
+ let minus = "D- /tmp/foo - - - - -\n"
+
+ (* Variable: minus_tree
+Tree for <minus> *)
+ let minus_tree =
+ {
+ "1"
+ { "type" = "D-" }
+ { "path" = "/tmp/foo" }
+ { "mode" = "-" }
+ { "uid" = "-" }
+ { "gid" = "-" }
+ { "age" = "-" }
+ { "argument" = "-" }
+ }
+
+ (* Variable: equal
+Example with an equal sign in the type *)
+ let equal = "d= /tmp/foo 0755 root root - -\n"
+
+ (* Variable: equal_tree
+Tree for <equal> *)
+ let equal_tree =
+ {
+ "1"
+ { "type" = "d=" }
+ { "path" = "/tmp/foo" }
+ { "mode" = "0755" }
+ { "uid" = "root" }
+ { "gid" = "root" }
+ { "age" = "-" }
+ { "argument" = "-" }
+ }
+
+ (* Variable: tilde
+Example with a tilde character in the type *)
+ let tilde = "w~ /tmp/foo 0755 root root - dGVzdAo=\n"
+
+ (* Variable: tilde_tree
+Tree for <tilde> *)
+ let tilde_tree =
+ {
+ "1"
+ { "type" = "w~" }
+ { "path" = "/tmp/foo" }
+ { "mode" = "0755" }
+ { "uid" = "root" }
+ { "gid" = "root" }
+ { "age" = "-" }
+ { "argument" = "dGVzdAo=" }
+ }
+
+ (* Variable: caret
+Example with a caret in the type *)
+ let caret = "f^ /etc/motd.d/50-provision.conf - - - - login.motd\n"
+
+ (* Variable: caret_tree
+Tree for <caret> *)
+ let caret_tree =
+ {
+ "1"
+ { "type" = "f^" }
+ { "path" = "/etc/motd.d/50-provision.conf" }
+ { "mode" = "-" }
+ { "uid" = "-" }
+ { "gid" = "-" }
+ { "age" = "-" }
+ { "argument" = "login.motd" }
+ }
+
+ (* Variable: short
+Example with only type and path *)
+ let short = "A+ /tmp/foo\n"
+
+ (* Variable: short_tree
+Tree for <short> *)
+ let short_tree =
+ {
+ "1"
+ { "type" = "A+" }
+ { "path" = "/tmp/foo" }
+ }
+
+ (* Variable: short_mode
+Example with only 3 fields *)
+ let short_mode = "c+! /tmp/foo ~0755\n"
+
+ (* Variable: short_mode_tree
+Tree for <short_mode> *)
+ let short_mode_tree =
+ {
+ "1"
+ { "type" = "c+!" }
+ { "path" = "/tmp/foo" }
+ { "mode" = "~0755" }
+ }
+
+ (* Variable: short_uid
+Example with only 4 fields *)
+ let short_uid = "A+ /tmp/foo - 0\n"
+
+ (* Variable: short_uid_tree
+Tree for <short_uid> *)
+ let short_uid_tree =
+ {
+ "1"
+ { "type" = "A+" }
+ { "path" = "/tmp/foo" }
+ { "mode" = "-" }
+ { "uid" = "0" }
+ }
+
+ (* Variable: short_gid
+Example with only 5 fields *)
+ let short_gid = "z /tmp/bar/foo -\t- augd\n"
+
+ (* Variable: short_gid_tree
+Tree for <short_gid> *)
+ let short_gid_tree =
+ {
+ "1"
+ { "type" = "z" }
+ { "path" = "/tmp/bar/foo" }
+ { "mode" = "-" }
+ { "uid" = "-" }
+ { "gid" = "augd" }
+ }
+
+ (* Variable: short_age
+Example with only 6 fields *)
+ let short_age = "H /var/tmp/fooBarFOO - jj jj ~10d\n"
+
+ (* Variable: short_age_tree
+Tree for <short_age> *)
+ let short_age_tree =
+ {
+ "1"
+ { "type" = "H" }
+ { "path" = "/var/tmp/fooBarFOO" }
+ { "mode" = "-" }
+ { "uid" = "jj" }
+ { "gid" = "jj" }
+ { "age" = "~10d" }
+ }
+
+ (* Variable: complex_arg
+Complex argument example, taken from the manual *)
+ let complex_arg = "t /run/screen - - - - user.name=\"John Smith\" security.SMACK64=screen\n"
+
+ (* Variable: complex_arg_tree
+Tree for <complex_arg> *)
+ let complex_arg_tree =
+ {
+ "1"
+ { "type" = "t" }
+ { "path" = "/run/screen" }
+ { "mode" = "-" }
+ { "uid" = "-" }
+ { "gid" = "-" }
+ { "age" = "-" }
+ { "argument" = "user.name=\"John Smith\" security.SMACK64=screen" }
+ }
+
+ (* Variable: valid_short_args
+A short argument value example. *)
+ let valid_short_args = "h /var/log/journal - - - - C\nh /var/log/journal - - - - +C\n"
+
+ (* Variable: valid_short_args_tree
+Tree for <valid_short_args> *)
+ let valid_short_args_tree =
+ {
+ "1"
+ { "type" = "h" }
+ { "path" = "/var/log/journal" }
+ { "mode" = "-" }
+ { "uid" = "-" }
+ { "gid" = "-" }
+ { "age" = "-" }
+ { "argument" = "C" }
+ }
+ {
+ "2"
+ { "type" = "h" }
+ { "path" = "/var/log/journal" }
+ { "mode" = "-" }
+ { "uid" = "-" }
+ { "gid" = "-" }
+ { "age" = "-" }
+ { "argument" = "+C" }
+ }
+
+ (* Variable: valid_age
+Example with a complex age. *)
+ let valid_age = "v /var/tmp/js 4221 johnsmith - ~10d12h\n"
+
+ (* Variable: valid_age_tree
+Tree for <valid_age> *)
+ let valid_age_tree =
+ {
+ "1"
+ { "type" = "v" }
+ { "path" = "/var/tmp/js" }
+ { "mode" = "4221" }
+ { "uid" = "johnsmith" }
+ { "gid" = "-" }
+ { "age" = "~10d12h" }
+ }
+
+ (* Variable: valid_second
+Example with full age unit *)
+ let valid_second = "p+ /var/tmp - jsmith - 0second\n"
+
+ (* Variable: valid_second_tree
+Tree for <valid_second> *)
+ let valid_second_tree =
+ {
+ "1"
+ { "type" = "p+" }
+ { "path" = "/var/tmp" }
+ { "mode" = "-" }
+ { "uid" = "jsmith" }
+ { "gid" = "-" }
+ { "age" = "0second" }
+ }
+
+ (* Variable: valid_days
+Example with full age unit (plural) *)
+ let valid_days = "x /var/tmp/manu - jonhsmith - 9days\n"
+
+ (* Variable: valid_days_tree
+Tree for <valid_days> *)
+ let valid_days_tree =
+ {
+ "1"
+ { "type" = "x" }
+ { "path" = "/var/tmp/manu" }
+ { "mode" = "-" }
+ { "uid" = "jonhsmith" }
+ { "gid" = "-" }
+ { "age" = "9days" }
+ }
+
+ (* Variable: percent
+Test with a percent sign *)
+ let percent = "m /var/log/%m 2755 root systemdjournal - -\n"
+
+ (* Variable: percent_tree
+Tree for <percent> *)
+ let percent_tree =
+ {
+ "1"
+ { "type" = "m" }
+ { "path" = "/var/log/%m" }
+ { "mode" = "2755" }
+ { "uid" = "root" }
+ { "gid" = "systemdjournal" }
+ { "age" = "-" }
+ { "argument" = "-" }
+ }
+
+ (* Variable: hyphen
+Test with a hyphen in gid *)
+ let hyphen = "L /var/log/journal 2755 root systemd-journal - -\n"
+
+ (* Variable: hyphen_tree
+Tree for <hyphen> *)
+ let hyphen_tree =
+ {
+ "1"
+ { "type" = "L" }
+ { "path" = "/var/log/journal" }
+ { "mode" = "2755" }
+ { "uid" = "root" }
+ { "gid" = "systemd-journal" }
+ { "age" = "-" }
+ { "argument" = "-" }
+ }
+
+ (* Variable: valid_base
+A valid test to be re-used by the failure cases *)
+ let valid_base = "H /var/tmp/js 0000 jonhsmith 60 1s foo\n"
+
+ (* Variable: valid_base_tree
+Tree for <valid_base> *)
+ let valid_base_tree =
+ {
+ "1"
+ { "type" = "H" }
+ { "path" = "/var/tmp/js" }
+ { "mode" = "0000" }
+ { "uid" = "jonhsmith" }
+ { "gid" = "60" }
+ { "age" = "1s" }
+ { "argument" = "foo" }
+ }
+
+ (* Variable: mode3
+Mode field example with only three digits *)
+ let mode3 = "c+! /tmp/foo 755\n"
+
+ (* Variable: mode3_tree
+Tree for <mode3> *)
+ let mode3_tree =
+ {
+ "1"
+ { "type" = "c+!" }
+ { "path" = "/tmp/foo" }
+ { "mode" = "755" }
+ }
+
+ (* Variable: mode_colon
+Mode field with colon prefix *)
+ let mode_colon = "d- /root :0700 root :root\n"
+
+ (* Variable: mode_colon_tree
+Tree for <mode_colon> *)
+ let mode_colon_tree =
+ {
+ "1"
+ { "type" = "d-" }
+ { "path" = "/root" }
+ { "mode" = ":0700" }
+ { "uid" = "root" }
+ { "gid" = ":root" }
+ }
+
+(************************************************************************
+ * Group: INVALID EXAMPLES
+ *************************************************************************)
+
+ (* Variable: invalid_too_short
+Invalid example that does not contain a path *)
+ let invalid_too_short = "H\n"
+
+ (* Variable: invalid_age
+Invalid example that contains an invalid age *)
+ let invalid_age = "H /var/tmp/js 0000 jonhsmith 60 1sss foo\n"
+
+ (* Variable: invalid_type
+Invalid example that contains an invalid type (bad letter) *)
+ let invalid_type = "i /var/tmp/js 0000 jonhsmith 60 1s foo\n"
+
+ (* Variable: invalid_type_num
+Invalid example that contains an invalid type (numeric) *)
+ let invalid_type_num = "1 /var/tmp/js 0000 jonhsmith 60 1s foo\n"
+
+ (* Variable: invalid_mode
+Invalid example that contains an invalid mode (bad int) *)
+ let invalid_mode = "H /var/tmp/js 8000 jonhsmith 60 1s foo\n"
+
+ (* Variable: invalid_mode_alpha
+Invalid example that contains an invalid mode (letter) *)
+ let invalid_mode_alpha = "H /var/tmp/js a000 jonhsmith 60 1s foo\n"
+
+ test Tmpfiles.lns get simple = simple_tree
+
+ test Tmpfiles.lns get complex = complex_tree
+
+ test Tmpfiles.lns get trailing_ws = complex_tree
+
+ test Tmpfiles.lns get empty = {}{}{}
+
+ test Tmpfiles.lns get exclamation_mark = exclamation_mark_tree
+
+ test Tmpfiles.lns get minus = minus_tree
+
+ test Tmpfiles.lns get equal = equal_tree
+
+ test Tmpfiles.lns get tilde = tilde_tree
+
+ test Tmpfiles.lns get caret = caret_tree
+
+ test Tmpfiles.lns get short = short_tree
+
+ test Tmpfiles.lns get short_mode = short_mode_tree
+
+ test Tmpfiles.lns get short_uid = short_uid_tree
+
+ test Tmpfiles.lns get short_gid = short_gid_tree
+
+ test Tmpfiles.lns get short_age = short_age_tree
+
+ test Tmpfiles.lns get complex_arg = complex_arg_tree
+
+ test Tmpfiles.lns get valid_short_args = valid_short_args_tree
+
+ test Tmpfiles.lns get valid_second = valid_second_tree
+
+ test Tmpfiles.lns get valid_days = valid_days_tree
+
+ test Tmpfiles.lns get valid_age = valid_age_tree
+
+ test Tmpfiles.lns get percent = percent_tree
+
+ test Tmpfiles.lns get hyphen = hyphen_tree
+
+ test Tmpfiles.lns get valid_base = valid_base_tree
+
+ test Tmpfiles.lns get mode3 = mode3_tree
+
+ test Tmpfiles.lns get mode_colon = mode_colon_tree
+
+
+(* failure cases *)
+
+ test Tmpfiles.lns get invalid_too_short = *
+
+ test Tmpfiles.lns get invalid_age = *
+
+ test Tmpfiles.lns get invalid_type = *
+
+ test Tmpfiles.lns get invalid_type_num = *
+
+ test Tmpfiles.lns get invalid_mode = *
+
+ test Tmpfiles.lns get invalid_mode_alpha = *
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+module Test_Toml =
+
+(* Test: Toml.norec
+ String value *)
+test Toml.norec get "\"foo\"" = { "string" = "foo" }
+
+(* Test: Toml.norec
+ Integer value *)
+test Toml.norec get "42" = { "integer" = "42" }
+
+(* Test: Toml.norec
+ Positive integer value *)
+test Toml.norec get "+42" = { "integer" = "+42" }
+
+(* Test: Toml.norec
+ Negative integer value *)
+test Toml.norec get "-42" = { "integer" = "-42" }
+
+(* Test: Toml.norec
+ Large integer value *)
+test Toml.norec get "5_349_221" = { "integer" = "5_349_221" }
+
+(* Test: Toml.norec
+ Hexadecimal integer value *)
+test Toml.norec get "0xDEADBEEF" = { "integer" = "0xDEADBEEF" }
+
+(* Test: Toml.norec
+ Octal integer value *)
+test Toml.norec get "0o755" = { "integer" = "0o755" }
+
+(* Test: Toml.norec
+ Binary integer value *)
+test Toml.norec get "0b11010110" = { "integer" = "0b11010110" }
+
+(* Test: Toml.norec
+ Float value *)
+test Toml.norec get "3.14" = { "float" = "3.14" }
+
+(* Test: Toml.norec
+ Positive float value *)
+test Toml.norec get "+3.14" = { "float" = "+3.14" }
+
+(* Test: Toml.norec
+ Negative float value *)
+test Toml.norec get "-3.14" = { "float" = "-3.14" }
+
+(* Test: Toml.norec
+ Complex float value *)
+test Toml.norec get "-3_220.145_223e-34" = { "float" = "-3_220.145_223e-34" }
+
+(* Test: Toml.norec
+ Inf float value *)
+test Toml.norec get "-inf" = { "float" = "-inf" }
+
+(* Test: Toml.norec
+ Nan float value *)
+test Toml.norec get "-nan" = { "float" = "-nan" }
+
+(* Test: Toml.norec
+ Bool value *)
+test Toml.norec get "true" = { "bool" = "true" }
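+
+(* Test: Toml.norec
+   Bool value (false); an assumed counterpart to the "true" case above,
+   on the expectation that both boolean literals share the "bool" label *)
+test Toml.norec get "false" = { "bool" = "false" }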
+
+(* Test: Toml.norec
+ Datetime value *)
+test Toml.norec get "1979-05-27T07:32:00Z" =
+ { "datetime" = "1979-05-27T07:32:00Z" }
+test Toml.norec get "1979-05-27 07:32:00.999999" =
+ { "datetime" = "1979-05-27 07:32:00.999999" }
+
+(* Test: Toml.norec
+ Date value *)
+test Toml.norec get "1979-05-27" =
+ { "date" = "1979-05-27" }
+
+(* Test: Toml.norec
+ Time value *)
+test Toml.norec get "07:32:00" =
+ { "time" = "07:32:00" }
+
+(* Test: Toml.norec
+ String value with newline *)
+test Toml.norec get "\"bar\nbaz\"" =
+ { "string" = "bar\nbaz" }
+
+(* Test: Toml.norec
+ Multiline value *)
+test Toml.norec get "\"\"\"\nbar\nbaz\n \"\"\"" =
+ { "string_multi" = "bar\nbaz" }
+
+(* Test: Toml.norec
+ Literal string value *)
+test Toml.norec get "'bar\nbaz'" =
+ { "string_literal" = "bar\nbaz" }
+
+(* Test: Toml.array_norec
+ Empty array *)
+test Toml.array_norec get "[ ]" =
+ { "array" {} }
+
+(* Test: Toml.array_norec
+ Array of strings *)
+test Toml.array_norec get "[ \"foo\", \"bar\" ]" =
+ { "array" {}
+ { "string" = "foo" } {}
+ { "string" = "bar" } {} }
+
+(* Test: Toml.array_norec
+ Array of strings with trailing comma *)
+test Toml.array_norec get "[ \"foo\", \"bar\", ]" =
+ { "array" {}
+ { "string" = "foo" } {}
+ { "string" = "bar" } {} }
+
+(* Test: Toml.array_norec
+ Array of integers with trailing comma multiline *)
+test Toml.array_norec get "[
+ 1,
+ 2,
+]" =
+ { "array" {}
+ { "integer" = "1" } {}
+ { "integer" = "2" } {} }
+
+(* Test: Toml.array_norec
+ Array of integers with trailing comma and comment *)
+test Toml.array_norec get "[
+ 1,
+ 2, # this is ok
+]" =
+ { "array" {}
+ { "integer" = "1" } {}
+ { "integer" = "2" } { "#comment" = "this is ok" } }
+
+(* Test: Toml.array_rec
+ Array of arrays *)
+test Toml.array_rec get "[ [ \"foo\", \"bar\" ], 42 ]" =
+ { "array" {}
+ { "array" {}
+ { "string" = "foo" } {}
+ { "string" = "bar" } {} } {}
+ { "integer" = "42" } {} }
+
+(* Test: Toml.lns
+ Global parameters *)
+test Toml.lns get "# Globals
+foo = \"bar\"\n" =
+ { "#comment" = "Globals" }
+ { "entry" = "foo" { "string" = "bar" } }
+
+(* Test: Toml.lns
+ Simple section/value *)
+test Toml.lns get "[foo]
+bar = \"baz\"\n" =
+ { "table" = "foo" { "entry" = "bar" { "string" = "baz" } } }
+
+(* Test: Toml.lns
+ Subsections *)
+test Toml.lns get "[foo]
+title = \"bar\"
+ [foo.one]
+ hello = \"world\"\n" =
+ { "table" = "foo"
+ { "entry" = "title" { "string" = "bar" } } }
+ { "table" = "foo.one"
+ { "entry" = "hello" { "string" = "world" } } }
+
+(* Test: Toml.lns
+ Nested subsections *)
+test Toml.lns get "[foo]
+[foo.one]
+[foo.one.two]
+bar = \"baz\"\n" =
+ { "table" = "foo" }
+ { "table" = "foo.one" }
+ { "table" = "foo.one.two"
+ { "entry" = "bar" { "string" = "baz" } } }
+
+(* Test: Toml.lns
+ Arrays of tables *)
+test Toml.lns get "[[products]]
+name = \"Hammer\"
+sku = 738594937
+
+[[products]]
+
+[[products]]
+name = \"Nail\"
+sku = 284758393
+color = \"gray\"\n" =
+ { "@table" = "products"
+ { "entry" = "name"
+ { "string" = "Hammer" } }
+ { "entry" = "sku"
+ { "integer" = "738594937" } }
+ { }
+ }
+ { "@table" = "products"
+ { }
+ }
+ { "@table" = "products"
+ { "entry" = "name"
+ { "string" = "Nail" } }
+ { "entry" = "sku"
+ { "integer" = "284758393" } }
+ { "entry" = "color"
+ { "string" = "gray" } }
+ }
+
+(* Test: Toml.entry
+ Empty inline table *)
+test Toml.entry get "name = { }\n" =
+ { "entry" = "name"
+ { "inline_table"
+ { } } }
+
+(* Test: Toml.entry
+ Inline table *)
+test Toml.entry get "name = { first = \"Tom\", last = \"Preston-Werner\" }\n" =
+ { "entry" = "name"
+ { "inline_table" {}
+ { "entry" = "first"
+ { "string" = "Tom" } } {}
+ { "entry" = "last"
+ { "string" = "Preston-Werner" } } {} } }
+
+(* Test: Toml.entry
+ Array value in inline_table *)
+test Toml.entry get "foo = { bar = [\"baz\"] }\n" =
+ { "entry" = "foo"
+ { "inline_table" {}
+ { "entry" = "bar"
+ { "array"
+ { "string" = "baz" } } } {} } }
+
+
+(* Variable: example
+ The example from https://github.com/mojombo/toml *)
+let example = "# This is a TOML document. Boom.
+
+title = \"TOML Example\"
+
+[owner]
+name = \"Tom Preston-Werner\"
+organization = \"GitHub\"
+bio = \"GitHub Cofounder & CEO\nLikes tater tots and beer.\"
+dob = 1979-05-27T07:32:00Z # First class dates? Why not?
+
+[database]
+server = \"192.168.1.1\"
+ports = [ 8001, 8001, 8002 ]
+connection_max = 5000
+enabled = true
+
+[servers]
+
+ # You can indent as you please. Tabs or spaces. TOML don't care.
+ [servers.alpha]
+ ip = \"10.0.0.1\"
+ dc = \"eqdc10\"
+
+ [servers.beta]
+ ip = \"10.0.0.2\"
+ dc = \"eqdc10\"
+ country = \"中国\" # This should be parsed as UTF-8
+
+[clients]
+data = [ [\"gamma\", \"delta\"], [1, 2] ] # just an update to make sure parsers support it
+
+# Line breaks are OK when inside arrays
+hosts = [
+ \"alpha\",
+ \"omega\"
+]
+
+# Products
+
+ [[products]]
+ name = \"Hammer\"
+ sku = 738594937
+
+ [[products]]
+ name = \"Nail\"
+ sku = 284758393
+ color = \"gray\"
+"
+
+test Toml.lns get example =
+ { "#comment" = "This is a TOML document. Boom." } { }
+ { "entry" = "title"
+ { "string" = "TOML Example" } } { }
+ { "table" = "owner"
+ { "entry" = "name"
+ { "string" = "Tom Preston-Werner" } }
+ { "entry" = "organization"
+ { "string" = "GitHub" } }
+ { "entry" = "bio"
+ { "string" = "GitHub Cofounder & CEO
+Likes tater tots and beer." }
+ }
+ { "entry" = "dob"
+ { "datetime" = "1979-05-27T07:32:00Z" }
+ { "#comment" = "First class dates? Why not?" } } { } }
+ { "table" = "database"
+ { "entry" = "server"
+ { "string" = "192.168.1.1" } }
+ { "entry" = "ports"
+ { "array" { }
+ { "integer" = "8001" } { }
+ { "integer" = "8001" } { }
+ { "integer" = "8002" } { } } }
+ { "entry" = "connection_max"
+ { "integer" = "5000" } }
+ { "entry" = "enabled"
+ { "bool" = "true" } } { } }
+ { "table" = "servers" { }
+ { "#comment" = "You can indent as you please. Tabs or spaces. TOML don't care." } }
+ { "table" = "servers.alpha"
+ { "entry" = "ip"
+ { "string" = "10.0.0.1" } }
+ { "entry" = "dc"
+ { "string" = "eqdc10" } } { } }
+ { "table" = "servers.beta"
+ { "entry" = "ip"
+ { "string" = "10.0.0.2" } }
+ { "entry" = "dc"
+ { "string" = "eqdc10" } }
+ { "entry" = "country"
+ { "string" = "中国" }
+ { "#comment" = "This should be parsed as UTF-8" } } { } }
+ { "table" = "clients"
+ { "entry" = "data"
+ { "array" { }
+ { "array"
+ { "string" = "gamma" } { }
+ { "string" = "delta" } } { }
+ { "array"
+ { "integer" = "1" } { }
+ { "integer" = "2" } } { } }
+ { "#comment" = "just an update to make sure parsers support it" } } { }
+ { "#comment" = "Line breaks are OK when inside arrays" }
+ { "entry" = "hosts"
+ { "array" { }
+ { "string" = "alpha" } { }
+ { "string" = "omega" } { } } } { }
+ { "#comment" = "Products" } { } }
+ { "@table" = "products"
+ { "entry" = "name"
+ { "string" = "Hammer" } }
+ { "entry" = "sku"
+ { "integer" = "738594937" } } { } }
+ { "@table" = "products"
+ { "entry" = "name"
+ { "string" = "Nail" } }
+ { "entry" = "sku"
+ { "integer" = "284758393" } }
+ { "entry" = "color"
+ { "string" = "gray" } } }
+
+(* Variable: minimal_toml *)
+let minimal_toml = "[root]
+foo = \"bar\"
+"
+
+(* Test: minimal write *)
+test Toml.lns put minimal_toml after
+ set "/table[1]/entry[1]/string" "foo"
+ = "[root]
+foo = \"foo\"
+"
--- /dev/null
+module Test_Trapperkeeper =
+
+(* Variable: config *)
+let config = "
+ # This is a comment
+ webserver: {
+ bar: {
+ # A comment
+ host: localhost
+ port= 9000
+ default-server: true
+ }
+
+ foo: {
+ host: localhost
+ port = 10000
+ }
+}
+
+jruby-puppet: {
+ # This setting determines where JRuby will look for gems. It is also
+ # used by the `puppetserver gem` command line tool.
+ gem-home: /var/lib/puppet/jruby-gems
+
+ # (optional) path to puppet conf dir; if not specified, will use the puppet default
+ master-conf-dir: /etc/puppet
+
+ # (optional) path to puppet var dir; if not specified, will use the puppet default
+ master-var-dir: /var/lib/puppet
+
+ # (optional) maximum number of JRuby instances to allow; defaults to <num-cpus>+2
+ #max-active-instances: 1
+}
+
+
+# CA-related settings
+certificate-authority: {
+
+ # settings for the certificate_status HTTP endpoint
+ certificate-status: {
+
+ # this setting contains a list of client certnames who are whitelisted to
+ # have access to the certificate_status endpoint. Any requests made to
+ # this endpoint that do not present a valid client cert mentioned in
+ # this list will be denied access.
+ client-whitelist: []
+ }
+}
+
+os-settings: {
+ ruby-load-path: [/usr/lib/ruby/vendor_ruby, /home/foo/ruby ]
+}
+\n"
+
+(* Test: Trapperkeeper.lns
+ Test full config file *)
+test Trapperkeeper.lns get config =
+ { }
+ { "#comment" = "This is a comment" }
+ { "@hash" = "webserver"
+ { "@hash" = "bar"
+ { "#comment" = "A comment" }
+ { "@simple" = "host" { "@value" = "localhost" } }
+ { "@simple" = "port" { "@value" = "9000" } }
+ { "@simple" = "default-server" { "@value" = "true" } }
+ }
+ { }
+ { "@hash" = "foo"
+ { "@simple" = "host" { "@value" = "localhost" } }
+ { "@simple" = "port" { "@value" = "10000" } }
+ }
+ }
+ { }
+ { "@hash" = "jruby-puppet"
+ { "#comment" = "This setting determines where JRuby will look for gems. It is also" }
+ { "#comment" = "used by the `puppetserver gem` command line tool." }
+ { "@simple" = "gem-home" { "@value" = "/var/lib/puppet/jruby-gems" } }
+ { }
+ { "#comment" = "(optional) path to puppet conf dir; if not specified, will use the puppet default" }
+ { "@simple" = "master-conf-dir" { "@value" = "/etc/puppet" } }
+ { }
+ { "#comment" = "(optional) path to puppet var dir; if not specified, will use the puppet default" }
+ { "@simple" = "master-var-dir" { "@value" = "/var/lib/puppet" } }
+ { }
+ { "#comment" = "(optional) maximum number of JRuby instances to allow; defaults to <num-cpus>+2" }
+ { "#comment" = "max-active-instances: 1" }
+ }
+ { }
+ { }
+ { "#comment" = "CA-related settings" }
+ { "@hash" = "certificate-authority"
+ { "#comment" = "settings for the certificate_status HTTP endpoint" }
+ { "@hash" = "certificate-status"
+ { "#comment" = "this setting contains a list of client certnames who are whitelisted to" }
+ { "#comment" = "have access to the certificate_status endpoint. Any requests made to" }
+ { "#comment" = "this endpoint that do not present a valid client cert mentioned in" }
+ { "#comment" = "this list will be denied access." }
+ { "@array" = "client-whitelist" }
+ }
+ }
+ { }
+ { "@hash" = "os-settings"
+ { "@array" = "ruby-load-path"
+ { "1" = "/usr/lib/ruby/vendor_ruby" }
+ { "2" = "/home/foo/ruby" }
+ }
+ }
+ { }
+
+
+(* Test: Trapperkeeper.lns
+ Should parse an empty file *)
+test Trapperkeeper.lns get "\n" = {}
+
+(* Test: Trapperkeeper.lns
+ Values can be quoted *)
+test Trapperkeeper.lns get "os-settings: {
+ ruby-load-paths: [\"/usr/lib/ruby/site_ruby/1.8\"]
+}\n" =
+ { "@hash" = "os-settings"
+ { "@array" = "ruby-load-paths"
+ { "1" = "/usr/lib/ruby/site_ruby/1.8" }
+ }
+ }
+
+(* Test: Trapperkeeper.lns
+ Keys can be quoted *)
+test Trapperkeeper.lns get "test: {
+ \"x\": true
+}\n" =
+ { "@hash" = "test"
+ { "@simple" = "x" { "@value" = "true" } } }
+
+(* Test: Trapperkeeper.lns
+ Keys can contain / (GH #7)
+*)
+test Trapperkeeper.lns get "test: {
+ \"x/y\" : z
+}\n" =
+ { "@hash" = "test"
+ { "@simple" = "x/y" { "@value" = "z" } } }
--- /dev/null
+module Test_tuned =
+
+let conf = "# Global tuned configuration file.
+
+dynamic_tuning = 0
+update_interval = 10
+"
+
+test Tuned.lns get conf =
+ { "#comment" = "Global tuned configuration file." }
+ { }
+ { "dynamic_tuning" = "0" }
+ { "update_interval" = "10" }
--- /dev/null
+(*
+Module: Test_Up2date
+ Provides unit tests and examples for the <Up2date> lens.
+*)
+
+module Test_Up2date =
+
+(* Variable: empty *)
+let empty = "keyword=
+"
+test Up2date.lns get empty =
+ { "1" = "keyword" }
+
+(* Variable: list_empty *)
+let list_empty = "keyword=;
+"
+test Up2date.lns get list_empty =
+ { "1" = "keyword"
+ { "values" } }
+
+(* Variable: list_one *)
+let list_one = "keyword=foo;
+"
+test Up2date.lns get list_one =
+ { "1" = "keyword"
+ { "values"
+ { "1" = "foo" } } }
+
+(* Variable: list_two
+ Probably not useful, up2date throws "bar" away *)
+let list_two = "keyword=foo;bar
+"
+test Up2date.lns get list_two =
+ { "1" = "keyword"
+ { "values"
+ { "1" = "foo" }
+ { "2" = "bar" } } }
+
+(* Variable: list_two_trailing *)
+let list_two_trailing = "keyword=foo;bar;
+"
+test Up2date.lns get list_two_trailing =
+ { "1" = "keyword"
+ { "values"
+ { "1" = "foo" }
+ { "2" = "bar" } } }
+
+(* Variable: conf *)
+let conf = "# Red Hat Update Agent config file.
+# Format: 1.0
+
+debug[comment]=Whether or not debugging is enabled
+debug=0
+
+systemIdPath[comment]=Location of system id
+systemIdPath=/etc/sysconfig/rhn/systemid
+
+serverURL[comment]=Remote server URL (use FQDN)
+#serverURL=https://xmlrpc.rhn.redhat.com/XMLRPC
+serverURL=https://enter.your.server.url.here/XMLRPC
+
+hostedWhitelist[comment]=RHN Hosted URL's
+hostedWhitelist=
+
+enableProxy[comment]=Use a HTTP Proxy
+enableProxy=0
+
+versionOverride[comment]=Override the automatically determined system version
+versionOverride=
+
+httpProxy[comment]=HTTP proxy in host:port format, e.g. squid.redhat.com:3128
+httpProxy=
+
+noReboot[comment]=Disable the reboot actions
+noReboot=0
+
+networkRetries[comment]=Number of attempts to make at network connections before giving up
+networkRetries=1
+
+disallowConfChanges[comment]=Config options that can not be overwritten by a config update action
+disallowConfChanges=noReboot;sslCACert;useNoSSLForPackages;noSSLServerURL;serverURL;disallowConfChanges;
+
+sslCACert[comment]=The CA cert used to verify the ssl server
+sslCACert=/usr/share/rhn/RHNS-CA-CERT
+
+# Akamai does not support http protocol, therefore setting this option as side effect disable \"Location aware\" function
+useNoSSLForPackages[comment]=Use the noSSLServerURL for package, package list, and header fetching (disable Akamai)
+useNoSSLForPackages=0
+
+retrieveOnly[comment]=Retrieve packages only
+retrieveOnly=0
+
+skipNetwork[comment]=Skips network information in hardware profile sync during registration.
+skipNetwork=0
+
+tmpDir[comment]=Use this Directory to place the temporary transport files
+tmpDir=/tmp
+
+writeChangesToLog[comment]=Log to /var/log/up2date which packages has been added and removed
+writeChangesToLog=0
+
+stagingContent[comment]=Retrieve content of future actions in advance
+stagingContent=1
+
+stagingContentWindow[comment]=How much forward we should look for future actions. In hours.
+stagingContentWindow=24
+"
+
+(* Test: Up2date.lns *)
+test Up2date.lns get conf =
+ { "#comment" = "Red Hat Update Agent config file." }
+ { "#comment" = "Format: 1.0" }
+ { }
+ { "1" = "debug[comment]"
+ { "value" = "Whether or not debugging is enabled" } }
+ { "2" = "debug"
+ { "value" = "0" } }
+ { }
+ { "3" = "systemIdPath[comment]"
+ { "value" = "Location of system id" } }
+ { "4" = "systemIdPath"
+ { "value" = "/etc/sysconfig/rhn/systemid" } }
+ { }
+ { "5" = "serverURL[comment]"
+ { "value" = "Remote server URL (use FQDN)" } }
+ { "#comment" = "serverURL=https://xmlrpc.rhn.redhat.com/XMLRPC" }
+ { "6" = "serverURL"
+ { "value" = "https://enter.your.server.url.here/XMLRPC" } }
+ { }
+ { "7" = "hostedWhitelist[comment]"
+ { "value" = "RHN Hosted URL's" } }
+ { "8" = "hostedWhitelist" }
+ { }
+ { "9" = "enableProxy[comment]"
+ { "value" = "Use a HTTP Proxy" } }
+ { "10" = "enableProxy"
+ { "value" = "0" } }
+ { }
+ { "11" = "versionOverride[comment]"
+ { "value" = "Override the automatically determined system version" } }
+ { "12" = "versionOverride" }
+ { }
+ { "13" = "httpProxy[comment]"
+ { "value" = "HTTP proxy in host:port format, e.g. squid.redhat.com:3128" } }
+ { "14" = "httpProxy" }
+ { }
+ { "15" = "noReboot[comment]"
+ { "value" = "Disable the reboot actions" } }
+ { "16" = "noReboot"
+ { "value" = "0" } }
+ { }
+ { "17" = "networkRetries[comment]"
+ { "value" = "Number of attempts to make at network connections before giving up" } }
+ { "18" = "networkRetries"
+ { "value" = "1" } }
+ { }
+ { "19" = "disallowConfChanges[comment]"
+ { "value" = "Config options that can not be overwritten by a config update action" } }
+ { "20" = "disallowConfChanges"
+ { "values"
+ { "1" = "noReboot" }
+ { "2" = "sslCACert" }
+ { "3" = "useNoSSLForPackages" }
+ { "4" = "noSSLServerURL" }
+ { "5" = "serverURL" }
+ { "6" = "disallowConfChanges" } } }
+ { }
+ { "21" = "sslCACert[comment]"
+ { "value" = "The CA cert used to verify the ssl server" } }
+ { "22" = "sslCACert"
+ { "value" = "/usr/share/rhn/RHNS-CA-CERT" } }
+ { }
+ { "#comment" = "Akamai does not support http protocol, therefore setting this option as side effect disable \"Location aware\" function" }
+ { "23" = "useNoSSLForPackages[comment]"
+ { "value" = "Use the noSSLServerURL for package, package list, and header fetching (disable Akamai)" } }
+ { "24" = "useNoSSLForPackages"
+ { "value" = "0" } }
+ { }
+ { "25" = "retrieveOnly[comment]"
+ { "value" = "Retrieve packages only" } }
+ { "26" = "retrieveOnly"
+ { "value" = "0" } }
+ { }
+ { "27" = "skipNetwork[comment]"
+ { "value" = "Skips network information in hardware profile sync during registration." } }
+ { "28" = "skipNetwork"
+ { "value" = "0" } }
+ { }
+ { "29" = "tmpDir[comment]"
+ { "value" = "Use this Directory to place the temporary transport files" } }
+ { "30" = "tmpDir"
+ { "value" = "/tmp" } }
+ { }
+ { "31" = "writeChangesToLog[comment]"
+ { "value" = "Log to /var/log/up2date which packages has been added and removed" } }
+ { "32" = "writeChangesToLog"
+ { "value" = "0" } }
+ { }
+ { "33" = "stagingContent[comment]"
+ { "value" = "Retrieve content of future actions in advance" } }
+ { "34" = "stagingContent"
+ { "value" = "1" } }
+ { }
+ { "35" = "stagingContentWindow[comment]"
+ { "value" = "How much forward we should look for future actions. In hours." } }
+ { "36" = "stagingContentWindow"
+ { "value" = "24" } }
+
+
--- /dev/null
+(*
+Module: Test_UpdateDB
+ Provides unit tests and examples for the <UpdateDB> lens.
+*)
+module Test_UpdateDB =
+
+(* Test: UpdateDB.lns
+ Simple get test *)
+test UpdateDB.lns get "# A comment
+PRUNEPATHS=\"/tmp /var/spool /media /home/.ecryptfs\"
+PRUNEFS= \"NFS nfs nfs4 rpc_pipefs\"
+PRUNE_BIND_MOUNTS = \"yes\"\n" =
+ { "#comment" = "A comment" }
+ { "PRUNEPATHS"
+ { "entry" = "/tmp" }
+ { "entry" = "/var/spool" }
+ { "entry" = "/media" }
+ { "entry" = "/home/.ecryptfs" }
+ }
+ { "PRUNEFS"
+ { "entry" = "NFS" }
+ { "entry" = "nfs" }
+ { "entry" = "nfs4" }
+ { "entry" = "rpc_pipefs" }
+ }
+ { "PRUNE_BIND_MOUNTS" = "yes" }
+
+(* Test: UpdateDB.lns
+ Adding to a list *)
+test UpdateDB.lns put "PRUNEFS=\"NFS nfs nfs4 rpc_pipefs\"\n"
+ after set "/PRUNEFS/entry[last()+1]" "ecryptfs" =
+"PRUNEFS=\"NFS nfs nfs4 rpc_pipefs ecryptfs\"\n"
--- /dev/null
+module Test_Util =
+
+test Util.empty_c_style get "/* */\n" =
+ { }
+
+test Util.comment_multiline get "/* comment */\n" =
+ { "#mcomment"
+ { "1" = "comment" }
+ }
+
+test Util.comment_multiline get "/*\ncomment\n*/\n" =
+ { "#mcomment"
+ { "1" = "comment" }
+ }
+
+test Util.comment_multiline get "/**
+ * Multi line comment
+ *
+ */\n" =
+ { "#mcomment"
+ { "1" = "*" }
+ { "2" = "* Multi line comment" }
+ { "3" = "*" }
+ }
--- /dev/null
+module Test_vfstab =
+
+ let simple = "/dev/dsk/c0t0d0s1 /dev/rdsk/c0t0d0s1 /test ufs 1 yes ro\n"
+ let simple_tree =
+ { "1"
+ { "spec" = "/dev/dsk/c0t0d0s1" }
+ { "fsck" = "/dev/rdsk/c0t0d0s1" }
+ { "file" = "/test" }
+ { "vfstype" = "ufs" }
+ { "passno" = "1" }
+ { "atboot" = "yes" }
+ { "opt" = "ro" } }
+ test Vfstab.lns get simple = simple_tree
+
+ let trailing_ws = "/dev/dsk/c0t0d0s1\t /dev/rdsk/c0t0d0s1\t /test\t ufs\t 1\t yes\t ro \t \n"
+ test Vfstab.lns get trailing_ws = simple_tree
+
+ (* Now test combinations where unneeded fields can be replaced by dashes and
+ then should not appear in the tree. *)
+ let gen_empty_field(fsck:string) (passno:string) (opt:string) =
+ "/dev/dsk/c0t0d0s1\t " . fsck . "\t /test\t ufs\t " . passno . " yes " . opt . "\t\n"
+
+ (* Missing fsck *)
+ let no_fsck = gen_empty_field "-" "1" "ro"
+ test Vfstab.lns get no_fsck =
+ { "1"
+ { "spec" = "/dev/dsk/c0t0d0s1" }
+ { "file" = "/test" }
+ { "vfstype" = "ufs" }
+ { "passno" = "1" }
+ { "atboot" = "yes" }
+ { "opt" = "ro" } }
+
+ test Vfstab.lns put no_fsck after
+ insa "fsck" "/1/spec" ;
+ set "/1/fsck" "/dev/rdsk/c0t0d0s1" = gen_empty_field "/dev/rdsk/c0t0d0s1" "1" "ro"
+
+ (* Missing passno *)
+ let no_passno = gen_empty_field "/dev/rdsk/c0t0d0s1" "-" "ro"
+ test Vfstab.lns get no_passno =
+ { "1"
+ { "spec" = "/dev/dsk/c0t0d0s1" }
+ { "fsck" = "/dev/rdsk/c0t0d0s1" }
+ { "file" = "/test" }
+ { "vfstype" = "ufs" }
+ { "atboot" = "yes" }
+ { "opt" = "ro" } }
+
+ test Vfstab.lns put no_passno after
+ insa "passno" "/1/vfstype" ;
+ set "/1/passno" "1" = gen_empty_field "/dev/rdsk/c0t0d0s1" "1" "ro"
+
+ (* Missing opts *)
+ let no_opts = gen_empty_field "/dev/rdsk/c0t0d0s1" "1" "-"
+ test Vfstab.lns get no_opts =
+ { "1"
+ { "spec" = "/dev/dsk/c0t0d0s1" }
+ { "fsck" = "/dev/rdsk/c0t0d0s1" }
+ { "file" = "/test" }
+ { "vfstype" = "ufs" }
+ { "passno" = "1" }
+ { "atboot" = "yes" } }
+
+ test Vfstab.lns put no_opts after
+ insa "opt" "/1/atboot" ;
+ insa "opt" "/1/atboot" ;
+ set "/1/opt[1]" "ro" ;
+ set "/1/opt[2]" "fg" = gen_empty_field "/dev/rdsk/c0t0d0s1" "1" "ro,fg"
+
+ let multi_opts = "/dev/dsk/c0t0d0s1 /dev/rdsk/c0t0d0s1 /test ufs 1 yes ro,nosuid,retry=5,fg\n"
+ let multi_opts_tree =
+ { "1"
+ { "spec" = "/dev/dsk/c0t0d0s1" }
+ { "fsck" = "/dev/rdsk/c0t0d0s1" }
+ { "file" = "/test" }
+ { "vfstype" = "ufs" }
+ { "passno" = "1" }
+ { "atboot" = "yes" }
+ { "opt" = "ro" }
+ { "opt" = "nosuid" }
+ { "opt" = "retry"
+ { "value" = "5" } }
+ { "opt" = "fg" } }
+ test Vfstab.lns get multi_opts = multi_opts_tree
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+(*
+Module: Test_VMware_Config
+ Provides unit tests and examples for the <VMware_Config> lens.
+*)
+
+module Test_VMware_Config =
+
+(* Variable: conf *)
+let conf = "libdir = \"/usr/lib/vmware\"
+dhcpd.fullpath = \"/usr/bin/vmnet-dhcpd\"
+authd.fullpath = \"/usr/sbin/vmware-authd\"
+authd.client.port = \"902\"
+loop.fullpath = \"/usr/bin/vmware-loop\"
+vmware.fullpath = \"/usr/bin/vmware\"
+control.fullpath = \"/usr/bin/vmware-cmd\"
+serverd.fullpath = \"/usr/sbin/vmware-serverd\"
+wizard.fullpath = \"/usr/bin/vmware-wizard\"
+serverd.init.fullpath = \"/usr/lib/vmware/serverd/init.pl\"
+serverd.vpxuser = \"vpxuser\"
+serverd.snmpdconf.subagentenabled = \"TRUE\"
+template.useFlatDisks = \"TRUE\"
+autoStart.defaultStartDelay = \"60\"
+autoStart.enabled = \"True\"
+autoStart.defaultStopDelay = \"60\"
+"
+
+(* Test: VMware_Config.lns *)
+test VMware_Config.lns get conf =
+ { "libdir" = "/usr/lib/vmware" }
+ { "dhcpd.fullpath" = "/usr/bin/vmnet-dhcpd" }
+ { "authd.fullpath" = "/usr/sbin/vmware-authd" }
+ { "authd.client.port" = "902" }
+ { "loop.fullpath" = "/usr/bin/vmware-loop" }
+ { "vmware.fullpath" = "/usr/bin/vmware" }
+ { "control.fullpath" = "/usr/bin/vmware-cmd" }
+ { "serverd.fullpath" = "/usr/sbin/vmware-serverd" }
+ { "wizard.fullpath" = "/usr/bin/vmware-wizard" }
+ { "serverd.init.fullpath" = "/usr/lib/vmware/serverd/init.pl" }
+ { "serverd.vpxuser" = "vpxuser" }
+ { "serverd.snmpdconf.subagentenabled" = "TRUE" }
+ { "template.useFlatDisks" = "TRUE" }
+ { "autoStart.defaultStartDelay" = "60" }
+ { "autoStart.enabled" = "True" }
+ { "autoStart.defaultStopDelay" = "60" }
+
+(* Test: VMware_Config.lns
+ Quotes are not mandatory *)
+test VMware_Config.lns get "xkeymap.nokeycodeMap = true\n" =
+ { "xkeymap.nokeycodeMap" = "true" }
--- /dev/null
+module Test_vsftpd =
+
+test Vsftpd.lns get "listen=YES\nmdtm_write=false\n" =
+ { "listen" = "YES" }
+ { "mdtm_write" = "false" }
+
+test Vsftpd.lns get "listen=on\n" = *
+
+test Vsftpd.lns get "local_umask=0777\n" = { "local_umask" = "0777" }
+
+test Vsftpd.lns get "listen_port=ftp\n" = *
+
+test Vsftpd.lns get "ftp_username=ftp_user\n" = { "ftp_username" = "ftp_user" }
+
+(* There must not be spaces around the '=' *)
+test Vsftpd.lns get "anon_root = /var/lib/vsftpd/anon" = *
+
+
+let conf = "# Example config file /etc/vsftpd/vsftpd.conf
+#
+# The default compiled in settings are fairly paranoid. This sample file
+# loosens things up a bit, to make the ftp daemon more usable.
+# Please see vsftpd.conf.5 for all compiled in defaults.
+#
+# Allow anonymous FTP? (Beware - allowed by default if you comment this out).
+anonymous_enable=YES
+#
+# Default umask for local users is 077. You may wish to change this to 022,
+# if your users expect that (022 is used by most other ftpd's)
+local_umask=022
+#
+# You may specify an explicit list of local users to chroot() to their home
+# directory. If chroot_local_user is YES, then this list becomes a list of
+# users to NOT chroot().
+chroot_list_enable=YES
+# (default follows)
+chroot_list_file=/etc/vsftpd/chroot_list
+#
+
+pam_service_name=vsftpd
+userlist_enable=YES
+tcp_wrappers=YES
+allow_writeable_chroot=YES
+
+"
+
+test Vsftpd.lns get conf =
+ { "#comment" = "Example config file /etc/vsftpd/vsftpd.conf" }
+ {}
+ { "#comment" = "The default compiled in settings are fairly paranoid. This sample file" }
+ { "#comment" = "loosens things up a bit, to make the ftp daemon more usable." }
+ { "#comment" = "Please see vsftpd.conf.5 for all compiled in defaults." }
+ {}
+ { "#comment" = "Allow anonymous FTP? (Beware - allowed by default if you comment this out)." }
+ { "anonymous_enable" = "YES" }
+ {}
+ { "#comment" = "Default umask for local users is 077. You may wish to change this to 022," }
+ { "#comment" = "if your users expect that (022 is used by most other ftpd's)" }
+ { "local_umask" = "022" }
+ {}
+ { "#comment" = "You may specify an explicit list of local users to chroot() to their home" }
+ { "#comment" = "directory. If chroot_local_user is YES, then this list becomes a list of" }
+ { "#comment" = "users to NOT chroot()." }
+ { "chroot_list_enable" = "YES" }
+ { "#comment" = "(default follows)" }
+ { "chroot_list_file" = "/etc/vsftpd/chroot_list" }
+ {}
+ {}
+ { "pam_service_name" = "vsftpd" }
+ { "userlist_enable" = "YES" }
+ { "tcp_wrappers" = "YES" }
+ { "allow_writeable_chroot" = "YES" }
+ {}
+
--- /dev/null
+module Test_webmin =
+
+let conf = "port=10000
+realm=Webmin Server
+denyfile=\.pl$
+"
+
+test Webmin.lns get conf =
+ { "port" = "10000" }
+ { "realm" = "Webmin Server" }
+ { "denyfile" = "\.pl$" }
--- /dev/null
+module Test_wine =
+
+let s1 = "WINE REGISTRY Version 2
+;; All keys relative to \\Machine
+
+[Software\\Borland\\Database Engine\\Settings\\SYSTEM\\INIT] 1255960431
+\"SHAREDMEMLOCATION\"=\"9000\"
+
+[Software\\Classes\\.gif] 1255960430
+@=\"giffile\"
+\"Content Type\"=\"image/gif\"
+
+[Software\\Classes\\CLSID\\{083863F1-70DE-11D0-BD40-00A0C911CE86}\\Instance\\{1B544C20-FD0B-11CE-8C63-00AA0044B51E}] 1255960430
+\"CLSID\"=\"{1B544C20-FD0B-11CE-8C63-00AA0044B51E}\"
+\"FilterData\"=hex:02,00,00,00,00,00,60,00,02,00,00,00,00,00,00,00,30,70,69,33,\
+ 9f,53,00,20,af,0b,a7,70,76,69,64,73,00,00,10,00,80,00,00,aa,00,38,9b,71,00,\
+ 00,00,00,00,00,00,00,00,00,00,00,00,00,00,00
+\"FriendlyName\"=\"AVI Splitter\"
+
+[Software\\Classes\\CLSID\\{0AFACED1-E828-11D1-9187-B532F1E9575D}\\ShellFolder] 1255960429
+\"Attributes\"=dword:60010000
+\"CallForAttributes\"=dword:f0000000
+"
+
+test Wine.lns get s1 =
+ { "registry" = "WINE REGISTRY" }
+ { "version" = "2" }
+ { "#comment" = "All keys relative to \Machine" }
+ { }
+ { "section" = "Software\Borland\Database Engine\Settings\SYSTEM\INIT"
+ { "timestamp" = "1255960431" }
+ { "entry"
+ { "key" = "SHAREDMEMLOCATION" }
+ { "value" = "9000" } }
+ { } }
+ { "section" = "Software\Classes\.gif"
+ { "timestamp" = "1255960430" }
+ { "anon" { "value" = "giffile" } }
+ { "entry"
+ { "key" = "Content Type" }
+ { "value" = "image/gif" } }
+ { } }
+ { "section" = "Software\Classes\CLSID\{083863F1-70DE-11D0-BD40-00A0C911CE86}\Instance\{1B544C20-FD0B-11CE-8C63-00AA0044B51E}"
+ { "timestamp" = "1255960430" }
+ { "entry"
+ { "key" = "CLSID" }
+ { "value" = "{1B544C20-FD0B-11CE-8C63-00AA0044B51E}" } }
+ { "entry"
+ { "key" = "FilterData" }
+ { "type" = "hex" }
+ { "value" = "02,00,00,00,00,00,60,00,02,00,00,00,00,00,00,00,30,70,69,33,\
+ 9f,53,00,20,af,0b,a7,70,76,69,64,73,00,00,10,00,80,00,00,aa,00,38,9b,71,00,\
+ 00,00,00,00,00,00,00,00,00,00,00,00,00,00,00" } }
+ { "entry"
+ { "key" = "FriendlyName" }
+ { "value" = "AVI Splitter" } }
+ { } }
+ { "section" = "Software\Classes\CLSID\{0AFACED1-E828-11D1-9187-B532F1E9575D}\ShellFolder"
+ { "timestamp" = "1255960429" }
+ { "entry"
+ { "key" = "Attributes" }
+ { "type" = "dword" }
+ { "value" = "60010000" } }
+ { "entry"
+ { "key" = "CallForAttributes" }
+ { "type" = "dword" }
+ { "value" = "f0000000" } } }
+
+(* The weird 'str(2)' type *)
+let s2 = "[Software\\Microsoft\\Windows\\CurrentVersion\\Explorer\\User Shell Folders] 1248768928
+\"AppData\"=str(2):\"%USERPROFILE%\\Application Data\"
+\"Cache\"=str(2):\"%USERPROFILE%\\Local Settings\\Temporary Internet Files\"
+\"Cookies\"=str(2):\"%USERPROFILE%\\Cookies\"
+\"Desktop\"=str(2):\"%USERPROFILE%\\Desktop\"
+"
+
+test Wine.section get s2 =
+ { "section" =
+ "Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders"
+ { "timestamp" = "1248768928" }
+ { "entry"
+ { "key" = "AppData" }
+ { "type" = "str(2)" }
+ { "value" = "%USERPROFILE%\Application Data" } }
+ { "entry"
+ { "key" = "Cache" }
+ { "type" = "str(2)" }
+ { "value" = "%USERPROFILE%\Local Settings\Temporary Internet Files" } }
+ { "entry"
+ { "key" = "Cookies" }
+ { "type" = "str(2)" }
+ { "value" = "%USERPROFILE%\Cookies" } }
+ { "entry"
+ { "key" = "Desktop" }
+ { "type" = "str(2)" }
+ { "value" = "%USERPROFILE%\Desktop" } } }
+
+(* Quoted doublequotes embedded in the string *)
+let s3 = "[Software\\Classes\\CLSID\\Shell\\OpenHomePage\\Command] 1248768931
+@=\"\\\"C:\\Program Files\\Internet Explorer\\iexplore.exe\\\"\"\n"
+
+test Wine.section get s3 =
+ { "section" = "Software\Classes\CLSID\Shell\OpenHomePage\Command"
+ { "timestamp" = "1248768931" }
+ { "anon"
+ { "value" = "\\\"C:\Program Files\Internet Explorer\iexplore.exe\\\"" } } }
+
+(* There's a str(7) type, too *)
+let s4 = "[Software\\Microsoft\\Cryptography\\OID\\EncodingType 1\\CertDllVerifyRevocation\\DEFAULT] 1248768928
+\"Dll\"=str(7):\"cryptnet.dll\0\"\n"
+
+test Wine.section get s4 =
+ { "section" = "Software\Microsoft\Cryptography\OID\EncodingType 1\CertDllVerifyRevocation\DEFAULT"
+ { "timestamp" = "1248768928" }
+ { "entry"
+ { "key" = "Dll" }
+ { "type" = "str(7)" }
+ { "value" = "cryptnet.dll\0" } } }
+
+(* The Windows Registry Editor header *)
+test Wine.header get "Windows Registry Editor Version 5.00\n" =
+ { "registry" = "Windows Registry Editor" }
+ { "version" = "5.00" }
+
+(* The type hex(7) *)
+let s5 = "[HKEY_LOCAL_MACHINE\SOFTWARE\ADFS]
+\"Patches\"=hex(7):25,00,77,00,69,00,6e,00,64,00,69,00,72,00,25,00,5c,00,61,00,\
+ 64,00,66,00,73,00,73,00,70,00,32,00,2e,00,6d,00,73,00,70,00,\
+ 00,00,00,00,f8
+"
+test Wine.section get s5 =
+ { "section" = "HKEY_LOCAL_MACHINE\SOFTWARE\ADFS"
+ { "entry"
+ { "key" = "Patches" }
+ { "type" = "hex(7)" }
+ { "value" = "25,00,77,00,69,00,6e,00,64,00,69,00,72,00,25,00,5c,00,61,00,\
+ 64,00,66,00,73,00,73,00,70,00,32,00,2e,00,6d,00,73,00,70,00,\
+ 00,00,00,00,f8" } } }
+
+(* Test DOS line endings *)
+let s6 = "Windows Registry Editor Version 5.00\r\n\r
+[HKEY_LOCAL_MACHINE\SOFTWARE\C07ft5Y\WinXP]\r
+\"@\"=\"\"\r
+\r
+[HKEY_LOCAL_MACHINE\SOFTWARE\Classes]\r\n"
+
+test Wine.lns get s6 =
+ { "registry" = "Windows Registry Editor" }
+ { "version" = "5.00" }
+ { }
+ { "section" = "HKEY_LOCAL_MACHINE\SOFTWARE\C07ft5Y\WinXP"
+ { "anon" { "value" = "" } }
+ { } }
+ { "section" = "HKEY_LOCAL_MACHINE\SOFTWARE\Classes" }
+
+(* Keys can contain '/' *)
+let s7 =
+"\"application/vnd.ms-xpsdocument\"=\"{c18d5e87-12b4-46a3-ae40-67cf39bc6758}\"\n"
+
+test Wine.entry get s7 =
+ { "entry"
+ { "key" = "application/vnd.ms-xpsdocument" }
+ { "value" = "{c18d5e87-12b4-46a3-ae40-67cf39bc6758}" } }
--- /dev/null
+module Test_xendconfsxp =
+
+ (* Empty file and comments *)
+
+ test Xendconfsxp.lns get "\n" = { }
+
+ test Xendconfsxp.lns get "# my comment\n" =
+ { "#scomment" = " my comment" }
+
+ test Xendconfsxp.lns get " \n# my comment\n" =
+ { }
+ { "#scomment" = " my comment" }
+
+ test Xendconfsxp.lns get "# my comment\n\n" =
+ { "#scomment" = " my comment" }
+ { }
+
+ test Xendconfsxp.lns get "# my comment\n \n" =
+ { "#scomment" = " my comment" }
+ { }
+
+ test Xendconfsxp.lns get "# my comment\n#com2\n" =
+ { "#scomment" = " my comment" }
+ { "#scomment" = "com2" }
+
+ test Xendconfsxp.lns get "# my comment\n\n" =
+ { "#scomment" = " my comment" }
+ { }
+
+ test Xendconfsxp.lns get "\n# my comment\n" =
+ { }
+ { "#scomment" = " my comment" }
+
+ test Xendconfsxp.lns get "(key value)" =
+ { "key" { "item" = "value" } }
+
+ test Xendconfsxp.lns get "(key value)\n# com\n" =
+ { "key" { "item" = "value" } }
+ { }
+ { "#scomment" = " com" }
+
+ test Xendconfsxp.lns get "(k2ey value)" =
+ { "k2ey" { "item" = "value" } }
+
+ test Xendconfsxp.lns get "(- value)" =
+ { "-" { "item" = "value" } }
+
+ test Xendconfsxp.lns get "(-bar value)" =
+ { "-bar" { "item" = "value" } }
+
+ test Xendconfsxp.lns get "(k-ey v-alue)\n# com\n" =
+ { "k-ey" { "item" = "v-alue" } }
+ { }
+ { "#scomment" = " com" }
+
+ test Xendconfsxp.lns get "(key value)" =
+ { "key" { "item" = "value" } }
+
+ test Xendconfsxp.lns get "(key value)(key2 value2)" =
+ { "key"
+ { "item" = "value" }
+ }
+ { "key2"
+ { "item" = "value2" }
+ }
+
+ test Xendconfsxp.lns get "( key value )( key2 value2 )" =
+ { "key"
+ { "item" = "value" }
+ }
+ { "key2"
+ { "item" = "value2" }
+ }
+
+ let source_7 = "(key value)(key2 value2)"
+ test Xendconfsxp.lns get source_7 =
+ { "key"
+ { "item" = "value" }
+ }
+ { "key2"
+ { "item" = "value2" }
+ }
+
+(* 2 items? It is a key/item pair *)
+
+ test Xendconfsxp.lns get "(key value)" =
+ { "key"
+ { "item" = "value" }
+ }
+
+ test Xendconfsxp.lns get "( key value )" =
+ { "key"
+ { "item" = "value" }
+ }
+
+(* 3 items? Not allowed: the 2nd item must be in ()'s *)
+ test Xendconfsxp.lns get "(key value value2)" = *
+
+(* key/list pairs. *)
+
+(* 0 items -- TODO: implement this. *)
+
+ test Xendconfsxp.lns get "( key ())" = *
+
+(* 1 item *)
+
+ test Xendconfsxp.lns get "(key (listitem1) )" =
+ { "key"
+ { "array"
+ { "item" = "listitem1" }
+ }
+ }
+
+ test Xendconfsxp.lns get "(key ( (foo bar bar bat ) ) )" =
+ { "key"
+ { "array"
+ { "array"
+ { "item" = "foo" }
+ { "item" = "bar" }
+ { "item" = "bar" }
+ { "item" = "bat" }
+ }
+ }
+ }
+
+ test Xendconfsxp.lns get "(key ((foo foo foo (bar bar bar))))" =
+ { "key"
+ { "array"
+ { "array"
+ { "item" = "foo" }
+ { "item" = "foo" }
+ { "item" = "foo" }
+ { "array"
+ { "item" = "bar" }
+ { "item" = "bar" }
+ { "item" = "bar" }
+ }
+ }
+ }
+ }
+
+(* 2 items *)
+
+ test Xendconfsxp.lns get "(xen-api-server (foo bar))" =
+ { "xen-api-server"
+ { "array"
+ { "item" = "foo" }
+ { "item" = "bar" }
+ }
+ }
+
+ test Xendconfsxp.lns get "( key ( value value2 ))" =
+ { "key"
+ { "array"
+ { "item" = "value" }
+ { "item" = "value2" }
+ }
+ }
+
+(* 3 items *)
+
+ test Xendconfsxp.lns get "( key ( value value2 value3 ))" =
+ { "key"
+ { "array"
+ { "item" = "value" }
+ { "item" = "value2" }
+ { "item" = "value3" }
+ }
+ }
+
+(* quotes *)
+ test Xendconfsxp.lns get "(key \"foo\")" = { "key" { "item" = "\"foo\"" } }
+ test Xendconfsxp.lns get "(key \"\")" = { "key" { "item" = "\"\"" } }
+ test Xendconfsxp.lns get "( key \"foo\" )" = { "key" { "item" = "\"foo\"" } }
+ test Xendconfsxp.lns get "( key \"f oo\" )" = { "key" { "item" = "\"f oo\"" } }
+ test Xendconfsxp.lns get "( key \"f oo\" )" = { "key" { "item" = "\"f oo\"" } }
+ test Xendconfsxp.lns get "(foo \"this is \\\"quoted\\\" in quotes\")" =
+ { "foo" { "item" = "\"this is \\\"quoted\\\" in quotes\"" } }
+
+ test Xendconfsxp.lns get "( key \"fo\\'\" )" = { "key" { "item" = "\"fo\\'\""} }
+ test Xendconfsxp.lns get "( key \"fo\\'\" )" = { "key" { "item" = "\"fo\\'\""} }
+ test Xendconfsxp.lns get "( key \"\")" = { "key" { "item" = "\"\"" } }
+
+(* Newlines in strange places *)
+ test Xendconfsxp.lns get "(\nkey\nbar\n)" = { "key" { "item" = "bar" } }
+ test Xendconfsxp.lns get "(\n\n\nkey\n\n\nbar\n\n\n)" = { "key" { "item" = "bar" } }
+ test Xendconfsxp.lns get "(\nkey\nbar\n)" = { "key" { "item" = "bar" } }
+
+(* Comments in strange places *)
+ test Xendconfsxp.lns get "(\nkey #aa\nbar)" =
+ { "key"
+ { "#comment" = "aa" }
+ { "item" = "bar" }
+ }
+
+ test Xendconfsxp.lns get "(\nkey\n#aa\nbar)" =
+ { "key"
+ { "#comment" = "aa" }
+ { "item" = "bar" }
+ }
+
+ test Xendconfsxp.lns get "(\nkey\n #aa\nbar)" =
+ { "key"
+ { "#comment" = "aa" }
+ { "item" = "bar" }
+ }
+
+(* Comments must have at least one space before the # *)
+ test Xendconfsxp.lns get "(\nkey# com\nbar\n)" = *
+ test Xendconfsxp.lns get "(\nkey#aa\nbar)" = *
+
+(* Comments may start a line *)
+ test Xendconfsxp.lns get "(\nkey\n# com\nbar)" =
+ { "key"
+ { "#comment" = "com" }
+ { "item" = "bar" }
+ }
+ test Xendconfsxp.lns get "(\nkey\n#aa\nbar)" =
+ { "key"
+ { "#comment" = "aa" }
+ { "item" = "bar" }
+ }
+
+(* Sub lists *)
+ test Xendconfsxp.lns get "(key ((foo foo) (bar bar)))" =
+ { "key"
+ { "array"
+ { "array"
+ { "item" = "foo" }
+ { "item" = "foo" }
+ }
+ { "array"
+ { "item" = "bar" }
+ { "item" = "bar" }
+ }
+ }
+ }
+
+
+ test Xendconfsxp.lns get "(aaa ((bbb ccc ddd) (eee fff)))" =
+ { "aaa"
+ { "array"
+ { "array"
+ { "item" = "bbb" }
+ { "item" = "ccc" }
+ { "item" = "ddd" }
+ }
+ { "array"
+ { "item" = "eee" }
+ { "item" = "fff" }
+ }
+ }
+ }
+
+(* Comments in strange places *)
+ test Xendconfsxp.lns get "(\nkey\nbar # bb\n)" =
+ { "key"
+ { "item" = "bar" }
+ { "#comment" = "bb" }
+ }
+ test Xendconfsxp.lns get "(\nkey\nbar \n#cc\n)" =
+ { "key" { "item" = "bar" }
+ { "#comment" = "cc" } }
+ test Xendconfsxp.lns get "(\nkey\nbar \n #cc\n)" =
+ { "key" { "item" = "bar" }
+ { "#comment" = "cc" } }
+ test Xendconfsxp.lns get "(\nkey\nbar \n #cc\n )" =
+ { "key" { "item" = "bar" }
+ { "#comment" = "cc" } }
+ test Xendconfsxp.lns get "(foo ((foo foo foo) (unix none)))" =
+ { "foo"
+ { "array"
+ { "array"
+ { "item" = "foo" }
+ { "item" = "foo" }
+ { "item" = "foo" }
+ }
+ { "array"
+ { "item" = "unix" }
+ { "item" = "none" }
+ }
+ }
+ }
+
+ test Xendconfsxp.lns get "(foo ((foo foo 'foo') (unix none)))" =
+ { "foo"
+ { "array"
+ { "array"
+ { "item" = "foo" }
+ { "item" = "foo" }
+ { "item" = "'foo'" }
+ }
+ { "array"
+ { "item" = "unix" }
+ { "item" = "none" }
+ }
+ }
+ }
+
+ test Xendconfsxp.lns get "(xen-api-server ((9363 pam '^localhost$ example\\.com$') (unix none)))" =
+ { "xen-api-server"
+ { "array"
+ { "array"
+ { "item" = "9363" }
+ { "item" = "pam" }
+ { "item" = "'^localhost$ example\.com$'" }
+ }
+ { "array"
+ { "item" = "unix" }
+ { "item" = "none" }
+ }
+ }
+ }
+
+ test Xendconfsxp.lns get "# -*- sh -*-\n#foo\n#bar\n\n\n(foo bar)" =
+ { "#scomment" = " -*- sh -*-" }
+ { "#scomment" = "foo" }
+ { "#scomment" = "bar" }
+ { }
+ { }
+ { "foo"
+ { "item" = "bar" }
+ }
+
+(* Test whitespace before lparen *)
+ test Xendconfsxp.lns get " (network-script network-bridge)\n#\n" =
+ { "network-script" { "item" = "network-bridge" } }
+ { }
+ { "#scomment" = "" }
+
+(* Bugs: *)
+
+(* Note: It works if you change the last \n to \t *)
+ test Xendconfsxp.lns get "(\nkey\n# com\nbar\n)" = *
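+
+(* Hypothetical companion test (not in the original suite), illustrating
+   the note above: with the final \n changed to \t, the parse succeeds *)
+ test Xendconfsxp.lns get "(\nkey\n# com\nbar\t)" =
+ { "key"
+ { "#comment" = "com" }
+ { "item" = "bar" }
+ }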
--- /dev/null
+module Test_xinetd =
+
+let eol_ws = "defaults \t \n{\n enabled = cvs echo \n}\n\n"
+
+let cvs = "# default: off
+# description: The CVS service can record the history of your source
+# files. CVS stores all the versions of a file in a single
+# file in a clever way that only stores the differences
+# between versions.
+service cvspserver
+{
+ disable = yes
+ port = 2401
+ socket_type = stream
+ protocol = tcp
+ wait = no
+ user = root
+ passenv = PATH
+ server = /usr/bin/cvs
+ env -= HOME=/var/cvs
+ server_args = -f --allow-root=/var/cvs pserver
+# bind = 127.0.0.1
+ log_on_failure += HOST
+ FLAGS = IPv6 IPv4
+}
+"
+
+let lst_add = "service svc_add
+{
+ log_on_failure += HOST
+}
+"
+
+test Xinetd.lns get eol_ws =
+ { "defaults" { "enabled"
+ { "value" = "cvs" }
+ { "value" = "echo" } } }
+ {}
+
+test Xinetd.lns put eol_ws after rm "/defaults/enabled/value[last()]" =
+ "defaults \t \n{\n enabled = cvs \n}\n\n"
+
+test Xinetd.lns get cvs =
+ { "#comment" = "default: off" }
+ { "#comment" = "description: The CVS service can record the history of your source" }
+ { "#comment" = "files. CVS stores all the versions of a file in a single" }
+ { "#comment" = "file in a clever way that only stores the differences" }
+ { "#comment" = "between versions." }
+ { "service" = "cvspserver"
+ { "disable" = "yes" }
+ { "port" = "2401" }
+ { "socket_type" = "stream" }
+ { "protocol" = "tcp" }
+ { "wait" = "no" }
+ { "user" = "root" }
+ { "passenv" { "value" = "PATH" } }
+ { "server" = "/usr/bin/cvs" }
+ { "env" { "del" } { "value" = "HOME=/var/cvs" } }
+ { "server_args"
+ { "value" = "-f" }
+ { "value" = "--allow-root=/var/cvs" }
+ { "value" = "pserver" } }
+ { "#comment" = "bind = 127.0.0.1" }
+ { "log_on_failure" { "add" } { "value" = "HOST" } }
+ { "FLAGS"
+ { "value" = "IPv6" }
+ { "value" = "IPv4" } } }
+
+(* Switch the '+=' to a simple '=' *)
+test Xinetd.lns put lst_add after rm "/service/log_on_failure/add" =
+ "service svc_add\n{\n log_on_failure = HOST\n}\n"
+
+test Xinetd.lns put "" after
+ set "/service" "svc";
+ set "/service/instances" "UNLIMITED" = "service svc
+{
+\tinstances = UNLIMITED
+}
+"
+
+(* Support missing values in lists *)
+test Xinetd.lns get "service check_mk\n{\n log_on_success =\n server_args=\n}\n" =
+ { "service" = "check_mk"
+ { "log_on_success" }
+ { "server_args" }
+ }
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+(*
+Module: Test_Xml
+ Provides unit tests and examples for the <Xml> lens.
+*)
+
+module Test_Xml =
+
+(* View: knode
+ A simple flag function
+
+ Parameters:
+ r:regexp - the pattern for the flag
+*)
+let knode (r:regexp) = [ key r ]
+
+(************************************************************************
+ * Group: Utilities lens
+ *************************************************************************)
+(*
+let _ = print_regexp(lens_ctype(Xml.text))
+let _ = print_endline ""
+*)
+
+(* Group: Comments *)
+
+(* Test: Xml.comment
+ Comments get mapped into "#comment" nodes. *)
+test Xml.comment get
+ "<!-- declarations for <head> & <body> -->" =
+
+ { "#comment" = " declarations for <head> & <body> " }
+
+(* Test: Xml.comment
+ A comment may not contain "--", so one ending in "--->" is rejected. *)
+test Xml.comment get
+ "<!-- B+, B, or B--->" = *
+
+(* Group: Prolog and declarations *)
+
+(* Test: Xml.prolog
+ The XML prolog tag is mapped into a "#declaration" node,
+ which contains an "#attribute" node holding the attributes of the tag. *)
+test Xml.prolog get
+ "<?xml version=\"1.0\" encoding=\"UTF-8\"?>" =
+
+ { "#declaration"
+ { "#attribute"
+ { "version" = "1.0" }
+ { "encoding" = "UTF-8" }
+ }
+ }
+
+(* Test: Xml.decl_def_item
+ !ELEMENT declaration tags are mapped into "!ELEMENT" nodes.
+ The associated declaration attribute is mapped into a "#decl" subnode. *)
+test Xml.decl_def_item get
+ "<!ELEMENT greeting (#PCDATA)>" =
+
+ { "!ELEMENT" = "greeting"
+ { "#decl" = "(#PCDATA)" }
+ }
+
+(* Test: Xml.decl_def_item
+ !ENTITY declaration tags are mapped into "!ENTITY" nodes.
+ The associated declaration attribute is mapped into a "#decl" subnode. *)
+test Xml.decl_def_item get
+ "<!ENTITY da \"
\">" =
+
+ { "!ENTITY" = "da"
+ { "#decl" = "
" }
+ }
+
+(* Test: Xml.doctype
+ !DOCTYPE tags are mapped into "!DOCTYPE" nodes.
+ The associated system attribute is mapped into a "SYSTEM" subnode. *)
+test Xml.doctype get
+ "<!DOCTYPE greeting:foo SYSTEM \"hello.dtd\">" =
+
+ { "!DOCTYPE" = "greeting:foo"
+ { "SYSTEM" = "hello.dtd" }
+ }
+
+(* Test: Xml.doctype
+ This is an example of a !DOCTYPE tag with !ELEMENT children tags. *)
+test Xml.doctype get "<!DOCTYPE foo [
+<!ELEMENT bar (#PCDATA)>
+<!ELEMENT baz (bar)* >
+]>" =
+
+ { "!DOCTYPE" = "foo"
+ { "!ELEMENT" = "bar"
+ { "#decl" = "(#PCDATA)" }
+ }
+ { "!ELEMENT" = "baz"
+ { "#decl" = "(bar)*" }
+ }
+ }
+
+(* Group: Attributes *)
+
+(* Variable: att_def1 *)
+let att_def1 = "<!ATTLIST termdef
+id ID #REQUIRED
+name CDATA #IMPLIED>"
+(* Variable: att_def2 *)
+let att_def2 = "<!ATTLIST list
+type (bullets|ordered|glossary) \"ordered\">"
+(* Variable: att_def3 *)
+let att_def3 = "<!ATTLIST form
+method CDATA #FIXED \"POST\">"
+
+(* Test: Xml.att_list_def *)
+test Xml.att_list_def get
+ att_def1 =
+
+ { "!ATTLIST" = "termdef"
+ { "1"
+ { "#name" = "id" }
+ { "#type" = "ID" }
+ { "#REQUIRED" }
+ }
+ { "2"
+ { "#name" = "name" }
+ { "#type" = "CDATA" }
+ { "#IMPLIED" }
+ }
+ }
+
+(* Test: Xml.att_list_def *)
+test Xml.att_list_def get
+ att_def2 =
+
+ { "!ATTLIST" = "list"
+ { "1"
+ { "#name" = "type" }
+ { "#type" = "(bullets|ordered|glossary)" }
+ { "#FIXED" = "ordered" }
+ }
+ }
+
+(* Test: Xml.att_list_def *)
+test Xml.att_list_def get
+ att_def3 =
+
+ { "!ATTLIST" = "form"
+ { "1"
+ { "#name" = "method" }
+ { "#type" = "CDATA" }
+ { "#FIXED" = "POST" }
+ }
+ }
+
+(* Test: Xml.notation_def *)
+test Xml.notation_def get
+ "<!NOTATION not3 SYSTEM \"\">" =
+
+ { "!NOTATION" = "not3"
+ { "SYSTEM" = "" }
+ }
+
+(* Variable: cdata1 *)
+let cdata1 = "<![CDATA[testing]]>"
+(* Test: Xml.cdata *)
+test Xml.cdata get cdata1 = { "#CDATA" = "testing" }
+
+(* Variable: attr1 *)
+let attr1 = " attr1=\"value1\" attr2=\"value2\""
+(* Variable: attr2 *)
+let attr2 = " attr2=\"foo\""
+(* Test: Xml.attributes *)
+test Xml.attributes get attr1 =
+ { "#attribute"
+ { "attr1" = "value1" }
+ { "attr2" = "value2" }
+ }
+
+(* Test: Xml.attributes *)
+test Xml.attributes get " refs=\"A1\nA2 A3\"" =
+ { "#attribute"
+ { "refs" = "A1\nA2 A3" }
+ }
+
+(* Test: Xml.attributes *)
+test Xml.attributes put attr1 after rm "/#attribute[1]";
+ set "/#attribute/attr2" "foo" = attr2
+
+(* test quoting *)
+(* well-formed values *)
+test Xml.attributes get " attr1=\"value1\"" = { "#attribute" { "attr1" = "value1" } }
+test Xml.attributes get " attr1='value1'" = { "#attribute" { "attr1" = "value1" } }
+test Xml.attributes get " attr1='va\"lue1'" = { "#attribute" { "attr1" = "va\"lue1" } }
+test Xml.attributes get " attr1=\"va'lue1\"" = { "#attribute" { "attr1" = "va'lue1" } }
+
+(* illegal as per the XML standard *)
+test Xml.attributes get " attr1=\"va\"lue1\"" = *
+test Xml.attributes get " attr1='va'lue1'" = *
+
+(* malformed values *)
+test Xml.attributes get " attr1=\"value1'" = *
+test Xml.attributes get " attr1='value1\"" = *
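+
+(* hedged extra check (not in the original suite): an entirely unquoted
+   value should also fail to parse *)
+test Xml.attributes get " attr1=value1" = *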
+
+(* Group: empty *)
+
+(* Variable: empty1 *)
+let empty1 = "<a/>"
+(* Variable: empty2 *)
+let empty2 = "<a foo=\"bar\"/>"
+(* Variable: empty3 *)
+let empty3 = "<a foo=\"bar\"></a>\n"
+(* Variable: empty4 *)
+let empty4 = "<a foo=\"bar\" far=\"baz\"/>"
+(* Test: Xml.empty_element *)
+test Xml.empty_element get empty1 = { "a" = "#empty" }
+(* Test: Xml.empty_element *)
+test Xml.empty_element get empty2 =
+ { "a" = "#empty" { "#attribute" { "foo" = "bar"} } }
+
+(* Test: Xml.empty_element *)
+test Xml.empty_element put empty1 after set "/a/#attribute/foo" "bar" = empty2
+
+(* Test: Xml.empty_element
+ The attribute node must be the first child of the element *)
+test Xml.empty_element put empty1 after set "/a/#attribute/foo" "bar";
+ set "/a/#attribute/far" "baz" = empty4
+
+(* Test: Xml.content *)
+test Xml.content put "<a><b/></a>" after clear "/a/b" = "<a><b></b>\n</a>"
+
+
+(* Group: Full lens *)
+
+(* Test: Xml.lns *)
+test Xml.lns put "<a></a >" after set "/a/#text[1]" "foo";
+ set "/a/#text[2]" "bar" = "<a>foobar</a >"
+
+(* Test: Xml.lns *)
+test Xml.lns get "<?xml version=\"1.0\"?>
+<!DOCTYPE catalog PUBLIC \"-//OASIS//DTD XML Catalogs V1.0//EN\"
+ \"file:///usr/share/xml/schema/xml-core/catalog.dtd\">
+ <doc/>" =
+ { "#declaration"
+ { "#attribute"
+ { "version" = "1.0" }
+ }
+ }
+ { "!DOCTYPE" = "catalog"
+ { "PUBLIC"
+ { "#literal" = "-//OASIS//DTD XML Catalogs V1.0//EN" }
+ { "#literal" = "file:///usr/share/xml/schema/xml-core/catalog.dtd" }
+ }
+ }
+ { "doc" = "#empty" }
+
+(* Test: Xml.lns *)
+test Xml.lns get "<oor:component-data xmlns:oor=\"http://openoffice.org/2001/registry\"/>
+" =
+ { "oor:component-data" = "#empty"
+ { "#attribute"
+ { "xmlns:oor" = "http://openoffice.org/2001/registry" }
+ }
+ }
+
+(* Variable: input1 *)
+let input1 = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>
+<html>\r
+ <head>
+ <title>Wiki</title>
+ </head>
+ <body>
+ <h1>Augeas</h1>
+ <p class=\"main\">Augeas is now able to parse XML files!</p>
+ <ul>
+ <li>Translate from XML to a tree syntax</li>
+ <li>Translate from the tree back to XML</li> <!-- this is some comment -->
+ <li>this</li>
+ </ul>
+ </body>
+</html>
+"
+
+(* Test: Xml.doc
+ Test <input1> with <Xml.doc> *)
+test Xml.doc get input1 =
+ { "#declaration"
+ { "#attribute"
+ { "version" = "1.0" }
+ { "encoding" = "UTF-8" }
+ }
+ }
+ { "html"
+ { "#text" = "\r\n " }
+ { "head"
+ { "#text" = "\n " }
+ { "title"
+ { "#text" = "Wiki" }
+ }
+ { "#text" = " " }
+ }
+ { "#text" = " " }
+ { "body"
+ { "#text" = "
+ " }
+ { "h1"
+ { "#text" = "Augeas" }
+ }
+ { "#text" = " " }
+ { "p"
+ { "#attribute"
+ { "class" = "main" }
+ }
+ { "#text" = "Augeas is now able to parse XML files!" }
+ }
+ { "#text" = " " }
+ { "ul"
+ { "#text" = "\n " }
+ { "li"
+ { "#text" = "Translate from XML to a tree syntax" }
+ }
+ { "#text" = " " }
+ { "li"
+ { "#text" = "Translate from the tree back to XML" }
+ }
+ { "#text" = " " }
+ { "#comment" = " this is some comment " }
+ { "#text" = "
+ " }
+ { "li"
+ { "#text" = "this" }
+ }
+ { "#text" = " " }
+ }
+ { "#text" = " " }
+ }
+ }
+
+(* Test: Xml.doc
+ Modify <input1> with <Xml.doc> *)
+test Xml.doc put input1 after rm "/html/body" =
+"<?xml version=\"1.0\" encoding=\"UTF-8\"?>
+<html>\r
+ <head>
+ <title>Wiki</title>
+ </head>
+ </html>
+"
+
+
+(* Variable: ul1 *)
+let ul1 = "
+<ul>
+ <li>test1</li>
+ <li>test2</li>
+ <li>test3</li>
+ <li>test4</li>
+</ul>
+"
+
+test Xml.doc get ul1 =
+ { "ul"
+ { "#text" = "
+ " }
+ { "li"
+ { "#text" = "test1" }
+ }
+ { "#text" = " " }
+ { "li"
+ { "#text" = "test2" }
+ }
+ { "#text" = " " }
+ { "li"
+ { "#text" = "test3" }
+ }
+ { "#text" = " " }
+ { "li"
+ { "#text" = "test4" }
+ }
+ }
+
+
+test Xml.doc put ul1 after set "/ul/li[3]/#text" "bidon" = "
+<ul>
+ <li>test1</li>
+ <li>test2</li>
+ <li>bidon</li>
+ <li>test4</li>
+</ul>
+"
+
+test Xml.doc put ul1 after rm "/ul/li[2]" = "
+<ul>
+ <li>test1</li>
+ <li>test3</li>
+ <li>test4</li>
+</ul>
+"
+
+
+(* #text nodes don't move when inserting a node; the result depends on where the new node is added *)
+test Xml.doc put ul1 after insb "a" "/ul/li[2]" = "
+<ul>
+ <li>test1</li>
+ <a></a>
+<li>test2</li>
+ <li>test3</li>
+ <li>test4</li>
+</ul>
+"
+
+test Xml.doc put ul1 after insa "a" "/ul/li[1]" = "
+<ul>
+ <li>test1</li>
+<a></a>
+ <li>test2</li>
+ <li>test3</li>
+ <li>test4</li>
+</ul>
+"
+
+(* Attributes must be added before text nodes *)
+test Xml.doc put ul1 after insb "#attribute" "/ul/li[2]/#text";
+ set "/ul/li[2]/#attribute/bidon" "gazou";
+ set "/ul/li[2]/#attribute/foo" "bar" = "
+<ul>
+ <li>test1</li>
+ <li bidon=\"gazou\" foo=\"bar\">test2</li>
+ <li>test3</li>
+ <li>test4</li>
+</ul>
+"
+
+(* if an empty element were allowed as the root, this test would trigger an error *)
+test Xml.lns get "<doc>
+<a><c/><b><c/></b><c/><c/><a></a></a>
+</doc>" =
+ { "doc"
+ { "#text" = "\n" }
+ { "a"
+ { "c" = "#empty" }
+ { "b"
+ { "c" = "#empty" }
+ }
+ { "c" = "#empty" }
+ { "c" = "#empty" }
+ { "a" }
+ }
+ }
+
+let p01pass2 = "<?PI before document element?>
+<!-- comment after document element-->
+<?PI before document element?>
+<!-- comment after document element-->
+<?PI before document element?>
+<!-- comment after document element-->
+<?PI before document element?>
+<!DOCTYPE doc
+[
+<!ELEMENT doc ANY>
+<!ELEMENT a ANY>
+<!ELEMENT b ANY>
+<!ELEMENT c ANY>
+]>
+<doc>
+<a><b><c/></b></a>
+</doc>
+<!-- comment after document element-->
+<?PI after document element?>
+<!-- comment after document element-->
+<?PI after document element?>
+<!-- comment after document element-->
+<?PI after document element?>
+"
+
+test Xml.lns get p01pass2 =
+ { "#pi"
+ { "#target" = "PI" }
+ { "#instruction" = "before document element" }
+ }
+ { "#comment" = " comment after document element" }
+ { "#pi"
+ { "#target" = "PI" }
+ { "#instruction" = "before document element" }
+ }
+ { "#comment" = " comment after document element" }
+ { "#pi"
+ { "#target" = "PI" }
+ { "#instruction" = "before document element" }
+ }
+ { "#comment" = " comment after document element" }
+ { "#pi"
+ { "#target" = "PI" }
+ { "#instruction" = "before document element" }
+ }
+ { "!DOCTYPE" = "doc"
+ { "!ELEMENT" = "doc"
+ { "#decl" = "ANY" }
+ }
+ { "!ELEMENT" = "a"
+ { "#decl" = "ANY" }
+ }
+ { "!ELEMENT" = "b"
+ { "#decl" = "ANY" }
+ }
+ { "!ELEMENT" = "c"
+ { "#decl" = "ANY" }
+ }
+ }
+ { "doc"
+ { "#text" = "
+" }
+ { "a"
+ { "b"
+ { "c" = "#empty" }
+ }
+ }
+ }
+ { "#comment" = " comment after document element" }
+ { "#pi"
+ { "#target" = "PI" }
+ { "#instruction" = "after document element" }
+ }
+ { "#comment" = " comment after document element" }
+ { "#pi"
+ { "#target" = "PI" }
+ { "#instruction" = "after document element" }
+ }
+ { "#comment" = " comment after document element" }
+ { "#pi"
+ { "#target" = "PI" }
+ { "#instruction" = "after document element" }
+ }
+
+
+(* various valid Name constructions *)
+test Xml.lns get "<doc>\n<A:._-0/>\n<::._-0/>\n<_:._-0/>\n<A/>\n<_/>\n<:/>\n</doc>" =
+ { "doc"
+ { "#text" = "\n" }
+ { "A:._-0" = "#empty" }
+ { "::._-0" = "#empty" }
+ { "_:._-0" = "#empty" }
+ { "A" = "#empty" }
+ { "_" = "#empty" }
+ { ":" = "#empty" }
+ }
+
+test Xml.lns get "<doc>
+<abcdefghijklmnopqrstuvwxyz/>
+<ABCDEFGHIJKLMNOPQRSTUVWXYZ/>
+<A01234567890/>
+<A.-:/>
+</doc>" =
+ { "doc"
+ { "#text" = "\n" }
+ { "abcdefghijklmnopqrstuvwxyz" = "#empty" }
+ { "ABCDEFGHIJKLMNOPQRSTUVWXYZ" = "#empty" }
+ { "A01234567890" = "#empty" }
+ { "A.-:" = "#empty" }
+ }
+
+
+let p06fail1 = "<!--non-validating processors may pass this instance because they don't check the IDREFS attribute type-->
+<!DOCTYPE doc
+[
+<!ELEMENT doc (a|refs)*>
+<!ELEMENT a EMPTY>
+<!ELEMENT refs EMPTY>
+<!ATTLIST refs refs IDREFS #REQUIRED>
+<!ATTLIST a id ID #REQUIRED>
+]>
+<doc>
+<a id=\"A1\"/><a id=\"A2\"/><a id=\"A3\"/>
+<refs refs=\"\"/>
+</doc>"
+
+(* we accept this input because we do not verify XML references *)
+test Xml.lns get p06fail1 =
+ { "#comment" = "non-validating processors may pass this instance because they don't check the IDREFS attribute type" }
+ { "!DOCTYPE" = "doc"
+ { "!ELEMENT" = "doc"
+ { "#decl" = "(a|refs)*" }
+ }
+ { "!ELEMENT" = "a"
+ { "#decl" = "EMPTY" }
+ }
+ { "!ELEMENT" = "refs"
+ { "#decl" = "EMPTY" }
+ }
+ { "!ATTLIST" = "refs"
+ { "1"
+ { "#name" = "refs" }
+ { "#type" = "IDREFS" }
+ { "#REQUIRED" }
+ }
+ }
+ { "!ATTLIST" = "a"
+ { "1"
+ { "#name" = "id" }
+ { "#type" = "ID" }
+ { "#REQUIRED" }
+ }
+ }
+ }
+ { "doc"
+ { "#text" = "
+" }
+ { "a" = "#empty"
+ { "#attribute"
+ { "id" = "A1" }
+ }
+ }
+ { "a" = "#empty"
+ { "#attribute"
+ { "id" = "A2" }
+ }
+ }
+ { "a" = "#empty"
+ { "#attribute"
+ { "id" = "A3" }
+ }
+ }
+ { "refs" = "#empty"
+ { "#attribute"
+ { "refs" = "" }
+ }
+ }
+ }
+
+(* we accept double quotes, but not single quotes, because of the resulting ambiguity *)
+let p10pass1_1 = "<doc><A a=\"asdf>'">\nasdf\n ?>%\"/></doc>"
+let p10pass1_2 = "<doc><A a='\"\">'"'/></doc>"
+
+test Xml.lns get p10pass1_1 =
+ { "doc"
+ { "A" = "#empty"
+ { "#attribute"
+ { "a" = "asdf>'">\nasdf\n ?>%" }
+ }
+ }
+ }
+
+test Xml.lns get p10pass1_2 =
+ { "doc"
+ { "A" = "#empty"
+ { "#attribute"
+ { "a" = "\"\">'"" }
+ }
+ }
+ }
+
+(* here again, the test excludes single quotes *)
+let p11pass1 = "<!--Inability to resolve a notation should not be reported as an error-->
+<!DOCTYPE doc
+[
+<!ELEMENT doc EMPTY>
+<!NOTATION not1 SYSTEM \"a%a&b�<!ELEMENT<!--<?</>?>/\''\">
+<!NOTATION not3 SYSTEM \"\">
+]>
+<doc></doc>"
+
+test Xml.lns get p11pass1 =
+ { "#comment" = "Inability to resolve a notation should not be reported as an error" }
+ { "!DOCTYPE" = "doc"
+ { "!ELEMENT" = "doc"
+ { "#decl" = "EMPTY" }
+ }
+ { "!NOTATION" = "not1"
+ { "SYSTEM" = "a%a&b�<!ELEMENT<!--<?</>?>/\''" }
+ }
+ { "!NOTATION" = "not3"
+ { "SYSTEM" = "" }
+ }
+ }
+ { "doc" }
+
+test Xml.lns get "<doc>a%b%</doc></doc>]]<&</doc>" =
+ { "doc"
+ { "#text" = "a%b%</doc></doc>]]<&" }
+ }
+
+let p15pass1 = "<!--a
+<!DOCTYPE
+<?-
+]]>-<[ CDATA [
+\"- -'-
+-<doc>-->
+<!---->
+<doc></doc>"
+
+test Xml.lns get p15pass1 =
+ { "#comment" = "a
+<!DOCTYPE
+<?-
+]]>-<[ CDATA [
+\"- -'-
+-<doc>" }
+ { "#comment" = "" }
+ { "doc" }
+
+let p22pass3 = "<?xml version=\"1.0\"?>
+<!--comment--> <?pi some instruction ?>
+<doc><?pi?></doc>"
+
+test Xml.lns get p22pass3 =
+ { "#declaration"
+ { "#attribute"
+ { "version" = "1.0" }
+ }
+ }
+ { "#comment" = "comment" }
+ { "#pi"
+ { "#target" = "pi" }
+ { "#instruction" = "some instruction" }
+ }
+ { "doc"
+ { "#pi"
+ { "#target" = "pi" }
+ }
+ }
+
+let p25pass2 = "<?xml version
+
+
+=
+
+
+\"1.0\"?>
+<doc></doc>"
+
+test Xml.lns get p25pass2 =
+ { "#declaration"
+ { "#attribute"
+ { "version" = "1.0" }
+ }
+ }
+ { "doc" }
+
+
+test Xml.lns get "<!DOCTYPE
+
+doc
+
+[
+<!ELEMENT doc EMPTY>
+]>
+<doc></doc>" =
+ { "!DOCTYPE" = "doc"
+ { "!ELEMENT" = "doc"
+ { "#decl" = "EMPTY" }
+ }
+ }
+ { "doc" }
+
+test Xml.lns get "<doc></doc \n>" = { "doc" }
+
+test Xml.lns get "<a><doc att=\"val\" \natt2=\"val2\" att3=\"val3\"/></a>" =
+ { "a"
+ { "doc" = "#empty"
+ { "#attribute"
+ { "att" = "val" }
+ { "att2" = "val2" }
+ { "att3" = "val3" }
+ }
+ }
+ }
+
+test Xml.lns get "<doc/>" = { "doc" = "#empty" }
+
+test Xml.lns get "<a><![CDATA[Thu, 13 Feb 2014 12:22:35 +0000]]></a>" =
+ { "a"
+ { "#CDATA" = "Thu, 13 Feb 2014 12:22:35 +0000" } }
+
+(* failure tests *)
+(* only one document element *)
+test Xml.lns get "<doc></doc><bad/>" = *
+
+(* document element must be complete *)
+test Xml.lns get "<doc>" = *
+
+(* accept empty document *)
+test Xml.lns get "\n" = {}
+
+(* malformed element *)
+test Xml.lns get "<a><A@/></a>" = *
+
+(* a Name cannot start with a digit *)
+test Xml.lns get "<a><0A/></a>" = *
+
+(* no space before "CDATA" *)
+test Xml.lns get "<doc><![ CDATA[a]]></doc>" = *
+
+(* no space after "CDATA" *)
+test Xml.lns get "<doc><![CDATA [a]]></doc>" = *
+
+(* FIXME: CDATA sections can't nest *)
+test Xml.lns get "<doc>
+<![CDATA[
+<![CDATA[XML doesn't allow CDATA sections to nest]]>
+]]>
+</doc>" =
+ { "doc"
+ { "#text" = "\n" }
+ { "#CDATA" = "\n<![CDATA[XML doesn't allow CDATA sections to nest" }
+ { "#text" = "\n]]" }
+ { "#text" = ">\n" } }
+
+(* Comment is illegal in VersionInfo *)
+test Xml.lns get "<?xml version <!--bad comment--> =\"1.0\"?>
+<doc></doc>" = *
+
+(* only declarations are allowed in the DTD *)
+test Xml.lns get "<!DOCTYPE doc [
+<!ELEMENT doc EMPTY>
+<doc></doc>
+]>" = *
+
+(* we do not support external entities *)
+test Xml.lns get "<!DOCTYPE doc [
+<!ENTITY % eldecl \"<!ELEMENT doc EMPTY>\">
+%eldecl;
+]>
+<doc></doc>" = *
+
+(* Escape character in attributes *)
+test Xml.lns get "<a password=\"my\!pass\" />" =
+ { "a" = "#empty"
+ { "#attribute" { "password" = "my\!pass" } } }
+
+test Xml.lns put ""
+after set "/a" "#empty" = "<a/>\n"
+
+(* Issue #142 *)
+test Xml.entity_def get
+ "<!ENTITY open-hatch SYSTEM \"http://examplecom/OpenHatch.xml\">" =
+ { "!ENTITY" = "open-hatch"
+ { "SYSTEM"
+ { "#systemliteral" = "http://examplecom/OpenHatch.xml" }
+ } }
+
+test Xml.entity_def get
+ "<!ENTITY open-hatch PUBLIC \"-//Textuality//TEXT Standard open-hatch boilerplate//EN\" \"http://www.textuality.com/boilerplate/OpenHatch.xml\">" =
+ { "!ENTITY" = "open-hatch"
+ { "PUBLIC"
+ { "#pubidliteral" =
+ "-//Textuality//TEXT Standard open-hatch boilerplate//EN" }
+ { "#systemliteral" =
+ "http://www.textuality.com/boilerplate/OpenHatch.xml" } } }
+
+let dt_with_entities =
+"<!DOCTYPE server-xml [
+ <!ENTITY sys-ent SYSTEM \"sys-file.xml\">
+ <!ENTITY pub-ent PUBLIC \"-//something public//TEXT\"
+ \"pub-file.xml\">
+ ]>"
+
+test Xml.doctype get dt_with_entities =
+ { "!DOCTYPE" = "server-xml"
+ { "!ENTITY" = "sys-ent"
+ { "SYSTEM"
+ { "#systemliteral" = "sys-file.xml" }
+ }
+ }
+ { "!ENTITY" = "pub-ent"
+ { "PUBLIC"
+ { "#pubidliteral" = "-//something public//TEXT" }
+ { "#systemliteral" = "pub-file.xml" }
+ }
+ }
+ }
+
+test Xml.doctype put dt_with_entities after
+ rm "/\!DOCTYPE/\!ENTITY[2]";
+ set "/\!DOCTYPE/\!ENTITY[. = \"sys-ent\"]/SYSTEM/#systemliteral"
+ "other-file.xml"
+ =
+"<!DOCTYPE server-xml [
+ <!ENTITY sys-ent SYSTEM \"other-file.xml\">
+ ]>"
+
+test Xml.lns get (dt_with_entities . "<body></body>") =
+ { "!DOCTYPE" = "server-xml"
+ { "!ENTITY" = "sys-ent"
+ { "SYSTEM"
+ { "#systemliteral" = "sys-file.xml" }
+ }
+ }
+ { "!ENTITY" = "pub-ent"
+ { "PUBLIC"
+ { "#pubidliteral" = "-//something public//TEXT" }
+ { "#systemliteral" = "pub-file.xml" }
+ }
+ }
+ }
+ { "body" }
+
+test Xml.lns put "<?xml version=\"1.0\"?>
+<body>
+</body>"
+ after
+ insa "!DOCTYPE" "#declaration";
+ set "\\!DOCTYPE" "Server";
+ set "\\!DOCTYPE/\\!ENTITY" "resourcesFile";
+ set "\\!DOCTYPE/\\!ENTITY/SYSTEM/#systemliteral" "data.xml"
+ =
+"<?xml version=\"1.0\"?><!DOCTYPE Server[
+<!ENTITY resourcesFile SYSTEM \"data.xml\">]>
+<body>\n</body>"
--- /dev/null
+(* Tests for the Xorg module *)
+
+module Test_xorg =
+
+ let conf = "
+# xorg.conf
+
+Section \"ServerLayout\"
+ Identifier \"single head configuration\"
+ Screen 0 \"Screen0\" 0 0
+ InputDevice \"Generic Keyboard\" \"CoreKeyboard\"
+EndSection
+
+Section \"InputDevice\"
+ Identifier \"Generic Keyboard\"
+ # that's a driver
+ Driver \"kbd\"
+ Option \"XkbOptions\" \"lv3:ralt_switch\"
+EndSection
+
+Section \"Device\"
+ Identifier \"Configured Video Device\"
+ Option \"MonitorLayout\" \"LVDS,VGA\"
+ VideoRam 229376
+ Option \"NoAccel\"
+ Option \"fbdev\" \"\"
+ Screen 0
+EndSection
+
+Section \"Screen\"
+ Identifier \"Screen0\"
+ Device \"Configured Video Device\"
+ DefaultDepth 24
+ SubSection \"Display\"
+ Viewport 0 0
+ Depth 24
+ Modes \"1280x1024\" \"1280x960\" \"1280x800\"
+ EndSubSection
+EndSection
+
+Section \"Module\"
+ SubSection \"extmod\"
+ Option \"omit XFree86-DGA\"
+ EndSubSection
+EndSection
+"
+
+ test Xorg.lns get conf =
+ { }
+ { "#comment" = "xorg.conf" }
+ { }
+ { "ServerLayout"
+ { "Identifier" = "single head configuration" }
+ { "Screen" = "Screen0"
+ { "num" = "0" }
+ { "position" = "0 0" } }
+ { "InputDevice" = "Generic Keyboard"
+ { "option" = "CoreKeyboard" } } }
+ { }
+ { "InputDevice"
+ { "Identifier" = "Generic Keyboard" }
+ { "#comment" = "that's a driver" }
+ { "Driver" = "kbd" }
+ { "Option" = "XkbOptions"
+ { "value" = "lv3:ralt_switch" } } }
+ { }
+ { "Device"
+ { "Identifier" = "Configured Video Device" }
+ { "Option" = "MonitorLayout"
+ { "value" = "LVDS,VGA" } }
+ { "VideoRam" = "229376" }
+ { "Option" = "NoAccel" }
+ { "Option" = "fbdev"
+ { "value" = "" } }
+ { "Screen"
+ { "num" = "0" } } }
+ { }
+ { "Screen"
+ { "Identifier" = "Screen0" }
+ { "Device" = "Configured Video Device" }
+ { "DefaultDepth" = "24" }
+ { "Display"
+ { "ViewPort"
+ { "x" = "0" }
+ { "y" = "0" } }
+ { "Depth" = "24" }
+ { "Modes"
+ { "mode" = "1280x1024" }
+ { "mode" = "1280x960" }
+ { "mode" = "1280x800" } } } }
+ { }
+ { "Module"
+ { "extmod"
+ { "Option" = "omit XFree86-DGA" } } }
--- /dev/null
+module Test_xymon =
+
+let conf = "
+#atest comment
+
+title test title
+page page1 'This is a test page'
+1.1.1.1 testhost.localdomain # test1 test2 http:443 ldaps=testhost.localdomain http://testhost.localdomain
+2.2.2.2 testhost2.local.domain # COMMENT:stuff apache=wow
+#test comment
+
+page newpage
+1.1.1.1 testhost.localdomain # test1 test2 http:443 ldaps=testhost.localdomain http://testhost.localdomain
+2.2.2.2 testhost2.local.domain # COMMENT:stuff apache=wow
+
+title test title
+group group1
+3.3.3.3 host1 #
+4.4.4.4 host2 #
+
+group-sorted group2
+5.5.5.5 host3 # conn
+6.6.6.6 host4 # ssh
+
+subparent page1 page2 This is after page 1
+10.0.0.1 router1.loni.org #
+10.0.0.2 sw1.localdomain #
+
+"
+test Xymon.lns get conf =
+ { }
+ { "#comment" = "atest comment" }
+ { }
+ { "title" = "test title" }
+ { "page" = "page1"
+ { "pagetitle" = "'This is a test page'" }
+ { "host"
+ { "ip" = "1.1.1.1" }
+ { "fqdn" = "testhost.localdomain" }
+ { "tag" = "test1" }
+ { "tag" = "test2" }
+ { "tag" = "http:443" }
+ { "tag" = "ldaps=testhost.localdomain" }
+ { "tag" = "http://testhost.localdomain" }
+ }
+ { "host"
+ { "ip" = "2.2.2.2" }
+ { "fqdn" = "testhost2.local.domain" }
+ { "tag" = "COMMENT:stuff" }
+ { "tag" = "apache=wow" }
+ }
+ { "#comment" = "test comment" }
+ { }
+ }
+ { "page" = "newpage"
+ { "host"
+ { "ip" = "1.1.1.1" }
+ { "fqdn" = "testhost.localdomain" }
+ { "tag" = "test1" }
+ { "tag" = "test2" }
+ { "tag" = "http:443" }
+ { "tag" = "ldaps=testhost.localdomain" }
+ { "tag" = "http://testhost.localdomain" }
+ }
+ { "host"
+ { "ip" = "2.2.2.2" }
+ { "fqdn" = "testhost2.local.domain" }
+ { "tag" = "COMMENT:stuff" }
+ { "tag" = "apache=wow" }
+ }
+ { }
+ { "title" = "test title" }
+ { "group" = "group1"
+ { "host"
+ { "ip" = "3.3.3.3" }
+ { "fqdn" = "host1" }
+ }
+ { "host"
+ { "ip" = "4.4.4.4" }
+ { "fqdn" = "host2" }
+ }
+ { }
+ }
+ { "group-sorted" = "group2"
+ { "host"
+ { "ip" = "5.5.5.5" }
+ { "fqdn" = "host3" }
+ { "tag" = "conn" }
+ }
+ { "host"
+ { "ip" = "6.6.6.6" }
+ { "fqdn" = "host4" }
+ { "tag" = "ssh" }
+ }
+ { }
+ }
+ }
+ { "subparent" = "page2"
+ { "parent" = "page1" }
+ { "pagetitle" = "This is after page 1" }
+ { "host"
+ { "ip" = "10.0.0.1" }
+ { "fqdn" = "router1.loni.org" }
+ }
+ { "host"
+ { "ip" = "10.0.0.2" }
+ { "fqdn" = "sw1.localdomain" }
+ }
+ { }
+ }
+
+
+ test Xymon.host get "192.168.1.1 server1.test.example.com # tag1 tag2 CLASS:classname CLIENT:clienthostname NOCOLUMNS:column1,column2\n" =
+ { "host"
+ { "ip" = "192.168.1.1" }
+ { "fqdn" = "server1.test.example.com" }
+ { "tag" = "tag1" }
+ { "tag" = "tag2" }
+ { "tag" = "CLASS:classname" }
+ { "tag" = "CLIENT:clienthostname" }
+ { "tag" = "NOCOLUMNS:column1,column2" }
+ }
+
+ test Xymon.host get "192.168.1.1 test.example.com # \n" =
+ { "host"
+ { "ip" = "192.168.1.1" }
+ { "fqdn" = "test.example.com" }
+ }
+
+ test Xymon.host get "192.168.1.1 test.example.com # http://google.com COMMENT:asdf\n" =
+ { "host"
+ { "ip" = "192.168.1.1" }
+ { "fqdn" = "test.example.com" }
+ { "tag" = "http://google.com" }
+ { "tag" = "COMMENT:asdf" }
+ }
+
+ test Xymon.include get "include file1.txt\n" =
+ { "include" = "file1.txt" }
+
+ test Xymon.include get "directory dir2\n" =
+ { "directory" = "dir2" }
+
+ test Xymon.page get "page page1 page 1 title is here\n" =
+ { "page" = "page1"
+ { "pagetitle" = "page 1 title is here" }
+ }
+
+ test Xymon.page get "page page2\n" =
+ { "page" = "page2"
+ }
+
+ test Xymon.subparent get "subparent page1 page2 PAGETITLE 1\n1.1.1.1 host1.lan #\n2.2.2.2 host2.lan # \n" =
+ { "subparent" = "page2"
+ { "parent" = "page1" }
+ { "pagetitle" = "PAGETITLE 1" }
+ { "host"
+ { "ip" = "1.1.1.1" }
+ { "fqdn" = "host1.lan" }
+ }
+ { "host"
+ { "ip" = "2.2.2.2" }
+ { "fqdn" = "host2.lan" }
+ }
+ }
+
+ test Xymon.title get "title title 1 goes here\n" =
+ { "title" = "title 1 goes here" }
+
+ test Xymon.lns get "page page1\ninclude file1.cfg\nsubparent page1 page2\ninclude page2.cfg\n" =
+ { "page" = "page1"
+ { "include" = "file1.cfg" }
+ }
+ { "subparent" = "page2"
+ { "parent" = "page1" }
+ { "include" = "page2.cfg" }
+ }
+
+
--- /dev/null
+(*
+Module: Test_Xymon_Alerting
+ Provides unit tests and examples for the <Xymon_Alerting> lens.
+*)
+
+module Test_Xymon_Alerting =
+ let macro_definition = "$NOTIF_LOCAL=SCRIPT /foo/xymonqpage.sh $PAGER SCRIPT /foo/xymonsms.sh $SMS FORMAT=SMS COLOR=!yellow\n"
+ test Xymon_Alerting.lns get macro_definition =
+ { "$NOTIF_LOCAL" = "SCRIPT /foo/xymonqpage.sh $PAGER SCRIPT /foo/xymonsms.sh $SMS FORMAT=SMS COLOR=!yellow" }
+
+ let basic_syntax = "HOST=hostname IGNORE\n"
+ test Xymon_Alerting.lns get basic_syntax =
+ { "1"
+ { "filters"
+ { "HOST" = "hostname" }
+ }
+ { "recipients"
+ { "IGNORE" { "filters" } }
+ }
+ }
+
+ let two_filters = "HOST=hostname SERVICE=service IGNORE\n"
+ test Xymon_Alerting.lns get two_filters =
+ { "1"
+ { "filters"
+ { "HOST" = "hostname" }
+ { "SERVICE" = "service" }
+ }
+ { "recipients"
+ { "IGNORE" { "filters" } }
+ }
+ }
+
+ let two_recipients = "HOST=hostname IGNORE STOP\n"
+ test Xymon_Alerting.lns get two_recipients =
+ { "1"
+ { "filters"
+ { "HOST" = "hostname" }
+ }
+ { "recipients"
+ { "IGNORE" { "filters" } }
+ { "STOP" { "filters" } }
+ }
+ }
+
+ let two_lines = "HOST=hostname SERVICE=service\n IGNORE\n"
+ test Xymon_Alerting.lns get two_lines =
+ { "1"
+ { "filters"
+ { "HOST" = "hostname" }
+ { "SERVICE" = "service" }
+ }
+ { "recipients"
+ { "IGNORE" { "filters" } }
+ }
+ }
+
+ let two_lines_for_recipients = "HOST=hostname SERVICE=service\n IGNORE\nSTOP\n"
+ test Xymon_Alerting.lns get two_lines_for_recipients =
+ { "1"
+ { "filters"
+ { "HOST" = "hostname" }
+ { "SERVICE" = "service" }
+ }
+ { "recipients"
+ { "IGNORE" { "filters" } }
+ { "STOP" { "filters" } }
+ }
+ }
+
+ let with_blanks_at_eol = "HOST=hostname SERVICE=service \n IGNORE \n"
+ test Xymon_Alerting.lns get with_blanks_at_eol =
+ { "1"
+ { "filters"
+ { "HOST" = "hostname" }
+ { "SERVICE" = "service" }
+ }
+ { "recipients"
+ { "IGNORE" { "filters" } }
+ }
+ }
+
+ let several_rules = "HOST=hostname SERVICE=service\nIGNORE\nHOST=hostname2 SERVICE=svc\nIGNORE\nSTOP\n"
+ test Xymon_Alerting.lns get several_rules =
+ { "1"
+ { "filters"
+ { "HOST" = "hostname" }
+ { "SERVICE" = "service" }
+ }
+ { "recipients"
+ { "IGNORE" { "filters" } }
+ }
+ }
+ { "2"
+ { "filters"
+ { "HOST" = "hostname2" }
+ { "SERVICE" = "svc" }
+ }
+ { "recipients"
+ { "IGNORE" { "filters" } }
+ { "STOP" { "filters" } }
+ }
+ }
+
+
+ let duration = "HOST=hostname DURATION>20 SERVICE=service\nIGNORE\n"
+ test Xymon_Alerting.lns get duration =
+ { "1"
+ { "filters"
+ { "HOST" = "hostname" }
+ { "DURATION"
+ { "operator" = ">" }
+ { "value" = "20" }
+ }
+ { "SERVICE" = "service" }
+ }
+ { "recipients"
+ { "IGNORE" { "filters" } }
+ }
+ }
+
+ let notice = "HOST=hostname NOTICE SERVICE=service\nIGNORE\n"
+ test Xymon_Alerting.lns get notice =
+ { "1"
+ { "filters"
+ { "HOST" = "hostname" }
+ { "NOTICE" }
+ { "SERVICE" = "service" }
+ }
+ { "recipients"
+ { "IGNORE" { "filters" } }
+ }
+ }
+
+ let mail = "HOST=hostname MAIL astreinteMail\n"
+ test Xymon_Alerting.lns get mail =
+ { "1"
+ { "filters"
+ { "HOST" = "hostname" }
+ }
+ { "recipients"
+ { "MAIL" = "astreinteMail"
+ { "filters" }
+ }
+ }
+ }
+
+ let script = "HOST=hostname SCRIPT /foo/email.sh astreinteMail\n"
+ test Xymon_Alerting.lns get script =
+ { "1"
+ { "filters"
+ { "HOST" = "hostname" }
+ }
+ { "recipients"
+ { "SCRIPT"
+ { "script" = "/foo/email.sh" }
+ { "recipient" = "astreinteMail" }
+ { "filters" }
+ }
+ }
+ }
+
+ let repeat = "HOST=hostname REPEAT=15\n"
+ test Xymon_Alerting.lns get repeat =
+ { "1"
+ { "filters"
+ { "HOST" = "hostname" }
+ }
+ { "recipients"
+ { "REPEAT" = "15"
+ { "filters" }
+ }
+ }
+ }
+
+ let mail_with_filters = "HOST=hostname MAIL astreinteMail EXSERVICE=service\n"
+ test Xymon_Alerting.lns get mail_with_filters =
+ { "1"
+ { "filters"
+ { "HOST" = "hostname" }
+ }
+ { "recipients"
+ { "MAIL" = "astreinteMail"
+ { "filters"
+ { "EXSERVICE" = "service" }
+ }
+ }
+ }
+ }
+
+ let mail_with_several_filters = "HOST=hostname MAIL astreinteMail EXSERVICE=service DURATION>20\n"
+ test Xymon_Alerting.lns get mail_with_several_filters =
+ { "1"
+ { "filters"
+ { "HOST" = "hostname" }
+ }
+ { "recipients"
+ { "MAIL" = "astreinteMail"
+ { "filters"
+ { "EXSERVICE" = "service" }
+ { "DURATION"
+ { "operator" = ">" }
+ { "value" = "20" }
+ }
+ }
+ }
+ }
+ }
+
+ let script_with_several_filters = "HOST=hostname SCRIPT /foo/email.sh astreinteMail EXSERVICE=service DURATION>20\n"
+ test Xymon_Alerting.lns get script_with_several_filters =
+ { "1"
+ { "filters"
+ { "HOST" = "hostname" }
+ }
+ { "recipients"
+ { "SCRIPT"
+ { "script" = "/foo/email.sh" }
+ { "recipient" = "astreinteMail" }
+ { "filters"
+ { "EXSERVICE" = "service" }
+ { "DURATION"
+ { "operator" = ">" }
+ { "value" = "20" }
+ }
+ }
+ }
+ }
+ }
+
+ let repeat_with_several_filters = "HOST=hostname REPEAT=15 EXSERVICE=service DURATION>20\n"
+ test Xymon_Alerting.lns get repeat_with_several_filters =
+ { "1"
+ { "filters"
+ { "HOST" = "hostname" }
+ }
+ { "recipients"
+ { "REPEAT" = "15"
+ { "filters"
+ { "EXSERVICE" = "service" }
+ { "DURATION"
+ { "operator" = ">" }
+ { "value" = "20" }
+ }
+ }
+ }
+ }
+ }
+
+ let recipients_with_several_filters = "HOST=hostname\nREPEAT=15 EXSERVICE=service DURATION>20\nMAIL astreinteMail TIME=weirdtimeformat\n"
+ test Xymon_Alerting.lns get recipients_with_several_filters =
+ { "1"
+ { "filters"
+ { "HOST" = "hostname" }
+ }
+ { "recipients"
+ { "REPEAT" = "15"
+ { "filters"
+ { "EXSERVICE" = "service" }
+ { "DURATION"
+ { "operator" = ">" }
+ { "value" = "20" }
+ }
+ }
+ }
+ { "MAIL" = "astreinteMail"
+ { "filters"
+ { "TIME" = "weirdtimeformat" }
+ }
+ }
+ }
+ }
+
+ let recipient_macro = "HOST=hostname\n $NOTIF_LOCAL\n STOP\n"
+ test Xymon_Alerting.lns get recipient_macro =
+ { "1"
+ { "filters"
+ { "HOST" = "hostname" }
+ }
+ { "recipients"
+ { "$NOTIF_LOCAL"
+ { "filters" }
+ }
+ { "STOP"
+ { "filters" }
+ }
+ }
+ }
+
--- /dev/null
+module Test_YAML =
+
+(* Inherit test *)
+test YAML.lns get "host1:
+ <<: *production\n" =
+{ "host1"
+ { "<<" = "production" }
+}
+
+(* top level sequence *)
+test YAML.lns get "
+- foo: 1
+ bar: 2
+
+- baz: 3
+ gee: 4
+" =
+{ }
+{ "@sequence"
+ { "foo" = "1" }
+ { "bar" = "2" }
+}
+{ }
+{ "@sequence"
+ { "baz" = "3" }
+ { "gee" = "4" }
+}
+
+test YAML.lns get "
+defaults: &defaults
+ repo1: master
+ repo2: master
+
+# Live
+production: &production
+ # repo3: dc89d7a
+ repo4: 2d39995
+ # repo5: bc4a40d
+
+host1:
+ <<: *production
+
+host2:
+ <<: *defaults
+ repo6: branch1
+
+host3:
+ <<: *defaults
+ # repo7: branch2
+ repo8: branch3
+" =
+{}
+{ "defaults" = "defaults"
+ { "repo1" = "master" }
+ { "repo2" = "master" }
+}
+{}
+{ "#comment" = "Live" }
+{ "production" = "production"
+ { "#comment" = "repo3: dc89d7a" }
+ { "repo4" = "2d39995" }
+ { "#comment" = "repo5: bc4a40d" }
+}
+{}
+{ "host1"
+ { "<<" = "production" }
+}
+{}
+{ "host2"
+ { "<<" = "defaults" }
+ { "repo6" = "branch1" }
+}
+{}
+{ "host3"
+ { "<<" = "defaults" }
+ { "#comment" = "repo7: branch2" }
+ { "repo8" = "branch3" }
+}
+
+(* Ruby YAML header *)
+test YAML.lns get "--- !ruby/object:Puppet::Node::Factspress RETURN)\n" =
+ { "@yaml" = "!ruby/object:Puppet::Node::Factspress RETURN)" }
+
+
+(* Continued lines *)
+test YAML.lns get "abc:
+ def: |-
+ ghi
+\n" =
+ { "abc"
+ { "def"
+ { "@mval"
+ { "@line" = "ghi" } } } }
+
--- /dev/null
+(*
+Module: Test_Yum
+ Provides unit tests and examples for the <Yum> lens.
+*)
+
+module Test_yum =
+
+ let yum_simple = "[sec1]
+# comment
+key=value
+[sec-two]
+key1=value1
+# comment
+key2=value2
+"
+
+ let yum_conf = "[main]
+cachedir=/var/cache/yum
+keepcache=0
+debuglevel=2
+logfile=/var/log/yum.log
+exactarch=1
+obsoletes=1
+gpgcheck=1
+plugins=1
+metadata_expire=1800
+
+installonly_limit=100
+
+# PUT YOUR REPOS HERE OR IN separate files named file.repo
+# in /etc/yum.repos.d
+"
+
+ let yum_repo1 = "[fedora]
+name=Fedora $releasever - $basearch
+failovermethod=priority
+#baseurl=http://download.fedora.redhat.com/pub/fedora/linux/releases/$releasever/Everything/$basearch/os/
+mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=fedora-$releasever&arch=$basearch
+enabled=1
+gpgcheck=1
+gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora file:///etc/pki/rpm-gpg/RPM-GPG-KEY
+
+[fedora-debuginfo]
+name=Fedora $releasever - $basearch - Debug
+failovermethod=priority
+#baseurl=http://download.fedora.redhat.com/pub/fedora/linux/releases/$releasever/Everything/$basearch/debug/
+mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=fedora-debug-$releasever&arch=$basearch
+enabled=0
+gpgcheck=1
+gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora file:///etc/pki/rpm-gpg/RPM-GPG-KEY
+
+[fedora-source]
+name=Fedora $releasever - Source
+failovermethod=priority
+#baseurl=http://download.fedora.redhat.com/pub/fedora/linux/releases/$releasever/Everything/source/SRPMS/
+mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=fedora-source-$releasever&arch=$basearch
+enabled=0
+gpgcheck=1
+gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora file:///etc/pki/rpm-gpg/RPM-GPG-KEY
+"
+ let yum_repo2 = "[remi]
+name=Les RPM de remi pour FC$releasever - $basearch
+baseurl=http://remi.collet.free.fr/rpms/fc$releasever.$basearch/
+ http://iut-info.ens.univ-reims.fr/remirpms/fc$releasever.$basearch/
+enabled=0
+gpgcheck=1
+gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-remi
+
+[remi-test]
+name=Les RPM de remi en test pour FC$releasever - $basearch
+baseurl=http://remi.collet.free.fr/rpms/test-fc$releasever.$basearch/
+ http://iut-info.ens.univ-reims.fr/remirpms/test-fc$releasever.$basearch/
+enabled=0
+gpgcheck=1
+gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-remi
+"
+
+ let epel_repo = "[epel]
+name=Extra Packages for Enterprise Linux 6 - $basearch
+exclude=ocs* clamav*
+
+"
+
+ let cont = "[main]\nbaseurl=url1\n url2 , url3\n \n"
+
+ test Yum.lns get yum_simple =
+ { "sec1"
+ { "#comment" = "comment" }
+ { "key" = "value" }
+ }
+ { "sec-two"
+ { "key1" = "value1" }
+ { "#comment" = "comment" }
+ { "key2" = "value2" }
+ }
+
+ test Yum.lns put yum_conf after
+ rm "main"
+ = ""
+
+ test Yum.lns put yum_simple after
+ set "sec1/key" "othervalue"
+ = "[sec1]\n# comment\nkey=othervalue\n[sec-two]\nkey1=value1\n# comment\nkey2=value2\n"
+
+ test Yum.lns put yum_simple after
+ rm "sec1" ;
+ rm "sec-two/key1"
+ = "[sec-two]\n# comment\nkey2=value2\n"
+
+ test Yum.lns put yum_simple after
+ rm "sec1" ;
+ rm "sec-two/key1" ;
+ set "sec-two/newkey" "newvalue"
+ = "[sec-two]\n# comment\nkey2=value2\nnewkey=newvalue\n"
+
+ test Yum.lns put yum_simple after
+ rm "sec1" ;
+ set "sec-two/key1" "newvalue"
+ = "[sec-two]\nkey1=newvalue\n# comment\nkey2=value2\n"
+
+ test Yum.lns get cont =
+ { "main"
+ { "baseurl" = "url1" }
+ { "baseurl" = "url2" }
+ { "baseurl" = "url3" }
+ {}
+ }
+
+ test Yum.lns put cont after
+ insa "baseurl" "main/baseurl[last()]";
+ set "main/baseurl[last()]" "url4"
+ = "[main]\nbaseurl=url1\n url2 , url3\n url4\n \n"
+
+ test Yum.lns put cont after
+ set "main/gpgcheck" "1"
+ =
+ cont . "gpgcheck=1\n"
+
+ (* We are actually stricter than yum in checking syntax. The yum.conf *)
+ (* man page mentions that it is illegal to have multiple baseurl keys *)
+ (* in the same section, but yum will just carry on, usually with *)
+ (* results that surprise the unsuspecting user *)
+ test Yum.lns get "[repo]\nbaseurl=url1\nbaseurl=url2\n" = *
+
+ (* This checks that we take the right branch in the section lens. *)
+ test Yum.record get "[repo]\nname=A name\nbaseurl=url1\n" =
+ { "repo"
+ { "name" = "A name" }
+ { "baseurl" = "url1" } }
+
+ (* Handle continuation lines for gpgkey; bug #132 *)
+ test Yum.lns get "[main]\ngpgkey=key1\n key2\n" =
+ { "main"
+ { "gpgkey" = "key1" }
+ { "gpgkey" = "key2" } }
+
+ test Yum.lns get yum_repo1 =
+ { "fedora"
+ { "name" = "Fedora $releasever - $basearch" }
+ { "failovermethod" = "priority" }
+ { "#comment" = "baseurl=http://download.fedora.redhat.com/pub/fedora/linux/releases/$releasever/Everything/$basearch/os/" }
+ { "mirrorlist" = "http://mirrors.fedoraproject.org/mirrorlist?repo=fedora-$releasever&arch=$basearch" }
+ { "enabled" = "1" }
+ { "gpgcheck" = "1" }
+ { "gpgkey" = "file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora" }
+ { "gpgkey" = "file:///etc/pki/rpm-gpg/RPM-GPG-KEY" }
+ { }
+ }
+ { "fedora-debuginfo"
+ { "name" = "Fedora $releasever - $basearch - Debug" }
+ { "failovermethod" = "priority" }
+ { "#comment" = "baseurl=http://download.fedora.redhat.com/pub/fedora/linux/releases/$releasever/Everything/$basearch/debug/" }
+ { "mirrorlist" = "http://mirrors.fedoraproject.org/mirrorlist?repo=fedora-debug-$releasever&arch=$basearch" }
+ { "enabled" = "0" }
+ { "gpgcheck" = "1" }
+ { "gpgkey" = "file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora" }
+ { "gpgkey" = "file:///etc/pki/rpm-gpg/RPM-GPG-KEY" }
+ { }
+ }
+ { "fedora-source"
+ { "name" = "Fedora $releasever - Source" }
+ { "failovermethod" = "priority" }
+ { "#comment" = "baseurl=http://download.fedora.redhat.com/pub/fedora/linux/releases/$releasever/Everything/source/SRPMS/" }
+ { "mirrorlist" = "http://mirrors.fedoraproject.org/mirrorlist?repo=fedora-source-$releasever&arch=$basearch" }
+ { "enabled" = "0" }
+ { "gpgcheck" = "1" }
+ { "gpgkey" = "file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora" }
+ { "gpgkey" = "file:///etc/pki/rpm-gpg/RPM-GPG-KEY" }
+ }
+
+
+ test Yum.lns get yum_repo2 =
+ { "remi"
+ { "name" = "Les RPM de remi pour FC$releasever - $basearch" }
+ { "baseurl" = "http://remi.collet.free.fr/rpms/fc$releasever.$basearch/" }
+ { "baseurl" = "http://iut-info.ens.univ-reims.fr/remirpms/fc$releasever.$basearch/" }
+ { "enabled" = "0" }
+ { "gpgcheck" = "1" }
+ { "gpgkey" = "file:///etc/pki/rpm-gpg/RPM-GPG-KEY-remi" }
+ { }
+ }
+ { "remi-test"
+ { "name" = "Les RPM de remi en test pour FC$releasever - $basearch" }
+ { "baseurl" = "http://remi.collet.free.fr/rpms/test-fc$releasever.$basearch/" }
+ { "baseurl" = "http://iut-info.ens.univ-reims.fr/remirpms/test-fc$releasever.$basearch/" }
+ { "enabled" = "0" }
+ { "gpgcheck" = "1" }
+ { "gpgkey" = "file:///etc/pki/rpm-gpg/RPM-GPG-KEY-remi" }
+ }
+
+ (* Test: Yum.lns
+ Check that we can parse an empty line, to fix test-save *)
+ test Yum.lns get "\n" = { }
+
+ (* Test: Yum.lns
+ Issue #45: allow spaces around equals sign *)
+ test Yum.lns get "[rpmforge]
+name = RHEL $releasever - RPMforge.net - dag
+baseurl = http://apt.sw.be/redhat/el6/en/$basearch/rpmforge\n" =
+ { "rpmforge"
+ { "name" = "RHEL $releasever - RPMforge.net - dag" }
+ { "baseurl" = "http://apt.sw.be/redhat/el6/en/$basearch/rpmforge" }
+ }
+
+ (* Test: Yum.lns
+ Issue #275: parse excludes as a list *)
+ test Yum.lns get epel_repo =
+ { "epel"
+ { "name" = "Extra Packages for Enterprise Linux 6 - $basearch" }
+ { "exclude" = "ocs*" }
+ { "exclude" = "clamav*" }
+ { } }
+
+ test Yum.lns put epel_repo after
+ insa "exclude" "epel/exclude[last()]";
+ set "epel/exclude[last()]" "clang" = "[epel]
+name=Extra Packages for Enterprise Linux 6 - $basearch
+exclude=ocs* clamav*
+ clang
+
+"
+
+ test Yum.lns put yum_repo2 after
+ set "remi/exclude" "php" = "[remi]
+name=Les RPM de remi pour FC$releasever - $basearch
+baseurl=http://remi.collet.free.fr/rpms/fc$releasever.$basearch/
+ http://iut-info.ens.univ-reims.fr/remirpms/fc$releasever.$basearch/
+enabled=0
+gpgcheck=1
+gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-remi
+
+exclude=php
+[remi-test]
+name=Les RPM de remi en test pour FC$releasever - $basearch
+baseurl=http://remi.collet.free.fr/rpms/test-fc$releasever.$basearch/
+ http://iut-info.ens.univ-reims.fr/remirpms/test-fc$releasever.$basearch/
+enabled=0
+gpgcheck=1
+gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-remi
+"
+
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+(*
+Module: Thttpd
+ Parses Thttpd's configuration files
+
+Author: Marc Fournier <marc.fournier@camptocamp.com>
+
+About: Reference
+ This lens is based on Thttpd's default thttpd.conf file.
+
+About: Usage Example
+(start code)
+ augtool> get /files/etc/thttpd/thttpd.conf/port
+ /files/etc/thttpd/thttpd.conf/port = 80
+
+ augtool> set /files/etc/thttpd/thttpd.conf/port 8080
+ augtool> save
+ Saved 1 file(s)
+(end code)
+ The <Test_Thttpd> file also contains various examples.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+*)
+
+module Thttpd =
+autoload xfm
+
+let comment = Util.comment
+let comment_eol = Util.comment_generic /[ \t]*[#][ \t]*/ " # "
+let empty = Util.empty
+let eol = Util.del_str "\n"
+let bol = Util.del_opt_ws ""
+
+let kvkey = /(port|dir|data_dir|user|cgipat|throttles|host|logfile|pidfile|charset|p3p|max_age)/
+let flag = /(no){0,1}(chroot|symlinks|vhost|globalpasswd)/
+let val = /[^\n# \t]*/
+
+let kventry = key kvkey . Util.del_str "=" . store val
+let flagentry = key flag
+
+let kvline = [ bol . kventry . (eol|comment_eol) ]
+let flagline = [ bol . flagentry . (eol|comment_eol) ]
+
+let lns = (kvline|flagline|comment|empty)*
+
+let filter = incl "/etc/thttpd/thttpd.conf"
+
+let xfm = transform lns filter
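+
+(* An illustrative check of the two entry forms above; the sample values
+   are hypothetical, not taken from a real thttpd.conf *)
+test lns get "port=80\nnochroot\n" =
+  { "port" = "80" }
+  { "nochroot" }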
--- /dev/null
+(*
+Module: Tinc
+ Parses Tinc VPN configuration files
+
+Author: Thomas Weißschuh <thomas.weissschuh@amadeus.com>
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+*)
+
+module Tinc =
+
+autoload xfm
+
+let no_spaces_no_equals = /[^ \t\r\n=]+/
+let assign = del (/[ \t]*[= ][ \t]*/) " = "
+let del_str = Util.del_str
+
+let entry = Build.key_value_line /[A-Za-z]+/ assign (store no_spaces_no_equals)
+
+let key_section_start = "-----BEGIN RSA PUBLIC KEY-----\n"
+let key_section_end = "\n-----END RSA PUBLIC KEY-----"
+ (* the last line does not include a newline *)
+let base_64 = /[A-Za-z0-9+\/=\n]+[A-Za-z0-9+\/=]/
+let key_section = del_str key_section_start .
+ (label "#key" . store base_64) .
+ del_str key_section_end
+
+(* we only support a single key section *)
+let lns = (Util.comment | Util.empty | entry) * . [(key_section . Util.empty *)]?
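+
+(* A minimal illustrative parse; the address below is hypothetical *)
+test lns get "Address = 10.0.0.1\n" =
+  { "Address" = "10.0.0.1" }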
+
+let filter = incl "/etc/tinc.conf"
+ . incl "/etc/tinc/*/tinc.conf"
+ . incl "/etc/tinc/hosts/*"
+ . incl "/etc/tinc/*/hosts/*"
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Tmpfiles
+ Parses systemd tmpfiles.d files
+
+Author: Julien Pivotto <roidelapluie@inuits.eu>
+
+About: Reference
+ This lens tries to stay as close as possible to `man 5 tmpfiles.d`.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Configuration files
+ This lens applies to /etc/tmpfiles.d/*.conf /usr/lib/tmpfiles.d/*.conf and
+ /run/tmpfiles.d/*.conf. See <filter>.
+
+About: Examples
+ The <Test_Tmpfiles> file contains various examples and tests.
+*)
+
+module Tmpfiles =
+ autoload xfm
+
+(************************************************************************
+ * Group: USEFUL PRIMITIVES
+ *************************************************************************)
+
+(* Group: Comments and empty lines *)
+
+ (* View: sep_spc
+Space *)
+ let sep_spc = Sep.space
+
+ (* View: sep_opt_spc
+Optional space (for the beginning of the lines) *)
+ let sep_opt_spc = Sep.opt_space
+
+ (* View: comment
+Comments *)
+ let comment = Util.comment
+
+ (* View: empty
+Empty lines *)
+ let empty = Util.empty
+
+(* Group: Lens-specific primitives *)
+
+ (* View: type
+One letter. Some of them can have a "+" and all can have an
+exclamation mark ("!"), a minus sign ("-"), an equal sign ("="),
+a tilde character ("~") and/or a caret ("^").
+
+Not all letters are valid.
+*)
+ let type = /([fFwdDevqQpLcbCxXrRzZtThHaAm]|[fFwpLcbaA]\+)[-!=~^]*/
+
+ (* View: mode
+"-", or 3-4 bytes. Optionally starts with a "~" or a ":". *)
+ let mode = /(-|(~|:)?[0-7]{3,4})/
+
+ (* View: age
+"-", or one of the formats seen in the manpage: 10d, 5seconds, 1y5days.
+Optionally starts with a "~". *)
+ let age = /(-|(~?[0-9]+(s|m|min|h|d|w|ms|us|((second|minute|hour|day|week|millisecond|microsecond)s?))?)+)/
+
+ (* View: argument
+The last field. It can contain spaces. *)
+ let argument = /([^# \t\n][^#\n]*[^# \t\n]|[^# \t\n])/
+
+ (* View: field
+Applies to the other fields: path, gid and uid fields *)
+ let field = /[^# \t\n]+/
+
+ (* View: record
+A valid record, one line in the file.
+Only the first two fields are mandatory. *)
+ let record = [ seq "record" . sep_opt_spc .
+ [ label "type" . store type ] . sep_spc .
+ [ label "path" . store field ] . ( sep_spc .
+ [ label "mode" . store mode ] . ( sep_spc .
+ [ label "uid" . store field ] . ( sep_spc .
+ [ label "gid" . store field ] . ( sep_spc .
+ [ label "age" . store age ] . ( sep_spc .
+ [ label "argument" . store argument ] )? )? )? )? )? .
+ Util.comment_or_eol ]
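+
+ (* An illustrative parse of a typical record; the path, mode and
+ ownership below are hypothetical sample values *)
+ test record get "d /tmp 1777 root root 10d\n" =
+ { "1"
+ { "type" = "d" }
+ { "path" = "/tmp" }
+ { "mode" = "1777" }
+ { "uid" = "root" }
+ { "gid" = "root" }
+ { "age" = "10d" }
+ }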
+
+(************************************************************************
+ * Group: THE TMPFILES LENS
+ *************************************************************************)
+
+ (* View: lns
+The tmpfiles lens.
+Each line can be a comment, a record or empty. *)
+ let lns = ( empty | comment | record ) *
+
+ (* View: filter *)
+ let filter = incl "/etc/tmpfiles.d/*.conf"
+ . incl "/usr/lib/tmpfiles.d/*.conf"
+ . incl "/run/tmpfiles.d/*.conf"
+
+ let xfm = transform lns filter
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+(*
+Module: Toml
+ Parses TOML files
+
+Author: Raphael Pinson <raphael.pinson@camptocamp.com>
+
+About: Reference
+ https://toml.io/en/v1.0.0
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Configuration files
+ This lens applies to TOML files.
+
+About: Examples
+ The <Test_Toml> file contains various examples and tests.
+*)
+
+module Toml =
+
+(* Group: base definitions *)
+
+(* View: comment
+ A simple comment *)
+let comment = IniFile.comment "#" "#"
+
+(* View: empty
+ An empty line *)
+let empty = Util.empty_dos
+
+(* View: eol
+ An end of line *)
+let eol = Util.doseol
+
+
+(* Group: value entries *)
+
+let bare_re_noquot = (/[^][", \t\r\n]/ - "#")
+let bare_re = (/[^][,\r=]/ - "#")+
+let no_quot = /[^]["\r\n]*/
+let bare = Quote.do_dquote_opt_nil (store (bare_re_noquot . (bare_re* . bare_re_noquot)?))
+let quoted = Quote.do_dquote (store (/[^"]/ . "#"* . /[^"]/))
+
+let ws = del /[ \t\n]*/ ""
+
+let space_or_empty = [ del /[ \t\n]+/ " " ]
+
+let comma = Util.del_str "," . (space_or_empty | comment)?
+let lbrace = Util.del_str "{" . (space_or_empty | comment)?
+let rbrace = Util.del_str "}"
+let lbrack = Util.del_str "[" . (space_or_empty | comment)?
+let rbrack = Util.del_str "]"
+
+(* This follows the definition of 'string' at https://www.json.org/
+ It's a little wider than what's allowed there as it would accept
+ nonsensical \u escapes *)
+let triple_dquote = Util.del_str "\"\"\""
+let str_store = Quote.dquote . store /([^\\"]|\\\\["\/bfnrtu\\])*/ . Quote.dquote
+
+let str_store_multi = triple_dquote . eol
+ . store /([^\\"]|\\\\["\/bfnrtu\\])*/
+ . del /\n[ \t]*/ "\n" . triple_dquote
+
+let str_store_literal = Quote.squote . store /([^\\']|\\\\['\/bfnrtu\\])*/ . Quote.squote
+
+let integer =
+ let base10 = /[+-]?[0-9_]+/
+ in let hex = /0x[A-Za-z0-9]+/
+ in let oct = /0o[0-7]+/
+ in let bin = /0b[01]+/
+ in [ label "integer" . store (base10 | hex | oct | bin) ]
+
+let float =
+ let n = /[0-9_]+/
+ in let pm = /[+-]?/
+ in let z = pm . n
+ in let decim = "." . n
+ in let exp = /[Ee]/ . z
+ in let num = z . decim | z . exp | z . decim . exp
+ in let inf = pm . "inf"
+ in let nan = pm . "nan"
+ in [ label "float" . store (num | inf | nan) ]
+
+let str = [ label "string" . str_store ]
+
+let str_multi = [ label "string_multi" . str_store_multi ]
+
+let str_literal = [ label "string_literal" . str_store_literal ]
+
+let bool (r:regexp) = [ label "bool" . store r ]
+
+
+let date_re = /[0-9]{4}-[0-9]{2}-[0-9]{2}/
+let time_re = /[0-9]{1,2}:[0-9]{2}:[0-9]{2}(\.[0-9]+)?[A-Z]*/
+
+let datetime = [ label "datetime" . store (date_re . /[T ]/ . time_re) ]
+let date = [ label "date" . store date_re ]
+let time = [ label "time" . store time_re ]
+
+let norec = str | str_multi | str_literal
+ | integer | float | bool /true|false/
+ | datetime | date | time
+
+let array (value:lens) = [ label "array" . lbrack
+ . ( ( Build.opt_list value comma . (del /,?/ "") . (space_or_empty | comment)? . rbrack )
+ | rbrack ) ]
+
+let array_norec = array norec
+
+(* This is not a truly recursive array; it only supports one or two dimensions.
+   For more info on this see https://github.com/hercules-team/augeas/issues/715 *)
+let array_rec = array (norec | array_norec)
+
+let entry_base (value:lens) = [ label "entry" . store Rx.word . Sep.space_equal . value ]
+
+let inline_table (value:lens) = [ label "inline_table" . lbrace
+ . ( (Build.opt_list (entry_base value) comma . space_or_empty? . rbrace)
+ | rbrace ) ]
+
+let entry = [ label "entry" . Util.indent . store Rx.word . Sep.space_equal
+ . (norec | array_rec | inline_table (norec|array_norec)) . (eol | comment) ]
+
+(* Group: tables *)
+
+(* View: table_gen
+ A generic table *)
+let table_gen (name:string) (lbrack:string) (rbrack:string) =
+ let title = Util.indent . label name
+ . Util.del_str lbrack
+ . store /[^]\r\n.]+(\.[^]\r\n.]+)*/
+ . Util.del_str rbrack . eol
+ in [ title . (entry|empty|comment)* ]
+
+(* View: table
+ A table or array of tables *)
+let table = table_gen "table" "[" "]"
+ | table_gen "@table" "[[" "]]"
+
+(* Group: lens *)
+
+(* View: lns
+ The Toml lens *)
+let lns = (entry | empty | comment)* . table*
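+
+(* An illustrative sketch of a top-level entry followed by a table;
+   the key and table names are hypothetical *)
+test lns get "key = \"value\"\n\n[table1]\nkey2 = \"value2\"\n" =
+  { "entry" = "key" { "string" = "value" } }
+  { }
+  { "table" = "table1"
+    { "entry" = "key2" { "string" = "value2" } }
+  }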
--- /dev/null
+(*
+Module: Trapperkeeper
+ Parses Trapperkeeper configuration files
+
+Author: Raphael Pinson <raphael.pinson@camptocamp.com>
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Configuration files
+ This lens applies to Trapperkeeper webservice configuration files. See <filter>.
+
+About: Examples
+ The <Test_Trapperkeeper> file contains various examples and tests.
+*)
+module Trapperkeeper =
+
+autoload xfm
+
+(************************************************************************
+ * Group: USEFUL PRIMITIVES
+ *************************************************************************)
+
+(* View: empty *)
+let empty = Util.empty
+
+(* View: comment *)
+let comment = Util.comment
+
+(* View: sep *)
+let sep = del /[ \t]*[:=]/ ":"
+
+(* View: sep_with_spc *)
+let sep_with_spc = sep . Sep.opt_space
+
+(************************************************************************
+ * Group: BLOCKS (FROM 1.2, FOR 0.10 COMPATIBILITY)
+ *************************************************************************)
+
+(* Variable: block_ldelim_newlines_re *)
+let block_ldelim_newlines_re = /[ \t\n]+\{([ \t\n]*\n)?/
+
+(* Variable: block_rdelim_newlines_re *)
+let block_rdelim_newlines_re = /[ \t]*\}/
+
+(* Variable: block_ldelim_newlines_default *)
+let block_ldelim_newlines_default = "\n{\n"
+
+(* Variable: block_rdelim_newlines_default *)
+let block_rdelim_newlines_default = "}"
+
+(************************************************************************
+ * View: block_newlines
+ * A block enclosed in curly braces, with newlines forced
+ * and indentation defaulting to a tab.
+ *
+ * Parameters:
+ * entry:lens - the entry to be stored inside the block.
+ * This entry should not include <Util.empty>,
+ * <Util.comment> or <Util.comment_noindent>,
+ * should be indented and finish with an eol.
+ ************************************************************************)
+let block_newlines (entry:lens) (comment:lens) =
+ del block_ldelim_newlines_re block_ldelim_newlines_default
+ . ((entry | comment) . (Util.empty | entry | comment)*)?
+ . del block_rdelim_newlines_re block_rdelim_newlines_default
+
+(************************************************************************
+ * Group: ENTRY TYPES
+ *************************************************************************)
+
+let opt_dquot (lns:lens) = del /"?/ "" . lns . del /"?/ ""
+
+(* View: simple *)
+let simple = [ Util.indent . label "@simple" . opt_dquot (store /[A-Za-z0-9_.\/-]+/) . sep_with_spc
+ . [ label "@value" . opt_dquot (store /[^,"\[ \t\n]+/) ]
+ . Util.eol ]
+
+(* View: array *)
+let array =
+ let lbrack = Util.del_str "["
+ in let rbrack = Util.del_str "]"
+ in let opt_space = del /[ \t]*/ ""
+ in let comma = opt_space . Util.del_str "," . opt_space
+ in let elem = [ seq "elem" . opt_dquot (store /[^,"\[ \t\n]+/) ]
+ in let elems = counter "elem" . Build.opt_list elem comma
+ in [ Util.indent . label "@array" . store Rx.word
+ . sep_with_spc . lbrack . Sep.opt_space
+ . (elems . Sep.opt_space)?
+ . rbrack . Util.eol ]
+
+(* View: hash *)
+let hash (lns:lens) = [ Util.indent . label "@hash" . store Rx.word . sep
+ . block_newlines lns Util.comment
+ . Util.eol ]
+
+
+(************************************************************************
+ * Group: ENTRY
+ *************************************************************************)
+
+(* Just for typechecking *)
+let entry_no_rec = hash (simple|array)
+
+(* View: entry *)
+let rec entry = hash (entry|simple|array)
+
+(************************************************************************
+ * Group: LENS AND FILTER
+ *************************************************************************)
+
+(* View: lns *)
+let lns = (empty|comment)* . (entry . (empty|comment)*)*
+
+(* Variable: filter *)
+let filter = incl "/etc/puppetserver/conf.d/*"
+ . incl "/etc/puppetlabs/puppetserver/conf.d/*"
+ . Util.stdexcl
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Tuned
+ Parses Tuned's configuration files
+
+Author: Pat Riehecky <riehecky@fnal.gov>
+
+About: Reference
+ This lens is based on tuned's tuned-main.conf
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+*)
+
+module Tuned =
+autoload xfm
+
+let lns = Simplevars.lns
+
+let filter = incl "/etc/tuned/tuned-main.conf"
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Up2date
+ Parses /etc/sysconfig/rhn/up2date
+
+Author: Raphael Pinson <raphink@gmail.com>
+
+About: Reference
+ This lens tries to stay as close as possible to `man 5 up2date`.
+
+About: License
+ This file is licensed under the LGPLv2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Configuration files
+ This lens applies to /etc/sysconfig/rhn/up2date. See <filter>.
+
+About: Examples
+ The <Test_Up2date> file contains various examples and tests.
+*)
+
+module Up2date =
+
+autoload xfm
+
+(************************************************************************
+ * Group: USEFUL PRIMITIVES
+ *************************************************************************)
+
+(* Variable: key_re *)
+let key_re = /[^=# \t\n]+/
+
+(* Variable: value_re *)
+let value_re = /[^ \t\n;][^\n;]*[^ \t\n;]|[^ \t\n;]/
+
+(* View: sep_semi *)
+let sep_semi = Sep.semicolon
+
+(************************************************************************
+ * Group: ENTRIES
+ *************************************************************************)
+
+(* View: single_entry
+ key=foo *)
+let single_entry = [ label "value" . store value_re ]
+
+(* View: multi_empty
+ key=; *)
+let multi_empty = sep_semi
+
+(* View: multi_value
+ One value in a list setting *)
+let multi_value = [ seq "multi" . store value_re ]
+
+(* View: multi_single
+ key=foo; (parsed as a list) *)
+let multi_single = multi_value . sep_semi
+
+(* View: multi_values
+ key=foo;bar
+ key=foo;bar; *)
+let multi_values = multi_value . ( sep_semi . multi_value )+ . del /;?/ ";"
+
+(* View: multi_entry
+ List settings go under a 'values' node *)
+let multi_entry = [ label "values" . counter "multi"
+ . ( multi_single | multi_values | multi_empty ) ]
+
+(* View: entry *)
+let entry = [ seq "entry" . store key_re . Sep.equal
+ . ( multi_entry | single_entry )? . Util.eol ]
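+
+(* Illustrative parses of a single value and a list value; the keys and
+   values below are hypothetical *)
+test entry get "debug=0\n" =
+  { "1" = "debug" { "value" = "0" } }
+
+test entry get "pkgs=a;b;\n" =
+  { "1" = "pkgs"
+    { "values"
+      { "1" = "a" }
+      { "2" = "b" } } }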
+
+(************************************************************************
+ * Group: LENS
+ *************************************************************************)
+
+(* View: lns *)
+let lns = (Util.empty | Util.comment | entry)*
+
+(* Variable: filter *)
+let filter = incl "/etc/sysconfig/rhn/up2date"
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: UpdateDB
+ Parses /etc/updatedb.conf
+
+Author: Raphael Pinson <raphink@gmail.com>
+
+About: Reference
+ This lens tries to stay as close as possible to `man 5 updatedb.conf`.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Configuration files
+ This lens applies to /etc/updatedb.conf. See <filter>.
+
+About: Examples
+ The <Test_UpdateDB> file contains various examples and tests.
+*)
+
+module UpdateDB =
+
+autoload xfm
+
+(* View: list
+ A list entry *)
+let list =
+ let entry = [ label "entry" . store Rx.no_spaces ]
+ in let entry_list = Build.opt_list entry Sep.space
+ in [ key /PRUNE(FS|NAMES|PATHS)/ . Sep.space_equal
+ . Quote.do_dquote entry_list . Util.doseol ]
+
+(* View: bool
+ A boolean entry *)
+let bool = [ key "PRUNE_BIND_MOUNTS" . Sep.space_equal
+ . Quote.do_dquote (store /[01]|no|yes/)
+ . Util.doseol ]
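+
+(* Illustrative parses of the two entry forms; the values below are
+   hypothetical *)
+test list get "PRUNEFS = \"NFS nfs\"\n" =
+  { "PRUNEFS"
+    { "entry" = "NFS" }
+    { "entry" = "nfs" } }
+
+test bool get "PRUNE_BIND_MOUNTS = \"yes\"\n" =
+  { "PRUNE_BIND_MOUNTS" = "yes" }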
+
+(* View: lns
+ The <UpdateDB> lens *)
+let lns = (Util.empty|Util.comment|list|bool)*
+
+(* Variable: filter
+ The filter *)
+let filter = incl "/etc/updatedb.conf"
+
+let xfm = transform lns filter
--- /dev/null
+(*
+Module: Util
+ Generic module providing useful primitives
+
+Author: David Lutterkort
+
+About: License
+ This file is licensed under the LGPLv2+, like the rest of Augeas.
+*)
+
+
+module Util =
+
+
+(*
+Variable: del_str
+ Delete a string and default to it
+
+ Parameters:
+ s:string - the string to delete and default to
+*)
+ let del_str (s:string) = del s s
+
+(*
+Variable: del_ws
+ Delete mandatory whitespace
+*)
+ let del_ws = del /[ \t]+/
+
+(*
+Variable: del_ws_spc
+ Delete mandatory whitespace, default to single space
+*)
+ let del_ws_spc = del_ws " "
+
+(*
+Variable: del_ws_tab
+ Delete mandatory whitespace, default to single tab
+*)
+ let del_ws_tab = del_ws "\t"
+
+
+(*
+Variable: del_opt_ws
+ Delete optional whitespace
+*)
+ let del_opt_ws = del /[ \t]*/
+
+
+(*
+Variable: eol
+ Delete end of line, including optional trailing whitespace
+*)
+ let eol = del /[ \t]*\n/ "\n"
+
+(*
+Variable: doseol
+ Delete end of line with optional carriage return,
+ including optional trailing whitespace
+*)
+ let doseol = del /[ \t]*\r?\n/ "\n"
+
+
+(*
+Variable: indent
+ Delete indentation, including leading whitespace
+*)
+ let indent = del /[ \t]*/ ""
+
+(* Group: Comment
+    This is a general definition of comments.
+    It allows comments to be indented, strips their leading and trailing
+    whitespace, and stores the text in nodes; empty comments are ignored
+    together with empty lines.
+*)
+
+
+(* View: comment_generic_seteol
+   Map comments with a custom end-of-line lens, and set the default comment sign
+*)
+
+ let comment_generic_seteol (r:regexp) (d:string) (eol:lens) =
+ [ label "#comment" . del r d
+ . store /([^ \t\r\n].*[^ \t\r\n]|[^ \t\r\n])/ . eol ]
+
+(* View: comment_generic
+ Map comments and set default comment sign
+*)
+
+ let comment_generic (r:regexp) (d:string) =
+ comment_generic_seteol r d doseol
+
+(* View: comment
+ Map comments into "#comment" nodes
+*)
+ let comment = comment_generic /[ \t]*#[ \t]*/ "# "
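+
+(* For instance, this illustrative test shows that leading whitespace and
+   the comment marker are stripped from the stored value *)
+ test comment get "  # a comment\n" = { "#comment" = "a comment" }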
+
+(* View: comment_noindent
+ Map comments into "#comment" nodes, without indentation
+*)
+ let comment_noindent = comment_generic /#[ \t]*/ "# "
+
+(* View: comment_eol
+ Map eol comments into "#comment" nodes
+ Add a space before # for end of line comments
+*)
+ let comment_eol = comment_generic /[ \t]*#[ \t]*/ " # "
+
+(* View: comment_or_eol
+ A <comment_eol> or <eol>, with an optional empty comment *)
+ let comment_or_eol = comment_eol | (del /[ \t]*(#[ \t]*)?\n/ "\n")
+
+(* View: comment_multiline
+ A C-style multiline comment *)
+ let comment_multiline =
+ let mline_re = (/[^ \t\r\n].*[^ \t\r\n]|[^ \t\r\n]/ - /.*\*\/.*/) in
+ let mline = [ seq "mline"
+ . del /[ \t\r\n]*/ "\n"
+ . store mline_re ] in
+ [ label "#mcomment" . del /[ \t]*\/\*/ "/*"
+ . counter "mline"
+ . mline . (eol . mline)*
+ . del /[ \t\r\n]*\*\/[ \t]*\r?\n/ "\n*/\n" ]
+
+(* View: comment_c_style
+ A comment line, C-style *)
+ let comment_c_style =
+ comment_generic /[ \t]*\/\/[ \t]*/ "// "
+
+(* View: comment_c_style_or_hash
+ A comment line, C-style or hash *)
+ let comment_c_style_or_hash =
+ comment_generic /[ \t]*((\/\/)|#)[ \t]*/ "// "
+
+(* View: empty_generic
+ A generic definition of <empty>
+ Map empty lines, including empty comments *)
+ let empty_generic (r:regexp) =
+ [ del r "" . del_str "\n" ]
+
+(* Variable: empty_generic_re *)
+ let empty_generic_re = /[ \t]*#?[ \t]*/
+
+(* View: empty
+ Map empty lines, including empty comments *)
+ let empty = empty_generic empty_generic_re
+
+(* Variable: empty_c_style_re *)
+ let empty_c_style_re = /[ \t]*((\/\/)|(\/\*[ \t]*\*\/))?[ \t]*/
+
+(* View: empty_c_style
+ Map empty lines, including C-style empty comment *)
+ let empty_c_style = empty_generic empty_c_style_re
+
+(* View: empty_any
+ Either <empty> or <empty_c_style> *)
+ let empty_any = empty_generic (empty_generic_re | empty_c_style_re)
+
+(* View: empty_generic_dos
+ A generic definition of <empty> with dos newlines
+ Map empty lines, including empty comments *)
+ let empty_generic_dos (r:regexp) =
+ [ del r "" . del /\r?\n/ "\n" ]
+
+(* View: empty_dos *)
+ let empty_dos =
+ empty_generic_dos /[ \t]*#?[ \t]*/
+
+
+(* View: Split *)
+(* Split (SEP . ELT)* into an array-like tree where each match for ELT *)
+(* appears in a separate subtree. The labels for the subtrees are *)
+(* consecutive numbers, starting at 0 *)
+ let split (elt:lens) (sep:lens) =
+ let sym = gensym "split" in
+ counter sym . ( [ seq sym . sep . elt ] ) *
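As a sketch of how <split> might be used (illustrative only; `elt`, `comma` and `tail` below are hypothetical lenses, not part of this module):

```
(* Hypothetical usage: each element of a comma-led list such as ",a,b"
 * lands in its own numbered subtree holding the stored value. *)
let elt   = store /[a-z]+/
let comma = Util.del_str ","
let tail  = Util.split elt comma
```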
+
+(* View: delim *)
+ let delim (op:string) = del (/[ \t]*/ . op . /[ \t]*/)
+ (" " . op . " ")
+
+(* Group: Exclusions
+
+Variable: stdexcl
+ Exclusion for files that are commonly not wanted/needed
+*)
+ let stdexcl = (excl "*~") .
+ (excl "*.rpmnew") .
+ (excl "*.rpmsave") .
+ (excl "*.dpkg-old") .
+ (excl "*.dpkg-new") .
+ (excl "*.dpkg-bak") .
+ (excl "*.dpkg-dist") .
+ (excl "*.augsave") .
+ (excl "*.augnew") .
+ (excl "*.bak") .
+ (excl "*.old") .
+ (excl "#*#")
--- /dev/null
+(*
+Module: Vfstab
+ Parses the Solaris vfstab config file; based on the Fstab lens
+
+Author: Dominic Cleal <dcleal@redhat.com>
+
+About: Reference
+ See vfstab(4)
+
+About: License
+ This file is licensed under the LGPLv2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Configuration files
+ This lens applies to /etc/vfstab.
+
+About: Examples
+ The <Test_Vfstab> file contains various examples and tests.
+*)
+
+module Vfstab =
+ autoload xfm
+
+ let sep_tab = Sep.tab
+ let sep_spc = Sep.space
+ let comma = Sep.comma
+ let eol = Util.eol
+
+ let comment = Util.comment
+ let empty = Util.empty
+
+ let file = /[^# \t\n]+/
+
+ let int = Rx.integer
+ let bool = "yes" | "no"
+
+ (* An option label can't contain comma, comment, equals, or space;
+ a lone "-" is excluded because it marks an absent field *)
+ let optlabel = /[^,#= \n\t]+/ - "-"
+ let spec = /[^-,# \n\t][^ \n\t]*/
+
+ let optional = Util.del_str "-"
+
+ let comma_sep_list (l:string) =
+ let value = [ label "value" . Util.del_str "=" . store Rx.neg1 ] in
+ let lns = [ label l . store optlabel . value? ] in
+ Build.opt_list lns comma
+
+ let record = [ seq "mntent" .
+ [ label "spec" . store spec ] . sep_tab .
+ ( [ label "fsck" . store spec ] | optional ). sep_tab .
+ [ label "file" . store file ] . sep_tab .
+ comma_sep_list "vfstype" . sep_tab .
+ ( [ label "passno" . store int ] | optional ) . sep_spc .
+ [ label "atboot" . store bool ] . sep_tab .
+ ( comma_sep_list "opt" | optional ) .
+ eol ]
+
+ let lns = ( empty | comment | record ) *
+ let filter = incl "/etc/vfstab"
+
+ let xfm = transform lns filter
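For a quick illustration (the real test suite lives in <Test_Vfstab>), a typical vfstab line can be inspected with augparse; the `= ?` form simply prints the resulting tree:

```
(* Illustrative only *)
test Vfstab.lns get "/dev/dsk/c0t0d0s0 /dev/rdsk/c0t0d0s0 / ufs 1 no -\n" = ?
```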
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+(*
+Module: VMware_Config
+ Parses /etc/vmware/config
+
+Author: Raphael Pinson <raphael.pinson@camptocamp.com>
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Configuration files
+ This lens applies to /etc/vmware/config. See <filter>.
+
+About: Examples
+ The <Test_VMware_Config> file contains various examples and tests.
+*)
+module VMware_Config =
+
+autoload xfm
+
+(* View: entry *)
+let entry =
+ Build.key_value_line Rx.word Sep.space_equal Quote.double_opt
+
+(* View: lns *)
+let lns = (Util.empty | Util.comment | entry)*
+
+
+(* Variable: filter *)
+let filter = incl "/etc/vmware/config"
+
+let xfm = transform lns filter
--- /dev/null
+(* Parse vsftpd.conf *)
+module Vsftpd =
+ autoload xfm
+
+(* The code in parseconf.c does not seem to allow for trailing whitespace *)
+(* in the config file *)
+let eol = Util.del_str "\n"
+let empty = Util.empty
+let comment = Util.comment
+
+let bool_option_re = /anonymous_enable|isolate|isolate_network|local_enable|pasv_enable|port_enable|chroot_local_user|write_enable|anon_upload_enable|anon_mkdir_write_enable|anon_other_write_enable|chown_uploads|connect_from_port_20|xferlog_enable|dirmessage_enable|anon_world_readable_only|async_abor_enable|ascii_upload_enable|ascii_download_enable|one_process_model|xferlog_std_format|pasv_promiscuous|deny_email_enable|chroot_list_enable|setproctitle_enable|text_userdb_names|ls_recurse_enable|log_ftp_protocol|guest_enable|userlist_enable|userlist_deny|use_localtime|check_shell|hide_ids|listen|port_promiscuous|passwd_chroot_enable|no_anon_password|tcp_wrappers|use_sendfile|force_dot_files|listen_ipv6|dual_log_enable|syslog_enable|background|virtual_use_local_privs|session_support|download_enable|dirlist_enable|chmod_enable|secure_email_list_enable|run_as_launching_user|no_log_lock|ssl_enable|allow_anon_ssl|force_local_logins_ssl|force_local_data_ssl|ssl_sslv2|ssl_sslv3|ssl_tlsv1|tilde_user_enable|force_anon_logins_ssl|force_anon_data_ssl|mdtm_write|lock_upload_files|pasv_addr_resolve|debug_ssl|require_cert|validate_cert|require_ssl_reuse|allow_writeable_chroot|seccomp_sandbox/
+
+let uint_option_re = /accept_timeout|connect_timeout|local_umask|anon_umask|ftp_data_port|idle_session_timeout|data_connection_timeout|pasv_min_port|pasv_max_port|anon_max_rate|local_max_rate|listen_port|max_clients|file_open_mode|max_per_ip|trans_chunk_size|delay_failed_login|delay_successful_login|max_login_fails|chown_upload_mode/
+
+let str_option_re = /secure_chroot_dir|ftp_username|chown_username|xferlog_file|vsftpd_log_file|message_file|nopriv_user|ftpd_banner|banned_email_file|chroot_list_file|pam_service_name|guest_username|userlist_file|anon_root|local_root|banner_file|pasv_address|listen_address|user_config_dir|listen_address6|cmds_allowed|hide_file|deny_file|user_sub_token|email_password_file|rsa_cert_file|dsa_cert_file|ssl_ciphers|rsa_private_key_file|dsa_private_key_file|ca_certs_file/
+
+let bool_value_re = /[yY][eE][sS]|[tT][rR][uU][eE]|1|[nN][oO]|[fF][aA][lL][sS][eE]|0/
+
+let option (k:regexp) (v:regexp) = [ key k . Util.del_str "=" . store v . eol ]
+
+let bool_option = option bool_option_re bool_value_re
+
+let str_option = option str_option_re /[^\n]+/
+
+let uint_option = option uint_option_re /[0-9]+/
+
+let lns = (bool_option|str_option|uint_option|comment|empty)*
+
+let filter = (incl "/etc/vsftpd.conf") . (incl "/etc/vsftpd/vsftpd.conf")
+
+let xfm = transform lns filter
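As a hedged sketch of what this lens accepts (illustrative only; `= ?` prints the tree under augparse):

```
(* Illustrative only: a boolean and an integer option *)
test Vsftpd.lns get "anonymous_enable=YES\nlisten_port=21\n" = ?
```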
--- /dev/null
+(* Webmin module for Augeas
+ Author: Free Ekanayaka <free@64studio.com>
+
+ Reference:
+
+*)
+
+module Webmin =
+
+ autoload xfm
+
+(************************************************************************
+ * USEFUL PRIMITIVES
+ *************************************************************************)
+
+let eol = Util.eol
+let comment = Util.comment
+let empty = Util.empty
+
+let sep_eq = del /=/ "="
+
+let sto_to_eol = store /([^ \t\n].*[^ \t\n]|[^ \t\n])/
+
+let word = /[A-Za-z0-9_.-]+/
+
+(************************************************************************
+ * ENTRIES
+ *************************************************************************)
+
+let entry = [ key word
+ . sep_eq
+ . sto_to_eol?
+ . eol ]
+
+(************************************************************************
+ * LENS
+ *************************************************************************)
+
+let lns = (comment|empty|entry) *
+
+let wm_incl (n:string)
+ = (incl ("/etc/webmin/" . n))
+let filter = wm_incl "miniserv.conf"
+ . wm_incl "ldap-useradmin/config"
+
+let xfm = transform lns filter
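For illustration, a couple of miniserv.conf-style `key=value` lines can be inspected with augparse (`= ?` prints the resulting tree):

```
(* Illustrative only *)
test Webmin.lns get "port=10000\nssl=1\n" = ?
```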
--- /dev/null
+(* Lens for the textual representation of Windows registry files, as used *)
+(* by wine etc. *)
+(* This is pretty quick and dirty, as it doesn't put a lot of finesse on *)
+(* splitting up values that have structure, e.g. hex arrays or *)
+(* collections of paths. *)
+module Wine =
+
+(* We handle Unix and DOS line endings, though we can only add one or the *)
+(* other to new lines. Maybe provide a function to gather that from the *)
+(* current file? *)
+let eol = del /[ \t]*\r?\n/ "\n"
+let comment = [ label "#comment" . del /[ \t]*;;[ \t]*/ ";; "
+ . store /([^ \t\r\n].*[^ \t\r\n]|[^ \t\r\n])/ . eol ]
+let empty = [ eol ]
+let dels = Util.del_str
+let del_ws = Util.del_ws_spc
+
+let header =
+ [ label "registry" . store /[a-zA-Z0-9 \t]*[a-zA-Z0-9]/ ] .
+ del /[ \t]*Version[ \t]*/ " Version " .
+ [ label "version" . store /[0-9.]+/ ] . eol
+
+let qstr =
+ let re = /([^"\n]|\\.)*/ - /@|"@"/ in (* " Relax, emacs *)
+ dels "\"" . store re . dels "\""
+
+let typed_val =
+ ([ label "type" . store /dword|hex(\([0-9]+\))?/ ] . dels ":" .
+ [ label "value" . store /[a-zA-Z0-9,()]+(\\\r?\n[ \t]*[a-zA-Z0-9,]+)*/])
+ |([ label "type" . store /str\([0-9]+\)/ ] . dels ":" .
+ dels "\"" . [ label "value" . store /[^"\n]*/ ] . dels "\"") (* " Relax, emacs *)
+
+let entry =
+ let qkey = [ label "key" . qstr ] in
+ let eq = del /[ \t]*=[ \t]*/ "=" in
+ let qstore = [ label "value" . qstr ] in
+ [ label "entry" . qkey . eq . (qstore|typed_val) . eol ]
+ |[label "anon" . del /"?@"?/ "@" . eq . (qstore|typed_val) .eol ]
+
+let section =
+ let ts = [ label "timestamp" . store Rx.integer ] in
+ [ label "section" . del /[ \t]*\[/ "[" .
+ store /[^]\n]+/ . dels "]" . (del_ws . ts)? . eol .
+ (entry|empty|comment)* ]
+
+let lns = header . (empty|comment)* . section*
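As an illustrative sketch (the section name and values are made up), a minimal wine-style registry file can be inspected with augparse; `= ?` prints the tree:

```
(* Illustrative only: header, one timestamped section, one string value *)
test Wine.lns get "WINE REGISTRY Version 2\n[Colors] 1234567890\n\"Version\"=\"win7\"\n" = ?
```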
--- /dev/null
+module Xendconfsxp =
+ autoload xfm
+
+let spc1 = /[ \t\n]+/
+let ws = del spc1 " "
+
+let lbrack = del /[ \t]*\([ \t\n]*/ "("
+let rbrack = del /[ \t\n]*\)/ ")"
+
+let empty_line = [ del /[ \t]*\n/ "\n" ]
+
+let no_ws_comment =
+ [ label "#comment" . del /#[ \t]*/ "# " . store /[^ \t]+[^\n]*/ . del /\n/ "\n" ]
+
+let standalone_comment = [ label "#scomment" . del /#/ "#" . store /.*/ . del /\n/ "\n" ]
+(* Minor bug: The initial whitespace is stored, not deleted. *)
+
+let ws_and_comment = ws . no_ws_comment
+
+(* Either a word or a quoted string *)
+let str_store = store /[A-Za-z0-9_.\/-]+|\"([^\"\\]|(\\.))*\"|'([^'\\]|(\\.))*'/
+
+let str = [ label "string" . str_store ]
+
+let var_name = key Rx.word
+
+let rec thing =
+ let array = [ label "array" . lbrack . Build.opt_list thing ws . ws_and_comment? . rbrack ] in
+ let str = [ label "item" . str_store ] in
+ str | array
+
+let sexpr = [ lbrack . var_name . ws . no_ws_comment? . thing . ws_and_comment? . rbrack ]
+
+let lns = ( empty_line | standalone_comment | sexpr ) *
+
+let filter = incl "xend-config.sxp"
+let xfm = transform lns filter
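For illustration (the setting name is just an example), a single s-expression can be inspected with augparse; `= ?` prints the resulting tree:

```
(* Illustrative only *)
test Xendconfsxp.lns get "(xend-port 8000)\n" = ?
```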
--- /dev/null
+(*
+ * Module: Xinetd
+ * Parses xinetd configuration files
+ *
+ * The structure of the lens and allowed attributes are ripped directly
+ * from xinetd's parser in xinetd/parse.c in xinetd's source checkout
+ * The downside of being so precise here is that if attributes are added
+ * they need to be added here, too. Writing a catchall entry, and getting
+ * to typecheck correctly would be a huge pain.
+ *
+ * A really enterprising soul could tighten this down even further by
+ * restricting the acceptable values for each attribute.
+ *
+ * Author: David Lutterkort
+ *)
+
+module Xinetd =
+ autoload xfm
+
+ let opt_spc = Util.del_opt_ws " "
+
+ let spc_equal = opt_spc . Sep.equal
+
+ let op = ([ label "add" . opt_spc . Util.del_str "+=" ]
+ |[ label "del" . opt_spc . Util.del_str "-=" ]
+ | spc_equal)
+
+ let value = store Rx.no_spaces
+
+ let indent = del Rx.opt_space "\t"
+
+ let attr_one (n:regexp) =
+ Build.key_value n Sep.space_equal value
+
+ let attr_lst (n:regexp) (op_eq: lens) =
+ let value_entry = [ label "value" . value ] in
+ Build.key_value n op_eq (opt_spc . Build.opt_list value_entry Sep.space)?
+
+ let attr_lst_eq (n:regexp) = attr_lst n spc_equal
+
+ let attr_lst_op (n:regexp) = attr_lst n op
+
+ (* Variable: service_attr
+ * Note:
+ * It is much faster to combine, for example, all the attr_one
+ * attributes into one regexp and pass that to a lens instead of
+ * using lens union (attr_one "a" | attr_one "b"|..) because the latter
+ * causes the type checker to work _very_ hard.
+ *)
+ let service_attr =
+ attr_one (/socket_type|protocol|wait|user|group|server|instances/i
+ |/rpc_version|rpc_number|id|port|nice|banner|bind|interface/i
+ |/per_source|groups|banner_success|banner_fail|disable|max_load/i
+ |/rlimit_as|rlimit_cpu|rlimit_data|rlimit_rss|rlimit_stack|v6only/i
+ |/deny_time|umask|mdns|libwrap/i)
+ (* redirect and cps aren't really lists, they take exactly two values *)
+ |attr_lst_eq (/server_args|log_type|access_times|type|flags|redirect|cps/i)
+ |attr_lst_op (/log_on_success|log_on_failure|only_from|no_access|env|passenv/i)
+
+ let default_attr =
+ attr_one (/instances|banner|bind|interface|per_source|groups/i
+ |/banner_success|banner_fail|max_load|v6only|umask|mdns/i)
+ |attr_lst_eq /cps/i (* really only two values, not a whole list *)
+ |attr_lst_op (/log_type|log_on_success|log_on_failure|disabled/i
+ |/no_access|only_from|passenv|enabled/i)
+
+ (* View: body
+ * Note:
+ * We would really like to say "the body can contain any of a list
+ * of a list of attributes, each of them at most once"; but that
+ * would require that we build a lens that matches the permutation
+ * of all attributes; with around 40 individual attributes, that's
+ * not computationally feasible, even if we didn't have to worry
+ * about how to write that down. The resulting regular expressions
+ * would simply be prohibitively large.
+ *)
+ let body (attr:lens) = Build.block_newlines_spc
+ (indent . attr . Util.eol)
+ Util.comment
+
+ (* View: includes
+ * Note:
+ * It would be nice if we could use the directories given in include and
+ * includedir directives to parse additional files instead of hardcoding
+ * all the places where xinetd config files can be found; but that is
+ * currently not possible, and implementing that has a good amount of
+ * hairy corner cases to consider.
+ *)
+ let includes =
+ Build.key_value_line /include(dir)?/ Sep.space (store Rx.no_spaces)
+
+ let service =
+ let sto_re = /[^# \t\n\/]+/ in
+ Build.key_value_line "service" Sep.space (store sto_re . body service_attr)
+
+ let defaults = [ key "defaults" . body default_attr . Util.eol ]
+
+ let lns = ( Util.empty | Util.comment | includes | defaults | service )*
+
+ let filter = incl "/etc/xinetd.d/*"
+ . incl "/etc/xinetd.conf"
+ . Util.stdexcl
+
+ let xfm = transform lns filter
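As a hedged sketch of the accepted syntax (illustrative only; `= ?` prints the tree under augparse):

```
(* Illustrative only: a minimal service block *)
test Xinetd.lns get "service ftp
{
    socket_type = stream
    wait = no
}
" = ?
```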
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+(* XML lens for Augeas
+ Author: Francis Giraldeau <francis.giraldeau@usherbrooke.ca>
+
+ Reference: http://www.w3.org/TR/2006/REC-xml11-20060816/
+*)
+
+module Xml =
+
+autoload xfm
+
+(************************************************************************
+ * Utilities lens
+ *************************************************************************)
+
+let dels (s:string) = del s s
+let spc = /[ \t\r\n]+/
+let osp = /[ \t\r\n]*/
+let sep_spc = del /[ \t\r\n]+/ " "
+let sep_osp = del /[ \t\r\n]*/ ""
+let sep_eq = del /[ \t\r\n]*=[ \t\r\n]*/ "="
+
+let nmtoken = /[a-zA-Z:_][a-zA-Z0-9:_.-]*/
+let word = /[a-zA-Z][a-zA-Z0-9._-]*/
+let char = /.|(\r?\n)/
+(* if we hide the quotes, then we can only accept single or double quotes *)
+(* otherwise a put ambiguity is raised *)
+let sto_dquote = dels "\"" . store /[^"]*/ . dels "\"" (* " *)
+let sto_squote = dels "'" . store /[^']*/ . dels "'"
+
+let comment = [ label "#comment" .
+ dels "<!--" .
+ store /([^-]|-[^-])*/ .
+ dels "-->" ]
+
+let pi_target = nmtoken - /[Xx][Mm][Ll]/
+let empty = Util.empty
+let del_end = del />[\r?\n]?/ ">\n"
+let del_end_simple = dels ">"
+
+(* This is a simplified version of a processing instruction:
+ * a PI must not start or end with whitespace, and the string
+ * must not contain "?>". We restrict too much by not allowing any
+ * "?" or ">" in a PI
+ *)
+let pi = /[^ \r\n\t]|[^ \r\n\t][^?>]*[^ \r\n\t]/
+
+(************************************************************************
+ * Attributes
+ *************************************************************************)
+
+
+let decl = [ label "#decl" . sep_spc .
+ store /[^> \t\n\r]|[^> \t\n\r][^>\t\n\r]*[^> \t\n\r]/ ]
+
+let decl_def (r:regexp) (b:lens) = [ dels "<" . key r .
+ sep_spc . store nmtoken .
+ b . sep_osp . del_end_simple ]
+
+let elem_def = decl_def /!ELEMENT/ decl
+
+let enum = "(" . osp . nmtoken . ( osp . "|" . osp . nmtoken )* . osp . ")"
+
+let att_type = /CDATA|ID|IDREF|IDREFS|ENTITY|ENTITIES|NMTOKEN|NMTOKENS/ |
+ enum
+
+let id_def = [ sep_spc . key /PUBLIC/ .
+ [ label "#literal" . sep_spc . sto_dquote ]* ] |
+ [ sep_spc . key /SYSTEM/ . sep_spc . sto_dquote ]
+
+let notation_def = decl_def /!NOTATION/ id_def
+
+let att_def = counter "att_id" .
+ [ sep_spc . seq "att_id" .
+ [ label "#name" . store word . sep_spc ] .
+ [ label "#type" . store att_type . sep_spc ] .
+ ([ key /#REQUIRED|#IMPLIED/ ] |
+ [ label "#FIXED" . del /#FIXED[ \r\n\t]*|/ "" . sto_dquote ]) ]*
+
+let att_list_def = decl_def /!ATTLIST/ att_def
+
+let entity_def =
+ let literal (lbl:string) = [ sep_spc . label lbl . sto_dquote ] in
+ decl_def /!ENTITY/
+ ( literal "#decl"
+ | [ sep_spc . key /SYSTEM/ . literal "#systemliteral" ]
+ | [ sep_spc . key /PUBLIC/ . literal "#pubidliteral"
+ . literal "#systemliteral" ] )
+
+let decl_def_item = elem_def | entity_def | att_list_def | notation_def
+
+let decl_outer = sep_osp . del /\[[ \n\t\r]*/ "[\n" .
+ (decl_def_item . sep_osp )* . dels "]"
+
+(* let dtd_def = [ sep_spc . key "SYSTEM" . sep_spc . sto_dquote ] *)
+
+let doctype = decl_def /!DOCTYPE/ (decl_outer|id_def)
+
+(* General shape of an attribute
+ * q is the regexp matching the quote character for the value
+ * qd is the default quote character
+ * brx is what the actual attribute value must match *)
+let attval (q:regexp) (qd:string) (brx:regexp) =
+ let quote = del q qd in
+ let body = store brx in
+ [ sep_spc . key nmtoken . sep_eq . square quote body quote ]
+
+(* We treat attributes according to one of the following three patterns:
+ attval1 : values that must be quoted with single quotes
+ attval2 : values that must be quoted with double quotes
+ attval3 : values that can be quoted with either *)
+let attributes =
+ let attval1 = attval "'" "'" /[^']*"[^']*/ in (* " *)
+ let attval2 = attval "\"" "\"" /[^"]*'[^"]*/ in
+ let attval3 = attval /['"]/ "\"" /(\\|[^'\"])*/ in (* " *)
+ [ label "#attribute" . (attval1|attval2|attval3)+ ]
+
+let prolog = [ label "#declaration" .
+ dels "<?xml" .
+ attributes .
+ sep_osp .
+ dels "?>" ]
+
+
+(************************************************************************
+ * Tags
+ *************************************************************************)
+
+(* we consider entities as simple text *)
+let text_re = /[^<]+/ - /([^<]*\]\]>[^<]*)/
+let text = [ label "#text" . store text_re ]
+let cdata = [ label "#CDATA" . dels "<![CDATA[" .
+ store (char* - (char* . "]]>" . char*)) . dels "]]>" ]
+
+(* the value of nmtoken_del is always the nmtoken_key string *)
+let nmtoken_key = key nmtoken
+let nmtoken_del = del nmtoken "a"
+
+let element (body:lens) =
+ let h = attributes? . sep_osp . dels ">" . body* . dels "</" in
+ [ dels "<" . square nmtoken_key h nmtoken_del . sep_osp . del_end ]
+
+let empty_element = [ dels "<" . nmtoken_key . value "#empty" .
+ attributes? . sep_osp . del /\/>[\r?\n]?/ "/>\n" ]
+
+let pi_instruction = [ dels "<?" . label "#pi" .
+ [ label "#target" . store pi_target ] .
+ [ sep_spc . label "#instruction" . store pi ]? .
+ sep_osp . del /\?>/ "?>" ]
+
+(* Typecheck is weaker on rec lens, detected by unfolding *)
+(*
+let content1 = element text
+let rec content2 = element (content1|text|comment)
+*)
+
+let rec content = element (text|comment|content|empty_element|pi_instruction|cdata)
+
+(* Constraints are weaker here, but it's better than being too strict *)
+let doc = (sep_osp . (prolog | comment | doctype | pi_instruction))* .
+ ((sep_osp . content) | (sep_osp . empty_element)) .
+ (sep_osp . (comment | pi_instruction ))* . sep_osp
+
+let lns = doc | Util.empty?
+
+let filter = (incl "/etc/xml/*.xml")
+ . (incl "/etc/xml/catalog")
+
+let xfm = transform lns filter
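For illustration (element names are made up), a declaration followed by nested elements can be inspected with augparse; `= ?` prints the tree:

```
(* Illustrative only *)
test Xml.lns get "<?xml version=\"1.0\"?>\n<a><b>text</b></a>\n" = ?
```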
--- /dev/null
+(*
+Module: Xorg
+ Parses /etc/X11/xorg.conf
+
+Authors: Raphael Pinson <raphink@gmail.com>
+ Matthew Booth <mbooth@redhat.com>
+
+About: Reference
+ This lens tries to stay as close as possible to `man xorg.conf`.
+
+The definitions from `man xorg.conf` are included as comments for reference
+throughout the file. More information can be found in the manual.
+
+About: License
+ This file is licensed under the LGPLv2+, like the rest of Augeas.
+
+About: Lens Usage
+ Sample usage of this lens in augtool
+
+ * Get the identifier of the devices with a "Clone" option:
+ > match "/files/etc/X11/xorg.conf/Device[Option = 'Clone']/Identifier"
+
+About: Configuration files
+ This lens applies to /etc/X11/xorg.conf. See <filter>.
+*)
+
+module Xorg =
+ autoload xfm
+
+(************************************************************************
+ * Group: USEFUL PRIMITIVES
+ *************************************************************************)
+
+(* Group: Generic primitives *)
+
+(* Variable: eol *)
+let eol = Util.eol
+
+(* Variable: to_eol
+ * Match everything from here to eol, cropping whitespace at both ends
+ *)
+let to_eol = /[^ \t\n](.*[^ \t\n])?/
+
+(* Variable: indent *)
+let indent = Util.indent
+
+(* Variable: comment *)
+let comment = Util.comment
+
+(* Variable: empty *)
+let empty = Util.empty
+
+
+(* Group: Separators *)
+
+(* Variable: sep_spc *)
+let sep_spc = Util.del_ws_spc
+
+(* Variable: sep_dquote *)
+let sep_dquote = Util.del_str "\""
+
+
+(* Group: Fields and values *)
+
+(* Variable: entries_re
+ * This is a list of all patterns which have specific handlers, and should
+ * therefore not be matched by the generic handler
+ *)
+let entries_re = /([oO]ption|[sS]creen|[iI]nput[dD]evice|[dD]river|[sS]ub[sS]ection|[dD]isplay|[iI]dentifier|[vV]ideo[rR]am|[dD]efault[dD]epth|[dD]evice)/
+
+(* Variable: generic_entry_re *)
+let generic_entry_re = /[^# \t\n\/]+/ - entries_re
+
+(* Variable: quoted_non_empty_string_val *)
+let quoted_non_empty_string_val = del "\"" "\"" . store /[^"\n]+/
+ . del "\"" "\""
+ (* " relax, emacs *)
+
+(* Variable: quoted_string_val *)
+let quoted_string_val = del "\"" "\"" . store /[^"\n]*/ . del "\"" "\""
+ (* " relax, emacs *)
+
+(* Variable: int *)
+let int = /[0-9]+/
+
+
+(************************************************************************
+ * Group: ENTRIES AND OPTIONS
+ *************************************************************************)
+
+
+(* View: entry_int
+ * This matches an entry which takes a single integer for an argument
+ *)
+let entry_int (canon:string) (re:regexp) =
+ [ indent . del re canon . label canon . sep_spc . store int . eol ]
+
+(* View: entry_rgb
+ * This matches an entry which takes 3 integers as arguments representing red,
+ * green and blue components
+ *)
+let entry_rgb (canon:string) (re:regexp) =
+ [ indent . del re canon . label canon
+ . [ label "red" . sep_spc . store int ]
+ . [ label "green" . sep_spc . store int ]
+ . [ label "blue" . sep_spc . store int ]
+ . eol ]
+
+(* View: entry_xy
+ * This matches an entry which takes 2 integers as arguments representing X and
+ * Y coordinates
+ *)
+let entry_xy (canon:string) (re:regexp) =
+ [ indent . del re canon . label canon
+ . [ label "x" . sep_spc . store int ]
+ . [ label "y" . sep_spc . store int ]
+ . eol ]
+
+(* View: entry_str
+ * This matches an entry which takes a single quoted string
+ *)
+let entry_str (canon:string) (re:regexp) =
+ [ indent . del re canon . label canon
+ . sep_spc . quoted_non_empty_string_val . eol ]
+
+(* View: entry_generic
+ * An entry without a specific handler. Store everything after the keyword,
+ * cropping whitespace at both ends.
+ *)
+let entry_generic = [ indent . key generic_entry_re
+ . sep_spc . store to_eol . eol ]
+
+(* View: option *)
+let option = [ indent . del /[oO]ption/ "Option" . label "Option" . sep_spc
+ . quoted_non_empty_string_val
+ . [ label "value" . sep_spc . quoted_string_val ]*
+ . eol ]
+
+(* View: screen
+ * The Screen entry of ServerLayout
+ *)
+let screen = [ indent . del /[sS]creen/ "Screen" . label "Screen"
+ . [ sep_spc . label "num" . store int ]?
+ . ( sep_spc . quoted_non_empty_string_val
+ . [ sep_spc . label "position" . store to_eol ]? )?
+ . eol ]
+
+(* View: input_device *)
+let input_device = [ indent . del /[iI]nput[dD]evice/ "InputDevice"
+ . label "InputDevice" . sep_spc
+ . quoted_non_empty_string_val
+ . [ label "option" . sep_spc
+ . quoted_non_empty_string_val ]*
+ . eol ]
+
+(* View: driver *)
+let driver = entry_str "Driver" /[dD]river/
+
+(* View: identifier *)
+let identifier = entry_str "Identifier" /[iI]dentifier/
+
+(* View: videoram *)
+let videoram = entry_int "VideoRam" /[vV]ideo[rR]am/
+
+(* View: default_depth *)
+let default_depth = entry_int "DefaultDepth" /[dD]efault[dD]epth/
+
+(* View: device *)
+let device = entry_str "Device" /[dD]evice/
+
+(************************************************************************
+ * Group: DISPLAY SUBSECTION
+ *************************************************************************)
+
+
+(* View: display_modes *)
+let display_modes = [ indent . del /[mM]odes/ "Modes" . label "Modes"
+ . [ label "mode" . sep_spc
+ . quoted_non_empty_string_val ]+
+ . eol ]
+
+(*************************************************************************
+ * View: display_entry
+ * Known values for entries in the Display subsection
+ *
+ * Definition:
+ * > Depth depth
+ * > FbBpp bpp
+ * > Weight red-weight green-weight blue-weight
+ * > Virtual xdim ydim
+ * > ViewPort x0 y0
+ * > Modes "mode-name" ...
+ * > Visual "visual-name"
+ * > Black red green blue
+ * > White red green blue
+ * > Options
+ *)
+
+let display_entry = entry_int "Depth" /[dD]epth/ |
+ entry_int "FbBpp" /[fF]b[bB]pp/ |
+ entry_rgb "Weight" /[wW]eight/ |
+ entry_xy "Virtual" /[vV]irtual/ |
+ entry_xy "ViewPort" /[vV]iew[pP]ort/ |
+ display_modes |
+ entry_str "Visual" /[vV]isual/ |
+ entry_rgb "Black" /[bB]lack/ |
+ entry_rgb "White" /[wW]hite/ |
+ entry_str "Options" /[oO]ptions/ |
+ empty |
+ comment
+
+(* View: display *)
+let display = [ indent . del "SubSection" "SubSection" . sep_spc
+ . sep_dquote . key "Display" . sep_dquote
+ . eol
+ . display_entry*
+ . indent . del "EndSubSection" "EndSubSection" . eol ]
+
+(************************************************************************
+ * Group: EXTMOD SUBSECTION
+ *************************************************************************)
+
+let extmod_entry = entry_str "Option" /[oO]ption/ |
+ empty |
+ comment
+
+let extmod = [ indent . del "SubSection" "SubSection" . sep_spc
+ . sep_dquote . key "extmod" . sep_dquote
+ . eol
+ . extmod_entry*
+ . indent . del "EndSubSection" "EndSubSection" . eol ]
+
+(************************************************************************
+ * Group: SECTIONS
+ *************************************************************************)
+
+
+(************************************************************************
+ * Variable: section_re
+ * Known values for Section names
+ *
+ * Definition:
+ * > The section names are:
+ * >
+ * > Files File pathnames
+ * > ServerFlags Server flags
+ * > Module Dynamic module loading
+ * > Extensions Extension Enabling
+ * > InputDevice Input device description
+ * > InputClass Input Class description
+ * > Device Graphics device description
+ * > VideoAdaptor Xv video adaptor description
+ * > Monitor Monitor description
+ * > Modes Video modes descriptions
+ * > Screen Screen configuration
+ * > ServerLayout Overall layout
+ * > DRI DRI-specific configuration
+ * > Vendor Vendor-specific configuration
+ *************************************************************************)
+let section_re = /(Extensions|Files|ServerFlags|Module|InputDevice|InputClass|Device|VideoAdaptor|Monitor|Modes|Screen|ServerLayout|DRI|Vendor)/
+
+
+(************************************************************************
+ * Variable: section_re_obsolete
+ * The following obsolete section names are still recognised for
+ * compatibility purposes. In new config files, the InputDevice
+ * section should be used instead.
+ *
+ * Definition:
+ * > Keyboard Keyboard configuration
+ * > Pointer Pointer/mouse configuration
+ *************************************************************************)
+let section_re_obsolete = /(Keyboard|Pointer)/
+
+(* View: section_entry *)
+let section_entry = option |
+ screen |
+ display |
+ extmod |
+ input_device |
+ driver |
+ identifier |
+ videoram |
+ default_depth |
+ device |
+ entry_generic |
+ empty | comment
+
+(************************************************************************
+ * View: section
+ * A section in xorg.conf
+ *
+ * Definition:
+ * > Section "SectionName"
+ * > SectionEntry
+ * > ...
+ * > EndSection
+ *************************************************************************)
+let section = [ indent . del "Section" "Section"
+ . sep_spc . sep_dquote
+ . key (section_re|section_re_obsolete) . sep_dquote
+ . eol
+ . section_entry*
+ . indent . del "EndSection" "EndSection" . eol ]
+
+(*
+ * View: lns
+ * The xorg.conf lens
+ *)
+let lns = ( empty | comment | section )*
+
+
+(* Variable: filter *)
+let filter = incl "/etc/X11/xorg.conf"
+ . incl "/etc/X11/xorg.conf.d/*.conf"
+ . Util.stdexcl
+
+let xfm = transform lns filter
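As an illustrative sketch (identifier and driver values are made up), a small Device section can be inspected with augparse; `= ?` prints the resulting tree:

```
(* Illustrative only *)
test Xorg.lns get "Section \"Device\"
    Identifier \"Card0\"
    Driver \"intel\"
EndSection
" = ?
```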
--- /dev/null
+(*
+Xymon configuration
+By Jason Kincl - 2012
+*)
+
+module Xymon =
+ autoload xfm
+
+let empty = Util.empty
+let eol = Util.eol
+let word = Rx.word
+let space = Rx.space
+let ip = Rx.ip
+let del_ws_spc = Util.del_ws_spc
+let value_to_eol = store /[^ \t][^\n]+/
+let eol_no_spc = Util.del_str "\n"
+
+let comment = Util.comment_generic /[ \t]*[;#][ \t]*/ "# "
+let include = [ key /include|dispinclude|netinclude|directory/ . del_ws_spc . value_to_eol . eol_no_spc ]
+let title = [ key "title" . del_ws_spc . value_to_eol . eol_no_spc ]
+
+(* Define host *)
+let tag = del_ws_spc . [ label "tag" . store /[^ \n\t]+/ ]
+let host_ip = [ label "ip" . store ip ]
+let host_hostname = [ label "fqdn" . store word ]
+let host_colon = del /[ \t]*#/ " #"
+let host = [ label "host" . host_ip . del space " " . host_hostname . host_colon . tag* . eol ]
+
+(* Define group-compress and group-only *)
+let group_extra = del_ws_spc . value_to_eol . eol_no_spc . (comment | empty | host | title)*
+let group = [ key "group" . group_extra ]
+let group_compress = [ key "group-compress" . group_extra ]
+let group_sorted = [ key "group-sorted" . group_extra ]
+
+let group_only_col = [ label "col" . store Rx.word ]
+let group_only_cols = del_ws_spc . group_only_col . ( Util.del_str "|" . group_only_col )*
+let group_only = [ key "group-only" . group_only_cols . group_extra ]
+
+(* Have to use namespacing because page's title overlaps plain title tag *)
+let page_name = store word
+let page_title = [ label "pagetitle" . del_ws_spc . value_to_eol . eol_no_spc ]
+let page_extra = del_ws_spc . page_name . (page_title | eol_no_spc) . (comment | empty | title | include | host)*
+ . (group | group_compress | group_sorted | group_only)*
+let page = [ key /page|subpage/ . page_extra ]
+
+let subparent_parent = [ label "parent" . store word ]
+let subparent = [ key "subparent" . del_ws_spc . subparent_parent . page_extra ]
+
+let ospage = [ key "ospage" . del_ws_spc . store word . del_ws_spc . [ label "ospagetitle" . value_to_eol . eol_no_spc ] ]
+
+let lns = (empty | comment | include | host | title | ospage )* . (group | group_compress | group_sorted | group_only)* . (page | subparent)*
+
+let filter = incl "/etc/xymon/hosts.cfg" . incl "/etc/xymon/pages.cfg"
+
+let xfm = transform lns filter
+
--- /dev/null
+(*
+Module: Xymon_Alerting
+ Parses xymon alerting files
+
+Author: Francois Maillard <fmaillard@gmail.com>
+
+About: Reference
+ This lens tries to stay as close as possible to `man 5 alerts.cfg`.
+
+About: License
+ This file is licensed under the LGPL v2+, like the rest of Augeas.
+
+About: Lens Usage
+ To be documented
+
+About: Not supported
+ File inclusions are not followed.
+
+About: Configuration files
+ This lens applies to /etc/xymon/alerts.d/*.cfg and /etc/xymon/alerts.cfg. See <filter>.
+
+About: Examples
+ The <Test_Xymon_Alerting> file contains various examples and tests.
+*)
+
+module Xymon_Alerting =
+ autoload xfm
+
+ (************************************************************************
+ * Group: USEFUL PRIMITIVES
+ *************************************************************************)
+
+ (* View: store_word *)
+ let store_word = store /[^ =\t\n#]+/
+
+ (* View: comparison
+    The greater-than and less-than operators *)
+ let comparison = store /[<>]/
+
+ (* View: equal *)
+ let equal = Sep.equal
+
+ (* View: ws *)
+ let ws = Sep.space
+
+ (* View: eol *)
+ let eol = Util.eol
+
+ (* View: ws_or_eol *)
+ let ws_or_eol = del /([ \t]+|[ \t]*\n[ \t]*)/ " "
+
+ (* View: comment *)
+ let comment = Util.comment
+
+ (* View: empty *)
+ let empty = Util.empty
+
+ (* View: include *)
+ let include = [ key "include" . ws . store_word . eol ]
+
+ (************************************************************************
+ * Group: MACRO DEFINITION
+ *************************************************************************)
+
+ (* View: macrodefinition
+ A string that starts with $ and that is assigned something *)
+ let macrodefinition = [ key /\$[^ =\t\n#\/]+/ . Sep.space_equal . store Rx.space_in . eol ]
+
+
+ (* View: flag
+ A flag value *)
+ let flag (kw:string) = Build.flag kw
+
+ (* View: kw_word
+ A key=value value *)
+ let kw_word (kw:regexp) = Build.key_value kw equal store_word
+
+ (************************************************************************
+ * Group: FILTERS
+ *************************************************************************)
+
+ (* View: page
+ The (ex)?page filter definition *)
+ let page = kw_word /(EX)?PAGE/
+
+ (* View: group
+ The (ex)?group filter definition *)
+ let group = kw_word /(EX)?GROUP/
+
+ (* View: host
+ The (ex)?host filter definition *)
+ let host = kw_word /(EX)?HOST/
+
+ (* View: service
+ The (ex)?service filter definition *)
+ let service = kw_word /(EX)?SERVICE/
+
+ (* View: color
+ The color filter definition *)
+ let color = kw_word "COLOR"
+
+ (* View: time
+ The time filter definition *)
+ let time = kw_word "TIME"
+
+ (* View: duration
+ The duration filter definition *)
+ let duration = [ key "DURATION" . [ label "operator" . comparison ] . [ label "value" . store_word ] ]
+ (* View: recover
+ The recover filter definition *)
+ let recover = flag "RECOVER"
+ (* View: notice
+ The notice filter definition *)
+ let notice = flag "NOTICE"
+
+ (* View: rule_filter
+ Filters are made out of any of the above filter definitions *)
+ let rule_filter = page | group | host | service
+ | color | time | duration | recover | notice
+
+ (* View: filters
+ One or more filters *)
+ let filters = [ label "filters" . Build.opt_list rule_filter ws ]
+
+ (* View: filters_opt
+ Zero, one or more filters *)
+ let filters_opt = [ label "filters" . (ws . Build.opt_list rule_filter ws)? ]
+
+ (* View: kw_word_filters_opt
+ A <kw_word> entry with optional filters *)
+ let kw_word_filters_opt (kw:string) = [ key kw . equal . store_word . filters_opt ]
+
+ (* View: flag_filters_opt
+ A <flag> with optional filters *)
+ let flag_filters_opt (kw:string) = [ key kw . filters_opt ]
+
+ (************************************************************************
+ * Group: RECIPIENTS
+ *************************************************************************)
+
+ (* View: mail
+ The mail recipient definition *)
+ let mail = [ key "MAIL" . ws . store_word . filters_opt ]
+
+ (* View: script
+ The script recipient definition *)
+ let script = [ key "SCRIPT" . ws . [ label "script" . store_word ]
+ . ws . [ label "recipient" . store_word ] . filters_opt ]
+
+ (* View: ignore
+ The ignore recipient definition *)
+ let ignore = flag_filters_opt "IGNORE"
+
+ (* View: format
+ The format recipient definition *)
+ let format = kw_word_filters_opt "FORMAT"
+
+ (* View: repeat
+ The repeat recipient definition *)
+ let repeat = kw_word_filters_opt "REPEAT"
+
+ (* View: unmatched
+ The unmatched recipient definition *)
+ let unmatched = flag_filters_opt "UNMATCHED"
+
+ (* View: stop
+ The stop recipient definition *)
+ let stop = flag_filters_opt "STOP"
+
+ (* View: macro
+ The macro recipient definition *)
+ let macro = [ key /\$[^ =\t\n#\/]+/ . filters_opt ]
+
+ (* View: recipient
+ Recipients are made out of any of the above recipient definitions *)
+ let recipient = mail | script | ignore | format | repeat | unmatched
+ | stop | macro
+
+ let recipients = [ label "recipients" . Build.opt_list recipient ws_or_eol ]
+
+
+ (************************************************************************
+ * Group: RULES
+ *************************************************************************)
+
+ (* View: rule
+      Rules are made of filters followed by recipients, separated by whitespace *)
+ let rule = [ seq "rules" . filters . ws_or_eol . recipients . eol ]
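+
+ (* Example of a rule that <rule> is intended to parse
+    (hypothetical alerts.cfg entry; COLOR=red here is a filter on the
+    MAIL recipient, which may continue on a following line):
+      HOST=www.example.com SERVICE=http
+           MAIL admin@example.com COLOR=red
+ *)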
+
+ (* View: lns
+ The Xymon_Alerting lens *)
+ let lns = ( rule | macrodefinition | include | empty | comment )*
+
+ (* Variable: filter *)
+ let filter = incl "/etc/xymon/alerts.d/*.cfg"
+ . incl "/etc/xymon/alerts.cfg"
+ . Util.stdexcl
+
+ let xfm = transform lns filter
+
--- /dev/null
+(*
+Module: Yaml
+ Only valid for the following subset:
+
+> defaults: &anchor
+> repo1: master
+>
+> host:
+> # Inheritance
+> <<: *anchor
+> repo2: branch
+
+Author: Dimitar Dimitrov <mitkofr@yahoo.fr>
+*)
+module YAML =
+
+(* Group: helpers *)
+let dash = Util.del_str "-"
+let colon = Sep.colon
+let space = Sep.space
+let val = store Rx.word
+let eol = Util.eol
+let empty = Util.empty
+let comment = Util.comment_noindent
+
+(*
+View: indent
+ the imposed indent is 2 spaces
+*)
+let indent = del /[ \t]+/ " "
+
+let mval = [ label "@mval" . Util.del_str "|-" . eol
+ . [ label "@line" . indent . store Rx.space_in . eol ]+ ]
+
+(*
+View: inherit
+> <<: *anchor
+*)
+let _inherit = [ key "<<" . colon . space . Util.del_str "*" . val . eol ]
+let inherit = indent . _inherit . (indent . comment)*
+
+(*
+View: repo
+> { "repo" = "branch" }
+*)
+let _repo = [ key Rx.word . colon . space . (val | mval) . eol ]
+let repo = indent . _repo . (indent . comment)*
+
+(*
+View: anchor
+> &anchor
+*)
+let anchor = Util.del_str "&" . val
+
+(*
+View: entry
+> host:
+> # Inheritance
+> <<: *anchor
+> repo2: branch
+*)
+let entry = [ key Rx.word . colon . (space . anchor)? . eol
+ . (indent . comment)*
+ . ((inherit . (repo+)?) | repo+)
+ ]
+
+(* View: top level sequence *)
+let sequence = [ label "@sequence" . counter "sequence" . dash . repo+ ]
+
+(* View: header *)
+let header = [ label "@yaml" . Util.del_str "---"
+ . (Sep.space . store Rx.space_in)? . eol ]
+
+(*
+View: lns
+ The yaml lens
+*)
+let lns = ((empty|comment)* . header)? . (sequence | entry | comment | empty)*
--- /dev/null
+(* Parsing yum's config files *)
+module Yum =
+ autoload xfm
+
+(************************************************************************
+ * INI File settings
+ *************************************************************************)
+
+let comment = IniFile.comment "#" "#"
+let sep = IniFile.sep "=" "="
+let empty = Util.empty
+let eol = IniFile.eol
+
+(************************************************************************
+ * ENTRY
+ *************************************************************************)
+
+let list_entry (list_key:string) =
+ let list_value = store /[^# \t\r\n,][^ \t\r\n,]*[^# \t\r\n,]|[^# \t\r\n,]/ in
+ let list_sep = del /([ \t]*(,[ \t]*|\r?\n[ \t]+))|[ \t]+/ "\n\t" in
+ [ key list_key . sep . Sep.opt_space . list_value ]
+ . (list_sep . Build.opt_list [ label list_key . list_value ] list_sep)?
+ . eol
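+
+(* For instance, given a hypothetical repo section containing
+     baseurl=http://example.com/os, http://mirror.example.com/os
+   <list_entry "baseurl"> splits on list_sep and yields one "baseurl"
+   node per URL. *)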
+
+let entry_re = IniFile.entry_re - ("baseurl" | "gpgkey" | "exclude")
+
+let entry = IniFile.entry entry_re sep comment
+ | empty
+
+let entries =
+ let list_entry_elem (k:string) = list_entry k . entry*
+ in entry*
+ | entry* . Build.combine_three_opt
+ (list_entry_elem "baseurl")
+ (list_entry_elem "gpgkey")
+ (list_entry_elem "exclude")
+
+
+(************************************************************************
+ * TITLE
+ *************************************************************************)
+let title = IniFile.title IniFile.record_re
+let record = [ title . entries ]
+
+
+(************************************************************************
+ * LENS & FILTER
+ *************************************************************************)
+let lns = (empty | comment)* . record*
+
+let filter = (incl "/etc/yum.conf")
+    . (incl "/etc/yum.repos.d/*.repo")
+    . (incl "/etc/yum/yum-cron*.conf")
+    . (incl "/etc/yum/pluginconf.d/*")
+    . (excl "/etc/yum/pluginconf.d/versionlock.list")
+    . (incl "/etc/dnf/dnf.conf")
+    . (incl "/etc/dnf/automatic.conf")
+    . (incl "/etc/dnf/plugins/*.conf")
+    . Util.stdexcl
+
+let xfm = transform lns filter
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+
+EXTRA_DIST=$(wildcard *.pod) $(wildcard *.[0-9])
+
+man1_MANS=augtool.1 augparse.1 augmatch.1 augprint.1
+
+%.1: %.pod
+ pod2man -c "Augeas" -r "Augeas $(VERSION)" $< > $@
--- /dev/null
+=head1 NAME
+
+augmatch - inspect and match contents of configuration files
+
+=head1 SYNOPSIS
+
+augmatch [OPTIONS] FILE
+
+=head1 DESCRIPTION
+
+B<augmatch> prints the tree that Augeas generates by parsing a
+configuration file, or only those parts of the tree that match a certain
+path expression. Parsing is controlled by lenses, many of which ship with
+Augeas. B<augmatch> tries to select the correct lens for a given file
+automatically unless one is specified with the B<--lens> option.
+
+=head1 OPTIONS
+
+=over 4
+
+=item B<-a>, B<--all>
+
+Print all tree nodes, even ones without an associated value. Without this
+flag, augmatch omits these nodes from the output as they are usually
+uninteresting.
+
+=item B<-e>, B<--exact>
+
+Only print the parts of the tree that exactly match the expression provided
+with B<--match> and not any of the descendants of matching nodes.
+
+=item B<-I>, B<--include>=I<DIR>
+
+Add DIR to the module loadpath. Can be given multiple times. The
+directories set here are searched before any directories specified in the
+AUGEAS_LENS_LIB environment variable, and before the default directories
+F</usr/share/augeas/lenses> and F</usr/share/augeas/lenses/dist>.
+
+=item B<-l>, B<--lens>=I<LENS>
+
+Use LENS for the given file; without this option, B<augmatch> tries to
+guess the lens for the file based on the file's name and path which only
+works for files in standard locations.
+
+=item B<-L>, B<--print-lens>
+
+Print the name of the lens that will be used with the given file and exit.
+
+=item B<-m>, B<--match>=I<EXPR>
+
+Only print the parts of the tree that match the path expression EXPR. All
+nodes that match EXPR and their descendants will be printed. Use B<--exact>
+to print only matching nodes but no descendants.
+
+=item B<-r>, B<--root>=I<ROOT>
+
+Use directory ROOT as the root of the filesystem. Takes precedence over a
+root set with the AUGEAS_ROOT environment variable.
+
+=item B<-S>, B<--nostdinc>
+
+Do not search any of the default directories for lenses. When this option
+is set, only directories specified explicitly with B<-I> or specified in
+B<AUGEAS_LENS_LIB> will be searched for modules.
+
+=item B<-o>, B<--only-value>
+
+Print only the value and not the label or the path of nodes.
+
+=item B<-q>, B<--quiet>
+
+Do not print anything. Exit with zero status if a match was found.
+
+=back
+
+=head1 ENVIRONMENT VARIABLES
+
+=over 4
+
+=item B<AUGEAS_ROOT>
+
+The file system root, defaults to '/'. Can be overridden with
+the B<-r> command line option.
+
+=item B<AUGEAS_LENS_LIB>
+
+Colon separated list of directories with lenses. Directories specified here
+are searched after any directories set with the B<-I> command line option,
+but before the default directories F</usr/share/augeas/lenses> and
+F</usr/share/augeas/lenses/dist>
+
+=back
+
+=head1 EXAMPLES
+
+ # print the tree for /etc/exports
+ augmatch /etc/exports
+
+ # show only the entry for a specific mount
+ augmatch -m 'dir["/home"]' /etc/exports
+
+ # show all the clients to which we are exporting /home
+ augmatch -eom 'dir["/home"]/client' /etc/exports
+
+=head1 EXIT STATUS
+
+The exit status is 0 when there was at least one match, 1 if there was no
+match, and 2 if an error occurred.
+
+=head1 FILES
+
+Lenses and schema definitions in F</usr/share/augeas/lenses> and
+F</usr/share/augeas/lenses/dist>
+
+=head1 AUTHOR
+
+David Lutterkort <lutter@watzmann.net>
+
+=head1 COPYRIGHT AND LICENSE
+
+Copyright 2007-2018 David Lutterkort
+
+Augeas (and augmatch) are distributed under the GNU Lesser General Public
+License (LGPL)
+
+=head1 SEE ALSO
+
+B<Augeas> project homepage L<http://www.augeas.net/>
+
+B<Augeas> path expressions L<https://github.com/hercules-team/augeas/wiki/Path-expressions>
+
+L<augprint>
--- /dev/null
+=head1 NAME
+
+augparse - execute an Augeas module
+
+=head1 SYNOPSIS
+
+augparse [OPTIONS] MODULE
+
+=head1 DESCRIPTION
+
+Execute an Augeas module, most commonly to evaluate the tests it contains.
+
+=head1 OPTIONS
+
+=over 4
+
+=item B<-I>, B<--include>=I<DIR>
+
+Add DIR to the module loadpath. Can be given multiple times. The
+directories set here are searched before any directories specified in the
+AUGEAS_LENS_LIB environment variable, and before the default directory
+F</usr/share/augeas/lenses>.
+
+=item B<-t>, B<--trace>
+
+Print a trace of the modules that are being loaded.
+
+=item B<--nostdinc>
+
+Do not search any of the default directories for modules. When this option
+is set, only directories specified explicitly with B<-I> or specified in
+B<AUGEAS_LENS_LIB> will be searched for modules.
+
+=item B<--notypecheck>
+
+Do not perform lens type checks. Only use this option during lens
+development and make sure you typecheck lenses when you are done developing
+- you should never use a lens that hasn't been typechecked. This option is
+sometimes useful when you are working on unit tests for a lens to speed up
+the time it takes to repeatedly run and fix tests.
+
+=item B<--version>
+
+Print version information and exit.
+
+=item B<-h>
+
+Display this help and exit.
+
+=back
+
+=head1 EXAMPLES
+
+To run the tests in F<lenses/tests/test_foo.aug> and use modules from the
+directory F<lenses>, run
+
+=over 4
+
+augparse -I lenses lenses/tests/test_foo.aug
+
+=back
+
+=head1 TESTS
+
+Tests can appear as top-level forms anywhere in a module. Generally, the
+tests for a module F<lenses/foo.aug> are kept in a separate file, usually
+in F<lenses/tests/test_foo.aug>.
+
+There are two different kinds of tests that Augeas can run: B<get> and
+B<put> tests. The syntax for B<get> tests is
+
+=over 4
+
+test LENS get STRING = RESULT
+
+=back
+
+which applies the I<get> direction of the lens LENS to STRING and compares
+it with the given RESULT. RESULT can either be a tree literal, the symbol
+B<?> to print the result of applying LENS to STRING, or the symbol B<*> to
+indicate that the test should produce an exception.
+
+The syntax for B<put> tests is
+
+=over 4
+
+test LENS put STRING after COMMANDS = RESULT
+
+=back
+
+which first applies the I<get> direction of the lens LENS to STRING, then
+applies the given COMMANDS to the resulting tree, and finally transforms
+the modified tree back to a string using the I<put> direction of LENS. The
+resulting string is then compared to RESULT, which can be a string, the
+symbol B<?> to print the result of applying LENS to STRING, or the symbol
+B<*> to indicate that the test should produce an exception.
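+
+For example, a B<get> test against the stock Hosts lens (assuming it is on
+the module loadpath) might look like
+
+=over 4
+
+ test Hosts.lns get "127.0.0.1 localhost\n" =
+   { "1" { "ipaddr" = "127.0.0.1" }
+         { "canonical" = "localhost" } }
+
+=back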
+
+=head1 AUTHOR
+
+David Lutterkort <lutter@watzmann.net>
+
+=head1 COPYRIGHT AND LICENSE
+
+Copyright 2007-2016 David Lutterkort
+
+Augeas (and augparse) are distributed under the GNU Lesser General Public
+License (LGPL)
+
+=head1 SEE ALSO
+
+B<Augeas> project homepage L<http://www.augeas.net/>
+
+L<augtool>
--- /dev/null
+# vim: expandtab
+=head1 NAME
+
+augprint - create an idempotent augtool script for a given file
+
+=head1 SYNOPSIS
+
+augprint [--pretty|-p] [--regexp[=n]|-r[n]] [--noseq|-s] [--verbose|-v] [--lens name|-l name] [--target /target|-t /target] FILE
+
+=head1 DESCRIPTION
+
+B<augprint> creates an augtool script for a given B<FILE>
+consisting primarily of C<set> commands.
+
+The resulting augtool script is designed to be idempotent, and
+will not result in any changes when applied to the original file.
+
+B<augprint> replaces each numbered location in the tree with
+a path-expression that uniquely identifies the position using the values
+I<within> that position.
+
+This makes the path-expression independent of the position-number,
+and thereby applicable to files in which the same data may exist at
+an alternate position-number.
+
+See "Examples" for sample output
+
+=head2 Regexp output
+
+By default B<augprint> produces path-expressions made up of simple equality C<=> comparisons
+
+ set /files/etc/hosts/seq::*[ipaddr='127.0.0.1']/ipaddr '127.0.0.1'
+
+The option B<--regexp> changes the output to produce regular expression comparisons
+
+ set /files/etc/hosts/seq::*[ipaddr=~regexp('127\\.0\\..*')]/ipaddr '127.0.0.1'
+
+The minimum length I<N> of the regular expression can be specified using C<--regexp=N>
+
+B<augprint> will choose a longer regular expression than I<N> if multiple values
+would match using the I<N> character regular expression.
+
+=head2 Limitations
+
+=head3 Append-only
+
+The output is based primarily on set operations.
+The set operation can only:
+
+a) change an existing value in-situ
+
+b) append a new value after the last position in the group
+
+This means that when an entry is re-created, it may not be in the same position as originally intended.
+For example, if the entry for C<192.0.2.3> does not already exist, it will be created as the I<last> entry in F</etc/hosts>.
+
+Often, such out-of-sequence entries will not matter to the resulting configuration file.
+If the order does matter, further manual editing of the C<augtool> script will be required.
+
+=head3 Repeated Values
+
+B<augprint> is not always successful in finding a path-expression which is unique to a position.
+In this case, B<augprint> appends a position index to the expression which is not unique.
+
+This occurs in particular if there are repeated values within the file.
+
+For an F</etc/hosts> file of
+
+ #------
+ 192.0.2.3 defaultdns
+ #------
+
+B<augprint> would produce the output
+
+ set /files/etc/hosts/#comment[.='--------'][1] '--------'
+ set /files/etc/hosts/seq::*[ipaddr='192.0.2.3']/ipaddr '192.0.2.3'
+ set /files/etc/hosts/seq::*[ipaddr='192.0.2.3']/canonical 'defaultdns'
+ set /files/etc/hosts/#comment[.='--------'][2] '--------'
+
+Notice how the C<#comment> paths have C<[1]> and C<[2]> respectively appended to the C<[expr]>.
+
+Other paths, which do have unique path-expressions, are not directly affected.
+
+
+=head1 OPTIONS
+
+=over 4
+
+=item B<-v>, B<--verbose>
+
+Include the original numbered paths as comments in the output
+
+=item B<-p>, B<--pretty>
+
+Create more readable output by adding spaces and empty lines
+
+=item B<-r>, B<-r>I<N>, B<--regexp>, B<--regexp>=I<N>
+
+Generate regular expressions to match values,
+using a minimum length of I<N> characters from the value
+
+I<N> can be omitted and defaults to 8
+
+=item B<-l>, B<--lens>=I<LENS>
+
+Use I<LENS> for the given file; without this option, B<augprint> uses the
+default lens for the file
+
+=item B<-t> I<targetfile>, B<--target>=I<targetfile>
+
+Generate the script for the I<FILE> specified as if its path were really I<targetfile>
+
+This will apply the lens corresponding to I<targetfile> to I<FILE>,
+and modify the resulting path-expressions of I<FILE> to correspond to I<targetfile>.
+
+I<targetfile> must be the full path name, starting with a '/'
+
+See "Examples" for how B<--target> can be used in practice
+
+=item B<-s>, B<--noseq>
+
+Do not use C<seq::*> in the output, use C<*> instead.
+For example
+
+ set /files/etc/hosts/*[ipaddr='127.0.0.1']/ipaddr '127.0.0.1'
+
+IMPORTANT: The resulting output will no longer I<create> a new entry
+for C<127.0.0.1> if none already exists. The C<--noseq> option exists so
+that the resulting paths can be used with Augeas versions prior to 1.13.0
+(subject to this limitation).
+
+=back
+
+=head1 EXAMPLES
+
+These examples use the following F</etc/hosts> file as the I<FILE>
+
+ 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
+ 192.0.2.3 dns-a
+ 192.0.2.4 dns-b
+
+The output from C<augtool 'print /files/etc/hosts'> would be
+
+ /files/etc/hosts
+ /files/etc/hosts/1
+ /files/etc/hosts/1/ipaddr = "127.0.0.1"
+ /files/etc/hosts/1/canonical = "localhost"
+ /files/etc/hosts/1/alias[1] = "localhost.localdomain"
+ /files/etc/hosts/1/alias[2] = "localhost4"
+ /files/etc/hosts/1/alias[3] = "localhost4.localdomain4"
+ /files/etc/hosts/2
+ /files/etc/hosts/2/ipaddr = "192.0.2.3"
+ /files/etc/hosts/2/canonical = "dns-a"
+ /files/etc/hosts/3
+ /files/etc/hosts/3/ipaddr = "192.0.2.4"
+ /files/etc/hosts/3/canonical = "dns-b"
+
+=head2 Default output
+
+C<augprint /etc/hosts>
+
+ set /files/etc/hosts/seq::*[ipaddr='127.0.0.1']/ipaddr '127.0.0.1'
+ set /files/etc/hosts/seq::*[ipaddr='127.0.0.1']/canonical 'localhost'
+ set /files/etc/hosts/seq::*[ipaddr='127.0.0.1']/alias[.='localhost.localdomain'] 'localhost.localdomain'
+ set /files/etc/hosts/seq::*[ipaddr='127.0.0.1']/alias[.='localhost4'] 'localhost4'
+ set /files/etc/hosts/seq::*[ipaddr='127.0.0.1']/alias[.='localhost4.localdomain4'] 'localhost4.localdomain4'
+ set /files/etc/hosts/seq::*[ipaddr='192.0.2.3']/ipaddr '192.0.2.3'
+ set /files/etc/hosts/seq::*[ipaddr='192.0.2.3']/canonical 'dns-a'
+ set /files/etc/hosts/seq::*[ipaddr='192.0.2.4']/ipaddr '192.0.2.4'
+ set /files/etc/hosts/seq::*[ipaddr='192.0.2.4']/canonical 'dns-b'
+
+=head2 Verbose output
+
+C<augprint --verbose /etc/hosts>
+
+ # /files/etc/hosts
+ # /files/etc/hosts/1
+ # /files/etc/hosts/1/ipaddr '127.0.0.1'
+ set /files/etc/hosts/seq::*[ipaddr='127.0.0.1']/ipaddr '127.0.0.1'
+ # /files/etc/hosts/1/canonical 'localhost'
+ set /files/etc/hosts/seq::*[ipaddr='127.0.0.1']/canonical 'localhost'
+ # /files/etc/hosts/1/alias[1] 'localhost.localdomain'
+ set /files/etc/hosts/seq::*[ipaddr='127.0.0.1']/alias[.='localhost.localdomain'] 'localhost.localdomain'
+ ...
+
+=head2 Regexp output
+
+C<augprint --regexp=4 /etc/hosts>
+
+ set /files/etc/hosts/seq::*[ipaddr=~regexp('127\\..*')]/ipaddr '127.0.0.1'
+ set /files/etc/hosts/seq::*[ipaddr=~regexp('127\\..*')]/canonical 'localhost'
+ set /files/etc/hosts/seq::*[ipaddr=~regexp('127\\..*')]/alias[.=~regexp('localhost\\..*')] 'localhost.localdomain'
+ set /files/etc/hosts/seq::*[ipaddr=~regexp('127\\..*')]/alias[.=~regexp('localhost4')] 'localhost4'
+ set /files/etc/hosts/seq::*[ipaddr=~regexp('127\\..*')]/alias[.=~regexp('localhost4\\..*')] 'localhost4.localdomain4'
+ set /files/etc/hosts/seq::*[ipaddr=~regexp('192\\.0\\.2\\.3')]/ipaddr '192.0.2.3'
+ set /files/etc/hosts/seq::*[ipaddr=~regexp('192\\.0\\.2\\.3')]/canonical 'dns-a'
+ set /files/etc/hosts/seq::*[ipaddr=~regexp('192\\.0\\.2\\.4')]/ipaddr '192.0.2.4'
+ set /files/etc/hosts/seq::*[ipaddr=~regexp('192\\.0\\.2\\.4')]/canonical 'dns-b'
+
+Note that although a I<minimum> length of 4 has been specified, B<augprint> will choose longer regular expressions
+as needed to ensure a unique match.
+
+=head2 Using --lens
+
+If a file is not associated with a lens by default, I<--lens lensname> can be used to specify a lens.
+
+When I<--lens> is specified, the output is prefixed with suitable C<transform> and C<load-file> statements,
+as required to complete the augtool script, and a I<setm> statement to exclude other autoloaded lenses.
+
+C<augprint --lens shellvars /etc/skel/.bashrc>
+
+ setm /augeas/load/*[incl='/etc/skel/.bashrc' and label() != 'shellvars']/excl '/etc/skel/.bashrc'
+ transform shellvars incl /etc/skel/.bashrc
+ load-file /etc/skel/.bashrc
+ set /files/etc/skel/.bashrc/#comment[.='.bashrc'] '.bashrc'
+ set /files/etc/skel/.bashrc/#comment[.='Source global definitions'] 'Source global definitions'
+ set /files/etc/skel/.bashrc/@if[.='[ -f /etc/bashrc ]'] '[ -f /etc/bashrc ]'
+ set /files/etc/skel/.bashrc/@if[.='[ -f /etc/bashrc ]']/.source '/etc/bashrc'
+ set /files/etc/skel/.bashrc/#comment[.='User specific environment'] 'User specific environment'
+ ...
+
+The lenses C<simplelines> and C<shellvars> are most commonly useful for
+files that do not have a specific lens.
+
+=head2 Using --target
+
+In order to prepare an augtool script for a given file, it may be preferable to
+copy the file to another location rather than editing the original file.
+
+The option I<--target> simplifies this process.
+
+a) copy F</etc/hosts> to a new location
+
+ cp /etc/hosts ~
+
+b) edit F<~/hosts> to suit
+
+ echo '192.0.2.7 defaultdns' >> ~/hosts
+
+c) Run C<augprint> as follows
+
+ augprint --target /etc/hosts ~/hosts
+
+d) Copy the relevant part of the output to an augtool script or other Augeas client
+
+ set /files/etc/hosts/seq::*[ipaddr='192.0.2.7']/ipaddr '192.0.2.7'
+ set /files/etc/hosts/seq::*[ipaddr='192.0.2.7']/canonical 'defaultdns'
+
+Notice that C<augprint> has generated paths corresponding to I<--target> (/etc/hosts) instead of the I<FILE> argument (~/hosts).
+
+
+
+=head1 ENVIRONMENT VARIABLES
+
+=over 4
+
+=item B<AUGEAS_ROOT>
+
+The effective file system root, defaults to '/'.
+
+=item B<AUGEAS_LENS_LIB>
+
+Colon separated list of directories with lenses. Directories specified here
+are searched before the default directories F</usr/share/augeas/lenses> and
+F</usr/share/augeas/lenses/dist>
+
+=back
+
+=head1 EXIT STATUS
+
+The exit status is 0 when the command was successful
+and 1 if any error occurred.
+
+=head1 FILES
+
+Lenses and schema definitions in F</usr/share/augeas/lenses> and
+F</usr/share/augeas/lenses/dist>
+
+=head1 AUTHOR
+
+George Hansper <george@hansper.id.au>
+
+=head1 COPYRIGHT AND LICENSE
+
+Copyright 2022 George Hansper
+
+Augeas (and augprint) are distributed under the GNU Lesser General Public
+License (LGPL), version 2.1
+
+=head1 SEE ALSO
+
+augtool(1)
+
+B<Augeas> project homepage L<https://www.augeas.net/>
+
+B<Augeas> path expressions L<https://github.com/hercules-team/augeas/wiki/Path-expressions>
--- /dev/null
+.\" Automatically generated by Pod::Man 4.14 (Pod::Simple 3.43)
+.\"
+.\" Standard preamble:
+.\" ========================================================================
+.de Sp \" Vertical space (when we can't use .PP)
+.if t .sp .5v
+.if n .sp
+..
+.de Vb \" Begin verbatim text
+.ft CW
+.nf
+.ne \\$1
+..
+.de Ve \" End verbatim text
+.ft R
+.fi
+..
+.\" Set up some character translations and predefined strings. \*(-- will
+.\" give an unbreakable dash, \*(PI will give pi, \*(L" will give a left
+.\" double quote, and \*(R" will give a right double quote. \*(C+ will
+.\" give a nicer C++. Capital omega is used to do unbreakable dashes and
+.\" therefore won't be available. \*(C` and \*(C' expand to `' in nroff,
+.\" nothing in troff, for use with C<>.
+.tr \(*W-
+.ds C+ C\v'-.1v'\h'-1p'\s-2+\h'-1p'+\s0\v'.1v'\h'-1p'
+.ie n \{\
+. ds -- \(*W-
+. ds PI pi
+. if (\n(.H=4u)&(1m=24u) .ds -- \(*W\h'-12u'\(*W\h'-12u'-\" diablo 10 pitch
+. if (\n(.H=4u)&(1m=20u) .ds -- \(*W\h'-12u'\(*W\h'-8u'-\" diablo 12 pitch
+. ds L" ""
+. ds R" ""
+. ds C` ""
+. ds C' ""
+'br\}
+.el\{\
+. ds -- \|\(em\|
+. ds PI \(*p
+. ds L" ``
+. ds R" ''
+. ds C`
+. ds C'
+'br\}
+.\"
+.\" Escape single quotes in literal strings from groff's Unicode transform.
+.ie \n(.g .ds Aq \(aq
+.el .ds Aq '
+.\"
+.\" If the F register is >0, we'll generate index entries on stderr for
+.\" titles (.TH), headers (.SH), subsections (.SS), items (.Ip), and index
+.\" entries marked with X<> in POD. Of course, you'll have to process the
+.\" output yourself in some meaningful fashion.
+.\"
+.\" Avoid warning from groff about undefined register 'F'.
+.de IX
+..
+.nr rF 0
+.if \n(.g .if rF .nr rF 1
+.if (\n(rF:(\n(.g==0)) \{\
+. if \nF \{\
+. de IX
+. tm Index:\\$1\t\\n%\t"\\$2"
+..
+. if !\nF==2 \{\
+. nr % 0
+. nr F 2
+. \}
+. \}
+.\}
+.rr rF
+.\"
+.\" Accent mark definitions (@(#)ms.acc 1.5 88/02/08 SMI; from UCB 4.2).
+.\" Fear. Run. Save yourself. No user-serviceable parts.
+. \" fudge factors for nroff and troff
+.if n \{\
+. ds #H 0
+. ds #V .8m
+. ds #F .3m
+. ds #[ \f1
+. ds #] \fP
+.\}
+.if t \{\
+. ds #H ((1u-(\\\\n(.fu%2u))*.13m)
+. ds #V .6m
+. ds #F 0
+. ds #[ \&
+. ds #] \&
+.\}
+. \" simple accents for nroff and troff
+.if n \{\
+. ds ' \&
+. ds ` \&
+. ds ^ \&
+. ds , \&
+. ds ~ ~
+. ds /
+.\}
+.if t \{\
+. ds ' \\k:\h'-(\\n(.wu*8/10-\*(#H)'\'\h"|\\n:u"
+. ds ` \\k:\h'-(\\n(.wu*8/10-\*(#H)'\`\h'|\\n:u'
+. ds ^ \\k:\h'-(\\n(.wu*10/11-\*(#H)'^\h'|\\n:u'
+. ds , \\k:\h'-(\\n(.wu*8/10)',\h'|\\n:u'
+. ds ~ \\k:\h'-(\\n(.wu-\*(#H-.1m)'~\h'|\\n:u'
+. ds / \\k:\h'-(\\n(.wu*8/10-\*(#H)'\z\(sl\h'|\\n:u'
+.\}
+. \" troff and (daisy-wheel) nroff accents
+.ds : \\k:\h'-(\\n(.wu*8/10-\*(#H+.1m+\*(#F)'\v'-\*(#V'\z.\h'.2m+\*(#F'.\h'|\\n:u'\v'\*(#V'
+.ds 8 \h'\*(#H'\(*b\h'-\*(#H'
+.ds o \\k:\h'-(\\n(.wu+\w'\(de'u-\*(#H)/2u'\v'-.3n'\*(#[\z\(de\v'.3n'\h'|\\n:u'\*(#]
+.ds d- \h'\*(#H'\(pd\h'-\w'~'u'\v'-.25m'\f2\(hy\fP\v'.25m'\h'-\*(#H'
+.ds D- D\\k:\h'-\w'D'u'\v'-.11m'\z\(hy\v'.11m'\h'|\\n:u'
+.ds th \*(#[\v'.3m'\s+1I\s-1\v'-.3m'\h'-(\w'I'u*2/3)'\s-1o\s+1\*(#]
+.ds Th \*(#[\s+2I\s-2\h'-\w'I'u*3/5'\v'-.3m'o\v'.3m'\*(#]
+.ds ae a\h'-(\w'a'u*4/10)'e
+.ds Ae A\h'-(\w'A'u*4/10)'E
+. \" corrections for vroff
+.if v .ds ~ \\k:\h'-(\\n(.wu*9/10-\*(#H)'\s-2\u~\d\s+2\h'|\\n:u'
+.if v .ds ^ \\k:\h'-(\\n(.wu*10/11-\*(#H)'\v'-.4m'^\v'.4m'\h'|\\n:u'
+. \" for low resolution devices (crt and lpr)
+.if \n(.H>23 .if \n(.V>19 \
+\{\
+. ds : e
+. ds 8 ss
+. ds o a
+. ds d- d\h'-1'\(ga
+. ds D- D\h'-1'\(hy
+. ds th \o'bp'
+. ds Th \o'LP'
+. ds ae ae
+. ds Ae AE
+.\}
+.rm #[ #] #H #V #F C
+.\" ========================================================================
+.\"
+.IX Title "AUGTOOL 1"
+.TH AUGTOOL 1 "2022-10-24" "Augeas 1.13.0" "Augeas"
+.\" For nroff, turn off justification. Always turn off hyphenation; it makes
+.\" way too many mistakes in technical documents.
+.if n .ad l
+.nh
+.SH "NAME"
+augtool \- inspect and modify configuration files
+.SH "SYNOPSIS"
+.IX Header "SYNOPSIS"
+augtool [\s-1OPTIONS\s0] [\s-1COMMAND\s0]
+.SH "DESCRIPTION"
+.IX Header "DESCRIPTION"
+Augeas is a configuration editing tool. It parses configuration files
+in their native formats and transforms them into a tree. Configuration
+changes are made by manipulating this tree and saving it back into
+native config files.
+.PP
+augtool provides a command line interface to the generated tree. \s-1COMMAND\s0
+can be a single command as described under \*(L"\s-1COMMANDS\*(R"\s0. When called with
+no \s-1COMMAND,\s0 it reads commands from standard input until an end-of-file is
+encountered.
+.SH "OPTIONS"
+.IX Header "OPTIONS"
+.IP "\fB\-c\fR, \fB\-\-typecheck\fR" 4
+.IX Item "-c, --typecheck"
+Typecheck lenses. This can be very slow, and is therefore not done by
+default, but is highly recommended during development.
+.IP "\fB\-b\fR, \fB\-\-backup\fR" 4
+.IX Item "-b, --backup"
+When files are changed, preserve the originals in a file with extension
+\&'.augsave'
+.IP "\fB\-n\fR, \fB\-\-new\fR" 4
+.IX Item "-n, --new"
+Save changes in files with extension '.augnew', do not modify the original
+files
+.IP "\fB\-r\fR, \fB\-\-root\fR=\fI\s-1ROOT\s0\fR" 4
+.IX Item "-r, --root=ROOT"
+Use directory \s-1ROOT\s0 as the root of the filesystem. Takes precedence over a
+root set with the \s-1AUGEAS_ROOT\s0 environment variable.
+.IP "\fB\-I\fR, \fB\-\-include\fR=\fI\s-1DIR\s0\fR" 4
+.IX Item "-I, --include=DIR"
+Add \s-1DIR\s0 to the module loadpath. Can be given multiple times. The
+directories set here are searched before any directories specified in the
+\&\s-1AUGEAS_LENS_LIB\s0 environment variable, and before the default directories
+\&\fI/usr/share/augeas/lenses\fR and \fI/usr/share/augeas/lenses/dist\fR.
+.IP "\fB\-t\fR, \fB\-\-transform\fR=\fI\s-1XFM\s0\fR" 4
+.IX Item "-t, --transform=XFM"
+Add a file transform; uses the 'transform' command syntax,
+e.g. \f(CW\*(C`\-t \*(AqFstab incl /etc/fstab.bak\*(Aq\*(C'\fR.
+.IP "\fB\-l\fR, \fB\-\-load\-file\fR=\fI\s-1FILE\s0\fR" 4
+.IX Item "-l, --load-file=FILE"
+Load an individual \s-1FILE\s0 into the tree. The lens to use is determined
+automatically (based on autoload information in the lenses) and will be the
+same that is used for this file when the entire tree is loaded. The option
+can be specified multiple times to load several files, e.g. \f(CW\*(C`\-l /etc/fstab
+\&\-l /etc/hosts\*(C'\fR. This option implies \f(CW\*(C`\-\-noload\*(C'\fR so that only the files
+specified with this option will be loaded.
+.IP "\fB\-f\fR, \fB\-\-file\fR=\fI\s-1FILE\s0\fR" 4
+.IX Item "-f, --file=FILE"
+Read commands from \s-1FILE.\s0
+.IP "\fB\-i\fR, \fB\-\-interactive\fR" 4
+.IX Item "-i, --interactive"
+Read commands from the terminal. When combined with \fB\-f\fR or redirection of
+stdin, drop into an interactive session after executing the commands from
+the file.
+.IP "\fB\-e\fR, \fB\-\-echo\fR" 4
+.IX Item "-e, --echo"
+When reading commands from a file via stdin, echo the commands before
+printing their output.
+.IP "\fB\-s\fR, \fB\-\-autosave\fR" 4
+.IX Item "-s, --autosave"
+Automatically save all changes at the end of the session.
+.IP "\fB\-S\fR, \fB\-\-nostdinc\fR" 4
+.IX Item "-S, --nostdinc"
+Do not search any of the default directories for modules. When this option
+is set, only directories specified explicitly with \fB\-I\fR or specified in
+\&\fB\s-1AUGEAS_LENS_LIB\s0\fR will be searched for modules.
+.IP "\fB\-L\fR, \fB\-\-noload\fR" 4
+.IX Item "-L, --noload"
+Do not load any files on startup. This is generally used to fine-tune which
+files to load by modifying the entries in \f(CW\*(C`/augeas/load\*(C'\fR and then issuing
+a \f(CW\*(C`load\*(C'\fR command.
+.IP "\fB\-A\fR, \fB\-\-noautoload\fR" 4
+.IX Item "-A, --noautoload"
+Do not load any lens modules, and therefore no files, on startup. This
+creates no entries under \f(CW\*(C`/augeas/load\*(C'\fR whatsoever; to read any files,
+they need to be set up manually and loading must be initiated with a
+\&\f(CW\*(C`load\*(C'\fR command. Using this option gives the fastest startup.
+.IP "\fB\-\-span\fR" 4
+.IX Item "--span"
+Load span positions for nodes in the tree, as they relate to the original
+file. Enables the use of the \fBspan\fR command to retrieve position data.
+.IP "\fB\-\-timing\fR" 4
+.IX Item "--timing"
+After executing each command, print how long, in milliseconds, executing
+the command took. This makes it easier to spot slow queries, usually
+through \fBmatch\fR commands, and allows exploring alternative queries that
+yield the same result but might be faster.
+.IP "\fB\-\-version\fR" 4
+.IX Item "--version"
+Print version information and exit. The version is also in the tree under
+\&\f(CW\*(C`/augeas/version\*(C'\fR.
+.SH "COMMANDS"
+.IX Header "COMMANDS"
+In interactive mode, commands and paths can be completed by pressing \f(CW\*(C`TAB\*(C'\fR.
+.PP
+The paths accepted as arguments by commands use a small subset of XPath
+path expressions. A path expression consists of a number of segments,
+separated by \f(CW\*(C`/\*(C'\fR. In each segment, the character \f(CW\*(C`*\*(C'\fR can be used to match
+every node regardless of its label. Sibling nodes with identical labels can
+be distinguished by appending \f(CW\*(C`[N]\*(C'\fR to their label to match the N\-th
+sibling with such a label. The last sibling with a specific label can be
+reached as \f(CW\*(C`[last()]\*(C'\fR. See \*(L"\s-1EXAMPLES\*(R"\s0 for some examples of this.
+.SS "\s-1ADMIN COMMANDS\s0"
+.IX Subsection "ADMIN COMMANDS"
+The following commands control the behavior of Augeas and augtool itself.
+.IP "\fBhelp\fR" 4
+.IX Item "help"
+Print this help text
+.IP "\fBload\fR" 4
+.IX Item "load"
+Load files according to the transforms in \f(CW\*(C`/augeas/load\*(C'\fR.
+.IP "\fBquit\fR" 4
+.IX Item "quit"
+Exit the program
+.IP "\fBretrieve\fR <\s-1LENS\s0> <\s-1NODE_IN\s0> <\s-1PATH\s0> <\s-1NODE_OUT\s0>" 4
+.IX Item "retrieve <LENS> <NODE_IN> <PATH> <NODE_OUT>"
+Transform tree at \s-1PATH\s0 back into text using lens \s-1LENS\s0 and store the
+resulting string at \s-1NODE_OUT.\s0 Assume that the tree was initially read in
+with the same lens and the string stored at \s-1NODE_IN\s0 as input.
+.IP "\fBsave\fR" 4
+.IX Item "save"
+Save all pending changes to disk. Unless either the \fB\-b\fR or \fB\-n\fR
+command line options are given, files are changed in place.
+.IP "\fBstore\fR <\s-1LENS\s0> <\s-1NODE\s0> <\s-1PATH\s0>" 4
+.IX Item "store <LENS> <NODE> <PATH>"
+Parse \s-1NODE\s0 using \s-1LENS\s0 and store the resulting tree at \s-1PATH.\s0
+.IP "\fBtransform\fR <\s-1LENS\s0> <\s-1FILTER\s0> <\s-1FILE\s0>" 4
+.IX Item "transform <LENS> <FILTER> <FILE>"
+Add a transform for \s-1FILE\s0 using \s-1LENS.\s0 The \s-1LENS\s0 may be a module name or a
+full lens name. If a module name is given, then \*(L"lns\*(R" will be the lens
+assumed. The \s-1FILTER\s0 must be either \*(L"incl\*(R" or \*(L"excl\*(R". If the filter is
+\&\*(L"incl\*(R", the \s-1FILE\s0 will be parsed by the \s-1LENS.\s0 If the filter is \*(L"excl\*(R",
+the \s-1FILE\s0 will be excluded from the \s-1LENS. FILE\s0 may contain wildcards.
+.IP "\fBload-file\fR <\s-1FILE\s0>" 4
+.IX Item "load-file <FILE>"
+Load a specific \s-1FILE,\s0 automatically determining the proper lens from the
+information in \f(CW\*(C`/augeas/load\*(C'\fR; without further intervention, the lens that
+would ordinarily be used for this file will be used.
+.SS "\s-1READ COMMANDS\s0"
+.IX Subsection "READ COMMANDS"
+The following commands are used to retrieve data from the Augeas tree.
+.IP "\fBdump-xml\fR \fI[<\s-1PATH\s0>]\fR" 4
+.IX Item "dump-xml [<PATH>]"
+Print entries in the tree as \s-1XML.\s0 If \s-1PATH\s0 is given, printing starts there,
+otherwise the whole tree is printed.
+.IP "\fBget\fR <\s-1PATH\s0>" 4
+.IX Item "get <PATH>"
+Print the value associated with \s-1PATH\s0
+.IP "\fBlabel\fR <\s-1PATH\s0>" 4
+.IX Item "label <PATH>"
+Get and print the label associated with \s-1PATH\s0
+.IP "\fBls\fR <\s-1PATH\s0>" 4
+.IX Item "ls <PATH>"
+List the direct children of \s-1PATH\s0
+.IP "\fBmatch\fR <\s-1PATTERN\s0> [<\s-1VALUE\s0>]" 4
+.IX Item "match <PATTERN> [<VALUE>]"
+Find all paths that match \s-1PATTERN.\s0 If \s-1VALUE\s0 is given, only the matching
+paths whose value equals \s-1VALUE\s0 are printed
+.IP "\fBprint\fR \fI[<\s-1PATH\s0>]\fR" 4
+.IX Item "print [<PATH>]"
+Print entries in the tree. If \s-1PATH\s0 is given, printing starts there,
+otherwise the whole tree is printed
+.IP "\fBspan\fR <\s-1PATH\s0>" 4
+.IX Item "span <PATH>"
+Print the name of the file from which the node \s-1PATH\s0 was generated, as well
+as information about the positions in the file corresponding to the label,
+the value, and the entire node. \s-1PATH\s0 must match exactly one node.
+.Sp
+You need to run 'set /augeas/span enable' prior to loading files to enable
+recording of span information. It is disabled by default.
+.SS "\s-1WRITE COMMANDS\s0"
+.IX Subsection "WRITE COMMANDS"
+The following commands are used to modify the Augeas tree.
+.IP "\fBclear\fR <\s-1PATH\s0>" 4
+.IX Item "clear <PATH>"
+Set the value for \s-1PATH\s0 to \s-1NULL.\s0 If \s-1PATH\s0 is not in the tree yet, it and all
+its ancestors will be created.
+.IP "\fBclearm\fR <\s-1BASE\s0> <\s-1SUB\s0>" 4
+.IX Item "clearm <BASE> <SUB>"
+Clear the values of multiple nodes in one operation. Find or create a node matching \s-1SUB\s0
+by interpreting \s-1SUB\s0 as a path expression relative to each node matching
+\&\s-1BASE.\s0 If \s-1SUB\s0 is '.', the nodes matching \s-1BASE\s0 will be modified.
+.IP "\fBins\fR \fI<\s-1LABEL\s0>\fR \fI<\s-1WHERE\s0>\fR \fI<\s-1PATH\s0>\fR" 4
+.IX Item "ins <LABEL> <WHERE> <PATH>"
+Insert a new node with label \s-1LABEL\s0 right before or after \s-1PATH\s0 into the
+tree. \s-1WHERE\s0 must be either 'before' or 'after'.
+.IP "\fBinsert\fR \fI<\s-1LABEL\s0>\fR \fI<\s-1WHERE\s0>\fR \fI<\s-1PATH\s0>\fR" 4
+.IX Item "insert <LABEL> <WHERE> <PATH>"
+Alias of \fBins\fR.
+.IP "\fBmv\fR <\s-1SRC\s0> <\s-1DST\s0>" 4
+.IX Item "mv <SRC> <DST>"
+Move node \s-1SRC\s0 to \s-1DST. SRC\s0 must match exactly one node in the tree. \s-1DST\s0
+must either match exactly one node in the tree, or may not exist yet. If
+\&\s-1DST\s0 exists already, it and all its descendants are deleted. If \s-1DST\s0 does not
+exist yet, it and all its missing ancestors are created.
+.IP "\fBmove\fR <\s-1SRC\s0> <\s-1DST\s0>" 4
+.IX Item "move <SRC> <DST>"
+Alias of \fBmv\fR.
+.IP "\fBcp\fR <\s-1SRC\s0> <\s-1DST\s0>" 4
+.IX Item "cp <SRC> <DST>"
+Copy node \s-1SRC\s0 to \s-1DST. SRC\s0 must match exactly one node in the tree. \s-1DST\s0
+must either match exactly one node in the tree, or may not exist yet. If
+\&\s-1DST\s0 exists already, it and all its descendants are deleted. If \s-1DST\s0 does not
+exist yet, it and all its missing ancestors are created.
+.IP "\fBcopy\fR <\s-1SRC\s0> <\s-1DST\s0>" 4
+.IX Item "copy <SRC> <DST>"
+Alias of \fBcp\fR.
+.IP "\fBrename\fR <\s-1SRC\s0> <\s-1LBL\s0>" 4
+.IX Item "rename <SRC> <LBL>"
+Rename the label of all nodes matching \s-1SRC\s0 to \s-1LBL.\s0
+.IP "\fBrm\fR <\s-1PATH\s0>" 4
+.IX Item "rm <PATH>"
+Delete \s-1PATH\s0 and all its children from the tree
+.IP "\fBset\fR <\s-1PATH\s0> <\s-1VALUE\s0>" 4
+.IX Item "set <PATH> <VALUE>"
+Associate \s-1VALUE\s0 with \s-1PATH.\s0 If \s-1PATH\s0 is not in the tree yet,
+it and all its ancestors will be created.
+.IP "\fBsetm\fR <\s-1BASE\s0> <\s-1SUB\s0> [<\s-1VALUE\s0>]" 4
+.IX Item "setm <BASE> <SUB> [<VALUE>]"
+Set multiple nodes in one operation. Find or create a node matching \s-1SUB\s0 by
+interpreting \s-1SUB\s0 as a path expression relative to each node matching
+\&\s-1BASE.\s0 If \s-1SUB\s0 is '.', the nodes matching \s-1BASE\s0 will be modified.
+.IP "\fBtouch\fR <\s-1PATH\s0>" 4
+.IX Item "touch <PATH>"
+Create \s-1PATH\s0 with the value \s-1NULL\s0 if it is not in the tree yet. All its
+ancestors will also be created. These new tree entries will appear
+last amongst their siblings.
+.SS "\s-1PATH EXPRESSION COMMANDS\s0"
+.IX Subsection "PATH EXPRESSION COMMANDS"
+The following commands help when working with path expressions.
+.IP "\fBdefnode\fR <\s-1NAME\s0> <\s-1EXPR\s0> [<\s-1VALUE\s0>]" 4
+.IX Item "defnode <NAME> <EXPR> [<VALUE>]"
+Define the variable \s-1NAME\s0 to the result of evaluating \s-1EXPR,\s0 which must be a
+nodeset. If no node matching \s-1EXPR\s0 exists yet, one is created and \s-1NAME\s0 will
+refer to it. If \s-1VALUE\s0 is given, this is the same as 'set \s-1EXPR VALUE\s0'; if
+\&\s-1VALUE\s0 is not given, the node is created as if by 'clear \s-1EXPR\s0', and
+\&\s-1NAME\s0 refers to that node.
+.IP "\fBdefvar\fR <\s-1NAME\s0> <\s-1EXPR\s0>" 4
+.IX Item "defvar <NAME> <EXPR>"
+Define the variable \s-1NAME\s0 to the result of evaluating \s-1EXPR.\s0 The variable
+can be used in path expressions as \f(CW$NAME\fR. Note that \s-1EXPR\s0 is evaluated when
+the variable is defined, not when it is used.
+.SH "ENVIRONMENT VARIABLES"
+.IX Header "ENVIRONMENT VARIABLES"
+.IP "\fB\s-1AUGEAS_ROOT\s0\fR" 4
+.IX Item "AUGEAS_ROOT"
+The file system root, defaults to '/'. Can be overridden with
+the \fB\-r\fR command line option
+.IP "\fB\s-1AUGEAS_LENS_LIB\s0\fR" 4
+.IX Item "AUGEAS_LENS_LIB"
+Colon-separated list of directories with lenses. Directories specified here
+are searched after any directories set with the \fB\-I\fR command line option,
+but before the default directories \fI/usr/share/augeas/lenses\fR and
+\&\fI/usr/share/augeas/lenses/dist\fR
+.SH "DIAGNOSTICS"
+.IX Header "DIAGNOSTICS"
+Normally, exit status is 0. If one or more commands fail, the exit status
+is set to a non-zero value.
+.PP
+Note though that failure to load some of the files specified by transforms
+in \f(CW\*(C`/augeas/load\*(C'\fR is not considered a failure. If it is important to know
+that all files were loaded, you need to issue a \f(CW\*(C`match /augeas//error\*(C'\fR
+after loading to find out details about what files could not be loaded and
+why.
+.SH "EXAMPLES"
+.IX Header "EXAMPLES"
+.Vb 2
+\& # command line mode
+\& augtool print /files/etc/hosts/
+\&
+\& # interactive mode
+\& augtool
+\& augtool> help
+\& augtool> print /files/etc/hosts/
+\&
+\& # Print the third entry from the second AcceptEnv line
+\& augtool print \*(Aq/files/etc/ssh/sshd_config/AcceptEnv[2]/3\*(Aq
+\&
+\& # Find the entry in inittab with action \*(Aqinitdefault\*(Aq
+\& augtool> match /files/etc/inittab/*/action initdefault
+\&
+\& # Print the last alias for each entry in /etc/hosts
+\& augtool> print /files/etc/hosts/*/alias[last()]
+.Ve
+.SH "FILES"
+.IX Header "FILES"
+Lenses and schema definitions in \fI/usr/share/augeas/lenses\fR and
+\&\fI/usr/share/augeas/lenses/dist\fR
+.SH "AUTHOR"
+.IX Header "AUTHOR"
+David Lutterkort <lutter@watzmann.net>
+.SH "COPYRIGHT AND LICENSE"
+.IX Header "COPYRIGHT AND LICENSE"
+Copyright 2007\-2016 David Lutterkort
+.PP
+Augeas (and augtool) are distributed under the \s-1GNU\s0 Lesser General Public
+License (\s-1LGPL\s0)
+.SH "SEE ALSO"
+.IX Header "SEE ALSO"
+\&\fBAugeas\fR project homepage <http://www.augeas.net/>
+.PP
+\&\fBAugeas\fR path expressions <https://github.com/hercules\-team/augeas/wiki/Path\-expressions>
+.PP
+augparse
+.PP
+augprint
--- /dev/null
+=head1 NAME
+
+augtool - inspect and modify configuration files
+
+=head1 SYNOPSIS
+
+augtool [OPTIONS] [COMMAND]
+
+=head1 DESCRIPTION
+
+Augeas is a configuration editing tool. It parses configuration files
+in their native formats and transforms them into a tree. Configuration
+changes are made by manipulating this tree and saving it back into
+native config files.
+
+augtool provides a command line interface to the generated tree. COMMAND
+can be a single command as described under L</COMMANDS>. When called with
+no COMMAND, it reads commands from standard input until an end-of-file is
+encountered.
+
+=head1 OPTIONS
+
+=over 4
+
+=item B<-c>, B<--typecheck>
+
+Typecheck lenses. This can be very slow, and is therefore not done by
+default, but is highly recommended during development.
+
+=item B<-b>, B<--backup>
+
+When files are changed, preserve the originals in a file with extension
+'.augsave'
+
+=item B<-n>, B<--new>
+
+Save changes in files with extension '.augnew', do not modify the original
+files
+
+=item B<-r>, B<--root>=I<ROOT>
+
+Use directory ROOT as the root of the filesystem. Takes precedence over a
+root set with the AUGEAS_ROOT environment variable.
+
+=item B<-I>, B<--include>=I<DIR>
+
+Add DIR to the module loadpath. Can be given multiple times. The
+directories set here are searched before any directories specified in the
+AUGEAS_LENS_LIB environment variable, and before the default directories
+F</usr/share/augeas/lenses> and F</usr/share/augeas/lenses/dist>.
+
+=item B<-t>, B<--transform>=I<XFM>
+
+Add a file transform; uses the 'transform' command syntax,
+e.g. C<-t 'Fstab incl /etc/fstab.bak'>.
+
+=item B<-l>, B<--load-file>=I<FILE>
+
+Load an individual FILE into the tree. The lens to use is determined
+automatically (based on autoload information in the lenses) and will be the
+same that is used for this file when the entire tree is loaded. The option
+can be specified multiple times to load several files, e.g. C<-l /etc/fstab
+-l /etc/hosts>. This option implies C<--noload> so that only the files
+specified with this option will be loaded.
+
+=item B<-f>, B<--file>=I<FILE>
+
+Read commands from FILE.
+
+=item B<-i>, B<--interactive>
+
+Read commands from the terminal. When combined with B<-f> or redirection of
+stdin, drop into an interactive session after executing the commands from
+the file.
+
+=item B<-e>, B<--echo>
+
+When reading commands from a file via stdin, echo the commands before
+printing their output.
+
+=item B<-s>, B<--autosave>
+
+Automatically save all changes at the end of the session.
+
+=item B<-S>, B<--nostdinc>
+
+Do not search any of the default directories for modules. When this option
+is set, only directories specified explicitly with B<-I> or specified in
+B<AUGEAS_LENS_LIB> will be searched for modules.
+
+=item B<-L>, B<--noload>
+
+Do not load any files on startup. This is generally used to fine-tune which
+files to load by modifying the entries in C</augeas/load> and then issuing
+a C<load> command.
+
+=item B<-A>, B<--noautoload>
+
+Do not load any lens modules, and therefore no files, on startup. This
+creates no entries under C</augeas/load> whatsoever; to read any files,
+they need to be set up manually and loading must be initiated with a
+C<load> command. Using this option gives the fastest startup.
+
+=item B<--span>
+
+Load span positions for nodes in the tree, as they relate to the original
+file. Enables the use of the B<span> command to retrieve position data.
+
+=item B<--timing>
+
+After executing each command, print how long, in milliseconds, executing
+the command took. This makes it easier to spot slow queries, usually
+through B<match> commands, and allows exploring alternative queries that
+yield the same result but might be faster.
+
+=item B<--version>
+
+Print version information and exit. The version is also in the tree under
+C</augeas/version>.
+
+=back
+
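+A rough sketch (not from the original manual) of how several of the
+options above combine; augtool is assumed to be installed, and the
+scratch directory is purely illustrative:
+
+```shell
+# Work against a scratch root so the real /etc is never touched.
+mkdir -p /tmp/aug-root/etc
+cp /etc/hosts /tmp/aug-root/etc/hosts
+
+# -r sets the filesystem root, -l loads only this one file,
+# -s saves any changes automatically at the end of the session.
+augtool -r /tmp/aug-root -l /etc/hosts -s <<'EOF'
+print /files/etc/hosts
+EOF
+```
+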
+=head1 COMMANDS
+
+In interactive mode, commands and paths can be completed by pressing C<TAB>.
+
+The paths accepted as arguments by commands use a small subset of XPath
+path expressions. A path expression consists of a number of segments,
+separated by C</>. In each segment, the character C<*> can be used to match
+every node regardless of its label. Sibling nodes with identical labels can
+be distinguished by appending C<[N]> to their label to match the N-th
+sibling with such a label. The last sibling with a specific label can be
+reached as C<[last()]>. See L</EXAMPLES> for some examples of this.
+
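+To make the path syntax concrete, a few illustrative queries; the node
+labels shown assume the stock Hosts lens:
+
+```shell
+augtool match '/files/etc/hosts/*'         # every entry, whatever its label
+augtool match '/files/etc/hosts/1/alias'   # all aliases of the first entry
+augtool get '/files/etc/hosts/1/ipaddr'    # value of exactly one node
+augtool print '/files/etc/hosts/*/alias[last()]'   # last alias per entry
+```
+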
+=head2 ADMIN COMMANDS
+
+The following commands control the behavior of Augeas and augtool itself.
+
+=over 4
+
+=item B<help>
+
+Print this help text
+
+=item B<load>
+
+Load files according to the transforms in C</augeas/load>.
+
+=item B<quit>
+
+Exit the program
+
+=item B<retrieve> E<lt>LENSE<gt> E<lt>NODE_INE<gt> E<lt>PATHE<gt> E<lt>NODE_OUTE<gt>
+
+Transform tree at PATH back into text using lens LENS and store the
+resulting string at NODE_OUT. Assume that the tree was initially read in
+with the same lens and the string stored at NODE_IN as input.
+
+=item B<save>
+
+Save all pending changes to disk. Unless either the B<-b> or B<-n>
+command line options are given, files are changed in place.
+
+=item B<store> E<lt>LENSE<gt> E<lt>NODEE<gt> E<lt>PATHE<gt>
+
+Parse NODE using LENS and store the resulting tree at PATH.
+
+=item B<transform> E<lt>LENSE<gt> E<lt>FILTERE<gt> E<lt>FILEE<gt>
+
+Add a transform for FILE using LENS. The LENS may be a module name or a
+full lens name. If a module name is given, then "lns" will be the lens
+assumed. The FILTER must be either "incl" or "excl". If the filter is
+"incl", the FILE will be parsed by the LENS. If the filter is "excl",
+the FILE will be excluded from the LENS. FILE may contain wildcards.
+
+=item B<load-file> E<lt>FILEE<gt>
+
+Load a specific FILE, automatically determining the proper lens from the
+information in C</augeas/load>; without further intervention, the lens that
+would ordinarily be used for this file will be used.
+
+=back
+
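+As a hedged sketch of how the admin commands fit together (the stock
+Fstab lens is assumed to be installed; B<-A> starts with nothing
+autoloaded, so the transform and load are explicit):
+
+```shell
+augtool -A <<'EOF'
+transform Fstab incl /etc/fstab
+load
+print /files/etc/fstab
+EOF
+```
+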
+=head2 READ COMMANDS
+
+The following commands are used to retrieve data from the Augeas tree.
+
+=over 4
+
+=item B<dump-xml> I<[E<lt>PATHE<gt>]>
+
+Print entries in the tree as XML. If PATH is given, printing starts there,
+otherwise the whole tree is printed.
+
+=item B<get> E<lt>PATHE<gt>
+
+Print the value associated with PATH
+
+=item B<label> E<lt>PATHE<gt>
+
+Get and print the label associated with PATH
+
+=item B<ls> E<lt>PATHE<gt>
+
+List the direct children of PATH
+
+=item B<match> E<lt>PATTERNE<gt> [E<lt>VALUEE<gt>]
+
+Find all paths that match PATTERN. If VALUE is given, only the matching
+paths whose value equals VALUE are printed
+
+=item B<print> I<[E<lt>PATHE<gt>]>
+
+Print entries in the tree. If PATH is given, printing starts there,
+otherwise the whole tree is printed
+
+=item B<span> E<lt>PATHE<gt>
+
+Print the name of the file from which the node PATH was generated, as well
+as information about the positions in the file corresponding to the label,
+the value, and the entire node. PATH must match exactly one node.
+
+You need to run 'set /augeas/span enable' prior to loading files to enable
+recording of span information. It is disabled by default.
+
+=back
+
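+A couple of illustrative read invocations (assuming augtool is
+installed):
+
+```shell
+# --span records positions at load time, so the span command works
+# without the manual 'set /augeas/span enable' step.
+augtool --span span /files/etc/hosts/1
+
+# match with a VALUE argument: hosts entries whose canonical name
+# is 'localhost'.
+augtool match '/files/etc/hosts/*/canonical' localhost
+```
+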
+=head2 WRITE COMMANDS
+
+The following commands are used to modify the Augeas tree.
+
+=over 4
+
+=item B<clear> E<lt>PATHE<gt>
+
+Set the value for PATH to NULL. If PATH is not in the tree yet, it and all
+its ancestors will be created.
+
+=item B<clearm> E<lt>BASEE<gt> E<lt>SUBE<gt>
+
+Clear the values of multiple nodes in one operation. Find or create a node matching SUB
+by interpreting SUB as a path expression relative to each node matching
+BASE. If SUB is '.', the nodes matching BASE will be modified.
+
+=item B<ins> I<E<lt>LABELE<gt>> I<E<lt>WHEREE<gt>> I<E<lt>PATHE<gt>>
+
+Insert a new node with label LABEL right before or after PATH into the
+tree. WHERE must be either 'before' or 'after'.
+
+=item B<insert> I<E<lt>LABELE<gt>> I<E<lt>WHEREE<gt>> I<E<lt>PATHE<gt>>
+
+Alias of B<ins>.
+
+=item B<mv> E<lt>SRCE<gt> E<lt>DSTE<gt>
+
+Move node SRC to DST. SRC must match exactly one node in the tree. DST
+must either match exactly one node in the tree, or may not exist yet. If
+DST exists already, it and all its descendants are deleted. If DST does not
+exist yet, it and all its missing ancestors are created.
+
+=item B<move> E<lt>SRCE<gt> E<lt>DSTE<gt>
+
+Alias of B<mv>.
+
+=item B<cp> E<lt>SRCE<gt> E<lt>DSTE<gt>
+
+Copy node SRC to DST. SRC must match exactly one node in the tree. DST
+must either match exactly one node in the tree, or may not exist yet. If
+DST exists already, it and all its descendants are deleted. If DST does not
+exist yet, it and all its missing ancestors are created.
+
+=item B<copy> E<lt>SRCE<gt> E<lt>DSTE<gt>
+
+Alias of B<cp>.
+
+=item B<rename> E<lt>SRCE<gt> E<lt>LBLE<gt>
+
+Rename the label of all nodes matching SRC to LBL.
+
+=item B<rm> E<lt>PATHE<gt>
+
+Delete PATH and all its children from the tree
+
+=item B<set> E<lt>PATHE<gt> E<lt>VALUEE<gt>
+
+Associate VALUE with PATH. If PATH is not in the tree yet,
+it and all its ancestors will be created.
+
+=item B<setm> E<lt>BASEE<gt> E<lt>SUBE<gt> [E<lt>VALUEE<gt>]
+
+Set multiple nodes in one operation. Find or create a node matching SUB by
+interpreting SUB as a path expression relative to each node matching
+BASE. If SUB is '.', the nodes matching BASE will be modified.
+
+=item B<touch> E<lt>PATHE<gt>
+
+Create PATH with the value NULL if it is not in the tree yet. All its
+ancestors will also be created. These new tree entries will appear
+last amongst their siblings.
+
+=back
+
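+A sketch combining the write commands; with B<-n> the changes land in
+'.augnew' files, so the originals stay untouched ('devbox' is a name
+invented for this example):
+
+```shell
+augtool -n <<'EOF'
+# [last()+1] appends a new alias node to the first hosts entry.
+set /files/etc/hosts/1/alias[last()+1] devbox
+save
+EOF
+```
+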
+=head2 PATH EXPRESSION COMMANDS
+
+The following commands help when working with path expressions.
+
+=over 4
+
+=item B<defnode> E<lt>NAMEE<gt> E<lt>EXPRE<gt> [E<lt>VALUEE<gt>]
+
+Define the variable NAME to the result of evaluating EXPR, which must be a
+nodeset. If no node matching EXPR exists yet, one is created and NAME will
+refer to it. If VALUE is given, this is the same as 'set EXPR VALUE'; if
+VALUE is not given, the node is created as if by 'clear EXPR', and
+NAME refers to that node.
+
+=item B<defvar> E<lt>NAMEE<gt> E<lt>EXPRE<gt>
+
+Define the variable NAME to the result of evaluating EXPR. The variable
+can be used in path expressions as $NAME. Note that EXPR is evaluated when
+the variable is defined, not when it is used.
+
+=back
+
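+A short sketch of how variables save retyping long paths ($hosts and
+$new are names invented for this example):
+
+```shell
+augtool <<'EOF'
+defvar hosts /files/etc/hosts
+print $hosts/1
+# defnode also creates the node when the expression matches nothing.
+defnode new $hosts/1/alias[last()+1] devbox
+print $new
+EOF
+```
+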
+=head1 ENVIRONMENT VARIABLES
+
+=over 4
+
+=item B<AUGEAS_ROOT>
+
+The file system root, defaults to '/'. Can be overridden with
+the B<-r> command line option
+
+=item B<AUGEAS_LENS_LIB>
+
+Colon-separated list of directories with lenses. Directories specified here
+are searched after any directories set with the B<-I> command line option,
+but before the default directories F</usr/share/augeas/lenses> and
+F</usr/share/augeas/lenses/dist>
+
+=back
+
+=head1 DIAGNOSTICS
+
+Normally, exit status is 0. If one or more commands fail, the exit status
+is set to a non-zero value.
+
+Note though that failure to load some of the files specified by transforms
+in C</augeas/load> is not considered a failure. If it is important to know
+that all files were loaded, you need to issue a C<match /augeas//error>
+after loading to find out details about what files could not be loaded and
+why.
+
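+The post-load error check described above can be scripted as a sketch
+(what, if anything, is printed depends on which files failed to load):
+
+```shell
+augtool <<'EOF'
+match /augeas//error
+print /augeas//error
+EOF
+```
+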
+=head1 EXAMPLES
+
+ # command line mode
+ augtool print /files/etc/hosts/
+
+ # interactive mode
+ augtool
+ augtool> help
+ augtool> print /files/etc/hosts/
+
+ # Print the third entry from the second AcceptEnv line
+ augtool print '/files/etc/ssh/sshd_config/AcceptEnv[2]/3'
+
+ # Find the entry in inittab with action 'initdefault'
+ augtool> match /files/etc/inittab/*/action initdefault
+
+ # Print the last alias for each entry in /etc/hosts
+ augtool> print /files/etc/hosts/*/alias[last()]
+
+=head1 FILES
+
+Lenses and schema definitions in F</usr/share/augeas/lenses> and
+F</usr/share/augeas/lenses/dist>
+
+=head1 AUTHOR
+
+David Lutterkort <lutter@watzmann.net>
+
+=head1 COPYRIGHT AND LICENSE
+
+Copyright 2007-2016 David Lutterkort
+
+Augeas (and augtool) are distributed under the GNU Lesser General Public
+License (LGPL)
+
+=head1 SEE ALSO
+
+B<Augeas> project homepage L<http://www.augeas.net/>
+
+B<Augeas> path expressions L<https://github.com/hercules-team/augeas/wiki/Path-expressions>
+
+L<augparse>
+
+L<augprint>
--- /dev/null
+diff --git a/src/Makefile.am b/src/Makefile.am
+index e64fb31..30745fb 100644
+--- a/src/Makefile.am
++++ b/src/Makefile.am
+@@ -7,7 +7,7 @@ AM_YFLAGS=-d -p spec_
+
+ EXTRA_DIST = try augeas_sym.version fa_sym.version
+
+-BUILT_SOURCES = datadir.h
++BUILT_SOURCES = datadir.h parser.h
+
+ DISTCLEANFILES = datadir.h
+
--- /dev/null
+diff --git a/bootstrap b/bootstrap
+index a84eb39..1ed9c48 100755
+--- a/bootstrap
++++ b/bootstrap
+@@ -1,3 +1,4 @@
++
+ #!/bin/sh
+
+ usage() {
+@@ -33,33 +34,6 @@ do
+ esac
+ done
+
+-# Get gnulib files.
+-
+-case ${GNULIB_SRCDIR--} in
+--)
+- echo "$0: getting gnulib files..."
+- git submodule init || exit $?
+- git submodule update || exit $?
+- GNULIB_SRCDIR=.gnulib
+- ;;
+-*)
+- # Redirect the gnulib submodule to the directory on the command line
+- # if possible.
+- if test -d "$GNULIB_SRCDIR"/.git && \
+- git config --file .gitmodules submodule.gnulib.url >/dev/null; then
+- git submodule init
+- GNULIB_SRCDIR=`cd $GNULIB_SRCDIR && pwd`
+- git config --replace-all submodule.gnulib.url $GNULIB_SRCDIR
+- echo "$0: getting gnulib files..."
+- git submodule update || exit $?
+- GNULIB_SRCDIR=.gnulib
+- else
+- echo >&2 "$0: invalid gnulib srcdir: $GNULIB_SRCDIR"
+- exit 1
+- fi
+- ;;
+-esac
+-
+ gnulib_tool=$GNULIB_SRCDIR/gnulib-tool
+ <$gnulib_tool || exit
+
--- /dev/null
+<manifest>
+ <request>
+ <domain name="_"/>
+ </request>
+</manifest>
--- /dev/null
+%define _unpackaged_files_terminate_build 0
+%define debug_package %{nil}
+
+Name: augeas
+Version: 1.14.1
+Release: 0
+Summary: A library for changing configuration files
+License: LGPL-2.1+
+Group: System/Libraries
+Url: http://augeas.net/
+Source0: http://augeas.net/download/augeas-%{version}.tar.gz
+Source1: baselibs.conf
+Source2: gnulib.tar.gz
+Source3: Fix_bootstrap_for_build.patch
+Source4: Add_parser.h_to_Makefile.patch
+Source1001: augeas.manifest
+
+BuildRequires: glib2-devel
+BuildRequires: libxml2-devel
+BuildRequires: readline-devel
+BuildRequires: flex
+BuildRequires: bison
+
+%description
+A library for programmatically editing configuration files. Augeas parses
+configuration files into a tree structure, which it exposes through its
+public API. Changes made through the API are written back to the initially
+read files.
+
+The transformation works very hard to preserve comments and formatting
+details. It is controlled by ``lens'' definitions that describe the file
+format and the transformation into a tree.
+
+%package devel
+Summary: A library for changing configuration files
+Group: System/Development
+Requires: %{name}-libs = %{version}
+Requires: pkgconfig
+
+%description devel
+The %{name}-devel package contains libraries and header files for
+developing applications that use %{name}.
+
+
+%package libs
+Summary: Libraries for %{name}
+Group: System/Libraries
+
+%description libs
+The libraries for %{name}.
+
+Augeas is a library for programmatically editing configuration files. It parses
+configuration files into a tree structure, which it exposes through its
+public API. Changes made through the API are written back to the initially
+read files.
+
+%prep
+%setup -q
+cp %{SOURCE1001} .
+tar -zxvf %{SOURCE2}
+%{__patch} -p1 < %{SOURCE3}
+%{__patch} -p1 < %{SOURCE4}
+
+%build
+./autogen.sh --gnulib-srcdir=%{_builddir}/%{?buildsubdir}/gnulib --prefix=%{_prefix} --disable-static --libdir=%{_libdir}
+make %{?_smp_mflags}
+
+%install
+rm -rf $RPM_BUILD_ROOT
+make install DESTDIR=$RPM_BUILD_ROOT INSTALL="%{__install} -p"
+find $RPM_BUILD_ROOT -name '*.la' -exec rm -f {} ';'
+
+# The tests/ subdirectory contains lenses used only for testing, and
+# so it shouldn't be packaged.
+rm -r $RPM_BUILD_ROOT%{_datadir}/augeas/lenses/dist/tests
+
+%remove_docs
+
+%check
+#pushd ./gnulib/tests/
+#chmod +x *.sh
+#popd
+#%{__make} check || exit 0
+
+%post libs -p /sbin/ldconfig
+
+%postun libs -p /sbin/ldconfig
+
+%files
+%manifest %{name}.manifest
+%{_bindir}/augtool
+%{_bindir}/augparse
+%{_bindir}/fadot
+%{_datadir}/vim/vimfiles/syntax/augeas.vim
+%{_datadir}/vim/vimfiles/ftdetect/augeas.vim
+
+%files libs
+%manifest %{name}.manifest
+# %{_datadir}/augeas and %{_datadir}/augeas/lenses are owned
+# by filesystem.
+%{_datadir}/augeas/lenses/dist
+%{_libdir}/*.so.*
+%license COPYING
+
+%files devel
+%manifest %{name}.manifest
+%{_includedir}/*
+%{_libdir}/*.so
+%{_libdir}/pkgconfig/augeas.pc
--- /dev/null
+libaugeas
+augeas-devel
+ requires -augeas-<targettype>
+ requires "libaugeas-<targettype> = <version>"
--- /dev/null
+GNULIB= ../gnulib/lib/libgnu.la
+GNULIB_CFLAGS= -I $(top_builddir)/gnulib/lib -I $(top_srcdir)/gnulib/lib
+
+AM_CFLAGS = @AUGEAS_CFLAGS@ @WARN_CFLAGS@ $(GNULIB_CFLAGS) $(LIBXML_CFLAGS)
+
+AM_YFLAGS=-d -p spec_
+
+EXTRA_DIST = try augeas_sym.version fa_sym.version
+
+BUILT_SOURCES = datadir.h
+
+DISTCLEANFILES = datadir.h
+
+lib_LTLIBRARIES = libfa.la libaugeas.la
+noinst_LTLIBRARIES = liblexer.la
+
+bin_PROGRAMS = augtool augparse augmatch augprint
+
+include_HEADERS = augeas.h fa.h
+
+libaugeas_la_SOURCES = augeas.h augeas.c augrun.c pathx.c \
+ internal.h internal.c \
+ memory.h memory.c ref.h ref.c \
+ syntax.c syntax.h parser.y builtin.c lens.c lens.h regexp.c regexp.h \
+ transform.h transform.c ast.c get.c put.c list.h \
+ info.c info.h errcode.c errcode.h jmt.h jmt.c xml.c
+
+if USE_VERSION_SCRIPT
+ AUGEAS_VERSION_SCRIPT = $(VERSION_SCRIPT_FLAGS)$(srcdir)/augeas_sym.version
+ FA_VERSION_SCRIPT = $(VERSION_SCRIPT_FLAGS)$(srcdir)/fa_sym.version
+else
+ AUGEAS_VERSION_SCRIPT =
+ FA_VERSION_SCRIPT =
+endif
+
+libaugeas_la_LDFLAGS = $(AUGEAS_VERSION_SCRIPT) \
+ -version-info $(LIBAUGEAS_VERSION_INFO)
+libaugeas_la_LIBADD = liblexer.la libfa.la $(LIB_SELINUX) $(LIBXML_LIBS) $(GNULIB)
+
+augtool_SOURCES = augtool.c
+augtool_LDADD = libaugeas.la $(READLINE_LIBS) $(LIBXML_LIBS) $(GNULIB)
+
+augparse_SOURCES = augparse.c
+augparse_LDADD = libaugeas.la $(LIBXML_LIBS) $(GNULIB)
+
+augmatch_SOURCES = augmatch.c
+augmatch_LDADD = libaugeas.la $(LIBXML_LIBS) $(GNULIB)
+
+augprint_SOURCES = augprint.c augprint.h
+augprint_LDADD = libaugeas.la $(GNULIB)
+
+libfa_la_SOURCES = fa.c fa.h hash.c hash.h memory.c memory.h ref.h ref.c
+libfa_la_LIBADD = $(LIB_SELINUX) $(GNULIB)
+libfa_la_LDFLAGS = $(FA_VERSION_SCRIPT) -version-info $(LIBFA_VERSION_INFO)
+
+liblexer_la_SOURCES = lexer.l
+liblexer_la_CFLAGS = $(AM_CFLAGS) -Wno-error
+
+FAILMALLOC_START ?= 1
+FAILMALLOC_REP ?= 20
+FAILMALLOC_PROG ?= ./augtool
+
+include $(top_srcdir)/Makefile.inc
+
+# Generate datadir.h. AUGEAS_LENS_DIR in internal.h depends on
+# the value of DATADIR
+internal.h: datadir.h
+
+FORCE-datadir.h: Makefile
+ echo '#define DATADIR "$(datadir)"' > datadir.h1
+ $(top_srcdir)/build/ac-aux/move-if-change datadir.h1 datadir.h
+
+datadir.h: FORCE-datadir.h
+
+#bashcompdir = @bashcompdir@
+bashcompdir = $(datadir)/bash-completion/completions
+dist_bashcomp_DATA = bash-completion/augtool bash-completion/augmatch bash-completion/augprint
+
--- /dev/null
+/*
+ * ast.c: Support routines for put/get
+ *
+ * Copyright (C) 2008-2016 David Lutterkort
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: David Lutterkort <dlutter@redhat.com>
+ */
+
+#include <config.h>
+#include <stdint.h>
+
+#include "internal.h"
+#include "memory.h"
+#include "lens.h"
+
+/* A dictionary that maps a key to a list of (skel, dict) pairs */
+struct dict_entry {
+ struct dict_entry *next;
+ struct skel *skel;
+ struct dict *dict;
+};
+
+/* Associates a KEY with a list of skel/dict pairs.
+
+ Dicts are used in two phases: first they are constructed, through
+ repeated calls to dict_append. In the second phase, items are looked up
+ and removed from the dict.
+
+ During construction, MARK points to the end of the list ENTRY. Once
+ construction is done, MARK points to the head of that list, and ENTRY is
+   moved towards the tail every time an item is looked up.
+*/
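+
+/* Illustrative lifecycle sketch (comment only, not part of the build);
+   the key/skel/subdict values would normally come from matching a lens.
+   Looked-up entries remain owned by the dict, so only free_dict is
+   needed at the end:
+
+       struct dict *d  = make_dict(key1, skel1, sub1);   // phase 1: build
+       struct dict *d2 = make_dict(key2, skel2, sub2);
+       dict_append(&d, d2);                              // d2 is consumed
+
+       struct skel *skel;
+       struct dict *sub;
+       dict_lookup(key1, d, &skel, &sub);                // phase 2: lookup
+       free_dict(d);
+*/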
+struct dict_node {
+ char *key;
+ struct dict_entry *entry; /* This will change as entries are looked up */
+ struct dict_entry *mark; /* Pointer to initial entry, will never change */
+};
+
+/* Nodes are kept sorted by their key, with NULL being smaller than any
+ string */
+struct dict {
+ struct dict_node **nodes;
+ uint32_t size;
+ uint32_t used;
+ bool marked;
+};
+
+static const int dict_initial_size = 2;
+static const int dict_max_expansion = 128;
+static const uint32_t dict_max_size = (1<<24) - 1;
+
+struct dict *make_dict(char *key, struct skel *skel, struct dict *subdict) {
+ struct dict *dict = NULL;
+ if (ALLOC(dict) < 0)
+ goto error;
+ if (ALLOC_N(dict->nodes, dict_initial_size) < 0)
+ goto error;
+ if (ALLOC(dict->nodes[0]) < 0)
+ goto error;
+ if (ALLOC(dict->nodes[0]->entry) < 0)
+ goto error;
+
+ dict->size = dict_initial_size;
+ dict->used = 1;
+ dict->nodes[0]->key = key;
+ dict->nodes[0]->entry->skel = skel;
+ dict->nodes[0]->entry->dict = subdict;
+ dict->nodes[0]->mark = dict->nodes[0]->entry;
+
+ return dict;
+ error:
+    /* dict may still be NULL here if the very first allocation failed */
+    if (dict != NULL) {
+        if (dict->nodes != NULL) {
+            if (dict->nodes[0] != NULL)
+                FREE(dict->nodes[0]->entry);
+            FREE(dict->nodes[0]);
+        }
+        FREE(dict->nodes);
+        FREE(dict);
+    }
+    return NULL;
+}
+
+void free_dict(struct dict *dict) {
+ if (dict == NULL)
+ return;
+
+ for (int i=0; i < dict->used; i++) {
+ struct dict_node *node = dict->nodes[i];
+ if (! dict->marked)
+ node->mark = node->entry;
+ while (node->mark != NULL) {
+ struct dict_entry *del = node->mark;
+ node->mark = del->next;
+ free_skel(del->skel);
+ free_dict(del->dict);
+ free(del);
+ }
+ free(node->key);
+ FREE(node);
+ }
+ FREE(dict->nodes);
+ FREE(dict);
+}
+
+/* Return the position of KEY in DICT as an integer between 0 and
+ DICT->USED. If KEY is not in DICT, return a negative number P such that
+ -(P + 1) is the position at which KEY must be inserted to keep the keys
+ of the nodes in DICT sorted.
+*/
+static int dict_pos(struct dict *dict, const char *key) {
+ if (key == NULL) {
+ return (dict->nodes[0]->key == NULL) ? 0 : -1;
+ }
+
+ int l = dict->nodes[0]->key == NULL ? 1 : 0;
+ int h = dict->used;
+ while (l < h) {
+ int m = (l + h)/2;
+ int cmp = strcmp(dict->nodes[m]->key, key);
+ if (cmp > 0)
+ h = m;
+ else if (cmp < 0)
+ l = m + 1;
+ else
+ return m;
+ }
+ return -(l + 1);
+}
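+
+/* For example, with the keys "a" and "c" at positions 0 and 1,
+   dict_pos(dict, "b") returns -2: "b" would have to be inserted at
+   position -(-2 + 1) = 1 to keep the keys sorted. */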
+
+static int dict_expand(struct dict *dict) {
+ uint32_t size = dict->size;
+
+ if (size == dict_max_size)
+ return -1;
+ if (size > dict_max_expansion)
+ size += dict_max_expansion;
+ else
+ size *= 2;
+ if (size > dict_max_size)
+ size = dict_max_size;
+ dict->size = size;
+ return REALLOC_N(dict->nodes, dict->size);
+}
+
+int dict_append(struct dict **dict, struct dict *d2) {
+ if (d2 == NULL)
+ return 0;
+
+ if (*dict == NULL) {
+ *dict = d2;
+ return 0;
+ }
+
+ struct dict *d1 = *dict;
+ for (int i2 = 0; i2 < d2->used; i2++) {
+ struct dict_node *n2 = d2->nodes[i2];
+ int i1 = dict_pos(d1, n2->key);
+ if (i1 < 0) {
+ i1 = - i1 - 1;
+ if (d1->size == d1->used) {
+ if (dict_expand(d1) < 0)
+ return -1;
+ }
+ memmove(d1->nodes + i1 + 1, d1->nodes + i1,
+ sizeof(*d1->nodes) * (d1->used - i1));
+ d1->nodes[i1] = n2;
+ d1->used += 1;
+ } else {
+ struct dict_node *n1 = d1->nodes[i1];
+ list_tail_cons(n1->entry, n1->mark, n2->entry);
+ FREE(n2->key);
+ FREE(n2);
+ }
+ }
+ FREE(d2->nodes);
+ FREE(d2);
+ return 0;
+}
+
+void dict_lookup(const char *key, struct dict *dict,
+ struct skel **skel, struct dict **subdict) {
+ *skel = NULL;
+ *subdict = NULL;
+ if (dict != NULL) {
+ if (! dict->marked) {
+ for (int i=0; i < dict->used; i++) {
+ dict->nodes[i]->mark = dict->nodes[i]->entry;
+ }
+ dict->marked = 1;
+ }
+ int p = dict_pos(dict, key);
+ if (p >= 0) {
+ struct dict_node *node = dict->nodes[p];
+ if (node->entry != NULL) {
+ *skel = node->entry->skel;
+ *subdict = node->entry->dict;
+ node->entry = node->entry->next;
+ }
+ }
+ }
+}
+
+
+
+/*
+ * Local variables:
+ * indent-tabs-mode: nil
+ * c-indent-level: 4
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
--- /dev/null
+/*
+ * augeas.c: the core data structure for storing key/value pairs
+ *
+ * Copyright (C) 2007-2017 David Lutterkort
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: David Lutterkort <dlutter@redhat.com>
+ */
+
+#include <config.h>
+#include "augeas.h"
+#include "internal.h"
+#include "memory.h"
+#include "syntax.h"
+#include "transform.h"
+#include "errcode.h"
+
+#include <fnmatch.h>
+#include <argz.h>
+#include <string.h>
+#include <stdarg.h>
+#include <locale.h>
+
+/* Some popular labels that we use in /augeas */
+static const char *const s_augeas = "augeas";
+static const char *const s_files = "files";
+static const char *const s_load = "load";
+static const char *const s_pathx = "pathx";
+static const char *const s_error = "error";
+static const char *const s_pos = "pos";
+static const char *const s_vars = "variables";
+static const char *const s_lens = "lens";
+static const char *const s_excl = "excl";
+static const char *const s_incl = "incl";
+
+#define AUGEAS_META_PATHX_FUNC AUGEAS_META_TREE "/version/pathx/functions"
+
+static const char *const static_nodes[][2] = {
+ { AUGEAS_FILES_TREE, NULL },
+ { AUGEAS_META_TREE "/variables", NULL },
+ { AUGEAS_META_TREE "/version", PACKAGE_VERSION },
+ { AUGEAS_META_TREE "/version/save/mode[1]", AUG_SAVE_BACKUP_TEXT },
+ { AUGEAS_META_TREE "/version/save/mode[2]", AUG_SAVE_NEWFILE_TEXT },
+ { AUGEAS_META_TREE "/version/save/mode[3]", AUG_SAVE_NOOP_TEXT },
+ { AUGEAS_META_TREE "/version/save/mode[4]", AUG_SAVE_OVERWRITE_TEXT },
+ { AUGEAS_META_TREE "/version/defvar/expr", NULL },
+ { AUGEAS_META_PATHX_FUNC "/count", NULL },
+ { AUGEAS_META_PATHX_FUNC "/glob", NULL },
+ { AUGEAS_META_PATHX_FUNC "/label", NULL },
+ { AUGEAS_META_PATHX_FUNC "/last", NULL },
+ { AUGEAS_META_PATHX_FUNC "/modified", NULL },
+ { AUGEAS_META_PATHX_FUNC "/position", NULL },
+ { AUGEAS_META_PATHX_FUNC "/regexp", NULL }
+};
+
+static const char *const errcodes[] = {
+ "No error", /* AUG_NOERROR */
+ "Cannot allocate memory", /* AUG_ENOMEM */
+ "Internal error (please file a bug)", /* AUG_EINTERNAL */
+ "Invalid path expression", /* AUG_EPATHX */
+ "No match for path expression", /* AUG_ENOMATCH */
+ "Too many matches for path expression", /* AUG_EMMATCH */
+ "Syntax error in lens definition", /* AUG_ESYNTAX */
+ "Lens not found", /* AUG_ENOLENS */
+ "Multiple transforms", /* AUG_EMXFM */
+ "Node has no span info", /* AUG_ENOSPAN */
+ "Cannot move node into its descendant", /* AUG_EMVDESC */
+ "Failed to execute command", /* AUG_ECMDRUN */
+ "Invalid argument in function call", /* AUG_EBADARG */
+ "Invalid label", /* AUG_ELABEL */
+ "Cannot copy node into its descendant", /* AUG_ECPDESC */
+ "Cannot access file" /* AUG_EFILEACCESS */
+};
+
+/* Mark TREE dirty and, if TREE sits below a file node, mark that file
+ * node dirty as well so the next save revisits the file */
+static void tree_mark_dirty(struct tree *tree) {
+    tree->dirty = 1;
+    while (tree != tree->parent) {
+        if (tree->file) {
+            tree->dirty = 1;
+            break;
+        }
+        tree = tree->parent;
+    }
+}
+
+void tree_clean(struct tree *tree) {
+ if ( tree->file && ! tree->dirty )
+ return;
+ list_for_each(c, tree->children)
+ tree_clean(c);
+ tree->dirty = 0;
+}
+
+struct tree *tree_child(struct tree *tree, const char *label) {
+ if (tree == NULL)
+ return NULL;
+
+ list_for_each(child, tree->children) {
+ if (streqv(label, child->label))
+ return child;
+ }
+ return NULL;
+}
+
+struct tree *tree_child_cr(struct tree *tree, const char *label) {
+    struct tree *child = NULL;
+
+ if (tree == NULL)
+ return NULL;
+
+ child = tree_child(tree, label);
+ if (child == NULL) {
+ char *l = strdup(label);
+ if (l == NULL)
+ return NULL;
+ child = tree_append(tree, l, NULL);
+ }
+ return child;
+}
+
+struct tree *tree_path_cr(struct tree *tree, int n, ...) {
+ va_list ap;
+
+ va_start(ap, n);
+ for (int i=0; i < n; i++) {
+ const char *l = va_arg(ap, const char *);
+ tree = tree_child_cr(tree, l);
+ }
+ va_end(ap);
+ return tree;
+}
+
+static struct tree *tree_fpath_int(struct augeas *aug, const char *fpath,
+ bool create) {
+ int r;
+ char *steps = NULL, *step = NULL;
+ size_t nsteps = 0;
+ struct tree *result = NULL;
+
+ r = argz_create_sep(fpath, '/', &steps, &nsteps);
+ ERR_NOMEM(r < 0, aug);
+ result = aug->origin;
+ while ((step = argz_next(steps, nsteps, step))) {
+ if (create) {
+ result = tree_child_cr(result, step);
+ ERR_THROW(result == NULL, aug, AUG_ENOMEM,
+                      "while searching %s: cannot create %s", fpath, step);
+ } else {
+ /* Lookup only */
+ result = tree_child(result, step);
+ if (result == NULL)
+ goto done;
+ }
+ }
+ done:
+ free(steps);
+ return result;
+ error:
+ result = NULL;
+ goto done;
+}
+
+struct tree *tree_fpath(struct augeas *aug, const char *fpath) {
+ return tree_fpath_int(aug, fpath, false);
+}
+
+struct tree *tree_fpath_cr(struct augeas *aug, const char *fpath) {
+ return tree_fpath_int(aug, fpath, true);
+}
+
+struct tree *tree_find(struct augeas *aug, const char *path) {
+ struct pathx *p = NULL;
+ struct tree *result = NULL;
+ int r;
+
+ p = pathx_aug_parse(aug, aug->origin, tree_root_ctx(aug), path, true);
+ ERR_BAIL(aug);
+
+ r = pathx_find_one(p, &result);
+ BUG_ON(r > 1, aug,
+ "Multiple matches for %s when only one was expected",
+ path);
+ done:
+ free_pathx(p);
+ return result;
+ error:
+ result = NULL;
+ goto done;
+}
+
+struct tree *tree_find_cr(struct augeas *aug, const char *path) {
+ struct pathx *p = NULL;
+ struct tree *result = NULL;
+ int r;
+
+ p = pathx_aug_parse(aug, aug->origin, tree_root_ctx(aug), path, true);
+ ERR_BAIL(aug);
+
+ r = pathx_expand_tree(p, &result);
+ ERR_BAIL(aug);
+ ERR_THROW(r < 0, aug, AUG_EINTERNAL, "pathx_expand_tree failed");
+ error:
+ free_pathx(p);
+ return result;
+}
+
+void tree_store_value(struct tree *tree, char **value) {
+ if (streqv(tree->value, *value)) {
+ free(*value);
+ *value = NULL;
+ return;
+ }
+ if (tree->value != NULL) {
+ free(tree->value);
+ tree->value = NULL;
+ }
+ if (*value != NULL) {
+ tree->value = *value;
+ *value = NULL;
+ }
+ tree_mark_dirty(tree);
+}
+
+int tree_set_value(struct tree *tree, const char *value) {
+ char *v = NULL;
+
+ if (streqv(tree->value, value))
+ return 0;
+ if (value != NULL) {
+ v = strdup(value);
+ if (v == NULL)
+ return -1;
+ }
+ tree_store_value(tree, &v);
+ return 0;
+}
+
+static void store_error(const struct augeas *aug, const char *label, const char *value,
+ int nentries, ...) {
+ va_list ap;
+ struct tree *tree;
+
+ ensure(nentries % 2 == 0, aug);
+ tree = tree_path_cr(aug->origin, 3, s_augeas, s_error, label);
+ if (tree == NULL)
+ return;
+
+ tree_set_value(tree, value);
+
+ va_start(ap, nentries);
+ for (int i=0; i < nentries; i += 2) {
+ char *l = va_arg(ap, char *);
+ char *v = va_arg(ap, char *);
+ struct tree *t = tree_child_cr(tree, l);
+ if (t != NULL)
+ tree_set_value(t, v);
+ }
+ va_end(ap);
+ error:
+ return;
+}
+
+/* Report pathx errors in /augeas/pathx/error */
+static void store_pathx_error(const struct augeas *aug) {
+ if (aug->error->code != AUG_EPATHX)
+ return;
+
+ store_error(aug, s_pathx, aug->error->minor_details,
+ 2, s_pos, aug->error->details);
+}
+
+struct pathx *pathx_aug_parse(const struct augeas *aug,
+ struct tree *tree,
+ struct tree *root_ctx,
+ const char *path, bool need_nodeset) {
+ struct pathx *result;
+ struct error *err = err_of_aug(aug);
+
+ if (tree == NULL)
+ tree = aug->origin;
+
+ pathx_parse(tree, err, path, need_nodeset, aug->symtab, root_ctx, &result);
+ return result;
+}
+
+/* Find the tree stored in AUGEAS_CONTEXT */
+struct tree *tree_root_ctx(const struct augeas *aug) {
+ struct pathx *p = NULL;
+ struct tree *match = NULL;
+ const char *ctx_path;
+ int r;
+
+ p = pathx_aug_parse(aug, aug->origin, NULL, AUGEAS_CONTEXT, true);
+ ERR_BAIL(aug);
+
+ r = pathx_find_one(p, &match);
+ ERR_THROW(r > 1, aug, AUG_EMMATCH,
+ "There are %d nodes matching %s, expecting one",
+ r, AUGEAS_CONTEXT);
+
+ if (match == NULL || match->value == NULL || *match->value == '\0')
+ goto error;
+
+ /* Clean via augrun's helper to ensure it's valid */
+ ctx_path = cleanpath(match->value);
+ free_pathx(p);
+
+ p = pathx_aug_parse(aug, aug->origin, NULL, ctx_path, true);
+ ERR_BAIL(aug);
+
+ if (pathx_first(p) == NULL) {
+ r = pathx_expand_tree(p, &match);
+ if (r < 0)
+ goto done;
+ r = tree_set_value(match, NULL);
+ if (r < 0)
+ goto done;
+ } else {
+ r = pathx_find_one(p, &match);
+ ERR_THROW(r > 1, aug, AUG_EMMATCH,
+ "There are %d nodes matching the context %s, expecting one",
+ r, ctx_path);
+ }
+
+ done:
+ free_pathx(p);
+ return match;
+ error:
+ match = NULL;
+ goto done;
+}
+
+struct tree *tree_append(struct tree *parent,
+ char *label, char *value) {
+ struct tree *result = make_tree(label, value, parent, NULL);
+ if (result != NULL)
+ list_append(parent->children, result);
+ return result;
+}
+
+static struct tree *tree_append_s(struct tree *parent,
+ const char *l0, char *v) {
+ struct tree *result;
+ char *l;
+
+ if (l0 == NULL) {
+ return NULL;
+ } else {
+ l = strdup(l0);
+ }
+ result = tree_append(parent, l, v);
+ if (result == NULL)
+ free(l);
+ return result;
+}
+
+static struct tree *tree_from_transform(struct augeas *aug,
+ const char *modname,
+ struct transform *xfm) {
+ struct tree *meta = tree_child_cr(aug->origin, s_augeas);
+ struct tree *load = NULL, *txfm = NULL, *t;
+ char *v = NULL;
+ int r;
+
+ ERR_NOMEM(meta == NULL, aug);
+
+ load = tree_child_cr(meta, s_load);
+ ERR_NOMEM(load == NULL, aug);
+
+ if (modname == NULL)
+ modname = "_";
+
+ txfm = tree_append_s(load, modname, NULL);
+ ERR_NOMEM(txfm == NULL, aug);
+
+ r = asprintf(&v, "@%s", modname);
+ ERR_NOMEM(r < 0, aug);
+
+ t = tree_append_s(txfm, s_lens, v);
+ ERR_NOMEM(t == NULL, aug);
+ v = NULL;
+
+ list_for_each(f, xfm->filter) {
+ const char *l = f->include ? s_incl : s_excl;
+ v = strdup(f->glob->str);
+ ERR_NOMEM(v == NULL, aug);
+ t = tree_append_s(txfm, l, v);
+ ERR_NOMEM(t == NULL, aug);
+ }
+ return txfm;
+ error:
+ free(v);
+ tree_unlink(aug, txfm);
+ return NULL;
+}
+
+/* Save user locale and switch to C locale */
+#if HAVE_USELOCALE
+static void save_locale(struct augeas *aug) {
+ if (aug->c_locale == NULL) {
+ aug->c_locale = newlocale(LC_ALL_MASK, "C", NULL);
+ ERR_NOMEM(aug->c_locale == NULL, aug);
+ }
+
+ aug->user_locale = uselocale(aug->c_locale);
+ error:
+ return;
+}
+#else
+static void save_locale(ATTRIBUTE_UNUSED struct augeas *aug) { }
+#endif
+
+#if HAVE_USELOCALE
+static void restore_locale(struct augeas *aug) {
+ uselocale(aug->user_locale);
+ aug->user_locale = NULL;
+}
+#else
+static void restore_locale(ATTRIBUTE_UNUSED struct augeas *aug) { }
+#endif
+
+/* Clean up old error messages every time we enter through the public
+ * API. Since we make internal calls through the public API, we keep a
+ * count of how many times a public API call was made, and only reset when
+ * that count is 0. That requires that all public functions enclose their
+ * work within a matching pair of api_entry/api_exit calls.
+ */
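+
+/* Sketch of the pattern (aug_example is hypothetical); every public
+   entry point brackets its work between api_entry and api_exit:
+
+       int aug_example(struct augeas *aug) {
+           int result = -1;
+           api_entry(aug);
+           // ... do the work, ERR_BAIL(aug) on failure ...
+           result = 0;
+        error:
+           api_exit(aug);
+           return result;
+       }
+*/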
+void api_entry(const struct augeas *aug) {
+ struct error *err = ((struct augeas *) aug)->error;
+
+ ((struct augeas *) aug)->api_entries += 1;
+
+ if (aug->api_entries > 1)
+ return;
+
+ reset_error(err);
+ save_locale((struct augeas *) aug);
+}
+
+void api_exit(const struct augeas *aug) {
+ assert(aug->api_entries > 0);
+ ((struct augeas *) aug)->api_entries -= 1;
+ if (aug->api_entries == 0) {
+ store_pathx_error(aug);
+ restore_locale((struct augeas *) aug);
+ }
+}
+
+static int init_root(struct augeas *aug, const char *root0) {
+ if (root0 == NULL)
+ root0 = getenv(AUGEAS_ROOT_ENV);
+ if (root0 == NULL || root0[0] == '\0')
+ root0 = "/";
+
+ aug->root = strdup(root0);
+ if (aug->root == NULL)
+ return -1;
+
+ if (aug->root[strlen(aug->root)-1] != SEP) {
+ if (REALLOC_N(aug->root, strlen(aug->root) + 2) < 0)
+ return -1;
+ strcat((char *) aug->root, "/");
+ }
+ return 0;
+}
+
+static int init_loadpath(struct augeas *aug, const char *loadpath) {
+ int r;
+
+ aug->modpathz = NULL;
+ aug->nmodpath = 0;
+ if (loadpath != NULL) {
+ r = argz_add_sep(&aug->modpathz, &aug->nmodpath,
+ loadpath, PATH_SEP_CHAR);
+ if (r != 0)
+ return -1;
+ }
+ char *env = getenv(AUGEAS_LENS_ENV);
+ if (env != NULL) {
+ r = argz_add_sep(&aug->modpathz, &aug->nmodpath,
+ env, PATH_SEP_CHAR);
+ if (r != 0)
+ return -1;
+ }
+ if (!(aug->flags & AUG_NO_STDINC)) {
+ r = argz_add(&aug->modpathz, &aug->nmodpath, AUGEAS_LENS_DIR);
+ if (r != 0)
+ return -1;
+ r = argz_add(&aug->modpathz, &aug->nmodpath,
+ AUGEAS_LENS_DIST_DIR);
+ if (r != 0)
+ return -1;
+ }
+ /* Clean up trailing slashes */
+ if (aug->nmodpath > 0) {
+ argz_stringify(aug->modpathz, aug->nmodpath, PATH_SEP_CHAR);
+ char *s, *t;
+ const char *e = aug->modpathz + strlen(aug->modpathz);
+ for (s = aug->modpathz, t = aug->modpathz; s < e; s++) {
+ char *p = s;
+ if (*p == '/') {
+ while (*p == '/') p += 1;
+ if (*p == '\0' || *p == PATH_SEP_CHAR)
+ s = p;
+ }
+ if (t != s)
+ *t++ = *s;
+ else
+ t += 1;
+ }
+ if (t != s) {
+ *t = '\0';
+ }
+ s = aug->modpathz;
+ aug->modpathz = NULL;
+ r = argz_create_sep(s, PATH_SEP_CHAR, &aug->modpathz,
+ &aug->nmodpath);
+ free(s);
+ if (r != 0)
+ return -1;
+ }
+ return 0;
+}
+
+static void init_save_mode(struct augeas *aug) {
+ const char *v = AUG_SAVE_OVERWRITE_TEXT;
+
+ if (aug->flags & AUG_SAVE_NEWFILE) {
+ v = AUG_SAVE_NEWFILE_TEXT;
+ } else if (aug->flags & AUG_SAVE_BACKUP) {
+ v = AUG_SAVE_BACKUP_TEXT;
+ } else if (aug->flags & AUG_SAVE_NOOP) {
+ v = AUG_SAVE_NOOP_TEXT;
+ }
+
+ aug_set(aug, AUGEAS_META_SAVE_MODE, v);
+}
+
+struct augeas *aug_init(const char *root, const char *loadpath,
+ unsigned int flags) {
+ struct augeas *result;
+ struct tree *tree_root = make_tree(NULL, NULL, NULL, NULL);
+ int r;
+ bool close_on_error = true;
+
+ if (tree_root == NULL)
+ return NULL;
+
+ if (ALLOC(result) < 0)
+ goto error;
+ if (ALLOC(result->error) < 0)
+ goto error;
+ if (make_ref(result->error->info) < 0)
+ goto error;
+ result->error->info->error = result->error;
+ result->error->info->filename = dup_string("(unknown file)");
+ if (result->error->info->filename == NULL)
+ goto error;
+ result->error->aug = result;
+
+ result->origin = make_tree_origin(tree_root);
+ if (result->origin == NULL) {
+ free_tree(tree_root);
+ goto error;
+ }
+
+ api_entry(result);
+
+ result->flags = flags;
+
+ r = init_root(result, root);
+ ERR_NOMEM(r < 0, result);
+
+ result->origin->children->label = strdup(s_augeas);
+
+ /* We are now initialized enough that we can dare return RESULT even
+ * when we encounter errors if the caller so wishes */
+ close_on_error = !(flags & AUG_NO_ERR_CLOSE);
+
+ r = init_loadpath(result, loadpath);
+ ERR_NOMEM(r < 0, result);
+
+ /* We report the root dir in AUGEAS_META_ROOT, but we only use the
+ value we store internally, to avoid any problems with
+ AUGEAS_META_ROOT getting changed. */
+ aug_set(result, AUGEAS_META_ROOT, result->root);
+ ERR_BAIL(result);
+
+ /* Set the default path context */
+ aug_set(result, AUGEAS_CONTEXT, AUG_CONTEXT_DEFAULT);
+ ERR_BAIL(result);
+
+ for (int i=0; i < ARRAY_CARDINALITY(static_nodes); i++) {
+ aug_set(result, static_nodes[i][0], static_nodes[i][1]);
+ ERR_BAIL(result);
+ }
+
+ init_save_mode(result);
+ ERR_BAIL(result);
+
+ const char *v = (flags & AUG_ENABLE_SPAN) ? AUG_ENABLE : AUG_DISABLE;
+ aug_set(result, AUGEAS_SPAN_OPTION, v);
+ ERR_BAIL(result);
+
+ if (interpreter_init(result) == -1)
+ goto error;
+
+ list_for_each(modl, result->modules) {
+ struct transform *xform = modl->autoload;
+ if (xform == NULL)
+ continue;
+ tree_from_transform(result, modl->name, xform);
+ ERR_BAIL(result);
+ }
+ if (!(result->flags & AUG_NO_LOAD))
+ if (aug_load(result) < 0)
+ goto error;
+
+ api_exit(result);
+ return result;
+
+ error:
+ if (close_on_error) {
+ aug_close(result);
+ result = NULL;
+ }
+ if (result != NULL && result->api_entries > 0)
+ api_exit(result);
+ return result;
+}
+
+/* Free one tree node */
+static void free_tree_node(struct tree *tree) {
+ if (tree == NULL)
+ return;
+
+ if (tree->span != NULL)
+ free_span(tree->span);
+ free(tree->label);
+ free(tree->value);
+ free(tree);
+}
+
+/* Only unlink; assume we know TREE is not in the symtab */
+static int tree_unlink_raw(struct tree *tree) {
+ int result = 0;
+
+ assert (tree->parent != NULL);
+ list_remove(tree, tree->parent->children);
+ tree_mark_dirty(tree->parent);
+ result = free_tree(tree->children) + 1;
+ free_tree_node(tree);
+ return result;
+}
+
+int tree_unlink(struct augeas *aug, struct tree *tree) {
+ if (tree == NULL)
+ return 0;
+ pathx_symtab_remove_descendants(aug->symtab, tree);
+ return tree_unlink_raw(tree);
+}
+
+void tree_unlink_children(struct augeas *aug, struct tree *tree) {
+ if (tree == NULL)
+ return;
+
+ pathx_symtab_remove_descendants(aug->symtab, tree);
+
+ while (tree->children != NULL)
+ tree_unlink_raw(tree->children);
+}
+
+static void tree_mark_files(struct tree *tree) {
+ if (tree_child(tree, "path") != NULL) {
+ tree_mark_dirty(tree);
+ } else {
+ list_for_each(c, tree->children) {
+ tree_mark_files(c);
+ }
+ }
+}
+
+static void tree_rm_dirty_files(struct augeas *aug, struct tree *tree) {
+ struct tree *p;
+
+ if (tree->file && !tree->dirty) {
+ return;
+ } else if (tree->file && tree->dirty && ((p = tree_child(tree, "path")) != NULL)) {
+ tree_unlink(aug, tree_fpath(aug, p->value));
+ tree_unlink(aug, tree);
+ } else {
+ struct tree *c = tree->children;
+ while (c != NULL) {
+ struct tree *next = c->next;
+ tree_rm_dirty_files(aug, c);
+ c = next;
+ }
+ }
+}
+
+static void tree_rm_dirty_leaves(struct augeas *aug, struct tree *tree,
+ struct tree *protect) {
+ if (tree->file && !tree->dirty)
+ return;
+
+ struct tree *c = tree->children;
+ while (c != NULL) {
+ struct tree *next = c->next;
+ tree_rm_dirty_leaves(aug, c, protect);
+ c = next;
+ }
+
+ if (tree != protect && tree->children == NULL)
+ tree_unlink(aug, tree);
+}
+
+int aug_load(struct augeas *aug) {
+ const char *option = NULL;
+ struct tree *meta = tree_child_cr(aug->origin, s_augeas);
+ struct tree *meta_files = tree_child_cr(meta, s_files);
+ struct tree *files = tree_child_cr(aug->origin, s_files);
+ struct tree *load = tree_child_cr(meta, s_load);
+ struct tree *vars = tree_child_cr(meta, s_vars);
+
+ api_entry(aug);
+
+ ERR_NOMEM(load == NULL, aug);
+
+ /* To avoid unnecessary loads of files, we reload an existing file in
+ * several steps:
+ * (1) mark all file nodes under /augeas/files as dirty (and only those)
+ * (2) process all files matched by a lens; we check (in
+ * transform_load) if the file has been modified. If it has, we
+ * reparse it. Either way, we clear the dirty flag. We also need to
+ * reread the file if part or all of it has been modified in the
+ * tree but not been saved yet
+ * (3) remove all files from the tree that still have a dirty entry
+ * under /augeas/files. Those files are not processed by any lens
+ * anymore
+ * (4) Remove entries from /augeas/files and /files that correspond
+ * to directories without any files of interest
+ */
+
+ /* update flags according to option value */
+ if (aug_get(aug, AUGEAS_SPAN_OPTION, &option) == 1) {
+ if (strcmp(option, AUG_ENABLE) == 0) {
+ aug->flags |= AUG_ENABLE_SPAN;
+ } else {
+ aug->flags &= ~AUG_ENABLE_SPAN;
+ }
+ }
+
+ tree_clean(meta_files);
+ tree_mark_files(meta_files);
+
+ list_for_each(xfm, load->children) {
+ if (transform_validate(aug, xfm) == 0)
+ transform_load(aug, xfm, NULL);
+ }
+
+ /* This makes it possible to spot 'directories' that are now empty
+ * because we removed their file contents */
+ tree_clean(files);
+
+ tree_rm_dirty_files(aug, meta_files);
+ tree_rm_dirty_leaves(aug, meta_files, meta_files);
+ tree_rm_dirty_leaves(aug, files, files);
+
+ tree_clean(aug->origin);
+
+ list_for_each(v, vars->children) {
+ aug_defvar(aug, v->label, v->value);
+ ERR_BAIL(aug);
+ }
+
+ api_exit(aug);
+ return 0;
+ error:
+ api_exit(aug);
+ return -1;
+}
+
+static int find_one_node(struct pathx *p, struct tree **match) {
+ struct error *err = err_of_pathx(p);
+ int r = pathx_find_one(p, match);
+
+ if (r == 1)
+ return 0;
+
+ if (r == 0) {
+ report_error(err, AUG_ENOMATCH, NULL);
+ } else {
+ /* r > 1 */
+ report_error(err, AUG_EMMATCH, NULL);
+ }
+
+ return -1;
+}
+
+int aug_get(const struct augeas *aug, const char *path, const char **value) {
+ struct pathx *p = NULL;
+ struct tree *match;
+ int r;
+
+ if (value != NULL)
+ *value = NULL;
+
+ api_entry(aug);
+
+ p = pathx_aug_parse(aug, aug->origin, tree_root_ctx(aug), path, true);
+ ERR_BAIL(aug);
+
+ r = pathx_find_one(p, &match);
+ ERR_BAIL(aug);
+ ERR_THROW(r > 1, aug, AUG_EMMATCH, "There are %d nodes matching %s",
+ r, path);
+
+ if (r == 1 && value != NULL)
+ *value = match->value;
+ free_pathx(p);
+
+ api_exit(aug);
+ return r;
+ error:
+ free_pathx(p);
+ api_exit(aug);
+ return -1;
+}
+
+int aug_label(const struct augeas *aug, const char *path, const char **label) {
+ struct pathx *p = NULL;
+ struct tree *match;
+ int r;
+
+ api_entry(aug);
+
+ p = pathx_aug_parse(aug, aug->origin, tree_root_ctx(aug), path, true);
+ ERR_BAIL(aug);
+
+ if (label != NULL)
+ *label = NULL;
+
+ r = pathx_find_one(p, &match);
+ ERR_BAIL(aug);
+ ERR_THROW(r > 1, aug, AUG_EMMATCH, "There are %d nodes matching %s",
+ r, path);
+
+ if (r == 1 && label != NULL)
+ *label = match->label;
+ free_pathx(p);
+
+ api_exit(aug);
+ return r;
+ error:
+ free_pathx(p);
+ api_exit(aug);
+ return -1;
+}
+
+static void record_var_meta(struct augeas *aug, const char *name,
+ const char *expr) {
+ /* Record the definition of the variable */
+ struct tree *tree = tree_path_cr(aug->origin, 2, s_augeas, s_vars);
+ ERR_NOMEM(tree == NULL, aug);
+ if (expr == NULL) {
+ tree_unlink(aug, tree_child(tree, name));
+ } else {
+ tree = tree_child_cr(tree, name);
+ ERR_NOMEM(tree == NULL, aug);
+ tree_set_value(tree, expr);
+ }
+ error:
+ return;
+}
+
+int aug_defvar(augeas *aug, const char *name, const char *expr) {
+ struct pathx *p = NULL;
+ int result = -1;
+
+ api_entry(aug);
+
+ if (expr == NULL) {
+ result = pathx_symtab_undefine(&(aug->symtab), name);
+ } else {
+ p = pathx_aug_parse(aug, aug->origin, tree_root_ctx(aug), expr, false);
+ ERR_BAIL(aug);
+ result = pathx_symtab_define(&(aug->symtab), name, p);
+ }
+ ERR_BAIL(aug);
+
+ record_var_meta(aug, name, expr);
+ ERR_BAIL(aug);
+ error:
+ free_pathx(p);
+ api_exit(aug);
+ return result;
+}
+
+int aug_defnode(augeas *aug, const char *name, const char *expr,
+ const char *value, int *created) {
+ struct pathx *p = NULL;
+ int result = -1;
+ int r, cr;
+ struct tree *tree;
+
+ api_entry(aug);
+
+ if (expr == NULL)
+ goto error;
+ if (created == NULL)
+ created = &cr;
+
+ p = pathx_aug_parse(aug, aug->origin, tree_root_ctx(aug), expr, false);
+ ERR_BAIL(aug);
+
+ if (pathx_first(p) == NULL) {
+ r = pathx_expand_tree(p, &tree);
+ if (r < 0)
+ goto done;
+ *created = 1;
+ } else {
+ *created = 0;
+ }
+
+ if (*created) {
+ r = tree_set_value(tree, value);
+ if (r < 0)
+ goto done;
+ result = pathx_symtab_assign_tree(&(aug->symtab), name, tree);
+ char *e = path_of_tree(tree);
+        ERR_NOMEM(e == NULL, aug);
+ record_var_meta(aug, name, e);
+ free(e);
+ ERR_BAIL(aug);
+ } else {
+ result = pathx_symtab_define(&(aug->symtab), name, p);
+ record_var_meta(aug, name, expr);
+ ERR_BAIL(aug);
+ }
+
+ done:
+ error:
+ free_pathx(p);
+ api_exit(aug);
+ return result;
+}
+
+struct tree *tree_set(struct pathx *p, const char *value) {
+ struct tree *tree;
+ int r;
+
+ r = pathx_expand_tree(p, &tree);
+ if (r == -1)
+ return NULL;
+
+ r = tree_set_value(tree, value);
+ if (r < 0)
+ return NULL;
+ return tree;
+}
+
+int aug_set(struct augeas *aug, const char *path, const char *value) {
+ struct pathx *p = NULL;
+ int result = -1;
+
+ api_entry(aug);
+
+ /* Get-out clause, in case context is broken */
+ struct tree *root_ctx = NULL;
+ if (STRNEQ(path, AUGEAS_CONTEXT))
+ root_ctx = tree_root_ctx(aug);
+
+ p = pathx_aug_parse(aug, aug->origin, root_ctx, path, true);
+ ERR_BAIL(aug);
+
+ result = tree_set(p, value) == NULL ? -1 : 0;
+ error:
+ free_pathx(p);
+ api_exit(aug);
+ return result;
+}
+
+int aug_setm(struct augeas *aug, const char *base,
+ const char *sub, const char *value) {
+ struct pathx *bx = NULL, *sx = NULL;
+ struct tree *bt, *st;
+ int result, r;
+
+ api_entry(aug);
+
+ bx = pathx_aug_parse(aug, aug->origin, tree_root_ctx(aug), base, true);
+ ERR_BAIL(aug);
+
+ if (sub != NULL && STREQ(sub, "."))
+ sub = NULL;
+
+ result = 0;
+ for (bt = pathx_first(bx); bt != NULL; bt = pathx_next(bx)) {
+ if (sub != NULL) {
+ /* Handle subnodes of BT */
+ sx = pathx_aug_parse(aug, bt, NULL, sub, true);
+ ERR_BAIL(aug);
+ if (pathx_first(sx) != NULL) {
+ /* Change existing subnodes matching SUB */
+ for (st = pathx_first(sx); st != NULL; st = pathx_next(sx)) {
+ r = tree_set_value(st, value);
+ ERR_NOMEM(r < 0, aug);
+ result += 1;
+ }
+ } else {
+ /* Create a new subnode matching SUB */
+ r = pathx_expand_tree(sx, &st);
+ if (r == -1)
+ goto error;
+ r = tree_set_value(st, value);
+ ERR_NOMEM(r < 0, aug);
+ result += 1;
+ }
+ free_pathx(sx);
+ sx = NULL;
+ } else {
+ /* Set nodes matching BT directly */
+ r = tree_set_value(bt, value);
+ ERR_NOMEM(r < 0, aug);
+ result += 1;
+ }
+ }
+
+ done:
+ free_pathx(bx);
+ free_pathx(sx);
+ api_exit(aug);
+ return result;
+ error:
+ result = -1;
+ goto done;
+}
+
+int tree_insert(struct pathx *p, const char *label, int before) {
+ struct tree *new = NULL, *match;
+
+ if (strchr(label, SEP) != NULL)
+ return -1;
+
+ if (find_one_node(p, &match) < 0)
+ goto error;
+
+ new = make_tree(strdup(label), NULL, match->parent, NULL);
+ if (new == NULL || new->label == NULL)
+ goto error;
+
+ if (before) {
+ list_insert_before(new, match, new->parent->children);
+ } else {
+ new->next = match->next;
+ match->next = new;
+ }
+ return 0;
+ error:
+ free_tree(new);
+ return -1;
+}
+
+int aug_insert(struct augeas *aug, const char *path, const char *label,
+ int before) {
+ struct pathx *p = NULL;
+ int result = -1;
+
+ api_entry(aug);
+
+ p = pathx_aug_parse(aug, aug->origin, tree_root_ctx(aug), path, true);
+ ERR_BAIL(aug);
+
+ result = tree_insert(p, label, before);
+ error:
+ free_pathx(p);
+ api_exit(aug);
+ return result;
+}
+
+struct tree *make_tree(char *label, char *value, struct tree *parent,
+ struct tree *children) {
+ struct tree *tree;
+ if (ALLOC(tree) < 0)
+ return NULL;
+
+ tree->label = label;
+ tree->value = value;
+ tree->parent = parent;
+ tree->children = children;
+ list_for_each(c, tree->children)
+ c->parent = tree;
+ if (parent != NULL)
+ tree_mark_dirty(tree);
+ else
+ tree->dirty = 1;
+ return tree;
+}
+
+struct tree *make_tree_origin(struct tree *root) {
+ struct tree *origin = NULL;
+
+ origin = make_tree(NULL, NULL, NULL, root);
+ if (origin == NULL)
+ return NULL;
+
+ origin->parent = origin;
+ return origin;
+}
+
+/* Recursively free the whole tree TREE and all its siblings */
+int free_tree(struct tree *tree) {
+ int cnt = 0;
+
+ while (tree != NULL) {
+ struct tree *del = tree;
+ tree = del->next;
+ cnt += free_tree(del->children);
+ free_tree_node(del);
+ cnt += 1;
+ }
+
+ return cnt;
+}
+
+int tree_rm(struct pathx *p) {
+ struct tree *tree, **del;
+ int cnt = 0, ndel = 0, i;
+
+ /* set ndel to the number of trees we could possibly delete */
+ for (tree = pathx_first(p); tree != NULL; tree = pathx_next(p)) {
+ if (! TREE_HIDDEN(tree))
+ ndel += 1;
+ }
+
+ if (ndel == 0)
+ return 0;
+
+ if (ALLOC_N(del, ndel) < 0) {
+ free(del);
+ return -1;
+ }
+
+ for (i = 0, tree = pathx_first(p); tree != NULL; tree = pathx_next(p)) {
+ if (TREE_HIDDEN(tree))
+ continue;
+ pathx_symtab_remove_descendants(pathx_get_symtab(p), tree);
+ /* Collect the tree nodes that actually need to be deleted in
+ del. Mark the root of every subtree we are going to free by
+ setting tree->added. Only add a node to del if none of its
+ ancestors would have been freed by the time we get to freeing
+ that node; this avoids double frees for situations where the
+ path expression matches both /node and /node/child as unlinking
+ /node implicitly unlinks /node/child */
+ int live = 1;
+ for (struct tree *t = tree; live && ! ROOT_P(t); t = t->parent) {
+ if (t->added)
+ live = 0;
+ }
+ if (live) {
+ del[i] = tree;
+ i += 1;
+ tree->added = 1;
+ }
+ }
+ /* ndel now means: the number of trees we are actually going to delete */
+ ndel = i;
+
+ for (i = 0; i < ndel; i++) {
+ if (del[i] != NULL) {
+ cnt += tree_unlink_raw(del[i]);
+ }
+ }
+ free(del);
+
+ return cnt;
+}
+
+int aug_rm(struct augeas *aug, const char *path) {
+ struct pathx *p = NULL;
+ int result = -1;
+
+ api_entry(aug);
+
+ p = pathx_aug_parse(aug, aug->origin, tree_root_ctx(aug), path, true);
+ ERR_BAIL(aug);
+
+ result = tree_rm(p);
+
+ error:
+ free_pathx(p);
+ api_exit(aug);
+ return result;
+}
+
+int aug_span(struct augeas *aug, const char *path, char **filename,
+ uint *label_start, uint *label_end, uint *value_start, uint *value_end,
+ uint *span_start, uint *span_end) {
+ struct pathx *p = NULL;
+ int result = -1;
+ struct tree *tree = NULL;
+ struct span *span;
+
+ api_entry(aug);
+
+ p = pathx_aug_parse(aug, aug->origin, tree_root_ctx(aug), path, true);
+ ERR_BAIL(aug);
+
+ tree = pathx_first(p);
+ ERR_BAIL(aug);
+
+ ERR_THROW(tree == NULL, aug, AUG_ENOMATCH, "No node matching %s", path);
+ ERR_THROW(tree->span == NULL, aug, AUG_ENOSPAN, "No span info for %s", path);
+ ERR_THROW(pathx_next(p) != NULL, aug, AUG_EMMATCH, "Multiple nodes match %s", path);
+
+ span = tree->span;
+
+ if (label_start != NULL)
+ *label_start = span->label_start;
+
+ if (label_end != NULL)
+ *label_end = span->label_end;
+
+ if (value_start != NULL)
+ *value_start = span->value_start;
+
+ if (value_end != NULL)
+ *value_end = span->value_end;
+
+ if (span_start != NULL)
+ *span_start = span->span_start;
+
+ if (span_end != NULL)
+ *span_end = span->span_end;
+
+ /* To be safe, make sure we always return a filename */
+ if (filename != NULL) {
+ if (span->filename == NULL || span->filename->str == NULL) {
+ *filename = strdup("");
+ } else {
+ *filename = strdup(span->filename->str);
+ }
+ ERR_NOMEM(*filename == NULL, aug);
+ }
+
+ result = 0;
+ error:
+ free_pathx(p);
+ api_exit(aug);
+ return result;
+}
+
+int aug_mv(struct augeas *aug, const char *src, const char *dst) {
+ struct pathx *s = NULL, *d = NULL;
+ struct tree *ts, *td, *t;
+ int r, ret;
+
+ api_entry(aug);
+
+ ret = -1;
+ s = pathx_aug_parse(aug, aug->origin, tree_root_ctx(aug), src, true);
+ ERR_BAIL(aug);
+
+ d = pathx_aug_parse(aug, aug->origin, tree_root_ctx(aug), dst, true);
+ ERR_BAIL(aug);
+
+ r = find_one_node(s, &ts);
+ if (r < 0)
+ goto error;
+
+ r = pathx_expand_tree(d, &td);
+ if (r == -1)
+ goto error;
+
+ /* Don't move SRC into its own descendant */
+ t = td;
+ do {
+ ERR_THROW(t == ts, aug, AUG_EMVDESC,
+ "destination %s is a descendant of %s", dst, src);
+ t = t->parent;
+ } while (t != aug->origin);
+
+ free_tree(td->children);
+
+ td->children = ts->children;
+ list_for_each(c, td->children) {
+ c->parent = td;
+ }
+ free(td->value);
+ td->value = ts->value;
+
+ ts->value = NULL;
+ ts->children = NULL;
+
+ tree_unlink(aug, ts);
+ tree_mark_dirty(td);
+
+ ret = 0;
+ error:
+ free_pathx(s);
+ free_pathx(d);
+ api_exit(aug);
+ return ret;
+}
+
+static void tree_copy_rec(struct tree *src, struct tree *dst) {
+ struct tree *n;
+ char *value;
+
+ list_for_each(c, src->children) {
+ value = c->value == NULL ? NULL : strdup(c->value);
+ n = tree_append_s(dst, c->label, value);
+ tree_copy_rec(c, n);
+ }
+}
+
+int aug_cp(struct augeas *aug, const char *src, const char *dst) {
+ struct pathx *s = NULL, *d = NULL;
+ struct tree *ts, *td, *t;
+ int r, ret;
+
+ api_entry(aug);
+
+ ret = -1;
+ s = pathx_aug_parse(aug, aug->origin, tree_root_ctx(aug), src, true);
+ ERR_BAIL(aug);
+
+ d = pathx_aug_parse(aug, aug->origin, tree_root_ctx(aug), dst, true);
+ ERR_BAIL(aug);
+
+ r = find_one_node(s, &ts);
+ if (r < 0)
+ goto error;
+
+ r = pathx_expand_tree(d, &td);
+ if (r == -1)
+ goto error;
+
+ /* Don't copy SRC into its own descendant */
+ t = td;
+ do {
+ ERR_THROW(t == ts, aug, AUG_ECPDESC,
+ "destination %s is a descendant of %s", dst, src);
+ t = t->parent;
+ } while (t != aug->origin);
+
+ tree_set_value(td, ts->value);
+ free_tree(td->children);
+ td->children = NULL;
+ tree_copy_rec(ts, td);
+ tree_mark_dirty(td);
+
+ ret = 0;
+ error:
+ free_pathx(s);
+ free_pathx(d);
+ api_exit(aug);
+ return ret;
+}
+
+int aug_rename(struct augeas *aug, const char *src, const char *lbl) {
+ struct pathx *s = NULL;
+ struct tree *ts;
+ int ret;
+ int count = 0;
+
+ api_entry(aug);
+
+ ret = -1;
+ ERR_THROW(strchr(lbl, '/') != NULL, aug, AUG_ELABEL,
+ "Label %s contains a /", lbl);
+
+ s = pathx_aug_parse(aug, aug->origin, tree_root_ctx(aug), src, true);
+ ERR_BAIL(aug);
+
+ for (ts = pathx_first(s); ts != NULL; ts = pathx_next(s)) {
+ free(ts->label);
+ ts->label = strdup(lbl);
+ tree_mark_dirty(ts);
+ count++;
+ }
+
+ free_pathx(s);
+ api_exit(aug);
+ return count;
+ error:
+ free_pathx(s);
+ api_exit(aug);
+ return ret;
+}
+
+int aug_match(const struct augeas *aug, const char *pathin, char ***matches) {
+ struct pathx *p = NULL;
+ struct tree *tree;
+ int cnt = 0;
+
+ api_entry(aug);
+
+ if (matches != NULL)
+ *matches = NULL;
+
+ if (STREQ(pathin, "/")) {
+ pathin = "/*";
+ }
+
+ p = pathx_aug_parse(aug, aug->origin, tree_root_ctx(aug), pathin, true);
+ ERR_BAIL(aug);
+
+ for (tree = pathx_first(p); tree != NULL; tree = pathx_next(p)) {
+ if (! TREE_HIDDEN(tree))
+ cnt += 1;
+ }
+ ERR_BAIL(aug);
+
+ if (matches == NULL)
+ goto done;
+
+ if (ALLOC_N(*matches, cnt) < 0)
+ goto error;
+
+ int i = 0;
+ for (tree = pathx_first(p); tree != NULL; tree = pathx_next(p)) {
+ if (TREE_HIDDEN(tree))
+ continue;
+ (*matches)[i] = path_of_tree(tree);
+ if ((*matches)[i] == NULL) {
+ goto error;
+ }
+ i += 1;
+ }
+ ERR_BAIL(aug);
+ done:
+ free_pathx(p);
+ api_exit(aug);
+ return cnt;
+
+ error:
+ if (matches != NULL) {
+ if (*matches != NULL) {
+ for (i = 0; i < cnt; i++)
+ free((*matches)[i]);
+ free(*matches);
+ }
+ }
+ free_pathx(p);
+ api_exit(aug);
+ return -1;
+}
+
+/* XFM1 and XFM2 can both be used to save the same file. That is an error
+ only if the two lenses in the two transforms are actually different. */
+static int check_save_dup(struct augeas *aug, const char *path,
+ struct tree *xfm1, struct tree *xfm2) {
+ int result = 0;
+ struct lens *l1 = xfm_lens(aug, xfm1, NULL);
+ struct lens *l2 = xfm_lens(aug, xfm2, NULL);
+ if (l1 != l2) {
+ const char *filename = path + strlen(AUGEAS_FILES_TREE) + 1;
+ transform_file_error(aug, "mxfm_save", filename,
+ "Lenses %s and %s could be used to save this file",
+ xfm_lens_name(xfm1),
+ xfm_lens_name(xfm2));
+ ERR_REPORT(aug, AUG_EMXFM,
+ "Path %s transformable by lens %s and %s",
+ path,
+ xfm_lens_name(xfm1),
+ xfm_lens_name(xfm2));
+ result = -1;
+ }
+ return result;
+}
+
+static int tree_save(struct augeas *aug, struct tree *tree,
+ const char *path) {
+ int result = 0;
+ struct tree *meta = tree_child_cr(aug->origin, s_augeas);
+ struct tree *load = tree_child_cr(meta, s_load);
+
+ // FIXME: We need to detect subtrees that aren't saved by anything
+
+ if (load == NULL)
+ return -1;
+
+ list_for_each(t, tree) {
+ if (t->file && ! t->dirty) {
+ continue;
+ } else {
+ char *tpath = NULL;
+ struct tree *transform = NULL;
+ if (asprintf(&tpath, "%s/%s", path, t->label) == -1) {
+ result = -1;
+ continue;
+ }
+ if ( t->dirty ) {
+ list_for_each(xfm, load->children) {
+ if (transform_applies(xfm, tpath)) {
+ if (transform == NULL || transform == xfm) {
+ transform = xfm;
+ } else {
+ result = check_save_dup(aug, tpath, transform, xfm);
+ }
+ }
+ }
+ }
+ if (transform != NULL) {
+ /* If this file did not previously exist and is being created by augeas,
+ * then the 'file' flag in the node t will not be set yet. Set it now
+ */
+ t->file = true;
+ int r = transform_save(aug, transform, tpath, t);
+ if (r == -1)
+ result = -1;
+ } else {
+ if (tree_save(aug, t->children, tpath) == -1)
+ result = -1;
+ }
+ free(tpath);
+ }
+ }
+ return result;
+}
+
+/* Reset the flags based on what is set in the tree. */
+static int update_save_flags(struct augeas *aug) {
+ const char *savemode;
+
+ aug_get(aug, AUGEAS_META_SAVE_MODE, &savemode);
+ if (savemode == NULL)
+ return -1;
+
+ aug->flags &= ~(AUG_SAVE_BACKUP|AUG_SAVE_NEWFILE|AUG_SAVE_NOOP);
+ if (STREQ(savemode, AUG_SAVE_NEWFILE_TEXT)) {
+ aug->flags |= AUG_SAVE_NEWFILE;
+ } else if (STREQ(savemode, AUG_SAVE_BACKUP_TEXT)) {
+ aug->flags |= AUG_SAVE_BACKUP;
+ } else if (STREQ(savemode, AUG_SAVE_NOOP_TEXT)) {
+ aug->flags |= AUG_SAVE_NOOP;
+ } else if (STRNEQ(savemode, AUG_SAVE_OVERWRITE_TEXT)) {
+ return -1;
+ }
+
+ return 0;
+}
+
+static int unlink_removed_files(struct augeas *aug,
+ struct tree *files, struct tree *meta) {
+ /* Find all nodes that correspond to a file and might have to be
+ * unlinked. A node corresponds to a file if it has a child labelled
+ * 'path', and we only consider it if there are no errors associated
+ * with it */
+ static const char *const file_nodes =
+ "descendant-or-self::*[path][count(error) = 0]";
+
+ int result = 0;
+
+ if (files->file)
+ return 0;
+
+ for (struct tree *tm = meta->children; tm != NULL;) {
+ struct tree *tf = tree_child(files, tm->label);
+ struct tree *next = tm->next;
+ if (tf == NULL) {
+ /* Unlink all files in tm */
+ struct pathx *px = NULL;
+ if (pathx_parse(tm, err_of_aug(aug), file_nodes, true,
+ aug->symtab, NULL, &px) != PATHX_NOERROR) {
+ result = -1;
+ free_pathx(px);
+ continue;
+ }
+ for (struct tree *t = pathx_first(px);
+ t != NULL;
+ t = pathx_next(px)) {
+ if (remove_file(aug, t) < 0)
+ result = -1;
+ }
+ free_pathx(px);
+ } else if (! tree_child(tm, "path")) {
+ if (unlink_removed_files(aug, tf, tm) < 0)
+ result = -1;
+ }
+ tm = next;
+ }
+ return result;
+}
+
+int aug_save(struct augeas *aug) {
+ int ret = 0;
+ struct tree *meta = tree_child_cr(aug->origin, s_augeas);
+ struct tree *meta_files = tree_child_cr(meta, s_files);
+ struct tree *files = tree_child_cr(aug->origin, s_files);
+ struct tree *load = tree_child_cr(meta, s_load);
+
+ api_entry(aug);
+
+ if (update_save_flags(aug) < 0)
+ goto error;
+
+ if (files == NULL || meta == NULL || load == NULL)
+ goto error;
+
+ aug_rm(aug, AUGEAS_EVENTS_SAVED);
+
+ list_for_each(xfm, load->children)
+ transform_validate(aug, xfm);
+
+ if (tree_save(aug, files->children, AUGEAS_FILES_TREE) == -1)
+ ret = -1;
+
+ /* Remove files whose entire subtree was removed. */
+ if (meta_files != NULL) {
+ if (unlink_removed_files(aug, files, meta_files) < 0)
+ ret = -1;
+ }
+ if (!(aug->flags & AUG_SAVE_NOOP)) {
+ tree_clean(aug->origin);
+ }
+
+ api_exit(aug);
+ return ret;
+ error:
+ api_exit(aug);
+ return -1;
+}
+
+static int print_one(FILE *out, const char *path, const char *value) {
+ int r;
+
+ r = fprintf(out, "%s", path);
+ if (r < 0)
+ return -1;
+ if (value != NULL) {
+ char *val = escape(value, -1, STR_ESCAPES);
+ r = fprintf(out, " = \"%s\"", val);
+ free(val);
+ if (r < 0)
+ return -1;
+ }
+ r = fputc('\n', out);
+ if (r == EOF)
+ return -1;
+ return 0;
+}
+
+/* PATH is the path up to TREE's parent */
+static int print_rec(FILE *out, struct tree *start, const char *ppath,
+ int pr_hidden) {
+ int r;
+ char *path = NULL;
+
+ list_for_each(tree, start) {
+ if (TREE_HIDDEN(tree) && ! pr_hidden)
+ continue;
+
+ path = path_expand(tree, ppath);
+ if (path == NULL)
+ goto error;
+
+ r = print_one(out, path, tree->value);
+ if (r < 0)
+ goto error;
+ r = print_rec(out, tree->children, path, pr_hidden);
+ free(path);
+ path = NULL;
+ if (r < 0)
+ goto error;
+ }
+ return 0;
+ error:
+ free(path);
+ return -1;
+}
+
+static int print_tree(FILE *out, struct pathx *p, int pr_hidden) {
+ char *path = NULL;
+ struct tree *tree;
+ int r;
+
+ for (tree = pathx_first(p); tree != NULL; tree = pathx_next(p)) {
+ if (TREE_HIDDEN(tree) && ! pr_hidden)
+ continue;
+
+ path = path_of_tree(tree);
+ if (path == NULL)
+ goto error;
+ r = print_one(out, path, tree->value);
+ if (r < 0)
+ goto error;
+ r = print_rec(out, tree->children, path, pr_hidden);
+ if (r < 0)
+ goto error;
+ free(path);
+ path = NULL;
+ }
+ return 0;
+ error:
+ free(path);
+ return -1;
+}
+
+int dump_tree(FILE *out, struct tree *tree) {
+ struct pathx *p;
+ int result;
+
+ if (pathx_parse(tree, NULL, "/*", true, NULL, NULL, &p) != PATHX_NOERROR) {
+ free_pathx(p);
+ return -1;
+ }
+
+ result = print_tree(out, p, 1);
+ free_pathx(p);
+ return result;
+}
+
+int aug_text_store(augeas *aug, const char *lens, const char *node,
+ const char *path) {
+
+ struct pathx *p;
+ const char *src;
+ int result = -1, r;
+
+ api_entry(aug);
+
+ /* Validate PATH is syntactically correct */
+ p = pathx_aug_parse(aug, aug->origin, tree_root_ctx(aug), path, true);
+ free_pathx(p);
+ ERR_BAIL(aug);
+
+ r = aug_get(aug, node, &src);
+ ERR_BAIL(aug);
+ ERR_THROW(r == 0, aug, AUG_ENOMATCH,
+ "Source node %s does not exist", node);
+ ERR_THROW(src == NULL, aug, AUG_ENOMATCH,
+ "Source node %s has a NULL value", node);
+
+ result = text_store(aug, lens, path, src);
+ error:
+ api_exit(aug);
+ return result;
+}
+
+int aug_text_retrieve(struct augeas *aug, const char *lens,
+ const char *node_in, const char *path,
+ const char *node_out) {
+ struct tree *tree = NULL;
+ const char *src;
+ char *out = NULL;
+ struct tree *tree_out;
+ int r;
+
+ api_entry(aug);
+
+ tree = tree_find(aug, path);
+ ERR_BAIL(aug);
+
+ r = aug_get(aug, node_in, &src);
+ ERR_BAIL(aug);
+ ERR_THROW(r == 0, aug, AUG_ENOMATCH,
+ "Source node %s does not exist", node_in);
+ ERR_THROW(src == NULL, aug, AUG_ENOMATCH,
+ "Source node %s has a NULL value", node_in);
+
+ r = text_retrieve(aug, lens, path, tree, src, &out);
+ if (r < 0)
+ goto error;
+
+ tree_out = tree_find_cr(aug, node_out);
+ ERR_BAIL(aug);
+
+ tree_store_value(tree_out, &out);
+
+ api_exit(aug);
+ return 0;
+ error:
+ free(out);
+ api_exit(aug);
+ return -1;
+}
+
+int aug_transform(struct augeas *aug, const char *lens,
+ const char *file, int excl) {
+ struct tree *meta = tree_child_cr(aug->origin, s_augeas);
+ struct tree *load = tree_child_cr(meta, s_load);
+
+ int r = 0, result = -1;
+ struct tree *xfm = NULL, *lns = NULL, *t = NULL;
+ const char *filter = NULL;
+ char *p;
+ int exists;
+ char *lensname = NULL, *xfmname = NULL;
+
+ api_entry(aug);
+
+ ERR_NOMEM(meta == NULL || load == NULL, aug);
+
+ ARG_CHECK(STREQ("", lens), aug, "aug_transform: LENS must not be empty");
+ ARG_CHECK(STREQ("", file), aug, "aug_transform: FILE must not be empty");
+
+ if ((p = strrchr(lens, '.'))) {
+ lensname = strdup(lens);
+ xfmname = strndup(lens, p - lens);
+ ERR_NOMEM(lensname == NULL || xfmname == NULL, aug);
+ } else {
+ r = xasprintf(&lensname, "%s.lns", lens);
+ xfmname = strdup(lens);
+ ERR_NOMEM(r < 0 || xfmname == NULL, aug);
+ }
+
+ xfm = tree_child_cr(load, xfmname);
+ ERR_NOMEM(xfm == NULL, aug);
+
+ lns = tree_child_cr(xfm, s_lens);
+ ERR_NOMEM(lns == NULL, aug);
+
+ tree_store_value(lns, &lensname);
+
+ exists = 0;
+
+ filter = excl ? s_excl : s_incl;
+ list_for_each(c, xfm->children) {
+ if (c->value != NULL && STREQ(c->value, file)
+ && streqv(c->label, filter)) {
+ exists = 1;
+ break;
+ }
+ }
+ if (! exists) {
+ t = tree_append_s(xfm, filter, NULL);
+ ERR_NOMEM(t == NULL, aug);
+ r = tree_set_value(t, file);
+ ERR_NOMEM(r < 0, aug);
+ }
+
+ result = 0;
+ error:
+ free(lensname);
+ free(xfmname);
+ api_exit(aug);
+ return result;
+}
+
+int aug_escape_name(augeas *aug, const char *in, char **out) {
+ int result = -1;
+
+ api_entry(aug);
+ ARG_CHECK(in == NULL, aug, "aug_escape_name: IN must not be NULL");
+ ARG_CHECK(out == NULL, aug, "aug_escape_name: OUT must not be NULL");
+
+ result = pathx_escape_name(in, out);
+ ERR_NOMEM(result < 0, aug);
+ error:
+ api_exit(aug);
+ return result;
+}
+
+int aug_load_file(struct augeas *aug, const char *file) {
+ int result = -1, r;
+ struct tree *meta = tree_child_cr(aug->origin, s_augeas);
+ struct tree *load = tree_child_cr(meta, s_load);
+ char *tree_path = NULL;
+ bool found = false;
+
+ api_entry(aug);
+
+ ERR_NOMEM(load == NULL, aug);
+
+ list_for_each(xfm, load->children) {
+ if (filter_matches(xfm, file)) {
+ transform_load(aug, xfm, file);
+ found = true;
+ break;
+ }
+ }
+
+ ERR_THROW(!found, aug, AUG_ENOLENS,
+ "can not determine lens to load file %s", file);
+
+ /* Mark the nodes we just loaded as clean so they won't get saved
+ without additional modifications */
+ r = xasprintf(&tree_path, "/files/%s", file);
+ ERR_NOMEM(r < 0, aug);
+
+ struct tree *t = tree_fpath(aug, tree_path);
+ if (t != NULL) {
+ tree_clean(t);
+ }
+
+ result = 0;
+error:
+ api_exit(aug);
+ free(tree_path);
+ return result;
+}
+
+int aug_print(const struct augeas *aug, FILE *out, const char *pathin) {
+ struct pathx *p;
+ int result = -1;
+
+ api_entry(aug);
+
+ if (pathin == NULL || strlen(pathin) == 0) {
+ pathin = "/*";
+ }
+
+ p = pathx_aug_parse(aug, aug->origin, tree_root_ctx(aug), pathin, true);
+ ERR_BAIL(aug);
+
+ result = print_tree(out, p, 0);
+ error:
+ free_pathx(p);
+ api_exit(aug);
+ return result;
+}
+
+static char *
+tree_source(const augeas *aug, struct tree *tree) {
+ char *result = NULL;
+
+ while (!(ROOT_P(tree) || tree->file))
+ tree = tree->parent;
+
+ if (tree->file) {
+ if (tree->span == NULL) {
+ int r;
+ r = ALLOC(tree->span);
+ ERR_NOMEM(r < 0, aug);
+ tree->span->filename = make_string(path_of_tree(tree));
+ ERR_NOMEM(tree->span->filename == NULL, aug);
+ }
+ result = strdup(tree->span->filename->str);
+ ERR_NOMEM(result == NULL, aug);
+ }
+ error:
+ return result;
+}
+
+int aug_source(const augeas *aug, const char *path, char **file_path) {
+ int result = -1, r;
+ struct pathx *p = NULL;
+ struct tree *match;
+
+ api_entry(aug);
+
+ ARG_CHECK(file_path == NULL, aug,
+ "aug_source: FILE_PATH must not be NULL");
+ *file_path = NULL;
+
+ p = pathx_aug_parse(aug, aug->origin, tree_root_ctx(aug), path, true);
+ ERR_BAIL(aug);
+
+ r = pathx_find_one(p, &match);
+ ERR_BAIL(aug);
+ ERR_THROW(r > 1, aug, AUG_EMMATCH, "There are %d nodes matching %s",
+ r, path);
+ ERR_THROW(r == 0, aug, AUG_ENOMATCH, "There is no node matching %s",
+ path);
+
+ *file_path = tree_source(aug, match);
+ ERR_BAIL(aug);
+
+ result = 0;
+ error:
+ free_pathx(p);
+ api_exit(aug);
+ return result;
+}
+
+int aug_preview(struct augeas *aug, const char *path, char **out) {
+ struct tree *tree = NULL;
+ struct pathx *p;
+ int r;
+ int result = -1;
+ char *lens_path = NULL;
+ char *lens_name = NULL;
+ char *file_path = NULL;
+ char *source_filename = NULL;
+ char *source_text = NULL;
+
+ *out = NULL;
+
+ api_entry(aug);
+
+ p = pathx_aug_parse(aug, aug->origin, tree_root_ctx(aug), path, true);
+ ERR_BAIL(aug);
+
+ tree = pathx_first(p);
+ ERR_BAIL(aug);
+ ERR_THROW(tree == NULL, aug, AUG_ENOMATCH, "No node matching %s", path);
+
+ file_path = tree_source(aug, tree);
+
+ ERR_THROW(file_path == NULL, aug, AUG_EBADARG, "Path %s is not associated with a file", path);
+
+ tree = tree_find(aug, file_path);
+
+ xasprintf(&lens_path, "%s%s/%s", AUGEAS_META_TREE, file_path, s_lens);
+ ERR_NOMEM(lens_path == NULL, aug);
+
+ aug_get(aug, lens_path, (const char **) &lens_name);
+ ERR_BAIL(aug);
+
+ ERR_THROW(lens_name == NULL, aug, AUG_ENOLENS, "No lens found for path %s", path);
+
+ xasprintf(&source_filename, "%s%s", aug->root, file_path + strlen(AUGEAS_FILES_TREE) + 1);
+ ERR_NOMEM(source_filename == NULL, aug);
+
+ source_text = xread_file(source_filename);
+
+ ERR_THROW(source_text == NULL, aug, AUG_EFILEACCESS, "Cannot read file %s", source_filename);
+
+ r = text_retrieve(aug, lens_name, file_path, tree, source_text, out);
+ if (r < 0)
+ goto error;
+
+ result = 0;
+
+ error:
+ free_pathx(p);
+ free(file_path);
+ free(lens_path);
+ free(source_filename);
+ free(source_text);
+ api_exit(aug);
+ return result;
+}
+
+void aug_close(struct augeas *aug) {
+ if (aug == NULL)
+ return;
+
+ /* There's no point in bothering with api_entry/api_exit here */
+ free_tree(aug->origin);
+ unref(aug->modules, module);
+ if (aug->error->exn != NULL) {
+ aug->error->exn->ref = 0;
+ free_value(aug->error->exn);
+ aug->error->exn = NULL;
+ }
+ free((void *) aug->root);
+ free(aug->modpathz);
+ free_symtab(aug->symtab);
+ unref(aug->error->info, info);
+ free(aug->error->details);
+ free(aug->error);
+ free(aug);
+}
+
+int __aug_load_module_file(struct augeas *aug, const char *filename) {
+ api_entry(aug);
+ int r = load_module_file(aug, filename, NULL);
+ api_exit(aug);
+ return r;
+}
+
+int tree_equal(const struct tree *t1, const struct tree *t2) {
+ while (t1 != NULL && t2 != NULL) {
+ if (!streqv(t1->label, t2->label))
+ return 0;
+ if (!streqv(t1->value, t2->value))
+ return 0;
+ if (! tree_equal(t1->children, t2->children))
+ return 0;
+ t1 = t1->next;
+ t2 = t2->next;
+ }
+ return t1 == t2;
+}
+
+int aug_ns_attr(const augeas* aug, const char *var, int i,
+ const char **value, const char **label, char **file_path) {
+ int result = -1;
+
+ if (value != NULL)
+ *value = NULL;
+
+ if (label != NULL)
+ *label = NULL;
+
+ if (file_path != NULL)
+ *file_path = NULL;
+
+ api_entry(aug);
+
+ struct tree *tree = pathx_symtab_get_tree(aug->symtab, var, i);
+ ERR_THROW(tree == NULL, aug, AUG_ENOMATCH,
+ "Node %s[%d] does not exist", var, i);
+
+ if (file_path != NULL) {
+ *file_path = tree_source(aug, tree);
+ ERR_BAIL(aug);
+ }
+
+ if (value != NULL)
+ *value = tree->value;
+
+ if (label != NULL)
+ *label = tree->label;
+
+ result = 1;
+
+ error:
+ api_exit(aug);
+ return result;
+}
+
+int aug_ns_label(const augeas* aug, const char *var, int i,
+ const char **label, int *index) {
+ int result = -1;
+
+ if (label != NULL)
+ *label = NULL;
+
+ if (index != NULL)
+ *index = -1;
+
+ api_entry(aug);
+
+ struct tree *tree = pathx_symtab_get_tree(aug->symtab, var, i);
+ ERR_THROW(tree == NULL, aug, AUG_ENOMATCH,
+ "Node %s[%d] does not exist", var, i);
+
+ if (label != NULL)
+ *label = tree->label;
+
+ if (index != NULL) {
+ *index = tree_sibling_index(tree);
+ }
+
+ result = 1;
+
+ error:
+ api_exit(aug);
+ return result;
+}
+
+int aug_ns_value(const augeas* aug, const char *var, int i,
+ const char **value) {
+ int result = -1;
+
+ if (value != NULL)
+ *value = NULL;
+
+ api_entry(aug);
+
+ struct tree *tree = pathx_symtab_get_tree(aug->symtab, var, i);
+ ERR_THROW(tree == NULL, aug, AUG_ENOMATCH,
+ "Node %s[%d] does not exist", var, i);
+
+ if (value != NULL)
+ *value = tree->value;
+
+ result = 1;
+
+ error:
+ api_exit(aug);
+ return result;
+}
+
+int aug_ns_count(const augeas *aug, const char *var) {
+ int result = -1;
+
+ api_entry(aug);
+
+ result = pathx_symtab_count(aug->symtab, var);
+
+ api_exit(aug);
+
+ return result;
+}
+
+int aug_ns_path(const augeas *aug, const char *var, int i, char **path) {
+ int result = -1;
+
+ *path = NULL;
+
+ api_entry(aug);
+
+ struct tree *tree = pathx_symtab_get_tree(aug->symtab, var, i);
+ ERR_THROW(tree == NULL, aug, AUG_ENOMATCH,
+ "Node %s[%d] does not exist", var, i);
+
+ *path = path_of_tree(tree);
+
+ result = 0;
+
+ error:
+ api_exit(aug);
+ return result;
+}
+
+/*
+ * Error reporting API
+ */
+int aug_error(struct augeas *aug) {
+ return aug->error->code;
+}
+
+const char *aug_error_message(struct augeas *aug) {
+ aug_errcode_t errcode = aug->error->code;
+
+ if (errcode >= ARRAY_CARDINALITY(errcodes))
+ errcode = AUG_EINTERNAL;
+ return errcodes[errcode];
+}
+
+const char *aug_error_minor_message(struct augeas *aug) {
+ return aug->error->minor_details;
+}
+
+const char *aug_error_details(struct augeas *aug) {
+ return aug->error->details;
+}
+
+/*
+ * Local variables:
+ * indent-tabs-mode: nil
+ * c-indent-level: 4
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
--- /dev/null
+/*
+ * augeas.h: public headers for augeas
+ *
+ * Copyright (C) 2007-2016 David Lutterkort
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: David Lutterkort <dlutter@redhat.com>
+ */
+
+#include <stdio.h>
+#include <libxml/tree.h>
+
+#ifndef AUGEAS_H_
+#define AUGEAS_H_
+
+typedef struct augeas augeas;
+
+/* Enum: aug_flags
+ *
+ * Flags to influence the behavior of Augeas. Pass a bitmask of these flags
+ * to AUG_INIT.
+ */
+enum aug_flags {
+ AUG_NONE = 0,
+ AUG_SAVE_BACKUP = (1 << 0), /* Keep the original file with a
+ .augsave extension */
+ AUG_SAVE_NEWFILE = (1 << 1), /* Save changes into a file with
+ extension .augnew, and do not
+ overwrite the original file. Takes
+ precedence over AUG_SAVE_BACKUP */
+ AUG_TYPE_CHECK = (1 << 2), /* Typecheck lenses; since it can be very
+ expensive it is not done by default */
+ AUG_NO_STDINC = (1 << 3), /* Do not use the builtin load path for
+ modules */
+ AUG_SAVE_NOOP = (1 << 4), /* Make save a no-op process, just record
+ what would have changed */
+ AUG_NO_LOAD = (1 << 5), /* Do not load the tree from AUG_INIT */
+ AUG_NO_MODL_AUTOLOAD = (1 << 6), /* Do not autoload the lens modules */
+ AUG_ENABLE_SPAN = (1 << 7), /* Track the span in the input of nodes */
+ AUG_NO_ERR_CLOSE = (1 << 8), /* Do not close automatically when
+ encountering error during aug_init */
+ AUG_TRACE_MODULE_LOADING = (1 << 9) /* For use by augparse -t */
+};
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* Function: aug_init
+ *
+ * Initialize the library.
+ *
+ * Use ROOT as the filesystem root. If ROOT is NULL, use the value of the
+ * environment variable AUGEAS_ROOT. If that doesn't exist either, use "/".
+ *
+ * LOADPATH is a colon-separated list of directories that modules should be
+ * searched in. This is in addition to the standard load path and the
+ * directories in AUGEAS_LENS_LIB. LOADPATH can be NULL, indicating that
+ * nothing should be added to the load path.
+ *
+ * FLAGS is a bitmask made up of values from AUG_FLAGS. The flag
+ * AUG_NO_ERR_CLOSE can be used to get more information on why
+ * initialization failed. If it is set in FLAGS, the caller must check that
+ * aug_error returns AUG_NOERROR before using the returned augeas handle
+ * for any other operation. If the handle reports any error, the caller
+ * should only call the aug_error functions and aug_close on this handle.
+ *
+ * Returns:
+ * a handle to the Augeas tree upon success. If initialization fails,
+ * returns NULL if AUG_NO_ERR_CLOSE is not set in FLAGS. If
+ * AUG_NO_ERR_CLOSE is set, might return an Augeas handle even on
+ * failure. In that case, the caller must check for errors using aug_error,
+ * and, if an error is reported, only use the handle with the aug_error
+ * functions and aug_close.
+ */
+augeas *aug_init(const char *root, const char *loadpath, unsigned int flags);
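+
+/* Example: a minimal initialization sketch (illustrative only, not part
+ * of the library sources; error handling abbreviated):
+ *
+ * augeas *aug = aug_init(NULL, NULL, AUG_NONE);
+ * if (aug == NULL)
+ * return -1;
+ * ... use the aug_* API ...
+ * aug_close(aug);
+ */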
+
+/* Function: aug_defvar
+ *
+ * Define a variable NAME whose value is the result of evaluating EXPR. If
+ * a variable NAME already exists, its value will be replaced with the
+ * result of evaluating EXPR. Context will not be applied to EXPR.
+ *
+ * If EXPR is NULL, the variable NAME will be removed if it is defined.
+ *
+ * Path variables can be used in path expressions later on by prefixing
+ * them with '$'.
+ *
+ * Returns -1 on error; on success, returns 0 if EXPR evaluates to anything
+ * other than a nodeset, and the number of nodes if EXPR evaluates to a
+ * nodeset
+ */
+int aug_defvar(augeas *aug, const char *name, const char *expr);
+
+/* Function: aug_defnode
+ *
+ * Define a variable NAME whose value is the result of evaluating EXPR,
+ * which must be non-NULL and evaluate to a nodeset. If a variable NAME
+ * already exists, its value will be replaced with the result of evaluating
+ * EXPR.
+ *
+ * If EXPR evaluates to an empty nodeset, a node is created, equivalent to
+ * calling AUG_SET(AUG, EXPR, VALUE), and NAME will be the nodeset
+ * containing that single node.
+ *
+ * If CREATED is non-NULL, it is set to 1 if a node was created, and 0 if
+ * it already existed.
+ *
+ * Returns -1 on error; on success, returns the number of nodes in the
+ * nodeset
+ */
+int aug_defnode(augeas *aug, const char *name, const char *expr,
+ const char *value, int *created);
+
+/* Function: aug_get
+ *
+ * Lookup the value associated with PATH. VALUE can be NULL, in which case
+ * it is ignored. If VALUE is not NULL, it is used to return a pointer to
+ * the value associated with PATH if PATH matches exactly one node. If PATH
+ * matches no nodes or more than one node, *VALUE is set to NULL. Note that
+ * it is perfectly legal for nodes to have a NULL value, and that that by
+ * itself does not indicate an error.
+ *
+ * The string *VALUE must not be freed by the caller, and is valid as long
+ * as its node remains unchanged.
+ *
+ * Returns:
+ * 1 if there is exactly one node matching PATH, 0 if there is none,
+ * and a negative value if there is more than one node matching PATH, or if
+ * PATH is not a legal path expression.
+ */
+int aug_get(const augeas *aug, const char *path, const char **value);
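The init/get/close lifecycle described above can be sketched as a small standalone program. This is an illustrative sketch, not part of this header: it assumes libaugeas is installed, is compiled with `-laugeas`, and uses an example path that may not match anything on a given system:

```c
#include <stdio.h>
#include <augeas.h>

int main(void) {
    /* NULL root: fall back to $AUGEAS_ROOT, then "/" */
    augeas *aug = aug_init(NULL, NULL, AUG_NO_ERR_CLOSE);
    if (aug == NULL || aug_error(aug) != AUG_NOERROR) {
        if (aug != NULL) {
            fprintf(stderr, "init failed: %s\n", aug_error_message(aug));
            aug_close(aug);
        }
        return 1;
    }

    const char *value;
    /* r is 1 for exactly one match, 0 for none, negative otherwise */
    int r = aug_get(aug, "/files/etc/hostname/hostname", &value);
    if (r == 1 && value != NULL)
        printf("hostname = %s\n", value);

    aug_close(aug);   /* value must not be used after this point */
    return 0;
}
```

Note that *value is owned by the tree: it is never freed by the caller and becomes invalid once the node changes or the handle is closed.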
+
+/* Function: aug_label
+ *
+ * Lookup the label associated with PATH. LABEL can be NULL, in which case
+ * it is ignored. If LABEL is not NULL, it is used to return a pointer to
+ * the label associated with PATH if PATH matches exactly one node. If PATH
+ * matches no nodes or more than one node, *LABEL is set to NULL.
+ *
+ * The string *LABEL must not be freed by the caller, and is valid as long
+ * as its node remains unchanged.
+ *
+ * Returns:
+ * 1 if there is exactly one node matching PATH, 0 if there is none,
+ * and a negative value if there is more than one node matching PATH, or if
+ * PATH is not a legal path expression.
+ */
+int aug_label(const augeas *aug, const char *path, const char **label);
+
+/* Function: aug_set
+ *
+ * Set the value associated with PATH to VALUE. VALUE is copied into the
+ * internal data structure, and the caller is responsible for freeing
+ * it. Intermediate entries are created if they don't exist.
+ *
+ * Returns:
+ * 0 on success, -1 on error. It is an error if more than one node
+ * matches PATH.
+ */
+int aug_set(augeas *aug, const char *path, const char *value);
+
+/* Function: aug_setm
+ *
+ * Set the value of multiple nodes in one operation. Find or create a node
+ * matching SUB by interpreting SUB as a path expression relative to each
+ * node matching BASE. SUB may be NULL, in which case all the nodes
+ * matching BASE will be modified.
+ *
+ * Returns:
+ * number of modified nodes on success, -1 on error
+ */
+int aug_setm(augeas *aug, const char *base, const char *sub, const char *value);
+
+/* Function: aug_span
+ *
+ * Get the span, according to the input file, of the node associated with
+ * PATH. If the node is associated with a file, the filename and the label,
+ * value and span start and end positions are set, and the return value is
+ * 0. The caller is responsible for freeing the returned filename. If an
+ * argument for a return value is NULL, the corresponding value is not
+ * set. If the node associated with PATH does not belong to a file or does
+ * not exist, the filename and span are not set and the return value is -1.
+ *
+ * Returns:
+ * 0 on success with filename, label_start, label_stop, value_start, value_end,
+ * span_start, span_end
+ * -1 on error
+ */
+
+int aug_span(augeas *aug, const char *path, char **filename,
+ unsigned int *label_start, unsigned int *label_end,
+ unsigned int *value_start, unsigned int *value_end,
+ unsigned int *span_start, unsigned int *span_end);
+
+/* Function: aug_insert
+ *
+ * Create a new sibling LABEL for PATH by inserting into the tree just
+ * before PATH if BEFORE == 1 or just after PATH if BEFORE == 0.
+ *
+ * PATH must match exactly one existing node in the tree, and LABEL must be
+ * a label, i.e. not contain a '/', '*' or end with a bracketed index
+ * '[N]'.
+ *
+ * Returns:
+ * 0 on success, and -1 if the insertion fails.
+ */
+int aug_insert(augeas *aug, const char *path, const char *label, int before);
+
+/* Function: aug_rm
+ *
+ * Remove path and all its children. Returns the number of entries removed.
+ * All nodes that match PATH, and their descendants, are removed.
+ */
+int aug_rm(augeas *aug, const char *path);
+
+/* Function: aug_mv
+ *
+ * Move the node SRC to DST. SRC must match exactly one node in the
+ * tree. DST must either match exactly one node in the tree, or may not
+ * exist yet. If DST exists already, it and all its descendants are
+ * deleted. If DST does not exist yet, it and all its missing ancestors are
+ * created.
+ *
+ * Note that the node SRC always becomes the node DST: when you move /a/b
+ * to /x, the node /a/b is now called /x, no matter whether /x existed
+ * initially or not.
+ *
+ * Returns:
+ * 0 on success and -1 on failure.
+ */
+int aug_mv(augeas *aug, const char *src, const char *dst);
+
+/* Function: aug_cp
+ *
+ * Copy the node SRC to DST. SRC must match exactly one node in the
+ * tree. DST must either match exactly one node in the tree, or may not
+ * exist yet. If DST exists already, it and all its descendants are
+ * deleted. If DST does not exist yet, it and all its missing ancestors are
+ * created.
+ *
+ * Returns:
+ * 0 on success and -1 on failure.
+ */
+int aug_cp(augeas *aug, const char *src, const char *dst);
+
+/* Function: aug_rename
+ *
+ * Rename the label of all nodes matching SRC to LBL.
+ *
+ * Returns:
+ * The number of nodes renamed on success and -1 on failure.
+ */
+int aug_rename(augeas *aug, const char *src, const char *lbl);
+
+/* Function: aug_match
+ *
+ * Returns:
+ * the number of matches of the path expression PATH in AUG. If
+ * MATCHES is non-NULL, an array with the returned number of elements will
+ * be allocated and filled with the paths of the matches. The caller must
+ * free both the array and the entries in it. The returned paths are
+ * sufficiently qualified to make sure that they match exactly one node in
+ * the current tree.
+ *
+ * If MATCHES is NULL, nothing is allocated and only the number
+ * of matches is returned.
+ *
+ * Returns -1 on error, or the total number of matches (which might be 0).
+ *
+ * Path expressions:
+ * Path expressions use a very simple subset of XPath: the path PATH
+ * consists of a number of segments, separated by '/'; each segment can
+ * either be a '*', matching any tree node, or a string, optionally
+ * followed by an index in brackets, matching tree nodes labelled with
+ * exactly that string. If no index is specified, the expression matches
+ * all nodes with that label; the index can be a positive number N, which
+ * matches exactly the Nth node with that label (counting from 1), or the
+ * special expression 'last()' which matches the last node with the given
+ * label. All matches are done in fixed positions in the tree, and nothing
+ * matches more than one path segment.
+ *
+ */
+int aug_match(const augeas *aug, const char *path, char ***matches);
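The ownership rules above mean that when MATCHES is non-NULL, both the array and each entry must be freed by the caller. A sketch of the usual iteration pattern (assumes libaugeas; the caller supplies an already-initialized handle and a path expression):

```c
#include <stdio.h>
#include <stdlib.h>
#include <augeas.h>

/* Print each path matching EXPR, then free everything aug_match allocated. */
static void print_matches(augeas *aug, const char *expr) {
    char **matches = NULL;
    int n = aug_match(aug, expr, &matches);
    if (n < 0)
        return;                   /* invalid expression or other error */
    for (int i = 0; i < n; i++) {
        printf("%s\n", matches[i]);
        free(matches[i]);         /* caller frees each entry ... */
    }
    free(matches);                /* ... and the array itself */
}
```

Because the returned paths are fully qualified, each one can be passed back to aug_get or aug_set and is guaranteed to match exactly one node in the current tree.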
+
+/* Function: aug_save
+ *
+ * Write all pending changes to disk.
+ *
+ * Returns:
+ * -1 if an error is encountered,
+ * 0 on success. Only files that had any changes made to them are written.
+ *
+ * If AUG_SAVE_NEWFILE is set in the FLAGS passed to AUG_INIT, create
+ * changed files as new files with the extension ".augnew", and leave the
+ * original file unmodified.
+ *
+ * Otherwise, if AUG_SAVE_BACKUP is set in the FLAGS passed to AUG_INIT,
+ * move the original file to a new file with extension ".augsave".
+ *
+ * If neither of these flags is set, overwrite the original file.
+ */
+int aug_save(augeas *aug);
+
+/* Function: aug_load
+ *
+ * Load files into the tree. Which files to load and what lenses to use on
+ * them is specified under /augeas/load in the tree; each entry
+ * /augeas/load/NAME specifies a 'transform', by having itself exactly one
+ * child 'lens' and any number of children labelled 'incl' and 'excl'. The
+ * value of NAME has no meaning.
+ *
+ * The 'lens' grandchild of /augeas/load specifies which lens to use, and
+ * can either be the fully qualified name of a lens 'Module.lens' or
+ * '@Module'. The latter form means that the lens from the transform marked
+ * for autoloading in MODULE should be used.
+ *
+ * The 'incl' and 'excl' grandchildren of /augeas/load indicate which files
+ * to transform. Their values are used as glob patterns. Any file that
+ * matches at least one 'incl' pattern and no 'excl' pattern is
+ * transformed. The order of 'incl' and 'excl' entries is irrelevant.
+ *
+ * When AUG_INIT is first called, it populates /augeas/load with the
+ * transforms marked for autoloading in all the modules it finds.
+ *
+ * Before loading any files, AUG_LOAD will remove everything underneath
+ * /augeas/files and /files, regardless of whether any entries have been
+ * modified or not.
+ *
+ * Returns -1 on error, 0 on success. Note that success includes the case
+ * where some files could not be loaded. Details of such files can be found
+ * as '/augeas//error'.
+ */
+int aug_load(augeas *aug);
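A transform under /augeas/load as described above can be set up by hand with aug_set before calling aug_load. A sketch assuming libaugeas; the transform name "Example" is arbitrary (the value of NAME has no meaning), and aug_transform provides the same effect in a single call:

```c
#include <augeas.h>

/* Configure /augeas/load/Example to parse /etc/hosts with the Hosts
 * lens, then load. The entry name "Example" carries no meaning. */
static int load_with_transform(augeas *aug) {
    aug_set(aug, "/augeas/load/Example/lens", "Hosts.lns");
    aug_set(aug, "/augeas/load/Example/incl", "/etc/hosts");
    /* aug_load removes everything under /augeas/files and /files
     * before re-reading the configured files */
    return aug_load(aug);
}
```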
+
+/* Function: aug_text_store
+ *
+ * Use the value of node NODE as a string and transform it into a tree
+ * using the lens LENS and store it in the tree at PATH, which will be
+ * overwritten. PATH and NODE are path expressions.
+ *
+ * Returns:
+ * 0 on success, or a negative value on failure
+ */
+int aug_text_store(augeas *aug, const char *lens, const char *node,
+ const char *path);
+
+/* Function: aug_text_retrieve
+ *
+ * Transform the tree at PATH into a string using lens LENS and store it in
+ * the node NODE_OUT, assuming the tree was initially generated using the
+ * value of node NODE_IN. PATH, NODE_IN, and NODE_OUT are path expressions.
+ *
+ * Returns:
+ * 0 on success, or a negative value on failure
+ */
+int aug_text_retrieve(struct augeas *aug, const char *lens,
+ const char *node_in, const char *path,
+ const char *node_out);
+
+/* Function: aug_escape_name
+ *
+ * Escape special characters in a string such that it can be used as part
+ * of a path expressions and only matches a node named exactly
+ * IN. Characters that have special meanings in path expressions, such as
+ * '[' and ']' are prefixed with a '\\'. Note that this function assumes
+ * that it is passed a name, not a path, and will therefore escape '/',
+ * too.
+ *
+ * On return, *OUT is NULL if IN does not need any escaping at all, and
+ * points to an escaped copy of IN otherwise.
+ *
+ * Returns:
+ * 0 on success, or a negative value on failure
+ */
+int aug_escape_name(augeas *aug, const char *in, char **out);
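To illustrate the kind of escaping aug_escape_name performs, here is a simplified local helper, not the library function: it only handles the characters named in the comment above ('[', ']', '/', '*'), whereas the real implementation knows the full set of path-expression metacharacters and returns NULL in *OUT when no escaping is needed:

```c
#include <stdlib.h>
#include <string.h>

/* Simplified illustration of aug_escape_name's escaping. Unlike the
 * real function, it always allocates, and only escapes the characters
 * named in the comment above. Caller frees the result. */
static char *example_escape_name(const char *in) {
    size_t len = strlen(in);
    char *out = malloc(2 * len + 1);   /* worst case: every char escaped */
    if (out == NULL)
        return NULL;
    char *p = out;
    for (const char *c = in; *c != '\0'; c++) {
        if (strchr("[]/*", *c) != NULL)
            *p++ = '\\';               /* prefix metacharacter with '\' */
        *p++ = *c;
    }
    *p = '\0';
    return out;
}
```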
+
+/* Function: aug_print
+ *
+ * Print each node matching PATH and its descendants to OUT.
+ *
+ * Returns:
+ * 0 on success, or a negative value on failure
+ */
+int aug_print(const augeas *aug, FILE *out, const char *path);
+
+/* Function: aug_source
+ *
+ * For the node matching PATH, return the path to the node representing the
+ * file to which PATH belongs. If PATH belongs to a file, *FILE_PATH will
+ * contain the path to the toplevel node of that file underneath /files. If
+ * it does not, *FILE_PATH will be NULL.
+ *
+ * The caller is responsible for freeing *FILE_PATH
+ *
+ * Returns:
+ * 0 on success, or a negative value on failure. It is an error if PATH
+ * matches more than one node.
+ */
+int aug_source(const augeas *aug, const char *path, char **file_path);
+
+/* Function: aug_preview
+ *
+ * Return the contents of the file that would be written for the file
+ * associated with PATH. If there is no file corresponding to PATH, *OUT
+ * will be NULL. The caller is responsible for freeing *OUT.
+ *
+ * Returns:
+ * 0 on success, -1 on error
+ */
+int aug_preview(augeas *aug, const char *path, char **out);
+
+/* Function: aug_to_xml
+ *
+ * Turn the Augeas tree(s) matching PATH into an XML tree XMLDOC. The
+ * parameter FLAGS is currently unused and must be set to 0.
+ *
+ * Returns:
+ * 0 on success, or a negative value on failure
+ *
+ * In case of failure, *xmldoc is set to NULL
+ */
+int aug_to_xml(const augeas *aug, const char *path, xmlNode **xmldoc,
+ unsigned int flags);
+
+/*
+ * Function: aug_transform
+ *
+ * Add a transform for FILE using LENS.
+ * EXCL specifies whether the file is to be included (0)
+ * or excluded (1) from the LENS.
+ * The LENS may be a module name or a full lens name.
+ * If a module name is given, the lens 'lns' in that module is assumed.
+ *
+ * Returns:
+ * 1 on success, -1 on failure
+ */
+int aug_transform(augeas *aug, const char *lens, const char *file, int excl);
+
+/*
+ * Function: aug_load_file
+ *
+ * Load a FILE using the lens that would ordinarily be used by aug_load,
+ * i.e. the lens whose autoload statement matches the FILE. Similar to
+ * aug_load, this function returns successfully even if FILE does not exist
+ * or if the FILE can not be processed by the associated lens. It is an
+ * error though if no lens can be found to process FILE. In that case, the
+ * error code in AUG will be set to AUG_ENOLENS.
+ *
+ * Returns:
+ * 0 on success, -1 on failure
+ */
+int aug_load_file(augeas *aug, const char *file);
+
+/*
+ * Function: aug_srun
+ *
+ * Run one or more newline-separated commands. The output of the commands
+ * will be printed to OUT. Running just 'help' will print what commands are
+ * available. Commands accepted by this are identical to what augtool
+ * accepts.
+ *
+ * Returns:
+ * the number of executed commands on success, -1 on failure, and -2 if a
+ * 'quit' command was encountered
+ */
+int aug_srun(augeas *aug, FILE *out, const char *text);
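Since aug_srun accepts the same commands as augtool, several operations can be batched in one newline-separated string. A sketch assuming libaugeas and an example path:

```c
#include <stdio.h>
#include <augeas.h>

/* Run two augtool-style commands in one aug_srun call; command output
 * goes to stdout. Returns the number of commands executed, -1 on
 * failure, or -2 if a 'quit' command is encountered. */
static int set_and_print(augeas *aug) {
    return aug_srun(aug, stdout,
                    "set /files/etc/hostname/hostname example\n"
                    "print /files/etc/hostname");
}
```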
+
+/* Function: aug_close
+ *
+ * Close this Augeas instance and free any storage associated with
+ * it. After running AUG_CLOSE, AUG is invalid and can not be used for any
+ * more operations.
+ */
+void aug_close(augeas *aug);
+
+// We can't put //* into the examples in these comments since the C
+// preprocessor complains about that. So we'll resort to the equivalent but
+// more wordy notation /descendant::*
+
+/*
+ * Function: aug_ns_attr
+ *
+ * Look up the ith node in the variable VAR and retrieve information about
+ * it. Set *VALUE to the value of the node, *LABEL to its label, and
+ * *FILE_PATH to the path of the file it belongs to, or to NULL if that
+ * node does not belong to a file. It is permissible to pass NULL for any
+ * of these variables to indicate that the caller is not interested in that
+ * attribute.
+ *
+ * It is assumed that VAR was defined with a path expression evaluating to
+ * a nodeset, like '/files/etc/hosts/descendant::*'. This function is
+ * equivalent to, but faster than, aug_get(aug, "$VAR[I+1]", value) and
+ * the corresponding calls to aug_label and aug_source. Note
+ * that the index is 0-based, not 1-based.
+ *
+ * If VAR does not exist, or is not a nodeset, or if it has fewer than I
+ * nodes, this call fails.
+ *
+ * The caller is responsible for freeing *FILE_PATH, but must not free
+ * *VALUE or *LABEL. Those pointers are only valid up to the next call to a
+ * function in this API that might modify the tree.
+ *
+ * Returns:
+ * 1 on success (for consistency with aug_get), a negative value on failure
+ */
+int aug_ns_attr(const augeas* aug, const char *var, int i,
+ const char **value, const char **label, char **file_path);
+
+/*
+ * Function: aug_ns_label
+ *
+ * Look up the LABEL and its INDEX amongst its siblings for the ith node in
+ * variable VAR. (See aug_ns_attr for details of what is expected of VAR)
+ *
+ * Either of LABEL and INDEX may be NULL. The *INDEX will be set to the
+ * number of siblings + 1 of the node $VAR[I+1] that precede it and have
+ * the same label if there are at least two siblings with that label. If
+ * the node $VAR[I+1] does not have any siblings with the same label as
+ * itself, *INDEX will be set to 0.
+ *
+ * The caller must not free *LABEL. The pointer is only valid up to the
+ * next call to a function in this API that might modify the tree.
+ *
+ * Returns:
+ * 1 on success (for consistency with aug_get), a negative value on failure
+ */
+int aug_ns_label(const augeas *aug, const char *var, int i,
+ const char **label, int *index);
+
+/*
+ * Function: aug_ns_value
+ *
+ * Look up the VALUE of the ith node in variable VAR. (See aug_ns_attr for
+ * details of what is expected of VAR)
+ *
+ * The caller must not free *VALUE. The pointer is only valid up to the
+ * next call to a function in this API that might modify the tree.
+ *
+ * Returns:
+ * 1 on success (for consistency with aug_get), a negative value on failure
+ */
+int aug_ns_value(const augeas *aug, const char *var, int i,
+ const char **value);
+
+/*
+ * Function: aug_ns_count
+ *
+ * Return the number of nodes in variable VAR. (See aug_ns_attr for details
+ * of what is expected of VAR)
+ *
+ * Returns: the number of nodes in VAR, or a negative value on failure
+ */
+int aug_ns_count(const augeas *aug, const char *var);
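The aug_ns_* calls above are designed to be used together: define a variable once with aug_defvar, then iterate its nodes by 0-based index. A sketch assuming libaugeas, mirroring the usage the comments describe:

```c
#include <stdio.h>
#include <stdlib.h>
#include <augeas.h>

/* Iterate every node under /files/etc/hosts via the ns API, which is
 * faster than re-evaluating an indexed path expression per node. */
static void dump_hosts(augeas *aug) {
    aug_defvar(aug, "n", "/files/etc/hosts/descendant::*");
    int count = aug_ns_count(aug, "n");
    for (int i = 0; i < count; i++) {
        const char *label, *value;
        char *file_path;
        /* i is 0-based, unlike 1-based path expression indices */
        if (aug_ns_attr(aug, "n", i, &value, &label, &file_path) < 0)
            continue;
        printf("%s = %s\n", label, value ? value : "(none)");
        free(file_path);   /* caller frees file_path, never label/value */
    }
}
```

The *label and *value pointers stay valid only until the next API call that might modify the tree, so they should be consumed (or copied) inside the loop.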
+
+/*
+ * Function: aug_ns_path
+ *
+ * Put the fully qualified path to the ith node in VAR into *PATH. (See
+ * aug_ns_attr for details of what is expected of VAR)
+ *
+ * The caller is responsible for freeing *PATH, which is allocated by this
+ * function.
+ *
+ * Returns: 1 on success (for consistency with aug_get), a negative value
+ * on failure
+ */
+int aug_ns_path(const augeas *aug, const char *var, int i, char **path);
+
+/*
+ * Error reporting
+ */
+
+typedef enum {
+ AUG_NOERROR, /* No error */
+ AUG_ENOMEM, /* Out of memory */
+ AUG_EINTERNAL, /* Internal error (bug) */
+ AUG_EPATHX, /* Invalid path expression */
+ AUG_ENOMATCH, /* No match for path expression */
+ AUG_EMMATCH, /* Too many matches for path expression */
+ AUG_ESYNTAX, /* Syntax error in lens file */
+ AUG_ENOLENS, /* Lens lookup failed */
+ AUG_EMXFM, /* Multiple transforms */
+ AUG_ENOSPAN, /* No span for this node */
+ AUG_EMVDESC, /* Cannot move node into its descendant */
+ AUG_ECMDRUN, /* Failed to execute command */
+ AUG_EBADARG, /* Invalid argument in function call */
+ AUG_ELABEL, /* Invalid label */
+ AUG_ECPDESC, /* Cannot copy node into its descendant */
+ AUG_EFILEACCESS /* Cannot open or read a file */
+} aug_errcode_t;
+
+/* Return the error code from the last API call */
+int aug_error(augeas *aug);
+
+/* Return a human-readable message for the error code */
+const char *aug_error_message(augeas *aug);
+
+/* Return a human-readable message elaborating the error code; might be
+ * NULL. For example, when the error code is AUG_EPATHX, this will explain
+ * how the path expression is invalid */
+const char *aug_error_minor_message(augeas *aug);
+
+/* Return details about the error, which might be NULL. For example, for
+ * AUG_EPATHX, indicates where in the path expression the error
+ * occurred. The returned value can only be used until the next API call
+ */
+const char *aug_error_details(augeas *aug);
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
+
+
+/*
+ * Local variables:
+ * indent-tabs-mode: nil
+ * c-indent-level: 4
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
--- /dev/null
+AUGEAS_0.1.0 {
+ global:
+ aug_init;
+ aug_close;
+ aug_get;
+ aug_set;
+ aug_insert;
+ aug_rm;
+ aug_mv;
+ aug_match;
+ aug_save;
+ aug_print;
+ # Symbols with __ are private
+ __aug_load_module_file;
+ local: *;
+};
+
+AUGEAS_0.8.0 {
+ global:
+ aug_defvar;
+ aug_defnode;
+ aug_load;
+} AUGEAS_0.1.0;
+
+AUGEAS_0.10.0 {
+ global:
+ aug_error;
+ aug_error_message;
+ aug_error_minor_message;
+ aug_error_details;
+} AUGEAS_0.8.0;
+
+AUGEAS_0.11.0 {
+ global:
+ aug_setm;
+} AUGEAS_0.10.0;
+
+AUGEAS_0.12.0 {
+ global:
+ aug_span;
+} AUGEAS_0.11.0;
+
+AUGEAS_0.14.0 {
+ global:
+ aug_srun;
+ __aug_init_memstream;
+ __aug_close_memstream;
+} AUGEAS_0.12.0;
+
+AUGEAS_0.15.0 {
+ global:
+ aug_to_xml;
+} AUGEAS_0.14.0;
+
+AUGEAS_0.16.0 {
+ global:
+ aug_text_store;
+ aug_text_retrieve;
+ aug_rename;
+ aug_transform;
+ aug_label;
+} AUGEAS_0.15.0;
+
+AUGEAS_0.18.0 {
+ global:
+ aug_cp;
+} AUGEAS_0.16.0;
+
+AUGEAS_0.20.0 {
+ global:
+ aug_escape_name;
+} AUGEAS_0.18.0;
+
+AUGEAS_0.21.0 {
+ global:
+ aug_load_file;
+} AUGEAS_0.20.0;
+
+AUGEAS_0.22.0 {
+ global:
+ aug_source;
+} AUGEAS_0.21.0;
+
+AUGEAS_0.23.0 {
+ global:
+ aug_ns_attr;
+} AUGEAS_0.22.0;
+
+AUGEAS_0.24.0 {
+ global:
+ aug_ns_label;
+ aug_ns_value;
+ aug_ns_count;
+ aug_ns_path;
+} AUGEAS_0.23.0;
+
+AUGEAS_0.25.0 {
+ global:
+ aug_preview;
+} AUGEAS_0.24.0;
--- /dev/null
+/*
+ * augmatch.c: utility for printing and reading files as parsed by Augeas
+ *
+ * Copyright (C) 2007-2016 David Lutterkort
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: David Lutterkort <dlutter@redhat.com>
+ */
+
+#include <config.h>
+#include <argz.h>
+#include <getopt.h>
+#include <stdbool.h>
+#include <ctype.h>
+#include <libgen.h>
+
+#include "memory.h"
+#include "augeas.h"
+#include <locale.h>
+
+#define EXIT_TROUBLE 2
+
+#define cleanup(_x) __attribute__((__cleanup__(_x)))
+
+const char *progname;
+bool print_all = false;
+bool print_only_values = false;
+bool print_exact = false;
+
+static void freep(void *p) {
+ free(*(void **)p);
+}
+
+static void aug_closep(struct augeas **p) {
+ aug_close(*p);
+}
+
+__attribute__((noreturn))
+static void usage(void) {
+ fprintf(stderr, "Usage: %s [OPTIONS] FILE\n", progname);
+ fprintf(stderr,
+"Print the contents of a file as parsed by augeas.\n\n"
+"Options:\n\n"
+" -l, --lens LENS use LENS to transform the file\n"
+" -L, --print-lens print the lens that will be used for a file and exit\n"
+" -a, --all print all nodes, even ones without a value\n"
+" -m, --match EXPR start printing where nodes match EXPR\n"
+" -e, --exact print only exact matches instead of the entire tree\n"
+" starting at a match\n"
+" -o, --only-value print only the values of tree nodes, but no path\n"
+" -q, --quiet do not print anything. Exit with zero status if a\n"
+" match was found\n"
+" -r, --root ROOT use ROOT as the root of the filesystem\n"
+" -I, --include DIR search DIR for modules; can be given multiple times\n"
+" -S, --nostdinc do not search the builtin default directories\n"
+" for modules\n\n"
+"Examples:\n\n"
+" Print how augeas sees /etc/exports:\n"
+" augmatch /etc/exports\n\n"
+" Show only the entry for a specific mount:\n"
+" augmatch -m 'dir[\"/home\"]' /etc/exports\n\n"
+" Show all the clients to which we are exporting /home:\n"
+" augmatch -eom 'dir[\"/home\"]/client' /etc/exports\n\n");
+ exit(EXIT_SUCCESS);
+}
+
+/* Exit with a failure if COND is true. Use FORMAT and the remaining
+ * arguments like printf to print an error message on stderr before
+ * exiting */
+static void die(bool cond, const char *format, ...) {
+ if (cond) {
+ fputs("error: ", stderr);
+ va_list args;
+ va_start(args, format);
+ vfprintf(stderr, format, args);
+ va_end(args);
+ exit(EXIT_TROUBLE);
+ }
+}
+
+static void oom_when(bool cond) {
+ die(cond, "out of memory.\n");
+}
+
+/* Format a string with vasprintf and return it. The caller is responsible
+ * for freeing the result. */
+static char *format(const char *format, ...) {
+ va_list args;
+ char *result;
+ int r;
+
+ va_start(args, format);
+ r = vasprintf(&result, format, args);
+ va_end(args);
+ oom_when(r < 0);
+ return result;
+}
+
+/* Check for an Augeas error. If there is one, print it and all its detail
+ * and exit with failure. If there is no error, do nothing */
+static void check_error(struct augeas *aug) {
+ die(aug == NULL, "could not initialize augeas\n");
+ oom_when(aug_error(aug) == AUG_ENOMEM);
+ if (aug_error(aug) != AUG_NOERROR) {
+ fprintf(stderr, "error: %s\n", aug_error_message(aug));
+ const char *msg = aug_error_minor_message(aug);
+ if (msg != NULL) {
+ fprintf(stderr, "%s\n", msg);
+ }
+ msg = aug_error_details(aug);
+ if (msg != NULL) {
+ fprintf(stderr, "%s\n", msg);
+ }
+ exit(EXIT_TROUBLE);
+ }
+}
+
+/* Check for an error trying to load FILE (e.g., a parse error). If there
+ * was one, print details and exit with failure. If there was none, do
+ * nothing. */
+static void check_load_error(struct augeas *aug, const char *file) {
+ char *info = format("/augeas/files%s", file);
+ const char *msg, *line, *col;
+
+ aug_defvar(aug, "info", info);
+ free(info);
+ die(aug_ns_count(aug, "info") == 0, "file %s does not exist\n", file);
+
+ aug_defvar(aug, "error", "$info/error");
+ if (aug_ns_count(aug, "error") == 0)
+ return;
+
+ aug_get(aug, "$error", &msg);
+ aug_get(aug, "$error/line", &line);
+ aug_get(aug, "$error/char", &col);
+
+ if (streqv(msg, "parse_failed")) {
+ msg = "parsing failed";
+ } else if (streqv(msg, "read_failed")) {
+ aug_get(aug, "$error/message", &msg);
+ }
+
+ if ((line != NULL) && (col != NULL)) {
+ fprintf(stderr, "error reading %s: %s on line %s, column %s\n",
+ file, msg, line, col);
+ } else {
+ fprintf(stderr, "error reading %s: %s\n", file, msg);
+ }
+ exit(EXIT_TROUBLE);
+}
+
+/* We keep track of where we are in the tree when we are printing it by
+ * assigning to augeas variables and using one struct node for each level
+ * in the tree. To keep things simple, we just preallocate a lot of them
+ * (up to max_nodes many). If we ever have a tree deeper than this, we are
+ * in trouble and will simply abort the program. */
+static const size_t max_nodes = 256;
+
+struct node {
+ char *var; /* The variable where we store the nodes for this level */
+ const char *label; /* The label, index, and value of the current node */
+ int index; /* at the level that this struct node is for */
+ const char *value;
+};
+
+/* Print information about NODES[LEVEL] by printing the path to it going
+ * from NODES[0] to NODES[LEVEL]. PREFIX is the path from the start of the
+ * file to the current match (if the user specified --match) or the empty
+ * string (if we are printing the entire file). */
+static void print_one(int level, const char *prefix, struct node *nodes) {
+ if (nodes[level].value == NULL && ! print_all)
+ return;
+
+ if (print_only_values && nodes[level].value != NULL) {
+ printf("%s\n", nodes[level].value);
+ return;
+ }
+
+ if (*prefix) {
+ if (level > 0)
+ printf("%s/", prefix);
+ else
+ printf("%s", prefix);
+ }
+
+ for (int i=1; i <= level; i++) {
+ if (nodes[i].index > 0) {
+ printf("%s[%d]", nodes[i].label, nodes[i].index);
+ } else {
+ printf("%s", nodes[i].label);
+ }
+ if (i < level) {
+ printf("/");
+ }
+ }
+
+ if (nodes[level].value) {
+ printf(" = %s\n", nodes[level].value);
+ } else {
+ printf("\n");
+ }
+}
+
+/* Recursively print the tree starting at NODES[LEVEL] */
+static void print_tree(struct augeas *aug, int level,
+ const char *prefix, struct node *nodes) {
+ die(level + 1 >= max_nodes,
+ "tree has more than %zu levels, which is more than we can handle\n",
+ max_nodes);
+
+ struct node *cur = nodes + level;
+ struct node *next = cur + 1;
+
+ int count = aug_ns_count(aug, cur->var);
+ for (int i=0; i < count; i++) {
+ cleanup(freep) char *pattern = NULL;
+
+ aug_ns_label(aug, cur->var, i, &(cur->label), &(cur->index));
+ aug_ns_value(aug, cur->var, i, &(cur->value));
+ print_one(level, prefix, nodes);
+
+ if (! print_exact) {
+ pattern = format("$%s[%d]/*", cur->var, i+1);
+ aug_defvar(aug, next->var, pattern);
+ check_error(aug);
+ print_tree(aug, level+1, prefix, nodes);
+ }
+ }
+}
+
+/* Print the tree for file PATH (which must already start with /files), but
+ * only the nodes matching MATCH.
+ *
+ * Return EXIT_SUCCESS if there was at least one match, and EXIT_FAILURE
+ * if there was none.
+ */
+static int print(struct augeas *aug, const char *path, const char *match) {
+ static const char *const match_var = "match";
+
+ struct node *nodes = NULL;
+
+ nodes = calloc(max_nodes, sizeof(struct node));
+ oom_when(nodes == NULL);
+
+ for (int i=0; i < max_nodes; i++) {
+ nodes[i].var = format("var%d", i);
+ }
+
+ /* Set $match to the nodes matching the user's match expression */
+ aug_defvar(aug, match_var, match);
+ check_error(aug);
+
+ /* Go through the matches in MATCH_VAR one by one. We need to do it
+ * this way, since the prefix we need to print for each entry in
+ * MATCH_VAR is different for each entry. */
+ int count = aug_ns_count(aug, match_var);
+ for (int i=0; i < count; i++) {
+ cleanup(freep) char *prefix = NULL;
+ aug_ns_path(aug, match_var, i, &prefix);
+ aug_defvar(aug, nodes[0].var, prefix);
+ print_tree(aug, 0, prefix + strlen(path) + 1, nodes);
+ }
+ for (int i=0; i < max_nodes; i++) {
+ free(nodes[i].var);
+ }
+ free(nodes);
+
+ return (count == 0) ? EXIT_FAILURE : EXIT_SUCCESS;
+}
+
+/* Look at the filename and try to guess based on the extension. The
+ * builtin filters for lenses do not do that, as that would force augtool
+ * to scan everything on start
+ */
+static char *guess_lens_name(const char *file) {
+ const char *ext = strrchr(file, '.');
+
+ if (ext == NULL)
+ return NULL;
+
+ if (streqv(ext, ".json")) {
+ return strdup("Json.lns");
+ } else if (streqv(ext, ".xml")) {
+ return strdup("Xml.lns");
+ }
+
+ return NULL;
+}
+
+int main(int argc, char **argv) {
+ int opt;
+ cleanup(aug_closep) struct augeas *aug;
+ cleanup(freep) char *loadpath = NULL;
+ size_t loadpath_len = 0;
+ cleanup(freep) char *root = NULL;
+ cleanup(freep) char *lens = NULL;
+ cleanup(freep) char *matches = NULL;
+ size_t matches_len = 0;
+ const char *match = "*";
+ bool print_lens = false;
+ bool quiet = false;
+ int result = EXIT_SUCCESS;
+
+ struct option options[] = {
+ { "help", 0, 0, 'h' },
+ { "include", 1, 0, 'I' },
+ { "lens", 1, 0, 'l' },
+ { "all", 0, 0, 'a' },
+ { "index", 0, 0, 'i' },
+ { "match", 1, 0, 'm' },
+ { "only-value", 0, 0, 'o' },
+ { "nostdinc", 0, 0, 'S' },
+ { "root", 1, 0, 'r' },
+ { "print-lens", 0, 0, 'L' },
+ { "exact", 0, 0, 'e' },
+ { "quiet", 0, 0, 'q' },
+ { 0, 0, 0, 0}
+ };
+ unsigned int flags = AUG_NO_LOAD|AUG_NO_ERR_CLOSE;
+ progname = basename(argv[0]);
+
+ setlocale(LC_ALL, "");
+ while ((opt = getopt_long(argc, argv, "ahI:l:m:oSr:eLq", options, NULL)) != -1) {
+ switch(opt) {
+ case 'I':
+ argz_add(&loadpath, &loadpath_len, optarg);
+ break;
+ case 'l':
+ lens = strdup(optarg);
+ break;
+ case 'L':
+ print_lens = true;
+ break;
+ case 'h':
+ usage();
+ break;
+ case 'a':
+ print_all = true;
+ break;
+ case 'm':
+ // If optarg is a numeric string like '1', it is not a legal
+ // part of a path by itself, and so we need to prefix it with
+ // an explicit axis
+ die(optarg[0] == '/',
+ "matches can only be relative paths, not %s\n", optarg);
+ argz_add(&matches, &matches_len, format("child::%s", optarg));
+ break;
+ case 'o':
+ print_only_values = true;
+ break;
+ case 'r':
+ root = strdup(optarg);
+ break;
+ case 'S':
+ flags |= AUG_NO_STDINC;
+ break;
+ case 'e':
+ print_exact = true;
+ break;
+ case 'q':
+ quiet = true;
+ break;
+ default:
+ fprintf(stderr, "Try '%s --help' for more information.\n",
+ progname);
+ exit(EXIT_TROUBLE);
+ break;
+ }
+ }
+
+ if (optind >= argc) {
+ fprintf(stderr, "Expected an input file\n");
+ fprintf(stderr, "Try '%s --help' for more information.\n",
+ progname);
+ exit(EXIT_TROUBLE);
+ }
+
+ const char *file = argv[optind];
+
+ argz_stringify(loadpath, loadpath_len, ':');
+
+ if (lens == NULL) {
+ lens = guess_lens_name(file);
+ }
+
+ if (lens != NULL) {
+ /* We know which lens we want, we do not need to load all of them */
+ flags |= AUG_NO_MODL_AUTOLOAD;
+ }
+
+ aug = aug_init(root, loadpath, flags|AUG_NO_ERR_CLOSE);
+ check_error(aug);
+
+ if (lens == NULL) {
+ aug_load_file(aug, file);
+ } else {
+ aug_transform(aug, lens, file, false);
+ aug_load(aug);
+ }
+ check_error(aug);
+
+ /* The user just wants the lens name */
+ if (print_lens) {
+ char *info = format("/augeas/files%s", file);
+ const char *lens_name;
+ aug_defvar(aug, "info", info);
+ die(aug_ns_count(aug, "info") == 0,
+ "file %s does not exist\n", file);
+ aug_get(aug, "$info/lens", &lens_name);
+ /* We are being extra careful here - the check_error above would
+ have already aborted the program if we could not determine a
+ lens; dying here indicates some sort of bug */
+ die(lens_name == NULL, "could not find lens for %s\n",
+ file);
+ if (lens_name[0] == '@')
+ lens_name += 1;
+ printf("%s\n", lens_name);
+ exit(EXIT_SUCCESS);
+ }
+
+ check_load_error(aug, file);
+
+ char *path = format("/files%s", file);
+ aug_set(aug, "/augeas/context", path);
+
+
+ if (matches_len > 0) {
+ argz_stringify(matches, matches_len, '|');
+ match = matches;
+ }
+
+ if (quiet) {
+ int n = aug_match(aug, match, NULL);
+ check_error(aug);
+ result = (n == 0) ? EXIT_FAILURE : EXIT_SUCCESS;
+ } else {
+ result = print(aug, path, match);
+ }
+ free(path);
+
+ return result;
+}
+
+/*
+ * Local variables:
+ * indent-tabs-mode: nil
+ * c-indent-level: 4
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
--- /dev/null
+/*
+ * augparse.c: utility for parsing config files and seeing what's happening
+ *
+ * Copyright (C) 2007-2016 David Lutterkort
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: David Lutterkort <dlutter@redhat.com>
+ */
+
+#include <config.h>
+#include <argz.h>
+#include <getopt.h>
+
+#include "list.h"
+#include "syntax.h"
+#include "augeas.h"
+#include <locale.h>
+
+const char *progname;
+bool print_version = false;
+
+__attribute__((noreturn))
+static void usage(void) {
+ fprintf(stderr, "Usage: %s [OPTIONS] MODULE\n", progname);
+ fprintf(stderr, "Evaluate MODULE. Generally, MODULE should contain unit tests.\n");
+ fprintf(stderr, "\nOptions:\n\n");
+ fprintf(stderr, " -I, --include DIR search DIR for modules; can be given multiple times\n");
+ fprintf(stderr, " -t, --trace trace module loading\n");
+ fprintf(stderr, " --nostdinc do not search the builtin default directories for modules\n");
+ fprintf(stderr, " --notypecheck do not typecheck lenses\n");
+ fprintf(stderr, " --version print version information and exit\n");
+
+ exit(EXIT_FAILURE);
+}
+
+static void print_version_info(struct augeas *aug) {
+ const char *version;
+ int r;
+
+ r = aug_get(aug, "/augeas/version", &version);
+ if (r != 1)
+ goto error;
+
+ fprintf(stderr, "augparse %s <http://augeas.net/>\n", version);
+ fprintf(stderr, "Copyright (C) 2007-2016 David Lutterkort\n");
+ fprintf(stderr, "License LGPLv2+: GNU LGPL version 2.1 or later\n");
+ fprintf(stderr, " <http://www.gnu.org/licenses/lgpl-2.1.html>\n");
+ fprintf(stderr, "This is free software: you are free to change and redistribute it.\n");
+ fprintf(stderr, "There is NO WARRANTY, to the extent permitted by law.\n\n");
+ fprintf(stderr, "Written by David Lutterkort\n");
+ return;
+ error:
+ fprintf(stderr, "Something went terribly wrong internally - please file a bug\n");
+}
+
+int main(int argc, char **argv) {
+ int opt;
+ struct augeas *aug;
+ char *loadpath = NULL;
+ size_t loadpathlen = 0;
+ enum {
+ VAL_NO_STDINC = CHAR_MAX + 1,
+ VAL_NO_TYPECHECK = VAL_NO_STDINC + 1,
+ VAL_VERSION = VAL_NO_TYPECHECK + 1
+ };
+ struct option options[] = {
+ { "help", 0, 0, 'h' },
+ { "include", 1, 0, 'I' },
+ { "trace", 0, 0, 't' },
+ { "nostdinc", 0, 0, VAL_NO_STDINC },
+ { "notypecheck", 0, 0, VAL_NO_TYPECHECK },
+ { "version", 0, 0, VAL_VERSION },
+ { 0, 0, 0, 0}
+ };
+ int idx;
+ unsigned int flags = AUG_TYPE_CHECK|AUG_NO_MODL_AUTOLOAD;
+ progname = argv[0];
+
+ setlocale(LC_ALL, "");
+ while ((opt = getopt_long(argc, argv, "hI:t", options, &idx)) != -1) {
+ switch(opt) {
+ case 'I':
+ argz_add(&loadpath, &loadpathlen, optarg);
+ break;
+ case 't':
+ flags |= AUG_TRACE_MODULE_LOADING;
+ break;
+ case 'h':
+ usage();
+ break;
+ case VAL_NO_STDINC:
+ flags |= AUG_NO_STDINC;
+ break;
+ case VAL_NO_TYPECHECK:
+ flags &= ~(AUG_TYPE_CHECK);
+ break;
+ case VAL_VERSION:
+ print_version = true;
+ break;
+ default:
+ usage();
+ break;
+ }
+ }
+
+ if (!print_version && optind >= argc) {
+ fprintf(stderr, "Expected .aug file\n");
+ usage();
+ }
+
+ argz_stringify(loadpath, loadpathlen, PATH_SEP_CHAR);
+ aug = aug_init(NULL, loadpath, flags);
+ if (aug == NULL) {
+ fprintf(stderr, "Memory exhausted\n");
+ return 2;
+ }
+
+ if (print_version) {
+ print_version_info(aug);
+ aug_close(aug);
+ return EXIT_SUCCESS;
+ }
+
+ if (__aug_load_module_file(aug, argv[optind]) == -1) {
+ fprintf(stderr, "%s\n", aug_error_message(aug));
+ const char *s = aug_error_details(aug);
+ if (s != NULL) {
+ fprintf(stderr, "%s\n", s);
+ }
+ aug_close(aug);
+ exit(EXIT_FAILURE);
+ }
+
+ aug_close(aug);
+ free(loadpath);
+}
+
+/*
+ * Local variables:
+ * indent-tabs-mode: nil
+ * c-indent-level: 4
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
--- /dev/null
+/* vim: expandtab:softtabstop=2:tabstop=2:shiftwidth=2
+ *
+ * Copyright (C) 2022 George Hansper <george@hansper.id.au>
+ * -----------------------------------------------------------------------
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ * See the GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along with this program.
+ * If not, see <https://www.gnu.org/licenses/>.
+ *
+ * Author: George Hansper <george@hansper.id.au>
+ * -----------------------------------------------------------------------
+ * This tool produces output similar to the augeas 'print' statement
+ * or aug_print() API function
+ *
+ * Where augeas print may produce a set of paths and values like this:
+ *
+ * /files/some/path/label[1]/tail_a value_1a
+ * /files/some/path/label[1]/tail_b value_1b
+ * /files/some/path/label[2]/tail_a value_2a
+ * /files/some/path/label[2]/tail_b value_2b
+ *
+ * or
+ *
+ * /files/some/path/1/tail_a value_1a
+ * /files/some/path/1/tail_b value_1b
+ * /files/some/path/2/tail_a value_2a
+ * /files/some/path/2/tail_b value_2b
+ *
+ *
+ * This tool replaces the absolute 'position' (1, 2, 3, ...) with a path expression
+ * that matches the same node, using the values to identify the position
+ *
+ * /files/some/path/label[tail_a = value_1a]/tail_a value_1a
+ * /files/some/path/label[tail_a = value_1a]/tail_b value_1b
+ * /files/some/path/label[tail_a = value_2a]/tail_a value_2a
+ * /files/some/path/label[tail_a = value_2a]/tail_b value_2b
+ *
+ * Terms used within:
+ *
+ * /files/some/path/label[1]/tail_a value_1a
+ * `--------------------' \ `-----' `------'
+ * `--- head \ \ `--- value
+ * \ `--- tail
+ * `-- position
+ *
+ * For more complex paths
+ * /files/some/path/1/segment/label[1]/tail_a value_1a
+ * `----------------------'
+ * |
+ * v
+ * /segment/label/tail_a
+ * `--------------------'
+ * `--- simple_tail
+ */
+
+#define _GNU_SOURCE
+#include <config.h>
+#include <stdio.h>
+#include <stdlib.h> /* for exit, strtoul */
+#include <getopt.h>
+#include <string.h>
+#include <augeas.h>
+#include <errno.h>
+#include <libgen.h> /* for basename() on FreeBSD and MacOS */
+#include <sys/param.h> /* for MIN() MAX() */
+#include <unistd.h>
+#include "augprint.h"
+
+#define CHECK_OOM(condition, action, arg) \
+ do { \
+ if (condition) { \
+ out_of_memory|=1; \
+ action(arg); \
+ } \
+ } while (0)
+
+#define MAX_PRETTY_WIDTH 30
+
+static augeas *aug = NULL;
+static unsigned int flags = AUG_NONE;
+static unsigned int num_groups = 0;
+static struct group **all_groups=NULL;
+static char **all_matches;
+static int num_matched;
+static struct augeas_path_value **all_augeas_paths; /* array of pointers */
+
+static int out_of_memory=0;
+static int verbose=0;
+static int debug=0;
+static int pretty=0;
+static int noseq=0;
+static int help=0;
+static int print_version=0;
+static int use_regexp=0;
+static char *lens = NULL;
+static char *loadpath = NULL;
+
+static char *str_next_pos(char *start, char **head_end, unsigned int *pos);
+static char *str_simplified_tail(char *tail_orig);
+static void add_segment_to_group(struct path_segment *segment, struct augeas_path_value *);
+static char *quote_value(char *);
+static char *regexp_value(char *, int);
+
+
+static void exit_oom(const char *msg) {
+ fprintf(stderr, "Out of memory");
+ if( msg ) {
+ fprintf(stderr, " %s\n", msg);
+ } else {
+ fprintf(stderr, "\n");
+ }
+ exit(1);
+}
+
+/* Remove //, /./ and /../ components from the path,
+ * because augeas path expressions do not handle them
+ */
+static void cleanup_filepath(char *path) {
+ char *to=path, *from=path;
+ while(*from) {
+ if(*from == '/' ) {
+ if( *(from+1) == '/' ) {
+ /* // skip over 2nd / */
+ from++;
+ continue;
+ } else if( *(from+1) == '.' ) {
+ if( *(from+2) == '/' ) {
+ /* /./ skip 2 chars */
+ from+=2;
+ continue;
+ } else if ( *(from+2) == '.' && *(from+3) == '/' ) {
+ /* /../ rewind to previous / */
+ from+=3;
+ while( to > path && *(--to) != '/' )
+ ;
+ continue;
+ }
+ }
+ }
+ *to++ = *from++;
+ }
+ *to='\0';
+}
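+
+/* For example:
+ *   "/etc//hosts"         -> "/etc/hosts"
+ *   "/etc/./hosts"        -> "/etc/hosts"
+ *   "/etc/foo/../hosts"   -> "/etc/hosts"
+ */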
+
+static char *find_lens_for_path(char *filename) {
+ char *aug_load_path = NULL;
+ char **matching_lenses;
+ int num_lenses, result, ndx;
+ char *filename_tail;
+ filename_tail = filename;
+ for (char *s1 = filename; *s1; s1++ ) {
+ if ( *s1 == '/' )
+ filename_tail = s1+1;
+ }
+ result = asprintf(&aug_load_path, "/augeas/load/*['%s' =~ glob(incl)]['%s' !~ glob(excl)]['%s' !~ glob(excl)]", filename, filename, filename_tail);
+ CHECK_OOM( result < 0, exit_oom, NULL);
+
+ if(debug) {
+ fprintf(stderr,"path expr: %s\n",aug_load_path);
+ aug_print(aug, stderr, aug_load_path);
+ }
+ num_lenses = aug_match( aug, aug_load_path, &matching_lenses);
+ if ( num_lenses == 0 ) {
+ fprintf(stderr, "Aborting - no lens applies for target: %s\n", filename);
+ exit(1);
+ }
+ lens = matching_lenses[0] + 13; /* skip over "/augeas/load/" (13 chars) */
+
+ if ( num_lenses > 1 ) {
+ /* Should never happen */
+ for( ndx=0; ndx<num_lenses;ndx++) {
+ fprintf(stderr,"Found lens: %s\n", matching_lenses[ndx]);
+ }
+ fprintf(stderr, "Warning: multiple lenses apply to target %s - using %s\n", filename, lens);
+ }
+
+ free(aug_load_path);
+ return(lens);
+}
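+
+/* For example, for the filename "/etc/hosts" the path expression built above is:
+ *   /augeas/load/*['/etc/hosts' =~ glob(incl)]['/etc/hosts' !~ glob(excl)]['hosts' !~ glob(excl)]
+ */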
+
+static void move_tree(char *inputfile, char *target_file) {
+ char *files_inputfile;
+ char *files_targetfile;
+ char *dangling_path;
+ char *aug_rm_path;
+ int result;
+ int removed;
+ char *last, *s;
+ result = asprintf(&files_inputfile, "/files%s", inputfile );
+ CHECK_OOM( result < 0, exit_oom, NULL);
+
+ result = asprintf(&files_targetfile, "/files%s", target_file );
+ CHECK_OOM( result < 0, exit_oom, NULL);
+
+ aug_mv(aug, files_inputfile, files_targetfile);
+ /* After the aug_mv, we're left with the empty parent nodes */
+ dangling_path = files_inputfile;
+ do {
+ /* dirname(prune_path), without copying anything */
+ for( s=last=dangling_path; *s; s++ ) {
+ if( *s == '/')
+ last=s;
+ }
+ *last = '\0';
+
+ result = asprintf(&aug_rm_path, "%s[count(*)=0 and .='']", dangling_path );
+ CHECK_OOM( result < 0, exit_oom, NULL);
+
+ removed = aug_rm(aug, aug_rm_path);
+ free(aug_rm_path);
+
+ } while ( removed == 1 );
+
+ free(files_inputfile);
+ free(files_targetfile);
+}
+
+/* ----- split_path() str_next_pos() str_simplified_tail() add_segment_to_group() ----- */
+/* split_path()
+ * Break up a path like this
+ * /head/label_a[123]/middle/label_b[456]/tail
+ *
+ * into (struct path_segment) segments like this
+ *
+ * head = "/head/label_a"
+ * segment = "/head/label_a"
+ * position = 123
+ * simplified_tail = "/middle/label_b/tail"
+ *
+ * head = "/head/label_a[123]/middle/label_b"
+ * segment = "/middle/label_b"
+ * position = 456
+ * simplified_tail = "/tail"
+ *
+ * head = "/head/label_a[123]/middle/label_b[456]/tail"
+ * segment = "/tail"
+ * position = UINT_MAX (-1)
+ * simplified_tail = ""
+ *
+ * If label_b is absent, seq::* (or * with --noseq) is used instead in the simplified tail; the head is unaffected
+ */
+static struct path_segment *split_path(struct augeas_path_value *path_value) {
+ char *path = path_value->path;
+ struct path_segment *first_segment = NULL;
+ struct path_segment *this_segment = NULL;
+ struct path_segment **next_segment = &first_segment;
+ unsigned int position;
+ char *head_end;
+ char *path_seg_start=path;
+ char *path_seg_end;
+
+ while(*path_seg_start) {
+ this_segment = malloc(sizeof(struct path_segment));
+ CHECK_OOM(! this_segment, exit_oom, "split_path() allocating struct path_segment");
+
+ *next_segment = this_segment;
+ path_seg_end = str_next_pos(path_seg_start, &head_end, &position);
+ this_segment->head = strndup(path, (head_end-path));
+ this_segment->segment = (this_segment->head) + (path_seg_start-path);
+ this_segment->position = position;
+ this_segment->simplified_tail = str_simplified_tail(path_seg_end);
+ path_seg_start = path_seg_end;
+ this_segment->next = NULL;
+ next_segment = &(this_segment->next);
+ if ( position != UINT_MAX ) {
+ add_segment_to_group(this_segment, path_value);
+ } else {
+ this_segment->group = NULL;
+ }
+ }
+ return(first_segment);
+}
+
+/*
+ * str_next_pos() scans a string from (char *)start, and finds the next occurrence
+ * of the substring '[123]' or '/123/' where 123 is a decimal number
+ * (int *)pos is set to the value of 123
+ * if [123] is found,
+ * - head_end points to the character '['
+ * - returns a pointer to the character after the ']'
+ * if /123/ or /123\0 is found
+ * - head_end points to character after the first '/'
+ * - returns a pointer to the second '/' or '\0'
+ * if none of the above are found
+ * - head_end points to the terminating '\0'
+ * - returns a pointer to the terminating '\0' (same as head_end)
+ * i.e. look for [123], /123/ or /123\0; set *pos to 123 (or UINT_MAX if no position is found), point head_end at the '[' or at the first digit, and return a pointer just past the position
+*/
+static char *str_next_pos(char *start, char **head_end, unsigned int *pos) {
+ char *endptr=NULL;
+ char *s=start;
+ unsigned long lpos;
+ *pos=UINT_MAX;
+ while(*s) {
+ if( *s=='[' && *(s+1) >= '0' && *(s+1) <= '9' ) {
+ lpos = strtoul(s+1, &endptr, 10);
+ *pos = MIN(lpos, UINT_MAX);
+ if ( *endptr == ']' ) {
+ /* success */
+ *head_end = s;
+ return(endptr+1);
+ }
+ } else if ( *s == '/' && *(s+1) >= '0' && *(s+1) <= '9' ) {
+ lpos = strtoul(s+1, &endptr, 10);
+ *pos = MIN(lpos, UINT_MAX);
+ if ( *endptr == '\0' || *endptr == '/' ) {
+ /* success */
+ *head_end = s+1;
+ return(endptr);
+ }
+ }
+ s++;
+ }
+ *head_end=s;
+ return(s);
+}
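+
+/* For example, with start = "/head/label_a[123]/tail":
+ * *pos is set to 123, *head_end points at the '[',
+ * and the returned pointer is at the '/' before "tail".
+ */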
+
+static char *str_simplified_tail(char *tail_orig) {
+ int tail_len=0;
+ char *tail;
+ char *from, *to, *scan;
+ char *simple;
+ /* first work out how much space we will need to allocate */
+ tail=tail_orig;
+ while(*tail) {
+ if( *tail == '[' && *(tail+1) >= '0' && *(tail+1) <= '9' ) {
+ /* Look for matching ']' */
+ scan = tail;
+ scan++;
+ while (*scan >= '0' && *scan <= '9')
+ scan++;
+ if(*scan == ']') {
+ tail=scan+1;
+ continue;
+ }
+ } else if ( *tail == '/' && *(tail+1) >= '0' && *(tail+1) <= '9' ) {
+ /* Look for next '/' or '\0' */
+ scan = tail;
+ scan++;
+ while (*scan >= '0' && *scan <= '9')
+ scan++;
+ if(*scan == '/' || *scan == '\0' ) {
+ tail=scan;
+ tail_len += 7; /* allow for /seq::* */
+ continue;
+ }
+ }
+ tail_len++;
+ tail++;
+ }
+ simple = (char *) malloc( sizeof(char) * (tail_len+1));
+ CHECK_OOM( ! simple, exit_oom, "allocating simple_tail in str_simplified_tail()");
+
+ from=tail_orig;
+ to=simple;
+ while(*from) {
+ if( *from == '[' && *(from+1) >= '0' && *(from+1) <= '9' ) {
+ /* skip over [123] */
+ scan = from;
+ scan++;
+ while (*scan >= '0' && *scan <= '9')
+ scan++;
+ if(*scan == ']') {
+ from=scan+1;
+ continue;
+ }
+ } else if ( *from == '/' && *(from+1) >= '0' && *(from+1) <= '9' ) {
+ /* replace /123 with /seq::* */
+ scan = from;
+ scan++;
+ while (*scan >= '0' && *scan <= '9')
+ scan++;
+ if(*scan == '/' || *scan == '\0' ) {
+ from=scan;
+ if ( noseq ) {
+ strcpy(to,"/*");
+ to += 2;
+ } else {
+ strcpy(to,"/seq::*");
+ to += 7; /* len("/seq::*") */
+ }
+ continue;
+ }
+ }
+ *to++ = *from++; /* copy */
+ }
+ *to='\0';
+ return(simple);
+}
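+
+/* For example:
+ *   "/middle/label_b[456]/tail" -> "/middle/label_b/tail"
+ *   "/1/tail"                   -> "/seq::*/tail" (or "/*/tail" with --noseq)
+ */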
+
+/* Compare two values (char *), subject to use_regexp
+ * If both pointers are NULL, return 1 (true)
+ * If only one pointer is NULL, return 0 (false)
+ * set *matched to the number of characters in common
+ * return 1 (true) if the strings match, otherwise 0 (false)
+ */
+static int value_cmp(char *v1, char *v2, unsigned int *matched) {
+ char *s1, *s2;
+ if( v1 == NULL && v2 == NULL ) {
+ *matched = 0;
+ return(1);
+ }
+ if( v1 == NULL || v2 == NULL ) {
+ *matched = 0;
+ return(0);
+ }
+ s1 = v1;
+ s2 = v2;
+ *matched = 0;
+ if( use_regexp ) {
+ /* Compare values, allowing for the fact that ']' is replaced with '.' */
+ while( *s1 || *s2 ) {
+ if( *s1 != *s2 ) {
+ if( *s1 =='\0' || *s2 == '\0')
+ return(0);
+ if( *s1 != ']' && *s2 != ']' )
+ return(0);
+ }
+ s1++; s2++; (*matched)++;
+ }
+ return(1);
+ } else {
+ while( *s1 == *s2 ) {
+ if( *s1 == '\0' ) {
+ return(1);
+ }
+ s1++; s2++; (*matched)++;
+ }
+ return(0);
+ }
+ return(1); /* unreachable */
+}
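+
+/* For example, with use_regexp set, value_cmp("a]b", "axb", &matched)
+ * returns 1 with matched == 3, because a ']' in either string is
+ * allowed to match any character in the other.
+ */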
+
+/* Find an existing group with the same 'head'
+ * If no such group exists, create a new one
+ * Update the size of all_groups array if required
+ */
+static struct group *find_or_create_group(char *head) {
+ unsigned long ndx;
+ struct group **all_groups_realloc;
+ unsigned int num_groups_newsize;
+ struct group *group = NULL;
+ /* Look for an existing group with group->head matching path_seg->head */
+ for(ndx=0; ndx < num_groups; ndx++) {
+ if( strcmp(head, all_groups[ndx]->head) == 0 ) {
+ group = all_groups[ndx];
+ return(group);
+ }
+ }
+ /* Group not found - create a new one */
+ /* First, grow all_groups[] array if required */
+ if ( num_groups % 32 == 0 ) {
+ num_groups_newsize = (num_groups)/32*32+32;
+ all_groups_realloc = reallocarray(all_groups, sizeof(struct group *), num_groups_newsize);
+ CHECK_OOM( ! all_groups_realloc, exit_oom, "in find_or_create_group()");
+
+ all_groups=all_groups_realloc;
+ }
+ /* Create new group */
+ group = malloc(sizeof(struct group));
+ CHECK_OOM( ! group, exit_oom, "allocating struct group in find_or_create_group()");
+
+ all_groups[num_groups++] = group;
+ group->head = head;
+ group->all_tails = NULL;
+ group->position_array_size = 0;
+ group->tails_at_position = NULL;
+ group->chosen_tail = NULL;
+ group->chosen_tail_state = NULL;
+ group->first_tail = NULL;
+ group->position_array_size = 0;
+ group->max_position = 0;
+ group->subgroups = NULL; /* subgroups are only created if we need to use our 3rd preference */
+ group->subgroup_position = NULL;
+ /* for --pretty */
+ group->pretty_width_ct = NULL;
+ /* for --regexp */
+ group->re_width_ct = NULL;
+ group->re_width_ft = NULL;
+
+ return(group);
+}
+
+/* Find a matching tail+value within group->all_tails linked list
+ * If no such tail exists, append a new (struct tail) list item
+ * Return the tail found, or the new tail
+ */
+static struct tail *find_or_create_tail(struct group *group, struct path_segment *path_seg, struct augeas_path_value *path_value) {
+ /* Scan for a matching simplified tail+value in group->all_tails */
+ struct tail *tail;
+ struct tail *found_tail_value=NULL;
+ struct tail *found_tail=NULL;
+ struct tail **all_tails_end;
+ unsigned int tail_found_this_pos=1;
+ unsigned int match_length;
+ all_tails_end =&(group->all_tails);
+ found_tail_value=NULL;
+ for( tail = group->all_tails; tail != NULL; tail=tail->next ) {
+ if( strcmp(path_seg->simplified_tail, tail->simple_tail) == 0 ) {
+ /* found matching simple_tail - increment counters */
+ tail->tail_found_map[path_seg->position]++;
+ tail_found_this_pos = tail->tail_found_map[path_seg->position];
+ if ( value_cmp(tail->value, path_value->value, &match_length ) ) {
+ /* matching tail+value found, increment tail_value_found */
+ tail->tail_value_found_map[path_seg->position]++;
+ tail->tail_value_found++;
+ found_tail_value=tail;
+ }
+ found_tail=tail;
+ }
+ all_tails_end=&tail->next;
+ }
+ if ( found_tail_value == NULL ) {
+ /* matching tail+value not found, create a new one */
+ tail = malloc(sizeof(struct tail));
+ CHECK_OOM( ! tail, exit_oom, "in find_or_create_tail()");
+
+ tail->tail_found_map = reallocarray(NULL, sizeof(unsigned int), group->position_array_size);
+ CHECK_OOM( ! tail->tail_found_map, exit_oom, "in find_or_create_tail()");
+
+ tail->tail_value_found_map = reallocarray(NULL, sizeof(unsigned int), group->position_array_size);
+ CHECK_OOM( ! tail->tail_value_found_map, exit_oom, "in find_or_create_tail()");
+
+
+ for(unsigned int i=0; i<group->position_array_size; i++) {
+ tail->tail_found_map[i]=0;
+ tail->tail_value_found_map[i]=0;
+ }
+
+ if ( found_tail ) {
+ for( unsigned int ndx=0; ndx<=group->max_position; ndx++ ) {
+ tail->tail_found_map[ndx] = found_tail->tail_found_map[ndx];
+ }
+ }
+ tail->tail_found_map[path_seg->position]=tail_found_this_pos;
+ tail->tail_value_found_map[path_seg->position]=1;
+ tail->tail_value_found = 1;
+ tail->simple_tail = path_seg->simplified_tail;
+ tail->value = path_value->value;
+ tail->value_qq = path_value->value_qq;
+ tail->next = NULL;
+ *all_tails_end = tail;
+ return(tail);
+ } else {
+ return(found_tail_value);
+ }
+}
+
+/* Append a (struct tail_stub) to the linked list group->tails_at_position[position] */
+static void append_tail_stub(struct group *group, struct tail *tail, unsigned int position) {
+ struct tail_stub **tail_stub_pp;
+
+ for( tail_stub_pp=&(group->tails_at_position[position]); *tail_stub_pp != NULL; tail_stub_pp=&(*tail_stub_pp)->next ) {
+ }
+ *tail_stub_pp = malloc(sizeof(struct tail_stub));
+ CHECK_OOM( ! *tail_stub_pp, exit_oom, "in append_tail_stub()");
+
+ (*tail_stub_pp)->tail = tail;
+ (*tail_stub_pp)->next = NULL;
+}
+
+/* Grow memory structures within the group record and associated tail records
+ * to accommodate additional positions
+ */
+static void grow_position_arrays(struct group *group, unsigned int new_max_position) {
+ struct tail_stub **tails_at_position_realloc;
+ struct tail **chosen_tail_realloc;
+ struct tail_stub **first_tail_realloc;
+ unsigned int *chosen_tail_state_realloc;
+ unsigned int *pretty_width_ct_realloc;
+ unsigned int *re_width_ct_realloc;
+ unsigned int *re_width_ft_realloc;
+ unsigned int ndx;
+ if( new_max_position != UINT_MAX && new_max_position >= group->position_array_size ) {
+ unsigned int old_size = group->position_array_size;
+ unsigned int new_size = (new_max_position+1) / 8 * 8 + 8;
+
+ /* Grow arrays within struct group */
+ tails_at_position_realloc = reallocarray(group->tails_at_position, sizeof(struct tail_stub *), new_size);
+ chosen_tail_realloc = reallocarray(group->chosen_tail, sizeof(struct tail *), new_size);
+ first_tail_realloc = reallocarray(group->first_tail, sizeof(struct tail_stub *), new_size);
+ chosen_tail_state_realloc = reallocarray(group->chosen_tail_state, sizeof(chosen_tail_state_t), new_size);
+ pretty_width_ct_realloc = reallocarray(group->pretty_width_ct, sizeof(unsigned int), new_size);
+ re_width_ct_realloc = reallocarray(group->re_width_ct, sizeof(unsigned int), new_size);
+ re_width_ft_realloc = reallocarray(group->re_width_ft, sizeof(unsigned int), new_size);
+ CHECK_OOM( ! tails_at_position_realloc || ! chosen_tail_realloc || ! chosen_tail_state_realloc ||
+ ! pretty_width_ct_realloc || ! re_width_ct_realloc || ! re_width_ft_realloc ||
+ ! first_tail_realloc, exit_oom, "in grow_position_arrays()");
+
+ /* initialize array entries between old size to new_size */
+ for( ndx=old_size; ndx < new_size; ndx++) {
+ tails_at_position_realloc[ndx]=NULL;
+ chosen_tail_realloc[ndx]=NULL;
+ first_tail_realloc[ndx]=NULL;
+ chosen_tail_state_realloc[ndx] = NOT_DONE;
+ pretty_width_ct_realloc[ndx] = 0;
+ re_width_ct_realloc[ndx] = 0;
+ re_width_ft_realloc[ndx] = 0;
+ }
+ group->tails_at_position = tails_at_position_realloc;
+ group->chosen_tail = chosen_tail_realloc;
+ group->first_tail = first_tail_realloc;
+ group->chosen_tail_state = chosen_tail_state_realloc;
+ group->pretty_width_ct = pretty_width_ct_realloc;
+ group->re_width_ct = re_width_ct_realloc;
+ group->re_width_ft = re_width_ft_realloc;
+ /* Grow arrays in all_tails */
+ struct tail *tail;
+ for( tail = group->all_tails; tail != NULL; tail=tail->next ) {
+ unsigned int *tail_found_map_realloc;
+ unsigned int *tail_value_found_map_realloc;
+ tail_found_map_realloc = reallocarray(tail->tail_found_map, sizeof(unsigned int), new_size);
+ tail_value_found_map_realloc = reallocarray(tail->tail_value_found_map, sizeof(unsigned int), new_size);
+ CHECK_OOM( ! tail_found_map_realloc || ! tail_value_found_map_realloc, exit_oom, "in grow_position_arrays()");
+
+ /* initialize array entries between old size to new_size */
+ for( ndx=old_size; ndx < new_size; ndx++) {
+ tail_found_map_realloc[ndx]=0;
+ tail_value_found_map_realloc[ndx]=0;
+ }
+ tail->tail_found_map = tail_found_map_realloc;
+ tail->tail_value_found_map = tail_value_found_map_realloc;
+ }
+ group->position_array_size = new_size;
+ }
+}
+
+static void add_segment_to_group(struct path_segment *path_seg, struct augeas_path_value *path_value) {
+ struct group *group = NULL;
+ struct tail *tail;
+ group = find_or_create_group(path_seg->head);
+
+ /* group is our new or matching group for this segment->head */
+ path_seg->group = group;
+ if( path_seg->position != UINT_MAX && path_seg->position > group->max_position ) {
+ group->max_position = path_seg->position;
+ if( group->max_position >= group->position_array_size ) {
+ /* grow arrays in group */
+ grow_position_arrays(group, group->max_position);
+ }
+ }
+ tail = find_or_create_tail(group, path_seg, path_value);
+
+ /* Append a tail_stub record to the linked list @ group->tails_at_position[position] */
+ append_tail_stub(group, tail, path_seg->position);
+}
+
+/* find_or_create_subgroup()
+ * This is called from choose_tail(), and is only used if we need to go to our 3rd Preference
+ */
+static struct subgroup *find_or_create_subgroup(struct group *group, struct tail *first_tail) {
+ struct subgroup *subgroup_ptr;
+ struct subgroup **sg_pp;
+ for( sg_pp=&(group->subgroups); *sg_pp != NULL; sg_pp=&(*sg_pp)->next) {
+ if( (*sg_pp)->first_tail == first_tail ) {
+ return(*sg_pp);
+ }
+ }
+ /* Create and populate subgroup */
+ subgroup_ptr = (struct subgroup *) malloc( sizeof(struct subgroup));
+ CHECK_OOM( ! subgroup_ptr, exit_oom, "in find_or_create_subgroup()");
+
+ subgroup_ptr->next=NULL;
+ subgroup_ptr->first_tail=first_tail;
+ /* positions are 1..max_position, +1 for the terminating 0=end-of-list */
+ subgroup_ptr->matching_positions = malloc( (group->max_position+1) * sizeof( unsigned int ));
+ CHECK_OOM( ! subgroup_ptr->matching_positions, exit_oom, "in find_or_create_subgroup()");
+
+ /* malloc group->subgroup_position if not already done */
+ if ( ! group->subgroup_position ) {
+ group->subgroup_position = malloc( (group->max_position+1) * sizeof( unsigned int ));
+ CHECK_OOM( ! group->subgroup_position, exit_oom, "in find_or_create_subgroup()");
+
+ }
+ *sg_pp = subgroup_ptr; /* Append new subgroup record to list */
+ /* populate matching_positions */
+ unsigned int pos_ndx;
+ unsigned int ndx = 0;
+ for(pos_ndx=1; pos_ndx <= group->max_position; pos_ndx++ ){
+ /* save the position if this tail+value exists for this position - not necessarily the first tail, we need to check all tails at this position */
+ struct tail_stub *tail_stub_ptr;
+ for( tail_stub_ptr = group->tails_at_position[pos_ndx]; tail_stub_ptr != NULL; tail_stub_ptr=tail_stub_ptr->next ) {
+ if( tail_stub_ptr->tail == first_tail ) {
+ subgroup_ptr->matching_positions[ndx++] = pos_ndx;
+ if( first_tail == group->first_tail[pos_ndx]->tail ) {
+ /* If first_tail is also the first tail at this position, update subgroup_position[] */
+ group->subgroup_position[pos_ndx]=ndx; /* ndx was already incremented above, so this stores index+1: matching_positions is 0-based, whereas the fallback position is 1-based */
+ }
+ break;
+ }
+ }
+ }
+ subgroup_ptr->matching_positions[ndx] = 0; /* 0 = end of list */
+ return(subgroup_ptr);
+}
+
+/* str_ischild()
+ * compare 2 strings which are of the form simple_tail
+ * return true(1) if parent == /path and child == /path/tail
+ * return false(0) if child == /pathother or child == /pat or anything else
+ */
+static int str_ischild(char *parent, char *child) {
+ while( *parent ) {
+ if( *parent != *child ) {
+ return(0);
+ }
+ parent++;
+ child++;
+ }
+ if( *child == '/' ) {
+ return(1);
+ } else {
+ return(0);
+ }
+}
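+
+/* For example:
+ *   str_ischild("/path", "/path/tail") returns 1
+ *   str_ischild("/path", "/pathother") returns 0
+ *   str_ischild("/path", "/pat")       returns 0
+ */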
+
+/* Find the first tail in the linked list that has a value or has no child nodes
+ * eg for paths starting with /head/123/... ignore the entry:
+ * /head/123 (null)
+ * and any further paths like this
+ * head/123/tail (null)
+ * head/123/tail/child (null)
+ * stop when we encounter a value, or find a tail that has no child nodes,
+ * if the next tail is eg
+ * head/123/tail2
+ * then head/123/tail/child is significant, and that becomes the first_tail
+ */
+static struct tail_stub *find_first_tail(struct tail_stub *tail_stub_ptr) {
+ if( tail_stub_ptr == NULL )
+ return(NULL);
+ for( ; tail_stub_ptr->next != NULL; tail_stub_ptr=tail_stub_ptr->next ) {
+ if ( tail_stub_ptr->tail->value != NULL && tail_stub_ptr->tail->value[0] != '\0' ) {
+ break;
+ }
+ if( ! str_ischild( tail_stub_ptr->tail->simple_tail, tail_stub_ptr->next->tail->simple_tail) ) {
+ /* the next tail is not a child-node of this tail */
+ break;
+ }
+ }
+ return(tail_stub_ptr);
+}
+
+static struct tail *choose_tail(struct group *group, unsigned int position ) {
+ struct tail_stub *first_tail_stub;
+ struct tail_stub *tail_stub_ptr;
+ unsigned int ndx;
+
+ if( group->tails_at_position[position] == NULL ) {
+ /* first_tail_stub == NULL
+ * this does not happen, because every position gets at least one tail of ""
+ * eg, even if the value is NULL.
+ * /head/1 (null)
+ * ...simple_tail ""
+ * ...value NULL
+ * paths without a position ( /head/tail ) are not added to any group
+ * We can't do anything with this, use seq::* or [*] only (no value) */
+ fprintf(stderr,"# choose_tail() %s[%u] first_tail_stub is NULL (internal error)\n", group->head, position);
+ group->chosen_tail_state[position] = NO_CHILD_NODES;
+ return(NULL);
+ }
+
+ first_tail_stub = group->first_tail[position];
+
+ /* First preference - if the first-tail+value is unique, use that */
+ if( first_tail_stub->tail->tail_value_found == 1 ) {
+ group->chosen_tail_state[position] = FIRST_TAIL;
+ return(first_tail_stub->tail);
+ }
+
+ /* Second preference - find a unique tail+value that has only one value for this position and has the tail existing for all other positions */
+ for( tail_stub_ptr=first_tail_stub; tail_stub_ptr!=NULL; tail_stub_ptr=tail_stub_ptr->next) {
+ if( tail_stub_ptr->tail->tail_value_found == 1 ) { /* tail_stub_ptr->tail->value can be NULL, just needs to be unique */
+ int found=1;
+ for( ndx=1; ndx <= group->max_position; ndx++ ) {
+ if( tail_stub_ptr->tail->tail_found_map[ndx] == 0 ) {
+ /* tail does not exist for every position within this group */
+ found=0;
+ break;
+ }
+ }
+ if ( found ) {
+ /* This works only if chosen_tail->simple_tail is the first appearance of simple_tail at this position */
+ struct tail_stub *tail_check_ptr;
+ for( tail_check_ptr=first_tail_stub; tail_check_ptr != tail_stub_ptr; tail_check_ptr=tail_check_ptr->next) {
+ if( strcmp(tail_check_ptr->tail->simple_tail, tail_stub_ptr->tail->simple_tail ) == 0 ) {
+ found=0;
+ }
+ }
+ }
+ if ( found ) {
+ group->chosen_tail_state[position] = CHOSEN_TAIL_START;
+ return(tail_stub_ptr->tail);
+ }
+ } /* if ... tail_value_found == 1 */
+ }
+
+ /* Third preference - first tail is not unique but could make a unique combination with another tail */
+ struct subgroup *subgroup_ptr = find_or_create_subgroup(group, first_tail_stub->tail);
+ for( tail_stub_ptr=first_tail_stub->next; tail_stub_ptr!=NULL; tail_stub_ptr=tail_stub_ptr->next) {
+ /* for each tail at this position (other than the first) */
+ /* Find a tail at this position where:
+ * a) tail+value is unique within this subgroup
+ * b) tail exists at all positions within this subgroup
+ */
+ int found=1;
+ for(ndx=0; subgroup_ptr->matching_positions[ndx] != 0; ndx++ ) {
+ int pos=subgroup_ptr->matching_positions[ndx];
+ if ( pos == position ) continue;
+ if( tail_stub_ptr->tail->tail_value_found_map[pos] != 0 ) {
+ /* tail+value is not unique within this subgroup */
+ found=0;
+ break;
+ }
+ if( tail_stub_ptr->tail->tail_found_map[pos] == 0 ) {
+ /* tail does not exist for every position within this subgroup */
+ found=0;
+ break;
+ }
+ }
+ if ( found ) {
+ /* This works only if chosen_tail->simple_tail is the first appearance of simple_tail at this position */
+ struct tail_stub *tail_check_ptr;
+ for( tail_check_ptr=first_tail_stub; tail_check_ptr != tail_stub_ptr; tail_check_ptr=tail_check_ptr->next) {
+ if( strcmp(tail_check_ptr->tail->simple_tail, tail_stub_ptr->tail->simple_tail ) == 0 ) {
+ found=0;
+ }
+ }
+ }
+ if ( found ) {
+ group->chosen_tail_state[position] = CHOSEN_TAIL_PLUS_FIRST_TAIL_START;
+ return(tail_stub_ptr->tail);
+ }
+ }
+ /* Fourth preference (fallback) - use first_tail PLUS the position within the subgroup */
+ group->chosen_tail_state[position] = FIRST_TAIL_PLUS_POSITION;
+ return(first_tail_stub->tail);
+}
+
+/* simple_tail_expr()
+ * given a simple_tail of the form "/path" or ""
+ * return "path" or "."
+ */
+static const char *simple_tail_expr(char *simple_tail) {
+ if( *simple_tail == '/' ) {
+ /* usual case - .../123/... or /label[123]/... */
+ return(simple_tail+1);
+ } else if ( *simple_tail == '\0' ) {
+ /* path ending in /123 or /label[123] */
+ return(".");
+ } else {
+ /* unreachable? */
+ return(simple_tail);
+ }
+}
+
+/* Write out the path-segment, up to and including the [ expr ] (if required) */
+static void output_segment(struct path_segment *ps_ptr, struct augeas_path_value *path_value_seg) {
+ char *last_c, *str;
+ struct group *group;
+ struct tail *chosen_tail;
+ unsigned int position;
+ chosen_tail_state_t chosen_tail_state;
+ struct tail_stub *first_tail;
+
+ char *value_qq = path_value_seg->value_qq;
+
+ /* print segment possibly followed by * or seq::* */
+ last_c=ps_ptr->segment;
+ for(str=ps_ptr->segment; *str; last_c=str++) /* find the last character of the segment */
+ ;
+ if(*last_c=='/') {
+ /* sequential position .../123 */
+ if ( noseq )
+ printf("%s*", ps_ptr->segment);
+ else
+ printf("%sseq::*", ps_ptr->segment);
+ } else {
+ /* label with a position .../label[123], or no position ... /last */
+ printf("%s", ps_ptr->segment);
+ }
+ group = ps_ptr->group;
+ if( group == NULL ) {
+ /* last segment .../last_tail No position, nothing else to print */
+ return;
+ }
+
+ /* apply "chosen_tail" criteria here */
+ position = ps_ptr->position;
+ chosen_tail = group->chosen_tail[position];
+ if( chosen_tail == NULL ) {
+ /* This should not happen */
+ fprintf(stderr,"chosen_tail==NULL ???\n");
+ return;
+ }
+
+ first_tail = find_first_tail(group->tails_at_position[position]);
+ chosen_tail_state = group->chosen_tail_state[position];
+
+ switch( chosen_tail_state ) {
+ case CHOSEN_TAIL_START:
+ group->chosen_tail_state[position] = CHOSEN_TAIL_WIP;
+ __attribute__ ((fallthrough)); /* drop through */
+ case FIRST_TAIL:
+ case CHOSEN_TAIL_DONE:
+ case FIRST_TAIL_PLUS_POSITION:
+ if ( chosen_tail->value == NULL ) {
+ printf("[%s]", simple_tail_expr(chosen_tail->simple_tail));
+ } else if ( use_regexp ) {
+ printf("[%s=~regexp(%*s)]",
+ simple_tail_expr(chosen_tail->simple_tail),
+ -(group->pretty_width_ct[position]), /* minimum field width */
+ chosen_tail->value_re
+ );
+ } else {
+ printf("[%s=%*s]",
+ simple_tail_expr(chosen_tail->simple_tail),
+ -(group->pretty_width_ct[position]), /* minimum field width */
+ chosen_tail->value_qq
+ );
+ }
+ if ( chosen_tail_state == FIRST_TAIL_PLUS_POSITION ) {
+ /* no unique tail+value - duplicate or overlapping positions */
+ printf("[%u]", group->subgroup_position[position] );
+ }
+ break;
+ case CHOSEN_TAIL_WIP:
+ if ( chosen_tail->value == NULL ) {
+ /* theoretically possible - how to test? */
+ printf("[%s or count(%s)=0]",
+ simple_tail_expr(chosen_tail->simple_tail),
+ simple_tail_expr(chosen_tail->simple_tail));
+ } else if ( use_regexp ) {
+ printf("[%s=~regexp(%*s) or count(%s)=0]",
+ simple_tail_expr(chosen_tail->simple_tail),
+ -(group->pretty_width_ct[position]), /* minimum field width */
+ chosen_tail->value_re,
+ simple_tail_expr(chosen_tail->simple_tail));
+ } else {
+ printf("[%s=%*s or count(%s)=0]",
+ simple_tail_expr(chosen_tail->simple_tail),
+ -(group->pretty_width_ct[position]), /* minimum field width */
+ chosen_tail->value_qq,
+ simple_tail_expr(chosen_tail->simple_tail));
+ }
+ if ( strcmp(chosen_tail->simple_tail, ps_ptr->simplified_tail) == 0 && strcmp(chosen_tail->value_qq, value_qq) == 0 ) {
+ group->chosen_tail_state[position] = CHOSEN_TAIL_DONE;
+ }
+ break;
+ case CHOSEN_TAIL_PLUS_FIRST_TAIL_START:
+ if ( first_tail->tail->value == NULL && use_regexp ) {
+ /* test with /etc/sudoers */
+ printf("[%s and %s=~regexp(%s)]",
+ simple_tail_expr(first_tail->tail->simple_tail),
+ simple_tail_expr(chosen_tail->simple_tail),
+ chosen_tail->value_re
+ );
+
+ } else if ( first_tail->tail->value == NULL && ! use_regexp ) {
+ /* test with /etc/sudoers */
+ printf("[%s and %s=%s]",
+ simple_tail_expr(first_tail->tail->simple_tail),
+ simple_tail_expr(chosen_tail->simple_tail),
+ chosen_tail->value_qq
+ );
+ } else if ( use_regexp ) {
+ printf("[%s=~regexp(%*s) and %s=~regexp(%s)]",
+ simple_tail_expr(first_tail->tail->simple_tail),
+ -(group->pretty_width_ct[position]), /* minimum field width */
+ first_tail->tail->value_re,
+ simple_tail_expr(chosen_tail->simple_tail),
+ chosen_tail->value_re );
+ } else {
+ printf( "[%s=%*s and %s=%s]",
+ simple_tail_expr(first_tail->tail->simple_tail),
+ -(group->pretty_width_ct[position]), /* minimum field width */
+ first_tail->tail->value_qq,
+ simple_tail_expr(chosen_tail->simple_tail),
+ chosen_tail->value_qq );
+ }
+ group->chosen_tail_state[position] = CHOSEN_TAIL_PLUS_FIRST_TAIL_WIP;
+ break;
+ case CHOSEN_TAIL_PLUS_FIRST_TAIL_WIP:
+ if ( first_tail->tail->value == NULL && use_regexp ) {
+ printf("[%s and ( %s=~regexp(%s) or count(%s)=0 )]",
+ simple_tail_expr(first_tail->tail->simple_tail),
+ simple_tail_expr(chosen_tail->simple_tail),
+ chosen_tail->value_re,
+ simple_tail_expr(chosen_tail->simple_tail)
+ );
+ } else if ( first_tail->tail->value == NULL && ! use_regexp ) {
+ printf("[%s and ( %s=%s or count(%s)=0 )]",
+ simple_tail_expr(first_tail->tail->simple_tail),
+ simple_tail_expr(chosen_tail->simple_tail),
+ chosen_tail->value_qq,
+ simple_tail_expr(chosen_tail->simple_tail)
+ );
+ } else if ( use_regexp ) {
+ printf("[%s=~regexp(%*s) and ( %s=~regexp(%s) or count(%s)=0 ) ]",
+ simple_tail_expr(first_tail->tail->simple_tail),
+ -(group->pretty_width_ct[position]), /* minimum field width */
+ first_tail->tail->value_re,
+ simple_tail_expr(chosen_tail->simple_tail),
+ chosen_tail->value_re,
+ simple_tail_expr(chosen_tail->simple_tail)
+ );
+ } else {
+ printf("[%s=%*s and ( %s=%s or count(%s)=0 ) ]",
+ simple_tail_expr(first_tail->tail->simple_tail),
+ -(group->pretty_width_ct[position]), /* minimum field width */
+ first_tail->tail->value_qq,
+ simple_tail_expr(chosen_tail->simple_tail),
+ chosen_tail->value_qq,
+ simple_tail_expr(chosen_tail->simple_tail)
+ );
+ }
+ if ( strcmp(chosen_tail->simple_tail, ps_ptr->simplified_tail) == 0 && strcmp(chosen_tail->value_qq, value_qq) == 0 ) {
+ group->chosen_tail_state[position] = CHOSEN_TAIL_PLUS_FIRST_TAIL_DONE;
+ }
+ break;
+ case CHOSEN_TAIL_PLUS_FIRST_TAIL_DONE:
+ if ( first_tail->tail->value == NULL && use_regexp ) {
+ printf("[%s and %s=~regexp(%s)]",
+ simple_tail_expr(first_tail->tail->simple_tail),
+ simple_tail_expr(chosen_tail->simple_tail),
+ chosen_tail->value_re
+ );
+ } else if ( first_tail->tail->value == NULL && ! use_regexp ) {
+ printf("[%s and %s=%s]",
+ simple_tail_expr(first_tail->tail->simple_tail),
+ simple_tail_expr(chosen_tail->simple_tail),
+ chosen_tail->value_qq
+ );
+ } else if ( use_regexp ) {
+ printf("[%s=~regexp(%*s) and %s=~regexp(%s)]",
+ simple_tail_expr(first_tail->tail->simple_tail),
+ -(group->pretty_width_ct[position]), /* minimum field width */
+ first_tail->tail->value_re,
+ simple_tail_expr(chosen_tail->simple_tail),
+ chosen_tail->value_re
+ );
+ } else {
+ printf("[%s=%*s and %s=%s]",
+ simple_tail_expr(first_tail->tail->simple_tail),
+ -(group->pretty_width_ct[position]), /* minimum field width */
+ first_tail->tail->value_qq,
+ simple_tail_expr(chosen_tail->simple_tail),
+ chosen_tail->value_qq
+ );
+ }
+ break;
+ case NO_CHILD_NODES:
+ if(*last_c!='/') {
+ printf("[*]"); /* /head/label with no child nodes */
+ }
+ break;
+ default:
+ /* unreachable */
+ printf("[ %s=%s ]", simple_tail_expr(chosen_tail->simple_tail),chosen_tail->value_qq);
+ }
+}
+
+static void output_path(struct augeas_path_value *path_value_seg) {
+ struct path_segment *ps_ptr;
+ printf("set ");
+ for( ps_ptr=path_value_seg->segments; ps_ptr != NULL; ps_ptr=ps_ptr->next) {
+ output_segment(ps_ptr, path_value_seg);
+ }
+ if( path_value_seg->value_qq != NULL ) {
+ printf(" %s\n", path_value_seg->value_qq);
+ } else {
+ printf("\n");
+ }
+}
+
+static void output(void) {
+ int ndx; /* index to matches() */
+ struct augeas_path_value *path_value_seg;
+ char *value;
+ for( ndx=0; ndx<num_matched; ndx++) {
+ path_value_seg = all_augeas_paths[ndx];
+ value = path_value_seg->value;
+ if( value != NULL && *value == '\0' )
+ value = NULL;
+ if(verbose) {
+ if ( value == NULL )
+ fprintf(stdout,"# %s\n", path_value_seg->path);
+ else
+ fprintf(stdout,"# %s %s\n", path_value_seg->path, path_value_seg->value_qq);
+ }
+ /* Weed out null paths here, eg.
+ * /head/123 (null)
+ * /head/123/tail (null)
+ * /head/path (null)
+ * ie. skip if value==NULL AND this node has child nodes.
+ * This does not apply if there are no child nodes, eg no
+ * /head/path/tail
+ */
+ if ( value == NULL && ndx < num_matched-1 ) {
+ if(str_ischild(all_augeas_paths[ndx]->path, all_augeas_paths[ndx+1]->path)) {
+ continue;
+ }
+ }
+ output_path(path_value_seg);
+ if( pretty ) {
+ if( ndx < num_matched-1 ) {
+ /* fixme - do we just need to compare the position? */
+ struct group *this_group, *next_group;
+ this_group = all_augeas_paths[ndx]->segments->group;
+ next_group = all_augeas_paths[ndx+1]->segments->group;
+ if ( this_group != next_group
+ || ( this_group != NULL && all_augeas_paths[ndx]->segments->position != all_augeas_paths[ndx+1]->segments->position )
+ ) {
+ /* New group, put in a newline for visual separation */
+ printf("\n");
+ }
+ }
+ }
+ }
+}
+
+static void choose_re_width(struct group *group) {
+ unsigned int position;
+ /* For each position, compare the value of chosen_tail with
+ * all other matching simple_tails in the group, to find the minimum
+ * required length of the RE
+ */
+ for(position=1; position<=group->max_position; position++) {
+ unsigned int max_re_width_ct=0;
+ unsigned int max_re_width_ft=0;
+ unsigned int re_width;
+ struct tail *chosen_tail = group->chosen_tail[position];
+ struct tail *first_tail = group->first_tail[position]->tail;
+ struct tail *tail_ptr;
+ for(tail_ptr = group->all_tails; tail_ptr != NULL; tail_ptr = tail_ptr->next) {
+ if ( tail_ptr != chosen_tail ) {
+ if( strcmp(tail_ptr->simple_tail, chosen_tail->simple_tail) == 0 ) {
+ value_cmp(tail_ptr->value, chosen_tail->value, &re_width);
+ if( re_width + 1 > max_re_width_ct ) {
+ max_re_width_ct = re_width+1;
+ }
+ }
+ }
+ if( group->chosen_tail_state[position] == CHOSEN_TAIL_PLUS_FIRST_TAIL_START && chosen_tail != first_tail ) {
+ /* 3rd preference, we need an re_width for both the chosen_tail and the first_tail */
+ /* In theory, the first_tail of this position may be present in other positions, but may not be first */
+ if ( tail_ptr != first_tail ) {
+ if( strcmp(tail_ptr->simple_tail, first_tail->simple_tail) == 0 ) {
+ value_cmp(tail_ptr->value, first_tail->value, &re_width);
+ if( re_width + 1 > max_re_width_ft ) {
+ max_re_width_ft = re_width+1;
+ }
+ }
+ }
+ } /* If 3rd preference */
+ } /* for each tail in group->all_tails */
+ max_re_width_ct = MAX(max_re_width_ct,use_regexp);
+ max_re_width_ft = MAX(max_re_width_ft,use_regexp);
+ group->re_width_ct[position] = max_re_width_ct;
+ group->re_width_ft[position] = max_re_width_ft;
+ chosen_tail->value_re = regexp_value( chosen_tail->value, max_re_width_ct );
+ if ( group->chosen_tail_state[position] == CHOSEN_TAIL_PLUS_FIRST_TAIL_START ) {
+ /* otherwise, max_re_width_ft=0, and we don't need first_tail->value_re at all */
+ if ( chosen_tail == first_tail ) {
+ /* if chosen_tail == first_tail, we would overwrite chosen_tail->value_re */
+ first_tail->value_re = chosen_tail->value_re;
+ } else {
+ first_tail->value_re = regexp_value( first_tail->value, max_re_width_ft );
+ }
+ }
+ } /* for position 1..max_position */
+}
+
+static void choose_pretty_width(struct group *group) {
+ unsigned int position;
+ int value_len;
+ for(position=1; position<=group->max_position; position++) {
+ struct tail *pretty_tail;
+ if( group->chosen_tail_state[position] == CHOSEN_TAIL_PLUS_FIRST_TAIL_START ) {
+ pretty_tail = group->first_tail[position]->tail;
+ } else {
+ pretty_tail = group->chosen_tail[position];
+ }
+ if( use_regexp ) {
+ value_len = pretty_tail->value_re == NULL ? 0 : strlen(pretty_tail->value_re);
+ } else {
+ value_len = pretty_tail->value_qq == NULL ? 0 : strlen(pretty_tail->value_qq);
+ }
+ group->pretty_width_ct[position] = value_len;
+ }
+ /* find the highest pretty_width_ct for each unique chosen_tail->simple_tail in the group */
+ for(position=1; position<=group->max_position; position++) {
+ unsigned int max_width=0;
+ unsigned int pos_search;
+ char *chosen_simple_tail = group->chosen_tail[position]->simple_tail;
+ for(pos_search=position; pos_search <= group->max_position; pos_search++) {
+ if(strcmp( group->chosen_tail[pos_search]->simple_tail, chosen_simple_tail) == 0 ) {
+ value_len = group->pretty_width_ct[pos_search];
+ if( value_len <= MAX_PRETTY_WIDTH ) {
+ /* If we're already over the limit, do not pad everything else out too */
+ max_width = MAX(max_width, value_len);
+ }
+ group->pretty_width_ct[pos_search] = max_width; /* so we can start at position+1 */
+ }
+ }
+ max_width = MIN(max_width,MAX_PRETTY_WIDTH);
+ group->pretty_width_ct[position] = max_width;
+ } /* for position 1..max_position */
+}
+
+/* Populate the group->chosen_tail[] and group->first_tail[] arrays */
+/* Also call choose_re_width() and choose_pretty_width() to populate group->re_width_ct[], ..->re_width_ft[] and ..->pretty_width_ct[] */
+static void choose_all_tails(void) {
+ int ndx; /* index to all_groups() */
+ unsigned int position;
+ struct group *group;
+ for(ndx=0; ndx<num_groups; ndx++) {
+ group=all_groups[ndx];
+ for(position=1; position<=group->max_position; position++) {
+ /* find_first_tail() - find first "significant" tail
+ * populate group->first_tail[] before calling choose_tail()
+ * We need these values for find_or_create_subgroup()
+ */
+ group->first_tail[position] = find_first_tail(group->tails_at_position[position]);
+ }
+ for(position=1; position<=group->max_position; position++) {
+ group->chosen_tail[position] = choose_tail(group, position);
+ }
+ if( use_regexp ) {
+ choose_re_width(group);
+ }
+ if( pretty ) {
+ choose_pretty_width(group);
+ }
+ }
+}
+
+/* Create a quoted value from the value, using single quotes if possible
+ * Quotes are not strictly required for the value, but they _are_ required
+ * for values within the path-expressions
+ */
+static char *quote_value(char *value) {
+ char *s, *t, *value_qq, quote;
+ int len=0;
+ int has_q=0;
+ int has_qq=0;
+ int has_special=0;
+ int has_nl=0;
+ int new_len;
+ if(value==NULL)
+ return(NULL);
+ for(s = value, len=0; *s; s++, len++) {
+ switch(*s) {
+ case '"': has_qq++; break;
+ case '\'': has_q++; break;
+ case ' ':
+ case '/':
+ case '*':
+ case '.':
+ case ':':
+ has_special++; break;
+ case '\n':
+ case '\t':
+ case '\\':
+ has_nl++; break;
+ default:
+ ;
+ }
+ }
+ if( has_q == 0 ) {
+ /* Normal case, no single-quotes within the value */
+ new_len = len+2+has_nl;
+ quote='\'';
+ } else if ( has_qq == 0 ) {
+ new_len = len+2+has_nl;
+ quote='"';
+ } else {
+ /* This needs a bugfix in augeas */
+ new_len = len+2+has_q+has_nl;
+ quote='\'';
+ }
+ new_len++; /* don't forget the \0 */
+ value_qq = malloc( sizeof(char) * new_len);
+ CHECK_OOM( ! value_qq, exit_oom, "in quote_value()");
+
+ t=value_qq;
+ *t++ = quote;
+ for(s = value; *s; s++, t++) {
+ if ( *s == quote ) {
+ *t++ = '\\';
+ *t =quote;
+ continue;
+ } else if ( *s == '\n' ) {
+ *t++ = '\\';
+ *t = 'n';
+ continue;
+ } else if ( *s == '\t' ) {
+ *t++ = '\\';
+ *t = 't';
+ continue;
+ } else if ( *s == '\\' ) {
+ *t++ = '\\';
+ *t = '\\';
+ continue;
+ }
+ *t = *s;
+ }
+ *t++ = quote;
+ *t++ = '\0';
+ return(value_qq);
+}
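+/* Examples (hypothetical values), following the quoting rules above:
+ * quote_value("hello world") yields 'hello world' (single quotes preferred)
+ * quote_value("it's here") yields "it's here" (falls back to double quotes)
+ * quote_value("line1\nline2") yields 'line1\nline2' (newline escaped)
+ */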
+
+/* Create a quoted regular expression from the value, using single quotes if possible
+ */
+static char *regexp_value(char *value, int max_len) {
+ char *s, *t, *value_re, quote;
+ int len=0;
+ int has_q=0;
+ int has_qq=0;
+ int has_special=0;
+ int has_nl=0;
+ int new_len;
+ if(value==NULL)
+ return(NULL);
+ for(s = value, len=0; *s; s++, len++) {
+ switch(*s) {
+ case '"': has_qq++; break;
+ case '\'': has_q++; break;
+ case '*':
+ case '?':
+ case '.':
+ case '[':
+ case ']':
+ case '(':
+ case ')':
+ case '^':
+ case '$':
+ case '|':
+ has_special++; break;
+ case '\n':
+ case '\t':
+ has_nl++; break;
+ case '\\':
+ has_special+=2; break;
+ default:
+ ;
+ }
+ }
+ len++; /* don't forget the \0 */
+ if( has_q == 0 ) {
+ /* Normal case, no single-quotes within the value */
+ new_len = len+2+has_nl+has_special*2;
+ quote='\'';
+ } else if ( has_qq == 0 ) {
+ new_len = len+2+has_nl+has_special*2;
+ quote='"';
+ } else {
+ /* This needs a bugfix in augeas */
+ new_len = len+2+has_q+has_nl+has_special*2;
+ quote='\'';
+ }
+ value_re = malloc( sizeof(char) * new_len);
+ CHECK_OOM( ! value_re, exit_oom, "in regexp_value()");
+
+ t=value_re;
+ *t++ = quote;
+ for(s = value; *s; s++, t++) {
+ if ( *s == quote ) {
+ *t++ = '\\';
+ *t =quote;
+ continue;
+ } else if ( *s == '\n' ) {
+ *t++ = '\\';
+ *t = 'n';
+ continue;
+ } else if ( *s == '\t' ) {
+ *t++ = '\\';
+ *t = 't';
+ continue;
+ } else if ( *s == '\\' || *s == ']' ) {
+ *t = '.';
+ continue;
+ }
+ switch(*s) {
+ /* ']' and '\\' were already replaced with '.' above */
+ case '[':
+ *t++ = '\\';
+ break;
+ case '*':
+ case '?':
+ case '.':
+ case '(':
+ case ')':
+ case '^':
+ case '$':
+ case '|':
+ *t++ = '\\';
+ *t++ = '\\';
+ break;
+ case '\\':
+ case '\n':
+ case '\t':
+ break; /* already dealt with above */
+ default:
+ ;
+ }
+ *t = *s;
+ if( ( s - value ) + 1 >= max_len && *(s+1)!='\0' && *(s+2)!='\0' && *(s+3)!='\0' ) {
+ /* don't append .* if there are only one or two chars left in the string */
+ t++;
+ *t++='.';
+ *t++='*';
+ break;
+ }
+ }
+ *t++ = quote;
+ *t++ = '\0';
+ return(value_re);
+}
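+/* Example (hypothetical value): with max_len 6, the value "192.168.0.1" becomes
+ * roughly '192\\.16.*' - metacharacters are doubly-escaped so that they survive
+ * augeas string unescaping, and ".*" is appended once max_len is reached, unless
+ * fewer than three characters remain in the value.
+ */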
+
+static void usage(const char *progname) {
+ if(progname == NULL)
+ progname = "augprint";
+ fprintf(stdout, "Usage:\n\n%s [--target=realname] [--lens=Lensname] [--pretty] [--regexp[=n]] [--noseq] /path/filename\n\n",progname);
+ fprintf(stdout, " -t, --target ... use this as the filename in the output set-commands\n");
+ fprintf(stdout, " this filename also implies the default lens to use\n");
+ fprintf(stdout, " -l, --lens ... override the default lens and target and use this one\n");
+ fprintf(stdout, " -p, --pretty ... make the output more readable\n");
+ fprintf(stdout, " -r, --regexp ... use regexp() in path-expressions instead of absolute values\n");
+ fprintf(stdout, " if followed by a number, this is the minimum length of the regexp to use\n");
+ fprintf(stdout, " -s, --noseq ... use * instead of seq::* (useful for compatibility with augeas < 1.13.0)\n");
+ fprintf(stdout, " -h, --help ... this message\n");
+ fprintf(stdout, " -V, --version ... print augeas version information and exit.\n");
+ fprintf(stdout, " /path/filename ... full pathname to the file being analysed (required)\n\n");
+ fprintf(stdout, "%s will generate a script of augtool set-commands suitable for rebuilding the file specified\n", progname);
+ fprintf(stdout, "If --target is specified, then the lens associated with the target will be used to parse the file\n");
+ fprintf(stdout, "If --lens is specified, then the given lens will be used, overriding the default, and --target\n\n");
+ fprintf(stdout, "Examples:\n");
+ fprintf(stdout, "\t%s --target=/etc/squid/squid.conf /etc/squid/squid.conf.new\n", progname);
+ fprintf(stdout, "\t\tOutput an augtool script for re-creating /etc/squid/squid.conf.new at /etc/squid/squid.conf\n\n");
+ fprintf(stdout, "\t%s --lens=simplelines /etc/hosts\n", progname);
+ fprintf(stdout, "\t\tOutput an augtool script for /etc/hosts using the lens simplelines instead of the default for /etc/hosts\n\n");
+ fprintf(stdout, "\t%s --regexp=12 /etc/hosts\n", progname);
+ fprintf(stdout, "\t\tUse regular expressions in the resulting augtool script, each being at least 12 chars long\n");
+ fprintf(stdout, "\t\tIf the value is less than 12 chars, use the whole value in the expression\n");
+ fprintf(stdout, "\t\tRegular expressions longer than 12 chars may be generated, if the 12 char regexp\n");
+ fprintf(stdout, "\t\twould match more than one value\n");
+}
+
+static void print_version_info(const char *progname) {
+ const char *version;
+ int r;
+
+ r = aug_get(aug, "/augeas/version", &version);
+ if (r != 1)
+ goto error;
+
+ fprintf(stderr, "%s %s <https://augeas.net/>\n", progname, version);
+ return;
+ error:
+ fprintf(stderr, "Something went terribly wrong internally - please file a bug\n");
+}
+
+int main(int argc, char **argv) {
+ int opt;
+ char *augeas_root = getenv("AUGEAS_ROOT");
+ char *inputfile = NULL;
+ char *target_file = NULL;
+ char *program_name = basename(argv[0]);
+ char *value; /* result of aug_get() */
+
+ while (1) {
+ int option_index = 0;
+ static struct option long_options[] = {
+ {"help", no_argument, &help, 1 },
+ {"version", no_argument, &print_version, 1 },
+ {"verbose", no_argument, &verbose, 1 },
+ {"debug", no_argument, &debug, 1 },
+ {"lens", required_argument, 0, 0 },
+ {"noseq", no_argument, &noseq, 1 },
+ {"seq", no_argument, &noseq, 0 },
+ {"target", required_argument, 0, 0 },
+ {"pretty", no_argument, &pretty, 1 },
+ {"regexp", optional_argument, &use_regexp, 1 },
+ {0, 0, 0, 0 } /* marker for end of data */
+ };
+
+ opt = getopt_long(argc, argv, "vdhVl:sSr::pt:", long_options, &option_index);
+ if (opt == -1)
+ break;
+
+ switch (opt) {
+ case 0:
+ if(debug) {
+ fprintf(stderr,"option %d %s", option_index, long_options[option_index].name);
+ if(optarg) fprintf(stderr," with arg %s", optarg);
+ fprintf(stderr,"\n");
+ }
+ if (strcmp(long_options[option_index].name, "lens") == 0 ) {
+ lens = optarg;
+ flags |= AUG_NO_MODL_AUTOLOAD;
+ } else if (strcmp(long_options[option_index].name, "target") == 0) {
+ target_file = optarg;
+ if( *target_file != '/' ) {
+ fprintf(stderr,"%s: Error: target \"%s\" must be an absolute path\neg.\n\t--target=/etc/%s\n", program_name, target_file, target_file);
+ exit(1);
+ }
+ } else if (strcmp(long_options[option_index].name, "regexp") == 0) {
+ if(optarg) {
+ int optarg_int = strtol(optarg, NULL, 0);
+ if(optarg_int > 0)
+ use_regexp = optarg_int;
+ /* else use the default 1 set by getopt() */
+ } else {
+ use_regexp = 8;
+ }
+ }
+ break;
+
+ case 'h':
+ help=1;
+ break;
+ case 'V':
+ print_version=1;
+ break;
+ case 'v':
+ verbose=1;
+ break;
+ case 'd':
+ debug=1;
+ fprintf(stderr,"option d (debug)\n");
+ break;
+ case 'S':
+ noseq=0;
+ break;
+ case 's':
+ noseq=1;
+ break;
+ case 'l':
+ lens = optarg;
+ flags |= AUG_NO_MODL_AUTOLOAD;
+ break;
+ case 't':
+ target_file = optarg;
+ if( *target_file != '/' ) {
+ fprintf(stderr,"%s: Error: target \"%s\" must be an absolute path\neg.\n\t--target=/etc/%s\n", program_name, target_file, target_file);
+ exit(1);
+ }
+ break;
+ case 'r':
+ if(optarg) {
+ int optarg_int = strtol(optarg, NULL, 0);
+ use_regexp = optarg_int > 0 ? optarg_int : 8;
+ }
+ use_regexp = use_regexp ? use_regexp : 8;
+ break;
+
+ case '?': /* unknown option */
+ break;
+
+ default:
+ fprintf(stderr,"?? getopt returned character code 0x%x ??\n", opt);
+ }
+ }
+
+ if( help ) {
+ usage(program_name);
+ exit(0);
+ }
+ if( print_version ) {
+ aug = aug_init(NULL, loadpath, flags|AUG_NO_ERR_CLOSE|AUG_NO_LOAD|AUG_NO_MODL_AUTOLOAD);
+ print_version_info(program_name);
+ exit(0);
+ }
+ if (optind == argc-1) {
+ /* We need exactly one non-option argument - the input filename */
+ if( *argv[optind] == '/' ) {
+ /* filename is an absolute path - use it verbatim */
+ inputfile = argv[optind];
+ } else {
+ /* filename is a relative path - prepend the current PWD */
+ int result = asprintf(&inputfile, "%s/%s", getenv("PWD"), argv[optind] );
+ CHECK_OOM( result < 0, exit_oom, NULL);
+ }
+ if(debug) {
+ fprintf(stderr,"non-option ARGV-elements: ");
+ while (optind < argc)
+ fprintf(stderr,"%s ", argv[optind++]);
+ fprintf(stderr,"\n");
+ }
+ } else if( optind == argc ) {
+ /* No non-option args given (missing inputfile) */
+ fprintf(stderr,"Missing command-line argument\nPlease specify a filename to read eg.\n\t%s %s\n", program_name, "/etc/hosts");
+ fprintf(stderr, "\nTry '%s --help' for more information.\n", program_name);
+ exit(1);
+ } else {
+ /* Too many args - we only want one */
+ fprintf(stderr,"Too many command-line arguments\nPlease specify only one filename to read eg.\n\t%s %s\n", program_name, "/etc/hosts");
+ fprintf(stderr, "\nTry '%s --help' for more information.\n", program_name);
+ exit(1);
+ }
+
+ cleanup_filepath(inputfile);
+ char *inputfile_real;
+ if( augeas_root != NULL ) {
+ int result = asprintf(&inputfile_real, "%s/%s", augeas_root, inputfile );
+ if ( result == -1 ) {
+ perror(program_name);
+ exit(1);
+ }
+ } else {
+ inputfile_real = inputfile;
+ }
+ if( access(inputfile_real, F_OK|R_OK) ) {
+ fprintf(stderr, "%s: Could not access file %s: %s\n", program_name, inputfile_real, strerror(errno));
+ exit(1);
+ }
+
+ aug = aug_init(NULL, loadpath, flags|AUG_NO_ERR_CLOSE|AUG_NO_LOAD);
+
+ if ( target_file != NULL && lens == NULL ) {
+ /* Infer the lens which applies to the --target_file option */
+ lens = find_lens_for_path(target_file);
+ }
+
+ if ( lens != NULL ) {
+ /* Explicit lens given, or inferred from --target */
+ char *filename;
+ if ( aug_transform(aug, lens, inputfile, 0) != 0 ) {
+ fprintf(stderr, "%s\n", aug_error_details(aug));
+ exit(1);
+ }
+ if ( target_file ) {
+ filename = target_file;
+ } else {
+ filename = inputfile;
+ }
+ printf("setm /augeas/load/*[incl='%s' and label() != '%s']/excl '%s'\n", filename, lens, filename);
+ printf("transform %s incl %s\n", lens, filename);
+ printf("load-file %s\n", filename);
+
+ } else {
+ /* --lens not specified, print the default lens as a comment if --verbose specified */
+ if( verbose ) {
+ char *default_lens;
+ default_lens = find_lens_for_path( inputfile );
+ printf("# Using default lens: %s\n# transform %s incl %s\n", default_lens, default_lens, inputfile);
+ }
+ }
+
+ if ( aug_load_file(aug, inputfile) != 0 || aug_error_details(aug) != NULL ) {
+ const char *msg;
+ fprintf(stderr, "%s: Failed to load file %s\n", program_name, inputfile);
+ msg = aug_error_details(aug);
+ if(msg) {
+ fprintf(stderr,"%s\n",msg);
+ } else {
+ msg = aug_error_message(aug);
+ if(msg)
+ fprintf(stderr,"%s\n",msg);
+ msg = aug_error_minor_message(aug);
+ if(msg)
+ fprintf(stderr,"%s\n",msg);
+ }
+ exit(1);
+ }
+
+ if ( target_file ) {
+ /* Rename the tree from inputfile to target_file, if specified */
+ move_tree(inputfile, target_file);
+ }
+
+ /* There is a subtle difference between "/files//(star)" and "/files/descendant::(star)" in the order that matches appear */
+ /* descendant::* is better suited, as it allows us to prune out intermediate nodes with null values (directory-like nodes) */
+ /* These would be created implicitly by "set" */
+ num_matched = aug_match(aug, "/files/descendant::*", &all_matches);
+ if( num_matched == 0 ) {
+ if( lens == NULL )
+ lens = find_lens_for_path(inputfile);
+ fprintf(stderr,"%s: Failed to parse file %s using lens %s\n", program_name, inputfile, lens);
+ exit(1);
+ }
+ all_augeas_paths = (struct augeas_path_value **) malloc( sizeof(struct augeas_path_value *) * num_matched);
+ CHECK_OOM( all_augeas_paths == NULL, exit_oom, NULL);
+
+ for (int ndx=0; ndx < num_matched; ndx++) {
+ all_augeas_paths[ndx] = (struct augeas_path_value *) malloc( sizeof(struct augeas_path_value));
+ CHECK_OOM( all_augeas_paths[ndx] == NULL, exit_oom, NULL);
+ all_augeas_paths[ndx]->path = all_matches[ndx];
+ aug_get(aug, all_matches[ndx], (const char **) &value );
+ all_augeas_paths[ndx]->value = value;
+ all_augeas_paths[ndx]->value_qq = quote_value(value);
+ all_augeas_paths[ndx]->segments = split_path(all_augeas_paths[ndx]);
+ }
+ choose_all_tails();
+ output();
+
+ exit(0);
+}
+
--- /dev/null
+/*
+ * Copyright (C) 2022 George Hansper <george@hansper.id.au>
+ * -----------------------------------------------------------------------
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ * See the GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along with this program.
+ * If not, see <https://www.gnu.org/licenses/>.
+ *
+ * Author: George Hansper <george@hansper.id.au>
+ * -----------------------------------------------------------------------
+ *
+ Have:
+ /head/label_a[pos1]/mid/label_b[pos2]/tail value_a1_b1
+ /head/label_a[pos3]/mid/label_b[pos4]/tail value_a2_b1
+
+ +-------------------------------------------------------+
+ | augeas_path_value |
+ | path = "/head/label_a[pos1]/mid/label_b[pos2]/tail" |
+ | value = "value_a1_b1" |
+ | value_qq = "'value_a1_b1'" or "\"value_a1_b1\"" |
+ | segments --. |
+ +---------------\---------------------------------------+
+ \
+ +----------------------------------------+ +--------------------------------------------+
+ | path_segment | | path_segment |
+ | head = "/head/label_a" | | head = "/head/label_a[pos1]/mid/label_b" |
+ | segment = "/head/label_a"              |    | segment = "/mid/label_b"                   |
+ | simplified_tail = "mid/label_b/tail" | | simplified_tail = "tail" |
+ | position = (int) pos1 | | position = (int) pos2 |
+ | next ------------------------------------->| next --> NULL |
+ | group --. | | group --. |
+ +------------\---------------------------+ +------------\-------------------------------+
+ \ \
+ +-----------------------------+ +--------------------------------------------+
+ | group | | group |
+ | head = "/head/label_a" | | head = "/head/label_a[pos1]/mid/label_b" |
+ | max_position | | max_position |
+ | chosen_tail[] | | chosen_tail[] | array of *tail, index is position
+ | tails_at_position[]----------------. | tails_at_position[]----------------. | array of *tail_stub lists, index is position
+ | all_tails ---. | \ | all_tails ---. \ | linked list, unique to group
+ +-----------------\-----------+ \ +-----------------\----------------------\---+
+ \ \ \ \
+ +------------------------------------+ \ +------------------------------------+ \
+ | tail |-+ | | tail |-+ |
+ | simple_tail = "mid/label_b/tail" | | | | simple_tail = "tail" | | |
+ | value = "value_a1_b1" | | | | value = "value_a1_b1" | | |
+ | next (next in all_tails list) | | | | next (next in all_tails list) | | |
+ | tail_value_found | | | | tail_value_found | | | count of matching tail+value
+ | tail_value_found_map[] | | | | tail_value_found_map[] | | | per-position count of tail+value
+ | tail_found_map[] | | | | tail_found_map[] | | | per-position count of matching tail
+ +------------------------------------+ | | +------------------------------------+ | |
+ +------------------------------------+ | +------------------------------------+ |
+ (linked-list) ^ | (linked-list) ^ |
+ | | | |
+ | .-----------' | .------------'
+ | | | |
+ | | | |
+ | v | v
+ +---------------------|--------------+ +---------------------|--------------+
+ | tail_stub | |-+ | tail_stub | |-+
+ | *tail (ptr) ---' | | | *tail (ptr) ---' | |
+ | next (in order of appearance) | | | next (in order of appearance) | |
+ +------------------------------------+ | +------------------------------------+ |
+ +------------------------------------+ +------------------------------------+
+ (linked-list) (linked-list)
+
+all_tails is a linked-list of (struct tail), in no particular order, where the combination of tail+value is unique in this list
+
+The all_tails list is unique to a group, and the (struct tail) records are not shared outside the group
+The (struct tail) records are shared within the group, across all [123] positions
+
+Each (struct tail) contains three counters:
+* tail_value_found
+ This is the number of times this tail+value combination appears within the group
+ If this counter is >1, it indicates a duplicate tail+value, i.e. two (or more) identical entries within the group
+* tail_value_found_map
+ This is similar to tail_value_found, but there is an individual counter for each position within the group
+* tail_found_map
+ This is the number of times this tail (regardless of value) appears for each position within the group
+
+There is a (struct tail_stub) record for _every_ tail that we find for this group, including duplicates
+
+The (struct group) record keeps an array tails_at_position[] which is indexed by position
+Each array-element points to a linked-list of tail_stub records, which contain
+a pointer to a (struct tail) record from the all_tails linked list
+The tails_at_position[position] linked-list gives us a complete list of all the tail+value records
+for this position in the group, in their original order of appearance
+*/
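+
+/*
+ * Worked example (hypothetical paths and values, for illustration only):
+ * given the matches
+ *   /head/label_a[1]/tail_x = value_1
+ *   /head/label_a[1]/tail_y = value_2
+ *   /head/label_a[2]/tail_x = value_3
+ * there is one group with head "/head/label_a" and max_position = 2.
+ * The all_tails list holds three unique tail+value records.  tail_x
+ * appears at every position with a distinct value, so it can serve as
+ * the chosen_tail that uniquely identifies each position, e.g.
+ * label_a[tail_x='value_1'] selects position 1.
+ */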
+
+/* all_tails record */
+struct tail {
+ char *simple_tail;
+ char *value;
+ char *value_qq; /* The value, quoted and escaped as-needed */
+ char *value_re; /* The value expressed as a regular-expression, long enough to uniquely identify the value */
+ struct tail *next; /* next all_tails record */
+ unsigned int tail_value_found; /* number of times we have seen this tail+value within this group, (used by 1st preference) */
+ unsigned int *tail_value_found_map; /* Array, indexed by position, number of times we have seen this tail+value within this group (used by 3rd preference) */
+ unsigned int *tail_found_map; /* Array, indexed by position, number of times we have seen this tail (regardless of value) within this group (used by 2nd preference) */
+};
+
+/* Linked list of pointers into the all_tails list
+ * One such linked-list exists for each position within the group
+ * Each list begins at group->tails_at_position[position]
+ */
+struct tail_stub {
+ struct tail *tail;
+ struct tail_stub *next;
+};
+
+/* subgroup exists only to analyse the 3rd preference
+ * it maps a subset of positions within a group
+ */
+struct subgroup {
+ struct tail *first_tail;
+ unsigned int *matching_positions; /* zero-terminated array of positions with the same first_tail */
+ struct subgroup *next;
+};
+
+typedef enum {
+ NOT_DONE=0,
+ FIRST_TAIL=1, /* 1st preference */
+ CHOSEN_TAIL_START=4, /* 2nd preference - unique tail found for this position */
+ CHOSEN_TAIL_WIP=5, /* 2nd preference */
+ CHOSEN_TAIL_DONE=6, /* 2nd preference */
+ CHOSEN_TAIL_PLUS_FIRST_TAIL_START=8, /* 3rd preference - unique tail found in a subgroup with a common first_tail */
+ CHOSEN_TAIL_PLUS_FIRST_TAIL_WIP=9, /* 3rd preference */
+ CHOSEN_TAIL_PLUS_FIRST_TAIL_DONE=10, /* 3rd preference */
+ FIRST_TAIL_PLUS_POSITION=12, /* Fallback - use first_tail subgroup and append a position */
+ NO_CHILD_NODES=16, /* /head/123 with no child nodes */
+} chosen_tail_state_t;
+
+struct group {
+ char *head;
+ struct tail *all_tails; /* Linked list */
+ struct tail_stub **tails_at_position; /* array of linked-lists, index is position */
+ struct tail **chosen_tail; /* array of (struct tail) pointers, index is position */
+ struct tail_stub **first_tail; /* array of (struct tail_stub) pointers, index is position */
+ unsigned int max_position; /* highest position seen for this group */
+ unsigned int position_array_size; /* array size for arrays indexed by position, >= max_position+1, used for malloc() */
+ chosen_tail_state_t *chosen_tail_state; /* array, index is position */
+ struct subgroup *subgroups; /* Linked list, subgroups based on common first-tail - used only for 3rd preference and fallback */
+ unsigned int *subgroup_position; /* array, position within subgroup for this position - used only for fallback */
+ /* For --pretty */
+ unsigned int *pretty_width_ct; /* array, index is position, value width to use for --pretty */
+ /* For --regexp */
+ unsigned int *re_width_ct; /* array, index is position, matching width to use for --regexp */
+ unsigned int *re_width_ft; /* array, index is position, matching width to use for --regexp */
+};
+
+struct path_segment {
+ char *head;
+ char *segment;
+ char *simplified_tail;
+ unsigned int position;
+ struct group *group;
+ struct path_segment *next;
+};
+
+/* Results of aug_match() and aug_get() - one record per path returned by aug_match() */
+struct augeas_path_value {
+ char *path;
+ char *value;
+ char *value_qq; /* value in quotes - used in path-expressions, and as the value being assigned */
+ /* result of split_path() */
+ struct path_segment *segments;
+};
--- /dev/null
+/*
+ * augrun.c: command interpreter for augeas
+ *
+ * Copyright (C) 2011-2016 David Lutterkort
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: David Lutterkort <lutter@redhat.com>
+ */
+
+#include <config.h>
+#include "augeas.h"
+#include "internal.h"
+#include "memory.h"
+#include "errcode.h"
+
+#include <ctype.h>
+#include <libxml/tree.h>
+#include <argz.h>
+
+/*
+ * Command handling infrastructure
+ */
+enum command_opt_type {
+ CMD_NONE,
+ CMD_STR, /* String argument */
+ CMD_PATH /* Path expression */
+};
+
+struct command_opt_def {
+ bool optional; /* Optional or mandatory */
+ enum command_opt_type type;
+ const char *name;
+ const char *help;
+};
+
+#define CMD_OPT_DEF_LAST { .type = CMD_NONE, .name = NULL }
+
+struct command {
+ const struct command_def *def;
+ struct command_opt *opt;
+ struct augeas *aug;
+ struct error *error; /* Same as aug->error */
+ FILE *out;
+ bool quit;
+};
+
+typedef void (*cmd_handler)(struct command*);
+
+struct command_def {
+ const char *name;
+ const char *category;
+ const struct command_opt_def *opts;
+ cmd_handler handler;
+ const char *synopsis;
+ const char *help;
+};
+
+static const struct command_def cmd_def_last =
+ { .name = NULL, .opts = NULL, .handler = NULL,
+ .synopsis = NULL, .help = NULL };
+
+struct command_opt {
+ struct command_opt *next;
+ const struct command_opt_def *def;
+ char *value;
+};
+
+struct command_grp_def {
+ const char *name;
+ const struct command_def *commands[];
+};
+
+static const struct command_grp_def cmd_grp_def_last =
+ { .name = NULL, .commands = { } };
+
+static const struct command_def *lookup_cmd_def(const char *name);
+
+static const struct command_opt_def *
+find_def(const struct command *cmd, const char *name) {
+ const struct command_opt_def *def;
+ for (def = cmd->def->opts; def->name != NULL; def++) {
+ if (STREQ(def->name, name))
+ return def;
+ }
+ return NULL;
+}
+
+static struct command_opt *
+find_opt(const struct command *cmd, const char *name) {
+ const struct command_opt_def *def = find_def(cmd, name);
+ assert(def != NULL);
+
+ for (struct command_opt *opt = cmd->opt; opt != NULL; opt = opt->next) {
+ if (opt->def == def)
+ return opt;
+ }
+ assert(def->optional);
+ return NULL;
+}
+
+static const char *arg_value(const struct command *cmd, const char *name) {
+ struct command_opt *opt = find_opt(cmd, name);
+
+ return (opt == NULL) ? NULL : opt->value;
+}
+
+static char *nexttoken(struct command *cmd, char **line, bool path) {
+ char *r, *s, *w;
+ char quot = '\0';
+ int nbracket = 0;
+ int nescaped = 0;
+ bool copy;
+
+ s = *line;
+
+ while (*s && isblank(*s)) s+= 1;
+ r = s;
+ w = s;
+ while (*s) {
+ copy = true;
+ if (*s == '\\') {
+ switch (*(s+1)) {
+ case ']':
+ case '[':
+ case '|':
+ case '/':
+ case '=':
+ case '(':
+ case ')':
+ case '!':
+ case ',': /* pass them literally;
+ * see 'name_follow' in pathx.c */
+ nescaped = 2;
+ break;
+ case 't': /* insert tab */
+ *(s+1) = '\t';
+ nescaped = 1;
+ s += 1;
+ break;
+ case 'n': /* insert newline */
+ *(s+1) = '\n';
+ nescaped = 1;
+ s += 1;
+ break;
+ case ' ':
+ case '\t': /* pass both through if quoted, else fall */
+ if (quot) break;
+ ATTRIBUTE_FALLTHROUGH;
+ case '\'':
+ case '"': /* pass both through if opposite quote, else fall */
+ if (quot && quot != *(s+1)) break;
+ ATTRIBUTE_FALLTHROUGH;
+ case '\\': /* pass next character through */
+ nescaped = 1;
+ s += 1;
+ break;
+ default:
+ ERR_REPORT(cmd, AUG_ECMDRUN, "unknown escape sequence: %c", *(s+1));
+ return NULL;
+ }
+ }
+
+ if (nescaped == 0) {
+ if (*s == '[') nbracket += 1;
+ if (*s == ']') nbracket -= 1;
+ if (nbracket < 0 && path) {
+ ERR_REPORT(cmd, AUG_ECMDRUN, "unmatched [");
+ return NULL;
+ }
+
+ if (!path || nbracket == 0) {
+ if (!quot && (*s == '\'' || *s == '"')) {
+ quot = *s;
+ copy = false;
+ } else if (quot && *s == quot) {
+ quot = '\0';
+ copy = false;
+ }
+
+ if (!quot && isblank(*s))
+ break;
+ }
+ } else {
+ nescaped -= 1;
+ }
+
+ if (copy) {
+ *w = *s;
+ w += 1;
+ }
+ s += 1;
+ }
+ if (*s == '\0' && path && nbracket > 0) {
+ ERR_REPORT(cmd, AUG_ECMDRUN, "unmatched [");
+ return NULL;
+ }
+ if (*s == '\0' && quot) {
+ ERR_REPORT(cmd, AUG_ECMDRUN, "unmatched %c", quot);
+ return NULL;
+ }
+ while (*w && w <= s)
+ *w++ = '\0';
+ *line = w;
+ return r;
+}
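+
+/*
+ * For illustration (hypothetical input): given the remainder of a line
+ *     /files/etc/hosts/*[ipaddr = "127.0.0.1"]/canonical localhost
+ * a call with path == true returns the whole bracketed path expression
+ * as one token, because blanks inside [...] do not end a path token; a
+ * following call with path == false returns "localhost".  Quote
+ * characters that delimit a token are dropped from the stored token,
+ * and the escapes \t and \n are rewritten in place as tab and newline.
+ */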
+
+static struct command_opt *
+make_command_opt(struct command *cmd, const struct command_opt_def *def) {
+ struct command_opt *copt = NULL;
+ int r;
+
+ r = ALLOC(copt);
+ ERR_NOMEM(r < 0, cmd->aug);
+ copt->def = def;
+ list_append(cmd->opt, copt);
+ error:
+ return copt;
+}
+
+static void free_command_opts(struct command *cmd) {
+ struct command_opt *next;
+
+ next = cmd->opt;
+ while (next != NULL) {
+ struct command_opt *del = next;
+ next = del->next;
+ free(del);
+ }
+ cmd->opt = NULL;
+}
+
+static int parseline(struct command *cmd, char *line) {
+ char *tok;
+ int narg = 0, nopt = 0;
+ const struct command_opt_def *def;
+
+ tok = nexttoken(cmd, &line, false);
+ if (tok == NULL)
+ return -1;
+ cmd->def = lookup_cmd_def(tok);
+ if (cmd->def == NULL) {
+ ERR_REPORT(cmd, AUG_ECMDRUN, "Unknown command '%s'", tok);
+ return -1;
+ }
+
+ for (def = cmd->def->opts; def->name != NULL; def++) {
+ narg += 1;
+ if (def->optional)
+ nopt += 1;
+ }
+
+ int curarg = 0;
+ def = cmd->def->opts;
+ while (*line != '\0') {
+ while (*line && isblank(*line)) line += 1;
+
+ if (curarg >= narg) {
+ ERR_REPORT(cmd, AUG_ECMDRUN,
+ "Too many arguments. Command %s takes only %d arguments",
+ cmd->def->name, narg);
+ return -1;
+ }
+
+ struct command_opt *opt = make_command_opt(cmd, def);
+ if (opt == NULL)
+ return -1;
+
+ if (def->type == CMD_PATH) {
+ tok = nexttoken(cmd, &line, true);
+ cleanpath(tok);
+ } else {
+ tok = nexttoken(cmd, &line, false);
+ }
+ if (tok == NULL)
+ return -1;
+ opt->value = tok;
+ curarg += 1;
+ def += 1;
+ while (*line && isblank(*line)) line += 1;
+ }
+
+ if (curarg < narg - nopt) {
+ ERR_REPORT(cmd, AUG_ECMDRUN, "Not enough arguments for %s", cmd->def->name);
+ return -1;
+ }
+
+ return 0;
+}
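+
+/*
+ * For illustration (hypothetical input): parsing the line
+ *     mv /files/etc/hosts/1 /files/etc/hosts/01
+ * looks up cmd_mv_def, then consumes one token per option definition:
+ * "src" and "dst" are both CMD_PATH, so each token is read with
+ * path == true and normalized with cleanpath().  A surplus token or a
+ * missing mandatory argument is reported via ERR_REPORT and -1 is
+ * returned.
+ */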
+
+/*
+ * Commands
+ */
+static void format_desc(const char *d) {
+ printf(" ");
+ for (const char *s = d; *s; s++) {
+ if (*s == '\n')
+ printf("\n ");
+ else
+ putchar(*s);
+ }
+ printf("\n\n");
+}
+
+static void format_defname(char *buf, const struct command_opt_def *def,
+ bool mark_optional) {
+ char *p;
+ if (mark_optional && def->optional)
+ p = stpcpy(buf, " [<");
+ else
+ p = stpcpy(buf, " <");
+ for (int i=0; i < strlen(def->name); i++)
+ *p++ = toupper(def->name[i]);
+ *p++ = '>';
+ if (mark_optional && def->optional)
+ *p++ = ']';
+ *p = '\0';
+}
+
+static void cmd_help(struct command *cmd);
+
+static const struct command_opt_def cmd_help_opts[] = {
+ { .type = CMD_STR, .name = "command", .optional = true,
+ .help = "print help for this command only" },
+ CMD_OPT_DEF_LAST
+};
+
+static const struct command_def cmd_help_def = {
+ .name = "help",
+ .opts = cmd_help_opts,
+ .handler = cmd_help,
+ .synopsis = "print help",
+ .help = "list all commands or print details about one command"
+};
+
+static void cmd_quit(ATTRIBUTE_UNUSED struct command *cmd) {
+ cmd->quit = true;
+}
+
+static const struct command_opt_def cmd_quit_opts[] = {
+ CMD_OPT_DEF_LAST
+};
+
+static const struct command_def cmd_quit_def = {
+ .name = "quit",
+ .opts = cmd_quit_opts,
+ .handler = cmd_quit,
+ .synopsis = "exit the program",
+ .help = "Exit the program"
+};
+
+static char *ls_pattern(struct command *cmd, const char *path) {
+ char *q = NULL;
+ int r;
+
+ if (path[strlen(path)-1] == SEP)
+ r = xasprintf(&q, "%s*", path);
+ else
+ r = xasprintf(&q, "%s/*", path);
+ ERR_NOMEM(r < 0, cmd->aug);
+ error:
+ return q;
+}
+
+static int child_count(struct command *cmd, const char *path) {
+ char *q = ls_pattern(cmd, path);
+ int cnt;
+
+ if (q == NULL)
+ return 0;
+ cnt = aug_match(cmd->aug, q, NULL);
+ if (HAS_ERR(cmd))
+ cnt = -1;
+ free(q);
+ return cnt;
+}
+
+static void cmd_ls(struct command *cmd) {
+ int cnt = 0;
+ char *path = NULL;
+ char **paths = NULL;
+
+ path = ls_pattern(cmd, arg_value(cmd, "path"));
+ ERR_BAIL(cmd);
+
+ cnt = aug_match(cmd->aug, path, &paths);
+ ERR_BAIL(cmd);
+ for (int i=0; i < cnt; i++) {
+ const char *val;
+ const char *basnam = strrchr(paths[i], SEP);
+ int dir = child_count(cmd, paths[i]);
+ aug_get(cmd->aug, paths[i], &val);
+ ERR_BAIL(cmd);
+ basnam = (basnam == NULL) ? paths[i] : basnam + 1;
+ if (val == NULL)
+ val = "(none)";
+ fprintf(cmd->out, "%s%s= %s\n", basnam, dir ? "/ " : " ", val);
+ FREE(paths[i]);
+ }
+ error:
+ free(path);
+ for (int i=0; i < cnt; i++)
+ FREE(paths[i]);
+ free(paths);
+}
+
+static const struct command_opt_def cmd_ls_opts[] = {
+ { .type = CMD_PATH, .name = "path", .optional = false,
+ .help = "the node whose children to list" },
+ CMD_OPT_DEF_LAST
+};
+
+static const struct command_def cmd_ls_def = {
+ .name = "ls",
+ .opts = cmd_ls_opts,
+ .handler = cmd_ls,
+ .synopsis = "list children of a node",
+ .help = "list the direct children of a node"
+};
+
+static void cmd_match(struct command *cmd) {
+ int cnt = 0;
+ const char *pattern = arg_value(cmd, "path");
+ const char *value = arg_value(cmd, "value");
+ char **matches = NULL;
+ bool filter = (value != NULL) && (strlen(value) > 0);
+
+ cnt = aug_match(cmd->aug, pattern, &matches);
+ ERR_BAIL(cmd);
+ ERR_THROW(cnt < 0, cmd->aug, AUG_ECMDRUN,
+ " (error matching %s)\n", pattern);
+ if (cnt == 0) {
+ fprintf(cmd->out, " (no matches)\n");
+ goto done;
+ }
+
+ for (int i=0; i < cnt; i++) {
+ const char *val;
+ aug_get(cmd->aug, matches[i], &val);
+ ERR_BAIL(cmd);
+ if (val == NULL)
+ val = "(none)";
+ if (filter) {
+ if (STREQ(value, val))
+ fprintf(cmd->out, "%s\n", matches[i]);
+ } else {
+ fprintf(cmd->out, "%s = %s\n", matches[i], val);
+ }
+ }
+ error:
+ done:
+ for (int i=0; i < cnt; i++)
+ free(matches[i]);
+ free(matches);
+}
+
+static const struct command_opt_def cmd_match_opts[] = {
+ { .type = CMD_PATH, .name = "path", .optional = false,
+ .help = "the path expression to match" },
+ { .type = CMD_STR, .name = "value", .optional = true,
+ .help = "only show matches with this value" },
+ CMD_OPT_DEF_LAST
+};
+
+static const struct command_def cmd_match_def = {
+ .name = "match",
+ .opts = cmd_match_opts,
+ .handler = cmd_match,
+ .synopsis = "print matches for a path expression",
+ .help = "Find all paths that match the path expression PATH. "
+ "If VALUE is given,\n only the matching paths whose value equals "
+ "VALUE are printed"
+};
+
+static void cmd_count(struct command *cmd) {
+ int cnt = 0;
+ const char *pattern = arg_value(cmd, "path");
+
+ cnt = aug_match(cmd->aug, pattern, NULL);
+ ERR_BAIL(cmd);
+ ERR_THROW(cnt < 0, cmd->aug, AUG_ECMDRUN,
+ " (error matching %s)\n", pattern);
+ if (cnt == 0) {
+ fprintf(cmd->out, " no matches\n");
+ } else if (cnt == 1) {
+ fprintf(cmd->out, " 1 match\n");
+ } else {
+ fprintf(cmd->out, " %d matches\n", cnt);
+ }
+
+ error:
+ return;
+}
+
+static const struct command_opt_def cmd_count_opts[] = {
+ { .type = CMD_PATH, .name = "path", .optional = false,
+ .help = "the path expression to match" },
+ CMD_OPT_DEF_LAST
+};
+
+static const struct command_def cmd_count_def = {
+ .name = "count",
+ .opts = cmd_count_opts,
+ .handler = cmd_count,
+ .synopsis = "print the number of matches for a path expression",
+ .help = "Print how many paths match the path expression PATH"
+};
+
+static void cmd_rm(struct command *cmd) {
+ int cnt;
+ const char *path = arg_value(cmd, "path");
+ cnt = aug_rm(cmd->aug, path);
+ if (! HAS_ERR(cmd))
+ fprintf(cmd->out, "rm : %s %d\n", path, cnt);
+}
+
+static const struct command_opt_def cmd_rm_opts[] = {
+ { .type = CMD_PATH, .name = "path", .optional = false,
+ .help = "remove all nodes matching this path expression" },
+ CMD_OPT_DEF_LAST
+};
+
+static const struct command_def cmd_rm_def = {
+ .name = "rm",
+ .opts = cmd_rm_opts,
+ .handler = cmd_rm,
+ .synopsis = "delete nodes and subtrees",
+ .help = "Delete PATH and all its children from the tree"
+};
+
+static void cmd_mv(struct command *cmd) {
+ const char *src = arg_value(cmd, "src");
+ const char *dst = arg_value(cmd, "dst");
+ int r;
+
+ r = aug_mv(cmd->aug, src, dst);
+ if (r < 0)
+ ERR_REPORT(cmd, AUG_ECMDRUN,
+ "Moving %s to %s failed", src, dst);
+}
+
+static const struct command_opt_def cmd_mv_opts[] = {
+ { .type = CMD_PATH, .name = "src", .optional = false,
+ .help = "the tree to move" },
+ { .type = CMD_PATH, .name = "dst", .optional = false,
+ .help = "where to put the source tree" },
+ CMD_OPT_DEF_LAST
+};
+
+static const char cmd_mv_help[] =
+ "Move node SRC to DST. SRC must match exactly one node in "
+ "the tree.\n DST must either match exactly one node in the tree, "
+ "or may not\n exist yet. If DST exists already, it and all its "
+ "descendants are\n deleted. If DST does not exist yet, it and "
+ "all its missing\n ancestors are created.";
+
+static const struct command_def cmd_mv_def = {
+ .name = "mv",
+ .opts = cmd_mv_opts,
+ .handler = cmd_mv,
+ .synopsis = "move a subtree",
+ .help = cmd_mv_help
+};
+
+static const struct command_def cmd_move_def = {
+ .name = "move",
+ .opts = cmd_mv_opts,
+ .handler = cmd_mv,
+ .synopsis = "move a subtree (alias of 'mv')",
+ .help = cmd_mv_help
+};
+
+static void cmd_cp(struct command *cmd) {
+ const char *src = arg_value(cmd, "src");
+ const char *dst = arg_value(cmd, "dst");
+ int r;
+
+ r = aug_cp(cmd->aug, src, dst);
+ if (r < 0)
+ ERR_REPORT(cmd, AUG_ECMDRUN,
+ "Copying %s to %s failed", src, dst);
+}
+
+static const struct command_opt_def cmd_cp_opts[] = {
+ { .type = CMD_PATH, .name = "src", .optional = false,
+ .help = "the tree to copy" },
+ { .type = CMD_PATH, .name = "dst", .optional = false,
+ .help = "where to copy the source tree" },
+ CMD_OPT_DEF_LAST
+};
+
+static const char cmd_cp_help[] =
+ "Copy node SRC to DST. SRC must match exactly one node in "
+ "the tree.\n DST must either match exactly one node in the tree, "
+ "or may not\n exist yet. If DST exists already, it and all its "
+ "descendants are\n deleted. If DST does not exist yet, it and "
+ "all its missing\n ancestors are created.";
+
+static const struct command_def cmd_cp_def = {
+ .name = "cp",
+ .opts = cmd_cp_opts,
+ .handler = cmd_cp,
+ .synopsis = "copy a subtree",
+ .help = cmd_cp_help
+};
+
+static const struct command_def cmd_copy_def = {
+ .name = "copy",
+ .opts = cmd_cp_opts,
+ .handler = cmd_cp,
+ .synopsis = "copy a subtree (alias of 'cp')",
+ .help = cmd_cp_help
+};
+
+static void cmd_rename(struct command *cmd) {
+ const char *src = arg_value(cmd, "src");
+ const char *lbl = arg_value(cmd, "lbl");
+ int cnt;
+
+ cnt = aug_rename(cmd->aug, src, lbl);
+ if (cnt < 0)
+ ERR_REPORT(cmd, AUG_ECMDRUN,
+ "Renaming %s to %s failed", src, lbl);
+ if (! HAS_ERR(cmd))
+ fprintf(cmd->out, "rename : %s to %s %d\n", src, lbl, cnt);
+}
+
+static const struct command_opt_def cmd_rename_opts[] = {
+ { .type = CMD_PATH, .name = "src", .optional = false,
+ .help = "the tree to rename" },
+ { .type = CMD_STR, .name = "lbl", .optional = false,
+ .help = "the new label" },
+ CMD_OPT_DEF_LAST
+};
+
+static const char cmd_rename_help[] =
+ "Rename the label of all nodes matching SRC to LBL.";
+
+static const struct command_def cmd_rename_def = {
+ .name = "rename",
+ .opts = cmd_rename_opts,
+ .handler = cmd_rename,
+ .synopsis = "rename a subtree label",
+ .help = cmd_rename_help
+};
+
+static void cmd_set(struct command *cmd) {
+ const char *path = arg_value(cmd, "path");
+ const char *val = arg_value(cmd, "value");
+ int r;
+
+ r = aug_set(cmd->aug, path, val);
+ if (r < 0)
+ ERR_REPORT(cmd, AUG_ECMDRUN, "Setting %s failed", path);
+}
+
+static const struct command_opt_def cmd_set_opts[] = {
+ { .type = CMD_PATH, .name = "path", .optional = false,
+ .help = "set the value of this node" },
+ { .type = CMD_STR, .name = "value", .optional = true,
+ .help = "the new value for the node" },
+ CMD_OPT_DEF_LAST
+};
+
+static const struct command_def cmd_set_def = {
+ .name = "set",
+ .opts = cmd_set_opts,
+ .handler = cmd_set,
+ .synopsis = "set the value of a node",
+ .help = "Associate VALUE with PATH. If PATH is not in the tree yet, "
+ "it and all\n its ancestors will be created. These new tree entries "
+ "will appear last\n amongst their siblings"
+};
+
+static void cmd_setm(struct command *cmd) {
+ const char *base = arg_value(cmd, "base");
+ const char *sub = arg_value(cmd, "sub");
+ const char *val = arg_value(cmd, "value");
+
+ aug_setm(cmd->aug, base, sub, val);
+}
+
+static const struct command_opt_def cmd_setm_opts[] = {
+ { .type = CMD_PATH, .name = "base", .optional = false,
+ .help = "the base node" },
+ { .type = CMD_PATH, .name = "sub", .optional = false,
+ .help = "the subtree relative to the base" },
+ { .type = CMD_STR, .name = "value", .optional = true,
+ .help = "the value for the nodes" },
+ CMD_OPT_DEF_LAST
+};
+
+static const struct command_def cmd_setm_def = {
+ .name = "setm",
+ .opts = cmd_setm_opts,
+ .handler = cmd_setm,
+ .synopsis = "set the value of multiple nodes",
+ .help = "Set multiple nodes in one operation. Find or create a node"
+ " matching SUB\n by interpreting SUB as a path expression relative"
+ " to each node matching\n BASE. If SUB is '.', the nodes matching "
+ "BASE will be modified."
+};
+
+static void cmd_clearm(struct command *cmd) {
+ const char *base = arg_value(cmd, "base");
+ const char *sub = arg_value(cmd, "sub");
+
+ aug_setm(cmd->aug, base, sub, NULL);
+}
+
+static const struct command_opt_def cmd_clearm_opts[] = {
+ { .type = CMD_PATH, .name = "base", .optional = false,
+ .help = "the base node" },
+ { .type = CMD_PATH, .name = "sub", .optional = false,
+ .help = "the subtree relative to the base" },
+ CMD_OPT_DEF_LAST
+};
+
+static const struct command_def cmd_clearm_def = {
+ .name = "clearm",
+ .opts = cmd_clearm_opts,
+ .handler = cmd_clearm,
+ .synopsis = "clear the value of multiple nodes",
+ .help = "Clear the values of multiple nodes in one operation. Find or create a"
+ " node matching SUB\n by interpreting SUB as a path expression relative"
+ " to each node matching\n BASE. If SUB is '.', the nodes matching "
+ "BASE will be modified."
+};
+
+static void cmd_span(struct command *cmd) {
+ const char *path = arg_value(cmd, "path");
+ int r;
+ uint label_start, label_end, value_start, value_end, span_start, span_end;
+ char *filename = NULL;
+ const char *option = NULL;
+
+ if (aug_get(cmd->aug, AUGEAS_SPAN_OPTION, &option) != 1) {
+ printf("Error: option " AUGEAS_SPAN_OPTION " not found\n");
+ return;
+ }
+ if (streqv(AUG_DISABLE, option)) {
+ ERR_REPORT(cmd, AUG_ECMDRUN,
+ "Span is not enabled. To enable, run the commands:\n"
+ " set %s %s\n rm %s\n load\n",
+ AUGEAS_SPAN_OPTION, AUG_ENABLE, AUGEAS_FILES_TREE);
+ return;
+ } else if (! streqv(AUG_ENABLE, option)) {
+ ERR_REPORT(cmd, AUG_ECMDRUN,
+ "option %s must be %s or %s\n", AUGEAS_SPAN_OPTION,
+ AUG_ENABLE, AUG_DISABLE);
+ return;
+ }
+ r = aug_span(cmd->aug, path, &filename, &label_start, &label_end,
+ &value_start, &value_end, &span_start, &span_end);
+ ERR_THROW(r == -1, cmd, AUG_ECMDRUN, "failed to retrieve span");
+
+ fprintf(cmd->out, "%s label=(%i:%i) value=(%i:%i) span=(%i,%i)\n",
+ filename, label_start, label_end,
+ value_start, value_end, span_start, span_end);
+ error:
+ free(filename);
+}
+
+static const struct command_opt_def cmd_span_opts[] = {
+ { .type = CMD_PATH, .name = "path", .optional = false,
+ .help = "path matching exactly one node" },
+ CMD_OPT_DEF_LAST
+};
+
+static const struct command_def cmd_span_def = {
+ .name = "span",
+ .opts = cmd_span_opts,
+ .handler = cmd_span,
+ .synopsis = "print position in input file corresponding to tree",
+ .help = "Print the name of the file from which the node PATH was generated, as\n well as information about the positions in the file corresponding to\n the label, the value, and the entire node. PATH must match exactly\n one node.\n\n You need to run 'set /augeas/span enable' prior to loading files to\n enable recording of span information. It is disabled by default."
+};
+
+static void cmd_defvar(struct command *cmd) {
+ const char *name = arg_value(cmd, "name");
+ const char *path = arg_value(cmd, "expr");
+
+ aug_defvar(cmd->aug, name, path);
+}
+
+static const struct command_opt_def cmd_defvar_opts[] = {
+ { .type = CMD_STR, .name = "name", .optional = false,
+ .help = "the name of the variable" },
+ { .type = CMD_PATH, .name = "expr", .optional = false,
+ .help = "the path expression" },
+ CMD_OPT_DEF_LAST
+};
+
+static const struct command_def cmd_defvar_def = {
+ .name = "defvar",
+ .opts = cmd_defvar_opts,
+ .handler = cmd_defvar,
+ .synopsis = "set a variable",
+ .help = "Evaluate EXPR and set the variable NAME to the resulting "
+ "nodeset. The\n variable can be used in path expressions as $NAME. "
+ "Note that EXPR is\n evaluated when the variable is defined, not when "
+ "it is used."
+};
+
+static void cmd_defnode(struct command *cmd) {
+ const char *name = arg_value(cmd, "name");
+ const char *path = arg_value(cmd, "expr");
+ const char *value = arg_value(cmd, "value");
+
+ /* Make 'defnode foo ""' mean the same as 'defnode foo' */
+ if (value != NULL && strlen(value) == 0)
+ value = NULL;
+ aug_defnode(cmd->aug, name, path, value, NULL);
+}
+
+static const struct command_opt_def cmd_defnode_opts[] = {
+ { .type = CMD_STR, .name = "name", .optional = false,
+ .help = "the name of the variable" },
+ { .type = CMD_PATH, .name = "expr", .optional = false,
+ .help = "the path expression" },
+ { .type = CMD_STR, .name = "value", .optional = true,
+ .help = "the value for the new node" },
+ CMD_OPT_DEF_LAST
+};
+
+static const struct command_def cmd_defnode_def = {
+ .name = "defnode",
+ .opts = cmd_defnode_opts,
+ .handler = cmd_defnode,
+ .synopsis = "set a variable, possibly creating a new node",
+ .help = "Define the variable NAME to the result of evaluating EXPR, "
+ "which must\n be a nodeset. If no node matching EXPR exists yet, one "
+ "is created and\n NAME will refer to it. When a node is created and "
+ "VALUE is given, the\n new node's value is set to VALUE."
+};
+
+static void cmd_clear(struct command *cmd) {
+ const char *path = arg_value(cmd, "path");
+ int r;
+
+ r = aug_set(cmd->aug, path, NULL);
+ if (r < 0)
+ ERR_REPORT(cmd, AUG_ECMDRUN, "Clearing %s failed", path);
+}
+
+static const struct command_opt_def cmd_clear_opts[] = {
+ { .type = CMD_PATH, .name = "path", .optional = false,
+ .help = "clear the value of this node" },
+ CMD_OPT_DEF_LAST
+};
+
+static const struct command_def cmd_clear_def = {
+ .name = "clear",
+ .opts = cmd_clear_opts,
+ .handler = cmd_clear,
+ .synopsis = "clear the value of a node",
+ .help = "Set the value for PATH to NULL. If PATH is not in the tree yet, "
+ "it and\n all its ancestors will be created. These new tree entries "
+ "will appear\n last amongst their siblings"
+};
+
+static void cmd_touch(struct command *cmd) {
+ const char *path = arg_value(cmd, "path");
+ int r;
+
+ r = aug_match(cmd->aug, path, NULL);
+ if (r == 0) {
+ r = aug_set(cmd->aug, path, NULL);
+ if (r < 0)
+ ERR_REPORT(cmd, AUG_ECMDRUN, "Touching %s failed", path);
+ }
+}
+
+static const struct command_opt_def cmd_touch_opts[] = {
+ { .type = CMD_PATH, .name = "path", .optional = false,
+ .help = "touch this node" },
+ CMD_OPT_DEF_LAST
+};
+
+static const struct command_def cmd_touch_def = {
+ .name = "touch",
+ .opts = cmd_touch_opts,
+ .handler = cmd_touch,
+ .synopsis = "create a new node",
+ .help = "Create PATH with the value NULL if it is not in the tree yet. "
+ "All its\n ancestors will also be created. These new tree entries will "
+ "appear\n last amongst their siblings."
+};
+
+static void cmd_get(struct command *cmd) {
+ const char *path = arg_value(cmd, "path");
+ const char *val;
+ int r;
+
+ r = aug_get(cmd->aug, path, &val);
+ ERR_RET(cmd);
+ fprintf(cmd->out, "%s", path);
+ if (r == 0) {
+ fprintf(cmd->out, " (o)\n");
+ } else if (val == NULL) {
+ fprintf(cmd->out, " (none)\n");
+ } else {
+ fprintf(cmd->out, " = %s\n", val);
+ }
+}
+
+static const struct command_opt_def cmd_get_opts[] = {
+ { .type = CMD_PATH, .name = "path", .optional = false,
+ .help = "get the value of this node" },
+ CMD_OPT_DEF_LAST
+};
+
+static const struct command_def cmd_get_def = {
+ .name = "get",
+ .opts = cmd_get_opts,
+ .handler = cmd_get,
+ .synopsis = "get the value of a node",
+ .help = "Get and print the value associated with PATH"
+};
+
+static void cmd_label(struct command *cmd) {
+ const char *path = arg_value(cmd, "path");
+ const char *lbl;
+ int r;
+
+ r = aug_label(cmd->aug, path, &lbl);
+ ERR_RET(cmd);
+ fprintf(cmd->out, "%s", path);
+ if (r == 0) {
+ fprintf(cmd->out, " (o)\n");
+ } else if (lbl == NULL) {
+ fprintf(cmd->out, " (none)\n");
+ } else {
+ fprintf(cmd->out, " = %s\n", lbl);
+ }
+}
+
+static const struct command_opt_def cmd_label_opts[] = {
+ { .type = CMD_PATH, .name = "path", .optional = false,
+ .help = "get the label of this node" },
+ CMD_OPT_DEF_LAST
+};
+
+static const struct command_def cmd_label_def = {
+ .name = "label",
+ .opts = cmd_label_opts,
+ .handler = cmd_label,
+ .synopsis = "get the label of a node",
+ .help = "Get and print the label associated with PATH"
+};
+
+static void cmd_print(struct command *cmd) {
+ const char *path = arg_value(cmd, "path");
+
+ aug_print(cmd->aug, cmd->out, path);
+}
+
+static const struct command_opt_def cmd_print_opts[] = {
+ { .type = CMD_PATH, .name = "path", .optional = true,
+ .help = "print this subtree" },
+ CMD_OPT_DEF_LAST
+};
+
+static const struct command_def cmd_print_def = {
+ .name = "print",
+ .opts = cmd_print_opts,
+ .handler = cmd_print,
+ .synopsis = "print a subtree",
+ .help = "Print entries in the tree. If PATH is given, printing starts there,\n otherwise the whole tree is printed"
+};
+
+static void cmd_source(struct command *cmd) {
+ const char *path = arg_value(cmd, "path");
+ char *file_path = NULL;
+
+ aug_source(cmd->aug, path, &file_path);
+ ERR_RET(cmd);
+ if (file_path != NULL) {
+ fprintf(cmd->out, "%s\n", file_path);
+ }
+ free(file_path);
+}
+
+static const struct command_opt_def cmd_source_opts[] = {
+ { .type = CMD_PATH, .name = "path", .optional = false,
+ .help = "path to a single node" },
+ CMD_OPT_DEF_LAST
+};
+
+static const struct command_def cmd_source_def = {
+ .name = "source",
+ .opts = cmd_source_opts,
+ .handler = cmd_source,
+ .synopsis = "print the file to which a node belongs",
+ .help = "Print the file to which the node for PATH belongs. PATH must match\n a single node coming from some file. In particular, that means\n it must be underneath /files."
+};
+
+static void cmd_preview(struct command *cmd) {
+ const char *path = arg_value(cmd, "path");
+ char *out = NULL;
+ int r;
+
+ r = aug_preview(cmd->aug, path, &out);
+ if (r < 0 || out == NULL)
+ ERR_REPORT(cmd, AUG_ECMDRUN, "Preview of file for path %s failed", path);
+ else {
+ fprintf(cmd->out, "%s", out);
+ }
+ free(out);
+}
+
+static const struct command_opt_def cmd_preview_opts[] = {
+ { .type = CMD_PATH, .name = "path", .optional = false,
+ .help = "preview the file output which corresponds to path" },
+ CMD_OPT_DEF_LAST
+};
+
+static const struct command_def cmd_preview_def = {
+ .name = "preview",
+ .opts = cmd_preview_opts,
+ .handler = cmd_preview,
+ .synopsis = "preview the file contents for the path specified",
+ .help = "Print the file contents that would be written for the file corresponding to PATH."
+ "\n The path must be within the " AUGEAS_FILES_TREE " tree."
+};
+
+static void cmd_context(struct command *cmd) {
+ const char *path = arg_value(cmd, "path");
+
+ if (path == NULL) {
+ aug_get(cmd->aug, "/augeas/context", &path);
+ ERR_RET(cmd);
+ if (path == NULL) {
+ fprintf(cmd->out, "/\n");
+ } else {
+ fprintf(cmd->out, "%s\n", path);
+ }
+ } else {
+ aug_set(cmd->aug, "/augeas/context", path);
+ ERR_RET(cmd);
+ }
+}
+
+static const struct command_opt_def cmd_context_opts[] = {
+ { .type = CMD_PATH, .name = "path", .optional = true,
+ .help = "new context for relative paths" },
+ CMD_OPT_DEF_LAST
+};
+
+static const struct command_def cmd_context_def = {
+ .name = "context",
+ .opts = cmd_context_opts,
+ .handler = cmd_context,
+ .synopsis = "change how relative paths are interpreted",
+ .help = "Relative paths are interpreted relative to a context path which\n is stored in /augeas/context.\n\n When no PATH is given, this command prints the current context path\n and is equivalent to 'get /augeas/context'\n\n When PATH is given, this command changes that context; it has a\n similar effect to 'cd' in a shell and is the same as running\n the command 'set /augeas/context PATH'."
+};
+
+static void cmd_dump_xml(struct command *cmd) {
+ const char *path = arg_value(cmd, "path");
+ xmlNodePtr xmldoc;
+ int r;
+
+ r = aug_to_xml(cmd->aug, path, &xmldoc, 0);
+ if (r < 0) {
+ ERR_REPORT(cmd, AUG_ECMDRUN,
+ "XML export of path %s failed", path);
+ return;
+ }
+
+ xmlElemDump(cmd->out, NULL, xmldoc);
+ fprintf(cmd->out, "\n");
+
+ xmlFreeNode(xmldoc);
+
+static const struct command_opt_def cmd_dump_xml_opts[] = {
+ { .type = CMD_PATH, .name = "path", .optional = true,
+ .help = "print this subtree" },
+ CMD_OPT_DEF_LAST
+};
+
+static const struct command_def cmd_dump_xml_def = {
+ .name = "dump-xml",
+ .opts = cmd_dump_xml_opts,
+ .handler = cmd_dump_xml,
+ .synopsis = "print a subtree as XML",
+ .help = "Export entries in the tree as XML. If PATH is given, printing starts there,\n otherwise the whole tree is printed."
+};
+
+static void cmd_transform(struct command *cmd) {
+ const char *lens = arg_value(cmd, "lens");
+ const char *filter = arg_value(cmd, "filter");
+ const char *file = arg_value(cmd, "file");
+ int r, excl = 0;
+
+ if (STREQ("excl", filter))
+ excl = 1;
+ else if (STREQ("incl", filter))
+ excl = 0;
+ else {
+ ERR_REPORT(cmd, AUG_ECMDRUN,
+ "FILTER must be \"incl\" or \"excl\"");
+ return;
+ }
+
+ r = aug_transform(cmd->aug, lens, file, excl);
+ if (r < 0)
+ ERR_REPORT(cmd, AUG_ECMDRUN,
+ "Adding transform for %s on lens %s failed", file, lens);
+}
+
+static const struct command_opt_def cmd_transform_opts[] = {
+ { .type = CMD_PATH, .name = "lens", .optional = false,
+ .help = "the lens to use" },
+ { .type = CMD_PATH, .name = "filter", .optional = false,
+ .help = "the type of filter, either \"incl\" or \"excl\"" },
+ { .type = CMD_PATH, .name = "file", .optional = false,
+ .help = "the file to associate to the lens" },
+ CMD_OPT_DEF_LAST
+};
+
+static const char cmd_transform_help[] =
+ "Add a transform for FILE using LENS. The LENS may be a module name or a\n"
+ " full lens name. If only a module name is given, its lens \"lns\" is\n"
+ " assumed. The FILTER must be either \"incl\" or \"excl\". If the filter is\n"
+ " \"incl\", the FILE will be parsed by the LENS. If the filter is \"excl\",\n"
+ " the FILE will be excluded from the LENS. FILE may contain wildcards." ;
+
+static const struct command_def cmd_transform_def = {
+ .name = "transform",
+ .opts = cmd_transform_opts,
+ .handler = cmd_transform,
+ .synopsis = "add a file transform",
+ .help = cmd_transform_help
+};
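+
+/* Example (augtool session; the file name is hypothetical):
+ *     augtool> transform Fstab incl /etc/fstab.bak
+ *     augtool> load
+ * parses /etc/fstab.bak with the lens Fstab.lns into the tree at
+ * /files/etc/fstab.bak.
+ */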
+
+static void cmd_load_file(struct command *cmd) {
+ const char *file = arg_value(cmd, "file");
+ int r = 0;
+
+ r = aug_load_file(cmd->aug, file);
+ if (r < 0)
+ ERR_REPORT(cmd, AUG_ECMDRUN,
+ "Failed to load file %s", file);
+}
+
+static const struct command_opt_def cmd_load_file_opts[] = {
+ { .type = CMD_PATH, .name = "file", .optional = false,
+ .help = "the file to load" },
+ CMD_OPT_DEF_LAST
+};
+
+static const char cmd_load_file_help[] =
+ "Load a specific FILE, using autoload statements.\n";
+
+static const struct command_def cmd_load_file_def = {
+ .name = "load-file",
+ .opts = cmd_load_file_opts,
+ .handler = cmd_load_file,
+ .synopsis = "load a specific file",
+ .help = cmd_load_file_help
+};
+
+static void cmd_save(struct command *cmd) {
+ int r;
+ r = aug_save(cmd->aug);
+ if (r == -1) {
+ ERR_REPORT(cmd, AUG_ECMDRUN,
+ "saving failed (run 'errors' for details)");
+ } else {
+ r = aug_match(cmd->aug, "/augeas/events/saved", NULL);
+ if (r > 0) {
+ fprintf(cmd->out, "Saved %d file(s)\n", r);
+ }
+ }
+}
+
+static const struct command_opt_def cmd_save_opts[] = {
+ CMD_OPT_DEF_LAST
+};
+
+static const struct command_def cmd_save_def = {
+ .name = "save",
+ .opts = cmd_save_opts,
+ .handler = cmd_save,
+ .synopsis = "save all pending changes",
+ .help = "Save all pending changes to disk. How exactly that is done depends on\n the value of the node /augeas/save, which can be changed by the user.\n The possible values for it are\n \n noop - do not write files; useful for finding errors that\n might happen during a save\n backup - save the original file in a file by appending the extension\n '.augsave' and overwrite the original with new content\n newfile - leave the original file untouched and write new content to\n a file with extension '.augnew' next to the original file\n overwrite - overwrite the original file with new content\n \n Save always tries to save all files for which entries in the tree have\n changed. When saving fails, some files may still have been written.\n Details about why a save failed can be found by running the 'errors'\n command"
+};
+
+static void cmd_load(struct command *cmd) {
+ int r;
+ r = aug_load(cmd->aug);
+ if (r == -1) {
+ ERR_REPORT(cmd, AUG_ECMDRUN,
+ "loading failed (run 'errors' for details)");
+ }
+}
+
+static const struct command_opt_def cmd_load_opts[] = {
+ CMD_OPT_DEF_LAST
+};
+
+static const struct command_def cmd_load_def = {
+ .name = "load",
+ .opts = cmd_load_opts,
+ .handler = cmd_load,
+ .synopsis = "(re)load files under /files",
+ .help = "Load files according to the transforms in /augeas/load. "
+ "A transform\n Foo is represented with a subtree /augeas/load/Foo."
+ " Underneath\n /augeas/load/Foo, one node labelled 'lens' must exist,"
+ " whose value is\n the fully qualified name of a lens, for example "
+ "'Foo.lns', and\n multiple nodes 'incl' and 'excl' whose values are "
+ "globs that determine\n which files are transformed by that lens. It "
+ "is an error if one file\n can be processed by multiple transforms."
+};
+
+static void cmd_info(struct command *cmd) {
+ const char *v;
+ int n;
+
+ aug_get(cmd->aug, "/augeas/version", &v);
+ ERR_RET(cmd);
+ if (v != NULL) {
+ fprintf(cmd->out, "version = %s\n", v);
+ }
+
+ aug_get(cmd->aug, "/augeas/root", &v);
+ ERR_RET(cmd);
+ if (v != NULL) {
+ fprintf(cmd->out, "root = %s\n", v);
+ }
+
+ fprintf(cmd->out, "loadpath = ");
+ for (char *entry = cmd->aug->modpathz;
+ entry != NULL;
+ entry = argz_next(cmd->aug->modpathz, cmd->aug->nmodpath, entry)) {
+ if (entry != cmd->aug->modpathz) {
+ fprintf(cmd->out, ":");
+ }
+ fprintf(cmd->out, "%s", entry);
+ }
+ fprintf(cmd->out, "\n");
+
+ aug_get(cmd->aug, "/augeas/context", &v);
+ ERR_RET(cmd);
+ if (v == NULL) {
+ v = "/";
+ }
+ fprintf(cmd->out, "context = %s\n", v);
+
+ n = aug_match(cmd->aug, "/augeas/files//path", NULL);
+ fprintf(cmd->out, "num_files = %d\n", n);
+}
+
+static const struct command_opt_def cmd_info_opts[] = {
+ CMD_OPT_DEF_LAST
+};
+
+static const struct command_def cmd_info_def = {
+ .name = "info",
+ .opts = cmd_info_opts,
+ .handler = cmd_info,
+ .synopsis = "print runtime information",
+ .help = "Print information about Augeas. The output contains:\n"
+ " version : the version number from /augeas/version\n"
+ " root : what Augeas considers the filesystem root\n"
+ " from /augeas/root\n"
+ " loadpath : the paths from which Augeas loads modules\n"
+ " context : the context path (see context command)\n"
+ " num_files : the number of files currently loaded (based on\n"
+ " the number of /augeas/files//path nodes)"
+};
+
+static void cmd_ins(struct command *cmd) {
+ const char *label = arg_value(cmd, "label");
+ const char *where = arg_value(cmd, "where");
+ const char *path = arg_value(cmd, "path");
+ int before;
+
+ if (STREQ(where, "after"))
+ before = 0;
+ else if (STREQ(where, "before"))
+ before = 1;
+ else {
+ ERR_REPORT(cmd, AUG_ECMDRUN,
+ "the <WHERE> argument for ins must be either 'before' or 'after'.");
+ return;
+ }
+
+ aug_insert(cmd->aug, path, label, before);
+}
+
+static const struct command_opt_def cmd_ins_opts[] = {
+ { .type = CMD_STR, .name = "label", .optional = false,
+ .help = "the label for the new node" },
+ { .type = CMD_STR, .name = "where", .optional = false,
+ .help = "either 'before' or 'after'" },
+ { .type = CMD_PATH, .name = "path", .optional = false,
+ .help = "the node before/after which to insert" },
+ CMD_OPT_DEF_LAST
+};
+
+static const char cmd_ins_help[] =
+ "Insert a new node with label LABEL right before or after "
+ "PATH into the\n tree. WHERE must be either 'before' or 'after'.";
+
+static const struct command_def cmd_ins_def = {
+ .name = "ins",
+ .opts = cmd_ins_opts,
+ .handler = cmd_ins,
+ .synopsis = "insert new node",
+ .help = cmd_ins_help
+};
+
+static const struct command_def cmd_insert_def = {
+ .name = "insert",
+ .opts = cmd_ins_opts,
+ .handler = cmd_ins,
+ .synopsis = "insert new node (alias of 'ins')",
+ .help = cmd_ins_help
+};
+
+static void cmd_store(struct command *cmd) {
+ const char *lens = arg_value(cmd, "lens");
+ const char *path = arg_value(cmd, "path");
+ const char *node = arg_value(cmd, "node");
+
+ aug_text_store(cmd->aug, lens, node, path);
+}
+
+static const struct command_opt_def cmd_store_opts[] = {
+ { .type = CMD_STR, .name = "lens", .optional = false,
+ .help = "the name of the lens" },
+ { .type = CMD_PATH, .name = "node", .optional = false,
+ .help = "where to find the input text" },
+ { .type = CMD_PATH, .name = "path", .optional = false,
+ .help = "where to store parsed text" },
+ CMD_OPT_DEF_LAST
+};
+
+static const char cmd_store_help[] =
+ "Parse NODE using LENS and store the resulting tree at PATH.";
+
+static const struct command_def cmd_store_def = {
+ .name = "store",
+ .opts = cmd_store_opts,
+ .handler = cmd_store,
+ .synopsis = "parse text into tree",
+ .help = cmd_store_help
+};
+
+static void cmd_retrieve(struct command *cmd) {
+ const char *lens = arg_value(cmd, "lens");
+ const char *node_in = arg_value(cmd, "node_in");
+ const char *path = arg_value(cmd, "path");
+ const char *node_out = arg_value(cmd, "node_out");
+
+ aug_text_retrieve(cmd->aug, lens, node_in, path, node_out);
+}
+
+static const struct command_opt_def cmd_retrieve_opts[] = {
+ { .type = CMD_STR, .name = "lens", .optional = false,
+ .help = "the name of the lens" },
+ { .type = CMD_PATH, .name = "node_in", .optional = false,
+ .help = "the node containing the initial text (path expression)" },
+ { .type = CMD_PATH, .name = "path", .optional = false,
+ .help = "the tree to transform (path expression)" },
+ { .type = CMD_PATH, .name = "node_out", .optional = false,
+ .help = "where to store the resulting text (path expression)" },
+ CMD_OPT_DEF_LAST
+};
+
+static const char cmd_retrieve_help[] =
+ "Transform tree at PATH back into text using lens LENS and store the\n"
+ " resulting string at NODE_OUT. Assume that the tree was initially read in\n"
+ " with the same lens and the string stored at NODE_IN as input.";
+
+static const struct command_def cmd_retrieve_def = {
+ .name = "retrieve",
+ .opts = cmd_retrieve_opts,
+ .handler = cmd_retrieve,
+ .synopsis = "transform tree into text",
+ .help = cmd_retrieve_help
+};
+
+/* Given a path "/augeas/files/FILENAME/error", return FILENAME; for
+ * example, "/augeas/files/etc/hosts/error" yields "/etc/hosts" */
+static char *err_filename(const char *match) {
+ int noise = strlen(AUGEAS_META_FILES) + strlen("/error");
+ if (strlen(match) < noise + 1)
+ goto error;
+ return strndup(match + strlen(AUGEAS_META_FILES), strlen(match) - noise);
+ error:
+ return strdup("(no filename)");
+}
+
+static const char *err_get(struct augeas *aug,
+ const char *match, const char *child) {
+ char *path = NULL;
+ const char *value = "";
+ int r;
+
+ r = pathjoin(&path, 2, match, child);
+ ERR_NOMEM(r < 0, aug);
+
+ aug_get(aug, path, &value);
+ ERR_BAIL(aug);
+
+ error:
+ free(path);
+ return value;
+}
+
+static void cmd_errors(struct command *cmd) {
+ char **matches = NULL;
+ int cnt = 0;
+ struct augeas *aug = cmd->aug;
+
+ cnt = aug_match(aug, "/augeas//error", &matches);
+ ERR_BAIL(cmd);
+ ERR_THROW(cnt < 0, aug, AUG_ECMDRUN,
+ " (problem retrieving error messages)\n");
+ if (cnt == 0) {
+ fprintf(cmd->out, " (no errors)\n");
+ goto done;
+ }
+
+ for (int i=0; i < cnt; i++) {
+ const char *match = matches[i];
+ const char *line = err_get(aug, match, "line");
+ const char *char_pos = err_get(aug, match, "char");
+ const char *lens = err_get(aug, match, "lens");
+ const char *last = err_get(aug, match, "lens/last_matched");
+ const char *next = err_get(aug, match, "lens/next_not_matched");
+ const char *msg = err_get(aug, match, "message");
+ const char *path = err_get(aug, match, "path");
+ const char *kind = NULL;
+
+ aug_get(aug, match, &kind);
+ ERR_BAIL(aug);
+
+ char *filename = err_filename(match);
+ ERR_NOMEM(filename == NULL, aug);
+
+ if (i>0)
+ fprintf(cmd->out, "\n");
+
+ if (line != NULL) {
+ fprintf(cmd->out, "Error in %s:%s.%s (%s)\n",
+ filename, line, char_pos, kind);
+ } else if (path != NULL) {
+ fprintf(cmd->out, "Error in %s at node %s (%s)\n", filename, path, kind);
+ } else {
+ fprintf(cmd->out, "Error in %s (%s)\n", filename, kind);
+ }
+ FREE(filename);
+
+ if (msg != NULL)
+ fprintf(cmd->out, " %s\n", msg);
+ if (lens != NULL)
+ fprintf(cmd->out, " Lens: %s\n", lens);
+ if (last != NULL)
+ fprintf(cmd->out, " Last matched: %s\n", last);
+ if (next != NULL)
+ fprintf(cmd->out, " Next (no match): %s\n", next);
+ }
+
+ done:
+ error:
+ for (int i=0; i < cnt; i++)
+ free(matches[i]);
+ free(matches);
+}
+
+static const struct command_opt_def cmd_errors_opts[] = {
+ CMD_OPT_DEF_LAST
+};
+
+static const char cmd_errors_help[] =
+ "Show all the errors encountered in processing files. For each error,\n"
+ " print detailed information about where it happened and how. The same\n"
+ " information can be retrieved by running 'print /augeas//error'\n\n"
+ " For each error, the file in which the error occurred together with the\n"
+ " line number and position in that line is shown, as well as information\n"
+ " about the lens that encountered the error. For some errors, the last\n"
+ " lens that matched successfully and the next lens that should have\n"
+ " matched but didn't are also shown.\n";
+
+static const struct command_def cmd_errors_def = {
+ .name = "errors",
+ .opts = cmd_errors_opts,
+ .handler = cmd_errors,
+ .synopsis = "show all errors encountered in processing files",
+ .help = cmd_errors_help
+};
+
+/* Groups of commands */
+static const struct command_grp_def cmd_grp_admin_def = {
+ .name = "Admin",
+ .commands = {
+ &cmd_context_def,
+ &cmd_load_def,
+ &cmd_save_def,
+ &cmd_transform_def,
+ &cmd_load_file_def,
+ &cmd_retrieve_def,
+ &cmd_store_def,
+ &cmd_quit_def,
+ &cmd_def_last
+ }
+};
+
+static const struct command_grp_def cmd_grp_info_def = {
+ .name = "Informational",
+ .commands = {
+ &cmd_errors_def,
+ &cmd_info_def,
+ &cmd_help_def,
+ &cmd_source_def,
+ &cmd_preview_def,
+ &cmd_def_last
+ }
+};
+
+static const struct command_grp_def cmd_grp_read_def = {
+ .name = "Read",
+ .commands = {
+ &cmd_dump_xml_def,
+ &cmd_get_def,
+ &cmd_label_def,
+ &cmd_ls_def,
+ &cmd_match_def,
+ &cmd_count_def,
+ &cmd_print_def,
+ &cmd_span_def,
+ &cmd_def_last
+ }
+};
+
+static const struct command_grp_def cmd_grp_write_def = {
+ .name = "Write",
+ .commands = {
+ &cmd_clear_def,
+ &cmd_clearm_def,
+ &cmd_ins_def,
+ &cmd_insert_def,
+ &cmd_mv_def,
+ &cmd_move_def,
+ &cmd_cp_def,
+ &cmd_copy_def,
+ &cmd_rename_def,
+ &cmd_rm_def,
+ &cmd_set_def,
+ &cmd_setm_def,
+ &cmd_touch_def,
+ &cmd_def_last
+ }
+};
+
+static const struct command_grp_def cmd_grp_pathx_def = {
+ .name = "Path expression",
+ .commands = {
+ &cmd_defnode_def,
+ &cmd_defvar_def,
+ &cmd_def_last
+ }
+};
+
+static const struct command_grp_def *const cmd_groups[] = {
+ &cmd_grp_admin_def,
+ &cmd_grp_info_def,
+ &cmd_grp_read_def,
+ &cmd_grp_write_def,
+ &cmd_grp_pathx_def,
+ &cmd_grp_def_last
+};
+
+static const struct command_def *lookup_cmd_def(const char *name) {
+ for (int i = 0; cmd_groups[i]->name != NULL; i++) {
+ for (int j = 0; cmd_groups[i]->commands[j]->name != NULL; j++) {
+ if (STREQ(name, cmd_groups[i]->commands[j]->name))
+ return cmd_groups[i]->commands[j];
+ }
+ }
+ return NULL;
+}
+
+static void cmd_help(struct command *cmd) {
+ const char *name = arg_value(cmd, "command");
+ char buf[100];
+
+ if (name == NULL) {
+ fprintf(cmd->out, "\n");
+ for (int i=0; cmd_groups[i]->name != NULL; i++) {
+ fprintf(cmd->out, "%s commands:\n", cmd_groups[i]->name);
+ for (int j=0; cmd_groups[i]->commands[j]->name != NULL; j++) {
+ const struct command_def *def = cmd_groups[i]->commands[j];
+ fprintf(cmd->out, " %-10s - %s\n", def->name, def->synopsis);
+ }
+ fprintf(cmd->out, "\n");
+ }
+ fprintf(cmd->out,
+ "Type 'help <command>' for more information on a command\n\n");
+ } else {
+ const struct command_def *def = lookup_cmd_def(name);
+ const struct command_opt_def *odef = NULL;
+
+ ERR_THROW(def == NULL, cmd->aug, AUG_ECMDRUN,
+ "unknown command %s\n", name);
+ fprintf(cmd->out, " COMMAND\n");
+ fprintf(cmd->out, " %s - %s\n\n", name, def->synopsis);
+ fprintf(cmd->out, " SYNOPSIS\n");
+ fprintf(cmd->out, " %s", name);
+
+ for (odef = def->opts; odef->name != NULL; odef++) {
+ format_defname(buf, odef, true);
+ fprintf(cmd->out, "%s", buf);
+ }
+ fprintf(cmd->out, "\n\n");
+ fprintf(cmd->out, " DESCRIPTION\n");
+ format_desc(def->help);
+ if (def->opts->name != NULL) {
+ fprintf(cmd->out, " OPTIONS\n");
+ for (odef = def->opts; odef->name != NULL; odef++) {
+ const char *help = odef->help;
+ if (help == NULL)
+ help = "";
+ format_defname(buf, odef, false);
+ fprintf(cmd->out, " %-10s %s\n", buf, help);
+ }
+ }
+ fprintf(cmd->out, "\n");
+ }
+ error:
+ return;
+}
+
+int aug_srun(augeas *aug, FILE *out, const char *text) {
+ char *line = NULL;
+ const char *eol;
+ struct command cmd;
+ int result = 0;
+
+ api_entry(aug);
+
+ MEMZERO(&cmd, 1);
+ cmd.aug = aug;
+ cmd.error = aug->error;
+ cmd.out = out;
+
+ if (text == NULL)
+ goto done;
+
+ while (*text != '\0' && result >= 0) {
+ eol = strchrnul(text, '\n');
+ while (isspace(*text) && text < eol) text++;
+ if (*text == '\0')
+ break;
+ if (*text == '#' || text == eol) {
+ text = (*eol == '\0') ? eol : eol + 1;
+ continue;
+ }
+
+ line = strndup(text, eol - text);
+ ERR_NOMEM(line == NULL, aug);
+
+ if (parseline(&cmd, line) == 0) {
+ cmd.def->handler(&cmd);
+ result += 1;
+ } else {
+ result = -1;
+ }
+
+ ERR_BAIL(aug);
+ if (result >= 0 && cmd.quit) {
+ result = -2;
+ goto done;
+ }
+
+ free_command_opts(&cmd);
+ FREE(line);
+ text = (*eol == '\0') ? eol : eol + 1;
+ }
+ done:
+ free_command_opts(&cmd);
+ FREE(line);
+
+ api_exit(aug);
+ return result;
+ error:
+ result = -1;
+ goto done;
+}
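+
+/* A minimal caller sketch (hypothetical): the return value of aug_srun is
+ * the number of commands executed, -1 if a command failed, or -2 if the
+ * script ran 'quit'.
+ *
+ *     augeas *aug = aug_init(NULL, NULL, AUG_NONE);
+ *     int ran = aug_srun(aug, stdout, "set /files/etc/hostname/hostname myhost");
+ *     aug_close(aug);
+ */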
+
+/*
+ * Local variables:
+ * indent-tabs-mode: nil
+ * c-indent-level: 4
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
--- /dev/null
+/*
+ * augtool.c:
+ *
+ * Copyright (C) 2007-2016 David Lutterkort
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: David Lutterkort <dlutter@redhat.com>
+ */
+
+#include <config.h>
+#include "augeas.h"
+#include "internal.h"
+#include "safe-alloc.h"
+
+#include <readline/readline.h>
+#include <readline/history.h>
+#include <argz.h>
+#include <getopt.h>
+#include <limits.h>
+#include <ctype.h>
+#include <locale.h>
+#include <signal.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <pwd.h>
+#include <stdarg.h>
+#include <sys/time.h>
+
+/* Global variables */
+
+static augeas *aug = NULL;
+static const char *const progname = "augtool";
+static unsigned int flags = AUG_NONE;
+const char *root = NULL;
+char *loadpath = NULL;
+char *transforms = NULL;
+char *loadonly = NULL;
+size_t transformslen = 0;
+size_t loadonlylen = 0;
+const char *inputfile = NULL;
+int echo_commands = 0; /* Also changed in main_loop */
+bool print_version = false;
+bool auto_save = false;
+bool interactive = false;
+bool timing = false;
+/* History file is ~/.augeas/history */
+char *history_file = NULL;
+
+#define AUGTOOL_PROMPT "augtool> "
+
+/*
+ * General utilities
+ */
+
+/* Private copy of xasprintf from internal to avoid multiple-definition
+ * errors in static builds.
+ */
+static int _xasprintf(char **strp, const char *format, ...) {
+ va_list args;
+ int result;
+
+ va_start (args, format);
+ result = vasprintf (strp, format, args);
+ va_end (args);
+ if (result < 0)
+ *strp = NULL;
+ return result;
+}
+
+static int child_count(const char *path) {
+ char *pat = NULL;
+ int r;
+
+ if (path[strlen(path)-1] == SEP)
+ r = asprintf(&pat, "%s*", path);
+ else
+ r = asprintf(&pat, "%s/*", path);
+ if (r < 0)
+ return -1;
+ r = aug_match(aug, pat, NULL);
+ free(pat);
+
+ return r;
+}
+
+static char *readline_path_generator(const char *text, int state) {
+ static int current = 0;
+ static char **children = NULL;
+ static int nchildren = 0;
+ static char *ctx = NULL;
+
+ char *end = strrchr(text, SEP);
+ if (end != NULL)
+ end += 1;
+
+ if (state == 0) {
+ char *path;
+ if (end == NULL) {
+ if ((path = strdup("*")) == NULL)
+ return NULL;
+ } else {
+ if (ALLOC_N(path, end - text + 2) < 0)
+ return NULL;
+ strncpy(path, text, end - text);
+ strcat(path, "*");
+ }
+
+ for (;current < nchildren; current++)
+ free((void *) children[current]);
+ free((void *) children);
+ nchildren = aug_match(aug, path, &children);
+ current = 0;
+
+ ctx = NULL;
+ if (path[0] != SEP)
+ aug_get(aug, AUGEAS_CONTEXT, (const char **) &ctx);
+
+ free(path);
+ }
+
+ if (end == NULL)
+ end = (char *) text;
+
+ while (current < nchildren) {
+ char *child = children[current];
+ current += 1;
+
+ char *chend = strrchr(child, SEP) + 1;
+ if (STREQLEN(chend, end, strlen(end))) {
+ if (child_count(child) > 0) {
+ char *c = realloc(child, strlen(child)+2);
+ if (c == NULL) {
+ free(child);
+ return NULL;
+ }
+ child = c;
+ strcat(child, "/");
+ }
+
+ /* strip off context if the user didn't give it */
+ if (ctx != NULL) {
+ int ctxidx = strlen(ctx);
+ if (child[ctxidx] == SEP)
+ ctxidx++;
+ char *c = strdup(&child[ctxidx]);
+ free(child);
+ if (c == NULL)
+ return NULL;
+ child = c;
+ }
+
+ rl_filename_completion_desired = 1;
+ rl_completion_append_character = '\0';
+ return child;
+ } else {
+ free(child);
+ }
+ }
+ return NULL;
+}
+
+static char *readline_command_generator(const char *text, int state) {
+ // FIXME: expose somewhere under /augeas
+ static const char *const commands[] = {
+ "quit", "clear", "defnode", "defvar",
+ "get", "label", "ins", "load", "ls", "match",
+ "mv", "cp", "rename", "print", "dump-xml", "rm", "save", "set", "setm",
+ "clearm", "span", "store", "retrieve", "transform", "load-file",
+ "help", "touch", "insert", "move", "copy", "errors", "source", "context",
+ "info", "count", "preview",
+ NULL };
+
+ static int current = 0;
+ const char *name;
+
+ if (state == 0)
+ current = 0;
+
+ rl_completion_append_character = ' ';
+ while ((name = commands[current]) != NULL) {
+ current += 1;
+ if (STREQLEN(text, name, strlen(text)))
+ return strdup(name);
+ }
+ return NULL;
+}
+
+#ifndef HAVE_RL_COMPLETION_MATCHES
+typedef char *rl_compentry_func_t(const char *, int);
+static char **rl_completion_matches(ATTRIBUTE_UNUSED const char *text,
+ ATTRIBUTE_UNUSED rl_compentry_func_t *func) {
+ return NULL;
+}
+#endif
+
+#ifndef HAVE_RL_CRLF
+static int rl_crlf(void) {
+ if (rl_outstream != NULL)
+ putc('\n', rl_outstream);
+ return 0;
+}
+#endif
+
+#ifndef HAVE_RL_REPLACE_LINE
+static void rl_replace_line(ATTRIBUTE_UNUSED const char *text,
+ ATTRIBUTE_UNUSED int clear_undo) {
+ return;
+}
+#endif
+
+static char **readline_completion(const char *text, int start,
+ ATTRIBUTE_UNUSED int end) {
+ if (start == 0)
+ return rl_completion_matches(text, readline_command_generator);
+ else
+ return rl_completion_matches(text, readline_path_generator);
+}
+
+static char *get_home_dir(uid_t uid) {
+ char *strbuf;
+ char *result;
+ struct passwd pwbuf;
+ struct passwd *pw = NULL;
+ long val = sysconf(_SC_GETPW_R_SIZE_MAX);
+
+ if (val < 0) {
+ // The libc won't tell us how big a buffer to reserve.
+ // Let's hope that 16k is enough (it really should be).
+ val = 16*1024;
+ }
+
+ size_t strbuflen = (size_t) val;
+
+ if (ALLOC_N(strbuf, strbuflen) < 0)
+ return NULL;
+
+ if (getpwuid_r(uid, &pwbuf, strbuf, strbuflen, &pw) != 0 || pw == NULL) {
+ free(strbuf);
+
+ // Try to get the user's home dir from the environment
+ char *env = getenv("HOME");
+ if (env != NULL) {
+ return strdup(env);
+ }
+ return NULL;
+ }
+
+ result = strdup(pw->pw_dir);
+
+ free(strbuf);
+
+ return result;
+}
+
+/* Inspired from:
+ * https://thoughtbot.com/blog/tab-completion-in-gnu-readline
+ */
+static int quote_detector(char *str, int index) {
+ return index > 0
+ && str[index - 1] == '\\'
+ && quote_detector(str, index - 1) == 0;
+}
+
+static void readline_init(void) {
+ rl_readline_name = "augtool";
+ rl_attempted_completion_function = readline_completion;
+ rl_completion_entry_function = readline_path_generator;
+ rl_completer_quote_characters = "\"'";
+ rl_completer_word_break_characters = (char *) " ";
+ rl_char_is_quoted_p = &quote_detector;
+
+ /* Set up persistent history */
+ char *home_dir = get_home_dir(getuid());
+ char *history_dir = NULL;
+
+ if (home_dir == NULL)
+ goto done;
+
+ if (_xasprintf(&history_dir, "%s/.augeas", home_dir) < 0)
+ goto done;
+
+ if (mkdir(history_dir, 0755) < 0 && errno != EEXIST)
+ goto done;
+
+ if (_xasprintf(&history_file, "%s/history", history_dir) < 0)
+ goto done;
+
+ stifle_history(500);
+
+ read_history(history_file);
+
+ done:
+ free(home_dir);
+ free(history_dir);
+}
+
+__attribute__((noreturn))
+static void help(void) {
+ fprintf(stderr, "Usage: %s [OPTIONS] [COMMAND]\n", progname);
+ fprintf(stderr, "Load the Augeas tree and modify it. If no COMMAND is given, run interactively.\n");
+ fprintf(stderr, "Run '%s help' to get a list of possible commands.\n",
+ progname);
+ fprintf(stderr, "\nOptions:\n\n");
+ fprintf(stderr, " -c, --typecheck typecheck lenses\n");
+ fprintf(stderr, " -b, --backup preserve originals of modified files with\n"
+ " extension '.augsave'\n");
+ fprintf(stderr, " -n, --new save changes in files with extension '.augnew',\n"
+ " leave original unchanged\n");
+ fprintf(stderr, " -r, --root ROOT use ROOT as the root of the filesystem\n");
+ fprintf(stderr, " -I, --include DIR search DIR for modules; can be given multiple times\n");
+ fprintf(stderr, " -t, --transform XFM add a file transform; uses the 'transform' command\n"
+ " syntax, e.g. -t 'Fstab incl /etc/fstab.bak'\n");
+ fprintf(stderr, " -l, --load-file FILE load individual FILE in the tree\n");
+ fprintf(stderr, " -e, --echo echo commands when reading from a file\n");
+ fprintf(stderr, " -f, --file FILE read commands from FILE\n");
+ fprintf(stderr, " -s, --autosave automatically save at the end of instructions\n");
+ fprintf(stderr, " -i, --interactive run an interactive shell after evaluating\n"
+ " the commands in STDIN and FILE\n");
+ fprintf(stderr, " -S, --nostdinc do not search the builtin default directories\n"
+ " for modules\n");
+ fprintf(stderr, " -L, --noload do not load any files into the tree on startup\n");
+ fprintf(stderr, " -A, --noautoload do not autoload modules from the search path\n");
+ fprintf(stderr, " --span load span positions for nodes related to a file\n");
+ fprintf(stderr, " --timing after executing each command, show how long it took\n");
+ fprintf(stderr, " --version print version information and exit\n");
+
+ exit(EXIT_FAILURE);
+}
+
+static void parse_opts(int argc, char **argv) {
+ int opt;
+ size_t loadpathlen = 0;
+ enum {
+ VAL_VERSION = CHAR_MAX + 1,
+ VAL_SPAN = VAL_VERSION + 1,
+ VAL_TIMING = VAL_SPAN + 1
+ };
+ struct option options[] = {
+ { "help", 0, 0, 'h' },
+ { "typecheck", 0, 0, 'c' },
+ { "backup", 0, 0, 'b' },
+ { "new", 0, 0, 'n' },
+ { "root", 1, 0, 'r' },
+ { "include", 1, 0, 'I' },
+ { "transform", 1, 0, 't' },
+ { "load-file", 1, 0, 'l' },
+ { "echo", 0, 0, 'e' },
+ { "file", 1, 0, 'f' },
+ { "autosave", 0, 0, 's' },
+ { "interactive", 0, 0, 'i' },
+ { "nostdinc", 0, 0, 'S' },
+ { "noload", 0, 0, 'L' },
+ { "noautoload", 0, 0, 'A' },
+ { "span", 0, 0, VAL_SPAN },
+ { "timing", 0, 0, VAL_TIMING },
+ { "version", 0, 0, VAL_VERSION },
+ { 0, 0, 0, 0}
+ };
+ int idx;
+
+ while ((opt = getopt_long(argc, argv, "hnbcr:I:t:l:ef:siSLA", options, &idx)) != -1) {
+ switch(opt) {
+ case 'c':
+ flags |= AUG_TYPE_CHECK;
+ break;
+ case 'b':
+ flags |= AUG_SAVE_BACKUP;
+ break;
+ case 'n':
+ flags |= AUG_SAVE_NEWFILE;
+ break;
+ case 'h':
+ help();
+ break;
+ case 'r':
+ root = optarg;
+ break;
+ case 'I':
+ argz_add(&loadpath, &loadpathlen, optarg);
+ break;
+ case 't':
+ argz_add(&transforms, &transformslen, optarg);
+ break;
+ case 'l':
+ // --load-file implies --noload
+ flags |= AUG_NO_LOAD;
+ argz_add(&loadonly, &loadonlylen, optarg);
+ break;
+ case 'e':
+ echo_commands = 1;
+ break;
+ case 'f':
+ inputfile = optarg;
+ break;
+ case 's':
+ auto_save = true;
+ break;
+ case 'i':
+ interactive = true;
+ break;
+ case 'S':
+ flags |= AUG_NO_STDINC;
+ break;
+ case 'L':
+ flags |= AUG_NO_LOAD;
+ break;
+ case 'A':
+ flags |= AUG_NO_MODL_AUTOLOAD;
+ break;
+ case VAL_VERSION:
+ flags |= AUG_NO_MODL_AUTOLOAD;
+ print_version = true;
+ break;
+ case VAL_SPAN:
+ flags |= AUG_ENABLE_SPAN;
+ break;
+ case VAL_TIMING:
+ timing = true;
+ break;
+ default:
+ fprintf(stderr, "Try '%s --help' for more information.\n",
+ progname);
+ exit(EXIT_FAILURE);
+ break;
+ }
+ }
+ argz_stringify(loadpath, loadpathlen, PATH_SEP_CHAR);
+}
+
+static void print_version_info(void) {
+ const char *version;
+ int r;
+
+ r = aug_get(aug, "/augeas/version", &version);
+ if (r != 1)
+ goto error;
+
+ fprintf(stderr, "augtool %s <http://augeas.net/>\n", version);
+ fprintf(stderr, "Copyright (C) 2007-2016 David Lutterkort\n");
+ fprintf(stderr, "License LGPLv2+: GNU LGPL version 2.1 or later\n");
+ fprintf(stderr, " <http://www.gnu.org/licenses/lgpl-2.1.html>\n");
+ fprintf(stderr, "This is free software: you are free to change and redistribute it.\n");
+ fprintf(stderr, "There is NO WARRANTY, to the extent permitted by law.\n\n");
+ fprintf(stderr, "Written by David Lutterkort\n");
+ return;
+ error:
+ fprintf(stderr, "Something went terribly wrong internally - please file a bug\n");
+}
+
+static void print_time_taken(const struct timeval *start,
+ const struct timeval *stop) {
+ time_t elapsed = (stop->tv_sec - start->tv_sec)*1000
+ + (stop->tv_usec - start->tv_usec)/1000;
+ printf("Time: %ld ms\n", elapsed);
+}
+
+static int run_command(const char *line, bool with_timing) {
+ int result;
+ struct timeval stop, start;
+
+ gettimeofday(&start, NULL);
+ result = aug_srun(aug, stdout, line);
+ gettimeofday(&stop, NULL);
+ if (with_timing && result >= 0) {
+ print_time_taken(&start, &stop);
+ }
+
+ if (isatty(fileno(stdin)))
+ add_history(line);
+ return result;
+}
+
+static void print_aug_error(void) {
+ if (aug_error(aug) == AUG_ENOMEM) {
+ fprintf(stderr, "Out of memory.\n");
+ return;
+ }
+ if (aug_error(aug) != AUG_NOERROR) {
+ fprintf(stderr, "error: %s\n", aug_error_message(aug));
+ if (aug_error_minor_message(aug) != NULL)
+ fprintf(stderr, "error: %s\n",
+ aug_error_minor_message(aug));
+ if (aug_error_details(aug) != NULL) {
+ fputs(aug_error_details(aug), stderr);
+ fprintf(stderr, "\n");
+ }
+ }
+}
+
+static void sigint_handler(ATTRIBUTE_UNUSED int signum) {
+ // Cancel the current line of input, along with undo info for that line.
+ rl_replace_line("", 1);
+
+ // Move the cursor to the next screen line, then force a re-display.
+ rl_crlf();
+ rl_forced_update_display();
+}
+
+static void install_signal_handlers(void) {
+ // On Ctrl-C, cancel the current line (rather than exit the program).
+ struct sigaction sigint_action;
+ MEMZERO(&sigint_action, 1);
+ sigint_action.sa_handler = sigint_handler;
+ sigemptyset(&sigint_action.sa_mask);
+ sigint_action.sa_flags = 0;
+ sigaction(SIGINT, &sigint_action, NULL);
+}
+
+static int main_loop(void) {
+ char *line = NULL;
+ int ret = 0;
+ char inputline [128];
+ int code;
+ bool end_reached = false;
+ bool get_line = true;
+ bool in_interactive = false;
+
+ if (inputfile) {
+ if (freopen(inputfile, "r", stdin) == NULL) {
+ char *msg = NULL;
+ if (asprintf(&msg, "Failed to open %s", inputfile) < 0)
+ perror("Failed to open input file");
+ else
+ perror(msg);
+ return -1;
+ }
+ }
+
+ install_signal_handlers();
+
+ // make readline silent by default
+ echo_commands = echo_commands || isatty(fileno(stdin));
+ if (echo_commands)
+ rl_outstream = NULL;
+ else
+ rl_outstream = fopen("/dev/null", "w");
+
+ while(1) {
+ if (get_line) {
+ line = readline(AUGTOOL_PROMPT);
+ } else {
+ line = NULL;
+ }
+
+ if (line == NULL) {
+ if (!isatty(fileno(stdin)) && interactive && !in_interactive) {
+ in_interactive = true;
+ if (echo_commands)
+ printf("\n");
+ echo_commands = true;
+
+ // reopen in stream
+ if (freopen("/dev/tty", "r", stdin) == NULL) {
+ perror("Failed to open terminal for reading");
+ return -1;
+ }
+ rl_instream = stdin;
+
+ // reopen stdout and stream to a tty if originally silenced or
+ // not connected to a tty, for full interactive mode
+ if (rl_outstream == NULL || !isatty(fileno(rl_outstream))) {
+ if (rl_outstream != NULL) {
+ fclose(rl_outstream);
+ rl_outstream = NULL;
+ }
+ if (freopen("/dev/tty", "w", stdout) == NULL) {
+ perror("Failed to reopen stdout");
+ return -1;
+ }
+ rl_outstream = stdout;
+ }
+ continue;
+ }
+
+ if (auto_save) {
+ strncpy(inputline, "save", sizeof(inputline));
+ line = inputline;
+ if (echo_commands)
+ printf("%s\n", line);
+ auto_save = false;
+ } else {
+ end_reached = true;
+ }
+ get_line = false;
+ }
+
+ if (end_reached) {
+ if (echo_commands)
+ printf("\n");
+ return ret;
+ }
+
+ if (*line == '\0' || *line == '#') {
+ free(line);
+ continue;
+ }
+
+ code = run_command(line, timing);
+ if (code == -2) {
+ free(line);
+ return ret;
+ }
+
+ if (code < 0) {
+ ret = -1;
+ print_aug_error();
+ }
+
+ if (line != inputline)
+ free(line);
+ }
+}
+
+static int run_args(int argc, char **argv) {
+ size_t len = 0;
+ char *line = NULL;
+ int code;
+
+ for (int i=0; i < argc; i++)
+ len += strlen(argv[i]) + 1;
+ if (ALLOC_N(line, len + 1) < 0)
+ return -1;
+ for (int i=0; i < argc; i++) {
+ strcat(line, argv[i]);
+ strcat(line, " ");
+ }
+ if (echo_commands)
+ printf("%s%s\n", AUGTOOL_PROMPT, line);
+ code = run_command(line, timing);
+ free(line);
+ if (code >= 0 && auto_save) {
+ if (echo_commands)
+ printf("%ssave\n", AUGTOOL_PROMPT);
+ code = run_command("save", false);
+ }
+
+ if (code < 0) {
+ code = -1;
+ print_aug_error();
+ }
+ return (code >= 0 || code == -2) ? 0 : -1;
+}
+
+static void add_transforms(char *ts, size_t tslen) {
+ char *command;
+ int r;
+ char *t = NULL;
+ bool added_transform = false;
+
+ while ((t = argz_next(ts, tslen, t))) {
+ r = _xasprintf(&command, "transform %s", t);
+ if (r < 0)
+ fprintf(stderr, "error: Failed to add transform %s: could not allocate memory\n", t);
+
+ r = aug_srun(aug, stdout, command);
+ if (r < 0)
+ fprintf(stderr, "error: Failed to add transform %s: %s\n", t, aug_error_message(aug));
+
+ free(command);
+ added_transform = true;
+ }
+
+ if (added_transform) {
+ r = aug_load(aug);
+ if (r < 0)
+ fprintf(stderr, "error: Failed to load with new transforms: %s\n", aug_error_message(aug));
+ }
+}
+
+static void load_files(char *ts, size_t tslen) {
+ char *command;
+ int r;
+ char *t = NULL;
+
+ while ((t = argz_next(ts, tslen, t))) {
+ r = _xasprintf(&command, "load-file %s", t);
+ if (r < 0)
+ fprintf(stderr, "error: Failed to load file %s: could not allocate memory\n", t);
+
+ r = aug_srun(aug, stdout, command);
+ if (r < 0)
+ fprintf(stderr, "error: Failed to load file %s: %s\n", t, aug_error_message(aug));
+
+ free(command);
+ }
+}
+
+int main(int argc, char **argv) {
+ int r;
+ struct timeval start, stop;
+
+ setlocale(LC_ALL, "");
+
+ parse_opts(argc, argv);
+
+ if (timing) {
+ printf("Initializing augeas ... ");
+ fflush(stdout);
+ }
+ gettimeofday(&start, NULL);
+
+ aug = aug_init(root, loadpath, flags|AUG_NO_ERR_CLOSE);
+
+ gettimeofday(&stop, NULL);
+ if (timing) {
+ printf("done\n");
+ print_time_taken(&start, &stop);
+ }
+
+ if (aug == NULL || aug_error(aug) != AUG_NOERROR) {
+ fprintf(stderr, "Failed to initialize Augeas\n");
+ if (aug != NULL)
+ print_aug_error();
+ exit(EXIT_FAILURE);
+ }
+ load_files(loadonly, loadonlylen);
+ add_transforms(transforms, transformslen);
+ if (print_version) {
+ print_version_info();
+ return EXIT_SUCCESS;
+ }
+ readline_init();
+ if (optind < argc) {
+ // Accept one command from the command line
+ r = run_args(argc - optind, argv+optind);
+ } else {
+ r = main_loop();
+ }
+ if (history_file != NULL)
+ write_history(history_file);
+
+ aug_close(aug);
+ return r == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
+}
+
+
+/*
+ * Local variables:
+ * indent-tabs-mode: nil
+ * c-indent-level: 4
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
--- /dev/null
+# bash completion for augmatch -*- shell-script -*-
+
+_augmatch()
+{
+ local cur prev words cword
+ _init_completion || return
+
+ case $prev in
+ --help | -!(-*)[h])
+ return
+ ;;
+ --lens | -!(-*)[l])
+ local lenses
+ lenses="$(augtool --noload 'match /augeas/load/*/lens' | sed -e 's/.*@//')"
+ COMPREPLY=($(compgen -W "${lenses,,}" -- "${cur,,}"))
+ return
+ ;;
+ --root | --include | -!(-*)[rI])
+ _filedir -d
+ return
+ ;;
+ esac
+
+ if [[ "$cur" == -* ]]; then
+ local opts="$(_parse_help "$1")"
+ COMPREPLY=($(compgen -W '${opts:-$(_parse_help "$1")}' -- "$cur"))
+ else
+ _filedir
+ fi
+} &&
+ complete -F _augmatch augmatch
+
+# ex: filetype=sh
--- /dev/null
+# bash completion for augprint -*- shell-script -*-
+
+_augprint()
+{
+ local cur prev words cword
+ _init_completion || return
+
+ case $prev in
+ --help | --version | -!(-*)[hV])
+ return
+ ;;
+ --target | -!(-*)[t])
+ _filedir
+ return
+ ;;
+ --lens | -!(-*)[l])
+ local lenses
+ lenses="$(augtool --noload 'match /augeas/load/*/lens' | sed -e 's/.*@//')"
+ COMPREPLY=($(compgen -W "${lenses,,}" -- "${cur,,}"))
+ return
+ ;;
+ esac
+
+ if [[ "$cur" == -* ]]; then
+ local opts="$(_parse_help "$1")"
+ COMPREPLY=($(compgen -W '${opts:-$(_parse_help "$1")}' -- "$cur"))
+ else
+ _filedir
+ fi
+} &&
+ complete -F _augprint augprint
+
+# ex: filetype=sh
--- /dev/null
+# bash completion for augtool -*- shell-script -*-
+
+_augtool()
+{
+ local cur prev words cword
+ _init_completion || return
+
+ case $prev in
+ --help | --version | -!(-*)[hV])
+ return
+ ;;
+ --load-file | --file | -!(-*)[lf])
+ _filedir
+ return
+ ;;
+ --root | --include | -!(-*)[rI])
+ _filedir -d
+ return
+ ;;
+ esac
+
+ if [[ "$cur" == -* ]]; then
+ local opts="$(_parse_help "$1")"
+ COMPREPLY=($(compgen -W '${opts:-$(_parse_help "$1")}' -- "$cur"))
+ fi
+} &&
+ complete -F _augtool augtool
+
+# ex: filetype=sh
--- /dev/null
+/*
+ * builtin.c: builtin primitives
+ *
+ * Copyright (C) 2007-2016 David Lutterkort
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: David Lutterkort <dlutter@redhat.com>
+ */
+
+#include <config.h>
+#include <stdio.h>
+#include <stdarg.h>
+#include <errno.h>
+#include <stdlib.h>
+
+#include "syntax.h"
+#include "memory.h"
+#include "transform.h"
+#include "errcode.h"
+
+#define UNIMPL_BODY(name) \
+ { \
+ FIXME(#name " called"); \
+ abort(); \
+ }
+
+/*
+ * Lenses
+ */
+
+/* V_REGEXP -> V_STRING -> V_LENS */
+static struct value *lns_del(struct info *info, struct value **argv) {
+ struct value *rxp = argv[0];
+ struct value *dflt = argv[1];
+
+ assert(rxp->tag == V_REGEXP);
+ assert(dflt->tag == V_STRING);
+ return lns_make_prim(L_DEL, ref(info),
+ ref(rxp->regexp), ref(dflt->string));
+}
+
+/* V_REGEXP -> V_LENS */
+static struct value *lns_store(struct info *info, struct value **argv) {
+ struct value *rxp = argv[0];
+
+ assert(rxp->tag == V_REGEXP);
+ return lns_make_prim(L_STORE, ref(info), ref(rxp->regexp), NULL);
+}
+
+/* V_STRING -> V_LENS */
+static struct value *lns_value(struct info *info, struct value **argv) {
+ struct value *str = argv[0];
+
+ assert(str->tag == V_STRING);
+ return lns_make_prim(L_VALUE, ref(info), NULL, ref(str->string));
+}
+
+/* V_REGEXP -> V_LENS */
+static struct value *lns_key(struct info *info, struct value **argv) {
+ struct value *rxp = argv[0];
+
+ assert(rxp->tag == V_REGEXP);
+ return lns_make_prim(L_KEY, ref(info), ref(rxp->regexp), NULL);
+}
+
+/* V_STRING -> V_LENS */
+static struct value *lns_label(struct info *info, struct value **argv) {
+ struct value *str = argv[0];
+
+ assert(str->tag == V_STRING);
+ return lns_make_prim(L_LABEL, ref(info), NULL, ref(str->string));
+}
+
+/* V_STRING -> V_LENS */
+static struct value *lns_seq(struct info *info, struct value **argv) {
+ struct value *str = argv[0];
+
+ assert(str->tag == V_STRING);
+ return lns_make_prim(L_SEQ, ref(info), NULL, ref(str->string));
+}
+
+/* V_STRING -> V_LENS */
+static struct value *lns_counter(struct info *info, struct value **argv) {
+ struct value *str = argv[0];
+
+ assert(str->tag == V_STRING);
+ return lns_make_prim(L_COUNTER, ref(info), NULL, ref(str->string));
+}
+
+/* V_LENS -> V_LENS -> V_LENS -> V_LENS */
+static struct value *lns_square(struct info *info, struct value **argv) {
+ struct value *l1 = argv[0];
+ struct value *l2 = argv[1];
+ struct value *l3 = argv[2];
+
+ assert(l1->tag == V_LENS);
+ assert(l2->tag == V_LENS);
+ assert(l3->tag == V_LENS);
+ int check = typecheck_p(info);
+
+ return lns_make_square(ref(info), ref(l1->lens), ref(l2->lens), ref(l3->lens), check);
+}
+
+static void exn_lns_error_detail(struct value *exn, const char *label,
+ struct lens *lens) {
+ if (lens == NULL)
+ return;
+
+ char *s = format_info(lens->info);
+ exn_printf_line(exn, "%s: %s", label, s);
+ free(s);
+}
+
+static struct value *make_exn_lns_error(struct info *info,
+ struct lns_error *err,
+ const char *text) {
+ struct value *v;
+
+ if (HAS_ERR(info))
+ return info->error->exn;
+
+ v = make_exn_value(ref(info), "%s", err->message);
+ exn_lns_error_detail(v, "Lens", err->lens);
+ exn_lns_error_detail(v, " Last match", err->last);
+ exn_lns_error_detail(v, " Not matching", err->next);
+ if (err->pos >= 0) {
+ char *pos = format_pos(text, err->pos);
+ size_t line, ofs;
+ calc_line_ofs(text, err->pos, &line, &ofs);
+ exn_printf_line(v,
+ "Error encountered at %d:%d (%d characters into string)",
+ (int) line, (int) ofs, err->pos);
+ if (pos != NULL)
+ exn_printf_line(v, "%s", pos);
+ free(pos);
+ } else {
+ exn_printf_line(v, "Error encountered at path %s", err->path);
+ }
+
+ return v;
+}
+
+static void exn_print_tree(struct value *exn, struct tree *tree) {
+ struct memstream ms;
+
+ init_memstream(&ms);
+ dump_tree(ms.stream, tree);
+ close_memstream(&ms);
+ exn_printf_line(exn, "%s", ms.buf);
+ FREE(ms.buf);
+}
+
+static struct value *make_pathx_exn(struct info *info, struct pathx *p) {
+ struct value *v;
+ char *msg;
+ const char *txt, *px_err;
+ int pos;
+
+ px_err = pathx_error(p, &txt, &pos);
+ v = make_exn_value(ref(info), "syntax error in path expression: %s",
+ px_err);
+
+ if (ALLOC_N(msg, strlen(txt) + 4) >= 0) {
+ strncpy(msg, txt, pos);
+ strcat(msg, "|=|");
+ strcat(msg, txt + pos);
+ exn_add_lines(v, 1, msg);
+ }
+ return v;
+}
+
+static struct value *pathx_parse_glue(struct info *info, struct value *tree,
+ struct value *path, struct pathx **p) {
+ assert(path->tag == V_STRING);
+ assert(tree->tag == V_TREE);
+
+ if (pathx_parse(tree->origin, info->error, path->string->str, true,
+ NULL, NULL, p) != PATHX_NOERROR) {
+ return make_pathx_exn(info, *p);
+ } else {
+ return NULL;
+ }
+}
+
+/* V_LENS -> V_STRING -> V_TREE */
+static struct value *lens_get(struct info *info, struct value **argv) {
+ struct value *l = argv[0];
+ struct value *str = argv[1];
+
+ assert(l->tag == V_LENS);
+ assert(str->tag == V_STRING);
+ struct lns_error *err;
+ struct value *v;
+ const char *text = str->string->str;
+
+ struct tree *tree = lns_get(info, l->lens, text, 0, &err);
+ if (err == NULL && ! HAS_ERR(info)) {
+ v = make_value(V_TREE, ref(info));
+ v->origin = make_tree_origin(tree);
+ } else {
+ struct tree *t = make_tree_origin(tree);
+ if (t == NULL)
+ free_tree(tree);
+ tree = t;
+
+ v = make_exn_lns_error(info, err, text);
+ if (tree != NULL) {
+ exn_printf_line(v, "Tree generated so far:");
+ exn_print_tree(v, tree);
+ free_tree(tree);
+ }
+ free_lns_error(err);
+ }
+ return v;
+}
+
+
+/* V_LENS -> V_TREE -> V_STRING -> V_STRING */
+static struct value *lens_put(struct info *info, struct value **argv) {
+ struct value *l = argv[0];
+ struct value *tree = argv[1];
+ struct value *str = argv[2];
+
+ assert(l->tag == V_LENS);
+ assert(tree->tag == V_TREE);
+ assert(str->tag == V_STRING);
+
+ struct memstream ms;
+ struct value *v;
+ struct lns_error *err;
+
+ init_memstream(&ms);
+ lns_put(info, ms.stream, l->lens, tree->origin->children,
+ str->string->str, 0, &err);
+ close_memstream(&ms);
+
+ if (err == NULL && ! HAS_ERR(info)) {
+ v = make_value(V_STRING, ref(info));
+ v->string = make_string(ms.buf);
+ } else {
+ v = make_exn_lns_error(info, err, str->string->str);
+ free_lns_error(err);
+ FREE(ms.buf);
+ }
+ return v;
+}
+
+/* V_STRING -> V_STRING -> V_TREE -> V_TREE */
+static struct value *tree_set_glue(struct info *info, struct value **argv) {
+ // FIXME: This only works if TREE is not referenced more than once;
+ // otherwise we'll have some pretty weird semantics, and would really
+ // need to copy TREE first
+ struct value *path = argv[0];
+ struct value *val = argv[1];
+ struct value *tree = argv[2];
+
+ assert(path->tag == V_STRING);
+ assert(val->tag == V_STRING);
+ assert(tree->tag == V_TREE);
+
+ struct tree *fake = NULL;
+ struct pathx *p = NULL;
+ struct value *result = NULL;
+
+ if (tree->origin->children == NULL) {
+ tree->origin->children = make_tree(NULL, NULL, tree->origin, NULL);
+ fake = tree->origin->children;
+ }
+
+ result = pathx_parse_glue(info, tree, path, &p);
+ if (result != NULL)
+ goto done;
+
+ if (tree_set(p, val->string->str) == NULL) {
+ result = make_exn_value(ref(info),
+ "Tree set of %s to '%s' failed",
+ path->string->str, val->string->str);
+ goto done;
+ }
+ if (fake != NULL) {
+ list_remove(fake, tree->origin->children);
+ free_tree(fake);
+ }
+ result = ref(tree);
+
+ done:
+ free_pathx(p);
+ return result;
+}
+
+/* V_STRING -> V_TREE -> V_TREE */
+static struct value *tree_clear_glue(struct info *info, struct value **argv) {
+ // FIXME: This only works if TREE is not referenced more than once;
+ // otherwise we'll have some pretty weird semantics, and would really
+ // need to copy TREE first
+ struct value *path = argv[0];
+ struct value *tree = argv[1];
+
+ assert(path->tag == V_STRING);
+ assert(tree->tag == V_TREE);
+
+ struct tree *fake = NULL;
+ struct pathx *p = NULL;
+ struct value *result = NULL;
+
+ if (tree->origin->children == NULL) {
+ tree->origin->children = make_tree(NULL, NULL, tree->origin, NULL);
+ fake = tree->origin->children;
+ }
+
+ result = pathx_parse_glue(info, tree, path, &p);
+ if (result != NULL)
+ goto done;
+
+ if (tree_set(p, NULL) == NULL) {
+ result = make_exn_value(ref(info),
+ "Tree set of %s to NULL failed",
+ path->string->str);
+ goto done;
+ }
+ if (fake != NULL) {
+ list_remove(fake, tree->origin->children);
+ free_tree(fake);
+ }
+ result = ref(tree);
+
+ done:
+ free_pathx(p);
+ return result;
+}
+
+static struct value *tree_insert_glue(struct info *info, struct value *label,
+ struct value *path, struct value *tree,
+ int before) {
+ // FIXME: This only works if TREE is not referenced more than once;
+ // otherwise we'll have some pretty weird semantics, and would really
+ // need to copy TREE first
+ assert(label->tag == V_STRING);
+ assert(path->tag == V_STRING);
+ assert(tree->tag == V_TREE);
+
+ int r;
+ struct pathx *p = NULL;
+ struct value *result = NULL;
+
+ result = pathx_parse_glue(info, tree, path, &p);
+ if (result != NULL)
+ goto done;
+
+ r = tree_insert(p, label->string->str, before);
+ if (r != 0) {
+ result = make_exn_value(ref(info),
+ "Tree insert of %s at %s failed",
+ label->string->str, path->string->str);
+ goto done;
+ }
+
+ result = ref(tree);
+ done:
+ free_pathx(p);
+ return result;
+}
+
+/* Insert after */
+/* V_STRING -> V_STRING -> V_TREE -> V_TREE */
+static struct value *tree_insa_glue(struct info *info, struct value **argv) {
+ struct value *label = argv[0];
+ struct value *path = argv[1];
+ struct value *tree = argv[2];
+
+ return tree_insert_glue(info, label, path, tree, 0);
+}
+
+/* Insert before */
+/* V_STRING -> V_STRING -> V_TREE -> V_TREE */
+static struct value *tree_insb_glue(struct info *info, struct value **argv) {
+ struct value *label = argv[0];
+ struct value *path = argv[1];
+ struct value *tree = argv[2];
+
+ return tree_insert_glue(info, label, path, tree, 1);
+}
+
+/* V_STRING -> V_TREE -> V_TREE */
+static struct value *tree_rm_glue(struct info *info, struct value **argv) {
+ // FIXME: This only works if TREE is not referenced more than once;
+ // otherwise we'll have some pretty weird semantics, and would really
+ // need to copy TREE first
+ struct value *path = argv[0];
+ struct value *tree = argv[1];
+
+ assert(path->tag == V_STRING);
+ assert(tree->tag == V_TREE);
+
+ struct pathx *p = NULL;
+ struct value *result = NULL;
+
+ result = pathx_parse_glue(info, tree, path, &p);
+ if (result != NULL)
+ goto done;
+
+ if (tree_rm(p) == -1) {
+ result = make_exn_value(ref(info), "Tree rm of %s failed",
+ path->string->str);
+ goto done;
+ }
+ result = ref(tree);
+ done:
+ free_pathx(p);
+ return result;
+}
+
+/* V_STRING -> V_STRING */
+static struct value *gensym(struct info *info, struct value **argv) {
+ struct value *prefix = argv[0];
+
+ assert(prefix->tag == V_STRING);
+ static unsigned int count = 0;
+ struct value *v;
+ char *s;
+ int r;
+
+ r = asprintf(&s, "%s%u", prefix->string->str, count);
+ if (r == -1)
+ return NULL;
+ count += 1;
+ v = make_value(V_STRING, ref(info));
+ v->string = make_string(s);
+ return v;
+}
+
+/* V_STRING -> V_FILTER */
+static struct value *xform_incl(struct info *info, struct value **argv) {
+ struct value *s = argv[0];
+
+ assert(s->tag == V_STRING);
+ struct value *v = make_value(V_FILTER, ref(info));
+ v->filter = make_filter(ref(s->string), 1);
+ return v;
+}
+
+/* V_STRING -> V_FILTER */
+static struct value *xform_excl(struct info *info, struct value **argv) {
+ struct value *s = argv[0];
+
+ assert(s->tag == V_STRING);
+ struct value *v = make_value(V_FILTER, ref(info));
+ v->filter = make_filter(ref(s->string), 0);
+ return v;
+}
+
+/* V_LENS -> V_FILTER -> V_TRANSFORM */
+static struct value *xform_transform(struct info *info, struct value **argv) {
+ struct value *l = argv[0];
+ struct value *f = argv[1];
+
+ assert(l->tag == V_LENS);
+ assert(f->tag == V_FILTER);
+ if (l->lens->value || l->lens->key) {
+ return make_exn_value(ref(info), "Cannot build a transform "
+ "from a lens that leaves a %s behind",
+ l->lens->key ? "key" : "value");
+ }
+ struct value *v = make_value(V_TRANSFORM, ref(info));
+ v->transform = make_transform(ref(l->lens), ref(f->filter));
+ return v;
+}
+
+static struct value *sys_getenv(struct info *info, struct value **argv) {
+ assert(argv[0]->tag == V_STRING);
+ struct value *v = make_value(V_STRING, ref(info));
+ v->string = dup_string(getenv(argv[0]->string->str));
+ return v;
+}
+
+static struct value *sys_read_file(struct info *info, struct value **argv) {
+ struct value *n = argv[0];
+
+ assert(n->tag == V_STRING);
+ char *str = NULL;
+
+ str = xread_file(n->string->str);
+ if (str == NULL) {
+ char error_buf[1024];
+ const char *errmsg;
+ errmsg = xstrerror(errno, error_buf, sizeof(error_buf));
+ struct value *exn = make_exn_value(ref(info),
+ "reading file %s failed:", n->string->str);
+ exn_printf_line(exn, "%s", errmsg);
+ return exn;
+ }
+ struct value *v = make_value(V_STRING, ref(info));
+ v->string = make_string(str);
+ return v;
+}
+
+/* V_LENS -> V_LENS */
+static struct value *lns_check_rec_glue(struct info *info,
+ struct value **argv) {
+ struct value *l = argv[0];
+ struct value *r = argv[1];
+
+ assert(l->tag == V_LENS);
+ assert(r->tag == V_LENS);
+ int check = typecheck_p(info);
+
+ return lns_check_rec(info, l->lens, r->lens, check);
+}
+
+/*
+ * Print functions
+ */
+
+/* V_STRING -> V_UNIT */
+static struct value *pr_string(struct info *info, struct value **argv) {
+ printf("%s", argv[0]->string->str);
+ return make_unit(ref(info));
+}
+
+/* V_REGEXP -> V_UNIT */
+static struct value *pr_regexp(struct info *info, struct value **argv) {
+ print_regexp(stdout, argv[0]->regexp);
+ return make_unit(ref(info));
+}
+
+/* V_STRING -> V_UNIT */
+static struct value *pr_endline(struct info *info, struct value **argv) {
+ printf("%s\n", argv[0]->string->str);
+ return make_unit(ref(info));
+}
+
+/* V_TREE -> V_TREE */
+static struct value *pr_tree(ATTRIBUTE_UNUSED struct info *info,
+ struct value **argv) {
+ print_tree_braces(stdout, 0, argv[0]->origin);
+ return ref(argv[0]);
+}
+
+/*
+ * Lens inspection
+ */
+
+static struct value *lns_value_of_type(struct info *info, struct regexp *rx) {
+ struct value *result = make_value(V_REGEXP, ref(info));
+ if (rx)
+ result->regexp = ref(rx);
+ else
+ result->regexp = regexp_make_empty(ref(info));
+ return result;
+}
+
+/* V_LENS -> V_REGEXP */
+static struct value *lns_ctype(struct info *info, struct value **argv) {
+ return lns_value_of_type(info, argv[0]->lens->ctype);
+}
+
+/* V_LENS -> V_REGEXP */
+static struct value *lns_atype(struct info *info, struct value **argv) {
+ return lns_value_of_type(info, argv[0]->lens->atype);
+}
+
+/* V_LENS -> V_REGEXP */
+static struct value *lns_vtype(struct info *info, struct value **argv) {
+ return lns_value_of_type(info, argv[0]->lens->vtype);
+}
+
+/* V_LENS -> V_REGEXP */
+static struct value *lns_ktype(struct info *info, struct value **argv) {
+ return lns_value_of_type(info, argv[0]->lens->ktype);
+}
+
+/* V_LENS -> V_STRING */
+static struct value *lns_fmt_atype(struct info *info, struct value **argv) {
+ struct value *l = argv[0];
+
+ struct value *result = NULL;
+ char *s = NULL;
+ int r;
+
+ r = lns_format_atype(l->lens, &s);
+ if (r < 0)
+ return info->error->exn;
+ result = make_value(V_STRING, ref(info));
+ result->string = make_string(s);
+ return result;
+}
+
+/* V_REGEXP -> V_STRING -> V_STRING */
+static struct value *rx_match(struct info *info, struct value **argv) {
+ struct value *rx = argv[0];
+ struct value *s = argv[1];
+
+ struct value *result = NULL;
+ const char *str = s->string->str;
+ struct re_registers regs;
+ int r;
+
+ MEMZERO(&regs, 1);
+ r = regexp_match(rx->regexp, str, strlen(str), 0, &regs);
+ if (r < -1) {
+ result =
+ make_exn_value(ref(info), "regexp match failed (internal error)");
+ } else {
+ char *match = NULL;
+ if (r == -1) {
+ /* No match */
+ match = strdup("");
+ } else {
+ match = strndup(str + regs.start[0], regs.end[0] - regs.start[0]);
+ }
+ if (match == NULL) {
+ result = info->error->exn;
+ } else {
+ result = make_value(V_STRING, ref(info));
+ result->string = make_string(match);
+ }
+ }
+ return result;
+}
+
+struct module *builtin_init(struct error *error) {
+ struct module *modl = module_create("Builtin");
+ int r;
+
+#define DEFINE_NATIVE(modl, name, nargs, impl, types ...) \
+ r = define_native(error, modl, name, nargs, impl, ##types); \
+ if (r < 0) goto error;
+
+ DEFINE_NATIVE(modl, "gensym", 1, gensym, T_STRING, T_STRING);
+
+ /* Primitive lenses */
+ DEFINE_NATIVE(modl, "del", 2, lns_del, T_REGEXP, T_STRING, T_LENS);
+ DEFINE_NATIVE(modl, "store", 1, lns_store, T_REGEXP, T_LENS);
+ DEFINE_NATIVE(modl, "value", 1, lns_value, T_STRING, T_LENS);
+ DEFINE_NATIVE(modl, "key", 1, lns_key, T_REGEXP, T_LENS);
+ DEFINE_NATIVE(modl, "label", 1, lns_label, T_STRING, T_LENS);
+ DEFINE_NATIVE(modl, "seq", 1, lns_seq, T_STRING, T_LENS);
+ DEFINE_NATIVE(modl, "counter", 1, lns_counter, T_STRING, T_LENS);
+ DEFINE_NATIVE(modl, "square", 3, lns_square, T_LENS, T_LENS, T_LENS, T_LENS);
+ /* Applying lenses (mostly for tests) */
+ DEFINE_NATIVE(modl, "get", 2, lens_get, T_LENS, T_STRING, T_TREE);
+ DEFINE_NATIVE(modl, "put", 3, lens_put, T_LENS, T_TREE, T_STRING,
+ T_STRING);
+ /* Tree manipulation used by the PUT tests */
+ DEFINE_NATIVE(modl, "set", 3, tree_set_glue, T_STRING, T_STRING, T_TREE,
+ T_TREE);
+ DEFINE_NATIVE(modl, "clear", 2, tree_clear_glue, T_STRING, T_TREE,
+ T_TREE);
+ DEFINE_NATIVE(modl, "rm", 2, tree_rm_glue, T_STRING, T_TREE, T_TREE);
+ DEFINE_NATIVE(modl, "insa", 3, tree_insa_glue, T_STRING, T_STRING, T_TREE,
+ T_TREE);
+ DEFINE_NATIVE(modl, "insb", 3, tree_insb_glue, T_STRING, T_STRING, T_TREE,
+ T_TREE);
+ /* Transforms and filters */
+ DEFINE_NATIVE(modl, "incl", 1, xform_incl, T_STRING, T_FILTER);
+ DEFINE_NATIVE(modl, "excl", 1, xform_excl, T_STRING, T_FILTER);
+ DEFINE_NATIVE(modl, "transform", 2, xform_transform, T_LENS, T_FILTER,
+ T_TRANSFORM);
+ DEFINE_NATIVE(modl, LNS_CHECK_REC_NAME,
+ 2, lns_check_rec_glue, T_LENS, T_LENS, T_LENS);
+ /* Printing */
+ DEFINE_NATIVE(modl, "print_string", 1, pr_string, T_STRING, T_UNIT);
+ DEFINE_NATIVE(modl, "print_regexp", 1, pr_regexp, T_REGEXP, T_UNIT);
+ DEFINE_NATIVE(modl, "print_endline", 1, pr_endline, T_STRING, T_UNIT);
+ DEFINE_NATIVE(modl, "print_tree", 1, pr_tree, T_TREE, T_TREE);
+
+ /* Lens inspection */
+ DEFINE_NATIVE(modl, "lens_ctype", 1, lns_ctype, T_LENS, T_REGEXP);
+ DEFINE_NATIVE(modl, "lens_atype", 1, lns_atype, T_LENS, T_REGEXP);
+ DEFINE_NATIVE(modl, "lens_vtype", 1, lns_vtype, T_LENS, T_REGEXP);
+ DEFINE_NATIVE(modl, "lens_ktype", 1, lns_ktype, T_LENS, T_REGEXP);
+ DEFINE_NATIVE(modl, "lens_format_atype", 1, lns_fmt_atype,
+ T_LENS, T_STRING);
+
+ /* Regexp matching */
+ DEFINE_NATIVE(modl, "regexp_match", 2, rx_match, T_REGEXP, T_STRING,
+ T_STRING);
+
+ /* System functions */
+ struct module *sys = module_create("Sys");
+ modl->next = sys;
+ DEFINE_NATIVE(sys, "getenv", 1, sys_getenv, T_STRING, T_STRING);
+ DEFINE_NATIVE(sys, "read_file", 1, sys_read_file, T_STRING, T_STRING);
+ return modl;
+ error:
+ unref(modl, module);
+ return NULL;
+}
+
+/*
+ * Local variables:
+ * indent-tabs-mode: nil
+ * c-indent-level: 4
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
--- /dev/null
+/*
+ * errcode.c: internal interface for error reporting
+ *
+ * Copyright (C) 2009-2016 David Lutterkort
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: David Lutterkort <lutter@redhat.com>
+ */
+
+#include <config.h>
+
+#include "errcode.h"
+#include "memory.h"
+#include <stdarg.h>
+
+static void vreport_error(struct error *err, aug_errcode_t errcode,
+ const char *format, va_list ap) {
+ /* We only remember the first error */
+ if (err->code != AUG_NOERROR)
+ return;
+ assert(err->details == NULL);
+
+ err->code = errcode;
+ if (format != NULL) {
+ if (vasprintf(&err->details, format, ap) < 0)
+ err->details = NULL;
+ }
+}
+
+void report_error(struct error *err, aug_errcode_t errcode,
+ const char *format, ...) {
+ va_list ap;
+
+ va_start(ap, format);
+ vreport_error(err, errcode, format, ap);
+ va_end(ap);
+}
+
+void bug_on(struct error *err, const char *srcfile, int srclineno,
+ const char *format, ...) {
+ char *msg = NULL;
+ int r;
+ va_list ap;
+
+ if (err->code != AUG_NOERROR)
+ return;
+
+ va_start(ap, format);
+ vreport_error(err, AUG_EINTERNAL, format, ap);
+ va_end(ap);
+
+ if (err->details == NULL) {
+ xasprintf(&err->details, "%s:%d:internal error", srcfile, srclineno);
+ } else {
+ r = xasprintf(&msg, "%s:%d:%s", srcfile, srclineno, err->details);
+ if (r >= 0) {
+ free(err->details);
+ err->details = msg;
+ }
+ }
+}
+
+void reset_error(struct error *err) {
+ err->code = AUG_NOERROR;
+ err->minor = 0;
+ FREE(err->details);
+ err->minor_details = NULL;
+}
+
+/*
+ * Local variables:
+ * indent-tabs-mode: nil
+ * c-indent-level: 4
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
--- /dev/null
+/*
+ * errcode.h: internal interface for error reporting
+ *
+ * Copyright (C) 2009-2016 David Lutterkort
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: David Lutterkort <lutter@redhat.com>
+ */
+
+#ifndef ERRCODE_H_
+#define ERRCODE_H_
+
+#include "internal.h"
+/* Include augeas.h for the error codes */
+#include "augeas.h"
+
+/*
+ * Error details in a separate struct that we can pass around
+ */
+struct error {
+ aug_errcode_t code;
+ int minor;
+ char *details; /* Human readable explanation */
+ const char *minor_details; /* Human readable version of MINOR */
+ /* A dummy info of last resort; this can be used in places where
+ * a struct info is needed but none is available
+ */
+ struct info *info;
+ /* Bit of a kludge to get at struct augeas, but since struct error
+ * is now available in a lot of places (through struct info), this
+ * gives a convenient way to get at the overall state
+ */
+ const struct augeas *aug;
+ /* A preallocated exception so that we can throw something, even
+ * under OOM conditions
+ */
+ struct value *exn;
+};
+
+void report_error(struct error *err, aug_errcode_t errcode,
+ const char *format, ...)
+ ATTRIBUTE_FORMAT(printf, 3, 4);
+
+void bug_on(struct error *err, const char *srcfile, int srclineno,
+ const char *format, ...)
+ ATTRIBUTE_FORMAT(printf, 4, 5);
+
+void reset_error(struct error *err);
+
+#define HAS_ERR(obj) ((obj)->error->code != AUG_NOERROR)
+
+#define ERR_BAIL(obj) if ((obj)->error->code != AUG_NOERROR) goto error;
+
+#define ERR_RET(obj) if ((obj)->error->code != AUG_NOERROR) return;
+
+#define ERR_NOMEM(cond, obj) \
+ if (cond) { \
+ report_error((obj)->error, AUG_ENOMEM, NULL); \
+ goto error; \
+ }
+
+#define ERR_REPORT(obj, code, fmt ...) \
+ report_error((obj)->error, code, ## fmt)
+
+#define ERR_THROW(cond, obj, code, fmt ...) \
+ do { \
+ if (cond) { \
+ report_error((obj)->error, code, ## fmt); \
+ goto error; \
+ } \
+ } while(0)
+
+#define ARG_CHECK(cond, obj, fmt ...) \
+ do { \
+ if (cond) { \
+ report_error((obj)->error, AUG_EBADARG, ## fmt); \
+ goto error; \
+ } \
+ } while(0)
+
+/* Assertions that use our error reporting infrastructure instead of
+ * aborting
+ */
+#define ensure(cond, obj) \
+ if (!(cond)) { \
+ bug_on((obj)->error, __FILE__, __LINE__, NULL); \
+ goto error; \
+ }
+#define ensure0(cond, obj) \
+ if (!(cond)) { \
+ bug_on((obj)->error, __FILE__, __LINE__, NULL); \
+ return NULL; \
+ }
+
+#define BUG_ON(cond, obj, fmt ...) \
+ if (cond) { \
+ bug_on((obj)->error, __FILE__, __LINE__, ## fmt); \
+ goto error; \
+ }
+
+#endif
+
+
+/*
+ * Local variables:
+ * indent-tabs-mode: nil
+ * c-indent-level: 4
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
--- /dev/null
+/*
+ * fa.c: finite automata
+ *
+ * Copyright (C) 2007-2016 David Lutterkort
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: David Lutterkort <dlutter@redhat.com>
+ */
+
+/*
+ * This implementation follows closely the Java dk.brics.automaton package
+ * by Anders Moeller. The project's website is
+ * http://www.brics.dk/automaton/.
+ *
+ * It is by no means a complete reimplementation of that package; only a
+ * subset of what Automaton provides is implemented here.
+ */
+
+#include <config.h>
+#include <limits.h>
+#include <ctype.h>
+#include <stdbool.h>
+
+#include "internal.h"
+#include "memory.h"
+#include "ref.h"
+#include "hash.h"
+#include "fa.h"
+
+#define UCHAR_NUM (UCHAR_MAX+1)
+#define UCHAR_MIN 0
+typedef unsigned char uchar;
+
+#define E(cond) if (cond) goto error
+#define F(expr) if ((expr) < 0) goto error
+
+/* Which algorithm to use in FA_MINIMIZE */
+int fa_minimization_algorithm = FA_MIN_HOPCROFT;
+
+/* A finite automaton. INITIAL is both the initial state and the head of
+ * the list of all states. Any state that is allocated for this automaton
+ * is put on this list. Dead/unreachable states are cleared from the list
+ * at opportune times (e.g., during minimization). It's a poor man's
+ * garbage collection.
+ *
+ * Normally, transitions are on a character range [min..max]; in
+ * fa_as_regexp, we store regexps on transitions in the re field of each
+ * transition. TRANS_RE indicates that we do that, and is used by fa_dot to
+ * produce proper graphs of an automaton transitioning on regexps.
+ *
+ * For case-insensitive regexps (nocase == 1), the FA never has transitions
+ * on uppercase letters [A-Z], effectively removing these letters from the
+ * alphabet.
+ */
+struct fa {
+ struct state *initial;
+ int deterministic : 1;
+ int minimal : 1;
+ unsigned int nocase : 1;
+ int trans_re : 1;
+};
+
+/* A state in a finite automaton. Transitions are never shared between
+ states so that we can free the list when we need to free the state */
+struct state {
+ struct state *next;
+ hash_val_t hash;
+ unsigned int accept : 1;
+ unsigned int live : 1;
+ unsigned int reachable : 1;
+ unsigned int visited : 1; /* Used in various places to track progress */
+ /* Array of transitions. The first TUSED entries are in use; the
+ array has room allocated for TSIZE */
+ size_t tused;
+ size_t tsize;
+ struct trans *trans;
+};
+
+/* A transition. If the input has a character in the inclusive
+ * range [MIN, MAX], move to TO
+ */
+struct trans {
+ struct state *to;
+ union {
+ struct {
+ uchar min;
+ uchar max;
+ };
+ struct re *re;
+ };
+};
+
+/*
+ * Bitsets
+ */
+#define UINT_BIT (sizeof(unsigned int) * CHAR_BIT)
+
+typedef unsigned int bitset;
+
+static bitset *bitset_init(size_t nbits) {
+ bitset *bs;
+ if (ALLOC_N(bs, (nbits + UINT_BIT) / UINT_BIT) == -1)
+ return NULL;
+ return bs;
+}
+
+static inline void bitset_clr(bitset *bs, unsigned int bit) {
+ bs[bit/UINT_BIT] &= ~(1 << (bit % UINT_BIT));
+}
+
+static inline void bitset_set(bitset *bs, unsigned int bit) {
+ bs[bit/UINT_BIT] |= 1 << (bit % UINT_BIT);
+}
+
+ATTRIBUTE_PURE
+static inline int bitset_get(const bitset * const bs, unsigned int bit) {
+ return (bs[bit/UINT_BIT] >> bit % UINT_BIT) & 1;
+}
+
+ATTRIBUTE_PURE
+static inline bool bitset_disjoint(const bitset *const bs1,
+ const bitset *const bs2,
+ size_t nbits) {
+ for (int i=0; i < (nbits + UINT_BIT) / UINT_BIT; i++) {
+ if (bs1[i] & bs2[i])
+ return false;
+ }
+ return true;
+}
+
+static void bitset_free(bitset *bs) {
+ free(bs);
+}
+
+static void bitset_negate(bitset *bs, size_t nbits) {
+ for (int i=0; i < (nbits + UINT_BIT) / UINT_BIT; i++)
+ bs[i] = ~ bs[i];
+}
+
+/*
+ * Representation of a parsed regular expression. The regular expression is
+ * parsed according to the following grammar by PARSE_REGEXP:
+ *
+ * regexp: concat_exp ('|' regexp)?
+ * concat_exp: repeated_exp concat_exp?
+ * repeated_exp: simple_exp
+ * | simple_exp '*'
+ * | simple_exp '+'
+ * | simple_exp '?'
+ * | simple_exp '{' INT (',' INT)? '}'
+ * simple_exp: char_class
+ * | '.'
+ * | '(' regexp ')'
+ * | CHAR
+ * char_class: '[' char_exp+ ']'
+ * | '[' '^' char_exp+ ']'
+ * char_exp: CHAR '-' CHAR
+ * | CHAR
+ */
+
+enum re_type {
+ UNION,
+ CONCAT,
+ CSET,
+ CHAR,
+ ITER,
+ EPSILON
+};
+
+#define re_unref(r) unref(r, re)
+
+struct re {
+ ref_t ref;
+ enum re_type type;
+ union {
+ struct { /* UNION, CONCAT */
+ struct re *exp1;
+ struct re *exp2;
+ };
+ struct { /* CSET */
+ bool negate;
+ bitset *cset;
+ /* Whether we can use character ranges when converting back
+ * to a string */
+ unsigned int no_ranges:1;
+ };
+ struct { /* CHAR */
+ uchar c;
+ };
+ struct { /* ITER */
+ struct re *exp;
+ int min;
+ int max;
+ };
+ };
+};
+
+/* Used to keep state of the regex parse; RX may contain NUL's */
+struct re_parse {
+ const char *rx; /* Current position in regex */
+ const char *rend; /* One past the last char of RX */
+ int error; /* error code */
+ /* Whether new CSET's should have the no_ranges flag set */
+ unsigned int no_ranges:1;
+};
+
+/* String with explicit length, used when converting re to string */
+struct re_str {
+ char *rx;
+ size_t len;
+};
+
+static struct re *parse_regexp(struct re_parse *parse);
+
+/* A map from a set of states to a state. */
+typedef hash_t state_set_hash;
+
+static hash_val_t ptr_hash(const void *p);
+
+static const int array_initial_size = 4;
+static const int array_max_expansion = 128;
+
+enum state_set_init_flags {
+ S_NONE = 0,
+ S_SORTED = (1 << 0),
+ S_DATA = (1 << 1)
+};
+
+struct state_set {
+ size_t size;
+ size_t used;
+ unsigned int sorted : 1;
+ unsigned int with_data : 1;
+ struct state **states;
+ void **data;
+};
+
+struct state_set_list {
+ struct state_set_list *next;
+ struct state_set *set;
+};
+
+/* Clean up FA by removing dead transitions and states and reducing
+ * transitions. Unreachable states are freed. Returns 0 on success and
+ * -1 on failure.
+ *
+ * Only automata in this state should be returned to the user
+ */
+ATTRIBUTE_RETURN_CHECK
+static int collect(struct fa *fa);
+
+ATTRIBUTE_RETURN_CHECK
+static int totalize(struct fa *fa);
+
+/* Print an FA into a (fairly) fixed file if the environment variable
+ * FA_DOT_DIR is set. This code is only used for debugging
+ */
+#define FA_DOT_DIR "FA_DOT_DIR"
+
+ATTRIBUTE_UNUSED
+static void fa_dot_debug(struct fa *fa, const char *tag) {
+ const char *dot_dir;
+ static int count = 0;
+ int r;
+ char *fname;
+ FILE *fp;
+
+ if ((dot_dir = getenv(FA_DOT_DIR)) == NULL)
+ return;
+
+ r = asprintf(&fname, "%s/fa_%02d_%s.dot", dot_dir, count++, tag);
+ if (r == -1)
+ return;
+
+ fp = fopen(fname, "w");
+ if (fp == NULL) {
+ free(fname);
+ return;
+ }
+
+ fa_dot(fp, fa);
+ fclose(fp);
+ free(fname);
+}
+
+static void print_char_set(struct re *set) {
+ int from, to;
+
+ if (set->negate)
+ printf("[^");
+ else
+ printf("[");
+ for (from = UCHAR_MIN; from <= UCHAR_MAX; from = to+1) {
+ while (bitset_get(set->cset, from) == set->negate)
+ from += 1;
+ if (from > UCHAR_MAX)
+ break;
+ for (to = from;
+ to < UCHAR_MAX && (bitset_get(set->cset, to+1) == !set->negate);
+ to++);
+ if (to == from) {
+ printf("%c", from);
+ } else {
+ printf("%c-%c", from, to);
+ }
+ }
+ printf("]");
+}
+
+ATTRIBUTE_UNUSED
+static void print_re(struct re *re) {
+ switch(re->type) {
+ case UNION:
+ print_re(re->exp1);
+ printf("|");
+ print_re(re->exp2);
+ break;
+ case CONCAT:
+ print_re(re->exp1);
+ printf(".");
+ print_re(re->exp2);
+ break;
+ case CSET:
+ print_char_set(re);
+ break;
+ case CHAR:
+ printf("%c", re->c);
+ break;
+ case ITER:
+ printf("(");
+ print_re(re->exp);
+ printf("){%d,%d}", re->min, re->max);
+ break;
+ case EPSILON:
+ printf("<>");
+ break;
+ default:
+ printf("(**)");
+ break;
+ }
+}
+
+/*
+ * struct re_str
+ */
+static void release_re_str(struct re_str *str) {
+ if (str == NULL)
+ return;
+ FREE(str->rx);
+ str->len = 0;
+}
+
+static void free_re_str(struct re_str *str) {
+ if (str == NULL)
+ return;
+ release_re_str(str);
+ FREE(str);
+}
+
+static struct re_str *make_re_str(const char *s) {
+ struct re_str *str;
+
+ if (ALLOC(str) < 0)
+ return NULL;
+ if (s != NULL) {
+ str->rx = strdup(s);
+ str->len = strlen(s);
+ if (str->rx == NULL) {
+ FREE(str);
+ return NULL;
+ }
+ }
+ return str;
+}
+
+static int re_str_alloc(struct re_str *str) {
+ return ALLOC_N(str->rx, str->len + 1);
+}
+
+/*
+ * Memory management
+ */
+
+static void free_trans(struct state *s) {
+ free(s->trans);
+ s->trans = NULL;
+ s->tused = s->tsize = 0;
+}
+
+static void gut(struct fa *fa) {
+ list_for_each(s, fa->initial) {
+ free_trans(s);
+ }
+ list_free(fa->initial);
+ fa->initial = NULL;
+}
+
+void fa_free(struct fa *fa) {
+ if (fa == NULL)
+ return;
+ gut(fa);
+ free(fa);
+}
+
+static struct state *make_state(void) {
+ struct state *s;
+ if (ALLOC(s) == -1)
+ return NULL;
+ s->hash = ptr_hash(s);
+ return s;
+}
+
+static struct state *add_state(struct fa *fa, int accept) {
+ struct state *s = make_state();
+ if (s) {
+ s->accept = accept;
+ if (fa->initial == NULL) {
+ fa->initial = s;
+ } else {
+ list_cons(fa->initial->next, s);
+ }
+ }
+ return s;
+}
+
+#define last_trans(s) ((s)->trans + (s)->tused - 1)
+
+#define for_each_trans(t, s) \
+ for (struct trans *t = (s)->trans; \
+ (t - (s)->trans) < (s)->tused; \
+ t++)
+
+ATTRIBUTE_RETURN_CHECK
+static int add_new_trans(struct state *from, struct state *to,
+ uchar min, uchar max) {
+ assert(to != NULL);
+
+ if (from->tused == from->tsize) {
+ size_t tsize = from->tsize;
+ if (tsize == 0)
+ tsize = array_initial_size;
+ else if (from->tsize > array_max_expansion)
+ tsize += array_max_expansion;
+ else
+ tsize *= 2;
+ if (REALLOC_N(from->trans, tsize) == -1)
+ return -1;
+ from->tsize = tsize;
+ }
+ from->trans[from->tused].to = to;
+ from->trans[from->tused].min = min;
+ from->trans[from->tused].max = max;
+ from->tused += 1;
+ return 0;
+}
+
+ATTRIBUTE_RETURN_CHECK
+static int add_epsilon_trans(struct state *from,
+ struct state *to) {
+ int r;
+ from->accept |= to->accept;
+ for_each_trans(t, to) {
+ r = add_new_trans(from, t->to, t->min, t->max);
+ if (r < 0)
+ return -1;
+ }
+ return 0;
+}
+
+static void set_initial(struct fa *fa, struct state *s) {
+ list_remove(s, fa->initial);
+ list_cons(fa->initial, s);
+}
+
+/* Merge automaton FA2 into FA1. This simply adds FA2's states to FA1
+ and then frees FA2. It has no influence on the language accepted by FA1
+*/
+static void fa_merge(struct fa *fa1, struct fa **fa2) {
+ list_append(fa1->initial, (*fa2)->initial);
+ free(*fa2);
+ *fa2 = NULL;
+}
+
+/*
+ * Operations on STATE_SET
+ */
+static void state_set_free(struct state_set *set) {
+ if (set == NULL)
+ return;
+ free(set->states);
+ free(set->data);
+ free(set);
+}
+
+static int state_set_init_data(struct state_set *set) {
+ set->with_data = 1;
+ if (set->data == NULL)
+ return ALLOC_N(set->data, set->size);
+ else
+ return 0;
+}
+
+/* Create a new STATE_SET with an initial size of SIZE. If SIZE is -1, use
+ the default size ARRAY_INITIAL_SIZE. FLAGS is a bitmask indicating
+ some options:
+ - S_SORTED: keep the states in the set sorted by their address, and use
+ binary search for lookups. If it is not set, entries are kept in the
+ order in which they are added and lookups scan linearly through the
+ set of states.
+ - S_DATA: allocate the DATA array in the set, and keep its size in sync
+ with the size of the STATES array.
+*/
+static struct state_set *state_set_init(int size, int flags) {
+ struct state_set *set = NULL;
+
+ F(ALLOC(set));
+
+ set->sorted = (flags & S_SORTED) ? 1 : 0;
+ set->with_data = (flags & S_DATA) ? 1 : 0;
+ if (size > 0) {
+ set->size = size;
+ F(ALLOC_N(set->states, set->size));
+ if (set->with_data)
+ F(state_set_init_data(set));
+ }
+ return set;
+ error:
+ state_set_free(set);
+ return NULL;
+}
+
+ATTRIBUTE_RETURN_CHECK
+static int state_set_expand(struct state_set *set) {
+ if (set->size == 0)
+ set->size = array_initial_size;
+ else if (set->size > array_max_expansion)
+ set->size += array_max_expansion;
+ else
+ set->size *= 2;
+ if (REALLOC_N(set->states, set->size) < 0)
+ goto error;
+ if (set->with_data)
+ if (REALLOC_N(set->data, set->size) < 0)
+ goto error;
+ return 0;
+ error:
+ /* We do this to provoke a SEGV as early as possible */
+ FREE(set->states);
+ FREE(set->data);
+ return -1;
+}
+
+/* Return the index where S belongs in SET->STATES to keep it sorted. S
+ may not be in SET->STATES. The returned index is in the interval [0
+ .. SET->USED], with the latter indicating that S is larger than all
+ values in SET->STATES
+*/
+static int state_set_pos(const struct state_set *set, const struct state *s) {
+ int l = 0, h = set->used;
+ while (l < h) {
+ int m = (l + h)/2;
+ if (set->states[m] > s)
+ h = m;
+ else if (set->states[m] < s)
+ l = m + 1;
+ else
+ return m;
+ }
+ return l;
+}
+
+ATTRIBUTE_RETURN_CHECK
+static int state_set_push(struct state_set *set, struct state *s) {
+ if (set->size == set->used)
+ if (state_set_expand(set) < 0)
+ return -1;
+ if (set->sorted) {
+ int p = state_set_pos(set, s);
+ if (set->size == set->used)
+ if (state_set_expand(set) < 0)
+ return -1;
+ while (p < set->used && set->states[p] <= s)
+ p += 1;
+ if (p < set->used) {
+ memmove(set->states + p + 1, set->states + p,
+ sizeof(*set->states) * (set->used - p));
+ if (set->data != NULL)
+ memmove(set->data + p + 1, set->data + p,
+ sizeof(*set->data) * (set->used - p));
+ }
+ set->states[p] = s;
+ set->used += 1;
+ return p;
+ } else {
+ set->states[set->used++] = s;
+ return set->used - 1;
+ }
+}
+
+ATTRIBUTE_RETURN_CHECK
+static int state_set_push_data(struct state_set *set, struct state *s,
+ void *d) {
+ int i = state_set_push(set, s);
+ if (i == -1)
+ return -1;
+ set->data[i] = d;
+ return i;
+}
+
+static int state_set_index(const struct state_set *set,
+ const struct state *s) {
+ if (set->sorted) {
+ int p = state_set_pos(set, s);
+ return (p < set->used && set->states[p] == s) ? p : -1;
+ } else {
+ for (int i=0; i < set->used; i++) {
+ if (set->states[i] == s)
+ return i;
+ }
+ }
+ return -1;
+}
+
+static void state_set_remove(struct state_set *set,
+ const struct state *s) {
+ if (set->sorted) {
+ int p = state_set_index(set, s);
+ if (p == -1) return;
+ memmove(set->states + p, set->states + p + 1,
+ sizeof(*set->states) * (set->used - p - 1));
+ if (set->data != NULL)
+ memmove(set->data + p, set->data + p + 1,
+ sizeof(*set->data) * (set->used - p - 1));
+ } else {
+ int p = state_set_index(set, s);
+ if (p >= 0) {
+ set->states[p] = set->states[--set->used];
+ }
+ }
+}
+
+/* Only add S if it's not in SET yet. Return 1 if S was added, 0 if it was
+ already in the set and -1 on error. */
+ATTRIBUTE_RETURN_CHECK
+static int state_set_add(struct state_set *set, struct state *s) {
+ if (set->sorted) {
+ int p = state_set_pos(set, s);
+ if (p < set->used && set->states[p] == s)
+ return 0;
+ if (set->size == set->used)
+ if (state_set_expand(set) < 0)
+ return -1;
+ while (p < set->used && set->states[p] <= s)
+ p += 1;
+ if (p < set->used) {
+ memmove(set->states + p + 1, set->states + p,
+ sizeof(*set->states) * (set->used - p));
+ if (set->data != NULL)
+ memmove(set->data + p + 1, set->data + p,
+ sizeof(*set->data) * (set->used - p));
+ }
+ set->states[p] = s;
+ set->used += 1;
+ } else {
+ if (state_set_index(set, s) >= 0)
+ return 0;
+ if (state_set_push(set, s) < 0)
+ goto error;
+ }
+ return 1;
+ error:
+ /* We do this to provoke a SEGV as early as possible */
+ FREE(set->states);
+ FREE(set->data);
+ return -1;
+}
+
+static struct state *state_set_pop(struct state_set *set) {
+ struct state *s = NULL;
+ if (set->used > 0)
+ s = set->states[--set->used];
+ return s;
+}
+
+/* Pop the last state and its associated data; SET must not be empty */
+static struct state *state_set_pop_data(struct state_set *set, void **d) {
+ struct state *s = NULL;
+ s = state_set_pop(set);
+ *d = set->data[set->used];
+ return s;
+}
+
+static void *state_set_find_data(struct state_set *set, struct state *s) {
+ int i = state_set_index(set, s);
+ if (i >= 0)
+ return set->data[i];
+ else
+ return NULL;
+}
+
+static int state_set_equal(const struct state_set *set1,
+ const struct state_set *set2) {
+ if (set1->used != set2->used)
+ return 0;
+ if (set1->sorted && set2->sorted) {
+ for (int i = 0; i < set1->used; i++)
+ if (set1->states[i] != set2->states[i])
+ return 0;
+ return 1;
+ } else {
+ for (int i=0; i < set1->used; i++)
+ if (state_set_index(set2, set1->states[i]) == -1)
+ return 0;
+ return 1;
+ }
+}
+
+#if 0
+static void state_set_compact(struct state_set *set) {
+ while (set->used > 0 && set->states[set->used] == NULL)
+ set->used -= 1;
+ for (int i=0; i < set->used; i++) {
+ if (set->states[i] == NULL) {
+ set->used -= 1;
+ set->states[i] = set->states[set->used];
+ if (set->data)
+ set->data[i] = set->data[set->used];
+ }
+ while (set->used > 0 && set->states[set->used] == NULL)
+ set->used -= 1;
+ }
+}
+#endif
+
+/* Add an entry (FST, SND) to SET. FST is stored in SET->STATES, and SND is
+ stored in SET->DATA at the same index.
+*/
+ATTRIBUTE_RETURN_CHECK
+static int state_pair_push(struct state_set **set,
+ struct state *fst, struct state *snd) {
+ if (*set == NULL)
+ *set = state_set_init(-1, S_DATA);
+ if (*set == NULL)
+ return -1;
+ int i = state_set_push(*set, fst);
+ if (i == -1)
+ return -1;
+ (*set)->data[i] = snd;
+
+ return 0;
+}
+
+/* Return the index of the pair (FST, SND) in SET, or -1 if SET contains no
+ such pair.
+ */
+static int state_pair_find(struct state_set *set, struct state *fst,
+ struct state *snd) {
+ for (int i=0; i < set->used; i++)
+ if (set->states[i] == fst && set->data[i] == snd)
+ return i;
+ return -1;
+}
+
+/* Jenkins' hash for void* */
+static hash_val_t ptr_hash(const void *p) {
+ hash_val_t hash = 0;
+ char *c = (char *) &p;
+ for (int i=0; i < sizeof(p); i++) {
+ hash += c[i];
+ hash += (hash << 10);
+ hash ^= (hash >> 6);
+ }
+ hash += (hash << 3);
+ hash ^= (hash >> 11);
+ hash += (hash << 15);
+ return hash;
+}
+
+typedef hash_t state_triple_hash;
+
+static hash_val_t pair_hash(const void *key) {
+ register struct state *const *pair = key;
+ return pair[0]->hash + pair[1]->hash;
+}
+
+static int pair_cmp(const void *key1, const void *key2) {
+ return memcmp(key1, key2, 2*sizeof(struct state *));
+}
+
+static void state_triple_node_free(hnode_t *node, ATTRIBUTE_UNUSED void *ctx) {
+ free((void *) hnode_getkey(node));
+ free(node);
+}
+
+static state_triple_hash *state_triple_init(void) {
+ state_triple_hash *hash;
+
+ hash = hash_create(HASHCOUNT_T_MAX, pair_cmp, pair_hash);
+ if (hash == NULL)
+ return NULL;
+ hash_set_allocator(hash, NULL, state_triple_node_free, NULL);
+ return hash;
+}
+
+ATTRIBUTE_RETURN_CHECK
+static int state_triple_push(state_triple_hash *hash,
+ struct state *s1,
+ struct state *s2,
+ struct state *s3) {
+ struct state **pair;
+ if (ALLOC_N(pair, 2) < 0)
+ return -1;
+ pair[0] = s1;
+ pair[1] = s2;
+ return hash_alloc_insert(hash, pair, s3);
+}
+
+static struct state * state_triple_thd(state_triple_hash *hash,
+ struct state *s1,
+ struct state *s2) {
+ struct state *pair[2];
+ hnode_t *node;
+ pair[0] = s1;
+ pair[1] = s2;
+ node = hash_lookup(hash, pair);
+ return (node == NULL) ? NULL : (struct state *) hnode_get(node);
+}
+
+static void state_triple_free(state_triple_hash *hash) {
+ if (hash != NULL) {
+ hash_free_nodes(hash);
+ hash_destroy(hash);
+ }
+}
+
+/*
+ * State operations
+ */
+ATTRIBUTE_RETURN_CHECK
+static int mark_reachable(struct fa *fa) {
+ struct state_set *worklist = state_set_init(-1, S_NONE);
+ int result = -1;
+
+ E(worklist == NULL);
+
+ list_for_each(s, fa->initial) {
+ s->reachable = 0;
+ }
+ fa->initial->reachable = 1;
+
+ for (struct state *s = fa->initial;
+ s != NULL;
+ s = state_set_pop(worklist)) {
+ for_each_trans(t, s) {
+ if (! t->to->reachable) {
+ t->to->reachable = 1;
+ F(state_set_push(worklist, t->to));
+ }
+ }
+ }
+ result = 0;
+
+ error:
+ state_set_free(worklist);
+ return result;
+}
+
+/* Return all reachable states. As a side effect, all states have their
+ REACHABLE flag set appropriately.
+ */
+static struct state_set *fa_states(struct fa *fa) {
+ struct state_set *visited = state_set_init(-1, S_NONE);
+ int r;
+
+ r = mark_reachable(fa);
+ E(visited == NULL || r < 0);
+
+ list_for_each(s, fa->initial) {
+ if (s->reachable)
+ F(state_set_push(visited, s));
+ }
+ return visited;
+ error:
+ state_set_free(visited);
+ return NULL;
+}
+
+/* Return all reachable accepting states. As a side effect, all states have
+ their REACHABLE flag set appropriately.
+ */
+static struct state_set *fa_accept_states(struct fa *fa) {
+ struct state_set *accept = state_set_init(-1, S_NONE);
+ int r;
+
+ E(accept == NULL);
+
+ r = mark_reachable(fa);
+ E(r < 0);
+
+ list_for_each(s, fa->initial) {
+ if (s->reachable && s->accept)
+ F(state_set_push(accept, s));
+ }
+ return accept;
+ error:
+ state_set_free(accept);
+ return NULL;
+}
+
+/* Mark all live states, i.e. states from which an accepting state can be
+ reached. All states have their REACHABLE and LIVE flags set
+ appropriately.
+ */
+ATTRIBUTE_RETURN_CHECK
+static int mark_live(struct fa *fa) {
+ int changed;
+
+ F(mark_reachable(fa));
+
+ list_for_each(s, fa->initial) {
+ s->live = s->reachable && s->accept;
+ }
+
+ do {
+ changed = 0;
+ list_for_each(s, fa->initial) {
+ if (! s->live && s->reachable) {
+ for_each_trans(t, s) {
+ if (t->to->live) {
+ s->live = 1;
+ changed = 1;
+ break;
+ }
+ }
+ }
+ }
+ } while (changed);
+ return 0;
+ error:
+ return -1;
+}
+
+/*
+ * Reverse an automaton in place. Change FA so that it accepts the
+ * language that is the reverse of the input automaton.
+ *
+ * Returns a list of the new initial states of the automaton. The list must
+ * be freed by the caller.
+ */
+static struct state_set *fa_reverse(struct fa *fa) {
+ struct state_set *all = NULL;
+ struct state_set *accept = NULL;
+ int r;
+
+ all = fa_states(fa);
+ E(all == NULL);
+ accept = fa_accept_states(fa);
+ E(accept == NULL);
+
+ F(state_set_init_data(all));
+
+ /* Reverse all transitions */
+ int *tused;
+ F(ALLOC_N(tused, all->used));
+ for (int i=0; i < all->used; i++) {
+ all->data[i] = all->states[i]->trans;
+ tused[i] = all->states[i]->tused;
+ all->states[i]->trans = NULL;
+ all->states[i]->tsize = 0;
+ all->states[i]->tused = 0;
+ }
+ for (int i=0; i < all->used; i++) {
+ struct state *s = all->states[i];
+ struct trans *t = all->data[i];
+ s->accept = 0;
+ for (int j=0; j < tused[i]; j++) {
+ r = add_new_trans(t[j].to, s, t[j].min, t[j].max);
+ if (r < 0)
+ goto error;
+ }
+ free(t);
+ }
+ free(tused);
+
+ /* Make new initial and final states */
+ struct state *s = add_state(fa, 0);
+ E(s == NULL);
+
+ fa->initial->accept = 1;
+ set_initial(fa, s);
+ for (int i=0; i < accept->used; i++) {
+ r = add_epsilon_trans(s, accept->states[i]);
+ if (r < 0)
+ goto error;
+ }
+
+ fa->deterministic = 0;
+ fa->minimal = 0;
+ state_set_free(all);
+ return accept;
+ error:
+ state_set_free(all);
+ state_set_free(accept);
+ return NULL;
+}
+
+/*
+ * Return a sorted array of all interval start points in FA. The returned
+ * array is null-terminated, like a string.
+ */
+static uchar* start_points(struct fa *fa, int *npoints) {
+ char pointset[UCHAR_NUM];
+ uchar *points = NULL;
+
+ F(mark_reachable(fa));
+ MEMZERO(pointset, UCHAR_NUM);
+ list_for_each(s, fa->initial) {
+ if (! s->reachable)
+ continue;
+ pointset[0] = 1;
+ for_each_trans(t, s) {
+ pointset[t->min] = 1;
+ if (t->max < UCHAR_MAX)
+ pointset[t->max+1] = 1;
+ }
+ }
+
+ *npoints = 0;
+ for(int i=0; i < UCHAR_NUM; *npoints += pointset[i], i++);
+
+ F(ALLOC_N(points, *npoints+1));
+ for (int i=0, n=0; i < UCHAR_NUM; i++) {
+ if (pointset[i])
+ points[n++] = (uchar) i;
+ }
+
+ return points;
+ error:
+ free(points);
+ return NULL;
+}
+
+/*
+ * Operations on STATE_SET_HASH
+ */
+static int state_set_hash_contains(state_set_hash *smap,
+ struct state_set *set) {
+ return hash_lookup(smap, set) != NULL;
+}
+
+/*
+ * Find the set in SMAP that has the same states as SET. If the two are
+ * different, i.e. they point to different memory locations, free SET and
+ * return the set found in SMAP.
+ */
+static struct state_set *state_set_hash_uniq(state_set_hash *smap,
+ struct state_set *set) {
+ hnode_t *node = hash_lookup(smap, set);
+ const struct state_set *orig_set = hnode_getkey(node);
+ if (orig_set != set) {
+ state_set_free(set);
+ }
+ return (struct state_set *) orig_set;
+}
+
+static struct state *state_set_hash_get_state(state_set_hash *smap,
+ struct state_set *set) {
+ hnode_t *node = hash_lookup(smap, set);
+ return (struct state *) hnode_get(node);
+}
+
+static hash_val_t set_hash(const void *key) {
+ hash_val_t hash = 0;
+ const struct state_set *set = key;
+
+ for (int i = 0; i < set->used; i++) {
+ hash += set->states[i]->hash;
+ }
+ return hash;
+}
+
+static int set_cmp(const void *key1, const void *key2) {
+ const struct state_set *set1 = key1;
+ const struct state_set *set2 = key2;
+
+ return state_set_equal(set1, set2) ? 0 : 1;
+}
+
+static void set_destroy(hnode_t *node, ATTRIBUTE_UNUSED void *ctx) {
+ struct state_set *set = (struct state_set *) hnode_getkey(node);
+ state_set_free(set);
+ free(node);
+}
+
+ATTRIBUTE_RETURN_CHECK
+static int state_set_hash_add(state_set_hash **smap,
+ struct state_set *set, struct fa *fa) {
+ if (*smap == NULL) {
+ *smap = hash_create(HASHCOUNT_T_MAX, set_cmp, set_hash);
+ E(*smap == NULL);
+ hash_set_allocator(*smap, NULL, set_destroy, NULL);
+ }
+ struct state *s = add_state(fa, 0);
+ E(s == NULL);
+ F(hash_alloc_insert(*smap, set, s));
+ return 0;
+ error:
+ return -1;
+}
+
+static void state_set_hash_free(state_set_hash *smap,
+ struct state_set *protect) {
+ if (protect != NULL) {
+ hnode_t *node = hash_lookup(smap, protect);
+ hash_delete(smap, node);
+ hnode_getkey(node) = NULL;
+ set_destroy(node, NULL);
+ }
+ hash_free_nodes(smap);
+ hash_destroy(smap);
+}
+
+static int state_set_list_add(struct state_set_list **list,
+ struct state_set *set) {
+ struct state_set_list *elt;
+ if (ALLOC(elt) < 0)
+ return -1;
+ elt->set = set;
+ list_cons(*list, elt);
+ return 0;
+}
+
+static struct state_set *state_set_list_pop(struct state_set_list **list) {
+ struct state_set_list *elt = *list;
+ struct state_set *set = elt->set;
+
+ *list = elt->next;
+ free(elt);
+ return set;
+}
+
+/* Compare transitions lexicographically by (to, min, reverse max) */
+static int trans_to_cmp(const void *v1, const void *v2) {
+ const struct trans *t1 = v1;
+ const struct trans *t2 = v2;
+
+ if (t1->to != t2->to) {
+ return (t1->to < t2->to) ? -1 : 1;
+ }
+ if (t1->min < t2->min)
+ return -1;
+ if (t1->min > t2->min)
+ return 1;
+ if (t1->max > t2->max)
+ return -1;
+ return (t1->max < t2->max) ? 1 : 0;
+}
+
+/* Compare transitions lexicographically by (min, reverse max, to) */
+static int trans_intv_cmp(const void *v1, const void *v2) {
+ const struct trans *t1 = v1;
+ const struct trans *t2 = v2;
+
+ if (t1->min < t2->min)
+ return -1;
+ if (t1->min > t2->min)
+ return 1;
+ if (t1->max > t2->max)
+ return -1;
+ if (t1->max < t2->max)
+ return 1;
+ if (t1->to != t2->to) {
+ return (t1->to < t2->to) ? -1 : 1;
+ }
+ return 0;
+}
+
+/*
+ * Reduce an automaton by combining overlapping and adjacent edge intervals
+ * with the same destination.
+ */
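+/* For example, once the transitions of a state are sorted by
+ * (to, min, reverse max), three transitions to the same state q over
+ * [a-f], [d-h] and [j-k] collapse into two over [a-h] and [j-k]:
+ * 'd' <= 'f'+1, so the first two intervals merge, while 'j' > 'h'+1
+ * starts a new interval.
+ */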
+static void reduce(struct fa *fa) {
+ list_for_each(s, fa->initial) {
+ if (s->tused == 0)
+ continue;
+
+ qsort(s->trans, s->tused, sizeof(*s->trans), trans_to_cmp);
+ int i=0, j=1;
+ struct trans *t = s->trans;
+ while (j < s->tused) {
+ if (t[i].to == t[j].to && t[j].min <= t[i].max + 1) {
+ if (t[j].max > t[i].max)
+ t[i].max = t[j].max;
+ j += 1;
+ } else {
+ i += 1;
+ if (i < j)
+ memmove(s->trans + i, s->trans + j,
+ sizeof(*s->trans) * (s->tused - j));
+ s->tused -= j - i;
+ j = i + 1;
+ }
+ }
+ s->tused = i+1;
+ /* Shrink if we use less than half the allocated size */
+ if (s->tsize > array_initial_size && 2*s->tused < s->tsize) {
+ int r;
+ r = REALLOC_N(s->trans, s->tused);
+ if (r == 0)
+ s->tsize = s->tused;
+ }
+ }
+}
+
+/*
+ * Remove dead transitions from an FA; a transition is dead if it does not
+ * lead to a live state. This also removes any states that are no longer
+ * reachable from FA->INITIAL.
+ *
+ * Returns 0 on success and -1 on failure
+ */
+
+static void collect_trans(struct fa *fa) {
+ list_for_each(s, fa->initial) {
+ if (! s->live) {
+ free_trans(s);
+ } else {
+ int i=0;
+ while (i < s->tused) {
+ if (! s->trans[i].to->live) {
+ s->tused -= 1;
+ memmove(s->trans + i, s->trans + s->tused,
+ sizeof(*s->trans));
+ } else {
+ i += 1;
+ }
+ }
+ }
+ }
+}
+
+static void collect_dead_states(struct fa *fa) {
+ /* Remove all dead states and free their storage */
+ for (struct state *s = fa->initial; s->next != NULL; ) {
+ if (! s->next->live) {
+ struct state *del = s->next;
+ s->next = del->next;
+ free_trans(del);
+ free(del);
+ } else {
+ s = s->next;
+ }
+ }
+}
+
+static int collect(struct fa *fa) {
+ F(mark_live(fa));
+
+ if (! fa->initial->live) {
+ /* This automaton accepts nothing; make it the canonical
+ * empty automaton
+ */
+ list_for_each(s, fa->initial) {
+ free_trans(s);
+ }
+ list_free(fa->initial->next);
+ fa->deterministic = 1;
+ } else {
+ collect_trans(fa);
+ collect_dead_states(fa);
+ }
+ reduce(fa);
+ return 0;
+ error:
+ return -1;
+}
+
+static void swap_initial(struct fa *fa) {
+ struct state *s = fa->initial;
+ if (s->next != NULL) {
+ fa->initial = s->next;
+ s->next = fa->initial->next;
+ fa->initial->next = s;
+ }
+}
+
+/*
+ * Make a finite automaton deterministic using the given set of initial
+ * states with the subset construction. This also eliminates dead states
+ * and transitions and reduces and orders the transitions for each state
+ */
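+/* In the subset construction below, each new state corresponds to a set
+ * of old states. For each start point POINTS[n], every character in
+ * [POINTS[n], POINTS[n+1]-1] behaves identically on all old states, so a
+ * single transition over that whole interval suffices.
+ */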
+static int determinize(struct fa *fa, struct state_set *ini) {
+ int npoints;
+ int make_ini = (ini == NULL);
+ const uchar *points = NULL;
+ state_set_hash *newstate = NULL;
+ struct state_set_list *worklist = NULL;
+ int ret = 0;
+
+ if (fa->deterministic)
+ return 0;
+
+ points = start_points(fa, &npoints);
+ E(points == NULL);
+ if (make_ini) {
+ ini = state_set_init(-1, S_NONE);
+ if (ini == NULL || state_set_push(ini, fa->initial) < 0) {
+ state_set_free(ini);
+ goto error;
+ }
+ }
+
+ F(state_set_list_add(&worklist, ini));
+ F(state_set_hash_add(&newstate, ini, fa));
+ // Make the new state the initial state
+ swap_initial(fa);
+ while (worklist != NULL) {
+ struct state_set *sset = state_set_list_pop(&worklist);
+ struct state *r = state_set_hash_get_state(newstate, sset);
+ for (int q=0; q < sset->used; q++) {
+ r->accept |= sset->states[q]->accept;
+ }
+ for (int n=0; n < npoints; n++) {
+ struct state_set *pset = state_set_init(-1, S_SORTED);
+ E(pset == NULL);
+ for(int q=0 ; q < sset->used; q++) {
+ for_each_trans(t, sset->states[q]) {
+ if (t->min <= points[n] && points[n] <= t->max) {
+ F(state_set_add(pset, t->to));
+ }
+ }
+ }
+ if (!state_set_hash_contains(newstate, pset)) {
+ F(state_set_list_add(&worklist, pset));
+ F(state_set_hash_add(&newstate, pset, fa));
+ }
+ pset = state_set_hash_uniq(newstate, pset);
+
+ struct state *q = state_set_hash_get_state(newstate, pset);
+ uchar min = points[n];
+ uchar max = UCHAR_MAX;
+ if (n+1 < npoints)
+ max = points[n+1] - 1;
+ if (add_new_trans(r, q, min, max) < 0)
+ goto error;
+ }
+ }
+ fa->deterministic = 1;
+
+ done:
+ if (newstate)
+ state_set_hash_free(newstate, make_ini ? NULL : ini);
+ free((void *) points);
+ if (collect(fa) < 0)
+ ret = -1;
+ return ret;
+ error:
+ ret = -1;
+ goto done;
+}
+
+/*
+ * Minimization. As a side effect of minimization, the transitions are
+ * reduced and ordered.
+ */
+
+static struct state *step(struct state *s, uchar c) {
+ for_each_trans(t, s) {
+ if (t->min <= c && c <= t->max)
+ return t->to;
+ }
+ return NULL;
+}
+
+struct state_list {
+ struct state_list_node *first;
+ struct state_list_node *last;
+ unsigned int size;
+};
+
+struct state_list_node {
+ struct state_list *sl;
+ struct state_list_node *next;
+ struct state_list_node *prev;
+ struct state *state;
+};
+
+static struct state_list_node *state_list_add(struct state_list *sl,
+ struct state *s) {
+ struct state_list_node *n;
+
+ if (ALLOC(n) < 0)
+ return NULL;
+
+ n->state = s;
+ n->sl = sl;
+
+ if (sl->size++ == 0) {
+ sl->first = n;
+ sl->last = n;
+ } else {
+ sl->last->next = n;
+ n->prev = sl->last;
+ sl->last = n;
+ }
+ return n;
+}
+
+static void state_list_remove(struct state_list_node *n) {
+ struct state_list *sl = n->sl;
+ sl->size -= 1;
+ if (sl->first == n)
+ sl->first = n->next;
+ else
+ n->prev->next = n->next;
+ if (sl->last == n)
+ sl->last = n->prev;
+ else
+ n->next->prev = n->prev;
+
+ free(n);
+}
+
+static void state_list_free(struct state_list *sl) {
+ if (sl)
+ list_free(sl->first);
+ free(sl);
+}
+
+/* The linear index of element (q,c) in an NSTATES * NSIGMA matrix */
+#define INDEX(q, c) ((q) * nsigma + (c))
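+/* For example, with nsigma == 4, INDEX(2, 1) == 9: state q indexes the
+ * row, character class c the column */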
+
+static int minimize_hopcroft(struct fa *fa) {
+ struct state_set *states = NULL;
+ uchar *sigma = NULL;
+ struct state_set **reverse = NULL;
+ bitset *reverse_nonempty = NULL;
+ struct state_set **partition = NULL;
+ unsigned int *block = NULL;
+ struct state_list **active = NULL;
+ struct state_list_node **active2 = NULL;
+ int *pending = NULL;
+ bitset *pending2 = NULL;
+ struct state_set *split = NULL;
+ bitset *split2 = NULL;
+ int *refine = NULL;
+ bitset *refine2 = NULL;
+ struct state_set **splitblock = NULL;
+ struct state_set *newstates = NULL;
+ int *nsnum = NULL;
+ int *nsind = NULL;
+ int result = -1;
+ unsigned int nstates = 0;
+ int nsigma = 0;
+
+ F(determinize(fa, NULL));
+
+ /* Total automaton, nothing to do */
+ if (fa->initial->tused == 1
+ && fa->initial->trans[0].to == fa->initial
+ && fa->initial->trans[0].min == UCHAR_MIN
+ && fa->initial->trans[0].max == UCHAR_MAX)
+ return 0;
+
+ F(totalize(fa));
+
+ /* make arrays for numbered states and effective alphabet */
+ states = state_set_init(-1, S_NONE);
+ E(states == NULL);
+
+ list_for_each(s, fa->initial) {
+ F(state_set_push(states, s));
+ }
+ nstates = states->used;
+
+ sigma = start_points(fa, &nsigma);
+ E(sigma == NULL);
+
+ /* initialize data structures */
+
+ /* An NSTATES x NSIGMA matrix of lists of states */
+ F(ALLOC_N(reverse, nstates * nsigma));
+ reverse_nonempty = bitset_init(nstates * nsigma);
+ E(reverse_nonempty == NULL);
+ F(ALLOC_N(partition, nstates));
+ F(ALLOC_N(block, nstates));
+
+ F(ALLOC_N(active, nstates * nsigma));
+ F(ALLOC_N(active2, nstates * nsigma));
+
+ /* PENDING is an array of pairs of ints. The i'th pair is stored in
+ * PENDING[2*i] and PENDING[2*i + 1]. There are NPENDING pairs in
+ * PENDING at any time. SPENDING is the maximum number of pairs
+ * allocated for PENDING.
+ */
+ size_t npending = 0, spending = 0;
+ pending2 = bitset_init(nstates * nsigma);
+ E(pending2 == NULL);
+
+ split = state_set_init(-1, S_NONE);
+ split2 = bitset_init(nstates);
+ E(split == NULL || split2 == NULL);
+
+ F(ALLOC_N(refine, nstates));
+ refine2 = bitset_init(nstates);
+ E(refine2 == NULL);
+
+ F(ALLOC_N(splitblock, nstates));
+
+ for (int q = 0; q < nstates; q++) {
+ splitblock[q] = state_set_init(-1, S_NONE);
+ partition[q] = state_set_init(-1, S_NONE);
+ E(splitblock[q] == NULL || partition[q] == NULL);
+ for (int x = 0; x < nsigma; x++) {
+ reverse[INDEX(q, x)] = state_set_init(-1, S_NONE);
+ E(reverse[INDEX(q, x)] == NULL);
+ F(ALLOC_N(active[INDEX(q, x)], 1));
+ }
+ }
+
+ /* find initial partition and reverse edges */
+ for (int q = 0; q < nstates; q++) {
+ struct state *qq = states->states[q];
+ int j;
+ if (qq->accept)
+ j = 0;
+ else
+ j = 1;
+ F(state_set_push(partition[j], qq));
+ block[q] = j;
+ for (int x = 0; x < nsigma; x++) {
+ uchar y = sigma[x];
+ struct state *p = step(qq, y);
+ assert(p != NULL);
+ int pn = state_set_index(states, p);
+ assert(pn >= 0);
+ F(state_set_push(reverse[INDEX(pn, x)], qq));
+ bitset_set(reverse_nonempty, INDEX(pn, x));
+ }
+ }
+
+ /* initialize active sets */
+ for (int j = 0; j <= 1; j++)
+ for (int x = 0; x < nsigma; x++)
+ for (int q = 0; q < partition[j]->used; q++) {
+ struct state *qq = partition[j]->states[q];
+ int qn = state_set_index(states, qq);
+ if (bitset_get(reverse_nonempty, INDEX(qn, x))) {
+ active2[INDEX(qn, x)] =
+ state_list_add(active[INDEX(j, x)], qq);
+ E(active2[INDEX(qn, x)] == NULL);
+ }
+ }
+
+ /* initialize pending */
+ F(ALLOC_N(pending, 2*nsigma));
+ npending = nsigma;
+ spending = nsigma;
+ for (int x = 0; x < nsigma; x++) {
+ int a0 = active[INDEX(0,x)]->size;
+ int a1 = active[INDEX(1,x)]->size;
+ int j;
+ if (a0 <= a1)
+ j = 0;
+ else
+ j = 1;
+ pending[2*x] = j;
+ pending[2*x+1] = x;
+ bitset_set(pending2, INDEX(j, x));
+ }
+
+ /* process pending until fixed point */
+ int k = 2;
+ while (npending-- > 0) {
+ int p = pending[2*npending];
+ int x = pending[2*npending+1];
+ bitset_clr(pending2, INDEX(p, x));
+ int ref = 0;
+ /* find states that need to be split off their blocks */
+ struct state_list *sh = active[INDEX(p,x)];
+ for (struct state_list_node *m = sh->first; m != NULL; m = m->next) {
+ int q = state_set_index(states, m->state);
+ struct state_set *rev = reverse[INDEX(q, x)];
+ for (int r =0; r < rev->used; r++) {
+ struct state *rs = rev->states[r];
+ int s = state_set_index(states, rs);
+ if (! bitset_get(split2, s)) {
+ bitset_set(split2, s);
+ F(state_set_push(split, rs));
+ int j = block[s];
+ F(state_set_push(splitblock[j], rs));
+ if (!bitset_get(refine2, j)) {
+ bitset_set(refine2, j);
+ refine[ref++] = j;
+ }
+ }
+ }
+ }
+ // refine blocks
+ for(int rr=0; rr < ref; rr++) {
+ int j = refine[rr];
+ struct state_set *sp = splitblock[j];
+ if (sp->used < partition[j]->used) {
+ struct state_set *b1 = partition[j];
+ struct state_set *b2 = partition[k];
+ for (int s = 0; s < sp->used; s++) {
+ state_set_remove(b1, sp->states[s]);
+ F(state_set_push(b2, sp->states[s]));
+ int snum = state_set_index(states, sp->states[s]);
+ block[snum] = k;
+ for (int c = 0; c < nsigma; c++) {
+ struct state_list_node *sn = active2[INDEX(snum, c)];
+ if (sn != NULL && sn->sl == active[INDEX(j,c)]) {
+ state_list_remove(sn);
+ active2[INDEX(snum, c)] =
+ state_list_add(active[INDEX(k, c)],
+ sp->states[s]);
+ E(active2[INDEX(snum, c)] == NULL);
+ }
+ }
+ }
+ // update pending
+ for (int c = 0; c < nsigma; c++) {
+ int aj = active[INDEX(j, c)]->size;
+ int ak = active[INDEX(k, c)]->size;
+ if (npending + 1 > spending) {
+ spending *= 2;
+ F(REALLOC_N(pending, 2 * spending));
+ }
+ pending[2*npending + 1] = c;
+ if (!bitset_get(pending2, INDEX(j, c))
+ && 0 < aj && aj <= ak) {
+ bitset_set(pending2, INDEX(j, c));
+ pending[2*npending] = j;
+ } else {
+ bitset_set(pending2, INDEX(k, c));
+ pending[2*npending] = k;
+ }
+ npending += 1;
+ }
+ k++;
+ }
+ for (int s = 0; s < sp->used; s++) {
+ int snum = state_set_index(states, sp->states[s]);
+ bitset_clr(split2, snum);
+ }
+ bitset_clr(refine2, j);
+ sp->used = 0;
+ }
+ split->used = 0;
+ }
+
+ /* make a new state for each equivalence class, set initial state */
+ newstates = state_set_init(k, S_NONE);
+ E(newstates == NULL);
+ F(ALLOC_N(nsnum, k));
+ F(ALLOC_N(nsind, nstates));
+
+ for (int n = 0; n < k; n++) {
+ struct state *s = make_state();
+ E(s == NULL);
+ newstates->states[n] = s;
+ struct state_set *partn = partition[n];
+ for (int q=0; q < partn->used; q++) {
+ struct state *qs = partn->states[q];
+ int qnum = state_set_index(states, qs);
+ if (qs == fa->initial)
+ s->live = 1; /* Abuse live to flag the new initial state */
+ nsnum[n] = qnum; /* select representative */
+ nsind[qnum] = n; /* and point from partition to new state */
+ }
+ }
+
+ /* build transitions and set acceptance */
+ for (int n = 0; n < k; n++) {
+ struct state *s = newstates->states[n];
+ s->accept = states->states[nsnum[n]]->accept;
+ for_each_trans(t, states->states[nsnum[n]]) {
+ int toind = state_set_index(states, t->to);
+ struct state *nto = newstates->states[nsind[toind]];
+ F(add_new_trans(s, nto, t->min, t->max));
+ }
+ }
+
+ /* Get rid of old states and transitions and turn NEWSTATES into
+ a linked list */
+ gut(fa);
+ for (int n=0; n < k; n++)
+ if (newstates->states[n]->live) {
+ struct state *ini = newstates->states[n];
+ newstates->states[n] = newstates->states[0];
+ newstates->states[0] = ini;
+ }
+ for (int n=0; n < k-1; n++)
+ newstates->states[n]->next = newstates->states[n+1];
+ fa->initial = newstates->states[0];
+
+ result = 0;
+
+ /* clean up */
+ done:
+ free(nsind);
+ free(nsnum);
+ state_set_free(states);
+ free(sigma);
+ bitset_free(reverse_nonempty);
+ free(block);
+ for (int i=0; i < nstates*nsigma; i++) {
+ if (reverse)
+ state_set_free(reverse[i]);
+ if (active)
+ state_list_free(active[i]);
+ }
+ free(reverse);
+ free(active);
+ free(active2);
+ free(pending);
+ bitset_free(pending2);
+ state_set_free(split);
+ bitset_free(split2);
+ free(refine);
+ bitset_free(refine2);
+ for (int q=0; q < nstates; q++) {
+ if (splitblock)
+ state_set_free(splitblock[q]);
+ if (partition)
+ state_set_free(partition[q]);
+ }
+ free(splitblock);
+ free(partition);
+ state_set_free(newstates);
+
+ if (collect(fa) < 0)
+ result = -1;
+ return result;
+ error:
+ result = -1;
+ goto done;
+}
+
+static int minimize_brzozowski(struct fa *fa) {
+ struct state_set *set;
+
+ /* Minimize using Brzozowski's algorithm */
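+ /* Determinizing the reversal of an automaton yields a minimal
+ * automaton for the reversed language, so reversing and determinizing
+ * twice minimizes FA itself. */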
+ set = fa_reverse(fa);
+ E(set == NULL);
+ F(determinize(fa, set));
+ state_set_free(set);
+
+ set = fa_reverse(fa);
+ E(set == NULL);
+ F(determinize(fa, set));
+ state_set_free(set);
+ return 0;
+ error:
+ return -1;
+}
+
+int fa_minimize(struct fa *fa) {
+ int r;
+
+ if (fa == NULL)
+ return -1;
+ if (fa->minimal)
+ return 0;
+
+ if (fa_minimization_algorithm == FA_MIN_BRZOZOWSKI) {
+ r = minimize_brzozowski(fa);
+ } else {
+ r = minimize_hopcroft(fa);
+ }
+
+ if (r == 0)
+ fa->minimal = 1;
+ return r;
+}
+
+/*
+ * Construction of fa
+ */
+
+static struct fa *fa_make_empty(void) {
+ struct fa *fa;
+
+ if (ALLOC(fa) < 0)
+ return NULL;
+ if (add_state(fa, 0) == NULL) {
+ fa_free(fa);
+ return NULL;
+ }
+ /* Even though, technically, this fa is both minimal and deterministic,
+ * this function is also used to allocate new fa's that are then modified
+ * further. Rather than risk erroneously marking such an fa as minimal
+ * and deterministic, we do not set those flags here and take the minor
+ * hit if they ever need to be recomputed for an actual empty fa.
+ */
+ return fa;
+}
+
+static struct fa *fa_make_epsilon(void) {
+ struct fa *fa = fa_make_empty();
+ if (fa) {
+ fa->initial->accept = 1;
+ fa->deterministic = 1;
+ fa->minimal = 1;
+ }
+ return fa;
+}
+
+static struct fa *fa_make_char(uchar c) {
+ struct fa *fa = fa_make_empty();
+ if (! fa)
+ return NULL;
+
+ struct state *s = fa->initial;
+ struct state *t = add_state(fa, 1);
+ int r;
+
+ if (t == NULL)
+ goto error;
+
+ r = add_new_trans(s, t, c, c);
+ if (r < 0)
+ goto error;
+ fa->deterministic = 1;
+ fa->minimal = 1;
+ return fa;
+ error:
+ fa_free(fa);
+ return NULL;
+}
+
+struct fa *fa_make_basic(unsigned int basic) {
+ int r;
+
+ if (basic == FA_EMPTY) {
+ return fa_make_empty();
+ } else if (basic == FA_EPSILON) {
+ return fa_make_epsilon();
+ } else if (basic == FA_TOTAL) {
+ struct fa *fa = fa_make_epsilon();
+ if (fa == NULL)
+ return NULL;
+ r = add_new_trans(fa->initial, fa->initial, UCHAR_MIN, UCHAR_MAX);
+ if (r < 0) {
+ fa_free(fa);
+ fa = NULL;
+ }
+ return fa;
+ }
+ return NULL;
+}
+
+int fa_is_basic(struct fa *fa, unsigned int basic) {
+ if (basic == FA_EMPTY) {
+ return ! fa->initial->accept && fa->initial->tused == 0;
+ } else if (basic == FA_EPSILON) {
+ return fa->initial->accept && fa->initial->tused == 0;
+ } else if (basic == FA_TOTAL) {
+ if (! fa->initial->accept)
+ return 0;
+ if (fa->nocase) {
+ if (fa->initial->tused != 2)
+ return 0;
+ struct trans *t1 = fa->initial->trans;
+ struct trans *t2 = fa->initial->trans + 1;
+ if (t1->to != fa->initial || t2->to != fa->initial)
+ return 0;
+ if (t2->max != UCHAR_MAX) {
+ t1 = t2;
+ t2 = fa->initial->trans;
+ }
+ return (t1->min == UCHAR_MIN && t1->max == 'A' - 1 &&
+ t2->min == 'Z' + 1 && t2->max == UCHAR_MAX);
+ } else {
+ struct trans *t = fa->initial->trans;
+ return fa->initial->tused == 1 &&
+ t->to == fa->initial &&
+ t->min == UCHAR_MIN && t->max == UCHAR_MAX;
+ }
+ }
+ return 0;
+}
+
+static struct fa *fa_clone(struct fa *fa) {
+ struct fa *result = NULL;
+ struct state_set *set = state_set_init(-1, S_DATA|S_SORTED);
+ int r;
+
+ if (fa == NULL || set == NULL || ALLOC(result) < 0)
+ goto error;
+
+ result->deterministic = fa->deterministic;
+ result->minimal = fa->minimal;
+ result->nocase = fa->nocase;
+ list_for_each(s, fa->initial) {
+ int i = state_set_push(set, s);
+ E(i < 0);
+
+ struct state *q = add_state(result, s->accept);
+ if (q == NULL)
+ goto error;
+ set->data[i] = q;
+ q->live = s->live;
+ q->reachable = s->reachable;
+ }
+ for (int i=0; i < set->used; i++) {
+ struct state *s = set->states[i];
+ struct state *sc = set->data[i];
+ for_each_trans(t, s) {
+ int to = state_set_index(set, t->to);
+ assert(to >= 0);
+ struct state *toc = set->data[to];
+ r = add_new_trans(sc, toc, t->min, t->max);
+ if (r < 0)
+ goto error;
+ }
+ }
+ state_set_free(set);
+ return result;
+ error:
+ state_set_free(set);
+ fa_free(result);
+ return NULL;
+}
+
+static int case_expand(struct fa *fa);
+
+/* Compute FA1|FA2 and set FA1 to that automaton. FA2 is freed */
+ATTRIBUTE_RETURN_CHECK
+static int union_in_place(struct fa *fa1, struct fa **fa2) {
+ struct state *s;
+ int r;
+
+ if (fa1->nocase != (*fa2)->nocase) {
+ if (case_expand(fa1) < 0)
+ return -1;
+ if (case_expand(*fa2) < 0)
+ return -1;
+ }
+
+ s = add_state(fa1, 0);
+ if (s == NULL)
+ return -1;
+ r = add_epsilon_trans(s, fa1->initial);
+ if (r < 0)
+ return -1;
+ r = add_epsilon_trans(s, (*fa2)->initial);
+ if (r < 0)
+ return -1;
+
+ fa1->deterministic = 0;
+ fa1->minimal = 0;
+ fa_merge(fa1, fa2);
+
+ set_initial(fa1, s);
+
+ return 0;
+}
+
+struct fa *fa_union(struct fa *fa1, struct fa *fa2) {
+ fa1 = fa_clone(fa1);
+ fa2 = fa_clone(fa2);
+ if (fa1 == NULL || fa2 == NULL)
+ goto error;
+
+ F(union_in_place(fa1, &fa2));
+
+ return fa1;
+ error:
+ fa_free(fa1);
+ fa_free(fa2);
+ return NULL;
+}
+
+/* Concat FA2 onto FA1; frees FA2 and changes FA1 to FA1.FA2 */
+ATTRIBUTE_RETURN_CHECK
+static int concat_in_place(struct fa *fa1, struct fa **fa2) {
+ int r;
+
+ if (fa1->nocase != (*fa2)->nocase) {
+ if (case_expand(fa1) < 0)
+ return -1;
+ if (case_expand(*fa2) < 0)
+ return -1;
+ }
+
+ list_for_each(s, fa1->initial) {
+ if (s->accept) {
+ s->accept = 0;
+ r = add_epsilon_trans(s, (*fa2)->initial);
+ if (r < 0)
+ return -1;
+ }
+ }
+
+ fa1->deterministic = 0;
+ fa1->minimal = 0;
+ fa_merge(fa1, fa2);
+
+ return 0;
+}
+
+struct fa *fa_concat(struct fa *fa1, struct fa *fa2) {
+ fa1 = fa_clone(fa1);
+ fa2 = fa_clone(fa2);
+
+ if (fa1 == NULL || fa2 == NULL)
+ goto error;
+
+ F(concat_in_place(fa1, &fa2));
+
+ F(collect(fa1));
+
+ return fa1;
+
+ error:
+ fa_free(fa1);
+ fa_free(fa2);
+ return NULL;
+}
+
+static struct fa *fa_make_char_set(bitset *cset, int negate) {
+ struct fa *fa = fa_make_empty();
+ if (!fa)
+ return NULL;
+
+ struct state *s = fa->initial;
+ struct state *t = add_state(fa, 1);
+ int from = 0;
+ int r;
+
+ if (t == NULL)
+ goto error;
+
+ while (from <= UCHAR_MAX) {
+ while (from <= UCHAR_MAX && bitset_get(cset, from) == negate)
+ from += 1;
+ if (from > UCHAR_MAX)
+ break;
+ int to = from;
+ while (to < UCHAR_MAX && (bitset_get(cset, to + 1) == !negate))
+ to += 1;
+ r = add_new_trans(s, t, from, to);
+ if (r < 0)
+ goto error;
+ from = to + 1;
+ }
+
+ fa->deterministic = 1;
+ fa->minimal = 1;
+ return fa;
+
+ error:
+ fa_free(fa);
+ return NULL;
+}
+
+static struct fa *fa_star(struct fa *fa) {
+ struct state *s;
+ int r;
+
+ fa = fa_clone(fa);
+ if (fa == NULL)
+ return NULL;
+
+ s = add_state(fa, 1);
+ if (s == NULL)
+ goto error;
+
+ r = add_epsilon_trans(s, fa->initial);
+ if (r < 0)
+ goto error;
+
+ set_initial(fa, s);
+ list_for_each(p, fa->initial->next) {
+ if (p->accept) {
+ r = add_epsilon_trans(p, s);
+ if (r < 0)
+ goto error;
+ }
+ }
+ fa->deterministic = 0;
+ fa->minimal = 0;
+
+ return fa;
+
+ error:
+ fa_free(fa);
+ return NULL;
+}
+
+/* Form the automaton (FA){N}; FA is not modified */
+static struct fa *repeat(struct fa *fa, int n) {
+ if (n == 0) {
+ return fa_make_epsilon();
+ } else if (n == 1) {
+ return fa_clone(fa);
+ } else {
+ struct fa *cfa = fa_clone(fa);
+ if (cfa == NULL)
+ return NULL;
+ while (n > 1) {
+ struct fa *tfa = fa_clone(fa);
+ if (tfa == NULL) {
+ fa_free(cfa);
+ return NULL;
+ }
+ if (concat_in_place(cfa, &tfa) < 0) {
+ fa_free(cfa);
+ fa_free(tfa);
+ return NULL;
+ }
+ n -= 1;
+ }
+ return cfa;
+ }
+}
+
+struct fa *fa_iter(struct fa *fa, int min, int max) {
+ int r;
+
+ if (min < 0)
+ min = 0;
+
+ if (min > max && max != -1) {
+ return fa_make_empty();
+ }
+ if (max == -1) {
+ struct fa *sfa = fa_star(fa);
+ if (min == 0)
+ return sfa;
+ if (! sfa)
+ return NULL;
+ struct fa *cfa = repeat(fa, min);
+ if (! cfa) {
+ fa_free(sfa);
+ return NULL;
+ }
+ if (concat_in_place(cfa, &sfa) < 0) {
+ fa_free(sfa);
+ fa_free(cfa);
+ return NULL;
+ }
+ return cfa;
+ } else {
+ struct fa *cfa = NULL;
+
+ max -= min;
+ cfa = repeat(fa, min);
+ if (cfa == NULL)
+ return NULL;
+ if (max > 0) {
+ struct fa *cfa2 = fa_clone(fa);
+ if (cfa2 == NULL) {
+ fa_free(cfa);
+ return NULL;
+ }
+ while (max > 1) {
+ struct fa *cfa3 = fa_clone(fa);
+ if (cfa3 == NULL) {
+ fa_free(cfa);
+ fa_free(cfa2);
+ return NULL;
+ }
+ list_for_each(s, cfa3->initial) {
+ if (s->accept) {
+ r = add_epsilon_trans(s, cfa2->initial);
+ if (r < 0) {
+ fa_free(cfa);
+ fa_free(cfa2);
+ fa_free(cfa3);
+ return NULL;
+ }
+ }
+ }
+ fa_merge(cfa3, &cfa2);
+ cfa2 = cfa3;
+ max -= 1;
+ }
+ list_for_each(s, cfa->initial) {
+ if (s->accept) {
+ r = add_epsilon_trans(s, cfa2->initial);
+ if (r < 0) {
+ fa_free(cfa);
+ fa_free(cfa2);
+ return NULL;
+ }
+ }
+ }
+ fa_merge(cfa, &cfa2);
+ cfa->deterministic = 0;
+ cfa->minimal = 0;
+ }
+ if (collect(cfa) < 0) {
+ fa_free(cfa);
+ cfa = NULL;
+ }
+ return cfa;
+ }
+}
+
+static void sort_transition_intervals(struct fa *fa) {
+ list_for_each(s, fa->initial) {
+ qsort(s->trans, s->tused, sizeof(*s->trans), trans_intv_cmp);
+ }
+}
+
+struct fa *fa_intersect(struct fa *fa1, struct fa *fa2) {
+ int ret;
+ struct fa *fa = NULL;
+ struct state_set *worklist = NULL;
+ state_triple_hash *newstates = NULL;
+
+ if (fa1 == fa2)
+ return fa_clone(fa1);
+
+ if (fa_is_basic(fa1, FA_EMPTY) || fa_is_basic(fa2, FA_EMPTY))
+ return fa_make_empty();
+
+ if (fa1->nocase != fa2->nocase) {
+ F(case_expand(fa1));
+ F(case_expand(fa2));
+ }
+
+ fa = fa_make_empty();
+ worklist = state_set_init(-1, S_NONE);
+ newstates = state_triple_init();
+ if (fa == NULL || worklist == NULL || newstates == NULL)
+ goto error;
+
+ sort_transition_intervals(fa1);
+ sort_transition_intervals(fa2);
+
+ F(state_set_push(worklist, fa1->initial));
+ F(state_set_push(worklist, fa2->initial));
+ F(state_set_push(worklist, fa->initial));
+ F(state_triple_push(newstates,
+ fa1->initial, fa2->initial, fa->initial));
+ while (worklist->used) {
+ struct state *s = state_set_pop(worklist);
+ struct state *p2 = state_set_pop(worklist);
+ struct state *p1 = state_set_pop(worklist);
+ s->accept = p1->accept && p2->accept;
+
+ struct trans *t1 = p1->trans;
+ struct trans *t2 = p2->trans;
+ for (int n1 = 0, b2 = 0; n1 < p1->tused; n1++) {
+ while (b2 < p2->tused && t2[b2].max < t1[n1].min)
+ b2++;
+ for (int n2 = b2;
+ n2 < p2->tused && t1[n1].max >= t2[n2].min;
+ n2++) {
+ if (t2[n2].max >= t1[n1].min) {
+ struct state *r = state_triple_thd(newstates,
+ t1[n1].to, t2[n2].to);
+ if (r == NULL) {
+ r = add_state(fa, 0);
+ E(r == NULL);
+ F(state_set_push(worklist, t1[n1].to));
+ F(state_set_push(worklist, t2[n2].to));
+ F(state_set_push(worklist, r));
+ F(state_triple_push(newstates,
+ t1[n1].to, t2[n2].to, r));
+ }
+ uchar min = t1[n1].min > t2[n2].min
+ ? t1[n1].min : t2[n2].min;
+ uchar max = t1[n1].max < t2[n2].max
+ ? t1[n1].max : t2[n2].max;
+ ret = add_new_trans(s, r, min, max);
+ if (ret < 0)
+ goto error;
+ }
+ }
+ }
+ }
+ fa->deterministic = fa1->deterministic && fa2->deterministic;
+ fa->nocase = fa1->nocase && fa2->nocase;
+ done:
+ state_set_free(worklist);
+ state_triple_free(newstates);
+ if (fa != NULL) {
+ if (collect(fa) < 0) {
+ fa_free(fa);
+ fa = NULL;
+ }
+ }
+
+ return fa;
+ error:
+ fa_free(fa);
+ fa = NULL;
+ goto done;
+}
+
+int fa_contains(struct fa *fa1, struct fa *fa2) {
+ int result = 0;
+ struct state_set *worklist = NULL; /* List of pairs of states */
+ struct state_set *visited = NULL; /* List of pairs of states */
+
+ if (fa1 == NULL || fa2 == NULL)
+ return -1;
+
+ if (fa1 == fa2)
+ return 1;
+
+ F(determinize(fa2, NULL));
+ sort_transition_intervals(fa1);
+ sort_transition_intervals(fa2);
+
+ F(state_pair_push(&worklist, fa1->initial, fa2->initial));
+ F(state_pair_push(&visited, fa1->initial, fa2->initial));
+ while (worklist->used) {
+ struct state *p1, *p2;
+ void *v2;
+ p1 = state_set_pop_data(worklist, &v2);
+ p2 = v2;
+
+ if (p1->accept && !p2->accept)
+ goto done;
+
+ struct trans *t1 = p1->trans;
+ struct trans *t2 = p2->trans;
+ for(int n1 = 0, b2 = 0; n1 < p1->tused; n1++) {
+ while (b2 < p2->tused && t2[b2].max < t1[n1].min)
+ b2++;
+ int min1 = t1[n1].min, max1 = t1[n1].max;
+ for (int n2 = b2;
+ n2 < p2->tused && t1[n1].max >= t2[n2].min;
+ n2++) {
+ if (t2[n2].min > min1)
+ goto done;
+ if (t2[n2].max < UCHAR_MAX)
+ min1 = t2[n2].max + 1;
+ else {
+ min1 = UCHAR_MAX;
+ max1 = UCHAR_MIN;
+ }
+ if (state_pair_find(visited, t1[n1].to, t2[n2].to) == -1) {
+ F(state_pair_push(&worklist, t1[n1].to, t2[n2].to));
+ F(state_pair_push(&visited, t1[n1].to, t2[n2].to));
+ }
+ }
+ if (min1 <= max1)
+ goto done;
+ }
+ }
+
+ result = 1;
+ done:
+ state_set_free(worklist);
+ state_set_free(visited);
+ return result;
+ error:
+ result = -1;
+ goto done;
+}
+
+static int add_crash_trans(struct fa *fa, struct state *s, struct state *crash,
+ int min, int max) {
+ int result;
+
+ if (fa->nocase) {
+ /* Never transition on anything in [A-Z] */
+ if (min > 'Z' || max < 'A') {
+ result = add_new_trans(s, crash, min, max);
+ } else if (min >= 'A' && max <= 'Z') {
+ result = 0;
+ } else if (max <= 'Z') {
+ /* min < 'A' */
+ result = add_new_trans(s, crash, min, 'A' - 1);
+ } else if (min >= 'A') {
+ /* max > 'Z' */
+ result = add_new_trans(s, crash, 'Z' + 1, max);
+ } else {
+ /* min < 'A' && max > 'Z' */
+ result = add_new_trans(s, crash, min, 'A' - 1);
+ if (result == 0)
+ result = add_new_trans(s, crash, 'Z' + 1, max);
+ }
+ } else {
+ result = add_new_trans(s, crash, min, max);
+ }
+ return result;
+}
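+/* For example, for a case-insensitive fa, a crash transition over
+ * ['0','z'] is split into ['0','A'-1] and ['Z'+1,'z'] so that no
+ * transition ever covers an uppercase letter.
+ */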
+
+static int totalize(struct fa *fa) {
+ int r;
+ struct state *crash = add_state(fa, 0);
+
+ E(crash == NULL);
+ F(mark_reachable(fa));
+ sort_transition_intervals(fa);
+
+ r = add_crash_trans(fa, crash, crash, UCHAR_MIN, UCHAR_MAX);
+ if (r < 0)
+ return -1;
+
+ list_for_each(s, fa->initial) {
+ int next = UCHAR_MIN;
+ int tused = s->tused;
+ for (int i=0; i < tused; i++) {
+ uchar min = s->trans[i].min, max = s->trans[i].max;
+ if (min > next) {
+ r = add_crash_trans(fa, s, crash, next, min - 1);
+ if (r < 0)
+ return -1;
+ }
+ if (max + 1 > next)
+ next = max + 1;
+ }
+ if (next <= UCHAR_MAX) {
+ r = add_crash_trans(fa, s, crash, next, UCHAR_MAX);
+ if (r < 0)
+ return -1;
+ }
+ }
+ return 0;
+ error:
+ return -1;
+}
+
+struct fa *fa_complement(struct fa *fa) {
+ fa = fa_clone(fa);
+ E(fa == NULL);
+ F(determinize(fa, NULL));
+ F(totalize(fa));
+ list_for_each(s, fa->initial)
+ s->accept = ! s->accept;
+
+ F(collect(fa));
+ return fa;
+ error:
+ fa_free(fa);
+ return NULL;
+}
+
+struct fa *fa_minus(struct fa *fa1, struct fa *fa2) {
+ if (fa1 == NULL || fa2 == NULL)
+ return NULL;
+
+ if (fa_is_basic(fa1, FA_EMPTY) || fa1 == fa2)
+ return fa_make_empty();
+ if (fa_is_basic(fa2, FA_EMPTY))
+ return fa_clone(fa1);
+
+ struct fa *cfa2 = fa_complement(fa2);
+ if (cfa2 == NULL)
+ return NULL;
+
+ struct fa *result = fa_intersect(fa1, cfa2);
+ fa_free(cfa2);
+ return result;
+}
+
+static int accept_to_accept(struct fa *fa) {
+ int r;
+ struct state *s = add_state(fa, 0);
+ if (s == NULL)
+ return -1;
+
+ F(mark_reachable(fa));
+ list_for_each(a, fa->initial) {
+ if (a->accept && a->reachable) {
+ r = add_epsilon_trans(s, a);
+ if (r < 0)
+ return -1;
+ }
+ }
+
+ set_initial(fa, s);
+ fa->deterministic = fa->minimal = 0;
+ return 0;
+ error:
+ return -1;
+}
+
+struct fa *fa_overlap(struct fa *fa1, struct fa *fa2) {
+ struct fa *fa = NULL, *eps = NULL, *result = NULL;
+ struct state_set *map = NULL;
+
+ if (fa1 == NULL || fa2 == NULL)
+ return NULL;
+
+ fa1 = fa_clone(fa1);
+ fa2 = fa_clone(fa2);
+ if (fa1 == NULL || fa2 == NULL)
+ goto error;
+
+ if (determinize(fa1, NULL) < 0)
+ goto error;
+ if (accept_to_accept(fa1) < 0)
+ goto error;
+
+ map = fa_reverse(fa2);
+ state_set_free(map);
+ if (determinize(fa2, NULL) < 0)
+ goto error;
+ if (accept_to_accept(fa2) < 0)
+ goto error;
+ map = fa_reverse(fa2);
+ state_set_free(map);
+ if (determinize(fa2, NULL) < 0)
+ goto error;
+
+ fa = fa_intersect(fa1, fa2);
+ if (fa == NULL)
+ goto error;
+
+ eps = fa_make_epsilon();
+ if (eps == NULL)
+ goto error;
+
+ result = fa_minus(fa, eps);
+ if (result == NULL)
+ goto error;
+
+ /* Fall through: ERROR doubles as the shared cleanup path on success */
+ error:
+ fa_free(fa1);
+ fa_free(fa2);
+ fa_free(fa);
+ fa_free(eps);
+ return result;
+}
+
+int fa_equals(struct fa *fa1, struct fa *fa2) {
+ if (fa1 == NULL || fa2 == NULL)
+ return -1;
+
+ /* fa_contains(fa1, fa2) && fa_contains(fa2, fa1) with error checking */
+ int c1 = fa_contains(fa1, fa2);
+ if (c1 < 0)
+ return -1;
+ if (c1 == 0)
+ return 0;
+ return fa_contains(fa2, fa1);
+}
+
+static unsigned int chr_score(char c) {
+ if (isalpha((unsigned char) c)) {
+ return 2;
+ } else if (isalnum((unsigned char) c)) {
+ return 3;
+ } else if (isprint((unsigned char) c)) {
+ return 7;
+ } else if (c == '\0') {
+ return 10000;
+ } else {
+ return 100;
+ }
+}
+
+static unsigned int str_score(const struct re_str *str) {
+ unsigned int score = 0;
+ for (int i=0; i < str->len; i++) {
+ score += chr_score(str->rx[i]);
+ }
+ return score;
+}
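The effect of this scoring is easiest to see in isolation. The sketch below reimplements the two helpers with the same thresholds (the `sketch_` names are ours, not part of libfa) so the preference for letters over digits over other printable characters can be checked directly:

```c
#include <assert.h>
#include <ctype.h>
#include <stddef.h>

/* Reimplementation of chr_score/str_score from above: letters are
 * cheapest, digits next, other printable characters cost more, and
 * NUL is effectively forbidden. Lower total score = nicer word. */
static unsigned int sketch_chr_score(unsigned char c) {
    if (isalpha(c))
        return 2;
    else if (isalnum(c))
        return 3;
    else if (isprint(c))
        return 7;
    else if (c == '\0')
        return 10000;
    else
        return 100;
}

static unsigned int sketch_str_score(const char *s, size_t len) {
    unsigned int score = 0;
    for (size_t i = 0; i < len; i++)
        score += sketch_chr_score((unsigned char) s[i]);
    return score;
}
```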
+
+/* See if we get a better string for DST by appending C to SRC. If DST is
+ * NULL or empty, always use SRC + C
+ */
+static struct re_str *string_extend(struct re_str *dst,
+ const struct re_str *src,
+ char c) {
+ if (dst == NULL
+ || dst->len == 0
+ || str_score(src) + chr_score(c) < str_score(dst)) {
+ int slen = src->len;
+ if (dst == NULL)
+ dst = make_re_str(NULL);
+ if (dst == NULL)
+ return NULL;
+ if (REALLOC_N(dst->rx, slen+2) < 0) {
+ free(dst);
+ return NULL;
+ }
+ memcpy(dst->rx, src->rx, slen);
+ dst->rx[slen] = c;
+ dst->rx[slen + 1] = '\0';
+ dst->len = slen + 1;
+ }
+ return dst;
+}
+
+static char pick_char(struct trans *t) {
+ for (int c = t->min; c <= t->max; c++)
+ if (isalpha(c)) return c;
+ for (int c = t->min; c <= t->max; c++)
+ if (isalnum(c)) return c;
+ for (int c = t->min; c <= t->max; c++)
+ if (isprint(c)) return c;
+ return t->max;
+}
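The same preference order drives pick_char. A standalone sketch (our reimplementation, assuming the default C locale) makes the fallback chain testable:

```c
#include <assert.h>
#include <ctype.h>

/* Mirror of pick_char above: from the interval [min,max], prefer a
 * letter, then a digit, then any printable character, and fall back
 * to max if nothing printable is in range. */
static int sketch_pick_char(unsigned char min, unsigned char max) {
    for (int c = min; c <= max; c++)
        if (isalpha(c)) return c;
    for (int c = min; c <= max; c++)
        if (isalnum(c)) return c;
    for (int c = min; c <= max; c++)
        if (isprint(c)) return c;
    return max;
}
```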
+
+/* Generate an example string for FA. Traverse all transitions and record
+ * at each turn the "best" word found for that state.
+ */
+int fa_example(struct fa *fa, char **example, size_t *example_len) {
+ struct re_str *word = NULL;
+ struct state_set *path = NULL, *worklist = NULL;
+ struct re_str *str = NULL;
+
+ *example = NULL;
+ *example_len = 0;
+
+ /* Sort to avoid any ambiguity because of reordering of transitions */
+ sort_transition_intervals(fa);
+
+ /* Map from state to string */
+ path = state_set_init(-1, S_DATA|S_SORTED);
+ str = make_re_str("");
+ if (path == NULL || str == NULL)
+ goto error;
+ F(state_set_push_data(path, fa->initial, str));
+ str = NULL;
+
+ /* List of states still to visit */
+ worklist = state_set_init(-1, S_NONE);
+ if (worklist == NULL)
+ goto error;
+ F(state_set_push(worklist, fa->initial));
+
+ while (worklist->used) {
+ struct state *s = state_set_pop(worklist);
+ struct re_str *ps = state_set_find_data(path, s);
+ for_each_trans(t, s) {
+ char c = pick_char(t);
+ int toind = state_set_index(path, t->to);
+ if (toind == -1) {
+ struct re_str *w = string_extend(NULL, ps, c);
+ E(w == NULL);
+ F(state_set_push(worklist, t->to));
+ F(state_set_push_data(path, t->to, w));
+ } else {
+ path->data[toind] = string_extend(path->data[toind], ps, c);
+ }
+ }
+ }
+
+ for (int i=0; i < path->used; i++) {
+ struct state *p = path->states[i];
+ struct re_str *ps = path->data[i];
+ if (p->accept &&
+ (word == NULL || word->len == 0
+ || (ps->len > 0 && str_score(word) > str_score(ps)))) {
+ free_re_str(word);
+ word = ps;
+ } else {
+ free_re_str(ps);
+ }
+ }
+ state_set_free(path);
+ state_set_free(worklist);
+ if (word != NULL) {
+ *example_len = word->len;
+ *example = word->rx;
+ free(word);
+ }
+ return 0;
+ error:
+ state_set_free(path);
+ state_set_free(worklist);
+ free_re_str(word);
+ free_re_str(str);
+ return -1;
+}
+
+struct enum_intl {
+ int limit;
+ int nwords;
+ char **words;
+ char *buf;
+ size_t bsize;
+};
+
+static int fa_enumerate_intl(struct state *s, struct enum_intl *ei, int pos) {
+ int result = -1;
+
+ if (ei->bsize <= pos + 1) {
+ ei->bsize *= 2;
+ F(REALLOC_N(ei->buf, ei->bsize));
+ }
+
+ ei->buf[pos] = '\0';
+ for_each_trans(t, s) {
+ if (t->to->visited)
+ return -2;
+ t->to->visited = 1;
+ for (int i=t->min; i <= t->max; i++) {
+ ei->buf[pos] = i;
+ if (t->to->accept) {
+ if (ei->nwords >= ei->limit)
+ return -2;
+ ei->words[ei->nwords] = strdup(ei->buf);
+ E(ei->words[ei->nwords] == NULL);
+ ei->nwords += 1;
+ }
+ result = fa_enumerate_intl(t->to, ei, pos+1);
+ E(result < 0);
+ }
+ t->to->visited = 0;
+ }
+ ei->buf[pos] = '\0';
+ result = 0;
+ error:
+ return result;
+}
+
+int fa_enumerate(struct fa *fa, int limit, char ***words) {
+ struct enum_intl ei;
+ int result = -1;
+
+ *words = NULL;
+ MEMZERO(&ei, 1);
+ ei.bsize = 8; /* Arbitrary initial size */
+ ei.limit = limit;
+ F(ALLOC_N(ei.words, limit));
+ F(ALLOC_N(ei.buf, ei.bsize));
+
+ /* We use the visited bit to track which states we already visited
+ * during the construction of a word to detect loops */
+ list_for_each(s, fa->initial)
+ s->visited = 0;
+ fa->initial->visited = 1;
+ if (fa->initial->accept) {
+ if (ei.nwords >= limit) {
+ result = -2;
+ goto error;
+ }
+ ei.words[0] = strdup("");
+ E(ei.words[0] == NULL);
+ ei.nwords = 1;
+ }
+ result = fa_enumerate_intl(fa->initial, &ei, 0);
+ E(result < 0);
+
+ result = ei.nwords;
+ *words = ei.words;
+ ei.words = NULL;
+ done:
+ free(ei.buf);
+ return result;
+
+ error:
+ for (int i=0; i < ei.nwords; i++)
+ free(ei.words[i]);
+ free(ei.words);
+ goto done;
+}
+
+/* Expand the automaton FA by replacing every transition s(c) -> p from
+ * state s to p on character c by two transitions s(X) -> r, r(c) -> p via
+ * a new state r.
+ * If ADD_MARKER is true, also add for each original state s a new a loop
+ * s(Y) -> q and q(X) -> s through a new state q.
+ *
+ * The end result is that an automaton accepting "a|ab" is turned into one
+ * accepting "Xa|XaXb" if add_marker is false and "(YX)*Xa|(YX)*Xa(YX)*Xb"
+ * when add_marker is true.
+ *
+ * The returned automaton is a copy of FA, FA is not modified.
+ */
+static struct fa *expand_alphabet(struct fa *fa, int add_marker,
+ char X, char Y) {
+ int ret;
+
+ fa = fa_clone(fa);
+ if (fa == NULL)
+ return NULL;
+
+ F(mark_reachable(fa));
+ list_for_each(p, fa->initial) {
+ if (! p->reachable)
+ continue;
+
+ struct state *r = add_state(fa, 0);
+ if (r == NULL)
+ goto error;
+ r->trans = p->trans;
+ r->tused = p->tused;
+ r->tsize = p->tsize;
+ p->trans = NULL;
+ p->tused = p->tsize = 0;
+ ret = add_new_trans(p, r, X, X);
+ if (ret < 0)
+ goto error;
+ if (add_marker) {
+ struct state *q = add_state(fa, 0);
+ if (q == NULL)
+ goto error;
+ ret = add_new_trans(p, q, Y, Y);
+ if (ret < 0)
+ goto error;
+ ret = add_new_trans(q, p, X, X);
+ if (ret < 0)
+ goto error;
+ }
+ }
+ return fa;
+ error:
+ fa_free(fa);
+ return NULL;
+}
+
+static bitset *alphabet(struct fa *fa) {
+ bitset *bs = bitset_init(UCHAR_NUM);
+
+ if (bs == NULL)
+ return NULL;
+
+ list_for_each(s, fa->initial) {
+ for (int i=0; i < s->tused; i++) {
+ for (uint c = s->trans[i].min; c <= s->trans[i].max; c++)
+ bitset_set(bs, c);
+ }
+ }
+ return bs;
+}
+
+static bitset *last_chars(struct fa *fa) {
+ bitset *bs = bitset_init(UCHAR_NUM);
+
+ if (bs == NULL)
+ return NULL;
+
+ list_for_each(s, fa->initial) {
+ for (int i=0; i < s->tused; i++) {
+ if (s->trans[i].to->accept) {
+ for (uint c = s->trans[i].min; c <= s->trans[i].max; c++)
+ bitset_set(bs, c);
+ }
+ }
+ }
+ return bs;
+}
+
+static bitset *first_chars(struct fa *fa) {
+ bitset *bs = bitset_init(UCHAR_NUM);
+ struct state *s = fa->initial;
+
+ if (bs == NULL)
+ return NULL;
+
+ for (int i=0; i < s->tused; i++) {
+ for (uint c = s->trans[i].min; c <= s->trans[i].max; c++)
+ bitset_set(bs, c);
+ }
+ return bs;
+}
+
+/* Return 1 if F1 and F2 are known to be unambiguously concatenable
+ * according to simple heuristics. Return 0 if they need to be checked
+ * further to decide ambiguity.
+ * Return -1 if an allocation fails.
+ */
+static int is_splittable(struct fa *fa1, struct fa *fa2) {
+ bitset *alpha1 = NULL;
+ bitset *alpha2 = NULL;
+ bitset *last1 = NULL;
+ bitset *first2 = NULL;
+ int result = -1;
+
+ alpha2 = alphabet(fa2);
+ last1 = last_chars(fa1);
+ if (alpha2 == NULL || last1 == NULL)
+ goto done;
+ if (bitset_disjoint(last1, alpha2, UCHAR_NUM)) {
+ result = 1;
+ goto done;
+ }
+
+ alpha1 = alphabet(fa1);
+ first2 = first_chars(fa2);
+ if (alpha1 == NULL || first2 == NULL)
+ goto done;
+ if (bitset_disjoint(first2, alpha1, UCHAR_NUM)) {
+ result = 1;
+ goto done;
+ }
+ result = 0;
+ done:
+ bitset_free(alpha1);
+ bitset_free(alpha2);
+ bitset_free(last1);
+ bitset_free(first2);
+ return result;
+}
+
+/* This algorithm is due to Anders Moeller, and can be found in class
+ * AutomatonOperations in dk.brics.grammar
+ */
+int fa_ambig_example(struct fa *fa1, struct fa *fa2,
+ char **upv, size_t *upv_len,
+ char **pv, char **v) {
+ static const char X = '\001';
+ static const char Y = '\002';
+ char *result = NULL, *s = NULL;
+ size_t result_len = 0;
+ int ret = -1, r;
+ struct fa *mp = NULL, *ms = NULL, *sp = NULL, *ss = NULL, *amb = NULL;
+ struct fa *a1f = NULL, *a1t = NULL, *a2f = NULL, *a2t = NULL;
+ struct fa *b1 = NULL, *b2 = NULL;
+
+ *upv = NULL;
+ *upv_len = 0;
+ if (pv != NULL)
+ *pv = NULL;
+ if (v != NULL)
+ *v = NULL;
+
+ r = is_splittable(fa1, fa2);
+ if (r < 0)
+ goto error;
+ if (r == 1)
+ return 0;
+
+#define Xs "\001"
+#define Ys "\002"
+#define MPs Ys Xs "(" Xs "(.|\n))+"
+#define MSs Ys Xs "(" Xs "(.|\n))*"
+#define SPs "(" Xs "(.|\n))+" Ys Xs
+#define SSs "(" Xs "(.|\n))*" Ys Xs
+ /* These could become static constants */
+ r = fa_compile(MPs, strlen(MPs), &mp);
+ if (r != REG_NOERROR)
+ goto error;
+ r = fa_compile(MSs, strlen(MSs), &ms);
+ if (r != REG_NOERROR)
+ goto error;
+ r = fa_compile(SPs, strlen(SPs), &sp);
+ if (r != REG_NOERROR)
+ goto error;
+ r = fa_compile(SSs, strlen(SSs), &ss);
+ if (r != REG_NOERROR)
+ goto error;
+#undef SSs
+#undef SPs
+#undef MSs
+#undef MPs
+#undef Xs
+#undef Ys
+
+ a1f = expand_alphabet(fa1, 0, X, Y);
+ a1t = expand_alphabet(fa1, 1, X, Y);
+ a2f = expand_alphabet(fa2, 0, X, Y);
+ a2t = expand_alphabet(fa2, 1, X, Y);
+ if (a1f == NULL || a1t == NULL || a2f == NULL || a2t == NULL)
+ goto error;
+
+ /* Compute b1 = ((a1f . mp) & a1t) . ms */
+ if (concat_in_place(a1f, &mp) < 0)
+ goto error;
+ b1 = fa_intersect(a1f, a1t);
+ if (b1 == NULL)
+ goto error;
+ if (concat_in_place(b1, &ms) < 0)
+ goto error;
+ if (fa_is_basic(b1, FA_EMPTY)) {
+ /* We are done: amb, from which we take an example below, will
+ * be empty, so there can be no ambiguity */
+ ret = 0;
+ goto done;
+ }
+
+ /* Compute b2 = ss . ((sp . a2f) & a2t) */
+ if (concat_in_place(sp, &a2f) < 0)
+ goto error;
+ b2 = fa_intersect(sp, a2t);
+ if (b2 == NULL)
+ goto error;
+ if (concat_in_place(ss, &b2) < 0)
+ goto error;
+ b2 = ss;
+ ss = NULL;
+
+ /* The automaton we are really interested in */
+ amb = fa_intersect(b1, b2);
+ if (amb == NULL)
+ goto error;
+
+ size_t s_len = 0;
+ r = fa_example(amb, &s, &s_len);
+ if (r < 0)
+ goto error;
+
+ if (s != NULL) {
+ char *t;
+ result_len = (s_len-1)/2 - 1;
+ F(ALLOC_N(result, result_len + 1));
+ t = result;
+ int i = 0;
+ for (i=0; s[2*i] == X; i++) {
+ assert((t - result) < result_len);
+ *t++ = s[2*i + 1];
+ }
+ if (pv != NULL)
+ *pv = t;
+ i += 1;
+
+ for ( ;s[2*i] == X; i++) {
+ assert((t - result) < result_len);
+ *t++ = s[2*i + 1];
+ }
+ if (v != NULL)
+ *v = t;
+ i += 1;
+
+ for (; 2*i+1 < s_len; i++) {
+ assert((t - result) < result_len);
+ *t++ = s[2*i + 1];
+ }
+ }
+ ret = 0;
+
+ done:
+ /* Clean up intermediate automata */
+ fa_free(mp);
+ fa_free(ms);
+ fa_free(ss);
+ fa_free(sp);
+ fa_free(a1f);
+ fa_free(a1t);
+ fa_free(a2f);
+ fa_free(a2t);
+ fa_free(b1);
+ fa_free(b2);
+ fa_free(amb);
+
+ FREE(s);
+ *upv = result;
+ if (result != NULL)
+ *upv_len = result_len;
+ return ret;
+ error:
+ FREE(result);
+ ret = -1;
+ goto done;
+}
+
+/*
+ * Construct an fa from a regular expression
+ */
+static struct fa *fa_from_re(struct re *re) {
+ struct fa *result = NULL;
+
+ switch(re->type) {
+ case UNION:
+ {
+ result = fa_from_re(re->exp1);
+ if (result == NULL)
+ goto error;
+ struct fa *fa2 = fa_from_re(re->exp2);
+ if (fa2 == NULL)
+ goto error;
+ if (union_in_place(result, &fa2) < 0)
+ goto error;
+ }
+ break;
+ case CONCAT:
+ {
+ result = fa_from_re(re->exp1);
+ if (result == NULL)
+ goto error;
+ struct fa *fa2 = fa_from_re(re->exp2);
+ if (fa2 == NULL)
+ goto error;
+ if (concat_in_place(result, &fa2) < 0)
+ goto error;
+ }
+ break;
+ case CSET:
+ result = fa_make_char_set(re->cset, re->negate);
+ break;
+ case ITER:
+ {
+ struct fa *fa = fa_from_re(re->exp);
+ if (fa == NULL)
+ goto error;
+ result = fa_iter(fa, re->min, re->max);
+ fa_free(fa);
+ }
+ break;
+ case EPSILON:
+ result = fa_make_epsilon();
+ break;
+ case CHAR:
+ result = fa_make_char(re->c);
+ break;
+ default:
+ assert(0);
+ break;
+ }
+ return result;
+ error:
+ fa_free(result);
+ return NULL;
+}
+
+static void free_re(struct re *re) {
+ if (re == NULL)
+ return;
+ assert(re->ref == 0);
+
+ if (re->type == UNION || re->type == CONCAT) {
+ re_unref(re->exp1);
+ re_unref(re->exp2);
+ } else if (re->type == ITER) {
+ re_unref(re->exp);
+ } else if (re->type == CSET) {
+ bitset_free(re->cset);
+ }
+ free(re);
+}
+
+int fa_compile(const char *regexp, size_t size, struct fa **fa) {
+ struct re *re = NULL;
+ struct re_parse parse;
+
+ *fa = NULL;
+
+ parse.rx = regexp;
+ parse.rend = regexp + size;
+ parse.error = REG_NOERROR;
+
+ re = parse_regexp(&parse);
+ if (re == NULL)
+ return parse.error;
+
+ *fa = fa_from_re(re);
+ re_unref(re);
+
+ if (*fa == NULL || collect(*fa) < 0)
+ parse.error = REG_ESPACE;
+ return parse.error;
+}
+
+/* We represent a case-insensitive FA by using only transitions on
+ * lower-case letters.
+ */
+int fa_nocase(struct fa *fa) {
+ if (fa == NULL || fa->nocase)
+ return 0;
+
+ fa->nocase = 1;
+ list_for_each(s, fa->initial) {
+ int tused = s->tused;
+ /* For every transition on characters in [A-Z] add a corresponding
+ * transition on [a-z]; remove any portion covering [A-Z] */
+ for (int i=0; i < tused; i++) {
+ struct trans *t = s->trans + i;
+ int lc_min = t->min < 'A' ? 'a' : tolower(t->min);
+ int lc_max = t->max > 'Z' ? 'z' : tolower(t->max);
+
+ if (t->min > 'Z' || t->max < 'A')
+ continue;
+ if (t->min >= 'A' && t->max <= 'Z') {
+ t->min = tolower(t->min);
+ t->max = tolower(t->max);
+ } else if (t->max <= 'Z') {
+ /* t->min < 'A' */
+ t->max = 'A' - 1;
+ F(add_new_trans(s, t->to, lc_min, lc_max));
+ } else if (t->min >= 'A') {
+ /* t->max > 'Z' */
+ t->min = 'Z' + 1;
+ F(add_new_trans(s, t->to, lc_min, lc_max));
+ } else {
+ /* t->min < 'A' && t->max > 'Z' */
+ F(add_new_trans(s, t->to, 'Z' + 1, t->max));
+ s->trans[i].max = 'A' - 1;
+ F(add_new_trans(s, s->trans[i].to, lc_min, lc_max));
+ }
+ }
+ }
+ F(collect(fa));
+ return 0;
+ error:
+ return -1;
+}
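The interval arithmetic in the four branches above is compact; the following standalone sketch (our naming, not the libfa API) isolates just the computation of the lower-case interval that gets added for a transition interval overlapping [A-Z]:

```c
#include <assert.h>
#include <ctype.h>

/* For a transition interval [min,max] overlapping [A-Z], compute the
 * lower-case interval to add: clamp to [A-Z] first, then map through
 * tolower. Packed as (lc_min << 8) | lc_max for easy checking. */
static int sketch_lc_interval(int min, int max) {
    int lc_min = min < 'A' ? 'a' : tolower(min);
    int lc_max = max > 'Z' ? 'z' : tolower(max);
    return (lc_min << 8) | lc_max;
}
```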
+
+int fa_is_nocase(struct fa *fa) {
+ return fa->nocase;
+}
+
+/* If FA is case-insensitive, turn it into a case-sensitive automaton by
+ * adding transitions on upper-case letters for each existing transition on
+ * lower-case letters */
+static int case_expand(struct fa *fa) {
+ if (! fa->nocase)
+ return 0;
+
+ fa->nocase = 0;
+ list_for_each(s, fa->initial) {
+ int tused = s->tused;
+ /* For every transition on characters in [a-z] add a corresponding
+ * transition on [A-Z] */
+ for (int i=0; i < tused; i++) {
+ struct trans *t = s->trans + i;
+ int uc_min = t->min < 'a' ? 'A' : toupper(t->min);
+ int uc_max = t->max > 'z' ? 'Z' : toupper(t->max);
+
+ if (t->min > 'z' || t->max < 'a')
+ continue;
+ F(add_new_trans(s, t->to, uc_min, uc_max));
+ }
+ }
+ F(collect(fa));
+ return 0;
+ error:
+ return -1;
+}
+
+/*
+ * Regular expression parser
+ */
+
+static struct re *make_re(enum re_type type) {
+ struct re *re;
+ if (make_ref(re) == 0)
+ re->type = type;
+ return re;
+}
+
+static struct re *make_re_rep(struct re *exp, int min, int max) {
+ struct re *re = make_re(ITER);
+ if (re) {
+ re->exp = exp;
+ re->min = min;
+ re->max = max;
+ } else {
+ re_unref(exp);
+ }
+ return re;
+}
+
+static struct re *make_re_binop(enum re_type type, struct re *exp1,
+ struct re *exp2) {
+ struct re *re = make_re(type);
+ if (re) {
+ re->exp1 = exp1;
+ re->exp2 = exp2;
+ } else {
+ re_unref(exp1);
+ re_unref(exp2);
+ }
+ return re;
+}
+
+static struct re *make_re_char(uchar c) {
+ struct re *re = make_re(CHAR);
+ if (re)
+ re->c = c;
+ return re;
+}
+
+static struct re *make_re_char_set(bool negate, bool no_ranges) {
+ struct re *re = make_re(CSET);
+ if (re) {
+ re->negate = negate;
+ re->no_ranges = no_ranges;
+ re->cset = bitset_init(UCHAR_NUM);
+ if (re->cset == NULL)
+ re_unref(re);
+ }
+ return re;
+}
+
+static bool more(struct re_parse *parse) {
+ return parse->rx < parse->rend;
+}
+
+static bool match(struct re_parse *parse, char m) {
+ if (!more(parse))
+ return false;
+ if (*parse->rx == m) {
+ parse->rx += 1;
+ return true;
+ }
+ return false;
+}
+
+static bool peek(struct re_parse *parse, const char *chars) {
+ return more(parse) && *parse->rx != '\0'
+ && strchr(chars, *parse->rx) != NULL;
+}
+
+static bool next(struct re_parse *parse, char *c) {
+ if (!more(parse))
+ return false;
+ *c = *parse->rx;
+ parse->rx += 1;
+ return true;
+}
+
+static bool parse_char(struct re_parse *parse, int quoted, char *c) {
+ if (!more(parse))
+ return false;
+ if (quoted && *parse->rx == '\\') {
+ parse->rx += 1;
+ return next(parse, c);
+ } else {
+ return next(parse, c);
+ }
+}
+
+static void add_re_char(struct re *re, uchar from, uchar to) {
+ assert(re->type == CSET);
+ for (unsigned int c = from; c <= to; c++)
+ bitset_set(re->cset, c);
+}
+
+static void parse_char_class(struct re_parse *parse, struct re *re) {
+ if (! more(parse)) {
+ parse->error = REG_EBRACK;
+ goto error;
+ }
+ char from, to;
+ parse_char(parse, 0, &from);
+ to = from;
+ if (match(parse, '-')) {
+ if (! more(parse)) {
+ parse->error = REG_EBRACK;
+ goto error;
+ }
+ if (peek(parse, "]")) {
+ if (from > to) {
+ parse->error = REG_ERANGE;
+ goto error;
+ }
+ add_re_char(re, from, to);
+ add_re_char(re, '-', '-');
+ return;
+ } else if (!parse_char(parse, 0, &to)) {
+ parse->error = REG_ERANGE;
+ goto error;
+ }
+ }
+ if (from > to) {
+ parse->error = REG_ERANGE;
+ goto error;
+ }
+ add_re_char(re, from, to);
+ error:
+ return;
+}
+
+static struct re *parse_simple_exp(struct re_parse *parse) {
+ struct re *re = NULL;
+
+ if (match(parse, '[')) {
+ bool negate = match(parse, '^');
+ re = make_re_char_set(negate, parse->no_ranges);
+ if (re == NULL) {
+ parse->error = REG_ESPACE;
+ goto error;
+ }
+ parse_char_class(parse, re);
+ if (parse->error != REG_NOERROR)
+ goto error;
+ while (more(parse) && ! peek(parse, "]")) {
+ parse_char_class(parse, re);
+ if (parse->error != REG_NOERROR)
+ goto error;
+ }
+ if (! match(parse, ']')) {
+ parse->error = REG_EBRACK;
+ goto error;
+ }
+ } else if (match(parse, '(')) {
+ if (match(parse, ')')) {
+ re = make_re(EPSILON);
+ if (re == NULL) {
+ parse->error = REG_ESPACE;
+ goto error;
+ }
+ } else {
+ re = parse_regexp(parse);
+ if (re == NULL)
+ goto error;
+ if (! match(parse, ')')) {
+ parse->error = REG_EPAREN;
+ goto error;
+ }
+ }
+ } else if (match(parse, '.')) {
+ re = make_re_char_set(1, parse->no_ranges);
+ if (re == NULL) {
+ parse->error = REG_ESPACE;
+ goto error;
+ }
+ add_re_char(re, '\n', '\n');
+ } else if (more(parse)) {
+ char c;
+ if (!parse_char(parse, 1, &c)) {
+ parse->error = REG_EESCAPE;
+ goto error;
+ }
+ re = make_re_char(c);
+ if (re == NULL) {
+ parse->error = REG_ESPACE;
+ goto error;
+ }
+ } else {
+ re = make_re(EPSILON);
+ if (re == NULL) {
+ parse->error = REG_ESPACE;
+ goto error;
+ }
+ }
+ return re;
+ error:
+ re_unref(re);
+ return NULL;
+}
+
+static int parse_int(struct re_parse *parse) {
+ const char *lim;
+ char *end;
+ size_t used;
+ long l;
+
+ /* We need to be careful that strtoul will never access
+ * memory beyond parse->rend
+ */
+ for (lim = parse->rx; lim < parse->rend && *lim >= '0' && *lim <= '9';
+ lim++);
+ if (lim < parse->rend) {
+ l = strtoul(parse->rx, &end, 10);
+ used = end - parse->rx;
+ } else {
+ char *s = strndup(parse->rx, parse->rend - parse->rx);
+ if (s == NULL) {
+ parse->error = REG_ESPACE;
+ return -1;
+ }
+ l = strtoul(s, &end, 10);
+ used = end - s;
+ free(s);
+ }
+
+ if (used == 0)
+ return -1;
+ parse->rx += used;
+ if ((l<0) || (l > INT_MAX)) {
+ parse->error = REG_BADBR;
+ return -1;
+ }
+ return (int) l;
+}
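The care taken above never to read past parse->rend can be illustrated with a standalone digit scanner that takes an explicit end pointer (a simplified sketch with names of our choosing; the overflow checking of the real function is omitted):

```c
#include <assert.h>
#include <stddef.h>

/* Parse a decimal int from [s, end) without ever dereferencing end
 * or beyond; returns -1 if the window starts with no digit. */
static long sketch_bounded_int(const char *s, const char *end) {
    long val = 0;
    int any = 0;
    while (s < end && *s >= '0' && *s <= '9') {
        val = val * 10 + (*s - '0');
        s++;
        any = 1;
    }
    return any ? val : -1;
}

/* Convenience wrapper: parse from the first len bytes of s */
static long sketch_parse_prefix(const char *s, size_t len) {
    return sketch_bounded_int(s, s + len);
}
```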
+
+static struct re *parse_repeated_exp(struct re_parse *parse) {
+ struct re *re = parse_simple_exp(parse);
+ if (re == NULL)
+ goto error;
+ if (match(parse, '?')) {
+ re = make_re_rep(re, 0, 1);
+ } else if (match(parse, '*')) {
+ re = make_re_rep(re, 0, -1);
+ } else if (match(parse, '+')) {
+ re = make_re_rep(re, 1, -1);
+ } else if (match(parse, '{')) {
+ int min, max;
+ min = parse_int(parse);
+ if (min == -1)
+ goto error;
+ if (match(parse, ',')) {
+ max = parse_int(parse);
+ /* parse_int returns -1 when there is no int; here that
+ * simply means 'unbounded' */
+ if (! match(parse, '}')) {
+ parse->error = REG_EBRACE;
+ goto error;
+ }
+ } else if (match(parse, '}')) {
+ max = min;
+ } else {
+ parse->error = REG_EBRACE;
+ goto error;
+ }
+ if (min > max && max != -1) {
+ parse->error = REG_BADBR;
+ goto error;
+ }
+ re = make_re_rep(re, min, max);
+ }
+ if (re == NULL)
+ parse->error = REG_ESPACE;
+ return re;
+ error:
+ re_unref(re);
+ return NULL;
+}
+
+static struct re *parse_concat_exp(struct re_parse *parse) {
+ struct re *re = parse_repeated_exp(parse);
+ if (re == NULL)
+ goto error;
+
+ if (more(parse) && ! peek(parse, ")|")) {
+ struct re *re2 = parse_concat_exp(parse);
+ if (re2 == NULL)
+ goto error;
+ re = make_re_binop(CONCAT, re, re2);
+ if (re == NULL) {
+ parse->error = REG_ESPACE;
+ goto error;
+ }
+ }
+ return re;
+
+ error:
+ re_unref(re);
+ return NULL;
+}
+
+static struct re *parse_regexp(struct re_parse *parse) {
+ struct re *re = NULL;
+
+ /* Something like (|r) */
+ if (peek(parse, "|"))
+ re = make_re(EPSILON);
+ else
+ re = parse_concat_exp(parse);
+ if (re == NULL)
+ goto error;
+
+ if (match(parse, '|')) {
+ struct re *re2 = NULL;
+ /* Something like (r|) */
+ if (peek(parse, ")"))
+ re2 = make_re(EPSILON);
+ else
+ re2 = parse_regexp(parse);
+ if (re2 == NULL)
+ goto error;
+
+ re = make_re_binop(UNION, re, re2);
+ if (re == NULL) {
+ parse->error = REG_ESPACE;
+ goto error;
+ }
+ }
+ return re;
+
+ error:
+ re_unref(re);
+ return NULL;
+}
+
+/*
+ * Convert a struct re back into a string. The code is complicated by
+ * the fact that we try to avoid unneeded parentheses and concatenation
+ * with epsilon, etc.
+ */
+static int re_as_string(const struct re *re, struct re_str *str);
+
+static int re_binop_count(enum re_type type, const struct re *re) {
+ assert(type == CONCAT || type == UNION);
+ if (re->type == type) {
+ return re_binop_count(type, re->exp1) + re_binop_count(type, re->exp2);
+ } else {
+ return 1;
+ }
+}
+
+static int re_binop_store(enum re_type type, const struct re *re,
+ const struct re **list) {
+ int pos = 0;
+ if (type == re->type) {
+ pos = re_binop_store(type, re->exp1, list);
+ pos += re_binop_store(type, re->exp2, list + pos);
+ } else {
+ list[0] = re;
+ pos = 1;
+ }
+ return pos;
+}
+
+static int re_union_as_string(const struct re *re, struct re_str *str) {
+ assert(re->type == UNION);
+
+ int result = -1;
+ const struct re **res = NULL;
+ struct re_str *strings = NULL;
+ int nre = 0, r;
+
+ nre = re_binop_count(re->type, re);
+ r = ALLOC_N(res, nre);
+ if (r < 0)
+ goto done;
+
+ re_binop_store(re->type, re, res);
+
+ r = ALLOC_N(strings, nre);
+ if (r < 0)
+ goto error;
+
+ str->len = 0;
+ for (int i=0; i < nre; i++) {
+ if (re_as_string(res[i], strings + i) < 0)
+ goto error;
+ str->len += strings[i].len;
+ }
+ str->len += nre-1;
+
+ r = re_str_alloc(str);
+ if (r < 0)
+ goto error;
+
+ char *p = str->rx;
+ for (int i=0; i < nre; i++) {
+ if (i>0)
+ *p++ = '|';
+ memcpy(p, strings[i].rx, strings[i].len);
+ p += strings[i].len;
+ }
+
+ result = 0;
+ done:
+ free(res);
+ if (strings != NULL) {
+ for (int i=0; i < nre; i++)
+ release_re_str(strings + i);
+ }
+ free(strings);
+ return result;
+ error:
+ release_re_str(str);
+ result = -1;
+ goto done;
+}
+
+ATTRIBUTE_PURE
+static int re_needs_parens_in_concat(const struct re *re) {
+ return (re->type != CHAR && re->type != CSET && re->type != ITER);
+}
+
+static int re_concat_as_string(const struct re *re, struct re_str *str) {
+ assert(re->type == CONCAT);
+
+ const struct re **res = NULL;
+ struct re_str *strings = NULL;
+ int nre = 0, r;
+ int result = -1;
+
+ nre = re_binop_count(re->type, re);
+ r = ALLOC_N(res, nre);
+ if (r < 0)
+ goto error;
+ re_binop_store(re->type, re, res);
+
+ r = ALLOC_N(strings, nre);
+ if (r < 0)
+ goto error;
+
+ str->len = 0;
+ for (int i=0; i < nre; i++) {
+ if (res[i]->type == EPSILON)
+ continue;
+ if (re_as_string(res[i], strings + i) < 0)
+ goto error;
+ str->len += strings[i].len;
+ if (re_needs_parens_in_concat(res[i]))
+ str->len += 2;
+ }
+
+ r = re_str_alloc(str);
+ if (r < 0)
+ goto error;
+
+ char *p = str->rx;
+ for (int i=0; i < nre; i++) {
+ if (res[i]->type == EPSILON)
+ continue;
+ if (re_needs_parens_in_concat(res[i]))
+ *p++ = '(';
+ p = memcpy(p, strings[i].rx, strings[i].len);
+ p += strings[i].len;
+ if (re_needs_parens_in_concat(res[i]))
+ *p++ = ')';
+ }
+
+ result = 0;
+ done:
+ free(res);
+ if (strings != NULL) {
+ for (int i=0; i < nre; i++)
+ release_re_str(strings + i);
+ }
+ free(strings);
+ return result;
+ error:
+ release_re_str(str);
+ result = -1;
+ goto done;
+}
+
+static bool cset_contains(const struct re *cset, int c) {
+ return bitset_get(cset->cset, c) != cset->negate;
+}
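The compare-with-negate trick in cset_contains can be checked in isolation; in this sketch a plain bool array stands in for the bitset:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Membership in a (possibly negated) character set: the character is
 * in the language iff its bit differs from the negate flag. */
static bool sketch_cset_contains(const bool set[256], bool negate,
                                 unsigned char c) {
    return set[c] != negate;
}

/* Demo set containing exactly 'a' and 'b' */
static bool sketch_demo(bool negate, unsigned char c) {
    bool set[256];
    memset(set, 0, sizeof set);
    set['a'] = set['b'] = true;
    return sketch_cset_contains(set, negate, c);
}
```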
+
+static int re_cset_as_string(const struct re *re, struct re_str *str) {
+ const uchar rbrack = ']';
+ const uchar dash = '-';
+ const uchar nul = '\0';
+
+ static const char *const empty_set = "[]";
+ static const char *const total_set = "(.|\n)";
+ static const char *const not_newline = ".";
+
+ char *s;
+ int from, to, negate;
+ size_t len;
+ int incl_rbrack, incl_dash;
+ int r;
+
+ str->len = strlen(empty_set);
+
+ /* We cannot include NUL explicitly in a CSET since we use ordinary
+ NUL-terminated strings to represent them. That means we need to
+ use the negated representation if NUL is to be included (and
+ vice versa)
+ */
+ negate = cset_contains(re, nul);
+ if (negate) {
+ for (from = UCHAR_MIN;
+ from <= UCHAR_MAX && cset_contains(re, from);
+ from += 1);
+ if (from > UCHAR_MAX) {
+ /* Special case: the set matches every character */
+ str->rx = strdup(total_set);
+ goto done;
+ }
+ if (from == '\n') {
+ for (from += 1;
+ from <= UCHAR_MAX && cset_contains(re, from);
+ from += 1);
+ if (from > UCHAR_MAX) {
+ /* Special case: the set matches everything but '\n' */
+ str->rx = strdup(not_newline);
+ goto done;
+ }
+ }
+ }
+
+ /* See if ']' and '-' will be explicitly included in the character
+ set (INCL_RBRACK, INCL_DASH). As we loop over the character set,
+ we reset these flags if they are covered by a range but not
+ mentioned explicitly
+ */
+ incl_rbrack = cset_contains(re, rbrack) != negate;
+ incl_dash = cset_contains(re, dash) != negate;
+
+ if (re->no_ranges) {
+ for (from = UCHAR_MIN; from <= UCHAR_MAX; from++)
+ if (cset_contains(re, from) != negate)
+ str->len += 1;
+ } else {
+ for (from = UCHAR_MIN; from <= UCHAR_MAX; from = to+1) {
+ while (from <= UCHAR_MAX && cset_contains(re, from) == negate)
+ from += 1;
+ if (from > UCHAR_MAX)
+ break;
+ for (to = from;
+ to < UCHAR_MAX && (cset_contains(re, to+1) != negate);
+ to++);
+
+ if (to == from && (from == rbrack || from == dash))
+ continue;
+ if (from == rbrack || from == dash)
+ from += 1;
+ if (to == rbrack || to == dash)
+ to -= 1;
+
+ len = (to == from) ? 1 : ((to == from + 1) ? 2 : 3);
+
+ if (from < rbrack && rbrack < to)
+ incl_rbrack = 0;
+ if (from < dash && dash < to)
+ incl_dash = 0;
+ str->len += len;
+ }
+ str->len += incl_rbrack + incl_dash;
+ }
+ if (negate)
+ str->len += 1; /* For the ^ */
+
+ r = re_str_alloc(str);
+ if (r < 0)
+ goto error;
+
+ s = str->rx;
+ *s++ = '[';
+ if (negate)
+ *s++ = '^';
+ if (incl_rbrack)
+ *s++ = rbrack;
+
+ if (re->no_ranges) {
+ for (from = UCHAR_MIN; from <= UCHAR_MAX; from++) {
+ if (from == rbrack || from == dash)
+ continue;
+ if (cset_contains(re, from) != negate)
+ *s++ = from;
+ }
+ } else {
+ for (from = UCHAR_MIN; from <= UCHAR_MAX; from = to+1) {
+ while (from <= UCHAR_MAX && cset_contains(re, from) == negate)
+ from += 1;
+ if (from > UCHAR_MAX)
+ break;
+ for (to = from;
+ to < UCHAR_MAX && (cset_contains(re, to+1) != negate);
+ to++);
+
+ if (to == from && (from == rbrack || from == dash))
+ continue;
+ if (from == rbrack || from == dash)
+ from += 1;
+ if (to == rbrack || to == dash)
+ to -= 1;
+
+ if (to == from) {
+ *s++ = from;
+ } else if (to == from + 1) {
+ *s++ = from;
+ *s++ = to;
+ } else {
+ *s++ = from;
+ *s++ = '-';
+ *s++ = to;
+ }
+ }
+ }
+ if (incl_dash)
+ *s++ = dash;
+
+ *s = ']';
+ done:
+ if (str->rx == NULL)
+ goto error;
+ str->len = strlen(str->rx);
+ return 0;
+ error:
+ release_re_str(str);
+ return -1;
+}
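The run-length emission above is the interesting part of re_cset_as_string; the rest is the special-casing of ']' and '-'. The sketch below (our names, special-casing left out) isolates the run emission so it can be tested directly:

```c
#include <assert.h>
#include <string.h>

/* Core of the range emission above, ignoring ']' and '-': walk the
 * set, find maximal runs, and print a run of length 1 as "c", length
 * 2 as "cd", length 3+ as "c-e". OUT is assumed large enough. */
static void sketch_emit_ranges(const unsigned char set[256], char *out) {
    int from, to = -1;
    for (from = 0; from <= 255; from = to + 1) {
        while (from <= 255 && !set[from])
            from++;
        if (from > 255)
            break;
        for (to = from; to < 255 && set[to + 1]; to++)
            ;
        if (to == from) {
            *out++ = (char) from;
        } else if (to == from + 1) {
            *out++ = (char) from;
            *out++ = (char) to;
        } else {
            *out++ = (char) from;
            *out++ = '-';
            *out++ = (char) to;
        }
    }
    *out = '\0';
}

/* Demo helper: build a set from the characters of members, emit it */
static const char *sketch_ranges_of(const char *members) {
    static unsigned char set[256];
    static char out[1024];
    memset(set, 0, sizeof set);
    for (; *members; members++)
        set[(unsigned char) *members] = 1;
    sketch_emit_ranges(set, out);
    return out;
}
```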
+
+static int re_iter_as_string(const struct re *re, struct re_str *str) {
+ const char *quant = NULL;
+ char *iter = NULL;
+ int r, result = -1;
+
+ if (re_as_string(re->exp, str) < 0)
+ return -1;
+
+ if (re->min == 0 && re->max == -1) {
+ quant = "*";
+ } else if (re->min == 1 && re->max == -1) {
+ quant = "+";
+ } else if (re->min == 0 && re->max == 1) {
+ quant = "?";
+ } else if (re->max == -1) {
+ r = asprintf(&iter, "{%d,}", re->min);
+ if (r < 0)
+ return -1;
+ quant = iter;
+ } else {
+ r = asprintf(&iter, "{%d,%d}", re->min, re->max);
+ if (r < 0)
+ return -1;
+ quant = iter;
+ }
+
+ if (re->exp->type == CHAR || re->exp->type == CSET) {
+ if (REALLOC_N(str->rx, str->len + strlen(quant) + 1) < 0)
+ goto error;
+ strcpy(str->rx + str->len, quant);
+ str->len += strlen(quant);
+ } else {
+ /* Format '(' + str->rx ')' + quant */
+ if (REALLOC_N(str->rx, str->len + strlen(quant) + 1 + 2) < 0)
+ goto error;
+ memmove(str->rx + 1, str->rx, str->len);
+ str->rx[0] = '(';
+ str->rx[str->len + 1] = ')';
+ str->len += 2;
+ strcpy(str->rx + str->len, quant);
+ str->len += strlen(quant);
+ }
+
+ result = 0;
+ done:
+ FREE(iter);
+ return result;
+ error:
+ release_re_str(str);
+ goto done;
+}
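The quantifier selection at the top of re_iter_as_string maps cleanly onto a pure function. A sketch (our names; a static buffer stands in for the asprintf allocation, so a result is only valid until the next call):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* The quantifier selection from above as a pure function; max == -1
 * means 'unbounded'. */
static const char *sketch_quant(int min, int max) {
    static char buf[32];
    if (min == 0 && max == -1)
        return "*";
    if (min == 1 && max == -1)
        return "+";
    if (min == 0 && max == 1)
        return "?";
    if (max == -1)
        snprintf(buf, sizeof buf, "{%d,}", min);
    else
        snprintf(buf, sizeof buf, "{%d,%d}", min, max);
    return buf;
}
```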
+
+static int re_as_string(const struct re *re, struct re_str *str) {
+ /* Characters that must be escaped */
+ static const char * const special_chars = ".()[]{}*|+?\\^$";
+ int result = 0;
+
+ switch(re->type) {
+ case UNION:
+ result = re_union_as_string(re, str);
+ break;
+ case CONCAT:
+ result = re_concat_as_string(re, str);
+ break;
+ case CSET:
+ result = re_cset_as_string(re, str);
+ break;
+ case CHAR:
+ if (re->c == '\0' || strchr(special_chars, re->c) == NULL) {
+ if (ALLOC_N(str->rx, 2) < 0)
+ goto error;
+ str->rx[0] = re->c;
+ str->len = 1;
+ } else {
+ if (ALLOC_N(str->rx, 3) < 0)
+ goto error;
+ str->rx[0] = '\\';
+ str->rx[1] = re->c;
+ str->len = strlen(str->rx);
+ }
+ break;
+ case ITER:
+ result = re_iter_as_string(re, str);
+ break;
+ case EPSILON:
+ if (ALLOC_N(str->rx, 3) < 0)
+ goto error;
+ strcpy(str->rx, "()");
+ str->len = strlen(str->rx);
+ break;
+ default:
+ assert(0);
+ abort();
+ break;
+ }
+ return result;
+ error:
+ release_re_str(str);
+ return -1;
+}
+
+static int convert_trans_to_re(struct state *s) {
+ struct re *re = NULL;
+ size_t nto = 1;
+ struct trans *trans = NULL;
+ int r;
+
+ if (s->tused == 0)
+ return 0;
+
+ qsort(s->trans, s->tused, sizeof(*s->trans), trans_to_cmp);
+ for (int i = 0; i < s->tused - 1; i++) {
+ if (s->trans[i].to != s->trans[i+1].to)
+ nto += 1;
+ }
+ r = ALLOC_N(trans, nto);
+ if (r < 0)
+ goto error;
+
+ struct state *to = s->trans[0].to;
+ int tind = 0;
+ for_each_trans(t, s) {
+ if (t->to != to) {
+ trans[tind].to = to;
+ trans[tind].re = re;
+ tind += 1;
+ re = NULL;
+ to = t->to;
+ }
+ if (re == NULL) {
+ re = make_re_char_set(0, 0);
+ if (re == NULL)
+ goto error;
+ }
+ add_re_char(re, t->min, t->max);
+ }
+ assert(nto == tind + 1);
+ trans[tind].to = to;
+ trans[tind].re = re;
+
+ /* Simplify CSETs with a single char to a CHAR */
+ for (int t=0; t < nto; t++) {
+ int cnt = 0;
+ uchar chr = UCHAR_MIN;
+ for (int c = 0; c < UCHAR_NUM; c++) {
+ if (bitset_get(trans[t].re->cset, c)) {
+ cnt += 1;
+ chr = c;
+ }
+ }
+ if (cnt == 1) {
+ re_unref(trans[t].re);
+ trans[t].re = make_re_char(chr);
+ if (trans[t].re == NULL)
+ goto error;
+ }
+ }
+ free_trans(s);
+ s->trans = trans;
+ s->tused = s->tsize = nto;
+ return 0;
+
+ error:
+ if (trans)
+ for (int i=0; i < nto; i++)
+ unref(trans[i].re, re);
+ free(trans);
+ return -1;
+}
+
+ATTRIBUTE_RETURN_CHECK
+static int add_new_re_trans(struct state *s1, struct state *s2,
+ struct re *re) {
+ int r;
+ r = add_new_trans(s1, s2, 0, 0);
+ if (r < 0)
+ return -1;
+ last_trans(s1)->re = re;
+ return 0;
+}
+
+/* Add the regular expression R1 . LOOP* . R2 to the transition
+ from S1 to S2. */
+static int re_collapse_trans(struct state *s1, struct state *s2,
+ struct re *r1, struct re *loop, struct re *r2) {
+ struct re *re = NULL;
+
+ if (loop->type != EPSILON) {
+ loop = make_re_rep(ref(loop), 0, -1);
+ if (loop == NULL)
+ goto error;
+ }
+
+ if (r1->type == EPSILON) {
+ if (loop->type == EPSILON) {
+ re = ref(r2);
+ } else {
+ re = make_re_binop(CONCAT, loop, ref(r2));
+ }
+ } else {
+ if (loop->type == EPSILON) {
+ if (r2->type == EPSILON) {
+ re = ref(r1);
+ } else {
+ re = make_re_binop(CONCAT, ref(r1), ref(r2));
+ }
+ } else {
+ re = make_re_binop(CONCAT, ref(r1), loop);
+ if (re != NULL && r2->type != EPSILON) {
+ re = make_re_binop(CONCAT, re, ref(r2));
+ }
+ }
+ }
+ if (re == NULL)
+ goto error;
+
+ struct trans *t = NULL;
+ for (t = s1->trans; t <= last_trans(s1) && t->to != s2; t += 1);
+ if (t > last_trans(s1)) {
+ if (add_new_re_trans(s1, s2, re) < 0)
+ goto error;
+ } else {
+ if (t->re == NULL) {
+ t->re = re;
+ } else {
+ t->re = make_re_binop(UNION, re, t->re);
+ if (t->re == NULL)
+ goto error;
+ }
+ }
+ return 0;
+ error:
+ // FIXME: make sure we don't leak loop
+ return -1;
+}
+
+static int convert_strings(struct fa *fa) {
+ struct state_set *worklist = state_set_init(-1, S_NONE);
+ int result = -1;
+
+ E(worklist == NULL);
+
+ /* Abuse hash to count indegree, reachable to mark visited states */
+ list_for_each(s, fa->initial) {
+ s->hash = 0;
+ s->reachable = 0;
+ }
+
+ list_for_each(s, fa->initial) {
+ for_each_trans(t, s) {
+ t->to->hash += 1;
+ }
+ }
+
+ for (struct state *s = fa->initial;
+ s != NULL;
+ s = state_set_pop(worklist)) {
+ for (int i=0; i < s->tused; i++) {
+ struct trans *t = s->trans + i;
+ struct state *to = t->to;
+ while (to->hash == 1 && to->tused == 1 && ! to->accept) {
+ if (t->re == NULL) {
+ t->re = to->trans->re;
+ to->trans->re = NULL;
+ } else {
+ t->re = make_re_binop(CONCAT, t->re, to->trans->re);
+ if (t->re == NULL)
+ goto error;
+ }
+ t->to = to->trans->to;
+ to->tused = 0;
+ to->hash -= 1;
+ to = t->to;
+ for (int j=0; j < s->tused; j++) {
+ if (j != i && s->trans[j].to == to) {
+ /* Combine transitions i and j; remove trans j */
+ t->re = make_re_binop(UNION, t->re, s->trans[j].re);
+ if (t->re == NULL)
+ goto error;
+ memmove(s->trans + j, s->trans + j + 1,
+ sizeof(s->trans[j]) * (s->tused - j - 1));
+ to->hash -= 1;
+ s->tused -= 1;
+ if (j < i) {
+ i = i - 1;
+ t = s->trans + i;
+ }
+ }
+ }
+ }
+
+ if (! to->reachable) {
+ to->reachable = 1;
+ F(state_set_push(worklist, to));
+ }
+ }
+ }
+
+ for (struct state *s = fa->initial; s->next != NULL; ) {
+ if (s->next->hash == 0 && s->next->tused == 0) {
+ struct state *del = s->next;
+ s->next = del->next;
+ free(del->trans);
+ free(del);
+ } else {
+ s = s->next;
+ }
+ }
+ result = 0;
+
+ error:
+ state_set_free(worklist);
+ return result;
+}
+
+/* Convert an FA to a regular expression.
+ * The strategy is the following:
+ * (1) For all states S1 and S2, convert the transitions between them
+ * into one transition whose regexp is a CSET
+ * (2) Add a new initial state INI and a new final state FIN to the automaton,
+ * a transition from INI to the old initial state of FA, and a transition
+ * from all accepting states of FA to FIN; the regexp on those transitions
+ * matches only the empty word
+ * (3) Eliminate states S (except for INI and FIN) one by one:
+ *     Let LOOP be the regexp for the transition S -> S if it exists, and
+ *     epsilon otherwise.
+ *     For all S1, S2 different from S with S1 -> S -> S2
+ *       let R1 be the regexp of S1 -> S
+ *           R2 the regexp of S -> S2
+ *           R3 the regexp of S1 -> S2 (or epsilon if there is no such
+ *              transition)
+ *       and set the regexp on the transition S1 -> S2 to
+ *       R1 . (LOOP)* . R2 | R3
+ * (4) The regexp for the whole FA can now be found as the regexp of
+ * the transition INI -> FIN
+ * (5) Convert that STRUCT RE to a string with RE_AS_STRING
+ */
+int fa_as_regexp(struct fa *fa, char **regexp, size_t *regexp_len) {
+ int r;
+ struct state *fin = NULL, *ini = NULL;
+ struct re *eps = NULL;
+
+ *regexp = NULL;
+ *regexp_len = 0;
+ fa = fa_clone(fa);
+ if (fa == NULL)
+ goto error;
+
+ eps = make_re(EPSILON);
+ if (eps == NULL)
+ goto error;
+
+ fin = add_state(fa,1);
+ if (fin == NULL)
+ goto error;
+
+ fa->trans_re = 1;
+
+ list_for_each(s, fa->initial) {
+ r = convert_trans_to_re(s);
+ if (r < 0)
+ goto error;
+ if (s->accept && s != fin) {
+ r = add_new_re_trans(s, fin, ref(eps));
+ if (r < 0)
+ goto error;
+ s->accept = 0;
+ }
+ }
+
+ ini = add_state(fa, 0);
+ if (ini == NULL)
+ goto error;
+
+ r = add_new_re_trans(ini, fa->initial, ref(eps));
+ if (r < 0)
+ goto error;
+ set_initial(fa, ini);
+
+ convert_strings(fa);
+
+ list_for_each(s, fa->initial->next) {
+ if (s == fin)
+ continue;
+ /* Eliminate S */
+ struct re *loop = eps;
+ for_each_trans(t, s) {
+ if (t->to == s)
+ loop = t->re;
+ }
+ list_for_each(s1, fa->initial) {
+ if (s == s1)
+ continue;
+ for (int t1 = 0; t1 < s1->tused; t1++) {
+ if (s1->trans[t1].to == s) {
+ for (int t = 0; t < s->tused; t++) {
+ if (s->trans[t].to == s)
+ continue;
+ r = re_collapse_trans(s1, s->trans[t].to,
+ s1->trans[t1].re,
+ loop,
+ s->trans[t].re);
+ if (r < 0)
+ goto error;
+ }
+ }
+ }
+ }
+ }
+
+ re_unref(eps);
+
+ for_each_trans(t, fa->initial) {
+ if (t->to == fin) {
+ struct re_str str;
+ MEMZERO(&str, 1);
+ if (re_as_string(t->re, &str) < 0)
+ goto error;
+ *regexp = str.rx;
+ *regexp_len = str.len;
+ }
+ }
+
+ list_for_each(s, fa->initial) {
+ for_each_trans(t, s) {
+ unref(t->re, re);
+ }
+ }
+ fa_free(fa);
+
+ return 0;
+ error:
+ fa_free(fa);
+ re_unref(eps);
+ return -1;
+}
+
+static int re_restrict_alphabet(struct re *re, uchar from, uchar to) {
+ int r1, r2;
+ int result = 0;
+
+ switch(re->type) {
+ case UNION:
+ case CONCAT:
+ r1 = re_restrict_alphabet(re->exp1, from, to);
+ r2 = re_restrict_alphabet(re->exp2, from, to);
+ result = (r1 != 0) ? r1 : r2;
+ break;
+ case CSET:
+ if (re->negate) {
+ re->negate = 0;
+ bitset_negate(re->cset, UCHAR_NUM);
+ }
+ for (int i=from; i <= to; i++)
+ bitset_clr(re->cset, i);
+ break;
+ case CHAR:
+ if (from <= re->c && re->c <= to)
+ result = -1;
+ break;
+ case ITER:
+ result = re_restrict_alphabet(re->exp, from, to);
+ break;
+ case EPSILON:
+ break;
+ default:
+ assert(0);
+ abort();
+ break;
+ }
+ return result;
+}
+
+int fa_restrict_alphabet(const char *regexp, size_t regexp_len,
+ char **newregexp, size_t *newregexp_len,
+ char from, char to) {
+ int result;
+ struct re *re = NULL;
+ struct re_parse parse;
+ struct re_str str;
+
+ *newregexp = NULL;
+ MEMZERO(&parse, 1);
+ parse.rx = regexp;
+ parse.rend = regexp + regexp_len;
+ parse.error = REG_NOERROR;
+ re = parse_regexp(&parse);
+ if (parse.error != REG_NOERROR)
+ return parse.error;
+
+ result = re_restrict_alphabet(re, from, to);
+ if (result != 0) {
+ result = -2;
+ goto done;
+ }
+
+ MEMZERO(&str, 1);
+ result = re_as_string(re, &str);
+ *newregexp = str.rx;
+ *newregexp_len = str.len;
+ done:
+ re_unref(re);
+ return result;
+}
+
+int fa_expand_char_ranges(const char *regexp, size_t regexp_len,
+ char **newregexp, size_t *newregexp_len) {
+ int result;
+ struct re *re = NULL;
+ struct re_parse parse;
+ struct re_str str;
+
+ *newregexp = NULL;
+ MEMZERO(&parse, 1);
+ parse.rx = regexp;
+ parse.rend = regexp + regexp_len;
+ parse.error = REG_NOERROR;
+ parse.no_ranges = 1;
+ re = parse_regexp(&parse);
+ if (parse.error != REG_NOERROR)
+ return parse.error;
+
+ MEMZERO(&str, 1);
+ result = re_as_string(re, &str);
+ *newregexp = str.rx;
+ *newregexp_len = str.len;
+ re_unref(re);
+ return result;
+}
+
+/* Expand regexp so that it is case-insensitive in a case-sensitive match.
+ *
+ * Return 1 when a change was made, -1 when an allocation failed, and 0
+ * when no change was made.
+ */
+static int re_case_expand(struct re *re) {
+ int result = 0, r1, r2;
+
+ switch(re->type) {
+ case UNION:
+ case CONCAT:
+ r1 = re_case_expand(re->exp1);
+ r2 = re_case_expand(re->exp2);
+ result = (r1 != 0) ? r1 : r2;
+ break;
+ case CSET:
+ for (int c = 'A'; c <= 'Z'; c++)
+ if (bitset_get(re->cset, c)) {
+ result = 1;
+ bitset_set(re->cset, tolower(c));
+ }
+ for (int c = 'a'; c <= 'z'; c++)
+ if (bitset_get(re->cset, c)) {
+ result = 1;
+ bitset_set(re->cset, toupper(c));
+ }
+ break;
+ case CHAR:
+ if (isalpha(re->c)) {
+ int c = re->c;
+ re->type = CSET;
+ re->negate = false;
+ re->no_ranges = 0;
+ re->cset = bitset_init(UCHAR_NUM);
+ if (re->cset == NULL)
+ return -1;
+ bitset_set(re->cset, tolower(c));
+ bitset_set(re->cset, toupper(c));
+ result = 1;
+ }
+ break;
+ case ITER:
+ result = re_case_expand(re->exp);
+ break;
+ case EPSILON:
+ break;
+ default:
+ assert(0);
+ abort();
+ break;
+ }
+ return result;
+}
+
+int fa_expand_nocase(const char *regexp, size_t regexp_len,
+ char **newregexp, size_t *newregexp_len) {
+ int result, r;
+ struct re *re = NULL;
+ struct re_parse parse;
+ struct re_str str;
+
+ *newregexp = NULL;
+ MEMZERO(&parse, 1);
+ parse.rx = regexp;
+ parse.rend = regexp + regexp_len;
+ parse.error = REG_NOERROR;
+ re = parse_regexp(&parse);
+ if (parse.error != REG_NOERROR)
+ return parse.error;
+
+ r = re_case_expand(re);
+ if (r < 0) {
+ re_unref(re);
+ return REG_ESPACE;
+ }
+
+ if (r == 1) {
+ MEMZERO(&str, 1);
+ result = re_as_string(re, &str);
+ *newregexp = str.rx;
+ *newregexp_len = str.len;
+ } else {
+ *newregexp = strndup(regexp, regexp_len);
+ *newregexp_len = regexp_len;
+ result = (*newregexp == NULL) ? REG_ESPACE : REG_NOERROR;
+ }
+ re_unref(re);
+ return result;
+}
+
+static void print_char(FILE *out, uchar c) {
+ /* We escape '/' as '\\/' since dot chokes on bare slashes in labels;
+ also, a space ' ' is shown as '\s' */
+ static const char *const escape_from = " \n\t\v\b\r\f\a/\0";
+ static const char *const escape_to = "sntvbrfa/0";
+ char *p = strchr(escape_from, c);
+ if (p != NULL) {
+ int i = p - escape_from;
+ fprintf(out, "\\\\%c", escape_to[i]);
+ } else if (! isprint(c)) {
+ fprintf(out, "\\\\0%03o", (unsigned char) c);
+ } else if (c == '"') {
+ fprintf(out, "\\\"");
+ } else {
+ fputc(c, out);
+ }
+}
+
+void fa_dot(FILE *out, struct fa *fa) {
+ fprintf(out, "digraph {\n rankdir=LR;");
+ list_for_each(s, fa->initial) {
+ if (s->accept) {
+ fprintf(out, "\"%p\" [shape=doublecircle];\n", s);
+ } else {
+ fprintf(out, "\"%p\" [shape=circle];\n", s);
+ }
+ }
+ fprintf(out, "%s -> \"%p\";\n", fa->deterministic ? "dfa" : "nfa",
+ fa->initial);
+
+ struct re_str str;
+ MEMZERO(&str, 1);
+ list_for_each(s, fa->initial) {
+ for_each_trans(t, s) {
+ fprintf(out, "\"%p\" -> \"%p\" [ label = \"", s, t->to);
+ if (fa->trans_re) {
+ re_as_string(t->re, &str);
+ for (int i=0; i < str.len; i++) {
+ print_char(out, str.rx[i]);
+ }
+ release_re_str(&str);
+ } else {
+ print_char(out, t->min);
+ if (t->min != t->max) {
+ fputc('-', out);
+ print_char(out, t->max);
+ }
+ }
+ fprintf(out, "\" ];\n");
+ }
+ }
+ fprintf(out, "}\n");
+}
+
+int fa_json(FILE *out, struct fa *fa) {
+ hash_val_t *list_hashes = NULL;
+ int list_size = 100;
+ int num_states = 0;
+ int it;
+ char first = true;
+ int result = -1;
+
+ fprintf(out,"{\n\t\"final\": [");
+
+ F(ALLOC_N(list_hashes, list_size));
+
+ list_for_each(s, fa->initial) {
+ if (num_states == list_size - 1){
+ list_size += list_size;
+ F(REALLOC_N(list_hashes, list_size));
+ }
+ // Store hash value
+ list_hashes[num_states] = s->hash;
+ // We use the hashes to map states to Z_{num_states}
+ s->hash = num_states++;
+ if (s->accept) {
+ if (first) {
+ fprintf(out,"%ld", s->hash);
+ first = false;
+ } else {
+ fprintf(out, ", %ld", s->hash);
+ }
+ }
+ }
+
+ fprintf(out, "],\n\t\"deterministic\": %d,\n\t\"transitions\": [\n",
+ fa->deterministic ? 1 : 0);
+
+ first = true;
+ list_for_each(s, fa->initial) {
+ for_each_trans(t, s) {
+ if (!first)
+ fprintf(out, ",\n");
+ first = false;
+ fprintf(out, "\t\t{ \"from\": %ld, \"to\": %ld, \"on\": \"",
+ s->hash, t->to->hash);
+ print_char(out, t->min);
+ if (t->min != t->max) {
+ fputc('-', out);
+ print_char(out, t->max);
+ }
+ fprintf(out, "\" }");
+ }
+ }
+
+ fprintf(out,"\n\t]\n}");
+ result = 0;
+
+error:
+ // Restoring hash values to leave the FA structure untouched. That is
+ // only needed if we actually copied hashes, indicated by num_states
+ // being non-zero
+ if (num_states > 0) {
+ it = 0;
+ list_for_each(s, fa->initial) {
+ s->hash = list_hashes[it++];
+ }
+ }
+ free(list_hashes);
+ return result;
+}
+
+bool fa_is_deterministic(struct fa *fa) {
+ return fa->deterministic;
+}
+
+struct state *fa_state_initial(struct fa *fa) {
+ return fa->initial;
+}
+
+bool fa_state_is_accepting(struct state *st) {
+ return st->accept;
+}
+
+struct state* fa_state_next(struct state *st) {
+ return st->next;
+}
+
+size_t fa_state_num_trans(struct state *st) {
+ return st->tused;
+}
+
+int fa_state_trans(struct state *st, size_t i,
+ struct state **to, unsigned char *min, unsigned char *max) {
+ if (st->tused <= i)
+ return -1;
+
+ (*to) = st->trans[i].to;
+ (*min) = st->trans[i].min;
+ (*max) = st->trans[i].max;
+ return 0;
+}
+
+/*
+ * Local variables:
+ * indent-tabs-mode: nil
+ * c-indent-level: 4
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
--- /dev/null
+/*
+ * fa.h: finite automata
+ *
+ * Copyright (C) 2007-2016 David Lutterkort
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: David Lutterkort <dlutter@redhat.com>
+ */
+
+#ifndef FA_H_
+#define FA_H_
+
+#include <stdio.h>
+#include <regex.h>
+#include <stdbool.h>
+#include <stdint.h>
+
+/* The type for a finite automaton. */
+struct fa;
+
+/* The type of a state of a finite automaton. The fa_state functions return
+ * pointers to this struct. Those pointers are only valid as long as the
+ * only fa_* functions that are called are fa_state_* functions. For
+ * example, the following code will almost certainly result in a crash (or
+ * worse):
+ *
+ * struct state *s = fa_state_initial(fa);
+ * fa_minimize(fa);
+ * // Crashes as S will likely have been freed
+ * s = fa_state_next(s)
+ */
+struct state;
+
+/* Denote some basic automata, used by fa_is_basic and fa_make_basic */
+enum fa_basic {
+ FA_EMPTY, /* Accepts the empty language, i.e. no strings */
+ FA_EPSILON, /* Accepts only the empty word */
+ FA_TOTAL /* Accepts all words */
+};
+
+/* Choice of minimization algorithm to use; either Hopcroft's O(n log(n))
+ * algorithm or Brzozowski's reverse-determinize-reverse-determinize
+ * algorithm. While the latter has exponential complexity in theory, it
+ * works quite well for some cases.
+ */
+enum fa_minimization_algorithms {
+ FA_MIN_HOPCROFT,
+ FA_MIN_BRZOZOWSKI
+};
+
+/* Which minimization algorithm to use in FA_MINIMIZE. The library
+ * minimizes internally at certain points, too.
+ *
+ * Defaults to FA_MIN_HOPCROFT
+ */
+extern int fa_minimization_algorithm;
+
+/* Unless otherwise mentioned, automata passed into routines are never
+ * modified. It is the responsibility of the caller to free automata
+ * returned by any of these routines when they are no longer needed.
+ */
+
+/*
+ * Compile the regular expression RE of length SIZE into an automaton. The
+ * return value is the same as the return value for the POSIX function
+ * regcomp. The syntax for regular expressions is extended POSIX syntax,
+ * with the difference that '.' does not match newlines.
+ *
+ * On success, FA points to the newly allocated automaton constructed for
+ * RE, and the function returns REG_NOERROR. Otherwise, FA is NULL, and the
+ * return value indicates the error.
+ *
+ * The FA is case sensitive. Call FA_NOCASE to switch it to
+ * case-insensitive.
+ */
+int fa_compile(const char *re, size_t size, struct fa **fa);
+
+/* Make a new automaton that accepts one of the basic languages defined in
+ * the enum FA_BASIC.
+ */
+struct fa *fa_make_basic(unsigned int basic);
+
+/* Return 1 if FA accepts the basic language BASIC, which must be one of
+ * the constants from enum FA_BASIC.
+ */
+int fa_is_basic(struct fa *fa, unsigned int basic);
+
+/* Minimize FA using the currently-set fa_minimization_algorithm.
+ * As a side effect, the automaton will also be deterministic after being
+ * minimized. Modifies the automaton in place.
+ */
+int fa_minimize(struct fa *fa);
+
+/* Return a finite automaton that accepts the concatenation of the
+ * languages for FA1 and FA2, i.e. L(FA1).L(FA2)
+ */
+struct fa *fa_concat(struct fa *fa1, struct fa *fa2);
+
+/* Return a finite automaton that accepts the union of the languages that
+ * FA1 and FA2 accept (the '|' operator in regular expressions).
+ */
+struct fa *fa_union(struct fa *fa1, struct fa *fa2);
+
+/* Return a finite automaton that accepts the intersection of the languages
+ * of FA1 and FA2.
+ */
+struct fa *fa_intersect(struct fa *fa1, struct fa *fa2);
+
+/* Return a finite automaton that accepts the complement of the language of
+ * FA, i.e. the set of all words not accepted by FA
+ */
+struct fa *fa_complement(struct fa *fa);
+
+/* Return a finite automaton that accepts the set difference of the
+ * languages of FA1 and FA2, i.e. L(FA1)\L(FA2)
+ */
+struct fa *fa_minus(struct fa *fa1, struct fa *fa2);
+
+/* Return a finite automaton that accepts a repetition of the language that
+ * FA accepts. If MAX == -1, the returned automaton accepts arbitrarily
+ * long repetitions. MIN must be 0 or bigger, and unless MAX == -1, MIN
+ * must be less or equal to MAX. If MIN is greater than 0, the returned
+ * automaton accepts only words that have at least MIN repetitions of words
+ * from L(FA).
+ *
+ * The following common regexp repetitions are achieved by these calls
+ * (using a loose notation equating automata and their languages):
+ *
+ * - FA* = FA_ITER(FA, 0, -1)
+ * - FA+ = FA_ITER(FA, 1, -1)
+ * - FA? = FA_ITER(FA, 0, 1)
+ * - FA{n,m} = FA_ITER(FA, n, m) with 0 <= n and m = -1 or n <= m
+ */
+struct fa *fa_iter(struct fa *fa, int min, int max);
+
+/* If successful, returns 1 if the language of FA1 is contained in the language
+ * of FA2, 0 otherwise. Returns a negative number if an error occurred.
+ */
+int fa_contains(struct fa *fa1, struct fa *fa2);
+
+/* If successful, returns 1 if the language of FA1 equals the language of FA2,
+ * 0 otherwise. Returns a negative number if an error occurred.
+ */
+int fa_equals(struct fa *fa1, struct fa *fa2);
+
+/* Free all memory used by FA */
+void fa_free(struct fa *fa);
+
+/* Print FA to OUT as a graphviz dot file */
+void fa_dot(FILE *out, struct fa *fa);
+
+/* Return a finite automaton that accepts the overlap of the languages of
+ * FA1 and FA2. The overlap of two languages is the set of strings that can
+ * be split in more than one way into a left part accepted by FA1 and a
+ * right part accepted by FA2.
+ */
+struct fa *fa_overlap(struct fa *fa1, struct fa *fa2);
+
+/* Produce an example for the language of FA. The example is not
+ * necessarily the shortest possible. The implementation works very hard to
+ * have printable characters (preferably alphanumeric) in the example, and
+ * to avoid just an empty word.
+ *
+ * *EXAMPLE will be the example, which may be NULL. If it is non-NULL,
+ * EXAMPLE_LEN will hold the length of the example.
+ *
+ * Return 0 on success, and a negative number on error. On error, *EXAMPLE
+ * will be NULL.
+ *
+ * If *EXAMPLE is set, it is the caller's responsibility to free the string
+ * by calling free().
+ */
+int fa_example(struct fa *fa, char **example, size_t *example_len);
+
+/* Produce an example of an ambiguous word for the concatenation of the
+ * languages of FA1 and FA2. The return value is such a word (which must be
+ * freed by the caller) if it exists. If none exists, NULL is returned.
+ *
+ * The returned word is of the form UPV; PV and V are set to the first
+ * character of P and of V in the returned word. The word UPV has the property
+ * that U and UP are accepted by FA1 and that PV and V are accepted by FA2.
+ *
+ * Neither the language of FA1 nor that of FA2 may contain words with the
+ * characters '\001' and '\002', as they are used during construction of
+ * the ambiguous word.
+ *
+ * UPV_LEN will be set to the length of the entire string UPV
+ *
+ * Returns 0 on success, and a negative number on failure. On failure, UPV,
+ * PV, and V will be NULL
+ */
+int fa_ambig_example(struct fa *fa1, struct fa *fa2,
+ char **upv, size_t *upv_len,
+ char **pv, char **v);
+
+/* Convert the finite automaton FA into a regular expression and set REGEXP
+ * to point to that. When REGEXP is compiled into another automaton, it is
+ * guaranteed that that automaton and FA accept the same language.
+ *
+ * The code tries to be semi-clever about keeping the generated regular
+ * expression short; to guarantee reasonably short regexps, the automaton
+ * should be minimized before passing it to this routine.
+ *
+ * On success, REGEXP_LEN is set to the length of REGEXP
+ *
+ * Return 0 on success, and a negative number on failure. The only reason
+ * for FA_AS_REGEXP to fail is running out of memory.
+ */
+int fa_as_regexp(struct fa *fa, char **regexp, size_t *regexp_len);
+
+/* Given the regular expression REGEXP construct a new regular expression
+ * NEWREGEXP that does not match strings containing any of the characters
+ * in the range FROM to TO, with the endpoints included.
+ *
+ * The new regular expression is constructed by removing the range FROM to
+ * TO from all character sets in REGEXP; if any of the characters [FROM,
+ * TO] appear outside a character set in REGEXP, return -2.
+ *
+ * Return 0 if NEWREGEXP was constructed successfully, -1 if an internal
+ * error happened (e.g., an allocation failed) and -2 if NEWREGEXP can not
+ * be constructed because any character in the range [FROM, TO] appears
+ * outside of a character set.
+ *
+ * Return a positive value if REGEXP is not syntactically valid; the value
+ * returned is one of the REG_ERRCODE_T POSIX error codes.
+ */
+int fa_restrict_alphabet(const char *regexp, size_t regexp_len,
+ char **newregexp, size_t *newregexp_len,
+ char from, char to);
+
+/* Convert REGEXP into one that does not use ranges inside character
+ * classes.
+ *
+ * Return a positive value if REGEXP is not syntactically valid; the value
+ * returned is one of the REG_ERRCODE_T POSIX error codes. Return 0 on
+ * success and -1 if an allocation fails.
+ */
+int fa_expand_char_ranges(const char *regexp, size_t regexp_len,
+ char **newregexp, size_t *newregexp_len);
+
+/* Modify FA so that it matches ignoring case.
+ *
+ * Returns 0 on success, and -1 if an allocation fails. On failure, the
+ * automaton is not guaranteed to represent anything sensible.
+ */
+int fa_nocase(struct fa *fa);
+
+/* Return 1 if FA matches ignoring case, 0 if matches are case sensitive */
+int fa_is_nocase(struct fa *fa);
+
+/* Assume REGEXP is a case-insensitive regular expression, and convert it
+ * to one that matches the same strings when used case sensitively. All
+ * occurrences of individual letters c in the regular expression will be
+ * replaced by character sets [cC], and lower/upper case characters are
+ * added to character sets as needed.
+ *
+ * Return a positive value if REGEXP is not syntactically valid; the value
+ * returned is one of the REG_ERRCODE_T POSIX error codes. Return 0 on
+ * success and -1 if an allocation fails.
+ */
+int fa_expand_nocase(const char *regexp, size_t regexp_len,
+ char **newregexp, size_t *newregexp_len);
+
+/* Generate up to LIMIT words from the language of FA, which is assumed to
+ * be finite. The words are returned in WORDS, which is allocated by this
+ * function and must be freed by the caller.
+ *
+ * If FA accepts the empty word, the empty string will be included in
+ * WORDS.
+ *
+ * Return the number of generated words on success, -1 if we run out of
+ * memory, and -2 if FA has more than LIMIT words.
+ */
+int fa_enumerate(struct fa *fa, int limit, char ***words);
+
+/* Print FA to OUT as a JSON file. State 0 is always the initial one.
+ * Returns 0 on success, and -1 on failure.
+ */
+int fa_json(FILE *out, struct fa *fa);
+
+/* Return true if FA is deterministic, false otherwise */
+bool fa_is_deterministic(struct fa *fa);
+
+/* Return the initial state */
+struct state *fa_state_initial(struct fa *fa);
+
+/* Return true if this is an accepting state */
+bool fa_state_is_accepting(struct state *st);
+
+/* Return the next state; return NULL if there are no more states */
+struct state* fa_state_next(struct state *st);
+
+/* Return the number of transitions for a state */
+size_t fa_state_num_trans(struct state *st);
+
+/* Produce details about the i-th transition.
+ *
+ * On success, *to points to the destination state of the transition and
+ * the interval [min-max] is the label of the transition.
+ *
+ * On failure, *to, min and max are not modified.
+ *
+ * Return 0 on success and -1 when I is larger than the number of
+ * transitions of ST.
+ */
+int fa_state_trans(struct state *st, size_t i,
+ struct state **to, unsigned char *min, unsigned char *max);
+
+#endif
+
+
+/*
+ * Local variables:
+ * indent-tabs-mode: nil
+ * c-indent-level: 4
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
--- /dev/null
+FA_1.0.0 {
+ global:
+ fa_minimization_algorithm;
+ fa_compile;
+ fa_make_basic;
+ fa_is_basic;
+ fa_minimize;
+ fa_concat;
+ fa_union;
+ fa_intersect;
+ fa_complement;
+ fa_minus;
+ fa_iter;
+ fa_contains;
+ fa_equals;
+ fa_free;
+ fa_dot;
+ fa_overlap;
+ fa_example;
+ fa_ambig_example;
+ fa_as_regexp;
+ fa_restrict_alphabet;
+ fa_expand_char_ranges;
+ local: *;
+};
+
+FA_1.2.0 {
+ fa_nocase;
+ fa_is_nocase;
+ fa_expand_nocase;
+} FA_1.0.0;
+
+FA_1.4.0 {
+ fa_enumerate;
+} FA_1.2.0;
+
+FA_1.5.0 {
+ fa_json;
+ fa_state_initial;
+ fa_state_is_accepting;
+ fa_state_next;
+ fa_state_num_trans;
+ fa_state_trans;
+ fa_is_deterministic;
+} FA_1.4.0;
--- /dev/null
+/*
+ * parser.c: parse a configuration file according to a grammar
+ *
+ * Copyright (C) 2007-2016 David Lutterkort
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: David Lutterkort <dlutter@redhat.com>
+ */
+
+#include <config.h>
+
+#include <regex.h>
+#include <stdarg.h>
+
+#include "regexp.h"
+#include "list.h"
+#include "internal.h"
+#include "memory.h"
+#include "info.h"
+#include "lens.h"
+#include "errcode.h"
+
+/* Our favorite error message */
+static const char *const short_iteration =
+ "Iterated lens matched less than it should";
+
+struct seq {
+ struct seq *next;
+ const char *name;
+ int value;
+};
+
+struct state {
+ struct info *info;
+ struct span *span;
+ const char *text;
+ struct seq *seqs;
+ char *key;
+ char *value; /* GET_STORE leaves a value here */
+ struct lns_error *error;
+ int enable_span;
+ /* We use the registers from a regular expression match to keep track
+ * of the substring we are currently looking at. REGS are the registers
+ * from the last regexp match; NREG is the number of the register
+ * in REGS that describes the substring we are currently looking at.
+ *
+ * We adjust NREG as we traverse lenses either because we know the
+ * grouping structure of lenses (for L_STAR, the child lens is always
+ * in NREG + 1) or by looking at the number of groups in a sublens (as
+ * we move from one child of L_CONCAT to the next, we need to add 1 +
+ * number of groups of that child to NREG) How NREG is adjusted is
+ * closely related to how the REGEXP_* routines build up bigger regexps
+ * from smaller ones.
+ */
+ struct re_registers *regs;
+ uint nreg;
+};
+
+/* Used by recursive lenses to stack intermediate results
+ Frames are used for a few different purposes:
+
+ (1) to save results from individual lenses that are later gathered and
+ combined by combinators like L_STAR and L_CONCAT
+ (2) to preserve parse state when descending into a L_SUBTREE
+ (3) as a marker to determine if a L_MAYBE had a match or not
+ */
+struct frame {
+ struct lens *lens;
+ char *key;
+ struct span *span;
+ union {
+ struct { /* M_GET */
+ char *value;
+ struct tree *tree;
+ };
+ struct { /* M_PARSE */
+ struct skel *skel;
+ struct dict *dict;
+ };
+ };
+};
+
+/* Used by recursive lenses in get_rec and parse_rec */
+enum mode_t { M_GET, M_PARSE };
+
+/* Abstract Syntax Tree for recursive parse */
+struct ast {
+ struct ast *parent;
+ struct ast **children;
+ uint nchildren;
+ uint capacity;
+ struct lens *lens;
+ uint start;
+ uint end;
+};
+
+struct rec_state {
+ enum mode_t mode;
+ struct state *state;
+ uint fsize;
+ uint fused;
+ struct frame *frames;
+ size_t start;
+ uint lvl; /* Debug only */
+ struct ast *ast;
+ /* Will either be get_combine or parse_combine, depending on MODE, for
+ the duration of the whole recursive parse */
+ void (*combine)(struct rec_state *, struct lens *, uint);
+};
+
+#define REG_START(state) ((state)->regs->start[(state)->nreg])
+#define REG_END(state) ((state)->regs->end[(state)->nreg])
+#define REG_SIZE(state) (REG_END(state) - REG_START(state))
+#define REG_POS(state) ((state)->text + REG_START(state))
+#define REG_VALID(state) ((state)->regs != NULL && \
+ (state)->nreg < (state)->regs->num_regs)
+#define REG_MATCHED(state) (REG_VALID(state) \
+ && (state)->regs->start[(state)->nreg] >= 0)
+
+/* The macros SAVE_REGS and RESTORE_REGS are used to remember and restore
+ * where our caller was in processing their regexp match before we do
+ * matching ourselves (and would therefore clobber the caller's state)
+ *
+ * These two macros are admittedly horrible in that they declare local
+ * variables called OLD_REGS and OLD_NREG. The alternative to avoiding that
+ * would be to manually manage a stack of regs/nreg in STATE.
+ */
+#define SAVE_REGS(state) \
+ struct re_registers *old_regs = state->regs; \
+ uint old_nreg = state->nreg; \
+ state->regs = NULL; \
+ state->nreg = 0
+
+#define RESTORE_REGS(state) \
+ free_regs(state); \
+ state->regs = old_regs; \
+ state->nreg = old_nreg;
+
+/*
+ * AST utils
+ */
+static struct ast *make_ast(struct lens *lens) {
+ struct ast *ast = NULL;
+
+ if (ALLOC(ast) < 0)
+ return NULL;
+ ast->lens = lens;
+ ast->capacity = 4;
+ if (ALLOC_N(ast->children, ast->capacity) < 0) {
+ FREE(ast);
+ return NULL;
+ }
+ return ast;
+}
+
+/* recursively free the node and all its descendants */
+static void free_ast(struct ast *ast) {
+ int i;
+ if (ast == NULL)
+ return;
+ for (i = 0; i < ast->nchildren; i++) {
+ free_ast(ast->children[i]);
+ }
+ if (ast->children != NULL)
+ FREE(ast->children);
+ FREE(ast);
+}
+
+static struct ast *ast_root(struct ast *ast) {
+ struct ast *root = ast;
+ while(root != NULL && root->parent != NULL)
+ root = root->parent;
+ return root;
+}
+
+static void ast_set(struct ast *ast, uint start, uint end) {
+ if (ast == NULL)
+ return;
+ ast->start = start;
+ ast->end = end;
+}
+
+/* append a child to the parent ast */
+static struct ast *ast_append(struct rec_state *rec_state, struct lens *lens,
+ uint start, uint end) {
+ int ret;
+ struct ast *child, *parent;
+ struct state *state = rec_state->state;
+
+ parent = rec_state->ast;
+ if (parent == NULL)
+ return NULL;
+
+ child = make_ast(lens);
+ ERR_NOMEM(child == NULL, state->info);
+
+ ast_set(child, start, end);
+ if (parent->nchildren >= parent->capacity) {
+ ret = REALLOC_N(parent->children, parent->capacity * 2);
+ ERR_NOMEM(ret < 0, state->info);
+ parent->capacity = parent->capacity * 2;
+ }
+ parent->children[parent->nchildren++] = child;
+ child->parent = parent;
+
+ return child;
+ error:
+ free_ast(child);
+ return NULL;
+}
+
+/* pop the ast up one level; fail if the parent is NULL */
+static void ast_pop(struct rec_state *rec_state) {
+ ensure(rec_state->ast != NULL && rec_state->ast->parent != NULL, rec_state->state->info);
+ rec_state->ast = rec_state->ast->parent;
+ error:
+ return;
+}
+
+static void print_ast(const struct ast *ast, int lvl) {
+ int i;
+ char *lns;
+ if (ast == NULL)
+ return;
+ for (i = 0; i < lvl; i++) fputs(" ", stdout);
+ lns = format_lens(ast->lens);
+ printf("%d..%d %s\n", ast->start, ast->end, lns);
+ free(lns);
+ for (i = 0; i < ast->nchildren; i++) {
+ print_ast(ast->children[i], lvl + 1);
+ }
+}
+
+static void get_error(struct state *state, struct lens *lens,
+ const char *format, ...)
+ ATTRIBUTE_FORMAT(printf, 3, 4);
+
+void free_lns_error(struct lns_error *err) {
+ if (err == NULL)
+ return;
+ free(err->message);
+ free(err->path);
+ unref(err->lens, lens);
+ free(err);
+}
+
+static void vget_error(struct state *state, struct lens *lens,
+ const char *format, va_list ap) {
+ int r;
+
+ if (state->error != NULL)
+ return;
+ if (ALLOC(state->error) < 0)
+ return;
+ state->error->lens = ref(lens);
+ if (REG_MATCHED(state))
+ state->error->pos = REG_END(state);
+ else
+ state->error->pos = 0;
+ r = vasprintf(&state->error->message, format, ap);
+ if (r == -1)
+ state->error->message = NULL;
+}
+
+static void get_error(struct state *state, struct lens *lens,
+ const char *format, ...)
+{
+ va_list ap;
+
+ va_start(ap, format);
+ vget_error(state, lens, format, ap);
+ va_end(ap);
+}
+
+static struct skel *make_skel(struct lens *lens) {
+ struct skel *skel;
+ enum lens_tag tag = lens->tag;
+ if (ALLOC(skel) < 0)
+ return NULL;
+ skel->tag = tag;
+ return skel;
+}
+
+void free_skel(struct skel *skel) {
+ if (skel == NULL)
+ return;
+ if (skel->tag == L_CONCAT || skel->tag == L_STAR || skel->tag == L_MAYBE ||
+ skel->tag == L_SQUARE) {
+ while (skel->skels != NULL) {
+ struct skel *del = skel->skels;
+ skel->skels = del->next;
+ free_skel(del);
+ }
+ } else if (skel->tag == L_DEL) {
+ free(skel->text);
+ }
+ free(skel);
+}
+
+static void print_skel(struct skel *skel);
+static void print_skel_list(struct skel *skels, const char *beg,
+ const char *sep, const char *end) {
+ printf("%s", beg);
+ list_for_each(s, skels) {
+ print_skel(s);
+ if (s->next != NULL)
+ printf("%s", sep);
+ }
+ printf("%s", end);
+}
+
+static void print_skel(struct skel *skel) {
+ switch(skel->tag) {
+ case L_DEL:
+ if (skel->text == NULL) {
+ printf("<>");
+ } else {
+ fputc('\'', stdout);
+ print_chars(stdout, skel->text, -1);
+ fputc('\'', stdout);
+ }
+ break;
+ case L_CONCAT:
+ print_skel_list(skel->skels, "", " . ", "");
+ break;
+ case L_STAR:
+ print_skel_list(skel->skels, "(", " ", ")*");
+ break;
+ case L_MAYBE:
+ print_skel_list(skel->skels, "(", " ", ")?");
+ break;
+ case L_SUBTREE:
+ print_skel_list(skel->skels, "[", " ", "]");
+ break;
+ default:
+ printf("??");
+ break;
+ }
+}
+
+/* DICT_DUMP is only used for debugging */
+#ifdef DICT_DUMP
+static void print_dict(struct dict *dict, int indent) {
+ list_for_each(d, dict) {
+ printf("%*s%s:\n", indent, "", d->key);
+ list_for_each(e, d->entry) {
+ printf("%*s", indent+2, "");
+ print_skel(e->skel);
+ printf("\n");
+ print_dict(e->dict, indent+2);
+ }
+ }
+}
+#endif
+
+static void get_expected_error(struct state *state, struct lens *l) {
+ /* Size of the excerpt of the input text we'll show */
+ static const int wordlen = 10;
+ char word[wordlen+1];
+ char *p, *pat;
+
+ if (REG_MATCHED(state))
+ strncpy(word, REG_POS(state), wordlen);
+ else
+ strncpy(word, state->text, wordlen);
+ word[wordlen] = '\0';
+ for (p = word; *p != '\0' && *p != '\n'; p++);
+ *p = '\0';
+
+ pat = escape(l->ctype->pattern->str, -1, NULL);
+ get_error(state, l, "expected %s at '%s'", pat, word);
+ free(pat);
+}
+
+/*
+ * Splitting of the input text
+ */
+
+static char *token(struct state *state) {
+ ensure0(REG_MATCHED(state), state->info);
+ return strndup(REG_POS(state), REG_SIZE(state));
+}
+
+static char *token_range(const char *text, uint start, uint end) {
+ return strndup(text + start, end - start);
+}
+
+static void regexp_match_error(struct state *state, struct lens *lens,
+ int count, struct regexp *r) {
+ char *text = NULL;
+ char *pat = regexp_escape(r);
+
+ if (state->regs != NULL)
+ text = strndup(REG_POS(state), REG_SIZE(state));
+ else
+ text = strdup("(unknown)");
+
+ if (count == -1) {
+ get_error(state, lens, "Failed to match /%s/ with %s", pat, text);
+ } else if (count == -2) {
+ get_error(state, lens, "Internal error matching /%s/ with %s",
+ pat, text);
+ } else if (count == -3) {
+ /* Should have been caught by the typechecker */
+ get_error(state, lens, "Syntax error in regexp /%s/", pat);
+ }
+ free(pat);
+ free(text);
+}
+
+static void no_match_error(struct state *state, struct lens *lens) {
+ ensure(lens->tag == L_KEY || lens->tag == L_DEL
+ || lens->tag == L_STORE, state->info);
+ char *pat = regexp_escape(lens->ctype);
+ const char *lname = "(lname)";
+ if (lens->tag == L_KEY)
+ lname = "key";
+ else if (lens->tag == L_DEL)
+ lname = "del";
+ else if (lens->tag == L_STORE)
+ lname = "store";
+ get_error(state, lens, "no match for %s /%s/", lname, pat);
+ free(pat);
+ error:
+ return;
+}
+
+/* Modifies STATE->REGS and STATE->NREG. The caller must save these
+ * if they are still needed
+ *
+ * Return the number of characters matched
+ */
+static int match(struct state *state, struct lens *lens,
+ struct regexp *re, uint size, uint start) {
+ struct re_registers *regs;
+ int count;
+
+ if (ALLOC(regs) < 0)
+ return -1;
+
+ count = regexp_match(re, state->text, size, start, regs);
+ if (count < -1) {
+ regexp_match_error(state, lens, count, re);
+ FREE(regs);
+ return -1;
+ }
+ state->regs = regs;
+ state->nreg = 0;
+ return count;
+}
+
+static void free_regs(struct state *state) {
+ if (state->regs != NULL) {
+ free(state->regs->start);
+ free(state->regs->end);
+ FREE(state->regs);
+ }
+}
+
+static struct tree *get_lens(struct lens *lens, struct state *state);
+static struct skel *parse_lens(struct lens *lens, struct state *state,
+ struct dict **dict);
+
+static void free_seqs(struct seq *seqs) {
+ /* Do not free seq->name; it's not owned by the seq, but by some lens */
+ list_free(seqs);
+}
+
+static struct seq *find_seq(const char *name, struct state *state) {
+ ensure0(name != NULL, state->info);
+ struct seq *seq;
+
+ for (seq=state->seqs;
+ seq != NULL && STRNEQ(seq->name, name);
+ seq = seq->next);
+
+ if (seq == NULL) {
+ if (ALLOC(seq) < 0)
+ return NULL;
+ seq->name = name;
+ seq->value = 1;
+ list_append(state->seqs, seq);
+ }
+
+ return seq;
+}
+
+static struct tree *get_seq(struct lens *lens, struct state *state) {
+ ensure0(lens->tag == L_SEQ, state->info);
+ struct seq *seq = find_seq(lens->string->str, state);
+ int r;
+
+ r = asprintf((char **) &(state->key), "%d", seq->value);
+ ERR_NOMEM(r < 0, state->info);
+
+ seq->value += 1;
+ error:
+ return NULL;
+}
+
+static struct skel *parse_seq(struct lens *lens, struct state *state) {
+ get_seq(lens, state);
+ return make_skel(lens);
+}
+
+static struct tree *get_counter(struct lens *lens, struct state *state) {
+ ensure0(lens->tag == L_COUNTER, state->info);
+ struct seq *seq = find_seq(lens->string->str, state);
+ seq->value = 1;
+ return NULL;
+}
+
+static struct skel *parse_counter(struct lens *lens, struct state *state) {
+ get_counter(lens, state);
+ return make_skel(lens);
+}
+
+static struct tree *get_del(struct lens *lens, struct state *state) {
+ ensure0(lens->tag == L_DEL, state->info);
+ if (! REG_MATCHED(state)) {
+ char *pat = regexp_escape(lens->ctype);
+ get_error(state, lens, "no match for del /%s/", pat);
+ free(pat);
+ }
+ update_span(state->span, REG_START(state), REG_END(state));
+ return NULL;
+}
+
+static struct skel *parse_del(struct lens *lens, struct state *state) {
+ ensure0(lens->tag == L_DEL, state->info);
+ struct skel *skel = NULL;
+
+ skel = make_skel(lens);
+ if (! REG_MATCHED(state))
+ no_match_error(state, lens);
+ else
+ skel->text = token(state);
+ return skel;
+}
+
+static struct tree *get_store(struct lens *lens, struct state *state) {
+ ensure0(lens->tag == L_STORE, state->info);
+ ensure0(state->value == NULL, state->info);
+
+ struct tree *tree = NULL;
+
+ if (state->value != NULL)
+ get_error(state, lens, "More than one store in a subtree");
+ else if (! REG_MATCHED(state))
+ no_match_error(state, lens);
+ else {
+ state->value = token(state);
+ if (state->span) {
+ state->span->value_start = REG_START(state);
+ state->span->value_end = REG_END(state);
+ update_span(state->span, REG_START(state), REG_END(state));
+ }
+ }
+ return tree;
+}
+
+static struct skel *parse_store(struct lens *lens,
+ ATTRIBUTE_UNUSED struct state *state) {
+ ensure0(lens->tag == L_STORE, state->info);
+ return make_skel(lens);
+}
+
+static struct tree *get_value(struct lens *lens, struct state *state) {
+ ensure0(lens->tag == L_VALUE, state->info);
+ state->value = strdup(lens->string->str);
+ return NULL;
+}
+
+static struct skel *parse_value(struct lens *lens,
+ ATTRIBUTE_UNUSED struct state *state) {
+ ensure0(lens->tag == L_VALUE, state->info);
+ return make_skel(lens);
+}
+
+static struct tree *get_key(struct lens *lens, struct state *state) {
+ ensure0(lens->tag == L_KEY, state->info);
+ if (! REG_MATCHED(state))
+ no_match_error(state, lens);
+ else {
+ state->key = token(state);
+ if (state->span) {
+ state->span->label_start = REG_START(state);
+ state->span->label_end = REG_END(state);
+ update_span(state->span, REG_START(state), REG_END(state));
+ }
+ }
+ return NULL;
+}
+
+static struct skel *parse_key(struct lens *lens, struct state *state) {
+ get_key(lens, state);
+ return make_skel(lens);
+}
+
+static struct tree *get_label(struct lens *lens, struct state *state) {
+ ensure0(lens->tag == L_LABEL, state->info);
+ state->key = strdup(lens->string->str);
+ return NULL;
+}
+
+static struct skel *parse_label(struct lens *lens, struct state *state) {
+ get_label(lens, state);
+ return make_skel(lens);
+}
+
+static struct tree *get_union(struct lens *lens, struct state *state) {
+ ensure0(lens->tag == L_UNION, state->info);
+
+ struct tree *tree = NULL;
+ int applied = 0;
+ uint old_nreg = state->nreg;
+
+ state->nreg += 1;
+ for (int i=0; i < lens->nchildren; i++) {
+ if (REG_MATCHED(state)) {
+ tree = get_lens(lens->children[i], state);
+ applied = 1;
+ break;
+ }
+ state->nreg += 1 + regexp_nsub(lens->children[i]->ctype);
+ }
+ state->nreg = old_nreg;
+ if (!applied)
+ get_expected_error(state, lens);
+ return tree;
+}
+
+static struct skel *parse_union(struct lens *lens, struct state *state,
+ struct dict **dict) {
+ ensure0(lens->tag == L_UNION, state->info);
+
+ struct skel *skel = NULL;
+ int applied = 0;
+ uint old_nreg = state->nreg;
+
+ state->nreg += 1;
+ for (int i=0; i < lens->nchildren; i++) {
+ struct lens *l = lens->children[i];
+ if (REG_MATCHED(state)) {
+ skel = parse_lens(l, state, dict);
+ applied = 1;
+ break;
+ }
+ state->nreg += 1 + regexp_nsub(lens->children[i]->ctype);
+ }
+ state->nreg = old_nreg;
+ if (! applied)
+ get_expected_error(state, lens);
+
+ return skel;
+}
+
+static struct tree *get_concat(struct lens *lens, struct state *state) {
+ ensure0(lens->tag == L_CONCAT, state->info);
+
+ struct tree *tree = NULL;
+ uint old_nreg = state->nreg;
+
+ state->nreg += 1;
+ for (int i=0; i < lens->nchildren; i++) {
+ struct tree *t = NULL;
+ if (! REG_VALID(state)) {
+ get_error(state, lens->children[i],
+ "Not enough components in concat");
+ free_tree(tree);
+ state->nreg = old_nreg;
+ return NULL;
+ }
+
+ t = get_lens(lens->children[i], state);
+ list_append(tree, t);
+ state->nreg += 1 + regexp_nsub(lens->children[i]->ctype);
+ }
+ state->nreg = old_nreg;
+
+ return tree;
+}
+
+static struct skel *parse_concat(struct lens *lens, struct state *state,
+ struct dict **dict) {
+ ensure0(lens->tag == L_CONCAT, state->info);
+ struct skel *skel = make_skel(lens);
+ uint old_nreg = state->nreg;
+
+ state->nreg += 1;
+ for (int i=0; i < lens->nchildren; i++) {
+ struct skel *sk = NULL;
+ struct dict *di = NULL;
+ if (! REG_VALID(state)) {
+ get_error(state, lens->children[i],
+ "Not enough components in concat");
+ free_skel(skel);
+ return NULL;
+ }
+
+ sk = parse_lens(lens->children[i], state, &di);
+ list_append(skel->skels, sk);
+ dict_append(dict, di);
+ state->nreg += 1 + regexp_nsub(lens->children[i]->ctype);
+ }
+ state->nreg = old_nreg;
+
+ return skel;
+}
+
+/* Given a lens that does not match at the current position, try to find
+ * the left-most child LAST that does match and the lens next to it NEXT
+ * that does not match. Return the length of the match.
+ *
+ * If no such child exists, return 0
+ */
+static int try_match(struct lens *lens, struct state *state,
+ uint start, uint end,
+ struct lens **last, struct lens **next) {
+ int result = 0, r;
+
+ switch(lens->tag) {
+ case L_VALUE:
+ case L_LABEL:
+ case L_SEQ:
+ case L_COUNTER:
+ *last = lens;
+ return result;
+ break;
+ case L_DEL:
+ case L_KEY:
+ case L_STORE:
+ result = regexp_match(lens->ctype, state->text, end, start, NULL);
+ if (result >= 0)
+ *last = lens;
+ return result;
+ case L_CONCAT:
+ for (int i=0; i < lens->nchildren; i++) {
+ struct lens *child = lens->children[i];
+ struct lens *next_child =
+ (i < lens->nchildren - 1) ? lens->children[i+1] : NULL;
+
+ r = regexp_match(child->ctype, state->text, end, start, NULL);
+ if (r >= 0) {
+ result += r;
+ start += r;
+ *last = child;
+ } else if (result > 0) {
+ if (*next == NULL)
+ *next = child;
+ return result;
+ } else {
+ result = try_match(child, state, start, end, last, next);
+ if (result > 0 && *next == NULL)
+ *next = next_child;
+ return result;
+ }
+ }
+ return result;
+ break;
+ case L_UNION:
+ for (int i=0; i < lens->nchildren; i++) {
+ struct lens *child = lens->children[i];
+ result = try_match(child, state, start, end, last, next);
+ if (result > 0)
+ return result;
+ }
+ return 0;
+ break;
+ case L_SUBTREE:
+ case L_STAR:
+ case L_MAYBE:
+ case L_SQUARE:
+ return try_match(lens->child, state, start, end, last, next);
+ break;
+ default:
+ BUG_ON(true, state->info, "illegal lens tag %d", lens->tag);
+ break;
+ }
+ error:
+ return 0;
+}
+
+static void short_iteration_error(struct lens *lens, struct state *state,
+ uint start, uint end) {
+ int match_count;
+
+ get_error(state, lens, "%s", short_iteration);
+
+ match_count = try_match(lens->child, state, start, end,
+ &state->error->last, &state->error->next);
+ state->error->pos = start + match_count;
+}
+
+static struct tree *get_quant_star(struct lens *lens, struct state *state) {
+ ensure0(lens->tag == L_STAR, state->info);
+ struct lens *child = lens->child;
+ struct tree *tree = NULL, *tail = NULL;
+ uint end = REG_END(state);
+ uint start = REG_START(state);
+ uint size = end - start;
+
+ SAVE_REGS(state);
+ while (size > 0 && match(state, child, child->ctype, end, start) > 0) {
+ struct tree *t = NULL;
+
+ t = get_lens(lens->child, state);
+ list_tail_cons(tree, tail, t);
+
+ start += REG_SIZE(state);
+ size -= REG_SIZE(state);
+ free_regs(state);
+ }
+ RESTORE_REGS(state);
+ if (size != 0) {
+ short_iteration_error(lens, state, start, end);
+ }
+ return tree;
+}
+
+static struct skel *parse_quant_star(struct lens *lens, struct state *state,
+ struct dict **dict) {
+ ensure0(lens->tag == L_STAR, state->info);
+ struct lens *child = lens->child;
+ struct skel *skel = make_skel(lens), *tail = NULL;
+ uint end = REG_END(state);
+ uint start = REG_START(state);
+ uint size = end - start;
+
+ *dict = NULL;
+ SAVE_REGS(state);
+ while (size > 0 && match(state, child, child->ctype, end, start) > 0) {
+ struct skel *sk;
+ struct dict *di = NULL;
+
+ sk = parse_lens(lens->child, state, &di);
+ list_tail_cons(skel->skels, tail, sk);
+ dict_append(dict, di);
+
+ start += REG_SIZE(state);
+ size -= REG_SIZE(state);
+ free_regs(state);
+ }
+ RESTORE_REGS(state);
+ if (size != 0) {
+ get_error(state, lens, "%s", short_iteration);
+ }
+ return skel;
+}
+
+static struct tree *get_quant_maybe(struct lens *lens, struct state *state) {
+ ensure0(lens->tag == L_MAYBE, state->info);
+ struct tree *tree = NULL;
+
+ /* Check that our child matched. For a construct like (r)?, the
+ * matcher will report a zero length match for group 0, even if r
+ * does not match at all
+ */
+ state->nreg += 1;
+ if (REG_MATCHED(state)) {
+ tree = get_lens(lens->child, state);
+ }
+ state->nreg -= 1;
+ return tree;
+}
+
+static struct skel *parse_quant_maybe(struct lens *lens, struct state *state,
+ struct dict **dict) {
+ ensure0(lens->tag == L_MAYBE, state->info);
+
+ struct skel *skel = NULL;
+
+ state->nreg += 1;
+ if (REG_MATCHED(state)) {
+ skel = parse_lens(lens->child, state, dict);
+ }
+ state->nreg -= 1;
+ if (skel == NULL)
+ skel = make_skel(lens);
+ return skel;
+}
+
+static struct tree *get_subtree(struct lens *lens, struct state *state) {
+ char *key = state->key;
+ char *value = state->value;
+ struct span *span = move(state->span);
+
+ struct tree *tree = NULL, *children;
+
+ state->key = NULL;
+ state->value = NULL;
+ if (state->enable_span) {
+ state->span = make_span(state->info);
+ ERR_NOMEM(state->span == NULL, state->info);
+ }
+
+ children = get_lens(lens->child, state);
+
+ tree = make_tree(state->key, state->value, NULL, children);
+ ERR_NOMEM(tree == NULL, state->info);
+ tree->span = move(state->span);
+
+ if (tree->span != NULL) {
+ update_span(span, tree->span->span_start, tree->span->span_end);
+ }
+
+ state->key = key;
+ state->value = value;
+ state->span = span;
+ return tree;
+ error:
+ free_span(state->span);
+ state->span = span;
+ return NULL;
+}
+
+static struct skel *parse_subtree(struct lens *lens, struct state *state,
+ struct dict **dict) {
+ char *key = state->key;
+ struct skel *skel;
+ struct dict *di = NULL;
+
+ state->key = NULL;
+ skel = parse_lens(lens->child, state, &di);
+ *dict = make_dict(state->key, skel, di);
+ state->key = key;
+ return make_skel(lens);
+}
+
+/* Check whether the left and right strings match according to the square
+ * lens definition.
+ *
+ * Returns 1 if the strings match, 0 otherwise
+ */
+static int square_match(struct lens *lens, char *left, char *right) {
+ int cmp = 0;
+ struct lens *concat = NULL;
+
+ /* if any of the arguments is NULL, report no match */
+ if (left == NULL || right == NULL || lens == NULL)
+ return cmp;
+
+ concat = lens->child;
+ /* If either right or left lens is nocase, then ignore case */
+ if (child_first(concat)->ctype->nocase ||
+ child_last(concat)->ctype->nocase) {
+ cmp = STRCASEEQ(left, right);
+ } else {
+ cmp = STREQ(left, right);
+ }
+ return cmp;
+}
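+
+/* For illustration (hypothetical delimiters): with a case-sensitive
+ * square lens, square_match(lens, "div", "div") returns 1 and
+ * square_match(lens, "div", "span") returns 0; if either delimiter
+ * lens is nocase, square_match(lens, "DIV", "div") also returns 1. */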
+
+/*
+ * This function applies only to non-recursive lenses; the handling of
+ * recursive squares is done in visit_exit().
+ */
+static struct tree *get_square(struct lens *lens, struct state *state) {
+ ensure0(lens->tag == L_SQUARE, state->info);
+
+ struct lens *concat = lens->child;
+ struct tree *tree = NULL;
+ uint end = REG_END(state);
+ uint start = REG_START(state);
+ char *rsqr = NULL, *lsqr = NULL;
+ int r;
+
+ SAVE_REGS(state);
+ r = match(state, lens->child, lens->child->ctype, end, start);
+ ERR_NOMEM(r < 0, state->info);
+
+ tree = get_lens(lens->child, state);
+
+ /* retrieve left component */
+ state->nreg = 1;
+ start = REG_START(state);
+ end = REG_END(state);
+ lsqr = token_range(state->text, start, end);
+
+ /* retrieve right component */
+ /* compute nreg for the last child */
+ for (int i = 0; i < concat->nchildren - 1; i++)
+ state->nreg += 1 + regexp_nsub(concat->children[i]->ctype);
+
+ start = REG_START(state);
+ end = REG_END(state);
+ rsqr = token_range(state->text, start, end);
+
+ if (!square_match(lens, lsqr, rsqr)) {
+ get_error(state, lens, "%s \"%s\" %s \"%s\"",
+ "Parse error: mismatched in square lens, expecting", lsqr,
+ "but got", rsqr);
+ goto error;
+ }
+
+ done:
+ RESTORE_REGS(state);
+ FREE(lsqr);
+ FREE(rsqr);
+ return tree;
+
+ error:
+ free_tree(tree);
+ tree = NULL;
+ goto done;
+}
+
+static struct skel *parse_square(struct lens *lens, struct state *state,
+ struct dict **dict) {
+ ensure0(lens->tag == L_SQUARE, state->info);
+ uint end = REG_END(state);
+ uint start = REG_START(state);
+ struct skel *skel = NULL, *sk = NULL;
+ int r;
+
+ SAVE_REGS(state);
+ r = match(state, lens->child, lens->child->ctype, end, start);
+ ERR_NOMEM(r < 0, state->info);
+
+ skel = parse_lens(lens->child, state, dict);
+ if (skel == NULL)
+ goto error;
+ sk = make_skel(lens);
+ sk->skels = skel;
+
+ error:
+ RESTORE_REGS(state);
+ return sk;
+}
+
+/*
+ * Helpers for recursive lenses
+ */
+
+ATTRIBUTE_UNUSED
+static void print_frames(struct rec_state *state) {
+ for (int j = state->fused - 1; j >=0; j--) {
+ struct frame *f = state->frames + j;
+ for (int i=0; i < state->lvl; i++) fputc(' ', stderr);
+ fprintf(stderr, "%2d %s %s", j, f->key, f->value);
+ if (f->tree == NULL) {
+ fprintf(stderr, " - ");
+ } else {
+ fprintf(stderr, " { %s = %s } ", f->tree->label, f->tree->value);
+ }
+ char *s = format_lens(f->lens);
+ if (s != NULL) {
+ fprintf(stderr, "%s\n", s);
+ free(s);
+ }
+ }
+}
+
+ATTRIBUTE_PURE
+static struct frame *top_frame(struct rec_state *state) {
+ ensure0(state->fsize > 0, state->state->info);
+ return state->frames + state->fused - 1;
+}
+
+/* The nth frame from the top of the stack, where the 0th frame is the top */
+ATTRIBUTE_PURE
+static struct frame *nth_frame(struct rec_state *state, uint n) {
+ ensure0(state->fsize > n, state->state->info);
+ return state->frames + state->fused - (n+1);
+}
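+
+/* For example, with frames pushed in the order f0, f1, f2 (fused == 3),
+ * top_frame() returns f2, nth_frame(state, 0) returns f2, and
+ * nth_frame(state, 2) returns f0. */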
+
+static struct frame *push_frame(struct rec_state *state, struct lens *lens) {
+ int r;
+
+ if (state->fused >= state->fsize) {
+ uint expand = state->fsize;
+ if (expand < 8)
+ expand = 8;
+ r = REALLOC_N(state->frames, state->fsize + expand);
+ ERR_NOMEM(r < 0, state->state->info);
+ state->fsize += expand;
+ }
+
+ state->fused += 1;
+
+ struct frame *top = top_frame(state);
+ MEMZERO(top, 1);
+ top->lens = lens;
+ return top;
+ error:
+ return NULL;
+}
+
+static struct frame *pop_frame(struct rec_state *state) {
+ ensure0(state->fused > 0, state->state->info);
+
+ struct frame *result = top_frame(state);
+ state->fused -= 1;
+ return result;
+}
+
+static void dbg_visit(struct lens *lens, char action, size_t start, size_t end,
+ int fused, int lvl) {
+ char *lns;
+ for (int i=0; i < lvl; i++)
+ fputc(' ', stderr);
+ lns = format_lens(lens);
+ fprintf(stderr, "%c %zd..%zd %d %s\n", action, start, end,
+ fused, lns);
+ free(lns);
+}
+
+static void get_terminal(struct frame *top, struct lens *lens,
+ struct state *state) {
+ top->tree = get_lens(lens, state);
+ top->key = state->key;
+ top->value = state->value;
+ state->key = NULL;
+ state->value = NULL;
+}
+
+static void parse_terminal(struct frame *top, struct lens *lens,
+ struct state *state) {
+ top->dict = NULL;
+ top->skel = parse_lens(lens, state, &top->dict);
+ top->key = state->key;
+ state->key = NULL;
+}
+
+static void visit_terminal(struct lens *lens, size_t start, size_t end,
+ void *data) {
+ struct rec_state *rec_state = data;
+ struct state *state = rec_state->state;
+ struct ast *child;
+
+ if (state->error != NULL)
+ return;
+
+ SAVE_REGS(state);
+ if (debugging("cf.get"))
+ dbg_visit(lens, 'T', start, end, rec_state->fused, rec_state->lvl);
+ match(state, lens, lens->ctype, end, start);
+ struct frame *top = push_frame(rec_state, lens);
+ ERR_BAIL(state->info);
+ if (rec_state->mode == M_GET)
+ get_terminal(top, lens, state);
+ else
+ parse_terminal(top, lens, state);
+ child = ast_append(rec_state, lens, start, end);
+ ERR_NOMEM(child == NULL, state->info);
+ error:
+ RESTORE_REGS(state);
+}
+
+static bool rec_gen_span(struct rec_state *rec_state) {
+ return ((rec_state->mode == M_GET) && (rec_state->state->enable_span));
+}
+
+static void visit_enter(struct lens *lens,
+ ATTRIBUTE_UNUSED size_t start,
+ ATTRIBUTE_UNUSED size_t end,
+ void *data) {
+ struct rec_state *rec_state = data;
+ struct state *state = rec_state->state;
+ struct ast *child;
+
+ if (state->error != NULL)
+ return;
+
+ if (debugging("cf.get"))
+ dbg_visit(lens, '{', start, end, rec_state->fused, rec_state->lvl);
+ rec_state->lvl += 1;
+ if (lens->tag == L_SUBTREE) {
+ /* Same for parse and get */
+ /* Use this frame to preserve the current state before we process
+ the contents of the subtree, i.e., lens->child */
+ struct frame *f = push_frame(rec_state, lens);
+ ERR_BAIL(state->info);
+ f->key = state->key;
+ f->value = state->value;
+ state->key = NULL;
+ state->value = NULL;
+ if (rec_gen_span(rec_state)) {
+ f->span = state->span;
+ state->span = make_span(state->info);
+ ERR_NOMEM(state->span == NULL, state->info);
+ }
+ } else if (lens->tag == L_MAYBE) {
+ /* Push a frame as a marker so we can tell whether lens->child
+ actually had a match or not */
+ push_frame(rec_state, lens);
+ ERR_BAIL(state->info);
+ }
+ child = ast_append(rec_state, lens, start, end);
+ if (child != NULL)
+ rec_state->ast = child;
+ error:
+ return;
+}
+
+/* Combine n frames from the stack into one new frame for M_GET */
+static void get_combine(struct rec_state *rec_state,
+ struct lens *lens, uint n) {
+ struct tree *tree = NULL, *tail = NULL;
+ char *key = NULL, *value = NULL;
+ struct frame *top = NULL;
+
+ for (int i=0; i < n; i++) {
+ top = pop_frame(rec_state);
+ ERR_BAIL(lens->info);
+ list_tail_cons(tree, tail, top->tree);
+ /* top->tree might have more than one node, update tail */
+ if (tail != NULL)
+ while (tail->next != NULL) tail = tail->next;
+
+ if (top->key != NULL) {
+ ensure(key == NULL, rec_state->state->info);
+ key = top->key;
+ }
+ if (top->value != NULL) {
+ ensure(value == NULL, rec_state->state->info);
+ value = top->value;
+ }
+ }
+ top = push_frame(rec_state, lens);
+ ERR_BAIL(lens->info);
+ top->tree = tree;
+ top->key = key;
+ top->value = value;
+ error:
+ return;
+}
+
+/* Combine n frames from the stack into one new frame for M_PARSE */
+static void parse_combine(struct rec_state *rec_state,
+ struct lens *lens, uint n) {
+ struct skel *skel = make_skel(lens), *tail = NULL;
+ struct dict *dict = NULL;
+ char *key = NULL;
+ struct frame *top = NULL;
+
+ for (int i=0; i < n; i++) {
+ top = pop_frame(rec_state);
+ ERR_BAIL(lens->info);
+ list_tail_cons(skel->skels, tail, top->skel);
+ /* top->skel might have more than one node, update tail */
+ if (tail != NULL)
+ while (tail->next != NULL) tail = tail->next;
+ dict_append(&dict, top->dict);
+ if (top->key != NULL) {
+ ensure(key == NULL, rec_state->state->info);
+ key = top->key;
+ }
+ }
+ top = push_frame(rec_state, lens);
+ ERR_BAIL(lens->info);
+ top->skel = move(skel);
+ top->dict = move(dict);
+ top->key = key;
+ error:
+ free_skel(skel);
+ free_dict(dict);
+ return;
+}
+
+static void visit_exit_put_subtree(struct lens *lens,
+ struct rec_state *rec_state,
+ struct frame *top) {
+ struct state *state = rec_state->state;
+ struct skel *skel = NULL;
+ struct dict *dict = NULL;
+
+ skel = make_skel(lens);
+ ERR_NOMEM(skel == NULL, lens->info);
+ dict = make_dict(top->key, top->skel, top->dict);
+ ERR_NOMEM(dict == NULL, lens->info);
+
+ top = pop_frame(rec_state);
+ ERR_BAIL(state->info);
+ ensure(lens == top->lens, state->info);
+ state->key = top->key;
+ top = push_frame(rec_state, lens);
+ ERR_BAIL(state->info);
+ top->skel = move(skel);
+ top->dict = move(dict);
+ error:
+ free_skel(skel);
+ free_dict(dict);
+}
+
+static void visit_exit(struct lens *lens,
+ ATTRIBUTE_UNUSED size_t start,
+ ATTRIBUTE_UNUSED size_t end,
+ void *data) {
+ struct rec_state *rec_state = data;
+ struct state *state = rec_state->state;
+ struct tree *tree = NULL;
+
+ if (state->error != NULL)
+ return;
+
+ rec_state->lvl -= 1;
+ if (debugging("cf.get"))
+ dbg_visit(lens, '}', start, end, rec_state->fused, rec_state->lvl);
+
+ ERR_BAIL(lens->info);
+
+ if (lens->tag == L_SUBTREE) {
+ /* Get the result of parsing lens->child */
+ struct frame *top = pop_frame(rec_state);
+ ERR_BAIL(state->info);
+ if (rec_state->mode == M_GET) {
+ tree = make_tree(top->key, top->value, NULL, top->tree);
+ ERR_NOMEM(tree == NULL, lens->info);
+ tree->span = state->span;
+ /* Restore the parse state from before entering this subtree */
+ top = pop_frame(rec_state);
+ ERR_BAIL(state->info);
+ ensure(lens == top->lens, state->info);
+ state->key = top->key;
+ state->value = top->value;
+ state->span = top->span;
+ /* Push the result of parsing this subtree */
+ top = push_frame(rec_state, lens);
+ ERR_BAIL(state->info);
+ top->tree = move(tree);
+ } else {
+ visit_exit_put_subtree(lens, rec_state, top);
+ }
+ } else if (lens->tag == L_CONCAT) {
+ ensure(rec_state->fused >= lens->nchildren, state->info);
+ for (int i = 0; i < lens->nchildren; i++) {
+ struct frame *fr = nth_frame(rec_state, i);
+ ERR_BAIL(state->info);
+ BUG_ON(lens->children[i] != fr->lens,
+ lens->info,
+ "Unexpected lens in concat %zd..%zd\n Expected: %s\n Actual: %s",
+ start, end,
+ format_lens(lens->children[i]),
+ format_lens(fr->lens));
+ }
+ rec_state->combine(rec_state, lens, lens->nchildren);
+ } else if (lens->tag == L_STAR) {
+ uint n = 0;
+ while (n < rec_state->fused &&
+ nth_frame(rec_state, n)->lens == lens->child)
+ n++;
+ ERR_BAIL(state->info);
+ rec_state->combine(rec_state, lens, n);
+ } else if (lens->tag == L_MAYBE) {
+ uint n = 1;
+ if (rec_state->fused > 0
+ && top_frame(rec_state)->lens == lens->child) {
+ n = 2;
+ }
+ ERR_BAIL(state->info);
+ /* If n = 2, the top of the stack is our child's result, and the
+ frame underneath it is the marker frame we pushed during
+ visit_enter. Combine these two frames into one, which represents
+ the result of parsing the whole L_MAYBE. */
+ rec_state->combine(rec_state, lens, n);
+ } else if (lens->tag == L_SQUARE) {
+ if (rec_state->mode == M_GET) {
+ struct ast *square, *concat, *right, *left;
+ char *rsqr, *lsqr;
+ int ret;
+
+ square = rec_state->ast;
+ concat = child_first(square);
+ right = child_first(concat);
+ left = child_last(concat);
+ lsqr = token_range(state->text, left->start, left->end);
+ rsqr = token_range(state->text, right->start, right->end);
+ ret = square_match(lens, lsqr, rsqr);
+ if (! ret) {
+ get_error(state, lens, "%s \"%s\" %s \"%s\"",
+ "Parse error: mismatched in square lens, expecting", lsqr,
+ "but got", rsqr);
+ }
+ FREE(lsqr);
+ FREE(rsqr);
+ if (! ret)
+ goto error;
+ }
+ rec_state->combine(rec_state, lens, 1);
+ } else {
+ /* Turn the top frame from having the result of one of our children
+ to being our result */
+ top_frame(rec_state)->lens = lens;
+ ERR_BAIL(state->info);
+ }
+ ast_pop(rec_state);
+ error:
+ free_tree(tree);
+ return;
+}
+
+static void visit_error(struct lens *lens, void *data, size_t pos,
+ const char *format, ...) {
+ struct rec_state *rec_state = data;
+ va_list ap;
+
+ va_start(ap, format);
+ vget_error(rec_state->state, lens, format, ap);
+ va_end(ap);
+ rec_state->state->error->pos = rec_state->start + pos;
+}
+
+static struct frame *rec_process(enum mode_t mode, struct lens *lens,
+ struct state *state) {
+ uint end = REG_END(state);
+ uint start = REG_START(state);
+ size_t len = 0;
+ int r;
+ struct jmt_visitor visitor;
+ struct rec_state rec_state;
+ int i;
+ struct frame *f = NULL;
+
+ MEMZERO(&rec_state, 1);
+ MEMZERO(&visitor, 1);
+ SAVE_REGS(state);
+
+ if (lens->jmt == NULL) {
+ lens->jmt = jmt_build(lens);
+ ERR_BAIL(lens->info);
+ }
+
+ rec_state.mode = mode;
+ rec_state.state = state;
+ rec_state.fused = 0;
+ rec_state.lvl = 0;
+ rec_state.start = start;
+ rec_state.ast = make_ast(lens);
+ rec_state.combine = (mode == M_GET) ? get_combine : parse_combine;
+ ERR_NOMEM(rec_state.ast == NULL, state->info);
+
+ visitor.parse = jmt_parse(lens->jmt, state->text + start, end - start);
+ ERR_BAIL(lens->info);
+ visitor.terminal = visit_terminal;
+ visitor.enter = visit_enter;
+ visitor.exit = visit_exit;
+ visitor.error = visit_error;
+ visitor.data = &rec_state;
+ r = jmt_visit(&visitor, &len);
+ ERR_BAIL(lens->info);
+ if (r != 1) {
+ get_error(state, lens, "Syntax error");
+ state->error->pos = start + len;
+ goto error;
+ }
+ if (rec_state.fused == 0) {
+ get_error(state, lens,
+ "Parse did not leave a result on the stack");
+ goto error;
+ } else if (rec_state.fused > 1) {
+ get_error(state, lens,
+ "Parse left additional garbage on the stack");
+ goto error;
+ }
+
+ rec_state.ast = ast_root(rec_state.ast);
+ ensure(rec_state.ast->parent == NULL, state->info);
+ done:
+ if (debugging("cf.get.ast"))
+ print_ast(ast_root(rec_state.ast), 0);
+ RESTORE_REGS(state);
+ jmt_free_parse(visitor.parse);
+ free_ast(ast_root(rec_state.ast));
+ return rec_state.frames;
+ error:
+
+ for(i = 0; i < rec_state.fused; i++) {
+ f = nth_frame(&rec_state, i);
+ FREE(f->key);
+ free_span(f->span);
+ if (mode == M_GET) {
+ FREE(f->value);
+ free_tree(f->tree);
+ } else if (mode == M_PARSE) {
+ free_skel(f->skel);
+ free_dict(f->dict);
+ }
+ }
+ FREE(rec_state.frames);
+ goto done;
+}
+
+static struct tree *get_rec(struct lens *lens, struct state *state) {
+ struct frame *fr;
+ struct tree *tree = NULL;
+
+ fr = rec_process(M_GET, lens, state);
+ if (fr != NULL) {
+ tree = fr->tree;
+ state->key = fr->key;
+ state->value = fr->value;
+ FREE(fr);
+ }
+ return tree;
+}
+
+static struct skel *parse_rec(struct lens *lens, struct state *state,
+ struct dict **dict) {
+ struct skel *skel = NULL;
+ struct frame *fr;
+
+ fr = rec_process(M_PARSE, lens, state);
+ if (fr != NULL) {
+ skel = fr->skel;
+ *dict = fr->dict;
+ state->key = fr->key;
+ FREE(fr);
+ }
+ return skel;
+}
+
+static struct tree *get_lens(struct lens *lens, struct state *state) {
+ struct tree *tree = NULL;
+
+ switch(lens->tag) {
+ case L_DEL:
+ tree = get_del(lens, state);
+ break;
+ case L_STORE:
+ tree = get_store(lens, state);
+ break;
+ case L_VALUE:
+ tree = get_value(lens, state);
+ break;
+ case L_KEY:
+ tree = get_key(lens, state);
+ break;
+ case L_LABEL:
+ tree = get_label(lens, state);
+ break;
+ case L_SEQ:
+ tree = get_seq(lens, state);
+ break;
+ case L_COUNTER:
+ tree = get_counter(lens, state);
+ break;
+ case L_CONCAT:
+ tree = get_concat(lens, state);
+ break;
+ case L_UNION:
+ tree = get_union(lens, state);
+ break;
+ case L_SUBTREE:
+ tree = get_subtree(lens, state);
+ break;
+ case L_STAR:
+ tree = get_quant_star(lens, state);
+ break;
+ case L_MAYBE:
+ tree = get_quant_maybe(lens, state);
+ break;
+ case L_SQUARE:
+ tree = get_square(lens, state);
+ break;
+ default:
+ BUG_ON(true, state->info, "illegal lens tag %d", lens->tag);
+ break;
+ }
+ error:
+ return tree;
+}
+
+/* Initialize registers. Return 0 if the lens matches the entire text, 1 if
+ * it does not and -1 on error.
+ */
+static int init_regs(struct state *state, struct lens *lens, uint size) {
+ int r;
+
+ if (lens->tag != L_STAR && ! lens->recursive) {
+ r = match(state, lens, lens->ctype, size, 0);
+ if (r == -1)
+ get_error(state, lens, "Input string does not match at all");
+ if (r <= -1)
+ return -1;
+ return r != size;
+ }
+ /* Special case the very common situation that the lens is (l)*
+ * We can avoid matching the entire text in that case - that
+ * match can be very expensive
+ */
+ if (ALLOC(state->regs) < 0)
+ return -1;
+ state->regs->num_regs = 1;
+ if (ALLOC(state->regs->start) < 0 || ALLOC(state->regs->end) < 0)
+ return -1;
+ state->regs->start[0] = 0;
+ state->regs->end[0] = size;
+ return 0;
+}
+
+struct tree *lns_get(struct info *info, struct lens *lens, const char *text,
+ int enable_span, struct lns_error **err) {
+ struct state state;
+ struct tree *tree = NULL;
+ uint size = strlen(text);
+ int partial, r;
+
+ MEMZERO(&state, 1);
+ r = ALLOC(state.info);
+ ERR_NOMEM(r < 0, info);
+
+ *state.info = *info;
+ state.info->ref = UINT_MAX;
+
+ state.text = text;
+
+ state.enable_span = enable_span;
+
+ /* We are probably being overly cautious here: if the lens can't process
+ * all of TEXT, we should really fail somewhere in one of the sublenses.
+ * But to be safe, we check that we can process everything anyway, then
+ * try to process, hoping we'll get a more specific error, and if that
+ * fails, we throw our arms in the air and say 'something went wrong'
+ */
+ partial = init_regs(&state, lens, size);
+ if (partial >= 0) {
+ if (lens->recursive)
+ tree = get_rec(lens, &state);
+ else
+ tree = get_lens(lens, &state);
+ }
+
+ free_seqs(state.seqs);
+ if (state.key != NULL) {
+ get_error(&state, lens, "get left unused key %s", state.key);
+ free(state.key);
+ }
+ if (state.value != NULL) {
+ get_error(&state, lens, "get left unused value %s", state.value);
+ free(state.value);
+ }
+ if (partial && state.error == NULL) {
+ get_error(&state, lens, "Get did not match entire input");
+ }
+
+ error:
+ free_regs(&state);
+ FREE(state.info);
+
+ if (err != NULL) {
+ *err = state.error;
+ } else {
+ if (state.error != NULL) {
+ free_tree(tree);
+ tree = NULL;
+ }
+ free_lns_error(state.error);
+ }
+ return tree;
+}
+
+static struct skel *parse_lens(struct lens *lens, struct state *state,
+ struct dict **dict) {
+ struct skel *skel = NULL;
+
+ switch(lens->tag) {
+ case L_DEL:
+ skel = parse_del(lens, state);
+ break;
+ case L_STORE:
+ skel = parse_store(lens, state);
+ break;
+ case L_VALUE:
+ skel = parse_value(lens, state);
+ break;
+ case L_KEY:
+ skel = parse_key(lens, state);
+ break;
+ case L_LABEL:
+ skel = parse_label(lens, state);
+ break;
+ case L_SEQ:
+ skel = parse_seq(lens, state);
+ break;
+ case L_COUNTER:
+ skel = parse_counter(lens, state);
+ break;
+ case L_CONCAT:
+ skel = parse_concat(lens, state, dict);
+ break;
+ case L_UNION:
+ skel = parse_union(lens, state, dict);
+ break;
+ case L_SUBTREE:
+ skel = parse_subtree(lens, state, dict);
+ break;
+ case L_STAR:
+ skel = parse_quant_star(lens, state, dict);
+ break;
+ case L_MAYBE:
+ skel = parse_quant_maybe(lens, state, dict);
+ break;
+ case L_SQUARE:
+ skel = parse_square(lens, state, dict);
+ break;
+ default:
+ BUG_ON(true, state->info, "illegal lens tag %d", lens->tag);
+ break;
+ }
+ error:
+ return skel;
+}
+
+struct skel *lns_parse(struct lens *lens, const char *text, struct dict **dict,
+ struct lns_error **err) {
+ struct state state;
+ struct skel *skel = NULL;
+ uint size = strlen(text);
+ int partial, r;
+
+ MEMZERO(&state, 1);
+ r = ALLOC(state.info);
+ ERR_NOMEM(r < 0, lens->info);
+ state.info->ref = UINT_MAX;
+ state.info->error = lens->info->error;
+ state.text = text;
+
+ partial = init_regs(&state, lens, size);
+ if (! partial) {
+ *dict = NULL;
+ if (lens->recursive)
+ skel = parse_rec(lens, &state, dict);
+ else
+ skel = parse_lens(lens, &state, dict);
+
+ free_seqs(state.seqs);
+ if (state.error != NULL) {
+ free_skel(skel);
+ skel = NULL;
+ free_dict(*dict);
+ *dict = NULL;
+ }
+ if (state.key != NULL) {
+ get_error(&state, lens, "parse left unused key %s", state.key);
+ free(state.key);
+ }
+ if (state.value != NULL) {
+ get_error(&state, lens, "parse left unused value %s", state.value);
+ free(state.value);
+ }
+ } else {
+ // This should never happen during lns_parse
+ get_error(&state, lens, "parse cannot process entire input");
+ }
+
+ error:
+ free_regs(&state);
+ FREE(state.info);
+ if (err != NULL) {
+ *err = state.error;
+ } else {
+ free_lns_error(state.error);
+ }
+ return skel;
+}
+
+/*
+ * Local variables:
+ * indent-tabs-mode: nil
+ * c-indent-level: 4
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
--- /dev/null
+/*
+ * Hash Table Data Type
+ * Copyright (C) 1997 Kaz Kylheku <kaz@ashi.footprints.net>
+ *
+ * Free Software License:
+ *
+ * All rights are reserved by the author, with the following exceptions:
+ * Permission is granted to freely reproduce and distribute this software,
+ * possibly in exchange for a fee, provided that this copyright notice appears
+ * intact. Permission is also granted to adapt this software to produce
+ * derivative works, as long as the modified versions carry this copyright
+ * notice and additional notices stating that the work has been modified.
+ * This source code may be translated into executable form and incorporated
+ * into proprietary software; there is no requirement for such software to
+ * contain a copyright notice related to this source.
+ *
+ * $Id: hash.c,v 1.36.2.11 2000/11/13 01:36:45 kaz Exp $
+ * $Name: kazlib_1_20 $
+ *
+ * 2008-04-17 Small modifications to build with stricter warnings
+ * David Lutterkort <dlutter@redhat.com>
+ *
+ */
+
+#include <config.h>
+
+#include <stdlib.h>
+#include <stddef.h>
+#include <assert.h>
+#include <string.h>
+#define HASH_IMPLEMENTATION
+#include "internal.h"
+#include "hash.h"
+
+#ifdef HASH_DEBUG_VERIFY
+# define expensive_assert(expr) assert (expr)
+#else
+# define expensive_assert(expr) /* empty */
+#endif
+
+#ifdef KAZLIB_RCSID
+static const char rcsid[] = "$Id: hash.c,v 1.36.2.11 2000/11/13 01:36:45 kaz Exp $";
+#endif
+
+#define INIT_BITS 4
+#define INIT_SIZE (1UL << (INIT_BITS)) /* must be power of two */
+#define INIT_MASK ((INIT_SIZE) - 1)
+
+#define next hash_next
+#define key hash_key
+#define data hash_data
+#define hkey hash_hkey
+
+#define table hash_table
+#define nchains hash_nchains
+#define nodecount hash_nodecount
+#define maxcount hash_maxcount
+#define highmark hash_highmark
+#define lowmark hash_lowmark
+#define compare hash_compare
+#define function hash_function
+#define allocnode hash_allocnode
+#define freenode hash_freenode
+#define context hash_context
+#define mask hash_mask
+#define dynamic hash_dynamic
+
+#define chain hash_chain
+
+static hnode_t *hnode_alloc(void *context);
+static void hnode_free(hnode_t *node, void *context);
+static hash_val_t hash_fun_default(const void *key);
+static int hash_comp_default(const void *key1, const void *key2);
+
+int hash_val_t_bit;
+
+/*
+ * Compute the number of bits in the hash_val_t type. We know that hash_val_t
+ * is an unsigned integral type. Thus the highest value it can hold is a
+ * Mersenne number (power of two, less one). We initialize a hash_val_t
+ * object with this value and then shift bits out one by one while counting.
+ * Notes:
+ * 1. HASH_VAL_T_MAX is a Mersenne number---one that is one less than a power
+ * of two. This means that its binary representation consists of all one
+ * bits, and hence ``val'' is initialized to all one bits.
+ * 2. While bits remain in val, we increment the bit count and shift it to the
+ * right, replacing the topmost bit by zero.
+ */
+
+static void compute_bits(void)
+{
+ hash_val_t val = HASH_VAL_T_MAX; /* 1 */
+ int bits = 0;
+
+ while (val) { /* 2 */
+ bits++;
+ val >>= 1;
+ }
+
+ hash_val_t_bit = bits;
+}
+
+/*
+ * Verify whether the given argument is a power of two.
+ */
+
+static int is_power_of_two(hash_val_t arg)
+{
+ if (arg == 0)
+ return 0;
+ while ((arg & 1) == 0)
+ arg >>= 1;
+ return (arg == 1);
+}
+
+/*
+ * Compute a shift amount from a given table size
+ */
+
+static hash_val_t compute_mask(hashcount_t size)
+{
+ assert (is_power_of_two(size));
+ assert (size >= 2);
+
+ return size - 1;
+}
+
+/*
+ * Initialize the table of pointers to null.
+ */
+
+static void clear_table(hash_t *hash)
+{
+ hash_val_t i;
+
+ for (i = 0; i < hash->nchains; i++)
+ hash->table[i] = NULL;
+}
+
+/*
+ * Double the size of a dynamic table. This works as follows. Each chain splits
+ * into two adjacent chains. The shift amount increases by one, exposing an
+ * additional bit of each hashed key. For each node in the original chain, the
+ * value of this newly exposed bit will decide which of the two new chains will
+ * receive the node: if the bit is 1, the chain with the higher index will have
+ * the node, otherwise the lower chain will receive the node. In this manner,
+ * the hash table will continue to function exactly as before without having to
+ * rehash any of the keys.
+ * Notes:
+ * 1. Overflow check.
+ * 2. The new number of chains is twice the old number of chains.
+ * 3. The new mask is one bit wider than the previous, revealing a
+ * new bit in all hashed keys.
+ * 4. Allocate a new table of chain pointers that is twice as large as the
+ * previous one.
+ * 5. If the reallocation was successful, we perform the rest of the growth
+ * algorithm, otherwise we do nothing.
+ * 6. The exposed_bit variable holds a mask with which each hashed key can be
+ * AND-ed to test the value of its newly exposed bit.
+ * 7. Now loop over each chain in the table and sort its nodes into two
+ * chains based on the value of each node's newly exposed hash bit.
+ * 8. The low chain replaces the current chain. The high chain goes
+ * into the corresponding sister chain in the upper half of the table.
+ * 9. We have finished dealing with the chains and nodes. We now update
+ * the various bookkeeping fields of the hash structure.
+ */
+
+static void grow_table(hash_t *hash)
+{
+ hnode_t **newtable;
+
+ assert (2 * hash->nchains > hash->nchains); /* 1 */
+
+ newtable = realloc(hash->table,
+ sizeof *newtable * hash->nchains * 2); /* 4 */
+
+ if (newtable) { /* 5 */
+ hash_val_t mask = (hash->mask << 1) | 1; /* 3 */
+ hash_val_t exposed_bit = mask ^ hash->mask; /* 6 */
+ hash_val_t chain;
+
+ assert (mask != hash->mask);
+
+ for (chain = 0; chain < hash->nchains; chain++) { /* 7 */
+ hnode_t *low_chain = 0, *high_chain = 0, *hptr, *next;
+
+ for (hptr = newtable[chain]; hptr != 0; hptr = next) {
+ next = hptr->next;
+
+ if (hptr->hkey & exposed_bit) {
+ hptr->next = high_chain;
+ high_chain = hptr;
+ } else {
+ hptr->next = low_chain;
+ low_chain = hptr;
+ }
+ }
+
+ newtable[chain] = low_chain; /* 8 */
+ newtable[chain + hash->nchains] = high_chain;
+ }
+
+ hash->table = newtable; /* 9 */
+ hash->mask = mask;
+ hash->nchains *= 2;
+ hash->lowmark *= 2;
+ hash->highmark *= 2;
+ }
+ expensive_assert (hash_verify(hash));
+}
+
+/*
+ * Cut a table size in half. This is done by folding together adjacent chains
+ * and populating the lower half of the table with these chains. The chains are
+ * simply spliced together. Once this is done, the whole table is reallocated
+ * to a smaller object.
+ * Notes:
+ * 1. It is illegal to have a hash table with one slot: the mask would become
+ * zero, and other things could go wrong, such as hash->lowmark becoming
+ * zero.
+ * 2. Looping over each pair of sister chains, the low_chain is set to
+ * point to the head node of the chain in the lower half of the table,
+ * and high_chain points to the head node of the sister in the upper half.
+ * 3. The intent here is to compute a pointer to the last node of the
+ * lower chain into the low_tail variable. If this chain is empty,
+ * low_tail ends up with a null value.
+ * 4. If the lower chain is not empty, we simply tack the upper chain onto it.
+ * If the upper chain is a null pointer, nothing happens.
+ * 5. Otherwise, if the low chain is empty but the high chain is not, the
+ * high chain is simply transferred to the lower half of the table.
+ * 6. Otherwise if both chains are empty, there is nothing to do.
+ * 7. All the chain pointers are in the lower half of the table now, so
+ * we reallocate it to a smaller object. This, of course, invalidates
+ * all pointer-to-pointers which reference into the table from the
+ * first node of each chain.
+ * 8. Though it's unlikely, the reallocation may fail. In this case we
+ * pretend that the table _was_ reallocated to a smaller object.
+ * 9. Finally, update the various table parameters to reflect the new size.
+ */
+
+static void shrink_table(hash_t *hash)
+{
+ hash_val_t chain, nchains;
+ hnode_t **newtable, *low_tail, *low_chain, *high_chain;
+
+ assert (hash->nchains >= 2); /* 1 */
+ nchains = hash->nchains / 2;
+
+ for (chain = 0; chain < nchains; chain++) {
+ low_chain = hash->table[chain]; /* 2 */
+ high_chain = hash->table[chain + nchains];
+ for (low_tail = low_chain; low_tail && low_tail->next; low_tail = low_tail->next)
+ ; /* 3 */
+ if (low_chain != 0) /* 4 */
+ low_tail->next = high_chain;
+ else if (high_chain != 0) /* 5 */
+ hash->table[chain] = high_chain;
+ else
+ assert (hash->table[chain] == NULL); /* 6 */
+ }
+ newtable = realloc(hash->table,
+ sizeof *newtable * nchains); /* 7 */
+ if (newtable) /* 8 */
+ hash->table = newtable;
+ hash->mask >>= 1; /* 9 */
+ hash->nchains = nchains;
+ hash->lowmark /= 2;
+ hash->highmark /= 2;
+ expensive_assert (hash_verify(hash));
+}
+
+
+/*
+ * Create a dynamic hash table. Both the hash table structure and the table
+ * itself are dynamically allocated. Furthermore, the table is extendible in
+ * that it will automatically grow as its load factor increases beyond a
+ * certain threshold.
+ * Notes:
+ * 1. If the number of bits in the hash_val_t type has not been computed yet,
+ * we do so here, because this is likely to be the first function that the
+ * user calls.
+ * 2. Allocate a hash table control structure.
+ * 3. If a hash table control structure is successfully allocated, we
+ * proceed to initialize it. Otherwise we return a null pointer.
+ * 4. We try to allocate the table of hash chains.
+ * 5. If we were able to allocate the hash chain table, we can finish
+ * initializing the hash structure and the table. Otherwise, we must
+ * backtrack by freeing the hash structure.
+ * 6. INIT_SIZE should be a power of two. The high and low marks are always set
+ * to be twice the table size and half the table size respectively. When the
+ * number of nodes in the table grows beyond the high size (beyond load
+ * factor 2), it will double in size to cut the load factor down to
+ * about 1. If the table shrinks down to or beneath load factor 0.5,
+ * it will shrink, bringing the load up to about 1. However, the table
+ * will never shrink beneath INIT_SIZE even if it's emptied.
+ * 7. This indicates that the table is dynamically allocated and dynamically
+ * resized on the fly. A table that has this value set to zero is
+ * assumed to be statically allocated and will not be resized.
+ * 8. The table of chains must be properly reset to all null pointers.
+ */
+
+hash_t *hash_create(hashcount_t maxcount, hash_comp_t compfun,
+ hash_fun_t hashfun)
+{
+ hash_t *hash;
+
+ if (hash_val_t_bit == 0) /* 1 */
+ compute_bits();
+
+ hash = malloc(sizeof *hash); /* 2 */
+
+ if (hash) { /* 3 */
+ hash->table = malloc(sizeof *hash->table * INIT_SIZE); /* 4 */
+ if (hash->table) { /* 5 */
+ hash->nchains = INIT_SIZE; /* 6 */
+ hash->highmark = INIT_SIZE * 2;
+ hash->lowmark = INIT_SIZE / 2;
+ hash->nodecount = 0;
+ hash->maxcount = maxcount;
+ hash->compare = compfun ? compfun : hash_comp_default;
+ hash->function = hashfun ? hashfun : hash_fun_default;
+ hash->allocnode = hnode_alloc;
+ hash->freenode = hnode_free;
+ hash->context = NULL;
+ hash->mask = INIT_MASK;
+ hash->dynamic = 1; /* 7 */
+ clear_table(hash); /* 8 */
+ expensive_assert (hash_verify(hash));
+ return hash;
+ }
+ free(hash);
+ }
+
+ return NULL;
+}
+
+/*
+ * Select a different set of node allocator routines.
+ */
+
+void hash_set_allocator(hash_t *hash, hnode_alloc_t al,
+ hnode_free_t fr, void *context)
+{
+ assert (hash_count(hash) == 0);
+
+ hash->allocnode = al ? al : hnode_alloc;
+ hash->freenode = fr ? fr : hnode_free;
+ hash->context = context;
+}
+
+/*
+ * Free every node in the hash using the hash->freenode() function pointer, and
+ * cause the hash to become empty.
+ */
+
+void hash_free_nodes(hash_t *hash)
+{
+ hnode_t *node, *next;
+ hash_val_t chain;
+
+ for (chain = 0; chain < hash->nchains; chain ++) {
+ node = hash->table[chain];
+ while (node != NULL) {
+ next = node->next;
+ hash->freenode(node, hash->context);
+ node = next;
+ }
+ hash->table[chain] = NULL;
+ }
+ hash->nodecount = 0;
+ clear_table(hash);
+}
+
+/*
+ * Obsolescent function for removing all nodes from a table,
+ * freeing them and then freeing the table all in one step.
+ */
+
+void hash_free(hash_t *hash)
+{
+#ifdef KAZLIB_OBSOLESCENT_DEBUG
+ assert ("call to obsolescent function hash_free()" && 0);
+#endif
+ hash_free_nodes(hash);
+ hash_destroy(hash);
+}
+
+/*
+ * Free a dynamic hash table structure.
+ */
+
+void hash_destroy(hash_t *hash)
+{
+ assert (hash_val_t_bit != 0);
+ assert (hash_isempty(hash));
+ free(hash->table);
+ free(hash);
+}
+
+/*
+ * Initialize a user supplied hash structure. The user also supplies a table of
+ * chains which is assigned to the hash structure. The table is static---it
+ * will not grow or shrink.
+ * 1. See note 1. in hash_create().
+ * 2. The user supplied array of pointers hopefully contains nchains nodes.
+ * 3. See note 7. in hash_create().
+ * 4. We must dynamically compute the mask from the given power of two table
+ * size.
+ * 5. The user supplied table can't be assumed to contain null pointers,
+ * so we reset it here.
+ */
+
+hash_t *hash_init(hash_t *hash, hashcount_t maxcount,
+ hash_comp_t compfun, hash_fun_t hashfun, hnode_t **table,
+ hashcount_t nchains)
+{
+ if (hash_val_t_bit == 0) /* 1 */
+ compute_bits();
+
+ assert (is_power_of_two(nchains));
+
+ hash->table = table; /* 2 */
+ hash->nchains = nchains;
+ hash->nodecount = 0;
+ hash->maxcount = maxcount;
+ hash->compare = compfun ? compfun : hash_comp_default;
+ hash->function = hashfun ? hashfun : hash_fun_default;
+ hash->dynamic = 0; /* 3 */
+ hash->mask = compute_mask(nchains); /* 4 */
+ clear_table(hash); /* 5 */
+
+ expensive_assert (hash_verify(hash));
+ return hash;
+}
+
+/*
+ * Reset the hash scanner so that the next element retrieved by
+ * hash_scan_next() shall be the first element on the first non-empty chain.
+ * Notes:
+ * 1. Locate the first non-empty chain.
+ * 2. If a non-empty chain is found, remember which one it is and set the next
+ * pointer to refer to its first element.
+ * 3. Otherwise, if no non-empty chain is found, set the next pointer to NULL
+ * so that hash_scan_next() shall indicate failure.
+ */
+
+void hash_scan_begin(hscan_t *scan, hash_t *hash)
+{
+ hash_val_t nchains = hash->nchains;
+ hash_val_t chain;
+
+ scan->table = hash;
+
+ /* 1 */
+
+ for (chain = 0; chain < nchains && hash->table[chain] == 0; chain++)
+ ;
+
+ if (chain < nchains) { /* 2 */
+ scan->chain = chain;
+ scan->next = hash->table[chain];
+ } else { /* 3 */
+ scan->next = NULL;
+ }
+}
+
+/*
+ * Retrieve the next node from the hash table, and update the pointer
+ * for the next invocation of hash_scan_next().
+ * Notes:
+ * 1. Remember the next pointer in a temporary value so that it can be
+ * returned.
+ * 2. This assertion essentially checks whether the module has been properly
+ * initialized. The first point of interaction with the module should be
+ * either hash_create() or hash_init(), both of which set hash_val_t_bit to
+ * a non zero value.
+ * 3. If the next pointer we are returning is not NULL, then the user is
+ * allowed to call hash_scan_next() again. We prepare the new next pointer
+ * for that call right now. That way the user is allowed to delete the node
+ * we are about to return, since we will no longer be needing it to locate
+ * the next node.
+ * 4. If there is a next node in the chain (next->next), then that becomes the
+ * new next node, otherwise ...
+ * 5. We have exhausted the current chain, and must locate the next subsequent
+ * non-empty chain in the table.
+ * 6. If a non-empty chain is found, the first element of that chain becomes
+ * the new next node. Otherwise there is no new next node and we set the
+ * pointer to NULL so that the next time hash_scan_next() is called, a null
+ * pointer shall be immediately returned.
+ */
+
+
+hnode_t *hash_scan_next(hscan_t *scan)
+{
+ hnode_t *next = scan->next; /* 1 */
+ hash_t *hash = scan->table;
+ hash_val_t chain = scan->chain + 1;
+ hash_val_t nchains = hash->nchains;
+
+ assert (hash_val_t_bit != 0); /* 2 */
+
+ if (next) { /* 3 */
+ if (next->next) { /* 4 */
+ scan->next = next->next;
+ } else {
+ while (chain < nchains && hash->table[chain] == 0) /* 5 */
+ chain++;
+ if (chain < nchains) { /* 6 */
+ scan->chain = chain;
+ scan->next = hash->table[chain];
+ } else {
+ scan->next = NULL;
+ }
+ }
+ }
+ return next;
+}
+
+/*
+ * Insert a node into the hash table.
+ * Notes:
+ * 1. It's illegal to insert more than the maximum number of nodes. The client
+ * should verify that the hash table is not full before attempting an
+ * insertion.
+ * 2. The same key may not be inserted into a table twice.
+ * 3. If the table is dynamic and the load factor is already at >= 2,
+ * grow the table.
+ * 4. We take the bottom N bits of the hash value to derive the chain index,
+ * where N is the base 2 logarithm of the size of the hash table.
+ */
+
+void hash_insert(hash_t *hash, hnode_t *node, const void *key)
+{
+ hash_val_t hkey, chain;
+
+ assert (hash_val_t_bit != 0);
+ assert (node->next == NULL);
+ assert (hash->nodecount < hash->maxcount); /* 1 */
+ expensive_assert (hash_lookup(hash, key) == NULL); /* 2 */
+
+ if (hash->dynamic && hash->nodecount >= hash->highmark) /* 3 */
+ grow_table(hash);
+
+ hkey = hash->function(key);
+ chain = hkey & hash->mask; /* 4 */
+
+ node->key = key;
+ node->hkey = hkey;
+ node->next = hash->table[chain];
+ hash->table[chain] = node;
+ hash->nodecount++;
+
+ expensive_assert (hash_verify(hash));
+}
+
+/*
+ * Find a node in the hash table and return a pointer to it.
+ * Notes:
+ * 1. We hash the key and keep the entire hash value. As an optimization, when
+ * we descend down the chain, we can compare hash values first and only if
+ * hash values match do we perform a full key comparison.
+ * 2. To locate the chain from among 2^N chains, we look at the lower N bits of
+ * the hash value by anding them with the current mask.
+ * 3. Looping through the chain, we compare the stored hash value inside each
+ * node against our computed hash. If they match, then we do a full
+ * comparison between the unhashed keys. If these match, we have located the
+ * entry.
+ */
+
+hnode_t *hash_lookup(hash_t *hash, const void *key)
+{
+ hash_val_t hkey, chain;
+ hnode_t *nptr;
+
+ hkey = hash->function(key); /* 1 */
+ chain = hkey & hash->mask; /* 2 */
+
+ for (nptr = hash->table[chain]; nptr; nptr = nptr->next) { /* 3 */
+ if (nptr->hkey == hkey && hash->compare(nptr->key, key) == 0)
+ return nptr;
+ }
+
+ return NULL;
+}
+
+/*
+ * Delete the given node from the hash table. Since the chains
+ * are singly linked, we must locate the start of the node's chain
+ * and traverse.
+ * Notes:
+ * 1. The node must belong to this hash table, and its key must not have
+ * been tampered with.
+ * 2. If this deletion will take the node count below the low mark, we
+ * shrink the table now.
+ * 3. Determine which chain the node belongs to, and fetch the pointer
+ * to the first node in this chain.
+ * 4. If the node being deleted is the first node in the chain, then
+ * simply update the chain head pointer.
+ * 5. Otherwise advance to the node's predecessor, and splice out
+ * by updating the predecessor's next pointer.
+ * 6. Indicate that the node is no longer in a hash table.
+ */
+
+hnode_t *hash_delete(hash_t *hash, hnode_t *node)
+{
+ hash_val_t chain;
+ hnode_t *hptr;
+
+ expensive_assert (hash_lookup(hash, node->key) == node); /* 1 */
+ assert (hash_val_t_bit != 0);
+
+ if (hash->dynamic && hash->nodecount <= hash->lowmark
+ && hash->nodecount > INIT_SIZE)
+ shrink_table(hash); /* 2 */
+
+ chain = node->hkey & hash->mask; /* 3 */
+ hptr = hash->table[chain];
+
+ if (hptr == node) { /* 4 */
+ hash->table[chain] = node->next;
+ } else {
+ while (hptr->next != node) { /* 5 */
+ assert (hptr != 0);
+ hptr = hptr->next;
+ }
+ assert (hptr->next == node);
+ hptr->next = node->next;
+ }
+
+ hash->nodecount--;
+ expensive_assert (hash_verify(hash));
+
+ node->next = NULL; /* 6 */
+ return node;
+}
+
+int hash_alloc_insert(hash_t *hash, const void *key, void *data)
+{
+ hnode_t *node = hash->allocnode(hash->context);
+
+ if (node) {
+ hnode_init(node, data);
+ hash_insert(hash, node, key);
+ return 0;
+ }
+ return -1;
+}
+
+void hash_delete_free(hash_t *hash, hnode_t *node)
+{
+ hash_delete(hash, node);
+ hash->freenode(node, hash->context);
+}
+
+/*
+ * Exactly like hash_delete, except that it does not trigger table shrinkage.
+ * This is to be used from within a hash table scan operation. See the notes
+ * for hash_delete.
+ */
+
+hnode_t *hash_scan_delete(hash_t *hash, hnode_t *node)
+{
+ hash_val_t chain;
+ hnode_t *hptr;
+
+ expensive_assert (hash_lookup(hash, node->key) == node);
+ assert (hash_val_t_bit != 0);
+
+ chain = node->hkey & hash->mask;
+ hptr = hash->table[chain];
+
+ if (hptr == node) {
+ hash->table[chain] = node->next;
+ } else {
+ while (hptr->next != node)
+ hptr = hptr->next;
+ hptr->next = node->next;
+ }
+
+ hash->nodecount--;
+ expensive_assert (hash_verify(hash));
+ node->next = NULL;
+
+ return node;
+}
+
+/*
+ * Like hash_delete_free but based on hash_scan_delete.
+ */
+
+void hash_scan_delfree(hash_t *hash, hnode_t *node)
+{
+ hash_scan_delete(hash, node);
+ hash->freenode(node, hash->context);
+}
+
+/*
+ * Verify whether the given object is a valid hash table.
+ * Notes:
+ * 1. If the hash table is dynamic, verify whether the high and
+ * low expansion/shrinkage thresholds are powers of two.
+ * 2. Count all nodes in the table, and test each hash value
+ * to see whether it is correct for the node's chain.
+ */
+
+int hash_verify(hash_t *hash)
+{
+ hashcount_t count = 0;
+ hash_val_t chain;
+ hnode_t *hptr;
+
+ if (hash->dynamic) { /* 1 */
+ if (hash->lowmark >= hash->highmark)
+ return 0;
+ if (!is_power_of_two(hash->highmark))
+ return 0;
+ if (!is_power_of_two(hash->lowmark))
+ return 0;
+ }
+
+ for (chain = 0; chain < hash->nchains; chain++) { /* 2 */
+ for (hptr = hash->table[chain]; hptr != 0; hptr = hptr->next) {
+ if ((hptr->hkey & hash->mask) != chain)
+ return 0;
+ count++;
+ }
+ }
+
+ if (count != hash->nodecount)
+ return 0;
+
+ return 1;
+}
+
+/*
+ * Test whether the hash table is full and return 1 if this is true,
+ * 0 if it is false.
+ */
+
+#undef hash_isfull
+int hash_isfull(hash_t *hash)
+{
+ return hash->nodecount == hash->maxcount;
+}
+
+/*
+ * Test whether the hash table is empty and return 1 if this is true,
+ * 0 if it is false.
+ */
+
+#undef hash_isempty
+int hash_isempty(hash_t *hash)
+{
+ return hash->nodecount == 0;
+}
+
+static hnode_t *hnode_alloc(ATTRIBUTE_UNUSED void *context)
+{
+ return malloc(sizeof *hnode_alloc(NULL));
+}
+
+static void hnode_free(hnode_t *node, ATTRIBUTE_UNUSED void *context)
+{
+ free(node);
+}
+
+
+/*
+ * Create a hash table node dynamically and assign it the given data.
+ */
+
+hnode_t *hnode_create(void *data)
+{
+ hnode_t *node = malloc(sizeof *node);
+ if (node) {
+ node->data = data;
+ node->next = NULL;
+ }
+ return node;
+}
+
+/*
+ * Initialize a client-supplied node
+ */
+
+hnode_t *hnode_init(hnode_t *hnode, void *data)
+{
+ hnode->data = data;
+ hnode->next = NULL;
+ return hnode;
+}
+
+/*
+ * Destroy a dynamically allocated node.
+ */
+
+void hnode_destroy(hnode_t *hnode)
+{
+ free(hnode);
+}
+
+#undef hnode_put
+void hnode_put(hnode_t *node, void *data)
+{
+ node->data = data;
+}
+
+#undef hnode_get
+void *hnode_get(hnode_t *node)
+{
+ return node->data;
+}
+
+#undef hnode_getkey
+const void *hnode_getkey(hnode_t *node)
+{
+ return node->key;
+}
+
+#undef hash_count
+hashcount_t hash_count(hash_t *hash)
+{
+ return hash->nodecount;
+}
+
+#undef hash_size
+hashcount_t hash_size(hash_t *hash)
+{
+ return hash->nchains;
+}
+
+static hash_val_t hash_fun_default(const void *key)
+{
+ static unsigned long randbox[] = {
+ 0x49848f1bU, 0xe6255dbaU, 0x36da5bdcU, 0x47bf94e9U,
+ 0x8cbcce22U, 0x559fc06aU, 0xd268f536U, 0xe10af79aU,
+ 0xc1af4d69U, 0x1d2917b5U, 0xec4c304dU, 0x9ee5016cU,
+ 0x69232f74U, 0xfead7bb3U, 0xe9089ab6U, 0xf012f6aeU,
+ };
+
+ const unsigned char *str = key;
+ hash_val_t acc = 0;
+
+ while (*str) {
+ acc ^= randbox[(*str + acc) & 0xf];
+ acc = (acc << 1) | (acc >> 31);
+ acc &= 0xffffffffU;
+ acc ^= randbox[((*str++ >> 4) + acc) & 0xf];
+ acc = (acc << 2) | (acc >> 30);
+ acc &= 0xffffffffU;
+ }
+ return acc;
+}
+
+static int hash_comp_default(const void *key1, const void *key2)
+{
+ return strcmp(key1, key2);
+}
+
+#ifdef KAZLIB_TEST_MAIN
+
+#include <stdio.h>
+#include <ctype.h>
+#include <stdarg.h>
+
+typedef char input_t[256];
+
+static int tokenize(char *string, ...)
+{
+ char **tokptr;
+ va_list arglist;
+ int tokcount = 0;
+
+ va_start(arglist, string);
+ tokptr = va_arg(arglist, char **);
+ while (tokptr) {
+ while (*string && isspace((unsigned char) *string))
+ string++;
+ if (!*string)
+ break;
+ *tokptr = string;
+ while (*string && !isspace((unsigned char) *string))
+ string++;
+ tokptr = va_arg(arglist, char **);
+ tokcount++;
+ if (!*string)
+ break;
+ *string++ = 0;
+ }
+ va_end(arglist);
+
+ return tokcount;
+}
+
+static char *dupstring(char *str)
+{
+ int sz = strlen(str) + 1;
+ char *new = malloc(sz);
+ if (new)
+ memcpy(new, str, sz);
+ return new;
+}
+
+static hnode_t *new_node(void *c)
+{
+ static hnode_t few[5];
+ static int count;
+
+ if (count < 5)
+ return few + count++;
+
+ return NULL;
+}
+
+static void del_node(hnode_t *n, void *c)
+{
+}
+
+int main(void)
+{
+ input_t in;
+ hash_t *h = hash_create(HASHCOUNT_T_MAX, 0, 0);
+ hnode_t *hn;
+ hscan_t hs;
+ char *tok1, *tok2, *val;
+ const char *key;
+ int prompt = 0;
+
+ char *help =
+ "a <key> <val> add value to hash table\n"
+ "d <key> delete value from hash table\n"
+ "l <key> lookup value in hash table\n"
+ "n show size of hash table\n"
+ "c show number of entries\n"
+ "t dump whole hash table\n"
+ "+ increase hash table (private func)\n"
+ "- decrease hash table (private func)\n"
+ "b print hash_t_bit value\n"
+ "p turn prompt on\n"
+ "s switch to non-functioning allocator\n"
+ "q quit";
+
+ if (!h)
+ puts("hash_create failed");
+
+ for (;;) {
+ if (prompt)
+ putchar('>');
+ fflush(stdout);
+
+ if (!fgets(in, sizeof(input_t), stdin))
+ break;
+
+ switch(in[0]) {
+ case '?':
+ puts(help);
+ break;
+ case 'b':
+ printf("%d\n", hash_val_t_bit);
+ break;
+ case 'a':
+ if (tokenize(in+1, &tok1, &tok2, (char **) 0) != 2) {
+ puts("what?");
+ break;
+ }
+ key = dupstring(tok1);
+ val = dupstring(tok2);
+
+ if (!key || !val) {
+ puts("out of memory");
+ free((void *) key);
+ free(val);
+ break;
+ }
+
+ if (hash_alloc_insert(h, key, val) < 0) {
+ puts("hash_alloc_insert failed");
+ free((void *) key);
+ free(val);
+ break;
+ }
+ break;
+ case 'd':
+ if (tokenize(in+1, &tok1, (char **) 0) != 1) {
+ puts("what?");
+ break;
+ }
+ hn = hash_lookup(h, tok1);
+ if (!hn) {
+ puts("hash_lookup failed");
+ break;
+ }
+ val = hnode_get(hn);
+ key = hnode_getkey(hn);
+ hash_scan_delfree(h, hn);
+ free((void *) key);
+ free(val);
+ break;
+ case 'l':
+ if (tokenize(in+1, &tok1, (char **) 0) != 1) {
+ puts("what?");
+ break;
+ }
+ hn = hash_lookup(h, tok1);
+ if (!hn) {
+ puts("hash_lookup failed");
+ break;
+ }
+ val = hnode_get(hn);
+ puts(val);
+ break;
+ case 'n':
+ printf("%lu\n", (unsigned long) hash_size(h));
+ break;
+ case 'c':
+ printf("%lu\n", (unsigned long) hash_count(h));
+ break;
+ case 't':
+ hash_scan_begin(&hs, h);
+ while ((hn = hash_scan_next(&hs)))
+ printf("%s\t%s\n", (char*) hnode_getkey(hn),
+ (char*) hnode_get(hn));
+ break;
+ case '+':
+ grow_table(h); /* private function */
+ break;
+ case '-':
+ shrink_table(h); /* private function */
+ break;
+ case 'q':
+ exit(0);
+ break;
+ case '\0':
+ break;
+ case 'p':
+ prompt = 1;
+ break;
+ case 's':
+ hash_set_allocator(h, new_node, del_node, NULL);
+ break;
+ default:
+ putchar('?');
+ putchar('\n');
+ break;
+ }
+ }
+
+ return 0;
+}
+
+#endif
--- /dev/null
+/*
+ * Hash Table Data Type
+ * Copyright (C) 1997 Kaz Kylheku <kaz@ashi.footprints.net>
+ *
+ * Free Software License:
+ *
+ * All rights are reserved by the author, with the following exceptions:
+ * Permission is granted to freely reproduce and distribute this software,
+ * possibly in exchange for a fee, provided that this copyright notice appears
+ * intact. Permission is also granted to adapt this software to produce
+ * derivative works, as long as the modified versions carry this copyright
+ * notice and additional notices stating that the work has been modified.
+ * This source code may be translated into executable form and incorporated
+ * into proprietary software; there is no requirement for such software to
+ * contain a copyright notice related to this source.
+ *
+ * $Id: hash.h,v 1.22.2.7 2000/11/13 01:36:45 kaz Exp $
+ * $Name: kazlib_1_20 $
+ */
+
+#ifndef HASH_H
+#define HASH_H
+
+#include <limits.h>
+#ifdef KAZLIB_SIDEEFFECT_DEBUG
+#include "sfx.h"
+#endif
+
+/*
+ * Blurb for inclusion into C++ translation units
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+typedef unsigned long hashcount_t;
+#define HASHCOUNT_T_MAX ULONG_MAX
+
+typedef unsigned long hash_val_t;
+#define HASH_VAL_T_MAX ULONG_MAX
+
+extern int hash_val_t_bit;
+
+#ifndef HASH_VAL_T_BIT
+#define HASH_VAL_T_BIT ((int) hash_val_t_bit)
+#endif
+
+/*
+ * Hash chain node structure.
+ * Notes:
+ * 1. This preprocessing directive is for debugging purposes. The effect is
+ * that if the preprocessor symbol KAZLIB_OPAQUE_DEBUG is defined prior to the
+ * inclusion of this header, then the structure shall be declared as having
+ * the single member int hash_dummy. This way, any attempts by the
+ * client code to violate the principles of information hiding (by accessing
+ * the structure directly) can be diagnosed at translation time. However,
+ * note the resulting compiled unit is not suitable for linking.
+ * 2. This is a pointer to the next node in the chain. In the last node of a
+ * chain, this pointer is null.
+ * 3. The key is a pointer to some user supplied data that contains a unique
+ * identifier for each hash node in a given table. The interpretation of
+ * the data is up to the user. When creating or initializing a hash table,
+ * the user must supply a pointer to a function for comparing two keys,
+ * and a pointer to a function for hashing a key into a numeric value.
+ * 4. The value is a user-supplied pointer to void which may refer to
+ * any data object. It is not interpreted in any way by the hashing
+ * module.
+ * 5. The hashed key is stored in each node so that we don't have to rehash
+ * each key when the table must grow or shrink.
+ */
+
+typedef struct hnode_t {
+ #if defined(HASH_IMPLEMENTATION) || !defined(KAZLIB_OPAQUE_DEBUG) /* 1 */
+ struct hnode_t *hash_next; /* 2 */
+ const void *hash_key; /* 3 */
+ void *hash_data; /* 4 */
+ hash_val_t hash_hkey; /* 5 */
+ #else
+ int hash_dummy;
+ #endif
+} hnode_t;
+
+/*
+ * The comparison function pointer type. A comparison function takes two keys
+ * and produces a value of -1 if the left key is less than the right key, a
+ * value of 0 if the keys are equal, and a value of 1 if the left key is
+ * greater than the right key.
+ */
+
+typedef int (*hash_comp_t)(const void *, const void *);
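Since key interpretation is left entirely to the client (see note 3 on hnode_t above), a comparator need not operate on strings. As an illustrative sketch with a hypothetical key type, a comparator for keys that point to ints could look like this:

```c
#include <assert.h>

/* Comparator for int keys accessed through void pointers; returns a
 * negative, zero, or positive value per the documented contract.
 * The (a > b) - (a < b) idiom yields exactly -1, 0, or 1 and cannot
 * overflow the way a - b could. */
static int compare_int_keys(const void *key1, const void *key2)
{
    int a = *(const int *) key1;
    int b = *(const int *) key2;
    return (a > b) - (a < b);
}
```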
+
+/*
+ * The hashing function performs some computation on a key and produces an
+ * integral value of type hash_val_t based on that key. For best results, the
+ * function should have good randomness properties in *all* significant bits
+ * over the set of keys that are being inserted into a given hash table. In
+ * particular, the most significant bits of hash_val_t are most significant to
+ * the hash module. Only as the hash table expands are less significant bits
+ * examined. Thus a function that has good distribution in its upper bits but
+ * not in the lower ones is preferable to one that has poor distribution in
+ * the upper bits but good distribution in the lower ones.
+ */
+
+typedef hash_val_t (*hash_fun_t)(const void *);
+
+/*
+ * allocator functions
+ */
+
+typedef hnode_t *(*hnode_alloc_t)(void *);
+typedef void (*hnode_free_t)(hnode_t *, void *);
+
+/*
+ * This is the hash table control structure. It keeps track of information
+ * about a hash table, as well as the hash table itself.
+ * Notes:
+ * 1. Pointer to the hash table proper. The table is an array of pointers to
+ * hash nodes (of type hnode_t). If the table is empty, every element of
+ * this table is a null pointer. A non-null entry points to the first
+ * element of a chain of nodes.
+ * 2. This member keeps track of the size of the hash table---that is, the
+ * number of chain pointers.
+ * 3. The count member maintains the number of elements that are presently
+ * in the hash table.
+ * 4. The maximum count is the greatest number of nodes that can populate this
+ * table. If the table contains this many nodes, no more can be inserted,
+ * and the hash_isfull() function returns true.
+ * 5. The high mark is a population threshold, measured as a number of nodes,
+ * which, if exceeded, will trigger a table expansion. Only dynamic hash
+ * tables are subject to this expansion.
+ * 6. The low mark is a minimum population threshold, measured as a number of
+ * nodes. If the table population drops below this value, a table shrinkage
+ * will occur. Only dynamic tables are subject to this reduction. No table
+ * will shrink beneath a certain absolute minimum number of nodes.
+ * 7. This is a pointer to the hash table's comparison function. The
+ * function is set once at initialization or creation time.
+ * 8. Pointer to the table's hashing function, set once at creation or
+ * initialization time.
+ * 9. The current hash table mask. If the size of the hash table is 2^N,
+ * this value has its low N bits set to 1, and the others clear. It is used
+ * to select bits from the result of the hashing function to compute an
+ * index into the table.
+ * 10. A flag which indicates whether the table is to be dynamically resized. It
+ * is set to 1 in dynamically allocated tables, 0 in tables that are
+ * statically allocated.
+ */
+
+typedef struct hash_t {
+ #if defined(HASH_IMPLEMENTATION) || !defined(KAZLIB_OPAQUE_DEBUG)
+ struct hnode_t **hash_table; /* 1 */
+ hashcount_t hash_nchains; /* 2 */
+ hashcount_t hash_nodecount; /* 3 */
+ hashcount_t hash_maxcount; /* 4 */
+ hashcount_t hash_highmark; /* 5 */
+ hashcount_t hash_lowmark; /* 6 */
+ hash_comp_t hash_compare; /* 7 */
+ hash_fun_t hash_function; /* 8 */
+ hnode_alloc_t hash_allocnode;
+ hnode_free_t hash_freenode;
+ void *hash_context;
+ hash_val_t hash_mask; /* 9 */
+ int hash_dynamic; /* 10 */
+ #else
+ int hash_dummy;
+ #endif
+} hash_t;
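Note 9 above can be made concrete with a small standalone sketch (the names below are illustrative, not the module's API): for a table of 2^N chains, the mask is nchains - 1, and chain selection is a single AND, matching the `hptr->hkey & hash->mask` test performed in hash_verify().

```c
#include <assert.h>

/* For a table whose size is a power of two, the mask has the low N
 * bits set, and the chain index is just the hashed key ANDed with it. */
static unsigned long chain_index(unsigned long hkey, unsigned long nchains)
{
    unsigned long mask = nchains - 1;   /* valid only when nchains is 2^N */
    return hkey & mask;
}
```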
+
+/*
+ * Hash scanner structure, used for traversals of the data structure.
+ * Notes:
+ * 1. Pointer to the hash table that is being traversed.
+ * 2. Reference to the current chain in the table being traversed (the chain
+ * that contains the next node that shall be retrieved).
+ * 3. Pointer to the node that will be retrieved by the subsequent call to
+ * hash_scan_next().
+ */
+
+typedef struct hscan_t {
+ #if defined(HASH_IMPLEMENTATION) || !defined(KAZLIB_OPAQUE_DEBUG)
+ hash_t *hash_table; /* 1 */
+ hash_val_t hash_chain; /* 2 */
+ hnode_t *hash_next; /* 3 */
+ #else
+ int hash_dummy;
+ #endif
+} hscan_t;
+
+extern hash_t *hash_create(hashcount_t, hash_comp_t, hash_fun_t);
+extern void hash_set_allocator(hash_t *, hnode_alloc_t, hnode_free_t, void *);
+extern void hash_destroy(hash_t *);
+extern void hash_free_nodes(hash_t *);
+extern void hash_free(hash_t *);
+extern hash_t *hash_init(hash_t *, hashcount_t, hash_comp_t,
+ hash_fun_t, hnode_t **, hashcount_t);
+extern void hash_insert(hash_t *, hnode_t *, const void *);
+extern hnode_t *hash_lookup(hash_t *, const void *);
+extern hnode_t *hash_delete(hash_t *, hnode_t *);
+extern int hash_alloc_insert(hash_t *, const void *, void *);
+extern void hash_delete_free(hash_t *, hnode_t *);
+
+extern void hnode_put(hnode_t *, void *);
+extern void *hnode_get(hnode_t *);
+extern const void *hnode_getkey(hnode_t *);
+extern hashcount_t hash_count(hash_t *);
+extern hashcount_t hash_size(hash_t *);
+
+extern int hash_isfull(hash_t *);
+extern int hash_isempty(hash_t *);
+
+extern void hash_scan_begin(hscan_t *, hash_t *);
+extern hnode_t *hash_scan_next(hscan_t *);
+extern hnode_t *hash_scan_delete(hash_t *, hnode_t *);
+extern void hash_scan_delfree(hash_t *, hnode_t *);
+
+extern int hash_verify(hash_t *);
+
+extern hnode_t *hnode_create(void *);
+extern hnode_t *hnode_init(hnode_t *, void *);
+extern void hnode_destroy(hnode_t *);
+
+#if defined(HASH_IMPLEMENTATION) || !defined(KAZLIB_OPAQUE_DEBUG)
+#ifdef KAZLIB_SIDEEFFECT_DEBUG
+#define hash_isfull(H) (SFX_CHECK(H)->hash_nodecount == (H)->hash_maxcount)
+#else
+#define hash_isfull(H) ((H)->hash_nodecount == (H)->hash_maxcount)
+#endif
+#define hash_isempty(H) ((H)->hash_nodecount == 0)
+#define hash_count(H) ((H)->hash_nodecount)
+#define hash_size(H) ((H)->hash_nchains)
+#define hnode_get(N) ((N)->hash_data)
+#define hnode_getkey(N) ((N)->hash_key)
+#define hnode_put(N, V) ((N)->hash_data = (V))
+#endif
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
--- /dev/null
+/*
+ * info.c: filename/linenumber information for parser/interpreter
+ *
+ * Copyright (C) 2007-2016 David Lutterkort
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: David Lutterkort <lutter@redhat.com>
+ */
+
+#include <config.h>
+#include <stdbool.h>
+
+#include "info.h"
+#include "internal.h"
+#include "memory.h"
+#include "ref.h"
+#include "errcode.h"
+
+/*
+ * struct string
+ */
+struct string *make_string(char *str) {
+ struct string *string;
+ make_ref(string);
+ string->str = str;
+ return string;
+}
+
+struct string *dup_string(const char *str) {
+ struct string *string;
+ make_ref(string);
+ if (str == NULL)
+ string->str = strdup("");
+ else
+ string->str = strdup(str);
+ if (string->str == NULL)
+ unref(string, string);
+ return string;
+}
+
+void free_string(struct string *string) {
+ if (string == NULL)
+ return;
+ assert(string->ref == 0);
+ free(string->str);
+ free(string);
+}
+
+/*
+ * struct info
+ */
+char *format_info(struct info *info) {
+ const char *fname;
+ char *result = NULL;
+ int r = 0;
+
+ if (info == NULL) {
+ return strdup("(no file info)");
+ }
+
+ int fl = info->first_line, ll = info->last_line;
+ int fc = info->first_column, lc = info->last_column;
+ fname = (info->filename != NULL) ? info->filename->str : "(unknown file)";
+
+ if (fl > 0) {
+ if (fl == ll) {
+ if (fc == lc) {
+ r = xasprintf(&result, "%s:%d.%d:", fname, fl, fc);
+ } else {
+ r = xasprintf(&result, "%s:%d.%d-.%d:", fname, fl, fc, lc);
+ }
+ } else {
+ r = xasprintf(&result, "%s:%d.%d-%d.%d:", fname, fl, fc, ll, lc);
+ }
+ } else {
+ r = xasprintf(&result, "%s:", fname);
+ }
+ return (r == -1) ? NULL : result;
+}
+
+void print_info(FILE *out, struct info *info) {
+ if (info == NULL) {
+ fprintf(out, "(no file info):");
+ return;
+ }
+ fprintf(out, "%s:",
+ info->filename != NULL ? info->filename->str : "(unknown file)");
+ if (info->first_line > 0) {
+ if (info->first_line == info->last_line) {
+ if (info->first_column == info->last_column) {
+ fprintf(out, "%d.%d:", info->first_line, info->first_column);
+ } else {
+ fprintf(out, "%d.%d-.%d:", info->first_line,
+ info->first_column, info->last_column);
+ }
+ } else {
+ fprintf(out, "%d.%d-%d.%d:",
+ info->first_line, info->first_column,
+ info->last_line, info->last_column);
+ }
+ }
+}
+
+bool typecheck_p(const struct info *info) {
+ return (info->error->aug->flags & AUG_TYPE_CHECK) != 0;
+}
+
+void free_info(struct info *info) {
+ if (info == NULL)
+ return;
+ assert(info->ref == 0);
+ unref(info->filename, string);
+ free(info);
+}
+
+struct span *make_span(struct info *info) {
+ struct span *span = NULL;
+ if (ALLOC(span) < 0) {
+ return NULL;
+ }
+ /* UINT_MAX means span is not initialized yet */
+ span->span_start = UINT_MAX;
+ span->filename = ref(info->filename);
+ return span;
+}
+
+void free_span(struct span *span) {
+ if (span == NULL)
+ return;
+ unref(span->filename, string);
+ free(span);
+}
+
+void print_span(struct span *span) {
+ if (span == NULL)
+ return;
+ printf("%s label=(%i:%i) value=(%i:%i) span=(%i,%i)\n",
+ span->filename->str,
+ span->label_start, span->label_end,
+ span->value_start, span->value_end,
+ span->span_start, span->span_end);
+}
+
+void update_span(struct span *node_info, int x, int y) {
+ if (node_info == NULL)
+ return;
+ if (node_info->span_start == UINT_MAX) {
+ node_info->span_start = x;
+ node_info->span_end = y;
+ } else {
+ if (node_info->span_start > x)
+ node_info->span_start = x;
+ if (node_info->span_end < y)
+ node_info->span_end = y;
+ }
+}
+
+/*
+ * Local variables:
+ * indent-tabs-mode: nil
+ * c-indent-level: 4
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
--- /dev/null
+/*
+ * info.h: filename/linenumber information for parser/interpreter
+ *
+ * Copyright (C) 2007-2016 David Lutterkort
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: David Lutterkort <lutter@redhat.com>
+ */
+
+#ifndef INFO_H_
+#define INFO_H_
+
+#include <stdio.h>
+#include <stdint.h>
+#include <stdlib.h>
+#include "ref.h"
+
+/* Reference-counted strings */
+struct string {
+ ref_t ref;
+ char *str;
+};
+
+struct string *make_string(char *str);
+
+/* Duplicate a string; if STR is NULL, use the empty string "" */
+struct string *dup_string(const char *str);
+
+/* Do not call directly, use UNREF instead */
+void free_string(struct string *string);
+
+/* File information */
+struct info {
+ /* There is only one struct error for each Augeas instance */
+ struct error *error;
+ struct string *filename;
+ uint16_t first_line;
+ uint16_t first_column;
+ uint16_t last_line;
+ uint16_t last_column;
+ ref_t ref;
+};
+
+struct span {
+ struct string *filename;
+ uint label_start;
+ uint label_end;
+ uint value_start;
+ uint value_end;
+ uint span_start;
+ uint span_end;
+};
+
+char *format_info(struct info *info);
+
+void print_info(FILE *out, struct info *info);
+
+/* Return true if typechecking is turned on. (This uses the somewhat gross
+ * fact that we can get at the augeas flags through error->aug) */
+bool typecheck_p(const struct info *info);
+
+/* Do not call directly, use UNREF instead */
+void free_info(struct info *info);
+
+struct span *make_span(struct info *info);
+void free_span(struct span *node_info);
+void update_span(struct span *node_info, int x, int y);
+void print_span(struct span *node_info);
+#endif
+
+
+/*
+ * Local variables:
+ * indent-tabs-mode: nil
+ * c-indent-level: 4
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
--- /dev/null
+/*
+ * internal.c: internal data structures and helpers
+ *
+ * Copyright (C) 2007-2016 David Lutterkort
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: David Lutterkort <dlutter@redhat.com>
+ */
+
+#include <config.h>
+
+#include <ctype.h>
+#include <stdio.h>
+#include <stdarg.h>
+#include <locale.h>
+
+#include "internal.h"
+#include "memory.h"
+#include "fa.h"
+
+#ifndef MIN
+# define MIN(a, b) ((a) < (b) ? (a) : (b))
+#endif
+
+/* Cap file reads somewhat arbitrarily at 32 MB */
+#define MAX_READ_LEN (32*1024*1024)
+
+int pathjoin(char **path, int nseg, ...) {
+ va_list ap;
+
+ va_start(ap, nseg);
+ for (int i=0; i < nseg; i++) {
+ const char *seg = va_arg(ap, const char *);
+ if (seg == NULL)
+ seg = "()";
+ int len = strlen(seg) + 1;
+
+ if (*path != NULL) {
+ len += strlen(*path) + 1;
+ if (REALLOC_N(*path, len) == -1) {
+ FREE(*path);
+ va_end(ap);
+ return -1;
+ }
+ if (strlen(*path) == 0 || (*path)[strlen(*path)-1] != SEP)
+ strcat(*path, "/");
+ if (seg[0] == SEP)
+ seg += 1;
+ strcat(*path, seg);
+ } else {
+ if ((*path = malloc(len)) == NULL) {
+ va_end(ap);
+ return -1;
+ }
+ strcpy(*path, seg);
+ }
+ }
+ va_end(ap);
+ return 0;
+}
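pathjoin's per-segment behavior (append a separator unless one is already present, and strip a segment's leading SEP so doubled slashes never appear) can be sketched standalone. This simplified version assumes a caller-provided buffer and "/" as the separator, and omits the reallocation and NULL-segment handling of the real function:

```c
#include <assert.h>
#include <string.h>

/* Simplified sketch of pathjoin's per-segment rule: append "/" unless
 * the path already ends in one, then append the segment minus any
 * leading "/". */
static void join_one(char *path, size_t cap, const char *seg)
{
    size_t len = strlen(path);
    if (len == 0 || path[len - 1] != '/')
        strncat(path, "/", cap - strlen(path) - 1);
    if (seg[0] == '/')
        seg++;
    strncat(path, seg, cap - strlen(path) - 1);
}
```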
+
+/* Like gnulib's fread_file, but read no more than the specified maximum
+ number of bytes. If the length of the input is <= max_len, and
+ upon error while reading that data, it works just like fread_file.
+
+ Taken verbatim from libvirt's util.c
+*/
+
+static char *
+fread_file_lim (FILE *stream, size_t max_len, size_t *length)
+{
+ char *buf = NULL;
+ size_t alloc = 0;
+ size_t size = 0;
+ int save_errno;
+
+ for (;;) {
+ size_t count;
+ size_t requested;
+
+ if (size + BUFSIZ + 1 > alloc) {
+ char *new_buf;
+
+ alloc += alloc / 2;
+ if (alloc < size + BUFSIZ + 1)
+ alloc = size + BUFSIZ + 1;
+
+ new_buf = realloc (buf, alloc);
+ if (!new_buf) {
+ save_errno = errno;
+ break;
+ }
+
+ buf = new_buf;
+ }
+
+ /* Ensure that (size + requested <= max_len); */
+ requested = MIN (size < max_len ? max_len - size : 0,
+ alloc - size - 1);
+ count = fread (buf + size, 1, requested, stream);
+ size += count;
+
+ if (count != requested || requested == 0) {
+ save_errno = errno;
+ if (ferror (stream))
+ break;
+ buf[size] = '\0';
+ *length = size;
+ return buf;
+ }
+ }
+
+ free (buf);
+ errno = save_errno;
+ return NULL;
+}
+
+char* xfread_file(FILE *fp) {
+ char *result;
+ size_t len;
+
+ if (!fp)
+ return NULL;
+
+ result = fread_file_lim(fp, MAX_READ_LEN, &len);
+
+ if (result != NULL
+ && len <= MAX_READ_LEN
+ && (int) len == len)
+ return result;
+
+ free(result);
+ return NULL;
+}
+
+char* xread_file(const char *path) {
+ FILE *fp;
+ char *result;
+
+ fp = fopen(path, "r");
+ if (!fp)
+ return NULL;
+
+ result = xfread_file(fp);
+ fclose(fp);
+
+ return result;
+}
+
+/*
+ * Escape/unescape of string literals
+ */
+static const char *const escape_chars = "\a\b\t\n\v\f\r";
+static const char *const escape_names = "abtnvfr";
+
+char *unescape(const char *s, int len, const char *extra) {
+ size_t size;
+ const char *n;
+ char *result, *t;
+ int i;
+
+ if (len < 0 || len > strlen(s))
+ len = strlen(s);
+
+ size = 0;
+ for (i=0; i < len; i++, size++) {
+ if (s[i] == '\\' && strchr(escape_names, s[i+1])) {
+ i += 1;
+ } else if (s[i] == '\\' && extra && strchr(extra, s[i+1])) {
+ i += 1;
+ }
+ }
+
+ if (ALLOC_N(result, size + 1) < 0)
+ return NULL;
+
+ for (i = 0, t = result; i < len; i++, size++) {
+ if (s[i] == '\\' && (n = strchr(escape_names, s[i+1])) != NULL) {
+ *t++ = escape_chars[n - escape_names];
+ i += 1;
+ } else if (s[i] == '\\' && extra && strchr(extra, s[i+1]) != NULL) {
+ *t++ = s[i+1];
+ i += 1;
+ } else {
+ *t++ = s[i];
+ }
+ }
+ return result;
+}
+
+char *escape(const char *text, int cnt, const char *extra) {
+
+ int len = 0;
+ char *esc = NULL, *e;
+
+ if (cnt < 0 || cnt > strlen(text))
+ cnt = strlen(text);
+
+ for (int i=0; i < cnt; i++) {
+ if (text[i] && (strchr(escape_chars, text[i]) != NULL))
+ len += 2; /* Escaped as '\x' */
+ else if (text[i] && extra && (strchr(extra, text[i]) != NULL))
+ len += 2; /* Escaped as '\x' */
+ else if (! isprint(text[i]))
+ len += 4; /* Escaped as '\ooo' */
+ else
+ len += 1;
+ }
+ if (ALLOC_N(esc, len+1) < 0)
+ return NULL;
+ e = esc;
+ for (int i=0; i < cnt; i++) {
+ char *p;
+ if (text[i] && ((p = strchr(escape_chars, text[i])) != NULL)) {
+ *e++ = '\\';
+ *e++ = escape_names[p - escape_chars];
+ } else if (text[i] && extra && (strchr(extra, text[i]) != NULL)) {
+ *e++ = '\\';
+ *e++ = text[i];
+ } else if (! isprint(text[i])) {
+ sprintf(e, "\\%03o", (unsigned char) text[i]);
+ e += 4;
+ } else {
+ *e++ = text[i];
+ }
+ }
+ return esc;
+}
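The escape_chars/escape_names arrays used by escape() and unescape() line up index for index, so translating a control character to its escape letter is a strchr() plus a pointer-difference lookup (and the same trick works in the other direction). The helper below is an illustrative sketch of that mechanism, not part of the module:

```c
#include <string.h>

/* Map a control character to its escape letter ('\n' -> 'n'), or 0 if
 * the character has no named escape. The '\0' guard is needed because
 * strchr would otherwise match the string's terminator. */
static char name_for_control(char c)
{
    static const char *const chars = "\a\b\t\n\v\f\r";
    static const char *const names = "abtnvfr";
    const char *p;

    if (c == '\0')
        return 0;
    p = strchr(chars, c);
    return p != NULL ? names[p - chars] : 0;
}
```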
+
+int print_chars(FILE *out, const char *text, int cnt) {
+ int total = 0;
+ char *esc;
+
+ if (text == NULL) {
+ fprintf(out, "nil");
+ return 3;
+ }
+ if (cnt < 0)
+ cnt = strlen(text);
+
+ esc = escape(text, cnt, "\"");
+ total = strlen(esc);
+ if (out != NULL)
+ fprintf(out, "%s", esc);
+ free(esc);
+
+ return total;
+}
+
+char *format_pos(const char *text, int pos) {
+ static const int window = 28;
+ char *buf = NULL, *left = NULL, *right = NULL;
+ int before = pos;
+ int llen, rlen;
+ int r;
+
+ if (before > window)
+ before = window;
+ left = escape(text + pos - before, before, NULL);
+ if (left == NULL)
+ goto done;
+ right = escape(text + pos, window, NULL);
+ if (right == NULL)
+ goto done;
+
+ llen = strlen(left);
+ rlen = strlen(right);
+ if (llen < window && rlen < window) {
+ r = asprintf(&buf, "%*s%s|=|%s%-*s\n", window - llen, "<", left,
+ right, window - rlen, ">");
+ } else if (strlen(left) < window) {
+ r = asprintf(&buf, "%*s%s|=|%s>\n", window - llen, "<", left, right);
+ } else if (strlen(right) < window) {
+ r = asprintf(&buf, "<%s|=|%s%-*s\n", left, right, window - rlen, ">");
+ } else {
+ r = asprintf(&buf, "<%s|=|%s>\n", left, right);
+ }
+ if (r < 0) {
+ buf = NULL;
+ }
+
+ done:
+ free(left);
+ free(right);
+ return buf;
+}
+
+void print_pos(FILE *out, const char *text, int pos) {
+ char *format = format_pos(text, pos);
+
+ if (format != NULL) {
+ fputs(format, out);
+ FREE(format);
+ }
+}
+
+int __aug_init_memstream(struct memstream *ms) {
+ MEMZERO(ms, 1);
+#if HAVE_OPEN_MEMSTREAM
+ ms->stream = open_memstream(&(ms->buf), &(ms->size));
+ return ms->stream == NULL ? -1 : 0;
+#else
+ ms->stream = tmpfile();
+ if (ms->stream == NULL) {
+ return -1;
+ }
+ return 0;
+#endif
+}
+
+int __aug_close_memstream(struct memstream *ms) {
+#if !HAVE_OPEN_MEMSTREAM
+ rewind(ms->stream);
+ ms->buf = fread_file_lim(ms->stream, MAX_READ_LEN, &(ms->size));
+#endif
+ if (fclose(ms->stream) == EOF) {
+ FREE(ms->buf);
+ ms->size = 0;
+ return -1;
+ }
+ return 0;
+}
+
+int tree_sibling_index(struct tree *tree) {
+ struct tree *siblings = tree->parent->children;
+
+ int cnt = 0, ind = 0;
+
+ list_for_each(t, siblings) {
+ if (streqv(t->label, tree->label)) {
+ cnt += 1;
+ if (t == tree)
+ ind = cnt;
+ }
+ }
+
+ if (cnt > 1) {
+ return ind;
+ } else {
+ return 0;
+ }
+}
+
+char *path_expand(struct tree *tree, const char *ppath) {
+ char *path;
+ const char *label;
+ char *escaped = NULL;
+ int r;
+
+ int ind = tree_sibling_index(tree);
+
+ if (ppath == NULL)
+ ppath = "";
+
+ if (tree->label == NULL)
+ label = "(none)";
+ else
+ label = tree->label;
+
+ r = pathx_escape_name(label, &escaped);
+ if (r < 0)
+ return NULL;
+
+ if (escaped != NULL)
+ label = escaped;
+
+ if (ind > 0) {
+ r = asprintf(&path, "%s/%s[%d]", ppath, label, ind);
+ } else {
+ r = asprintf(&path, "%s/%s", ppath, label);
+ }
+
+ free(escaped);
+
+ if (r == -1)
+ return NULL;
+ return path;
+}
+
+char *path_of_tree(struct tree *tree) {
+ int depth, i;
+ struct tree *t, **anc;
+ char *path = NULL;
+
+ for (t = tree, depth = 1; ! ROOT_P(t); depth++, t = t->parent);
+ if (ALLOC_N(anc, depth) < 0)
+ return NULL;
+
+ for (t = tree, i = depth - 1; i >= 0; i--, t = t->parent)
+ anc[i] = t;
+
+ for (i = 0; i < depth; i++) {
+ char *p = path_expand(anc[i], path);
+ free(path);
+ path = p;
+ }
+ FREE(anc);
+ return path;
+}
+
+/* User-facing path cleaning */
+static char *cleanstr(char *path, const char sep) {
+ if (path == NULL || strlen(path) == 0)
+ return path;
+ char *e = path + strlen(path) - 1;
+ while (e >= path && (*e == sep || isspace(*e)))
+ *e-- = '\0';
+ return path;
+}
+
+char *cleanpath(char *path) {
+ if (path == NULL || strlen(path) == 0)
+ return path;
+ if (STREQ(path, "/"))
+ return path;
+ return cleanstr(path, SEP);
+}
+
+const char *xstrerror(int errnum, char *buf, size_t len) {
+#ifdef HAVE_STRERROR_R
+# if defined(__USE_GNU) && defined(__GLIBC__)
+ /* Annoying GNU specific API contract */
+ return strerror_r(errnum, buf, len);
+# else
+ strerror_r(errnum, buf, len);
+ return buf;
+# endif
+#else
+ int n = snprintf(buf, len, "errno=%d", errnum);
+ return (0 < n && n < len
+ ? buf : "internal error: buffer too small in xstrerror");
+#endif
+}
+
+int xasprintf(char **strp, const char *format, ...) {
+ va_list args;
+ int result;
+
+ va_start (args, format);
+ result = vasprintf (strp, format, args);
+ va_end (args);
+ if (result < 0)
+ *strp = NULL;
+ return result;
+}
+
+/* From libvirt's src/xen/block_stats.c */
+int xstrtoint64(char const *s, int base, int64_t *result) {
+ long long int lli;
+ char *p;
+
+ errno = 0;
+ lli = strtoll(s, &p, base);
+ if (errno || !(*p == 0 || *p == '\n') || p == s || (int64_t) lli != lli)
+ return -1;
+ *result = lli;
+ return 0;
+}
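A standalone copy of xstrtoint64 makes its rejection rules concrete: trailing junk other than '\n', an empty number, overflow, or any errno set by strtoll all yield -1 and leave *result untouched.

```c
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/* Standalone copy of xstrtoint64 for illustration. p == s catches an
 * empty or wholly invalid number; the (int64_t) cast check catches
 * values that fit long long but not int64_t on unusual platforms. */
static int xstrtoint64_demo(char const *s, int base, int64_t *result)
{
    long long int lli;
    char *p;

    errno = 0;
    lli = strtoll(s, &p, base);
    if (errno || !(*p == 0 || *p == '\n') || p == s || (int64_t) lli != lli)
        return -1;
    *result = lli;
    return 0;
}
```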
+
+void calc_line_ofs(const char *text, size_t pos, size_t *line, size_t *ofs) {
+ *line = 1;
+ *ofs = 0;
+ for (const char *t = text; t < text + pos; t++) {
+ *ofs += 1;
+ if (*t == '\n') {
+ *ofs = 0;
+ *line += 1;
+ }
+ }
+}
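For clarity on the conventions calc_line_ofs uses: LINE is 1-based, and OFS counts characters since the most recent newline (so it is 0 immediately after a '\n'). A standalone copy, shown for illustration:

```c
#include <stddef.h>

/* Standalone copy of calc_line_ofs: walk TEXT up to POS, bumping the
 * line counter and resetting the offset at each newline. */
static void calc_line_ofs_demo(const char *text, size_t pos,
                               size_t *line, size_t *ofs)
{
    *line = 1;
    *ofs = 0;
    for (const char *t = text; t < text + pos; t++) {
        *ofs += 1;
        if (*t == '\n') {
            *ofs = 0;
            *line += 1;
        }
    }
}
```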
+
+#if HAVE_USELOCALE
+int regexp_c_locale(ATTRIBUTE_UNUSED char **u, ATTRIBUTE_UNUSED size_t *len) {
+ /* On systems with uselocale, we are ok, since we make sure that we
+ * switch to the "C" locale any time we enter through the public API
+ */
+ return 0;
+}
+#else
+int regexp_c_locale(char **u, size_t *len) {
+ /* Without uselocale, we need to expand character ranges */
+ int r;
+ char *s = *u;
+ size_t s_len, u_len;
+ if (len == NULL) {
+ len = &u_len;
+ s_len = strlen(s);
+ } else {
+ s_len = *len;
+ }
+ r = fa_expand_char_ranges(s, s_len, u, len);
+ if (r != 0) {
+ *u = s;
+ *len = s_len;
+ }
+ if (r < 0)
+ return -1;
+ /* Syntax errors will be caught when the result is compiled */
+ if (r > 0)
+ return 0;
+ free(s);
+ return 1;
+}
+#endif
+
+#if ENABLE_DEBUG
+bool debugging(const char *category) {
+ const char *debug = getenv("AUGEAS_DEBUG");
+ const char *s;
+
+ if (debug == NULL)
+ return false;
+
+ for (s = debug; s != NULL; ) {
+ if (STREQLEN(s, category, strlen(category)))
+ return true;
+ s = strchr(s, ':');
+ if (s != NULL)
+ s+=1;
+ }
+ return false;
+}
+
+FILE *debug_fopen(const char *format, ...) {
+ va_list ap;
+ FILE *result = NULL;
+ const char *dir;
+ char *name = NULL, *path = NULL;
+ int r;
+
+ dir = getenv("AUGEAS_DEBUG_DIR");
+ if (dir == NULL)
+ goto error;
+
+ va_start(ap, format);
+ r = vasprintf(&name, format, ap);
+ va_end(ap);
+ if (r < 0)
+ goto error;
+
+ r = xasprintf(&path, "%s/%s", dir, name);
+ if (r < 0)
+ goto error;
+
+ result = fopen(path, "w");
+
+ error:
+ free(name);
+ free(path);
+ return result;
+}
+#endif
+/*
+ * Local variables:
+ * indent-tabs-mode: nil
+ * c-indent-level: 4
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
--- /dev/null
+/*
+ * internal.h: Useful definitions
+ *
+ * Copyright (C) 2007-2017 David Lutterkort
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: David Lutterkort <dlutter@redhat.com>
+ */
+
+#ifndef INTERNAL_H_
+#define INTERNAL_H_
+
+#include "list.h"
+#include "datadir.h"
+#include "augeas.h"
+
+#include <stdio.h>
+#include <string.h>
+#include <strings.h>
+#include <stdlib.h>
+#include <stdbool.h>
+#include <unistd.h>
+#include <errno.h>
+#include <assert.h>
+#include <locale.h>
+#include <stdint.h>
+
+/*
+ * Various parameters about env vars, special tree nodes etc.
+ */
+
+/* Define: AUGEAS_LENS_DIR
+ * The default location for lens definitions */
+#define AUGEAS_LENS_DIR DATADIR "/augeas/lenses"
+
+/* The directory where we install lenses distributed with Augeas */
+#define AUGEAS_LENS_DIST_DIR DATADIR "/augeas/lenses/dist"
+
+/* Define: AUGEAS_ROOT_ENV
+ * The env var that points to the chroot holding files we may modify.
+ * Mostly useful for testing */
+#define AUGEAS_ROOT_ENV "AUGEAS_ROOT"
+
+/* Define: AUGEAS_FILES_TREE
+ * The root for actual file contents */
+#define AUGEAS_FILES_TREE "/files"
+
+/* Define: AUGEAS_META_TREE
+ * Augeas reports some information in this subtree */
+#define AUGEAS_META_TREE "/augeas"
+
+/* Define: AUGEAS_META_FILES
+ * Information about files */
+#define AUGEAS_META_FILES AUGEAS_META_TREE AUGEAS_FILES_TREE
+
+/* Define: AUGEAS_META_TEXT
+ * Information about text (see aug_text_store and aug_text_retrieve) */
+#define AUGEAS_META_TEXT AUGEAS_META_TREE "/text"
+
+/* Define: AUGEAS_META_ROOT
+ * The root directory */
+#define AUGEAS_META_ROOT AUGEAS_META_TREE "/root"
+
+/* Define: AUGEAS_META_SAVE_MODE
+ * How we save files. One of 'backup', 'newfile', 'noop' or 'overwrite' */
+#define AUGEAS_META_SAVE_MODE AUGEAS_META_TREE "/save"
+
+/* Define: AUGEAS_COPY_IF_RENAME_FAILS
+ * Control what save does when renaming the temporary file to its final
+ * destination fails with EXDEV or EBUSY: when this tree node exists, copy
+ * the file contents. If it is not present, simply give up and report an
+ * error. */
+#define AUGEAS_COPY_IF_RENAME_FAILS \
+ AUGEAS_META_SAVE_MODE "/copy_if_rename_fails"
+
+/* Define: AUGEAS_CONTEXT
+ * Context prepended to all non-absolute paths */
+#define AUGEAS_CONTEXT AUGEAS_META_TREE "/context"
+
+/* A hierarchy where we record certain 'events', e.g. which tree
+ * nodes actually got saved into files */
+#define AUGEAS_EVENTS AUGEAS_META_TREE "/events"
+
+#define AUGEAS_EVENTS_SAVED AUGEAS_EVENTS "/saved"
+
+/* Where to put information about parsing of path expressions */
+#define AUGEAS_META_PATHX AUGEAS_META_TREE "/pathx"
+
+/* Define: AUGEAS_SPAN_OPTION
+ * Enable or disable node indexes */
+#define AUGEAS_SPAN_OPTION AUGEAS_META_TREE "/span"
+
+/* Define: AUGEAS_LENS_ENV
+ * Name of env var that contains list of paths to search for additional
+ * spec files */
+#define AUGEAS_LENS_ENV "AUGEAS_LENS_LIB"
+
+/* Define: MAX_ENV_SIZE
+ * Fairly arbitrary bound on the length of the path we
+ * accept from AUGEAS_LENS_ENV */
+#define MAX_ENV_SIZE 4096
+
+/* Define: PATH_SEP_CHAR
+ * Character separating paths in a list of paths */
+#define PATH_SEP_CHAR ':'
+
+/* Constants for setting the save mode via the augeas path at
+ * AUGEAS_META_SAVE_MODE */
+#define AUG_SAVE_BACKUP_TEXT "backup"
+#define AUG_SAVE_NEWFILE_TEXT "newfile"
+#define AUG_SAVE_NOOP_TEXT "noop"
+#define AUG_SAVE_OVERWRITE_TEXT "overwrite"
+
+/* constants for options in the tree */
+#define AUG_ENABLE "enable"
+#define AUG_DISABLE "disable"
+
+/* default value for the relative path context */
+#define AUG_CONTEXT_DEFAULT "/files"
+
+#ifdef __GNUC__
+
+#ifndef __GNUC_PREREQ
+#define __GNUC_PREREQ(maj,min) 0
+#endif
+
+/**
+* AUGEAS_LIKELY:
+*
+* Macro to flag a code branch as a likely branch
+*
+* AUGEAS_UNLIKELY:
+*
+* Macro to flag a code branch as an unlikely branch
+*/
+#ifndef __has_builtin
+# define __has_builtin(x) (0)
+#endif
+#if defined(__builtin_expect) || __has_builtin(__builtin_expect)
+# define AUGEAS_LIKELY(x) (__builtin_expect(!!(x), 1))
+# define AUGEAS_UNLIKELY(x) (__builtin_expect(!!(x), 0))
+#else
+# define AUGEAS_LIKELY(x) (x)
+# define AUGEAS_UNLIKELY(x) (x)
+#endif
+
+/**
+ * ATTRIBUTE_UNUSED:
+ *
+ * Macro to flag consciously unused parameters to functions
+ */
+#ifndef ATTRIBUTE_UNUSED
+#define ATTRIBUTE_UNUSED __attribute__((__unused__))
+#endif
+
+/**
+ * ATTRIBUTE_FORMAT
+ *
+ * Macro used to check printf/scanf-like functions, if compiling
+ * with gcc.
+ */
+#ifndef ATTRIBUTE_FORMAT
+#define ATTRIBUTE_FORMAT(args...) __attribute__((__format__ (args)))
+#endif
+
+#ifndef ATTRIBUTE_PURE
+#define ATTRIBUTE_PURE __attribute__((pure))
+#endif
+
+#ifndef ATTRIBUTE_RETURN_CHECK
+#if __GNUC_PREREQ (3, 4)
+#define ATTRIBUTE_RETURN_CHECK __attribute__((__warn_unused_result__))
+#else
+#define ATTRIBUTE_RETURN_CHECK
+#endif
+#endif
+
+/* Allow falling through in switch statements for the few cases where that
+ is needed */
+#ifndef ATTRIBUTE_FALLTHROUGH
+# if __GNUC_PREREQ (7, 0)
+# define ATTRIBUTE_FALLTHROUGH __attribute__ ((fallthrough))
+# else
+# define ATTRIBUTE_FALLTHROUGH
+# endif
+#endif
+
+/* A poor man's macro to get some move semantics: return the value of P but
+ set P itself to NULL. This has the effect that if you say 'x = move(y)'
+ that there is still only one pointer pointing to the memory Y pointed to
+ initially.
+ */
+#define move(p) ({ typeof(p) _tmp = (p); (p) = NULL; _tmp; })
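+
+/* Example: transfer ownership of a heap string so that only one live
+ * pointer remains:
+ *
+ *   char *src = strdup("hello");
+ *   char *dst = move(src);   // dst owns the string, src is now NULL
+ *   free(dst);               // freeing src here would be a harmless no-op
+ */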
+
+#else
+#define ATTRIBUTE_UNUSED
+#define ATTRIBUTE_FORMAT(...)
+#define ATTRIBUTE_PURE
+#define ATTRIBUTE_RETURN_CHECK
+#define ATTRIBUTE_FALLTHROUGH
+#define move(p) p
+#endif /* __GNUC__ */
+
+#define ARRAY_CARDINALITY(array) (sizeof (array) / sizeof *(array))
+
+/* String equality tests, suggested by Jim Meyering. */
+#define STREQ(a,b) (strcmp((a),(b)) == 0)
+#define STRCASEEQ(a,b) (strcasecmp((a),(b)) == 0)
+#define STRCASEEQLEN(a,b,n) (strncasecmp((a),(b),(n)) == 0)
+#define STRNEQ(a,b) (strcmp((a),(b)) != 0)
+#define STRCASENEQ(a,b) (strcasecmp((a),(b)) != 0)
+#define STREQLEN(a,b,n) (strncmp((a),(b),(n)) == 0)
+#define STRNEQLEN(a,b,n) (strncmp((a),(b),(n)) != 0)
+
+ATTRIBUTE_PURE
+static inline int streqv(const char *a, const char *b) {
+ if (a == NULL || b == NULL)
+ return a == b;
+ return STREQ(a,b);
+}
+
+/* Path length and comparison */
+
+#define SEP '/'
+
+/* Length of PATH without any trailing '/' */
+ATTRIBUTE_PURE
+static inline int pathlen(const char *path) {
+ int len = strlen(path);
+
+ if (len > 0 && path[len-1] == SEP)
+ len--;
+
+ return len;
+}
+
+/* Return 1 if P1 is a prefix of P2. As a string, P1 must be no longer than P2 */
+ATTRIBUTE_PURE
+static inline int pathprefix(const char *p1, const char *p2) {
+ if (p1 == NULL || p2 == NULL)
+ return 0;
+ int l1 = pathlen(p1);
+
+ return STREQLEN(p1, p2, l1) && (p2[l1] == '\0' || p2[l1] == SEP);
+}
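+
+/* For example, pathprefix("/files/etc", "/files/etc/hosts") and
+ * pathprefix("/files/etc/", "/files/etc") both return 1, while
+ * pathprefix("/files/e", "/files/etc") returns 0, since only whole
+ * path components count as a prefix. */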
+
+static inline int pathendswith(const char *path, const char *basenam) {
+ const char *p = strrchr(path, SEP);
+ if (p == NULL)
+ return 0;
+ return streqv(p+1, basenam);
+}
+
+/* Join NSEG path components (passed as const char *) into one PATH.
+ Allocate as needed. Return 0 on success, -1 on failure */
+int pathjoin(char **path, int nseg, ...);
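+
+/* A typical call (sketch; exact separator handling is up to pathjoin):
+ *
+ *   char *path = NULL;
+ *   if (pathjoin(&path, 2, AUGEAS_META_FILES, "etc/hosts") == 0) {
+ *       ... use path ...
+ *       free(path);
+ *   }
+ */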
+
+#define MEMZERO(ptr, n) memset((ptr), 0, (n) * sizeof(*(ptr)))
+
+#define MEMMOVE(dest, src, n) memmove((dest), (src), (n) * sizeof(*(src)))
+
+/**
+ * TODO:
+ *
+ * Macro to flag unimplemented blocks
+ */
+#define TODO \
+ fprintf(stderr, "%s:%d Unimplemented block\n", \
+ __FILE__, __LINE__);
+
+#define FIXME(msg, args ...) \
+ do { \
+ fprintf(stderr, "%s:%d Fixme: ", \
+ __FILE__, __LINE__); \
+ fprintf(stderr, msg, ## args); \
+ fputc('\n', stderr); \
+ } while(0)
+
+/*
+ * Internal data structures
+ */
+
+// internal.c
+
+/* Function: escape
+ * Escape nonprintable characters within TEXT, similar to how it's done in
+ * C string literals. Caller must free the returned string.
+ */
+char *escape(const char *text, int cnt, const char *extra);
+
+/* Function: unescape */
+char *unescape(const char *s, int len, const char *extra);
+
+/* Extra characters to be escaped in strings and regexps respectively */
+#define STR_ESCAPES "\"\\"
+#define RX_ESCAPES "/\\"
+
+/* Function: print_chars */
+int print_chars(FILE *out, const char *text, int cnt);
+
+/* Function: print_pos
+ * Print a pretty representation of being at position POS within TEXT */
+void print_pos(FILE *out, const char *text, int pos);
+char *format_pos(const char *text, int pos);
+
+/* Function: xread_file
+ * Read the contents of file PATH and return them as one long string. The
+ * caller must free the result. Return NULL if any error occurs.
+ */
+char* xread_file(const char *path);
+
+/* Like xread_file, but caller supplies a file pointer */
+char* xfread_file(FILE *fp);
+
+/* Get the error message for ERRNUM in a threadsafe way. Based on libvirt's
+ * virStrError
+ */
+const char *xstrerror(int errnum, char *buf, size_t len);
+
+/* Like asprintf, but set *STRP to NULL on error */
+int xasprintf(char **strp, const char *format, ...);
+
+/* Convert S to RESULT with error checking */
+int xstrtoint64(char const *s, int base, int64_t *result);
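+/* E.g. xstrtoint64("42", 10, &v) sets v to 42 and returns 0, while
+ * xstrtoint64("42x", 10, &v) returns -1 and leaves v untouched. */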
+
+/* Calculate line and column number of character POS in TEXT */
+void calc_line_ofs(const char *text, size_t pos, size_t *line, size_t *ofs);
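+/* For instance, with TEXT "key=value\nother" and POS 12, LINE becomes 2
+ * and OFS becomes 2, since the offset restarts at 0 after each newline. */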
+
+/* Clean up a path from the user, removing trailing slashes and whitespace */
+char *cleanpath(char *path);
+
+/* Take the first LEN characters from the regexp *U and expand any
+ * character ranges in it. The expanded regexp, if expansion is necessary,
+ * is in U, and the old string is freed. If expansion is not needed or an
+ * error happens, U will be unchanged.
+ *
+ * Return 0 if expansion is not necessary, -1 if an error occurs, and 1 if
+ * expansion was needed.
+ */
+int regexp_c_locale(char **u, size_t *len);
+
+/* Struct: augeas
+ * The data structure representing a connection to Augeas. */
+struct augeas {
+ struct tree *origin; /* Actual tree root is origin->children */
+ const char *root; /* Filesystem root for all files */
+ /* always ends with '/' */
+ unsigned int flags; /* Flags passed to AUG_INIT */
+ struct module *modules; /* Loaded modules */
+ size_t nmodpath;
+ char *modpathz; /* The search path for modules as a
+ glibc argz vector */
+ struct pathx_symtab *symtab;
+ struct error *error;
+ uint api_entries; /* Number of entries through a public
+ * API, 0 when called from outside */
+#if HAVE_USELOCALE
+ /* On systems that have a uselocale call, we switch to the C locale
+ * on entry into API functions, and back to the old user locale
+ * on exit.
+ * FIXME: We need some solution for systems without uselocale, like
+ * setlocale + critical section, though that is very heavy-handed
+ */
+ locale_t c_locale;
+ locale_t user_locale;
+#endif
+};
+
+static inline struct error *err_of_aug(const struct augeas *aug) {
+ return ((struct augeas *) aug)->error;
+}
+
+/* Used by augparse for loading tests */
+int __aug_load_module_file(struct augeas *aug, const char *filename);
+
+/* Called at beginning and end of every _public_ API function */
+void api_entry(const struct augeas *aug);
+void api_exit(const struct augeas *aug);
+
+/* Struct: tree
+ * An entry in the global config tree. The data structure allows associating
+ * values with interior nodes, but the API currently marks that as an error.
+ *
+ * To make dealing with parents uniform, even for the root, we create
+ * standalone trees with a fake root, called origin. That root is generally
+ * not referenced from anywhere. Standalone trees should be created with
+ * MAKE_TREE_ORIGIN.
+ *
+ * The DIRTY flag is used to track which parts of the tree might need to be
+ * saved. For any node that is marked dirty, all of its ancestors must be
+ * marked dirty, too. Instead of setting this flag directly, the function
+ * TREE_MARK_DIRTY in augeas.c should be used (and only functions in that
+ * file should have a need to mark nodes as dirty)
+ *
+ * The FILE flag is set for entries underneath /augeas/files that hold the
+ * metadata for a file by ADD_FILE_INFO. The FILE flag is set for entries
+ * underneath /files for the toplevel node corresponding to a file by
+ * TREE_FREPLACE and is used by AUG_SOURCE to find the file to which a node
+ * belongs.
+ */
+struct tree {
+ struct tree *next;
+ struct tree *parent; /* Points to self for root */
+ char *label; /* Last component of PATH */
+ struct tree *children; /* List of children through NEXT */
+ char *value;
+ struct span *span;
+
+ /* Flags */
+ bool dirty;
+ bool file;
+ bool added; /* only used by ns_add and tree_rm to dedupe
+ nodesets */
+};
+
+/* The opaque structure used to represent path expressions. APIs
+ * using STRUCT PATHX are declared further below
+ */
+struct pathx;
+
+#define ROOT_P(t) ((t) != NULL && (t)->parent == (t)->parent->parent)
+
+#define TREE_HIDDEN(tree) ((tree)->label == NULL)
+
+/* Function: make_tree
+ * Allocate a new tree node with the given LABEL, VALUE, and CHILDREN,
+ * which are not copied. The new tree is marked as dirty
+ */
+struct tree *make_tree(char *label, char *value,
+ struct tree *parent, struct tree *children);
+
+/* Mark a tree as a standalone tree; this creates a fake parent for ROOT,
+ * so that even ROOT has a parent. A new node with only child ROOT is
+ * returned on success, and NULL on failure.
+ */
+struct tree *make_tree_origin(struct tree *root);
+
+/* Make a new tree node and append it to parent's children */
+struct tree *tree_append(struct tree *parent, char *label, char *value);
+
+int tree_rm(struct pathx *p);
+int tree_unlink(struct augeas *aug, struct tree *tree);
+struct tree *tree_set(struct pathx *p, const char *value);
+int tree_insert(struct pathx *p, const char *label, int before);
+int free_tree(struct tree *tree);
+int dump_tree(FILE *out, struct tree *tree);
+int tree_equal(const struct tree *t1, const struct tree *t2);
+/* Return the 1-based index of TREE amongst its siblings with the same
+ * label or 0 if none of TREE's siblings have the same label */
+int tree_sibling_index(struct tree *tree);
+char *path_expand(struct tree *tree, const char *ppath);
+char *path_of_tree(struct tree *tree);
+/* Clear the dirty flag in the whole TREE */
+void tree_clean(struct tree *tree);
+/* Return first child with label LABEL or NULL */
+struct tree *tree_child(struct tree *tree, const char *label);
+/* Return first existing child with label LABEL or create one. Return NULL
+ * when allocation fails */
+struct tree *tree_child_cr(struct tree *tree, const char *label);
+/* Create a path in the tree; nodes along the path are looked up with
+ * tree_child_cr */
+struct tree *tree_path_cr(struct tree *tree, int n, ...);
+/* Store VALUE directly as the value of TREE and set VALUE to NULL.
+ * Update dirty flags */
+void tree_store_value(struct tree *tree, char **value);
+/* Set the value of TREE to a copy of VALUE and update dirty flags */
+int tree_set_value(struct tree *tree, const char *value);
+/* Cleanly remove all children of TREE, but leave TREE itself unchanged */
+void tree_unlink_children(struct augeas *aug, struct tree *tree);
+/* Find a node in the tree at path FPATH; FPATH is a file path, i.e.
+ * not interpreted as a path expression. If no such node exists, return NULL
+ */
+struct tree *tree_fpath(struct augeas *aug, const char *fpath);
+/* Find a node in the tree at path FPATH; FPATH is a file path, i.e.
+ * not interpreted as a path expression. If no such node exists, create
+ * it and all its missing ancestors.
+ */
+struct tree *tree_fpath_cr(struct augeas *aug, const char *fpath);
+/* Find the node matching PATH.
+ * Returns the node or NULL on error
+ * Errors: EMMATCH - more than one node matches PATH
+ * ENOMEM - allocation error
+ */
+struct tree *tree_find(struct augeas *aug, const char *path);
+/* Find the node matching PATH. Expand the tree to contain such a node if
+ * none exists.
+ * Returns the node or NULL on error
+ */
+struct tree *tree_find_cr(struct augeas *aug, const char *path);
+/* Find the node at the path stored in AUGEAS_CONTEXT, i.e. the root context
+ * node for relative paths.
+ * Errors: EMMATCH - more than one node matches PATH
+ * ENOMEM - allocation error
+ */
+struct tree *tree_root_ctx(const struct augeas *aug);
+
+/* Struct: memstream
+ * Wrappers to simulate OPEN_MEMSTREAM where that's not available. The
+ * STREAM member is opened by INIT_MEMSTREAM and closed by
+ * CLOSE_MEMSTREAM. The BUF is allocated automatically, but can not be used
+ * until after CLOSE_MEMSTREAM has been called. It is the callers
+ * responsibility to free up BUF.
+ */
+struct memstream {
+ FILE *stream;
+ char *buf;
+ size_t size;
+};
+
+/* Function: init_memstream
+ * Initialize a memstream. On systems that have OPEN_MEMSTREAM, it is used
+ * to open MS->STREAM. On systems without OPEN_MEMSTREAM, MS->STREAM is
+ * backed by a temporary file.
+ *
+ * MS must be allocated in advance; INIT_MEMSTREAM initializes it.
+ */
+int __aug_init_memstream(struct memstream *ms);
+#define init_memstream(ms) __aug_init_memstream(ms)
+
+/* Function: close_memstream
+ * Close a memstream. After calling this, MS->STREAM cannot be used
+ * anymore and a string representing whatever was written to it is
+ * available in MS->BUF. The caller must free MS->BUF.
+ *
+ * The caller must free the MEMSTREAM structure.
+ */
+int __aug_close_memstream(struct memstream *ms);
+#define close_memstream(ms) __aug_close_memstream(ms)
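+
+/* Typical usage (sketch; assumes a zero return on success):
+ *
+ *   struct memstream ms;
+ *   if (init_memstream(&ms) == 0) {
+ *       fprintf(ms.stream, "some output");
+ *       close_memstream(&ms);
+ *       ... use ms.buf ...
+ *       free(ms.buf);
+ *   }
+ */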
+
+/*
+ * Path expressions
+ */
+
+typedef enum {
+ PATHX_NOERROR = 0,
+ PATHX_ENAME,
+ PATHX_ESTRING,
+ PATHX_ENUMBER,
+ PATHX_EDELIM,
+ PATHX_ENOEQUAL,
+ PATHX_ENOMEM,
+ PATHX_EPRED,
+ PATHX_EPAREN,
+ PATHX_ESLASH,
+ PATHX_EINTERNAL,
+ PATHX_ETYPE,
+ PATHX_ENOVAR,
+ PATHX_EEND,
+ PATHX_ENOMATCH,
+ PATHX_EARITY,
+ PATHX_EREGEXP,
+ PATHX_EMMATCH,
+ PATHX_EREGEXPFLAG
+} pathx_errcode_t;
+
+struct pathx;
+struct pathx_symtab;
+
+const char *pathx_error(struct pathx *pathx, const char **txt, int *pos);
+
+/* Parse a path expression PATH rooted at TREE, which is a node somewhere
+ * in AUG->ORIGIN. If TREE is NULL, AUG->ORIGIN is used. If ROOT_CTX is not
+ * NULL and the PATH isn't absolute then it will be rooted at ROOT_CTX.
+ *
+ * Use this function rather than PATHX_PARSE for path expressions inside
+ * the tree in AUG->ORIGIN.
+ *
+ * If NEED_NODESET is true, the resulting path expression must evaluate to a
+ * nodeset, otherwise it can evaluate to a value of any type.
+ *
+ * Return the resulting path expression, or NULL on error. If an error
+ * occurs, the error struct in AUG contains details.
+ */
+struct pathx *pathx_aug_parse(const struct augeas *aug,
+ struct tree *tree,
+ struct tree *root_ctx,
+ const char *path, bool need_nodeset);
+
+/* Parse the string PATH into a path expression PX that will be evaluated
+ * against the tree ORIGIN.
+ *
+ * If NEED_NODESET is true, the resulting path expression must evaluate to a
+ * nodeset, otherwise it can evaluate to a value of any type.
+ *
+ * Returns 0 on success, and -1 on error
+ */
+int pathx_parse(const struct tree *origin,
+ struct error *err,
+ const char *path,
+ bool need_nodeset,
+ struct pathx_symtab *symtab,
+ struct tree *root_ctx,
+ struct pathx **px);
+/* Return the error struct that was passed into pathx_parse */
+struct error *err_of_pathx(struct pathx *px);
+struct tree *pathx_first(struct pathx *path);
+struct tree *pathx_next(struct pathx *path);
+/* Return -1 if evaluating PATH runs into trouble, otherwise return the
+ * number of nodes matching PATH and set MATCH to the first matching
+ * node */
+int pathx_find_one(struct pathx *path, struct tree **match);
+int pathx_expand_tree(struct pathx *path, struct tree **tree);
+void free_pathx(struct pathx *path);
+
+struct pathx_symtab *pathx_get_symtab(struct pathx *pathx);
+int pathx_symtab_define(struct pathx_symtab **symtab,
+ const char *name, struct pathx *px);
+/* Returns 1 on success, and -1 when out of memory */
+int pathx_symtab_assign_tree(struct pathx_symtab **symtab, const char *name,
+ struct tree *tree);
+int pathx_symtab_undefine(struct pathx_symtab **symtab, const char *name);
+void pathx_symtab_remove_descendants(struct pathx_symtab *symtab,
+ const struct tree *tree);
+
+/* Return the number of nodes in the nodeset NAME. If the variable NAME
+ * does not exist, or is not a nodeset, return -1 */
+int pathx_symtab_count(const struct pathx_symtab *symtab, const char *name);
+
+/* Return the tree stored in the variable NAME at position I, which is the
+ same as evaluating the path expression '$NAME[I]'. If the variable NAME
+ does not exist, or does not contain a nodeset, or if I is bigger than
+ the size of the nodeset, return NULL */
+struct tree *pathx_symtab_get_tree(struct pathx_symtab *symtab,
+ const char *name, int i);
+void free_symtab(struct pathx_symtab *symtab);
+
+/* Escape a name so that it is safe to pass to parse_name and have it
+ * interpreted as the literal name of a path component.
+ *
+ * On return, *OUT will be NULL if IN does not need escaping, otherwise it
+ * will contain an escaped copy of IN which the caller must free.
+ *
+ * Returns -1 if it failed to allocate memory for *OUT, 0 on success
+ */
+int pathx_escape_name(const char *in, char **out);
+
+/* Debug helpers, all defined in internal.c. When ENABLE_DEBUG is not
+ * set, they compile to nothing.
+ */
+# if ENABLE_DEBUG
+ /* Return true if debugging for CATEGORY is turned on */
+ bool debugging(const char *category);
+ /* Format the arguments into a file name, prepend it with the directory
+ * from the environment variable AUGEAS_DEBUG_DIR, and open the file for
+ * writing.
+ */
+ FILE *debug_fopen(const char *format, ...)
+ ATTRIBUTE_FORMAT(printf, 1, 2);
+# else
+# define debugging(facility) (0)
+# define debug_fopen(format ...) (NULL)
+# endif
+#endif
+
+
+/*
+ * Local variables:
+ * indent-tabs-mode: nil
+ * c-indent-level: 4
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
--- /dev/null
+/*
+ * jmt.c: Earley parser for lenses based on Jim/Mandelbaum transducers
+ *
+ * Copyright (C) 2009-2016 David Lutterkort
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: David Lutterkort <lutter@redhat.com>
+ */
+
+#include <config.h>
+
+#include "jmt.h"
+#include "internal.h"
+#include "memory.h"
+#include "errcode.h"
+
+/* This is an implementation of the Earley parser described in the paper
+ * "Efficient Earley Parsing with Regular Right-hand Sides" by Trevor Jim
+ * and Yitzhak Mandelbaum.
+ *
+ * Since we only deal in lenses, they are both terminals and nonterminals:
+ * recursive lenses (those with lens->recursive set) are nonterminals, and
+ * non-recursive lenses are terminals, with the exception of non-recursive
+ * lenses that match the empty word. Lenses are also the semantic actions
+ * attached to a SCAN or COMPLETE during reconstruction of the parse
+ * tree. Our SCAN makes sure that we only ever scan nonempty words - that
+ * means that for nonrecursive lenses which match the empty word (e.g., del
+ * [ \t]* "") we need to make sure we get a COMPLETE when the lens matched
+ * the empty word.
+ *
+ * That is achieved by treating such a lens t as the construct (t|T) with
+ * T := eps.
+ */
+
+/*
+ * Data structures for the Jim/Mandelbaum parser
+ */
+
+#define IND_MAX UINT32_MAX
+
+struct array {
+ size_t elem_size;
+ ind_t used;
+ ind_t size;
+ void *data;
+};
+
+#define array_elem(arr, ind, typ) \
+ (((typ *) (arr).data) + ind)
+
+#define array_for_each(i, arr) \
+ for(ind_t i = 0; i < (arr).used; i++)
+
+#define array_each_elem(elt, arr, typ) \
+ for(typ *elt = (typ *) (arr).data + 0; \
+ elt - (typ *) (arr).data < (arr).used; \
+ elt++)
+
+static void array_init(struct array *arr, size_t elem_size) {
+ MEMZERO(arr, 1);
+ arr->elem_size = elem_size;
+}
+
+ATTRIBUTE_RETURN_CHECK
+static int array_add(struct array *arr, ind_t *ind) {
+ if (arr->used >= arr->size) {
+ int r;
+ ind_t expand = arr->size;
+ if (expand < 8)
+ expand = 8;
+ r = mem_realloc_n(&(arr->data), arr->elem_size, arr->size + expand);
+ if (r < 0)
+ return -1;
+ memset((char *) arr->data + arr->elem_size*arr->size, 0,
+ arr->elem_size * expand);
+ arr->size += expand;
+ }
+ *ind = arr->used;
+ arr->used += 1;
+ return 0;
+}
+
+/* Insert a new entry into the array at index IND. Shift all entries from
+ * IND on back by one. */
+ATTRIBUTE_RETURN_CHECK
+static int array_insert(struct array *arr, ind_t ind) {
+ ind_t last;
+
+ if (array_add(arr, &last) < 0)
+ return -1;
+
+ if (ind >= last)
+ return 0;
+
+ memmove((char *) arr->data + arr->elem_size*(ind+1),
+ (char *) arr->data + arr->elem_size*ind,
+ arr->elem_size * (arr->used - ind - 1));
+ memset((char *) arr->data + arr->elem_size*ind, 0, arr->elem_size);
+ return 0;
+}
+
+static void array_release(struct array *arr) {
+ if (arr != NULL) {
+ free(arr->data);
+ arr->used = arr->size = 0;
+ }
+}
+
+static void array_remove(struct array *arr, ind_t ind) {
+ char *data = arr->data;
+ memmove(data + ind * arr->elem_size,
+ data + (ind + 1) * arr->elem_size,
+ (arr->used - ind - 1) * arr->elem_size);
+ arr->used -= 1;
+}
+
+ATTRIBUTE_RETURN_CHECK
+static int array_join(struct array *dst, struct array *src) {
+ int r;
+
+ if (dst->elem_size != src->elem_size)
+ return -1;
+
+ r = mem_realloc_n(&(dst->data), dst->elem_size,
+ dst->used + src->used);
+ if (r < 0)
+ return -1;
+
+ memcpy(((char *) dst->data) + dst->used * dst->elem_size,
+ src->data,
+ src->used * src->elem_size);
+ dst->used += src->used;
+ dst->size = dst->used;
+ return 0;
+}
+
+/* Special lens indices - these don't reference lenses, but
+ * some pseudo actions. We stash them at the top of the range
+ * of IND_T, with LENS_MAX giving us the maximal lens index for
+ * a real lens
+ */
+enum trans_op {
+ EPS = IND_MAX,
+ CALL = EPS - 1,
+ LENS_MAX = CALL - 1
+};
+
+struct trans {
+ struct state *to;
+ ind_t lens;
+};
+
+struct state {
+ struct state *next; /* Linked list for memory management */
+ struct array trans; /* Array of struct trans */
+ ind_t nret; /* Number of returned lenses */
+ ind_t *ret; /* The returned lenses */
+ ind_t num; /* Counter for num of new states */
+ unsigned int reachable : 1;
+ unsigned int live : 1;
+};
+
+/* Sets of states used in determinizing the NFA to a DFA */
+struct nfa_state {
+ struct state *state; /* The new state in the DFA */
+ struct array set; /* Set of (struct state *) in the NFA, sorted */
+};
+
+/* For recursive lenses (nonterminals), the mapping of nonterminal to
+ * state. We also store nonrecursive lenses here; in that case, the state
+ * will be NULL */
+struct jmt_lens {
+ struct lens *lens;
+ struct state *state;
+};
+
+/* A Jim/Mandelbaum transducer */
+struct jmt {
+ struct error *error;
+ struct array lenses; /* Array of struct jmt_lens */
+ struct state *start;
+ ind_t lens; /* The start symbol of the grammar */
+ ind_t state_count;
+};
+
+enum item_reason {
+ R_ROOT = 1,
+ R_COMPLETE = R_ROOT << 1,
+ R_PREDICT = R_COMPLETE << 1,
+ R_SCAN = R_PREDICT << 1
+};
+
+/* The reason an item was added for parse reconstruction; can be R_SCAN,
+ * R_COMPLETE, R_PREDICT or R_COMPLETE|R_PREDICT */
+struct link {
+ enum item_reason reason;
+ ind_t lens; /* R_COMPLETE, R_SCAN */
+ ind_t from_set; /* R_COMPLETE, R_PREDICT, R_SCAN */
+ ind_t from_item; /* R_COMPLETE, R_PREDICT, R_SCAN */
+ ind_t to_item; /* R_COMPLETE */
+ ind_t caller; /* number of a state */
+};
+
+struct item {
+ /* The 'classical' Earley item (state, parent) */
+ struct state *state;
+ ind_t parent;
+ /* Backlinks to why item was added */
+ ind_t nlinks;
+ struct link *links;
+};
+
+struct item_set {
+ struct array items;
+};
+
+struct jmt_parse {
+ struct jmt *jmt;
+ struct error *error;
+ const char *text;
+ ind_t nsets;
+ struct item_set **sets;
+};
+
+#define for_each_item(it, set) \
+ array_each_elem(it, (set)->items, struct item)
+
+#define for_each_trans(t, s) \
+ array_each_elem(t, (s)->trans, struct trans)
+
+#define parse_state(parse, ind) \
+ array_elem(parse->states, ind, struct state)
+
+static struct item *set_item(struct jmt_parse *parse, ind_t set,
+ ind_t item) {
+ ensure(parse->sets[set] != NULL, parse);
+ ensure(item < parse->sets[set]->items.used, parse);
+ return array_elem(parse->sets[set]->items, item, struct item);
+ error:
+ return NULL;
+}
+
+static struct state *item_state(struct jmt_parse *parse, ind_t set,
+ ind_t item) {
+ return set_item(parse, set, item)->state;
+}
+
+static struct trans *state_trans(struct state *state, ind_t t) {
+ return array_elem(state->trans, t, struct trans);
+}
+
+static ind_t item_parent(struct jmt_parse *parse, ind_t set, ind_t item) {
+ return set_item(parse, set, item)->parent;
+}
+
+static bool is_return(const struct state *s) {
+ return s->nret > 0;
+}
+
+static struct lens *lens_of_parse(struct jmt_parse *parse, ind_t lens) {
+ return array_elem(parse->jmt->lenses, lens, struct jmt_lens)->lens;
+}
+
+/*
+ * The parser
+ */
+
+/*
+ * Manipulate the Earley graph. We denote edges in the graph as
+ * [j, (s,i)] -> [k, item_k] => [l, item_l]
+ * to indicate that we are adding item (s,i) to E_j and record that its
+ * child is the item with index item_k in E_k and its sibling is item
+ * item_l in E_l.
+ */
+
+/* Add item (s, k) to E_j. Note that the item was caused by action reason
+ * using lens starting at from_item in E_{from_set}
+ *
+ * [j, (s,k)] -> [from_set, from_item] => [j, to_item]
+ */
+static ind_t parse_add_item(struct jmt_parse *parse, ind_t j,
+ struct state *s, ind_t k,
+ enum item_reason reason, ind_t lens,
+ ind_t from_set,
+ ind_t from_item, ind_t to_item,
+ ind_t caller) {
+
+ int r;
+ struct item_set *set = parse->sets[j];
+ struct item *item = NULL;
+ ind_t result = IND_MAX;
+
+ ensure(from_item == EPS || from_item < parse->sets[from_set]->items.used,
+ parse);
+ ensure(to_item == EPS || to_item < parse->sets[j]->items.used,
+ parse);
+
+ if (set == NULL) {
+ r = ALLOC(parse->sets[j]);
+ ERR_NOMEM(r < 0, parse);
+ array_init(&parse->sets[j]->items, sizeof(struct item));
+ set = parse->sets[j];
+ }
+
+ for (ind_t i=0; i < set->items.used; i++) {
+ if (item_state(parse, j, i) == s
+ && item_parent(parse, j, i) == k) {
+ result = i;
+ item = set_item(parse, j, i);
+ break;
+ }
+ }
+
+ if (result == IND_MAX) {
+ r = array_add(&set->items, &result);
+ ERR_NOMEM(r < 0, parse);
+
+ item = set_item(parse, j, result);
+ item->state = s;
+ item->parent = k;
+ }
+
+ for (ind_t i = 0; i < item->nlinks; i++) {
+ struct link *lnk = item->links + i;
+ if (lnk->reason == reason && lnk->lens == lens
+ && lnk->from_set == from_set && lnk->from_item == from_item
+ && lnk->to_item == to_item && lnk->caller == caller)
+ return result;
+ }
+
+ r = REALLOC_N(item->links, item->nlinks + 1);
+ ERR_NOMEM(r < 0, parse);
+
+ struct link *lnk = item->links + item->nlinks;
+ item->nlinks += 1;
+
+ lnk->reason = reason;
+ lnk->lens = lens;
+ lnk->from_set = from_set;
+ lnk->from_item = from_item;
+ lnk->to_item = to_item;
+ lnk->caller = caller;
+ error:
+ return result;
+}
+
+/* Add item (s, i) to set E_j and record that it was added because
+ * a parse of nonterminal lens, starting at itemk in E_k, was completed
+ * by item item in E_j
+ *
+ * [j, (s,i)] -> [j, item] => [k, itemk]
+ */
+static void parse_add_complete(struct jmt_parse *parse, ind_t j,
+ struct state *s, ind_t i,
+ ind_t k, ind_t itemk,
+ ind_t lens,
+ ind_t item) {
+ parse_add_item(parse, j, s, i, R_COMPLETE, lens, k, itemk, item, IND_MAX);
+}
+
+/* Same as parse_add_complete, but mark the item also as a predict
+ *
+ * [j, (s,i)] -> [j, item] => [k, itemk]
+ */
+static void parse_add_predict_complete(struct jmt_parse *parse, ind_t j,
+ struct state *s, ind_t i,
+ ind_t k, ind_t itemk,
+ ind_t lens,
+ ind_t item, ind_t caller) {
+ parse_add_item(parse, j, s, i, R_COMPLETE|R_PREDICT,
+ lens, k, itemk, item, caller);
+}
+
+/* Add item (s, j) to E_j and record that it was added because of a
+ * prediction from item from in E_j
+ */
+static ind_t parse_add_predict(struct jmt_parse *parse, ind_t j,
+ struct state *s, ind_t from) {
+
+ ensure(from < parse->sets[j]->items.used, parse);
+ struct state *t = item_state(parse, j, from);
+
+ return parse_add_item(parse, j, s, j, R_PREDICT, EPS, j, from, EPS,
+ t->num);
+ error:
+ return IND_MAX;
+}
+
+/* Add item (s,i) to E_j and record that it was added because of scanning
+ * with lens starting from item item in E_k.
+ *
+ * [j, (s,i)] -> [k, item]
+ */
+static void parse_add_scan(struct jmt_parse *parse, ind_t j,
+ struct state *s, ind_t i,
+ ind_t lens, ind_t k, ind_t item) {
+ ensure(item < parse->sets[k]->items.used, parse);
+
+ parse_add_item(parse, j, s, i, R_SCAN, lens, k, item, EPS, IND_MAX);
+ error:
+ return;
+}
+
+ATTRIBUTE_PURE
+static bool is_complete(const struct link *lnk) {
+ return lnk->reason & R_COMPLETE;
+}
+
+ATTRIBUTE_PURE
+static bool is_predict(const struct link *lnk) {
+ return lnk->reason & R_PREDICT;
+}
+
+ATTRIBUTE_PURE
+static bool is_scan(const struct link *lnk) {
+ return lnk->reason & R_SCAN;
+}
+
+ATTRIBUTE_PURE
+static bool is_last_sibling(const struct link *lnk) {
+ if (is_complete(lnk))
+ return false;
+ return lnk->reason & (R_PREDICT|R_ROOT);
+}
+
+ATTRIBUTE_PURE
+static bool returns(const struct state *s, ind_t l) {
+ for (ind_t i = 0; i < s->nret; i++)
+ if (s->ret[i] == l)
+ return true;
+ return false;
+}
+
+static void state_add_return(struct jmt *jmt, struct state *s, ind_t l) {
+ int r;
+
+ if (s == NULL || returns(s, l))
+ return;
+
+ r = REALLOC_N(s->ret, s->nret + 1);
+ ERR_NOMEM(r < 0, jmt);
+ s->ret[s->nret] = l;
+ s->nret += 1;
+ error:
+ return;
+}
+
+static void state_merge_returns(struct jmt *jmt, struct state *dst,
+ const struct state *src) {
+ for (ind_t l = 0; l < src->nret; l++)
+ state_add_return(jmt, dst, src->ret[l]);
+}
+
+static void nncomplete(struct jmt_parse *parse, ind_t j,
+ struct state *t, ind_t k, ind_t item) {
+
+ for (ind_t itemk = 0; itemk < parse->sets[k]->items.used; itemk++) {
+ struct state *u = item_state(parse, k, itemk);
+ for_each_trans(y, u) {
+ if (returns(t, y->lens)) {
+ ind_t parent = item_parent(parse, k, itemk);
+ parse_add_complete(parse, j,
+ y->to, parent,
+ k, itemk, y->lens, item);
+ }
+ }
+ }
+}
+
+/* NCALLER for (t, i) in E_j, which has index item in E_j, and t -> s a
+ * call in the transducer and s a return. The item (s,j) has index pred in
+ * E_j
+ */
+static void ncaller(struct jmt_parse *parse, ind_t j, ind_t item,
+ struct state *t, ind_t i, struct state *s, ind_t pred) {
+ for_each_trans(u, t) {
+ if (returns(s, u->lens)) {
+ /* [j, (u->to, i)] -> [j, pred] => [j, item] */
+ parse_add_predict_complete(parse, j,
+ u->to, i,
+ j, item, u->lens,
+ pred, t->num);
+ }
+ }
+}
+
+/* NCALLEE for (t, parent) in E_j, which has index item in E_j, and t -> s
+ * a call in the transducer and s a return. The item (s,j) has index pred
+ * in E_j
+ */
+static void ncallee(struct jmt_parse *parse, ind_t j, ATTRIBUTE_UNUSED ind_t item,
+ struct state *t, ATTRIBUTE_UNUSED ind_t parent,
+ struct state *s, ind_t pred) {
+ for_each_trans(u, s) {
+ if (returns(s, u->lens)) {
+ /* [j, (u->to, j)] -> [j, pred] => [j, pred] */
+ parse_add_predict_complete(parse, j,
+ u->to, j,
+ j, pred, u->lens,
+ pred, t->num);
+ }
+ }
+}
+
+static struct jmt_parse *parse_init(struct jmt *jmt,
+ const char *text, size_t text_len) {
+ int r;
+ struct jmt_parse *parse;
+
+ r = ALLOC(parse);
+ ERR_NOMEM(r < 0, jmt);
+
+ parse->jmt = jmt;
+ parse->error = jmt->error;
+ parse->text = text;
+ parse->nsets = text_len + 1;
+ r = ALLOC_N(parse->sets, parse->nsets);
+ ERR_NOMEM(r < 0, jmt);
+ return parse;
+ error:
+ if (parse != NULL)
+ free(parse->sets);
+ free(parse);
+ return NULL;
+}
+
+void jmt_free_parse(struct jmt_parse *parse) {
+ if (parse == NULL)
+ return;
+ for (int i=0; i < parse->nsets; i++) {
+ struct item_set *set = parse->sets[i];
+ if (set != NULL) {
+ array_each_elem(x, set->items, struct item)
+ free(x->links);
+ array_release(&set->items);
+ free(set);
+ }
+ }
+ free(parse->sets);
+ free(parse);
+}
+
+static struct state *lens_state(struct jmt *jmt, ind_t l);
+
+static void flens(FILE *fp, ind_t l) {
+ if (l == 0)
+ fprintf(fp, "%c", 'S');
+ else if (l < 'S' - 'A')
+ fprintf(fp, "%c", 'A' + l - 1);
+ else if (l <= 'Z' - 'A')
+ fprintf(fp, "%c", 'A' + l);
+ else
+ fprintf(fp, "%u", l);
+}
+
+static void parse_dot_link(FILE *fp, struct jmt_parse *parse,
+ ind_t k, struct item *x, struct link *lnk) {
+ char *lens_label = NULL;
+ if (is_complete(lnk) || is_scan(lnk)) {
+ struct state *sA = lens_state(parse->jmt, lnk->lens);
+ int r;
+
+ if (sA == NULL)
+ r = xasprintf(&lens_label, "<%d>", lnk->lens);
+ else
+ r = xasprintf(&lens_label, "%d", lnk->lens);
+ if (r < 0) {
+ fprintf(fp, "// Internal error generating lens_label\n");
+ return;
+ }
+ }
+ fprintf(fp, " n%d_%d_%d [ label = \"(%d, %d)\"];\n",
+ k, x->state->num, x->parent, x->state->num, x->parent);
+ if (is_complete(lnk)) {
+ struct item *y = set_item(parse, k, lnk->to_item);
+ const char *pred = is_predict(lnk) ? "p" : "";
+ fprintf(fp, " n%d_%d_%d -> n%s%d_%d_%d [ style = dashed ];\n",
+ k, x->state->num, x->parent,
+ pred, k, y->state->num, y->parent);
+ if (is_predict(lnk)) {
+ fprintf(fp, " n%s%d_%d_%d [ label = \"\" ];\n",
+ pred, k, y->state->num, y->parent);
+ fprintf(fp, " n%s%d_%d_%d -> n%d_%d_%d [ style = bold ];\n",
+ pred, k, y->state->num, y->parent,
+ k, y->state->num, y->parent);
+ }
+ y = set_item(parse, lnk->from_set, lnk->from_item);
+ fprintf(fp,
+ " n%d_%d_%d -> n%d_%d_%d [ style = dashed, label = \"",
+ k, x->state->num, x->parent,
+ lnk->from_set, y->state->num, y->parent);
+ flens(fp, lnk->lens);
+ fprintf(fp, "\" ];\n");
+ } else if (is_scan(lnk)) {
+ struct item *y =
+ set_item(parse, lnk->from_set, lnk->from_item);
+ fprintf(fp,
+ " n%d_%d_%d -> n%d_%d_%d [ label = \"",
+ k, x->state->num, x->parent,
+ lnk->from_set, y->state->num, y->parent);
+ for (ind_t i=lnk->from_set; i < k; i++)
+ fprintf(fp, "%c", parse->text[i]);
+ fprintf(fp, "\" ];\n");
+
+ } else if (is_predict(lnk)) {
+ struct item *y =
+ set_item(parse, lnk->from_set, lnk->from_item);
+ fprintf(fp,
+ " n%d_%d_%d -> n%d_%d_%d [ style = bold ];\n",
+ k, x->state->num, x->parent,
+ lnk->from_set, y->state->num, y->parent);
+
+ }
+ free(lens_label);
+}
+
+static void parse_dot(struct jmt_parse *parse, const char *fname) {
+ FILE *fp = debug_fopen("%s", fname);
+ if (fp == NULL)
+ return;
+
+ fprintf(fp, "digraph \"jmt_parse\" {\n");
+ fprintf(fp, " rankdir = RL;\n");
+ for (int k=0; k < parse->nsets; k++) {
+ struct item_set *set = parse->sets[k];
+
+ if (set == NULL)
+ continue;
+
+ fprintf(fp, " subgraph \"cluster_E_%d\" {\n", k);
+ fprintf(fp, " rankdir=RL;\n rank=same;\n");
+ fprintf(fp, " title%d [ label=\"E%d\", shape=plaintext ]\n", k, k);
+ for_each_item(x, set) {
+ for (int i=0; i < x->nlinks; i++) {
+ struct link *lnk = x->links + i;
+ parse_dot_link(fp, parse, k, x, lnk);
+ }
+ }
+ fprintf(fp, "}\n");
+ }
+ fprintf(fp, "}\n");
+ fclose(fp);
+}
+
+struct jmt_parse *
+jmt_parse(struct jmt *jmt, const char *text, size_t text_len)
+{
+ struct jmt_parse *parse = NULL;
+
+ parse = parse_init(jmt, text, text_len);
+ ERR_BAIL(jmt);
+
+ /* INIT */
+ parse_add_item(parse, 0, jmt->start, 0, R_ROOT, EPS, EPS, EPS, EPS,
+ jmt->lens);
+ /* NINIT */
+ if (is_return(jmt->start)) {
+ for_each_trans(x, jmt->start) {
+ if (returns(jmt->start, x->lens))
+ parse_add_predict_complete(parse, 0, x->to, 0,
+ 0, 0, x->lens, 0, 0);
+ }
+ }
+
+ for (int j=0; j <= text_len; j++) {
+ struct item_set *set = parse->sets[j];
+ if (set == NULL)
+ continue;
+
+ for (int item=0; item < set->items.used; item++) {
+ struct state *t = item_state(parse, j, item);
+ ind_t i = item_parent(parse, j, item);
+
+ if (is_return(t) && i != j) {
+ /* NNCOMPLETE */
+ nncomplete(parse, j, t, i, item);
+ }
+
+ for_each_trans(x, t) {
+ if (x->lens == CALL) {
+ /* PREDICT */
+ ind_t pred = parse_add_predict(parse, j, x->to, item);
+ ERR_BAIL(parse);
+ if (is_return(x->to)) {
+ /* NCALLER */
+ ncaller(parse, j, item, t, i, x->to, pred);
+ /* NCALLEE */
+ ncallee(parse, j, item, t, i, x->to, pred);
+ }
+ } else {
+ int count;
+ struct lens *lens = lens_of_parse(parse, x->lens);
+ struct state *sA = lens_state(parse->jmt, x->lens);
+ if (! lens->recursive && sA == NULL) {
+ /* SCAN, terminal */
+ // FIXME: We really need to find every k so that
+ // text[j..k] matches lens->ctype, not just one
+ count = regexp_match(lens->ctype, text, text_len, j, NULL);
+ if (count > 0) {
+ parse_add_scan(parse, j+count,
+ x->to, i,
+ x->lens, j, item);
+ }
+ }
+ }
+ }
+ }
+ }
+ if (debugging("cf.jmt.parse"))
+ parse_dot(parse, "jmt_parse.dot");
+ return parse;
+ error:
+ jmt_free_parse(parse);
+ return NULL;
+}
+
+/*
+ * Reconstruction of the parse tree
+ */
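+
+/* A minimal sketch of how a caller drives this reconstruction (the
+ * callback names my_* are hypothetical; callers supply their own):
+ *
+ * struct jmt_visitor visitor;
+ * memset(&visitor, 0, sizeof(visitor));
+ * visitor.parse = parse; // result of jmt_parse()
+ * visitor.enter = my_enter; // called before a nonterminal's children
+ * visitor.exit = my_exit; // called after them
+ * visitor.terminal = my_terminal; // called for each terminal match
+ * visitor.data = my_data; // passed through to the callbacks
+ * size_t len;
+ * int r = jmt_visit(&visitor, &len); // 1: parse found, 0: no parse, -1: error
+ */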
+
+static void
+build_nullable(struct jmt_parse *parse, ind_t pos,
+ struct jmt_visitor *visitor, struct lens *lens, int lvl) {
+ if (! lens->recursive) {
+ if (visitor->terminal != NULL) {
+ (*visitor->terminal)(lens, pos, pos, visitor->data);
+ ERR_BAIL(parse);
+ }
+ } else {
+ if (visitor->enter != NULL) {
+ (*visitor->enter)(lens, pos, pos, visitor->data);
+ ERR_BAIL(parse);
+ }
+
+ switch(lens->tag) {
+ case L_REC:
+ build_nullable(parse, pos, visitor, lens->body, lvl+1);
+ break;
+ case L_CONCAT:
+ for (int i=0; i < lens->nchildren; i++)
+ build_nullable(parse, pos, visitor, lens->children[i], lvl+1);
+ break;
+ case L_UNION:
+ for (int i=0; i < lens->nchildren; i++)
+ if (lens->children[i]->ctype_nullable)
+ build_nullable(parse, pos, visitor,
+ lens->children[i], lvl+1);
+ break;
+ case L_SUBTREE:
+ case L_SQUARE:
+ build_nullable(parse, pos, visitor, lens->child, lvl+1);
+ break;
+ case L_STAR:
+ case L_MAYBE:
+ break;
+ default:
+ BUG_ON(true, parse, "Unexpected lens tag %d", lens->tag);
+ }
+
+ if (visitor->exit != NULL) {
+ (*visitor->exit)(lens, pos, pos, visitor->data);
+ ERR_BAIL(parse);
+ }
+ }
+ error:
+ return;
+}
+
+static void build_trace(const char *msg, ind_t start, ind_t end,
+ struct item *x, int lvl) {
+ for (int i=0; i < lvl; i++) putc(' ', stdout);
+ if (x != NULL) {
+ printf("%s %d..%d: (%d, %d) %d %s%s%s\n", msg,
+ start, end, x->state->num,
+ x->parent, x->links->lens,
+ is_complete(x->links) ? "c" : "",
+ is_predict(x->links) ? "p" : "",
+ is_scan(x->links) ? "s" : "");
+ } else {
+ printf("%s %d..%d\n", msg, start, end);
+ }
+}
+
+static int add_sibling(struct array *siblings, ind_t lnk) {
+ int r;
+ ind_t ind;
+
+ r = array_add(siblings, &ind);
+ if (r < 0)
+ return -1;
+ *array_elem(*siblings, ind, ind_t) = lnk;
+ return 0;
+}
+
+/* Return true if CALLER is a possible caller for the link LNK which starts
+ * at item X.
+ *
+ * FIXME: We could get rid of the caller field on a link if we
+ * distinguished between NCALLER and NCALLEE in the Earley graph, rather
+ * than collapsing them both into links with reason PREDICT|COMPLETE
+ */
+static bool is_caller(struct item *x, struct link *lnk, ind_t caller) {
+ if (lnk->reason & R_ROOT)
+ return caller == lnk->caller;
+
+ if (! is_predict(lnk))
+ return false;
+
+ if (is_complete(lnk)) {
+ /* NCALLER: caller == t
+ * NCALLEE: caller == s */
+ return caller == lnk->caller;
+ }
+ /* PREDICT: caller == t || caller == s */
+ return caller == lnk->caller || caller == x->state->num;
+}
+
+/* Traverse the siblings of x and check that the callee's set of callers
+ * contains CALLER. When a path ending with a call from CALLER exists,
+ * record the number of the corresponding link for each item in
+ * SIBLINGS. The links are recorded in left-to-right order, i.e. the number
+ * of the first link to follow is in the last entry in SIBLINGS.
+ *
+ * Return 0 if there is a path ending in a callee of CALLER, -1 if
+ * there is none, -2 if there are multiple such paths, and -3 for any
+ * other error.
+ */
+static int filter_siblings(struct jmt_visitor *visitor, struct lens *lens,
+ ind_t k, ind_t item, ind_t caller,
+ struct array *siblings) {
+ struct jmt_parse *parse = visitor->parse;
+ struct item *x = set_item(parse, k, item);
+ ind_t nlast = 0;
+ int r;
+
+ for (ind_t lnk = 0; lnk < x->nlinks; lnk++)
+ if (is_last_sibling(x->links + lnk))
+ nlast += 1;
+
+ if (nlast > 0 && nlast < x->nlinks)
+ goto ambig;
+
+ if (nlast == x->nlinks) {
+ for (ind_t lnk = 0; lnk < x->nlinks; lnk++) {
+ if (is_caller(x, x->links + lnk, caller)) {
+ siblings->used = 0;
+ r = add_sibling(siblings, lnk);
+ if (r < 0) {
+ ERR_REPORT(parse, AUG_ENOMEM, NULL);
+ return -3;
+ }
+ return 0;
+ }
+ }
+ return -1;
+ } else {
+ /* nlast == 0 */
+ ind_t found = IND_MAX;
+ for (ind_t lnk = 0; lnk < x->nlinks; lnk++) {
+ struct link *l = x->links + lnk;
+ r = filter_siblings(visitor, lens,
+ l->from_set, l->from_item, caller,
+ siblings);
+ if (r == -1)
+ continue;
+ if (r == 0) {
+ if (found != IND_MAX)
+ goto ambig;
+ else
+ found = lnk;
+ } else {
+ return r;
+ }
+ }
+ if (found == IND_MAX) {
+ return -1;
+ } else {
+ r = add_sibling(siblings, found);
+ if (r < 0) {
+ ERR_REPORT(parse, AUG_ENOMEM, NULL);
+ return -3;
+ }
+ return 0;
+ }
+ }
+ ambig:
+ (*visitor->error)(lens, visitor->data, k,
+ "Ambiguous parse: %d links in state (%d, %d) in E_%d",
+ x->nlinks, x->state->num, x->parent, k);
+ return -2;
+}
+
+static void visit_enter(struct jmt_visitor *visitor, struct lens *lens,
+ size_t start, size_t end,
+ struct item *x, int lvl) {
+ if (debugging("cf.jmt.visit"))
+ build_trace("{", start, end, x, lvl);
+ if (visitor->enter != NULL)
+ (*visitor->enter)(lens, start, end, visitor->data);
+}
+
+static void visit_exit(struct jmt_visitor *visitor, struct lens *lens,
+ size_t start, size_t end,
+ struct item *x, int lvl) {
+ if (debugging("cf.jmt.visit"))
+ build_trace("}", start, end, x, lvl);
+ if (visitor->exit != NULL)
+ (*visitor->exit)(lens, start, end, visitor->data);
+}
+
+static int
+build_children(struct jmt_parse *parse, ind_t k, ind_t item,
+ struct jmt_visitor *visitor, int lvl, ind_t caller);
+
+static int
+build_tree(struct jmt_parse *parse, ind_t k, ind_t item, struct lens *lens,
+ struct jmt_visitor *visitor, int lvl) {
+ struct item *x = set_item(parse, k, item);
+ ind_t start = x->links->from_set;
+ ind_t end = k;
+ struct item *old_x = x;
+
+ if (start == end) {
+ /* This completion corresponds to a nullable nonterminal
+ * that matches epsilon. Reconstruct the full parse tree
+ * for matching epsilon */
+ if (debugging("cf.jmt.visit"))
+ build_trace("N", x->links->from_set, k, x, lvl);
+ build_nullable(parse, start, visitor, lens, lvl);
+ return end;
+ }
+
+ ensure(is_complete(x->links), parse);
+
+ visit_enter(visitor, lens, start, end, x, lvl);
+ ERR_BAIL(parse);
+
+ /* x is a completion item. (k, x->links->to_item) is its first child
+ * in the parse tree. */
+ if (! is_predict(x->links)) {
+ struct link *lnk = x->links;
+ struct item *sib = set_item(parse, lnk->from_set, lnk->from_item);
+ ind_t caller = sib->state->num;
+
+ item = lnk->to_item;
+ x = set_item(parse, k, item);
+ build_children(parse, k, item, visitor, lvl, caller);
+ ERR_BAIL(parse);
+ }
+
+ visit_exit(visitor, lens, start, end, old_x, lvl);
+ ERR_BAIL(parse);
+ error:
+ return end;
+}
+
+static int
+build_children(struct jmt_parse *parse, ind_t k, ind_t item,
+ struct jmt_visitor *visitor, int lvl, ind_t caller) {
+ struct item *x = set_item(parse, k, item);
+ struct lens *lens = lens_of_parse(parse, x->links->lens);
+ struct array siblings;
+ ind_t end = k;
+ int r;
+
+ array_init(&siblings, sizeof(ind_t));
+ r = filter_siblings(visitor, lens, k, item, caller, &siblings);
+ if (r < 0)
+ goto error;
+
+ /* x is the first item in a list of siblings; visit items (x->from_set,
+ * x->from_item) in order, which will visit x and its siblings in the
+ * parse tree from right to left */
+ for (ind_t i = siblings.used - 1; i > 0; i--) {
+ ind_t lnk = *array_elem(siblings, i, ind_t);
+ struct lens *sub = lens_of_parse(parse, x->links[lnk].lens);
+ if (sub->recursive) {
+ build_tree(parse, k, item, sub, visitor, lvl+1);
+ ERR_BAIL(parse);
+ } else {
+ if (debugging("cf.jmt.visit"))
+ build_trace("T", x->links->from_set, k, x, lvl+1);
+ if (visitor->terminal != NULL) {
+ (*visitor->terminal)(sub,
+ x->links->from_set, k, visitor->data);
+ ERR_BAIL(parse);
+ }
+ }
+ k = x->links[lnk].from_set;
+ item = x->links[lnk].from_item;
+ x = set_item(parse, k, item);
+ }
+ error:
+ array_release(&siblings);
+ return end;
+}
+
+int jmt_visit(struct jmt_visitor *visitor, size_t *len) {
+ struct jmt_parse *parse = visitor->parse;
+ ind_t k = parse->nsets - 1; /* Current Earley set */
+ ind_t item;
+ struct item_set *set = parse->sets[k];
+
+ if (set == NULL)
+ goto noparse;
+
+ for (item = 0; item < set->items.used; item++) {
+ struct item *x = set_item(parse, k, item);
+ if (x->parent == 0 && returns(x->state, parse->jmt->lens)) {
+ for (ind_t i = 0; i < x->nlinks; i++) {
+ if (is_complete(x->links + i) || is_scan(x->links + i)) {
+ if (debugging("cf.jmt.visit"))
+ printf("visit: found (%d, %d) in E_%d\n",
+ x->state->num, x->parent, k);
+ goto found;
+ }
+ }
+ }
+ }
+ found:
+ if (item >= parse->sets[k]->items.used)
+ goto noparse;
+ struct lens *lens = lens_of_parse(parse, parse->jmt->lens);
+
+ visit_enter(visitor, lens, 0, k, NULL, 0);
+ ERR_BAIL(parse);
+
+ *len = build_children(parse, k, item, visitor, 0,
+ parse->jmt->start->num);
+ ERR_BAIL(parse);
+
+ visit_exit(visitor, lens, 0, k, NULL, 0);
+ ERR_BAIL(parse);
+ return 1;
+ error:
+ return -1;
+ noparse:
+ for (; k > 0; k--)
+ if (parse->sets[k] != NULL) break;
+ *len = k;
+ return 0;
+}
+
+/*
+ * Build the automaton
+ */
+
+
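+/* A sketch of the fragments built below: for a lens with index l and
+ * automaton state sA = lens_state(jmt, l), thompson() produces
+ *
+ * s --l--> f (match the whole lens)
+ * s --CALL--> sA (descend into its automaton, if sA != NULL)
+ *
+ * The conv_* functions wire such fragments together according to the
+ * lens tag; unepsilon() and collect() then remove epsilon transitions
+ * and dead states.
+ */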
+static struct state *make_state(struct jmt *jmt) {
+ struct state *s;
+ int r;
+
+ r = ALLOC(s);
+ ERR_NOMEM(r < 0, jmt);
+ s->num = jmt->state_count++;
+ array_init(&s->trans, sizeof(struct trans));
+ if (jmt->start != NULL)
+ list_cons(jmt->start->next, s);
+ else
+ jmt->start = s;
+ return s;
+ error:
+ return NULL;
+}
+
+static ind_t add_lens(struct jmt *jmt, struct lens *lens) {
+ int r;
+ ind_t l;
+ struct state *sA = NULL;
+ int nullable = 0;
+
+ r = array_add(&jmt->lenses, &l);
+ ERR_NOMEM(r < 0, jmt);
+ ERR_NOMEM(l == IND_MAX, jmt);
+
+ if (! lens->recursive)
+ nullable = regexp_matches_empty(lens->ctype);
+
+ array_elem(jmt->lenses, l, struct jmt_lens)->lens = lens;
+ /* A nonrecursive lens that matches epsilon is both a terminal
+ * and a nonterminal */
+ if (lens->recursive || nullable) {
+ sA = make_state(jmt);
+ ERR_NOMEM(sA == NULL, jmt);
+ array_elem(jmt->lenses, l, struct jmt_lens)->state = sA;
+ if (! lens->recursive) {
+ /* Add lens again, so that l refers to the nonterminal T
+ * for the lens, and l+1 refers to the terminal t for it */
+ ind_t m;
+ r = array_add(&jmt->lenses, &m);
+ ERR_NOMEM(r < 0, jmt);
+ ERR_NOMEM(m == IND_MAX, jmt);
+
+ array_elem(jmt->lenses, m, struct jmt_lens)->lens = lens;
+ }
+ }
+
+ if (debugging("cf.jmt")) {
+ if (sA == NULL) {
+ char *s = format_lens(lens);
+ printf("add_lens: ");
+ print_regexp(stdout, lens->ctype);
+ printf(" %s\n", s);
+ free(s);
+ } else {
+ char *s = format_lens(lens);
+ printf("add_lens: ");
+ flens(stdout, l);
+ printf(" %u %s\n", sA->num, s);
+ if (nullable) {
+ printf("add_lens: // %s\n", s);
+ }
+ free(s);
+ }
+ }
+
+ return l;
+ error:
+ return IND_MAX;
+}
+
+static struct trans *
+add_new_trans(struct jmt *jmt,
+ struct state *from, struct state *to, ind_t lens) {
+ struct trans *t;
+ ind_t i;
+ int r;
+
+ if (from == NULL || to == NULL)
+ return NULL;
+
+ r = array_add(&from->trans, &i);
+ ERR_NOMEM(r < 0, jmt);
+ t = array_elem(from->trans, i, struct trans);
+ t->to = to;
+ t->lens = lens;
+ return t;
+ error:
+ return NULL;
+}
+
+static struct trans *
+add_eps_trans(struct jmt *jmt, struct state *from, struct state *to) {
+ return add_new_trans(jmt, from, to, EPS);
+}
+
+static struct lens *lens_of_jmt(struct jmt *jmt, ind_t l) {
+ return array_elem(jmt->lenses, l, struct jmt_lens)->lens;
+}
+
+static ind_t lens_index(struct jmt *jmt, struct lens *lens) {
+ array_for_each(i, jmt->lenses)
+ if (lens_of_jmt(jmt, i) == lens)
+ return i;
+ return IND_MAX;
+}
+
+static struct state *lens_state(struct jmt *jmt, ind_t l) {
+ return array_elem(jmt->lenses, l, struct jmt_lens)->state;
+}
+
+static void print_lens_symbol(FILE *fp, struct jmt *jmt, struct lens *lens) {
+ ind_t l = lens_index(jmt, lens);
+ struct state *sA = lens_state(jmt, l);
+
+ if (sA == NULL)
+ print_regexp(fp, lens->ctype);
+ else
+ flens(fp, l);
+}
+
+static void print_grammar(struct jmt *jmt, struct lens *lens) {
+ ind_t l = lens_index(jmt, lens);
+ struct state *sA = lens_state(jmt, l);
+
+ if (sA == NULL || (lens->tag == L_REC && lens->rec_internal))
+ return;
+
+ printf(" ");
+ print_lens_symbol(stdout, jmt, lens);
+ printf(" := ");
+
+ if (! lens->recursive) {
+ /* Nullable regexps */
+ print_regexp(stdout, lens->ctype);
+ printf("\n");
+ return;
+ }
+
+ switch (lens->tag) {
+ case L_CONCAT:
+ print_lens_symbol(stdout, jmt, lens->children[0]);
+ for (int i=1; i < lens->nchildren; i++) {
+ printf(" . ");
+ print_lens_symbol(stdout, jmt, lens->children[i]);
+ }
+ printf("\n");
+ for (int i=0; i < lens->nchildren; i++)
+ print_grammar(jmt, lens->children[i]);
+ break;
+ case L_UNION:
+ print_lens_symbol(stdout, jmt, lens->children[0]);
+ for (int i=1; i < lens->nchildren; i++) {
+ printf(" | ");
+ print_lens_symbol(stdout, jmt, lens->children[i]);
+ }
+ printf("\n");
+ for (int i=0; i < lens->nchildren; i++)
+ print_grammar(jmt, lens->children[i]);
+ break;
+ case L_SUBTREE:
+ print_lens_symbol(stdout, jmt, lens->child);
+ printf("\n");
+ print_grammar(jmt, lens->child);
+ break;
+ case L_STAR:
+ print_lens_symbol(stdout, jmt, lens->child);
+ printf("*\n");
+ print_grammar(jmt, lens->child);
+ break;
+ case L_MAYBE:
+ print_lens_symbol(stdout, jmt, lens->child);
+ printf("?\n");
+ print_grammar(jmt, lens->child);
+ break;
+ case L_REC:
+ print_lens_symbol(stdout, jmt, lens->body);
+ printf("\n");
+ print_grammar(jmt, lens->body);
+ break;
+ case L_SQUARE:
+ print_lens_symbol(stdout, jmt, lens->child);
+ printf("\n");
+ print_grammar(jmt, lens->child);
+ break;
+ default:
+ BUG_ON(true, jmt, "Unexpected lens tag %d", lens->tag);
+ break;
+ }
+ error:
+ return;
+}
+
+static void print_grammar_top(struct jmt *jmt, struct lens *lens) {
+ printf("Grammar:\n");
+ print_grammar(jmt, lens);
+ if (lens->tag == L_REC) {
+ printf(" ");
+ print_lens_symbol(stdout, jmt, lens->alias);
+ printf(" := ");
+ print_lens_symbol(stdout, jmt, lens->alias->body);
+ printf("\n");
+ }
+}
+
+static void index_lenses(struct jmt *jmt, struct lens *lens) {
+ ind_t l;
+
+ l = lens_index(jmt, lens);
+ if (l == IND_MAX) {
+ l = add_lens(jmt, lens);
+ ERR_BAIL(jmt);
+ }
+
+ if (! lens->recursive)
+ return;
+
+ switch (lens->tag) {
+ case L_CONCAT:
+ case L_UNION:
+ for (int i=0; i < lens->nchildren; i++)
+ index_lenses(jmt, lens->children[i]);
+ break;
+ case L_SUBTREE:
+ case L_STAR:
+ case L_MAYBE:
+ case L_SQUARE:
+ index_lenses(jmt, lens->child);
+ break;
+ case L_REC:
+ if (! lens->rec_internal)
+ index_lenses(jmt, lens->body);
+ break;
+ default:
+ BUG_ON(true, jmt, "Unexpected lens tag %d", lens->tag);
+ break;
+ }
+ error:
+ return;
+}
+
+static void thompson(struct jmt *jmt, struct lens *lens,
+ struct state **s, struct state **f) {
+ ind_t l = lens_index(jmt, lens);
+ struct state *sA = lens_state(jmt, l);
+ ensure(l < jmt->lenses.used, jmt);
+
+ *s = make_state(jmt);
+ *f = make_state(jmt);
+ ERR_BAIL(jmt);
+
+ if (lens->recursive) {
+ /* A nonterminal */
+ add_new_trans(jmt, *s, *f, l);
+ add_new_trans(jmt, *s, sA, CALL);
+ } else if (sA == NULL) {
+ /* A terminal that never matches epsilon */
+ add_new_trans(jmt, *s, *f, l);
+ } else {
+ /* A terminal that matches epsilon */
+ add_new_trans(jmt, *s, *f, l);
+ add_new_trans(jmt, *s, sA, CALL);
+ add_new_trans(jmt, *s, *f, l+1);
+ }
+ error:
+ return;
+}
+
+static void conv(struct jmt *jmt, struct lens *lens,
+ struct state **s, struct state **e,
+ struct state **f) {
+ ind_t l = lens_index(jmt, lens);
+ ensure(l < jmt->lenses.used, jmt);
+ struct state *sA = lens_state(jmt, l);
+
+ *s = NULL;
+ *e = NULL;
+ *f = NULL;
+
+ if (lens->recursive) {
+ /* A nonterminal */
+ *s = make_state(jmt);
+ *f = make_state(jmt);
+ ERR_BAIL(jmt);
+ add_new_trans(jmt, *s, *f, l);
+ ERR_BAIL(jmt);
+ ensure(sA != NULL, jmt);
+ add_new_trans(jmt, *s, sA, EPS);
+ ERR_BAIL(jmt);
+ } else if (sA == NULL) {
+ /* A terminal that never matches epsilon */
+ *s = make_state(jmt);
+ *f = make_state(jmt);
+ ERR_BAIL(jmt);
+ add_new_trans(jmt, *s, *f, l);
+ ERR_BAIL(jmt);
+ } else {
+ /* A terminal that matches epsilon */
+ *s = make_state(jmt);
+ *f = make_state(jmt);
+ ERR_BAIL(jmt);
+ add_new_trans(jmt, *s, *f, l);
+ add_new_trans(jmt, *s, *f, l+1);
+ add_new_trans(jmt, *s, sA, EPS);
+ ERR_BAIL(jmt);
+ }
+ error:
+ return;
+}
+
+static void conv_concat(struct jmt *jmt, struct lens *lens,
+ struct state **s, struct state **e,
+ struct state **f) {
+ struct state *s2, *f2, *e2;
+
+ conv(jmt, lens->children[0], &s2, &e2, &f2);
+ *s = make_state(jmt);
+ add_new_trans(jmt, *s, s2, EPS);
+
+ for (int i=1; i < lens->nchildren; i++) {
+ struct state *s3, *e3, *f3, *scall, *fcall;
+ conv(jmt, lens->children[i], &s3, &e3, &f3);
+ thompson(jmt, lens->children[i], &scall, &fcall);
+ ERR_BAIL(jmt);
+ add_eps_trans(jmt, f2, scall);
+ add_eps_trans(jmt, e2, s3);
+ *f = make_state(jmt);
+ add_eps_trans(jmt, f3, *f);
+ add_eps_trans(jmt, fcall, *f);
+ *e = make_state(jmt);
+ add_eps_trans(jmt, e3, *e);
+ f2 = *f;
+ e2 = *e;
+ }
+ error:
+ return;
+}
+
+static void conv_union(struct jmt *jmt, struct lens *lens,
+ struct state **s, struct state **e,
+ struct state **f) {
+
+ *s = make_state(jmt);
+ *e = make_state(jmt);
+ *f = make_state(jmt);
+ ERR_BAIL(jmt);
+
+ for (int i = 0; i < lens->nchildren; i++) {
+ struct state *s2, *e2, *f2;
+
+ conv(jmt, lens->children[i], &s2, &e2, &f2);
+ ERR_BAIL(jmt);
+
+ add_eps_trans(jmt, *s, s2);
+ add_eps_trans(jmt, e2, *e);
+ add_eps_trans(jmt, f2, *f);
+ }
+
+ error:
+ return;
+}
+
+static void conv_star(struct jmt *jmt, struct lens *lens,
+ struct state **s, struct state **e,
+ struct state **f) {
+
+ *s = make_state(jmt);
+ *e = make_state(jmt);
+ *f = make_state(jmt);
+ ERR_BAIL(jmt);
+
+ struct state *si, *ei, *fi, *scall, *fcall;
+ conv(jmt, lens->child, &si, &ei, &fi);
+ thompson(jmt, lens->child, &scall, &fcall);
+ ERR_BAIL(jmt);
+
+ add_eps_trans(jmt, *s, si);
+ add_eps_trans(jmt, ei, si);
+ add_eps_trans(jmt, *s, *e);
+ add_eps_trans(jmt, ei, *e);
+ add_eps_trans(jmt, fi, scall);
+ add_eps_trans(jmt, fcall, scall);
+ add_eps_trans(jmt, fi, *f);
+ add_eps_trans(jmt, fcall, *f);
+ ERR_BAIL(jmt);
+
+ error:
+ return;
+}
+
+static void conv_rhs(struct jmt *jmt, ind_t l) {
+ struct lens *lens = lens_of_jmt(jmt, l);
+ struct state *s = NULL, *e = NULL, *f = NULL;
+ struct state *sA = lens_state(jmt, l);
+
+ if (! lens->recursive) {
+ /* Nothing to do for terminals that do not match epsilon */
+ if (sA != NULL)
+ state_add_return(jmt, sA, l);
+ return;
+ }
+
+ /* All other nonterminals/recursive lenses */
+
+ /* Maintain P1 */
+ if (lens->ctype_nullable)
+ state_add_return(jmt, sA, l);
+
+ switch (lens->tag) {
+ case L_REC:
+ conv(jmt, lens->body, &s, &e, &f);
+ break;
+ case L_CONCAT:
+ conv_concat(jmt, lens, &s, &e, &f);
+ break;
+ case L_UNION:
+ conv_union(jmt, lens, &s, &e, &f);
+ break;
+ case L_SUBTREE:
+ conv(jmt, lens->child, &s, &e, &f);
+ break;
+ case L_STAR:
+ conv_star(jmt, lens, &s, &e, &f);
+ break;
+ case L_MAYBE:
+ conv(jmt, lens->child, &s, &e, &f);
+ add_new_trans(jmt, s, e, EPS);
+ break;
+ case L_SQUARE:
+ conv(jmt, lens->child, &s, &e, &f);
+ break;
+ default:
+ BUG_ON(true, jmt, "Unexpected lens tag %d", lens->tag);
+ }
+
+ ensure(sA != NULL, jmt);
+
+ add_eps_trans(jmt, sA, s);
+ state_add_return(jmt, e, l);
+ state_add_return(jmt, f, l);
+
+ error:
+ return;
+}
+
+ATTRIBUTE_RETURN_CHECK
+static int push_state(struct array *worklist, struct state *s) {
+ int r;
+ ind_t ind;
+
+ r = array_add(worklist, &ind);
+ if (r < 0)
+ return -1;
+ *array_elem(*worklist, ind, struct state *) = s;
+ return 0;
+}
+
+static struct state *pop_state(struct array *worklist) {
+ if (worklist->used > 0) {
+ worklist->used -= 1;
+ return *array_elem(*worklist, worklist->used, struct state *);
+ } else {
+ return NULL;
+ }
+}
+
+static void free_state(struct state *s) {
+ if (s == NULL)
+ return;
+ free(s->ret);
+ array_release(&s->trans);
+ free(s);
+}
+
+static void collect(struct jmt *jmt) {
+ struct array worklist;
+ size_t count, removed;
+ int r;
+
+ count = 0;
+ list_for_each(s, jmt->start) {
+ s->live = 0;
+ s->reachable = 0;
+ count += 1;
+ }
+
+ array_init(&worklist, sizeof(struct state *));
+ jmt->start->reachable = 1;
+ for (struct state *s = jmt->start;
+ s != NULL;
+ s = pop_state(&worklist)) {
+ for_each_trans(t, s) {
+ if (! t->to->reachable) {
+ t->to->reachable = 1;
+ r = push_state(&worklist, t->to);
+ ERR_NOMEM(r < 0, jmt);
+ }
+ }
+ }
+
+ list_for_each(s, jmt->start)
+ if (s->reachable && is_return(s))
+ s->live = 1;
+
+ bool changed;
+ do {
+ changed = false;
+ list_for_each(s, jmt->start) {
+ if (! s->live && s->reachable) {
+ for_each_trans(t, s) {
+ if (t->lens != CALL && t->to->live) {
+ s->live = 1;
+ changed = true;
+ break;
+ }
+ }
+ }
+ }
+ } while (changed);
+
+ list_for_each(s, jmt->start) {
+ if (s->live && s->reachable) {
+ for (ind_t i = 0; i < s->trans.used; ) {
+ struct trans *t = state_trans(s, i);
+ if (! (t->to->live && t->to->reachable))
+ array_remove(&s->trans, i);
+ else
+ i += 1;
+ }
+ }
+ }
+
+ removed = 0;
+ for (struct state *s = jmt->start;
+ s->next != NULL; ) {
+ struct state *p = s->next;
+ if (p->live && p->reachable) {
+ s = p;
+ } else {
+ s->next = p->next;
+ free_state(p);
+ removed += 1;
+ }
+ }
+
+ error:
+ array_release(&worklist);
+ return;
+}
+
+static void dedup(struct state *s) {
+ array_for_each(i, s->trans) {
+ struct trans *t = state_trans(s, i);
+ for (ind_t j = i+1; j < s->trans.used;) {
+ struct trans *u = state_trans(s, j);
+ if (t->to == u->to && t->lens == u->lens)
+ array_remove(&s->trans, j);
+ else
+ j += 1;
+ }
+ }
+}
+
+static void unepsilon(struct jmt *jmt) {
+ int r;
+
+ if (debugging("cf.jmt.build"))
+ jmt_dot(jmt, "jmt_10_raw.dot");
+ collect(jmt);
+
+ /* Get rid of epsilon transitions */
+ bool changed;
+ do {
+ changed = false;
+ list_for_each(s, jmt->start) {
+ array_for_each(i, s->trans) {
+ struct trans *t = state_trans(s, i);
+ if (t->lens == EPS) {
+ struct state *to = t->to;
+ array_remove(&s->trans, i);
+ r = array_join(&s->trans, &to->trans);
+ ERR_NOMEM(r < 0, jmt);
+ state_merge_returns(jmt, s, to);
+ dedup(s);
+ changed = true;
+ }
+ }
+ }
+ } while (changed);
+
+ collect(jmt);
+ if (debugging("cf.jmt.build"))
+ jmt_dot(jmt, "jmt_20_uneps.dot");
+ error:
+ return;
+}
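One unepsilon() step deletes an EPS edge s -> to and splices to's outgoing transitions into s (array_remove() followed by array_join(), then state_merge_returns() and dedup()). The edge-splicing part, reduced to a toy with fixed-size edge lists (names are illustrative, not the jmt API):

```c
#include <assert.h>

#define MAXE 8

struct toy_state {
    int nedges;
    int label[MAXE];
    int to[MAXE];
};

/* Delete edge `i` of `s` (assumed to be the epsilon edge) and append
 * all edges of its former target `t`, mirroring array_remove() +
 * array_join() in unepsilon(). Assumes the result fits in MAXE. */
static void splice_eps(struct toy_state *s, int i,
                       const struct toy_state *t) {
    for (int j = i; j + 1 < s->nedges; j++) {
        s->label[j] = s->label[j + 1];
        s->to[j] = s->to[j + 1];
    }
    s->nedges--;
    for (int j = 0; j < t->nedges; j++) {
        s->label[s->nedges] = t->label[j];
        s->to[s->nedges] = t->to[j];
        s->nedges++;
    }
}
```

As in unepsilon(), splicing can introduce duplicate edges, which is why the real code calls dedup() afterwards and repeats until no EPS edge remains.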
+
+static bool is_deterministic(struct jmt *jmt) {
+ list_for_each(s, jmt->start) {
+ array_for_each(i, s->trans) {
+ struct trans *t = state_trans(s, i);
+ for (ind_t j = i+1; j < s->trans.used; j++) {
+ struct trans *u = state_trans(s, j);
+ if (t->lens == u->lens)
+ return false;
+ }
+ }
+ }
+ return true;
+}
+
+static void free_nfa_state(struct nfa_state *s) {
+ if (s == NULL)
+ return;
+ array_release(&s->set);
+ free(s);
+}
+
+static struct nfa_state *make_nfa_state(struct jmt *jmt) {
+ struct nfa_state *result = NULL;
+ int r;
+
+ r = ALLOC(result);
+ ERR_NOMEM(r < 0, jmt);
+
+ array_init(&result->set, sizeof(struct state *));
+
+ return result;
+ error:
+ FREE(result);
+ return NULL;
+}
+
+static ind_t nfa_state_add(struct jmt *jmt, struct nfa_state *nfa,
+ struct state *s) {
+ ind_t i;
+ int r;
+
+ array_for_each(j, nfa->set) {
+ struct state *q = *array_elem(nfa->set, j, struct state *);
+ if (q == s)
+ return j;
+ }
+
+ /* Keep the list of states sorted */
+ i = nfa->set.used;
+ for (int j=0; j < nfa->set.used; j++) {
+ if (s < *array_elem(nfa->set, j, struct state *)) {
+ i = j;
+ break;
+ }
+ }
+ r = array_insert(&nfa->set, i);
+ ERR_NOMEM(r < 0, jmt);
+
+ *array_elem(nfa->set, i, struct state *) = s;
+ return i;
+ error:
+ return IND_MAX;
+}
+
+static bool nfa_same_set(struct nfa_state *s1, struct nfa_state *s2) {
+ if (s1->set.used != s2->set.used)
+ return false;
+
+ array_for_each(i, s1->set) {
+ struct state *q1 = *array_elem(s1->set, i, struct state *);
+ struct state *q2 = *array_elem(s2->set, i, struct state *);
+ if (q1 != q2)
+ return false;
+ }
+
+ return true;
+}
+
+static struct nfa_state *nfa_uniq(struct jmt *jmt, struct array *newstates,
+ struct nfa_state *s) {
+ ind_t ind;
+ int r;
+
+ array_each_elem(q, *newstates, struct nfa_state *) {
+ if (nfa_same_set(s, *q)) {
+ if (s == *q)
+ return s;
+ free_nfa_state(s);
+ return *q;
+ }
+ }
+
+ r = array_add(newstates, &ind);
+ ERR_NOMEM(r < 0, jmt);
+ *array_elem(*newstates, ind, struct nfa_state *) = s;
+
+ if (s->state == NULL) {
+ s->state = make_state(jmt);
+ ERR_BAIL(jmt);
+ }
+
+ /* This makes looking at pictures easier */
+ if (s->set.used == 1)
+ s->state->num = (*array_elem(s->set, 0, struct state *))->num;
+
+ return s;
+
+ error:
+ return NULL;
+}
+
+static void det_target(struct jmt *jmt, struct array *newstates,
+ struct nfa_state *nfas, ind_t l) {
+ struct nfa_state *to = NULL;
+
+ array_each_elem(s, nfas->set, struct state *) {
+ for_each_trans(t, *s) {
+ if (t->lens == l) {
+ if (to == NULL) {
+ to = make_nfa_state(jmt);
+ ERR_RET(jmt);
+ }
+ nfa_state_add(jmt, to, t->to);
+ ERR_RET(jmt);
+ }
+ }
+ }
+
+ if (to != NULL) {
+ to = nfa_uniq(jmt, newstates, to);
+ ERR_RET(jmt);
+ add_new_trans(jmt, nfas->state, to->state, l);
+ }
+}
+
+static void determinize(struct jmt *jmt) {
+ struct nfa_state *ini = NULL;
+ struct array *newstates = NULL;
+ int r;
+ ind_t ind, nlenses;
+
+ if (is_deterministic(jmt))
+ return;
+
+ r = ALLOC(newstates);
+ ERR_NOMEM(r < 0, jmt);
+ array_init(newstates, sizeof(struct nfa_state *));
+
+ nlenses = jmt->lenses.used;
+
+ /* The initial state consists of just the start state */
+ ini = make_nfa_state(jmt);
+ ERR_BAIL(jmt);
+ nfa_state_add(jmt, ini, jmt->start);
+ ERR_BAIL(jmt);
+
+ /* Make a new initial state */
+ ini->state = make_state(jmt);
+ ERR_BAIL(jmt);
+ ini->state->num = jmt->start->num; /* Makes looking at pictures easier */
+ jmt->start->next = ini->state->next;
+ ini->state->next = jmt->start;
+ jmt->start = ini->state;
+
+ r = array_add(newstates, &ind);
+ ERR_NOMEM(r < 0, jmt);
+
+ *array_elem(*newstates, ind, struct nfa_state *) = ini;
+ ini = NULL;
+
+ for (ind_t i = 0; i < newstates->used; i++) {
+ struct nfa_state *nfas = *array_elem(*newstates, i, struct nfa_state *);
+
+ for (int j = 0; j < nfas->set.used; j++) {
+ struct state *s = *array_elem(nfas->set, j, struct state *);
+ state_merge_returns(jmt, nfas->state, s);
+ }
+
+ for (ind_t l = 0; l < nlenses; l++) {
+ det_target(jmt, newstates, nfas, l);
+ ERR_BAIL(jmt);
+ }
+ det_target(jmt, newstates, nfas, CALL);
+ ERR_BAIL(jmt);
+ }
+
+ collect(jmt);
+
+ done:
+ if (newstates) {
+ array_each_elem(s, *newstates, struct nfa_state *) {
+ free_nfa_state(*s);
+ }
+ array_release(newstates);
+ FREE(newstates);
+ }
+ free_nfa_state(ini);
+ return;
+ error:
+ goto done;
+}
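determinize() is the classic subset construction: every new state represents a set of old states, and det_target() computes, for one lens label, the union of the members' successors before uniquifying the result with nfa_uniq(). With state sets encoded as bitmasks, the successor computation reduces to the following (a sketch, not the jmt representation):

```c
#include <assert.h>

/* Successor of a state set under `label`: the union of the successor
 * sets of every member, as det_target() accumulates with
 * nfa_state_add(). State k is bit (1u << k); next[s][label] is the
 * successor set of state s under that label. */
static unsigned det_successor(const unsigned next[][2], int nstates,
                              unsigned set, int label) {
    unsigned to = 0;
    for (int s = 0; s < nstates; s++)
        if (set & (1u << s))
            to |= next[s][label];
    return to;
}
```

Iterating this from the set containing only the start state, and treating each distinct mask as one new state, yields the deterministic transducer that determinize() builds.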
+
+struct jmt *jmt_build(struct lens *lens) {
+ struct jmt *jmt = NULL;
+ int r;
+
+ r = ALLOC(jmt);
+ ERR_NOMEM(r < 0, lens->info);
+
+ jmt->error = lens->info->error;
+ array_init(&jmt->lenses, sizeof(struct jmt_lens));
+
+ index_lenses(jmt, lens);
+
+ if (debugging("cf.jmt"))
+ print_grammar_top(jmt, lens);
+
+ for (ind_t i=0; i < jmt->lenses.used; i++) {
+ conv_rhs(jmt, i);
+ ERR_BAIL(jmt);
+ }
+
+ unepsilon(jmt);
+ ERR_BAIL(jmt);
+
+ determinize(jmt);
+ ERR_BAIL(jmt);
+
+ if (debugging("cf.jmt.build"))
+ jmt_dot(jmt, "jmt_30_dfa.dot");
+
+ return jmt;
+ error:
+ jmt_free(jmt);
+ return NULL;
+}
+
+void jmt_free(struct jmt *jmt) {
+ if (jmt == NULL)
+ return;
+ array_release(&jmt->lenses);
+ struct state *s = jmt->start;
+ while (s != NULL) {
+ struct state *del = s;
+ s = del->next;
+ free_state(del);
+ }
+ free(jmt);
+}
+
+void jmt_dot(struct jmt *jmt, const char *fname) {
+ FILE *fp = debug_fopen("%s", fname);
+ if (fp == NULL)
+ return;
+
+ fprintf(fp, "digraph \"jmt\" {\n");
+ fprintf(fp, " rankdir = LR;\n");
+ list_for_each(s, jmt->start) {
+ if (is_return(s)) {
+ fprintf(fp, " %u [ shape = doublecircle, label = \"%u (",
+ s->num, s->num);
+ flens(fp, s->ret[0]);
+ for (ind_t i = 1; i < s->nret; i++) {
+ fprintf(fp, ", ");
+ flens(fp, s->ret[i]);
+ }
+ fprintf(fp, ")\" ];\n");
+ }
+ for_each_trans(t, s) {
+ fprintf(fp, " %u -> %u ", s->num, t->to->num);
+ if (t->lens == EPS)
+ fprintf(fp, ";\n");
+ else if (t->lens == CALL)
+ fprintf(fp, "[ label = \"call\" ];\n");
+ else if (lens_state(jmt, t->lens) == NULL) {
+ struct lens *lens = lens_of_jmt(jmt, t->lens);
+ fprintf(fp, "[ label = \"");
+ print_regexp(fp, lens->ctype);
+ fprintf(fp, "\" ];\n");
+ } else {
+ fprintf(fp, "[ label = \"");
+ flens(fp, t->lens);
+ fprintf(fp, "\" ];\n");
+ }
+ }
+ }
+ fprintf(fp, "}\n");
+ fclose(fp);
+}
+
+/*
+ * Local variables:
+ * indent-tabs-mode: nil
+ * c-indent-level: 4
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
--- /dev/null
+/*
+ * jmt.h: Earley parser for lenses based on Jim/Mandelbaum transducers
+ *
+ * Copyright (C) 2009-2016 David Lutterkort
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: David Lutterkort <lutter@redhat.com>
+ */
+
+#ifndef EARLEY_H_
+#define EARLEY_H_
+
+#include <stdlib.h>
+#include <stdint.h>
+#include "lens.h"
+
+struct jmt;
+struct jmt_parse;
+
+typedef uint32_t ind_t;
+
+struct lens;
+
+typedef void (*jmt_traverser)(struct lens *l, size_t start, size_t end,
+ void *data);
+
+typedef void (*jmt_error)(struct lens *lens, void *data, size_t pos,
+ const char *format, ...);
+
+struct jmt_visitor {
+ struct jmt_parse *parse;
+ jmt_traverser terminal;
+ jmt_traverser enter;
+ jmt_traverser exit;
+ jmt_error error;
+ void *data;
+};
+
+struct jmt *jmt_build(struct lens *l);
+
+struct jmt_parse *jmt_parse(struct jmt *jmt, const char *text, size_t text_len);
+
+void jmt_free_parse(struct jmt_parse *);
+
+/* Returns -1 on internal error, 0 on syntax error, 1 on a successful
+ * parse.
+ */
+int jmt_visit(struct jmt_visitor *visitor, size_t *len);
+
+void jmt_free(struct jmt *jmt);
+
+void jmt_dot(struct jmt *jmt, const char *fname);
+#endif
+
+/*
+ * Local variables:
+ * indent-tabs-mode: nil
+ * c-indent-level: 4
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
--- /dev/null
+/*
+ * lens.c:
+ *
+ * Copyright (C) 2007-2016 David Lutterkort
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: David Lutterkort <dlutter@redhat.com>
+ */
+
+#include <config.h>
+#include <stddef.h>
+
+#include "lens.h"
+#include "memory.h"
+#include "errcode.h"
+#include "internal.h"
+
+/* This enum must be kept in sync with type_offs and ntypes */
+enum lens_type {
+ CTYPE, ATYPE, KTYPE, VTYPE
+};
+
+static const int type_offs[] = {
+ offsetof(struct lens, ctype),
+ offsetof(struct lens, atype),
+ offsetof(struct lens, ktype),
+ offsetof(struct lens, vtype)
+};
+static const int ntypes = sizeof(type_offs)/sizeof(type_offs[0]);
+
+static const char *lens_type_names[] =
+ { "ctype", "atype", "ktype", "vtype" };
+
+#define ltype(lns, t) *((struct regexp **) ((char *) lns + type_offs[t]))
+
+static struct value * typecheck_union(struct info *,
+ struct lens *l1, struct lens *l2);
+static struct value *typecheck_concat(struct info *,
+ struct lens *l1, struct lens *l2);
+static struct value *typecheck_square(struct info *,
+ struct lens *l1, struct lens *l2);
+static struct value *typecheck_iter(struct info *info, struct lens *l);
+static struct value *typecheck_maybe(struct info *info, struct lens *l);
+
+/* Lens names for pretty printing */
+/* keep order in sync with enum type */
+static const char *const tags[] = {
+ "del", "store", "value", "key", "label", "seq", "counter",
+ "concat", "union",
+ "subtree", "star", "maybe", "rec", "square"
+};
+
+#define ltag(lens) (tags[lens->tag - L_DEL])
+
+static const struct string digits_string = {
+ .ref = REF_MAX, .str = (char *) "[0123456789]+"
+};
+static const struct string *const digits_pat = &digits_string;
+
+char *format_lens(struct lens *l) {
+ if (l == NULL) {
+ return strdup("(no lens)");
+ }
+
+ char *inf = format_info(l->info);
+ char *result;
+
+ xasprintf(&result, "%s[%s]%s", tags[l->tag - L_DEL], inf,
+ l->recursive ? "R" : "r");
+ free(inf);
+ return result;
+}
+
+#define BUG_LENS_TAG(lns) bug_lens_tag(lns, __FILE__, __LINE__)
+
+static void bug_lens_tag(struct lens *lens, const char *file, int lineno) {
+ if (lens != NULL && lens->info != NULL && lens->info->error != NULL) {
+ char *s = format_lens(lens);
+ bug_on(lens->info->error, file, lineno, "Unexpected lens tag %s", s);
+ free(s);
+ } else {
+ /* We are really screwed */
+ assert(0);
+ }
+ return;
+}
+
+/* Construct a finite automaton from PATTERN and return it in *FA.
+ *
+ * Return NULL if PATTERN is a valid regular expression; if it has
+ * syntax errors, return an exception value.
+ */
+static struct value *str_to_fa(struct info *info, const char *pattern,
+ struct fa **fa, int nocase) {
+ int error;
+ struct value *exn = NULL;
+ size_t re_err_len;
+ char *re_str = NULL, *re_err = NULL;
+
+ *fa = NULL;
+ error = fa_compile(pattern, strlen(pattern), fa);
+ if (error == REG_NOERROR) {
+ if (nocase) {
+ error = fa_nocase(*fa);
+ ERR_NOMEM(error < 0, info);
+ }
+ return NULL;
+ }
+
+ re_str = escape(pattern, -1, RX_ESCAPES);
+ ERR_NOMEM(re_str == NULL, info);
+
+ exn = make_exn_value(info, "Invalid regular expression /%s/", re_str);
+
+ re_err_len = regerror(error, NULL, NULL, 0);
+ int re_errcode = error;
+ error = ALLOC_N(re_err, re_err_len);
+ ERR_NOMEM(error < 0, info);
+
+ regerror(re_errcode, NULL, re_err, re_err_len);
+ exn_printf_line(exn, "%s", re_err);
+
+ done:
+ free(re_str);
+ free(re_err);
+ return exn;
+ error:
+ fa_free(*fa);
+ *fa = NULL;
+ exn = info->error->exn;
+ goto done;
+}
+
+static struct value *regexp_to_fa(struct regexp *regexp, struct fa **fa) {
+ return str_to_fa(regexp->info, regexp->pattern->str, fa, regexp->nocase);
+}
+
+static struct lens *make_lens(enum lens_tag tag, struct info *info) {
+ struct lens *lens;
+ make_ref(lens);
+ lens->tag = tag;
+ lens->info = info;
+
+ return lens;
+}
+
+static struct lens *make_lens_unop(enum lens_tag tag, struct info *info,
+ struct lens *child) {
+ struct lens *lens = make_lens(tag, info);
+ lens->child = child;
+ lens->value = child->value;
+ lens->key = child->key;
+ return lens;
+}
+
+typedef struct regexp *regexp_combinator(struct info *, int, struct regexp **);
+
+static struct lens *make_lens_binop(enum lens_tag tag, struct info *info,
+ struct lens *l1, struct lens *l2,
+ regexp_combinator *combinator) {
+ struct lens *lens = make_lens(tag, info);
+ int n1 = (l1->tag == tag) ? l1->nchildren : 1;
+ struct regexp **types = NULL;
+
+ if (lens == NULL)
+ goto error;
+
+ lens->nchildren = n1;
+ lens->nchildren += (l2->tag == tag) ? l2->nchildren : 1;
+
+ lens->recursive = l1->recursive || l2->recursive;
+ lens->rec_internal = l1->rec_internal || l2->rec_internal;
+
+ if (ALLOC_N(lens->children, lens->nchildren) < 0) {
+ lens->nchildren = 0;
+ goto error;
+ }
+
+ if (l1->tag == tag) {
+ for (int i=0; i < l1->nchildren; i++)
+ lens->children[i] = ref(l1->children[i]);
+ unref(l1, lens);
+ } else {
+ lens->children[0] = l1;
+ }
+
+ if (l2->tag == tag) {
+ for (int i=0; i < l2->nchildren; i++)
+ lens->children[n1 + i] = ref(l2->children[i]);
+ unref(l2, lens);
+ } else {
+ lens->children[n1] = l2;
+ }
+
+ for (int i=0; i < lens->nchildren; i++) {
+ lens->value = lens->value || lens->children[i]->value;
+ lens->key = lens->key || lens->children[i]->key;
+ }
+
+ if (ALLOC_N(types, lens->nchildren) < 0)
+ goto error;
+
+ if (! lens->rec_internal) {
+ /* Inside a recursive lens, we assign types with lns_check_rec
+ * once we know the entire lens */
+ for (int t=0; t < ntypes; t++) {
+ if (lens->recursive && t == CTYPE)
+ continue;
+ for (int i=0; i < lens->nchildren; i++)
+ types[i] = ltype(lens->children[i], t);
+ ltype(lens, t) = (*combinator)(info, lens->nchildren, types);
+ }
+ }
+ FREE(types);
+
+ for (int i=0; i < lens->nchildren; i++)
+ ensure(tag != lens->children[i]->tag, lens->info);
+
+ return lens;
+ error:
+ unref(lens, lens);
+ FREE(types);
+ return NULL;
+}
+
+static struct value *make_lens_value(struct lens *lens) {
+ struct value *v;
+ v = make_value(V_LENS, ref(lens->info));
+ v->lens = lens;
+ return v;
+}
+
+struct value *lns_make_union(struct info *info,
+ struct lens *l1, struct lens *l2, int check) {
+ struct lens *lens = NULL;
+ int consumes_value = l1->consumes_value && l2->consumes_value;
+ int recursive = l1->recursive || l2->recursive;
+ int ctype_nullable = l1->ctype_nullable || l2->ctype_nullable;
+
+ if (check) {
+ struct value *exn = typecheck_union(info, l1, l2);
+ if (exn != NULL)
+ return exn;
+ }
+
+ lens = make_lens_binop(L_UNION, info, l1, l2, regexp_union_n);
+ lens->consumes_value = consumes_value;
+ if (! recursive)
+ lens->ctype_nullable = ctype_nullable;
+ return make_lens_value(lens);
+}
+
+struct value *lns_make_concat(struct info *info,
+ struct lens *l1, struct lens *l2, int check) {
+ struct lens *lens = NULL;
+ int consumes_value = l1->consumes_value || l2->consumes_value;
+ int recursive = l1->recursive || l2->recursive;
+ int ctype_nullable = l1->ctype_nullable && l2->ctype_nullable;
+
+ if (check) {
+ struct value *exn = typecheck_concat(info, l1, l2);
+ if (exn != NULL)
+ return exn;
+ }
+ if (l1->value && l2->value) {
+ return make_exn_value(info, "Multiple stores in concat");
+ }
+ if (l1->key && l2->key) {
+ return make_exn_value(info, "Multiple keys/labels in concat");
+ }
+
+ lens = make_lens_binop(L_CONCAT, info, l1, l2, regexp_concat_n);
+ lens->consumes_value = consumes_value;
+ if (! recursive)
+ lens->ctype_nullable = ctype_nullable;
+ return make_lens_value(lens);
+}
+
+static struct regexp *subtree_atype(struct info *info,
+ struct regexp *ktype,
+ struct regexp *vtype) {
+ const char *kpat = (ktype == NULL) ? ENC_NULL : ktype->pattern->str;
+ const char *vpat = (vtype == NULL) ? ENC_NULL : vtype->pattern->str;
+ char *pat;
+ struct regexp *result = NULL;
+ char *ks = NULL, *vs = NULL;
+ int nocase;
+
+ if (ktype != NULL && vtype != NULL && ktype->nocase != vtype->nocase) {
+ ks = regexp_expand_nocase(ktype);
+ vs = regexp_expand_nocase(vtype);
+ ERR_NOMEM(ks == NULL || vs == NULL, info);
+ if (asprintf(&pat, "(%s)%s(%s)%s", ks, ENC_EQ, vs, ENC_SLASH) < 0)
+ ERR_NOMEM(true, info);
+ nocase = 0;
+ } else {
+ if (asprintf(&pat, "(%s)%s(%s)%s", kpat, ENC_EQ, vpat, ENC_SLASH) < 0)
+ ERR_NOMEM(true, info);
+
+ nocase = 0;
+ if (ktype != NULL)
+ nocase = ktype->nocase;
+ else if (vtype != NULL)
+ nocase = vtype->nocase;
+ }
+ result = make_regexp(info, pat, nocase);
+ error:
+ free(ks);
+ free(vs);
+ return result;
+}
+
+/*
+ * A subtree lens l1 = [ l ]
+ *
+ * Types are assigned as follows:
+ *
+ * l1->ctype = l->ctype
+ * l1->atype = encode(l->ktype, l->vtype)
+ * l1->ktype = NULL
+ * l1->vtype = NULL
+ */
+struct value *lns_make_subtree(struct info *info, struct lens *l) {
+ struct lens *lens;
+
+ lens = make_lens_unop(L_SUBTREE, info, l);
+ lens->ctype = ref(l->ctype);
+ if (! l->recursive)
+ lens->atype = subtree_atype(info, l->ktype, l->vtype);
+ lens->value = lens->key = 0;
+ lens->recursive = l->recursive;
+ lens->rec_internal = l->rec_internal;
+ if (! l->recursive)
+ lens->ctype_nullable = l->ctype_nullable;
+ return make_lens_value(lens);
+}
+
+struct value *lns_make_star(struct info *info, struct lens *l, int check) {
+ struct lens *lens;
+
+ if (check) {
+ struct value *exn = typecheck_iter(info, l);
+ if (exn != NULL)
+ return exn;
+ }
+ if (l->value) {
+ return make_exn_value(info, "Multiple stores in iteration");
+ }
+ if (l->key) {
+ return make_exn_value(info, "Multiple keys/labels in iteration");
+ }
+
+ lens = make_lens_unop(L_STAR, info, l);
+ for (int t = 0; t < ntypes; t++) {
+ ltype(lens, t) = regexp_iter(info, ltype(l, t), 0, -1);
+ }
+ lens->recursive = l->recursive;
+ lens->rec_internal = l->rec_internal;
+ lens->ctype_nullable = 1;
+ return make_lens_value(lens);
+}
+
+struct value *lns_make_plus(struct info *info, struct lens *l, int check) {
+ struct value *star, *conc;
+
+ star = lns_make_star(info, l, check);
+ if (EXN(star))
+ return star;
+
+ conc = lns_make_concat(ref(info), ref(l), ref(star->lens), check);
+ unref(star, value);
+ return conc;
+}
+
+struct value *lns_make_maybe(struct info *info, struct lens *l, int check) {
+ struct lens *lens;
+
+ if (check) {
+ struct value *exn = typecheck_maybe(info, l);
+ if (exn != NULL)
+ return exn;
+ }
+ lens = make_lens_unop(L_MAYBE, info, l);
+ for (int t=0; t < ntypes; t++)
+ ltype(lens, t) = regexp_maybe(info, ltype(l, t));
+ lens->value = l->value;
+ lens->key = l->key;
+ lens->recursive = l->recursive;
+ lens->rec_internal = l->rec_internal;
+ lens->ctype_nullable = 1;
+ return make_lens_value(lens);
+}
+
+/* The ctype of SQR is a regular approximation of the true ctype of SQR
+ * at this point. In some situations, for example when processing quoted
+ * strings, this leads to false typecheck errors; to lower the chances
+ * of these, we try to construct the precise ctype of SQR if the
+ * language of L1 is finite (and has a small number of words).
+ */
+static void square_precise_type(struct info *info,
+ struct regexp **sqr,
+ struct regexp *left,
+ struct regexp *body) {
+
+ char **words = NULL;
+ int nwords = 0, r;
+ struct fa *fa = NULL;
+ struct value *exn = NULL;
+ struct regexp **u = NULL, *c[3], *w = NULL;
+
+ exn = str_to_fa(info, left->pattern->str, &fa, left->nocase);
+ if (exn != NULL)
+ goto error;
+
+ nwords = fa_enumerate(fa, 10, &words); /* The limit of 10 is arbitrary */
+ if (nwords < 0)
+ goto error;
+
+ r = ALLOC_N(u, nwords);
+ ERR_NOMEM(r < 0, info);
+
+ c[1] = body;
+ for (int i=0; i < nwords; i++) {
+ w = make_regexp_literal(left->info, words[i]);
+ ERR_NOMEM(w == NULL, info);
+ w->nocase = left->nocase;
+
+ c[0] = c[2] = w;
+ u[i] = regexp_concat_n(info, 3, c);
+
+ unref(w, regexp);
+ ERR_NOMEM(u[i] == NULL, info);
+ }
+ w = regexp_union_n(info, nwords, u);
+ if (w != NULL) {
+ unref(*sqr, regexp);
+ *sqr = w;
+ w = NULL;
+ }
+
+ error:
+ unref(w, regexp);
+ for (int i=0; i < nwords; i++) {
+ free(words[i]);
+ if (u != NULL)
+ unref(u[i], regexp);
+ }
+ free(words);
+ free(u);
+ fa_free(fa);
+ unref(exn, value);
+}
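When the left language is finite, square_precise_type() enumerates its words (at most 10) and replaces the approximate ctype with the union of w . body . w over every word w. The shape of the pattern it builds can be illustrated with plain string concatenation; this is only a sketch of the resulting pattern, not the regexp_concat_n()/regexp_union_n() machinery:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Build "(w0 body w0)|(w1 body w1)|..." for the words of the left
 * language, mirroring the union of concatenations constructed in
 * square_precise_type(). */
static void precise_square_pattern(const char *const *words, int nwords,
                                   const char *body,
                                   char *out, size_t outlen) {
    out[0] = '\0';
    for (int i = 0; i < nwords; i++) {
        char term[64];
        snprintf(term, sizeof(term), "(%s%s%s)", words[i], body, words[i]);
        if (i > 0)
            strncat(out, "|", outlen - strlen(out) - 1);
        strncat(out, term, outlen - strlen(out) - 1);
    }
}
```

For a quoted-string square whose left lens matches a single or double quote, the precise ctype distinguishes a double-quoted body from a single-quoted one, which the regular approximation cannot.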
+
+/* Build a square lens as
+ * left . body . right
+ * where left and right accept the same language and the strings they
+ * capture must match. The inability to express this with other
+ * lenses makes the square primitive necessary.
+ */
+struct value * lns_make_square(struct info *info, struct lens *l1,
+ struct lens *l2, struct lens *l3, int check) {
+ struct value *cnt1 = NULL, *cnt2 = NULL, *res = NULL;
+ struct lens *sqr = NULL;
+
+ /* supported types: L_KEY . body . L_DEL or L_DEL . body . L_DEL */
+ if (l3->tag != L_DEL || (l1->tag != L_DEL && l1->tag != L_KEY))
+ return make_exn_value(info, "Supported types: (key lns del) or (del lns del)");
+
+ res = typecheck_square(info, l1, l3);
+ if (res != NULL)
+ goto error;
+
+ res = lns_make_concat(ref(info), ref(l1), ref(l2), check);
+ if (EXN(res))
+ goto error;
+ cnt1 = res;
+ res = lns_make_concat(ref(info), ref(cnt1->lens), ref(l3), check);
+ if (EXN(res))
+ goto error;
+ cnt2 = res;
+
+ sqr = make_lens_unop(L_SQUARE, ref(info), ref(cnt2->lens));
+ ERR_NOMEM(sqr == NULL, info);
+
+ for (int t=0; t < ntypes; t++)
+ ltype(sqr, t) = ref(ltype(cnt2->lens, t));
+
+ square_precise_type(info, &(sqr->ctype), l1->ctype, l2->ctype);
+
+ sqr->recursive = cnt2->lens->recursive;
+ sqr->rec_internal = cnt2->lens->rec_internal;
+ sqr->consumes_value = cnt2->lens->consumes_value;
+
+ res = make_lens_value(sqr);
+ ERR_NOMEM(res == NULL, info);
+ sqr = NULL;
+
+ error:
+ unref(info, info);
+ unref(l1, lens);
+ unref(l2, lens);
+ unref(l3, lens);
+ unref(cnt1, value);
+ unref(cnt2, value);
+ unref(sqr, lens);
+ return res;
+}
+
+/*
+ * Lens primitives
+ */
+
+static struct regexp *make_regexp_from_string(struct info *info,
+ struct string *string) {
+ struct regexp *r;
+ make_ref(r);
+ if (r != NULL) {
+ r->info = ref(info);
+ r->pattern = ref(string);
+ r->nocase = 0;
+ }
+ return r;
+}
+
+static struct regexp *restrict_regexp(struct regexp *r) {
+ char *nre = NULL;
+ struct regexp *result = NULL;
+ size_t nre_len;
+ int ret;
+
+ ret = fa_restrict_alphabet(r->pattern->str, strlen(r->pattern->str),
+ &nre, &nre_len,
+ RESERVED_FROM_CH, RESERVED_TO_CH);
+ ERR_NOMEM(ret == REG_ESPACE || ret < 0, r->info);
+ BUG_ON(ret != 0, r->info, NULL);
+ ensure(nre_len == strlen(nre), r->info);
+
+ ret = regexp_c_locale(&nre, &nre_len);
+ ERR_NOMEM(ret < 0, r->info);
+
+ result = make_regexp(r->info, nre, r->nocase);
+ nre = NULL;
+ BUG_ON(regexp_compile(result) != 0, r->info,
+ "Could not compile restricted regexp");
+ done:
+ free(nre);
+ return result;
+ error:
+ unref(result, regexp);
+ goto done;
+}
+
+static struct value *
+typecheck_prim(enum lens_tag tag, struct info *info,
+ struct regexp *regexp, struct string *string) {
+ struct fa *fa_slash = NULL;
+ struct fa *fa_key = NULL;
+ struct fa *fa_isect = NULL;
+ struct value *exn = NULL;
+
+ /* Typecheck */
+ if (tag == L_DEL && string != NULL) {
+ int cnt;
+ const char *dflt = string->str;
+ cnt = regexp_match(regexp, dflt, strlen(dflt), 0, NULL);
+ if (cnt != strlen(dflt)) {
+ char *s = escape(dflt, -1, RX_ESCAPES);
+ char *r = regexp_escape(regexp);
+ exn = make_exn_value(info,
+ "del: the default value '%s' does not match /%s/", s, r);
+ FREE(s);
+ FREE(r);
+ goto error;
+ }
+ }
+
+ error:
+ fa_free(fa_isect);
+ fa_free(fa_key);
+ fa_free(fa_slash);
+ return exn;
+}
+
+struct value *lns_make_prim(enum lens_tag tag, struct info *info,
+ struct regexp *regexp, struct string *string) {
+ struct lens *lens = NULL;
+ struct value *exn = NULL;
+
+ if (typecheck_p(info)) {
+ exn = typecheck_prim(tag, info, regexp, string);
+ if (exn != NULL)
+ goto error;
+ }
+
+ /* Build the actual lens */
+ lens = make_lens(tag, info);
+ lens->regexp = regexp;
+ lens->string = string;
+ lens->key = (tag == L_KEY || tag == L_LABEL || tag == L_SEQ);
+ lens->value = (tag == L_STORE || tag == L_VALUE);
+ lens->consumes_value = (tag == L_STORE || tag == L_VALUE);
+ lens->atype = regexp_make_empty(info);
+ /* Set the ctype */
+ if (tag == L_DEL || tag == L_STORE || tag == L_KEY) {
+ lens->ctype = ref(regexp);
+ lens->ctype_nullable = regexp_matches_empty(lens->ctype);
+ } else if (tag == L_LABEL || tag == L_VALUE
+ || tag == L_SEQ || tag == L_COUNTER) {
+ lens->ctype = regexp_make_empty(info);
+ lens->ctype_nullable = 1;
+ } else {
+ BUG_LENS_TAG(lens);
+ goto error;
+ }
+
+ /* Set the ktype */
+ if (tag == L_SEQ) {
+ lens->ktype =
+ make_regexp_from_string(info, (struct string *) digits_pat);
+ if (lens->ktype == NULL)
+ goto error;
+ } else if (tag == L_KEY) {
+ lens->ktype = restrict_regexp(lens->regexp);
+ } else if (tag == L_LABEL) {
+ lens->ktype = make_regexp_literal(info, lens->string->str);
+ if (lens->ktype == NULL)
+ goto error;
+ }
+
+ /* Set the vtype */
+ if (tag == L_STORE) {
+ lens->vtype = restrict_regexp(lens->regexp);
+ } else if (tag == L_VALUE) {
+ lens->vtype = make_regexp_literal(info, lens->string->str);
+ if (lens->vtype == NULL)
+ goto error;
+ }
+
+ return make_lens_value(lens);
+ error:
+ return exn;
+}
+
+/*
+ * Typechecking of lenses
+ */
+static struct value *disjoint_check(struct info *info, bool is_get,
+ struct regexp *r1, struct regexp *r2) {
+ struct fa *fa1 = NULL;
+ struct fa *fa2 = NULL;
+ struct fa *fa = NULL;
+ struct value *exn = NULL;
+ const char *const msg = is_get ? "union.get" : "tree union.put";
+
+ if (r1 == NULL || r2 == NULL)
+ return NULL;
+
+ exn = regexp_to_fa(r1, &fa1);
+ if (exn != NULL)
+ goto done;
+
+ exn = regexp_to_fa(r2, &fa2);
+ if (exn != NULL)
+ goto done;
+
+ fa = fa_intersect(fa1, fa2);
+ if (! fa_is_basic(fa, FA_EMPTY)) {
+ size_t xmpl_len;
+ char *xmpl;
+ fa_example(fa, &xmpl, &xmpl_len);
+ if (! is_get) {
+ char *fmt = enc_format(xmpl, xmpl_len);
+ if (fmt != NULL) {
+ FREE(xmpl);
+ xmpl = fmt;
+ }
+ }
+ exn = make_exn_value(ref(info),
+ "overlapping lenses in %s", msg);
+
+ if (is_get)
+ exn_printf_line(exn, "Example matched by both: '%s'", xmpl);
+ else
+ exn_printf_line(exn, "Example matched by both: %s", xmpl);
+ free(xmpl);
+ }
+
+ done:
+ fa_free(fa);
+ fa_free(fa1);
+ fa_free(fa2);
+
+ return exn;
+}
+
+static struct value *typecheck_union(struct info *info,
+ struct lens *l1, struct lens *l2) {
+ struct value *exn = NULL;
+
+ exn = disjoint_check(info, true, l1->ctype, l2->ctype);
+ if (exn == NULL) {
+ exn = disjoint_check(info, false, l1->atype, l2->atype);
+ }
+ if (exn != NULL) {
+ char *fi = format_info(l1->info);
+ exn_printf_line(exn, "First lens: %s", fi);
+ free(fi);
+
+ fi = format_info(l2->info);
+ exn_printf_line(exn, "Second lens: %s", fi);
+ free(fi);
+ }
+ return exn;
+}
+
+static struct value *
+ambig_check(struct info *info, struct fa *fa1, struct fa *fa2,
+ enum lens_type typ, struct lens *l1, struct lens *l2,
+ const char *msg, bool iterated) {
+ char *upv, *pv, *v;
+ size_t upv_len;
+ struct value *exn = NULL;
+ int r;
+
+ r = fa_ambig_example(fa1, fa2, &upv, &upv_len, &pv, &v);
+ if (r < 0) {
+ exn = make_exn_value(ref(info), "not enough memory");
+ if (exn != NULL) {
+ return exn;
+ } else {
+ ERR_REPORT(info, AUG_ENOMEM, NULL);
+ return info->error->exn;
+ }
+ }
+
+ if (upv != NULL) {
+ char *e_u, *e_up, *e_upv, *e_pv, *e_v;
+ char *s1, *s2;
+
+ if (typ == ATYPE) {
+ e_u = enc_format(upv, pv - upv);
+ e_up = enc_format(upv, v - upv);
+ e_upv = enc_format(upv, upv_len);
+ e_pv = enc_format(pv, strlen(pv));
+ e_v = enc_format(v, strlen(v));
+ lns_format_atype(l1, &s1);
+ lns_format_atype(l2, &s2);
+ } else {
+ e_u = escape(upv, pv - upv, RX_ESCAPES);
+ e_up = escape(upv, v - upv, RX_ESCAPES);
+ e_upv = escape(upv, -1, RX_ESCAPES);
+ e_pv = escape(pv, -1, RX_ESCAPES);
+ e_v = escape(v, -1, RX_ESCAPES);
+ s1 = regexp_escape(ltype(l1, typ));
+ s2 = regexp_escape(ltype(l2, typ));
+ }
+ exn = make_exn_value(ref(info), "%s", msg);
+ if (iterated) {
+ exn_printf_line(exn, " Iterated regexp: /%s/", s1);
+ } else {
+ exn_printf_line(exn, " First regexp: /%s/", s1);
+ exn_printf_line(exn, " Second regexp: /%s/", s2);
+ }
+ exn_printf_line(exn, " '%s' can be split into", e_upv);
+ exn_printf_line(exn, " '%s|=|%s'\n", e_u, e_pv);
+ exn_printf_line(exn, " and");
+ exn_printf_line(exn, " '%s|=|%s'\n", e_up, e_v);
+ free(e_u);
+ free(e_up);
+ free(e_upv);
+ free(e_pv);
+ free(e_v);
+ free(s1);
+ free(s2);
+ }
+ free(upv);
+ return exn;
+}
+
+static struct value *
+ambig_concat_check(struct info *info, const char *msg,
+ enum lens_type typ, struct lens *l1, struct lens *l2) {
+ struct fa *fa1 = NULL;
+ struct fa *fa2 = NULL;
+ struct value *result = NULL;
+ struct regexp *r1 = ltype(l1, typ);
+ struct regexp *r2 = ltype(l2, typ);
+
+ if (r1 == NULL || r2 == NULL)
+ return NULL;
+
+ result = regexp_to_fa(r1, &fa1);
+ if (result != NULL)
+ goto done;
+
+ result = regexp_to_fa(r2, &fa2);
+ if (result != NULL)
+ goto done;
+
+ result = ambig_check(info, fa1, fa2, typ, l1, l2, msg, false);
+ done:
+ fa_free(fa1);
+ fa_free(fa2);
+ return result;
+}
+
+static struct value *typecheck_concat(struct info *info,
+ struct lens *l1, struct lens *l2) {
+ struct value *result = NULL;
+
+ result = ambig_concat_check(info, "ambiguous concatenation",
+ CTYPE, l1, l2);
+ if (result == NULL) {
+ result = ambig_concat_check(info, "ambiguous tree concatenation",
+ ATYPE, l1, l2);
+ }
+ if (result != NULL) {
+ char *fi = format_info(l1->info);
+ exn_printf_line(result, "First lens: %s", fi);
+ free(fi);
+ fi = format_info(l2->info);
+ exn_printf_line(result, "Second lens: %s", fi);
+ free(fi);
+ }
+ return result;
+}
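typecheck_concat() flags a concatenation as ambiguous when some string can be split into an L1 part and an L2 part in more than one way; fa_ambig_example() finds such a witness symbolically. Over finite word lists the same property can be brute-forced, which makes the definition concrete (a toy check, not the libfa algorithm):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Return 1 if two distinct (l1 word, l2 word) pairs concatenate to
 * the same string, i.e. the concatenation of the two finite
 * languages is ambiguous. */
static int concat_ambiguous(const char *const *l1, int n1,
                            const char *const *l2, int n2) {
    for (int i = 0; i < n1; i++)
        for (int j = 0; j < n2; j++)
            for (int k = 0; k < n1; k++)
                for (int m = 0; m < n2; m++) {
                    char a[64], b[64];
                    if (i == k && j == m)
                        continue;
                    snprintf(a, sizeof(a), "%s%s", l1[i], l2[j]);
                    snprintf(b, sizeof(b), "%s%s", l1[k], l2[m]);
                    if (strcmp(a, b) == 0)
                        return 1; /* one string, two different splits */
                }
    return 0;
}
```

For example, /a|aa/ . /a|aa/ is ambiguous because "aaa" splits both as a|aa and as aa|a; that is the kind of witness fa_ambig_example() reports through its upv/pv/v triple.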
+
+static struct value *make_exn_square(struct info *info, struct lens *l1,
+ struct lens *l2, const char *msg) {
+
+ char *fi;
+ struct value *exn = make_exn_value(ref(info), "%s",
+ "Inconsistency in lens square");
+ exn_printf_line(exn, "%s", msg);
+ fi = format_info(l1->info);
+ exn_printf_line(exn, "Left lens: %s", fi);
+ free(fi);
+ fi = format_info(l2->info);
+ exn_printf_line(exn, "Right lens: %s", fi);
+ free(fi);
+ return exn;
+}
+
+static struct value *typecheck_square(struct info *info, struct lens *l1,
+ struct lens *l2) {
+ int r;
+ struct value *exn = NULL;
+ struct fa *fa1 = NULL, *fa2 = NULL;
+ struct regexp *r1 = ltype(l1, CTYPE);
+ struct regexp *r2 = ltype(l2, CTYPE);
+
+ if (r1 == NULL || r2 == NULL)
+ return NULL;
+
+ exn = regexp_to_fa(r1, &fa1);
+ if (exn != NULL)
+ goto done;
+
+ exn = regexp_to_fa(r2, &fa2);
+ if (exn != NULL)
+ goto done;
+
+ r = fa_equals(fa1, fa2);
+
+ if (r < 0) {
+ exn = make_exn_value(ref(info), "not enough memory");
+ if (exn != NULL) {
+ return exn;
+ } else {
+ ERR_REPORT(info, AUG_ENOMEM, NULL);
+ return info->error->exn;
+ }
+ }
+
+ if (r == 0) {
+ exn = make_exn_square(info, l1, l2,
+ "Left and right lenses must accept the same language");
+ goto done;
+ }
+
+ /* check del create consistency */
+ if (l1->tag == L_DEL && l2->tag == L_DEL) {
+ if (!STREQ(l1->string->str, l2->string->str)) {
+ exn = make_exn_square(info, l1, l2,
+ "Left and right lenses must have the same default value");
+ goto done;
+ }
+ }
+
+ done:
+ fa_free(fa1);
+ fa_free(fa2);
+ return exn;
+}
+
+static struct value *
+ambig_iter_check(struct info *info, const char *msg,
+ enum lens_type typ, struct lens *l) {
+ struct fa *fas = NULL, *fa = NULL;
+ struct value *result = NULL;
+ struct regexp *r = ltype(l, typ);
+
+ if (r == NULL)
+ return NULL;
+
+ result = regexp_to_fa(r, &fa);
+ if (result != NULL)
+ goto done;
+
+ fas = fa_iter(fa, 0, -1);
+
+ result = ambig_check(info, fa, fas, typ, l, l, msg, true);
+
+ done:
+ fa_free(fa);
+ fa_free(fas);
+ return result;
+}
+
+static struct value *typecheck_iter(struct info *info, struct lens *l) {
+ struct value *result = NULL;
+
+ result = ambig_iter_check(info, "ambiguous iteration", CTYPE, l);
+ if (result == NULL) {
+ result = ambig_iter_check(info, "ambiguous tree iteration", ATYPE, l);
+ }
+ if (result != NULL) {
+ char *fi = format_info(l->info);
+ exn_printf_line(result, "Iterated lens: %s", fi);
+ free(fi);
+ }
+ return result;
+}
+
+static struct value *typecheck_maybe(struct info *info, struct lens *l) {
+ /* Check (r)? as (<e>|r) where <e> is the empty language */
+ struct value *exn = NULL;
+
+ if (l->ctype != NULL && regexp_matches_empty(l->ctype)) {
+ exn = make_exn_value(ref(info),
+ "illegal optional expression: /%s/ matches the empty word",
+ l->ctype->pattern->str);
+ }
+
+ /* Typecheck the put direction; the check passes if either
+ (1) the atype does not match the empty string, because then we can
+ tell from looking at tree nodes whether L should be applied or not, or
+ (2) L consumes a value; then we know whether to apply L or not
+ depending on whether the current node has a non-NULL value or not
+ */
+ if (exn == NULL && ! l->consumes_value) {
+ if (l->atype != NULL && regexp_matches_empty(l->atype)) {
+ exn = make_exn_value(ref(info),
+ "optional expression matches the empty tree but does not consume a value");
+ }
+ }
+ return exn;
+}
+
+void free_lens(struct lens *lens) {
+ if (lens == NULL)
+ return;
+ ensure(lens->ref == 0, lens->info);
+
+ if (debugging("lenses"))
+ dump_lens_tree(lens);
+ switch (lens->tag) {
+ case L_DEL:
+ unref(lens->regexp, regexp);
+ unref(lens->string, string);
+ break;
+ case L_STORE:
+ case L_KEY:
+ unref(lens->regexp, regexp);
+ break;
+ case L_LABEL:
+ case L_SEQ:
+ case L_COUNTER:
+ case L_VALUE:
+ unref(lens->string, string);
+ break;
+ case L_SUBTREE:
+ case L_STAR:
+ case L_MAYBE:
+ case L_SQUARE:
+ unref(lens->child, lens);
+ break;
+ case L_CONCAT:
+ case L_UNION:
+ for (int i=0; i < lens->nchildren; i++)
+ unref(lens->children[i], lens);
+ free(lens->children);
+ break;
+ case L_REC:
+ if (!lens->rec_internal) {
+ unref(lens->body, lens);
+ }
+ break;
+ default:
+ BUG_LENS_TAG(lens);
+ break;
+ }
+
+ for (int t=0; t < ntypes; t++)
+ unref(ltype(lens, t), regexp);
+
+ unref(lens->info, info);
+ jmt_free(lens->jmt);
+ free(lens);
+ error:
+ return;
+}
+
+void lens_release(struct lens *lens) {
+ if (lens == NULL)
+ return;
+
+ for (int t=0; t < ntypes; t++)
+ regexp_release(ltype(lens, t));
+
+ if (lens->tag == L_KEY || lens->tag == L_STORE)
+ regexp_release(lens->regexp);
+
+ if (lens->tag == L_SUBTREE || lens->tag == L_STAR
+ || lens->tag == L_MAYBE || lens->tag == L_SQUARE) {
+ lens_release(lens->child);
+ }
+
+ if (lens->tag == L_UNION || lens->tag == L_CONCAT) {
+ for (int i=0; i < lens->nchildren; i++) {
+ lens_release(lens->children[i]);
+ }
+ }
+
+ if (lens->tag == L_REC && !lens->rec_internal) {
+ lens_release(lens->body);
+ }
+
+ jmt_free(lens->jmt);
+ lens->jmt = NULL;
+}
+
+/*
+ * Encoding of tree levels
+ */
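+/* As an illustrative example (writing the separator characters
+ * ENC_EQ_CH and ENC_SLASH_CH as '=' and '/' for readability; the real
+ * characters come from the encoding's definitions), a level containing
+ * one node with key "key" and value "value" is encoded as "key=value/",
+ * and
+ *
+ *   enc_format("key=value/", 10)
+ *
+ * renders it as the tree-style string
+ *
+ *    { "key" = "value" }
+ *
+ * A node with neither key nor value, encoded as "=/", renders as
+ * braces only. With indent > 0, each entry is prefixed with that many
+ * spaces and followed by a newline.
+ */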
+char *enc_format(const char *e, size_t len) {
+ return enc_format_indent(e, len, 0);
+}
+
+char *enc_format_indent(const char *e, size_t len, int indent) {
+ size_t size = 0;
+ char *result = NULL, *r;
+ const char *k = e;
+
+ while (*k && k - e < len) {
+ char *eq, *slash, *v;
+ eq = strchr(k, ENC_EQ_CH);
+ assert(eq != NULL);
+ slash = strchr(eq, ENC_SLASH_CH);
+ assert(slash != NULL);
+ v = eq + 1;
+
+ if (indent > 0)
+ size += indent + 1;
+ size += 6; /* Surrounding braces */
+ if (k != eq)
+ size += 1 + (eq - k) + 1;
+ if (v != slash)
+ size += 4 + (slash - v) + 1;
+ k = slash + 1;
+ }
+ if (ALLOC_N(result, size + 1) < 0)
+ return NULL;
+
+ k = e;
+ r = result;
+ while (*k && k - e < len) {
+ char *eq, *slash, *v;
+ eq = strchr(k, ENC_EQ_CH);
+ assert(eq != NULL);
+ slash = strchr(eq, ENC_SLASH_CH);
+ assert(slash != NULL);
+ v = eq + 1;
+
+ for (int i=0; i < indent; i++)
+ *r++ = ' ';
+ r = stpcpy(r, " { ");
+ if (k != eq) {
+ r = stpcpy(r, "\"");
+ r = stpncpy(r, k, eq - k);
+ r = stpcpy(r, "\"");
+ }
+ if (v != slash) {
+ r = stpcpy (r, " = \"");
+ r = stpncpy(r, v, slash - v);
+ r = stpcpy(r, "\"");
+ }
+ r = stpcpy(r, " }");
+ if (indent > 0)
+ *r++ = '\n';
+ k = slash + 1;
+ }
+ return result;
+}
+
+static int format_atype(struct lens *l, char **buf, uint indent);
+
+static int format_indent(char **buf, uint indent) {
+ if (ALLOC_N(*buf, indent+1) < 0)
+ return -1;
+ memset(*buf, ' ', indent);
+ return 0;
+}
+
+static int format_subtree_atype(struct lens *l, char **buf, uint indent) {
+ char *k = NULL, *v = NULL;
+ const struct regexp *ktype = l->child->ktype;
+ const struct regexp *vtype = l->child->vtype;
+ int r, result = -1;
+ char *si = NULL;
+
+ if (format_indent(&si, indent) < 0)
+ goto done;
+
+ if (ktype != NULL) {
+ k = regexp_escape(ktype);
+ if (k == NULL)
+ goto done;
+ }
+ if (vtype != NULL) {
+ v = regexp_escape(vtype);
+ if (v == NULL)
+ goto done;
+ if (k == NULL)
+ r = xasprintf(buf, "%s{ = /%s/ }", si, v);
+ else
+ r = xasprintf(buf, "%s{ /%s/ = /%s/ }", si, k, v);
+ } else {
+ if (k == NULL)
+ r = xasprintf(buf, "%s{ }", si);
+ else
+ r = xasprintf(buf, "%s{ /%s/ }", si, k);
+ }
+ if (r < 0)
+ goto done;
+
+ result = 0;
+ done:
+ FREE(si);
+ FREE(v);
+ FREE(k);
+ return result;
+}
+
+static int format_rep_atype(struct lens *l, char **buf,
+ uint indent, char quant) {
+ char *a = NULL;
+ int r, result = -1;
+
+ r = format_atype(l->child, &a, indent);
+ if (r < 0)
+ goto done;
+ if (strlen(a) == 0) {
+ *buf = a;
+ a = NULL;
+ result = 0;
+ goto done;
+ }
+
+ if (l->child->tag == L_CONCAT || l->child->tag == L_UNION)
+ r = xasprintf(buf, "(%s)%c", a, quant);
+ else
+ r = xasprintf(buf, "%s%c", a, quant);
+
+ if (r < 0)
+ goto done;
+
+ result = 0;
+ done:
+ FREE(a);
+ return result;
+}
+
+static int format_concat_atype(struct lens *l, char **buf, uint indent) {
+ char **c = NULL, *s = NULL, *p;
+ int r, result = -1;
+ size_t len = 0, nconc = 0;
+
+ if (ALLOC_N(c, l->nchildren) < 0)
+ goto done;
+
+ for (int i=0; i < l->nchildren; i++) {
+ r = format_atype(l->children[i], c+i, indent);
+ if (r < 0)
+ goto done;
+ len += strlen(c[i]) + 3;
+ if (strlen(c[i]) > 0)
+ nconc += 1;
+ if (l->children[i]->tag == L_UNION)
+ len += 2;
+ }
+
+ if (ALLOC_N(s, len+1) < 0)
+ goto done;
+ p = s;
+ for (int i=0; i < l->nchildren; i++) {
+ bool needs_parens = nconc > 1 && l->children[i]->tag == L_UNION;
+ if (strlen(c[i]) == 0)
+ continue;
+ if (i > 0)
+ *p++ = '\n';
+ char *t = c[i];
+ if (needs_parens) {
+ for (int j=0; j < indent; j++)
+ *p++ = *t++;
+ *p++ = '(';
+ }
+ p = stpcpy(p, t);
+ if (needs_parens)
+ *p++ = ')';
+ }
+
+ *buf = s;
+ s = NULL;
+ result = 0;
+ done:
+ if (c != NULL)
+ for (int i=0; i < l->nchildren; i++)
+ FREE(c[i]);
+ FREE(c);
+ FREE(s);
+ return result;
+}
+
+static int format_union_atype(struct lens *l, char **buf, uint indent) {
+ char **c = NULL, *s = NULL, *p;
+ int r, result = -1;
+ size_t len = 0;
+
+ if (ALLOC_N(c, l->nchildren) < 0)
+ goto done;
+
+ /* Estimate the length of the string we will build. The calculation
+ overestimates that length so that the logic is a little simpler than
+ in the loop where we actually build the string */
+ for (int i=0; i < l->nchildren; i++) {
+ r = format_atype(l->children[i], c+i, indent + 2);
+ if (r < 0)
+ goto done;
+ /* We will add c[i] and some fixed characters */
+ len += strlen(c[i]) + strlen("\n| ()");
+ if (strlen(c[i]) < indent+2) {
+ /* We will add indent+2 whitespace */
+ len += indent+2;
+ }
+ }
+
+ if (ALLOC_N(s, len+1) < 0)
+ goto done;
+
+ p = s;
+ for (int i=0; i < l->nchildren; i++) {
+ char *t = c[i];
+ if (i > 0) {
+ *p++ = '\n';
+ if (strlen(t) >= indent+2) {
+ /* c[i] is not just whitespace */
+ p = stpncpy(p, t, indent+2);
+ t += indent+2;
+ } else {
+ /* c[i] is just whitespace, make sure we indent the
+ '|' appropriately */
+ memset(p, ' ', indent+2);
+ p += indent+2;
+ }
+ p = stpcpy(p, "| ");
+ } else {
+ /* Skip additional indent */
+ t += 2;
+ }
+ if (strlen(t) == 0)
+ p = stpcpy(p, "()");
+ else
+ p = stpcpy(p, t);
+ }
+ *buf = s;
+ s = NULL;
+ result = 0;
+ done:
+ if (c != NULL)
+ for (int i=0; i < l->nchildren; i++)
+ FREE(c[i]);
+ FREE(c);
+ FREE(s);
+ return result;
+}
+
+static int format_rec_atype(struct lens *l, char **buf, uint indent) {
+ int r;
+
+ if (l->rec_internal) {
+ *buf = strdup("<<rec>>");
+ return (*buf == NULL) ? -1 : 0;
+ }
+
+ char *c = NULL;
+ r = format_atype(l->body, &c, indent);
+ if (r < 0)
+ return -1;
+ r = xasprintf(buf, "<<rec:%s>>", c);
+ free(c);
+ return (r < 0) ? -1 : 0;
+}
+
+static int format_atype(struct lens *l, char **buf, uint indent) {
+ *buf = NULL;
+
+ switch(l->tag) {
+ case L_DEL:
+ case L_STORE:
+ case L_KEY:
+ case L_LABEL:
+ case L_VALUE:
+ case L_SEQ:
+ case L_COUNTER:
+ *buf = strdup("");
+ return (*buf == NULL) ? -1 : 0;
+ break;
+ case L_SUBTREE:
+ return format_subtree_atype(l, buf, indent);
+ break;
+ case L_STAR:
+ return format_rep_atype(l, buf, indent, '*');
+ break;
+ case L_MAYBE:
+ return format_rep_atype(l, buf, indent, '?');
+ break;
+ case L_CONCAT:
+ return format_concat_atype(l, buf, indent);
+ break;
+ case L_UNION:
+ return format_union_atype(l, buf, indent);
+ break;
+ case L_REC:
+ return format_rec_atype(l, buf, indent);
+ break;
+ case L_SQUARE:
+ return format_concat_atype(l->child, buf, indent);
+ break;
+ default:
+ BUG_LENS_TAG(l);
+ break;
+ };
+ return -1;
+}
+
+int lns_format_atype(struct lens *l, char **buf) {
+ int r = 0;
+ r = format_atype(l, buf, 4);
+ return r;
+}
+
+/*
+ * Recursive lenses
+ */
+struct value *lns_make_rec(struct info *info) {
+ struct lens *l = make_lens(L_REC, info);
+ l->recursive = 1;
+ l->rec_internal = 1;
+
+ return make_lens_value(l);
+}
+
+/* Transform a recursive lens into a recursive transition network
+ *
+ * First, we transform the lens into context free grammar, considering any
+ * nonrecursive lens as a terminal
+ *
+ * cfg: lens -> nonterminal -> production list
+ *
+ * cfg(primitive, N) -> N := regexp(primitive)
+ * cfg(l1 . l2, N) -> N := N1 . N2 + cfg(l1, N1) + cfg(l2, N2)
+ * cfg(l1 | l2, N) -> N := N1 | N2 + cfg(l1, N1) + cfg(l2, N2)
+ * cfg(l*, N) -> N := N . N' | eps + cfg(l, N')
+ * cfg([ l ], N) -> N := N' + cfg(l, N')
+ *
+ * We use the lenses as nonterminals themselves; this also means that our
+ * productions are normalized such that the RHS is either a terminal
+ * (regexp) or entirely consists of nonterminals
+ *
+ * In a few places, we need to know that a nonterminal corresponds to a
+ * subtree combinator ([ l ]); this is the main reason that the rule (cfg[
+ * l ], N) introduces a useless production N := N'.
+ *
+ * Computing the types for a recursive lens r is (fairly) straightforward,
+ * given the above grammar, which we convert to an automaton following
+ * http://arxiv.org/abs/cs/9910022; the only complication arises from the
+ * subtree combinator, since it can be used in recursive lenses to
+ * construct trees of arbitrary depth, but we need to approximate the types
+ * of r in a way that fits with our top-down tree automaton in put.c.
+ *
+ * To handle subtree combinators, remember that the type rules for a lens
+ * m = [ l ] are:
+ *
+ * m.ktype = NULL
+ * m.vtype = NULL
+ * m.ctype = l.ctype
+ * m.atype = enc(l.ktype, l.vtype)
+ * ( enc is a function regexp -> regexp -> regexp)
+ *
+ * We compute types for r by modifying its automaton according to
+ * Nederhof's paper and reducing it to a regular expression of lenses. This
+ * has to happen in the following steps:
+ * r.ktype : approximate by using [ .. ].ktype = NULL
+ * r.vtype : same as r.ktype
+ * r.ctype : approximate by treating [ l ] as l
+ * r.atype : approximate by using r.ktype and r.vtype from above
+ * in lens expressions [ f(r) ]
+ */
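+/* As a worked example (illustrative, not part of the grammar above):
+ * consider a recursive lens r = [ l . r ]? for some nonrecursive lens
+ * l. Writing N for r, M for [ l . r ], P for l . r, and T for the
+ * terminal regexp(l), the rules produce for the ctype
+ *
+ *   N := M | eps      (from the maybe combinator)
+ *   M := P            (cfg([ p ], M), ctype case)
+ *   P := T . N        (cfg(l . r, P))
+ *
+ * so N derives T . N | eps, and reducing the corresponding automaton
+ * yields the regular approximation ctype(r) = (ctype(l))*.
+ */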
+
+/* Transitions go to a state and are labeled with a lens. For epsilon
+ * transitions, lens may be NULL. When lens is a simple (nonrecursive
+ * lens), PROD will be NULL. When we modify the automaton to splice
+ * nonterminals in, we remember the production for the nonterminal in PROD.
+ */
+struct trans {
+ struct state *to;
+ struct lens *lens;
+ struct regexp *re;
+};
+
+struct state {
+ struct state *next; /* Linked list for memory management */
+ size_t ntrans;
+ struct trans *trans;
+};
+
+/* Production for lens LENS, with start state START and end state END.
+ When the automaton is started in START, END is its only accepting
+ state. */
+struct prod {
+ struct lens *lens;
+ struct state *start;
+ struct state *end;
+};
+
+/* A recursive transition network used to compute regular approximations
+ * to the types */
+struct rtn {
+ struct info *info;
+ size_t nprod;
+ struct prod **prod;
+ struct state *states; /* Linked list through next of all states in all
+ prods; the states for each production are on
+ the part of the list from prod->start to
+ prod->end */
+ struct value *exn;
+ enum lens_type lens_type;
+ unsigned int check : 1;
+};
+
+#define RTN_BAIL(rtn) if ((rtn)->exn != NULL || \
+ (rtn)->info->error->code != AUG_NOERROR) \
+ goto error;
+
+static void free_prod(struct prod *prod) {
+ if (prod == NULL)
+ return;
+ unref(prod->lens, lens);
+ free(prod);
+}
+
+static void free_rtn(struct rtn *rtn) {
+ if (rtn == NULL)
+ return;
+ for (int i=0; i < rtn->nprod; i++)
+ free_prod(rtn->prod[i]);
+ free(rtn->prod);
+ list_for_each(s, rtn->states) {
+ for (int i=0; i < s->ntrans; i++) {
+ unref(s->trans[i].lens, lens);
+ unref(s->trans[i].re, regexp);
+ }
+ free(s->trans);
+ }
+ list_free(rtn->states);
+ unref(rtn->info, info);
+ unref(rtn->exn, value);
+ free(rtn);
+}
+
+static struct state *add_state(struct prod *prod) {
+ struct state *result = NULL;
+ int r;
+
+ r = ALLOC(result);
+ ERR_NOMEM(r < 0, prod->lens->info);
+
+ list_cons(prod->start->next, result);
+ error:
+ return result;
+}
+
+static struct trans *add_trans(struct rtn *rtn, struct state *state,
+ struct state *to, struct lens *l) {
+ int r;
+ struct trans *result = NULL;
+
+ for (int i=0; i < state->ntrans; i++)
+ if (state->trans[i].to == to && state->trans[i].lens == l)
+ return state->trans + i;
+
+ r = REALLOC_N(state->trans, state->ntrans+1);
+ ERR_NOMEM(r < 0, rtn->info);
+
+ result = state->trans + state->ntrans;
+ state->ntrans += 1;
+
+ MEMZERO(result, 1);
+ result->to = to;
+ if (l != NULL) {
+ result->lens = ref(l);
+ result->re = ref(ltype(l, rtn->lens_type));
+ }
+ error:
+ return result;
+}
+
+static struct prod *make_prod(struct rtn *rtn, struct lens *l) {
+ struct prod *result = NULL;
+ int r;
+
+ r = ALLOC(result);
+ ERR_NOMEM(r < 0, l->info);
+
+ result->lens = ref(l);
+ r = ALLOC(result->start);
+ ERR_NOMEM(r < 0, l->info);
+
+ result->end = add_state(result);
+ ERR_BAIL(l->info);
+
+ result->end->next = rtn->states;
+ rtn->states = result->start;
+
+ return result;
+ error:
+ free_prod(result);
+ return NULL;
+}
+
+static struct prod *prod_for_lens(struct rtn *rtn, struct lens *l) {
+ if (l == NULL)
+ return NULL;
+ for (int i=0; i < rtn->nprod; i++) {
+ if (rtn->prod[i]->lens == l)
+ return rtn->prod[i];
+ }
+ return NULL;
+}
+
+static void rtn_dot(struct rtn *rtn, const char *stage) {
+ FILE *fp;
+ int r = 0;
+
+ fp = debug_fopen("rtn_%s_%s.dot", stage, lens_type_names[rtn->lens_type]);
+ if (fp == NULL)
+ return;
+
+ fprintf(fp, "digraph \"l1\" {\n rankdir=LR;\n");
+ list_for_each(s, rtn->states) {
+ char *label = NULL;
+ for (int p=0; p < rtn->nprod; p++) {
+ if (s == rtn->prod[p]->start) {
+ r = xasprintf(&label, "s%d", p);
+ } else if (s == rtn->prod[p]->end) {
+ r = xasprintf(&label, "e%d", p);
+ }
+ ERR_NOMEM(r < 0, rtn->info);
+ }
+ if (label == NULL) {
+ r = xasprintf(&label, "%p", s);
+ ERR_NOMEM(r < 0, rtn->info);
+ }
+ fprintf(fp, " n%p [label = \"%s\"];\n", s, label == NULL ? "" : label);
+ FREE(label);
+ for (int i=0; i < s->ntrans; i++) {
+ fprintf(fp, " n%p -> n%p", s, s->trans[i].to);
+ if (s->trans[i].re != NULL) {
+ label = regexp_escape(s->trans[i].re);
+ for (char *t = label; *t; t++)
+ if (*t == '\\')
+ *t = '~';
+ fprintf(fp, " [ label = \"%s\" ]", label);
+ FREE(label);
+ }
+ fprintf(fp, ";\n");
+ }
+ }
+ error:
+ fprintf(fp, "}\n");
+ fclose(fp);
+}
+
+/* Add transitions to RTN corresponding to cfg(l, N) */
+static void rtn_rules(struct rtn *rtn, struct lens *l) {
+ if (! l->recursive)
+ return;
+
+ struct prod *prod = prod_for_lens(rtn, l);
+ if (prod != NULL)
+ return;
+
+ int r = REALLOC_N(rtn->prod, rtn->nprod+1);
+ ERR_NOMEM(r < 0, l->info);
+
+ prod = make_prod(rtn, l);
+ rtn->prod[rtn->nprod] = prod;
+ RTN_BAIL(rtn);
+ rtn->nprod += 1;
+
+ struct state *start = prod->start;
+
+ switch (l->tag) {
+ case L_UNION:
+ /* cfg(l1|..|ln, N) -> N := N1 | N2 | ... | Nn */
+ for (int i=0; i < l->nchildren; i++) {
+ add_trans(rtn, start, prod->end, l->children[i]);
+ RTN_BAIL(rtn);
+ rtn_rules(rtn, l->children[i]);
+ RTN_BAIL(rtn);
+ }
+ break;
+ case L_CONCAT:
+ /* cfg(l1 . l2 ... ln, N) -> N := N1 . N2 ... Nn */
+ for (int i=0; i < l->nchildren-1; i++) {
+ struct state *s = add_state(prod);
+ RTN_BAIL(rtn);
+ add_trans(rtn, start, s, l->children[i]);
+ RTN_BAIL(rtn);
+ start = s;
+ rtn_rules(rtn, l->children[i]);
+ RTN_BAIL(rtn);
+ }
+ {
+ struct lens *c = l->children[l->nchildren - 1];
+ add_trans(rtn, start, prod->end, c);
+ RTN_BAIL(rtn);
+ rtn_rules(rtn, c);
+ RTN_BAIL(rtn);
+ }
+ break;
+ case L_STAR: {
+ /* cfg(l*, N) -> N := N . N' | eps */
+ struct state *s = add_state(prod);
+ RTN_BAIL(rtn);
+ add_trans(rtn, start, s, l);
+ RTN_BAIL(rtn);
+ add_trans(rtn, s, prod->end, l->child);
+ RTN_BAIL(rtn);
+ add_trans(rtn, start, prod->end, NULL);
+ RTN_BAIL(rtn);
+ rtn_rules(rtn, l->child);
+ RTN_BAIL(rtn);
+ break;
+ }
+ case L_SUBTREE:
+ switch (rtn->lens_type) {
+ case KTYPE:
+ case VTYPE:
+ /* cfg([ l ], N) -> N := eps */
+ add_trans(rtn, start, prod->end, NULL);
+ break;
+ case CTYPE:
+ /* cfg([ l ], N) -> N := N' plus cfg(l, N') */
+ add_trans(rtn, start, prod->end, l->child);
+ RTN_BAIL(rtn);
+ rtn_rules(rtn, l->child);
+ RTN_BAIL(rtn);
+ break;
+ case ATYPE: {
+ /* At this point, we have propagated ktype and vtype */
+ /* cfg([ l ], N) -> N := enc(l->ktype, l->vtype) */
+ struct trans *t = add_trans(rtn, start, prod->end, NULL);
+ RTN_BAIL(rtn);
+ t->re = subtree_atype(l->info, l->child->ktype, l->child->vtype);
+ break;
+ }
+ default:
+ BUG_ON(true, rtn->info, "Unexpected lens type %d", rtn->lens_type);
+ break;
+ }
+ break;
+ case L_MAYBE:
+ /* cfg(l?, N) -> N := N' | eps plus cfg(l, N') */
+ add_trans(rtn, start, prod->end, l->child);
+ RTN_BAIL(rtn);
+ add_trans(rtn, start, prod->end, NULL);
+ RTN_BAIL(rtn);
+ rtn_rules(rtn, l->child);
+ RTN_BAIL(rtn);
+ break;
+ case L_REC:
+ /* cfg(l, N) -> N := N' plus cfg(l->body, N') */
+ add_trans(rtn, start, prod->end, l->body);
+ RTN_BAIL(rtn);
+ rtn_rules(rtn, l->body);
+ RTN_BAIL(rtn);
+ break;
+ case L_SQUARE:
+ add_trans(rtn, start, prod->end, l->child);
+ RTN_BAIL(rtn);
+ break;
+ default:
+ BUG_LENS_TAG(l);
+ break;
+ }
+ error:
+ return;
+}
+
+/* Splice production TO into transition T: redirect T to TO's start
+ * state and add an epsilon transition from TO's end state to T's
+ * original target. T keeps its lens but loses its regexp, since it now
+ * acts as an epsilon transition into the production.
+ */
+static void prod_splice(struct rtn *rtn,
+ struct prod *from, struct prod *to, struct trans *t) {
+
+ add_trans(rtn, to->end, t->to, NULL);
+ ERR_BAIL(from->lens->info);
+ t->to = to->start;
+ unref(t->re, regexp);
+
+ error:
+ return;
+}
+
+static void rtn_splice(struct rtn *rtn, struct prod *prod) {
+ for (struct state *s = prod->start; s != prod->end; s = s->next) {
+ for (int i=0; i < s->ntrans; i++) {
+ struct prod *p = prod_for_lens(rtn, s->trans[i].lens);
+ if (p != NULL) {
+ prod_splice(rtn, prod, p, s->trans+i);
+ RTN_BAIL(rtn);
+ }
+ }
+ }
+ error:
+ return;
+}
+
+static struct rtn *rtn_build(struct lens *rec, enum lens_type lt) {
+ int r;
+ struct rtn *rtn;
+
+ r = ALLOC(rtn);
+ ERR_NOMEM(r < 0, rec->info);
+
+ rtn->info = ref(rec->info);
+ rtn->lens_type = lt;
+
+ rtn_rules(rtn, rec);
+ RTN_BAIL(rtn);
+ if (debugging("cf.approx"))
+ rtn_dot(rtn, "10-rules");
+
+ for (int i=0; i < rtn->nprod; i++) {
+ rtn_splice(rtn, rtn->prod[i]);
+ RTN_BAIL(rtn);
+ }
+ if (debugging("cf.approx"))
+ rtn_dot(rtn, "11-splice");
+
+ error:
+ return rtn;
+}
+
+/* Compare transitions lexicographically by (to, lens) */
+static int trans_to_cmp(const void *v1, const void *v2) {
+ const struct trans *t1 = v1;
+ const struct trans *t2 = v2;
+
+ if (t1->to != t2->to)
+ return (t1->to < t2->to) ? -1 : 1;
+
+ if (t1->lens == t2->lens)
+ return 0;
+ return (t1->lens < t2->lens) ? -1 : 1;
+}
+
+/* Collapse a chain of transitions S1 -> S -> S2 by adding a transition
+ * S1 -> S2 labeled with the regexp R1 . (LOOP)* . R2 | R3, where R3 is
+ * the regexp on the possibly already existing transition S1 -> S2. If
+ * LOOP is NULL or R3 does not exist, label the transition with a
+ * simplified regexp that treats NULL as epsilon */
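+/* For instance (illustrative values only): with R1 = /a/, LOOP = /b/
+ * and R2 = /c/ and no existing transition S1 -> S2, the new transition
+ * is labeled a(b)*c; if a transition S1 -> S2 labeled R3 = /d/ already
+ * exists, the label becomes a(b)*c|d. With LOOP == NULL the label is
+ * simply ac (or ac|d). */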
+static void collapse_trans(struct rtn *rtn,
+ struct state *s1, struct state *s2,
+ struct regexp *r1, struct regexp *loop,
+ struct regexp *r2) {
+
+ struct trans *t = NULL;
+ struct regexp *r = NULL;
+
+ for (int i=0; i < s1->ntrans; i++) {
+ if (s1->trans[i].to == s2) {
+ t = s1->trans + i;
+ break;
+ }
+ }
+
+ /* Set R = R1 . (LOOP)* . R2, treating NULL's as epsilon */
+ if (loop == NULL) {
+ if (r1 == NULL)
+ r = ref(r2);
+ else if (r2 == NULL)
+ r = ref(r1);
+ else
+ r = regexp_concat(rtn->info, r1, r2);
+ } else {
+ struct regexp *s = regexp_iter(rtn->info, loop, 0, -1);
+ ERR_NOMEM(s == NULL, rtn->info);
+ struct regexp *c = NULL;
+ if (r1 == NULL) {
+ c = s;
+ s = NULL;
+ } else {
+ c = regexp_concat(rtn->info, r1, s);
+ unref(s, regexp);
+ ERR_NOMEM(c == NULL, rtn->info);
+ }
+ if (r2 == NULL) {
+ r = c;
+ c = NULL;
+ } else {
+ r = regexp_concat(rtn->info, c, r2);
+ unref(c, regexp);
+ ERR_NOMEM(r == NULL, rtn->info);
+ }
+ }
+
+ if (t == NULL) {
+ t = add_trans(rtn, s1, s2, NULL);
+ ERR_NOMEM(t == NULL, rtn->info);
+ t->re = r;
+ } else if (t->re == NULL) {
+ if (r == NULL || regexp_matches_empty(r))
+ t->re = r;
+ else {
+ t->re = regexp_maybe(rtn->info, r);
+ unref(r, regexp);
+ ERR_NOMEM(t->re == NULL, rtn->info);
+ }
+ } else if (r == NULL) {
+ if (!regexp_matches_empty(t->re)) {
+ r = regexp_maybe(rtn->info, t->re);
+ unref(t->re, regexp);
+ t->re = r;
+ ERR_NOMEM(r == NULL, rtn->info);
+ }
+ } else {
+ struct regexp *u = regexp_union(rtn->info, r, t->re);
+ unref(r, regexp);
+ unref(t->re, regexp);
+ t->re = u;
+ ERR_NOMEM(u == NULL, rtn->info);
+ }
+
+ return;
+ error:
+ rtn->exn = rtn->info->error->exn;
+ return;
+}
+
+/* Reduce the automaton with start state rprod->start and only accepting
+ * state rprod->end so that we have a single transition rprod->start =>
+ * rprod->end labelled with the overall approximating regexp for the
+ * automaton.
+ *
+ * This is the same algorithm as fa_as_regexp in fa.c
+ */
+static struct regexp *rtn_reduce(struct rtn *rtn, struct lens *rec) {
+ struct prod *prod = prod_for_lens(rtn, rec);
+ int r;
+
+ ERR_THROW(prod == NULL, rtn->info, AUG_EINTERNAL,
+ "No production for recursive lens");
+
+ /* Eliminate epsilon transitions and turn transitions between the same
+ * two states into a regexp union */
+ list_for_each(s, rtn->states) {
+ qsort(s->trans, s->ntrans, sizeof(*s->trans), trans_to_cmp);
+ for (int i=0; i < s->ntrans; i++) {
+ int j = i+1;
+ for (;j < s->ntrans && s->trans[i].to == s->trans[j].to;
+ j++);
+ if (j > i+1) {
+ struct regexp *u, **v;
+ r = ALLOC_N(v, j - i);
+ ERR_NOMEM(r < 0, rtn->info);
+ for (int k=i; k < j; k++)
+ v[k-i] = s->trans[k].re;
+ u = regexp_union_n(rtn->info, j - i, v);
+ if (u == NULL) {
+ // FIXME: The calling convention for regexp_union_n
+ // is bad, since we can't distinguish between alloc
+ // failure and unioning all NULL's
+ for (int k=0; k < j-i; k++)
+ if (v[k] != NULL) {
+ FREE(v);
+ ERR_NOMEM(true, rtn->info);
+ }
+ }
+ FREE(v);
+ for (int k=i; k < j; k++) {
+ unref(s->trans[k].lens, lens);
+ unref(s->trans[k].re, regexp);
+ }
+ s->trans[i].re = u;
+ MEMMOVE(s->trans + (i+1),
+ s->trans + j,
+ s->ntrans - j);
+ s->ntrans -= j - (i + 1);
+ }
+ }
+ }
+
+ /* Introduce new start and end states with epsilon transitions to/from
+ * the old start and end states */
+ struct state *end = NULL;
+ struct state *start = NULL;
+ if (ALLOC(start) < 0 || ALLOC(end) < 0) {
+ FREE(start);
+ FREE(end);
+ ERR_NOMEM(true, rtn->info);
+ }
+ list_insert_before(start, prod->start, rtn->states);
+ end->next = prod->end->next;
+ prod->end->next = end;
+
+ add_trans(rtn, start, prod->start, NULL);
+ RTN_BAIL(rtn);
+ add_trans(rtn, prod->end, end, NULL);
+ RTN_BAIL(rtn);
+
+ prod->start = start;
+ prod->end = end;
+
+ /* Eliminate states S (except for INI and FIN) one by one:
+ * Let LOOP the regexp for the transition S -> S if it exists, epsilon
+ * otherwise.
+ * For all S1, S2 different from S with S1 -> S -> S2
+ * Let R1 the regexp of S1 -> S
+ * R2 the regexp of S -> S2
+ * R3 the regexp of S1 -> S2 (or the regexp matching nothing
+ * if no such transition)
+ * set the regexp on the transition S1 -> S2 to
+ * R1 . (LOOP)* . R2 | R3 */
+ // FIXME: This does not go over all states
+ list_for_each(s, rtn->states) {
+ if (s == prod->end || s == prod->start)
+ continue;
+ struct regexp *loop = NULL;
+ for (int i=0; i < s->ntrans; i++) {
+ if (s == s->trans[i].to) {
+ ensure(loop == NULL, rtn->info);
+ loop = s->trans[i].re;
+ }
+ }
+ list_for_each(s1, rtn->states) {
+ if (s == s1)
+ continue;
+ for (int t1=0; t1 < s1->ntrans; t1++) {
+ if (s == s1->trans[t1].to) {
+ for (int t2=0; t2 < s->ntrans; t2++) {
+ struct state *s2 = s->trans[t2].to;
+ if (s2 == s)
+ continue;
+ collapse_trans(rtn, s1, s2,
+ s1->trans[t1].re, loop,
+ s->trans[t2].re);
+ RTN_BAIL(rtn);
+ }
+ }
+ }
+ }
+ }
+
+ /* Find the overall regexp */
+ struct regexp *result = NULL;
+ for (int i=0; i < prod->start->ntrans; i++) {
+ if (prod->start->trans[i].to == prod->end) {
+ ensure(result == NULL, rtn->info);
+ result = ref(prod->start->trans[i].re);
+ }
+ }
+ return result;
+ error:
+ return NULL;
+}
+
+static void propagate_type(struct lens *l, enum lens_type lt) {
+ struct regexp **types = NULL;
+ int r;
+
+ if (! l->recursive || ltype(l, lt) != NULL)
+ return;
+
+ switch(l->tag) {
+ case L_CONCAT:
+ r = ALLOC_N(types, l->nchildren);
+ ERR_NOMEM(r < 0, l->info);
+ for (int i=0; i < l->nchildren; i++) {
+ propagate_type(l->children[i], lt);
+ types[i] = ltype(l->children[i], lt);
+ }
+ ltype(l, lt) = regexp_concat_n(l->info, l->nchildren, types);
+ FREE(types);
+ break;
+ case L_UNION:
+ r = ALLOC_N(types, l->nchildren);
+ ERR_NOMEM(r < 0, l->info);
+ for (int i=0; i < l->nchildren; i++) {
+ propagate_type(l->children[i], lt);
+ types[i] = ltype(l->children[i], lt);
+ }
+ ltype(l, lt) = regexp_union_n(l->info, l->nchildren, types);
+ FREE(types);
+ break;
+ case L_SUBTREE:
+ propagate_type(l->child, lt);
+ if (lt == ATYPE)
+ l->atype = subtree_atype(l->info, l->child->ktype, l->child->vtype);
+ if (lt == CTYPE)
+ l->ctype = ref(l->child->ctype);
+ break;
+ case L_STAR:
+ propagate_type(l->child, lt);
+ ltype(l, lt) = regexp_iter(l->info, ltype(l->child, lt), 0, -1);
+ break;
+ case L_MAYBE:
+ propagate_type(l->child, lt);
+ ltype(l, lt) = regexp_maybe(l->info, ltype(l->child, lt));
+ break;
+ case L_REC:
+ /* Nothing to do */
+ break;
+ case L_SQUARE:
+ propagate_type(l->child, lt);
+ ltype(l, lt) = ref(ltype(l->child, lt));
+ break;
+ default:
+ BUG_LENS_TAG(l);
+ break;
+ }
+
+ error:
+ FREE(types);
+}
+
+static struct value *typecheck(struct lens *l, int check);
+
+typedef struct value *typecheck_n_make(struct info *,
+ struct lens *, struct lens *, int);
+
+static struct info *merge_info(struct info *i1, struct info *i2) {
+ struct info *info;
+ make_ref(info);
+ ERR_NOMEM(info == NULL, i1);
+
+ info->filename = ref(i1->filename);
+ info->first_line = i1->first_line;
+ info->first_column = i1->first_column;
+ info->last_line = i2->last_line;
+ info->last_column = i2->last_column;
+ info->error = i1->error;
+ return info;
+
+ error:
+ unref(info, info);
+ return NULL;
+}
+
+static struct value *typecheck_n(struct lens *l,
+ typecheck_n_make *make, int check) {
+ struct value *exn = NULL;
+ struct lens *acc = NULL;
+
+ ensure(l->tag == L_CONCAT || l->tag == L_UNION, l->info);
+ for (int i=0; i < l->nchildren; i++) {
+ exn = typecheck(l->children[i], check);
+ if (exn != NULL)
+ goto error;
+ }
+ acc = ref(l->children[0]);
+ for (int i=1; i < l->nchildren; i++) {
+ struct info *info = merge_info(acc->info, l->children[i]->info);
+ ERR_NOMEM(info == NULL, acc->info);
+ exn = (*make)(info, acc, ref(l->children[i]), check);
+ if (EXN(exn))
+ goto error;
+ ensure(exn->tag == V_LENS, l->info);
+ acc = ref(exn->lens);
+ unref(exn, value);
+ }
+ l->value = acc->value;
+ l->key = acc->key;
+ error:
+ unref(acc, lens);
+ return exn;
+}
+
+static struct value *typecheck(struct lens *l, int check) {
+ struct value *exn = NULL;
+
+ /* Nonrecursive lenses are typechecked at build time */
+ if (! l->recursive)
+ return NULL;
+
+ switch(l->tag) {
+ case L_CONCAT:
+ exn = typecheck_n(l, lns_make_concat, check);
+ break;
+ case L_UNION:
+ exn = typecheck_n(l, lns_make_union, check);
+ break;
+ case L_SUBTREE:
+ case L_SQUARE:
+ exn = typecheck(l->child, check);
+ break;
+ case L_STAR:
+ if (check)
+ exn = typecheck_iter(l->info, l->child);
+ if (exn == NULL && l->value)
+ exn = make_exn_value(l->info, "Multiple stores in iteration");
+ if (exn == NULL && l->key)
+ exn = make_exn_value(l->info, "Multiple keys/labels in iteration");
+ break;
+ case L_MAYBE:
+ if (check)
+ exn = typecheck_maybe(l->info, l->child);
+ l->key = l->child->key;
+ l->value = l->child->value;
+ break;
+ case L_REC:
+ /* Nothing to do */
+ break;
+ default:
+ BUG_LENS_TAG(l);
+ break;
+ }
+
+ return exn;
+}
+
+static struct value *rtn_approx(struct lens *rec, enum lens_type lt) {
+ struct rtn *rtn = NULL;
+ struct value *result = NULL;
+
+ rtn = rtn_build(rec, lt);
+ RTN_BAIL(rtn);
+ ltype(rec, lt) = rtn_reduce(rtn, rec);
+ RTN_BAIL(rtn);
+ if (debugging("cf.approx"))
+ rtn_dot(rtn, "50-reduce");
+
+ propagate_type(rec->body, lt);
+ ERR_BAIL(rec->info);
+
+ done:
+ free_rtn(rtn);
+
+ if (debugging("cf.approx")) {
+ printf("approx %s => ", lens_type_names[lt]);
+ print_regexp(stdout, ltype(rec, lt));
+ printf("\n");
+ }
+
+ return result;
+ error:
+ if (rtn->exn == NULL)
+ result = rec->info->error->exn;
+ else
+ result = ref(rtn->exn);
+ goto done;
+}
+
+static struct value *
+exn_multiple_epsilons(struct lens *lens,
+ struct lens *l1, struct lens *l2) {
+ char *fi = NULL;
+ struct value *exn = NULL;
+
+ exn = make_exn_value(ref(lens->info),
+ "more than one nullable branch in a union");
+ fi = format_info(l1->info);
+ exn_printf_line(exn, "First nullable lens: %s", fi);
+ FREE(fi);
+
+ fi = format_info(l2->info);
+ exn_printf_line(exn, "Second nullable lens: %s", fi);
+ FREE(fi);
+
+ return exn;
+}
+
+/* Update lens->ctype_nullable and return 1 if there was a change,
+ * 0 if there was none */
+static int ctype_nullable(struct lens *lens, struct value **exn) {
+ int nullable = 0;
+ int ret = 0;
+ struct lens *null_lens = NULL;
+
+ if (! lens->recursive)
+ return 0;
+
+ switch(lens->tag) {
+ case L_CONCAT:
+ nullable = 1;
+ for (int i=0; i < lens->nchildren; i++) {
+ if (ctype_nullable(lens->children[i], exn))
+ ret = 1;
+ if (! lens->children[i]->ctype_nullable)
+ nullable = 0;
+ }
+ break;
+ case L_UNION:
+ for (int i=0; i < lens->nchildren; i++) {
+ if (ctype_nullable(lens->children[i], exn))
+ ret = 1;
+ if (lens->children[i]->ctype_nullable) {
+ if (nullable) {
+ *exn = exn_multiple_epsilons(lens, null_lens,
+ lens->children[i]);
+ return 0;
+ }
+ nullable = 1;
+ null_lens = lens->children[i];
+ }
+ }
+ break;
+ case L_SUBTREE:
+ case L_SQUARE:
+ ret = ctype_nullable(lens->child, exn);
+ nullable = lens->child->ctype_nullable;
+ break;
+ case L_STAR:
+ case L_MAYBE:
+ nullable = 1;
+ break;
+ case L_REC:
+ nullable = lens->body->ctype_nullable;
+ break;
+ default:
+ BUG_LENS_TAG(lens);
+ break;
+ }
+ if (*exn != NULL)
+ return 0;
+ if (nullable != lens->ctype_nullable) {
+ ret = 1;
+ lens->ctype_nullable = nullable;
+ }
+ return ret;
+}
+
+struct value *lns_check_rec(struct info *info,
+ struct lens *body, struct lens *rec,
+ int check) {
+ /* The types in the order of approximation */
+ static const enum lens_type types[] = { KTYPE, VTYPE, ATYPE };
+ struct value *result = NULL;
+
+ ensure(rec->tag == L_REC, info);
+ ensure(rec->rec_internal, info);
+
+ /* The user might have written down a regular lens with 'let rec' */
+ if (! body->recursive) {
+ result = make_lens_value(ref(body));
+ ERR_NOMEM(result == NULL, info);
+ return result;
+ }
+
+ /* To help memory management, we avoid the cycle inherent in a recursive
+ * lens by using two instances of an L_REC lens. One is marked with
+ * rec_internal, and used inside the body of the lens. The other is the
+ * "toplevel" which receives external references.
+ *
+ * The internal instance of the recursive lens is REC, the external one
+ * is TOP, constructed below
+ */
+ rec->body = body; /* REC does not own BODY */
+
+ for (int i=0; i < ARRAY_CARDINALITY(types); i++) {
+ result = rtn_approx(rec, types[i]);
+ ERR_BAIL(info);
+ }
+
+ if (rec->atype == NULL) {
+ result = make_exn_value(ref(rec->info),
+ "recursive lens generates the empty language for its %s",
+ rec->ctype == NULL ? "ctype" : "atype");
+ goto error;
+ }
+
+ rec->key = rec->body->key;
+ rec->value = rec->body->value;
+ rec->consumes_value = rec->body->consumes_value;
+
+ while(ctype_nullable(rec->body, &result));
+ if (result != NULL)
+ goto error;
+ rec->ctype_nullable = rec->body->ctype_nullable;
+
+ result = typecheck(rec->body, check);
+ if (result != NULL)
+ goto error;
+
+ result = lns_make_rec(ref(rec->info));
+ struct lens *top = result->lens;
+ for (int t=0; t < ntypes; t++)
+ ltype(top, t) = ref(ltype(rec, t));
+ top->value = rec->value;
+ top->key = rec->key;
+ top->consumes_value = rec->consumes_value;
+ top->ctype_nullable = rec->ctype_nullable;
+ top->body = ref(body);
+ top->alias = rec;
+ top->rec_internal = 0;
+ rec->alias = top;
+
+ top->jmt = jmt_build(top);
+ ERR_BAIL(info);
+
+ return result;
+ error:
+ if (result != NULL && result->tag != V_EXN)
+ unref(result, value);
+ if (result == NULL)
+ result = info->error->exn;
+ return result;
+}
+
+#if ENABLE_DEBUG
+void dump_lens_tree(struct lens *lens){
+ static int count = 0;
+ FILE *fp;
+
+ fp = debug_fopen("lens_%02d_%s.dot", count++, ltag(lens));
+ if (fp == NULL)
+ return;
+
+ fprintf(fp, "digraph \"%s\" {\n", "lens");
+ dump_lens(fp, lens);
+ fprintf(fp, "}\n");
+
+ fclose(fp);
+}
+
+void dump_lens(FILE *out, struct lens *lens){
+ int i = 0;
+ struct regexp *re;
+
+ fprintf(out, "\"%p\" [ shape = box, label = \"%s\\n", lens, ltag(lens));
+
+ for (int t=0; t < ntypes; t++) {
+ re = ltype(lens, t);
+ if (re == NULL)
+ continue;
+ fprintf(out, "%s=",lens_type_names[t]);
+ print_regexp(out, re);
+ fprintf(out, "\\n");
+ }
+
+ fprintf(out, "recursive=%x\\n", lens->recursive);
+ fprintf(out, "rec_internal=%x\\n", lens->rec_internal);
+ fprintf(out, "consumes_value=%x\\n", lens->consumes_value);
+ fprintf(out, "ctype_nullable=%x\\n", lens->ctype_nullable);
+ fprintf(out, "\"];\n");
+ switch(lens->tag){
+ case L_DEL:
+ break;
+ case L_STORE:
+ break;
+ case L_VALUE:
+ break;
+ case L_KEY:
+ break;
+ case L_LABEL:
+ break;
+ case L_SEQ:
+ break;
+ case L_COUNTER:
+ break;
+ case L_CONCAT:
+ for(i = 0; i<lens->nchildren;i++){
+ fprintf(out, "\"%p\" -> \"%p\"\n", lens, lens->children[i]);
+ dump_lens(out, lens->children[i]);
+ }
+ break;
+ case L_UNION:
+ for(i = 0; i<lens->nchildren;i++){
+ fprintf(out, "\"%p\" -> \"%p\"\n", lens, lens->children[i]);
+ dump_lens(out, lens->children[i]);
+ }
+ break;
+ case L_SUBTREE:
+ fprintf(out, "\"%p\" -> \"%p\"\n", lens, lens->child);
+ dump_lens(out, lens->child);
+ break;
+ case L_STAR:
+ fprintf(out, "\"%p\" -> \"%p\"\n", lens, lens->child);
+ dump_lens(out, lens->child);
+
+ break;
+ case L_MAYBE:
+ fprintf(out, "\"%p\" -> \"%p\"\n", lens, lens->child);
+ dump_lens(out, lens->child);
+
+ break;
+ case L_REC:
+ if (lens->rec_internal == 0){
+ fprintf(out, "\"%p\" -> \"%p\"\n", lens, lens->body);
+ dump_lens(out, lens->body);
+ }
+ break;
+ case L_SQUARE:
+ fprintf(out, "\"%p\" -> \"%p\"\n", lens, lens->child);
+ dump_lens(out, lens->child);
+ break;
+ default:
+ fprintf(out, "ERROR\n");
+ break;
+ }
+}
+#endif
+
+/*
+ * Local variables:
+ * indent-tabs-mode: nil
+ * c-indent-level: 4
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
--- /dev/null
+/*
+ * lens.h: Representation of lenses
+ *
+ * Copyright (C) 2007-2016 David Lutterkort
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: David Lutterkort <dlutter@redhat.com>
+ */
+
+#ifndef LENS_H_
+#define LENS_H_
+
+#include "syntax.h"
+#include "fa.h"
+#include "jmt.h"
+
+/* keep in sync with tag name table */
+enum lens_tag {
+ L_DEL = 42, /* Shift tag values so we fail fast(er) on bad pointers */
+ L_STORE,
+ L_VALUE,
+ L_KEY,
+ L_LABEL,
+ L_SEQ,
+ L_COUNTER,
+ L_CONCAT,
+ L_UNION,
+ L_SUBTREE,
+ L_STAR,
+ L_MAYBE,
+ L_REC,
+ L_SQUARE
+};
+
+/* A lens. The way the type information is computed is a little
+ * delicate. There are various regexps involved to form the final type:
+ *
+ * CTYPE - the concrete type, used to parse file -> tree
+ * ATYPE - the abstract type, used to parse tree -> file
+ * KTYPE - the 'key' type, matching the label that this lens
+ * can produce, or NULL if no label is produced
+ * VTYPE - the 'value' type, matching the value that this lens
+ * can produce, or NULL if no value is produced
+ *
+ * We distinguish between regular and recursive (context-free) lenses. Only
+ * L_REC and the combinators can be marked recursive.
+ *
+ * Types are computed at different times, depending on whether the lens is
+ * recursive or not. For non-recursive lenses, types are computed when the
+ * lens is constructed by one of the LNS_MAKE_* functions; for recursive
+ * lenses, we never compute an explicit ctype (since regular approximations
+ * of it are pretty much useless), we do however compute regular
+ * approximations of the ktype, vtype, and atype in LNS_CHECK_REC. That
+ * means that recursive lenses accept context free languages in the string
+ * -> tree direction, but only regular tree languages in the tree -> string
+ * direction.
+ *
+ * Any lens that uses a recursive lens somehow is marked as recursive
+ * itself.
+ */
+struct lens {
+ unsigned int ref;
+ enum lens_tag tag;
+ struct info *info;
+ struct regexp *ctype; /* NULL when recursive == 1 */
+ struct regexp *atype;
+ struct regexp *ktype;
+ struct regexp *vtype;
+ struct jmt *jmt; /* When recursive == 1, might have jmt */
+ unsigned int value : 1;
+ unsigned int key : 1;
+ unsigned int recursive : 1;
+ unsigned int consumes_value : 1;
+ /* Whether we are inside a recursive lens or outside */
+ unsigned int rec_internal : 1;
+ unsigned int ctype_nullable : 1;
+ union {
+ /* Primitive lenses */
+ struct { /* L_DEL uses both */
+ struct regexp *regexp; /* L_STORE, L_KEY */
+ struct string *string; /* L_VALUE, L_LABEL, L_SEQ, L_COUNTER */
+ };
+ /* Combinators */
+ struct lens *child; /* L_SUBTREE, L_STAR, L_MAYBE, L_SQUARE */
+ struct { /* L_UNION, L_CONCAT */
+ unsigned int nchildren;
+ struct lens **children;
+ };
+ struct {
+ struct lens *body; /* L_REC */
+ /* We represent a recursive lens as two instances of struct
+ * lens with L_REC. One has rec_internal set to 1, the other
+ * has it set to 0. The one with rec_internal is used within
+ * the body, the other is what is used from the 'outside'. This
+ * is necessary to break the cycles inherent in recursive
+ * lenses with reference counting. The link through alias is
+ * set up in lns_check_rec, and not reference counted.
+ *
+ * Generally, any lens used in the body of a recursive lens is
+ * marked with rec_internal == 1; lenses that use the recursive
+ * lens 'from the outside' are marked with rec_internal ==
+ * 0. In the latter case, we can assign types right away,
+ * except for the ctype, which we never have for any recursive
+ * lens.
+ */
+ struct lens *alias;
+ };
+ };
+};
+
+/* Constructors for various lens types. Constructor assumes ownership of
+ * arguments without incrementing. Caller owns returned lenses.
+ *
+ * The return type is VALUE instead of LENS so that we can return an
+ * exception if typechecking fails.
+ */
+struct value *lns_make_prim(enum lens_tag tag, struct info *info,
+ struct regexp *regexp, struct string *string);
+struct value *lns_make_union(struct info *, struct lens *, struct lens *,
+ int check);
+struct value *lns_make_concat(struct info *, struct lens *, struct lens *,
+ int check);
+struct value *lns_make_subtree(struct info *, struct lens *);
+struct value *lns_make_star(struct info *, struct lens *,
+ int check);
+struct value *lns_make_plus(struct info *, struct lens *,
+ int check);
+struct value *lns_make_maybe(struct info *, struct lens *,
+ int check);
+struct value *lns_make_square(struct info *, struct lens *, struct lens *,
+ struct lens *, int check);
+
+
+/* Pretty-print a lens */
+char *format_lens(struct lens *l);
+
+/* Pretty-print the atype of a lens. Allocates BUF, which must be freed by
+ * the caller */
+int lns_format_atype(struct lens *, char **buf);
+
+/* Recursive lenses */
+struct value *lns_make_rec(struct info *info);
+struct value *lns_check_rec(struct info *info,
+ struct lens *body, struct lens *rec,
+ int check);
+
+/* Auxiliary data structures used during get/put/create */
+struct skel {
+ struct skel *next;
+ enum lens_tag tag;
+ union {
+ char *text; /* L_DEL */
+ struct skel *skels; /* L_CONCAT, L_STAR, L_SQUARE */
+ };
+ /* Also tag == L_SUBTREE, with no data in the union */
+};
+
+struct lns_error {
+ struct lens *lens;
+ struct lens *last; /* The last lens that matched */
+ struct lens *next; /* The next lens that should match but doesn't */
+ int pos; /* Errors from get/parse */
+ char *path; /* Errors from put, pos will be -1 */
+ char *message;
+};
+
+struct dict *make_dict(char *key, struct skel *skel, struct dict *subdict);
+void dict_lookup(const char *key, struct dict *dict,
+ struct skel **skel, struct dict **subdict);
+int dict_append(struct dict **dict, struct dict *d2);
+void free_skel(struct skel *skel);
+void free_dict(struct dict *dict);
+void free_lns_error(struct lns_error *err);
+
+/* Parse text TEXT with LENS. INFO indicates where TEXT was read from.
+ *
+ * If ERR is non-NULL, *ERR is set to NULL on success, and to an error
+ * message on failure; the constructed tree is always returned. If ERR is
+ * NULL, return the tree on success, and NULL on failure.
+ *
+ * ENABLE_SPAN indicates whether span information should be collected or not
+ */
+struct tree *lns_get(struct info *info, struct lens *lens, const char *text,
+ int enable_span, struct lns_error **err);
+struct skel *lns_parse(struct lens *lens, const char *text,
+ struct dict **dict, struct lns_error **err);
+
+/* Write tree TREE that was initially read from TEXT (but might have been
+ * modified) into file OUT using LENS.
+ *
+ * If ERR is non-NULL, *ERR is set to NULL on success, and to an error
+ * message on failure.
+ *
+ * INFO indicates where we are writing to, and its flags indicate whether
+ * to update spans or not.
+ */
+void lns_put(struct info *info, FILE *out, struct lens *lens, struct tree *tree,
+ const char *text, int enable_span, struct lns_error **err);
+
+/* Free up temporary data structures, most importantly compiled
+ regular expressions */
+void lens_release(struct lens *lens);
+void free_lens(struct lens *lens);
+
+/*
+ * Encoding of tree levels into strings
+ */
+
+/* Special characters used when encoding one level of the tree as a string.
+ * We encode one tree node as KEY . ENC_EQ . VALUE . ENC_SLASH; if KEY or
+ * VALUE are NULL, we use ENC_NULL, which is the empty string. This has the
+ * effect that NULL strings are treated the same as empty strings.
+ *
+ * This encoding is used both for actual trees in the put direction, and to
+ * produce regular expressions describing one level in the tree (we
+ * disregard subtrees)
+ *
+ * For this to work, neither ENC_EQ nor ENC_SLASH can be allowed in a
+ * VALUE; we do this behind the scenes by rewriting regular expressions for
+ * values.
+ */
+#define ENC_EQ "\003"
+#define ENC_SLASH "\004"
+#define ENC_NULL ""
+#define ENC_EQ_CH (ENC_EQ[0])
+#define ENC_SLASH_CH (ENC_SLASH[0])
+
+/* The reserved range of characters that we do not allow in user-supplied
+ regular expressions, since we need them for internal bookkeeping.
+
+ This range must include the ENC_* characters
+*/
+#define RESERVED_FROM "\001"
+#define RESERVED_TO ENC_SLASH
+#define RESERVED_FROM_CH (RESERVED_FROM[0])
+#define RESERVED_TO_CH ENC_SLASH_CH
+/* The range of reserved chars as it appears in a regex */
+#define RESERVED_RANGE_RX RESERVED_FROM "-" RESERVED_TO
+/* The equivalent of "." in a regexp for display */
+#define RESERVED_DOT_RX "[^" RESERVED_RANGE_RX "\n]"
+
+/* The length of the string S encoded */
+#define ENCLEN(s) ((s) == NULL ? strlen(ENC_NULL) : strlen(s))
+#define ENCSTR(s) ((s) == NULL ? ENC_NULL : s)
+
+/* helper to access first and last child */
+#define child_first(l) (l)->children[0]
+#define child_last(l) (l)->children[(l)->nchildren - 1]
+
+/* Format an encoded level as
+ * { key1 = value1 } { key2 = value2 } .. { keyN = valueN }
+ */
+char *enc_format(const char *e, size_t len);
+/* Format an encoded level similar to ENC_FORMAT, but put each tree node
+ * on a new line indented by INDENT spaces. If INDENT is negative, produce the
+ * same output as ENC_FORMAT
+ * { key1 = value1 } { key2 = value2 } .. { keyN = valueN }
+ */
+char *enc_format_indent(const char *e, size_t len, int indent);
+
+#if ENABLE_DEBUG
+void dump_lens_tree(struct lens *lens);
+void dump_lens(FILE *out, struct lens *lens);
+#endif
+
+#endif
+
+
+/*
+ * Local variables:
+ * indent-tabs-mode: nil
+ * c-indent-level: 4
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
--- /dev/null
+/* Scanner for config specs -*- C -*- */
+%option 8bit never-interactive yylineno
+%option bison-bridge bison-locations
+%option reentrant noyywrap
+%option warn nodefault
+%option outfile="lex.yy.c" prefix="augl_"
+%option noinput nounput
+
+%top{
+/* config.h must precede flex's inclusion of <stdio.h>
+ in order for its _GNU_SOURCE definition to take effect. */
+#include <config.h>
+}
+
+%{
+#include "syntax.h"
+#include "errcode.h"
+
+typedef struct info YYLTYPE;
+#define YYLTYPE_IS_DECLARED 1
+
+#include "parser.h"
+
+/* Advance by NUM lines. */
+# define LOCATION_LINES(Loc, Num) \
+ (Loc).last_column = 0; \
+ (Loc).last_line += Num;
+
+/* Restart: move the first cursor to the last position. */
+# define LOCATION_STEP(Loc) \
+ (Loc).first_column = (Loc).last_column; \
+ (Loc).first_line = (Loc).last_line;
+
+/* The lack of reference counting for filename is intentional */
+#define YY_USER_ACTION \
+ do { \
+ yylloc->last_column += yyleng; \
+ yylloc->filename = augl_get_info(yyscanner)->filename; \
+ yylloc->error = augl_get_info(yyscanner)->error; \
+ } while(0);
+
+#define YY_USER_INIT LOCATION_STEP(*yylloc)
+
+#define YY_EXTRA_TYPE struct state *
+
+int augl_init_lexer(struct state *state, yyscan_t * scanner);
+void augl_close_lexer(yyscan_t *scanner);
+struct info *augl_get_info(yyscan_t yyscanner);
+
+static void loc_update(YYLTYPE *yylloc, const char *s, int len) {
+ for (int i=0; i < len; i++) {
+ if (s[i] == '\n') {
+ LOCATION_LINES(*yylloc, 1);
+ }
+ }
+}
+
+static char *regexp_literal(const char *s, int len) {
+ char *u = unescape(s, len, RX_ESCAPES);
+
+ if (u == NULL)
+ return NULL;
+
+ size_t u_len = strlen(u);
+ regexp_c_locale(&u, &u_len);
+
+ return u;
+}
+%}
+
+DIGIT [0-9]
+UID [A-Z][A-Za-z0-9_]*
+LID [a-z_][A-Za-z0-9_]*
+LETREC let[ \t]+rec
+WS [ \t\n]
+QID {UID}\.{LID}
+ARROW ->
+
+%s COMMENT
+
+%%
+<*>
+{
+ [ \t]* LOCATION_STEP(*yylloc);
+ \n+ LOCATION_LINES(*yylloc, yyleng); LOCATION_STEP(*yylloc);
+ (\r\n)+ LOCATION_LINES(*yylloc, yyleng/2); LOCATION_STEP(*yylloc);
+}
+
+<INITIAL>
+{
+ \"([^\\\"]|\\(.|\n))*\" {
+ loc_update(yylloc, yytext, yyleng);
+ yylval->string = unescape(yytext+1, yyleng-2, STR_ESCAPES);
+ return DQUOTED;
+ }
+
+ \/([^\\\/]|\\(.|\n))*\/i {
+ loc_update(yylloc, yytext, yyleng);
+ yylval->regexp.nocase = 1;
+ yylval->regexp.pattern = regexp_literal(yytext+1, yyleng-3);
+ return REGEXP;
+ }
+
+ \/([^\\\/]|\\(.|\n))*\/ {
+ loc_update(yylloc, yytext, yyleng);
+ yylval->regexp.nocase = 0;
+ yylval->regexp.pattern = regexp_literal(yytext+1, yyleng-2);
+ return REGEXP;
+ }
+
+ [|*?+()=:;\.\[\]{}-] return yytext[0];
+
+ module return KW_MODULE;
+
+ {LETREC}/{WS} return KW_LET_REC;
+
+ let return KW_LET;
+ string return KW_STRING;
+ regexp return KW_REGEXP;
+ lens return KW_LENS;
+ in return KW_IN;
+ autoload return KW_AUTOLOAD;
+
+ /* tests */
+ test return KW_TEST;
+ get return KW_GET;
+ put return KW_PUT;
+ after return KW_AFTER;
+
+ {ARROW} return ARROW;
+
+ {QID} {
+ yylval->string = strndup(yytext, yyleng);
+ return QIDENT;
+ }
+ {LID} {
+ yylval->string = strndup(yytext, yyleng);
+ return LIDENT;
+ }
+ {UID} {
+ yylval->string = strndup(yytext, yyleng);
+ return UIDENT;
+ }
+ \(\* {
+ augl_get_extra(yyscanner)->comment_depth = 1;
+ BEGIN(COMMENT);
+ }
+ . {
+ report_error(augl_get_info(yyscanner)->error, AUG_ESYNTAX,
+ "%s:%d:%d: Unexpected character %c",
+ augl_get_info(yyscanner)->filename->str,
+ yylineno, yylloc->first_column, yytext[0]);
+ }
+
+ <<EOF>> {
+ augl_close_lexer(yyscanner);
+ yyterminate();
+ }
+
+}
+
+<COMMENT>
+{
+ \(\* {
+ augl_get_extra(yyscanner)->comment_depth += 1;
+ }
+ \*\) {
+ augl_get_extra(yyscanner)->comment_depth -= 1;
+ if (augl_get_extra(yyscanner)->comment_depth == 0)
+ BEGIN(INITIAL);
+ }
+ . /* Skip */;
+ <<EOF>> {
+ report_error(augl_get_info(yyscanner)->error, AUG_ESYNTAX,
+ "%s:%d:%d: Missing *)",
+ augl_get_info(yyscanner)->filename->str,
+ yylineno, yylloc->first_column);
+ augl_close_lexer(yyscanner);
+ yyterminate();
+ }
+}
+%%
+
+void augl_close_lexer(yyscan_t *scanner) {
+ FILE *fp = augl_get_in(scanner);
+
+ if (fp != NULL) {
+ fclose(fp);
+ augl_set_in(NULL, scanner);
+ }
+}
+
+int augl_init_lexer(struct state *state, yyscan_t *scanner) {
+ FILE *f;
+ struct string *name = state->info->filename;
+
+ f = fopen(name->str, "r");
+ if (f == NULL)
+ return -1;
+
+ if (augl_lex_init(scanner) != 0) {
+ fclose(f);
+ return -1;
+ }
+ augl_set_extra(state, *scanner);
+ augl_set_in(f, *scanner);
+ return 0;
+}
+
+struct info *augl_get_info(yyscan_t scanner) {
+ return augl_get_extra(scanner)->info;
+}
--- /dev/null
+/*
+ * list.h: Simple generic list manipulation
+ *
+ * Copyright (C) 2007-2016 David Lutterkort
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: David Lutterkort <dlutter@redhat.com>
+ */
+
+#ifndef LIST_H_
+#define LIST_H_
+
+#define list_append(head, tail) \
+ do { \
+ if ((head) == NULL) { \
+ head = tail; \
+ break; \
+ } \
+ typeof(head) _p; \
+ for (_p = (head); _p->next != NULL; _p = _p->next); \
+ _p->next = (tail); \
+ } while (0)
+
+#define list_for_each(iter, list) \
+ for (typeof(list) (iter) = list; (iter) != NULL; (iter) = (iter)->next)
+
+#define list_remove(elt, list) \
+ do { \
+ typeof(elt) _e = (elt); \
+ if (_e == (list)) { \
+ (list) = _e->next; \
+ } else { \
+ typeof(_e) _p; \
+ for (_p = (list); _p != NULL && _p->next != _e; _p = _p->next); \
+ if (_p != NULL) \
+ _p->next = _e->next; \
+ } \
+ _e->next = NULL; \
+ } while(0)
+
+/* Insert NEW in list LIST before element ELT. NEW->next must be NULL,
+ and ELT must really be on LIST, otherwise chaos will ensue
+*/
+#define list_insert_before(new, elt, list) \
+ do { \
+ if ((list) == NULL) { \
+ (list) = (new); \
+ } else if ((elt) == (list)) { \
+ (new)->next = (elt); \
+ (list) = (new); \
+ } else { \
+ typeof(elt) _p; \
+ for (_p = (list); _p != NULL && _p->next != (elt); _p = _p->next); \
+ if (_p != NULL) { \
+ (new)->next = (elt); \
+ _p->next = (new); \
+ } \
+ } \
+ } while(0)
+
+#define list_free(list) \
+ while ((list) != NULL) { \
+ typeof(list) _p = list; \
+ (list) = (list)->next; \
+ free((void *) _p); \
+ }
+
+#define list_length(len, list) \
+ do { \
+ typeof(list) _p; \
+ for (len=0, _p = (list); _p != NULL; len += 1, _p = _p->next); \
+ } while(0)
+
+/* Make ELT the new head of LIST and set LIST to it */
+#define list_cons(list, elt) \
+ do { \
+ typeof(elt) _e = (elt); \
+ _e->next = (list); \
+ (list) = _e; \
+ } while(0)
+
+#define list_reverse(list) \
+ do { \
+ typeof(list) _head = (list); \
+ typeof(list) _prev = NULL; \
+ while (_head != NULL) { \
+ typeof(list) _next = _head->next; \
+ _head->next = _prev; \
+ _prev = _head; \
+ _head = _next; \
+ } \
+ (list) = _prev; \
+ } while (0)
+
+/* Append ELT to the end of LIST. TAIL must be NULL or a pointer to
+ the last element of LIST. ELT may also be a list
+*/
+#define list_tail_cons(list, tail, elt) \
+ do { \
+ /* Append ELT at the end of LIST */ \
+ if ((list) == NULL) { \
+ (list) = (elt); \
+ } else { \
+ if ((tail) == NULL) \
+ for ((tail) = (list); (tail)->next != NULL; \
+ (tail) = (tail)->next); \
+ (tail)->next = (elt); \
+ } \
+ /* Make sure TAIL is the last element on the combined LIST */ \
+ (tail) = (elt); \
+ if ((tail) != NULL) \
+ while ((tail)->next != NULL) \
+ (tail) = (tail)->next; \
+ } while(0)
+
+#endif
+
+/*
+ * Local variables:
+ * indent-tabs-mode: nil
+ * c-indent-level: 4
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
--- /dev/null
+/*
+ * memory.c: safer memory allocation
+ *
+ * Copyright (C) 2008-2016 Daniel P. Berrange
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ */
+
+#include <config.h>
+
+#include <stdlib.h>
+#include <stddef.h>
+#include <errno.h>
+
+#include "memory.h"
+
+
+/* Return 1 if an array of N objects, each of size S, cannot exist due
+ to size arithmetic overflow. S must be positive and N must be
+ nonnegative. This is a macro, not an inline function, so that it
+ works correctly even when SIZE_MAX < N.
+
+ By gnulib convention, SIZE_MAX represents overflow in size
+ calculations, so the conservative dividend to use here is
+ SIZE_MAX - 1, since SIZE_MAX might represent an overflowed value.
+ However, malloc (SIZE_MAX) fails on all known hosts where
+ sizeof (ptrdiff_t) <= sizeof (size_t), so do not bother to test for
+ exactly-SIZE_MAX allocations on such hosts; this avoids a test and
+ branch when S is known to be 1. */
+# define xalloc_oversized(n, s) \
+ ((size_t) (sizeof (ptrdiff_t) <= sizeof (size_t) ? -1 : -2) / (s) < (n))
+
+
+/**
+ * mem_alloc_n:
+ * @ptrptr: pointer to pointer for address of allocated memory
+ * @size: number of bytes to allocate
+ * @count: number of elements to allocate
+ *
+ * Allocate an array of memory 'count' elements long,
+ * each with 'size' bytes. Return the address of the
+ * allocated memory in 'ptrptr'. The newly allocated
+ * memory is filled with zeros.
+ *
+ * Returns -1 on failure to allocate, zero on success
+ */
+int mem_alloc_n(void *ptrptr, size_t size, size_t count)
+{
+ if (AUGEAS_UNLIKELY(size == 0 || count == 0)) {
+ *(void **)ptrptr = NULL;
+ return 0;
+ }
+
+ *(void**)ptrptr = calloc(count, size);
+ if (AUGEAS_UNLIKELY(*(void**)ptrptr == NULL))
+ return -1;
+ return 0;
+}
+
+/**
+ * mem_realloc_n:
+ * @ptrptr: pointer to pointer for address of allocated memory
+ * @size: number of bytes to allocate
+ * @count: number of elements in array
+ *
+ * Resize the block of memory in 'ptrptr' to be an array of
+ * 'count' elements, each 'size' bytes in length. Update 'ptrptr'
+ * with the address of the newly allocated memory. On failure,
+ * 'ptrptr' is not changed and still points to the original memory
+ * block. The newly allocated memory is filled with zeros.
+ *
+ * Returns -1 on failure to allocate, zero on success
+ */
+int mem_realloc_n(void *ptrptr, size_t size, size_t count)
+{
+ void *tmp;
+ if (AUGEAS_UNLIKELY(size == 0 || count == 0)) {
+ free(*(void **)ptrptr);
+ *(void **)ptrptr = NULL;
+ return 0;
+ }
+ if (AUGEAS_UNLIKELY(xalloc_oversized(count, size))) {
+ errno = ENOMEM;
+ return -1;
+ }
+ tmp = realloc(*(void**)ptrptr, size * count);
+ if (AUGEAS_UNLIKELY(!tmp))
+ return -1;
+ *(void**)ptrptr = tmp;
+ return 0;
+}
--- /dev/null
+/*
+ * memory.h: safer memory allocation
+ *
+ * Copyright (C) 2008-2016 Daniel P. Berrange
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ */
+
+
+#ifndef MEMORY_H_
+#define MEMORY_H_
+
+#include "internal.h"
+
+/* Don't call these directly - use the macros below */
+int mem_alloc_n(void *ptrptr, size_t size, size_t count) ATTRIBUTE_RETURN_CHECK;
+int mem_realloc_n(void *ptrptr, size_t size, size_t count) ATTRIBUTE_RETURN_CHECK;
+
+
+/**
+ * ALLOC:
+ * @ptr: pointer to hold address of allocated memory
+ *
+ * Allocate sizeof(*ptr) bytes of memory and store
+ * the address of allocated memory in 'ptr'. Fill the
+ * newly allocated memory with zeros.
+ *
+ * Returns -1 on failure, 0 on success
+ */
+#define ALLOC(ptr) mem_alloc_n(&(ptr), sizeof(*(ptr)), 1)
+
+/**
+ * ALLOC_N:
+ * @ptr: pointer to hold address of allocated memory
+ * @count: number of elements to allocate
+ *
+ * Allocate an array of 'count' elements, each sizeof(*ptr)
+ * bytes long and store the address of allocated memory in
+ * 'ptr'. Fill the newly allocated memory with zeros.
+ *
+ * Returns -1 on failure, 0 on success
+ */
+#define ALLOC_N(ptr, count) mem_alloc_n(&(ptr), sizeof(*(ptr)), (count))
+
+/**
+ * REALLOC_N:
+ * @ptr: pointer to hold address of allocated memory
+ * @count: number of elements to allocate
+ *
+ * Re-allocate an array of 'count' elements, each sizeof(*ptr)
+ * bytes long and store the address of allocated memory in
+ * 'ptr'. Fill the newly allocated memory with zeros
+ *
+ * Returns -1 on failure, 0 on success
+ */
+#define REALLOC_N(ptr, count) mem_realloc_n(&(ptr), sizeof(*(ptr)), (count))
+
+/**
+ * FREE:
+ * @ptr: pointer holding address to be freed
+ *
+ * Free the memory stored in 'ptr' and update to point
+ * to NULL.
+ */
+#define FREE(ptr) \
+ do { \
+ free(ptr); \
+ (ptr) = NULL; \
+ } while(0)
+
+#endif /* MEMORY_H_ */
--- /dev/null
+%{
+
+#include <config.h>
+
+#include "internal.h"
+#include "syntax.h"
+#include "list.h"
+#include "errcode.h"
+#include <stdio.h>
+
+/* Work around a problem on FreeBSD where Bison looks for _STDLIB_H
+ * to see if stdlib.h has been included, but the system includes
+ * use _STDLIB_H_
+ */
+#if HAVE_STDLIB_H && ! defined _STDLIB_H
+# include <stdlib.h>
+# define _STDLIB_H 1
+#endif
+
+#define YYDEBUG 1
+
+int augl_parse_file(struct augeas *aug, const char *name, struct term **term);
+
+typedef void *yyscan_t;
+typedef struct info YYLTYPE;
+#define YYLTYPE_IS_DECLARED 1
+/* The lack of reference counting on filename is intentional */
+# define YYLLOC_DEFAULT(Current, Rhs, N) \
+ do { \
+ (Current).filename = augl_get_info(scanner)->filename; \
+ (Current).error = augl_get_info(scanner)->error; \
+ if (N) { \
+ (Current).first_line = YYRHSLOC (Rhs, 1).first_line; \
+ (Current).first_column = YYRHSLOC (Rhs, 1).first_column; \
+ (Current).last_line = YYRHSLOC (Rhs, N).last_line; \
+ (Current).last_column = YYRHSLOC (Rhs, N).last_column; \
+ } else { \
+ (Current).first_line = (Current).last_line = \
+ YYRHSLOC (Rhs, 0).last_line; \
+ (Current).first_column = (Current).last_column = \
+ YYRHSLOC (Rhs, 0).last_column; \
+ } \
+ } while (0)
+%}
+
+%code provides {
+#include "info.h"
+
+/* Track custom scanner state */
+struct state {
+ struct info *info;
+ unsigned int comment_depth;
+};
+
+}
+
+%locations
+%error-verbose
+%name-prefix "augl_"
+%defines
+%pure-parser
+%parse-param {struct term **term}
+%parse-param {yyscan_t scanner}
+%lex-param {yyscan_t scanner}
+
+%initial-action {
+ @$.first_line = 1;
+ @$.first_column = 0;
+ @$.last_line = 1;
+ @$.last_column = 0;
+ @$.filename = augl_get_info(scanner)->filename;
+ @$.error = augl_get_info(scanner)->error;
+};
+
+%token <string> DQUOTED /* "foo" */
+%token <regexp> REGEXP /* /[ \t]+/ */
+%token <string> LIDENT UIDENT QIDENT
+%token ARROW
+
+/* Keywords */
+%token KW_MODULE
+%token KW_AUTOLOAD
+%token KW_LET KW_LET_REC KW_IN
+%token KW_STRING
+%token KW_REGEXP
+%token KW_LENS
+%token KW_TEST KW_GET KW_PUT KW_AFTER
+
+%union {
+ struct term *term;
+ struct type *type;
+ struct ident *ident;
+ struct tree *tree;
+ char *string;
+ struct {
+ int nocase;
+ char *pattern;
+ } regexp;
+ int intval;
+ enum quant_tag quant;
+}
+
+%type<term> start decls
+%type<term> exp composeexp unionexp minusexp catexp appexp rexp aexp
+%type<term> param param_list
+%type<string> qid id autoload
+%type<type> type atype
+%type<quant> rep
+%type<term> test_exp
+%type<intval> test_special_res
+%type<tree> tree_const tree_const2 tree_branch
+%type<string> tree_label
+
+%{
+/* Lexer */
+extern int augl_lex(YYSTYPE *yylval_param, struct info *yylloc_param,
+                    yyscan_t yyscanner);
+int augl_init_lexer(struct state *state, yyscan_t * scanner);
+void augl_close_lexer(yyscan_t *scanner);
+int augl_lex_destroy (yyscan_t yyscanner );
+int augl_get_lineno (yyscan_t yyscanner );
+int augl_get_column (yyscan_t yyscanner);
+struct info *augl_get_info(yyscan_t yyscanner);
+char *augl_get_text (yyscan_t yyscanner );
+
+static void augl_error(struct info *locp, struct term **term,
+ yyscan_t scanner, const char *s);
+
+/* TERM construction */
+ static struct info *clone_info(struct info *locp);
+ static struct term *make_module(char *ident, char *autoload,
+ struct term *decls,
+ struct info *locp);
+
+ static struct term *make_bind(char *ident, struct term *params,
+ struct term *exp, struct term *decls,
+ struct info *locp);
+ static struct term *make_bind_rec(char *ident, struct term *exp,
+ struct term *decls, struct info *locp);
+ static struct term *make_let(char *ident, struct term *params,
+ struct term *exp, struct term *body,
+ struct info *locp);
+ static struct term *make_binop(enum term_tag tag,
+ struct term *left, struct term *right,
+ struct info *locp);
+ static struct term *make_unop(enum term_tag tag,
+ struct term *exp, struct info *locp);
+ static struct term *make_ident(char *qname, struct info *locp);
+ static struct term *make_unit_term(struct info *locp);
+ static struct term *make_string_term(char *value, struct info *locp);
+ static struct term *make_regexp_term(char *pattern,
+ int nocase, struct info *locp);
+ static struct term *make_rep(struct term *exp, enum quant_tag quant,
+ struct info *locp);
+
+ static struct term *make_get_test(struct term *lens, struct term *arg,
+ struct info *info);
+ static struct term *make_put_test(struct term *lens, struct term *arg,
+ struct term *cmds, struct info *info);
+ static struct term *make_test(struct term *test, struct term *result,
+ enum test_result_tag tr_tag,
+ struct term *decls, struct info *locp);
+ static struct term *make_tree_value(struct tree *, struct info*);
+ static struct tree *tree_concat(struct tree *, struct tree *);
+
+#define LOC_MERGE(a, b, c) \
+ do { \
+ (a).filename = (b).filename; \
+ (a).first_line = (b).first_line; \
+ (a).first_column = (b).first_column; \
+ (a).last_line = (c).last_line; \
+ (a).last_column = (c).last_column; \
+ (a).error = (b).error; \
+ } while(0)
+
+%}
+
+%%
+
+start: KW_MODULE UIDENT '=' autoload decls
+ { (*term) = make_module($2, $4, $5, &@1); }
+
+autoload: KW_AUTOLOAD LIDENT
+ { $$ = $2; }
+ | /* empty */
+ { $$ = NULL; }
+
+decls: KW_LET LIDENT param_list '=' exp decls
+ {
+ LOC_MERGE(@1, @1, @5);
+ $$ = make_bind($2, $3, $5, $6, &@1);
+ }
+ | KW_LET_REC LIDENT '=' exp decls
+ {
+ LOC_MERGE(@1, @1, @4);
+ $$ = make_bind_rec($2, $4, $5, &@1);
+ }
+ | KW_TEST test_exp '=' exp decls
+ {
+ LOC_MERGE(@1, @1, @4);
+ $$ = make_test($2, $4, TR_CHECK, $5, &@1);
+ }
+ | KW_TEST test_exp '=' test_special_res decls
+ {
+ LOC_MERGE(@1, @1, @4);
+ $$ = make_test($2, NULL, $4, $5, &@1);
+ }
+ | /* epsilon */
+ { $$ = NULL; }
+
+/* Test expressions and results */
+
+test_exp: aexp KW_GET exp
+ { $$ = make_get_test($1, $3, &@$); }
+ | aexp KW_PUT aexp KW_AFTER exp
+ { $$ = make_put_test($1, $3, $5, &@$); }
+
+test_special_res: '?'
+ { $$ = TR_PRINT; }
+ | '*'
+ { $$ = TR_EXN; }
+
+/* General expressions */
+exp: KW_LET LIDENT param_list '=' exp KW_IN exp
+ {
+ LOC_MERGE(@1, @1, @6);
+ $$ = make_let($2, $3, $5, $7, &@1);
+ }
+ | composeexp
+
+composeexp: composeexp ';' unionexp
+ { $$ = make_binop(A_COMPOSE, $1, $3, &@$); }
+ | unionexp
+ { $$ = $1; }
+
+unionexp: unionexp '|' minusexp
+ { $$ = make_binop(A_UNION, $1, $3, &@$); }
+ | minusexp
+ { $$ = $1; }
+ | tree_const
+ { $$ = make_tree_value($1, &@1); }
+
+minusexp: minusexp '-' catexp
+ { $$ = make_binop(A_MINUS, $1, $3, &@$); }
+ | catexp
+ { $$ = $1; }
+
+catexp: catexp '.' appexp
+{ $$ = make_binop(A_CONCAT, $1, $3, &@$); }
+ | appexp
+{ $$ = $1; }
+
+appexp: appexp rexp
+ { $$ = make_binop(A_APP, $1, $2, &@$); }
+ | rexp
+ { $$ = $1; }
+
+aexp: qid
+ { $$ = make_ident($1, &@1); }
+ | DQUOTED
+ { $$ = make_string_term($1, &@1); }
+ | REGEXP
+ { $$ = make_regexp_term($1.pattern, $1.nocase, &@1); }
+ | '(' exp ')'
+ { $$ = $2; }
+ | '[' exp ']'
+ { $$ = make_unop(A_BRACKET, $2, &@$); }
+ | '(' ')'
+ { $$ = make_unit_term(&@$); }
+
+rexp: aexp rep
+ { $$ = make_rep($1, $2, &@$); }
+ | aexp
+ { $$ = $1; }
+
+rep: '*'
+ { $$ = Q_STAR; }
+ | '+'
+ { $$ = Q_PLUS; }
+ | '?'
+ { $$ = Q_MAYBE; }
+
+qid: LIDENT
+ { $$ = $1; }
+ | QIDENT
+ { $$ = $1; }
+ | KW_GET
+ { $$ = strdup("get"); }
+ | KW_PUT
+ { $$ = strdup("put"); }
+
+param_list: param param_list
+ { $$ = $2; list_cons($$, $1); }
+ | /* epsilon */
+ { $$ = NULL; }
+
+param: '(' id ':' type ')'
+ { $$ = make_param($2, $4, clone_info(&@1)); }
+
+id: LIDENT
+ { $$ = $1; }
+ | KW_GET
+ { $$ = strdup("get"); }
+ | KW_PUT
+ { $$ = strdup("put"); }
+
+type: atype ARROW type
+ { $$ = make_arrow_type($1, $3); }
+ | atype
+ { $$ = $1; }
+
+atype: KW_STRING
+ { $$ = make_base_type(T_STRING); }
+ | KW_REGEXP
+ { $$ = make_base_type(T_REGEXP); }
+ | KW_LENS
+ { $$ = make_base_type(T_LENS); }
+ | '(' type ')'
+ { $$ = $2; }
+
+tree_const: tree_const '{' tree_branch '}'
+ { $$ = tree_concat($1, $3); }
+ | '{' tree_branch '}'
+ { $$ = tree_concat($2, NULL); }
+
+tree_const2: tree_const2 '{' tree_branch '}'
+ {
+ $$ = tree_concat($1, $3);
+ }
+ | /* empty */
+ { $$ = NULL; }
+
+tree_branch: tree_label tree_const2
+ {
+ $$ = make_tree($1, NULL, NULL, $2);
+ }
+ | tree_label '=' DQUOTED tree_const2
+ {
+ $$ = make_tree($1, $3, NULL, $4);
+ }
+tree_label: DQUOTED
+ | /* empty */
+ { $$ = NULL; }
+%%
+
+int augl_parse_file(struct augeas *aug, const char *name,
+ struct term **term) {
+ yyscan_t scanner;
+ struct state state;
+ struct string *sname = NULL;
+ struct info info;
+ int result = -1;
+ int r;
+
+ *term = NULL;
+
+ r = make_ref(sname);
+ ERR_NOMEM(r < 0, aug);
+
+ sname->str = strdup(name);
+ ERR_NOMEM(sname->str == NULL, aug);
+
+ MEMZERO(&info, 1);
+ info.ref = UINT_MAX;
+ info.filename = sname;
+ info.error = aug->error;
+
+ MEMZERO(&state, 1);
+ state.info = &info;
+ state.comment_depth = 0;
+
+ if (augl_init_lexer(&state, &scanner) < 0) {
+ augl_error(&info, term, NULL, "file not found");
+ goto error;
+ }
+
+ yydebug = getenv("YYDEBUG") != NULL;
+ r = augl_parse(term, scanner);
+ augl_close_lexer(scanner);
+ augl_lex_destroy(scanner);
+ if (r == 1) {
+ augl_error(&info, term, NULL, "syntax error");
+ goto error;
+ } else if (r == 2) {
+ augl_error(&info, term, NULL, "parser ran out of memory");
+ ERR_NOMEM(1, aug);
+ }
+ result = 0;
+
+ error:
+ unref(sname, string);
+ // free TERM
+ return result;
+}
+
+// FIXME: Nothing here checks for alloc errors.
+static struct info *clone_info(struct info *locp) {
+ struct info *info;
+ make_ref(info);
+ info->filename = ref(locp->filename);
+ info->first_line = locp->first_line;
+ info->first_column = locp->first_column;
+ info->last_line = locp->last_line;
+ info->last_column = locp->last_column;
+ info->error = locp->error;
+ return info;
+}
+
+static struct term *make_term_locp(enum term_tag tag, struct info *locp) {
+ struct info *info = clone_info(locp);
+ return make_term(tag, info);
+}
+
+static struct term *make_module(char *ident, char *autoload,
+ struct term *decls,
+ struct info *locp) {
+ struct term *term = make_term_locp(A_MODULE, locp);
+ term->mname = ident;
+ term->autoload = autoload;
+ term->decls = decls;
+ return term;
+}
+
+static struct term *make_bind(char *ident, struct term *params,
+ struct term *exp, struct term *decls,
+ struct info *locp) {
+ struct term *term = make_term_locp(A_BIND, locp);
+ if (params != NULL)
+ exp = build_func(params, exp);
+
+ term->bname = ident;
+ term->exp = exp;
+ list_cons(decls, term);
+ return decls;
+}
+
+static struct term *make_bind_rec(char *ident, struct term *exp,
+ struct term *decls, struct info *locp) {
+ /* Desugar let rec IDENT = EXP as
+ * let IDENT =
+ * let RLENS = (lns_make_rec) in
+ * lns_check_rec ((lambda IDENT: EXP) RLENS) RLENS
+ * where RLENS is a brand-new recursive lens.
+ *
+ * That only works since we know that 'let rec' is only defined for lenses,
+ * not general-purpose functions, i.e. we know that IDENT has type 'lens'.
+ *
+ * The point of all this is that we make it possible to put a recursive
+ * lens (which is a placeholder for the actual recursion) into arbitrary
+ * places in some bigger lens and then have LNS_CHECK_REC rattle through
+ * to do the special-purpose typechecking.
+ */
+ char *id;
+ struct info *info = exp->info;
+ struct term *lambda = NULL, *rlens = NULL;
+ struct term *app1 = NULL, *app2 = NULL, *app3 = NULL;
+
+ id = strdup(ident);
+ if (id == NULL) goto error;
+
+ lambda = make_param(id, make_base_type(T_LENS), ref(info));
+ if (lambda == NULL) goto error;
+ id = NULL;
+
+ build_func(lambda, exp);
+
+ rlens = make_term(A_VALUE, ref(exp->info));
+ if (rlens == NULL) goto error;
+ rlens->value = lns_make_rec(ref(exp->info));
+ if (rlens->value == NULL) goto error;
+ rlens->type = make_base_type(T_LENS);
+
+ app1 = make_app_term(lambda, rlens, ref(info));
+ if (app1 == NULL) goto error;
+
+ id = strdup(LNS_CHECK_REC_NAME);
+ if (id == NULL) goto error;
+ app2 = make_app_ident(id, app1, ref(info));
+ if (app2 == NULL) goto error;
+ id = NULL;
+
+ app3 = make_app_term(app2, ref(rlens), ref(info));
+ if (app3 == NULL) goto error;
+
+ return make_bind(ident, NULL, app3, decls, locp);
+
+ error:
+ free(id);
+ unref(lambda, term);
+ unref(rlens, term);
+ unref(app1, term);
+ unref(app2, term);
+ unref(app3, term);
+ return NULL;
+}
+
+static struct term *make_let(char *ident, struct term *params,
+ struct term *exp, struct term *body,
+ struct info *locp) {
+ /* let f (x:string) = "f " . x in
+ f "a" . f "b" */
+ /* (lambda f: f "a" . f "b") (lambda x: "f " . x) */
+ /* Desugar as (lambda IDENT: BODY) (lambda PARAMS: EXP) */
+ struct term *term = make_term_locp(A_LET, locp);
+ struct term *p = make_param(ident, NULL, ref(term->info));
+ term->left = build_func(p, body);
+ if (params != NULL)
+ term->right = build_func(params, exp);
+ else
+ term->right = exp;
+ return term;
+}
+
+static struct term *make_binop(enum term_tag tag,
+ struct term *left, struct term *right,
+ struct info *locp) {
+ assert(tag == A_COMPOSE || tag == A_CONCAT
+ || tag == A_UNION || tag == A_APP || tag == A_MINUS);
+ struct term *term = make_term_locp(tag, locp);
+ term->left = left;
+ term->right = right;
+ return term;
+}
+
+static struct term *make_unop(enum term_tag tag, struct term *exp,
+ struct info *locp) {
+ assert(tag == A_BRACKET);
+ struct term *term = make_term_locp(tag, locp);
+ term->brexp = exp;
+ return term;
+}
+
+static struct term *make_ident(char *qname, struct info *locp) {
+ struct term *term = make_term_locp(A_IDENT, locp);
+ term->ident = make_string(qname);
+ return term;
+}
+
+static struct term *make_unit_term(struct info *locp) {
+ struct term *term = make_term_locp(A_VALUE, locp);
+ term->value = make_unit(ref(term->info));
+ return term;
+}
+
+static struct term *make_string_term(char *value, struct info *locp) {
+ struct term *term = make_term_locp(A_VALUE, locp);
+ term->value = make_value(V_STRING, ref(term->info));
+ term->value->string = make_string(value);
+ return term;
+}
+
+static struct term *make_regexp_term(char *pattern, int nocase,
+ struct info *locp) {
+ struct term *term = make_term_locp(A_VALUE, locp);
+ term->value = make_value(V_REGEXP, ref(term->info));
+ term->value->regexp = make_regexp(term->info, pattern, nocase);
+ return term;
+}
+
+static struct term *make_rep(struct term *exp, enum quant_tag quant,
+ struct info *locp) {
+ struct term *term = make_term_locp(A_REP, locp);
+ term->quant = quant;
+ term->exp = exp;
+ return term;
+}
+
+static struct term *make_get_test(struct term *lens, struct term *arg,
+ struct info *locp) {
+ /* Return a term for "get" LENS ARG */
+ struct info *info = clone_info(locp);
+ struct term *term = make_app_ident(strdup("get"), lens, info);
+ term = make_app_term(term, arg, ref(info));
+ return term;
+}
+
+static struct term *make_put_test(struct term *lens, struct term *arg,
+ struct term *cmds, struct info *locp) {
+ /* Return a term for "put" LENS (CMDS ("get" LENS ARG)) ARG */
+ struct term *term = make_get_test(lens, arg, locp);
+ term = make_app_term(cmds, term, ref(term->info));
+ struct term *put = make_app_ident(strdup("put"), ref(lens), ref(term->info));
+ put = make_app_term(put, term, ref(term->info));
+ put = make_app_term(put, ref(arg), ref(term->info));
+ return put;
+}
+
+static struct term *make_test(struct term *test, struct term *result,
+ enum test_result_tag tr_tag,
+ struct term *decls, struct info *locp) {
+ struct term *term = make_term_locp(A_TEST, locp);
+ term->tr_tag = tr_tag;
+ term->test = test;
+ term->result = result;
+ term->next = decls;
+ return term;
+}
+
+static struct term *make_tree_value(struct tree *tree, struct info *locp) {
+ struct term *term = make_term_locp(A_VALUE, locp);
+ struct value *value = make_value(V_TREE, ref(term->info));
+ value->origin = make_tree_origin(tree);
+ term->value = value;
+ return term;
+}
+
+static struct tree *tree_concat(struct tree *t1, struct tree *t2) {
+ if (t2 != NULL)
+ list_append(t1, t2);
+ return t1;
+}
+
+void augl_error(struct info *locp,
+ struct term **term,
+ yyscan_t scanner,
+ const char *s) {
+ struct info info;
+ struct string string;
+ MEMZERO(&info, 1);
+ info.ref = string.ref = UINT_MAX;
+ info.filename = &string;
+
+ if (locp != NULL) {
+ info.first_line = locp->first_line;
+ info.first_column = locp->first_column;
+ info.last_line = locp->last_line;
+ info.last_column = locp->last_column;
+ info.filename->str = locp->filename->str;
+ info.error = locp->error;
+ } else if (scanner != NULL) {
+ info.first_line = augl_get_lineno(scanner);
+ info.first_column = augl_get_column(scanner);
+ info.last_line = augl_get_lineno(scanner);
+ info.last_column = augl_get_column(scanner);
+ info.filename = augl_get_info(scanner)->filename;
+ info.error = augl_get_info(scanner)->error;
+ } else if (*term != NULL && (*term)->info != NULL) {
+ memcpy(&info, (*term)->info, sizeof(info));
+ } else {
+ info.first_line = info.last_line = 0;
+ info.first_column = info.last_column = 0;
+ }
+ syntax_error(&info, "%s", s);
+}
--- /dev/null
+/*
+ * pathx.c: handling path expressions
+ *
+ * Copyright (C) 2007-2016 David Lutterkort
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: David Lutterkort <dlutter@redhat.com>
+ */
+
+#include <config.h>
+#include <internal.h>
+#include <stdint.h>
+#include <stdbool.h>
+#include <memory.h>
+#include <ctype.h>
+
+#include "ref.h"
+#include "regexp.h"
+#include "errcode.h"
+
+static const char *const errcodes[] = {
+ "no error",
+ "empty name",
+ "illegal string literal",
+ "illegal number", /* PATHX_ENUMBER */
+ "string missing ending ' or \"",
+ "expected '='",
+ "allocation failed",
+ "unmatched '['",
+ "unmatched '('",
+ "expected a '/'",
+ "internal error", /* PATHX_EINTERNAL */
+ "type error", /* PATHX_ETYPE */
+ "undefined variable", /* PATHX_ENOVAR */
+ "garbage at end of path expression", /* PATHX_EEND */
+ "no match for path expression", /* PATHX_ENOMATCH */
+ "wrong number of arguments in function call", /* PATHX_EARITY */
+ "invalid regular expression", /* PATHX_EREGEXP */
+ "too many matches", /* PATHX_EMMATCH */
+ "wrong flag for regexp" /* PATHX_EREGEXPFLAG */
+};
+
+/*
+ * Path expressions are strings that use a notation modelled on XPath.
+ */
+
+enum type {
+ T_NONE = 0, /* Not a type */
+ T_NODESET,
+ T_BOOLEAN,
+ T_NUMBER,
+ T_STRING,
+ T_REGEXP
+};
+
+enum expr_tag {
+ E_FILTER,
+ E_BINARY,
+ E_VALUE,
+ E_VAR,
+ E_APP
+};
+
+enum binary_op {
+ OP_EQ, /* '=' */
+ OP_NEQ, /* '!=' */
+ OP_LT, /* '<' */
+ OP_LE, /* '<=' */
+ OP_GT, /* '>' */
+ OP_GE, /* '>=' */
+ OP_PLUS, /* '+' */
+ OP_MINUS, /* '-' */
+ OP_STAR, /* '*' */
+ OP_AND, /* 'and' */
+ OP_OR, /* 'or' */
+ OP_ELSE, /* 'else' */
+ OP_RE_MATCH, /* '=~' */
+ OP_RE_NOMATCH, /* '!~' */
+ OP_UNION /* '|' */
+};
+
+struct pred {
+ int nexpr;
+ struct expr **exprs;
+};
+
+enum axis {
+ SELF,
+ CHILD,
+ SEQ,
+ DESCENDANT,
+ DESCENDANT_OR_SELF,
+ PARENT,
+ ANCESTOR,
+ ROOT,
+ PRECEDING_SIBLING,
+ FOLLOWING_SIBLING
+};
+
+/* This array is indexed by enum axis */
+static const char *const axis_names[] = {
+ "self",
+ "child",
+ "seq", /* Like child, but only selects node-names which are integers */
+ "descendant",
+ "descendant-or-self",
+ "parent",
+ "ancestor",
+ "root",
+ "preceding-sibling",
+ "following-sibling"
+};
+
+/* The characters that can follow a name in a location expression (aka path)
+ * The parser will assume that a name (path component) is finished when it
+ * encounters any of these characters, unless they are escaped by preceding
+ * them with a '\\'.
+ *
+ * See parse_name for the gory details
+ */
+static const char name_follow[] = "][|/=()!,";
+
+/* Doubly linked list of location steps. Besides the information from the
+ * path expression, also contains information to iterate over a node set,
+ * in particular, the context node CTX for the step, and the current node
+ * CUR within that context.
+ */
+struct step {
+ struct step *next;
+ enum axis axis;
+ char *name; /* NULL to match any name */
+ struct pred *predicates;
+};
+
+/* Initialise the root nodeset with the first step */
+static struct tree *step_root(struct step *step, struct tree *ctx,
+ struct tree *root_ctx);
+/* Iteration over the nodes on a step, ignoring the predicates */
+static struct tree *step_first(struct step *step, struct tree *ctx);
+static struct tree *step_next(struct step *step, struct tree *ctx,
+ struct tree *node);
+
+struct pathx_symtab {
+ struct pathx_symtab *next;
+ char *name;
+ struct value *value;
+};
+
+struct pathx {
+ struct state *state;
+ struct nodeset *nodeset;
+ int node;
+ struct tree *origin;
+};
+
+#define L_BRACK '['
+#define R_BRACK ']'
+
+struct locpath {
+ struct step *steps;
+};
+
+struct nodeset {
+ struct tree **nodes;
+ size_t used;
+ size_t size;
+};
+
+typedef uint32_t value_ind_t;
+
+struct value {
+ enum type tag;
+ union {
+ struct nodeset *nodeset; /* T_NODESET */
+ int64_t number; /* T_NUMBER */
+ char *string; /* T_STRING */
+ bool boolval; /* T_BOOLEAN */
+ struct regexp *regexp; /* T_REGEXP */
+ };
+};
+
+struct expr {
+ enum expr_tag tag;
+ enum type type;
+ union {
+ struct { /* E_FILTER */
+ struct expr *primary;
+ struct pred *predicates;
+ struct locpath *locpath;
+ };
+ struct { /* E_BINARY */
+ enum binary_op op;
+ struct expr *left;
+ struct expr *right;
+ bool left_matched;
+ };
+ value_ind_t value_ind; /* E_VALUE */
+ char *ident; /* E_VAR */
+ struct { /* E_APP */
+ const struct func *func;
+ struct expr **args;
+ /* If fold is true, replace this function invocation
+ * with its value after the first time we evaluate this
+ * expression */
+ bool fold;
+ };
+ };
+};
+
+struct locpath_trace {
+ unsigned int maxns;
+ struct nodeset **ns;
+ struct locpath *lp;
+};
+
+/* Internal state of the evaluator/parser */
+struct state {
+ pathx_errcode_t errcode;
+ const char *file;
+ int line;
+ char *errmsg;
+
+ const char *txt; /* Entire expression */
+ const char *pos; /* Current position within TXT during parsing */
+
+ struct tree *ctx; /* The current node */
+ uint ctx_pos;
+ uint ctx_len;
+
+ struct tree *root_ctx; /* Root context for relative paths */
+
+ /* A table of all values. The table is dynamically reallocated, i.e.
+ * pointers to struct value should not be used across calls that
+ * might allocate new values
+ *
+ * value_pool[0] is always the boolean false, and value_pool[1]
+ * always the boolean true
+ */
+ struct value *value_pool;
+ value_ind_t value_pool_used;
+ value_ind_t value_pool_size;
+ /* Stack of values (as indices into value_pool), with bottom of
+ stack in values[0] */
+ value_ind_t *values;
+ size_t values_used;
+ size_t values_size;
+ /* Stack of expressions, with bottom of stack in exprs[0] */
+ struct expr **exprs;
+ size_t exprs_used;
+ size_t exprs_size;
+ /* Trace of a locpath evaluation, needed by pathx_expand_tree.
+ Generally NULL, unless a trace is needed.
+ */
+ struct locpath_trace *locpath_trace;
+ /* Symbol table for variable lookups */
+ struct pathx_symtab *symtab;
+ /* Error structure, used to communicate errors to struct augeas;
+ * we never own this structure, and therefore never free it */
+ struct error *error;
+ /* If a filter-expression contains the 'else' operator, we need
+ * to evaluate the filter twice. The has_else flag means we don't
+ * do this unless we really need to */
+ bool has_else;
+};
+
+/* We consider NULL and the empty string to be equal */
+ATTRIBUTE_PURE
+static inline int streqx(const char *s1, const char *s2) {
+ if (s1 == NULL)
+ return (s2 == NULL || strlen(s2) == 0);
+ if (s2 == NULL)
+ return strlen(s1) == 0;
+ return STREQ(s1, s2);
+}
+
+/* Functions */
+
+typedef void (*func_impl_t)(struct state *state, int nargs);
+
+struct func {
+ const char *name;
+ unsigned int arity;
+ enum type type;
+ bool pure; /* Result only depends on args */
+ const enum type *arg_types;
+ func_impl_t impl;
+};
+
+static void func_last(struct state *state, int nargs);
+static void func_position(struct state *state, int nargs);
+static void func_count(struct state *state, int nargs);
+static void func_label(struct state *state, int nargs);
+static void func_regexp(struct state *state, int nargs);
+static void func_regexp_flag(struct state *state, int nargs);
+static void func_glob(struct state *state, int nargs);
+static void func_int(struct state *state, int nargs);
+static void func_not(struct state *state, int nargs);
+static void func_modified(struct state *state, int nargs);
+
+static const enum type arg_types_nodeset[] = { T_NODESET };
+static const enum type arg_types_string[] = { T_STRING };
+static const enum type arg_types_bool[] = { T_BOOLEAN };
+static const enum type arg_types_string_string[] = { T_STRING, T_STRING };
+static const enum type arg_types_nodeset_string[] = { T_NODESET, T_STRING };
+
+static const struct func builtin_funcs[] = {
+ { .name = "last", .arity = 0, .type = T_NUMBER, .arg_types = NULL,
+ .impl = func_last, .pure = false },
+ { .name = "position", .arity = 0, .type = T_NUMBER, .arg_types = NULL,
+ .impl = func_position, .pure = false },
+ { .name = "label", .arity = 0, .type = T_STRING, .arg_types = NULL,
+ .impl = func_label, .pure = false },
+ { .name = "count", .arity = 1, .type = T_NUMBER,
+ .arg_types = arg_types_nodeset,
+ .impl = func_count, .pure = false },
+ { .name = "regexp", .arity = 1, .type = T_REGEXP,
+ .arg_types = arg_types_string,
+ .impl = func_regexp, .pure = true },
+ { .name = "regexp", .arity = 1, .type = T_REGEXP,
+ .arg_types = arg_types_nodeset,
+ .impl = func_regexp, .pure = true },
+ { .name = "regexp", .arity = 2, .type = T_REGEXP,
+ .arg_types = arg_types_string_string,
+ .impl = func_regexp_flag, .pure = true },
+ { .name = "regexp", .arity = 2, .type = T_REGEXP,
+ .arg_types = arg_types_nodeset_string,
+ .impl = func_regexp_flag, .pure = true },
+ { .name = "glob", .arity = 1, .type = T_REGEXP,
+ .arg_types = arg_types_string,
+ .impl = func_glob, .pure = true },
+ { .name = "glob", .arity = 1, .type = T_REGEXP,
+ .arg_types = arg_types_nodeset,
+ .impl = func_glob, .pure = true },
+ { .name = "int", .arity = 1, .type = T_NUMBER,
+ .arg_types = arg_types_string, .impl = func_int, .pure = false },
+ { .name = "int", .arity = 1, .type = T_NUMBER,
+ .arg_types = arg_types_nodeset, .impl = func_int, .pure = false },
+ { .name = "int", .arity = 1, .type = T_NUMBER,
+ .arg_types = arg_types_bool, .impl = func_int, .pure = false },
+ { .name = "modified", .arity = 0, .type = T_BOOLEAN,
+ .arg_types = NULL, .impl = func_modified, .pure = false },
+ { .name = "not", .arity = 1, .type = T_BOOLEAN,
+ .arg_types = arg_types_bool, .impl = func_not, .pure = true }
+};
+
+#define RET_ON_ERROR \
+ if (state->errcode != PATHX_NOERROR) return
+
+#define RET0_ON_ERROR \
+ if (state->errcode != PATHX_NOERROR) return 0
+
+#define STATE_ERROR(state, err) \
+ do { \
+ state->errcode = err; \
+ state->file = __FILE__; \
+ state->line = __LINE__; \
+ } while (0)
+
+#define HAS_ERROR(state) (state->errcode != PATHX_NOERROR)
+
+#define STATE_ENOMEM STATE_ERROR(state, PATHX_ENOMEM)
+
+/*
+ * Free the various data structures
+ */
+
+static void free_expr(struct expr *expr);
+
+static void free_pred(struct pred *pred) {
+ if (pred == NULL)
+ return;
+
+ for (int i=0; i < pred->nexpr; i++) {
+ free_expr(pred->exprs[i]);
+ }
+ free(pred->exprs);
+ free(pred);
+}
+
+static void free_step(struct step *step) {
+ while (step != NULL) {
+ struct step *del = step;
+ step = del->next;
+ free(del->name);
+ free_pred(del->predicates);
+ free(del);
+ }
+}
+
+static void free_locpath(struct locpath *locpath) {
+ if (locpath == NULL)
+ return;
+ while (locpath->steps != NULL) {
+ struct step *step = locpath->steps;
+ locpath->steps = step->next;
+ free(step->name);
+ free_pred(step->predicates);
+ free(step);
+ }
+ free(locpath);
+}
+
+static void free_expr(struct expr *expr) {
+ if (expr == NULL)
+ return;
+ switch (expr->tag) {
+ case E_FILTER:
+ free_expr(expr->primary);
+ free_pred(expr->predicates);
+ free_locpath(expr->locpath);
+ break;
+ case E_BINARY:
+ free_expr(expr->left);
+ free_expr(expr->right);
+ break;
+ case E_VALUE:
+ break;
+ case E_VAR:
+ free(expr->ident);
+ break;
+ case E_APP:
+ for (int i=0; i < expr->func->arity; i++)
+ free_expr(expr->args[i]);
+ free(expr->args);
+ break;
+ default:
+ assert(0);
+ }
+ free(expr);
+}
+
+static void free_nodeset(struct nodeset *ns) {
+ if (ns != NULL) {
+ free(ns->nodes);
+ free(ns);
+ }
+}
+
+/* Free all objects used by VALUE, but not VALUE itself */
+static void release_value(struct value *v) {
+ if (v == NULL)
+ return;
+
+ switch (v->tag) {
+ case T_NODESET:
+ free_nodeset(v->nodeset);
+ break;
+ case T_STRING:
+ free(v->string);
+ break;
+ case T_BOOLEAN:
+ case T_NUMBER:
+ break;
+ case T_REGEXP:
+ unref(v->regexp, regexp);
+ break;
+ default:
+ assert(0);
+ }
+}
+
+static void free_state(struct state *state) {
+ if (state == NULL)
+ return;
+
+ for(int i=0; i < state->exprs_used; i++)
+ free_expr(state->exprs[i]);
+ free(state->exprs);
+
+ for(int i=0; i < state->value_pool_used; i++)
+ release_value(state->value_pool + i);
+ free(state->value_pool);
+ free(state->values);
+ free(state);
+}
+
+void free_pathx(struct pathx *pathx) {
+ if (pathx == NULL)
+ return;
+ free_state(pathx->state);
+ free(pathx);
+}
+
+/*
+ * Nodeset helpers
+ */
+static struct nodeset *make_nodeset(struct state *state) {
+ struct nodeset *result;
+ if (ALLOC(result) < 0)
+ STATE_ENOMEM;
+ return result;
+}
+
+/* Add NODE to NS if it is not in NS yet. This relies on the flag
+ * NODE->ADDED and care must be taken that NS_CLEAR_ADDED is called on NS
+ * as soon as we are done adding nodes to it.
+ */
+static void ns_add(struct nodeset *ns, struct tree *node,
+ struct state *state) {
+ if (node->added)
+ return;
+ if (ns->used >= ns->size) {
+ size_t size = 2 * ns->size;
+ if (size < 10) size = 10;
+ if (REALLOC_N(ns->nodes, size) < 0)
+ STATE_ENOMEM;
+ ns->size = size;
+ }
+ ns->nodes[ns->used] = node;
+ node->added = 1;
+ ns->used += 1;
+}
+
+static void ns_clear_added(struct nodeset *ns) {
+ for (int i=0; i < ns->used; i++)
+ ns->nodes[i]->added = 0;
+}
+
+static struct nodeset *
+clone_nodeset(struct nodeset *ns, struct state *state)
+{
+ struct nodeset *clone;
+ if (ALLOC(clone) < 0) {
+ STATE_ENOMEM;
+ return NULL;
+ }
+ if (ALLOC_N(clone->nodes, ns->used) < 0) {
+ free(clone);
+ STATE_ENOMEM;
+ return NULL;
+ }
+ clone->used = ns->used;
+ clone->size = ns->used;
+ for (int i=0; i < ns->used; i++)
+ clone->nodes[i] = ns->nodes[i];
+ return clone;
+}
+
+/*
+ * Handling values
+ */
+static value_ind_t make_value(enum type tag, struct state *state) {
+ assert(tag != T_BOOLEAN);
+
+ if (state->value_pool_used >= state->value_pool_size) {
+ value_ind_t new_size = 2*state->value_pool_size;
+ if (new_size <= state->value_pool_size) {
+ STATE_ENOMEM;
+ return 0;
+ }
+ if (REALLOC_N(state->value_pool, new_size) < 0) {
+ STATE_ENOMEM;
+ return 0;
+ }
+ state->value_pool_size = new_size;
+ }
+ state->value_pool[state->value_pool_used].tag = tag;
+ state->value_pool[state->value_pool_used].nodeset = NULL;
+ return state->value_pool_used++;
+}
+
+static value_ind_t clone_value(struct value *v, struct state *state) {
+ value_ind_t vind = make_value(v->tag, state);
+ RET0_ON_ERROR;
+ struct value *clone = state->value_pool + vind;
+
+ switch (v->tag) {
+ case T_NODESET:
+ clone->nodeset = clone_nodeset(v->nodeset, state);
+ break;
+ case T_NUMBER:
+ clone->number = v->number;
+ break;
+ case T_STRING:
+ clone->string = strdup(v->string);
+ if (clone->string == NULL) {
+ STATE_ENOMEM;
+ }
+ break;
+ case T_BOOLEAN:
+ clone->boolval = v->boolval;
+ break;
+ case T_REGEXP:
+ clone->regexp = ref(v->regexp);
+ break;
+ default:
+ assert(0);
+ }
+ return vind;
+}
+
+static value_ind_t pop_value_ind(struct state *state) {
+ if (state->values_used > 0) {
+ state->values_used -= 1;
+ return state->values[state->values_used];
+ } else {
+ STATE_ERROR(state, PATHX_EINTERNAL);
+ assert(0);
+ return 0;
+ }
+}
+
+static struct value *pop_value(struct state *state) {
+ value_ind_t vind = pop_value_ind(state);
+ if (HAS_ERROR(state))
+ return NULL;
+ return state->value_pool + vind;
+}
+
+static void push_value(value_ind_t vind, struct state *state) {
+ if (state->values_used >= state->values_size) {
+ size_t new_size = 2*state->values_size;
+ if (new_size == 0) new_size = 8;
+ if (REALLOC_N(state->values, new_size) < 0) {
+ STATE_ENOMEM;
+ return;
+ }
+ state->values_size = new_size;
+ }
+ state->values[state->values_used++] = vind;
+}
+
+static void push_boolean_value(int b, struct state *state) {
+ /* value_pool[0] is always false and value_pool[1] always true,
+ * so the pool index of a boolean is simply B normalized to 0 or 1 */
+ push_value(b != 0, state);
+}
+
+ATTRIBUTE_PURE
+static struct value *expr_value(struct expr *expr, struct state *state) {
+ return state->value_pool + expr->value_ind;
+}
+
+/*************************************************************************
+ * Evaluation
+ ************************************************************************/
+static void eval_expr(struct expr *expr, struct state *state);
+
+
+#define ensure_arity(min, max) \
+ if (nargs < min || nargs > max) { \
+ STATE_ERROR(state, PATHX_EINTERNAL); \
+ return; \
+ }
+
+static void func_last(struct state *state, int nargs) {
+ ensure_arity(0, 0);
+ value_ind_t vind = make_value(T_NUMBER, state);
+ RET_ON_ERROR;
+
+ state->value_pool[vind].number = state->ctx_len;
+ push_value(vind, state);
+}
+
+static void func_position(struct state *state, int nargs) {
+ ensure_arity(0, 0);
+ value_ind_t vind = make_value(T_NUMBER, state);
+ RET_ON_ERROR;
+
+ state->value_pool[vind].number = state->ctx_pos;
+ push_value(vind, state);
+}
+
+static void func_count(struct state *state, int nargs) {
+ ensure_arity(1, 1);
+ value_ind_t vind = make_value(T_NUMBER, state);
+ RET_ON_ERROR;
+
+ struct value *ns = pop_value(state);
+ state->value_pool[vind].number = ns->nodeset->used;
+ push_value(vind, state);
+}
+
+static void func_label(struct state *state, int nargs) {
+ ensure_arity(0, 0);
+ value_ind_t vind = make_value(T_STRING, state);
+ char *s;
+
+ RET_ON_ERROR;
+
+ if (state->ctx->label)
+ s = strdup(state->ctx->label);
+ else
+ s = strdup("");
+ if (s == NULL) {
+ STATE_ENOMEM;
+ return;
+ }
+ state->value_pool[vind].string = s;
+ push_value(vind, state);
+}
+
+static void func_int(struct state *state, int nargs) {
+ ensure_arity(1, 1);
+ value_ind_t vind = make_value(T_NUMBER, state);
+ int64_t i = -1;
+ RET_ON_ERROR;
+
+ struct value *v = pop_value(state);
+ if (v->tag == T_BOOLEAN) {
+ i = v->boolval;
+ } else {
+ const char *s = NULL;
+ if (v->tag == T_STRING) {
+ s = v->string;
+ } else {
+ /* T_NODESET */
+ if (v->nodeset->used != 1) {
+ STATE_ERROR(state, PATHX_EMMATCH);
+ return;
+ }
+ s = v->nodeset->nodes[0]->value;
+ }
+ if (s != NULL) {
+ int r;
+ r = xstrtoint64(s, 10, &i);
+ if (r < 0) {
+ STATE_ERROR(state, PATHX_ENUMBER);
+ return;
+ }
+ }
+ }
+ state->value_pool[vind].number = i;
+ push_value(vind, state);
+}
+
+static void func_modified(struct state *state, int nargs) {
+ ensure_arity(0, 0);
+
+ push_boolean_value(state->ctx->dirty, state);
+}
+
+static void func_not(struct state *state, int nargs) {
+ ensure_arity(1, 1);
+ RET_ON_ERROR;
+
+ struct value *v = pop_value(state);
+ if (v->tag == T_BOOLEAN) {
+ push_boolean_value(! v->boolval, state);
+ }
+}
+
+static struct regexp *
+nodeset_as_regexp(struct info *info, struct nodeset *ns, int glob, int nocase) {
+ struct regexp *result = NULL;
+ struct regexp **rx = NULL;
+ int used = 0;
+
+ for (int i = 0; i < ns->used; i++) {
+ if (ns->nodes[i]->value != NULL)
+ used += 1;
+ }
+
+ if (used == 0) {
+ /* If the nodeset is empty, make sure we produce a regexp
+ * that never matches anything */
+ result = make_regexp_unescape(info, "[^\001-\7ff]", nocase);
+ } else {
+ if (ALLOC_N(rx, ns->used) < 0)
+ goto error;
+ for (int i=0; i < ns->used; i++) {
+ if (ns->nodes[i]->value == NULL)
+ continue;
+
+ if (glob)
+ rx[i] = make_regexp_from_glob(info, ns->nodes[i]->value);
+ else
+ rx[i] = make_regexp_unescape(info, ns->nodes[i]->value, 0);
+ if (rx[i] == NULL)
+ goto error;
+ }
+ result = regexp_union_n(info, ns->used, rx);
+ }
+
+ error:
+ if (rx != NULL) {
+ for (int i=0; i < ns->used; i++)
+ unref(rx[i], regexp);
+ free(rx);
+ }
+ return result;
+}
+
+static void func_regexp_or_glob(struct state *state, int glob, int nocase) {
+ value_ind_t vind = make_value(T_REGEXP, state);
+ int r;
+
+ RET_ON_ERROR;
+
+ struct value *v = pop_value(state);
+ struct regexp *rx = NULL;
+
+ if (v->tag == T_STRING) {
+ if (glob)
+ rx = make_regexp_from_glob(state->error->info, v->string);
+ else
+ rx = make_regexp_unescape(state->error->info, v->string, nocase);
+ } else if (v->tag == T_NODESET) {
+ rx = nodeset_as_regexp(state->error->info, v->nodeset, glob, nocase);
+ } else {
+ assert(0);
+ }
+
+ if (rx == NULL) {
+ STATE_ENOMEM;
+ return;
+ }
+
+ state->value_pool[vind].regexp = rx;
+ r = regexp_compile(rx);
+ if (r < 0) {
+ const char *msg;
+ regexp_check(rx, &msg);
+ state->errmsg = strdup(msg);
+ STATE_ERROR(state, PATHX_EREGEXP);
+ return;
+ }
+ push_value(vind, state);
+}
+
+static void func_regexp(struct state *state, int nargs) {
+ ensure_arity(1, 1);
+ func_regexp_or_glob(state, 0, 0);
+}
+
+static void func_regexp_flag(struct state *state, int nargs) {
+ ensure_arity(2, 2);
+ int nocase = 0;
+ struct value *f = pop_value(state);
+
+ if (STREQ("i", f->string))
+ nocase = 1;
+ else
+ STATE_ERROR(state, PATHX_EREGEXPFLAG);
+
+ func_regexp_or_glob(state, 0, nocase);
+}
+
+static void func_glob(struct state *state, int nargs) {
+ ensure_arity(1, 1);
+ func_regexp_or_glob(state, 1, 0);
+}
+
+static bool coerce_to_bool(struct value *v) {
+ switch (v->tag) {
+ case T_NODESET:
+ return v->nodeset->used > 0;
+ break;
+ case T_BOOLEAN:
+ return v->boolval;
+ break;
+ case T_NUMBER:
+ return v->number > 0;
+ break;
+ case T_STRING:
+ return strlen(v->string) > 0;
+ break;
+ case T_REGEXP:
+ return true;
+ default:
+ assert(0);
+ return false;
+ }
+}
+
+static int calc_eq_nodeset_nodeset(struct nodeset *ns1, struct nodeset *ns2,
+ int neq) {
+ for (int i1=0; i1 < ns1->used; i1++) {
+ struct tree *t1 = ns1->nodes[i1];
+ for (int i2=0; i2 < ns2->used; i2++) {
+ struct tree *t2 = ns2->nodes[i2];
+ if (neq) {
+ if (!streqx(t1->value, t2->value))
+ return 1;
+ } else {
+ if (streqx(t1->value, t2->value))
+ return 1;
+ }
+ }
+ }
+ return 0;
+}
+
+static int calc_eq_nodeset_string(struct nodeset *ns, const char *s,
+ int neq) {
+ for (int i=0; i < ns->used; i++) {
+ struct tree *t = ns->nodes[i];
+ if (neq) {
+ if (!streqx(t->value, s))
+ return 1;
+ } else {
+ if (streqx(t->value, s))
+ return 1;
+ }
+ }
+ return 0;
+}
+
+static void eval_eq(struct state *state, int neq) {
+ struct value *r = pop_value(state);
+ struct value *l = pop_value(state);
+ int res;
+
+ if (l->tag == T_NODESET && r->tag == T_NODESET) {
+ res = calc_eq_nodeset_nodeset(l->nodeset, r->nodeset, neq);
+ } else if (l->tag == T_NODESET) {
+ res = calc_eq_nodeset_string(l->nodeset, r->string, neq);
+ } else if (r->tag == T_NODESET) {
+ res = calc_eq_nodeset_string(r->nodeset, l->string, neq);
+ } else if (l->tag == T_NUMBER && r->tag == T_NUMBER) {
+ if (neq)
+ res = (l->number != r->number);
+ else
+ res = (l->number == r->number);
+ } else {
+ assert(l->tag == T_STRING);
+ assert(r->tag == T_STRING);
+ res = streqx(l->string, r->string);
+ if (neq)
+ res = !res;
+ }
+ RET_ON_ERROR;
+
+ push_boolean_value(res, state);
+}
+
+static void eval_arith(struct state *state, enum binary_op op) {
+ value_ind_t vind = make_value(T_NUMBER, state);
+ struct value *r = pop_value(state);
+ struct value *l = pop_value(state);
+ int res;
+
+ assert(l->tag == T_NUMBER);
+ assert(r->tag == T_NUMBER);
+
+ RET_ON_ERROR;
+
+ if (op == OP_PLUS)
+ res = l->number + r->number;
+ else if (op == OP_MINUS)
+ res = l->number - r->number;
+ else if (op == OP_STAR)
+ res = l->number * r->number;
+ else
+ assert(0);
+
+ state->value_pool[vind].number = res;
+ push_value(vind, state);
+}
+
+static void eval_rel(struct state *state, bool greater, bool equal) {
+ struct value *r, *l;
+ int res;
+
+ /* We always check l < r or l <= r */
+ if (greater) {
+ l = pop_value(state);
+ r = pop_value(state);
+ } else {
+ r = pop_value(state);
+ l = pop_value(state);
+ }
+ if (l->tag == T_NUMBER) {
+ if (equal)
+ res = (l->number <= r->number);
+ else
+ res = (l->number < r->number);
+ } else if (l->tag == T_STRING) {
+ int cmp = strcmp(l->string, r->string);
+ if (equal)
+ res = cmp <= 0;
+ else
+ res = cmp < 0;
+ } else {
+ assert(0);
+ }
+
+ push_boolean_value(res, state);
+}
+
+static void eval_and_or(struct state *state, enum binary_op op) {
+ struct value *r = pop_value(state);
+ struct value *l = pop_value(state);
+ bool left = coerce_to_bool(l);
+ bool right = coerce_to_bool(r);
+
+ if (op == OP_AND)
+ push_boolean_value(left && right, state);
+ else
+ push_boolean_value(left || right, state);
+}
+
+static void eval_else(struct state *state, struct expr *expr, struct locpath_trace *lpt_right) {
+ struct value *r = pop_value(state);
+ struct value *l = pop_value(state);
+
+ if (l->tag == T_NODESET && r->tag == T_NODESET) {
+ int discard_maxns = 0;
+ struct nodeset **discard_ns = NULL;
+ struct locpath_trace *lpt = state->locpath_trace;
+ value_ind_t vind = make_value(T_NODESET, state);
+ if (l->nodeset->used > 0 || expr->left_matched) {
+ expr->left_matched = 1;
+ state->value_pool[vind].nodeset = clone_nodeset(l->nodeset, state);
+ if (lpt_right != NULL) {
+ discard_maxns = lpt_right->maxns;
+ discard_ns = lpt_right->ns;
+ }
+ } else {
+ state->value_pool[vind].nodeset = clone_nodeset(r->nodeset, state);
+ if (lpt != NULL && lpt_right != NULL) {
+ discard_maxns = lpt->maxns;
+ discard_ns = lpt->ns;
+ lpt->maxns = lpt_right->maxns;
+ lpt->ns = lpt_right->ns;
+ lpt->lp = lpt_right->lp;
+ }
+ }
+ push_value(vind, state);
+ if (lpt != NULL && lpt_right != NULL) {
+ for (int i=0; i < discard_maxns; i++)
+ free_nodeset(discard_ns[i]);
+ FREE(discard_ns);
+ }
+ } else {
+ bool left = coerce_to_bool(l);
+ bool right = coerce_to_bool(r);
+
+ expr->left_matched = expr->left_matched || left;
+ if (expr->left_matched) {
+ /* One or more LHS nodes have matched, so we're not interested in the right expr */
+ push_boolean_value(left, state);
+ } else {
+ /* no LHS has matched (yet), so keep the right expr */
+ /* If this is the 2nd pass, and expr->left_matched is true, no RHS nodes will be included */
+ push_boolean_value(right, state);
+ }
+ }
+}
+
+static bool eval_re_match_str(struct state *state, struct regexp *rx,
+ const char *str) {
+ int r;
+
+ if (str == NULL)
+ str = "";
+
+ r = regexp_match(rx, str, strlen(str), 0, NULL);
+ if (r == -2) {
+ STATE_ERROR(state, PATHX_EINTERNAL);
+ } else if (r == -3) {
+ /* We should never get this far; func_regexp should catch
+ * invalid regexps */
+ assert(false);
+ }
+ return r == strlen(str);
+}
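eval_re_match_str counts a match only when regexp_match reports a length equal to strlen(str), i.e. the pattern is implicitly anchored at both ends. A rough standalone equivalent of that anchoring idea, using POSIX <regex.h> (a different engine than Augeas' regexp type, so only the "match must cover the whole string" check carries over; the helper name is illustrative):

```c
#include <regex.h>
#include <string.h>

/* Return 1 if PATTERN matches all of STR, 0 if not, -1 on a bad
 * pattern. Anchoring is done by hand: the match must start at offset
 * 0 and end at strlen(str). */
int full_match(const char *pattern, const char *str) {
    regex_t rx;
    regmatch_t m;
    if (regcomp(&rx, pattern, REG_EXTENDED) != 0)
        return -1;                  /* invalid pattern */
    int r = regexec(&rx, str, 1, &m, 0);
    regfree(&rx);
    if (r != 0)
        return 0;                   /* no match anywhere */
    /* The leftmost match must span every character of STR. */
    return m.rm_so == 0 && (size_t) m.rm_eo == strlen(str);
}
```

A partial match such as "a+b" against "aaabc" is rejected, which is exactly what the `r == strlen(str)` comparison above enforces.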
+
+static void eval_union(struct state *state, struct locpath_trace *lpt_right) {
+ value_ind_t vind = make_value(T_NODESET, state);
+ struct value *r = pop_value(state);
+ struct value *l = pop_value(state);
+ struct nodeset *res = NULL;
+ struct locpath_trace *lpt = state->locpath_trace;
+
+ assert(l->tag == T_NODESET);
+ assert(r->tag == T_NODESET);
+
+ RET_ON_ERROR;
+
+ res = clone_nodeset(l->nodeset, state);
+ RET_ON_ERROR;
+ for (int i=0; i < r->nodeset->used; i++) {
+ ns_add(res, r->nodeset->nodes[i], state);
+ if (HAS_ERROR(state))
+ goto error;
+ }
+ state->value_pool[vind].nodeset = res;
+ push_value(vind, state);
+
+ if (lpt != NULL && lpt_right != NULL) {
+ STATE_ERROR(state, PATHX_EMMATCH);
+ for (int i=0; i < lpt_right->maxns; i++)
+ free_nodeset(lpt_right->ns[i]);
+ FREE(lpt_right->ns);
+ }
+ error:
+ ns_clear_added(res);
+}
+
+static void eval_concat_string(struct state *state) {
+ value_ind_t vind = make_value(T_STRING, state);
+ struct value *r = pop_value(state);
+ struct value *l = pop_value(state);
+ char *res = NULL;
+
+ RET_ON_ERROR;
+
+ if (ALLOC_N(res, strlen(l->string) + strlen(r->string) + 1) < 0) {
+ STATE_ENOMEM;
+ return;
+ }
+ strcpy(res, l->string);
+ strcat(res, r->string);
+ state->value_pool[vind].string = res;
+ push_value(vind, state);
+}
+
+static void eval_concat_regexp(struct state *state) {
+ value_ind_t vind = make_value(T_REGEXP, state);
+ struct value *r = pop_value(state);
+ struct value *l = pop_value(state);
+ struct regexp *rx = NULL;
+
+ RET_ON_ERROR;
+
+ rx = regexp_concat(state->error->info, l->regexp, r->regexp);
+ if (rx == NULL) {
+ STATE_ENOMEM;
+ return;
+ }
+
+ state->value_pool[vind].regexp = rx;
+ push_value(vind, state);
+}
+
+static void eval_re_match(struct state *state, enum binary_op op) {
+ struct value *rx = pop_value(state);
+ struct value *v = pop_value(state);
+
+ bool result = false;
+
+ if (v->tag == T_STRING) {
+ result = eval_re_match_str(state, rx->regexp, v->string);
+ RET_ON_ERROR;
+ } else if (v->tag == T_NODESET) {
+ for (int i=0; i < v->nodeset->used && result == false; i++) {
+ struct tree *t = v->nodeset->nodes[i];
+ result = eval_re_match_str(state, rx->regexp, t->value);
+ RET_ON_ERROR;
+ }
+ }
+ if (op == OP_RE_NOMATCH)
+ result = !result;
+ push_boolean_value(result, state);
+}
+
+static void eval_binary(struct expr *expr, struct state *state) {
+ struct locpath_trace *lpt = state->locpath_trace;
+ struct locpath_trace lpt_right;
+
+ eval_expr(expr->left, state);
+ if (lpt != NULL && expr->type == T_NODESET) {
+ MEMZERO(&lpt_right, 1);
+ state->locpath_trace = &lpt_right;
+ }
+ eval_expr(expr->right, state);
+ state->locpath_trace = lpt;
+ RET_ON_ERROR;
+
+ switch (expr->op) {
+ case OP_EQ:
+ eval_eq(state, 0);
+ break;
+ case OP_NEQ:
+ eval_eq(state, 1);
+ break;
+ case OP_LT:
+ eval_rel(state, false, false);
+ break;
+ case OP_LE:
+ eval_rel(state, false, true);
+ break;
+ case OP_GT:
+ eval_rel(state, true, false);
+ break;
+ case OP_GE:
+ eval_rel(state, true, true);
+ break;
+ case OP_PLUS:
+ if (expr->type == T_NUMBER)
+ eval_arith(state, expr->op);
+ else if (expr->type == T_STRING)
+ eval_concat_string(state);
+ else if (expr->type == T_REGEXP)
+ eval_concat_regexp(state);
+ break;
+ case OP_MINUS:
+ case OP_STAR:
+ eval_arith(state, expr->op);
+ break;
+ case OP_AND:
+ case OP_OR:
+ eval_and_or(state, expr->op);
+ break;
+ case OP_ELSE:
+ eval_else(state, expr, &lpt_right);
+ break;
+ case OP_UNION:
+ eval_union(state, &lpt_right);
+ break;
+ case OP_RE_MATCH:
+ case OP_RE_NOMATCH:
+ eval_re_match(state, expr->op);
+ break;
+ default:
+ assert(0);
+ }
+}
+
+static void eval_app(struct expr *expr, struct state *state) {
+ assert(expr->tag == E_APP);
+
+ for (int i=0; i < expr->func->arity; i++) {
+ eval_expr(expr->args[i], state);
+ RET_ON_ERROR;
+ }
+ expr->func->impl(state, expr->func->arity);
+}
+
+static bool eval_pred(struct expr *expr, struct state *state) {
+ eval_expr(expr, state);
+ RET0_ON_ERROR;
+
+ struct value *v = pop_value(state);
+ switch(v->tag) {
+ case T_BOOLEAN:
+ return v->boolval;
+ case T_NUMBER:
+ return (state->ctx_pos == v->number);
+ case T_NODESET:
+ return v->nodeset->used > 0;
+ case T_STRING:
+ return streqv(state->ctx->value, v->string);
+ default:
+ assert(0);
+ return false;
+ }
+}
+
+/* Remove COUNT successive entries from NS. The first entry to remove is at
+ IND */
+static void ns_remove(struct nodeset *ns, int ind, int count) {
+ if (count < 1)
+ return;
+ memmove(ns->nodes + ind, ns->nodes + ind+count,
+ sizeof(ns->nodes[0]) * (ns->used - (ind+count)));
+ ns->used -= count;
+}
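ns_remove compacts the node array with a single memmove per run of deleted entries, which is what lets ns_filter delete non-matching nodes in batches rather than one at a time. The same pattern on a plain int array (illustrative names, not the pathx.c API):

```c
#include <string.h>

/* Remove COUNT successive entries from A starting at IND, shifting
 * the tail left with one memmove, as ns_remove does for nodesets. */
void remove_run(int *a, int *used, int ind, int count) {
    if (count < 1)
        return;
    memmove(a + ind, a + ind + count,
            sizeof(a[0]) * (*used - (ind + count)));
    *used -= count;
}
```

Removing k adjacent entries this way costs one memmove instead of k, which matters when a predicate rejects long runs of consecutive nodes.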
+
+/*
+ * Remove all nodes from NS for which one of the PREDICATES evaluates to false
+ */
+static void ns_filter(struct nodeset *ns, struct pred *predicates,
+ struct state *state) {
+ if (predicates == NULL)
+ return;
+
+ struct tree *old_ctx = state->ctx;
+ uint old_ctx_len = state->ctx_len;
+ uint old_ctx_pos = state->ctx_pos;
+
+ for (int p=0; p < predicates->nexpr; p++) {
+ if (state->has_else) {
+ for (int i=0; i < ns->used; i++) {
+ /* 1st pass, check if any else statements have match on the left */
+ /* Don't delete any nodes (yet) */
+ state->ctx = ns->nodes[i];
+ eval_pred(predicates->exprs[p], state);
+ }
+ }
+ int first_bad = -1; /* The index of the first non-matching node */
+ state->ctx_len = ns->used;
+ state->ctx_pos = 1;
+ for (int i=0; i < ns->used; state->ctx_pos++) {
+ state->ctx = ns->nodes[i];
+ bool match = eval_pred(predicates->exprs[p], state);
+ RET_ON_ERROR;
+ /* We remove non-matching nodes from NS in batches; this logic
+ * makes sure that we only call ns_remove at the end of a run
+ * of non-matching nodes
+ */
+ if (match) {
+ if (first_bad >= 0) {
+ ns_remove(ns, first_bad, i - first_bad);
+ i = first_bad + 1;
+ } else {
+ i += 1;
+ }
+ first_bad = -1;
+ } else {
+ if (first_bad == -1)
+ first_bad = i;
+ i += 1;
+ }
+ }
+ if (first_bad >= 0) {
+ ns_remove(ns, first_bad, ns->used - first_bad);
+ }
+ }
+
+ state->ctx = old_ctx;
+ state->ctx_pos = old_ctx_pos;
+ state->ctx_len = old_ctx_len;
+}
+
+/* Return true if PRED is solely the predicate '[n]' as in 'foo[17]' */
+static bool position_pred(struct pred *pred) {
+ return pred != NULL &&
+ pred->nexpr == 1 &&
+ pred->exprs[0]->tag == E_VALUE &&
+ pred->exprs[0]->type == T_NUMBER;
+}
+
+/* Return the tree node at the position implied by STEP->PREDICATES. It is
+ assumed and required that STEP->PREDICATES is actually a
+ POSITION_PRED.
+
+ This function hand-optimizes the important case of a path expression like
+ 'service[42]'
+*/
+static struct tree *position_filter(struct nodeset *ns,
+ struct step *step,
+ struct state *state) {
+ int value_ind = step->predicates->exprs[0]->value_ind;
+ int number = state->value_pool[value_ind].number;
+
+ int pos = 1;
+ for (int i=0; i < ns->used; i++) {
+ for (struct tree *node = step_first(step, ns->nodes[i]);
+ node != NULL;
+ node = step_next(step, ns->nodes[i], node), pos++) {
+ if (pos == number)
+ return node;
+ }
+ }
+
+ return NULL;
+}
+
+/* Return an array of nodesets, one for each step in the locpath.
+ *
+ * On return, (*NS)[0] will contain state->ctx, and (*NS)[*MAXNS] will
+ * contain the nodes that matched the entire locpath
+ */
+static void ns_from_locpath(struct locpath *lp, uint *maxns,
+ struct nodeset ***ns,
+ const struct nodeset *root,
+ struct state *state) {
+ struct tree *old_ctx = state->ctx;
+ *maxns = 0;
+
+ ensure(lp != NULL, state);
+
+ *ns = NULL;
+ list_for_each(step, lp->steps)
+ *maxns += 1;
+ if (ALLOC_N(*ns, *maxns+1) < 0) {
+ STATE_ERROR(state, PATHX_ENOMEM);
+ goto error;
+ }
+ for (int i=0; i <= *maxns; i++) {
+ (*ns)[i] = make_nodeset(state);
+ if (HAS_ERROR(state))
+ goto error;
+ }
+
+ if (root == NULL) {
+ struct step *first_step = NULL;
+ first_step = lp->steps;
+
+ struct tree *root_tree;
+ root_tree = step_root(first_step, state->ctx, state->root_ctx);
+ ns_add((*ns)[0], root_tree, state);
+ ns_clear_added((*ns)[0]);
+ } else {
+ for (int i=0; i < root->used; i++)
+ ns_add((*ns)[0], root->nodes[i], state);
+ ns_clear_added((*ns)[0]);
+ }
+
+ if (HAS_ERROR(state))
+ goto error;
+
+ uint cur_ns = 0;
+ list_for_each(step, lp->steps) {
+ struct nodeset *work = (*ns)[cur_ns];
+ struct nodeset *next = (*ns)[cur_ns + 1];
+ if (position_pred(step->predicates)) {
+ struct tree *node = position_filter(work, step, state);
+ if (node) {
+ ns_add(next, node, state);
+ ns_clear_added(next);
+ }
+ } else {
+ for (int i=0; i < work->used; i++) {
+ for (struct tree *node = step_first(step, work->nodes[i]);
+ node != NULL;
+ node = step_next(step, work->nodes[i], node)) {
+ ns_add(next, node, state);
+ }
+ }
+ ns_clear_added(next);
+ ns_filter(next, step->predicates, state);
+ if (HAS_ERROR(state))
+ goto error;
+ }
+ cur_ns += 1;
+ }
+
+ state->ctx = old_ctx;
+ return;
+ error:
+ if (*ns != NULL) {
+ for (int i=0; i <= *maxns; i++)
+ free_nodeset((*ns)[i]);
+ FREE(*ns);
+ }
+ state->ctx = old_ctx;
+ return;
+}
+
+static void eval_filter(struct expr *expr, struct state *state) {
+ struct locpath *lp = expr->locpath;
+ struct nodeset **ns = NULL;
+ struct locpath_trace *lpt = state->locpath_trace;
+ uint maxns;
+
+ state->locpath_trace = NULL;
+ if (expr->primary == NULL) {
+ ns_from_locpath(lp, &maxns, &ns, NULL, state);
+ } else {
+ eval_expr(expr->primary, state);
+ RET_ON_ERROR;
+ value_ind_t primary_ind = pop_value_ind(state);
+ struct value *primary = state->value_pool + primary_ind;
+ assert(primary->tag == T_NODESET);
+ ns_filter(primary->nodeset, expr->predicates, state);
+ /* Evaluating predicates might have reallocated the value_pool */
+ primary = state->value_pool + primary_ind;
+ ns_from_locpath(lp, &maxns, &ns, primary->nodeset, state);
+ }
+ RET_ON_ERROR;
+
+ value_ind_t vind = make_value(T_NODESET, state);
+ RET_ON_ERROR;
+ state->value_pool[vind].nodeset = ns[maxns];
+ push_value(vind, state);
+
+ if (lpt != NULL) {
+ assert(lpt->ns == NULL);
+ assert(lpt->lp == NULL);
+ lpt->maxns = maxns;
+ lpt->ns = ns;
+ lpt->lp = lp;
+ state->locpath_trace = lpt;
+ } else {
+ for (int i=0; i < maxns; i++)
+ free_nodeset(ns[i]);
+ FREE(ns);
+ }
+}
+
+static struct value *lookup_var(const char *ident,
+ const struct pathx_symtab *symtab) {
+ list_for_each(tab, symtab) {
+ if (STREQ(ident, tab->name))
+ return tab->value;
+ }
+ return NULL;
+}
+
+static void eval_var(struct expr *expr, struct state *state) {
+ struct value *v = lookup_var(expr->ident, state->symtab);
+ value_ind_t vind = clone_value(v, state);
+ RET_ON_ERROR;
+ push_value(vind, state);
+}
+
+static void eval_expr(struct expr *expr, struct state *state) {
+ RET_ON_ERROR;
+ switch (expr->tag) {
+ case E_FILTER:
+ eval_filter(expr, state);
+ break;
+ case E_BINARY:
+ eval_binary(expr, state);
+ break;
+ case E_VALUE:
+ push_value(expr->value_ind, state);
+ break;
+ case E_VAR:
+ eval_var(expr, state);
+ break;
+ case E_APP:
+ eval_app(expr, state);
+ if (expr->fold) {
+ /* Do constant folding: replace the function application with
+ * a reference to the value that resulted from evaluating it */
+ for (int i=0; i < expr->func->arity; i++)
+ free_expr(expr->args[i]);
+ free(expr->args);
+ value_ind_t vind = state->values_used - 1;
+ expr->tag = E_VALUE;
+ expr->value_ind = state->values[vind];
+ }
+ break;
+ default:
+ assert(0);
+ }
+}
+
+/*************************************************************************
+ * Typechecker
+ *************************************************************************/
+
+static void check_expr(struct expr *expr, struct state *state);
+
+/* Typecheck a list of predicates. A predicate is a function of
+ * one of the following types:
+ *
+ * T_NODESET -> T_BOOLEAN
+ * T_NUMBER -> T_BOOLEAN (position test)
+ * T_BOOLEAN -> T_BOOLEAN
+ * T_STRING -> T_BOOLEAN (value test against the context node)
+ */
+static void check_preds(struct pred *pred, struct state *state) {
+ if (pred == NULL)
+ return;
+ for (int i=0; i < pred->nexpr; i++) {
+ struct expr *e = pred->exprs[i];
+ check_expr(e, state);
+ RET_ON_ERROR;
+ if (e->type != T_NODESET && e->type != T_NUMBER &&
+ e->type != T_BOOLEAN && e->type != T_STRING) {
+ STATE_ERROR(state, PATHX_ETYPE);
+ return;
+ }
+ }
+}
+
+static void check_filter(struct expr *expr, struct state *state) {
+ assert(expr->tag == E_FILTER);
+ struct locpath *locpath = expr->locpath;
+
+ if (expr->primary != NULL) {
+ check_expr(expr->primary, state);
+ if (expr->primary->type != T_NODESET) {
+ STATE_ERROR(state, PATHX_ETYPE);
+ return;
+ }
+ check_preds(expr->predicates, state);
+ RET_ON_ERROR;
+ }
+ list_for_each(s, locpath->steps) {
+ check_preds(s->predicates, state);
+ RET_ON_ERROR;
+ }
+ expr->type = T_NODESET;
+}
+
+static void check_app(struct expr *expr, struct state *state) {
+ assert(expr->tag == E_APP);
+
+ for (int i=0; i < expr->func->arity; i++) {
+ check_expr(expr->args[i], state);
+ RET_ON_ERROR;
+ }
+
+ int f;
+ for (f=0; f < ARRAY_CARDINALITY(builtin_funcs); f++) {
+ const struct func *fn = builtin_funcs + f;
+ if (STRNEQ(expr->func->name, fn->name))
+ continue;
+ if (expr->func->arity != fn->arity)
+ continue;
+
+ int match = 1;
+ for (int i=0; i < expr->func->arity; i++) {
+ if (expr->args[i]->type != fn->arg_types[i]) {
+ match = 0;
+ break;
+ }
+ }
+ if (match)
+ break;
+ }
+
+ if (f < ARRAY_CARDINALITY(builtin_funcs)) {
+ expr->func = builtin_funcs + f;
+ expr->type = expr->func->type;
+ expr->fold = expr->func->pure;
+ if (expr->fold) {
+ /* We only do constant folding for invocations of pure functions
+ * whose arguments are literal values. That misses opportunities
+ * for constant folding, e.g., "regexp('foo' + 'bar')" but is
+ * a bit simpler than doing full tracking of constants
+ */
+ for (int i=0; i < expr->func->arity; i++) {
+ if (expr->args[i]->tag != E_VALUE)
+ expr->fold = false;
+ }
+ }
+ } else {
+ STATE_ERROR(state, PATHX_ETYPE);
+ }
+}
+
+/* Check the binary operators. Type rules:
+ *
+ * '=', '!=' : T_NODESET -> T_NODESET -> T_BOOLEAN
+ * T_STRING -> T_NODESET -> T_BOOLEAN
+ * T_NODESET -> T_STRING -> T_BOOLEAN
+ * T_NUMBER -> T_NUMBER -> T_BOOLEAN
+ *
+ * '>', '>=',
+ * '<', '<=' : T_NUMBER -> T_NUMBER -> T_BOOLEAN
+ * T_STRING -> T_STRING -> T_BOOLEAN
+ * '+' : T_NUMBER -> T_NUMBER -> T_NUMBER
+ * T_STRING -> T_STRING -> T_STRING
+ * T_REGEXP -> T_REGEXP -> T_REGEXP
+ * '+', '-', '*': T_NUMBER -> T_NUMBER -> T_NUMBER
+ *
+ * 'and', 'or': T_BOOLEAN -> T_BOOLEAN -> T_BOOLEAN
+ * '=~', '!~' : T_STRING -> T_REGEXP -> T_BOOLEAN
+ * T_NODESET -> T_REGEXP -> T_BOOLEAN
+ *
+ * '|' : T_NODESET -> T_NODESET -> T_NODESET
+ *
+ * Any type can be coerced to T_BOOLEAN (see coerce_to_bool)
+ */
+static void check_binary(struct expr *expr, struct state *state) {
+ check_expr(expr->left, state);
+ check_expr(expr->right, state);
+ RET_ON_ERROR;
+
+ enum type l = expr->left->type;
+ enum type r = expr->right->type;
+ int ok = 1;
+ enum type res;
+
+ switch(expr->op) {
+ case OP_EQ:
+ case OP_NEQ:
+ ok = ((l == T_NODESET || l == T_STRING)
+ && (r == T_NODESET || r == T_STRING))
+ || (l == T_NUMBER && r == T_NUMBER);
+ res = T_BOOLEAN;
+ break;
+ case OP_LT:
+ case OP_LE:
+ case OP_GT:
+ case OP_GE:
+ ok = (l == T_NUMBER && r == T_NUMBER)
+ || (l == T_STRING && r == T_STRING);
+ res = T_BOOLEAN;
+ break;
+ case OP_PLUS:
+ ok = (l == r && (l == T_NUMBER || l == T_STRING || l == T_REGEXP));
+ res = l;
+ break;
+ case OP_MINUS:
+ case OP_STAR:
+ ok = (l == T_NUMBER && r == T_NUMBER);
+ res = T_NUMBER;
+ break;
+ case OP_UNION:
+ ok = (l == T_NODESET && r == T_NODESET);
+ res = T_NODESET;
+ break;
+ case OP_AND:
+ case OP_OR:
+ ok = 1;
+ res = T_BOOLEAN;
+ break;
+ case OP_ELSE:
+ if (l == T_NODESET && r == T_NODESET) {
+ res = T_NODESET;
+ } else {
+ res = T_BOOLEAN;
+ }
+ ok = 1;
+ break;
+ case OP_RE_MATCH:
+ case OP_RE_NOMATCH:
+ ok = ((l == T_STRING || l == T_NODESET) && r == T_REGEXP);
+ res = T_BOOLEAN;
+ break;
+ default:
+ assert(0);
+ }
+ if (! ok) {
+ STATE_ERROR(state, PATHX_ETYPE);
+ } else {
+ expr->type = res;
+ }
+}
+
+static void check_var(struct expr *expr, struct state *state) {
+ struct value *v = lookup_var(expr->ident, state->symtab);
+ if (v == NULL) {
+ STATE_ERROR(state, PATHX_ENOVAR);
+ return;
+ }
+ expr->type = v->tag;
+}
+
+/* Typecheck an expression */
+static void check_expr(struct expr *expr, struct state *state) {
+ RET_ON_ERROR;
+ switch(expr->tag) {
+ case E_FILTER:
+ check_filter(expr, state);
+ break;
+ case E_BINARY:
+ check_binary(expr, state);
+ break;
+ case E_VALUE:
+ expr->type = expr_value(expr, state)->tag;
+ break;
+ case E_VAR:
+ check_var(expr, state);
+ break;
+ case E_APP:
+ check_app(expr, state);
+ break;
+ default:
+ assert(0);
+ }
+}
+
+/*
+ * Utility functions for the parser
+ */
+
+static void skipws(struct state *state) {
+ while (isspace(*state->pos)) state->pos += 1;
+}
+
+static int match(struct state *state, char m) {
+ skipws(state);
+
+ if (*state->pos == '\0')
+ return 0;
+ if (*state->pos == m) {
+ state->pos += 1;
+ return 1;
+ }
+ return 0;
+}
+
+static int peek(struct state *state, const char *chars) {
+ return strchr(chars, *state->pos) != NULL;
+}
+
+/* Return 1 if STATE->POS starts with TOKEN, followed by optional
+ * whitespace, followed by FOLLOW. In that case, STATE->POS is set to the
+ * first character after FOLLOW. Return 0 otherwise and leave STATE->POS
+ * unchanged.
+ */
+static int looking_at(struct state *state, const char *token,
+ const char *follow) {
+ if (STREQLEN(state->pos, token, strlen(token))) {
+ const char *p = state->pos + strlen(token);
+ while (isspace(*p)) p++;
+ if (STREQLEN(p, follow, strlen(follow))) {
+ state->pos = p + strlen(follow);
+ return 1;
+ }
+ }
+ return 0;
+}
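looking_at's scan — token, optional whitespace, then follow, committing the position only on a full match — can be exercised in isolation. A sketch on a bare pointer (hypothetical helper name; the real function works on struct state):

```c
#include <ctype.h>
#include <string.h>

/* Advance *POS past TOKEN, optional whitespace, and FOLLOW, returning
 * 1; otherwise leave *POS untouched and return 0. */
int looking_at_demo(const char **pos, const char *token,
                    const char *follow) {
    if (strncmp(*pos, token, strlen(token)) == 0) {
        const char *p = *pos + strlen(token);
        while (isspace((unsigned char) *p)) p++;
        if (strncmp(p, follow, strlen(follow)) == 0) {
            *pos = p + strlen(follow);
            return 1;
        }
    }
    return 0;
}
```

Note the all-or-nothing behavior: "ancestors::foo" is not consumed by token "ancestor" with follow "::", because the follow check fails and the position is left alone.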
+
+/*************************************************************************
+ * The parser
+ *************************************************************************/
+
+static void parse_expr(struct state *state);
+
+static struct expr* pop_expr(struct state *state) {
+ if (state->exprs_used > 0) {
+ state->exprs_used -= 1;
+ return state->exprs[state->exprs_used];
+ } else {
+ STATE_ERROR(state, PATHX_EINTERNAL);
+ assert(0);
+ return NULL;
+ }
+}
+
+static void push_expr(struct expr *expr, struct state *state) {
+ if (state->exprs_used >= state->exprs_size) {
+ size_t new_size = 2*state->exprs_size;
+ if (new_size == 0) new_size = 8;
+ if (REALLOC_N(state->exprs, new_size) < 0) {
+ STATE_ENOMEM;
+ return;
+ }
+ state->exprs_size = new_size;
+ }
+ state->exprs[state->exprs_used++] = expr;
+}
+
+static void push_new_binary_op(enum binary_op op, struct state *state) {
+ struct expr *expr = NULL;
+ if (ALLOC(expr) < 0) {
+ STATE_ENOMEM;
+ return;
+ }
+
+ expr->tag = E_BINARY;
+ expr->op = op;
+ expr->right = pop_expr(state);
+ expr->left = pop_expr(state);
+ expr->left_matched = false; /* for 'else' operator only, true if any matches on LHS */
+ push_expr(expr, state);
+}
+
+int pathx_escape_name(const char *in, char **out) {
+ const char *p;
+ int num_to_escape = 0;
+ char *s;
+
+ *out = NULL;
+
+ for (p = in; *p; p++) {
+ if (strchr(name_follow, *p) || isspace(*p) || *p == '\\')
+ num_to_escape += 1;
+ }
+
+ if (num_to_escape == 0)
+ return 0;
+
+ if (ALLOC_N(*out, strlen(in) + num_to_escape + 1) < 0)
+ return -1;
+
+ for (p = in, s = *out; *p; p++) {
+ if (strchr(name_follow, *p) || isspace(*p) || *p == '\\')
+ *s++ = '\\';
+ *s++ = *p;
+ }
+ *s = '\0';
+ return 0;
+}
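pathx_escape_name makes two passes: one to count the characters that need a backslash, one to copy with escapes inserted. A trimmed-down version with a hard-coded special set (the real name_follow is defined elsewhere in this file and may differ; the function name here is illustrative):

```c
#include <ctype.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for the name_follow/isspace/backslash test above. */
int needs_escape(char c) {
    return strchr("][|/=()!,", c) != NULL
        || isspace((unsigned char) c) || c == '\\';
}

/* Return a malloc'd escaped copy of IN, or NULL if nothing needs
 * escaping (mirroring the *out = NULL convention above) or on OOM. */
char *escape_name(const char *in) {
    int extra = 0;
    for (const char *p = in; *p; p++)       /* pass 1: count */
        if (needs_escape(*p))
            extra++;
    if (extra == 0)
        return NULL;
    char *out = malloc(strlen(in) + extra + 1);
    if (out == NULL)
        return NULL;
    char *s = out;
    for (const char *p = in; *p; p++) {     /* pass 2: copy + escape */
        if (needs_escape(*p))
            *s++ = '\\';
        *s++ = *p;
    }
    *s = '\0';
    return out;
}
```

Counting first means exactly one allocation of exactly the right size, with no reallocation during the copy.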
+
+/* Return true if POS is preceded by an odd number of backslashes, i.e., if
+ * POS is escaped. Stop the search when we get to START */
+static bool backslash_escaped(const char *pos, const char *start) {
+ bool result=false;
+ while (pos-- > start && *pos == '\\') {
+ result = !result;
+ }
+ return result;
+}
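The parity walk above decides "is this character escaped?" purely from whether an odd number of backslashes precedes it, which parse_name relies on when stripping trailing whitespace. The flip in isolation (illustrative name):

```c
#include <stdbool.h>

/* True iff POS is preceded by an odd number of backslashes, searching
 * no further back than START — the same loop as backslash_escaped. */
bool is_escaped(const char *pos, const char *start) {
    bool result = false;
    while (pos-- > start && *pos == '\\')
        result = !result;
    return result;
}
```

So in `"x\\\\\\ "` (three backslashes before the space) the space is escaped, while in `"x\\\\ "` (two backslashes) it is not: the first backslash of each pair escapes the second, not the character after.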
+
+/*
+ * NameNoWS ::= [^][|/\= \t\n] | \\.
+ * NameWS ::= [^][|/\=] | \\.
+ * Name ::= NameNoWS NameWS* NameNoWS | NameNoWS
+ */
+static char *parse_name(struct state *state) {
+ const char *s = state->pos;
+ char *result;
+
+ /* Advance state->pos until it points to the first character that is
+ * not part of a name. */
+ while (*state->pos != '\0' && strchr(name_follow, *state->pos) == NULL) {
+ /* Since we allow spaces in names, we need to avoid gobbling up
+ * stuff that is in follow(Name), e.g. 'or' so that things like
+ * [name1 or name2] still work. In other words, we'll parse 'x frob
+ * y' as one name, but for 'x or y', we consider 'x' a name in its
+ * own right. */
+ if (STREQLEN(state->pos, " or ", strlen(" or ")) ||
+ STREQLEN(state->pos, " and ", strlen(" and ")) ||
+ STREQLEN(state->pos, " else ", strlen(" else ")))
+ break;
+
+ if (*state->pos == '\\') {
+ state->pos += 1;
+ if (*state->pos == '\0') {
+ STATE_ERROR(state, PATHX_ENAME);
+ return NULL;
+ }
+ }
+ state->pos += 1;
+ }
+
+ /* Strip trailing white space. Make sure we respect escaped whitespace
+ * and don't strip it as in "x\\ " */
+ if (state->pos > s) {
+ state->pos -= 1;
+ while (isspace(*state->pos) && state->pos > s
+ && !backslash_escaped(state->pos, s))
+ state->pos -= 1;
+ state->pos += 1;
+ }
+
+ if (state->pos == s) {
+ STATE_ERROR(state, PATHX_ENAME);
+ return NULL;
+ }
+
+ result = strndup(s, state->pos - s);
+ if (result == NULL) {
+ STATE_ENOMEM;
+ return NULL;
+ }
+
+ char *p = result;
+ for (char *t = result; *t != '\0'; t++, p++) {
+ if (*t == '\\')
+ t += 1;
+ *p = *t;
+ }
+ *p = '\0';
+
+ return result;
+}
+
+/*
+ * Predicate ::= "[" Expr "]" *
+ */
+static struct pred *parse_predicates(struct state *state) {
+ struct pred *pred = NULL;
+ int nexpr = 0;
+
+ while (match(state, L_BRACK)) {
+ parse_expr(state);
+ nexpr += 1;
+ RET0_ON_ERROR;
+
+ if (! match(state, R_BRACK)) {
+ STATE_ERROR(state, PATHX_EPRED);
+ return NULL;
+ }
+ skipws(state);
+ }
+
+ if (nexpr == 0)
+ return NULL;
+
+ if (ALLOC(pred) < 0) {
+ STATE_ENOMEM;
+ return NULL;
+ }
+ pred->nexpr = nexpr;
+
+ if (ALLOC_N(pred->exprs, nexpr) < 0) {
+ free_pred(pred);
+ STATE_ENOMEM;
+ return NULL;
+ }
+
+ for (int i = nexpr - 1; i >= 0; i--)
+ pred->exprs[i] = pop_expr(state);
+
+ return pred;
+}
+
+/*
+ * Step ::= AxisSpecifier NameTest Predicate* | '.' | '..'
+ * AxisSpecifier ::= AxisName '::' | <epsilon>
+ * AxisName ::= 'ancestor'
+ * | 'ancestor-or-self'
+ * | 'child'
+ * | 'descendant'
+ * | 'descendant-or-self'
+ * | 'parent'
+ * | 'self'
+ * | 'root'
+ */
+static struct step *parse_step(struct state *state) {
+ struct step *step;
+ int explicit_axis = 0, allow_predicates = 1;
+
+ if (ALLOC(step) < 0) {
+ STATE_ENOMEM;
+ return NULL;
+ }
+
+ step->axis = CHILD;
+ for (int i = 0; i < ARRAY_CARDINALITY(axis_names); i++) {
+ if (looking_at(state, axis_names[i], "::")) {
+ step->axis = i;
+ explicit_axis = 1;
+ break;
+ }
+ }
+
+ if (! match(state, '*')) {
+ step->name = parse_name(state);
+ if (HAS_ERROR(state))
+ goto error;
+ if (! explicit_axis) {
+ if (STREQ(step->name, ".") || STREQ(step->name, "..")) {
+ step->axis = STREQ(step->name, ".") ? SELF : PARENT;
+ FREE(step->name);
+ allow_predicates = 0;
+ }
+ }
+ }
+
+ if (allow_predicates) {
+ step->predicates = parse_predicates(state);
+ if (HAS_ERROR(state))
+ goto error;
+ }
+
+ return step;
+
+ error:
+ free_step(step);
+ return NULL;
+}
+
+static struct step *make_step(enum axis axis, struct state *state) {
+ struct step *result = NULL;
+
+ if (ALLOC(result) < 0) {
+ STATE_ENOMEM;
+ return NULL;
+ }
+ result->axis = axis;
+ return result;
+}
+
+/*
+ * RelativeLocationPath ::= Step
+ * | RelativeLocationPath '/' Step
+ * | AbbreviatedRelativeLocationPath
+ * AbbreviatedRelativeLocationPath ::= RelativeLocationPath '//' Step
+ *
+ * The above is the same as
+ * RelativeLocationPath ::= Step ('/' Step | '//' Step)*
+ */
+static struct locpath *
+parse_relative_location_path(struct state *state) {
+ struct step *step = NULL;
+ struct locpath *locpath = NULL;
+
+ step = parse_step(state);
+ if (HAS_ERROR(state))
+ goto error;
+
+ if (ALLOC(locpath) < 0) {
+ STATE_ENOMEM;
+ goto error;
+ }
+ list_append(locpath->steps, step);
+ step = NULL;
+
+ while (match(state, '/')) {
+ if (*state->pos == '/') {
+ state->pos += 1;
+ step = make_step(DESCENDANT_OR_SELF, state);
+ if (step == NULL) {
+ STATE_ENOMEM;
+ goto error;
+ }
+ list_append(locpath->steps, step);
+ }
+ step = parse_step(state);
+ if (HAS_ERROR(state))
+ goto error;
+ list_append(locpath->steps, step);
+ step = NULL;
+ }
+ return locpath;
+
+ error:
+ free_step(step);
+ free_locpath(locpath);
+ return NULL;
+}
+
+/*
+ * LocationPath ::= RelativeLocationPath | AbsoluteLocationPath
+ * AbsoluteLocationPath ::= '/' RelativeLocationPath?
+ * | AbbreviatedAbsoluteLocationPath
+ * AbbreviatedAbsoluteLocationPath ::= '//' RelativeLocationPath
+ *
+ */
+static void parse_location_path(struct state *state) {
+ struct expr *expr = NULL;
+ struct locpath *locpath = NULL;
+
+ if (match(state, '/')) {
+ if (*state->pos == '/') {
+ state->pos += 1;
+ locpath = parse_relative_location_path(state);
+ if (HAS_ERROR(state))
+ goto error;
+ struct step *step = make_step(DESCENDANT_OR_SELF, state);
+ if (HAS_ERROR(state))
+ goto error;
+ list_cons(locpath->steps, step);
+ } else {
+ if (*state->pos != '\0') {
+ locpath = parse_relative_location_path(state);
+ } else {
+ if (ALLOC(locpath) < 0)
+ goto err_nomem;
+ }
+ struct step *step = make_step(ROOT, state);
+ if (HAS_ERROR(state)) {
+ free_step(step);
+ goto error;
+ }
+ list_cons(locpath->steps, step);
+ }
+ } else {
+ locpath = parse_relative_location_path(state);
+ }
+
+ if (ALLOC(expr) < 0)
+ goto err_nomem;
+ expr->tag = E_FILTER;
+ expr->locpath = locpath;
+ push_expr(expr, state);
+ return;
+
+ err_nomem:
+ STATE_ENOMEM;
+ error:
+ free_expr(expr);
+ free_locpath(locpath);
+ return;
+}
+
+/*
+ * Number ::= /[0-9]+/
+ */
+static void parse_number(struct state *state) {
+ struct expr *expr = NULL;
+ unsigned long val;
+ char *end;
+
+ errno = 0;
+ val = strtoul(state->pos, &end, 10);
+ if (errno || end == state->pos || (int) val != val) {
+ STATE_ERROR(state, PATHX_ENUMBER);
+ return;
+ }
+
+ state->pos = end;
+
+ if (ALLOC(expr) < 0)
+ goto err_nomem;
+ expr->tag = E_VALUE;
+ expr->value_ind = make_value(T_NUMBER, state);
+ if (HAS_ERROR(state))
+ goto error;
+ expr_value(expr, state)->number = val;
+
+ push_expr(expr, state);
+ return;
+
+ err_nomem:
+ STATE_ENOMEM;
+ error:
+ free_expr(expr);
+ return;
+}
+
+/*
+ * Literal ::= '"' /[^"]* / '"' | "'" /[^']* / "'"
+ */
+static void parse_literal(struct state *state) {
+ char delim;
+ const char *s;
+ struct expr *expr = NULL;
+
+ if (*state->pos == '"')
+ delim = '"';
+ else if (*state->pos == '\'')
+ delim = '\'';
+ else {
+ STATE_ERROR(state, PATHX_ESTRING);
+ return;
+ }
+ state->pos += 1;
+
+ s = state->pos;
+ while (*state->pos != '\0' && *state->pos != delim) state->pos += 1;
+
+ if (*state->pos != delim) {
+ STATE_ERROR(state, PATHX_EDELIM);
+ return;
+ }
+ state->pos += 1;
+
+ if (ALLOC(expr) < 0)
+ goto err_nomem;
+ expr->tag = E_VALUE;
+ expr->value_ind = make_value(T_STRING, state);
+ if (HAS_ERROR(state))
+ goto error;
+ expr_value(expr, state)->string = strndup(s, state->pos - s - 1);
+ if (expr_value(expr, state)->string == NULL)
+ goto err_nomem;
+
+ push_expr(expr, state);
+ return;
+
+ err_nomem:
+ STATE_ENOMEM;
+ error:
+ free_expr(expr);
+ return;
+}
+
+/*
+ * FunctionCall ::= Name '(' ( Expr ( ',' Expr )* )? ')'
+ */
+static void parse_function_call(struct state *state) {
+ const struct func *func = NULL;
+ struct expr *expr = NULL;
+ int nargs = 0, find = 0;
+
+ for (; find < ARRAY_CARDINALITY(builtin_funcs); find++) {
+ if (looking_at(state, builtin_funcs[find].name, "(")) {
+ func = builtin_funcs + find;
+ break;
+ }
+ }
+ if (func == NULL) {
+ STATE_ERROR(state, PATHX_ENAME);
+ return;
+ }
+
+ if (! match(state, ')')) {
+ do {
+ nargs += 1;
+ parse_expr(state);
+ RET_ON_ERROR;
+ } while (match(state, ','));
+
+ if (! match(state, ')')) {
+ STATE_ERROR(state, PATHX_EPAREN);
+ return;
+ }
+ }
+
+ int found = 0; /* Whether there is a builtin matching in name and arity */
+ for (int i=find; i < ARRAY_CARDINALITY(builtin_funcs); i++) {
+ if (STRNEQ(func->name, builtin_funcs[i].name))
+ break;
+ if (builtin_funcs[i].arity == nargs) {
+ func = builtin_funcs + i;
+ found = 1;
+ break;
+ }
+ }
+
+ if (! found) {
+ STATE_ERROR(state, PATHX_EARITY);
+ return;
+ }
+
+ if (ALLOC(expr) < 0) {
+ STATE_ENOMEM;
+ return;
+ }
+ expr->tag = E_APP;
+ if (ALLOC_N(expr->args, nargs) < 0) {
+ free_expr(expr);
+ STATE_ENOMEM;
+ return;
+ }
+ expr->func = func;
+ for (int i = nargs - 1; i >= 0; i--)
+ expr->args[i] = pop_expr(state);
+
+ push_expr(expr, state);
+}
+
+/*
+ * VariableReference ::= '$' /[a-zA-Z_][a-zA-Z0-9_]* /
+ *
+ * The '$' is consumed by parse_primary_expr
+ */
+static void parse_var(struct state *state) {
+ const char *id = state->pos;
+ struct expr *expr = NULL;
+
+ if (!isalpha(*id) && *id != '_') {
+ STATE_ERROR(state, PATHX_ENAME);
+ return;
+ }
+ id++;
+ while (isalpha(*id) || isdigit(*id) || *id == '_')
+ id += 1;
+
+ if (ALLOC(expr) < 0)
+ goto err_nomem;
+ expr->tag = E_VAR;
+ expr->ident = strndup(state->pos, id - state->pos);
+ if (expr->ident == NULL)
+ goto err_nomem;
+
+ push_expr(expr, state);
+ state->pos = id;
+ return;
+ err_nomem:
+ STATE_ENOMEM;
+ free_expr(expr);
+ return;
+}
+
+/*
+ * PrimaryExpr ::= Literal
+ * | Number
+ * | FunctionCall
+ * | VariableReference
+ * | '(' Expr ')'
+ *
+ */
+static void parse_primary_expr(struct state *state) {
+ if (peek(state, "'\"")) {
+ parse_literal(state);
+ } else if (peek(state, "0123456789")) {
+ parse_number(state);
+ } else if (match(state, '(')) {
+ parse_expr(state);
+ RET_ON_ERROR;
+ if (! match(state, ')')) {
+ STATE_ERROR(state, PATHX_EPAREN);
+ return;
+ }
+ } else if (match(state, '$')) {
+ parse_var(state);
+ } else {
+ parse_function_call(state);
+ }
+}
+
+static int looking_at_primary_expr(struct state *state) {
+ const char *s = state->pos;
+ /* Is it a Number, Literal or VariableReference? */
+ if (peek(state, "$'\"0123456789"))
+ return 1;
+
+ /* Or maybe a function call, i.e. a word followed by a '(' ?
+ * Note that our function names are only [a-zA-Z]+
+ */
+ while (*s != '\0' && isalpha(*s)) s++;
+ while (*s != '\0' && isspace(*s)) s++;
+ return *s == '(';
+}
+
+/*
+ * PathExpr ::= LocationPath
+ * | FilterExpr
+ * | FilterExpr '/' RelativeLocationPath
+ * | FilterExpr '//' RelativeLocationPath
+ *
+ * FilterExpr ::= PrimaryExpr Predicate*
+ *
+ * The grammar is ambiguous here: the expression '42' can either be the
+ * number 42 (a PrimaryExpr) or the RelativeLocationPath 'child::42'. The
+ * reason for this ambiguity is that we allow node names like '42' in the
+ * tree; rather than forbid them, we resolve the ambiguity by always
+ * parsing '42' as a number, and requiring that the user write the
+ * RelativeLocationPath in a different form, e.g. 'child::42' or './42'.
+ */
+static void parse_path_expr(struct state *state) {
+ struct expr *expr = NULL;
+ struct pred *predicates = NULL;
+ struct locpath *locpath = NULL;
+
+ if (looking_at_primary_expr(state)) {
+ parse_primary_expr(state);
+ RET_ON_ERROR;
+ predicates = parse_predicates(state);
+ RET_ON_ERROR;
+ if (match(state, '/')) {
+ if (match(state, '/')) {
+ locpath = parse_relative_location_path(state);
+ if (HAS_ERROR(state))
+ goto error;
+
+ struct step *step = make_step(DESCENDANT_OR_SELF, state);
+ if (HAS_ERROR(state))
+ return;
+ list_cons(locpath->steps, step);
+ } else {
+ if (*state->pos == '\0') {
+ STATE_ERROR(state, PATHX_EEND);
+ goto error;
+ }
+ locpath = parse_relative_location_path(state);
+ }
+ }
+ /* A PathExpr without predicates and locpath is
+ * just a PrimaryExpr
+ */
+ if (predicates == NULL && locpath == NULL)
+ return;
+ /* To make evaluation easier, we parse something like
+ * $var[pred] as $var[pred]/.
+ */
+ if (locpath == NULL) {
+ if (ALLOC(locpath) < 0)
+ goto error;
+ if (ALLOC(locpath->steps) < 0)
+ goto error;
+ locpath->steps->axis = SELF;
+ }
+ if (ALLOC(expr) < 0)
+ goto error;
+ expr->tag = E_FILTER;
+ expr->predicates = predicates;
+ expr->primary = pop_expr(state);
+ expr->locpath = locpath;
+ push_expr(expr, state);
+ } else {
+ parse_location_path(state);
+ }
+ return;
+ error:
+ free_expr(expr);
+ free_pred(predicates);
+ free_locpath(locpath);
+ return;
+}
+
+/*
+ * UnionExpr ::= PathExpr ('|' PathExpr)*
+ */
+static void parse_union_expr(struct state *state) {
+ parse_path_expr(state);
+ RET_ON_ERROR;
+ while (match(state, '|')) {
+ parse_path_expr(state);
+ RET_ON_ERROR;
+ push_new_binary_op(OP_UNION, state);
+ }
+}
+
+/*
+ * MultiplicativeExpr ::= UnionExpr ('*' UnionExpr)*
+ */
+static void parse_multiplicative_expr(struct state *state) {
+ parse_union_expr(state);
+ RET_ON_ERROR;
+ while (match(state, '*')) {
+ parse_union_expr(state);
+ RET_ON_ERROR;
+ push_new_binary_op(OP_STAR, state);
+ }
+}
+
+/*
+ * AdditiveExpr ::= MultiplicativeExpr (AdditiveOp MultiplicativeExpr)*
+ * AdditiveOp ::= '+' | '-'
+ */
+static void parse_additive_expr(struct state *state) {
+ parse_multiplicative_expr(state);
+ RET_ON_ERROR;
+ while (*state->pos == '+' || *state->pos == '-') {
+ enum binary_op op = (*state->pos == '+') ? OP_PLUS : OP_MINUS;
+ state->pos += 1;
+ skipws(state);
+ parse_multiplicative_expr(state);
+ RET_ON_ERROR;
+ push_new_binary_op(op, state);
+ }
+}
+
+/*
+ * RelationalExpr ::= AdditiveExpr (RelationalOp AdditiveExpr)?
+ * RelationalOp ::= ">" | "<" | ">=" | "<="
+ */
+static void parse_relational_expr(struct state *state) {
+ parse_additive_expr(state);
+ RET_ON_ERROR;
+ if (*state->pos == '<' || *state->pos == '>') {
+ enum binary_op op = (*state->pos == '<') ? OP_LT : OP_GT;
+ state->pos += 1;
+ if (*state->pos == '=') {
+ op = (op == OP_LT) ? OP_LE : OP_GE;
+ state->pos += 1;
+ }
+ skipws(state);
+ parse_additive_expr(state);
+ RET_ON_ERROR;
+ push_new_binary_op(op, state);
+ }
+}
+
+/*
+ * EqualityExpr ::= RelationalExpr (EqualityOp RelationalExpr)? | ReMatchExpr
+ * EqualityOp ::= "=" | "!="
+ * ReMatchExpr ::= RelationalExpr MatchOp RelationalExpr
+ * MatchOp ::= "=~" | "!~"
+ */
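+/* For illustration: "label = 'x'" and "label != 'x'" compare values,
+ * while "label =~ 'x.*'" and "label !~ 'x.*'" match against a regular
+ * expression (OP_RE_MATCH / OP_RE_NOMATCH below).
+ */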
+static void parse_equality_expr(struct state *state) {
+ parse_relational_expr(state);
+ RET_ON_ERROR;
+ if ((*state->pos == '=' || *state->pos == '!') && state->pos[1] == '~') {
+ enum binary_op op = (*state->pos == '=') ? OP_RE_MATCH : OP_RE_NOMATCH;
+ state->pos += 2;
+ skipws(state);
+ parse_relational_expr(state);
+ RET_ON_ERROR;
+ push_new_binary_op(op, state);
+ } else if (*state->pos == '=' ||
+ (*state->pos == '!' && state->pos[1] == '=')) {
+ enum binary_op op = (*state->pos == '=') ? OP_EQ : OP_NEQ;
+ state->pos += (op == OP_EQ) ? 1 : 2;
+ skipws(state);
+ parse_relational_expr(state);
+ RET_ON_ERROR;
+ push_new_binary_op(op, state);
+ }
+}
+
+/*
+ * AndExpr ::= EqualityExpr ('and' EqualityExpr)*
+ */
+static void parse_and_expr(struct state *state) {
+ parse_equality_expr(state);
+ RET_ON_ERROR;
+ while (*state->pos == 'a' && state->pos[1] == 'n'
+ && state->pos[2] == 'd') {
+ state->pos += 3;
+ skipws(state);
+ parse_equality_expr(state);
+ RET_ON_ERROR;
+ push_new_binary_op(OP_AND, state);
+ }
+}
+
+/*
+ * OrExpr ::= AndExpr ('or' AndExpr)*
+ */
+static void parse_or_expr(struct state *state) {
+ parse_and_expr(state);
+ RET_ON_ERROR;
+ while (*state->pos == 'o' && state->pos[1] == 'r') {
+ state->pos += 2;
+ skipws(state);
+ parse_and_expr(state);
+ RET_ON_ERROR;
+ push_new_binary_op(OP_OR, state);
+ }
+}
+
+/*
+ * ElseExpr ::= OrExpr ('else' OrExpr)*
+ */
+static void parse_else_expr(struct state *state) {
+ parse_or_expr(state);
+ RET_ON_ERROR;
+ while (*state->pos == 'e' && state->pos[1] == 'l'
+ && state->pos[2] == 's' && state->pos[3] == 'e' ) {
+ state->pos += 4;
+ skipws(state);
+ parse_or_expr(state);
+ RET_ON_ERROR;
+ push_new_binary_op(OP_ELSE, state);
+ state->has_else = 1;
+ }
+}
+
+/*
+ * Expr ::= ElseExpr
+ */
+static void parse_expr(struct state *state) {
+ skipws(state);
+ parse_else_expr(state);
+}
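+/* Note that the chain of parse functions above fixes operator precedence;
+ * from loosest to tightest binding:
+ *   else, or, and, (= != =~ !~), (< > <= >=), (+ -), *, |
+ */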
+
+static void store_error(struct pathx *pathx) {
+ const char *pathx_msg = NULL;
+ const char *path = pathx->state->txt;
+ const pathx_errcode_t errcode = pathx->state->errcode;
+ struct error *err = pathx->state->error;
+
+ char *pos_str = pathx->state->errmsg;
+ pathx->state->errmsg = NULL;
+
+ if (err == NULL || errcode == PATHX_NOERROR || err->code != AUG_NOERROR)
+ return;
+
+ switch (errcode) {
+ case PATHX_ENOMEM:
+ err->code = AUG_ENOMEM;
+ break;
+ case PATHX_EMMATCH:
+ err->code = AUG_EMMATCH;
+ break;
+ case PATHX_ENOMATCH:
+ err->code = AUG_ENOMATCH;
+ break;
+ default:
+ err->code = AUG_EPATHX;
+ break;
+ }
+
+ /* We only need details for pathx syntax errors */
+ if (err->code != AUG_EPATHX)
+ return;
+
+ int pos;
+ pathx_msg = pathx_error(pathx, NULL, &pos);
+
+ bool has_msg = pos_str != NULL;
+ int pos_str_len = pos_str == NULL ? 0 : strlen(pos_str);
+ if (REALLOC_N(pos_str, pos_str_len + strlen(path) + 8) >= 0) {
+ if (has_msg) {
+ strcat(pos_str, " in ");
+ strncat(pos_str, path, pos);
+ } else {
+ /* initialize pos_str explicitly, path might be "" */
+ pos_str[0] = '\0';
+ strncat(pos_str, path, pos);
+ }
+ strcat(pos_str, "|=|");
+ strcat(pos_str, path + pos);
+ }
+
+ err->minor = errcode;
+ err->details = pos_str;
+ pos_str = NULL;
+ err->minor_details = pathx_msg;
+}
+
+int pathx_parse(const struct tree *tree,
+ struct error *err,
+ const char *txt,
+ bool need_nodeset,
+ struct pathx_symtab *symtab,
+ struct tree *root_ctx,
+ struct pathx **pathx) {
+ struct state *state = NULL;
+
+ *pathx = NULL;
+
+ if (ALLOC(*pathx) < 0)
+ goto oom;
+
+ (*pathx)->origin = (struct tree *) tree;
+
+ /* Set up state */
+ if (ALLOC((*pathx)->state) < 0)
+ goto oom;
+ state = (*pathx)->state;
+
+ state->errcode = PATHX_NOERROR;
+ state->errmsg = NULL;
+ state->txt = txt;
+ state->pos = txt;
+ state->symtab = symtab;
+ state->root_ctx = root_ctx;
+ state->error = err;
+
+ if (ALLOC_N(state->value_pool, 8) < 0) {
+ STATE_ENOMEM;
+ goto done;
+ }
+ state->value_pool_size = 8;
+ state->value_pool[0].tag = T_BOOLEAN;
+ state->value_pool[0].boolval = 0;
+ state->value_pool[1].tag = T_BOOLEAN;
+ state->value_pool[1].boolval = 1;
+ state->value_pool_used = 2;
+
+ /* Parse */
+ parse_expr(state);
+ if (HAS_ERROR(state))
+ goto done;
+ if (state->pos != state->txt + strlen(state->txt)) {
+ STATE_ERROR(state, PATHX_EEND);
+ goto done;
+ }
+
+ if (state->exprs_used != 1) {
+ STATE_ERROR(state, PATHX_EINTERNAL);
+ goto done;
+ }
+
+ /* Typecheck */
+ check_expr(state->exprs[0], state);
+ if (HAS_ERROR(state))
+ goto done;
+
+ if (need_nodeset && state->exprs[0]->type != T_NODESET) {
+ STATE_ERROR(state, PATHX_ETYPE);
+ goto done;
+ }
+
+ done:
+ store_error(*pathx);
+ return state->errcode;
+ oom:
+ free_pathx(*pathx);
+ *pathx = NULL;
+ if (err != NULL)
+ err->code = AUG_ENOMEM;
+ return PATHX_ENOMEM;
+}
+
+/*************************************************************************
+ * Searching in the tree
+ *************************************************************************/
+
+static bool step_matches(struct step *step, struct tree *tree) {
+ if ( step->axis == SEQ && step->name == NULL ) {
+ if ( tree->label == NULL )
+ return false;
+ /* label matches if it consists of numeric digits only */
+ for( char *s = tree->label; *s ; s++) {
+ if ( ! isdigit(*s) )
+ return false;
+ }
+ return true;
+ } else if (step->name == NULL) {
+ return step->axis == ROOT || tree->label != NULL;
+ } else {
+ return streqx(step->name, tree->label);
+ }
+}
+
+static struct tree *tree_prev(struct tree *pos) {
+ struct tree *node = NULL;
+ if (pos != pos->parent->children) {
+ for (node = pos->parent->children;
+ node->next != pos;
+ node = node->next);
+ }
+ return node;
+}
+
+/* When the first step does not begin at the ROOT axis, use the relative
+ * root context instead. */
+static struct tree *step_root(struct step *step, struct tree *ctx,
+ struct tree *root_ctx) {
+ struct tree *node = NULL;
+ switch (step->axis) {
+ case SELF:
+ case CHILD:
+ case SEQ:
+ case DESCENDANT:
+ case PARENT:
+ case ANCESTOR:
+ case PRECEDING_SIBLING:
+ case FOLLOWING_SIBLING:
+ /* only use root_ctx when ctx is the absolute tree root */
+ if (ctx == ctx->parent && root_ctx != NULL)
+ node = root_ctx;
+ else
+ node = ctx;
+ break;
+ case ROOT:
+ case DESCENDANT_OR_SELF:
+ node = ctx;
+ break;
+ default:
+ assert(0);
+ }
+ if (node == NULL)
+ return NULL;
+ return node;
+}
+
+static struct tree *step_first(struct step *step, struct tree *ctx) {
+ struct tree *node = NULL;
+ switch (step->axis) {
+ case SELF:
+ case DESCENDANT_OR_SELF:
+ node = ctx;
+ break;
+ case CHILD:
+ case SEQ:
+ case DESCENDANT:
+ node = ctx->children;
+ break;
+ case PARENT:
+ case ANCESTOR:
+ node = ctx->parent;
+ break;
+ case ROOT:
+ while (ctx->parent != ctx)
+ ctx = ctx->parent;
+ node = ctx;
+ break;
+ case PRECEDING_SIBLING:
+ node = tree_prev(ctx);
+ break;
+ case FOLLOWING_SIBLING:
+ node = ctx->next;
+ break;
+ default:
+ assert(0);
+ }
+ if (node == NULL)
+ return NULL;
+ if (step_matches(step, node))
+ return node;
+ return step_next(step, ctx, node);
+}
+
+static struct tree *step_next(struct step *step, struct tree *ctx,
+ struct tree *node) {
+ while (node != NULL) {
+ switch (step->axis) {
+ case SELF:
+ node = NULL;
+ break;
+ case SEQ:
+ case CHILD:
+ node = node->next;
+ break;
+ case DESCENDANT:
+ case DESCENDANT_OR_SELF:
+ if (node->children != NULL) {
+ node = node->children;
+ } else {
+ while (node->next == NULL && node != ctx)
+ node = node->parent;
+ if (node == ctx)
+ node = NULL;
+ else
+ node = node->next;
+ }
+ break;
+ case PARENT:
+ case ROOT:
+ node = NULL;
+ break;
+ case ANCESTOR:
+ if (node->parent == node)
+ node = NULL;
+ else
+ node = node->parent;
+ break;
+ case PRECEDING_SIBLING:
+ node = tree_prev(node);
+ break;
+ case FOLLOWING_SIBLING:
+ node = node->next;
+ break;
+ default:
+ assert(0);
+ }
+ if (node != NULL && step_matches(step, node))
+ break;
+ }
+ return node;
+}
+
+static struct value *pathx_eval(struct pathx *pathx) {
+ struct state *state = pathx->state;
+ state->ctx = pathx->origin;
+ state->ctx_pos = 1;
+ state->ctx_len = 1;
+ eval_expr(state->exprs[0], state);
+ if (HAS_ERROR(state))
+ return NULL;
+
+ if (state->values_used != 1) {
+ STATE_ERROR(state, PATHX_EINTERNAL);
+ return NULL;
+ }
+ return pop_value(state);
+}
+
+struct tree *pathx_next(struct pathx *pathx) {
+ if (pathx->node + 1 < pathx->nodeset->used)
+ return pathx->nodeset->nodes[++pathx->node];
+ return NULL;
+}
+
+/* Find the first node in TREE matching PATH. */
+struct tree *pathx_first(struct pathx *pathx) {
+ if (pathx->nodeset == NULL) {
+ struct value *v = pathx_eval(pathx);
+
+ if (HAS_ERROR(pathx->state))
+ goto error;
+ assert(v->tag == T_NODESET);
+ pathx->nodeset = v->nodeset;
+ }
+ pathx->node = 0;
+ if (pathx->nodeset->used == 0)
+ return NULL;
+ else
+ return pathx->nodeset->nodes[0];
+ error:
+ store_error(pathx);
+ return NULL;
+}
+
+/* Find a node in the tree that matches the longest prefix of PATH.
+ *
+ * Return 1 if a node was found that exactly matches PATH, 0 if an incomplete
+ * prefix matches, and -1 if more than one node in the tree match.
+ *
+ * TMATCH is set to the tree node that matches, and SMATCH to the next step
+ * after the one where TMATCH matched. If no node matches or multiple nodes
+ * at the same depth match, TMATCH and SMATCH will be NULL. When exactly
+ * one node matches, TMATCH will be that node, and SMATCH will be NULL.
+ */
+static int locpath_search(struct locpath_trace *lpt,
+ struct tree **tmatch, struct step **smatch) {
+ int last;
+ int result = -1;
+
+ for (last=lpt->maxns; last >= 0 && lpt->ns[last]->used == 0; last--);
+ if (last < 0) {
+ *smatch = lpt->lp->steps;
+ result = 1;
+ goto done;
+ }
+ if (lpt->ns[last]->used > 1) {
+ result = -1;
+ goto done;
+ }
+ result = 0;
+ *tmatch = lpt->ns[last]->nodes[0];
+ *smatch = lpt->lp->steps;
+ for (int i=0; i < last; i++)
+ *smatch = (*smatch)->next;
+ done:
+ for (int i=0; i < lpt->maxns; i++)
+ free_nodeset(lpt->ns[i]);
+ FREE(lpt->ns);
+ return result;
+}
+
+static char *step_seq_choose_name(struct pathx *path, struct tree *tree);
+
+/* Expand the tree ROOT so that it contains all components of PATH. PATH
+ * must have been initialized against ROOT by a call to PATH_FIND_ONE.
+ *
+ * Return 1 if new nodes were created, with TREE set to the last node
+ * created; return 0 if PATH already matched exactly one node, with TREE
+ * set to that node; return -1 on error, with TREE set to NULL.
+ */
+int pathx_expand_tree(struct pathx *path, struct tree **tree) {
+ int r;
+ struct step *step = NULL;
+ struct locpath_trace lpt;
+ struct tree *first_child = NULL;
+ struct value *v = NULL;
+
+ MEMZERO(&lpt, 1);
+ path->state->locpath_trace = &lpt;
+ v = pathx_eval(path);
+ path->state->locpath_trace = NULL;
+ if (HAS_ERROR(path->state))
+ goto error;
+
+ if (lpt.maxns == 0) {
+ if (v->tag != T_NODESET || v->nodeset->used == 0) {
+ STATE_ERROR(path->state, PATHX_ENOMATCH);
+ goto error;
+ }
+ if (v->nodeset->used > 1)
+ goto error;
+ *tree = v->nodeset->nodes[0];
+ return 0;
+ }
+
+ *tree = path->origin;
+ r = locpath_search(&lpt, tree, &step);
+ if (r == -1) {
+ STATE_ERROR(path->state, PATHX_EMMATCH);
+ goto error;
+ }
+
+ if (step == NULL)
+ return 0;
+
+ struct tree *parent = *tree;
+ if (parent == NULL)
+ parent = path->origin;
+
+ list_for_each(s, step) {
+ if (s->axis != CHILD && s->axis != SEQ)
+ goto error;
+ if (s->axis==SEQ && s->name == NULL) {
+ s->name = step_seq_choose_name(path, parent);
+ }
+ if (s->name == NULL )
+ goto error;
+ struct tree *t = make_tree(strdup(s->name), NULL, parent, NULL);
+ if (first_child == NULL)
+ first_child = t;
+ if (t == NULL || t->label == NULL)
+ goto error;
+ list_append(parent->children, t);
+ parent = t;
+ }
+
+ while (first_child->children != NULL)
+ first_child = first_child->children;
+
+ *tree = first_child;
+ return 1;
+
+ error:
+ if (first_child != NULL) {
+ list_remove(first_child, first_child->parent->children);
+ free_tree(first_child);
+ }
+ *tree = NULL;
+ store_error(path);
+ return -1;
+}
+
+/* Generate a numeric string to use for step->name.
+ * Scan tree->children for the highest numbered label and add 1 to it;
+ * numeric labels may be interspersed with #comment or other labels.
+ */
+static char *step_seq_choose_name(struct pathx *path, struct tree *tree) {
+ unsigned long int max_node_n=0;
+ unsigned long int node_n;
+ char *step_name;
+ char *label_end;
+ for(tree=tree->children; tree!=NULL; tree=tree->next) {
+ if ( tree->label == NULL)
+ continue;
+ node_n=strtoul(tree->label, &label_end, 10);
+ if ( label_end == tree->label || *label_end != '\0' )
+ /* label is not a number - ignore it */
+ continue;
+ if ( node_n >= ULONG_MAX ) {
+ STATE_ERROR(path->state, PATHX_ENUMBER);
+ return NULL;
+ }
+ if( node_n > max_node_n )
+ max_node_n = node_n;
+ }
+ if (asprintf(&step_name,"%lu",max_node_n+1) >= 0)
+ return step_name;
+ else
+ return NULL;
+}
+
+int pathx_find_one(struct pathx *path, struct tree **tree) {
+ *tree = pathx_first(path);
+ if (HAS_ERROR(path->state))
+ return -1;
+ return path->nodeset->used;
+}
+
+struct error *err_of_pathx(struct pathx *px) {
+ return px->state->error;
+}
+
+const char *pathx_error(struct pathx *path, const char **txt, int *pos) {
+ int errcode = PATHX_ENOMEM;
+
+ if (path != NULL) {
+ if (path->state->errcode < ARRAY_CARDINALITY(errcodes))
+ errcode = path->state->errcode;
+ else
+ errcode = PATHX_EINTERNAL;
+
+ if (txt)
+ *txt = path->state->txt;
+
+ if (pos)
+ *pos = path->state->pos - path->state->txt;
+ }
+ return errcodes[errcode];
+}
+
+/*
+ * Symbol tables
+ */
+static struct pathx_symtab
+*make_symtab(struct pathx_symtab *symtab, const char *name,
+ struct value *value)
+{
+ struct pathx_symtab *new;
+ char *n = NULL;
+
+ n = strdup(name);
+ if (n == NULL)
+ return NULL;
+
+ if (ALLOC(new) < 0) {
+ free(n);
+ return NULL;
+ }
+ new->name = n;
+ new->value = value;
+ if (symtab == NULL) {
+ return new;
+ } else {
+ new->next = symtab->next;
+ symtab->next = new;
+ }
+ return symtab;
+}
+
+void free_symtab(struct pathx_symtab *symtab) {
+
+ while (symtab != NULL) {
+ struct pathx_symtab *del = symtab;
+ symtab = del->next;
+ free(del->name);
+ release_value(del->value);
+ free(del->value);
+ free(del);
+ }
+}
+
+struct pathx_symtab *pathx_get_symtab(struct pathx *pathx) {
+ return pathx->state->symtab;
+}
+
+static int pathx_symtab_set(struct pathx_symtab **symtab,
+ const char *name, struct value *v) {
+ int found = 0;
+
+ list_for_each(tab, *symtab) {
+ if (STREQ(tab->name, name)) {
+ release_value(tab->value);
+ free(tab->value);
+ tab->value = v;
+ found = 1;
+ break;
+ }
+ }
+
+ if (!found) {
+ struct pathx_symtab *new = NULL;
+
+ new = make_symtab(*symtab, name, v);
+ if (new == NULL)
+ goto error;
+ *symtab = new;
+ }
+ return 0;
+ error:
+ return -1;
+}
+
+int pathx_symtab_define(struct pathx_symtab **symtab,
+ const char *name, struct pathx *px) {
+ int r;
+ struct value *value = NULL, *v = NULL;
+ struct state *state = px->state;
+
+ value = pathx_eval(px);
+ if (HAS_ERROR(px->state))
+ goto error;
+
+ if (ALLOC(v) < 0) {
+ STATE_ENOMEM;
+ goto error;
+ }
+
+ *v = *value;
+ value->tag = T_BOOLEAN;
+
+ r = pathx_symtab_set(symtab, name, v);
+ if (r < 0) {
+ STATE_ENOMEM;
+ goto error;
+ }
+
+ if (v->tag == T_NODESET)
+ return v->nodeset->used;
+ else
+ return 0;
+ error:
+ release_value(value);
+ free(value);
+ release_value(v);
+ free(v);
+ store_error(px);
+ return -1;
+}
+
+int pathx_symtab_undefine(struct pathx_symtab **symtab, const char *name) {
+ struct pathx_symtab *del = NULL;
+
+ for(del = *symtab;
+ del != NULL && !STREQ(del->name, name);
+ del = del->next);
+ if (del == NULL)
+ return 0;
+ list_remove(del, *symtab);
+ free_symtab(del);
+ return 0;
+}
+
+int pathx_symtab_assign_tree(struct pathx_symtab **symtab,
+ const char *name, struct tree *tree) {
+ struct value *v = NULL;
+ int r;
+
+ if (ALLOC(v) < 0)
+ goto error;
+
+ v->tag = T_NODESET;
+ if (ALLOC(v->nodeset) < 0)
+ goto error;
+ if (ALLOC_N(v->nodeset->nodes, 1) < 0)
+ goto error;
+ v->nodeset->used = 1;
+ v->nodeset->size = 1;
+ v->nodeset->nodes[0] = tree;
+
+ r = pathx_symtab_set(symtab, name, v);
+ if (r < 0)
+ goto error;
+ return 1;
+ error:
+ release_value(v);
+ free(v);
+ return -1;
+}
+
+int
+pathx_symtab_count(const struct pathx_symtab *symtab, const char *name) {
+ struct value *v = lookup_var(name, symtab);
+
+ if (v == NULL || v->tag != T_NODESET)
+ return -1;
+
+ return v->nodeset->used;
+}
+
+struct tree *
+pathx_symtab_get_tree(struct pathx_symtab *symtab,
+ const char *name, int i) {
+ struct value *v = lookup_var(name, symtab);
+ if (v == NULL)
+ return NULL;
+ if (v->tag != T_NODESET)
+ return NULL;
+ if (i >= v->nodeset->used)
+ return NULL;
+ return v->nodeset->nodes[i];
+}
+
+void pathx_symtab_remove_descendants(struct pathx_symtab *symtab,
+ const struct tree *tree) {
+ list_for_each(tab, symtab) {
+ if (tab->value->tag != T_NODESET)
+ continue;
+ struct nodeset *ns = tab->value->nodeset;
+ for (int i=0; i < ns->used;) {
+ struct tree *t = ns->nodes[i];
+ while (t != t->parent && t != tree)
+ t = t->parent;
+ if (t == tree)
+ ns_remove(ns, i, 1);
+ else
+ i += 1;
+ }
+ }
+}
+
+/*
+ * Local variables:
+ * indent-tabs-mode: nil
+ * c-indent-level: 4
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
--- /dev/null
+/*
+ * put.c:
+ *
+ * Copyright (C) 2007-2016 David Lutterkort
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: David Lutterkort <dlutter@redhat.com>
+ */
+
+#include <config.h>
+
+#include <stdarg.h>
+#include "regexp.h"
+#include "memory.h"
+#include "lens.h"
+#include "errcode.h"
+
+/* Data structure to keep track of where we are in the tree. The split
+ * describes a sublist of the list of siblings in the current tree. The
+ * put_* functions don't operate on the tree directly; instead, they
+ * operate on a split.
+ *
+ * The TREE field points to the first tree node for the current invocation
+ * of put_*, FOLLOW points to the first sibling following TREE that is not
+ * part of the split anymore (NULL if we are talking about all the siblings
+ * of TREE)
+ *
+ * ENC is a string containing the encoding of the current position in the
+ * tree. The encoding is
+ * <label>=<value>/<label>=<value>/.../<label>=<value>/
+ * where the label/value pairs come from TREE and its
+ * siblings. The encoding uses ENC_EQ instead of the '=' above to avoid
+ * clashes with legitimate values, and encodes NULL values as ENC_NULL.
+ */
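+/* For illustration, a sibling list with entries ("ip", "127.0.0.1") and
+ * ("#comment", NULL) would be encoded as
+ *   ip ENC_EQ 127.0.0.1 ENC_SLASH #comment ENC_EQ ENC_NULL ENC_SLASH
+ * with the ENC_* markers standing for the literal strings they expand to.
+ */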
+struct split {
+ struct split *next;
+ struct tree *tree;
+ struct tree *follow;
+ char *enc;
+ size_t start;
+ size_t end;
+};
+
+struct state {
+ FILE *out;
+ struct split *split;
+ const struct tree *tree;
+ const char *override;
+ struct dict *dict;
+ struct skel *skel;
+ char *path; /* Position in the tree, for errors */
+ size_t pos;
+ bool with_span;
+ struct info *info;
+ struct lns_error *error;
+};
+
+static void create_lens(struct lens *lens, struct state *state);
+static void put_lens(struct lens *lens, struct state *state);
+
+static void put_error(struct state *state, struct lens *lens,
+ const char *format, ...)
+{
+ va_list ap;
+ int r;
+
+ if (state->error != NULL)
+ return;
+
+ if (ALLOC(state->error) < 0)
+ return;
+ state->error->lens = ref(lens);
+ state->error->pos = -1;
+ if (strlen(state->path) == 0) {
+ state->error->path = strdup("");
+ } else {
+ state->error->path = strdup(state->path);
+ }
+
+ va_start(ap, format);
+ r = vasprintf(&state->error->message, format, ap);
+ va_end(ap);
+ if (r == -1)
+ state->error->message = NULL;
+}
+
+ATTRIBUTE_PURE
+static int enclen(const char *key, const char *value) {
+ return ENCLEN(key) + strlen(ENC_EQ) + ENCLEN(value)
+ + strlen(ENC_SLASH);
+}
+
+static char *encpcpy(char *e, const char *key, const char *value) {
+ e = stpcpy(e, ENCSTR(key));
+ e = stpcpy(e, ENC_EQ);
+ e = stpcpy(e, ENCSTR(value));
+ e = stpcpy(e, ENC_SLASH);
+ return e;
+}
+
+static void regexp_match_error(struct state *state, struct lens *lens,
+ int count, struct split *split) {
+ char *text = NULL;
+ char *pat = NULL;
+
+ lns_format_atype(lens, &pat);
+ text = enc_format_indent(split->enc + split->start,
+ split->end - split->start,
+ 4);
+
+ if (count == -1) {
+ put_error(state, lens,
+ "Failed to match tree under %s\n\n%s\n with pattern\n %s\n",
+ state->path, text, pat);
+ } else if (count == -2) {
+ put_error(state, lens,
+ "Internal error matching\n %s\n with tree\n %s\n",
+ pat, text);
+ } else if (count == -3) {
+ /* Should have been caught by the typechecker */
+ put_error(state, lens, "Syntax error in tree schema\n %s\n", pat);
+ }
+ free(pat);
+ free(text);
+}
+
+static void free_split(struct split *split) {
+ if (split == NULL)
+ return;
+
+ free(split->enc);
+ free(split);
+}
+
+/* Encode the list of TREE's children as a string.
+ */
+static struct split *make_split(struct tree *tree) {
+ struct split *split;
+
+ if (ALLOC(split) < 0)
+ return NULL;
+
+ split->tree = tree;
+ list_for_each(t, tree) {
+ split->end += enclen(t->label, t->value);
+ }
+
+ if (ALLOC_N(split->enc, split->end + 1) < 0)
+ goto error;
+
+ char *enc = split->enc;
+ list_for_each(t, tree) {
+ enc = encpcpy(enc, t->label, t->value);
+ }
+ return split;
+ error:
+ free_split(split);
+ return NULL;
+}
+
+static struct split *split_append(struct split **split, struct split *tail,
+ struct tree *tree, struct tree *follow,
+ char *enc, size_t start, size_t end) {
+ struct split *sp;
+ if (ALLOC(sp) < 0)
+ return NULL;
+ sp->tree = tree;
+ sp->follow = follow;
+ sp->enc = enc;
+ sp->start = start;
+ sp->end = end;
+ list_tail_cons(*split, tail, sp);
+ return tail;
+}
+
+static struct split *next_split(struct state *state) {
+ if (state->split != NULL) {
+ state->split = state->split->next;
+ if (state->split != NULL)
+ state->pos = state->split->end;
+ }
+ return state->split;
+}
+
+static struct split *set_split(struct state *state, struct split *split) {
+ state->split = split;
+ if (split != NULL)
+ state->pos = split->end;
+ return split;
+}
+
+/* Refine a tree split OUTER according to the L_CONCAT lens LENS */
+static struct split *split_concat(struct state *state, struct lens *lens) {
+ assert(lens->tag == L_CONCAT);
+
+ int count = 0;
+ struct split *outer = state->split;
+ struct re_registers regs;
+ struct split *split = NULL, *tail = NULL;
+ struct regexp *atype = lens->atype;
+
+ MEMZERO(®s, 1);
+
+ /* Fast path for leaf nodes, which will always lead to an empty split */
+ // FIXME: This doesn't match the empty encoding
+ if (outer->tree == NULL && strlen(outer->enc) == 0
+ && regexp_is_empty_pattern(atype)) {
+ for (int i=0; i < lens->nchildren; i++) {
+ tail = split_append(&split, tail, NULL, NULL,
+ outer->enc, 0, 0);
+ if (tail == NULL)
+ goto error;
+ }
+ return split;
+ }
+
+ count = regexp_match(atype, outer->enc, outer->end,
+ outer->start, ®s);
+ if (count >= 0 && count != outer->end - outer->start)
+ count = -1;
+ if (count < 0) {
+ regexp_match_error(state, lens, count, outer);
+ goto error;
+ }
+
+ struct tree *cur = outer->tree;
+ int reg = 1;
+ for (int i=0; i < lens->nchildren; i++) {
+ assert(reg < regs.num_regs);
+ assert(regs.start[reg] != -1);
+ struct tree *follow = cur;
+ for (int j = regs.start[reg]; j < regs.end[reg]; j++) {
+ if (outer->enc[j] == ENC_SLASH_CH)
+ follow = follow->next;
+ }
+ tail = split_append(&split, tail, cur, follow,
+ outer->enc, regs.start[reg], regs.end[reg]);
+ cur = follow;
+ reg += 1 + regexp_nsub(lens->children[i]->atype);
+ }
+ assert(reg < regs.num_regs);
+ done:
+ free(regs.start);
+ free(regs.end);
+ return split;
+ error:
+ free_split(split);
+ split = NULL;
+ goto done;
+}
+
+static struct split *split_iter(struct state *state, struct lens *lens) {
+ assert(lens->tag == L_STAR);
+
+ int count = 0;
+ struct split *outer = state->split;
+ struct split *split = NULL;
+ struct regexp *atype = lens->child->atype;
+
+ struct tree *cur = outer->tree;
+ int pos = outer->start;
+ struct split *tail = NULL;
+ while (pos < outer->end) {
+ count = regexp_match(atype, outer->enc, outer->end, pos, NULL);
+ if (count == -1) {
+ break;
+ } else if (count < -1) {
+ regexp_match_error(state, lens->child, count, outer);
+ goto error;
+ }
+
+ struct tree *follow = cur;
+ for (int j = pos; j < pos + count; j++) {
+ if (outer->enc[j] == ENC_SLASH_CH)
+ follow = follow->next;
+ }
+ tail = split_append(&split, tail, cur, follow,
+ outer->enc, pos, pos + count);
+ cur = follow;
+ pos += count;
+ }
+ return split;
+ error:
+ free_split(split);
+ return NULL;
+}
+
+/* Check if LENS applies to the current split in STATE */
+static int applies(struct lens *lens, struct state *state) {
+ int count;
+ struct split *split = state->split;
+
+ count = regexp_match(lens->atype, split->enc, split->end,
+ split->start, NULL);
+ if (count < -1) {
+ regexp_match_error(state, lens, count, split);
+ return 0;
+ }
+
+ if (count != split->end - split->start)
+ return 0;
+ if (count == 0 && lens->value)
+ return state->tree->value != NULL;
+ return 1;
+}
+
+/*
+ * Check whether SKEL has the skeleton type required by LENS
+ */
+
+static int skel_instance_of(struct lens *lens, struct skel *skel) {
+ if (skel == NULL)
+ return 0;
+
+ switch (lens->tag) {
+ case L_DEL: {
+ int count;
+ if (skel->tag != L_DEL)
+ return 0;
+ count = regexp_match(lens->regexp, skel->text, strlen(skel->text),
+ 0, NULL);
+ return count == strlen(skel->text);
+ }
+ case L_STORE:
+ return skel->tag == L_STORE;
+ case L_KEY:
+ return skel->tag == L_KEY;
+ case L_LABEL:
+ return skel->tag == L_LABEL;
+ case L_VALUE:
+ return skel->tag == L_VALUE;
+ case L_SEQ:
+ return skel->tag == L_SEQ;
+ case L_COUNTER:
+ return skel->tag == L_COUNTER;
+ case L_CONCAT:
+ {
+ if (skel->tag != L_CONCAT)
+ return 0;
+ struct skel *s = skel->skels;
+ for (int i=0; i < lens->nchildren; i++) {
+ if (! skel_instance_of(lens->children[i], s))
+ return 0;
+ s = s->next;
+ }
+ return 1;
+ }
+ break;
+ case L_UNION:
+ {
+ for (int i=0; i < lens->nchildren; i++) {
+ if (skel_instance_of(lens->children[i], skel))
+ return 1;
+ }
+ return 0;
+ }
+ break;
+ case L_SUBTREE:
+ return skel->tag == L_SUBTREE;
+ case L_MAYBE:
+ return skel->tag == L_MAYBE || skel_instance_of(lens->child, skel);
+ case L_STAR:
+ if (skel->tag != L_STAR)
+ return 0;
+ list_for_each(s, skel->skels) {
+ if (! skel_instance_of(lens->child, s))
+ return 0;
+ }
+ return 1;
+ case L_REC:
+ return skel_instance_of(lens->body, skel);
+ case L_SQUARE:
+ return skel->tag == L_SQUARE
+ && skel_instance_of(lens->child, skel->skels);
+ default:
+ BUG_ON(true, lens->info, "illegal lens tag %d", lens->tag);
+ break;
+ }
+ error:
+ return 0;
+}
+
+enum span_kind { S_NONE, S_LABEL, S_VALUE };
+
+static void emit(struct state *state, const char *text, enum span_kind kind) {
+ struct span* span = state->tree->span;
+
+ if (span != NULL) {
+ long start = ftell(state->out);
+ if (kind == S_LABEL) {
+ span->label_start = start;
+ } else if (kind == S_VALUE) {
+ span->value_start = start;
+ }
+ }
+ fprintf(state->out, "%s", text);
+ if (span != NULL) {
+ long end = ftell(state->out);
+ if (kind == S_LABEL) {
+ span->label_end = end;
+ } else if (kind == S_VALUE) {
+ span->value_end = end;
+ }
+ }
+}
+
+/*
+ * put
+ */
+static void put_subtree(struct lens *lens, struct state *state) {
+ assert(lens->tag == L_SUBTREE);
+ struct state oldstate = *state;
+ struct split oldsplit = *state->split;
+ char * oldpath = state->path;
+
+ struct tree *tree = state->split->tree;
+ struct split *split = NULL;
+
+ state->tree = tree;
+ state->path = path_of_tree(tree);
+
+ split = make_split(tree->children);
+ set_split(state, split);
+
+ dict_lookup(tree->label, state->dict, &state->skel, &state->dict);
+ if (state->with_span) {
+ if (tree->span == NULL) {
+ tree->span = make_span(state->info);
+ }
+ tree->span->span_start = ftell(state->out);
+ }
+ if (state->skel == NULL || ! skel_instance_of(lens->child, state->skel)) {
+ create_lens(lens->child, state);
+ } else {
+ put_lens(lens->child, state);
+ }
+ assert(state->error != NULL || state->split->next == NULL);
+ if (tree->span != NULL) {
+ tree->span->span_end = ftell(state->out);
+ }
+
+ oldstate.error = state->error;
+ oldstate.path = state->path;
+ *state = oldstate;
+ *state->split = oldsplit;
+ free_split(split);
+ free(state->path);
+ state->path = oldpath;
+}
+
+static void put_del(ATTRIBUTE_UNUSED struct lens *lens, struct state *state) {
+ assert(lens->tag == L_DEL);
+ assert(state->skel != NULL);
+ assert(state->skel->tag == L_DEL);
+ if (state->override != NULL) {
+ emit(state, state->override, S_NONE);
+ } else {
+ emit(state, state->skel->text, S_NONE);
+ }
+}
+
+static void put_union(struct lens *lens, struct state *state) {
+ assert(lens->tag == L_UNION);
+
+ for (int i=0; i < lens->nchildren; i++) {
+ struct lens *l = lens->children[i];
+ if (applies(l, state)) {
+ if (skel_instance_of(l, state->skel))
+ put_lens(l, state);
+ else
+ create_lens(l, state);
+ return;
+ }
+ }
+ put_error(state, lens, "None of the alternatives in the union match");
+}
+
+static void put_concat(struct lens *lens, struct state *state) {
+ assert(lens->tag == L_CONCAT);
+ struct split *oldsplit = state->split;
+ struct skel *oldskel = state->skel;
+
+ struct split *split = split_concat(state, lens);
+
+ state->skel = state->skel->skels;
+ set_split(state, split);
+ for (int i=0; i < lens->nchildren; i++) {
+ if (state->split == NULL) {
+ put_error(state, lens,
+ "Not enough components in concat");
+ list_free(split);
+ return;
+ }
+ put_lens(lens->children[i], state);
+ state->skel = state->skel->next;
+ next_split(state);
+ }
+ list_free(split);
+ set_split(state, oldsplit);
+ state->skel = oldskel;
+}
+
+static void error_quant_star(struct split *last_split, struct lens *lens,
+ struct state *state, const char *enc) {
+ struct tree *child = NULL;
+ if (last_split != NULL) {
+ if (last_split->follow != NULL) {
+ child = last_split->follow;
+ } else {
+ for (child = last_split->tree;
+ child != NULL && child->next != NULL;
+ child = child->next);
+ }
+ }
+ char *text = NULL;
+ char *pat = NULL;
+
+ lns_format_atype(lens, &pat);
+ text = enc_format_indent(enc, strlen(enc), 4);
+
+ if (child == NULL) {
+ put_error(state, lens,
+ "Missing a node: can not match tree\n\n%s\n with pattern\n %s\n",
+ text, pat);
+ } else {
+ char *s = path_of_tree(child);
+ put_error(state, lens,
+ "Unexpected node '%s': can not match tree\n\n%s\n with pattern\n %s\n",
+ s, text, pat);
+ free(s);
+ }
+ free(pat);
+ free(text);
+}
+
+static void put_quant_star(struct lens *lens, struct state *state) {
+ assert(lens->tag == L_STAR);
+ struct split *oldsplit = state->split;
+ struct skel *oldskel = state->skel;
+ struct split *last_split = NULL;
+
+ struct split *split = split_iter(state, lens);
+
+ state->skel = state->skel->skels;
+ set_split(state, split);
+ last_split = state->split;
+ while (state->split != NULL && state->skel != NULL) {
+ put_lens(lens->child, state);
+ state->skel = state->skel->next;
+ last_split = state->split;
+ next_split(state);
+ }
+ while (state->split != NULL) {
+ create_lens(lens->child, state);
+ last_split = state->split;
+ next_split(state);
+ }
+ if (state->pos != oldsplit->end)
+ error_quant_star(last_split, lens, state, oldsplit->enc + state->pos);
+ list_free(split);
+ set_split(state, oldsplit);
+ state->skel = oldskel;
+}
+
+static void put_quant_maybe(struct lens *lens, struct state *state) {
+ assert(lens->tag == L_MAYBE);
+ struct lens *child = lens->child;
+
+ if (applies(child, state)) {
+ if (skel_instance_of(child, state->skel))
+ put_lens(child, state);
+ else
+ create_lens(child, state);
+ }
+}
+
+static void put_store(struct lens *lens, struct state *state) {
+ const char *value = state->tree->value;
+
+ if (value == NULL) {
+ put_error(state, lens,
+ "Can not store a nonexistent (NULL) value");
+ } else if (regexp_match(lens->regexp, value, strlen(value),
+ 0, NULL) != strlen(value)) {
+ char *pat = regexp_escape(lens->regexp);
+ put_error(state, lens,
+ "Value '%s' does not match regexp /%s/ in store lens",
+ value, pat);
+ free(pat);
+ } else {
+ emit(state, value, S_VALUE);
+ }
+}
+
+static void put_rec(struct lens *lens, struct state *state) {
+ put_lens(lens->body, state);
+}
+
+static void put_square(struct lens *lens, struct state *state) {
+ assert(lens->tag == L_SQUARE);
+ struct skel *oldskel = state->skel;
+ struct split *oldsplit = state->split;
+ struct lens *concat = lens->child;
+ struct lens *left = concat->children[0];
+ struct split *split = split_concat(state, concat);
+
+ /* the skel of the concat is nested one level deeper */
+ state->skel = state->skel->skels->skels;
+ set_split(state, split);
+ for (int i=0; i < concat->nchildren; i++) {
+ if (state->split == NULL) {
+ put_error(state, concat, "Not enough components in square");
+ list_free(split);
+ return;
+ }
+ struct lens *curr = concat->children[i];
+ if (i == (concat->nchildren - 1) && left->tag == L_KEY)
+ state->override = state->tree->label;
+ put_lens(curr, state);
+ state->override = NULL;
+ state->skel = state->skel->next;
+ next_split(state);
+ }
+ list_free(split);
+ set_split(state, oldsplit);
+ state->skel = oldskel;
+}
+
+static void put_lens(struct lens *lens, struct state *state) {
+ if (state->error != NULL)
+ return;
+
+ switch(lens->tag) {
+ case L_DEL:
+ put_del(lens, state);
+ break;
+ case L_STORE:
+ put_store(lens, state);
+ break;
+ case L_KEY:
+ emit(state, state->tree->label, S_LABEL);
+ break;
+ case L_LABEL:
+ case L_VALUE:
+ /* Nothing to do */
+ break;
+ case L_SEQ:
+ /* Nothing to do */
+ break;
+ case L_COUNTER:
+ /* Nothing to do */
+ break;
+ case L_CONCAT:
+ put_concat(lens, state);
+ break;
+ case L_UNION:
+ put_union(lens, state);
+ break;
+ case L_SUBTREE:
+ put_subtree(lens, state);
+ break;
+ case L_STAR:
+ put_quant_star(lens, state);
+ break;
+ case L_MAYBE:
+ put_quant_maybe(lens, state);
+ break;
+ case L_REC:
+ put_rec(lens, state);
+ break;
+ case L_SQUARE:
+ put_square(lens, state);
+ break;
+ default:
+ assert(0);
+ break;
+ }
+}
+
+static void create_subtree(struct lens *lens, struct state *state) {
+ put_subtree(lens, state);
+}
+
+static void create_del(struct lens *lens, struct state *state) {
+ assert(lens->tag == L_DEL);
+ if (state->override != NULL) {
+ emit(state, state->override, S_NONE);
+ } else {
+ emit(state, lens->string->str, S_NONE);
+ }
+}
+
+static void create_union(struct lens *lens, struct state *state) {
+ assert(lens->tag == L_UNION);
+
+ for (int i=0; i < lens->nchildren; i++) {
+ if (applies(lens->children[i], state)) {
+ create_lens(lens->children[i], state);
+ return;
+ }
+ }
+ put_error(state, lens, "None of the alternatives in the union match");
+}
+
+static void create_concat(struct lens *lens, struct state *state) {
+ assert(lens->tag == L_CONCAT);
+ struct split *oldsplit = state->split;
+
+ struct split *split = split_concat(state, lens);
+
+ set_split(state, split);
+ for (int i=0; i < lens->nchildren; i++) {
+ if (state->split == NULL) {
+ put_error(state, lens,
+ "Not enough components in concat");
+ list_free(split);
+ return;
+ }
+ create_lens(lens->children[i], state);
+ next_split(state);
+ }
+ list_free(split);
+ set_split(state, oldsplit);
+}
+
+static void create_square(struct lens *lens, struct state *state) {
+ assert(lens->tag == L_SQUARE);
+ struct lens *concat = lens->child;
+
+ struct split *oldsplit = state->split;
+ struct split *split = split_concat(state, concat);
+ struct lens *left = concat->children[0];
+
+ set_split(state, split);
+ for (int i=0; i < concat->nchildren; i++) {
+ if (state->split == NULL) {
+ put_error(state, concat, "Not enough components in square");
+ list_free(split);
+ return;
+ }
+ struct lens *curr = concat->children[i];
+ if (i == (concat->nchildren - 1) && left->tag == L_KEY)
+ state->override = state->tree->label;
+ create_lens(curr, state);
+ state->override = NULL;
+ next_split(state);
+ }
+ list_free(split);
+ set_split(state, oldsplit);
+}
+
+static void create_quant_star(struct lens *lens, struct state *state) {
+ assert(lens->tag == L_STAR);
+ struct split *oldsplit = state->split;
+ struct split *last_split = NULL;
+
+ struct split *split = split_iter(state, lens);
+
+ set_split(state, split);
+ last_split = state->split;
+ while (state->split != NULL) {
+ create_lens(lens->child, state);
+ last_split = state->split;
+ next_split(state);
+ }
+ if (state->pos != oldsplit->end)
+ error_quant_star(last_split, lens, state, oldsplit->enc + state->pos);
+ list_free(split);
+ set_split(state, oldsplit);
+}
+
+static void create_quant_maybe(struct lens *lens, struct state *state) {
+ assert(lens->tag == L_MAYBE);
+
+ if (applies(lens->child, state)) {
+ create_lens(lens->child, state);
+ }
+}
+
+static void create_rec(struct lens *lens, struct state *state) {
+ create_lens(lens->body, state);
+}
+
+static void create_lens(struct lens *lens, struct state *state) {
+ if (state->error != NULL)
+ return;
+ switch(lens->tag) {
+ case L_DEL:
+ create_del(lens, state);
+ break;
+ case L_STORE:
+ put_store(lens, state);
+ break;
+ case L_KEY:
+ emit(state, state->tree->label, S_LABEL);
+ break;
+ case L_LABEL:
+ case L_VALUE:
+ /* Nothing to do */
+ break;
+ case L_SEQ:
+ /* Nothing to do */
+ break;
+ case L_COUNTER:
+ /* Nothing to do */
+ break;
+ case L_CONCAT:
+ create_concat(lens, state);
+ break;
+ case L_UNION:
+ create_union(lens, state);
+ break;
+ case L_SUBTREE:
+ create_subtree(lens, state);
+ break;
+ case L_STAR:
+ create_quant_star(lens, state);
+ break;
+ case L_MAYBE:
+ create_quant_maybe(lens, state);
+ break;
+ case L_REC:
+ create_rec(lens, state);
+ break;
+ case L_SQUARE:
+ create_square(lens, state);
+ break;
+ default:
+ assert(0);
+ break;
+ }
+}
+
+void lns_put(struct info *info, FILE *out, struct lens *lens, struct tree *tree,
+ const char *text, int enable_span, struct lns_error **err) {
+ struct state state;
+ struct lns_error *err1;
+
+ if (err != NULL)
+ *err = NULL;
+ if (tree == NULL)
+ return;
+
+ MEMZERO(&state, 1);
+ state.path = strdup("/");
+ state.skel = lns_parse(lens, text, &state.dict, &err1);
+
+ if (err1 != NULL) {
+ if (err != NULL)
+ *err = err1;
+ else
+ free_lns_error(err1);
+ goto error;
+ }
+ state.out = out;
+ state.split = make_split(tree);
+ state.with_span = enable_span;
+ state.tree = tree;
+ state.info = info;
+ if (state.with_span) {
+ if (tree->span == NULL) {
+ tree->span = make_span(info);
+ }
+ tree->span->span_start = ftell(out);
+ }
+ put_lens(lens, &state);
+ if (state.with_span) {
+ tree->span->span_end = ftell(out);
+ }
+ if (err != NULL) {
+ *err = state.error;
+ } else {
+ free_lns_error(state.error);
+ }
+
+ error:
+ free(state.path);
+ free_split(state.split);
+ free_skel(state.skel);
+ free_dict(state.dict);
+}
+
+/*
+ * Local variables:
+ * indent-tabs-mode: nil
+ * c-indent-level: 4
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
--- /dev/null
+/*
+ * ref.c: reference counting
+ *
+ * Copyright (C) 2009-2016 David Lutterkort
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: David Lutterkort <lutter@redhat.com>
+ */
+
+#include <config.h>
+
+#include "ref.h"
+#include <stdlib.h>
+
+int ref_make_ref(void *ptrptr, size_t size, size_t ref_ofs) {
+ *(void**) ptrptr = calloc(1, size);
+ if (*(void **)ptrptr == NULL) {
+ return -1;
+ } else {
+ void *ptr = *(void **)ptrptr;
+ *((ref_t *) ((char*) ptr + ref_ofs)) = 1;
+ return 0;
+ }
+}
+
+/*
+ * Local variables:
+ * indent-tabs-mode: nil
+ * c-indent-level: 4
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
--- /dev/null
+/*
+ * ref.h: reference counting macros
+ *
+ * Copyright (C) 2007-2016 David Lutterkort
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: David Lutterkort <dlutter@redhat.com>
+ */
+
+#ifndef REF_H_
+#define REF_H_
+
+#include <limits.h>
+#include <stddef.h>
+
+/* Reference counting for pointers to structs with a REF field of type ref_t
+ *
+ * When a pointer to such a struct is passed into a function that stores
+ * it, the function can either "receive ownership", meaning it does not
+ * increment the reference count, or it can "take ownership", meaning it
+ * increments the reference count. In the first case, the reference is now
+ * owned by wherever the function stored it, and not the caller anymore; in
+ the second case, the caller and wherever the reference was stored both
+ * own the reference.
+ */
+// FIXME: This is not threadsafe; incr/decr ref needs to be protected
+
+#define REF_MAX UINT_MAX
+
+typedef unsigned int ref_t;
+
+int ref_make_ref(void *ptrptr, size_t size, size_t ref_ofs);
+
+#define make_ref(var) \
+ ref_make_ref(&(var), sizeof(*(var)), offsetof(typeof(*(var)), ref))
+
+#define make_ref_err(var) if (make_ref(var) < 0) goto error
+
+#define ref(s) (((s) == NULL || (s)->ref == REF_MAX) ? (s) : ((s)->ref++, (s)))
+
+#define unref(s, t) \
+ do { \
+ if ((s) != NULL && (s)->ref != REF_MAX) { \
+ assert((s)->ref > 0); \
+ if (--(s)->ref == 0) { \
+ /*memset(s, 255, sizeof(*s));*/ \
+ free_##t(s); \
+ } \
+ } \
+ (s) = NULL; \
+ } while(0)
+
+/* Make VAR uncollectable and pin it in memory for eternity */
+#define ref_pin(var) (var)->ref = REF_MAX
+
+#endif
+
+
+/*
+ * Local variables:
+ * indent-tabs-mode: nil
+ * c-indent-level: 4
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
--- /dev/null
+/*
+ * regexp.c: utilities for constructing and manipulating regexps
+ *
+ * Copyright (C) 2007-2016 David Lutterkort
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: David Lutterkort <dlutter@redhat.com>
+ */
+
+#include <config.h>
+#include <regex.h>
+
+#include "internal.h"
+#include "syntax.h"
+#include "memory.h"
+#include "errcode.h"
+
+static const struct string empty_pattern_string = {
+ .ref = REF_MAX, .str = (char *) "()"
+};
+
+static const struct string *const empty_pattern = &empty_pattern_string;
+
+char *regexp_escape(const struct regexp *r) {
+ char *pat = NULL;
+
+ if (r == NULL)
+ return strdup("");
+
+#if !HAVE_USELOCALE
+ char *nre = NULL;
+ int ret;
+ size_t nre_len;
+
+ /* Use a range with from > to to force conversion of ranges into
+ * short form */
+ ret = fa_restrict_alphabet(r->pattern->str, strlen(r->pattern->str),
+ &nre, &nre_len, 2, 1);
+ if (ret == 0) {
+ pat = escape(nre, nre_len, RX_ESCAPES);
+ free(nre);
+ }
+#endif
+
+ if (pat == NULL) {
+ /* simplify the regexp by removing some artifacts of reserving
+ characters for internal purposes */
+ if (index(r->pattern->str, RESERVED_FROM_CH)) {
+ char *s = strdup(r->pattern->str);
+ char *t = s;
+ for (char *p = s; *p; p++) {
+ if (STREQLEN(p, RESERVED_RANGE_RX, strlen(RESERVED_RANGE_RX))) {
+ /* Completely eliminate mentions of the reserved range */
+ p += strlen(RESERVED_RANGE_RX);
+ } else if (STREQLEN(p,
+ RESERVED_DOT_RX, strlen(RESERVED_DOT_RX))) {
+ /* Replace what amounts to a '.' by one */
+ p += strlen(RESERVED_DOT_RX);
+ *t++ = '.';
+ }
+ *t++ = *p;
+ }
+ *t = '\0';
+ pat = escape(s, -1, RX_ESCAPES);
+ free(s);
+ } else {
+ pat = escape(r->pattern->str, -1, RX_ESCAPES);
+ }
+ }
+
+ if (pat == NULL)
+ return NULL;
+
+ /* Remove unneeded '()' from pat */
+ for (int changed = 1; changed;) {
+ changed = 0;
+ for (char *p = pat; *p != '\0'; p++) {
+ if (*p == '(' && p[1] == ')') {
+ memmove(p, p+2, strlen(p+2)+1);
+ changed = 1;
+ }
+ }
+ }
+
+ if (pat[0] == '(' && pat[strlen(pat)-1] == ')') {
+ int level = 1;
+ for (int i=1; i < strlen(pat)-1; i++) {
+ if (pat[i] == '(')
+ level += 1;
+ if (pat[i] == ')')
+ level -= 1;
+ if (level == 0)
+ break;
+ }
+ if (level == 1) {
+ memmove(pat, pat+1, strlen(pat+1)+1);
+ pat[strlen(pat)-1] = '\0';
+ }
+ }
+
+ return pat;
+}
+
+void print_regexp(FILE *out, struct regexp *r) {
+ if (r == NULL) {
+ fprintf(out, "<NULL>");
+ return;
+ }
+
+ fputc('/', out);
+ if (r->pattern == NULL)
+ fprintf(out, "%p", r);
+ else {
+ char *rx;
+ size_t rx_len;
+ fa_restrict_alphabet(r->pattern->str, strlen(r->pattern->str),
+ &rx, &rx_len, 2, 1);
+ print_chars(out, rx, rx_len);
+ FREE(rx);
+ }
+ fputc('/', out);
+ if (r->nocase)
+ fputc('i', out);
+}
+
+struct regexp *
+make_regexp_unescape(struct info *info, const char *pat, int nocase) {
+ char *p = unescape(pat, strlen(pat), NULL);
+
+ if (p == NULL)
+ return NULL;
+ return make_regexp(info, p, nocase);
+}
+
+struct regexp *make_regexp(struct info *info, char *pat, int nocase) {
+ struct regexp *regexp;
+
+ make_ref(regexp);
+ regexp->info = ref(info);
+
+ make_ref(regexp->pattern);
+ regexp->pattern->str = pat;
+ regexp->nocase = nocase;
+ return regexp;
+}
+
+/* Take a POSIX glob and turn it into a regexp. The regexp is constructed
+ * by doing the following translations of characters in the string:
+ * * -> [^/]*
+ * ? -> [^/]
+ * leave characters escaped with a backslash alone
+ * escape any of ".|{}()+^$" with a backslash
+ *
+ * Note that this ignores some of the finer points of globs, like
+ * complementation.
+ */
+struct regexp *make_regexp_from_glob(struct info *info, const char *glob) {
+ static const char *const star = "[^/]*";
+ static const char *const qmark = "[^/]";
+ static const char *const special = ".|{}()+^$";
+ int newlen = strlen(glob);
+ char *pat = NULL;
+
+ for (const char *s = glob; *s; s++) {
+ if (*s == '\\' && *(s+1))
+ s += 1;
+ else if (*s == '*')
+ newlen += strlen(star)-1;
+ else if (*s == '?')
+ newlen += strlen(qmark)-1;
+ else if (strchr(special, *s) != NULL)
+ newlen += 1;
+ }
+
+ if (ALLOC_N(pat, newlen + 1) < 0)
+ return NULL;
+
+ char *t = pat;
+ for (const char *s = glob; *s; s++) {
+ if (*s == '\\' && *(s+1)) {
+ *t++ = *s++;
+ *t++ = *s;
+ } else if (*s == '*') {
+ t = stpcpy(t, star);
+ } else if (*s == '?') {
+ t = stpcpy(t, qmark);
+ } else if (strchr(special, *s) != NULL) {
+ *t++ = '\\';
+ *t++ = *s;
+ } else {
+ *t++ = *s;
+ }
+ }
+
+ return make_regexp(info, pat, 0);
+}
+
+void free_regexp(struct regexp *regexp) {
+ if (regexp == NULL)
+ return;
+ assert(regexp->ref == 0);
+ unref(regexp->info, info);
+ unref(regexp->pattern, string);
+ if (regexp->re != NULL) {
+ regfree(regexp->re);
+ free(regexp->re);
+ }
+ free(regexp);
+}
+
+int regexp_is_empty_pattern(struct regexp *r) {
+ for (char *s = r->pattern->str; *s; s++) {
+ if (*s != '(' && *s != ')')
+ return 0;
+ }
+ return 1;
+}
+
+struct regexp *make_regexp_literal(struct info *info, const char *text) {
+ char *pattern, *p;
+
+ /* Escape special characters in text since it should be taken
+ literally */
+ if (ALLOC_N(pattern, 2*strlen(text)+1) < 0)
+ return NULL;
+ p = pattern;
+ for (const char *t = text; *t != '\0'; t++) {
+ if ((*t == '\\') && t[1]) {
+ *p++ = *t++;
+ *p++ = *t;
+ } else if (strchr(".|{}[]()+*?", *t) != NULL) {
+ *p++ = '\\';
+ *p++ = *t;
+ } else {
+ *p++ = *t;
+ }
+ }
+ return make_regexp(info, pattern, 0);
+}
+
+struct regexp *
+regexp_union(struct info *info, struct regexp *r1, struct regexp *r2) {
+ struct regexp *r[2];
+
+ r[0] = r1;
+ r[1] = r2;
+ return regexp_union_n(info, 2, r);
+}
+
+char *regexp_expand_nocase(struct regexp *r) {
+ const char *p = r->pattern->str, *t;
+ char *s = NULL;
+ size_t len;
+ int ret;
+ int psub = 0, rsub = 0;
+
+ if (! r->nocase)
+ return strdup(p);
+
+ ret = fa_expand_nocase(p, strlen(p), &s, &len);
+ ERR_NOMEM(ret == REG_ESPACE, r->info);
+ BUG_ON(ret != REG_NOERROR, r->info, NULL);
+
+ /* Make sure that r->pattern->str and ret have the same number
+ * of parentheses/groups, since our parser critically depends
+ * on the fact that the regexp for a union/concat and those
+ * of its children have groups that are in direct relation */
+ for (t = p; *t; t++) if (*t == '(') psub += 1;
+ for (t = s; *t; t++) if (*t == '(') rsub += 1;
+ BUG_ON(psub < rsub, r->info, NULL);
+ psub -= rsub;
+ if (psub > 0) {
+ char *adjusted = NULL, *a;
+ if (ALLOC_N(adjusted, strlen(s) + 2*psub + 1) < 0)
+ ERR_NOMEM(true, r->info);
+ a = adjusted;
+ for (int i=0; i < psub; i++) *a++ = '(';
+ a = stpcpy(a, s);
+ for (int i=0; i < psub; i++) *a++ = ')';
+ free(s);
+ s = adjusted;
+ }
+ error:
+ return s;
+}
+
+static char *append_expanded(struct regexp *r, char **pat, char *p,
+ size_t *len) {
+ char *expanded = NULL;
+ size_t ofs = p - *pat;
+ int ret;
+
+ expanded = regexp_expand_nocase(r);
+ ERR_BAIL(r->info);
+
+ *len += strlen(expanded) - strlen(r->pattern->str);
+
+ ret = REALLOC_N(*pat, *len);
+ ERR_NOMEM(ret < 0, r->info);
+
+ p = stpcpy(*pat + ofs, expanded);
+ error:
+ FREE(expanded);
+ return p;
+}
+
+struct regexp *
+regexp_union_n(struct info *info, int n, struct regexp **r) {
+ size_t len = 0;
+ char *pat = NULL, *p, *expanded = NULL;
+ int nnocase = 0, npresent = 0;
+
+ for (int i=0; i < n; i++)
+ if (r[i] != NULL) {
+ len += strlen(r[i]->pattern->str) + strlen("()|");
+ npresent += 1;
+ if (r[i]->nocase)
+ nnocase += 1;
+ }
+
+ bool mixedcase = nnocase > 0 && nnocase < npresent;
+
+ if (len == 0)
+ return NULL;
+
+ if (ALLOC_N(pat, len) < 0)
+ return NULL;
+
+ p = pat;
+ int added = 0;
+ for (int i=0; i < n; i++) {
+ if (r[i] == NULL)
+ continue;
+ if (added > 0)
+ *p++ = '|';
+ *p++ = '(';
+ if (mixedcase && r[i]->nocase) {
+ p = append_expanded(r[i], &pat, p, &len);
+ ERR_BAIL(r[i]->info);
+ } else {
+ p = stpcpy(p, r[i]->pattern->str);
+ }
+ *p++ = ')';
+ added += 1;
+ }
+ *p = '\0';
+ return make_regexp(info, pat, nnocase == npresent);
+ error:
+ FREE(expanded);
+ FREE(pat);
+ return NULL;
+}
+
+struct regexp *
+regexp_concat(struct info *info, struct regexp *r1, struct regexp *r2) {
+ struct regexp *r[2];
+
+ r[0] = r1;
+ r[1] = r2;
+ return regexp_concat_n(info, 2, r);
+}
+
+struct regexp *
+regexp_concat_n(struct info *info, int n, struct regexp **r) {
+ size_t len = 0;
+ char *pat = NULL, *p, *expanded = NULL;
+ int nnocase = 0, npresent = 0;
+
+ for (int i=0; i < n; i++)
+ if (r[i] != NULL) {
+ len += strlen(r[i]->pattern->str) + strlen("()");
+ npresent += 1;
+ if (r[i]->nocase)
+ nnocase += 1;
+ }
+
+ bool mixedcase = nnocase > 0 && nnocase < npresent;
+
+ if (len == 0)
+ return NULL;
+
+ len += 1;
+ if (ALLOC_N(pat, len) < 0)
+ return NULL;
+
+ p = pat;
+ for (int i=0; i < n; i++) {
+ if (r[i] == NULL)
+ continue;
+ *p++ = '(';
+ if (mixedcase && r[i]->nocase) {
+ p = append_expanded(r[i], &pat, p, &len);
+ ERR_BAIL(r[i]->info);
+ } else {
+ p = stpcpy(p, r[i]->pattern->str);
+ }
+ *p++ = ')';
+ }
+ *p = '\0';
+ return make_regexp(info, pat, nnocase == npresent);
+ error:
+ FREE(expanded);
+ FREE(pat);
+ return NULL;
+}
+
+static struct fa *regexp_to_fa(struct regexp *r) {
+ const char *p = r->pattern->str;
+ int ret;
+ struct fa *fa = NULL;
+
+ ret = fa_compile(p, strlen(p), &fa);
+ ERR_NOMEM(ret == REG_ESPACE, r->info);
+ BUG_ON(ret != REG_NOERROR, r->info, NULL);
+
+ if (r->nocase) {
+ ret = fa_nocase(fa);
+ ERR_NOMEM(ret < 0, r->info);
+ }
+ return fa;
+
+ error:
+ fa_free(fa);
+ return NULL;
+}
+
+struct regexp *
+regexp_minus(struct info *info, struct regexp *r1, struct regexp *r2) {
+ struct regexp *result = NULL;
+ struct fa *fa = NULL, *fa1 = NULL, *fa2 = NULL;
+ int r;
+ char *s = NULL;
+ size_t s_len;
+
+ fa1 = regexp_to_fa(r1);
+ ERR_BAIL(r1->info);
+
+ fa2 = regexp_to_fa(r2);
+ ERR_BAIL(r2->info);
+
+ fa = fa_minus(fa1, fa2);
+ if (fa == NULL)
+ goto error;
+
+ r = fa_as_regexp(fa, &s, &s_len);
+ if (r < 0)
+ goto error;
+
+ if (s == NULL) {
+ /* FA is the empty set, which we can't represent as a regexp */
+ goto error;
+ }
+
+ if (regexp_c_locale(&s, NULL) < 0)
+ goto error;
+
+ result = make_regexp(info, s, fa_is_nocase(fa));
+ s = NULL;
+
+ done:
+ fa_free(fa);
+ fa_free(fa1);
+ fa_free(fa2);
+ free(s);
+ return result;
+ error:
+ unref(result, regexp);
+ goto done;
+}
+
+
+struct regexp *
+regexp_iter(struct info *info, struct regexp *r, int min, int max) {
+ const char *p;
+ char *s;
+ int ret = 0;
+
+ if (r == NULL)
+ return NULL;
+
+ p = r->pattern->str;
+ if ((min == 0 || min == 1) && max == -1) {
+ char q = (min == 0) ? '*' : '+';
+ ret = asprintf(&s, "(%s)%c", p, q);
+ } else if (min == max) {
+ ret = asprintf(&s, "(%s){%d}", p, min);
+ } else {
+ ret = asprintf(&s, "(%s){%d,%d}", p, min, max);
+ }
+ return (ret == -1) ? NULL : make_regexp(info, s, r->nocase);
+}
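`regexp_iter` maps the repetition bounds onto the usual quantifiers: `(0,-1)` becomes `*`, `(1,-1)` becomes `+`, `(n,n)` becomes `{n}`, and anything else `{min,max}`. A standalone sketch of that mapping, under the same assumption the caller only passes these shapes; `quantify` is a hypothetical name:

```c
#define _GNU_SOURCE /* asprintf */
#include <stdio.h>
#include <stdlib.h>

/* Wrap PAT in a group and append the quantifier for (min,max), as
 * regexp_iter does; max == -1 means "unbounded". Caller frees result. */
static char *quantify(const char *pat, int min, int max) {
    char *s = NULL;
    int r;
    if ((min == 0 || min == 1) && max == -1)
        r = asprintf(&s, "(%s)%c", pat, min == 0 ? '*' : '+');
    else if (min == max)
        r = asprintf(&s, "(%s){%d}", pat, min);
    else
        r = asprintf(&s, "(%s){%d,%d}", pat, min, max);
    return r == -1 ? NULL : s;
}
```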
+
+struct regexp *
+regexp_maybe(struct info *info, struct regexp *r) {
+ const char *p;
+ char *s;
+ int ret;
+
+ if (r == NULL)
+ return NULL;
+ p = r->pattern->str;
+ ret = asprintf(&s, "(%s)?", p);
+ return (ret == -1) ? NULL : make_regexp(info, s, r->nocase);
+}
+
+struct regexp *regexp_make_empty(struct info *info) {
+ struct regexp *regexp;
+
+ make_ref(regexp);
+ if (regexp != NULL) {
+ regexp->info = ref(info);
+ /* Casting away the CONST for EMPTY_PATTERN is ok since it
+ is protected against changes because REF == REF_MAX */
+ regexp->pattern = (struct string *) empty_pattern;
+ regexp->nocase = 0;
+ }
+ return regexp;
+}
+
+static int regexp_compile_internal(struct regexp *r, const char **c) {
+ /* See the GNU regex manual or regex.h in gnulib for
+ * an explanation of these flags. They are set so that the regex
+ * matcher interprets regular expressions the same way that libfa
+ * does
+ */
+ static const reg_syntax_t syntax =
+ RE_CONTEXT_INDEP_OPS|RE_CONTEXT_INVALID_OPS|RE_DOT_NOT_NULL
+ |RE_INTERVALS|RE_NO_BK_BRACES|RE_NO_BK_PARENS|RE_NO_BK_REFS
+ |RE_NO_BK_VBAR|RE_NO_EMPTY_RANGES
+ |RE_NO_POSIX_BACKTRACKING|RE_CONTEXT_INVALID_DUP|RE_NO_GNU_OPS;
+ reg_syntax_t old_syntax = re_syntax_options;
+
+ *c = NULL;
+
+ if (r->re == NULL) {
+ if (ALLOC(r->re) < 0)
+ return -1;
+ }
+
+ re_syntax_options = syntax;
+ if (r->nocase)
+ re_syntax_options |= RE_ICASE;
+ *c = re_compile_pattern(r->pattern->str, strlen(r->pattern->str), r->re);
+ re_syntax_options = old_syntax;
+
+ r->re->regs_allocated = REGS_REALLOCATE;
+ if (*c != NULL)
+ return -1;
+ return 0;
+}
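The flag set above makes the GNU regex matcher read patterns the way libfa does. A minimal, glibc-specific sketch of the same `re_compile_pattern`/`re_match` flow outside these wrappers; the helper name `match_len` is illustrative:

```c
#define _GNU_SOURCE /* GNU regex interface in <regex.h> */
#include <regex.h>
#include <string.h>

/* Compile PAT under the syntax flags used above and return how many
 * characters of STR match at offset 0, or -1 on no match/error. */
static int match_len(const char *pat, const char *str) {
    static const reg_syntax_t syntax =
        RE_CONTEXT_INDEP_OPS|RE_CONTEXT_INVALID_OPS|RE_DOT_NOT_NULL
        |RE_INTERVALS|RE_NO_BK_BRACES|RE_NO_BK_PARENS|RE_NO_BK_REFS
        |RE_NO_BK_VBAR|RE_NO_EMPTY_RANGES
        |RE_NO_POSIX_BACKTRACKING|RE_CONTEXT_INVALID_DUP|RE_NO_GNU_OPS;
    struct re_pattern_buffer re;
    memset(&re, 0, sizeof(re));
    reg_syntax_t old = re_syntax_options;
    re_syntax_options = syntax;
    const char *err = re_compile_pattern(pat, strlen(pat), &re);
    re_syntax_options = old;
    if (err != NULL)
        return -1;
    int r = re_match(&re, str, strlen(str), 0, NULL);
    regfree(&re);
    return r < 0 ? -1 : r;
}
```

Like `regexp_compile_internal`, the sketch saves and restores the global `re_syntax_options`, since it is process-wide state.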
+
+int regexp_compile(struct regexp *r) {
+ const char *c;
+
+ return regexp_compile_internal(r, &c);
+}
+
+int regexp_check(struct regexp *r, const char **msg) {
+ return regexp_compile_internal(r, msg);
+}
+
+int regexp_match(struct regexp *r,
+ const char *string, const int size,
+ const int start, struct re_registers *regs) {
+ if (r->re == NULL) {
+ if (regexp_compile(r) == -1)
+ return -3;
+ }
+ return re_match(r->re, string, size, start, regs);
+}
+
+int regexp_matches_empty(struct regexp *r) {
+ return regexp_match(r, "", 0, 0, NULL) == 0;
+}
+
+int regexp_nsub(struct regexp *r) {
+ if (r->re == NULL)
+ if (regexp_compile(r) == -1)
+ return -1;
+ return r->re->re_nsub;
+}
+
+void regexp_release(struct regexp *regexp) {
+ if (regexp != NULL && regexp->re != NULL) {
+ regfree(regexp->re);
+ FREE(regexp->re);
+ }
+}
+
+/*
+ * Local variables:
+ * indent-tabs-mode: nil
+ * c-indent-level: 4
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
--- /dev/null
+/*
+ * regexp.h: wrappers for regexp handling
+ *
+ * Copyright (C) 2009-2016 David Lutterkort
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: David Lutterkort <lutter@redhat.com>
+ */
+
+#ifndef REGEXP_H_
+#define REGEXP_H_
+
+#include <stdio.h>
+#include <regex.h>
+
+struct regexp {
+ unsigned int ref;
+ struct info *info;
+ struct string *pattern;
+ struct re_pattern_buffer *re;
+ unsigned int nocase : 1;
+};
+
+void print_regexp(FILE *out, struct regexp *regexp);
+
+/* Make a regexp with pattern PAT, which is not copied. Ownership
+ * of INFO is taken.
+ */
+struct regexp *make_regexp(struct info *info, char *pat, int nocase);
+
+/* Make a regexp with pattern PAT, which is copied. Ownership of INFO is
+ * taken. Escape sequences like \n in PAT are interpreted.
+ */
+struct regexp *make_regexp_unescape(struct info *info, const char *pat,
+ int nocase);
+
+/* Return 1 if R is an empty pattern, i.e. one consisting of nothing but
+ '(' and ')' characters, 0 otherwise */
+int regexp_is_empty_pattern(struct regexp *r);
+
+/* Make a regexp that matches TEXT literally; the string TEXT
+ * is not used by the returned regexp and must be freed by the caller
+ */
+struct regexp *make_regexp_literal(struct info *info, const char *text);
+
+/* Make a regexp from a glob pattern */
+struct regexp *make_regexp_from_glob(struct info *info, const char *glob);
+
+/* Do not call directly, use UNREF instead */
+void free_regexp(struct regexp *regexp);
+
+/* Compile R->PATTERN into R->RE; return -1 and print an error
+ * if compilation fails. Return 0 otherwise
+ */
+int regexp_compile(struct regexp *r);
+
+/* Check the syntax of R->PATTERN; return -1 if the pattern has a syntax
+ * error, and a string indicating the error in *MSG. Return 0 if the pattern
+ * is a valid regular expression.
+ */
+int regexp_check(struct regexp *r, const char **msg);
+
+/* Call RE_MATCH on R->RE and return its result; if R hasn't been compiled
+ * yet, compile it. Return -3 if compilation fails
+ */
+int regexp_match(struct regexp *r, const char *string, const int size,
+ const int start, struct re_registers *regs);
+
+/* Return 1 if R matches the empty string, 0 otherwise */
+int regexp_matches_empty(struct regexp *r);
+
+/* Return the number of subexpressions (parentheses) inside R. May cause
+ * compilation of R; return -1 if compilation fails.
+ */
+int regexp_nsub(struct regexp *r);
+
+struct regexp *
+regexp_union(struct info *, struct regexp *r1, struct regexp *r2);
+
+struct regexp *
+regexp_concat(struct info *, struct regexp *r1, struct regexp *r2);
+
+struct regexp *
+regexp_union_n(struct info *, int n, struct regexp **r);
+
+struct regexp *
+regexp_concat_n(struct info *, int n, struct regexp **r);
+
+struct regexp *
+regexp_iter(struct info *info, struct regexp *r, int min, int max);
+
+/* Return a new REGEXP that matches all the words matched by R1 but
+ * not by R2
+ */
+struct regexp *
+regexp_minus(struct info *info, struct regexp *r1, struct regexp *r2);
+
+struct regexp *
+regexp_maybe(struct info *info, struct regexp *r);
+
+struct regexp *regexp_make_empty(struct info *);
+
+/* Free up temporary data structures, most importantly compiled
+ regular expressions */
+void regexp_release(struct regexp *regexp);
+
+/* Produce a printable representation of R. The result will in general not
+ be equivalent to the passed regular expression, but be easier for
+ humans to read. */
+char *regexp_escape(const struct regexp *r);
+
+/* If R is case-insensitive, expand its pattern so that it matches the same
+ * string even when used in a case-sensitive match. */
+char *regexp_expand_nocase(struct regexp *r);
+#endif
+
+
+/*
+ * Local variables:
+ * indent-tabs-mode: nil
+ * c-indent-level: 4
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
--- /dev/null
+/*
+ * syntax.c:
+ *
+ * Copyright (C) 2007-2016 David Lutterkort
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: David Lutterkort <dlutter@redhat.com>
+ */
+
+#include <config.h>
+
+#include <assert.h>
+#include <stdarg.h>
+#include <limits.h>
+#include <ctype.h>
+#include <glob.h>
+#include <argz.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <unistd.h>
+
+#include "memory.h"
+#include "syntax.h"
+#include "augeas.h"
+#include "transform.h"
+#include "errcode.h"
+
+/* Extension of source files */
+#define AUG_EXT ".aug"
+
+#define LNS_TYPE_CHECK(ctx) ((ctx)->aug->flags & AUG_TYPE_CHECK)
+
+static const char *const builtin_module = "Builtin";
+
+static const struct type string_type = { .ref = UINT_MAX, .tag = T_STRING };
+static const struct type regexp_type = { .ref = UINT_MAX, .tag = T_REGEXP };
+static const struct type lens_type = { .ref = UINT_MAX, .tag = T_LENS };
+static const struct type tree_type = { .ref = UINT_MAX, .tag = T_TREE };
+static const struct type filter_type = { .ref = UINT_MAX, .tag = T_FILTER };
+static const struct type transform_type =
+ { .ref = UINT_MAX, .tag = T_TRANSFORM };
+static const struct type unit_type = { .ref = UINT_MAX, .tag = T_UNIT };
+
+const struct type *const t_string = &string_type;
+const struct type *const t_regexp = &regexp_type;
+const struct type *const t_lens = &lens_type;
+const struct type *const t_tree = &tree_type;
+const struct type *const t_filter = &filter_type;
+const struct type *const t_transform = &transform_type;
+const struct type *const t_unit = &unit_type;
+
+static const char *const type_names[] = {
+ "string", "regexp", "lens", "tree", "filter",
+ "transform", "function", "unit", NULL
+};
+
+/* The anonymous identifier which we will never bind */
+static const char anon_ident[] = "_";
+
+static void print_value(FILE *out, struct value *v);
+
+/* The evaluation context with all loaded modules and the bindings for the
+ * module we are working on in LOCAL
+ */
+struct ctx {
+ const char *name; /* The module we are working on */
+ struct augeas *aug;
+ struct binding *local;
+};
+
+static int init_fatal_exn(struct error *error) {
+ if (error->exn != NULL)
+ return 0;
+ error->exn = make_exn_value(ref(error->info), "Error during evaluation");
+ if (error->exn == NULL)
+ return -1;
+ error->exn->exn->seen = 1;
+ error->exn->exn->error = 1;
+ error->exn->exn->lines = NULL;
+ error->exn->exn->nlines = 0;
+ error->exn->ref = REF_MAX;
+ return 0;
+}
+
+static void format_error(struct info *info, aug_errcode_t code,
+ const char *format, va_list ap) {
+ struct error *error = info->error;
+ char *si = NULL, *sf = NULL, *sd = NULL;
+ int r;
+
+ error->code = code;
+ /* Only syntax errors are cumulative */
+ if (code != AUG_ESYNTAX)
+ FREE(error->details);
+
+ si = format_info(info);
+ r = vasprintf(&sf, format, ap);
+ if (r < 0)
+ sf = NULL;
+ if (error->details != NULL) {
+ r = xasprintf(&sd, "%s\n%s%s", error->details,
+ (si == NULL) ? "(no location)" : si,
+ (sf == NULL) ? "(no details)" : sf);
+ } else {
+ r = xasprintf(&sd, "%s%s",
+ (si == NULL) ? "(no location)" : si,
+ (sf == NULL) ? "(no details)" : sf);
+ }
+ if (r >= 0) {
+ free(error->details);
+ error->details = sd;
+ }
+ free(si);
+ free(sf);
+}
+
+void syntax_error(struct info *info, const char *format, ...) {
+ struct error *error = info->error;
+ va_list ap;
+
+ if (error->code != AUG_NOERROR && error->code != AUG_ESYNTAX)
+ return;
+
+ va_start(ap, format);
+ format_error(info, AUG_ESYNTAX, format, ap);
+ va_end(ap);
+}
+
+void fatal_error(struct info *info, const char *format, ...) {
+ struct error *error = info->error;
+ va_list ap;
+
+ if (error->code == AUG_EINTERNAL)
+ return;
+
+ va_start(ap, format);
+ format_error(info, AUG_EINTERNAL, format, ap);
+ va_end(ap);
+}
+
+static void free_param(struct param *param) {
+ if (param == NULL)
+ return;
+ assert(param->ref == 0);
+ unref(param->info, info);
+ unref(param->name, string);
+ unref(param->type, type);
+ free(param);
+}
+
+void free_term(struct term *term) {
+ if (term == NULL)
+ return;
+ assert(term->ref == 0);
+ switch(term->tag) {
+ case A_MODULE:
+ free(term->mname);
+ free(term->autoload);
+ unref(term->decls, term);
+ break;
+ case A_BIND:
+ free(term->bname);
+ unref(term->exp, term);
+ break;
+ case A_COMPOSE:
+ case A_UNION:
+ case A_MINUS:
+ case A_CONCAT:
+ case A_APP:
+ case A_LET:
+ unref(term->left, term);
+ unref(term->right, term);
+ break;
+ case A_VALUE:
+ unref(term->value, value);
+ break;
+ case A_IDENT:
+ unref(term->ident, string);
+ break;
+ case A_BRACKET:
+ unref(term->brexp, term);
+ break;
+ case A_FUNC:
+ unref(term->param, param);
+ unref(term->body, term);
+ break;
+ case A_REP:
+ unref(term->rexp, term);
+ break;
+ case A_TEST:
+ unref(term->test, term);
+ unref(term->result, term);
+ break;
+ default:
+ assert(0);
+ break;
+ }
+ unref(term->next, term);
+ unref(term->info, info);
+ unref(term->type, type);
+ free(term);
+}
+
+static void free_binding(struct binding *binding) {
+ if (binding == NULL)
+ return;
+ assert(binding->ref == 0);
+ unref(binding->next, binding);
+ unref(binding->ident, string);
+ unref(binding->type, type);
+ unref(binding->value, value);
+ free(binding);
+}
+
+void free_module(struct module *module) {
+ if (module == NULL)
+ return;
+ assert(module->ref == 0);
+ free(module->name);
+ unref(module->next, module);
+ unref(module->bindings, binding);
+ unref(module->autoload, transform);
+ free(module);
+}
+
+void free_type(struct type *type) {
+ if (type == NULL)
+ return;
+ assert(type->ref == 0);
+
+ if (type->tag == T_ARROW) {
+ unref(type->dom, type);
+ unref(type->img, type);
+ }
+ free(type);
+}
+
+static void free_exn(struct exn *exn) {
+ if (exn == NULL)
+ return;
+
+ unref(exn->info, info);
+ free(exn->message);
+ for (int i=0; i < exn->nlines; i++) {
+ free(exn->lines[i]);
+ }
+ free(exn->lines);
+ free(exn);
+}
+
+void free_value(struct value *v) {
+ if (v == NULL)
+ return;
+ assert(v->ref == 0);
+
+ switch(v->tag) {
+ case V_STRING:
+ unref(v->string, string);
+ break;
+ case V_REGEXP:
+ unref(v->regexp, regexp);
+ break;
+ case V_LENS:
+ unref(v->lens, lens);
+ break;
+ case V_TREE:
+ free_tree(v->origin);
+ break;
+ case V_FILTER:
+ unref(v->filter, filter);
+ break;
+ case V_TRANSFORM:
+ unref(v->transform, transform);
+ break;
+ case V_NATIVE:
+ if (v->native)
+ unref(v->native->type, type);
+ free(v->native);
+ break;
+ case V_CLOS:
+ unref(v->func, term);
+ unref(v->bindings, binding);
+ break;
+ case V_EXN:
+ free_exn(v->exn);
+ break;
+ case V_UNIT:
+ break;
+ default:
+ assert(0);
+ }
+ unref(v->info, info);
+ free(v);
+}
+
+/*
+ * Creation of (some) terms. Others are in parser.y
+ * Reference counted arguments are now owned by the returned object, i.e.
+ * the make_* functions do not increment the count.
+ * Returned objects have a reference count of 1.
+ */
+struct term *make_term(enum term_tag tag, struct info *info) {
+ struct term *term;
+ if (make_ref(term) < 0) {
+ unref(info, info);
+ } else {
+ term->tag = tag;
+ term->info = info;
+ }
+ return term;
+}
+
+struct term *make_param(char *name, struct type *type, struct info *info) {
+ struct term *term = make_term(A_FUNC, info);
+ if (term == NULL)
+ goto error;
+ make_ref_err(term->param);
+ term->param->info = ref(term->info);
+ make_ref_err(term->param->name);
+ term->param->name->str = name;
+ term->param->type = type;
+ return term;
+ error:
+ unref(term, term);
+ return NULL;
+}
+
+struct value *make_value(enum value_tag tag, struct info *info) {
+ struct value *value = NULL;
+ if (make_ref(value) < 0) {
+ unref(info, info);
+ } else {
+ value->tag = tag;
+ value->info = info;
+ }
+ return value;
+}
+
+struct value *make_unit(struct info *info) {
+ return make_value(V_UNIT, info);
+}
+
+struct term *make_app_term(struct term *lambda, struct term *arg,
+ struct info *info) {
+ struct term *app = make_term(A_APP, info);
+ if (app == NULL) {
+ unref(lambda, term);
+ unref(arg, term);
+ } else {
+ app->left = lambda;
+ app->right = arg;
+ }
+ return app;
+}
+
+struct term *make_app_ident(char *id, struct term *arg, struct info *info) {
+ struct term *ident = make_term(A_IDENT, ref(info));
+ ident->ident = make_string(id);
+ if (ident->ident == NULL) {
+ unref(arg, term);
+ unref(info, info);
+ unref(ident, term);
+ return NULL;
+ }
+ return make_app_term(ident, arg, info);
+}
+
+struct term *build_func(struct term *params, struct term *exp) {
+ assert(params->tag == A_FUNC);
+ if (params->next != NULL)
+ exp = build_func(params->next, exp);
+
+ params->body = exp;
+ params->next = NULL;
+ return params;
+}
+
+/* Ownership is taken as needed */
+static struct value *make_closure(struct term *func, struct binding *bnds) {
+ struct value *v = NULL;
+ if (make_ref(v) == 0) {
+ v->tag = V_CLOS;
+ v->info = ref(func->info);
+ v->func = ref(func);
+ v->bindings = ref(bnds);
+ }
+ return v;
+}
+
+struct value *make_exn_value(struct info *info,
+ const char *format, ...) {
+ va_list ap;
+ int r;
+ struct value *v;
+ char *message;
+
+ va_start(ap, format);
+ r = vasprintf(&message, format, ap);
+ va_end(ap);
+ if (r == -1)
+ return NULL;
+
+ v = make_value(V_EXN, ref(info));
+ if (ALLOC(v->exn) < 0)
+ return info->error->exn;
+ v->exn->info = info;
+ v->exn->message = message;
+
+ return v;
+}
+
+void exn_add_lines(struct value *v, int nlines, ...) {
+ assert(v->tag == V_EXN);
+
+ va_list ap;
+ if (REALLOC_N(v->exn->lines, v->exn->nlines + nlines) == -1)
+ return;
+ va_start(ap, nlines);
+ for (int i=0; i < nlines; i++) {
+ char *line = va_arg(ap, char *);
+ v->exn->lines[v->exn->nlines + i] = line;
+ }
+ va_end(ap);
+ v->exn->nlines += nlines;
+}
+
+void exn_printf_line(struct value *exn, const char *format, ...) {
+ va_list ap;
+ int r;
+ char *line;
+
+ va_start(ap, format);
+ r = vasprintf(&line, format, ap);
+ va_end(ap);
+ if (r >= 0)
+ exn_add_lines(exn, 1, line);
+}
+
+/*
+ * Modules
+ */
+static int load_module(struct augeas *aug, const char *name);
+static char *module_basename(const char *modname);
+
+struct module *module_create(const char *name) {
+ struct module *module;
+ make_ref(module);
+ module->name = strdup(name);
+ return module;
+}
+
+static struct module *module_find(struct module *module, const char *name) {
+ list_for_each(e, module) {
+ if (STRCASEEQ(e->name, name))
+ return e;
+ }
+ return NULL;
+}
+
+static struct binding *bnd_lookup(struct binding *bindings, const char *name) {
+ list_for_each(b, bindings) {
+ if (STREQ(b->ident->str, name))
+ return b;
+ }
+ return NULL;
+}
+
+static char *modname_of_qname(const char *qname) {
+ char *dot = strchr(qname, '.');
+ if (dot == NULL)
+ return NULL;
+
+ return strndup(qname, dot - qname);
+}
+
+static int lookup_internal(struct augeas *aug, const char *ctx_modname,
+ const char *name, struct binding **bnd) {
+ char *modname = modname_of_qname(name);
+
+ *bnd = NULL;
+
+ if (modname == NULL) {
+ struct module *builtin =
+ module_find(aug->modules, builtin_module);
+ assert(builtin != NULL);
+ *bnd = bnd_lookup(builtin->bindings, name);
+ return 0;
+ }
+
+ qual_lookup:
+ list_for_each(module, aug->modules) {
+ if (STRCASEEQ(module->name, modname)) {
+ *bnd = bnd_lookup(module->bindings, name + strlen(modname) + 1);
+ free(modname);
+ return 0;
+ }
+ }
+ /* Try to load the module */
+ if (streqv(modname, ctx_modname)) {
+ free(modname);
+ return 0;
+ }
+ int loaded = load_module(aug, modname) == 0;
+ if (loaded)
+ goto qual_lookup;
+
+ free(modname);
+ return -1;
+}
+
+struct lens *lens_lookup(struct augeas *aug, const char *qname) {
+ struct binding *bnd = NULL;
+
+ if (lookup_internal(aug, NULL, qname, &bnd) < 0)
+ return NULL;
+ if (bnd == NULL || bnd->value->tag != V_LENS)
+ return NULL;
+ return bnd->value->lens;
+}
+
+static struct binding *ctx_lookup_bnd(struct info *info,
+ struct ctx *ctx, const char *name) {
+ struct binding *b = NULL;
+ int nlen = strlen(ctx->name);
+
+ if (STREQLEN(ctx->name, name, nlen) && name[nlen] == '.')
+ name += nlen + 1;
+
+ b = bnd_lookup(ctx->local, name);
+ if (b != NULL)
+ return b;
+
+ if (ctx->aug != NULL) {
+ int r;
+ r = lookup_internal(ctx->aug, ctx->name, name, &b);
+ if (r == 0)
+ return b;
+ char *modname = modname_of_qname(name);
+ syntax_error(info, "Could not load module %s for %s",
+ modname, name);
+ free(modname);
+ return NULL;
+ }
+ return NULL;
+}
+
+static struct value *ctx_lookup(struct info *info,
+ struct ctx *ctx, struct string *ident) {
+ struct binding *b = ctx_lookup_bnd(info, ctx, ident->str);
+ return b == NULL ? NULL : b->value;
+}
+
+static struct type *ctx_lookup_type(struct info *info,
+ struct ctx *ctx, struct string *ident) {
+ struct binding *b = ctx_lookup_bnd(info, ctx, ident->str);
+ return b == NULL ? NULL : b->type;
+}
+
+/* Takes ownership as needed */
+static struct binding *bind_type(struct binding **bnds,
+ const char *name, struct type *type) {
+ struct binding *binding;
+
+ if (STREQ(name, anon_ident))
+ return NULL;
+ make_ref(binding);
+ make_ref(binding->ident);
+ binding->ident->str = strdup(name);
+ binding->type = ref(type);
+ list_cons(*bnds, binding);
+
+ return binding;
+}
+
+/* Takes ownership as needed */
+static void bind_param(struct binding **bnds, struct param *param,
+ struct value *v) {
+ struct binding *b;
+ make_ref(b);
+ b->ident = ref(param->name);
+ b->type = ref(param->type);
+ b->value = ref(v);
+ ref(*bnds);
+ list_cons(*bnds, b);
+}
+
+static void unbind_param(struct binding **bnds, ATTRIBUTE_UNUSED struct param *param) {
+ struct binding *b = *bnds;
+ assert(b->ident == param->name);
+ assert(b->next != *bnds);
+ *bnds = b->next;
+ unref(b, binding);
+}
+
+/* Takes ownership of VALUE */
+static void bind(struct binding **bnds,
+ const char *name, struct type *type, struct value *value) {
+ struct binding *b = NULL;
+
+ if (STRNEQ(name, anon_ident)) {
+ b = bind_type(bnds, name, type);
+ b->value = ref(value);
+ }
+}
+
+/*
+ * Some debug printing
+ */
+
+static char *type_string(struct type *t);
+
+static void dump_bindings(struct binding *bnds) {
+ list_for_each(b, bnds) {
+ char *st = type_string(b->type);
+ fprintf(stderr, " %s: %s", b->ident->str, st);
+ fprintf(stderr, " = ");
+ print_value(stderr, b->value);
+ fputc('\n', stderr);
+ free(st);
+ }
+}
+
+static void dump_module(struct module *module) {
+ if (module == NULL)
+ return;
+ fprintf(stderr, "Module %s:\n", module->name);
+ dump_bindings(module->bindings);
+ dump_module(module->next);
+}
+
+ATTRIBUTE_UNUSED
+static void dump_ctx(struct ctx *ctx) {
+ fprintf(stderr, "Context: %s\n", ctx->name);
+ dump_bindings(ctx->local);
+ if (ctx->aug != NULL) {
+ list_for_each(m, ctx->aug->modules)
+ dump_module(m);
+ }
+}
+
+/*
+ * Values
+ */
+void print_tree_braces(FILE *out, int indent, struct tree *tree) {
+ if (tree == NULL) {
+ fprintf(out, "(null tree)\n");
+ return;
+ }
+ list_for_each(t, tree) {
+ for (int i=0; i < indent; i++) fputc(' ', out);
+ fprintf(out, "{ ");
+ if (t->label != NULL)
+ fprintf(out, "\"%s\"", t->label);
+ if (t->value != NULL)
+ fprintf(out, " = \"%s\"", t->value);
+ if (t->children != NULL) {
+ fputc('\n', out);
+ print_tree_braces(out, indent + 2, t->children);
+ for (int i=0; i < indent; i++) fputc(' ', out);
+ } else {
+ fputc(' ', out);
+ }
+ fprintf(out, "}\n");
+ }
+}
+
+static void print_value(FILE *out, struct value *v) {
+ if (v == NULL) {
+ fprintf(out, "<null>");
+ return;
+ }
+
+ switch(v->tag) {
+ case V_STRING:
+ fprintf(out, "\"%s\"", v->string->str);
+ break;
+ case V_REGEXP:
+ fprintf(out, "/%s/", v->regexp->pattern->str);
+ break;
+ case V_LENS:
+ fprintf(out, "<lens:");
+ print_info(out, v->lens->info);
+ fprintf(out, ">");
+ break;
+ case V_TREE:
+ print_tree_braces(out, 0, v->origin);
+ break;
+ case V_FILTER:
+ fprintf(out, "<filter:");
+ list_for_each(f, v->filter) {
+ fprintf(out, "%c%s%c", f->include ? '+' : '-', f->glob->str,
+ (f->next != NULL) ? ':' : '>');
+ }
+ break;
+ case V_TRANSFORM:
+ fprintf(out, "<transform:");
+ print_info(out, v->transform->lens->info);
+ fprintf(out, ">");
+ break;
+ case V_NATIVE:
+ fprintf(out, "<native:");
+ print_info(out, v->info);
+ fprintf(out, ">");
+ break;
+ case V_CLOS:
+ fprintf(out, "<closure:");
+ print_info(out, v->func->info);
+ fprintf(out, ">");
+ break;
+ case V_EXN:
+ if (! v->exn->seen) {
+ print_info(out, v->exn->info);
+ fprintf(out, "exception: %s\n", v->exn->message);
+ for (int i=0; i < v->exn->nlines; i++) {
+ fprintf(out, " %s\n", v->exn->lines[i]);
+ }
+ v->exn->seen = 1;
+ }
+ break;
+ case V_UNIT:
+ fprintf(out, "()");
+ break;
+ default:
+ assert(0);
+ break;
+ }
+}
+
+static int value_equal(struct value *v1, struct value *v2) {
+ if (v1 == NULL && v2 == NULL)
+ return 1;
+ if (v1 == NULL || v2 == NULL)
+ return 0;
+ if (v1->tag != v2->tag)
+ return 0;
+ switch (v1->tag) {
+ case V_STRING:
+ return STREQ(v1->string->str, v2->string->str);
+ break;
+ case V_REGEXP:
+ // FIXME: Should probably build FA's and compare them
+ return STREQ(v1->regexp->pattern->str, v2->regexp->pattern->str);
+ break;
+ case V_LENS:
+ return v1->lens == v2->lens;
+ break;
+ case V_TREE:
+ return tree_equal(v1->origin->children, v2->origin->children);
+ break;
+ case V_FILTER:
+ return v1->filter == v2->filter;
+ break;
+ case V_TRANSFORM:
+ return v1->transform == v2->transform;
+ break;
+ case V_NATIVE:
+ return v1->native == v2->native;
+ break;
+ case V_CLOS:
+ return v1->func == v2->func && v1->bindings == v2->bindings;
+ break;
+ default:
+ assert(0);
+ abort();
+ break;
+ }
+}
+
+/*
+ * Types
+ */
+struct type *make_arrow_type(struct type *dom, struct type *img) {
+ struct type *type;
+ make_ref(type);
+ type->tag = T_ARROW;
+ type->dom = ref(dom);
+ type->img = ref(img);
+ return type;
+}
+
+struct type *make_base_type(enum type_tag tag) {
+ if (tag == T_STRING)
+ return (struct type *) t_string;
+ else if (tag == T_REGEXP)
+ return (struct type *) t_regexp;
+ else if (tag == T_LENS)
+ return (struct type *) t_lens;
+ else if (tag == T_TREE)
+ return (struct type *) t_tree;
+ else if (tag == T_FILTER)
+ return (struct type *) t_filter;
+ else if (tag == T_TRANSFORM)
+ return (struct type *) t_transform;
+ else if (tag == T_UNIT)
+ return (struct type *) t_unit;
+ assert(0);
+ abort();
+}
+
+static const char *type_name(struct type *t) {
+ for (int i = 0; type_names[i] != NULL; i++)
+ if (i == t->tag)
+ return type_names[i];
+ assert(0);
+ abort();
+}
+
+static char *type_string(struct type *t) {
+ if (t->tag == T_ARROW) {
+ char *s = NULL;
+ int r;
+ char *sd = type_string(t->dom);
+ char *si = type_string(t->img);
+ if (t->dom->tag == T_ARROW)
+ r = asprintf(&s, "(%s) -> %s", sd, si);
+ else
+ r = asprintf(&s, "%s -> %s", sd, si);
+ free(sd);
+ free(si);
+ return (r == -1) ? NULL : s;
+ } else {
+ return strdup(type_name(t));
+ }
+}
+
+/* Decide whether T1 is a subtype of T2. The only subtype relations are
+ * T_STRING <: T_REGEXP and the usual subtyping of functions based on
+ * comparing domains/images
+ *
+ * Return 1 if T1 is a subtype of T2, 0 otherwise
+ */
+static int subtype(struct type *t1, struct type *t2) {
+ if (t1 == t2)
+ return 1;
+ /* We only promote T_STRING => T_REGEXP, no automatic conversion
+ of strings/regexps to lenses (yet) */
+ if (t1->tag == T_STRING)
+ return (t2->tag == T_STRING || t2->tag == T_REGEXP);
+ if (t1->tag == T_ARROW && t2->tag == T_ARROW) {
+ return subtype(t2->dom, t1->dom)
+ && subtype(t1->img, t2->img);
+ }
+ return t1->tag == t2->tag;
+}
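The arrow rule above is the standard one for function subtyping: `A1 -> B1` is a subtype of `A2 -> B2` when `A2 <: A1` (contravariant domain) and `B1 <: B2` (covariant image). A self-contained miniature of the relation; the `m`-prefixed names are illustrative, not the actual types:

```c
#include <stdbool.h>
#include <stddef.h>

/* Miniature of the subtype relation: STRING <: REGEXP at the base,
 * arrows contravariant in the domain and covariant in the image. */
enum mtag { M_STRING, M_REGEXP, M_LENS, M_ARROW };
struct mtype {
    enum mtag tag;
    const struct mtype *dom, *img; /* only used when tag == M_ARROW */
};

static bool msubtype(const struct mtype *a, const struct mtype *b) {
    if (a->tag == M_STRING)
        return b->tag == M_STRING || b->tag == M_REGEXP;
    if (a->tag == M_ARROW && b->tag == M_ARROW)
        return msubtype(b->dom, a->dom)   /* contravariant domain */
            && msubtype(a->img, b->img);  /* covariant image */
    return a->tag == b->tag;
}
```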
+
+static int type_equal(struct type *t1, struct type *t2) {
+ return (t1 == t2) || (subtype(t1, t2) && subtype(t2, t1));
+}
+
+/* Return a type T with subtype(T, T1) && subtype(T, T2) */
+static struct type *type_meet(struct type *t1, struct type *t2);
+
+/* Return a type T with subtype(T1, T) && subtype(T2, T) */
+static struct type *type_join(struct type *t1, struct type *t2) {
+ if (t1->tag == T_STRING) {
+ if (t2->tag == T_STRING)
+ return ref(t1);
+ else if (t2->tag == T_REGEXP)
+ return ref(t2);
+ } else if (t1->tag == T_REGEXP) {
+ if (t2->tag == T_STRING || t2->tag == T_REGEXP)
+ return ref(t1);
+ } else if (t1->tag == T_ARROW) {
+ if (t2->tag != T_ARROW)
+ return NULL;
+ struct type *dom = type_meet(t1->dom, t2->dom);
+ struct type *img = type_join(t1->img, t2->img);
+ if (dom == NULL || img == NULL) {
+ unref(dom, type);
+ unref(img, type);
+ return NULL;
+ }
+ return make_arrow_type(dom, img);
+ } else if (type_equal(t1, t2)) {
+ return ref(t1);
+ }
+ return NULL;
+}
+
+/* Return a type T with subtype(T, T1) && subtype(T, T2) */
+static struct type *type_meet(struct type *t1, struct type *t2) {
+ if (t1->tag == T_STRING) {
+ if (t2->tag == T_STRING || t2->tag == T_REGEXP)
+ return ref(t1);
+ } else if (t1->tag == T_REGEXP) {
+ if (t2->tag == T_STRING || t2->tag == T_REGEXP)
+ return ref(t2);
+ } else if (t1->tag == T_ARROW) {
+ if (t2->tag != T_ARROW)
+ return NULL;
+ struct type *dom = type_join(t1->dom, t2->dom);
+ struct type *img = type_meet(t1->img, t2->img);
+ if (dom == NULL || img == NULL) {
+ unref(dom, type);
+ unref(img, type);
+ return NULL;
+ }
+ return make_arrow_type(dom, img);
+ } else if (type_equal(t1, t2)) {
+ return ref(t1);
+ }
+ return NULL;
+}
+
+static struct type *value_type(struct value *v) {
+ switch(v->tag) {
+ case V_STRING:
+ return make_base_type(T_STRING);
+ case V_REGEXP:
+ return make_base_type(T_REGEXP);
+ case V_LENS:
+ return make_base_type(T_LENS);
+ case V_TREE:
+ return make_base_type(T_TREE);
+ case V_FILTER:
+ return make_base_type(T_FILTER);
+ case V_TRANSFORM:
+ return make_base_type(T_TRANSFORM);
+ case V_UNIT:
+ return make_base_type(T_UNIT);
+ case V_NATIVE:
+ return ref(v->native->type);
+ case V_CLOS:
+ return ref(v->func->type);
+ case V_EXN: /* Fail on exceptions */
+ default:
+ assert(0);
+ abort();
+ }
+}
+
+/* Coerce V to the type T. Currently, only T_STRING can be coerced to
+ * T_REGEXP. Returns a value that is owned by the caller. An impossible
+ * coercion results in an exception value. Receives ownership of V.
+ */
+static struct value *coerce(struct value *v, struct type *t) {
+ struct type *vt = value_type(v);
+ if (type_equal(vt, t)) {
+ unref(vt, type);
+ return v;
+ }
+ if (vt->tag == T_STRING && t->tag == T_REGEXP) {
+ struct value *rxp = make_value(V_REGEXP, ref(v->info));
+ rxp->regexp = make_regexp_literal(v->info, v->string->str);
+ if (rxp->regexp == NULL) {
+ report_error(v->info->error, AUG_ENOMEM, NULL);
+ }
+ unref(v, value);
+ unref(vt, type);
+ return rxp;
+ }
+ return make_exn_value(v->info, "Type %s can not be coerced to %s",
+ type_name(vt), type_name(t));
+}
+
+/* Return one of the expected types (passed as ...).
+ Does not give ownership of the returned type */
+static struct type *expect_types_arr(struct info *info,
+ struct type *act,
+ int ntypes, struct type *allowed[]) {
+ struct type *result = NULL;
+
+ for (int i=0; i < ntypes; i++) {
+ if (subtype(act, allowed[i])) {
+ result = allowed[i];
+ break;
+ }
+ }
+ if (result == NULL) {
+ int len = 0;
+ for (int i=0; i < ntypes; i++) {
+ len += strlen(type_name(allowed[i]));
+ }
+ /* separators: ", or " is 5 chars; plus the trailing NUL */
+ len += (ntypes - 1) * 5 + 1;
+ char *allowed_names;
+ if (ALLOC_N(allowed_names, len) < 0)
+ return NULL;
+ for (int i=0; i < ntypes; i++) {
+ if (i > 0)
+ strcat(allowed_names, (i == ntypes - 1) ? ", or " : ", ");
+ strcat(allowed_names, type_name(allowed[i]));
+ }
+ char *act_str = type_string(act);
+ syntax_error(info, "type error: expected %s but found %s",
+ allowed_names, act_str);
+ free(act_str);
+ free(allowed_names);
+ }
+ return result;
+}
+
+static struct type *expect_types(struct info *info,
+ struct type *act, int ntypes, ...) {
+ va_list ap;
+ struct type *allowed[ntypes];
+
+ va_start(ap, ntypes);
+ for (int i=0; i < ntypes; i++)
+ allowed[i] = va_arg(ap, struct type *);
+ va_end(ap);
+ return expect_types_arr(info, act, ntypes, allowed);
+}
+
+static struct value *apply(struct term *app, struct ctx *ctx);
+
+typedef struct value *(*impl0)(struct info *);
+typedef struct value *(*impl1)(struct info *, struct value *);
+typedef struct value *(*impl2)(struct info *, struct value *, struct value *);
+typedef struct value *(*impl3)(struct info *, struct value *, struct value *,
+ struct value *);
+typedef struct value *(*impl4)(struct info *, struct value *, struct value *,
+ struct value *, struct value *);
+typedef struct value *(*impl5)(struct info *, struct value *, struct value *,
+ struct value *, struct value *, struct value *);
+
+static struct value *native_call(struct info *info,
+ struct native *func, struct ctx *ctx) {
+ struct value *argv[func->argc + 1];
+ struct binding *b = ctx->local;
+
+ for (int i = func->argc - 1; i >= 0; i--) {
+ argv[i] = b->value;
+ b = b->next;
+ }
+ argv[func->argc] = NULL;
+
+ return func->impl(info, argv);
+}
+
+static void type_error1(struct info *info, const char *msg, struct type *type) {
+ char *s = type_string(type);
+ syntax_error(info, "Type error: ");
+ syntax_error(info, msg, s);
+ free(s);
+}
+
+static void type_error2(struct info *info, const char *msg,
+ struct type *type1, struct type *type2) {
+ char *s1 = type_string(type1);
+ char *s2 = type_string(type2);
+ syntax_error(info, "Type error: ");
+ syntax_error(info, msg, s1, s2);
+ free(s1);
+ free(s2);
+}
+
+static void type_error_binop(struct info *info, const char *opname,
+ struct type *type1, struct type *type2) {
+ char *s1 = type_string(type1);
+ char *s2 = type_string(type2);
+ syntax_error(info, "Type error: ");
+ syntax_error(info, "%s of %s and %s is not possible", opname, s1, s2);
+ free(s1);
+ free(s2);
+}
+
+static int check_exp(struct term *term, struct ctx *ctx);
+
+static struct type *require_exp_type(struct term *term, struct ctx *ctx,
+ int ntypes, struct type *allowed[]) {
+ int r = 1;
+
+ if (term->type == NULL) {
+ r = check_exp(term, ctx);
+ if (! r)
+ return NULL;
+ }
+
+ return expect_types_arr(term->info, term->type, ntypes, allowed);
+}
+
+static int check_compose(struct term *term, struct ctx *ctx) {
+ struct type *tl = NULL, *tr = NULL;
+
+ if (! check_exp(term->left, ctx))
+ return 0;
+ tl = term->left->type;
+
+ if (tl->tag == T_ARROW) {
+ /* Composition of functions f: a -> b and g: c -> d is defined as
+ (f . g) x = g (f x) and is type correct if b <: c yielding a
+ function with type a -> d */
+ if (! check_exp(term->right, ctx))
+ return 0;
+ tr = term->right->type;
+ if (tr->tag != T_ARROW)
+ goto print_error;
+ if (! subtype(tl->img, tr->dom))
+ goto print_error;
+ term->type = make_arrow_type(tl->dom, tr->img);
+ } else if (tl->tag == T_UNIT) {
+ if (! check_exp(term->right, ctx))
+ return 0;
+ term->type = ref(term->right->type);
+ } else {
+ goto print_error;
+ }
+ return 1;
+ print_error:
+ type_error_binop(term->info,
+ "composition", term->left->type, term->right->type);
+ return 0;
+}
+
+static int check_binop(const char *opname, struct term *term,
+ struct ctx *ctx, int ntypes, ...) {
+ va_list ap;
+ struct type *allowed[ntypes];
+ struct type *tl = NULL, *tr = NULL;
+
+ va_start(ap, ntypes);
+ for (int i=0; i < ntypes; i++)
+ allowed[i] = va_arg(ap, struct type *);
+ va_end(ap);
+
+ tl = require_exp_type(term->left, ctx, ntypes, allowed);
+ if (tl == NULL)
+ return 0;
+
+ tr = require_exp_type(term->right, ctx, ntypes, allowed);
+ if (tr == NULL)
+ return 0;
+
+ term->type = type_join(tl, tr);
+ if (term->type == NULL)
+ goto print_error;
+ return 1;
+ print_error:
+ type_error_binop(term->info, opname, term->left->type, term->right->type);
+ return 0;
+}
+
+static int check_value(struct term *term) {
+ const char *msg;
+ struct value *v = term->value;
+
+ if (v->tag == V_REGEXP) {
+ /* The only literals that need checking are regular expressions,
+ where we need to make sure the regexp is syntactically
+ correct */
+ if (regexp_check(v->regexp, &msg) == -1) {
+ syntax_error(v->info, "Invalid regular expression: %s", msg);
+ return 0;
+ }
+ term->type = make_base_type(T_REGEXP);
+ } else if (v->tag == V_EXN) {
+ /* Exceptions can't be typed */
+ return 0;
+ } else {
+ /* There are cases where we generate values internally, and
+ those have their type already set; we don't want to
+ overwrite that */
+ if (term->type == NULL) {
+ term->type = value_type(v);
+ }
+ }
+ return 1;
+}
+
+/* Return 1 if TERM passes, 0 otherwise */
+static int check_exp(struct term *term, struct ctx *ctx) {
+ int result = 1;
+ assert(term->type == NULL || term->tag == A_VALUE || term->ref > 1);
+ if (term->type != NULL && term->tag != A_VALUE)
+ return 1;
+
+ switch (term->tag) {
+ case A_UNION:
+ result = check_binop("union", term, ctx, 2, t_regexp, t_lens);
+ break;
+ case A_MINUS:
+ result = check_binop("minus", term, ctx, 1, t_regexp);
+ break;
+ case A_COMPOSE:
+ result = check_compose(term, ctx);
+ break;
+ case A_CONCAT:
+ result = check_binop("concatenation", term, ctx,
+ 4, t_string, t_regexp, t_lens, t_filter);
+ break;
+ case A_LET:
+ {
+ result = check_exp(term->right, ctx);
+ if (result) {
+ struct term *func = term->left;
+ assert(func->tag == A_FUNC);
+ assert(func->param->type == NULL);
+ func->param->type = ref(term->right->type);
+
+ result = check_exp(func, ctx);
+ if (result) {
+ term->tag = A_APP;
+ term->type = ref(func->type->img);
+ }
+ }
+ }
+ break;
+ case A_APP:
+ result = check_exp(term->left, ctx) & check_exp(term->right, ctx);
+ if (result) {
+ if (term->left->type->tag != T_ARROW) {
+ type_error1(term->info,
+ "expected function in application but found %s",
+ term->left->type);
+ result = 0;
+ }
+ }
+ if (result) {
+ result = expect_types(term->info,
+ term->right->type,
+ 1, term->left->type->dom) != NULL;
+ if (! result) {
+ type_error_binop(term->info, "application",
+ term->left->type, term->right->type);
+ result = 0;
+ }
+ }
+ if (result)
+ term->type = ref(term->left->type->img);
+ break;
+ case A_VALUE:
+ result = check_value(term);
+ break;
+ case A_IDENT:
+ {
+ struct type *t = ctx_lookup_type(term->info, ctx, term->ident);
+ if (t == NULL) {
+ syntax_error(term->info, "Undefined variable %s",
+ term->ident->str);
+ result = 0;
+ } else {
+ term->type = ref(t);
+ }
+ }
+ break;
+ case A_BRACKET:
+ result = check_exp(term->brexp, ctx);
+ if (result) {
+ term->type = ref(expect_types(term->info, term->brexp->type,
+ 1, t_lens));
+ if (term->type == NULL) {
+ type_error1(term->info,
+ "[..] is only defined for lenses, not for %s",
+ term->brexp->type);
+ result = 0;
+ }
+ }
+ break;
+ case A_FUNC:
+ {
+ bind_param(&ctx->local, term->param, NULL);
+ result = check_exp(term->body, ctx);
+ if (result) {
+ term->type =
+ make_arrow_type(term->param->type, term->body->type);
+ }
+ unbind_param(&ctx->local, term->param);
+ }
+ break;
+ case A_REP:
+ result = check_exp(term->exp, ctx);
+ if (result) {
+ term->type = ref(expect_types(term->info, term->exp->type, 2,
+ t_regexp, t_lens));
+ if (term->type == NULL) {
+ type_error1(term->info,
+ "Incompatible types: repetition is only defined"
+ " for regexp and lens, not for %s",
+ term->exp->type);
+ result = 0;
+ }
+ }
+ break;
+ default:
+ assert(0);
+ break;
+ }
+ assert(!result || term->type != NULL);
+ return result;
+}
+
+static int check_decl(struct term *term, struct ctx *ctx) {
+ assert(term->tag == A_BIND || term->tag == A_TEST);
+
+ if (term->tag == A_BIND) {
+ if (!check_exp(term->exp, ctx))
+ return 0;
+ term->type = ref(term->exp->type);
+
+ if (bnd_lookup(ctx->local, term->bname) != NULL) {
+ syntax_error(term->info,
+ "the name %s is already defined", term->bname);
+ return 0;
+ }
+ bind_type(&ctx->local, term->bname, term->type);
+ } else if (term->tag == A_TEST) {
+ if (!check_exp(term->test, ctx))
+ return 0;
+ if (term->result != NULL) {
+ if (!check_exp(term->result, ctx))
+ return 0;
+ if (! type_equal(term->test->type, term->result->type)) {
+ type_error2(term->info,
+ "expected test result of type %s but got %s",
+ term->result->type, term->test->type);
+ return 0;
+ }
+ } else {
+ if (expect_types(term->info, term->test->type, 2,
+ t_string, t_tree) == NULL)
+ return 0;
+ }
+ term->type = ref(term->test->type);
+ } else {
+ assert(0);
+ }
+ return 1;
+}
+
+static int typecheck(struct term *term, struct augeas *aug) {
+ int ok = 1;
+ struct ctx ctx;
+ char *fname;
+ const char *basenam;
+
+ assert(term->tag == A_MODULE);
+
+ /* Check that the module name is consistent with the filename */
+ fname = module_basename(term->mname);
+
+ basenam = strrchr(term->info->filename->str, SEP);
+ if (basenam == NULL)
+ basenam = term->info->filename->str;
+ else
+ basenam += 1;
+ if (STRNEQ(fname, basenam)) {
+ syntax_error(term->info,
+ "The module %s must be in a file named %s",
+ term->mname, fname);
+ free(fname);
+ return 0;
+ }
+ free(fname);
+
+ ctx.aug = aug;
+ ctx.local = NULL;
+ ctx.name = term->mname;
+ list_for_each(dcl, term->decls) {
+ ok &= check_decl(dcl, &ctx);
+ }
+ unref(ctx.local, binding);
+ return ok;
+}
+
+static struct value *compile_exp(struct info *, struct term *, struct ctx *);
+
+static struct value *compile_union(struct term *exp, struct ctx *ctx) {
+ struct value *v1 = compile_exp(exp->info, exp->left, ctx);
+ if (EXN(v1))
+ return v1;
+ struct value *v2 = compile_exp(exp->info, exp->right, ctx);
+ if (EXN(v2)) {
+ unref(v1, value);
+ return v2;
+ }
+
+ struct type *t = exp->type;
+ struct info *info = exp->info;
+ struct value *v = NULL;
+
+ v1 = coerce(v1, t);
+ if (EXN(v1))
+ return v1;
+ v2 = coerce(v2, t);
+ if (EXN(v2)) {
+ unref(v1, value);
+ return v2;
+ }
+
+ if (t->tag == T_REGEXP) {
+ v = make_value(V_REGEXP, ref(info));
+ v->regexp = regexp_union(info, v1->regexp, v2->regexp);
+ } else if (t->tag == T_LENS) {
+ struct lens *l1 = v1->lens;
+ struct lens *l2 = v2->lens;
+ v = lns_make_union(ref(info), ref(l1), ref(l2), LNS_TYPE_CHECK(ctx));
+ } else {
+ fatal_error(info, "Tried to union a %s and a %s to yield a %s",
+ type_name(exp->left->type), type_name(exp->right->type),
+ type_name(t));
+ }
+ unref(v1, value);
+ unref(v2, value);
+ return v;
+}
+
+static struct value *compile_minus(struct term *exp, struct ctx *ctx) {
+ struct value *v1 = compile_exp(exp->info, exp->left, ctx);
+ if (EXN(v1))
+ return v1;
+ struct value *v2 = compile_exp(exp->info, exp->right, ctx);
+ if (EXN(v2)) {
+ unref(v1, value);
+ return v2;
+ }
+
+ struct type *t = exp->type;
+ struct info *info = exp->info;
+ struct value *v;
+
+ v1 = coerce(v1, t);
+ v2 = coerce(v2, t);
+ if (t->tag == T_REGEXP) {
+ struct regexp *re1 = v1->regexp;
+ struct regexp *re2 = v2->regexp;
+ struct regexp *re = regexp_minus(info, re1, re2);
+ if (re == NULL) {
+ v = make_exn_value(ref(info),
+ "Regular expression subtraction 'r1 - r2' failed");
+ exn_printf_line(v, "r1: /%s/", re1->pattern->str);
+ exn_printf_line(v, "r2: /%s/", re2->pattern->str);
+ } else {
+ v = make_value(V_REGEXP, ref(info));
+ v->regexp = re;
+ }
+ } else {
+ v = NULL;
+ fatal_error(info, "Tried to subtract a %s and a %s to yield a %s",
+ type_name(exp->left->type), type_name(exp->right->type),
+ type_name(t));
+ }
+ unref(v1, value);
+ unref(v2, value);
+ return v;
+}
+
+static struct value *compile_compose(struct term *exp, struct ctx *ctx) {
+ struct info *info = exp->info;
+ struct value *v;
+
+ if (exp->left->type->tag == T_ARROW) {
+ // FIXME: This is really crufty, and should be desugared in the
+ // parser so that we don't have to do all this manual type
+ // computation. Should we write function composition as
+ // concatenation instead of using a separate syntax?
+
+ /* Build lambda x: exp->right (exp->left x) as a closure */
+ char *var = strdup("@0");
+ struct term *func = make_param(var, ref(exp->left->type->dom),
+ ref(info));
+ func->type = make_arrow_type(exp->left->type->dom,
+ exp->right->type->img);
+ struct term *ident = make_term(A_IDENT, ref(info));
+ ident->ident = ref(func->param->name);
+ ident->type = ref(func->param->type);
+ struct term *app = make_app_term(ref(exp->left), ident, ref(info));
+ app->type = ref(app->left->type->img);
+ app = make_app_term(ref(exp->right), app, ref(info));
+ app->type = ref(app->right->type->img);
+
+ build_func(func, app);
+
+ if (!type_equal(func->type, exp->type)) {
+ char *f = type_string(func->type);
+ char *e = type_string(exp->type);
+ fatal_error(info,
+ "Composition has type %s but should have type %s", f, e);
+ free(f);
+ free(e);
+ unref(func, term);
+ return info->error->exn;
+ }
+ v = make_closure(func, ctx->local);
+ unref(func, term);
+ } else {
+ v = compile_exp(exp->info, exp->left, ctx);
+ unref(v, value);
+ v = compile_exp(exp->info, exp->right, ctx);
+ }
+ return v;
+}
+
+static struct value *compile_concat(struct term *exp, struct ctx *ctx) {
+ struct value *v1 = compile_exp(exp->info, exp->left, ctx);
+ if (EXN(v1))
+ return v1;
+ struct value *v2 = compile_exp(exp->info, exp->right, ctx);
+ if (EXN(v2)) {
+ unref(v1, value);
+ return v2;
+ }
+
+ struct type *t = exp->type;
+ struct info *info = exp->info;
+ struct value *v;
+
+ v1 = coerce(v1, t);
+ v2 = coerce(v2, t);
+ if (t->tag == T_STRING) {
+ const char *s1 = v1->string->str;
+ const char *s2 = v2->string->str;
+ v = make_value(V_STRING, ref(info));
+ make_ref(v->string);
+ if (ALLOC_N(v->string->str, strlen(s1) + strlen(s2) + 1) < 0)
+ goto error;
+ char *s = v->string->str;
+ strcpy(s, s1);
+ strcat(s, s2);
+ } else if (t->tag == T_REGEXP) {
+ v = make_value(V_REGEXP, ref(info));
+ v->regexp = regexp_concat(info, v1->regexp, v2->regexp);
+ } else if (t->tag == T_FILTER) {
+ struct filter *f1 = v1->filter;
+ struct filter *f2 = v2->filter;
+ v = make_value(V_FILTER, ref(info));
+ if (v2->ref == 1 && f2->ref == 1) {
+ list_append(f2, ref(f1));
+ v->filter = ref(f2);
+ } else if (v1->ref == 1 && f1->ref == 1) {
+ list_append(f1, ref(f2));
+ v->filter = ref(f1);
+ } else {
+ struct filter *cf1, *cf2;
+ cf1 = make_filter(ref(f1->glob), f1->include);
+ cf2 = make_filter(ref(f2->glob), f2->include);
+ cf1->next = ref(f1->next);
+ cf2->next = ref(f2->next);
+ list_append(cf1, cf2);
+ v->filter = cf1;
+ }
+ } else if (t->tag == T_LENS) {
+ struct lens *l1 = v1->lens;
+ struct lens *l2 = v2->lens;
+ v = lns_make_concat(ref(info), ref(l1), ref(l2), LNS_TYPE_CHECK(ctx));
+ } else {
+ v = NULL;
+ fatal_error(info, "Tried to concat a %s and a %s to yield a %s",
+ type_name(exp->left->type), type_name(exp->right->type),
+ type_name(t));
+ }
+ unref(v1, value);
+ unref(v2, value);
+ return v;
+ error:
+ return exp->info->error->exn;
+}
+
+static struct value *apply(struct term *app, struct ctx *ctx) {
+ struct value *f = compile_exp(app->info, app->left, ctx);
+ struct value *result = NULL;
+ struct ctx lctx;
+
+ if (EXN(f))
+ return f;
+
+ struct value *arg = compile_exp(app->info, app->right, ctx);
+ if (EXN(arg)) {
+ unref(f, value);
+ return arg;
+ }
+
+ assert(f->tag == V_CLOS);
+
+ lctx.aug = ctx->aug;
+ lctx.local = ref(f->bindings);
+ lctx.name = ctx->name;
+
+ arg = coerce(arg, f->func->param->type);
+ if (arg == NULL)
+ goto done;
+
+ bind_param(&lctx.local, f->func->param, arg);
+ result = compile_exp(app->info, f->func->body, &lctx);
+ if (result != NULL) {
+ unref(result->info, info);
+ result->info = ref(app->info);
+ }
+ unbind_param(&lctx.local, f->func->param);
+
+ done:
+ unref(lctx.local, binding);
+ unref(arg, value);
+ unref(f, value);
+ return result;
+}
+
+static struct value *compile_bracket(struct term *exp, struct ctx *ctx) {
+ struct value *arg = compile_exp(exp->info, exp->brexp, ctx);
+ if (EXN(arg))
+ return arg;
+ assert(arg->tag == V_LENS);
+
+ struct value *v = lns_make_subtree(ref(exp->info), ref(arg->lens));
+ unref(arg, value);
+
+ return v;
+}
+
+static struct value *compile_rep(struct term *rep, struct ctx *ctx) {
+ struct value *arg = compile_exp(rep->info, rep->rexp, ctx);
+ struct value *v = NULL;
+
+ if (EXN(arg))
+ return arg;
+
+ arg = coerce(arg, rep->type);
+ if (rep->type->tag == T_REGEXP) {
+ int min, max;
+ if (rep->quant == Q_STAR) {
+ min = 0; max = -1;
+ } else if (rep->quant == Q_PLUS) {
+ min = 1; max = -1;
+ } else if (rep->quant == Q_MAYBE) {
+ min = 0; max = 1;
+ } else {
+ assert(0);
+ abort();
+ }
+ v = make_value(V_REGEXP, ref(rep->info));
+ v->regexp = regexp_iter(rep->info, arg->regexp, min, max);
+ } else if (rep->type->tag == T_LENS) {
+ int c = LNS_TYPE_CHECK(ctx);
+ if (rep->quant == Q_STAR) {
+ v = lns_make_star(ref(rep->info), ref(arg->lens), c);
+ } else if (rep->quant == Q_PLUS) {
+ v = lns_make_plus(ref(rep->info), ref(arg->lens), c);
+ } else if (rep->quant == Q_MAYBE) {
+ v = lns_make_maybe(ref(rep->info), ref(arg->lens), c);
+ } else {
+ assert(0);
+ }
+ } else {
+ fatal_error(rep->info, "Tried to repeat a %s to yield a %s",
+ type_name(rep->rexp->type), type_name(rep->type));
+ }
+ unref(arg, value);
+ return v;
+}
+
+static struct value *compile_exp(struct info *info,
+ struct term *exp, struct ctx *ctx) {
+ struct value *v = NULL;
+
+ switch (exp->tag) {
+ case A_COMPOSE:
+ v = compile_compose(exp, ctx);
+ break;
+ case A_UNION:
+ v = compile_union(exp, ctx);
+ break;
+ case A_MINUS:
+ v = compile_minus(exp, ctx);
+ break;
+ case A_CONCAT:
+ v = compile_concat(exp, ctx);
+ break;
+ case A_APP:
+ v = apply(exp, ctx);
+ break;
+ case A_VALUE:
+ if (exp->value->tag == V_NATIVE) {
+ v = native_call(info, exp->value->native, ctx);
+ } else {
+ v = ref(exp->value);
+ }
+ break;
+ case A_IDENT:
+ v = ref(ctx_lookup(exp->info, ctx, exp->ident));
+ break;
+ case A_BRACKET:
+ v = compile_bracket(exp, ctx);
+ break;
+ case A_FUNC:
+ v = make_closure(exp, ctx->local);
+ break;
+ case A_REP:
+ v = compile_rep(exp, ctx);
+ break;
+ default:
+ assert(0);
+ break;
+ }
+
+ return v;
+}
+
+static int compile_test(struct term *term, struct ctx *ctx) {
+ struct value *actual = compile_exp(term->info, term->test, ctx);
+ struct value *expect = NULL;
+ int ret = 1;
+
+ if (term->tr_tag == TR_EXN) {
+ if (!EXN(actual)) {
+ print_info(stdout, term->info);
+ printf("Test run should have produced exception, but produced\n");
+ print_value(stdout, actual);
+ printf("\n");
+ ret = 0;
+ }
+ } else {
+ if (EXN(actual)) {
+ print_info(stdout, term->info);
+ printf("exception thrown in test\n");
+ print_value(stdout, actual);
+ printf("\n");
+ ret = 0;
+ } else if (term->tr_tag == TR_CHECK) {
+ expect = compile_exp(term->info, term->result, ctx);
+ if (EXN(expect))
+ goto done;
+ if (! value_equal(actual, expect)) {
+ printf("Test failure:");
+ print_info(stdout, term->info);
+ printf("\n");
+ printf(" Expected:\n");
+ print_value(stdout, expect);
+ printf("\n");
+ printf(" Actual:\n");
+ print_value(stdout, actual);
+ printf("\n");
+ ret = 0;
+ }
+ } else {
+ printf("Test result: ");
+ print_info(stdout, term->info);
+ printf("\n");
+ if (actual->tag == V_TREE) {
+ print_tree_braces(stdout, 2, actual->origin->children);
+ } else {
+ print_value(stdout, actual);
+ }
+ printf("\n");
+ }
+ }
+ done:
+ reset_error(term->info->error);
+ unref(actual, value);
+ unref(expect, value);
+ return ret;
+}
+
+static int compile_decl(struct term *term, struct ctx *ctx) {
+ if (term->tag == A_BIND) {
+ int result;
+
+ struct value *v = compile_exp(term->info, term->exp, ctx);
+ bind(&ctx->local, term->bname, term->type, v);
+
+ if (EXN(v) && !v->exn->seen) {
+ struct error *error = term->info->error;
+ struct memstream ms;
+
+ init_memstream(&ms);
+
+ syntax_error(term->info, "Failed to compile %s",
+ term->bname);
+ fprintf(ms.stream, "%s\n", error->details);
+ print_value(ms.stream, v);
+ close_memstream(&ms);
+
+ v->exn->seen = 1;
+ free(error->details);
+ error->details = ms.buf;
+ }
+ result = !(EXN(v) || HAS_ERR(ctx->aug));
+ unref(v, value);
+ return result;
+ } else if (term->tag == A_TEST) {
+ return compile_test(term, ctx);
+ }
+ assert(0);
+ abort();
+}
+
+static struct module *compile(struct term *term, struct augeas *aug) {
+ struct ctx ctx;
+ struct transform *autoload = NULL;
+ assert(term->tag == A_MODULE);
+
+ ctx.aug = aug;
+ ctx.local = NULL;
+ ctx.name = term->mname;
+ list_for_each(dcl, term->decls) {
+ if (!compile_decl(dcl, &ctx))
+ goto error;
+ }
+
+ if (term->autoload != NULL) {
+ struct binding *bnd = bnd_lookup(ctx.local, term->autoload);
+ if (bnd == NULL) {
+ syntax_error(term->info, "Undefined transform in autoload %s",
+ term->autoload);
+ goto error;
+ }
+ if (expect_types(term->info, bnd->type, 1, t_transform) == NULL)
+ goto error;
+ autoload = bnd->value->transform;
+ }
+ struct module *module = module_create(term->mname);
+ module->bindings = ctx.local;
+ module->autoload = ref(autoload);
+ return module;
+ error:
+ unref(ctx.local, binding);
+ return NULL;
+}
+
+/*
+ * Defining native functions
+ */
+static struct info *
+make_native_info(struct error *error, const char *fname, int line) {
+ struct info *info;
+ if (make_ref(info) < 0)
+ goto error;
+ info->first_line = info->last_line = line;
+ info->first_column = info->last_column = 0;
+ info->error = error;
+ if (make_ref(info->filename) < 0)
+ goto error;
+ info->filename->str = strdup(fname);
+ return info;
+ error:
+ unref(info, info);
+ return NULL;
+}
+
+int define_native_intl(const char *file, int line,
+ struct error *error,
+ struct module *module, const char *name,
+ int argc, func_impl impl, ...) {
+ assert(argc > 0); /* We have no unit type */
+ assert(argc <= 5);
+ va_list ap;
+ enum type_tag tag;
+ struct term *params = NULL, *body = NULL, *func = NULL;
+ struct type *type;
+ struct value *v = NULL;
+ struct info *info = NULL;
+ struct ctx ctx;
+
+ info = make_native_info(error, file, line);
+ if (info == NULL)
+ goto error;
+
+ va_start(ap, impl);
+ for (int i=0; i < argc; i++) {
+ struct term *pterm;
+ char ident[10];
+ tag = va_arg(ap, enum type_tag);
+ type = make_base_type(tag);
+ snprintf(ident, 10, "@%d", i);
+ pterm = make_param(strdup(ident), type, ref(info));
+ list_append(params, pterm);
+ }
+ tag = va_arg(ap, enum type_tag);
+ va_end(ap);
+
+ type = make_base_type(tag);
+
+ make_ref(v);
+ if (v == NULL)
+ goto error;
+ v->tag = V_NATIVE;
+ v->info = info;
+ info = NULL;
+
+ if (ALLOC(v->native) < 0)
+ goto error;
+ v->native->argc = argc;
+ v->native->type = type;
+ v->native->impl = impl;
+
+ make_ref(body);
+ if (body == NULL)
+ goto error;
+ /* INFO was handed to V above; take the reference from there */
+ body->info = ref(v->info);
+ body->type = ref(type);
+ body->tag = A_VALUE;
+ body->value = v;
+ v = NULL;
+
+ func = build_func(params, body);
+ params = NULL;
+ body = NULL;
+
+ ctx.aug = NULL;
+ ctx.local = ref(module->bindings);
+ ctx.name = module->name;
+ if (! check_exp(func, &ctx)) {
+ fatal_error(func->info, "Typechecking native %s failed",
+ name);
+ abort();
+ }
+ v = make_closure(func, ctx.local);
+ if (v == NULL) {
+ unref(module->bindings, binding);
+ goto error;
+ }
+ bind(&ctx.local, name, func->type, v);
+ unref(v, value);
+ unref(func, term);
+ unref(module->bindings, binding);
+
+ module->bindings = ctx.local;
+ return 0;
+ error:
+ list_for_each(p, params) {
+ unref(p, term);
+ }
+ unref(v, value);
+ unref(body, term);
+ unref(func, term);
+ return -1;
+}
+
+
+/* Defined in parser.y */
+int augl_parse_file(struct augeas *aug, const char *name, struct term **term);
+
+static char *module_basename(const char *modname) {
+ char *fname;
+
+ if (asprintf(&fname, "%s" AUG_EXT, modname) == -1)
+ return NULL;
+ for (int i=0; i < strlen(modname); i++)
+ fname[i] = tolower(fname[i]);
+ return fname;
+}
+
+static char *module_filename(struct augeas *aug, const char *modname) {
+ char *dir = NULL;
+ char *filename = NULL;
+ char *name = module_basename(modname);
+
+ /* Module names that contain slashes can fool us into finding and
+ * loading a module in another directory, but once loaded we won't find
+ * it under MODNAME, so we would later try to load it over and
+ * over */
+ if (strchr(modname, '/') != NULL)
+ goto error;
+
+ while ((dir = argz_next(aug->modpathz, aug->nmodpath, dir)) != NULL) {
+ int len = strlen(name) + strlen(dir) + 2;
+ struct stat st;
+
+ if (REALLOC_N(filename, len) == -1)
+ goto error;
+ sprintf(filename, "%s/%s", dir, name);
+ if (stat(filename, &st) == 0)
+ goto done;
+ }
+ error:
+ FREE(filename);
+ done:
+ free(name);
+ return filename;
+}
+
+int load_module_file(struct augeas *aug, const char *filename,
+ const char *name) {
+ struct term *term = NULL;
+ int result = -1;
+
+ if (aug->flags & AUG_TRACE_MODULE_LOADING)
+ printf("Module %s", filename);
+ augl_parse_file(aug, filename, &term);
+ if (aug->flags & AUG_TRACE_MODULE_LOADING)
+ printf(HAS_ERR(aug) ? " failed\n" : " loaded\n");
+ ERR_BAIL(aug);
+
+ if (! typecheck(term, aug))
+ goto error;
+
+ struct module *module = compile(term, aug);
+ bool bad_module = (module == NULL);
+ if (bad_module && name != NULL) {
+ /* Put an empty placeholder on the module list so that
+ * we don't retry loading this module every time it is mentioned
+ */
+ module = module_create(name);
+ }
+ if (module != NULL) {
+ list_append(aug->modules, module);
+ list_for_each(bnd, module->bindings) {
+ if (bnd->value->tag == V_LENS) {
+ lens_release(bnd->value->lens);
+ }
+ }
+ }
+ ERR_THROW(bad_module, aug, AUG_ESYNTAX, "Failed to load %s", filename);
+
+ result = 0;
+ error:
+ // FIXME: This leads to a bad free of a string used in a del lens
+ // To reproduce run lenses/tests/test_yum.aug
+ unref(term, term);
+ return result;
+}
+
+static int load_module(struct augeas *aug, const char *name) {
+ char *filename = NULL;
+
+ if (module_find(aug->modules, name) != NULL)
+ return 0;
+
+ if ((filename = module_filename(aug, name)) == NULL)
+ return -1;
+
+ if (load_module_file(aug, filename, name) == -1)
+ goto error;
+
+ free(filename);
+ return 0;
+
+ error:
+ free(filename);
+ return -1;
+}
+
+int interpreter_init(struct augeas *aug) {
+ int r;
+
+ r = init_fatal_exn(aug->error);
+ if (r < 0)
+ return -1;
+
+ aug->modules = builtin_init(aug->error);
+ if (aug->flags & AUG_NO_MODL_AUTOLOAD)
+ return 0;
+
+ // For now, we just load every file on the search path
+ const char *dir = NULL;
+ glob_t globbuf;
+ int gl_flags = GLOB_NOSORT;
+
+ MEMZERO(&globbuf, 1);
+
+ while ((dir = argz_next(aug->modpathz, aug->nmodpath, dir)) != NULL) {
+ char *globpat;
+ r = asprintf(&globpat, "%s/*.aug", dir);
+ ERR_NOMEM(r < 0, aug);
+
+ r = glob(globpat, gl_flags, NULL, &globbuf);
+ if (r != 0 && r != GLOB_NOMATCH) {
+ /* This really has to be an allocation failure; glob is not
+ * supposed to return GLOB_ABORTED here */
+ aug_errcode_t code =
+ r == GLOB_NOSPACE ? AUG_ENOMEM : AUG_EINTERNAL;
+ ERR_REPORT(aug, code, "glob failure for %s", globpat);
+ free(globpat);
+ goto error;
+ }
+ gl_flags |= GLOB_APPEND;
+ free(globpat);
+ }
+
+ for (int i=0; i < globbuf.gl_pathc; i++) {
+ char *name, *p, *q;
+ int res;
+ p = strrchr(globbuf.gl_pathv[i], SEP);
+ if (p == NULL)
+ p = globbuf.gl_pathv[i];
+ else
+ p += 1;
+ q = strchr(p, '.');
+ name = strndup(p, q - p);
+ name[0] = toupper(name[0]);
+ res = load_module(aug, name);
+ free(name);
+ if (res == -1)
+ goto error;
+ }
+ globfree(&globbuf);
+ return 0;
+ error:
+ globfree(&globbuf);
+ return -1;
+}
+
+/*
+ * Local variables:
+ * indent-tabs-mode: nil
+ * c-indent-level: 4
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
--- /dev/null
+/*
+ * syntax.h: Data types to represent language syntax
+ *
+ * Copyright (C) 2007-2016 David Lutterkort
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: David Lutterkort <dlutter@redhat.com>
+ */
+
+#ifndef SYNTAX_H_
+#define SYNTAX_H_
+
+#include <limits.h>
+#include "internal.h"
+#include "lens.h"
+#include "ref.h"
+#include "fa.h"
+#include "regexp.h"
+#include "info.h"
+
+void syntax_error(struct info *info, const char *format, ...)
+ ATTRIBUTE_FORMAT(printf, 2, 3);
+
+void fatal_error(struct info *info, const char *format, ...)
+ ATTRIBUTE_FORMAT(printf, 2, 3);
+
+enum term_tag {
+ A_MODULE,
+ A_BIND, /* Module scope binding of a name */
+ A_LET, /* local LET .. IN binding */
+ A_COMPOSE,
+ A_UNION,
+ A_MINUS,
+ A_CONCAT,
+ A_APP,
+ A_VALUE,
+ A_IDENT,
+ A_BRACKET,
+ A_FUNC,
+ A_REP,
+ A_TEST
+};
+
+enum test_result_tag {
+ TR_CHECK,
+ TR_PRINT,
+ TR_EXN
+};
+
+enum quant_tag {
+ Q_STAR,
+ Q_PLUS,
+ Q_MAYBE
+};
+
+struct term {
+ struct term *next;
+ unsigned int ref;
+ struct info *info;
+ struct type *type; /* Filled in by the typechecker */
+ enum term_tag tag;
+ union {
+ struct { /* A_MODULE */
+ char *mname;
+ char *autoload;
+ struct term *decls;
+ };
+ struct { /* A_BIND */
+ char *bname;
+ struct term *exp;
+ };
+ struct { /* A_COMPOSE, A_UNION, A_CONCAT, A_APP, A_LET */
+ struct term *left;
+ struct term *right;
+ };
+ struct value *value; /* A_VALUE */
+ struct term *brexp; /* A_BRACKET */
+ struct string *ident; /* A_IDENT */
+ struct { /* A_REP */
+ enum quant_tag quant;
+ struct term *rexp;
+ };
+ struct { /* A_FUNC */
+ struct param *param;
+ struct term *body;
+ };
+ struct { /* A_TEST */
+ enum test_result_tag tr_tag;
+ struct term *test;
+ struct term *result;
+ };
+ };
+};
+
+struct param {
+ struct info *info;
+ unsigned int ref;
+ struct string *name;
+ struct type *type;
+};
+
+ /* The prototype for the implementation of a native/builtin function in the
+ * interpreter.
+ *
+ * The arguments are passed as a NULL-terminated array of values.
+ */
+typedef struct value *(*func_impl)(struct info *, struct value *argv[]);
+
+struct native {
+ unsigned int argc;
+ struct type *type;
+ func_impl impl;
+};
+
+/* An exception in the interpreter. Some exceptions are reported directly
+ * into the central struct error; an exception for those is only generated
+ * to follow the control flow for exceptions. Such exceptions have both
+ * seen and error set to 1. They are the only exceptions with error == 1.
+ * When error == 1, none of the other fields in the exn will be usable.
+ */
+struct exn {
+ struct info *info;
+ unsigned int seen : 1; /* Whether the user has seen this EXN */
+ unsigned int error : 1;
+ char *message;
+ size_t nlines;
+ char **lines;
+};
+
+/*
+ * Values in the interpreter
+ */
+enum value_tag {
+ V_STRING,
+ V_REGEXP,
+ V_LENS,
+ V_TREE,
+ V_FILTER,
+ V_TRANSFORM,
+ V_NATIVE,
+ V_EXN,
+ V_CLOS,
+ V_UNIT
+};
+
+#define EXN(v) ((v)->tag == V_EXN)
+
+struct value {
+ unsigned int ref;
+ struct info *info;
+ enum value_tag tag;
+ /* Nothing in this union for V_UNIT */
+ union {
+ struct string *string; /* V_STRING */
+ struct regexp *regexp; /* V_REGEXP */
+ struct lens *lens; /* V_LENS */
+ struct native *native; /* V_NATIVE */
+ struct tree *origin; /* V_TREE */
+ struct filter *filter; /* V_FILTER */
+ struct transform *transform; /* V_TRANSFORM */
+ struct exn *exn; /* V_EXN */
+ struct { /* V_CLOS */
+ struct term *func;
+ struct binding *bindings;
+ };
+ };
+};
+
+/* All types except for T_ARROW (functions) are simple. Subtype relations
+ * for the simple types:
+ * T_STRING <: T_REGEXP
+ * and the usual subtype relation for functions.
+ */
+enum type_tag {
+ T_STRING,
+ T_REGEXP,
+ T_LENS,
+ T_TREE,
+ T_FILTER,
+ T_TRANSFORM,
+ T_ARROW,
+ T_UNIT
+};
+
+struct type {
+ unsigned int ref;
+ enum type_tag tag;
+ struct type *dom; /* T_ARROW */
+ struct type *img; /* T_ARROW */
+};
+
+struct binding {
+ unsigned int ref;
+ struct binding *next;
+ struct string *ident;
+ struct type *type;
+ struct value *value;
+};
+
+/* A module maps names to TYPE * VALUE. */
+struct module {
+ unsigned int ref;
+ struct module *next; /* Only used for the global list of modules */
+ struct transform *autoload;
+ char *name;
+ struct binding *bindings;
+};
+
+struct type *make_arrow_type(struct type *dom, struct type *img);
+struct type *make_base_type(enum type_tag tag);
+/* Do not call this directly. Use unref(t, type) instead */
+void free_type(struct type *type);
+
+/* Constructors for some terms in syntax.c. Constructors assume ownership
+ * of their arguments without incrementing; the caller owns the returned
+ * objects.
+ */
+struct term *make_term(enum term_tag tag, struct info *info);
+void free_term(struct term *term);
+struct term *make_param(char *name, struct type *type, struct info *info);
+struct value *make_value(enum value_tag tag, struct info *info);
+struct value *make_unit(struct info *info);
+struct term *make_app_term(struct term *func, struct term *arg,
+ struct info *info);
+struct term *make_app_ident(char *id, struct term *func, struct info *info);
+
+/* Print a tree in the braces style used in modules */
+void print_tree_braces(FILE *out, int indent, struct tree *tree);
+
+/* Make an EXN value
+ * Receive ownership of INFO
+ *
+ * FORMAT and following arguments are printed to a new string. Caller must
+ * clean those up.
+ */
+struct value *make_exn_value(struct info *info, const char *format, ...)
+ ATTRIBUTE_FORMAT(printf, 2, 3);
+
+/* Add NLINES lines (passed as const char *) to EXN, which must be a
+ * value with tag V_EXN, created by MAKE_EXN_VALUE.
+ *
+ * The strings belong to EXN after the call.
+ */
+void exn_add_lines(struct value *exn, int nlines, ...);
+
+void exn_printf_line(struct value *exn, const char *format, ...)
+ ATTRIBUTE_FORMAT(printf, 2, 3);
+
+/* Do not call these directly, use UNREF instead */
+void free_value(struct value *v);
+void free_module(struct module *module);
+
+/* Turn a list of PARAMS (represented as terms tagged as A_FUNC with the
+ * param in PARAM) into nested A_FUNC terms
+ */
+struct term *build_func(struct term *params, struct term *exp);
+
+struct module *module_create(const char *name);
+
+#define define_native(error, module, name, argc, impl, types ...) \
+ define_native_intl(__FILE__, __LINE__, error, module, name, \
+ argc, impl, ## types)
+
+ATTRIBUTE_RETURN_CHECK
+int define_native_intl(const char *fname, int line,
+ struct error *error,
+ struct module *module, const char *name,
+ int argc, func_impl impl, ...);
+
+struct module *builtin_init(struct error *);
+
+int load_module_file(struct augeas *aug, const char *filename, const char *name);
+
+/* The name of the builtin function that checks recursive lenses */
+#define LNS_CHECK_REC_NAME "lns_check_rec"
+
+int interpreter_init(struct augeas *aug);
+
+struct lens *lens_lookup(struct augeas *aug, const char *qname);
+#endif
+
+
+/*
+ * Local variables:
+ * indent-tabs-mode: nil
+ * c-indent-level: 4
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
--- /dev/null
+/*
+ * transform.c: support for building and running transformers
+ *
+ * Copyright (C) 2007-2016 David Lutterkort
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: David Lutterkort <dlutter@redhat.com>
+ */
+
+#include <config.h>
+
+#include <fnmatch.h>
+#include <glob.h>
+
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <selinux/selinux.h>
+#include <stdbool.h>
+
+#include "internal.h"
+#include "memory.h"
+#include "augeas.h"
+#include "syntax.h"
+#include "transform.h"
+#include "errcode.h"
+
+static const int fnm_flags = FNM_PATHNAME;
+static const int glob_flags = GLOB_NOSORT;
+
+/* Extension for newly created files */
+#define EXT_AUGNEW ".augnew"
+/* Extension for backup files */
+#define EXT_AUGSAVE ".augsave"
+
+/* Loaded files are tracked underneath METATREE. When a file with name
+ * FNAME is loaded, certain entries are made under METATREE / FNAME:
+ * path : path where tree for FNAME is put
+ * mtime : time of last modification of the file as reported by stat(2)
+ * lens/info : information about where the applied lens was loaded from
+ * lens/id : unique hexadecimal id of the lens
+ * error : indication of errors during processing FNAME, or NULL
+ * if processing succeeded
+ * error/pos : position in file where error occurred (for get errors)
+ * error/path: path to tree node where error occurred (for put errors)
+ * error/message : human-readable error message
+ */
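For illustration, after loading a file such as /etc/hosts the metadata subtree described above might look roughly like this (hypothetical values; the exact lens name and info string depend on what was loaded):

```
/augeas/files/etc/hosts/path = "/files/etc/hosts"
/augeas/files/etc/hosts/mtime = "1500000000"
/augeas/files/etc/hosts/lens = "@Hosts"
/augeas/files/etc/hosts/lens/info = "/usr/share/augeas/lenses/dist/hosts.aug:..."
```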
+static const char *const s_path = "path";
+static const char *const s_lens = "lens";
+static const char *const s_last = "last_matched";
+static const char *const s_next = "next_not_matched";
+static const char *const s_info = "info";
+static const char *const s_mtime = "mtime";
+
+static const char *const s_error = "error";
+/* These are all put underneath "error" */
+static const char *const s_pos = "pos";
+static const char *const s_message = "message";
+static const char *const s_line = "line";
+static const char *const s_char = "char";
+
+/*
+ * Filters
+ */
+struct filter *make_filter(struct string *glb, unsigned int include) {
+ struct filter *f;
+ make_ref(f);
+ f->glob = glb;
+ f->include = include;
+ return f;
+}
+
+void free_filter(struct filter *f) {
+ if (f == NULL)
+ return;
+ assert(f->ref == 0);
+ unref(f->next, filter);
+ unref(f->glob, string);
+ free(f);
+}
+
+static const char *pathbase(const char *path) {
+ const char *p = strrchr(path, SEP);
+ return (p == NULL) ? path : p + 1;
+}
+
+static bool is_excl(struct tree *f) {
+ return streqv(f->label, "excl") && f->value != NULL;
+}
+
+static bool is_incl(struct tree *f) {
+ return streqv(f->label, "incl") && f->value != NULL;
+}
+
+static bool is_regular_file(const char *path) {
+ int r;
+ struct stat st;
+
+ r = stat(path, &st);
+ if (r < 0)
+ return false;
+ return S_ISREG(st.st_mode);
+}
+
+static char *mtime_as_string(struct augeas *aug, const char *fname) {
+ int r;
+ struct stat st;
+ char *result = NULL;
+
+ if (fname == NULL) {
+ result = strdup("0");
+ ERR_NOMEM(result == NULL, aug);
+ goto done;
+ }
+
+ r = stat(fname, &st);
+ if (r < 0) {
+ /* If we fail to stat, silently ignore the error
+ * and report an impossible mtime */
+ result = strdup("0");
+ ERR_NOMEM(result == NULL, aug);
+ } else {
+ r = xasprintf(&result, "%ld", (long) st.st_mtime);
+ ERR_NOMEM(r < 0, aug);
+ }
+ done:
+ return result;
+ error:
+ FREE(result);
+ return NULL;
+}
+
+/* A wrapper around fnmatch(3) that collapses '//' in PATTERN to '/', so
+ * that a pattern containing doubled slashes matches a path the way
+ * glob(3) would */
+static int fnmatch_normalize(const char *pattern, const char *string, int flags) {
+ int i, j, r;
+ char *pattern_norm = NULL;
+
+ r = ALLOC_N(pattern_norm, strlen(pattern) + 1);
+ if (r < 0)
+ goto error;
+
+ for (i = 0, j = 0; i < strlen(pattern); i++) {
+ if (pattern[i] != '/' || pattern[i+1] != '/') {
+ pattern_norm[j] = pattern[i];
+ j++;
+ }
+ }
+ pattern_norm[j] = 0;
+
+ r = fnmatch(pattern_norm, string, flags);
+ FREE(pattern_norm);
+ return r;
+
+ error:
+ if (pattern_norm != NULL)
+ FREE(pattern_norm);
+ return -1;
+}
+
+static bool file_current(struct augeas *aug, const char *fname,
+ struct tree *finfo) {
+ struct tree *mtime = tree_child(finfo, s_mtime);
+ struct tree *file = NULL, *path = NULL;
+ int r;
+ struct stat st;
+ int64_t mtime_i;
+
+ if (mtime == NULL || mtime->value == NULL)
+ return false;
+
+ r = xstrtoint64(mtime->value, 10, &mtime_i);
+ if (r < 0) {
+ /* Ignore silently and err on the side of caution */
+ return false;
+ }
+
+ r = stat(fname, &st);
+ if (r < 0)
+ return false;
+
+ if (mtime_i != (int64_t) st.st_mtime)
+ return false;
+
+ path = tree_child(finfo, s_path);
+ if (path == NULL)
+ return false;
+
+ file = tree_fpath(aug, path->value);
+ return (file != NULL && ! file->dirty);
+}
+
+static int filter_generate(struct tree *xfm, const char *root,
+ int *nmatches, char ***matches) {
+ glob_t globbuf;
+ int gl_flags = glob_flags;
+ int r;
+ int ret = 0;
+ char **pathv = NULL;
+ int pathc = 0;
+ int root_prefix = strlen(root) - 1;
+
+ *nmatches = 0;
+ *matches = NULL;
+ MEMZERO(&globbuf, 1);
+
+ list_for_each(f, xfm->children) {
+ char *globpat = NULL;
+ if (! is_incl(f))
+ continue;
+ pathjoin(&globpat, 2, root, f->value);
+ r = glob(globpat, gl_flags, NULL, &globbuf);
+ free(globpat);
+
+ if (r != 0 && r != GLOB_NOMATCH)
+ goto error;
+ gl_flags |= GLOB_APPEND;
+ }
+
+ pathc = globbuf.gl_pathc;
+ int pathind = 0;
+
+ if (ALLOC_N(pathv, pathc) < 0)
+ goto error;
+
+ for (int i=0; i < pathc; i++) {
+ const char *path = globbuf.gl_pathv[i] + root_prefix;
+ bool include = true;
+
+ list_for_each(e, xfm->children) {
+ if (! is_excl(e))
+ continue;
+
+ if (strchr(e->value, SEP) == NULL)
+ path = pathbase(path);
+
+ r = fnmatch_normalize(e->value, path, fnm_flags);
+ if (r < 0)
+ goto error;
+ else if (r == 0)
+ include = false;
+ }
+
+ if (include)
+ include = is_regular_file(globbuf.gl_pathv[i]);
+
+ if (include) {
+ pathv[pathind] = strdup(globbuf.gl_pathv[i]);
+ if (pathv[pathind] == NULL)
+ goto error;
+ pathind += 1;
+ }
+ }
+ pathc = pathind;
+
+ if (REALLOC_N(pathv, pathc) == -1)
+ goto error;
+
+ *matches = pathv;
+ *nmatches = pathc;
+ done:
+ globfree(&globbuf);
+ return ret;
+ error:
+ if (pathv != NULL)
+ for (int i=0; i < pathc; i++)
+ free(pathv[i]);
+ free(pathv);
+ ret = -1;
+ goto done;
+}
+
+int filter_matches(struct tree *xfm, const char *path) {
+ int found = 0;
+ list_for_each(f, xfm->children) {
+ if (is_incl(f) && fnmatch_normalize(f->value, path, fnm_flags) == 0) {
+ found = 1;
+ break;
+ }
+ }
+ if (! found)
+ return 0;
+ list_for_each(f, xfm->children) {
+ if (is_excl(f) && (fnmatch_normalize(f->value, path, fnm_flags) == 0))
+ return 0;
+ }
+ return 1;
+}
+
+/*
+ * Transformers
+ */
+struct transform *make_transform(struct lens *lens, struct filter *filter) {
+ struct transform *xform;
+ make_ref(xform);
+ xform->lens = lens;
+ xform->filter = filter;
+ return xform;
+}
+
+void free_transform(struct transform *xform) {
+ if (xform == NULL)
+ return;
+ assert(xform->ref == 0);
+ unref(xform->lens, lens);
+ unref(xform->filter, filter);
+ free(xform);
+}
+
+static char *err_path(const char *filename) {
+ char *result = NULL;
+ if (filename == NULL)
+ pathjoin(&result, 2, AUGEAS_META_FILES, s_error);
+ else
+ pathjoin(&result, 3, AUGEAS_META_FILES, filename, s_error);
+ return result;
+}
+
+ATTRIBUTE_FORMAT(printf, 4, 5)
+static struct tree *err_set(struct augeas *aug,
+ struct tree *err_info, const char *sub,
+ const char *format, ...) {
+ int r;
+ va_list ap;
+ char *value = NULL;
+ struct tree *tree = NULL;
+
+ va_start(ap, format);
+ r = vasprintf(&value, format, ap);
+ va_end(ap);
+ if (r < 0)
+ value = NULL;
+ ERR_NOMEM(r < 0, aug);
+
+ tree = tree_child_cr(err_info, sub);
+ ERR_NOMEM(tree == NULL, aug);
+
+ r = tree_set_value(tree, value);
+ ERR_NOMEM(r < 0, aug);
+
+ error:
+ free(value);
+ return tree;
+}
+
+static struct tree *err_lens_entry(struct augeas *aug, struct tree *where,
+ struct lens *lens, const char *label) {
+ struct tree *result = NULL;
+
+ if (lens == NULL)
+ return NULL;
+
+ char *fi = format_info(lens->info);
+ if (fi != NULL) {
+ result = err_set(aug, where, label, "%s", fi);
+ free(fi);
+ }
+ return result;
+}
+
+/* Record an error in the tree. The error will show up underneath
+ * /augeas/files/FILENAME/error if FILENAME is not NULL, and underneath
+ * /augeas/text/PATH/error otherwise. PATH is the path to the toplevel node
+ * in the tree where the lens application happened. When STATUS is NULL,
+ * just clear any error associated with FILENAME in the tree.
+ */
+static int store_error(struct augeas *aug,
+ const char *filename, const char *path,
+ const char *status, int errnum,
+ const struct lns_error *err, const char *text) {
+ struct tree *err_info = NULL, *finfo = NULL;
+ char *fip = NULL;
+ int r;
+ int result = -1;
+
+ if (filename != NULL) {
+ r = pathjoin(&fip, 2, AUGEAS_META_FILES, filename);
+ } else {
+ r = pathjoin(&fip, 2, AUGEAS_META_TEXT, path);
+ }
+ ERR_NOMEM(r < 0, aug);
+
+ finfo = tree_fpath_cr(aug, fip);
+ ERR_BAIL(aug);
+
+ if (status != NULL) {
+ err_info = tree_child_cr(finfo, s_error);
+ ERR_NOMEM(err_info == NULL, aug);
+
+ r = tree_set_value(err_info, status);
+ ERR_NOMEM(r < 0, aug);
+
+ /* Errors from err_set are ignored on purpose. We try
+ * to report as much as we can */
+ if (err != NULL) {
+ if (err->pos >= 0) {
+ size_t line, ofs;
+ err_set(aug, err_info, s_pos, "%d", err->pos);
+ if (text != NULL) {
+ calc_line_ofs(text, err->pos, &line, &ofs);
+ err_set(aug, err_info, s_line, "%zd", line);
+ err_set(aug, err_info, s_char, "%zd", ofs);
+ }
+ }
+ if (err->path != NULL) {
+ err_set(aug, err_info, s_path, "%s%s", path, err->path);
+ }
+ struct tree *t = err_lens_entry(aug, err_info, err->lens, s_lens);
+ if (t != NULL) {
+ err_lens_entry(aug, t, err->last, s_last);
+ err_lens_entry(aug, t, err->next, s_next);
+ }
+ err_set(aug, err_info, s_message, "%s", err->message);
+ } else if (errnum != 0) {
+ const char *msg = strerror(errnum);
+ err_set(aug, err_info, s_message, "%s", msg);
+ }
+ } else {
+ /* No error, nuke the error node if it exists */
+ err_info = tree_child(finfo, s_error);
+ if (err_info != NULL)
+ tree_unlink(aug, err_info);
+ }
+
+ tree_clean(finfo);
+ result = 0;
+ error:
+ free(fip);
+ return result;
+}
+
+/* Set up the file information in the /augeas tree.
+ *
+ * NODE must be the path to the file contents, and start with /files.
+ * LENS is the lens used to transform the file.
+ * Create entries under /augeas/NODE with some metadata about the file.
+ *
+ * Returns 0 on success, -1 on error
+ */
+static int add_file_info(struct augeas *aug, const char *node,
+ struct lens *lens, const char *lens_name,
+ const char *filename, bool force_reload) {
+ struct tree *file, *tree;
+ char *tmp = NULL;
+ int r;
+ char *path = NULL;
+ int result = -1;
+
+ if (lens == NULL)
+ return -1;
+
+ r = pathjoin(&path, 2, AUGEAS_META_TREE, node);
+ ERR_NOMEM(r < 0, aug);
+
+ file = tree_fpath_cr(aug, path);
+ file->file = true;
+ ERR_BAIL(aug);
+
+ /* Set 'path' */
+ tree = tree_child_cr(file, s_path);
+ ERR_NOMEM(tree == NULL, aug);
+ r = tree_set_value(tree, node);
+ ERR_NOMEM(r < 0, aug);
+
+ /* Set 'mtime' */
+ if (force_reload) {
+ tmp = strdup("0");
+ ERR_NOMEM(tmp == NULL, aug);
+ } else {
+ tmp = mtime_as_string(aug, filename);
+ ERR_BAIL(aug);
+ }
+ tree = tree_child_cr(file, s_mtime);
+ ERR_NOMEM(tree == NULL, aug);
+ tree_store_value(tree, &tmp);
+
+ /* Set 'lens/info' */
+ tmp = format_info(lens->info);
+ ERR_NOMEM(tmp == NULL, aug);
+ tree = tree_path_cr(file, 2, s_lens, s_info);
+ ERR_NOMEM(tree == NULL, aug);
+ r = tree_set_value(tree, tmp);
+ ERR_NOMEM(r < 0, aug);
+ FREE(tmp);
+
+ /* Set 'lens' */
+ tree = tree->parent;
+ r = tree_set_value(tree, lens_name);
+ ERR_NOMEM(r < 0, aug);
+
+ tree_clean(file);
+
+ result = 0;
+ error:
+ free(path);
+ free(tmp);
+ return result;
+}
+
+static char *append_newline(char *text, size_t len) {
+ /* Try to append a newline; this is a big hack to work */
+ /* around the fact that lenses generally break if the */
+ /* file does not end with a newline. */
+ if (len == 0 || text[len-1] != '\n') {
+ if (REALLOC_N(text, len+2) == 0) {
+ text[len] = '\n';
+ text[len+1] = '\0';
+ }
+ }
+ return text;
+}
+
+/* Turn the file name FNAME, which starts with aug->root, into
+ * a path in the tree underneath /files */
+static char *file_name_path(struct augeas *aug, const char *fname) {
+ char *path = NULL;
+
+ pathjoin(&path, 2, AUGEAS_FILES_TREE, fname + strlen(aug->root) - 1);
+ return path;
+}
+
+/* Replace the subtree for FPATH with SUB */
+static void tree_freplace(struct augeas *aug, const char *fpath,
+ struct tree *sub) {
+ struct tree *parent;
+
+ parent = tree_fpath_cr(aug, fpath);
+ ERR_RET(aug);
+
+ parent->file = true;
+ tree_unlink_children(aug, parent);
+ list_append(parent->children, sub);
+ list_for_each(s, sub) {
+ s->parent = parent;
+ }
+}
+
+static struct info*
+make_lns_info(struct augeas *aug, const char *filename,
+ const char *text, int text_len) {
+ struct info *info = NULL;
+
+ make_ref(info);
+ ERR_NOMEM(info == NULL, aug);
+
+ if (filename != NULL) {
+ make_ref(info->filename);
+ ERR_NOMEM(info->filename == NULL, aug);
+ info->filename->str = strdup(filename);
+ }
+
+ info->first_line = 1;
+ info->last_line = 1;
+ info->first_column = 1;
+ if (text != NULL) {
+ info->last_column = text_len;
+ }
+
+ info->error = aug->error;
+
+ return info;
+ error:
+ unref(info, info);
+ return NULL;
+}
+
+/*
+ * Do the bookkeeping around calling lns_get that is common to load_file
+ * and text_store, in particular, make sure the tree we read gets put into
+ * the right place in AUG and that the span for that tree gets set.
+ *
+ * Transform TEXT using LENS and put the resulting tree at PATH. Use
+ * FILENAME in error messages to indicate where the TEXT came from.
+ */
+static void lens_get(struct augeas *aug,
+ struct lens *lens,
+ const char *filename,
+ const char *text, int text_len,
+ const char *path,
+ struct lns_error **err) {
+ struct info *info = NULL;
+ struct span *span = NULL;
+ struct tree *tree = NULL;
+
+ info = make_lns_info(aug, filename, text, text_len);
+ ERR_BAIL(aug);
+
+ if (aug->flags & AUG_ENABLE_SPAN) {
+ /* Allocate the span now so that it captures a reference to
+ info->filename */
+ span = make_span(info);
+ ERR_NOMEM(span == NULL, info);
+ }
+
+ tree = lns_get(info, lens, text, aug->flags & AUG_ENABLE_SPAN, err);
+
+ if (*err == NULL) {
+ // Successful get
+ tree_freplace(aug, path, tree);
+ ERR_BAIL(aug);
+
+ /* the top level node spans the entire file length */
+ if (span != NULL && tree != NULL) {
+ tree->parent->span = move(span);
+ tree->parent->span->span_start = 0;
+ tree->parent->span->span_end = text_len;
+ }
+ tree = NULL;
+ }
+ error:
+ free_span(span);
+ unref(info, info);
+ free_tree(tree);
+}
+
+static int load_file(struct augeas *aug, struct lens *lens,
+ const char *lens_name, char *filename) {
+ char *text = NULL;
+ const char *err_status = NULL;
+ char *path = NULL;
+ struct lns_error *err = NULL;
+ int result = -1, r, text_len = 0;
+
+ path = file_name_path(aug, filename);
+ ERR_NOMEM(path == NULL, aug);
+
+ r = add_file_info(aug, path, lens, lens_name, filename, false);
+ if (r < 0)
+ goto done;
+
+ text = xread_file(filename);
+ if (text == NULL) {
+ err_status = "read_failed";
+ goto done;
+ }
+ text_len = strlen(text);
+ text = append_newline(text, text_len);
+
+ lens_get(aug, lens, filename, text, text_len, path, &err);
+ if (err != NULL) {
+ err_status = "parse_failed";
+ goto done;
+ }
+ ERR_BAIL(aug);
+
+ result = 0;
+ done:
+ store_error(aug, filename + strlen(aug->root) - 1, path, err_status,
+ errno, err, text);
+ error:
+ free_lns_error(err);
+ free(path);
+ free(text);
+ return result;
+}
+
+/* The lens for a transform can be referred to in one of two ways: either
+ * by a fully qualified name "Module.lens" or by the special syntax
+ * "@Module"; the latter means we should take the lens from the autoload
+ * transform for Module.
+ */
+static struct lens *lens_from_name(struct augeas *aug, const char *name) {
+ struct lens *result = NULL;
+
+ if (name[0] == '@') {
+ struct module *modl = NULL;
+ for (modl = aug->modules;
+ modl != NULL && !streqv(modl->name, name + 1);
+ modl = modl->next);
+ ERR_THROW(modl == NULL, aug, AUG_ENOLENS,
+ "Could not find module %s", name + 1);
+ ERR_THROW(modl->autoload == NULL, aug, AUG_ENOLENS,
+ "No autoloaded lens in module %s", name + 1);
+ result = modl->autoload->lens;
+ } else {
+ result = lens_lookup(aug, name);
+ }
+ ERR_THROW(result == NULL, aug, AUG_ENOLENS,
+ "Can not find lens %s", name);
+ return result;
+ error:
+ return NULL;
+}
+
+int text_store(struct augeas *aug, const char *lens_path,
+ const char *path, const char *text) {
+ struct lns_error *err = NULL;
+ int result = -1;
+ const char *err_status = NULL;
+ struct lens *lens = NULL;
+
+ lens = lens_from_name(aug, lens_path);
+ ERR_BAIL(aug);
+
+ lens_get(aug, lens, path, text, strlen(text), path, &err);
+ if (err != NULL) {
+ err_status = "parse_failed";
+ goto error;
+ }
+ ERR_BAIL(aug);
+
+ result = 0;
+ error:
+ store_error(aug, NULL, path, err_status, errno, err, text);
+ free_lns_error(err);
+ return result;
+}
+
+const char *xfm_lens_name(struct tree *xfm) {
+ struct tree *l = tree_child(xfm, s_lens);
+
+ if (l == NULL)
+ return "(unknown)";
+ if (l->value == NULL)
+ return "(noname)";
+ return l->value;
+}
+
+struct lens *xfm_lens(struct augeas *aug,
+ struct tree *xfm, const char **lens_name) {
+ struct tree *l = NULL;
+
+ if (lens_name != NULL)
+ *lens_name = NULL;
+
+ for (l = xfm->children;
+ l != NULL && !streqv("lens", l->label);
+ l = l->next);
+
+ if (l == NULL || l->value == NULL)
+ return NULL;
+ if (lens_name != NULL)
+ *lens_name = l->value;
+
+ return lens_from_name(aug, l->value);
+}
+
+static void xfm_error(struct tree *xfm, const char *msg) {
+ char *v = msg ? strdup(msg) : NULL;
+ char *l = strdup("error");
+
+ if (l == NULL || v == NULL) {
+ free(v);
+ free(l);
+ return;
+ }
+ tree_append(xfm, l, v);
+}
+
+int transform_validate(struct augeas *aug, struct tree *xfm) {
+ struct tree *l = NULL;
+
+ for (struct tree *t = xfm->children; t != NULL; ) {
+ if (streqv(t->label, "lens")) {
+ l = t;
+ } else if ((is_incl(t) || (is_excl(t) && strchr(t->value, SEP) != NULL))
+ && t->value[0] != SEP) {
+ /* Normalize relative paths to absolute ones */
+ int r;
+ r = REALLOC_N(t->value, strlen(t->value) + 2);
+ ERR_NOMEM(r < 0, aug);
+ memmove(t->value + 1, t->value, strlen(t->value) + 1);
+ t->value[0] = SEP;
+ }
+
+ if (streqv(t->label, "error")) {
+ struct tree *del = t;
+ t = del->next;
+ tree_unlink(aug, del);
+ } else {
+ t = t->next;
+ }
+ }
+
+ if (l == NULL) {
+ xfm_error(xfm, "missing a child with label 'lens'");
+ return -1;
+ }
+ if (l->value == NULL) {
+ xfm_error(xfm, "the 'lens' node does not contain a lens name");
+ return -1;
+ }
+ lens_from_name(aug, l->value);
+ ERR_BAIL(aug);
+
+ return 0;
+ error:
+ xfm_error(xfm, aug->error->details);
+ /* We recorded this error in the tree, clear it so that future
+ * operations report this exact same error (against the wrong lens) */
+ reset_error(aug->error);
+ return -1;
+}
+
+void transform_file_error(struct augeas *aug, const char *status,
+ const char *filename, const char *format, ...) {
+ char *ep = err_path(filename);
+ struct tree *err;
+ char *msg;
+ va_list ap;
+ int r;
+
+ err = tree_fpath_cr(aug, ep);
+ FREE(ep);
+ if (err == NULL)
+ return;
+
+ tree_unlink_children(aug, err);
+ tree_set_value(err, status);
+
+ err = tree_child_cr(err, s_message);
+ if (err == NULL)
+ return;
+
+ va_start(ap, format);
+ r = vasprintf(&msg, format, ap);
+ va_end(ap);
+ if (r < 0)
+ return;
+ tree_set_value(err, msg);
+ free(msg);
+}
+
+static struct tree *file_info(struct augeas *aug, const char *fname) {
+ char *path = NULL;
+ struct tree *result = NULL;
+ int r;
+
+ r = pathjoin(&path, 2, AUGEAS_META_FILES, fname);
+ ERR_NOMEM(r < 0, aug);
+
+ result = tree_fpath(aug, path);
+ ERR_BAIL(aug);
+ error:
+ free(path);
+ return result;
+}
+
+int transform_load(struct augeas *aug, struct tree *xfm, const char *file) {
+ int nmatches = 0;
+ char **matches;
+ const char *lens_name;
+ struct lens *lens = xfm_lens(aug, xfm, &lens_name);
+ int r;
+
+ if (lens == NULL) {
+ // FIXME: Record an error and return 0
+ return -1;
+ }
+
+ r = filter_generate(xfm, aug->root, &nmatches, &matches);
+ if (r == -1)
+ return -1;
+ for (int i=0; i < nmatches; i++) {
+ const char *filename = matches[i] + strlen(aug->root) - 1;
+ struct tree *finfo = file_info(aug, filename);
+
+ if (file != NULL && STRNEQ(filename, file)) {
+ FREE(matches[i]);
+ continue;
+ }
+
+ if (finfo != NULL && !finfo->dirty &&
+ tree_child(finfo, s_lens) != NULL) {
+ /* We have a potential conflict: since FINFO is not marked as
+ dirty (see aug_load for how the dirty flag on nodes under
+ /augeas/files is used during loading), we already processed
+ it with another lens. The other lens is recorded in
+ FINFO. If it so happens that the lenses are actually the
+ same, we silently move on, as this duplication does no
+ harm. If they are different we definitely have a problem and
+ need to record an error and remove the work the first lens
+ did. */
+ const char *s = xfm_lens_name(finfo);
+ struct lens *other_lens = lens_from_name(aug, s);
+ if (lens != other_lens) {
+ char *fpath = file_name_path(aug, matches[i]);
+ transform_file_error(aug, "mxfm_load", filename,
+ "Lenses %s and %s could be used to load this file",
+ s, lens_name);
+ aug_rm(aug, fpath);
+ free(fpath);
+ }
+ } else if (!file_current(aug, matches[i], finfo)) {
+ load_file(aug, lens, lens_name, matches[i]);
+ }
+ if (finfo != NULL)
+ finfo->dirty = 0;
+ FREE(matches[i]);
+ }
+ lens_release(lens);
+ free(matches);
+ return 0;
+}
+
+int transform_applies(struct tree *xfm, const char *path) {
+ if (STRNEQLEN(path, AUGEAS_FILES_TREE, strlen(AUGEAS_FILES_TREE))
+ || path[strlen(AUGEAS_FILES_TREE)] != SEP)
+ return 0;
+ return filter_matches(xfm, path + strlen(AUGEAS_FILES_TREE));
+}
+
+static int transfer_file_attrs(FILE *from, FILE *to,
+ const char **err_status) {
+ struct stat st;
+ int ret = 0;
+ int selinux_enabled = (is_selinux_enabled() > 0);
+ char *con = NULL;
+
+ int from_fd;
+ int to_fd = fileno(to);
+
+ if (from == NULL) {
+ *err_status = "replace_from_missing";
+ return -1;
+ }
+
+ from_fd = fileno(from);
+
+ ret = fstat(from_fd, &st);
+ if (ret < 0) {
+ *err_status = "replace_stat";
+ return -1;
+ }
+ if (selinux_enabled) {
+ if (fgetfilecon(from_fd, &con) < 0 && errno != ENOTSUP) {
+ *err_status = "replace_getfilecon";
+ return -1;
+ }
+ }
+
+ if (fchown(to_fd, st.st_uid, st.st_gid) < 0) {
+ *err_status = "replace_chown";
+ return -1;
+ }
+ if (fchmod(to_fd, st.st_mode) < 0) {
+ *err_status = "replace_chmod";
+ return -1;
+ }
+ if (selinux_enabled && con != NULL) {
+ if (fsetfilecon(to_fd, con) < 0 && errno != ENOTSUP) {
+ *err_status = "replace_setfilecon";
+ return -1;
+ }
+ freecon(con);
+ }
+ return 0;
+}
+
+/* Try to rename FROM to TO. If that fails with an error other than EXDEV
+ * or EBUSY, return -1. If the failure is EXDEV or EBUSY (which we assume
+ * means that FROM or TO is a bind-mounted file), and COPY_IF_RENAME_FAILS
+ * is true, copy the contents of FROM into TO and delete FROM.
+ *
+ * If COPY_IF_RENAME_FAILS and UNLINK_IF_RENAME_FAILS are both true, and
+ * the copy mechanism above is used, unlink the TO path first and open it
+ * with O_EXCL, to ensure we only copy *from* a bind mount rather than into
+ * an attacker's mount placed at TO (e.g. for .augsave).
+ *
+ * Return 0 on success (either the rename succeeded or we copied the
+ * contents over successfully), -1 on failure.
+ */
+static int clone_file(const char *from, const char *to,
+ const char **err_status, int copy_if_rename_fails,
+ int unlink_if_rename_fails) {
+ FILE *from_fp = NULL, *to_fp = NULL;
+ char buf[BUFSIZ];
+ size_t len;
+ int to_fd = -1, to_oflags, r;
+ int result = -1;
+
+ if (rename(from, to) == 0)
+ return 0;
+ if ((errno != EXDEV && errno != EBUSY) || !copy_if_rename_fails) {
+ *err_status = "rename";
+ return -1;
+ }
+
+ /* rename not possible, copy file contents */
+ if (!(from_fp = fopen(from, "r"))) {
+ *err_status = "clone_open_src";
+ goto done;
+ }
+
+ if (unlink_if_rename_fails) {
+ r = unlink(to);
+ if (r < 0) {
+ *err_status = "clone_unlink_dst";
+ goto done;
+ }
+ }
+
+ to_oflags = unlink_if_rename_fails ? O_EXCL : O_TRUNC;
+ if ((to_fd = open(to, O_WRONLY|O_CREAT|to_oflags, S_IRUSR|S_IWUSR)) < 0) {
+ *err_status = "clone_open_dst";
+ goto done;
+ }
+ if (!(to_fp = fdopen(to_fd, "w"))) {
+ *err_status = "clone_fdopen_dst";
+ goto done;
+ }
+
+ if (transfer_file_attrs(from_fp, to_fp, err_status) < 0)
+ goto done;
+
+ while ((len = fread(buf, 1, BUFSIZ, from_fp)) > 0) {
+ if (fwrite(buf, 1, len, to_fp) != len) {
+ *err_status = "clone_write";
+ goto done;
+ }
+ }
+ if (ferror(from_fp)) {
+ *err_status = "clone_read";
+ goto done;
+ }
+ if (fflush(to_fp) != 0) {
+ *err_status = "clone_flush";
+ goto done;
+ }
+ if (fsync(fileno(to_fp)) < 0) {
+ *err_status = "clone_sync";
+ goto done;
+ }
+ result = 0;
+ done:
+ if (from_fp != NULL)
+ fclose(from_fp);
+ if (to_fp != NULL) {
+ if (fclose(to_fp) != 0) {
+ *err_status = "clone_fclose_dst";
+ result = -1;
+ }
+ } else if (to_fd >= 0 && close(to_fd) < 0) {
+ *err_status = "clone_close_dst";
+ result = -1;
+ }
+ if (result != 0)
+ unlink(to);
+ if (result == 0)
+ unlink(from);
+ return result;
+}
+
+static char *strappend(const char *s1, const char *s2) {
+ size_t len = strlen(s1) + strlen(s2);
+ char *result = NULL, *p;
+
+ if (ALLOC_N(result, len + 1) < 0)
+ return NULL;
+
+ p = stpcpy(result, s1);
+ stpcpy(p, s2);
+ return result;
+}
+
+static int file_saved_event(struct augeas *aug, const char *path) {
+ const char *saved = strrchr(AUGEAS_EVENTS_SAVED, SEP) + 1;
+ struct pathx *px;
+ struct tree *dummy;
+ int r;
+
+ px = pathx_aug_parse(aug, aug->origin, NULL,
+ AUGEAS_EVENTS_SAVED "[last()]", true);
+ ERR_BAIL(aug);
+
+ if (pathx_find_one(px, &dummy) == 1) {
+ r = tree_insert(px, saved, 0);
+ if (r < 0)
+ goto error;
+ }
+
+ if (! tree_set(px, path))
+ goto error;
+
+ free_pathx(px);
+ return 0;
+ error:
+ free_pathx(px);
+ return -1;
+}
+
+/*
+ * Do the bookkeeping around calling LNS_PUT that's needed to update the
+ * span after writing a tree to file
+ */
+static void lens_put(struct augeas *aug, const char *filename,
+ struct lens *lens, const char *text, struct tree *tree,
+ FILE *out, struct lns_error **err) {
+ struct info *info = NULL;
+ size_t text_len = strlen(text);
+ bool with_span = aug->flags & AUG_ENABLE_SPAN;
+
+ info = make_lns_info(aug, filename, text, text_len);
+ ERR_BAIL(aug);
+
+ if (with_span) {
+ if (tree->span == NULL) {
+ tree->span = make_span(info);
+ ERR_NOMEM(tree->span == NULL, aug);
+ }
+ tree->span->span_start = ftell(out);
+ }
+
+ lns_put(info, out, lens, tree->children, text,
+ aug->flags & AUG_ENABLE_SPAN, err);
+
+ if (with_span) {
+ tree->span->span_end = ftell(out);
+ }
+ error:
+ unref(info, info);
+}
+
+/*
+ * Save TREE->CHILDREN into the file PATH using the lens from XFORM. Errors
+ * are noted in the /augeas/files hierarchy in AUG->ORIGIN under
+ * PATH/error.
+ *
+ * Writing the file happens by first writing into a temp file, transferring
+ * all file attributes of PATH to the temp file, and then renaming the temp
+ * file back to PATH.
+ *
+ * Temp files are created alongside the destination file so that the rename
+ * stays on the same filesystem; the destination may be the canonical path
+ * (PATH_canon) if PATH is a symlink.
+ *
+ * If the AUG_SAVE_NEWFILE flag is set, instead rename to PATH.augnew rather
+ * than PATH. If AUG_SAVE_BACKUP is set, move the original to PATH.augsave.
+ * (Always PATH.aug{new,save} irrespective of whether PATH is a symlink.)
+ *
+ * If the rename fails, and the entry AUGEAS_COPY_IF_FAILURE exists in
+ * AUG->ORIGIN, PATH is instead overwritten by copying file contents.
+ *
+ * The table below shows the locations for each permutation.
+ *
+ * PATH save flag temp file dest file backup?
+ * regular - PATH.XXXX PATH -
+ * regular BACKUP PATH.XXXX PATH PATH.augsave
+ * regular NEWFILE PATH.augnew.XXXX PATH.augnew -
+ * symlink - PATH_canon.XXXX PATH_canon -
+ * symlink BACKUP PATH_canon.XXXX PATH_canon PATH.augsave
+ * symlink NEWFILE PATH.augnew.XXXX PATH.augnew -
+ *
+ * Return 0 on success, -1 on failure.
+ */
+int transform_save(struct augeas *aug, struct tree *xfm,
+ const char *path, struct tree *tree) {
+ int fd;
+ FILE *fp = NULL, *augorig_canon_fp = NULL;
+ char *augtemp = NULL, *augnew = NULL, *augorig = NULL, *augsave = NULL;
+ char *augorig_canon = NULL, *augdest = NULL;
+ int augorig_exists;
+ int copy_if_rename_fails = 0;
+ char *text = NULL;
+ const char *filename = path + strlen(AUGEAS_FILES_TREE) + 1;
+ const char *err_status = NULL;
+ char *dyn_err_status = NULL;
+ struct lns_error *err = NULL;
+ const char *lens_name;
+ struct lens *lens = xfm_lens(aug, xfm, &lens_name);
+ int result = -1, r;
+ bool force_reload;
+ struct info *info = NULL;
+
+ errno = 0;
+
+ if (lens == NULL) {
+ err_status = "lens_name";
+ goto done;
+ }
+
+ copy_if_rename_fails =
+ aug_get(aug, AUGEAS_COPY_IF_RENAME_FAILS, NULL) == 1;
+
+ if (asprintf(&augorig, "%s%s", aug->root, filename) == -1) {
+ augorig = NULL;
+ goto done;
+ }
+
+ augorig_canon = canonicalize_file_name(augorig);
+ augorig_exists = 1;
+ if (augorig_canon == NULL) {
+ if (errno == ENOENT) {
+ augorig_canon = augorig;
+ augorig_exists = 0;
+ } else {
+ err_status = "canon_augorig";
+ goto done;
+ }
+ }
+
+ if (access(augorig_canon, R_OK) == 0) {
+ augorig_canon_fp = fopen(augorig_canon, "r");
+ text = xfread_file(augorig_canon_fp);
+ } else {
+ text = strdup("");
+ }
+
+ if (text == NULL) {
+ err_status = "put_read";
+ goto done;
+ }
+
+ text = append_newline(text, strlen(text));
+
+ /* Figure out where to put the .augnew and temp file. If we are not
+ writing a .augnew file, put the temp file next to augorig_canon;
+ otherwise put it next to the .augnew file. */
+ if (aug->flags & AUG_SAVE_NEWFILE) {
+ if (xasprintf(&augnew, "%s" EXT_AUGNEW, augorig) < 0) {
+ err_status = "augnew_oom";
+ goto done;
+ }
+ augdest = augnew;
+ } else {
+ augdest = augorig_canon;
+ }
+
+ if (xasprintf(&augtemp, "%s.XXXXXX", augdest) < 0) {
+ err_status = "augtemp_oom";
+ goto done;
+ }
+
+ // FIXME: We might have to create intermediate directories
+ // to be able to write augnew, but we have no idea what permissions
+ // etc. they should get. Just the process default ?
+ fd = mkstemp(augtemp);
+ if (fd < 0) {
+ err_status = "mk_augtemp";
+ goto done;
+ }
+ fp = fdopen(fd, "w");
+ if (fp == NULL) {
+ err_status = "open_augtemp";
+ goto done;
+ }
+
+ if (augorig_exists) {
+ if (transfer_file_attrs(augorig_canon_fp, fp, &err_status) != 0) {
+ goto done;
+ }
+ } else {
+ /* Since mkstemp is used, the temp file will have secure permissions
+ * instead of those implied by umask, so change them for new files */
+ mode_t curumsk = umask(022);
+ umask(curumsk);
+
+ if (fchmod(fileno(fp), 0666 & ~curumsk) < 0) {
+ err_status = "create_chmod";
+ goto done;
+ }
+ }
+
+ if (tree != NULL) {
+ lens_put(aug, augorig_canon, lens, text, tree, fp, &err);
+ ERR_BAIL(aug);
+ }
+
+ if (ferror(fp)) {
+ err_status = "error_augtemp";
+ goto done;
+ }
+
+ if (fflush(fp) != 0) {
+ err_status = "flush_augtemp";
+ goto done;
+ }
+
+ if (fsync(fileno(fp)) < 0) {
+ err_status = "sync_augtemp";
+ goto done;
+ }
+
+ if (fclose(fp) != 0) {
+ err_status = "close_augtemp";
+ fp = NULL;
+ goto done;
+ }
+
+ fp = NULL;
+
+ if (err != NULL) {
+ err_status = err->pos >= 0 ? "parse_skel_failed" : "put_failed";
+ unlink(augtemp);
+ goto done;
+ }
+
+ {
+ char *new_text = xread_file(augtemp);
+ int same = 0;
+ if (new_text == NULL) {
+ err_status = "read_augtemp";
+ goto done;
+ }
+ same = STREQ(text, new_text);
+ FREE(new_text);
+ if (same) {
+ result = 0;
+ unlink(augtemp);
+ goto done;
+ } else if (aug->flags & AUG_SAVE_NOOP) {
+ result = 1;
+ unlink(augtemp);
+ goto done;
+ }
+ }
+
+ if (!(aug->flags & AUG_SAVE_NEWFILE)) {
+ if (augorig_exists && (aug->flags & AUG_SAVE_BACKUP)) {
+ r = xasprintf(&augsave, "%s" EXT_AUGSAVE, augorig);
+ if (r == -1) {
+ augsave = NULL;
+ goto done;
+ }
+
+ r = clone_file(augorig_canon, augsave, &err_status, 1, 1);
+ if (r != 0) {
+ dyn_err_status = strappend(err_status, "_augsave");
+ goto done;
+ }
+ }
+ }
+
+ r = clone_file(augtemp, augdest, &err_status, copy_if_rename_fails, 0);
+ if (r != 0) {
+ unlink(augtemp);
+ dyn_err_status = strappend(err_status, "_augtemp");
+ goto done;
+ }
+
+ result = 1;
+
+ done:
+ force_reload = aug->flags & AUG_SAVE_NEWFILE;
+ r = add_file_info(aug, path, lens, lens_name, augorig, force_reload);
+ if (r < 0) {
+ err_status = "file_info";
+ result = -1;
+ }
+ if (result > 0) {
+ r = file_saved_event(aug, path);
+ if (r < 0) {
+ err_status = "saved_event";
+ result = -1;
+ }
+ }
+ {
+ const char *emsg =
+ dyn_err_status == NULL ? err_status : dyn_err_status;
+ store_error(aug, filename, path, emsg, errno, err, text);
+ }
+ error:
+ free(dyn_err_status);
+ lens_release(lens);
+ free(text);
+ free(augtemp);
+ free(augnew);
+ if (augorig_canon != augorig)
+ free(augorig_canon);
+ free(augorig);
+ free(augsave);
+ free_lns_error(err);
+ unref(info, info);
+
+ if (fp != NULL)
+ fclose(fp);
+ if (augorig_canon_fp != NULL)
+ fclose(augorig_canon_fp);
+ return result;
+}
+
+int text_retrieve(struct augeas *aug, const char *lens_name,
+ const char *path, struct tree *tree,
+ const char *text_in, char **text_out) {
+ struct memstream ms;
+ bool ms_open = false;
+ const char *err_status = NULL;
+ struct lns_error *err = NULL;
+ struct lens *lens = NULL;
+ int result = -1, r;
+ struct info *info = NULL;
+
+ MEMZERO(&ms, 1);
+ errno = 0;
+
+ lens = lens_from_name(aug, lens_name);
+ if (lens == NULL) {
+ err_status = "lens_name";
+ goto done;
+ }
+
+ r = init_memstream(&ms);
+ if (r < 0) {
+ err_status = "init_memstream";
+ goto done;
+ }
+ ms_open = true;
+
+ if (tree != NULL) {
+ lens_put(aug, path, lens, text_in, tree, ms.stream, &err);
+ ERR_BAIL(aug);
+ }
+
+ r = close_memstream(&ms);
+ ms_open = false;
+ if (r < 0) {
+ err_status = "close_memstream";
+ goto done;
+ }
+
+ *text_out = ms.buf;
+ ms.buf = NULL;
+
+ if (err != NULL) {
+ err_status = err->pos >= 0 ? "parse_skel_failed" : "put_failed";
+ goto done;
+ }
+
+ result = 0;
+
+ done:
+ store_error(aug, NULL, path, err_status, errno, err, text_in);
+ error:
+ lens_release(lens);
+ if (result < 0) {
+ free(*text_out);
+ *text_out = NULL;
+ }
+ free_lns_error(err);
+ unref(info, info);
+
+ if (ms_open)
+ close_memstream(&ms);
+ return result;
+}
+
+int remove_file(struct augeas *aug, struct tree *tree) {
+ const char *err_status = NULL;
+ char *dyn_err_status = NULL;
+ char *augsave = NULL, *augorig = NULL, *augorig_canon = NULL;
+ struct tree *path = NULL;
+ const char *file_path = NULL;
+ char *meta_path = NULL;
+ int r;
+
+ path = tree_child(tree, s_path);
+ if (path == NULL) {
+ err_status = "no child called 'path' for file entry";
+ goto error;
+ }
+ file_path = path->value + strlen(AUGEAS_FILES_TREE);
+ path = NULL;
+
+ meta_path = path_of_tree(tree);
+ if (meta_path == NULL) {
+ err_status = "path_of_tree";
+ goto error;
+ }
+
+ if ((augorig = strappend(aug->root, file_path)) == NULL) {
+ err_status = "root_file";
+ goto error;
+ }
+
+ augorig_canon = canonicalize_file_name(augorig);
+ if (augorig_canon == NULL) {
+ if (errno == ENOENT) {
+ goto done;
+ } else {
+ err_status = "canon_augorig";
+ goto error;
+ }
+ }
+
+ r = file_saved_event(aug, meta_path + strlen(AUGEAS_META_TREE));
+ if (r < 0) {
+ err_status = "saved_event";
+ goto error;
+ }
+
+ if (aug->flags & AUG_SAVE_NOOP)
+ goto done;
+
+ if (aug->flags & AUG_SAVE_BACKUP) {
+ /* Move file to one with extension .augsave */
+ r = asprintf(&augsave, "%s" EXT_AUGSAVE, augorig_canon);
+ if (r == -1) {
+ augsave = NULL;
+ goto error;
+ }
+
+ r = clone_file(augorig_canon, augsave, &err_status, 1, 1);
+ if (r != 0) {
+ dyn_err_status = strappend(err_status, "_augsave");
+ goto error;
+ }
+ } else {
+ /* Unlink file */
+ r = unlink(augorig_canon);
+ if (r < 0) {
+ err_status = "unlink_orig";
+ goto error;
+ }
+ }
+ path = NULL;
+ tree_unlink(aug, tree);
+ done:
+ free(meta_path);
+ free(augorig);
+ free(augorig_canon);
+ free(augsave);
+ return 0;
+ error:
+ {
+ const char *emsg =
+ dyn_err_status == NULL ? err_status : dyn_err_status;
+ store_error(aug, file_path, meta_path, emsg, errno, NULL, NULL);
+ }
+ free(meta_path);
+ free(augorig);
+ free(augorig_canon);
+ free(augsave);
+ free(dyn_err_status);
+ return -1;
+}
+
+/*
+ * Local variables:
+ * indent-tabs-mode: nil
+ * c-indent-level: 4
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
--- /dev/null
+/*
+ * Copyright (C) 2009-2016 David Lutterkort
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: David Lutterkort <lutter@redhat.com>
+ */
+
+#ifndef TRANSFORM_H_
+#define TRANSFORM_H_
+
+/*
+ * Transformers for going from file globs to path names in the tree.
+ * The functions are implemented in transform.c
+ */
+
+/* Filters for globbing files */
+struct filter {
+ unsigned int ref;
+ struct filter *next;
+ struct string *glob;
+ unsigned int include : 1;
+};
+
+struct filter *make_filter(struct string *glb, unsigned int include);
+void free_filter(struct filter *filter);
+
+/* Transformers that actually run lenses on contents of files */
+struct transform {
+ unsigned int ref;
+ struct lens *lens;
+ struct filter *filter;
+};
+
+struct transform *make_transform(struct lens *lens, struct filter *filter);
+void free_transform(struct transform *xform);
+
+/*
+ * When we pass a tree for a transform, the tree must have exactly one
+ * child with label "lens" whose value is the qualified name of the lens to
+ * use, and any number of children labelled "incl" or "excl" whose values
+ * are glob patterns used to filter which files to transform.
+ */
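Concretely, such a transform tree, as Augeas' standard modules build it under /augeas/load, might look like the following sketch (module name and glob patterns are illustrative):

```
/augeas/load/Hosts/lens = "Hosts.lns"
/augeas/load/Hosts/incl[1] = "/etc/hosts"
/augeas/load/Hosts/incl[2] = "/etc/hosts.local"
/augeas/load/Hosts/excl[1] = "*.augsave"
```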
+
+/* Verify that the tree XFM represents a valid transform. If it does not,
+ * add an 'error' child to it.
+ *
+ * Return 0 if XFM is a valid transform, -1 otherwise.
+ */
+int transform_validate(struct augeas *aug, struct tree *xfm);
+
+/* Load all files matching the TRANSFORM's filter into the tree in AUG by
+ * applying the TRANSFORM's lens to their contents and putting the
+ * resulting tree under "/files" + filename. Also stores some information
+ * about filename underneath "/augeas/files" + filename.
+ * If FILE is non-NULL, only that one file is loaded.
+ */
+int transform_load(struct augeas *aug, struct tree *xfm, const char *file);
+
+/* Return 1 if TRANSFORM applies to PATH, 0 otherwise.
+ * PATH must not include "/files/".
+ */
+int filter_matches(struct tree *xfm, const char *path);
+
+/* Return 1 if TRANSFORM applies to PATH, 0 otherwise. The TRANSFORM
+ * applies to PATH if (1) PATH starts with "/files/" and (2) the rest of
+ * PATH matches the transform's filter
+*/
+int transform_applies(struct tree *xfm, const char *path);
+
+/* Save TREE into the file corresponding to PATH. It is assumed that the
+ * TRANSFORM applies to that PATH
+ */
+int transform_save(struct augeas *aug, struct tree *xfm,
+ const char *path, struct tree *tree);
+
+/* Transform TEXT into a tree and store it at PATH
+ */
+int text_store(struct augeas *aug, const char *lens_name,
+ const char *path, const char *text);
+
+/* Transform the tree at PATH back into TEXT_OUT, assuming TEXT_IN was
+ * used to initially generate the tree
+ */
+int text_retrieve(struct augeas *aug, const char *lens_name,
+ const char *path, struct tree *tree,
+ const char *text_in, char **text_out);
+
+/* Remove the file for TREE, either by moving it to a .augsave file or by
+ * unlinking it, depending on aug->flags. TREE must be the node underneath
+ * /augeas/files corresponding to the file to be removed.
+ *
+ * Return 0 on success, -1 on failure
+ */
+int remove_file(struct augeas *aug, struct tree *tree);
+
+/* Return a printable name for the transform XFM. Never returns NULL. */
+const char *xfm_lens_name(struct tree *xfm);
+
+/* Return the lens and its name from the transform XFM. LENS_NAME may be
+ NULL, in which case the name of the lens is not returned, only the lens
+ itself.
+*/
+struct lens *xfm_lens(struct augeas *aug,
+ struct tree *xfm, const char **lens_name);
+
+/* Store a file-specific transformation error in /augeas/files/PATH/error */
+ATTRIBUTE_FORMAT(printf, 4, 5)
+void transform_file_error(struct augeas *aug, const char *status,
+ const char *filename, const char *format, ...);
+#endif
+
+
+/*
+ * Local variables:
+ * indent-tabs-mode: nil
+ * c-indent-level: 4
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
--- /dev/null
+#!/usr/bin/env bash
+
+topdir=$(cd "$(dirname "$0")/.." && pwd)
+export AUGEAS_LENS_LIB=${topdir}/lenses
+export AUGEAS_ROOT=${topdir}/build/try
+
+AUGCMDS=${topdir}/build/augcmds.txt
+GDBCMDS=${topdir}/build/gdbcmds.txt
+
+rm -rf $AUGEAS_ROOT
+cp -pr ${topdir}/tests/root $AUGEAS_ROOT
+find $AUGEAS_ROOT -name \*.augnew\* | xargs -r rm
+
+if [[ ! -f $AUGCMDS ]] ; then
+ cat > $AUGCMDS <<EOF
+match /augeas/version
+EOF
+fi
+
+if [[ ! -f $GDBCMDS ]] ; then
+ cat > $GDBCMDS <<EOF
+run --nostdinc -I $AUGEAS_LENS_LIB -r $AUGEAS_ROOT < $AUGCMDS
+EOF
+fi
+
+cd $topdir/src
+if [[ "x$1" == "xgdb" ]] ; then
+ [[ -n "$EMACS" ]] && int="-i=mi"
+ exec libtool --mode=execute gdb $int -x $GDBCMDS ./augtool
+elif [[ "x$1" == "xstrace" ]] ; then
+ libtool --mode=execute /usr/bin/strace ./augtool --nostdinc < $AUGCMDS
+elif [[ "x$1" == "xvalgrind" ]] ; then
+ shift
+ libtool --mode=execute valgrind --leak-check=full ./augtool --nostdinc "$@" < $AUGCMDS
+elif [[ "x$1" == "xcallgrind" ]] ; then
+ libtool --mode=execute valgrind --tool=callgrind ./augtool --nostdinc < $AUGCMDS
+elif [[ "x$1" == "xcli" ]] ; then
+ shift
+ exec ./augtool --nostdinc "$@"
+else
+ ./augtool --nostdinc "$@" < $AUGCMDS
+ echo
+ for f in $(find $AUGEAS_ROOT -name \*.augnew); do
+ echo "File $f"
+ diff -u ${f%.augnew} $f
+ done
+fi
--- /dev/null
+/*
+ * xml.c: the implementation of aug_to_xml and supporting functions
+ *
+ * Copyright (C) 2017 David Lutterkort
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: David Lutterkort <lutter@watzmann.net>
+ */
+
+#include <config.h>
+#include "augeas.h"
+#include "internal.h"
+#include "memory.h"
+#include "info.h"
+#include "errcode.h"
+
+#include <libxml/tree.h>
+
+static int to_xml_span(xmlNodePtr elem, const char *pfor, int start, int end) {
+ int r;
+ char *buf;
+ xmlAttrPtr prop;
+ xmlNodePtr span_elem;
+
+ span_elem = xmlNewChild(elem, NULL, BAD_CAST "span", NULL);
+ if (span_elem == NULL)
+ return -1;
+
+ prop = xmlSetProp(span_elem, BAD_CAST "for", BAD_CAST pfor);
+ if (prop == NULL)
+ return -1;
+
+ /* Format and set the start property */
+ r = xasprintf(&buf, "%d", start);
+ if (r < 0)
+ return -1;
+
+ prop = xmlSetProp(span_elem, BAD_CAST "start", BAD_CAST buf);
+ FREE(buf);
+ if (prop == NULL)
+ return -1;
+
+ /* Format and set the end property */
+ r = xasprintf(&buf, "%d", end);
+ if (r < 0)
+ return -1;
+
+ prop = xmlSetProp(span_elem, BAD_CAST "end", BAD_CAST buf);
+ FREE(buf);
+ if (prop == NULL)
+ return -1;
+
+ return 0;
+}
+
+static int to_xml_one(xmlNodePtr elem, const struct tree *tree,
+ const char *pathin) {
+ xmlNodePtr value;
+ xmlAttrPtr prop;
+ int r;
+
+ prop = xmlSetProp(elem, BAD_CAST "label", BAD_CAST tree->label);
+ if (prop == NULL)
+ goto error;
+
+ if (tree->span) {
+ struct span *span = tree->span;
+
+ prop = xmlSetProp(elem, BAD_CAST "file",
+ BAD_CAST span->filename->str);
+ if (prop == NULL)
+ goto error;
+
+ r = to_xml_span(elem, "label", span->label_start, span->label_end);
+ if (r < 0)
+ goto error;
+
+ r = to_xml_span(elem, "value", span->value_start, span->value_end);
+ if (r < 0)
+ goto error;
+
+ r = to_xml_span(elem, "node", span->span_start, span->span_end);
+ if (r < 0)
+ goto error;
+ }
+
+ if (pathin != NULL) {
+ prop = xmlSetProp(elem, BAD_CAST "path", BAD_CAST pathin);
+ if (prop == NULL)
+ goto error;
+ }
+ if (tree->value != NULL) {
+ value = xmlNewTextChild(elem, NULL, BAD_CAST "value",
+ BAD_CAST tree->value);
+ if (value == NULL)
+ goto error;
+ }
+ return 0;
+ error:
+ return -1;
+}
+
+static int to_xml_rec(xmlNodePtr pnode, struct tree *start,
+ const char *pathin) {
+ int r;
+ xmlNodePtr elem;
+
+ elem = xmlNewChild(pnode, NULL, BAD_CAST "node", NULL);
+ if (elem == NULL)
+ goto error;
+ r = to_xml_one(elem, start, pathin);
+ if (r < 0)
+ goto error;
+
+ list_for_each(tree, start->children) {
+ if (TREE_HIDDEN(tree))
+ continue;
+ r = to_xml_rec(elem, tree, NULL);
+ if (r < 0)
+ goto error;
+ }
+
+ return 0;
+ error:
+ return -1;
+}
+
+static int tree_to_xml(struct pathx *p, xmlNode **xml, const char *pathin) {
+ char *path = NULL;
+ struct tree *tree;
+ xmlAttrPtr expr;
+ int r;
+
+ *xml = xmlNewNode(NULL, BAD_CAST "augeas");
+ if (*xml == NULL)
+ goto error;
+ expr = xmlSetProp(*xml, BAD_CAST "match", BAD_CAST pathin);
+ if (expr == NULL)
+ goto error;
+
+ for (tree = pathx_first(p); tree != NULL; tree = pathx_next(p)) {
+ if (TREE_HIDDEN(tree))
+ continue;
+ path = path_of_tree(tree);
+ if (path == NULL)
+ goto error;
+ r = to_xml_rec(*xml, tree, path);
+ if (r < 0)
+ goto error;
+ FREE(path);
+ }
+ return 0;
+ error:
+ free(path);
+ xmlFree(*xml);
+ *xml = NULL;
+ return -1;
+}
+
+int aug_to_xml(const struct augeas *aug, const char *pathin,
+ xmlNode **xmldoc, unsigned int flags) {
+ struct pathx *p = NULL;
+ int result = -1;
+
+ api_entry(aug);
+
+ ARG_CHECK(flags != 0, aug, "aug_to_xml: FLAGS must be 0");
+ ARG_CHECK(xmldoc == NULL, aug, "aug_to_xml: XMLDOC must be non-NULL");
+
+ *xmldoc = NULL;
+
+ if (pathin == NULL || strlen(pathin) == 0 || strcmp(pathin, "/") == 0) {
+ pathin = "/*";
+ }
+
+ p = pathx_aug_parse(aug, aug->origin, tree_root_ctx(aug), pathin, true);
+ ERR_BAIL(aug);
+ result = tree_to_xml(p, xmldoc, pathin);
+ ERR_THROW(result < 0, aug, AUG_ENOMEM, NULL);
+error:
+ free_pathx(p);
+ api_exit(aug);
+
+ return result;
+}
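For reference, the element structure produced by tree_to_xml for a single node without span information looks roughly like this sketch (paths and values are illustrative):

```
<augeas match="/files/etc/hostname/*">
  <node label="hostname" path="/files/etc/hostname/hostname">
    <value>example.com</value>
  </node>
</augeas>
```

With AUG_ENABLE_SPAN active, each node element additionally carries a file attribute and three span children (for="label", "value", "node") with start/end offsets.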
--- /dev/null
+GNULIB= ../gnulib/lib/libgnu.la
+GNULIB_CFLAGS= -I $(top_srcdir)/gnulib/lib
+
+AM_CFLAGS = $(AUGEAS_CFLAGS) $(WARN_CFLAGS) $(GNULIB_CFLAGS) \
+ $(LIBXML_CFLAGS) -I $(top_builddir)/src
+
+VALGRIND=libtool --mode=execute valgrind --quiet --leak-check=full
+valgrind:
+ $(MAKE) $(MAKEFLAGS) check \
+ VALGRIND="$(VALGRIND)" \
+ AUGPARSE=$(abs_top_builddir)/src/augparse \
+ AUGTOOL=$(abs_top_builddir)/src/augtool
+ $(VALGRIND) ./fatest
+
+valgrind-leak: leak
+ $(TESTS_ENVIRONMENT) $(VALGRIND) ./leak
+
+lens_tests = \
+ lens-sudoers.sh \
+ lens-access.sh \
+ lens-activemq_conf.sh \
+ lens-activemq_xml.sh \
+ lens-afs_cellalias.sh \
+ lens-aliases.sh \
+ lens-anaconda.sh \
+ lens-anacron.sh \
+ lens-approx.sh \
+ lens-apt_update_manager.sh \
+ lens-aptcacherngsecurity.sh \
+ lens-aptpreferences.sh \
+ lens-aptconf.sh \
+ lens-aptsources.sh \
+ lens-authinfo2.sh \
+ lens-authorized_keys.sh \
+ lens-authselectpam.sh \
+ lens-automaster.sh \
+ lens-automounter.sh \
+ lens-avahi.sh \
+ lens-backuppchosts.sh \
+ lens-bbhosts.sh \
+ lens-bootconf.sh \
+ lens-build.sh \
+ lens-cachefilesd.sh \
+ lens-carbon.sh \
+ lens-cgconfig.sh \
+ lens-cgrules.sh \
+ lens-channels.sh \
+ lens-chrony.sh \
+ lens-ceph.sh \
+ lens-clamav.sh \
+ lens-cmdline.sh \
+ lens-cobblersettings.sh \
+ lens-cobblermodules.sh \
+ lens-cockpit.sh \
+ lens-collectd.sh \
+ lens-cpanel.sh \
+ lens-cron.sh \
+ lens-cron_user.sh \
+ lens-crypttab.sh \
+ lens-csv.sh \
+ lens-cyrus_imapd.sh \
+ lens-cups.sh \
+ lens-darkice.sh \
+ lens-debctrl.sh \
+ lens-desktop.sh \
+ lens-devfsrules.sh \
+ lens-device_map.sh \
+ lens-dhclient.sh \
+ lens-dhcpd.sh \
+ lens-dns_zone.sh \
+ lens-dnsmasq.sh \
+ lens-dovecot.sh \
+ lens-dpkg.sh \
+ lens-dput.sh \
+ lens-erlang.sh \
+ lens-ethers.sh \
+ lens-exports.sh \
+ lens-fai_diskconfig.sh \
+ lens-fail2ban.sh \
+ lens-fonts.sh \
+ lens-fstab.sh \
+ lens-fuse.sh \
+ lens-gdm.sh \
+ lens-getcap.sh \
+ lens-group.sh \
+ lens-grubenv.sh \
+ lens-gshadow.sh \
+ lens-gtkbookmarks.sh \
+ lens-json.sh \
+ lens-hostname.sh \
+ lens-hosts.sh \
+ lens-hosts_access.sh \
+ lens-host_conf.sh \
+ lens-htpasswd.sh \
+ lens-httpd.sh \
+ lens-inetd.sh \
+ lens-inifile.sh \
+ lens-inittab.sh \
+ lens-inputrc.sh \
+ lens-interfaces.sh \
+ lens-iptables.sh \
+ lens-iproute2.sh \
+ lens-iscsid.sh \
+ lens-jettyrealm.sh \
+ lens-jmxaccess.sh \
+ lens-jmxpassword.sh \
+ lens-kdump.sh \
+ lens-keepalived.sh \
+ lens-known_hosts.sh \
+ lens-koji.sh \
+ lens-krb5.sh \
+ lens-jaas.sh \
+ lens-ldap.sh \
+ lens-ldif.sh \
+ lens-ldso.sh \
+ lens-lightdm.sh \
+ lens-limits.sh \
+ lens-login_defs.sh \
+ lens-logrotate.sh \
+ lens-logwatch.sh \
+ lens-lokkit.sh \
+ lens-lvm.sh \
+ lens-mailscanner.sh \
+ lens-mailscanner_rules.sh \
+ lens-masterpasswd.sh \
+ lens-mcollective.sh \
+ lens-mdadm_conf.sh \
+ lens-memcached.sh \
+ lens-mke2fs.sh \
+ lens-modprobe.sh \
+ lens-modules.sh \
+ lens-modules_conf.sh \
+ lens-mongodbserver.sh \
+ lens-monit.sh \
+ lens-multipath.sh \
+ lens-mysql.sh \
+ lens-nagioscfg.sh \
+ lens-nagiosobjects.sh \
+ lens-netmasks.sh \
+ lens-networkmanager.sh \
+ lens-networks.sh \
+ lens-nginx.sh \
+ lens-ntp.sh \
+ lens-ntpd.sh \
+ lens-nrpe.sh \
+ lens-nsswitch.sh \
+ lens-nslcd.sh \
+ lens-odbc.sh \
+ lens-opendkim.sh \
+ lens-openshift_config.sh \
+ lens-openshift_http.sh \
+ lens-openshift_quickstarts.sh \
+ lens-openvpn.sh \
+ lens-oz.sh \
+ lens-pagekite.sh \
+ lens-pam.sh \
+ lens-pamconf.sh \
+ lens-passwd.sh \
+ lens-pbuilder.sh \
+ lens-pg_hba.sh \
+ lens-pgbouncer.sh \
+ lens-php.sh \
+ lens-phpvars.sh \
+ lens-postfix_access.sh \
+ lens-postfix_main.sh \
+ lens-postfix_master.sh \
+ lens-postfix_passwordmap.sh \
+ lens-postfix_sasl_smtpd.sh \
+ lens-postfix_transport.sh \
+ lens-postfix_virtual.sh \
+ lens-postgresql.sh \
+ lens-properties.sh \
+ lens-protocols.sh \
+ lens-puppet.sh \
+ lens-puppet_auth.sh \
+ lens-puppetfile.sh \
+ lens-puppetfileserver.sh \
+ lens-pylonspaste.sh \
+ lens-pythonpaste.sh \
+ lens-qpid.sh \
+ lens-quote.sh \
+ lens-rabbitmq.sh \
+ lens-radicale.sh \
+ lens-rancid.sh \
+ lens-redis.sh \
+ lens-reprepro_uploaders.sh \
+ lens-resolv.sh \
+ lens-rhsm.sh \
+ lens-rmt.sh \
+ lens-rsyncd.sh \
+ lens-rsyslog.sh \
+ lens-rtadvd.sh \
+ lens-rx.sh \
+ lens-samba.sh \
+ lens-securetty.sh \
+ lens-semanage.sh \
+ lens-services.sh \
+ lens-shadow.sh \
+ lens-shells.sh \
+ lens-shellvars.sh \
+ lens-shellvars_list.sh \
+ lens-simplelines.sh \
+ lens-simplevars.sh \
+ lens-sip_conf.sh \
+ lens-slapd.sh \
+ lens-smbusers.sh \
+ lens-solaris_system.sh \
+ lens-soma.sh \
+ lens-sos.sh \
+ lens-spacevars.sh \
+ lens-splunk.sh \
+ lens-squid.sh \
+ lens-ssh.sh \
+ lens-sshd.sh \
+ lens-sssd.sh \
+ lens-star.sh \
+ lens-strongswan.sh \
+ lens-stunnel.sh \
+ lens-subversion.sh \
+ lens-sysconfig.sh \
+ lens-sysconfig_route.sh \
+ lens-syslog.sh \
+ lens-sysctl.sh \
+ lens-systemd.sh \
+ lens-termcap.sh \
+ lens-thttpd.sh \
+ lens-tinc.sh \
+ lens-tmpfiles.sh \
+ lens-trapperkeeper.sh \
+ lens-toml.sh \
+ lens-tuned.sh \
+ lens-up2date.sh \
+ lens-updatedb.sh \
+ lens-util.sh \
+ lens-vfstab.sh \
+ lens-vmware_config.sh \
+ lens-vsftpd.sh \
+ lens-webmin.sh \
+ lens-wine.sh \
+ lens-xinetd.sh \
+ lens-xml.sh \
+ lens-xorg.sh \
+ lens-xymon.sh \
+ lens-xymon_alerting.sh \
+ lens-grub.sh \
+ lens-schroot.sh \
+ lens-xendconfsxp.sh \
+ lens-yaml.sh \
+ lens-yum.sh
+
+ME = tests/Makefile.am
+
+# Ensure that the above list stays up to date:
+# Construct two lists: list of lens-*.sh from lens_tests = ... above,
+# and the list of ../lenses/tests/test_*.aug names.
+# If they're not the same, print the new or removed names and fail.
+check: check-lens-tests
+.PHONY: check-lens-tests
+_v = lens_tests
+check-lens-tests:
+ @u=$$({ sed -n '/^$(_v) =[ ]*\\$$/,/[^\]$$/p' \
+ $(srcdir)/Makefile.am \
+ | sed 's/^ *//;/^\$$.*/d;/^$(_v) =/d' \
+ | sed 's,\.sh.*\\,.sh,'; \
+ ls -1 $(srcdir)/../lenses/tests/test_*.aug \
+ | sed 's,.*/test_\([^./]*\)\.aug$$,lens-\1.sh,'; \
+ } | LC_ALL=C sort | uniq -u); \
+ test "x$$u" = x && : \
+ || { printf '%s\n' "$$u" >&2; \
+ echo '$(ME): new test(s)? update lens_tests' >&2; exit 1; }
+
+DISTCLEANFILES = $(lens_tests)
+$(lens_tests): lens-test-1
+ rm -f $@
+ $(LN_S) $< $@
+
+check_SCRIPTS = \
+ test-interpreter.sh \
+ $(lens_tests) \
+ test-get.sh test-augtool.sh \
+ test-put-symlink.sh test-put-symlink-augnew.sh \
+ test-put-symlink-augsave.sh test-put-symlink-augtemp.sh \
+ test-put-mount.sh test-put-mount-augnew.sh test-put-mount-augsave.sh \
+ test-save-empty.sh test-bug-1.sh test-idempotent.sh test-preserve.sh \
+ test-events-saved.sh test-save-mode.sh test-unlink-error.sh \
+ test-augtool-empty-line.sh test-augtool-modify-root.sh \
+ test-span-rec-lens.sh test-nonwritable.sh test-augmatch.sh \
+ test-augprint.sh \
+ test-function-modified.sh test-createfile.sh
+
+EXTRA_DIST = \
+ test-augtool test-augprint root lens-test-1 \
+ $(check_SCRIPTS) $(wildcard modules/*.aug) xpath.tests run.tests
+
+noinst_SCRIPTS = $(check_SCRIPTS)
+
+noinst_PROGRAMS = leak
+
+check_PROGRAMS = fatest test-xpath test-load test-perf test-save test-api test-run
+
+TESTS_ENVIRONMENT = \
+ PATH='$(abs_top_builddir)/src$(PATH_SEPARATOR)'"$$PATH" \
+ abs_top_builddir='$(abs_top_builddir)' \
+ abs_top_srcdir='$(abs_top_srcdir)' \
+ LANG=en_US
+
+TESTS = $(check_SCRIPTS) $(check_PROGRAMS)
+
+INCLUDES = -I$(top_srcdir)/src
+
+fatest_SOURCES = fatest.c cutest.c cutest.h $(top_srcdir)/src/memory.c $(top_srcdir)/src/memory.h
+fatest_LDADD = $(top_builddir)/src/libfa.la $(LIBXML_LIBS) $(GNULIB)
+
+test_xpath_SOURCES = test-xpath.c cutest.c cutest.h $(top_srcdir)/src/memory.c
+test_xpath_LDADD = $(top_builddir)/src/libaugeas.la $(LIBXML_LIBS) $(GNULIB)
+
+test_load_SOURCES = test-load.c cutest.c cutest.h $(top_srcdir)/src/memory.c $(top_srcdir)/src/memory.h
+test_load_LDADD = $(top_builddir)/src/libaugeas.la $(LIBXML_LIBS) $(GNULIB)
+
+test_save_SOURCES = test-save.c cutest.c cutest.h $(top_srcdir)/src/memory.c $(top_srcdir)/src/memory.h
+test_save_LDADD = $(top_builddir)/src/libaugeas.la $(LIBXML_LIBS) $(GNULIB)
+
+test_api_SOURCES = test-api.c cutest.c cutest.h $(top_srcdir)/src/memory.c $(top_srcdir)/src/memory.h
+test_api_LDADD = $(top_builddir)/src/libaugeas.la $(LIBXML_LIBS) $(GNULIB)
+
+test_run_SOURCES = test-run.c cutest.c cutest.h $(top_srcdir)/src/memory.c $(top_srcdir)/src/memory.h
+test_run_LDADD = $(top_builddir)/src/libaugeas.la $(LIBXML_LIBS) $(GNULIB)
+
+test_perf_SOURCES = test-perf.c cutest.c cutest.h $(top_srcdir)/src/memory.c $(top_srcdir)/src/memory.h
+test_perf_LDADD = $(top_builddir)/src/libaugeas.la $(LIBXML_LIBS) $(GNULIB)
+
+leak_SOURCES = leak.c
+leak_LDADD = $(top_builddir)/src/libaugeas.la $(LIBXML_LIBS) $(GNULIB)
+
+FAILMALLOC_START ?= 1
+FAILMALLOC_REP ?= 20
+FAILMALLOC_PROG ?= ./fatest
+
+include $(top_srcdir)/Makefile.inc
--- /dev/null
+/*
+ * This code is based on CuTest by Asim Jalis
+ * http://sourceforge.net/projects/cutest/
+ *
+ * The license for the original code is
+ * LICENSE
+ *
+ * Copyright (c) 2003 Asim Jalis
+ *
+ * This software is provided 'as-is', without any express or implied
+ * warranty. In no event will the authors be held liable for any damages
+ * arising from the use of this software.
+ *
+ * Permission is granted to anyone to use this software for any purpose,
+ * including commercial applications, and to alter it and redistribute it
+ * freely, subject to the following restrictions:
+ *
+ * 1. The origin of this software must not be misrepresented; you must not
+ * claim that you wrote the original software. If you use this software in
+ * a product, an acknowledgment in the product documentation would be
+ * appreciated but is not required.
+ *
+ * 2. Altered source versions must be plainly marked as such, and must not be
+ * misrepresented as being the original software.
+ *
+ * 3. This notice may not be removed or altered from any source distribution.
+ */
+
+#include <config.h>
+
+#include <sys/wait.h>
+#include <assert.h>
+#include <setjmp.h>
+#include <stdlib.h>
+#include <stdio.h>
+#include <string.h>
+#include <math.h>
+
+#include "cutest.h"
+#include "memory.h"
+
+#define HUGE_STRING_LEN 8192
+#define STRING_MAX 256
+
+#define asprintf_or_die(strp, fmt, args ...) \
+ do { \
+ if (asprintf(strp, fmt, ## args) == -1) { \
+ fprintf(stderr, "Fatal error (probably out of memory)\n"); \
+ abort(); \
+ } \
+ } while (0)
+
+void die_oom(void) {
+ printf("Ran out of memory. Send more\n");
+ exit(2);
+}
+
+/*-------------------------------------------------------------------------*
+ * CuTest
+ *-------------------------------------------------------------------------*/
+
+void CuTestInit(CuTest* t, const char* name, TestFunction function) {
+ t->name = strdup(name);
+ t->failed = 0;
+ t->ran = 0;
+ t->message = NULL;
+ t->function = function;
+ t->jumpBuf = NULL;
+}
+
+CuTest* CuTestNew(const char* name, TestFunction function) {
+ CuTest* tc = NULL;
+ if (ALLOC(tc) < 0)
+ die_oom();
+ CuTestInit(tc, name, function);
+ return tc;
+}
+
+void CuTestRun(CuTest* tc, TestFunction setup, TestFunction teardown) {
+ jmp_buf buf;
+
+ if (getenv("CUTEST") && STRNEQ(getenv("CUTEST"), tc->name))
+ return;
+ tc->jumpBuf = &buf;
+ if (setjmp(buf) == 0) {
+ if (setup)
+ (setup)(tc);
+ tc->ran = 1;
+ (tc->function)(tc);
+ }
+ if (teardown && setjmp(buf) == 0) {
+ (teardown)(tc);
+ }
+ tc->jumpBuf = 0;
+}
+
+static void CuFailInternal(CuTest* tc, const char* file, int line,
+ const char *string) {
+ char *buf = NULL;
+
+ asprintf_or_die(&buf, "%s:%d: %s", file, line, string);
+
+ tc->failed = 1;
+ tc->message = buf;
+ if (tc->jumpBuf != 0) longjmp(*(tc->jumpBuf), 0);
+}
+
+void CuFail_Line(CuTest* tc, const char* file, int line,
+ const char* message2, const char* message) {
+ char *string = NULL;
+
+ if (message2 != NULL) {
+ asprintf_or_die(&string, "%s:%s", message2, message);
+ } else {
+ string = strdup(message);
+ }
+ CuFailInternal(tc, file, line, string);
+}
+
+void CuAssert_Line(CuTest* tc, const char* file, int line,
+ const char* message, int condition) {
+ if (condition) return;
+ CuFail_Line(tc, file, line, NULL, message);
+}
+
+void CuAssertStrEquals_LineMsg(CuTest* tc, const char* file, int line,
+ const char* message,
+ const char* expected, const char* actual) {
+ char *string = NULL;
+
+ if ((expected == NULL && actual == NULL) ||
+ (expected != NULL && actual != NULL &&
+ strcmp(expected, actual) == 0))
+ {
+ return;
+ }
+
+ if (message != NULL) {
+ asprintf_or_die(&string, "%s: expected <%s> but was <%s>", message,
+ expected, actual);
+ } else {
+ asprintf_or_die(&string, "expected <%s> but was <%s>", expected, actual);
+ }
+ CuFailInternal(tc, file, line, string);
+}
+
+void CuAssertStrNotEqual_LineMsg(CuTest* tc, const char* file, int line,
+ const char* message,
+ const char* expected, const char* actual) {
+ char *string = NULL;
+
+ if (expected != NULL && actual != NULL && strcmp(expected, actual) != 0)
+ return;
+
+ if (message != NULL) {
+ asprintf_or_die(&string, "%s: expected <%s> but was <%s>", message,
+ expected, actual);
+ } else {
+ asprintf_or_die(&string, "expected <%s> but was <%s>", expected, actual);
+ }
+ CuFailInternal(tc, file, line, string);
+}
+
+void CuAssertIntEquals_LineMsg(CuTest* tc, const char* file, int line,
+ const char* message,
+ int expected, int actual) {
+ char buf[STRING_MAX];
+ if (expected == actual) return;
+ snprintf(buf, sizeof(buf), "expected <%d> but was <%d>", expected, actual);
+ CuFail_Line(tc, file, line, message, buf);
+}
+
+void CuAssertDblEquals_LineMsg(CuTest* tc, const char* file, int line,
+ const char* message,
+ double expected, double actual, double delta) {
+ char buf[STRING_MAX];
+ if (fabs(expected - actual) <= delta) return;
+ snprintf(buf, sizeof(buf), "expected <%lf> but was <%lf>", expected, actual);
+ CuFail_Line(tc, file, line, message, buf);
+}
+
+void CuAssertPtrEquals_LineMsg(CuTest* tc, const char* file, int line,
+ const char* message,
+ const void* expected, const void* actual) {
+ char buf[STRING_MAX];
+ if (expected == actual) return;
+ snprintf(buf, sizeof(buf), "expected pointer <%p> but was <%p>",
+ expected, actual);
+ CuFail_Line(tc, file, line, message, buf);
+}
+
+void CuAssertPtrNotEqual_LineMsg(CuTest* tc, const char* file, int line,
+ const char* message,
+ const void* expected, const void* actual) {
+ char buf[STRING_MAX];
+ if (expected != actual) return;
+ snprintf(buf, sizeof(buf), "expected pointer <%p> to be different from <%p>",
+ expected, actual);
+ CuFail_Line(tc, file, line, message, buf);
+}
+
+
+/*-------------------------------------------------------------------------*
+ * CuSuite
+ *-------------------------------------------------------------------------*/
+
+void CuSuiteInit(CuSuite* testSuite) {
+ testSuite->count = 0;
+ testSuite->failCount = 0;
+}
+
+CuSuite* CuSuiteNew(void) {
+ CuSuite* testSuite = NULL;
+ if (ALLOC(testSuite) < 0)
+ die_oom();
+ CuSuiteInit(testSuite);
+ return testSuite;
+}
+
+void CuSuiteSetup(CuSuite *testSuite,
+ TestFunction setup, TestFunction teardown) {
+ testSuite->setup = setup;
+ testSuite->teardown = teardown;
+}
+
+void CuSuiteAdd(CuSuite* testSuite, CuTest *testCase) {
+ assert(testSuite->count < MAX_TEST_CASES);
+ testSuite->list[testSuite->count] = testCase;
+ testSuite->count++;
+}
+
+void CuSuiteAddSuite(CuSuite* testSuite, CuSuite* testSuite2) {
+ int i;
+ for (i = 0 ; i < testSuite2->count ; ++i)
+ {
+ CuTest* testCase = testSuite2->list[i];
+ CuSuiteAdd(testSuite, testCase);
+ }
+}
+
+void CuSuiteRun(CuSuite* testSuite) {
+ int i;
+ for (i = 0 ; i < testSuite->count ; ++i)
+ {
+ CuTest* testCase = testSuite->list[i];
+ CuTestRun(testCase, testSuite->setup, testSuite->teardown);
+ if (testCase->failed) { testSuite->failCount += 1; }
+ }
+}
+
+static void string_append(char **s, const char *p) {
+ if (*s == NULL) {
+ *s = strdup(p);
+ } else {
+ int len = strlen(*s) + strlen(p) + 1;
+ *s = realloc(*s, len);
+ if (*s == NULL)
+ die_oom();
+ strcat(*s, p);
+ }
+}
+
+void CuSuiteSummary(CuSuite* testSuite, char **summary) {
+ int i;
+
+ for (i = 0 ; i < testSuite->count ; ++i)
+ {
+ CuTest* testCase = testSuite->list[i];
+ string_append(summary, testCase->failed ? "F" : ".");
+ }
+ string_append(summary, "\n\n");
+}
+
+void CuSuiteDetails(CuSuite* testSuite, char **details) {
+ int i;
+ int failCount = 0;
+ char *s = NULL;
+
+ if (testSuite->failCount == 0)
+ {
+ int passCount = testSuite->count - testSuite->failCount;
+ const char* testWord = passCount == 1 ? "test" : "tests";
+ asprintf_or_die(&s, "OK (%d %s)\n", passCount, testWord);
+ string_append(details, s);
+ free(s);
+ } else {
+ if (testSuite->failCount == 1)
+ string_append(details, "There was 1 failure:\n");
+ else {
+ asprintf_or_die(&s, "There were %d failures:\n",
+ testSuite->failCount);
+ string_append(details, s);
+ free(s);
+ }
+ for (i = 0 ; i < testSuite->count ; ++i) {
+ CuTest* testCase = testSuite->list[i];
+ if (testCase->failed) {
+ failCount++;
+ asprintf_or_die(&s, "%d) %s:\n%s\n",
+ failCount, testCase->name, testCase->message);
+ string_append(details, s);
+ free(s);
+ }
+ }
+ string_append(details, "\n!!!FAILURES!!!\n");
+
+ asprintf_or_die(&s, "Runs: %d ", testSuite->count);
+ string_append(details, s);
+ free(s);
+
+ asprintf_or_die(&s, "Passes: %d ",
+ testSuite->count - testSuite->failCount);
+ string_append(details, s);
+ free(s);
+
+ asprintf_or_die(&s, "Fails: %d\n", testSuite->failCount);
+ string_append(details, s);
+ free(s);
+ }
+}
+
+void CuSuiteFree(CuSuite *suite) {
+ for (int i=0; i < suite->count; i++) {
+ CuTest *test = suite->list[i];
+ free(test->name);
+ free(test->message);
+ free(test);
+ }
+ free(suite);
+}
+/*
+ * Test utilities
+ */
+void run(CuTest *tc, const char *format, ...) {
+ char *command;
+ va_list args;
+ int r;
+
+ va_start(args, format);
+ r = vasprintf(&command, format, args);
+ va_end (args);
+ if (r < 0)
+ CuFail(tc, "Failed to format command (out of memory)");
+ r = system(command);
+ if (r < 0 || (WIFEXITED(r) && WEXITSTATUS(r) != 0)) {
+ char *msg;
+ asprintf_or_die(&msg, "Command %s failed with status %d\n",
+ command, WEXITSTATUS(r));
+ CuFail(tc, msg);
+ free(msg);
+ }
+ free(command);
+}
+
+int should_run(const char *name, int argc, char **argv) {
+ if (argc == 0)
+ return 1;
+ for (int i=0; i < argc; i++)
+ if (STREQ(argv[i], name))
+ return 1;
+ return 0;
+}
+
+/*
+ * Local variables:
+ * indent-tabs-mode: nil
+ * c-indent-level: 4
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
--- /dev/null
+/*
+ * This code is based on CuTest by Asim Jalis
+ * http://sourceforge.net/projects/cutest/
+ *
+ * The license for the original code is
+ * LICENSE
+ *
+ * Copyright (c) 2003 Asim Jalis
+ *
+ * This software is provided 'as-is', without any express or implied
+ * warranty. In no event will the authors be held liable for any damages
+ * arising from the use of this software.
+ *
+ * Permission is granted to anyone to use this software for any purpose,
+ * including commercial applications, and to alter it and redistribute it
+ * freely, subject to the following restrictions:
+ *
+ * 1. The origin of this software must not be misrepresented; you must not
+ * claim that you wrote the original software. If you use this software in
+ * a product, an acknowledgment in the product documentation would be
+ * appreciated but is not required.
+ *
+ * 2. Altered source versions must be plainly marked as such, and must not be
+ * misrepresented as being the original software.
+ *
+ * 3. This notice may not be removed or altered from any source distribution.
+ */
+
+
+#ifndef CU_TEST_H
+#define CU_TEST_H
+
+#include <setjmp.h>
+#include <stdarg.h>
+
+void die_oom(void);
+
+/* CuTest */
+
+#define CuAssertPositive(tc, n) CuAssertTrue(tc, (n) > 0)
+#define CuAssertZero(tc, n) CuAssertIntEquals(tc, 0, (n))
+#define CuAssertRetSuccess(tc, n) CuAssertIntEquals(tc, 0, (n))
+
+typedef struct CuTest CuTest;
+
+typedef void (*TestFunction)(CuTest *);
+
+struct CuTest
+{
+ char* name;
+ TestFunction function;
+ int failed;
+ int ran;
+ char* message;
+ jmp_buf *jumpBuf;
+};
+
+void CuTestInit(CuTest* t, const char* name, TestFunction function);
+CuTest* CuTestNew(const char* name, TestFunction function);
+void CuTestRun(CuTest* tc, TestFunction setup, TestFunction teardown);
+
+/* Internal versions of assert functions -- use the public versions */
+void CuFail_Line(CuTest* tc, const char* file, int line, const char* message2, const char* message);
+void CuAssert_Line(CuTest* tc, const char* file, int line, const char* message, int condition);
+void CuAssertStrEquals_LineMsg(CuTest* tc,
+ const char* file, int line, const char* message,
+ const char* expected, const char* actual);
+void CuAssertStrNotEqual_LineMsg(CuTest* tc,
+ const char* file, int line, const char* message,
+ const char* expected, const char* actual);
+void CuAssertIntEquals_LineMsg(CuTest* tc,
+ const char* file, int line, const char* message,
+ int expected, int actual);
+void CuAssertDblEquals_LineMsg(CuTest* tc,
+ const char* file, int line, const char* message,
+ double expected, double actual, double delta);
+void CuAssertPtrEquals_LineMsg(CuTest* tc,
+ const char* file, int line, const char* message,
+ const void* expected, const void* actual);
+void CuAssertPtrNotEqual_LineMsg(CuTest* tc,
+ const char* file, int line, const char* message,
+ const void* expected, const void* actual);
+
+/* public assert functions */
+
+#define CuFail(tc, ms) CuFail_Line( (tc), __FILE__, __LINE__, NULL, (ms))
+#define CuAssert(tc, ms, cond) CuAssert_Line((tc), __FILE__, __LINE__, (ms), (cond))
+#define CuAssertTrue(tc, cond) CuAssert_Line((tc), __FILE__, __LINE__, "assert failed", (cond))
+
+#define CuAssertStrEquals(tc,ex,ac) CuAssertStrEquals_LineMsg((tc),__FILE__,__LINE__,NULL,(ex),(ac))
+#define CuAssertStrEquals_Msg(tc,ms,ex,ac) CuAssertStrEquals_LineMsg((tc),__FILE__,__LINE__,(ms),(ex),(ac))
+#define CuAssertStrNotEqual(tc,ex,ac) CuAssertStrNotEqual_LineMsg((tc),__FILE__,__LINE__,NULL,(ex),(ac))
+#define CuAssertIntEquals(tc,ex,ac) CuAssertIntEquals_LineMsg((tc),__FILE__,__LINE__,NULL,(ex),(ac))
+#define CuAssertIntEquals_Msg(tc,ms,ex,ac) CuAssertIntEquals_LineMsg((tc),__FILE__,__LINE__,(ms),(ex),(ac))
+#define CuAssertDblEquals(tc,ex,ac,dl) CuAssertDblEquals_LineMsg((tc),__FILE__,__LINE__,NULL,(ex),(ac),(dl))
+#define CuAssertDblEquals_Msg(tc,ms,ex,ac,dl) CuAssertDblEquals_LineMsg((tc),__FILE__,__LINE__,(ms),(ex),(ac),(dl))
+#define CuAssertPtrEquals(tc,ex,ac) CuAssertPtrEquals_LineMsg((tc),__FILE__,__LINE__,NULL,(ex),(ac))
+#define CuAssertPtrEquals_Msg(tc,ms,ex,ac) CuAssertPtrEquals_LineMsg((tc),__FILE__,__LINE__,(ms),(ex),(ac))
+#define CuAssertPtrNotEqual(tc,ex,ac) CuAssertPtrNotEqual_LineMsg((tc),__FILE__,__LINE__,NULL,(ex),(ac))
+#define CuAssertPtrNotEqual_Msg(tc,ms,ex,ac) CuAssertPtrNotEqual_LineMsg((tc),__FILE__,__LINE__,(ms),(ex),(ac))
+
+#define CuAssertPtrNotNull(tc,p) CuAssert_Line((tc),__FILE__,__LINE__,"null pointer unexpected",((p) != NULL))
+#define CuAssertPtrNotNullMsg(tc,msg,p) CuAssert_Line((tc),__FILE__,__LINE__,(msg),((p) != NULL))
+
+/* CuSuite */
+
+#define MAX_TEST_CASES 1024
+
+#define SUITE_ADD_TEST(SUITE,TEST) CuSuiteAdd(SUITE, CuTestNew(#TEST, TEST))
+
+typedef struct
+{
+ int count;
+ CuTest* list[MAX_TEST_CASES];
+ int failCount;
+ TestFunction setup;
+ TestFunction teardown;
+} CuSuite;
+
+
+void CuSuiteInit(CuSuite* testSuite);
+CuSuite* CuSuiteNew(void);
+void CuSuiteSetup(CuSuite *testSuite,
+ TestFunction setup, TestFunction teardown);
+void CuSuiteAdd(CuSuite* testSuite, CuTest *testCase);
+void CuSuiteAddSuite(CuSuite* testSuite, CuSuite* testSuite2);
+void CuSuiteRun(CuSuite* testSuite);
+void CuSuiteSummary(CuSuite* testSuite, char **summary);
+void CuSuiteDetails(CuSuite* testSuite, char **details);
+void CuSuiteFree(CuSuite *testSuite);
+
+/* Run a command */
+void run(CuTest *tc, const char *format, ...);
+
+/* Return 1 if NAME is one of the ARGV, or if ARGC == 0; return 0 otherwise */
+int should_run(const char *name, int argc, char **argv);
+
+#endif /* CU_TEST_H */
+
+/*
+ * Local variables:
+ * indent-tabs-mode: nil
+ * c-indent-level: 4
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
--- /dev/null
+/*
+ * fatest.c:
+ *
+ * Copyright (C) 2007-2016 David Lutterkort
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: David Lutterkort <dlutter@redhat.com>
+ */
+
+#include <config.h>
+
+#include "fa.h"
+#include "cutest.h"
+#include "internal.h"
+#include "memory.h"
+#include <stdio.h>
+#include <stdlib.h>
+#include <ctype.h>
+
+#define FA_DOT_DIR "FA_DOT_DIR"
+
+struct fa_list {
+ struct fa_list *next;
+ struct fa *fa;
+};
+
+static struct fa_list *fa_list;
+
+static void print_regerror(int err, const char *regexp) {
+ size_t size;
+ char *errbuf;
+ size = regerror(err, NULL, NULL, 0);
+ if (ALLOC_N(errbuf, size) < 0)
+ die_oom();
+ regerror(err, NULL, errbuf, size);
+ if (strlen(regexp) > 40) {
+ char *s = strndup(regexp, 40);
+ fprintf(stderr, "Error building fa from %s...:\n", s);
+ free(s);
+ } else {
+ fprintf(stderr, "Error building fa from %s:\n", regexp);
+ }
+ fprintf(stderr, " %s\n", errbuf);
+ free(errbuf);
+}
+
+static void setup(ATTRIBUTE_UNUSED CuTest *tc) {
+ fa_list = NULL;
+}
+
+static void teardown(ATTRIBUTE_UNUSED CuTest *tc) {
+ list_for_each(fl, fa_list) {
+ fa_free(fl->fa);
+ }
+ list_free(fa_list);
+}
+
+static struct fa *mark(struct fa *fa) {
+ struct fa_list *fl;
+
+ if (fa != NULL) {
+ if (ALLOC(fl) < 0)
+ die_oom();
+ fl->fa = fa;
+ list_cons(fa_list, fl);
+ }
+ return fa;
+}
+
+static void assertAsRegexp(CuTest *tc, struct fa *fa) {
+ char *re;
+ size_t re_len;
+ struct fa *fa1, *fa2;
+ struct fa *empty = mark(fa_make_basic(FA_EPSILON));
+ int r;
+
+ /* Jump through some hoops to make FA1 a copy of FA */
+ fa1 = mark(fa_concat(fa, empty));
+ /* Minimize FA1, otherwise the regexp returned is enormous for the */
+ /* monster (~ 2MB) and fa_compile becomes incredibly slow */
+ r = fa_minimize(fa1);
+ if (r < 0)
+ die_oom();
+
+ r = fa_as_regexp(fa1, &re, &re_len);
+ CuAssertIntEquals(tc, 0, r);
+
+ r = fa_compile(re, re_len, &fa2);
+ if (r != REG_NOERROR) {
+ print_regerror(r, re);
+ }
+ CuAssertIntEquals(tc, REG_NOERROR, r);
+ CuAssertTrue(tc, fa_equals(fa, fa2));
+
+ fa_free(fa2);
+ free(re);
+}
+
+static struct fa *make_fa(CuTest *tc,
+ const char *regexp, size_t reglen,
+ int exp_err) {
+ struct fa *fa;
+ int r;
+
+ r = fa_compile(regexp, reglen, &fa);
+ if (r == REG_ESPACE)
+ die_oom();
+ if (exp_err == REG_NOERROR) {
+ if (r != REG_NOERROR)
+ print_regerror(r, regexp);
+ CuAssertIntEquals(tc, REG_NOERROR, r);
+ CuAssertPtrNotNull(tc, fa);
+ mark(fa);
+ assertAsRegexp(tc, fa);
+ } else {
+ CuAssertIntEquals(tc, exp_err, r);
+ CuAssertPtrEquals(tc, NULL, fa);
+ }
+ return fa;
+}
+
+static struct fa *make_good_fa(CuTest *tc, const char *regexp) {
+ return make_fa(tc, regexp, strlen(regexp), REG_NOERROR);
+}
+
+static void dot(struct fa *fa) {
+ static int count = 0;
+ FILE *fp;
+ const char *dot_dir;
+ char *fname;
+ int r;
+
+ if ((dot_dir = getenv(FA_DOT_DIR)) == NULL)
+ return;
+
+ r = asprintf(&fname, "%s/fa_test_%02d.dot", dot_dir, count++);
+ if (r == -1)
+ return;
+
+ if ((fp = fopen(fname, "w")) == NULL) {
+ free(fname);
+ return;
+ }
+
+ fa_dot(fp, fa);
+ fclose(fp);
+
+ free(fname);
+}
+
+static void testBadRegexps(CuTest *tc) {
+ const char *const re1 = "(x";
+ const char *const re2 = "a{5,3}";
+ make_fa(tc, re1, strlen(re1), REG_EPAREN);
+ make_fa(tc, re2, strlen(re2), REG_BADBR);
+}
+
+/* Stress test, mostly good to check that allocation is clean */
+static void testMonster(CuTest *tc) {
+#define WORD "[a-zA-Z_0-9]+"
+#define CWS "([ \\n\\t]+|\\/\\*([^\\*]|\\*[^\\/])*\\*\\/)*"
+#define QUOTED "\"[^\"]*\""
+#define ELT "\\(" WORD "(," CWS WORD "){3}\\)"
+
+ static const char *const monster =
+ "(" WORD "|" QUOTED "|" "\\(" CWS ELT "(," CWS ELT ")*" CWS "\\))";
+
+#undef ELT
+#undef QUOTED
+#undef CWS
+#undef WORD
+
+ struct fa *fa, *fas;
+ char *upv, *pv, *v;
+ size_t upv_len;
+
+ fa = make_good_fa(tc, monster);
+
+ fa_ambig_example(fa, fa, &upv, &upv_len, &pv, &v);
+
+ /* Monster can't be concatenated with itself */
+ CuAssertStrEquals(tc, "AAA", upv);
+ CuAssertStrEquals(tc, "AA", pv);
+ CuAssertStrEquals(tc, "A", v);
+ free(upv);
+
+ /* Monster can also not be starred */
+ fas = mark(fa_iter(fa, 0, -1));
+ /* Minimize FAS, otherwise the example returned is nondeterministic,
+ since example generation depends on the structure of the FA.
+ FIXME: Explain why UPV with the unminimized FAS changes to a much
+ longer string simply when allocation patterns change (e.g., by
+ running the test under valgrind vs. plain glibc malloc). Fishy?
+ FA_EXAMPLE should depend on the structure of the FA, but not
+ incidental details like sorting of transitions.
+ */
+ fa_minimize(fas);
+ fa_ambig_example(fas, fa, &upv, &upv_len, &pv, &v);
+
+ CuAssertStrEquals(tc, "AA", upv);
+ CuAssertStrEquals(tc, "AA", pv);
+ CuAssertStrEquals(tc, "A", v);
+ free(upv);
+}
+
+static void testChars(CuTest *tc) {
+ struct fa *fa1, *fa2, *fa3;
+
+ fa1 = make_good_fa(tc, ".");
+ fa2 = make_good_fa(tc, "[a-z]");
+ CuAssertTrue(tc, fa_contains(fa2, fa1));
+
+ fa1 = make_good_fa(tc, "(.|\n)");
+ CuAssertTrue(tc, fa_contains(fa2, fa1));
+
+ fa1 = mark(fa_intersect(fa1, fa2));
+ CuAssertTrue(tc, fa_equals(fa1, fa2));
+
+ fa1 = make_good_fa(tc, "[^b-dxyz]");
+ fa2 = make_good_fa(tc, "[a-z]");
+ fa3 = mark(fa_intersect(fa1, fa2));
+ fa2 = make_good_fa(tc, "[ae-w]");
+ CuAssertTrue(tc, fa_equals(fa2, fa3));
+}
+
+static void testManualAmbig(CuTest *tc) {
+ /* The point of this test is mostly to teach me how Anders Moeller's
+ algorithm for finding ambiguous strings works.
+
+ For the two languages a1 = a|ab and a2 = a|ba, a1.a2 has one
+ ambiguous word aba, which can be split as a.ba and ab.a.
+
+ This uses X and Y as the markers. */
+
+ struct fa *a1f = make_good_fa(tc, "Xa|XaXb");
+ struct fa *a1t = make_good_fa(tc, "(YX)*Xa|(YX)*Xa(YX)*Xb");
+ struct fa *a2f = make_good_fa(tc, "Xa|XbXa");
+ struct fa *a2t = make_good_fa(tc, "(YX)*Xa|((YX)*Xb(YX)*Xa)");
+ struct fa *mp = make_good_fa(tc, "YX(X(.|\n))+");
+ struct fa *ms = make_good_fa(tc, "YX(X(.|\n))*");
+ struct fa *sp = make_good_fa(tc, "(X(.|\n))+YX");
+ struct fa *ss = make_good_fa(tc, "(X(.|\n))*YX");
+
+ struct fa *a1f_mp = mark(fa_concat(a1f, mp));
+ struct fa *a1f_mp_a1t = mark(fa_intersect(a1f_mp, a1t));
+ struct fa *b1 = mark(fa_concat(a1f_mp_a1t, ms));
+
+ struct fa *sp_a2f = mark(fa_concat(sp, a2f));
+ struct fa *sp_a2f_a2t = mark(fa_intersect(sp_a2f, a2t));
+ struct fa *b2 = mark(fa_concat(ss, sp_a2f_a2t));
+
+ struct fa *amb = mark(fa_intersect(b1, b2));
+ struct fa *exp = make_good_fa(tc, "XaYXXbYXXa");
+ CuAssertTrue(tc, fa_equals(exp, amb));
+}
+
+static void testContains(CuTest *tc) {
+ struct fa *fa1, *fa2, *fa3;
+
+ fa1 = make_good_fa(tc, "ab*");
+ fa2 = make_good_fa(tc, "ab+");
+ fa3 = make_good_fa(tc, "ab+c*|acc");
+
+ CuAssertTrue(tc, fa_contains(fa1, fa1));
+ CuAssertTrue(tc, fa_contains(fa2, fa2));
+ CuAssertTrue(tc, fa_contains(fa3, fa3));
+
+ CuAssertTrue(tc, ! fa_contains(fa1, fa2));
+ CuAssertTrue(tc, fa_contains(fa2, fa1));
+ CuAssertTrue(tc, fa_contains(fa2, fa3));
+ CuAssertTrue(tc, ! fa_contains(fa3, fa2));
+ CuAssertTrue(tc, ! fa_contains(fa1, fa3));
+ CuAssertTrue(tc, ! fa_contains(fa3, fa1));
+}
+
+static void testIntersect(CuTest *tc) {
+ struct fa *fa1, *fa2, *fa;
+
+ fa1 = make_good_fa(tc, "[a-zA-Z]*[.:=]([0-9]|[^A-Z])*");
+ fa2 = make_good_fa(tc, "[a-z][:=][0-9a-z]+");
+ fa = mark(fa_intersect(fa1, fa2));
+ CuAssertPtrNotNull(tc, fa);
+ CuAssertTrue(tc, fa_equals(fa, fa2));
+ CuAssertTrue(tc, ! fa_equals(fa, fa1));
+}
+
+static void testComplement(CuTest *tc) {
+ struct fa *fa1 = make_good_fa(tc, "[b-y]+");
+ struct fa *fa2 = mark(fa_complement(fa1));
+ /* We use '()' to match the empty word explicitly */
+ struct fa *fa3 = make_good_fa(tc, "(()|[b-y]*[^b-y](.|\n)*)");
+
+ CuAssertTrue(tc, fa_equals(fa2, fa3));
+
+ fa2 = mark(fa_complement(fa2));
+ CuAssertTrue(tc, fa_equals(fa1, fa2));
+}
+
+static void testOverlap(CuTest *tc) {
+ struct fa *fa1 = make_good_fa(tc, "a|ab");
+ struct fa *fa2 = make_good_fa(tc, "a|ba");
+ struct fa *p = mark(fa_overlap(fa1, fa2));
+ struct fa *exp = make_good_fa(tc, "b");
+
+ CuAssertTrue(tc, fa_equals(exp, p));
+
+ fa1 = make_good_fa(tc, "a|b|c|abc");
+ fa2 = mark(fa_iter(fa1, 0, -1));
+ exp = make_good_fa(tc, "bc");
+ p = mark(fa_overlap(fa1, fa2));
+
+ CuAssertTrue(tc, fa_equals(exp, p));
+}
+
+static void assertExample(CuTest *tc, const char *regexp, const char *exp) {
+ struct fa *fa = make_good_fa(tc, regexp);
+ size_t xmpl_len;
+ char *xmpl;
+ fa_example(fa, &xmpl, &xmpl_len);
+ CuAssertStrEquals(tc, exp, xmpl);
+ free(xmpl);
+
+ fa_nocase(fa);
+ char *s = strdup(exp);
+ for (int i = 0; i < strlen(s); i++) s[i] = tolower(s[i]);
+
+ fa_example(fa, &xmpl, &xmpl_len);
+ CuAssertStrEquals(tc, s, xmpl);
+ free(xmpl);
+ free(s);
+}
+
+static void testExample(CuTest *tc) {
+ assertExample(tc, "(.|\n)", "A");
+ assertExample(tc, "(\n|\t|x)", "x");
+ assertExample(tc, "[^b-y]", "A");
+ assertExample(tc, "x*", "x");
+ assertExample(tc, "yx*", "y");
+ assertExample(tc, "ab+cx*", "abc");
+ assertExample(tc, "ab+cx*|y*", "y");
+ assertExample(tc, "u*|[0-9]", "u");
+ assertExample(tc, "u+|[0-9]", "u");
+ assertExample(tc, "vu+|[0-9]", "0");
+ assertExample(tc, "vu{2}|[0-9]", "0");
+ assertExample(tc, "\\[", "[");
+ assertExample(tc, "[\\]", "\\");
+ assertExample(tc, "a{3}", "aaa");
+ assertExample(tc, "a{3,}", "aaa");
+
+ assertExample(tc, "\001((\002.)*\001)+\002", "\001\001\002");
+ assertExample(tc, "\001((\001.)*\002)+\002", "\001\002\002");
+
+ assertExample(tc, "a[^\001-\004]+b", "aAb");
+
+ /* A strange way to write a? - allowed by POSIX */
+ assertExample(tc, "(a|)", "a");
+ assertExample(tc, "(|a)", "a");
+
+ struct fa *fa1 = mark(fa_make_basic(FA_EMPTY));
+ size_t xmpl_len;
+ char *xmpl;
+ fa_example(fa1, &xmpl, &xmpl_len);
+ CuAssertPtrEquals(tc, NULL, xmpl);
+
+ fa1 = mark(fa_make_basic(FA_EPSILON));
+ fa_example(fa1, &xmpl, &xmpl_len);
+ CuAssertStrEquals(tc, "", xmpl);
+ free(xmpl);
+}
+
+static void assertAmbig(CuTest *tc, const char *regexp1, const char *regexp2,
+ const char *exp_upv,
+ const char *exp_pv, const char *exp_v) {
+
+ struct fa *fa1 = make_good_fa(tc, regexp1);
+ struct fa *fa2 = make_good_fa(tc, regexp2);
+ char *upv, *pv, *v;
+ size_t upv_len;
+ fa_ambig_example(fa1, fa2, &upv, &upv_len, &pv, &v);
+ CuAssertPtrNotNull(tc, upv);
+ CuAssertPtrNotNull(tc, pv);
+ CuAssertPtrNotNull(tc, v);
+
+ CuAssertStrEquals(tc, exp_upv, upv);
+ CuAssertIntEquals(tc, strlen(exp_upv), upv_len);
+ CuAssertStrEquals(tc, exp_pv, pv);
+ CuAssertStrEquals(tc, exp_v, v);
+ free(upv);
+}
+
+static void assertNotAmbig(CuTest *tc, const char *regexp1,
+ const char *regexp2) {
+ struct fa *fa1 = make_good_fa(tc, regexp1);
+ struct fa *fa2 = make_good_fa(tc, regexp2);
+ char *upv;
+ size_t upv_len;
+ fa_ambig_example(fa1, fa2, &upv, &upv_len, NULL, NULL);
+ CuAssertPtrEquals(tc, NULL, upv);
+}
+
+static void testAmbig(CuTest *tc) {
+ assertAmbig(tc, "a|ab", "a|ba", "aba", "ba", "a");
+ assertAmbig(tc, "(a|ab)*", "a|ba", "aba", "ba", "a");
+ assertAmbig(tc, "(a|b|c|d|abcd)", "(a|b|c|d|abcd)*",
+ "abcd", "bcd", "");
+ assertAmbig(tc, "(a*)*", "a*", "a", "a", "");
+ assertAmbig(tc, "(a+)*", "a+", "aa", "aa", "a");
+
+ assertNotAmbig(tc, "a*", "a");
+ assertNotAmbig(tc, "(a*b)*", "a*b");
+ assertNotAmbig(tc, "(a|b|c|d|abcd)", "(a|b|c|d|abcd)");
+}
+
+static void testAmbigWithNuls(CuTest *tc) {
+ struct fa *fa1 = make_fa(tc, "X\0ba?", 5, REG_NOERROR);
+ struct fa *fa2 = make_fa(tc, "a?\0Y", 4, REG_NOERROR);
+ char *upv, *pv, *v;
+ size_t upv_len;
+ int r = fa_ambig_example(fa1, fa2, &upv, &upv_len, &pv, &v);
+ CuAssertIntEquals(tc, 0, r);
+ CuAssertIntEquals(tc, 6, (int)upv_len);
+ /* u = "X\0b" */
+ size_t u_len = pv - upv;
+ CuAssertIntEquals(tc, 3, u_len);
+ CuAssertIntEquals(tc, (int)'X', upv[0]);
+ CuAssertIntEquals(tc, (int)'\0', upv[1]);
+ CuAssertIntEquals(tc, (int)'b', upv[2]);
+ /* p = "a" */
+ size_t p_len = v - pv;
+ CuAssertIntEquals(tc, 1, p_len);
+ CuAssertIntEquals(tc, (int)'a', pv[0]);
+ /* v = "\0Y" */
+ size_t v_len = upv_len - (v - upv);
+ CuAssertIntEquals(tc, 2, v_len);
+ CuAssertIntEquals(tc, (int)'\0', v[0]);
+ CuAssertIntEquals(tc, (int)'Y', v[1]);
+ free(upv);
+}
+
+static void assertFaAsRegexp(CuTest *tc, const char *regexp) {
+ char *re;
+ size_t re_len;
+ struct fa *fa1 = make_good_fa(tc, regexp);
+ struct fa *fa2;
+ int r;
+
+ r = fa_as_regexp(fa1, &re, &re_len);
+ CuAssertIntEquals(tc, 0, r);
+
+ r = fa_compile(re, strlen(re), &fa2);
+ CuAssertIntEquals(tc, REG_NOERROR, r);
+
+ CuAssert(tc, regexp, fa_equals(fa1, fa2));
+
+ fa_free(fa2);
+ free(re);
+}
+
+static void testAsRegexp(CuTest *tc) {
+ assertFaAsRegexp(tc, "a*");
+ assertFaAsRegexp(tc, "abcd");
+ assertFaAsRegexp(tc, "ab|cd");
+ assertFaAsRegexp(tc, "[a-z]+");
+ assertFaAsRegexp(tc, "[]a-]+");
+ assertFaAsRegexp(tc, "[^0-9A-Z]");
+ assertFaAsRegexp(tc, "ab|(xy[A-Z0-9])*(uv[^0-9]?)");
+ assertFaAsRegexp(tc, "[A-CE-GI-LN-QS-Z]");
+}
+
+static void testAsRegexpMinus(CuTest *tc) {
+ struct fa *fa1 = make_good_fa(tc, "[A-Za-z]+");
+ struct fa *fa2 = make_good_fa(tc, "Deny(Users|Groups|Other)");
+ struct fa *fa = mark(fa_minus(fa1, fa2));
+ char *re;
+ size_t re_len;
+ int r;
+
+ r = fa_as_regexp(fa, &re, &re_len);
+ CuAssertIntEquals(tc, 0, r);
+
+ struct fa *far = make_good_fa(tc, re);
+ CuAssertTrue(tc, fa_equals(fa, far));
+
+ free(re);
+}
+
+static void testRangeEnd(CuTest *tc) {
+ const char *const re = "[1-0]";
+ make_fa(tc, re, strlen(re), REG_ERANGE);
+}
+
+static void testNul(CuTest *tc) {
+ static const char *const re0 = "a\0b";
+ int re0_len = 3;
+
+ struct fa *fa1 = make_fa(tc, "a\0b", re0_len, REG_NOERROR);
+ struct fa *fa2 = make_good_fa(tc, "a.b");
+ char *re;
+ size_t re_len;
+ int r;
+
+ CuAssertTrue(tc, fa_contains(fa1, fa2));
+
+ r = fa_as_regexp(fa1, &re, &re_len);
+ CuAssertIntEquals(tc, 0, r);
+ CuAssertIntEquals(tc, re0_len, re_len);
+ CuAssertIntEquals(tc, 0, memcmp(re0, re, re0_len));
+ free(re);
+}
+
+static void testRestrictAlphabet(CuTest *tc) {
+ const char *re = "ab|(xy[B-Z0-9])*(uv[^0-9]?)";
+ struct fa *fa_exp = make_good_fa(tc, "((xy[0-9])*)uv[^0-9A-Z]?|ab");
+ struct fa *fa_act = NULL;
+ size_t nre_len;
+ char *nre;
+ int r;
+
+ r = fa_restrict_alphabet(re, strlen(re), &nre, &nre_len, 'A', 'Z');
+ CuAssertIntEquals(tc, 0, r);
+ CuAssertIntEquals(tc, nre_len, strlen(nre));
+ fa_act = make_good_fa(tc, nre);
+ CuAssertTrue(tc, fa_equals(fa_exp, fa_act));
+ free(nre);
+
+ r = fa_restrict_alphabet("HELLO", strlen("HELLO"),
+ &nre, &nre_len, 'A', 'Z');
+ CuAssertIntEquals(tc, -2, r);
+ CuAssertPtrEquals(tc, NULL, nre);
+
+ r = fa_restrict_alphabet("a{2,", strlen("a{2,"), &nre, &nre_len, 'A', 'Z');
+ CuAssertIntEquals(tc, REG_EBRACE, r);
+}
+
+static void testExpandCharRanges(CuTest *tc) {
+ const char *re = "[1-3]*|[a-b]([^\nU-X][^\n])*";
+ const char *re2 = "a\\|b";
+
+ char *nre;
+ size_t nre_len;
+ int r;
+
+ r = fa_expand_char_ranges(re, strlen(re), &nre, &nre_len);
+ CuAssertIntEquals(tc, 0, r);
+ CuAssertStrEquals(tc, "[123]*|[ab]([^\nUVWX].)*", nre);
+ CuAssertIntEquals(tc, strlen(nre), nre_len);
+ free(nre);
+
+ r = fa_expand_char_ranges(re2, strlen(re2), &nre, &nre_len);
+ CuAssertIntEquals(tc, 0, r);
+ CuAssertStrEquals(tc, re2, nre);
+ free(nre);
+}
+
+static void testNoCase(CuTest *tc) {
+ struct fa *fa1 = make_good_fa(tc, "[a-z0-9]");
+ struct fa *fa2 = make_good_fa(tc, "B");
+ struct fa *fa, *exp;
+ int r;
+
+ fa_nocase(fa1);
+ fa = fa_intersect(fa1, fa2);
+ CuAssertPtrNotNull(tc, fa);
+ r = fa_equals(fa, fa2);
+ fa_free(fa);
+ CuAssertIntEquals(tc, 1, r);
+
+ fa = fa_concat(fa1, fa2);
+ exp = make_good_fa(tc, "[a-zA-Z0-9]B");
+ r = fa_equals(fa, exp);
+ fa_free(fa);
+ CuAssertIntEquals(tc, 1, r);
+}
+
+static void testExpandNoCase(CuTest *tc) {
+ const char *p1 = "aB";
+ const char *p2 = "[a-cUV]";
+ const char *p3 = "[^a-z]";
+ char *s;
+ size_t len;
+ int r;
+
+ r = fa_expand_nocase(p1, strlen(p1), &s, &len);
+ CuAssertIntEquals(tc, 0, r);
+ CuAssertStrEquals(tc, "[Aa][Bb]", s);
+ free(s);
+
+ r = fa_expand_nocase(p2, strlen(p2), &s, &len);
+ CuAssertIntEquals(tc, 0, r);
+ CuAssertStrEquals(tc, "[A-CUVa-cuv]", s);
+ free(s);
+
+ r = fa_expand_nocase(p3, strlen(p3), &s, &len);
+ CuAssertIntEquals(tc, 0, r);
+ CuAssertStrEquals(tc, "[^A-Za-z]", s);
+ free(s);
+}
+
+static void testNoCaseComplement(CuTest *tc) {
+ const char *key_s = "keY";
+ struct fa *key = make_good_fa(tc, key_s);
+ struct fa *isect = NULL;
+
+ fa_nocase(key);
+
+ struct fa *comp = mark(fa_complement(key));
+
+ key = make_good_fa(tc, key_s);
+
+ /* We used to have a bug in totalize that caused the intersection
+ * to contain "keY" */
+ isect = fa_intersect(key, comp);
+
+ CuAssertIntEquals(tc, 1, fa_is_basic(isect, FA_EMPTY));
+ fa_free(isect);
+}
+
+static void free_words(int n, char **words) {
+ for (int i=0; i < n; i++) {
+ free(words[i]);
+ }
+ free(words);
+}
+
+static void testEnumerate(CuTest *tc) {
+ struct fa *fa1 = make_good_fa(tc, "[ab](cc|dd)");
+ static const char *const fa1_expected[] =
+ { "acc", "add", "bcc", "bdd" };
+ struct fa *fa_inf = make_good_fa(tc, "a(b*|d)c");
+ struct fa *fa_empty = make_good_fa(tc, "a?");
+
+ char **words;
+ int r;
+
+ r = fa_enumerate(fa1, 2, &words);
+ CuAssertIntEquals(tc, -2, r);
+ CuAssertPtrEquals(tc, NULL, words);
+
+ r = fa_enumerate(fa1, 10, &words);
+ CuAssertIntEquals(tc, 4, r);
+ CuAssertPtrNotNull(tc, words);
+
+ for (int i=0; i < r; i++) {
+ int found = 0;
+ for (int j=0; j < ARRAY_CARDINALITY(fa1_expected); j++) {
+ if (STREQ(words[i], fa1_expected[j]))
+ found = 1;
+ }
+ if (!found) {
+ char *msg;
+ if (asprintf(&msg, "Generated word %s not expected", words[i]) < 0)
+ msg = NULL;
+ CuFail(tc, msg ? msg : "Generated word not expected");
+ }
+ }
+ free_words(10, words);
+
+ r = fa_enumerate(fa_inf, 100, &words);
+ CuAssertIntEquals(tc, -2, r);
+ CuAssertPtrEquals(tc, NULL, words);
+
+ r = fa_enumerate(fa_empty, 10, &words);
+ CuAssertIntEquals(tc, 2, r);
+ CuAssertPtrNotNull(tc, words);
+ CuAssertStrEquals(tc, "", words[0]);
+ CuAssertStrEquals(tc, "a", words[1]);
+ free_words(10, words);
+}
+
+int main(int argc, char **argv) {
+ if (argc == 1) {
+ char *output = NULL;
+ CuSuite* suite = CuSuiteNew();
+ CuSuiteSetup(suite, setup, teardown);
+
+ SUITE_ADD_TEST(suite, testBadRegexps);
+ SUITE_ADD_TEST(suite, testMonster);
+ SUITE_ADD_TEST(suite, testChars);
+ SUITE_ADD_TEST(suite, testManualAmbig);
+ SUITE_ADD_TEST(suite, testContains);
+ SUITE_ADD_TEST(suite, testIntersect);
+ SUITE_ADD_TEST(suite, testComplement);
+ SUITE_ADD_TEST(suite, testOverlap);
+ SUITE_ADD_TEST(suite, testExample);
+ SUITE_ADD_TEST(suite, testAmbig);
+ SUITE_ADD_TEST(suite, testAmbigWithNuls);
+ SUITE_ADD_TEST(suite, testAsRegexp);
+ SUITE_ADD_TEST(suite, testAsRegexpMinus);
+ SUITE_ADD_TEST(suite, testRangeEnd);
+ SUITE_ADD_TEST(suite, testNul);
+ SUITE_ADD_TEST(suite, testRestrictAlphabet);
+ SUITE_ADD_TEST(suite, testExpandCharRanges);
+ SUITE_ADD_TEST(suite, testNoCase);
+ SUITE_ADD_TEST(suite, testExpandNoCase);
+ SUITE_ADD_TEST(suite, testNoCaseComplement);
+ SUITE_ADD_TEST(suite, testEnumerate);
+
+ CuSuiteRun(suite);
+ CuSuiteSummary(suite, &output);
+ CuSuiteDetails(suite, &output);
+ printf("%s\n", output);
+ free(output);
+ int result = suite->failCount;
+ CuSuiteFree(suite);
+ return result;
+ }
+
+ for (int i=1; i<argc; i++) {
+ struct fa *fa;
+ int r;
+ if ((r = fa_compile(argv[i], strlen(argv[i]), &fa)) != REG_NOERROR) {
+ print_regerror(r, argv[i]);
+ } else {
+ dot(fa);
+ size_t s_len;
+ char *s;
+ fa_example(fa, &s, &s_len);
+ printf("Example for %s: %s\n", argv[i], s);
+ free(s);
+ char *re;
+ size_t re_len;
+ r = fa_as_regexp(fa, &re, &re_len);
+ if (r == 0) {
+ printf("/%s/ = /%s/\n", argv[i], re);
+ free(re);
+ } else {
+ printf("/%s/ = ***\n", argv[i]);
+ }
+ fa_free(fa);
+ }
+ }
+}
+
+/*
+ * Local variables:
+ * indent-tabs-mode: nil
+ * c-indent-level: 4
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
--- /dev/null
+#include <config.h>
+#include "augeas.h"
+
+#include <stdlib.h>
+#include <stdio.h>
+
+const char *abs_top_srcdir;
+const char *abs_top_builddir;
+char *root = NULL, *src_root = NULL, *lensdir = NULL;
+struct augeas *aug = NULL;
+
+
+#define die(msg) \
+ do { \
+ fprintf(stderr, "%s:%d: Fatal error: %s\n", __FILE__, __LINE__, msg); \
+ exit(EXIT_FAILURE); \
+ } while(0)
+
+
+int main (void)
+{
+ abs_top_srcdir = getenv("abs_top_srcdir");
+ if (abs_top_srcdir == NULL)
+ die("env var abs_top_srcdir must be set");
+
+ abs_top_builddir = getenv("abs_top_builddir");
+ if (abs_top_builddir == NULL)
+ die("env var abs_top_builddir must be set");
+
+ if (asprintf(&src_root, "%s/tests/root", abs_top_srcdir) < 0) {
+ die("failed to set src_root");
+ }
+
+ if (asprintf(&lensdir, "%s/lenses", abs_top_srcdir) < 0)
+ die("asprintf lensdir failed");
+
+
+ aug = aug_init (src_root, lensdir, AUG_NO_STDINC);
+ if (!aug) { perror ("aug_init"); exit (1); }
+ aug_close (aug);
+
+ free(root);
+ free(src_root);
+ free(lensdir);
+ return 0;
+}
--- /dev/null
+#! /bin/sh
+# Run one lens test.
+# Derive names of inputs from the name of this script.
+
+[ -n "$abs_top_srcdir" ] || abs_top_srcdir=$TOPDIR
+LENS_DIR=$abs_top_srcdir/lenses
+
+me=`echo "$0"|sed 's,.*/lens-\(.*\)\.sh$,\1,'`
+
+t=$LENS_DIR/tests/test_$me.aug
+
+if [ -n "$VALGRIND" ] ; then
+ exec $VALGRIND $AUGPARSE --nostdinc -I "$LENS_DIR" "$t"
+else
+ export MALLOC_CHECK_=3
+ exec augparse --nostdinc -I "$LENS_DIR" "$t"
+fi
--- /dev/null
+This directory contains modules that test the interpreter, particularly
+ones that make module loading fail.
+
+Tests for actually shipping lenses should go into ../lenses/tests.
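
For illustration, a minimal sketch of such a module (the module name
Fail_example is hypothetical, not one of the shipped tests): loading it
must fail, since the default value handed to del does not match the
del's regexp.

```
(* Hypothetical sketch: loading this module must fail because *)
(* the default value "none" does not match /[0-9]+/ *)
module Fail_example =
  let lns = del /[0-9]+/ "none"
```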
--- /dev/null
+module Fail_cf_two_nullable =
+
+(* Both branches of the union are nullable, making SPC ambiguous *)
+let rec spc = [ spc . label "x" ] | del /s*/ "s"
--- /dev/null
+module Fail_concat_ctype =
+
+ (* Concatenation of /a|ab/ and /a|ba/ is ambiguous *)
+ let lns = store /a|ab/ . (del /a/ "x" | del /ba/ "y")
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+module Fail_del_default_check =
+
+ (* Not valid since the default value "NO" does not match /[a-z]+/ *)
+ let lns = del /[a-z]+/ "NO"
--- /dev/null
+(* The construct (del RE STR)? must raise an error from the typechecker *)
+(* since there's no way for the put of '?' to determine whether the del *)
+(* should be used or not - that decision is made by looking at the tree *)
+(* alone. *)
+module Fail_del_maybe =
+
+let indent = (del /[ \t]+/ " ")?
--- /dev/null
+module Fail_double_maybe =
+
+ (* We cannot allow this: for a string " = " we have no way to know *)
+ (* during put whether to use any of the lenses within the '?' or not *)
+ (* since we key that decision off either a tree label (there is none *)
+ (* here) or the presence of a value (which in this case may or may *)
+ (* not be there) *)
+ let lns = (del /[ \t]*=/ "=" . (del /[ \t]*/ "" . store /[a-z]+/)? )?
--- /dev/null
+module Fail_Invalid_Regexp =
+
+ (* We previously failed to notice that the second regexp in the union
+ is invalid because we did not properly check expressions that
+ contained literals. This construct now leads to a syntax error *)
+ let rx = /a/ | /d)/
--- /dev/null
+module Fail_let_no_exp =
+
+ (* This used to be accepted because of a grammar bug *)
+ let x =
--- /dev/null
+module Fail_multi_key_concat =
+
+ let lns = key /a/ . key /b/
--- /dev/null
+module Fail_multi_key_iter =
+
+ let lns = ( key /a/ ) *
+
--- /dev/null
+module Fail_multi_store_concat =
+
+ let lns = store /a/ . store /b/
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+module Fail_multi_store_iter =
+
+ let lns = (store /a/)*
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+module Fail_Nocase_Union =
+
+let lns = [ key /[a-z]+/i ] | [ key "UPPER" ]
--- /dev/null
+module Fail_put_bad_value =
+
+ let lns = [ key /a/ . del /=/ "=" . store /[0-9]+/ ]
+
+ test lns put "a=20" after set "a" "foo" = ?
--- /dev/null
+module Fail_recursion_multi_keys =
+
+(* This is the same as (key "a")* . store "x" *)
+let rec l = key "a" . l | store "x"
--- /dev/null
+module Fail_regexp_minus_empty =
+
+ (* This fails since the result of the subtraction is the empty set *)
+ (* which we cannot represent as a regular expression. *)
+ let re = "a" - /[a-z]/
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+module Fail_regexp_range_end =
+
+ let lns = [ key /[1-0]/ ]
--- /dev/null
+module Fail_self_ref_module =
+
+ let b = Fail_self_ref_module.a
+ let a = "a"
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+module Fail_shadow_union =
+ let lns = store /[a-z]+/ | del /(abc|xyz)/ "abc"
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+module Fail_square_consistency =
+
+let left = key "a"
+let right = del "b" "b"
+let body = del "x" "x"
+let s = square left body right
--- /dev/null
+module Fail_square_consistency_del =
+
+let left = del /[ab]/ "a"
+let right = del /[ab]/ "b"
+let body = del "x" "x"
+let s = square left body right
--- /dev/null
+module Fail_square_dup_key =
+
+let left = key "a"
+let right = del "a" "a"
+let body = key "a"
+let s = square left body right
--- /dev/null
+module Fail_square_lens_type =
+
+let left = [ key "a" ]
+let right = [ key "a" ]
+let body = del "x" "x"
+let s = square left body right
--- /dev/null
+module Fail_xform_orphan_value =
+
+ let lns = store /a/
+
+ let xfm = transform lns (incl "/dev/null")
+
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+(* Check that we properly discard skeletons when we shift subtrees *)
+(* across lenses. The test shifts a subtree parsed with SND to one *)
+(* that is put with FST. Since the two have different skeleton types *)
+(* we need to ignore the skeleton that comes with it. *)
+module Pass_array =
+
+ let array =
+ let array_value = store /[a-z0-9]+/ in
+ let fst = seq "values" . array_value . del /[ \t]+/ "\t" in
+ let snd = seq "values" . array_value in
+ del "(" "(" . counter "values" .
+ [ fst ] * . [ snd ] . del ")" ")"
+
+ let lns = [ key /[a-z]+/ . del "=" "=" . array ]
+
+ test lns put "var=(v1 v2)" after
+ set "var/3" "v3" = "var=(v1 v2\tv3)"
--- /dev/null
+(* Make sure that nonsensical tests fail properly instead of *)
+(* causing a segfault (see bug #129) *)
+module Pass_bad_tests =
+
+ let k = [ key "a" ]*
+
+ test k put "aaa" after insb "a" "/a[4]" = *
+ test k put "aaa" after insa "a" "/a[4]" = *
+
+ test k put "aa" after set "x[func()]" "foo" = *
+
+ (* Make sure we reset internal error flags after the above *)
+ test k get "aa" = { "a" } { "a" }
+
+ test k put "aa" after rm "/a[func()]" = *
+
+ test k put "aa" after clear "/a[func()]" = *
--- /dev/null
+module Pass_Compose_Func =
+
+ (* string -> regexp *)
+ let f (x:string) = x . /[a-z]/
+
+ (* regexp -> lens *)
+ let g (x:regexp) = key x
+
+ (* string -> lens *)
+ let h = f ; g
+
+ let _ = h "a"
--- /dev/null
+module Pass_concat_atype =
+
+ (* This passes both the ctype and atype checks for unambiguous *)
+ (* concatenation because the STOREs keep everything copacetic. *)
+ (* If the typechecker looked only at tree labels, we'd get a *)
+ (* type error in the PUT direction, because there is no way *)
+ (* to split the tree *)
+ (* { "a" = .. } { "b" = .. } { "a" = .. } *)
+ (* by labels alone. *)
+
+ let ab = [ key /a/ . store /1/ ] . ([ key /b/ . store /2/ ]?)
+ let ba = ([ key /b/ . store /3/ ])? . [ key /a/ . store /4/ ]
+ let lns = ab . ba
+
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+module Pass_cont_line =
+
+(* Parse a list of words where the list can stretch over multiple lines.
+ Mostly there to demonstrate how to deal with continuation lines. *)
+
+let list_elt = [ label "element" . store /[a-z]+/ ]
+
+let ws_cont = /([ \t]+|[ \t]*\\\\\n[ \t]*)/
+
+let sep = del ws_cont " "
+let eol = del /[ \t]*\n/ "\n"
+
+let list = list_elt . ( sep . list_elt )* . eol
+
+let exp_tree = { "element" = "a" } { "element" = "b" }
+
+test list get "a b\n" = exp_tree
+test list get "a \\\n b\n" = exp_tree
+test list get "a\\\nb\n" = exp_tree
--- /dev/null
+module Pass_Create_Del =
+
+(* del, on create, would do another round of unescaping the default value
+ * which is wrong. See Issue #507 *)
+let sep = del /:([ \t]*\\\\\n[ \t]*:)?/ ":\\\n\t:"
+
+let lns = [ label "entry" . sep . store /[a-z]+/ ]*
+
+test lns get ":a:\\\n:b" =
+ { "entry" = "a" }
+ { "entry" = "b" }
+
+test lns put ":a" after
+ set "/entry[last()+1]" "b" = ":a:\\\n\t:b"
--- /dev/null
+module Pass_empty_put =
+
+ let eol = del "\n" "\n"
+ let lns = [ key /[a-z]+/ . del /[ \t]+/ " " . store /[a-z]+/ . eol ]*
+
+ test lns put "" after
+ set "entry" "value"
+ = "entry value\n"
--- /dev/null
+module Pass_empty_regexp =
+
+let l = [ del // "" . key /[a-z]+/ ]
+
+test l get "abc" = { "abc" }
--- /dev/null
+module Pass_format_atype =
+
+(* format_atype in lens.c had an allocation bug that would corrupt memory
+ and later lead to unpleasant behavior like segfaults. This nonsensical
+ lens, when formatted, triggers that problem.
+
+ For this test to make sense, it needs to be run with MALLOC_CHECK_=2 so
+ that glibc complains and aborts the test
+*)
+
+let l = [label "l"]|del /x/ "x"
+let _ = lens_format_atype l
--- /dev/null
+module Pass_ins_test =
+
+ let eol = del "\n" "\n"
+ let word = /[a-z0-9]+/
+ let sep = del /[ \t]+/ " "
+ let lns = [ key word . sep . store word . eol ]*
+
+ let s = "key value1\nkey value2\n"
+ let t = "key value1\nnewkey newvalue\nkey value2\n"
+
+ test lns put s after
+ insa "newkey" "key[1]";
+ set "newkey" "newvalue"
+ = t
+
+ test lns put s after
+ insb "newkey" "key[2]";
+ set "newkey" "newvalue"
+ = t
+
+ test lns put s after
+ insb "newkey" "key[1]";
+ set "newkey" "newvalue"
+ = "newkey newvalue\n" . s
+
+ (* Now test insertion inside the tree *)
+
+ let lns2 = [ key word . eol . lns ]
+
+ let s2 = "outer\n" . s
+ let t2 = "outer\n" . t
+
+ test lns2 put s2 after
+ insa "newkey" "outer/key[1]";
+ set "outer/newkey" "newvalue"
+ = t2
+
+ test lns2 put s2 after
+ insb "newkey" "outer/key[2]";
+ set "outer/newkey" "newvalue"
+ = t2
+
+
+ test lns2 put s2 after
+ insb "newkey" "outer/key[1]";
+ set "outer/newkey" "newvalue"
+ = "outer\nnewkey newvalue\n" . s
+
--- /dev/null
+module Pass_iter_atype =
+
+ (* Similar to the Pass_concat_atype check; verify that the *)
+ (* typechecker takes tree values into account in the PUT direction, *)
+ (* and not just tree labels. *)
+
+ let a (r:regexp) = [ key /a/ . store r ]
+ let aa = (a /1/) . (a /2/)
+ let lns = ((a /3/) | aa)*
+
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+module Pass_iter_key_union =
+
+ (* We used to typecheck the atype of this as (a|b/)* *)
+ (* which is wrong and leads to spurious ambiguous iteration errors *)
+ (* The right atype is ((a|b)/)* *)
+ let l1 = [ key /a|b/ . store /x/ ]
+ let l2 = [ key /ab/ . store /y/ ]
+
+ let lns = (l1 | l2)*
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+module Pass_label_value =
+
+let l = [ label "label" . value "value" ]
+
+test l get "" = { "label" = "value" }
+
+test l put "" after rm "/foo" = ""
+
+let word = /[^ \t\n]+/
+let ws = del /[ \t]+/ " "
+let chain = [ key "RewriteCond" . ws .
+ [ label "eq" . store word ] . ws . store word .
+ ([ label "chain_as" . ws . del "[OR]" "[OR]" . value "or"]
+ |[ label "chain_as" . value "and" ]) ]
+
+test chain get "RewriteCond %{var} val [OR]" =
+ { "RewriteCond" = "val"
+ { "eq" = "%{var}" }
+ { "chain_as" = "or" } }
+
+test chain get "RewriteCond %{var} lue" =
+ { "RewriteCond" = "lue"
+ { "eq" = "%{var}" }
+ { "chain_as" = "and" } }
+
+test chain put "RewriteCond %{var} val [OR]" after
+ set "/RewriteCond/chain_as" "and" = "RewriteCond %{var} val"
--- /dev/null
+module Pass_lens_plus =
+
+ let a = [ key /a/ ]
+ let lns = a+
+
+ test lns put "a" after rm "x" = "a"
+
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+(* Test let expressions *)
+module Pass_let_exp =
+
+ (* This definition is insanely roundabout; it's written that way *)
+ (* since we want to exercise LET expressions *)
+ let lns =
+ let lbl = "a" in
+ let spc = " " in
+ let del_spaces (s:string) = del spc+ s in
+ let del_str (s:string) = del s s in
+ let store_delim (ldelim:string)
+ (rdelim:string) (val:regexp) =
+ del_str ldelim . store val . del_str rdelim in
+ [ label lbl . del_spaces " " . store_delim "(" ")" /[a-z]+/ ]
+
+ test lns get " (abc)" = { "a" = "abc" }
+
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+module Pass_Lexer =
+
+ (* Some tests for corner cases of the lexer; they will all lead to
+ * syntax errors if we are not lexing correctly *)
+
+ let s1 = "\\"
+ let s2 = "\
+"
+
+ let r1 = /\\\\/
+
+ let slash = "/" (* Just here to cause trouble if the lexer does not
+ * properly terminate the above expressions *)
--- /dev/null
+module Pass_Mixed_Recursion =
+
+(* Test that using a recursive lens as part of another lens works *)
+
+let rec r = [ key "a" . r ] | [ key "a" . store "x" ]
+
+(* Star a recursive lens *)
+let star = r*
+
+test star get "aax" = { "a" { "a" = "x" } }
+test star put "aax" after rm "/nothing" = "aax"
+
+test star get "axax" = { "a" = "x" } { "a" = "x" }
+test star put "axaaxax" after rm "/a[2]" = "axax"
+
+(* Use a starred recursive lens in a more complicated construct *)
+let top = [ label "top" . r* . value "high" ]
+
+test top get "axaax" = { "top" = "high" { "a" = "x" } { "a" { "a" = "x" } } }
+test top put "axaax" after rm "/top/a[1]" = "aax"
+
+(* Use a recursive lens in a union *)
+let union = (r | [ key "b" . store /[a-z]/ ])*
+
+test union get "aaxbyax" =
+ { "a" { "a" = "x" } } { "b" = "y" } { "a" = "x" }
+test union put "aaxbyax" after
+ set "/b[2]" "z" = "aaxbyaxbz"
+test union put "aaxbyax" after
+ set "/b[2]" "z"; rm "/b[1]" = "aaxaxbz"
--- /dev/null
+module Pass_nested_sections =
+
+let word = /[a-zA-Z0-9]+/
+let ws = /[ \t]*/
+let nl = /\n/
+
+let eol = del (ws . nl) "\n"
+let eq = del (ws . "=" . ws) "="
+let lbr = del (ws . "{" . ws . nl) " {\n"
+let rbr = del (ws . "}" . ws . nl) "}\n"
+let indent = del ws ""
+
+let entry = [ indent . key word . eq . store word . eol ]
+
+let rec lns =
+ let sec = [ indent . key word . lbr . lns . rbr ] in
+ (sec | entry)+
+
+
+test lns get "key = value\n" = { "key" = "value" }
+
+test lns get "section {
+ key1 = v1
+ key2 = v2
+ section {
+ section {
+ key4 = v4
+ }
+ }
+ section {
+ key5 = v5
+ }
+}\n" =
+ { "section"
+ { "key1" = "v1" }
+ { "key2" = "v2" }
+ { "section" { "section" { "key4" = "v4" } } }
+ { "section" { "key5" = "v5" } } }
+
+
+test lns get "section {
+ section {
+ key2 = v2
+ }
+}
+section {
+ key3 = v3
+}
+" =
+ { "section"
+ { "section"
+ { "key2" = "v2" } } }
+ { "section"
+ { "key3" = "v3" } }
--- /dev/null
+module Pass_nocase =
+
+let lns1 =
+ let re = /[a-z]+/i - "Key" in
+ [ label "1" . store re ] | [ label "2" . store "Key" ]
+
+test lns1 get "Key" = { "2" = "Key" }
+test lns1 get "key" = { "1" = "key" }
+test lns1 get "KEY" = { "1" = "KEY" }
+test lns1 get "KeY" = { "1" = "KeY" }
+
+let lns2 =
+ let re = /[A-Za-z]+/ - /Key/i in
+ [ label "1" . store re ] | [ label "2" . store /Key/i ]
+
+test lns2 get "Key" = { "2" = "Key" }
+test lns2 get "key" = { "2" = "key" }
+test lns2 get "KEY" = { "2" = "KEY" }
+test lns2 get "KeY" = { "2" = "KeY" }
+
+let lns3 =
+ let rx = /type/i|/flags/i in
+ [ key rx . del "=" "=" . store /[0-9]+/ ]
+
+test lns3 get "FLAGS=1" = { "FLAGS" = "1" }
--- /dev/null
+module Pass_prefix_union =
+
+let eol = del "\n" "\n"
+let pair (k:regexp) (v:regexp) = [ key k . del "=" "=" . store v . eol ]
+
+let lns = pair "a" "a" | pair "a" "a" . pair "b" "b"
+
+test lns get "a=a\nb=b\n" = { "a" = "a" } { "b" = "b" }
--- /dev/null
+module Pass_put_bad_label =
+
+ let entry = [ key /[a-z]+/ . del /=/ "=" . store /[0-9]+/ . del "\n" "\n" ]
+
+ let lns = [ key /[A-Z]+/ . del "\n" "\n" . entry* ]
+
+ test lns put "SECTION\na=1\nz=26\n" after
+ insa "B" "/SECTION/a";
+ set "/SECTION/B" "2" = *
--- /dev/null
+module Pass_put_invalid_star =
+
+ let lns = [ key /[a-z]+/ . del "=" "=" . store /[0-9]+/ . del "\n" "\n" ]*
+
+ test lns put "a=1\nb=2\n" after
+ set "c2" "3"
+ = *
+
+ test lns put "a=1\n" after
+ insb "x1" "a" ;
+ set "x1" "1"
+ = *
+
+
+
--- /dev/null
+module Pass_quote_quote =
+
+ let str = "\"A quote\""
+
+ let delq = del /"/ "\""
+
+ let lns = [ delq . key /[a-zA-Z ]+/ . delq ] (* " Make emacs relax *)
+
+ test lns get str = { "A quote" }
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+module Pass_read_file =
+
+(* This is a roundabout way to test that Sys.getenv and Sys.read_file *)
+(* work. Since we don't have a generic unit testing facility, we need *)
+(* to phrase things in terms of a lens test. *)
+
+let fname = (Sys.getenv "abs_top_srcdir") . "/tests/root/pairs.txt"
+let str = (Sys.read_file fname)
+let lns = [ key /[a-z0-9]*/ . del /[ \t]*=[ \t]*/ "="
+ . store /[^ \t\n]*/ . del /\n/ "\n" ] *
+
+test lns get str =
+ { "key1" = "value1" }
+ { "key2" = "value2" }
+ { "key3" = "value3" }
--- /dev/null
+module Pass_regexp_minus =
+
+ let word = /[a-z]+/
+ let no_baseurl = word - "baseurl"
+
+ let eq = del "=" "="
+
+ let l1 = [ key no_baseurl . eq . store /[a-z]+/ ]
+ let l2 = [ key "baseurl" . eq . store /[0-9]+/ ]
+
+ let lns = ((l1 | l2) . del "\n" "\n")*
+
+ test lns get "foo=abc\nbaseurl=123\n" =
+ { "foo" = "abc" }
+ { "baseurl" = "123" }
+
+ test lns get "baseurl=abc\n" = *
+
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+module Pass_self_ref_module =
+
+ let b = "b"
+ let a = Pass_self_ref_module.b
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+module Pass_simple_recursion =
+
+let rec lns =
+ let lbr = del "<" "<" in
+ let rbr = del ">" ">" in
+ let k = [ key /[a-z]+/ ] in
+ let node = [ label "S" . lbr . lns . rbr ] in
+ let b = node | k in
+ b*
+
+(* let rec lns = [ key "a" . lns ] | [ key "a" ] *)
+test lns get "<x>" = { "S" { "x" } }
+
+test lns put "<x>" after rm "nothing" = "<x>"
+
+test lns put "<<x>>" after rm "nothing" = "<<x>>"
+
+test lns put "<x><x>" after rm "/S[2]" = "<x>"
+
+test lns put "<x>" after clear "/S/S/S/x" = "<x<<x>>>"
+
+
+(* Start with { "S" { "x" } } and modify to { "S" { "S" { "x" } } } *)
+test lns put "<x>" after
+ insa "S" "/S";
+ clear "/S[2]/S/x";
+ rm "/S[1]" = "<<x>>"
+
+test lns get "<<<x><x>><x>><x>" =
+ { "S"
+ { "S"
+ { "S" { "x" } }
+ { "S" { "x" } } }
+ { "S" { "x" } } }
+ { "S" { "x" } }
+
+test lns put "<<<x><x>><x>><x>" after rm "/S[1]/S[1]/S[1]" =
+ "<<<x>><x>><x>"
+
+
+test lns get "<<yo>><zulu>" =
+ { "S" { "S" { "yo" } } }
+ { "S" { "zulu" } }
+
+(* Some pathological tests for nullable lenses *)
+
+let rec prim_nullable = [ prim_nullable . key /x/ ] | del /s*/ ""
+test prim_nullable get "sx" = { "x" }
+test prim_nullable get "x" = { "x" }
+
+let rec ambig = [ ambig . label "x" ] | del /s+/ "s"
+test ambig get "" = *
+test ambig get "s" = *
+
+(* Test link filtering. These tests cause seemingly ambiguous parses, which
+ * need to be disambiguated by filtering links in the Earley graph. See
+ * section 5.3 in the paper *)
+let rec unamb1 = [ label "x" . unamb1 . store /y/ ] | [ key "z" ]
+test unamb1 get "zyy" = { "x" = "y" { "x" = "y" { "z" } } }
+
+
+let rec unamb2 = del /u*/ "" . [ unamb2 . key /x/ ] | del /s*/ ""
+test unamb2 get "sx" = { "x" }
+test unamb2 get "x" = { "x" }
+
+(* Test proper handling of '?'; bug #119 *)
+let rec maybe = [ del "a" "a" . maybe . del "b" "b" ]?
+test maybe get "aabb" = { { } }
+
+(* Test proper handling of '?'; bug #180 *)
+let rec maybe2 = [ del "a" "a" . maybe2 ]?
+test maybe2 get "aa" = { { } }
+
+let rec maybe3 = [ maybe3 . del "a" "a" ]?
+test maybe3 get "aa" = { { } }
+
+let rec maybe4 = [ del "a" "a" ] . maybe4?
+test maybe4 get "aa" = { } { }
+
+let rec maybe5 = maybe5? . [ del "a" "a" ]
+test maybe5 get "aa" = { } { }
+
+(* Test that parses ending with a SCAN are accepted; bug #126 *)
+let dels (s:string) = del s s
+let d2 = del /b*/ ""
+let sec (body:lens) = [ key /a*/ . dels "{" . body . dels "}"]*
+let rec sec_complete = sec sec_complete
+let lns2 = sec_complete . d2
+test lns2 get "a{}b" = { "a" }
+
+(* Test stack handling with both parsing directions; bug #136 *)
+let idr = [ key /[0-9]+/ . del /z+/ "z" . store /[0-9]+/ . del /a+/ "a" ]
+let rec idr_left = idr_left? . idr
+let rec idr_right = idr . idr_right?
+let input = "1zz2aa33zzz44aaa555zzzz666aaaa"
+test idr_left get input = { "1" = "2" }{ "33" = "44" }{ "555" = "666" }
+test idr_right get input = { "1" = "2" }{ "33" = "44" }{ "555" = "666" }
--- /dev/null
+module Pass_square =
+
+(* Utility lens *)
+let dels (s:string) = del s s
+
+(************************************************************************
+ * Regular square lens
+ *************************************************************************)
+
+(* Simplest square lens *)
+let s = store /[ab]/
+let sqr0 =
+ let k = key "x" in
+ let d = dels "x" in
+ [ square k s d ] *
+test sqr0 get "xaxxbxxax" = { "x" = "a" }{ "x" = "b" }{ "x" = "a" }
+test sqr0 put "xax" after set "/x[3]" "b" = "xaxxbx"
+
+(* test mismatched tag *)
+test sqr0 get "xya" = *
+
+(* Test regular expression matching with multiple groups *)
+let body = del /([f]+)([f]+)/ "ff" . del /([g]+)([g]+)/ "gg"
+let sqr1 =
+ let k = key /([a-b]*)([a-b]*)([a-b]*)/ in
+ let d1 = del /([a-b]*)([a-b]*)([a-b]*)/ "a" in
+ let d2 = del /([x]+)([x]+)/ "xx" in
+ [ square k body d1 . d2 ] *
+
+test sqr1 get "aaffggaaxxbbffggbbxx" = { "aa" }{ "bb" }
+test sqr1 get "affggaxx" = { "a" }
+test sqr1 put "affggaxx" after clear "/b" = "affggaxxbffggbxx"
+
+(* Test XML like elements up to depth 2 *)
+let b = del ">" ">" . del /[a-z ]*/ "" . del "</" "</"
+let open_tag = key /[a-z]+/
+let close_tag = del /[a-z]+/ "a"
+let xml = [ del "<" "<" . square open_tag b close_tag . del ">" ">" ] *
+
+let b2 = del ">" ">" . xml . del "</" "</"
+let xml2 = [ del "<" "<" . square open_tag b2 close_tag . del ">" ">" ] *
+
+test xml get "<a></a><b></b>" = { "a" }{ "b" }
+
+(* test error on mismatched tag *)
+test xml get "<a></a><b></c>" = *
+
+(* test get nested tags of depth 2 *)
+test xml2 get "<a><b></b><c></c></a>" =
+ { "a"
+ { "b" }
+ { "c" }
+ }
+
+(* test nested put of depth 2 *)
+test xml2 put "<a></a>" after clear "/x/y" = "<a></a><x><y></y></x>"
+
+(* test nested put of depth 3 : should fail *)
+test xml2 put "<a></a>" after clear "/x/y/z" = *
+
+(* matches can be case-insensitive *)
+let s5 = store /[yz]/
+let sqr5 =
+ let k = key /x/i in
+ let d = del /x/i "x" in
+ [ square k s5 d ] *
+test sqr5 get "xyX" = { "x" = "y" }
+test sqr5 get "xyXXyxXyx" = { "x" = "y" }{ "X" = "y" }{ "X" = "y" }
+test sqr5 put "xyX" after set "/x[3]" "z" = "xyxxzx"
+
+(* test concat multiple squares *)
+let rex = /[a-z]/
+let csqr =
+ let k = key rex in
+ let d = del rex "a" in
+ let e = dels "" in
+ [ square k e d . square d e d ] *
+
+test csqr get "aabbccdd" = { "a" } { "c" }
+test csqr put "aabb" after insa "z" "/a" = "aabbzzaa"
+
+(* test default square create values *)
+let create_square =
+ let d = dels "a" in
+ [ key "x" . square d d d ]*
+
+test create_square put "" after clear "/x" = "xaaa"
+
+(* test optional quotes *)
+let word = /[A-Za-z0-9_.-]+/
+let entry =
+ let k = key word in
+ let quote = del /"?/ "\"" (* " *) in
+ let body = store /[a-z]+/ in
+ let v = square quote body quote in
+ [ k . dels "=" . v ]
+
+test entry get "key=\"value\"" = { "key" = "value" }
+test entry get "key=value" = { "key" = "value" }
+
+test entry put "key=value" after
+ set "/key" "other" = "key=other"
+
+test entry put "key=\"value\"" after
+ set "/key" "other" = "key=\"other\""
+
+(* create with square *)
+(* Passing this test successfully requires that the skeleton from the get *)
+(* is correctly detected as not matching the skeleton for the second lens *)
+(* in the union - the reason for the mismatch is that the quote is *)
+(* optional in the first branch of the union, and the skeleton therefore *)
+(* does not have "@" in the right places, triggering a create *)
+let sq_create =
+ let word = store /[a-z]+/ in
+ let number = store /[0-9]+/ in
+ let quot = dels "@" in
+ let quot_opt = del /@?/ "@" in
+ [ label "t" . square quot_opt word quot_opt ]
+ | [ label "t" . square quot number quot ]
+
+test sq_create put "abc" after
+ set "/t" "42" = "@42@"
--- /dev/null
+module Pass_square_rec =
+
+(* Utility lens *)
+let dels (s:string) = del s s
+
+(************************************************************************
+ * Recursive square lens
+ *************************************************************************)
+(* test square with left and right as dels *)
+let lr (body:lens) =
+ let k = key "c" . body* in
+ let d = dels "ab" in
+ [ square d k d ]
+
+let rec lr2 = lr lr2
+
+test lr2 get "abcabcabab" =
+ { "c"
+ { "c" }
+ }
+
+let open_tag = key /[a-z]+/
+let close_tag = del /[a-z]+/ "a"
+
+(* Basic element *)
+let xml_element (body:lens) =
+ let g = del ">" ">" . body . del "</" "</" in
+ [ del "<" "<" . square open_tag g close_tag . del ">" ">" ] *
+
+let rec xml_rec = xml_element xml_rec
+
+test xml_rec get "<a><b><c><d><e></e></d></c></b></a>" =
+ { "a"
+ { "b"
+ { "c"
+ { "d"
+ { "e" }
+ }
+ }
+ }
+ }
+
+test xml_rec get "<a><b></b><c></c><d></d><e></e></a>" =
+ { "a"
+ { "b" }
+ { "c" }
+ { "d" }
+ { "e" }
+ }
+
+test xml_rec put "<a></a><b><c></c></b>" after clear "/x/y/z" = "<a></a><b><c></c></b><x><y><z></z></y></x>"
+
+(* mismatched tags *)
+test xml_rec get "<a></c>" = *
+test xml_rec get "<a><b></b></c>" = *
+test xml_rec get "<a><b></c></a>" = *
+
+
+(* test ctype_nullable and typecheck *)
+let rec z =
+ let k = key "ab" in
+ let d = dels "ab" in
+ [ square k z? d ]
+test z get "abab" = { "ab" }
+
+(* test tip handling when using store inside body *)
+let c (body:lens) =
+ let sto = store "c" . body* in
+ let d = dels "ab" in
+ let k = key "ab" in
+ [ square k sto d ]
+
+let rec cc = c cc
+
+test cc get "abcabcabab" =
+ { "ab" = "c"
+ { "ab" = "c" }
+ }
+
+(* test mixing regular and recursive lenses *)
+
+let reg1 =
+ let k = key "y" in
+ let d = dels "y" in
+ let e = dels "" in
+ [ square k e d ]
+
+let reg2 =
+ let k = key "y" in
+ let d = dels "y" in
+ [ square k reg1 d ]
+
+let rec rec2 =
+ let d1 = dels "x" in
+ let k1 = key "x" in
+ let body = reg2 | rec2 in
+ [ square k1 body d1 ]?
+
+test rec2 get "xyyyyx" =
+ { "x"
+ { "y"
+ { "y" }
+ }
+ }
+
+test rec2 put "" after clear "/x/y/y" = "xyyyyx"
+
+(* test correct put behavior *)
+let input3 = "aaxyxbbaaaxyxbb"
+let b3 = dels "y"
+let sqr3 =
+ let k = key /[x]/ in
+ let d = dels "x" in
+ [ del /[a]*/ "a" . square k b3 d . del /[b]*/ "b" ]*
+test sqr3 get input3 = { "x" }{ "x" }
+test sqr3 put input3 after clear "/x[1]" = input3
+
+let b4 = dels "x"
+let rec sqr4 =
+ let k = key /[b]|[c]/ in
+ let d = del /[b]|[c]/ "b" in
+ [ del /[a]+/ "a" . square k (b4|sqr4) d ]
+test sqr4 put "aabaaacxcb" after rm "x" = "aabaaacxcb"
+
+(* test concat multiple squares *)
+let rex = /[a-z]/
+let rec csqr =
+ let k = key rex in
+ let d = del rex "a" in
+ let e = dels "" in
+ [ square k e d . csqr* . square d e d ]
+
+test csqr get "aabbccdd" =
+ { "a"
+ { "b" }
+ }
+
+test csqr put "aabbccdd" after clear "/a" = "aabbccdd"
+test csqr put "aabb" after clear "/a/z" = "aazzaabb"
--- /dev/null
+(* Test that the '?' operator behaves properly when it contains only *)
+(* a store. For this to work, the put for '?' has to take into account *)
+(* whether the current tree node has a value associated with it or not *)
+module Pass_store_maybe =
+
+ let lns = [ key /[a-z]+/ . del / +/ " " . (store /[0-9]+/) ? ]
+
+ test lns put "key " after rm "noop" = "key "
+ test lns put "key " after set "key" "42" = "key 42"
+ test lns put "key 42" after set "key" "13" = "key 13"
+
+ (* Convoluted way to make the value for "key" NULL *)
+ test lns put "key 42" after insa "key" "key"; rm "key[1]" = "key "
+
+ (* Check that we correctly restore the DEL if we choose to go into the *)
+ (* '?' operator instead of doing a create and using the "--" default *)
+ let ds = [ key /[a-z]+/ . (del /[ -]*/ "--" . store /[0-9]+/) ? ]
+
+ test ds put "key 0" after rm "noop" = "key 0"
--- /dev/null
+(* Demonstrate how quotes can be stripped from values. *)
+(* Values can be enclosed in single or double quotes; if they *)
+(* contain spaces, they _have_ to be enclosed in quotes. Since *)
+(* everything's regular, we can't actually match a closing quote *)
+(* to the opening quote, so "hello' will be accepted, too. *)
+
+module Pass_strip_quotes =
+
+ let nuttin = del /(""|'')?/ "''"
+ let bare = del /["']?/ "" . store /[a-zA-Z0-9]+/ . del /["']?/ ""
+ let quoted = del /["']/ "'" . store /.*[ \t].*/ . del /["']/ "'"
+
+ let lns = [ label "foo" . bare ]
+ | [ label "foo" . quoted ]
+ | [ label "foo" . nuttin ]
+
+ test lns get "'hello'" = { "foo" = "hello" }
+ test lns get "'hello world'" = { "foo" = "hello world" }
+
+ let hw = "'hello world'"
+ let set_hw = set "/foo" "hello world"
+ test lns put "hello" after set_hw = hw
+ test lns put "'hello world'" after set "/foo" "hello" = "'hello'"
+
+ test lns put "" after set_hw = hw
+ test lns put "\"\"" after set_hw = hw
--- /dev/null
+module Pass_subtree_growth =
+
+ (* Make sure that a subtree that is not the lowest one does indeed *)
+ (* grow the tree, instead of just setting the label of an enclosed *)
+ (* subtree. This is only a problem if the enclosed subtree does *)
+ (* not have a label *)
+
+ let lns = [ label "outer" . [ store /a/ ] ]
+
+ (* The improper result is { "outer" = "a" } *)
+ test lns get "a" = { "outer" { = "a" } }
+
+ (* This produces a tree { "outer" = "b" { = "a" } } *)
+ (* but the value for "outer" is never used in put *)
+ (* (That should probably be flagged as an error separately) *)
+ test lns put "a" after set "outer" "b" = "a"
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+module Pass_switch_value_regexp =
+
+ let lns = ([ key /u/ ] . [ key /b/ . store /[0-9]+/ ])
+ | ([ key /v/ ] . [ key /b/ . store /[a-z]+/ ])
+
+ test lns get "ub42" = { "u" } { "b" = "42" }
+ test lns get "vbxy" = { "v" } { "b" = "xy" }
+ test lns get "ubxy" = *
+ test lns get "vb42" = *
--- /dev/null
+(* Test that we take the right branch in a union based solely on *)
+(* differing values associated with a tree node *)
+module Pass_union_atype =
+ let del_str (s:string) = del s s
+
+ let lns = [ key /a/ . store /b/ . del_str " (l)" ]
+ | [ key /a/ . store /c/ . del_str " (r)" ]
+
+ test lns put "ac (r)" after set "a" "b" = "ab (l)"
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+module Pass_union_atype_298 =
+ (* The key calculation in union would cause the outer subtree to have *)
+ (* an atype of %r{a/(/|/)}, which is wrong since the two subtrees in *)
+ (* the union do not contribute to the key of the outer subtree *)
+ (* The tree produced by this lens has schema *)
+ (* { /a/ ({ = /b/ }|{}) } *)
+ (* which is perfectly legal *)
+ let lns = [ key /a/ . ([ store /b/] | del /c/ "c") ]
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+module Pass_union_nokey =
+
+let lns = [ key /[a-z]+/ . store /[0-9]+/
+ | [ key /[a-z]+/ ] . del /[ ]+/ " " ]
--- /dev/null
+module Pass_union_select_star =
+
+ (* Check that in unions the right branch is selected, even if the *)
+ (* first branch matches nothing. *)
+ let a = [ key /a/ ]
+ let b = [ key /b/ ]
+
+ let lns = (a* | b+)
+
+ (* The 'rm "x"' is a noop; the grammar won't let us do a put test *)
+ (* without any commands *)
+ test lns put "b" after rm "x" = "b"
+
+(* Local Variables: *)
+(* mode: caml *)
+(* End: *)
--- /dev/null
+module Pass_Unit =
+
+(* The unit literal *)
+let _ = ()
+(* Check that composition allows units on the left *)
+let _ = () ; () ; (); "something"
--- /dev/null
+# grub.conf generated by anaconda
+#
+# Note that you do not have to rerun grub after making changes to this file
+# NOTICE: You have a /boot partition. This means that
+# all kernel and initrd paths are relative to /boot/, eg.
+# root (hd0,0)
+# kernel /vmlinuz-version ro root=/dev/vg00/lv00
+# initrd /initrd-version.img
+#boot=/dev/sda
+default=0
+timeout=5
+splashimage=(hd0,0)/grub/splash.xpm.gz
+hiddenmenu
+title Fedora (2.6.24.4-64.fc8)
+ root (hd0,0)
+ kernel /vmlinuz-2.6.24.4-64.fc8 ro root=/dev/vg00/lv00
+ initrd /initrd-2.6.24.4-64.fc8.img
+title Fedora (2.6.24.3-50.fc8)
+ root (hd0,0)
+ kernel /vmlinuz-2.6.24.3-50.fc8 ro root=/dev/vg00/lv00
+ initrd /initrd-2.6.24.3-50.fc8.img
+title Fedora (2.6.21.7-3.fc8xen)
+ root (hd0,0)
+ kernel /xen.gz-2.6.21.7-3.fc8
+ module /vmlinuz-2.6.21.7-3.fc8xen ro root=/dev/vg00/lv00
+ module /initrd-2.6.21.7-3.fc8xen.img
+title Fedora (2.6.24.3-34.fc8)
+ root (hd0,0)
+ kernel /vmlinuz-2.6.24.3-34.fc8 ro root=/dev/vg00/lv00
+ initrd /initrd-2.6.24.3-34.fc8.img
+ savedefault
--- /dev/null
+grub.conf
\ No newline at end of file
--- /dev/null
+#
+# Aliases in this file will NOT be expanded in the header from
+# Mail, but WILL be visible over networks or from /bin/mail.
+#
+# >>>>>>>>>> The program "newaliases" must be run after
+# >> NOTE >> this file is updated for any changes to
+# >>>>>>>>>> show through to sendmail.
+#
+
+# Basic system aliases -- these MUST be present.
+mailer-daemon: postmaster
+postmaster: root
+
+# General redirections for pseudo accounts.
+bin: root, adm
+daemon: root
+adm: root
+
+# mailman aliases
+mailman: postmaster
+mailman-owner: mailman
+
+# Person who should get root's mail
+mrepo: root
+root: realroot@example.com
+root+special: realroot+other@example.com
+
+include: :include:/etc/morealiases
+command: |/usr/local/bin/procmail
--- /dev/null
+APT
+{
+ NeverAutoRemove
+ {
+ "^firmware-linux.*";
+ "^linux-firmware$";
+ };
+
+ VersionedKernelPackages
+ {
+ # linux kernels
+ "linux-image";
+ "linux-headers";
+ "linux-image-extra";
+ "linux-signed-image";
+ # kfreebsd kernels
+ "kfreebsd-image";
+ "kfreebsd-headers";
+ # hurd kernels
+ "gnumach-image";
+ # (out-of-tree) modules
+ ".*-modules";
+ ".*-kernel";
+ "linux-backports-modules-.*";
+ # tools
+ "linux-tools";
+ };
+
+ Never-MarkAuto-Sections
+ {
+ "metapackages";
+ "restricted/metapackages";
+ "universe/metapackages";
+ "multiverse/metapackages";
+ "oldlibs";
+ "restricted/oldlibs";
+ "universe/oldlibs";
+ "multiverse/oldlibs";
+ };
+};
--- /dev/null
+// DO NOT EDIT! File autogenerated by /etc/kernel/postinst.d/apt-auto-removal
+APT::NeverAutoRemove
+{
+ "^linux-image-3\.16\.0-4-amd64$";
+ "^linux-headers-3\.16\.0-4-amd64$";
+ "^linux-image-extra-3\.16\.0-4-amd64$";
+ "^linux-signed-image-3\.16\.0-4-amd64$";
+ "^kfreebsd-image-3\.16\.0-4-amd64$";
+ "^kfreebsd-headers-3\.16\.0-4-amd64$";
+ "^gnumach-image-3\.16\.0-4-amd64$";
+ "^.*-modules-3\.16\.0-4-amd64$";
+ "^.*-kernel-3\.16\.0-4-amd64$";
+ "^linux-backports-modules-.*-3\.16\.0-4-amd64$";
+ "^linux-tools-3\.16\.0-4-amd64$";
+};
--- /dev/null
+// Unattended-Upgrade::Origins-Pattern controls which packages are
+// upgraded.
+//
+// Lines below have the format "keyword=value,...". A
+// package will be upgraded only if the values in its metadata match
+// all the supplied keywords in a line. (In other words, omitted
+// keywords are wild cards.) The keywords originate from the Release
+// file, but several aliases are accepted. The accepted keywords are:
+// a,archive,suite (eg, "stable")
+// c,component (eg, "main", "contrib", "non-free")
+// l,label (eg, "Debian", "Debian-Security")
+// o,origin (eg, "Debian", "Unofficial Multimedia Packages")
+// n,codename (eg, "jessie", "jessie-updates")
+// site (eg, "http.debian.net")
+// The available values on the system are printed by the command
+// "apt-cache policy", and can be debugged by running
+// "unattended-upgrades -d" and looking at the log file.
+//
+// Within lines unattended-upgrades allows 2 macros whose values are
+// derived from /etc/debian_version:
+// ${distro_id} Installed origin.
+// ${distro_codename} Installed codename (eg, "jessie")
+Unattended-Upgrade::Origins-Pattern {
+ // Codename based matching:
+ // This will follow the migration of a release through different
+ // archives (e.g. from testing to stable and later oldstable).
+// "o=Debian,n=jessie";
+// "o=Debian,n=jessie-updates";
+// "o=Debian,n=jessie-proposed-updates";
+// "o=Debian,n=jessie,l=Debian-Security";
+
+ // Archive or Suite based matching:
+ // Note that this will silently match a different release after
+ // migration to the specified archive (e.g. testing becomes the
+ // new stable).
+// "o=Debian,a=stable";
+// "o=Debian,a=stable-updates";
+// "o=Debian,a=proposed-updates";
+ "origin=Debian,codename=${distro_codename},label=Debian-Security";
+};
+
+// List of packages to not update (regexp are supported)
+Unattended-Upgrade::Package-Blacklist {
+// "vim";
+// "libc6";
+// "libc6-dev";
+// "libc6-i686";
+};
+
+// This option allows you to control if on an unclean dpkg exit
+// unattended-upgrades will automatically run
+// dpkg --force-confold --configure -a
+// The default is true, to ensure updates keep getting installed
+//Unattended-Upgrade::AutoFixInterruptedDpkg "false";
+
+// Split the upgrade into the smallest possible chunks so that
+// they can be interrupted with SIGUSR1. This makes the upgrade
+// a bit slower but it has the benefit that shutdown while an upgrade
+// is running is possible (with a small delay)
+//Unattended-Upgrade::MinimalSteps "true";
+
+// Install all unattended-upgrades when the machine is shutting down
+// instead of doing it in the background while the machine is running
+// This will (obviously) make shutdown slower
+//Unattended-Upgrade::InstallOnShutdown "true";
+
+// Send email to this address for problems or package upgrades.
+// If empty or unset then no email is sent; make sure that you
+// have a working mail setup on your system. A package that provides
+// 'mailx' must be installed. E.g. "user@example.com"
+//Unattended-Upgrade::Mail "root";
+
+// Set this value to "true" to get emails only on errors. Default
+// is to always send a mail if Unattended-Upgrade::Mail is set
+//Unattended-Upgrade::MailOnlyOnError "true";
+
+// Do automatic removal of new unused dependencies after the upgrade
+// (equivalent to apt-get autoremove)
+//Unattended-Upgrade::Remove-Unused-Dependencies "false";
+
+// Automatically reboot *WITHOUT CONFIRMATION* if
+// the file /var/run/reboot-required is found after the upgrade
+//Unattended-Upgrade::Automatic-Reboot "false";
+
+// If automatic reboot is enabled and needed, reboot at the specific
+// time instead of immediately
+// Default: "now"
+//Unattended-Upgrade::Automatic-Reboot-Time "02:00";
+
+// Use the apt bandwidth limit feature; this example limits the download
+// speed to 70kb/sec
+//Acquire::http::Dl-Limit "70";
--- /dev/null
+// Pre-configure all packages with debconf before they are installed.
+// If you don't like it, comment it out.
+DPkg::Pre-Install-Pkgs {"/usr/sbin/dpkg-preconfigure --apt || true";};
--- /dev/null
+//Written by cloud-init per 'apt_pipelining'
+Acquire::http::Pipeline-Depth "0";
--- /dev/null
+#deb http://www.backports.org/debian/ sarge postfix
+# deb http://people.debian.org/~adconrad sarge subversion
+
+deb ftp://mirror.bytemark.co.uk/debian/ etch main non-free contrib
+deb http://security.debian.org/ etch/updates main contrib non-free # security line
+deb-src http://mirror.bytemark.co.uk/debian etch main contrib non-free
--- /dev/null
+##
+# Sample ceph ceph.conf file.
+##
+# This file defines cluster membership, the various locations
+# that Ceph stores data, and any other runtime options.
+
+# If a 'host' is defined for a daemon, the init.d start/stop script will
+# verify that it matches the hostname (or else ignore it). If it is
+# not defined, it is assumed that the daemon is intended to start on
+# the current host (e.g., in a setup with a startup.conf on each
+# node).
+
+## Metavariables
+# $cluster ; Expands to the Ceph Storage Cluster name. Useful
+# ; when running multiple Ceph Storage Clusters
+# ; on the same hardware.
+# ; Example: /etc/ceph/$cluster.keyring
+# ; (Default: ceph)
+#
+# $type ; Expands to one of mds, osd, or mon, depending on
+#                  ; the type of the instantiated daemon.
+# ; Example: /var/lib/ceph/$type
+#
+# $id ; Expands to the daemon identifier. For osd.0, this
+# ; would be 0; for mds.a, it would be a.
+# ; Example: /var/lib/ceph/$type/$cluster-$id
+#
+# $host            ; Expands to the host name of the instantiated daemon.
+#
+# $name ; Expands to $type.$id.
+# ; Example: /var/run/ceph/$cluster-$name.asok
+
+[global]
+### http://ceph.com/docs/master/rados/configuration/general-config-ref/
+
+ fsid = b4b2e571-fbbf-4ff3-a9f8-ab80f08b7fe6 # use `uuidgen` to generate your own UUID
+ public network = 192.168.0.0/24
+ cluster network = 192.168.0.0/24
+
+ # Each running Ceph daemon has a running process identifier (PID) file.
+ # The PID file is generated upon start-up.
+ # Type: String (optional)
+ # (Default: N/A). The default path is /var/run/$cluster/$name.pid.
+ pid file = /var/run/ceph/$name.pid
+
+ # If set, when the Ceph Storage Cluster starts, Ceph sets the max open fds
+ # at the OS level (i.e., the max # of file descriptors).
+    # It helps prevent Ceph OSD Daemons from running out of file descriptors.
+ # Type: 64-bit Integer (optional)
+ # (Default: 0)
+ max open files = 131072
+
+
+### http://ceph.com/docs/master/rados/operations/authentication
+### http://ceph.com/docs/master/rados/configuration/auth-config-ref/
+
+ # If enabled, the Ceph Storage Cluster daemons (i.e., ceph-mon, ceph-osd,
+ # and ceph-mds) must authenticate with each other.
+ # Type: String (optional); Valid settings are "cephx" or "none".
+ # (Default: cephx)
+ auth cluster required = cephx
+
+ # If enabled, the Ceph Storage Cluster daemons require Ceph Clients to
+ # authenticate with the Ceph Storage Cluster in order to access Ceph
+ # services.
+ # Type: String (optional); Valid settings are "cephx" or "none".
+ # (Default: cephx)
+ auth service required = cephx
+
+ # If enabled, the Ceph Client requires the Ceph Storage Cluster to
+ # authenticate with the Ceph Client.
+ # Type: String (optional); Valid settings are "cephx" or "none".
+ # (Default: cephx)
+ auth client required = cephx
+
+ # If set to true, Ceph requires signatures on all message traffic between
+ # the Ceph Client and the Ceph Storage Cluster, and between daemons
+ # comprising the Ceph Storage Cluster.
+ # Type: Boolean (optional)
+ # (Default: false)
+ cephx require signatures = true
+
+    # the kernel RBD client does not support authentication yet:
+ cephx cluster require signatures = true
+ cephx service require signatures = false
+
+ # The path to the keyring file.
+ # Type: String (optional)
+ # Default: /etc/ceph/$cluster.$name.keyring,/etc/ceph/$cluster.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin
+ keyring = /etc/ceph/$cluster.$name.keyring
+
+
+### http://ceph.com/docs/master/rados/configuration/pool-pg-config-ref/
+
+
+ ## Replication level, number of data copies.
+ # Type: 32-bit Integer
+ # (Default: 3)
+ osd pool default size = 3
+
+ ## Replication level in degraded state, less than 'osd pool default size' value.
+ # Sets the minimum number of written replicas for objects in the
+ # pool in order to acknowledge a write operation to the client. If
+ # minimum is not met, Ceph will not acknowledge the write to the
+ # client. This setting ensures a minimum number of replicas when
+ # operating in degraded mode.
+ # Type: 32-bit Integer
+ # (Default: 0), which means no particular minimum. If 0, minimum is size - (size / 2).
+ osd pool default min size = 2
+
+ ## Ensure you have a realistic number of placement groups. We recommend
+ ## approximately 100 per OSD. E.g., total number of OSDs multiplied by 100
+ ## divided by the number of replicas (i.e., osd pool default size). So for
+ ## 10 OSDs and osd pool default size = 3, we'd recommend approximately
+ ## (100 * 10) / 3 = 333
+
+ # Description: The default number of placement groups for a pool. The
+ # default value is the same as pg_num with mkpool.
+ # Type: 32-bit Integer
+ # (Default: 8)
+ osd pool default pg num = 128
+
+ # Description: The default number of placement groups for placement for a
+ # pool. The default value is the same as pgp_num with mkpool.
+ # PG and PGP should be equal (for now).
+ # Type: 32-bit Integer
+ # (Default: 8)
+ osd pool default pgp num = 128
+
+ # The default CRUSH ruleset to use when creating a pool
+ # Type: 32-bit Integer
+ # (Default: 0)
+ osd pool default crush rule = 0
+
+ # The bucket type to use for chooseleaf in a CRUSH rule.
+ # Uses ordinal rank rather than name.
+ # Type: 32-bit Integer
+ # (Default: 1) Typically a host containing one or more Ceph OSD Daemons.
+ osd crush chooseleaf type = 1
+
+
+### http://ceph.com/docs/master/rados/troubleshooting/log-and-debug/
+
+ # The location of the logging file for your cluster.
+ # Type: String
+ # Required: No
+ # Default: /var/log/ceph/$cluster-$name.log
+ log file = /var/log/ceph/$cluster-$name.log
+
+ # Determines if logging messages should appear in syslog.
+ # Type: Boolean
+ # Required: No
+ # (Default: false)
+ log to syslog = true
+
+
+### http://ceph.com/docs/master/rados/configuration/ms-ref/
+
+ # Enable if you want your daemons to bind to IPv6 address instead of
+ # IPv4 ones. (Not required if you specify a daemon or cluster IP.)
+ # Type: Boolean
+ # (Default: false)
+ ms bind ipv6 = true
+
+##################
+## Monitors
+## You need at least one. You need at least three if you want to
+## tolerate any node failures. Always create an odd number.
+[mon]
+### http://ceph.com/docs/master/rados/configuration/mon-config-ref/
+### http://ceph.com/docs/master/rados/configuration/mon-osd-interaction/
+
+ # The IDs of initial monitors in a cluster during startup.
+ # If specified, Ceph requires an odd number of monitors to form an
+ # initial quorum (e.g., 3).
+ # Type: String
+ # (Default: None)
+ mon initial members = mycephhost
+
+ mon host = cephhost01,cephhost02
+ mon addr = 192.168.0.101,192.168.0.102
+
+ # The monitor's data location
+ # Default: /var/lib/ceph/mon/$cluster-$id
+ mon data = /var/lib/ceph/mon/$name
+
+ # The clock drift in seconds allowed between monitors.
+ # Type: Float
+ # (Default: .050)
+ mon clock drift allowed = .15
+
+ # Exponential backoff for clock drift warnings
+ # Type: Float
+ # (Default: 5)
+ mon clock drift warn backoff = 30 # Tell the monitor to backoff from this warning for 30 seconds
+
+ # The percentage of disk space used before an OSD is considered full.
+ # Type: Float
+ # (Default: .95)
+ mon osd full ratio = .95
+
+ # The percentage of disk space used before an OSD is considered nearfull.
+ # Type: Float
+ # (Default: .85)
+ mon osd nearfull ratio = .85
+
+ # The number of seconds Ceph waits before marking a Ceph OSD
+ # Daemon "down" and "out" if it doesn't respond.
+ # Type: 32-bit Integer
+ # (Default: 300)
+ mon osd down out interval = 300
+
+ # The grace period in seconds before declaring unresponsive Ceph OSD
+ # Daemons "down".
+ # Type: 32-bit Integer
+ # (Default: 900)
+ mon osd report timeout = 300
+
+### http://ceph.com/docs/master/rados/troubleshooting/log-and-debug/
+
+ # logging, for debugging monitor crashes, in order of
+ # their likelihood of being helpful :)
+ debug ms = 1
+ debug mon = 20
+ debug paxos = 20
+ debug auth = 20
+
+
+[mon.alpha]
+ host = alpha
+ mon addr = 192.168.0.10:6789
+
+[mon.beta]
+ host = beta
+ mon addr = 192.168.0.11:6789
+
+[mon.gamma]
+ host = gamma
+ mon addr = 192.168.0.12:6789
+
+
+##################
+## Metadata servers
+# You must deploy at least one metadata server to use CephFS. There is
+# experimental support for running multiple metadata servers. Do not run
+# multiple metadata servers in production.
+[mds]
+### http://ceph.com/docs/master/cephfs/mds-config-ref/
+
+    # where the mds keeps its secret encryption keys
+ keyring = /var/lib/ceph/mds/$name/keyring
+
+ # Determines whether a 'ceph-mds' daemon should poll and
+ # replay the log of an active MDS (hot standby).
+ # Type: Boolean
+ # (Default: false)
+ mds standby replay = true
+
+ # mds logging to debug issues.
+ debug ms = 1
+ debug mds = 20
+ debug journaler = 20
+
+ # The number of inodes to cache.
+ # Type: 32-bit Integer
+ # (Default: 100000)
+ mds cache size = 250000
+
+[mds.alpha]
+ host = alpha
+
+[mds.beta]
+ host = beta
+
+##################
+## osd
+# You need at least one. Two or more if you want data to be replicated.
+# Define as many as you like.
+[osd]
+### http://ceph.com/docs/master/rados/configuration/osd-config-ref/
+
+ # The path to the OSDs data.
+ # You must create the directory when deploying Ceph.
+ # You should mount a drive for OSD data at this mount point.
+ # We do not recommend changing the default.
+ # Type: String
+ # Default: /var/lib/ceph/osd/$cluster-$id
+ osd data = /var/lib/ceph/osd/$name
+
+ ## You can change the number of recovery operations to speed up recovery
+ ## or slow it down if your machines can't handle it
+
+ # The number of active recovery requests per OSD at one time.
+    # More requests will accelerate recovery, but the requests
+    # place an increased load on the cluster.
+ # Type: 32-bit Integer
+ # (Default: 5)
+ osd recovery max active = 3
+
+ # The maximum number of backfills allowed to or from a single OSD.
+ # Type: 64-bit Integer
+ # (Default: 10)
+ osd max backfills = 5
+
+ # The maximum number of simultaneous scrub operations for a Ceph OSD Daemon.
+ # Type: 32-bit Int
+ # (Default: 1)
+ osd max scrubs = 2
+
+ # You may add settings for ceph-deploy so that it will create and mount
+ # the correct type of file system. Remove the comment `#` character for
+    # the following settings and replace the values in parentheses
+ # with appropriate values, or leave the following settings commented
+ # out to accept the default values.
+
+ #osd mkfs type = {fs-type}
+ #osd mkfs options {fs-type} = {mkfs options} # default for xfs is "-f"
+ #osd mount options {fs-type} = {mount options} # default mount option is "rw, noatime"
+ osd mkfs type = btrfs
+ osd mount options btrfs = noatime,nodiratime
+
+ ## Ideally, make this a separate disk or partition. A few
+ ## hundred MB should be enough; more if you have fast or many
+ ## disks. You can use a file under the osd data dir if need be
+ ## (e.g. /data/$name/journal), but it will be slower than a
+ ## separate disk or partition.
+ # The path to the OSD's journal. This may be a path to a file or a block
+ # device (such as a partition of an SSD). If it is a file, you must
+ # create the directory to contain it.
+ # We recommend using a drive separate from the osd data drive.
+ # Type: String
+ # Default: /var/lib/ceph/osd/$cluster-$id/journal
+ osd journal = /var/lib/ceph/osd/$name/journal
+
+ # Check log files for corruption. Can be computationally expensive.
+ # Type: Boolean
+ # (Default: false)
+ osd check for log corruption = true
+
+### http://ceph.com/docs/master/rados/configuration/journal-ref/
+
+ # The size of the journal in megabytes. If this is 0,
+ # and the journal is a block device, the entire block device is used.
+ # Since v0.54, this is ignored if the journal is a block device,
+ # and the entire block device is used.
+ # Type: 32-bit Integer
+ # (Default: 5120)
+ # Recommended: Begin with 1GB. Should be at least twice the product
+ # of the expected speed multiplied by "filestore max sync interval".
+ osd journal size = 2048 ; journal size, in megabytes
+
+ ## If you want to run the journal on a tmpfs, disable DirectIO
+ # Enables direct i/o to the journal.
+ # Requires "journal block align" set to "true".
+ # Type: Boolean
+ # Required: Yes when using aio.
+ # (Default: true)
+ journal dio = false
+
+ # osd logging to debug osd issues, in order of likelihood of being helpful
+ debug ms = 1
+ debug osd = 20
+ debug filestore = 20
+ debug journal = 20
+
+### http://ceph.com/docs/master/rados/configuration/filestore-config-ref/
+
+ # The maximum interval in seconds for synchronizing the filestore.
+ # Type: Double (optional)
+ # (Default: 5)
+ filestore max sync interval = 5
+
+ # Enable snapshots for a btrfs filestore.
+ # Type: Boolean
+ # Required: No. Only used for btrfs.
+ # (Default: true)
+ filestore btrfs snap = false
+
+ # Enables the filestore flusher.
+ # Type: Boolean
+ # Required: No
+ # (Default: false)
+ filestore flusher = true
+
+ # Defines the maximum number of in progress operations the file store
+ # accepts before blocking on queuing new operations.
+ # Type: Integer
+ # Required: No. Minimal impact on performance.
+ # (Default: 500)
+ filestore queue max ops = 500
+
+    ## Filestore and OSD settings can be tweaked to achieve better performance
+
+### http://ceph.com/docs/master/rados/configuration/filestore-config-ref/#misc
+
+    # Min number of files in a subdir before merging into parent. NOTE: A negative value means to disable subdir merging
+ # Type: Integer
+ # Required: No
+ # Default: 10
+ filestore merge threshold = 10
+
+ # filestore_split_multiple * abs(filestore_merge_threshold) * 16 is the maximum number of files in a subdirectory before splitting into child directories.
+ # Type: Integer
+ # Required: No
+ # Default: 2
+ filestore split multiple = 2
+
+ # The number of filesystem operation threads that execute in parallel.
+ # Type: Integer
+ # Required: No
+ # Default: 2
+ filestore op threads = 4
+
+ # The number of threads to service Ceph OSD Daemon operations. Set to 0 to disable it. Increasing the number may increase the request processing rate.
+ # Type: 32-bit Integer
+ # Default: 2
+ osd op threads = 2
+
+ ## CRUSH
+
+ # By default OSDs update their details (location, weight and root) on the CRUSH map during startup
+ # Type: Boolean
+ # Required: No;
+ # (Default: true)
+ osd crush update on start = false
+
+[osd.0]
+ host = delta
+
+[osd.1]
+ host = epsilon
+
+[osd.2]
+ host = zeta
+
+[osd.3]
+ host = eta
+
+
+##################
+## client settings
+[client]
+
+### http://ceph.com/docs/master/rbd/rbd-config-ref/
+
+ # Enable caching for RADOS Block Device (RBD).
+ # Type: Boolean
+ # Required: No
+ # (Default: true)
+ rbd cache = true
+
+ # The RBD cache size in bytes.
+ # Type: 64-bit Integer
+ # Required: No
+ # (Default: 32 MiB)
+ ;rbd cache size = 33554432
+
+ # The dirty limit in bytes at which the cache triggers write-back.
+ # If 0, uses write-through caching.
+ # Type: 64-bit Integer
+ # Required: No
+ # Constraint: Must be less than rbd cache size.
+ # (Default: 24 MiB)
+ rbd cache max dirty = 25165824
+
+ # The dirty target before the cache begins writing data to the data storage.
+ # Does not block writes to the cache.
+ # Type: 64-bit Integer
+ # Required: No
+ # Constraint: Must be less than rbd cache max dirty.
+ # (Default: 16 MiB)
+ rbd cache target dirty = 16777216
+
+ # The number of seconds dirty data is in the cache before writeback starts.
+ # Type: Float
+ # Required: No
+ # (Default: 1.0)
+ rbd cache max dirty age = 1.0
+
+ # Start out in write-through mode, and switch to write-back after the
+ # first flush request is received. Enabling this is a conservative but
+ # safe setting in case VMs running on rbd are too old to send flushes,
+ # like the virtio driver in Linux before 2.6.32.
+ # Type: Boolean
+ # Required: No
+ # (Default: true)
+ rbd cache writethrough until flush = true
+
+ # The Ceph admin socket allows you to query a daemon via a socket interface
+ # From a client perspective this can be a virtual machine using librbd
+ # Type: String
+ # Required: No
+ admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
+
+
+##################
+## radosgw client settings
+[client.radosgw.gateway]
+
+### http://ceph.com/docs/master/radosgw/config-ref/
+
+ # Sets the location of the data files for Ceph Object Gateway.
+ # You must create the directory when deploying Ceph.
+ # We do not recommend changing the default.
+ # Type: String
+ # Default: /var/lib/ceph/radosgw/$cluster-$id
+ rgw data = /var/lib/ceph/radosgw/$name
+
+ # Client's hostname
+ host = ceph-radosgw
+
+    # where the radosgw keeps its secret encryption keys
+ keyring = /etc/ceph/ceph.client.radosgw.keyring
+
+ # FastCgiExternalServer uses this socket.
+ # If you do not specify a socket path, Ceph Object Gateway will not run as an external server.
+ # The path you specify here must be the same as the path specified in the rgw.conf file.
+ # Type: String
+ # Default: None
+ rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
+
+ # The location of the logging file for your radosgw.
+ # Type: String
+ # Required: No
+ # Default: /var/log/ceph/$cluster-$name.log
+ log file = /var/log/ceph/client.radosgw.gateway.log
+
+ # Enable 100-continue if it is operational.
+ # Type: Boolean
+ # Default: true
+ rgw print continue = false
+
+ # The DNS name of the served domain.
+ # Type: String
+ # Default: None
+ rgw dns name = radosgw.ceph.internal
--- /dev/null
+MAILTO=cron@example.com
+42 * * * * lutter /usr/local/bin/backup
+54 16 * * * lutter /usr/sbin/stuff
--- /dev/null
+# This somewhat nonsensical file used to segfault in test-api.c
+if [ 1 ]; then
+# K
+else
+# I
+fi
--- /dev/null
+# Example dput.cf that defines the host that can be used
+# with dput for uploading.
+
+[DEFAULT]
+login = username
+method = ftp
+hash = md5
+allow_unsigned_uploads = 0
+run_lintian = 0
+run_dinstall = 0
+check_version = 0
+scp_compress = 0
+post_upload_command =
+pre_upload_command =
+passive_ftp = 1
+default_host_non-us =
+default_host_main = hebex
+
+[hebex]
+fqdn = condor.infra.s1.p.fti.net
+login = anonymous
+method = ftp
+incoming = /incoming/hebex
+passive_ftp = 0
+
+[dop/desktop]
+fqdn = condor.infra.s1.p.fti.net
+login = anonymous
+method = ftp
+incoming = /incoming/dop/desktop
+passive_ftp = 0
+
+[dop/experimental]
+fqdn = condor.infra.s1.p.fti.net
+login = anonymous
+method = ftp
+incoming = /incoming/dop/experimental
+passive_ftp = 0
+
+[dop/test]
+fqdn = condor.infra.s1.p.fti.net
+login = anonymous
+method = ftp
+incoming = /incoming/dop/test
+passive_ftp = 0
+
--- /dev/null
+/local 207.46.0.0/16(rw,sync)
+/home 207.46.0.0/16(rw,root_squash,sync) 192.168.50.2/32(rw,root_squash,sync)
+/tmp 207.46.0.0/16(rw,root_squash,sync)
+/pub *(ro,insecure,all_squash)
--- /dev/null
+/dev/vg00/lv00 / ext3 defaults 1 1
+LABEL=/boot /boot ext3 defaults 1 2
+devpts /dev/pts devpts gid=5,mode=620 0 0
+tmpfs /dev/shm tmpfs defaults 0 0
+/dev/vg00/home /home ext3 defaults 1 2
+proc /proc proc defaults 0 0
+sysfs /sys sysfs defaults 0 0
+/dev/vg00/local /local ext3 defaults 1 2
+/dev/vg00/images /var/lib/xen/images ext3 defaults 1 2
+/dev/vg00/swap swap swap defaults 0 0
--- /dev/null
+root:x:0:root
+bin:x:1:root,bin,daemon
+daemon:x:2:root,bin,daemon
+sys:x:3:root,bin,adm
+adm:x:4:root,adm,daemon
+tty:x:5:
+disk:x:6:root
+lp:x:7:daemon,lp
+mem:x:8:
+kmem:x:9:
+wheel:x:10:root
+mail:x:12:mail,postfix
+uucp:x:14:uucp
+man:x:15:
+games:x:20:
+gopher:x:30:
+dip:x:40:
+ftp:x:50:
+lock:x:54:
+nobody:x:99:
+users:x:100:
+floppy:x:19:
+vcsa:x:69:
+rpc:x:32:
+rpcuser:x:29:
+nfsnobody:x:499:
\ No newline at end of file
--- /dev/null
+../boot/grub/grub.conf
\ No newline at end of file
--- /dev/null
+root:x::root
+bin:x::root,bin,daemon
+daemon:x::root,bin,daemon
+sys:x::root,bin,adm
+adm:x:root,adm:root,adm,daemon
+tty:x::
+disk:x::root
+lp:x::daemon,lp
+mem:x::
+kmem:x::
+wheel:x::root
+mail:x::mail,postfix
+uucp:x::uucp
+man:x::
+games:x::
+gopher:x::
+dip:x::
+ftp:x::
+lock:x::
+nobody:x::
+users:x::
+floppy:x::
+vcsa:x::
+rpc:x::
+rpcuser:x::
+nfsnobody:x::
--- /dev/null
+# Do not remove the following line, or various programs
+# that require network functionality will fail.
+127.0.0.1 localhost.localdomain localhost galia.watzmann.net galia
+#172.31.122.254 granny.watzmann.net granny puppet
+#172.31.122.1 galia.watzmann.net galia
+172.31.122.14 orange.watzmann.net orange
--- /dev/null
+#
+# This is the Apache server configuration file providing SSL support.
+# It contains the configuration directives to instruct the server how to
+# serve pages over an https connection. For detailed information about these
+# directives see <URL:http://httpd.apache.org/docs/2.2/mod/mod_ssl.html>
+#
+# Do NOT simply read the instructions in here without understanding
+# what they do. They're here only as hints or reminders. If you are unsure
+# consult the online docs. You have been warned.
+#
+
+LoadModule ssl_module modules/mod_ssl.so
+
+#
+# When we also provide SSL we have to listen to the
+# the HTTPS port in addition.
+#
+Listen 443
+
+##
+## SSL Global Context
+##
+## All SSL configuration in this context applies both to
+## the main server and all SSL-enabled virtual hosts.
+##
+
+# Pass Phrase Dialog:
+# Configure the pass phrase gathering process.
+# The filtering dialog program (`builtin' is an internal
+# terminal dialog) has to provide the pass phrase on stdout.
+SSLPassPhraseDialog builtin
+
+# Inter-Process Session Cache:
+# Configure the SSL Session Cache: First the mechanism
+# to use and second the expiring timeout (in seconds).
+SSLSessionCache shmcb:/var/cache/mod_ssl/scache(512000)
+SSLSessionCacheTimeout 300
+
+# Semaphore:
+# Configure the path to the mutual exclusion semaphore the
+# SSL engine uses internally for inter-process synchronization.
+SSLMutex default
+
+# Pseudo Random Number Generator (PRNG):
+# Configure one or more sources to seed the PRNG of the
+# SSL library. The seed data should be of good random quality.
+# WARNING! On some platforms /dev/random blocks if not enough entropy
+# is available. This means you then cannot use the /dev/random device
+# because it would lead to very long connection times (as long as
+# it requires to make more entropy available). But usually those
+# platforms additionally provide a /dev/urandom device which doesn't
+# block. So, if available, use this one instead. Read the mod_ssl User
+# Manual for more details.
+SSLRandomSeed startup file:/dev/urandom 256
+SSLRandomSeed connect builtin
+#SSLRandomSeed startup file:/dev/random 512
+#SSLRandomSeed connect file:/dev/random 512
+#SSLRandomSeed connect file:/dev/urandom 512
+
+#
+# Use "SSLCryptoDevice" to enable any supported hardware
+# accelerators. Use "openssl engine -v" to list supported
+# engine names. NOTE: If you enable an accelerator and the
+# server does not start, consult the error logs and ensure
+# your accelerator is functioning properly.
+#
+SSLCryptoDevice builtin
+#SSLCryptoDevice ubsec
+
+##
+## SSL Virtual Host Context
+##
+
+<VirtualHost _default_:443>
+
+# General setup for the virtual host, inherited from global configuration
+#DocumentRoot "/var/www/html"
+#ServerName www.example.com:443
+
+# Use separate log files for the SSL virtual host; note that LogLevel
+# is not inherited from httpd.conf.
+ErrorLog logs/ssl_error_log
+TransferLog logs/ssl_access_log
+LogLevel warn
+
+# SSL Engine Switch:
+# Enable/Disable SSL for this virtual host.
+SSLEngine on
+
+# SSL Protocol support:
+# List the enabled protocol levels with which clients will be able to
+# connect. Disable SSLv2 access by default:
+SSLProtocol all -SSLv2
+
+# SSL Cipher Suite:
+# List the ciphers that the client is permitted to negotiate.
+# See the mod_ssl documentation for a complete list.
+SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW
+
+# Server Certificate:
+# Point SSLCertificateFile at a PEM encoded certificate. If
+# the certificate is encrypted, then you will be prompted for a
+# pass phrase. Note that a kill -HUP will prompt again. A new
+# certificate can be generated using the genkey(1) command.
+SSLCertificateFile /etc/pki/tls/certs/localhost.crt
+
+# Server Private Key:
+# If the key is not combined with the certificate, use this
+# directive to point at the key file. Keep in mind that if
+# you have both an RSA and a DSA private key you can configure
+# both in parallel (to also allow the use of DSA ciphers, etc.)
+SSLCertificateKeyFile /etc/pki/tls/private/localhost.key
+
+# Server Certificate Chain:
+# Point SSLCertificateChainFile at a file containing the
+# concatenation of PEM encoded CA certificates which form the
+# certificate chain for the server certificate. Alternatively
+# the referenced file can be the same as SSLCertificateFile
+# when the CA certificates are directly appended to the server
+# certificate for convenience.
+#SSLCertificateChainFile /etc/pki/tls/certs/server-chain.crt
+
+# Certificate Authority (CA):
+# Set the CA certificate verification path where CA certificates
+# for client authentication can be found, or alternatively one
+# huge file containing all of them (the file must be PEM encoded)
+#SSLCACertificateFile /etc/pki/tls/certs/ca-bundle.crt
+
+# Client Authentication (Type):
+# Client certificate verification type and depth. Types are
+# none, optional, require and optional_no_ca. Depth is a
+# number which specifies how deeply to verify the certificate
+# issuer chain before deciding the certificate is not valid.
+#SSLVerifyClient require
+#SSLVerifyDepth 10
+
+# Access Control:
+# With SSLRequire you can do per-directory access control based
+# on arbitrarily complex boolean expressions containing server
+# variable checks and other lookup directives. The syntax is a
+# mixture between C and Perl. See the mod_ssl documentation
+# for more details.
+#<Location />
+#SSLRequire ( %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \
+# and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \
+# and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \
+# and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \
+# and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \
+# or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/
+#</Location>
+
+# SSL Engine Options:
+# Set various options for the SSL engine.
+# o FakeBasicAuth:
+# Translate the client X.509 into a Basic Authorisation. This means that
+# the standard Auth/DBMAuth methods can be used for access control. The
+# user name is the `one line' version of the client's X.509 certificate.
+# Note that no password is obtained from the user. Every entry in the user
+# file needs this password: `xxj31ZMTZzkVA'.
+# o ExportCertData:
+# This exports two additional environment variables: SSL_CLIENT_CERT and
+# SSL_SERVER_CERT. These contain the PEM-encoded certificates of the
+# server (always existing) and the client (only existing when client
+# authentication is used). This can be used to import the certificates
+# into CGI scripts.
+# o StdEnvVars:
+# This exports the standard SSL/TLS related `SSL_*' environment variables.
+# By default this exportation is switched off for performance reasons,
+# because the extraction step is an expensive operation and is usually
+# useless for serving static content. So one usually enables the
+# exportation for CGI and SSI requests only.
+# o StrictRequire:
+# This denies access when "SSLRequireSSL" or "SSLRequire" is applied even
+# under a "Satisfy any" situation, i.e. when it applies access is denied
+# and no other module can change it.
+# o OptRenegotiate:
+# This enables optimized SSL connection renegotiation handling when SSL
+# directives are used in per-directory context.
+#SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
+<Files ~ "\.(cgi|shtml|phtml|php3?)$">
+ SSLOptions +StdEnvVars
+</Files>
+<Directory "/var/www/cgi-bin">
+ SSLOptions +StdEnvVars
+</Directory>
+
+# SSL Protocol Adjustments:
+# The safe and default but still SSL/TLS standard compliant shutdown
+# approach is that mod_ssl sends the close notify alert but doesn't wait for
+# the close notify alert from client. When you need a different shutdown
+# approach you can use one of the following variables:
+# o ssl-unclean-shutdown:
+# This forces an unclean shutdown when the connection is closed, i.e. no
+# SSL close notify alert is sent or allowed to be received. This violates
+# the SSL/TLS standard but is needed for some brain-dead browsers. Use
+# this when you receive I/O errors because of the standard approach where
+# mod_ssl sends the close notify alert.
+# o ssl-accurate-shutdown:
+# This forces an accurate shutdown when the connection is closed, i.e. a
+# SSL close notify alert is sent and mod_ssl waits for the close notify
+# alert of the client. This is 100% SSL/TLS standard compliant, but in
+# practice often causes hanging connections with brain-dead browsers. Use
+# this only for browsers where you know that their SSL implementation
+# works correctly.
+# Notice: Most problems of broken clients are also related to the HTTP
+# keep-alive facility, so you usually additionally want to disable
+# keep-alive for those clients, too. Use variable "nokeepalive" for this.
+# Similarly, one has to force some clients to use HTTP/1.0 to work around
+# their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and
+# "force-response-1.0" for this.
+SetEnvIf User-Agent ".*MSIE.*" \
+ nokeepalive ssl-unclean-shutdown \
+ downgrade-1.0 force-response-1.0
+
+# Per-Server Logging:
+# The home of a custom SSL log file. Use this when you want a
+# compact non-error SSL logfile on a virtual host basis.
+CustomLog logs/ssl_request_log \
+ "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
+
+</VirtualHost>
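+
+# Illustration only (subject value is a placeholder, not a recommendation):
+# a self-signed certificate and unencrypted key matching the paths that
+# SSLCertificateFile and SSLCertificateKeyFile above expect can also be
+# created with openssl(1) instead of genkey(1), e.g.:
+#
+#   openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
+#       -subj "/CN=www.example.com" \
+#       -keyout /etc/pki/tls/private/localhost.key \
+#       -out /etc/pki/tls/certs/localhost.crt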
+
--- /dev/null
+#
+# This file loads most of the modules included with the Apache HTTP
+# Server itself.
+#
+
+LoadModule access_compat_module modules/mod_access_compat.so
+LoadModule actions_module modules/mod_actions.so
+LoadModule alias_module modules/mod_alias.so
+LoadModule allowmethods_module modules/mod_allowmethods.so
+LoadModule auth_basic_module modules/mod_auth_basic.so
+LoadModule auth_digest_module modules/mod_auth_digest.so
+LoadModule authn_anon_module modules/mod_authn_anon.so
+LoadModule authn_core_module modules/mod_authn_core.so
+LoadModule authn_dbd_module modules/mod_authn_dbd.so
+LoadModule authn_dbm_module modules/mod_authn_dbm.so
+LoadModule authn_file_module modules/mod_authn_file.so
+LoadModule authn_socache_module modules/mod_authn_socache.so
+LoadModule authz_core_module modules/mod_authz_core.so
+LoadModule authz_dbd_module modules/mod_authz_dbd.so
+LoadModule authz_dbm_module modules/mod_authz_dbm.so
+LoadModule authz_groupfile_module modules/mod_authz_groupfile.so
+LoadModule authz_host_module modules/mod_authz_host.so
+LoadModule authz_owner_module modules/mod_authz_owner.so
+LoadModule authz_user_module modules/mod_authz_user.so
+LoadModule autoindex_module modules/mod_autoindex.so
+LoadModule cache_module modules/mod_cache.so
+LoadModule cache_disk_module modules/mod_cache_disk.so
+LoadModule cache_socache_module modules/mod_cache_socache.so
+LoadModule data_module modules/mod_data.so
+LoadModule dbd_module modules/mod_dbd.so
+LoadModule deflate_module modules/mod_deflate.so
+LoadModule dir_module modules/mod_dir.so
+LoadModule dumpio_module modules/mod_dumpio.so
+LoadModule echo_module modules/mod_echo.so
+LoadModule env_module modules/mod_env.so
+LoadModule expires_module modules/mod_expires.so
+LoadModule ext_filter_module modules/mod_ext_filter.so
+LoadModule filter_module modules/mod_filter.so
+LoadModule headers_module modules/mod_headers.so
+LoadModule include_module modules/mod_include.so
+LoadModule info_module modules/mod_info.so
+LoadModule log_config_module modules/mod_log_config.so
+LoadModule logio_module modules/mod_logio.so
+LoadModule macro_module modules/mod_macro.so
+LoadModule mime_magic_module modules/mod_mime_magic.so
+LoadModule mime_module modules/mod_mime.so
+LoadModule negotiation_module modules/mod_negotiation.so
+LoadModule remoteip_module modules/mod_remoteip.so
+LoadModule reqtimeout_module modules/mod_reqtimeout.so
+LoadModule request_module modules/mod_request.so
+LoadModule rewrite_module modules/mod_rewrite.so
+LoadModule setenvif_module modules/mod_setenvif.so
+LoadModule slotmem_plain_module modules/mod_slotmem_plain.so
+LoadModule slotmem_shm_module modules/mod_slotmem_shm.so
+LoadModule socache_dbm_module modules/mod_socache_dbm.so
+LoadModule socache_memcache_module modules/mod_socache_memcache.so
+LoadModule socache_shmcb_module modules/mod_socache_shmcb.so
+LoadModule status_module modules/mod_status.so
+LoadModule substitute_module modules/mod_substitute.so
+LoadModule suexec_module modules/mod_suexec.so
+LoadModule unique_id_module modules/mod_unique_id.so
+LoadModule unixd_module modules/mod_unixd.so
+LoadModule userdir_module modules/mod_userdir.so
+LoadModule version_module modules/mod_version.so
+LoadModule vhost_alias_module modules/mod_vhost_alias.so
+LoadModule watchdog_module modules/mod_watchdog.so
+
--- /dev/null
+LoadModule dav_module modules/mod_dav.so
+LoadModule dav_fs_module modules/mod_dav_fs.so
+LoadModule dav_lock_module modules/mod_dav_lock.so
--- /dev/null
+LoadModule lua_module modules/mod_lua.so
--- /dev/null
+# Select the MPM module which should be used by uncommenting exactly
+# one of the following LoadModule lines. See the httpd.service(8) man
+# page for more information on changing the MPM.
+
+# prefork MPM: Implements a non-threaded, pre-forking web server
+# See: http://httpd.apache.org/docs/2.4/mod/prefork.html
+#
+# NOTE: If enabling prefork, the httpd_graceful_shutdown SELinux
+# boolean should be enabled, to allow graceful stop/shutdown.
+#
+#LoadModule mpm_prefork_module modules/mod_mpm_prefork.so
+
+# worker MPM: Multi-Processing Module implementing a hybrid
+# multi-threaded multi-process web server
+# See: http://httpd.apache.org/docs/2.4/mod/worker.html
+#
+#LoadModule mpm_worker_module modules/mod_mpm_worker.so
+
+# event MPM: A variant of the worker MPM with the goal of consuming
+# threads only for connections with active processing
+# See: http://httpd.apache.org/docs/2.4/mod/event.html
+#
+LoadModule mpm_event_module modules/mod_mpm_event.so
--- /dev/null
+#
+# This file lists modules included with the Apache HTTP Server
+# which are not enabled by default.
+#
+
+#LoadModule asis_module modules/mod_asis.so
+#LoadModule buffer_module modules/mod_buffer.so
+#LoadModule heartbeat_module modules/mod_heartbeat.so
+#LoadModule heartmonitor_module modules/mod_heartmonitor.so
+#LoadModule usertrack_module modules/mod_usertrack.so
+#LoadModule dialup_module modules/mod_dialup.so
+#LoadModule charset_lite_module modules/mod_charset_lite.so
+#LoadModule log_debug_module modules/mod_log_debug.so
+#LoadModule log_forensic_module modules/mod_log_forensic.so
+#LoadModule ratelimit_module modules/mod_ratelimit.so
+#LoadModule reflector_module modules/mod_reflector.so
+#LoadModule sed_module modules/mod_sed.so
+#LoadModule speling_module modules/mod_speling.so
--- /dev/null
+# This file configures all the proxy modules:
+LoadModule proxy_module modules/mod_proxy.so
+LoadModule lbmethod_bybusyness_module modules/mod_lbmethod_bybusyness.so
+LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so
+LoadModule lbmethod_bytraffic_module modules/mod_lbmethod_bytraffic.so
+LoadModule lbmethod_heartbeat_module modules/mod_lbmethod_heartbeat.so
+LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
+LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
+LoadModule proxy_connect_module modules/mod_proxy_connect.so
+LoadModule proxy_express_module modules/mod_proxy_express.so
+LoadModule proxy_fcgi_module modules/mod_proxy_fcgi.so
+LoadModule proxy_fdpass_module modules/mod_proxy_fdpass.so
+LoadModule proxy_ftp_module modules/mod_proxy_ftp.so
+LoadModule proxy_http_module modules/mod_proxy_http.so
+LoadModule proxy_hcheck_module modules/mod_proxy_hcheck.so
+LoadModule proxy_scgi_module modules/mod_proxy_scgi.so
+LoadModule proxy_wstunnel_module modules/mod_proxy_wstunnel.so
--- /dev/null
+# This file configures systemd module:
+LoadModule systemd_module modules/mod_systemd.so
--- /dev/null
+# This configuration file loads a CGI module appropriate to the MPM
+# which has been configured in 00-mpm.conf. mod_cgid should be used
+# with a threaded MPM; mod_cgi with the prefork MPM.
+
+<IfModule mpm_worker_module>
+ LoadModule cgid_module modules/mod_cgid.so
+</IfModule>
+<IfModule mpm_event_module>
+ LoadModule cgid_module modules/mod_cgid.so
+</IfModule>
+<IfModule mpm_prefork_module>
+ LoadModule cgi_module modules/mod_cgi.so
+</IfModule>
+
--- /dev/null
+LoadModule http2_module modules/mod_http2.so
--- /dev/null
+LoadModule dnssd_module modules/mod_dnssd.so
--- /dev/null
+LoadModule proxy_http2_module modules/mod_proxy_http2.so
--- /dev/null
+
+This directory holds configuration files for the Apache HTTP Server;
+any files in this directory which have the ".conf" extension will be
+processed as httpd configuration files. This directory contains
+configuration fragments necessary only to load modules.
+Administrators should use the directory "/etc/httpd/conf.d" to modify
+the configuration of httpd, or any modules.
+
+Files are processed in alphanumeric order.
--- /dev/null
+#
+# inittab This file describes how the INIT process should set up
+# the system in a certain run-level.
+#
+# Author: Miquel van Smoorenburg, <miquels@drinkel.nl.mugnet.org>
+# Modified for RHS Linux by Marc Ewing and Donnie Barnes
+#
+
+# Default runlevel. The runlevels used by RHS are:
+# 0 - halt (Do NOT set initdefault to this)
+# 1 - Single user mode
+# 2 - Multiuser, without NFS (The same as 3, if you do not have networking)
+# 3 - Full multiuser mode
+# 4 - unused
+# 5 - X11
+# 6 - reboot (Do NOT set initdefault to this)
+#
+id:5:initdefault:
+
+# System initialization.
+si::sysinit:/etc/rc.d/rc.sysinit
+
+l0:0:wait:/etc/rc.d/rc 0
+l1:1:wait:/etc/rc.d/rc 1
+l2:2:wait:/etc/rc.d/rc 2
+l3:3:wait:/etc/rc.d/rc 3
+l4:4:wait:/etc/rc.d/rc 4
+l5:5:wait:/etc/rc.d/rc 5
+l6:6:wait:/etc/rc.d/rc 6
+
+# Trap CTRL-ALT-DELETE
+ca::ctrlaltdel:/sbin/shutdown -t3 -r now
+
+# When our UPS tells us power has failed, assume we have a few minutes
+# of power left. Schedule a shutdown for 2 minutes from now.
+# This does, of course, assume you have powerd installed and your
+# UPS connected and working correctly.
+pf::powerfail:/sbin/shutdown -f -h +2 "Power Failure; System Shutting Down"
+
+# If power was restored before the shutdown kicked in, cancel it.
+pr:12345:powerokwait:/sbin/shutdown -c "Power Restored; Shutdown Cancelled"
+
+
+# Run gettys in standard runlevels
+1:2345:respawn:/sbin/mingetty tty1
+2:2345:respawn:/sbin/mingetty tty2
+3:2345:respawn:/sbin/mingetty tty3
+4:2345:respawn:/sbin/mingetty tty4
+5:2345:respawn:/sbin/mingetty tty5
+6:2345:respawn:/sbin/mingetty tty6
+
+# Run xdm in runlevel 5
+x:5:respawn:/etc/X11/prefdm -nodaemon
--- /dev/null
+# This file contains a series of commands to perform (in order) in the kdump
+# kernel after a kernel crash has happened in the first (production) kernel.
+#
+# Directives in this file are only applicable to the kdump initramfs, and have
+# no effect once the root filesystem is mounted and the normal init scripts are
+# processed.
+#
+# Currently, only one dump target and path can be specified. If the dumping to
+# the configured target fails, the failure action which can be configured via
+# the "failure_action" directive will be performed.
+#
+# Supported options:
+#
+# auto_reset_crashkernel <yes|no>
+# - whether to reset the kernel crashkernel option to the new default
+# value when kexec-tools updates the default crashkernel value and
+# existing kernels are still using the old default crashkernel value.
+# The default value is yes.
+#
+# raw <partition>
+# - Will dd /proc/vmcore into <partition>.
+# Use persistent device names for partition devices,
+# such as /dev/vg/<devname>.
+#
+# nfs <nfs mount>
+# - Will mount nfs to <mnt>, and copy /proc/vmcore to
+# <mnt>/<path>/%HOST-%DATE/, supports DNS.
+#
+# ssh <user@server>
+# - Will save /proc/vmcore to <user@server>:<path>/%HOST-%DATE/,
+# supports DNS.
+# NOTE: make sure the user has write permissions on the server.
+#
+# sshkey <path>
+# - Will use the sshkey to do ssh dump.
+# Specify the path of the ssh key to use when dumping
+# via ssh. The default value is /root/.ssh/kdump_id_rsa.
+#
+# <fs type> <partition>
+# - Will mount -t <fs type> <partition> <mnt>, and copy
+# /proc/vmcore to <mnt>/<path>/%HOST_IP-%DATE/.
+# NOTE: <partition> can be a device node, label or uuid.
+# It's recommended to use persistent device names
+# such as /dev/vg/<devname>.
+# Otherwise it's suggested to use label or uuid.
+#
+# path <path>
+# - "path" represents the file system path in which vmcore
+# will be saved. If a dump target is specified in
+# kdump.conf, then "path" is relative to the specified
+# dump target.
+#
+# Interpretation of "path" changes a bit if the user didn't
+# specify any dump target explicitly in kdump.conf. In this
+# case, "path" represents the absolute path from root. The
+# dump target and adjusted path are arrived at automatically
+# depending on what's mounted in the current system.
+#
+# Ignored for raw device dumps. If unset, will use the default
+# "/var/crash".
+#
+# core_collector <command> <options>
+# - This allows you to specify the command to copy
+# the vmcore. The default is makedumpfile, which on
+# some architectures can drastically reduce vmcore size.
+# See /sbin/makedumpfile --help for a list of options.
+# Note that the -i and -g options are not needed here,
+# as the initrd will automatically be populated with a
+# config file appropriate for the running kernel.
+# The default core_collector for raw/ssh dump is:
+# "makedumpfile -F -l --message-level 7 -d 31".
+# The default core_collector for other targets is:
+# "makedumpfile -l --message-level 7 -d 31".
+#
+# "makedumpfile -F" will create a flattened vmcore.
+# You need to use "makedumpfile -R" to rearrange the dump data to
+# a normal dumpfile readable with analysis tools. For example:
+# "makedumpfile -R vmcore < vmcore.flat".
+#
+# For core_collector format details, you can refer to
+# kexec-kdump-howto.txt or kdump.conf manpage.
+#
+# kdump_post <binary | script>
+# - This directive allows you to run an executable binary
+# or script after the vmcore dump process terminates.
+# The exit status of the current dump process is fed to
+# the executable binary or script as its first argument.
+# All files under /etc/kdump/post.d are collectively sorted
+# and executed in lexical order, before the binary or script
+# specified by the kdump_post parameter is executed.
+#
+# kdump_pre <binary | script>
+# - Works like the "kdump_post" directive, but instead of running
+# after the dump process, runs immediately before it.
+# Exit status of this binary is interpreted as follows:
+# 0 - continue with dump process as usual
+# non 0 - run the final action (reboot/poweroff/halt)
+# All files under /etc/kdump/pre.d are collectively sorted and
+# executed in lexical order, after the binary or script specified
+# by the kdump_pre parameter is executed.
+# Even if a binary or script in the /etc/kdump/pre.d directory
+# returns a non-zero exit status, processing continues.
+#
+# extra_bins <binaries | shell scripts>
+# - This directive allows you to specify additional binaries or
+# shell scripts to be included in the kdump initrd.
+# Generally they are useful in conjunction with a kdump_post
+# or kdump_pre binary or script which depends on these extra_bins.
+#
+# extra_modules <module(s)>
+# - This directive allows you to specify extra kernel modules
+# that you want to be loaded in the kdump initrd.
+# Multiple modules can be listed, separated by spaces, and any
+# dependent modules will automatically be included.
+#
+# failure_action <reboot | halt | poweroff | shell | dump_to_rootfs>
+# - Action to perform in case dumping fails.
+# reboot: Reboot the system.
+# halt: Halt the system.
+# poweroff: Power down the system.
+# shell: Drop to a bash shell.
+# Exiting the shell reboots the system by default,
+# or performs the "final_action".
+# dump_to_rootfs: Dump vmcore to rootfs from initramfs context and
+# reboot by default or perform "final_action".
+# Useful when a non-root dump target is specified.
+# The default option is "reboot".
+#
+# default <reboot | halt | poweroff | shell | dump_to_rootfs>
+# - Same as the "failure_action" directive above, but this directive
+# is obsolete and will be removed in the future.
+#
+# final_action <reboot | halt | poweroff>
+# - Action to perform in case dumping succeeds. Also performed
+# when "shell" or "dump_to_rootfs" failure action finishes.
+# Each action is the same as in the "failure_action" directive above.
+# The default is "reboot".
+#
+# force_rebuild <0 | 1>
+# - By default, kdump initrd will only be rebuilt when necessary.
+# Specify 1 to force rebuilding of the kdump initrd every time the
+# service starts.
+#
+# force_no_rebuild <0 | 1>
+# - By default, kdump initrd will be rebuilt when necessary.
+# Specify 1 to bypass rebuilding of kdump initrd.
+#
+# force_no_rebuild and force_rebuild options are mutually
+# exclusive and they should not be set to 1 simultaneously.
+#
+# override_resettable <0 | 1>
+# - Usually an unresettable block device can't be a dump target.
+# Specify 1 if you want to dump even though the block
+# device is unresettable.
+# By default it is 0, so kdump will not attempt a dump that is
+# destined to fail.
+#
+# dracut_args <arg(s)>
+# - Pass extra dracut options when rebuilding kdump initrd.
+#
+# fence_kdump_args <arg(s)>
+# - Command line arguments for fence_kdump_send (it can contain
+# all valid arguments except hosts to send notification to).
+#
+# fence_kdump_nodes <node(s)>
+# - List of cluster node(s) except localhost, separated by spaces,
+# to send fence_kdump notifications to.
+# (this option is mandatory to enable fence_kdump).
+#
+
+#raw /dev/vg/lv_kdump
+#ext4 /dev/vg/lv_kdump
+#ext4 LABEL=/boot
+#ext4 UUID=03138356-5e61-4ab3-b58e-27507ac41937
+#nfs my.server.com:/export/tmp
+#nfs [2001:db8::1:2:3:4]:/export/tmp
+#ssh user@my.server.com
+#ssh user@2001:db8::1:2:3:4
+#sshkey /root/.ssh/kdump_id_rsa
+auto_reset_crashkernel yes
+path /var/crash
+core_collector makedumpfile -l --message-level 7 -d 31
+#core_collector scp
+#kdump_post /var/crash/scripts/kdump-post.sh
+#kdump_pre /var/crash/scripts/kdump-pre.sh
+#extra_bins /usr/bin/lftp
+#extra_modules gfs2
+#failure_action shell
+#force_rebuild 1
+#force_no_rebuild 1
+#dracut_args --omit-drivers "cfg80211 snd" --add-drivers "ext2 ext3"
+#fence_kdump_args -p 7410 -f auto -c 0 -i 10
+#fence_kdump_nodes node1 node2
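+
+# Illustration (file names are examples only): the default raw/ssh
+# core_collector "makedumpfile -F ..." writes a flattened vmcore, which
+# must be rearranged into a normal dumpfile before analysis, e.g.:
+#   makedumpfile -R vmcore < vmcore.flat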
--- /dev/null
+[logging]
+ default = FILE:/var/log/krb5libs.log
+ kdc = FILE:/var/log/krb5kdc.log
+ admin_server = FILE:/var/log/kadmind.log
+
+[libdefaults]
+ default_realm = EXAMPLE.COM
+ dns_lookup_realm = false
+ dns_lookup_kdc = false
+ ticket_lifetime = 24h
+ forwardable = yes
+
+[realms]
+ EXAMPLE.COM = {
+ kdc = kerberos.example.com:88
+ admin_server = kerberos.example.com:749
+ default_domain = example.com
+ }
+
+[domain_realm]
+ .example.com = EXAMPLE.COM
+ example.com = EXAMPLE.COM
+
+[appdefaults]
+ pam = {
+ debug = false
+ ticket_lifetime = 36000
+ renew_lifetime = 36000
+ forwardable = true
+ krb4_convert = false
+ }
--- /dev/null
+/var/log/acpid {
+ missingok
+ notifempty
+ size=64k
+ postrotate
+ /etc/init.d/acpid condrestart >/dev/null || :
+ endscript
+ }
--- /dev/null
+/var/log/rpmpkgs {
+ weekly
+ notifempty
+ missingok
+ create 0640 root root
+}
--- /dev/null
+### This file is automatically generated by update-modules
+#
+# Please do not edit this file directly. If you want to change or add
+# anything please take a look at the files in /etc/modutils and read
+# the manpage for update-modules.
+#
+### update-modules: start processing /etc/modutils/0keep
+# DO NOT MODIFY THIS FILE!
+# This file is not marked as conffile to make sure if you upgrade modutils
+# it will be restored in case some modifications have been made.
+#
+# The keep command is necessary to prevent insmod and friends from ignoring
+# the builtin defaults if a path-statement is encountered. Until all other
+# packages use the new `add path'-statement this keep-statement is essential
+# to keep your system working.
+keep
+
+### update-modules: end processing /etc/modutils/0keep
+
+### update-modules: start processing /etc/modutils/1devfsd
+# /etc/modules.devfs
+# Richard Gooch <rgooch@atnf.csiro.au> 24-MAR-2002
+#
+# THIS IS AN AUTOMATICALLY GENERATED FILE. DO NOT EDIT!!!
+# THIS FILE WILL BE OVERWRITTEN EACH TIME YOU INSTALL DEVFSD!!!
+# Modify /etc/modules.conf instead.
+# This file comes with devfsd-vDEVFSD-VERSION which is available from:
+# http://www.atnf.csiro.au/~rgooch/linux/
+# or directly from:
+# ftp://ftp.atnf.csiro.au/pub/people/rgooch/linux/daemons/devfsd-vDEVFSD-VERSION.tar.gz
+
+###############################################################################
+# Sample configurations that you may want to place in /etc/modules.conf
+#
+#alias sound-slot-0 sb
+#alias /dev/v4l bttv
+#alias /dev/misc/watchdog pcwd
+#alias gen-md raid0
+#alias /dev/joysticks joystick
+#probeall scsi_hostadapter sym53c8xx
+
+###############################################################################
+# Generic section: do not change or copy
+#
+# All HDDs
+probeall /dev/discs scsi_hostadapter sd_mod ide-probe-mod ide-disk ide-floppy DAC960
+alias /dev/discs/* /dev/discs
+
+# All CD-ROMs
+probeall /dev/cdroms scsi_hostadapter sr_mod ide-probe-mod ide-cd cdrom
+alias /dev/cdroms/* /dev/cdroms
+alias /dev/cdrom /dev/cdroms
+
+# All tapes
+probeall /dev/tapes scsi_hostadapter st ide-probe-mod ide-tape
+alias /dev/tapes/* /dev/tapes
+
+# All SCSI devices
+probeall /dev/scsi scsi_hostadapter sd_mod sr_mod st sg
+
+# All IDE devices
+alias /dev/hd* /dev/ide
+alias /dev/ide/host*/bus*/target*/lun*/* /dev/ide
+probeall /dev/ide ide-probe-mod ide-disk ide-cd ide-tape ide-floppy
+
+# IDE CD-ROMs
+alias /dev/ide/*/cd ide-cd
+
+# SCSI HDDs
+probeall /dev/sd scsi_hostadapter sd_mod
+alias /dev/sd* /dev/sd
+
+# SCSI CD-ROMs
+probeall /dev/sr scsi_hostadapter sr_mod
+alias /dev/sr* /dev/sr
+alias /dev/scsi/*/cd sr_mod
+
+# SCSI tapes
+probeall /dev/st scsi_hostadapter st
+alias /dev/st* /dev/st
+alias /dev/nst* /dev/st
+
+# SCSI generic
+probeall /dev/sg scsi_hostadapter sg
+alias /dev/sg* /dev/sg
+alias /dev/scsi/*/generic /dev/sg
+alias /dev/pg /dev/sg
+alias /dev/pg* /dev/sg
+
+# Floppies
+alias /dev/floppy floppy
+alias /dev/fd* floppy
+
+# RAMDISCs
+alias /dev/rd rd
+alias /dev/ram* rd
+
+# Loop devices
+alias /dev/loop* loop
+
+# Meta devices
+alias /dev/md* gen-md
+
+# Parallel port printers
+alias /dev/printers* lp
+alias /dev/lp* /dev/printers
+
+# Soundcard
+alias /dev/sound sound-slot-0
+alias /dev/audio /dev/sound
+alias /dev/mixer /dev/sound
+alias /dev/dsp /dev/sound
+alias /dev/dspW /dev/sound
+alias /dev/midi /dev/sound
+
+# Joysticks
+alias /dev/js* /dev/joysticks
+
+# Serial ports
+alias /dev/tts* serial
+alias /dev/ttyS* /dev/tts
+alias /dev/cua* /dev/tts
+
+# Input devices
+alias /dev/input/mouse* mousedev
+
+# Miscellaneous devices
+alias /dev/misc/atibm atixlmouse
+alias /dev/misc/inportbm msbusmouse
+alias /dev/misc/logibm busmouse
+alias /dev/misc/rtc rtc
+alias /dev/misc/agpgart agpgart
+alias /dev/rtc /dev/misc/rtc
+
+# PPP devices
+alias /dev/ppp* ppp_generic
+
+# Video capture devices
+alias /dev/video* /dev/v4l
+alias /dev/vbi* /dev/v4l
+
+# agpgart
+alias /dev/agpgart agpgart
+alias /dev/dri* agpgart
+
+# Irda devices
+alias /dev/ircomm ircomm-tty
+alias /dev/ircomm* /dev/ircomm
+
+# Raw I/O devices
+alias /dev/rawctl /dev/raw
+
+
+# Pull in the configuration file. Do this last because modprobe(8) processes in
+# per^H^H^Hreverse order and the sysadmin may want to over-ride what is in the
+# generic file
+#include /etc/modules.conf
+
+### update-modules: end processing /etc/modutils/1devfsd
+
+### update-modules: start processing /etc/modutils/actions
+# Special actions that are needed for some modules
+
+# The BTTV module does not load the tuner module automatically,
+# so do that in here
+post-install bttv insmod tuner
+post-remove bttv rmmod tuner
+
+
+### update-modules: end processing /etc/modutils/actions
+
+### update-modules: start processing /etc/modutils/aliases
+# Aliases to tell insmod/modprobe which modules to use
+
+# Uncomment the network protocols you don't want loaded:
+# alias net-pf-1 off # Unix
+# alias net-pf-2 off # IPv4
+# alias net-pf-3 off # Amateur Radio AX.25
+# alias net-pf-4 off # IPX
+# alias net-pf-5 off # DDP / appletalk
+# alias net-pf-6 off # Amateur Radio NET/ROM
+# alias net-pf-9 off # X.25
+# alias net-pf-10 off # IPv6
+# alias net-pf-11 off # ROSE / Amateur Radio X.25 PLP
+# alias net-pf-19 off # Acorn Econet
+
+alias char-major-10-175 agpgart
+alias char-major-10-200 tun
+alias char-major-81 bttv
+alias char-major-108 ppp_generic
+alias /dev/ppp ppp_generic
+alias tty-ldisc-3 ppp_async
+alias tty-ldisc-14 ppp_synctty
+alias ppp-compress-21 bsd_comp
+alias ppp-compress-24 ppp_deflate
+alias ppp-compress-26 ppp_deflate
+
+# Crypto modules (see http://www.kerneli.org/)
+alias loop-xfer-gen-0 loop_gen
+alias loop-xfer-3 loop_fish2
+alias loop-xfer-gen-10 loop_gen
+alias cipher-2 des
+alias cipher-3 fish2
+alias cipher-4 blowfish
+alias cipher-6 idea
+alias cipher-7 serp6f
+alias cipher-8 mars6
+alias cipher-11 rc62
+alias cipher-15 dfc2
+alias cipher-16 rijndael
+alias cipher-17 rc5
+
+alias char-major-195 NVdriver
+
+### update-modules: end processing /etc/modutils/aliases
+
+### update-modules: start processing /etc/modutils/alsa-path
+# Debian ALSA modules path
+# Do not edit this unless you understand what you're doing.
+path=/lib/modules/`uname -r`/alsa
+
+### update-modules: end processing /etc/modutils/alsa-path
+
+### update-modules: start processing /etc/modutils/apm
+alias char-major-10-134 apm
+alias /dev/apm_bios /dev/misc/apm_bios
+alias /dev/misc/apm_bios apm
+
+### update-modules: end processing /etc/modutils/apm
+
+### update-modules: start processing /etc/modutils/cdrw
+options ide-cd ignore=hdc # tell the ide-cd module to ignore hdc
+alias scd0 sr_mod # load sr_mod upon access of scd0
+#pre-install ide-scsi modprobe imm # uncomment for some ZIP drives only
+pre-install sg modprobe ide-scsi # load ide-scsi before sg
+pre-install sr_mod modprobe ide-scsi # load ide-scsi before sr_mod
+pre-install ide-scsi modprobe ide-cd # load ide-cd before ide-scsi
+
+### update-modules: end processing /etc/modutils/cdrw
+
+### update-modules: start processing /etc/modutils/irda
+alias tty-ldisc-11 irtty
+alias char-major-161 ircomm-tty
+alias char-major-60 ircomm_tty
+
+# for dongle
+alias irda-dongle-0 tekram
+alias irda-dongle-1 esi
+alias irda-dongle-2 actisys
+alias irda-dongle-3 actisys
+alias irda-dongle-4 girbil
+alias irda-dongle-5 litelink
+alias irda-dongle-6 airport
+alias irda-dongle-7 old_belkin
+
+# for FIR device
+alias irda0 smc-ircc
+#dongle_id=0x09
+pre-install smc-ircc /usr/local/sbin/tosh5100-smcinit
+
+### update-modules: end processing /etc/modutils/irda
+
+### update-modules: start processing /etc/modutils/paths
+# This file contains a list of paths that modprobe should scan,
+# besides the ones that are compiled into the modutils tools
+# themselves.
+
+
+### update-modules: end processing /etc/modutils/paths
+
+### update-modules: start processing /etc/modutils/pcmcia
+pre-install ide-cs /etc/init.d/irda stop
+post-remove ide-cs /etc/init.d/irda start
+
+
+
+### update-modules: end processing /etc/modutils/pcmcia
+
+### update-modules: start processing /etc/modutils/ppp
+alias /dev/ppp ppp_generic
+alias char-major-108 ppp_generic
+alias tty-ldisc-3 ppp_async
+alias tty-ldisc-14 ppp_synctty
+alias ppp-compress-21 bsd_comp
+alias ppp-compress-24 ppp_deflate
+alias ppp-compress-26 ppp_deflate
+
+### update-modules: end processing /etc/modutils/ppp
+
+### update-modules: start processing /etc/modutils/setserial
+#
+# This is what I wanted to do, but logger is in /usr/bin, which isn't loaded
+# when the module is first loaded into the kernel at boot time!
+#
+#post-install serial /etc/init.d/setserial start | logger -p daemon.info -t "setserial-module reload"
+#pre-remove serial /etc/init.d/setserial stop | logger -p daemon.info -t "setserial-module unload"
+#
+alias /dev/tts serial
+alias /dev/tts/0 serial
+alias /dev/tts/1 serial
+alias /dev/tts/2 serial
+alias /dev/tts/3 serial
+post-install serial /etc/init.d/setserial modload > /dev/null 2> /dev/null
+pre-remove serial /etc/init.d/setserial modsave > /dev/null 2> /dev/null
+
+### update-modules: end processing /etc/modutils/setserial
+
+### update-modules: start processing /etc/modutils/sound
+# ALSA portion
+alias char-major-116 snd
+# OSS/Free portion
+alias char-major-14 soundcore
+alias snd-card-0 snd-intel8x0
+alias sound-slot-0 snd-card-0
+# OSS/Free portion - card #1
+alias sound-service-0-0 snd-mixer-oss
+alias sound-service-0-1 snd-seq-oss
+alias sound-service-0-3 snd-pcm-oss
+alias sound-service-0-8 snd-seq-oss
+alias sound-service-0-12 snd-pcm-oss
+alias sound-service-1-0 off
+alias sound-slot-1 off
+#gentoo suggestion
+alias /dev/dsp snd-pcm-oss
+alias /dev/mixer snd-mixer-oss
+alias /dev/midi snd-seq-oss
+
+
+### update-modules: end processing /etc/modutils/sound
+
+### update-modules: start processing /etc/modutils/toshutils
+alias char-major-10-181 toshiba
+options toshiba tosh_fn=0x62
+### update-modules: end processing /etc/modutils/toshutils
+
+### update-modules: start processing /etc/modutils/usb
+options usb-uhci debug 3
+post-install belkin_sa /usr/local/sbin/belkin-usb-serial
+
+
+### update-modules: end processing /etc/modutils/usb
+
+### update-modules: start processing /etc/modutils/arch/i386
+#alias parport_lowlevel parport_pc
+alias char-major-10-144 nvram
+alias binfmt-0064 binfmt_aout
+alias char-major-10-135 rtc
+
+alias parport_lowlevel off
+alias char-major-6 off
+
+### update-modules: end processing /etc/modutils/arch/i386
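+
+# NB (editorial note, illustrative): this file is assembled from the fragments
+# under /etc/modutils; rather than editing it directly, edit the relevant
+# fragment and regenerate with, e.g.: update-modules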
+
--- /dev/null
+# This is a basic configuration file with some examples, for device mapper
+# multipath.
+# For a complete list of the default configuration values, see
+# /usr/share/doc/device-mapper-multipath-0.4.8/multipath.conf.defaults
+# For a list of configuration options with descriptions, see
+# /usr/share/doc/device-mapper-multipath-0.4.8/multipath.conf.annotated
+
+
+# Blacklist all devices by default. Remove this to enable multipathing
+# on the default devices.
+blacklist {
+ devnode "*"
+}
+
+# By default, devices with vendor = "IBM" and product = "S/390.*" are
+# blacklisted. To enable multipathing on these devices, uncomment the
+# following lines.
+blacklist_exceptions {
+ device {
+ vendor "IBM"
+ product "S/390.*"
+ }
+}
+
+## Use user friendly names, instead of using WWIDs as names.
+defaults {
+ user_friendly_names yes
+}
+#
+# Here is an example of how to configure some standard options.
+#
+
+defaults {
+ udev_dir /dev
+ polling_interval 10
+ selector "round-robin 0"
+ path_grouping_policy multibus
+ getuid_callout "/sbin/scsi_id --whitelisted /dev/%n"
+ prio alua
+ path_checker readsector0
+ rr_min_io 100
+ max_fds 8192
+ rr_weight priorities
+ failback immediate
+ no_path_retry fail
+ user_friendly_names yes
+}
+#
+# The wwid line in the following blacklist section is shown as an example
+# of how to blacklist devices by wwid. The 2 devnode lines are the
+# compiled in default blacklist. If you want to blacklist entire types
+# of devices, such as all scsi devices, you should use a devnode line.
+# However, if you want to blacklist specific devices, you should use
+# a wwid line. Since there is no guarantee that a specific device will
+# not change names on reboot (from /dev/sda to /dev/sdb for example)
+# devnode lines are not recommended for blacklisting specific devices.
+#
+blacklist {
+ wwid 26353900f02796769
+ devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
+ devnode "^hd[a-z]"
+}
+multipaths {
+ multipath {
+ wwid 3600508b4000156d700012000000b0000
+ alias yellow
+ path_grouping_policy multibus
+ path_checker readsector0
+ path_selector "round-robin 0"
+ failback manual
+ rr_weight priorities
+ no_path_retry 5
+ }
+ multipath {
+ wwid 1DEC_____321816758474
+ alias red
+ }
+}
+devices {
+ device {
+ vendor "COMPAQ "
+ product "HSV110 (C)COMPAQ"
+ path_grouping_policy multibus
+ getuid_callout "/sbin/scsi_id --whitelisted /dev/%n"
+ path_checker readsector0
+ path_selector "round-robin 0"
+ hardware_handler "0"
+ failback 15
+ rr_weight priorities
+ no_path_retry queue
+ }
+ device {
+ vendor "COMPAQ "
+ product "MSA1000 "
+ path_grouping_policy multibus
+ }
+}
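+
+# After editing this file, the resulting multipath topology can be listed
+# with, e.g., "multipath -ll" (illustrative command; the daemon must be
+# restarted or reconfigured to pick up changes).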
--- /dev/null
+# /etc/network/interfaces -- configuration file for ifup(8), ifdown(8)
+
+# The loopback interface
+auto lo
+iface lo inet loopback
+
+# The first network card - this entry was created during the Debian installation
+## auto eth0
+iface eth0 inet dhcp
+ pre-up /etc/init.d/ntp-server stop || true
+ up /etc/init.d/ntpdate restart || true
+ up /etc/init.d/ntp-server start || true
+
+iface eth0-0 inet static
+ address 134.158.129.99
+ netmask 255.255.254.0
+ network 134.158.128.0
+ broadcast 134.158.129.255
+ gateway 134.158.128.1
+
+iface eth0-2 inet static
+ address 192.168.1.160
+ netmask 255.255.255.0
+ network 192.168.1.0
+ broadcast 192.168.1.255
+ gateway 192.168.1.1
+
+iface eth0-3 inet static
+ address 192.168.1.7
+ netmask 255.255.255.0
+ network 192.168.1.0
+ broadcast 192.168.1.255
+
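+# The "eth0-0", "eth0-2" and "eth0-3" stanzas are logical interface
+# definitions: they are not started by "auto", but can be applied to the
+# physical device by hand, e.g. (illustrative command):
+#   ifup eth0=eth0-2
+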
+iface adsl0 inet dhcp
+ pre-up /sbin/modprobe adiusbadsl
+ pre-up /usr/sbin/adictrl -i
+ pre-up /usr/sbin/adictrl -f
+ pre-up /usr/sbin/adictrl -d
+ pre-up /usr/sbin/adictrl -s
--- /dev/null
+
+user nobody;
+worker_processes 1;
+
+error_log logs/error.log;
+error_log logs/error.log notice;
+error_log logs/error.log info;
+
+pid logs/nginx.pid;
+
+
+events {
+ worker_connections 1024;
+}
+
+
+http {
+ include mime.types;
+ default_type application/octet-stream;
+
+ log_format main '$remote_addr - $remote_user [$time_local] "$request" '
+ '$status $body_bytes_sent "$http_referer" '
+ '"$http_user_agent" "$http_x_forwarded_for"';
+
+ access_log logs/access.log main;
+
+ sendfile on;
+ tcp_nopush on;
+
+ keepalive_timeout 0;
+ keepalive_timeout 65;
+
+ gzip on;
+
+ server {
+ listen 80;
+ server_name localhost;
+
+ charset koi8-r;
+
+ access_log logs/host.access.log main;
+
+ location / {
+ root html;
+ index index.html index.htm;
+ }
+
+ error_page 404 /404.html;
+
+ # redirect server error pages to the static page /50x.html
+ #
+ error_page 500 502 503 504 /50x.html;
+ location = /50x.html {
+ root html;
+ }
+
+ # proxy the PHP scripts to Apache listening on 127.0.0.1:80
+
+        #location ~ \.php$ {
+        #    proxy_pass   http://127.0.0.1;
+        #}
+
+ # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
+ #
+ location ~ \.php$ {
+ root html;
+ fastcgi_pass 127.0.0.1:9000;
+ fastcgi_index index.php;
+ fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
+ include fastcgi_params;
+ }
+
+ # deny access to .htaccess files, if Apache's document root
+ # concurs with nginx's one
+ #
+ location ~ /\.ht {
+ deny all;
+ }
+ }
+
+
+ # another virtual host using mix of IP-, name-, and port-based configuration
+
+ server {
+ listen 8000;
+ listen somename:8080;
+ server_name somename alias another.alias;
+
+ location / {
+ root html;
+ index index.html index.htm;
+ }
+ }
+
+
+ # HTTPS server
+ #
+ server {
+ listen 443 ssl;
+ server_name localhost;
+
+ ssl_certificate cert.pem;
+ ssl_certificate_key cert.key;
+
+ ssl_session_cache shared:SSL:1m;
+ ssl_session_timeout 5m;
+
+ ssl_ciphers HIGH:!aNULL:!MD5;
+ ssl_prefer_server_ciphers on;
+
+ location / {
+ root html;
+ index index.html index.htm;
+ }
+ }
+
+}
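+
+# Changes to this file can be syntax-checked before reloading with,
+# e.g., "nginx -t" (illustrative command).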
--- /dev/null
+# /etc/nslcd.conf
+# nslcd configuration file. See nslcd.conf(5)
+# for details.
+
+# Specifies the number of threads to start that can handle requests and perform LDAP queries.
+threads 5
+
+# The user and group nslcd should run as.
+uid nslcd
+gid nslcd
+
+# This option controls the way logging is done.
+log syslog info
+
+# The location at which the LDAP server(s) should be reachable.
+uri ldaps://XXX.XXX.XXX
+
+# The search base that will be used for all queries.
+base dc=XXX,dc=XXX
+
+# The LDAP protocol version to use.
+ldap_version 3
+
+# The DN to bind with for normal lookups.
+binddn cn=annonymous,dc=example,dc=net
+bindpw secret
+
+
+# The DN used for password modifications by root.
+rootpwmoddn cn=admin,dc=example,dc=com
+
+# The password used for password modifications by root.
+rootpwmodpw XXXXXX
+
+
+# SASL authentication options
+sasl_mech OTP
+sasl_realm realm
+sasl_authcid authcid
+sasl_authzid dn:cn=annonymous,dc=example,dc=net
+sasl_secprops noanonymous,noplain,minssf=0,maxssf=2,maxbufsize=65535
+sasl_canonicalize yes
+
+# Kerberos authentication options
+krb5_ccname ccname
+
+# Search/mapping options
+
+# Specifies the base distinguished name (DN) to use as search base.
+base dc=people,dc=example,dc=com
+base dc=morepeople,dc=example,dc=com
+base alias dc=aliases,dc=example,dc=com
+base alias dc=morealiases,dc=example,dc=com
+base group dc=group,dc=example,dc=com
+base group dc=moregroup,dc=example,dc=com
+base passwd dc=users,dc=example,dc=com
+
+# Specifies the search scope (subtree, onelevel, base or children).
+scope sub
+scope passwd sub
+scope aliases sub
+
+# Specifies the policy for dereferencing aliases.
+deref never
+
+# Specifies whether automatic referral chasing should be enabled.
+referrals yes
+
+# The FILTER is an LDAP search filter to use for a specific map.
+filter passwd (objectClass=posixAccount)
+
+# This option allows for custom attributes to be looked up instead of the default RFC 2307 attributes.
+map passwd homeDirectory \"${homeDirectory:-/home/$uid}\"
+map passwd loginShell \"${loginShell:-/bin/bash}\"
+map shadow userPassword myPassword
+
+# Timing/reconnect options
+
+# Specifies the time limit (in seconds) to use when connecting to the directory server.
+bind_timelimit 30
+
+# Specifies the time limit (in seconds) to wait for a response from the LDAP server.
+timelimit 5
+
+# Specifies the period of inactivity (in seconds) after which the connection to the LDAP server will be closed.
+idle_timelimit 10
+
+# Specifies the number of seconds to sleep when connecting to all LDAP servers fails.
+reconnect_sleeptime 10
+
+# Specifies the time after which the LDAP server is considered to be permanently unavailable.
+reconnect_retrytime 10
+
+# SSL/TLS options
+
+# Specifies whether to use SSL/TLS or not (the default is not to).
+ssl start_tls
+# Specifies what checks to perform on a server-supplied certificate.
+tls_reqcert never
+# Specifies the directory containing X.509 certificates for peer authentication.
+tls_cacertdir /etc/ssl/ca
+# Specifies the path to the X.509 certificate for peer authentication.
+tls_cacertfile /etc/ssl/certs/ca-certificates.crt
+# Specifies the path to an entropy source.
+tls_randfile /dev/random
+# Specifies the ciphers to use for TLS.
+tls_ciphers TLSv1
+# Specifies the path to the file containing the local certificate for client TLS authentication.
+tls_cert /etc/ssl/certs/cert.pem
+# Specifies the path to the file containing the private key for client TLS authentication.
+tls_key /etc/ssl/private/cert.pem
+
+# Other options
+pagesize 100
+nss_initgroups_ignoreusers user1,user2,user3
+nss_min_uid 1000
+nss_nested_groups yes
+nss_getgrent_skipmembers yes
+nss_disable_enumeration yes
+validnames /^[a-z0-9._@$()]([a-z0-9._@$() \\~-]*[a-z0-9._@$()~-])?$/i
+ignorecase yes
+pam_authc_ppolicy yes
+pam_authz_search (&(objectClass=posixAccount)(uid=$username)(|(authorizedService=$service)(!(authorizedService=*))))
+pam_password_prohibit_message "MESSAGE LONG AND WITH SPACES"
+reconnect_invalidate nfsidmap,db2,db3
+cache dn2uid 1s 2h
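+
+# After changing the options above, restart nslcd and verify that lookups
+# are answered from LDAP with, e.g., "getent passwd <someuser>"
+# (illustrative command).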
--- /dev/null
+# Permit time synchronization with our time source, but do not
+# permit the source to query or modify the service on this system.
+restrict default kod nomodify notrap nopeer noquery
+restrict -6 default kod nomodify notrap nopeer noquery
+
+# Permit all access over the loopback interface. This could
+# be tightened as well, but to do so would affect some of
+# the administrative functions.
+restrict 127.0.0.1
+restrict -6 ::1
+
+# Hosts on local network are less restricted.
+restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
+
+# Use public servers from the pool.ntp.org project.
+# Please consider joining the pool (http://www.pool.ntp.org/join.html).
+server 0.centos.pool.ntp.org
+server 1.centos.pool.ntp.org
+server 2.centos.pool.ntp.org
+
+broadcast 192.168.1.255 key 42 # broadcast server
+broadcastclient # broadcast client
+broadcast 224.0.1.1 key 42 # multicast server
+multicastclient 224.0.1.1 # multicast client
+manycastserver 239.255.254.254 # manycast server
+manycastclient 239.255.254.254 key 42 # manycast client
+
+# Undisciplined Local Clock. This is a fake driver intended for backup
+# and when no outside source of synchronized time is available.
+server 127.127.1.0 # local clock
+fudge 127.127.1.0 stratum 10
+
+# Drift file. Put this in a directory which the daemon can write to.
+# No symbolic links allowed, either, since the daemon updates the file
+# by creating a temporary in the same directory and then rename()'ing
+# it to the file.
+driftfile /var/lib/ntp/drift
+
+# Key file containing the keys and key identifiers used when operating
+# with symmetric key cryptography.
+keys /etc/ntp/keys
+
+# Specify the key identifiers which are trusted.
+trustedkey 4 8 42
+
+# Specify the key identifier to use with the ntpdc utility.
+requestkey 8
+
+# Specify the key identifier to use with the ntpq utility.
+controlkey 8
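+
+# The file named by "keys" above contains one key per line in the form
+# "<key-id> <type> <key>", where type "M" denotes an MD5 key; for example
+# (illustrative values only):
+#   42 M mysecretkey
+#   8  M anothersecret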
--- /dev/null
+#%PAM-1.0
+auth [user_unknown=ignore success=ok ignore=ignore default=bad] pam_securetty.so
+auth include system-auth
+account required pam_nologin.so
+account include system-auth
+password include system-auth
+# pam_selinux.so close should be the first session rule
+session required pam_selinux.so close
+session optional pam_keyinit.so force revoke
+session include system-auth
+session required pam_loginuid.so
+session optional pam_console.so
+# pam_selinux.so open should only be followed by sessions to be executed in the user context
+session required pam_selinux.so open
+session optional pam_ck_connector.so
--- /dev/null
+#%PAM-1.0
+auth include system-auth
+account include system-auth
+password include system-auth
+session required pam_namespace.so unmnt_remnt no_unmount_on_close
--- /dev/null
+#%PAM-1.0
+auth include system-auth
+account include system-auth
--- /dev/null
+root:x:0:0:root:/root:/bin/bash
+bin:x:1:1:bin:/bin:/sbin/nologin
+daemon:x:2:2:daemon:/sbin:/sbin/nologin
+adm:x:3:4:adm:/var/adm:/sbin/nologin
+lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
+sync:x:5:0:sync:/sbin:/bin/sync
+shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
+halt:x:7:0:halt:/sbin:/sbin/halt
+mail:x:8:12:mail:/var/spool/mail:/sbin/nologin
+uucp:x:10:14:uucp:/var/spool/uucp:/sbin/nologin
+operator:x:11:0:operator:/root:/sbin/nologin
+games:x:12:100:games:/usr/games:/sbin/nologin
+gopher:x:13:30:gopher:/var/gopher:/sbin/nologin
+ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
+nobody:x:99:99:Nobody:/:/sbin/nologin
+vcsa:x:69:69:virtual console memory owner:/dev:/sbin/nologin
+rpc:x:32:32:Rpcbind Daemon:/var/lib/rpcbind:/sbin/nologin
+rpcuser:x:29:29:RPC Service User:/var/lib/nfs:/sbin/nologin
+nfsnobody:x:4294967294:499:Anonymous NFS User:/var/lib/nfs:/sbin/nologin
--- /dev/null
+[PHP]
+
+;;;;;;;;;;;;;;;;;;;
+; About php.ini ;
+;;;;;;;;;;;;;;;;;;;
+; This file controls many aspects of PHP's behavior. In order for PHP to
+; read it, it must be named 'php.ini'. PHP looks for it in the current
+; working directory, in the path designated by the environment variable
+; PHPRC, and in the path that was defined at compile time (in that order).
+; Under Windows, the compile-time path is the Windows directory. The
+; path in which the php.ini file is looked for can be overridden using
+; the -c argument in command line mode.
+;
+; The syntax of the file is extremely simple. Whitespace and lines
+; beginning with a semicolon are silently ignored (as you probably guessed).
+; Section headers (e.g. [Foo]) are also silently ignored, even though
+; they might mean something in the future.
+;
+; Directives are specified using the following syntax:
+; directive = value
+; Directive names are *case sensitive* - foo=bar is different from FOO=bar.
+;
+; The value can be a string, a number, a PHP constant (e.g. E_ALL or M_PI), one
+; of the INI constants (On, Off, True, False, Yes, No and None) or an expression
+; (e.g. E_ALL & ~E_NOTICE), or a quoted string ("foo").
+;
+; Expressions in the INI file are limited to bitwise operators and parentheses:
+; | bitwise OR
+; & bitwise AND
+; ~ bitwise NOT
+; ! boolean NOT
+;
+; Boolean flags can be turned on using the values 1, On, True or Yes.
+; They can be turned off using the values 0, Off, False or No.
+;
+; An empty string can be denoted by simply not writing anything after the equal
+; sign, or by using the None keyword:
+;
+; foo = ; sets foo to an empty string
+; foo = none ; sets foo to an empty string
+; foo = "none" ; sets foo to the string 'none'
+;
+; If you use constants in your value, and these constants belong to a
+; dynamically loaded extension (either a PHP extension or a Zend extension),
+; you may only use these constants *after* the line that loads the extension.
+;
+;
+;;;;;;;;;;;;;;;;;;;
+; About this file ;
+;;;;;;;;;;;;;;;;;;;
+; This is the recommended, PHP 5-style version of the php.ini-dist file. It
+; sets some non-standard settings that make PHP more efficient, more secure,
+; and encourage cleaner coding.
+;
+; The price is that with these settings, PHP may be incompatible with some
+; applications, and sometimes, more difficult to develop with. Using this
+; file is warmly recommended for production sites. As all of the changes from
+; the standard settings are thoroughly documented, you can go over each one,
+; and decide whether you want to use it or not.
+;
+; For general information about the php.ini file, please consult the php.ini-dist
+; file, included in your PHP distribution.
+;
+; This file is different from the php.ini-dist file in that it features
+; different values for several directives, in order to improve performance, while
+; possibly breaking compatibility with the standard out-of-the-box behavior of
+; PHP. Please make sure you read what's different, and modify your scripts
+; accordingly, if you decide to use this file instead.
+;
+; - register_globals = Off [Security, Performance]
+; Global variables are no longer registered for input data (POST, GET, cookies,
+; environment and other server variables). Instead of using $foo, you can use
+; $_REQUEST["foo"] (which includes any variable that arrives through the
+; request, namely, POST, GET and cookie variables), or use one of the specific
+; $_GET["foo"], $_POST["foo"], $_COOKIE["foo"] or $_FILES["foo"], depending
+; on where the input originates. Also, you can look at the
+; import_request_variables() function.
+; Note that register_globals is going to be deprecated (i.e., turned off by
+; default) in the next version of PHP, because it often leads to security bugs.
+; Read http://php.net/manual/en/security.registerglobals.php for further
+; information.
+; - register_long_arrays = Off [Performance]
+; Disables registration of the older (and deprecated) long predefined array
+; variables ($HTTP_*_VARS). Instead, use the superglobals that were
+; introduced in PHP 4.1.0
+; - display_errors = Off [Security]
+; With this directive set to off, errors that occur during the execution of
+; scripts will no longer be displayed as a part of the script output, and thus,
+; will no longer be exposed to remote users. With some errors, the error message
+; content may expose information about your script, web server, or database
+; server that may be exploitable for hacking. Production sites should have this
+; directive set to off.
+; - log_errors = On [Security]
+; This directive complements the above one. Any errors that occur during the
+; execution of your script will be logged (typically, to your server's error log,
+; but can be configured in several ways). Along with setting display_errors to off,
+; this setup gives you the ability to fully understand what may have gone wrong,
+; without exposing any sensitive information to remote users.
+; - output_buffering = 4096 [Performance]
+; Set a 4KB output buffer. Enabling output buffering typically results in fewer
+; writes, and sometimes fewer packets sent on the wire, which can often lead to
+; better performance. The gain this directive actually yields greatly depends
+; on which Web server you're working with, and what kind of scripts you're using.
+; - register_argc_argv = Off [Performance]
+; Disables registration of the somewhat redundant $argv and $argc global
+; variables.
+; - magic_quotes_gpc = Off [Performance]
+; Input data is no longer escaped with slashes so that it can be sent into
+; SQL databases without further manipulation. Instead, you should use the
+; function addslashes() on each input element you wish to send to a database.
+; - variables_order = "GPCS" [Performance]
+; The environment variables are not hashed into the $_ENV. To access
+; environment variables, you can use getenv() instead.
+; - error_reporting = E_ALL [Code Cleanliness, Security(?)]
+; By default, PHP suppresses errors of type E_NOTICE. These error messages
+; are emitted for non-critical errors, but that could be a symptom of a bigger
+; problem. Most notably, this will cause error messages about the use
+; of uninitialized variables to be displayed.
+; - allow_call_time_pass_reference = Off [Code cleanliness]
+; You can no longer force an argument to be passed by reference at function
+; call time. Instead, the function should declare the relevant argument as
+; by-reference in its definition.
+
+
+;;;;;;;;;;;;;;;;;;;;
+; Language Options ;
+;;;;;;;;;;;;;;;;;;;;
+
+; Enable the PHP scripting language engine under Apache.
+engine = On
+
+; Enable compatibility mode with Zend Engine 1 (PHP 4.x)
+zend.ze1_compatibility_mode = Off
+
+; Allow the <? tag. Otherwise, only <?php and <script> tags are recognized.
+; NOTE: Using short tags should be avoided when developing applications or
+; libraries that are meant for redistribution, or deployment on PHP
+; servers which are not under your control, because short tags may not
+; be supported on the target server. For portable, redistributable code,
+; be sure not to use short tags.
+short_open_tag = On
+
+; Allow ASP-style <% %> tags.
+asp_tags = Off
+
+; The number of significant digits displayed in floating point numbers.
+precision = 14
+
+; Enforce year 2000 compliance (will cause problems with non-compliant browsers)
+y2k_compliance = On
+
+; Output buffering allows you to send header lines (including cookies) even
+; after you send body content, at the price of slowing PHP's output layer a
+; bit. You can enable output buffering during runtime by calling the output
+; buffering functions. You can also enable output buffering for all files by
+; setting this directive to On. If you wish to limit the size of the buffer
+; to a certain size - you can use a maximum number of bytes instead of 'On', as
+; a value for this directive (e.g., output_buffering=4096).
+output_buffering = 4096
+
+; You can redirect all of the output of your scripts to a function. For
+; example, if you set output_handler to "mb_output_handler", character
+; encoding will be transparently converted to the specified encoding.
+; Setting any output handler automatically turns on output buffering.
+; Note: People who write portable scripts should not depend on this ini
+; directive. Instead, explicitly set the output handler using ob_start().
+; Using this ini directive may cause problems unless you know what the script
+; is doing.
+; Note: You cannot use both "mb_output_handler" with "ob_iconv_handler"
+; and you cannot use both "ob_gzhandler" and "zlib.output_compression".
+; Note: output_handler must be empty if this is set 'On' !!!!
+; Instead you must use zlib.output_handler.
+;output_handler =
+
+; Transparent output compression using the zlib library
+; Valid values for this option are 'off', 'on', or a specific buffer size
+; to be used for compression (default is 4KB)
+; Note: Resulting chunk size may vary due to nature of compression. PHP
+;       outputs chunks that are a few hundred bytes each as a result of
+; compression. If you prefer a larger chunk size for better
+; performance, enable output_buffering in addition.
+; Note: You need to use zlib.output_handler instead of the standard
+; output_handler, or otherwise the output will be corrupted.
+zlib.output_compression = Off
+
+; You cannot specify additional output handlers if zlib.output_compression
+; is activated here. This setting does the same as output_handler but in
+; a different order.
+;zlib.output_handler =
+
+; Implicit flush tells PHP to tell the output layer to flush itself
+; automatically after every output block. This is equivalent to calling the
+; PHP function flush() after each and every call to print() or echo() and each
+; and every HTML block. Turning this option on has serious performance
+; implications and is generally recommended for debugging purposes only.
+implicit_flush = Off
+
+; The unserialize callback function will be called (with the undefined class'
+; name as parameter), if the unserializer finds an undefined class
+; which should be instantiated.
+; A warning appears if the specified function is not defined, or if the
+; function doesn't include/implement the missing class.
+; So only set this entry, if you really want to implement such a
+; callback-function.
+unserialize_callback_func=
+
+; When floats & doubles are serialized store serialize_precision significant
+; digits after the floating point. The default value ensures that when floats
+; are decoded with unserialize, the data will remain the same.
+serialize_precision = 100
+
+; Whether to enable the ability to force arguments to be passed by reference
+; at function call time. This method is deprecated and is likely to be
+; unsupported in future versions of PHP/Zend. The encouraged method of
+; specifying which arguments should be passed by reference is in the function
+; declaration. You're encouraged to try and turn this option Off and make
+; sure your scripts work properly with it in order to ensure they will work
+; with future versions of the language (you will receive a warning each time
+; you use this feature, and the argument will be passed by value instead of by
+; reference).
+allow_call_time_pass_reference = Off
+
+;
+; Safe Mode
+;
+safe_mode = Off
+
+; By default, Safe Mode does a UID compare check when
+; opening files. If you want to relax this to a GID compare,
+; then turn on safe_mode_gid.
+safe_mode_gid = Off
+
+; When safe_mode is on, UID/GID checks are bypassed when
+; including files from this directory and its subdirectories.
+; (directory must also be in include_path or full path must
+; be used when including)
+safe_mode_include_dir =
+
+; When safe_mode is on, only executables located in the safe_mode_exec_dir
+; will be allowed to be executed via the exec family of functions.
+safe_mode_exec_dir =
+
+; Setting certain environment variables may be a potential security breach.
+; This directive contains a comma-delimited list of prefixes. In Safe Mode,
+; the user may only alter environment variables whose names begin with the
+; prefixes supplied here. By default, users will only be able to set
+; environment variables that begin with PHP_ (e.g. PHP_FOO=BAR).
+;
+; Note: If this directive is empty, PHP will let the user modify ANY
+; environment variable!
+safe_mode_allowed_env_vars = PHP_
+
+; This directive contains a comma-delimited list of environment variables that
+; the end user won't be able to change using putenv(). These variables will be
+; protected even if safe_mode_allowed_env_vars is set to allow changing them.
+safe_mode_protected_env_vars = LD_LIBRARY_PATH
+
+; open_basedir, if set, limits all file operations to the defined directory
+; and below. This directive makes most sense if used in a per-directory
+; or per-virtualhost web server configuration file. This directive is
+; *NOT* affected by whether Safe Mode is turned On or Off.
+;open_basedir =
+
+; This directive allows you to disable certain functions for security reasons.
+; It receives a comma-delimited list of function names. This directive is
+; *NOT* affected by whether Safe Mode is turned On or Off.
+disable_functions =
+
+; This directive allows you to disable certain classes for security reasons.
+; It receives a comma-delimited list of class names. This directive is
+; *NOT* affected by whether Safe Mode is turned On or Off.
+disable_classes =
+
+; Colors for Syntax Highlighting mode. Anything that's acceptable in
+; <span style="color: ???????"> would work.
+;highlight.string = #DD0000
+;highlight.comment = #FF9900
+;highlight.keyword = #007700
+;highlight.bg = #FFFFFF
+;highlight.default = #0000BB
+;highlight.html = #000000
+
+; If enabled, the request will be allowed to complete even if the user aborts
+; the request. Consider enabling it when executing long requests, which may end up
+; being interrupted by the user or a browser timing out.
+; ignore_user_abort = On
+
+; Determines the size of the realpath cache to be used by PHP. This value should
+; be increased on systems where PHP opens many files to reflect the quantity of
+; the file operations performed.
+; realpath_cache_size=16k
+
+; Duration of time, in seconds for which to cache realpath information for a given
+; file or directory. For systems with rarely changing files, consider increasing this
+; value.
+; realpath_cache_ttl=120
+
+;
+; Misc
+;
+; Decides whether PHP may expose the fact that it is installed on the server
+; (e.g. by adding its signature to the Web server header). It is no security
+; threat in any way, but it makes it possible to determine whether you use PHP
+; on your server or not.
+expose_php = On
+
+
+;;;;;;;;;;;;;;;;;;;
+; Resource Limits ;
+;;;;;;;;;;;;;;;;;;;
+
+max_execution_time = 30 ; Maximum execution time of each script, in seconds
+max_input_time = 60 ; Maximum amount of time each script may spend parsing request data
+memory_limit = 16M ; Maximum amount of memory a script may consume
+
+
+;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+; Error handling and logging ;
+;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+
+; error_reporting is a bit-field. Or each number up to get desired error
+; reporting level
+; E_ALL - All errors and warnings (doesn't include E_STRICT)
+; E_ERROR - fatal run-time errors
+; E_WARNING - run-time warnings (non-fatal errors)
+; E_PARSE - compile-time parse errors
+; E_NOTICE - run-time notices (these are warnings which often result
+; from a bug in your code, but it's possible that it was
+; intentional (e.g., using an uninitialized variable and
+; relying on the fact it's automatically initialized to an
+; empty string)
+; E_STRICT - run-time notices, enable to have PHP suggest changes
+; to your code which will ensure the best interoperability
+; and forward compatibility of your code
+; E_CORE_ERROR - fatal errors that occur during PHP's initial startup
+; E_CORE_WARNING - warnings (non-fatal errors) that occur during PHP's
+; initial startup
+; E_COMPILE_ERROR - fatal compile-time errors
+; E_COMPILE_WARNING - compile-time warnings (non-fatal errors)
+; E_USER_ERROR - user-generated error message
+; E_USER_WARNING - user-generated warning message
+; E_USER_NOTICE - user-generated notice message
+;
+; Examples:
+;
+; - Show all errors, except for notices and coding standards warnings
+;
+;error_reporting = E_ALL & ~E_NOTICE
+;
+; - Show all errors, except for notices
+;
+;error_reporting = E_ALL & ~E_NOTICE | E_STRICT
+;
+; - Show only errors
+;
+;error_reporting = E_COMPILE_ERROR|E_ERROR|E_CORE_ERROR
+;
+; - Show all errors, except coding standards warnings
+;
+error_reporting = E_ALL
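; As a rough illustration of how the bit-field expressions above combine
; (sketched outside PHP; the E_* constant values below match PHP 5.2 and are
; an assumption -- other PHP versions define different values for E_ALL):

```python
# Illustrative sketch of PHP's error_reporting bit-field arithmetic.
# Constant values are PHP 5.2's (an assumption; they vary by version);
# E_ALL there is 6143 and does not include E_STRICT (2048).
E_ERROR   = 1
E_WARNING = 2
E_PARSE   = 4
E_NOTICE  = 8
E_STRICT  = 2048
E_ALL     = 6143

# "Show all errors, except for notices and coding standards warnings"
level = E_ALL & ~E_NOTICE
print(level & E_NOTICE)   # 0: notices are masked out
print(level & E_ERROR)    # 1: fatal run-time errors still reported
print(level & E_STRICT)   # 0: E_ALL never included E_STRICT here
```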
+
+; Print out errors (as a part of the output). For production web sites,
+; you're strongly encouraged to turn this feature off, and use error logging
+; instead (see below). Keeping display_errors enabled on a production web site
+; may reveal security information to end users, such as file paths on your Web
+; server, your database schema or other information.
+display_errors = Off
+
+; Even when display_errors is on, errors that occur during PHP's startup
+; sequence are not displayed. It's strongly recommended to keep
+; display_startup_errors off, except for when debugging.
+display_startup_errors = Off
+
+; Log errors into a log file (server-specific log, stderr, or error_log (below))
+; As stated above, you're strongly advised to use error logging in place of
+; error displaying on production web sites.
+log_errors = On
+
+; Set the maximum length of log_errors. In error_log, information about the
+; source is added. The default is 1024; set it to 0 to apply no maximum length
+; at all.
+log_errors_max_len = 1024
+
+; Do not log repeated messages. Repeated errors must occur in the same file on
+; the same line unless ignore_repeated_source is set true.
+ignore_repeated_errors = Off
+
+; Ignore the source of a message when ignoring repeated messages. When this
+; setting is On you will not log errors with repeated messages from different
+; files or source lines.
+ignore_repeated_source = Off
+
+; If this parameter is set to Off, then memory leaks will not be shown (on
+; stdout or in the log). This only has an effect in a debug compile, and only
+; if error reporting includes E_WARNING in the allowed list.
+report_memleaks = On
+
+; Store the last error/warning message in $php_errormsg (boolean).
+track_errors = Off
+
+; Disable the inclusion of HTML tags in error messages.
+; Note: Never use this feature for production boxes.
+;html_errors = Off
+
+; If html_errors is set to On, PHP produces clickable error messages that link
+; to a page describing the error or the function causing it in detail.
+; You can download a copy of the PHP manual from http://www.php.net/docs.php
+; and change docref_root to the base URL of your local copy including the
+; leading '/'. You must also specify the file extension being used including
+; the dot.
+; Note: Never use this feature for production boxes.
+;docref_root = "/phpmanual/"
+;docref_ext = .html
+
+; String to output before an error message.
+;error_prepend_string = "<font color=ff0000>"
+
+; String to output after an error message.
+;error_append_string = "</font>"
+
+; Log errors to specified file.
+;error_log = filename
+
+; Log errors to syslog (Event Log on NT, not valid in Windows 95).
+;error_log = syslog
+
+
+;;;;;;;;;;;;;;;;;
+; Data Handling ;
+;;;;;;;;;;;;;;;;;
+;
+; Note - track_vars is ALWAYS enabled as of PHP 4.0.3
+
+; The separator used in PHP generated URLs to separate arguments.
+; Default is "&".
+;arg_separator.output = "&"
+
+; List of separator(s) used by PHP to parse input URLs into variables.
+; Default is "&".
+; NOTE: Every character in this directive is considered a separator!
+;arg_separator.input = ";&"
+
+; This directive describes the order in which PHP registers GET, POST, Cookie,
+; Environment and Built-in variables (G, P, C, E & S respectively, often
+; referred to as EGPCS or GPC). Registration is done from left to right, newer
+; values override older values.
+variables_order = "EGPCS"
+
+; Whether or not to register the EGPCS variables as global variables. You may
+; want to turn this off if you don't want to clutter your scripts' global scope
+; with user data. This makes most sense when coupled with track_vars - in which
+; case you can access all of the GPC variables through the $HTTP_*_VARS[]
+; variables.
+;
+; You should do your best to write your scripts so that they do not require
+; register_globals to be on; using form variables as globals can easily lead
+; to security problems if the code is not very well thought out.
+register_globals = Off
+
+; Whether or not to register the old-style input arrays, HTTP_GET_VARS
+; and friends. If you're not using them, it's recommended to turn them off,
+; for performance reasons.
+register_long_arrays = Off
+
+; This directive tells PHP whether to declare the argv&argc variables (that
+; would contain the GET information). If you don't use these variables, you
+; should turn it off for increased performance.
+register_argc_argv = Off
+
+; When enabled, the SERVER and ENV variables are created when they're first
+; used (Just In Time) instead of when the script starts. If these variables
+; are not used within a script, having this directive on will result in a
+; performance gain. The PHP directives register_globals, register_long_arrays,
+; and register_argc_argv must be disabled for this directive to have any effect.
+auto_globals_jit = On
+
+; Maximum size of POST data that PHP will accept.
+post_max_size = 8M
+
+; Magic quotes
+;
+
+; Magic quotes for incoming GET/POST/Cookie data.
+magic_quotes_gpc = Off
+
+; Magic quotes for runtime-generated data, e.g. data from SQL, from exec(), etc.
+magic_quotes_runtime = Off
+
+; Use Sybase-style magic quotes (escape ' with '' instead of \').
+magic_quotes_sybase = Off
+
+; Automatically add files before or after any PHP document.
+auto_prepend_file =
+auto_append_file =
+
+; As of 4.0b4, PHP always outputs a character encoding by default in
+; the Content-type: header. To disable sending of the charset, simply
+; set it to be empty.
+;
+; PHP's built-in default is text/html
+default_mimetype = "text/html"
+;default_charset = "iso-8859-1"
+
+; Always populate the $HTTP_RAW_POST_DATA variable.
+;always_populate_raw_post_data = On
+
+
+;;;;;;;;;;;;;;;;;;;;;;;;;
+; Paths and Directories ;
+;;;;;;;;;;;;;;;;;;;;;;;;;
+
+; UNIX: "/path1:/path2"
+;include_path = ".:/php/includes"
+;
+; Windows: "\path1;\path2"
+;include_path = ".;c:\php\includes"
+
+; The root of the PHP pages, used only if nonempty.
+; If PHP was not compiled with FORCE_REDIRECT, you SHOULD set doc_root
+; if you are running PHP as a CGI under any web server (other than IIS).
+; See the documentation for security issues. The alternative is to use the
+; cgi.force_redirect configuration below.
+doc_root =
+
+; The directory under which PHP opens the script using /~username used only
+; if nonempty.
+user_dir =
+
+; Directory in which the loadable extensions (modules) reside.
+extension_dir = "/usr/lib/php/modules"
+
+; Whether or not to enable the dl() function. The dl() function does NOT work
+; properly in multithreaded servers, such as IIS or Zeus, and is automatically
+; disabled on them.
+enable_dl = On
+
+; cgi.force_redirect is necessary to provide security running PHP as a CGI under
+; most web servers. Left undefined, PHP turns this on by default. You can
+; turn it off here AT YOUR OWN RISK
+; **You CAN safely turn this off for IIS, in fact, you MUST.**
+; cgi.force_redirect = 1
+
+; If cgi.nph is enabled, it will force CGI to always send a Status: 200 header
+; with every request.
+; cgi.nph = 1
+
+; if cgi.force_redirect is turned on, and you are not running under Apache or Netscape
+; (iPlanet) web servers, you MAY need to set an environment variable name that PHP
+; will look for to know it is OK to continue execution. Setting this variable MAY
+; cause security issues, KNOW WHAT YOU ARE DOING FIRST.
+; cgi.redirect_status_env = ;
+
+; FastCGI under IIS (on WINNT based OS) supports the ability to impersonate
+; security tokens of the calling client. This allows IIS to define the
+; security context that the request runs under. mod_fastcgi under Apache
+; does not currently support this feature (03/17/2002)
+; Set to 1 if running under IIS. Default is zero.
+; fastcgi.impersonate = 1;
+
+; Disable logging through FastCGI connection
+; fastcgi.log = 0
+
+; The cgi.rfc2616_headers configuration option tells PHP what type of headers
+; to use when sending an HTTP response code. If it's set to 0, PHP sends a
+; Status: header that is supported by Apache. When this option is set to 1,
+; PHP will send RFC 2616 compliant headers.
+; Default is zero.
+;cgi.rfc2616_headers = 0
+
+
+;;;;;;;;;;;;;;;;
+; File Uploads ;
+;;;;;;;;;;;;;;;;
+
+; Whether to allow HTTP file uploads.
+file_uploads = On
+
+; Temporary directory for HTTP uploaded files (will use system default if not
+; specified).
+;upload_tmp_dir =
+
+; Maximum allowed size for uploaded files.
+upload_max_filesize = 2M
+
+
+;;;;;;;;;;;;;;;;;;
+; Fopen wrappers ;
+;;;;;;;;;;;;;;;;;;
+
+; Whether to allow the treatment of URLs (like http:// or ftp://) as files.
+allow_url_fopen = On
+
+; Define the anonymous ftp password (your email address)
+;from="john@doe.com"
+
+; Define the User-Agent string
+; user_agent="PHP"
+
+; Default timeout for socket based streams (seconds)
+default_socket_timeout = 60
+
+; If your scripts have to deal with files from Macintosh systems,
+; or you are running on a Mac and need to deal with files from
+; unix or win32 systems, setting this flag will cause PHP to
+; automatically detect the EOL character in those files so that
+; fgets() and file() will work regardless of the source of the file.
+; auto_detect_line_endings = Off
+
+
+;;;;;;;;;;;;;;;;;;;;;;
+; Dynamic Extensions ;
+;;;;;;;;;;;;;;;;;;;;;;
+;
+; If you wish to have an extension loaded automatically, use the following
+; syntax:
+;
+; extension=modulename.extension
+;
+; For example:
+;
+; extension=msql.so
+;
+; Note that it should be the name of the module only; no directory information
+; needs to go here. Specify the location of the extension with the
+; extension_dir directive above.
+
+
+;;;;
+; Note: packaged extension modules are now loaded via the .ini files
+; found in the directory /etc/php.d; these are loaded by default.
+;;;;
+
+
+;;;;;;;;;;;;;;;;;;;
+; Module Settings ;
+;;;;;;;;;;;;;;;;;;;
+
+[Date]
+; Defines the default timezone used by the date functions
+;date.timezone =
+
+[Syslog]
+; Whether or not to define the various syslog variables (e.g. $LOG_PID,
+; $LOG_CRON, etc.). Turning it off is a good idea performance-wise. In
+; runtime, you can define these variables by calling define_syslog_variables().
+define_syslog_variables = Off
+
+[mail function]
+; For Win32 only.
+SMTP = localhost
+smtp_port = 25
+
+; For Win32 only.
+;sendmail_from = me@example.com
+
+; For Unix only. You may supply arguments as well (default: "sendmail -t -i").
+sendmail_path = /usr/sbin/sendmail -t -i
+
+; Force the addition of the specified parameters to be passed as extra parameters
+; to the sendmail binary. These parameters will always replace the value of
+; the 5th parameter to mail(), even in safe mode.
+;mail.force_extra_parameters =
+
+[SQL]
+sql.safe_mode = Off
+
+[ODBC]
+;odbc.default_db = Not yet implemented
+;odbc.default_user = Not yet implemented
+;odbc.default_pw = Not yet implemented
+
+; Allow or prevent persistent links.
+odbc.allow_persistent = On
+
+; Check that a connection is still valid before reuse.
+odbc.check_persistent = On
+
+; Maximum number of persistent links. -1 means no limit.
+odbc.max_persistent = -1
+
+; Maximum number of links (persistent + non-persistent). -1 means no limit.
+odbc.max_links = -1
+
+; Handling of LONG fields. Returns number of bytes to variables. 0 means
+; passthru.
+odbc.defaultlrl = 4096
+
+; Handling of binary data. 0 means passthru, 1 return as is, 2 convert to char.
+; See the documentation on odbc_binmode and odbc_longreadlen for an explanation
+; of uodbc.defaultlrl and uodbc.defaultbinmode
+odbc.defaultbinmode = 1
+
+[MySQL]
+; Allow or prevent persistent links.
+mysql.allow_persistent = On
+
+; Maximum number of persistent links. -1 means no limit.
+mysql.max_persistent = -1
+
+; Maximum number of links (persistent + non-persistent). -1 means no limit.
+mysql.max_links = -1
+
+; Default port number for mysql_connect(). If unset, mysql_connect() will use
+; the $MYSQL_TCP_PORT or the mysql-tcp entry in /etc/services or the
+; compile-time value defined MYSQL_PORT (in that order). Win32 will only look
+; at MYSQL_PORT.
+mysql.default_port =
+
+; Default socket name for local MySQL connects. If empty, uses the built-in
+; MySQL defaults.
+mysql.default_socket =
+
+; Default host for mysql_connect() (doesn't apply in safe mode).
+mysql.default_host =
+
+; Default user for mysql_connect() (doesn't apply in safe mode).
+mysql.default_user =
+
+; Default password for mysql_connect() (doesn't apply in safe mode).
+; Note that it is generally a *bad* idea to store passwords in this file.
+; *Any* user with PHP access can run 'echo get_cfg_var("mysql.default_password")'
+; and reveal this password! And of course, any users with read access to this
+; file will be able to reveal the password as well.
+mysql.default_password =
+
+; Maximum time (in seconds) for connect timeout. -1 means no limit.
+mysql.connect_timeout = 60
+
+; Trace mode. When trace_mode is active (=On), warnings for table/index scans
+; and SQL errors will be displayed.
+mysql.trace_mode = Off
+
+[MySQLi]
+
+; Maximum number of links. -1 means no limit.
+mysqli.max_links = -1
+
+; Default port number for mysqli_connect(). If unset, mysqli_connect() will use
+; the $MYSQL_TCP_PORT or the mysql-tcp entry in /etc/services or the
+; compile-time value defined MYSQL_PORT (in that order). Win32 will only look
+; at MYSQL_PORT.
+mysqli.default_port = 3306
+
+; Default socket name for local MySQL connects. If empty, uses the built-in
+; MySQL defaults.
+mysqli.default_socket =
+
+; Default host for mysql_connect() (doesn't apply in safe mode).
+mysqli.default_host =
+
+; Default user for mysql_connect() (doesn't apply in safe mode).
+mysqli.default_user =
+
+; Default password for mysqli_connect() (doesn't apply in safe mode).
+; Note that it is generally a *bad* idea to store passwords in this file.
+; *Any* user with PHP access can run 'echo get_cfg_var("mysqli.default_pw")'
+; and reveal this password! And of course, any users with read access to this
+; file will be able to reveal the password as well.
+mysqli.default_pw =
+
+; Allow or prevent reconnect
+mysqli.reconnect = Off
+
+[mSQL]
+; Allow or prevent persistent links.
+msql.allow_persistent = On
+
+; Maximum number of persistent links. -1 means no limit.
+msql.max_persistent = -1
+
+; Maximum number of links (persistent+non persistent). -1 means no limit.
+msql.max_links = -1
+
+[PostgreSQL]
+; Allow or prevent persistent links.
+pgsql.allow_persistent = On
+
+; Always detect broken persistent links with pg_pconnect().
+; The auto-reset feature requires a little overhead.
+pgsql.auto_reset_persistent = Off
+
+; Maximum number of persistent links. -1 means no limit.
+pgsql.max_persistent = -1
+
+; Maximum number of links (persistent+non persistent). -1 means no limit.
+pgsql.max_links = -1
+
+; Whether to ignore PostgreSQL backend Notice messages.
+; Notice message logging requires a little overhead.
+pgsql.ignore_notice = 0
+
+; Whether to log PostgreSQL backend Notice messages.
+; Unless pgsql.ignore_notice=0, the module cannot log notice messages.
+pgsql.log_notice = 0
+
+[Sybase]
+; Allow or prevent persistent links.
+sybase.allow_persistent = On
+
+; Maximum number of persistent links. -1 means no limit.
+sybase.max_persistent = -1
+
+; Maximum number of links (persistent + non-persistent). -1 means no limit.
+sybase.max_links = -1
+
+;sybase.interface_file = "/usr/sybase/interfaces"
+
+; Minimum error severity to display.
+sybase.min_error_severity = 10
+
+; Minimum message severity to display.
+sybase.min_message_severity = 10
+
+; Compatibility mode with old versions of PHP 3.0.
+; If on, this will cause PHP to automatically assign types to results according
+; to their Sybase type, instead of treating them all as strings. This
+; compatibility mode will probably not stay around forever, so try applying
+; whatever changes are necessary to your code, and turn it off.
+sybase.compatability_mode = Off
+
+[Sybase-CT]
+; Allow or prevent persistent links.
+sybct.allow_persistent = On
+
+; Maximum number of persistent links. -1 means no limit.
+sybct.max_persistent = -1
+
+; Maximum number of links (persistent + non-persistent). -1 means no limit.
+sybct.max_links = -1
+
+; Minimum server message severity to display.
+sybct.min_server_severity = 10
+
+; Minimum client message severity to display.
+sybct.min_client_severity = 10
+
+[bcmath]
+; Number of decimal digits for all bcmath functions.
+bcmath.scale = 0
+
+[browscap]
+;browscap = extra/browscap.ini
+
+[Informix]
+; Default host for ifx_connect() (doesn't apply in safe mode).
+ifx.default_host =
+
+; Default user for ifx_connect() (doesn't apply in safe mode).
+ifx.default_user =
+
+; Default password for ifx_connect() (doesn't apply in safe mode).
+ifx.default_password =
+
+; Allow or prevent persistent links.
+ifx.allow_persistent = On
+
+; Maximum number of persistent links. -1 means no limit.
+ifx.max_persistent = -1
+
+; Maximum number of links (persistent + non-persistent). -1 means no limit.
+ifx.max_links = -1
+
+; If on, select statements return the contents of a text blob instead of its id.
+ifx.textasvarchar = 0
+
+; If on, select statements return the contents of a byte blob instead of its id.
+ifx.byteasvarchar = 0
+
+; Trailing blanks are stripped from fixed-length char columns. May help the
+; life of Informix SE users.
+ifx.charasvarchar = 0
+
+; If on, the contents of text and byte blobs are dumped to a file instead of
+; keeping them in memory.
+ifx.blobinfile = 0
+
+; NULLs are returned as empty strings, unless this is set to 1. In that case,
+; NULLs are returned as the string 'NULL'.
+ifx.nullformat = 0
+
+[Session]
+; Handler used to store/retrieve data.
+session.save_handler = files
+
+; Argument passed to save_handler. In the case of files, this is the path
+; where data files are stored. Note: Windows users have to change this
+; variable in order to use PHP's session functions.
+;
+; As of PHP 4.0.1, you can define the path as:
+;
+; session.save_path = "N;/path"
+;
+; where N is an integer. Instead of storing all the session files in
+; /path, what this will do is use subdirectories N-levels deep, and
+; store the session data in those directories. This is useful if you
+; or your OS have problems with lots of files in one directory, and is
+; a more efficient layout for servers that handle lots of sessions.
+;
+; NOTE 1: PHP will not create this directory structure automatically.
+; You can use the script in the ext/session dir for that purpose.
+; NOTE 2: See the section on garbage collection below if you choose to
+; use subdirectories for session storage
+;
+; The file storage module creates files using mode 600 by default.
+; You can change that by using
+;
+; session.save_path = "N;MODE;/path"
+;
+; where MODE is the octal representation of the mode. Note that this
+; does not overwrite the process's umask.
+session.save_path = "/var/lib/php/session"
+
+; Whether to use cookies.
+session.use_cookies = 1
+
+; This option enables administrators to make their users invulnerable to
+; attacks which involve passing session ids in URLs; defaults to 0.
+; session.use_only_cookies = 1
+
+; Name of the session (used as cookie name).
+session.name = PHPSESSID
+
+; Initialize session on request startup.
+session.auto_start = 0
+
+; Lifetime in seconds of cookie or, if 0, until browser is restarted.
+session.cookie_lifetime = 0
+
+; The path for which the cookie is valid.
+session.cookie_path = /
+
+; The domain for which the cookie is valid.
+session.cookie_domain =
+
+; Handler used to serialize data. php is the standard serializer of PHP.
+session.serialize_handler = php
+
+; Define the probability that the 'garbage collection' process is started
+; on every session initialization.
+; The probability is calculated by using gc_probability/gc_divisor,
+; e.g. 1/100 means there is a 1% chance that the GC process starts
+; on each request.
+
+session.gc_probability = 1
+session.gc_divisor = 1000
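; The probability arithmetic above can be sketched quickly (a minimal
; illustration, not PHP itself): with the values used in this file,
; gc_probability=1 and gc_divisor=1000 give a 0.1% chance per request.

```python
# Sketch of the session GC probability described above (illustrative only).
gc_probability = 1
gc_divisor = 1000

chance = gc_probability / gc_divisor
print(chance)                  # 0.001 -> 0.1% chance per request

# Expected number of GC runs over 100,000 requests, on average:
print(int(chance * 100_000))   # 100
```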
+
+; After this number of seconds, stored data will be seen as 'garbage' and
+; cleaned up by the garbage collection process.
+session.gc_maxlifetime = 1440
+
+; NOTE: If you are using the subdirectory option for storing session files
+; (see session.save_path above), then garbage collection does *not*
+; happen automatically. You will need to do your own garbage
+; collection through a shell script, cron entry, or some other method.
+; For example, the following command is the equivalent of
+; setting session.gc_maxlifetime to 1440 (1440 seconds = 24 minutes):
+; cd /path/to/sessions; find -cmin +24 | xargs rm
+
+; PHP 4.2 and earlier have an undocumented feature/bug that allows you to
+; initialize a session variable in the global scope, even though
+; register_globals is disabled. PHP 4.3 and later will warn you if this
+; feature is used. You can disable the feature and the warning separately. At
+; this time, the warning is only displayed if bug_compat_42 is enabled.
+
+session.bug_compat_42 = 0
+session.bug_compat_warn = 1
+
+; Check HTTP Referer to invalidate externally stored URLs containing ids.
+; HTTP_REFERER has to contain this substring for the session to be
+; considered as valid.
+session.referer_check =
+
+; How many bytes to read from the entropy file.
+session.entropy_length = 0
+
+; The file from which to read entropy when creating the session id.
+session.entropy_file =
+
+;session.entropy_length = 16
+
+;session.entropy_file = /dev/urandom
+
+; Set to {nocache,private,public,} to determine HTTP caching aspects
+; or leave this empty to avoid sending anti-caching headers.
+session.cache_limiter = nocache
+
+; Document expires after n minutes.
+session.cache_expire = 180
+
+; trans sid support is disabled by default.
+; Use of trans sid may put your users' security at risk.
+; Use this option with caution.
+; - Users may send a URL that contains an active session ID
+; to another person via email/IRC/etc.
+; - A URL that contains an active session ID may be stored
+; on a publicly accessible computer.
+; - A user may access your site with the same session ID
+; every time, using a URL stored in the browser's history or bookmarks.
+session.use_trans_sid = 0
+
+; Select a hash function
+; 0: MD5 (128 bits)
+; 1: SHA-1 (160 bits)
+session.hash_function = 0
+
+; Define how many bits are stored in each character when converting
+; the binary hash data to something readable.
+;
+; 4 bits: 0-9, a-f
+; 5 bits: 0-9, a-v
+; 6 bits: 0-9, a-z, A-Z, "-", ","
+session.hash_bits_per_character = 5
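; The resulting session id length follows from the hash size and this setting
; (a sketch, assuming MD5 = 128 bits and SHA-1 = 160 bits as listed above):

```python
import math

# Session id length = hash bits divided by bits stored per character,
# rounded up (illustrative arithmetic for the settings above).
bits_per_char = 5                 # session.hash_bits_per_character
for name, hash_bits in (("MD5", 128), ("SHA-1", 160)):
    print(name, math.ceil(hash_bits / bits_per_char))
# MD5 -> 26 characters, SHA-1 -> 32 characters
```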
+
+; The URL rewriter will look for URLs in a defined set of HTML tags.
+; form/fieldset are special; if you include them here, the rewriter will
+; add a hidden <input> field with the info which is otherwise appended
+; to URLs. If you want XHTML conformity, remove the form entry.
+; Note that all valid entries require a "=", even if no value follows.
+url_rewriter.tags = "a=href,area=href,frame=src,input=src,form=fakeentry"
+
+[MSSQL]
+; Allow or prevent persistent links.
+mssql.allow_persistent = On
+
+; Maximum number of persistent links. -1 means no limit.
+mssql.max_persistent = -1
+
+; Maximum number of links (persistent+non persistent). -1 means no limit.
+mssql.max_links = -1
+
+; Minimum error severity to display.
+mssql.min_error_severity = 10
+
+; Minimum message severity to display.
+mssql.min_message_severity = 10
+
+; Compatibility mode with old versions of PHP 3.0.
+mssql.compatability_mode = Off
+
+; Connect timeout
+;mssql.connect_timeout = 5
+
+; Query timeout
+;mssql.timeout = 60
+
+; Valid range 0 - 2147483647. Default = 4096.
+;mssql.textlimit = 4096
+
+; Valid range 0 - 2147483647. Default = 4096.
+;mssql.textsize = 4096
+
+; Limits the number of records in each batch. 0 = all records in one batch.
+;mssql.batchsize = 0
+
+; Specify how datetime and datetim4 columns are returned
+; On => Returns data converted to SQL server settings
+; Off => Returns values as YYYY-MM-DD hh:mm:ss
+;mssql.datetimeconvert = On
+
+; Use NT authentication when connecting to the server
+mssql.secure_connection = Off
+
+; Specify max number of processes. -1 = library default
+; msdlib defaults to 25
+; FreeTDS defaults to 4096
+;mssql.max_procs = -1
+
+; Specify client character set.
+; If empty or not set, the client charset from freetds.conf is used.
+; This is only used when compiled with FreeTDS
+;mssql.charset = "ISO-8859-1"
+
+[Assertion]
+; Assert(expr); active by default.
+;assert.active = On
+
+; Issue a PHP warning for each failed assertion.
+;assert.warning = On
+
+; Don't bail out by default.
+;assert.bail = Off
+
+; User-function to be called if an assertion fails.
+;assert.callback = 0
+
+; Eval the expression with current error_reporting(). Set to true if you want
+; error_reporting(0) around the eval().
+;assert.quiet_eval = 0
+
+[Verisign Payflow Pro]
+; Default Payflow Pro server.
+pfpro.defaulthost = "test-payflow.verisign.com"
+
+; Default port to connect to.
+pfpro.defaultport = 443
+
+; Default timeout in seconds.
+pfpro.defaulttimeout = 30
+
+; Default proxy IP address (if required).
+;pfpro.proxyaddress =
+
+; Default proxy port.
+;pfpro.proxyport =
+
+; Default proxy logon.
+;pfpro.proxylogon =
+
+; Default proxy password.
+;pfpro.proxypassword =
+
+[COM]
+; path to a file containing GUIDs, IIDs or filenames of files with TypeLibs
+;com.typelib_file =
+; allow Distributed-COM calls
+;com.allow_dcom = true
+; autoregister constants of a component's typelib on com_load()
+;com.autoregister_typelib = true
+; register constants case-sensitively
+;com.autoregister_casesensitive = false
+; show warnings on duplicate constant registrations
+;com.autoregister_verbose = true
+
+[mbstring]
+; language for internal character representation.
+;mbstring.language = Japanese
+
+; internal/script encoding.
+; Some encodings cannot work as the internal encoding
+; (e.g. SJIS, BIG5, ISO-2022-*).
+;mbstring.internal_encoding = EUC-JP
+
+; http input encoding.
+;mbstring.http_input = auto
+
+; http output encoding. mb_output_handler must be
+; registered as an output buffer for this to function.
+;mbstring.http_output = SJIS
+
+; enable automatic encoding translation according to
+; mbstring.internal_encoding setting. Input chars are
+; converted to internal encoding by setting this to On.
+; Note: Do _not_ use automatic encoding translation for
+; portable libs/applications.
+;mbstring.encoding_translation = Off
+
+; automatic encoding detection order.
+; auto means
+;mbstring.detect_order = auto
+
+; substitute_character is used when a character cannot be converted
+; from one encoding to another.
+;mbstring.substitute_character = none;
+
+; overload (replace) single-byte functions with mbstring functions.
+; mail(), ereg(), etc. are overloaded by mb_send_mail(), mb_ereg(),
+; etc. Possible values are 0, 1, 2, 4 or a combination of them.
+; For example, 7 overloads everything.
+; 0: No overload
+; 1: Overload mail() function
+; 2: Overload str*() functions
+; 4: Overload ereg*() functions
+;mbstring.func_overload = 0
+
+; enable strict encoding detection.
+;mbstring.strict_encoding = Off
+
+[FrontBase]
+;fbsql.allow_persistent = On
+;fbsql.autocommit = On
+;fbsql.default_database =
+;fbsql.default_database_password =
+;fbsql.default_host =
+;fbsql.default_password =
+;fbsql.default_user = "_SYSTEM"
+;fbsql.generate_warnings = Off
+;fbsql.max_connections = 128
+;fbsql.max_links = 128
+;fbsql.max_persistent = -1
+;fbsql.max_results = 128
+;fbsql.batchSize = 1000
+
+[gd]
+; Tell the jpeg decoder to ignore libjpeg warnings and try to create
+; a gd image anyway. The warnings will then be displayed as notices.
+; Disabled by default.
+;gd.jpeg_ignore_warning = 0
+
+[exif]
+; Exif UNICODE user comments are handled as UCS-2BE/UCS-2LE and JIS as JIS.
+; With mbstring support this will automatically be converted into the encoding
+; given by corresponding encode setting. When empty mbstring.internal_encoding
+; is used. For the decode settings you can distinguish between motorola and
+; intel byte order. A decode setting cannot be empty.
+;exif.encode_unicode = ISO-8859-15
+;exif.decode_unicode_motorola = UCS-2BE
+;exif.decode_unicode_intel = UCS-2LE
+;exif.encode_jis =
+;exif.decode_jis_motorola = JIS
+;exif.decode_jis_intel = JIS
+
+[Tidy]
+; The path to a default tidy configuration file to use when using tidy
+;tidy.default_config = /usr/local/lib/php/default.tcfg
+
+; Should tidy clean and repair output automatically?
+; WARNING: Do not use this option if you are generating non-html content
+; such as dynamic images
+tidy.clean_output = Off
+
+[soap]
+; Enables or disables WSDL caching feature.
+soap.wsdl_cache_enabled=1
+; Sets the directory name where SOAP extension will put cache files.
+soap.wsdl_cache_dir="/tmp"
+; Sets the number of seconds (time to live) that a cached file will be used
+; instead of the original one.
+soap.wsdl_cache_ttl=86400
+
+; Local Variables:
+; tab-width: 4
+; End:
--- /dev/null
+[main]
+ # Where Puppet stores dynamic and growing data.
+ # The default value is '/var/puppet'.
+ vardir = /var/lib/puppet
+
+ # The Puppet log directory.
+ # The default value is '$vardir/log'.
+ logdir = /var/log/puppet
+
+ # Where Puppet PID files are kept.
+ # The default value is '$vardir/run'.
+ rundir = /var/run/puppet
+
+ # Where SSL certificates are kept.
+ # The default value is '$confdir/ssl'.
+ ssldir = $vardir/ssl
+
+[puppetd]
+ # The file in which puppetd stores a list of the classes
+ # associated with the retrieved configuration. Can be loaded in
+ # the separate ``puppet`` executable using the ``--loadclasses``
+ # option.
+ # The default value is '$confdir/classes.txt'.
+ classfile = $vardir/classes.txt
+
+ # Where puppetd caches the local configuration. An
+ # extension indicating the cache format is added automatically.
+ # The default value is '$confdir/localconfig'.
+ localconfig = $vardir/localconfig
--- /dev/null
+; Created by cloud-init on instance boot automatically, do not edit.
+;
+search awsqualif.net aws.eu-west-1.censured_here
+nameserver 192.168.0.1
+nameserver 192.168.0.2
+options timeout:2 rotate
--- /dev/null
+# This is the main Samba configuration file. You should read the
+# smb.conf(5) manual page in order to understand the options listed
+# here. Samba has a huge number of configurable options (perhaps too
+# many!) most of which are not shown in this example
+#
+# For a step by step guide on installing, configuring and using samba,
+# read the Samba-HOWTO-Collection. This may be obtained from:
+# http://www.samba.org/samba/docs/Samba-HOWTO-Collection.pdf
+#
+# Many working examples of smb.conf files can be found in the
+# Samba-Guide which is generated daily and can be downloaded from:
+# http://www.samba.org/samba/docs/Samba-Guide.pdf
+#
+# Any line which starts with a ; (semi-colon) or a # (hash)
+# is a comment and is ignored. In this example we will use a #
+# for commentary and a ; for parts of the config file that you
+# may wish to enable
+#
+# NOTE: Whenever you modify this file you should run the command "testparm"
+# to check that you have not made any basic syntactic errors.
+#
+#---------------
+# SELINUX NOTES:
+#
+# If you want to use the useradd/groupadd family of binaries please run:
+# setsebool -P samba_domain_controller on
+#
+# If you want to share home directories via samba please run:
+# setsebool -P samba_enable_home_dirs on
+#
+# If you create a new directory you want to share you should mark it as
+# "samba_share_t" so that selinux will let you write into it.
+# Make sure not to do that on system directories as they may already have
+# been marked with other SELinux labels.
+#
+# Use ls -ldZ /path to see which context a directory has
+#
+# Set labels only on directories you created!
+# To set a label use the following: chcon -t samba_share_t /path
+#
+# If you need to share a system created directory you can use one of the
+# following (read-only/read-write):
+# setsebool -P samba_export_all_ro on
+# or
+# setsebool -P samba_export_all_rw on
+#
+# If you want to run scripts (preexec/root preexec/print command/...) please
+# put them into the /var/lib/samba/scripts directory so that smbd will be
+# allowed to run them.
+# Make sure you COPY them and do not MOVE them so that the right SELinux context
+# is applied; to check that all is OK, use restorecon -R -v /var/lib/samba/scripts
+#
+#--------------
+#
+#======================= Global Settings =====================================
+
+[global]
+
+# ----------------------- Network Related Options -------------------------
+#
+# workgroup = NT-Domain-Name or Workgroup-Name, eg: MIDEARTH
+#
+# server string is the equivalent of the NT Description field
+#
+# netbios name can be used to specify a server name not tied to the hostname
+#
+# Interfaces lets you configure Samba to use multiple interfaces
+# If you have multiple network interfaces then you can list the ones
+# you want to listen on (never omit localhost)
+#
+# Hosts Allow/Hosts Deny lets you restrict who can connect, and you can
+# specify it as a per share option as well
+#
+ workgroup = MYGROUP
+ server string = Samba Server Version %v
+
+; netbios name = MYSERVER
+
+; interfaces = lo eth0 192.168.12.2/24 192.168.13.2/24
+; hosts allow = 127. 192.168.12. 192.168.13.
+
+# --------------------------- Logging Options -----------------------------
+#
+# Log File lets you specify where to put logs and how to split them up.
+#
+# Max Log Size lets you specify the maximum size log files should reach
+
+ # logs split per machine
+ log file = /var/log/samba/log.%m
+ # max 50KB per log file, then rotate
+ max log size = 50
+
+# ----------------------- Standalone Server Options ------------------------
+#
+# Security can be set to user, share (deprecated) or server (deprecated)
+#
+# Backend to store user information in. New installations should
+# use either tdbsam or ldapsam. smbpasswd is available for backwards
+# compatibility. tdbsam requires no further configuration.
+
+ security = user
+ passdb backend = tdbsam
+
+
+# ----------------------- Domain Members Options ------------------------
+#
+# Security must be set to domain or ads
+#
+# Use the realm option only with security = ads
+# Specifies the Active Directory realm the host is part of
+#
+# Backend to store user information in. New installations should
+# use either tdbsam or ldapsam. smbpasswd is available for backwards
+# compatibility. tdbsam requires no further configuration.
+#
+# Use password server option only with security = server or if you can't
+# use the DNS to locate Domain Controllers
+# The argument list may include:
+# password server = My_PDC_Name [My_BDC_Name] [My_Next_BDC_Name]
+# or to auto-locate the domain controller(s)
+# password server = *
+
+
+; security = domain
+; passdb backend = tdbsam
+; realm = MY_REALM
+
+; password server = <NT-Server-Name>
+
+# ----------------------- Domain Controller Options ------------------------
+#
+# Security must be set to user for domain controllers
+#
+# Backend to store user information in. New installations should
+# use either tdbsam or ldapsam. smbpasswd is available for backwards
+# compatibility. tdbsam requires no further configuration.
+#
+# Domain Master specifies that Samba is to be the Domain Master Browser. This
+# allows Samba to collate browse lists between subnets. Don't use this
+# if you already have a Windows NT domain controller doing this job
+#
+# Domain Logons lets Samba be a domain logon server for Windows workstations.
+#
+# Logon Script lets you specify a script to be run at login time on the client
+# You need to provide it in a share called NETLOGON
+#
+# Logon Path lets you specify where user profiles are stored (UNC path)
+#
+# Various scripts can be used on a domain controller or stand-alone
+# machine to add or delete corresponding unix accounts
+#
+; security = user
+; passdb backend = tdbsam
+
+; domain master = yes
+; domain logons = yes
+
+ # the login script name depends on the machine name
+; logon script = %m.bat
+ # the login script name depends on the unix user used
+; logon script = %u.bat
+; logon path = \\%L\Profiles\%u
+ # disables profiles support by specifying an empty path
+; logon path =
+
+; add user script = /usr/sbin/useradd "%u" -n -g users
+; add group script = /usr/sbin/groupadd "%g"
+; add machine script = /usr/sbin/useradd -n -c "Workstation (%u)" -M -d /nohome -s /bin/false "%u"
+; delete user script = /usr/sbin/userdel "%u"
+; delete user from group script = /usr/sbin/userdel "%u" "%g"
+; delete group script = /usr/sbin/groupdel "%g"
+
+
+# ----------------------- Browser Control Options ----------------------------
+#
+# set local master to no if you don't want Samba to become a master
+# browser on your network. Otherwise the normal election rules apply
+#
+# OS Level determines the precedence of this server in master browser
+# elections. The default value should be reasonable
+#
+# Preferred Master causes Samba to force a local browser election on startup
+# and gives it a slightly higher chance of winning the election
+; local master = no
+; os level = 33
+; preferred master = yes
+
+#----------------------------- Name Resolution -------------------------------
+# Windows Internet Name Serving Support Section:
+# Note: Samba can be either a WINS Server, or a WINS Client, but NOT both
+#
+# - WINS Support: Tells the NMBD component of Samba to enable its WINS Server
+#
+# - WINS Server: Tells the NMBD component of Samba to be a WINS Client
+#
+# - WINS Proxy: Tells Samba to answer name resolution queries on
+# behalf of a non-WINS-capable client; for this to work there must be
+# at least one WINS Server on the network. The default is NO.
+#
+# DNS Proxy - tells Samba whether or not to try to resolve NetBIOS names
+# via DNS nslookups.
+
+; wins support = yes
+; wins server = w.x.y.z
+; wins proxy = yes
+
+; dns proxy = yes
+
+# --------------------------- Printing Options -----------------------------
+#
+# Load Printers lets you automatically load the list of printers rather
+# than setting them up individually
+#
+# Cups Options lets you pass custom options to the CUPS libraries; setting it
+# to raw for example will let you use drivers on your Windows clients
+#
+# Printcap Name lets you specify an alternative printcap file
+#
+# You can choose a non-default printing system using the Printing option
+
+ load printers = yes
+ cups options = raw
+
+; printcap name = /etc/printcap
+ # obtain list of printers automatically on SystemV
+; printcap name = lpstat
+; printing = cups
+
+# --------------------------- Filesystem Options ---------------------------
+#
+# The following options can be uncommented if the filesystem supports
+# Extended Attributes and they are enabled (usually by the mount option
+# user_xattr). These options will let the admin store the DOS attributes
+# in an EA and make samba not mess with the permission bits.
+#
+# Note: these options can also be set just per share, setting them in global
+# makes them the default for all shares
+
+; map archive = no
+; map hidden = no
+; map read only = no
+; map system = no
+; store dos attributes = yes
+
+
+#============================ Share Definitions ==============================
+
+[homes]
+ comment = Home Directories
+ browseable = no
+ writable = yes
+; valid users = %S
+; valid users = MYDOMAIN\%S
+
+[printers]
+ comment = All Printers
+ path = /var/spool/samba
+ browseable = no
+ guest ok = no
+ writable = no
+ printable = yes
+
+# Un-comment the following and create the netlogon directory for Domain Logons
+; [netlogon]
+; comment = Network Logon Service
+; path = /var/lib/samba/netlogon
+; guest ok = yes
+; writable = no
+; share modes = no
+
+
+# Un-comment the following to provide a specific roving profile share
+# the default is to use the user's home directory
+; [Profiles]
+; path = /var/lib/samba/profiles
+; browseable = no
+; guest ok = yes
+
+
+# A publicly accessible directory, but read only, except for people in
+# the "staff" group
+; [public]
+; comment = Public Stuff
+; path = /home/samba
+; public = yes
+; writable = yes
+; printable = no
+; write list = +staff
--- /dev/null
+# /etc/security/limits.conf
+#
+#Each line describes a limit for a user in the form:
+#
+#<domain> <type> <item> <value>
+#
+#Where:
+#<domain> can be:
+# - a user name
+# - a group name, with @group syntax
+# - the wildcard *, for the default entry
+# - the wildcard %, which can also be used with %group syntax,
+# for the maxlogin limit
+#
+#<type> can have the two values:
+# - "soft" for enforcing the soft limits
+# - "hard" for enforcing hard limits
+#
+#<item> can be one of the following:
+# - core - limits the core file size (KB)
+# - data - max data size (KB)
+# - fsize - maximum filesize (KB)
+# - memlock - max locked-in-memory address space (KB)
+# - nofile - max number of open files
+# - rss - max resident set size (KB)
+# - stack - max stack size (KB)
+# - cpu - max CPU time (MIN)
+# - nproc - max number of processes
+# - as - address space limit
+# - maxlogins - max number of logins for this user
+# - maxsyslogins - max number of logins on the system
+# - priority - the priority to run user process with
+# - locks - max number of file locks the user can hold
+# - sigpending - max number of pending signals
+# - msgqueue - max memory used by POSIX message queues (bytes)
+# - nice - max nice priority allowed to raise to
+# - rtprio - max realtime priority
+#
+#<domain> <type> <item> <value>
+#
+
+#* soft core 0
+#* hard rss 10000
+#@student hard nproc 20
+#@faculty soft nproc 20
+#@faculty hard nproc 50
+#ftp hard nproc 0
+#@student - maxlogins 4
+
+# End of file
+
+## Automatically appended by jack-audio-connection-kit
+@jackuser - rtprio 20
+@jackuser - memlock 4194304
+
+## Automatically appended by jack-audio-connection-kit
+@pulse-rt - rtprio 20
+@pulse-rt - nice -20
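As a quick illustration of the `<domain> <type> <item> <value>` grammar described in the header comments, here is a minimal parser sketch in Python; the sample entry is one of the jack-audio lines above, and the function is purely illustrative, not part of any PAM tooling:

```python
# Parse a limits.conf-style entry into its four fields.
# Whitespace separates fields; '#' starts a comment.

def parse_limit(line):
    """Return (domain, type, item, value), or None for comments/blanks."""
    line = line.split('#', 1)[0].strip()
    if not line:
        return None
    domain, ltype, item, value = line.split()
    return domain, ltype, item, value

print(parse_limit("@jackuser - rtprio 20"))
# → ('@jackuser', '-', 'rtprio', '20')
print(parse_limit("# End of file"))
# → None
```

Note that `-` in the type column (as in the jack entries) sets both the soft and hard limit at once.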
--- /dev/null
+# Authors: Jason Tang <jtang@tresys.com>
+#
+# Copyright (C) 2004-2005 Tresys Technology, LLC
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+#
+# Specify how libsemanage will interact with a SELinux policy manager.
+# The four options are:
+#
+# "source" - libsemanage manipulates a source SELinux policy
+# "direct" - libsemanage will write directly to a module store.
+# /foo/bar - Write by way of a policy management server, whose
+# named socket is at /foo/bar. The path must begin
+# with a '/'.
+# foo.com:4242 - Establish a TCP connection to a remote policy
+# management server at foo.com. If there is a colon
+# then the remainder is interpreted as a port number;
+# otherwise default to port 4242.
+module-store = direct
+
+# When generating the final linked and expanded policy, by default
+# semanage will set the policy version to POLICYDB_VERSION_MAX, as
+# given in <sepol/policydb.h>. Change this setting if a different
+# version is necessary.
+#policy-version = 19
+
+# expand-check checks neverallow rules when executing all semanage
+# commands. There might be a penalty in execution time if this
+# option is enabled.
+expand-check=0
+
+# usepasswd tells semanage to scan all password records for home directories
+# and set up the labeling correctly. If this is turned off, SELinux will label only /home
+# and home directories of users with SELinux login mappings defined; see
+# semanage login -l for the list of such users.
+# If you want to use a different home directory, you will need to use the semanage fcontext command.
+# For example, if you had home dirs in /althome directory you would have to execute
+# semanage fcontext -a -e /home /althome
+usepasswd=False
+bzip-small=true
+bzip-blocksize=5
+ignoredirs=/root;/bin;/boot;/dev;/etc;/lib;/lib64;/proc;/run;/sbin;/sys;/tmp;/usr;/var
+optimize-policy=true
+
+[sefcontext_compile]
+path = /usr/sbin/sefcontext_compile
+args = -r $@
+[end]
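The four `module-store` forms listed in the comments above can be told apart mechanically: the two keywords, a named-socket path beginning with `/`, and anything else taken as a TCP host with an optional port. A minimal sketch, using only the example values from those comments:

```python
# Classify a semanage module-store value per the rules in the
# comment block above (keywords, '/'-prefixed socket path, or host[:port]).

def classify_store(value):
    if value in ("source", "direct"):
        return value
    if value.startswith("/"):
        return "named-socket"  # policy management server at this path
    return "tcp"               # host[:port]; port defaults to 4242

print(classify_store("direct"))        # → direct
print(classify_store("/foo/bar"))      # → named-socket
print(classify_store("foo.com:4242"))  # → tcp
```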
--- /dev/null
+# /etc/services:
+# $Id: services,v 1.44 2008/04/07 21:30:33 pknirsch Exp $
+#
+#
+# Truncated version of Fedora's /etc/services, the original is gigantic
+#
+# Network services, Internet style
+#
+# Note that it is presently the policy of IANA to assign a single well-known
+# port number for both TCP and UDP; hence, most entries here have two entries
+# even if the protocol doesn't support UDP operations.
+# Updated from RFC 1700, ``Assigned Numbers'' (October 1994). Not all ports
+# are included, only the more common ones.
+#
+# The latest IANA port assignments can be gotten from
+# http://www.iana.org/assignments/port-numbers
+# The Well Known Ports are those from 0 through 1023.
+# The Registered Ports are those from 1024 through 49151
+# The Dynamic and/or Private Ports are those from 49152 through 65535
+#
+# Each line describes one service, and is of the form:
+#
+# service-name port/protocol [aliases ...] [# comment]
+
+tcpmux 1/tcp # TCP port service multiplexer
+tcpmux 1/udp # TCP port service multiplexer
+rje 5/tcp # Remote Job Entry
+rje 5/udp # Remote Job Entry
+echo 7/tcp
+echo 7/udp
+discard 9/tcp sink null
+discard 9/udp sink null
+systat 11/tcp users
+systat 11/udp users
+daytime 13/tcp
+daytime 13/udp
+qotd 17/tcp quote
+qotd 17/udp quote
+msp 18/tcp # message send protocol
+msp 18/udp # message send protocol
+chargen 19/tcp ttytst source
+chargen 19/udp ttytst source
+ftp-data 20/tcp
+ftp-data 20/udp
+# 21 is registered to ftp, but also used by fsp
+ftp 21/tcp
+ftp 21/udp fsp fspd
+ssh 22/tcp # SSH Remote Login Protocol
+ssh 22/udp # SSH Remote Login Protocol
+telnet 23/tcp
+telnet 23/udp
+# 24 - private mail system
+lmtp 24/tcp # LMTP Mail Delivery
+lmtp 24/udp # LMTP Mail Delivery
+smtp 25/tcp mail
+smtp 25/udp mail
+time 37/tcp timserver
+time 37/udp timserver
+rlp 39/tcp resource # resource location
+rlp 39/udp resource # resource location
+nameserver 42/tcp name # IEN 116
+nameserver 42/udp name # IEN 116
+nicname 43/tcp whois
+nicname 43/udp whois
+tacacs 49/tcp # Login Host Protocol (TACACS)
+tacacs 49/udp # Login Host Protocol (TACACS)
+re-mail-ck 50/tcp # Remote Mail Checking Protocol
+re-mail-ck 50/udp # Remote Mail Checking Protocol
+domain 53/tcp # name-domain server
+domain 53/udp
+whois++ 63/tcp
+whois++ 63/udp
+bootps 67/tcp # BOOTP server
+bootps 67/udp
+bootpc 68/tcp dhcpc # BOOTP client
+bootpc 68/udp dhcpc
+tftp 69/tcp
+tftp 69/udp
+gopher 70/tcp # Internet Gopher
+gopher 70/udp
+netrjs-1 71/tcp # Remote Job Service
+netrjs-1 71/udp # Remote Job Service
+netrjs-2 72/tcp # Remote Job Service
+netrjs-2 72/udp # Remote Job Service
+netrjs-3 73/tcp # Remote Job Service
+netrjs-3 73/udp # Remote Job Service
+netrjs-4 74/tcp # Remote Job Service
+netrjs-4 74/udp # Remote Job Service
+finger 79/tcp
+finger 79/udp
+http 80/tcp www www-http # WorldWideWeb HTTP
+http 80/udp www www-http # HyperText Transfer Protocol
+kerberos 88/tcp kerberos5 krb5 # Kerberos v5
+kerberos 88/udp kerberos5 krb5 # Kerberos v5
+supdup 95/tcp
+supdup 95/udp
+hostname 101/tcp hostnames # usually from sri-nic
+hostname 101/udp hostnames # usually from sri-nic
+iso-tsap 102/tcp tsap # part of ISODE.
+csnet-ns 105/tcp cso # also used by CSO name server
+csnet-ns 105/udp cso
+# unfortunately the poppassd (Eudora) uses a port which has already
+# been assigned to a different service. We list the poppassd as an
+# alias here. This should work for programs asking for this service.
+# (due to a bug in inetd the 3com-tsmux line is disabled)
+#3com-tsmux 106/tcp poppassd
+#3com-tsmux 106/udp poppassd
+rtelnet 107/tcp # Remote Telnet
+rtelnet 107/udp
+pop2 109/tcp pop-2 postoffice # POP version 2
+pop2 109/udp pop-2
+pop3 110/tcp pop-3 # POP version 3
+pop3 110/udp pop-3
+sunrpc 111/tcp portmapper rpcbind # RPC 4.0 portmapper TCP
+sunrpc 111/udp portmapper rpcbind # RPC 4.0 portmapper UDP
+auth 113/tcp authentication tap ident
+auth 113/udp authentication tap ident
+sftp 115/tcp
+sftp 115/udp
+uucp-path 117/tcp
+uucp-path 117/udp
+nntp 119/tcp readnews untp # USENET News Transfer Protocol
+nntp 119/udp readnews untp # USENET News Transfer Protocol
+ntp 123/tcp
+ntp 123/udp # Network Time Protocol
+netbios-ns 137/tcp # NETBIOS Name Service
+netbios-ns 137/udp
+netbios-dgm 138/tcp # NETBIOS Datagram Service
+netbios-dgm 138/udp
+netbios-ssn 139/tcp # NETBIOS session service
+netbios-ssn 139/udp
+imap 143/tcp imap2 # Interim Mail Access Proto v2
+imap 143/udp imap2
+snmp 161/tcp # Simple Net Mgmt Proto
+snmp 161/udp # Simple Net Mgmt Proto
+snmptrap 162/tcp # SNMPTRAP
+snmptrap 162/udp snmp-trap # Traps for SNMP
+cmip-man 163/tcp # ISO mgmt over IP (CMOT)
+cmip-man 163/udp
+cmip-agent 164/tcp
+cmip-agent 164/udp
+mailq 174/tcp # MAILQ
+mailq 174/udp # MAILQ
+xdmcp 177/tcp # X Display Mgr. Control Proto
+xdmcp 177/udp
+nextstep 178/tcp NeXTStep NextStep # NeXTStep window
+nextstep 178/udp NeXTStep NextStep # server
+bgp 179/tcp # Border Gateway Proto.
+bgp 179/udp
+prospero 191/tcp # Cliff Neuman's Prospero
+prospero 191/udp
+irc 194/tcp # Internet Relay Chat
+irc 194/udp
+smux 199/tcp # SNMP Unix Multiplexer
+smux 199/udp
+at-rtmp 201/tcp # AppleTalk routing
+at-rtmp 201/udp
+at-nbp 202/tcp # AppleTalk name binding
+at-nbp 202/udp
+at-echo 204/tcp # AppleTalk echo
+at-echo 204/udp
+at-zis 206/tcp # AppleTalk zone information
+at-zis 206/udp
+qmtp 209/tcp # Quick Mail Transfer Protocol
+qmtp 209/udp # Quick Mail Transfer Protocol
+z39.50 210/tcp z3950 wais # NISO Z39.50 database
+z39.50 210/udp z3950 wais
+ipx 213/tcp # IPX
+ipx 213/udp
+imap3 220/tcp # Interactive Mail Access
+imap3 220/udp # Protocol v3
+link 245/tcp ttylink
+link 245/udp ttylink
+fatserv 347/tcp # Fatmen Server
+fatserv 347/udp # Fatmen Server
+rsvp_tunnel 363/tcp
+rsvp_tunnel 363/udp
+odmr 366/tcp # odmr required by fetchmail
+odmr 366/udp # odmr required by fetchmail
+rpc2portmap 369/tcp
+rpc2portmap 369/udp # Coda portmapper
+codaauth2 370/tcp
+codaauth2 370/udp # Coda authentication server
+ulistproc 372/tcp ulistserv # UNIX Listserv
+ulistproc 372/udp ulistserv
+ldap 389/tcp
+ldap 389/udp
+svrloc 427/tcp # Server Location Protocol
+svrloc 427/udp # Server Location Protocol
+mobileip-agent 434/tcp
+mobileip-agent 434/udp
+mobilip-mn 435/tcp
+mobilip-mn 435/udp
+https 443/tcp # MCom
+https 443/udp # MCom
+snpp 444/tcp # Simple Network Paging Protocol
+snpp 444/udp # Simple Network Paging Protocol
+microsoft-ds 445/tcp
+microsoft-ds 445/udp
+kpasswd 464/tcp kpwd # Kerberos "passwd"
+kpasswd 464/udp kpwd # Kerberos "passwd"
+photuris 468/tcp
+photuris 468/udp
+saft 487/tcp # Simple Asynchronous File Transfer
+saft 487/udp # Simple Asynchronous File Transfer
+gss-http 488/tcp
+gss-http 488/udp
+pim-rp-disc 496/tcp
+pim-rp-disc 496/udp
+isakmp 500/tcp
+isakmp 500/udp
+gdomap 538/tcp # GNUstep distributed objects
+gdomap 538/udp # GNUstep distributed objects
+iiop 535/tcp
+iiop 535/udp
+dhcpv6-client 546/tcp
+dhcpv6-client 546/udp
+dhcpv6-server 547/tcp
+dhcpv6-server 547/udp
+rtsp 554/tcp # Real Time Stream Control Protocol
+rtsp 554/udp # Real Time Stream Control Protocol
+nntps 563/tcp # NNTP over SSL
+nntps 563/udp # NNTP over SSL
+whoami 565/tcp
+whoami 565/udp
+submission 587/tcp msa # mail message submission
+submission 587/udp msa # mail message submission
+npmp-local 610/tcp dqs313_qmaster # npmp-local / DQS
+npmp-local 610/udp dqs313_qmaster # npmp-local / DQS
+npmp-gui 611/tcp dqs313_execd # npmp-gui / DQS
+npmp-gui 611/udp dqs313_execd # npmp-gui / DQS
+hmmp-ind 612/tcp dqs313_intercell # HMMP Indication / DQS
+hmmp-ind 612/udp dqs313_intercell # HMMP Indication / DQS
+ipp 631/tcp # Internet Printing Protocol
+ipp 631/udp # Internet Printing Protocol
+ldaps 636/tcp # LDAP over SSL
+ldaps 636/udp # LDAP over SSL
+acap 674/tcp
+acap 674/udp
+ha-cluster 694/tcp # Heartbeat HA-cluster
+ha-cluster 694/udp # Heartbeat HA-cluster
+kerberos-adm 749/tcp # Kerberos `kadmin' (v5)
+kerberos-adm 749/udp # kerberos administration
+kerberos-iv 750/udp kerberos4 kerberos-sec kdc loadav
+kerberos-iv 750/tcp kerberos4 kerberos-sec kdc rfile
+webster 765/tcp # Network dictionary
+webster 765/udp
+phonebook 767/tcp # Network phonebook
+phonebook 767/udp
+rsync 873/tcp # rsync
+rsync 873/udp # rsync
+rquotad 875/tcp # rquota daemon
+rquotad 875/udp # rquota daemon
+telnets 992/tcp
+telnets 992/udp
+imaps 993/tcp # IMAP over SSL
+imaps 993/udp # IMAP over SSL
+ircs 994/tcp
+ircs 994/udp
+pop3s 995/tcp # POP-3 over SSL
+pop3s 995/udp # POP-3 over SSL
+
+#
+# UNIX specific services
+#
+exec 512/tcp
+biff 512/udp comsat
+login 513/tcp
+who 513/udp whod
+shell 514/tcp cmd # no passwords used
+syslog 514/udp
+printer 515/tcp spooler # line printer spooler
+printer 515/udp spooler # line printer spooler
+talk 517/udp
+ntalk 518/udp
+utime 519/tcp unixtime
+utime 519/udp unixtime
+efs 520/tcp
+router 520/udp route routed # RIP
+ripng 521/tcp
+ripng 521/udp
+timed 525/tcp timeserver
+timed 525/udp timeserver
+tempo 526/tcp newdate
+courier 530/tcp rpc
+conference 531/tcp chat
+netnews 532/tcp
+netwall 533/udp # -for emergency broadcasts
+uucp 540/tcp uucpd # uucp daemon
+klogin 543/tcp # Kerberized `rlogin' (v5)
+kshell 544/tcp krcmd # Kerberized `rsh' (v5)
+afpovertcp 548/tcp # AFP over TCP
+afpovertcp 548/udp # AFP over TCP
+remotefs 556/tcp rfs_server rfs # Brunhoff remote filesystem
+
+#
+# From ``PORT NUMBERS'':
+#
+#>REGISTERED PORT NUMBERS
+#>
+#>The Registered Ports are listed by the IANA and on most systems can be
+#>used by ordinary user processes or programs executed by ordinary
+#>users.
+#>
+#>Ports are used in the TCP [RFC793] to name the ends of logical
+#>connections which carry long term conversations. For the purpose of
+#>providing services to unknown callers, a service contact port is
+#>defined. This list specifies the port used by the server process as
+#>its contact port.
+#>
+#>The IANA registers uses of these ports as a convenience to the
+#>community.
+#
+socks 1080/tcp # socks proxy server
+socks 1080/udp # socks proxy server
+
+# Port 1236 is registered as `bvcontrol', but is also used by the
+# Gracilis Packeten remote config server. The official name is listed as
+# the primary name, with the unregistered name as an alias.
+bvcontrol 1236/tcp rmtcfg # Daniel J. Walsh, Gracilis Packeten remote config server
+bvcontrol 1236/udp # Daniel J. Walsh
+
+h323hostcallsc 1300/tcp # H323 Host Call Secure
+h323hostcallsc 1300/udp # H323 Host Call Secure
+ms-sql-s 1433/tcp # Microsoft-SQL-Server
+ms-sql-s 1433/udp # Microsoft-SQL-Server
+ms-sql-m 1434/tcp # Microsoft-SQL-Monitor
+ms-sql-m 1434/udp # Microsoft-SQL-Monitor
+ica 1494/tcp # Citrix ICA Client
+ica 1494/udp # Citrix ICA Client
+wins 1512/tcp # Microsoft's Windows Internet Name Service
+wins 1512/udp # Microsoft's Windows Internet Name Service
+ingreslock 1524/tcp
+ingreslock 1524/udp
+prospero-np 1525/tcp orasrv # Prospero non-privileged/oracle
+prospero-np 1525/udp orasrv
+datametrics 1645/tcp old-radius sightline # datametrics / old radius entry
+datametrics 1645/udp old-radius sightline # datametrics / old radius entry
+sa-msg-port 1646/tcp old-radacct # sa-msg-port / old radacct entry
+sa-msg-port 1646/udp old-radacct # sa-msg-port / old radacct entry
+kermit 1649/tcp
+kermit 1649/udp
+l2tp 1701/tcp l2f
+l2tp 1701/udp l2f
+h323gatedisc 1718/tcp
+h323gatedisc 1718/udp
+h323gatestat 1719/tcp
+h323gatestat 1719/udp
+h323hostcall 1720/tcp
+h323hostcall 1720/udp
+tftp-mcast 1758/tcp
+tftp-mcast 1758/udp
+mtftp 1759/udp spss-lm
+hello 1789/tcp
+hello 1789/udp
+radius 1812/tcp # Radius
+radius 1812/udp # Radius
+radius-acct 1813/tcp radacct # Radius Accounting
+radius-acct 1813/udp radacct # Radius Accounting
+mtp 1911/tcp #
+mtp 1911/udp #
+hsrp 1985/tcp # Cisco Hot Standby Router Protocol
+hsrp 1985/udp # Cisco Hot Standby Router Protocol
+licensedaemon 1986/tcp
+licensedaemon 1986/udp
+gdp-port 1997/tcp # Cisco Gateway Discovery Protocol
+gdp-port 1997/udp # Cisco Gateway Discovery Protocol
+sieve 2000/tcp cisco-sccp # Sieve Mail Filter Daemon
+sieve 2000/udp cisco-sccp # Sieve Mail Filter Daemon
+nfs 2049/tcp nfsd shilp
+nfs 2049/udp nfsd shilp
+zephyr-srv 2102/tcp # Zephyr server
+zephyr-srv 2102/udp # Zephyr server
+zephyr-clt 2103/tcp # Zephyr serv-hm connection
+zephyr-clt 2103/udp # Zephyr serv-hm connection
+zephyr-hm 2104/tcp # Zephyr hostmanager
+zephyr-hm 2104/udp # Zephyr hostmanager
+cvspserver 2401/tcp # CVS client/server operations
+cvspserver 2401/udp # CVS client/server operations
+venus 2430/tcp # codacon port
+venus 2430/udp # Venus callback/wbc interface
+venus-se 2431/tcp # tcp side effects
+venus-se 2431/udp # udp sftp side effect
+codasrv 2432/tcp # not used
+codasrv 2432/udp # server port
+codasrv-se 2433/tcp # tcp side effects
+codasrv-se 2433/udp # udp sftp side effect
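The per-line format stated near the top of this file, `service-name port/protocol [aliases ...] [# comment]`, can be parsed with a small sketch; the sample line is the `http` entry from the file itself:

```python
# Parse one /etc/services line into (name, port, protocol, aliases).
# Comments start at '#'; blank and comment-only lines yield None.

def parse_service(line):
    line = line.split('#', 1)[0].strip()
    if not line:
        return None
    fields = line.split()
    name, portproto = fields[0], fields[1]
    port, proto = portproto.split('/')
    return name, int(port), proto, fields[2:]

print(parse_service("http  80/tcp  www www-http  # WorldWideWeb HTTP"))
# → ('http', 80, 'tcp', ['www', 'www-http'])
```

This mirrors what `getservbyname(3)` does when resolving a service name to a port.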
--- /dev/null
+root:$5$rounds=1000$TMTRLLOM$h24vGZsHaf6aNdz3dsUuE4z/fy5at1Luuu.FBI6D6M:16200::999999:7:::
+bin:x:16200::999999:7:::
+daemon:x:16200::999999:7:::
+adm:x:16200::999999:7:::
+lp:x:16200::999999:7:::
+sync:x:16200::999999:7:::
+shutdown:x:16200::999999:7:::
+halt:x:16200::999999:7:::
+mail:x:16200::999999:7:::
+uucp:x:16200::999999:7:::
+operator:x:16200::999999:7:::
+games:x:16200::999999:7:::
+gopher:x:16200::999999:7:::
+ftp:x:16200::999999:7:::
+nobody:x:16200::999999:7:::
+vcsa:x:16200::999999:7:::
+rpc:x:16200::999999:7:::
+rpcuser:x:16200::999999:7:::
+nfsnobody:x:16200::999999:7:::
--- /dev/null
+
+# WELCOME TO SQUID 3.0.STABLE13
+# ----------------------------
+#
+# This is the default Squid configuration file. You may wish
+# to look at the Squid home page (http://www.squid-cache.org/)
+# for the FAQ and other documentation.
+#
+# The default Squid config file shows what the defaults for
+# various options happen to be. If you don't need to change the
+# default, you shouldn't uncomment the line. Doing so may cause
+# run-time problems. In some cases "none" refers to no default
+# setting at all, while in other cases it refers to a valid
+# option - the comments for that keyword indicate if this is the
+# case.
+#
+
+
+# Configuration options can be included using the "include" directive.
+# Include takes a list of files to include. Quoting and wildcards are
+# supported.
+#
+# For example,
+#
+# include /path/to/included/file/squid.acl.config
+#
+# Includes can be nested up to a hard-coded depth of 16 levels.
+# This arbitrary restriction is to prevent recursive include references
+# from causing Squid to enter an infinite loop whilst trying to load
+# configuration files.
+
+
+# OPTIONS FOR AUTHENTICATION
+# -----------------------------------------------------------------------------
+
+# TAG: auth_param
+# This is used to define parameters for the various authentication
+# schemes supported by Squid.
+#
+# format: auth_param scheme parameter [setting]
+#
+# The order in which authentication schemes are presented to the client
+# depends on the order in which each scheme first appears in the config file. IE
+# has a bug (it's not RFC 2617 compliant) in that it will use the basic
+# scheme if basic is the first entry presented, even if more secure
+# schemes are presented. For now use the order in the recommended
+# settings section below. If other browsers have difficulties (don't
+# recognize the schemes offered even if you are using basic) either
+# put basic first, or disable the other schemes (by commenting out their
+# program entry).
+#
+# Once an authentication scheme is fully configured, it can only be
+# shut down by shutting squid down and restarting. Changes can be made on
+# the fly and activated with a reconfigure. I.e. you can change to a
+# different helper, but not unconfigure the helper completely.
+#
+# Please note that while this directive defines how Squid processes
+# authentication it does not automatically activate authentication.
+# To use authentication you must in addition make use of ACLs based
+# on login name in http_access (proxy_auth, proxy_auth_regex or
+# external with %LOGIN used in the format tag). The browser will be
+# challenged for authentication on the first such acl encountered
+# in http_access processing and will also be re-challenged for new
+# login credentials if the request is being denied by a proxy_auth
+# type acl.
+#
+# WARNING: authentication can't be used in a transparently intercepting
+# proxy as the client then thinks it is talking to an origin server and
+# not the proxy. This is a limitation of bending the TCP/IP protocol to
+# transparently intercepting port 80, not a limitation in Squid.
+# Ports flagged 'transparent' or 'tproxy' have authentication disabled.
+#
+# === Parameters for the basic scheme follow. ===
+#
+# "program" cmdline
+# Specify the command for the external authenticator. Such a program
+# reads a line containing "username password" and replies "OK" or
+# "ERR" in an endless loop. "ERR" responses may optionally be followed
+# by an error description available as %m in the returned error page.
+# If you use an authenticator, make sure you have 1 acl of type proxy_auth.
+#
+# By default, the basic authentication scheme is not used unless a
+# program is specified.
+#
+# If you want to use the traditional NCSA proxy authentication, set
+# this line to something like
+#
+# auth_param basic program /usr/libexec/ncsa_auth /usr/etc/passwd
+#
+# "children" numberofchildren
+# The number of authenticator processes to spawn. If you start too few,
+# Squid will have to wait for them to process a backlog of credential
+# verifications, slowing it down. When password verifications are
+# done via a (slow) network you are likely to need lots of
+# authenticator processes.
+# auth_param basic children 5
+#
+# "concurrency" concurrency
+# The number of concurrent requests the helper can process.
+# The default of 0 is used for helpers that only support
+# one request at a time. Setting this changes the protocol used to
+# include a channel number first on the request/response line, allowing
+# multiple requests to be sent to the same helper in parallel without
+# waiting for the response.
+# Must not be set unless it's known the helper supports this.
+# auth_param basic concurrency 0
+#
+# "realm" realmstring
+# Specifies the realm name which is to be reported to the
+# client for the basic proxy authentication scheme (part of
+# the text the user will see when prompted for their username and
+# password). There is no default.
+# auth_param basic realm Squid proxy-caching web server
+#
+# "credentialsttl" timetolive
+# Specifies how long squid assumes an externally validated
+# username:password pair is valid for - in other words how
+# often the helper program is called for that user. Set this
+# low to force revalidation with short lived passwords. Note
+# setting this high does not impact your susceptibility
+# to replay attacks unless you are using a one-time password
+# system (such as SecureID). If you are using such a system,
+# you will be vulnerable to replay attacks unless you also
+# use the max_user_ip ACL in an http_access rule.
+#
+# "casesensitive" on|off
+# Specifies if usernames are case sensitive. Most user databases are
+# case insensitive allowing the same username to be spelled using both
+# lower and upper case letters, but some are case sensitive. This
+# makes a big difference for max_user_ip ACL processing and similar.
+# auth_param basic casesensitive off
+#
+# === Parameters for the digest scheme follow ===
+#
+# "program" cmdline
+# Specify the command for the external authenticator. Such
+# a program reads a line containing "username":"realm" and
+# replies with the appropriate H(A1) value hex encoded or
+# ERR if the user (or his H(A1) hash) does not exist.
+# See RFC 2617 for the definition of H(A1).
+# "ERR" responses may optionally be followed by a error description
+# available as %m in the returned error page.
+#
+# By default, the digest authentication scheme is not used unless a
+# program is specified.
+#
+# If you want to use a digest authenticator, set this line to
+# something like
+#
+# auth_param digest program /usr/bin/digest_auth_pw /usr/etc/digpass
+#
+# "children" numberofchildren
+# The number of authenticator processes to spawn (no default).
+# If you start too few, Squid will have to wait for them to
+# process a backlog of H(A1) calculations, slowing it down.
+# When the H(A1) calculations are done via a (slow) network
+# you are likely to need lots of authenticator processes.
+# auth_param digest children 5
+#
+# "realm" realmstring
+# Specifies the realm name which is to be reported to the
+# client for the digest proxy authentication scheme (part of
+# the text the user will see when prompted for their username and
+# password). There is no default.
+# auth_param digest realm Squid proxy-caching web server
+#
+# "nonce_garbage_interval" timeinterval
+# Specifies the interval that nonces that have been issued
+# to client agents are checked for validity.
+#
+# "nonce_max_duration" timeinterval
+# Specifies the maximum length of time a given nonce will be
+# valid for.
+#
+# "nonce_max_count" number
+# Specifies the maximum number of times a given nonce can be
+# used.
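+# Example values for the nonce parameters above, matching the
+# recommended minimum configuration:
+# auth_param digest nonce_garbage_interval 5 minutes
+# auth_param digest nonce_max_duration 30 minutes
+# auth_param digest nonce_max_count 50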
+#
+# "nonce_strictness" on|off
+# Determines if squid requires strict increment-by-1 behavior
+# for nonce counts, or just incrementing (off - for use when
+# user agents generate nonce counts that occasionally miss 1,
+# e.g. 1,2,4,6). Default off.
+#
+# "check_nonce_count" on|off
+# This directive if set to off can disable the nonce count check
+# completely to work around buggy digest qop implementations in
+# certain mainstream browser versions. Default on to check the
+# nonce count to protect from authentication replay attacks.
+#
+# "post_workaround" on|off
+# This is a workaround for certain buggy browsers that send
+# an incorrect request digest in POST requests when reusing
+# the same nonce as acquired earlier on a GET request.
+#
+# === NTLM scheme options follow ===
+#
+# "program" cmdline
+# Specify the command for the external NTLM authenticator.
+# Such a program reads exchanged NTLMSSP packets with
+# the browser via Squid until authentication is completed.
+# If you use an NTLM authenticator, make sure you have at least
+# one acl of type proxy_auth. By default, the NTLM authenticator_program
+# is not used.
+#
+# auth_param ntlm program /usr/bin/ntlm_auth
+#
+# "children" numberofchildren
+# The number of authenticator processes to spawn (no default).
+# If you start too few, Squid will have to wait for them to
+# process a backlog of credential verifications, slowing it
+# down. When credential verifications are done via a (slow)
+# network you are likely to need lots of authenticator
+# processes.
+#
+# auth_param ntlm children 5
+#
+# "keep_alive" on|off
+# If you experience problems with PUT/POST requests when using the
+# Negotiate authentication scheme then you can try setting this to
+# off. This will cause Squid to forcibly close the connection on
+# the initial requests where the browser asks which schemes are
+# supported by the proxy.
+#
+# auth_param ntlm keep_alive on
+#
+# === Options for configuring the NEGOTIATE auth-scheme follow ===
+#
+# "program" cmdline
+# Specify the command for the external Negotiate authenticator.
+# This protocol is used in Microsoft Active-Directory enabled setups with
+# the Microsoft Internet Explorer or Mozilla Firefox browsers.
+# Its main purpose is to exchange credentials with the Squid proxy
+# using the Kerberos mechanisms.
+# If you use a Negotiate authenticator, make sure you have at least one acl
+# of type proxy_auth active. By default, the negotiate authenticator_program
+# is not used.
+# The only supported program for this role is the ntlm_auth
+# program distributed as part of Samba, version 4 or later.
+#
+# auth_param negotiate program /usr/bin/ntlm_auth --helper-protocol=gss-spnego
+#
+# "children" numberofchildren
+# The number of authenticator processes to spawn (no default).
+# If you start too few, Squid will have to wait for them to
+# process a backlog of credential verifications, slowing it
+# down. When credential verifications are done via a (slow)
+# network you are likely to need lots of authenticator
+# processes.
+# auth_param negotiate children 5
+#
+# "keep_alive" on|off
+# If you experience problems with PUT/POST requests when using the
+# Negotiate authentication scheme then you can try setting this to
+# off. This will cause Squid to forcibly close the connection on
+# the initial requests where the browser asks which schemes are
+# supported by the proxy.
+#
+# auth_param negotiate keep_alive on
+#
+#Recommended minimum configuration per scheme:
+#auth_param negotiate program <uncomment and complete this line to activate>
+#auth_param negotiate children 5
+#auth_param negotiate keep_alive on
+#auth_param ntlm program <uncomment and complete this line to activate>
+#auth_param ntlm children 5
+#auth_param ntlm keep_alive on
+#auth_param digest program <uncomment and complete this line>
+#auth_param digest children 5
+#auth_param digest realm Squid proxy-caching web server
+#auth_param digest nonce_garbage_interval 5 minutes
+#auth_param digest nonce_max_duration 30 minutes
+#auth_param digest nonce_max_count 50
+#auth_param basic program <uncomment and complete this line>
+#auth_param basic children 5
+#auth_param basic realm Squid proxy-caching web server
+#auth_param basic credentialsttl 2 hours
+
+# TAG: authenticate_cache_garbage_interval
+# The time period between garbage collection across the username cache.
+# This is a tradeoff between memory utilization (long intervals - say
+# 2 days) and CPU (short intervals - say 1 minute). Only change if you
+# have good reason to.
+#
+#Default:
+# authenticate_cache_garbage_interval 1 hour
+
+# TAG: authenticate_ttl
+# The time a user & their credentials stay in the logged-in
+# user cache since their last request. When the garbage
+# interval passes, all user credentials that have passed their
+# TTL are removed from memory.
+#
+#Default:
+# authenticate_ttl 1 hour
+
+# TAG: authenticate_ip_ttl
+# If you use proxy authentication and the 'max_user_ip' ACL,
+# this directive controls how long Squid remembers the IP
+# addresses associated with each user. Use a small value
+# (e.g., 60 seconds) if your users might change addresses
+# quickly, as is the case with dialups. You might be safe
+# using a larger value (e.g., 2 hours) in a corporate LAN
+# environment with relatively static address assignments.
+#
+#Default:
+# authenticate_ip_ttl 0 seconds
+
+
+# ACCESS CONTROLS
+# -----------------------------------------------------------------------------
+
+# TAG: external_acl_type
+# This option defines external acl classes using a helper program
+# to look up the status.
+#
+# external_acl_type name [options] FORMAT.. /path/to/helper [helper arguments..]
+#
+# Options:
+#
+# ttl=n TTL in seconds for cached results (defaults to 3600
+# for 1 hour)
+# negative_ttl=n
+# TTL for cached negative lookups (default same
+# as ttl)
+# children=n Number of acl helper processes spawned to service
+# external acl lookups of this type. (default 5)
+# concurrency=n concurrency level per process. Only used with helpers
+# capable of processing more than one query at a time.
+# cache=n result cache size, 0 is unbounded (default)
+# grace=n Percentage remaining of TTL where a refresh of a
+# cached entry should be initiated without needing to
+# wait for a new reply. (default 0 for no grace period)
+# protocol=2.5 Compatibility mode for Squid-2.5 external acl helpers
+#
+# FORMAT specifications
+#
+# %LOGIN Authenticated user login name
+# %EXT_USER Username from external acl
+# %IDENT Ident user name
+# %SRC Client IP
+# %SRCPORT Client source port
+# %URI Requested URI
+# %DST Requested host
+# %PROTO Requested protocol
+# %PORT Requested port
+# %PATH Requested URL-path (including query-string if any)
+# %METHOD Request method
+# %MYADDR Squid interface address
+# %MYPORT Squid http_port number
+# %USER_CERT SSL User certificate in PEM format
+# %USER_CERTCHAIN SSL User certificate chain in PEM format
+# %USER_CERT_xx SSL User certificate subject attribute xx
+# %USER_CA_xx SSL User certificate issuer attribute xx
+# %{Header} HTTP request header
+# %{Hdr:member} HTTP request header list member
+# %{Hdr:;member}
+# HTTP request header list member using ; as
+# list separator. ; can be any non-alphanumeric
+# character.
+#
+# In addition to the above, any string specified in the referencing
+# acl will also be included in the helper request line, after the
+# specified formats (see the "acl external" directive)
+#
+# The helper receives lines per the above format specification,
+# and returns lines starting with OK or ERR indicating the validity
+# of the request and optionally followed by additional keywords with
+# more details.
+#
+# General result syntax:
+#
+# OK/ERR keyword=value ...
+#
+# Defined keywords:
+#
+# user= The user's name (login)
+# password= The user's password (for the login= cache_peer option)
+# message= Message describing the reason. Available as %o
+# in error pages
+# tag= Apply a tag to a request (for both ERR and OK results)
+# Only sets a tag, does not alter existing tags.
+# log= String to be logged in access.log. Available as
+# %ea in logformat specifications
+#
+# If protocol=3.0 (the default) then URL escaping is used to protect
+# each value in both requests and responses.
+#
+# If using protocol=2.5 then all values need to be enclosed in quotes
+# if they may contain whitespace, or the whitespace escaped using \.
+# And quotes or \ characters within the keyword value must be \ escaped.
+#
+# When using the concurrency= option the protocol is changed by
+# introducing a query channel tag in front of the request/response.
+# The query channel tag is a number between 0 and concurrency-1.
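+#
+# A hypothetical example (the helper name and path are illustrative,
+# not a shipped helper):
+#
+# external_acl_type usercheck ttl=300 children=5 %LOGIN %SRC /usr/local/bin/usercheck-helper
+# acl checked_users external usercheck
+# http_access allow checked_users
+#
+# For a request line such as "jsmith 10.0.0.1" the helper would reply
+# "OK" or "ERR", optionally followed by the keywords described above.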
+#
+#Default:
+# none
+
+# TAG: acl
+# Defining an Access List
+#
+# Every access list definition must begin with an aclname and acltype,
+# followed by either type-specific arguments or a quoted filename that
+# they are read from.
+#
+# acl aclname acltype argument ...
+# acl aclname acltype "file" ...
+#
+# When using "file", the file should contain one item per line.
+#
+# By default, regular expressions are CASE-SENSITIVE. To make
+# them case-insensitive, use the -i option.
+#
+#
+# ***** ACL TYPES AVAILABLE *****
+#
+# acl aclname src ip-address/netmask ... # client's IP address
+# acl aclname src addr1-addr2/netmask ... # range of addresses
+# acl aclname dst ip-address/netmask ... # URL host's IP address
+# acl aclname myip ip-address/netmask ... # local socket IP address
+#
+# acl aclname arp mac-address ... (xx:xx:xx:xx:xx:xx notation)
+# # The arp ACL requires the special configure option --enable-arp-acl.
+# # Furthermore, the ARP ACL code is not portable to all operating systems.
+# # It works on Linux, Solaris, Windows, FreeBSD, and some other *BSD variants.
+# #
+# # NOTE: Squid can only determine the MAC address for clients that are on
+# # the same subnet. If the client is on a different subnet, then Squid cannot
+# # find out its MAC address.
+#
+# acl aclname srcdomain .foo.com ... # reverse lookup, from client IP
+# acl aclname dstdomain .foo.com ... # Destination server from URL
+# acl aclname srcdom_regex [-i] \.foo\.com ... # regex matching client name
+# acl aclname dstdom_regex [-i] \.foo\.com ... # regex matching server
+# # For dstdomain and dstdom_regex a reverse lookup is tried if an
+# # IP-based URL is used and no match is found. The name "none" is used
+# # if the reverse lookup fails.
+#
+# acl aclname src_as number ...
+# acl aclname dst_as number ...
+# # Except for access control, AS numbers can be used for
+# # routing of requests to specific caches. Here's an
+# # example for routing all requests for AS#1241 and only
+# # those to mycache.mydomain.net:
+# # acl asexample dst_as 1241
+# # cache_peer_access mycache.mydomain.net allow asexample
+# # cache_peer_access mycache_mydomain.net deny all
+#
+# acl aclname time [day-abbrevs] [h1:m1-h2:m2]
+# # day-abbrevs:
+# # S - Sunday
+# # M - Monday
+# # T - Tuesday
+# # W - Wednesday
+# # H - Thursday
+# # F - Friday
+# # A - Saturday
+# # h1:m1 must be less than h2:m2
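+# # For example, to match weekday office hours:
+# # acl workhours time MTWHF 9:00-17:00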
+#
+# acl aclname url_regex [-i] ^http:// ... # regex matching on whole URL
+# acl aclname urlpath_regex [-i] \.gif$ ... # regex matching on URL path
+#
+# acl aclname port 80 70 21 ...
+# acl aclname port 0-1024 ... # ranges allowed
+# acl aclname myport 3128 ... # (local socket TCP port)
+# acl aclname myportname 3128 ... # http(s)_port name
+#
+# acl aclname proto HTTP FTP ...
+#
+# acl aclname method GET POST ...
+#
+# acl aclname http_status 200 301 500- 400-403 ... # status code in reply
+#
+# acl aclname browser [-i] regexp ...
+# # pattern match on User-Agent header (see also req_header below)
+#
+# acl aclname referer_regex [-i] regexp ...
+# # pattern match on Referer header
+# # Referer is highly unreliable, so use with care
+#
+# acl aclname ident username ...
+# acl aclname ident_regex [-i] pattern ...
+# # string match on ident output.
+# # use REQUIRED to accept any non-null ident.
+#
+# acl aclname proxy_auth [-i] username ...
+# acl aclname proxy_auth_regex [-i] pattern ...
+# # list of valid usernames
+# # use REQUIRED to accept any valid username.
+# #
+# # NOTE: when a Proxy-Authentication header is sent but it is not
+# # needed during ACL checking the username is NOT logged
+# # in access.log.
+# #
+# # NOTE: proxy_auth requires an EXTERNAL authentication program
+# # to check username/password combinations (see
+# # auth_param directive).
+# #
+# # NOTE: proxy_auth can't be used in a transparent/intercepting proxy
+# # as the browser needs to be configured for using a proxy in order
+# # to respond to proxy authentication.
+#
+# acl aclname snmp_community string ...
+# # A community string to limit access to your SNMP Agent
+# # Example:
+# #
+# # acl snmppublic snmp_community public
+#
+# acl aclname maxconn number
+# # This will be matched when the client's IP address has
+# # more than <number> HTTP connections established.
+#
+# acl aclname max_user_ip [-s] number
+# # This will be matched when the user attempts to log in from more
+# # than <number> different IP addresses. The authenticate_ip_ttl
+# # parameter controls the timeout on the IP entries.
+# # If -s is specified the limit is strict, denying browsing
+# # from any further IP addresses until the ttl has expired. Without
+# # -s Squid will just annoy the user by "randomly" denying requests.
+# # (the counter is reset each time the limit is reached and a
+# # request is denied)
+# # NOTE: in acceleration mode or where there is a mesh of child proxies,
+# # clients may appear to come from multiple addresses if they are
+# # going through proxy farms, so a limit of 1 may cause user problems.
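+# # For example, to strictly limit each user to two source addresses:
+# # acl two_ips max_user_ip -s 2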
+#
+# acl aclname req_mime_type [-i] mime-type ...
+# # regex match against the mime type of the request generated
+# # by the client. Can be used to detect file upload or some
+# # types of HTTP tunneling requests.
+# # NOTE: This does NOT match the reply. You cannot use this
+# # to match the returned file type.
+#
+# acl aclname req_header header-name [-i] any\.regex\.here
+# # regex match against any of the known request headers. May be
+# # thought of as a superset of "browser", "referer" and "mime-type"
+# # ACLs.
+#
+# acl aclname rep_mime_type [-i] mime-type ...
+# # regex match against the mime type of the reply received by
+# # squid. Can be used to detect file download or some
+# # types of HTTP tunneling requests.
+# # NOTE: This has no effect in http_access rules. It only has
+# # effect in rules that affect the reply data stream such as
+# # http_reply_access.
+#
+# acl aclname rep_header header-name [-i] any\.regex\.here
+# # regex match against any of the known reply headers. May be
+# # thought of as a superset of "browser", "referer" and "mime-type"
+# # ACLs.
+#
+# acl aclname external class_name [arguments...]
+# # external ACL lookup via a helper class defined by the
+# # external_acl_type directive.
+#
+# acl aclname user_cert attribute values...
+# # match against attributes in a user SSL certificate
+# # attribute is one of DN/C/O/CN/L/ST
+#
+# acl aclname ca_cert attribute values...
+# # match against attributes of a user's issuing CA SSL certificate
+# # attribute is one of DN/C/O/CN/L/ST
+#
+# acl aclname ext_user username ...
+# acl aclname ext_user_regex [-i] pattern ...
+# # string match on username returned by external acl helper
+# # use REQUIRED to accept any non-null user name.
+#
+#Examples:
+#acl macaddress arp 09:00:2b:23:45:67
+#acl myexample dst_as 1241
+#acl password proxy_auth REQUIRED
+#acl fileupload req_mime_type -i ^multipart/form-data$
+#acl javascript rep_mime_type -i ^application/x-javascript$
+#
+#Default:
+# acl all src all
+#
+#Recommended minimum configuration:
+acl manager proto cache_object
+acl localhost src 127.0.0.1/32
+acl to_localhost dst 127.0.0.0/8
+#
+# Example rule allowing access from your local networks.
+# Adapt to list your (internal) IP networks from where browsing
+# should be allowed
+acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
+acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
+acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
+#
+acl SSL_ports port 443
+acl Safe_ports port 80 # http
+acl Safe_ports port 21 # ftp
+acl Safe_ports port 443 # https
+acl Safe_ports port 70 # gopher
+acl Safe_ports port 210 # wais
+acl Safe_ports port 1025-65535 # unregistered ports
+acl Safe_ports port 280 # http-mgmt
+acl Safe_ports port 488 # gss-http
+acl Safe_ports port 591 # filemaker
+acl Safe_ports port 777 # multiling http
+acl CONNECT method CONNECT
+
+# TAG: http_access
+# Allowing or Denying access based on defined access lists
+#
+# Access to the HTTP port:
+# http_access allow|deny [!]aclname ...
+#
+# NOTE on default values:
+#
+# If there are no "access" lines present, the default is to deny
+# the request.
+#
+# If none of the "access" lines cause a match, the default is the
+# opposite of the last line in the list. If the last line was
+# deny, the default is allow. Conversely, if the last line
+# is allow, the default will be deny. For these reasons, it is a
+# good idea to have a "deny all" or "allow all" entry at the end
+# of your access lists to avoid potential confusion.
+#
+#Default:
+# http_access deny all
+#
+#Recommended minimum configuration:
+#
+# Only allow cachemgr access from localhost
+http_access allow manager localhost
+http_access deny manager
+# Deny requests to unknown ports
+http_access deny !Safe_ports
+# Deny CONNECT to other than SSL ports
+http_access deny CONNECT !SSL_ports
+#
+# We strongly recommend the following be uncommented to protect innocent
+# web applications running on the proxy server that assume the only
+# one able to access services on "localhost" is a local user
+#http_access deny to_localhost
+#
+# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
+
+# Example rule allowing access from your local networks.
+# Adapt localnet in the ACL section to list your (internal) IP networks
+# from where browsing should be allowed
+http_access allow localnet
+
+# And finally deny all other access to this proxy
+http_access allow localhost
+http_access deny all
+
+# TAG: http_reply_access
+# Allow replies to client requests. This is complementary to http_access.
+#
+# http_reply_access allow|deny [!] aclname ...
+#
+# NOTE: if there are no access lines present, the default is to allow
+# all replies
+#
+# If none of the access lines cause a match the opposite of the
+# last line will apply. Thus it is good practice to end the rules
+# with an "allow all" or "deny all" entry.
+#
+#Default:
+# none
+
+# TAG: icp_access
+# Allowing or Denying access to the ICP port based on defined
+# access lists
+#
+# icp_access allow|deny [!]aclname ...
+#
+# See http_access for details
+#
+#Default:
+# icp_access deny all
+#
+#Allow ICP queries from local networks only
+icp_access allow localnet
+icp_access deny all
+
+# TAG: htcp_access
+# Allowing or Denying access to the HTCP port based on defined
+# access lists
+#
+# htcp_access allow|deny [!]aclname ...
+#
+# See http_access for details
+#
+# NOTE: The default if no htcp_access lines are present is to
+# deny all traffic. This default may cause problems with peers
+# using the htcp or htcp-oldsquid options.
+#
+#Default:
+# htcp_access deny all
+#
+#Allow HTCP queries from local networks only
+htcp_access allow localnet
+htcp_access deny all
+
+# TAG: htcp_clr_access
+# Allowing or Denying access to purge content using HTCP based
+# on defined access lists
+#
+# htcp_clr_access allow|deny [!]aclname ...
+#
+# See http_access for details
+#
+##Allow HTCP CLR requests from trusted peers
+#acl htcp_clr_peer src 172.16.1.2
+#htcp_clr_access allow htcp_clr_peer
+#
+#Default:
+# htcp_clr_access deny all
+
+# TAG: miss_access
+# Use to force your neighbors to use you as a sibling instead of
+# a parent. For example:
+#
+# acl localclients src 172.16.0.0/16
+# miss_access allow localclients
+# miss_access deny !localclients
+#
+# This means only your local clients are allowed to fetch
+# MISSES and all other clients can only fetch HITS.
+#
+# By default, allow all clients who passed the http_access rules
+# to fetch MISSES from us.
+#
+#Default setting:
+# miss_access allow all
+
+# TAG: ident_lookup_access
+# A list of ACL elements which, if matched, cause an ident
+# (RFC 931) lookup to be performed for this request. For
+# example, you might choose to always perform ident lookups
+# for your main multi-user Unix boxes, but not for your Macs
+# and PCs. By default, ident lookups are not performed for
+# any requests.
+#
+# To enable ident lookups for specific client addresses, you
+# can follow this example:
+#
+# acl ident_aware_hosts src 192.168.1.0/255.255.255.0
+# ident_lookup_access allow ident_aware_hosts
+# ident_lookup_access deny all
+#
+# Only src type ACL checks are fully supported. A src_domain
+# ACL might work at times, but it will not always provide
+# the correct result.
+#
+#Default:
+# ident_lookup_access deny all
+
+# TAG: reply_body_max_size size [acl acl...]
+# This option specifies the maximum size of a reply body. It can be
+# used to prevent users from downloading very large files, such as
+# MP3's and movies. When the reply headers are received, the
+# reply_body_max_size lines are processed, and the first line where
+# all (if any) listed ACLs are true is used as the maximum body size
+# for this reply.
+#
+# This size is checked twice. First when we get the reply headers,
+# we check the content-length value. If the content length value exists
+# and is larger than the allowed size, the request is denied and the
+# user receives an error message that says "the request or reply
+# is too large." If there is no content-length, and the reply
+# size exceeds this limit, the client's connection is just closed
+# and they will receive a partial reply.
+#
+# WARNING: downstream caches probably can not detect a partial reply
+# if there is no content-length header, so they will cache
+# partial responses and give them out as hits. You should NOT
+# use this option if you have downstream caches.
+#
+# WARNING: A maximum size smaller than the size of squid's error messages
+# will cause an infinite loop and crash squid. Ensure that the smallest
+# non-zero value you use is greater than the maximum header size plus
+# the size of your largest error page.
+#
+# If you set this parameter to none (the default), there will be
+# no limit imposed.
+#
+# Configuration Format is:
+# reply_body_max_size SIZE UNITS [acl ...]
+# ie.
+# reply_body_max_size 10 MB
+#
+#
+#Default:
+# none
+
+
+# NETWORK OPTIONS
+# -----------------------------------------------------------------------------
+
+# TAG: http_port
+# Usage: port [options]
+# hostname:port [options]
+# 1.2.3.4:port [options]
+#
+# The socket addresses where Squid will listen for HTTP client
+# requests. You may specify multiple socket addresses.
+# There are three forms: port alone, hostname with port, and
+# IP address with port. If you specify a hostname or IP
+# address, Squid binds the socket to that specific
+# address. This replaces the old 'tcp_incoming_address'
+# option. Most likely, you do not need to bind to a specific
+# address, so you can use the port number alone.
+#
+# If you are running Squid in accelerator mode, you
+# probably want to listen on port 80 also, or instead.
+#
+# The -a command line option may be used to specify additional
+# port(s) where Squid listens for proxy requests. Such ports will
+# be plain proxy ports with no options.
+#
+# You may specify multiple socket addresses on multiple lines.
+#
+# Options:
+#
+# transparent Support for transparent interception of
+# outgoing requests without browser settings.
+# NP: disables authentication on the port.
+#
+# tproxy Support Linux TPROXY for spoofing outgoing
+# connections using the client IP address.
+# NP: disables authentication on the port.
+#
+# accel Accelerator mode. Also needs at least one of
+# vhost / vport / defaultsite.
+#
+# defaultsite=domainname
+# What to use for the Host: header if it is not present
+# in a request. Determines what site (not origin server)
+# accelerators should consider the default.
+# Implies accel.
+#
+# vhost Accelerator mode using Host header for virtual
+# domain support. Implies accel.
+#
+# vport Accelerator with IP based virtual host support.
+# Implies accel.
+#
+# vport=NN As above, but uses specified port number rather
+# than the http_port number. Implies accel.
+#
+# protocol= Protocol to reconstruct accelerated requests with.
+# Defaults to http.
+#
+# disable-pmtu-discovery=
+# Control Path-MTU discovery usage:
+# off lets OS decide on what to do (default).
+# transparent disable PMTU discovery when transparent
+# support is enabled.
+# always always disable PMTU discovery.
+#
+# In many setups of transparently intercepting proxies
+# Path-MTU discovery can not work on traffic towards the
+# clients. This is the case when the intercepting device
+# does not fully track connections and fails to forward
+# ICMP "must fragment" messages to the cache server. If you
+# have such a setup and certain clients sporadically hang or
+# never complete requests, set the disable-pmtu-discovery
+# option to 'transparent'.
+#
+# name= Specifies an internal name for the port. Defaults to
+# the port specification (port or addr:port)
+#
+# If you run Squid on a dual-homed machine with an internal
+# and an external interface we recommend you to specify the
+# internal address:port in http_port. This way Squid will only be
+# visible on the internal address.
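+#
+# For example, an accelerator answering for a single site (the domain
+# is illustrative) could use:
+# http_port 80 accel defaultsite=www.example.com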
+#
+# Squid normally listens to port 3128
+http_port 3128
+
+# TAG: https_port
+# Usage: [ip:]port cert=certificate.pem [key=key.pem] [options...]
+#
+# The socket address where Squid will listen for HTTPS client
+# requests.
+#
+# This is really only useful for situations where you are running
+# squid in accelerator mode and you want to do the SSL work at the
+# accelerator level.
+#
+# You may specify multiple socket addresses on multiple lines,
+# each with their own SSL certificate and/or options.
+#
+# Options:
+#
+# accel Accelerator mode. Also needs at least one of
+# defaultsite or vhost.
+#
+# defaultsite= The name of the https site presented on
+# this port. Implies accel.
+#
+# vhost Accelerator mode using Host header for virtual
+# domain support. Requires a wildcard certificate
+# or other certificate valid for more than one domain.
+# Implies accel.
+#
+# protocol= Protocol to reconstruct accelerated requests with.
+# Defaults to https.
+#
+# cert= Path to SSL certificate (PEM format).
+#
+# key= Path to SSL private key file (PEM format)
+# if not specified, the certificate file is
+# assumed to be a combined certificate and
+# key file.
+#
+# version= The version of SSL/TLS supported
+# 1 automatic (default)
+# 2 SSLv2 only
+# 3 SSLv3 only
+# 4 TLSv1 only
+#
+# cipher= Colon separated list of supported ciphers.
+#
+# options= Various SSL engine options. The most important
+# being:
+# NO_SSLv2 Disallow the use of SSLv2
+# NO_SSLv3 Disallow the use of SSLv3
+# NO_TLSv1 Disallow the use of TLSv1
+# SINGLE_DH_USE Always create a new key when using
+# temporary/ephemeral DH key exchanges
+# See src/ssl_support.c or OpenSSL SSL_CTX_set_options
+# documentation for a complete list of options.
+#
+# clientca= File containing the list of CAs to use when
+# requesting a client certificate.
+#
+# cafile= File containing additional CA certificates to
+# use when verifying client certificates. If unset
+# clientca will be used.
+#
+# capath= Directory containing additional CA certificates
+# and CRL lists to use when verifying client certificates.
+#
+# crlfile= File of additional CRL lists to use when verifying
+# the client certificate, in addition to CRLs stored in
+# the capath. Implies VERIFY_CRL flag below.
+#
+# dhparams= File containing DH parameters for temporary/ephemeral
+# DH key exchanges.
+#
+# sslflags= Various flags modifying the use of SSL:
+# DELAYED_AUTH
+# Don't request client certificates
+# immediately, but wait until acl processing
+# requires a certificate (not yet implemented).
+# NO_DEFAULT_CA
+# Don't use the default CA lists built in
+# to OpenSSL.
+# NO_SESSION_REUSE
+# Don't allow for session reuse. Each connection
+# will result in a new SSL session.
+# VERIFY_CRL
+# Verify CRL lists when accepting client
+# certificates.
+# VERIFY_CRL_ALL
+# Verify CRL lists for all certificates in the
+# client certificate chain.
+#
+# sslcontext= SSL session ID context identifier.
+#
+# vport Accelerator with IP based virtual host support.
+#
+# vport=NN As above, but uses specified port number rather
+# than the https_port number. Implies accel.
+#
+# name= Specifies an internal name for the port. Defaults to
+# the port specification (port or addr:port)
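+#
+# For example, an SSL accelerator for a single site (domain and
+# certificate path are illustrative) could use:
+# https_port 443 accel defaultsite=www.example.com cert=/etc/squid/site.pem
+# (with the private key combined into the certificate file, as
+# described under key= above)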
+#
+#
+#Default:
+# none
+
+# TAG: tcp_outgoing_tos
+# Allows you to select a TOS/Diffserv value to mark outgoing
+# connections with, based on the username or source address
+# making the request.
+#
+# tcp_outgoing_tos ds-field [!]aclname ...
+#
+# Example where normal_service_net uses the TOS value 0x00
+# and good_service_net uses 0x20
+#
+# acl normal_service_net src 10.0.0.0/255.255.255.0
+# acl good_service_net src 10.0.1.0/255.255.255.0
+# tcp_outgoing_tos 0x00 normal_service_net
+# tcp_outgoing_tos 0x20 good_service_net
+#
+# TOS/DSCP values really only have local significance - so you should
+# know what you're specifying. For more information, see RFC2474 and
+# RFC3260.
+#
+# The TOS/DSCP byte must be exactly that - an octet value 0 - 255, or
+# "default" to use whatever default your host has. Note that in
+# practice often only values 0 - 63 are usable as the two highest bits
+# have been redefined for use by ECN (RFC3168).
+#
+# Processing proceeds in the order specified, and stops at first fully
+# matching line.
+#
+# Note: The use of this directive using client dependent ACLs is
+# incompatible with the use of server side persistent connections. To
+# ensure correct results it is best to set server_persistent_connections
+# to off when using this directive in such configurations.
+#
+#Default:
+# none
+
+# TAG: clientside_tos
+# Allows you to select a TOS/Diffserv value to mark client-side
+# connections with, based on the username or source address
+# making the request.
+#
+#Default:
+# none
+
+# TAG: tcp_outgoing_address
+# Allows you to map requests to different outgoing IP addresses
+# based on the username or source address of the user making
+# the request.
+#
+# tcp_outgoing_address ipaddr [[!]aclname] ...
+#
+# Example where requests from 10.0.0.0/24 will be forwarded
+# with source address 10.1.0.1, 10.0.2.0/24 forwarded with
+# source address 10.1.0.2 and the rest will be forwarded with
+# source address 10.1.0.3.
+#
+# acl normal_service_net src 10.0.0.0/24
+# acl good_service_net src 10.0.2.0/24
+# tcp_outgoing_address 10.1.0.1 normal_service_net
+# tcp_outgoing_address 10.1.0.2 good_service_net
+# tcp_outgoing_address 10.1.0.3
+#
+# Processing proceeds in the order specified, and stops at first fully
+# matching line.
+#
+# Note: The use of this directive using client dependent ACLs is
+# incompatible with the use of server side persistent connections. To
+# ensure correct results it is best to set server_persistent_connections
+# to off when using this directive in such configurations.
+#
+#Default:
+# none
+
+
+# SSL OPTIONS
+# -----------------------------------------------------------------------------
+
+# TAG: ssl_unclean_shutdown
+# Some browsers (especially MSIE) bug out on SSL shutdown
+# messages.
+#
+#Default:
+# ssl_unclean_shutdown off
+
+# TAG: ssl_engine
+# The OpenSSL engine to use. You will need to set this if you
+# would like to use hardware SSL acceleration for example.
+#
+#Default:
+# none
+
+# TAG: sslproxy_client_certificate
+# Client SSL Certificate to use when proxying https:// URLs
+#
+#Default:
+# none
+
+# TAG: sslproxy_client_key
+# Client SSL Key to use when proxying https:// URLs
+#
+#Default:
+# none
+
+# TAG: sslproxy_version
+# SSL version level to use when proxying https:// URLs
+#
+#Default:
+# sslproxy_version 1
+
+# TAG: sslproxy_options
+# SSL engine options to use when proxying https:// URLs
+#
+#Default:
+# none
+
+# TAG: sslproxy_cipher
+# SSL cipher list to use when proxying https:// URLs
+#
+#Default:
+# none
+
+# TAG: sslproxy_cafile
+# file containing CA certificates to use when verifying server
+# certificates while proxying https:// URLs
+#
+#Default:
+# none
+
+# TAG: sslproxy_capath
+# directory containing CA certificates to use when verifying
+# server certificates while proxying https:// URLs
+#
+#Default:
+# none
+
+# TAG: sslproxy_flags
+# Various flags modifying the use of SSL while proxying https:// URLs:
+# DONT_VERIFY_PEER Accept certificates even if they fail to
+# verify.
+# NO_DEFAULT_CA Don't use the default CA list built in
+# to OpenSSL.
+#
+#Default:
+# none
+
+# TAG: sslpassword_program
+# Specify a program used for entering SSL key passphrases
+# when using encrypted SSL certificate keys. If not specified
+# keys must either be unencrypted, or Squid started with the -N
+# option to allow it to query interactively for the passphrase.
+#
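+# For example (illustrative; the helper path is hypothetical, and
+# the program is expected to print the passphrase on stdout):
+#
+#	sslpassword_program /usr/local/bin/ssl_password
+#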
+#Default:
+# none
+
+
+# OPTIONS WHICH AFFECT THE NEIGHBOR SELECTION ALGORITHM
+# -----------------------------------------------------------------------------
+
+# TAG: cache_peer
+# To specify other caches in a hierarchy, use the format:
+#
+# cache_peer hostname type http-port icp-port [options]
+#
+# For example,
+#
+# # proxy icp
+# # hostname type port port options
+# # -------------------- -------- ----- ----- -----------
+# cache_peer parent.foo.net parent 3128 3130 proxy-only default
+# cache_peer sib1.foo.net sibling 3128 3130 proxy-only
+# cache_peer sib2.foo.net sibling 3128 3130 proxy-only
+#
+# type: either 'parent', 'sibling', or 'multicast'.
+#
+# proxy-port: The port number where the cache listens for proxy
+# requests.
+#
+# icp-port: Used for querying neighbor caches about
+# objects. To have a non-ICP neighbor
+# specify '7' for the ICP port and make sure the
+# neighbor machine has the UDP echo port
+# enabled in its /etc/inetd.conf file.
+# NOTE: Also requires icp_port option enabled to send/receive
+# requests via this method.
+#
+# options: proxy-only
+# weight=n
+# basetime=n
+# ttl=n
+# no-query
+# background-ping
+# default
+# round-robin
+# weighted-round-robin
+# carp
+# userhash
+# sourcehash
+# multicast-responder
+# closest-only
+# no-digest
+# no-netdb-exchange
+# no-delay
+# login=user:password | PASS | *:password
+# connect-timeout=nn
+# digest-url=url
+# allow-miss
+# max-conn=n
+# htcp
+# htcp-oldsquid
+# originserver
+# name=xxx
+# forceddomain=name
+# ssl
+# sslcert=/path/to/ssl/certificate
+# sslkey=/path/to/ssl/key
+# sslversion=1|2|3|4
+# sslcipher=...
+# ssloptions=...
+# front-end-https[=on|auto]
+#
+# use 'proxy-only' to specify objects fetched
+# from this cache should not be saved locally.
+#
+# use 'weight=n' to affect the selection of a peer
+# during any weighted peer-selection mechanisms.
+# The weight must be an integer; default is 1,
+# larger weights are favored more.
+# This option does not affect parent selection if a peering
+# protocol is not in use.
+#
+# use 'basetime=n' to specify a base amount to
+# be subtracted from round trip times of parents.
+# It is subtracted before division by weight in calculating
+# which parent to fetch from. If the rtt is less than the
+# base time the rtt is set to a minimal value.
+#
+# use 'ttl=n' to specify an IP multicast TTL to use
+# when sending ICP queries to this address.
+# Only useful when sending to a multicast group.
+# Because we don't accept ICP replies from random
+# hosts, you must configure other group members as
+# peers with the 'multicast-responder' option below.
+#
+# use 'no-query' to NOT send ICP queries to this
+# neighbor.
+#
+# use 'background-ping' to only send ICP queries to this
+# neighbor infrequently. This is used to keep the neighbor
+# round trip time updated and is usually used in
+# conjunction with weighted-round-robin.
+#
+# use 'default' if this is a parent cache which can
+# be used as a "last-resort" if a peer cannot be located
+# by any of the peer-selection mechanisms.
+# If specified more than once, only the first is used.
+#
+# use 'round-robin' to define a set of parents which
+# should be used in a round-robin fashion in the
+# absence of any ICP queries.
+#
+# use 'weighted-round-robin' to define a set of parents
+# which should be used in a round-robin fashion with the
+# frequency of each parent being based on the round trip
+# time. Closer parents are used more often.
+# Usually used for background-ping parents.
+#
+# use 'carp' to define a set of parents which should
+# be used as a CARP array. The requests will be
+# distributed among the parents based on the CARP load
+# balancing hash function based on their weight.
+#
+# use 'userhash' to load-balance amongst a set of parents
+# based on the client proxy_auth or ident username.
+#
+# use 'sourcehash' to load-balance amongst a set of parents
+# based on the client source ip.
+#
+# 'multicast-responder' indicates the named peer
+# is a member of a multicast group. ICP queries will
+# not be sent directly to the peer, but ICP replies
+# will be accepted from it.
+#
+# 'closest-only' indicates that, for ICP_OP_MISS
+# replies, we'll only forward CLOSEST_PARENT_MISSes
+# and never FIRST_PARENT_MISSes.
+#
+# use 'no-digest' to NOT request cache digests from
+# this neighbor.
+#
+# 'no-netdb-exchange' disables requesting ICMP
+# RTT database (NetDB) from the neighbor.
+#
+# use 'no-delay' to prevent access to this neighbor
+# from influencing the delay pools.
+#
+# use 'login=user:password' if this is a personal/workgroup
+# proxy and your parent requires proxy authentication.
+# Note: The string can include URL escapes (e.g. %20 for
+# spaces). This also means % must be written as %%.
+#
+# use 'login=PASS' if users must authenticate against
+# the upstream proxy or in the case of a reverse proxy
+# configuration, the origin web server. This will pass
+# the user's credentials as they are to the peer.
+# This only works for the Basic HTTP authentication scheme.
+# Note: To combine this with proxy_auth both proxies must
+# share the same user database as HTTP only allows for
+# a single login (one for proxy, one for origin server).
+# Also be warned this will expose your users' proxy
+# passwords to the peer. USE WITH CAUTION
+#
+# use 'login=*:password' to pass the username to the
+# upstream cache, but with a fixed password. This is meant
+# to be used when the peer is in another administrative
+# domain, but it is still needed to identify each user.
+# The star can optionally be followed by some extra
+# information which is added to the username. This can
+# be used to identify this proxy to the peer, similar to
+# the login=username:password option above.
+#
+# use 'connect-timeout=nn' to specify a peer
+# specific connect timeout (also see the
+# peer_connect_timeout directive)
+#
+# use 'digest-url=url' to tell Squid to fetch the cache
+# digest (if digests are enabled) for this host from
+# the specified URL rather than the Squid default
+# location.
+#
+# use 'allow-miss' to disable Squid's use of only-if-cached
+# when forwarding requests to siblings. This is primarily
+# useful when icp_hit_stale is used by the sibling. Too
+# extensive use of this option may result in forwarding
+# loops, and you should avoid having two-way peerings
+# with this option. (for example to deny peer usage on
+# requests from peer by denying cache_peer_access if the
+# source is a peer)
+#
+# use 'max-conn=n' to limit the amount of connections Squid
+# may open to this peer.
+#
+# use 'htcp' to send HTCP, instead of ICP, queries
+# to the neighbor. You probably also want to
+# set the "icp port" to 4827 instead of 3130.
+# You MUST also set htcp_access explicitly. The default of
+# deny all will prevent peer traffic.
+#
+# use 'htcp-oldsquid' to send HTCP to old Squid versions
+# You MUST also set htcp_access explicitly. The default of
+# deny all will prevent peer traffic.
+#
+# 'originserver' causes this parent peer to be contacted as
+# an origin server. Meant to be used in accelerator setups.
+#
+# use 'name=xxx' if you have multiple peers on the same
+# host but different ports. This name can be used to
+# differentiate the peers in cache_peer_access and similar
+# directives.
+#
+# use 'forceddomain=name' to forcibly set the Host header
+# of requests forwarded to this peer. Useful in accelerator
+# setups where the server (peer) expects a certain domain
+# name and using redirectors to feed this domain name
+# is not feasible.
+#
+# use 'ssl' to indicate connections to this peer should
+# be SSL/TLS encrypted.
+#
+# use 'sslcert=/path/to/ssl/certificate' to specify a client
+# SSL certificate to use when connecting to this peer.
+#
+# use 'sslkey=/path/to/ssl/key' to specify the private SSL
+# key corresponding to sslcert above. If 'sslkey' is not
+# specified 'sslcert' is assumed to reference a
+# combined file containing both the certificate and the key.
+#
+# use sslversion=1|2|3|4 to specify the SSL version to use
+# when connecting to this peer
+# 1 = automatic (default)
+# 2 = SSL v2 only
+# 3 = SSL v3 only
+# 4 = TLS v1 only
+#
+# use sslcipher=... to specify the list of valid SSL ciphers
+# to use when connecting to this peer.
+#
+# use ssloptions=... to specify various SSL engine options:
+# NO_SSLv2 Disallow the use of SSLv2
+# NO_SSLv3 Disallow the use of SSLv3
+# NO_TLSv1 Disallow the use of TLSv1
+# See src/ssl_support.c or the OpenSSL documentation for
+# a more complete list.
+#
+# use sslcafile=... to specify a file containing
+# additional CA certificates to use when verifying the
+# peer certificate.
+#
+# use sslcapath=... to specify a directory containing
+# additional CA certificates to use when verifying the
+# peer certificate.
+#
+# use sslcrlfile=... to specify a certificate revocation
+# list file to use when verifying the peer certificate.
+#
+# use sslflags=... to specify various flags modifying the
+# SSL implementation:
+# DONT_VERIFY_PEER
+# Accept certificates even if they fail to
+# verify.
+# NO_DEFAULT_CA
+# Don't use the default CA list built in
+# to OpenSSL.
+# DONT_VERIFY_DOMAIN
+# Don't verify the peer certificate
+# matches the server name
+#
+# use ssldomain= to specify the peer name as advertised
+# in its certificate. Used for verifying the correctness
+# of the received peer certificate. If not specified the
+# peer hostname will be used.
+#
+# use front-end-https to enable the "Front-End-Https: On"
+# header needed when using Squid as an SSL frontend in front
+# of Microsoft OWA. See MS KB document Q307347 for details
+# on this header. If set to auto the header will
+# only be added if the request is forwarded as a https://
+# URL.
+#
+#Default:
+# none
+
+# TAG: cache_peer_domain
+# Use to limit the domains for which a neighbor cache will be
+# queried. Usage:
+#
+# cache_peer_domain cache-host domain [domain ...]
+# cache_peer_domain cache-host !domain
+#
+# For example, specifying
+#
+# cache_peer_domain parent.foo.net .edu
+#
+# has the effect such that UDP query packets are sent to
+# 'parent.foo.net' only when the requested object exists on a
+# server in the .edu domain. Prefixing the domain name
+# with '!' means the cache will be queried for objects
+# NOT in that domain.
+#
+# NOTE: * Any number of domains may be given for a cache-host,
+# either on the same or separate lines.
+# * When multiple domains are given for a particular
+# cache-host, the first matched domain is applied.
+# * Cache hosts with no domain restrictions are queried
+# for all requests.
+# * There are no defaults.
+# * There is also a 'cache_peer_access' tag in the ACL
+# section.
+#
+#Default:
+# none
+
+# TAG: cache_peer_access
+# Similar to 'cache_peer_domain' but provides more flexibility by
+# using ACL elements.
+#
+# cache_peer_access cache-host allow|deny [!]aclname ...
+#
+# The syntax is identical to 'http_access' and the other lists of
+# ACL elements. See the comments for 'http_access' below, or
+# the Squid FAQ (http://www.squid-cache.org/FAQ/FAQ-10.html).
+#
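+# For example, to use a parent only for requests from one subnet
+# (illustrative; the ACL name and peer hostname are hypothetical):
+#
+#	acl office_net src 10.0.5.0/24
+#	cache_peer_access parent.foo.net allow office_net
+#	cache_peer_access parent.foo.net deny all
+#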
+#Default:
+# none
+
+# TAG: neighbor_type_domain
+# usage: neighbor_type_domain neighbor parent|sibling domain domain ...
+#
+# Modifying the neighbor type for specific domains is now
+# possible. You can treat some domains differently than the
+# default neighbor type specified on the 'cache_peer' line.
+# Normally it should only be necessary to list domains which
+# should be treated differently because the default neighbor type
+# applies for hostnames which do not match domains listed here.
+#
+#EXAMPLE:
+# cache_peer cache.foo.org parent 3128 3130
+# neighbor_type_domain cache.foo.org sibling .com .net
+# neighbor_type_domain cache.foo.org sibling .au .de
+#
+#Default:
+# none
+
+# TAG: dead_peer_timeout (seconds)
+# This controls how long Squid waits to declare a peer cache
+# as "dead." If there are no ICP replies received in this
+# amount of time, Squid will declare the peer dead and not
+# expect to receive any further ICP replies. However, it
+# continues to send ICP queries, and will mark the peer as
+# alive upon receipt of the first subsequent ICP reply.
+#
+# This timeout also affects when Squid expects to receive ICP
+# replies from peers. If more than 'dead_peer' seconds have
+# passed since the last ICP reply was received, Squid will not
+# expect to receive an ICP reply on the next query. Thus, if
+# your time between requests is greater than this timeout, you
+# will see a lot of requests sent DIRECT to origin servers
+# instead of to your parents.
+#
+#Default:
+# dead_peer_timeout 10 seconds
+
+# TAG: hierarchy_stoplist
+# A list of words which, if found in a URL, cause the object to
+# be handled directly by this cache. In other words, use this
+# to not query neighbor caches for certain objects. You may
+# list this option multiple times.
+# Note: never_direct overrides this option.
+# We recommend you use at least the following line.
+hierarchy_stoplist cgi-bin ?
+
+
+# MEMORY CACHE OPTIONS
+# -----------------------------------------------------------------------------
+
+# TAG: cache_mem (bytes)
+# NOTE: THIS PARAMETER DOES NOT SPECIFY THE MAXIMUM PROCESS SIZE.
+# IT ONLY PLACES A LIMIT ON HOW MUCH ADDITIONAL MEMORY SQUID WILL
+# USE AS A MEMORY CACHE OF OBJECTS. SQUID USES MEMORY FOR OTHER
+# THINGS AS WELL. SEE THE SQUID FAQ SECTION 8 FOR DETAILS.
+#
+# 'cache_mem' specifies the ideal amount of memory to be used
+# for:
+# * In-Transit objects
+# * Hot Objects
+# * Negative-Cached objects
+#
+# Data for these objects are stored in 4 KB blocks. This
+# parameter specifies the ideal upper limit on the total size of
+# 4 KB blocks allocated. In-Transit objects take the highest
+# priority.
+#
+# In-transit objects have priority over the others. When
+# additional space is needed for incoming data, negative-cached
+# and hot objects will be released. In other words, the
+# negative-cached and hot objects will fill up any unused space
+# not needed for in-transit objects.
+#
+# If circumstances require, this limit will be exceeded.
+# Specifically, if your incoming request rate requires more than
+# 'cache_mem' of memory to hold in-transit objects, Squid will
+# exceed this limit to satisfy the new requests. When the load
+# decreases, blocks will be freed until the high-water mark is
+# reached. Thereafter, blocks will be used to store hot
+# objects.
+#
+#Default:
+# cache_mem 8 MB
+
+# TAG: maximum_object_size_in_memory (bytes)
+# Objects greater than this size will not be kept in
+# the memory cache. This should be set high enough to keep objects
+# accessed frequently in memory to improve performance whilst low
+# enough to keep larger objects from hoarding cache_mem.
+#
+#Default:
+# maximum_object_size_in_memory 8 KB
+
+# TAG: memory_replacement_policy
+# The memory replacement policy parameter determines which
+# objects are purged from memory when memory space is needed.
+#
+# See cache_replacement_policy for details.
+#
+#Default:
+# memory_replacement_policy lru
+
+
+# DISK CACHE OPTIONS
+# -----------------------------------------------------------------------------
+
+# TAG: cache_replacement_policy
+# The cache replacement policy parameter determines which
+# objects are evicted (replaced) when disk space is needed.
+#
+# lru : Squid's original list based LRU policy
+# heap GDSF : Greedy-Dual Size Frequency
+# heap LFUDA: Least Frequently Used with Dynamic Aging
+# heap LRU : LRU policy implemented using a heap
+#
+# Applies to any cache_dir lines listed below this.
+#
+# The LRU policies keep recently referenced objects.
+#
+# The heap GDSF policy optimizes object hit rate by keeping smaller
+# popular objects in cache so it has a better chance of getting a
+# hit. It achieves a lower byte hit rate than LFUDA though since
+# it evicts larger (possibly popular) objects.
+#
+# The heap LFUDA policy keeps popular objects in cache regardless of
+# their size and thus optimizes byte hit rate at the expense of
+# hit rate since one large, popular object will prevent many
+# smaller, slightly less popular objects from being cached.
+#
+# Both policies utilize a dynamic aging mechanism that prevents
+# cache pollution that can otherwise occur with frequency-based
+# replacement policies.
+#
+# NOTE: if using the LFUDA replacement policy you should increase
+# the value of maximum_object_size above its default of 4096 KB
+# to maximize the potential byte hit rate improvement of LFUDA.
+#
+# For more information about the GDSF and LFUDA cache replacement
+# policies see http://www.hpl.hp.com/techreports/1999/HPL-1999-69.html
+# and http://fog.hpl.external.hp.com/techreports/98/HPL-98-173.html.
+#
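+# For example, to favour byte hit ratio on the cache_dir lines
+# defined after it (illustrative):
+#
+#	cache_replacement_policy heap LFUDA
+#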
+#Default:
+# cache_replacement_policy lru
+
+# TAG: cache_dir
+# Usage:
+#
+# cache_dir Type Directory-Name Fs-specific-data [options]
+#
+# You can specify multiple cache_dir lines to spread the
+# cache among different disk partitions.
+#
+# Type specifies the kind of storage system to use. Only "ufs"
+# is built by default. To enable any of the other storage systems
+# see the --enable-storeio configure option.
+#
+# 'Directory' is a top-level directory where cache swap
+# files will be stored. If you want to use an entire disk
+# for caching, this can be the mount-point directory.
+# The directory must exist and be writable by the Squid
+# process. Squid will NOT create this directory for you.
+#
+# The ufs store type:
+#
+# "ufs" is the old well-known Squid storage format that has always
+# been there.
+#
+# cache_dir ufs Directory-Name Mbytes L1 L2 [options]
+#
+# 'Mbytes' is the amount of disk space (MB) to use under this
+# directory. The default is 100 MB. Change this to suit your
+# configuration. Do NOT put the size of your disk drive here.
+# Instead, if you want Squid to use the entire disk drive,
+# subtract 20% and use that value.
+#
+# 'Level-1' is the number of first-level subdirectories which
+# will be created under the 'Directory'. The default is 16.
+#
+# 'Level-2' is the number of second-level subdirectories which
+# will be created under each first-level directory. The default
+# is 256.
+#
+# The aufs store type:
+#
+# "aufs" uses the same storage format as "ufs", utilizing
+# POSIX-threads to avoid blocking the main Squid process on
+# disk-I/O. This was formerly known in Squid as async-io.
+#
+# cache_dir aufs Directory-Name Mbytes L1 L2 [options]
+#
+# see argument descriptions under ufs above
+#
+# The diskd store type:
+#
+# "diskd" uses the same storage format as "ufs", utilizing a
+# separate process to avoid blocking the main Squid process on
+# disk-I/O.
+#
+# cache_dir diskd Directory-Name Mbytes L1 L2 [options] [Q1=n] [Q2=n]
+#
+# see argument descriptions under ufs above
+#
+# Q1 specifies the number of unacknowledged I/O requests when Squid
+# stops opening new files. If this many messages are in the queues,
+# Squid won't open new files. Default is 64
+#
+# Q2 specifies the number of unacknowledged messages when Squid
+# starts blocking. If this many messages are in the queues,
+# Squid blocks until it receives some replies. Default is 72
+#
+# When Q1 < Q2 (the default), the cache directory is optimized
+# for lower response time at the expense of a decrease in hit
+# ratio. If Q1 > Q2, the cache directory is optimized for
+# higher hit ratio at the expense of an increase in response
+# time.
+#
+# The coss store type:
+#
+# NP: COSS filesystem in 3.0 has been deemed too unstable for
+# production use and has thus been removed from this 3.0
+# STABLE release. We hope that it can be made usable again
+# in a future release.
+#
+# block-size=n defines the "block size" for COSS cache_dirs.
+# Squid uses file numbers as block numbers. Since file numbers
+# are limited to 24 bits, the block size determines the maximum
+# size of the COSS partition. The default is 512 bytes, which
+# leads to a maximum cache_dir size of 512<<24, or 8 GB. Note
+# you should not change the coss block size after Squid
+# has written some objects to the cache_dir.
+#
+# The coss file store has changed from 2.5. Now it uses a file
+# called 'stripe' in the directory names in the config - and
+# this will be created by squid -z.
+#
+# The null store type:
+#
+# no options are allowed or required
+#
+# Common options:
+#
+# no-store, no new objects should be stored to this cache_dir
+#
+# max-size=n, refers to the max object size this storedir supports.
+# It is used to initially choose the storedir to dump the object.
+# Note: To make optimal use of the max-size limits you should order
+# the cache_dir lines with the smallest max-size value first and the
+# ones with no max-size specification last.
+#
+# Note for coss, max-size must be less than COSS_MEMBUF_SZ,
+# which can be changed with the --with-coss-membuf-size=N configure
+# option.
+#
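+# For example, to spread the cache over two disks with a per-store
+# object size limit (illustrative paths and sizes; assumes Squid
+# was built with aufs support via --enable-storeio):
+#
+#	cache_dir aufs /cache1 7000 16 256 max-size=1048576
+#	cache_dir aufs /cache2 7000 16 256
+#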
+#Default:
+# cache_dir ufs /var/spool/squid 100 16 256
+
+# TAG: store_dir_select_algorithm
+# Set this to 'round-robin' as an alternative.
+#
+#Default:
+# store_dir_select_algorithm least-load
+
+# TAG: max_open_disk_fds
+# To avoid having disk as the I/O bottleneck Squid can optionally
+# bypass the on-disk cache if more than this amount of disk file
+# descriptors are open.
+#
+# A value of 0 indicates no limit.
+#
+#Default:
+# max_open_disk_fds 0
+
+# TAG: minimum_object_size (bytes)
+# Objects smaller than this size will NOT be saved on disk. The
+# value is specified in kilobytes, and the default is 0 KB, which
+# means there is no minimum.
+#
+#Default:
+# minimum_object_size 0 KB
+
+# TAG: maximum_object_size (bytes)
+# Objects larger than this size will NOT be saved on disk. The
+# value is specified in kilobytes, and the default is 4MB. If
+# you wish to get a high BYTES hit ratio, you should probably
+# increase this (one 32 MB object hit counts for 3200 10KB
+# hits). If you wish to increase speed more than you want to
+# save bandwidth you should leave this low.
+#
+# NOTE: if using the LFUDA replacement policy you should increase
+# this value to maximize the byte hit rate improvement of LFUDA!
+# See replacement_policy below for a discussion of this policy.
+#
+#Default:
+# maximum_object_size 4096 KB
+
+# TAG: cache_swap_low (percent, 0-100)
+# TAG: cache_swap_high (percent, 0-100)
+#
+# The low- and high-water marks for cache object replacement.
+# Replacement begins when the swap (disk) usage is above the
+# low-water mark and attempts to maintain utilization near the
+# low-water mark. As swap utilization gets close to high-water
+# mark object eviction becomes more aggressive. If utilization is
+# close to the low-water mark less replacement is done each time.
+#
+# Defaults are 90% and 95%. If you have a large cache, 5% could be
+# hundreds of MB. If this is the case you may wish to set these
+# numbers closer together.
+#
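+# For example, on a large cache the marks can be set closer
+# together (illustrative values):
+#
+#	cache_swap_low 93
+#	cache_swap_high 95
+#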
+#Default:
+# cache_swap_low 90
+# cache_swap_high 95
+
+
+# LOGFILE OPTIONS
+# -----------------------------------------------------------------------------
+
+# TAG: logformat
+# Usage:
+#
+# logformat <name> <format specification>
+#
+# Defines an access log format.
+#
+# The <format specification> is a string with embedded % format codes
+#
+# % format codes all follow the same basic structure where all but
+# the formatcode is optional. Output strings are automatically escaped
+# as required according to their context and the output format
+# modifiers are usually not needed, but can be specified if an explicit
+# output format is desired.
+#
+# % ["|[|'|#] [-] [[0]width] [{argument}] formatcode
+#
+# " output in quoted string format
+# [ output in squid text log format as used by log_mime_hdrs
+# # output in URL quoted format
+# ' output as-is
+#
+# - left aligned
+# width field width. If starting with 0 the
+# output is zero padded
+# {arg} argument such as header name etc
+#
+# Format codes:
+#
+# >a Client source IP address
+# >A Client FQDN
+# >p Client source port
+# <A Server IP address or peer name
+# la Local IP address (http_port)
+# lp Local port number (http_port)
+# ts Seconds since epoch
+# tu subsecond time (milliseconds)
+# tl Local time. Optional strftime format argument
+# default %d/%b/%Y:%H:%M:%S %z
+# tg GMT time. Optional strftime format argument
+# default %d/%b/%Y:%H:%M:%S %z
+# tr Response time (milliseconds)
+# >h Request header. Optional header name argument
+# on the format header[:[separator]element]
+# <h Reply header. Optional header name argument
+# as for >h
+# un User name
+# ul User name from authentication
+# ui User name from ident
+# us User name from SSL
+# ue User name from external acl helper
+# Hs HTTP status code
+# Ss Squid request status (TCP_MISS etc)
+# Sh Squid hierarchy status (DEFAULT_PARENT etc)
+# mt MIME content type
+# rm Request method (GET/POST etc)
+# ru Request URL
+# rp Request URL-Path excluding hostname
+# rv Request protocol version
+# et Tag returned by external acl
+# ea Log string returned by external acl
+# <st Reply size including HTTP headers
+# >st Request size including HTTP headers
+# st Request+Reply size including HTTP headers
+# <sH Reply high offset sent
+# <sS Upstream object size
+# % a literal % character
+#
+# The default formats available (which do not need re-defining) are:
+#
+#logformat squid %ts.%03tu %6tr %>a %Ss/%03Hs %<st %rm %ru %un %Sh/%<A %mt
+#logformat squidmime %ts.%03tu %6tr %>a %Ss/%03Hs %<st %rm %ru %un %Sh/%<A %mt [%>h] [%<h]
+#logformat common %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st %Ss:%Sh
+#logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh
+#
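+# For example, a custom format resembling the common format but with
+# the response time appended (the name "timed" is hypothetical; the
+# format codes used are listed above):
+#
+#	logformat timed %>a %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st %6tr
+#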
+#Default:
+# none
+
+# TAG: access_log
+# These files log client request activities. Has a line for every
+# HTTP or ICP request. The format is:
+# access_log <filepath> [<logformat name> [acl acl ...]]
+# access_log none [acl acl ...]
+#
+# Will log to the specified file using the specified format (which
+# must be defined in a logformat directive) those entries which match
+# ALL the ACLs specified (which must be defined in acl clauses).
+# If no acl is specified, all requests will be logged to this file.
+#
+# To disable logging of a request use the filepath "none", in which case
+# a logformat name should not be specified.
+#
+# To log the request via syslog specify a filepath of "syslog":
+#
+# access_log syslog[:facility.priority] [format [acl1 [acl2 ....]]]
+# where facility could be any of:
+# authpriv, daemon, local0 .. local7 or user.
+#
+# And priority could be any of:
+# err, warning, notice, info, debug.
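+#
+# For example, to log only requests matching an ACL to a separate
+# file (illustrative; assumes the ACL and logformat are defined
+# elsewhere in this file):
+#
+#	access_log /var/log/squid/intranet.log squid intranet_acl
+#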
+access_log /var/log/squid/access.log squid
+
+# TAG: log_access allow|deny acl acl...
+# This option allows you to control which requests get logged
+# to access.log (see access_log directive). Requests denied for
+# logging will also not be accounted for in performance counters.
+#
+#Default:
+# none
+
+# TAG: cache_log
+# Cache logging file. This is where general information about
+# your cache's behavior goes. You can increase the amount of data
+# logged to this file with the "debug_options" tag below.
+#
+#Default:
+# cache_log /var/log/squid/cache.log
+
+# TAG: cache_store_log
+# Logs the activities of the storage manager. Shows which
+# objects are ejected from the cache, and which objects are
+# saved and for how long. To disable, enter "none". There are
+# not really any utilities to analyze this data, so you can safely
+# disable it.
+#
+#Default:
+# cache_store_log /var/log/squid/store.log
+
+# TAG: cache_swap_state
+# Location for the cache "swap.state" file. This index file holds
+# the metadata of objects saved on disk. It is used to rebuild
+# the cache during startup. Normally this file resides in each
+# 'cache_dir' directory, but you may specify an alternate
+# pathname here. Note you must give a full filename, not just
+# a directory. Since this is the index for the whole object
+# list you CANNOT periodically rotate it!
+#
+# If %s can be used in the file name it will be replaced with
+# a representation of the cache_dir name where each / is replaced
+# with '.'. This is needed to allow adding/removing cache_dir
+# lines when cache_swap_log is being used.
+#
+# If you have more than one 'cache_dir', and %s is not used in the name
+# these swap logs will have names such as:
+#
+# cache_swap_log.00
+# cache_swap_log.01
+# cache_swap_log.02
+#
+# The numbered extension (which is added automatically)
+# corresponds to the order of the 'cache_dir' lines in this
+# configuration file. If you change the order of the 'cache_dir'
+# lines in this file, these index files will NOT correspond to
+# the correct 'cache_dir' entry (unless you manually rename
+# them). We recommend you do NOT use this option. It is
+# better to keep these index files in each 'cache_dir' directory.
+#
+#Default:
+# none
+
+# TAG: logfile_rotate
+# Specifies the number of logfile rotations to make when you
+# type 'squid -k rotate'. The default is 10, which will rotate
+# with extensions 0 through 9. Setting logfile_rotate to 0 will
+# disable the file name rotation, but the logfiles are still closed
+# and re-opened. This will enable you to rename the logfiles
+# yourself just before sending the rotate signal.
+#
+# Note, the 'squid -k rotate' command normally sends a USR1
+# signal to the running squid process. In certain situations
+# (e.g. on Linux with Async I/O), USR1 is used for other
+# purposes, so -k rotate uses another signal. It is best to get
+# in the habit of using 'squid -k rotate' instead of 'kill -USR1
+# <pid>'.
+#logfile_rotate 0
+#
+#Default:
+# logfile_rotate 0
+
+# TAG: emulate_httpd_log on|off
+# The Cache can emulate the log file format which many 'httpd'
+# programs use. To disable/enable this emulation, set
+# emulate_httpd_log to 'off' or 'on'. The default
+# is to use the native log format since it includes useful
+# information Squid-specific log analyzers use.
+#
+#Default:
+# emulate_httpd_log off
+
+# TAG: log_ip_on_direct on|off
+# Log the destination IP address in the hierarchy log tag when going
+# direct. Earlier Squid versions logged the hostname here. If you
+# prefer the old way set this to off.
+#
+#Default:
+# log_ip_on_direct on
+
+# TAG: mime_table
+# Pathname to Squid's MIME table. You shouldn't need to change
+# this, but the default file contains examples and formatting
+# information if you do.
+#
+#Default:
+# mime_table /etc/squid/mime.conf
+
+# TAG: log_mime_hdrs on|off
+# The Cache can record both the request and the response MIME
+# headers for each HTTP transaction. The headers are encoded
+# safely and will appear as two bracketed fields at the end of
+# the access log (for either the native or httpd-emulated log
+# formats). To enable this logging set log_mime_hdrs to 'on'.
+#
+#Default:
+# log_mime_hdrs off
+
+# TAG: useragent_log
+# Squid will write the User-Agent field from HTTP requests
+# to the filename specified here. By default useragent_log
+# is disabled.
+#
+#Default:
+# none
+
+# TAG: referer_log
+# Squid will write the Referer field from HTTP requests to the
+# filename specified here. By default referer_log is disabled.
+# Note that "referer" is actually a misspelling of "referrer"
+# however the misspelt version has been accepted into the HTTP RFCs
+# and we accept both.
+#
+#Default:
+# none
+
+# TAG: pid_filename
+# A filename to write the process-id to. To disable, enter "none".
+#
+#Default:
+# pid_filename /var/run/squid.pid
+
+# TAG: debug_options
+# Logging options are set as section,level where each source file
+# is assigned a unique section. Lower levels result in less
+# output. Full debugging (level 9) can result in a very large
+# log file, so be careful. The magic word "ALL" sets debugging
+# levels for all sections. We recommend normally running with
+# "ALL,1".
+#
+#Default:
+# debug_options ALL,1
+
+# TAG: log_fqdn on|off
+# Turn this on if you wish to log fully qualified domain names
+# in the access.log. To do this Squid does a DNS lookup of all
+# IP's connecting to it. This can (in some situations) increase
+# latency, which makes your cache seem slower for interactive
+# browsing.
+#
+#Default:
+# log_fqdn off
+
+# TAG: client_netmask
+# A netmask for client addresses in logfiles and cachemgr output.
+# Change this to protect the privacy of your cache clients.
+# A netmask of 255.255.255.0 will log all IP's in that range with
+# the last octet set to '0'.
+#
+#Default:
+# client_netmask 255.255.255.255
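+#
+# For example (illustrative, not a default), to record only the /24
+# network of each client:
+#
+# client_netmask 255.255.255.0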
+
+# TAG: forward_log
+# Note: This option is only available if Squid is rebuilt with the
+# -DWIP_FWD_LOG define
+#
+# Logs the server-side requests.
+#
+# This is currently work in progress.
+#
+#Default:
+# none
+
+# TAG: strip_query_terms
+# By default, Squid strips query terms from requested URLs before
+# logging. This protects your users' privacy.
+#
+#Default:
+# strip_query_terms on
+
+# TAG: buffered_logs on|off
+# The cache.log file is written with stdio functions, and as such
+# it can be buffered or unbuffered. By default it is unbuffered.
+# Buffering can speed up writing slightly (though you are unlikely
+# to need this unless you run with heavy debugging enabled, in
+# which case performance will suffer badly anyway).
+#
+#Default:
+# buffered_logs off
+
+
+# OPTIONS FOR FTP GATEWAYING
+# -----------------------------------------------------------------------------
+
+# TAG: ftp_user
+# If you want the anonymous login password to be more informative
+# (and acceptable to picky FTP servers), set this to something
+# reasonable for your domain, like wwwuser@somewhere.net
+#
+# The reason this is domainless by default is that the
+# request can be made on behalf of a user in any domain,
+# depending on how the cache is used.
+# Some FTP servers also validate that the email address is valid
+# (for example perl.com).
+#
+#Default:
+# ftp_user Squid@
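+#
+# For example (an illustrative address, not a default):
+#
+# ftp_user wwwuser@example.com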
+
+# TAG: ftp_list_width
+# Sets the width of ftp listings. This should be set to fit in
+# the width of a standard browser. Setting this too small
+# can cut off long filenames when browsing ftp sites.
+#
+#Default:
+# ftp_list_width 32
+
+# TAG: ftp_passive
+# If your firewall does not allow Squid to use passive
+# connections, turn off this option.
+#
+#Default:
+# ftp_passive on
+
+# TAG: ftp_sanitycheck
+# For security and data integrity reasons, Squid by default performs
+# sanity checks on the addresses of FTP data connections to ensure
+# the data connection is to the requested server. If you need to allow
+# FTP connections to servers using another IP address for the data
+# connection turn this off.
+#
+#Default:
+# ftp_sanitycheck on
+
+# TAG: ftp_telnet_protocol
+# The FTP protocol is officially defined to use the telnet protocol
+# as transport channel for the control connection. However, many
+# implementations are broken and do not respect this aspect of
+# the FTP protocol.
+#
+# If you have trouble accessing files with ASCII code 255 in the
+# path or similar problems involving this ASCII code you can
+# try setting this directive to off. If that helps, report to the
+# operator of the FTP server in question that their FTP server
+# is broken and does not follow the FTP standard.
+#
+#Default:
+# ftp_telnet_protocol on
+
+
+# OPTIONS FOR EXTERNAL SUPPORT PROGRAMS
+# -----------------------------------------------------------------------------
+
+# TAG: diskd_program
+# Specify the location of the diskd executable.
+# Note this is only useful if you have compiled in
+# diskd as one of the store io modules.
+#
+#Default:
+# diskd_program /usr/lib64/squid/diskd
+
+# TAG: unlinkd_program
+# Specify the location of the executable for the file deletion process.
+#
+#Default:
+# unlinkd_program /usr/lib64/squid/unlinkd
+
+# TAG: pinger_program
+# Note: This option is only available if Squid is rebuilt with the
+# --enable-icmp option
+#
+# Specify the location of the executable for the pinger process.
+#
+#Default:
+# pinger_program /usr/lib64/squid/pinger
+
+
+# OPTIONS FOR URL REWRITING
+# -----------------------------------------------------------------------------
+
+# TAG: url_rewrite_program
+# Specify the location of the executable for the URL rewriter.
+# Since rewriters can perform almost any function, there isn't one included.
+#
+# For each requested URL the rewriter will receive one line with the format
+#
+# URL <SP> client_ip "/" fqdn <SP> user <SP> method [<SP> kvpairs]<NL>
+#
+# In the future, the rewriter interface will be extended with
+# key=value pairs ("kvpairs" shown above). Rewriter programs
+# should be prepared to receive and possibly ignore additional
+# whitespace-separated tokens on each input line.
+#
+# The rewriter may return a rewritten URL. The other components of
+# the request line do not need to be returned (they are ignored if present).
+#
+# The rewriter can also indicate that a client-side redirect should
+# be performed to the new URL. This is done by prefixing the returned
+# URL with "301:" (moved permanently) or "302:" (moved temporarily).
+#
+# By default, a URL rewriter is not used.
+#
+#Default:
+# none
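+#
+# As an illustration (hypothetical values), a rewriter might read:
+#
+# http://www.example.com/ 192.0.2.1/- - GET
+#
+# and reply with a rewritten URL, or with a redirect such as:
+#
+# 302:http://www.example.com/new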
+
+# TAG: url_rewrite_children
+# The number of redirector processes to spawn. If you start
+# too few, Squid will have to wait for them to process a backlog of
+# URLs, slowing it down. If you start too many they will use RAM
+# and other system resources.
+#
+#Default:
+# url_rewrite_children 5
+
+# TAG: url_rewrite_concurrency
+# The number of requests each redirector helper can handle in
+# parallel. Defaults to 0, which indicates the redirector
+# is an old-style single-threaded redirector.
+#
+#Default:
+# url_rewrite_concurrency 0
+
+# TAG: url_rewrite_host_header
+# By default Squid rewrites any Host: header in redirected
+# requests. If you are running an accelerator this may
+# not be a desired effect of a redirector.
+#
+# WARNING: Entries are cached on the result of the URL rewriting
+# process, so be careful if you have domain-virtual hosts.
+#
+#Default:
+# url_rewrite_host_header on
+
+# TAG: url_rewrite_access
+# If defined, this access list specifies which requests are
+# sent to the redirector processes. By default all requests
+# are sent.
+#
+#Default:
+# none
+
+# TAG: url_rewrite_bypass
+# When this is 'on', a request will not go through the
+# redirector if all redirectors are busy. If this is 'off'
+# and the redirector queue grows too large, Squid will exit
+# with a FATAL error and ask you to increase the number of
+# redirectors. You should only enable this if the redirectors
+# are not critical to your caching system. If you use
+# redirectors for access control, and you enable this option,
+# users may have access to pages they should not
+# be allowed to request.
+#
+#Default:
+# url_rewrite_bypass off
+
+
+# OPTIONS FOR TUNING THE CACHE
+# -----------------------------------------------------------------------------
+
+# TAG: cache
+# A list of ACL elements which, if matched and denied, cause the request to
+# not be satisfied from the cache and the reply to not be cached.
+# In other words, use this to force certain objects to never be cached.
+#
+# You must use the words 'allow' or 'deny' to indicate whether items
+# matching the ACL should be allowed or denied into the cache.
+#
+# Default is to allow all to be cached
+#
+#Default:
+# none
+
+# TAG: refresh_pattern
+# usage: refresh_pattern [-i] regex min percent max [options]
+#
+# By default, regular expressions are CASE-SENSITIVE. To make
+# them case-insensitive, use the -i option.
+#
+# 'Min' is the time (in minutes) an object without an explicit
+# expiry time should be considered fresh. The recommended
+# value is 0; any higher value may cause dynamic applications
+# to be erroneously cached unless the application designer
+# has taken the appropriate actions.
+#
+# 'Percent' is the percentage of the object's age (time since last
+# modification) for which an object without an explicit expiry time
+# will be considered fresh.
+#
+# 'Max' is an upper limit on how long objects without an explicit
+# expiry time will be considered fresh.
+#
+# options: override-expire
+# override-lastmod
+# reload-into-ims
+# ignore-reload
+# ignore-no-cache
+# ignore-no-store
+# ignore-private
+# ignore-auth
+# refresh-ims
+#
+# override-expire enforces min age even if the server
+# sent an explicit expiry time (e.g., with the
+# Expires: header or Cache-Control: max-age). Doing this
+# VIOLATES the HTTP standard. Enabling this feature
+# could make you liable for problems which it causes.
+#
+# override-lastmod enforces min age even on objects
+# that were modified recently.
+#
+# reload-into-ims changes client no-cache or ``reload''
+# to If-Modified-Since requests. Doing this VIOLATES the
+# HTTP standard. Enabling this feature could make you
+# liable for problems which it causes.
+#
+# ignore-reload ignores a client no-cache or ``reload''
+# header. Doing this VIOLATES the HTTP standard. Enabling
+# this feature could make you liable for problems which
+# it causes.
+#
+# ignore-no-cache ignores any ``Pragma: no-cache'' and
+# ``Cache-control: no-cache'' headers received from a server.
+# The HTTP RFC never allows the use of this (Pragma) header
+# from a server, only a client, though plenty of servers
+# send it anyway.
+#
+# ignore-no-store ignores any ``Cache-control: no-store''
+# headers received from a server. Doing this VIOLATES
+# the HTTP standard. Enabling this feature could make you
+# liable for problems which it causes.
+#
+# ignore-private ignores any ``Cache-control: private''
+# headers received from a server. Doing this VIOLATES
+# the HTTP standard. Enabling this feature could make you
+# liable for problems which it causes.
+#
+# ignore-auth caches responses to requests with authorization,
+# as if the origin server had sent ``Cache-control: public''
+# in the response header. Doing this VIOLATES the HTTP standard.
+# Enabling this feature could make you liable for problems which
+# it causes.
+#
+# refresh-ims causes squid to contact the origin server
+# when a client issues an If-Modified-Since request. This
+# ensures that the client will receive an updated version
+# if one is available.
+#
+# Basically a cached object is:
+#
+# FRESH if expires > now, else STALE
+# STALE if age > max
+# FRESH if lm-factor < percent, else STALE
+# FRESH if age < min
+# else STALE
+#
+# The refresh_pattern lines are checked in the order listed here.
+# The first entry which matches is used. If none of the entries
+# match the default will be used.
+#
+# Note, you must uncomment all the default lines if you want
+# to change one. The default settings are only active if no
+# refresh_pattern lines are specified.
+#
+#Suggested default:
+refresh_pattern ^ftp: 1440 20% 10080
+refresh_pattern ^gopher: 1440 0% 1440
+refresh_pattern (cgi-bin|\?) 0 0% 0
+refresh_pattern . 0 20% 4320
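+#
+# As an illustration (values are examples, not recommendations), a
+# case-insensitive rule keeping images fresh for up to 30 days:
+#
+# refresh_pattern -i \.(gif|jpe?g|png)$ 1440 50% 43200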
+
+# TAG: quick_abort_min (KB)
+# TAG: quick_abort_max (KB)
+# TAG: quick_abort_pct (percent)
+# The cache by default continues downloading aborted requests
+# which are almost completed (less than 16 KB remaining). This
+# may be undesirable on slow (e.g. SLIP) links and/or very busy
+# caches. Impatient users may tie up file descriptors and
+# bandwidth by repeatedly requesting and immediately aborting
+# downloads.
+#
+# When the user aborts a request, Squid will compare the
+# quick_abort values against the amount of data transferred so
+# far.
+#
+# If the transfer has less than 'quick_abort_min' KB remaining,
+# it will finish the retrieval.
+#
+# If the transfer has more than 'quick_abort_max' KB remaining,
+# it will abort the retrieval.
+#
+# If more than 'quick_abort_pct' of the transfer has completed,
+# it will finish the retrieval.
+#
+# If you do not want any retrieval to continue after the client
+# has aborted, set both 'quick_abort_min' and 'quick_abort_max'
+# to '0 KB'.
+#
+# If you want retrievals to always continue if they are being
+# cached set 'quick_abort_min' to '-1 KB'.
+#
+#Default:
+# quick_abort_min 16 KB
+# quick_abort_max 16 KB
+# quick_abort_pct 95
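+#
+# For example, to never continue a retrieval after the client aborts:
+#
+# quick_abort_min 0 KB
+# quick_abort_max 0 KB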
+
+# TAG: read_ahead_gap buffer-size
+# The amount of data the cache will buffer ahead of what has been
+# sent to the client when retrieving an object from another server.
+#
+#Default:
+# read_ahead_gap 16 KB
+
+# TAG: negative_ttl time-units
+# Time-to-Live (TTL) for failed requests. Certain types of
+# failures (such as "connection refused" and "404 Not Found") are
+# negatively-cached for a configurable amount of time. The
+# default is 5 minutes. Note that this is different from
+# negative caching of DNS lookups.
+#
+# WARNING: This setting VIOLATES RFC 2616 when non-zero.
+# If you have problems with error pages caching, set to '0 seconds'
+#
+#Default:
+# negative_ttl 5 minutes
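+#
+# For example, to disable negative caching entirely:
+#
+# negative_ttl 0 seconds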
+
+# TAG: positive_dns_ttl time-units
+# Upper limit on how long Squid will cache positive DNS responses.
+# Default is 6 hours (360 minutes). This directive must be set
+# larger than negative_dns_ttl.
+#
+#Default:
+# positive_dns_ttl 6 hours
+
+# TAG: negative_dns_ttl time-units
+# Time-to-Live (TTL) for negative caching of failed DNS lookups.
+# This also sets the lower cache limit on positive lookups.
+# Minimum value is 1 second, and it is not recommended to go
+# much below 10 seconds.
+#
+#Default:
+# negative_dns_ttl 1 minutes
+
+# TAG: range_offset_limit (bytes)
+# Sets an upper limit on how far into the file a Range request
+# may point and still cause Squid to prefetch the whole file. Beyond
+# this limit Squid forwards the Range request as it is and the result
+# is NOT cached.
+#
+# This is to stop a far-ahead range request (let's say starting at 17MB)
+# from making Squid fetch the whole object up to that point before
+# sending anything to the client.
+#
+# A value of -1 causes Squid to always fetch the object from the
+# beginning so it may cache the result. (2.0 style)
+#
+# A value of 0 causes Squid to never fetch more than the
+# client requested. (default)
+#
+#Default:
+# range_offset_limit 0 KB
+
+# TAG: minimum_expiry_time (seconds)
+# The minimum caching time according to (Expires - Date)
+# headers that Squid honors if the object can't be revalidated.
+# It defaults to 60 seconds. In reverse proxy environments it
+# might be desirable to honor shorter object lifetimes. It
+# is most likely better to make your server return a
+# meaningful Last-Modified header however. In ESI environments
+# where page fragments often have short lifetimes, this will
+# often be best set to 0.
+#
+#Default:
+# minimum_expiry_time 60 seconds
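+#
+# For example, in an ESI environment as described above:
+#
+# minimum_expiry_time 0 seconds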
+
+# TAG: store_avg_object_size (kbytes)
+# Average object size, used to estimate number of objects your
+# cache can hold. The default is 13 KB.
+#
+#Default:
+# store_avg_object_size 13 KB
+
+# TAG: store_objects_per_bucket
+# Target number of objects per bucket in the store hash table.
+# Lowering this value increases the total number of buckets and
+# also the storage maintenance rate. The default is 20.
+#
+#Default:
+# store_objects_per_bucket 20
+
+
+# HTTP OPTIONS
+# -----------------------------------------------------------------------------
+
+# TAG: request_header_max_size (KB)
+# This specifies the maximum size for HTTP headers in a request.
+# Request headers are usually relatively small (about 512 bytes).
+# Placing a limit on the request header size will catch certain
+# bugs (for example with persistent connections) and possibly
+# buffer-overflow or denial-of-service attacks.
+#
+#Default:
+# request_header_max_size 20 KB
+
+# TAG: reply_header_max_size (KB)
+# This specifies the maximum size for HTTP headers in a reply.
+# Reply headers are usually relatively small (about 512 bytes).
+# Placing a limit on the reply header size will catch certain
+# bugs (for example with persistent connections) and possibly
+# buffer-overflow or denial-of-service attacks.
+#
+#Default:
+# reply_header_max_size 20 KB
+
+# TAG: request_body_max_size (bytes)
+# This specifies the maximum size for an HTTP request body.
+# In other words, the maximum size of a PUT/POST request.
+# A user who attempts to send a request with a body larger
+# than this limit receives an "Invalid Request" error message.
+# If you set this parameter to zero (the default), there will
+# be no limit imposed.
+#
+#Default:
+# request_body_max_size 0 KB
+
+# TAG: broken_posts
+# A list of ACL elements which, if matched, causes Squid to send
+# an extra CRLF pair after the body of a PUT/POST request.
+#
+# Some HTTP servers have broken implementations of PUT/POST,
+# and rely on an extra CRLF pair sent by some WWW clients.
+#
+# Quote from RFC2616 section 4.1 on this matter:
+#
+# Note: certain buggy HTTP/1.0 client implementations generate
+# extra CRLF's after a POST request. To restate what is explicitly
+# forbidden by the BNF, an HTTP/1.1 client must not preface or follow
+# a request with an extra CRLF.
+#
+#Example:
+# acl buggy_server url_regex ^http://....
+# broken_posts allow buggy_server
+#
+#Default:
+# none
+
+# TAG: via on|off
+# If set (default), Squid will include a Via header in requests and
+# replies as required by RFC2616.
+#
+#Default:
+# via on
+
+# TAG: ie_refresh on|off
+# Microsoft Internet Explorer up until version 5.5 Service
+# Pack 1 has an issue with transparent proxies, wherein it
+# is impossible to force a refresh. Turning this on provides
+# a partial fix to the problem, by causing all IMS-REFRESH
+# requests from older IE versions to check the origin server
+# for fresh content. This reduces hit ratio by some amount
+# (~10% in my experience), but allows users to actually get
+# fresh content when they want it. Note that because Squid
+# cannot tell if the user is using 5.5 or 5.5SP1, the behavior
+# of 5.5 is unchanged from old versions of Squid (i.e. a
+# forced refresh is impossible). Newer versions of IE will,
+# hopefully, continue to have the new behavior and will be
+# handled based on that assumption. This option defaults to
+# the old Squid behavior, which is better for hit ratios but
+# worse for clients using IE, if they need to be able to
+# force fresh content.
+#
+#Default:
+# ie_refresh off
+
+# TAG: vary_ignore_expire on|off
+# Many HTTP servers supporting Vary give such objects an
+# immediate expiry time with no cache-control header
+# when requested by an HTTP/1.0 client. This option
+# enables Squid to ignore such expiry times until
+# HTTP/1.1 is fully implemented.
+# WARNING: This may eventually cause some varying
+# objects not intended for caching to get cached.
+#
+#Default:
+# vary_ignore_expire off
+
+# TAG: extension_methods
+# Squid only knows about standardized HTTP request methods.
+# You can add up to 20 additional "extension" methods here.
+#
+#Default:
+# none
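+#
+# For example (illustrative WebDAV methods):
+#
+# extension_methods REPORT MERGE MKACTIVITY CHECKOUT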
+
+# TAG: request_entities
+# Squid defaults to denying GET and HEAD requests with request entities,
+# as the meaning of such requests is undefined in the HTTP standard
+# even though they are not explicitly forbidden.
+#
+# Set this directive to on if you have clients which insist
+# on sending request entities in GET or HEAD requests. But be warned
+# that there is server software (both proxies and web servers) which
+# can fail to properly process this kind of request, which may make you
+# vulnerable to cache pollution attacks if enabled.
+#
+#Default:
+# request_entities off
+
+# TAG: request_header_access
+# Usage: request_header_access header_name allow|deny [!]aclname ...
+#
+# WARNING: Doing this VIOLATES the HTTP standard. Enabling
+# this feature could make you liable for problems which it
+# causes.
+#
+# This option replaces the old 'anonymize_headers' and the
+# older 'http_anonymizer' option with something that is much
+# more configurable. This new method creates a list of ACLs
+# for each header, allowing very fine-grained header
+# mangling.
+#
+# This option only applies to request headers, i.e., from the
+# client to the server.
+#
+# You can only specify known headers for the header name.
+# Other headers are reclassified as 'Other'. You can also
+# refer to all the headers with 'All'.
+#
+# For example, to achieve the same behavior as the old
+# 'http_anonymizer standard' option, you should use:
+#
+# request_header_access From deny all
+# request_header_access Referer deny all
+# request_header_access Server deny all
+# request_header_access User-Agent deny all
+# request_header_access WWW-Authenticate deny all
+# request_header_access Link deny all
+#
+# Or, to reproduce the old 'http_anonymizer paranoid' feature
+# you should use:
+#
+# request_header_access Allow allow all
+# request_header_access Authorization allow all
+# request_header_access WWW-Authenticate allow all
+# request_header_access Proxy-Authorization allow all
+# request_header_access Proxy-Authenticate allow all
+# request_header_access Cache-Control allow all
+# request_header_access Content-Encoding allow all
+# request_header_access Content-Length allow all
+# request_header_access Content-Type allow all
+# request_header_access Date allow all
+# request_header_access Expires allow all
+# request_header_access Host allow all
+# request_header_access If-Modified-Since allow all
+# request_header_access Last-Modified allow all
+# request_header_access Location allow all
+# request_header_access Pragma allow all
+# request_header_access Accept allow all
+# request_header_access Accept-Charset allow all
+# request_header_access Accept-Encoding allow all
+# request_header_access Accept-Language allow all
+# request_header_access Content-Language allow all
+# request_header_access Mime-Version allow all
+# request_header_access Retry-After allow all
+# request_header_access Title allow all
+# request_header_access Connection allow all
+# request_header_access Proxy-Connection allow all
+# request_header_access All deny all
+#
+# although many of those are HTTP reply headers, and so should be
+# controlled with the reply_header_access directive.
+#
+# By default, all headers are allowed (no anonymizing is
+# performed).
+#
+#Default:
+# none
+
+# TAG: reply_header_access
+# Usage: reply_header_access header_name allow|deny [!]aclname ...
+#
+# WARNING: Doing this VIOLATES the HTTP standard. Enabling
+# this feature could make you liable for problems which it
+# causes.
+#
+# This option only applies to reply headers, i.e., from the
+# server to the client.
+#
+# This is the same as request_header_access, but in the other
+# direction.
+#
+# This option replaces the old 'anonymize_headers' and the
+# older 'http_anonymizer' option with something that is much
+# more configurable. This new method creates a list of ACLs
+# for each header, allowing very fine-grained header
+# mangling.
+#
+# You can only specify known headers for the header name.
+# Other headers are reclassified as 'Other'. You can also
+# refer to all the headers with 'All'.
+#
+# For example, to achieve the same behavior as the old
+# 'http_anonymizer standard' option, you should use:
+#
+# reply_header_access From deny all
+# reply_header_access Referer deny all
+# reply_header_access Server deny all
+# reply_header_access User-Agent deny all
+# reply_header_access WWW-Authenticate deny all
+# reply_header_access Link deny all
+#
+# Or, to reproduce the old 'http_anonymizer paranoid' feature
+# you should use:
+#
+# reply_header_access Allow allow all
+# reply_header_access Authorization allow all
+# reply_header_access WWW-Authenticate allow all
+# reply_header_access Proxy-Authorization allow all
+# reply_header_access Proxy-Authenticate allow all
+# reply_header_access Cache-Control allow all
+# reply_header_access Content-Encoding allow all
+# reply_header_access Content-Length allow all
+# reply_header_access Content-Type allow all
+# reply_header_access Date allow all
+# reply_header_access Expires allow all
+# reply_header_access Host allow all
+# reply_header_access If-Modified-Since allow all
+# reply_header_access Last-Modified allow all
+# reply_header_access Location allow all
+# reply_header_access Pragma allow all
+# reply_header_access Accept allow all
+# reply_header_access Accept-Charset allow all
+# reply_header_access Accept-Encoding allow all
+# reply_header_access Accept-Language allow all
+# reply_header_access Content-Language allow all
+# reply_header_access Mime-Version allow all
+# reply_header_access Retry-After allow all
+# reply_header_access Title allow all
+# reply_header_access Connection allow all
+# reply_header_access Proxy-Connection allow all
+# reply_header_access All deny all
+#
+# although the HTTP request headers won't be usefully controlled
+# by this directive -- see request_header_access for details.
+#
+# By default, all headers are allowed (no anonymizing is
+# performed).
+#
+#Default:
+# none
+
+# TAG: header_replace
+# Usage: header_replace header_name message
+# Example: header_replace User-Agent Nutscrape/1.0 (CP/M; 8-bit)
+#
+# This option allows you to change the contents of headers
+# denied with header_access above, by replacing them with
+# some fixed string. This replaces the old fake_user_agent
+# option.
+#
+# This only applies to request headers, not reply headers.
+#
+# By default, headers are removed if denied.
+#
+#Default:
+# none
+
+# TAG: relaxed_header_parser on|off|warn
+# In the default "on" setting Squid accepts certain forms
+# of non-compliant HTTP messages where it is unambiguous
+# what the sending application intended even if the message
+# is not correctly formatted. The message is then normalized
+# to the correct form when forwarded by Squid.
+#
+# If set to "warn" then a warning will be emitted in cache.log
+# each time such an HTTP error is encountered.
+#
+# If set to "off" then such HTTP errors will cause the request
+# or response to be rejected.
+#
+#Default:
+# relaxed_header_parser on
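+#
+# For example, to log each such error while still accepting the
+# message:
+#
+# relaxed_header_parser warn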
+
+
+# TIMEOUTS
+# -----------------------------------------------------------------------------
+
+# TAG: forward_timeout time-units
+# This parameter specifies the maximum time Squid should spend
+# trying to find a forwarding path for the request before giving up.
+#
+#Default:
+# forward_timeout 4 minutes
+
+# TAG: connect_timeout time-units
+# This parameter specifies how long to wait for the TCP connect to
+# the requested server or peer to complete before Squid
+# attempts to find another path to forward the request.
+#
+#Default:
+# connect_timeout 1 minute
+
+# TAG: peer_connect_timeout time-units
+# This parameter specifies how long to wait for a pending TCP
+# connection to a peer cache. The default is 30 seconds. You
+# may also set different timeout values for individual neighbors
+# with the 'connect-timeout' option on a 'cache_peer' line.
+#
+#Default:
+# peer_connect_timeout 30 seconds
+
+# TAG: read_timeout time-units
+# The read_timeout is applied on server-side connections. After
+# each successful read(), the timeout will be extended by this
+# amount. If no data is read again after this amount of time,
+# the request is aborted and logged with ERR_READ_TIMEOUT. The
+# default is 15 minutes.
+#
+#Default:
+# read_timeout 15 minutes
+
+# TAG: request_timeout
+# How long to wait for an HTTP request after initial
+# connection establishment.
+#
+#Default:
+# request_timeout 5 minutes
+
+# TAG: persistent_request_timeout
+# How long to wait for the next HTTP request on a persistent
+# connection after the previous request completes.
+#
+#Default:
+# persistent_request_timeout 2 minutes
+
+# TAG: client_lifetime time-units
+# The maximum amount of time a client (browser) is allowed to
+# remain connected to the cache process. This protects the Cache
+# from having a lot of sockets (and hence file descriptors) tied up
+# in a CLOSE_WAIT state from remote clients that go away without
+# properly shutting down (either because of a network failure or
+# because of a poor client implementation). The default is one
+# day, 1440 minutes.
+#
+# NOTE: The default value is intended to be much larger than any
+# client would ever need to be connected to your cache. You
+# should probably change client_lifetime only as a last resort.
+# If you seem to have many client connections tying up
+# file descriptors, we recommend first tuning the read_timeout,
+# request_timeout, persistent_request_timeout and quick_abort values.
+#
+#Default:
+# client_lifetime 1 day
+
+# TAG: half_closed_clients
+# Some clients may shut down the sending side of their TCP
+# connections, while leaving their receiving sides open. Sometimes,
+# Squid can not tell the difference between a half-closed and a
+# fully-closed TCP connection.
+#
+# By default, Squid will immediately close client connections when
+# read(2) returns "no more data to read."
+#
+# Change this option to 'on' and Squid will keep open connections
+# until a read(2) or write(2) on the socket returns an error.
+# This may show some benefits for reverse proxies, but if not,
+# it is recommended to leave this off.
+#
+#Default:
+# half_closed_clients off
+
+# TAG: pconn_timeout
+# Timeout for idle persistent connections to servers and other
+# proxies.
+#
+#Default:
+# pconn_timeout 1 minute
+
+# TAG: ident_timeout
+# Maximum time to wait for IDENT lookups to complete.
+#
+# If this is too high, and you enabled IDENT lookups from untrusted
+# users, you might be susceptible to denial-of-service by having
+# many ident requests going at once.
+#
+#Default:
+# ident_timeout 10 seconds
+
+# TAG: shutdown_lifetime time-units
+# When SIGTERM or SIGHUP is received, the cache is put into
+# "shutdown pending" mode until all active sockets are closed.
+# This value is the lifetime to set for all open descriptors
+# during shutdown mode. Any active clients after this many
+# seconds will receive a 'timeout' message.
+#
+#Default:
+# shutdown_lifetime 30 seconds
+
+
+# ADMINISTRATIVE PARAMETERS
+# -----------------------------------------------------------------------------
+
+# TAG: cache_mgr
+# Email-address of local cache manager who will receive
+# mail if the cache dies. The default is "root."
+#
+#Default:
+# cache_mgr root
+
+# TAG: mail_from
+# From: email-address for mail sent when the cache dies.
+# The default is to use 'appname@unique_hostname'.
+# Default appname value is "squid", can be changed into
+# src/globals.h before building squid.
+#
+#Default:
+# none
+
+# TAG: mail_program
+# Email program used to send mail if the cache dies.
+# The default is "mail". The specified program must comply
+# with the standard Unix mail syntax:
+# mail-program recipient < mailfile
+#
+# Optional command line options can be specified.
+#
+#Default:
+# mail_program mail
+
+# TAG: cache_effective_user
+# If you start Squid as root, it will change its effective/real
+# UID/GID to the user specified below. The default is to change
+# to UID of squid.
+# see also; cache_effective_group
+#
+#Default:
+# cache_effective_user squid
+
+# TAG: cache_effective_group
+# Squid sets the GID to the effective user's default group ID
+# (taken from the password file) and supplementary group list
+# from the groups membership.
+#
+# If you want Squid to run with a specific GID regardless of
+# the group memberships of the effective user then set this
+# to the group (or GID) you want Squid to run as. When set
+# all other group privileges of the effective user are ignored
+# and only this GID is effective. If Squid is not started as
+# root the user starting Squid MUST be member of the specified
+# group.
+#
+# This option is not recommended by the Squid Team.
+# Our preference is for administrators to configure a secure
+# user account for squid with UID/GID matching system policies.
+#
+#Default:
+# none
+
+# TAG: httpd_suppress_version_string on|off
+# Suppress Squid version string info in HTTP headers and HTML error pages.
+#
+#Default:
+# httpd_suppress_version_string off
+
+# TAG: visible_hostname
+# If you want to present a special hostname in error messages, etc,
+# define this. Otherwise, the return value of gethostname()
+# will be used. If you have multiple caches in a cluster and
+# get errors about IP-forwarding you must set them to have individual
+# names with this setting.
+#
+#Default:
+# none
+
+# TAG: unique_hostname
+# If you want to have multiple machines with the same
+# 'visible_hostname' you must give each machine a different
+# 'unique_hostname' so forwarding loops can be detected.
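+#
+# For example (hostnames purely illustrative), two machines sharing
+# one public name could use:
+#
+# visible_hostname www-cache.example.com
+# unique_hostname www-cache1.example.com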
+#
+#Default:
+# none
+
+# TAG: hostname_aliases
+# A list of other DNS names your cache has.
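+#
+# Example (names illustrative):
+# hostname_aliases cache.example.com proxy.example.com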
+#
+#Default:
+# none
+
+# TAG: umask
+# Minimum umask which should be enforced while the proxy
+# is running, in addition to the umask set at startup.
+#
+# For a traditional octal representation of umasks, start
+# your value with 0.
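+#
+# For example, the default "umask 027" removes group write permission
+# and all "other" permissions from files the proxy creates.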
+#
+#Default:
+# umask 027
+
+
+# OPTIONS FOR THE CACHE REGISTRATION SERVICE
+# -----------------------------------------------------------------------------
+#
+# This section contains parameters for the (optional) cache
+# announcement service. This service is provided to help
+# cache administrators locate one another in order to join or
+# create cache hierarchies.
+#
+# An 'announcement' message is sent (via UDP) to the registration
+# service by Squid. By default, the announcement message is NOT
+# SENT unless you enable it with 'announce_period' below.
+#
+# The announcement message includes your hostname, plus the
+# following information from this configuration file:
+#
+# http_port
+# icp_port
+# cache_mgr
+#
+# All current information is processed regularly and made
+# available on the Web at http://www.ircache.net/Cache/Tracker/.
+
+# TAG: announce_period
+# This is how frequently to send cache announcements. The
+# default is `0' which disables sending the announcement
+# messages.
+#
+# To enable announcing your cache, just uncomment the line
+# below.
+#
+#Default:
+# announce_period 0
+#
+#To enable announcing your cache, just uncomment the line below.
+#announce_period 1 day
+
+# TAG: announce_host
+# TAG: announce_file
+# TAG: announce_port
+# announce_host and announce_port set the hostname and port
+# number where the registration message will be sent.
+#
+# Hostname will default to 'tracker.ircache.net' and port will
+# default to 3131. If 'announce_file' is given, the contents of
+# that file will be included in the announce message.
+#
+#Default:
+# announce_host tracker.ircache.net
+# announce_port 3131
+
+
+# HTTPD-ACCELERATOR OPTIONS
+# -----------------------------------------------------------------------------
+
+# TAG: httpd_accel_surrogate_id
+# Note: This option is only available if Squid is rebuilt with the
+# -DUSE_SQUID_ESI define
+#
+# Surrogates (http://www.esi.org/architecture_spec_1.0.html)
+# need an identification token to allow control targeting. Because
+# a farm of surrogates may all perform the same tasks, they may share
+# an identification token.
+#
+#Default:
+# httpd_accel_surrogate_id unset-id
+
+# TAG: http_accel_surrogate_remote on|off
+# Note: This option is only available if Squid is rebuilt with the
+# -DUSE_SQUID_ESI define
+#
+# Remote surrogates (such as those in a CDN) honour Surrogate-Control: no-store-remote.
+# Set this to on to have squid behave as a remote surrogate.
+#
+#Default:
+# http_accel_surrogate_remote off
+
+# TAG: esi_parser libxml2|expat|custom
+# Note: This option is only available if Squid is rebuilt with the
+# -DUSE_SQUID_ESI define
+#
+# ESI markup is not strictly XML compatible. The custom ESI parser
+# will give higher performance, but cannot handle non-ASCII character
+# encodings.
+#
+#Default:
+# esi_parser custom
+
+
+# DELAY POOL PARAMETERS
+# -----------------------------------------------------------------------------
+
+# TAG: delay_pools
+# This represents the number of delay pools to be used. For example,
+# if you have one class 2 delay pool and one class 3 delay pool, you
+# have a total of 2 delay pools.
+#
+#Default:
+# delay_pools 0
+
+# TAG: delay_class
+# This defines the class of each delay pool. There must be exactly one
+# delay_class line for each delay pool. For example, to define two
+# delay pools, one of class 2 and one of class 3, the settings above
+# and here would be:
+#
+#Example:
+# delay_pools 4 # 4 delay pools
+# delay_class 1 2 # pool 1 is a class 2 pool
+# delay_class 2 3 # pool 2 is a class 3 pool
+# delay_class 3 4 # pool 3 is a class 4 pool
+# delay_class 4 5 # pool 4 is a class 5 pool
+#
+# The delay pool classes are:
+#
+# class 1 Everything is limited by a single aggregate
+# bucket.
+#
+# class 2 Everything is limited by a single aggregate
+# bucket as well as an "individual" bucket chosen
+# from bits 25 through 32 of the IP address.
+#
+# class 3 Everything is limited by a single aggregate
+# bucket as well as a "network" bucket chosen
+# from bits 17 through 24 of the IP address and an
+# "individual" bucket chosen from bits 17 through
+# 32 of the IP address.
+#
+# class 4 Everything in a class 3 delay pool, with an
+# additional limit on a per user basis. This
+# only takes effect if the username is established
+# in advance - by forcing authentication in your
+# http_access rules.
+#
+# class 5 Requests are grouped according to their tag (see
+# external_acl's tag= reply).
+#
+# NOTE: If an IP address is a.b.c.d
+# -> bits 25 through 32 are "d"
+# -> bits 17 through 24 are "c"
+# -> bits 17 through 32 are "c * 256 + d"
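+#
+# For instance (address chosen purely for illustration), for the
+# client IP 10.1.2.3 the "individual" key is 3, the "network" key
+# is 2, and the combined class 3 key is 2 * 256 + 3 = 515.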
+#
+#Default:
+# none
+
+# TAG: delay_access
+# This is used to determine which delay pool a request falls into.
+#
+# delay_access is sorted per pool and the matching starts with pool 1,
+# then pool 2, ..., and finally pool N. The first delay pool where the
+# request is allowed is selected for the request. If no pool allows
+# the request, the request is not delayed (the default).
+#
+# For example, if you want some_big_clients in delay
+# pool 1 and lotsa_little_clients in delay pool 2:
+#
+#Example:
+# delay_access 1 allow some_big_clients
+# delay_access 1 deny all
+# delay_access 2 allow lotsa_little_clients
+# delay_access 2 deny all
+# delay_access 3 allow authenticated_clients
+#
+#Default:
+# none
+
+# TAG: delay_parameters
+# This defines the parameters for a delay pool. Each delay pool has
+# a number of "buckets" associated with it, as explained in the
+# description of delay_class. For a class 1 delay pool, the syntax is:
+#
+#delay_parameters pool aggregate
+#
+# For a class 2 delay pool:
+#
+#delay_parameters pool aggregate individual
+#
+# For a class 3 delay pool:
+#
+#delay_parameters pool aggregate network individual
+#
+# For a class 4 delay pool:
+#
+#delay_parameters pool aggregate network individual user
+#
+# For a class 5 delay pool:
+#
+#delay_parameters pool tag
+#
+# The variables here are:
+#
+# pool a pool number - ie, a number between 1 and the
+# number specified in delay_pools as used in
+# delay_class lines.
+#
+# aggregate the "delay parameters" for the aggregate bucket
+# (class 1, 2, 3).
+#
+# individual the "delay parameters" for the individual
+# buckets (class 2, 3).
+#
+# network the "delay parameters" for the network buckets
+# (class 3).
+#
+# user the delay parameters for the user buckets
+# (class 4).
+#
+# tag the delay parameters for the tag buckets
+# (class 5).
+#
+# A pair of delay parameters is written restore/maximum, where restore is
+# the number of bytes (not bits - modem and network speeds are usually
+# quoted in bits) per second placed into the bucket, and maximum is the
+# maximum number of bytes which can be in the bucket at any time.
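+#
+# For example, a 64 kbit/s limit corresponds to 64000 / 8 = 8000
+# bytes per second, which is where the 8000/8000 figures in the
+# examples here come from.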
+#
+# For example, if delay pool number 1 is a class 2 delay pool as in the
+# above example, and is being used to strictly limit each host to 64kbps
+# (plus overheads), with no overall limit, the line is:
+#
+#delay_parameters 1 -1/-1 8000/8000
+#
+# Note that the figure -1 is used to represent "unlimited".
+#
+# And, if delay pool number 2 is a class 3 delay pool as in the above
+# example, and you want to limit it to a total of 256kbps (strict limit)
+# with each 8-bit network permitted 64kbps (strict limit) and each
+# individual host permitted 4800bps with a bucket maximum size of 64kb
+# to permit a decent web page to be downloaded at a decent speed
+# (if the network is not being limited due to overuse) but slow down
+# large downloads more significantly:
+#
+#delay_parameters 2 32000/32000 8000/8000 600/8000
+#
+# There must be one delay_parameters line for each delay pool.
+#
+# Finally, for a class 4 delay pool as in the example above, each user
+# will be limited to 128 kbit/s no matter how many workstations they
+# are logged into:
+#
+#delay_parameters 4 32000/32000 8000/8000 600/64000 16000/16000
+#
+#Default:
+# none
+
+# TAG: delay_initial_bucket_level (percent, 0-100)
+# The initial bucket percentage is used to determine how much is put
+# in each bucket when squid starts, is reconfigured, or first notices
+# a host accessing it (in class 2 and class 3, individual hosts and
+# networks only have buckets associated with them once they have been
+# "seen" by squid).
+#
+#Default:
+# delay_initial_bucket_level 50
+
+
+# WCCPv1 AND WCCPv2 CONFIGURATION OPTIONS
+# -----------------------------------------------------------------------------
+
+# TAG: wccp_router
+# TAG: wccp2_router
+# Use this option to define your WCCP ``home'' router for
+# Squid.
+#
+# wccp_router supports a single WCCP(v1) router
+#
+# wccp2_router supports multiple WCCPv2 routers
+#
+# Only one of the two may be used at the same time, and the one
+# in use defines which version of WCCP to use.
+#
+#Default:
+# wccp_router 0.0.0.0
+
+# TAG: wccp_version
+# This directive is only relevant if you need to set up WCCP(v1)
+# to some very old and end-of-life Cisco routers. In all other
+# setups it must be left unset or at the default setting.
+# It defines an internal version in the WCCP(v1) protocol,
+# with version 4 being the officially documented protocol.
+#
+# According to some users, Cisco IOS 11.2 and earlier only
+# support WCCP version 3. If you're using that or an earlier
+# version of IOS, you may need to change this value to 3, otherwise
+# do not specify this parameter.
+#
+#Default:
+# wccp_version 4
+
+# TAG: wccp2_rebuild_wait
+# If this is enabled Squid will wait for the cache dir rebuild to finish
+# before sending the first wccp2 HereIAm packet.
+#
+#Default:
+# wccp2_rebuild_wait on
+
+# TAG: wccp2_forwarding_method
+# WCCP2 allows the setting of forwarding methods between the
+# router/switch and the cache. Valid values are as follows:
+#
+# 1 - GRE encapsulation (forward the packet in a GRE/WCCP tunnel)
+# 2 - L2 redirect (forward the packet using Layer 2/MAC rewriting)
+#
+# Currently (as of IOS 12.4) cisco routers only support GRE.
+# Cisco switches only support the L2 redirect assignment method.
+#
+#Default:
+# wccp2_forwarding_method 1
+
+# TAG: wccp2_return_method
+# WCCP2 allows the setting of return methods between the
+# router/switch and the cache for packets that the cache
+# decides not to handle. Valid values are as follows:
+#
+# 1 - GRE encapsulation (forward the packet in a GRE/WCCP tunnel)
+# 2 - L2 redirect (forward the packet using Layer 2/MAC rewriting)
+#
+# Currently (as of IOS 12.4) cisco routers only support GRE.
+# Cisco switches only support the L2 redirect assignment.
+#
+# If the "ip wccp redirect exclude in" command has been
+# enabled on the cache interface, then it is still safe for
+# the proxy server to use a l2 redirect method even if this
+# option is set to GRE.
+#
+#Default:
+# wccp2_return_method 1
+
+# TAG: wccp2_assignment_method
+# WCCP2 allows the setting of methods to assign the WCCP hash.
+# Valid values are as follows:
+#
+# 1 - Hash assignment
+# 2 - Mask assignment
+#
+# As a general rule, cisco routers support the hash assignment method
+# and cisco switches support the mask assignment method.
+#
+#Default:
+# wccp2_assignment_method 1
+
+# TAG: wccp2_service
+# WCCP2 allows for multiple traffic services. There are two
+# types: "standard" and "dynamic". The standard type defines
+# one service id - http (id 0). The dynamic service ids can be from
+# 51 to 255 inclusive. In order to use a dynamic service id
+# one must define the type of traffic to be redirected; this is done
+# using the wccp2_service_info option.
+#
+# The "standard" type does not require a wccp2_service_info option,
+# just specifying the service id will suffice.
+#
+# MD5 service authentication can be enabled by adding
+# "password=<password>" to the end of this service declaration.
+#
+# Examples:
+#
+# wccp2_service standard 0 # for the 'web-cache' standard service
+# wccp2_service dynamic 80 # a dynamic service type which will be
+# # fleshed out with subsequent options.
+# wccp2_service standard 0 password=foo
+#
+#
+#Default:
+# wccp2_service standard 0
+
+# TAG: wccp2_service_info
+# Dynamic WCCPv2 services require further information to define the
+# traffic you wish to have diverted.
+#
+# The format is:
+#
+# wccp2_service_info <id> protocol=<protocol> flags=<flag>,<flag>..
+# priority=<priority> ports=<port>,<port>..
+#
+# The relevant WCCPv2 flags:
+# + src_ip_hash, dst_ip_hash
+# + source_port_hash, dst_port_hash
+# + src_ip_alt_hash, dst_ip_alt_hash
+# + src_port_alt_hash, dst_port_alt_hash
+# + ports_source
+#
+# The port list can be one to eight entries.
+#
+# Example:
+#
+# wccp2_service_info 80 protocol=tcp flags=src_ip_hash,ports_source
+# priority=240 ports=80
+#
+# Note: the service id must have been defined by a previous
+# 'wccp2_service dynamic <id>' entry.
+#
+#Default:
+# none
+
+# TAG: wccp2_weight
+# Each cache server is assigned a portion of the destination
+# hash proportional to its weight.
+#
+#Default:
+# wccp2_weight 10000
+
+# TAG: wccp_address
+# TAG: wccp2_address
+# Use this option if you require WCCP to use a specific
+# interface address.
+#
+# The default behavior is to not bind to any specific address.
+#
+#Default:
+# wccp_address 0.0.0.0
+# wccp2_address 0.0.0.0
+
+
+# PERSISTENT CONNECTION HANDLING
+# -----------------------------------------------------------------------------
+#
+# Also see "pconn_timeout" in the TIMEOUTS section
+
+# TAG: client_persistent_connections
+# TAG: server_persistent_connections
+# Persistent connection support for clients and servers. By
+# default, Squid uses persistent connections (when allowed)
+# with its clients and servers. You can use these options to
+# disable persistent connections with clients and/or servers.
+#
+#Default:
+# client_persistent_connections on
+# server_persistent_connections on
+
+# TAG: persistent_connection_after_error
+# With this directive the use of persistent connections after
+# HTTP errors can be disabled. Useful if you have clients
+# who fail to handle errors on persistent connections properly.
+#
+#Default:
+# persistent_connection_after_error off
+
+# TAG: detect_broken_pconn
+# Some servers have been found to incorrectly signal the use
+# of HTTP/1.0 persistent connections even on replies not
+# compatible, causing significant delays. This server problem
+# has mostly been seen on redirects.
+#
+# By enabling this directive Squid attempts to detect such
+# broken replies and automatically assumes the reply is finished
+# after a 10 second timeout.
+#
+#Default:
+# detect_broken_pconn off
+
+
+# CACHE DIGEST OPTIONS
+# -----------------------------------------------------------------------------
+
+# TAG: digest_generation
+# This controls whether the server will generate a Cache Digest
+# of its contents. By default, Cache Digest generation is
+# enabled if Squid is compiled with --enable-cache-digests defined.
+#
+#Default:
+# digest_generation on
+
+# TAG: digest_bits_per_entry
+# This is the number of bits of the server's Cache Digest which
+# will be associated with the Digest entry for a given HTTP
+# Method and URL (public key) combination. The default is 5.
+#
+#Default:
+# digest_bits_per_entry 5
+
+# TAG: digest_rebuild_period (seconds)
+# This is the wait time between Cache Digest rebuilds.
+#
+#Default:
+# digest_rebuild_period 1 hour
+
+# TAG: digest_rewrite_period (seconds)
+# This is the wait time between Cache Digest writes to
+# disk.
+#
+#Default:
+# digest_rewrite_period 1 hour
+
+# TAG: digest_swapout_chunk_size (bytes)
+# This is the number of bytes of the Cache Digest to write to
+# disk at a time. It defaults to 4096 bytes (4KB), the Squid
+# default swap page.
+#
+#Default:
+# digest_swapout_chunk_size 4096 bytes
+
+# TAG: digest_rebuild_chunk_percentage (percent, 0-100)
+# This is the percentage of the Cache Digest to be scanned at a
+# time. By default it is set to 10% of the Cache Digest.
+#
+#Default:
+# digest_rebuild_chunk_percentage 10
+
+
+# SNMP OPTIONS
+# -----------------------------------------------------------------------------
+
+# TAG: snmp_port
+# The port number where Squid listens for SNMP requests. To enable
+# SNMP support set this to a suitable port number. Port number
+# 3401 is often used for the Squid SNMP agent. By default it's
+# set to "0" (disabled).
+#Default:
+# snmp_port 0
+#
+#snmp_port 3401
+
+# TAG: snmp_access
+# Allows or denies access to the SNMP port.
+#
+# All access to the agent is denied by default.
+# usage:
+#
+# snmp_access allow|deny [!]aclname ...
+#
+#Example:
+# snmp_access allow snmppublic localhost
+# snmp_access deny all
+#
+#Default:
+# snmp_access deny all
+
+# TAG: snmp_incoming_address
+# TAG: snmp_outgoing_address
+# Just like 'udp_incoming_address' above, but for the SNMP port.
+#
+# snmp_incoming_address is used for the SNMP socket receiving
+# messages from SNMP agents.
+# snmp_outgoing_address is used for SNMP packets returned to SNMP
+# agents.
+#
+# The default snmp_incoming_address (0.0.0.0) is to listen on all
+# available network interfaces.
+#
+# If snmp_outgoing_address is set to 255.255.255.255 (the default)
+# it will use the same socket as snmp_incoming_address. Only
+# change this if you want to have SNMP replies sent using another
+# address than where this Squid listens for SNMP queries.
+#
+# NOTE: snmp_incoming_address and snmp_outgoing_address cannot have
+# the same value since they both use port 3401.
+#
+#Default:
+# snmp_incoming_address 0.0.0.0
+# snmp_outgoing_address 255.255.255.255
+
+
+# ICP OPTIONS
+# -----------------------------------------------------------------------------
+
+# TAG: icp_port
+# The port number where Squid sends and receives ICP queries to
+# and from neighbor caches. The standard UDP port for ICP is 3130.
+# Default is disabled (0).
+#Default:
+# icp_port 0
+#
+icp_port 3130
+
+# TAG: htcp_port
+# The port number where Squid sends and receives HTCP queries to
+# and from neighbor caches. To enable it, set this to
+# 4827. By default it is set to "0" (disabled).
+#Default:
+# htcp_port 0
+#
+#htcp_port 4827
+
+# TAG: log_icp_queries on|off
+# If set, ICP queries are logged to access.log. You may wish
+# to disable this if your ICP load is VERY high to speed things
+# up or to simplify log analysis.
+#
+#Default:
+# log_icp_queries on
+
+# TAG: udp_incoming_address
+# udp_incoming_address is used for UDP packets received from other
+# caches.
+#
+# The default behavior is to not bind to any specific address.
+#
+# Only change this if you want to have all UDP queries received on
+# a specific interface/address.
+#
+# NOTE: udp_incoming_address is used by the ICP, HTCP, and DNS
+# modules. Altering it will affect all of them in the same manner.
+#
+# see also; udp_outgoing_address
+#
+# NOTE: udp_incoming_address and udp_outgoing_address cannot
+# have the same value since they both use the same port.
+#
+#Default:
+# udp_incoming_address 0.0.0.0
+
+# TAG: udp_outgoing_address
+# udp_outgoing_address is used for UDP packets sent out to other
+# caches.
+#
+# The default behavior is to not bind to any specific address.
+#
+# Instead it will use the same socket as udp_incoming_address.
+# Only change this if you want to have UDP queries sent using another
+# address than where this Squid listens for UDP queries from other
+# caches.
+#
+# NOTE: udp_outgoing_address is used by the ICP, HTCP, and DNS
+# modules. Altering it will affect all of them in the same manner.
+#
+# see also; udp_incoming_address
+#
+# NOTE: udp_incoming_address and udp_outgoing_address cannot
+# have the same value since they both use the same port.
+#
+#Default:
+# udp_outgoing_address 255.255.255.255
+
+# TAG: icp_hit_stale on|off
+# If you want to return ICP_HIT for stale cache objects, set this
+# option to 'on'. If you have sibling relationships with caches
+# in other administrative domains, this should be 'off'. If you only
+# have sibling relationships with caches under your control,
+# it is probably okay to set this to 'on'.
+# If set to 'on', your siblings should use the option "allow-miss"
+# on their cache_peer lines for connecting to you.
+#
+#Default:
+# icp_hit_stale off
+
+# TAG: minimum_direct_hops
+# If using the ICMP pinging stuff, do direct fetches for sites
+# which are no more than this many hops away.
+#
+#Default:
+# minimum_direct_hops 4
+
+# TAG: minimum_direct_rtt
+# If using the ICMP pinging stuff, do direct fetches for sites
+# which are no more than this many rtt milliseconds away.
+#
+#Default:
+# minimum_direct_rtt 400
+
+# TAG: netdb_low
+# TAG: netdb_high
+# The low and high water marks for the ICMP measurement
+# database. These are counts, not percents. The defaults are
+# 900 and 1000. When the high water mark is reached, database
+# entries will be deleted until the low mark is reached.
+#
+#Default:
+# netdb_low 900
+# netdb_high 1000
+
+# TAG: netdb_ping_period
+# The minimum period for measuring a site. There will be at
+# least this much delay between successive pings to the same
+# network. The default is five minutes.
+#
+#Default:
+# netdb_ping_period 5 minutes
+
+# TAG: query_icmp on|off
+# If you want to ask your peers to include ICMP data in their ICP
+# replies, enable this option.
+#
+# If your peer has configured Squid (during compilation) with
+# '--enable-icmp' that peer will send ICMP pings to origin server
+# sites of the URLs it receives. If you enable this option the
+# ICP replies from that peer will include the ICMP data (if available).
+# Then, when choosing a parent cache, Squid will choose the parent with
+# the minimal RTT to the origin server. When this happens, the
+# hierarchy field of the access.log will be
+# "CLOSEST_PARENT_MISS". This option is off by default.
+#
+#Default:
+# query_icmp off
+
+# TAG: test_reachability on|off
+# When this is 'on', ICP MISS replies will be ICP_MISS_NOFETCH
+# instead of ICP_MISS if the target host is NOT in the ICMP
+# database, or has a zero RTT.
+#
+#Default:
+# test_reachability off
+
+# TAG: icp_query_timeout (msec)
+# Normally Squid will automatically determine an optimal ICP
+# query timeout value based on the round-trip-time of recent ICP
+# queries. If you want to override the value determined by
+# Squid, set this 'icp_query_timeout' to a non-zero value. This
+# value is specified in MILLISECONDS, so, to use a 2-second
+# timeout (the old default), you would write:
+#
+# icp_query_timeout 2000
+#
+#Default:
+# icp_query_timeout 0
+
+# TAG: maximum_icp_query_timeout (msec)
+# Normally the ICP query timeout is determined dynamically. But
+# sometimes it can lead to very large values (say 5 seconds).
+# Use this option to put an upper limit on the dynamic timeout
+# value. Do NOT use this option to always use a fixed (instead
+# of a dynamic) timeout value. To set a fixed timeout see the
+# 'icp_query_timeout' directive.
+#
+#Default:
+# maximum_icp_query_timeout 2000
+
+# TAG: minimum_icp_query_timeout (msec)
+# Normally the ICP query timeout is determined dynamically. But
+# sometimes it can lead to very small timeouts, even lower than
+# the normal latency variance on your link due to traffic.
+# Use this option to put a lower limit on the dynamic timeout
+# value. Do NOT use this option to always use a fixed (instead
+# of a dynamic) timeout value. To set a fixed timeout see the
+# 'icp_query_timeout' directive.
+#
+#Default:
+# minimum_icp_query_timeout 5
+
+# TAG: background_ping_rate time-units
+# Controls how often the ICP pings are sent to siblings that
+# have background-ping set.
+#
+#Default:
+# background_ping_rate 10 seconds
+
+
+# MULTICAST ICP OPTIONS
+# -----------------------------------------------------------------------------
+
+# TAG: mcast_groups
+# This tag specifies a list of multicast groups which your server
+# should join to receive multicasted ICP queries.
+#
+# NOTE! Be very careful what you put here! Be sure you
+# understand the difference between an ICP _query_ and an ICP
+# _reply_. This option is to be set only if you want to RECEIVE
+# multicast queries. Do NOT set this option to SEND multicast
+# ICP (use cache_peer for that). ICP replies are always sent via
+# unicast, so this option does not affect whether or not you will
+# receive replies from multicast group members.
+#
+# You must be very careful to NOT use a multicast address which
+# is already in use by another group of caches.
+#
+# If you are unsure about multicast, please read the Multicast
+# chapter in the Squid FAQ (http://www.squid-cache.org/FAQ/).
+#
+# Usage: mcast_groups 239.128.16.128 224.0.1.20
+#
+# By default, Squid doesn't listen on any multicast groups.
+#
+#Default:
+# none
+
+# TAG: mcast_miss_addr
+# Note: This option is only available if Squid is rebuilt with the
+# -DMULTICAST_MISS_STREAM define
+#
+# If you enable this option, every "cache miss" URL will
+# be sent out on the specified multicast address.
+#
+# Do not enable this option unless you are absolutely
+# certain you understand what you are doing.
+#
+#Default:
+# mcast_miss_addr 255.255.255.255
+
+# TAG: mcast_miss_ttl
+# Note: This option is only available if Squid is rebuilt with the
+# -DMULTICAST_MISS_STREAM define
+#
+# This is the time-to-live value for packets multicast
+# when multicasting of cache miss URLs is enabled. By
+# default this is set to 'site scope', i.e. 16.
+#
+#Default:
+# mcast_miss_ttl 16
+
+# TAG: mcast_miss_port
+# Note: This option is only available if Squid is rebuilt with the
+# -DMULTICAST_MISS_STREAM define
+#
+# This is the port number to be used in conjunction with
+# 'mcast_miss_addr'.
+#
+#Default:
+# mcast_miss_port 3135
+
+# TAG: mcast_miss_encode_key
+# Note: This option is only available if Squid is rebuilt with the
+# -DMULTICAST_MISS_STREAM define
+#
+# The URLs that are sent in the multicast miss stream are
+# encrypted. This is the encryption key.
+#
+#Default:
+# mcast_miss_encode_key XXXXXXXXXXXXXXXX
+
+# TAG: mcast_icp_query_timeout (msec)
+# For multicast peers, Squid regularly sends out ICP "probes" to
+# count how many other peers are listening on the given multicast
+# address. This value specifies how long Squid should wait to
+# count all the replies. The default is 2000 msec, or 2
+# seconds.
+#
+#Default:
+# mcast_icp_query_timeout 2000
+
+
+# INTERNAL ICON OPTIONS
+# -----------------------------------------------------------------------------
+
+# TAG: icon_directory
+# Where the icons are stored. These are normally kept in
+# /usr/share/squid/icons
+#
+#Default:
+# icon_directory /usr/share/squid/icons
+
+# TAG: global_internal_static
+# This directive controls whether Squid should intercept all requests for
+# /squid-internal-static/ no matter which host the URL is requesting
+# (default on setting), or if nothing special should be done for
+# such URLs (off setting). The purpose of this directive is to make
+# icons etc work better in complex cache hierarchies where it may
+# not always be possible for all corners in the cache mesh to reach
+# the server generating a directory listing.
+#
+#Default:
+# global_internal_static on
+
+# TAG: short_icon_urls
+# If this is enabled Squid will use short URLs for icons.
+# If disabled it will revert to the old behavior of including
+# its own name and port in the URL.
+#
+# If you run a complex cache hierarchy with a mix of Squid and
+# other proxies you may need to disable this directive.
+#
+#Default:
+# short_icon_urls on
+
+
+# ERROR PAGE OPTIONS
+# -----------------------------------------------------------------------------
+
+# TAG: error_directory
+# Directory where the error files are read from.
+# /usr/lib/squid/errors contains sets of error files
+# in different languages. The default error directory
+# is /etc/squid/errors, which is a link to one of these
+# error sets.
+#
+# If you wish to create your own versions of the error files,
+# either to customize them to suit your language or company,
+# copy the template English files to another directory and
+# point this tag at them.
+#
+# Current Language updates can be downloaded from:
+# http://www.squid-cache.org/Versions/langpack/
+#
+# The squid developers are interested in making squid available in
+# a wide variety of languages. If you are making translations for a
+# language that Squid does not currently provide, please consider
+# contributing your translation back to the project.
+# see http://wiki.squid-cache.org/Translations
+#
+#Default:
+# error_directory /usr/share/squid/errors/templates
+
+# TAG: err_html_text
+# HTML text to include in error messages. Make this a "mailto"
+# URL to your admin address, or maybe just a link to your
+# organization's Web page.
+#
+# To include this in your error messages, you must rewrite
+# the error template files (found in the "errors" directory).
+# Wherever you want the 'err_html_text' line to appear,
+# insert a %L tag in the error template file.
+#
+#Default:
+# none
+
+# TAG: email_err_data on|off
+# If enabled, information about the error that occurred will be
+# included in the mailto links of the ERR pages (if %W is set)
+# so that the email body contains the data.
+# Syntax is <A HREF="mailto:%w%W">%w</A>
+#
+#Default:
+# email_err_data on
+
+# TAG: deny_info
+# Usage: deny_info err_page_name acl
+# or deny_info http://... acl
+# Example: deny_info ERR_CUSTOM_ACCESS_DENIED bad_guys
+#
+# This can be used to return an ERR_ page for requests which
+# do not pass the 'http_access' rules. Squid remembers the last
+# acl it evaluated in http_access, and if a 'deny_info' line exists
+# for that ACL Squid returns a corresponding error page.
+#
+# The acl is typically the last acl on the http_access deny line which
+# denied access. The exceptions to this rule are:
+# - When Squid needs to request authentication credentials. It's then
+# the first authentication related acl encountered
+# - When none of the http_access lines matches. It's then the last
+# acl processed on the last http_access line.
+#
+# You may use ERR_ pages that come with Squid or create your own pages
+# and put them into the configured errors/ directory.
+#
+# Alternatively you can specify an error URL. The browsers will
+# get redirected (302) to the specified URL. %s in the redirection
+# URL will be replaced by the requested URL.
+#
+# Alternatively you can tell Squid to reset the TCP connection
+# by specifying TCP_RESET.
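+#
+#Example (hypothetical URL; redirects denied requests to an
+#information page, with %s replaced by the requested URL):
+#deny_info http://intranet.example.com/denied.html?url=%s bad_guys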
+#
+#Default:
+# none
+
+
+# OPTIONS INFLUENCING REQUEST FORWARDING
+# -----------------------------------------------------------------------------
+
+# TAG: nonhierarchical_direct
+# By default, Squid will send any non-hierarchical requests
+# (matching hierarchy_stoplist or not cacheable request type) direct
+# to origin servers.
+#
+# If you set this to off, Squid will prefer to send these
+# requests to parents.
+#
+# Note that in most configurations, by turning this off you will only
+# add latency to these requests without any improvement in global hit
+# ratio.
+#
+# If you are inside a firewall see never_direct instead of
+# this directive.
+#
+#Default:
+# nonhierarchical_direct on
+
+# TAG: prefer_direct
+# Normally Squid tries to use parents for most requests. If for some
+# reason you would like it to first try going direct, and only use a
+# parent if going direct fails, set this to on.
+#
+# By combining nonhierarchical_direct off and prefer_direct on you
+# can set up Squid to use a parent as a backup path if going direct
+# fails.
+#
+# Note: If you want Squid to use parents for all requests see
+# the never_direct directive. prefer_direct only modifies how Squid
+# acts on cacheable requests.
+#
+#Default:
+# prefer_direct off
+
+# TAG: always_direct
+# Usage: always_direct allow|deny [!]aclname ...
+#
+# Here you can use ACL elements to specify requests which should
+# ALWAYS be forwarded by Squid to the origin servers without using
+# any peers. For example, to always directly forward requests for
+# local servers, ignoring any parents or siblings you may have, use
+# something like:
+#
+# acl local-servers dstdomain my.domain.net
+# always_direct allow local-servers
+#
+# To always forward FTP requests directly, use
+#
+# acl FTP proto FTP
+# always_direct allow FTP
+#
+# NOTE: There is a similar, but opposite option named
+# 'never_direct'. You need to be aware that "always_direct deny
+# foo" is NOT the same thing as "never_direct allow foo". You
+# may need to use a deny rule to exclude a more-specific case of
+# some other rule. Example:
+#
+# acl local-external dstdomain external.foo.net
+# acl local-servers dstdomain .foo.net
+# always_direct deny local-external
+# always_direct allow local-servers
+#
+# NOTE: If your goal is to make the client forward the request
+# directly to the origin server bypassing Squid then this needs
+# to be done in the client configuration. Squid configuration
+# can only tell Squid how Squid should fetch the object.
+#
+# NOTE: This directive is not related to caching. The replies
+# are cached as usual even if you use always_direct. To not cache
+# the replies see no_cache.
+#
+# This option replaces some v1.1 options such as local_domain
+# and local_ip.
+#
+#Default:
+# none
+
+# TAG: never_direct
+# Usage: never_direct allow|deny [!]aclname ...
+#
+# never_direct is the opposite of always_direct. Please read
+# the description for always_direct if you have not already.
+#
+# With 'never_direct' you can use ACL elements to specify
+# requests which should NEVER be forwarded directly to origin
+# servers. For example, to force the use of a proxy for all
+# requests, except those in your local domain use something like:
+#
+# acl local-servers dstdomain .foo.net
+# never_direct deny local-servers
+# never_direct allow all
+#
+# or if Squid is inside a firewall and there are local intranet
+# servers inside the firewall use something like:
+#
+# acl local-intranet dstdomain .foo.net
+# acl local-external dstdomain external.foo.net
+# always_direct deny local-external
+# always_direct allow local-intranet
+# never_direct allow all
+#
+# This option replaces some v1.1 options such as inside_firewall
+# and firewall_ip.
+#
+#Default:
+# none
+
+
+# ADVANCED NETWORKING OPTIONS
+# -----------------------------------------------------------------------------
+
+# TAG: incoming_icp_average
+# TAG: incoming_http_average
+# TAG: incoming_dns_average
+# TAG: min_icp_poll_cnt
+# TAG: min_dns_poll_cnt
+# TAG: min_http_poll_cnt
+# Heavy voodoo here. I can't even believe you are reading this.
+# Are you crazy? Don't even think about adjusting these unless
+# you understand the algorithms in comm_select.c first!
+#
+#Default:
+# incoming_icp_average 6
+# incoming_http_average 4
+# incoming_dns_average 4
+# min_icp_poll_cnt 8
+# min_dns_poll_cnt 8
+# min_http_poll_cnt 8
+
+# TAG: accept_filter
+# FreeBSD:
+#
+# The name of an accept(2) filter to install on Squid's
+# listen socket(s). This feature is perhaps specific to
+# FreeBSD and requires support in the kernel.
+#
+# The 'httpready' filter delays delivering new connections
+# to Squid until a full HTTP request has been received.
+# See the accf_http(9) man page for details.
+#
+# The 'dataready' filter delays delivering new connections
+# to Squid until there is some data to process.
+# See the accf_dataready(9) man page for details.
+#
+# Linux:
+#
+# The 'data' filter delays the delivery of new connections
+# to Squid until there is some data to process, using the
+# TCP_DEFER_ACCEPT socket option.
+# You may optionally specify a number of seconds to wait by
+# 'data=N' where N is the number of seconds. Defaults to 30
+# if not specified. See the tcp(7) man page for details.
+#EXAMPLE:
+## FreeBSD
+#accept_filter httpready
+## Linux
+#accept_filter data
+#
+#Default:
+# none
+
+# TAG: tcp_recv_bufsize (bytes)
+# Size of receive buffer to set for TCP sockets. Probably just
+# as easy to change your kernel's default. Set to zero to use
+# the default buffer size.
+#
+#Default:
+# tcp_recv_bufsize 0 bytes
+
+
+# ICAP OPTIONS
+# -----------------------------------------------------------------------------
+
+# TAG: icap_enable on|off
+# If you want to enable the ICAP module support, set this to on.
+#
+#Default:
+# icap_enable off
+
+# TAG: icap_connect_timeout
+# This parameter specifies how long to wait for the TCP connect to
+# the requested ICAP server to complete before giving up and either
+# terminating the HTTP transaction or bypassing the failure.
+#
+# The default for optional services is peer_connect_timeout.
+# The default for essential services is connect_timeout.
+# If this option is explicitly set, its value applies to all services.
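+#
+#Example (hypothetical value applying to all ICAP services):
+#icap_connect_timeout 15 seconds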
+#
+#Default:
+# none
+
+# TAG: icap_io_timeout time-units
+# This parameter specifies how long to wait for an I/O activity on
+# an established, active ICAP connection before giving up and
+# either terminating the HTTP transaction or bypassing the
+# failure.
+#
+# The default is read_timeout.
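+#
+#Example (hypothetical value; an explicit limit instead of the
+#read_timeout default):
+#icap_io_timeout 5 minutes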
+#
+#Default:
+# none
+
+# TAG: icap_service_failure_limit
+# The limit specifies the number of failures that Squid tolerates
+# when establishing a new TCP connection with an ICAP service. If
+# the number of failures exceeds the limit, the ICAP service is
+# not used for new ICAP requests until it is time to refresh its
+# OPTIONS. The per-service failure counter is reset to zero each
+# time Squid fetches new service OPTIONS.
+#
+# A negative value disables the limit. Without the limit, an ICAP
+# service will not be considered down due to connectivity failures
+# between ICAP OPTIONS requests.
+#
+#Default:
+# icap_service_failure_limit 10
+
+# TAG: icap_service_revival_delay
+# The delay specifies the number of seconds to wait after an ICAP
+# OPTIONS request failure before requesting the options again. The
+# failed ICAP service is considered "down" until fresh OPTIONS are
+# fetched.
+#
+# The actual delay cannot be smaller than the hardcoded minimum
+# delay of 30 seconds.
+#
+#Default:
+# icap_service_revival_delay 180
+
+# TAG: icap_preview_enable on|off
+# The ICAP Preview feature allows the ICAP server to handle the
+# HTTP message by looking only at the beginning of the message body
+# or even without receiving the body at all. In some environments,
+# previews greatly speed up ICAP processing.
+#
+# During an ICAP OPTIONS transaction, the server may tell Squid what
+# HTTP messages should be previewed and how big the preview should be.
+# Squid will not use Preview if the server did not request one.
+#
+# To disable ICAP Preview for all ICAP services, regardless of
+# individual ICAP server OPTIONS responses, set this option to "off".
+#Example:
+#icap_preview_enable off
+#
+#Default:
+# icap_preview_enable on
+
+# TAG: icap_preview_size
+# The default size of preview data to be sent to the ICAP server.
+# -1 means no preview. This value may be overridden on a per-server
+# basis by OPTIONS requests.
+#
+#Default:
+# icap_preview_size -1
+
+# TAG: icap_default_options_ttl
+# The default TTL value for ICAP OPTIONS responses that don't have
+# an Options-TTL header.
+#
+#Default:
+# icap_default_options_ttl 60
+
+# TAG: icap_persistent_connections on|off
+# Whether or not Squid should use persistent connections to
+# an ICAP server.
+#
+#Default:
+# icap_persistent_connections on
+
+# TAG: icap_send_client_ip on|off
+# This adds the header "X-Client-IP" to ICAP requests.
+#
+#Default:
+# icap_send_client_ip off
+
+# TAG: icap_send_client_username on|off
+# This sends authenticated HTTP client username (if available) to
+# the ICAP service. The username value is encoded based on the
+# icap_client_username_encode option and is sent using the header
+# specified by the icap_client_username_header option.
+#
+#Default:
+# icap_send_client_username off
+
+# TAG: icap_client_username_header
+# ICAP request header name to use for send_client_username.
+#
+#Default:
+# icap_client_username_header X-Client-Username
+
+# TAG: icap_client_username_encode on|off
+# Whether to base64 encode the authenticated client username.
+#
+#Default:
+# icap_client_username_encode off
+
+# TAG: icap_service
+# Defines a single ICAP service
+#
+# icap_service servicename vectoring_point bypass service_url
+#
+# vectoring_point = reqmod_precache|reqmod_postcache|respmod_precache|respmod_postcache
+# This specifies at which point of transaction processing the
+# ICAP service should be activated. *_postcache vectoring points
+# are not yet supported.
+# bypass = 1|0
+# If set to 1, the ICAP service is treated as optional. If the
+# service cannot be reached or malfunctions, Squid will try to
+# ignore any errors and process the message as if the service
+# was not enabled. Not all ICAP errors can be bypassed.
+# If set to 0, the ICAP service is treated as essential and all
+# ICAP errors will result in an error page returned to the
+# HTTP client.
+# service_url = icap://servername:port/service
+#
+#Example:
+#icap_service service_1 reqmod_precache 0 icap://icap1.mydomain.net:1344/reqmod
+#icap_service service_2 respmod_precache 0 icap://icap2.mydomain.net:1344/respmod
+#
+#Default:
+# none
+
+# TAG: icap_class
+# Defines an ICAP service chain. Eventually, multiple services per
+# vectoring point will be supported. For now, please specify a single
+# service per class:
+#
+# icap_class classname servicename
+#
+#Example:
+#icap_class class_1 service_1
+#icap_class class_2 service_1
+#icap_class class_3 service_3
+#
+#Default:
+# none
+
+# TAG: icap_access
+# Redirects a request through an ICAP service class, depending
+# on given acls
+#
+# icap_access classname allow|deny [!]aclname...
+#
+# The icap_access statements are processed in the order they appear in
+# this configuration file. If an access list matches, the processing stops.
+# For an "allow" rule, the specified class is used for the request. A "deny"
+# rule simply stops processing without using the class. You can also use the
+# special classname "None".
+#
+# For backward compatibility, it is also possible to use services
+# directly here.
+#Example:
+#icap_access class_1 allow all
+#
+#Default:
+# none
+
+
+# DNS OPTIONS
+# -----------------------------------------------------------------------------
+
+# TAG: check_hostnames
+# For security and stability reasons Squid can check
+# hostnames for Internet standard RFC compliance. If you want
+# Squid to perform these checks turn this directive on.
+#
+#Default:
+# check_hostnames off
+
+# TAG: allow_underscore
+# Underscore characters are not strictly allowed in Internet hostnames
+# but are nevertheless used by many sites. Set this to off if you want
+# Squid to be strict about the standard.
+# This check is performed only when check_hostnames is set to on.
+#
+#Default:
+# allow_underscore on
+
+# TAG: cache_dns_program
+# Note: This option is only available if Squid is rebuilt with the
+# --disable-internal-dns option
+#
+# Specify the location of the executable for dnslookup process.
+#
+#Default:
+# cache_dns_program /usr/lib64/squid/dnsserver
+
+# TAG: dns_children
+# Note: This option is only available if Squid is rebuilt with the
+# --disable-internal-dns option
+#
+# The number of processes spawned to service DNS name lookups.
+# For heavily loaded caches on large servers, you should
+# probably increase this value to at least 10. The maximum
+# is 32. The default is 5.
+#
+# You must have at least one dnsserver process.
+#
+#Default:
+# dns_children 5
+
+# TAG: dns_retransmit_interval
+# Initial retransmit interval for DNS queries. The interval is
+# doubled each time all configured DNS servers have been tried.
+#
+#
+#Default:
+# dns_retransmit_interval 5 seconds
+
+# TAG: dns_timeout
+# DNS Query timeout. If no response is received to a DNS query
+# within this time all DNS servers for the queried domain
+# are assumed to be unavailable.
+#
+#Default:
+# dns_timeout 2 minutes
+
+# TAG: dns_defnames on|off
+# Normally the RES_DEFNAMES resolver option is disabled
+# (see res_init(3)). This prevents caches in a hierarchy
+# from interpreting single-component hostnames locally. To allow
+# Squid to handle single-component names, enable this option.
+#
+#Default:
+# dns_defnames off
+
+# TAG: dns_nameservers
+# Use this if you want to specify a list of DNS name servers
+# (IP addresses) to use instead of those given in your
+# /etc/resolv.conf file.
+# On Windows platforms, if no value is specified here or in
+# the /etc/resolv.conf file, the list of DNS name servers is
+# taken from the Windows registry; both static and dynamic DHCP
+# configurations are supported.
+#
+# Example: dns_nameservers 10.0.0.1 192.172.0.4
+#
+#Default:
+# none
+
+# TAG: hosts_file
+# Location of the host-local IP name-address associations
+# database. Most Operating Systems have such a file on different
+# default locations:
+# - Un*X & Linux: /etc/hosts
+# - Windows NT/2000: %SystemRoot%\system32\drivers\etc\hosts
+# (%SystemRoot% value install default is c:\winnt)
+# - Windows XP/2003: %SystemRoot%\system32\drivers\etc\hosts
+# (%SystemRoot% value install default is c:\windows)
+# - Windows 9x/Me: %windir%\hosts
+# (%windir% value is usually c:\windows)
+# - Cygwin: /etc/hosts
+#
+# The file contains newline-separated definitions, in the
+# form ip_address_in_dotted_form name [name ...] names are
+# whitespace-separated. Lines beginning with a hash (#)
+# character are comments.
+#
+# The file is checked at startup and upon configuration.
+# If set to 'none', it won't be checked.
+# If append_domain is used, that domain will be added to
+# domain-local (i.e. not containing any dot character) host
+# definitions.
+#
+#Default:
+# hosts_file /etc/hosts
+
+# TAG: dns_testnames
+# The DNS tests exit as soon as the first site is successfully looked up.
+#
+# This test can be disabled with the -D command line option.
+#
+#Default:
+# dns_testnames netscape.com internic.net nlanr.net microsoft.com
+
+# TAG: append_domain
+# Appends local domain name to hostnames without any dots in
+# them. append_domain must begin with a period.
+#
+# Be warned there are now Internet names with no dots in
+# them using only top-domain names, so setting this may
+# cause some Internet sites to become unavailable.
+#
+#Example:
+# append_domain .yourdomain.com
+#
+#Default:
+# none
+
+# TAG: ignore_unknown_nameservers
+# By default Squid checks that DNS responses are received
+# from the same IP addresses they are sent to. If they
+# don't match, Squid ignores the response and writes a warning
+# message to cache.log. You can allow responses from unknown
+# nameservers by setting this option to 'off'.
+#
+#Default:
+# ignore_unknown_nameservers on
+
+# TAG: ipcache_size (number of entries)
+# TAG: ipcache_low (percent)
+# TAG: ipcache_high (percent)
+# The size, low-, and high-water marks for the IP cache.
+#
+#Default:
+# ipcache_size 1024
+# ipcache_low 90
+# ipcache_high 95
+
+# TAG: fqdncache_size (number of entries)
+# Maximum number of FQDN cache entries.
+#
+#Default:
+# fqdncache_size 1024
+
+
+# MISCELLANEOUS
+# -----------------------------------------------------------------------------
+
+# TAG: memory_pools on|off
+# If set, Squid will keep pools of allocated (but unused) memory
+# available for future use. If memory is a premium on your
+# system and you believe your malloc library outperforms Squid
+# routines, disable this.
+#
+#Default:
+# memory_pools on
+
+# TAG: memory_pools_limit (bytes)
+# Used only with memory_pools on:
+# memory_pools_limit 50 MB
+#
+# If set to a non-zero value, Squid will keep at most the specified
+# limit of allocated (but unused) memory in memory pools. All free()
+# requests that exceed this limit will be handled by your malloc
+# library. Squid does not pre-allocate any memory, just safe-keeps
+# objects that otherwise would be free()d. Thus, it is safe to set
+# memory_pools_limit to a reasonably high value even if your
+# configuration will use less memory.
+#
+# If set to zero, Squid will keep all memory it can. That is, there
+# will be no limit on the total amount of memory used for safe-keeping.
+#
+# To disable memory allocation optimization, do not set
+# memory_pools_limit to 0. Set memory_pools to "off" instead.
+#
+# An overhead for maintaining memory pools is not taken into account
+# when the limit is checked. This overhead is close to four bytes per
+# object kept. However, pools may actually _save_ memory because of
+# reduced memory thrashing in your malloc library.
+#
+#Default:
+# memory_pools_limit 5 MB
+
+# TAG: forwarded_for on|off
+# If set, Squid will include your system's IP address or name
+# in the HTTP requests it forwards. By default it looks like
+# this:
+#
+# X-Forwarded-For: 192.1.2.3
+#
+# If you disable this, it will appear as
+#
+# X-Forwarded-For: unknown
+#
+#Default:
+# forwarded_for on
+
+# TAG: cachemgr_passwd
+# Specify passwords for cachemgr operations.
+#
+# Usage: cachemgr_passwd password action action ...
+#
+# Some valid actions are (see cache manager menu for a full list):
+# 5min
+# 60min
+# asndb
+# authenticator
+# cbdata
+# client_list
+# comm_incoming
+# config *
+# counters
+# delay
+# digest_stats
+# dns
+# events
+# filedescriptors
+# fqdncache
+# histograms
+# http_headers
+# info
+# io
+# ipcache
+# mem
+# menu
+# netdb
+# non_peers
+# objects
+# offline_toggle *
+# pconn
+# peer_select
+# reconfigure *
+# redirector
+# refresh
+# server_list
+# shutdown *
+# store_digest
+# storedir
+# utilization
+# via_headers
+# vm_objects
+#
+# * Indicates actions which will not be performed without a
+# valid password, others can be performed if not listed here.
+#
+# To disable an action, set the password to "disable".
+# To allow performing an action without a password, set the
+# password to "none".
+#
+# Use the keyword "all" to set the same password for all actions.
+#
+#Example:
+# cachemgr_passwd secret shutdown
+# cachemgr_passwd lesssssssecret info stats/objects
+# cachemgr_passwd disable all
+#
+#Default:
+# none
+
+# TAG: client_db on|off
+# If you want to disable collecting per-client statistics,
+# turn off client_db here.
+#
+#Default:
+# client_db on
+
+# TAG: refresh_all_ims on|off
+# When you enable this option, squid will always check
+# the origin server for an update when a client sends an
+# If-Modified-Since request. Many browsers use IMS
+# requests when the user requests a reload, and this
+# ensures those clients receive the latest version.
+#
+# By default (off), squid may return a Not Modified response
+# based on the age of the cached version.
+#
+#Default:
+# refresh_all_ims off
+
+# TAG: reload_into_ims on|off
+# When you enable this option, client no-cache or ``reload''
+# requests will be changed to If-Modified-Since requests.
+# Doing this VIOLATES the HTTP standard. Enabling this
+# feature could make you liable for problems which it
+# causes.
+#
+# see also refresh_pattern for a more selective approach.
+#
+#Default:
+# reload_into_ims off
+
+# TAG: maximum_single_addr_tries
+# This sets the maximum number of connection attempts for a
+# host that only has one address (for multiple-address hosts,
+# each address is tried once).
+#
+# The default value is one attempt; the (not recommended)
+# maximum is 255 tries. A warning message will be generated
+# if it is set to a value greater than ten.
+#
+# Note: This is in addition to the request re-forwarding which
+# takes place if Squid fails to get a satisfying response.
+#
+#Default:
+# maximum_single_addr_tries 1
+
+# TAG: retry_on_error
+# If set to on Squid will automatically retry requests when
+# receiving an error response. This is mainly useful if you
+# are in a complex cache hierarchy to work around access
+# control errors.
+#
+#Default:
+# retry_on_error off
+
+# TAG: as_whois_server
+# WHOIS server to query for AS numbers. NOTE: AS numbers are
+# queried only when Squid starts up, not for every request.
+#
+#Default:
+# as_whois_server whois.ra.net
+
+# TAG: offline_mode
+# Enable this option and Squid will never try to validate cached
+# objects.
+#
+#Default:
+# offline_mode off
+
+# TAG: uri_whitespace
+# What to do with requests that have whitespace characters in the
+# URI. Options:
+#
+# strip: The whitespace characters are stripped out of the URL.
+# This is the behavior recommended by RFC2396.
+# deny: The request is denied. The user receives an "Invalid
+# Request" message.
+# allow: The request is allowed and the URI is not changed. The
+# whitespace characters remain in the URI. Note the
+# whitespace is passed to redirector processes if they
+# are in use.
+# encode: The request is allowed and the whitespace characters are
+# encoded according to RFC1738. This could be considered
+# a violation of the HTTP/1.1
+# RFC because proxies are not allowed to rewrite URIs.
+# chop: The request is allowed and the URI is chopped at the
+# first whitespace. This might also be considered a
+# violation.
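+#
+#Example (hypothetical; rejects URIs containing whitespace instead
+#of stripping it):
+#uri_whitespace deny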
+#
+#Default:
+# uri_whitespace strip
+
+# TAG: coredump_dir
+# By default Squid leaves core files in the directory from where
+# it was started. If you set 'coredump_dir' to a directory
+# that exists, Squid will chdir() to that directory at startup
+# and coredump files will be left there.
+#
+#Default:
+# coredump_dir none
+#
+# Leave coredumps in the first cache dir
+coredump_dir /var/spool/squid
+
+# TAG: chroot
+# Use this to have Squid do a chroot() while initializing. This
+# also causes Squid to fully drop root privileges after
+# initializing. This means, for example, if you use a HTTP
+# port less than 1024 and try to reconfigure, you may get an
+# error saying that Squid cannot open the port.
+#
+#Default:
+# none
+
+# TAG: balance_on_multiple_ip
+# Some load balancing servers based on round robin DNS have been
+# found not to preserve user session state across requests
+# to different IP addresses.
+#
+# By default Squid rotates IPs per request. If this directive is
+# disabled, only a connection failure triggers rotation.
+#
+#Default:
+# balance_on_multiple_ip on
+
+# TAG: pipeline_prefetch
+# To boost the performance of pipelined requests to closer
+# match that of a non-proxied environment Squid can try to fetch
+# up to two requests in parallel from a pipeline.
+#
+# Defaults to off for bandwidth management and access logging
+# reasons.
+#
+#Default:
+# pipeline_prefetch off
+
+# TAG: high_response_time_warning (msec)
+# If the one-minute median response time exceeds this value,
+# Squid prints a WARNING with debug level 0 to get the
+# administrator's attention. The value is in milliseconds.
+#
+#Default:
+# high_response_time_warning 0
+
+# TAG: high_page_fault_warning
+# If the one-minute average page fault rate exceeds this
+# value, Squid prints a WARNING with debug level 0 to get
+# the administrator's attention. The value is in page faults
+# per second.
+#
+#Default:
+# high_page_fault_warning 0
+
+# TAG: high_memory_warning
+# If the memory usage (as determined by mallinfo) exceeds
+# this amount, Squid prints a WARNING with debug level 0 to get
+# the administrator's attention.
+#
+#Default:
+# high_memory_warning 0 KB
+
+# TAG: sleep_after_fork (microseconds)
+# When this is set to a non-zero value, the main Squid process
+# sleeps the specified number of microseconds after a fork()
+# system call. This sleep may help the situation where your
+# system reports fork() failures due to lack of (virtual)
+# memory. Note, however, if you have a lot of child
+# processes, these sleep delays will add up and your
+# Squid will not service requests for some amount of time
+# until all the child processes have been started.
+# On Windows, values less than 1000 (1 millisecond) are
+# rounded to 1000.
+#
+#Default:
+# sleep_after_fork 0
+
+# TAG: windows_ipaddrchangemonitor on|off
+# On Windows Squid by default will monitor IP address changes and will
+# reconfigure itself after any detected event. This is very useful for
+# proxies connected to the internet with dial-up interfaces.
+# In some cases (a proxy server acting as a VPN gateway is one) it
+# could be desirable to disable this behaviour by setting this
+# to 'off'.
+# Note: after changing this, the Squid service must be restarted.
+#
+#Default:
+# windows_ipaddrchangemonitor on
+
--- /dev/null
+# $OpenBSD: ssh_config,v 1.28 2013/09/16 11:35:43 sthen Exp $
+
+# This is the ssh client system-wide configuration file. See
+# ssh_config(5) for more information. This file provides defaults for
+# users, and the values can be changed in per-user configuration files
+# or on the command line.
+
+# Configuration data is parsed as follows:
+# 1. command line options
+# 2. user-specific file
+# 3. system-wide file
+# Any configuration value is only changed the first time it is set.
+# Thus, host-specific definitions should be at the beginning of the
+# configuration file, and defaults at the end.
+
+# Site-wide defaults for some commonly used options. For a comprehensive
+# list of available options, their meanings and defaults, please see the
+# ssh_config(5) man page.
+
+# Host *
+# ForwardAgent no
+# ForwardX11 no
+# RhostsRSAAuthentication no
+# RSAAuthentication yes
+# PasswordAuthentication yes
+# HostbasedAuthentication no
+# GSSAPIAuthentication no
+# GSSAPIDelegateCredentials no
+# GSSAPIKeyExchange no
+# GSSAPITrustDNS no
+# BatchMode no
+# CheckHostIP yes
+# AddressFamily any
+# ConnectTimeout 0
+# StrictHostKeyChecking ask
+# IdentityFile ~/.ssh/identity
+# IdentityFile ~/.ssh/id_rsa
+# IdentityFile ~/.ssh/id_dsa
+# Port 22
+# Protocol 2,1
+# Cipher 3des
+# Ciphers aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc
+# MACs hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd160
+# EscapeChar ~
+# Tunnel no
+# TunnelDevice any:any
+# PermitLocalCommand no
+# VisualHostKey no
+# ProxyCommand ssh -q -W %h:%p gateway.example.com
+# RekeyLimit 1G 1h
+#
+# Uncomment this if you want to use .local domain
+# Host *.local
+# CheckHostIP no
+
+Host *
+ GSSAPIAuthentication no
+# If this option is set to yes then remote X11 clients will have full access
+# to the original X11 display. As virtually no X11 client supports the untrusted
+# mode correctly, we set this to yes.
+ ForwardX11Trusted = yes
+# Send locale-related environment variables
+ SendEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
+ SendEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
+ SendEnv LC_IDENTIFICATION LC_ALL LANGUAGE
+ SendEnv XMODIFIERS
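+
+# Hypothetical per-host example (commented out). Because ssh uses the
+# first obtained value for each option, host-specific blocks like this
+# should appear before the "Host *" block in a real configuration:
+# Host backup.example.com
+#     Port 2222
+#     IdentityFile ~/.ssh/id_backup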
--- /dev/null
+# $OpenBSD: sshd_config,v 1.75 2007/03/19 01:01:29 djm Exp $
+
+# This is the sshd server system-wide configuration file. See
+# sshd_config(5) for more information.
+
+# This sshd was compiled with PATH=/usr/local/bin:/bin:/usr/bin
+
+# The strategy used for options in the default sshd_config shipped with
+# OpenSSH is to specify options with their default value where
+# possible, but leave them commented. Uncommented options change a
+# default value.
+
+#Port 22
+#AddressFamily any
+#ListenAddress 0.0.0.0
+#ListenAddress ::
+
+# Disable legacy (protocol version 1) support in the server for new
+# installations. In future the default will change to require explicit
+# activation of protocol 1
+Protocol 2
+
+# HostKey for protocol version 1
+#HostKey /etc/ssh/ssh_host_key
+# HostKeys for protocol version 2
+#HostKey /etc/ssh/ssh_host_rsa_key
+#HostKey /etc/ssh/ssh_host_dsa_key
+
+# Lifetime and size of ephemeral version 1 server key
+#KeyRegenerationInterval 1h
+#ServerKeyBits 768
+
+# Logging
+# obsoletes QuietMode and FascistLogging
+#SyslogFacility AUTH
+SyslogFacility AUTHPRIV
+#LogLevel INFO
+
+# Authentication:
+
+#LoginGraceTime 2m
+#PermitRootLogin yes
+#StrictModes yes
+#MaxAuthTries 6
+
+#RSAAuthentication yes
+#PubkeyAuthentication yes
+#AuthorizedKeysFile .ssh/authorized_keys
+
+# For this to work you will also need host keys in /etc/ssh/ssh_known_hosts
+#RhostsRSAAuthentication no
+# similar for protocol version 2
+#HostbasedAuthentication no
+# Change to yes if you don't trust ~/.ssh/known_hosts for
+# RhostsRSAAuthentication and HostbasedAuthentication
+#IgnoreUserKnownHosts no
+# Don't read the user's ~/.rhosts and ~/.shosts files
+#IgnoreRhosts yes
+
+# To disable tunneled clear text passwords, change to no here!
+#PasswordAuthentication yes
+#PermitEmptyPasswords no
+PasswordAuthentication yes
+
+# Change to no to disable s/key passwords
+#ChallengeResponseAuthentication yes
+ChallengeResponseAuthentication no
+
+# Kerberos options
+#KerberosAuthentication no
+#KerberosOrLocalPasswd yes
+#KerberosTicketCleanup yes
+#KerberosGetAFSToken no
+
+# GSSAPI options
+#GSSAPIAuthentication no
+GSSAPIAuthentication yes
+#GSSAPICleanupCredentials yes
+GSSAPICleanupCredentials yes
+
+# Set this to 'yes' to enable PAM authentication, account processing,
+# and session processing. If this is enabled, PAM authentication will
+# be allowed through the ChallengeResponseAuthentication and
+# PasswordAuthentication. Depending on your PAM configuration,
+# PAM authentication via ChallengeResponseAuthentication may bypass
+# the setting of "PermitRootLogin without-password".
+# If you just want the PAM account and session checks to run without
+# PAM authentication, then enable this but set PasswordAuthentication
+# and ChallengeResponseAuthentication to 'no'.
+#UsePAM no
+UsePAM yes
+
+# Accept locale-related environment variables
+AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
+AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
+AcceptEnv LC_IDENTIFICATION LC_ALL
+#AllowTcpForwarding yes
+#GatewayPorts no
+#X11Forwarding no
+X11Forwarding yes
+#X11DisplayOffset 10
+#X11UseLocalhost yes
+#PrintMotd yes
+#PrintLastLog yes
+#TCPKeepAlive yes
+#UseLogin no
+#UsePrivilegeSeparation yes
+#PermitUserEnvironment no
+#Compression delayed
+#ClientAliveInterval 0
+#ClientAliveCountMax 3
+#ShowPatchLevel no
+#UseDNS yes
+#PidFile /var/run/sshd.pid
+#MaxStartups 10
+#PermitTunnel no
+
+# no default banner path
+#Banner /some/path
+
+# override default of no subsystems
+Subsystem sftp /usr/libexec/openssh/sftp-server
+
+# Example of overriding settings on a per-user basis
+Match User anoncvs
+ X11Forwarding no
+ AllowTcpForwarding no
+ ForceCommand cvs server
+
+Match Group restricted
+ ForceCommand /usr/local/bin/restricted_group_command
--- /dev/null
+## Sudoers allows particular users to run various commands as
+## the root user, without needing the root password.
+##
+## Examples are provided at the bottom of the file for collections
+## of related commands, which can then be delegated out to particular
+## users or groups.
+##
+## This file must be edited with the 'visudo' command.
+
+## Host Aliases
+## Groups of machines. You may prefer to use hostnames (perhaps using
+## wildcards for entire domains) or IP addresses instead.
+# Host_Alias FILESERVERS = fs1, fs2
+# Host_Alias MAILSERVERS = smtp, smtp2
+
+## User Aliases
+## These aren't often necessary, as you can use regular groups
+## (i.e., from files, LDAP, NIS, etc.) in this file - just use %groupname
+## rather than USERALIAS
+# User_Alias ADMINS = jsmith, mikem
+
+
+## Command Aliases
+## These are groups of related commands...
+
+## Networking
+Cmnd_Alias NETWORKING = /sbin/route, /sbin/ifconfig, /bin/ping, /sbin/dhclient, /usr/bin/net, /sbin/iptables, /usr/bin/rfcomm, /usr/bin/wvdial, /sbin/iwconfig, /sbin/mii-tool
+
+## Installation and management of software
+Cmnd_Alias SOFTWARE = /bin/rpm, /usr/bin/up2date, /usr/bin/yum
+
+## Services
+Cmnd_Alias SERVICES = /sbin/service, /sbin/chkconfig
+
+## Updating the locate database
+Cmnd_Alias LOCATE = /usr/bin/updatedb
+
+## Storage
+Cmnd_Alias STORAGE = /sbin/fdisk, /sbin/sfdisk, /sbin/parted, /sbin/partprobe, /bin/mount, /bin/umount
+
+## Delegating permissions
+Cmnd_Alias DELEGATING = /usr/sbin/visudo, /bin/chown, /bin/chmod, /bin/chgrp
+
+## Processes
+Cmnd_Alias PROCESSES = /bin/nice, /bin/kill, /usr/bin/kill, /usr/bin/killall
+
+## Drivers
+Cmnd_Alias DRIVERS = /sbin/modprobe
+
+# Defaults specification
+
+#
+# Disable "ssh hostname sudo <cmd>", because it will show the password in clear text.
+# You have to run "ssh -t hostname sudo <cmd>".
+#
+Defaults requiretty
+
+Defaults env_reset
+Defaults env_keep = "COLORS DISPLAY HOSTNAME HISTSIZE INPUTRC KDEDIR LS_COLORS"
+Defaults env_keep += "MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE"
+Defaults env_keep += "LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES"
+Defaults env_keep += "LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE"
+Defaults env_keep += "LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY"
+
+Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin
+
+## Next comes the main part: which users can run what software on
+## which machines (the sudoers file can be shared between multiple
+## systems).
+## Syntax:
+##
+## user MACHINE=COMMANDS
+##
+## The COMMANDS section may have other options added to it.
+##
+## Allow root to run any commands anywhere
+root ALL=(ALL) ALL
+
+## Allows members of the 'sys' group to run networking, software,
+## service management apps and more.
+# %sys ALL = NETWORKING, SOFTWARE, SERVICES, STORAGE, DELEGATING, PROCESSES, LOCATE, DRIVERS
+
+## Allows people in group wheel to run all commands
+%wheel ALL=(ALL) ALL
+
+## Same thing without a password
+# %wheel ALL=(ALL) NOPASSWD: ALL
+
+## Allows members of the users group to mount and unmount the
+## cdrom as root
+# %users ALL=/sbin/mount /mnt/cdrom, /sbin/umount /mnt/cdrom
+
+## Allows members of the users group to shutdown this system
+# %users localhost=/sbin/shutdown -h now
+
--- /dev/null
+# This file has been generated by the Anaconda Installer 21.48.22.134-1
+
+[ProgressSpoke]
+visited = 1
+
--- /dev/null
+# specify additional command line arguments for atd
+#
+# -l Specifies a limiting load factor, over which batch jobs should not be run, instead of the compile-time
+# choice of 0.8. For an SMP system with n CPUs, you will probably want to set this higher than n-1.
+#
+# -b Specify the minimum interval in seconds between the start of two batch jobs (60 default).
+
+#example:
+#OPTS="-l 4 -b 120"
--- /dev/null
+USEWINBINDAUTH=no
+USEHESIOD=no
+USESYSNETAUTH=no
+USEKERBEROS=no
+FORCESMARTCARD=no
+USESMBAUTH=no
+USESMARTCARD=no
+USELDAPAUTH=no
+USELOCAUTHORIZE=no
+USEWINBIND=no
+USESHADOW=yes
+USEDB=no
+USEPASSWDQC=no
+USEMD5=yes
+USELDAP=no
+USECRACKLIB=yes
+USENIS=no
--- /dev/null
+#
+# Define default options for autofs.
+#
+# MASTER_MAP_NAME - default map name for the master map.
+#
+#MASTER_MAP_NAME="auto.master"
+#
+# TIMEOUT - set the default mount timeout (default 600).
+#
+TIMEOUT=3600
+#
+# NEGATIVE_TIMEOUT - set the default negative timeout for
+# failed mount attempts (default 60).
+#
+#NEGATIVE_TIMEOUT=60
+#
+# BROWSE_MODE - maps are browsable by default.
+#
+BROWSE_MODE="yes"
+#
+# APPEND_OPTIONS - append to global options instead of replace.
+#
+#APPEND_OPTIONS="yes"
+#
+# LOGGING - set default log level "none", "verbose" or "debug"
+#
+#LOGGING="none"
+#
+# Define base dn for map dn lookup.
+#
+# Define server URIs
+#
+# LDAP_URI - space separated list of server uris of the form
+# <proto>://<server>[/] where <proto> can be ldap
+# or ldaps. The option can be given multiple times.
+# Map entries that include a server name override
+# this option.
+#
+#LDAP_URI=""
+#
+# LDAP_TIMEOUT - timeout value for the synchronous API calls
+# (default is LDAP library default).
+#
+#LDAP_TIMEOUT=-1
+#
+# LDAP_NETWORK_TIMEOUT - set the network response timeout (default 8).
+#
+#LDAP_NETWORK_TIMEOUT=8
+#
+# SEARCH_BASE - base dn to use for searching for map search dn.
+# Multiple entries can be given and they are checked
+# in the order they occur here.
+#
+#SEARCH_BASE=""
+#
+# Define the LDAP schema to used for lookups
+#
+# If no schema is set, autofs will check each of the schemas
+# below in the order given to try to locate an appropriate
+# base dn for lookups. If you want to minimize the number of
+# queries to the server, set the values here.
+#
+#MAP_OBJECT_CLASS="nisMap"
+#ENTRY_OBJECT_CLASS="nisObject"
+#MAP_ATTRIBUTE="nisMapName"
+#ENTRY_ATTRIBUTE="cn"
+#VALUE_ATTRIBUTE="nisMapEntry"
+#
+# Other common LDAP naming
+#
+#MAP_OBJECT_CLASS="automountMap"
+#ENTRY_OBJECT_CLASS="automount"
+#MAP_ATTRIBUTE="ou"
+#ENTRY_ATTRIBUTE="cn"
+#VALUE_ATTRIBUTE="automountInformation"
+#
+#MAP_OBJECT_CLASS="automountMap"
+#ENTRY_OBJECT_CLASS="automount"
+#MAP_ATTRIBUTE="automountMapName"
+#ENTRY_ATTRIBUTE="automountKey"
+#VALUE_ATTRIBUTE="automountInformation"
+#
+# AUTH_CONF_FILE - set the default location for the SASL
+# authentication configuration file.
+#
+#AUTH_CONF_FILE="/etc/autofs_ldap_auth.conf"
+#
+# General global options
+#
+#OPTIONS=""
+#
--- /dev/null
+# The ZONE parameter is only evaluated by system-config-date.
+# The timezone of the system is defined by the contents of /etc/localtime.
+ZONE="America/Los_Angeles"
+UTC=true
+ARC=false
--- /dev/null
+# /etc/sysconfig/cpuspeed
+#
+# This configuration file controls the behavior of both the
+# cpuspeed daemon and various cpufreq modules.
+# For the vast majority of users, there shouldn't be any need to
+# alter the contents of this file at all. By and large, frequency
+# scaling should Just Work(tm) with the defaults.
+
+### DRIVER ###
+# Your CPUFreq driver module
+# Note that many drivers are now built-in, rather than built as modules,
+# so it's usually best not to specify one. The most commonly needed driver
+# module these days is 'p4-clockmod'; however, in most cases it is not
+# recommended for use. See: http://lkml.org/lkml/2006/2/25/84
+# default value: empty (try to auto-detect/use built-in)
+DRIVER=
+
+### GOVERNOR ###
+# Which scaling governor to use
+# Details on scaling governors for your cpu(s) can be found in
+# cpu-freq/governors.txt, part of the kernel-doc package
+# NOTES:
+# - The GOVERNOR parameter is only valid on centrino, powernow-k8 (amd64)
+# and p4-clockmod platforms; other platforms that support frequency
+# scaling always use the 'userspace' governor.
+# - Using the 'userspace' governor will trigger the cpuspeed daemon to run,
+# which provides said user-space frequency scaling.
+# default value: empty (defaults to ondemand on centrino, powernow-k8,
+# and p4-clockmod systems, userspace on others)
+GOVERNOR=
+
+### FREQUENCIES ###
+# NOTE: valid max/min frequencies for your cpu(s) can be found in
+# /sys/devices/system/cpu/cpu*/cpufreq/scaling_available_frequencies
+# on systems that support frequency scaling (though only after the
+# appropriate drivers have been loaded via the cpuspeed initscript).
+# maximum speed to scale up to
+# default value: empty (use cpu reported maximum)
+MAX_SPEED=
+# minimum speed to scale down to
+# default value: empty (use cpu reported minimum)
+MIN_SPEED=
+
+### SCALING THRESHOLDS ###
+# Busy percentage threshold over which to scale up to max frequency
+# default value: empty (use governor default)
+UP_THRESHOLD=
+# Busy percentage threshold under which to scale frequency down
+# default value: empty (use governor default)
+DOWN_THRESHOLD=
+
+### NICE PROCESS HANDLING ###
+# Let background (nice) processes speed up the cpu
+# default value: 0 (background process usage can speed up cpu)
+# alternate value: 1 (background processes will be ignored)
+IGNORE_NICE=0
+
+
+#####################################################
+########## HISTORICAL CPUSPEED CONFIG BITS ##########
+#####################################################
+VMAJOR=1
+VMINOR=1
+
+# Add your favorite options here
+#OPTS="$OPTS -s 0 -i 10 -r"
+
+# uncomment and modify this to check the state of the AC adapter
+#OPTS="$OPTS -a /proc/acpi/ac_adapter/*/state"
+
+# uncomment and modify this to check the system temperature
+#OPTS="$OPTS -t /proc/acpi/thermal_zone/*/temperature 75"
--- /dev/null
+# Settings for the CRON daemon.
+# CRONDARGS= : any extra command-line startup arguments for crond
+# CRON_VALIDATE_MAILRCPTS=1: a non-empty value of this variable will
+# enable vixie-cron-4.1's validation of
+# mail recipient names, which would then be
+# restricted to contain only the chars
+# from this tr(1) set : [@!:%-_.,:alnum:]
+# otherwise mailing is not attempted.
+CRONDARGS=
--- /dev/null
+# Possible values are 1, 2, ... or nothing
+# The delay is determined from the hostname and the DELAY variable in this configuration file.
+# A bigger value means a shorter delay.
+# The delay can be switched off, but then you risk network overload
+# (for example, yum updates in cron.daily running on all your computers at once).
+DELAY=1
--- /dev/null
+RUN_FIRSTBOOT=NO
--- /dev/null
+boot=/dev/sda
+forcelba=0
--- /dev/null
+# $Id: hsqldb-1.73.0-standard.cfg,v 1.1 2004/12/23 22:21:08 fnasser Exp $
+
+# Sample configuration file for HSQLDB database server.
+# See the "UNIX Quick Start" chapter of the Hsqldb User Guide.
+
+# N.b.!!!! You must place this in the right location for your type of UNIX.
+# See the init script "hsqldb" to see where this must be placed and
+# what it should be renamed to.
+
+# This file is "sourced" by a Bourne shell, so use Bourne shell syntax.
+
+# This file WILL NOT WORK until you set (at least) the non-commented
+# variables to the appropriate values for your system.
+# Life will be easier if you avoid all filepaths with spaces or any other
+# funny characters. Don't ask for support if you ignore this advice.
+
+# Thanks to Meikel Bisping for his contributions. -- Blaine
+
+# JPackage hsqldb home is /var/lib/hsqldb
+
+HSQLDB_HOME=/var/lib/hsqldb
+
+# JPackage source Java config
+
+. /etc/java/java.conf
+
+JAVA_EXECUTABLE=${JAVA_HOME}/bin/java
+
+# Unless you copied a hsqldb.jar file from another system, this typically
+# resides at $HSQLDB_HOME/lib/hsqldb.jar, where $HSQLDB_HOME is your HSQLDB
+# software base directory.
+HSQLDB_JAR_PATH=${HSQLDB_HOME}/lib/hsqldb.jar
+
+# Where the file "server.properties" (or "webserver.properties") resides.
+SERVER_HOME=${HSQLDB_HOME}
+
+# What UNIX user the Server/WebServer process will run as.
+# (The shutdown client is always run as root or the invoker of the init script).
+# Runs as root by default, but you should take the time to set database file
+# ownerships to another user and set that user name here.
+# You do need to run as root if your Server/WebServer will run on a privileged
+# (< 1024) port.
+# If you really do want to run as root, comment out the HSQLDB_OWNER setting
+# completely. I.e., do not set it to root. This will run Server/Webserver
+# without any "su" at all.
+HSQLDB_OWNER=hsqldb
+
+# We require all Server/WebServer instances to be accessible within
+# $MAX_START_SECS from when the Server/WebServer is started.
+# Defaults to 60.
+# Raise this if you are running lots of DB instances or have a slow server.
+#MAX_START_SECS=200
+# Ditto for this one
+#SU_ECHO_SECS=1
+
+# Time to allow for JVM to die after all HSQLDB instances stopped.
+# Defaults to 1.
+#MAX_TERMINATE_SECS=0
+
+# These are "urlid" values from a SqlTool authentication file
+# ** IN ADDITION TO THOSE IN YOUR server.properties OR webserver.properties **
+# file. All server.urlid.X values from your properties file will automatically
+# be started/stopped/tested. $SHUTDOWN_URLIDS is for additional urlids which
+# will be stopped. (Therefore, most users will not set this at all.)
+# Separate multiple values with white space. NO OTHER SPECIAL CHARACTERS!
+# Make sure to quote the entire value if it contains white space separator(s).
+# Defaults to none (i.e., only urlids set in properties file will be stopped).
+#SHUTDOWN_URLIDS='sa mygms'
+
+# SqlTool authentication file used only for shutdown.
+# The default value will be sqltool.rc in root's home directory, since it is
+# root who runs the init script.
+# (See the SqlTool chapter of the HSQLDB User Guide if you don't understand
+# this).
+AUTH_FILE=${HSQLDB_HOME}/sqltool.rc
+
+# Set to 'WebServer' to start an HSQLDB WebServer instead of a Server.
+# Defaults to 'Server'.
+#TARGET_CLASS=WebServer
+
+# Server-side classpath IN ADDITION TO the HSQLDB_JAR_PATH set above.
+# The classpath here is *earlier* than HSQLDB_JAR_PATH, to allow you to
+# override classes in the HSQLDB_JAR_PATH jar file.
+# In particular, you will want to add classpath elements to give access to
+# all of your stored procedures (stored procedures are documented in the
+# HSQLDB User Guide in the SQL Syntax chapter).
+#
+# N.B.!
+# If you're adding files to the classpath in order to be able to call them
+# from SQL queries, you will be unable to access them unless you adjust the
+# value of the system property hsqldb.method_class_names. Please see the
+# comments on SERVER_JVMARGS, at the end of this file.
+# SERVER_ADDL_CLASSPATH=/home/blaine/storedprocs.jar:/usr/dev/dbutil/classes
+
+# For TLS encryption for your Server, set these two variables.
+# N.b.: If you set these, then make this file unreadable to non-root users!!!!
+# See the TLS chapter of the HSQLDB User Guide, paying attention to the
+# security warning(s).
+# If you are running with a private server cert, then you will also need to
+# set "truststore" in your SqlTool config file (location is set by the
+# AUTH_FILE variable in this file, or it must be at the default location for
+# HSQLDB_OWNER).
+#TLS_KEYSTORE=/path/to/jks/server.store
+#TLS_PASSWORD=password
+
+# Any JVM args for the invocation of the JDBC client used to verify DB
+# instances and to shut them down (SqlToolSprayer).
+# For multiple args, put quotes around entire value.
+#CLIENT_JVMARGS=-Djavax.net.debug=ssl
+
+# Any JVM args for the server.
+# For multiple args, put quotes around entire value.
+#
+# N.B.!
+# The default value of SERVER_JVMARGS sets the system property
+# hsqldb.method_class_names to be empty. This is in order to lessen the
+# security risk posed by HSQLDB allowing Java method calls in SQL statements.
+# The implications of changing this value (as explained by the authors of
+# HSQLDB) are as follows:
+# If [it] is not set, then static methods of all available Java classes
+# can be accessed as functions in HSQLDB. If the property is set, then
+# only the list of semicolon separated method names becomes accessible.
+# An empty property value means no class is accessible.
+# Regardless of the value of hsqldb.method_class_names, methods in
+# org.hsqldb.Library will be accessible.
+# Before making changes to the value below, please be advised of the possible
+# dangers involved in allowing SQL queries to contain Java method calls.
+SERVER_JVMARGS=-Dhsqldb.method_class_names=\"\"
--- /dev/null
+# Configuration file for the httpd service.
+
+#
+# The default processing model (MPM) is the process-based
+# 'prefork' model. A thread-based model, 'worker', is also
+# available, but does not work with some modules (such as PHP).
+# The service must be stopped before changing this variable.
+#
+#HTTPD=/usr/sbin/httpd.worker
+
+#
+# To pass additional options (for instance, -D definitions) to the
+# httpd binary at startup, set OPTIONS here.
+#
+#OPTIONS=
+
+#
+# By default, the httpd process is started in the C locale; to
+# change the locale in which the server runs, the HTTPD_LANG
+# variable can be set.
+#
+#HTTPD_LANG=C
--- /dev/null
+9a1c565e-3b93-4e74-9611-2b71b9b84a05
--- /dev/null
+-
+class: OTHER
+bus: PCI
+detached: 0
+desc: "Intel Corporation Mobile 945GM/PM/GMS, 943/940GML and 945GT Express Memory Controller Hub"
+vendorId: 8086
+deviceId: 27a0
+subVendorId: 17aa
+subDeviceId: 2017
+pciType: 1
+pcidom: 0
+pcibus: 0
+pcidev: 0
+pcifn: 0
+-
+class: OTHER
+bus: PCI
+detached: 0
+driver: shpchp
+desc: "Intel Corporation 82801G (ICH7 Family) PCI Express Port 1"
+vendorId: 8086
+deviceId: 27d0
+subVendorId: 0000
+subDeviceId: 0000
+pciType: 1
+pcidom: 0
+pcibus: 0
+pcidev: 1c
+pcifn: 0
+-
+class: OTHER
+bus: PCI
+detached: 0
+driver: shpchp
+desc: "Intel Corporation 82801G (ICH7 Family) PCI Express Port 2"
+vendorId: 8086
+deviceId: 27d2
+subVendorId: 0000
+subDeviceId: 0000
+pciType: 1
+pcidom: 0
+pcibus: 0
+pcidev: 1c
+pcifn: 1
+-
+class: OTHER
+bus: PCI
+detached: 0
+driver: shpchp
+desc: "Intel Corporation 82801G (ICH7 Family) PCI Express Port 3"
+vendorId: 8086
+deviceId: 27d4
+subVendorId: 0000
+subDeviceId: 0000
+pciType: 1
+pcidom: 0
+pcibus: 0
+pcidev: 1c
+pcifn: 2
+-
+class: OTHER
+bus: PCI
+detached: 0
+driver: shpchp
+desc: "Intel Corporation 82801G (ICH7 Family) PCI Express Port 4"
+vendorId: 8086
+deviceId: 27d6
+subVendorId: 0000
+subDeviceId: 0000
+pciType: 1
+pcidom: 0
+pcibus: 0
+pcidev: 1c
+pcifn: 3
+-
+class: OTHER
+bus: PCI
+detached: 0
+desc: "Intel Corporation 82801 Mobile PCI Bridge"
+vendorId: 8086
+deviceId: 2448
+subVendorId: 0000
+subDeviceId: 0000
+pciType: 1
+pcidom: 0
+pcibus: 0
+pcidev: 1e
+pcifn: 0
+-
+class: OTHER
+bus: PCI
+detached: 0
+driver: intel-rng
+desc: "Intel Corporation 82801GBM (ICH7-M) LPC Interface Bridge"
+vendorId: 8086
+deviceId: 27b9
+subVendorId: 17aa
+subDeviceId: 2009
+pciType: 1
+pcidom: 0
+pcibus: 0
+pcidev: 1f
+pcifn: 0
+-
+class: OTHER
+bus: PCI
+detached: 0
+driver: i2c-i801
+desc: "Intel Corporation 82801G (ICH7 Family) SMBus Controller"
+vendorId: 8086
+deviceId: 27da
+subVendorId: 17aa
+subDeviceId: 200f
+pciType: 1
+pcidom: 0
+pcibus: 0
+pcidev: 1f
+pcifn: 3
+-
+class: OTHER
+bus: ISAPNP
+detached: 0
+desc: "ATM1200"
+deviceId: ATM1200
+compat: PNP0c31
+-
+class: OTHER
+bus: USB
+detached: 0
+driver: hci_usb
+desc: "Broadcom Corp BCM2045B"
+usbclass: 254
+usbsubclass: 1
+usbprotocol: 0
+usbbus: 5
+usblevel: 1
+usbport: 0
+usbdev: 2
+vendorId: 0a5c
+deviceId: 2110
+usbmfr: Broadcom Corp
+usbprod: BCM2045B
+-
+class: OTHER
+bus: USB
+detached: 0
+driver: hci_usb
+desc: "Broadcom Corp BCM2045B"
+usbclass: 255
+usbsubclass: 255
+usbprotocol: 255
+usbbus: 5
+usblevel: 1
+usbport: 0
+usbdev: 2
+vendorId: 0a5c
+deviceId: 2110
+usbmfr: Broadcom Corp
+usbprod: BCM2045B
+-
+class: OTHER
+bus: USB
+detached: 0
+driver: hci_usb
+desc: "Broadcom Corp BCM2045B"
+usbclass: 224
+usbsubclass: 1
+usbprotocol: 1
+usbbus: 5
+usblevel: 1
+usbport: 0
+usbdev: 2
+vendorId: 0a5c
+deviceId: 2110
+usbmfr: Broadcom Corp
+usbprod: BCM2045B
+-
+class: OTHER
+bus: ISAPNP
+detached: 0
+desc: "IBM0057"
+deviceId: IBM0057
+compat: PNP0f13
+-
+class: OTHER
+bus: ISAPNP
+detached: 0
+driver: nsc-ircc
+desc: "IBM0071"
+deviceId: IBM0071
+compat: PNP0511
+-
+class: OTHER
+bus: PSAUX
+detached: 0
+desc: "Lid Switch"
+-
+class: OTHER
+bus: USB
+detached: 0
+desc: "Linux 2.6.24.4-64.fc8 ehci_hcd EHCI Host Controller"
+usbclass: 9
+usbsubclass: 0
+usbprotocol: 0
+usbbus: 1
+usblevel: 0
+usbport: 0
+usbdev: 1
+vendorId: 0000
+deviceId: 0000
+usbmfr: Linux 2.6.24.4-64.fc8 ehci_hcd
+usbprod: EHCI Host Controller
+-
+class: OTHER
+bus: USB
+detached: 0
+desc: "Linux 2.6.24.4-64.fc8 uhci_hcd UHCI Host Controller"
+usbclass: 9
+usbsubclass: 0
+usbprotocol: 0
+usbbus: 5
+usblevel: 0
+usbport: 0
+usbdev: 1
+vendorId: 0000
+deviceId: 0000
+usbmfr: Linux 2.6.24.4-64.fc8 uhci_hcd
+usbprod: UHCI Host Controller
+-
+class: OTHER
+bus: USB
+detached: 0
+desc: "Linux 2.6.24.4-64.fc8 uhci_hcd UHCI Host Controller"
+usbclass: 9
+usbsubclass: 0
+usbprotocol: 0
+usbbus: 4
+usblevel: 0
+usbport: 0
+usbdev: 1
+vendorId: 0000
+deviceId: 0000
+usbmfr: Linux 2.6.24.4-64.fc8 uhci_hcd
+usbprod: UHCI Host Controller
+-
+class: OTHER
+bus: USB
+detached: 0
+desc: "Linux 2.6.24.4-64.fc8 uhci_hcd UHCI Host Controller"
+usbclass: 9
+usbsubclass: 0
+usbprotocol: 0
+usbbus: 3
+usblevel: 0
+usbport: 0
+usbdev: 1
+vendorId: 0000
+deviceId: 0000
+usbmfr: Linux 2.6.24.4-64.fc8 uhci_hcd
+usbprod: UHCI Host Controller
+-
+class: OTHER
+bus: USB
+detached: 0
+desc: "Linux 2.6.24.4-64.fc8 uhci_hcd UHCI Host Controller"
+usbclass: 9
+usbsubclass: 0
+usbprotocol: 0
+usbbus: 2
+usblevel: 0
+usbport: 0
+usbdev: 1
+vendorId: 0000
+deviceId: 0000
+usbmfr: Linux 2.6.24.4-64.fc8 uhci_hcd
+usbprod: UHCI Host Controller
+-
+class: OTHER
+bus: PSAUX
+detached: 0
+driver: pcspkr
+desc: "PC Speaker"
+-
+class: OTHER
+bus: ISAPNP
+detached: 0
+desc: "PNP0103"
+deviceId: PNP0103
+-
+class: OTHER
+bus: ISAPNP
+detached: 0
+desc: "PNP0200"
+deviceId: PNP0200
+-
+class: OTHER
+bus: ISAPNP
+detached: 0
+desc: "PNP0303"
+deviceId: PNP0303
+-
+class: OTHER
+bus: ISAPNP
+detached: 0
+desc: "PNP0800"
+deviceId: PNP0800
+-
+class: OTHER
+bus: ISAPNP
+detached: 0
+desc: "PNP0a08"
+deviceId: PNP0a08
+compat: PNP0a03
+-
+class: OTHER
+bus: ISAPNP
+detached: 0
+desc: "PNP0b00"
+deviceId: PNP0b00
+-
+class: OTHER
+bus: ISAPNP
+detached: 0
+desc: "PNP0c01"
+deviceId: PNP0c01
+-
+class: OTHER
+bus: ISAPNP
+detached: 0
+desc: "PNP0c02"
+deviceId: PNP0c02
+-
+class: OTHER
+bus: ISAPNP
+detached: 0
+desc: "PNP0c04"
+deviceId: PNP0c04
+-
+class: OTHER
+bus: PSAUX
+detached: 0
+desc: "Power Button (FF)"
+-
+class: OTHER
+bus: USB
+detached: 0
+desc: "STMicroelectronics Biometric Coprocessor"
+usbclass: 255
+usbsubclass: 0
+usbprotocol: 0
+usbbus: 5
+usblevel: 1
+usbport: 1
+usbdev: 3
+vendorId: 0483
+deviceId: 2016
+usbmfr: STMicroelectronics
+usbprod: Biometric Coprocessor
+-
+class: OTHER
+bus: PSAUX
+detached: 0
+desc: "Sleep Button (CM)"
+-
+class: OTHER
+bus: PSAUX
+detached: 0
+desc: "TPPS/2 IBM TrackPoint"
+-
+class: OTHER
+bus: USB
+detached: 0
+desc: "Unknown USB device 0x451:0x2046"
+usbclass: 9
+usbsubclass: 0
+usbprotocol: 0
+usbbus: 1
+usblevel: 2
+usbport: 0
+usbdev: 5
+vendorId: 0451
+deviceId: 2046
+-
+class: OTHER
+bus: PSAUX
+detached: 0
+desc: "Video Bus"
+-
+class: OTHER
+bus: PSAUX
+detached: 0
+desc: "Video Bus"
+-
+class: NETWORK
+bus: PCI
+detached: 0
+device: eth0
+driver: e1000
+desc: "Intel Corporation 82573L Gigabit Ethernet Controller"
+network.hwaddr: 00:15:58:81:5b:0e
+vendorId: 8086
+deviceId: 109a
+subVendorId: 17aa
+subDeviceId: 2001
+pciType: 1
+pcidom: 0
+pcibus: 2
+pcidev: 0
+pcifn: 0
+-
+class: NETWORK
+bus: PCI
+detached: 0
+device: wlan0
+driver: iwl3945
+desc: "Intel Corporation PRO/Wireless 3945ABG Network Connection"
+network.hwaddr: 00:19:d2:9f:88:96
+vendorId: 8086
+deviceId: 4227
+subVendorId: 8086
+subDeviceId: 1010
+pciType: 1
+pcidom: 0
+pcibus: 3
+pcidev: 0
+pcifn: 0
+-
+class: MOUSE
+bus: USB
+detached: 0
+device: input/mice
+driver: genericwheelusb
+desc: "ATEN 4 Port USB KVM B V1.80"
+usbclass: 3
+usbsubclass: 1
+usbprotocol: 2
+usbbus: 1
+usblevel: 3
+usbport: 0
+usbdev: 6
+vendorId: 0557
+deviceId: 2205
+usbmfr: ATEN
+usbprod: 4 Port USB KVM B V1.80
+-
+class: MOUSE
+bus: PSAUX
+detached: 0
+device: input/mice
+driver: generic3ps/2
+desc: "Macintosh mouse button emulation"
+-
+class: MOUSE
+bus: PSAUX
+detached: 0
+device: input/mice
+driver: synaptics
+desc: "SynPS/2 Synaptics TouchPad"
+-
+class: MOUSE
+bus: PSAUX
+detached: 0
+device: input/mice
+driver: generic3ps/2
+desc: "ThinkPad Extra Buttons"
+-
+class: AUDIO
+bus: PCI
+detached: 0
+driver: snd-hda-intel
+desc: "Intel Corporation 82801G (ICH7 Family) High Definition Audio Controller"
+vendorId: 8086
+deviceId: 27d8
+subVendorId: 17aa
+subDeviceId: 2010
+pciType: 1
+pcidom: 0
+pcibus: 0
+pcidev: 1b
+pcifn: 0
+-
+class: CDROM
+bus: SCSI
+detached: 0
+device: scd0
+desc: "MATSHITA DVD-RAM UJ-842"
+host: 4
+id: 0
+channel: 0
+lun: 0
+-
+class: VIDEO
+bus: PCI
+detached: 0
+driver: intelfb
+desc: "Intel Corporation Mobile 945GM/GMS, 943/940GML Express Integrated Graphics Controller"
+video.xdriver: intel
+vendorId: 8086
+deviceId: 27a2
+subVendorId: 17aa
+subDeviceId: 201a
+pciType: 1
+pcidom: 0
+pcibus: 0
+pcidev: 2
+pcifn: 0
+-
+class: VIDEO
+bus: PCI
+detached: 0
+desc: "Intel Corporation Mobile 945GM/GMS/GME, 943/940GML Express Integrated Graphics Controller"
+vendorId: 8086
+deviceId: 27a6
+subVendorId: 17aa
+subDeviceId: 201a
+pciType: 1
+pcidom: 0
+pcibus: 0
+pcidev: 2
+pcifn: 1
+-
+class: HD
+bus: SCSI
+detached: 0
+device: sda
+desc: "ATA HTS721010G9SA00"
+host: 0
+id: 0
+channel: 0
+lun: 0
+-
+class: KEYBOARD
+bus: PSAUX
+detached: 0
+desc: "AT Translated Set 2 keyboard"
+-
+class: KEYBOARD
+bus: USB
+detached: 0
+driver: keybdev
+desc: "ATEN 4 Port USB KVM B V1.80"
+usbclass: 3
+usbsubclass: 1
+usbprotocol: 1
+usbbus: 1
+usblevel: 3
+usbport: 0
+usbdev: 6
+vendorId: 0557
+deviceId: 2205
+usbmfr: ATEN
+usbprod: 4 Port USB KVM B V1.80
+-
+class: USB
+bus: PCI
+detached: 0
+driver: uhci-hcd
+desc: "Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #1"
+vendorId: 8086
+deviceId: 27c8
+subVendorId: 17aa
+subDeviceId: 200a
+pciType: 1
+pcidom: 0
+pcibus: 0
+pcidev: 1d
+pcifn: 0
+-
+class: USB
+bus: PCI
+detached: 0
+driver: uhci-hcd
+desc: "Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #2"
+vendorId: 8086
+deviceId: 27c9
+subVendorId: 17aa
+subDeviceId: 200a
+pciType: 1
+pcidom: 0
+pcibus: 0
+pcidev: 1d
+pcifn: 1
+-
+class: USB
+bus: PCI
+detached: 0
+driver: uhci-hcd
+desc: "Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #3"
+vendorId: 8086
+deviceId: 27ca
+subVendorId: 17aa
+subDeviceId: 200a
+pciType: 1
+pcidom: 0
+pcibus: 0
+pcidev: 1d
+pcifn: 2
+-
+class: USB
+bus: PCI
+detached: 0
+driver: uhci-hcd
+desc: "Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #4"
+vendorId: 8086
+deviceId: 27cb
+subVendorId: 17aa
+subDeviceId: 200a
+pciType: 1
+pcidom: 0
+pcibus: 0
+pcidev: 1d
+pcifn: 3
+-
+class: USB
+bus: PCI
+detached: 0
+driver: ehci-hcd
+desc: "Intel Corporation 82801G (ICH7 Family) USB2 EHCI Controller"
+vendorId: 8086
+deviceId: 27cc
+subVendorId: 17aa
+subDeviceId: 200b
+pciType: 1
+pcidom: 0
+pcibus: 0
+pcidev: 1d
+pcifn: 7
+-
+class: SOCKET
+bus: PCI
+detached: 0
+driver: yenta_socket
+desc: "Texas Instruments PCI1510 PC card Cardbus Controller"
+vendorId: 104c
+deviceId: ac56
+subVendorId: 17aa
+subDeviceId: 2012
+pciType: 1
+pcidom: 0
+pcibus: 15
+pcidev: 0
+pcifn: 0
+-
+class: IDE
+bus: PCI
+detached: 0
+driver: ata_piix
+desc: "Intel Corporation 82801G (ICH7 Family) IDE Controller"
+vendorId: 8086
+deviceId: 27df
+subVendorId: 17aa
+subDeviceId: 200c
+pciType: 1
+pcidom: 0
+pcibus: 0
+pcidev: 1f
+pcifn: 1
+-
+class: SATA
+bus: PCI
+detached: 0
+driver: ahci
+desc: "Intel Corporation 82801GBM/GHM (ICH7 Family) SATA AHCI Controller"
+vendorId: 8086
+deviceId: 27c5
+subVendorId: 17aa
+subDeviceId: 200d
+pciType: 1
+pcidom: 0
+pcibus: 0
+pcidev: 1f
+pcifn: 2
--- /dev/null
+LANG="en_US.UTF-8"
+SYSFONT="latarcyrheb-sun16"
--- /dev/null
+# color => new RH6.0 bootup
+# verbose => old-style bootup
+# anything else => new style bootup without ANSI colors or positioning
+BOOTUP=color
+# Turn on graphical boot
+GRAPHICAL=yes
+# column to start "[ OK ]" label in
+RES_COL=60
+# terminal sequence to move to that column. You could change this
+# to something like "tput hpa ${RES_COL}" if your terminal supports it
+MOVE_TO_COL="echo -en \\033[${RES_COL}G"
+# terminal sequence to set color to a 'success' color (currently: green)
+SETCOLOR_SUCCESS="echo -en \\033[0;32m"
+# terminal sequence to set color to a 'failure' color (currently: red)
+SETCOLOR_FAILURE="echo -en \\033[0;31m"
+# terminal sequence to set color to a 'warning' color (currently: yellow)
+SETCOLOR_WARNING="echo -en \\033[0;33m"
+# terminal sequence to reset to the default color.
+SETCOLOR_NORMAL="echo -en \\033[0;39m"
+# default kernel loglevel on boot (syslog will reset this)
+LOGLEVEL=3
+# Set to anything other than 'no' to allow hotkey interactive startup...
+PROMPT=yes
+# Set to 'yes' to allow probing for devices with swap signatures
+AUTOSWAP=no
--- /dev/null
+# Firewall configuration written by system-config-firewall
+# Manual customization of this file is not recommended.
+*filter
+:INPUT ACCEPT [0:0]
+:FORWARD ACCEPT [0:0]
+:OUTPUT ACCEPT [0:0]
+:RH-Firewall-1-INPUT - [0:0]
+-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
+-A INPUT -p icmp -j ACCEPT
+-A INPUT -i lo -j ACCEPT
+-A INPUT -m state --state NEW -m tcp -p tcp --dport 53 -j ACCEPT
+-A INPUT -m state --state NEW -m udp -p udp --dport 53 -j ACCEPT
+-A INPUT -p ah -j ACCEPT
+-A INPUT -p esp -j ACCEPT
+-A INPUT -m state --state NEW -m udp -p udp --dport 5353 -d 224.0.0.251 -j ACCEPT
+-A INPUT -m state --state NEW -m tcp -p tcp --dport 631 -j ACCEPT
+-A INPUT -m state --state NEW -m udp -p udp --dport 631 -j ACCEPT
+-A INPUT -m state --state NEW -m tcp -p tcp --dport 2049 -j ACCEPT
+-A INPUT -m state --state NEW -m udp -p udp --dport 2049 -j ACCEPT
+-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
+-A INPUT -m state --state NEW -m tcp -p tcp --dport 111 -j ACCEPT
+-A INPUT -m state --state NEW -m udp -p udp --dport 111 -j ACCEPT
+-A INPUT -m state --state NEW -m tcp -p tcp --dport 2020 -j ACCEPT
+-A INPUT -m state --state NEW -m tcp -p tcp --dport 2049 -j ACCEPT
+-A INPUT -m state --state NEW -m udp -p udp --dport 2049 -j ACCEPT
+-A INPUT -m state --state NEW -m tcp -p tcp --dport 32769 -j ACCEPT
+-A INPUT -m state --state NEW -m tcp -p tcp --dport 32803 -j ACCEPT
+-A INPUT -m state --state NEW -m tcp -p tcp --dport 5900 -j ACCEPT
+-A INPUT -m state --state NEW -m udp -p udp --dport 5900 -j ACCEPT
+-A INPUT -m state --state NEW -m tcp -p tcp --dport 5901 -j ACCEPT
+-A INPUT -m state --state NEW -m udp -p udp --dport 5901 -j ACCEPT
+-A INPUT -m state --state NEW -m tcp -p tcp --dport 662 -j ACCEPT
+-A INPUT -m state --state NEW -m tcp -p tcp --dport 892 -j ACCEPT
+-A INPUT -m state --state NEW -m udp -p udp --dport 892 -j ACCEPT
+-A INPUT --tcp-flags SYN,RST,ACK,FIN SYN -j ACCEPT
+-I FORWARD -m physdev --physdev-is-bridged -j ACCEPT
+-A INPUT -j REJECT --reject-with icmp-host-prohibited
+-A FORWARD -j REJECT --reject-with icmp-host-prohibited
+-A INPUT -j RH-Firewall-1-INPUT
+-A FORWARD -j RH-Firewall-1-INPUT
+-A RH-Firewall-1-INPUT -i lo -j ACCEPT
+-A RH-Firewall-1-INPUT -p icmp --icmp-type any -j ACCEPT
+-A RH-Firewall-1-INPUT -p 50 -j ACCEPT
+-A RH-Firewall-1-INPUT -p 51 -j ACCEPT
+-A RH-Firewall-1-INPUT -p udp --dport 5353 -d 224.0.0.251 -j ACCEPT
+-A RH-Firewall-1-INPUT -p udp -m udp --dport 631 -j ACCEPT
+-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 631 -j ACCEPT
+COMMIT
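The TCP services this ruleset opens are easier to audit as a port list than as raw rules. A minimal sketch of extracting them with standard awk; the two-line sample below stands in for the real `/etc/sysconfig/iptables` file:

```shell
# List the TCP destination ports a ruleset accepts.
# 'rules' is a tiny inline sample standing in for the full rules file.
rules='-A INPUT -m state --state NEW -m tcp -p tcp --dport 53 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT'
printf '%s\n' "$rules" | awk '/-p tcp/ && /-j ACCEPT/ {
    for (i = 1; i <= NF; i++)
        if ($i == "--dport") print $(i + 1)
}' | sort -nu
```

Point the pipeline at the real file with `awk '...' /etc/sysconfig/iptables` to audit a live configuration.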
--- /dev/null
+# Load additional iptables modules (nat helpers)
+# Default: -none-
+# Space separated list of nat helpers (e.g. 'ip_nat_ftp ip_nat_irc'), which
+# are loaded after the firewall rules are applied. Options for the helpers are
+# stored in /etc/modprobe.conf.
+IPTABLES_MODULES="ip_conntrack_netbios_ns"
+
+# Unload modules on restart and stop
+# Value: yes|no, default: yes
+# This option has to be 'yes' to get to a sane state for a firewall
+# restart or stop. Only set to 'no' if there are problems unloading netfilter
+# modules.
+IPTABLES_MODULES_UNLOAD="yes"
+
+# Save current firewall rules on stop.
+# Value: yes|no, default: no
+# Saves all firewall rules to /etc/sysconfig/iptables if firewall gets stopped
+# (e.g. on system shutdown).
+IPTABLES_SAVE_ON_STOP="no"
+
+# Save current firewall rules on restart.
+# Value: yes|no, default: no
+# Saves all firewall rules to /etc/sysconfig/iptables if firewall gets
+# restarted.
+IPTABLES_SAVE_ON_RESTART="no"
+
+# Save (and restore) rule and chain counter.
+# Value: yes|no, default: no
+# Save counters for rules and chains to /etc/sysconfig/iptables if
+# 'service iptables save' is called or on stop or restart if SAVE_ON_STOP or
+# SAVE_ON_RESTART is enabled.
+IPTABLES_SAVE_COUNTER="no"
+
+# Numeric status output
+# Value: yes|no, default: yes
+# Print IP addresses and port numbers in numeric format in the status output.
+IPTABLES_STATUS_NUMERIC="yes"
+
+# Verbose status output
+# Value: yes|no, default: yes
+# Print info about the number of packets and bytes plus the input and
+# output device in the status output.
+IPTABLES_STATUS_VERBOSE="no"
+
+# Status output with numbered lines
+# Value: yes|no, default: yes
+# Print a counter/number for every rule in the status output.
+IPTABLES_STATUS_LINENUMBERS="yes"
--- /dev/null
+IRDA=yes
+DEVICE=/dev/ttyS2
+#DONGLE=actisys+
+DISCOVERY=yes
--- /dev/null
+# irqbalance is a daemon process that distributes interrupts across
+# CPUS on SMP systems. The default is to rebalance once every 10
+# seconds. There is one configuration option:
+#
+# ONESHOT=yes
+# after starting, wait for a minute, then look at the interrupt
+# load and balance it once; after balancing exit and do not change
+# it again.
+ONESHOT=
+
+#
+# IRQ_AFFINITY_MASK
+# 64-bit bitmask which allows you to indicate which CPUs should
+# be skipped when rebalancing IRQs. CPUs whose corresponding
+# bits are set to zero in this mask will not have any
+# IRQs assigned to them on rebalance
+#
+#IRQ_AFFINITY_MASK=
--- /dev/null
+# Kernel Version string for the -kdump kernel, such as 2.6.13-1544.FC5kdump
+# If no version is specified, then the init script will try to find a
+# kdump kernel with the same version number as the running kernel.
+KDUMP_KERNELVER=""
+
+# The kdump commandline is the command line that needs to be passed off to
+# the kdump kernel. This will likely match the contents of the grub kernel
+# line. For example:
+# KDUMP_COMMANDLINE="ro root=LABEL=/"
+# If a command line is not specified, the default will be taken from
+# /proc/cmdline
+KDUMP_COMMANDLINE=""
+
+# This variable lets us append arguments to the current kdump commandline,
+# as taken from either KDUMP_COMMANDLINE above or from /proc/cmdline
+KDUMP_COMMANDLINE_APPEND="irqpoll maxcpus=1"
+
+# Any additional kexec arguments required. In most situations, this should
+# be left empty
+#
+# Example:
+# KEXEC_ARGS="--elf32-core-headers"
+KEXEC_ARGS=" --args-linux"
+
+# Where to find the boot image
+KDUMP_BOOTDIR="/boot"
+
+# What image type is used for kdump
+KDUMP_IMG="vmlinuz"
+
+# What is the image's extension. Relocatable kernels don't have one
+KDUMP_IMG_EXT=""
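The fallback described for KDUMP_COMMANDLINE can be sketched in a few lines of shell. This is only an illustration of the documented behaviour, not the actual init script:

```shell
# If no command line is configured, fall back to the running kernel's,
# then append the extra arguments (sketch of the comments above).
KDUMP_COMMANDLINE=""
KDUMP_COMMANDLINE_APPEND="irqpoll maxcpus=1"
if [ -z "$KDUMP_COMMANDLINE" ] && [ -r /proc/cmdline ]; then
    KDUMP_COMMANDLINE=$(cat /proc/cmdline)
fi
echo "$KDUMP_COMMANDLINE $KDUMP_COMMANDLINE_APPEND"
```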
--- /dev/null
+# UPDATEDEFAULT specifies if new-kernel-pkg should make
+# new kernels the default
+UPDATEDEFAULT=yes
+
+# DEFAULTKERNEL specifies the default kernel package type
+DEFAULTKERNEL=kernel-xen
--- /dev/null
+KEYBOARDTYPE="pc"
+KEYTABLE="us"
--- /dev/null
+# Set to anything other than 'no' to force a 'safe' probe on startup.
+# 'safe' probe disables:
+# - serial port probing
+# - DDC monitor probing
+# - PS/2 probing
+SAFE=no
--- /dev/null
+# Override the default config file
+#LIBVIRTD_CONFIG=/etc/libvirt/libvirtd.conf
+
+# Listen for TCP/IP connections
+# NB. must setup TLS/SSL keys prior to using this
+#LIBVIRTD_ARGS="--listen"
+
+# Override Kerberos service keytab for SASL/GSSAPI
+#KRB5_KTNAME=/etc/libvirt/krb5.tab
--- /dev/null
+# Options to lircd
+LIRCD_OPTIONS=
--- /dev/null
+# /etc/sysconfig/sensors - Defines modules loaded by /etc/rc.d/init.d/lm_sensors
+# Run sensors-detect to generate this config file
--- /dev/null
+# Options to nasd
+# See nasd(1) for more details
+# -aa allow any client to connect
+# -local allow local clients only
+# -b detach and run in background
+# -v enable verbose messages
+# -d <num> enable debug messages at level <num>
+# -pn partial networking enabled
+# -nopn partial networking disabled [default]
+NASD_OPTIONS="-b -local"
--- /dev/null
+# This is the configuration file for the netconsole service. By starting
+# this service you allow a remote syslog daemon to record console output
+# from this system.
+
+# The local port number that the netconsole module will use
+# LOCALPORT=6666
+
+# The ethernet device to send console messages out of (only set this if it
+# can't be automatically determined)
+# DEV=
+
+# The IP address of the remote syslog server to send messages to
+# SYSLOGADDR=
+
+# The listening port of the remote syslog daemon
+# SYSLOGPORT=514
+
+# The MAC address of the remote syslog server (only set this if it can't
+# be automatically determined)
+# SYSLOGMACADDR=
--- /dev/null
+ssh-dss AAAAB3NzaC1kc3MAAACBAN4hXeRHrCzo+hdWYlXNK17bVODegv1x4HxDbrCZK92tRxHBsYFng1+oWTZs607LQ/dfcLxFRfPREuKLXiWFY6bDdJDfB5V5HzCBFCH+o5NQ48y8IcIpGic/5+cqWyY6pcxnwfzEQHtdLEeo93lRMzpMsFsbkST3qpBe8QJM3/gtAAAAFQDWWFFtL9NeP0zjhJv6FNDNfZ75CwAAAIAJansjnrRm3FKDxeFf6FuiBvioa4UJszeaSfoGpd6ugScfOyM/u1r08xPgn9ud5/kwRPxV56HWkqgxJQ0dChIMij3HiraZmyg5AY9i85ZW1ZUOEgMRDmWRTOMHK++u9Dmh1d1FtugrUeP6e4wP9nC2y/r+3qhsPTrqBUTXZikkFgAAAIA8Oue6cIFNZSzQRB4UM6hLwxfXAgWBHzoa7UxF7Zh6H65xnKswpIIcQHX77RFK0oF5Y4ks0Fjy5GLTlAGbSy2IcH9ecugRK6+bnEzO09NNO+yXzh/xahCX3ubOmdoFNm4dwdlQy7n3NgFqI99tHIvY/B1MCs7XkMKV4s6yzLVS4Q== root@localhost.localdomain
--- /dev/null
+NETWORKING=yes
+NETWORKING_IPV6=no
+HOSTNAME=galia.watzmann.net
--- /dev/null
+# Intel Corporation 82573L Gigabit Ethernet Controller
+DEVICE=br0
+ONBOOT=yes
+BOOTPROTO=dhcp
+TYPE=Bridge
--- /dev/null
+# Intel Corporation 82573L Gigabit Ethernet Controller
+DEVICE=eth0
+#BOOTPROTO=dhcp
+HWADDR=XX:YY:ZZ:81:5B:0E
+ONBOOT=yes
+#DHCP_HOSTNAME=dhcp.example.com
+BRIDGE=br0
--- /dev/null
+DEVICE=lo
+IPADDR=127.0.0.1
+NETMASK=255.0.0.0
+NETWORK=127.0.0.0
+# If you're having problems with gated making 127.0.0.0/8 a martian,
+# you can change this to something else (255.255.255.255, for example)
+BROADCAST=127.255.255.255
+ONBOOT=yes
+NAME=loopback
--- /dev/null
+DEVICE=lo
+IPADDR=127.0.0.1
+NETMASK=255.0.0.0
+NETWORK=127.0.0.0
+# If you're having problems with gated making 127.0.0.0/8 a martian,
+# you can change this to something else (255.255.255.255, for example)
+BROADCAST=127.255.255.255
+ONBOOT=yes
+NAME=loopback
--- /dev/null
+# This file is only here to make sure augeas handles truly bizarre
+# file names gracefully. Looking this file up in the tree will require
+# escaping all the special chars in the file name
+DEVICE=weird
--- /dev/null
+# Intel Corporation PRO/Wireless 3945ABG Network Connection
+DEVICE=wlan0
+BOOTPROTO=dhcp
+ONBOOT=no
+HWADDR=XX:XX:XX:9f:88:96
--- /dev/null
+## Firewalling
+STATD_PORT=662
+STATD_OUTGOING_PORT=2020
+LOCKD_TCPPORT=32803
+LOCKD_UDPPORT=32769
+MOUNTD_PORT=892
+##
+#
+# Define which protocol versions mountd
+# will advertise. The values are "no" or "yes"
+# with yes being the default
+#MOUNTD_NFS_V1="no"
+#MOUNTD_NFS_V2="no"
+#MOUNTD_NFS_V3="no"
+#
+#
+# Path to remote quota server. See rquotad(8)
+#RQUOTAD="/usr/sbin/rpc.rquotad"
+# Port rquotad should listen on.
+#RQUOTAD_PORT=875
+# Optional options passed to rquotad
+#RPCRQUOTADOPTS=""
+#
+#
+# TCP port rpc.lockd should listen on.
+#LOCKD_TCPPORT=32803
+# UDP port rpc.lockd should listen on.
+#LOCKD_UDPPORT=32769
+#
+#
+# Optional arguments passed to rpc.nfsd. See rpc.nfsd(8)
+#RPCNFSDARGS
+# Number of nfs server processes to be started.
+# The default is 8.
+#RPCNFSDCOUNT=8
+#
+#
+# Optional arguments passed to rpc.mountd. See rpc.mountd(8)
+#RPCMOUNTDOPTS=""
+# Port rpc.mountd should listen on.
+#MOUNTD_PORT=892
+#
+#
+# Optional arguments passed to rpc.statd. See rpc.statd(8)
+#STATDARG=""
+# Port rpc.statd should listen on.
+#STATD_PORT=662
+# Outgoing port statd should use. By default the port
+# is chosen at random
+#STATD_OUTGOING_PORT=2020
+# Specify callout program
+#STATD_HA_CALLOUT="/usr/local/bin/foo"
+#
+#
+# Optional arguments passed to rpc.idmapd. See rpc.idmapd(8)
+#RPCIDMAPDARGS=""
+#
+# Set to turn on Secure NFS mounts.
+#SECURE_NFS="yes"
+# Optional arguments passed to rpc.gssd. See rpc.gssd(8)
+#RPCGSSDARGS="-vvv"
+# Optional arguments passed to rpc.svcgssd. See rpc.svcgssd(8)
+#RPCSVCGSSDARGS="-vvv"
+# Don't load security modules in to the kernel
+#SECURE_NFS_MODS="noload"
+#
+# Don't load sunrpc module.
+#RPCMTAB="noload"
+#
--- /dev/null
+# Drop root to id 'ntp:ntp' by default.
+OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid"
+
+# Set to 'yes' to sync hw clock after successful ntpdate
+SYNC_HWCLOCK=no
+
+# Additional options for ntpdate
+NTPDATE_OPTIONS=""
--- /dev/null
+# Set this to no to disable prelinking altogether
+# (if you change this from yes to no prelink -ua
+# will be run next night to undo prelinking)
+PRELINKING=yes
+
+# Options to pass to prelink
+# -m Try to conserve virtual memory by allowing overlapping
+# assigned virtual memory slots for libraries which
+# never appear together in one binary
+# -R Randomize virtual memory slot assignments for libraries.
+# This makes various buffer overflow attacks slightly
+# harder, since library addresses will be different on each
+# host using -R.
+PRELINK_OPTS=-mR
+
+# How often should full prelink be run (in days)
+# Normally, prelink will be run in quick mode, every
+# $PRELINK_FULL_TIME_INTERVAL days it will be run
+# in normal mode. Comment it out if it should be run
+# in normal mode always.
+PRELINK_FULL_TIME_INTERVAL=14
+
+# How often should prelink run (in days) even if
+# no packages have been upgraded via rpm.
+# If $PRELINK_FULL_TIME_INTERVAL days have not elapsed
+# yet since last normal mode prelinking, last
+# quick mode prelinking happened less than
+# $PRELINK_NONRPM_CHECK_INTERVAL days ago
+# and no packages have been upgraded by rpm
+# since last quick mode prelinking, prelink
+# will not do anything.
+# Change to
+# PRELINK_NONRPM_CHECK_INTERVAL=0
+# if you want to disable the rpm database timestamp
+# check (especially if you don't use rpm/up2date/yum/apt-rpm
+# exclusively to upgrade system libraries and/or binaries).
+PRELINK_NONRPM_CHECK_INTERVAL=7
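The interaction of the two intervals reads more clearly as a decision rule. A sketch of the documented scheduling, where the day counters and the rpm-change flag are hypothetical stand-ins for state the nightly cron job keeps in timestamp files:

```shell
# Decide what the nightly prelink run should do (sketch only).
PRELINK_FULL_TIME_INTERVAL=14
PRELINK_NONRPM_CHECK_INTERVAL=7
days_since_full=3     # hypothetical: days since last normal-mode run
days_since_quick=2    # hypothetical: days since last quick-mode run
rpm_db_changed=no     # hypothetical: packages upgraded via rpm?

if [ "$days_since_full" -ge "$PRELINK_FULL_TIME_INTERVAL" ]; then
    echo "full prelink"
elif [ "$rpm_db_changed" = yes ] || \
     [ "$days_since_quick" -ge "$PRELINK_NONRPM_CHECK_INTERVAL" ]; then
    echo "quick prelink"
else
    echo "nothing to do"
fi
```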
--- /dev/null
+# The puppetmaster server
+#PUPPET_SERVER=puppet
+
+# If you wish to specify the port to connect to do so here
+#PUPPET_PORT=8140
+
+# Where to log to. Specify syslog to send log messages to the system log.
+#PUPPET_LOG=/var/log/puppet/puppet.log
+
+# You may specify other parameters to the puppet client here
+#PUPPET_EXTRA_OPTS=--waitforcert=500
--- /dev/null
+# Set to 'yes' to mount the system filesystems read-only.
+READONLY=no
+# Set to 'yes' to mount various temporary state as either tmpfs
+# or on the block device labelled RW_LABEL. Implied by READONLY
+TEMPORARY_STATE=no
+# Place to put a tmpfs for temporary scratch writable space
+RW_MOUNT=/var/lib/stateless/writable
+# Label on local filesystem which can be used for temporary scratch space
+RW_LABEL=stateless-rw
+# Options to use for temporary mount
+RW_OPTIONS=
+# Label for partition with persistent data
+STATE_LABEL=stateless-state
+# Where to mount to the persistent data
+STATE_MOUNT=/var/lib/stateless/state
+# Options to use for persistent mount
+STATE_OPTIONS=
--- /dev/null
+# Options to syslogd
+# -m 0 disables 'MARK' messages.
+# -r enables logging from remote machines
+# -x disables DNS lookups on messages received with -r
+# See syslogd(8) for more details
+SYSLOGD_OPTIONS="-m 0"
+# Options to klogd
+# -2 prints all kernel oops messages twice; once for klogd to decode, and
+# once for processing with 'ksymoops'
+# -x disables all klogd processing of oops messages entirely
+# See klogd(8) for more details
+KLOGD_OPTIONS="-x"
+#
+# Set this to a umask value to use for all log files as in umask(1).
+# By default, all permissions are removed for "group" and "other".
+SYSLOG_UMASK=077
--- /dev/null
+# Options to smbd
+SMBDOPTIONS="-D"
+# Options to nmbd
+NMBDOPTIONS="-D"
+# Options for winbindd
+WINBINDOPTIONS=""
--- /dev/null
+# Directory in which to place saslauthd's listening socket, pid file, and so
+# on. This directory must already exist.
+SOCKETDIR=/var/run/saslauthd
+
+# Mechanism to use when checking passwords. Run "saslauthd -v" to get a list
+# of which mechanisms your installation was compiled with the ability to use.
+MECH=pam
+
+# Additional flags to pass to saslauthd on the command line. See saslauthd(8)
+# for the list of accepted flags.
+FLAGS=
--- /dev/null
+# command line options for smartd
+smartd_opts="-q never"
+# autogenerated config file options
+# smartd_conf_opts="-H -m root"
--- /dev/null
+# Options to spamd
+SPAMDOPTIONS="-d -c -m5 -H"
--- /dev/null
+# How long to keep log files (days), maximum is a month
+HISTORY=7
--- /dev/null
+#
+# sysstat.ioconf
+#
+# Copyright (C) 2004, Red Hat, Inc.
+#
+# This file gives iostat and sadc a clue about how to find whole
+# disk devices in /proc/partitions and /proc/diskstats
+#
+# line format, general record:
+# major:name:ctrlpre:ctrlno:devfmt:devcnt:partpre:partcnt:description
+#
+# major: major # for device
+# name: base of device name
+# ctrlpre: string to use in generating controller designators
+# eg: the c in c0d2p6, decimal formatting implied
+# '*' means none or irrelevant
+# ctrlno: which controller of this type is this
+# devfmt: type of device naming convention
+# a: alpha: xxa, xxb, ... xxaa, xxab, ... xxzz
+# x: exception... record contains a specific name
+# for a specific minor #, stored in the devcnt field
+# %string: string to use in generating drive designators,
+# eg: the 'd' in c0d2p6 , decimal formatting implied
+# d: no special translations (decimal formatting)
+# devcnt: how many whole devs per major number
+# partpre: appended to whole dev before part designator
+# eg. the p in c0d2p6, decimal formatting implied
+# '*' means none
+# partcnt: number of partitions per volume
+# or minor # for exception records
+# description: informative text
+#
+# line format, indirect record:
+# major:base_major:ctrlno[:[desc]]
+#
+# major: major number of the device
+# base_major: major number of the template for this type,
+# 0 for not supported
+# ctrlno: controller number of this type
+# desc: controller-specific description
+# if absent the desc from base_major will be
+# used in sprintf( buf, desc, ctrlno )
+
+
+1:ram:*:0:d:256:*:1:RAM disks (ram0..ram255)
+1:initrd:x:250:d:256:*:1:Initial RAM Disk (initrd)
+
+#2:0:0:Floppy Devices
+2:fd:*:0:d:4:*:1:Floppy Devices fd0,fd1,fd2,fd3
+
+3:hd:*:0:a:2:*:64:IDE - Controller %d
+22:3:1:
+33:3:2:
+34:3:3:
+56:3:4:
+57:3:5:
+88:3:6:
+89:3:7:
+90:3:8:
+91:3:9:
+
+#4:0:0:NODEV
+#5:0:0:NODEV
+#6:0:0:NODEV
+7:loop:*:0:d:256:*:1:Loop Devices
+
+8:sd:*:0:a:16:*:16:SCSI - Controller %d
+65:8:1:
+66:8:2:
+67:8:3:
+68:8:4:
+69:8:5:
+70:8:6:
+71:8:7:
+
+9:md:*:0:d:256:*:1:Metadisk (Software RAID) devices (md0..md255)
+
+#10:0:0:NODEV
+
+11:sr:*:0:d:256:*:1:CDROM - CDROM (sr0..sr255)
+
+#12:0:0:MSCDEX CD-ROM Callback
+
+13:xd:*:0:a:2:*:64:8-bit MFM/RLL/IDE controller (xda, xdb)
+
+#14:0:0:BIOS Hard Drive Callback
+#15:0:0:CDROM - Sony CDU-31A/CDU-33A
+#16:0:0:CDROM - Goldstar
+#17:0:0:CDROM - Optics Storage
+#18:0:0:CDROM - Sanyo
+
+19:double:*:0:d:256:*:1:Compressed Disk (double0..double255)
+
+#20:0:0:CDROM - Hitachi
+
+21:mfm:*:0:a:2:*:64:Acorn MFM Hard Drive (mfma, mfmb)
+
+# 22: see IDE, dev 3
+
+#23:0:0:CDROM - Mitsumi Proprietary
+#24:0:0:CDROM - Sony CDU-535
+#25:0:0:CDROM - Matsushita (Panasonic/Soundblaster) #1
+#26:0:1:CDROM - Matsushita (Panasonic/Soundblaster) #2
+#27:0:2:CDROM - Matsushita (Panasonic/Soundblaster) #3
+#28:0:3:CDROM - Matsushita (Panasonic/Soundblaster) #4
+# 28:0:0:! ACSI (Atari) Disk Not Supported
+#29:0:0:CDROM - Aztech/Orchid/Okano/Wearnes
+#30:0:0:CDROM - Philips LMS CM-205
+#31:0:0:ROM/flash Memory Card
+#32:0:0:CDROM - Philips LMS CM-206
+
+# 33: See IDE, dev 3
+# 34: See IDE, dev 3
+
+#35:0:0:Slow Memory RAM Disk
+
+36:ed:*:0:a:2:*:64:MCA ESDI Hard Disk (eda, edb)
+
+#37:0:0:Zorro II Ram Disk
+#38:0:0:Reserved For Linux/AP+
+#39:0:0:Reserved For Linux/AP+
+#40:0:0:Syquest EZ135 Parallel Port Drive
+#41:0:0:CDROM - MicroSolutions Parallel Port BackPack
+#42:0:0:For DEMO Use Only
+
+43:nb:*:0:d:256:*:1:Network Block devices (nb0..nb255)
+44:ftl:*:0:a:16:*:16:Flash Translation Layer (ftla..ftlp)
+45:pd:*:0:a:4:*:16:Parallel Port IDE (pda..pdd)
+
+#46:0:0:CDROM - Parallel Port ATAPI
+
+47:pf:*:0:d:256:*:1:Parallel Port ATAPI Disk Devices (pf0..pf255)
+
+48:rd:/c:0:%d:32:p:8:Mylex DAC960 RAID, Controller %d
+49:48:1:
+50:48:2:
+51:48:3:
+52:48:4:
+53:48:5:
+54:48:6:
+55:48:7:
+
+# 56, 57: see IDE, dev 3:
+
+58:lvm:*:0:d:256:*:1:Logical Volume Manager (lvm0..lvm255)
+
+#59:0:0:PDA Filesystem Device
+#60:0:0:Local/Experimental Use
+#61:0:0:Local/Experimental Use
+#62:0:0:Local/Experimental Use
+#63:0:0:Local/Experimental Use
+#64:0:0:NODEV
+
+# 65..71: See SCSI, dev 8:
+
+72:ida/:c:0:%d:16:p:16:Compaq Intelligent Drive Array - Controller %d
+73:72:1:
+74:72:2:
+75:72:3:
+76:72:4:
+77:72:5:
+78:72:6:
+79:72:7:
+
+80:i2o/hd:*:0:a:16:*:16:I2O Disk - Controller %d
+81:80:1:
+82:80:2:
+83:80:3:
+84:80:4:
+85:80:5:
+86:80:6:
+87:80:7:
+
+# 88..91: see IDE, dev 3:
+
+#92:0:0:PPDD Encrypted Disk
+#93:0:0:NAND Flash Translation Layer not supported
+
+94:dasd:*:0:a:64:*:4:IBM S/390 DASD Block Storage (dasda, dasdb, ...)
+
+#95:0:0:IBM S/390 VM/ESA Minidisk
+#96:0:0:NODEV
+#97:0:0:CD/DVD packed writing devices not supported
+
+98:ubd:*:0:d:256:*:1:User-mode Virtual Block Devices (ubd0..ubd256)
+
+#99:0:0:JavaStation Flash Disk
+#100:0:0:NODEV
+
+101:amiraid/ar:*:0:d:16:p:16:AMI HyperDisk RAID (amiraid/ar0 - amiraid/ar15)
+
+#102:0:0:Compressed Block Device
+#103:0:0:Audit Block Device
+
+104:cciss:/c:0:%d:16:p:16:HP SA 5xxx/6xxx (cciss) Controller %d
+105:104:1:
+106:104:2:
+107:104:3:
+108:104:4:
+109:104:5:
+110:104:6:
+111:104:7:
+
+112:iseries/vd:*:0:a:32:*:8:IBM iSeries Virtual Disk (.../vda - .../vdaf)
+
+#113:0:0:CDROM - IBM iSeries Virtual
+
+# 114..159 NODEV
+
+160:sx8/:*:0:d:8:p:32:Promise SATA SX8 Unit %d
+161:160:1:
+
+# 162..198 UNUSED
+
+#199:0:0:Veritas Volume Manager (VxVM) Volumes
+#200:0:0:NODEV
+#201:0:0:Veritas VxVM Dynamic Multipathing Driver
+
+# 202..230: UNUSED
+
+232:emcpower:*:0:a:16:*:16:EMC PowerPath Unit %d
+233:232:1:
+234:232:2:
+235:232:3:
+236:232:4:
+237:232:5:
+238:232:6:
+239:232:7:
+240:232:8:
+241:232:9:
+242:232:10:
+243:232:11:
+244:232:12:
+245:232:13:
+246:232:14:
+247:232:15:
+
+# 240..254: LOCAL/Experimental
+# 255: reserved for big dev_t expansion
+
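Given the nine-field general-record format documented at the top of this file, a record splits cleanly on ':' with the description absorbing the remainder. A sketch in plain shell, with no dependency on sysstat itself:

```shell
# Split one general record into its nine colon-separated fields.
rec='8:sd:*:0:a:16:*:16:SCSI - Controller %d'
IFS=: read -r major name ctrlpre ctrlno devfmt devcnt partpre partcnt desc <<EOF
$rec
EOF
echo "major=$major name=$name devs=$devcnt parts_per_dev=$partcnt desc=$desc"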
--- /dev/null
+# Configuration file for system-config-securitylevel
+
+--enabled
+--port=22:tcp
+--port=2049:tcp
--- /dev/null
+# Configuration file for system-config-users
+
+# Filter out system users
+FILTER=true
+# Automatically assign highest UID for new users
+ASSIGN_HIGHEST_UID=true
+# Automatically assign highest GID for new groups
+ASSIGN_HIGHEST_GID=true
+# Prefer to have same UID and GID for new users
+PREFER_SAME_UID_GID=true
--- /dev/null
+# The VNCSERVERS variable is a list of display:user pairs.
+#
+# Uncomment the lines below to start a VNC server on display :2
+# as the user 'myusername' (adjust this to your own). You will also
+# need to set a VNC password; run 'man vncpasswd' to see how
+# to do that.
+#
+# DO NOT RUN THIS SERVICE if your local area network is
+# untrusted! For a secure way of using VNC, see
+# <URL:http://www.uk.research.att.com/archive/vnc/sshvnc.html>.
+
+# Use "-nolisten tcp" to prevent X connections to your VNC server via TCP.
+
+# Use "-nohttpd" to prevent web-based VNC clients connecting.
+
+# Use "-localhost" to prevent remote VNC clients connecting except when
+# doing so through a secure tunnel. See the "-via" option in the
+# `man vncviewer' manual page.
+
+# VNCSERVERS="2:myusername"
+# VNCSERVERARGS[2]="-geometry 800x600 -nolisten tcp -nohttpd -localhost"
--- /dev/null
+# wlan0 and wifi0
+# INTERFACES="-iwlan0 -iwifi0"
+INTERFACES="-iwlan0"
+# ndiswrapper and prism
+# DRIVERS="-Dndiswrapper -Dprism"
+DRIVERS="-Dwext"
--- /dev/null
+
+#XENSTORED_PID="/var/run/xenstore.pid"
+#XENSTORED_ARGS=
+
+# Log all hypervisor messages (cf xm dmesg)
+#XENCONSOLED_LOG_HYPERVISOR=no
+
+# Log all guest console output (cf xm console)
+#XENCONSOLED_LOG_GUESTS=no
+
+# Location to store guest & hypervisor logs
+#XENCONSOLED_LOG_DIR=/var/log/xen/console
+
+#XENCONSOLED_ARGS=
+
+#BLKTAPCTRL_ARGS=
--- /dev/null
+## Path: System/xen
+## Description: xen domain start/stop on boot
+## Type: string
+## Default:
+#
+# The xendomains script can send SysRq requests to domains on shutdown.
+# If you don't want to MIGRATE, SAVE, or SHUTDOWN, this may be a way
+# to do a quick and dirty shutdown ("s e i u o") or at least sync the disks
+# of the domains ("s").
+#
+XENDOMAINS_SYSRQ=""
+
+## Type: integer
+## Default: 100000
+#
+# If XENDOMAINS_SYSRQ is set, this variable determines how long to wait
+# (in microseconds) after each SysRq, so the domain has a chance to react.
+# If you want a quick'n'dirty shutdown via SysRq, you may want to set
+# it to a relatively high value (1200000).
+#
+XENDOMAINS_USLEEP=100000
+
+## Type: integer
+## Default: 5000000
+#
+# When creating a guest domain, it is sensible to allow a little time for it
+# to get started before creating another domain or proceeding through the
+# boot process. Without this, the booting guests will thrash the disk as they
+# start up. This timeout (in microseconds) specifies the delay after guest
+# domain creation.
+#
+XENDOMAINS_CREATE_USLEEP=5000000
+
+## Type: string
+## Default: ""
+#
+# Set this to a non-empty string if you want to migrate virtual machines
+# on shutdown. The string will be passed to the xm migrate DOMID command
+# as is: It should contain the target IP address of the physical machine
+# to migrate to and optionally parameters like --live. Leave empty if
+# you don't want to try virtual machine relocation on shutdown.
+# If migration succeeds, neither SAVE nor SHUTDOWN will be executed for
+# that domain.
+#
+XENDOMAINS_MIGRATE=""
+
+## Type: string
+## Default: /var/lib/xen/save
+#
+# Directory to save running domains to when the system (dom0) is
+# shut down. It will also be used to restore domains from if XENDOMAINS_RESTORE
+# is set (see below). Leave empty to disable domain saving on shutdown
+# (e.g. because you would rather shut domains down).
+# If domain saving does succeed, SHUTDOWN will not be executed.
+#
+XENDOMAINS_SAVE=/var/lib/xen/save
+
+## Type: string
+## Default: "--halt --wait"
+#
+# If neither MIGRATE nor SAVE were enabled or if they failed, you can
+# try to shut down a domain by sending it a shutdown request. To do this,
+# set this to "--halt --wait". Omit the "--wait" flag to avoid waiting
+# for the domain to be really down. Leave empty to skip domain shutdown.
+#
+XENDOMAINS_SHUTDOWN="--halt --wait"
+
+## Type: string
+## Default: "--all --halt --wait"
+#
+# After we have gone over all virtual machines (or only the automatically
+# started ones; see XENDOMAINS_AUTO_ONLY below) in a loop and sent SysRq,
+# migrated, saved and/or shutdown according to the settings above, we
+# might want to shutdown the virtual machines that are still running
+# for some reason or another. To do this, set this variable to
+# "--all --halt --wait", it will be passed to xm shutdown.
+# Leave it empty to not do anything special here.
+# (Note: This will hit all virtual machines, even if XENDOMAINS_AUTO_ONLY
+# is set.)
+#
+XENDOMAINS_SHUTDOWN_ALL="--all --halt --wait"
+
+## Type: boolean
+## Default: true
+#
+# This variable determines whether saved domains from XENDOMAINS_SAVE
+# will be restored on system startup.
+#
+XENDOMAINS_RESTORE=true
+
+## Type: string
+## Default: /etc/xen/auto
+#
+# This variable sets the directory where domains configurations
+# are stored that should be started on system startup automatically.
+# Leave empty if you don't want to start domains automatically
+# (or just don't place any xen domain config files in that dir).
+# Note that the script tries to be clever if both RESTORE and AUTO are
+# set: It will first restore saved domains and then only start domains
+# in AUTO which are not running yet.
+# Note that the name matching is somewhat fuzzy.
+#
+XENDOMAINS_AUTO=/etc/xen/auto
+
+## Type: boolean
+## Default: false
+#
+# If this variable is set to "true", only the domains started via config
+# files in XENDOMAINS_AUTO will be treated according to XENDOMAINS_SYSRQ,
+# XENDOMAINS_MIGRATE, XENDOMAINS_SAVE, XENDOMAINS_SHUTDOWN; otherwise
+# all running domains will be.
+# Note that the name matching is somewhat fuzzy.
+#
+XENDOMAINS_AUTO_ONLY=false
+
+## Type: integer
+## Default: 300
+#
+# On xendomains stop, a number of xm commands (xm migrate, save, shutdown,
+# shutdown --all) may be executed. In the worst case, these commands may
+# stall forever, which will prevent a successful shutdown of the machine.
+# If this variable is non-zero, the script will set up a watchdog timer
+# for each of these xm commands and time it out after the number of seconds
+# specified by this variable.
+# Note that SHUTDOWN_ALL will not be called if no virtual machines or only
+# zombies are still running, so you don't need to enable this timeout just
+# for the zombie case.
+# The setting should be large enough to make sure that migrate/save/shutdown
+# can succeed. If you do live migrations, keep in mind that live migration
+# of a 1GB machine over Gigabit ethernet may actually take something like
+# 100s (assuming that live migration uses 10% of the network bandwidth).
+# Depending on the virtual machine, a shutdown may also require a significant
+# amount of time. So it is better to set this variable to a large number and hope the
+# watchdog never fires.
+#
+XENDOMAINS_STOP_MAXWAIT=300
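The watchdog described here is ordinary shell job control: start the command in the background, start a timer that kills it, and see which finishes first. A minimal sketch, with `sleep 2` standing in for a hypothetical stalled xm command and a 1-second limit so the example finishes quickly:

```shell
# Kill a background command if it exceeds the timeout (sketch only).
XENDOMAINS_STOP_MAXWAIT=1
sleep 2 &                      # stand-in for a potentially stalling command
pid=$!
( sleep "$XENDOMAINS_STOP_MAXWAIT" && kill "$pid" ) 2>/dev/null &
watchdog=$!
if wait "$pid" 2>/dev/null; then
    echo "command finished"
else
    echo "command timed out"
fi
kill "$watchdog" 2>/dev/null || true
```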
+
--- /dev/null
+# Kernel sysctl configuration file for Red Hat Linux
+#
+# For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
+# sysctl.conf(5) for more details.
+
+# Controls IP packet forwarding
+net.ipv4.ip_forward = 0
+
+# Controls source route verification
+net.ipv4.conf.default.rp_filter = 1
+
+# Do not accept source routing
+net.ipv4.conf.default.accept_source_route = 0
+
+# Controls the System Request debugging functionality of the kernel
+kernel.sysrq = 0
+
+# Controls whether core dumps will append the PID to the core filename.
+# Useful for debugging multi-threaded applications.
+kernel.core_uses_pid = 1
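Each key in this file maps onto a file under /proc/sys (dots become path separators), which is how sysctl applies the values at boot. A sketch of that mapping, requiring no root access:

```shell
# A sysctl key maps to a /proc/sys path by replacing dots with slashes.
key=net.ipv4.ip_forward
path="/proc/sys/$(printf '%s' "$key" | tr . /)"
echo "$path"
# At boot, 'sysctl -p /etc/sysctl.conf' writes each value to its path.
```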
--- /dev/null
+# $FreeBSD$
+#
+# Spaces ARE valid field separators in this file. However,
+# other *nix-like systems still insist on using tabs as field
+# separators. If you are sharing this file between systems, you
+# may want to use only tabs as field separators here.
+# Consult the syslog.conf(5) manpage.
+*.err;kern.warning;auth.notice;mail.crit /dev/console
+*.notice;authpriv.none;kern.debug;lpr.info;mail.crit;news.err /var/log/messages
+security.* /var/log/security
+auth.info;authpriv.info /var/log/auth.log
+mail.info /var/log/maillog
+lpr.info /var/log/lpd-errs
+ftp.info /var/log/xferlog
+cron.* /var/log/cron
+!-devd
+*.=debug /var/log/debug.log
+*.emerg *
+# uncomment this to log all writes to /dev/console to /var/log/console.log
+# touch /var/log/console.log and chmod it to mode 600 before it will work
+#console.info /var/log/console.log
+# uncomment this to enable logging of all log messages to /var/log/all.log
+# touch /var/log/all.log and chmod it to mode 600 before it will work
+#*.* /var/log/all.log
+# uncomment this to enable logging to a remote loghost named loghost
+#*.* @loghost
+# uncomment these if you're running inn
+# news.crit /var/log/news/news.crit
+# news.err /var/log/news/news.err
+# news.notice /var/log/news/news.notice
+# Uncomment this if you wish to see messages produced by devd
+# !devd
+# *.>=notice /var/log/devd.log
+!ppp
+*.* /var/log/ppp.log
+!*
+include /etc/syslog.d
+include /usr/local/etc/syslog.d
--- /dev/null
+# Standalone mode
+listen=YES
+max_clients=200
+max_per_ip=4
+# Access rights
+anonymous_enable=YES
+local_enable=NO
+write_enable=NO
+anon_upload_enable=NO
+anon_mkdir_write_enable=NO
+anon_other_write_enable=NO
+# Security
+anon_world_readable_only=YES
+connect_from_port_20=YES
+hide_ids=YES
+pasv_min_port=50000
+pasv_max_port=60000
+# Features
+xferlog_enable=YES
+ls_recurse_enable=NO
+ascii_download_enable=NO
+async_abor_enable=YES
+# Performance
+one_process_model=YES
+idle_session_timeout=120
+data_connection_timeout=300
+accept_timeout=60
+connect_timeout=60
+anon_max_rate=50000
--- /dev/null
+#
+# This is the master xinetd configuration file. Settings in the
+# default section will be inherited by all service configurations
+# unless explicitly overridden in the service configuration. See
+# xinetd.conf in the man pages for a more detailed explanation of
+# these attributes.
+
+defaults
+{
+# The next two items are intended to be a quick access place to
+# temporarily enable or disable services.
+#
+# enabled =
+# disabled =
+
+# Define general logging characteristics.
+ log_type = SYSLOG daemon info
+ log_on_failure = HOST
+ log_on_success = PID HOST DURATION EXIT
+
+# Define access restriction defaults
+#
+# no_access =
+# only_from =
+# max_load = 0
+ cps = 50 10
+ instances = 50
+ per_source = 10
+
+# Address and networking defaults
+#
+# bind =
+# mdns = yes
+ v6only = no
+
+# setup environmental attributes
+#
+# passenv =
+ groups = yes
+ umask = 002
+
+# Generally, banners are not used. This sets up their global defaults
+#
+# banner =
+# banner_fail =
+# banner_success =
+}
+
+includedir /etc/xinetd.d
+
--- /dev/null
+# default: off
+# description: The CVS service can record the history of your source \
+# files. CVS stores all the versions of a file in a single \
+# file in a clever way that only stores the differences \
+# between versions.
+service cvspserver
+{
+ disable = yes
+ port = 2401
+ socket_type = stream
+ protocol = tcp
+ wait = no
+ user = root
+ passenv = PATH
+ server = /usr/bin/cvs
+ env = HOME=/var/cvs
+ server_args = -f --allow-root=/var/cvs pserver
+# bind = 127.0.0.1
+}
--- /dev/null
+# default: off
+# description: The rsync server is a good addition to an ftp server, as it \
+# allows crc checksumming etc.
+service rsync
+{
+ disable = yes
+ flags = IPv6
+ socket_type = stream
+ wait = no
+ user = root
+ server = /usr/bin/rsync
+ server_args = --daemon
+ log_on_failure += USERID
+}
--- /dev/null
+[main]
+cachedir=/var/cache/yum
+keepcache=0
+debuglevel=2
+logfile=/var/log/yum.log
+exactarch=1
+obsoletes=1
+gpgcheck=1
+plugins=1
+metadata_expire=1800
+
+installonly_limit=100
+
+# PUT YOUR REPOS HERE OR IN separate files named file.repo
+# in /etc/yum.repos.d
--- /dev/null
+[updates]
+name=Fedora $releasever - $basearch - Updates
+failovermethod=priority
+#baseurl=http://download.fedora.redhat.com/pub/fedora/linux/updates/$releasever/$basearch/
+mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=updates-released-f$releasever&arch=$basearch
+enabled=1
+gpgcheck=1
+gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora
+
+[updates-debuginfo]
+name=Fedora $releasever - $basearch - Updates - Debug
+failovermethod=priority
+#baseurl=http://download.fedora.redhat.com/pub/fedora/linux/updates/$releasever/$basearch/debug/
+mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=updates-released-debug-f$releasever&arch=$basearch
+enabled=0
+gpgcheck=1
+gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora
+
+[updates-source]
+name=Fedora $releasever - Updates Source
+failovermethod=priority
+#baseurl=http://download.fedora.redhat.com/pub/fedora/linux/updates/$releasever/SRPMS/
+mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=updates-released-source-f$releasever&arch=$basearch
+enabled=0
+gpgcheck=1
+gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora
--- /dev/null
+[fedora]
+name=Fedora $releasever - $basearch
+failovermethod=priority
+#baseurl=http://download.fedora.redhat.com/pub/fedora/linux/releases/$releasever/Everything/$basearch/os/
+mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=fedora-$releasever&arch=$basearch
+enabled=1
+gpgcheck=1
+gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora file:///etc/pki/rpm-gpg/RPM-GPG-KEY
+
+[fedora-debuginfo]
+name=Fedora $releasever - $basearch - Debug
+failovermethod=priority
+#baseurl=http://download.fedora.redhat.com/pub/fedora/linux/releases/$releasever/Everything/$basearch/debug/
+mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=fedora-debug-$releasever&arch=$basearch
+enabled=0
+gpgcheck=1
+gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora file:///etc/pki/rpm-gpg/RPM-GPG-KEY
+
+[fedora-source]
+name=Fedora $releasever - Source
+failovermethod=priority
+#baseurl=http://download.fedora.redhat.com/pub/fedora/linux/releases/$releasever/Everything/source/SRPMS/
+mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=fedora-source-$releasever&arch=$basearch
+enabled=0
+gpgcheck=1
+gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora file:///etc/pki/rpm-gpg/RPM-GPG-KEY
--- /dev/null
+[remi]
+name=Les RPM de remi pour FC$releasever - $basearch
+baseurl=http://remi.collet.free.fr/rpms/fc$releasever.$basearch/
+ http://iut-info.ens.univ-reims.fr/remirpms/fc$releasever.$basearch/
+enabled=0
+gpgcheck=1
+gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-remi
+
+[remi-test]
+name=Les RPM de remi en test pour FC$releasever - $basearch
+baseurl=http://remi.collet.free.fr/rpms/test-fc$releasever.$basearch/
+ http://iut-info.ens.univ-reims.fr/remirpms/test-fc$releasever.$basearch/
+enabled=0
+gpgcheck=1
+gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-remi
+
--- /dev/null
+key1=value1
+key2 = value2
+key3= value3
--- /dev/null
+MAILTO=cron@example.com
+RANDOM_DELAY=7
+17 12 */4 * * /usr/sbin/boom
+@reboot /usr/sbin/boom
--- /dev/null
+find this: ]][ )( . * $ / \
--- /dev/null
+# Tests for aug_srun
+
+# Blank lines and lines starting with '#' are ignored. This file is
+# processed by test-run.c
+#
+# The syntax for a test specification is
+# test NAME RESULT ERRCODE
+# [use MODULE]
+# COMMANDS
+# prints
+# OUTPUT
+#
+# where
+# NAME - the name printed to identify the test
+# RESULT - an integer that is compared against the return code
+# of aug_srun
+# ERRCODE - one of the error codes defined in enum errcode_t in augeas.h
+# without the AUG_ prefix, e.g. NOERROR, EMMATCH, etc. If ERRCODE
+# is omitted, it defaults to NOERROR
+# MODULE - the name of a module that should be loaded before the test
+# COMMANDS - the commands to hand to aug_srun; can be multiple lines,
+# which are passed as one string.
+# OUTPUT - the string that aug_srun should print on the OUT file stream
+#
+# The prints keyword and OUTPUT are optional; if they are not provided, the
+# output of aug_srun must be empty
+#
+# Leading spaces are stripped from COMMANDS and OUTPUT; a leading or trailing
+# ':' is also stripped, but the rest of the line is used verbatim.
+#
+# A test passes when RESULT and ERRCODE agree with what aug_srun given
+# COMMANDS produces, and OUTPUT coincides with what aug_srun prints on its
+# OUT stream.
+#
+# The test is run against a tree initialized with AUG_NO_STDINC|AUG_NO_LOAD
+# and file system root /dev/null. The Hosts module is loaded
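+#
+# As an illustration of the specification syntax described above, a
+# hypothetical test could look like the following (kept fully commented
+# out so that test-run.c ignores it; the test name "sample" is made up,
+# while the expected output matches the tree this file is run against):
+#
+#   test sample 1
+#     get /augeas/root
+#   prints
+#     /augeas/root = /dev/null/
+#
+# RESULT is 1 because one command succeeded; no ERRCODE is given, so it
+# defaults to NOERROR.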
+
+
+#
+# Various corner cases
+#
+test null 0
+
+test empty 0
+ :
+
+test quit -2
+ quit
+
+test quit-2 -2
+ get /augeas
+ quit
+prints
+ /augeas (none)
+
+test two-commands 2
+ get /augeas/root
+ get /augeas/span
+prints
+ /augeas/root = /dev/null/
+ /augeas/span = enable
+
+test comment 1
+ :# Get /augeas
+ get /augeas
+prints
+ /augeas (none)
+
+test get_wspace 2
+ : get /augeas :
+ : rm /augeas/root :
+prints
+ /augeas (none)
+ rm : /augeas/root 1
+
+test get_wspace_between 1
+ : get /augeas
+prints
+ /augeas (none)
+
+test unknown-cmd -1 ECMDRUN
+ nocommand
+
+test help 1
+ help
+prints something
+
+#
+# ls tests
+#
+test ls-root 1
+ ls /
+prints
+ augeas/ = (none)
+ files = (none)
+
+test ls-bad-pathx -1 EPATHX
+ ls /files[]
+
+#
+# match tests
+#
+test match-root 1
+ match /
+prints
+ /augeas = (none)
+ /files = (none)
+
+test match-context 1
+ match .
+prints
+ /files = (none)
+
+test match-root-star 1
+ match /*
+prints
+ /augeas = (none)
+ /files = (none)
+
+test match-bad-pathx -1 EPATHX
+ match /files[]
+
+test match-nothing 1
+ match /not-there
+prints
+ : (no matches)
+
+#
+# test rm
+#
+test rm-save-modes 1
+ rm /augeas/version/save
+prints
+ rm : /augeas/version/save 5
+
+test rm-bad-pathx -1 EPATHX
+ rm /files[]
+
+#
+# test mv
+#
+test mv 1
+ mv /augeas/version /files
+
+test mv-not-there -1 ENOMATCH
+ mv /not-there /files
+
+test mv-to-not-there 1
+ mv /files /new-node
+
+test mv-into-descendant -1 EMVDESC
+ mv /augeas /augeas/version
+
+test mv-into-self -1 EMVDESC
+ mv /augeas /augeas
+
+test mv-into-multiple -1 EMMATCH
+ mv /files /augeas/version/save/*
+
+test mv-multiple -1 EMMATCH
+ mv /augeas/version/save/* /files
+
+test mv-tree1 3
+ set /a/b/c value
+ mv /a/b/c /x
+ print /*[ label() != 'augeas' and label() != 'files']
+prints
+ /a
+ /a/b
+ /x = "value"
+
+test mv-tree2 3
+ set /a/b/c value
+ mv /a/b/c /a/x
+ print /*[ label() != 'augeas' and label() != 'files']
+prints
+ /a
+ /a/b
+ /a/x = "value"
+
+test mv-tree3 3
+ set /a/b/c value
+ mv /a/b/c /x/y
+ print /*[ label() != 'augeas' and label() != 'files']
+prints
+ /a
+ /a/b
+ /x
+ /x/y = "value"
+
+test mv-tree4 -1 EMVDESC
+ set /a/b/c value
+ mv /a/b/c /a/b/c/d
+ print /*[ label() != 'augeas' and label() != 'files']
+
+test mv-tree5 3
+ set /a/b/c value
+ mv /a/b/c /a/b/d
+ print /*[ label() != 'augeas' and label() != 'files']
+prints
+ /a
+ /a/b
+ /a/b/d = "value"
+
+test mv-tree6 3
+ set /a/b/c value
+ mv /a /x/y
+ print /*[ label() != 'augeas' and label() != 'files']
+prints
+ /x
+ /x/y
+ /x/y/b
+ /x/y/b/c = "value"
+
+#
+# test rename
+#
+test rename 1
+ rename /augeas/version version2
+prints
+ rename : /augeas/version to version2 1
+
+test rename-into-self 1
+ rename /augeas augeas
+prints
+ rename : /augeas to augeas 1
+
+test rename-tree1 3
+ set /a/b/c value
+ rename /a/b/c x
+ print /*[ label() != 'augeas' and label() != 'files']
+prints
+ rename : /a/b/c to x 1
+ /a
+ /a/b
+ /a/b/x = "value"
+
+test rename-tree2 4
+ set /a/b/c value
+ set /a/b/d value2
+ rename /a/b/c x
+ print /*[ label() != 'augeas' and label() != 'files']
+prints
+ rename : /a/b/c to x 1
+ /a
+ /a/b
+ /a/b/x = "value"
+ /a/b/d = "value2"
+
+test rename-multiple 4
+ set /a/b/d value
+ set /a/c/d value2
+ rename //d x
+ print /*[ label() != 'augeas' and label() != 'files']
+prints
+ rename : //d to x 2
+ /a
+ /a/b
+ /a/b/x = "value"
+ /a/c
+ /a/c/x = "value2"
+
+test rename-slash -1 ELABEL
+ set /a/b/c value
+ rename /a/b/c va/lue
+
+#
+# test set
+#
+test set-not-there 2
+ set /foo value
+ get /foo
+prints
+ /foo = value
+
+test set-existing 2
+ set /files value
+ get /files
+prints
+ /files = value
+
+test set-trailing-slash 2
+ set /files/ value
+ get /files
+prints
+ /files = value
+
+test set-bad-pathx -1 EPATHX
+ set /files[] 1
+
+test set-multiple -1 EMMATCH
+ set /augeas/version/save/mode value
+
+test set-args 2
+ set /files
+ get /files
+prints
+ /files (none)
+
+test set-union-not-there -1 EMMATCH
+ set (/files/left|/files/right) 1
+
+test set-union-existing -1 EMMATCH
+ set /files/left value1
+ set /files/right value2
+ set (/files/left|/files/right) 1
+
+test set-else-not-there 2
+ set '(/files/not-there else /files/not-there-yet)' value
+ get /files/not-there-yet
+prints
+ /files/not-there-yet = value
+
+test set-else-existing 3
+ set /files/existing value
+ set '(/files/existing else /files/not-there)' new_value
+ get /files/existing
+prints
+ /files/existing = new_value
+
+test set-else-update 3
+ set /files/existing value
+ set '(/files/not-there else /files/existing)' value3
+ get /files/existing
+prints
+ /files/existing = value3
+
+
+#
+# test clear
+#
+test clear-not-there 2
+ clear /foo
+ get /foo
+prints
+ /foo (none)
+
+test clear-existing 2
+ clear /files
+ get /files
+prints
+ /files (none)
+
+test clear-bad-pathx -1 EPATHX
+ clear /files[]
+
+test clear-multiple -1 EMMATCH
+ clear /augeas/version/save/mode
+
+test clear-args -1 ECMDRUN
+ clear /files value
+
+#
+# test get
+#
+test get-save-mode 1
+ get /augeas/version/save/mode[1]
+prints
+ /augeas/version/save/mode[1] = backup
+
+test get-too-many -1 EMMATCH
+ get /augeas/*
+
+test get-not-there 1
+ get /not-there
+prints
+ /not-there (o)
+
+test get-bad-pathx -1 EPATHX
+ get /files[]
+
+#
+# test transform
+#
+test transform-1 3
+ transform Test incl /tmp/bar
+ get /augeas/load/Test/lens
+ get /augeas/load/Test/incl
+prints
+ /augeas/load/Test/lens = Test.lns
+ /augeas/load/Test/incl = /tmp/bar
+
+test transform-2 4
+ transform Bar incl /tmp/foo/*
+ transform Bar incl /tmp/bar/*
+ transform Bar excl /tmp/foo/baz
+ print /augeas/load/Bar
+prints
+ /augeas/load/Bar
+ /augeas/load/Bar/lens = "Bar.lns"
+ /augeas/load/Bar/incl[1] = "/tmp/foo/*"
+ /augeas/load/Bar/incl[2] = "/tmp/bar/*"
+ /augeas/load/Bar/excl = "/tmp/foo/baz"
+
+test transform-3 2
+ transform Bar.lns incl /tmp/foo/*
+ print /augeas/load/Bar
+prints
+ /augeas/load/Bar
+ /augeas/load/Bar/lens = "Bar.lns"
+ /augeas/load/Bar/incl = "/tmp/foo/*"
+
+#
+# test print
+#
+test print-save 1
+ print /augeas/version/save
+prints
+ /augeas/version/save
+ /augeas/version/save/mode[1] = "backup"
+ /augeas/version/save/mode[2] = "newfile"
+ /augeas/version/save/mode[3] = "noop"
+ /augeas/version/save/mode[4] = "overwrite"
+
+test print-root 1
+ print /
+
+#
+# test set/get parsing with quoting, whitespace and escaping
+#
+test set-single-quotes 2
+ set /files 'a test value'
+ get /files
+prints
+ /files = a test value
+
+test set-double-quotes 2
+ set /files "a test value"
+ get /files
+prints
+ /files = a test value
+
+test set-mixed-quotes1 2
+ set /files "a 'mixed quotes' test"
+ get /files
+prints
+ /files = a 'mixed quotes' test
+
+test set-mixed-quotes2 2
+ set /files 'a "mixed quotes" test'
+ get /files
+prints
+ /files = a "mixed quotes" test
+
+test set-mixed-quotes-expr 2
+ clear /foo
+ print "/*[ label() != 'augeas' and label() != 'files']"
+prints
+ /foo
+
+test set-quote-concat 2
+ set "/fi"les test
+ get "/fi"les
+prints
+ /files = test
+
+test set-escaped-quotes 2
+ set /files "''\"''"
+ get /files
+prints
+ /files = ''"''
+
+test set-escaped-path 2
+ set /white\ space\ tab value
+ get /white\ space\ tab
+prints
+ /white space tab = value
+
+test set-escaped-path-bracket 2
+ set /white\ space/\[section value
+ print /white\ space/\[section
+prints
+ /white\ space/\[section = "value"
+
+test set-squote-escaped-bracket 2
+ set '/augeas/\[section' value
+ print '/augeas/\[section'
+prints
+ /augeas/\[section = "value"
+
+test set-squote-escaped-path 2
+ set '/white\ space' value
+ get '/white\ space'
+prints
+ /white\ space = value
+
+test set-dquote-escaped-path 2
+ set "/white\ space" value
+ get "/white\ space"
+prints
+ /white\ space = value
+
+test set-tabnline 2
+ set /files newl\ntab\tend
+ get /files
+prints
+ /files = newl
+tab end
+
+test set-tabnline-squote 2
+ set /files 'newl\ntab\tend'
+ get /files
+prints
+ /files = newl
+tab end
+
+test set-tabnline-dquote 2
+ set /files "newl\ntab\tend"
+ get /files
+prints
+ /files = newl
+tab end
+
+# Combinations of quotes in values, some unmatched
+# Tests from David Schmitt (Puppet bug #12199)
+test quot_sq -1 ECMDRUN
+ set /test '
+
+test quot_sq_sq -1 ECMDRUN
+ set /test '''
+
+test quot_sq_dq 2
+ set /test "'"
+ get /test
+prints
+ /test = '
+
+test quot_sqsq 2
+ set /test ''
+ get /test
+prints
+ /test = :
+
+test quot_sqsq_sq 2
+ set /test ''''
+ get /test
+prints
+ /test = :
+
+test quot_sqsq_dq 2
+ set /test "''"
+ get /test
+prints
+ /test = ''
+
+test quot_sqsqsq -1 ECMDRUN
+ set /test '''
+
+test quot_sqsqsq_sq -1 ECMDRUN
+ set /test '''''
+
+test quot_sqsqsq_dq 2
+ set /test "'''"
+ get /test
+prints
+ /test = '''
+
+test quot_sqsqsqsq 2
+ set /test ''''
+ get /test
+prints
+ /test = :
+
+test quot_sqsqsqsq_sq 2
+ set /test ''''''
+ get /test
+prints
+ /test = :
+
+test quot_sqsqsqsq_dq 2
+ set /test "''''"
+ get /test
+prints
+ /test = ''''
+
+test quot_dq -1 ECMDRUN
+ set /test "
+
+test quot_dq_sq 2
+ set /test '"'
+ get /test
+prints
+ /test = "
+
+test quot_dq_dq -1 ECMDRUN
+ set /test """
+
+test quot_dqdq 2
+ set /test ""
+ get /test
+prints
+ /test = :
+
+test quot_dqdq_sq 2
+ set /test '""'
+ get /test
+prints
+ /test = ""
+
+test quot_dqdq_dq 2
+ set /test """"
+ get /test
+prints
+ /test = :
+
+test quot_dqdqdq -1 ECMDRUN
+ set /test """
+
+test quot_dqdqdq_sq 2
+ set /test '"""'
+ get /test
+prints
+ /test = """
+
+test quot_dqdqdq_dq -1 ECMDRUN
+ set /test """""
+
+test quot_dqdqdqdq 2
+ set /test """"
+ get /test
+prints
+ /test = :
+
+test quot_dqdqdqdq_sq 2
+ set /test '""""'
+ get /test
+prints
+ /test = """"
+
+test quot_dqdqdqdq_dq 2
+ set /test """"""
+ get /test
+prints
+ /test = :
+
+test quot_truncated_dq 2
+ set /test "s"bc"d"ef
+ get /test
+prints
+ /test = sbcdef
+
+test quot_truncated_dq_sq 2
+ set /test '"s"bc"d"ef'
+ get /test
+prints
+ /test = "s"bc"d"ef
+
+test quot_truncated_dq_dq 2
+ set /test ""s"bc"d"ef"
+ get /test
+prints
+ /test = sbcdef
+
+test quot_truncated_sq 2
+ set /test 's'bc'd'ef
+ get /test
+prints
+ /test = sbcdef
+
+test quot_truncated_sq_sq 2
+ set /test ''s'bc'd'ef'
+ get /test
+prints
+ /test = sbcdef
+
+test quot_truncated_sq_dq 2
+ set /test "'s'bc'd'ef"
+ get /test
+prints
+ /test = 's'bc'd'ef
+
+test quot_truncated_dq_mix 2
+ set /test "s"bc'd'ef
+ get /test
+prints
+ /test = sbcdef
+
+test quot_truncated_dq_mix_sq 2
+ set /test '"s"bc'd'ef'
+ get /test
+prints
+ /test = "s"bcdef
+
+test quot_truncated_dq_mix_dq 2
+ set /test ""s"bc'd'ef"
+ get /test
+prints
+ /test = sbc'd'ef
+
+test quot_truncated_sq_mix 2
+ set /test 's'bc"d"ef
+ get /test
+prints
+ /test = sbcdef
+
+test quot_truncated_sq_mix_sq 2
+ set /test ''s'bc"d"ef'
+ get /test
+prints
+ /test = sbc"d"ef
+
+test quot_truncated_sq_mix_dq 2
+ set /test "'s'bc"d"ef"
+ get /test
+prints
+ /test = 's'bcdef
+
+test quot_truncated_space -1 ECMDRUN
+ set /test before after
+
+test quot_truncated_space_sq 2
+ set /test 'before after'
+ get /test
+prints
+ /test = before after
+
+test quot_truncated_space_dq 2
+ set /test "before after"
+ get /test
+prints
+ /test = before after
+
+test quot_mix -1 ECMDRUN
+ set /test a"b'c"d'e
+
+test quot_mix_sq -1 ECMDRUN
+ set /test 'a"b'c"d'e'
+
+test quot_mix_dq -1 ECMDRUN
+ set /test "a"b'c"d'e"
+
+#
+# Tests for aug_text_store
+#
+test store_hosts 3
+ use Hosts
+ set /text/in/t1 "192.168.0.1 rtr.example.com foo\n"
+ store Hosts.lns /text/in/t1 /text/tree/t1
+ print /text/tree/t1
+prints
+ /text/tree/t1
+ /text/tree/t1/1
+ /text/tree/t1/1/ipaddr = "192.168.0.1"
+ /text/tree/t1/1/canonical = "rtr.example.com"
+ /text/tree/t1/1/alias = "foo"
+
+test store_nolens -1 ENOLENS
+ set /text/in/t1 "192.168.0.1 rtr.example.com foo\n"
+ store Nomodule.lns /text/in/t1 /text/tree/t1
+
+test store_epathx_node -1 EPATHX
+ use Hosts
+ store Hosts.lns [garbage] /text/tree/t1
+
+test store_epathx_path -1 EPATHX
+ use Hosts
+ store Hosts.lns /text/in/t1 [garbage]
+
+test store_null_text -1 ENOMATCH
+ use Hosts
+ store Hosts.lns /text/in/t1 /text/tree/t1
+
+test store_esyntax 3
+ use Hosts
+ set /text/in/t1 "not a hosts file entry"
+ store Hosts.lns /text/in/t1 /text/tree/t1
+ match /augeas/text/text/tree/t1/error
+prints
+ /augeas/text/text/tree/t1/error = parse_failed
+
+# Bug #283; /text is actually NULL
+test store_null -1 ENOMATCH
+ use Hosts
+ set /text/1 "192.168.0.1 toto\n"
+ store Hosts.lns /text /hosts
+
+#
+# Tests for aug_text_retrieve
+#
+test retrieve_hosts 5
+ use Hosts
+ set /text/in/t1 "192.168.0.1 rtr.example.com foo\n"
+ store Hosts.lns /text/in/t1 /text/tree/t1
+ set /text/tree/t1/1/alias[1] bar
+ retrieve Hosts.lns /text/in/t1 /text/tree/t1 /text/out/t1
+ print /text/out/t1
+prints
+ /text/out/t1 = "192.168.0.1 rtr.example.com bar\n"
+
+test retrieve_nolens -1 ENOLENS
+ set /text/in/t1 "192.168.0.1 rtr.example.com foo\n"
+ store Hosts.lns /text/in/t1 /text/tree/t1
+ retrieve Nomodule.lns /text/in/t1 /text/tree/t1 /text/out/t1
+
+test retrieve_epathx_node_in -1 EPATHX
+ use Hosts
+ retrieve Hosts.lns [garbage] /text/tree/t1 /text/out/t1
+
+test retrieve_epathx_path -1 EPATHX
+ use Hosts
+ retrieve Hosts.lns /text/in/t1 [garbage] /text/out/t1
+
+test retrieve_epathx_node_out -1 EPATHX
+ use Hosts
+ set /text/in/t1 "192.168.0.1 rtr.example.com foo\n"
+ retrieve Hosts.lns /text/in/t1 /text/tree/t1 [garbage]
+
+test retrieve_no_node_in -1 ENOMATCH
+ use Hosts
+ retrieve Hosts.lns /text/in/t1 /text/tree/t1 /text/out/t1
+
+test retrieve_no_tree 3
+ use Hosts
+ set /text/in/t1 "192.168.0.1 rtr.example.com foo\n"
+ retrieve Hosts.lns /text/in/t1 /text/tree/t1 /text/out/t1
+ print /text/out/t1
+prints
+ /text/out/t1 = ""
+
+test retrieve_esyntax 5
+ use Hosts
+ set /text/in/t1 "192.168.0.1 rtr.example.com foo\n"
+ store Hosts.lns /text/in/t1 /text/tree/t1
+ set /text/in/t1 "not a hosts file entry"
+ retrieve Hosts.lns /text/in/t1 /text/tree/t1 /text/out/t1
+ match /augeas//error
+prints
+ /augeas/text/text/tree/t1/error = parse_skel_failed
+
+# Bug #283; /text is actually NULL
+test retrieve_null -1 ENOMATCH
+ use Hosts
+ set /text/1 "192.168.0.1 toto\n"
+ retrieve Hosts.lns /text /hosts /out
+
+# Change 'var=val' to 'variable=value' and check that the span gets updated
+test span_updates_lv 9
+ set /in "var=val\n"
+ store Cron.lns /in /cron
+ span /cron/var
+ span /cron
+ mv /cron/var /cron/variable
+ set /cron/variable value
+ retrieve Cron.lns /in /cron /out
+ span /cron/variable
+ span /cron
+prints
+ /cron label=(0:3) value=(4:7) span=(0,8)
+ /cron label=(0:0) value=(0:0) span=(0,8)
+ /cron label=(0:8) value=(9:14) span=(0,15)
+ /cron label=(0:0) value=(0:0) span=(0,15)
+
+# Make a change and check that the parents' span gets updated
+test span_updates_parent 8
+ set /in "10.0.0.1 gw.example.com\n"
+ store Hosts.lns /in /host
+ span /host/1
+ span /host
+ set /host/1/canonical gateway.example.com
+ retrieve Hosts.lns /in /host /out
+ span /host/1
+ span /host
+prints
+ /host label=(0:0) value=(0:0) span=(0,24)
+ /host label=(0:0) value=(0:0) span=(0,24)
+ /host label=(0:0) value=(0:0) span=(0,29)
+ /host label=(0:0) value=(0:0) span=(0,29)
+
+# Check that creating nodes sets spans
+test span_updates_on_create 5
+ set /in ""
+ set /cron/var val
+ retrieve Cron.lns /in /cron /out
+ span /cron/var
+ span /cron
+prints
+ /cron label=(0:3) value=(4:7) span=(0,8)
+ /cron label=(0:0) value=(0:0) span=(0,8)
--- /dev/null
+/*
+ * test-api.c: test public API functions for conformance
+ *
+ * Copyright (C) 2009-2016 David Lutterkort
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: David Lutterkort <lutter@redhat.com>
+ */
+
+#include <config.h>
+
+#include "augeas.h"
+
+#include "cutest.h"
+#include "internal.h"
+
+#include <unistd.h>
+#include <libgen.h>
+
+#include <libxml/tree.h>
+
+static const char *abs_top_srcdir;
+static char *root;
+static char *loadpath;
+
+#define die(msg) \
+ do { \
+ fprintf(stderr, "%d: Fatal error: %s\n", __LINE__, msg); \
+ exit(EXIT_FAILURE); \
+ } while(0)
+
+static void testGet(CuTest *tc) {
+ int r;
+ const char *value;
+ const char *label;
+ struct augeas *aug;
+
+ aug = aug_init(root, loadpath, AUG_NO_STDINC|AUG_NO_LOAD);
+ CuAssertPtrNotNull(tc, aug);
+ CuAssertIntEquals(tc, AUG_NOERROR, aug_error(aug));
+
+ /* Make sure we're looking at the right thing */
+ r = aug_match(aug, "/augeas/version/save/*", NULL);
+ CuAssertTrue(tc, r > 1);
+ CuAssertIntEquals(tc, AUG_NOERROR, aug_error(aug));
+
+ /* aug_get returns 1 and the value if exactly one node matches */
+ r = aug_get(aug, "/augeas/version/save/*[1]", &value);
+ CuAssertIntEquals(tc, 1, r);
+ CuAssertPtrNotNull(tc, value);
+ CuAssertIntEquals(tc, AUG_NOERROR, aug_error(aug));
+
+ /* aug_get returns 0 and no value when no node matches */
+ r = aug_get(aug, "/augeas/version/save/*[ last() + 1 ]", &value);
+ CuAssertIntEquals(tc, 0, r);
+ CuAssertPtrEquals(tc, NULL, value);
+ CuAssertIntEquals(tc, AUG_NOERROR, aug_error(aug));
+
+ /* aug_get should return an error when multiple nodes match */
+ r = aug_get(aug, "/augeas/version/save/*", &value);
+ CuAssertIntEquals(tc, -1, r);
+ CuAssertPtrEquals(tc, NULL, value);
+ CuAssertIntEquals(tc, AUG_EMMATCH, aug_error(aug));
+
+ /* aug_label returns 1 and the label if exactly one node matches */
+ r = aug_label(aug, "/augeas/version/save/*[1]", &label);
+ CuAssertIntEquals(tc, 1, r);
+ CuAssertPtrNotNull(tc, label);
+ CuAssertIntEquals(tc, AUG_NOERROR, aug_error(aug));
+
+ /* aug_label returns 0 and no label when no node matches */
+ r = aug_label(aug, "/augeas/version/save/*[ last() + 1 ]", &label);
+ CuAssertIntEquals(tc, 0, r);
+ CuAssertPtrEquals(tc, NULL, label);
+ CuAssertIntEquals(tc, AUG_NOERROR, aug_error(aug));
+
+ /* aug_label should return an error when multiple nodes match */
+ r = aug_label(aug, "/augeas/version/save/*", &label);
+ CuAssertIntEquals(tc, -1, r);
+ CuAssertPtrEquals(tc, NULL, label);
+ CuAssertIntEquals(tc, AUG_EMMATCH, aug_error(aug));
+
+ /* augeas should prepend context if relative path given */
+ r = aug_set(aug, "/augeas/context", "/augeas/version");
+ r = aug_get(aug, "save/*[1]", &value);
+ CuAssertIntEquals(tc, 1, r);
+ CuAssertPtrNotNull(tc, value);
+ CuAssertIntEquals(tc, AUG_NOERROR, aug_error(aug));
+
+ /* augeas should still work with an empty context */
+ r = aug_set(aug, "/augeas/context", "");
+ r = aug_get(aug, "/augeas/version", &value);
+ CuAssertIntEquals(tc, 1, r);
+ CuAssertPtrNotNull(tc, value);
+ CuAssertIntEquals(tc, AUG_NOERROR, aug_error(aug));
+
+ /* augeas should ignore trailing slashes in context */
+ r = aug_set(aug, "/augeas/context", "/augeas/version/");
+ r = aug_get(aug, "save/*[1]", &value);
+ CuAssertIntEquals(tc, 1, r);
+ CuAssertPtrNotNull(tc, value);
+ CuAssertIntEquals(tc, AUG_NOERROR, aug_error(aug));
+
+ /* augeas should create non-existent context path */
+ r = aug_set(aug, "/augeas/context", "/context/foo");
+ r = aug_set(aug, "bar", "value");
+ r = aug_get(aug, "/context/foo/bar", &value);
+ CuAssertIntEquals(tc, 1, r);
+ CuAssertPtrNotNull(tc, value);
+ CuAssertIntEquals(tc, AUG_NOERROR, aug_error(aug));
+
+ /* aug_get should set VALUE to NULL even if the path expression is invalid
+ Issue #372 */
+ value = (const char *) 7;
+ r = aug_get(aug, "[invalid path]", &value);
+ CuAssertIntEquals(tc, -1, r);
+ CuAssertPtrEquals(tc, NULL, value);
+
+ aug_close(aug);
+}
+
+static void testSet(CuTest *tc) {
+ int r;
+ const char *value;
+ struct augeas *aug;
+
+ aug = aug_init(root, loadpath, AUG_NO_STDINC|AUG_NO_LOAD);
+ CuAssertPtrNotNull(tc, aug);
+ CuAssertIntEquals(tc, AUG_NOERROR, aug_error(aug));
+
+ /* aug_set returns 0 for a simple set */
+ r = aug_set(aug, "/augeas/testSet", "foo");
+ CuAssertIntEquals(tc, 0, r);
+ CuAssertIntEquals(tc, AUG_NOERROR, aug_error(aug));
+
+    /* aug_set returns -1 when it cannot set because multiple nodes match */
+ r = aug_set(aug, "/augeas/version/save/*", "foo");
+ CuAssertIntEquals(tc, -1, r);
+ CuAssertIntEquals(tc, AUG_EMMATCH, aug_error(aug));
+
+ /* aug_set is able to set the context, even when currently invalid */
+ r = aug_set(aug, "/augeas/context", "( /files | /augeas )");
+ CuAssertIntEquals(tc, 0, r);
+ CuAssertIntEquals(tc, AUG_NOERROR, aug_error(aug));
+ r = aug_get(aug, "/augeas/version", &value);
+ CuAssertIntEquals(tc, -1, r);
+ CuAssertIntEquals(tc, AUG_EMMATCH, aug_error(aug));
+ r = aug_set(aug, "/augeas/context", "/files");
+ CuAssertIntEquals(tc, 0, r);
+ CuAssertIntEquals(tc, AUG_NOERROR, aug_error(aug));
+
+ aug_close(aug);
+}
+
+static void testSetM(CuTest *tc) {
+ int r;
+ struct augeas *aug;
+
+ aug = aug_init(root, loadpath, AUG_NO_STDINC|AUG_NO_LOAD);
+ CuAssertPtrNotNull(tc, aug);
+ CuAssertIntEquals(tc, AUG_NOERROR, aug_error(aug));
+
+ /* Change base nodes when SUB is NULL */
+ r = aug_setm(aug, "/augeas/version/save/*", NULL, "changed");
+ CuAssertIntEquals(tc, 4, r);
+
+ r = aug_match(aug, "/augeas/version/save/*[. = 'changed']", NULL);
+ CuAssertIntEquals(tc, 4, r);
+
+ /* Only change existing nodes */
+ r = aug_setm(aug, "/augeas/version/save", "mode", "again");
+ CuAssertIntEquals(tc, 4, r);
+
+ r = aug_match(aug, "/augeas/version/save/*", NULL);
+ CuAssertIntEquals(tc, 4, r);
+
+ r = aug_match(aug, "/augeas/version/save/*[. = 'again']", NULL);
+ CuAssertIntEquals(tc, 4, r);
+
+ /* Create a new node */
+ r = aug_setm(aug, "/augeas/version/save", "mode[last() + 1]", "newmode");
+ CuAssertIntEquals(tc, 1, r);
+
+ r = aug_match(aug, "/augeas/version/save/*", NULL);
+ CuAssertIntEquals(tc, 5, r);
+
+ r = aug_match(aug, "/augeas/version/save/*[. = 'again']", NULL);
+ CuAssertIntEquals(tc, 4, r);
+
+ r = aug_match(aug, "/augeas/version/save/*[last()][. = 'newmode']", NULL);
+ CuAssertIntEquals(tc, 1, r);
+
+    /* Nonexistent base */
+ r = aug_setm(aug, "/augeas/version/save[last()+1]", "mode", "newmode");
+ CuAssertIntEquals(tc, 0, r);
+
+ /* Invalid path expressions */
+ r = aug_setm(aug, "/augeas/version/save[]", "mode", "invalid");
+ CuAssertIntEquals(tc, -1, r);
+
+ r = aug_setm(aug, "/augeas/version/save/*", "mode[]", "invalid");
+ CuAssertIntEquals(tc, -1, r);
+
+ aug_close(aug);
+}
+
+/* Check that defining a variable leads to a corresponding entry in
+ * /augeas/variables and that that entry disappears when the variable is
+ * undefined */
+static void testDefVarMeta(CuTest *tc) {
+ int r;
+ struct augeas *aug;
+ static const char *const expr = "/augeas/version/save/mode";
+ const char *value;
+
+ aug = aug_init(root, loadpath, AUG_NO_STDINC|AUG_NO_LOAD);
+ CuAssertPtrNotNull(tc, aug);
+ CuAssertIntEquals(tc, AUG_NOERROR, aug_error(aug));
+
+ r = aug_defvar(aug, "var", expr);
+ CuAssertIntEquals(tc, 4, r);
+
+ r = aug_match(aug, "/augeas/variables/*", NULL);
+ CuAssertIntEquals(tc, 1, r);
+
+ r = aug_get(aug, "/augeas/variables/var", &value);
+ CuAssertStrEquals(tc, expr, value);
+
+ r = aug_defvar(aug, "var", NULL);
+ CuAssertIntEquals(tc, 0, r);
+
+ r = aug_match(aug, "/augeas/variables/*", NULL);
+ CuAssertIntEquals(tc, 0, r);
+
+ aug_close(aug);
+}
+
+/* Check that defining a variable with defnode leads to a corresponding
+ * entry in /augeas/variables and that that entry disappears when the
+ * variable is undefined
+ */
+static void testDefNodeExistingMeta(CuTest *tc) {
+ int r, created;
+ struct augeas *aug;
+ static const char *const expr = "/augeas/version/save/mode";
+ const char *value;
+
+ aug = aug_init(root, loadpath, AUG_NO_STDINC|AUG_NO_LOAD);
+ CuAssertPtrNotNull(tc, aug);
+ CuAssertIntEquals(tc, AUG_NOERROR, aug_error(aug));
+
+ r = aug_defnode(aug, "var", expr, "other", &created);
+ CuAssertIntEquals(tc, 4, r);
+ CuAssertIntEquals(tc, 0, created);
+
+ r = aug_match(aug, "/augeas/variables/*", NULL);
+ CuAssertIntEquals(tc, 1, r);
+
+ r = aug_get(aug, "/augeas/variables/var", &value);
+ CuAssertStrEquals(tc, expr, value);
+
+ r = aug_defvar(aug, "var", NULL);
+ CuAssertIntEquals(tc, 0, r);
+
+ r = aug_match(aug, "/augeas/variables/*", NULL);
+ CuAssertIntEquals(tc, 0, r);
+
+ aug_close(aug);
+}
+
+/* Check that defining a variable with defnode leads to a corresponding
+ * entry in /augeas/variables and that that entry disappears when the
+ * variable is undefined
+ */
+static void testDefNodeCreateMeta(CuTest *tc) {
+ int r, created;
+ struct augeas *aug;
+ static const char *const expr = "/augeas/version/save/mode[last()+1]";
+ static const char *const expr_can = "/augeas/version/save/mode[5]";
+ const char *value;
+
+ aug = aug_init(root, loadpath, AUG_NO_STDINC|AUG_NO_LOAD);
+ CuAssertPtrNotNull(tc, aug);
+ CuAssertIntEquals(tc, AUG_NOERROR, aug_error(aug));
+
+ r = aug_defnode(aug, "var", expr, "other", &created);
+ CuAssertIntEquals(tc, 1, r);
+ CuAssertIntEquals(tc, 1, created);
+
+ r = aug_match(aug, "/augeas/variables/*", NULL);
+ CuAssertIntEquals(tc, 1, r);
+
+ r = aug_get(aug, "/augeas/variables/var", &value);
+ CuAssertStrEquals(tc, expr_can, value);
+
+ r = aug_defvar(aug, "var", NULL);
+ CuAssertIntEquals(tc, 0, r);
+
+ r = aug_match(aug, "/augeas/variables/*", NULL);
+ CuAssertIntEquals(tc, 0, r);
+
+ aug_close(aug);
+}
+
+static void reset_indexes(uint *a, uint *b, uint *c, uint *d, uint *e, uint *f) {
+ *a = 0; *b = 0; *c = 0; *d = 0; *e = 0; *f = 0;
+}
+
+#define SPAN_TEST_DEF_LAST { .expr = NULL, .ls = 0, .le = 0, \
+ .vs = 0, .ve = 0, .ss = 0, .se = 0 }
+
+struct span_test_def {
+ const char *expr;
+ const char *f;
+ int ret;
+ int ls;
+ int le;
+ int vs;
+ int ve;
+ int ss;
+ int se;
+};
+
+static const struct span_test_def span_test[] = {
+ { .expr = "/files/etc/hosts/1/ipaddr", .f = "hosts", .ret = 0, .ls = 0, .le = 0, .vs = 104, .ve = 113, .ss = 104, .se = 113 },
+ { .expr = "/files/etc/hosts/1", .f = "hosts", .ret = 0, .ls = 0, .le = 0, .vs = 0, .ve = 0, .ss = 104, .se = 171 },
+ { .expr = "/files/etc/hosts/*[last()]", .f = "hosts", .ret = 0, .ls = 0, .le = 0, .vs = 0, .ve = 0, .ss = 266, .se = 309 },
+ { .expr = "/files/etc/hosts/#comment[2]", .f = "hosts", .ret = 0, .ls = 0, .le = 0, .vs = 58, .ve = 103, .ss = 56, .se = 104 },
+ { .expr = "/files/etc/hosts", .f = "hosts", .ret = 0, .ls = 0, .le = 0, .vs = 0, .ve = 0, .ss = 0, .se = 309 },
+ { .expr = "/files", .f = NULL, .ret = -1, .ls = 0, .le = 0, .vs = 0, .ve = 0, .ss = 0, .se = 0 },
+ { .expr = "/random", .f = NULL, .ret = -1, .ls = 0, .le = 0, .vs = 0, .ve = 0, .ss = 0, .se = 0 },
+ SPAN_TEST_DEF_LAST
+};
+
+static void testNodeInfo(CuTest *tc) {
+ int ret;
+ int i = 0;
+ struct augeas *aug;
+ struct span_test_def test;
+ char *fbase;
+ char msg[1024];
+ static const char *const expr = "/files/etc/hosts/1/ipaddr";
+
+ char *filename_ac;
+ uint label_start, label_end, value_start, value_end, span_start, span_end;
+
+ aug = aug_init(root, loadpath, AUG_NO_STDINC|AUG_NO_LOAD|AUG_ENABLE_SPAN);
+ ret = aug_load(aug);
+ CuAssertRetSuccess(tc, ret);
+
+ while(span_test[i].expr != NULL) {
+ test = span_test[i];
+ i++;
+ ret = aug_span(aug, test.expr, &filename_ac, &label_start, &label_end,
+ &value_start, &value_end, &span_start, &span_end);
+ sprintf(msg, "span_test %d ret\n", i);
+ CuAssertIntEquals_Msg(tc, msg, test.ret, ret);
+ sprintf(msg, "span_test %d label_start\n", i);
+ CuAssertIntEquals_Msg(tc, msg, test.ls, label_start);
+ sprintf(msg, "span_test %d label_end\n", i);
+ CuAssertIntEquals_Msg(tc, msg, test.le, label_end);
+ sprintf(msg, "span_test %d value_start\n", i);
+ CuAssertIntEquals_Msg(tc, msg, test.vs, value_start);
+ sprintf(msg, "span_test %d value_end\n", i);
+ CuAssertIntEquals_Msg(tc, msg, test.ve, value_end);
+ sprintf(msg, "span_test %d span_start\n", i);
+ CuAssertIntEquals_Msg(tc, msg, test.ss, span_start);
+ sprintf(msg, "span_test %d span_end\n", i);
+ CuAssertIntEquals_Msg(tc, msg, test.se, span_end);
+ if (filename_ac != NULL) {
+ fbase = basename(filename_ac);
+ } else {
+ fbase = NULL;
+ }
+ sprintf(msg, "span_test %d filename\n", i);
+ CuAssertStrEquals_Msg(tc, msg, test.f, fbase);
+ free(filename_ac);
+ filename_ac = NULL;
+ reset_indexes(&label_start, &label_end, &value_start, &value_end,
+ &span_start, &span_end);
+ }
+
+ /* aug_span returns -1 when no node matches */
+ ret = aug_span(aug, "/files/etc/hosts/*[ last() + 1 ]", &filename_ac,
+ &label_start, &label_end, &value_start, &value_end,
+ &span_start, &span_end);
+ CuAssertIntEquals(tc, -1, ret);
+ CuAssertPtrEquals(tc, NULL, filename_ac);
+ CuAssertIntEquals(tc, AUG_ENOMATCH, aug_error(aug));
+
+ /* aug_span should return an error when multiple nodes match */
+ ret = aug_span(aug, "/files/etc/hosts/*", &filename_ac,
+ &label_start, &label_end, &value_start, &value_end,
+ &span_start, &span_end);
+ CuAssertIntEquals(tc, -1, ret);
+ CuAssertPtrEquals(tc, NULL, filename_ac);
+ CuAssertIntEquals(tc, AUG_EMMATCH, aug_error(aug));
+
+ /* aug_span returns -1 if node spans are not loaded */
+ aug_close(aug);
+ aug = aug_init(root, loadpath, AUG_NO_STDINC|AUG_NO_LOAD);
+ ret = aug_load(aug);
+ CuAssertRetSuccess(tc, ret);
+ ret = aug_span(aug, expr, &filename_ac, &label_start, &label_end,
+ &value_start, &value_end, &span_start, &span_end);
+ CuAssertIntEquals(tc, -1, ret);
+ CuAssertPtrEquals(tc, NULL, filename_ac);
+ CuAssertIntEquals(tc, AUG_ENOSPAN, aug_error(aug));
+ reset_indexes(&label_start, &label_end, &value_start, &value_end,
+ &span_start, &span_end);
+
+ aug_close(aug);
+}
+
+static void testMv(CuTest *tc) {
+ struct augeas *aug;
+ int r;
+
+ aug = aug_init(root, loadpath, AUG_NO_STDINC|AUG_NO_LOAD);
+ CuAssertPtrNotNull(tc, aug);
+
+ r = aug_set(aug, "/a/b/c", "value");
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_mv(aug, "/a/b/c", "/a/b/c/d");
+ CuAssertIntEquals(tc, -1, r);
+ CuAssertIntEquals(tc, AUG_EMVDESC, aug_error(aug));
+
+ aug_close(aug);
+}
+
+static void testCp(CuTest *tc) {
+ struct augeas *aug;
+ int r;
+ const char *value;
+
+ aug = aug_init(root, loadpath, AUG_NO_STDINC|AUG_NO_LOAD);
+ CuAssertPtrNotNull(tc, aug);
+ r = aug_load(aug);
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_set(aug, "/a/b/c", "value");
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_cp(aug, "/a/b/c", "/a/b/c/d");
+ CuAssertIntEquals(tc, -1, r);
+ CuAssertIntEquals(tc, AUG_ECPDESC, aug_error(aug));
+
+ // Copy recursive tree with empty label
+ r = aug_cp(aug, "/files/etc/logrotate.d/rpm/rule/create", "/files/etc/logrotate.d/acpid/rule/create");
+ CuAssertRetSuccess(tc, r);
+
+ // Check empty label
+ r = aug_get(aug, "/files/etc/logrotate.d/rpm/rule/create", &value);
+ CuAssertIntEquals(tc, 1, r);
+ CuAssertStrEquals(tc, NULL, value);
+
+ // Check that copies are well separated
+ r = aug_set(aug, "/files/etc/logrotate.d/rpm/rule/create/mode", "1234");
+ CuAssertRetSuccess(tc, r);
+ r = aug_set(aug, "/files/etc/logrotate.d/acpid/rule/create/mode", "5678");
+ CuAssertRetSuccess(tc, r);
+ r = aug_get(aug, "/files/etc/logrotate.d/rpm/rule/create/mode", &value);
+ CuAssertIntEquals(tc, 1, r);
+ CuAssertStrEquals(tc, "1234", value);
+ r = aug_get(aug, "/files/etc/logrotate.d/acpid/rule/create/mode", &value);
+ CuAssertIntEquals(tc, 1, r);
+ CuAssertStrEquals(tc, "5678", value);
+
+ aug_close(aug);
+}
+
+
+static void testRename(CuTest *tc) {
+ struct augeas *aug;
+ int r;
+
+ aug = aug_init(root, loadpath, AUG_NO_STDINC|AUG_NO_LOAD);
+ CuAssertPtrNotNull(tc, aug);
+
+ r = aug_set(aug, "/a/b/c", "value");
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_rename(aug, "/a/b/c", "d");
+ CuAssertIntEquals(tc, 1, r);
+
+ r = aug_set(aug, "/a/e/d", "value2");
+ CuAssertRetSuccess(tc, r);
+
+ // Multiple rename
+ r = aug_rename(aug, "/a//d", "x");
+ CuAssertIntEquals(tc, 2, r);
+
+ // Label with a /
+ r = aug_rename(aug, "/a/e/x", "a/b");
+ CuAssertIntEquals(tc, -1, r);
+ CuAssertIntEquals(tc, AUG_ELABEL, aug_error(aug));
+
+ aug_close(aug);
+}
+
+static void testToXml(CuTest *tc) {
+ struct augeas *aug;
+ int r;
+ xmlNodePtr xmldoc, xmlnode;
+ xmlChar *value;
+
+ aug = aug_init(root, loadpath, AUG_NO_STDINC|AUG_NO_LOAD);
+ r = aug_load(aug);
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_to_xml(aug, "/files/etc/passwd", &xmldoc, 0);
+ CuAssertRetSuccess(tc, r);
+
+ value = xmlGetProp(xmldoc, BAD_CAST "match");
+ CuAssertStrEquals(tc, "/files/etc/passwd", (const char *) value);
+ xmlFree(value);
+
+ xmlnode = xmlFirstElementChild(xmldoc);
+ value = xmlGetProp(xmlnode, BAD_CAST "label");
+ CuAssertStrEquals(tc, "passwd", (const char *) value);
+ xmlFree(value);
+
+ value = xmlGetProp(xmlnode, BAD_CAST "path");
+ CuAssertStrEquals(tc, "/files/etc/passwd", (const char *) value);
+ xmlFree(value);
+
+ xmlnode = xmlFirstElementChild(xmlnode);
+ value = xmlGetProp(xmlnode, BAD_CAST "label");
+ CuAssertStrEquals(tc, "root", (const char *) value);
+ xmlFree(value);
+ xmlFreeNode(xmldoc);
+
+ /* Bug #239 */
+ r = aug_set(aug, "/augeas/context", "/files/etc/passwd");
+ CuAssertRetSuccess(tc, r);
+ r = aug_to_xml(aug, ".", &xmldoc, 0);
+ CuAssertRetSuccess(tc, r);
+ xmlnode = xmlFirstElementChild(xmldoc);
+ value = xmlGetProp(xmlnode, BAD_CAST "label");
+ CuAssertStrEquals(tc, "passwd", (const char *) value);
+ xmlFree(value);
+
+ xmlFreeNode(xmldoc);
+ aug_close(aug);
+}
+
+static void testTextStore(CuTest *tc) {
+ static const char *const hosts = "192.168.0.1 rtr.example.com router\n";
+ /* Not acceptable for Hosts.lns - missing canonical and \n */
+ static const char *const hosts_bad = "192.168.0.1";
+ const char *v;
+
+ struct augeas *aug;
+ int r;
+
+ aug = aug_init(root, loadpath, AUG_NO_STDINC|AUG_NO_LOAD);
+ CuAssertPtrNotNull(tc, aug);
+
+ r = aug_set(aug, "/raw/hosts", hosts);
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_text_store(aug, "Hosts.lns", "/raw/hosts", "/t1");
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_match(aug, "/t1/*", NULL);
+ CuAssertIntEquals(tc, 1, r);
+
+ /* Test bad lens name */
+ r = aug_text_store(aug, "Notthere.lns", "/raw/hosts", "/t2");
+ CuAssertIntEquals(tc, -1, r);
+ CuAssertIntEquals(tc, AUG_ENOLENS, aug_error(aug));
+
+ r = aug_match(aug, "/t2", NULL);
+ CuAssertIntEquals(tc, 0, r);
+
+ /* Test parse error */
+ r = aug_set(aug, "/raw/hosts_bad", hosts_bad);
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_text_store(aug, "Hosts.lns", "/raw/hosts_bad", "/t3");
+ CuAssertIntEquals(tc, -1, r);
+
+ r = aug_match(aug, "/t3", NULL);
+ CuAssertIntEquals(tc, 0, r);
+
+ r = aug_get(aug, "/augeas/text/t3/error", &v);
+ CuAssertIntEquals(tc, 1, r);
+ CuAssertStrEquals(tc, "parse_failed", v);
+
+ r = aug_text_store(aug, "Hosts.lns", "/raw/hosts", "/t3");
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_match(aug, "/augeas/text/t3/error", NULL);
+ CuAssertIntEquals(tc, 0, r);
+
+ /* Test invalid PATH */
+ r = aug_text_store(aug, "Hosts.lns", "/raw/hosts", "[garbage]");
+ CuAssertIntEquals(tc, -1, r);
+ CuAssertIntEquals(tc, AUG_EPATHX, aug_error(aug));
+
+ r = aug_match(aug, "/t2", NULL);
+ CuAssertIntEquals(tc, 0, r);
+
+ aug_close(aug);
+}
+
+static void testTextRetrieve(CuTest *tc) {
+ static const char *const hosts = "192.168.0.1 rtr.example.com router\n";
+ const char *hosts_out;
+ struct augeas *aug;
+ int r;
+
+ aug = aug_init(root, loadpath, AUG_NO_STDINC|AUG_NO_LOAD);
+ CuAssertPtrNotNull(tc, aug);
+
+ r = aug_set(aug, "/raw/hosts", hosts);
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_text_store(aug, "Hosts.lns", "/raw/hosts", "/t1");
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_text_retrieve(aug, "Hosts.lns", "/raw/hosts", "/t1", "/out/hosts");
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_get(aug, "/out/hosts", &hosts_out);
+ CuAssertIntEquals(tc, 1, r);
+
+ CuAssertStrEquals(tc, hosts, hosts_out);
+
+ aug_close(aug);
+}
+
+static void testAugEscape(CuTest *tc) {
+ static const char *const in = "a/[]b|=c()!, \td";
+ static const char *const exp = "a\\/\\[\\]b\\|\\=c\\(\\)\\!\\,\\ \\\td";
+ char *out;
+ struct augeas *aug;
+ int r;
+
+ aug = aug_init(root, loadpath, AUG_NO_STDINC|AUG_NO_LOAD);
+ CuAssertPtrNotNull(tc, aug);
+
+ r = aug_escape_name(aug, in, &out);
+ CuAssertRetSuccess(tc, r);
+
+ CuAssertStrEquals(tc, out, exp);
+ free(out);
+
+ aug_close(aug);
+}
+
+static void testRm(CuTest *tc) {
+ struct augeas *aug;
+ int r;
+
+ aug = aug_init(root, loadpath, AUG_NO_STDINC|AUG_NO_MODL_AUTOLOAD);
+ CuAssertPtrNotNull(tc, aug);
+
+ r = aug_set(aug, "/files/1/2/3/4/5", "1");
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_rm(aug, "/files//*");
+ CuAssertIntEquals(tc, 5, r);
+
+ aug_close(aug);
+}
+
+static void testLoadFile(CuTest *tc) {
+ struct augeas *aug;
+ const char *value;
+ int r;
+
+ aug = aug_init(root, loadpath, AUG_NO_STDINC|AUG_NO_LOAD);
+ CuAssertPtrNotNull(tc, aug);
+ CuAssertIntEquals(tc, AUG_NOERROR, aug_error(aug));
+
+ /* augeas should load a single file */
+ r = aug_load_file(aug, "/etc/fstab");
+ CuAssertRetSuccess(tc, r);
+ r = aug_get(aug, "/files/etc/fstab/1/vfstype", &value);
+ CuAssertIntEquals(tc, 1, r);
+ CuAssertPtrNotNull(tc, value);
+
+ /* Only one file should be loaded */
+ r = aug_match(aug, "/files/etc/*", NULL);
+ CuAssertIntEquals(tc, 1, r);
+
+ /* augeas should return an error when no lens can be found for a file */
+ r = aug_load_file(aug, "/etc/unknown.conf");
+ CuAssertIntEquals(tc, -1, r);
+ CuAssertIntEquals(tc, AUG_ENOLENS, aug_error(aug));
+
+ /* augeas should return without an error when trying to load a
+ nonexistent file that would be handled by a lens */
+ r = aug_load_file(aug, "/etc/mtab");
+ CuAssertRetSuccess(tc, r);
+ r = aug_match(aug, "/files/etc/mtab", NULL);
+ CuAssertIntEquals(tc, 0, r);
+
+ aug_close(aug);
+}
+
+/* Make sure that if somebody erroneously creates a node
+ /augeas/files/path, we do not corrupt the tree. It used to be that
+ having such a node would free /augeas/files
+*/
+static void testLoadBadPath(CuTest *tc) {
+ struct augeas *aug;
+ int r;
+
+ aug = aug_init(root, loadpath, AUG_NO_STDINC|AUG_NO_LOAD);
+ CuAssertPtrNotNull(tc, aug);
+ CuAssertIntEquals(tc, AUG_NOERROR, aug_error(aug));
+
+ r = aug_set(aug, "/augeas/files/path", "/files");
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_load(aug);
+ CuAssertRetSuccess(tc, r);
+
+ aug_close(aug);
+}
+
+/* If a lens is set to a partial path which happens to actually resolve to
+ a file when appended to the loadpath, we went into an infinite loop of
+ loading a module, but then not realizing that it had actually been
+ loaded, reloading it over and over again.
+ See https://github.com/hercules-team/augeas/issues/522
+*/
+static void testLoadBadLens(CuTest *tc) {
+ struct augeas *aug;
+ int r;
+ char *lp;
+
+ // This setup depends on the fact that
+ // loadpath == abs_top_srcdir + "/lenses"
+ r = asprintf(&lp, "%s:%s", loadpath, abs_top_srcdir);
+ CuAssert(tc, "failed to allocate loadpath", (r >= 0));
+
+ aug = aug_init(root, lp, AUG_NO_STDINC|AUG_NO_LOAD);
+ CuAssertPtrNotNull(tc, aug);
+ CuAssertIntEquals(tc, AUG_NOERROR, aug_error(aug));
+
+ r = aug_set(aug, "/augeas/load/Fstab/lens", "lenses/Fstab.lns");
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_load(aug);
+ CuAssertRetSuccess(tc, r);
+
+ // We used to record the error from loading the lens above against
+ // every lens that we tried to load after it.
+ r = aug_match(aug, "/augeas//error", NULL);
+ CuAssertIntEquals(tc, 1, r);
+
+ free(lp);
+}
+
+/* Test the aug_ns_* functions */
+static void testAugNs(CuTest *tc) {
+ struct augeas *aug;
+ int r;
+ const char *v, *l;
+ char *s;
+
+ aug = aug_init(root, loadpath, AUG_NO_STDINC|AUG_NO_LOAD);
+ CuAssertPtrNotNull(tc, aug);
+ CuAssertIntEquals(tc, AUG_NOERROR, aug_error(aug));
+
+ r = aug_load_file(aug, "/etc/hosts");
+ CuAssertIntEquals(tc, 0, r);
+
+ r = aug_defvar(aug, "matches",
+ "/files/etc/hosts/*[label() != '#comment']/ipaddr");
+ CuAssertIntEquals(tc, 2, r);
+
+ r = aug_ns_attr(aug, "matches", 0, &v, &l, &s);
+ CuAssertIntEquals(tc, 1, r);
+ CuAssertStrEquals(tc, "127.0.0.1", v);
+ CuAssertStrEquals(tc, "ipaddr", l);
+ CuAssertStrEquals(tc, "/files/etc/hosts", s);
+ free(s);
+ s = NULL;
+
+ r = aug_ns_attr(aug, "matches", 0, NULL, NULL, NULL);
+ CuAssertIntEquals(tc, 1, r);
+
+ r = aug_ns_attr(aug, "matches", 1, &v, NULL, &s);
+ CuAssertIntEquals(tc, 1, r);
+ CuAssertStrEquals(tc, "172.31.122.14", v);
+ CuAssertStrEquals(tc, "/files/etc/hosts", s);
+ free(s);
+ s = NULL;
+
+ r = aug_ns_attr(aug, "matches", 2, &v, &l, &s);
+ CuAssertIntEquals(tc, -1, r);
+ CuAssertPtrEquals(tc, NULL, v);
+ CuAssertPtrEquals(tc, NULL, l);
+ CuAssertPtrEquals(tc, NULL, s);
+ free(s);
+ s = NULL;
+
+ aug_close(aug);
+}
+
+/* Test aug_source */
+static void testAugSource(CuTest *tc) {
+ struct augeas *aug;
+ int r;
+ char *s;
+
+ aug = aug_init(root, loadpath, AUG_NO_STDINC|AUG_NO_LOAD);
+ CuAssertPtrNotNull(tc, aug);
+ CuAssertIntEquals(tc, AUG_NOERROR, aug_error(aug));
+
+ r = aug_load_file(aug, "/etc/hosts");
+ CuAssertIntEquals(tc, 0, r);
+
+ r = aug_source(aug, "/files/etc/hosts/1", &s);
+ CuAssertIntEquals(tc, 0, r);
+ CuAssertStrEquals(tc, "/files/etc/hosts", s);
+ free(s);
+
+ r = aug_source(aug, "/files/etc/fstab", &s);
+ CuAssertIntEquals(tc, -1, r);
+ CuAssertIntEquals(tc, AUG_ENOMATCH, aug_error(aug));
+ CuAssertPtrEquals(tc, NULL, s);
+
+ r = aug_source(aug, "/files[", &s);
+ CuAssertIntEquals(tc, -1, r);
+ CuAssertIntEquals(tc, AUG_EPATHX, aug_error(aug));
+ CuAssertPtrEquals(tc, NULL, s);
+
+ r = aug_source(aug, "/files/etc/hosts/*", &s);
+ CuAssertIntEquals(tc, -1, r);
+ CuAssertIntEquals(tc, AUG_EMMATCH, aug_error(aug));
+ CuAssertPtrEquals(tc, NULL, s);
+
+ aug_close(aug);
+}
+
+static void testAugPreview(CuTest *tc) {
+ struct augeas *aug;
+ int r;
+ char *s;
+ char *etc_hosts_fn = NULL;
+ FILE *hosts_fp = NULL;
+ char *hosts_txt = NULL;
+ int readsz = 0;
+
+ /* Read the original contents of the etc/hosts file */
+ if (asprintf(&etc_hosts_fn, "%s/etc/hosts", root) >= 0) {
+ hosts_fp = fopen(etc_hosts_fn, "r");
+ if (hosts_fp) {
+ /* +1 so the buffer can always hold the terminating NUL,
+ even when fread fills all 4096 bytes */
+ hosts_txt = calloc(sizeof(char), 4096 + 1);
+ if (hosts_txt) {
+ readsz = fread(hosts_txt, sizeof(char), 4096, hosts_fp);
+ *(hosts_txt + readsz) = '\0';
+ }
+ }
+ fclose(hosts_fp);
+ }
+ free(etc_hosts_fn);
+ }
+
+ aug = aug_init(root, loadpath, AUG_NO_STDINC|AUG_NO_LOAD);
+ CuAssertPtrNotNull(tc, aug);
+ CuAssertIntEquals(tc, AUG_NOERROR, aug_error(aug));
+
+ r = aug_load_file(aug, "/etc/hosts");
+ CuAssertIntEquals(tc, 0, r);
+
+ r = aug_preview(aug, "/files/etc/hosts/1", &s);
+ CuAssertIntEquals(tc, 0, r);
+ CuAssertStrEquals(tc, hosts_txt, s);
+
+ free(hosts_txt);
+ free(s);
+ aug_close(aug);
+}
+
+int main(void) {
+ char *output = NULL;
+ CuSuite* suite = CuSuiteNew();
+ CuSuiteSetup(suite, NULL, NULL);
+
+ SUITE_ADD_TEST(suite, testGet);
+ SUITE_ADD_TEST(suite, testSet);
+ SUITE_ADD_TEST(suite, testSetM);
+ SUITE_ADD_TEST(suite, testDefVarMeta);
+ SUITE_ADD_TEST(suite, testDefNodeExistingMeta);
+ SUITE_ADD_TEST(suite, testDefNodeCreateMeta);
+ SUITE_ADD_TEST(suite, testNodeInfo);
+ SUITE_ADD_TEST(suite, testMv);
+ SUITE_ADD_TEST(suite, testCp);
+ SUITE_ADD_TEST(suite, testRename);
+ SUITE_ADD_TEST(suite, testToXml);
+ SUITE_ADD_TEST(suite, testTextStore);
+ SUITE_ADD_TEST(suite, testTextRetrieve);
+ SUITE_ADD_TEST(suite, testAugEscape);
+ SUITE_ADD_TEST(suite, testRm);
+ SUITE_ADD_TEST(suite, testLoadFile);
+ SUITE_ADD_TEST(suite, testLoadBadPath);
+ SUITE_ADD_TEST(suite, testLoadBadLens);
+ SUITE_ADD_TEST(suite, testAugNs);
+ SUITE_ADD_TEST(suite, testAugSource);
+ SUITE_ADD_TEST(suite, testAugPreview);
+
+ abs_top_srcdir = getenv("abs_top_srcdir");
+ if (abs_top_srcdir == NULL)
+ die("env var abs_top_srcdir must be set");
+
+ if (asprintf(&root, "%s/tests/root", abs_top_srcdir) < 0) {
+ die("failed to set root");
+ }
+
+ if (asprintf(&loadpath, "%s/lenses", abs_top_srcdir) < 0) {
+ die("failed to set loadpath");
+ }
+
+ CuSuiteRun(suite);
+ CuSuiteSummary(suite, &output);
+ CuSuiteDetails(suite, &output);
+ printf("%s\n", output);
+ free(output);
+ int result = suite->failCount;
+ CuSuiteFree(suite);
+ return result;
+}
+
+/*
+ * Local variables:
+ * indent-tabs-mode: nil
+ * c-indent-level: 4
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
--- /dev/null
+#!/bin/sh
+
+# Tests for augmatch
+
+TOPDIR=$(cd $(dirname $0)/.. && pwd)
+[ -n "$abs_top_srcdir" ] || abs_top_srcdir=$TOPDIR
+
+export AUGEAS_LENS_LIB=$abs_top_srcdir/lenses
+export AUGEAS_ROOT=$abs_top_srcdir/tests/root
+
+fail() {
+ echo "failed: $*"
+ exit 1
+}
+
+assert_eq() {
+ if [ "$1" != "$2" ]; then
+ shift 2
+ fail "$@"
+ fi
+}
+
+# print the tree for /etc/exports
+act=$(augmatch /etc/exports)
+assert_eq 23 $(echo "$act" | wc -l) "t1: expected 23 lines of output"
+
+# show only the entry for a specific mount
+act=$(augmatch -m 'dir["/home"]' /etc/exports)
+assert_eq 9 $(echo "$act" | wc -l) "t2: expected 9 lines of output"
+
+# show all the clients to which we are exporting /home
+act=$(augmatch -eom 'dir["/home"]/client' /etc/exports)
+exp=$(printf "207.46.0.0/16\n192.168.50.2/32\n")
+assert_eq "$exp" "$act" "t3: expected '$exp'"
+
+# report errors with exit code 2
+augmatch -m '**' /etc/exports >/dev/null 2>&1
+ret=$?
+assert_eq 2 $ret "t4: expected exit code 2 but got $ret"
+
+augmatch /etc >/dev/null 2>&1
+ret=$?
+assert_eq 2 $ret "t5: expected exit code 2 but got $ret"
+
+# test --quiet
+act=$(augmatch -q -m "dir['/local']" /etc/exports)
+ret=$?
+assert_eq '' "$act" "t6: expected no output"
+assert_eq 0 $ret "t6: expected exit code 0 but got $ret"
+
+act=$(augmatch -q -m "dir['/not_there']" /etc/exports)
+ret=$?
+assert_eq '' "$act" "t7: expected no output"
+assert_eq 1 $ret "t7: expected exit code 1 but got $ret"
--- /dev/null
+#!/bin/bash
+
+# Test that augprint produces the expected output, and that the output
+# can be used to reconstruct the original file
+
+
+test_augprint_files=$abs_top_srcdir/tests/test-augprint
+
+export AUGEAS_ROOT=$abs_top_builddir/build/test-augprint
+export AUGEAS_LENS_LIB=$abs_top_srcdir/lenses
+
+# ------------- /etc/hosts ------------
+mkdir -p $AUGEAS_ROOT/etc
+cp --no-preserve=mode --recursive $test_augprint_files/etc/ $AUGEAS_ROOT/
+
+output=$AUGEAS_ROOT/etc.hosts.augprint
+AUGEAS_ROOT=$test_augprint_files augprint --pretty --verbose /etc/hosts > $output
+augtool --load-file=/etc/hosts -f $output --autosave 1>/dev/null
+# Check that output matches expected output
+diff -bu $test_augprint_files/etc.hosts.pretty.augprint $output || exit 1
+# Check that file is unchanged
+diff -bu -B $test_augprint_files/etc/hosts $AUGEAS_ROOT/etc/hosts || exit 1
+
+AUGEAS_ROOT=$test_augprint_files augprint --regexp=2 /etc/hosts > $output
+augtool --load-file=/etc/hosts -f $output --autosave 1>/dev/null
+# Check that output matches expected output
+diff -bu $test_augprint_files/etc.hosts.regexp.augprint $output || exit 1
+# Check that file is unchanged
+diff -bu -B $test_augprint_files/etc/hosts $AUGEAS_ROOT/etc/hosts || exit 1
+
+# ------------- /etc/squid/squid.conf and /etc/pam.d/system-auth ------------
+for test_file in /etc/squid/squid.conf /etc/pam.d/system-auth ; do
+ mkdir -p $(dirname $AUGEAS_ROOT/$test_file)
+ output=${test_file//\//.}
+ output=${AUGEAS_ROOT}/${output#.}.augprint
+ AUGEAS_ROOT=$test_augprint_files augprint $test_file > $output
+ augtool --load-file=$test_file -f $output --autosave 1>/dev/null
+ # Check that file is unchanged
+ diff -bu -B $test_augprint_files/$test_file $AUGEAS_ROOT/$test_file || exit 1
+done
+
--- /dev/null
+# Using default lens: Hosts
+# transform Hosts incl /etc/hosts
+# /files/etc
+# /files/etc/hosts
+# /files/etc/hosts/#comment[1] '/etc/hosts for testing specific functionality of augprint'
+set /files/etc/hosts/#comment[.='/etc/hosts for testing specific functionality of augprint'] '/etc/hosts for testing specific functionality of augprint'
+
+# /files/etc/hosts/1
+# /files/etc/hosts/1/ipaddr '127.0.0.1'
+set /files/etc/hosts/seq::*[ipaddr='127.0.0.1' ]/ipaddr '127.0.0.1'
+# /files/etc/hosts/1/canonical 'localhost'
+set /files/etc/hosts/seq::*[ipaddr='127.0.0.1' ]/canonical 'localhost'
+# /files/etc/hosts/1/alias[1] 'localhost4'
+set /files/etc/hosts/seq::*[ipaddr='127.0.0.1' ]/alias[.='localhost4' ] 'localhost4'
+# /files/etc/hosts/1/alias[2] 'localhost.localdomain'
+set /files/etc/hosts/seq::*[ipaddr='127.0.0.1' ]/alias[.='localhost.localdomain'] 'localhost.localdomain'
+# /files/etc/hosts/1/#comment 'ipv4'
+set /files/etc/hosts/seq::*[ipaddr='127.0.0.1' ]/#comment 'ipv4'
+
+# /files/etc/hosts/2
+# /files/etc/hosts/2/ipaddr '::1'
+set /files/etc/hosts/seq::*[ipaddr='::1' ]/ipaddr '::1'
+# /files/etc/hosts/2/canonical 'localhost'
+set /files/etc/hosts/seq::*[ipaddr='::1' ]/canonical 'localhost'
+# /files/etc/hosts/2/alias 'localhost6'
+set /files/etc/hosts/seq::*[ipaddr='::1' ]/alias 'localhost6'
+# /files/etc/hosts/2/#comment 'ipv6'
+set /files/etc/hosts/seq::*[ipaddr='::1' ]/#comment 'ipv6'
+
+# /files/etc/hosts/#comment[2] '"double-quoted"'
+set /files/etc/hosts/#comment[.='"double-quoted"' ] '"double-quoted"'
+
+# /files/etc/hosts/#comment[3] "'single quoted'"
+set /files/etc/hosts/#comment[.="'single quoted'" ] "'single quoted'"
+
+# /files/etc/hosts/#comment[4] 'Comment\ttab\t\ttabx2'
+set /files/etc/hosts/#comment[.='Comment\ttab\t\ttabx2'] 'Comment\ttab\t\ttabx2'
+
+# /files/etc/hosts/#comment[5] 'Comment \\backslash \\\\double-backslash'
+set /files/etc/hosts/#comment[.='Comment \\backslash \\\\double-backslash'] 'Comment \\backslash \\\\double-backslash'
+
+# /files/etc/hosts/#comment[6] 'Repeated comment'
+set /files/etc/hosts/#comment[.='Repeated comment' ][1] 'Repeated comment'
+
+# /files/etc/hosts/#comment[7] 'First preference, unique first tail (/ipaddr)'
+set /files/etc/hosts/#comment[.='First preference, unique first tail (/ipaddr)'] 'First preference, unique first tail (/ipaddr)'
+
+# /files/etc/hosts/3
+# /files/etc/hosts/3/ipaddr '192.0.2.1'
+set /files/etc/hosts/seq::*[ipaddr='192.0.2.1' ]/ipaddr '192.0.2.1'
+# /files/etc/hosts/3/canonical 'example.com'
+set /files/etc/hosts/seq::*[ipaddr='192.0.2.1' ]/canonical 'example.com'
+# /files/etc/hosts/3/alias[1] 'www.example.com'
+set /files/etc/hosts/seq::*[ipaddr='192.0.2.1' ]/alias[.='www.example.com'] 'www.example.com'
+# /files/etc/hosts/3/alias[2] 'ftp.example.com'
+set /files/etc/hosts/seq::*[ipaddr='192.0.2.1' ]/alias[.='ftp.example.com'] 'ftp.example.com'
+
+# /files/etc/hosts/4
+# /files/etc/hosts/4/ipaddr '192.0.2.2'
+set /files/etc/hosts/seq::*[ipaddr='192.0.2.2' ]/ipaddr '192.0.2.2'
+# /files/etc/hosts/4/canonical 'example.com'
+set /files/etc/hosts/seq::*[ipaddr='192.0.2.2' ]/canonical 'example.com'
+# /files/etc/hosts/4/alias[1] 'www.example.com'
+set /files/etc/hosts/seq::*[ipaddr='192.0.2.2' ]/alias[.='www.example.com'] 'www.example.com'
+# /files/etc/hosts/4/alias[2] 'ftp.example.com'
+set /files/etc/hosts/seq::*[ipaddr='192.0.2.2' ]/alias[.='ftp.example.com'] 'ftp.example.com'
+
+# /files/etc/hosts/#comment[8] 'Second preference, unique tail+value for /alias[1]'
+set /files/etc/hosts/#comment[.='Second preference, unique tail+value for /alias[1]'] 'Second preference, unique tail+value for /alias[1]'
+
+# /files/etc/hosts/5
+# /files/etc/hosts/5/ipaddr '192.0.2.77'
+set /files/etc/hosts/seq::*[alias='find_this1']/ipaddr '192.0.2.77'
+# /files/etc/hosts/5/canonical 'second'
+set /files/etc/hosts/seq::*[alias='find_this1' or count(alias)=0]/canonical 'second'
+# /files/etc/hosts/5/alias[1] 'find_this1'
+set /files/etc/hosts/seq::*[alias='find_this1' or count(alias)=0]/alias[.='find_this1'] 'find_this1'
+# /files/etc/hosts/5/alias[2] 'alias77'
+set /files/etc/hosts/seq::*[alias='find_this1']/alias[.='alias77' ] 'alias77'
+# /files/etc/hosts/5/#comment 'add another tail (this comment)'
+set /files/etc/hosts/seq::*[alias='find_this1']/#comment 'add another tail (this comment)'
+
+# /files/etc/hosts/6
+# /files/etc/hosts/6/ipaddr '192.0.2.77'
+set /files/etc/hosts/seq::*[alias='find_this2']/ipaddr '192.0.2.77'
+# /files/etc/hosts/6/canonical 'second'
+set /files/etc/hosts/seq::*[alias='find_this2' or count(alias)=0]/canonical 'second'
+# /files/etc/hosts/6/alias 'find_this2'
+set /files/etc/hosts/seq::*[alias='find_this2' or count(alias)=0]/alias 'find_this2'
+
+# /files/etc/hosts/#comment[9] 'Third preference, unique (first-tail /ipaddr + tail+value /alias[1] )'
+set /files/etc/hosts/#comment[.='Third preference, unique (first-tail /ipaddr + tail+value /alias[1] )'] 'Third preference, unique (first-tail /ipaddr + tail+value /alias[1] )'
+
+# /files/etc/hosts/7
+# /files/etc/hosts/7/ipaddr '192.0.2.33'
+set /files/etc/hosts/seq::*[ipaddr='192.0.2.33' and alias='alias1']/ipaddr '192.0.2.33'
+# /files/etc/hosts/7/canonical 'third'
+set /files/etc/hosts/seq::*[ipaddr='192.0.2.33' and ( alias='alias1' or count(alias)=0 ) ]/canonical 'third'
+# /files/etc/hosts/7/alias 'alias1'
+set /files/etc/hosts/seq::*[ipaddr='192.0.2.33' and ( alias='alias1' or count(alias)=0 ) ]/alias 'alias1'
+
+# /files/etc/hosts/8
+# /files/etc/hosts/8/ipaddr '192.0.2.33'
+set /files/etc/hosts/seq::*[ipaddr='192.0.2.33' and alias='alias2']/ipaddr '192.0.2.33'
+# /files/etc/hosts/8/canonical 'third'
+set /files/etc/hosts/seq::*[ipaddr='192.0.2.33' and ( alias='alias2' or count(alias)=0 ) ]/canonical 'third'
+# /files/etc/hosts/8/alias 'alias2'
+set /files/etc/hosts/seq::*[ipaddr='192.0.2.33' and ( alias='alias2' or count(alias)=0 ) ]/alias 'alias2'
+
+# /files/etc/hosts/9
+# /files/etc/hosts/9/ipaddr '192.0.2.34'
+set /files/etc/hosts/seq::*[ipaddr='192.0.2.34' and alias='alias1']/ipaddr '192.0.2.34'
+# /files/etc/hosts/9/canonical 'third'
+set /files/etc/hosts/seq::*[ipaddr='192.0.2.34' and ( alias='alias1' or count(alias)=0 ) ]/canonical 'third'
+# /files/etc/hosts/9/alias 'alias1'
+set /files/etc/hosts/seq::*[ipaddr='192.0.2.34' and ( alias='alias1' or count(alias)=0 ) ]/alias 'alias1'
+
+# /files/etc/hosts/10
+# /files/etc/hosts/10/ipaddr '192.0.2.34'
+set /files/etc/hosts/seq::*[ipaddr='192.0.2.34' and alias='alias2']/ipaddr '192.0.2.34'
+# /files/etc/hosts/10/canonical 'third'
+set /files/etc/hosts/seq::*[ipaddr='192.0.2.34' and ( alias='alias2' or count(alias)=0 ) ]/canonical 'third'
+# /files/etc/hosts/10/alias 'alias2'
+set /files/etc/hosts/seq::*[ipaddr='192.0.2.34' and ( alias='alias2' or count(alias)=0 ) ]/alias 'alias2'
+
+# /files/etc/hosts/#comment[10] 'Third preference for first one, Fourth preference (fallback) for second and third'
+set /files/etc/hosts/#comment[.='Third preference for first one, Fourth preference (fallback) for second and third'] 'Third preference for first one, Fourth preference (fallback) for second and third'
+
+# /files/etc/hosts/11
+# /files/etc/hosts/11/ipaddr '192.0.2.99'
+set /files/etc/hosts/seq::*[ipaddr='192.0.2.99' and canonical='third']/ipaddr '192.0.2.99'
+# /files/etc/hosts/11/canonical 'third'
+set /files/etc/hosts/seq::*[ipaddr='192.0.2.99' and ( canonical='third' or count(canonical)=0 ) ]/canonical 'third'
+# /files/etc/hosts/11/alias 'abc'
+set /files/etc/hosts/seq::*[ipaddr='192.0.2.99' and canonical='third']/alias 'abc'
+
+# /files/etc/hosts/12
+# /files/etc/hosts/12/ipaddr '192.0.2.99'
+set /files/etc/hosts/seq::*[ipaddr='192.0.2.99'][2]/ipaddr '192.0.2.99'
+# /files/etc/hosts/12/canonical 'fourth'
+set /files/etc/hosts/seq::*[ipaddr='192.0.2.99'][2]/canonical 'fourth'
+# /files/etc/hosts/12/alias[1] 'abc'
+set /files/etc/hosts/seq::*[ipaddr='192.0.2.99'][2]/alias[.='abc'] 'abc'
+# /files/etc/hosts/12/alias[2] 'def'
+set /files/etc/hosts/seq::*[ipaddr='192.0.2.99'][2]/alias[.='def'] 'def'
+
+# /files/etc/hosts/13
+# /files/etc/hosts/13/ipaddr '192.0.2.99'
+set /files/etc/hosts/seq::*[ipaddr='192.0.2.99'][3]/ipaddr '192.0.2.99'
+# /files/etc/hosts/13/canonical 'fourth'
+set /files/etc/hosts/seq::*[ipaddr='192.0.2.99'][3]/canonical 'fourth'
+# /files/etc/hosts/13/alias[1] 'abc'
+set /files/etc/hosts/seq::*[ipaddr='192.0.2.99'][3]/alias[.='abc'] 'abc'
+# /files/etc/hosts/13/alias[2] 'def'
+set /files/etc/hosts/seq::*[ipaddr='192.0.2.99'][3]/alias[.='def'] 'def'
+
+# /files/etc/hosts/#comment[11] 'Repeated comment'
+set /files/etc/hosts/#comment[.='Repeated comment' ][2] 'Repeated comment'
--- /dev/null
+set /files/etc/hosts/#comment[.=~regexp('/e.*')] '/etc/hosts for testing specific functionality of augprint'
+set /files/etc/hosts/seq::*[ipaddr=~regexp('12.*')]/ipaddr '127.0.0.1'
+set /files/etc/hosts/seq::*[ipaddr=~regexp('12.*')]/canonical 'localhost'
+set /files/etc/hosts/seq::*[ipaddr=~regexp('12.*')]/alias[.=~regexp('localhost4')] 'localhost4'
+set /files/etc/hosts/seq::*[ipaddr=~regexp('12.*')]/alias[.=~regexp('localhost\\..*')] 'localhost.localdomain'
+set /files/etc/hosts/seq::*[ipaddr=~regexp('12.*')]/#comment 'ipv4'
+set /files/etc/hosts/seq::*[ipaddr=~regexp('::1')]/ipaddr '::1'
+set /files/etc/hosts/seq::*[ipaddr=~regexp('::1')]/canonical 'localhost'
+set /files/etc/hosts/seq::*[ipaddr=~regexp('::1')]/alias 'localhost6'
+set /files/etc/hosts/seq::*[ipaddr=~regexp('::1')]/#comment 'ipv6'
+set /files/etc/hosts/#comment[.=~regexp('"d.*')] '"double-quoted"'
+set /files/etc/hosts/#comment[.=~regexp("'s.*")] "'single quoted'"
+set /files/etc/hosts/#comment[.=~regexp('Comment\tt.*')] 'Comment\ttab\t\ttabx2'
+set /files/etc/hosts/#comment[.=~regexp('Comment .*')] 'Comment \\backslash \\\\double-backslash'
+set /files/etc/hosts/#comment[.=~regexp('Re.*')][1] 'Repeated comment'
+set /files/etc/hosts/#comment[.=~regexp('Fi.*')] 'First preference, unique first tail (/ipaddr)'
+set /files/etc/hosts/seq::*[ipaddr=~regexp('192\\.0\\.2\\.1')]/ipaddr '192.0.2.1'
+set /files/etc/hosts/seq::*[ipaddr=~regexp('192\\.0\\.2\\.1')]/canonical 'example.com'
+set /files/etc/hosts/seq::*[ipaddr=~regexp('192\\.0\\.2\\.1')]/alias[.=~regexp('ww.*')] 'www.example.com'
+set /files/etc/hosts/seq::*[ipaddr=~regexp('192\\.0\\.2\\.1')]/alias[.=~regexp('ft.*')] 'ftp.example.com'
+set /files/etc/hosts/seq::*[ipaddr=~regexp('192\\.0\\.2\\.2')]/ipaddr '192.0.2.2'
+set /files/etc/hosts/seq::*[ipaddr=~regexp('192\\.0\\.2\\.2')]/canonical 'example.com'
+set /files/etc/hosts/seq::*[ipaddr=~regexp('192\\.0\\.2\\.2')]/alias[.=~regexp('ww.*')] 'www.example.com'
+set /files/etc/hosts/seq::*[ipaddr=~regexp('192\\.0\\.2\\.2')]/alias[.=~regexp('ft.*')] 'ftp.example.com'
+set /files/etc/hosts/#comment[.=~regexp('Se.*')] 'Second preference, unique tail+value for /alias[1]'
+set /files/etc/hosts/seq::*[alias=~regexp('find_this1')]/ipaddr '192.0.2.77'
+set /files/etc/hosts/seq::*[alias=~regexp('find_this1') or count(alias)=0]/canonical 'second'
+set /files/etc/hosts/seq::*[alias=~regexp('find_this1') or count(alias)=0]/alias[.=~regexp('fi.*')] 'find_this1'
+set /files/etc/hosts/seq::*[alias=~regexp('find_this1')]/alias[.=~regexp('al.*')] 'alias77'
+set /files/etc/hosts/seq::*[alias=~regexp('find_this1')]/#comment 'add another tail (this comment)'
+set /files/etc/hosts/seq::*[alias=~regexp('find_this2')]/ipaddr '192.0.2.77'
+set /files/etc/hosts/seq::*[alias=~regexp('find_this2') or count(alias)=0]/canonical 'second'
+set /files/etc/hosts/seq::*[alias=~regexp('find_this2') or count(alias)=0]/alias 'find_this2'
+set /files/etc/hosts/#comment[.=~regexp('Third preference,.*')] 'Third preference, unique (first-tail /ipaddr + tail+value /alias[1] )'
+set /files/etc/hosts/seq::*[ipaddr=~regexp('192\\.0\\.2\\.33') and alias=~regexp('alias1')]/ipaddr '192.0.2.33'
+set /files/etc/hosts/seq::*[ipaddr=~regexp('192\\.0\\.2\\.33') and ( alias=~regexp('alias1') or count(alias)=0 ) ]/canonical 'third'
+set /files/etc/hosts/seq::*[ipaddr=~regexp('192\\.0\\.2\\.33') and ( alias=~regexp('alias1') or count(alias)=0 ) ]/alias 'alias1'
+set /files/etc/hosts/seq::*[ipaddr=~regexp('192\\.0\\.2\\.33') and alias=~regexp('alias2')]/ipaddr '192.0.2.33'
+set /files/etc/hosts/seq::*[ipaddr=~regexp('192\\.0\\.2\\.33') and ( alias=~regexp('alias2') or count(alias)=0 ) ]/canonical 'third'
+set /files/etc/hosts/seq::*[ipaddr=~regexp('192\\.0\\.2\\.33') and ( alias=~regexp('alias2') or count(alias)=0 ) ]/alias 'alias2'
+set /files/etc/hosts/seq::*[ipaddr=~regexp('192\\.0\\.2\\.34') and alias=~regexp('alias1')]/ipaddr '192.0.2.34'
+set /files/etc/hosts/seq::*[ipaddr=~regexp('192\\.0\\.2\\.34') and ( alias=~regexp('alias1') or count(alias)=0 ) ]/canonical 'third'
+set /files/etc/hosts/seq::*[ipaddr=~regexp('192\\.0\\.2\\.34') and ( alias=~regexp('alias1') or count(alias)=0 ) ]/alias 'alias1'
+set /files/etc/hosts/seq::*[ipaddr=~regexp('192\\.0\\.2\\.34') and alias=~regexp('alias2')]/ipaddr '192.0.2.34'
+set /files/etc/hosts/seq::*[ipaddr=~regexp('192\\.0\\.2\\.34') and ( alias=~regexp('alias2') or count(alias)=0 ) ]/canonical 'third'
+set /files/etc/hosts/seq::*[ipaddr=~regexp('192\\.0\\.2\\.34') and ( alias=~regexp('alias2') or count(alias)=0 ) ]/alias 'alias2'
+set /files/etc/hosts/#comment[.=~regexp('Third preference .*')] 'Third preference for first one, Fourth preference (fallback) for second and third'
+set /files/etc/hosts/seq::*[ipaddr=~regexp('192\\.0\\.2\\.99') and canonical=~regexp('th.*')]/ipaddr '192.0.2.99'
+set /files/etc/hosts/seq::*[ipaddr=~regexp('192\\.0\\.2\\.99') and ( canonical=~regexp('th.*') or count(canonical)=0 ) ]/canonical 'third'
+set /files/etc/hosts/seq::*[ipaddr=~regexp('192\\.0\\.2\\.99') and canonical=~regexp('th.*')]/alias 'abc'
+set /files/etc/hosts/seq::*[ipaddr=~regexp('192\\.0\\.2\\.99')][2]/ipaddr '192.0.2.99'
+set /files/etc/hosts/seq::*[ipaddr=~regexp('192\\.0\\.2\\.99')][2]/canonical 'fourth'
+set /files/etc/hosts/seq::*[ipaddr=~regexp('192\\.0\\.2\\.99')][2]/alias[.=~regexp('abc')] 'abc'
+set /files/etc/hosts/seq::*[ipaddr=~regexp('192\\.0\\.2\\.99')][2]/alias[.=~regexp('def')] 'def'
+set /files/etc/hosts/seq::*[ipaddr=~regexp('192\\.0\\.2\\.99')][3]/ipaddr '192.0.2.99'
+set /files/etc/hosts/seq::*[ipaddr=~regexp('192\\.0\\.2\\.99')][3]/canonical 'fourth'
+set /files/etc/hosts/seq::*[ipaddr=~regexp('192\\.0\\.2\\.99')][3]/alias[.=~regexp('abc')] 'abc'
+set /files/etc/hosts/seq::*[ipaddr=~regexp('192\\.0\\.2\\.99')][3]/alias[.=~regexp('def')] 'def'
+set /files/etc/hosts/#comment[.=~regexp('Re.*')][2] 'Repeated comment'
--- /dev/null
+# /etc/hosts for testing specific functionality of augprint
+
+127.0.0.1 localhost localhost4 localhost.localdomain # ipv4
+::1 localhost localhost6 # ipv6
+
+# "double-quoted"
+# 'single quoted'
+# Comment tab tabx2
+# Comment \backslash \\double-backslash
+# Repeated comment
+
+# First preference, unique first tail (/ipaddr)
+192.0.2.1 example.com www.example.com ftp.example.com
+192.0.2.2 example.com www.example.com ftp.example.com
+
+# Second preference, unique tail+value for /alias[1]
+192.0.2.77 second find_this1 alias77 # add another tail (this comment)
+192.0.2.77 second find_this2
+
+# Third preference, unique (first-tail /ipaddr + tail+value /alias[1] )
+192.0.2.33 third alias1
+192.0.2.33 third alias2
+192.0.2.34 third alias1
+192.0.2.34 third alias2
+
+# Third preference for first one, Fourth preference (fallback) for second and third
+192.0.2.99 third abc
+192.0.2.99 fourth abc def
+192.0.2.99 fourth abc def
+
+# Repeated comment
--- /dev/null
+# typical pam.d config file - a real mixed bag for augprint (copied from package pam-1.5.2 for Fedora 35)
+
+auth required pam_env.so
+auth required pam_faildelay.so delay=2000000
+auth sufficient pam_fprintd.so
+auth [default=1 ignore=ignore success=ok] pam_succeed_if.so uid >= 1000 quiet
+auth [default=1 ignore=ignore success=ok] pam_localuser.so
+auth sufficient pam_unix.so nullok try_first_pass
+auth requisite pam_succeed_if.so uid >= 1000 quiet_success
+auth sufficient pam_sss.so forward_pass
+auth required pam_deny.so
+
+account required pam_unix.so
+account sufficient pam_localuser.so
+account sufficient pam_succeed_if.so uid < 1000 quiet
+account [default=bad success=ok user_unknown=ignore] pam_sss.so
+account required pam_permit.so
+
+password requisite pam_pwquality.so try_first_pass local_users_only
+password sufficient pam_unix.so sha512 shadow nullok try_first_pass use_authtok
+password sufficient pam_sss.so use_authtok
+password required pam_deny.so
+
+session optional pam_keyinit.so revoke
+session required pam_limits.so
+-session optional pam_systemd.so
+session [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
+session required pam_unix.so
+session optional pam_sss.so
--- /dev/null
+# Typical squid.conf settings
+
+cache_mgr root@example.com
+
+acl localnet src 0.0.0.1-0.255.255.255 # RFC 1122 "this" network (LAN)
+acl localnet src 10.0.0.0/8 # RFC 1918 local private network (LAN)
+acl localnet src 100.64.0.0/10 # RFC 6598 shared address space (CGN)
+acl localnet src 169.254.0.0/16 # RFC 3927 link-local (directly plugged) machines
+acl localnet src 172.16.0.0/12 # RFC 1918 local private network (LAN)
+acl localnet src 192.168.0.0/16 # RFC 1918 local private network (LAN)
+acl localnet src fc00::/7 # RFC 4193 local private network range
+acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines
+
+acl SSL_ports port 443
+acl Safe_ports port 80 # http
+acl Safe_ports port 21 # ftp
+acl Safe_ports port 443 # https
+acl Safe_ports port 70 # gopher
+acl Safe_ports port 210 # wais
+acl Safe_ports port 1025-65535 # unregistered ports
+acl Safe_ports port 280 # http-mgmt
+acl Safe_ports port 488 # gss-http
+acl Safe_ports port 591 # filemaker
+acl Safe_ports port 777 # multiling http
+acl CONNECT method CONNECT
+
+# Deny requests to certain unsafe ports
+http_access deny !Safe_ports
+
+# Deny CONNECT to other than secure SSL ports
+http_access deny CONNECT !SSL_ports
+
+# Only allow cachemgr access from localhost
+http_access allow localhost manager
+http_access deny manager
+
+# Disallow proxied access to localhost ports (eg cups)
+http_access deny to_localhost # built-in acl as of Squid 3.2
+http_access allow localhost # built-in acl as of Squid 3.2
+http_access allow localnet
+
+# And finally deny all other access to this proxy
+http_access deny all
+
+# Squid normally listens to port 3128
+http_port 3128
+
+cache_dir ufs /var/spool/squid 100 16 256
+
+refresh_pattern ^ftp: 1440 20% 10080
+refresh_pattern ^gopher: 1440 0% 1440
+refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
+refresh_pattern . 0 20% 4320
+
--- /dev/null
+#!/bin/sh
+
+# Test for BZ 566844. Make sure we don't spew nonsense when the input
+# contains empty lines
+out=$(augtool --nostdinc 2>&1 <<EOF
+
+quit
+EOF
+)
+result=$?
+
+if [ $result -ne 0 ]; then
+ echo "augtool failed"
+ exit 1
+fi
+
+if [ -n "$out" ]; then
+ echo "augtool produced output:"
+ echo "$out"
+ exit 1
+fi
--- /dev/null
+#!/bin/sh
+
+# Make sure changing the value of root works
+
+exp="/ = root"
+
+act=$(augtool --noautoload 2>&1 <<EOF
+set / root
+get /
+quit
+EOF
+)
+result=$?
+
+if [ $result -ne 0 ]; then
+ echo "augtool failed"
+ exit 1
+fi
+
+if [ "$act" != "$exp" ]; then
+ echo "augtool produced unexpected output:"
+ echo "Expected:"
+ echo "$exp"
+ echo "Actual:"
+ echo "$act"
+ exit 1
+fi
--- /dev/null
+#!/bin/bash
+
+TOP_DIR=$(cd $(dirname $0)/.. && pwd)
+TOP_BUILDDIR="$abs_top_builddir"
+[ -z "$TOP_BUILDDIR" ] && TOP_BUILDDIR="$TOP_DIR"
+TOP_SRCDIR="$abs_top_srcdir"
+[ -z "$TOP_SRCDIR" ] && TOP_SRCDIR="$TOP_DIR"
+
+TEST_DIR="$TOP_SRCDIR/tests"
+
+export PATH="$TOP_BUILDDIR/src:${PATH}"
+
+export AUGEAS_ROOT="$TOP_BUILDDIR/build/test-augtool"
+export AUGEAS_LENS_LIB="$TOP_SRCDIR/lenses"
+
+GSED=sed
+type gsed >/dev/null 2>&1 && GSED=gsed
+
+fail() {
+ [ -z "$failed" ] && echo FAIL
+ failed=yes
+ echo "$@"
+ result=1
+}
+
+# Without args, run all tests
+if [ $# -eq 0 ] ; then
+ args="$TEST_DIR/test-augtool/*.sh"
+else
+ args="$@"
+fi
+
+result=0
+
+for tst in $args; do
+ unset failed
+
+ printf "%-40s ... " $(basename $tst .sh)
+
+ # Read in test variables. The variables we understand are
+ # echo - echo augtool commands if set to some value
+ # commands - the commands to send to augtool
+ # lens - the lens to use
+ # file - the file that should be changed
+ # diff - the expected diff
+ # refresh - print diff in a form suitable for cut and paste
+ # into the test file if set to some value
+
+ unset echo commands lens file diff refresh
+ . $tst
+
+ # Setup test root from root/
+ [ -d "$AUGEAS_ROOT" ] && rm -rf "$AUGEAS_ROOT"
+ dest_dir="$AUGEAS_ROOT"$(dirname $file)
+ mkdir -p $dest_dir
+ cp -p "$TEST_DIR"/root/$file $dest_dir
+
+ [ -n "$echo" ] && echo="-e"
+
+ commands="set /augeas/load/Test/lens $lens
+set /augeas/load/Test/incl $file
+load
+$commands
+save
+quit"
+ echo "$commands" | augtool $echo --nostdinc --noautoload -n || fail "augtool failed"
+
+ abs_file="$AUGEAS_ROOT$file"
+ if [ ! -f "${abs_file}.augnew" ]; then
+ fail "Expected file $file.augnew"
+ else
+ safe_augeas_root=$(printf "%s\n" "$AUGEAS_ROOT" | sed 's/[][\.*^$(){}?+|/]/\\&/g')
+ act=$(diff -u "$abs_file" "${abs_file}.augnew" \
+ | $GSED -r -e "s/^ $//;s!^(---|\+\+\+) ${safe_augeas_root}($file(\.augnew)?)(.*)\$!\1 \2!;s/\\t/\\\\t/g")
+
+ if [ "$act" != "$diff" ] ; then
+ fail "$act"
+ fi
+ fi
+ other_files=$(find "$AUGEAS_ROOT" -name \*.augnew | grep -v "$abs_file.augnew")
+ [ -n "$other_files" ] && fail "Unexpected file(s) $other_files"
+ [ -z "$failed" ] && echo OK
+done
+
+exit $result
--- /dev/null
+commands="
+ins value after /files/etc/aliases/1/value[last()]
+set /files/etc/aliases/1/value[last()] barbar
+"
+
+lens=Aliases.lns
+file="/etc/aliases"
+
+diff='--- /etc/aliases
++++ /etc/aliases.augnew
+@@ -8,7 +8,7 @@
+ #
+
+ # Basic system aliases -- these MUST be present.
+-mailer-daemon:\tpostmaster
++mailer-daemon:\tpostmaster, barbar
+ postmaster:\troot
+
+ # General redirections for pseudo accounts.'
--- /dev/null
+commands="
+set /files/etc/aliases/3/value[2] ruth
+"
+
+lens=Aliases.lns
+file="/etc/aliases"
+
+diff='--- /etc/aliases
++++ /etc/aliases.augnew
+@@ -12,7 +12,7 @@
+ postmaster:\troot
+
+ # General redirections for pseudo accounts.
+-bin:\t\troot, adm
++bin:\t\troot, ruth
+ daemon:\t\troot
+ adm:\t\troot'
--- /dev/null
+commands="
+set /files/etc/grub.conf/default 2
+"
+
+lens=Grub.lns
+file="/etc/grub.conf"
+
+diff='--- /etc/grub.conf
++++ /etc/grub.conf.augnew
+@@ -7,7 +7,7 @@
+ # kernel /vmlinuz-version ro root=/dev/vg00/lv00
+ # initrd /initrd-version.img
+ #boot=/dev/sda
+-default=0
++default=2
+ timeout=5
+ splashimage=(hd0,0)/grub/splash.xpm.gz
+ hiddenmenu'
--- /dev/null
+commands="
+rm /files/etc/grub.conf/title[2]
+"
+
+lens=Grub.lns
+file="/etc/grub.conf"
+
+diff='--- /etc/grub.conf
++++ /etc/grub.conf.augnew
+@@ -15,10 +15,6 @@
+ \troot (hd0,0)
+ \tkernel /vmlinuz-2.6.24.4-64.fc8 ro root=/dev/vg00/lv00
+ \tinitrd /initrd-2.6.24.4-64.fc8.img
+-title Fedora (2.6.24.3-50.fc8)
+-\troot (hd0,0)
+-\tkernel /vmlinuz-2.6.24.3-50.fc8 ro root=/dev/vg00/lv00
+-\tinitrd /initrd-2.6.24.3-50.fc8.img
+ title Fedora (2.6.21.7-3.fc8xen)
+ \troot (hd0,0)
+ \tkernel /xen.gz-2.6.21.7-3.fc8
+@@ -28,4 +24,4 @@
+ \troot (hd0,0)
+ \tkernel /vmlinuz-2.6.24.3-34.fc8 ro root=/dev/vg00/lv00
+ \tinitrd /initrd-2.6.24.3-34.fc8.img
+- savedefault
++\tsavedefault'
--- /dev/null
+# add console values to multiple kernel entries
+commands="
+setm /files/etc/grub.conf/*/kernel[. =~ regexp('.*2.6.24.*')] console[last()+1] tty0
+setm /files/etc/grub.conf/*/kernel[. =~ regexp('.*2.6.24.*')] console[last()+1] ttyS0,9600n8
+"
+
+lens=Grub.lns
+file="/etc/grub.conf"
+
+diff='--- /etc/grub.conf
++++ /etc/grub.conf.augnew
+@@ -13,11 +13,11 @@
+ hiddenmenu
+ title Fedora (2.6.24.4-64.fc8)
+ \troot (hd0,0)
+-\tkernel /vmlinuz-2.6.24.4-64.fc8 ro root=/dev/vg00/lv00
++\tkernel /vmlinuz-2.6.24.4-64.fc8 ro root=/dev/vg00/lv00 console=tty0 console=ttyS0,9600n8
+ \tinitrd /initrd-2.6.24.4-64.fc8.img
+ title Fedora (2.6.24.3-50.fc8)
+ \troot (hd0,0)
+-\tkernel /vmlinuz-2.6.24.3-50.fc8 ro root=/dev/vg00/lv00
++\tkernel /vmlinuz-2.6.24.3-50.fc8 ro root=/dev/vg00/lv00 console=tty0 console=ttyS0,9600n8
+ \tinitrd /initrd-2.6.24.3-50.fc8.img
+ title Fedora (2.6.21.7-3.fc8xen)
+ \troot (hd0,0)
+@@ -26,6 +26,6 @@
+ \tmodule /initrd-2.6.21.7-3.fc8xen.img
+ title Fedora (2.6.24.3-34.fc8)
+ \troot (hd0,0)
+-\tkernel /vmlinuz-2.6.24.3-34.fc8 ro root=/dev/vg00/lv00
++\tkernel /vmlinuz-2.6.24.3-34.fc8 ro root=/dev/vg00/lv00 console=tty0 console=ttyS0,9600n8
+ \tinitrd /initrd-2.6.24.3-34.fc8.img
+ savedefault'
--- /dev/null
+commands="
+set /files/etc/yum.conf/main/debuglevel 99
+"
+
+lens=Yum.lns
+file="/etc/yum.conf"
+
+diff='--- /etc/yum.conf
++++ /etc/yum.conf.augnew
+@@ -1,7 +1,7 @@
+ [main]
+ cachedir=/var/cache/yum
+ keepcache=0
+-debuglevel=2
++debuglevel=99
+ logfile=/var/log/yum.log
+ exactarch=1
+ obsoletes=1'
--- /dev/null
+commands="
+set /files/etc/yum.conf/main/newparam newval
+"
+
+lens=Yum.lns
+file="/etc/yum.conf"
+
+diff='--- /etc/yum.conf
++++ /etc/yum.conf.augnew
+@@ -13,3 +13,4 @@
+
+ # PUT YOUR REPOS HERE OR IN separate files named file.repo
+ # in /etc/yum.repos.d
++newparam=newval'
--- /dev/null
+commands="
+set /files/etc/yum.conf/other/newparam newval
+"
+
+lens=Yum.lns
+file="/etc/yum.conf"
+
+diff='--- /etc/yum.conf
++++ /etc/yum.conf.augnew
+@@ -13,3 +13,5 @@
+
+ # PUT YOUR REPOS HERE OR IN separate files named file.repo
+ # in /etc/yum.repos.d
++[other]
++newparam=newval'
--- /dev/null
+commands="
+rm /files/etc/yum.conf/main/keepcache
+"
+
+lens=Yum.lns
+file="/etc/yum.conf"
+
+diff='--- /etc/yum.conf
++++ /etc/yum.conf.augnew
+@@ -1,6 +1,5 @@
+ [main]
+ cachedir=/var/cache/yum
+-keepcache=0
+ debuglevel=2
+ logfile=/var/log/yum.log
+ exactarch=1'
--- /dev/null
+#!/bin/bash
+
+commands="
+set /files/var/tmp/test-quoted-strings.txt/01 '['
+set /files/var/tmp/test-quoted-strings.txt/02 ']'
+set /files/var/tmp/test-quoted-strings.txt/03 ']['
+set /files/var/tmp/test-quoted-strings.txt/04 '[]'
+set /files/var/tmp/test-quoted-strings.txt/05 '[]['
+set /files/var/tmp/test-quoted-strings.txt/06 '[]]'
+set /files/var/tmp/test-quoted-strings.txt/07 '('
+set /files/var/tmp/test-quoted-strings.txt/08 ')'
+set /files/var/tmp/test-quoted-strings.txt/09 '"\""'
+insert 010 after /files/var/tmp/test-quoted-strings.txt/seq::*[.=~regexp('.*\]\]\[ \)\( .*')]
+set /files/var/tmp/test-quoted-strings.txt/010 'found ]][ )('
+insert 011 after /files/var/tmp/test-quoted-strings.txt/seq::*[.=~regexp('.*\\\\..*')]
+set /files/var/tmp/test-quoted-strings.txt/011 'found .'
+"
+lens=Simplelines.lns
+file="/var/tmp/test-quoted-strings.txt"
+
+diff='--- /var/tmp/test-quoted-strings.txt
++++ /var/tmp/test-quoted-strings.txt.augnew
+@@ -1 +1,12 @@
+ find this: ]][ )( . * $ / \
++found .
++found ]][ )(
++[
++]
++][
++[]
++[][
++[]]
++(
++)
++"'
--- /dev/null
+commands='
+ins 01 after /files/etc/pam.d/newrole/*[last()]
+defvar new /files/etc/pam.d/newrole/*[last()]
+set $new/type auth
+set $new/control include
+set $new/module system-auth
+'
+
+lens=Pam.lns
+file="/etc/pam.d/newrole"
+
+diff='--- /etc/pam.d/newrole
++++ /etc/pam.d/newrole.augnew
+@@ -3,3 +3,4 @@
+ account include\tsystem-auth
+ password include\tsystem-auth
+ session required\tpam_namespace.so unmnt_remnt no_unmount_on_close
++auth include system-auth'
--- /dev/null
+commands='
+ins 10000 before /files/etc/hosts/2
+set /files/etc/hosts/10000/ipaddr 192.168.0.1
+set /files/etc/hosts/10000/canonical pigiron.example.com
+'
+
+lens=Hosts.lns
+file="/etc/hosts"
+
+diff='--- /etc/hosts
++++ /etc/hosts.augnew
+@@ -3,4 +3,5 @@
+ 127.0.0.1\tlocalhost.localdomain\tlocalhost galia.watzmann.net galia
+ #172.31.122.254 granny.watzmann.net granny puppet
+ #172.31.122.1 galia.watzmann.net galia
++192.168.0.1\tpigiron.example.com
+ 172.31.122.14 orange.watzmann.net orange'
--- /dev/null
+# Delete the first hosts entry and then add it back into the tree as
+# a new entry.
+#
+# Since we add the same entry, but under a new key (01 instead of 1), all
+# the separators are set to their defaults. That is why the separator
+# between localhost.localdomain and localhost changes from a '\t' to a ' '
+# If we used the old key '1' to insert back in, we'd have an exact move of
+# the line. That is checked by rec-hosts-reorder
+
+commands="
+rm /files/etc/hosts/1
+set /files/etc/hosts/01/ipaddr 127.0.0.1
+set /files/etc/hosts/01/canonical localhost.localdomain
+set /files/etc/hosts/01/alias[1] localhost
+set /files/etc/hosts/01/alias[2] galia.watzmann.net
+set /files/etc/hosts/01/alias[3] galia
+"
+
+lens=Hosts.lns
+file="/etc/hosts"
+
+diff='--- /etc/hosts
++++ /etc/hosts.augnew
+@@ -1,6 +1,6 @@
+ # Do not remove the following line, or various programs
+ # that require network functionality will fail.
+-127.0.0.1\tlocalhost.localdomain\tlocalhost galia.watzmann.net galia
+ #172.31.122.254 granny.watzmann.net granny puppet
+ #172.31.122.1 galia.watzmann.net galia
+ 172.31.122.14 orange.watzmann.net orange
++127.0.0.1\tlocalhost.localdomain localhost galia.watzmann.net galia'
--- /dev/null
+# Delete the first hosts entry and then add it back into the tree
+# Because of how nodes are ordered, this should be equivalent to
+# moving the corresponding line to the end of the file
+commands="
+rm /files/etc/hosts/1
+set /files/etc/hosts/1/ipaddr 127.0.0.1
+set /files/etc/hosts/1/canonical localhost.localdomain
+set /files/etc/hosts/1/alias[1] localhost
+set /files/etc/hosts/1/alias[2] galia.watzmann.net
+set /files/etc/hosts/1/alias[3] galia
+"
+
+lens=Hosts.lns
+file="/etc/hosts"
+
+diff='--- /etc/hosts
++++ /etc/hosts.augnew
+@@ -1,6 +1,6 @@
+ # Do not remove the following line, or various programs
+ # that require network functionality will fail.
+-127.0.0.1\tlocalhost.localdomain\tlocalhost galia.watzmann.net galia
+ #172.31.122.254 granny.watzmann.net granny puppet
+ #172.31.122.1 galia.watzmann.net galia
+ 172.31.122.14 orange.watzmann.net orange
++127.0.0.1\tlocalhost.localdomain\tlocalhost galia.watzmann.net galia'
--- /dev/null
+# This will leave /etc/hosts with nothing but comments and whitespace
+commands="
+rm /files/etc/hosts/1
+rm /files/etc/hosts/2
+"
+
+lens=Hosts.lns
+file="/etc/hosts"
+
+diff='--- /etc/hosts
++++ /etc/hosts.augnew
+@@ -1,6 +1,4 @@
+ # Do not remove the following line, or various programs
+ # that require network functionality will fail.
+-127.0.0.1\tlocalhost.localdomain\tlocalhost galia.watzmann.net galia
+ #172.31.122.254 granny.watzmann.net granny puppet
+ #172.31.122.1 galia.watzmann.net galia
+-172.31.122.14 orange.watzmann.net orange'
--- /dev/null
+# Change the initdefault
+
+commands="
+set /files/etc/inittab/*[action = 'initdefault']/runlevels 3
+"
+
+lens=Inittab.lns
+file="/etc/inittab"
+
+diff='--- /etc/inittab
++++ /etc/inittab.augnew
+@@ -15,7 +15,7 @@
+ # 5 - X11
+ # 6 - reboot (Do NOT set initdefault to this)
+ #
+-id:5:initdefault:
++id:3:initdefault:
+
+ # System initialization.
+ si::sysinit:/etc/rc.d/rc.sysinit'
--- /dev/null
+commands="
+ins 10000 before /files/etc/pam.d/newrole/3
+set /files/etc/pam.d/newrole/10000/type session
+set /files/etc/pam.d/newrole/10000/control include
+set /files/etc/pam.d/newrole/10000/module system-auth
+"
+
+lens=Pam.lns
+file="/etc/pam.d/newrole"
+
+diff='--- /etc/pam.d/newrole
++++ /etc/pam.d/newrole.augnew
+@@ -1,5 +1,6 @@
+ #%PAM-1.0
+ auth include\tsystem-auth
+ account include\tsystem-auth
++session include system-auth
+ password include\tsystem-auth
+ session required\tpam_namespace.so unmnt_remnt no_unmount_on_close'
--- /dev/null
+commands="
+set /files/etc/pam.d/newrole/3/module other_module
+set /files/etc/pam.d/newrole/3/argument other_module_opts
+"
+
+lens=Pam.lns
+file="/etc/pam.d/newrole"
+
+diff='--- /etc/pam.d/newrole
++++ /etc/pam.d/newrole.augnew
+@@ -1,5 +1,5 @@
+ #%PAM-1.0
+ auth include\tsystem-auth
+ account include\tsystem-auth
+-password include\tsystem-auth
++password include\tother_module other_module_opts
+ session required\tpam_namespace.so unmnt_remnt no_unmount_on_close'
--- /dev/null
+commands="
+rm /files/etc/pam.d/newrole/4/argument
+"
+
+lens=Pam.lns
+file="/etc/pam.d/newrole"
+
+diff='--- /etc/pam.d/newrole
++++ /etc/pam.d/newrole.augnew
+@@ -2,4 +2,4 @@
+ auth include\tsystem-auth
+ account include\tsystem-auth
+ password include\tsystem-auth
+-session required\tpam_namespace.so unmnt_remnt no_unmount_on_close
++session required\tpam_namespace.so'
--- /dev/null
+commands="
+rm /files/etc/pam.d/newrole/1
+"
+
+lens=Pam.lns
+file="/etc/pam.d/newrole"
+
+diff='--- /etc/pam.d/newrole
++++ /etc/pam.d/newrole.augnew
+@@ -1,5 +1,4 @@
+ #%PAM-1.0
+-auth include\tsystem-auth
+ account include\tsystem-auth
+ password include\tsystem-auth
+ session required\tpam_namespace.so unmnt_remnt no_unmount_on_close'
--- /dev/null
+commands="
+rm /files/etc/pam.d/newrole/4/argument
+"
+
+lens=Pam.lns
+file="/etc/pam.d/newrole"
+
+diff='--- /etc/pam.d/newrole
++++ /etc/pam.d/newrole.augnew
+@@ -2,4 +2,4 @@
+ auth include\tsystem-auth
+ account include\tsystem-auth
+ password include\tsystem-auth
+-session required\tpam_namespace.so unmnt_remnt no_unmount_on_close
++session required\tpam_namespace.so'
--- /dev/null
+commands="
+set /files/etc/ssh/sshd_config/AcceptEnv[3]/01 FOO
+"
+
+lens=Sshd.lns
+file="/etc/ssh/sshd_config"
+
+diff='--- /etc/ssh/sshd_config
++++ /etc/ssh/sshd_config.augnew
+@@ -93,7 +93,7 @@
+ # Accept locale-related environment variables
+ AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
+ AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
+-AcceptEnv LC_IDENTIFICATION LC_ALL
++AcceptEnv LC_IDENTIFICATION LC_ALL FOO
+ #AllowTcpForwarding yes
+ #GatewayPorts no
+ #X11Forwarding no'
--- /dev/null
+# set all existing repos to enabled
+commands="
+setm /files/etc/yum.repos.d/fedora.repo/* enabled 1
+"
+
+lens=Yum.lns
+file="/etc/yum.repos.d/fedora.repo"
+
+diff='--- /etc/yum.repos.d/fedora.repo
++++ /etc/yum.repos.d/fedora.repo.augnew
+@@ -12,7 +12,7 @@
+ failovermethod=priority
+ #baseurl=http://download.fedora.redhat.com/pub/fedora/linux/releases/$releasever/Everything/$basearch/debug/
+ mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=fedora-debug-$releasever&arch=$basearch
+-enabled=0
++enabled=1
+ gpgcheck=1
+ gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora file:///etc/pki/rpm-gpg/RPM-GPG-KEY
+
+@@ -21,6 +21,6 @@
+ failovermethod=priority
+ #baseurl=http://download.fedora.redhat.com/pub/fedora/linux/releases/$releasever/Everything/source/SRPMS/
+ mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=fedora-source-$releasever&arch=$basearch
+-enabled=0
++enabled=1
+ gpgcheck=1
+ gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora file:///etc/pki/rpm-gpg/RPM-GPG-KEY'
--- /dev/null
+#!/bin/sh
+
+# Test for bug https://fedorahosted.org/augeas/ticket/1
+#
+# Check that putting an invalid node into the tree and saving
+# leads to failure, and therefore the original file being preserved
+
+root=$abs_top_builddir/build/test-bug-1
+file=$root/etc/logrotate.d/test
+
+rm -rf $root
+mkdir -p $(dirname $file)
+
+cat > $file <<EOF
+/myfile {
+ size=5M
+}
+EOF
+ln $file $file.orig
+
+augtool --nostdinc -I $abs_top_srcdir/lenses -r $root > /dev/null <<EOF
+ins invalid before /files/etc/logrotate.d/rpm/rule[1]
+save
+EOF
+
+result=$?
+
+if [ $result -eq 0 ] ; then
+ echo "augtool succeeded, but should have failed"
+ exit 1
+fi
+
+if [ ! $file -ef $file.orig ] ; then
+ echo "File was changed, but should not have been"
+ exit 1
+fi
--- /dev/null
+#!/bin/bash
+#
+# Test that augeas can create a file based on new paths added to the tree
+# The tree should retain the paths on a subsequent load operation
+#
+
+root=${abs_top_builddir:-.}/build/test-createfile
+lenses=${abs_top_srcdir:-.}/lenses
+
+sysctl_file=/etc/sysctl.d/newfile1.conf
+
+rm -rf $root
+mkdir -p $(dirname $root/$sysctl_file)
+
+expected_match="/files/etc/sysctl.d/newfile1.conf/net.ipv4.ip_nonlocal_bind = 1"
+expected_content='net.ipv4.ip_nonlocal_bind = 1'
+
+output=$(augtool --nostdinc -r $root -I $lenses <<EOF | grep "$expected_match"
+set /files/etc/sysctl.d/newfile1.conf/net.ipv4.ip_nonlocal_bind 1
+save
+match /files/etc/sysctl.d/newfile1.conf/net.ipv4.ip_nonlocal_bind
+EOF
+)
+
+if [[ ! -e $root/$sysctl_file ]]; then
+ echo "Failed to create file $sysctl_file under $root"
+ exit 1
+elif ! diff -bq $root/$sysctl_file <(echo "$expected_content") 1>/dev/null 2>&1; then
+  echo "Contents of $root/$sysctl_file are incorrect"
+ cat $root/$sysctl_file
+ echo '-- end of file --'
+ echo "Expected:"
+ echo "$expected_content"
+ echo '-- end of file --'
+ exit 1
+elif [[ -z "$output" ]]; then
+ echo "Missing /files/$sysctl_file in tree after save"
+ exit 1
+else
+ echo "Successfully created $sysctl_file with content:"
+ cat $root/$sysctl_file
+ echo '-- end of file --'
+fi
+
+sysctl_file=/etc/sysctl.d/newfile2.conf
+
+expected_match="/files/etc/sysctl.d/newfile2.conf/net.ipv4.ip_forward = 1"
+expected_content='net.ipv4.ip_forward = 1'
+
+output=$(augtool --nostdinc -r $root -I $lenses <<EOF | grep "$expected_match"
+set /files/etc/sysctl.d/newfile2.conf/net.ipv4.ip_forward 1
+save
+load
+match /files/etc/sysctl.d/newfile2.conf/*
+save
+EOF
+)
+
+if [[ ! -e $root/$sysctl_file ]]; then
+ echo "Failed to create file $sysctl_file under $root"
+ exit 1
+elif ! diff -bq $root/$sysctl_file <(echo "$expected_content") 1>/dev/null 2>&1; then
+  echo "Contents of $root/$sysctl_file are incorrect"
+ cat $root/$sysctl_file
+ echo '-- end of file --'
+ echo "Expected:"
+ echo "$expected_content"
+ echo '-- end of file --'
+ exit 1
+elif [[ -z "$output" ]]; then
+ echo "Missing /files/$sysctl_file in tree after save"
+ exit 1
+else
+ echo "Successfully created $sysctl_file with content:"
+ cat $root/$sysctl_file
+ echo '-- end of file --'
+fi
--- /dev/null
+#!/bin/sh
+
+# Check that saving preserves mode and ownership; for this test to make
+# much sense (if any) the user running it should have at least one
+# supplementary group
+
+run_augtool() {
+savemode=$1
+augtool --nostdinc -r $root -I $abs_top_srcdir/lenses <<EOF
+set /augeas/save $savemode
+set /files/etc/hosts/1/ipaddr 127.0.1.1
+set /files/boot/grub/menu.lst/default 3
+set /files/etc/inittab/1/action fake
+rm /files/etc/puppet/puppet.conf
+set /files/etc/yum.repos.d/fedora.repo/fedora/enabled 0
+save
+match /augeas/events/saved
+EOF
+}
+
+root=$abs_top_builddir/build/test-events-saved
+
+for savemode in overwrite backup newfile noop; do
+ rm -rf $root
+ mkdir -p $root
+ cp -pr $abs_top_srcdir/tests/root/* $root
+ chmod -R u+w $root
+
+ saved=$(run_augtool $savemode | grep ^/augeas/events/saved | cut -d ' ' -f 3 | sort | tr '\n' ' ')
+ exp="/files/boot/grub/menu.lst /files/etc/hosts /files/etc/inittab /files/etc/puppet/puppet.conf /files/etc/yum.repos.d/fedora.repo "
+
+ if [ -f "$root/etc/puppet/puppet.conf" -a "$savemode" != "noop" ]
+ then
+ echo "Save mode: $savemode"
+ echo "File /etc/puppet/puppet.conf should have been deleted"
+ exit 1
+ fi
+
+ if [ "$saved" != "$exp" ]
+ then
+ echo "Unexpected entries in /augeas/events/saved:"
+ echo "Expected: \"$exp\""
+ echo "Actual: \"$saved\""
+ echo "Save mode: $savemode"
+ exit 1
+ fi
+done
--- /dev/null
+#!/bin/sh
+
+# Test that changing the value of a node causes the function modified() to return true
+
+root=$abs_top_builddir/build/test-function-modified
+hosts=$root/etc/hosts
+
+rm -rf $root
+mkdir -p $(dirname $hosts)
+
+cat <<EOF > $hosts
+127.0.0.1 localhost
+::1 localhost6
+EOF
+
+touch -r $0 $hosts
+
+augtool --nostdinc -r $root -I $abs_top_srcdir/lenses <<EOF >$root/output.1
+set /files/etc/hosts/1/ipaddr 127.0.0.4
+match /files/etc/hosts//*[modified()]
+EOF
+
+[ "$(cat $root/output.1)" = '/files/etc/hosts/1/ipaddr = 127.0.0.4' ] || exit 1
+
+# Clearing a node's value should make modified() return true, even if none of the child nodes are modified
+augtool --nostdinc -r $root -I $abs_top_srcdir/lenses <<EOF >$root/output.2
+set /files/etc/hosts/1 x
+set /files/etc/hosts/1
+match /files/etc/hosts//*[modified()]
+EOF
+
+[ "$(cat $root/output.2)" = '/files/etc/hosts/1 = (none)' ] || exit 1
+
+# Test that changing a value does not mark the parent node as modified,
+# and that adding a new node also marks its (new) parent node.
+augtool --nostdinc -r $root -I $abs_top_srcdir/lenses <<EOF >$root/output.3
+set /files/etc/hosts/1/ipaddr 127.1.1.1
+set /files/etc/hosts/3/ipaddr 127.3.3.3
+match /files/etc/hosts//*[modified()]
+EOF
+
+diff -q $root/output.3 - <<EOF || exit 1
+/files/etc/hosts/3 = (none)
+/files/etc/hosts/1/ipaddr = 127.1.1.1
+/files/etc/hosts/3/ipaddr = 127.3.3.3
+EOF
+
--- /dev/null
+#!/bin/sh
+
+# Check that reading the files in tests/root/ with augtool does not lead to
+# any errors
+
+TOPDIR=$(cd $(dirname $0)/.. && pwd)
+[ -n "$abs_top_srcdir" ] || abs_top_srcdir=$TOPDIR
+
+export AUGEAS_LENS_LIB=$abs_top_srcdir/lenses
+export AUGEAS_ROOT=$abs_top_srcdir/tests/root
+
+errors=$(augtool --nostdinc match '/augeas//error/descendant-or-self::*')
+
+if [ "x$errors" != "x (no matches)" ] ; then
+  printf "match /augeas//error reported errors:\n%s\n" "$errors"
+ exit 1
+fi
--- /dev/null
+#!/bin/sh
+
+# Test that saving changes that don't really change the underlying file
+# leave the original file intact
+
+root=$abs_top_builddir/build/test-idempotent
+hosts=$root/etc/hosts
+
+rm -rf $root
+mkdir -p $(dirname $hosts)
+
+cat <<EOF > $hosts
+127.0.0.1 localhost
+EOF
+touch -r $0 $hosts
+cp -p $hosts $hosts.stamp
+
+augtool --nostdinc -r $root -I $abs_top_srcdir/lenses > /dev/null <<EOF
+set /files/etc/hosts/1/ipaddr 127.0.1.1
+set /files/etc/hosts/1/ipaddr 127.0.0.1
+save
+EOF
+
+[ $hosts -nt $hosts.stamp ] && exit 1
+
+augtool --nostdinc -r $root -I $abs_top_srcdir/lenses > /dev/null <<EOF
+set /files/etc/hosts/1/ipaddr 127.0.1.1
+save
+set /files/etc/hosts/1/ipaddr 127.0.0.1
+save
+EOF
+
+[ $hosts -nt $hosts.stamp ] || exit 1
+
+# Test that /path/seq::*[expr] can create a node, and is idempotent
+touch -r $0 $hosts
+
+# Add a new ip+host to $hosts using seq::*[expr]
+augtool --nostdinc -r $root -I $abs_top_srcdir/lenses > /dev/null <<EOF
+set /files/etc/hosts/seq::*[ipaddr='192.0.2.0']/ipaddr 192.0.2.0
+set /files/etc/hosts/seq::*[ipaddr='192.0.2.0']/canonical dns1
+save
+EOF
+
+[ $hosts -nt $hosts.stamp ] || exit 1
+[ "$(grep -E -c '192.0.2.0\s+dns1' $hosts)" = 1 ] || exit 1
+
+touch -r $0 $hosts
+
+# Test that seq::*[expr] is idempotent
+augtool --nostdinc -r $root -I $abs_top_srcdir/lenses > /dev/null <<EOF
+set /files/etc/hosts/seq::*[ipaddr='192.0.2.0']/ipaddr 192.0.2.0
+set /files/etc/hosts/seq::*[ipaddr='192.0.2.0']/canonical dns1
+save
+EOF
+
+[ "$?" != 0 -o $hosts -nt $hosts.stamp ] && exit 1
+
+[ "$(grep -E -c '192.0.2.0\s+dns1' $hosts)" = 1 ] || exit 1
--- /dev/null
+#!/bin/sh
+
+# Run test modules that make sure the interpreter fails/succeeds in
+# various hairy situations.
+#
+# If run with option '-v', error output from each augparse run is printed
+
+TOPDIR=$(cd $(dirname $0)/.. && pwd)
+DATADIR=${abs_top_srcdir-${TOPDIR}}/tests
+MODULES=${DATADIR}/modules
+AUGPARSE=${abs_top_builddir-${DATADIR}/..}/src/augparse
+
+set -e
+
+VERBOSE=n
+if [ "x$1" = "x-v" ]; then
+ VERBOSE=y
+fi
+
+run_tests_plain ()
+{
+ ret_succ=$1
+ ret_fail=$(( 1 - $ret_succ ))
+ action=$2
+ shift
+ shift
+
+ for g in $*; do
+ if [ ! -r "$g" ]; then
+ echo "Grammar file $g is not readable"
+ exit 19
+ fi
+ printf "$action %-30s ... " $(basename $g .aug)
+ set +e
+  errs=$(MALLOC_CHECK_=3 $AUGPARSE --nostdinc -I ${MODULES} $g 2>&1 > /dev/null)
+ ret=$?
+ set -e
+ if [ $ret -eq $ret_fail ]; then
+ echo FAIL
+ result=1
+ elif [ $ret -eq $ret_succ ]; then
+ echo PASS
+ else
+ echo ERROR
+ result=19
+ fi
+ if [ "$VERBOSE" = "y" ] ; then
+ echo $errs
+ fi
+ done
+}
+
+run_tests_valgrind ()
+{
+ ret_succ=$1
+ ret_fail=$(( 1 - $ret_succ ))
+ action=$2
+ shift
+ shift
+
+ for g in $*; do
+ if [ ! -r "$g" ]; then
+ echo "Grammar file $g is not readable"
+ exit 19
+ fi
+ echo "========================================================="
+ printf "$action %-30s \n" $(basename $g .aug)
+ set +e
+ $VALGRIND $AUGPARSE --nostdinc -I ${MODULES} $g
+ ret=$?
+ set -e
+ if [ $ret -eq $ret_fail ]; then
+ echo FAIL
+ result=1
+ elif [ $ret -eq $ret_succ ]; then
+ echo PASS
+ else
+ echo ERROR
+ result=19
+ fi
+ echo "---------------------------------------------------------"
+ done
+}
+
+if [ ! -x $AUGPARSE ] ; then
+ echo "Failed to find executable augparse"
+ echo " looked for $AUGPARSE"
+ exit 127
+fi
+
+echo "--------------------------------"
+echo "Running interpreter tests"
+echo
+
+result=0
+[ -z "$VALGRIND" ] && flavor=plain || flavor=valgrind
+
+run_tests_$flavor 1 reject $MODULES/fail_*.aug
+run_tests_$flavor 0 accept $MODULES/pass_*.aug
+
+exit $result
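The driver above reuses one loop for both the accept and reject suites by deriving the failing status from the expected one (`ret_fail=$(( 1 - ret_succ ))`). A minimal sketch of that classification, with a hypothetical `classify` helper standing in for the loop body:

```shell
# expected: exit status that counts as a pass (0 for accept, 1 for reject);
# the opposite of the pair {0,1} counts as FAIL, anything else as ERROR.
classify() {
    expected=$1; actual=$2
    failing=$(( 1 - expected ))
    if [ "$actual" -eq "$expected" ]; then echo PASS
    elif [ "$actual" -eq "$failing" ]; then echo FAIL
    else echo ERROR; fi
}
r1=$(classify 0 0)    # accept suite, module compiled cleanly
r2=$(classify 1 0)    # reject suite, module unexpectedly compiled
r3=$(classify 0 19)   # neither expected status, e.g. a crash
```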
--- /dev/null
+/*
+ * test-load.c: test the aug_load functionality
+ *
+ * Copyright (C) 2009-2016 David Lutterkort
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: David Lutterkort <lutter@redhat.com>
+ */
+
+#include <config.h>
+#include <sys/types.h>
+#include <unistd.h>
+
+#include "augeas.h"
+
+#include "cutest.h"
+#include "internal.h"
+
+static const char *abs_top_srcdir;
+static const char *abs_top_builddir;
+static char *root = NULL;
+static char *loadpath;
+
+#define die(msg) \
+ do { \
+ fprintf(stderr, "%d: Fatal error: %s\n", __LINE__, msg); \
+ exit(EXIT_FAILURE); \
+ } while(0)
+
+static char *setup_hosts(CuTest *tc) {
+ char *etcdir, *build_root;
+
+ if (asprintf(&build_root, "%s/build/test-load/%s",
+ abs_top_builddir, tc->name) < 0) {
+ CuFail(tc, "failed to set build_root");
+ }
+
+ if (asprintf(&etcdir, "%s/etc", build_root) < 0)
+ CuFail(tc, "asprintf etcdir failed");
+
+ run(tc, "test -d %s && chmod -R u+rw %s || :", build_root, build_root);
+ run(tc, "rm -rf %s", build_root);
+ run(tc, "mkdir -p %s", etcdir);
+ run(tc, "cp -pr %s/etc/hosts %s", root, etcdir);
+
+ free(etcdir);
+ return build_root;
+}
+
+static struct augeas *setup_hosts_aug(CuTest *tc, char *build_root) {
+ struct augeas *aug = NULL;
+ int r;
+
+ aug = aug_init(build_root, loadpath, AUG_NO_MODL_AUTOLOAD);
+ CuAssertPtrNotNull(tc, aug);
+
+ r = aug_set(aug, "/augeas/load/Hosts/lens", "Hosts.lns");
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_set(aug, "/augeas/load/Hosts/incl", "/etc/hosts");
+ CuAssertRetSuccess(tc, r);
+
+ free(build_root);
+ return aug;
+}
+
+static struct augeas *setup_writable_hosts(CuTest *tc) {
+ char *build_root = setup_hosts(tc);
+ run(tc, "chmod -R u+w %s", build_root);
+ return setup_hosts_aug(tc, build_root);
+}
+
+static struct augeas *setup_unreadable_hosts(CuTest *tc) {
+ char *build_root = setup_hosts(tc);
+ run(tc, "chmod -R a-r %s/etc/hosts", build_root);
+ return setup_hosts_aug(tc, build_root);
+}
+
+static void testDefault(CuTest *tc) {
+ augeas *aug = NULL;
+ int nmatches, r;
+
+ aug = aug_init(root, loadpath, AUG_NO_STDINC);
+ CuAssertPtrNotNull(tc, aug);
+
+ nmatches = aug_match(aug, "/augeas/load/*", NULL);
+ CuAssertPositive(tc, nmatches);
+
+ nmatches = aug_match(aug, "/files/etc/hosts/1", NULL);
+ CuAssertIntEquals(tc, 1, nmatches);
+
+ r = aug_rm(aug, "/augeas/load/*");
+ CuAssertTrue(tc, r >= 0);
+
+ r = aug_load(aug);
+ CuAssertRetSuccess(tc, r);
+
+ nmatches = aug_match(aug, "/files/*", NULL);
+ CuAssertZero(tc, nmatches);
+
+ aug_close(aug);
+}
+
+static void testNoLoad(CuTest *tc) {
+ augeas *aug = NULL;
+ int nmatches, r;
+
+ aug = aug_init(root, loadpath, AUG_NO_STDINC|AUG_NO_LOAD);
+ CuAssertPtrNotNull(tc, aug);
+
+ nmatches = aug_match(aug, "/augeas/load/*", NULL);
+ CuAssertPositive(tc, nmatches);
+
+ nmatches = aug_match(aug, "/files/*", NULL);
+ CuAssertZero(tc, nmatches);
+
+ r = aug_load(aug);
+ CuAssertRetSuccess(tc, r);
+
+ nmatches = aug_match(aug, "/files/*", NULL);
+ CuAssertPositive(tc, nmatches);
+
+ /* Now load /etc/hosts only */
+ r = aug_rm(aug, "/augeas/load/*[label() != 'Hosts']");
+ CuAssertTrue(tc, r >= 0);
+
+ r = aug_load(aug);
+ CuAssertRetSuccess(tc, r);
+
+ nmatches = aug_match(aug, "/files/etc/*", NULL);
+ CuAssertIntEquals(tc, 1, nmatches);
+
+ aug_close(aug);
+}
+
+static void testNoAutoload(CuTest *tc) {
+ augeas *aug = NULL;
+ int nmatches, r;
+
+ aug = aug_init(root, loadpath, AUG_NO_MODL_AUTOLOAD);
+ CuAssertPtrNotNull(tc, aug);
+
+ nmatches = aug_match(aug, "/augeas/load/*", NULL);
+ CuAssertZero(tc, nmatches);
+
+ r = aug_set(aug, "/augeas/load/Hosts/lens", "Hosts.lns");
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_set(aug, "/augeas/load/Hosts/incl", "/etc/hosts");
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_load(aug);
+ CuAssertRetSuccess(tc, r);
+
+ nmatches = aug_match(aug, "/files/etc/hosts/*[ipaddr]", NULL);
+ CuAssertIntEquals(tc, 2, nmatches);
+
+ aug_close(aug);
+}
+
+static void invalidLens(CuTest *tc, augeas *aug, const char *lens) {
+ int r, nmatches;
+
+ r = aug_set(aug, "/augeas/load/Junk/lens", lens);
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_set(aug, "/augeas/load/Junk/incl", "/dev/null");
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_load(aug);
+ CuAssertRetSuccess(tc, r);
+
+ nmatches = aug_match(aug, "/augeas/load/Junk/error", NULL);
+ CuAssertIntEquals(tc, 1, nmatches);
+}
+
+static void testInvalidLens(CuTest *tc) {
+ augeas *aug = NULL;
+ int r;
+
+ aug = aug_init(root, loadpath, AUG_NO_STDINC|AUG_NO_LOAD);
+ CuAssertPtrNotNull(tc, aug);
+
+ r = aug_rm(aug, "/augeas/load/*");
+ CuAssertTrue(tc, r >= 0);
+
+ invalidLens(tc, aug, NULL);
+ invalidLens(tc, aug, "@Nomodule");
+ invalidLens(tc, aug, "@Util");
+ invalidLens(tc, aug, "Nomodule.noelns");
+
+ aug_close(aug);
+}
+
+static void testLoadSave(CuTest *tc) {
+ augeas *aug = NULL;
+ int r;
+
+ aug = setup_writable_hosts(tc);
+
+ r = aug_load(aug);
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_save(aug);
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_match(aug, "/augeas/events/saved", NULL);
+ CuAssertIntEquals(tc, 0, r);
+
+ aug_close(aug);
+}
+
+/* Tests bug #79 */
+static void testLoadDefined(CuTest *tc) {
+ augeas *aug = NULL;
+ int r;
+
+ aug = aug_init(root, loadpath, AUG_NO_STDINC);
+ CuAssertPtrNotNull(tc, aug);
+
+ r = aug_defvar(aug, "v", "/files/etc/hosts/*/ipaddr");
+ CuAssertIntEquals(tc, 2, r);
+
+ r = aug_load(aug);
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_match(aug, "$v", NULL);
+ CuAssertIntEquals(tc, 2, r);
+
+ aug_close(aug);
+}
+
+static void testDefvarExpr(CuTest *tc) {
+ static const char *const expr = "/files/etc/hosts/*/ipaddr";
+ static const char *const expr2 = "/files/etc/hosts/*/canonical";
+
+ augeas *aug = NULL;
+ const char *v;
+ int r;
+
+ aug = aug_init(root, loadpath, AUG_NO_STDINC);
+ CuAssertPtrNotNull(tc, aug);
+
+ r = aug_defvar(aug, "v", expr);
+ CuAssertIntEquals(tc, 2, r);
+
+ r = aug_get(aug, "/augeas/variables/v", &v);
+ CuAssertIntEquals(tc, 1, r);
+ CuAssertStrEquals(tc, expr, v);
+
+ r = aug_defvar(aug, "v", expr2);
+ CuAssertIntEquals(tc, 2, r);
+
+ r = aug_get(aug, "/augeas/variables/v", &v);
+ CuAssertIntEquals(tc, 1, r);
+ CuAssertStrEquals(tc, expr2, v);
+
+ r = aug_defvar(aug, "v", NULL);
+ CuAssertIntEquals(tc, 0, r);
+
+ r = aug_get(aug, "/augeas/variables/v", &v);
+ CuAssertIntEquals(tc, 0, r);
+ CuAssertStrEquals(tc, NULL, v);
+
+ aug_close(aug);
+}
+
+static void testReloadChanged(CuTest *tc) {
+ FILE *fp;
+ augeas *aug = NULL;
+ const char *build_root, *mtime2, *s;
+ char *mtime1;
+ char *hosts = NULL;
+ int r;
+
+ aug = setup_writable_hosts(tc);
+
+ r = aug_load(aug);
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_get(aug, "/augeas/root", &build_root);
+ CuAssertIntEquals(tc, 1, r);
+
+ r = aug_get(aug, "/augeas/files/etc/hosts/mtime", &s);
+ CuAssertIntEquals(tc, 1, r);
+ mtime1 = strdup(s);
+ CuAssertPtrNotNull(tc, mtime1);
+
+ /* Tickle /etc/hosts behind augeas' back */
+ r = asprintf(&hosts, "%setc/hosts", build_root);
+ CuAssertPositive(tc, r);
+
+ fp = fopen(hosts, "a");
+ CuAssertPtrNotNull(tc, fp);
+
+ r = fprintf(fp, "192.168.0.1 other.example.com\n");
+ CuAssertTrue(tc, r > 0);
+
+ r = fclose(fp);
+ CuAssertRetSuccess(tc, r);
+
+ /* Unsaved changes are discarded */
+ r = aug_set(aug, "/files/etc/hosts/1/ipaddr", "127.0.0.2");
+ CuAssertRetSuccess(tc, r);
+
+ /* Check that we really did load the right file */
+ r = aug_load(aug);
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_get(aug, "/augeas/files/etc/hosts/mtime", &mtime2);
+ CuAssertIntEquals(tc, 1, r);
+ CuAssertStrNotEqual(tc, mtime1, mtime2);
+
+ r = aug_match(aug, "/files/etc/hosts/*[ipaddr = '192.168.0.1']", NULL);
+ CuAssertIntEquals(tc, 1, r);
+
+ r = aug_match(aug, "/files/etc/hosts/1[ipaddr = '127.0.0.1']", NULL);
+ CuAssertIntEquals(tc, 1, r);
+
+ free(mtime1);
+ free(hosts);
+ aug_close(aug);
+}
+
+static void testReloadDirty(CuTest *tc) {
+ augeas *aug = NULL;
+ int r;
+
+ aug = setup_writable_hosts(tc);
+
+ r = aug_load(aug);
+ CuAssertRetSuccess(tc, r);
+
+ /* Unsaved changes are discarded */
+ r = aug_set(aug, "/files/etc/hosts/1/ipaddr", "127.0.0.2");
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_load(aug);
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_match(aug, "/files/etc/hosts/1[ipaddr = '127.0.0.1']", NULL);
+ CuAssertIntEquals(tc, 1, r);
+
+ aug_close(aug);
+}
+
+static void testReloadDeleted(CuTest *tc) {
+ augeas *aug = NULL;
+ int r;
+
+ aug = setup_writable_hosts(tc);
+
+ r = aug_load(aug);
+ CuAssertRetSuccess(tc, r);
+
+ /* A missing file causes a reload */
+ r = aug_rm(aug, "/files/etc/hosts");
+ CuAssertPositive(tc, r);
+
+ r = aug_load(aug);
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_match(aug, "/files/etc/hosts/1[ipaddr = '127.0.0.1']", NULL);
+ CuAssertIntEquals(tc, 1, r);
+
+ /* A missing entry in a file causes a reload */
+ r = aug_rm(aug, "/files/etc/hosts/1/ipaddr");
+ CuAssertPositive(tc, r);
+
+ r = aug_load(aug);
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_match(aug, "/files/etc/hosts/1[ipaddr = '127.0.0.1']", NULL);
+ CuAssertIntEquals(tc, 1, r);
+
+ aug_close(aug);
+}
+
+static void testReloadDeletedMeta(CuTest *tc) {
+ augeas *aug = NULL;
+ int r;
+
+ aug = setup_writable_hosts(tc);
+
+ r = aug_load(aug);
+ CuAssertRetSuccess(tc, r);
+
+ /* Unsaved changes are discarded */
+ r = aug_rm(aug, "/augeas/files/etc/hosts");
+ CuAssertPositive(tc, r);
+
+ r = aug_set(aug, "/files/etc/hosts/1/ipaddr", "127.0.0.2");
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_load(aug);
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_match(aug, "/files/etc/hosts/1[ipaddr = '127.0.0.1']", NULL);
+ CuAssertIntEquals(tc, 1, r);
+
+ aug_close(aug);
+}
+
+/* BZ 613967 - segfault when reloading a file that has been externally
+ * modified, and we have a variable pointing into the old tree
+ */
+static void testReloadExternalMod(CuTest *tc) {
+ augeas *aug = NULL;
+ int r, created;
+ const char *aug_root, *s;
+ char *mtime;
+
+ aug = setup_writable_hosts(tc);
+
+ r = aug_load(aug);
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_get(aug, "/augeas/files/etc/hosts/mtime", &s);
+ CuAssertIntEquals(tc, 1, r);
+ mtime = strdup(s);
+ CuAssertPtrNotNull(tc, mtime);
+
+ /* Set up a new entry and save */
+ r = aug_defnode(aug, "new", "/files/etc/hosts/3", NULL, &created);
+ CuAssertIntEquals(tc, 1, r);
+ CuAssertIntEquals(tc, 1, created);
+
+ r = aug_set(aug, "$new/ipaddr", "172.31.42.1");
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_set(aug, "$new/canonical", "new.example.com");
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_save(aug);
+ CuAssertRetSuccess(tc, r);
+
+ /* Fake the mtime to be old */
+ r = aug_set(aug, "/augeas/files/etc/hosts/mtime", mtime);
+ CuAssertRetSuccess(tc, r);
+
+ /* Now modify the file outside of Augeas */
+ r = aug_get(aug, "/augeas/root", &aug_root);
+ CuAssertIntEquals(tc, 1, r);
+
+ run(tc, "sed -e '1,2d' %setc/hosts > %setc/hosts.new", aug_root, aug_root);
+ run(tc, "mv %setc/hosts.new %setc/hosts", aug_root, aug_root);
+
+ /* Reload and save again */
+ r = aug_load(aug);
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_save(aug);
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_match(aug, "/files/etc/hosts/#comment", NULL);
+ CuAssertIntEquals(tc, 2, r);
+
+ r = aug_match(aug, "/files/etc/hosts/*", NULL);
+ CuAssertIntEquals(tc, 5, r);
+
+ free(mtime);
+ aug_close(aug);
+}
+
+/* Bug #259 - after save with /augeas/save = newfile, make sure we discard
+ * changes and reload files.
+ */
+static void testReloadAfterSaveNewfile(CuTest *tc) {
+ augeas *aug = NULL;
+ int r;
+
+ aug = setup_writable_hosts(tc);
+
+ r = aug_load(aug);
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_set(aug, "/augeas/save", "newfile");
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_set(aug, "/files/etc/hosts/1/ipaddr", "127.0.0.2");
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_save(aug);
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_load(aug);
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_match(aug, "/files/etc/hosts/1[ipaddr = '127.0.0.1']", NULL);
+ CuAssertIntEquals(tc, 1, r);
+
+ aug_close(aug);
+}
+
+/* Make sure parse errors from applying a lens to a file that does not
+ * match get reported under /augeas//error
+ *
+ * Tests bug #138
+ */
+static void testParseErrorReported(CuTest *tc) {
+ augeas *aug = NULL;
+ int nmatches, r;
+
+ aug = aug_init(root, loadpath, AUG_NO_MODL_AUTOLOAD);
+ CuAssertPtrNotNull(tc, aug);
+
+ r = aug_set(aug, "/augeas/load/Bad/lens", "Yum.lns");
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_set(aug, "/augeas/load/Bad/incl", "/etc/fstab");
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_load(aug);
+ CuAssertRetSuccess(tc, r);
+
+ nmatches = aug_match(aug, "/augeas/files/etc/fstab/error", NULL);
+ CuAssertIntEquals(tc, 1, nmatches);
+
+ aug_close(aug);
+}
+
+/* Test failed file opening is reported, e.g. EACCES */
+static void testPermsErrorReported(CuTest *tc) {
+ if (getuid() == 0) {
+ puts("pending (testPermsErrorReported): can't test permissions under root account");
+ return;
+ }
+
+ augeas *aug = NULL;
+ int r;
+ const char *s;
+
+ aug = setup_unreadable_hosts(tc);
+
+ r = aug_load(aug);
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_match(aug, "/files/etc/hosts", NULL);
+ CuAssertIntEquals(tc, 0, r);
+
+ r = aug_get(aug, "/augeas/files/etc/hosts/error", &s);
+ CuAssertIntEquals(tc, 1, r);
+ CuAssertStrEquals(tc, "read_failed", s);
+
+ r = aug_get(aug, "/augeas/files/etc/hosts/error/message", &s);
+ CuAssertIntEquals(tc, 1, r);
+
+ aug_close(aug);
+}
+
+/* Test bug #252 - excl patterns have no effect when loading with a root */
+static void testLoadExclWithRoot(CuTest *tc) {
+ augeas *aug = NULL;
+ static const char *const cmds =
+ "set /augeas/context /augeas/load\n"
+ "set Hosts/lens Hosts.lns\n"
+ "set Hosts/incl /etc/hosts\n"
+ "set Fstab/lens Fstab.lns\n"
+ "set Fstab/incl /etc/ho*\n"
+ "set Fstab/excl /etc/hosts\n"
+ "load";
+ int r;
+
+ aug = aug_init(root, loadpath, AUG_NO_STDINC|AUG_NO_MODL_AUTOLOAD);
+ CuAssertPtrNotNull(tc, aug);
+
+ r = aug_srun(aug, stderr, cmds);
+ CuAssertIntEquals(tc, 7, r);
+
+ r = aug_match(aug, "/augeas//error", NULL);
+ CuAssertIntEquals(tc, 0, r);
+
+ aug_close(aug);
+}
+
+/* Test excl patterns matching the end of a filename work, e.g. *.bak */
+static void testLoadTrailingExcl(CuTest *tc) {
+ augeas *aug = NULL;
+ static const char *const cmds =
+ "set /augeas/context /augeas/load/Shellvars\n"
+ "set lens Shellvars.lns\n"
+ "set incl /etc/sysconfig/network-scripts/ifcfg-lo*\n"
+ "set excl *.rpmsave\n"
+ "load";
+ int r;
+
+ aug = aug_init(root, loadpath, AUG_NO_STDINC|AUG_NO_MODL_AUTOLOAD);
+ CuAssertPtrNotNull(tc, aug);
+
+ r = aug_srun(aug, stderr, cmds);
+ CuAssertIntEquals(tc, 5, r);
+
+ r = aug_match(aug, "/augeas/files/etc/sysconfig/network-scripts/ifcfg-lo", NULL);
+ CuAssertIntEquals(tc, 1, r);
+
+ r = aug_match(aug, "/augeas/files/etc/sysconfig/network-scripts/ifcfg-lo.rpmsave", NULL);
+ CuAssertIntEquals(tc, 0, r);
+
+ aug_close(aug);
+}
+
+static void testMultipleXfm(CuTest *tc) {
+ augeas *aug = NULL;
+ static const char *const cmds =
+ "set /augeas/context /augeas/load\n"
+ "set P1/lens Passwd.lns\n"
+ "set P1/incl /etc/passwd\n"
+ "set P2/lens @Passwd\n"
+ "set P2/incl /etc/passwd\n"
+ "load";
+ int r;
+ const char *v;
+
+ aug = aug_init(root, loadpath, AUG_NO_STDINC|AUG_NO_LOAD);
+ CuAssertPtrNotNull(tc, aug);
+
+ /* The initial setup is fine: two different transforms that want to use
+ the same lens to load a file */
+ r = aug_srun(aug, stderr, cmds);
+ CuAssertIntEquals(tc, 6, r);
+
+ r = aug_match(aug, "/augeas/files/etc/passwd", NULL);
+ CuAssertIntEquals(tc, 1, r);
+
+ /* Now change things so we use two different lenses for the same file */
+ r = aug_set(aug, "/augeas/load/P2/lens", "Hosts.lns");
+ CuAssertIntEquals(tc, 0, r);
+
+ r = aug_rm(aug, "/files/*");
+ CuAssertPositive(tc, r);
+
+ r = aug_load(aug);
+ CuAssertIntEquals(tc, 0, r);
+
+ r = aug_get(aug, "/augeas/files/etc/passwd/error", &v);
+ CuAssertIntEquals(tc, 1, r);
+ CuAssertStrEquals(tc, "mxfm_load", v);
+
+ aug_close(aug);
+}
+
+int main(void) {
+ char *output = NULL;
+ CuSuite* suite = CuSuiteNew();
+
+ CuSuiteSetup(suite, NULL, NULL);
+ SUITE_ADD_TEST(suite, testDefault);
+ SUITE_ADD_TEST(suite, testNoLoad);
+ SUITE_ADD_TEST(suite, testNoAutoload);
+ SUITE_ADD_TEST(suite, testInvalidLens);
+ SUITE_ADD_TEST(suite, testLoadSave);
+ SUITE_ADD_TEST(suite, testLoadDefined);
+ SUITE_ADD_TEST(suite, testDefvarExpr);
+ SUITE_ADD_TEST(suite, testReloadChanged);
+ SUITE_ADD_TEST(suite, testReloadDirty);
+ SUITE_ADD_TEST(suite, testReloadDeleted);
+ SUITE_ADD_TEST(suite, testReloadDeletedMeta);
+ SUITE_ADD_TEST(suite, testReloadExternalMod);
+ SUITE_ADD_TEST(suite, testReloadAfterSaveNewfile);
+ SUITE_ADD_TEST(suite, testParseErrorReported);
+ SUITE_ADD_TEST(suite, testPermsErrorReported);
+ SUITE_ADD_TEST(suite, testLoadExclWithRoot);
+ SUITE_ADD_TEST(suite, testLoadTrailingExcl);
+ SUITE_ADD_TEST(suite, testMultipleXfm);
+
+ abs_top_srcdir = getenv("abs_top_srcdir");
+ if (abs_top_srcdir == NULL)
+ die("env var abs_top_srcdir must be set");
+
+ abs_top_builddir = getenv("abs_top_builddir");
+ if (abs_top_builddir == NULL)
+ die("env var abs_top_builddir must be set");
+
+ if (asprintf(&root, "%s/tests/root", abs_top_srcdir) < 0) {
+ die("failed to set root");
+ }
+
+ if (asprintf(&loadpath, "%s/lenses", abs_top_srcdir) < 0) {
+ die("failed to set loadpath");
+ }
+
+ CuSuiteRun(suite);
+ CuSuiteSummary(suite, &output);
+ CuSuiteDetails(suite, &output);
+ printf("%s\n", output);
+ free(output);
+
+ int result = suite->failCount;
+ CuSuiteFree(suite);
+ return result;
+}
+
+/*
+ * Local variables:
+ * indent-tabs-mode: nil
+ * c-indent-level: 4
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
--- /dev/null
+#!/bin/sh
+
+# Test that an attempt to save into a non-writable file does not leave a
+# temporary file behind.
+# See https://github.com/hercules-team/augeas/issues/479
+
+if [ "$(id -u)" != 0 -o "$(uname -s)" != "Linux" ]; then
+ echo "Test can only be run as root on Linux as it uses chattr"
+ exit 77
+fi
+
+root=$abs_top_builddir/build/test-nonwritable
+hosts=$root/etc/hosts
+
+rm -rf $root
+mkdir -p $(dirname $hosts)
+
+cat <<EOF > $hosts
+127.0.0.1 localhost
+EOF
+
+chattr +i $hosts
+
+augtool --nostdinc -r $root -I $abs_top_srcdir/lenses > /dev/null <<EOF
+set /files/etc/hosts/1/ipaddr 127.0.0.2
+save
+EOF
+
+chattr -i $hosts
+
+if stat -t $hosts.* > /dev/null 2>&1
+then
+ echo "found a tempfile" $hosts.*
+ exit 1
+fi
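The root guard at the top of this script exits with status 77, which Automake's test driver reports as SKIP rather than FAIL. A deterministic sketch of that convention (the uid is passed as a parameter here, purely for illustration):

```shell
# 77 is the exit status Automake's test harness treats as SKIP, not FAIL.
maybe_skip() {
    [ "$1" = 0 ] || return 77   # non-root: skip the test
    return 0
}
maybe_skip 1000 && s_nonroot=0 || s_nonroot=$?
maybe_skip 0 && s_root=0 || s_root=$?
echo "non-root=$s_nonroot root=$s_root"
```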
--- /dev/null
+/*
+ * test-perf.c: test performance of API functions
+ *
+ * Copyright (C) 2009-2016 Red Hat Inc.
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: David Lutterkort <lutter@redhat.com>
+ */
+
+#include <config.h>
+
+#include "augeas.h"
+
+#include "cutest.h"
+#include "internal.h"
+
+#include <sys/time.h>
+#include <unistd.h>
+
+static const char *abs_top_srcdir;
+static char *root;
+static char *loadpath;
+
+#define die(msg) \
+ do { \
+ fprintf(stderr, "%d: Fatal error: %s\n", __LINE__, msg); \
+ exit(EXIT_FAILURE); \
+ } while(0)
+
+#define time_taken(start, stop) \
+ ((stop.tv_sec - start.tv_sec)*1000 + (stop.tv_usec - start.tv_usec)/1000)
+
+/* Test performance of basic predicate [1..n] with many nodes */
+static void testPerfPredicate(CuTest *tc) {
+ const char *value;
+ char *path;
+ struct timeval stop, start;
+ struct augeas *aug;
+
+ aug = aug_init(root, loadpath, AUG_NO_STDINC|AUG_NO_LOAD);
+ CuAssertPtrNotNull(tc, aug);
+
+ gettimeofday(&start, NULL);
+
+ for (int i=1; i <= 5000; i++) {
+ if (asprintf(&path, "/test/service[%i]", i) < 0)
+ die("failed to generate set path");
+ aug_set(aug, path, "test");
+ free(path);
+ }
+
+ for (int i=1; i <= 5000; i++) {
+ if (asprintf(&path, "/test/service[%i]", i) < 0)
+ die("failed to generate set path");
+ aug_get(aug, path, &value);
+ CuAssertStrEquals(tc, "test", value);
+ free(path);
+ }
+
+ gettimeofday(&stop, NULL);
+ printf("testPerfPredicate = %lums\n", time_taken(start, stop));
+
+ aug_close(aug);
+}
+
+int main(void) {
+ char *output = NULL;
+ CuSuite* suite = CuSuiteNew();
+ CuSuiteSetup(suite, NULL, NULL);
+
+ SUITE_ADD_TEST(suite, testPerfPredicate);
+
+ abs_top_srcdir = getenv("abs_top_srcdir");
+ if (abs_top_srcdir == NULL)
+ die("env var abs_top_srcdir must be set");
+
+ if (asprintf(&root, "%s/tests/root", abs_top_srcdir) < 0) {
+ die("failed to set root");
+ }
+
+ if (asprintf(&loadpath, "%s/lenses", abs_top_srcdir) < 0) {
+ die("failed to set loadpath");
+ }
+
+ CuSuiteRun(suite);
+ CuSuiteSummary(suite, &output);
+ CuSuiteDetails(suite, &output);
+ printf("%s\n", output);
+ free(output);
+ int result = suite->failCount;
+ CuSuiteFree(suite);
+ return result;
+}
+
+/*
+ * Local variables:
+ * indent-tabs-mode: nil
+ * c-indent-level: 4
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
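The `time_taken` macro in test-perf.c folds a pair of `struct timeval` samples into milliseconds: whole seconds contribute `*1000` and microseconds contribute `/1000`. The same arithmetic, transcribed as a shell sketch:

```shell
# elapsed_ms start_sec start_usec stop_sec stop_usec
elapsed_ms() {
    start_s=$1; start_us=$2; stop_s=$3; stop_us=$4
    echo $(( (stop_s - start_s) * 1000 + (stop_us - start_us) / 1000 ))
}
# (12-10)*1000 + (750000-250000)/1000 = 2000 + 500
ms=$(elapsed_ms 10 250000 12 750000)
```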
--- /dev/null
+#!/bin/sh
+
+# Check that saving preserves mode and ownership; for this test to make
+# much sense (if any) the user running it should have at least one
+# supplementary group
+
+root=$abs_top_builddir/build/preserve
+hosts=$root/etc/hosts
+
+init_dirs() {
+rm -rf $root
+mkdir -p $(dirname $hosts)
+}
+
+stat_inode() {
+ls -il $1 | awk '{ print $1 }'
+}
+
+AUGTOOL="augtool --nostdinc -r $root -I $abs_top_srcdir/lenses"
+init_dirs
+
+printf '127.0.0.1\tlocalhost\n' > $hosts
+
+chmod 0600 $hosts
+group=$(groups | tr ' ' '\n' | tail -1)
+chgrp $group $hosts
+
+[ -x /usr/bin/chcon ] && selinux=yes || selinux=no
+[ x$SKIP_TEST_PRESERVE_SELINUX = x1 ] && selinux=no
+if [ $selinux = yes ] ; then
+ /usr/bin/chcon -t etc_t $hosts > /dev/null 2>/dev/null || selinux=no
+fi
+
+$AUGTOOL >/dev/null <<EOF
+set /files/etc/hosts/1/alias alias.example.com
+save
+EOF
+if [ $? != 0 ] ; then
+ echo "augtool failed on existing file"
+ exit 1
+fi
+
+act_group=$(ls -l $hosts | sed -e 's/ */ /g' | cut -d ' ' -f 4)
+act_mode=$(ls -l $hosts | cut -b 1-10)
+if [ $selinux = yes ] ; then
+ act_con=$(stat --format=%C $hosts | cut -d ':' -f 3)
+fi
+if [ "x$group" != "x$act_group" ] ; then
+ echo "Expected group $group but got $act_group"
+ exit 1
+fi
+
+if [ x-rw------- != "x$act_mode" ] ; then
+ echo "Expected mode 0600 but got $act_mode"
+ exit 1
+fi
+
+if [ $selinux = yes -a xetc_t != "x$act_con" ] ; then
+ echo "Expected SELinux type etc_t but got $act_con"
+ exit 1
+fi
+
+# Check that we create new files without error and with permissions implied
+# from the umask
+init_dirs
+
+oldumask=$(umask)
+umask 0002
+$AUGTOOL > /dev/null <<EOF
+set /files/etc/hosts/1/ipaddr 127.0.0.1
+set /files/etc/hosts/1/canonical host.example.com
+save
+EOF
+if [ $? != 0 ] ; then
+ echo "augtool failed on new file"
+ exit 1
+fi
+if [ ! -e $hosts ]; then
+ echo "augtool didn't create new /etc/hosts file"
+ exit 1
+fi
+act_mode=$(ls -l $hosts | cut -b 1-10)
+if [ x-rw-rw-r-- != "x$act_mode" ] ; then
+ echo "Expected mode 0664 due to $(umask) umask but got $act_mode"
+ exit 1
+fi
+umask $oldumask
+
+# Check that we create new files without error when backups are requested
+init_dirs
+
+$AUGTOOL -b > /dev/null <<EOF
+set /files/etc/hosts/1/ipaddr 127.0.0.1
+set /files/etc/hosts/1/canonical host.example.com
+save
+EOF
+if [ $? != 0 ] ; then
+ echo "augtool -b failed on new file"
+ exit 1
+fi
+
+# Check that we preserve a backup file on request
+printf '127.0.0.1\tlocalhost\n' > $hosts
+exp_inode=$(stat_inode $hosts)
+
+$AUGTOOL -b > /dev/null <<EOF
+set /files/etc/hosts/1/alias alias.example.com
+print /augeas/save
+save
+EOF
+
+if [ ! -f $hosts.augsave ] ; then
+ echo "Backup file was not created"
+ exit 1
+fi
+
+act_inode=$(stat_inode $hosts.augsave)
+if [ "x$act_inode" != "x$exp_inode" ] ; then
+ echo "Backup file's inode changed"
+ exit 1
+fi
+
+act_inode=$(stat_inode $hosts)
+if [ "x$act_inode" = "x$exp_inode" ] ; then
+ echo "Same inode for backup file and main file"
+ exit 1
+fi
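The umask check above expects mode 0664 under a 0002 umask because new regular files are created with `0666 & ~umask` (no execute bits). A small sketch of that arithmetic, with a hypothetical `umask_to_mode` helper:

```shell
# New regular files get mode 0666 & ~umask; execute bits are never set.
umask_to_mode() {
    printf '%04o' $(( 0666 & ~0$1 ))
}
m1=$(umask_to_mode 0002)   # group-writable
m2=$(umask_to_mode 0022)   # typical default
m3=$(umask_to_mode 0077)   # owner-only
```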
--- /dev/null
+#!/bin/sh
+
+# Test that we can write into a bind mount placed at PATH.augnew with the
+# copy_if_rename_fails flag.
+# This requires that EXDEV or EBUSY is returned from rename(2) to activate the
+# code path, so set up a bind mount on Linux.
+
+if [ "$(id -u)" != 0 -o "$(uname -s)" != "Linux" ]; then
+ echo "Test can only be run as root on Linux to create bind mounts"
+ exit 77
+fi
+
+ROOT=$abs_top_builddir/build/test-put-mount-augnew
+LENSES=$abs_top_srcdir/lenses
+
+HOSTS=$ROOT/etc/hosts
+HOSTS_AUGNEW=${HOSTS}.augnew
+TARGET=$ROOT/other/real_hosts
+
+rm -rf $ROOT
+mkdir -p $(dirname $HOSTS)
+mkdir -p $(dirname $TARGET)
+
+echo 127.0.0.1 localhost > $HOSTS
+touch $TARGET $HOSTS_AUGNEW
+
+mount --bind $TARGET $HOSTS_AUGNEW
+Exit() {
+ umount $HOSTS_AUGNEW
+ exit $1
+}
+
+HOSTS_SUM=$(sum $HOSTS)
+
+augtool --nostdinc -I $LENSES -r $ROOT --new <<EOF
+set /augeas/save/copy_if_rename_fails 1
+set /files/etc/hosts/1/alias myhost
+save
+print /augeas//error
+EOF
+
+if [ ! -f $HOSTS ] ; then
+ echo "/etc/hosts is no longer a regular file"
+ Exit 1
+fi
+if [ ! "x${HOSTS_SUM}" = "x$(sum $HOSTS)" ]; then
+ echo "/etc/hosts has changed"
+ Exit 1
+fi
+
+if [ ! -s $HOSTS_AUGNEW ]; then
+ echo "/etc/hosts.augnew is empty"
+ Exit 1
+fi
+if [ ! -s $TARGET ]; then
+ echo "/other/real_hosts is empty"
+ Exit 1
+fi
+
+if ! grep myhost $TARGET >/dev/null; then
+ echo "/other/real_hosts does not contain the modification"
+ Exit 1
+fi
+
+Exit 0
--- /dev/null
+#!/bin/sh
+
+# Test that we don't follow bind mounts when writing to .augsave.
+# This requires that EXDEV or EBUSY is returned from rename(2) to activate the
+# code path, so set up a bind mount on Linux.
+
+if [ "$(id -u)" != 0 -o "$(uname -s)" != "Linux" ]; then
+ echo "Test can only be run as root on Linux to create bind mounts"
+ exit 77
+fi
+
+actual() {
+ (augtool --nostdinc -I $LENSES -r $ROOT --backup | grep ^/augeas) <<EOF
+ set /augeas/save/copy_if_rename_fails 1
+ set /files/etc/hosts/1/alias myhost
+ save
+ print /augeas//error
+EOF
+}
+
+expected() {
+ cat <<EOF
+/augeas/files/etc/hosts/error = "clone_unlink_dst_augsave"
+/augeas/files/etc/hosts/error/message = "Device or resource busy"
+EOF
+}
+
+ROOT=$abs_top_builddir/build/test-put-mount-augsave
+LENSES=$abs_top_srcdir/lenses
+
+HOSTS=$ROOT/etc/hosts
+HOSTS_AUGSAVE=${HOSTS}.augsave
+
+ATTACK_FILE=$ROOT/other/attack
+
+rm -rf $ROOT
+mkdir -p $(dirname $HOSTS)
+mkdir -p $(dirname $ATTACK_FILE)
+
+echo 127.0.0.1 localhost > $HOSTS
+touch $ATTACK_FILE $HOSTS_AUGSAVE
+
+mount --bind $ATTACK_FILE $HOSTS_AUGSAVE
+Exit() {
+ umount $HOSTS_AUGSAVE
+ exit $1
+}
+
+ACTUAL=$(actual)
+EXPECTED=$(expected)
+if [ "$ACTUAL" != "$EXPECTED" ]; then
+ echo "No error when trying to unlink augsave (a bind mount):"
+ echo "$ACTUAL"
+ exit 1
+fi
+
+if [ -s $ATTACK_FILE ]; then
+ echo "/other/attack now contains data, should be blank"
+ Exit 1
+fi
+
+Exit 0
--- /dev/null
+#!/bin/sh
+
+# Test that we can write into a bind mount with the copy_if_rename_fails flag.
+# This requires that EXDEV or EBUSY is returned from rename(2) to activate the
+# code path, so set up a bind mount on Linux.
+
+if [ "$(id -u)" != 0 -o "$(uname -s)" != "Linux" ]; then
+ echo "Test can only be run as root on Linux to create bind mounts"
+ exit 77
+fi
+
+ROOT=$abs_top_builddir/build/test-put-mount
+LENSES=$abs_top_srcdir/lenses
+
+HOSTS=$ROOT/etc/hosts
+TARGET=$ROOT/other/real_hosts
+
+rm -rf $ROOT
+mkdir -p $(dirname $HOSTS)
+mkdir -p $(dirname $TARGET)
+
+echo 127.0.0.1 localhost > $TARGET
+touch $HOSTS
+
+mount --bind $TARGET $HOSTS
+Exit() {
+ umount $HOSTS
+ exit $1
+}
+
+HOSTS_SUM=$(sum $HOSTS)
+
+augtool --nostdinc -I $LENSES -r $ROOT <<EOF
+set /augeas/save/copy_if_rename_fails 1
+set /files/etc/hosts/1/alias myhost
+save
+print /augeas//error
+EOF
+
+if [ "x${HOSTS_SUM}" = "x$(sum $HOSTS)" ]; then
+ echo "/etc/hosts hasn't changed"
+ Exit 1
+fi
+
+if [ "x${HOSTS_SUM}" = "x$(sum $TARGET)" ]; then
+ echo "/other/real_hosts hasn't changed"
+ Exit 1
+fi
+
+if ! grep myhost $TARGET >/dev/null; then
+ echo "/other/real_hosts does not contain the modification"
+ Exit 1
+fi
+
+Exit 0
--- /dev/null
+#!/bin/sh
+
+# Test that we don't follow symlinks when writing to .augnew
+
+ROOT=$abs_top_builddir/build/test-put-symlink-augnew
+LENSES=$abs_top_srcdir/lenses
+
+HOSTS=$ROOT/etc/hosts
+HOSTS_AUGNEW=${HOSTS}.augnew
+
+ATTACK_FILE=$ROOT/other/attack
+
+rm -rf $ROOT
+mkdir -p $(dirname $HOSTS)
+mkdir -p $(dirname $ATTACK_FILE)
+
+cat <<EOF > $HOSTS
+127.0.0.1 localhost
+EOF
+touch $ATTACK_FILE
+
+(cd $(dirname $HOSTS) && ln -s ../other/attack $(basename $HOSTS).augnew)
+
+HOSTS_SUM=$(sum $HOSTS)
+
+augtool --nostdinc -I $LENSES -r $ROOT --new > /dev/null <<EOF
+set /files/etc/hosts/1/alias myhost
+save
+EOF
+
+if [ ! -f $HOSTS -o -h $HOSTS ] ; then
+ echo "/etc/hosts is no longer a regular file"
+ exit 1
+fi
+if [ ! "x${HOSTS_SUM}" = "x$(sum $HOSTS)" ]; then
+ echo "/etc/hosts has changed"
+ exit 1
+fi
+
+if [ -h $HOSTS_AUGNEW ] ; then
+ echo "/etc/hosts.augnew is still a symlink, should be unlinked"
+ exit 1
+fi
+if ! grep myhost $HOSTS_AUGNEW >/dev/null; then
+ echo "/etc/hosts does not contain the modification"
+ exit 1
+fi
+
+if [ -s $ATTACK_FILE ]; then
+ echo "/other/attack now contains data, should be blank"
+ exit 1
+fi
--- /dev/null
+#!/bin/sh
+
+# Test that we don't follow .augsave symlinks
+
+ROOT=$abs_top_builddir/build/test-put-symlink-augsave
+LENSES=$abs_top_srcdir/lenses
+
+HOSTS=$ROOT/etc/hosts
+HOSTS_AUGSAVE=${HOSTS}.augsave
+
+ATTACK_FILE=$ROOT/other/attack
+
+rm -rf $ROOT
+mkdir -p $(dirname $HOSTS)
+mkdir -p $(dirname $ATTACK_FILE)
+
+cat <<EOF > $HOSTS
+127.0.0.1 localhost
+EOF
+HOSTS_SUM=$(sum $HOSTS | cut -d ' ' -f 1)
+
+touch $ATTACK_FILE
+(cd $(dirname $HOSTS) && ln -s ../other/attack $(basename $HOSTS).augsave)
+
+# Now ask for the original to be saved in .augsave
+augtool --nostdinc -I $LENSES -r $ROOT --backup > /dev/null <<EOF
+set /files/etc/hosts/1/alias myhost
+save
+EOF
+
+if [ ! -f $HOSTS -o -h $HOSTS ] ; then
+ echo "/etc/hosts is no longer a regular file"
+ exit 1
+fi
+if [ -h $HOSTS_AUGSAVE ] ; then
+ echo "/etc/hosts.augsave is still a symlink, should be unlinked"
+ exit 1
+fi
+
+if [ ! "x${HOSTS_SUM}" = "x$(sum $HOSTS_AUGSAVE | cut -d ' ' -f 1)" ]; then
+ echo "/etc/hosts.augsave has changed from the original /etc/hosts"
+ exit 1
+fi
+if ! grep myhost $HOSTS >/dev/null; then
+ echo "/etc/hosts does not contain the modification"
+ exit 1
+fi
+
+if [ -s $ATTACK_FILE ]; then
+ echo "/other/attack now contains data, should be blank"
+ exit 1
+fi
--- /dev/null
+#!/bin/sh
+
+# Test that we don't follow .augnew symlinks (regression test)
+
+ROOT=$abs_top_builddir/build/test-put-symlink-augtemp
+LENSES=$abs_top_srcdir/lenses
+
+HOSTS=$ROOT/etc/hosts
+HOSTS_AUGNEW=${HOSTS}.augnew
+
+ATTACK_FILE=$ROOT/other/attack
+
+rm -rf $ROOT
+mkdir -p $(dirname $HOSTS)
+mkdir -p $(dirname $ATTACK_FILE)
+
+cat <<EOF > $HOSTS
+127.0.0.1 localhost
+EOF
+touch $ATTACK_FILE
+
+(cd $(dirname $HOSTS) && ln -s ../other/attack $(basename $HOSTS).augnew)
+
+# Test the normal save code path which would use a temp augnew file
+augtool --nostdinc -I $LENSES -r $ROOT > /dev/null <<EOF
+set /files/etc/hosts/1/alias myhost1
+save
+EOF
+
+if [ -h $HOSTS ] ; then
+ echo "/etc/hosts is now a symlink, pointing to" $(readlink $HOSTS)
+ exit 1
+fi
+if ! grep myhost1 $HOSTS >/dev/null; then
+ echo "/etc/hosts does not contain the modification"
+ exit 1
+fi
+
+if [ ! -h $HOSTS_AUGNEW ] ; then
+ echo "/etc/hosts.augnew is not a symbolic link"
+ exit 1
+fi
+LINK=$(readlink $HOSTS_AUGNEW)
+if [ "x$LINK" != "x../other/attack" ] ; then
+ echo "/etc/hosts.augnew no longer links to ../other/attack"
+ exit 1
+fi
+
+if [ -s $ATTACK_FILE ]; then
+ echo "/other/attack now contains data, should be blank"
+ exit 1
+fi
--- /dev/null
+#!/bin/sh
+
+# Test that we correctly preserve symlinks when saving a file
+
+ROOT=$abs_top_builddir/build/test-put-symlink
+LENSES=$abs_top_srcdir/lenses
+HOSTS=$ROOT/etc/hosts
+REAL_HOSTS=$ROOT/other/hosts
+
+rm -rf $ROOT
+mkdir -p $(dirname $HOSTS)
+mkdir -p $(dirname $REAL_HOSTS)
+
+cat <<EOF > $REAL_HOSTS
+127.0.0.1 localhost
+EOF
+
+(cd $(dirname $HOSTS) && ln -s ../other/hosts $(basename $HOSTS))
+
+augtool --nostdinc -I $LENSES -b -r $ROOT > /dev/null <<EOF
+set /files/etc/hosts/1/alias myhost
+save
+EOF
+
+HOSTS_AUGSAVE=${HOSTS}.augsave
+if [ ! -f $HOSTS_AUGSAVE ] ; then
+ echo "Missing /etc/hosts.augsave"
+ exit 1
+fi
+if [ -h $HOSTS_AUGSAVE ] ; then
+ echo "The file /etc/hosts.augsave is a symlink"
+ exit 1
+fi
+if [ ! -h $HOSTS ] ; then
+ echo "/etc/hosts is not a symbolic link"
+ exit 1
+fi
+
+LINK=$(readlink $HOSTS)
+if [ "x$LINK" != "x../other/hosts" ] ; then
+ echo "/etc/hosts does not link to ../other/hosts"
+ exit 1
+fi
+
+if ! grep myhost $REAL_HOSTS >/dev/null; then
+ echo "/other/hosts does not contain the modification"
+ exit 1
+fi
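Symlink preservation, checked above, is the flip side of the `.augnew` tests: opening the existing path for writing follows the link, so the link survives and the data lands in the real file. A dependency-free sketch:

```shell
#!/bin/sh
# Hypothetical demo: redirecting into a symlinked path follows the link,
# leaving the symlink intact and updating the real file behind it.
dir=$(mktemp -d)
printf '127.0.0.1 localhost\n' > "$dir/real_hosts"
ln -s real_hosts "$dir/hosts"

printf '127.0.0.1 localhost myhost\n' > "$dir/hosts"

[ -h "$dir/hosts" ] && echo "hosts is still a symlink"
grep -q myhost "$dir/real_hosts" && echo "real_hosts was updated"
rm -rf "$dir"
```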
--- /dev/null
+/*
+ * test-run.c: test the aug_srun API function
+ *
+ * Copyright (C) 2009-2016 David Lutterkort
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: David Lutterkort <lutter@redhat.com>
+ */
+
+#include <config.h>
+
+#include <stdlib.h>
+#include <stdio.h>
+#include <ctype.h>
+
+#include "augeas.h"
+
+#include "cutest.h"
+#include "internal.h"
+#include <memory.h>
+
+#include <unistd.h>
+
+static const char *abs_top_srcdir;
+static char *lensdir;
+
+#define KW_TEST "test"
+#define KW_PRINTS "prints"
+#define KW_SOMETHING "something"
+#define KW_USE "use"
+
+/* This array needs to be kept in sync with aug_errcode_t, and
+ * the entries need to be in the same order as in that enum; used
+ * by errtok
+ */
+static const char *const errtokens[] = {
+ "NOERROR", "ENOMEM", "EINTERNAL", "EPATHX", "ENOMATCH",
+ "EMMATCH", "ESYNTAX", "ENOLENS", "EMXFM", "ENOSPAN",
+ "EMVDESC", "ECMDRUN", "EBADARG", "ELABEL"
+};
+
+struct test {
+ struct test *next;
+ char *name;
+ char *module;
+ int result;
+ int errcode;
+ char *cmd;
+ char *out;
+ bool out_present;
+};
+
+static void free_tests(struct test *test) {
+ if (test == NULL)
+ return;
+ free_tests(test->next);
+ free(test->name);
+ free(test->module);
+ free(test->cmd);
+ free(test->out);
+ free(test);
+}
+
+#define die(msg) \
+ do { \
+ fprintf(stderr, "%d: Fatal error: %s\n", __LINE__, msg); \
+ exit(EXIT_FAILURE); \
+ } while(0)
+
+static char *skipws(char *s) {
+ while (isspace(*s)) s++;
+ return s;
+}
+
+static char *findws(char *s) {
+ while (*s && ! isspace(*s)) s++;
+ return s;
+}
+
+static char *token(char *s, char **tok) {
+ char *t = skipws(s);
+ s = findws(t);
+ *tok = strndup(t, s - t);
+ return s;
+}
+
+static char *inttok(char *s, int *tok) {
+ char *t = skipws(s);
+ s = findws(t);
+ *tok = strtol(t, NULL, 10);
+ return s;
+}
+
+static char *errtok(char *s, int *err) {
+ char *t = skipws(s);
+ s = findws(t);
+ if (s == t) {
+ *err = AUG_NOERROR;
+ return s;
+ }
+ for (*err = 0; *err < ARRAY_CARDINALITY(errtokens); *err += 1) {
+ const char *e = errtokens[*err];
+ if (strlen(e) == s - t && STREQLEN(e, t, s - t))
+ return s;
+ }
+ fprintf(stderr, "errtok: '%s'\n", t);
+ die("unknown error code");
+}
+
+static bool looking_at(const char *s, const char *kw) {
+ return STREQLEN(s, kw, strlen(kw));
+}
+
+static struct test *read_tests(void) {
+ char *fname = NULL;
+ FILE *fp;
+ char line[BUFSIZ];
+ struct test *result = NULL, *t = NULL;
+ int lc = 0;
+ bool append_cmd = true;
+
+ if (asprintf(&fname, "%s/tests/run.tests", abs_top_srcdir) < 0)
+ die("asprintf fname");
+
+ if ((fp = fopen(fname, "r")) == NULL)
+ die("fopen run.tests");
+
+ while (fgets(line, BUFSIZ, fp) != NULL) {
+ lc += 1;
+ char *s = skipws(line);
+ if (*s == '#' || *s == '\0')
+ continue;
+ if (*s == ':')
+ s += 1;
+ char *eos = s + strlen(s) - 2;
+ if (eos >= s && *eos == ':') {
+ *eos++ = '\n';
+ *eos = '\0';
+ }
+
+ if (looking_at(s, KW_TEST)) {
+ if (ALLOC(t) < 0)
+ die_oom();
+ list_append(result, t);
+ append_cmd = true;
+ s = token(s + strlen(KW_TEST), &(t->name));
+ s = inttok(s, &t->result);
+ s = errtok(s, &t->errcode);
+ } else if (looking_at(s, KW_PRINTS)) {
+ s = skipws(s + strlen(KW_PRINTS));
+ t->out_present = looking_at(s, KW_SOMETHING);
+ append_cmd = false;
+ } else if (looking_at(s, KW_USE)) {
+ if (t->module != NULL)
+ die("Can use at most one module in a test");
+ s = token(s + strlen(KW_USE), &(t->module));
+ } else {
+ char **buf = append_cmd ? &(t->cmd) : &(t->out);
+ if (*buf == NULL) {
+ *buf = strdup(s);
+ if (*buf == NULL)
+ die_oom();
+ } else {
+ if (REALLOC_N(*buf, strlen(*buf) + strlen(s) + 1) < 0)
+ die_oom();
+ strcat(*buf, s);
+ }
+ }
+ if (t->out != NULL)
+ t->out_present = true;
+ }
+ free(fname);
+ return result;
+}
+
+#define fail(cond, msg ...) \
+ if (cond) { \
+ printf("FAIL ("); \
+ fprintf(stdout, ## msg); \
+ printf(")\n"); \
+ goto error; \
+ }
+
+static int load_module(struct augeas *aug, struct test *test) {
+ char *fname, *fpath;
+ int r, result = -1;
+
+ if (test->module == NULL)
+ return 0;
+
+ if (asprintf(&fname, "%s.aug", test->module) == -1)
+ fail(true, "asprintf test->module");
+
+ for (int i=0; i < strlen(fname); i++)
+ fname[i] = tolower(fname[i]);
+
+ if (asprintf(&fpath, "%s/%s", lensdir, fname) == -1)
+ fail(true, "asprintf lensdir");
+
+ r = __aug_load_module_file(aug, fpath);
+ fail(r < 0, "Could not load %s", fpath);
+ result = 0;
+ error:
+ free(fname);
+ free(fpath);
+ return result;
+}
+
+static int run_one_test(struct test *test) {
+ int r;
+ struct augeas *aug = NULL;
+ struct memstream ms;
+ int result = 0;
+
+ MEMZERO(&ms, 1);
+
+ aug = aug_init("/dev/null", lensdir,
+ AUG_NO_STDINC|AUG_NO_MODL_AUTOLOAD|AUG_ENABLE_SPAN);
+ fail(aug == NULL, "aug_init");
+ fail(aug_error(aug) != AUG_NOERROR, "aug_init: errcode was %d",
+ aug_error(aug));
+
+ printf("%-30s ... ", test->name);
+
+ r = load_module(aug, test);
+ if (r < 0)
+ goto error;
+
+ r = init_memstream(&ms);
+ fail(r < 0, "init_memstream");
+
+ r = aug_srun(aug, ms.stream, test->cmd);
+ fail(r != test->result, "return value: expected %d, actual %d",
+ test->result, r);
+ fail(aug_error(aug) != test->errcode, "errcode: expected %s, actual %s",
+ errtokens[test->errcode], errtokens[aug_error(aug)]);
+
+ r = close_memstream(&ms);
+ fail(r < 0, "close_memstream");
+ fail(ms.buf == NULL, "close_memstream left buf NULL");
+
+ if (test->out != NULL) {
+ fail(STRNEQ(ms.buf, test->out), "output: expected '%s', actual '%s'",
+ test->out, ms.buf);
+ } else if (test->out_present) {
+ fail(strlen(ms.buf) == 0,
+ "output: expected some output");
+ } else {
+ fail(strlen(ms.buf) > 0,
+ "output: expected nothing, actual '%s'", ms.buf);
+ }
+ printf("PASS\n");
+
+ done:
+ free(ms.buf);
+ aug_close(aug);
+ return result;
+ error:
+ result = -1;
+ goto done;
+}
+
+static int run_tests(struct test *tests, int argc, char **argv) {
+ int result = EXIT_SUCCESS;
+
+ list_for_each(t, tests) {
+ if (! should_run(t->name, argc, argv))
+ continue;
+ if (run_one_test(t) < 0)
+ result = EXIT_FAILURE;
+ }
+ return result;
+}
+
+int main(int argc, char **argv) {
+ struct test *tests;
+
+ abs_top_srcdir = getenv("abs_top_srcdir");
+ if (abs_top_srcdir == NULL)
+ die("env var abs_top_srcdir must be set");
+
+ if (asprintf(&lensdir, "%s/lenses", abs_top_srcdir) < 0)
+ die("out of memory setting lensdir");
+
+ tests = read_tests();
+ int result = run_tests(tests, argc - 1, argv + 1);
+ free_tests(tests);
+ return result;
+}
+
+/*
+ * Local variables:
+ * indent-tabs-mode: nil
+ * c-indent-level: 4
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
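`read_tests()` above parses a small line-oriented format: a `test <name> <result> [<errcode>]` header, optional `prints`/`use` lines, and any other line appended to the command under test. The header tokenizing can be sketched in shell; the sample input below is made up and only mirrors the structure the C code expects:

```shell
#!/bin/sh
# Hypothetical shell rendition of the run.tests header parsing done by
# token()/inttok()/errtok() in read_tests().
parse() {
    while IFS= read -r line; do
        case "$line" in
            \#*|'') continue ;;                  # comment or blank line
            test\ *) set -- $line                # split on whitespace
                     echo "name=$2 result=$3 errcode=${4:-NOERROR}" ;;
            *)       echo "cmd+=$line" ;;        # accumulated into t->cmd
        esac
    done
}

parse <<'EOF'
# a comment
test lens-get 0
get /files/etc/hosts
EOF
```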
--- /dev/null
+#!/bin/sh
+
+# Test that we report an error when writing to nonexistent dirs
+# but that we do create new files correctly
+
+save_hosts() {
+opts="--nostdinc -r $ROOT -I $abs_top_srcdir/lenses"
+(augtool $opts | grep ^/augeas) <<EOF
+set /files/etc/hosts/1/ipaddr 127.0.0.1
+set /files/etc/hosts/1/canonical localhost
+save
+print /augeas/files/etc/hosts/error
+EOF
+}
+
+expected_errors() {
+cat <<EOF
+/augeas/files/etc/hosts/error = "mk_augtemp"
+/augeas/files/etc/hosts/error/message = "No such file or directory"
+EOF
+}
+
+ROOT=$abs_top_builddir/build/test-save-empty
+HOSTS=$ROOT/etc/hosts
+
+rm -rf $ROOT
+mkdir -p $ROOT
+ACTUAL=$(save_hosts)
+EXPECTED=$(expected_errors)
+
+if [ "$ACTUAL" != "$EXPECTED" ]
+then
+ echo "No error on missing /etc directory:"
+ echo "$ACTUAL"
+ exit 1
+fi
+
+mkdir -p $ROOT/etc
+ACTUAL=$(save_hosts)
+if [ -n "$ACTUAL" ] ; then
+ echo "Error creating file:"
+ echo $ACTUAL
+ exit 1
+fi
+
+if [ ! -f $HOSTS ] ; then
+ echo "File ${HOSTS} was not created"
+ exit 1
+fi
+
+printf '127.0.0.1\tlocalhost\n' > $HOSTS.expected
+
+if ! cmp $HOSTS $HOSTS.expected > /dev/null 2>&1 ; then
+ echo "Contents of $HOSTS are incorrect"
+ exit 1
+fi
--- /dev/null
+#!/bin/sh
+
+# Test manipulating the save flags in /augeas/save
+
+root=$abs_top_builddir/build/test-save-mode
+hosts=$root/etc/hosts
+augopts="--nostdinc -r $root -I $abs_top_srcdir/lenses"
+
+run_augtool() {
+ exp=$1
+ shift
+ augtool $augopts "$@" > /dev/null
+ status=$?
+ if [ "x$exp" = ok -a $status -ne 0 ] ; then
+ echo "augtool failed"
+ exit 1
+ elif [ "x$exp" = fail -a $status -eq 0 ] ; then
+ echo "augtool succeeded but should have failed"
+ exit 1
+ fi
+}
+
+assert_ipaddr() {
+ exp="/files/etc/hosts/1/ipaddr = $1"
+ act=$(augtool $augopts get /files/etc/hosts/1/ipaddr)
+
+ if [ "$act" != "$exp" ] ; then
+ printf "Expected: %s\n" "$exp"
+ printf "Actual : %s\n" "$act"
+ exit 1
+ fi
+}
+
+assert_file_exists() {
+ if [ ! -f "$1" ] ; then
+ echo "File $1 does not exist, but should"
+ exit 1
+ fi
+}
+
+assert_file_exists_not() {
+ if [ -f "$1" ] ; then
+ echo "File $1 exists, but should not"
+ exit 1
+ fi
+}
+
+setup() {
+# echo $*
+ rm -rf $root
+ mkdir -p $(dirname $hosts)
+ cat > $hosts <<EOF
+127.0.0.1 localhost
+EOF
+}
+
+setup "No /augeas/save"
+run_augtool fail <<EOF
+set /files/etc/hosts/1/ipaddr 127.0.0.2
+rm /augeas/save
+save
+EOF
+assert_ipaddr 127.0.0.1
+
+setup "Invalid /augeas/save"
+run_augtool fail <<EOF
+set /files/etc/hosts/1/ipaddr 127.0.0.2
+set /augeas/save "not a valid flag"
+save
+EOF
+assert_ipaddr 127.0.0.1
+
+setup "noop"
+run_augtool fail <<EOF
+set /files/etc/hosts/1/ipaddr 127.0.0.2
+set /augeas/save noop
+save
+EOF
+assert_ipaddr 127.0.0.1
+assert_file_exists_not $hosts.augnew
+assert_file_exists_not $hosts.augsave
+
+setup "newfile"
+run_augtool ok <<EOF
+set /files/etc/hosts/1/ipaddr 127.0.0.2
+set /augeas/save newfile
+save
+EOF
+assert_ipaddr 127.0.0.1
+assert_file_exists $hosts.augnew
+assert_file_exists_not $hosts.augsave
+
+setup "overwrite"
+run_augtool ok <<EOF
+set /files/etc/hosts/1/ipaddr 127.0.0.2
+set /augeas/save overwrite
+save
+EOF
+assert_ipaddr 127.0.0.2
+assert_file_exists_not $hosts.augnew
+assert_file_exists_not $hosts.augsave
+
+setup "backup"
+run_augtool ok <<EOF
+set /files/etc/hosts/1/ipaddr 127.0.0.2
+set /augeas/save backup
+save
+EOF
+assert_ipaddr 127.0.0.2
+assert_file_exists_not $hosts.augnew
+assert_file_exists $hosts.augsave
+
+
+augopts="${augopts} --autosave"
+
+setup "autosave"
+run_augtool ok <<EOF
+set /files/etc/hosts/1/ipaddr 127.0.0.2
+EOF
+assert_ipaddr 127.0.0.2
+assert_file_exists_not $hosts.augnew
+assert_file_exists_not $hosts.augsave
+
+setup "autosave command line"
+run_augtool ok set /files/etc/hosts/1/ipaddr 127.0.0.2
+assert_ipaddr 127.0.0.2
+assert_file_exists_not $hosts.augnew
+assert_file_exists_not $hosts.augsave
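`run_augtool` above folds the expected exit status into the assertion itself (`ok`/`fail` as first argument). The same helper pattern, reduced to a sketch with no Augeas dependency (`expect` and the sample commands are placeholders):

```shell
#!/bin/sh
# Hypothetical generic form of the run_augtool ok/fail helper: the first
# argument states whether the wrapped command must succeed or fail.
expect() {
    want=$1; shift
    "$@"; status=$?
    if [ "$want" = ok ] && [ $status -ne 0 ]; then
        echo "FAIL: '$*' exited $status"; exit 1
    elif [ "$want" = fail ] && [ $status -eq 0 ]; then
        echo "FAIL: '$*' succeeded but should have failed"; exit 1
    fi
    echo "PASS ($want): $*"
}

expect ok   true
expect fail false
```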
--- /dev/null
+/*
+ * test-save.c: test various aspects of saving
+ *
+ * Copyright (C) 2009-2016 David Lutterkort
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: David Lutterkort <lutter@redhat.com>
+ */
+
+#include <config.h>
+#include "augeas.h"
+#include "internal.h"
+#include "cutest.h"
+
+#include <stdio.h>
+#include <sys/stat.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <unistd.h>
+
+const char *abs_top_srcdir;
+const char *abs_top_builddir;
+char *root = NULL, *src_root = NULL;
+struct augeas *aug = NULL;
+
+#define die(msg) \
+ do { \
+ fprintf(stderr, "%s:%d: Fatal error: %s\n", __FILE__, __LINE__, msg); \
+ exit(EXIT_FAILURE); \
+ } while(0)
+
+static void setup(CuTest *tc) {
+ char *lensdir;
+
+ if (asprintf(&root, "%s/build/test-save/%s",
+ abs_top_builddir, tc->name) < 0) {
+ CuFail(tc, "failed to set root");
+ }
+
+ if (asprintf(&lensdir, "%s/lenses", abs_top_srcdir) < 0)
+ CuFail(tc, "asprintf lensdir failed");
+
+ umask(0022);
+ run(tc, "test -d %s && chmod -R u+w %s || :", root, root);
+ run(tc, "rm -rf %s", root);
+ run(tc, "mkdir -p %s", root);
+ run(tc, "cp -pr %s/* %s", src_root, root);
+ run(tc, "chmod -R u+w %s", root);
+
+ aug = aug_init(root, lensdir, AUG_NO_STDINC);
+ free(lensdir);
+ CuAssertPtrNotNull(tc, aug);
+}
+
+static void teardown(ATTRIBUTE_UNUSED CuTest *tc) {
+ /* testRemoveNoPermission makes <root>/etc nonwritable. That leads
+ to an error from 'make distcheck'; make sure that directory is
+ writable by the user after the test */
+ run(tc, "chmod u+w %s/etc", root);
+
+ aug_close(aug);
+ aug = NULL;
+ free(root);
+ root = NULL;
+}
+
+static void testRemoveNoPermission(CuTest *tc) {
+ if (getuid() == 0) {
+ puts("pending (testRemoveNoPermission): can't test permissions under root account");
+ return;
+ }
+
+ int r;
+ const char *errmsg;
+
+ // Prevent deletion of files
+ run(tc, "chmod 0500 %s/etc", root);
+
+ r = aug_rm(aug, "/files/etc/hosts");
+ CuAssertTrue(tc, r > 0);
+
+ r = aug_save(aug);
+ CuAssertIntEquals(tc, -1, r);
+
+ r = aug_get(aug, "/augeas/files/etc/hosts/error", &errmsg);
+ CuAssertIntEquals(tc, 1, r);
+ CuAssertPtrNotNull(tc, errmsg);
+ CuAssertStrEquals(tc, "unlink_orig", errmsg);
+}
+
+static void testSaveNewFile(CuTest *tc) {
+ int r;
+
+ r = aug_match(aug, "/augeas/files/etc/yum.repos.d/new.repo/path", NULL);
+ CuAssertIntEquals(tc, 0, r);
+
+ r = aug_set(aug, "/files/etc/yum.repos.d/new.repo/newrepo/baseurl",
+ "http://foo.com/");
+ CuAssertIntEquals(tc, 0, r);
+
+ r = aug_save(aug);
+ CuAssertIntEquals(tc, 0, r);
+
+ r = aug_match(aug, "/augeas/files/etc/yum.repos.d/new.repo/path", NULL);
+ CuAssertIntEquals(tc, 1, r);
+}
+
+static void testNonExistentLens(CuTest *tc) {
+ int r;
+
+ r = aug_rm(aug, "/augeas/load/*");
+ CuAssertTrue(tc, r >= 0);
+
+ r = aug_set(aug, "/augeas/load/Fake/lens", "Fake.lns");
+ CuAssertIntEquals(tc, 0, r);
+ r = aug_set(aug, "/augeas/load/Fake/incl", "/fake");
+ CuAssertIntEquals(tc, 0, r);
+ r = aug_set(aug, "/files/fake/entry", "value");
+ CuAssertIntEquals(tc, 0, r);
+
+ r = aug_save(aug);
+ CuAssertIntEquals(tc, -1, r);
+ r = aug_error(aug);
+ CuAssertIntEquals(tc, AUG_ENOLENS, r);
+}
+
+static void testMultipleXfm(CuTest *tc) {
+ int r;
+
+ r = aug_set(aug, "/augeas/load/Yum2/lens", "Yum.lns");
+ CuAssertIntEquals(tc, 0, r);
+ r = aug_set(aug, "/augeas/load/Yum2/incl", "/etc/yum.repos.d/*");
+ CuAssertIntEquals(tc, 0, r);
+
+ r = aug_set(aug, "/files/etc/yum.repos.d/fedora.repo/fedora/enabled", "0");
+ CuAssertIntEquals(tc, 0, r);
+ /* What we have set up is fine: two ways to save the same file with the
+ same lens */
+ r = aug_save(aug);
+ CuAssertIntEquals(tc, 0, r);
+
+ /* Now we make it bad: a different lens for the same file */
+ r = aug_set(aug, "/augeas/load/Yum2/lens", "@Subversion");
+ CuAssertIntEquals(tc, 0, r);
+
+ r = aug_set(aug, "/files/etc/yum.repos.d/fedora.repo/fedora/enabled", "1");
+ CuAssertIntEquals(tc, 0, r);
+
+ r = aug_save(aug);
+ CuAssertIntEquals(tc, -1, r);
+
+ r = aug_error(aug);
+ CuAssertIntEquals(tc, AUG_EMXFM, r);
+}
+
+static void testMtime(CuTest *tc) {
+ const char *s, *mtime2;
+ char *mtime1;
+ int r;
+
+ r = aug_set(aug, "/files/etc/hosts/1/alias[last() + 1]", "new");
+ CuAssertIntEquals(tc, 0, r);
+
+ r = aug_get(aug, "/augeas/files/etc/hosts/mtime", &s);
+ CuAssertIntEquals(tc, 1, r);
+ mtime1 = strdup(s);
+ CuAssertPtrNotNull(tc, mtime1);
+
+
+ r = aug_save(aug);
+ CuAssertIntEquals(tc, 0, r);
+
+ r = aug_get(aug, "/augeas/files/etc/hosts/mtime", &mtime2);
+ CuAssertIntEquals(tc, 1, r);
+
+ CuAssertStrNotEqual(tc, mtime1, mtime2);
+ CuAssertStrNotEqual(tc, "0", mtime2);
+ free(mtime1);
+}
+
+/* Check that loading and saving a file given with a relative path
+ * works. Bug #238
+ */
+static void testRelPath(CuTest *tc) {
+ int r;
+
+ r = aug_rm(aug, "/augeas/load/*");
+ CuAssertPositive(tc, r);
+
+ r = aug_set(aug, "/augeas/load/Hosts/lens", "Hosts.lns");
+ CuAssertRetSuccess(tc, r);
+ r = aug_set(aug, "/augeas/load/Hosts/incl", "etc/hosts");
+ CuAssertRetSuccess(tc, r);
+ r = aug_load(aug);
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_match(aug, "/files/etc/hosts/1/alias[ . = 'new']", NULL);
+ CuAssertIntEquals(tc, 0, r);
+
+ r = aug_set(aug, "/files/etc/hosts/1/alias[last() + 1]", "new");
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_save(aug);
+ CuAssertRetSuccess(tc, r);
+ r = aug_match(aug, "/augeas//error", NULL);
+ CuAssertIntEquals(tc, 0, r);
+
+ /* Force reloading the file */
+ r = aug_rm(aug, "/augeas/files//mtime");
+ CuAssertPositive(tc, r);
+
+ r = aug_load(aug);
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_match(aug, "/files/etc/hosts/1/alias[. = 'new']", NULL);
+ CuAssertIntEquals(tc, 1, r);
+}
+
+/* Check that loading and saving a file with // in the incl pattern works.
+ * RHBZ#1031084
+ */
+static void testDoubleSlashPath(CuTest *tc) {
+ int r;
+
+ r = aug_rm(aug, "/augeas/load/*");
+ CuAssertPositive(tc, r);
+
+ r = aug_set(aug, "/augeas/load/Hosts/lens", "Hosts.lns");
+ CuAssertRetSuccess(tc, r);
+ r = aug_set(aug, "/augeas/load/Hosts/incl", "/etc//hosts");
+ CuAssertRetSuccess(tc, r);
+ r = aug_load(aug);
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_match(aug, "/files/etc/hosts/1/alias[ . = 'new']", NULL);
+ CuAssertIntEquals(tc, 0, r);
+
+ r = aug_set(aug, "/files/etc/hosts/1/alias[last() + 1]", "new");
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_save(aug);
+ CuAssertRetSuccess(tc, r);
+ r = aug_match(aug, "/augeas//error", NULL);
+ CuAssertIntEquals(tc, 0, r);
+
+ /* Force reloading the file */
+ r = aug_rm(aug, "/augeas/files//mtime");
+ CuAssertPositive(tc, r);
+
+ r = aug_load(aug);
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_match(aug, "/files/etc/hosts/1/alias[. = 'new']", NULL);
+ CuAssertIntEquals(tc, 1, r);
+}
+
+/* Check the umask is followed when creating files
+ */
+static void testUmask(CuTest *tc, int tumask, mode_t expected_mode) {
+ int r;
+ struct stat buf;
+ char* fpath = NULL;
+
+ if (asprintf(&fpath, "%s/etc/test", root) < 0) {
+ CuFail(tc, "failed to set root");
+ }
+
+ umask(tumask);
+
+ r = aug_rm(aug, "/augeas/load/*");
+ CuAssertPositive(tc, r);
+
+ r = aug_set(aug, "/augeas/load/Test/lens", "Simplelines.lns");
+ CuAssertRetSuccess(tc, r);
+ r = aug_set(aug, "/augeas/load/Test/incl", "/etc/test");
+ CuAssertRetSuccess(tc, r);
+ r = aug_load(aug);
+ CuAssertRetSuccess(tc, r);
+ r = aug_set(aug, "/files/etc/test/1", "test");
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_save(aug);
+ CuAssertRetSuccess(tc, r);
+ r = aug_match(aug, "/augeas//error", NULL);
+ CuAssertIntEquals(tc, 0, r);
+
+ CuAssertIntEquals(tc, 0, stat(fpath, &buf));
+ CuAssertIntEquals(tc, expected_mode, buf.st_mode & 0777);
+ free(fpath);
+}
+static void testUmask077(CuTest *tc) {
+ testUmask(tc, 0077, 0600);
+}
+static void testUmask027(CuTest *tc) {
+ testUmask(tc, 0027, 0640);
+}
+static void testUmask022(CuTest *tc) {
+ testUmask(tc, 0022, 0644);
+}
+
+/* Test that handling of 'strange' characters in path names works as
+ * expected. In particular, that paths with characters that have special
+ * meaning in path expressions are escaped properly.
+ *
+ * This test isn't all that specific to save, but since these tests set up
+ * a copy of tests/root/ that is modifiable, it was convenient to put this
+ * test here.
+ */
+static void testPathEscaping(CuTest *tc) {
+ /* Path expression with characters escaped */
+ static const char *const weird =
+ "/files/etc/sysconfig/network-scripts/ifcfg-weird\\ \\[\\!\\]\\ \\(used\\ to\\ fail\\)";
+ /* Path without any escaping */
+ static const char *const weird_no_escape =
+ "/files/etc/sysconfig/network-scripts/ifcfg-weird [!] (used to fail)";
+
+ char *fname = NULL, *s = NULL;
+ const char *v;
+ int r;
+
+ /* Construct the file name in the file system and check the file is there */
+ r = asprintf(&fname, "%s%s", root, weird_no_escape + strlen("/files"));
+ CuAssertPositive(tc, r);
+
+ r = access(fname, R_OK);
+ CuAssertIntEquals(tc, 0, r);
+
+ /* Make sure weird is in the tree */
+ r = aug_match(aug, weird, NULL);
+ CuAssertIntEquals(tc, 1, r);
+
+ /* Make sure we can get to the metadata about weird */
+ r = asprintf(&s, "/augeas%s/path", weird);
+ CuAssertPositive(tc, r);
+
+ r = aug_get(aug, s, &v);
+ CuAssertIntEquals(tc, 1, r);
+ CuAssertStrEquals(tc, weird_no_escape, v);
+
+ /* Delete it from the tree and save it; make sure it gets removed
+ from the file system */
+ r = aug_rm(aug, weird);
+ CuAssertPositive(tc, r);
+
+ r = aug_save(aug);
+ CuAssertRetSuccess(tc, r);
+
+ r = access(fname, R_OK);
+ CuAssertIntEquals(tc, -1, r);
+ CuAssertIntEquals(tc, ENOENT, errno);
+
+ free(s);
+ free(fname);
+}
+
+/* Test that failure to save a file because we lack permission on the
+ * target file is handled gracefully.
+ *
+ * As reported in https://github.com/hercules-team/augeas/issues/178, this
+ * used to lead to a SEGV
+ */
+static void testSaveNoPermission(CuTest *tc) {
+ if (getuid() == 0) {
+ puts("pending (testSaveNoPermission): can't test permissions under root account");
+ return;
+ }
+
+ int r;
+ char *path = NULL;
+ const char *v;
+
+ r = asprintf(&path, "%s/etc/hosts", root);
+ CuAssertPositive(tc, r);
+
+ r = aug_set(aug, "/files/etc/hosts/1/alias[1]", "othername");
+ CuAssertRetSuccess(tc, r);
+
+ r = chmod(path, 0);
+ CuAssertRetSuccess(tc, r);
+
+ r = aug_save(aug);
+ CuAssertIntEquals(tc, -1, r);
+
+ r = aug_get(aug, "/augeas/files/etc/hosts/error", &v);
+ CuAssertIntEquals(tc, 1, r);
+ CuAssertStrEquals(tc, "replace_from_missing", v);
+ free(path);
+}
+
+int main(void) {
+ char *output = NULL;
+ CuSuite* suite = CuSuiteNew();
+
+ abs_top_srcdir = getenv("abs_top_srcdir");
+ if (abs_top_srcdir == NULL)
+ die("env var abs_top_srcdir must be set");
+
+ abs_top_builddir = getenv("abs_top_builddir");
+ if (abs_top_builddir == NULL)
+ die("env var abs_top_builddir must be set");
+
+ if (asprintf(&src_root, "%s/tests/root", abs_top_srcdir) < 0) {
+ die("failed to set src_root");
+ }
+
+ CuSuiteSetup(suite, setup, teardown);
+
+ SUITE_ADD_TEST(suite, testSaveNoPermission);
+ SUITE_ADD_TEST(suite, testSaveNewFile);
+ SUITE_ADD_TEST(suite, testRemoveNoPermission);
+ SUITE_ADD_TEST(suite, testNonExistentLens);
+ SUITE_ADD_TEST(suite, testMultipleXfm);
+ SUITE_ADD_TEST(suite, testMtime);
+ SUITE_ADD_TEST(suite, testRelPath);
+ SUITE_ADD_TEST(suite, testDoubleSlashPath);
+ SUITE_ADD_TEST(suite, testUmask077);
+ SUITE_ADD_TEST(suite, testUmask027);
+ SUITE_ADD_TEST(suite, testUmask022);
+ SUITE_ADD_TEST(suite, testPathEscaping);
+
+ CuSuiteRun(suite);
+ CuSuiteSummary(suite, &output);
+ CuSuiteDetails(suite, &output);
+ printf("%s\n", output);
+ free(output);
+ int result = suite->failCount;
+ CuSuiteFree(suite);
+ return result;
+}
+
+/*
+ * Local variables:
+ * indent-tabs-mode: nil
+ * c-indent-level: 4
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
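The `testUmask*` cases above pin down how the 0666 creation mode for new files is filtered through the process umask. The arithmetic can be checked directly from the shell; note that `stat -c` is GNU coreutils, so this sketch is Linux-specific:

```shell
#!/bin/sh
# Hypothetical demo of the umask cases tested above:
# 0666 & ~0077 = 0600, 0666 & ~0027 = 0640, 0666 & ~0022 = 0644.
dir=$(mktemp -d)
for mask in 077 027 022; do
    umask "$mask"
    touch "$dir/f$mask"                 # created with mode 0666 & ~umask
    printf '%s -> %s\n' "$mask" "$(stat -c %a "$dir/f$mask")"
done
rm -rf "$dir"
```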
--- /dev/null
+#!/bin/sh
+
+# This test checks that https://github.com/hercules-team/augeas/issues/397 is
+# fixed. It would otherwise lead to a segfault in augtool
+
+if [ -z "$abs_top_builddir" ]; then
+ echo "abs_top_builddir is not set"
+ exit 1
+fi
+
+if [ -z "$abs_top_srcdir" ]; then
+ echo "abs_top_srcdir is not set"
+ exit 1
+fi
+
+ROOT=$abs_top_builddir/build/test-span-rec-lens
+LENSES=$abs_top_srcdir/lenses
+
+FILE=$ROOT/etc/default/im-config
+
+rm -rf $ROOT
+mkdir -p $(dirname $FILE)
+cat <<EOF > $FILE
+if [ 1 ]; then
+# K
+else
+# I
+fi
+EOF
+
+# If bug 397 is not fixed, this will abort because of memory corruption
+augtool --nostdinc -I $LENSES -r $ROOT --span rm /files >/dev/null
--- /dev/null
+#!/bin/sh
+
+# Make sure we don't delete files simply because there was an error reading
+# them in
+
+root=$abs_top_builddir/build/test-unlink-error
+xinetd=$root/etc/xinetd.conf
+
+rm -rf $root
+mkdir -p $(dirname $xinetd)
+
+cat > $xinetd <<EOF
+intentional garbage
+EOF
+
+augtool --nostdinc -r $root -I $abs_top_srcdir/lenses > /dev/null <<EOF
+clear /files
+save
+EOF
+
+if [ ! -f $xinetd ] ; then
+ echo "Deleted xinetd.conf"
+ exit 1
+fi
--- /dev/null
+#!/bin/sh
+
+# Run the tests in lenses/tests through valgrind and report which ones leak
+set -e
+
+TOPDIR=$(cd $(dirname $0)/.. && pwd)
+[ -n "$top_builddir" ] || top_builddir=$TOPDIR
+[ -n "$top_srcdir" ] || top_srcdir=$TOPDIR
+
+
+AUGPARSE="libtool --mode=execute valgrind -q --leak-check=full ${top_builddir}/src/augparse"
+LENS_DIR=${top_srcdir}/lenses
+TESTS=$LENS_DIR/tests/test_*.aug
+
+for t in $TESTS
+do
+ echo Run $(basename $t .aug)
+ set +e
+ ${AUGPARSE} --nostdinc -I $LENS_DIR $t
+done
--- /dev/null
+/*
+ * test-xpath.c: check that XPath expressions yield the expected result
+ *
+ * Copyright (C) 2007-2016 David Lutterkort
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author: David Lutterkort <dlutter@redhat.com>
+ */
+
+#include <config.h>
+
+#include <stdlib.h>
+#include <stdio.h>
+#include <ctype.h>
+
+#include <augeas.h>
+#include <internal.h>
+#include <memory.h>
+
+#include "cutest.h"
+
+static const char *abs_top_srcdir;
+static char *root;
+
+#define KW_TEST "test"
+
+struct entry {
+ struct entry *next;
+ char *path;
+ char *value;
+};
+
+struct test {
+ struct test *next;
+ char *name;
+ char *match;
+ struct entry *entries;
+};
+
+#define die(msg) \
+ do { \
+ fprintf(stderr, "%d: Fatal error: %s\n", __LINE__, msg); \
+ exit(EXIT_FAILURE); \
+ } while(0)
+
+static char *skipws(char *s) {
+ while (isspace(*s)) s++;
+ return s;
+}
+
+static char *findws(char *s) {
+ while (*s && ! isspace(*s)) s++;
+ return s;
+}
+
+static char *token(char *s, char **tok) {
+ char *t = skipws(s);
+ s = findws(t);
+ *tok = strndup(t, s - t);
+ return s;
+}
+
+static char *token_to_eol(char *s, char **tok) {
+ char *t = skipws(s);
+ while (*s && *s != '\n') s++;
+ *tok = strndup(t, s - t);
+ return s;
+}
+
+static char *findpath(char *s, char **p) {
+ char *t = skipws(s);
+
+ while (*s && *s != '=') s++;
+ if (s > t) {
+ s -= 1;
+ while (*s && isspace(*s)) s -= 1;
+ s += 1;
+ }
+ *p = strndup(t, s - t);
+ return s;
+}
+
+static void free_tests(struct test *test) {
+ while (test != NULL) {
+ struct test *del = test;
+ test = test->next;
+ struct entry *entry = del->entries;
+ while (entry != NULL) {
+ struct entry *e = entry;
+ entry = entry->next;
+ free(e->path);
+ free(e->value);
+ free(e);
+ }
+ free(del->name);
+ free(del->match);
+ free(del);
+ }
+}
+
+static struct test *read_tests(void) {
+ char *fname;
+ FILE *fp;
+ char line[BUFSIZ];
+ struct test *result = NULL, *t = NULL;
+ int lc = 0;
+
+ if (asprintf(&fname, "%s/tests/xpath.tests", abs_top_srcdir) < 0)
+ die("asprintf fname");
+
+ if ((fp = fopen(fname, "r")) == NULL)
+ die("fopen xpath.tests");
+
+ while (fgets(line, BUFSIZ, fp) != NULL) {
+ lc += 1;
+ char *s = skipws(line);
+ if (*s == '#' || *s == '\0')
+ continue;
+ if (STREQLEN(s, KW_TEST, strlen(KW_TEST))) {
+ if (ALLOC(t) < 0)
+ die("out of memory");
+ list_append(result, t);
+ s = token(s + strlen(KW_TEST), &(t->name));
+ s = token_to_eol(s, &(t->match));
+ } else {
+ struct entry *e = NULL;
+ if (ALLOC(e) < 0)
+ die("out of memory");
+ list_append(t->entries, e);
+ s = findpath(s, &(e->path));
+ s = skipws(s);
+ if (*s) {
+ if (*s != '=') {
+ fprintf(stderr,
+ "line %d: either list only a path or path = value\n", lc);
+ die("xpath.tests has incorrect format");
+ }
+ s = token_to_eol(s + 1, &(e->value));
+ }
+ }
+ s = skipws(s);
+ if (*s != '\0') {
+ fprintf(stderr, "line %d: junk at end of line\n", lc);
+ die("xpath.tests has incorrect format");
+ }
+ }
+ fclose(fp);
+ free(fname);
+ return result;
+}
+
+static void print_pv(const char *path, const char *value) {
+ if (value)
+ printf(" %s = %s\n", path, value);
+ else
+ printf(" %s\n", path);
+}
+
+static int has_match(const char *path, char **matches, int nmatches) {
+ int found = 0;
+ for (int i=0; i < nmatches; i++) {
+ if (matches[i] != NULL && STREQ(path, matches[i])) {
+ found = 1;
+ break;
+ }
+ }
+ return found;
+}
+
+static int run_one_test(struct augeas *aug, struct test *t) {
+ int nexp = 0, nact;
+ char **matches = NULL;
+ int result = 0;
+
+ printf("%-30s ... ", t->name);
+ list_for_each(e, t->entries)
+ nexp++;
+ nact = aug_match(aug, t->match, &matches);
+ if (nact != nexp) {
+ result = -1;
+ } else {
+ struct entry *e;
+ const char *val;
+
+ for (e = t->entries; e != NULL; e = e->next) {
+ if (! has_match(e->path, matches, nact))
+ result = -1;
+ if (! streqv(e->value, "...")) {
+ aug_get(aug, e->path, &val);
+ if (!streqv(e->value, val))
+ result = -1;
+ }
+ }
+ }
+ if (result == 0) {
+ printf("PASS\n");
+ } else {
+ printf("FAIL\n");
+
+ printf(" Match: %s\n", t->match);
+ printf(" Expected: %d entries\n", nexp);
+ list_for_each(e, t->entries) {
+ print_pv(e->path, e->value);
+ }
+ if (nact < 0) {
+ printf(" Actual: aug_match failed\n");
+ } else {
+ printf(" Actual: %d entries\n", nact);
+ }
+ for (int i=0; i < nact; i++) {
+ const char *val;
+ aug_get(aug, matches[i], &val);
+ print_pv(matches[i], val);
+ }
+ }
+ for (int i=0; i < nact; i++) {
+ free(matches[i]);
+ }
+ free(matches);
+ return result;
+}
+
+static int test_rm_var(struct augeas *aug) {
+ int r;
+
+ printf("%-30s ... ", "rm_var");
+ r = aug_defvar(aug, "h", "/files/etc/hosts/2/ipaddr");
+ if (r < 0)
+ die("aug_defvar failed");
+
+ r = aug_match(aug, "$h", NULL);
+ if (r != 1) {
+ fprintf(stderr, "expected 1 match, got %d\n", r);
+ goto fail;
+ }
+
+ r = aug_rm(aug, "/files/etc/hosts/2");
+ if (r != 4) {
+ fprintf(stderr, "expected 4 nodes removed, got %d\n", r);
+ goto fail;
+ }
+
+ r = aug_match(aug, "$h", NULL);
+ if (r != 0) {
+ fprintf(stderr, "expected no match, got %d\n", r);
+ goto fail;
+ }
+ printf("PASS\n");
+ return 0;
+ fail:
+ printf("FAIL\n");
+ return -1;
+}
+
+static int test_defvar_nonexistent(struct augeas *aug) {
+ int r;
+
+ printf("%-30s ... ", "defvar_nonexistent");
+ r = aug_defvar(aug, "x", "/foo/bar");
+ if (r < 0)
+ die("aug_defvar failed");
+
+ r = aug_set(aug, "$x", "baz");
+ if (r != -1)
+ goto fail;
+ printf("PASS\n");
+ return 0;
+ fail:
+ printf("FAIL\n");
+ return -1;
+}
+
+static int test_defnode_nonexistent(struct augeas *aug) {
+ int r, created;
+
+ printf("%-30s ... ", "defnode_nonexistent");
+ r = aug_defnode(aug, "x", "/defnode/bar[0 = 1]", "foo", &created);
+ if (r != 1)
+ die("aug_defnode failed");
+ if (created != 1) {
+ fprintf(stderr, "defnode did not create a node\n");
+ goto fail;
+ }
+ r = aug_match(aug, "$x", NULL);
+ if (r != 1) {
+ fprintf(stderr, "$x must have exactly one entry, but has %d\n", r);
+ goto fail;
+ }
+
+ r = aug_defnode(aug, "x", "/defnode/bar", NULL, &created);
+ if (r != 1)
+ die("aug_defnode failed");
+ if (created != 0) {
+ fprintf(stderr, "defnode created node again\n");
+ goto fail;
+ }
+
+ // FIXME: get values and compare them, too
+
+ r = aug_set(aug, "$x", "baz");
+ if (r != 0)
+ goto fail;
+
+ r = aug_match(aug, "$x", NULL);
+ if (r != 1)
+ goto fail;
+
+ printf("PASS\n");
+ return 0;
+ fail:
+ printf("FAIL\n");
+ return -1;
+}
+
+static int test_invalid_regexp(struct augeas *aug) {
+ int r;
+
+ printf("%-30s ... ", "invalid_regexp");
+ r = aug_match(aug, "/files/*[ * =~ regexp('.*[aeiou')]", NULL);
+ if (r >= 0)
+ goto fail;
+
+ printf("PASS\n");
+ return 0;
+ fail:
+ printf("FAIL\n");
+ return -1;
+}
+
+static int test_wrong_regexp_flag(struct augeas *aug) {
+ int r;
+
+ printf("%-30s ... ", "wrong_regexp_flag");
+ r = aug_match(aug, "/files/*[ * =~ regexp('abc', 'o')]", NULL);
+ if (r >= 0)
+ goto fail;
+
+ printf("PASS\n");
+ return 0;
+ fail:
+ printf("FAIL\n");
+ return -1;
+}
+
+static int test_trailing_ws_in_name(struct augeas *aug) {
+ int r;
+
+ printf("%-30s ... ", "trailing_ws_in_name");
+
+ /* We used to incorrectly lop escaped whitespace off the end of a
+ * name. Make sure that we really create a tree node with label 'ws '
+ * with the set below, and look for it in a number of ways to ensure we
+ * are not lopping off trailing whitespace. */
+ r = aug_set(aug, "/ws\\ ", "1");
+ if (r < 0) {
+ fprintf(stderr, "failed to set '/ws ': %d\n", r);
+ goto fail;
+ }
+ /* We did not create a node with label 'ws' */
+ r = aug_get(aug, "/ws", NULL);
+ if (r != 0) {
+ fprintf(stderr, "created '/ws' instead: %d\n", r);
+ goto fail;
+ }
+
+ /* We did not create a node with label 'ws\t' (this also checks that we
+ * don't create something like 'ws\\' by dropping the last whitespace
+ * character). */
+ r = aug_get(aug, "/ws\\\t", NULL);
+ if (r != 0) {
+ fprintf(stderr, "found '/ws\\t': %d\n", r);
+ goto fail;
+ }
+
+ /* But we did create 'ws ' */
+ r = aug_get(aug, "/ws\\ ", NULL);
+ if (r != 1) {
+ fprintf(stderr, "could not find '/ws ': %d\n", r);
+ goto fail;
+ }
+
+ /* If the whitespace is preceded by an even number of '\\' chars, the
+ * whitespace itself is not escaped and must be stripped */
+ r = aug_set(aug, "/nows\\\\ ", "1");
+ if (r < 0) {
+ fprintf(stderr, "set of '/nows' failed: %d\n", r);
+ goto fail;
+ }
+ r = aug_get(aug, "/nows\\\\", NULL);
+ if (r != 1) {
+ fprintf(stderr, "could not get '/nows\\'\n");
+ goto fail;
+ }
+ printf("PASS\n");
+ return 0;
+ fail:
+ printf("FAIL\n");
+ return -1;
+}
+
+static int run_tests(struct test *tests, int argc, char **argv) {
+ char *lensdir;
+ struct augeas *aug = NULL;
+ int r, result = EXIT_SUCCESS;
+
+ if (asprintf(&lensdir, "%s/lenses", abs_top_srcdir) < 0)
+ die("asprintf lensdir failed");
+
+ aug = aug_init(root, lensdir, AUG_NO_STDINC|AUG_SAVE_NEWFILE);
+ if (aug == NULL)
+ die("aug_init");
+ r = aug_defvar(aug, "hosts", "/files/etc/hosts/*");
+ if (r != 6)
+ die("aug_defvar $hosts");
+ r = aug_defvar(aug, "localhost", "'127.0.0.1'");
+ if (r != 0)
+ die("aug_defvar $localhost");
+ r = aug_defvar(aug, "php", "/files/etc/php.ini");
+ if (r != 1)
+ die("aug_defvar $php");
+
+ list_for_each(t, tests) {
+ if (! should_run(t->name, argc, argv))
+ continue;
+ if (run_one_test(aug, t) < 0)
+ result = EXIT_FAILURE;
+ }
+
+ if (argc == 0) {
+ if (test_rm_var(aug) < 0)
+ result = EXIT_FAILURE;
+
+ if (test_defvar_nonexistent(aug) < 0)
+ result = EXIT_FAILURE;
+
+ if (test_defnode_nonexistent(aug) < 0)
+ result = EXIT_FAILURE;
+
+ if (test_invalid_regexp(aug) < 0)
+ result = EXIT_FAILURE;
+
+ if (test_wrong_regexp_flag(aug) < 0)
+ result = EXIT_FAILURE;
+
+ if (test_trailing_ws_in_name(aug) < 0)
+ result = EXIT_FAILURE;
+ }
+ aug_close(aug);
+ free(lensdir);
+
+ return result;
+}
+
+int main(int argc, char **argv) {
+ struct test *tests;
+
+ abs_top_srcdir = getenv("abs_top_srcdir");
+ if (abs_top_srcdir == NULL)
+ die("env var abs_top_srcdir must be set");
+
+ if (asprintf(&root, "%s/tests/root", abs_top_srcdir) < 0) {
+ die("failed to set root");
+ }
+
+ tests = read_tests();
+ int result = run_tests(tests, argc - 1, argv + 1);
+ /*
+ list_for_each(t, tests) {
+ printf("Test %s\n", t->name);
+ printf("match %s\n", t->match);
+ list_for_each(e, t->entries) {
+ if (e->value)
+ printf(" %s = %s\n", e->path, e->value);
+ else
+ printf(" %s\n", e->path);
+ }
+ }
+ */
+ free_tests(tests);
+ return result;
+}
+
+/*
+ * Local variables:
+ * indent-tabs-mode: nil
+ * c-indent-level: 4
+ * c-basic-offset: 4
+ * tab-width: 4
+ * End:
+ */
--- /dev/null
+# Tests of the XPath matching
+
+# Blank lines and lines starting with '#' are ignored
+#
+# Each test consists of one line declaring the test, followed by a complete
+# list of the expected results of the match.
+#
+# The test is declared with a line 'test NAME MATCH'. A result is either
+# just a path (meaning that the value associated with that node must be
+# NULL) or of the form PATH = VALUE, meaning that the value for the node at
+# PATH must be VALUE. If VALUE is '...', the test does not check the value
+# associated with PATH in the tree.
+#
+# The MATCH XPath expression is matched against a fixed tree (the one from the
+# root/ subdirectory), and the result of aug_match is compared with the
+# results listed in the test.
+#
+# The test framework sets up variables:
+# hosts /files/etc/hosts/*
+# localhost '127.0.0.1'
+# php /files/etc/php.ini
+
+# Very simple to warm up
+test wildcard /files/etc/hosts/*/ipaddr
+ /files/etc/hosts/1/ipaddr = 127.0.0.1
+ /files/etc/hosts/2/ipaddr = 172.31.122.14
+
+test wildcard-var $hosts/ipaddr
+ /files/etc/hosts/1/ipaddr = 127.0.0.1
+ /files/etc/hosts/2/ipaddr = 172.31.122.14
+
+# Compare the value of the current node with a constant
+test self-value /files/etc/hosts/*/ipaddr[ . = '127.0.0.1' ]
+ /files/etc/hosts/1/ipaddr = 127.0.0.1
+
+test self-value-var $hosts/ipaddr[ . = $localhost ]
+ /files/etc/hosts/1/ipaddr = 127.0.0.1
+
+# Find nodes that have a child named 'ipaddr' with a fixed value
+test child-value /files/etc/hosts/*[ipaddr = '127.0.0.1']
+ /files/etc/hosts/1
+
+test child-value-var $hosts[ipaddr = $localhost]
+ /files/etc/hosts/1
+
+# Find nodes that have a child 'ipaddr' that has no value
+test child-nil-value /files/etc/hosts/*[ipaddr = '']
+
+test child-nil-value-var $hosts[ipaddr = '']
+
+# Find nodes that have no value
+test self-nil-value /files/etc/hosts/*[. = '']
+ /files/etc/hosts/1
+ /files/etc/hosts/2
+
+test self-nil-value-var $hosts[. = '']
+ /files/etc/hosts/1
+ /files/etc/hosts/2
+
+# Match over two levels of the tree
+test two-wildcards /files/etc/*/*[ipaddr='127.0.0.1']
+ /files/etc/hosts/1
+
+test pam-system-auth /files/etc/pam.d/*/*[module = 'system-auth']
+ /files/etc/pam.d/login/2
+ /files/etc/pam.d/login/4
+ /files/etc/pam.d/login/5
+ /files/etc/pam.d/login/8
+ /files/etc/pam.d/postgresql/1
+ /files/etc/pam.d/postgresql/2
+ /files/etc/pam.d/newrole/1
+ /files/etc/pam.d/newrole/2
+ /files/etc/pam.d/newrole/3
+
+# Multiple predicates are treated with 'and'
+test pam-two-preds /files/etc/pam.d/*/*[module = 'system-auth'][type = 'account']
+ /files/etc/pam.d/login/4
+ /files/etc/pam.d/postgresql/2
+ /files/etc/pam.d/newrole/2
+
+# Find nodes that have siblings with a given value
+test pam-two-preds-control /files/etc/pam.d/*/*[module = 'system-auth'][type = 'account']/control
+ /files/etc/pam.d/login/4/control = include
+ /files/etc/pam.d/postgresql/2/control = include
+ /files/etc/pam.d/newrole/2/control = include
+
+# last() gives the last node with a certain name
+test last /files/etc/hosts/*[ipaddr = "127.0.0.1"]/alias[last()]
+ /files/etc/hosts/1/alias[3] = galia
+
+test last-var $hosts[ipaddr = $localhost]/alias[last()]
+ /files/etc/hosts/1/alias[3] = galia
+
+# We can get nodes counting from the right with 'last()-N'
+test last-minus-one /files/etc/hosts/*[ipaddr = "127.0.0.1"]/alias[ last() - 1 ]
+ /files/etc/hosts/1/alias[2] = galia.watzmann.net
+
+# Make sure we look at all nodes with a given label (ticket #23)
+test transparent-multi-node /files/etc/ssh/sshd_config/AcceptEnv/10
+ /files/etc/ssh/sshd_config/AcceptEnv[2]/10 = LC_ADDRESS
+
+test abbrev-descendants /files/etc/pam.d//1
+ /files/etc/pam.d/login/1
+ /files/etc/pam.d/postgresql/1
+ /files/etc/pam.d/newrole/1
+
+test descendant-or-self /files/descendant-or-self :: 4
+ /files/etc/ssh/sshd_config/AcceptEnv[1]/4 = LC_TIME
+ /files/etc/ssh/ssh_config/Host/SendEnv[1]/4 = LC_TIME
+ /files/etc/ssh/ssh_config/Host/SendEnv[2]/4 = LC_TELEPHONE
+ /files/etc/aliases/4
+ /files/etc/selinux/semanage.conf/ignoredirs/4 = /dev
+ /files/etc/fstab/4
+ /files/etc/pam.d/login/4
+ /files/etc/pam.d/newrole/4
+ /files/etc/inittab/4
+
+test descendant /files/etc/aliases/4/descendant::4
+
+test descendant-or-self-2 /files/etc/aliases/4/descendant-or-self::4
+ /files/etc/aliases/4
+
+# No matches because the predicate asks if there is a toplevel node
+# 'ipaddr' with the given value
+test abs-locpath /files/etc/hosts/*[/ipaddr = '127.0.0.1']/canonical
+
+test rel-pred /files/etc/hosts/*/canonical[../ipaddr = '127.0.0.1']
+ /files/etc/hosts/1/canonical = localhost.localdomain
+
+# Not the best way to write this, but entirely acceptable
+test path-with-parent /files/etc/hosts/*/canonical[../ipaddr = '127.0.0.1']/../alias
+ /files/etc/hosts/1/alias[1] = localhost
+ /files/etc/hosts/1/alias[2] = galia.watzmann.net
+ /files/etc/hosts/1/alias[3] = galia
+
+test node-exists-pred /files/etc/hosts/*/canonical[../alias]
+ /files/etc/hosts/1/canonical = localhost.localdomain
+ /files/etc/hosts/2/canonical = orange.watzmann.net
+
+test ipaddr-child //*[ipaddr]
+ /files/etc/hosts/1
+ /files/etc/hosts/2
+
+test ipaddr-sibling //*[../ipaddr]
+ /files/etc/hosts/1/ipaddr = 127.0.0.1
+ /files/etc/hosts/1/canonical = localhost.localdomain
+ /files/etc/hosts/1/alias[1] = localhost
+ /files/etc/hosts/1/alias[2] = galia.watzmann.net
+ /files/etc/hosts/1/alias[3] = galia
+ /files/etc/hosts/2/ipaddr = 172.31.122.14
+ /files/etc/hosts/2/canonical = orange.watzmann.net
+ /files/etc/hosts/2/alias = orange
+
+test lircd-ancestor //*[ancestor::kudzu][label() != '#comment']
+ /augeas/files/etc/sysconfig/kudzu/path = /files/etc/sysconfig/kudzu
+ /augeas/files/etc/sysconfig/kudzu/mtime = ...
+ /augeas/files/etc/sysconfig/kudzu/lens = @Shellvars
+ /augeas/files/etc/sysconfig/kudzu/lens/info = ...
+ /files/etc/sysconfig/kudzu/SAFE = no
+
+test wildcard-last /files/etc/hosts/*[position() = last()]
+ /files/etc/hosts/2
+
+test wildcard-not-last /files/etc/hosts/*[position() != last()][ipaddr]
+ /files/etc/hosts/1
+
+test nodeset-nodeset-eq /files/etc/sysconfig/network-scripts/*[BRIDGE = /files/etc/sysconfig/network-scripts/ifcfg-br0/DEVICE]
+ /files/etc/sysconfig/network-scripts/ifcfg-eth0
+
+test last-ssh-service /files/etc/services/service-name[port = '22'][last()]
+ /files/etc/services/service-name[24] = ssh
+
+test count-one-alias /files/etc/hosts/*[count(alias) = 1]
+ /files/etc/hosts/2
+
+test number-gt /files/etc/hosts/*[count(alias) > 1]
+ /files/etc/hosts/1
+
+test pred-or /files/etc/hosts/*[canonical = 'localhost' or alias = 'localhost']
+ /files/etc/hosts/1
+
+test pred-and-or /files/etc/hosts/*[(canonical = 'localhost' or alias = 'localhost') and ipaddr = '127.0.0.1']
+ /files/etc/hosts/1
+
+# We used to parse this as '/files/etc/.' followed by garbage, that
+# was silently ignored. This path must not match anything, instead of
+# every child of /files/etc
+test path-with-dot /files/etc/.notthere
+
+test str-neq /files/etc/*['foo' != 'foo']
+
+# label() returns the label of the context node as a string
+test label-neq /files/etc/hosts/*[label() != '#comment']
+ /files/etc/hosts/1
+ /files/etc/hosts/2
+
+# 'and' and 'or' need to coerce types to boolean
+test coerce-or /files[/files/etc/hosts/*[ipaddr = '127.0.0.1'] or /files/etc/aliases/*[name = 'root'] ]
+ /files
+
+test coerce-and-true /files['foo' and count(/files/etc/hosts/*) ]
+ /files
+
+test coerce-and-false /files['' and count(/none) and /none]
+
+test preceding-sibling /files/etc/hosts/*/preceding-sibling::*[alias = 'localhost']
+ /files/etc/hosts/1
+
+test preceding-sibling-pred /files/etc/grub.conf/*[preceding-sibling::*[1] = ../default]
+ /files/etc/grub.conf/timeout = 5
+
+test preceding-sibling-pred2 /files/etc/grub.conf/*[preceding-sibling::*[1][self::default]]
+ /files/etc/grub.conf/timeout = 5
+
+test following-sibling /files/etc/hosts/1/*/following-sibling::alias
+ /files/etc/hosts/1/alias[1] = localhost
+ /files/etc/hosts/1/alias[2] = galia.watzmann.net
+ /files/etc/hosts/1/alias[3] = galia
+
+test following-sibling-pred /files/etc/hosts/1/*[following-sibling::alias]
+ /files/etc/hosts/1/ipaddr = 127.0.0.1
+ /files/etc/hosts/1/canonical = localhost.localdomain
+ /files/etc/hosts/1/alias[1] = localhost
+ /files/etc/hosts/1/alias[2] = galia.watzmann.net
+
+test regexp1 /files/etc/sysconfig/network-scripts/*[label() =~ regexp('.*-eth0')]
+ /files/etc/sysconfig/network-scripts/ifcfg-eth0
+
+test regexp2 /files/etc/hosts/*[* =~ regexp('127\..*')]
+ /files/etc/hosts/1
+
+test regexp3 /files/etc/hosts/*[ipaddr =~ regexp(/files/etc/hosts/*/ipaddr)]
+ /files/etc/hosts/1
+ /files/etc/hosts/2
+
+# Check that we don't crash when the nodeset contains all NULL's
+test regexp4 /files/etc/hosts/*[ipaddr =~ regexp(/files/etc/hosts/*[ipaddr])]
+
+# Check case-insensitive matches
+test regexp5 /files/etc/sysconfig/network-scripts/*[label() =~ regexp('.*-ETH0', 'i')]
+ /files/etc/sysconfig/network-scripts/ifcfg-eth0
+
+test regexp6 /files/etc/hosts/*[ipaddr =~ regexp(/files/etc/hosts/*/ipaddr, 'i')]
+ /files/etc/hosts/1
+ /files/etc/hosts/2
+
+test glob1 /files[ 'axxa' =~ glob('a*a') ]
+ /files
+
+test glob2 /files[ 'axxa' =~ glob('a?[a-z]a') ]
+ /files
+
+test glob3 /files[ '^a' =~ glob('^a') ]
+ /files
+
+test glob4 /augeas/load/*[ '/etc/hosts' =~ glob(incl) ]
+ /augeas/load/Hosts
+
+test glob5 /files[ '/files/etc/hosts/1' =~ glob('/files/*/1') ]
+
+test glob6 /files[ '/files/etc/hosts/1' =~ glob('/files/*/*/1') ]
+ /files
+
+test glob_nomatch /files/etc/hosts/*[ipaddr][ ipaddr !~ glob(/files/etc/hosts/*/ipaddr[1]) ]
+ /files/etc/hosts/2
+
+test glob_for_lens /augeas/load/*[ '/etc/hosts/1/ipaddr' =~ glob(incl) + regexp('/.*') ]/lens
+ /augeas/load/Hosts/lens = @Hosts
+
+# Union of nodesets
+test union (/files/etc/yum.conf | /files/etc/yum.repos.d/*)/*/gpgcheck
+ /files/etc/yum.conf/main/gpgcheck = 1
+ /files/etc/yum.repos.d/fedora-updates.repo/updates/gpgcheck = 1
+ /files/etc/yum.repos.d/fedora-updates.repo/updates-debuginfo/gpgcheck = 1
+ /files/etc/yum.repos.d/fedora-updates.repo/updates-source/gpgcheck = 1
+ /files/etc/yum.repos.d/fedora.repo/fedora/gpgcheck = 1
+ /files/etc/yum.repos.d/fedora.repo/fedora-debuginfo/gpgcheck = 1
+ /files/etc/yum.repos.d/fedora.repo/fedora-source/gpgcheck = 1
+ /files/etc/yum.repos.d/remi.repo/remi/gpgcheck = 1
+ /files/etc/yum.repos.d/remi.repo/remi-test/gpgcheck = 1
+
+test else_nodeset_lhs (/files/etc/yum.conf else /files/etc/yum.repos.d/*)/*/gpgcheck
+ /files/etc/yum.conf/main/gpgcheck = 1
+
+test else_nodeset_rhs (/files/etc/yum.conf.missing else /files/etc/yum.repos.d/fedora.repo)/*/gpgcheck
+ /files/etc/yum.repos.d/fedora.repo/fedora/gpgcheck = 1
+ /files/etc/yum.repos.d/fedora.repo/fedora-debuginfo/gpgcheck = 1
+ /files/etc/yum.repos.d/fedora.repo/fedora-source/gpgcheck = 1
+
+test else_nodeset_nomatch (/files/left else /files/right)
+
+# Paths with whitespace in them
+test php1 $php/mail function
+ /files/etc/php.ini/mail\ function
+
+test php2 $php[mail function]
+ /files/etc/php.ini
+
+test php3 $php[count(mail function) = 1]
+ /files/etc/php.ini
+
+test php4 $php/mail function/SMTP
+ /files/etc/php.ini/mail\ function/SMTP = localhost
+
+test php5 $php/mail\ function
+ /files/etc/php.ini/mail\ function
+
+test expr-or /files/etc/group/root/*[self::gid or self::user]
+ /files/etc/group/root/gid = 0
+ /files/etc/group/root/user = root
+
+test int-ns-eq /files/etc/group/*[int(gid) = 30]
+ /files/etc/group/gopher
+
+test int-ns-lt /files/etc/group/*[int(gid) < 3]
+ /files/etc/group/root
+ /files/etc/group/bin
+ /files/etc/group/daemon
+
+test int-bool-f /files[int(1 = 0) > 0]
+
+test int-bool-t /files[int(1 = 1) > 0]
+ /files
+
+# Relative paths, relying on default /files context
+test ctx_file etc/network/interfaces
+ /files/etc/network/interfaces
+
+test ctx_file_pred etc/network/interfaces/iface[. = "lo"]
+ /files/etc/network/interfaces/iface[1] = lo
+
+# Test matching with characters that need escaping in the filename
+test escape1 /files/etc/sysconfig/network-scripts/*
+ /files/etc/sysconfig/network-scripts/ifcfg-eth0
+ /files/etc/sysconfig/network-scripts/ifcfg-wlan0
+ /files/etc/sysconfig/network-scripts/ifcfg-weird\ \[\!\]\ \(used\ to\ fail\)
+ /files/etc/sysconfig/network-scripts/ifcfg-lo
+ /files/etc/sysconfig/network-scripts/ifcfg-br0
+
+test escape2 /files/etc/sysconfig/network-scripts/ifcfg-weird\ \[\!\]\ \(used\ to\ fail\)/DEVICE
+ /files/etc/sysconfig/network-scripts/ifcfg-weird\ \[\!\]\ \(used\ to\ fail\)/DEVICE = weird
+
+test escape3 /files/etc/sysconfig/network-scripts/*[DEVICE = 'weird']
+ /files/etc/sysconfig/network-scripts/ifcfg-weird\ \[\!\]\ \(used\ to\ fail\)
+
+# Test matching in the presence of hidden nodes
+test hidden1 /files/etc/security/limits.conf/*[label() != '#comment'][1]
+ /files/etc/security/limits.conf/domain[1] = @jackuser
+
+test seqaxismatchall /files/etc/hosts/seq::*
+ /files/etc/hosts/1
+ /files/etc/hosts/2
+
+test seqaxismatchpos /files/etc/hosts/seq::*[2]
+ /files/etc/hosts/2
+
+test seqaxismatchlabel /files/etc/hosts/seq::2
+ /files/etc/hosts/2
+
+test seqaxismatchregexp /files/etc/hosts/seq::*[canonical =~ regexp('.*orange.*')]
+ /files/etc/hosts/2
+
+test else_simple_lhs /files/etc/fstab/*[passno='1' else passno='2']/file
+ /files/etc/fstab/1/file = /
+
+test else_simple_rhs /files/etc/fstab/*[passno='9' else passno='2']/file
+ /files/etc/fstab/2/file = /boot
+ /files/etc/fstab/5/file = /home
+ /files/etc/fstab/8/file = /local
+ /files/etc/fstab/9/file = /var/lib/xen/images
+
+test else_haschild_lhs /files/etc/hosts/*[alias else alias='orange']/canonical
+ /files/etc/hosts/1/canonical = localhost.localdomain
+ /files/etc/hosts/2/canonical = orange.watzmann.net
+
+test else_chain1 /files/etc/fstab/*[passno='9' else passno='1' else passno='2']/file
+ /files/etc/fstab/1/file = /
+
+test else_chain2 /files/etc/fstab/*[passno='9' else passno='8' else passno='2']/file
+ /files/etc/fstab/2/file = /boot
+ /files/etc/fstab/5/file = /home
+ /files/etc/fstab/8/file = /local
+ /files/etc/fstab/9/file = /var/lib/xen/images
+
+# Although there are nodes matching passno=2 and nodes matching dump=0,
+# there is no node with both
+test and_else_lhs /files/etc/fstab/*[passno='2' and ( dump='0' else dump='1')]/file
+
+test and_else_rhs /files/etc/fstab/*[passno='2' and ( dump='nemo' else dump='1')]/file
+ /files/etc/fstab/2/file = /boot
+ /files/etc/fstab/5/file = /home
+ /files/etc/fstab/8/file = /local
+ /files/etc/fstab/9/file = /var/lib/xen/images