--- /dev/null
+0.12:
+ - Use a tiny retry util helper class
+ for performing process locking
+ retries.
+0.11:
+ - Directly use monotonic.monotonic.
+ - Use BLATHER level for previously
+ INFO/DEBUG statements.
+0.10:
+ - Add LICENSE in generated source tarballs
+ - Add a version.py file that can be used to
+ extract the current version.
+0.9:
+ - Allow providing a non-standard (eventlet or
+   other) condition class to the r/w lock for
+   cases where it is useful to do so.
+ - Instead of having the r/w lock take a find
+   eventlet keyword argument, allow it to be
+   provided a function that will later be
+   called to get the current thread. This allows
+   the current *hack* to be easily removed
+   by users (if they so desire).
+0.8:
+ - Add fastener logo (from openclipart).
+ - Ensure r/w writer -> reader -> writer
+ lock acquisition.
+ - Attempt to use the monotonic pypi module,
+   if it's installed, for monotonically increasing
+   time on python versions where this is not
+   built-in.
+0.7:
+ - Add helpful `locked` decorator that can
+ lock a method using a found attribute (a lock
+ object or list of lock objects) in the
+ instance the method is attached to.
+ - Expose top level `try_lock` function.
+0.6:
+ - Allow the sleep function to be provided (so that
+   various alternatives other than time.sleep can
+   be used), e.g. eventlet.sleep (or other).
+ - Remove dependency on oslo.utils (replace with
+ small utility code that achieves the same effect).
+0.5:
+ - Make it possible to provide an acquisition timeout
+   to the interprocess lock (acquisition will return
+   False when it can not complete in the desired
+   time).
+0.4:
+ - Have the interprocess lock acquire take a blocking
+   keyword argument (defaulting to True) that can avoid
+   blocking while trying to acquire the lock.
+0.3:
+ - Renamed from 'shared_lock' to 'fasteners'
+0.2.1:
+ - Fix delay not working as expected
+0.2:
+ - Add an interprocess lock
+0.1:
+ - Add travis yaml file
+ - Initial commit/import
--- /dev/null
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+
--- /dev/null
+include LICENSE
+include ChangeLog
+include README.rst
--- /dev/null
+Fasteners
+=========
+
+.. image:: https://travis-ci.org/harlowja/fasteners.png?branch=master
+ :target: https://travis-ci.org/harlowja/fasteners
+
+.. image:: https://readthedocs.org/projects/fasteners/badge/?version=latest
+ :target: https://readthedocs.org/projects/fasteners/?badge=latest
+ :alt: Documentation Status
+
+.. image:: https://img.shields.io/pypi/dm/fasteners.svg
+ :target: https://pypi.python.org/pypi/fasteners/
+ :alt: Downloads
+
+.. image:: https://img.shields.io/pypi/v/fasteners.svg
+ :target: https://pypi.python.org/pypi/fasteners/
+ :alt: Latest Version
+
+Overview
+--------
+
+A python `package`_ that provides useful locks.
+
+It includes the following.
+
+Locking decorator
+*****************
+
+* Helpful ``locked`` decorator that acquires an instance
+  object's lock (or locks) on method entry and releases
+  them on method exit (see the example below).
+
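+A minimal usage sketch (the class name here is only illustrative; by
+default the decorator looks for a ``_lock`` attribute on the instance)::
+
+    import threading
+
+    import fasteners
+
+    class Counter(object):
+
+        def __init__(self):
+            self._lock = threading.Lock()
+            self.value = 0
+
+        @fasteners.locked
+        def increment(self):
+            # self._lock is held while this method runs.
+            self.value += 1
+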
+Reader-writer locks
+*******************
+
+* Multiple readers (at the same time).
+* A single writer (blocking any readers).
+* Helpful ``read_locked`` and ``write_locked`` decorators
+  (see the example below).
+
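+A minimal usage sketch (illustrative only)::
+
+    import fasteners
+
+    rw_lock = fasteners.ReaderWriterLock()
+
+    with rw_lock.write_lock():
+        pass  # only one writer (and no readers) at a time
+
+    with rw_lock.read_lock():
+        pass  # many readers may be active at the same time
+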
+Inter-process locks
+*******************
+
+* A single writer using file-based locking (these locks are
+  automatically released on process exit, even if ``release()``
+  or ``__exit__`` is never called).
+* Helpful ``interprocess_locked`` decorator (see the example
+  below).
+
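+A minimal usage sketch (the lock file path is only an example)::
+
+    import fasteners
+
+    lock = fasteners.InterProcessLock('/tmp/tmp_lock_file')
+    with lock:
+        pass  # no other process can hold this lock while in here
+
+    @fasteners.interprocess_locked('/tmp/tmp_lock_file')
+    def exclusive_work():
+        pass  # only one process at a time runs this function
+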
+Generic helpers
+***************
+
+* A ``try_lock`` helper context manager that will attempt to
+  acquire a given lock and report whether the attempt
+  succeeded or failed (if it succeeds, further code in the
+  context manager will be run **with** the lock acquired);
+  see the example below.
+
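+A minimal usage sketch (illustrative only)::
+
+    import threading
+
+    import fasteners
+
+    some_lock = threading.Lock()
+    with fasteners.try_lock(some_lock) as gotten:
+        if gotten:
+            pass  # the lock was acquired and will be released on exit
+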
+.. _package: https://pypi.python.org/pypi/fasteners
--- /dev/null
+python-fasteners (0.12.0-2ubuntu1) xenial; urgency=low
+
+ * Merge from Debian unstable. Remaining changes:
+ - d/control: Drop BD on sphinx-rtd-theme for Ubuntu MIR; optional
+ dependency and no documentation is built as part of this package.
+
+ -- James Page <james.page@ubuntu.com> Thu, 04 Feb 2016 16:49:09 +0000
+
+python-fasteners (0.12.0-2) unstable; urgency=medium
+
+ * Fixed typo in long description (Closes: #802195).
+
+ -- Thomas Goirand <zigo@debian.org> Fri, 23 Oct 2015 12:24:51 +0000
+
+python-fasteners (0.12.0-1ubuntu1) wily; urgency=medium
+
+ * d/control: Drop BD on sphinx-rtd-theme for Ubuntu MIR; optional
+ dependency and no documentation is built as part of this package.
+
+ -- James Page <james.page@ubuntu.com> Tue, 04 Aug 2015 11:03:16 +0200
+
+python-fasteners (0.12.0-1) unstable; urgency=medium
+
+ * Initial release. (Closes: #792497)
+
+ -- Thomas Goirand <zigo@debian.org> Wed, 15 Jul 2015 14:13:32 +0200
+
--- /dev/null
+Source: python-fasteners
+Section: python
+Priority: optional
+Maintainer: Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com>
+XSBC-Original-Maintainer: PKG OpenStack <openstack-devel@lists.alioth.debian.org>
+Uploaders: Thomas Goirand <zigo@debian.org>,
+Build-Depends: debhelper (>= 9),
+ dh-python,
+ openstack-pkg-tools,
+ python-all,
+ python-setuptools,
+ python-sphinx,
+ python3-all,
+ python3-setuptools,
+Build-Depends-Indep: python-concurrent.futures,
+ python-monotonic,
+ python-nose,
+ python-six,
+ python-testtools,
+ python3-monotonic,
+ python3-nose,
+ python3-six,
+ python3-testtools,
+Standards-Version: 3.9.6
+Vcs-Browser: http://anonscm.debian.org/gitweb/?p=openstack/python-fasteners.git
+Vcs-Git: git://anonscm.debian.org/openstack/python-fasteners.git
+Homepage: https://github.com/harlowja/fasteners
+
+Package: python-fasteners
+Architecture: all
+Depends: python-monotonic,
+ python-six,
+ ${misc:Depends},
+ ${python:Depends},
+Description: provides useful locks - Python 2.7
+ Fasteners is a Python package that provides useful locks. It includes a
+ locking decorator (which acquires an instance object's lock(s) on method
+ entry and releases them on method exit), reader-writer locks, inter-process
+ locks and generic lock helpers.
+ .
+ This package contains the Python 2.7 module.
+
+Package: python3-fasteners
+Architecture: all
+Depends: python3-monotonic,
+ python3-six,
+ ${misc:Depends},
+ ${python3:Depends},
+Description: provides useful locks - Python 3.x
+ Fasteners is a Python package that provides useful locks. It includes a
+ locking decorator (which acquires an instance object's lock(s) on method
+ entry and releases them on method exit), reader-writer locks, inter-process
+ locks and generic lock helpers.
+ .
+ This package contains the Python 3.x module.
--- /dev/null
+Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
+Upstream-Name: fasteners
+Source: https://github.com/harlowja/fasteners
+
+Files: debian/*
+Copyright: (c) 2014, Thomas Goirand <zigo@debian.org>
+License: Apache-2.0
+
+Files: *
+Copyright: (c) 2013, Joshua Harlow <harlowja@yahoo-inc.com>
+ (c) 2014-2015, Yahoo INC.
+ (c) 2012-2015, OpenStack Foundation
+License: Apache-2.0
+
+License: Apache-2.0
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+ .
+ http://www.apache.org/licenses/LICENSE-2.0
+ .
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+ .
+ On Debian-based systems the full text of the Apache version 2.0 license
+ can be found in /usr/share/common-licenses/Apache-2.0.
--- /dev/null
+python-fasteners_0.12.0-2ubuntu1_all.deb python optional
+python3-fasteners_0.12.0-2ubuntu1_all.deb python optional
--- /dev/null
+[DEFAULT]
+upstream-branch = master
+debian-branch = debian/unstable
+upstream-tag = %(version)s
+compression = xz
+
+[buildpackage]
+export-dir = ../build-area/
+
--- /dev/null
+#!/usr/bin/make -f
+
+PYTHONS:=$(shell pyversions -vr)
+PYTHON3S:=$(shell py3versions -vr)
+
+UPSTREAM_GIT = git://github.com/harlowja/fasteners.git
+include /usr/share/openstack-pkg-tools/pkgos.make
+
+%:
+ dh $@ --buildsystem=python_distutils --with python2,python3
+
+override_dh_auto_install:
+ set -e ; for pyvers in $(PYTHONS); do \
+ python$$pyvers setup.py install --install-layout=deb \
+ --root $(CURDIR)/debian/python-fasteners; \
+ done
+ set -e ; for pyvers in $(PYTHON3S); do \
+ python$$pyvers setup.py install --install-layout=deb \
+ --root $(CURDIR)/debian/python3-fasteners; \
+ done
+ rm -rf $(CURDIR)/debian/python*-fasteners/usr/lib/python*/dist-packages/*.pth
+
+override_dh_auto_test:
+ifeq (,$(findstring nocheck, $(DEB_BUILD_OPTIONS)))
+ set -e ; for pyvers in $(PYTHONS) ; do \
+ PYTHON=python$$pyvers nosetests -v ; \
+ done
+ set -e ; for pyvers in $(PYTHON3S) ; do \
+ PYTHON=python$$pyvers nosetests3 -v ; \
+ done
+endif
+
+override_dh_clean:
+ dh_clean -O--buildsystem=python_distutils
+ rm -rf build
+
+# Commands not to run
+override_dh_installcatalogs:
+override_dh_installemacsen override_dh_installifupdown:
+override_dh_installinfo override_dh_installmenu override_dh_installmime:
+override_dh_installmodules override_dh_installlogcheck:
+override_dh_installpam override_dh_installppp override_dh_installudev override_dh_installwm:
+override_dh_installxfonts override_dh_gconf override_dh_icons override_dh_perl override_dh_usrlocal:
+override_dh_installcron override_dh_installdebconf:
+override_dh_installlogrotate override_dh_installgsettings:
--- /dev/null
+3.0 (quilt)
--- /dev/null
+extend-diff-ignore = "^[^/]*[.]egg-info/"
--- /dev/null
+version=3
+opts="uversionmangle=s/\.(b|rc)/~$1/" \
+https://github.com/harlowja/fasteners/tags .*/(\d[\d\.]+)\.tar\.gz
+
--- /dev/null
+=======
+ Lock
+=======
+
+-------
+Classes
+-------
+
+.. autoclass:: fasteners.lock.ReaderWriterLock
+ :members:
+
+----------
+Decorators
+----------
+
+.. autofunction:: fasteners.lock.read_locked
+
+.. autofunction:: fasteners.lock.write_locked
+
+.. autofunction:: fasteners.lock.locked
+
+----------------
+Helper functions
+----------------
+
+.. autofunction:: fasteners.lock.try_lock
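+
+-------
+Example
+-------
+
+A short illustrative sketch of the reader/writer lock::
+
+    import fasteners
+
+    rw_lock = fasteners.ReaderWriterLock()
+
+    with rw_lock.read_lock():
+        pass  # multiple readers may hold this at once
+
+    with rw_lock.write_lock():
+        pass  # a single writer excludes all readers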
--- /dev/null
+==============
+ Process lock
+==============
+
+-------
+Classes
+-------
+
+.. autoclass:: fasteners.process_lock.InterProcessLock
+ :members:
+
+.. autoclass:: fasteners.process_lock._InterProcessLock
+ :members:
+
+----------
+Decorators
+----------
+
+.. autofunction:: fasteners.process_lock.interprocess_locked
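+
+-------
+Example
+-------
+
+A short illustrative sketch (the lock file path is only an example)::
+
+    import fasteners
+
+    @fasteners.interprocess_locked('/tmp/tmp_lock_file')
+    def exclusive_work():
+        pass  # only one process at a time runs this function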
--- /dev/null
+# -*- coding: utf-8 -*-
+#
+# Fasteners documentation build configuration file, created by
+# sphinx-quickstart on Fri Jun 5 14:47:29 2015.
+#
+# This file is execfile()d with the current directory set to its
+# containing dir.
+#
+# Note that not all possible configuration values are present in this
+# autogenerated file.
+#
+# All configuration values have a default; values that are commented out
+# serve to show the default.
+
+import sys
+import os
+import shlex
+
+from fasteners import version as fasteners_version
+
+# If extensions (or modules to document with autodoc) are in another directory,
+# add these directories to sys.path here. If the directory is relative to the
+# documentation root, use os.path.abspath to make it absolute, like shown here.
+#sys.path.insert(0, os.path.abspath('.'))
+
+# -- General configuration ------------------------------------------------
+
+# If your documentation needs a minimal Sphinx version, state it here.
+#needs_sphinx = '1.0'
+
+# Add any Sphinx extension module names here, as strings. They can be
+# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
+# ones.
+extensions = [
+ 'sphinx.ext.doctest',
+ 'sphinx.ext.autodoc',
+ 'sphinx.ext.viewcode',
+]
+
+# Add any paths that contain templates here, relative to this directory.
+templates_path = ['_templates']
+
+# The suffix(es) of source filenames.
+# You can specify multiple suffix as a list of string:
+# source_suffix = ['.rst', '.md']
+source_suffix = '.rst'
+
+# The encoding of source files.
+#source_encoding = 'utf-8-sig'
+
+# The master toctree document.
+master_doc = 'index'
+
+# General information about the project.
+project = u'Fasteners'
+copyright = u'2015, OpenStack Foundation, Yahoo!'
+author = u'Joshua Harlow, OpenStack Developers'
+
+# The version info for the project you're documenting, acts as replacement for
+# |version| and |release|, also used in various other places throughout the
+# built documents.
+#
+# The short X.Y version.
+version = fasteners_version.version_string()
+
+# The full version, including alpha/beta/rc tags.
+release = fasteners_version.version_string()
+
+# The language for content autogenerated by Sphinx. Refer to documentation
+# for a list of supported languages.
+#
+# This is also used if you do content translation via gettext catalogs.
+# Usually you set "language" from the command line for these cases.
+language = None
+
+# There are two options for replacing |today|: either, you set today to some
+# non-false value, then it is used:
+#today = ''
+# Else, today_fmt is used as the format for a strftime call.
+#today_fmt = '%B %d, %Y'
+
+# List of patterns, relative to source directory, that match files and
+# directories to ignore when looking for source files.
+exclude_patterns = []
+
+# The reST default role (used for this markup: `text`) to use for all
+# documents.
+#default_role = None
+
+# If true, '()' will be appended to :func: etc. cross-reference text.
+#add_function_parentheses = True
+
+# If true, the current module name will be prepended to all description
+# unit titles (such as .. function::).
+#add_module_names = True
+
+# If true, sectionauthor and moduleauthor directives will be shown in the
+# output. They are ignored by default.
+#show_authors = False
+
+# The name of the Pygments (syntax highlighting) style to use.
+pygments_style = 'sphinx'
+
+# A list of ignored prefixes for module index sorting.
+#modindex_common_prefix = []
+
+# If true, keep warnings as "system message" paragraphs in the built documents.
+#keep_warnings = False
+
+# If true, `todo` and `todoList` produce output, else they produce nothing.
+todo_include_todos = False
+
+
+# -- Options for HTML output ----------------------------------------------
+
+# The theme to use for HTML and HTML Help pages. See the documentation for
+# a list of builtin themes.
+html_theme = 'alabaster'
+
+# Theme options are theme-specific and customize the look and feel of a theme
+# further. For a list of options available for each theme, see the
+# documentation.
+#html_theme_options = {}
+
+# Add any paths that contain custom themes here, relative to this directory.
+#html_theme_path = []
+
+# The name for this set of Sphinx documents. If None, it defaults to
+# "<project> v<release> documentation".
+#html_title = None
+
+# A shorter title for the navigation bar. Default is the same as html_title.
+#html_short_title = None
+
+# The name of an image file (relative to this directory) to place at the top
+# of the sidebar.
+html_logo = "img/safety-pin-small.png"
+
+# The name of an image file (within the static path) to use as favicon of the
+# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
+# pixels large.
+#html_favicon = None
+
+# Add any paths that contain custom static files (such as style sheets) here,
+# relative to this directory. They are copied after the builtin static files,
+# so a file named "default.css" will overwrite the builtin "default.css".
+html_static_path = ['_static']
+
+# Add any extra paths that contain custom files (such as robots.txt or
+# .htaccess) here, relative to this directory. These files are copied
+# directly to the root of the documentation.
+#html_extra_path = []
+
+# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
+# using the given strftime format.
+#html_last_updated_fmt = '%b %d, %Y'
+
+# If true, SmartyPants will be used to convert quotes and dashes to
+# typographically correct entities.
+#html_use_smartypants = True
+
+# Custom sidebar templates, maps document names to template names.
+#html_sidebars = {}
+
+# Additional templates that should be rendered to pages, maps page names to
+# template names.
+#html_additional_pages = {}
+
+# If false, no module index is generated.
+#html_domain_indices = True
+
+# If false, no index is generated.
+#html_use_index = True
+
+# If true, the index is split into individual pages for each letter.
+#html_split_index = False
+
+# If true, links to the reST sources are added to the pages.
+#html_show_sourcelink = True
+
+# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
+#html_show_sphinx = True
+
+# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
+#html_show_copyright = True
+
+# If true, an OpenSearch description file will be output, and all pages will
+# contain a <link> tag referring to it. The value of this option must be the
+# base URL from which the finished HTML is served.
+#html_use_opensearch = ''
+
+# This is the file name suffix for HTML files (e.g. ".xhtml").
+#html_file_suffix = None
+
+# Language to be used for generating the HTML full-text search index.
+# Sphinx supports the following languages:
+# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'
+# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'
+#html_search_language = 'en'
+
+# A dictionary with options for the search language support, empty by default.
+# Now only 'ja' uses this config value
+#html_search_options = {'type': 'default'}
+
+# The name of a javascript file (relative to the configuration directory) that
+# implements a search results scorer. If empty, the default will be used.
+#html_search_scorer = 'scorer.js'
+
+# Output file base name for HTML help builder.
+htmlhelp_basename = 'Fastenersdoc'
+
+# -- Options for LaTeX output ---------------------------------------------
+
+latex_elements = {
+# The paper size ('letterpaper' or 'a4paper').
+#'papersize': 'letterpaper',
+
+# The font size ('10pt', '11pt' or '12pt').
+#'pointsize': '10pt',
+
+# Additional stuff for the LaTeX preamble.
+#'preamble': '',
+
+# Latex figure (float) alignment
+#'figure_align': 'htbp',
+}
+
+# Grouping the document tree into LaTeX files. List of tuples
+# (source start file, target name, title,
+# author, documentclass [howto, manual, or own class]).
+latex_documents = [
+ (master_doc, 'Fasteners.tex', u'Fasteners Documentation',
+ u'Joshua Harlow', 'manual'),
+]
+
+# The name of an image file (relative to this directory) to place at the top of
+# the title page.
+#latex_logo = None
+
+# For "manual" documents, if this is true, then toplevel headings are parts,
+# not chapters.
+#latex_use_parts = False
+
+# If true, show page references after internal links.
+#latex_show_pagerefs = False
+
+# If true, show URL addresses after external links.
+#latex_show_urls = False
+
+# Documents to append as an appendix to all manuals.
+#latex_appendices = []
+
+# If false, no module index is generated.
+#latex_domain_indices = True
+
+
+# -- Options for manual page output ---------------------------------------
+
+# One entry per manual page. List of tuples
+# (source start file, name, description, authors, manual section).
+man_pages = [
+ (master_doc, 'fasteners', u'Fasteners Documentation',
+ [author], 1)
+]
+
+# If true, show URL addresses after external links.
+#man_show_urls = False
+
+
+# -- Options for Texinfo output -------------------------------------------
+
+# Grouping the document tree into Texinfo files. List of tuples
+# (source start file, target name, title, author,
+# dir menu entry, description, category)
+texinfo_documents = [
+ (master_doc, 'Fasteners', u'Fasteners Documentation',
+     author, 'Fasteners', 'A python package that provides useful locks.',
+ 'Miscellaneous'),
+]
+
+# Documents to append as an appendix to all manuals.
+#texinfo_appendices = []
+
+# If false, no module index is generated.
+#texinfo_domain_indices = True
+
+# How to display URL addresses: 'footnote', 'no', or 'inline'.
+#texinfo_show_urls = 'footnote'
+
+# If true, do not generate a @detailmenu in the "Top" node's menu.
+#texinfo_no_detailmenu = False
--- /dev/null
+Welcome to Fasteners's documentation!
+=====================================
+
+A python `package`_ that provides useful locks.
+
+*Contents:*
+
+.. toctree::
+ :maxdepth: 3
+
+ api/lock
+ api/process_lock
+
+Indices and tables
+==================
+
+* :ref:`genindex`
+* :ref:`modindex`
+* :ref:`search`
+
+.. _package: https://pypi.python.org/pypi/fasteners
+
--- /dev/null
+# -*- coding: utf-8 -*-
+
+# Copyright (C) 2014 Yahoo! Inc. All Rights Reserved.
+#
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+# promote helpers to this module namespace
+
+from __future__ import absolute_import
+
+from fasteners.lock import locked # noqa
+from fasteners.lock import read_locked # noqa
+from fasteners.lock import try_lock # noqa
+from fasteners.lock import write_locked # noqa
+
+from fasteners.lock import ReaderWriterLock # noqa
+from fasteners.process_lock import InterProcessLock # noqa
+
+from fasteners.process_lock import interprocess_locked # noqa
--- /dev/null
+# -*- coding: utf-8 -*-
+
+# Copyright (C) 2015 Yahoo! Inc. All Rights Reserved.
+#
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import logging
+import time
+
+from monotonic import monotonic as now # noqa
+
+# log level for low-level debugging
+BLATHER = 5
+
+LOG = logging.getLogger(__name__)
+
+
+class LockStack(object):
+ """Simple lock stack to get and release many locks.
+
+ An instance of this should **not** be used by many threads at the
+ same time, as the stack that is maintained will be corrupted and
+ invalid if that is attempted.
+ """
+
+ def __init__(self):
+ self._stack = []
+
+ def acquire_lock(self, lock):
+ gotten = lock.acquire()
+ if gotten:
+ self._stack.append(lock)
+ return gotten
+
+ def __enter__(self):
+ return self
+
+ def __exit__(self, exc_type, exc_value, exc_tb):
+ am_left = len(self._stack)
+ tot_am = am_left
+ while self._stack:
+ lock = self._stack.pop()
+ try:
+ lock.release()
+ except Exception:
+ LOG.exception("Failed releasing lock %s from lock"
+ " stack with %s locks", am_left, tot_am)
+ am_left -= 1
+
+
+class RetryAgain(Exception):
+ """Exception to signal to retry helper to try again."""
+
+
+class Retry(object):
+ """A little retry helper object."""
+
+ def __init__(self, delay, max_delay,
+ sleep_func=time.sleep, watch=None):
+ self.delay = delay
+ self.attempts = 0
+ self.max_delay = max_delay
+ self.sleep_func = sleep_func
+ self.watch = watch
+
+ def __call__(self, fn, *args, **kwargs):
+ while True:
+ self.attempts += 1
+ try:
+ return fn(*args, **kwargs)
+ except RetryAgain:
+ maybe_delay = self.attempts * self.delay
+ if maybe_delay < self.max_delay:
+ actual_delay = maybe_delay
+ else:
+ actual_delay = self.max_delay
+ actual_delay = max(0.0, actual_delay)
+ if self.watch is not None:
+ leftover = self.watch.leftover()
+ if leftover is not None and leftover < actual_delay:
+ actual_delay = leftover
+ self.sleep_func(actual_delay)
+
+
+class StopWatch(object):
+ """A really basic stop watch."""
+
+ def __init__(self, duration=None):
+ self.duration = duration
+ self.started_at = None
+ self.stopped_at = None
+
+ def leftover(self):
+ if self.duration is None:
+ return None
+ return max(0.0, self.duration - self.elapsed())
+
+ def elapsed(self):
+ if self.stopped_at is not None:
+ end_time = self.stopped_at
+ else:
+ end_time = now()
+ return max(0.0, end_time - self.started_at)
+
+ def __enter__(self):
+ self.start()
+ return self
+
+ def __exit__(self, exc_type, exc_value, exc_tb):
+ self.stopped_at = now()
+
+ def start(self):
+ self.started_at = now()
+ self.stopped_at = None
+
+ def expired(self):
+ if self.duration is None:
+ return False
+ else:
+ return self.elapsed() > self.duration
--- /dev/null
+# -*- coding: utf-8 -*-
+
+# Copyright (C) 2014 Yahoo! Inc. All Rights Reserved.
+# Copyright 2011 OpenStack Foundation.
+#
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import collections
+import contextlib
+import threading
+
+from fasteners import _utils
+
+import six
+
+try:
+    # Used by the reader-writer lock to get the right
+    # thread (a 'hack' needed below).
+ import eventlet
+ from eventlet import patcher as eventlet_patcher
+except ImportError:
+ eventlet = None
+ eventlet_patcher = None
+
+
+def read_locked(*args, **kwargs):
+ """Acquires & releases a read lock around call into decorated method.
+
+ NOTE(harlowja): if no attribute name is provided then by default the
+ attribute named '_lock' is looked for (this attribute is expected to be
+ a :py:class:`.ReaderWriterLock`) in the instance object this decorator
+ is attached to.
+ """
+
+ def decorator(f):
+ attr_name = kwargs.get('lock', '_lock')
+
+ @six.wraps(f)
+ def wrapper(self, *args, **kwargs):
+ rw_lock = getattr(self, attr_name)
+ with rw_lock.read_lock():
+ return f(self, *args, **kwargs)
+
+ return wrapper
+
+ # This is needed to handle when the decorator has args or the decorator
+ # doesn't have args, python is rather weird here...
+ if kwargs or not args:
+ return decorator
+ else:
+ if len(args) == 1:
+ return decorator(args[0])
+ else:
+ return decorator
+
+
+def write_locked(*args, **kwargs):
+ """Acquires & releases a write lock around call into decorated method.
+
+ NOTE(harlowja): if no attribute name is provided then by default the
+ attribute named '_lock' is looked for (this attribute is expected to be
+ a :py:class:`.ReaderWriterLock` object) in the instance object this
+ decorator is attached to.
+ """
+
+ def decorator(f):
+ attr_name = kwargs.get('lock', '_lock')
+
+ @six.wraps(f)
+ def wrapper(self, *args, **kwargs):
+ rw_lock = getattr(self, attr_name)
+ with rw_lock.write_lock():
+ return f(self, *args, **kwargs)
+
+ return wrapper
+
+ # This is needed to handle when the decorator has args or the decorator
+ # doesn't have args, python is rather weird here...
+ if kwargs or not args:
+ return decorator
+ else:
+ if len(args) == 1:
+ return decorator(args[0])
+ else:
+ return decorator
+
+
+class ReaderWriterLock(object):
+ """A reader/writer lock.
+
+ This lock allows for simultaneous readers to exist but only one writer
+ to exist for use-cases where it is useful to have such types of locks.
+
+ Currently a reader can not escalate its read lock to a write lock and
+ a writer can not acquire a read lock while it is waiting on the write
+ lock.
+
+ In the future these restrictions may be relaxed.
+
+ This can be eventually removed if http://bugs.python.org/issue8800 ever
+ gets accepted into the python standard threading library...
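+
+    A small illustrative example::
+
+        rw_lock = ReaderWriterLock()
+
+        with rw_lock.read_lock():
+            pass  # many readers may be active at the same time
+
+        with rw_lock.write_lock():
+            pass  # only a single writer (and no readers) is active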
+ """
+
+ #: Writer owner type/string constant.
+ WRITER = 'w'
+
+ #: Reader owner type/string constant.
+ READER = 'r'
+
+ @staticmethod
+ def _fetch_current_thread_functor():
+ # Until https://github.com/eventlet/eventlet/issues/172 is resolved
+        # or addressed we have to use a complicated workaround to get an
+        # object that will not be recycled; the usage of
+        # threading.current_thread() doesn't appear to currently be monkey
+        # patched and therefore isn't reliable to use (and breaks badly
+        # when used, as all threads share the same current_thread() object)...
+ if eventlet is not None and eventlet_patcher is not None:
+ if eventlet_patcher.is_monkey_patched('thread'):
+ return eventlet.getcurrent
+ return threading.current_thread
+
+ def __init__(self,
+ condition_cls=threading.Condition,
+ current_thread_functor=None):
+ self._writer = None
+ self._pending_writers = collections.deque()
+ self._readers = collections.deque()
+ self._cond = condition_cls()
+ if current_thread_functor is None:
+ current_thread_functor = self._fetch_current_thread_functor()
+ self._current_thread = current_thread_functor
+
+ @property
+ def has_pending_writers(self):
+ """Returns if there are writers waiting to become the *one* writer."""
+ return bool(self._pending_writers)
+
+ def is_writer(self, check_pending=True):
+ """Returns if the caller is the active writer or a pending writer."""
+ me = self._current_thread()
+ if self._writer == me:
+ return True
+ if check_pending:
+ return me in self._pending_writers
+ else:
+ return False
+
+ @property
+ def owner(self):
+ """Returns whether the lock is locked by a writer or reader."""
+ with self._cond:
+            # Obtain the lock to ensure we get an accurate view of the actual
+ # owner that isn't likely to change when we are reading it...
+ if self._writer is not None:
+ return self.WRITER
+ if self._readers:
+ return self.READER
+ return None
+
+ def is_reader(self):
+ """Returns if the caller is one of the readers."""
+ me = self._current_thread()
+ return me in self._readers
+
+ @contextlib.contextmanager
+ def read_lock(self):
+ """Context manager that grants a read lock.
+
+ Will wait until no active or pending writers.
+
+ Raises a ``RuntimeError`` if a pending writer tries to acquire
+ a read lock.
+ """
+ me = self._current_thread()
+ if me in self._pending_writers:
+ raise RuntimeError("Writer %s can not acquire a read lock"
+ " while waiting for the write lock"
+ % me)
+ with self._cond:
+ while True:
+ # No active writer, or we are the writer;
+ # we are good to become a reader.
+ if self._writer is None or self._writer == me:
+ self._readers.append(me)
+ break
+ # An active writer; guess we have to wait.
+ self._cond.wait()
+ try:
+ yield self
+ finally:
+ # I am no longer a reader, remove *one* occurrence of myself.
+ # If the current thread acquired two read locks, then it will
+ # still have to remove that other read lock; this allows for
+ # basic reentrancy to be possible.
+ with self._cond:
+ self._readers.remove(me)
+ self._cond.notify_all()
+
+ @contextlib.contextmanager
+ def write_lock(self):
+ """Context manager that grants a write lock.
+
+ Will wait until no active readers. Blocks readers after acquiring.
+
+ Raises a ``RuntimeError`` if an active reader attempts to acquire
+ a lock.
+ """
+ me = self._current_thread()
+ i_am_writer = self.is_writer(check_pending=False)
+ if self.is_reader() and not i_am_writer:
+ raise RuntimeError("Reader %s to writer privilege"
+ " escalation not allowed" % me)
+ if i_am_writer:
+ # Already the writer; this allows for basic reentrancy.
+ yield self
+ else:
+ with self._cond:
+ self._pending_writers.append(me)
+ while True:
+ # No readers, and no active writer, am I next??
+ if len(self._readers) == 0 and self._writer is None:
+ if self._pending_writers[0] == me:
+ self._writer = self._pending_writers.popleft()
+ break
+ self._cond.wait()
+ try:
+ yield self
+ finally:
+ with self._cond:
+ self._writer = None
+ self._cond.notify_all()
+
+
+@contextlib.contextmanager
+def try_lock(lock):
+ """Attempts to acquire a lock, and auto releases if acquired (on exit)."""
+ # NOTE(harlowja): the keyword argument for 'blocking' does not work
+ # in py2.x and only is fixed in py3.x (this adjustment is documented
+ # and/or debated in http://bugs.python.org/issue10789); so we'll just
+ # stick to the format that works in both (oddly the keyword argument
+ # works in py2.x but only with reentrant locks).
+ was_locked = lock.acquire(False)
+ try:
+ yield was_locked
+ finally:
+ if was_locked:
+ lock.release()
+
+
+def locked(*args, **kwargs):
+ """A locking **method** decorator.
+
+ It will look for a provided attribute (typically a lock or a list
+ of locks) on the first argument of the function decorated (typically this
+ is the 'self' object) and before executing the decorated function it
+ activates the given lock or list of locks as a context manager,
+ automatically releasing that lock on exit.
+
+ NOTE(harlowja): if no attribute name is provided then by default the
+ attribute named '_lock' is looked for (this attribute is expected to be
+ the lock/list of locks object/s) in the instance object this decorator
+ is attached to.
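+
+    A small illustrative example (the class name is made up; the default
+    '_lock' attribute name is used)::
+
+        class Example(object):
+
+            def __init__(self):
+                self._lock = threading.Lock()
+
+            @locked
+            def run(self):
+                pass  # self._lock is held while this method runs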
+ """
+
+ def decorator(f):
+ attr_name = kwargs.get('lock', '_lock')
+
+ @six.wraps(f)
+ def wrapper(self, *args, **kwargs):
+ attr_value = getattr(self, attr_name)
+ if isinstance(attr_value, (tuple, list)):
+ with _utils.LockStack() as stack:
+ for i, lock in enumerate(attr_value):
+ if not stack.acquire_lock(lock):
+ raise threading.ThreadError("Unable to acquire"
+ " lock %s" % (i + 1))
+ return f(self, *args, **kwargs)
+ else:
+ lock = attr_value
+ with lock:
+ return f(self, *args, **kwargs)
+
+ return wrapper
+
+ # This is needed to handle when the decorator has args or the decorator
+ # doesn't have args, python is rather weird here...
+ if kwargs or not args:
+ return decorator
+ else:
+ if len(args) == 1:
+ return decorator(args[0])
+ else:
+ return decorator
--- /dev/null
+# -*- coding: utf-8 -*-
+
+# Copyright 2011 OpenStack Foundation.
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import errno
+import logging
+import os
+import threading
+import time
+
+import six
+
+from fasteners import _utils
+
+LOG = logging.getLogger(__name__)
+
+
+def _ensure_tree(path):
+ """Create a directory (and any ancestor directories required).
+
+ :param path: Directory to create
+ """
+ try:
+ os.makedirs(path)
+ except OSError as e:
+ if e.errno == errno.EEXIST:
+ if not os.path.isdir(path):
+ raise
+ else:
+ return False
+ else:
+ raise
+ else:
+ return True
+
+
+class _InterProcessLock(object):
+ """An interprocess locking implementation.
+
+ This is a lock implementation which allows multiple locks, working around
+ issues like http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=632857 and
+ does not require any cleanup. Since the lock is always held on a file
+ descriptor rather than outside of the process, the lock gets dropped
+ automatically if the process crashes, even if ``__exit__`` is not
+ executed.
+
+ There are no guarantees regarding usage by multiple threads in a
+ single process here. This lock works only between processes.
+
+ Note these locks are released when the descriptor is closed, so it's not
+ safe to close the file descriptor while another thread holds the
+ lock. Just opening and closing the lock file can break synchronization,
+ so lock files must be accessed only using this abstraction.
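+
+    A small illustrative example (the path used is only an example)::
+
+        lock = InterProcessLock('/tmp/tmp_lock_file')
+        if lock.acquire(blocking=False):
+            try:
+                pass  # exclusive across processes while held
+            finally:
+                lock.release()
+        else:
+            pass  # some other process currently holds the lock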
+ """
+
+ MAX_DELAY = 0.1
+ """
+ Default maximum delay we will wait to try to acquire the lock (when
+ it's busy/being held by another process).
+ """
+
+ DELAY_INCREMENT = 0.01
+ """
+ Default increment we will use (up to max delay) after each attempt before
+ next attempt to acquire the lock. For example if 3 attempts have been made
+ the calling thread will sleep (0.01 * 3) before the next attempt to
+ acquire the lock (and repeat).
+ """
+
+ def __init__(self, path, sleep_func=time.sleep):
+ self.lockfile = None
+ self.path = path
+ self.acquired = False
+ self.sleep_func = sleep_func
+
+ def _try_acquire(self, blocking, watch):
+ try:
+ self.trylock()
+ except IOError as e:
+ if e.errno in (errno.EACCES, errno.EAGAIN):
+ if not blocking or watch.expired():
+ return False
+ else:
+ raise _utils.RetryAgain()
+ else:
+ raise threading.ThreadError("Unable to acquire lock on"
+ " `%(path)s` due to"
+ " %(exception)s" %
+ {
+ 'path': self.path,
+ 'exception': e,
+ })
+ else:
+ return True
+
+ def _do_open(self):
+ basedir = os.path.dirname(self.path)
+ made_basedir = _ensure_tree(basedir)
+ if made_basedir:
+ LOG.log(_utils.BLATHER,
+ 'Created lock base path `%s`', basedir)
+ # Open in append mode so we don't overwrite any potential contents of
+ # the target file. This eliminates the possibility of an attacker
+ # creating a symlink to an important file in our lock path.
+ if self.lockfile is None or self.lockfile.closed:
+ self.lockfile = open(self.path, 'a')
+
+ def acquire(self, blocking=True,
+ delay=DELAY_INCREMENT, max_delay=MAX_DELAY,
+ timeout=None):
+ """Attempt to acquire the given lock.
+
+ :param blocking: whether to wait forever to try to acquire the lock
+ :type blocking: bool
+ :param delay: when blocking this is the delay time in seconds that
+ will be added after each failed acquisition
+ :type delay: int/float
+ :param max_delay: the maximum delay to have (this limits the
+ accumulated delay(s) added after each failed
+ acquisition)
+ :type max_delay: int/float
+ :param timeout: an optional timeout (limits how long blocking
+ will occur for)
+ :type timeout: int/float
+ :returns: whether or not the acquisition succeeded
+ :rtype: bool
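+
+        A small illustrative example (the timeout value is only an
+        example)::
+
+            if lock.acquire(blocking=True, timeout=5.0):
+                try:
+                    pass  # lock held
+                finally:
+                    lock.release()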
+ """
+ if delay < 0:
+ raise ValueError("Delay must be greater than or equal to zero")
+ if timeout is not None and timeout < 0:
+ raise ValueError("Timeout must be greater than or equal to zero")
+ if delay >= max_delay:
+ max_delay = delay
+ self._do_open()
+ watch = _utils.StopWatch(duration=timeout)
+ r = _utils.Retry(delay, max_delay,
+ sleep_func=self.sleep_func, watch=watch)
+ with watch:
+ gotten = r(self._try_acquire, blocking, watch)
+ if not gotten:
+ self.acquired = False
+ return False
+ else:
+ self.acquired = True
+ LOG.log(_utils.BLATHER,
+ "Acquired file lock `%s` after waiting %0.3fs [%s"
+ " attempts were required]", self.path, watch.elapsed(),
+ r.attempts)
+ return True
+
+ def _do_close(self):
+ if self.lockfile is not None:
+ self.lockfile.close()
+ self.lockfile = None
+
+ def __enter__(self):
+ self.acquire()
+ return self
+
+ def release(self):
+ """Release the previously acquired lock."""
+ if not self.acquired:
+ raise threading.ThreadError("Unable to release an unacquired"
+ " lock")
+ try:
+ self.unlock()
+ except IOError:
+ LOG.exception("Could not unlock the acquired lock opened"
+ " on `%s`", self.path)
+ else:
+ self.acquired = False
+ try:
+ self._do_close()
+ except IOError:
+ LOG.exception("Could not close the file handle"
+ " opened on `%s`", self.path)
+ else:
+ LOG.log(_utils.BLATHER,
+ "Unlocked and closed file lock open on"
+ " `%s`", self.path)
+
+ def __exit__(self, exc_type, exc_val, exc_tb):
+ self.release()
+
+ def exists(self):
+ """Checks if the path that this lock exists at actually exists."""
+ return os.path.exists(self.path)
+
+ def trylock(self):
+ raise NotImplementedError()
+
+ def unlock(self):
+ raise NotImplementedError()
+
+
+class _WindowsLock(_InterProcessLock):
+
+ def trylock(self):
+ msvcrt.locking(self.lockfile.fileno(), msvcrt.LK_NBLCK, 1)
+
+ def unlock(self):
+ msvcrt.locking(self.lockfile.fileno(), msvcrt.LK_UNLCK, 1)
+
+
+class _FcntlLock(_InterProcessLock):
+
+ def trylock(self):
+ fcntl.lockf(self.lockfile, fcntl.LOCK_EX | fcntl.LOCK_NB)
+
+ def unlock(self):
+ fcntl.lockf(self.lockfile, fcntl.LOCK_UN)
+
+
+if os.name == 'nt':
+ import msvcrt
+
+ #: Interprocess lock implementation that works on your system.
+ InterProcessLock = _WindowsLock
+else:
+ import fcntl
+
+ #: Interprocess lock implementation that works on your system.
+ InterProcessLock = _FcntlLock
+
+
+def interprocess_locked(path):
+ """Acquires & releases a interprocess lock around call into
+ decorated function."""
+
+ lock = InterProcessLock(path)
+
+ def decorator(f):
+
+ @six.wraps(f)
+ def wrapper(*args, **kwargs):
+ with lock:
+ return f(*args, **kwargs)
+
+ return wrapper
+
+ return decorator
--- /dev/null
+# -*- coding: utf-8 -*-
+
+# vim: tabstop=4 shiftwidth=4 softtabstop=4
+
+# Copyright (C) 2013 Yahoo! Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import testtools
+
+
+class TestCase(testtools.TestCase):
+ pass
--- /dev/null
+# -*- coding: utf-8 -*-
+
+# Copyright (C) 2015 Yahoo! Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import threading
+
+import fasteners
+from fasteners import test
+
+
+class Locked(object):
+ def __init__(self):
+ self._lock = threading.Lock()
+
+ @fasteners.locked
+ def i_am_locked(self, cb):
+ cb(self._lock.locked())
+
+ def i_am_not_locked(self, cb):
+ cb(self._lock.locked())
+
+
+class ManyLocks(object):
+ def __init__(self, amount):
+ self._lock = []
+ for _i in range(0, amount):
+ self._lock.append(threading.Lock())
+
+ @fasteners.locked
+ def i_am_locked(self, cb):
+ gotten = [lock.locked() for lock in self._lock]
+ cb(gotten)
+
+ def i_am_not_locked(self, cb):
+ gotten = [lock.locked() for lock in self._lock]
+ cb(gotten)
+
+
+class RWLocked(object):
+ def __init__(self):
+ self._lock = fasteners.ReaderWriterLock()
+
+ @fasteners.read_locked
+ def i_am_read_locked(self, cb):
+ cb(self._lock.owner)
+
+ @fasteners.write_locked
+ def i_am_write_locked(self, cb):
+ cb(self._lock.owner)
+
+ def i_am_not_locked(self, cb):
+ cb(self._lock.owner)
+
+
+class DecoratorsTest(test.TestCase):
+ def test_locked(self):
+ obj = Locked()
+ obj.i_am_locked(lambda is_locked: self.assertTrue(is_locked))
+ obj.i_am_not_locked(lambda is_locked: self.assertFalse(is_locked))
+
+ def test_many_locked(self):
+ obj = ManyLocks(10)
+ obj.i_am_locked(lambda gotten: self.assertTrue(all(gotten)))
+ obj.i_am_not_locked(lambda gotten: self.assertEqual(0, sum(gotten)))
+
+ def test_read_write_locked(self):
+ reader = fasteners.ReaderWriterLock.READER
+ writer = fasteners.ReaderWriterLock.WRITER
+ obj = RWLocked()
+ obj.i_am_write_locked(lambda owner: self.assertEqual(owner, writer))
+ obj.i_am_read_locked(lambda owner: self.assertEqual(owner, reader))
+ obj.i_am_not_locked(lambda owner: self.assertIsNone(owner))
--- /dev/null
+# -*- coding: utf-8 -*-
+
+# Copyright (C) 2015 Yahoo! Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import threading
+
+import fasteners
+from fasteners import test
+
+
+class HelpersTest(test.TestCase):
+ def test_try_lock(self):
+ lock = threading.Lock()
+ with fasteners.try_lock(lock) as locked:
+ self.assertTrue(locked)
+ with fasteners.try_lock(lock) as locked:
+ self.assertFalse(locked)
--- /dev/null
+# -*- coding: utf-8 -*-
+
+# Copyright (C) 2014 Yahoo! Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import collections
+import random
+import threading
+import time
+
+from concurrent import futures
+
+import fasteners
+from fasteners import test
+
+from fasteners import _utils
+
+
+# NOTE(harlowja): Sleep a little so now() can not be the same (which will
+# cause false positives when our overlap detection code runs). If there are
+# real overlaps then they will still exist.
+NAPPY_TIME = 0.05
+
+# We will spend this amount of time doing some "fake" work.
+WORK_TIMES = [(0.01 + x / 100.0) for x in range(0, 5)]
+
+# If latches/events take longer than this to become empty/set, something is
+# usually wrong and should be debugged instead of deadlocking...
+WAIT_TIMEOUT = 300
+
+
+def _find_overlaps(times, start, end):
+ overlaps = 0
+ for (s, e) in times:
+ if s >= start and e <= end:
+ overlaps += 1
+ return overlaps
+
+
+def _spawn_variation(readers, writers, max_workers=None):
+ start_stops = collections.deque()
+ lock = fasteners.ReaderWriterLock()
+
+ def read_func(ident):
+ with lock.read_lock():
+ # TODO(harlowja): sometime in the future use a monotonic clock here
+ # to avoid problems that can be caused by ntpd resyncing the clock
+ # while we are actively running.
+ enter_time = _utils.now()
+ time.sleep(WORK_TIMES[ident % len(WORK_TIMES)])
+ exit_time = _utils.now()
+ start_stops.append((lock.READER, enter_time, exit_time))
+ time.sleep(NAPPY_TIME)
+
+ def write_func(ident):
+ with lock.write_lock():
+ enter_time = _utils.now()
+ time.sleep(WORK_TIMES[ident % len(WORK_TIMES)])
+ exit_time = _utils.now()
+ start_stops.append((lock.WRITER, enter_time, exit_time))
+ time.sleep(NAPPY_TIME)
+
+ if max_workers is None:
+ max_workers = max(0, readers) + max(0, writers)
+ if max_workers > 0:
+ with futures.ThreadPoolExecutor(max_workers=max_workers) as e:
+ count = 0
+ for _i in range(0, readers):
+ e.submit(read_func, count)
+ count += 1
+ for _i in range(0, writers):
+ e.submit(write_func, count)
+ count += 1
+
+ writer_times = []
+ reader_times = []
+ for (lock_type, start, stop) in list(start_stops):
+ if lock_type == lock.WRITER:
+ writer_times.append((start, stop))
+ else:
+ reader_times.append((start, stop))
+ return (writer_times, reader_times)
+
+
+def _daemon_thread(target):
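+ """Create (but do not start) a daemon thread running the given target."""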
+ t = threading.Thread(target=target)
+ t.daemon = True
+ return t
+
+
+class ReadWriteLockTest(test.TestCase):
+ THREAD_COUNT = 20
+
+ def test_no_double_writers(self):
+ lock = fasteners.ReaderWriterLock()
+ watch = _utils.StopWatch(duration=5)
+ watch.start()
+ dups = collections.deque()
+ active = collections.deque()
+
+ def acquire_check(me):
+ with lock.write_lock():
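+ # Any other thread recorded as active while we hold the write lock
+ # is a duplicate writer; capture it so the test can fail later.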
+ if len(active) >= 1:
+ dups.append(me)
+ dups.extend(active)
+ active.append(me)
+ try:
+ time.sleep(random.random() / 100)
+ finally:
+ active.remove(me)
+
+ def run():
+ me = threading.current_thread()
+ while not watch.expired():
+ acquire_check(me)
+
+ threads = []
+ for i in range(0, self.THREAD_COUNT):
+ t = _daemon_thread(run)
+ threads.append(t)
+ t.start()
+ while threads:
+ t = threads.pop()
+ t.join()
+
+ self.assertEqual([], list(dups))
+ self.assertEqual([], list(active))
+
+ def test_no_concurrent_readers_writers(self):
+ lock = fasteners.ReaderWriterLock()
+ watch = _utils.StopWatch(duration=5)
+ watch.start()
+ dups = collections.deque()
+ active = collections.deque()
+
+ def acquire_check(me, reader):
+ if reader:
+ lock_func = lock.read_lock
+ else:
+ lock_func = lock.write_lock
+ with lock_func():
+ if not reader:
+ # There should be no one else currently active; if there
+ # is, capture them so that we can blow up the test later.
+ if len(active) >= 1:
+ dups.append(me)
+ dups.extend(active)
+ active.append(me)
+ try:
+ time.sleep(random.random() / 100)
+ finally:
+ active.remove(me)
+
+ def run():
+ me = threading.current_thread()
+ while not watch.expired():
+ acquire_check(me, random.choice([True, False]))
+
+ threads = []
+ for i in range(0, self.THREAD_COUNT):
+ t = _daemon_thread(run)
+ threads.append(t)
+ t.start()
+ while threads:
+ t = threads.pop()
+ t.join()
+
+ self.assertEqual([], list(dups))
+ self.assertEqual([], list(active))
+
+ def test_writer_abort(self):
+ lock = fasteners.ReaderWriterLock()
+ self.assertFalse(lock.owner)
+
+ def blow_up():
+ with lock.write_lock():
+ self.assertEqual(lock.WRITER, lock.owner)
+ raise RuntimeError("Broken")
+
+ self.assertRaises(RuntimeError, blow_up)
+ self.assertFalse(lock.owner)
+
+ def test_reader_abort(self):
+ lock = fasteners.ReaderWriterLock()
+ self.assertFalse(lock.owner)
+
+ def blow_up():
+ with lock.read_lock():
+ self.assertEqual(lock.READER, lock.owner)
+ raise RuntimeError("Broken")
+
+ self.assertRaises(RuntimeError, blow_up)
+ self.assertFalse(lock.owner)
+
+ def test_double_reader_abort(self):
+ lock = fasteners.ReaderWriterLock()
+ activated = collections.deque()
+
+ def double_bad_reader():
+ with lock.read_lock():
+ with lock.read_lock():
+ raise RuntimeError("Broken")
+
+ def happy_writer():
+ with lock.write_lock():
+ activated.append(lock.owner)
+
+ with futures.ThreadPoolExecutor(max_workers=20) as e:
+ for i in range(0, 20):
+ if i % 2 == 0:
+ e.submit(double_bad_reader)
+ else:
+ e.submit(happy_writer)
+
+ self.assertEqual(10, len([a for a in activated if a == 'w']))
+
+ def test_double_reader_writer(self):
+ lock = fasteners.ReaderWriterLock()
+ activated = collections.deque()
+ active = threading.Event()
+
+ def double_reader():
+ with lock.read_lock():
+ active.set()
+ while not lock.has_pending_writers:
+ time.sleep(0.001)
+ with lock.read_lock():
+ activated.append(lock.owner)
+
+ def happy_writer():
+ with lock.write_lock():
+ activated.append(lock.owner)
+
+ reader = _daemon_thread(double_reader)
+ reader.start()
+ active.wait(WAIT_TIMEOUT)
+ self.assertTrue(active.is_set())
+
+ writer = _daemon_thread(happy_writer)
+ writer.start()
+
+ reader.join()
+ writer.join()
+ self.assertEqual(2, len(activated))
+ self.assertEqual(['r', 'w'], list(activated))
+
+ def test_reader_chaotic(self):
+ lock = fasteners.ReaderWriterLock()
+ activated = collections.deque()
+
+ def chaotic_reader(blow_up):
+ with lock.read_lock():
+ if blow_up:
+ raise RuntimeError("Broken")
+ else:
+ activated.append(lock.owner)
+
+ def happy_writer():
+ with lock.write_lock():
+ activated.append(lock.owner)
+
+ with futures.ThreadPoolExecutor(max_workers=20) as e:
+ for i in range(0, 20):
+ if i % 2 == 0:
+ e.submit(chaotic_reader, blow_up=bool(i % 4 == 0))
+ else:
+ e.submit(happy_writer)
+
+ writers = [a for a in activated if a == 'w']
+ readers = [a for a in activated if a == 'r']
+ self.assertEqual(10, len(writers))
+ self.assertEqual(5, len(readers))
+
+ def test_writer_chaotic(self):
+ lock = fasteners.ReaderWriterLock()
+ activated = collections.deque()
+
+ def chaotic_writer(blow_up):
+ with lock.write_lock():
+ if blow_up:
+ raise RuntimeError("Broken")
+ else:
+ activated.append(lock.owner)
+
+ def happy_reader():
+ with lock.read_lock():
+ activated.append(lock.owner)
+
+ with futures.ThreadPoolExecutor(max_workers=20) as e:
+ for i in range(0, 20):
+ if i % 2 == 0:
+ e.submit(chaotic_writer, blow_up=bool(i % 4 == 0))
+ else:
+ e.submit(happy_reader)
+
+ writers = [a for a in activated if a == 'w']
+ readers = [a for a in activated if a == 'r']
+ self.assertEqual(5, len(writers))
+ self.assertEqual(10, len(readers))
+
+ def test_writer_reader_writer(self):
+ lock = fasteners.ReaderWriterLock()
+ with lock.write_lock():
+ self.assertTrue(lock.is_writer())
+ with lock.read_lock():
+ self.assertTrue(lock.is_reader())
+ with lock.write_lock():
+ self.assertTrue(lock.is_writer())
+
+ def test_single_reader_writer(self):
+ results = []
+ lock = fasteners.ReaderWriterLock()
+ with lock.read_lock():
+ self.assertTrue(lock.is_reader())
+ self.assertEqual(0, len(results))
+ with lock.write_lock():
+ results.append(1)
+ self.assertTrue(lock.is_writer())
+ with lock.read_lock():
+ self.assertTrue(lock.is_reader())
+ self.assertEqual(1, len(results))
+ self.assertFalse(lock.is_reader())
+ self.assertFalse(lock.is_writer())
+
+ def test_reader_to_writer(self):
+ lock = fasteners.ReaderWriterLock()
+
+ def writer_func():
+ with lock.write_lock():
+ pass
+
+ with lock.read_lock():
+ self.assertRaises(RuntimeError, writer_func)
+ self.assertFalse(lock.is_writer())
+
+ self.assertFalse(lock.is_reader())
+ self.assertFalse(lock.is_writer())
+
+ def test_writer_to_reader(self):
+ lock = fasteners.ReaderWriterLock()
+
+ def reader_func():
+ with lock.read_lock():
+ self.assertTrue(lock.is_writer())
+ self.assertTrue(lock.is_reader())
+
+ with lock.write_lock():
+ self.assertIsNone(reader_func())
+ self.assertFalse(lock.is_reader())
+
+ self.assertFalse(lock.is_reader())
+ self.assertFalse(lock.is_writer())
+
+ def test_double_writer(self):
+ lock = fasteners.ReaderWriterLock()
+ with lock.write_lock():
+ self.assertFalse(lock.is_reader())
+ self.assertTrue(lock.is_writer())
+ with lock.write_lock():
+ self.assertTrue(lock.is_writer())
+ self.assertTrue(lock.is_writer())
+
+ self.assertFalse(lock.is_reader())
+ self.assertFalse(lock.is_writer())
+
+ def test_double_reader(self):
+ lock = fasteners.ReaderWriterLock()
+ with lock.read_lock():
+ self.assertTrue(lock.is_reader())
+ self.assertFalse(lock.is_writer())
+ with lock.read_lock():
+ self.assertTrue(lock.is_reader())
+ self.assertTrue(lock.is_reader())
+
+ self.assertFalse(lock.is_reader())
+ self.assertFalse(lock.is_writer())
+
+ def test_multi_reader_multi_writer(self):
+ writer_times, reader_times = _spawn_variation(10, 10)
+ self.assertEqual(10, len(writer_times))
+ self.assertEqual(10, len(reader_times))
+ for (start, stop) in writer_times:
+ self.assertEqual(0, _find_overlaps(reader_times, start, stop))
+ self.assertEqual(1, _find_overlaps(writer_times, start, stop))
+ for (start, stop) in reader_times:
+ self.assertEqual(0, _find_overlaps(writer_times, start, stop))
+
+ def test_multi_reader_single_writer(self):
+ writer_times, reader_times = _spawn_variation(9, 1)
+ self.assertEqual(1, len(writer_times))
+ self.assertEqual(9, len(reader_times))
+ start, stop = writer_times[0]
+ self.assertEqual(0, _find_overlaps(reader_times, start, stop))
+
+ def test_multi_writer(self):
+ writer_times, reader_times = _spawn_variation(0, 10)
+ self.assertEqual(10, len(writer_times))
+ self.assertEqual(0, len(reader_times))
+ for (start, stop) in writer_times:
+ self.assertEqual(1, _find_overlaps(writer_times, start, stop))
--- /dev/null
+# -*- coding: utf-8 -*-
+
+# Copyright 2011 OpenStack Foundation.
+# Copyright 2011 Justin Santa Barbara
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import errno
+import fcntl
+import multiprocessing
+import os
+import shutil
+import signal
+import tempfile
+import threading
+import time
+
+from fasteners import process_lock as pl
+from fasteners import test
+
+
+class BrokenLock(pl.InterProcessLock):
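+ """An InterProcessLock whose trylock() always raises an IOError with the
+ given errno (and whose unlock() is a no-op); used to exercise the error
+ handling in acquire().
+ """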
+ def __init__(self, name, errno_code):
+ super(BrokenLock, self).__init__(name)
+ self.errno_code = errno_code
+
+ def unlock(self):
+ pass
+
+ def trylock(self):
+ err = IOError()
+ err.errno = self.errno_code
+ raise err
+
+
+class ProcessLockTest(test.TestCase):
+ def setUp(self):
+ super(ProcessLockTest, self).setUp()
+ self.lock_dir = tempfile.mkdtemp()
+ self.tmp_dirs = [self.lock_dir]
+
+ def tearDown(self):
+ super(ProcessLockTest, self).tearDown()
+ for a_dir in reversed(self.tmp_dirs):
+ if os.path.exists(a_dir):
+ shutil.rmtree(a_dir, ignore_errors=True)
+
+ def test_lock_acquire_release_file_lock(self):
+ lock_file = os.path.join(self.lock_dir, 'lock')
+ lock = pl.InterProcessLock(lock_file)
+
+ def try_lock():
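+ # Children exit with code 1 if they (unexpectedly) manage to grab the
+ # lock and 0 if it is already held, so attempt_acquire() can simply
+ # sum the exit codes to count successful acquisitions.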
+ try:
+ my_lock = pl.InterProcessLock(lock_file)
+ my_lock.lockfile = open(lock_file, 'w')
+ my_lock.trylock()
+ my_lock.unlock()
+ os._exit(1)
+ except IOError:
+ os._exit(0)
+
+ def attempt_acquire(count):
+ children = []
+ for i in range(count):
+ child = multiprocessing.Process(target=try_lock)
+ child.start()
+ children.append(child)
+ exit_codes = []
+ for child in children:
+ child.join()
+ exit_codes.append(child.exitcode)
+ return sum(exit_codes)
+
+ self.assertTrue(lock.acquire())
+ try:
+ acquired_children = attempt_acquire(10)
+ self.assertEqual(0, acquired_children)
+ finally:
+ lock.release()
+
+ acquired_children = attempt_acquire(5)
+ self.assertNotEqual(0, acquired_children)
+
+ def test_nested_synchronized_external_works(self):
+ sentinel = object()
+
+ @pl.interprocess_locked(os.path.join(self.lock_dir, 'test-lock-1'))
+ def outer_lock():
+
+ @pl.interprocess_locked(os.path.join(self.lock_dir, 'test-lock-2'))
+ def inner_lock():
+ return sentinel
+
+ return inner_lock()
+
+ self.assertEqual(sentinel, outer_lock())
+
+ def _do_test_lock_externally(self, lock_dir):
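+ # Fork a batch of children that serialize on the same interprocess
+ # lock and, while holding it, flock/unlock a set of files; every
+ # child is expected to exit with status 0.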
+ lock_path = os.path.join(lock_dir, "lock")
+
+ def lock_files(handles_dir):
+ with pl.InterProcessLock(lock_path):
+
+ # Open some files we can use for locking
+ handles = []
+ for n in range(50):
+ path = os.path.join(handles_dir, ('file-%s' % n))
+ handles.append(open(path, 'w'))
+
+ # Loop over all the handles and try locking each file
+ # without blocking; keep a count of how many files we
+ # were able to lock and then unlock. If a lock attempt
+ # fails we get an IOError and bail out with a bad exit code.
+ count = 0
+ for handle in handles:
+ try:
+ fcntl.flock(handle, fcntl.LOCK_EX | fcntl.LOCK_NB)
+ count += 1
+ fcntl.flock(handle, fcntl.LOCK_UN)
+ except IOError:
+ os._exit(2)
+ finally:
+ handle.close()
+
+ # Check that we were able to lock (and unlock) all the files
+ self.assertEqual(50, count)
+
+ handles_dir = tempfile.mkdtemp()
+ self.tmp_dirs.append(handles_dir)
+ children = []
+ for n in range(50):
+ pid = os.fork()
+ if pid:
+ children.append(pid)
+ else:
+ try:
+ lock_files(handles_dir)
+ finally:
+ os._exit(0)
+ for child in children:
+ (pid, status) = os.waitpid(child, 0)
+ if pid:
+ self.assertEqual(0, status)
+
+ def test_lock_externally(self):
+ self._do_test_lock_externally(self.lock_dir)
+
+ def test_lock_externally_lock_dir_not_exist(self):
+ os.rmdir(self.lock_dir)
+ self._do_test_lock_externally(self.lock_dir)
+
+ def test_lock_file_exists(self):
+ lock_file = os.path.join(self.lock_dir, 'lock')
+
+ @pl.interprocess_locked(lock_file)
+ def foo():
+ self.assertTrue(os.path.exists(lock_file))
+
+ foo()
+
+ def test_bad_acquire(self):
+ lock_file = os.path.join(self.lock_dir, 'lock')
+ lock = BrokenLock(lock_file, errno.EBUSY)
+ self.assertRaises(threading.ThreadError, lock.acquire)
+
+ def test_bad_release(self):
+ lock_file = os.path.join(self.lock_dir, 'lock')
+ lock = pl.InterProcessLock(lock_file)
+ self.assertRaises(threading.ThreadError, lock.release)
+
+ def test_interprocess_lock(self):
+ lock_file = os.path.join(self.lock_dir, 'lock')
+
+ pid = os.fork()
+ if pid:
+ # Make sure the child grabs the lock first
+ start = time.time()
+ while not os.path.exists(lock_file):
+ if time.time() - start > 5:
+ self.fail('Timed out waiting for child to grab lock')
+ time.sleep(0)
+ lock1 = pl.InterProcessLock('foo')
+ lock1.lockfile = open(lock_file, 'w')
+ # NOTE(bnemec): There is a brief window between when the lock file
+ # is created and when it actually becomes locked. If we happen to
+ # context switch in that window we may succeed in locking the
+ # file. Keep retrying until we either get the expected exception
+ # or timeout waiting.
+ while time.time() - start < 5:
+ try:
+ lock1.trylock()
+ lock1.unlock()
+ time.sleep(0)
+ except IOError:
+ # This is what we expect to happen
+ break
+ else:
+ self.fail('Never caught expected lock exception')
+ # We don't need to wait for the full sleep in the child here
+ os.kill(pid, signal.SIGKILL)
+ else:
+ try:
+ lock2 = pl.InterProcessLock('foo')
+ lock2.lockfile = open(lock_file, 'w')
+ have_lock = False
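+ # Spin until this child actually holds the flock; the parent waits
+ # for the lock file to exist before it starts checking.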
+ while not have_lock:
+ try:
+ lock2.trylock()
+ have_lock = True
+ except IOError:
+ pass
+ finally:
+ # NOTE(bnemec): This is racy, but I don't want to add any
+ # synchronization primitives that might mask a problem
+ # with the one we're trying to test here.
+ time.sleep(.5)
+ os._exit(0)
+
+ def test_non_destructive(self):
+ lock_file = os.path.join(self.lock_dir, 'not-destroyed')
+ with open(lock_file, 'w') as f:
+ f.write('test')
+ with pl.InterProcessLock(lock_file):
+ with open(lock_file) as f:
+ self.assertEqual(f.read(), 'test')
--- /dev/null
+# -*- coding: utf-8 -*-
+
+# Copyright (C) 2014 Yahoo! Inc. All Rights Reserved.
+# Copyright 2011 OpenStack Foundation.
+#
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+_VERSION = "0.12"
+
+
+def version_string():
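+ """Return the current library version as a string."""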
+ return _VERSION
--- /dev/null
+[bdist_wheel]
+
+universal = 1
--- /dev/null
+#!/usr/bin/env python
+
+# -*- coding: utf-8 -*-
+
+# vim: tabstop=4 shiftwidth=4 softtabstop=4
+
+# Copyright (C) 2015 Yahoo! Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from setuptools import find_packages
+from setuptools import setup
+
+with open("README.rst", "r") as readme:
+ long_description = readme.read()
+
+install_requires = [
+ 'six',
+ 'monotonic>=0.1',
+]
+
+setup(
+ name='fasteners',
+ version='0.12.0',
+ description='A Python package that provides useful locks.',
+ author="Joshua Harlow",
+ author_email='harlowja@yahoo-inc.com',
+ url='https://github.com/harlowja/fasteners',
+ license="ASL 2.0",
+ install_requires=install_requires,
+ classifiers=[
+ "Development Status :: 4 - Beta",
+ "Topic :: Utilities",
+ "License :: OSI Approved :: Apache Software License",
+ "Operating System :: POSIX :: Linux",
+ "Programming Language :: Python",
+ "Programming Language :: Python :: 2",
+ "Programming Language :: Python :: 2.6",
+ "Programming Language :: Python :: 2.7",
+ "Programming Language :: Python :: 3",
+ "Programming Language :: Python :: 3.4",
+ ],
+ keywords="locks thread threads interprocess"
+ " processes process fasteners",
+ packages=find_packages(),
+ long_description=long_description,
+)
--- /dev/null
+nose
+testtools
+futures
+sphinx
+sphinx-rtd-theme
--- /dev/null
+[tox]
+minversion = 1.6
+envlist = py34,py26,py27
+skipsdist = True
+
+[testenv]
+usedevelop = True
+install_command = pip install -U {opts} {packages}
+setenv =
+ VIRTUAL_ENV={envdir}
+deps = -r{toxinidir}/test-requirements.txt
+commands = nosetests {posargs}
+
+[testenv:venv]
+commands = {posargs}