--- /dev/null
+include tests/cases/*.xml
+recursive-include tests/tproj *.py *.xml
+prune tests/tproj/fixtures
+include spm/templates/*.html
--- /dev/null
+install
+=======
+install itest
+-------------
+sudo python setup.py install
+
+
+prepare the test environment
+----------------------------
+# itest will use this password to run sudo
+export ITEST_SUDO_PASSWORD
+export http_proxy https_proxy no_proxy
+
+running gbs test cases
+----------------------
+1. run all test cases
+ $ runtest
+
+2. print detailed messages when running test cases
+ $ runtest -v
+
+3. print logs when running test cases, useful for debugging
+ $ runtest -vv data/auto/changelog/test_changelog_since.gbs
+
+4. run test suites
+ $ runtest chroot export
+
+5. run single test case and test suites
+ $ runtest data/auto/build/test_build_commit_ia32.gbs import submit
+
+6. check test results
+ $ runtest chroot submit changelog auto/build/test_build_help.gbs
+........................
+
+Ran 24 tests in 0h 00 min 10s
+
+OK
+
+Details
+---------------------------------
+Component Passed Failed
+build 1 0
+remotebuild 0 0
+changelog 7 0
+chroot 2 0
+import 0 0
+export 0 0
+submit 14 0
+conf 0 0
+
+
+Syntax of case
+==============
+
+\_\_steps\_\_
+-------------
+
+*steps* is the core section of a case. It consists of command lines and
+comments. A line starting with '>' is called a command line; all other lines
+are treated as comments. Comments are only for reading and are ignored when
+the case runs.
+
+Command lines run one by one, in the same order as they occur in the case. If
+any command exits with a nonzero status, the whole case exits immediately and
+is treated as failed. A case passes only when its last command exits with
+code 0.
+
+For example:
+
+ > echo 1
+ > false | echo 2
+ > echo 3
+
+"echo 3" never runs; the case fails at the second line.
+
+When you want to assert that a command will fail, add "!" before it and
+enclose it in parentheses (subshell syntax).
+
+ > echo 1
+ > (! false | echo 2)
+ > echo 3
+
+This case passes, because the designer asserts via "!" that the second command
+will fail. The parentheses are required: they make the whole line a subshell,
+and the subshell exits with 0. Without the parentheses, this case would fail at
+the second line (same as the example above).
+
+NOTE: Itest uses "bash -xe" and "set -o pipefail" to implement this; please
+refer to the bash manual for more details.
+
+\_\_setup\_\_
+-------------
+This is an optional section which can be used to set up the environment needed
+by the following steps. Its content should be valid shell code.
+
+Variables declared in this section can also be used in the *steps* and
+*teardown* sections. In contrast, variables defined in *steps* can't be seen
+in the scope of *teardown*, so if there are common variables, they should be
+set in this section.
+
+For example:
+
+    __setup__:
+ temp_project_name=test_$(date +%Y%m%d)_$RANDOM
+ touch another_temp_file
+
+ __steps__:
+ > gbs remotebuild -T $temp_project_name
+
+ __teardown__:
+ rm -f another_temp_file
+    osc delete $temp_project_name
+
+\_\_teardown\_\_
+----------------
+This is also an optional section, which can be used to clean up the
+environment after *steps* finishes. Its content should be valid shell code.
+
+Whether *steps* fails or succeeds, this section is guaranteed to run. The
+result of this section doesn't affect the result of the case.
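+
+For example, even though the only step below fails, *teardown* still runs and
+removes the temporary file (the file and commands are only illustrative):
+
+    __setup__:
+    tmpfile=$(mktemp)
+
+    __steps__:
+    > test -s $tmpfile
+
+    __teardown__:
+    rm -f $tmpfile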
--- /dev/null
+itest-core (1.7) unstable; urgency=high
+ * Upgrade to itest1.7, which contains the following bug fixing & features:
+ * #1125: Image diff tool
+ * #1430: Make compatible for pexpect-2.5
+
+ -- Huang, Hao <hao.h.huang.com> Fri, 29 Nov 2013 12:02:00 +0800
+
+itest-core (1.6) unstable; urgency=high
+ * Upgrade to itest1.6, which contains the following bug fixing & features:
+ * #1369: Raise Timeout error if no output for a while.
+ * Print log to sys.stdout instead of /dev/fd/1
+ * #1099: ctrl-c can't break runtest
+ * #1086: Fix dependency issue of itest-core on centos
+ * #1128: "__conditions__" can not work with "distwhitelist: opensuse / distblacklist: opensuse12.1-i586".
+ * #961: support selecting platforms in cases
+ * #1065: support JUnit XML format of report
+
+ -- Huang, Hao <hao.h.huang.com> Fri, 29 Nov 2013 12:00:00 +0800
+
+itest-core (1.5) unstable; urgency=high
+ * Upgrade to itest1.5, which contains the following bug fixing & features:
+ * #942: Mark case as failure for a period of time.
+ * #943: Retry if particular error occurs in case
+
+ -- Huang, Hao <hao.h.huang.com> Fri, 31 May 2013 12:00:00 +0800
+
+itest-core (1.4) unstable; urgency=high
+ * Upgrade to itest1.4, which contains the following bug fixing & features:
+  * #666: itest installation
+ * #827: Run relative cases according to a gbs or gbp patch
+ * #824: Change search order of env path
+ * #823: Display time cost for each test
+  * #804: Show tips if copied directory like 'fixtures' is very large
+ * #870: Itest exit without printing "steps finish"
+ * #860: Report URL is too long
+  * #800: Itest could not print information which includes Chinese characters or signs
+
+ -- Huang, Hao <hao.h.huang.com> Fri, 26 Apr 2013 12:00:00 +0800
+
+itest-core (1.3) unstable; urgency=high
+ * Upgrade to itest1.3, which contains the following bug fixing & features:
+ * Redesign test report. Add failed summary report
+ * Add datetime info to log file when cases begin and end
+ * Set timezone in test env
+ * Add itest dependencies
+
+ -- Junchun Guan <junchunx.guan@intel.com> Thu, 24 Jan 2013 11:45:08 +0800
+
+itest-core (1.2) unstable; urgency=high
+ * Upgrade to itest1.2, which contains the following bug fixing & features:
+ * Exit with nonzero value when any case failed
+ * Reduce output log when run auto sync
+ * Support setup and teardown section in test case
+ * Support coverage report
+
+ -- Junchun Guan <junchunx.guan@intel.com> Thu, 24 Jan 2013 11:45:08 +0800
+
+itest-core (1.1) unstable; urgency=high
+ * Upgrade to itest1.1, which contains the following bug fixing & features:
+ * Refactor HTML report and deploy to web server
+ * Support html template using python-bottle
+  * Auto upload html report to web server
+ * Support mic function test
+
+ -- Junchun Guan <junchunx.guan@intel.com> Thu, 24 Jan 2013 11:45:08 +0800
+
+itest-core (1.0) unstable; urgency=high
+ * Init release
+ * Support gbs functional test
+ * Generate local report and simple html report
+
+ -- Junchun Guan <junchunx.guan@intel.com> Tue, 6 Nov 2012 09:54:46 +0800
+
--- /dev/null
+Source: itest-core
+Section: devel
+Priority: extra
+Maintainer: Junchun Guan <junchunx.guan@intel.com>
+Build-Depends: debhelper, python (>= 2.6), python-support, python-setuptools
+Standards-Version: 3.8.0
+X-Python-Version: >= 2.6
+Homepage: http://www.tizen.org
+
+Package: itest-core
+Architecture: all
+Depends: ${misc:Depends}, ${python:Depends},
+ python-pexpect, python-coverage, python-jinja2, python-unittest2, spm
+Description: functional test framework for gbs and mic
+
+Package: spm
+Architecture: all
+Depends: ${misc:Depends}, ${python:Depends},
+ python-jinja2, python-yaml
+Description: Smart package management tool on Linux
+ A wrapper of yum, apt-get, zypper command. Support Redhat, Debian, SuSE
+
+Package: nosexcase
+Architecture: all
+Depends: ${misc:Depends}, ${python:Depends},
+ itest-core, python-nose
+Description: A nose plugin that supports running test cases defined in XML format
--- /dev/null
+Upstream Authors:
+
+ Intel Inc.
+
+Copyright:
+
+ Copyright (C) 2012 Intel Inc.
--- /dev/null
+usr/lib/python*/*packages/itest/*.py
+usr/lib/python*/*packages/itest/conf/*.py
+usr/lib/python*/*packages/imgdiff/*.py
+usr/lib/python*/*packages/itest-*.egg-info
+usr/bin/runtest
+usr/bin/imgdiff
--- /dev/null
+usr/lib/python*/*packages/nosexcase/*.py
--- /dev/null
+#!/usr/bin/make -f
+
+%:
+ dh $@
+
+override_dh_auto_install:
+ python setup.py install --root=debian/tmp --prefix=/usr
+
+override_dh_auto_test:
+ @echo 'Skipping autotests'
--- /dev/null
+usr/lib/python*/*packages/spm/*.py
+usr/lib/python*/*packages/spm/templates/*.html
+usr/bin/spm
+etc/spm.yml
--- /dev/null
+{
+ "ignoreFiles": [
+ "*.log",
+ "*.cache",
+ "machine-id",
+ "*/zypp/AnonymousUniqueId",
+ "/var/cache/*",
+ "/etc/shadow*",
+ "/var/lib/rpm/*",
+    "/boot/extlinux/ldlinux.sys",
+    "/dev/*",
+    "/var/lib/random-seed",
+    "/opt/usr/dbspace/*",
+    "/opt/dbspace/*",
+    "/boot/vmlinuz"
+ ],
+ "ignoreLines": [{
+ "Files": ["/etc/machine-id"],
+ "Lines": ["UUID=.*"]
+ }, {
+ "Files": ["extlinux.conf"],
+ "Lines": ["^label .*", "^[ \\t]*linux .*", "^[ \t]*append.*root=.*"]
+ }, {
+ "Files": ["/etc/os-release", "/etc/system-release", "/etc/tizen-release"],
+ "Lines": ["^BUILD_ID=.*"]
+ }]
+}
--- /dev/null
+"Module imgdiff"
--- /dev/null
+#!/usr/bin/env python
+'''This script will cleanup resources allocated by unpack_image.py
+'''
+import os
+import sys
+from subprocess import call
+
+
+def umount(path):
+ '''Umount a mount point at path
+ '''
+ if not os.path.isdir(path) or not os.path.ismount(path):
+ return
+
+ cmd = ['sudo', 'umount', '-l', path]
+    print "Unmounting", path, "..."
+ return call(cmd)
+
+
+def loopdel(val):
+    '''Release loop dev at val
+    '''
+    devloop, filename = val.split(':', 1)
+    print "Releasing %s(%s)" % (devloop, filename), "..."
+    return call(['sudo', 'losetup', '-d', devloop])
+
+
+def main():
+ '''Main'''
+ # cleanup mountpoint in reverse order
+ lines = sys.stdin.readlines()
+    lines.sort(reverse=True)
+
+ handler = {
+ 'mountpoint': umount,
+ 'loopdev': loopdel,
+ }
+
+ for line in lines:
+ key, val = line.strip().split(':', 1)
+ if key in handler:
+ handler[key](val)
+ else:
+            print >> sys.stderr, "Don't know how to release:", line,
+
+
+if __name__ == '__main__':
+ main()
--- /dev/null
+#!/usr/bin/env python
+'''This script parses the diff result from stdin, filters out trivial
+differences defined in a config file, and prints out the rest
+'''
+import re
+import os
+import sys
+import argparse
+from itertools import imap, ifilter
+
+from imgdiff.trivial import Conf, Rules
+from imgdiff.unified import parse
+
+
+PATTERN_PREFIX = re.compile(r'.*?img[12](%(sep)sroot)?(%(sep)s.*)' %
+ {'sep': os.path.sep})
+
+
+def strip_prefix(filename):
+ '''Strip prefix added by imgdiff script.
+ For example:
+ img1/partition_table.txt -> partition_table.txt
+ img1/root/tmp/file -> /tmp/file
+ '''
+ match = PATTERN_PREFIX.match(filename)
+ return match.group(2) if match else filename
+
+
+def fix_filename(onefile):
+ '''Fix filename'''
+ onefile['filename'] = strip_prefix(onefile['filename'])
+ return onefile
+
+
+class Mark(object):
+    '''Mark trivial differences in one file as ignored,
+    according to the configured rules
+    '''
+ def __init__(self, conf_filename):
+ self.rules = Rules(Conf.load(conf_filename))
+
+ def __call__(self, onefile):
+ self.rules.check_and_mark(onefile)
+ return onefile
+
+
+def nontrivial(onefile):
+    '''Keep only nontrivial items'''
+    return not onefile.get('ignore')
+
+
+def parse_and_mark(stream, conf_filename=None):
+ '''
+    Parse diff from stream and mark trivial differences
+    defined by conf_filename as ignored
+ '''
+ stream = parse(stream)
+ stream = imap(fix_filename, stream)
+
+ if conf_filename:
+ mark_trivial = Mark(conf_filename)
+ stream = imap(mark_trivial, stream)
+ return stream
+
+
+def parse_args():
+ '''parse arguments'''
+ parser = argparse.ArgumentParser()
+ parser.add_argument('-c', '--conf-filename',
+ help='conf for defining unimportant difference')
+ return parser.parse_args()
+
+
+def main():
+ "Main"
+ args = parse_args()
+ stream = parse_and_mark(sys.stdin, args.conf_filename)
+ stream = ifilter(nontrivial, stream)
+ cnt = 0
+ for each in stream:
+ print each
+ cnt += 1
+ return cnt
+
+
+if __name__ == '__main__':
+ try:
+ sys.exit(main())
+ except Exception:
+ # normally python exit 1 for exception
+ # we change it to 255 to avoid confusion with 1 difference
+ import traceback
+ traceback.print_exc()
+ sys.exit(255)
--- /dev/null
+'''Get image information, such as partition table, block id fstab etc.
+'''
+import re
+import os
+import sys
+from subprocess import check_output, CalledProcessError
+from itertools import ifilter, islice, chain
+
+
+def parted(img):
+ "Parse output of parted command"
+ column = re.compile(r'([A-Z][a-z\s]*?)((?=[A-Z])|$)')
+
+ def parse(output):
+ '''Example:
+ Model: (file)
+ Disk /home/xxx/tmp/images/small.raw: 839909376B
+ Sector size (logical/physical): 512B/512B
+ Partition Table: msdos
+
+ Number Start End Size Type File system Flags
+ 1 1048576B 34602495B 33553920B primary ext4 boot
+ 2 34603008B 839909375B 805306368B primary ext4
+ '''
+ state = 'header'
+ headers = {}
+ parts = []
+ for line in output.splitlines():
+ if state == 'header':
+ if line == '':
+ state = 'title'
+ else:
+ key, val = line.split(':', 1)
+ headers[key.lower()] = val.strip()
+ elif state == 'title':
+ titles = []
+ start = 0
+ for col, _ in column.findall(line):
+ title = col.rstrip().lower()
+ getter = slice(start, start+len(col))
+ titles.append((title, getter))
+ start += len(col)
+ state = 'parts'
+ elif line.strip():
+ part = dict([(title, line[getter].strip())
+ for title, getter in titles])
+ for title in ('start',): # start, end, size
+ part[title] = int(part[title][:-1]) # remove tailing "B"
+ part['number'] = int(part['number'])
+ parts.append(part)
+ return parts
+
+ cmd = ['parted', img, '-s', 'unit B print']
+ output = check_output(cmd)
+ return parse(output)
+
+
+def blkid(img, offset_in_bytes):
+ "Parse output of blkid command"
+ def parse(output):
+ '''Example:
+ sdb.raw: LABEL="boot" UUID="2995b233-ff79-4719-806d-d7f42b34a133" \
+ VERSION="1.0" TYPE="ext4" USAGE="filesystem"
+ '''
+ output = output.splitlines()[0].split(': ', 1)[1]
+ info = {}
+ for item in output.split():
+ key, val = item.split('=', 1)
+ info[key.lower()] = val[1:-1] # remove double quotes
+ return info
+
+ cmd = ['blkid', '-p', '-O', str(offset_in_bytes), '-o', 'full', img]
+ output = check_output(cmd)
+ return parse(output)
+
+
+def gdisk(img):
+ "Parse output of gdisk"
+ cmd = ['gdisk', '-l', img]
+
+ def parse(output):
+ """Example:
+ GPT fdisk (gdisk) version 0.8.1
+
+ Partition table scan:
+ MBR: protective
+ BSD: not present
+ APM: not present
+ GPT: present
+
+ Found valid GPT with protective MBR; using GPT.
+ Disk tizen_20131115.3_ivi-efi-i586-sdb.raw: 7809058 sectors, 3.7 GiB
+ Logical sector size: 512 bytes
+ Disk identifier (GUID): 4A6D60CE-C42D-4A81-B82B-120624CE867E
+ Partition table holds up to 128 entries
+ First usable sector is 34, last usable sector is 7809024
+ Partitions will be aligned on 2048-sector boundaries
+ Total free space is 2049 sectors (1.0 MiB)
+
+ Number Start (sector) End (sector) Size Code Name
+ 1 2048 133085 64.0 MiB EF00 primary
+ 2 133120 7809023 3.7 GiB 0700 primary
+ """
+ lines = output.splitlines()
+
+ line = [i for i in lines if i.startswith('Logical sector size:')]
+ if not line:
+ raise Exception("Can't find sector size from gdisk output:%s:%s"
+ % (" ".join(cmd), output))
+ size = int(line[0].split(':', 1)[1].strip().split()[0])
+
+ parts = []
+ lines.reverse()
+ for line in lines:
+ if not line.startswith(' ') or \
+ not line.lstrip().split()[0].isdigit():
+ break
+ number, start, _ = line.lstrip().split(None, 2)
+ parts.append(dict(number=int(number), start=int(start)*size))
+ return parts
+
+ output = check_output(cmd)
+ return parse(output)
+
+
+class FSTab(dict):
+ '''
+ A dict representing fstab file.
+ Key is <mount point>, corresponding value is its whole entry
+ '''
+ def __init__(self, filename):
+ with open(filename) as stream:
+ output = stream.read()
+ data = self._parse(output)
+ super(FSTab, self).__init__(data)
+
+ FS = re.compile(r'/dev/sd[a-z](\d+)|UUID=(.*)')
+
+ def _parse(self, output):
+ '''Parse fstab in this format:
+ <file system> <mount point> <type> <options> <dump> <pass>
+ '''
+ mountpoints = {}
+        for line in output.splitlines():
+            line = line.strip()
+            if not line or line.startswith('#'):
+                continue  # skip blank lines and comments
+            device, mountpoint, _ = line.split(None, 2)
+            mres = self.FS.match(device)
+ if not mres:
+ continue
+
+ number, uuid = mres.group(1), mres.group(2)
+ if number:
+ item = {"number": number}
+ else:
+ item = {"uuid": uuid}
+ item["entry"] = line
+ mountpoints[mountpoint] = item
+ return mountpoints
+
+ @classmethod
+ def guess(cls, paths):
+ '''Guess fstab location from all partitions of the image
+ '''
+ guess1 = (os.path.join(path, 'etc', 'fstab') for path in paths)
+ guess2 = (os.path.join(path, 'fstab') for path in paths)
+ guesses = chain(guess1, guess2)
+ exists = ifilter(os.path.exists, guesses)
+ one = list(islice(exists, 1))
+ return cls(one[0]) if one else None
+
+
+def get_partition_info(img):
+ '''Get partition table information of image'''
+ try:
+ parts = parted(img)
+ except CalledProcessError as err:
+ print >> sys.stderr, err
+        # Sometimes parted can fail with an error
+        # like the following, so we fall back to gdisk:
+ # "Error during translation: Invalid or incomplete
+ # multibyte or wide character"
+ parts = gdisk(img)
+ for part in parts:
+ part['blkid'] = blkid(img, part['start'])
+ return parts
--- /dev/null
+"""This module provides classes to deal with
+unimportant difference in diff result.
+"""
+import os
+import re
+import json
+import fnmatch
+
+
+class Conf(dict):
+ """
+ Configuration defining unimportant difference
+ """
+
+ @classmethod
+ def load(cls, filename):
+ "Load config from file"
+ with open(filename) as reader:
+ txt = reader.read()
+        txt = txt.replace(os.linesep, '')
+ data = json.loads(txt)
+ return cls(data)
+
+
+class Rules(object):
+ """
+ Unimportant rules
+ """
+ def __init__(self, conf):
+ self._rules = self._compile(conf)
+
+ def check_and_mark(self, item):
+ """Check if there are unimportant differences in item.
+ Mark them as ignore
+ """
+ for matcher, rule in self._rules:
+ if matcher(item['filename']):
+ rule(item)
+ break
+
+ @staticmethod
+ def _compile(conf):
+ """Compile config item to matching rules
+ """
+ def new_matcher(pattern):
+ """Supported file name pattern like:
+ *.log
+ partition_tab.txt
+ /tmp/a.txt
+ /dev/
+ some/file.txt
+ """
+            if pattern.endswith(os.path.sep): # directory name
+ pattern = pattern + '*'
+
+ bname = os.path.basename(pattern)
+ if bname == pattern: # only basename, ignore dirname
+ def matcher(filename):
+ "Matcher"
+ return fnmatch.fnmatch(os.path.basename(filename), pattern)
+ else:
+ def matcher(filename):
+ "Matcher"
+ return fnmatch.fnmatch(filename, pattern)
+
+        matcher.__doc__ = 'Match filename with pattern %s' % pattern
+ return matcher
+
+ rules = []
+ for pat in conf.get('ignoreFiles', []):
+ matcher = new_matcher(pat)
+ rules.append((matcher, ignore_file))
+
+ for entry in conf.get('ignoreLines', []):
+ files = entry['Files']
+ lines = entry['Lines']
+ if isinstance(files, basestring):
+ files = [files]
+ if isinstance(lines, basestring):
+ lines = [lines]
+ ignore = IgnoreLines(lines)
+ for pat in files:
+ matcher = new_matcher(pat)
+ rules.append((matcher, ignore))
+
+ return rules
+
+
+def ignore_file(onefile):
+ """Mark whole file as trivial difference
+ """
+ onefile['ignore'] = True
+
+
+class IgnoreLines(object):
+ """Mark certain lines in a file as trivial
+ differences according to given patterns
+ """
+ def __init__(self, patterns):
+ self.patterns = [re.compile(p) for p in patterns]
+
+    def is_unimportant(self, line):
+        "Is this line trivial"
+        for pat in self.patterns:
+            if pat.match(line['text']):
+                return True
+        return False
+
+ def __call__(self, onefile):
+ "Mark lines as trivial"
+ if onefile['type'] != 'onefilediff':
+ return
+
+ def should_ignore(line):
+ "Is this line trivial"
+ if line['type'] in ('insert', 'delete'):
+ return self.is_unimportant(line)
+ # else: context, no_newline_at_eof
+ return True
+
+ all_ignored = True
+ for section in onefile['sections']:
+ for line in section['hunks']:
+ line['ignore'] = should_ignore(line)
+ all_ignored = all_ignored and line['ignore']
+
+ # if all lines are unimportant then the whole file is unimportant
+ if all_ignored:
+ onefile['ignore'] = True
--- /dev/null
+'''This module contains parser which understand unified diff result'''
+import os
+import re
+import sys
+
+
+class LookAhead(object):
+ '''Iterable but can also push back'''
+ def __init__(self, iterable):
+ self.iterable = iterable
+ self.stack = []
+
+ def push_back(self, token):
+ "push token back to this iterable"
+ self.stack.append(token)
+
+ def next(self):
+ "next token"
+ if self.stack:
+ return self.stack.pop()
+ return self.iterable.next()
+
+ def __iter__(self):
+ "iterable"
+ return self
+
+
+class MessageParser(object):
+    '''Message in diff result. This is an abstract class. All its
+    subclasses should implement its interface:
+
+ Attr: self.PATTERN
+ Method: parse(self, line, match)
+ '''
+
+ # it should be implemented by subclasses
+ PATTERN = None
+
+ def parse(self, line, mres):
+ "it should be implemented by subclass"
+ raise NotImplementedError
+
+ def match(self, line):
+ '''determine whether the line is a message'''
+ mres = self.PATTERN.match(line)
+ return self.parse(line, mres) if mres else None
+
+
+class OnlyInOneSide(MessageParser):
+ '''Message like this:
+ Only in img2/root/home/tizen: .bash_profile
+ '''
+
+ PATTERN = re.compile(r'Only in (.*?): (.*)')
+
+ def parse(self, line, match):
+ '''Return the concrete message'''
+ side = 'left' if match.group(1).startswith('img1/') else 'right'
+ filename = os.path.join(match.group(1), match.group(2))
+ return {
+ 'type': 'message',
+ 'filetype': 'Only in %s side' % side,
+ 'message': line[:-1],
+ 'filename': filename,
+ 'side': side,
+ }
+
+
+class SpecialFile(MessageParser):
+ '''Message like this:
+ File img1/partx/p2/dev/full is a character special file while file
+ img2/partx/p2/dev/full is a character special file
+ '''
+
+ PATTERN = re.compile(r'File (.*?) is a (.*) while file (.*?) is a (.*)')
+
+ def parse(self, line, match):
+ '''Return the concrete message'''
+ fromfile, tofile = match.group(1), match.group(3)
+ return {
+ 'type': 'message',
+ 'filetype': match.group(2),
+ 'message': line[:-1], # strip the last \n
+ 'fromfile': fromfile,
+ 'tofile': tofile,
+ 'filename': fromfile,
+ }
+
+
+class BinaryFile(MessageParser):
+ '''Message like this:
+ Binary files img1/partx/p2/var/lib/random-seed and
+ img2/partx/p2/var/lib/random-seed differ
+ '''
+
+ PATTERN = re.compile(r'Binary files (.*?) and (.*?) differ')
+
+ def parse(self, line, match):
+ '''Return the concrete message'''
+ fromfile, tofile = match.group(1), match.group(2)
+ return {
+ 'type': 'message',
+ 'filetype': 'Binary files',
+ 'message': line[:-1], # strip the last \n
+ 'fromfile': fromfile,
+ 'tofile': tofile,
+ 'filename': fromfile,
+ }
+
+
+MESSAGE_PARSERS = [obj() for name, obj in globals().items()
+ if hasattr(obj, '__bases__') and
+ MessageParser in obj.__bases__]
+
+
+class Message(dict):
+ """
+    Message that a file can't be compared, such as binary or device files
+ """
+
+ @classmethod
+ def parse(cls, stream):
+ "Parse message text into dict"
+ line = stream.next()
+ for parser in MESSAGE_PARSERS:
+ data = parser.match(line)
+ if data:
+ return cls(data)
+ stream.push_back(line)
+
+ def __str__(self):
+ "to message text"
+ return self['message']
+
+
+class OneFileDiff(dict):
+ """
+ Diff result for one same file name in two sides
+ """
+
+ @classmethod
+ def parse(cls, stream):
+        '''Parse a patch which should contain the following parts:
+        Start line
+        Two-line header
+        Several sections, each of which consists of:
+ Range: start and count
+ Hunks: context and different text
+
+ Example:
+ diff -r -u /home/xxx/tmp/images/img1/partition_table.txt
+ /home/xxx/tmp/images/img2/partition_table.txt
+ --- img1/partition_tab.txt 2013-10-28 11:05:11.814220566 +0800
+ +++ img2/partition_tab.txt 2013-10-28 11:05:14.954220642 +0800
+ @@ -1,5 +1,5 @@
+ Model: (file)
+ -Disk /home/xxx/tmp/images/192.raw: 3998237696B
+ +Disk /home/xxx/tmp/images/20.raw: 3998237696B
+ Sector size (logical/physical): 512B/512B
+ Partition Table: gpt
+ '''
+ line = stream.next()
+ if not line.startswith('diff '):
+ stream.push_back(line)
+ return
+
+ startline = line[:-1]
+ cols = ('path', 'date', 'time', 'timezone')
+
+ def parse_header(line):
+ '''header'''
+ return dict(zip(cols, line.rstrip().split()[1:]))
+
+ fromfile = parse_header(stream.next())
+ tofile = parse_header(stream.next())
+ sections = cls._parse_sections(stream)
+ return cls({
+ 'type': 'onefilediff',
+ 'startline': startline,
+ 'sections': sections,
+ 'fromfile': fromfile,
+ 'tofile': tofile,
+ 'filename': fromfile['path'],
+ })
+
+ def __str__(self):
+ "back to unified format"
+ header = '%(path)s\t%(date)s %(time)s %(timezone)s'
+ fromfile = '--- ' + (header % self['fromfile'])
+ tofile = '+++ ' + (header % self['tofile'])
+ sections = []
+
+ def start_count(start, count):
+ "make start count string"
+ return str(start) if count <= 1 else '%d,%d' % (start, count)
+
+ for i in self['sections']:
+ sec = ['@@ -%s +%s @@' %
+ (start_count(*i['range']['delete']),
+ start_count(*i['range']['insert']))
+ ]
+ for j in i['hunks']:
+ typ, txt = j['type'], j['text']
+ if typ == 'context':
+ sec.append(' ' + txt)
+ elif typ == 'delete':
+ sec.append('-' + txt)
+ elif typ == 'insert':
+ sec.append('+' + txt)
+ elif typ == 'no_newline_at_eof':
+ sec.append('\\' + txt)
+ else:
+ sec.append(txt)
+ sections.append('\n'.join(sec))
+ return '\n'.join([self['startline'],
+ fromfile,
+ tofile,
+ '\n'.join(sections),
+ ])
+
+ @classmethod
+ def _parse_sections(cls, stream):
+ '''Range and Hunks'''
+ sections = []
+ for line in stream:
+ if not line.startswith('@@ '):
+ stream.push_back(line)
+ return sections
+
+ range_ = cls._parse_range(line)
+ hunks = cls._parse_hunks(stream)
+ sections.append({'range': range_,
+ 'hunks': hunks,
+ })
+ return sections
+
+ @classmethod
+ def _parse_range(cls, line):
+ '''Start and Count'''
+ def parse_start_count(chars):
+            '''Count is omitted when it's 1'''
+ start, count = (chars[1:] + ',1').split(',')[:2]
+ return int(start), int(count)
+
+ _, delete, insert, _ = line.split()
+ return {
+ 'delete': parse_start_count(delete),
+ 'insert': parse_start_count(insert),
+ }
+
+ @classmethod
+ def _parse_hunks(cls, stream):
+ '''Hunks'''
+ hunks = []
+ for line in stream:
+ if line.startswith(' '):
+ type_ = 'context'
+ elif line.startswith('-'):
+ type_ = 'delete'
+ elif line.startswith('+'):
+ type_ = 'insert'
+ elif line.startswith('\\ No newline at end of file'):
+ type_ = 'no_newline_at_eof'
+ else:
+ stream.push_back(line)
+ break
+ text = line[1:-1] # remove the last \n
+ hunks.append({'type': type_, 'text': text})
+ return hunks
+
+
+def parse(stream):
+ '''
+ Unified diff result parser
+ Reference: http://www.gnu.org/software/diffutils/manual/html_node/Detailed-Unified.html#Detailed-Unified # flake8: noqa
+
+ '''
+ stream = LookAhead(stream)
+ while 1:
+ try:
+ one = Message.parse(stream) or \
+ OneFileDiff.parse(stream)
+ except StopIteration:
+ break
+
+ if one:
+ yield one
+ continue
+
+ try:
+ line = stream.next()
+ except StopIteration:
+            # one equals None means the stream hasn't stopped but nobody can
+            # understand the input. If we get here there must be a bug
+            # in the previous parsing logic
+ raise Exception('Unknown error in parsing diff output')
+ else:
+ print >> sys.stderr, '[WARN] Unknown diff output:', line,
--- /dev/null
+#!/usr/bin/env python
+'''This script unpack a whole image into a directory
+'''
+import os
+import sys
+import errno
+import argparse
+from subprocess import check_call
+
+from imgdiff.info import get_partition_info, FSTab
+
+
+def mkdir_p(path):
+ '''Same as mkdir -p'''
+ try:
+ os.makedirs(path)
+ except OSError as err:
+ if err.errno != errno.EEXIST:
+ raise
+
+
+class ResourceList(object):
+ '''
+ Record all resource allocated into a file
+ '''
+ def __init__(self, filename):
+ self.filename = filename
+
+ def umount(self, path):
+ '''record a mount point'''
+ line = 'mountpoint:%s%s' % (os.path.abspath(path), os.linesep)
+ with open(self.filename, 'a') as writer:
+ writer.write(line)
+
+
+class Mount(object):
+ '''
+ Mount image partions
+ '''
+ def __init__(self, limited_to_dir, resourcelist):
+ self.limited_to_dir = limited_to_dir
+ self.resourcelist = resourcelist
+
+ def _check_path(self, path):
+ '''Check whether path is ok to mount'''
+ if not path.startswith(self.limited_to_dir):
+            raise ValueError("Try to mount outside of jail: " + path)
+ if os.path.ismount(path):
+            raise Exception("Not allowed to override an existing "
+                            "mountpoint: " + path)
+
+ self.resourcelist.umount(path)
+ mkdir_p(path)
+
+ def mount(self, image, offset, fstype, path):
+        '''Mount a partition starting from a particular
+        position of an image to a directory
+ '''
+ self._check_path(path)
+ cmd = ['sudo', 'mount',
+ '-o', 'ro,offset=%d' % offset,
+ '-t', fstype,
+ image, path]
+ print 'Mounting', '%d@%s' % (offset, image), '->', path, '...'
+ check_call(cmd)
+
+ def move(self, source, target):
+        '''Move a mount point to another path'''
+ self._check_path(target)
+ cmd = ['sudo', 'mount', '--make-runbindable', '/']
+ print 'Make runbindable ...', ' '.join(cmd)
+ check_call(cmd)
+ cmd = ['sudo', 'mount', '-M', source, target]
+ print 'Moving mount point from', source, 'to', target, '...'
+ check_call(cmd)
+
+
+class Image(object):
+ '''A raw type image'''
+ def __init__(self, image):
+ self.image = image
+ self.partab = get_partition_info(self.image)
+
+ @staticmethod
+ def _is_fs_supported(fstype):
+ '''Only support ext? and *fat*.
+ Ignore others such as swap, tmpfs etc.
+ '''
+ return fstype.startswith('ext') or 'fat' in fstype
+
+ def _mount_to_temp(self, basedir, mount):
+ '''Mount all partitions into temp dirs like partx/p?
+ '''
+ num2temp, uuid2temp = {}, {}
+ for part in self.partab:
+ number = str(part['number'])
+ fstype = part['blkid']['type']
+ if not self._is_fs_supported(fstype):
+ print >> sys.stderr, \
+ "ignore partition %s of type %s" % (number, fstype)
+ continue
+
+ path = os.path.join(basedir, 'partx', 'p'+number)
+ mount.mount(self.image, part['start'], fstype, path)
+
+ num2temp[number] = path
+ uuid2temp[part['blkid']['uuid']] = path
+ return num2temp, uuid2temp
+
+ @staticmethod
+ def _move_to_root(fstab, num2temp, uuid2temp, basedir, mount):
+ '''Move partitions to their correct mount points according to fstab
+ '''
+ pairs = []
+ for mountpoint in sorted(fstab.keys()):
+ item = fstab[mountpoint]
+ if 'number' in item and item['number'] in num2temp:
+ source = num2temp[item['number']]
+ elif 'uuid' in item and item['uuid'] in uuid2temp:
+ source = uuid2temp[item['uuid']]
+ else:
+ print >> sys.stderr, "fstab mismatch with partition table:", \
+ item["entry"]
+ return
+
+            # remove leading /, otherwise the joined path would reduce to root
+ target = os.path.join(basedir, 'root',
+ mountpoint.lstrip(os.path.sep))
+ pairs.append((source, target))
+
+ for source, target in pairs:
+ mount.move(source, target)
+ return True
+
+ def unpack(self, basedir, resourcelist):
+ '''Unpack self into the basedir and record all resource used
+ into resourcelist
+ '''
+ mount = Mount(basedir, resourcelist)
+
+ num2temp, uuid2temp = self._mount_to_temp(basedir, mount)
+
+ fstab = FSTab.guess(num2temp.values())
+ if not fstab:
+ print >> sys.stderr, "Can't find fstab file from image"
+ return
+ return self._move_to_root(fstab,
+ num2temp, uuid2temp,
+ basedir, mount)
+
+
+def parse_args():
+ "Parse arguments"
+ parser = argparse.ArgumentParser()
+ parser.add_argument('image', type=os.path.abspath,
+ help='image file to unpack. Only raw format is '
+ 'supported')
+ parser.add_argument('basedir', type=os.path.abspath,
+ help='directory to unpack the image')
+ parser.add_argument('resourcelist_filename', type=os.path.abspath,
+ help='will record each mount point when unpacking '
+ 'the image. Make sure call cleanup script with this '
+ 'file name to release all allocated resources.')
+ return parser.parse_args()
+
+
+def main():
+ "Main"
+ args = parse_args()
+ img = Image(args.image)
+ resfile = ResourceList(args.resourcelist_filename)
+ return 0 if img.unpack(args.basedir, resfile) else 1
+
+
+if __name__ == '__main__':
+ sys.exit(main())
--- /dev/null
+__version__ = '1.7'
--- /dev/null
+from itest.main import main
+
+
+main()
--- /dev/null
+import os
+import sys
+import time
+import uuid
+
+try:
+ import unittest2 as unittest
+ from unittest2 import SkipTest
+except ImportError:
+ import unittest
+ from unittest import SkipTest
+
+import pexpect
+if hasattr(pexpect, 'spawnb'): # pexpect-u-2.5
+ spawn = pexpect.spawnb
+else:
+ spawn = pexpect.spawn
+
+from itest.conf import settings
+from itest.utils import now, cd, get_machine_labels
+from itest.fixture import Fixture
+
+
+def id_split(idstring):
+ parts = idstring.split('.')
+ if len(parts) > 1:
+ return '.'.join(parts[:-1]), parts[-1]
+ return '', idstring
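`id_split` above turns a dotted test id into a (classname, name) pair; its behaviour can be checked standalone (function body copied verbatim from above):

```python
def id_split(idstring):
    # split "pkg.module.name" into ("pkg.module", "name");
    # an undotted id yields an empty classname
    parts = idstring.split('.')
    if len(parts) > 1:
        return '.'.join(parts[:-1]), parts[-1]
    return '', idstring

print(id_split('gbs.build.test_build_help'))  # ('gbs.build', 'test_build_help')
print(id_split('standalone'))                 # ('', 'standalone')
```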
+
+
+class TimeoutError(Exception):
+ pass
+
+
+def pcall(cmd, args=(), expecting=(), output=None,
+ eof_timeout=None, output_timeout=None, **spawn_opts):
+ '''call cmd, answering expected prompts
+ expecting: list of pairs; the first item is the expected string, the second the string to send
+ output: file object to which cmd stdout and stderr are redirected
+ eof_timeout: timeout in seconds for the whole cmd; None means block forever
+ output_timeout: timeout in seconds when there is no output; disabled by default
+ spawn_opts: keyword arguments passed to the spawn call
+ '''
+ question = [pexpect.EOF, pexpect.TIMEOUT]
+ question.extend([pair[0] for pair in expecting])
+ if output_timeout:
+ question.append(r'\r|\n')
+ answer = [None]*2 + [i[1] for i in expecting]
+
+ start = time.time()
+ child = spawn(cmd, list(args), **spawn_opts)
+ if output:
+ child.logfile_read = output
+
+ timeout = output_timeout if output_timeout else eof_timeout
+ try:
+ while True:
+ if output_timeout:
+ cost = time.time() - start
+ if cost >= eof_timeout:
+ msg = 'Run out of time in %s seconds!:%s %s' % \
+ (cost, cmd, ' '.join(args))
+ raise TimeoutError(msg)
+
+ i = child.expect(question, timeout=timeout)
+ if i == 0: # EOF
+ break
+ elif i == 1: # TIMEOUT
+ if output_timeout:
+ msg = 'Hanging for %s seconds!:%s %s'
+ else:
+ msg = 'Run out of time in %s seconds!:%s %s'
+ raise TimeoutError(msg % (timeout, cmd, ' '.join(args)))
+ elif output_timeout and i == len(question)-1:
+ # new line, stands for any output
+ # do nothing, just flush timeout counter
+ pass
+ else:
+ child.sendline(answer[i])
+ finally:
+ child.close()
+
+ return child.exitstatus
+
+
+# enumerate patterns for all distributions
+# fedora16-64:
+# [sudo] password for itestuser5707:
+# suse121-32b
+# root's password:
+# suse122-32b
+# itestuser23794's password:
+# u1110-32b
+# [sudo] password for itester:
+SUDO_PASS_PROMPT_PATTERN = r"\[sudo\] password for .*?:|" \
+ r"root's password:|" \
+ r".*?'s password:"
+
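The prompt pattern above can be sanity-checked against the enumerated distribution prompts with Python's `re` module (the same alternation, repeated here as a raw string):

```python
import re

# same alternation as SUDO_PASS_PROMPT_PATTERN above
pattern = re.compile(r"\[sudo\] password for .*?:|"
                     r"root's password:|"
                     r".*?'s password:")

prompts = ["[sudo] password for itestuser5707:",
           "root's password:",
           "itestuser23794's password:",
           "[sudo] password for itester:"]
for p in prompts:
    assert pattern.search(p) is not None
```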
+
+class Tee(object):
+
+ '''data written to original is also written to another'''
+
+ def __init__(self, original, another=None):
+ self.original = original
+ if another is None:
+ self.another = sys.stderr
+ else:
+ self.another = another
+
+ def write(self, data):
+ self.another.write(data)
+ return self.original.write(data)
+
+ def flush(self):
+ self.another.flush()
+ return self.original.flush()
+
+ def close(self):
+ self.original.close()
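The `Tee` duplication can be exercised with in-memory streams (class body copied from above; `flush` and `close` omitted for brevity):

```python
from io import StringIO

class Tee(object):
    '''data written to original is also written to another'''

    def __init__(self, original, another):
        self.original = original
        self.another = another

    def write(self, data):
        self.another.write(data)
        return self.original.write(data)

original, another = StringIO(), StringIO()
tee = Tee(original, another)
tee.write('case start to run!\n')
assert original.getvalue() == another.getvalue() == 'case start to run!\n'
```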
+
+
+class Meta(object):
+ """
+ Meta information of a test case
+
+ All meta information is put in a .meta/ directory under the case running
+ path. Scripts `setup`, `steps` and `teardown` are in this meta path.
+ """
+
+ meta = '.meta'
+
+ def __init__(self, rundir, test):
+ self.rundir = rundir
+ self.test = test
+
+ self.logname = None
+ self.logfile = None
+ self.setup_script = None
+ self.steps_script = None
+ self.teardown_script = None
+
+ def begin(self):
+ """
+ Begin to run test. Generate meta scripts and open log file.
+ """
+ os.mkdir(self.meta)
+
+ self.logname = os.path.join(self.rundir, self.meta, 'log')
+ self.logfile = open(self.logname, 'a')
+ if settings.verbosity >= 3:
+ self.logfile = Tee(self.logfile)
+
+ if self.test.setup:
+ self.setup_script = self._make_setup_script()
+ self.steps_script = self._make_steps_script()
+ if self.test.teardown:
+ self.teardown_script = self._make_teardown_script()
+
+ def end(self):
+ """
+ Test finished, do some cleanup.
+ """
+ if not self.logfile:
+ return
+
+ self.logfile.close()
+ self.logfile = None
+
+ # FIXME: it's a little hack here
+ # delete color code
+ os.system("sed -i 's/\x1b\[[0-9]*m//g' %s" % self.logname)
+ os.system("sed -i 's/\x1b\[[0-9]*K//g' %s" % self.logname)
+
+ def setup(self):
+ code = 0
+ if self.setup_script:
+ self.log('setup start')
+ code = self._psh(self.setup_script)
+ self.log('setup finish')
+ return code
+
+ def steps(self):
+ self.log('steps start')
+ code = self._psh(self.steps_script, self.test.qa)
+ self.log('steps finish')
+ return code
+
+ def teardown(self):
+ if self.teardown_script:
+ self.log('teardown start')
+ self._psh(self.teardown_script)
+ self.log('teardown finish')
+
+ def log(self, msg, level="INFO"):
+ self.logfile.write('%s %s: %s\n' % (now(), level, msg))
+
+ def _make_setup_script(self):
+ code = '''cd %(rundir)s
+(set -o posix; set) > %(var_old)s
+set -x
+%(setup)s
+__exitcode__=$?
+set +x
+(set -o posix; set) > %(var_new)s
+diff --unchanged-line-format= --old-line-format= --new-line-format='%%L' \\
+ %(var_old)s %(var_new)s > %(var_out)s
+exit ${__exitcode__}
+''' % {
+ 'rundir': self.rundir,
+ 'var_old': os.path.join(self.meta, 'var.old'),
+ 'var_new': os.path.join(self.meta, 'var.new'),
+ 'var_out': os.path.join(self.meta, 'var.out'),
+ 'setup': self.test.setup,
+ }
+ return self._make_code('setup', code)
+
+ def _make_steps_script(self):
+ code = '''cd %(rundir)s
+if [ -f %(var_out)s ]; then
+ . %(var_out)s
+fi
+set -o pipefail
+set -ex
+%(steps)s
+''' % {
+ 'rundir': self.rundir,
+ 'var_out': os.path.join(self.meta, 'var.out'),
+ 'steps': self.test.steps,
+ }
+ return self._make_code('steps', code)
+
+ def _make_teardown_script(self):
+ code = '''cd %(rundir)s
+if [ -f %(var_out)s ]; then
+ . %(var_out)s
+fi
+set -x
+%(teardown)s
+''' % {
+ 'rundir': self.rundir,
+ 'var_out': os.path.join(self.meta, 'var.out'),
+ 'teardown': self.test.teardown,
+ }
+ return self._make_code('teardown', code)
+
+ def _make_code(self, name, code):
+ """Write `code` into `name`"""
+ path = os.path.join(self.meta, name)
+ data = code.encode('utf8') if isinstance(code, unicode) else code
+ with open(path, 'w') as f:
+ f.write(data)
+ return path
+
+ def _psh(self, script, more_expecting=()):
+ expecting = [(SUDO_PASS_PROMPT_PATTERN, settings.SUDO_PASSWD)] + \
+ list(more_expecting)
+ try:
+ return pcall('/bin/bash',
+ [script],
+ expecting=expecting,
+ output=self.logfile,
+ eof_timeout=float(settings.RUN_CASE_TIMEOUT),
+ output_timeout=float(settings.HANGING_TIMEOUT),
+ )
+ except Exception as err:
+ self.log('pcall error:%s\n%s' % (script, err), 'ERROR')
+ return -1
+
+
+class TestCase(unittest.TestCase):
+ '''Single test case'''
+
+ count = 1
+ was_skipped = False
+ was_successful = False
+
+ def __init__(self, filename, fields):
+ super(TestCase, self).__init__()
+ self.filename = filename
+
+ # Fields from case definition
+ self.version = fields.get('version')
+ self.summary = fields.get('summary')
+ self.steps = fields.get('steps')
+ self.setup = fields.get('setup')
+ self.teardown = fields.get('teardown')
+ self.qa = fields.get('qa', ())
+ self.tracking = fields.get('tracking', {})
+ self.conditions = fields.get('conditions', {})
+ self.fixtures = [Fixture(os.path.dirname(self.filename),
+ i)
+ for i in fields.get('fixtures', ())]
+
+ self.component = self._guess_component(self.filename)
+
+ def id(self):
+ """
+ This id attribute is used in xunit file.
+
+ classname.name
+ """
+ if settings.env_root:
+ retpath = self.filename[len(settings.cases_dir):]\
+ .lstrip(os.path.sep)
+ base = os.path.splitext(retpath)[0]
+ else:
+ base = os.path.splitext(os.path.basename(self.filename))[0]
+ return base.replace(os.path.sep, '.')
+
+ def __eq__(self, that):
+ if type(self) is not type(that):
+ return NotImplemented
+ return self.id() == that.id()
+
+ def __hash__(self):
+ return hash((type(self), self.filename))
+
+ def __str__(self):
+ cls, name = id_split(self.id())
+ if cls:
+ return "%s (%s)" % (name, cls)
+ return name
+
+ def __repr__(self):
+ return '<%s %s>' % (self.__class__.__name__, self.id())
+
+ def setUp(self):
+ self._check_conditions()
+ self.rundir = rundir = self._new_rundir()
+ self._copy_fixtures()
+
+ self.meta = meta = Meta(rundir, self)
+ with cd(rundir):
+ meta.begin()
+ meta.log('case start to run!')
+ if self.setup:
+ code = meta.setup()
+ if code != 0:
+ msg = "setup failed. Exit %d, see log: %s" % (
+ code, meta.logname)
+ raise Exception(msg)
+
+ def tearDown(self):
+ meta = self.meta
+ if meta:
+ with cd(self.rundir):
+ meta.teardown()
+ meta.log('case is finished!')
+ meta.end()
+
+ def runTest(self):
+ meta = self.meta
+ with cd(self.rundir):
+ code = meta.steps()
+
+ msg = "Exit Nonzero %d. See log: %s" % (code, self.meta.logname)
+ self.assertEqual(0, code, msg)
+
+ def _check_conditions(self):
+ '''Check if conditions match; raise SkipTest if some conditions are
+ defined but do not match.
+ '''
+ labels = set((i.lower() for i in get_machine_labels()))
+ # blacklist has higher priority: if a label matches both the black
+ # and white lists, the case is skipped
+ if self.conditions.get('blacklist'):
+ intersection = labels & set(self.conditions.get('blacklist'))
+ if intersection:
+ raise SkipTest('by distribution blacklist:%s' %
+ ','.join(intersection))
+
+ kw = 'whitelist'
+ if self.conditions.get(kw):
+ intersection = labels & set(self.conditions[kw])
+ if not intersection:
+ raise SkipTest('not in distribution whitelist:%s' %
+ ','.join(self.conditions[kw]))
+
+ def _guess_component(self, filename):
+ # assert that filename is absolute path
+ if not settings.env_root or \
+ not filename.startswith(settings.cases_dir):
+ return 'unknown'
+ relative = filename[len(settings.cases_dir)+1:].split(os.sep)
+ # >1 means [0] is a dir name
+ return relative[0] if len(relative) > 1 else 'unknown'
+
+ def _new_rundir(self):
+ hash_ = str(uuid.uuid4()).replace('-', '')
+ path = os.path.join(settings.WORKSPACE, hash_)
+ os.mkdir(path)
+ return path
+
+ def _copy_fixtures(self):
+ for item in self.fixtures:
+ item.copy(self.rundir)
--- /dev/null
+'''
+This Settings machinery is mainly copied from Django
+'''
+
+import os
+import imp
+import time
+
+
+class Settings(object):
+
+ def __init__(self):
+ self.env_root = None
+ self.cases_dir = None
+ self.fixtures_dir = None
+
+ def load(self, mod):
+ for name in dir(mod):
+ if name == name.upper():
+ setattr(self, name, getattr(mod, name))
+
+ if hasattr(self, 'TZ') and self.TZ:
+ os.environ['TZ'] = self.TZ
+ time.tzset()
+
+ def setup_test_project(self, test_project_root):
+ self.env_root = os.path.abspath(test_project_root)
+ self.cases_dir = os.path.join(self.env_root, self.CASES_DIR)
+ self.fixtures_dir = os.path.join(self.env_root, self.FIXTURES_DIR)
+
+
+settings = Settings()
+
+
+def load_settings(test_project_root=None):
+ global settings
+
+ mod = __import__('itest.conf.global_settings',
+ fromlist=['global_settings'])
+ settings.load(mod)
+
+ if test_project_root:
+ settings_py = os.path.join(test_project_root, 'settings.py')
+ try:
+ mod = imp.load_source('settings', settings_py)
+ except (ImportError, IOError) as e:
+ raise ImportError("Could not import settings '%s' (Is it on "
+ "sys.path?): %s" % (settings_py, e))
+ settings.load(mod)
+ settings.setup_test_project(test_project_root)
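The load() convention above copies only ALL-UPPERCASE names from a settings module. This can be illustrated with a throwaway module object (TZ handling omitted in this sketch):

```python
import types

class Settings(object):
    def load(self, mod):
        # copy only ALL-UPPERCASE attributes, as Settings.load does above
        for name in dir(mod):
            if name == name.upper():
                setattr(self, name, getattr(mod, name))

mod = types.ModuleType('fake_settings')
mod.WORKSPACE = '/tmp/testspace'
mod.helper = lambda: None  # lowercase name: ignored by load()

s = Settings()
s.load(mod)
assert s.WORKSPACE == '/tmp/testspace'
assert not hasattr(s, 'helper')
```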
--- /dev/null
+'''
+Global settings for test ENV
+
+This file contains default values for all settings and can be overridden in
+an individual env's settings.py
+'''
+
+import os
+
+
+WORKSPACE = os.path.expanduser('~/testspace')
+
+
+CASES_DIR = 'cases'
+
+FIXTURES_DIR = 'fixtures'
+
+# All case text is actually a Jinja2 template. The default template
+# directories include the dirname of the case file and CASES_DIR. Extra
+# template directories can be set here as a list of path strings.
+TEMPLATE_DIRS = ()
+
+# Mapping from suite name to a list of cases.
+# For example, an ENV can have special suite names such as "Critical" and
+# "CasesUpdatedThisWeek", which include different set of cases.
+# Then refer it in command line as:
+# $ runtest Critical
+# $ runtest CasesUpdatedThisWeek
+SUITES = {}
+
+
+# Define the testing target name and version. They can be shown in console
+# info, in the title, or in the HTML report. If TARGET_NAME is None, nothing
+# is shown.
+TARGET_NAME = None
+
+# If TARGET_NAME is set but TARGET_VERSION is None, the version is obtained
+# by querying the package TARGET_NAME. If TARGET_VERSION is set, it is used
+# as-is.
+TARGET_VERSION = None
+
+# List of package names as dependencies. This info can be shown in the report.
+DEPENDENCIES = []
+
+
+# Password to run sudo.
+SUDO_PASSWD = os.environ.get('ITEST_SUDO_PASSWD')
+
+
+# Timeout(in seconds) for running a single case
+RUN_CASE_TIMEOUT = 30 * 60 # half an hour
+
+# Timeout(in seconds) for no output
+HANGING_TIMEOUT = 5 * 60 # 5 minutes
+
+# Time zone
+TZ = None
--- /dev/null
+import os
+import shutil
+from jinja2 import Environment, FileSystemLoader
+
+from itest.conf import settings
+from itest.utils import makedirs
+
+
+def Fixture(casedir, item):
+ typ = item.pop('type')
+ cls = globals().get(typ)
+ if not cls:
+ raise Exception("Unknown fixture type: %s" % typ)
+ return cls(casedir, **item)
+
+
+def guess_source(casedir, src):
+ source = os.path.join(casedir, src)
+ if not os.path.exists(source) and settings.fixtures_dir:
+ source = os.path.join(settings.fixtures_dir, src)
+ return source
+
+
+def guess_target(todir, src, target):
+ if target:
+ return os.path.join(todir, target)
+ return os.path.join(todir, os.path.basename(src))
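guess_target resolves where a fixture lands in the run directory: an explicit target wins, otherwise the source basename is reused. A standalone copy (the paths below are illustrative, not from a real test env):

```python
import os

def guess_target(todir, src, target):
    # explicit target wins; otherwise keep the source basename
    if target:
        return os.path.join(todir, target)
    return os.path.join(todir, os.path.basename(src))

assert guess_target('/run', 'conf/a.conf', None) == '/run/a.conf'
assert guess_target('/run', 'conf/b.conf', 'newdir/c.conf') == '/run/newdir/c.conf'
```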
+
+
+class copy(object):
+
+ def __init__(self, casedir, src, target=None):
+ self.source = guess_source(casedir, src)
+ self.target = target
+ if not os.path.isfile(self.source):
+ raise Exception("Fixture <copy> '%s' doesn't exist" % src)
+
+ def copy(self, todir):
+ target = guess_target(todir, self.source, self.target)
+ makedirs(os.path.dirname(target))
+ shutil.copy(self.source, target)
+
+
+class copydir(object):
+
+ def __init__(self, casedir, src, target=None):
+ self.source = guess_source(casedir, src.rstrip(os.path.sep))
+ self.target = target
+ if not os.path.isdir(self.source):
+ raise Exception("Fixture <copydir> '%s' doesn't exist" % src)
+
+ def copy(self, todir):
+ target = guess_target(todir,
+ self.source,
+ self.target).rstrip(os.path.sep)
+ makedirs(os.path.dirname(target))
+ shutil.copytree(self.source, target)
+
+
+class content(object):
+
+ def __init__(self, casedir, target, text):
+ self.target = target
+ self.text = text
+
+ def copy(self, todir):
+ target = os.path.join(todir, self.target)
+ makedirs(os.path.dirname(target))
+ with open(target, 'w') as writer:
+ writer.write(self.text)
+
+
+class template(object):
+
+ def __init__(self, casedir, src, target=None):
+ self.source = guess_source(casedir, src)
+ self.target = target
+
+ def copy(self, todir):
+ target = guess_target(todir, self.source, self.target)
+
+ template_dirs = [os.path.abspath(os.path.dirname(self.source))]
+ if settings.fixtures_dir:
+ template_dirs.append(settings.fixtures_dir)
+
+ jinja2_env = Environment(loader=FileSystemLoader(template_dirs))
+ template = jinja2_env.get_template(os.path.basename(self.source))
+ text = template.render()
+
+ makedirs(os.path.dirname(target))
+ with open(target, 'w') as writer:
+ writer.write(text)
--- /dev/null
+import os
+import logging
+
+try:
+ import unittest2 as unittest
+except ImportError:
+ import unittest
+from jinja2 import Environment, FileSystemLoader
+
+from itest import xmlparser
+from itest.conf import settings
+from itest.case import TestCase
+
+log = logging.getLogger(os.path.splitext(os.path.basename(__file__))[0])
+
+
+def load_case(sel):
+ '''
+ Load tests from a single test selection pattern `sel`
+ '''
+ suiteClass = unittest.TestSuite
+ def _is_test(ret):
+ return isinstance(ret, suiteClass) or \
+ isinstance(ret, TestCase)
+
+ suite = suiteClass()
+ stack = [sel]
+ while stack:
+ sel = stack.pop()
+ for pattern in suite_patterns.all():
+ if callable(pattern):
+ pattern = pattern()
+
+ ret = pattern.load(sel)
+ if not ret:
+ continue
+
+ if _is_test(ret):
+ suite.addTest(ret)
+ elif isinstance(ret, list):
+ stack.extend(ret)
+ else:
+ stack.append(ret)
+ break
+
+ return suite
+
+
+class TestLoader(unittest.TestLoader):
+
+ def loadTestsFromModule(self, _module, _use_load_tests=True):
+ if settings.env_root:
+ return load_case(settings.env_root)
+ return self.suiteClass()
+
+ def loadTestsFromName(self, name, module=None):
+ return load_case(name)
+
+
+class AliasPattern(object):
+ '''a key of settings.SUITES is an alias for its value'''
+
+ def load(self, sel):
+ if sel in settings.SUITES:
+ return settings.SUITES[sel]
+
+
+class FilePattern(object):
+ '''test from file name'''
+
+ def load(self, name):
+ if not os.path.isfile(name):
+ return
+
+ template_dirs = [os.path.abspath(os.path.dirname(name))]
+ if settings.cases_dir:
+ template_dirs.append(settings.cases_dir)
+ jinja2_env = Environment(loader=FileSystemLoader(template_dirs))
+ template = jinja2_env.get_template(os.path.basename(name))
+ text = template.render()
+
+ if isinstance(text, unicode):
+ text = text.encode('utf8')
+ # template returns unicode
+ # but xml parser only accepts str
+ # And we can only assume it's utf8 here
+
+ data = xmlparser.Parser().parse(text)
+ if not data:
+ raise Exception("Can't load test case from %s" % name)
+ return TestCase(os.path.abspath(name), data)
+
+
+class DirPattern(object):
+ '''find all tests recursively in a dir'''
+
+ def load(self, top):
+ if os.path.isdir(top):
+ return list(self._walk(top))
+
+ def _walk(self, top):
+ for current, _dirs, nondirs in os.walk(top):
+ for name in nondirs:
+ if name.endswith('.case'):
+ yield os.path.join(current, name)
+
+
+class ComponentPattern(object):
+ '''tests from a component name'''
+
+ _components = None
+
+ @staticmethod
+ def guess_components():
+ if not settings.env_root:
+ return ()
+ comp = []
+ for base in os.listdir(settings.cases_dir):
+ full = os.path.join(settings.cases_dir, base)
+ if os.path.isdir(full):
+ comp.append(base)
+ return set(comp)
+
+ @classmethod
+ def is_component(cls, comp):
+ if cls._components is None:
+ cls._components = cls.guess_components()
+ return comp in cls._components
+
+ def load(self, comp):
+ if self.is_component(comp):
+ return os.path.join(settings.cases_dir, comp)
+
+
+class InversePattern(object):
+ '''a string starting with "!" selects the inverse of string[1:]'''
+
+ def load(self, sel):
+ if sel.startswith('!'):
+ comp = sel[1:]
+ comps = ComponentPattern.guess_components()
+ if ComponentPattern.is_component(comp):
+ return [c for c in comps if c != comp]
+ # if the keyword isn't a component name, then it is useless
+ return list(comps)
+
+
+class IntersectionPattern(object):
+ '''use && to load the intersection of several parts'''
+
+ loader_class = TestLoader
+
+ def load(self, sel):
+ if sel.find('&&') <= 0:
+ return
+
+ def intersection(many):
+ inter = None
+ for each in many:
+ if inter is None:
+ inter = set(each)
+ else:
+ inter.intersection_update(each)
+ return inter
+
+ loader = self.loader_class()
+ many = [load_case(part) for part in sel.split('&&')]
+
+ return loader.suiteClass(intersection(many))
+
+
+class _SuitePatternRegister(object):
+
+ def __init__(self):
+ self._patterns = []
+
+ def register(self, cls):
+ self._patterns.append(cls)
+
+ def all(self):
+ return self._patterns
+
+
+def register_default_patterns():
+ for pattern in (AliasPattern,
+ FilePattern,
+ DirPattern,
+ IntersectionPattern,
+ ComponentPattern,
+ InversePattern,
+ ):
+ suite_patterns.register(pattern)
+
+suite_patterns = _SuitePatternRegister()
+register_default_patterns()
--- /dev/null
+import os
+import sys
+import argparse
+
+try:
+ import unittest2 as unittest
+ from unittest2 import TextTestResult
+except ImportError:
+ import unittest
+ from unittest import TextTestResult
+
+from itest import conf
+from itest.utils import makedirs
+from itest.loader import TestLoader
+from itest import __version__
+
+
+ENVIRONMENT_VARIABLE = "ITEST_ENV_PATH"
+
+
+def find_test_project_from_cwd():
+ '''
+ Returns test project root directory or None
+ '''
+ path = os.getcwd()
+ while 1:
+ name = os.path.join(path, 'settings.py')
+ if os.path.exists(name):
+ return path
+
+ if path == '/':
+ return
+ path = os.path.dirname(path)
+
+
+class TestProgram(unittest.TestProgram):
+
+ def parseArgs(self, argv):
+ if len(argv) > 1 and argv[1].lower() == 'discover':
+ self._do_discovery(argv[2:])
+ return
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument('-V', '--version', action='version',
+ version=__version__)
+ parser.add_argument('-q', '--quiet', action='store_true',
+ help="minimal output")
+ parser.add_argument('-v', '--verbose', action='count',
+ help="verbose output")
+ parser.add_argument('-f', '--failfast', action='store_true',
+ help="stop on the first failure")
+ parser.add_argument('-c', '--catch', action='store_true',
+ help="catch ctrl-c and display results")
+ parser.add_argument('-b', '--buffer', action='store_true',
+ help="buffer stdout and stderr during test runs")
+ parser.add_argument('tests', nargs='*')
+ parser.add_argument('--test-project-path',
+ default=os.environ.get(ENVIRONMENT_VARIABLE),
+ help='set test project path where settings.py '
+ 'is located. [%s]' % ENVIRONMENT_VARIABLE)
+ parser.add_argument('--test-workspace', type=os.path.abspath,
+ help='set test workspace path')
+ parser.add_argument('--with-xunit', action='store_true',
+ help='provides test results in standard XUnit XML '
+ 'format')
+ parser.add_argument('--xunit-file',
+ type=os.path.abspath, default='xunit.xml',
+ help='Path to the xml file to store the xunit report. '
+ 'Default is xunit.xml in the working directory')
+
+ opts = parser.parse_args()
+
+ # super class options
+ if opts.quiet:
+ self.verbosity = 0
+ elif opts.verbose:
+ # default verbosity is 1
+ self.verbosity = opts.verbose + 1
+ self.failfast = opts.failfast
+ self.catchbreak = opts.catch
+ self.buffer = opts.buffer
+
+ # additional options
+ if opts.with_xunit:
+ if not os.access(os.path.dirname(opts.xunit_file), os.W_OK):
+ print >> sys.stderr, "Permission denied:", opts.xunit_file
+ sys.exit(1)
+ from itest.result import XunitTestResult
+ self.testRunner.resultclass = XunitTestResult
+ self.testRunner.resultclass.xunit_file = opts.xunit_file
+ else:
+ self.testRunner.resultclass = TextTestResult
+
+ if opts.test_project_path:
+ conf.load_settings(opts.test_project_path)
+ else:
+ conf.load_settings(find_test_project_from_cwd())
+
+ conf.settings.verbosity = self.verbosity
+
+ if opts.test_workspace:
+ conf.settings.WORKSPACE = opts.test_workspace
+ makedirs(conf.settings.WORKSPACE)
+
+ # copy from super class
+ if len(opts.tests) == 0 and self.defaultTest is None:
+ # createTests will load tests from self.module
+ self.testNames = None
+ elif len(opts.tests) > 0:
+ self.testNames = opts.tests
+ if __name__ == '__main__':
+ # to support python -m unittest ...
+ self.module = None
+ else:
+ self.testNames = (self.defaultTest,)
+ self.createTests()
+
+
+class TextTestRunner(unittest.TextTestRunner):
+
+ def __init__(self, stream=None, descriptions=True, verbosity=1,
+ failfast=False, buffer=False, resultclass=None):
+ if stream is None:
+ stream = sys.stderr
+ super(TextTestRunner, self).__init__(stream, descriptions, verbosity,
+ failfast, buffer, resultclass)
+
+
+def main():
+ import logging
+ logging.basicConfig()
+ TestProgram(testLoader=TestLoader(), testRunner=TextTestRunner)
--- /dev/null
+import re
+import time
+import xml.etree.ElementTree as ET
+
+try:
+ from unittest2 import TextTestResult
+except ImportError:
+ from unittest import TextTestResult
+
+from itest.case import id_split
+
+SHELL_COLOR_PATTERN = re.compile(r'\x1b\[[0-9]*[mK]')
+
+
+class XunitTestResult(TextTestResult):
+
+ xunit_file = 'xunit.xml'
+
+ def __init__(self, *args, **kw):
+ super(XunitTestResult, self).__init__(*args, **kw)
+ self.testsuite = ET.Element('testsuite')
+ self._timer = time.time()
+
+ def startTest(self, test):
+ "Called when the given test is about to be run"
+ super(XunitTestResult, self).startTest(test)
+ self._timer = time.time()
+
+ def _time_taken(self):
+ if hasattr(self, '_timer'):
+ taken = time.time() - self._timer
+ else:
+ # test died before it ran (probably error in setup())
+ # or success/failure added before test started probably
+ # due to custom TestResult munging
+ taken = 0.0
+ return taken
+
+ def addError(self, test, err):
+ """Called when an error has occurred. 'err' is a tuple of values as
+ returned by sys.exc_info().
+ """
+ super(XunitTestResult, self).addError(test, err)
+ self._add_failure(test, err)
+
+ def addFailure(self, test, err):
+ """Called when a failure has occurred. 'err' is a tuple of values as
+ returned by sys.exc_info()."""
+ super(XunitTestResult, self).addFailure(test, err)
+ self._add_failure(test, err)
+
+ def _add_failure(self, test, err):
+ cls, name = id_split(test.id())
+
+ def get_log():
+ with open(test.meta.logname) as reader:
+ content = reader.read()
+ content = content.replace('\r', '\n').replace('\x00', '')
+ content = SHELL_COLOR_PATTERN.sub('', content)
+ return content.decode('utf8', 'ignore')
+
+ if hasattr(test, 'meta'):
+ content = get_log()
+ else:
+ content = "Log file isn't available!"
+
+ testcase = ET.SubElement(self.testsuite, 'testcase',
+ classname=cls,
+ name=name,
+ time="%.3f" % self._time_taken())
+ failure = ET.SubElement(testcase, 'failure',
+ message=str(err))
+ failure.text = content
+
+ def addSuccess(self, test):
+ "Called when a test has completed successfully"
+ super(XunitTestResult, self).addSuccess(test)
+ cls, name = id_split(test.id())
+ ET.SubElement(self.testsuite, 'testcase',
+ classname=cls,
+ name=name,
+ time="%.3f" % self._time_taken())
+
+ def stopTestRun(self):
+ """Called once after all tests are executed.
+
+ See stopTest for a method called after each test.
+ """
+ super(XunitTestResult, self).stopTestRun()
+
+ ts = self.testsuite
+ ts.set("tests", str(self.testsRun))
+ ts.set("errors", str(len(self.errors)))
+ ts.set("failures", str(len(self.failures)))
+ ts.set("skip", str(len(self.skipped)))
+ xml = ET.tostring(ts)
+
+ with open(self.xunit_file, 'w') as fp:
+ fp.write(xml)
--- /dev/null
+import os
+import datetime
+import platform
+import subprocess
+from contextlib import contextmanager
+
+
+def now():
+ return datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
+
+
+def get_machine_labels():
+ '''Get machine labels for localhost. Labels are strings in the format
+ <dist_name><dist_version>-<arch>, such as "Fedora", "Fedora17",
+ "Fedora17-x86_64", "Ubuntu", "Ubuntu12.04", "Ubuntu12.10-i586".
+ '''
+ dist_name, dist_ver = \
+ [i.strip() for i in platform.linux_distribution()[:2]]
+ arch = platform.machine().strip()
+ return (dist_name,
+ arch,
+ '%s%s' % (dist_name, dist_ver),
+ '%s-%s' % (dist_name, arch),
+ '%s%s-%s' % (dist_name, dist_ver, arch),
+ )
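Since `platform.linux_distribution()` was removed in Python 3.8, the five label shapes produced above are easiest to illustrate with fixed inputs (the values below are examples, not queried from the host):

```python
def make_labels(dist_name, dist_ver, arch):
    # the same five shapes get_machine_labels() returns
    return (dist_name,
            arch,
            '%s%s' % (dist_name, dist_ver),
            '%s-%s' % (dist_name, arch),
            '%s%s-%s' % (dist_name, dist_ver, arch),
            )

print(make_labels('Fedora', '17', 'x86_64'))
# ('Fedora', 'x86_64', 'Fedora17', 'Fedora-x86_64', 'Fedora17-x86_64')
```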
+
+
+def check_output(*popenargs, **kwargs):
+ if hasattr(subprocess, 'check_output'):
+ return subprocess.check_output(*popenargs, **kwargs)
+ return _check_output(*popenargs, **kwargs)
+
+
+def _check_output(*popenargs, **kwargs):
+ r"""Run command with arguments and return its output as a byte string.
+ """
+ if 'stdout' in kwargs:
+ raise ValueError('stdout argument not allowed, it will be overridden.')
+ process = subprocess.Popen(stdout=subprocess.PIPE, *popenargs, **kwargs)
+ return process.communicate()[0]
+
+
+@contextmanager
+def cd(path):
+ '''cd to the given path and return to the old one when done
+ '''
+ old_path = os.getcwd()
+ os.chdir(path)
+ try:
+ yield
+ finally:
+ os.chdir(old_path)
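A quick standalone check of the context manager, written with a try/finally so the original directory is restored even if the body raises:

```python
import os
import tempfile
from contextlib import contextmanager

@contextmanager
def cd(path):
    # restore the old cwd even if the body raises
    old_path = os.getcwd()
    os.chdir(path)
    try:
        yield
    finally:
        os.chdir(old_path)

before = os.getcwd()
with tempfile.TemporaryDirectory() as tmp:
    with cd(tmp):
        assert os.getcwd() == os.path.realpath(tmp)
assert os.getcwd() == before
```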
+
+
+def makedirs(path):
+ """
+ Recursively create `path`, do nothing if it exists
+ """
+ try:
+ os.makedirs(path)
+ except OSError as err:
+ import errno
+ if err.errno != errno.EEXIST:
+ raise
+
+
+def in_dir(child, parent):
+ """
+ Check whether `child` is inside `parent`
+ """
+ return os.path.realpath(child).startswith(os.path.realpath(parent))
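in_dir is a plain prefix test on resolved paths. Note that, as written, it also matches sibling paths that merely share a prefix (e.g. `/a/bc` under `/a/b`), which callers may need to keep in mind. A standalone copy:

```python
import os

def in_dir(child, parent):
    # prefix test on resolved paths; see caveat below
    return os.path.realpath(child).startswith(os.path.realpath(parent))

assert in_dir('/opt/env/cases/build', '/opt/env/cases')
assert not in_dir('/etc/passwd', '/opt/env')
# caveat of the plain prefix test: a sibling sharing the prefix also matches
assert in_dir('/opt/env/cases-old', '/opt/env/cases')
```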
--- /dev/null
+"""
+Parser of XML format of case file
+"""
+import os
+import logging
+import xml.etree.ElementTree as ET
+
+try:
+ from xml.etree.ElementTree import ParseError
+except ImportError:
+ from xml.parsers.expat import ExpatError as ParseError
+
+log = logging.getLogger(os.path.splitext(os.path.basename(__file__))[0])
+
+
+class Parser(object):
+ """
+ The XML case parser
+ """
+
+ def parse(self, xmldoc):
+ """
+ Returns a dict represent a case
+ """
+ data = {}
+ try:
+ root = ET.fromstring(xmldoc)
+ except ParseError as err:
+ log.warn("Case syntax error: %s", str(err))
+ return
+
+ for child in root:
+ method = '_on_' + child.tag
+ if hasattr(self, method):
+ value = getattr(self, method)(child)
+ data[child.tag] = value
+ return data
+
+ def _text(self, element):
+ """
+ Returns stripped text of `element`
+ """
+ return element.text.strip() if element.text else ''
+
+ _on_formatversion = _text
+ _on_summary = _text
+ _on_setup = _text
+ _on_steps = _text
+ _on_teardown = _text
+
+ def _on_tracking(self, element):
+ """
+ Subelement can be a Gerrit `change` or a Redmine `ticket`.
+ <tracking>
+ <change>90125</change>
+ <ticket>5150</ticket>
+ </tracking>
+ """
+ return [(child.tag, self._text(child))
+ for child in element
+ if child.tag in ('change', 'ticket')]
+
+ def _on_qa(self, element):
+ """
+ A sequence of <prompt> and <answer>.
+ <qa>
+ <prompt>Are you sure [N/y]?</prompt>
+ <answer>y</answer>
+ </qa>
+ """
+ data = []
+ state = 0
+ for node in element:
+ if state == 0:
+ if node.tag == 'prompt':
+ prompt = self._text(node)
+ state = 1
+ else:
+ raise Exception("Case syntax error: expects <prompt> "
+ "rather than %s" % node.tag)
+ elif state == 1:
+ if node.tag == 'answer':
+ answer = self._text(node)
+ data.append((prompt, answer))
+ state = 0
+ else:
+ raise Exception("Case syntax error: expects <answer> "
+ "rather than %s" % node.tag)
+ if state == 1:
+ raise Exception("Case syntax error: expects <answer> rather than "
+ "closing")
+ return data
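The prompt/answer pairing above is a two-state walk over the element's children. A self-contained sketch of the same logic (renamed parse_qa here, without the surrounding Parser class):

```python
import xml.etree.ElementTree as ET

def parse_qa(element):
    # state 0: expect <prompt>; state 1: expect <answer>
    data, prompt, state = [], None, 0
    for node in element:
        if state == 0:
            if node.tag != 'prompt':
                raise Exception('expects <prompt> rather than %s' % node.tag)
            prompt, state = (node.text or '').strip(), 1
        else:
            if node.tag != 'answer':
                raise Exception('expects <answer> rather than %s' % node.tag)
            data.append((prompt, (node.text or '').strip()))
            state = 0
    if state == 1:
        raise Exception('expects <answer> rather than closing')
    return data

qa = ET.fromstring('<qa><prompt>Are you sure [N/y]?</prompt>'
                   '<answer>y</answer></qa>')
assert parse_qa(qa) == [('Are you sure [N/y]?', 'y')]
```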
+
+ def _on_conditions(self, element):
+ """
+ Platform white list and black list
+ <conditions>
+ <whitelist>
+ <platform>OpenSuse-64bit</platform>
+ <platform>Ubuntu12.04</platform>
+ </whitelist>
+ <blacklist>
+ <platform>Fedora19-x86_64</platform>
+ </blacklist>
+ </conditions>
+ """
+ def _platforms(key):
+ return [self._text(n)
+ for n in element.findall('./%s/platform' % key)]
+ return {
+ 'whitelist': _platforms('whitelist'),
+ 'blacklist': _platforms('blacklist'),
+ }
+
+ def _on_fixtures(self, element):
+ """
+ <fixtures>
+ <copy src="conf/a.conf" />
+ <template src="conf/b.conf" target="newdir/c.conf" />
+ <content target="c.conf">conf content</content>
+ </fixtures>
+ """
+ data = []
+ for i in element:
+ item = dict({'type': i.tag}, **i.attrib)
+ text = self._text(i)
+ if text:
+ item['text'] = text
+ data.append(item)
+ return data
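The fixtures handler above maps each child element to a dict: the tag becomes 'type', attributes are merged in, and non-empty text becomes 'text'. A standalone copy of that mapping:

```python
import xml.etree.ElementTree as ET

def parse_fixtures(element):
    # tag -> 'type', attributes merged, non-empty text -> 'text'
    data = []
    for i in element:
        item = dict({'type': i.tag}, **i.attrib)
        text = i.text.strip() if i.text else ''
        if text:
            item['text'] = text
        data.append(item)
    return data

doc = ('<fixtures>'
       '<copy src="conf/a.conf" />'
       '<content target="c.conf">conf content</content>'
       '</fixtures>')
items = parse_fixtures(ET.fromstring(doc))
assert items[0] == {'type': 'copy', 'src': 'conf/a.conf'}
assert items[1] == {'type': 'content', 'target': 'c.conf',
                    'text': 'conf content'}
```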
--- /dev/null
+"""This plugin provides test cases definition in XML format.
+
+It's designed for functional testing of *nix commands.
+
+Write a test case like this:
+
+::
+
+ <?xml version="1.0" encoding="UTF-8"?>
+ <testcase>
+ <summary>An example test</summary>
+ <setup>
+ touch a.txt
+ </setup>
+ <steps>
+ cp a.txt b.txt
+ test -f a.txt
+ </steps>
+ <teardown>
+ rm *.txt
+ </teardown>
+ </testcase>
+
+"""
+import os
+import logging
+from os.path import (sep, join, isdir, isfile, exists,
+ dirname, basename, expanduser)
+
+from jinja2 import Environment, FileSystemLoader
+from nose.plugins import Plugin
+
+from itest import conf
+from itest.utils import in_dir
+from itest.utils import makedirs
+from itest.main import find_test_project_from_cwd
+from itest import xmlparser
+from itest.case import TestCase
+from itest.loader import load_case
+from itest import __version__
+
+
+log = logging.getLogger("nose.plugins.xcase")
+
+
+class XCase(Plugin):
+ """
+ As a Nose plugin
+ """
+
+ name = 'xcase'
+
+ def options(self, parser, env):
+ """
+ Register options for this plugin
+
+ A plugin's options() method receives a parser instance. It's good form
+ for a plugin to use that instance only to add additional arguments
+ that take only long options (--like-this). Most of nose's built-in
+ arguments get their default value from an environment variable.
+ """
+ super(XCase, self).options(parser, env)
+ parser.version = __version__
+ parser.add_option('--V', '--xcase-version', action='version',
+ dest="xcase-version",
+ help="print xcase version")
+ parser.add_option('--f', '--xcase-failfast', action='store_true',
+ help="stop on the first failure")
+ parser.add_option('--c', '--xcase-catch', action='store_true',
+ help="catch ctrl-c and display results")
+ parser.add_option('--b', '--xcase-buffer', action='store_true',
+ help="buffer stdout and stderr during test runs")
+ parser.add_option('--xcase-tests', action='append', default=[],
+ help="name of a test to run; may be repeated")
+ parser.add_option('--xcase-project-path', action="store", metavar="PATH",
+ default=env.get("XCASE_ENV_PATH"),
+ help='path of the test project where settings.py '
+ 'is located. [%s]' % env.get("XCASE_ENV_PATH"))
+ parser.add_option('--xcase-workspace', action="store", metavar="PATH",
+ help='set test workspace path')
+ parser.add_option('--xcase-with-xunit', action='store_true',
+ help='provide test results in standard xUnit XML '
+ 'format')
+ parser.add_option('--xcase-xunit-file', action='store',
+ default='xunit.xml',
+ help='Path to xml file to store the xunit report. '
+ 'Default is xunit.xml in the working directory')
+ parser.add_option('--xcase-test-env', action="store", metavar="PATH",
+ dest='test_env',
+ default=env.get("NOSE_XCASE_ENV"),
+ help="Path to test ENV containing fixtures and cases")
+ parser.add_option('--xcase-cases-dir', action='store',
+ dest='cases_dir',
+ metavar='PATH', default='cases',
+ help="Path to case files, relative to `test-env` path")
+ parser.add_option('--xcase-fixtures-dir', action='store',
+ dest='fixtures_dir',
+ metavar='PATH', default='fixtures',
+ help="Path to fixture files, relative to `test-env` path")
+ parser.add_option('--xcase-sudo-password', action='store',
+ metavar='PASSWORD', default=env.get('NOSE_SUDO_PASSWORD'),
+ help="Password to run sudo")
+ parser.add_option('--xcase-timeout-run-case', action='store',
+ metavar='SECONDS', default=30*60, # half an hour
+ help="Timeout(in seconds) for running a single case")
+ parser.add_option('--xcase-timeout-hanging', action='store',
+ metavar='SECONDS', default=5*60, # five minutes
+ help="Timeout(in seconds) if there isn't any output")
+ parser.add_option('--xcase-case-ext', action='store',
+ dest='case_ext',
+ metavar='EXT', default='xml',
+ help="Extension name of case file")
+
+ def configure(self, options, config):
+ """
+ Configure plugin.
+
+ A plugin's configure() method receives the parsed OptionParser options
+ object, as well as the current config object. Plugins should configure
+ their behavior based on the user-selected settings, and may raise
+ exceptions if the configured behavior is nonsensical.
+ """
+ super(XCase, self).configure(options, config)
+ if not self.enabled:
+ return
+ self.options = options
+ self.config = config
+
+ # Check for Test Env
+ if not options.test_env:
+ path = find_test_project_from_cwd()
+ if path:
+ options.test_env = path
+
+ if options.test_env:
+ self._configure_test_env()
+
+ conf.settings.verbosity = options.verbosity
+ if options.xcase_project_path:
+ conf.load_settings(options.xcase_project_path)
+ else:
+ conf.load_settings(find_test_project_from_cwd())
+
+ if options.xcase_workspace:
+ conf.settings.WORKSPACE = options.xcase_workspace
+ makedirs(conf.settings.WORKSPACE)
+
+ def prepareTestLoader(self, loader):
+ """
+ Capture loader
+ """
+ self.loader = loader
+
+ def loadTestsFromName(self, name, module=None, importPath=None):
+ """
+ Return tests in this file or module. Return None if you are not able
+ to load any tests, or an iterable if you are. May be a generator.
+ """
+ log.info("load from name %s", name)
+ fromfile = self.loadTestsFromFile
+ fromdir = self.loader.loadTestsFromDir
+
+ if isfile(name):
+ return fromfile(name)
+ if isdir(name):
+ return fromdir(name)
+ if not self.options.test_env:
+ return
+
+ path = name.replace('.', sep)
+ if isdir(path):
+ return fromdir(path)
+ path2 = join(self.options.cases_dir, path)
+ if isdir(path2):
+ return fromdir(path2)
+
+ filename = ''.join([path, '.', self.options.case_ext])
+ if exists(filename):
+ return fromfile(filename)
+ filename2 = join(self.options.cases_dir, filename)
+ if exists(filename2):
+ return fromfile(filename2)
+
+ def loadTestsFromFile(self, filename):
+ """
+ Load test case
+
+ Writing a plugin that loads tests from files other than python modules
+
+ Implement wantFile and loadTestsFromFile. In wantFile, return True for
+ files that you want to examine for tests. In loadTestsFromFile, for
+ those files, return an iterable containing TestCases (or yield them as
+ you find them; loadTestsFromFile may also be a generator).
+ """
+ yield load_case(filename)
+
+ def wantFile(self, filename):
+ """
+ Want case file
+
+ Implement any or all want* methods. Return False to reject the test
+ candidate, True to accept it - which means that the test candidate
+ will pass through the rest of the system, so you must be prepared to
+ load tests from it if tests can't be loaded by the core loader or
+ another plugin - and None if you don't care.
+ """
+ log.info("%s, %s", self.options.case_ext, filename)
+ return filename.endswith('.' + self.options.case_ext)
+
+ def wantDirectory(self, dirname):
+ """
+ Return True if we want to search into `dirname`
+
+ Search only if `dirname` is inside test-env
+ """
+ if self.options.test_env:
+ return in_dir(dirname, self.options.test_env)
+ return False
+
+ def addError(self, test, err):
+ log.debug("AddError:----\n%s:%s\n%s:%s\n----",
+ type(test), test,
+ type(err), err)
+
+ def addFailure(self, test, err):
+ log.debug("AddFailure:----\n%s:%s\n%s:%s\n----",
+ type(test), test,
+ type(err), err)
+
+ def _configure_test_env(self):
+ opt = self.options
+ opt.cases_dir = join(opt.test_env,
+ opt.cases_dir.lstrip(sep))
+ opt.fixtures_dir = join(opt.test_env,
+ opt.fixtures_dir.lstrip(sep))
--- /dev/null
+CentOS-6.5: python-mock python-nose python-coverage
+Ubuntu-12.04: python-mock python-nose python-coverage
+Ubuntu-12.10: python-mock python-nose python-coverage
+Ubuntu-13.04: python-mock python-nose python-coverage
+Ubuntu-13.10: python-mock python-nose python-coverage
+openSUSE-12.1: python-mock python-nose python-coverage
+openSUSE-12.2: python-mock python-nose python-coverage
+openSUSE-12.3: python-mock python-nose python-coverage
+openSUSE-13.1: python-mock python-nose python-coverage
+Fedora-16: python-mock python-nose python-coverage
+Fedora-17: python-mock python-nose python-coverage
+Fedora-18: python-mock python-nose python-coverage
+Fedora-19: python-mock python-nose python-coverage
--- /dev/null
+PKG_NAME := itest-core
+SPECFILE = $(addsuffix .spec, $(PKG_NAME))
+PKG_VERSION := $(shell grep '^Version: ' $(SPECFILE)|awk '{print $$2}')
+
+TARBALL := $(PKG_NAME)_$(PKG_VERSION).tar.gz
+
+dsc: tarball
+ $(eval MD5=$(shell md5sum $(TARBALL) | sed "s/ / $(shell stat -c '%s' $(TARBALL)) /"))
+ @sed -i 's/^Version:.*/Version: $(PKG_VERSION)/' $(PKG_NAME).dsc
+ @sed -i 's/ [a-f0-9]\+ [0-9]\+ $(PKG_NAME).*tar.*/ $(MD5)/' $(PKG_NAME).dsc
+
+tarball:
+ @cd .. && git archive --prefix $(PKG_NAME)-$(PKG_VERSION)/ HEAD \
+ | gzip > packaging/$(TARBALL)
+
+clean:
+ @rm -f $(PKG_NAME)*tar*
+
+all: tarball dsc
--- /dev/null
+* Fri Nov 29 2013 Huang, Hao <hao.h.huang@intel.com> - 1.7
+- Add script imgdiff
+- Make compatible for pexpect-2.5
+
+* Fri Nov 29 2013 Huang, Hao <hao.h.huang@intel.com> - 1.6
+- Merge "Raise Timeout error if no output for a while." into devel
+- Raise Timeout error if no output for a while.
+- Remove ksctrl scripts
+- Print log to sys.stdout instead of /dev/fd/1
+- fix ksctrol can't find image
+- fix xml report can not display
+- Merge "Generate xUnit report for ksctrl." into devel
+- Add python-urlgrabber as requirement.
+- Generate xUnit report for ksctrl.
+- Fetch KS files starting from images/ URL.
+- Add script ksctrl
+- Ignore keyword if it's not a component name.
+- Remove feature of auto retry when test failed
+- Remove --install-layout=deb in setup.py
+- Merge "Rewrite code of locking testspace" into devel
+- Rewrite code of locking testspace
+- Remove a useless function in itest.utils
+- Strip redundant white space from distribution info.
+- Support selecting platforms in cases.
+- Skip a test when a SkipTest exception is raised.
+- Support xunit XML format of report(#1065).
+- ctrl-c can't break runtest(#1099).
+- Remove HTML report and auto sync feature
+- Change requirement from "pexpect<2.5" to "pexpect"
+- Change name to itest-core for deb package
+- Changing package name from itest to itest-core
+- Ignore .osc and tarball in packaging dir
+- Add spec and dsc to build rpm and deb packages
+- Moving itest core framework from itest project
+
+* Thu Jan 24 2013 Guan Junchun <junchunx.guan@intel.com> - 1.1
+- Support debian building
+- bump to version 1.1
+- add an osc certificat to resolve the obs certification issue
+- Add --tizen option for create_proj
+- Add a jenkins job script: clean up unwanted files, users in test vms
+- Change default value of GBS.ENABLE_COVERAGE to false
+- Fix bug introduced by coverage support.
+- Fix some pylint warnings.
+- Merge "add a build case for buildroot uid gid issue #685" into devel
+- Merge "Test submit --tag #596" into devel
+- Merge "update mic ks files, no user authentication for repos" into devel
+- update mic ks files, no user authentication for repos
+- modify mic cases bases on itest's design about bug #644
+- add a build case for buildroot uid gid issue #685
+- Add a jenkins job script: clean up tmp projects in the tizen.org OBS
+- Merge "Guess component from case path. #664" into devel
+- Merge "Remove leading string "mic" from TARGET_VERSION in MIC ENV" into devel
+- Guess component from case path. #664
+- Merge "modify mic cases bases on itest's design about bug #644" into devel
+- Remove leading string "mic" from TARGET_VERSION in MIC ENV
+- Merge "add a build case for incremental feature." into devel
+- Test submit --tag #596
+- Add coverage report. #680
+- Merge "add a build case for #535 on pm" into devel
+- add a build case for incremental feature.
+- Add pexpect event when obs server certificate failed verification
+- add a build case for #535 on pm
+- Merge changes If97ad020,I0662f7a7 into devel
+- Merge "Fix useradd issue on openSUSE" into devel
+- Merge "Add a build case for target arch check" into devel
+- Add more tab completion
+- Support gbs clone and pull subcommand tab completion
+- Merge "Cleanup export cases" into devel
+- Cleanup export cases
+- Fix useradd issue on openSUSE
+- Move "Issue related" section to the end of report
+- Merge "Add more options in gbs.bash" into devel
+- Merge "update export cases for new patch" into devel
+- Merge "Update README for __steps__. #644" into devel
+- Add a build case for target arch check
+- update export cases for new patch
+- Add more options in gbs.bash
+- Merge "Fix two build cases" into devel
+- Update README for __steps__. #644
+- update build cases for target arch
+- Fix two build cases
+- Update build cases
+- Cleanup import cases
+- add new feature cases for #594
+- Merge "Change default value of SUDO_PASSWD" into devel
+- Modify more cases to adapt pipefail option. #644
+- update cases 's summary tag. fix #667
+- Add set -o pipefail at the beginning of case script.
+- update cases from i686 to i586
+- update and add cases on pm from #83 to #90
+- Merge "Add MIC folder to Itest, including MIC test cases and related files." into devel
+- Merge "export cases for testing write permission #617" into devel
+- Merge "Change cases to adapt bash -e. #644" into devel
+- Change cases to adapt bash -e. #644
+- Change default value of SUDO_PASSWD
+- Fix KeyError in web page
+- Merge "update and add cases on pm from #91 to #112" into devel
+- export cases for testing write permission #617
+- Add MIC folder to Itest, including MIC test cases and related files.
+- Merge "generate report as Ctl-c when jenkins termnate job" into devel
+- Merge "add web server configuration in README" into devel
+- Merge "add 2 import cases for #641, #649" into devel
+- Merge "Update cases for #340, #467, #332, #396" into devel
+- Update cases for #340, #467, #332, #396
+- update and add cases on pm from #91 to #112
+- add 2 import cases for #641, #649
+- Merge "add a case for 185" into devel
+- Merge "update and add cases on pm from #76 to #82" into devel
+- add web server configuration in README
+- add a case for 185
+- Update app.py according to latest format of report.
+- Merge "update and add cases on pm from #71 to #75" into devel
+- generate report as Ctl-c when jenkins termnate job
+- Merge "Generate a bash script to run case commands. #644" into devel
+- update and add cases on pm from #71 to #75
+- update and add cases on pm from #76 to #82
+- Generate a bash script to run case commands. #644
+- Test gbs import in different user at same machine #574
+- Merge "Change suffix of manual cases into txt" into devel
+- Change suffix of manual cases into txt
+- update case for OS difference
+- Support customized HTML report for ENV
+- Adding ENV support and pack GBS into an ENV
+- Merge "support multiple issue numbers in __issue__ section. #632" into devel
+- Merge "update and add cases on pm from #69 to #70" into devel
+- update and add cases on pm from #69 to #70
+- Merge "update and add cases on pm from #56 to #60" into devel
+- Merge "update and add cases on pm from #31 to #35" into devel
+- update and add cases on pm from #56 to #60
+- Merge "test gbs export about #635" into devel
+- Merge "update and add cases on pm from #36 to #55" into devel
+- update and add cases on pm from #31 to #35
+- update and add cases on pm from #19 to #24
+- avoid ~ not expanded by quotes
+- test gbs export about #635
+- update and add cases on pm from #36 to #55
+- Merge "Remove hardcode expecting strings and add them into cases." into devel
+- Merge "update and add cases on pm from #25 to #30" into devel
+- support multiple issue numbers in __issue__ section. #632
+- Remove hardcode expecting strings and add them into cases.
+- Merge "update and add cases on pm from #4 to #13" into devel
+- update and add cases on pm from #25 to #30
+- Print error message when rm testspace failed by invalid passwd.
+- Do not install data/ plugin/
+- Remove "gbs" string from docstring
+- update and add cases on pm from #4 to #13
+- Rename env variable from GBS_SUDO_PASSWD to ITEST_SUDO_PASSWD
+- Merge "Remove test-packages.tar.gz" into devel
+- Merge "Update rb cases" into devel
+- Remove test-packages.tar.gz
+- Merge "add issue ID for some cases" into devel
+- Update rb cases
+- Remove script are_gits_clean since it duplicate with AssertClean
+- add issue ID for some cases
+- Merge "Add GBS vars and assert to avoid hard code in case" into devel
+- Merge "update case deleting useless action" into devel
+- Add GBS vars and assert to avoid hard code in case
+- copy data to running dir in testspace, not to link
+- use root authority to delete old testspace
+- update case deleting useless action
+- Merge "Encode html report to utf8" into devel
+- Merge "Update changelog cases" into devel
+- Update submit cases
+- Merge "Update conf cases" into devel
+- Merge "update export and import cases" into devel
+- Update conf cases
+- Update changelog cases
+- update export and import cases
+- Merge "Update build cases" into devel
+- Update build cases
+- Merge "modify some rb cases and add oscrc config file" into devel
+- modify some rb cases and add oscrc config file
+- Merge "Refactor HTML report code." into devel
+- Refactor HTML report code.
+- Add chroot cases to test different repos
+- Add issue number information to HTML report
+- Only allow one instance of itest to run within the same workspace
+- Merge "Change default uploading url" into devel
+- Fix minor problem in utils
+- Change default uploading url
+- Merge "No need care libyaml-perl" into devel
+- Fix grep issue in remotebuild test cases
+- No need care libyaml-perl
+- Sync report to remote periodically
+- Merge "modify two export cases and one import case" into devel
+- Merge "Change error output message in changelog, conf, submit negative cases" into devel
+- Merge "change the cases's grep msg to match the output of gbs" into devel
+- Merge "update test_build_profile_negative.case" into devel
+- change the cases's grep msg to match the output of gbs
+- Change error output message in changelog, conf, submit negative cases
+- Merge "add major gbs deps packages's version html report" into devel
+- update test_build_profile_negative.case
+- modify two export cases and one import case
+- Merge "update case test_build_noinit_without_buildroot" into devel
+- add major gbs deps packages's version html report
+- update case test_build_noinit_without_buildroot
+- add new export cases of for-gbs-0.12 branch
+- Dynamic summary page on web server
+- Log running status in workspace.
+- change all cases's name to be *.case
+- Encode html report to utf8
+- Create individual env for each test case.
+- Change TestSuite class-attr WORKSPACE and LOGDIR to instance-attr
+- Merge "Add shell script are_gits_clean" into devel
+- Merge "Change extra cases and add two testing specs" into devel
+- Change extra cases and add two testing specs
+- Merge "Refine remote --commit case" into devel
+- Merge "modify 9 export cases since default no merge" into devel
+- Refine remote --commit case
+- modify 9 export cases since default no merge
+- Add shell script are_gits_clean
+- update build cases for changing i586 to i686
+- Merge "modify import cases for default no merge feature" into devel
+- modify import cases for default no merge feature
+- Merge "clean old repos cache in fedora before yum makecache" into devel
+- Merge "Check upstream and pristine-tar branch in remote #551" into devel
+- Merge "modify test_build_clean_repo.gbs (fail in x86_64)" into devel
+- Check upstream and pristine-tar branch in remote #551
+- Test long submit request message #553
+- modify test_build_clean_repo.gbs (fail in x86_64)
+- clean old repos cache in fedora before yum makecache
+- Merge "Add a build case for issue 552 on pm" into devel
+- Merge "add a case for testing --clean-repos" into devel
+- Merge "Update two test cases" into devel
+- Update two test cases
+- add a case for testing --clean-repos
+- Add a build case for issue 552 on pm
+- fix a syntax error
+- add a export case for testing --packaging-dir
+- modify import case gbs_im_from_spec_case.gbs
+- add to delete alpha and m68k miscs to avoid restart
+- use upgrade.sh to test gbs upgrade
+- Merge "modify test_build_keeppacks_incremental.gbs" into devel
+- Merge "Add a new case for issue 365 on pm" into devel
+- Merge "refine a rb case and a build case" into devel
+- modify test_build_keeppacks_incremental.gbs
+- Merge "Fix conf test cases bugs" into devel
+- Fix conf test cases bugs
+- Merge "Test gbs build x86_64 #527" into devel
+- Merge "update gbs remotebuild cases" into devel
+- Merge "modify case gbs_im_spec_negative.gbs" into devel
+- Merge "modify export cases about pristine-tar" into devel
+- modify case gbs_im_spec_negative.gbs
+- refine a rb case and a build case
+- Add a new case for issue 365 on pm
+- Add one import case to check applying patches automaticlly
+- Merge "simplify the output of itest of -v option" into devel
+- modify export cases about pristine-tar
+- Merge "Update incremental cases for #465" into devel
+- Test gbs build x86_64 #527
+- simplify the output of itest of -v option
+- update gbs remotebuild cases
+- Delete BuildArch in the test spec files
+- Merge "add more import cases about pristine-tar" into devel
+- Update incremental cases for #465
+- Merge "update GBS build --spec cases" into devel
+- Merge "Add build case for --define option" into devel
+- Merge "Set default base_prj=Tizen:Main in remotebuild conf #539" into devel
+- Merge "add 8 export cases about upstream and pristine-tar" into devel
+- Merge "modify export cases due to --spec changed" into devel
+- Set default base_prj=Tizen:Main in remotebuild conf #539
+- add 8 export cases about upstream and pristine-tar
+- Add build case for --define option
+- add 3 build cases for --packing-dir
+- Merge "Add 2 keep-packs cases" into devel
+- update GBS build --spec cases
+- Update changelog spec test cases #485
+- modify export cases due to --spec changed
+- Merge "Search cases data from current directory firstly" into devel
+- add more import cases about pristine-tar
+- Merge "add three cases about keep packs feature" into devel
+- add three cases about keep packs feature
+- Add 2 keep-packs cases
+- Search cases data from current directory firstly
+- Add more completion in gbs.bash
+- Add Makefile
+- Add static web page
+- Refine test result info
+- Add .gitignore
+- Add comments about cases order
+- Add RSA test cases
+- update build increamental and add noinit cases
+- add gbs test cases
+- No need reboot while clean system
+
+* Mon Nov 05 2012 Guan Junchun <junchunx.guan@intel.com> - 1.0
+- bump to version 1.0
+- refine a remotebuild case
+- modify import and conf cases
+- update import cases because of pristine-tar
+- fix press Ctrl+C twice bug
+- fix the color dispaly bug of failed cases
+- Add pristine-tar support case
+- Rename modules
+- add and update build cases
+- Add a case to test building cyclic dependency issue
+- Support create dummy project from several spec files
+- Update Install and Upgrade test cases
+- if press ctrl+c in running, it can give a report too
+- Fix spec in test-packages.tar.gz
+- Revert "Shuffle cases before testing rather than sort them"
+- fix remotebuild test cases synatax error
+- modify import cases
+- fix submit cases syntax error found by CaseParser
+- fix two gbs remotebuild cases:
+- add three import negative test cases
+- Add a script to create dummy project for testing
+- remove the old itest data before install itest
+- Get sudo password from env variable GBS_SUDO_PASSWORD
+- Support case interactive input by section __QA__
+- Add CaseParser to parse case text
+- update remotebuild cases and test-packages
+- Revert "update remotebuild cases and update test packages"
+- update remotebuild cases and update test packages
+- Print log to stdout if -vv
+- do not use /var/tmp/.gbs.log as tmp log file
+- Refine export cases
+- update test cases
+- Squeeze tset-packages.tar.gz
+- Squeeze IMPORT dir in test-packages.tar.gz
+- Shuffle cases before testing rather than sort them
+- Fix cases sort issue
+- Modify check method of import cases
+- Revert "Modify check method of import cases"
+- Modify check method of import cases
+- Send test report and logs to server
+- Add two manuall cases: install / upgrade
+- refine --spec case in remotebuild
+- Split TestReport class into TextReport and HTMLReport
+- Move utils.env_set to TestSuite class
+- Check exit code of steps to set testing pass.
+- Refactor abort_with_ctrl_c()
+- Add two export cases and one import case
+- update build case
+- Refactor code deal with ctrl c
+- delete 2 invalid cases
+- update conf cases
+- update and add build cases
+- Fix get_local_ipv4() for Fedora 17
+- Remove explicit use of sys.argv
+- update build case
+- return the log's path instead of log content of failed case
+- refine time_cost display
+- chang echo GBS_TEST_PASSED to a line
+- update and add new cases
+- Change itest directory hierarchy
+- Use ifconfig to get local ip address
+- Add message to show log dir and report file
+- when press CTRL+C, choose to continue next case or abort
+- delete needless time variables
+- fix bugs
+- Fix some pylint issues
+- Change style of import lines
+- Calcuate time cost in GbsTest.run()
+- delete proxy set function
+- Format README in consist indent
+- refine docs of classes and functions
+- add gbs vim plugin
+- add README
+- update build repo
+- add specfile
+- code cleanup
+- add cases to test build, remotebuild profile option
+- add setup.py
+- fix bugs
+- add multiple profiles conf and localrepo conf
+- Module separation
+- new way to find cases
+- show more friendly test results
+- update test cases
+- modify submit test cases
+- update build cases
+- update build testcases
+- delete all cat commands in remotebuild cases
+- Revert "version 0.1.1 is more general than enter key"
+- update remotebuild and submit test cases
+- modify export and import cases
+- update test-packages
+- Merge "add remotebuild and submit cases"
+- add build cases
+- support running multiple test suits and cases for gbstest
+- update changelog cases
+- add remotebuild and submit cases
+- fix debug info bug
+- add import and export cases
+- add conf test cases
+- add chroot test cases
+- add changelog test cases
+- when gbs import, send enter defaultly
+- support bash tab completion for gbs
+- add html.py
+- new report function
+- add gbstest script
+- update fake package
+- Add test packages
+- add config fixtures
+- Initial itest project
+- update build case and conf file
+- in some os /tmp/ is not exist
+- move workspace to /tmp/
+- change gbs result to pass or fail
+- version 0.1.1 is more general than enter key
+- print debug info, when executing
+- fix quote bug
+
--- /dev/null
+Format: 1.0
+Source: itest-core
+Version: 1.7
+Binary: itest-core, spm, nosexcase
+Maintainer: Huang Hao <hao.h.huang@intel.com>, Junchun Guan <junchunx.guan@intel.com>
+Architecture: all
+Standards-Version: 3.7.1
+Build-Depends: debhelper, python-support, python-setuptools
+Files:
+ 58db6459a40aef3b2ef9e7460f4a65a9 40672 itest-core_1.7.tar.gz
--- /dev/null
+%{!?python_sitelib: %define python_sitelib %(%{__python} -c "from distutils.sysconfig import get_python_lib; print get_python_lib()")}
+%{!?python_version: %define python_version %(%{__python} -c "import sys; sys.stdout.write(sys.version[:3])")}
+Name: itest-core
+Summary: Functional testing utility
+Version: 1.7
+%if 0%{?opensuse_bs}
+Release: 0.dev.<CI_CNT>.<B_CNT>
+%else
+Release: 0
+%endif
+
+Group: Development/Tools
+License: GPLv2
+BuildArch: noarch
+URL: https://otctools.jf.intel.com/pm/projects/itest
+Source0: %{name}_%{version}.tar.gz
+
+Requires: python >= 2.6
+%if 0%{?suse_version}
+Requires: python-pexpect
+%else
+Requires: pexpect
+%endif
+Requires: spm
+
+%if "%{?python_version}" < "2.7"
+Requires: python-argparse
+%endif
+
+Requires: python-jinja2
+Requires: python-unittest2
+
+BuildRequires: python-setuptools
+BuildRequires: python-devel
+
+%description
+Functional testing utility
+
+%package -n spm
+Summary: smart package management tool
+Requires: python-jinja2
+Requires: python-yaml
+%if "%{?python_version}" < "2.7"
+Requires: python-ordereddict
+%endif
+%if ! 0%{?suse_version}
+Requires: yum-plugin-remove-with-leaves
+%endif
+
+%description -n spm
+Smart package management tool on Linux
+A wrapper of yum, apt-get, zypper command
+Support Redhat, Debian, SuSE
+
+%package -n nosexcase
+Summary: nose plugin
+Requires: itest-core
+Requires: python-nose
+
+%description -n nosexcase
+This is a nose plugin that provides test cases
+definition in XML format
+Use this plugin with ``nosetests --with-xcase`` on XML case files
+
+%prep
+%setup -q -n %{name}-%{version}
+
+%install
+%{__python} setup.py install --prefix=%{_prefix} --root=%{buildroot}
+
+%files
+%defattr(-,root,root,-)
+%dir %{python_sitelib}/imgdiff
+%dir %{python_sitelib}/itest
+%{python_sitelib}/itest-*-py*.egg-info
+%{python_sitelib}/imgdiff/*
+%{python_sitelib}/itest/*
+%{_bindir}/runtest
+%{_bindir}/imgdiff
+
+%files -n spm
+%defattr(-,root,root,-)
+%dir %{python_sitelib}/spm
+%{python_sitelib}/spm/*
+%{_bindir}/spm
+%{_sysconfdir}/spm.yml
+
+%files -n nosexcase
+%defattr(-,root,root,-)
+%dir %{python_sitelib}/nosexcase
+%{python_sitelib}/nosexcase/*
--- /dev/null
+pexpect
+Jinja2
+argparse
+unittest2
--- /dev/null
+#!/bin/bash
+pdir=$(dirname $0)
+CLEANUP_PY="python -m imgdiff.cleanup"
+UNPACK_PY="python -m imgdiff.unpack"
+DIFFIMG_PY="python -m imgdiff.diff"
+
+RESFILE=$(mktemp /tmp/resource.$$.XXX)
+
+cleanup() {
+ if [ "${KEEP_IMGDIRS+defined}" ] && [ -z "$KEEP_IMGDIRS" ]; then
+ if [ "${RESFILE+defined}" ] && \
+ [ -f "$RESFILE" ] && \
+ (cat $RESFILE | $CLEANUP_PY); then
+
+ rm -f "$RESFILE" || \
+ echo "[WARN] Some files can't be cleaned up: $RESFILE" >&2
+
+ ([ "${tempimg1+defined}" ] && [ -d "$tempimg1" ]) && \
+ (rm -rf "$tempimg1" || \
+ echo "[WARN] Some files can't be cleaned up: $tempimg1" >&2)
+
+ ([ "${tempimg2+defined}" ] && [ -d "$tempimg2" ]) && \
+ (rm -rf "$tempimg2" || \
+ echo "[WARN] Some files can't be cleaned up: $tempimg2" >&2)
+ else
+ echo "[WARN] Some resources can't be cleaned up, " \
+ "please check them manually: $RESFILE" >&2
+ fi
+ else
+ echo "[WARN] Please clean up resources manually: $RESFILE" >&2
+ fi
+}
+
+usage() {
+ echo "Usage: $0 [options] <img1> <img2>"
+ echo " -d: output directory. Default is '.'"
+ echo " -k: keep unpacked image directories"
+ echo " -o: diff output file name"
+ echo " -c: conf defining trivial difference"
+}
+
+##############
+## Main
+##############
+TEMP=`getopt -o d:ko:c:h -n 'imgdiff' -- "$@"`
+if [ $? != 0 ] ; then echo "[ERROR] getopt failed" >&2 ; exit 1 ; fi
+eval set -- "$TEMP"
+
+BASE_PATH=$(pwd)
+KEEP_IMGDIRS=
+OUTPUT_FILENAME=img.diff
+UNIMPORTANT_CONF=
+
+while true ; do
+ case "$1" in
+ -d) BASE_PATH=$(realpath $2); shift 2;;
+ -k) KEEP_IMGDIRS=1; shift;;
+ -o) OUTPUT_FILENAME=$2; shift 2;;
+ -c) UNIMPORTANT_CONF=$2; shift 2;;
+ -h) usage; exit 0 ;;
+ --) shift; break ;;
+ *) echo "[ERROR] getopt internal error!" >&2 ; exit 1 ;;
+ esac
+done
+
+if [ $# -lt 2 ]; then
+ echo "[ERROR] Two image files are required" >&2
+ exit 1
+fi
+
+if [ ! -d "$BASE_PATH" ]; then
+ echo "$BASE_PATH: No such directory" >&2
+ exit 1
+fi
+
+img1=$1
+img2=$2
+
+tempimg1=$BASE_PATH/img1
+tempimg2=$BASE_PATH/img2
+
+##############
+#trap cleanup INT TERM EXIT ABRT
+trap cleanup EXIT
+
+start_time=$(date +%s)
+echo "Unpacking images ..."
+cat /dev/null > $RESFILE
+$UNPACK_PY $img1 $tempimg1 $RESFILE
+$UNPACK_PY $img2 $tempimg2 $RESFILE
+
+#parted $img1 -s 'unit B print' > $tempimg1/partition_table.txt
+#parted $img2 -s 'unit B print' > $tempimg2/partition_table.txt
+
+echo "Comparing images ..."
+if [ -n "$UNIMPORTANT_CONF" ]; then
+ opts="-c $UNIMPORTANT_CONF"
+else
+ opts=
+fi
+# need sudo here to read all files in images
+# generate unified(-u) diff output
+sudo diff -r -u $tempimg1 $tempimg2 | tee ${OUTPUT_FILENAME}.orig.txt | $DIFFIMG_PY $opts >$OUTPUT_FILENAME
--- /dev/null
+#!/usr/bin/env python
+
+if __name__ == '__main__':
+ # importing itest.__main__ runs the command-line entry point
+ from itest import __main__ # noqa
--- /dev/null
+#!/usr/bin/env python
+from spm import cli
+
+
+if __name__ == '__main__':
+ cli.main()
--- /dev/null
+#!/usr/bin/env python
+from setuptools import setup
+
+from itest import __version__
+
+setup(name='itest',
+ version=__version__,
+ description='Functional test framework',
+ long_description='Functional test framework',
+ author='Hui Wang, Yigang Wen, Daiwei Yang, Hao Huang, Junchun Guan',
+ author_email='huix.wang@intel.com, yigangx.wen@intel.com, '
+ 'dawei.yang@intel.com, hao.h.huang@intel.com, junchunx.guan@intel.com',
+ license='GPLv2',
+ platforms=['Linux'],
+ include_package_data=True,
+ packages=['itest', 'itest.conf', 'imgdiff', 'spm', 'nosexcase'],
+ package_data={'': ['*.html']},
+ data_files=[('/etc', ['spm/spm.yml'])],
+ entry_points={
+ 'nose.plugins.0.10': [
+ 'xcase = nosexcase.xcase:XCase'
+ ]
+ },
+ scripts=[
+ 'scripts/runtest',
+ 'scripts/imgdiff',
+ 'scripts/spm',
+ ],
+ )
--- /dev/null
+__version__ = '0.1'
--- /dev/null
+import os
+import functools
+import argparse
+import spm
+from spm import core, __version__
+from jinja2 import Environment, FileSystemLoader
+try:
+ from collections import OrderedDict
+except ImportError:
+ from ordereddict import OrderedDict
+
+
+def generate_report(data):
+ template_dirs = os.path.join(os.path.dirname(spm.__file__), 'templates')
+ jinja2_env = Environment(loader=FileSystemLoader(template_dirs))
+ template = jinja2_env.get_template('report.html')
+ return template.render(data)
+
+
+def subparser(func):
+ @functools.wraps(func)
+ def wrapper(parser):
+ splitted = func.__doc__.split('\n')
+ name = func.__name__.split('_')[0]
+ subpar = parser.add_parser(name, help=splitted[0],
+ description='\n'.join(splitted[1:]))
+ return func(subpar)
+ return wrapper
+
+
+@subparser
+def install_parser(parser):
+ """install package
+ Examples:
+ $ spm install -r http://download.tizen.org/tools/latest-release gbs
+ """
+ parser.add_argument('-r', '--repo', help='repo url')
+ parser.add_argument('pkg', help='package name')
+
+ def handler(args):
+ distro = core.distro
+ distro.uninstall(args.pkg)
+ if args.repo:
+ distro.make_repo('tools', args.repo)
+ print distro.check_version(args.pkg)
+ distro.clean()
+ distro.refresh()
+ distro.install(args.pkg)
+
+ parser.set_defaults(handler=handler)
+ return parser
+
+
+@subparser
+def upgrade_parser(parser):
+ """upgrade package
+ Examples:
+ $ spm upgrade --from repo1 --to repo2 gbs
+ """
+ parser.add_argument('--from', dest='oldrepo', help='upgrade from repo url')
+ parser.add_argument('--to', help='upgrade to repo url')
+ parser.add_argument('pkg', help='package name')
+ parser.add_argument('--html-dir', help='html directory')
+
+ def handler(args):
+ data = {}
+ data['package'] = args.pkg
+ data['type'] = 'upgrade'
+ data['install_repo'] = args.oldrepo
+ data['upgrade_repo'] = args.to
+ data['package_list'] = OrderedDict()
+ distro = core.distro
+ distro.uninstall(args.pkg)
+ if args.to:
+ distro.make_repo('tools', args.to)
+ distro.clean()
+ distro.refresh()
+ distro.install(args.pkg)
+ dependencies = distro.get_package_dependency(args.pkg)
+ if dependencies:
+ for pkg in dependencies:
+ _, version = distro.check_version(pkg)
+ data['package_list'][pkg] = {'install': version}
+ distro.uninstall(args.pkg)
+ if args.oldrepo:
+ distro.make_repo('tools', args.oldrepo)
+ distro.clean()
+ distro.refresh()
+ distro.install(args.pkg)
+ if dependencies:
+ for pkg in dependencies:
+ _, version = distro.check_version(pkg)
+ data['package_list'][pkg].update(before=version)
+ if args.to:
+ distro.make_repo('tools', args.to)
+ distro.refresh()
+ distro.install(args.pkg)
+ if dependencies:
+ for pkg in dependencies:
+ _, version = distro.check_version(pkg)
+ data['package_list'][pkg].update(after=version)
+ if args.html_dir:
+ with open("%s/index.html" % args.html_dir, 'w') as f:
+ f.write(generate_report(data))
+
+ parser.set_defaults(handler=handler)
+ return parser
+
+
+@subparser
+def version_parser(parser):
+ """query package version
+ Example:
+ $ spm version gbs
+ """
+ parser.add_argument('pkg', help='package name')
+
+ def handler(args):
+ distro = core.distro
+ packages = distro.get_package_dependency(args.pkg)
+ if packages:
+ for pkg in packages:
+ print distro.check_version(pkg)
+ else:
+ print distro.check_version(args.pkg)
+
+ parser.set_defaults(handler=handler)
+ return parser
+
+
+def main():
+ parser = argparse.ArgumentParser(
+ prog='spm',
+        description='Smart package management tool on Linux',
+ epilog='Try spm --help for help on specific subcommand')
+ parser.add_argument('-V', '--version',
+ action='version', version=__version__)
+ subparsers = parser.add_subparsers(title='subcommands')
+ for name, obj in globals().iteritems():
+ if name.endswith('_parser') and callable(obj):
+ obj(subparsers)
+ args = parser.parse_args()
+ args.handler(args)
+
+if __name__ == '__main__':
+ main()
--- /dev/null
+import os
+import yaml
+
+
+def load_conf():
+    conf_file = '/etc/spm.yml'
+    conf = None
+    if os.path.exists(conf_file):
+        with open(conf_file) as fobj:
+            conf = yaml.safe_load(fobj)
+ return conf
--- /dev/null
+import os
+import re
+import subprocess
+import platform
+from spm import conf
+
+
+class BaseDistro(object):
+ """Base class"""
+ reposuffix = '.repo'
+
+ def __init__(self, name, version, arch):
+ self.name = name
+ self.version = version
+ self.arch = arch
+ self.config = conf.load_conf()
+
+ def install(self, pkg):
+ pass
+
+ def uninstall(self, pkg):
+ pass
+
+ def refresh(self):
+ pass
+
+ def _repofile(self, reponame, url):
+ pass
+
+ def make_repo(self, reponame, url):
+ repofile = os.path.join(self.repodir, reponame + self.reposuffix)
+ with open(repofile, 'w') as fp:
+ fp.write(self._repofile(reponame, url))
+ return repofile
+
+ def clean(self):
+ pass
+
+ def get_package_dependency(self, pkg):
+        """Get package dependency from /etc/spm.yml"""
+ packages = []
+ if self.config and pkg in self.config:
+ if 'default' in self.config[pkg]['dependency']:
+ packages = self.config[pkg]['dependency']['default']
+            if self.name in self.config[pkg]['dependency']:
+                packages += self.config[pkg]['dependency'][self.name]
+ return packages
+
+
+class RpmDistro(BaseDistro):
+ def check_version(self, pkg):
+ cmd = 'rpm -q --qf %%{version}-%%{release} %s' % pkg
+ p = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE)
+ ret = p.wait()
+ if ret:
+ return (pkg, 'N/A')
+ else:
+ return (pkg, p.communicate()[0])
+
+ def remove(self, pkg):
+ os.system('rpm -e --nodeps %s' % pkg)
+
+
+class DebDistro(BaseDistro):
+ def check_version(self, pkg):
+ cmd = 'dpkg -s %s ' % pkg
+ p = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE)
+ ret = p.wait()
+ if ret:
+ return (pkg, 'N/A')
+ else:
+ m = re.search('Version: .*', p.communicate()[0])
+ return (pkg, m.group().split()[1])
+
+ def remove(self, pkg):
+ os.system('dpkg -P --force-depends %s' % pkg)
+
+
+class RedhatDistro(RpmDistro):
+ """Redhat Distro class"""
+ repodir = '/etc/yum.repos.d'
+
+ def __init__(self, name, version, arch):
+ super(RedhatDistro, self).__init__(name, version, arch)
+ self.packager = 'rpm'
+
+ def install(self, pkg):
+ os.system('yum -y --nogpgcheck install %s' % pkg)
+
+ def uninstall(self, pkg):
+ os.system('yum remove --remove-leaves -y %s' % pkg)
+
+ def refresh(self):
+ os.system('yum makecache')
+
+ def _repofile(self, reponame, url):
+
+ if self.name == 'CentOS':
+ distro_str = self.name + '_' + self.version.split('.')[0]
+ else:
+ distro_str = self.name + '_' + self.version
+ url = os.path.join(url, distro_str)
+ repocontent = """[%s]
+name=%s
+type=rpm-md
+baseurl=%s
+gpgcheck=0
+enabled=1
+""" % (reponame, reponame, url)
+ return repocontent
+
+ def clean(self):
+ os.system('yum clean all')
+
+
+class SuSEDistro(RpmDistro):
+ """Suse Distro class"""
+ repodir = '/etc/zypp/repos.d'
+
+ def __init__(self, name, version, arch):
+ super(SuSEDistro, self).__init__(name, version, arch)
+ self.packager = 'rpm'
+
+ def install(self, pkg):
+ os.system('zypper -n --no-gpg-checks install -f %s' % pkg)
+
+ def uninstall(self, pkg):
+ os.system('zypper remove -u -y %s' % pkg)
+
+ def refresh(self):
+ os.system('zypper refresh')
+
+ def _repofile(self, reponame, url):
+ url = os.path.join(url, self.name + '_' + self.version)
+ repocontent = """[%s]
+name=%s
+enabled=1
+autorefresh=1
+baseurl=%s
+type=rpm-md
+priority=1
+gpgcheck=0
+""" % (reponame, reponame, url)
+ return repocontent
+
+ def clean(self):
+ os.system('zypper clean --all')
+
+
+class UbuntuDistro(DebDistro):
+ """Ubuntu Distro class"""
+ repodir = '/etc/apt/sources.list.d'
+ reposuffix = '.list'
+
+ def __init__(self, name, version, arch):
+ super(UbuntuDistro, self).__init__(name, version, arch)
+ self.packager = 'dpkg'
+
+ def install(self, pkg):
+ os.system('apt-get install -y --force-yes %s' % pkg)
+
+ def uninstall(self, pkg):
+ os.system('apt-get autoremove -y --force-yes %s' % pkg)
+
+ def refresh(self):
+ os.system('apt-get update')
+
+ def _repofile(self, reponame, url):
+        if self.name.lower().startswith('debian'):
+            distro_str = self.name + '_' + self.version.split('.')[0]
+        else:
+            distro_str = self.name + '_' + self.version
+        url = os.path.join(url, distro_str)
+ return """deb %s /""" % url
+
+ def clean(self):
+ os.system('apt-get autoclean')
+
+
+def init_distro():
+    name, version, _ = platform.dist()
+    arch = platform.architecture()[0]
+    if name == 'centos':
+        distro = RedhatDistro('CentOS', version, arch)
+    elif name == 'fedora':
+        distro = RedhatDistro('Fedora', version, arch)
+    elif name == 'SuSE':
+        distro = SuSEDistro('openSUSE', version, arch)
+    elif name == 'Ubuntu':
+        distro = UbuntuDistro('Ubuntu', version, arch)
+    elif name.lower() == 'debian':
+        distro = UbuntuDistro('Debian', version, arch)
+    else:
+        raise NotImplementedError('Unsupported distribution: %s' % name)
+    return distro
+
+distro = init_distro()
--- /dev/null
+gbs:
+ dependency:
+ default:
+ - gbs
+ - gbs-api
+ - gbs-export
+ - gbs-remotebuild
+ - git-buildpackage-rpm
+ - git-buildpackage-common
+ - depanneur
+ - build
+ - qemu-arm-static
+ - createrepo
+ - pristine-tar
+ - librpm-tizen
+ - pbzip2
+ - deltarpm
+ - osc
+ - mic
+ Ubuntu:
+ - libcrypt-ssleay-perl
+ openSUSE:
+ - perl-Crypt-SSLeay
+ - build-mkdrpms
+ - python-deltarpm
+ - build-initvm-i586
+ - build-initvm-x86_64
+ CentOS:
+ - perl-Crypt-SSLeay
+ - python-deltarpm
+ - build-initvm-i586
+ - build-initvm-x86_64
+ Fedora:
+ - perl-Crypt-SSLeay
+ - python-deltarpm
+ - build-initvm-i586
+ - build-initvm-x86_64
+
+mic-native:
+ dependency:
+ default:
+ - mic-native
+ - mic
+ - libzypp
+ - python-zypp
+ - python-zypp-tizen
+ - satsolver-tools
+ - qemu-arm-static
+ - qemu-user-static
+ - syslinux
--- /dev/null
+<html lang="en">
+  <head>
+    <title>{{ package }} {{ type }} test report</title>
+    <style>
+      .diff { color: blue }
+      table { border-collapse: collapse; margin-bottom: 1em }
+      th { background-color: #F3F3F3 }
+      td, th { border: 1px solid grey; padding: 3px }
+      table.right th { text-align: right }
+    </style>
+  </head>
+  <body>
+ <h1>{{ package }} {{ type }} test report</h1>
+ <h2>from: <a href="{{ install_repo }}">{{ install_repo }}</a></h2>
+ {% if type == "upgrade" %}
+ <h2>to: <a href="{{ upgrade_repo }}">{{ upgrade_repo }}</a></h2>
+ {% endif %}
+ <h1>Dependencies</h1>
+ <table class="right">
+ {% if type == "upgrade" %}
+ <tr><th>Version diff</th><td>Before</td><td>After</td><td>Install</td></tr>
+ {% for pkg, val in package_list.iteritems() %}
+ <tr><th>{{ pkg }}</th><td>{{ val.before }}</td>
+ {% if val.before != val.after %}
+ <td class="diff">{{ val.after }}</td>
+ {% else %}
+ <td>{{ val.after }}</td>
+ {% endif %}
+ {% if val.install != val.after %}
+ <td class="diff">{{ val.install }}</td>
+ {% else %}
+ <td>{{ val.install }}</td>
+ {% endif %}
+ </tr>
+ {% endfor %}
+ {% else %}
+ <tr><th>Package</th><td>Version</td></tr>
+ {% for pkg, val in package_list.iteritems() %}
+ <tr><th>{{ pkg }}</th><td>{{ val.before }}</td></tr>
+ {% endfor %}
+ {% endif %}
+ </table>
+ </body>
+</html>
--- /dev/null
+nose
+mock
--- /dev/null
+<testcase>
+ <summary>This case has cdata fields</summary>
+ <setup><![CDATA[
+ echo setup
+]]></setup>
+ <steps><![CDATA[
+ echo steps
+]]></steps>
+ <teardown><![CDATA[
+ echo teardown
+]]></teardown>
+</testcase>
--- /dev/null
+<testcase>
+ <summary>Generate a fixture with given content</summary>
+ <fixtures>
+ <content target='afile'>1984</content>
+ </fixtures>
+ <steps>
+n=$(expr $(cat afile) - 4)
+[ $n -eq 1980 ]
+ </steps>
+</testcase>
--- /dev/null
+<testcase>
+  <summary>If `qa` takes effect, the case will pass;
+    otherwise it will exit with non-zero.
+  </summary>
+ <qa>
+ <prompt>How are you ?</prompt>
+ <answer>Good !</answer>
+ </qa>
+ <steps>
+ read -t 0.75 -p "How are you ? " GREETING
+ </steps>
+</testcase>
--- /dev/null
+<testcase>
+  <summary>"setup" always runs</summary>
+ <setup>
+ echo This message only appears in setup section
+ </setup>
+ <steps>
+ echo steps
+ </steps>
+</testcase>
--- /dev/null
+<testcase>
+  <summary>Steps won't run if setup fails</summary>
+ <setup>false</setup>
+ <steps>echo This message only appears in steps section</steps>
+ <teardown>echo This message only appears in teardown section</teardown>
+</testcase>
\ No newline at end of file
--- /dev/null
+<testcase>
+ <summary>
+ A very simple case. It only contains
+ a summary field and a simple step
+ </summary>
+ <steps>
+ echo hello simple
+ </steps>
+</testcase>
--- /dev/null
+<testcase>
+  <summary>it should be treated as a failure</summary>
+ <steps>false</steps>
+</testcase>
--- /dev/null
+<testcase>
+  <summary>teardown always runs even if steps fail</summary>
+ <steps>
+ ls -a .meta
+ false
+ </steps>
+ <teardown>
+ echo This message only appears in teardown section
+ </teardown>
+</testcase>
\ No newline at end of file
--- /dev/null
+<?xml version="1.0" encoding="UTF-8"?>
+<testcase>
+  <summary>There are Unicode characters in this case</summary>
+ <steps>
+ echo 中文可以有
+ </steps>
+</testcase>
--- /dev/null
+<?xml version="1.0" encoding="UTF-8"?>
+<testcase>
+  <summary>Chinese in the log file should be correctly written into xunit.xml</summary>
+ <steps>
+ echo 中文可以有
+ false
+ </steps>
+</testcase>
--- /dev/null
+<testcase>
+ <summary>
+    Variables defined in "setup" section should be available in "steps" and
+    "teardown" sections.
+ </summary>
+ <setup>
+ a=1
+ </setup>
+ <steps>
+ [ "$a" = 1 ]
+ b=2
+ </steps>
+ <teardown>
+ [ "$a" = 1 ]
+ [ "$b" = 2 ]
+ </teardown>
+</testcase>
\ No newline at end of file
--- /dev/null
+<testcase>
+  <summary>Vars defined in setup section
+can also be used in steps section</summary>
+ <setup>
+ a=1
+ </setup>
+ <steps>
+ [ "$a" == 1 ]
+ </steps>
+ <teardown>
+ echo value of a is $a
+ </teardown>
+</testcase>
--- /dev/null
+<testcase>
+ <summary>Copy a fixture dir into case workspace</summary>
+ <fixtures>
+ <copydir src="dir1" />
+ </fixtures>
+ <steps>
+ test -d dir1
+ test -f dir1/a
+ test -d dir1/dir2
+ test -f dir1/dir2/b
+ </steps>
+</testcase>
--- /dev/null
+<testcase>
+ <summary>Dir names end with slash</summary>
+ <fixtures>
+ <copydir src="dir1/" target="hehe/" />
+ </fixtures>
+ <steps>
+ test -d hehe
+ test -f hehe/a
+ test -d hehe/dir2
+ test -f hehe/dir2/b
+ </steps>
+</testcase>
--- /dev/null
+<testcase>
+ <summary>Copy a fixture file into case workspace</summary>
+ <fixtures>
+ <copy src="empty" />
+ </fixtures>
+ <steps>
+ test -f empty
+ </steps>
+</testcase>
--- /dev/null
+<testcase>
+ <summary>Copy a part of a dir</summary>
+ <fixtures>
+ <copydir src="dir1/dir2" />
+ </fixtures>
+ <steps>
+ test -d dir2
+ test -f dir2/b
+ </steps>
+</testcase>
--- /dev/null
+<testcase>
+ <summary>This fixture is a template file which
+ extends from another base file.
+ </summary>
+ <fixtures>
+ <template src="template" />
+ </fixtures>
+ <steps>
+cat template
+grep 'Only in base' template
+grep 'Only in child' template
+ </steps>
+</testcase>
--- /dev/null
+{% extends "template_base" %}
+
+{% block more %}
+Only in child
+{% endblock %}
--- /dev/null
+Only in base
+{% block more %}{% endblock %}
--- /dev/null
+import os
+import unittest
+import functools
+from subprocess import call
+from cStringIO import StringIO
+
+from mock import patch
+
+from itest.utils import cd as _cd
+
+
+SELF_PATH = os.path.dirname(__file__)
+DATA_PATH = os.path.join(SELF_PATH, '..', 'data')
+CASES_PATH = os.path.join(DATA_PATH, 'cases')
+PROJ_PATH = os.path.join(DATA_PATH, 'sample_project')
+PROJ_CASES_PATH = os.path.join(PROJ_PATH, 'cases')
+
+
+class MockExit(object):
+
+ def __call__(self, exitcode):
+ self.exitcode = exitcode
+
+
+def runtest(*argv):
+ with patch('sys.argv', ['runtest'] + list(argv)):
+ with patch('sys.exit', MockExit()) as mockexit:
+ with patch('sys.stderr', StringIO()) as mockerr:
+ from itest.main import main
+ main()
+ return mockexit.exitcode, mockerr.getvalue()
+
+
+def cd(path):
+ def decorator(func):
+ @functools.wraps(func)
+ def wrapper(*args, **kw):
+ with _cd(path):
+ return func(*args, **kw)
+ return wrapper
+ return decorator
+
+
+def format_msg(exitcode, stderr):
+ return """Exit code %s. STDERR:
+%s
+END""" % (exitcode, stderr)
+
+
+class TestBase(unittest.TestCase):
+
+ @cd(DATA_PATH)
+ def setUp(self):
+ call(["find", ".", "-name", "xunit*.xml", "-delete"])
+
+ def assertPass(self, *argv):
+ exitcode, stderr = runtest(*argv)
+ self.assertTrue(exitcode == 0 and
+ stderr.find("Ran 0 tests in") == -1,
+ format_msg(exitcode, stderr))
+
+ def assertFail(self, *argv):
+ exitcode, stderr = runtest(*argv)
+ self.assertNotEquals(0, exitcode,
+ format_msg(exitcode, stderr))
+
+ def assertWithText(self, argv, text):
+ exitcode, stderr = runtest(*argv)
+ self.assertTrue(stderr.find(text) >= 0,
+ format_msg(exitcode, stderr))
+
+ def assertWithoutText(self, argv, text):
+ exitcode, stderr = runtest(*argv)
+ self.assertEquals(-1, stderr.find(text),
+ format_msg(exitcode, stderr))
--- /dev/null
+from base import TestBase, cd, PROJ_PATH, PROJ_CASES_PATH, DATA_PATH
+
+
+class InProjectTest(TestBase):
+
+ @cd(PROJ_PATH)
+ def test_copy_fixture(self):
+ self.assertPass("cases/copy_fixture.xml")
+
+ @cd(PROJ_CASES_PATH)
+ def test_render_template_fixture(self):
+ self.assertPass("template_fixture.xml")
+
+ @cd(PROJ_CASES_PATH)
+ def test_copy_dir_fixture(self):
+ self.assertPass("copy_dir_fixture.xml")
+
+ @cd(PROJ_CASES_PATH)
+    def test_copy_part_of_dir_fixture(self):
+ self.assertPass("copy_part_of_dir_fixture.xml")
+
+ @cd(PROJ_CASES_PATH)
+ def test_copy_dir_with_tailing_slash(self):
+ self.assertPass("copy_dir_with_tailing_slash.xml")
+
+ @cd(DATA_PATH)
+ def test_argument_test_project_path(self):
+ self.assertPass("--test-project-path=sample_project",
+ "sample_project/cases/copy_fixture.xml")
--- /dev/null
+from base import TestBase, CASES_PATH, cd
+
+
+class SetupTeardownTest(TestBase):
+
+ @cd(CASES_PATH)
+ def test_setup_always_run(self):
+ self.assertWithText(["-vv", "setup.xml"],
+ "This message only appears in setup section")
+
+ @cd(CASES_PATH)
+ def test_teardown_always_run(self):
+ self.assertWithText(["-vv", "teardown.xml"],
+ "This message only appears in teardown section")
+
+ @cd(CASES_PATH)
+ def test_steps_wont_run_if_setup_failed(self):
+ self.assertWithoutText(["-vv", "setup_failed.xml"],
+ "This message only appears in steps section")
+
+ @cd(CASES_PATH)
+    def test_vars_in_setup_can_be_seen_in_steps(self):
+ self.assertPass("vars_in_setup.xml")
+
+ @cd(CASES_PATH)
+    def test_vars_in_setup_can_be_seen_in_teardown(self):
+ self.assertWithText(["-vv", "vars_in_setup.xml"],
+ "value of a is 1")
--- /dev/null
+from base import TestBase, CASES_PATH, cd
+
+
+class BasicTest(TestBase):
+
+ @cd(CASES_PATH)
+ def test_simple(self):
+ self.assertPass("simple.xml")
+
+ @cd(CASES_PATH)
+ def test_simple_false(self):
+ self.assertFail("-vv", "simple_false.xml")
+
+ @cd(CASES_PATH)
+ def test_cdata(self):
+ self.assertPass("cdata.xml")
+
+ @cd(CASES_PATH)
+ def test_qa(self):
+ self.assertPass("qa.xml")
+
+ @cd(CASES_PATH)
+ def test_content_fixture(self):
+ self.assertPass("content_fixture.xml")
+
+ @cd(CASES_PATH)
+ def test_multi_case_pass(self):
+ self.assertPass("simple.xml", "cdata.xml")
+
+ @cd(CASES_PATH)
+ def test_multi_case_failed(self):
+ self.assertFail("simple.xml", "simple_false.xml")
+
+ @cd(CASES_PATH)
+ def test_vars(self):
+ self.assertPass("vars.xml")
--- /dev/null
+import os
+import xml.etree.ElementTree as ET
+
+
+from base import cd, TestBase, runtest, CASES_PATH
+
+
+class XunitTest(TestBase):
+
+ @cd(CASES_PATH)
+ def test_with_xunit(self):
+ runtest("--with-xunit", "simple.xml")
+ # check whether xml is valid
+ ET.parse('xunit.xml')
+
+ @cd(CASES_PATH)
+ def test_without_xunit(self):
+ runtest("simple.xml")
+ self.assertFalse(os.path.exists("xunit.xml"))
+
+ @cd(CASES_PATH)
+ def test_xunit_file(self):
+ runtest("--with-xunit", "--xunit-file=xunit2.xml", "simple.xml")
+ self.assertTrue(os.path.exists("xunit2.xml"))
+
+ @cd(CASES_PATH)
+ def test_xml_validation(self):
+ runtest("--with-xunit", "simple.xml", "simple_false.xml")
+ ET.parse('xunit.xml')
+
+ @cd(CASES_PATH)
+ def test_non_ascii_chars(self):
+ runtest("--with-xunit", "unicode_false.xml")
+ ET.parse("xunit.xml")
--- /dev/null
+import unittest
+
+from itest.xmlparser import Parser
+
+
+class TestXMLParser(unittest.TestCase):
+
+ def test_simple(self):
+ self.assertEquals({
+ 'summary': 'test',
+ 'steps': 'echo test1\necho test2',
+ },
+ Parser().parse("""<testcase>
+<summary>test</summary>
+<steps>
+echo test1
+echo test2
+</steps>
+</testcase>"""))
+
+ def test_tracking(self):
+ self.assertEquals({'tracking': [
+ ('change', '90125'),
+ ('ticket', '5150'),
+ ]},
+ Parser().parse('''<testcase>
+<tracking>
+ <change>90125</change>
+ <ticket>5150</ticket>
+</tracking>
+</testcase>'''))
+
+ def test_qa(self):
+ self.assertEquals({'qa': [
+ ('Are you sure?', 'y'),
+ ('Do you agree?', 'n'),
+ ]},
+ Parser().parse('''<testcase>
+<qa>
+ <prompt>Are you sure?</prompt>
+ <answer>y</answer>
+ <prompt>Do you agree?</prompt>
+ <answer>n</answer>
+</qa>
+</testcase>'''))
+
+ def test_qa_unmatch(self):
+ self.assertRaises(Exception, Parser().parse, '''<testcase>
+<qa>
+ <prompt>Are you sure?</prompt>
+</qa>
+</testcase>''')
+
+ def test_conditions(self):
+ self.assertEquals({'conditions': {
+ 'whitelist': [
+ 'OpenSuse-64bit',
+ 'Ubuntu12.04',
+ ],
+ 'blacklist': [
+ 'Fedora19-x86_64',
+ ],
+ }},
+ Parser().parse('''<testcase>
+<conditions>
+ <whitelist>
+ <platform>OpenSuse-64bit</platform>
+ <platform>Ubuntu12.04</platform>
+ </whitelist>
+ <blacklist>
+ <platform>Fedora19-x86_64</platform>
+ </blacklist>
+</conditions>
+</testcase>'''))
+
+ def test_bad_case(self):
+ self.assertEquals(None, Parser().parse('I am not XML format!'))
--- /dev/null
+[tox]
+envlist = py26,py27,flake8
+
+[testenv]
+deps =
+ -rrequirements.txt
+ -rtest-requirements.txt
+commands = nosetests
+
+[testenv:py27]
+deps =
+ -rrequirements.txt
+ -rtest-requirements.txt
+ coverage
+commands = nosetests --with-coverage --cover-package=itest
+
+[testenv:flake8]
+deps = flake8
+commands = flake8 itest spm imgdiff setup.py scripts scripts/runtest scripts/spm
+
+[flake8]
+exclude = .svn,CVS,.bzr,.hg,.git,__pycache__,.tox,tests