+ Thu Aug 16 2012 Zhang Huihui <huihuix.zhang@intel.com> 2.2.2-3
+ - support running both core and web API test packages in one test run
+ - fill the device info, test plan name and testing start/end time into the result xml
+ - remove the txt test result
+ Tue Aug 14 2012 Zhang Huihui <huihuix.zhang@intel.com> 2.2.2-2
+ - use the str2str function defined in the common module to replace the complex regular expression
+ - support the Windows platform
+ Wed Aug 8 2012 Zhang Huihui <huihuix.zhang@intel.com> 2.2.2-1
+ - remove unreadable characters from stdout and stderr
Tue Aug 4 2012 Zhang Huihui <huihuix.zhang@intel.com> 2.2.1-1
- Fix bug: Testkit could not output the whole result.xml
Tue Jul 26 2012 Zhang Huihui <huihuix.zhang@intel.com> 2.2.0-3
Get Results:
=================
- Two kinds of test report will be generated:
- 1) <tests xml name>.result.txt
- example:
- ===================================TestReport===================================
- TYPE PASS FAIL N/A
- ---/usr/share/mnts1.0-distromisc-test/tests.xml XML 0 0 24
- `---Distro Misc SUITE 0 0 24
- |---General shortkeys SET 0 0 2
- | |---general_shortkeys_console_switch CASE 0 0 1
- | `---general_shortkeys_standard_shortkeys CASE 0 0 1
- |---Linux Usability SET 0 0 4
- | |---linux_usability_common_commands CASE 0 0 1
- | |---linux_usability_default_app_and_MIME CASE 0 0 1
- | |---linux_usability_gconf_database CASE 0 0 1
- | `---linux_usability_security_usability CASE 0 0 1
- |---Log system SET 0 0 4
- | |---log_system_UI_process CASE 0 0 1
- | |---log_system_monitor_log_access CASE 0 0 1
- | |---log_system_process_pulseaudio_daemon CASE 0 0 1
- | `---log_system_sreadhead CASE 0 0 1
- |---Non-defult integrated applications SET 0 0 3
- | |---non_default_apps_installation CASE 0 0 1
- | |---non_default_apps_sanity_check CASE 0 0 1
- | `---non_default_apps_uninstallation CASE 0 0 1
- |---Package check SET 0 0 3
- | |---package_check_core_debuginfo CASE 0 0 1
- | |---package_check_core_dependency CASE 0 0 1
- | `---package_check_core_version CASE 0 0 1
- |---Peripheral Devices SET 0 0 6
- | |---peripheral_devices_automount CASE 0 0 1
- | |---peripheral_devices_external_bluetooth CASE 0 0 1
- | |---peripheral_devices_external_cdrom CASE 0 0 1
- | |---peripheral_devices_usb_hotplug CASE 0 0 1
- | |---peripheral_devices_usb_hub CASE 0 0 1
- | `---peripheral_devices_usb_mount_umount CASE 0 0 1
- `---User actions on the stage of system boot-up SET 0 0 2
- |---user_actions_at_bootup_keyboard_operations CASE 0 0 1
- `---user_actions_at_bootup_mouse_operations CASE 0 0 1
-
- 2) <tests xml name>.result.xml
+ Test report will be generated as below:
+ tests.result.xml
xml result files aligned with schema files: /opt/testkit/lite/xsd/
example: <ignore>
Notes:
=================
+ One testxml should contain only one <suite> tag; multiple <suite> tags are not supported
testkit-lite's TestLog is stored to /opt/testkit/lite/latest
testkit-lite enables both automatic and manual tests by default
-A and -M are mutually exclusive
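To make the single-<suite> note above concrete, here is a sketch that builds a minimal tests.xml with ElementTree; the suite, set, and case names are hypothetical placeholders, not a real test package:

```python
import xml.etree.ElementTree as etree

# Minimal tests.xml skeleton: exactly one <suite> per file, as noted above.
# All names here are hypothetical placeholders.
root = etree.Element('test_definition')
suite = etree.SubElement(root, 'suite', name='example-suite')
tset = etree.SubElement(suite, 'set', name='example-set')
etree.SubElement(tset, 'testcase', id='example_case', execution_type='auto')

xml_text = etree.tostring(root)
```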
Options:
Common options:
- -f, --testxml Specify the test.xml, support multiple test.xml
+ -f, --testxml Specify the test.xml. To run more than one testxml, list them all separated by whitespace
-D, --dryrun Dry-run the selected test cases
-A, --auto-only Enable only auto tests
-M, --manual-only Enable only manual tests
-E <engine name> Specify a test engine
-o RESULTFILE, --output=RESULTFILE
- Specify output file for result xml
+ Specify output file for result xml. If more than one testxml is provided, results will be merged into this output file
-e Launch external test with an executable file
--fullscreen Run Web API test in full screen mode
--version Show version information
--category Select the specified white-rules
Examples:
- 1): testkit-lite -f /usr/share/webapi-w3c-device-tests/tests.xml -e "WRTLauncher webapi-w3c-device-tests" --set Battery
- 2): testkit-lite -f /usr/share/webapi-w3c-device-tests/tests.xml -e "WRTLauncher webapi-w3c-device-tests" --priority P1 -o /tmp/webapi-w3c-device-tests.xml
+ run a webapi package:
+ 1): testkit-lite -f /usr/share/webapi-webkit-tests/tests.xml -e 'WRTLauncher webapi-webkit-tests' -o /tmp/webkit-tests-result.xml --priority P0 --status ready
+ run both core and webapi packages:
+ 2): testkit-lite -f /usr/share/webapi-webkit-tests/tests.xml /usr/share/tts-bluez-tests/tests.xml -e 'WRTLauncher webapi-webkit-tests' -o /tmp/webkit-tests-result.xml
TODO:
========
1. add --verbose and logging level
-2. remove text result
-3. support both core and webapi packages
\ No newline at end of file
install-scripts = /usr/bin
install-lib = /usr/lib/python2.7/site-packages
[bdist_rpm]
-release = 1
+release = 3
packager = huihuix.zhang@intel.com
requires = python
pre_install = preinstall
install_script = fakeinstall
post_install = postinstall
-changelog = * Tue Aug 4 2012 Zhang Huihui <huihuix.zhang@intel.com> 2.2.1-1
+changelog = * Thu Aug 16 2012 Zhang Huihui <huihuix.zhang@intel.com> 2.2.2-3
+ - support running both core and web API test packages in one test run
+ - fill the device info, test plan name and testing start/end time into the result xml
+ - remove the txt test result
+ Tue Aug 14 2012 Zhang Huihui <huihuix.zhang@intel.com> 2.2.2-2
+ - use the str2str function defined in the common module to replace the complex regular expression
+ - support the Windows platform
+ Wed Aug 8 2012 Zhang Huihui <huihuix.zhang@intel.com> 2.2.2-1
+ - remove unreadable characters from stdout and stderr
+ Sat Aug 4 2012 Zhang Huihui <huihuix.zhang@intel.com> 2.2.1-1
- Fix bug: Testkit could not output the whole result.xml
Tue Jul 26 2012 Zhang Huihui <huihuix.zhang@intel.com> 2.2.0-3
- modify index and manualharness to make test widgets display better
from distutils.core import setup
if platform.system() == "Linux":
- data_files = [('/opt/testkit/lite/xsd', ['xsd/test_definition.xsd', 'xsd/tests.css', 'xsd/resultstyle.xsl']),
+ data_files = [('/opt/testkit/lite/xsd', ['xsd/test_definition.xsd', 'xsd/tests.css', 'xsd/testresult.xsl']),
('/opt/testkit/lite/', ['LICENSE']),
('/opt/testkit/lite/', ['README']),
('/opt/testkit/web/', ['web/jquery.js', 'web/index.html', 'web/manualharness.html']),
setup(name='testkit-lite',
description='command line test execution framework',
- version='2.2.1',
+ version='2.2.2',
long_description='',
author='Zhang, Huihui',
author_email='huihuix.zhang@intel.com',
import os
import sys
+import time
import platform
import xml.etree.ElementTree as etree
from optparse import *
+from shutil import copyfile
from tempfile import mktemp
from datetime import datetime
_j = os.path.join
_e = os.path.exists
_d = os.path.dirname
+_b = os.path.basename
_abspath = os.path.abspath
testkit_dir = "/opt/testkit/lite"
if not platform.system() == "Linux":
testkit_dir = _d(_abspath(__file__))
sys.path += [_j(testkit_dir)]
+ testkit_dir = _j(testkit_dir, "results")
LOG_DIR = testkit_dir
PID_FILE = _j(LOG_DIR , "pid.log")
option_list = [
make_option("-f", "--testxml", dest="testxml",
action="callback", callback=varnarg,
- help="Specify the test.xml"),
+ help="Specify the test.xml. If run more the one testxml, just list them all and separate with a whitespace"),
make_option("-D", "--dryrun", dest="bdryrun",
action="store_true",
help="Dry-run the selected test cases"),
action="store_true",
help="Enable only auto tests"),
make_option("-o", "--output", dest="resultfile",
- help="specify output file for result xml"),
- make_option("-E", dest="engine", help="specific test engine"),
+ help="Specify output file for result xml. If more than one testxml provided, results will be merged together to this output file"),
+ make_option("-E", dest="engine", help="Specific test engine"),
make_option("-e", dest="exttest", action="store",
- help="launch external test with an executable file"),
+ help="Launch external test with an executable file"),
make_option("--fullscreen", dest="fullscreen", action="store_true",
- help="run web API test in full screen mode"),
+ help="Run web API test in full screen mode"),
make_option("--version", dest="version_info", action="store_true",
- help="show version information"),
+ help="Show version information"),
]
option_list.extend(map(lambda flt: \
%%prog -f test.xml -D -A --type type1 ...\n\
%%prog -f test.xml -D -A --type type1 --status ready ...\n\
\n\
+ run a webapi package: \n\
%%prog -f /usr/share/webapi-webkit-tests/tests.xml -e 'WRTLauncher webapi-webkit-tests' -o /tmp/webkit-tests-result.xml --priority P0 --status ready ...\n\
+ run both core and webapi packages: \n\
+\n\
+ %%prog -f /usr/share/webapi-webkit-tests/tests.xml /usr/share/tts-bluez-tests/tests.xml -e 'WRTLauncher webapi-webkit-tests' -o /tmp/webkit-tests-result.xml ...\n\
\n\
Note: \n\
- 1) TestLog is stored to %s/latest\n\
- 2) %%prog enables both auto and manual tests by default\n\
- 3) Obviously -A and -M are conflict options\n\
- 4) -e option does not support -D mode" % (LOG_DIR)
+ 1) One testxml should contain only one <suite> tag; multiple <suite> tags are not supported\n\
+ 2) TestLog is stored to %s/latest\n\
+ 3) %%prog enables both auto and manual tests by default\n\
+ 4) -A and -M are mutually exclusive\n\
+ 5) -e option does not support -D mode" % (LOG_DIR)
except:
usage = None
# detect version option
if options.version_info:
- raise ValueError("testkit-lite v2.2.1-1")
+ raise ValueError("testkit-lite v2.2.2-3")
# detect conflict
if options.bautoonly and options.bmanualonly:
if eval('options.w%s' % flt):
wfilters[flt] = eval('options.w%s' % flt)
- # apply auto-only
- if not options.bautoonly:
- if options.bmanualonly:
- wfilters['execution_type'] = ["manual"]
- else:
- wfilters['execution_type'] = ["auto", "manual"]
- else:
- wfilters['execution_type'] = ["auto"]
-
if options.fullscreen:
runner.set_fullscreen(True)
sys.exit(1)
# 1) prepare log dir
- session = datetime.today().isoformat('-')
+ if platform.system() == "Linux":
+ session = datetime.today().isoformat('-')
+ else:
+ session = datetime.today().strftime("%Y-%m-%d_%H_%M_%S")
log_dir = _j(LOG_DIR, session)
latest_dir = _j(LOG_DIR, "latest")
try:
# 2) run test
if len(options.testxml) > 1:
- testxml = mktemp(suffix='.xml', prefix='tests.', dir=log_dir)
testxmls = set(options.testxml)
- root = etree.Element('test_definition', name="merged_test")
for t in testxmls:
- parser = etree.parse(t)
- for suite in parser.getiterator('suite'):
- root.append(suite)
-
- try:
- with open(testxml, 'w') as fd:
- tree = etree.ElementTree(element=root)
- tree.write(testxml)
- print "[ merged testxmls into %s ]" % testxml
- except IOError, e:
- print "[ **merge testxmls failed**(%s) ]" % e
+ filename = t
+ filename = os.path.splitext(filename)[0]
+ if platform.system() == "Linux":
+     filename = filename.split('/')[3]
+ else:
+     filename = filename.split('\\')[-2]
+ filename = "%s.total" % _b(filename)
+ resultfile = "%s.xml" % filename
+ resultfile = _j(log_dir, resultfile)
+ ep = etree.parse(t)
+ rt = ep.getroot()
+ if options.bautoonly:
+ wfilters['execution_type'] = ["auto"]
+ runner.add_filter_rules(**wfilters)
+ if options.bmanualonly:
+ wfilters['execution_type'] = ["manual"]
+ runner.add_filter_rules(**wfilters)
+ runner.apply_filter(rt)
+ ep.write(resultfile)
+ start_time = datetime.today().strftime("%Y-%m-%d_%H_%M_%S")
+ if not options.bautoonly:
+ if options.bmanualonly:
+ for t in testxmls:
+ try:
+ wfilters['execution_type'] = ["manual"]
+ runner.add_filter_rules(**wfilters)
+ runner.run(t, resultdir=log_dir)
+ except Exception, e:
+ print e
+ else:
+ for t in testxmls:
+ try:
+ wfilters['execution_type'] = ["auto"]
+ runner.add_filter_rules(**wfilters)
+ runner.run(t, resultdir=log_dir)
+ except Exception, e:
+ print e
+ for t in testxmls:
+ try:
+ wfilters['execution_type'] = ["manual"]
+ runner.add_filter_rules(**wfilters)
+ runner.run(t, resultdir=log_dir)
+ except Exception, e:
+ print e
+ else:
+ for t in testxmls:
+ try:
+ wfilters['execution_type'] = ["auto"]
+ runner.add_filter_rules(**wfilters)
+ runner.run(t, resultdir=log_dir)
+ except Exception, e:
+ print e
else:
testxml = (options.testxml)[0]
-
+ filename = testxml
+ filename = os.path.splitext(filename)[0]
+ if platform.system() == "Linux":
+ filename = filename.split('/')[3]
+ else:
+ filename = filename.split('\\')[-2]
+ filename = "%s.total" % _b(filename)
+ resultfile = "%s.xml" % filename
+ resultfile = _j(log_dir, resultfile)
+ ep = etree.parse(testxml)
+ rt = ep.getroot()
+ if options.bautoonly:
+ wfilters['execution_type'] = ["auto"]
+ runner.add_filter_rules(**wfilters)
+ if options.bmanualonly:
+ wfilters['execution_type'] = ["manual"]
+ runner.add_filter_rules(**wfilters)
+ runner.apply_filter(rt)
+ ep.write(resultfile)
+ start_time = datetime.today().strftime("%Y-%m-%d_%H_%M_%S")
+ if not options.bautoonly:
+ if options.bmanualonly:
+ try:
+ wfilters['execution_type'] = ["manual"]
+ runner.add_filter_rules(**wfilters)
+ runner.run(testxml, resultdir=log_dir)
+ except Exception, e:
+ print e
+ else:
+ try:
+ wfilters['execution_type'] = ["auto"]
+ runner.add_filter_rules(**wfilters)
+ runner.run(testxml, resultdir=log_dir)
+ wfilters['execution_type'] = ["manual"]
+ runner.add_filter_rules(**wfilters)
+ runner.run(testxml, resultdir=log_dir)
+ except Exception, e:
+ print e
+ else:
+ try:
+ wfilters['execution_type'] = ["auto"]
+ runner.add_filter_rules(**wfilters)
+ runner.run(testxml, resultdir=log_dir)
+ except Exception, e:
+ print e
try:
- runner.run(testxml, resultdir=log_dir)
+ end_time = datetime.today().strftime("%Y-%m-%d_%H_%M_%S")
+ runner.merge_resultfile(start_time, end_time, log_dir)
except Exception, e:
print e
import os
import sys
-import re
import time
import threading
import subprocess
from multiprocessing import Process
from multiprocessing import Value
from testkitlite.common.killall import killall
+from testkitlite.common.str2 import *
###############################################################################
def shell_exec(cmd, timeout=None, boutput=False):
stderr_log = rbuffile2.read()
# only leave readable characters
- stderr_log_new = '';
- for i in range(len(stderr_log)):
- temp = stderr_log[i:i + 1]
- pattern = re.compile('[a-zA-Z0-9\-\_]')
- match = pattern.search(temp)
- if not match:
- stderr_log_new += "*"
- else:
- stderr_log_new += temp
+ stdout_log = str2str(stdout_log)
+ stderr_log = str2str(stderr_log)
+ stdout_log = '<![CDATA[' + stdout_log + ']]>'
+ stderr_log = '<![CDATA[' + stderr_log + ']]>'
# close file
wbuffile1.close()
os.remove(BUFFILE1)
os.remove(BUFFILE2)
- return [exit_code, stdout_log.strip('\n'), stderr_log_new.strip('\n')]
+ return [exit_code, stdout_log.strip('\n'), stderr_log.strip('\n')]
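For reference, the character-by-character masking loop removed above is behaviorally close to a one-line regex; this rough equivalent assumes str2str in the common module performs the same cleanup (masking every character outside [a-zA-Z0-9_-] with '*'):

```python
import re

def mask_unreadable(text):
    # Replace every character outside [a-zA-Z0-9_-] with '*', mirroring the
    # per-character loop removed by this patch. Assumption: str2str in the
    # common module performs an equivalent cleanup.
    return re.sub(r'[^a-zA-Z0-9_\-]', '*', text)

masked = mask_unreadable('ok\x00\x1b[31mred')
```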
+
server = HTTPServer(("127.0.0.1", 8000), MyHandler)
print "[ started http server at %s:%d ]" % ("127.0.0.1", 8000)
server.serve_forever()
- except KeyboardInterrupt:
- server.socket.close()
+ except Exception:
+     server.socket.close()
#
import os
+import platform
from datetime import datetime
from shutil import copyfile
from textreport import TestResultsTextReport
import xml.etree.ElementTree as etree
import ConfigParser
from xml.dom import minidom
+from tempfile import mktemp
from testkitlite.common.str2 import *
from testkitlite.common.autoexec import shell_exec
# filter rules
self.filter_rules = None
self.fullscreen = False
-
+ self.resultfiles = set()
+
def set_dryrun(self, bdryrun):
self.bdryrun = bdryrun
try:
filename = testxmlfile
filename = os.path.splitext(filename)[0]
- filename = "%s.result" % _b(filename)
+ if platform.system() == "Linux":
+ filename = filename.split('/')[3]
+ else:
+ filename = filename.split('\\')[-2]
resultfile = "%s.xml" % filename
resultfile = _j(resultdir, resultfile)
- textfile = "%s.txt" % filename
- textfile = _j(resultdir, textfile)
-
+ if _e(resultfile):
+ filename = "%s.manual" % _b(filename)
+ resultfile = "%s.xml" % filename
+ resultfile = _j(resultdir, resultfile)
if not _e(resultdir):
os.mkdir(resultdir)
-
print "[ apply filter ]"
try:
ep = etree.parse(testxmlfile)
except Exception, e:
print str(e)
return False
-
- print "[ xml %s ]" % _abs(resultfile)
- print "[ testing now ]"
- if self.external_test:
- self.execute_external_test(resultfile, resultfile)
- else:
- self.execute(resultfile, resultfile)
-
- if _e(resultfile):
- # report the result using xml mode
- print "[ generate the result(XML): %s ]" % resultfile
- # add XSL support to testkit-lite
- first_line = os.popen("head -n 1 %s" % resultfile).readlines()
- first_line = '<?xml-stylesheet type="text/xsl" href="./resultstyle.xsl"?>' + first_line[0]
- os.system("sed -i '1c " + first_line + "' " + resultfile)
- os.system("cp /opt/testkit/lite/xsd/tests.css " + resultdir)
- os.system("cp /opt/testkit/lite/xsd/resultstyle.xsl " + resultdir)
-
- print "[ generate the result(TXT): %s ]" % textfile
- print self.textreport.report(resultfile)
- open(textfile, "w+").write(self.textreport.report(resultfile))
- if self.resultfile:
- copyfile(resultfile, self.resultfile)
- copyfile(textfile, self.resultfile + '.txt')
-
+ casefind = etree.parse(resultfile).getiterator('testcase')
+ if casefind:
+ print "[ xml %s ]" % _abs(resultfile)
+ print "[ testing now ]"
+ if self.external_test:
+ no_test_definition = 1
+ parser = etree.parse(resultfile)
+ for tf in parser.getiterator('test_definition'):
+ no_test_definition = 0
+ if tf.get('launcher'):
+ if tf.get('launcher').find('webapi'):
+ self.execute(resultfile, resultfile)
+ else:
+ self.execute_external_test(resultfile, resultfile)
+ else:
+ self.execute(resultfile, resultfile)
+ if no_test_definition:
+ self.execute(resultfile, resultfile)
+ else:
+ self.execute(resultfile, resultfile)
+ self.resultfiles.add(resultfile)
except Exception, e:
print e
ok &= False
-
return ok
+
+ def merge_resultfile(self, start_time, end_time, latest_dir):
+ mergefile = mktemp(suffix='.xml', prefix='tests.', dir=latest_dir)
+ mergefile = os.path.splitext(mergefile)[0]
+ mergefile = os.path.splitext(mergefile)[0]
+ mergefile = "%s.result" % _b(mergefile)
+ mergefile = "%s.xml" % mergefile
+ mergefile = _j(latest_dir, mergefile)
+ print "[merged resultfiles into %s]" % mergefile
+ print "........................................."
+ root = etree.Element('test_definition')
+ totals = set()
+ for t in self.resultfiles:
+ totalfile = os.path.splitext(t)[0]
+ totalfile = os.path.splitext(totalfile)[0]
+ totalfile = "%s.total" % totalfile
+ totalfile = "%s.xml" % totalfile
+ totalparser = etree.parse(totalfile)
+ parser = etree.parse(t)
+ for cs in totalparser.getiterator('set'):
+ for ct in cs.getiterator('testcase'):
+ for cp in parser.getiterator('testcase'):
+ if ct.get('id') == cp.get('id'):
+ cs.remove(ct)
+ cs.append(cp)
+ totalparser.write(totalfile)
+ totals.add(totalfile)
+ for tl in totals:
+ parser = etree.parse(tl)
+ for suite in parser.getiterator('suite'):
+ suite.tail = "\n"
+ root.append(suite)
+ try:
+ with open(mergefile, 'w') as output:
+ tree = etree.ElementTree(element=root)
+ tree.write(output)
+ except IOError, e:
+ print "[ **merge resultfiles failed**(%s)]" % e
+ # report the result using xml mode
+ print "[ generate the result(XML): %s ]" % mergefile
+ # add XSL support to testkit-lite
+
+ ep = etree.parse(mergefile)
+ rt = ep.getroot()
+ environment = etree.Element('environment')
+ environment.attrib['device_id'] = "Empty device_id"
+ environment.attrib['device_model'] = "Empty device_model"
+ environment.attrib['device_name'] = "Empty device_name"
+ environment.attrib['firmware_version'] = "Empty firmware_version"
+ environment.attrib['host'] = "Empty host"
+ environment.attrib['os_version'] = "Empty os_version"
+ environment.attrib['resolution'] = "Empty resolution"
+ environment.attrib['screen_size'] = "Empty screen_size"
+ other = etree.Element('other')
+ other.text = "Here is a String for testing"
+ environment.append(other)
+ environment.tail = "\n"
+ summary = etree.Element('summary')
+ summary.attrib['test_plan_name'] = "Empty test_plan_name"
+ start_at = etree.Element('start_at')
+ start_at.text = start_time
+ end_at = etree.Element('end_at')
+ end_at.text = end_time
+ summary.append(start_at)
+ summary.append(end_at)
+ summary.tail = "\n "
+ rt.insert(0, summary)
+ rt.insert(0, environment)
+
+ DECLARATION = """<?xml version="1.0" encoding="UTF-8"?>
+<?xml-stylesheet type="text/xsl" href="resultstyle.xsl"?>\n"""
+ with open(mergefile, 'w') as output:
+ output.write(DECLARATION)
+ ep.write(output, xml_declaration=False, encoding='utf-8')
+
+ if self.resultfile:
+ copyfile(mergefile, self.resultfile)
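Stripped of the per-file bookkeeping, the suite-merging step above boils down to appending every <suite> from the partial result trees under a single <test_definition> root; a minimal sketch with hypothetical suite names:

```python
import xml.etree.ElementTree as etree

def merge_suites(xml_texts):
    # Collect every <suite> from each partial result document under one
    # <test_definition> root, as merge_resultfile() does for the *.total files.
    root = etree.Element('test_definition')
    for text in xml_texts:
        for suite in etree.fromstring(text).iter('suite'):
            root.append(suite)
    return root

merged = merge_suites([
    '<test_definition><suite name="core-suite"/></test_definition>',
    '<test_definition><suite name="webapi-suite"/></test_definition>',
])
```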
def pretty_print(self, ep, resultfile):
rawstr = etree.tostring(ep.getroot(), 'utf-8')
""" Handle manual test """
if case.get('execution_type', '') == 'manual':
+ case.set('result', 'N/A')
try:
for attr in case.attrib:
print " %s: %s" % (attr, case.get(attr))
stderr_elm.text = stderr
# record endtime
- end_elm.text = datetime.today().strftime("%Y-%m-%d_%H_%M_S")
+ end_elm.text = datetime.today().strftime("%Y-%m-%d_%H_%M_%S")
if expected_result != res_elm.text:
case.set('result', 'FAIL')
ep = etree.parse(testxmlfile)
rt = ep.getroot()
for tsuite in rt.getiterator('suite'):
- print "[Suite] execute suite: %s" % tsuite.get('name')
+ for tcaselog in tsuite.getiterator('testcase'):
+     if tcaselog.get('execution_type') == 'manual':
+         print "[Suite] execute manual suite: %s" % tsuite.get('name')
+         break
+ else:
+     print "[Suite] execute suite: %s" % tsuite.get('name')
for tset in tsuite.getiterator('set'):
print "[Set] execute set: %s" % tset.get('name')
for tc in tset.getiterator('testcase'):
+++ /dev/null
-<?xml version="1.0" encoding="UTF-8"?>
-<xsl:stylesheet version="1.0"
- xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
- <xsl:output method="html" version="1.0" encoding="UTF-8"
- indent="yes" />
- <xsl:template match="/">
- <html id="test_tool">
- <STYLE type="text/css">
- @import "tests.css";
- </STYLE>
-
- <body id="test_tool">
- <div id="testcasepage">
- <div id="index_page">
- <div id="title">
- <table>
- <tr>
- <td class="title">
- <h1 align="center">Test Report</h1>
- </td>
- </tr>
- </table>
- </div>
- <div id="device">
- <table>
- <tr>
- <th colspan="2">Device Information</th>
- </tr>
- <tr>
- <td>Device Name</td>
- <td>
- <xsl:value-of select="test_definition/environment/@device_name" />
- </td>
- </tr>
- <tr>
- <td>Device Model</td>
- <td>
- <xsl:value-of select="test_definition/environment/@device_model" />
- </td>
- </tr>
- <tr>
- <td>OS Version</td>
- <td>
- <xsl:value-of select="test_definition/environment/@os_version" />
- </td>
- </tr>
- <tr>
- <td>Device ID</td>
- <td>
- <xsl:value-of select="test_definition/environment/@device_id" />
- </td>
- </tr>
- <tr>
- <td>Firmware Version</td>
- <td>
- <xsl:value-of select="test_definition/environment/@firmware_version" />
- </td>
- </tr>
- <tr>
- <td>Screen Size</td>
- <td>
- <xsl:value-of select="test_definition/environment/@screen_size" />
- </td>
- </tr>
- <tr>
- <td>Resolution</td>
- <td>
- <xsl:value-of select="test_definition/environment/@resolution" />
- </td>
- </tr>
- <tr>
- <td>Host Info</td>
- <td>
- <xsl:value-of select="test_definition/environment/@host" />
- </td>
- </tr>
- <tr>
- <td>Others</td>
- <td>
- <xsl:value-of select="test_definition/environment/other" />
- </td>
- </tr>
- </table>
- </div>
-
- <div id="summary">
- <table>
- <tr>
- <th colspan="2">Test Summary</th>
- </tr>
- <tr>
- <td>Test Plan Name</td>
- <td>
- <xsl:value-of select="test_definition/summary/@test_plan_name" />
- </td>
- </tr>
- <tr>
- <td>Tests Total</td>
- <td>
- <xsl:value-of select="count(test_definition//suite/set/testcase)" />
- </td>
- </tr>
- <tr>
- <td>Test Passed</td>
- <td>
- <xsl:value-of
- select="count(test_definition//suite/set/testcase[@result = 'PASS'])" />
- </td>
- </tr>
- <tr>
- <td>Test Failed</td>
- <td>
- <xsl:value-of
- select="count(test_definition//suite/set/testcase[@result = 'FAIL'])" />
- </td>
- </tr>
- <tr>
- <td>Test N/A</td>
- <td>
- <xsl:value-of
- select="count(test_definition//suite/set/testcase[@result = 'BLOCK'])" />
- </td>
- </tr>
- <tr>
- <td>Test Not Run</td>
- <td>
- <xsl:value-of
- select="count(test_definition//suite/set/testcase) - count(test_definition//suite/set/testcase[@result = 'PASS']) - count(test_definition//suite/set/testcase[@result = 'FAIL']) - count(test_definition//suite/set/testcase[@result = 'BLOCK'])" />
- </td>
- </tr>
- <tr>
- <td>Start time</td>
- <td>
- <xsl:value-of select="test_definition/summary/start_at" />
- </td>
- </tr>
- <tr>
- <td>End time</td>
- <td>
- <xsl:value-of select="test_definition/summary/end_at" />
- </td>
- </tr>
- </table>
- </div>
-
-
- <div id="suite_summary">
- <div id="title">
- <table>
- <tr>
- <td class="title">
- <h1 align="center">Test Summary by Suite</h1>
- </td>
- </tr>
- </table>
- </div>
- <table>
- <tr>
- <th>Suite</th>
- <th>Passed</th>
- <th>Failed</th>
- <th>N/A</th>
- <th>Not Run</th>
- <th>Total</th>
- </tr>
- <xsl:for-each select="test_definition/suite">
- <xsl:sort select="@name" />
- <tr>
- <td>
- <xsl:value-of select="@name" />
- </td>
- <td>
- <xsl:value-of select="count(set//testcase[@result = 'PASS'])" />
- </td>
- <td>
- <xsl:value-of select="count(set//testcase[@result = 'FAIL'])" />
- </td>
- <td>
- <xsl:value-of select="count(set//testcase[@result = 'BLOCK'])" />
- </td>
- <td>
- <xsl:value-of
- select="count(set//testcase) - count(set//testcase[@result = 'PASS']) - count(set//testcase[@result = 'FAIL']) - count(set//testcase[@result = 'BLOCK'])" />
- </td>
- <td>
- <xsl:value-of select="count(set//testcase)" />
- </td>
- </tr>
- </xsl:for-each>
- </table>
- </div>
-
- <div id="cases">
- <div id="title">
- <table>
- <tr>
- <td class="title">
- <h1 align="center">Detailed Test Results</h1>
- </td>
- </tr>
- </table>
- </div>
- <xsl:for-each select="test_definition/suite">
- <xsl:sort select="@name" />
- <p>
- Test Suite:
- <xsl:value-of select="@name" />
- </p>
- <table>
- <tr>
- <th>Case_ID</th>
- <th>Purpose</th>
- <th>Result</th>
- <th>Stdout</th>
- </tr>
- <xsl:for-each select=".//set">
- <xsl:sort select="@name" />
- <tr>
- <td colspan="4">
- Test Set:
- <xsl:value-of select="@name" />
- </td>
- </tr>
- <xsl:for-each select=".//testcase">
- <xsl:sort select="@id" />
- <tr>
- <td>
- <xsl:value-of select="@id" />
- </td>
- <td>
- <xsl:value-of select="@purpose" />
- </td>
-
- <xsl:choose>
- <xsl:when test="@result">
- <xsl:if test="@result = 'FAIL'">
- <td class="red_rate">
- <xsl:value-of select="@result" />
- </td>
- </xsl:if>
- <xsl:if test="@result = 'PASS'">
- <td class="green_rate">
- <xsl:value-of select="@result" />
- </td>
- </xsl:if>
- <xsl:if test="@result = 'BLOCK' ">
- <td>
- BLOCK
- </td>
- </xsl:if>
- </xsl:when>
- <xsl:otherwise>
- <td>
-
- </td>
- </xsl:otherwise>
- </xsl:choose>
- <td>
- <xsl:value-of select=".//result_info/stdout" />
- <xsl:if test=".//result_info/stdout = ''">
- N/A
- </xsl:if>
- </td>
- </tr>
- </xsl:for-each>
- </xsl:for-each>
- </table>
- </xsl:for-each>
- </div>
- </div>
- </div>
- </body>
- </html>
- </xsl:template>
-</xsl:stylesheet>
\ No newline at end of file
<xs:complexType>
<xs:sequence minOccurs="1" maxOccurs="unbounded">
- <xs:element ref="set"></xs:element>
+ <xs:element ref="set" minOccurs="0" maxOccurs="unbounded"></xs:element>
</xs:sequence>
<xs:attributeGroup ref="set_attribute_group"></xs:attributeGroup>
<xs:complexType>
- <xs:sequence minOccurs="0" maxOccurs="unbounded">
-
- <xs:element ref="testcase"></xs:element>
+ <xs:sequence minOccurs="1" maxOccurs="unbounded">
+ <xs:element ref="testcase" minOccurs="0" maxOccurs="unbounded"></xs:element>
</xs:sequence>
+
<xs:attributeGroup ref="set_attribute_group"></xs:attributeGroup>
</xs:complexType>
<xs:unique name="uniqueCaseName">
<xs:complexType>
- <xs:sequence>
+ <xs:sequence minOccurs="1">
<xs:element ref="description"></xs:element>
- <xs:element name="result_info" type="result_info_type"
- minOccurs="0">
- </xs:element>
<xs:element name="categories" type="categories"
minOccurs="0" maxOccurs="1">
</xs:element>
</xs:element>
<xs:element name="spec" type="xs:string" minOccurs="0"
maxOccurs="1"></xs:element>
+ <xs:element name="result_info" type="result_info_type"
+ minOccurs="0">
+ </xs:element>
</xs:sequence>
<xs:attributeGroup ref="case_attribute_group"></xs:attributeGroup>
<xs:attributeGroup name="set_attribute_group">
<xs:attribute name="name" type="xs:anyURI" use="required"></xs:attribute>
+ <xs:attribute name="type" type="xs:string"></xs:attribute>
</xs:attributeGroup>
<xs:attributeGroup name="case_attribute_group">
</xs:element>
<xs:complexType name="result_info_type">
- <xs:sequence>
+ <xs:sequence minOccurs="0">
<xs:element name="actual_result" type="xs:string"></xs:element>
<xs:element name="start" type="xs:string"></xs:element>
<xs:element name="end" type="xs:string"></xs:element>
- <xs:element name="stdout" type="xs:string"></xs:element>
- <xs:element name="stderr" type="xs:string"></xs:element>
+ <xs:element name="stdout" type="xs:string" minOccurs="0"></xs:element>
+ <xs:element name="stderr" type="xs:string" minOccurs="0"></xs:element>
</xs:sequence>
</xs:complexType>
<xs:attribute name="target" type="xs:string"></xs:attribute>
<xs:attribute name="failure" type="xs:string"></xs:attribute>
<xs:attribute name="power" type="xs:string"></xs:attribute>
- <xs:attribute name="file" type="xs:string"></xs:attribute>
</xs:complexType>
<xs:complexType name="seriesType">
<xs:element name="test_definition">
<xs:complexType>
-
<xs:sequence minOccurs="1" maxOccurs="unbounded">
- <xs:element ref="suite"></xs:element>
+ <xs:element name="environment" type="BuildInfoType"
+ minOccurs="0">
+ </xs:element>
+ <xs:element name="summary" type="summaryType"
+ minOccurs="0">
+ </xs:element>
+ <xs:element ref="suite" minOccurs="1"
+ maxOccurs="unbounded">
+ </xs:element>
</xs:sequence>
-
- <xs:attributeGroup ref="set_attribute_group"></xs:attributeGroup>
+ <xs:attribute name="launcher" type="xs:string"></xs:attribute>
</xs:complexType>
</xs:element>
+
+
+
+ <xs:complexType name="BuildInfoType">
+ <xs:sequence>
+ <xs:element name="other" type="xs:string"></xs:element>
+ </xs:sequence>
+ <xs:attribute name="device_name" type="xs:string"></xs:attribute>
+ <xs:attribute name="device_model" type="xs:string"></xs:attribute>
+ <xs:attribute name="os_version" type="xs:string"></xs:attribute>
+ <xs:attribute name="device_id" type="xs:string"></xs:attribute>
+ <xs:attribute name="firmware_version" type="xs:string"></xs:attribute>
+ <xs:attribute name="screen_size" type="xs:string"></xs:attribute>
+ <xs:attribute name="resolution" type="xs:string"></xs:attribute>
+ <xs:attribute name="host" type="xs:string"></xs:attribute>
+ </xs:complexType>
+
+
+ <xs:complexType name="summaryType">
+ <xs:sequence>
+ <xs:element name="start_at" type="xs:string"></xs:element>
+ <xs:element name="end_at" type="xs:string"></xs:element>
+ </xs:sequence>
+ <xs:attribute name="test_plan_name" type="xs:string"></xs:attribute>
+ </xs:complexType>
+
</xs:schema>
--- /dev/null
+<?xml version="1.0" encoding="UTF-8"?>
+<xsl:stylesheet version="1.0"
+ xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
+ <xsl:output method="html" version="1.0" encoding="UTF-8"
+ indent="yes" />
+ <xsl:template match="/">
+ <html>
+				<style type="text/css">
+					@import "tests.css";
+				</style>
+
+ <body>
+ <div id="testcasepage">
+ <div id="title">
+ <table>
+ <tr>
+ <td>
+ <h1>Test Report</h1>
+ </td>
+ </tr>
+ </table>
+ </div>
+ <div id="device">
+ <table>
+ <tr>
+ <th colspan="2">Device Information</th>
+ </tr>
+ <tr>
+ <td>Device Name</td>
+ <td>
+ <xsl:value-of select="test_definition/environment/@device_name" />
+ </td>
+ </tr>
+ <tr>
+ <td>Device Model</td>
+ <td>
+ <xsl:value-of select="test_definition/environment/@device_model" />
+ </td>
+ </tr>
+ <tr>
+ <td>OS Version</td>
+ <td>
+ <xsl:value-of select="test_definition/environment/@os_version" />
+ </td>
+ </tr>
+ <tr>
+ <td>Device ID</td>
+ <td>
+ <xsl:value-of select="test_definition/environment/@device_id" />
+ </td>
+ </tr>
+ <tr>
+ <td>Firmware Version</td>
+ <td>
+ <xsl:value-of select="test_definition/environment/@firmware_version" />
+ </td>
+ </tr>
+ <tr>
+ <td>Screen Size</td>
+ <td>
+ <xsl:value-of select="test_definition/environment/@screen_size" />
+ </td>
+ </tr>
+ <tr>
+ <td>Resolution</td>
+ <td>
+ <xsl:value-of select="test_definition/environment/@resolution" />
+ </td>
+ </tr>
+ <tr>
+ <td>Host Info</td>
+ <td>
+ <xsl:value-of select="test_definition/environment/@host" />
+ </td>
+ </tr>
+ <tr>
+ <td>Others</td>
+ <td>
+ <xsl:value-of select="test_definition/environment/other" />
+ </td>
+ </tr>
+ </table>
+ </div>
+
+ <div id="summary">
+ <table>
+ <tr>
+ <th colspan="2">Test Summary</th>
+ </tr>
+ <tr>
+ <td>Test Plan Name</td>
+ <td>
+ <xsl:value-of select="test_definition/summary/@test_plan_name" />
+ </td>
+ </tr>
+ <tr>
+ <td>Tests Total</td>
+ <td>
+ <xsl:value-of select="count(test_definition//suite/set/testcase)" />
+ </td>
+ </tr>
+ <tr>
+							<td>Tests Passed</td>
+ <td>
+ <xsl:value-of
+ select="count(test_definition//suite/set/testcase[@result = 'PASS'])" />
+ </td>
+ </tr>
+ <tr>
+							<td>Tests Failed</td>
+ <td>
+ <xsl:value-of
+ select="count(test_definition//suite/set/testcase[@result = 'FAIL'])" />
+ </td>
+ </tr>
+ <tr>
+							<td>Tests N/A</td>
+ <td>
+ <xsl:value-of
+ select="count(test_definition//suite/set/testcase[@result = 'BLOCK'])" />
+ </td>
+ </tr>
+ <tr>
+							<td>Tests Not Run</td>
+ <td>
+ <xsl:value-of
+ select="count(test_definition//suite/set/testcase) - count(test_definition//suite/set/testcase[@result = 'PASS']) - count(test_definition//suite/set/testcase[@result = 'FAIL']) - count(test_definition//suite/set/testcase[@result = 'BLOCK'])" />
+ </td>
+ </tr>
+ <tr>
+							<td>Start Time</td>
+ <td>
+ <xsl:value-of select="test_definition/summary/start_at" />
+ </td>
+ </tr>
+ <tr>
+							<td>End Time</td>
+ <td>
+ <xsl:value-of select="test_definition/summary/end_at" />
+ </td>
+ </tr>
+ </table>
+ </div>
+
+
+ <div id="suite_summary">
+ <div id="title">
+ <table>
+ <tr>
+ <td class="title">
+ <h1>Test Summary by Suite</h1>
+ </td>
+ </tr>
+ </table>
+ </div>
+ <table>
+ <tr>
+ <th>Suite</th>
+ <th>Passed</th>
+ <th>Failed</th>
+ <th>N/A</th>
+ <th>Not Run</th>
+ <th>Total</th>
+ </tr>
+ <xsl:for-each select="test_definition/suite">
+ <xsl:sort select="@name" />
+ <tr>
+ <td>
+ <xsl:value-of select="@name" />
+ </td>
+ <td>
+ <xsl:value-of select="count(set//testcase[@result = 'PASS'])" />
+ </td>
+ <td>
+ <xsl:value-of select="count(set//testcase[@result = 'FAIL'])" />
+ </td>
+ <td>
+ <xsl:value-of select="count(set//testcase[@result = 'BLOCK'])" />
+ </td>
+ <td>
+ <xsl:value-of
+ select="count(set//testcase) - count(set//testcase[@result = 'PASS']) - count(set//testcase[@result = 'FAIL']) - count(set//testcase[@result = 'BLOCK'])" />
+ </td>
+ <td>
+ <xsl:value-of select="count(set//testcase)" />
+ </td>
+ </tr>
+ </xsl:for-each>
+ </table>
+ </div>
+
+ <div id="cases">
+ <div id="title">
+ <table>
+ <tr>
+ <td class="title">
+ <h1 align="center">Detailed Test Results</h1>
+ </td>
+ </tr>
+ </table>
+ </div>
+ <xsl:for-each select="test_definition/suite">
+ <xsl:sort select="@name" />
+ <p>
+ Test Suite:
+ <xsl:value-of select="@name" />
+ </p>
+ <table>
+ <tr>
+ <th>Case_ID</th>
+ <th>Purpose</th>
+ <th>Result</th>
+ <th>Stdout</th>
+ </tr>
+ <xsl:for-each select=".//set">
+ <xsl:sort select="@name" />
+ <tr>
+ <td colspan="4">
+ Test Set:
+ <xsl:value-of select="@name" />
+ </td>
+ </tr>
+ <xsl:for-each select=".//testcase">
+ <xsl:sort select="@id" />
+ <tr>
+ <td>
+ <xsl:value-of select="@id" />
+ </td>
+ <td>
+ <xsl:value-of select="@purpose" />
+ </td>
+
+ <xsl:choose>
+ <xsl:when test="@result">
+ <xsl:if test="@result = 'FAIL'">
+ <td class="red_rate">
+ <xsl:value-of select="@result" />
+ </td>
+ </xsl:if>
+ <xsl:if test="@result = 'PASS'">
+ <td class="green_rate">
+ <xsl:value-of select="@result" />
+ </td>
+ </xsl:if>
+										<xsl:if test="@result = 'BLOCK'">
+ <td>
+ BLOCK
+ </td>
+ </xsl:if>
+ </xsl:when>
+ <xsl:otherwise>
+ <td>
+
+ </td>
+ </xsl:otherwise>
+ </xsl:choose>
+ <td>
+ <xsl:value-of select=".//result_info/stdout" />
+ <xsl:if test=".//result_info/stdout = ''">
+ N/A
+ </xsl:if>
+ </td>
+ </tr>
+ </xsl:for-each>
+ </xsl:for-each>
+ </table>
+ </xsl:for-each>
+ </div>
+ </div>
+ </body>
+ </html>
+ </xsl:template>
+</xsl:stylesheet>
\ No newline at end of file
@charset "UTF-8";
/* CSS Document */
-
-/* @group General styles */
-
-/* @group reset */
-
-#test_tool html, body h1, p, table,caption,tbody,tfoot,thead,tr,th,td
-{
+#testcasepage div,#testcasepage h1,#testcasepage p,#testcasepage table,#testcasepage tr,#testcasepage th,#testcasepage td
+ {
margin: 0;
padding: 0;
border: 0;
border-collapse: separate;
border-spacing: 0;
margin-bottom: 1.4em;
+ vertical-align: middle;
}
-#testcasepage th,td {
+#testcasepage th,#testcasepage td {
text-align: left;
font-weight: normal;
-}
-
-#testcasepage table,td,th {
+ padding: 4px 10px 4px 5px;
vertical-align: middle;
}
width: 50%;
}
-#title table tr td {
- border-bottom: none;
-}
-
#testcasepage th {
border-bottom: 1px solid #000;
- background-color: #AAAAAA;
+ background-color: #AAAAAA;
border-left: 1px solid #000;
border-top: 1px solid #000;
-}
-
-#testcasepage th a,th {
color: #000;
-}
-
-#testcasepage th {
font-weight: bold;
vertical-align: bottom;
}
-#testcasepage th:last-child {
+#testcasepage th:last-child, #testcasepage td:last-child {
border-right: 1px solid #000;
}
-#testcasepage td:last-child {
- border-right: 1px solid;
-}
-
-#testcasepage th,td,caption {
- padding: 4px 10px 4px 5px;
-}
-
#testcasepage td {
border-left: 1px solid;
font-weight: normal;
border-bottom: 1px solid;
}
-#testcasepage td.title{
- background-color: #FFF;
-}
-
#testcasepage td.yellow_rate {
background-color: #ffcc00;
}
background-color: #FF3333;
}
-#title tr td:first-child {
+#title table, #title tr, #title td {
border-left: none;
+ border-bottom: none;
+ text-align: center;
}
-/* @group basic typography */
-#testcasepage h1,h2,h3 {
- font-family: Arial, sans-serif;
- font-weight: bold;
+#title td:last-child {
+ border-right: none;
}
#testcasepage h1 {
font-size: 2em;
+	font-family: Arial, sans-serif;
line-height: 1;
color: #000;
margin-bottom: 0.75em;
padding-top: 0.25em;
-}
-
-#testcasepage h1 em {
- font-style: normal;
- color: #909090;
-}
+ font-weight: bold;
+}
\ No newline at end of file