+ Tue Sep 29 2012 Zhang Huihui <huihuix.zhang@intel.com> 2.2.4-4
+ - update pdf files
+ Tue Sep 28 2012 Zhang Huihui <huihuix.zhang@intel.com> 2.2.4-3
+ - add Known Issues section in README
+ - update Notes section in README
+ Tue Sep 25 2012 Zhang Huihui <huihuix.zhang@intel.com> 2.2.4-2
+ - integrate shaofeng's patch onload_for_perf
+ - remove category filter
+ Tue Sep 17 2012 Zhang Huihui <huihuix.zhang@intel.com> 2.2.4-1
+ - add sleep time between running tests.xml
+ - update post message
+ - add merging status message
+ - update log format
+ - Update elements(button, radiobox, etc) in testkit/web/manualharness.html
+ - add time for merge log
+ - update xsl according to new <spec> schema
+ Tue Sep 10 2012 Zhang Huihui <huihuix.zhang@intel.com> 2.2.3-2
+ - support merge result file that has same case ID
+ - update result file name
+ - add error handler for post request, to print error message
Tue Aug 30 2012 Zhang Huihui <huihuix.zhang@intel.com> 2.2.3-1
- support run both auto and manual core cases in one test run
- write pid into pid_log,kill pid on windows platform
- Installation
+ Installation:
=================
Before installation, please make sure the basic package python has been installed.
Quick Start:
=================
-
+
At first, prepare one tests.xml file aligned with schema files: /opt/testkit/lite/xsd/testdefinition-syntax.xsd.
-
+
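For orientation, here is a minimal illustrative tests.xml fragment. The authoritative structure is defined by /opt/testkit/lite/xsd/testdefinition-syntax.xsd; the names and attribute values below are placeholders, chosen to match the filter attributes (status, type, priority, execution_type) mentioned elsewhere in this README, not copied from the schema:

```xml
<test_definition>
  <suite name="example-suite">
    <set name="example-set">
      <!-- attribute values are placeholders for illustration only -->
      <testcase id="EXAMPLE-001" execution_type="auto"
                priority="P1" status="ready" type="type1"/>
    </set>
  </suite>
</test_definition>
```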
And then,
-
+
1) You can simply run one or more tests.xml files:
testkit-lite -f <somewhere>/tests.xml
-
+
2) If you just want to get statistics (such as the number of test cases or the structure), a dry run could help:
testkit-lite -f tests.xml -D
-
+
3) If you want to execute both auto and manual tests:
testkit-lite -f tests.xml
-
+
4) If you just want to execute manual tests:
testkit-lite -f tests.xml -M
-
+
5) If you just want to execute auto tests:
testkit-lite -f tests.xml -A
-
+
6) If you want to save the test result to another location (by default it will be under /opt/testkit/lite/latest):
testkit-lite -f tests.xml -o <somewhere>
-
+
7) If you want to choose some filters:
testkit-lite -f tests.xml --status level1 --type type1 ...
-
+
8) If you want to run a Web API test:
testkit-lite -f /usr/share/webapi-webkit-tests/tests.xml -e "WRTLauncher webapi-webkit-tests" -o /tmp/wekit-tests-result.xml
-
+
9) If you want to run a Web API test in full screen mode:
testkit-lite -f /usr/share/webapi-webkit-tests/tests.xml -e "WRTLauncher webapi-webkit-tests" -o /tmp/wekit-tests-result.xml --fullscreen
-
+
10) Finally, you can freely combine the above parameters:
testkit-lite -f <somewhere1>/tests.xml <somewhere2>/tests.xml -A --priority P1 --type type1 ...
Get Results:
=================
+
Test report will be generated as below:
tests.result.xml
xml result files aligned with schema files: /opt/testkit/lite/xsd/
example: <ignore>
-
+
The result will be under /opt/testkit/lite/latest after execution; you can also check the history results in /opt/testkit/lite/yyyy-mm-dd-HH:MM:SS.NNNNNN.
Notes:
=================
- One testxml should contains only one <suite> tag, multiple tags are not supported
- testkit-lite's TestLog is stored to /opt/testkit/lite/latest
- testkit-lite enables both automatic and manual tests by default
- Obviously -A and -M are conflict options
- -e option does not support -D mode
+
+ 1) One tests.xml should contain only one <suite> tag; multiple <suite> tags are not supported
+ 2) testkit-lite's TestLog is stored in /opt/testkit/lite/latest
+ 3) testkit-lite enables both automatic and manual tests by default
+ 4) Obviously, -A and -M are conflicting options
+ 5) The -e option does not support -D mode
+ 6) The order of test cases in the result files might vary between runs of the same tests.xml with the same options. This is caused by python's API 'getiterator' from the module 'xml.etree.ElementTree'
+
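If a deterministic case order matters to downstream tooling, one possible post-processing step (not part of testkit-lite; the function name here is hypothetical) is to sort the <testcase> elements of each <set> by their id attribute after the run:

```python
import xml.etree.ElementTree as ET

def sort_result_cases(path):
    # Sort the <testcase> children of every <set> by their id attribute,
    # so repeated runs produce comparable result files.
    tree = ET.parse(path)
    for tset in tree.getroot().iter('set'):
        cases = sorted(tset.findall('testcase'),
                       key=lambda c: c.get('id') or '')
        for case in cases:
            tset.remove(case)
        tset.extend(cases)      # re-attach in sorted order
    return tree
```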
+ Known Issues:
+ =================
+
+ 1) testkit-lite might crash when running a test package which contains more than 1000 test cases on portable devices (launch box, PR3)
+
+ Workaround for 1): Split the test package's tests.xml into smaller xmls, each containing fewer than 1000 test cases, and run them one by one
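The splitting step above can be sketched as follows. This is an illustrative helper only (the function name and chunking strategy are assumptions, not part of testkit-lite); it walks the <suite>/<set>/<testcase> structure described in this README and emits one tree per chunk:

```python
import xml.etree.ElementTree as ET

def split_tests_xml(path, max_cases=1000):
    # Split one big tests.xml into several ElementTree objects, each
    # holding at most max_cases <testcase> elements. Write each tree to
    # its own file and run them with testkit-lite one by one.
    src_root = ET.parse(path).getroot()
    chunks = []
    count = max_cases          # "full", so the first case opens a chunk
    dst_root = dst_suite = dst_set = None
    for suite in src_root.iter('suite'):
        for tset in suite.iter('set'):
            for case in tset.iter('testcase'):
                if count >= max_cases:      # open a fresh chunk
                    dst_root = ET.Element(src_root.tag, src_root.attrib)
                    chunks.append(ET.ElementTree(dst_root))
                    dst_suite = dst_set = None
                    count = 0
                # recreate the enclosing <suite>/<set> in the chunk as needed
                if dst_suite is None or dst_suite.get('name') != suite.get('name'):
                    dst_suite = ET.SubElement(dst_root, 'suite', suite.attrib)
                    dst_set = None
                if dst_set is None or dst_set.get('name') != tset.get('name'):
                    dst_set = ET.SubElement(dst_suite, 'set', tset.attrib)
                dst_set.append(case)
                count += 1
    return chunks
```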
Detail Options:
=================
--status Select test cases with the specified status
--type Select test cases with the specified type
--priority Select test cases with the specified priority
- --category Select the specified white-rules
-Examples:
- run a webapi package:
- 1): testkit-lite -f /usr/share/webapi-webkit-tests/tests.xml -e 'WRTLauncher webapi-webkit-tests' -o /tmp/wekit-tests-result.xml --priority P0 --status ready
- run both core and webapi packages:
- 2): testkit-lite -f /usr/share/webapi-webkit-tests/tests.xml /usr/share/tts-bluez-tests/tests.xml -e 'WRTLauncher webapi-webkit-tests' -o /tmp/wekit-tests-result.xml
+ Examples:
+ =================
+
+ run a webapi package:
+ 1) testkit-lite -f /usr/share/webapi-webkit-tests/tests.xml -e 'WRTLauncher webapi-webkit-tests' -o /tmp/wekit-tests-result.xml --priority P0 --status ready
+ run both core and webapi packages:
+ 2) testkit-lite -f /usr/share/webapi-webkit-tests/tests.xml /usr/share/tts-bluez-tests/tests.xml -e 'WRTLauncher webapi-webkit-tests' -o /tmp/wekit-tests-result.xml
TODO:
========
-1. add --verbose and logging level
-2. improve algorithm to merge result files
\ No newline at end of file
+1. add --verbose and logging level
\ No newline at end of file
install-scripts = /usr/bin
install-lib = /usr/lib/python2.7/site-packages
[bdist_rpm]
-release = 1
+release = 4
packager = huihuix.zhang@intel.com
requires = python
pre_install = preinstall
install_script = fakeinstall
post_install = postinstall
-changelog = * Tue Aug 30 2012 Zhang Huihui <huihuix.zhang@intel.com> 2.2.3-1
+changelog = * Tue Sep 29 2012 Zhang Huihui <huihuix.zhang@intel.com> 2.2.4-4
+ - update pdf files
+ Tue Sep 28 2012 Zhang Huihui <huihuix.zhang@intel.com> 2.2.4-3
+ - add Known Issues section in README
+ - update Notes section in README
+ Tue Sep 25 2012 Zhang Huihui <huihuix.zhang@intel.com> 2.2.4-2
+ - integrate shaofeng's patch onload_for_perf
+ - remove category filter
+ Tue Sep 17 2012 Zhang Huihui <huihuix.zhang@intel.com> 2.2.4-1
+ - add sleep time between running tests.xml
+ - update post message
+ - add merging status message
+ - update log format
+ - Update elements(button, radiobox, etc) in testkit/web/manualharness.html
+ - add time for merge log
+ - update xsl according to new <spec> schema
+ Tue Sep 10 2012 Zhang Huihui <huihuix.zhang@intel.com> 2.2.3-2
+ - support merge result file that has same case ID
+ - update result file name
+ - add error handler for post request, to print error message
+ Tue Aug 30 2012 Zhang Huihui <huihuix.zhang@intel.com> 2.2.3-1
- support run both auto and manual core cases in one test run
- write pid into pid_log,kill pid on windows platform
- - modify webAPI identify string
+ - modify webAPI identify string
- use CDATA to resolve unreadable characters
- add default time for core package
Tue Aug 16 2012 Zhang Huihui <huihuix.zhang@intel.com> 2.2.2-3
- add normal user running support for command testkit-lite
- deal with non-params as -h
Wed Jul 21 2010 Wei, Zhang <wei.z.zhang@intel.com> 1.0.0-1
- - for 1.0.0 release
-
+ - for 1.0.0 release
\ No newline at end of file
setup(name='testkit-lite',
description='command line test execution framework',
- version='2.2.3',
+ version='2.2.4',
long_description='',
author='Zhang, Huihui',
author_email='huihuix.zhang@intel.com',
"id": [],
"type": [],
"priority": [],
- "category": [],
"status": [],
"component": []}
if pid:
if platform.system() == "Linux":
os.kill(int(pid), 9)
- print "[ kill existing testkit-lite pid %s ]" % pid
+ print "[ kill existing testkit-lite, pid: %s ]" % pid
else:
kernel32 = ctypes.windll.kernel32
handle = kernel32.OpenProcess(1, 0, int(pid))
kill_result = kernel32.TerminateProcess(handle, 0)
- print "[ kill existing testkit-lite pid %s ]" % pid
+ print "[ kill existing testkit-lite, pid: %s ]" % pid
except Exception, e:
pattern = re.compile('No such file or directory|No such process')
match = pattern.search(str(e))
if not os.path.exists(LOG_DIR):
os.makedirs(LOG_DIR)
except OSError, e:
- print >> sys.stderr, "\n[ create results directory failed: %s ]\n" % e
+ print >> sys.stderr, "\n[ create results directory: %s failed, error: %s ]\n" % (LOG_DIR, e)
try:
with open(PID_FILE, "w") as fd:
pid = str(os.getpid())
fd.writelines(pid + '\n')
os.chmod(PID_FILE, 0666)
-except:
- print "[ can't create pid.log... ]"
+except Exception, e:
+ print "[ can't create pid log file: %s, error: %s ]" % (PID_FILE, e)
sys.exit(1)
# detect version option
if "--version" in sys.argv:
- print "[ testkit-lite v2.2.3-1 ]"
+ print "[ testkit-lite v2.2.4-4 ]"
sys.exit(1)
#get test engine, now we only got default engine
exec "from testkitlite.engines.%s.runner import TRunner" % engine
print "[ loading %s test engine ]" % engine
except ImportError, e:
- print "[ loading test engine failed: %s ]" % e
+ print "[ loading test engine: %s failed, error: %s ]" % (engine, e)
sys.argv.append("-h")
def varnarg(option, opt_str, value, parser):
if os.name == "posix":
os.symlink(log_dir, latest_dir)
except OSError, e:
- print >> sys.stderr, "\n[ create session log directory failed: %s ]\n" % e
+ print >> sys.stderr, "\n[ create session log directory: %s failed, error: %s ]\n" % (log_dir, e)
# 2) run test
+ # run more than one tests.xml
+ # 1. run all auto cases from the xmls
+ # 2. run all manual cases from the xmls
if len(options.testxml) > 1:
testxmls = set(options.testxml)
for t in testxmls:
wfilters['execution_type'] = ["manual"]
runner.add_filter_rules(**wfilters)
runner.apply_filter(rt)
+ # keep only the suite and set elements, for result merging
+ for suite in ep.getiterator('suite'):
+ for tset in suite.getiterator('set'):
+ for testcase in tset.getiterator('testcase'):
+ tset.remove(testcase)
ep.write(resultfile)
start_time = datetime.today().strftime("%Y-%m-%d_%H_%M_%S")
if not options.bautoonly:
wfilters['execution_type'] = ["manual"]
runner.add_filter_rules(**wfilters)
runner.run(t, resultdir=log_dir)
+ time.sleep(50)
except Exception, e:
print e
else:
wfilters['execution_type'] = ["auto"]
runner.add_filter_rules(**wfilters)
runner.run(t, resultdir=log_dir)
+ time.sleep(6)
except Exception, e:
print e
for t in testxmls:
wfilters['execution_type'] = ["manual"]
runner.add_filter_rules(**wfilters)
runner.run(t, resultdir=log_dir)
+ time.sleep(50)
except Exception, e:
print e
else:
wfilters['execution_type'] = ["auto"]
runner.add_filter_rules(**wfilters)
runner.run(t, resultdir=log_dir)
+ time.sleep(6)
except Exception, e:
print e
+ # run only one tests.xml
+ # 1. run all auto cases from the xml
+ # 2. run all manual cases from the xml
else:
testxml = (options.testxml)[0]
filename = testxml
wfilters['execution_type'] = ["manual"]
runner.add_filter_rules(**wfilters)
runner.apply_filter(rt)
+ # keep only the suite and set elements, for result merging
+ for suite in ep.getiterator('suite'):
+ for tset in suite.getiterator('set'):
+ for testcase in tset.getiterator('testcase'):
+ tset.remove(testcase)
ep.write(resultfile)
start_time = datetime.today().strftime("%Y-%m-%d_%H_%M_%S")
if not options.bautoonly:
wfilters['execution_type'] = ["auto"]
runner.add_filter_rules(**wfilters)
runner.run(testxml, resultdir=log_dir)
+ time.sleep(6)
wfilters['execution_type'] = ["manual"]
runner.add_filter_rules(**wfilters)
runner.run(testxml, resultdir=log_dir)
runner.add_filter_rules(**wfilters)
runner.run(testxml, resultdir=log_dir)
except Exception, e:
- print e
+ print e
try:
end_time = datetime.today().strftime("%Y-%m-%d_%H_%M_%S")
runner.merge_resultfile(start_time, end_time, log_dir)
self.end_headers()
self.wfile.write(testsuitexml)
except Exception, e:
- print "[ reading test suite %s failed ]" % self.Query["testsuite"]
- print e
+ print "[ reading test suite %s failed, error: %s ]" % (self.Query["testsuite"], e)
else:
- print "[ test-suite file not found ]"
+ print "[ testsuite parameter not found ]"
return None
def do_POST(self):
print "[ save result xml to %s ]" % resultfile
#kill open windows
+ # if the process does not exist, just continue
time.sleep(5)
with open(self.Query["pid_log"], "r") as fd:
main_pid = 1
try:
if platform.system() == "Linux":
os.kill(int(pid), 9)
- print "[ kill open window pid %s ]" % pid
+ print "[ kill execution process, pid: %s ]" % pid
else:
kernel32 = ctypes.windll.kernel32
handle = kernel32.OpenProcess(1, 0, int(pid))
kill_result = kernel32.TerminateProcess(handle, 0)
- print "[ kill open window pid %s ]" % pid
+ print "[ kill execution process, pid: %s ]" % pid
except Exception, e:
pattern = re.compile('No such process')
match = pattern.search(str(e))
if not match:
- print "[ fail to kill open window pid %s, error: %s ]" % (int(pid), e)
+ print "[ fail to kill execution process, pid: %s, error: %s ]" % (int(pid), e)
#send response
if resultfile is not None:
self.send_response(200)
else:
self.send_response(100)
-
+
if self.path.strip() == "/test_hint":
- tcase = ""
- tsuite = ""
- tset = ""
- global CurSuite
- global CurSet
- if query.has_key("suite"):
- tsuite = (query.get("suite"))[0]
- if not tsuite == CurSuite:
- CurSuite = tsuite
- CurSet = ""
- print "[Suite] execute suite: %s" % tsuite
- if query.has_key("set"):
- tset = (query.get("set"))[0]
- if not tset == CurSet:
- CurSet = tset
- print "[Set] execute set: %s" % tset
- if query.has_key("testcase"):
- tcase = (query.get("testcase"))[0]
- print "[Case] execute case: %s" % tcase
+ try:
+ tcase = ""
+ tsuite = ""
+ tset = ""
+ global CurSuite
+ global CurSet
+ if query.has_key("suite"):
+ tsuite = (query.get("suite"))[0]
+ if not tsuite == CurSuite:
+ CurSuite = tsuite
+ CurSet = ""
+ print "[Suite] execute suite: %s" % tsuite
+ if query.has_key("set"):
+ tset = (query.get("set"))[0]
+ if not tset == CurSet:
+ CurSet = tset
+ print "[Set] execute set: %s" % tset
+ if query.has_key("testcase"):
+ tcase = (query.get("testcase"))[0]
+ print "[Case] execute case: %s" % tcase
+ except Exception, e:
+ print "[ fail to print test hint, error: %s ]" % e
#send response
self.send_response(200)
self.send_header("foo", "bar")
self.end_headers()
return None
except Exception, e:
- pass
+ print "[ fail to handle post request, error: %s ]" % e
def do_GET(self):
""" Handle GET type request """
with open(filename, "w") as fd:
fd.write(filecontent)
return filename
- except IOError, e:
- print "[ fail to save result xml: %s ]" % filename
- print e
+ except Exception, e:
+ print "[ fail to save result xml %s, error: %s ]" % (filename, e)
return None
def startup(parameters):
filename = filename.split('/')[3]
else:
filename = filename.split('\\')[-2]
- resultfile = "%s.xml" % filename
+ resultfile = "%s.auto.xml" % filename
resultfile = _j(resultdir, resultfile)
if _e(resultfile):
filename = "%s.manual" % _b(filename)
print "[ merge result files into %s ]" % mergefile
root = etree.Element('test_definition')
totals = set()
- for t in self.resultfiles:
- totalfile = os.path.splitext(t)[0]
+ for resultfile in self.resultfiles:
+ print "|--[ merge result file: %s ]" % resultfile
+ totalfile = os.path.splitext(resultfile)[0]
totalfile = os.path.splitext(totalfile)[0]
totalfile = "%s.total" % totalfile
totalfile = "%s.xml" % totalfile
- totalparser = etree.parse(totalfile)
- parser = etree.parse(t)
- for cs in totalparser.getiterator('set'):
- for ct in cs.getiterator('testcase'):
- for cp in parser.getiterator('testcase'):
- if ct.get('id') == cp.get('id') and ct.get('component') == cp.get('component'):
- try:
- if not cp.get('result'):
- cp.set('result', 'N/A')
- cs.remove(ct)
- cs.append(cp)
- except Exception, e:
- print "[ fail to remove %s, add %s, error: %s ]" % (ct.get('id'), cp.get('id'), e)
- totalparser.write(totalfile)
+ total_xml = etree.parse(totalfile)
+ result_xml = etree.parse(resultfile)
+
+ for total_suite in total_xml.getiterator('suite'):
+ for total_set in total_suite.getiterator('set'):
+ for result_suite in result_xml.getiterator('suite'):
+ for result_set in result_suite.getiterator('set'):
+ # when total xml and result xml have same suite name and set name
+ if result_set.get('name') == total_set.get('name') and result_suite.get('name') == total_suite.get('name'):
+ # set cases in the result set that don't have a result to N/A
+ # append cases from result set to total set
+ result_case_iterator = result_set.getiterator('testcase')
+ if result_case_iterator:
+ print "`----[ suite: %s, set: %s, time: %s ]" % (result_suite.get('name'), result_set.get('name'), datetime.today().strftime("%Y-%m-%d_%H_%M_%S"))
+ for result_case in result_case_iterator:
+ try:
+ if not result_case.get('result'):
+ result_case.set('result', 'N/A')
+ total_set.append(result_case)
+ except Exception, e:
+ print "[ fail to append %s, error: %s ]" % (result_case.get('id'), e)
+ total_xml.write(totalfile)
totals.add(totalfile)
- for tl in totals:
- parser = etree.parse(tl)
- for suite in parser.getiterator('suite'):
+ for total in totals:
+ result_xml = etree.parse(total)
+ for suite in result_xml.getiterator('suite'):
suite.tail = "\n"
root.append(suite)
try:
tree = etree.ElementTree(element=root)
tree.write(output)
except IOError, e:
- print "[ merge result file failed: %s ]" % e
+ print "[ merge result file failed, error: %s ]" % e
# report the result using xml mode
- print "[ generate result XML: %s ]" % mergefile
+ print "[ generate result xml: %s ]" % mergefile
if self.core_manual_flag:
print "[ all results for core manual cases are N/A, the result file is at %s ]" % mergefile
# add XSL support to testkit-lite
DECLARATION = """<?xml version="1.0" encoding="UTF-8"?>
-<?xml-stylesheet type="text/xsl" href="resultstyle.xsl"?>\n"""
+<?xml-stylesheet type="text/xsl" href="testresult.xsl"?>\n"""
with open(mergefile, 'w') as output:
output.write(DECLARATION)
ep.write(output, xml_declaration=False, encoding='utf-8')
import subprocess, thread
from pyhttpd import startup
if self.bdryrun:
- print "[ external test does not support dryrun ]"
+ print "[ WRTLauncher mode does not support dryrun ]"
return True
#start http server in here
try:
var last_test_page = "";
var current_page_uri = "";
+ var activetest = true;
+
var manualcases = function() {
this.casesid = "";
this.index = 0;
}
iTest++;
- doTest();
+
+ //alert("Reporting result:" + result + "; and if continue testing:" + activetest);
+ if(activetest){
+ doTest();
+ }else{
+ activetest = true;
+ }
}
function doTest() {
psuite = $(Tests[iTest]).parent().parent().attr('name');
startTime = new Date();
- setTimeout("CheckResult()", pollTime);
+ //setTimeout("CheckResult()", pollTime);
current_page_uri = $(it).text();
var index = current_page_uri.indexOf("?");
test_page = current_page_uri;
// Don't load the same test page again
- if (test_page == last_test_page)
- return;
+ //alert("Testing page: " + test_page + "; Previous_page: " + last_test_page);
+ //alert("No: " + iTest);
+ if (test_page == last_test_page){
+ activetest = false;
+ //alert("Continue:" + activetest);
+ CheckResult();
+ continue;
+ }
if ((current_page_uri.indexOf("2DTransforms") != -1)
|| (current_page_uri.indexOf("3DTransforms") != -1)) {
}
oTestFrame.src = current_page_uri;
last_test_page = test_page;
+ //alert("Prepare onload.collback");
+ if (oTestFrame.attachEvent){
+ oTestFrame.attachEvent("onload", function(){
+ CheckResult();
+ });
+ } else {
+ oTestFrame.onload = function(){
+ CheckResult();
+ };
+ }
return;
+
}
doManualTest();
}
input,label,select{
- font-size: 28px;
+ font-size: 40px;
}
</style>
</head>
<body onload="initManual()">
<div id="manualharness" >
-<input type="button" style="width:12%" id="prevbutton" value="<< Prev" onclick="prevTest()"/>
-<select id="caseslist" style="width:61%" onchange="listUpdate()">
+<input type="button" style="width:13%" id="prevbutton" value="< Prev" onclick="prevTest()"/>
+<select id="caseslist" style="width:59%" onchange="listUpdate()">
</select>
-<input type="button" style="width:12%" id="nextbutton" value="Next >>" onclick="nextTest()"/>
-<input type="button" style="width:14%" id="runbutton" value="Run" onclick="runTest()"/>
+<input type="button" style="width:13%" id="nextbutton" value="Next >" onclick="nextTest()"/>
+<input type="button" style="width:13%" id="runbutton" value="Run" onclick="runTest()"/>
</div>
<div width=100%>
-<textarea id="casesinfo" rows=12 disabled='disabled' />
+<textarea id="casesinfo" rows=11 disabled='disabled' />
</textarea>
</div>
<div style="width:100%;text-align:right;background-color:#cccccc;">
-<input type="radio" id="passradio" value="Pass" onclick="passRadio()"/><label onclick="passLabel()">Pass</label>
+<input type="radio" id="passradio" value="Pass" onclick="passRadio()"/><label style="font-size:40px" onclick="passLabel()">Pass</label>
-<input type="radio" id="failradio" value="Fail" onclick="failRadio()"/><label onclick="failLabel()">Fail</label>
+<input type="radio" id="failradio" value="Fail" onclick="failRadio()"/><label style="font-size:40px" onclick="failLabel()">Fail</label>
<input type="button" style="width:12%" id="submitbutton" value="Save" onclick="submitTest()"/>
<input type="button" style="width:12%" id="completebutton" value="Done" onclick="completeTest()"/><br>
</xs:sequence>
<xs:attributeGroup ref="set_attribute_group"></xs:attributeGroup>
+ <xs:attribute name="launcher" type="xs:string"></xs:attribute>
</xs:complexType>
<xs:unique name="uniqueSetName">
<xs:selector xpath=".//set" />
<xs:element name="series" type="seriesType" minOccurs="0"
maxOccurs="unbounded">
</xs:element>
- <xs:element name="spec" type="xs:string" minOccurs="0"
+ <xs:element name="specs" type="specsType" minOccurs="0"
maxOccurs="1"></xs:element>
<xs:element name="result_info" type="result_info_type"
minOccurs="0">
maxOccurs="unbounded">
</xs:element>
</xs:sequence>
- <xs:attribute name="launcher" type="xs:string"></xs:attribute>
</xs:complexType>
</xs:element>
<xs:attribute name="test_plan_name" type="xs:string"></xs:attribute>
</xs:complexType>
+
+ <xs:complexType name="specsType">
+ <xs:sequence>
+ <xs:element name="spec" type="specType" maxOccurs="unbounded"></xs:element>
+ </xs:sequence>
+ </xs:complexType>
+
+ <xs:complexType name="specType">
+ <xs:sequence>
+ <xs:element name="spec_assertion" type="spec_assertionType"></xs:element>
+ <xs:element name="spec_url" type="xs:string"></xs:element>
+ <xs:element name="spec_statement" type="xs:string"></xs:element>
+ </xs:sequence>
+ </xs:complexType>
+
+ <xs:complexType name="spec_assertionType">
+ <xs:attribute name="category" type="xs:string" use="required"></xs:attribute>
+ <xs:attribute name="section" type="xs:string" use="required"></xs:attribute>
+ <xs:attribute name="specification" type="xs:string"
+ use="required">
+ </xs:attribute>
+ <xs:attribute name="interface" type="xs:string"
+ use="required">
+ </xs:attribute>
+ <xs:attribute name="element_name" type="xs:string"
+ use="optional">
+ </xs:attribute>
+ <xs:attribute name="usage" type="xs:boolean" default="false"></xs:attribute>
+ <xs:attribute name="element_type" type="xs:string"></xs:attribute>
+ </xs:complexType>
</xs:schema>
<STYLE type="text/css">
@import "tests.css";
</STYLE>
-
+ <head>
+ <script type="text/javascript" src="jquery.min.js" />
+ </head>
<body>
<div id="testcasepage">
<div id="title">
<tr>
<td>Others</td>
<td>
- <xsl:value-of select="test_definition/environment/other" />
+ <xsl:call-template name="br-replace">
+ <xsl:with-param name="word"
+ select="test_definition/environment/other" />
+ </xsl:call-template>
+ <!-- xsl:value-of select="test_definition/environment/other" / -->
</td>
</tr>
</table>
<div id="suite_summary">
<div id="title">
+ <a name="contents"></a>
<table>
<tr>
<td class="title">
<xsl:sort select="@name" />
<tr>
<td>
- <xsl:value-of select="@name" />
+ <a>
+ <xsl:attribute name="href">
+ #<xsl:value-of select="@name"/>
+ </xsl:attribute>
+ <xsl:value-of select="@name" />
+ </a>
</td>
<td>
<xsl:value-of select="count(set//testcase[@result = 'PASS'])" />
</div>
<xsl:for-each select="test_definition/suite">
<xsl:sort select="@name" />
- <p>
+ <div id="btc"><a href="#contents">Back to Contents</a></div>
+ <div id="suite_title">
Test Suite:
<xsl:value-of select="@name" />
- </p>
+ <a>
+ <xsl:attribute name="name">
+ <xsl:value-of select="@name"/>
+ </xsl:attribute>
+ </a>
+ </div>
<table>
<tr>
<th>Case_ID</th>
</xsl:for-each>
</div>
</div>
+ <div id="goTopBtn"><img border="0" src="./back_top.png"/></div>
+ <script type="text/javascript" src="application.js" />
+ <script language="javascript" type="text/javascript">
+ $(document).ready(function(){
+ goTopEx();
+ });
+ </script>
</body>
</html>
</xsl:template>
-</xsl:stylesheet>
\ No newline at end of file
+ <xsl:template name="br-replace">
+ <xsl:param name="word" />
+ <xsl:variable name="cr">
+ <xsl:text>
+</xsl:text>
+ </xsl:variable>
+ <xsl:choose>
+ <xsl:when test="contains($word,$cr)">
+ <xsl:value-of select="substring-before($word,$cr)" />
+ <br />
+ <xsl:call-template name="br-replace">
+ <xsl:with-param name="word" select="substring-after($word,$cr)" />
+ </xsl:call-template>
+ </xsl:when>
+ <xsl:otherwise>
+ <xsl:value-of select="$word" />
+ </xsl:otherwise>
+ </xsl:choose>
+ </xsl:template>
+</xsl:stylesheet>
text-align: left;
}
+#suite_title {
+ text-align: left;
+}
+
+#btc {
+ text-align: right;
+}
+
#testcasepage table {
border-collapse: separate;
border-spacing: 0;
vertical-align: bottom;
}
-#testcasepage th:last-child, #testcasepage td:last-child {
+#testcasepage th:last-child,#testcasepage td:last-child {
border-right: 1px solid #000;
}
background-color: #FF3333;
}
-#title table, #title tr, #title td {
+#title table,#title tr,#title td {
border-left: none;
border-bottom: none;
text-align: center;
#testcasepage h1 {
font-size: 2em;
- font-family: Arial, sans-serif; font-weight : bold;
+ font-family: Arial, sans-serif;
+ font-weight: bold;
line-height: 1;
color: #000;
margin-bottom: 0.75em;
padding-top: 0.25em;
font-weight: bold;
+}
+
+#goTopBtn {
+ right: 0px;
+ bottom: 0px;
+ position: fixed; +position: absolute;
+ top: expression(parseInt(document.body.scrollTop)+document.body.clientHeight-40);
}
\ No newline at end of file