[pacman-dev] [PATCH 0/7] integrate test suite with automake
This patchset converts the output of all of our tests to TAP [1] and fully integrates them with automake so that tests can be run in parallel with `make check`. The test suite may also be run under other test harnesses, such as Perl's prove, which can do such interesting things as remembering which tests failed and running only those on subsequent invocations. The documentation for integrating tests with automake is here [2].

[1] http://podwiki.hexten.net/TAP/TAP.html?page=TAP
[2] http://www.gnu.org/software/automake/manual/html_node/Parallel-Test-Harness....

Andrew Gregory (7):
  convert test scripts to tap output
  pactest: treat unknown rules as failures
  convert pactest to TAP output
  provide default values for test scripts
  pactest: accept test names without a switch
  integrate tests with automake
  pactest: remove results summary

 .gitignore                         |   2 +
 Makefile.am                        |  36 +-
 build-aux/tap-driver.sh            | 652 +++++++++++++++++++++++++++++++++++++
 configure.ac                       |   1 +
 test/pacman/pactest.py             |  29 +-
 test/pacman/pmdb.py                |   5 +-
 test/pacman/pmenv.py               |  87 +----
 test/pacman/pmrule.py              |  15 +-
 test/pacman/pmtest.py              |  26 +-
 test/pacman/tap.py                 |  64 ++++
 test/pacman/tests/TESTS            | 288 ++++++++++++++++
 test/pacman/util.py                |   4 +-
 test/scripts/human_to_size_test.sh |  29 +-
 test/scripts/parseopts_test.sh     |  30 +-
 test/util/pacsorttest.sh           |  41 +--
 test/util/vercmptest.sh            |  39 +--
 16 files changed, 1136 insertions(+), 212 deletions(-)
 create mode 100755 build-aux/tap-driver.sh
 create mode 100644 test/pacman/tap.py
 create mode 100644 test/pacman/tests/TESTS

-- 
1.8.3.4
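For anyone who hasn't worked with TAP before: a producer just prints a plan line followed by one result line per test, with '#' marking diagnostics. A minimal stream looks like:

    1..3
    ok 1 - basic sort
    not ok 2 - reverse sort
    # expected: 2 1
    ok 3 - version sort

Once the whole series is applied, the converted scripts can also be fed to prove; the exact command lines below are untested sketches, but the '::' argument separator and the --state flags are stock prove features:

    $ prove -v test/util/vercmptest.sh :: "$PWD/src/util/vercmp"
    $ prove --state=failed,save test/util/    # rerun only what failed last time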
Signed-off-by: Andrew Gregory <andrew.gregory.8@gmail.com> --- It would be nice if somebody more familiar with bash than I am could double check the way I handle printing the diff output on failures in pacsorttest.sh test/scripts/human_to_size_test.sh | 24 +++++++++++++++--------- test/scripts/parseopts_test.sh | 25 ++++++++++++++----------- test/util/pacsorttest.sh | 28 +++++++++++++++++----------- test/util/vercmptest.sh | 27 ++++++++++++--------------- 4 files changed, 58 insertions(+), 46 deletions(-) diff --git a/test/scripts/human_to_size_test.sh b/test/scripts/human_to_size_test.sh index dbf1997..678fa87 100755 --- a/test/scripts/human_to_size_test.sh +++ b/test/scripts/human_to_size_test.sh @@ -1,14 +1,16 @@ #!/bin/bash +declare -i testcount=0 fail=0 pass=0 total=15 + # source the library function if [[ -z $1 || ! -f $1 ]]; then - printf "error: path to human_to_size library not provided or does not exist\n" + printf "Bail out! path to human_to_size library not provided or does not exist\n" exit 1 fi . "$1" if ! type -t human_to_size >/dev/null; then - printf 'human_to_size function not found\n' + printf 'Bail out! human_to_size function not found\n' exit 1 fi @@ -20,27 +22,31 @@ parse_hts() { result=$(human_to_size "$1") if [[ $result = "$expected" ]]; then (( ++pass )) + printf "ok %d - %s\n" "$testcount" "$input" else (( ++fail )) - printf '[TEST %3s]: FAIL\n' "$testcount" - printf ' input: %s\n' "$input" - printf ' output: %s\n' "$result" - printf ' expected: %s\n' "$expected" + printf "not ok %d - %s\n" "$testcount" "$input" + printf '# [TEST %3s]: FAIL\n' "$testcount" + printf '# input: %s\n' "$input" + printf '# output: %s\n' "$result" + printf '# expected: %s\n' "$expected" fi } summarize() { if (( !fail )); then - printf 'All %s tests successful\n\n' "$testcount" + printf '# All %s tests successful\n\n' "$testcount" exit 0 else - printf '%s of %s tests failed\n\n' "$fail" "$testcount" + printf '# %s of %s tests failed\n\n' "$fail" "$testcount" exit 1 fi } trap 'summarize' EXIT -printf 'Beginning human_to_size tests\n' +printf '# Beginning human_to_size tests\n' + +echo "1..$total" # parse_hts <input> <expected output> diff --git a/test/scripts/parseopts_test.sh b/test/scripts/parseopts_test.sh index b7e5d08..8df1908 100755 --- a/test/scripts/parseopts_test.sh +++ b/test/scripts/parseopts_test.sh @@ -1,16 +1,16 @@ #!/bin/bash -declare -i testcount=0 pass=0 fail=0 +declare -i testcount=0 pass=0 fail=0 total=25 # source the library function if [[ -z $1 || ! -f $1 ]]; then - printf "error: path to parseopts library not provided or does not exist\n" + printf "Bail out! path to parseopts library not provided or does not exist\n" exit 1 fi . "$1" if ! type -t parseopts >/dev/null; then - printf 'parseopts function not found\n' + printf 'Bail out! 
parseopts function not found\n' exit 1 fi @@ -36,28 +36,31 @@ test_result() { if [[ $result = "$*" ]] && (( tokencount == $# )); then (( ++pass )) + printf 'ok %d - %s\n' "$testcount" "$input" else - printf '[TEST %3s]: FAIL\n' "$testcount" - printf ' input: %s\n' "$input" - printf ' output: %s (%s tokens)\n' "$*" "$#" - printf ' expected: %s (%s tokens)\n' "$result" "$tokencount" - echo + printf 'not ok %d - %s\n' "$testcount" "$input" + printf '# [TEST %3s]: FAIL\n' "$testcount" + printf '# input: %s\n' "$input" + printf '# output: %s (%s tokens)\n' "$*" "$#" + printf '# expected: %s (%s tokens)\n' "$result" "$tokencount" (( ++fail )) fi } summarize() { if (( !fail )); then - printf 'All %s tests successful\n\n' "$testcount" + printf '# All %s tests successful\n\n' "$testcount" exit 0 else - printf '%s of %s tests failed\n\n' "$fail" "$testcount" + printf '# %s of %s tests failed\n\n' "$fail" "$testcount" exit 1 fi } trap 'summarize' EXIT -printf 'Beginning parseopts tests\n' +printf '# Beginning parseopts tests\n' + +echo "1..$total" # usage: parse <expected result> <token count> test-params... # a failed parse will match only the end of options marker '--' diff --git a/test/util/pacsorttest.sh b/test/util/pacsorttest.sh index 9cbf619..0abddc2 100755 --- a/test/util/pacsorttest.sh +++ b/test/util/pacsorttest.sh @@ -2,6 +2,7 @@ # # pacsorttest - a test suite for pacsort # +# Copyright (c) 2013 by Pacman Development Team <pacman-dev@archlinux.org> # Copyright (c) 2011 by Dan McGee <dan@archlinux.org> # # This program is free software; you can redistribute it and/or modify @@ -20,32 +21,39 @@ # default binary if one was not specified as $1 bin='pacsort' # holds counts of tests -total=0 +total=23 +run=0 failure=0 # args: # runtest input expected test_description optional_opts runtest() { # run the test - diff -u <(printf "$1" | $bin $4) <(printf "$2") - if [[ $? -ne 0 ]]; then - echo "FAILURE: $3" + ((run++)) + out=$(diff -u <(printf "$1" | $bin $4) <(printf "$2")) + if [[ $? -eq 0 ]]; then + echo "ok $run - $3" + else ((failure++)) + echo "not ok $run - $3" + while read line; do + echo " # $line" + done <<<"$out" fi - ((total++)) } # use first arg as our binary if specified [[ -n "$1" ]] && bin="$1" if ! type -p "$bin"; then - echo "pacsort binary ($bin) could not be located" - echo + echo "Bail out! pacsort binary ($bin) could not be located" exit 1 fi echo "Running pacsort tests..." 
+echo "1..$total" + # BEGIN TESTS in="1\n2\n3\n4\n" @@ -113,11 +121,9 @@ runtest "$separator" "$separator_reverse" "really long input, sort key, separato #END TESTS if [[ $failure -eq 0 ]]; then - echo "All $total tests successful" - echo + echo "# All $run tests successful" exit 0 fi -echo "$failure of $total tests failed" -echo +echo "# $failure of $run tests failed" exit 1 diff --git a/test/util/vercmptest.sh b/test/util/vercmptest.sh index 04b841f..9297cdc 100755 --- a/test/util/vercmptest.sh +++ b/test/util/vercmptest.sh @@ -20,22 +20,20 @@ # default binary if one was not specified as $1 bin='vercmp' # holds counts of tests -total=0 +total=92 +run=0 failure=0 # args: # pass ver1 ver2 ret expected pass() { - #echo "test: ver1: $1 ver2: $2 ret: $3 expected: $4" - #echo " --> pass" - echo -n + echo "ok $run - ver1: $1 ver2: $2 ret: $3" } # args: # fail ver1 ver2 ret expected fail() { - echo "test: ver1: $1 ver2: $2 ret: $3 expected: $4" - echo " ==> FAILURE" + echo "not ok $run - test: ver1: $1 ver2: $2 ret: $3 expected: $4" ((failure++)) } @@ -43,12 +41,13 @@ fail() { # runtest ver1 ver2 expected runtest() { # run the test + ((run++)) ret=$($bin $1 $2) func='pass' [[ -n $ret && $ret -eq $3 ]] || func='fail' $func $1 $2 $ret $3 - ((total++)) # and run its mirror case just to be sure + ((run++)) reverse=0 [[ $3 -eq 1 ]] && reverse=-1 [[ $3 -eq -1 ]] && reverse=1 @@ -56,19 +55,19 @@ runtest() { func='pass' [[ -n $ret && $ret -eq $reverse ]] || func='fail' $func $2 $1 $ret $reverse - ((total++)) } # use first arg as our binary if specified [[ -n "$1" ]] && bin="$1" if ! type -p "$bin"; then - echo "vercmp binary ($bin) could not be located" - echo + echo "Bail out! vercmp binary ($bin) could not be located" exit 1 fi -echo "Running vercmp tests..." +echo "# Running vercmp tests..." + +echo "1..$total" # BEGIN TESTS @@ -147,11 +146,9 @@ runtest 1:1.1 1.1 1 #END TESTS if [[ $failure -eq 0 ]]; then - echo "All $total tests successful" - echo + echo "# All $run tests successful" exit 0 fi -echo "$failure of $total tests failed" -echo +echo "# $failure of $run tests failed" exit 1 -- 1.8.3.4
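For the record, here is roughly what vercmptest.sh emits after this patch. Each runtest call produces two results (the test plus its mirrored arguments), which is where the even total of 92 comes from; the version strings below are illustrative except for the 1:1.1 pair visible in the hunk context above:

    # Running vercmp tests...
    1..92
    ok 1 - ver1: 1.5.0 ver2: 1.5.0 ret: 0
    ok 2 - ver1: 1.5.0 ver2: 1.5.0 ret: 0
    ...
    ok 91 - ver1: 1:1.1 ver2: 1.1 ret: 1
    ok 92 - ver1: 1.1 ver2: 1:1.1 ret: -1
    # All 92 tests successful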
Tests should only be skipped when they aren't relevant, not when the test itself is bad. Signed-off-by: Andrew Gregory <andrew.gregory.8@gmail.com> --- test/pacman/pmenv.py | 5 ++--- test/pacman/pmtest.py | 4 +--- 2 files changed, 3 insertions(+), 6 deletions(-) diff --git a/test/pacman/pmenv.py b/test/pacman/pmenv.py index 9a88262..5eaa473 100644 --- a/test/pacman/pmenv.py +++ b/test/pacman/pmenv.py @@ -110,9 +110,8 @@ def _printtest(t): else: result = "[FAIL]" print result, - print "%s Rules: OK = %2u FAIL = %2u SKIP = %2u" \ - % (t.testname.ljust(34), success, fail, \ - rules - (success + fail)) + print "%s Rules: OK = %2u FAIL = %2u" \ + % (t.testname.ljust(34), success, fail) if fail != 0: # print test description if test failed print " ", t.description diff --git a/test/pacman/pmtest.py b/test/pacman/pmtest.py index f5a9680..cea584d 100644 --- a/test/pacman/pmtest.py +++ b/test/pacman/pmtest.py @@ -266,11 +266,9 @@ def check(self): if success == 1: msg = " OK " self.result["success"] += 1 - elif success == 0: + else: msg = "FAIL" self.result["fail"] += 1 - else: - msg = "SKIP" print "\t[%s] %s" % (msg, i) # vim: set ts=4 sw=4 et: -- 1.8.3.4
Each test produces a single TAP result with the rules run in a sub-test. This reduces output when run under automake and makes it possible to continue setting expectfailure at the test level rather than per-rule. Signed-off-by: Andrew Gregory <andrew.gregory.8@gmail.com> --- test/pacman/pactest.py | 5 ++-- test/pacman/pmdb.py | 5 ++-- test/pacman/pmenv.py | 65 +++++++++++++++++++++++--------------------------- test/pacman/pmrule.py | 15 ++++++------ test/pacman/pmtest.py | 22 ++++++++--------- test/pacman/tap.py | 64 +++++++++++++++++++++++++++++++++++++++++++++++++ test/pacman/util.py | 4 +++- 7 files changed, 121 insertions(+), 59 deletions(-) create mode 100644 test/pacman/tap.py diff --git a/test/pacman/pactest.py b/test/pacman/pactest.py index 2b1dee6..fe04c2b 100755 --- a/test/pacman/pactest.py +++ b/test/pacman/pactest.py @@ -26,6 +26,7 @@ import tempfile import pmenv +import tap import util __author__ = "Aurelien FORET" @@ -110,7 +111,7 @@ def create_parser(): env.pacman["ldconfig"] = opts.ldconfig if opts.testcases is None or len(opts.testcases) == 0: - print "no tests defined, nothing to do" + tap.bail("no tests defined, nothing to do") os.rmdir(root_path) sys.exit(2) @@ -124,7 +125,7 @@ def create_parser(): if not opts.keeproot: shutil.rmtree(root_path) else: - print "pacman testing root saved: %s" % root_path + tap.diag("pacman testing root saved: %s" % root_path) if env.failed > 0: sys.exit(1) diff --git a/test/pacman/pmdb.py b/test/pacman/pmdb.py index 3e9d305..b7b3522 100644 --- a/test/pacman/pmdb.py +++ b/test/pacman/pmdb.py @@ -23,6 +23,7 @@ import tarfile import pmpkg +import tap import util def _getsection(fd): @@ -105,7 +106,7 @@ def db_read(self, name): # desc filename = os.path.join(path, "desc") if not os.path.isfile(filename): - print "invalid db entry found (desc missing) for pkg", pkgname + tap.bail("invalid db entry found (desc missing) for pkg " + pkgname) return None fd = open(filename, "r") while 1: @@ -160,7 +161,7 @@ def db_read(self, name): # files filename = os.path.join(path, "files") if not os.path.isfile(filename): - print "invalid db entry found (files missing) for pkg", pkgname + tap.bail("invalid db entry found (files missing) for pkg " + pkgname) return None fd = open(filename, "r") while 1: diff --git a/test/pacman/pmenv.py b/test/pacman/pmenv.py index 5eaa473..a3a8f54 100644 --- a/test/pacman/pmenv.py +++ b/test/pacman/pmenv.py @@ -20,6 +20,7 @@ import os import pmtest +import tap class pmenv(object): @@ -58,26 +59,18 @@ def addtest(self, testcase): def run(self): """ """ - + tap.plan(len(self.testcases)) for t in self.testcases: - print "=========="*8 - print "Running '%s'" % t.testname + tap.diag("==========" * 8) + tap.diag("Running '%s'" % t.testname) t.load() - print t.description - print "----------"*8 - t.generate(self.pacman) - t.run(self.pacman) - t.check() - print "==> Test result" - if t.result["fail"] == 0: - print "\tPASS" - else: - print "\tFAIL" - print + tap.diag("==> Checking rules") + tap.todo = t.expectfailure + tap.subtest(lambda: t.check(), t.description) def results(self): """ @@ -109,40 +102,42 @@ def _printtest(t): result = "[PASS]" else: result = "[FAIL]" - print result, - print "%s Rules: OK = %2u FAIL = %2u" \ - % (t.testname.ljust(34), success, fail) + tap.diag("%s %s Rules: OK = %2u FAIL = %2u" \ + % (result, t.testname.ljust(34), success, fail)) if fail != 0: # print test description if test failed - print " ", t.description + tap.diag(" " + t.description) - print "=========="*8 - print "Results" - print 
"----------"*8 - print " Passed:" + tap.diag("==========" * 8) + tap.diag("Results") + tap.diag("----------" * 8) + tap.diag(" Passed:") for test in tpassed: _printtest(test) - print "----------"*8 - print " Expected Failures:" + tap.diag("----------" * 8) + tap.diag(" Expected Failures:") for test in texpectedfail: _printtest(test) - print "----------"*8 - print " Unexpected Passes:" + tap.diag("----------" * 8) + tap.diag(" Unexpected Passes:") for test in tunexpectedpass: _printtest(test) - print "----------"*8 - print " Failed:" + tap.diag("----------" * 8) + tap.diag(" Failed:") for test in tfailed: _printtest(test) - print "----------"*8 + tap.diag("----------" * 8) total = len(self.testcases) - print "Total = %3u" % total + tap.diag("Total = %3u" % total) if total: - print "Pass = %3u (%6.2f%%)" % (self.passed, float(self.passed) * 100 / total) - print "Expected Fail = %3u (%6.2f%%)" % (self.expectedfail, float(self.expectedfail) * 100 / total) - print "Unexpected Pass = %3u (%6.2f%%)" % (self.unexpectedpass, float(self.unexpectedpass) * 100 / total) - print "Fail = %3u (%6.2f%%)" % (self.failed, float(self.failed) * 100 / total) - print "" + tap.diag("Pass = %3u (%6.2f%%)" % (self.passed, + float(self.passed) * 100 / total)) + tap.diag("Expected Fail = %3u (%6.2f%%)" % (self.expectedfail, + float(self.expectedfail) * 100 / total)) + tap.diag("Unexpected Pass = %3u (%6.2f%%)" % (self.unexpectedpass, + float(self.unexpectedpass) * 100 / total)) + tap.diag("Fail = %3u (%6.2f%%)" % (self.failed, + float(self.failed) * 100 / total)) # vim: set ts=4 sw=4 et: diff --git a/test/pacman/pmrule.py b/test/pacman/pmrule.py index 3d38b85..c97a158 100644 --- a/test/pacman/pmrule.py +++ b/test/pacman/pmrule.py @@ -19,6 +19,7 @@ import os import stat +import tap import util class pmrule(object): @@ -57,12 +58,12 @@ def check(self, test): elif case == "OUTPUT": logfile = os.path.join(test.root, util.LOGFILE) if not os.access(logfile, os.F_OK): - print "LOGFILE not found, cannot validate 'OUTPUT' rule" + tap.diag("LOGFILE not found, cannot validate 'OUTPUT' rule") success = 0 elif not util.grep(logfile, key): success = 0 else: - print "PACMAN rule '%s' not found" % case + tap.diag("PACMAN rule '%s' not found" % case) success = -1 elif kind == "PKG": localdb = test.db["local"] @@ -108,7 +109,7 @@ def check(self, test): if not found: success = 0 else: - print "PKG rule '%s' not found" % case + tap.diag("PKG rule '%s' not found" % case) success = -1 elif kind == "FILE": filename = os.path.join(test.root, key) @@ -148,7 +149,7 @@ def check(self, test): if not os.path.isfile("%s.pacsave" % filename): success = 0 else: - print "FILE rule '%s' not found" % case + tap.diag("FILE rule '%s' not found" % case) success = -1 elif kind == "DIR": filename = os.path.join(test.root, key) @@ -156,7 +157,7 @@ def check(self, test): if not os.path.isdir(filename): success = 0 else: - print "DIR rule '%s' not found" % case + tap.diag("DIR rule '%s' not found" % case) success = -1 elif kind == "LINK": filename = os.path.join(test.root, key) @@ -164,7 +165,7 @@ def check(self, test): if not os.path.islink(filename): success = 0 else: - print "LINK rule '%s' not found" % case + tap.diag("LINK rule '%s' not found" % case) success = -1 elif kind == "CACHE": cachedir = os.path.join(test.root, util.PM_CACHEDIR) @@ -174,7 +175,7 @@ def check(self, test): os.path.join(cachedir, pkg.filename())): success = 0 else: - print "Rule kind '%s' not found" % kind + tap.diag("Rule kind '%s' not found" % kind) success = -1 if self.false and 
success != -1: diff --git a/test/pacman/pmtest.py b/test/pacman/pmtest.py index cea584d..b343d55 100644 --- a/test/pacman/pmtest.py +++ b/test/pacman/pmtest.py @@ -27,6 +27,7 @@ import pmrule import pmdb import pmfile +import tap import util from util import vprint @@ -104,7 +105,7 @@ def load(self): raise IOError("file %s does not exist!" % self.name) def generate(self, pacman): - print "==> Generating test environment" + tap.diag("==> Generating test environment") # Cleanup leftover files from a previous test session if os.path.isdir(self.root): @@ -192,23 +193,23 @@ def generate(self, pacman): def run(self, pacman): if os.path.isfile(util.PM_LOCK): - print "\tERROR: another pacman session is on-going -- skipping" + tap.bail("\tERROR: another pacman session is on-going -- skipping") return - print "==> Running test" + tap.diag("==> Running test") vprint("\tpacman %s" % self.args) cmd = [] if os.geteuid() != 0: fakeroot = util.which("fakeroot") if not fakeroot: - print "WARNING: fakeroot not found!" + tap.diag("WARNING: fakeroot not found!") else: cmd.append("fakeroot") fakechroot = util.which("fakechroot") if not fakechroot: - print "WARNING: fakechroot not found!" + tap.diag("WARNING: fakechroot not found!") else: cmd.append("fakechroot") @@ -252,23 +253,20 @@ def run(self, pacman): # Check if the lock is still there if os.path.isfile(util.PM_LOCK): - print "\tERROR: %s not removed" % util.PM_LOCK + tap.diag("\tERROR: %s not removed" % util.PM_LOCK) os.unlink(util.PM_LOCK) # Look for a core file if os.path.isfile(os.path.join(self.root, util.TMPDIR, "core")): - print "\tERROR: pacman dumped a core file" + tap.diag("\tERROR: pacman dumped a core file") def check(self): - print "==> Checking rules" - + tap.plan(len(self.rules)) for i in self.rules: success = i.check(self) if success == 1: - msg = " OK " self.result["success"] += 1 else: - msg = "FAIL" self.result["fail"] += 1 - print "\t[%s] %s" % (msg, i) + tap.ok(success, i) # vim: set ts=4 sw=4 et: diff --git a/test/pacman/tap.py b/test/pacman/tap.py new file mode 100644 index 0000000..c70a535 --- /dev/null +++ b/test/pacman/tap.py @@ -0,0 +1,64 @@ +# Copyright (c) 2013 Pacman Development Team <pacman-dev@archlinux.org> +# +# This program is free software; you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation; either version 2 of the License, or +# (at your option) any later version. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program. If not, see <http://www.gnu.org/licenses/>. + +todo = None +count = 0 +level = 0 +failed = 0 + +def _output(msg): + print("%s%s" % (" "*level, msg)) + +def ok(ok, description=""): + global count, failed + count += 1 + if not ok: + failed += 1 + directive = " # TODO" if todo else "" + _output("%s %d - %s%s" % ("ok" if ok else "not ok", count, + description, directive)) + +def plan(count): + _output("1..%d" % (count)) + +def diag(msg): + _output("# %s" % (msg)) + +def bail(reason=""): + _output("Bail out! 
%s" % (reason)) + +def subtest(func, description=""): + global todo, count, level, failed + + save_todo = todo + save_count = count + save_level = level + save_failed = failed + + todo = None + count = 0 + level += 1 + failed = 0 + + func() + + subtest_ok = not failed + + todo = save_todo + count = save_count + level = save_level + failed = save_failed + + ok(subtest_ok, description) diff --git a/test/pacman/util.py b/test/pacman/util.py index 65540ed..14035d7 100644 --- a/test/pacman/util.py +++ b/test/pacman/util.py @@ -21,6 +21,8 @@ import re import hashlib +import tap + SELFPATH = os.path.abspath(os.path.dirname(__file__)) # ALPM @@ -43,7 +45,7 @@ def vprint(msg): if verbose: - print msg + tap.diag(msg) # # Methods to generate files -- 1.8.3.4
Our test scripts currently require that the first argument be the library or binary to be tested. This makes integrating them with automake difficult, as automake doesn't have a mechanism for passing specific arguments to individual tests. Instead, build a default from paths in the environment, which automake can supply to all test scripts at once. Signed-off-by: Andrew Gregory <andrew.gregory.8@gmail.com> --- test/scripts/human_to_size_test.sh | 7 ++++--- test/scripts/parseopts_test.sh | 7 ++++--- test/util/pacsorttest.sh | 15 ++++++--------- test/util/vercmptest.sh | 16 +++++++--------- 4 files changed, 21 insertions(+), 24 deletions(-) diff --git a/test/scripts/human_to_size_test.sh b/test/scripts/human_to_size_test.sh index 678fa87..6306137 100755 --- a/test/scripts/human_to_size_test.sh +++ b/test/scripts/human_to_size_test.sh @@ -3,11 +3,12 @@ declare -i testcount=0 fail=0 pass=0 total=15 # source the library function -if [[ -z $1 || ! -f $1 ]]; then - printf "Bail out! path to human_to_size library not provided or does not exist\n" +lib=${1:-${PMTEST_SCRIPTLIB_DIR}human_to_size.sh} +if [[ -z $lib || ! -f $lib ]]; then + printf "Bail out! human_to_size library ($lib) could not be located\n" exit 1 fi -. "$1" +. "$lib" if ! type -t human_to_size >/dev/null; then printf 'Bail out! human_to_size function not found\n' diff --git a/test/scripts/parseopts_test.sh b/test/scripts/parseopts_test.sh index 8df1908..5ff4bc5 100755 --- a/test/scripts/parseopts_test.sh +++ b/test/scripts/parseopts_test.sh @@ -3,11 +3,12 @@ declare -i testcount=0 pass=0 fail=0 total=25 # source the library function -if [[ -z $1 || ! -f $1 ]]; then - printf "Bail out! path to parseopts library not provided or does not exist\n" +lib=${1:-${PMTEST_SCRIPTLIB_DIR}parseopts.sh} +if [[ -z $lib || ! -f $lib ]]; then + printf "Bail out! parseopts library ($lib) could not be located\n" exit 1 fi -. "$1" +. "$lib" if ! type -t parseopts >/dev/null; then printf 'Bail out! parseopts function not found\n' diff --git a/test/util/pacsorttest.sh b/test/util/pacsorttest.sh index 0abddc2..ac16c45 100755 --- a/test/util/pacsorttest.sh +++ b/test/util/pacsorttest.sh @@ -19,12 +19,17 @@ # along with this program. If not, see <http://www.gnu.org/licenses/>. # default binary if one was not specified as $1 -bin='pacsort' +bin=${1:-${PMTEST_UTIL_DIR}pacsort} # holds counts of tests total=23 run=0 failure=0 +if ! type -p "$bin"; then + echo "Bail out! pacsort binary ($bin) could not be located" + exit 1 +fi + # args: # runtest input expected test_description optional_opts runtest() { @@ -42,14 +47,6 @@ runtest() { fi } -# use first arg as our binary if specified -[[ -n "$1" ]] && bin="$1" - -if ! type -p "$bin"; then - echo "Bail out! pacsort binary ($bin) could not be located" - exit 1 -fi - echo "Running pacsort tests..." echo "1..$total" diff --git a/test/util/vercmptest.sh b/test/util/vercmptest.sh index 9297cdc..a7fd851 100755 --- a/test/util/vercmptest.sh +++ b/test/util/vercmptest.sh @@ -18,12 +18,18 @@ # along with this program. If not, see <http://www.gnu.org/licenses/>. # default binary if one was not specified as $1 -bin='vercmp' +bin=${1:-${PMTEST_UTIL_DIR}vercmp} # holds counts of tests total=92 run=0 failure=0 +# bail early if the binary to test cannot be located +if ! type -p "$bin"; then + echo "Bail out! vercmp binary ($bin) could not be located" + exit 1 +fi + # args: # pass ver1 ver2 ret expected pass() { @@ -57,14 +63,6 @@ runtest() { $func $2 $1 $ret $reverse } -# use first arg as our binary if specified -[[ -n "$1" ]] && bin="$1" - -if ! 
type -p "$bin"; then - echo "Bail out! vercmp binary ($bin) could not be located" - exit 1 -fi - echo "# Running vercmp tests..." echo "1..$total" -- 1.8.3.4
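With the defaults in place the scripts can now be run by hand with no arguments at all. Note that the default expands by direct concatenation (${PMTEST_UTIL_DIR}vercmp), so the variables must keep their trailing slash, which is why the automake values in the next patches end in '/'. For example, from the top of the build tree:

    $ PMTEST_UTIL_DIR="$PWD/src/util/" bash test/util/vercmptest.sh
    $ bash test/util/vercmptest.sh src/util/vercmp    # explicit first argument still wins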
This removes the --test switch, making it easier to call pactest from a test harness. Signed-off-by: Andrew Gregory <andrew.gregory.8@gmail.com> --- Makefile.am | 4 ++-- test/pacman/pactest.py | 21 ++++----------------- 2 files changed, 6 insertions(+), 19 deletions(-) diff --git a/Makefile.am b/Makefile.am index 28f7f8f..1adf0f8 100644 --- a/Makefile.am +++ b/Makefile.am @@ -28,10 +28,10 @@ check-local: test-pacman test-pacsort test-vercmp test-parseopts test-pacman: test/pacman src/pacman $(PYTHON) $(top_srcdir)/test/pacman/pactest.py --debug=1 \ - --test $(top_srcdir)/test/pacman/tests/*.py \ --scriptlet-shell $(SCRIPTLET_SHELL) \ --ldconfig $(LDCONFIG) \ - -p $(top_builddir)/src/pacman/pacman + -p $(top_builddir)/src/pacman/pacman \ + $(top_srcdir)/test/pacman/tests/*.py test-pacsort: test/util src/util $(BASH_SHELL) $(top_srcdir)/test/util/pacsorttest.sh \ diff --git a/test/pacman/pactest.py b/test/pacman/pactest.py index fe04c2b..e92864d 100755 --- a/test/pacman/pactest.py +++ b/test/pacman/pactest.py @@ -35,21 +35,8 @@ def resolve_binary_path(option, opt_str, value, parser): setattr(parser.values, option.dest, os.path.abspath(value)) -def glob_tests(option, opt_str, value, parser): - idx = 0 - globlist = [] - - # maintain the idx so we can modify rargs - while idx < len(parser.rargs) and \ - not parser.rargs[idx].startswith('-'): - globlist += glob.glob(parser.rargs[idx]) - idx += 1 - - parser.rargs = parser.rargs[idx:] - setattr(parser.values, option.dest, globlist) - def create_parser(): - usage = "usage: %prog [options] [[--test <path/to/testfile.py>] ...]" + usage = "usage: %prog [options] <path/to/testfile.py>..." description = "Runs automated tests on the pacman binary. Tests are " \ "described using an easy python syntax, and several can be " \ "ran at once." @@ -65,9 +52,6 @@ def create_parser(): callback = resolve_binary_path, type = "string", dest = "bin", default = "pacman", help = "specify location of the pacman binary") - parser.add_option("-t", "--test", action = "callback", - callback = glob_tests, dest = "testcases", - help = "specify test case(s)") parser.add_option("--keep-root", action = "store_true", dest = "keeproot", default = False, help = "don't remove the generated pacman root filesystem") @@ -110,6 +94,9 @@ def create_parser(): env.pacman["scriptlet-shell"] = opts.scriptletshell env.pacman["ldconfig"] = opts.ldconfig + opts.testcases = [] + for path in args: + opts.testcases += glob.glob(path) if opts.testcases is None or len(opts.testcases) == 0: tap.bail("no tests defined, nothing to do") os.rmdir(root_path) -- 1.8.3.4
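In other words, test files become ordinary positional arguments. Since pactest still runs glob.glob() over each argument itself, patterns survive even when quoted away from the shell. Roughly:

    $ test/pacman/pactest.py -p src/pacman/pacman --test test/pacman/tests/sync*.py   # before
    $ test/pacman/pactest.py -p src/pacman/pacman test/pacman/tests/sync*.py          # after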
Signed-off-by: Andrew Gregory <andrew.gregory.8@gmail.com> --- Perhaps somebody more familiar with autotools knows a way to avoid having to list all of the tests manually. .gitignore | 2 + Makefile.am | 38 ++- build-aux/tap-driver.sh | 652 ++++++++++++++++++++++++++++++++++++++++++++++++ configure.ac | 1 + test/pacman/tests/TESTS | 288 +++++++++++++++++++++ 5 files changed, 960 insertions(+), 21 deletions(-) create mode 100755 build-aux/tap-driver.sh create mode 100644 test/pacman/tests/TESTS diff --git a/.gitignore b/.gitignore index cc28d71..f565b43 100644 --- a/.gitignore +++ b/.gitignore @@ -20,3 +20,5 @@ pacman-*.tar.gz root stamp-h1 tags +*.log +*.trs diff --git a/Makefile.am b/Makefile.am index 1adf0f8..77bc06d 100644 --- a/Makefile.am +++ b/Makefile.am @@ -23,29 +23,25 @@ dist_pkgdata_DATA = \ proto/proto.install \ proto/ChangeLog.proto -# run the pactest test suite and vercmp tests -check-local: test-pacman test-pacsort test-vercmp test-parseopts - -test-pacman: test/pacman src/pacman - $(PYTHON) $(top_srcdir)/test/pacman/pactest.py --debug=1 \ +TESTS = test/scripts/parseopts_test.sh \ + test/scripts/human_to_size_test.sh \ + test/util/pacsorttest.sh \ + test/util/vercmptest.sh +include $(top_srcdir)/test/pacman/tests/TESTS + +TEST_EXTENSIONS = .py +AM_TESTS_ENVIRONMENT = \ + PMTEST_UTIL_DIR=$(top_srcdir)/src/util/; export PMTEST_UTIL_DIR; \ + PMTEST_SCRIPTLIB_DIR=$(top_srcdir)/scripts/library/; export PMTEST_SCRIPTLIB_DIR; +TEST_LOG_DRIVER = env AM_TAP_AWK='$(AWK)' $(SHELL) \ + $(top_srcdir)/build-aux/tap-driver.sh +PY_LOG_DRIVER = env AM_TAP_AWK='$(AWK)' $(SHELL) \ + $(top_srcdir)/build-aux/tap-driver.sh +PY_LOG_COMPILER = test/pacman/pactest.py +AM_PY_LOG_FLAGS = \ --scriptlet-shell $(SCRIPTLET_SHELL) \ --ldconfig $(LDCONFIG) \ - -p $(top_builddir)/src/pacman/pacman \ - $(top_srcdir)/test/pacman/tests/*.py - -test-pacsort: test/util src/util - $(BASH_SHELL) $(top_srcdir)/test/util/pacsorttest.sh \ - $(top_builddir)/src/util/pacsort - -test-vercmp: test/util src/util - $(BASH_SHELL) $(top_srcdir)/test/util/vercmptest.sh \ - $(top_builddir)/src/util/vercmp - -test-parseopts: test/scripts scripts - $(BASH_SHELL) $(top_srcdir)/test/scripts/parseopts_test.sh \ - $(top_srcdir)/scripts/library/parseopts.sh - $(BASH_SHELL) $(top_srcdir)/test/scripts/human_to_size_test.sh \ - $(top_srcdir)/scripts/library/human_to_size.sh + -p $(top_builddir)/src/pacman/pacman # create the pacman DB and cache directories upon install install-data-local: diff --git a/build-aux/tap-driver.sh b/build-aux/tap-driver.sh new file mode 100755 index 0000000..19aa531 --- /dev/null +++ b/build-aux/tap-driver.sh @@ -0,0 +1,652 @@ +#! /bin/sh +# Copyright (C) 2011-2013 Free Software Foundation, Inc. +# +# This program is free software; you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation; either version 2, or (at your option) +# any later version. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program. If not, see <http://www.gnu.org/licenses/>. 
+ +# As a special exception to the GNU General Public License, if you +# distribute this file as part of a program that contains a +# configuration script generated by Autoconf, you may include it under +# the same distribution terms that you use for the rest of that program. + +# This file is maintained in Automake, please report +# bugs to <bug-automake@gnu.org> or send patches to +# <automake-patches@gnu.org>. + +scriptversion=2011-12-27.17; # UTC + +# Make unconditional expansion of undefined variables an error. This +# helps a lot in preventing typo-related bugs. +set -u + +me=tap-driver.sh + +fatal () +{ + echo "$me: fatal: $*" >&2 + exit 1 +} + +usage_error () +{ + echo "$me: $*" >&2 + print_usage >&2 + exit 2 +} + +print_usage () +{ + cat <<END +Usage: + tap-driver.sh --test-name=NAME --log-file=PATH --trs-file=PATH + [--expect-failure={yes|no}] [--color-tests={yes|no}] + [--enable-hard-errors={yes|no}] [--ignore-exit] + [--diagnostic-string=STRING] [--merge|--no-merge] + [--comments|--no-comments] [--] TEST-COMMAND +The \`--test-name', \`--log-file' and \`--trs-file' options are mandatory. +END +} + +# TODO: better error handling in option parsing (in particular, ensure +# TODO: $log_file, $trs_file and $test_name are defined). +test_name= # Used for reporting. +log_file= # Where to save the result and output of the test script. +trs_file= # Where to save the metadata of the test run. +expect_failure=0 +color_tests=0 +merge=0 +ignore_exit=0 +comments=0 +diag_string='#' +while test $# -gt 0; do + case $1 in + --help) print_usage; exit $?;; + --version) echo "$me $scriptversion"; exit $?;; + --test-name) test_name=$2; shift;; + --log-file) log_file=$2; shift;; + --trs-file) trs_file=$2; shift;; + --color-tests) color_tests=$2; shift;; + --expect-failure) expect_failure=$2; shift;; + --enable-hard-errors) shift;; # No-op. + --merge) merge=1;; + --no-merge) merge=0;; + --ignore-exit) ignore_exit=1;; + --comments) comments=1;; + --no-comments) comments=0;; + --diagnostic-string) diag_string=$2; shift;; + --) shift; break;; + -*) usage_error "invalid option: '$1'";; + esac + shift +done + +test $# -gt 0 || usage_error "missing test command" + +case $expect_failure in + yes) expect_failure=1;; + *) expect_failure=0;; +esac + +if test $color_tests = yes; then + init_colors=' + color_map["red"]="[0;31m" # Red. + color_map["grn"]="[0;32m" # Green. + color_map["lgn"]="[1;32m" # Light green. + color_map["blu"]="[1;34m" # Blue. + color_map["mgn"]="[0;35m" # Magenta. + color_map["std"]="[m" # No color. + color_for_result["ERROR"] = "mgn" + color_for_result["PASS"] = "grn" + color_for_result["XPASS"] = "red" + color_for_result["FAIL"] = "red" + color_for_result["XFAIL"] = "lgn" + color_for_result["SKIP"] = "blu"' +else + init_colors='' +fi + +# :; is there to work around a bug in bash 3.2 (and earlier) which +# does not always set '$?' properly on redirection failure. +# See the Autoconf manual for more details. +:;{ + ( + # Ignore common signals (in this subshell only!), to avoid potential + # problems with Korn shells. Some Korn shells are known to propagate + # to themselves signals that have killed a child process they were + # waiting for; this is done at least for SIGINT (and usually only for + # it, in truth). Without the `trap' below, such a behaviour could + # cause a premature exit in the current subshell, e.g., in case the + # test command it runs gets terminated by a SIGINT. 
Thus, the awk + # script we are piping into would never seen the exit status it + # expects on its last input line (which is displayed below by the + # last `echo $?' statement), and would thus die reporting an internal + # error. + # For more information, see the Autoconf manual and the threads: + # <http://lists.gnu.org/archive/html/bug-autoconf/2011-09/msg00004.html> + # <http://mail.opensolaris.org/pipermail/ksh93-integration-discuss/2009-February/004121.html> + trap : 1 3 2 13 15 + if test $merge -gt 0; then + exec 2>&1 + else + exec 2>&3 + fi + "$@" + echo $? + ) | LC_ALL=C ${AM_TAP_AWK-awk} \ + -v me="$me" \ + -v test_script_name="$test_name" \ + -v log_file="$log_file" \ + -v trs_file="$trs_file" \ + -v expect_failure="$expect_failure" \ + -v merge="$merge" \ + -v ignore_exit="$ignore_exit" \ + -v comments="$comments" \ + -v diag_string="$diag_string" \ +' +# FIXME: the usages of "cat >&3" below could be optimized when using +# FIXME: GNU awk, and/on on systems that supports /dev/fd/. + +# Implementation note: in what follows, `result_obj` will be an +# associative array that (partly) simulates a TAP result object +# from the `TAP::Parser` perl module. + +## ----------- ## +## FUNCTIONS ## +## ----------- ## + +function fatal(msg) +{ + print me ": " msg | "cat >&2" + exit 1 +} + +function abort(where) +{ + fatal("internal error " where) +} + +# Convert a boolean to a "yes"/"no" string. +function yn(bool) +{ + return bool ? "yes" : "no"; +} + +function add_test_result(result) +{ + if (!test_results_index) + test_results_index = 0 + test_results_list[test_results_index] = result + test_results_index += 1 + test_results_seen[result] = 1; +} + +# Whether the test script should be re-run by "make recheck". +function must_recheck() +{ + for (k in test_results_seen) + if (k != "XFAIL" && k != "PASS" && k != "SKIP") + return 1 + return 0 +} + +# Whether the content of the log file associated to this test should +# be copied into the "global" test-suite.log. +function copy_in_global_log() +{ + for (k in test_results_seen) + if (k != "PASS") + return 1 + return 0 +} + +# FIXME: this can certainly be improved ... +function get_global_test_result() +{ + if ("ERROR" in test_results_seen) + return "ERROR" + if ("FAIL" in test_results_seen || "XPASS" in test_results_seen) + return "FAIL" + all_skipped = 1 + for (k in test_results_seen) + if (k != "SKIP") + all_skipped = 0 + if (all_skipped) + return "SKIP" + return "PASS"; +} + +function stringify_result_obj(result_obj) +{ + if (result_obj["is_unplanned"] || result_obj["number"] != testno) + return "ERROR" + + if (plan_seen == LATE_PLAN) + return "ERROR" + + if (result_obj["directive"] == "TODO") + return result_obj["is_ok"] ? "XPASS" : "XFAIL" + + if (result_obj["directive"] == "SKIP") + return result_obj["is_ok"] ? "SKIP" : COOKED_FAIL; + + if (length(result_obj["directive"])) + abort("in function stringify_result_obj()") + + return result_obj["is_ok"] ? COOKED_PASS : COOKED_FAIL +} + +function decorate_result(result) +{ + color_name = color_for_result[result] + if (color_name) + return color_map[color_name] "" result "" color_map["std"] + # If we are not using colorized output, or if we do not know how + # to colorize the given result, we should return it unchanged. 
+ return result +} + +function report(result, details) +{ + if (result ~ /^(X?(PASS|FAIL)|SKIP|ERROR)/) + { + msg = ": " test_script_name + add_test_result(result) + } + else if (result == "#") + { + msg = " " test_script_name ":" + } + else + { + abort("in function report()") + } + if (length(details)) + msg = msg " " details + # Output on console might be colorized. + print decorate_result(result) msg + # Log the result in the log file too, to help debugging (this is + # especially true when said result is a TAP error or "Bail out!"). + print result msg | "cat >&3"; +} + +function testsuite_error(error_message) +{ + report("ERROR", "- " error_message) +} + +function handle_tap_result() +{ + details = result_obj["number"]; + if (length(result_obj["description"])) + details = details " " result_obj["description"] + + if (plan_seen == LATE_PLAN) + { + details = details " # AFTER LATE PLAN"; + } + else if (result_obj["is_unplanned"]) + { + details = details " # UNPLANNED"; + } + else if (result_obj["number"] != testno) + { + details = sprintf("%s # OUT-OF-ORDER (expecting %d)", + details, testno); + } + else if (result_obj["directive"]) + { + details = details " # " result_obj["directive"]; + if (length(result_obj["explanation"])) + details = details " " result_obj["explanation"] + } + + report(stringify_result_obj(result_obj), details) +} + +# `skip_reason` should be empty whenever planned > 0. +function handle_tap_plan(planned, skip_reason) +{ + planned += 0 # Avoid getting confused if, say, `planned` is "00" + if (length(skip_reason) && planned > 0) + abort("in function handle_tap_plan()") + if (plan_seen) + { + # Error, only one plan per stream is acceptable. + testsuite_error("multiple test plans") + return; + } + planned_tests = planned + # The TAP plan can come before or after *all* the TAP results; we speak + # respectively of an "early" or a "late" plan. If we see the plan line + # after at least one TAP result has been seen, assume we have a late + # plan; in this case, any further test result seen after the plan will + # be flagged as an error. + plan_seen = (testno >= 1 ? LATE_PLAN : EARLY_PLAN) + # If testno > 0, we have an error ("too many tests run") that will be + # automatically dealt with later, so do not worry about it here. If + # $plan_seen is true, we have an error due to a repeated plan, and that + # has already been dealt with above. Otherwise, we have a valid "plan + # with SKIP" specification, and should report it as a particular kind + # of SKIP result. + if (planned == 0 && testno == 0) + { + if (length(skip_reason)) + skip_reason = "- " skip_reason; + report("SKIP", skip_reason); + } +} + +function extract_tap_comment(line) +{ + if (index(line, diag_string) == 1) + { + # Strip leading `diag_string` from `line`. + line = substr(line, length(diag_string) + 1) + # And strip any leading and trailing whitespace left. + sub("^[ \t]*", "", line) + sub("[ \t]*$", "", line) + # Return what is left (if any). + return line; + } + return ""; +} + +# When this function is called, we know that line is a TAP result line, +# so that it matches the (perl) RE "^(not )?ok\b". +function setup_result_obj(line) +{ + # Get the result, and remove it from the line. + result_obj["is_ok"] = (substr(line, 1, 2) == "ok" ? 1 : 0) + sub("^(not )?ok[ \t]*", "", line) + + # If the result has an explicit number, get it and strip it; otherwise, + # automatically assing the next progresive number to it. 
+ if (line ~ /^[0-9]+$/ || line ~ /^[0-9]+[^a-zA-Z0-9_]/) + { + match(line, "^[0-9]+") + # The final `+ 0` is to normalize numbers with leading zeros. + result_obj["number"] = substr(line, 1, RLENGTH) + 0 + line = substr(line, RLENGTH + 1) + } + else + { + result_obj["number"] = testno + } + + if (plan_seen == LATE_PLAN) + # No further test results are acceptable after a "late" TAP plan + # has been seen. + result_obj["is_unplanned"] = 1 + else if (plan_seen && testno > planned_tests) + result_obj["is_unplanned"] = 1 + else + result_obj["is_unplanned"] = 0 + + # Strip trailing and leading whitespace. + sub("^[ \t]*", "", line) + sub("[ \t]*$", "", line) + + # This will have to be corrected if we have a "TODO"/"SKIP" directive. + result_obj["description"] = line + result_obj["directive"] = "" + result_obj["explanation"] = "" + + if (index(line, "#") == 0) + return # No possible directive, nothing more to do. + + # Directives are case-insensitive. + rx = "[ \t]*#[ \t]*([tT][oO][dD][oO]|[sS][kK][iI][pP])[ \t]*" + + # See whether we have the directive, and if yes, where. + pos = match(line, rx "$") + if (!pos) + pos = match(line, rx "[^a-zA-Z0-9_]") + + # If there was no TAP directive, we have nothing more to do. + if (!pos) + return + + # Let`s now see if the TAP directive has been escaped. For example: + # escaped: ok \# SKIP + # not escaped: ok \\# SKIP + # escaped: ok \\\\\# SKIP + # not escaped: ok \ # SKIP + if (substr(line, pos, 1) == "#") + { + bslash_count = 0 + for (i = pos; i > 1 && substr(line, i - 1, 1) == "\\"; i--) + bslash_count += 1 + if (bslash_count % 2) + return # Directive was escaped. + } + + # Strip the directive and its explanation (if any) from the test + # description. + result_obj["description"] = substr(line, 1, pos - 1) + # Now remove the test description from the line, that has been dealt + # with already. + line = substr(line, pos) + # Strip the directive, and save its value (normalized to upper case). + sub("^[ \t]*#[ \t]*", "", line) + result_obj["directive"] = toupper(substr(line, 1, 4)) + line = substr(line, 5) + # Now get the explanation for the directive (if any), with leading + # and trailing whitespace removed. + sub("^[ \t]*", "", line) + sub("[ \t]*$", "", line) + result_obj["explanation"] = line +} + +function get_test_exit_message(status) +{ + if (status == 0) + return "" + if (status !~ /^[1-9][0-9]*$/) + abort("getting exit status") + if (status < 127) + exit_details = "" + else if (status == 127) + exit_details = " (command not found?)" + else if (status >= 128 && status <= 255) + exit_details = sprintf(" (terminated by signal %d?)", status - 128) + else if (status > 256 && status <= 384) + # We used to report an "abnormal termination" here, but some Korn + # shells, when a child process die due to signal number n, can leave + # in $? an exit status of 256+n instead of the more standard 128+n. + # Apparently, both behaviours are allowed by POSIX (2008), so be + # prepared to handle them both. See also Austing Group report ID + # 0000051 <http://www.austingroupbugs.net/view.php?id=51> + exit_details = sprintf(" (terminated by signal %d?)", status - 256) + else + # Never seen in practice. 
+ exit_details = " (abnormal termination)" + return sprintf("exited with status %d%s", status, exit_details) +} + +function write_test_results() +{ + print ":global-test-result: " get_global_test_result() > trs_file + print ":recheck: " yn(must_recheck()) > trs_file + print ":copy-in-global-log: " yn(copy_in_global_log()) > trs_file + for (i = 0; i < test_results_index; i += 1) + print ":test-result: " test_results_list[i] > trs_file + close(trs_file); +} + +BEGIN { + +## ------- ## +## SETUP ## +## ------- ## + +'"$init_colors"' + +# Properly initialized once the TAP plan is seen. +planned_tests = 0 + +COOKED_PASS = expect_failure ? "XPASS": "PASS"; +COOKED_FAIL = expect_failure ? "XFAIL": "FAIL"; + +# Enumeration-like constants to remember which kind of plan (if any) +# has been seen. It is important that NO_PLAN evaluates "false" as +# a boolean. +NO_PLAN = 0 +EARLY_PLAN = 1 +LATE_PLAN = 2 + +testno = 0 # Number of test results seen so far. +bailed_out = 0 # Whether a "Bail out!" directive has been seen. + +# Whether the TAP plan has been seen or not, and if yes, which kind +# it is ("early" is seen before any test result, "late" otherwise). +plan_seen = NO_PLAN + +## --------- ## +## PARSING ## +## --------- ## + +is_first_read = 1 + +while (1) + { + # Involutions required so that we are able to read the exit status + # from the last input line. + st = getline + if (st < 0) # I/O error. + fatal("I/O error while reading from input stream") + else if (st == 0) # End-of-input + { + if (is_first_read) + abort("in input loop: only one input line") + break + } + if (is_first_read) + { + is_first_read = 0 + nextline = $0 + continue + } + else + { + curline = nextline + nextline = $0 + $0 = curline + } + # Copy any input line verbatim into the log file. + print | "cat >&3" + # Parsing of TAP input should stop after a "Bail out!" directive. + if (bailed_out) + continue + + # TAP test result. + if ($0 ~ /^(not )?ok$/ || $0 ~ /^(not )?ok[^a-zA-Z0-9_]/) + { + testno += 1 + setup_result_obj($0) + handle_tap_result() + } + # TAP plan (normal or "SKIP" without explanation). + else if ($0 ~ /^1\.\.[0-9]+[ \t]*$/) + { + # The next two lines will put the number of planned tests in $0. + sub("^1\\.\\.", "") + sub("[^0-9]*$", "") + handle_tap_plan($0, "") + continue + } + # TAP "SKIP" plan, with an explanation. + else if ($0 ~ /^1\.\.0+[ \t]*#/) + { + # The next lines will put the skip explanation in $0, stripping + # any leading and trailing whitespace. This is a little more + # tricky in truth, since we want to also strip a potential leading + # "SKIP" string from the message. + sub("^[^#]*#[ \t]*(SKIP[: \t][ \t]*)?", "") + sub("[ \t]*$", ""); + handle_tap_plan(0, $0) + } + # "Bail out!" magic. + # Older versions of prove and TAP::Harness (e.g., 3.17) did not + # recognize a "Bail out!" directive when preceded by leading + # whitespace, but more modern versions (e.g., 3.23) do. So we + # emulate the latter, "more modern" behaviour. + else if ($0 ~ /^[ \t]*Bail out!/) + { + bailed_out = 1 + # Get the bailout message (if any), with leading and trailing + # whitespace stripped. The message remains stored in `$0`. + sub("^[ \t]*Bail out![ \t]*", ""); + sub("[ \t]*$", ""); + # Format the error message for the + bailout_message = "Bail out!" + if (length($0)) + bailout_message = bailout_message " " $0 + testsuite_error(bailout_message) + } + # Maybe we have too look for dianogtic comments too. 
+ else if (comments != 0) + { + comment = extract_tap_comment($0); + if (length(comment)) + report("#", comment); + } + } + +## -------- ## +## FINISH ## +## -------- ## + +# A "Bail out!" directive should cause us to ignore any following TAP +# error, as well as a non-zero exit status from the TAP producer. +if (!bailed_out) + { + if (!plan_seen) + { + testsuite_error("missing test plan") + } + else if (planned_tests != testno) + { + bad_amount = testno > planned_tests ? "many" : "few" + testsuite_error(sprintf("too %s tests run (expected %d, got %d)", + bad_amount, planned_tests, testno)) + } + if (!ignore_exit) + { + # Fetch exit status from the last line. + exit_message = get_test_exit_message(nextline) + if (exit_message) + testsuite_error(exit_message) + } + } + +write_test_results() + +exit 0 + +} # End of "BEGIN" block. +' + +# TODO: document that we consume the file descriptor 3 :-( +} 3>"$log_file" + +test $? -eq 0 || fatal "I/O or internal error" + +# Local Variables: +# mode: shell-script +# sh-indentation: 2 +# eval: (add-hook 'write-file-hooks 'time-stamp) +# time-stamp-start: "scriptversion=" +# time-stamp-format: "%:y-%02m-%02d.%02H" +# time-stamp-time-zone: "UTC" +# time-stamp-end: "; # UTC" +# End: diff --git a/configure.ac b/configure.ac index 4e6e2a9..42416a4 100644 --- a/configure.ac +++ b/configure.ac @@ -57,6 +57,7 @@ AC_CONFIG_SRCDIR([config.h.in]) AC_CONFIG_HEADERS([config.h]) AC_CONFIG_MACRO_DIR([m4]) AC_CONFIG_AUX_DIR([build-aux]) +AC_REQUIRE_AUX_FILE([tap-driver.sh]) AC_CANONICAL_HOST AM_INIT_AUTOMAKE([1.11 foreign]) diff --git a/test/pacman/tests/TESTS b/test/pacman/tests/TESTS new file mode 100644 index 0000000..2b47244 --- /dev/null +++ b/test/pacman/tests/TESTS @@ -0,0 +1,288 @@ +TESTS += \ +test/pacman/tests/clean001.py \ +test/pacman/tests/clean002.py \ +test/pacman/tests/clean003.py \ +test/pacman/tests/clean004.py \ +test/pacman/tests/clean005.py \ +test/pacman/tests/config001.py \ +test/pacman/tests/config002.py \ +test/pacman/tests/database001.py \ +test/pacman/tests/database002.py \ +test/pacman/tests/database010.py \ +test/pacman/tests/database011.py \ +test/pacman/tests/database012.py \ +test/pacman/tests/depconflict100.py \ +test/pacman/tests/depconflict110.py \ +test/pacman/tests/depconflict111.py \ +test/pacman/tests/depconflict120.py \ +test/pacman/tests/deptest001.py \ +test/pacman/tests/dummy001.py \ +test/pacman/tests/epoch001.py \ +test/pacman/tests/epoch002.py \ +test/pacman/tests/epoch003.py \ +test/pacman/tests/epoch004.py \ +test/pacman/tests/epoch005.py \ +test/pacman/tests/epoch010.py \ +test/pacman/tests/epoch011.py \ +test/pacman/tests/epoch012.py \ +test/pacman/tests/fileconflict001.py \ +test/pacman/tests/fileconflict002.py \ +test/pacman/tests/fileconflict003.py \ +test/pacman/tests/fileconflict004.py \ +test/pacman/tests/fileconflict005.py \ +test/pacman/tests/fileconflict006.py \ +test/pacman/tests/fileconflict007.py \ +test/pacman/tests/fileconflict008.py \ +test/pacman/tests/fileconflict009.py \ +test/pacman/tests/fileconflict010.py \ +test/pacman/tests/fileconflict011.py \ +test/pacman/tests/fileconflict012.py \ +test/pacman/tests/fileconflict013.py \ +test/pacman/tests/fileconflict015.py \ +test/pacman/tests/fileconflict016.py \ +test/pacman/tests/fileconflict017.py \ +test/pacman/tests/fileconflict020.py \ +test/pacman/tests/fileconflict021.py \ +test/pacman/tests/fileconflict022.py \ +test/pacman/tests/fileconflict023.py \ +test/pacman/tests/fileconflict024.py \ +test/pacman/tests/fileconflict025.py \ 
+test/pacman/tests/fileconflict030.py \
+test/pacman/tests/ignore001.py \
+test/pacman/tests/ignore002.py \
+test/pacman/tests/ignore003.py \
+test/pacman/tests/ignore004.py \
+test/pacman/tests/ignore005.py \
+test/pacman/tests/ignore006.py \
+test/pacman/tests/ignore007.py \
+test/pacman/tests/ignore008.py \
+test/pacman/tests/ldconfig001.py \
+test/pacman/tests/ldconfig002.py \
+test/pacman/tests/ldconfig003.py \
+test/pacman/tests/mode001.py \
+test/pacman/tests/mode002.py \
+test/pacman/tests/mode003.py \
+test/pacman/tests/pacman001.py \
+test/pacman/tests/pacman002.py \
+test/pacman/tests/pacman003.py \
+test/pacman/tests/pacman004.py \
+test/pacman/tests/pacman005.py \
+test/pacman/tests/provision001.py \
+test/pacman/tests/provision002.py \
+test/pacman/tests/provision003.py \
+test/pacman/tests/provision004.py \
+test/pacman/tests/provision010.py \
+test/pacman/tests/provision011.py \
+test/pacman/tests/provision012.py \
+test/pacman/tests/provision020.py \
+test/pacman/tests/provision021.py \
+test/pacman/tests/provision022.py \
+test/pacman/tests/query001.py \
+test/pacman/tests/query002.py \
+test/pacman/tests/query003.py \
+test/pacman/tests/query004.py \
+test/pacman/tests/query005.py \
+test/pacman/tests/query006.py \
+test/pacman/tests/query007.py \
+test/pacman/tests/query010.py \
+test/pacman/tests/query011.py \
+test/pacman/tests/query012.py \
+test/pacman/tests/reason001.py \
+test/pacman/tests/remove001.py \
+test/pacman/tests/remove002.py \
+test/pacman/tests/remove010.py \
+test/pacman/tests/remove011.py \
+test/pacman/tests/remove012.py \
+test/pacman/tests/remove020.py \
+test/pacman/tests/remove021.py \
+test/pacman/tests/remove030.py \
+test/pacman/tests/remove031.py \
+test/pacman/tests/remove040.py \
+test/pacman/tests/remove041.py \
+test/pacman/tests/remove042.py \
+test/pacman/tests/remove043.py \
+test/pacman/tests/remove044.py \
+test/pacman/tests/remove045.py \
+test/pacman/tests/remove047.py \
+test/pacman/tests/remove049.py \
+test/pacman/tests/remove050.py \
+test/pacman/tests/remove051.py \
+test/pacman/tests/remove052.py \
+test/pacman/tests/remove060.py \
+test/pacman/tests/remove070.py \
+test/pacman/tests/remove071.py \
+test/pacman/tests/replace100.py \
+test/pacman/tests/replace101.py \
+test/pacman/tests/replace102.py \
+test/pacman/tests/replace103.py \
+test/pacman/tests/replace104.py \
+test/pacman/tests/replace110.py \
+test/pacman/tests/scriptlet001.py \
+test/pacman/tests/scriptlet002.py \
+test/pacman/tests/sign001.py \
+test/pacman/tests/sign002.py \
+test/pacman/tests/smoke001.py \
+test/pacman/tests/smoke002.py \
+test/pacman/tests/smoke003.py \
+test/pacman/tests/smoke004.py \
+test/pacman/tests/symlink001.py \
+test/pacman/tests/symlink002.py \
+test/pacman/tests/symlink010.py \
+test/pacman/tests/symlink011.py \
+test/pacman/tests/symlink012.py \
+test/pacman/tests/symlink020.py \
+test/pacman/tests/sync-nodepversion01.py \
+test/pacman/tests/sync-nodepversion02.py \
+test/pacman/tests/sync-nodepversion03.py \
+test/pacman/tests/sync-nodepversion04.py \
+test/pacman/tests/sync-nodepversion05.py \
+test/pacman/tests/sync-nodepversion06.py \
+test/pacman/tests/sync001.py \
+test/pacman/tests/sync002.py \
+test/pacman/tests/sync003.py \
+test/pacman/tests/sync009.py \
+test/pacman/tests/sync010.py \
+test/pacman/tests/sync011.py \
+test/pacman/tests/sync012.py \
+test/pacman/tests/sync020.py \
+test/pacman/tests/sync021.py \
+test/pacman/tests/sync022.py \
+test/pacman/tests/sync023.py \
+test/pacman/tests/sync024.py \
+test/pacman/tests/sync030.py \
+test/pacman/tests/sync031.py \
+test/pacman/tests/sync040.py \
+test/pacman/tests/sync041.py \
+test/pacman/tests/sync042.py \
+test/pacman/tests/sync043.py \
+test/pacman/tests/sync044.py \
+test/pacman/tests/sync045.py \
+test/pacman/tests/sync050.py \
+test/pacman/tests/sync100.py \
+test/pacman/tests/sync1000.py \
+test/pacman/tests/sync1003.py \
+test/pacman/tests/sync1004.py \
+test/pacman/tests/sync1008.py \
+test/pacman/tests/sync101.py \
+test/pacman/tests/sync102.py \
+test/pacman/tests/sync103.py \
+test/pacman/tests/sync104.py \
+test/pacman/tests/sync110.py \
+test/pacman/tests/sync1100.py \
+test/pacman/tests/sync1101.py \
+test/pacman/tests/sync1102.py \
+test/pacman/tests/sync1103.py \
+test/pacman/tests/sync1104.py \
+test/pacman/tests/sync1105.py \
+test/pacman/tests/sync120.py \
+test/pacman/tests/sync130.py \
+test/pacman/tests/sync131.py \
+test/pacman/tests/sync132.py \
+test/pacman/tests/sync133.py \
+test/pacman/tests/sync134.py \
+test/pacman/tests/sync135.py \
+test/pacman/tests/sync136.py \
+test/pacman/tests/sync137.py \
+test/pacman/tests/sync138.py \
+test/pacman/tests/sync139.py \
+test/pacman/tests/sync140.py \
+test/pacman/tests/sync141.py \
+test/pacman/tests/sync150.py \
+test/pacman/tests/sync200.py \
+test/pacman/tests/sync300.py \
+test/pacman/tests/sync306.py \
+test/pacman/tests/sync400.py \
+test/pacman/tests/sync401.py \
+test/pacman/tests/sync402.py \
+test/pacman/tests/sync403.py \
+test/pacman/tests/sync404.py \
+test/pacman/tests/sync405.py \
+test/pacman/tests/sync406.py \
+test/pacman/tests/sync407.py \
+test/pacman/tests/sync500.py \
+test/pacman/tests/sync501.py \
+test/pacman/tests/sync502.py \
+test/pacman/tests/sync503.py \
+test/pacman/tests/sync600.py \
+test/pacman/tests/sync700.py \
+test/pacman/tests/sync701.py \
+test/pacman/tests/sync702.py \
+test/pacman/tests/sync890.py \
+test/pacman/tests/sync891.py \
+test/pacman/tests/sync892.py \
+test/pacman/tests/sync893.py \
+test/pacman/tests/sync895.py \
+test/pacman/tests/sync896.py \
+test/pacman/tests/sync897.py \
+test/pacman/tests/sync898.py \
+test/pacman/tests/sync899.py \
+test/pacman/tests/sync900.py \
+test/pacman/tests/sync901.py \
+test/pacman/tests/sync990.py \
+test/pacman/tests/sync992.py \
+test/pacman/tests/sync993.py \
+test/pacman/tests/sync999.py \
+test/pacman/tests/trans001.py \
+test/pacman/tests/type001.py \
+test/pacman/tests/unresolvable001.py \
+test/pacman/tests/upgrade001.py \
+test/pacman/tests/upgrade002.py \
+test/pacman/tests/upgrade003.py \
+test/pacman/tests/upgrade004.py \
+test/pacman/tests/upgrade005.py \
+test/pacman/tests/upgrade006.py \
+test/pacman/tests/upgrade010.py \
+test/pacman/tests/upgrade011.py \
+test/pacman/tests/upgrade012.py \
+test/pacman/tests/upgrade013.py \
+test/pacman/tests/upgrade014.py \
+test/pacman/tests/upgrade015.py \
+test/pacman/tests/upgrade016.py \
+test/pacman/tests/upgrade020.py \
+test/pacman/tests/upgrade021.py \
+test/pacman/tests/upgrade022.py \
+test/pacman/tests/upgrade023.py \
+test/pacman/tests/upgrade024.py \
+test/pacman/tests/upgrade025.py \
+test/pacman/tests/upgrade026.py \
+test/pacman/tests/upgrade027.py \
+test/pacman/tests/upgrade028.py \
+test/pacman/tests/upgrade029.py \
+test/pacman/tests/upgrade030.py \
+test/pacman/tests/upgrade031.py \
+test/pacman/tests/upgrade032.py \
+test/pacman/tests/upgrade040.py \
+test/pacman/tests/upgrade041.py \
+test/pacman/tests/upgrade042.py \
+test/pacman/tests/upgrade043.py \
+test/pacman/tests/upgrade045.py \
+test/pacman/tests/upgrade046.py \
+test/pacman/tests/upgrade050.py \
+test/pacman/tests/upgrade051.py \
+test/pacman/tests/upgrade052.py \
+test/pacman/tests/upgrade053.py \
+test/pacman/tests/upgrade054.py \
+test/pacman/tests/upgrade055.py \
+test/pacman/tests/upgrade056.py \
+test/pacman/tests/upgrade057.py \
+test/pacman/tests/upgrade058.py \
+test/pacman/tests/upgrade059.py \
+test/pacman/tests/upgrade060.py \
+test/pacman/tests/upgrade061.py \
+test/pacman/tests/upgrade070.py \
+test/pacman/tests/upgrade071.py \
+test/pacman/tests/upgrade072.py \
+test/pacman/tests/upgrade073.py \
+test/pacman/tests/upgrade074.py \
+test/pacman/tests/upgrade075.py \
+test/pacman/tests/upgrade076.py \
+test/pacman/tests/upgrade077.py \
+test/pacman/tests/upgrade078.py \
+test/pacman/tests/upgrade080.py \
+test/pacman/tests/upgrade081.py \
+test/pacman/tests/upgrade082.py \
+test/pacman/tests/upgrade083.py \
+test/pacman/tests/upgrade084.py \
+test/pacman/tests/upgrade090.py \
+test/pacman/tests/upgrade100.py \
+test/pacman/tests/xfercommand001.py
--
1.8.3.4
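For reference, automake turns a TESTS list like this into a parallel TAP run with only a few more Makefile.am variables. The following is a minimal sketch using automake's documented names (TEST_EXTENSIONS, PY_LOG_DRIVER, PY_LOG_COMPILER); the values shown are assumptions, since the Makefile.am side of this patch is not reproduced in full here:

    # Makefile.am sketch: run each .py test through the TAP driver
    TEST_EXTENSIONS = .py
    PY_LOG_DRIVER = env AM_TAP_AWK='$(AWK)' $(SHELL) \
                    $(top_srcdir)/build-aux/tap-driver.sh
    # assumption: the suite may instead route tests through pactest.py
    PY_LOG_COMPILER = $(PYTHON)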
This functionality can be provided by a test harness. Having pactest
output this information as well clutters the result log created by
automake.

Signed-off-by: Andrew Gregory <andrew.gregory.8@gmail.com>
---
 test/pacman/pactest.py |  3 +--
 test/pacman/pmenv.py   | 69 --------------------------------------------------
 2 files changed, 1 insertion(+), 71 deletions(-)

diff --git a/test/pacman/pactest.py b/test/pacman/pactest.py
index e92864d..d39fcaa 100755
--- a/test/pacman/pactest.py
+++ b/test/pacman/pactest.py
@@ -105,9 +105,8 @@ def create_parser():
     for i in opts.testcases:
         env.addtest(i)

-    # run tests and print overall results
+    # run tests
     env.run()
-    env.results()

     if not opts.keeproot:
         shutil.rmtree(root_path)
diff --git a/test/pacman/pmenv.py b/test/pacman/pmenv.py
index a3a8f54..f358285 100644
--- a/test/pacman/pmenv.py
+++ b/test/pacman/pmenv.py
@@ -61,7 +61,6 @@ def run(self):
         """
         tap.plan(len(self.testcases))
         for t in self.testcases:
-            tap.diag("==========" * 8)
             tap.diag("Running '%s'" % t.testname)

             t.load()
@@ -72,72 +71,4 @@ def run(self):
                 tap.todo = t.expectfailure
                 tap.subtest(lambda: t.check(), t.description)

-    def results(self):
-        """
-        """
-        tpassed = []
-        tfailed = []
-        texpectedfail = []
-        tunexpectedpass = []
-        for test in self.testcases:
-            fail = test.result["fail"]
-            if fail == 0 and not test.expectfailure:
-                self.passed += 1
-                tpassed.append(test)
-            elif fail != 0 and test.expectfailure:
-                self.expectedfail += 1
-                texpectedfail.append(test)
-            elif fail == 0: # and not test.expectfail
-                self.unexpectedpass += 1
-                tunexpectedpass.append(test)
-            else:
-                self.failed += 1
-                tfailed.append(test)
-
-        def _printtest(t):
-            success = t.result["success"]
-            fail = t.result["fail"]
-            rules = len(t.rules)
-            if fail == 0:
-                result = "[PASS]"
-            else:
-                result = "[FAIL]"
-            tap.diag("%s %s Rules: OK = %2u FAIL = %2u" \
-                    % (result, t.testname.ljust(34), success, fail))
-            if fail != 0:
-                # print test description if test failed
-                tap.diag(" " + t.description)
-
-        tap.diag("==========" * 8)
-        tap.diag("Results")
-        tap.diag("----------" * 8)
-        tap.diag(" Passed:")
-        for test in tpassed:
-            _printtest(test)
-        tap.diag("----------" * 8)
-        tap.diag(" Expected Failures:")
-        for test in texpectedfail:
-            _printtest(test)
-        tap.diag("----------" * 8)
-        tap.diag(" Unexpected Passes:")
-        for test in tunexpectedpass:
-            _printtest(test)
-        tap.diag("----------" * 8)
-        tap.diag(" Failed:")
-        for test in tfailed:
-            _printtest(test)
-        tap.diag("----------" * 8)
-
-        total = len(self.testcases)
-        tap.diag("Total = %3u" % total)
-        if total:
-            tap.diag("Pass = %3u (%6.2f%%)" % (self.passed,
-                float(self.passed) * 100 / total))
-            tap.diag("Expected Fail = %3u (%6.2f%%)" % (self.expectedfail,
-                float(self.expectedfail) * 100 / total))
-            tap.diag("Unexpected Pass = %3u (%6.2f%%)" % (self.unexpectedpass,
-                float(self.unexpectedpass) * 100 / total))
-            tap.diag("Fail = %3u (%6.2f%%)" % (self.failed,
-                float(self.failed) * 100 / total))
-
 # vim: set ts=4 sw=4 et:
--
1.8.3.4
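The removal leans on the fact that a TAP harness derives its own summary from the ok/not ok lines the tests emit. As a rough illustration, the tap.py helpers called above could look like the following; this is inferred from the call sites only, and the real tap.py added earlier in this series may differ:

    # sketch of a tap.py interface, inferred from its call sites above
    count = 0
    todo = False

    def plan(total):
        # the "1..N" plan line lets the harness detect missing tests
        print("1..%d" % total)

    def diag(msg):
        # TAP diagnostics are comments; harnesses pass them through untouched
        print("# %s" % msg)

    def ok(result, description):
        # one "ok"/"not ok" line per test; TODO marks an expected failure
        global count
        count += 1
        directive = " # TODO" if todo else ""
        print("%sok %d - %s%s" % ("" if result else "not ", count,
                                  description, directive))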
Have you any ideas on how to fix the "unexpected" pass on the time test for x86_64 to not have the test suite return non-zero? I believe this is essential.

Allan
On 08/05/13 at 10:52am, Allan McRae wrote:
Have you any ideas on how to fix the "unexpected" pass on the time test for x86_64 to not have the test suite return non-zero? I believe this is essential.
I think that "unexpected" passes are rightly considered failures. The test should reflect what we actually expect to happen. We should either update the test so that it succeeds or fails uniformly on all systems or set expectfailure only on systems where we actually expect it to fail. Personally, I would prefer that the test use the maximum values that the testing system could be expected to support and unset expectfailure, but the easier solution is to just set expectfailure only on 32 bit systems. apg
On 05/08/13 14:16, Andrew Gregory wrote:
I think that "unexpected" passes are rightly considered failures. The test should reflect what we actually expect to happen. We should either update the test so that it succeeds or fails uniformly on all systems or set expectfailure only on systems where we actually expect it to fail. Personally, I would prefer that the test use the maximum values that the testing system could be expected to support and unset expectfailure, but the easier solution is to just set expectfailure only on 32 bit systems.
Setting expected failure on 32-bit systems would actually be my preferred solution in this case. Can our test suite handle that?
On 08/05/13 at 02:18pm, Allan McRae wrote:
Setting expected failure on 32-bit systems would actually be my preferred solution in this case. Can our test suite handle that?
I don't have any 32-bit systems readily available to test it at the moment, but checking either platform.architecture [1] or sys.maxsize [2] should be sufficient.

[1] http://docs.python.org/2/library/platform.html#platform.architecture
[2] http://docs.python.org/2/library/sys.html#sys.maxsize
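Either check is a one-liner; a sketch of both, assuming CPython (where sys.maxsize is 2**31 - 1 on 32-bit builds and 2**63 - 1 on 64-bit builds):

    import platform
    import sys

    # [1] architecture of the interpreter binary, e.g. ('32bit', 'ELF')
    is_32bit = platform.architecture()[0] == '32bit'

    # [2] pointer-width check; unlike [1], never shells out to file(1)
    is_32bit = sys.maxsize <= 2**32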
On 05/08/13 14:47, Andrew Gregory wrote:
I don't have any 32-bit systems readily available to test it at the moment, but checking either platform.architecture [1] or sys.maxsize [2] should be sufficient.
[1] http://docs.python.org/2/library/platform.html#platform.architecture [2] http://docs.python.org/2/library/sys.html#sys.maxsize
I guess I can test this in a chroot (or you could...).

It also looks like .gitignore needs updating:

# test-suite.log
# test/pacman/tests/clean001.log
# test/pacman/tests/clean001.trs
# test/pacman/tests/clean002.log
# test/pacman/tests/clean002.trs
# test/pacman/tests/clean003.log
# test/pacman/tests/clean003.trs
# test/pacman/tests/clean004.log
# test/pacman/tests/clean004.trs
# test/pacman/tests/clean005.log
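A handful of patterns would cover all of these; a sketch, assuming automake's standard per-test .log/.trs naming (the patterns actually committed may differ):

    # .gitignore additions (sketch)
    /test-suite.log
    /test/pacman/tests/*.log
    /test/pacman/tests/*.trs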
On 08/12/13 at 09:10pm, Allan McRae wrote:
I guess I can test this in a chroot (or you could...).
It also looks like .gitignore needs updating:

# test-suite.log
# test/pacman/tests/clean001.log
# test/pacman/tests/clean001.trs
# test/pacman/tests/clean002.log
# test/pacman/tests/clean002.trs
# test/pacman/tests/clean003.log
# test/pacman/tests/clean003.trs
# test/pacman/tests/clean004.log
# test/pacman/tests/clean004.trs
# test/pacman/tests/clean005.log
Erm, I did update .gitignore... Did you by any chance run make check with these patches and then switch to a different branch? Otherwise I have no idea why those would show up.

apg
Use the architecture of the Python interpreter running the test to
detect 32-bit systems.

Signed-off-by: Andrew Gregory <andrew.gregory.8@gmail.com>
---

Should be applied before automake integration so git bisect won't
consider the non-zero return for an unexpected pass a failure.

Tested in a 32-bit chroot.

 test/pacman/tests/query006.py | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/test/pacman/tests/query006.py b/test/pacman/tests/query006.py
index 3668478..0f6f762 100644
--- a/test/pacman/tests/query006.py
+++ b/test/pacman/tests/query006.py
@@ -24,4 +24,7 @@
 self.addrule("PACMAN_OUTPUT=^Build Date.* 2065")
 self.addrule("PACMAN_OUTPUT=^Install Date.* 2286")

-self.expectfailure = True
+# expect failure on 32bit systems
+import sys
+if sys.maxsize <= 2**32:
+    self.expectfailure = True
--
1.8.3.4
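The comparison is safe because sys.maxsize tracks the interpreter's pointer width, a property of CPython builds:

    # illustrative values, assuming CPython:
    #   32-bit build: sys.maxsize == 2**31 - 1  ->  <= 2**32, expectfailure is set
    #   64-bit build: sys.maxsize == 2**63 - 1  ->  >  2**32, the date rules must pass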