The "valgrind" test cases are designed to pick up errors reported when
passt is running under valgrind. But what they actually do is just
kill the passt process, then check whether it exited with a non-zero
status. That means they will equally well pick up any other problem
which caused passt to exit with an error status: either something
detected within passt, or passt being killed by an unexpected signal.
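In shell terms, the check amounts to something like the sketch below
(the real tests drive this through the test script DSL; the path,
options, and the assumption that passt exits cleanly on the signal
are illustrative):

    ./passt -f &                # start passt as a child of the shell
    PASST_PID=$!
    # ... exercise passt here ...
    kill ${PASST_PID}           # ask passt to shut down
    wait ${PASST_PID} || echo "passt (or valgrind) reported an error"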
The fact that the "valgrind" test is actually responsible for shutting down
the passt process is non-obvious and can lead to problems when selectively
running tests during debugging.
Rename the "valgrind" tests to "shutdown" tests and run them regardless
of whether we're using valgrind or not. This allows us to remove an
ugly special case in the passt_in_ns teardown code.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Currently the build tests and distro tests share a common setup function.
That works for now, but changes we want to make will mean they need
slightly different setup, so split the setup functions in preparation.
Currently, neither the build nor the distro tests have any teardown
function. Again, future changes are going to mean we need to do some
teardown, so create some empty-for-now teardown functions in
preparation.
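As a rough sketch, with illustrative function names (the real setup
code does considerably more):

    setup_build() {
            :       # preparation specific to build tests
    }

    setup_distro() {
            :       # preparation specific to distro tests
    }

    teardown_build() {
            :       # empty for now, filled in by later changes
    }

    teardown_distro() {
            :       # empty for now, filled in by later changes
    }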
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Both clang-tidy and cppcheck linting are handled by the same test file,
test/build/static_checkers. The two linters are independent of each other
though, and each one takes quite a long time. Split them into separate
files, making it easier to control which ones are executed from the
top-level test script.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Currently test/run uses wildcards to run all of the tests in a directory.
However, that wildcard list is filtered down by the "onlyfor" directives
in the test files... usually to a single file.
Therefore, just explicitly list the files we *really* want to run for this
test mode. This makes it easier to see at the top level what tests will
be executed, and to change that list temporarily while debugging specific
failures.
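The explicit invocations in test/run then end up looking something
like this sketch (the file names here are illustrative, not the exact
list):

    test build/all build/cppcheck build/clang_tidy
    test distro/debian distro/fedora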
This means the "onlyfor" directive no longer has any purpose, and we can
remove it. "onlyfor" was also the only user of the $MODE variable, so we
can remove that too.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The top-level control of which tests to run lives in test/run; however,
it uses the test() function, which runs an entire directory of test
files filtered by some criteria. This makes it awkward to narrow down
to a subset of tests when debugging a specific failure.
To make this easier, have test() take an explicit list of test files to
run, and have the caller in test/run handle the directory traversal. The
construct we use for this is pretty awkward, because at this point in
test/run we're in the source tree root directory rather than in test/.
Later cleanups will improve that.
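For instance, the caller might collect the files with a construct along
these lines (a hedged sketch, not the exact code):

    test $(cd test; ls -1 distro/*)

which works from the source tree root, but isn't pretty.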
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
There's no need to return non-zero if there have been failures in
run(), because the exit value is already determined from the number
of failures reported in the log file.
Return zero, so that this doesn't cause the script to fail, given we
now run it with -e.
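A minimal sketch of the idea (the real run() does much more):

    run() {
            # ... run the test groups; failures are counted in the log ...
            return 0    # with -e, a non-zero return here would abort the script
    }

The script's exit status is then derived from the failure count in the
log file, not from run()'s return value.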
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Most commands issued by the test scripts aren't explicitly checked for
errors, so if they fail, the shell just keeps on executing. That makes
it difficult to figure out where things started going wrong when
something falls over.
Run the whole script in set -e mode, so that it exits on any
(unchecked) failing command. To make this work we do need to add
explicit checks or fallbacks for some commands which we expect to fail.
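The pattern is roughly the following (a sketch; the actual commands in
the script differ):

    #!/bin/sh -e

    mkdir -p test_logs                       # any failure here aborts the script
    rm test_logs/old.log 2>/dev/null || :    # may legitimately fail: explicit fallback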
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[sbrivio: use sh -e instead of setting -e later, so that we don't miss
anything before set -e is issued]
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
The XVFB variable is initialized at the beginning of test/run and then
never used again. I'm assuming it's a leftover from some earlier
iteration.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Pass to seccomp.sh a list of additional syscalls valgrind needs as
EXTRA_SYSCALLS in a new 'valgrind' make target, and add corresponding
support in seccomp.sh itself.
In test setup functions, start passt with valgrind, but not for
performance tests.
Add tests checking that valgrind exits without errors after all the
other tests in the group are done.
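The Makefile side of this looks roughly like the sketch below (the
actual syscall list and additional flags differ; the names here are
only for illustration):

    valgrind: EXTRA_SYSCALLS="rt_sigprocmask rt_sigtimedwait rt_sigaction \
                              getpid gettid kill clock_gettime"
    valgrind: all

with seccomp.sh extending the generated filter to also allow the
syscalls listed in EXTRA_SYSCALLS, so that passt can run under valgrind
without being killed by its own seccomp policy.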
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
For demos, cool-retro-term(1) looked fancier, but several threads of it
plus ffmpeg(1) just mess up performance testing.
The CI videos started getting really big as well, and they were
difficult to read.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
...showing setup steps, some peculiarities such as the --net option,
and a general side-by-side comparison with slirp4netns(1), including
"quick" TCP and UDP throughput and latency benchmarks.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
The new tests check the build and a simple case with pasta sending a
short message in both directions (namespace to init, init to
namespace).
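The message exchange itself amounts to something like this sketch (a
hedged example: the real tests use their own helpers, and the address,
port and forwarding setup are assumptions, not the actual ones):

    # init to namespace, assuming pasta forwards TCP port 10000 inward;
    # inside the namespace spawned by pasta:
    socat -u TCP4-LISTEN:10000 STDOUT
    # from the init namespace:
    echo "hello" | socat -u STDIN TCP4:127.0.0.1:10000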
Tests cover a mix of Debian, Fedora, OpenSUSE and Ubuntu combinations
on aarch64, i386, ppc64, ppc64le, s390x, x86_64.
Builds are tested starting from approximately glibc 2.19 and gcc 4.7,
and actual functionality from approximately 4.4 kernels, glibc 2.25,
and gcc 4.8, all the way up to current glibc/gcc/kernel versions.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>