Scope
This directory contains test cases for passt and pasta, together with a simple POSIX shell-based framework to define them and run them as a suite.
These tests can be run as part of a continuous integration workflow, and are also used to provide short usage demos, with video recording, for basic passt and pasta use cases.
Run
Dependencies
Packages
The tests require some package dependencies commonly available in Linux distributions. If some packages are not available, the test groups that need them will be selectively skipped.
This is a non-exhaustive list of packages that might not be commonly installed on a system; widely available utilities such as a shell are not included here.
Example for Debian, and possibly most Debian-based distributions:
build-essential git jq strace iperf3 qemu-system-x86 tmux sipcalc bc
clang-tidy cppcheck isc-dhcp-common psmisc linux-cpupower socat
netcat-openbsd fakeroot lz4 lm-sensors qemu-system-arm qemu-system-ppc
qemu-system-misc qemu-system-x86 valgrind
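For instance, on a Debian-based system, the whole list can usually be installed in one go. This is just a convenience sketch: package names may differ between releases and might need adjusting:
# Run as root or via sudo; adjust package names for your release
apt install build-essential git jq strace iperf3 qemu-system-x86 tmux \
	sipcalc bc clang-tidy cppcheck isc-dhcp-common psmisc linux-cpupower \
	socat netcat-openbsd fakeroot lz4 lm-sensors qemu-system-arm \
	qemu-system-ppc qemu-system-misc valgrind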
Other tools
Tests measuring request-response and connect-request-response latencies use
neper, which is not commonly packaged by distributions and needs to be built
and installed manually:
git clone https://github.com/google/neper
cd neper; make
cp tcp_crr tcp_rr udp_rr /usr/local/bin
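Assuming /usr/local/bin is in PATH, a quick way to confirm the three binaries are available (a convenience check, not part of the suite) is:
command -v tcp_crr tcp_rr udp_rr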
Virtual machine images are built during test execution using mbuto. The shell script is sourced via git as needed, so there's no need to actually install it.
Special requirements for continuous integration and demo modes
Running the test suite in continuous integration or demo mode will record the terminal with the steps being executed, using asciinema(1), and create binary packages.
The following additional packages are commonly needed:
alien asciinema linux-perf tshark
Regular test
Just issue:
./run
from the test
directory. Elevated privileges are not needed. Environment
variable settings: DEBUG=1 enables debugging messages, TRACE=1 enables tracing
(further debugging messages), PCAP=1 enables packet captures. Example:
PCAP=1 TRACE=1 ./run
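Or, to get debugging messages without packet captures:
DEBUG=1 ./run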
Continuous integration
Issuing:
./ci
will run the whole test suite while recording the execution, and it will also build JavaScript fragments used on http://passt.top/ for performance data tables and links to specific offsets in the captures.
Demo mode
Issuing:
./demo
will run the demo cases under demo, with terminal captures as well.
Framework
The implementation of the testing framework is under lib, and it provides
facilities for terminal and tmux session management, interpretation of test
directives, video recording, and suchlike. Test cases are organised in the
remaining directories.
Test cases can be implemented as POSIX shell scripts, or as a set of directives,
which are not formally documented here, but should be clear enough from the
existing cases. The entry point for interpretation of test directives is
implemented in lib/test.
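As a purely illustrative sketch, a directive-based case might look roughly like the lines below. The keywords and layout here are assumptions modelled on the existing cases, not a formal reference; check the files under the test group directories for the actual syntax:
# Hypothetical sketch, not authoritative syntax: see existing cases for reference
test	Hypothetical host command check
host	echo passt > /tmp/test_token
hout	TOKEN cat /tmp/test_token
check	[ "__TOKEN__" = "passt" ]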