There are some 'sleep 1' commands between starting the socat server
and its corresponding client to avoid races due to the server not
being ready as we start sending data.
However, those don't cover all the cases where we might need them,
and in some cases the sleep actually ended up before the server is
even started.
This fixes occasional failures in the TCP and UDP simple transfer
tests that became apparent with the new command dispatch mechanism.
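The pattern in question looks roughly like this (port and file names
are made up for illustration):

  socat -u TCP4-LISTEN:10001 CREATE:rcv.bin &
  sleep 1                                      # let the listener get ready...
  socat -u OPEN:snd.bin TCP4:127.0.0.1:10001   # ...before we start sending
  wait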
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Otherwise, we're depending on having /sbin in $PATH. For some reason
I didn't completely grasp, with the new command dispatch mechanism
that's not the case anymore, even if I have /sbin in $PATH in the
parent shell.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
We use the [ "$x" -eq "$x" ] syntax to check if $x is a number. The
behaviour is clearly implied by POSIX, but some shells might actually
report the (intended) error, and dash floods script.log with
"Illegal number" error messages. Hide them.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Because UDP is connectionless we don't have an in-built end-of-stream
signal for our connectivity tests. We work around this by explicitly
adding an end marker to our sample data and killing the listening end once
it is seen.
However, socat has some built-in options - null-eof and shut-null - which
can be used to signal the end of stream with a zero-length UDP packet.
Use these to simplify how the UDP tests are implemented.
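For instance, something along these lines (port and file names are
illustrative):

  # Receiver: treat a zero-length datagram as end of stream
  socat -u UDP4-LISTEN:10001,null-eof CREATE:rcv.bin &
  # Sender: emit a zero-length datagram once the input is drained
  socat -u OPEN:snd.bin UDP4:127.0.0.1:10001,shut-null
  wait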
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The tests generate a performance report in $BASEPATH/perf.js and
hooks/pre-push copies it to the website. To avoid cluttering the working
directory, instead put perf.js in $LOGDIR/web, since it's a test output
artefact. Update hooks/pre-push to copy from its new location.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The asciinema video handling creates a number of temporary files (.uncat,
.start, .stop) which currently go into the source tree. Put them in the
temporary state directory to avoid clutter.
The final processed output is now placed into test_logs/web/ along with the
corresponding .js file with links, since they're essentially test
artefacts. hooks/pre-push is updated to look for those files in the new
location when updating the web site.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Not putting them in bare /tmp means they will be cleaned up
automatically with everything else.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Currently they go in the passt source tree with fixed names, which means
their presence can mess with subsequent test runs.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The capture files are more or less a different form of log output from the
tests, so place them in $LOGDIR.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Instead of using the 'temp' and 'tempdir' DSL directives to create
temporary files, use fixed paths relative to __STATEDIR__. This has two
advantages:
1) The files are automatically cleaned up if the tests fail (and even if
that doesn't work, they're easier to clean up manually)
2) When debugging tests it's easier to figure out which of the temporary
files are relevant to whatever's going wrong
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Currently the context command dispatch subsystem creates a bunch of
temporary files in $LOGDIR, which is messy. Store them in $STATEDIR which
is for precisely this purpose. The logs from each context still go into
$LOGDIR.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We use this fifo to send messages to the information pane. Put it in the
state directory so it doesn't need its own cleanup.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The test scripts create a bunch of temporary files to keep track of
internal state. Some are made in /tmp with individual mktemp calls, some
go in the passt source directory, and some go in $LOGDIR. This can
sometimes make it messy to clean up after failed test runs.
Start cleaning this up by creating a single "state" directory ($STATEBASE)
in /tmp for all the state or temporary files used by a single test run.
Clean it up automatically in cleanup() - except when DEBUG==1, because
those files can be useful for debugging test script failures.
We create subdirectories under $STATEBASE for each setup function, exposed
as $STATESETUP. We also create subdirectories for each test script and
expose those to the scripts as __STATEDIR__.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We install a cleanup() function with 'trap' in order to clean up temporary
files we generate during the tests. However, we uninstall it after
run_term, which means it won't run in most of the cases where it would be
useful. Even if the "run" invoked from run_term exits with an error, that
error will be hidden from the run_term wrapper, because it runs within a
tmux session, so we will return from run_term normally, uninstall the trap
and never clean up.
In fact, there's no reason to uninstall the trap at all: it works just as
well on the success exit path as on an error exit path.
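In other words, the trap can simply stay installed for the whole run,
along these lines (the cleanup body is only an illustration):

  cleanup() {
          rm -f /tmp/passt-test-*   # illustrative only
  }
  trap cleanup EXIT                 # runs on success and on error alike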
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
For example, passt/dhcp rather than dhcp/passt. This is more
consistent with the two_guests and other test groups, and makes some
other cleanups simpler.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Put the pieces together to use the new style context based dispatch for
the passt_in_pasta tests.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Now that we have all the pieces we need for issuing commands both into
namespaces and into guests, we can use those to convert the two_guests
tests to using only the new-style context command dispatch.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Extend the context system in the test scripts to allow executing commands
within a guest. Do this without requiring an existing network in the guest
by using socat to run ssh via a vsock connection.
We do need some additional "sleep"s in the tests, because the new
faster dispatch means that sometimes we attempt to connect before
socat has managed to listen.
For now, only use this for the plain "passt" tests. The "passt_in_ns" and
other tests have additional complications we still need to deal with.
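A rough sketch of the mechanism (CID, port and paths are
illustrative, not necessarily what the scripts use):

  # Guest: accept vsock connections and hand each one to sshd in inetd
  # mode, so no network setup is needed in the guest
  socat VSOCK-LISTEN:22,fork EXEC:"/usr/sbin/sshd -i" &

  # Host: tunnel the ssh session over vsock with a ProxyCommand
  ssh -o "ProxyCommand=socat - VSOCK-CONNECT:3:22" root@guest true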
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Extend the context system to allow commands to be run in a namespace
created with unshare, and use it for the namespace used in the pasta tests.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
In our test scripts we need to do some ugly parsing of /proc and/or pstree
output in order to get the PIDs of processes running in namespaces so that
we can connect to those namespaces with nsenter or pasta.
This is actually a pretty tricky problem with standard tools. To determine
the PID from the outside of the namespace we need to know how the process
of interest is related to the unshare or pasta process (child? one of
several children? grandchild?) as well as then parsing /proc or ps output.
This is slightly awkward now, and will get worse with future changes I'd
like to make to how processes are dispatched.
The obvious solution would be to have the process of interest (which we
control) report its own PID, but that doesn't work easily, because it is in
a PID namespace and sees only its local PID, not the global PID we need to
address it from outside.
To handle this, add a small custom tool, "nsholder". This takes a path
and a mode parameter. In "hold" mode it will create a unix domain socket
bound to the path and listening. In "pid" mode it will get the "hold"ing
process's pid via the unix socket using SO_PEERCRED, which translates
between PID namespaces. In "stop" mode it will send a message to the
socket causing the "hold"ing process to clean up and exit.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Convert the pasta and passt tests to use new-style context execution
for the things that run in the "passt" frame. Don't touch the
passt_in_ns or two_guests tests yet, because they run passt inside a
namespace which introduces some additional complications we have yet
to handle.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Convert most of the tests to use the new-style system for issuing commands
for all host commands. We leave the distro tests for now: they use
the same pane for both host and guest commands, which we'll need some
more pieces in place to deal with.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We're creating a system for tests to more reliably execute commands in
various contexts (e.g. host, guest, namespace). That transition is going
to happen over a number of steps though, so in the meantime we need to deal
with both the old-style issuing of commands, by typing into tmux panels
and screen-scraping their output, and the new-style system for executing
commands in context.
Introduce some transitional helpers which will issue a command via context
if the requested context is initialized, but will otherwise fall back to
the old style tmux panel based method. Re-implement the various test DSL
commands in terms of these new helpers.
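Conceptually, each transitional helper looks something like this (the
function names are illustrative, not necessarily the real ones):

  host() {
          if context_exists host; then
                  context_run host "$@"    # new style: context dispatch
          else
                  pane_run HOST "$@"       # old style: type into the pane
                  pane_wait HOST
          fi
  }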
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We're moving to a new way for the tests to dispatch commands to run in
contexts (host, guest, namespace, etc.). As we make this transition,
though, we still want the user to be able to watch the commands running
in a context, as they previously could when the commands were issued in
the pane.
Add a helper to set up a pane to watch a context's log to allow this. In
some cases we currently issue commands from several different logical
contexts in the same pane, so allow a pane to watch several contexts at
once. Also use tail's --retry option to allow starting the watch before
we've initialized the context which will be useful in some cases.
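The watch itself is essentially just a tail of the context's log
file, e.g. (path is illustrative):

  # Start following the log even before the context has created it
  tail -f --retry "${LOGDIR}/context_host.log"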
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
For the tests, we need to run commands in various contexts: in the host,
in a guest or in a namespace. Currently we do this by running each context
in a tmux pane, and using tmux commands to type the commands into the
relevant pane, then screen-scraping the output for the results if we need
them.
This is very fragile, because we have to make various assumptions to parse
the output. Those can break if a shell doesn't have the prompt we expect,
if the tmux pane is too small or in various other conditions.
This starts some library functions for a new "context" system, that
provides a common way to invoke commands in a given context, in a way that
properly preserves stdout, stderr and the process return code.
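A simplified sketch of the idea (the names and the dispatch prefix
here are illustrative, not the actual implementation):

  # __dispatch is empty for the host, or e.g. an ssh/nsenter prefix
  context_run() {
          __name="$1"; shift
          __log="${LOGDIR}/context_${__name}.log"
          echo "$ $*" >> "${__log}"                         # record the command
          __out="$(${__dispatch} sh -c "$*" 2>>"${__log}")" # stderr to the log
          __rc=$?
          printf '%s\n' "${__out}" | tee -a "${__log}"      # stdout to caller and log
          return ${__rc}
  }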
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Our test DSL has a number of paired commands to run something in the
background in a pane, then later to wait for it to complete. However, in
some of the tests we have these mismatched - starting a command in one
pane, then waiting for it in another.
We appear to get away with this for some reason, but it's not correct,
and future changes will make it cause more problems.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
test_iperf3() is a pretty inscrutable mess of nested background processes.
It has a number of ugly sleeps needed to wait for things to complete.
Rewrite it to be cleaner:
* Use the construct (a & b & wait) to run 'a' and 'b' in parallel, but
then wait for them both to complete before continuing
* This allows us to wait for both the server and client to finish, rather
than sleeping
* Use jq to do all the math we need to get the final result, rather than
jq followed by some complicated 'bc' mangling
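The first and last points boil down to a pattern like this (port,
duration and the jq expression are simplified for illustration):

  # Server and client in parallel; continue only once both have finished
  (iperf3 -s -p 10001 -1 -J > s.json &
   iperf3 -c 127.0.0.1 -p 10001 -t 10 > /dev/null &
   wait)

  # Let jq do the arithmetic on the server-side JSON report
  jq -rM '.end.sum_received.bits_per_second / 1000000000' s.json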
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Currently all the throughput tests are run for 30s. This is reflected
both in the actual parameters given to the iperf commands and in the
matching sleeps in test_iperf3.
Allow this to be adjusted more easily with a new parameter to test_iperf3.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[sbrivio: Reflect new parameter in comment to test_iperf3()]
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
These two commands in the DSL to run an iperf client and server are always
used together, and some of the parameters must match between them. The
iperf3s must also be run more or less immediately after iperf3c, since
iperf3c will run a client in the background after a sleep and requires a
server to be running before it will work.
A bunch of things can be made cleaner if we make a single DSL command that
runs both sides of the test. For now make the combined command work
exactly like the two commands together did, warts and all.
This does lose the ability for the DSL scripts to give additional options
to the iperf3 server, but we weren't using that anyway.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
On new Ubuntu 22.04 images, stopping systemd-resolved to get the
dhclient script to override resolv.conf doesn't work anymore. I
originally used that hack to avoid introducing a delay which is
needed when running it on TCG.
Keep systemd-resolved running instead, and wait for it to be ready
by trying to resolve a domain a few times before installing
packages, so that we don't add another ugly delay that might
unnecessarily slow down things even further.
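The wait is just a small retry loop, something like this (domain and
timings are arbitrary):

  # Give systemd-resolved a chance to come up before we need DNS
  for i in $(seq 1 10); do
          getent hosts archive.ubuntu.com && break
          sleep 2
  done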
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Performance tests use iperf3(1) with large windows, and these sysctl
entries are needed to run them unmodified.
The passt demo uses perf(1) to report syscall overhead, and that
needs access to hardware performance counters for unprivileged
users.
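The entries in question are along these lines (values are indicative,
they depend on the window sizes the tests request):

  # Allow iperf3 to set large socket buffers ("windows")
  sysctl -w net.core.rmem_max=33554432
  sysctl -w net.core.wmem_max=33554432
  # Allow unprivileged users to read hardware performance counters
  sysctl -w kernel.perf_event_paranoid=-1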
Reported-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
We start getting prompts about restarting outdated services: we're
using daily images but they might have been cached for a while now.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Currently in at least some of the testcases we kill qemu processes we're
done with by issuing a Control-C to the tmux panel they're running in. That
makes things harder as we try to move towards allowing "headless" testing
without tmux.
So, instead always use an explicit kill on a pid derived from a pidfile
for killing qemu. Note that we don't need to remove the pidfiles
afterwards, because qemu does that itself when terminated.
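That is, something along these lines (the pidfile path is
illustrative):

  # qemu was started with -pidfile "${pidfile}"
  kill "$(cat "${pidfile}")"   # qemu removes the pidfile itself on exit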
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The test scripts run with sh -e, which means they will stop if any commands
return an error. That's generally desirable, because we won't continue
after things are hopeless due to an earlier step failing.
Unfortunately, the tmux setup we run the script in means it's not obvious
where any error messages related to such a failure will go. Depending on
exactly where the error occurs they might go to the original terminal
hidden behind tmux, or they might go to a tmux panel that's not visible in
the normal layouts.
To make it easier to find such error messages, redirect output and
errors from the test script itself to a 'script.log' file in the logs
directory. When in DEBUG=1 mode, additionally 'set -x' so we log all the
commands we execute to that file.
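In practice this amounts to something like the following near the top
of the script (details are illustrative):

  exec > "${LOGDIR}/script.log" 2>&1   # all script output goes to the log
  [ "${DEBUG}" = "1" ] && set -x       # in debug mode, also trace commands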
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
For the passt and passt_in_ns tests we have a "shutdown" testcase that
checks for any errors from the passt process we were using (including
valgrind warnings). Do the same for pasta tests, so that we catch any
error codes from the pasta process.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The "valgrind" test cases are designed to pick up errors reported when
passt is running under valgrind. But what it actually does is just kill
the passt process, then see if it had a non-zero exit code. That means it
will equally well pick up any other problems which caused passt to exit
with an error status: either something detected within passt or as a result
of passt being killed by an unexpected signal.
The fact that the "valgrind" test is actually responsible for shutting down
the passt process is non-obvious and can lead to problems when selectively
running tests during debugging.
Rename the "valgrind" tests to "shutdown" tests and run it regardless of
whether we're using valgrind or not. This allows us to remove an ugly
speacial case in the passt_in_ns teardown code.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The queries we use in the test scripts to locate the external interface
or gateway can return multiple results. We get away with this because the
way we parse command output only looks at the last line. It's not really
correct, though, and improvements to our handling of command output will
mean it breaks.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Currently the build tests and distro tests share a common setup function.
That works for now, but changes we want to make will mean they need
slightly different setup, so split the setup functions in preparation.
Currently, neither build nor distro tests have any teardown function.
Again, future changes are going to mean we need to do some teardown, so
create some empty-for-now teardown functions in preparation.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
When recording tests or demos with asciinema we generate several temporary
files during post-processing. Add these to the .gitignore file so they're
not accidentally committed.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The DEMO_XTERM and CI_XTERM variables defined in test/lib/term aren't used
anywhere. Remove them.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Both clang-tidy and cppcheck linting are handled by the same test file,
test/build/static_checkers. The two linters are independent of each other
though, and each one takes quite a long time. Split them into separate
files to make it easier to control which are executed from the top level
test script.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We've recently converted most of our tests to use socat instead of
nc/netcat/ncat, because socat is more powerful and we don't need to deal
with the several possible variants of netcat.
We still use nc or ncat for the distro tests. Because there we control
the guest environment and can pick our tools, there isn't the same reason
to switch to socat. However, using socat here as well makes the tests
a bit easier to read, and doesn't require people reading or modifying them
to become familiar with an additional tool.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[sbrivio: keep using netcat-openbsd in Ubuntu 16.04 ppc64 test, as socat
is unavailable there]
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Distribution packages reasonably expect to have a human-readable
Markdown version of the README under /usr/share/doc/, but all we have
right now is a heavily web-oriented version.
Introduce an ugly hack to strip web-oriented parts from the current
README and install it.
It should probably work the other way around: a human-readable README
could be used as a source for the web page. But cgit needs a file
that's in the tree, not something that can be built, and
https://passt.top/ is based on cgit. It should eventually be doable
to work around this in cgit, instead.
Reported-by: Benson Muite <benson_muite@emailplus.org>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Given that a three-way git merge was enough to cope with context
changes in man pages, it's probably a good idea to enable that for
'git am' in the demo too.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Now that the back end allows passt/pasta to use different external
interfaces for IPv4 and IPv6, use that to do the right thing in the case
that the host has IPv4 and IPv6 connectivity via different interfaces.
If the user hasn't explicitly chosen an interface, separately search for
a suitable external interface for each protocol.
As a bonus, this substantially simplifies the external interface probe. It
also eliminates a subtle confusing case where in some circumstances we
would pick the first interface in interface index order, and sometimes in
order of routes returned from netlink. On some network configurations that
could cause tests to fail, because the logic in the tests was subtly
different (it always used route order).
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
By default, passt itself attaches to the first host interface with a
default route. However, when determining the host interface name the tests
implicitly select the *last* host interface: they use a jq expression which
will list all interfaces with default routes, but the way output detection
works in the scripts, it will only pick up the last line.
If there are multiple interfaces with default routes on the host, and they
each have a different address, this can cause spurious test failures.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>