We use the [ "$x" -eq "$x" ] syntax to check if $x is a number. The
behaviour is clearly implied by POSIX, but some shells might actually
report the (intended) error, and dash floods script.log with
"Illegal number" error messages. Hide them.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Because UDP is connectionless we don't have an in-built end-of-stream
signal for our connectivity tests. We work around this by explicitly
adding an end marker to our sample data and killing the listening end once
it is seen.
However, socat has some built-in options - null-eof and shut-null - which
can be used to signal the end of stream with a zero-length UDP packet.
Use these to simplify how the UDP tests are implemented.
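The resulting pattern looks roughly like this (port and paths are
illustrative):

  # Receiver: treat an empty datagram as end of stream and exit
  socat -u UDP4-LISTEN:10001,null-eof OPEN:data.out,create,trunc &
  # Sender: emit an empty datagram once the input file is exhausted
  socat -u OPEN:data.in UDP4:127.0.0.1:10001,shut-null
  wait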
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The tests generate a performance report in $BASEPATH/perf.js and
hooks/pre-push copies it to the website. To avoid cluttering the working
directory, instead put perf.js in $LOGDIR/web, since it's a test output
artefact. Update hooks/pre-push to copy from its new location.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The asciinema video handling creates a number of temporary files (.uncat,
.start, .stop) which currently go into the source tree. Put them in the
temporary state directory to avoid clutter.
The final processed output is now placed into test_logs/web/ along with the
corresponding .js file with links, since they're essentially test
artefacts. hooks/pre-push is updated to look for those files in the new
location when updating the web site.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Keeping them out of bare /tmp means they will be cleaned up
automatically along with everything else.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Currently they go in the passt source tree with fixed names, which means
their presence can mess with subsequent test runs.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The capture files are more or less a different form of log output from the
tests, so place them in $LOGDIR.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Instead of using the 'temp' and 'tempdir' DSL directives to create
temporary files, use fixed paths relative to __STATEDIR__. This has two
advantages:
1) The files are automatically cleaned up if the tests fail (and even if
that doesn't work they're easier to clean up manually)
2) When debugging tests it's easier to figure out which of the temporary
files are relevant to whatever's going wrong
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Currently the context command dispatch subsystem creates a bunch of
temporary files in $LOGDIR, which is messy. Store them in $STATEDIR which
is for precisely this purpose. The logs from each context still go into
$LOGDIR.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We use this fifo to send messages to the information pane. Put it in the
state directory so it doesn't need its own cleanup.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The test scripts create a bunch of temporary files to keep track of
internal state. Some are made in /tmp with individual mktemp calls, some
go in the passt source directory, and some go in $LOGDIR. This can
sometimes make it messy to clean up after failed test runs.
Start cleaning this up by creating a single "state" directory ($STATEBASE)
in /tmp for all the state or temporary files used by a single test run.
Clean it up automatically in cleanup() - except when DEBUG==1, because
those files can be useful for debugging test script failures.
We create subdirectories under $STATEBASE for each setup function, exposed
as $STATESETUP. We also create subdirectories for each test script and
expose those to the scripts as __STATEDIR__.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We install a cleanup() function with 'trap' in order to clean up temporary
files we generate during the tests. However, we uninstall it after
run_term, which means it won't run in most of the cases where it would be
useful. Even if "run from_term" exits with an error, that error will be
hidden from the run_term wrapper because it's within a tmux session, so we
will return from run_term normally, uninstall the trap and never clean up.
In fact there's no reason to uninstall the trap at all, it works just as
well on the success exit path as an error exit path.
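The pattern is simply to install the trap once and leave it in place,
for instance (cleanup being the existing function):

  trap cleanup EXIT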
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
For example, passt/dhcp rather than dhcp/passt. This is more
consistent with the two_guests and other test groups, and makes some
other cleanups simpler.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Put the pieces together to use the new style context based dispatch for
the passt_in_pasta tests.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Now that we have all the pieces we need for issuing commands both into
namespaces and into guests, we can use those to convert the two_guests
tests to use only the new-style context command dispatch.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Extend the context system in the test scripts to allow executing commands
within a guest. Do this without requiring an existing network in the guest
by using socat to run ssh via a vsock connection.
We do need some additional "sleep"s in the tests, because the new
faster dispatch means that sometimes we attempt to connect before
socat has managed to listen.
For now, only use this for the plain "passt" tests. The "passt_in_ns" and
other tests have additional complications we still need to deal with.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Extend the context system to allow commands to be run in a namespace
created with unshare, and use it for the namespace used in the pasta tests.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
In our test scripts we need to do some ugly parsing of /proc and/or pstree
output in order to get the PIDs of processes running in namespaces so that
we can connect to those namespaces with nsenter or pasta.
This is actually a pretty tricky problem with standard tools. To determine
the PID from the outside of the namespace we need to know how the process
of interest is related to the unshare or pasta process (child? one of
several children? grandchild?) as well as then parsing /proc or ps output.
This is slightly awkward now, and will get worse with future changes I'd
like to make to how processes are dispatched.
The obvious solution would be to have the process of interest (which we
control) report its own PID, but that doesn't work easily, because it is in
a PID namespace and sees only its local PID not the global PID we need to
address it from outside.
To handle this, add a small custom tool, "nsholder". This takes a path
and a mode parameter. In "hold" mode it will create a unix domain socket
bound to the path and listening. In "pid" mode it will get the "hold"ing
process's pid via the unix socket using SO_PEERCRED, which translates
between PID namespaces. In "stop" mode it will send a message to the
socket causing the "hold"ing process to clean up and exit.
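Usage then looks roughly like this (hypothetical sketch following the
modes described above; the path is illustrative):

  nsholder /tmp/passt_test.hold hold &        # inside the namespace
  pid="$(nsholder /tmp/passt_test.hold pid)"  # outside: holder's global PID
  nsholder /tmp/passt_test.hold stop          # tell the holder to exit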
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Convert the pasta and passt tests to use new-style context execution
for the things that run in the "passt" frame. Don't touch the
passt_in_ns or two_guests tests yet, because they run passt inside a
namespace which introduces some additional complications we have yet
to handle.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Convert most of the tests to use the new-style system for issuing all
host commands. We leave the distro tests for now: they use the same
pane for both host and guest commands, which will need some more work
to handle.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We're creating a system for tests to more reliably execute commands in
various contexts (e.g. host, guest, namespace). That transition is going
to happen over a number of steps though, so in the meantime we need to deal
with both the old-style issuing of commands via typing into and
screen-scraping tmux panes, and the new-style system for executing commands in
context.
Introduce some transitional helpers which will issue a command via context
if the requested context is initialized, but will otherwise fall back to
the old style tmux panel based method. Re-implement the various test DSL
commands in terms of these new helpers.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We're moving to a new way for the tests to dispatch commands to run in
contexts (host, guest, namespace, etc.). As we make this transition,
though, we still want the user to be able to watch the commands running
in a context, as they previously could when commands were issued
directly in the pane.
Add a helper to set up a pane to watch a context's log to allow this. In
some cases we currently issue commands from several different logical
contexts in the same pane, so allow a pane to watch several contexts at
once. Also use tail's --retry option to allow starting the watch before
we've initialized the context, which will be useful in some cases.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
For the tests, we need to run commands in various contexts: in the host,
in a guest or in a namespace. Currently we do this by running each context
in a tmux pane, and using tmux commands to type the commands into the
relevant pane, then screen-scrape the output for the results if we need
them.
This is very fragile, because we have to make various assumptions to parse
the output. Those can break if a shell doesn't have the prompt we expect,
if the tmux pane is too small or in various other conditions.
This starts some library functions for a new "context" system, that
provides a common way to invoke commands in a given context, in a way that
properly preserves stdout, stderr and the process return code.
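As a rough sketch of the idea (names here are illustrative, not the real
helpers):

  # Run a command in a named context, logging it while preserving
  # stdout, stderr and the exit status of the command itself
  context_run() {
      __context="$1"; shift
      echo "$*" >> "${LOGDIR}/context_${__context}.log"
      sh -c "$*"
  }
  addr="$(context_run host ip -j -4 addr show)" || echo "host command failed"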
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Our test DSL has a number of paired commands to run something in the
background in a pane, then later to wait for it to complete. However, in
some of the tests we have these mismatched - starting a command in one
pane, then waiting for it in another.
We appear to get away with this for some reason, but it's not correct and
future changes will make it cause more problems.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
test_iperf3() is a pretty inscrutable mess of nested background processes.
It has a number of ugly sleeps needed to wait for things to complete.
Rewrite it to be cleaner:
* Use the construct (a & b & wait) to run 'a' and 'b' in parallel, then
wait for them both to complete before continuing (see the sketch after
this list)
* This allows us to wait for both the server and client to finish, rather
than sleeping
* Use jq to do all the math we need to get the final result, rather than
jq followed by some complicated 'bc' mangling
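A rough, hypothetical sketch of the two constructs:

  (
      iperf3 -s -p 10001 -1 -J > server.json &
      { sleep 1; iperf3 -c 127.0.0.1 -p 10001 -t 10 > /dev/null; } &
      wait
  )
  # jq can extract and combine the figures directly, no bc mangling needed
  jq -r '.end.sum_received.bits_per_second' server.json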
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Currently all the throughput tests are run for 30s. This is reflected
both in the actual parameters given to the iperf commands and in the
matching sleeps in test_iperf3.
Allow this to be adjusted more easily with a new parameter to test_iperf3.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[sbrivio: Reflect new parameter in comment to test_iperf3()]
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
These two commands in the DSL to run an iperf client and server are always
used together, and some of the parameters must match between them. The
iperf3s must also be run more or less immediately after iperf3c, since
iperf3c will run a client in the background after a sleep and requires a
server to be running before it will work.
A bunch of things can be made cleaner if we make a single DSL command that
runs both sides of the test. For now make the combined command work
exactly like the two commands together did, warts and all.
This does lose the ability for the DSL scripts to give additional options
to the iperf3 server, but we weren't using that anyway.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
On new Ubuntu 22.04 images, stopping systemd-resolved so that the
dhclient script can override resolv.conf doesn't work anymore. I
originally used that hack to avoid introducing a delay which is
needed when running it on TCG.
Keep systemd-resolved running instead, and wait for it to be ready
by retrying to resolve a domain a few times before installing
packages, so that we don't add another ugly delay that might
unnecessarily slow down things even further.
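The retry loop is along these lines (a sketch; domain and iteration
count are illustrative):

  for __i in $(seq 1 10); do
      nslookup passt.top > /dev/null 2>&1 && break
      sleep 1
  done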
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Performance tests use iperf3(1) with large windows, and these sysctl
entries are needed to run them unmodified.
The passt demo uses perf(1) to report syscall overhead, and that
needs access to hardware performance counters for unprivileged
users.
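As an indication only, these are the kind of knobs involved (the exact
keys and values here are assumptions, not a copy of the committed file):

  # Guesses at the relevant settings, not the committed ones
  sysctl -w net.core.rmem_max=16777216 net.core.wmem_max=16777216
  sysctl -w kernel.perf_event_paranoid=-1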
Reported-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
We start getting prompts about restarting outdated services: we're
using daily images but they might have been cached for a while now.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Currently in at least some of the testcases we kill qemu processes we're
done with by issuing a Control-C to the tmux pane it's running in. That
makes things harder as we try to move towards allowing "headless" testing
without tmux.
So, instead always use an explicit kill on a pid derived from a pidfile
for killing qemu. Note that we don't need to remove the pidfiles
afterwards, because qemu does that itself when terminated.
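That is, assuming qemu was started with -pidfile, the teardown reduces to
something like:

  kill "$(cat qemu.pid)"    # path passed via qemu's -pidfile, name illustrative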
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The test scripts run with sh -e, which means they will stop if any commands
return an error. That's generally desirable, because we won't continue
after things are hopeless due to an earlier step failing.
Unfortunately, the tmux setup we run the script in means it's not obvious
where any error messages related to such a failure will go. Depending on
exactly where the error occurs they might go to the original terminal
hidden behind tmux, or they might go to a tmux pane that's not visible in
the normal layouts.
To make it easier to find such error messages, redirect output and
errors from the test script itself to a 'script.log' file in the logs
directory. When in DEBUG=1 mode, additionally 'set -x' so we log all the
commands we execute to that file.
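In shell terms this amounts to something like the following sketch (not
necessarily the exact lines):

  exec > "${LOGDIR}/script.log" 2>&1
  [ "${DEBUG}" = "1" ] && set -x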
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
For the passt and passt_in_ns tests we have a "shutdown" testcase that
checks for any errors from the passt process we were using (including
valgrind warnings). Do the same for pasta tests, so that we catch any
error codes from the pasta process.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The "valgrind" test cases are designed to pick up errors reported when
passt is running under valgrind. But what it actually does is just kill
the passt process, then see if it had a non-zero exit code. That means it
will equally well pick up any other problems which caused passt to exit
with an error status: either something detected within passt or as a result
of passt being killed by an unexpected signal.
The fact that the "valgrind" test is actually responsible for shutting down
the passt process is non-obvious and can lead to problems when selectively
running tests during debugging.
Rename the "valgrind" tests to "shutdown" tests and run them regardless of
whether we're using valgrind or not. This allows us to remove an ugly
special case in the passt_in_ns teardown code.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The queries we use in the test scripts to locate the external interface
or gateway can return multiple results. We get away with this because the
way we parse command output only looks at the last line. It's not really
correct, though, and improvements to our handling of command output will
mean it breaks.
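For illustration, a query of that kind which yields a single result could
look like this (not necessarily the exact expression used):

  ip -j -4 route show | jq -r '[.[] | select(.dst == "default").dev] | .[0]'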
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Currently the build tests and distro tests share a common setup function.
That works for now, but changes we want to make will mean they need
slightly different setup, so split the setup functions in preparation.
Currently, neither build nor distro tests have any teardown function.
Again, future changes are going to mean we need to do some teardown, so
create some empty-for-now teardown functions in preparation.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
When recording tests or demos with asciinema we generate several temporary
files during post-processing. Add these to the .gitignore file so they're
not accidentally committed.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The DEMO_XTERM and CI_XTERM variables defined in test/lib/term aren't used
anywhere. Remove them.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Both clang-tidy and cppcheck linting are handled by the same test file,
test/build/static_checkers. The two linters are independent of each other
though, and each one takes quite a long time. Split them into separate
files to make it easier to control which are executed from the top level
test script.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We've recently converted most of our tests to use socat instead of
nc/netcat/ncat, because socat is more powerful and we don't need to deal
with the several possible variants of netcat.
We still use nc or ncat for the distro tests. Because there we control
the guest environment and can pick our tools, there isn't the same reason
to switch to socat. However, using socat here as well makes the tests
a bit easier to read, and doesn't require people reading or modifying them
to become familiar with an additional tool.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[sbrivio: keep using netcat-openbsd in Ubuntu 16.04 ppc64 test, as socat
is unavailable there]
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Distribution packages reasonably expect to have a human-readable
Markdown version of the README under /usr/share/doc/, but all we have
right now is a heavily web-oriented version.
Introduce an ugly hack to strip web-oriented parts from the current
README and install it.
It should probably work the other way around: a human-readable README
could be used as a source for the web page. But cgit needs a file
that's in the tree, not something that can be built, and
https://passt.top/ is based on cgit. It should eventually be doable
to work around this in cgit, instead.
Reported-by: Benson Muite <benson_muite@emailplus.org>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Given that a three-way git merge was enough to cope with context
changes in man pages, it's probably a good idea to enable that for
'git am' in the demo too.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Now that the back end allows passt/pasta to use different external
interfaces for IPv4 and IPv6, use that to do the right thing in the case
that the host has IPv4 and IPv6 connectivity via different interfaces.
If the user hasn't explicitly chosen an interface, separately search for
a suitable external interface for each protocol.
As a bonus, this substantially simplifies the external interface probe. It
also eliminates a subtle confusing case where in some circumstances we
would pick the first interface in interface index order, and sometimes in
order of routes returned from netlink. On some network configurations that
could cause tests to fail, because the logic in the tests was subtly
different (it always used route order).
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
By default, passt itself attaches to the first host interface with a
default route. However, when determining the host interface name the tests
implicitly select the *last* host interface: they use a jq expression which
will list all interfaces with default routes, but the way output detection
works in the scripts, it will only pick up the last line.
If there are multiple interfaces with default routes on the host, and they
each have a different address, this can cause spurious test failures.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
A couple of days ago, we started running out of space there as we're
about to install gcc -- about 50 MiB are missing.
Given that virt-resize (which could be conveniently invoked by the
Makefile for tests) reorders partitions if we expand the first one,
resize the image using qemu-img from the test script itself, and then
take care of expanding root partition and filesystem online later.
This is probably a temporary hack, so I'm not looking for a more
generic or elegant solution at the moment.
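The resize step itself boils down to something like this (image path and
size are illustrative), with the root partition and filesystem then grown
from inside the guest once it has booted:

  qemu-img resize test.qcow2 +512M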
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
For some reason using https to clone from the passt git repo is very slow,
at least from network-distant places. Use git protocol in the demo instead
to avoid a tedious wait to get the source.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
With pasta, the namespace interface name is generally the same as the host
interface name. We already rely on this in the dhcp/pasta tests, but for
no clear reason ndp/pasta separately determines the host interface name.
Remove this unnecessary step.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The iperf based test commands create a bunch of .bw and .pid files for
each iperf client and server. The server side .bw files are cleaned
up afterwards, but the pid files are not, and none of the client side
files are cleaned up. The latter doesn't really matter when the
client is run on ephemeral guests, but sometimes we run it in a
namespace that shares the filesystem with the host.
Clean up all of these files after the tests.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Before starting the guests, these tests configure addresses in a pasta
namespace using dhclient. However, because it's a user namespace, it's
not running as "real" root and can't write to the dhclient pid file.
This doesn't stop it working, but causes an ugly error message which we
can avoid by using the --no-pid option.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
All the UDP tests use :> to truncate some temporary data files. This
appears to be so that they're empty before writing data to them with tee.
However tee, by default, truncates its output file anyway (you need tee -a
to append). So drop the unnecessary truncations.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
teardown_passt_in_ns() sends a ^D to the NS pane, which appears to be
intended to terminate the nsenter running there, leaving the namespace.
However, we've also sent a ^D to the PASST pane which will exit the pasta
instance which created the namespace. With the namespace destroyed the
nsenter in the NS pane will be killed, so it does not need to be exited
explicitly.
In fact sending the extra ^D can be harmful, since it will exit the shell
in which the nsenter was run, causing the whole pane to be closed. That
can then mean that the "pane_wait NS" hangs indefinitely. I believe this
will sometimes work, because there's a race between the various operations
here, but it should be more reliable without the extra ^D.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Commit 41c02e10 ("tests: Use nmap-ncat instead of openbsd netcat for pasta
tests") updated the pasta tests to use the nmap version of ncat instead of
the openbsd version, for greater portability.
For some upcoming changes, however, we'll be wanting to use socat.
"socat" can do everything "ncat" can and more, so let's move all the
tests using host tools (either directly on the host or via mbuto
generated images) to using socat instead.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[sbrivio: Fix a typo in port specification, 31337 instead of x31337]
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
In what looks like a copy-and-paste error from the TCP script, the
udp/passt test script creates a test file called '__TEMP_BIG__', while
the commands in it use the variable __TEMP__. Correct this so that a) we
actually transfer the data we created for the purpose and b) we don't
leave a stale __TEMP_BIG__ file in the current directory.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The dhcp/passt and dhcp/passt_in_ns tests at least, and maybe others
use 'hout' commands that need to be able to detect empty output.
However, we don't set PS1, which means the screen-scraping logic which
detects this may not be reliable. In addition, if the host is using a
recent bash, it will have bracketed paste mode enabled which will also
add escape codes which will mess up the empty output detection.
Set the prompt and disable bracketed paste mode from the passt and
passt_in_ns setups to avoid these problems.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Currently our small custom dhclient-script only handles the 'domain-name'
option, which can just list a single domain, not the 'domain-search'
option, which can handle several. Correct it to handle both.
We also weren't emptying the resolv.conf file before we began, which
could lead to surprising contents after multiple DHCP transactions.
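A hedged fragment of what that handling looks like, using the variables
dhclient exports to its script:

  : > /etc/resolv.conf
  if [ -n "${new_domain_search}" ]; then
      echo "search ${new_domain_search}" >> /etc/resolv.conf
  elif [ -n "${new_domain_name}" ]; then
      echo "search ${new_domain_name}" >> /etc/resolv.conf
  fi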
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We previously introduced a change to passt to handle the case where the
host machine is its own nameserver - so resolv.conf points to 127.0.0.1.
In this case we advertise the gateway as the DNS server for the guest,
which in turn will be redirected back to the host by existing passt logic.
The dhcp/passt test doesn't handle this case correctly, so add some logic to
account for it.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
To check publishing of DNS information via DHCP, we need to extract a list
of nameservers and/or search domains from resolv.conf in the test script.
The current version (usually) leaves the result with a trailing ','.
That's usually ok because it happens on both guest and host sides. However
it's kind of confusing, and might stop working if the host had a
resolv.conf without a trailing \n on the last line. It also makes some
later changes we'll need more difficult.
So, normalize the output from resolv.conf a bit further, removing any
trailing ','. It turns out we can do this with a slightly less complex
sed expression than the one we already have.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Although it can operate without them, dhclient can issue errors if it
doesn't have /var/run to write a pid file and /var/lib to write a leases
file. Create those in mbuto.img to stop it complaining.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We now supply a minimal dhclient-script of our own in the mbuto boot image.
There are some problems with it, so add some basic logging to help debug
it.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Modern Fedora (and RHEL) systems have /sbin as a symlink to /usr/sbin
(along with a number of similar links). Along with that it expects to
find dhclient-script in /usr/sbin/dhclient-script rather than
/sbin/dhclient-script.
Link them together in our mbuto image so that the Fedora build of dhclient
can find it.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
AFAICT the symlink we created in mbuto from /usr/bin/bash to /bin/sh was
for the benefit of a dhclient-script which used /usr/bin/bash as its
interpreter (e.g. in Fedora). That was a bit risky if the script really
did require bash and we linked it to dash or another shell.
We now supply our own custom dhclient-script, so we don't need the
link any more.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Similar case as the one fixed by David's patch "tests: Remove
unnecessary ^D in passt_in_ns teardown": we happen to pseudo-randomly
close panes by unnecessarily exiting the parent shells there, and
subsequent pane_wait directives hang.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
For some reason, I now have to update some "vendored" dependencies
on a fresh git clone, at least in my environment, before building.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
This was dependent on my own environment where I usually have /sbin
in $PATH. If that's missing, given that we're running dhclient as
user, we won't find it.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Parsing pstree's output is somewhat unreliable: there might be
multiple pasta instances running on the same host, and depending on
the overall output width pstree might truncate some branches.
Ask pasta to save its PID to file, and use that as parameter for
pgrep to find the PID of the interactive shell whose user and network
namespaces we want to join.
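Roughly (a hedged sketch; --pid is the option described above, everything
else is illustrative):

  pasta --pid pasta.pid -- sh -c 'sleep 300' &
  sleep 1
  __target="$(pgrep -P "$(cat pasta.pid)")"
  nsenter -t "${__target}" -U -n --preserve-credentials ip link show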
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
The Fedora test file extracts some information from the host resolv.conf
into a DNS6 variable which is then never used. Remove this unnecessary
step, which is presumably a leftover from an earlier iteration.
This was the only user of 'head' and 'sed' in the test file, so those can
also be removed from the required tools. The debian and ubuntu test files
also listed 'head' and 'sed' as tools, although they don't use them,
I'm guessing because of an earlier version which had the same DNS6 code.
Remove those as well.
The opensuse test file still actually uses DNS6, so leave it there for now.
The DNS handling and network config handling for SuSE looks to be kind of
broken, but fixing that is a job for another day.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Before booting the guest images, the distro test cases need to modify the
guest images, using virt-edit and guestfish, to boot in the way we need.
At present this gets repeated on every test run, even though it's not
really doing anything we want to test for.
In addition many of the images have the same preparation steps leading to
a lot of duplicated stages in the tests. A number of additional images can
be prepared using common steps, even if the ones used now have small
differences.
Therefore move the preparation of most of the guest images to the asset
build phase, where they can be done a single time for multiple test runs,
using a common preparation script. We can even avoid making a copy of the
disk image for booting, by using qemu's -snapshot option.
A few of the distros (openSUSE and older Ubuntu) do need different steps.
For now we don't change how they are run; they could possibly be handled
more like this in future.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Rather than directly download distro images from the test scripts, handle
all the downloads during the test asset build, then just clone them for
the tests themselves. This avoids repeated downloads which can be very
slow when debugging failing tests.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[sbrivio: Add OPENSUSE_IMGS to DOWNLOAD_ASSETS in Makefile, and note
that xzcat doesn't take a -O option in test/distro/opensuse]
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Currently test/run uses wildcards to run all of the tests in a directory.
However, that wildcard list is filtered down by the "onlyfor" directives
in the test files... usually to a single file.
Therefore, just explicitly list the files we *really* want to run for this
test mode. This makes it easier to see at the top level what tests will
be executed, and to change that list temporarily while debugging specific
failures.
This means the "onlyfor" directive no longer has any purpose, and we can
remove it. "onlyfor" was also the only user of the $MODE variable, so we
can remove that too.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The top level listing control of which tests to run is in test/run, however
it uses the test() function which runs an entire directory of test files,
filtered by some criteria. This makes it awkward to narrow down to a
subset of tests when debugging a specific failure.
To make this easier, have test() take an explicit list of test files to
run, and have the caller in test/run handle the directory traversal. The
construct we use for this is pretty awkward, because at this point in
test/run we're in the source tree root directory rather than in test/.
Later cleanups will improve that.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The test scripts support a "req" directive which requires one test script
to be run before another. It's implemented by doing a topological sort
based on these directives in the runner scripts, which is about as awkward
as you'd expect in Bourne shell.
It turns out we only use this functionality in one place - to make the
"make install" test run after the plain "make" test. We also already have
a simpler way of making sure tests run in a specific order: just put them
into the same test script file.
So, remove support for the "req" directive and just fold the build/all and
build/install test scripts together.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Apparently qemu's ARM virt machine needs to be explicitly given a firmware
image, rather than just supplying a sane default. Unfortunately the EDK2
firmware image we need isn't in the same place on all host distros.
Currently the test scripts hardcode the Debian location, meaning it will
break on hosts that have it somewhere else. This patch searches multiple
locations for the firmware, and creates a local link during the asset build
phase, which the tests can then use.
For now it only searches the locations used by Debian and Fedora, but
that's a small improvement in robustness already, and can be later improved
further if we need to.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Move the download of mbuto and using it to create a sample initramfs to
the asset build makefile, rather than embedding it in the test scripts
themselves.
The two_guests tests used to use two separate copies of the mbuto
image. As an initramfs the mbuto image is strictly readonly though,
so that's not necessary. So, also use the same image for both guests.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
A number of passt/pasta testcases have initial steps which are just about
building images or other assets we need for the test proper. Repeating
these for each test run can be quite costly.
This patch makes a start on moving this sort of test asset building to
a separate phase before running the tests proper. For now just add a
Makefile to handle the asset building (although it doesn't build
anything yet), and make the path where we'll be building the assets
available to the tests.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
A lot of tests and examples invoke qemu with the command "kvm". However,
as far as I can tell, "kvm" being aliased to the appropriate qemu system
binary is Debian specific. The binary names from qemu upstream -
qemu-system-$ARCH - also aren't universal, but they are more common (they
should be good for both Debian and Fedora at least).
In order to still get KVM acceleration when available, we use the option
"-M accel=kvm:tcg" to tell qemu to try using either KVM or TCG in that
order.
A number of the places we invoked "kvm" are expecting specifically an x86
guest, and so it's also safer to explicitly invoke qemu-system-x86_64.
Some others appear to be independent of the target arch (just wanting the
same arch as the host to allow KVM acceleration). Although I suspect there
may be more subtle x86 specific options in the qemu command lines, attempt
to preserve arch independence by using $(uname -m).
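In practice that means invocations along these lines (a sketch; the
kernel and initrd variables are placeholders):

  qemu-system-"$(uname -m)" -M accel=kvm:tcg -m 1024 -nographic \
      -kernel "${KERNEL}" -initrd "${INITRD}"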
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Several tests run ppc64le guests using "qemu-system-ppc64le". But, at the
system level there's no difference between ppc64 and ppc64le - it's the
same hardware, just placed into different endian modes by OS early boot
code. Reflecting that, qemu only supplies a single "qemu-system-ppc64".
Some distros alias qemu-system-ppc64le to qemu-system-ppc64 (Debian does),
but it's best not to count on this (Fedora doesn't, for example).
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
David reports that dhclient-script(8) on Fedora needs a number of
binaries that are not included in PROGS of the current mbuto profile,
and we would also need to include hostnamectl(1) there, which will
fail without a systemd init.
Embed a minimal script for dhclient(8) in the profile itself, written
to /sbin/dhclient-script at boot, to just check what we need to check
out of DHCP and DHCPv6 functionality.
While at it, drop busybox and logger from PROGS, as we don't need them,
and add hostname(1). While DHCP option 12 isn't supported yet by the
DHCP implementation in passt, we should probably add it soon.
Note: owing to the simplicity of this script, we now need to bring up
the interface before starting dhclient: add this in test scripts where
it's not the case yet.
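For illustration only (the embedded script differs, but relies on the
same environment variables dhclient passes to its script):

  # Illustrative fragment, not the script actually embedded in the profile
  [ -n "${new_ip_address}" ] && \
      ip addr add "${new_ip_address}/${new_subnet_mask}" dev "${interface}"
  [ -n "${new_routers}" ] && \
      ip route add default via "${new_routers}" dev "${interface}"
  [ -n "${new_ip6_address}" ] && \
      ip addr add "${new_ip6_address}/${new_ip6_prefixlen}" dev "${interface}"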
Reported-by: David Gibson <david@gibson.dropbear.id.au>
Suggested-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
This depends on a future change in mbuto to accept external profile
files. Add a file defining what we need for tests and demos, dropping
udhcpc and script as they're not needed anymore, and switch to it.
Suggested-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
There are several places which explicitly list the various generated
binaries, even though a $(BIN) variable already lists them. There are
several more places that list all the manpage files; introduce a
$(MANPAGES) variable to remove that repetition as well.
Tweak the generation of pasta.1 as a link to passt.1 so it's not just made
as a side effect of the pasta target.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[sbrivio: add passt.1 and qrap.1 to guest files for distro tests]
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
A number of the testcases use options specific to the OpenBSD version of
netcat. That's available in Debian, but not easily available in Fedora.
Switch the pasta tests to using the nmap version of netcat (a.k.a. ncat).
This is easily available in both Debian and Fedora, and appears to be a
bit more modern and maintained as well.
ncat generally requires explicit listen addresses (which is good for
clarity anyway). Its default options appear to remove the need for the
-N and -q options.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[sbrivio: changed one ncat listening address to IPv6 loopback]
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
For some reason, the passt/pasta tests and examples use dhclient for
DHCPv6, but in most cases use udhcpc for DHCPv4. Change it to use dhclient
for both DHCPv4 and DHCPv6. This means one less tool we need for testing,
plus dhclient is easily available on Fedora whereas udhcpc is not.
Note that the passt tests still rely on udhcpc indirectly because mbuto
wants to put it into the guest images it generates.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
A number of tests and examples use dhclient in both IPv4 and IPv6 modes.
We use "dhclient -6" for IPv6, but usually just "dhclient" for IPv4. Add
an explicit "-4" argument to make it more clear and explicit.
In addition, when dhclient is run from within pasta it usually won't be
"real" root, and so will not have access to write the default global pid
file. This results in a mostly harmless but irritating error:
Can't create /var/run/dhclient.pid: Permission denied
We can avoid that by using the --no-pid flag to dhclient.
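That is, invocations of this shape (the interface name is illustrative):

  dhclient -4 --no-pid eth0
  dhclient -6 --no-pid eth0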
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
ip(8)'s ability to take abbreviated arguments (e.g. "li sh" instead of
"link show") is very handy when using it interactively, but it doesn't make
for very readable scripts and examples when shown that way.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
distro/fedora contains two versions of the basic tests, used for different
Fedora versions. One uses explicit listening address for netcat in some
extra places, the other does not. Apparently the older netcat versions
didn't require the explicit addresses. Not supplying addresses doesn't
test anything useful though, just a detail in netcat's behaviour. So,
it's cleaner to just always supply explicit addresses.
In addition, we're explicitly expecting the nmap version of ncat, also
known as "ncat". So, it's more explicit what we're after if we invoke it
via that name rather than "nc", which will go via an /etc/alternatives
link.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[sbrivio: Fix port argument in distro_quick_pasta_test{,_fedora34} too]
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Having all those 'echo $?' is rather distracting in demos.
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
...there are no 'test' directives in demo, and this causes a
script failure.
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
'sleep' always needs an argument, this was meant to introduce
a 2 seconds delay.
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
There's no need to return non-zero if there have been failures in
run(), because the exit value is already determined from the number
of failures reported in the log file.
Return zero, so that this doesn't cause the script to fail, given we
now run it with -e.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
There are a few occurrences of this assignment, which are needed to
re-add ::1 as loopback address after the MTU has been increased
back from a value below 1280 bytes.
This one, however, is redundant, and causes an error in the
execution.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
A number of individual test cases use '*out' commands to check for success
of specific commands they've issued. Now that the test harness is testing
for success of all issued commands as a matter of course, we no longer need
to do this.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Now that we have pane_status to check the success of commands issued to
panes, we can more easily check for the success of the 'which' commands
used to check tool availability, rather than constructing, then parsing
special "skip" output.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
When we use pane_wait to wait for a command issued to a tmux pane to finish
we have no idea whether the command succeeded or not. This means that the
test scripts can keep running long after the point something vital has
failed, making it difficult to work out what went wrong.
Add a new pane_status command that checks for success of the issued command
and use it in most places instead of pane_wait. We still need explicit
pane_wait where we're gathering explicit output with pane_parse, because
the way we check the status with 'echo $?' means we lose track of that
output.
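A sketch of the shape of the new helper (pane_wait and pane_parse are the
existing helpers; pane_run stands in for whatever types the command into
the given pane):

  pane_status() {
      pane_run "$1" 'echo $?'
      pane_wait "$1"
      [ "$(pane_parse "$1")" = "0" ]
  }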
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[sbrivio:
- instead of quitting the script, make a test fail if a command
issued in a pane fails during a test, and loop until the status code is
numeric in pane_status() as a hack to make it a bit more robust
- retain usage of pane_wait() in iperf3 and teardown functions as we
interrupt iperf3, passt, and pasta, so a non-zero exit code is expected
- drop bogus ns_{1,2}_wait() calls in teardown_two_guests(), those
functions were never implemented
- use pane_status() for "guest" test directives too
]
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Most commands issued during the testing scripts aren't explicitly checked
for errors. Therefore, if they fail, the shell will just keep on
executing. This makes it difficult to figure out where things started
going wrong if things fall over.
Run the whole script with the set -e mode so that it will exit in the case
of any (unchecked) failing command. To make this work we do need to add
explicit checks / fallbacks for some commands which we expect to fail.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[sbrivio: use sh -e instead of setting -e later, so that we don't miss
anything before set -e is issued]
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
pane_parse() attempts to grab the output from the last command issued
into a tmux pane. It strips out control characters using tr, which in
particular includes the final \r\n. However, this won't fully strip
out terminal escape sequences. In particular this breaks if the shell
in the pane is bash, with enable-bracketed-paste enabled in readline.
That issues terminal sequences to enable and disable bracketed paste
mode around every shell prompt.
We can work around this because these escapes are followed by a \r
(CR). More generally, it seems reasonable to assume that any terminal
shenanigans followed by a CR, but not an LF is supposed to be hidden.
So, use sed to strip everything before the second last CR. We still
need the tr to remove the final \r\n from the string (sed processes a
line at a time, and doesn't consider the CRLF part of the buffer it's
processing).
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[sbrivio: modify regexp to keep foo\r\r\n unchanged, by matching on at
least one CR and a non-CR afterwards: that's the usual output pattern
for bash on Debian 8 and Debian 9]
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
run_term() uses tmux set-option -g to globally set the default shell.
Unfortunately this hits a chicken-and-egg problem that's common with many
of tmux's session options. If there isn't already a tmux server running,
we can't connect to set the option. If we attempt this after starting our
session (and therefore the server), then the session will already be
started with the previous default shell.
In any case it's not a good idea to set tmux global options, since that
might interfere with whatever else the user is doing in tmux. So, instead
set the default-shell option locally to the session after starting it. To
make sure we get the right shell for our initial script, explicitly invoke
/bin/sh to interpret it.
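Concretely, something along these lines (a sketch; the session name is
illustrative):

  tmux new-session -d -s passt_test /bin/sh -c './run from_term'
  tmux set-option -t passt_test default-shell /bin/sh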
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The semantics of tmux's update-environment option are a bit confusing.
It says it means the given variables are copied into the session
environment from the source environment, but it's not entirely clear
what the "source" environment means.
From my experimentation it appears to be the environment from which
the tmux *server* is launched, not the one issuing the 'new-session'
command. That makes it pretty much useless, certainly in our case where
we have no way of knowing if the user has pre-existing tmux sessions.
Instead use the new-session -e option to explicitly pass in the variables
we want to propagate.
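For example (a sketch: tmux 3.2 or later is assumed for -e, and the
variable passed is just an example):

  tmux new-session -d -s passt_test -e DEBUG="${DEBUG}" \
      /bin/sh -c './run from_term'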
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The DEBUG option for test/run enables debugging options to passt/pasta,
however that doesn't help with debugging the test scripts themselves, which
are fairly fragile.
Extend the DEBUG option so it also prints information on each command in
the test scripts to make it easier to work out where things are falling
over.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The XVFB variable is initialized at the beginning of test/run then never
used again. I'm assuming it's a leftover from some earlier iteration.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Ignore various files generated during build or test.
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reflect the recent changes in the Podman adaptation (no port
forwarding by default).
It turns out that by running two iperf3 processes, sometimes
slirp4netns blocks the second connection until the first test is
done, thus doubling the throughput. Use a single process for
slirp4netns with slirp4netns port handling.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
I didn't have time to investigate the root cause for the virtio_net
TX hang yet. Add a quick work-around for the moment being.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Pass to seccomp.sh a list of additional syscalls valgrind needs as
EXTRA_SYSCALLS in a new 'valgrind' make target, and add corresponding
support in seccomp.sh itself.
In test setup functions, start passt with valgrind, but not for
performance tests.
Add tests checking that valgrind exits without errors after all the
other tests in the group are done.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
--debug can be a bit too noisy, especially as single packets or
socket messages are logged: implement a new option, --trace,
implying --debug, that enables all debug messages.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Build-time selection of AVX2 flags and routines is not practical for
distributions, but limiting AVX2 usage to checksum routines with
specific run-time detection doesn't allow for easy performance gains
from auto-vectorisation of batched packet handling routines.
For x86_64, build non-AVX2 and AVX2 binaries, and implement a simple
wrapper replacing the current executable with the AVX2 build if it's
available, and if AVX2 is supported by the current CPU.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
For demos, cool-retro-term(1) looked fancier, but several threads of
that and ffmpeg(1) just mess with performance testing.
The CI videos started getting really big as well, and they were
difficult to read.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
...showing setup steps, some peculiarities as --net option, and a
general side-to-side comparison with slirp4netns(1), including
"quick" TCP and UDP throughput and latency benchmarks.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
That test fails sometimes, it looks like iperf3 is still sending
initial messages that are too big. I'll need to figure out why,
but given that 256 bytes is not really an expected MTU, drop the
thresholds to zero for the moment being.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Removing the needrestart package doesn't seem to work anymore, and
I'm getting again prompts to restart services after installing gcc
and make: export DEBIAN_FRONTEND=noninteractive before installing
packages to avoid that.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
To reach (at least) a conceptually equivalent security level as
implemented by --enable-sandbox in slirp4netns, we need to create a
new mount namespace and pivot_root() into a new (empty) mountpoint, so
that passt and pasta can't access any filesystem resource after
initialisation.
While at it, also detach IPC, PID (only for passt, to prevent
vulnerabilities based on the knowledge of a target PID), and UTS
namespaces.
With this approach, if we apply the seccomp filters right after the
configuration step, the number of allowed syscalls grows further. To
prevent this, defer the application of seccomp policies after the
initialisation phase, before the main loop, that's where we expect bad
things to happen, potentially. This way, we get back to 22 allowed
syscalls for passt and 34 for pasta, on x86_64.
While at it, move #syscalls notes to specific code paths wherever it
conceptually makes sense.
We have to open all the file handles we'll ever need before
sandboxing:
- the packet capture file can only be opened once, drop instance
numbers from the default path and use the (pre-sandbox) PID instead
- /proc/net/tcp{,v6} and /proc/net/udp{,v6}, for automatic detection
of bound ports in pasta mode, are now opened only once, before
sandboxing, and their handles are stored in the execution context
- the UNIX domain socket for passt is also bound only once, before
sandboxing: to reject clients after the first one, instead of
closing the listening socket, keep it open, accept and immediately
discard new connections if we already have a valid one
Clarify the (unchanged) behaviour for --netns-only in the man page.
To actually make passt and pasta processes run in a separate PID
namespace, we need to unshare(CLONE_NEWPID) before forking to
background (if configured to do so). Introduce a small daemon()
implementation, __daemon(), that additionally saves the PID file
before forking. While running in foreground, the process itself can't
move to a new PID namespace (a process can't change the notion of its
own PID): mention that in the man page.
For some reason, fork() in a detached PID namespace causes SIGTERM
and SIGQUIT to be ignored, even if the handler is still reported as
SIG_DFL: add a signal handler that just exits.
We can now drop most of the pasta_child_handler() implementation,
that took care of terminating all processes running in the same
namespace, if pasta started a shell: the shell itself is now the
init process in that namespace, and all children will terminate
once the init process exits.
Issuing 'echo $$' in a detached PID namespace won't return the
actual namespace PID as seen from the init namespace: adapt
demo and test setup scripts to reflect that.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
With a recent 5.15 kernel, passing a huge window size to iperf3 with
lower MTUs makes iperf3 stop sending packets after a few seconds --
I haven't investigated this in detail, but the window size will be
adjusted dynamically anyway and not passing it doesn't actually
affect throughput, so simply drop the option.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Some recent change to xenial-updates broke dependencies for gcc,
it can't be installed anymore. Skipping apt-get update leaves gcc
dependencies in a consistent state, though.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
The shell might report 'nc -6 -l -p 9999 > /tmp/ns_msg' as done
even after the subsequent 'echo' is done: wait one second before
reading out /tmp/ns_msg, to ensure we read that instead of the
"Done" message.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
The new tests check build and a simple case with pasta sending a
short message in both directions (namespace to init, init to
namespace).
Tests cover a mix of Debian, Fedora, OpenSUSE and Ubuntu combinations
on aarch64, i386, ppc64, ppc64le, s390x, x86_64.
Builds tested starting from approximately glibc 2.19, gcc 4.7, and
actual functionality approximately from 4.4 kernels, glibc 2.25,
gcc 4.8, all the way up to current glibc/gcc/kernel versions.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
For distribution tests, we'll repeat some tests frequently. Add a
'def' directive that starts a block, ended by 'endef', whose
execution can then be triggered by simply giving its name as a
directive itself.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
We might have highlighting and slightly different prompts across
different distributions, allow a more reasonable set of prompt
strings to be accepted as prompts.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
The throughput results in this test look quite variable, slightly
lower figures look reasonable anyway.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Debug information might be printed after a prompt is seen,
just wait those 3 seconds and be done with it.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
An inline comment prefixed by a space doesn't mean the space
is dropped, and sleep(1) will get a blank in its argument.
Move the comment to its own line.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
They'll start DAD as we bring up the interface, and the DHCPv6
client might be unreasonably delayed if we start it too early.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
...mostly false positives, but a number of very relevant ones too,
in tcp_get_sndbuf(), tcp_conn_from_tap(), and siphash PREAMBLE().
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
With recent improvements, we're not CPU-bound at all while testing
UDP performance. Give the VM more memory and CPUs, forward two
additional ports, start up to four threads in parallel, and give
single iperf3 threads higher bandwidth targets.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
On most recent CPUs, that's a better indication of all-core turbo
frequency, or non-turbo frequency, than /proc/cpuinfo.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
SPDX tags don't replace license files. Some notices were missing and
some tags were not according to the SPDX specification, too.
Now 'reuse --lint' from the REUSE tool (https://reuse.software/) passes.
Reported-by: Martin Hauke <mardnh@gmx.de>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>