When determining the namespace's IPv6 address in the perf test setup, we
explicitly filter for addresses with a 64-bit prefix length. There's no
real reason we need that - as long as it's a global address we can use it.
I suspect this was copied without thinking from a similar example in the
NDP tests, where the 64-bit prefix length _is_ meaningful (though it's not
entirely clear if the handling is correct there either).
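For illustration, a query along these lines (using the usual ip -j plus
jq(1) pattern; the interface variable is made up) is all we need:

  ip -j -6 addr show dev "$IFNAME" | \
    jq -r '[.[].addr_info[] | select(.scope == "global")][0].local'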
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Currently nstool die()s on essentially any error. In most cases that's
fine for our purposes. However, it's a problem when, in "hold" mode, we
get an IO error on an accept()ed socket. That could simply indicate that
the control client aborted prematurely, in which case we don't want to
kill off the namespace we're holding.
Adjust these to print an error, close() the control client socket and
carry on. In addition, we need to explicitly ignore SIGPIPE in order not
to be killed by an abruptly closed client connection.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
nstool in "exec" mode will propagate some signals (specifically SIGTERM) to
the process in the namespace it executes. The signal handler which
accomplishes this is called simply sig_handler(). However, it turns out
we're going to need some other signal handlers, so rename this to the more
specific sig_propagate().
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Most of our transfer tests using socat use 'sleep' to wait for the server
side to be ready before starting the client. However in two_guests/basic
the sleep is in the wrong place: rather than being between starting the
server and starting the client, it's after waiting for the server to
complete. This causes occasional hangs when the client runs before the
server is ready - in that case the receiving guest sends an RST, which we
don't (currently) propagate back to the sender.
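The intended ordering, sketched as plain shell rather than the actual
test directives (port and file names are made up):

  socat -u TCP6-LISTEN:10004 OPEN:rx,create,trunc &  # server, receiving guest
  sleep 1                                            # let it start listening
  socat -u OPEN:tx TCP6:[$ADDR]:10004                # client, sending guest
  wait                                               # only now wait for the server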
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Remove debian-9-nocloud-amd64-daily-20200210-166.qcow2 and
openSUSE-Tumbleweed-JeOS.x86_64-kvm-and-xen.qcow2 as they cannot be
downloaded anymore.
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
The tests in pasta/tcp and pasta/udp for inbound transfers have the server
listening within the namespace explicitly bound to 127.0.0.1 or ::1. This
only works because of the behaviour of inbound splice connections, which
always appear with both source and destination addresses as loopback in
the namespace. That's not an inherent property of "spliced" connections
and arguably an undesirable one. Also update the test names to make it
clearer that these tests are expecting to exercise the "splice" path.
Interestingly this was already correct for the equivalent passt_in_ns/*,
although we also update the test names for clarity there.
Note that there are similar issues in some of the podman tests, addressed
in https://github.com/containers/podman/pull/24064
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
After running dhclient -6 we expect the DHCPv6 assigned address to be
immediately usable. That's true with the Fedora dhclient-script (and the
upstream ISC DHCP one), however it's not true with the Debian
dhclient-script. The Debian script can complete with the address still
in "tentative" state, and the address won't be usable until Duplicate
Address Detection (DAD) completes. That's arguably a bug in Debian (see
link below), but for the time being we need to work around it anyway.
We usually get away with this, because by the time we do anything where the
address matters, DAD has completed. However, it's not robust, so we should
explicitly wait for DAD to complete when we get a DHCPv6 address.
Link: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1085231
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Getting a SLAAC address takes a little while because the kernel must
complete Duplicate Address Detection (DAD) before marking the address as
ready. In several places we have an explicit 'sleep 2' to wait for that
to complete.
Fixed length delays are never a great idea, although this one is pretty
solid. Still, it would be better to explicitly wait for DAD to complete
in case of long delays (which might happen on slow emulated hosts, or with
heavy load), and to speed the tests up if DAD completes quicker.
Replace the fixed sleeps with a loop waiting for DAD to complete. We do
this by looping until all tentative addresses have disappeared.
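A minimal sketch of that loop (the polling interval is arbitrary):

  while [ -n "$(ip -6 addr show tentative)" ]; do sleep 0.1; done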
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Just like we do for PCAP, DEBUG and KERNEL. Otherwise, running tests
with TRACE=1 will not actually enable tracing output.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
...instead of echo: otherwise, bash won't handle escape sequences we
use to colour messages (and 'echo -e' is not specified by POSIX).
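For example, printf interprets the escape sequences portably (colour
code shown purely as an illustration):

  printf '\033[1;31m%s\033[0m\n' "failed"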
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
This is quite useful at least for myself as I'm usually running tests
using a guest kernel that's not the same as the one on the host.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Some distributions already have OpenSSH 9.8, which introduces split
sshd/sshd-session binaries, and there we need to copy the binary from
the host, which can be /usr/libexec/openssh/sshd-session (Fedora
Rawhide), /usr/lib/ssh/sshd-session (Arch Linux),
/usr/lib/openssh/sshd-session (Debian), and possibly other paths.
Add at least those three, and, if we don't find sshd-session, assume
we don't need it: it could very well be an older version of OpenSSH,
as reported by David for Fedora 40, or perhaps another daemon (would
Dropbear even work? I'm not sure).
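Something along these lines, with variable names purely illustrative:

  for __p in /usr/libexec/openssh/sshd-session \
             /usr/lib/ssh/sshd-session /usr/lib/openssh/sshd-session; do
          [ -x "${__p}" ] && PROGS="${PROGS} ${__p}" && break
  done
  # nothing found: assume this sshd doesn't need a separate sshd-session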
Reported-by: David Gibson <david@gibson.dropbear.id.au>
Fixes: d6817b3930 ("test/passt.mbuto: Install sshd-session OpenSSH's split process")
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Tested-by: David Gibson <david@gibson.dropbear.id.au>
Mostly packages we now need to run Podman-based tests.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Some architectures, including i686, actually have a recv() system
call, not just a recvfrom(), and we need to cover the recv() with
MSG_TRUNC into a NULL buffer for them as well.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
OpenSSH now ships a per-session binary, sshd-session, with sshd
acting as a mere listener. It's typically not found in $PATH, so specify
the whole path at which it's commonly installed in $PROGS.
Link: https://www.openssh.com/releasenotes.html#9.8p1
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
It's qemu-system-i386, but uname -m reports i686. I didn't test i486
and i586.
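An illustrative mapping, not necessarily the exact one used in the test
library:

  ARCH="$(uname -m)"
  case ${ARCH} in
  i?86)   ARCH=i386 ;;
  esac
  kvm="qemu-system-${ARCH}"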
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Because the host and guest share the same IP address with passt/pasta, it's
not possible for the guest to directly address the host. Therefore we
allow packets from the guest going to a special "NAT to host" address to be
redirected to the host, appearing there as though they have both source and
destination address of loopback.
Currently that special address is always the address of the default
gateway (or none). That can be a problem if we want that gateway to be
addressable by the guest. Therefore, allow the special "NAT to host"
address to be overridden on the command line with a new --map-host-loopback
option.
In order to exercise and test it, update the passt_in_ns and perf
tests to use this option and give different mapping addresses for the
two layers of the environment.
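Usage sketch, with an example address:

  passt --map-host-loopback 192.0.2.1 -f
  # from the guest, connections to 192.0.2.1 now reach services bound to
  # the host's loopback, and the gateway address stays usable as a gateway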
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
In the TCP throughput tests, we adjust the guest's MTU in order to test
various packet sizes. Some of those are below 1280 which causes IPv6 to
be deconfigured on the guest interface. When we increase it above 1280
again, IPv6 is re-enabled and we get an address in the right prefix with
NDP, but we don't get exactly the expected address back - that's only
communicated with --config-net or DHCPv6.
With changes to how we handle NAT this can cause some of the IPv6 tests to
fail, because they don't use the address that passt/pasta expects, and the
guest doesn't initiate any traffic which allows us to learn what the new
address is.
Work around this by re-invoking dhclient -6 between adjusting the MTU and
running IPv6 test cases.
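That is, something like this between the MTU change and the IPv6 runs
(interface name illustrative):

  ip link set dev "$IFNAME" mtu 9000   # back above 1280, IPv6 comes back
  dhclient -6 "$IFNAME"                # re-acquire the address passt expects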
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
When we switch to new layouts, we have a number of delays, added to
make the tests visually easier to follow, together with
blinking status bars. Shorten the delays and avoid blinking the
status bar if $FAST is set to 1 (no demo mode).
Shorten delays in busy loops to 10ms, instead of 100ms, and skip the
one-second fixed delay when we wait for the status of a command.
Cut the duration of throughput and latency tests to one second, down
from ten. Somewhat surprisingly, the results we get are rather
consistent, and not significantly different from what we'd get with
10 seconds.
This, together with Podman's commit 20f3e8909e3a ("test/system:
pasta_test_do add explicit port check"), cuts the time needed on my
setup for a full test run from approximately 37 minutes to...:
$ time ./run
[exited]
PASS: 165, FAIL: 0
Log at /home/sbrivio/passt/test/test_logs/test.log
real 15m34.253s
user 0m0.011s
sys 0m0.011s
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Tested-by: David Gibson <david@gibson.dropbear.id.au>
Particularly in shell it's sometimes natural to save the pid from a process
run and later kill it. If doing this with nstool exec, however, it will
kill nstool itself, not the program it is running, which isn't usually what
you want or expect.
Address this by having nstool propagate SIGTERM to its child process. It
may make sense to propagate some other signals, but some introduce extra
complications, so we'll worry about them when and if it seems useful.
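Typical shell usage this enables (socket path and command are made up,
and the exact nstool invocation syntax is only sketched here):

  ./nstool exec /tmp/ns.sock -- sleep 3600 &
  pid=$!
  # some time later:
  kill "$pid"    # SIGTERM now also reaches 'sleep' inside the namespace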
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
systemd-resolved has the rather strange behaviour of listening on the
non-standard loopback address 127.0.0.53. Various changes we've made in
passt mean that we now usually work fine on a host using systemd-resolved.
However our tests still fail in this case. We have a special case for when
the guest's resolv.conf needs to differ from the host's because the
resolver is on a host loopback address. However, we only consider the case
where the host resolver is on 127.0.0.1, not other loopback addresses.
Correct this with a different test condition.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Starting from iperf3 version 3.16, -P / --parallel spawns multiple
clients as separate threads, instead of multiple streams serviced by
the same thread.
So we can drop our lib/test machinery that spawns several iperf3
client and server processes, and finally simplify things quite a bit.
Adjust the number of threads and the UDP sending bandwidth to values
that more or less match the previous throughput results on my setup.
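With that, a single client invocation along these lines gives genuinely
parallel senders (values illustrative):

  iperf3 -c "$SERVER" -P 8 -t 10    # eight client threads with iperf3 >= 3.16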
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Tested-by: David Gibson <david@gibson.dropbear.id.au>
Differences in allocated Acpi-Parse entries are gone (at least) since
the 6.1 Linux kernel series. I should run this on a 6.10 kernel,
eventually, and adjust things further, as needed.
Userspace symbols are also fairly different now: show whatever is more
than 1 MiB at the moment.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Tested-by: David Gibson <david@gibson.dropbear.id.au>
This used to work on my setup as I kept reusing an old mbuto
(initramfs) image, but since commit 65923ba798 ("conf: Accept
duplicate and conflicting options, the last one wins"), --netns-only
is, as originally intended, a pasta-only option.
I had used --netns-only, here, to prevent passt from trying to detach
its own user namespace, which is not permitted as we're in a chroot,
see unshare(2). In turn, we need the chroot because passt can't pivot
root directly into its own empty filesystem using an initramfs.
Use switch_root into the tmpfs mountpoint instead of chroot, so that
we can still detach user namespaces.
Note that in the mbuto images, we can't switch to nobody as we have
no password entries at all, so we need to detach a further user
namespace before starting passt, to trick passt into running as UID
0.
Given the new sequence, it's now more convenient to directly switch
to a detached network namespace as well, which means we need to move
the initialisation of the dummy network from the init script into the
test script.
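Roughly, the new sequence looks like this; paths and script names are
made up, this is not the actual init script:

  # in the initramfs init: move to a tmpfs root, so that detaching user
  # namespaces is permitted later
  mount -t tmpfs tmpfs /newroot
  exec switch_root /newroot /init2

  # then, in /init2: detached user and network namespace, UID 0 inside
  unshare -rUn ./passt -f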
Reported-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Tested-by: David Gibson <david@gibson.dropbear.id.au>
When we retrieve or copy host addresses we can include deprecated
addresses, which is not what we want. Adjust our logic to exclude them.
Similarly our tests can retrieve deprecated addresses, so exclude them
there too.
I hit this in practice because my router sometimes temporarily advertises
an fd00:: prefix before the real delegated IPv6 prefix. The deprecated
address can hang around for some time messing up my tests.
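For the tests, the jq filter gains a condition along these lines,
assuming iproute2 reports a 'deprecated' boolean in its JSON output:

  ip -j -6 addr show dev "$IFNAME" | \
    jq -r '[.[].addr_info[] |
            select(.scope == "global" and .deprecated != true)][0].local'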
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
During some debugging recently, I wanted to extract a file from a test
guest and found it was tricky, since the ssh-over-vsock setup we had didn't
allow sftp/scp. We can fix this by adding a line to the guest-side sshd
config from mbuto. While we're there, correct an inaccurate comment.
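For instance, a line like this (shown as an example, not necessarily the
exact one added) enables sftp without shipping a separate sftp-server
binary:

  echo 'Subsystem sftp internal-sftp' >> "$SSHD_CONFIG"   # path illustrative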
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
test/pasta_options/log_to_file checks that pasta truncates its log file
when started. It does that by starting pasta with a log file once, then
starting it again and checking that after the second round, the log file
has only one line: the startup banner from the second invocation.
However, this test will break if the second invocation logs any additional
messages at startup. This can easily happen on a host with multiple
network interfaces due to the "Multiple default route" informational
messages added in 639fdf06e ("netlink: Fix selection of template
interface"). I believe it could also happen on a host without IPv6
connectivity due to the "Couldn't pick external interface" messages, though
I haven't confirmed this.
Make the log file test more robust, by not testing for a single line, but
instead explicitly testing for the PID of the second pasta invocation in
the banner line.
Link: https://bugs.passt.top/show_bug.cgi?id=88
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
test/pasta_options/log_to_file contains a couple of rudimentary tests
where we start pasta with an interactive shell, then immediately exit it.
We can achieve the same thing by using /bin/true as the command to pasta.
This also means that waiting for pasta to start, waiting for the executed
command to complete and for pasta to clean up are all handled by simply
waiting for pasta to complete in the foreground, so there's no need for an
additional sleep.
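That is, roughly (log file name illustrative):

  ./pasta -l "$LOG_FILE" /bin/true   # returns once pasta has exited and cleaned up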
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Paul Holzinger pointed out that when we invoke the podman tests inside the
passt testsuite, the way we point podman at the newly built pasta binary
is kind of indirect. It's therefore prudent to check that podman is
actually using the binary we expect it to - in particular that it is using
the binary built in this tree, not some system installed pasta binary.
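One possible check, assuming a podman recent enough to report the pasta
helper in 'podman info' (the field layout here is an assumption):

  podman info --format json | jq -r '.host.pasta.executable'
  # expected to print the pasta binary built in this tree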
Suggested-by: Paul Holzinger <pholzing@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
The pasta_podman/bats test script looks for 'catatonit' amongst other tools
to be available on the host. However, while the podman tests do require
catatonit, it doesn't necessarily need to be in the regular path. For
example Fedora and RHEL place catatonit in /usr/libexec and podman finds it
there fine.
Therefore, remove it as an htools dependency.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
The pasta_podman/bats test script downloads and builds podman, then runs its
pasta specific tests. Downloading from within a test case has some
drawbacks:
* It can be very tedious if you have poor connectivity to the server
* It makes a test that's ostensibly for pasta itself dependent on the
state of the github server
* It precludes running the tests in an isolated network environment
The same concerns largely apply to building podman too, because it's pretty
common for Go builds to download dependencies themselves. Therefore move
the download and build of podman from the test itself, to the Makefile
where we prepare other test assets.
To avoid cryptic failures if something went wrong with the build, make
running the test dependent on having the built podman binary.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
We download and use mbuto to build trivial boot images for our VM tests.
However, if mbuto is already cloned, we won't update it to the current
version. Add some make logic to ensure that we do this.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
f0ccca74 ("test: make passt.mbuto script more robust") is supposed to make
mbuto more robust by standardizing on always putting things in /usr/sbin
with /sbin a symlink to it. This matters because different distros have
different conventions about how the two are used.
However, the logic there requires that /usr/sbin at least exists to start
with. This isn't always the case with Fedora derived mbuto images.
Ironically, the DIRS variable ensures that /sbin exists (although we
then remove it), but doesn't require /usr/sbin to exist. Fix that up so that
the new logic will work with Fedora.
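In other words, something like this in the mbuto profile (the variable
name is illustrative):

  mkdir -p "${wd}/usr/sbin"     # make sure it exists, even on Fedora images
  rm -rf "${wd}/sbin"
  ln -s usr/sbin "${wd}/sbin"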
Fixes: f0ccca741f ("test: make passt.mbuto script more robust")
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Creation of a symbolic link from /sbin to /usr/sbin fails if /sbin
exists and is non-empty. This is the case on Ubuntu-23.04.
We fix this by removing /sbin before creating the link.
Signed-off-by: Jon Maloy <jmaloy@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
If we run passt nested (a guest connected via passt to a guest
connected via passt to the host), the first guest (L1) typically has
two IPv6 addresses on the same interface: one formed from the prefix
assigned via SLAAC, and another one assigned via DHCPv6 (to match the
address on the host).
When we select addresses for comparison, in this case, we have
multiple global unicast addresses -- again, on the same interface.
Selecting the first reported one on both host and guest is not
entirely correct (in theory, the order might differ), but works
reasonably well.
Use the trick from 5beef08597 ("test: Only select a single
interface or gateway in tests") to ask jq(1) for the first address
returned by the query.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
When using the old-style "pane" methods of executing commands during the
tests, we need to scan the shell output for prompts in order to tell when
commands have finished. This is inherently unreliable because commands
could output things that look like prompts, and prompts might not look like
we expect them to. The only way to really fix this is to use a better way
of dispatching commands, like the newer "context" system.
However, it's awkward to convert everything to "context" right at the
moment, so some tests still rely on prompt matching, which works most of the time.
It is, however, particularly sensitive to fancy coloured prompts using
escape sequences. Currently we try to handle this by stripping actual
ESC characters with tr, then looking for some common variants.
We can do a bit better: instead strip all escape sequences using sed before
looking for our prompt. Or, at least, any one using [a-zA-Z] as the
terminating character. Strictly speaking ANSI escapes can be terminated by
any character in 0x40..0x7e, which isn't easily expressed in a regexp.
This should capture all common ones, though.
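One possible form of that sed expression, embedding a literal ESC so it
doesn't depend on GNU extensions:

  sed "s/$(printf '\033')[^a-zA-Z]*[a-zA-Z]//g"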
With this transformation we can simplify the list of patterns we then look
for as a prompt, removing some redundant variants.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
In test/prepare-distro-img.sh we use guestfish to tweak our distro guest
images to be suitable. Part of this is using a 'copy-in' directive to copy
in the source files for passt itself. Currently we copy in all the files
with a single 'copy-in', since it allows listing multiple files. However
it turns out that the number of arguments it can accept is fairly limited
and our current list of files is already very close to that limit.
Instead, expand our list of files to one copy-in per file, avoiding that
limitation. This isn't much slower, because all the commands still run in
a single invocation of guestfish itself.
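Sketch of the idea, with illustrative paths; all the commands still go
to a single guestfish process:

  for f in $SOURCE_FILES; do
          echo "copy-in $f /root/passt/"
  done | guestfish --rw -a "$IMG" -i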
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
valgrind complains if we pass a NULL buffer to recv(), even if we use
MSG_TRUNC, in which case it's actually safe. For a long time we've had
a valgrind suppression for this. It singles out the recv() in
tcp_sock_consume(), the only place we use MSG_TRUNC.
However, tcp_sock_consume() only has a single caller, which makes it a
prime candidate for inlining. If inlined, it won't appear on the stack and
valgrind won't match the correct suppression.
It appears that certain compiler versions (for example gcc-13.2.1 in
Fedora 39) will inline this function even with the -O0 we use for valgrind
builds. This breaks the suppression leading to a spurious failure in the
tests.
There's not really any way to adjust the suppression itself without making
it overly broad (we don't want to match other recv() calls). So, as a hack
explicitly prevent inlining of this function when we're making a valgrind
build. To accomplish this add an explicit -DVALGRIND when making such a
build.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Since commit 2926970523 ("test/perf: Small MTUs for spliced TCP
aren't interesting") drops all columns for TCP test MTUs except one
in the throughput test for pasta's local flows, the first column we
need to highlight in that table is now the second one.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
For the TCP throughput tests, we use iperf3's -O "omit" option which
ignores results for the given time at the beginning of the test. Currently
we calculate this as 1/6th of the test measurement time. The purpose of
-O, however, is to skip over the TCP slow start period, which in no way
depends on the overall length of the test.
The slow start time is roughly speaking
log_2 ( max_window_size / MSS ) * round_trip_time
These factors all vary between tests and machines we're running on, but we
can estimate some reasonable bounds for them:
* The maximum window size is bounded by the buffer sizes at each end,
which shouldn't exceed 16MiB
* The MSS varies with the MTU we use, but the smallest we use in tests is
~256 bytes
* Round trip time will vary with the system, but with these essentially
local transfers it will typically be well under 1ms (on my laptop it is
closer to 0.03ms)
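Putting those bounds together:

  log2(16 MiB / 256 B) * 1 ms = log2(65536) * 1 ms = 16 * 1 ms = 16 ms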
That gives a worst case slow start time of about 16ms. Setting an omit
time of 0.1s uniformly is therefore more than enough, and substantially
smaller than what we calculate now for the default case (10s / 6 ~= 1.7s).
This reduces total time for the standard benchmark run by around 30s.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
We always set --pacing-timer when invoking iperf3. However, the iperf3
man page implies this is only relevant for the -b option. We only use the
-b option for the UDP tests, not TCP, so remove --pacing-timer from the TCP
cases.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
The TCP packet size used on the passt L2 link (qemu socket) makes a huge
difference to passt/pasta throughput; many of passt's overheads (chiefly
syscalls) are per-packet.
That packet size is largely determined by the MTU on the L2 link, so we
benchmark for a number of different MTUs. That works well for the guest to
host transfers. For the host to guest transfers, we purport to test for
different MTUs, but we're not actually adjusting anything interesting.
The host to guest transfers adjust the MTU on the "host's" (actually ns)
loopback interface. However, that only affects the packet size for the
socket going to passt, not the packet size for the L2 link that passt
manages - passt can and will repack the stream into packets of its own
size. Since the depacketization on that socket is handled by the kernel it
doesn't have a lot of bearing on passt's performance.
We can't fix this by changing the L2 link MTU from the guest side (as we do
for guest to host), because that would only change the guest's view of the
MTU, passt would still think it has the large MTU. We could test this by
using the --mtu option to passt, but that would require restarting passt
for each run, which is awkward in the current setup. So, for now, drop all
the "small MTU" tests for host to guest.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Packet size can make a big difference to UDP throughput, so it makes sense
to measure it for a variety of different sizes. Currently we do this by
adjusting the MTU on the relevant interface before running iperf3.
However, the UDP packet size has no inherent connection to the MTU - it's
controlled by the sender, and the MTU just affects whether the packet will
make it through or be fragmented. The only reason adjusting the MTU works
is because iperf3 bases its default packet size on the (path) MTU.
We can test this more simply by using the -l option to the iperf3 client
to directly control the packet size, instead of adjusting the MTU.
As well as simplifying this lets us test different packet sizes for host to
ns traffic. We couldn't do that previously because we don't have
permission to change the MTU on the host.
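For example (bandwidth and payload size illustrative):

  iperf3 -c "$SERVER" -u -b 10G -l 1472   # datagram size set by -l, not by the MTU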
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Currently we make TCP throughput measurements for spliced connections with
a number of different MTU values. However, the results from this aren't
really interesting.
Unlike with tap connections, spliced connections only involve the loopback
interface on host and container, not a "real" external interface. lo
typically has an MTU of 65535 and there is very little reason to ever
change that. So, the measurements for smaller MTUs are rarely going to be
relevant.
In addition, the fact that we can offload all the {de,}packetization to the
kernel with splice(2) means that the throughput difference between these
MTUs isn't very great anyway.
Remove the short MTUs and only show spliced throughput for the normal
65535 byte loopback MTU. This reduces runtime of the performance tests on
my laptop by about 1 minute (out of ~24 minutes).
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Currently we start both the iperf3 server(s) and client(s) afresh each time
we want to make a bandwidth measurement. That's not really necessary as
usually a whole batch of bandwidth measurements can use the same server.
Split up the iperf3 directive into 3 directives: iperf3s to start the
server, iperf3 to make a measurement and iperf3k to kill the server, so
that we can start the server less often. This - and more importantly, the
reduced number of waits for the server to be ready - reduces runtime of the
performance tests on my laptop by about 4 minutes (out of ~28 minutes).
For now we still restart the server between IPv4 and IPv6 tests. That's
because in some cases the latency measurements we make in between use the
same ports.
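The directive usage then looks roughly like this; the argument layout
is only sketched, not the exact syntax:

  iperf3s ns 10002                 # start the server once
  iperf3 BW guest ns 10002 10      # one or more measurements reuse it
  iperf3k ns                       # kill it once the batch is done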
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
iperf3 generates statistics about its run on both the client and server
sides. They don't have exactly the same information, but both have the
pieces we need (AFAICT the server communicates some information to the
client over the control socket, so the most important information is in the
client side output, even if measured by the server).
Currently we use the server side information for our measurements. Using
the client side information has several advantages though:
* We can directly wait for the client to complete and we know we'll have
the output we want. We don't need to sleep to give the server time to
write out the results.
* That in turn means we can wrap up as soon as the client is done, we
don't need to wait overlong to make sure everything is finished.
* The slightly different organisation of the data in the client output
means that we always want the same json value, rather than requiring
slightly different ones for UDP and TCP.
The fact that we avoid some extra delays speeds up the overall run of the
perf tests by around 7 minutes (out of around 35 minutes) on my laptop.
The fact that we no longer unconditionally kill client and server after
a certain time means that the client could run indefinitely if the server
doesn't respond. We mitigate that by setting a 1s connect timeout on the
client. This isn't foolproof - if we get an initial response, but then
lose connectivity this could still run indefinitely, however it does cover
by far the most likely failure cases. --snd-timeout would provide more
robustness, but I've hit odd failures when trying to use it.
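Sketch of the client-side invocation and extraction; the jq path is an
assumption about the client's JSON layout:

  iperf3 -J -c "$SERVER" -t "$TIME" --connect-timeout 1000 > client.json
  jq -r '.end.sum_received.bits_per_second' client.json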
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Some older revisions used separate iperf3c and iperf3s test directives to
invoke the iperf3 client and server. Those were combined into a single
iperf3 directive some time ago, but a couple of places still have the old
syntax.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>