In pasta mode, when we get data from sockets and write it as single
frames to the tap device, we batch receive operations considerably,
and then (conceptually) split the data into many smaller writes.
It looked like an obvious choice, but performance is actually better
if we receive data in many small, frame-sized recvmsg()/recvmmsg()
calls.
The syscall overhead of the previous behaviour, as observed with perf,
comes predominantly from the write operations anyway; receiving data
in shorter chunks probably improves cache locality by a considerable
amount.
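As a sketch of the idea, not of the actual code, a frame-sized receive
path could look like this (FRAME_SIZE and N_FRAMES are made-up names):

#define _GNU_SOURCE
#include <sys/socket.h>
#include <sys/uio.h>

#define FRAME_SIZE 1500         /* illustrative values, not the real ones */
#define N_FRAMES   32

static char bufs[N_FRAMES][FRAME_SIZE];

/* One frame per message: each received message maps directly to a
 * single write to the tap device, no splitting needed afterwards.
 */
static int recv_frames(int s)
{
        struct mmsghdr mmh[N_FRAMES];
        struct iovec iov[N_FRAMES];
        int i;

        for (i = 0; i < N_FRAMES; i++) {
                iov[i] = (struct iovec){ bufs[i], FRAME_SIZE };
                mmh[i].msg_hdr = (struct msghdr){ 0 };
                mmh[i].msg_hdr.msg_iov = &iov[i];
                mmh[i].msg_hdr.msg_iovlen = 1;
        }

        return recvmmsg(s, mmh, N_FRAMES, 0, NULL);
}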
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
To reach a security level (at least) conceptually equivalent to the
one implemented by --enable-sandbox in slirp4netns, we need to create a
new mount namespace and pivot_root() into a new (empty) mountpoint, so
that passt and pasta can't access any filesystem resource after
initialisation.
While at it, also detach IPC, PID (only for passt, to prevent
vulnerabilities based on the knowledge of a target PID), and UTS
namespaces.
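The overall shape of this step might look like the sketch below; the
empty mountpoint path /tmp/empty is a placeholder, and details such as
error reporting differ from the actual implementation:

#define _GNU_SOURCE
#include <sched.h>
#include <sys/mount.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Detach filesystem, IPC and UTS views after initialisation (the PID
 * namespace is unshared separately, before forking to background).
 */
static int sandbox_fs(void)
{
        if (unshare(CLONE_NEWNS | CLONE_NEWIPC | CLONE_NEWUTS))
                return -1;

        /* Make mounts private so the pivot doesn't propagate back */
        if (mount(NULL, "/", NULL, MS_REC | MS_PRIVATE, NULL))
                return -1;

        /* Placeholder path: mount a tmpfs to get an empty mountpoint */
        if (mount("", "/tmp/empty", "tmpfs",
                  MS_NODEV | MS_NOEXEC | MS_NOSUID, NULL))
                return -1;

        if (chdir("/tmp/empty") || syscall(SYS_pivot_root, ".", "."))
                return -1;

        /* Drop the old root stacked underneath: nothing else is left */
        return umount2(".", MNT_DETACH);
}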
With this approach, if we applied the seccomp filters right after the
configuration step, the number of allowed syscalls would grow further.
To prevent this, defer the application of seccomp policies until after
the initialisation phase, right before the main loop: that's where we
expect bad things to potentially happen. This way, we get back to 22
allowed syscalls for passt and 34 for pasta, on x86_64.
While at it, move #syscalls notes to specific code paths wherever it
conceptually makes sense.
We have to open all the file handles we'll ever need before
sandboxing:
- the packet capture file can only be opened once, drop instance
numbers from the default path and use the (pre-sandbox) PID instead
- /proc/net/tcp{,6} and /proc/net/udp{,6}, for automatic detection
of bound ports in pasta mode, are now opened only once, before
sandboxing, and their handles are stored in the execution context
- the UNIX domain socket for passt is also bound only once, before
sandboxing: to reject clients after the first one, instead of
closing the listening socket, keep it open, and accept and
immediately discard new connections if we already have a valid one,
as sketched below
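A sketch of that accept-and-discard path, with illustrative names:

#include <unistd.h>
#include <sys/socket.h>

/* fd_listen stays open for the whole lifetime; *fd_client is -1 until
 * a valid client attaches.
 */
static void sock_accept(int fd_listen, int *fd_client)
{
        int fd = accept(fd_listen, NULL, NULL);

        if (fd < 0)
                return;

        if (*fd_client >= 0) {  /* Already have a client: turn this one away */
                close(fd);
                return;
        }

        *fd_client = fd;
}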
Clarify the (unchanged) behaviour for --netns-only in the man page.
To actually make passt and pasta processes run in a separate PID
namespace, we need to unshare(CLONE_NEWPID) before forking to
background (if configured to do so). Introduce a small daemon()
implementation, __daemon(), that additionally saves the PID file
before forking. While running in foreground, the process itself can't
move to a new PID namespace (a process can't change the notion of its
own PID): mention that in the man page.
For some reason, fork() in a detached PID namespace causes SIGTERM
and SIGQUIT to be ignored, even if the handler is still reported as
SIG_DFL: add a signal handler that just exits.
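A rough sketch of such a __daemon(), simplified and with error handling
trimmed: the parent records the child's PID, the one visible from the
init namespace, before detaching.

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* In a detached PID namespace SIGTERM and SIGQUIT appear to be
 * ignored despite SIG_DFL being reported: just exit from a handler.
 */
static void exit_handler(int signal)
{
        (void)signal;
        exit(EXIT_SUCCESS);
}

static int __daemon(int pidfile_fd, int devnull_fd)
{
        pid_t pid = fork();

        if (pid == -1)
                return -1;

        if (pid) {              /* Parent: record child PID, then leave */
                if (pidfile_fd >= 0) {
                        dprintf(pidfile_fd, "%i\n", (int)pid);
                        close(pidfile_fd);
                }
                exit(EXIT_SUCCESS);
        }

        signal(SIGTERM, exit_handler);
        signal(SIGQUIT, exit_handler);

        if (setsid() < 0 ||
            dup2(devnull_fd, STDIN_FILENO)  < 0 ||
            dup2(devnull_fd, STDOUT_FILENO) < 0 ||
            dup2(devnull_fd, STDERR_FILENO) < 0 || close(devnull_fd))
                exit(EXIT_FAILURE);

        return 0;
}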
We can now drop most of the pasta_child_handler() implementation,
which took care of terminating all processes running in the same
namespace if pasta started a shell: the shell itself is now the
init process in that namespace, and all children will terminate
once the init process exits.
Issuing 'echo $$' in a detached PID namespace won't return the PID
as seen from the init namespace: adapt the demo and test setup
scripts to reflect that.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
clang-tidy from LLVM 13.0.1 reports some new warnings from these
checkers:
- altera-unroll-loops, altera-id-dependent-backward-branch: ignore
for the time being, add a TODO item
- bugprone-easily-swappable-parameters: ignore, nothing to do about
those
- readability-function-cognitive-complexity: ignore for the time
being, add a TODO item
- altera-struct-pack-align: ignore, alignment is forced in protocol
headers
- concurrency-mt-unsafe: ignore for the time being, add a TODO
item
Fix the bugprone-implicit-widening-of-multiplication-result warnings,
though: that's doable, and they seem to make sense.
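For reference, the pattern behind those warnings, with made-up names:

#include <stddef.h>

size_t frame_offset(unsigned int i, unsigned int frame_size)
{
        size_t bad  = i * frame_size;          /* 32-bit multiply, widened too late */
        size_t good = (size_t)i * frame_size;  /* widen an operand first */

        (void)bad;
        return good;
}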
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
The existing behaviour is not really practical: an automated agent in
charge of starting both qemu and passt would need to fork itself to
start passt, because passt won't fork to background until qemu
connects, and the agent needs to unblock to start qemu.
Instead of waiting for a connection to daemonise, do it right away as
soon as a socket is available: that can be considered an initialised
state already.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Depending on the C library, but not necessarily in all the
functions we use, statx() might be used instead of stat(),
getdents() instead of getdents64(), readlinkat() instead of
readlink(), openat() instead of open().
On aarch64, it's clone() and not fork(), and dup3() instead of
dup2() -- just allow the existing alternative instead of dealing
with per-arch selections.
Since glibc commit 9a7565403758 ("posix: Consolidate fork
implementation"), we need to allow set_robust_list() for
fork()/clone(), even in a single-threaded context.
On some architectures, epoll_pwait() is provided instead of
epoll_wait(), but never both. Same with newfstat() and
fstat(), sigreturn() and rt_sigreturn(), getdents64() and
getdents(), readlink() and readlinkat(), unlink() and
unlinkat(), whereas pipe() might not be available, but
pipe2() always is, exclusively or not.
Seen on Fedora 34: newfstatat() is used on top of fstat().
syslog() is an actual system call on some glibc/arch combinations,
instead of a connect()/send() implementation.
On ppc64 and ppc64le, _llseek(), recv(), send() and getuid()
are used. For ppc64 only: ugetrlimit() for the getrlimit()
implementation, plus sigreturn() and fcntl64().
On s390x, additionally, we need to allow socketcall() (on top
of socket()), and sigreturn() also for passt (not just for
pasta).
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Bitmap manipulating functions would otherwise refer to inconsistent
sets of bits on big-endian architectures. While at it, fix up a
couple of casts.
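For reference, bitmap accessors working at byte granularity, like the
generic sketch below, don't depend on host byte order; this shows the
general idea rather than the exact change made here:

#include <stdint.h>

/* A bit index maps to the same byte and bit on both little- and
 * big-endian hosts.
 */
static void bitmap_set(uint8_t *map, unsigned int bit)
{
        map[bit / 8] |= (uint8_t)(1 << (bit % 8));
}

static int bitmap_isset(const uint8_t *map, unsigned int bit)
{
        return !!(map[bit / 8] & (1 << (bit % 8)));
}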
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
tcpi_bytes_acked and tcpi_min_rtt are only available on recent
kernel versions: provide fall-back paths (incurring some degree of
performance penalty).
Support for getrandom() was introduced in Linux 3.17 and glibc 2.25:
provide an alternate mechanism for that as well, reading from
/dev/random.
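A sketch of how the fall-back could look, assuming a hypothetical
HAS_GETRANDOM define set at build time when getrandom() is available:

#include <fcntl.h>
#include <unistd.h>
#ifdef HAS_GETRANDOM
#include <sys/random.h>
#endif

static ssize_t random_bytes(void *buf, size_t len)
{
#ifdef HAS_GETRANDOM
        return getrandom(buf, len, GRND_RANDOM);
#else
        int fd = open("/dev/random", O_RDONLY);  /* alternate source */
        ssize_t n;

        if (fd < 0)
                return -1;

        n = read(fd, buf, len);
        close(fd);

        return n;
#endif
}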
Also check if NETLINK_GET_STRICT_CHK is defined before using it:
it's not strictly needed, we'll filter out irrelevant results from
netlink anyway.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
This is the only remaining Linux-specific include -- drop it to avoid
clang-tidy warnings and to make code more portable.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
This was actually fine "on the wire", but it's inconsistent with the
way we hash other addresses/protocols, and it also ends up with the
wrong endianness in captures whenever we replace the address with our
default gateway.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
...mostly false positives, but a number of very relevant ones too,
in tcp_get_sndbuf(), tcp_conn_from_tap(), and siphash PREAMBLE().
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Unions and structs, you all have names now.
Take the chance to enable bugprone-reserved-identifier,
cert-dcl37-c, and cert-dcl51-cpp checkers in clang-tidy.
Provide an ffsl() weak declaration using the gcc built-in.
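That weak definition boils down to something like:

int __attribute__((weak)) ffsl(long int i)
{
        return __builtin_ffsl(i);       /* C library version wins if present */
}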
Start reordering includes, but that's not enough for the
llvm-include-order checker yet.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Detect a missing tcpi_snd_wnd in struct tcp_info at build time,
otherwise the build fails with a pre-5.3 linux/tcp.h header.
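One way to do that is a small compile probe driven from the Makefile,
along these lines (the HAS_SND_WND name is just illustrative, not
necessarily what the build actually defines):

#include <linux/tcp.h>

/* Compiles only against linux/tcp.h from 5.3 onwards: the Makefile
 * can define something like HAS_SND_WND based on the outcome.
 */
int main(void)
{
        struct tcp_info i = { 0 };

        return (int)i.tcpi_snd_wnd;
}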
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
This shouldn't happen with any sane configuration, but I just ran
into an example of it: the default IPv6 gateway on the host is
configured with a global unicast address, we use that as the source
for RA and DHCPv6 replies, and the guest ignores them. The same
happens later on if we talk TCP or UDP: the guest has no idea where
that address comes from.
Use our link-local address instead whenever the gateway address is
global.
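The selection boils down to a check of this kind (function and field
names are illustrative):

#include <netinet/in.h>

static const struct in6_addr *src_for_guest(const struct in6_addr *gw6,
                                            const struct in6_addr *addr_ll)
{
        if (IN6_IS_ADDR_LINKLOCAL(gw6))
                return gw6;

        return addr_ll;         /* gateway is global: use our link-local */
}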
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
A mix of unchecked return values, a missing permission mask for
open(2) with O_CREAT, and some false positives from
-Wstringop-overflow and -Wmaybe-uninitialized.
Reported-by: Martin Hauke <mardnh@gmx.de>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
For some reason, on 4.19, splice() doesn't honour SOCK_NONBLOCK from
accept4() while reading from a TCP socket. Pass SPLICE_F_NONBLOCK
explicitly in all cases.
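As an illustrative wrapper, not the actual call sites:

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

/* Don't rely on O_NONBLOCK/SOCK_NONBLOCK on the descriptors: pass
 * SPLICE_F_NONBLOCK to splice() itself.
 */
static ssize_t splice_nonblock(int from, int to, size_t len)
{
        return splice(from, NULL, to, NULL, len,
                      SPLICE_F_MOVE | SPLICE_F_NONBLOCK);
}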
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
If the window isn't updated on !c->tcp.kernel_snd_wnd, we still
have to send ACKs if the ACK sequence was updated, or if an error
occurred while querying TCP_INFO.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
...and while at it, reverse the operands in the window equality
comparison to detect the need for fast re-transmit: it's easier
to read this way.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
I thought I'd get away with it, but no, after some clean-ups, I
finally got a socket with number 0. Fix up all the convenient,
yet botched assumptions.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Caching iov_len for messages from sockets doesn't actually decrease
overhead by the tiniest bit, and it added a lot of complexity: drop
it.
Also drop sendmmsg(): we don't need to send multiple messages with
TCP; as long as we make sure no message with a length descriptor is
sent partially, qemu is fine with it.
Just like it's done for segments without data (flags), also defer
the sendmsg() for sending data segments, to improve batching.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Otherwise, tcp4_l2_flags_buf_t is not consistent with tcp4_l2_buf_t and
header fields get all mixed up in tcp_l2_buf_fill_headers().
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
The list of allowed syscalls comes from comments in the form:
#syscalls <list>
for syscalls needed both in passt and pasta mode, and:
#syscalls:pasta <list>
#syscalls:passt <list>
for syscalls specifically needed in pasta or passt mode only.
seccomp.sh builds a list of BPF statements from those comments,
prefixed by a binary search tree to keep lookup fast.
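For instance, an annotation next to the code needing the syscalls
could look like this made-up example (not taken from the actual
sources):

/* #syscalls socket bind listen */
/* #syscalls:pasta setns */
static int sock_setup(void)
{
        /* ...code actually calling socket(), bind(), listen(), and,
         * in pasta mode only, setns()...
         */
        return 0;
}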
While at it, clean up the Makefile a bit using wildcards.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Move netlink routines to their own file, and use netlink to configure
or fetch all the information we need, except for the TUNSETIFF ioctl.
Move pasta-specific functions to their own file as well, add
parameters and calls to configure the tap interface in the namespace.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Based on a patch from Giuseppe Scrivano, this adds the ability to:
- specify paths and names of target namespaces to join, instead of
a PID, also for user namespaces, with --userns
- request to join or create a network namespace only, without
entering or creating a user namespace, with --netns-only
- specify the base directory for netns mountpoints, with --nsrun-dir
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
[sbrivio: reworked logic to actually join the given namespaces when
they're not created, implemented --netns-only and --nsrun-dir,
updated pasta demo script and man page]
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
That's the first thing we have to do before sending SYN, ACK:
otherwise, if tcp_send_to_tap() fails, we'll get a lot of useless
events.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Now that we have a proper function checking when and how to send
ACKs and window updates, we don't need to duplicate this logic in
tcp_data_from_tap().
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
...using pre-cooked buffers, just like we do with other segments.
While at it, remove some code duplication by having separate
functions for updating ACK sequence and window, and for filling in
buffer headers.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Now that we fixed the issue with small receiving buffers, we can
safely increase this back and get slightly lower syscall overhead.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
If net.core.rmem_max and net.core.wmem_max sysctls have low values,
we can get bigger buffers by not trying to set them high -- the
kernel would lock their values to what we get.
Try, instead, to get bigger buffers by queueing as much as possible,
and if the maximum values in tcp_wmem and tcp_rmem are bigger than
this, that will work.
While at it, drop the QUICKACK option for non-spliced sockets: I set
that earlier by mistake.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
If the connection is local or the RTT was comparable to the time it
takes to queue a batch of messages, we can safely use a large MSS
regardless of the sending buffer, but otherwise not.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
If we start with a very small sending buffer, we can make the kernel
expand it by causing the congestion window to get bigger, but this
won't reliably happen if we use just half of it (the other half is
accounted as overhead).
Scale the buffer usage depending on its size instead: we might
eventually get some retransmissions, because we can't queue in-window
all the messages the sender sends us, but that's better than keeping
that small buffer forever.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>