If the net.core.rmem_max and net.core.wmem_max sysctls have low values,
we can get bigger buffers by not trying to set buffer sizes explicitly:
the kernel would clamp them to those limits and lock them there.
Try, instead, to get bigger buffers by queueing as much as possible:
if the maximum values in tcp_wmem and tcp_rmem are bigger than that,
this will work.
While at it, drop the TCP_QUICKACK option for non-spliced sockets: I
set it earlier by mistake.
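For reference, a minimal sketch (not passt's code) of the behaviour
this works around: once the buffer size is set explicitly, the kernel
clamps it to the sysctl limit and stops autotuning it:

	/* Illustrative only: an explicit SO_SNDBUF request is clamped to
	 * net.core.wmem_max and locked there, which is why it's better
	 * not to set it when the sysctl limit is low.
	 */
	#include <stdio.h>
	#include <sys/socket.h>
	#include <unistd.h>

	int main(void)
	{
		int s = socket(AF_INET, SOCK_STREAM, 0);
		int req = 16 * 1024 * 1024;	/* ask for 16 MiB */
		int got;
		socklen_t len = sizeof(got);

		if (s < 0)
			return 1;

		/* After this, the kernel won't grow the buffer on its own
		 * anymore: the effective size is capped by wmem_max.
		 */
		setsockopt(s, SOL_SOCKET, SO_SNDBUF, &req, sizeof(req));
		getsockopt(s, SOL_SOCKET, SO_SNDBUF, &got, &len);

		printf("requested %i, effective %i\n", req, got);

		close(s);
		return 0;
	}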
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
If the connection is local, or the RTT was comparable to the time it
takes to queue a batch of messages, we can safely use a large MSS
regardless of the sending buffer size -- but otherwise not.
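A rough sketch of the kind of check this implies, using the RTT
reported by TCP_INFO; the threshold and function name are made up for
illustration and don't reflect the actual implementation:

	#include <netinet/in.h>
	#include <netinet/tcp.h>
	#include <stdbool.h>
	#include <sys/socket.h>

	#define BATCH_QUEUE_TIME_US	10000	/* assumed queueing time */

	/* Return true if a large MSS looks safe for this socket */
	bool large_mss_ok(int s)
	{
		struct tcp_info ti;
		socklen_t len = sizeof(ti);

		if (getsockopt(s, IPPROTO_TCP, TCP_INFO, &ti, &len))
			return false;

		/* tcpi_rtt is in microseconds: local connections, or
		 * connections with an RTT comparable to the queueing
		 * time, can use a large MSS regardless of the sending
		 * buffer.
		 */
		return ti.tcpi_rtt <= BATCH_QUEUE_TIME_US;
	}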
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
If we start with a very small sending buffer, we can make the kernel
expand it by growing the congestion window, but that won't reliably
happen if we use just half of the buffer (the other half is accounted
as overhead).
Scale usage depending on the buffer size instead: we might eventually
get some retransmissions, because we can't queue all the messages the
sender sends us in-window, but that's better than keeping that small
buffer forever.
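Roughly, the idea is something like this (threshold and names invented
for the example, not the actual code):

	/* Usable share of the sending buffer: with a tiny buffer, use
	 * all of it so the congestion window can still grow; once it's
	 * large enough, keep half as headroom for overhead.
	 */
	#define SNDBUF_SMALL	(128 * 1024)	/* assumed threshold */

	int usable_sndbuf(int sndbuf)
	{
		if (sndbuf < SNDBUF_SMALL)
			return sndbuf;	/* risk retransmissions, but grow */

		return sndbuf / 2;	/* big enough: leave headroom */
	}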
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
...and from the sending socket only if the MTU is not configured.
Otherwise, a connection to a host from a local guest, with a
non-loopback destination address, will get its MSS from the MTU of the
outbound interface with that address, which is unnecessary as we know
the guest can send us larger segments.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Detecting bound ports at start-up time isn't terribly useful: do this
periodically instead, if configured.
This is only implemented for TCP at the moment; UDP is somewhat more
complicated, so leave a TODO there.
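A rough sketch of one way to do such a periodic scan for TCP, by
parsing /proc/net/tcp for sockets in listening state; names and
details are illustrative, not the actual implementation:

	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>

	#define NUM_PORTS	(1U << 16)

	static uint8_t port_bound[NUM_PORTS / 8];	/* bound ports bitmap */

	void scan_bound_ports_tcp(void)
	{
		char line[256];
		unsigned int port, state;
		FILE *fp = fopen("/proc/net/tcp", "r");

		if (!fp)
			return;

		memset(port_bound, 0, sizeof(port_bound));

		if (!fgets(line, sizeof(line), fp)) {	/* skip header */
			fclose(fp);
			return;
		}

		while (fgets(line, sizeof(line), fp)) {
			/* " sl local_addr:port rem_addr:port st ..." */
			if (sscanf(line, "%*u: %*x:%x %*x:%*x %x",
				   &port, &state) != 2)
				continue;

			if (state == 0x0a)	/* TCP_LISTEN */
				port_bound[port / 8] |= 1 << (port % 8);
		}

		fclose(fp);
	}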
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
This introduces a number of fundamental changes that would be quite
messy to split. Summary:
- advertised window scaling can be as big as we want; we just need
to clamp window sizes to avoid exceeding the size of our "discard"
buffer for unacknowledged data from the socket
- add macros to compare sequence numbers (a typical implementation is
sketched after this list)
- force sending ACK to guest/tap on PSH segments, always in pasta
mode, whenever we see an overlapping segment, or when we reach a
given threshold compared to our window
- we don't actually use recvmmsg() here; fix comments and label
- introduce pools for pre-opened sockets and pipes, to decrease
latency on new connections
- set receiving and sending buffer sizes to the maximum allowed; the
kernel will clamp and round them appropriately
- defer clean-up of spliced and non-spliced connections to a timer
- in tcp_send_to_tap(), there's no need anymore to keep a large
buffer, shrink it down to what we actually need
- introduce SO_RCVLOWAT setting and activity tracking for spliced
connections, to coalesce data moved by splice() calls as much as
possible
- as we now have a compacted connection table, there's no need to
keep sparse bitmaps tracking connection activity -- simply go
through active connections with a loop in the timer handler
- always clamp the advertised window to half our sending buffer,
too, to minimise retransmissions from the guest/tap
- set TCP_QUICKACK for the originating socket in spliced connections:
there's no need to delay ACKs there
- fix up timeout for unacknowledged data from socket
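The sequence number comparisons mentioned in the list are typically
done with the usual wrap-around-safe idiom; as an illustration, not
necessarily the exact macros added here:

	#include <stdint.h>

	/* Compare 32-bit sequence numbers modulo 2^32: cast the
	 * difference to a signed type so wrap-around is handled
	 * correctly.
	 */
	#define SEQ_LT(a, b)	((int32_t)((uint32_t)(a) - (uint32_t)(b)) < 0)
	#define SEQ_LE(a, b)	((int32_t)((uint32_t)(a) - (uint32_t)(b)) <= 0)
	#define SEQ_GT(a, b)	((int32_t)((uint32_t)(a) - (uint32_t)(b)) > 0)
	#define SEQ_GE(a, b)	((int32_t)((uint32_t)(a) - (uint32_t)(b)) >= 0)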
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
In case we need to reinitialise the tap interface, make that
relatively transparent to processes running in the namespace.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
passt is stable enough, and dropping O_DSYNC reduces the impact of
capturing packets on timing while running tests.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
If the guest disconnects, and a fixed name (without a timestamp) was
passed for the pcap file, we would otherwise lose the packets captured
up to that point.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
The initial timestamp was not initialised, so timers for protocol
handlers sometimes wouldn't run at all.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Seen while testing: the advertised router lifetime expires while we're
flooding the tap interface with UDP packets, the router advertisement
comes too late, and the kernel drops the default router in the
namespace. This should only affect testing, so go for the maximum
allowed value, that is, 9000 seconds.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Until now, messages would be passed to protocol handlers in a single
batch only if they happened to be dequeued in a row. Packets
interleaved between different connections would result in multiple
calls to the same protocol handler for a single connection.
Instead, keep track of incoming packet descriptors, arrange them in
sequences, and call protocol handlers only once the input messages
have been completely sorted into batches.
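A rough sketch of the batching idea, with invented names: collect
descriptors into per-connection sequences, even when packets for
different connections arrive interleaved, then make one handler call
per connection:

	#include <stddef.h>

	#define MAX_PKTS	256
	#define MAX_SEQS	64

	struct pkt_desc {
		int conn_idx;		/* owning connection */
		const char *data;
		size_t len;
	};

	struct pkt_seq {
		int conn_idx;
		int count;
		const struct pkt_desc *p[MAX_PKTS];
	};

	/* Hypothetical per-connection handler taking a whole batch */
	void proto_handler(int conn_idx, const struct pkt_desc **p, int count);

	void sort_and_dispatch(const struct pkt_desc *pkts, int n)
	{
		static struct pkt_seq seqs[MAX_SEQS];
		int nseq = 0, i, j;

		for (i = 0; i < n; i++) {
			struct pkt_seq *s = NULL;

			/* Look for an existing sequence for this
			 * connection, even if other packets came in
			 * between.
			 */
			for (j = 0; j < nseq; j++) {
				if (seqs[j].conn_idx == pkts[i].conn_idx) {
					s = &seqs[j];
					break;
				}
			}

			if (!s) {
				if (nseq == MAX_SEQS)
					break;	/* sketch: ignore overflow */
				s = &seqs[nseq++];
				s->conn_idx = pkts[i].conn_idx;
				s->count = 0;
			}

			if (s->count < MAX_PKTS)
				s->p[s->count++] = &pkts[i];
		}

		/* One call per connection, with the complete batch */
		for (j = 0; j < nseq; j++)
			proto_handler(seqs[j].conn_idx, seqs[j].p,
				      seqs[j].count);
	}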
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
If transparent huge pages are available, madvise() will do the trick.
While at it, decrease EPOLL_EVENTS for the main loop from 10 to 8,
for slightly better socket fairness.
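A minimal sketch of that, assuming a statically allocated buffer
(name, size and alignment are illustrative):

	#define _GNU_SOURCE
	#include <sys/mman.h>

	#define HUGEPAGE_SIZE	(2UL * 1024 * 1024)

	/* Large packet buffer, aligned so it can actually be backed by
	 * transparent huge pages.
	 */
	static char pkt_buf[16UL * 1024 * 1024]
		__attribute__ ((aligned(HUGEPAGE_SIZE)));

	void buf_advise_thp(void)
	{
		/* Best effort: if THP isn't available, or madvise()
		 * fails, we simply keep using regular pages.
		 */
		madvise(pkt_buf, sizeof(pkt_buf), MADV_HUGEPAGE);
	}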
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Similarly to the decrease in TCP_TAP_FRAMES, this improves fairness,
with a very small impact on performance.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
That might just mean we shut down the socket -- but we still have to
go through the other states to ensure an orderly shutdown guest-side.
While at it, drop the EPOLLHUP check for unhandled states: we should
never hit that, but if we do, resetting the connection at that point
is probably the wrong thing to do.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Now that we dropped EPOLLET, we'll keep getting EPOLLRDHUP, and
possibly EPOLLIN, even if there's nothing to read anymore.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
EPOLLHUP just means we shut down one side of the connection on
*one* socket: remember, we have two sockets here.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
...throughput isn't everything: this leads (of course) to horrible
latency with small, sparse messages. As a consequence, there's no
need to set TCP_NODELAY either.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
If we're at the first message in a batch, it's safe to get the window
value from it, and there's no need to subtract anything for a
comparison that's not even done -- we'll override it later in any case
if we find messages with a higher ACK sequence number.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
...tcp_handler_splice() doesn't guarantee we read all the available
data: the sending buffer might be full.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Checking it only when the cached value is smaller than the current
window of the receiver is not enough: it might shrink further while
the receiver window is growing.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
If pasta created the namespace, it's probably expected that processes
started in the same namespace are terminated once pasta exits. Scan
procfs namespace links for corresponding processes and, if any are
found, send them SIGQUIT, followed by SIGKILL after one second.
While at it, make the signal handler reap the would-be zombies
resulting from clone().
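A rough sketch of the procfs scan, with hypothetical names: compare
each process's /proc/<pid>/ns/net link target against our own
namespace reference, and signal the matches:

	#include <dirent.h>
	#include <signal.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <unistd.h>

	/* ns_ref is the link target for our namespace, e.g. "net:[...]" */
	void ns_kill(const char *ns_ref, int sig)
	{
		DIR *d = opendir("/proc");
		struct dirent *e;

		if (!d)
			return;

		while ((e = readdir(d))) {
			char path[64], link[64];
			pid_t pid = atoi(e->d_name);
			ssize_t n;

			if (pid <= 0 || pid == getpid())
				continue;	/* not a process, or us */

			snprintf(path, sizeof(path), "/proc/%i/ns/net", pid);
			n = readlink(path, link, sizeof(link) - 1);
			if (n < 0)
				continue;
			link[n] = '\0';

			if (!strcmp(link, ns_ref))
				kill(pid, sig);
		}

		closedir(d);
	}

	/* On exit: ns_kill(ref, SIGQUIT); sleep(1); ns_kill(ref, SIGKILL); */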
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
If we couldn't write the whole batch of received packets to the
socket, and we have missing segments, we still need to request their
retransmission right away; otherwise, it will take ages for the guest
to figure out we're missing them.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
...instead of waiting for the remote peer to do that -- it's
especially important in case we request retransmissions from the
guest, but it also helps speed up slow start. This should probably be
a configurable behaviour in the future.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Seen with iperf3: a control connection is established, no data flows
for a while, and all segments are acknowledged. The socket side then
starts closing the connection, and we immediately time out because the
last ACK from the tap was one minute before that.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>