passt: Plug A Simple Socket Transport
passt implements a translation layer between a Layer-2 network interface (tap) and native Layer-4 sockets (TCP, UDP, ICMP/ICMPv6 echo) on a host. It doesn't require any capabilities or privileges, and it can be used as a simple replacement for Slirp.
- General idea
- Non-functional Targets
- Interfaces and Environment
- Services
- Addresses
- Protocols
- Ports
- Try it
- Contribute
General idea
When container workloads are moved to virtual machines, the network traffic is typically forwarded by interfaces operating at data link level. Some components in the containers ecosystem (such as service meshes), however, expect applications to run locally, with visible sockets and processes, for the purposes of socket redirection, monitoring, port mapping.
To solve this issue, user mode networking, as provided e.g. by Slirp, libslirp, slirp4netns, can be used. However, these existing solutions implement a full TCP/IP stack, replaying traffic on sockets that are local to the pod of the service mesh. This creates the illusion of application processes running on the same host, possibly separated by user namespaces.
While being almost transparent to the service mesh infrastructure, that kind of solution comes with a number of downsides:
- three different TCP/IP stacks (guest, adaptation and host) need to be traversed for every service request. There is no chance to implement zero-copy mechanisms, and the number of context switches increases dramatically
- addressing needs to be coordinated to create the pretense of consistent addresses and routes between guest and host environments. This typically needs a NAT with masquerading, or some form of packet bridging
- the traffic seen by the service mesh and observable externally is a distant replica of the packets forwarded to and from the guest environment:
  - TCP congestion windows and network buffering mechanisms in general operate differently from what would be naturally expected by the application
  - protocols carrying addressing information might pose additional challenges, as the applications don't see the same set of addresses and routes as they would if deployed with regular containers
passt implements a thinner layer between guest and host, one that provides only what's strictly needed to pretend processes are running locally. A further, full TCP/IP stack is not necessarily needed. Some sort of TCP adaptation is needed, however, because this layer runs without the CAP_NET_RAW capability: we can't create raw IP sockets on the pod, and therefore need to map packets at Layer-2 to Layer-4 sockets offered by the host kernel.
The problem and this approach are illustrated in more detail, with diagrams, here.
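To give a rough idea of the shape this takes in practice, the core of such a translation layer can be a single event loop multiplexing the Layer-2 endpoint (the socket carrying Ethernet frames from the guest) and the Layer-4 sockets opened on the host. The sketch below is illustrative only, not passt's actual code, and the handler names in it are hypothetical placeholders:

```c
/*
 * Sketch only: one epoll loop multiplexing the Layer-2 endpoint (Ethernet
 * frames from the guest) and the Layer-4 sockets opened on the host.
 * The handlers are hypothetical placeholders, not passt functions.
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/epoll.h>
#include <sys/socket.h>

#define MAX_EVENTS	8

/* Hypothetical placeholder: a real implementation would parse the Ethernet
 * frame and write its payload to the matching Layer-4 host socket; here we
 * just drain the descriptor. */
void handle_tap_frame(int tap_fd)
{
	char buf[2048];

	recv(tap_fd, buf, sizeof(buf), 0);
}

/* Hypothetical placeholder: a real implementation would wrap the data in
 * the corresponding Layer-2/3/4 headers and send it to the guest; here we
 * just drain the descriptor. */
void handle_sock_data(int sock_fd)
{
	char buf[2048];

	recv(sock_fd, buf, sizeof(buf), 0);
}

void event_loop(int epoll_fd, int tap_fd)
{
	struct epoll_event events[MAX_EVENTS];
	int i, n;

	for (;;) {
		n = epoll_wait(epoll_fd, events, MAX_EVENTS, -1);
		if (n < 0) {
			perror("epoll_wait");
			exit(EXIT_FAILURE);
		}

		for (i = 0; i < n; i++) {
			if (events[i].data.fd == tap_fd)
				handle_tap_frame(tap_fd);	/* guest to host */
			else
				handle_sock_data(events[i].data.fd); /* host to guest */
		}
	}
}
```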
Non-functional Targets
Security and maintainability goals:
- no dynamic memory allocation
- ~2,000 LoC target
- no external dependencies
Interfaces and Environment
passt exchanges packets with qemu via a UNIX domain socket, using the socket back-end in qemu. Currently, qemu can only connect to a listening process via TCP. Two temporary solutions are available:
- a patch for qemu
- a wrapper, qrap, that connects to a UNIX domain socket and starts qemu, which can now use the file descriptor that's already opened
This approach, compared to using a tap device, doesn't require any security capabilities, as we don't need to create any interface.
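For reference, qemu's stream-oriented socket back-end prefixes each Ethernet frame with a 32-bit length in network byte order, so the receiving side reads a length, then that many bytes of frame. A minimal, illustrative reader for that framing (not passt's actual implementation) might look like this:

```c
/*
 * Sketch: read one length-prefixed Ethernet frame from a stream socket, as
 * used by qemu's socket back-end (32-bit frame length in network byte
 * order, followed by the frame itself). Not passt's actual implementation.
 */
#include <arpa/inet.h>
#include <stdint.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Return the frame length on success, -1 on error or truncated input. */
ssize_t read_frame(int fd, uint8_t *buf, size_t size)
{
	uint32_t len;
	size_t got = 0;
	ssize_t n;

	/* Length header, network byte order */
	if (recv(fd, &len, sizeof(len), MSG_WAITALL) != sizeof(len))
		return -1;

	len = ntohl(len);
	if (len > size)
		return -1;		/* Frame larger than our buffer */

	while (got < len) {
		n = recv(fd, buf + got, len - got, 0);
		if (n <= 0)
			return -1;
		got += (size_t)n;
	}

	return (ssize_t)len;
}
```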
Services
passt provides some minimalistic implementations of networking services that can't practically run on the host:
- ARP proxy, which resolves the address of the host (used as the gateway) to the original MAC address of the host (see the sketch after this list)
- DHCP server, a simple implementation handing out one single IPv4 address to the guest, namely, the same address as the first one configured for the upstream host interface, and passing the nameservers configured on the host
- NDP proxy, which can also assign prefix and nameserver using SLAAC
- DHCPv6 server: a simple implementation handing out one single IPv6 address to the guest, namely, the same address as the first one configured for the upstream host interface, and passing the first nameserver configured on the host
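As an illustration of the ARP proxy mentioned above, the sketch below rewrites an ARP request from the guest into a reply resolving the requested address to a given MAC address. It's a simplified example, not passt's actual implementation:

```c
/*
 * Sketch: rewrite an ARP request received from the guest into a reply that
 * resolves the requested (gateway) address to a given MAC address, in
 * place. Simplified example, not passt's actual implementation.
 */
#include <string.h>
#include <arpa/inet.h>
#include <net/ethernet.h>
#include <netinet/if_ether.h>

/* Returns 0 on success, -1 if this isn't an Ethernet/IPv4 ARP request. */
int arp_reply(struct ether_header *eh, struct ether_arp *ah,
	      const unsigned char mac[ETH_ALEN])
{
	unsigned char ip[4];

	if (ntohs(ah->arp_op) != ARPOP_REQUEST ||
	    ntohs(ah->arp_hrd) != ARPHRD_ETHER ||
	    ntohs(ah->arp_pro) != ETHERTYPE_IP)
		return -1;

	ah->arp_op = htons(ARPOP_REPLY);

	/* The requested address becomes the sender of the reply... */
	memcpy(ip, ah->arp_tpa, sizeof(ip));
	memcpy(ah->arp_tpa, ah->arp_spa, sizeof(ip));
	memcpy(ah->arp_spa, ip, sizeof(ip));

	/* ...resolved to our MAC, and sent back to the original requester */
	memcpy(ah->arp_tha, ah->arp_sha, ETH_ALEN);
	memcpy(ah->arp_sha, mac, ETH_ALEN);

	memcpy(eh->ether_dhost, ah->arp_tha, ETH_ALEN);
	memcpy(eh->ether_shost, mac, ETH_ALEN);

	return 0;
}
```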
Addresses
For IPv4, the guest is assigned, via DHCP, the same address as the upstream interface of the host, and the same default gateway as the default gateway of the host. Addresses are translated in case the guest is seen using a different address from the assigned one.
For IPv6, the guest is assigned, via SLAAC, the same prefix as the upstream interface of the host, the same default route as the default route of the host, and, if a DHCPv6 client is running on the guest, also the same address as the upstream address of the host. This means that, with a DHCPv6 client on the guest, addresses don't need to be translated. Should the client use a different address, the destination address is translated for packets going to the guest.
For UDP and TCP, for both IPv4 and IPv6, packets addressed to a loopback address are forwarded to the guest with their source address changed to the address of the gateway or first hop of the default route. This mapping is reversed as the guest replies to those packets (on the same TCP connection, or using destination port and address that were used as source for UDP).
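A minimal sketch of the loopback mapping described above, for IPv4 only: here, gw_addr is a placeholder for the host's default gateway (in network byte order), and the IPv4 and TCP/UDP checksums would still need to be updated afterwards.

```c
/*
 * Sketch: IPv4 source mapping as described above. If a packet going to the
 * guest comes from a loopback address, present it as coming from the
 * default gateway instead. 'gw_addr' is a placeholder for the host's
 * default gateway, in network byte order; checksums must be updated after
 * this.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/ip.h>

void map_source_towards_guest(struct iphdr *iph, in_addr_t gw_addr)
{
	/* 127.0.0.0/8: traffic from a local process on the host */
	if ((ntohl(iph->saddr) >> 24) == IN_LOOPBACKNET)
		iph->saddr = gw_addr;
}
```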
Protocols
passt supports TCP, UDP and ICMP/ICMPv6 echo (requests and replies). More details about the TCP implementation are available here, and for the UDP implementation here.
An IGMP proxy is currently work in progress.
Ports
To avoid the need for explicit port mapping configuration, passt binds to all unbound non-ephemeral (0-49152) TCP and UDP ports. Binding to low ports (0-1023) will fail without additional capabilities, and ports already bound (service proxies, etc.) will also not be used.
UDP ephemeral ports are bound dynamically, as the guest uses them.
Service proxies and other services running in the container need to be started before passt starts.
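As a concrete illustration of that binding strategy, the sketch below walks the non-ephemeral port range for IPv4 TCP, binding where possible and silently skipping ports that are already taken or privileged. It's illustrative only: a real implementation would also need to cover UDP and IPv6, and keep track of the resulting sockets.

```c
/*
 * Sketch: bind and listen on every currently available TCP port below the
 * ephemeral range, for IPv4, skipping ports that are already bound or that
 * we lack the privileges for. Illustrative only.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

#define EPHEMERAL_MIN	49152	/* ports from here up are left to the host */

void bind_all_tcp4(void)
{
	struct sockaddr_in sa = { .sin_family = AF_INET,
				  .sin_addr.s_addr = htonl(INADDR_ANY) };
	unsigned int port;
	int s;

	for (port = 1; port < EPHEMERAL_MIN; port++) {
		s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
		if (s < 0)
			continue;

		sa.sin_port = htons(port);
		if (bind(s, (struct sockaddr *)&sa, sizeof(sa)) ||
		    listen(s, 1)) {
			close(s);	/* in use, or not allowed: skip it */
			continue;
		}

		/* ...register 's' with the event loop for this port */
	}
}
```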
Try it
- build from source:

      git clone https://passt.top/passt
      cd passt
      make
- to make passt not fork into background when it starts, and to get verbose debug information, build with:

      CFLAGS="-DDEBUG" make
- a static build for x86_64 as of the latest commit is also available for convenience here. These binaries are simply built with:

      CFLAGS="-static" make
- run the demo script, which creates a network namespace called passt, sets up a veth pair and addresses, together with NAT for IPv4 and NDP proxying for IPv6, then starts passt in the network namespace:

      doc/demo.sh
- from the same network namespace, start qemu. At the moment, qemu doesn't support UNIX domain sockets for the socket back-end. Two alternatives:
  - use the qrap wrapper, which maps a tap socket descriptor to passt's UNIX domain socket, for example:

        ip netns exec passt ./qrap 5 qemu-system-x86_64 ... -net socket,fd=5 -net nic,model=virtio ...

  - or patch qemu with this patch and start it like this:

        qemu-system-x86_64 ... -net socket,connect=/tmp/passt.socket -net nic,model=virtio
- alternatively, you can use libvirt, with this patch, to start qemu (with the patch mentioned above), with this kind of network interface configuration:

      <interface type='client'>
        <mac address='52:54:00:02:6b:60'/>
        <source path='/tmp/passt.socket'/>
        <model type='virtio'/>
        <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </interface>
- and that's it, you should now have TCP connections, UDP, and ICMP/ICMPv6 echo working from/to the guest for IPv4 and IPv6
- to connect to a service on the VM, just connect to the same port directly with the address of the network namespace. For example, to ssh to the guest, from the main namespace on the host:

      ssh 192.0.2.2
Contribute
Send patches and issue reports to sbrivio@redhat.com.