mirror of https://passt.top/passt synced 2025-06-14 19:35:35 +02:00

flow: Enforce that freeing of closed flows must happen in deferred handlers

Currently, flows are only ever finally freed (and the table compacted)
from the deferred handlers.  Some future ways we want to optimise managing
the flow table will rely on this, so enforce it: rather than having the
TCP code directly call flow_table_compact(), add a boolean return value to
the per-flow deferred handlers.  If true, this indicates that the flow
code itself should free the flow.

This forces all freeing of flows to occur during the flow code's scan of
the table in flow_defer_handler() which opens possibilities for future
optimisations.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
David Gibson 2024-01-16 11:50:42 +11:00 committed by Stefano Brivio
parent 4a849e9526
commit 9c0881d4f6
5 changed files with 21 additions and 15 deletions

flow.c (13 changes)

@@ -81,7 +81,7 @@ void flow_alloc_cancel(union flow *flow)
  * @c: Execution context
  * @hole: Pointer to recently closed flow
  */
-void flow_table_compact(const struct ctx *c, union flow *hole)
+static void flow_table_compact(const struct ctx *c, union flow *hole)
 {
 	union flow *from;
@@ -131,18 +131,23 @@ void flow_defer_handler(const struct ctx *c, const struct timespec *now)
 	}
 
 	for (flow = flowtab + flow_count - 1; flow >= flowtab; flow--) {
+		bool closed = false;
+
 		switch (flow->f.type) {
 		case FLOW_TCP:
-			tcp_flow_defer(c, flow);
+			closed = tcp_flow_defer(flow);
 			break;
 		case FLOW_TCP_SPLICE:
-			tcp_splice_flow_defer(c, flow);
-			if (timer)
+			closed = tcp_splice_flow_defer(flow);
+			if (!closed && timer)
 				tcp_splice_timer(c, flow);
 			break;
 		default:
 			/* Assume other flow types don't need any handling */
 			;
 		}
+
+		if (closed)
+			flow_table_compact(c, flow);
 	}
 }