commit 582d4210
qemu-nbd: Use SOMAXCONN for socket listen() backlog
Eric Blake authored
    Our default of a backlog of 1 connection is rather puny; it gets in
    the way when we are explicitly allowing multiple clients (such as
    qemu-nbd -e N [--shared], or nbd-server-start with its default
    "max-connections":0 for unlimited), but is even a problem when we
    stick to qemu-nbd's default of only 1 active client but use -t
    [--persistent] where a second client can start using the server once
    the first finishes.  While the effects are less noticeable on TCP
    sockets (since the client can poll() to learn when the server is ready
    again), it is definitely observable on Unix sockets, where on Linux, a
    client will fail with EAGAIN and no recourse but to sleep an arbitrary
    amount of time before retrying if the server backlog is already full.
    
    Since QMP nbd-server-start is always persistent, it now always
    requests a backlog of SOMAXCONN; meanwhile, qemu-nbd will request
    SOMAXCONN if persistent, otherwise its backlog should be based on the
    expected number of clients.
    
See https://bugzilla.redhat.com/1925045 for a demonstration of where
our low backlog prevents libnbd from connecting as many parallel
clients as it wants.
    
Reported-by: Richard W.M. Jones <rjones@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
CC: qemu-stable@nongnu.org
Message-Id: <20210209152759.209074-2-eblake@redhat.com>
Tested-by: Richard W.M. Jones <rjones@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>