    58a6fdcc
    nbd/server: Allow MULTI_CONN for shared writable exports
    Eric Blake authored
    
    According to the NBD spec, a server that advertises
    NBD_FLAG_CAN_MULTI_CONN promises that multiple client connections will
    not see any cache inconsistencies: when properly separated by a single
    flush, actions performed by one client will be visible to another
    client, regardless of which client did the flush.
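
    A minimal sketch of that promise, using the libnbd Python bindings
    (not part of this patch; the URI is only an example and assumes a
    writable export that already allows several connections):

        import nbd

        a, b = nbd.NBD(), nbd.NBD()
        a.connect_uri("nbd://localhost/")      # client A
        b.connect_uri("nbd://localhost/")      # client B

        a.pwrite(b"new data", 0)               # A writes ...
        b.flush()                              # ... B issues the flush ...
        assert b.pread(8, 0) == b"new data"    # ... and B must see A's write

        a.shutdown()
        b.shutdown()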
    
    We always satisfy these conditions in qemu - even when we support
    multiple clients, ALL clients go through a single point of reference
    into the block layer, with no local caching.  The effect of one client
    is instantly visible to the next client.  Even if our backend were a
    network device, we argue that any multi-path caching effect that let
    a back-to-back action miss the result of a previous action would be
    a bug in that backend, and not the fault of caching in qemu.  As
    such, it is safe to unconditionally
    advertise CAN_MULTI_CONN for any qemu NBD server situation that
    supports parallel clients.
    
    Note, however, that we don't want to advertise CAN_MULTI_CONN when we
    know that a second client cannot connect (for historical reasons,
    qemu-nbd defaults to a single connection while nbd-server-add and QMP
    commands default to unlimited connections; but we already have
    existing means to let either style of NBD server creation alter those
    defaults).  The effect is visible in the iotest nbd-qemu-allocation:
    'qemu-nbd -r' without -e no longer advertises MULTI_CONN.
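
    For illustration only (none of this is in the patch): once more than
    one client is allowed, e.g. via qemu-nbd --shared, a writable export
    will advertise the flag.  A rough Python sketch, where the image and
    socket names are made up and the sleep is a crude wait for the
    socket:

        import subprocess, time
        import nbd

        # qemu-nbd defaults to one connection; --shared raises the limit
        # and --persistent keeps the server alive between clients.
        subprocess.Popen(["qemu-nbd", "--shared=5", "--persistent",
                          "--format=raw", "--socket=/tmp/demo.sock",
                          "disk.img"])
        time.sleep(1)

        h = nbd.NBD()
        h.connect_unix("/tmp/demo.sock")
        print("multi-conn advertised:", bool(h.can_multi_conn()))
        h.shutdown()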
    
    The harder part of this patch is setting up an iotest to demonstrate
    behavior of multiple NBD clients to a single server.  It might be
    possible with parallel qemu-io processes, but I found it easier to do
    in python with libnbd, with help from Nir and Vladimir in writing
    the test.
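
    A rough sketch of the shape of that test (libnbd Python bindings;
    the URI, client count, and data pattern are illustrative and not
    the actual iotest):

        import nbd

        URI = "nbd://localhost/"     # example URI of the shared export
        clients = []
        for _ in range(3):           # several parallel clients, one server
            h = nbd.NBD()
            h.connect_uri(URI)
            clients.append(h)

        # The flag this patch starts advertising for writable exports
        # that allow parallel clients:
        assert clients[0].can_multi_conn()

        # Cross-client visibility: write on one connection, flush on
        # another, then every connection must read back the new data.
        clients[0].pwrite(b"\x01" * 512, 0)
        clients[1].flush()
        for h in clients:
            assert h.pread(512, 0) == b"\x01" * 512

        for h in clients:
            h.shutdown()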
    
    Signed-off-by: Eric Blake <eblake@redhat.com>
    Suggested-by: Nir Soffer <nsoffer@redhat.com>
    Suggested-by: Vladimir Sementsov-Ogievskiy <v.sementsov-og@mail.ru>
    Message-Id: <20220512004924.417153-3-eblake@redhat.com>
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>