    aio-posix: remove idle poll handlers to improve scalability · d37d0e36
    Stefan Hajnoczi authored
    
    When there are many poll handlers it's likely that some of them are idle
    most of the time.  Remove handlers that haven't had activity recently so
    that the polling loop scales better for guests with a large number of
    devices.
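
    A minimal standalone sketch of the idea in C (illustrative only, not the
    actual aio-posix code; the PollHandler struct, IDLE_TIMEOUT_NS cutoff and
    poll_once() helper are assumptions made up for this example):

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <time.h>

        #define IDLE_TIMEOUT_NS (700 * 1000 * 1000LL)  /* hypothetical cutoff */

        typedef struct PollHandler {
            struct PollHandler *next;
            bool (*poll)(void *opaque);   /* returns true when it found work */
            void *opaque;
            int64_t last_active_ns;       /* last time poll() made progress */
        } PollHandler;

        static int64_t now_ns(void)
        {
            struct timespec ts;
            clock_gettime(CLOCK_MONOTONIC, &ts);
            return ts.tv_sec * 1000000000LL + ts.tv_nsec;
        }

        /* One pass over the polling list; idle handlers are unlinked so that
         * later passes only visit busy ones.  A removed handler would be put
         * back once its fd shows activity again (not shown here). */
        static bool poll_once(PollHandler **list)
        {
            int64_t now = now_ns();
            bool progress = false;

            for (PollHandler **p = list; *p; ) {
                PollHandler *h = *p;

                if (h->poll(h->opaque)) {
                    h->last_active_ns = now;
                    progress = true;
                } else if (now - h->last_active_ns > IDLE_TIMEOUT_NS) {
                    *p = h->next;     /* drop the idle handler from the list */
                    h->next = NULL;
                    continue;
                }
                p = &h->next;
            }
            return progress;
        }

        static bool dummy_poll(void *opaque)
        {
            (void)opaque;
            return false;             /* never finds work, so it goes idle */
        }

        int main(void)
        {
            PollHandler h = {
                .poll = dummy_poll,
                .last_active_ns = now_ns() - 2 * IDLE_TIMEOUT_NS,
            };
            PollHandler *list = &h;

            poll_once(&list);
            printf("handlers left: %d\n", list != NULL);  /* prints 0 */
            return 0;
        }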
    
    This feature only takes effect for the Linux io_uring fd monitoring
    implementation because it is capable of combining fd monitoring with
    userspace polling.  The other implementations can't do that and risk
    starving fds in favor of poll handlers, so don't try this optimization
    when they are in use.
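
    A hedged sketch of how such a gate could look (the FDMonOps struct and
    can_poll_and_monitor flag below are assumptions for illustration, not the
    actual QEMU fdmon interface):

        #include <stdbool.h>
        #include <stdio.h>

        typedef struct FDMonOps {
            const char *name;
            bool can_poll_and_monitor;   /* fds stay monitored while busy-polling */
        } FDMonOps;

        static const FDMonOps backends[] = {
            { "io_uring", true  },   /* combines fd monitoring with userspace polling */
            { "epoll",    false },   /* pruning here could starve fds */
            { "poll",     false },
        };

        /* Only prune idle poll handlers when the backend keeps watching the fds. */
        static bool should_remove_idle_handlers(const FDMonOps *ops)
        {
            return ops->can_poll_and_monitor;
        }

        int main(void)
        {
            for (size_t i = 0; i < sizeof(backends) / sizeof(backends[0]); i++) {
                printf("%-8s: remove idle handlers? %s\n", backends[i].name,
                       should_remove_idle_handlers(&backends[i]) ? "yes" : "no");
            }
            return 0;
        }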
    
    IOPS improves from 10k to 105k when the guest has 100
    virtio-blk-pci,num-queues=32 devices and 1 virtio-blk-pci,num-queues=1
    device for rw=randread,iodepth=1,bs=4k,ioengine=libaio on NVMe.
    
    [Clarified aio_poll_handlers locking discipline explanation in comment
    after discussion with Paolo Bonzini <pbonzini@redhat.com>.
    --Stefan]
    
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
    Link: https://lore.kernel.org/r/20200305170806.1313245-8-stefanha@redhat.com
    Message-Id: <20200305170806.1313245-8-stefanha@redhat.com>