    4c9bca7e
    block/backup: avoid copying less than full target clusters
    John Snow authored
    
    
    During incremental backups, if the target has a cluster size that is
    larger than the backup cluster size and we are backing up to a target
    that cannot (for whatever reason) pull clusters up from a backing image,
    we may inadvertently create unusable incremental backup images.
    
    For example:
    
    If the bitmap tracks changes at a 64KB granularity and we transmit 64KB
    of data at a time but the target uses a 128KB cluster size, it is
    possible that only half of a target cluster will be recognized as dirty
    by the backup block job. When the cluster is allocated on the target
    image but only half populated with data, we lose the ability to
    distinguish between zero padding and uninitialized data.
    
    This does not happen if the target image has a backing file that points
    to the last known good backup.
    
    Even if we have a backing file, though, it's likely going to be faster
    to buffer the redundant data ourselves from the live image than to
    fetch it from the backing file, so let's just always round up to the
    target granularity.
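
    A minimal sketch of that rounding, assuming a power-of-two cluster
    size (illustrative only: round_to_clusters and the macros below are
    local stand-ins for this example, not the actual QEMU code):

        #include <inttypes.h>
        #include <stdint.h>
        #include <stdio.h>

        /* Round down/up to a power-of-two alignment. */
        #define ROUND_DOWN(x, a) ((x) & ~((uint64_t)(a) - 1))
        #define ROUND_UP(x, a)   ROUND_DOWN((x) + (a) - 1, a)

        /* Widen [*offset, *offset + *bytes) to whole target clusters. */
        static void round_to_clusters(uint64_t *offset, uint64_t *bytes,
                                      uint64_t cluster_size)
        {
            uint64_t start = ROUND_DOWN(*offset, cluster_size);
            uint64_t end = ROUND_UP(*offset + *bytes, cluster_size);

            *offset = start;
            *bytes = end - start;
        }

        int main(void)
        {
            /* The scenario above: one 64 KiB dirty chunk, 128 KiB
             * target clusters.  Copying it unrounded would populate
             * only half of a target cluster. */
            uint64_t offset = 192 * 1024, bytes = 64 * 1024;

            round_to_clusters(&offset, &bytes, 128 * 1024);
            printf("offset=%" PRIu64 " bytes=%" PRIu64 "\n", offset, bytes);
            /* -> offset=131072 bytes=131072: one full target cluster. */
            return 0;
        }

    Rounding the start down and the end up guarantees that every target
    cluster we touch is written in full, at the cost of re-reading some
    clean data from the live image.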
    
    The same logic applies to backup modes top, none, and full. Copying
    fractional clusters without the guarantee of COW is dangerous, but even
    if we can rely on COW, it's likely better to just re-copy the data.
    
    Reported-by: Fam Zheng <famz@redhat.com>
    Signed-off-by: John Snow <jsnow@redhat.com>
    Reviewed-by: Fam Zheng <famz@redhat.com>
    Message-id: 1456433911-24718-3-git-send-email-jsnow@redhat.com
    Signed-off-by: Jeff Cody <jcody@redhat.com>