    ef02dac2
    job: detect change of aiocontext within job coroutine
    Paolo Bonzini authored
    
    
    We want to make sure that access to job->aio_context is always done
    under either the BQL or job_mutex. The problem is that using
    aio_co_enter(job->aio_context, job->co) in job_start and job_enter_cond
    makes the coroutine resume immediately, so we cannot hold the job lock
    across the call. Caching the AioContext is not safe either, as it
    might change.
    
    job_start runs under the BQL, so it can freely read job->aio_context,
    but job_enter_cond does not. We want to avoid reading job->aio_context
    in job_enter_cond altogether, therefore:
    1) use aio_co_wake(), since it does not take an AioContext argument
       but uses job->co->ctx instead;
    2) detect a possible discrepancy between job->co->ctx and
       job->aio_context by checking, right after the coroutine resumes
       from yielding, whether job->aio_context has changed. If so,
       reschedule the coroutine to the new context (see the sketch after
       this list).
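
    To illustrate point 1), here is a condensed, hypothetical sketch of
    the wake path in job_enter_cond after this change; the early-return
    checks are elided and the exact locking helpers are assumptions, the
    point is only the aio_co_wake() call:

        /* QEMU-internal context assumed: job.c, with "qemu/osdep.h",
         * "qemu/job.h" and the coroutine/AioContext headers included. */
        void job_enter_cond_locked(Job *job, bool (*fn)(Job *job))
        {
            /* Checks (job started, not busy, fn(job), ...) elided. */
            timer_del(&job->sleep_timer);
            job->busy = true;
            job_unlock();
            /*
             * No job->aio_context read here: aio_co_wake() enters the
             * coroutine in job->co->ctx, whatever that currently is.
             */
            aio_co_wake(job->co);
            job_lock();
        }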
    
    Calling bdrv_try_set_aio_context() will issue the following calls
    (simplified):
    * in terms of bdrv callbacks:
      .drained_begin -> .set_aio_context -> .drained_end
    * in terms of child_job functions:
      child_job_drained_begin -> child_job_set_aio_context -> child_job_drained_end
    * in terms of job functions:
      job_pause_locked -> job_set_aio_context -> job_resume_locked
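
    For reference, the middle step of that chain plausibly boils down to
    something like the sketch below; the exact assertions are
    assumptions, the point is that job->aio_context is only updated under
    both the BQL and job_mutex while the job is quiescent:

        /* Sketch of job_set_aio_context(); assertions are assumptions. */
        void job_set_aio_context(Job *job, AioContext *ctx)
        {
            /* Runs under the BQL, inside the drained section. */
            GLOBAL_STATE_CODE();
            JOB_LOCK_GUARD();
            /* The surrounding drain has paused (or completed) the job. */
            assert(job->paused || job_is_completed_locked(job));
            job->aio_context = ctx;
        }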
    
    We can see that after setting the new AioContext, job_resume_locked
    calls job_enter_cond again, which then invokes aio_co_wake(). But
    while job->aio_context has been updated in job_set_aio_context,
    job->co->ctx has not changed, so the coroutine would be entered in
    the wrong AioContext.
    
    Using aio_co_schedule in job_resume_locked() might seem like a valid
    alternative, but the problem is that the BH resuming the coroutine
    is not scheduled immediately, and if in the meantime another
    bdrv_try_set_aio_context() runs (see test_propagate_mirror() in
    test-block-iothread.c), the first schedule would target the wrong
    AioContext, and the second set of drains would not even manage to
    schedule the coroutine, as job->busy would still be true from the
    previous job_resume_locked().
    
    The solution is to stick with aio_co_wake() and detect, every time
    the coroutine resumes from yielding, whether job->aio_context has
    changed. If so, we can reschedule it to the new context.
    
    Check for the AioContext change in job_do_yield_locked because:
    1) aio_co_reschedule_self must be called from within the running
       coroutine;
    2) since child_job_set_aio_context allows changing the AioContext
       only while the job is paused, this is exactly where the coroutine
       resumes, before any JobDriver code runs (see the sketch below).
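
    A condensed sketch of what this looks like in job_do_yield_locked;
    the locking details around the sleep timer are simplified and should
    be treated as assumptions:

        static void coroutine_fn job_do_yield_locked(Job *job, uint64_t ns)
        {
            AioContext *next_aio_context;

            if (ns != -1) {
                timer_mod(&job->sleep_timer, ns);
            }
            job->busy = false;
            job_unlock();
            qemu_coroutine_yield();
            job_lock();

            /*
             * The coroutine has resumed, but in the meantime the job's
             * AioContext might have changed via
             * bdrv_try_set_aio_context(); if so, move the coroutine to
             * the new context before running any JobDriver code.
             */
            next_aio_context = job->aio_context;
            while (qemu_get_current_aio_context() != next_aio_context) {
                job_unlock();
                aio_co_reschedule_self(next_aio_context);
                job_lock();
                /* It may have changed again while the lock was dropped. */
                next_aio_context = job->aio_context;
            }

            /* Set again by job_enter_cond() before re-entering us. */
            assert(job->busy);
        }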
    
    Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
    Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    Message-Id: <20220926093214.506243-13-eesposit@redhat.com>
    Reviewed-by: Kevin Wolf <kwolf@redhat.com>
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>