Commit 8bafcb21 authored by Paolo Bonzini

memory: add early bail out from cpu_physical_memory_set_dirty_range


This condition is true in the common case, so we can cut out the body of
the function.  In addition, this makes it easier for the compiler to do
at least partial inlining, even if it decides that fully inlining the
function is unreasonable.

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
parent ac1be2ae
@@ -165,6 +165,10 @@ static inline void cpu_physical_memory_set_dirty_range(ram_addr_t start,
     unsigned long end, page;
     unsigned long **d = ram_list.dirty_memory;
 
+    if (!mask && !xen_enabled()) {
+        return;
+    }
+
     end = TARGET_PAGE_ALIGN(start + length) >> TARGET_PAGE_BITS;
     page = start >> TARGET_PAGE_BITS;
     if (likely(mask & (1 << DIRTY_MEMORY_MIGRATION))) {
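
To illustrate the point made in the commit message, here is a minimal standalone sketch of the same early-bail-out shape. It is not the QEMU code: the names (mark_dirty_range, dirty_bitmap, logging_enabled, PAGE_BITS) are hypothetical stand-ins for cpu_physical_memory_set_dirty_range, ram_list.dirty_memory, xen_enabled() and TARGET_PAGE_BITS, and the single bitmap walk stands in for the per-client dirty bitmaps selected by mask. The cheap test at the top returns in the common case, which is what lets the compiler inline at least that prefix at each call site while keeping the rest of the body out of line.

/*
 * Standalone sketch only -- not QEMU code.  All identifiers below are
 * hypothetical stand-ins for the function touched by this commit.
 */
#include <stdbool.h>
#include <stdint.h>

#define PAGE_BITS     12
#define BITS_PER_LONG (8 * sizeof(unsigned long))
#define NUM_PAGES     1024

static unsigned long dirty_bitmap[NUM_PAGES / BITS_PER_LONG];

static inline void mark_dirty_range(uint64_t start, uint64_t length,
                                    uint8_t mask, bool logging_enabled)
{
    uint64_t page, end;

    /*
     * Early bail out: in the common case there is nothing to record, so
     * return before touching any bitmap.  Because this cheap test is the
     * first thing in the function, the compiler can inline at least this
     * prefix at call sites even if it keeps the loop below out of line.
     */
    if (!mask && !logging_enabled) {
        return;
    }

    page = start >> PAGE_BITS;
    end = (start + length + (1ull << PAGE_BITS) - 1) >> PAGE_BITS;
    for (; page < end; page++) {
        dirty_bitmap[page / BITS_PER_LONG] |= 1ul << (page % BITS_PER_LONG);
    }
}

int main(void)
{
    mark_dirty_range(0x1000, 0x2000, 0, false);  /* common case: returns at once */
    mark_dirty_range(0x1000, 0x2000, 1, false);  /* marks pages 1 and 2 dirty */
    return 0;
}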