- Sep 20, 2023
Stefan Hajnoczi authored
The synchronous bdrv_aio_cancel() function needs the acb's AioContext so it can call aio_poll() to wait for cancellation. It turns out that all users run under the BQL in the main AioContext, so this callback is not needed. Remove the callback, mark bdrv_aio_cancel() GLOBAL_STATE_CODE just like its blk_aio_cancel() caller, and poll the main loop AioContext. The purpose of this cleanup is to identify bdrv_aio_cancel() as an API that does not work with the multi-queue block layer.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20230912231037.826804-2-stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Klaus Jensen <k.jensen@samsung.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
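A minimal sketch of the polling idea described above, not the actual block-layer code: every caller runs under the BQL in the main AioContext, so the synchronous wait can simply spin the main loop AioContext until cancellation completes. The "done" predicate is a placeholder for the acb reference-count check the real code uses.

  #include "qemu/osdep.h"
  #include "block/aio.h"

  static void wait_for_cancel(bool (*done)(void *opaque), void *opaque)
  {
      while (!done(opaque)) {
          aio_poll(qemu_get_aio_context(), true);   /* main loop AioContext */
      }
  }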
- Sep 16, 2023
Richard Henderson authored
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
- Sep 15, 2023
Richard Henderson authored
Detect PMULL in cpuinfo; implement the accel hook.
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Richard Henderson authored
Detect PCLMUL in cpuinfo; implement the accel hook.
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Akihiko Odaki authored
IA-64 and PA-RISC host support was already removed by commit b1cef6d0 ("Drop remaining bits of ia64 host support").
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Message-Id: <20230810225922.21600-1-akihiko.odaki@daynix.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
- Sep 08, 2023
Philippe Mathieu-Daudé authored
Use autofree heap allocation instead of a variable-length array on the stack. The codebase has very few VLAs, and if we can get rid of them all we can make the compiler error on new additions. This is a defensive measure against security bugs where an on-stack dynamic allocation isn't correctly size-checked (e.g. CVE-2021-3527).
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-ID: <20230824164706.2652277-1-peter.maydell@linaro.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
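An illustrative sketch of the pattern this commit applies (not the patched file itself): a variable-length array on the stack becomes a g_autofree heap allocation that is released automatically when the variable goes out of scope.

  #include <glib.h>
  #include <stdint.h>

  static void process(size_t len)
  {
      /* before:  uint8_t buf[len];   -- VLA on the stack */
      g_autofree uint8_t *buf = g_new0(uint8_t, len);

      /* ... fill and use buf[0 .. len-1]; freed automatically on return ... */
  }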
Stefan Hajnoczi authored
The ongoing QEMU multi-queue block layer effort makes it possible for multiple threads to process I/O in parallel. The nbd block driver is not compatible with the multi-queue block layer yet because QIOChannel cannot be used easily from coroutines running in multiple threads. This series changes the QIOChannel API to make that possible.

In the current API, calling qio_channel_attach_aio_context() sets the AioContext where qio_channel_yield() installs an fd handler prior to yielding:

  qio_channel_attach_aio_context(ioc, my_ctx);
  ...
  qio_channel_yield(ioc); // my_ctx is used here
  ...
  qio_channel_detach_aio_context(ioc);

This API design has limitations: reading and writing must be done in the same AioContext, and moving between AioContexts involves a cumbersome sequence of API calls that is not suitable for doing on a per-request basis. There is no fundamental reason why a QIOChannel needs to run within the same AioContext every time qio_channel_yield() is called. QIOChannel only uses the AioContext while inside qio_channel_yield(); the rest of the time, QIOChannel is independent of any AioContext.

In the new API, qio_channel_yield() queries the AioContext from the current coroutine using qemu_coroutine_get_aio_context(). There is no need to explicitly attach/detach AioContexts anymore, and qio_channel_attach_aio_context() and qio_channel_detach_aio_context() are gone. One coroutine can read from the QIOChannel while another coroutine writes from a different AioContext. This API change allows the nbd block driver to use QIOChannel from any thread. It's important to keep in mind that the block driver already synchronizes QIOChannel access and ensures that two coroutines never read simultaneously or write simultaneously.

This patch updates all users of qio_channel_attach_aio_context() to the new API. Most conversions are simple, but vhost-user-server requires a new qemu_coroutine_yield() call to quiesce the vu_client_trip() coroutine when not attached to any AioContext.

While the API has become simpler, there is one wart: QIOChannel has a special case for the iohandler AioContext (used for handlers that must not run in nested event loops). I didn't find an elegant way to preserve that behavior, so I added a new API called qio_channel_set_follow_coroutine_ctx(ioc, true|false) for opting in to the new AioContext model. By default QIOChannel uses the iohandler AioContext. Code that formerly called qio_channel_attach_aio_context() now calls qio_channel_set_follow_coroutine_ctx(ioc, true) once after the QIOChannel is created.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Acked-by: Daniel P. Berrangé <berrange@redhat.com>
Message-ID: <20230830224802.493686-5-stefanha@redhat.com>
[eblake: also fix migration/rdma.c]
Signed-off-by: Eric Blake <eblake@redhat.com>
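A sketch of the conversion described above, assuming QEMU's QIOChannel and coroutine headers; the surrounding coroutine code is illustrative rather than taken from a particular caller, and the "old" functions of course only exist before this series.

  #include "qemu/osdep.h"
  #include "qemu/coroutine.h"
  #include "block/aio.h"
  #include "io/channel.h"

  /* Old model: pin the channel to one AioContext around the yield. */
  static void coroutine_fn old_style_wait(QIOChannel *ioc, AioContext *my_ctx)
  {
      qio_channel_attach_aio_context(ioc, my_ctx);
      qio_channel_yield(ioc, G_IO_IN);   /* fd handler installed in my_ctx */
      qio_channel_detach_aio_context(ioc);
  }

  /* New model: opt in once after creating the channel, then yield from any
   * coroutine; the handler follows the current coroutine's AioContext. */
  static void coroutine_fn new_style_wait(QIOChannel *ioc)
  {
      qio_channel_set_follow_coroutine_ctx(ioc, true);
      qio_channel_yield(ioc, G_IO_IN);
  }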
- Sep 01, 2023
Michael Tokarev authored
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-ID: <20230901101302.3618955-9-mjt@tls.msk.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Richard Henderson authored
Use dev_t instead of a string, and ino_t instead of uint64_t. The latter is likely to be identical on modern systems but is more type-correct for usage.
Tested-by: Helge Deller <deller@gmx.de>
Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
- Aug 31, 2023
Michael Tokarev authored
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20230823065335.1919380-3-mjt@tls.msk.ru>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
- Aug 30, 2023
Stefan Hajnoczi authored
liburing does not clear sqe->user_data. We must do it ourselves to avoid undefined behavior in process_cqe() when user_data is used. Note that fdmon-io_uring is currently disabled, so this is a latent bug that does not affect users. Let's merge this fix now to make it easier to enable fdmon-io_uring in the future (and I'm working on that).
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20230426212639.82310-1-stefanha@redhat.com>
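A generic liburing illustration of the pitfall (not the QEMU patch itself): the prep helpers leave sqe->user_data untouched, so a stale value from an earlier submission can surface in the completion. Clearing it explicitly keeps the CQE handler well defined.

  #include <liburing.h>

  static void submit_nop(struct io_uring *ring)
  {
      struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

      io_uring_prep_nop(sqe);
      sqe->user_data = 0;        /* liburing does not clear this for us */
      io_uring_submit(ring);
  }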
- Aug 29, 2023
Zhenwei Pi authored
The first dimension of both to_check and bucket_types_size/bucket_types_units is used as the throttle direction; use THROTTLE_MAX instead of a hard-coded number. Also use ARRAY_SIZE() to avoid a hard-coded number for the second dimension. Hanna noticed that the two arrays should be static, so turn them into static variables.
Reviewed-by: Hanna Czenczek <hreitz@redhat.com>
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
Message-Id: <20230728022006.1098509-8-pizhenwei@bytedance.com>
Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
Zhenwei Pi authored
enum ThrottleDirection is already there; use ThrottleDirection instead of 'bool is_write' in the throttle API, and modify the related code in block, fsdev, cryptodev and tests.
Reviewed-by: Hanna Czenczek <hreitz@redhat.com>
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
Message-Id: <20230728022006.1098509-7-pizhenwei@bytedance.com>
Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
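A minimal sketch of the bool-to-enum change with an illustrative function name; THROTTLE_READ/THROTTLE_WRITE/THROTTLE_MAX follow the enum named in this series, but the rest is an assumption for demonstration only.

  #include <stdint.h>

  typedef enum {
      THROTTLE_READ = 0,
      THROTTLE_WRITE,
      THROTTLE_MAX,
  } ThrottleDirection;

  /* before: static void account_io(bool is_write, uint64_t bytes); */
  static void account_io(ThrottleDirection direction, uint64_t bytes)
  {
      /* callers now read account_io(THROTTLE_WRITE, len) instead of
       * account_io(true, len), and tables can be indexed by direction */
      (void)direction;
      (void)bytes;
  }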
Zhenwei Pi authored
Only one direction is necessary in several scenarios:
- a read-only disk
- operations on a device that are considered *write* only. For example, encrypt/decrypt/sign/verify operations on a cryptodev use a single *write* timer (the read timer callback is defined but never invoked).
Allow a single direction in throttle; this reduces memory, and the upper layer no longer needs a dummy callback.
Reviewed-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Hanna Czenczek <hreitz@redhat.com>
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
Message-Id: <20230728022006.1098509-4-pizhenwei@bytedance.com>
Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
Zhenwei Pi authored
Use enum ThrottleDirection instead of a numeric index.
Reviewed-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Hanna Czenczek <hreitz@redhat.com>
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
Message-Id: <20230728022006.1098509-2-pizhenwei@bytedance.com>
Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
- Aug 09, 2023
Helge Deller authored
Fix a crash in qemu-user when running cat /proc/self/maps in a chroot where /proc isn't mounted. The problem was introduced by commit 3ce3dd8c ("util/selfmap: Rewrite using qemu/interval-tree.h"): open_self_maps_1() calls read_self_maps(), which returns NULL if it can't read the host's /proc/self/maps file. That NULL is then fed into interval_tree_iter_first(), which doesn't check whether the root node is NULL. Fix it by checking whether root is NULL and returning NULL in that case.
Signed-off-by: Helge Deller <deller@gmx.de>
Fixes: 3ce3dd8c ("util/selfmap: Rewrite using qemu/interval-tree.h")
Message-Id: <ZNOsq6Z7t/eyIG/9@p100>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
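A hedged sketch of the guard: whether the check lives in the caller or inside interval_tree_iter_first() itself, the idea is the same; the wrapper below is illustrative, not the literal patch.

  #include "qemu/osdep.h"
  #include "qemu/interval-tree.h"

  /* Find the first mapping overlapping [start, last], tolerating a missing
   * host /proc/self/maps (read_self_maps() returned NULL). */
  static IntervalTreeNode *first_map(IntervalTreeRoot *root,
                                     uint64_t start, uint64_t last)
  {
      if (!root) {
          return NULL;          /* no map info: /proc isn't mounted */
      }
      return interval_tree_iter_first(root, start, last);
  }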
- Aug 08, 2023
Richard Henderson authored
We will want to be able to search the set of mappings. For this patch, the two users iterate the tree in order.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
- Aug 03, 2023
Thomas Huth authored
Clang complains:

  ../util/oslib-win32.c:483:56: error: omitting the parameter name in a
  function definition is a C2x extension [-Werror,-Wc2x-extensions]
  win32_close_exception_handler(struct _EXCEPTION_RECORD*,
                                                         ^

Fix it by adding parameter names.
Message-Id: <20230728142748.305341-4-thuth@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
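A stand-alone illustration of the warning (the real function lives in util/oslib-win32.c and its full signature is not shown here): a definition whose parameter has no name is only valid from C2x onward, so clang under -Werror,-Wc2x-extensions rejects it, and naming the parameter is the whole fix.

  struct _EXCEPTION_RECORD;

  /* warns:  static void handler(struct _EXCEPTION_RECORD *) { } */
  static void handler(struct _EXCEPTION_RECORD *exception_record)
  {
      (void)exception_record;    /* named parameter: accepted pre-C2x */
  }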
- Aug 01, 2023
Anthony PERARD authored
thread_pool_free() might have been called on the `pool`, which would be a reason for worker_thread() to quit. In this case, `pool->request_cond` is being destroyed. If worker_thread() doesn't manage to signal `request_cond` before it is destroyed by thread_pool_free(), we get:

  util/qemu-thread-posix.c:198: qemu_cond_signal: Assertion `cond->initialized' failed.

One backtrace:

  __GI___assert_fail (assertion=0x55555614abcb "cond->initialized", file=0x55555614ab88 "util/qemu-thread-posix.c", line=198, function=0x55555614ad80 <__PRETTY_FUNCTION__.17104> "qemu_cond_signal") at assert.c:101
  qemu_cond_signal (cond=0x7fffb800db30) at util/qemu-thread-posix.c:198
  worker_thread (opaque=0x7fffb800dab0) at util/thread-pool.c:129
  qemu_thread_start (args=0x7fffb8000b20) at util/qemu-thread-posix.c:505
  start_thread (arg=<optimized out>) at pthread_create.c:486

Reported here: https://lore.kernel.org/all/ZJwoK50FcnTSfFZ8@MacBook-Air-de-Roger.local/T/#u

To avoid the issue, keep the lock held while signalling `request_cond`.
Fixes: 900fa208 ("thread-pool: replace semaphore with condition variable")
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20230714152720.5077-1-anthony.perard@citrix.com>
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
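A generic pthread sketch of the race and the fix (QEMU's qemu_cond/qemu_mutex wrappers behave the same way): if the worker drops the lock before signalling, the destroying thread can take the lock, observe that the worker is gone, and destroy the condvar before the signal lands. Signalling while the lock is still held closes that window.

  #include <pthread.h>

  static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
  static pthread_cond_t request_cond = PTHREAD_COND_INITIALIZER;
  static int workers_running = 1;

  static void worker_exit_path(void)
  {
      pthread_mutex_lock(&lock);
      workers_running--;
      pthread_cond_signal(&request_cond);   /* still holding the lock */
      pthread_mutex_unlock(&lock);
  }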
- Jul 31, 2023
Richard Henderson authored
While less susceptible to optimization problems than left and right, interval_tree_iter_next also reads rb_parent(), so make sure that stores and loads are atomic. This goes further than technically required, changing all loads to be atomic rather than only the ones on the iteration side. But it doesn't really affect the code generation on the rebalance side, and it is cleaner to handle everything the same way.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Richard Henderson authored
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Richard Henderson authored
Ensure that the stores to rb_left and rb_right are complete before inserting the new node into the tree. Otherwise a concurrent reader could see garbage in the new leaf.
Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Richard Henderson authored
Fixes a race condition (generally without optimization) in which the subtree is re-read after the protecting if condition.
Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
- Jul 10, 2023
Thomas Huth authored
We recently introduced "-run-with" for options that influence the runtime behavior of QEMU. This option has the big advantage that it can group related options (so that it is easier for the users to spot them) and that the options become introspectable via QMP this way. So let's start moving more switches into this option group, starting with "-chroot" now.
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Michael Tokarev <mjt@tls.msk.ru>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Message-Id: <20230703074447.17044-1-thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
- Jul 08, 2023
Richard Henderson authored
Detect CRYPTO in cpuinfo; implement the accel hooks.
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Richard Henderson authored
Detect AES in cpuinfo; implement the accel hooks.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Richard Henderson authored
Detect AES in cpuinfo; implement the accel hooks.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Richard Henderson authored
Move the code from tcg/. Fix a bug in that PPC_FEATURE2_ARCH_3_10 is actually spelled PPC_FEATURE2_ARCH_3_1.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
- Jun 27, 2023
Marc-André Lureau authored
Introduce qemu_win32_map_alloc() and qemu_win32_map_free() to allocate a shared memory mapping. The handle can be used to share the mapping with another process. Teach qemu_create_displaysurface() to allocate shared memory. Following patches will introduce other places for shared memory allocation. Other patches for -display dbus will share the memory with the client when possible, to avoid expensive memory copies between the processes.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20230606115658.677673-10-marcandre.lureau@redhat.com>
- Jun 13, 2023
Philippe Mathieu-Daudé authored
<libkern/OSCacheControl.h> describes sys_icache_invalidate() as "equivalent to sys_cache_control(kCacheFunctionPrepareForExecution)", having kCacheFunctionPrepareForExecution defined as:

  /* Prepare memory for execution. This should be called
   * after writing machine instructions to memory, before
   * executing them. It syncs the dcache and icache. [...] */

Since the dcache is also sync'd, we can avoid the sys_dcache_flush() call when both rx/rw pointers are equal.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Message-Id: <20230605195911.96033-1-philmd@linaro.org>
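A hedged sketch of the optimization on an Apple host (function name and signature chosen for illustration): because sys_icache_invalidate() also syncs the dcache, the separate sys_dcache_flush() is only needed when the writable (rw) and executable (rx) views of the code buffer are distinct mappings.

  #include <libkern/OSCacheControl.h>
  #include <stdint.h>
  #include <stddef.h>

  static void flush_code(uintptr_t rx, uintptr_t rw, size_t len)
  {
      if (rw != rx) {
          sys_dcache_flush((void *)rw, len);
      }
      sys_icache_invalidate((void *)rx, len);   /* also syncs the dcache */
  }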
Philippe Mathieu-Daudé authored
Per the cache(3) man page, sys_icache_invalidate() and sys_dcache_flush() are declared in <libkern/OSCacheControl.h>.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230605175647.88395-2-philmd@linaro.org>
Ivan Klokov authored
Add the QEMU log option 'vpu' to log vector extension registers, in the same way as gpr/fpu.
Signed-off-by: Ivan Klokov <ivan.klokov@syntacore.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20230410124451.15929-2-ivan.klokov@syntacore.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
- Jun 06, 2023
Paolo Bonzini authored
qatomic_mb_read and qatomic_mb_set were the very first atomic primitives introduced for QEMU; their semantics are unclear and they provide a false sense of safety. The last use of qatomic_mb_read() has been removed, so delete it. qatomic_mb_set() instead can survive as an optimized qatomic_set()+smp_mb(), similar to Linux's smp_store_mb(), but rename it to qatomic_set_mb() to match the order of the two operations.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
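A sketch of the equivalence described above, using the qemu/atomic.h primitives: the renamed qatomic_set_mb() amounts to an ordinary atomic store followed by a full memory barrier, with the name ordered to match the two operations.

  #include "qemu/osdep.h"
  #include "qemu/atomic.h"

  static int flag;

  static void publish_flag(void)
  {
      /* what qatomic_set_mb(&flag, 1) amounts to: */
      qatomic_set(&flag, 1);
      smp_mb();
  }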
- Jun 05, 2023
Hanna Czenczek authored
bdrv_pad_request() was the main user of qemu_iovec_init_extended(). HEAD^ has removed that use, so we can remove qemu_iovec_init_extended() now. The only remaining user is qemu_iovec_init_slice(), which can easily inline the small part it really needs. Note that qemu_iovec_init_extended() offered a memcpy() optimization to initialize the new I/O vector. qemu_iovec_concat_iov(), which is used to replace its functionality, does not, but calls qemu_iovec_add() for every single element. If we decide this optimization was important, we will need to re-implement it in qemu_iovec_concat_iov(), which might also benefit its pre-existing users.
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
Message-Id: <20230411173418.19549-4-hreitz@redhat.com>
Hanna Czenczek authored
We want to inline qemu_iovec_init_extended() in block/io.c for padding requests, and having access to qiov_slice() is useful for this. As a public function, it is renamed to qemu_iovec_slice(). (We will need to count the number of I/O vector elements of a slice there, and then later process this slice. Without qiov_slice(), we would need to call qemu_iovec_subvec_niov(), and all further IOV-processing functions may need to skip prefixing elements to accommodate a qiov_offset. Because qemu_iovec_subvec_niov() internally calls qiov_slice(), we can just have the block/io.c code call qiov_slice() itself, thus get the number of elements, and also create an iovec array with the superfluous prefixing elements stripped, so the following processing functions no longer need to skip them.)
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
Message-Id: <20230411173418.19549-2-hreitz@redhat.com>
- Jun 02, 2023
Eric Blake authored
We have several limitations and bugs worth fixing; they are inter-related enough that it is not worth splitting this patch into smaller pieces:

* ".5k" should work to specify 512, just as "0.5k" does
* "1.9999k" and "1." + "9"*50 + "k" should both produce the same result of 2048 after rounding
* "1." + "0"*350 + "1B" should not be treated the same as "1.0B"; underflow in the fraction should not be lost
* "7.99e99" and "7.99e999" look similar, but our code was doing a read-out-of-bounds on the latter because it was not expecting ERANGE due to overflow. While we document that scientific notation is not supported, and the previous patch actually fixed qemu_strtod_finite() to no longer return ERANGE overflows, it is easier to pre-filter than to try and determine after the fact if strtod() consumed more than we wanted.

Note that this is a low-level semantic change (when endptr is not NULL, we can now successfully parse with a scale of 'E' and then report trailing junk, instead of failing outright with EINVAL); but an earlier commit already argued that this is not a high-level semantic change since the only caller passing in a non-NULL endptr also checks that the tail is whitespace-only.
Fixes: https://gitlab.com/qemu-project/qemu/-/issues/1629
Fixes: cf923b78 ("utils: Improve qemu_strtosz() to have 64 bits of precision", 6.0.0)
Fixes: 7625a1ed ("utils: Use fixed-point arithmetic in qemu_strtosz", 6.0.0)
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Hanna Czenczek <hreitz@redhat.com>
Message-Id: <20230522190441.64278-20-eblake@redhat.com>
[eblake: tweak function comment for accuracy]
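A usage sketch based on the examples in the commit message, assuming the qemu_strtosz() declaration from qemu/cutils.h; the comments restate the behaviour described above rather than adding new guarantees.

  #include "qemu/osdep.h"
  #include "qemu/cutils.h"

  static void strtosz_examples(void)
  {
      uint64_t bytes;

      qemu_strtosz(".5k", NULL, &bytes);      /* now accepted: 512 */
      qemu_strtosz("1.9999k", NULL, &bytes);  /* rounds to 2048 */
      qemu_strtosz("7.99e999", NULL, &bytes); /* rejected cleanly, no overread */
  }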
Eric Blake authored
Previous patches changed all integral qemu_strto*() error paths to guarantee that *value is never left uninitialized. Do likewise for qemu_strtod. Also, tighten qemu_strtod_finite() to never return a non-finite value (prior to this patch, we were rejecting "inf" with -EINVAL and unspecified result 0.0, but failing "9e999" with -ERANGE and HUGE_VAL - which is infinite on IEEE machines - despite our function claiming to recognize only finite values).

Auditing callers, we have no external callers of qemu_strtod, and among the callers of qemu_strtod_finite():

- qapi/qobject-input-visitor.c:qobject_input_type_number_keyval() and qapi/string-input-visitor.c:parse_type_number() reject all errors (it does not matter what we store)
- utils/cutils.c:do_strtosz() incorrectly assumes that *endptr points to '.' on all failures (that is, it is not distinguishing between EINVAL and ERANGE), and therefore still does the WRONG THING for "9.9e999". The change here does not entirely fix that (a later patch will tackle this more systematically), but at least it fixes the read-out-of-bounds first diagnosed in https://gitlab.com/qemu-project/qemu/-/issues/1629
- our testsuite, which we can update to match what we document

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Hanna Czenczek <hreitz@redhat.com>
CC: qemu-stable@nongnu.org
Message-Id: <20230522190441.64278-19-eblake@redhat.com>
Eric Blake authored
Rather than open-coding two different ways to check for an unwanted negative sign, reuse the same code in both functions. That way, if we decide down the road to accept "-0" instead of rejecting it, we have fewer places to change. Also, it means we now get ERANGE instead of EINVAL for negative values in qemu_strtosz, which is reasonable for what it represents. This in turn changes the expected output of a couple of iotests. The change is not quite complete: negative fractional scaled values can trip us up. This will be fixed in a later patch addressing other issues with fractional scaled values.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Hanna Czenczek <hreitz@redhat.com>
Message-Id: <20230522190441.64278-18-eblake@redhat.com>
Eric Blake authored
Our goal in writing qemu_strtoi() and friends is to have an interface harder to abuse than libc's strtol(). Leaving the return value uninitialized on some but not all error paths does not lend itself well to this goal, and our documentation wasn't helpful on what to expect. Note that the previous patch changed all qemu_strtosz() EINVAL error paths to slam value to 0 rather than stay uninitialized, even when the EINVAL error occurs because of trailing junk. But for the remaining integral qemu_strto*, it's easier to return the parsed value than to force things back to zero, in part because of how check_strtox_error works, in part because people expect that from libc strto* (while there is no libc strtosz to compare to), and in part because doing so creates less churn in the testsuite. Here, the list of affected callers is much longer ('git grep "qemu_strto[ui]" "*.c" "**/*.c" | grep -v tests/ |wc -l' outputs 107, although a few of those are the implementation in cutils.c), so touching as little as possible is the wisest course of action.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Hanna Czenczek <hreitz@redhat.com>
Message-Id: <20230522190441.64278-17-eblake@redhat.com>
Eric Blake authored
Making callers determine whether or not *value was populated on error is not nice for usability. Pre-patch, we have unit tests that check that *result is left unchanged on most EINVAL errors and set to 0 on many ERANGE errors. This is subtly different from libc strtoumax() behavior, which returns UINT64_MAX on ERANGE errors, as well as different from our parse_uint(), which slams to 0 on EINVAL on the grounds that we want our functions to be harder to mis-use than strtoumax().

Let's audit callers:

- hw/core/numa.c:parse_numa() was fixed in the previous patch to check for errors
- migration/migration-hmp-cmds.c:hmp_migrate_set_parameter(), monitor/hmp.c:monitor_parse_arguments(), qapi/opts-visitor.c:opts_type_size(), qapi/qobject-input-visitor.c:qobject_input_type_size_keyval(), qemu-img.c:cvtnum_full(), qemu-io-cmds.c:cvtnum(), target/i386/cpu.c:x86_cpu_parse_featurestr(), and util/qemu-option.c:parse_option_size() appear to reject all failures (although some with distinct messages for ERANGE as opposed to EINVAL), so it doesn't matter what is in the value parameter on error
- all remaining callers are in the testsuite, where we can tweak our expectations to match our new desired behavior

Advancing to the end of the string parsed on overflow (ERANGE), while still returning 0, makes sense (UINT64_MAX as a size is unlikely to be useful); likewise, our size parsing code is complex enough that it's easier to always return 0 when endptr is NULL but trailing garbage was found, rather than trying to return the value of the prefix actually parsed (no current caller cared about the value of the prefix).
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Hanna Czenczek <hreitz@redhat.com>
Message-Id: <20230522190441.64278-16-eblake@redhat.com>