- Mar 04, 2022
-
Emanuele Giuseppe Esposito authored
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-20-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Emanuele Giuseppe Esposito authored
Blockjob functions always run under the BQL lock.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-19-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Emanuele Giuseppe Esposito authored
Following the assertions introduced by the API split, propagate them also to the static functions.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-18-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Emanuele Giuseppe Esposito authored
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-17-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Emanuele Giuseppe Esposito authored
Since there are only a few I/O functions, keep them in a single file. Also split the function pointers in BlockJobDriver.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20220303151616.325444-16-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Emanuele Giuseppe Esposito authored
We want to be sure that the functions that write the child and parent lists of a bs run under the BQL and with the node drained. The BQL prevents concurrent writes from the GS API, while drains protect against concurrent I/O.

TODO: drains are missing in some functions using this assert, so a strict assertion would currently fail. Because adding drains requires further discussion, they will be added in a future series.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-15-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
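A minimal sketch of what such a graph-write assertion helper might look like, assuming QEMU's block_int headers for BlockDriverState and qemu_in_main_thread(); the name and placement are assumptions based on the description above, and the drain check is left as the TODO the commit mentions:

```c
#include <assert.h>

static inline void assert_bdrv_graph_writable(BlockDriverState *bs)
{
    /*
     * Writers of bs->children / bs->parents must hold the BQL.
     * Once all callers also drain the node, a "bs is drained" check
     * can be added here as well.
     */
    assert(qemu_in_main_thread());
}
```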
-
Emanuele Giuseppe Esposito authored
Mark all I/O functions with IO_CODE, and all "I/O or GS" functions with IO_OR_GS_CODE.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-14-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Emanuele Giuseppe Esposito authored
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-13-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Emanuele Giuseppe Esposito authored
Similarly to the previous patch, split block_int.h into block_int-io.h and block_int-global-state.h. block_int-common.h contains the structures shared between the two headers, and the functions that can't be categorized as I/O or global state. Assertions are added in the next patch.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-12-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Emanuele Giuseppe Esposito authored
Now that we "covered" the three main cases where the permission API was being used under the BQL (fuse, amend and invalidate_cache), we can safely add assertions to the permission functions implemented in block.c.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-11-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Emanuele Giuseppe Esposito authored
Mark all I/O functions with IO_CODE, and all "I/O or GS" functions with IO_OR_GS_CODE.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-10-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Emanuele Giuseppe Esposito authored
All the global state (GS) API functions will check that qemu_in_main_thread() returns true. If it does not, the safety of the BQL cannot be guaranteed, and the function needs to be moved to the I/O API.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-9-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
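An illustrative sketch of the pattern described above, assuming QEMU's block_int headers; the function name is hypothetical, not one of the functions touched by the patch. Every Global State function begins by asserting that it runs in the main thread, under the BQL:

```c
#include <assert.h>

/* Hypothetical GS function; GS functions in block.c follow this shape */
void bdrv_example_global_state_op(BlockDriverState *bs)
{
    assert(qemu_in_main_thread());   /* must run in the main loop, under the BQL */

    /* ... graph- or permission-modifying code protected by the BQL ... */
}
```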
-
Emanuele Giuseppe Esposito authored
Similarly to the previous patches, split block-backend.h into block-backend-io.h and block-backend-global-state.h. In addition, remove the "block/block.h" include, as it no longer seems necessary, together with "qemu/iov.h". block-backend-common.h contains the structures shared between the two headers, and the functions that can't be categorized as I/O or global state. Assertions are added in the next patch.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-8-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Emanuele Giuseppe Esposito authored
Allow writable exports to get the BLK_PERM_RESIZE permission at creation time, in fuse_export_create(). This way, there is no need to acquire the permission in fuse_do_truncate(), which might run in an iothread. Permissions should be set only in the main thread, so in any case an iothread trying to set RESIZE would be blocked. Also assert in fuse_do_truncate() that if we grant the RESIZE permission, we can then restore the original ones.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Hanna Reitz <hreitz@redhat.com>
Message-Id: <20220303151616.325444-7-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Emanuele Giuseppe Esposito authored
Mark all I/O functions with IO_CODE, and all "I/O or GS" functions with IO_OR_GS_CODE.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-6-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Emanuele Giuseppe Esposito authored
All the global state (GS) API functions will check that qemu_in_main_thread() returns true. If it does not, the safety of the BQL cannot be guaranteed, and the function needs to be moved to the I/O API.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-5-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Emanuele Giuseppe Esposito authored
block.h currently contains a mix of functions: some of them run under the BQL and modify the block layer graph, others are instead thread-safe and perform I/O in iothreads. Some others can only be called by either the main loop or the iothread running the AioContext (and not other iothreads), and using them in another thread would cause deadlocks; therefore it is not ideal to define them as I/O.

It is not easy to understand which function is part of which group (I/O vs GS vs "I/O or GS"), and this patch aims to clarify it.

The "GS" functions need the BQL, and often use aio_context_acquire/release and/or drain to be sure they can modify the graph safely. The I/O functions are instead thread-safe, and can run in any AioContext. "I/O or GS" functions run instead in the main loop or in a single iothread, and use BDRV_POLL_WHILE().

By splitting the header in two files, block-io.h and block-global-state.h, we have a clearer view on what needs which kind of protection. block-common.h contains common structures shared by both headers. block.h is kept for legacy reasons and to avoid changing the includes in all C files that use the block APIs.

Assertions are added in the next patch.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-4-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Emanuele Giuseppe Esposito authored
Right now, IO_CODE and IO_OR_GS_CODE are no-ops, as there isn't really a way to check that a function is only called in I/O context. On the other hand, we can use qemu_in_main_thread() to check whether we are in the main loop. Using macros makes it easy to extend them in the future without changing all callers. They also visually help to understand which category each function belongs to, without having to look at the header.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-3-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
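A minimal sketch of how such category macros could be defined, based only on the description above; GLOBAL_STATE_CODE is the GS counterpart added by the same series, and the exact names, header location and expansion are assumptions here (qemu_in_main_thread() is assumed from QEMU's main-loop header):

```c
#include <assert.h>

/* GS code must run in the main loop, under the BQL */
#define GLOBAL_STATE_CODE()               \
    do {                                  \
        assert(qemu_in_main_thread());    \
    } while (0)

/* No reliable runtime check exists yet for "only called from I/O context" */
#define IO_CODE()        do { /* nop */ } while (0)
#define IO_OR_GS_CODE()  do { /* nop */ } while (0)
```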
-
Emanuele Giuseppe Esposito authored
When invoked from the main loop, this function is the same as qemu_mutex_iothread_locked, and returns true if the BQL is held. When invoked from iothreads or tests, it returns true only if the current AioContext is the Main Loop. This essentially just extends qemu_mutex_iothread_locked to also work in unit tests and other users like the storage daemon, which run in the Main Loop but end up using the implementation in stubs/iothread-lock.c. Using qemu_mutex_iothread_locked in unit tests always returns false because they use the stub implementation, making all assertions added in the next patches fail even though the AioContext is still the main loop. See the comment in the function header for more information.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-2-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
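A hedged sketch of the stub variant described above; the real code lives in stubs/iothread-lock.c and may differ, so treat this as an illustration of the idea only. Tools and tests have no BQL, so "in main thread" is approximated by comparing the current AioContext with the main loop's:

```c
#include <stdbool.h>
#include "block/aio.h"        /* qemu_get_current_aio_context() */
#include "qemu/main-loop.h"   /* qemu_get_aio_context() */

bool qemu_in_main_thread(void)
{
    /* In tools and tests there is no BQL; check the AioContext instead */
    return qemu_get_current_aio_context() == qemu_get_aio_context();
}
```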
-
Hanna Reitz authored
185 tests quitting qemu while a block job is active. It does not specifically test quitting qemu while a mirror or active commit job is in its READY phase. Add two test cases for this, where we respectively mirror or commit to an external QSD instance, which provides a throttled block device. qemu is supposed to cancel the job so that it can quit as soon as possible, instead of waiting for the job to complete (which it did before 6.2).

Signed-off-by: Hanna Reitz <hreitz@redhat.com>
Message-Id: <20220303164814.284974-5-hreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Hanna Reitz authored
To implement this, we reuse the existing daemonizing functions from the system emulator, which mainly do the following:

- Fork off a child process, and set up a pipe between parent and child
- The parent process waits until the child sends a status byte over the pipe (0 means that the child was set up successfully; anything else, including errors or EOF, means that the child was not set up successfully), and then exits with an appropriate exit status
- The child process enters a new session (forking off again), changes the umask, and will ignore terminal signals from then on
- Once set-up is complete, the child will chdir to /, redirect all standard I/O streams to /dev/null, and tell the parent that set-up has been completed successfully

In contrast to qemu-nbd's --fork implementation, during the set-up phase, error messages are not piped through the parent process. qemu-nbd mainly does this to detect errors, though (while os_daemonize() has the child explicitly signal success after set-up); because we do not redirect stderr after forking, error messages continue to appear on whatever the parent's stderr was (until set-up is complete).

Signed-off-by: Hanna Reitz <hreitz@redhat.com>
Message-Id: <20220303164814.284974-4-hreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
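A generic sketch of the fork-with-status-pipe pattern described above, not QEMU's actual os_daemonize()/os_setup_post(); the function names are hypothetical, error handling is simplified, and the second fork after setsid() is omitted for brevity:

```c
#include <fcntl.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

static int daemon_pipe[2];   /* [0]: parent read end, [1]: child write end */

/* Parent waits for the child's status byte; child detaches into a new session */
void daemonize_start(void)
{
    pid_t pid;
    char status;

    if (pipe(daemon_pipe) < 0) {
        exit(EXIT_FAILURE);
    }

    pid = fork();
    if (pid > 0) {
        /* Parent: exit with a status reflecting the child's set-up result */
        close(daemon_pipe[1]);
        if (read(daemon_pipe[0], &status, 1) == 1 && status == 0) {
            exit(EXIT_SUCCESS);
        }
        exit(EXIT_FAILURE);
    } else if (pid < 0) {
        exit(EXIT_FAILURE);
    }

    /* Child: new session and umask (second fork and signal set-up omitted) */
    close(daemon_pipe[0]);
    setsid();
    umask(027);
}

/* Called once set-up is complete: detach stdio and signal success */
void daemonize_finish(void)
{
    char status = 0;
    int fd = open("/dev/null", O_RDWR);

    (void)chdir("/");
    dup2(fd, STDIN_FILENO);
    dup2(fd, STDOUT_FILENO);
    dup2(fd, STDERR_FILENO);
    (void)write(daemon_pipe[1], &status, 1);
    close(daemon_pipe[1]);
}
```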
-
Hanna Reitz authored
In contrast to qemu-nbd (where it is called --fork) and the system emulator, QSD does not have a --daemonize switch yet. Just like them, QSD allows setting up block devices and exports on the command line. When doing so, it is often necessary for whoever invoked the QSD to wait until these exports are fully set up. A --daemonize switch allows precisely this, by virtue of the parent process exiting once everything is set up.

Note that there are alternative ways of waiting for all exports to be set up, for example:

- Passing the --pidfile option and waiting until the respective file exists (but I do not know if there is a way of implementing this without a busy wait loop)
- Setting up some network server (e.g. on a Unix socket) and having the QSD connect to it after all arguments have been processed, by appending corresponding --chardev and --monitor options to the command line, and then waiting until the QSD connects

Having a --daemonize option would make this simpler, though, without having to rely on additional tools (to set up a network server) or busy waiting.

Implementing a --daemonize switch means having to fork the QSD process. Ideally, we should do this as early as possible: all the parent process has to do is to wait for the child process to signal completion of its set-up phase, and therefore there is basically no initialization that needs to be done before the fork. On the other hand, forking after initialization steps means having to consider how those steps (like setting up the block layer or QMP) interact with a later fork, which is often not trivial.

In order to fork this early, we must scan the command line for --daemonize long before our current process_options() call. Instead of adding custom new code to do so, just reuse process_options() and give it a @pre_init_pass argument to distinguish the two passes. I believe there are some other switches besides --daemonize that deserve parsing in the first pass:

- --help and --version are supposed to only print some text and then immediately exit (so any initialization we do would be for naught). This changes behavior, because now "--blockdev inv-drv --help" will print a help text instead of complaining about the --blockdev argument. Note that this is similar in behavior to other tools, though: "--help" is generally acted upon immediately when it is found in the argument list, potentially before other arguments (even ones before it) are acted on. For example, "ls /does-not-exist --help" prints a help text and does not complain about ENOENT.
- --pidfile does not need initialization, and is already exempted from the sequential order that process_options() claims to strictly follow (the PID file is only created after all arguments are processed, not at the time the --pidfile argument appears), so it makes sense to include it in the same category as --daemonize.
- Invalid arguments should always be reported as soon as possible. (The same caveat as with --help applies: "--blockdev inv-drv --inv-arg" will now complain about --inv-arg, not inv-drv.)

This patch does make some references to --daemonize without having implemented it yet, but that will happen in the next patch.

Signed-off-by: Hanna Reitz <hreitz@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20220303164814.284974-3-hreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
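A hedged sketch of a two-pass option scan as described above; the option set, the pre_init_pass flag and the overall shape mirror the commit description but are assumptions, not the actual qemu-storage-daemon code:

```c
#include <getopt.h>
#include <stdbool.h>
#include <stdlib.h>

enum { OPTION_BLOCKDEV = 256, OPTION_DAEMONIZE, OPTION_PIDFILE };

static void process_options(int argc, char *argv[], bool pre_init_pass)
{
    static const struct option long_options[] = {
        { "blockdev",  required_argument, NULL, OPTION_BLOCKDEV },
        { "daemonize", no_argument,       NULL, OPTION_DAEMONIZE },
        { "help",      no_argument,       NULL, 'h' },
        { "pidfile",   required_argument, NULL, OPTION_PIDFILE },
        { "version",   no_argument,       NULL, 'V' },
        { NULL, 0, NULL, 0 },
    };
    int c;

    optind = 1;   /* both passes rescan the whole command line */

    while ((c = getopt_long(argc, argv, ":hV", long_options, NULL)) != -1) {
        /* Only a handful of switches are acted on before any initialization */
        bool pre_init_option = (c == OPTION_DAEMONIZE || c == OPTION_PIDFILE ||
                                c == 'h' || c == 'V' || c == '?' || c == ':');

        if (pre_init_option != pre_init_pass) {
            continue;
        }

        switch (c) {
        case 'h':               /* print help text and exit immediately */
        case 'V':               /* print version and exit immediately */
            exit(EXIT_SUCCESS);
        case '?':               /* report invalid arguments as soon as possible */
        case ':':
            exit(EXIT_FAILURE);
        case OPTION_DAEMONIZE:
            /* remember to fork before any further set-up */
            break;
        case OPTION_PIDFILE:
            /* record optarg; the PID file is only written after all options */
            break;
        default:
            /* everything else (--blockdev, exports, ...) runs in the second pass */
            break;
        }
    }
}
```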
-
Hanna Reitz authored
The daemonizing functions in os-posix (os_daemonize() and os_setup_post()) only daemonize the process if the static `daemonize` variable is set. Right now, it can only be set by os_parse_cmd_args(). In order to use os_daemonize() and os_setup_post() from the storage daemon to have it be daemonized, we need some other way to set this `daemonize` variable, because I would rather not tap into the system emulator's arg-parsing code. Therefore, this patch adds an os_set_daemonize() function, which will return an error on os-win32 (because daemonizing is not supported there).

Signed-off-by: Hanna Reitz <hreitz@redhat.com>
Message-Id: <20220303164814.284974-2-hreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
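A small sketch of what such a setter might look like, based only on the description above; the exact signature, return convention and variable placement are assumptions, not the actual os-posix/os-win32 code:

```c
#include <stdbool.h>

#ifndef _WIN32
static bool daemonize;   /* consumed later by os_daemonize()/os_setup_post() */
#endif

int os_set_daemonize(bool d)
{
#ifdef _WIN32
    /* Daemonizing is not supported on Windows; only "false" succeeds */
    return d ? -1 : 0;
#else
    daemonize = d;
    return 0;
#endif
}
```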
-
Stefan Hajnoczi authored
qemu_mutex_iothread_locked() may be used from coroutines. Standard __thread variables cannot be used by coroutines. Use the coroutine TLS macros instead.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20220222140150.27240-5-stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Stefan Hajnoczi authored
RCU may be used from coroutines. Standard __thread variables cannot be used by coroutines. Use the coroutine TLS macros instead.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20220222140150.27240-4-stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Stefan Hajnoczi authored
QEMU TLS macros must be used to make TLS variables safe with coroutines.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20220222140150.27240-3-stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Stefan Hajnoczi authored
Compiler optimizations can cache TLS values across coroutine yield points, resulting in stale values from the previous thread when a coroutine is re-entered by a new thread. Serge Guelton developed an __attribute__((noinline)) wrapper and tested it with clang and gcc. I formatted his idea according to QEMU's coding style and wrote documentation. The compiler can still optimize based on analyzing noinline code, so an asm volatile barrier with an output constraint is required to prevent unwanted optimizations.

Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1952483
Suggested-by: Serge Guelton <sguelton@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20220222140150.27240-2-stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
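A generic sketch of the technique described above, not QEMU's actual coroutine-TLS macros: a noinline accessor re-derives the TLS address on every call, and an asm volatile output constraint keeps the compiler from caching the result across yield points. The variable and function names are made up for illustration:

```c
static __thread int my_tls_value;

/* noinline keeps the address computation out of the caller's optimized code */
__attribute__((noinline))
static int *get_my_tls_value(void)
{
    int *ptr = &my_tls_value;

    /* The output constraint makes ptr opaque to the optimizer */
    asm volatile("" : "+r"(ptr));
    return ptr;
}

/* Coroutine code always goes through the accessor and never caches the pointer */
void coroutine_body(void)
{
    *get_my_tls_value() = 42;
}
```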
-
Emanuele Giuseppe Esposito authored
Split bdrv_co_invalidate_cache() in two: the Global State (under BQL) code that takes care of permissions and running GS callbacks, and leave only the I/O code (->bdrv_co_invalidate_cache) running in the I/O coroutine. The only side effect is that bdrv_co_invalidate_cache is not recursive anymore, and neither is any direct call to bdrv_invalidate_cache().

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220209105452.1694545-6-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Emanuele Giuseppe Esposito authored
Following the bdrv_activate renaming, change also the names of the respective callers:

bdrv_invalidate_cache_all -> bdrv_activate_all
blk_invalidate_cache -> blk_activate
test_sync_op_invalidate_cache -> test_sync_op_activate

No functional change intended.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Hanna Reitz <hreitz@redhat.com>
Message-Id: <20220209105452.1694545-5-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Emanuele Giuseppe Esposito authored
This function is currently just a wrapper for bdrv_invalidate_cache(), but in the future it will contain the code of bdrv_co_invalidate_cache() that always has to be protected by the BQL, leaving the rest in the I/O coroutine. Replace all bdrv_invalidate_cache() invocations with bdrv_activate().

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Hanna Reitz <hreitz@redhat.com>
Message-Id: <20220209105452.1694545-4-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Emanuele Giuseppe Esposito authored
block_crypto_amend_options_generic_luks uses the block layer permission API, therefore it should be called with the BQL held. However, the same function is being called by two BlockDriver callbacks: bdrv_amend_options (under BQL) and bdrv_co_amend (I/O). The latter is I/O because it is invoked by block/amend.c's blockdev_amend_run(), a .run callback of the amend JobDriver.

Therefore we want to change this function to still perform the permission check, but making sure it is done under the BQL regardless of the caller context. Remove the permission check in block_crypto_amend_options_generic_luks() and:

- in block_crypto_amend_options_luks() (BQL case, called by .bdrv_amend_options()), reuse the helper functions block_crypto_amend_{prepare/cleanup} that take care of checking permissions.
- for block_crypto_co_amend_luks() (I/O case, called by .bdrv_co_amend()), don't check for permissions but delegate that to .bdrv_amend_pre_run() and .bdrv_amend_clean(), performing these checks before and after the job runs in its AioContext.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220209105452.1694545-3-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Emanuele Giuseppe Esposito authored
Move the permission API calls into driver-specific callbacks that always run under the BQL. In this case, bdrv_crypto_luks needs to perform permission checks before and after qcrypto_block_amend_options(). The problem is that the caller, block_crypto_amend_options_generic_luks(), can also run in I/O from .bdrv_co_amend(). This does not comply with the Global State vs. I/O API split, as the permission API must always run under the BQL.

Firstly, introduce the .bdrv_amend_pre_run() and .bdrv_amend_clean() callbacks. These two callbacks are guaranteed to be invoked under the BQL, respectively before and after .bdrv_co_amend(). They take care of performing the permission checks in the same way as they are currently done before and after qcrypto_block_amend_options(). These callbacks are in preparation for the next patch, where we delete the original permission check; right now they just add redundant checks.

Then, call .bdrv_amend_pre_run() before job_start() in qmp_x_blockdev_amend(), so that it will run before the job coroutine is created and stays in the main loop.

As a cleanup, use JobDriver's .clean() callback to call .bdrv_amend_clean(), and run amend-specific cleanup callbacks under the BQL.

After this patch, permission failures occur early in the blockdev-amend job to update a LUKS volume's keys. iotest 296 must now expect them in x-blockdev-amend's QMP reply instead of waiting for the actual job to fail later.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220209105452.1694545-2-eesposit@redhat.com>
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
Message-Id: <20220304153729.711387-6-hreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Peter Maydell authored
hw/nvme updates

- add enhanced protection information (64-bit guard)

# gpg: Signature made Fri 04 Mar 2022 06:23:36 GMT
# gpg: using RSA key 522833AA75E2DCE6A24766C04DE1AF316D4F0DE9
# gpg: Good signature from "Klaus Jensen <its@irrelevant.dk>" [unknown]
# gpg: aka "Klaus Jensen <k.jensen@samsung.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: DDCA 4D9C 9EF9 31CC 3468 4272 63D5 6FC5 E55D A838
# Subkey fingerprint: 5228 33AA 75E2 DCE6 A247 66C0 4DE1 AF31 6D4F 0DE9

* remotes/nvme/tags/nvme-next-pull-request:
  hw/nvme: 64-bit pi support
  hw/nvme: add pi tuple size helper
  hw/nvme: add support for the lbafee hbs feature
  hw/nvme: move format parameter parsing
  hw/nvme: add host behavior support feature
  hw/nvme: move dif/pi prototypes into dif.h

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Peter Maydell authored
Rewrite nios2 interrupt handling

# gpg: Signature made Thu 03 Mar 2022 19:52:33 GMT
# gpg: using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg: issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [full]
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A 05C0 64DF 38E8 AF7E 215F

* remotes/rth-gitlab/tags/pull-nios-20220303:
  target/nios2: Rewrite interrupt handling
  target/nios2: Special case ipending in rdctl and wrctl
  target/nios2: Split mmu_write
  target/nios2: Hoist R_ZERO check in rdctl
  target/nios2: Only build mmu.c for system mode
  target/nios2: Replace MMU_LOG with tracepoints
  target/nios2: Remove mmu_read_debug

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
- Mar 03, 2022
-
Peter Maydell authored
Fifth RISC-V PR for QEMU 7.0

 * Fixup checks for ext_zb[abcs]
 * Add AIA support for virt machine
 * Increase maximum number of CPUs in virt machine
 * Fixup OpenTitan SPI address
 * Add support for zfinx, zdinx and zhinx{min} extensions

# gpg: Signature made Thu 03 Mar 2022 05:26:55 GMT
# gpg: using RSA key F6C4AC46D4934868D3B8CE8F21E10D29DF977054
# gpg: Good signature from "Alistair Francis <alistair@alistair23.me>" [full]
# Primary key fingerprint: F6C4 AC46 D493 4868 D3B8 CE8F 21E1 0D29 DF97 7054

* remotes/alistair/tags/pull-riscv-to-apply-20220303:
  target/riscv: expose zfinx, zdinx, zhinx{min} properties
  target/riscv: add support for zhinx/zhinxmin
  target/riscv: add support for zdinx
  target/riscv: add support for zfinx
  target/riscv: hardwire mstatus.FS to zero when enable zfinx
  target/riscv: add cfg properties for zfinx, zdinx and zhinx{min}
  hw: riscv: opentitan: fixup SPI addresses
  hw/riscv: virt: Increase maximum number of allowed CPUs
  docs/system: riscv: Document AIA options for virt machine
  hw/riscv: virt: Add optional AIA IMSIC support to virt machine
  hw/intc: Add RISC-V AIA IMSIC device emulation
  hw/riscv: virt: Add optional AIA APLIC support to virt machine
  target/riscv: fix inverted checks for ext_zb[abcs]

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Richard Henderson authored
Previously, we would avoid setting CPU_INTERRUPT_HARD when interrupts were disabled at a particular point in time, instead queuing the value into cpu->irq_pending. This is more complicated than required. Instead, set CPU_INTERRUPT_HARD any time there is a pending interrupt, and check whether interrupts are disabled only in nios2_cpu_exec_interrupt.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
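A hedged sketch of the approach described above, not the actual patch; the register and helper names follow target/nios2 conventions but are assumptions here:

```c
/* Raise or lower CPU_INTERRUPT_HARD purely from the pending-interrupt state */
static void nios2_update_irq(CPUNios2State *env)
{
    CPUState *cs = env_cpu(env);

    if (env->regs[CR_IPENDING]) {
        cpu_interrupt(cs, CPU_INTERRUPT_HARD);
    } else {
        cpu_reset_interrupt(cs, CPU_INTERRUPT_HARD);
    }
}

/* The "interrupts enabled?" check now lives only here */
static bool nios2_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
{
    Nios2CPU *cpu = NIOS2_CPU(cs);
    CPUNios2State *env = &cpu->env;

    if ((interrupt_request & CPU_INTERRUPT_HARD) &&
        (env->regs[CR_STATUS] & CR_STATUS_PIE)) {
        cs->exception_index = EXCP_IRQ;
        nios2_cpu_do_interrupt(cs);
        return true;
    }
    return false;
}
```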
-
Richard Henderson authored
It was never correct to be able to write to ipending. Until the rest of the irq code is tidied, the read of ipending will generate an "unnecessary" mask.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Create three separate functions for the three separate registers. Avoid extra dispatch through op_helper.c. Dispatch to the correct function in translation. Clean up the ifdefs in wrctl.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
This will avoid having to replicate the check in additional cases.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
We can thus remove an ifdef covering the entire file.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-