`cpu_pc` at this point does not necessarily point to the current
instruction (i.e., the wait instruction being translated), so it's
incorrect to calculate the new value of `cpu_pc` based on this. It must
be updated with `ctx->base.pc_next`, which contains the correct address
of the next instruction.
This patch fixes the implementation of the wait instruction to
implicitly update PSW.I as required by the ISA specification.
Signed-off-by: Tomoaki Kawada
---
target/rx/op_helper.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/target/rx/op_helper.c b/target/rx/op_helper.c
index 11f952d340.
Hi Pedro Torres,
Please note these threads:
https://patchew.org/QEMU/20211022161448.81579-1-yaroshchuk2...@gmail.com/
https://patchew.org/QEMU/20220113152836.60398-1-yaroshchuk2...@gmail.com/
There was a discussion about how to preserve
backward compatibility of emulated AppleSMC
behaviour. The dis
On Thu, 14 Apr 2022 19:57:47 +0200
Paolo Bonzini wrote:
> The main point of this series is patch 7, which removes the dubious and
> probably wrong use of atomics in block/nbd.c. This in turn is enabled
> mostly by the cleanups in patches 3-5. Together, they introduce a
> QemuMutex that synchron
On 30/3/22 14:56, Damien Hedde wrote:
qom-path of cpus are changed:
+ "apu-cluster/apu-cpu[n]" to "apu/cpu[n]"
+ "rpu-cluster/rpu-cpu[n]" to "rpu/cpu[n]"
Signed-off-by: Damien Hedde
---
include/hw/arm/xlnx-zynqmp.h | 8 +--
hw/arm/xlnx-zynqmp.c | 121 +--
On 30/3/22 14:56, Damien Hedde wrote:
This object can be used to create a group of homogeneous
arm cpus.
Signed-off-by: Damien Hedde
---
include/hw/arm/arm_cpus.h | 45
hw/arm/arm_cpus.c | 63 +++
hw/arm/meson.build
On 30/3/22 14:56, Damien Hedde wrote:
This object will be a _cpu-cluster_ generalization and
is meant to allow creating cpus of the same type.
The main goal is that this object, in contrast to _cpu-cluster_,
can be used to dynamically create cpus: it does not rely on
external code to populate the
14.04.2022 20:57, Paolo Bonzini wrote:
Signed-off-by: Paolo Bonzini
---
block/nbd.c | 8 +++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/block/nbd.c b/block/nbd.c
index 31c684772e..d0d94b40bd 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -81,12 +81,18 @@ typedef struct B
14.04.2022 20:57, Paolo Bonzini wrote:
requests[].receiving is set by nbd_receive_replies() under the receive_mutex;
Read it under the same mutex as well. Waking up receivers on errors happens
after each reply finishes processing, in nbd_co_receive_one_chunk().
If there is no currently-active re
Adds the optional -cmbdev= option that takes a QEMU memory backend
-object to be used for the CMB (Controller Memory Buffer).
This option takes precedence over cmb_size_mb= if both are used.
(The size will be deduced from the memory backend option).
Signed-off-by: Rick Wertenbroek
---
hw/nvme/ctrl
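As a hedged illustration (only the cmbdev= parameter name comes from the patch description above; the backend id, path, size, and the assumption that cmbdev is an nvme device property are invented for this sketch), the option might be used like:

```shell
# Hypothetical invocation: a file-backed memory backend serves as the
# NVMe Controller Memory Buffer instead of cmb_size_mb=.
qemu-system-x86_64 \
    -object memory-backend-file,id=cmb,share=on,mem-path=/dev/shm/cmb,size=64M \
    -device nvme,serial=deadbeef,cmbdev=cmb
```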
14.04.2022 20:57, Paolo Bonzini wrote:
Remove the confusing, and most likely wrong, atomics. The only function
that used to be somewhat in a hot path was nbd_client_connected(),
but it is not anymore after the previous patches.
The same logic is used both to check if a request had to be reissue
14.04.2022 20:57, Paolo Bonzini wrote:
Prepare for the next patch, so that the diff is less confusing.
nbd_client_connecting is moved closer to the definition point.
Hmm, to the usage point, you mean?
The original idea was to keep the simple state-reading helper definitions together
:)
nbd_client_
14.04.2022 20:57, Paolo Bonzini wrote:
The condition for waiting on the s->free_sema queue depends on
both s->in_flight and s->state. The latter is currently using
atomics, but this is quite dubious and probably wrong.
Because s->state is written in the main thread too, for example by
the yank
14.04.2022 20:57, Paolo Bonzini wrote:
Elevate s->in_flight early so that other incoming requests will wait
on the CoQueue in nbd_co_send_request; restart them after getting back
from nbd_reconnect_attempt. This could be after the reconnect timer or
nbd_cancel_in_flight have cancelled the attemp
From: Xiang Chen
Most places in the vfio trace code (such as
trace_vfio_region_region_mmap()) use [offset, offset + size - 1] to
indicate a range of length size, except trace_vfio_region_sparse_mmap_entry().
So change trace_vfio_region_sparse_mmap_entry() to match, but if size is zero,
From: Xiang Chen
Currently memory_region_iommu_replay() does a full page table walk with
fixed granularity (page size) regardless of whether translate() succeeds.
Actually, if translate() succeeds, we can skip ahead by the translation size
(iotlb.addr_mask + 1) instead of the fixed granularity.
Signed-off-by: Xiang
From: Xiang Chen
It always calls the IOMMU MR translate() callback with flag=IOMMU_NONE in
memory_region_iommu_replay(). Currently, smmuv3_translate() returns an
IOMMUTLBEntry with perm set to IOMMU_NONE even if the translation succeeds,
whereas it is expected to return the actual permission set in
Hi Eric,
On 2022/4/15 0:02, Eric Auger wrote:
Hi Chenxiang,
On 4/7/22 9:57 AM, chenxiang via wrote:
From: Xiang Chen
In function memory_region_iommu_replay(), it decides to notify() or not
according to the perm of the returned IOMMUTLBEntry. But for smmuv3, the
returned perm is always IOMMU_NONE ev