On 6/3/2025 9:41 AM, David Hildenbrand wrote:
On 03.06.25 09:17, Gupta, Pankaj wrote:
+CC Tony & Kishen
In this patch series we are only maintaining the bitmap for Ram discard/
populate state not for regular guest_memfd private/shared?
As mentioned in changelog, "In the context of RamDiscardManager, shared
state is analogous to populated, and private state is signified as
discarded."
On 6/3/2025 3:26 AM, Chenyi Qiang wrote:
On 6/1/2025 5:58 PM, Gupta, Pankaj wrote:
On 5/30/2025 10:32 AM, Chenyi Qiang wrote:
Commit 852f0048f3 ("RAMBlock: make guest_memfd require uncoordinated
discard") highlighted that subsystems like VFIO may disable RAM block
discard. However, guest_memfd relies on discard operations …
Update ReplayRamDiscard() function to return the result and unify the
ReplayRamPopulate() and ReplayRamDiscard() to ReplayRamDiscardState() at
the same time, since their definitions are identical. This unification
simplifies related structures, such as VirtIOMEMReplayData, making them
cleaner.
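For illustration, the unified replay path could look roughly like this (a
minimal sketch assuming QEMU's MemoryRegionSection type; only the callback
name comes from the text above, the rest is illustrative):

typedef int (*ReplayRamDiscardState)(MemoryRegionSection *section,
                                     void *opaque);

/* One callback type now serves both replay directions, so the replay
 * data only needs a single function/opaque pair. */
struct VirtIOMEMReplayData {
    ReplayRamDiscardState fn;
    void *opaque;
};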
On 5/30/2025 10:32 AM, Chenyi Qiang wrote:
Commit 852f0048f3 ("RAMBlock: make guest_memfd require uncoordinated
discard") highlighted that subsystems like VFIO may disable RAM block
discard. However, guest_memfd relies on discard operations for page
conversion between private and shared memory, potentially …
Rename the helper to memory_region_section_intersect_range() to make it
more generic. Meanwhile, define @end as Int128 and replace the related
operations with their int128_*() counterparts, since the helper is exported
as a wider API.
Suggested-by: Alexey Kardashevskiy
Reviewed-by: Alexey Kardashevskiy
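A minimal sketch of the widened helper, assuming QEMU's int128 helpers and
MemoryRegionSection layout (not necessarily the exact upstream code):

static bool memory_region_section_intersect_range(MemoryRegionSection *s,
                                                  uint64_t offset,
                                                  uint64_t size)
{
    uint64_t start = MAX(s->offset_within_region, offset);
    Int128 end = int128_min(int128_add(int128_make64(s->offset_within_region),
                                       s->size),
                            int128_add(int128_make64(offset),
                                       int128_make64(size)));

    if (int128_ge(int128_make64(start), end)) {
        return false;            /* no overlap */
    }

    /* Clamp the section to the intersection. */
    s->offset_within_address_space += start - s->offset_within_region;
    s->offset_within_region = start;
    s->size = int128_sub(end, int128_make64(start));
    return true;
}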
Modify memory_region_set_ram_discard_manager() to return -EBUSY if a
RamDiscardManager is already set in the MemoryRegion. The caller must
handle this failure, such as having virtio-mem undo its actions and fail
the realize() process. Opportunistically move the call earlier to avoid
complex error handling.
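A sketch of the changed setter semantics (illustrative, not the exact patch):

int memory_region_set_ram_discard_manager(MemoryRegion *mr,
                                          RamDiscardManager *rdm)
{
    /* Only one RamDiscardManager per MemoryRegion; clearing
     * (rdm == NULL) is always allowed. */
    if (rdm && mr->rdm) {
        return -EBUSY;
    }
    mr->rdm = rdm;
    return 0;
}

Callers such as virtio-mem then have to check the return value and fail
realize() cleanly.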
On 3/19/2025 12:23 PM, Chenyi Qiang wrote:
On 3/19/2025 4:55 PM, Gupta, Pankaj wrote:
As the commit 852f0048f3 ("RAMBlock: make guest_memfd require
uncoordinated discard") highlighted, some subsystems like VFIO may
disable ram block discard. However, guest_memfd relies on the discard
operation to perform page conversion between private and shared
memory.
This can lead to stale IOMMU mappings …
On 3/17/2025 11:36 AM, David Hildenbrand wrote:
On 17.03.25 03:54, Chenyi Qiang wrote:
On 3/14/2025 8:11 PM, Gupta, Pankaj wrote:
On 3/10/2025 9:18 AM, Chenyi Qiang wrote:
As the commit 852f0048f3 ("RAMBlock: make guest_memfd require
uncoordinated discard") highlighted, some subsy
On 3/10/2025 9:18 AM, Chenyi Qiang wrote:
As the commit 852f0048f3 ("RAMBlock: make guest_memfd require
uncoordinated discard") highlighted, some subsystems like VFIO may
disable ram block discard. However, guest_memfd relies on the discard
operation to perform page conversion between private and shared memory …
On 2/27/2025 3:29 PM, Roy Hopkins wrote:
IGVM support has been implemented for Confidential Guests that support
AMD SEV and AMD SEV-ES. Add some documentation that gives some
background on the IGVM format and how to use it to configure a
confidential guest.
Signed-off-by: Roy Hopkins
Reviewed-by: …
On 2/27/2025 3:29 PM, Roy Hopkins wrote:
When an SEV guest is started, the reset vector and state are
extracted from metadata that is contained in the firmware volume.
In preparation for using IGVM to setup the initial CPU state,
the code has been refactored to populate vmcb_save_area for each
CPU …
On 2/17/2025 1:08 PM, Paolo Bonzini wrote:
It is possible to start QEMU with a confidential-guest-support object
even in TCG mode. While there is already a check in qemu_machine_creation_done:
if (machine->cgs && !machine->cgs->ready) {
error_setg(errp, "accelerator does not support …
On 10/11/2024 10:59 AM, Paolo Bonzini wrote:
The exact set of available memory attributes can vary by VM. In the
future it might vary depending on enabled capabilities, too. Query the
extension on the VM level instead of on the KVM level, and only after
architecture-specific initialization.
In …
KVM_CAP_READONLY_MEM used to be a global capability, but with the
introduction of AMD SEV-SNP confidential VMs, this extension is not
always available on all VM types [1,2].
Query the extension on the VM level instead of on the KVM level.
[1]
https://patchwork.kernel.org/project/kvm/patch/20
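A minimal illustration of the change, using QEMU's existing helpers (s is
the KVMState; readonly_mem_allowed is QEMU's existing field, shown here as
a sketch):

/* Before: global query against /dev/kvm */
s->readonly_mem_allowed = kvm_check_extension(s, KVM_CAP_READONLY_MEM);

/* After: query on the VM fd, once architecture init has run */
s->readonly_mem_allowed = kvm_vm_check_extension(s, KVM_CAP_READONLY_MEM);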
When using an IGVM file the configuration of the system firmware is
defined by IGVM directives contained in the file. In this case the user
should not configure any pflash devices.
This commit skips initialization of the ROM mode when pflash0 is not set,
then checks to ensure no pflash devices have been configured …
The class function and implementations for updating launch data return
a code in case of error. In some cases an error message is generated, and
in other cases just the error return value is used.
This small refactor adds an 'Error **errp' parameter to all such functions,
which now consistently set an error …
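The convention being converged on, as a hedged sketch (the function name,
backend call and message text are hypothetical; only the errp pattern is
the point):

static int launch_update_data(hwaddr gpa, uint8_t *ptr, size_t len,
                              Error **errp)
{
    int ret = do_launch_update(gpa, ptr, len);   /* hypothetical backend */

    if (ret < 0) {
        error_setg_errno(errp, -ret, "failed to update launch data");
    }
    return ret;
}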
On 8/2/2024 1:43 AM, Paolo Bonzini wrote:
The vcek-disabled property of the sev-snp-guest object is misspelled
vcek-required (which I suppose would use the opposite polarity) in
the call to object_class_property_add_bool(). Fix it.
Reported-by: Zixi Chen
Cc: Pankaj Gupta
Signed-off-by: Paolo Bonzini
On 6/14/2024 10:58 AM, Xiaoyao Li wrote:
On 5/30/2024 7:16 PM, Pankaj Gupta wrote:
From: Michael Roth
Current SNP guest kernels will attempt to access these regions with the
C-bit set, so guest_memfd is needed to handle that. Otherwise,
kvm_convert_memory() will fail when the guest kernel tries …
On 6/14/2024 10:34 AM, Xiaoyao Li wrote:
On 5/30/2024 7:16 PM, Pankaj Gupta wrote:
From: Michael Roth
When guest_memfd is enabled, the BIOS is generally part of the initial
encrypted guest image and will be accessed as private guest memory. Add
the necessary changes to set up the associated RAM …
Hi Paolo,
please check if branch qemu-coco-queue of
https://gitlab.com/bonzini/qemu works for you!
Getting a compilation error here; hope I am looking at the correct branch.
Oops, sorry:
diff --git a/target/i386/kvm/kvm.c b/target/i386/kvm/kvm.c
index 96dc41d355c..ede3ef1225f 100644
--- a/target/i386/kvm/kvm.c
These patches implement SEV-SNP base support along with CPUID enforcement
support for QEMU, and are also available at:
https://github.com/pagupta/qemu/tree/snp_v4
The latest version of the KVM changes is posted here [2] and is also queued
in kvm/next.
Patch Layout
01-03: 'error_setg' inde…
Update the comment to match the X86ConfidentialGuestClass
implementation.
Suggested-by: Xiaoyao Li
Signed-off-by: Zhao Liu
---
target/i386/confidential-guest.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/target/i386/confidential-guest.h b/target/i386/confidential-guest.h
From: William Roche
AMD guests can't currently deal with BUS_MCEERR_AO MCE injection
as it panics the VM kernel. We filter this event and provide a
warning message.
Signed-off-by: William Roche
---
v3:
- New patch
v4:
- Remove redundant check for AO errors
---
target/i386/kvm/kvm.c | 9 …
diff --git a/target/i386/kvm/kvm.c b/target/i386/kvm/kvm.c
index 5fce74aac5..4d42d3ed4c 100644
--- a/target/i386/kvm/kvm.c
+++ b/target/i386/kvm/kvm.c
@@ -604,6 +604,10 @@ static void kvm_mce_inject(X86CPU *cpu, hwaddr paddr, int code)
    mcg_status |= MCG_STATUS_RIPV;
}
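The filter described above might look like this inside kvm_mce_inject()
(a sketch assuming env is in scope, as it is in that function; not
necessarily the exact hunk):

    if (code == BUS_MCEERR_AO && IS_AMD_CPU(env)) {
        /* AMD guests cannot handle SRAO machine checks; drop and warn. */
        warn_report("KVM: AMD guests do not support BUS_MCEERR_AO MCE "
                    "injection, ignoring");
        return;
    }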
On 9/6/2023 10:53 PM, John Allen wrote:
From: William Roche
AMD guests can't currently deal with BUS_MCEERR_AO MCE injection
as it panics the VM kernel. We filter this event and provide a
warning message.
Signed-off-by: William Roche
---
v3:
- New patch
---
target/i386/kvm/kvm.c | 13 +++ …
Early-boot e820 records will be inserted by the bios/efi/early boot
software and be reported to the kernel via insert_resource. Later, when
CXL drivers iterate through the regions again, they will insert another
resource and make the RESERVED memory area a child.
This RESERVED memory area causes …
On 10/17/2022 6:19 PM, Kirill A . Shutemov wrote:
On Mon, Oct 17, 2022 at 03:00:21PM +0200, Vlastimil Babka wrote:
On 9/15/22 16:29, Chao Peng wrote:
From: "Kirill A. Shutemov"
KVM can use memfd-provided memory for guest memory. For normal userspace
accessible memory, KVM userspace (e.g. QEMU) …
On 9/30/2022 3:58 PM, Gerd Hoffmann wrote:
Not needed for a virtio 1.0 device.
Signed-off-by: Gerd Hoffmann
Reviewed-by: Pankaj Gupta
Tested-by: Pankaj Gupta
---
include/hw/pci/pci.h | 1 -
hw/virtio/virtio-pmem-pci.c | 2 --
2 files changed, 3 deletions(-)
diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h
Hi Chao,
Actually the current version allows you to delay the allocation to a
later time (e.g. page fault time) if you don't call fallocate() on the
private fd. fallocate() is necessary in previous versions because we
treat the existense in the fd as 'private' but in this version we track
private/shared …
However, fallocate() preallocates full guest memory before starting the guest.
With this behaviour, guest memory is *not* demand pinned. Is there a way to
prevent fallocate() from reserving full guest memory?
Isn't the pinning being handled by the corresponding host memory backend
with mmu …
On 8/11/2022 7:18 PM, Nikunj A. Dadhania wrote:
On 11/08/22 17:00, Gupta, Pankaj wrote:
This is the v7 of this series which tries to implement the fd-based KVM
guest private memory. The patches are based on latest kvm/queue branch
commit:
b9b71f43683a (kvm/queue) KVM: x86/mmu: Buffer nested MMU
split_desc_cache only by default capacity
Introduction
In general …
Normally, a write to unallocated space of a file or to the hole of a sparse
file automatically causes space allocation; for memfd, this equals memory
allocation. This new seal prevents such automatic allocation, whether it
comes from a direct write() or a write to a previously mmap-ed area.
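A small userspace demonstration of the seal as described (the flag value is
taken from the proposed series and is an assumption; the program degrades
gracefully on kernels without it):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#ifndef F_SEAL_AUTO_ALLOCATE
#define F_SEAL_AUTO_ALLOCATE 0x0020 /* value from the proposed series */
#endif

int main(void)
{
    /* Sealing requires MFD_ALLOW_SEALING at creation time. */
    int fd = memfd_create("guest-mem", MFD_ALLOW_SEALING);
    if (fd < 0) {
        perror("memfd_create");
        return 1;
    }

    /* Once sealed, a write() into a hole fails instead of allocating;
     * only an explicit fallocate() allocates memory. */
    if (fcntl(fd, F_ADD_SEALS, F_SEAL_AUTO_ALLOCATE) < 0) {
        perror("F_ADD_SEALS"); /* expected without the series applied */
    }

    close(fd);
    return 0;
}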
I view it as a performance problem because nothing stops KVM from copying
from userspace into the private fd during the SEV ioctl(). What's missing
is the ability for userspace to directly initialize the private fd, which
may or may not avoid an extra memcpy() depending on how clever userspace …
* The current patch should just work, but we would prefer pre-boot guest
payload/firmware population into private memory for performance.
Not just performance: in the case of SEV it's needed because firmware
only supports in-place encryption of guest memory; there's no mechanism …
Hi Sean, Chao,
While attempting to solve the pre-boot guest payload/firmware population
into private memory for SEV-SNP, I retrieved this thread. I have a question
below:
Requirements & Gaps
-
- Confidential computing (CC): TDX/SEV/CCA
* Need to support both …
Use kvm_arch_has_private_mem(), both because "has" makes it obvious this is
checking
a flag of sorts, and to align with other helpers of this nature (and with
CONFIG_HAVE_KVM_PRIVATE_MEM).
$ git grep kvm_arch | grep supported | wc -l
0
$ git grep kvm_arch | grep has | wc -l
26
+
+bool __weak kvm_arch_private_mem_supported(struct kvm *kvm)
+{
+        return false;
+}
Does this function have to be overridden by SEV and TDX to support private
regions?
Yes it should be overridden by architectures which want to support it.
o.k
+
static int check_memory_region_flags …
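For illustration, an architecture opting in would override the weak symbol
along these lines (the predicate shown is entirely hypothetical, not the
real code):

bool kvm_arch_private_mem_supported(struct kvm *kvm)
{
    /* e.g. only protected VM types support private memory; this
     * vm_type check is a placeholder. */
    return kvm->arch.vm_type == KVM_X86_PROTECTED_VM;
}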
Register private memslot to fd-based memory backing store and handle the
memfile notifiers to zap the existing mappings.
Currently the registration happens at memslot creation time, and the
initial support does not include page migration/swap.
KVM_MEM_PRIVATE is not exposed by default; architectures …
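Userspace binding might then look roughly like this (field names follow this
series' extended memslot and may differ between revisions; mem_size,
shared_mem, private_fd and vm_fd are assumed variables):

struct kvm_userspace_memory_region_ext region_ext = {
    .region = {
        .slot            = 0,
        .flags           = KVM_MEM_PRIVATE,
        .guest_phys_addr = 0x0,
        .memory_size     = mem_size,
        .userspace_addr  = (__u64)(unsigned long)shared_mem, /* shared part */
    },
    .private_fd     = private_fd,  /* fd of the inaccessible memfd */
    .private_offset = 0,
};

if (ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region_ext) < 0) {
    perror("KVM_SET_USER_MEMORY_REGION");
}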
Hi Chao,
Some comments below:
If CONFIG_HAVE_KVM_PRIVATE_MEM=y, userspace can register/unregister the
guest private memory regions through KVM_MEMORY_ENCRYPT_{UN,}REG_REGION
ioctls. The patch reuses the existing SEV ioctls but differs in that the
address in the region for private memory is a gpa, while for SEV it is an
hva …
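As a sketch, converting a range to private with the reused ioctl would look
like this, with .addr carrying a gpa per the description above (gpa, size
and vm_fd are assumed variables):

struct kvm_enc_region range = {
    .addr = gpa,   /* guest physical address, not an hva, in this series */
    .size = size,
};

if (ioctl(vm_fd, KVM_MEMORY_ENCRYPT_REG_REGION, &range) < 0) {
    perror("KVM_MEMORY_ENCRYPT_REG_REGION");
}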
Currently in the mmu_notifier validate path, the hva range is recorded and
then checked in mmu_notifier_retry_hva() from the page fault path. However,
for the to-be-introduced private memory, a page fault may not have a hva.
As this patch appeared in v7, just wondering: did you see an actual bug
because of …
+#ifdef CONFIG_MIGRATION
+static int shmem_migrate_page(struct address_space *mapping,
+                              struct page *newpage, struct page *page,
+                              enum migrate_mode mode)
+{
+        struct inode *inode = mapping->host;
+        struct shmem_inode_info *info = SHMEM_I(inode); …
On 7/6/2022 10:20 AM, Chao Peng wrote:
From: "Kirill A. Shutemov"
Implement shmem as a memfile_notifier backing store. Essentially it
interacts with the memfile_notifier feature flags for userspace
access/page migration/page reclaiming and implements the necessary
memfile_backing_store callbacks …
For SEV-SNP, an OS is "SEV-SNP capable" without supporting this UEFI
v2.9 memory type. In order for OVMF to be able to avoid pre-validating
potentially hundreds of gibibytes of data before booting, it needs to
know if the guest OS can support its use of the new type of memory in
the memory map.
Hi Gerd,
Hi,
AFAIU 'true' is the behavior you are proposing with your EFI changes?
That said, what's the difference between 'false' and 'default' wrt EFI
firmware? Just wondering, do we need 'default'?
true/false will force the one or the other no matter what.
'default' allows the firmware to …
Introduce a new memfd_create() flag indicating the content of the
created memfd is inaccessible from userspace through ordinary MMU
access (e.g., read/write/mmap). However, the file content can be
accessed via a different mechanism (e.g. KVM MMU) indirectly.
SEV, TDX, pKVM and software-only VMs …
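A minimal userspace illustration of the new flag (the flag value is taken
from this series and is an assumption; stock kernels will reject it):

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#ifndef MFD_INACCESSIBLE
#define MFD_INACCESSIBLE 0x0008U /* value from the proposed series */
#endif

int main(void)
{
    int fd = memfd_create("private-mem", MFD_INACCESSIBLE);
    if (fd < 0) {
        perror("memfd_create"); /* expected without the series applied */
        return 1;
    }

    /* read()/write()/mmap() on this fd are refused; contents are only
     * reachable indirectly, e.g. via the KVM MMU. */
    close(fd);
    return 0;
}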