Ref:    https://bugzilla.tianocore.org/show_bug.cgi?id=1512
Repo:   https://github.com/lersek/edk2.git
Branch: smram_at_default_smbase_bz_1512_wave_1

This is the firmware-side patch series for the QEMU feature that Igor is
proposing in

  [Qemu-devel] [PATCH 0/2]
  q35: mch: allow to lock down 128K RAM at default SMBASE address

posted at

  http://mid.mail-archive.com/20190917130708.10281-1-imammedo@redhat.com
  https://lists.gnu.org/archive/html/qemu-devel/2019-09/msg03538.html
  https://edk2.groups.io/g/devel/message/47364

This OVMF patch series is supposed to be the first "wave" of patch sets,
working towards VCPU hotplug with SMM enabled.

We want the hot-plugged VCPU to execute its very first instruction from
SMRAM, running in SMM, for protection from OS and PCI DMA interference.
In the (long!) discussion at

  http://mid.mail-archive.com/20190905174503.2acaa46a@redhat.com
  https://edk2.groups.io/g/devel/message/46910
  https://bugzilla.tianocore.org/show_bug.cgi?id=1512#c9

we considered diverging from the specs and changing the default SMBASE
to an address different from 0x3_0000, so that it would point into
SMRAM. However, Igor had the idea to keep the default SMBASE intact, and
replace the underlying system RAM (128 KB of it) with lockable SMRAM.
This conforms to the language of the Intel SDM, namely, "[t]he actual
physical location of the SMRAM can be in system memory or in a separate
RAM memory". When firmware locks this QEMU-specific SMRAM, the latter
completely disappears from the address space of code that runs outside
of SMM (including PCI devices doing DMA). We call the remaining MMIO
area a "black hole".

The present patch set detects the QEMU feature, locks the SMRAM at the
right times, and informs firmware and OS components to stay away from
the SMRAM at the default SMBASE.
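
(Again only a sketch: keeping other firmware components and the OS out of
the range can be done with a memory allocation HOB built in PlatformPei.
The 0x3_0000 base and the 128 KB size come from the description above; the
function name and the EfiReservedMemoryType choice are assumptions, the
actual patches may use a different type.)

  #include <PiPei.h>
  #include <Library/HobLib.h>              // BuildMemoryAllocationHob()

  #define DEFAULT_SMBASE       0x30000     // default SMBASE, per the Intel SDM
  #define DEFAULT_SMBASE_SIZE  SIZE_128KB  // the 128 KB replaced by QEMU

  STATIC
  VOID
  ReserveSmramAtDefaultSmbase (
    VOID
    )
  {
    //
    // Publish a memory allocation HOB so that neither the DXE core nor the
    // OS hands out this range; the memory type choice is an assumption.
    //
    BuildMemoryAllocationHob (
      DEFAULT_SMBASE,
      DEFAULT_SMBASE_SIZE,
      EfiReservedMemoryType
      );
  }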

I've tested the set extensively:

  http://mid.mail-archive.com/c18f1ada-3eca-d5f1-da4f-e74181c5a862@redhat.com
  https://edk2.groups.io/g/devel/message/47864

Going forward, if I understand correctly, the plan is to populate the
new SMRAM with the hotplug SMI handler. (This would likely be put in
place by OvmfPkg/PlatformPei, from NASM source code. The
SmmRelocateBases() function in UefiCpuPkg/PiSmmCpuDxeSmm backs up the
original contents before, and restores them after, the initial SMBASE
relocation of the cold-plugged VCPUs. Thus the hotplug SMI handler would
survive the initial relocation of the cold-plugged VCPUs.)
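
(A minimal sketch of that placement step, assuming the NASM source is
assembled into a flat binary that PlatformPei links in; the
mHotplugSmiHandlerBinary / mHotplugSmiHandlerBinarySize symbols are
hypothetical. The SMBASE + 0x8000 entry point offset is per the Intel SDM.)

  #include <PiPei.h>
  #include <Library/BaseMemoryLib.h>     // CopyMem()

  //
  // Hypothetical: a flat binary assembled from the NASM source mentioned
  // above, linked into PlatformPei.
  //
  extern CONST UINT8  mHotplugSmiHandlerBinary[];
  extern CONST UINTN  mHotplugSmiHandlerBinarySize;

  #define DEFAULT_SMBASE      0x30000
  #define SMI_HANDLER_OFFSET  0x8000     // SMI entry point is SMBASE + 0x8000

  STATIC
  VOID
  InstallHotplugSmiHandler (
    VOID
    )
  {
    //
    // Copy the handler to 0x3_8000 while the region still reads and writes
    // as ordinary RAM, i.e. before SmmAccess closes and locks the SMRAM.
    //
    CopyMem (
      (VOID *)(UINTN)(DEFAULT_SMBASE + SMI_HANDLER_OFFSET),
      mHotplugSmiHandlerBinary,
      mHotplugSmiHandlerBinarySize
      );
  }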

Further, when a VCPU is hot-plugged, we seem to intend to raise an SMI
for it (perhaps internally to QEMU?), which will remain pending until
the VCPU is launched with INIT-SIPI-SIPI. Once the INIT-SIPI-SIPI is
sent, the VCPU will immediately enter SMM, and run the SMI handler in
the locked SMRAM at the default SMBASE. At RSM, it may continue at the
vector requested in the INIT-SIPI-SIPI.

Cc: Ard Biesheuvel <ard.biesheu...@linaro.org>
Cc: Boris Ostrovsky <boris.ostrov...@oracle.com>
Cc: Brijesh Singh <brijesh.si...@amd.com>
Cc: Igor Mammedov <imamm...@redhat.com>
Cc: Jiewen Yao <jiewen....@intel.com>
Cc: Joao M Martins <joao.m.mart...@oracle.com>
Cc: Jordan Justen <jordan.l.jus...@intel.com>
Cc: Jun Nakajima <jun.nakaj...@intel.com>
Cc: Michael Kinney <michael.d.kin...@intel.com>
Cc: Paolo Bonzini <pbonz...@redhat.com>
Cc: Phillip Goerl <phillip.go...@oracle.com>
Cc: Yingwen Chen <yingwen.c...@intel.com>

Thanks
Laszlo

Laszlo Ersek (10):
  OvmfPkg: introduce PcdQ35SmramAtDefaultSmbase
  OvmfPkg/IndustryStandard: increase vertical whitespace in Q35 macro
    defs
  OvmfPkg/IndustryStandard: add MCH_DEFAULT_SMBASE* register macros
  OvmfPkg/PlatformPei: factor out Q35BoardVerification()
  OvmfPkg/PlatformPei: detect SMRAM at default SMBASE (skeleton)
  OvmfPkg/PlatformPei: assert there's no permanent PEI RAM at default
    SMBASE
  OvmfPkg/PlatformPei: reserve the SMRAM at the default SMBASE, if it
    exists
  OvmfPkg/SEV: don't manage the lifecycle of the SMRAM at the default
    SMBASE
  OvmfPkg/SmmAccess: close and lock SMRAM at default SMBASE
  OvmfPkg/PlatformPei: detect SMRAM at default SMBASE (for real)

 OvmfPkg/Include/IndustryStandard/Q35MchIch9.h           | 106 +++++++++++---------
 OvmfPkg/Library/SmmCpuFeaturesLib/SmmCpuFeaturesLib.c   |  21 +++-
 OvmfPkg/Library/SmmCpuFeaturesLib/SmmCpuFeaturesLib.inf |   4 +
 OvmfPkg/OvmfPkg.dec                                     |   6 ++
 OvmfPkg/OvmfPkgIa32.dsc                                 |   1 +
 OvmfPkg/OvmfPkgIa32X64.dsc                              |   1 +
 OvmfPkg/OvmfPkgX64.dsc                                  |   1 +
 OvmfPkg/PlatformPei/AmdSev.c                            |  24 ++++-
 OvmfPkg/PlatformPei/MemDetect.c                         |  87 +++++++++++++---
 OvmfPkg/PlatformPei/Platform.c                          |  24 +++++
 OvmfPkg/PlatformPei/Platform.h                          |   7 ++
 OvmfPkg/PlatformPei/PlatformPei.inf                     |   1 +
 OvmfPkg/SmmAccess/SmmAccess2Dxe.c                       |   7 ++
 OvmfPkg/SmmAccess/SmmAccess2Dxe.inf                     |   1 +
 OvmfPkg/SmmAccess/SmmAccessPei.c                        |   6 ++
 OvmfPkg/SmmAccess/SmmAccessPei.inf                      |   1 +
 OvmfPkg/SmmAccess/SmramInternal.c                       |  25 +++++
 OvmfPkg/SmmAccess/SmramInternal.h                       |   8 ++
 18 files changed, 260 insertions(+), 71 deletions(-)

-- 
2.19.1.3.g30247aa5d201

