On 05/22/18 23:47, Alex Williamson wrote:
> On Wed, 23 May 2018 00:44:22 +0300
> "Michael S. Tsirkin" <m...@redhat.com> wrote:
> 
>> On Tue, May 22, 2018 at 03:36:59PM -0600, Alex Williamson wrote:
>>> On Tue, 22 May 2018 23:58:30 +0300
>>> "Michael S. Tsirkin" <m...@redhat.com> wrote:   
>>>>
>>>> It's not hard to think of a use-case where >256 devices
>>>> are helpful, for example a nested virt scenario where
>>>> each device is passed on to a different nested guest.
>>>>
>>>> But I think the main feature this is needed for is NUMA modeling.
>>>> Guests seem to assume a NUMA node per PCI root, ergo we need more
>>>> PCI roots.  
>>>
>>> But even if we have NUMA affinity per PCI host bridge, a PCI host
>>> bridge does not necessarily imply a new PCIe domain.  
>>
>> What are you calling a PCIe domain?
> 
> Domain/segment
> 
> 0000:00:00.0
> ^^^^ This
> 
> Isn't that the only reason we'd need a new MCFG section and the reason
> we're limited to 256 buses?  Thanks,

(Just to confirm: this matches my understanding of the thread as well.)
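(A minimal sketch of the underlying arithmetic, to make the 256-bus
limit concrete; this is my own illustration of the standard ECAM layout
from the PCI Firmware spec, not code quoted from QEMU.)

#include <stdint.h>

/* ECAM fixes the config-space offset of a function as
 * bus << 20 | device << 15 | function << 12.  The bus field is only
 * 8 bits wide, so one MCFG allocation (one segment/domain) can describe
 * at most 256 buses; going past that needs another segment, i.e. a new
 * MCFG entry with its own ECAM base address.
 */
static inline uint64_t ecam_cfg_addr(uint64_t ecam_base,
                                     uint8_t bus, uint8_t dev, uint8_t fn)
{
    return ecam_base +
           ((uint64_t)bus << 20) +  /* 8-bit bus:      256 buses max   */
           ((uint64_t)dev << 15) +  /* 5-bit device:   32 devices/bus  */
           ((uint64_t)fn  << 12);   /* 3-bit function: 8 functions/dev */
}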

Laszlo
