On 07/27/2018 11:19 AM, Cédric Le Goater wrote:
> On 07/27/2018 10:43 AM, Benjamin Herrenschmidt wrote:
>> On Fri, 2018-07-27 at 10:25 +0200, Cédric Le Goater wrote:
>>> Each PHB creates a pci-bridge device and the PCI bus that comes with it.
>>> This makes it easier to define PCI devices.
>>>
>>> It is still quite complex ... Here is a sample :
>>>
>>> qemu-system-ppc64 -m 2G -machine powernv \
>>>   -cpu POWER8 -smp 2,cores=2,threads=1 -accel tcg,thread=multi \
>>>   -kernel ./zImage.epapr -initrd ./rootfs.cpio.xz -bios ./skiboot.lid \
>>>   \
>>>   -device megasas,id=scsi0,bus=pci.0,addr=0x1 \
>>>   -drive file=./rhel7-ppc64le.qcow2,if=none,id=drive-scsi0-0-0-0,format=qcow2,cache=none \
>>>   -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=2 \
>>>   \
>>>   -device ich9-ahci,id=sata0,bus=pci.1,addr=0x1 \
>>>   -drive file=./ubuntu-ppc64le.qcow2,if=none,id=drive0,format=qcow2,cache=none \
>>>   -device ide-hd,bus=sata0.0,unit=0,drive=drive0,id=ide,bootindex=1 \
>>>   -device e1000,netdev=net0,mac=C0:FF:EE:00:00:02,bus=pci.1,addr=0x2 \
>>>   -netdev bridge,helper=/usr/libexec/qemu-bridge-helper,br=virbr0,id=net0 \
>>>   -device nec-usb-xhci,bus=pci.1,addr=0x7 \
>>
>> I don't understand why. Does that mean you can't put emulated (or real)
>> PCIe devices below it?
>
> Well, skiboot does seem to find them. But that's not a good reason.
> I will dig in.
Nothing is wrong in skiboot; that was my misunderstanding of the overall
device layout.

Each PHB3 device exposes a single PCIe root port on which a *single* device
can be plugged. It can be any adapter or a bridge/switch. The QEMU device
hierarchy is rather deep if you want to have multiple devices on a single
PHB3:
  dev: pnv-phb3
    bus: phb3-root-bus
      dev: phb3-root-port
        bus: pcie
          dev: pcie-bridge   (only one device allowed)
            bus: pcie
              dev: e1000e
              dev: ...
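Spelled out as a command line, the nesting above would look roughly like
this (a minimal sketch only: the bus names, slot addresses, and the use of
a pcie-pci-bridge behind the root port are my assumptions, not something
tested against this series):

```shell
# Hypothetical sketch: one bridge plugged into the PHB3 root port's bus,
# then several adapters hung off the bridge's own bus.
# "pcie.0" and the addr= slots are illustrative assumptions.
qemu-system-ppc64 -m 2G -machine powernv -cpu POWER8 \
    -device pcie-pci-bridge,id=br0,bus=pcie.0,addr=0x0 \
    -device e1000e,bus=br0,addr=0x1,netdev=net0 \
    -netdev user,id=net0 \
    -device nec-usb-xhci,bus=br0,addr=0x2
```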
So it makes the command line rather heavy :/ Alternatively, you need
more PHB3s so you can define one adapter on each. How about 3 per chip?
Here is a quick survey.

On OpenPower systems:

  palmetto:  3 PHBs - one 16x, one 8x, one 4x backed by a switch
             with the BMC devices
  habanero:  3 PHBs - one 16x, one 8x, one 8x backed by a switch
             with the BMC devices
  firestone: socket 0 : 2 16x PHBs
             socket 1 : 3 PHBs - one 16x, one 8x, one 8x backed
                                 by a switch with the BMC devices
  garrison:  socket 0 : 1 16x PHB, 2 8x PHBs linked to the GPUs, 1 8x PHB
             socket 1 : 1 16x PHB, 2 8x PHBs linked to the GPUs, 1 8x PHB
                        backed by a switch with the BMC devices

On IBM systems:

  tuletas: 2 16x PHBs, 2 8x PHBs, backed with a switch.
Thanks,
C.