On 17/2/21 4:54 pm, Sebastian Huber wrote:
> On 17/02/2021 03:20, Chris Johns wrote:
>
>> I have looked into this some more with Joel. Comments below ..
>>
>> On 16/2/21 5:49 pm, Chris Johns wrote:
>>> Hi Sebastian,
>>>
>>> Thank you for the review.
>>>
>>> On 16/2/21 5:05 pm, Sebastian Huber wrote:
>>>> On 16/02/2021 03:02, chr...@rtems.org wrote:
>>>>> +/*
>>>>> + * Values for the bus space tag, not to be used directly by MI code.
>>>>> + */
>>>>> +#define BSP_BUS_SPACE_IO  0 /* space is i/o space */
>>>>> +#define BSP_BUS_SPACE_MEM 1 /* space is mem space */
>>>>> +
>>>>> +/*
>>>>> + * BSP PCI Support
>>>>> + *
>>>>> + * The RTEMS Nexus bus support can optionally support PCI spaces
>>>>> + * that are mapped to BSP specific address spaces. Add the
>>>>> + * following define to the BSP header file to enable this support:
>>>>> + *
>>>>> + * #define BSP_HAS_PCI
>>>>> + *
>>>>> + * If enabled, a BSP must provide the following IO region calls:
>>>>> + *
>>>>> + *  inb  : read 8 bits
>>>>> + *  outb : write 8 bits
>>>>> + *  inw  : read 16 bits
>>>>> + *  outw : write 16 bits
>>>>> + *  inl  : read 32 bits
>>>>> + *  outl : write 32 bits
>>>>> + *
>>>>> + * The BSP needs to provide the DRAM address space offset
>>>>> + * PCI_DRAM_OFFSET. This is the base address of the DRAM as seen
>>>>> + * by a PCI master.
>>>> Why is it not possible to account for this offset in the
>>>> bus_space_handle_t bsh?
>>> How do I do that?
>> I have figured this out. We can avoid tags (for now).
> Good, the day will come when we need the tags.
>
>>> FYI some testing with the zynq in qemu results in the PHY failing to
>>> initialise. It will need a closer look.
>> I am yet to see what the issue was here.
>>
>>>> I thought the purpose of the bus_space_tag_t bst was to select
>>>> different instructions to access the memory space.
>>> Instructions or a mix of buses? I am still learning my way around this
>>> bus design but I would have thought tags can handle specific
>>> discontinuous buses mixed into the same linear address space. That is,
>>> a driver's offsets are all relative to a specific bus. The PCI reports
>>> addresses that are relative to that bus's base address and this is
>>> what PCI drivers use.
>> Given the defined resources I am not sure you can fully mix things
>> without using tags. As I will discuss below, endian issues with a
>> specific bus can be a reason.
>>
>>>> The generic file is for systems with memory mapped access only.
>>> The bus.h is per arch, so if I specialise it for the PowerPC all BSPs
>>> in the PowerPC arch come across and in the end it becomes the same
>>> thing. I cannot see a way to specialise this per BSP and I am not sure
>>> it is needed.
>> I am coming to the conclusion the current bus.h is not suitable for the
>> PowerPC architecture in general as is. There may be younger devices
>> with more advanced bus structures optimised for a specific niche,
>> however we also need to balance the fact we have decades of proven
>> operation in important systems we need to maintain. Somehow we need to
>> find a path all LibBSD PowerPC users are OK with.
>
> The PowerPC architecture is quite old and well designed. I am pretty
> sure that the Cache-Inhibited and Guarded memory is available in all
> PowerPC systems we support.
>
>> We need to look at using the `eieio` instruction. I am reluctant to
>> stop using it even if testing shows it is OK. These systems are older
>> PowerPC systems and playing around at the low level after all this
>> time would concern me.
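For context, the access pattern I am reluctant to give up looks roughly
like this when a big-endian PowerPC reads a little-endian PCI device (a
sketch only, not the exact RTEMS implementation; the function name is
illustrative):

    #include <stdint.h>

    static inline uint32_t bsp_in_le32(volatile uint32_t *addr)
    {
      uint32_t value;

      /* lwbrx loads the word byte-reversed, so the LE device value
       * arrives in CPU (BE) order; eieio orders the load against
       * other I/O accesses. */
      __asm__ volatile (
        "lwbrx %0, 0, %1; eieio"
        : "=r" (value)
        : "r" (addr), "m" (*addr)
      );
      return value;
    }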
> I didn't find a use of eieio in the FreeBSD bus support. Where have you
> seen it in FreeBSD?
>
>> The factor that has come to light with my testing is the PCI bus and
>> devices are in LE and the PowerPC is BE. This means a suitably sized
>> volatile pointer to memory does not work. With the VME PowerPC boards
>> I think we are OK with LibBSD as the devices it will interact with
>> will all be PCI and we force all IO to be LE.
>
> Yes, the endian conversion is an issue. For the NVMe support I hacked it
> like this:
>
> https://git.rtems.org/rtems-libbsd/commit/?id=aaeae61bd097db64e9f3c0c8f67de768887197e5
>
> https://git.rtems.org/rtems-libbsd/commit/?id=cdbae21e4d55c01ce9a3db98443ab315e011e760
>
> This is definitely not a perfect solution, however, it worked without
> changing the bus space implementation.
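The pattern in those commits amounts to the driver byte-swapping the
values itself, because the generic bus space only does plain volatile
accesses. Roughly (a sketch of the idea with a made-up helper, not the
actual commit code):

    #include <stdint.h>
    #include <sys/endian.h>

    /* The device registers are LE and the CPU is BE, so the driver
     * swaps the raw value after the plain volatile access. */
    static uint32_t read_reg_le32(volatile uint8_t *regs, uint32_t offset)
    {
      uint32_t raw = *(volatile uint32_t *) (regs + offset);

      return le32toh(raw); /* byte swap on BE hosts, no-op on LE hosts */
    }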
I did wonder about this but it does not resolve the `eieio` instruction.

>>>> It also includes <bsp.h> and thus <rtems.h> in a file which is
>>>> included all over the place.
>>> The bus.h header inlines calls which is fine, but if we need an
>>> offset from a BSP the BSP header is needed. We only export from a BSP
>>> as a single header.
>>>
>>> I could not see an easy way to have a BSP indicate it has PCI support
>>> in LibBSD.
>> We need to decide if we let the BSP define a set of calls that can be
>> used. This would mean including `bsp.h` in `bus.h` as I have done. I
>> would rework the implementation to default to a volatile memory
>> pointer if the BSP has not provided a BSP specific replacement.
>>
>> Another alternative is to have a PowerPC specific `bus.h` and then
>> basically do the same thing, only with the LE and BE variants present
>> in that file, then somehow figure out how to select the one needed.
> There are definitely reasons why the FreeBSD bus space implementation
> is a bit more complicated. For me it would be all right to let BSPs
> optionally enable the full featured bus space implementation ported
> from FreeBSD. This would be a bit of work.

Yes, and currently not needed considering the changes I have made in v2
(should have been v3) of the patch I posted. Those changes also fix a
few bugs in some of the calls.

>> I prefer the first option with the increased build time due to
>> including bsp.h, rtems.h etc.
> I am not concerned about the increased build time. The <bsp.h> and
> <rtems.h> define all sorts of stuff. I would rather use a new BSP
> provided header file for this, e.g. <bsp/bus.h>.

I think this is a good idea and worth sorting out now. It will require a
configure test to enable the include; see the sketch after my sign-off.
I am not sure when I will get back to this.

Chris
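P.S. The shape I have in mind for the optional BSP header is something
like this (a sketch only; BSP_HAS_BUS_H and the default body are
illustrative and depend on the configure test):

    #ifdef BSP_HAS_BUS_H
    #include <bsp/bus.h> /* BSP supplies inb/outb etc. and PCI_DRAM_OFFSET */
    #else
    /* Default: a plain volatile memory-mapped access. */
    static __inline uint8_t
    bus_space_read_1(bus_space_tag_t bst, bus_space_handle_t bsh,
        bus_size_t ofs)
    {
      (void) bst;
      return *(volatile uint8_t *) (bsh + ofs);
    }
    #endif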