Re: etnaviv: PHYS_OFFSET usage
Hi Alexey,

Am Mittwoch, den 15.11.2017, 16:24 +0000 schrieb Alexey Brodkin:
> Hi Lucas,
>
> As we discussed at ELCE last month in Prague, we have a Vivante GPU
> built into our new ARC HSDK development board.
>
> And even though [thanks to your suggestions] I got the Etnaviv driver
> working perfectly fine on our board, I faced one quite tricky
> situation [which I dirty worked-around for now].
>
> The Etnaviv driver uses a PHYS_OFFSET define which is not universally
> available across the architectures and platforms supported by the
> Linux kernel.
>
> In fact, for ARC we don't have PHYS_OFFSET defined [yet],
> and I'm wondering how to get this resolved.
>
> Essentially we have 2 options:
> 1. Define PHYS_OFFSET for ARC (and later for other arches once needed)
> 2. Replace PHYS_OFFSET with something else in the etnaviv sources.
>
> Even though (1) seems to be the simplest solution, it doesn't look
> very nice, because PHYS_OFFSET seems to be quite ARM-specific and not
> something really generic and portable.
>
> As for (2), frankly I didn't quite understand why we really care about
> the DDR start offset in the GPU driver. If some more light could be
> shed on this topic, we'll probably figure out a more elegant solution.

Basically the GPU has a linear address window which is 2GB in size, and
all GPU command buffers must be mapped through this window. The window
has a base offset, so we can move it to point to different locations in
the physical address space of the system.

Etnaviv uses PHYS_OFFSET to find out where in the physical address
space the RAM starts. If the start of RAM is above the 2GB mark we
_must_ use the linear window in order to make the command buffers
available to the GPU.

I'm not aware of any other kernel API that would allow us to find the
start of RAM. If there is, I would be happy to replace the PHYS_OFFSET
stuff. If you don't like copying the PHYS_OFFSET stuff to ARC, you
would need to introduce some new API which allows us to retrieve this
information.

Regards,
Lucas
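To make this concrete, here is a rough sketch of the logic Lucas describes — not the verbatim etnaviv code; "example_gpu" and the window-programming helper are made-up names for illustration:

#include <linux/types.h>
#include <linux/sizes.h>
#include <asm/memory.h>	/* provides PHYS_OFFSET on ARM */

struct example_gpu {
	phys_addr_t memory_base;	/* start of system RAM */
};

/* Program the window base into the GPU registers (hardware-specific). */
static void example_gpu_set_linear_window_base(struct example_gpu *gpu,
					       phys_addr_t base)
{
	/* register writes elided */
}

static void example_setup_linear_window(struct example_gpu *gpu)
{
	gpu->memory_base = PHYS_OFFSET;

	/*
	 * The linear window is only 2GB in size. If RAM starts above
	 * the 2GB mark, command buffers are reachable by the GPU only
	 * through the window, so its base offset must be moved up to
	 * the start of RAM.
	 */
	if (gpu->memory_base >= SZ_2G)
		example_gpu_set_linear_window_base(gpu, gpu->memory_base);
}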
Re: etnaviv: PHYS_OFFSET usage
Am Mittwoch, den 15.11.2017, 17:36 +0000 schrieb Alexey Brodkin:
> Hi Lucas,
>
> On Wed, 2017-11-15 at 17:44 +0100, Lucas Stach wrote:
> > [...]
> >
> > Basically the GPU has a linear address window which is 2GB in size,
> > and all GPU command buffers must be mapped through this window. The
> > window has a base offset, so we can move it to point to different
> > locations in the physical address space of the system.
>
> Wow, what a design decision :)
>
> > Etnaviv uses PHYS_OFFSET to find out where in the physical address
> > space the RAM starts. If the start of RAM is above the 2GB mark we
> > _must_ use the linear window in order to make the command buffers
> > available to the GPU.
>
> Well, that doesn't look like a very safe and versatile solution to me.
> What if there is much more than 2GB of RAM? I guess in that case it's
> possible to set PHYS_OFFSET to, say, 0, and then the kernel might
> allocate a command buffer above 2GB, which would make that buffer
> invisible to the GPU, I guess.

GPU command buffer allocation is done through the dma_alloc_* functions,
which will respect the GPU requirements. Also, the linear window is not
normally moved to the start of RAM but to the system CMA region, see
below.

> > I'm not aware of any other kernel API that would allow us to find the
> > start of RAM. If there is, I would be happy to replace the PHYS_OFFSET
> > stuff. If you don't like copying the PHYS_OFFSET stuff to ARC, you
> > would need to introduce some new API which allows us to retrieve this
> > information.
>
> I'd say we may use so-called "reserved memory" here as a nice and
> elegant solution. In the device tree we describe this memory area like
> this:
>
> -->8---
> gpu_3d: gpu@9 {
>         compatible = "vivante,gc";
>         reg = <0x9 0x4000>;
>         interrupts = <28>;
>         memory-region = <&gpu_memory>;
> };
>
> reserved-memory {
>         #address-cells = <2>;
>         #size-cells = <2>;
>         ranges;
>
>         gpu_memory: gpu_memory@be00 {
>                 compatible = "shared-dma-pool";
>                 reg = <0xbe00 0x200>;
>                 no-map;
>         };
> };
> -->8---
>
> And then in the driver code we just need to do 2 things:
> 1) Start using this memory for allocations in the driver,
>    with the help of of_reserved_mem_device_init()
> 2) Get the region start. Not sure what's the best way to do it,
>    but I guess in the worst case we'll be able to get the "reg"
>    property of the "gpu_memory" node, and then use that base instead
>    of PHYS_OFFSET.
>
> If of any interest I'll be willing to send you an RFC shortly so you
> may see the real implementation in detail.

I'm not keen on having a private memory region for the GPU. Normally we
just use the system CMA region.
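For illustration, a minimal sketch of how the driver side of Alexey's proposal might look, assuming the DT nodes above — the function is hypothetical, not code from etnaviv:

#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/of_reserved_mem.h>
#include <linux/ioport.h>

static int example_init_gpu_memory(struct device *dev, phys_addr_t *base)
{
	struct device_node *np;
	struct resource res;
	int ret;

	/* 1) Make DMA allocations for this device come from the region. */
	ret = of_reserved_mem_device_init(dev);
	if (ret)
		return ret;

	/* 2) Worst case: read the region base from its "reg" property. */
	np = of_parse_phandle(dev->of_node, "memory-region", 0);
	if (!np)
		return -ENODEV;

	ret = of_address_to_resource(np, 0, &res);
	of_node_put(np);
	if (ret)
		return ret;

	*base = res.start;	/* candidate replacement for PHYS_OFFSET */
	return 0;
}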
Re: glxgears on Etnaviv: couldn't get an RGB, Double-buffered visual
Hi Alexey,

Am Freitag, den 24.11.2017, 16:02 +0000 schrieb Alexey Brodkin:
> Hello,
>
> Being in the middle of the bring-up of a new board with a Vivante GPU
> (HSDK, namely; see
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/arc/plat-hsdk),
> I was looking at simple 3D test apps to see how Etnaviv works on the
> hardware.
>
> So far I was able to get kmscube working perfectly fine, and the next
> item I took was glxgears (for some reason I was under the impression
> that's the de facto "Hello world" app in the GPU world). But apparently
> even with the Xserver up and running, glxgears doesn't work.
>
> Moreover, I tried the same thing on a Wandboard Quad, but to no avail
> as well. This is what I saw:
> ->8-
> # glxgears
> Error: couldn't get an RGB, Double-buffered visual
>
> # glxinfo
> name of display: :0
> Error: couldn't find RGB GLX visual or fbconfig
> ->8-
>
> Googling didn't help here unfortunately, so maybe some pointers could
> be suggested here... like what am I doing wrong, and is glxgears
> supposed to work on top of a DRM GPU at all?
>
> Thanks a lot in advance!

For 3D acceleration to work under X you need the etnaviv-specific DDX
driver, which can be found here:

http://git.arm.linux.org.uk/cgit/xf86-video-armada.git/log/?h=unstable-devel

Don't be confused by the name: the armada driver implements support for
both armada-drm and imx-drm, and contains the etnaviv DDX. This provides
2D acceleration on the Vivante 2D cores, as well as the DRI2/3 bits
necessary to get a 3D context on X.

Regards,
Lucas
Re: glxgears on Etnaviv: couldn't get an RGB, Double-buffered visual
Am Freitag, den 24.11.2017, 16:25 +0000 schrieb Alexey Brodkin:
> Hi Lucas,
>
> On Fri, 2017-11-24 at 17:11 +0100, Lucas Stach wrote:
> > [...]
> >
> > For 3D acceleration to work under X you need the etnaviv-specific DDX
> > driver, which can be found here:
> >
> > http://git.arm.linux.org.uk/cgit/xf86-video-armada.git/log/?h=unstable-devel
>
> Thanks for the pointer, still a couple of questions below...
>
> > Don't be confused by the name: the armada driver implements support
> > for both armada-drm and imx-drm, and contains the etnaviv DDX. This
> > provides 2D acceleration on the Vivante 2D cores, as well as the
> > DRI2/3 bits necessary to get a 3D context on X.
>
> From the Wandboard's .dts I see that the 2D core is a separate node
> with a separate set of registers mapped at a different location in
> memory, right?
>
> Do you know if it's possible that the same memory-mapped register set
> controls both the 3D and the 2D engine?

Yes, a "core" in Vivante speak is a GPU with one DMA frontend. A single
frontend can feed both 3D and 2D acceleration engines behind it. On
i.MX6 the 2D and 3D engines are on separate cores, but Marvell Dove has
a combined 2D/3D core.

> If we happen to not have a 2D core, is that a no-go for anything?

I don't know if the DDX works properly without 2D acceleration. Weston,
on the other hand, only relies on the 3D accel core for doing
compositing, so even if you don't have a 2D engine you will be able to
launch a modern Linux graphics stack.

The etnaviv DDX could also emulate 2D accel over the 3D core by using
the X.Org glamor module, but no one has bothered to implement this yet.

> In the meantime I'll try to figure out if we have a 2D core or not.

You can find out what your GPU provides by looking at the feature bits.
chipFeatures_PIPE_2D, chipFeatures_PIPE_3D and chipFeatures_PIPE_VG are
what you are looking for.

Regards,
Lucas
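As a rough illustration, checking those bits inside the kernel driver could look like the sketch below; the field and bit names are the ones mentioned in this thread (defined in the driver's generated common.xml.h header), while the helper function itself is made up:

#include <linux/device.h>
#include "etnaviv_gpu.h"	/* pulls in the chipFeatures_* definitions */

static void example_report_pipes(struct etnaviv_gpu *gpu)
{
	if (gpu->identity.features & chipFeatures_PIPE_3D)
		dev_info(gpu->dev, "GPU has a 3D pipe\n");

	if (gpu->identity.features & chipFeatures_PIPE_2D)
		dev_info(gpu->dev, "GPU has a 2D pipe\n");

	if (gpu->identity.features & chipFeatures_PIPE_VG)
		dev_info(gpu->dev, "GPU has a VG pipe\n");
}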
Re: glxgears on Etnaviv: couldn't get an RGB, Double-buffered visual
Am Freitag, den 24.11.2017, 16:49 +0000 schrieb Alexey Brodkin:
[...]
> > Yes, a "core" in Vivante speak is a GPU with one DMA frontend. A
> > single frontend can feed both 3D and 2D acceleration engines behind
> > it. On i.MX6 the 2D and 3D engines are on separate cores, but
> > Marvell Dove has a combined 2D/3D core.
>
> Hm, that sounds encouraging. The next question would be: is Marvell
> Dove supported in the etnaviv DDX? I guess it's called Armada, so the
> answer is yes, right?

Yes, the Dove was the original platform for the Armada X.Org driver.
Combined 2D/3D cores are supported just fine by etnaviv.

> > > If we happen to not have a 2D core, is that a no-go for anything?
> >
> > I don't know if the DDX works properly without 2D acceleration.
> > Weston, on the other hand, only relies on the 3D accel core for
> > doing compositing, so even if you don't have a 2D engine you will
> > be able to launch a modern Linux graphics stack.
>
> That's really cool! I'm much more interested in Weston ATM, which is
> actually another separate question :)
> I tried to find some details on how to run Weston on the Wandboard,
> but it seems like I was looking at the wrong Google again... do you
> know any good manuals for doing that?

There really is no magic to it. Or better: there is, but it's all
hidden in the Mesa implementation. You need at least Mesa 17.2 and
Weston 3.0 for etnaviv to work properly. Other than that, just set
XDG_RUNTIME_DIR to something sensible and launch Weston with
"weston --tty=63", done.

> > The etnaviv DDX could also emulate 2D accel over the 3D core by
> > using the X.Org glamor module, but no one has bothered to implement
> > this yet.
>
> OK, we'll see if the above case (combined cores) is applicable to us,
> and then we'll see what to do.
>
> > > In the meantime I'll try to figure out if we have a 2D core or not.
> >
> > You can find out what your GPU provides by looking at the feature
> > bits. chipFeatures_PIPE_2D, chipFeatures_PIPE_3D and
> > chipFeatures_PIPE_VG are what you are looking for.
>
> Does that info help to decipher these bits?

Unfortunately we forgot to expose the major feature bits register in
debugfs. It's gpu->identity.features in the kernel driver; the
interesting bits in there are chipFeatures_PIPE_3D and
chipFeatures_PIPE_2D.

Regards,
Lucas
Re: DRM_UDL and GPU under Xserver
Am Donnerstag, den 05.04.2018, 11:32 +0200 schrieb Daniel Vetter:
> On Thu, Apr 5, 2018 at 9:16 AM, Alexey Brodkin wrote:
> > Hi Daniel,
> >
> > On Thu, 2018-04-05 at 08:18 +0200, Daniel Vetter wrote:
> > > On Wed, Apr 4, 2018 at 10:06 PM, Alexey Brodkin wrote:
> > > > Hello,
> > > >
> > > > We're trying to use a DisplayLink USB2-to-HDMI adapter to render
> > > > GPU-accelerated graphics. The hardware setup is as simple as a
> > > > devboard plus the DisplayLink adapter. The devboards we use for
> > > > this experiment are:
> > > > * Wandboard Quad (based on the i.MX6 SoC with a Vivante GPU) or
> > > > * HSDK (based on the Synopsys ARC HS38 SoC, with a Vivante GPU
> > > >   as well)
> > > >
> > > > I'm sure any other board with a DRM-supported GPU will work;
> > > > these are just the ones we used, as very recent Linux kernels
> > > > can easily be run on both.
> > > >
> > > > Basically the problem is that UDL needs to be explicitly notified
> > > > about new data to be rendered on the screen, in contrast to
> > > > typical bit-streamers that endlessly scan out a dedicated buffer
> > > > in memory.
> > > >
> > > > In the case of UDL there are just 2 ways for this notification:
> > > > 1) DRM_IOCTL_MODE_PAGE_FLIP, which calls drm_crtc_funcs->page_flip()
> > > > 2) DRM_IOCTL_MODE_DIRTYFB, which calls drm_framebuffer_funcs->dirty()
> > > >
> > > > But neither of these IOCTLs happens when we run the Xserver with
> > > > the xf86-video-armada driver (see
> > > > http://git.arm.linux.org.uk/cgit/xf86-video-armada.git/log/?h=unstable-devel).
> > > >
> > > > Is something missing in the Xserver or in the UDL driver?
> > >
> > > Use the -modesetting driver for UDL, that one works correctly.
> >
> > If you're talking about the "modesetting" driver of the Xserver [1]
> > then indeed the picture is displayed on the screen. But then, I
> > guess, there won't be any 3D acceleration.
> >
> > At least that's what was suggested to me earlier here [2] by Lucas:
> > >8---
> > For 3D acceleration to work under X you need the etnaviv-specific
> > DDX driver, which can be found here:
> >
> > http://git.arm.linux.org.uk/cgit/xf86-video-armada.git/log/?h=unstable-devel
>
> You definitely want to use -modesetting for UDL. And I thought with
> glamor and the corresponding Mesa work you should also get
> acceleration. Insisting that you must use a driver-specific DDX is
> broken, the world doesn't work like that anymore.

On etnaviv the world definitely still works like this. The etnaviv DDX
uses the dedicated 2D hardware of the Vivante GPUs, which is much
faster and more efficient than going through glamor. Especially since
almost all X accel operations are done on linear buffers, while the 3D
GPU can only ever do tiled on both sampler and render, and some
multi-pipe 3D cores can't even read the tiling they write out. So
glamor is an endless copy fest using the resolve engine on those.

If using etnaviv with UDL is a use-case that needs to be supported, one
would need to port the UDL specifics from -modesetting to the -armada
DDX.

> Lucas, can you pls clarify? Also, why does -armada bind against all
> kms drivers? That's probably too much.

I think that's a local modification done by Alexey. The armada driver
only binds to armada and imx-drm by default.

Regards,
Lucas
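For reference, a minimal userspace sketch of the DIRTYFB notification path discussed above, assuming fb_id comes from an earlier drmModeAddFB() call:

#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

static int example_flush_rect(int drm_fd, uint32_t fb_id,
			      uint16_t x, uint16_t y,
			      uint16_t w, uint16_t h)
{
	drmModeClip clip = {
		.x1 = x,
		.y1 = y,
		.x2 = x + w,
		.y2 = y + h,
	};

	/* UDL picks this up through drm_framebuffer_funcs->dirty(). */
	return drmModeDirtyFB(drm_fd, fb_id, &clip, 1);
}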
Re: [RFC] etnaviv: missing dma_mask
Am Freitag, den 17.08.2018, 08:42 +0200 schrieb Christoph Hellwig:
> On Tue, Aug 14, 2018 at 05:12:25PM +0300, Eugeniy Paltsev wrote:
> > Hi Lucas, Christoph,
> >
> > After switching ARC to the generic dma_noncoherent cache ops, the
> > etnaviv driver started failing in the DMA mapping functions because
> > it lacks a dma_mask.
> >
> > So I'm wondering: is it a valid case to have a device which is
> > DMA capable but doesn't have a dma_mask set?
> >
> > If not, then I guess something like that should work
> > (at least it works for ARC):
>
> This looks ok as a minimal fix:
>
> Reviewed-by: Christoph Hellwig
>
> But why doesn't this device have a dma-range property in DT?

Because the etnaviv device is a virtual device not represented in DT,
as it is only used to expose the DRM device, which may cover multiple
GPU core devices. The GPU core devices are properly configured from DT,
but unfortunately many of the DMA-related operations happen through the
DRM device.

We could fix this by replacing many of the DRM helpers with
etnaviv-specific functions handling DMA per GPU core, but it isn't a
clear win right now, as generally on SoCs with multiple GPU cores the
devices are on the same bus and have the same DMA requirements.

Regards,
Lucas
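The RFC patch itself is not reproduced in this thread; a sketch of the kind of minimal fix being discussed might look like this — the function name is made up, and the 32-bit fallback mask is an assumption:

#include <linux/dma-mapping.h>
#include <linux/platform_device.h>

static void example_fixup_dma_mask(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;

	/*
	 * The virtual etnaviv device has no DT node and thus no
	 * dma-ranges, so nothing ever set a mask for it. Point dma_mask
	 * at the coherent mask and fall back to a 32-bit mask.
	 */
	if (!dev->dma_mask)
		dev->dma_mask = &dev->coherent_dma_mask;
	if (!dev->coherent_dma_mask)
		dev->coherent_dma_mask = DMA_BIT_MASK(32);
}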
Re: [PATCH] etnaviv: allow to build on ARC
Am Mittwoch, den 16.01.2019, 08:21 -0800 schrieb Christoph Hellwig:
> On Mon, Jan 14, 2019 at 07:31:57PM +0300, Eugeniy Paltsev wrote:
> > The ARC HSDK SoC has a Vivante GPU IP, so allow building etnaviv
> > for ARC.
> >
> > Signed-off-by: Eugeniy Paltsev
> > ---
> > drivers/gpu/drm/etnaviv/Kconfig | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/drivers/gpu/drm/etnaviv/Kconfig b/drivers/gpu/drm/etnaviv/Kconfig
> > index 342591a1084e..49a9957c3373 100644
> > --- a/drivers/gpu/drm/etnaviv/Kconfig
> > +++ b/drivers/gpu/drm/etnaviv/Kconfig
> > @@ -2,7 +2,7 @@
> > config DRM_ETNAVIV
> > 	tristate "ETNAVIV (DRM support for Vivante GPU IP cores)"
> > 	depends on DRM
> > -	depends on ARCH_MXC || ARCH_DOVE || ARM || COMPILE_TEST
> > +	depends on ARCH_MXC || ARCH_DOVE || ARM || ARC || COMPILE_TEST
>
> Is there any reason to not just remove the dependencies entirely?
> It seems like it could literally build everywhere, and who knows what
> other SoCs the IP blocks end up in sooner or later?

I've just sent out a patch to do exactly this instead of playing
whack-a-mole with all the architectures. The patch has been chewed on
by the 0-day robot since yesterday and didn't turn up any obvious
fallout yet.

Regards,
Lucas
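For illustration, dropping the architecture list might leave the Kconfig entry looking roughly like this — a sketch, not necessarily the exact patch Lucas sent (the "depends on MMU" line is an assumption about what remains a hard requirement):

config DRM_ETNAVIV
	tristate "ETNAVIV (DRM support for Vivante GPU IP cores)"
	# No architecture list: the driver can build anywhere the
	# remaining dependencies are met.
	depends on DRM
	depends on MMU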
Re: [PATCH] ARC: [plat-hsdk]: Add support of Vivante GPU
Am Dienstag, den 29.01.2019, 15:41 +0300 schrieb Eugeniy Paltsev:
> The HSDK board has a built-in Vivante GPU IP which works perfectly
> fine with the Etnaviv driver, so let's use it.
>
> Signed-off-by: Eugeniy Paltsev
> ---
> NOTE:
> * this patch has a prerequisite:
>   https://patchwork.kernel.org/patch/10766265/
>
> arch/arc/boot/dts/hsdk.dts      | 6 ++
> arch/arc/configs/hsdk_defconfig | 2 +-
> 2 files changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arc/boot/dts/hsdk.dts b/arch/arc/boot/dts/hsdk.dts
> index 43f17b51ee89..ef892994f024 100644
> --- a/arch/arc/boot/dts/hsdk.dts
> +++ b/arch/arc/boot/dts/hsdk.dts
> @@ -237,6 +237,12 @@
> 			reg = <0>;
> 		};
> 	};
> +
> +	gpu_3d: gpu@9 {
> +		compatible = "vivante,gc";
> +		reg = <0x9 0x4000>;
> +		interrupts = <28>;

Really no clock inputs? While the driver does not enforce this due to
backward compatibility issues, the binding says that some of the clocks
are mandatory. So I would really like to see them hooked up here, even
if it's just fixed always-on clocks.

Regards,
Lucas

> +	};
> };
>
> memory@8000 {
> diff --git a/arch/arc/configs/hsdk_defconfig b/arch/arc/configs/hsdk_defconfig
> index 87b23b7fb781..915e1cca31f8 100644
> --- a/arch/arc/configs/hsdk_defconfig
> +++ b/arch/arc/configs/hsdk_defconfig
> @@ -52,6 +52,7 @@ CONFIG_GPIO_DWAPB=y
> CONFIG_DRM=y
> # CONFIG_DRM_FBDEV_EMULATION is not set
> CONFIG_DRM_UDL=y
> +CONFIG_DRM_ETNAVIV=y
> CONFIG_FB=y
> CONFIG_FRAMEBUFFER_CONSOLE=y
> CONFIG_USB_EHCI_HCD=y
> @@ -63,7 +64,6 @@ CONFIG_MMC=y
> CONFIG_MMC_SDHCI=y
> CONFIG_MMC_SDHCI_PLTFM=y
> CONFIG_MMC_DW=y
> -# CONFIG_IOMMU_SUPPORT is not set
> CONFIG_EXT3_FS=y
> CONFIG_VFAT_FS=y
> CONFIG_TMPFS=y
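A hedged sketch of what Lucas asks for, reusing the GPU node from the patch above — not the final patch: the gpu_clk label, the 400 MHz rate, and feeding all inputs from one fixed clock are illustrative assumptions, not taken from the HSDK schematics:

gpu_clk: gpu-clk {
	compatible = "fixed-clock";
	#clock-cells = <0>;
	clock-frequency = <400000000>;
};

gpu_3d: gpu@9 {
	compatible = "vivante,gc";
	reg = <0x9 0x4000>;
	interrupts = <28>;
	/* Always-on fixed clock feeding the bus, core and shader inputs. */
	clocks = <&gpu_clk>, <&gpu_clk>, <&gpu_clk>;
	clock-names = "bus", "core", "shader";
};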