On Mon, Sep 3, 2018 at 9:25 AM Paul Lemire <paul.lem...@kdab.com> wrote:
> Glad to hear that, hopefully things are starting to make more sense now.

Getting there - thank you!

> On 09/03/2018 02:54 PM, Andy wrote:
> > Progress! Here's my current framegraph:
> >
> > [snip]
> >
> > Question:
> >
> > 1) I am using an RGBAFormat for my texture. I changed the alpha in the
> > clear colour from 0x80 to 0xEE and I now see an alpha cleared background
> > in the offscreen (see image). I can just use RGB for my purposes right
> > now, but I'm curious why the onscreen clearing is not using the alpha
> > channel? I can confirm this by changing the clear colour to #FF000000 -
> > I just get solid black.
>
> Well I believe that this depends on the format of your back buffer
> (usually it is RGB). You can try to query it with
> QSurfaceFormat::defaultFormat() and look for the alphaBuffer size (or
> apitrace also gives you the format when you select a draw call that
> renders to screen).

Got it! If I setAlphaBufferSize( 8 ) on my default format it works.

> > Problem:
> >
> > 1) The resulting scene isn't the same in the offscreen capture:
> > - the yellow cube is on top of everything
> > - the red & blue arrows aren't clipped by the plane
>
> I suspect that this is caused by the fact that you have no depth
> attachment on your RenderTarget, so that depth testing isn't performed
> properly. You would need to create another RenderTargetOutput that you
> bind to the attachment point Depth with a suitable Texture2D texture with
> format (D32, D24 ...).

Bingo. That fixes it.

> > - it isn't antialiased
>
> That's likely caused by a) not having a high enough resolution for your
> attachments b) using a Texture2D instead of a Texture2DMultisample
> (though I'm not sure RenderCapture would work with the latter).
> Have you tried going for a 2048/2048 texture instead of 512/512, assuming
> you have no memory constraints? Then you can always scale back the QImage
> you capture to 512/512 if need be.

Texture2DMultisample does indeed make it better. Once I set "samples" to
the same as my QSurfaceFormat::defaultFormat(), I get decent results. Not
100% the same, but very close. (So it does work w/RenderCapture!)

My "final" framegraph (for those following along):

RenderSurfaceSelector:
  Viewport:
    ClearBuffers:
      buffers: ColorDepthBuffer
      clearColor: "#faebd7"
      NoDraw: {}
    FrustumCulling:
      # OnScreen
      CameraSelector:
        objectName: onScreenCameraSelector
        RenderCapture:
          objectName: onScreenCapture
      # OffScreen
      CameraSelector:
        objectName: offScreenCameraSelector
        RenderTargetSelector:
          target:
            RenderTarget:
              attachments:
                - RenderTargetOutput:
                    attachmentPoint: Color0
                    texture:
                      Texture2DMultisample:
                        objectName: offScreenTexture
                        width: 1024
                        height: 768
                        format: RGBFormat
                        samples: 8
                - RenderTargetOutput:
                    attachmentPoint: Depth
                    texture:
                      Texture2DMultisample:
                        width: 1024
                        height: 768
                        format: D24
                        samples: 8
          ClearBuffers:
            buffers: ColorDepthBuffer
            clearColor: "#faebd7"
            NoDraw: {}
          RenderCapture:
            objectName: offScreenCapture

If anyone is interested in the code to read framegraphs as YAML like this,
please get in touch and I can clean it up & put on gitlab (sometime next
month). It makes it a lot easier to iterate on building a framegraph. It
also drastically reduces the amount of boilerplate code; you can include &
read them as resources, and you don't have to bring in all of QML.

Now that I have the basics working... I'll need to dig into the multipass
shader stuff to get the effects I want.

Thank you for your patience!
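P.S. For anyone following along, the default-format fix in C++ boils down
to something like the sketch below. The function name is just illustrative,
and my assumption is that it has to run before the first window/Qt3DWindow
is created, since an existing back buffer won't pick up a later change to
the default format. Setting samples to 8 here mirrors the "samples: 8" in
the framegraph above.

    #include <QSurfaceFormat>

    // Minimal sketch: request an 8-bit alpha channel (so the on-screen
    // clear colour's alpha is honoured) and 8x multisampling on the
    // default surface format. Call before creating the first window.
    void applyDefaultSurfaceFormat()   // name is illustrative
    {
        QSurfaceFormat fmt = QSurfaceFormat::defaultFormat();
        fmt.setAlphaBufferSize(8);     // RGBA back buffer instead of RGB
        fmt.setSamples(8);             // match the framegraph's samples: 8
        QSurfaceFormat::setDefaultFormat(fmt);
    }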
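And in case it helps, grabbing the offscreen result via RenderCapture looks
roughly like this. This is only a sketch: how you get hold of the node with
objectName "offScreenCapture", the output file name, and the 512x512
downscale (per Paul's suggestion) are assumptions on my part.

    #include <Qt3DRender/QRenderCapture>
    #include <QImage>
    #include <QObject>
    #include <QString>

    // Sketch: request a capture from the offscreen RenderCapture node and
    // save the resulting QImage once the renderer signals completion.
    void grabOffscreen(Qt3DRender::QRenderCapture *offScreenCapture)
    {
        Qt3DRender::QRenderCaptureReply *reply =
                offScreenCapture->requestCapture();
        QObject::connect(reply, &Qt3DRender::QRenderCaptureReply::completed,
                         reply, [reply]() {
            QImage img = reply->image();
            // Optional: scale the 1024x768 capture back down to 512x512.
            img = img.scaled(512, 512, Qt::KeepAspectRatio,
                             Qt::SmoothTransformation);
            img.save(QStringLiteral("offscreen.png"));  // hypothetical path
            reply->deleteLater();
        });
    }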
_______________________________________________
Interest mailing list
Interest@qt-project.org
http://lists.qt-project.org/mailman/listinfo/interest