I have given some thought as to how we could do it in our compiler stack. It basically comes down to adding a new type of ralloc context which takes a Vulkan allocator struct. If the parent context has such a struct, that allocator gets used. It wouldn't be that hard; I've just not gone to the effort of wiring it up.
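To make that concrete, here is a rough sketch of the idea. This is not Mesa's actual ralloc interface; the struct and function names below are made up. The point is just that a context could carry a VkAllocationCallbacks pointer and an allocation could walk up the parent chain to the nearest context that has one:

/* Hypothetical sketch only -- these names are invented, not Mesa's
 * real ralloc API.  A context may carry Vulkan allocation callbacks;
 * children without their own callbacks inherit from their parents. */
#include <stdalign.h>
#include <stddef.h>
#include <stdlib.h>
#include <vulkan/vulkan.h>

struct vk_ralloc_ctx {
   struct vk_ralloc_ctx *parent;
   const VkAllocationCallbacks *alloc;   /* NULL if this level has none */
};

static void *
vk_ralloc_size(struct vk_ralloc_ctx *ctx, size_t size)
{
   /* Find the closest ancestor (or self) that carries an allocator. */
   const VkAllocationCallbacks *cb = NULL;
   for (struct vk_ralloc_ctx *c = ctx; c; c = c->parent) {
      if (c->alloc) {
         cb = c->alloc;
         break;
      }
   }

   if (cb)
      return cb->pfnAllocation(cb->pUserData, size, alignof(max_align_t),
                               VK_SYSTEM_ALLOCATION_SCOPE_COMMAND);
   return malloc(size);   /* no callbacks anywhere in the chain */
}

The real work would of course be in ralloc's bookkeeping (parent/child lifetimes, headers, realloc), but the lookup above is essentially all the new plumbing the compiler stack would need.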
--Jason

On Tue, May 12, 2020 at 9:14 AM Jose Fonseca <jfons...@vmware.com> wrote:
>
> You raise a good point about LLVM. It can easily be the biggest memory
> consumer (at least transiently) for any driver that uses it, so the value of
> implementing Vulkan allocation callbacks without it is indeed dubious.
>
> Jose
>
> ________________________________
> From: Jason Ekstrand <ja...@jlekstrand.net>
> Sent: Monday, May 11, 2020 17:29
> To: Jose Fonseca <jfons...@vmware.com>
> Cc: ML mesa-dev <mesa-dev@lists.freedesktop.org>;
> erik.faye-l...@collabora.com <erik.faye-l...@collabora.com>
> Subject: Re: [Mesa-dev] RFC: Memory allocation on Mesa
>
> Sorry for the top-post.
>
> Very quick comment: if part of your objective is to fulfill Vulkan's
> requirements, we need a LOT more plumbing than just MALLOC/CALLOC/FREE.
> The Vulkan callbacks aren't set at a global level when the driver is
> loaded but are provided to every call that allocates anything, and we're
> expected to track these sorts of "domains" that things are allocated
> from. The reality, however, is that the moment you get into the compiler,
> all bets are off. This is also true of other drivers; I don't think
> anyone has plumbed the Vulkan allocation callbacks into LLVM. :-)
>
> --Jason
>
> On Mon, May 11, 2020 at 11:13 AM Jose Fonseca <jfons...@vmware.com> wrote:
> >
> > Hi,
> >
> > To give everybody a bit of background context, this email comes from
> > https://gitlab.freedesktop.org/mesa/mesa/-/issues/2911 .
> >
> > The short story is that Gallium components (but not Mesa) used to have
> > their malloc/free calls intercepted, to satisfy certain needs: 1) memory
> > debugging on Windows, 2) memory accounting on embedded systems. But with
> > the unification of Gallium into Mesa, the gallium vs non-gallium division
> > got blurred, leading to some mallocs being intercepted but not the
> > respective frees, and vice versa.
> >
> > I admit that trying to intercept mallocs/frees for some components and
> > not others is error prone. We could get this going again; it's doable,
> > but it's possible it would keep causing trouble, for us or others, over
> > and over again.
> >
> > The two needs mentioned above were mostly VMware's needs. So I've
> > reevaluated, and I think that with some trickery we can satisfy those
> > two needs differently. (Without widespread source code changes.)
> >
> > On the other hand, VMware is probably not the only one to have such
> > needs. In fact, the Vulkan spec added memory callbacks precisely with
> > the same use cases as ours, as seen in
> > https://www.khronos.org/registry/vulkan/specs/1.2/html/chap10.html#memory-host
> > which states:
> >
> >     Vulkan provides applications the opportunity to perform host memory
> >     allocations on behalf of the Vulkan implementation. If this feature
> >     is not used, the implementation will perform its own memory
> >     allocations.
> >     Since most memory allocations are off the critical path, this is
> >     not meant as a performance feature. Rather, this can be useful for
> >     certain embedded systems, for debugging purposes (e.g. putting a
> >     guard page after all host allocations), or for memory allocation
> >     logging.
> >
> > And I know there were people interested in having Mesa drivers on
> > embedded devices in the past (the old Tungsten Graphics having even
> > been hired multiple times to do so), and I'm pretty sure they exist
> > again.
> >
> > Therefore, rather than shying away from memory allocation abstractions
> > now, I wonder if now isn't the time to actually double down on them and
> > ensure we do so comprehensively throughout the whole of Mesa, all
> > drivers.
> >
> > After all, Mesa has traditionally always had MALLOC*/CALLOC*/FREE
> > wrappers around malloc/free, as so many other projects do.
> >
> > More concretely, I'd like to propose that we:
> >
> > - ensure all components use MALLOC*/CALLOC*/FREE and never
> >   malloc/calloc/free directly (unless interfacing with a 3rd party
> >   which expects memory to be allocated/freed with malloc/free
> >   directly); perhaps consider renaming MALLOC -> _mesa_malloc etc.
> >   while we're at it
> > - introduce mechanisms to quickly catch any mistaken use of
> >   malloc/calloc/free, regardless of the compiler/OS used:
> >   - #define malloc/free/calloc as malloc_do_not_use/free_do_not_use to
> >     trigger compilation errors, except in files which explicitly opt
> >     out of this (source files which need to interface with a 3rd party,
> >     or source files which implement the callbacks)
> >   - add a cookie to MALLOC/CALLOC/FREE memory to ensure it's not
> >     inadvertently mixed with malloc/calloc/free
> >
> > The end goal is that we should be able to have a memory allocation
> > abstraction which can be used for all the needs above: memory
> > debugging, memory accounting, and satisfying the Vulkan host memory
> > callbacks.
> >
> > Some might retort: why not just play some tricks with the linker and
> > intercept all malloc/free calls, without actually having to modify any
> > source code?
> >
> > Yes, that's indeed technically feasible, and it is precisely the sort
> > of trick I was planning to resort to in order to satisfy VMware's needs
> > without having to further modify the source code. But for these tricks
> > to work, it is absolutely imperative that one statically links the C++
> > library and STL. The problem is that if one doesn't, there's an
> > imbalance: the malloc/free/new/delete calls done in inline code in C++
> > headers will be intercepted, whereas malloc/free/new/delete calls done
> > in non-inlined code from the shared object will not, causing havoc.
> > This is OK for us at VMware (we do it already for many other reasons,
> > including avoiding DLL hell), but I doubt it will be palatable for
> > everybody else, particularly Linux distros, to have everything
> > statically linked.
> >
> > So effectively, if one really wants to implement the Vulkan host memory
> > callbacks, the best way is to explicitly use malloc/free abstractions
> > instead of malloc/free directly.
> >
> > So before we put more time into pursuing either the "all" or "nothing"
> > approach, I'd like to get a feel for where people's preferences are.
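The wrapper, compile-time trap, and cookie ideas in the quoted proposal above could look roughly like the sketch below. The names, cookie value, and opt-out macro are illustrative, not an existing Mesa interface:

/* Illustrative sketch of cookie-tagged allocation wrappers plus a
 * compile-time trap for direct malloc/free use.  Not Mesa's actual
 * implementation; names and the cookie value are made up. */
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define MESA_MEM_COOKIE 0x6d657361u   /* "mesa" */

struct mesa_mem_header {
   uint32_t cookie;
   uint32_t pad;      /* keeps the returned pointer suitably aligned */
   uint64_t size;
};

static inline void *
_mesa_malloc(size_t size)
{
   struct mesa_mem_header *hdr = malloc(sizeof(*hdr) + size);
   if (!hdr)
      return NULL;
   hdr->cookie = MESA_MEM_COOKIE;
   hdr->size = size;
   return hdr + 1;
}

static inline void
_mesa_free(void *ptr)
{
   if (!ptr)
      return;
   struct mesa_mem_header *hdr = (struct mesa_mem_header *)ptr - 1;
   /* Catch memory that came from plain malloc() (or was already freed)
    * being handed to the wrapper. */
   assert(hdr->cookie == MESA_MEM_COOKIE);
   hdr->cookie = 0;
   free(hdr);
}

/* In a shared header, files that have not opted out would get the trap;
 * malloc_do_not_use/free_do_not_use are deliberately left undeclared so
 * any stray call fails to compile. */
#ifndef MESA_ALLOW_RAW_MALLOC
#define malloc(sz)     malloc_do_not_use(sz)
#define calloc(n, sz)  calloc_do_not_use(n, sz)
#define free(p)        free_do_not_use(p)
#endif

The header-and-cookie scheme also gives the accounting and debugging hooks a single choke point: the wrappers are the only place that needs to learn about guard pages, logging, or Vulkan callbacks later.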
> >
> > Jose
_______________________________________________
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/mesa-dev
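To illustrate the point in the quoted reply above about the callbacks being per call rather than global: every allocating Vulkan entry point receives a (possibly NULL) VkAllocationCallbacks pointer together with an allocation scope, so drivers typically funnel host allocations through a small helper along these lines. This is a sketch, not any particular driver's code:

/* Sketch of the per-call pattern Vulkan imposes on drivers: the
 * application's callbacks (if any) arrive with every vkCreate*() call
 * and must be used for allocations of the matching scope. */
#include <stdlib.h>
#include <vulkan/vulkan.h>

static void *
driver_alloc(const VkAllocationCallbacks *alloc, size_t size,
             size_t align, VkSystemAllocationScope scope)
{
   if (alloc)
      return alloc->pfnAllocation(alloc->pUserData, size, align, scope);
   /* No callbacks on this call; a real driver would first fall back to
    * the callbacks given at device/instance creation, then to its own
    * allocator. */
   return malloc(size);
}

static void
driver_free(const VkAllocationCallbacks *alloc, void *ptr)
{
   if (alloc)
      alloc->pfnFree(alloc->pUserData, ptr);
   else
      free(ptr);
}

/* e.g. inside vkCreateSampler(device, pCreateInfo, pAllocator, pSampler)
 * the driver would do something like:
 *
 *    struct driver_sampler *s =
 *       driver_alloc(pAllocator, sizeof(*s), 8,
 *                    VK_SYSTEM_ALLOCATION_SCOPE_OBJECT);
 */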