> On Sep 6, 2017, at 3:42 PM, Joe Groff <[email protected]> wrote:
>
>> On Sep 6, 2017, at 3:26 PM, Andrew Trick <[email protected]> wrote:
>>
>>> On Sep 6, 2017, at 2:31 PM, Joe Groff <[email protected]> wrote:
>>>
>>>> The fact that we're using malloc and free is already part of the ABI, because old libraries need to be able to deallocate memory allocated by newer libraries.
>>>
>>> The compiler doesn't ever generate calls directly to malloc and free, and the runtime entry points we do use already take size and alignment on both allocation and deallocation.
>>
>> True, we've never said that UnsafePointer deallocation is compatible with C, and we don't currently expose malloc_size functionality in any API, AFAIK.
>>
>>>> Within the standard library we could make use of some new deallocation fast path in the future without worrying about backward deployment.
>>>>
>>>> Outside of the standard library, clients will get the benefits of whatever allocator is available on their deployed platform, because we now encourage them to use UnsafeBufferPointer.deallocate(). We can change the implementation inside UnsafeBufferPointer all we want, as long as it's still malloc-compatible.
>>>>
>>>> I'm sure we'll want to provide a better allocation/deallocation API in the future for systems programmers, based on move-only types. That will already be deployment-limited.
>>>>
>>>> Absolute worst case, we introduce a sized UnsafePointer.deallocate in the future. Any new code outside the stdlib that wants the performance advantage would either need to
>>>> - trivially wrap deallocation using UnsafeBufferPointer, or
>>>> - create a trivial UnsafePointer.deallocate thunk under an availability flag.
>>>
>>> Since we already have sized deallocate, why would we take it away? If the name is misleading, we can change the name.
>>
>> I don't think leaving it as an optional argument is worthwhile, as I explained above. Deallocation capacity is either required or we drop it completely. If we keep it, then `allocatedCapacity` is the right name.
>>
>> The reason for taking it away, besides being misleading, is that it exposes another level of unsafety.
>>
>> My thinking has been that this is not the allocation fast path of the future, and the stdlib itself could track the size of unsafely-allocated blocks if it ever used a different underlying allocator.
>>
>> Now I realize this isn't really about fast/slow deallocation paths. Removing `capacity`, or even making it optional, forces all future Swift implementations to provide malloc_size functionality for any piece of memory that is compatible with the Unsafe API.
>>
>> I'm actually OK with that, because I think it's generally desirable for application memory to reside either in malloc-compatible blocks or in fixed-size pools; i.e., I think malloc_size is something the platform needs. However, you seem to think this would be too restrictive in the future. How is this a known problem for C, and what's your confidence level that it will be a problem for Swift?
>
> No, I agree that being malloc-compatible is a reasonable goal; on Apple platforms, being able to register any custom allocator we come up with as a malloc zone would mean that the platform's existing memory profiling and debugging tools can still work. Even if we have a scheme where the allocator directly reaches into per-thread fixed-size pools, it seems to me like it'd be hard to make malloc_size impossible to implement, though it might be slow, asking each pool on each thread whether an address is inside it. Strongly encouraging, if not requiring, user code to provide deallocation sizes seems to me like a prerequisite to making that sort of design a net win over plain malloc/free.
>
> -Joe
OK, good. For growable buffers, we also want the OS to give us malloc_size, which may be larger than the requested capacity.

The semantics of buffer.deallocate() need to be: free `buffer.count` bytes of memory at `buffer.baseAddress`. So that will always be the fast path! Kelvin, do you agree with that?

Any future safe API for manual buffer allocation/deallocation will also track the buffer size, so that will also be the fast path.

In the future, pointer.deallocate() without capacity might become a slower path. But I think the UnsafePointer allocation/deallocation paths are Swift's equivalent of malloc/free. If the client code knows the buffer size, it shouldn't be using those. In fact, we don't want to force the client to track capacity when deallocation is *not* performance critical.

I suppose an optional `allocatedCapacity` argument would be OK, since we should be asserting in the runtime anyway. It just adds complexity to the API, and I don't buy the backward deployment argument.

-Andy

_______________________________________________
swift-evolution mailing list
[email protected]
https://lists.swift.org/mailman/listinfo/swift-evolution
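The two deallocation paths discussed in this thread can be sketched in Swift. This uses the modern (Swift 4.1+) spellings of the APIs, which were still in flux when this thread was written, so treat it as an illustration of the design rather than the exact API under review:

```swift
// Fast path: an UnsafeMutableBufferPointer knows its own count, so its
// deallocate() can always hand the allocation size back to the allocator.
let buffer = UnsafeMutableBufferPointer<Int>.allocate(capacity: 16)
buffer.initialize(repeating: 0)
buffer[0] = 42
print(buffer.count)        // 16 — the size is tracked by the buffer itself
buffer.deallocate()

// Potentially slower path: a bare UnsafeMutablePointer carries no size, so
// the runtime would have to recover it (malloc_size-style) at deallocation.
let ptr = UnsafeMutablePointer<Int>.allocate(capacity: 16)
ptr.initialize(repeating: 0, count: 16)
ptr.deinitialize(count: 16)
ptr.deallocate()           // no capacity argument: size must be recoverable
```

The buffer form is why buffer.deallocate() can "always be the fast path" above: the count is part of the value, so no malloc_size lookup is needed.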
