augusto2112 added a comment.

Hi Jason,

I could place this check in the caller, but I think here is the right place for it; maybe I can convince you.

> the perf impact of forcing all reads to be out of memory would be
> dramatically bad for a remote system debug session, this can't be done.

On macOS, this would only affect images that have the LC_SEGMENT_SPLIT_INFO load command. Note that this does not include images that were extracted from the shared cache, as __LINKEDIT is already stripped in that case, so I don't think the performance impact would be that big.

For other object files, I could change the default behavior of IsSafeToReadFromFileCache back to what we had before, but, similar to the problem we had on Mach-O, there might be situations where the memory diverges that we don't know about.

My reasoning for placing this change here is that I think Target::ReadMemory should be as safe as possible when using the filecache (within reason: if a process changes read-only data mid-execution, that's its responsibility, and callers can force a live memory read). This way, more callers can benefit from this optimization without having to thoroughly investigate whether it would be safe for them. For example, right now any read from a section that contains LC_SEGMENT_SPLIT_INFO could return wrong data, and we would have to reason at every call site about whether the default behavior of Target::ReadMemory is a problem or not. Also, many of those call sites are generic, and having to check everywhere whether this is a problem for a //specific// object file would not be very good.

If you're not convinced, I can see a couple of alternatives:

- Instead of passing a boolean determining whether we prefer the filecache or memory, pass an enum with more options (a rough sketch is at the end of this message), something like:
  - Default to memory (fall back to the filecache).
  - Default to the filecache (fall back to memory).
  - Use the filecache if deemed safe.
  - Use the filecache if read-only.
  - Etc.
- Create another version of "safe" read memory that does what this change does.
- Centralize the logic that figures out whether a filecache read is safe, so we can reuse it from different call sites.

> I should add — I also don’t think this will fix the bug you’re working on.

At a minimum it solves rdar://76973844, which I verified. I couldn't verify my bug with a real watch yet, but as it stands we're definitely reading wrong memory sometimes.

> I do like the debug-lldb check that the file cache read matches the live
> memory part. :)

At least we can keep that, then :) Do you think it should be a crash, or maybe a log? I wasn't sure what was best. (A sketch of both options is below.)
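
To make the enum alternative concrete, here is a rough sketch. All of the names below are hypothetical, not existing LLDB API (the patch as written passes a single boolean), and the two read paths are stand-in helpers:

    // Hypothetical stand-ins for the target's two read paths.
    #include <cstddef>
    #include <cstdint>

    size_t ReadFromFileCache(uint64_t addr, void *dst, size_t len);
    size_t ReadLiveMemory(uint64_t addr, void *dst, size_t len);
    bool IsSafeToReadFromFileCache(uint64_t addr, size_t len);
    bool IsReadOnlyRegion(uint64_t addr, size_t len);

    enum class MemoryReadPolicy {
      PreferLiveMemory,    // Default to memory, fall back to the filecache.
      PreferFileCache,     // Default to the filecache, fall back to memory.
      FileCacheIfSafe,     // Use the filecache only if deemed safe.
      FileCacheIfReadOnly, // Use the filecache only for read-only regions.
    };

    size_t ReadMemory(uint64_t addr, void *dst, size_t len,
                      MemoryReadPolicy policy) {
      switch (policy) {
      case MemoryReadPolicy::PreferLiveMemory:
        if (size_t n = ReadLiveMemory(addr, dst, len))
          return n;
        return ReadFromFileCache(addr, dst, len);
      case MemoryReadPolicy::PreferFileCache:
        if (size_t n = ReadFromFileCache(addr, dst, len))
          return n;
        return ReadLiveMemory(addr, dst, len);
      case MemoryReadPolicy::FileCacheIfSafe:
        if (IsSafeToReadFromFileCache(addr, len))
          return ReadFromFileCache(addr, dst, len);
        return ReadLiveMemory(addr, dst, len);
      case MemoryReadPolicy::FileCacheIfReadOnly:
        if (IsReadOnlyRegion(addr, len))
          return ReadFromFileCache(addr, dst, len);
        return ReadLiveMemory(addr, dst, len);
      }
      return 0;
    }

Generic call sites would pass FileCacheIfSafe and get the conservative behavior by default, while callers that have already reasoned about their sections could opt into PreferFileCache.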
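And for the crash-versus-log question, a sketch of what the debug-build consistency check could look like, showing both options (the helper names are again hypothetical stand-ins, not LLDB API):

    // After a filecache read, re-read the same range from live memory in
    // debug builds and compare the bytes.
    #include <cassert>
    #include <cstdint>
    #include <cstring>
    #include <vector>

    size_t ReadFromFileCache(uint64_t addr, void *dst, size_t len);
    size_t ReadLiveMemory(uint64_t addr, void *dst, size_t len);
    void LogWarning(const char *msg); // stand-in for LLDB's log machinery

    size_t ReadMemoryChecked(uint64_t addr, void *dst, size_t len) {
      size_t n = ReadFromFileCache(addr, dst, len);
    #ifndef NDEBUG
      if (n > 0) {
        std::vector<uint8_t> live(n);
        if (ReadLiveMemory(addr, live.data(), n) == n &&
            std::memcmp(dst, live.data(), n) != 0) {
          // Crash or log is the open question: shown here as a log line
          // followed by an assert that fires only in debug builds.
          LogWarning("filecache read diverges from live memory");
          assert(false && "filecache read diverges from live memory");
        }
      }
    #endif
      return n;
    }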
I could place this check on the caller, but I think here is the right place for it, maybe I can convince you. > the perf impact of forcing all reads to be out of memory would be > dramatically bad for a remote system debug session, this can't be done. On macOS, this would only affect images that have the LC_SEGMENT_SPLIT_INFO load command. Note that this does not include images that were extracted from the shared cache, as __LINKEDIT is already stripped in that case, so I don't think the performance impact would be that big. For other object files, I could change the default behavior of IsSafeToReadFromFileCache to what we had prior, but, similar to the problem we had on mach-O, there might be situations where the memory diverges that we don't know about. My reasoning for placing this change here is that I think Target::ReadMemory should be as safe as possible when using the filecache (within reason, if a process changes read-only data mid-execution then that's their responsibility , and they can force a live memory read), this way more callers can benefit from this optimization without having to thoroughly investigate that would be safe for themselves. For example, right now any from a section that contains LC_SEGMENT_SPLIT_INFO could read wrong data, and we would have to reason at every call sites if the default behavior of Target::ReadMemory is a problem or not. Also, many of those call sites are generic, and having to check if this is a problem for a //specific// object file everywhere would not be very good. If you're not convinced, I can see a couple of alternatives: - Instead of passing a boolean determining if we prefer the filecache or memory, pass an enum instead with more options, something like: - Default to memory (fallback to filecache). - Default to filecache (fallback to memory). - Use filecache if deemed safe. - Use filecache if read-only. - Etc. - Create another version of "safe" read memory that does what this change does. - Concentrate this logic of figuring out if the filecache read is safe, so we can reuse it from different call sites. > I should add — I also don’t think this will fix the bug you’re working on. At the bare minimum it solves rdar://76973844, which I verified. I couldn't verify my bug with a real watch yet but as it stands we're definitely reading wrong memory sometimes. > I do like the debug-lldb check that the file cache read matches the live > memory part. :) At least that we can keep then :) Do you think it's should be a crash, or maybe a log? Wasn't sure what was best. Repository: rG LLVM Github Monorepo CHANGES SINCE LAST ACTION https://reviews.llvm.org/D106584/new/ https://reviews.llvm.org/D106584 _______________________________________________ lldb-commits mailing list lldb-commits@lists.llvm.org https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-commits