Re: [PATCH] Enhance ASAN_CHECK optimization
> Formatting.  The {} should be indented like static
> and return 2 columns to the right of that.

Right.

> For base_addr computation, you don't really need g or ptr_checks,
> do you?  So why not move the:
>   auto_vec<gimple> *ptr_checks = &ctx->asan_check_map.get_or_insert (ptr);
>   gimple g = maybe_get_dominating_check (*ptr_checks);
> lines below the if?

I can do this.  But then base_checks would be invalidated when I call
get_or_insert for ptr_checks, so I'll still have to call hash_map::get.

> If asan (kernel-address) is recovering, I don't see a difference from
> not reporting two different invalid accesses to the same function and
> not reporting two integer overflows in the same function, at least if
> they have different location_t.

Ok, agreed.

BTW how about replacing '& SANITIZE_KERNEL_ADDRESS' with
'& SANITIZE_ADDRESS'?  I know we do not support recovery for userspace,
but having a general enum sounds more logical.

-Y
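P.S. To make the invalidation hazard concrete, here's a rough standalone
sketch with a toy vector-backed map -- not GCC's actual hash_map, and
every name below is made up -- where insertion can reallocate the table
and leave an earlier reference dangling:

#include <vector>
#include <utility>

// Toy map over a std::vector: push_back may reallocate the backing
// store, so a pointer returned by an earlier get_or_insert dangles --
// mirroring GCC's hash_map, whose get_or_insert can grow the table.
struct toy_map
{
  std::vector<std::pair<int, std::vector<int>>> entries;

  std::vector<int> *get_or_insert (int key)
  {
    for (auto &e : entries)
      if (e.first == key)
        return &e.second;
    entries.push_back ({key, {}});   // may reallocate 'entries'
    return &entries.back ().second;
  }

  std::vector<int> *get (int key)    // lookup only, never resizes
  {
    for (auto &e : entries)
      if (e.first == key)
        return &e.second;
    return nullptr;
  }
};

void
demo (toy_map &m)
{
  std::vector<int> *base_checks = m.get_or_insert (1);
  std::vector<int> *ptr_checks = m.get_or_insert (2); // may invalidate base_checks
  base_checks = m.get (1);  // so re-fetch it, as with hash_map::get
  (void) base_checks;
  (void) ptr_checks;
}

std::unordered_map wouldn't show this (its references survive a rehash),
which is exactly why the hash_map behavior is easy to trip over.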
Re: [PATCH] Enhance ASAN_CHECK optimization
> Testing SANITIZE_ADDRESS bit in flag_sanitize_recover doesn't make sense,
> testing it in flag_sanitize of course does, but for recover you care
> whether the SANITIZE_{KERNEL,USER}_ADDRESS bit in flag_sanitize_recover
> is set, depending on if SANITIZE_{KERNEL,USER}_ADDRESS is set in
> flag_sanitize.

Ok, got it.

BTW shouldn't we disable local optimization of ASan checks (in asan.c)
as well?  That would be a massive perf hit...

-Y
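P.S. In case it helps, a rough sketch of the intended test -- not the
actual sanopt.c code; the SANITIZE_* enumerators exist in GCC's
flag-types.h, but the values and the helper below are made up:

/* Stand-in bit values; in GCC these enumerators live in flag-types.h
   (actual values differ).  */
enum sanitize_bits
{
  SANITIZE_USER_ADDRESS = 1 << 0,
  SANITIZE_KERNEL_ADDRESS = 1 << 1
};

/* Hypothetical helper: pick the recover bit matching whichever ASan
   flavor is enabled in flag_sanitize.  */
static bool
asan_recover_p (unsigned int flag_sanitize, unsigned int flag_sanitize_recover)
{
  if (flag_sanitize & SANITIZE_KERNEL_ADDRESS)
    return (flag_sanitize_recover & SANITIZE_KERNEL_ADDRESS) != 0;
  if (flag_sanitize & SANITIZE_USER_ADDRESS)
    return (flag_sanitize_recover & SANITIZE_USER_ADDRESS) != 0;
  return false;
}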
Re: [PATCH] Optimize UBSAN_NULL checks
> And I wonder whether it'd be worth it to create sanopt.c -
> and move sanopt related stuff there

+1
Re: [PATCH] Optimize UBSAN_NULL checks
On Fri, Oct 31, 2014 at 12:19 PM, Marek Polacek-3 [via gcc] wrote:
> On Thu, Oct 30, 2014 at 07:47:52PM +0100, Marek Polacek wrote:
>
>> This patch tries to optimize away redundant UBSAN_NULL checks.
>> It walks the statements, looks for UBSAN_NULL calls and keeps
>> track of pointers and statements checking that pointer in a
>> hash map.  Now, if we can prove that some UBSAN_NULL stmt is
>> dominated by another one which requires the same or less strict
>> alignment, there's no point in keeping this check around and
>> expanding it.
>>
>> optimize_checks should be enhanced to handle other {ub,a,t}san
>> checks as well - which is what I'm going to work on next.
>
> (Strike this version.  I'm working on a variant that walks the dominator
> tree first to get better optimizations.)

Just curious, how much speedup did you get from this?  I've tried
similar optimizations and got a pitiful 3% speedup.

-Y
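P.S. For reference, a toy standalone sketch of the dominance idea as I
understand it -- simplified IR instead of GIMPLE, every name below made
up, and plain program order standing in for a real dominator-tree
query.  A later check is dropped when a kept check dominates it and
already enforces at least as strict an alignment:

#include <unordered_map>
#include <vector>

// Toy stand-ins: in the real pass these are gimple stmts keyed by the
// checked pointer, and dominance comes from the dominator tree.
struct check
{
  int id;          // statement identity, in program order
  unsigned align;  // alignment the check enforces
};

// Toy dominance oracle: program order stands in for CFG dominance.
static bool
dominates (const check &a, const check &b)
{
  return a.id < b.id;
}

// A check is redundant when an already-kept check dominates it and
// enforces at least as strict an alignment.
static std::vector<int>
find_redundant_checks (const std::unordered_map<int, std::vector<check>> &checks_by_ptr)
{
  std::vector<int> redundant;
  for (const auto &entry : checks_by_ptr)
    {
      std::vector<check> kept;
      for (const check &c : entry.second)
        {
          bool covered = false;
          for (const check &k : kept)
            if (dominates (k, c) && k.align >= c.align)
              {
                covered = true;
                break;
              }
          if (covered)
            redundant.push_back (c.id);  // drop instead of expanding
          else
            kept.push_back (c);
        }
    }
  return redundant;
}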
Re: [PATCH] Optimize UBSAN_NULL checks
On Fri, Oct 31, 2014 at 10:51 PM, Yuri Gribov wrote:
> I've tried similar optimizations

For ASan, that is.
Re: [PATCH] Fix asan optimization for aligned accesses. (PR sanitizer/63316)
> BTW, I've noticed that perhaps using BIT_AND_EXPR for the
>   (shadow != 0) & ((base_addr & 7) + (real_size_in_bytes - 1) >= shadow)
> test isn't best, maybe we could get better code if we expanded it as
>   (shadow != 0) && ((base_addr & 7) + (real_size_in_bytes - 1) >= shadow)
> (i.e. an extra basic block containing the second half of the test
> and a fast path for the shadow == 0 case if it is sufficiently common
> (probably it is)).

BIT_AND_EXPR allows an efficient branchless implementation on platforms
that support chained conditional compares (e.g. ARM).  You can't repro
this on current trunk though, because I'm still waiting for the ccmp
patches from Zhenqiang Chen to be approved :(

> Will try to code this up unless somebody beats me to that, but if
> somebody volunteered to benchmark such a change, it would be very
> much appreciated.

AFAIK the LLVM team recently got some 1% on SPEC from this.

-Y
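P.S. For anyone who wants to benchmark, a toy C-level rendering of the
two expansions -- the compiler of course emits these as GIMPLE/RTL, and
the operand types below are guesses:

#include <stdint.h>

/* BIT_AND_EXPR form: branchless, both halves always evaluated; maps
   nicely onto chained conditional compares.  */
static int
slow_path_bitand (int8_t shadow, uintptr_t base_addr, intptr_t real_size_in_bytes)
{
  return (shadow != 0)
         & ((intptr_t) (base_addr & 7) + real_size_in_bytes - 1 >= shadow);
}

/* TRUTH_ANDIF_EXPR form: an extra branch/basic block, but the common
   shadow == 0 case skips the second comparison entirely.  */
static int
slow_path_andif (int8_t shadow, uintptr_t base_addr, intptr_t real_size_in_bytes)
{
  return (shadow != 0)
         && ((intptr_t) (base_addr & 7) + real_size_in_bytes - 1 >= shadow);
}

With ccmp the first form can compile down to two compares and no branch;
the second always pays a branch but skips the add and compare entirely
when shadow == 0.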
Re: [PATCH] Fix asan optimization for aligned accesses. (PR sanitizer/63316)
> AFAIK the LLVM team recently got some 1% on SPEC from this.

On x64, that is.