I have a test program written in UPC that takes a long time to compile
on Mac OS X. This is caused by the var_tracking code, which I think is
erroneously enabled in the no-optimization case; only the "-g" option
is used on the command line.
When process_options() (in toplev.c) is called, flag_var_tracking has
the value 2, which is AUTODETECT_VALUE, and is then supposed to be set
based on the optimization level:
1466   if (flag_var_tracking == AUTODETECT_VALUE)
1467     flag_var_tracking = optimize >= 1;
On an x86_64 Linux box this sets flag_var_tracking to 0.
However, Darwin uses the target_option.override hook, so
darwin_override_options() is called at the beginning of
process_options(), and this code gets executed:
2929   if (flag_var_tracking
2930       && generating_for_darwin_version >= 9
2931       && (flag_gtoggle ? (debug_info_level == DINFO_LEVEL_NONE)
2932                        : (debug_info_level >= DINFO_LEVEL_NORMAL))
2933       && write_symbols == DWARF2_DEBUG)
2934     flag_var_tracking_uninit = 1;
The above sets flag_var_tracking_uninit, since all of the conditions hold:
flag_var_tracking == 2 (still the AUTODETECT_VALUE sentinel)
generating_for_darwin_version == 10
debug_info_level >= DINFO_LEVEL_NORMAL
write_symbols == DWARF2_DEBUG
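Note that the first condition is satisfied by the sentinel itself:
flag_var_tracking is still 2, meaning "not decided yet", but as a plain
truth test any nonzero value passes. A trivial standalone illustration
(just the sentinel logic, not GCC code):

  #include <stdio.h>

  #define AUTODETECT_VALUE 2   /* "user has not decided" sentinel */

  int main (void)
  {
    int flag_var_tracking = AUTODETECT_VALUE;

    /* Same shape as the test in darwin_override_options(): the
       sentinel is nonzero, so it reads as "tracking requested".  */
    if (flag_var_tracking)
      printf ("condition passes with the undecided sentinel\n");
    return 0;
  }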
Once control returns to process_options():
1460   /* If the user specifically requested variable tracking with tagging
1461      uninitialized variables, we need to turn on variable tracking.
1462      (We already determined above that variable tracking is feasible.)  */
1463   if (flag_var_tracking_uninit)
1464     flag_var_tracking = 1;
1465
1466   if (flag_var_tracking == AUTODETECT_VALUE)
1467     flag_var_tracking = optimize >= 1;
So flag_var_tracking ends up being set based on the debug level, not
the optimization level: flag_var_tracking_uninit is already 1, so line
1464 forces flag_var_tracking to 1 and the AUTODETECT_VALUE check at
line 1466 never consults optimize.
The net effect is that on Darwin var_tracking is always enabled for
"-g", even at optimization level 0.
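To make the interaction concrete, here is a minimal standalone model of
the two decision points (plain C with the flags reduced to ints and the
values from my "-g" run; a sketch, not GCC code):

  #include <stdio.h>

  #define AUTODETECT_VALUE 2

  int main (void)
  {
    int optimize = 0;                        /* -O0 */
    int flag_var_tracking = AUTODETECT_VALUE;
    int flag_var_tracking_uninit = 0;
    int darwin = 1;                          /* set to 0 for the Linux path */

    /* darwin_override_options(): runs first via target_option.override.
       All four conditions held in my run; reduced here to the sentinel
       test that actually decides the outcome.  */
    if (darwin && flag_var_tracking)
      flag_var_tracking_uninit = 1;

    /* process_options(): the uninit flag forces tracking on before the
       AUTODETECT check ever looks at the optimization level.  */
    if (flag_var_tracking_uninit)
      flag_var_tracking = 1;
    if (flag_var_tracking == AUTODETECT_VALUE)
      flag_var_tracking = optimize >= 1;

    /* Prints 1 with darwin = 1, 0 with darwin = 0.  */
    printf ("flag_var_tracking = %d\n", flag_var_tracking);
    return 0;
  }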
If I specify "-g -fvar-tracking" on the Linux x86_64 box, I also get
long compile times on my test.
What is the reason for enabling var_tracking at -O0 on Darwin? Is
var_tracking supposed to run at all at -O0?
Thanks,
Nenad