Re: Unifying the GCC Debugging Interface
On Tue, Nov 20, 2012 at 6:16 PM, Lawrence Crowl wrote: > And, as a side note, highly formatted output generally is not > much better than printf. For any text that needs to be localized, > I recommend that we stick with what we have. I agree with Lawrence that for texts that need localization, what we currently have is probably much better deployed. On the other hand, for debugging routines and in-memory formatting, IOStreams are very handy. -- Gaby
Re: Unifying the GCC Debugging Interface
On Tue, Nov 20, 2012 at 7:01 PM, Xinliang David Li wrote: > Right -- gdb does not know the complete type of std::cout and > std::cerr -- try the following program with -g and invoke print, or << > in the debugger -- see what you will get: Is this because of the hack we (libstdc++ folks) used to define them? -- Gaby
Re: Unifying the GCC Debugging Interface
On 11/21/2012 02:01 AM, Xinliang David Li wrote: Right -- gdb does not know the complete type of std::cout and std::cerr -- try the following program with -g and invoke print, or << in the debugger -- see what you will get: But that also suggests that the debugging experience needs some improvement. Everyone will win if this works better. Slightly off topic of course, but... Theo.
Re: Unifying the GCC Debugging Interface
On Wed, Nov 21, 2012 at 5:37 AM, Theodore Papadopoulo wrote: > On 11/21/2012 02:01 AM, Xinliang David Li wrote: >> >> Right -- gdb does not know the complete type of std::cout and >> std::cerr -- try the following program with -g and invoke print, or << >> in the debugger -- see what you will get: > > > But that also suggest that the debugging experience needs for some > improvement Everyone will win if this works better Fully agreed. -- Gaby
Re: Unifying the GCC Debugging Interface
Hi, On Tue, 20 Nov 2012, Lawrence Crowl wrote: > On 11/19/12, Diego Novillo wrote: > > On Nov 19, 2012 Michael Matz wrote: > > > So, yes, the larger layouting should be determined by name of the > > > dump function. A flag argument might look nice from an interface > > > design perspective, but it's harder to use in the debugger. > > > > As long as all these different objects share the same data > > structure, we will need to have different named entry points. > > Ideally they would all respond to 'dump(t)' and overloading will > > figure it out automatically. For now, we'll need dump_function, > > dump_tree, dump_generic, and we may need a few more. > > Diego and I talked about this a bit more, and would like to explore > a set of dump names that distinguish between dumping the head of > an item and its body. In essence, the former asks for the function > declaration, the latter its definition. > > Comments? Sounds useful for functions. But what other items do you have in mind? What's the head of a PLUS_EXPR, or generally a tree that isn't DECL_P, or of a gimple_seq or a CFG edge? For the latter two I could concoct some artificial definition of head, but it would feel arbitrary. Ciao, Michael.
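For illustration only, here is a toy C++ sketch of the head/body split Lawrence describes. The type toy_function and the names dump_head and dump_body are invented for this example (they are not names settled on in the thread), and real dumpers would of course operate on trees or function pointers inside the existing dump machinery rather than on a stand-in struct:

// Toy sketch only -- not GCC code.
#include <cstdio>

struct toy_function
{
  const char *name;       // stands in for the declaration ("head")
  const char *body_text;  // stands in for the definition ("body")
};

// Dump only the declaration-like part of the item.
void
dump_head (FILE *out, const toy_function &fn)
{
  fprintf (out, "%s ();\n", fn.name);
}

// Dump the full definition.
void
dump_body (FILE *out, const toy_function &fn)
{
  fprintf (out, "%s ()\n{\n%s\n}\n", fn.name, fn.body_text);
}

// Second form without the FILE* argument, as in Lawrence's list,
// so it is easy to call from the debugger.
void
dump_head (const toy_function &fn)
{
  dump_head (stderr, fn);
}

int
main ()
{
  toy_function fn = { "foo", "  return 42;" };
  dump_head (stdout, fn);
  dump_body (stdout, fn);
  dump_head (fn);
  return 0;
}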
Re: Unifying the GCC Debugging Interface
On 11/20/2012 08:32 PM, Basile Starynkevitch wrote: On Tue, Nov 20, 2012 at 11:24:40AM -0800, Lawrence Crowl wrote: [] All of these functions come in two forms. function (FILE *, item_to_dump, formatting) function (item_to_dump, formatting) Since we have switched to C++, it would be really nice to have dump functions writing to a C++ std::ostream. If I understood correctly, including iostream may (depending on your c++ library) introduce static constructors in every translation unit. This could negatively impact the startup time of gcc. LLVM explicitly forbids its use. If we allow it, it would be good if a more knowledgeable person thought about the possible overhead. Cheers Tobi
Re: Unifying the GCC Debugging Interface
On Wed, Nov 21, 2012 at 7:07 AM, Tobias Grosser wrote: > On 11/20/2012 08:32 PM, Basile Starynkevitch wrote: >> >> On Tue, Nov 20, 2012 at 11:24:40AM -0800, Lawrence Crowl wrote: >> [] > > All of these functions come in two forms. > > function (FILE *, item_to_dump, formatting) > function (item_to_dump, formatting) >> >> >> Since we have switched to C++, it would be really nice to have dump >> functions >> writing to a C++ std::ostream > > > If I understood correctly, including iostream may (depending on your c++ > library) introduce static constructors in every translation unit. This is obviously false on the face of it. Only the translation unit that includes iostream gets the nifty counters. Furthermore, not including iostream does not mean you don't get static constructors -- GCC has a lot of global variables and if any of them incurs dynamic initialization, you get dynamic initialization. Note also that if you explicitly delay initialization to runtime, you are going to pay for it anyway through brittle manual initialization. > This could > negatively impact the startup time of gcc. Do you have a concrete data point on this specific case? (I am not asking for an artificial benchmark) The cost you get with iostream isn't with dynamic initialization, it is with something else (notably locales.) > LLVM explicitly forbids the use. If we allow it, it would be good if a more > knowledgeable person thought about the possible overhead. LLVM forbids a lot of good things. If your competition is LLVM, you don't win by repeating the same mistakes. -- Gaby
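To make the point about non-iostream static constructors concrete, here is a minimal standalone example (the global's name and contents are invented for illustration). No iostream header is included, yet the translation unit still needs a "static constructor", because the global has a non-trivial constructor:

// No <iostream> anywhere, yet this translation unit still requires
// dynamic initialization before main runs: the std::string global
// has a non-trivial constructor.
#include <string>

static std::string note = "dynamically initialized";

int
main ()
{
  return note.empty () ? 1 : 0;
}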
Re: Unifying the GCC Debugging Interface
On 11/21/2012 02:33 PM, Gabriel Dos Reis wrote: On Wed, Nov 21, 2012 at 7:07 AM, Tobias Grosser wrote: On 11/20/2012 08:32 PM, Basile Starynkevitch wrote: On Tue, Nov 20, 2012 at 11:24:40AM -0800, Lawrence Crowl wrote: [] All of these functions come in two forms. function (FILE *, item_to_dump, formatting) function (item_to_dump, formatting) Since we have switched to C++, it would be really nice to have dump functions writing to a C++ std::ostream. If I understood correctly, including iostream may (depending on your c++ library) introduce static constructors in every translation unit. This is obviously false on the face of it. Sorry, I did not express this carefully. Also, I probably did not make clear that I am not proposing not to use iostream, but rather asking for someone knowledgeable who understands the costs. Is it correct to state that every translation unit that includes iostream will include the iostream static constructors? Will the number of static constructors increase linearly with the number of translation units? Is it necessary to include iostream in a core header, in case we want to use iostream for the debugging functionality? Only the translation unit that includes iostream gets the nifty counters. Furthermore, not including iostream does not mean you don't get static constructors -- GCC has a lot of global variables and if any of them incurs dynamic initialization, you get dynamic initialization. Note also that if you explicitly delay initialization to runtime, you are going to pay for it anyway through brittle manual initialization. I was mainly interested in comparing FILE* and iostream. To my knowledge the FILE* interface does not have any significant construction overhead. This could negatively impact the startup time of gcc. Do you have a concrete data point on this specific case? (I am not asking for an artificial benchmark) No. I just heard about it and wanted to verify that this is a non-issue. The cost you get with iostream isn't with dynamic initialization, it is with something else (notably locales.) Thanks for the pointer. Cheers Tobi
Re: Unifying the GCC Debugging Interface
On Wed, Nov 21, 2012 at 7:48 AM, Tobias Grosser wrote: > Is it correct to state that every translation unit that includes iostream > will include the iostream static constructors? C++ requires the definitions of globals such as std::cin, std::cout, and std::cerr that must be constructed (by any magic) before users attempt to use them. To aid with this, the C++ standard formalizes the programming pattern known as `nifty counter' in the form of the class std::ios_base::Init such that (quoting C++) The class Init describes an object whose construction ensures the construction of the eight objects declared in <iostream> (27.4) that associate file stream buffers with the standard C streams provided for by the functions declared in <cstdio> (27.9.2). Whether a compiler decides to implement this with "static constructor" is an implementation detail issue. Of course, since no object is constructed more than once, no actual iostream object constructor is run more than once. You can see how we implemented this in libstdc++ http://gcc.gnu.org/viewcvs/trunk/libstdc%2B%2B-v3/src/c%2B%2B98/ios_init.cc?revision=184997&view=markup > Will the number of static > constructors increase linearly with the number of translation units? Is it > necessary to include iostream in a core header, in case we want to use > iostream for the debugging functionality? I think this is a case of premature optimization and you are worrying about the wrong thing. Every translation unit that includes <iostream> gets a static global variable of an empty class type (therefore occupying 1 byte). See http://gcc.gnu.org/viewcvs/trunk/libstdc%2B%2B-v3/include/std/iostream?revision=184997&view=markup That 1 byte triggers a *dynamic* initialization from the corresponding translation unit. However, the actual iostream objects are constructed only once. > > >> Only the translation unit that includes >> iostream gets the nifty counters. >> >> Furthermore, not including iostream does not mean you don't >> get static constructors -- GCC has a lot of global variables and if >> any of them incurs dynamic initialization, you get >> dynamic initialization. Note also that if you explicitly delay >> initialization to runtime, you are going to pay for it anyway >> through brittle manual initialization. > > > I was mainly interested in comparing FILE* and iostream. To my knowledge the > FILE* interface does not have any significant construction overhead. You are kidding me, right? Anyway, I think you are focusing on the wrong thing. GCC's homegrown IO incurs far more overhead than you appear to believe - I am saying this from my work on the diagnostic machinery; we can do a better job if we had a more typed interface. This is true not only for the diagnostic machinery, which is partly used by the various debug systems, but also for the debug systems themselves. You are worrying about something I suspect you would not be able to measure compared to the performance of the existing debug systems. -- Gaby
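To make the nifty-counter mechanism concrete, here is a minimal self-contained sketch, not libstdc++ code: the Log class and all names are invented, the whole thing is squeezed into one file for brevity (in real use the first half would live in a header included by many translation units), and it glosses over alignment, which the real implementation in the ios_init.cc file linked above handles properly:

// Minimal nifty-counter (Schwarz counter) sketch.
#include <cstdio>
#include <new>

// ---- "header" part --------------------------------------------------
struct Log
{
  Log ();                       // expensive one-time setup
  void say (const char *msg);
};

extern Log &global_log;         // usable before main, like std::cout

// Every translation unit including the header gets one of these 1-byte
// objects; its constructor runs during static initialization.
static struct LogInit
{
  LogInit ();
  ~LogInit ();
} log_init_instance;

// ---- "library" part -------------------------------------------------
static int nifty_counter;                // zero-initialized
static char log_storage[sizeof (Log)];   // raw space (alignment ignored here)
Log &global_log = *reinterpret_cast<Log *> (log_storage);

Log::Log () { std::puts ("Log constructed exactly once"); }
void Log::say (const char *msg) { std::puts (msg); }

LogInit::LogInit ()
{
  if (nifty_counter++ == 0)
    new (log_storage) Log ();            // first includer constructs it
}

LogInit::~LogInit ()
{
  if (--nifty_counter == 0)
    global_log.~Log ();                  // last includer destroys it
}

// ---- user code ------------------------------------------------------
int
main ()
{
  global_log.say ("hello from main");
  return 0;
}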
Re: Unifying the GCC Debugging Interface
On 11/21/2012 03:25 PM, Gabriel Dos Reis wrote: You are worrying about something I suspect you would not be able to measure compared to the performance of the existing debug systems. I was not too worried, just very interested in some judgment. Thanks a lot for the nice explanation. Tobias
Re: Unifying the GCC Debugging Interface
I forgot to add this important information: you don't get the nifty counters if you don't include <iostream>. Specifically, that means including <ostream> or <istream> does not introduce any nifty counter. Including <sstream>, which allows you to perform in-memory formatted IO, does not introduce any nifty counter either. Said differently, the worry about IOStreams introducing unnecessary "static constructors" is either overblown or misplaced, or both. -- Gaby
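A small illustration of the in-memory formatting point: only <sstream>, <string> and <cstdio> are included, so no std::ios_base::Init object is pulled into this translation unit. The describe_edge helper and its output format are invented for the example and are not GCC code:

// Format a debug string in memory, then hand it to FILE*-based output.
#include <sstream>
#include <string>
#include <cstdio>

static std::string
describe_edge (int src, int dest, double probability)
{
  std::ostringstream os;
  os << "edge " << src << " -> " << dest
     << " (prob " << probability << ")";
  return os.str ();
}

int
main ()
{
  std::fprintf (stderr, "%s\n", describe_edge (2, 5, 0.75).c_str ());
  return 0;
}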
Re: Unifying the GCC Debugging Interface
On Wed, Nov 21, 2012 at 3:18 AM, Gabriel Dos Reis wrote: > On Tue, Nov 20, 2012 at 6:16 PM, Lawrence Crowl wrote: > >> And, as a side note, highly formatted output generally is not >> much better than printf. For any text that needs to be localized, >> I recommend that we stick with what we have. > > I agree with Lawrence that for texts that need localization, what > we currently have is probably much better deployed. On the other hand, for > debugging routines and in-memory formatting, IOStreams are > very handy. I'm not deeply against iostreams, but I don't see that they bring us any significant advantages over what we already have. We already have type-checked formatting, we can already write to a memory buffer. It took a lot of work to get there, but that work has been done. It's quite unlikely that we would ever want to use iostreams for user-visible compiler output, because there is no straightforward support for localization. So we are only talking about dump files and debug output. What parts of the compiler would be clearly better if we used iostreams? Ian
Re: Unifying the GCC Debugging Interface
On Wed, Nov 21, 2012 at 9:02 AM, Ian Lance Taylor wrote: > On Wed, Nov 21, 2012 at 3:18 AM, Gabriel Dos Reis > wrote: >> On Tue, Nov 20, 2012 at 6:16 PM, Lawrence Crowl wrote: >> >>> And, as a side note, highly formatted output generally is not >>> much better than printf. For any text that needs to be localized, >>> I recommend that we stick with what we have. >> >> I agree with Lawrence that for texts that need localization, what >> we currently have is probably much better deployed. On the other hand, for >> debugging routines and in-memory formatting, IOStreams are >> very handy. > > I'm not deeply against iostreams, but I don't see that they bring us > any significant advantages over what we already have. We already have > typed check formatting, we can already write to a memory buffer. It > took a lot of work to get there, but that work has been done. It's > quite unlikely that we would ever want to use iostreams for > user-visible compiler output, because there is no straightforward > support for localization. As I said earlier, our homegrown IO with localization is probably much better than what bare bones C++ IOstreams offer, so we are all in agreement over this. > So we are only talking about dump files and > debug output. Yes. > What parts of the compiler would be clearly better if > we used iostreams? Having to hardcode the format specifiers means that we are either restricted in changes or bound to (silent truncation) errors when we change representation. -- Gaby
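A hypothetical example of the hazard Gaby mentions (the insn_count variable is invented for illustration): if a counted quantity is later widened from int to a 64-bit type, a hardcoded "%d" has to be kept in sync by hand and misbehaves when it is not, while the stream insertion simply follows the declared type:

// Contrast printf-style format specifiers with typed stream insertion.
#include <cstdio>
#include <sstream>

int
main ()
{
  long long insn_count = 5000000000LL;   // was a plain 'int' in an earlier revision

  // The "%d" written for the old type now needs a narrowing cast just to
  // stay well defined, and the printed value is silently wrong.
  std::printf ("insn count: %d\n", (int) insn_count);

  // The typed interface picks up the new type automatically.
  std::ostringstream os;
  os << "insn count: " << insn_count << '\n';
  std::fputs (os.str ().c_str (), stdout);
  return 0;
}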
Re: -fPIC -fPIE
On 14/11/2012 15:27, Ian Lance Taylor wrote: > On Wed, Nov 14, 2012 at 5:36 AM, Richard Earnshaw wrote: >> On 13/11/12 14:56, Ian Lance Taylor wrote: >>> >>> Currently -fPIC -fPIE seems to be the same as -fPIE. Unfortunately, >>> -fPIE -fPIC also seems to be the same as -fPIE. It seems to me that, >>> as is usual with conflicting options, we should use the one that >>> appears last on the command line. >>> >>> Do we have an existing mechanism in options processing for one option >>> to turn off another, where the options are not exact inverses? I >>> looked for one but I didn't see one. There is support for that for >>> options with the Mask property, but I don't see it for non-target >>> options. >> >> pic and pie are mostly the same, but the pre-emption rules are different. >> For fpie we don't have to permit pre-emption of global definitions. >> >> I hope we don't loose that distinction. > > No, of course not. All I'm talking about here is option processing > when both -fPIC and -fPIE are provided. There is no change to the > normal case of providing just -fPIC or just -fPIE. I think both -fPIC -fPIE and -fPIE -fPIC should be the same as -fPIC. The main advantage is that you can compile a program with CFLAGS="-O2 -g -fPIE", and libtool's adding of -fPIC for shared libraries will work reliably. If -fPIE can still override -fPIC, the result depends on whether -fPIC comes before or after CFLAGS. Paolo
removing forced labels triggers assertion in dwarf info generation
Hello, I am facing some trouble with the following code (reduced with c-reduce):

fn1 (ip)
     int *ip;
{
  int x = 0;
  int *a;
base:
  x++;
  if (x == 4)
    return;
  *a++ = 1;
  goto *&&base + *ip++;
}

The problem lies in the label base. Label base is added to the forced_labels list, but it is unfortunately deleted by cfg_layout_merge_blocks since it's the first insn of a basic block being merged into another. This breaks DWARF information generation because it calls maybe_record_trace_start with all the forced labels (dwarf2cfi.c in create_trace_edges) and expects a trace to exist starting at each of the forced labels in the list (ensured by the gcc_assert(ti != NULL) in maybe_record_trace_start). Even though the label does exist in the forced labels list (now as a note saying it was deleted), the label didn't trigger the generation of a trace at this point. The code in cfgrtl to remove the label is:

  if (LABEL_P (BB_HEAD (b)))
    {
      delete_insn (BB_HEAD (b));
    }

I was, however, expecting a call to can_delete_label_p in the condition, or the removal of the label from forced_labels if it is a forced label. Adding the can_delete_label_p condition to the code above results in a segmentation fault during scheduling, so it seems there is some kind of assumption regarding the position of code_labels in a block. The segmentation fault is triggered in sched_get_condition_with_rev_uncached because the code_label is passed as an argument to this function and GET_CODE (pat) == COND_EXEC generates the seg fault. My question is: given it is a forced label, it seems to me that we really shouldn't delete the label. Is there a missed constraint in the condition above, and therefore some further check needed in scheduling? Paulo Matos
Re: -fPIC -fPIE
On Wed, Nov 21, 2012 at 8:56 AM, Paolo Bonzini wrote: > Il 14/11/2012 15:27, Ian Lance Taylor ha scritto: >> On Wed, Nov 14, 2012 at 5:36 AM, Richard Earnshaw wrote: >>> On 13/11/12 14:56, Ian Lance Taylor wrote: Currently -fPIC -fPIE seems to be the same as -fPIE. Unfortunately, -fPIE -fPIC also seems to be the same as -fPIE. It seems to me that, as is usual with conflicting options, we should use the one that appears last on the command line. Do we have an existing mechanism in options processing for one option to turn off another, where the options are not exact inverses? I looked for one but I didn't see one. There is support for that for options with the Mask property, but I don't see it for non-target options. >>> >>> pic and pie are mostly the same, but the pre-emption rules are different. >>> For fpie we don't have to permit pre-emption of global definitions. >>> >>> I hope we don't loose that distinction. >> >> No, of course not. All I'm talking about here is option processing >> when both -fPIC and -fPIE are provided. There is no change to the >> normal case of providing just -fPIC or just -fPIE. > > I think both -fPIC -fPIE and -fPIE -fPIC should be the same as -fPIC. > > The main advantage is that you can compile a program with CFLAGS="-O2 -g > -fPIE", and libtool's adding of -fPIC for shared libraries will work > reliably. If -fPIE can still override -fPIC, the result depends on > whether -fPIC comes before or after CFLAGS. ...which is exactly how all our other options work. The last one wins. Why should these be different? Using -fPIE in CFLAGS for libtool seems like a very specific use case, and I don't see it as sufficient justification for changing our ordinary option processing. Ian
RFC - Initial planning for next Cauldron workshop
Ian and I have started thinking about the next Cauldron. This time, we are thinking of organizing it in Mountain View, at Google's headquarters. Dates are not yet set in stone, but these are some likely details: - The workshop would last 3 days, just like the Prague meeting. - Dates: We are looking at 12/Jul - 14/Jul, and 2/Aug - 4/Aug. - Hotels: Google can negotiate reduced rates and reserve blocks of rooms in the area. - Transportation: Google can provide bus transportation between main campus and the hotel. - Meals and snacks would be provided by Google. Please let us know whether those dates would work for you. We have some flexibility, but we would like to avoid late summer, as many people take off for vacation. Also, are there any other major conferences that may conflict? Thanks.
Re: DWARF location descriptor and multi-register frame pointer
On 11/20/2012 02:22 AM, Senthil Kumar Selvaraj wrote: How are frame pointer registers that span more than one hard register handled? Would it be appropriate to check the mode and do a multiple_reg_loc_descriptor call or something similar to handle this case? There is no requirement that the DWARF registers map one-to-one to hardware registers. You could define a DWARF register which represents the FP register, spanning two hardware registers. -- Michael Eager ea...@eagercon.com 1960 Park Blvd., Palo Alto, CA 94306 650-325-8077
[cxx-conversion] Merge from trunk rev 193681
Now that we are out of stage 1, the next wave of cleanups will go into the cxx-conversion branch. I figured it was easier to revive this branch than open a new one. I just merged trunk at rev 193681 and updated the failures manifest in the branch (contrib/testsuite-management/*.xfail) so it's easy to decide whether a cleanup broke any tests. Tested on x86_64. Diego.
Re: RFC - Initial planning for next Cauldron workshop
On Wed, Nov 21, 2012 at 11:27 AM, Diego Novillo wrote: > Ian and I have started thinking about the next Cauldron. This > time, we are thinking of organizing it in Mountain View, at > Google's headquarters. In case it's not obvious, this is Mountain View, California, USA. > - Dates: We are looking at 12/Jul - 14/Jul, and 2/Aug - 4/Aug. Note that these are over the weekend, which is when it is easier for us to get space. Ian
Re: -fPIC -fPIE
On Wed, Nov 21, 2012 at 8:02 PM, Ian Lance Taylor wrote: >> The main advantage is that you can compile a program with CFLAGS="-O2 -g >> -fPIE", and libtool's adding of -fPIC for shared libraries will work >> reliably. If -fPIE can still override -fPIC, the result depends on >> whether -fPIC comes before or after CFLAGS. > > ...which is exactly how all our other options work. The last one > wins. Why should these be different? Most other options are not added by the build system automatically with the presumption that they always override the default. > Using -fPIE in CFLAGS for > libtool seems like a very specific use case It's actually a very common use case for distributions that want to harden some binaries. Paolo
Re: -fPIC -fPIE
On Wed, Nov 21, 2012 at 1:20 PM, Paolo Bonzini wrote: > On Wed, Nov 21, 2012 at 8:02 PM, Ian Lance Taylor wrote: >>> The main advantage is that you can compile a program with CFLAGS="-O2 -g >>> -fPIE", and libtool's adding of -fPIC for shared libraries will work >>> reliably. If -fPIE can still override -fPIC, the result depends on >>> whether -fPIC comes before or after CFLAGS. >> >> ...which is exactly how all our other options work. The last one >> wins. Why should these be different? > > Most other options are not added by the build system automatically > with the presumption that they always override the default. I don't think that GCC can predict what various different build systems are going to do. >> Using -fPIE in CFLAGS for >> libtool seems like a very specific use case > > It's actually a very common use case for distributions that want to > harden some binaries. Well, OK. I could actually give you the reverse argument--I ran into this working with Google's internal build system, where it was a problem--but it doesn't really matter. My view is this: we have a simple rule for options that is very easy to understand. We should only deviate from that rule for exceptional reasons. The fact that libtool acts a certain way is not an exceptional reason; libtool can change behaviour easily enough, and that change will be backward compatible. Note that even before my patch, gcc -fpic -fpie was equivalent to -fpie. What changed is that previously gcc -fpie -fpic was also equivalent to -fpie. So if you were adding -fpie to CFLAGS, and libtool was not aware of that, the result was that when libtool expected -fpic it was actually getting -fpie. I don't see how that could be correct anyhow. Ian