Re: potential simple loop optimization assistance strategy?
Tom Tromey <[EMAIL PROTECTED]> wrote:

> Giovanni> Agreed, but my point is whether we can do that when NDEBUG
> Giovanni> is defined.
>
> I thought when NDEBUG is defined, assert expands to something like
> '(void) 0' -- the original expression is no longer around.

Yes, but the condition is still morally true in the code. NDEBUG is meant to speed up the generated code, and it's actually a pity that it instead *disables* some optimizations because we no longer see the condition. My suggestion is that assert with NDEBUG might expand to something like:

  if (!condition)
    unreachable();

where unreachable is a function call marked with a special attribute saying that execution can never get there. This way the run-time check is removed from the code, but the range information can still be propagated and used.

Notice that such an attribute would be needed in the first place for gcc_unreachable() in our own sources. Right now we expand it to gcc_assert(0), but we could do much better with a special attribute.

Giovanni Bajo
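[Editorial sketch of what such an expansion could look like. The opt_assert name is made up for illustration, and __builtin_unreachable(), which GCC gained in later releases, merely stands in for the proposed "execution never gets here" attribute.]

  /* Hypothetical assert variant: under NDEBUG the condition is not
     checked at run time, but its truth is still visible to the
     optimizer.  opt_assert and the use of __builtin_unreachable()
     are illustrative only.  */
  #include <assert.h>

  #ifdef NDEBUG
  # define opt_assert(cond) ((cond) ? (void) 0 : __builtin_unreachable ())
  #else
  # define opt_assert(cond) assert (cond)
  #endif

  int scale (int x)
  {
    opt_assert (x > 0 && x < 1000);  /* range info survives with NDEBUG */
    return x * 4;
  }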
Re: MEMBER_TYPE_FORCES_BLK on IA-64/HP-UX
> Steve Ellcey defined MEMBER_TYPE_FORCES_BLK when he first implemented
> the ia64-hpux port.  At the time, I mentioned using PARALLELs was a
> better solution, but this was a simpler way for him to get the initial
> port working.  Since then, there have been a lot of bug fixes to the
> ia64-hpux support by various people: Steve, Zack, Joseph, etc.  Looking
> at the current code, it does appear that all cases are now handled by
> PARALLELs, and that the definition of MEMBER_TYPE_FORCES_BLK no longer
> appears to be necessary.

Running the 2 GCC compatibility testsuites between the unpatched and the patched compiler indeed shows that this part of MEMBER_TYPE_FORCES_BLK is now useless for the purpose it was initially defined for. Moreover, removing it doesn't trigger any new failure in the testsuite, except in one corner case (a GNU extension); but that can be easily patched in ia64_function_arg. I plan on submitting a patch to Steve and you next week.

Thanks for your feedback.

-- 
Eric Botcazou
Re: potential simple loop optimization assistance strategy?
Giovanni Bajo wrote on 02/07/2005 12:18:00:

> Yes, but the condition is still morally true in the code. NDEBUG is meant
> to speed up the generated code, and it's actually a pity that it instead
> *disables* some optimizations because we no longer see the condition. My
> suggestion is that assert with NDEBUG might expand to something like:
>
>   if (!condition)
>     unreachable();
>
> where unreachable is a function call marked with a special attribute
> saying that execution can never get there. This way the run-time check is
> removed from the code, but the range information can still be propagated
> and used.
>
> Notice that such an attribute would be needed in the first place for
> gcc_unreachable() in our own sources. Right now we expand it to
> gcc_assert(0), but we could do much better with a special attribute.

I always thought that when NDEBUG is set, then assert(x) should have no side effects. So if "condition" contains any side effect, or a potential side effect (e.g. through a function call), then the compiler should not generate the code for "condition".

Michael
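[A short editorial illustration of the constraint Michael describes; read_next() and the limits used here are made up for the example.]

  /* The first condition has a side effect (a call), so under NDEBUG it
     must not be evaluated at all and cannot inform the optimizer; the
     second is side-effect free and could safely become a range hint.  */
  #include <assert.h>

  extern int read_next (void);     /* hypothetical: has side effects */

  void process (int n)
  {
    assert (read_next () > 0);     /* must vanish completely with NDEBUG */
    assert (n >= 0 && n < 100);    /* candidate for the unreachable trick */
  }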
Re: Should GCC publish a general rule/warning due to its default presumption of undefined signed integer overflow semantics?
Gabriel Dos Reis wrote:

> What I'm claiming is that he thinks "undefined behaviour" in the standard
> should not be taken as meaning "go to hell" (or punishment, to borrow
> words from you) or absolute liberty for compiler writers to do just about
> everything that is imaginable, regardless of expectations. In other
> words, it is a question of balance.

The trouble is that the standard does not formalize the balance, and that is why we use the rhetoric. The real issue is whether the compiler optimizer circuits should be allowed to assume that there is no overflow. At first glance, it seems reasonable to say yes; that is, after all, what the undefined semantics is about. The trouble is, as I have pointed out before and tried to illustrate with my password example, that this assumption, propagated by clever global flow algorithms, can have very unexpected results.

So what to do? We could try to formalize the notion that although overflow is undefined, this information cannot be propagated, or perhaps more specifically cannot be propagated backwards. But in practice this is very difficult to formalize, and it is very hard to put bounds on what propagation means in this case. We have a sort of informal notion that says the kind of thing that occurred in my password example is a bad thing and should not happen, but we have a lot of trouble formalizing it (I know of lots of attempts to do this, no successes; anyone know a success story here?) Note that a similar issue arises with assertions (is the compiler allowed to assume that assertions are true and propagate this information?)

We could try to avoid the use of the general undefined. After all, if Stroustrup doesn't like the consequence of unbounded undefinedness, then presumably he doesn't like the language of the standard. It is the standard that introduces this notion of unbounded undefinedness, not me :-)

In fact I think the attempt to avoid unbounded undefinedness (what Ada calls erroneous execution) is something that is very worthwhile. One of the major steps forward in the Ada 95 standard over the Ada 83 standard was that we eliminated many of the common cases of erroneousness and replaced them with bounded errors. For example, consider the case of uninitialized variables. In Ada 83, a reference to an uninitialized scalar variable caused the program to be erroneous. In Ada 95, such a value may instead have an invalid representation; referencing an uninitialized variable is no longer erroneous, but rather a bounded error. The possible effects are defined as follows in the RM:

  9  If the representation of a scalar object does not represent a value of
     the object's subtype (perhaps because the object was not initialized),
     the object is said to have an invalid representation. It is a bounded
     error to evaluate the value of such an object. If the error is
     detected, either Constraint_Error or Program_Error is raised.
     Otherwise, execution continues using the invalid representation. The
     rules of the language outside this subclause assume that all objects
     have valid representations. The semantics of operations on invalid
     representations are as follows:

  10 If the representation of the object represents a value of the
     object's type, the value of the type is used.

  11 If the representation of the object does not represent a value of the
     object's type, the semantics of operations on such representations is
     implementation-defined, but does not by itself lead to erroneous or
     unpredictable execution, or to other objects becoming abnormal.
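[The "password example" itself is not reproduced in this digest; the sketch below, with made-up names, only shows the general shape of the problem: a guard written in terms of signed overflow can be deleted once the optimizer assumes overflow never happens.]

  extern int check_password (int attempts);   /* hypothetical */

  int grant_access (int attempts)
  {
    /* Programmer's intent: refuse access if the counter wrapped around.
       Since signed overflow is undefined, the compiler may treat this
       test as always false and remove it.  */
    if (attempts + 1 < attempts)
      return 0;
    return check_password (attempts + 1);
  }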
> As an example, he illustrated the issue with the story -- quickly
> classified as "legend" by Robert Dewar -- about the best optimizing C
> compiler, which miscompiled the Unix kernel so that nobody wanted to use
> it in practice (even if it would blow up any other competing compiler),
> and the company running out of business.

Actually I asked for documentation of this claim, and as far as I remember no one provided it, which is a pity. It is really useful to have examples of real-life situations where sticking to the standard and ignoring real-world practice resulted in useless compilers. I gave one such example, the Burroughs Fortran compiler. In class, I have always suggested as another example that a C compiler that generated code for a flat address space would be a menace if it gave random results for comparing addresses between different allocated objects. I assume no C compiler does this in practice. So the Unix kernel example would be really nice to pin down. What compiler was this? And what was the "legitimate" freedom that it took that it should not have?

By the way, I generally agree that C is a language where the expected behavior is quite a bit larger than the formally defined behavior (I consider this a bad characteristic of a language, by the way). At another extreme, in the case of COBOL, when we wrote Realia COBOL, there were a number of places where we deliberately deviated from the standard ...
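[For the address-comparison case, a tiny editorial sketch of the kind of code that is formally undefined yet expected to behave sensibly on a flat address space; the names are invented.]

  #include <stdlib.h>

  /* Relational comparison of pointers into two *different* objects is
     undefined by the C standard, but on a flat address space users
     expect a consistent ordering.  */
  int ordered_allocations (void)
  {
    char *a = malloc (16);
    char *b = malloc (16);
    int result = a < b;        /* formally undefined comparison */
    free (a);
    free (b);
    return result;
  }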
Re: Should GCC publish a general rule/warning due to its default presumption of undefined signed integer overflow semantics?
Andrew Pinski wrote:

> But the real question is why make it undefined behavior instead of
> implementation-defined? That would have made it clearer, instead of
> allowing the compiler not to document what happens. Or is C++ just
> following C here? In which case it might be better to ask the C committee
> why it was done this way, and what the real definition of undefined is
> for this case.

Note that implementation-defined is, in practice, a fairly severe constraint. That's because you don't want a super-complicated definition that takes a book to describe all the horrible things that might happen, so in practice you are pushed into some simple decision (always traps, always saturates, always wraps, etc., for the case of overflow).
Re: Should GCC publish a general rule/warning due to its default presumption of undefined signed integer overflow semantics?
Paul Schlie wrote:

> My primary concern is about predictability. I could live with undefined
> integer overflow if it were reasonably possible to verify, in the general
> case, that an overflow will not occur, since otherwise undefined behavior
> may result (which I can't believe is acceptable to anyone).

Well, bugs in programs in general are not acceptable. This is just one example of a bug.

> Although I recognize and accept that most trivial uses of signed
> arithmetic can likely be verified as being constrained or not, it seems
> pretty clear to me that it is very difficult, and often strictly
> impossible, to do so in the general case; implying that signed integer
> arithmetic needs to be avoided in the general case, by either declaring
> signed integers as unsigned and converting them as required after the
> fact (which may also be undefined), and/or by using floats, if one wants
> to produce a program with a reasonable chance of predictable behavior.

Actually, in the safety-critical world, people go through procedures all the time to verify that their program is free of bugs, including unexpected overflow.
Re: Should GCC publish a general rule/warning due to its default presumption of undefined signed integer overflow semantics?
Gabriel Dos Reis wrote:

> As we have briefly discussed in mails, the most critical part of the
> issue seems to be what can be assumed for loop variables. I contend that
> for many if not most practical loops, the variable can be assumed not to
> overflow, and we can apply the transformation. But we need not apply
> "undefined behaviour" to all other cases; only to those "well-written"
> loops and loop variables. In summary, if the user's loop is well-written
> then he will benefit from the transformation. That will already cover a
> good set of common loops.

Note that the lack of a FOR loop in C adds to the problems here. In languages with FOR loops, this issue comes up very explicitly, but at the code generation level you have to be careful to generate code that deals with the boundary cases correctly.
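[An editorial sketch of the boundary-case issue when lowering a real FOR loop (Ada-style "for I in LO .. HI") into C-like code; body() is a placeholder.]

  /* The naive rendering  for (i = lo; i <= hi; i++)  misbehaves when
     hi == INT_MAX: the exit test can never fail without overflowing i.
     Testing for the last value before the increment avoids that.  */
  extern void body (int i);    /* placeholder for the loop body */

  void for_loop (int lo, int hi)
  {
    if (lo <= hi)
      {
        int i = lo;
        for (;;)
          {
            body (i);
            if (i == hi)
              break;
            i++;
          }
      }
  }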
Re: signed is undefined and has been since 1992 (in GCC)
* Dave Korn:

> It certainly wasn't meant to be. It was meant to be a dispassionate
> description of the state of facts. Software that violates the C standard
> just *is* "buggy" or "incorrect",

Not if a GCC extension makes it legal code. And actually, I believe a GCC extension which basically boils down to -fwrapv by default makes sense, because so much existing code the free software community has written (including critical code paths which fix security bugs) implicitly relies on -fwrapv.
Re: signed is undefined and has been since 1992 (in GCC)
* Robert Dewar:

> I am puzzled, why would *ANYONE* who knows C use int
> rather than unsigned if they want wrap around semantics?

Both OpenSSL and Apache programmers did this, in carefully reviewed code which was written in response to a security report. They simply didn't know that there is a potential problem. The reason for this gap in knowledge isn't quite clear to me.

Probably it's hard to accept for hard-core C coders that a program which generates correct machine code with all GCC versions released so far (modulo bugs in GCC) can still be illegal C and exhibit undefined behavior. IIRC, I needed quite some time to realize the full impact of this distinction.
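[The idiom in question looks roughly like the sketch below; this is not the actual OpenSSL or Apache code. With int operands the first test invokes undefined behaviour on overflow, so the compiler may assume it is always false; -fwrapv, or rewriting the check as shown second, keeps it meaningful.]

  #include <limits.h>

  /* Hypothetical overflow guard of the kind such security fixes used.
     Assumes delta >= 0; relies on wrap-around semantics.  */
  int add_would_overflow (int len, int delta)
  {
    return len + delta < len;
  }

  /* A well-defined alternative, assuming len >= 0 and delta >= 0.  */
  int add_would_overflow_safe (int len, int delta)
  {
    return delta > INT_MAX - len;
  }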
gcc-4.1-20050702 is now available
Snapshot gcc-4.1-20050702 is now available on

  ftp://gcc.gnu.org/pub/gcc/snapshots/4.1-20050702/

and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.

This snapshot has been generated from the GCC 4.1 CVS branch with the following options: -D2005-07-02 17:43 UTC

You'll find:

  gcc-4.1-20050702.tar.bz2              Complete GCC (includes all of below)
  gcc-core-4.1-20050702.tar.bz2         C front end and core compiler
  gcc-ada-4.1-20050702.tar.bz2          Ada front end and runtime
  gcc-fortran-4.1-20050702.tar.bz2      Fortran front end and runtime
  gcc-g++-4.1-20050702.tar.bz2          C++ front end and runtime
  gcc-java-4.1-20050702.tar.bz2         Java front end and runtime
  gcc-objc-4.1-20050702.tar.bz2         Objective-C front end and runtime
  gcc-testsuite-4.1-20050702.tar.bz2    The GCC testsuite

Diffs from 4.1-20050625 are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-4.1 link is updated and a message is sent to the gcc list. Please do not use a snapshot before it has been announced that way.
Re: potential simple loop optimization assistance strategy?
Michael Veksler <[EMAIL PROTECTED]> wrote:

> I always thought that when NDEBUG is set, then assert(x) should
> have no side effects. So if "condition" contains any side effect,
> or potential side effect (e.g. through function call), then the
> compiler should not generate the code for "condition".

Right. Thus we can find a way to avoid generating e.g. function calls or similar things, and get information from the most basic conditions.

Giovanni Bajo
Re: signed is undefined and has been since 1992 (in GCC)
Florian Weimer <[EMAIL PROTECTED]> writes:

| * Robert Dewar:
|
| > I am puzzled, why would *ANYONE* who knows C use int
| > rather than unsigned if they want wrap around semantics?
|
| Both OpenSSL and Apache programmers did this, in carefully reviewed
| code which was written in response to a security report. They simply
| didn't know that there is a potential problem. The reason for this
| gap in knowledge isn't quite clear to me.
|
| Probably it's hard to accept for hard-core C coders that a program
| which generates correct machine code with all GCC versions released so
| far (modulo bugs in GCC) can still be illegal C and exhibit undefined

We need to be careful not to substitute "illegal" for "undefined behaviour". GCC is not a court.

Apart from that, I maintain that we should not apply "undefined behaviour" wholesale.

-- Gaby
Re: Should GCC publish a general rule/warning due to its default presumption of undefined signed integer overflow semantics?
Robert Dewar <[EMAIL PROTECTED]> writes:

| Gabriel Dos Reis wrote:
|
| > As we have briefly discussed in mails, the most critical part of the
| > issue seems to be what can be assumed for loop variables. I contend
| > that for many if not most practical loops, the variable can be assumed
| > not to overflow, and we can apply the transformation. But we need not
| > apply "undefined behaviour" to all other cases; only to those
| > "well-written" loops and loop variables. In summary, if the user's
| > loop is well-written then he will benefit from the transformation.
| > That will already cover a good set of common loops.
|
| Note that the lack of a FOR loop in C adds to the problems here. In

This is why I think a switch is useful here. The programmer may know more about his/her loops than the current GCC infrastructure can prove. Furthermore, the blanket statement about the lack of a "FOR loop" does not preclude the existence of properly written FOR loops. We should be able to do conservative analysis and catch those. If a user writes

   for (int i = min; i < max; ++i)

and i, min and max don't change in the body, then no matter what you think of C's general "for" not being a FOR loop, the above is a FOR loop.

-- Gaby
Re: signed is undefined and has been since 1992 (in GCC)
On Sat, 2 Jul 2005, Florian Weimer wrote:

> > I am puzzled, why would *ANYONE* who knows C use int
> > rather than unsigned if they want wrap around semantics?
>
> Both OpenSSL and Apache programmers did this, in carefully reviewed
> code which was written in response to a security report. They simply
> didn't know that there is a potential problem. The reason for this
> gap in knowledge isn't quite clear to me.

I've done a lot of C programming in the last three years, and for my day job I'm working on a C compiler (albeit in parts that are not very C specific), and I didn't know that signed overflow is undefined. Why not? I guess I never heard otherwise and I just assumed it would wrap due to two's complement arithmetic. I don't think I've ever written a serious C program that required wrap-around on overflow, though.

Nick
Re: Should GCC publish a general rule/warning due to its default presumption of undefined signed integer overflow semantics?
Gabriel Dos Reis wrote:

>    for (int i = min; i < max; ++i)
>
> and i, min and max don't change in the body, no matter what you think
> of C's general "for" not being a FOR loop, the above is a FOR loop.

But this normal paradigm for representing a FOR loop does not work for all possible ranges; that's precisely the trouble!
Re: signed is undefined and has been since 1992 (in GCC)
Florian Weimer wrote:

> Probably it's hard to accept for hard-core C coders that a program
> which generates correct machine code with all GCC versions released so
> far (modulo bugs in GCC) can still be illegal C and exhibit undefined
> behavior. IIRC, I needed quite some time to realize the full impact of
> this distinction.

Note that even making things implementation-defined does not help the problem of learning by example from one implementation. It really is a good idea for people programming in language X to learn language X :-)

Back in the days of Algol-60, absolutely everyone read the report. Then we went through an era of standards which few people read (how many Fortran programmers read the Fortran standard, COBOL programmers the COBOL standard, C programmers the C standard, etc.). A rather nice achievement with Ada is that the standard is indeed a reference book that all Ada programmers have on their shelf, and even though not all have read it through, they know it is the important ultimate reference standard of what is and what is not allowed in valid programs, and you would be hard put to find a professional Ada programmer who has not frequently reached for the standard to look something up. In a big class of programmers, nearly all of whom had done professional C programming a couple of years ago, only 2 out of 94 had held the C standard in their hands.

It's not an easy document, but it's also not that hard; it would be nice to promote its use more!
Re: signed is undefined and has been since 1992 (in GCC)
Gabriel Dos Reis wrote:

> We need to be careful not to substitute "illegal" for "undefined
> behaviour". GCC is not a court.

Note that in Ada, "illegal" is a technical term: it refers to a program that fails to meet the syntactic or static semantic rules for a correct Ada program, and must be rejected by an Ada compiler at compile time.
Re: Should GCC publish a general rule/warning due to its default presumption of undefined signed integer overflow semantics?
On Sat, Jul 02, 2005 at 07:15:17PM -0400, Robert Dewar wrote:
> Gabriel Dos Reis wrote:
> >
> >    for (int i = min; i < max; ++i)
> >
> > and i, min and max don't change in the body, no matter what you think
> > of C's general "for" not being a FOR loop, the above is a FOR loop.
>
> But this normal paradigm for representing a FOR loop does not work for
> all possible ranges, that's precisely the trouble!

Yes, there's a problem if the maximum value of i is intended to be INT_MAX. There's a more common problem with

  for (unsigned i = array_size-1; i >= 0; --i)
    ...

which, of course, doesn't work because an unsigned value is never less than zero, so the test is always true.
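[Two editorial sketches of how the problematic loops above can be rewritten; visit(), visit_element() and array_size are placeholders.]

  #include <limits.h>

  extern void visit (int i);               /* placeholders for loop bodies */
  extern void visit_element (unsigned i);

  /* Assuming min <= max, this works even when max == INT_MAX: the exit
     is tested before the increment, so i never overflows.  */
  void iterate_up (int min, int max)
  {
    for (int i = min; ; ++i)
      {
        visit (i);
        if (i == max)
          break;
      }
  }

  /* Visits array_size-1 down to 0: "i-- > 0" tests before decrementing,
     so an unsigned index terminates correctly.  */
  void iterate_down (unsigned array_size)
  {
    for (unsigned i = array_size; i-- > 0; )
      visit_element (i);
  }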
Re: signed is undefined and has been since 1992 (in GCC)
Robert Dewar <[EMAIL PROTECTED]> writes:

| Gabriel Dos Reis wrote:
|
| > We need to be careful not to substitute "illegal" for "undefined
| > behaviour". GCC is not a court.
|
| Note that in Ada,

You may not have noticed, but this issue is primarily about C and C++, and we're discussing what the relevant standard says and what engineering decision we should take. Please, let's not get into more distractions. We already have plenty of them (many originating from you).

-- Gaby
Re: signed is undefined and has been since 1992 (in GCC)
Robert Dewar <[EMAIL PROTECTED]> writes:

| Florian Weimer wrote:
|
| > Probably it's hard to accept for hard-core C coders that a program
| > which generates correct machine code with all GCC versions released so
| > far (modulo bugs in GCC) can still be illegal C and exhibit undefined
| > behavior. IIRC, I needed quite some time to realize the full impact
| > of this distinction.
|
| Note that even making things implementation-defined does not help the
| problem of learning by example from one implementation. It really is
| a good idea for people programming in language X to learn language X :-)

Then, one wonders why GNAT is not bug-free ;-p

| Back in the days of Algol-60, absolutely everyone read the report. Then
| we went through an era of standards which few people read (how many
| Fortran programmers read the Fortran standard, COBOL programmers the
| COBOL standard, C programmers the C standard, etc.). A rather nice
| achievement with Ada is that the standard is indeed a reference book
| that all Ada programmers have on their shelf, and even though not all
| have read it through, they know it is the

Oh, so it suffices to have it? Not to understand it?

| important ultimate reference standard of what is and what is not
| allowed in valid programs, and you would be hard put to find a
| professional Ada programmer who has not frequently reached for
| the standard to look something up. In a big class of programmers,
| nearly all of whom had done professional C programming a couple
| of years ago, only 2 out of 94 had held the C standard in their
| hands.
|
| It's not an easy document, but it's also not that hard; it would
| be nice to promote its use more!

The issue is whether they need to become experts in red herrings or just know how to write good and correct programs. Interestingly, back in the old days K&R put the emphasis on how to write good and useful programs rather than on academic exercises in "undefined behaviour".

-- Gaby
Re: Should GCC publish a general rule/warning due to its default presumption of undefined signed integer overflow semantics?
Robert Dewar <[EMAIL PROTECTED]> writes:

| Gabriel Dos Reis wrote:
|
| >    for (int i = min; i < max; ++i)
| >
| > and i, min and max don't change in the body, no matter what you think
| > of C's general "for" not being a FOR loop, the above is a FOR loop.
|
| But this normal paradigm for representing a FOR loop does not work for
| all possible ranges,

Yes, but it need not cover all possible ranges. For practical purposes, if it covers the vast majority of such loops I'll take it and leave you complaining about not covering all possible ranges.

-- Gaby
Re: Should GCC publish a general rule/warning due to its default presumption of undefined signed integer overflow semantics?
Joe Buck <[EMAIL PROTECTED]> writes:

| On Sat, Jul 02, 2005 at 07:15:17PM -0400, Robert Dewar wrote:
| > Gabriel Dos Reis wrote:
| > >
| > >    for (int i = min; i < max; ++i)
| > >
| > > and i, min and max don't change in the body, no matter what you think
| > > of C's general "for" not being a FOR loop, the above is a FOR loop.
| >
| > But this normal paradigm for representing a FOR loop does not work for
| > all possible ranges, that's precisely the trouble!
|
| Yes, there's a problem if the maximum value of i is intended to be
| INT_MAX.

I don't think the case of INT_MAX represents the majority of such FOR loops. The semi-open interval is a widespread idiom and has become a standard idiom, at least if you move to the STL :-)

-- Gaby
Re: "cycle advancing states" versus "cycle advancing arcs"
On Fri, 2005-07-01 at 22:20 -0300, Rafael Ávila de Espíndola wrote:

> The finite automaton used in the pipeline hazard recognizer uses a cycle
> advancing arc in every state to represent a clock pulse. Bala (1) uses a
> different technique: all states where an instruction issue is not
> possible are marked as "cycle advancing", and the pipeline state is
> updated to reflect a clock cycle. Every time the simulation gets to a
> cycle advancing state, it knows that it cannot issue another instruction
> in this cycle.
>
> It appears to me that adding a cycle advance arc to every state has the
> potential of increasing the size of the automaton quite significantly.
> Is this true?

There were a lot of efforts to decrease the automata size. Comb vectors are used to represent the transition from one state to another, and the advancing-cycle transition is frequently reused. But I am sure that the automata size could be decreased more. There are a lot of methods to compress the tables (I read an article long ago where about 20 methods are reviewed and compared). The only problem is the tradeoff between size and access speed.

> What is the advantage of the "cycle advancing arc"? It is simpler to
> understand, but the only use I can see is to represent a case where it
> was possible to issue an instruction but, instead of doing this, the
> pipeline advanced the clock. Does this happen in practice?

Yes, it is quite a general situation. Even if there are enough resources to execute the insn, it has to wait a few cycles because the data it needs are not ready yet. Generally speaking, data delays could be described by an automaton too (using registers as resources), but the resulting automata would be huge.

> 1) V. Bala and N. Rubin, Efficient Instruction Scheduling Using Finite
> State Automata.

It is probably the best article about using DFAs for a pipeline hazard recognizer. But the GCC approach is more sophisticated. The major difference is the introduction of alternatives in the automata description and the optional treatment of an alternative in a non-deterministic way. It makes generation of the pipeline hazard recognizer a more interesting and complicated task.
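[To give a feel for the comb-vector compression mentioned above, here is a small editorial sketch. This is not GCC's actual implementation; all names, sizes and table contents are invented for illustration.]

  /* Comb-vector idea: pack the sparse transition[state][insn_class]
     matrix into one array.  Each state gets a base offset; a parallel
     "check" array records which state really owns each slot, so holes
     fall through to a default (e.g. the cycle-advance transition).  */
  #define N_STATES      4
  #define N_CLASSES     3
  #define COMB_SIZE     8
  #define ADVANCE_CYCLE (-1)

  static const int base_of[N_STATES] = { 0, 2, 3, 5 };
  static const int comb[COMB_SIZE]   = { 1, 2, 3, 0, 2, 1, 3, 0 };
  static const int check[COMB_SIZE]  = { 0, 0, 1, 2, 2, 3, 3, 3 };

  static int
  transition (int state, int insn_class)
  {
    int slot = base_of[state] + insn_class;
    if (slot < COMB_SIZE && check[slot] == state)
      return comb[slot];       /* packed successor state */
    return ADVANCE_CYCLE;      /* hole: no issue possible, advance clock */
  }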
Re: Add clog10 to builtins.def, round 2
The Fortran front end needs to recognize clog10, clog10f and clog10l as proper built-ins. The attached patch adds them to builtins.def, under a new category: DEF_EXT_C99RES_BUILTIN (as suggested by jsm28). Can someone review this? Is it OK?

Just realized I forgot the ChangeLog entry to go with it. OK to commit?

2005-06-28  Francois-Xavier Coudert  <[EMAIL PROTECTED]>

	* builtins.def: Add DEF_EXT_C99RES_BUILTIN to define builtins
	whose names C99 reserves for future use. Use it to define clog10,
	clog10f and clog10l.

Index: gcc/builtins.def
===================================================================
RCS file: /cvsroot/gcc/gcc/gcc/builtins.def,v
retrieving revision 1.105
diff -u -3 -p -r1.105 builtins.def
--- gcc/builtins.def	27 Jun 2005 12:17:18 -0000	1.105
+++ gcc/builtins.def	3 Jul 2005 02:44:06 -0000
@@ -119,6 +119,13 @@ Software Foundation, 51 Franklin Street,
   DEF_BUILTIN (ENUM, "__builtin_" NAME, BUILT_IN_NORMAL, TYPE, TYPE,	\
	       true, true, !flag_isoc99, ATTRS, TARGET_C99_FUNCTIONS, true)
 
+/* Builtin that C99 reserve the name for future use. We can still recognize
+   the builtin in C99 mode but we can't produce it implicitly.  */
+#undef DEF_EXT_C99RES_BUILTIN
+#define DEF_EXT_C99RES_BUILTIN(ENUM, NAME, TYPE, ATTRS)		\
+  DEF_BUILTIN (ENUM, "__builtin_" NAME, BUILT_IN_NORMAL, TYPE, TYPE,	\
+	       true, true, true, ATTRS, false, true)
+
 /* Allocate the enum and the name for a builtin, but do not actually
    define it here at all.  */
 #undef DEF_BUILTIN_STUB
@@ -436,6 +443,9 @@ DEF_C99_BUILTIN        (BUILT_IN_CIMAGL,
 DEF_C99_BUILTIN        (BUILT_IN_CLOG, "clog", BT_FN_COMPLEX_DOUBLE_COMPLEX_DOUBLE, ATTR_MATHFN_FPROUNDING)
 DEF_C99_BUILTIN        (BUILT_IN_CLOGF, "clogf", BT_FN_COMPLEX_FLOAT_COMPLEX_FLOAT, ATTR_MATHFN_FPROUNDING)
 DEF_C99_BUILTIN        (BUILT_IN_CLOGL, "clogl", BT_FN_COMPLEX_LONGDOUBLE_COMPLEX_LONGDOUBLE, ATTR_MATHFN_FPROUNDING)
+DEF_EXT_C99RES_BUILTIN (BUILT_IN_CLOG10, "clog10", BT_FN_COMPLEX_DOUBLE_COMPLEX_DOUBLE, ATTR_MATHFN_FPROUNDING)
+DEF_EXT_C99RES_BUILTIN (BUILT_IN_CLOG10F, "clog10f", BT_FN_COMPLEX_FLOAT_COMPLEX_FLOAT, ATTR_MATHFN_FPROUNDING)
+DEF_EXT_C99RES_BUILTIN (BUILT_IN_CLOG10L, "clog10l", BT_FN_COMPLEX_LONGDOUBLE_COMPLEX_LONGDOUBLE, ATTR_MATHFN_FPROUNDING)
 DEF_C99_BUILTIN        (BUILT_IN_CONJ, "conj", BT_FN_COMPLEX_DOUBLE_COMPLEX_DOUBLE, ATTR_CONST_NOTHROW_LIST)
 DEF_C99_BUILTIN        (BUILT_IN_CONJF, "conjf", BT_FN_COMPLEX_FLOAT_COMPLEX_FLOAT, ATTR_CONST_NOTHROW_LIST)
 DEF_C99_BUILTIN        (BUILT_IN_CONJL, "conjl", BT_FN_COMPLEX_LONGDOUBLE_COMPLEX_LONGDOUBLE, ATTR_CONST_NOTHROW_LIST)
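[A quick illustration, not part of the patch, of what the new entries provide at the C level once the patch is applied: the function is recognized under its __builtin_ spelling but is never generated implicitly in C99 mode.]

  /* Sketch: GCC can expand or fold this call through the builtin
     machinery; the plain clog10 name itself stays reserved.  */
  #include <complex.h>

  double complex
  base10_log (double complex z)
  {
    return __builtin_clog10 (z);
  }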